At last week’s GPU Technology Conference, held by Nvidia, I spoke with Xiaodi Hou, co-founder and CTO of TuSimple, a startup using artificial intelligence to develop an SAE Level 4 autonomous trucking solution. Founded in 2015, the company has created a low-cost, commercially viable self-driving system that addresses a variety of pain points in the logistics industry.
Why is this such an area of interest? The freight industry is the backbone of economies around the world; according to TuSimple, in the United States alone, more than 70% of all freight tonnage is transported by trucks. The company’s platform is focused specifically on line-haul trucking: the transportation of cargo between ports, plants, warehouses, and distribution centers.
“We’re focused on the middle mile,” said Hou. “There are a lot of logistic centers or hubs in different places—and there’s a lot of need for transporting container boxes from one hub to the other; this is what our truck is doing.”
By integrating technology into this crucial part of the supply chain, the company aims to address current industry challenges such as road safety and driver shortages, while helping to reduce carbon emissions with optimized driving.
TuSimple’s technology can detect and track objects at distances greater than 300 meters through advanced sensor fusion that combines data from multiple cameras. Its localization technology achieves consistent, decimeter-level accuracy, even in a tunnel. Furthermore, the truck’s decision-making system dynamically adapts to road conditions, changing lanes and adjusting driving speeds to maximize safety and efficiency.
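The article doesn’t describe TuSimple’s fusion math, but the basic idea of combining several cameras’ estimates of the same object can be sketched simply. The code below is a hypothetical illustration, not TuSimple’s implementation: each camera reports a range estimate with a confidence score, and the estimates are fused with a confidence-weighted average (all names and numbers here are invented for the example).

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One camera's estimate of an object's range, with a confidence score.
    (Hypothetical structure for illustration only.)"""
    range_m: float     # estimated distance to the object, in meters
    confidence: float  # detector confidence in [0, 1]

def fuse_range(detections: list[Detection]) -> float:
    """Confidence-weighted average of range estimates from multiple cameras."""
    total_weight = sum(d.confidence for d in detections)
    if total_weight == 0:
        raise ValueError("no confident detections to fuse")
    return sum(d.range_m * d.confidence for d in detections) / total_weight

# Example: three cameras see the same vehicle at slightly different ranges.
fused = fuse_range([
    Detection(range_m=310.0, confidence=0.9),
    Detection(range_m=305.0, confidence=0.6),
    Detection(range_m=320.0, confidence=0.3),
])
```

Real systems fuse far richer state (position, velocity, class) with filters such as Kalman filters, but the weighting principle is the same: more confident sensors pull the estimate harder.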
“We are very product-oriented, so many of the things that we build are not just building a demo,” Hou said. “We build the whole stack of the product. Whenever there is something we feel is missing, or not good enough for our solution, we just build it ourselves. For example, we build the mask ourselves. And we build the camera modules by ourselves, and we build the servers by ourselves. And, of course, all of the software stacks are done by TuSimple. We’re very heavily invested in safety and product-level safety, such as ASIL D and ISO 26262.”
Deep learning is a key enabler for TuSimple, although Hou explained that it probably accounts for less than 10% of the company’s code base.
“Even though deep learning is only five or 10 years old, it’s really changing the entire idea of how we use computer vision to see the world,” he said. “Based on the triumph of deep learning, the majority of our sensors are camera based. With the camera-based sensors, we can actually beat the performance of some LIDAR systems, using them as pure cameras.”
“We’re using deep learning a lot, and it consumes a lot of computational resources. Also, we always want to have a conjugate algorithm, in addition to one. We don’t want the system to rely on one algorithm’s result, because that’s not stable. If you have a conjugate algorithm, adding on another conjugate algorithm, adding all together, you need a lot of computation. That’s why we have a lot of graphics cards used in this chassis. So that’s the idea of the necessity of using Nvidia.”
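Hou’s point about “conjugate” algorithms is essentially redundancy: two independent pipelines compute the same quantity, and the result is trusted only when they agree. A minimal sketch of that cross-check pattern, with hypothetical names and values chosen purely for illustration, might look like this:

```python
def cross_check(primary: float, conjugate: float, tolerance: float) -> bool:
    """Accept a result only when two independently computed estimates
    agree within a tolerance. Disagreement signals that at least one
    pipeline is unreliable and a fallback should be triggered."""
    return abs(primary - conjugate) <= tolerance

# e.g. two lane-offset estimates (in meters) from independent pipelines
ok = cross_check(0.42, 0.45, tolerance=0.10)   # estimates agree
bad = cross_check(0.42, 0.90, tolerance=0.10)  # estimates disagree
```

Running both pipelines roughly doubles the compute per decision, which is consistent with Hou’s explanation of why the trucks carry so many GPUs.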
Hou said they are currently doing a lot of road testing in both China and the U.S. They are working with Peterbilt and other tier-one OEMs, including ZF, Cummins, and Bendix. He is “pretty optimistic” that, by the end of 2019, the company will have most, if not all, of the algorithms ready for completely driverless operation.