Earlier this week, Tesla's head of Artificial Intelligence (AI), Andrej Karpathy, took part in a CVPR’20 workshop on Scalability in Autonomous Driving, during which he discussed the firm's approach to self-driving. In the talk, he acknowledged that Tesla is taking a harder approach to autonomous driving, but one that is more likely to scale properly.
The executive gave a presentation in which he shared two videos: one of Tesla’s self-driving car making a turn and one of Waymo’s doing the same. He explained that while both turns looked identical, the decision-making behind them was very different.
"Waymo and many others in the industry use high-definition maps. You have to first drive some car that pre-maps the environment, you have to have lidar with centimeter-level accuracy, and you are on rails. You know exactly how you are going to turn in an intersection, you know exactly which traffic lights are relevant to you, you know where they are positioned and everything. We do not make these assumptions. For us, every single intersection we come up to, we see it for the first time. Everything has to be solved — just like what a human would do in the same situation," said Karpathy.
Karpathy went on to say that Tesla is working on a scalable self-driving system deployable in millions of cars, which is why the firm uses a vision-based approach: it is easier to scale.
"Speaking of scalability, this is a much harder problem to solve, but when we do essentially solve this problem, there’s a possibility to beam this down to, again, millions of cars on the road. Whereas building out these lidar maps on the scale that we operate in, with the sensing that it does require, would be extremely expensive. And you can’t just build it, you have to maintain it, and the change detection of this is extremely difficult," added Karpathy.