Saturday, June 27, 2020

Workshop on Scalability in Autonomous Driving


Very interesting lecture on the challenges Tesla engineers currently face in developing and scaling their Full Self-Driving capability.

You would think identifying a stop sign is straightforward, but starting at 7:39 he describes all the difficulties - signs occluded by foliage, signs mounted on a barrier that don't apply unless the barrier is closed, stop signs hand-held by a construction crew, and stop signs straight ahead of you that apply only to a left-turn lane and can be ignored if you're following the main road as it curves to the right.
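To make that concrete, here is a minimal sketch (my own, not Tesla's code) of the idea that a stop sign detection isn't a simple yes/no - it carries contextual modifiers that decide whether it actually applies to you. The class, field names, and threshold below are all hypothetical, purely for illustration.

    from dataclasses import dataclass, field

    # Hypothetical representation: a stop sign detection carries contextual
    # modifiers, because "is there a stop sign?" alone isn't enough to decide
    # whether the car must stop.
    @dataclass
    class StopSignDetection:
        confidence: float
        modifiers: set = field(default_factory=set)   # e.g. {"on_open_barrier", "left_turn_only"}

    def sign_applies(det: StopSignDetection, ego_turning_left: bool) -> bool:
        """Decide whether a detected stop sign actually governs the ego vehicle."""
        if det.confidence < 0.5:                      # illustrative threshold
            return False
        if "on_open_barrier" in det.modifiers:        # sign mounted on a barrier that is swung open
            return False
        if "left_turn_only" in det.modifiers and not ego_turning_left:
            return False                              # sign only governs the left-turn lane
        return True

    # A sign marked "left_turn_only" is ignored when we follow the road to the right.
    print(sign_applies(StopSignDetection(0.9, {"left_turn_only"}), ego_turning_left=False))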

At 5:12, he points out that competing teams first build up a huge lidar data set pinpointing where roads and lanes are, so the vehicle follows predetermined lines. Tesla decided each car should see the road ahead as if for the first time rather than relying on an established data set, which is expensive to keep current through construction, damage, and other changes. A Tesla has not only to detect everything in real time but also to interpret it - for instance, deciding which traffic light applies to the lane the car is currently in.
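As an illustration of that last point, here is a toy sketch of how detected traffic lights might be matched to the ego lane by lateral position. The data structures and the nearest-offset heuristic are my assumptions for illustration, not how Tesla actually does it.

    from dataclasses import dataclass

    # Hypothetical detection structures; field names are illustrative only.
    @dataclass
    class TrafficLight:
        light_id: int
        state: str                 # "red", "yellow", or "green"
        lateral_offset_m: float    # offset from the ego vehicle's centerline

    @dataclass
    class Lane:
        lane_id: int
        lateral_offset_m: float    # offset of this lane's centerline from the ego vehicle

    def light_for_ego_lane(lights, ego_lane):
        """Pick the detected light whose lateral position best matches the ego lane."""
        if not lights:
            return None
        return min(lights, key=lambda l: abs(l.lateral_offset_m - ego_lane.lateral_offset_m))

    # Three lights overhead; only the middle one governs the lane we are in.
    lights = [TrafficLight(1, "red", -3.5), TrafficLight(2, "green", 0.2), TrafficLight(3, "red", 3.6)]
    print(light_for_ego_lane(lights, Lane(0, 0.0)).state)   # -> green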

At 13:52, he describes the preliminary stages a new feature goes through before it's signed off by their QA department. The Tesla fleet is silently testing future capabilities in the background, reporting results on features still in the testing stage back to headquarters.
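This silent testing is often called shadow mode: the candidate feature runs on the same inputs as the production software but never controls the car, and only its disagreements get reported back. A toy sketch of the idea, with stand-in models and field names of my own invention:

    import json

    def shadow_mode_step(frame, production_model, candidate_model, log):
        """Run the candidate feature silently alongside production and log disagreements.
        The candidate's output never controls the car; it is only compared and recorded."""
        active = production_model(frame)    # this decision actually drives the car
        shadow = candidate_model(frame)     # candidate runs silently on the same input
        if shadow != active:
            # Only mismatches are reported back, keeping fleet upload traffic small.
            log.append({"frame_id": frame["id"], "active": active, "shadow": shadow})
        return active

    # Stand-in models: the candidate reacts to a cue the production model ignores.
    production = lambda frame: "keep_lane"
    candidate = lambda frame: "slow_down" if frame.get("debris_ahead") else "keep_lane"

    disagreements = []
    for frame in [{"id": 1}, {"id": 2, "debris_ahead": True}]:
        shadow_mode_step(frame, production, candidate, disagreements)
    print(json.dumps(disagreements, indent=2))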

At 25:20, he shows some really rare scenarios that are very hard for the software to recognize and interpret - a chair falling off a truck ahead, a person walking their dog alongside their car, a car reflected in a shiny truck, and a toppled safety cone that looks like a traffic light. 



