To drive safely and smoothly, people make intuitive, critically important judgments about the intentions of others and signal to one another in subtle ways, including eye contact, posture, orientation, and head movements.
The Perception Gap
Machines lack this critical ability to interpret visual cues from humans. Today’s geometry- and trajectory-based automated driving systems rely mainly on the location and motion of people and vehicles, so they cannot accurately anticipate what a person might do next.
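As a purely illustrative sketch (our own simplified example, not any specific vendor's system), a trajectory-based predictor of the kind described above might extrapolate a pedestrian's future positions from location and motion alone with a constant-velocity model. The function name and parameters below are hypothetical:

```python
# Illustrative constant-velocity prediction: uses only position and motion,
# with no notion of intent (posture, gaze, head movement are invisible to it).

def predict_positions(position, velocity, horizon_s, dt=0.5):
    """Extrapolate future (x, y) positions assuming constant velocity."""
    x, y = position
    vx, vy = velocity
    steps = int(horizon_s / dt)
    return [(x + vx * dt * k, y + vy * dt * k) for k in range(1, steps + 1)]

# A pedestrian standing still at the curb is predicted to stay put for the
# whole horizon, even if their body language says they are about to cross.
print(predict_positions((0.0, 0.0), (0.0, 0.0), horizon_s=2.0))
```

The limitation is built into the model: a person whose measured velocity is zero is always predicted to remain stationary, which is exactly the failure mode the text describes.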
As a result, current autonomous vehicle systems drive very conservatively, which can be nauseating for passengers and unsafe for the humans they share the road with. Without a solution to this problem, there will be no meaningful real-world deployment of autonomous vehicles.
Our solution uses the full spectrum of cues that humans rely on to make sophisticated judgments about what a person might do next. Our software is built on rich data, yet it is massively scalable and runs faster than real time on light compute resources.