On average, a person drives roughly 650,000 miles over 50 years behind the wheel. Yet although people are driving more than ever, the average driver experiences only about four car accidents in a lifetime. These accident scenarios are tiny in proportion to the normal situations encountered on the road – they are the 1%.
However, although small in number, these “1% scenarios” are catastrophic. On the individual level, those involved face higher insurance premiums, litigation fees, or even criminal charges. At the macro level, car accidents cause roughly 38,000 deaths annually in the United States alone, along with an economic cost of over $850 billion. Although “1% scenarios” are not prevalent, they are deadly and costly – a split second out of thousands of hours spent driving can permanently alter, or end, someone’s life.
Such scenarios have many exacerbating factors: the surroundings, such as nighttime or bad weather; human error, such as speeding or reckless driving; and high-risk road situations, such as an obstruction or a missing divider. These circumstances share a common theme: the danger stems from non-zero human reaction time. Each demands a much faster reaction than normal to maintain safety – a demand that often cannot be met, with dire economic and social consequences. Clearly, there needs to be a solution that eliminates human reaction time and thereby mitigates these “1% scenarios.”
Fortunately, autonomous vehicles (AVs) are an ideal solution to this problem. By removing the human driver, they eliminate the non-zero reaction time behind “1% scenarios.” To do this, AVs must process their surroundings flawlessly in real time. The biggest barrier today is the visual perception problem. To interpret the visual elements surrounding a moving car, the human brain uses its roughly 86 billion neurons to deliver tremendous computational power at minimal energy cost. For self-driving vehicles to match the brain here, they must integrate a solution with comparable efficiency.
What an Autonomous Vehicle Must Process While Driving
To solve the visual perception problem, a solution must deliver 75 tera-operations per second (TOPS) of processing capability for every watt of power consumed. Current platforms, built on legacy technology, cannot meet this efficiency requirement – they can enable only partial autonomy. Because these products still require a human driver, the non-zero reaction time remains, and, as a result, the “1% scenarios” remain as dire as ever.
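To get an intuition for what this efficiency target implies, the short sketch below converts it into a power budget. The 75 TOPS/W figure comes from the text above; the 1,000 TOPS workload is a hypothetical number chosen purely for illustration, not a published specification.

```python
# Illustrative sketch: the power draw implied by an efficiency target.
# 75 TOPS/W is the requirement stated in the article; the workload
# figures passed in below are hypothetical examples.

EFFICIENCY_TOPS_PER_WATT = 75.0  # required perception efficiency (TOPS per watt)

def power_draw_watts(workload_tops: float,
                     efficiency_tops_per_watt: float = EFFICIENCY_TOPS_PER_WATT) -> float:
    """Watts needed to sustain a given TOPS workload at a given efficiency."""
    return workload_tops / efficiency_tops_per_watt

# Example: a hypothetical 1,000 TOPS real-time perception workload
# would draw only about 13 W at the target efficiency.
print(round(power_draw_watts(1000.0), 1))
```

At that efficiency, even a very heavy perception workload stays within a power envelope a battery-electric vehicle can comfortably spare, which is the point of the requirement.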
To enable full vehicle autonomy, the car must be equipped with an AI vision solution purpose-built for AVs. With tremendous processing power and minimal battery consumption, such a platform would solve the visual perception problem, allowing self-driving cars to take the road. With humans acting only as passengers, and no longer as drivers, the lag due to reaction time is eliminated and the outcomes of “1% scenarios” are alleviated.
Most of the time, driving is a simple, carefree task. But no one is immune to danger on the road. Statistically, these unsafe situations are very rare, yet they do occur, and they carry calamitous costs. AVs equipped with a solution purpose-built to solve the visual perception problem offer a path to ease these drastic occurrences and their consequences.
Featured image: AV Perception
In-post photo: Created by author on July 29th, 2019
Ashwini is currently the Co-Founder and Chief Business Officer at Recogni Inc., the designers of a vision-oriented artificial intelligence platform for autonomous vehicles. He is a serial entrepreneur and company-builder with more than 20 years of experience, and has built six startups with four exits via acquisition to various public companies.