Ten Across

Points of reference: Robotaxi safety

 

A strong majority of Americans say they would not want to ride in an autonomous vehicle, yet the cars are now legal in over half of all states.


Editor’s note: This article is part of a collaboration between APM Research Lab and the Ten Across initiative, housed at Arizona State University.


by Maya Chari | August 13, 2024

I recently rode in a taxi that had no driver.  

Phoenix, Arizona, is one of a few cities that allow customers to hail autonomous “robotaxis.” Every day, I see the shiny white cars, with their spinning rooftop cameras and empty driver's seats. Although I have lived in Phoenix for several years and don't currently own a car, I don't regularly use them. They’re more expensive than Ubers or Lyfts, let alone buses or trains. My friends who drive regularly tell me stories about robotaxis getting stuck in parking lots or failing to yield to ambulances.

For most of my first ride I didn’t feel unsafe. The vehicle drove cautiously and obeyed all applicable traffic laws. But the many cameras inside, constant reminders of surveillance, made me uneasy. A smooth screen in the seat in front of me allowed me to select the genre of music I preferred.

There was also a button that read “pull over.” I waited to press it until I was within walking distance of my destination. As soon as I pressed it, the car lurched toward the sidewalk. I was startled. The car was rounding a bend, and I had assumed that it would wait until the road straightened, as a human driver probably would have if I’d asked them to pull over.

I suppose the car was only obeying my instructions, but the interaction was a small example of the many things that robotaxis have to learn about interacting with humans—and that we have to learn about interacting with them.  

What exactly are robotaxis?  

A driverless robotaxi is one example of a vehicle equipped with an automated driving system (ADS), designed to operate completely without a human driver. ADS vehicles typically use sensors to gather information about the environment around them. Machine-learning algorithms then analyze this data and determine the optimal course of action.

The vehicles share information through a centralized network, meaning that each vehicle has access to data not only from its own experiences but also from those of all other vehicles operated by the company. If a vehicle’s onboard artificial intelligence encounters an unfamiliar situation, it will contact a human remote operator for assistance in making a decision, but these human operators do not have the ability to take control of the vehicle.  

ADS are distinct from advanced driver assistance systems (ADAS), like Tesla’s Autopilot, which use automation to assist a human driver with certain parts of the driving task.

The differences in design correspond to differences in business model. No ADS-equipped vehicles are currently available to consumers. Instead, robotaxi companies envision a future in which car ownership is unnecessary, because anyone can easily summon an automated ride in minutes through an app.  

Currently, Waymo, operated by Google’s parent company Alphabet, is the only company offering rides to customers. One other, General Motors’ Cruise, has done so in the past. In addition, Zoox, owned by Amazon, is currently testing vehicles and hopes to make rides available to the public in Las Vegas later this year.

What are the benefits of driverless cars? 

It’s easy to see the appeal of a future in which most Americans do not need cars. In 2022, the average American household spent 15% of its post-tax income on transportation. The figure was even higher for low-income households: those making less than $25,000 spent 30% of their post-tax income on transportation. Vehicle ownership made up the bulk of these costs, as households that didn’t own vehicles spent only around 5% of their post-tax incomes on transportation.

Furthermore, most cars spend most of their time in parking spaces, which take up valuable real estate in cities. Parking lots consume 10% of all land area in metropolitan Phoenix, for example. To accommodate different uses at different times of day, the metro has 4.6 parking spaces for each car. The adoption of robotaxis could lessen this demand for parking.

Another touted benefit of self-driving cars is that passengers could read, work, or watch TV while commuting, making the experience more pleasant.  

All of these benefits are also benefits of public transportation. Robotaxis, however, add the flexibility and relative privacy currently offered by privately owned cars.

Safety: Human drivers versus robots 

The biggest potential benefit that robotaxi advocates cite is not cost, time, or space savings—it's safety. In 2023, an estimated 40,990 people died in crashes on U.S. roads, making traffic accidents one of the leading killers of Americans under 45 years of age. Fatalities increased during the COVID-19 pandemic, rising 7.3% in 2020 and 10.8% in 2021. They have since declined, but not to pre-pandemic levels.

Self-driving car companies present themselves as the solution to this crisis. Waymo’s marketing copy emphasizes deaths due to car accidents, including the frequently cited statistic that 94% of crashes involve human error. Former Arizona governor Doug Ducey also cited both the high number of roadway deaths (“the equivalent of one medium-sized commercial airliner crashing once a day, Monday through Friday, every week of the year”) and the 94% statistic in his 2018 executive order authorizing the testing and operation of autonomous vehicles in the state. 

Are humans really terrible at driving? 

The claim that 94% of crashes are due to human error comes from a 2015 National Highway Traffic Safety Administration (NHTSA) memo and deserves some additional context. That memo does indeed assign the “critical reason” for crashes to the driver in 94% of cases. The NHTSA memo also notes, however, that “the critical reason is the immediate reason for the critical pre-crash event and is often the last failure in the causal chain of events leading up to the crash. …it is not intended to be interpreted as the cause of the crash nor as the assignment of the fault to the driver.”

Further, in a 2022 interview, Jennifer Homendy, the chair of the National Transportation Safety Board, called the statistic “dangerous,” saying it obscures the need for a “safe systems” approach in which the overall safety of the transportation system is improved through measures such as federal safety rules and improved roadway design.

“Human drivers are really amazingly safe,” says Dr. Phillip Koopman, a professor of autonomous vehicle safety and embedded systems at Carnegie Mellon University. “Fatalities are very rare.”

Human-driven vehicles in the United States currently average around 1.24 fatalities per hundred million vehicle miles traveled. California’s rates of vehicle fatalities hover around the national average. Arizona’s are worse, but in 2023, there were still only 1.73 fatalities per hundred million miles traveled.  

This seeming paradox—that driving is relatively safe per mile, but still kills tens of thousands of people a year—is explained by the sheer amount of driving that is built into American life. Despite the rise in remote work during the COVID-19 pandemic, in 2022, the majority of Americans (around 77%) still drove to work most days, and the average American driver traveled 13,476 miles.
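The arithmetic behind the paradox is straightforward. Here is a rough, illustrative sketch that multiplies the per-mile fatality rate by total miles driven; the annual mileage total of roughly 3.2 trillion vehicle miles is an assumption for the sake of the example, not a figure from this article:

```python
# Back-of-the-envelope check: a very low per-mile fatality rate still
# produces tens of thousands of deaths when multiplied by the enormous
# number of miles Americans drive each year.
fatality_rate_per_100m_miles = 1.24  # national average rate cited above
annual_vmt = 3.2e12                  # ASSUMED: ~3.2 trillion vehicle miles traveled per year

expected_deaths = annual_vmt / 1e8 * fatality_rate_per_100m_miles
print(f"Expected annual traffic deaths: {expected_deaths:,.0f}")
# -> roughly 39,700, consistent with the ~41,000 deaths estimated for 2023
```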

Can self-driving cars really keep their safety promises?  

To date, the types of crashes associated with human and robotic drivers differ slightly—at least in San Francisco, where the most comparable data are available. Over the past four years in that city, there have been nearly 10,000 crashes, and in half of them the second party was another motor vehicle. In comparison, motor vehicles are the second parties in most autonomous vehicle crashes: 37 of the 54 reported in San Francisco during that same period, or about 69%.

Waymo and Cruise both claim that their vehicles are safer than those driven by humans. Verifying this claim is complicated. In 2023, according to the Department of Transportation, 6,102,936 police-reported crashes caused property damage or injury in the United States. With 3,132,411 million vehicle miles traveled (about 3.1 trillion), that equals about 1.94 crashes per million miles traveled. Meanwhile, Waymo has reported around 4.5 crashes per million miles traveled. Although that is over twice the crash rate associated with human drivers, Waymo suggests that human-driven automobile crashes are significantly underreported.
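For transparency, here is the rough math behind that comparison, written out as a small sketch using only the figures cited above; as the article notes, the human baseline counts only police-reported crashes and is likely an undercount:

```python
# Rough crash-rate comparison from the figures cited above.
us_crashes_2023 = 6_102_936    # police-reported crashes (DOT, 2023)
us_vmt_millions = 3_132_411    # total vehicle miles traveled, in millions

human_rate = us_crashes_2023 / us_vmt_millions  # crashes per million miles
waymo_rate = 4.5                                # Waymo's reported rate

print(f"Human drivers: {human_rate:.2f} crashes per million miles")  # roughly 1.9
print(f"Waymo:         {waymo_rate:.2f} crashes per million miles")
# Waymo's reported rate is over twice the human figure, but the human
# figure reflects only police-reported crashes.
```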

When it comes to crashes that cause injuries, Waymo has reported 7.3 million driverless miles with 3 accidents resulting in injuries, or about 0.41 injury-causing accidents per million miles. Cruise’s vehicles have contributed to at least one high-profile accident that caused severe injuries to a pedestrian. That company reports 1 million driverless miles, so Cruise’s injury-crash rate is at least 1.0 per million miles. In comparison, the average Arizona driver was involved in 0.71 crashes that caused injury per million miles in 2022. So, in this rough comparison of crashes resulting in injury, Waymo fares somewhat better, and Cruise looks somewhat worse, than human drivers.
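The same per-million-mile arithmetic, written out as a sketch using only the mileage and crash counts reported above:

```python
# Injury-crash rates per million driverless miles, from the counts above.
waymo_injury_crashes, waymo_miles_m = 3, 7.3    # 3 injury crashes over 7.3M miles
cruise_injury_crashes, cruise_miles_m = 1, 1.0  # at least 1 injury crash over ~1M miles
arizona_human_rate = 0.71                       # per million miles, 2022

print(f"Waymo:    {waymo_injury_crashes / waymo_miles_m:.2f} per million miles")    # ~0.41
print(f"Cruise: >={cruise_injury_crashes / cruise_miles_m:.2f} per million miles")  # >=1.00
print(f"Arizona:  {arizona_human_rate:.2f} per million miles (human drivers)")
```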

Some caveats on the existing safety record are in order. For one, Waymo and Cruise cars operate only in certain geographical areas, which rarely experience snow or other adverse weather, making statewide or nationwide comparisons difficult. In addition, no robotaxis are more than a few years old, while the average age of American passenger vehicles is almost 14 years. Older cars are less safe on average.

Another caveat involves reporting requirements. Dr. Koopman doesn’t believe that existing data proves that autonomous vehicles are safe. “They like to say ‘we’re transparent,’ but the full statement is ‘we’re transparent to the degree required by law,’ which is ‘not very,’” he said. For instance, robotaxi companies are required to report all crashes to the NHTSA under the agency’s Standing General Order on Crash Reporting, but they are permitted to heavily redact the reports if they contain “confidential business information.”

The National Highway Traffic Safety Administration’s investigations of driverless car companies

The NHTSA currently has safety-related investigations ongoing into both Cruise and Waymo. The Cruise investigation was opened shortly after an October 2023 incident in San Francisco, in which a pedestrian was struck by a human driver and thrown into the path of a driverless Cruise vehicle. The Cruise vehicle braked but still hit her. Instead of remaining stationary, it then attempted to pull over, dragging the pedestrian 20 feet forward and exacerbating her injuries.

In response, Cruise met with NHTSA and conducted a voluntary investigation. Cruise itself reported that the incident was the result of the vehicle misidentifying the initial impact as a lateral collision, and that “collision with a risk of serious injury could recur with the Collision Detection Subsystem every 10 million - 100 million miles of driving on average.”  

Cruise paused operations of its driverless fleet on October 26, 2023, and submitted a voluntary recall report to NHTSA on November 2. Its supervised test fleet, in which human safety drivers remain in the vehicle and are able to take over in emergencies, stayed in operation. In the recall report, Cruise said that it had “developed a software update that remedies the issue,” though it did not provide further detail. The company has continued human-supervised testing but does not currently operate any driverless fleets.

Earlier this year, NHTSA opened an investigation into 22 Waymo incidents, including 17 crashes and 5 potentially unsafe “non-crash behaviors” like driving in the opposite lane or entering a construction zone. Some of these incidents were reported to NHTSA under the Standing General Order, while others were reported by third parties. Both investigations are ongoing.   

Do people want them?  

The public is skeptical of self-driving cars. In 2022, only 37% of Americans surveyed by the Pew Research Center said they would want to ride in one—down from 44% in 2018. Many of those surveyed expressed concern about safety and job loss.  


Robotaxis also face some opposition from organized labor. Union leaders in California and New York have opposed the expansion of autonomous vehicle operation in their states. 

Where do robotaxis operate? 

According to the National Conference of State Legislatures, over half of all states have passed legislation allowing driverless vehicles, including 26 states where their use is permitted and another five requiring various types of testing prior to use.

Waymo, then known as the Google Self-Driving Car Project, first began testing its autonomous cars in 2009, and began operations in metropolitan Phoenix in 2017. Currently, Phoenix and San Francisco allow customers to hail self-driving taxis. In March 2024, Waymo expanded operations to Austin, Texas and Los Angeles. Waymo cars typically avoid the highway, but that is changing—the company began testing on Arizona freeways earlier this year. 

Meanwhile, General Motors’ Cruise began testing in Phoenix in May. Twenty-six states explicitly allow the operation of autonomous vehicles without a safety driver, while a few require a safety driver to be present.  

A lack of legislation does not mean that autonomous vehicles are prevented from operating in a state. For example, Oregon currently does not have statewide legislation permitting autonomous vehicles, but the Oregon Department of Transportation’s website makes it clear that testing is permitted in the state. Similarly, in Minnesota, which also lacks autonomous vehicle legislation, a transit agency has begun offering driverless rides.

The biggest problem with robotaxis might be their lack of social skills  

Dr. Aviral Shrivastava, a professor at Arizona State University who specializes in research on algorithms for autonomous vehicles, says that there are two types of errors that vehicles, autonomous or not, tend to make. The first is a safety error, which is doing something that is inherently unsafe or breaks the rules of the road, like running a red light. The second is an interaction error, in which a vehicle makes the wrong decision in response to the actions of another vehicle or a non-motorist.  

According to Dr. Shrivastava, the first category of error is “close to a solved problem.” But when it comes to interaction errors, he says, “I think all autonomous vehicle companies are struggling. [Autonomous vehicles] don’t know how to behave with a policeman. They’re not sure how to act when there is a fire truck around. They don’t know the best way to behave in a construction zone.”  

Dr. Shrivastava has ridden in a number of self-driving vehicles and says that he felt safest in a Waymo. But he recalls an experience in which a fellow passenger’s clothing got stuck in the door. The Waymo started moving regardless, and there was no way to tell it to wait.  

Not only are there no clear technological solutions to these “social interaction problems,” but there are also no metrics currently in use to evaluate how responsibly autonomous vehicles behave on the road. Dr. Shrivastava suggests an approach in which people are surveyed about how safe they feel near autonomous vehicles.

“We need to take human input into account. The onus is on autonomous vehicles to make sure that humans feel safe around them, which isn’t really the case right now.” 

It’s possible that better socialization will allow robotaxis to avoid sudden lurches to the curb like the one I experienced, making the experience more predictable, more widely accepted, and even safer for everyone.

