Since their creation, there has been speculation about whether self-driving cars are safe to travel in. Just recently, Reuters reported that General Motors Co’s test fleet of robot cars in California has been involved in more run-ins since doubling in size.
The fleet suffered six crashes in September and 13 crashes overall. A spokesperson for GM Cruise, Rebecca Mark, claimed that all of the crashes were caused by the other vehicle involved.
GM is reportedly testing 100 of its self-driving cars on California streets to develop software that can navigate traffic-filled, chaotic urban environments. Investors are monitoring GM’s progress closely, with some analysts stating that GM could deploy “robot taxis” within the next two years.
The damage that occurred during these crashes was categorized as minor, with no injuries or serious damage reported. California state law requires all self-driving vehicle crashes to be reported, regardless of severity. This is somewhat reassuring, as the public at least knows that someone is keeping track of the accident records. Those records could be another revealing indicator of how safe autonomous cars really are.
GM is not the only company with reports of self-driving car accidents, however.
In September, a self-driving Uber vehicle was involved in a crash in Pittsburgh, Pennsylvania, according to CBS. However, the driver of the vehicle was in complete control at the time of the crash. Luckily, no one was hurt, but both vehicles had to be towed from the scene. While the incident was under investigation, Uber took the safety precaution of removing its autonomous vehicles from the road. The service resumed just a few hours later.
Another self-driving Uber vehicle accident occurred back in March of this year in Tempe, Arizona. In this incident, a driver making a left turn at an intersection hit the autonomous Uber car in the farthest right lane as it was driving through the light. Uber halted its driving service to fully investigate the situation and “determine whether their equipment was operating properly.”
The Uber vehicle was confirmed to be in autonomous mode; however, both drivers had been blind to each other, as there were two full rows of cars in the leftmost two lanes that blocked their views. The driver making the left turn was cited while the Uber driver was not. Neither driver stopped in time to avoid the crash. This could be another case of human error, having little to do with the autonomous vehicle’s capabilities or safety.
In August of this year, Autotrader reported that since 2014, “all but one of the 34 reported accidents involving self-driving cars on California roads have had a human at fault,” and added that “many experts are convinced the greatest threat to safety as the vehicle population transitions from human-driven to self-driving cars will be the vehicles with humans in full control.” It is an interesting statement, though it may need a lot more data to back it up.
Google, which has its own line of autonomous cars, seems to agree with the above statement:
“Thousands of crashes happen everyday on U.S. roads, and red-light running is the leading cause of urban crashes in the U.S. Human error plays a role in 94% of these crashes, which is why we’re developing fully self-driving technology to make our roads safer.”
Because humans are not perfect, and because these cars are run by computers, the safety of the passengers inside an autonomous vehicle rests largely in the hands of the drivers around it. A computer can navigate a well-defined scenario, but real-world driving rarely is one. Unless all of the other cars on the road are also autonomous, there is no way a self-driving car can account for every off-the-wall situation life throws at it.
For now, self-driving cars are theoretically safe and maintain a good track record.
Featured image via Flickr/smoothgroover22