Sunday, July 6, 2025

Are Self‑Driving Cars Safe Enough for Public Roads?

Autonomous vehicles (AVs) are no longer a future fantasy in 2025, but a real-world presence on roads in San Francisco, Phoenix, and Shanghai. They promise to transform transportation by reducing human error, smoothing traffic flow, and expanding mobility for people with disabilities. But their growing presence raises a question: are these vehicles safe enough for public roads? High-profile crashes, such as the 2018 Uber collision that killed a pedestrian in Arizona, and more recent incidents involving Waymo and Cruise vehicles, have fueled public mistrust. This post examines the safety of autonomous cars, reviewing the research on their behavior, the problems they face, and why understanding them matters for deciding the future of transportation.

Public interest in AVs stems from their potential to transform daily life. That interest is tempered by distrust, fed by media coverage of crashes and by a lack of information about how the vehicles actually work. Some argue that AVs are safer than human drivers, who cause over 90% of the 1.3 million annual roadway deaths worldwide through errors such as distraction or speeding. Others contend that AVs' reliance on complex algorithms and sensors introduces new risks, particularly in dynamic urban environments. Drawing on existing research and crash data, this post argues that while autonomous cars show superior safety in controlled settings, their readiness for widespread public deployment hinges on closing technology gaps and building public trust through open discourse.


The Science of Autonomous Vehicle Safety


Figure 1. Waymo Driver safety performance compared with average human drivers. Reproduced from Waymo, "Safety," Waymo, 2024. [Online]. [1]


Autonomous cars rely on a combination of sensors (lidar, radar, and cameras) and artificial intelligence to navigate. These systems process vast amounts of data in real time to detect obstacles, predict behavior, and steer the vehicle. A 2023 study drawing on over 37 billion miles of driving data found that Waymo's vehicles had a 76% lower crash rate than human-driven vehicles in comparable circumstances [1]. The researchers attributed this to AVs' strict compliance with road laws and reaction times that humans cannot match.
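To make that pipeline a little more concrete, below is a minimal illustrative sketch in Python. The function names, data shapes, and thresholds are entirely hypothetical (no real AV stack works this simply); it only shows the general idea of fusing detections from several sensors and deriving a braking decision.

# Illustrative sketch only: hypothetical sensor fusion and braking check,
# not any vendor's actual code.

def fuse_detections(lidar_ranges, radar_tracks):
    """Combine per-object estimates into (distance_m, closing_speed_mps) pairs."""
    obstacles = []
    for obj_id in set(lidar_ranges) & set(radar_tracks):
        distance = min(lidar_ranges[obj_id], radar_tracks[obj_id]["range"])
        closing_speed = radar_tracks[obj_id]["closing_speed"]
        obstacles.append((distance, closing_speed))
    return obstacles

def should_brake(obstacles, reaction_time_s=0.1, max_decel_mps2=6.0):
    """Brake if any obstacle is closer than the distance needed to stop."""
    for distance, closing_speed in obstacles:
        stopping_distance = (closing_speed * reaction_time_s
                             + closing_speed ** 2 / (2 * max_decel_mps2))
        if stopping_distance >= distance:
            return True
    return False

# Example: an object 18 m ahead, closing at 15 m/s (~54 km/h)
obstacles = fuse_detections({"ped_1": 18.0},
                            {"ped_1": {"range": 19.0, "closing_speed": 15.0}})
print(should_brake(obstacles))  # True: ~20.3 m needed to stop, only 18 m available

The point of the sketch is the short, fixed "reaction time" in the stopping-distance formula: that is the mechanism behind the claim that AV response times are hard for humans to match.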


However, the same research also noted that AVs do not fare well in "edge cases": rare, unexpected situations such as erratic pedestrian behavior or bad weather. For instance, a simulation-based study found a strong link between sensor misinterpretations or software faults and difficult surroundings such as road work sites or unusual intersections [2]. The findings show that while AVs keep improving in routine environments, their safety in unstructured environments is still a work in progress.


Human drivers are notoriously error-prone. The World Health Organization reports that human factors such as distraction, fatigue, and impairment are responsible for 94% of road crashes worldwide [3]. AVs eliminate those failure modes: they do not get distracted or tired. A 2025 study of AV performance over 60 million miles driven concluded that fully autonomous systems could prevent 85% of crashes caused by human error [4]. This applies especially to rear-end collisions, where AVs' precise braking outperforms human reflexes.
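For a rough sense of scale, the figures above can be combined in a back-of-the-envelope calculation. This treats the 94% and 85% shares as directly multipliable, which is a simplifying assumption of mine, not a claim made by either study:

# Back-of-the-envelope only: assumes the cited percentages combine multiplicatively.
annual_road_deaths = 1_300_000   # global roadway deaths per year (figure cited above)
human_error_share = 0.94         # share of crashes attributed to human factors [3]
av_preventable_share = 0.85      # share of human-error crashes AVs might prevent [4]

upper_bound = annual_road_deaths * human_error_share * av_preventable_share
print(f"~{upper_bound:,.0f} deaths per year in the most optimistic reading")
# -> roughly 1,040,000, i.e. on the order of a million lives annually

Even if the true overlap between these percentages is smaller, the order of magnitude explains why the stakes of getting AV deployment right are so high.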


Yet machines have their own vulnerabilities. A 2024 report highlighted that AVs can misread ambiguous situations, such as a pedestrian suddenly stepping into the road [5]. In 2023, a Cruise AV in San Francisco struck a pedestrian who had already been hit by another vehicle and then dragged her roughly 20 feet because its software failed to recognize the situation [6]. Such incidents show that AVs' decision-making algorithms, however advanced, are not flawless in unscripted real-world conditions.


Public Perception and Trust



Figure 2. Comparison of driver-vehicle interaction needs in autonomous vehicles. Reproduced from E. Portouli et al., "Shaping driver-vehicle interaction in autonomous vehicles," ResearchGate, 2020. [Online]. [12]


Public trust is a critical barrier to AV adoption. As Figure 2 suggests, passengers in an AV still spend a large share of their time monitoring the driving. Distrust is also fueled by media coverage that amplifies occasional spectacular AV failures while overlooking the overall safety record. For example, human drivers average 1.35 crashes per million miles while Waymo's AVs average just 0.36 [1], yet public debate focuses on the AVs' errors.
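Those per-mile figures can be turned into a single relative-risk number with one line of arithmetic. This is only a consistency check on the statistics quoted above; the result lands near the 76% reduction cited earlier, and the small gap likely reflects differences in how the comparison populations were defined:

# Quick consistency check on the cited crash rates [1].
human_rate = 1.35   # crashes per million miles, average human drivers
waymo_rate = 0.36   # crashes per million miles, Waymo AVs

relative_reduction = 1 - waymo_rate / human_rate
print(f"{relative_reduction:.0%} fewer crashes per million miles")   # -> 73%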


Experts and engineers argue that open discussion is key to closing this gap. A 2024 study on public engagement found that transparency about AV decision-making, for example how the AI prioritizes safety, can increase trust by up to 40% [7]. Waymo and other companies have begun publishing detailed safety reports, but critics note that these are not independently audited, which reinforces distrust.


Regulatory and Ethical Challenges


In the United States, the National Highway Traffic Safety Administration (NHTSA) has not implemented standardized safety requirements for AVs, resulting in a patchwork of state-level regulations [8]. A 2024 NHTSA report found that inconsistent rules create public confusion because testing procedures vary widely [9]. For instance, California requires AVs to report all crashes, whereas other states have looser requirements, making safety data hard to compare.


Ethical problems also loom. AVs must be programmed to make split-second decisions in unavoidable crashes, such as choosing between striking a pedestrian and swerving into oncoming traffic. Acceptance of autonomous vehicles also varies significantly across countries, with cultural factors accounting for up to 30% of the variance in willingness to adopt AVs, which makes it necessary to adapt AV systems to the regions where they operate [10]. Open deliberation among communities, scientists, and policymakers is needed to ensure that AV programming is consistent with societal values.


The Importance of Ongoing Improvement


The statistics suggest self-driving cars are safer than human drivers in all but the most unpredictable scenarios, particularly in controlled environments such as highways or clear weather. Their crash rates are lower, and their ability to eliminate human error brings undeniable benefits. However, their weaknesses in edge cases and chaotic city driving mean they are not yet ready for unrestricted public deployment. Ongoing research is needed to refine AI algorithms, improve sensor reliability, and build effective fail-safes.


Alongside improving the technology, public trust must be earned through open dialogue. Companies and researchers should engage with communities, explaining not only the advantages but also the limitations and how they are being addressed. This helps ensure that AVs deliver what the public actually wants, such as improved mobility for elderly and disabled people, while meeting ethical requirements.


A Call to Action


Autonomous vehicle safety is not just a technological problem; it is a social one. The data show that AVs have the potential to save millions of lives by preventing crashes, but that potential will be realized only through public adoption and rigorous regulation. The public should demand transparent federal standards and testing rules so that AVs are held to the highest possible safety bar. Citizens need to join this conversation, in community forums or on social media, because it will shape the future of transportation. Self-driving cars are not yet perfect, but their potential is revolutionary. By pushing for evidence-based policy and open dialogue, we can ensure AVs become a safe, equitable option for the future.


References


[1] Waymo, “Safety performance of Waymo’s autonomous vehicles,” Waymo, 2023. [Online]. Available: https://waymo.com/safety

[2] Z.-X. Xia, S. Fadadu, Y. Shi, and L. Foucard, "Robust Long-Range Perception Against Sensor Misalignment in Autonomous Vehicles," arXiv:2408.11196, Aug. 2024. doi: 10.48550/arXiv.2408.11196. [Online]. Available: https://arxiv.org/abs/2408.11196

[3] World Health Organization, “Road traffic injuries,” WHO, 2024. [Online]. Available: https://www.who.int/news-room/fact-sheets/detail/road-traffic-injuries

[4] Waymo Team, "New Study: Waymo is reducing serious crashes and making streets safer for those most at risk," Waymo, 2025. Accessed: Jul. 06, 2025. [Online]. Available: https://waymo.com/blog/2025/05/waymo-making-streets-safer-for-vru

[5] F. Pan, Y. Zhang, J. Liu, L. Head, M. Elli, and I. Alvarez, "Reliability modeling for perception systems in autonomous vehicles: A recursive event-triggering point process approach," Transportation Research Part C: Emerging Technologies, vol. 169, p. 104868, Dec. 2024. doi: 10.1016/j.trc.2024.104868. [Online]. Available: https://www.sciencedirect.com/science/article/abs/pii/S0968090X24003899

[6] Reuters, "GM's Cruise recalling 950 driverless cars after pedestrian dragged in crash," Reuters, Nov. 8, 2023. [Online]. Available: https://www.reuters.com/business/autos-transportation/gms-cruise-recall-950-driverless-cars-after-accident-involving-pedestrian-2023-11-08/

[7] W. Huang, M. Chen, W. Li, and T. Zhang, "Effects of Automated Vehicles' Transparency on Trust, Situation Awareness, and Mental Workload," in HCI in Mobility, Transport, and Automotive Systems, H. Krömker, Ed., Cham: Springer Nature Switzerland, 2024, pp. 116–132. doi: 10.1007/978-3-031-60477-5_9.

[8] N. A. Greenblatt, “Self-driving cars and the law,” IEEE Spectrum, vol. 53, no. 2, pp. 46–51, Feb. 2016, doi: 10.1109/MSPEC.2016.7419800.

[9] National Highway Traffic Safety Administration, “Automated Vehicles for safety,” NHTSA, 2024. [Online]. Available: https://www.nhtsa.gov/vehicle-safety/automated-vehicles-safety

[10] C. S. Muzammel, M. Spichkova, and J. Harland, "Cultural influence on autonomous vehicles acceptance," arXiv:2404.03694, Apr. 2024. doi: 10.48550/arXiv.2404.03694. [Online]. Available: https://arxiv.org/abs/2404.03694

[11] S. Sternlund, "Traffic Safety Potential and Effectiveness of Lane Keeping Support." Accessed: Jul. 06, 2025. [Online]. Available: https://research.chalmers.se/publication/517156/file/517156_Fulltext.pdf

[12] E. Portouli et al., "Shaping driver-vehicle interaction in autonomous vehicles: How the new in-vehicle systems match the human needs," ResearchGate, 2020. Accessed: Jul. 06, 2025. [Online]. Available: https://www.researchgate.net/publication/347884807_Shaping_driver-vehicle_interaction_in_autonomous_vehicles_How_the_new_in-vehicle_systems_match_the_human_needs



