Tesla’s ‘Full Self-Driving’ system is under increased scrutiny as incidents and expert opinions raise questions about its safety and readiness for widespread use, even as Elon Musk touts his cars as the robotaxis of the future.
AP News reports that Tesla’s highly touted ‘Full Self-Driving’ (FSD) system, which approximately 500,000 Tesla owners are currently using, is facing growing concerns about its safety and ability to operate autonomously. The system, which Tesla claims can navigate from point to point with minimal human intervention, has been the subject of multiple incidents that have drawn the attention of federal regulators.
William Stein, a technology analyst at Truist Securities, has tested the latest versions of FSD three times in the past four months. In each instance, he reported that the vehicle made unsafe or illegal maneuvers, leaving his 16-year-old son “terrified” during the most recent test drive. Stein’s experiences, along with a fatal crash involving a Tesla equipped with FSD in the Seattle area, have prompted the National Highway Traffic Safety Administration (NHTSA) to investigate the system.
Tesla CEO Elon Musk has made bold predictions about the capabilities of FSD, suggesting that it could operate more safely than human drivers by the end of this year or next. However, experts in the field of autonomous vehicles are becoming increasingly skeptical about the system’s ability to function safely on a large scale. Many doubt that Tesla is close to deploying a fleet of autonomous robotaxis, as Musk has predicted.
One of the primary concerns raised by experts is the reliance of Tesla’s FSD system on cameras and computers, which may not always be able to accurately detect and identify objects, especially in adverse weather conditions or low light. Most other companies developing autonomous vehicles, such as Waymo and Cruise, use a combination of cameras, radar, and laser sensors to ensure better perception of the environment.
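To illustrate the general idea behind that criticism (this is a purely hypothetical sketch, not Tesla’s, Waymo’s, or Cruise’s actual code): with multiple independent sensor types, detections can be cross-checked so that no single degraded sensor decides on its own whether an obstacle exists, whereas a camera-only system has no second opinion.

```python
# Hypothetical illustration of multi-sensor redundancy via majority vote.
# Not any company's actual code -- just a sketch of why engineers value
# combining cameras with radar and lidar.

def fuse_obstacle_detections(camera: bool, radar: bool, lidar: bool) -> bool:
    """Report an obstacle if at least two of three sensors agree."""
    votes = sum([camera, radar, lidar])
    return votes >= 2

def camera_only_detection(camera: bool) -> bool:
    """A single-sensor system simply trusts its one input."""
    return camera

# A camera blinded by glare or darkness (False) can be outvoted
# by radar and lidar, which are unaffected by lighting:
print(fuse_obstacle_detections(camera=False, radar=True, lidar=True))  # True

# The camera-only system misses the same obstacle:
print(camera_only_detection(camera=False))  # False
```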
Missy Cummings, a professor of engineering and computing at George Mason University and a prominent Tesla critic, emphasizes that cars cannot operate safely with vision alone. Even systems that incorporate laser and radar technology are not yet able to drive reliably in all situations, raising safety questions across the board.
Another issue is that machine learning systems like FSD lack common sense and learn narrowly. Phil Koopman, a professor at Carnegie Mellon University who studies autonomous vehicle safety, explains that if an autonomous vehicle encounters a situation it has not been trained to handle, it is prone to crashing.
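As a toy illustration of that “narrow learning” failure mode (hypothetical, and in no way FSD’s actual logic): a system that has effectively memorized responses to the situations in its training data has no principled answer for anything outside them, which is why safety researchers insist on an explicit fallback for unrecognized situations.

```python
# Hypothetical toy example of narrow learning -- not FSD's actual logic.
# A policy that only covers situations seen in training has no learned
# behavior to fall back on when it meets something new.

TRAINED_RESPONSES = {
    "red_light": "stop",
    "green_light": "proceed",
    "pedestrian_crossing": "yield",
}

def narrow_policy(situation: str) -> str:
    # For an unseen situation there is no learned behavior; a safe system
    # must detect this gap and hand control back or come to a stop.
    return TRAINED_RESPONSES.get(situation, "UNDEFINED -- untrained situation")

print(narrow_policy("green_light"))               # proceed
print(narrow_policy("overturned_truck_in_lane"))  # UNDEFINED -- untrained situation
```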
As Tesla faces declining electric vehicle sales and Musk encourages investors to view the company as a robotics and AI business, the safety and effectiveness of FSD remain under intense scrutiny. The NHTSA is evaluating information on the fatal crash in Washington state and investigating whether a recent Tesla recall, intended to improve the system that monitors drivers using automated features, was successful.
While some Tesla fans have shared videos of their cars driving autonomously without human intervention, these instances do not provide a comprehensive picture of the system’s performance over time. Alain Kornhauser, who leads autonomous vehicle studies at Princeton University, suggests that Tesla could start by offering rides on a smaller scale in areas where detailed maps can help guide the vehicles.
The AP contributed to this report.
Read more at AP News here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.