On Monday, electric carmaker Tesla Inc said in a statement that it has a “moral obligation” to keep improving its Autopilot driver assistance system and to expand its availability to consumers, citing recent data that it says show improved safety metrics when the system is engaged in its vehicles.

The company issued the statement in defense of Autopilot in response to a Washington Post report that investigated several serious crashes involving the system on roads where it could not operate reliably. Tesla countered that its data showed the system was saving lives and preventing injuries.

The Post’s report examined at least eight crashes it had identified between 2016 and 2023 in which Autopilot could be activated in situations it was not designed for. The report noted that although Tesla had the technical ability to restrict the feature by geography, it had taken few definitive steps to do so.

The Post pointed out that Autopilot is “intended for use on controlled-access highways” with “a center divider, clear lane markings, and no cross traffic,” and noted that Tesla’s user manual says the system can also falter on roads with hills or sharp curves.

In a post on the social media platform X, Tesla wrote that the Post report “leverages instances of driver misuse of the Autopilot driver assist feature to suggest the system is the problem,” claiming that vehicles with Autopilot engaged were roughly 10 times safer than the US average and 5 times safer than Teslas driven without the feature enabled.

The company also noted that even when Autopilot is enabled, the driver remains responsible for controlling the vehicle at all times and is reminded of that responsibility.

The Post wrote that US regulators, including the National Highway Traffic Safety Administration (NHTSA), have no rules limiting the technology to the conditions it was designed for, despite open investigations into the software following more than a dozen crashes in which Tesla vehicles collided with parked emergency vehicles.

In a statement to Reuters, the NHTSA said that verifying that systems like Autopilot are used only under the conditions for which they were designed would be too complex and resource-intensive for the agency, and that even if it could do so, it would not fix the system’s underlying shortcomings.

Last month, after Tesla won two product liability lawsuits in California, a judge in Florida found “reasonable evidence” that CEO Elon Musk and other Tesla managers knew the Autopilot system in their vehicles was defective but continued to allow the vehicles to be driven unsafely.
