Feds: Tesla’s Autopilot design at fault in crash

By David Iaconangelo | 09/05/2019 07:12 AM EDT

A 2018 crash involving a Tesla driver and a fire truck in Culver City, Calif., occurred partly because of how Tesla’s semiautonomous-driving features are designed, a federal report said yesterday.

Front-end damage to a Tesla involved in a Jan. 22, 2018, crash with a fire truck. National Transportation Safety Board

The conclusion came in findings from the National Transportation Safety Board, which investigated the probable cause of the accident.

The agency found that the driver of the Model S electric car was distracted by the car radio, a bagel and coffee, and possibly a cellphone, and never saw the fire truck in the lane ahead before rear-ending it.

NTSB blamed the driver’s "inattention and overreliance" on the driver-assist system for the crash, saying his use of the system was "inconsistent with guidance and warnings from the manufacturer."

But it also pointed to the design of the Autopilot system itself, which it said "permitted the driver to disengage from the driving task."

The company’s Autopilot feature has two parts. One allows the car to automatically slow or accelerate based on the movements of a vehicle traveling in the lane ahead. The other, Autosteer, keeps the car within lane boundaries. The owner’s manual warns that drivers must remain fully attentive and keep their hands on the wheel.

The findings are likely to increase scrutiny of whether Tesla has the know-how necessary to launch a fully autonomous ride-hailing service. Tesla CEO Elon Musk said this year that the company would unveil the service, which some analysts see as the next wave of zero-emissions transportation, as early as 2020.

A spokesperson from Tesla defended the Autopilot feature, saying in an email that internal data showed that drivers who use the feature "remain safer than those operating without assistance."

Since the crash, the company has updated the system, "including adjusting the time intervals between hands-on warnings and the conditions under which they’re activated," the spokesperson said.

No one was injured in the accident, the fourth high-profile crash associated with Tesla’s Autopilot. Two of those accidents, which proved fatal, are still being investigated by NTSB.

The National Highway Traffic Safety Administration, a separate entity, can order companies to issue a recall if it identifies a defect that poses an "unreasonable risk to safety."

Officials at NHTSA and the Department of Transportation say that authority is sufficient to regulate advanced driving features, despite calls for additional rules from safety advocates.

In the case of a fatal 2016 crash in Florida, NHTSA found no evidence of a "safety-related defect trend" in Tesla’s Autopilot feature.

But NTSB’s report prompted one frequent critic of Tesla, the Center for Auto Safety, to demand a recall.

"Tesla’s deceptive use of the term AutoPilot … encourages exactly the sort of overreliance seen in this crash," said the consumer advocacy group.

"NHTSA needs to do its job by issuing rules and removing unsafe vehicles from the road until they can meet minimum performance standards," it added.

The report comes as Congress is weighing the safety of autonomous vehicles. Experts also disagree on whether the technology will reduce greenhouse gas emissions (Energywire, Aug. 9).