The government revealed last week that a Tesla Model S crashed into a truck in Florida in May, killing the electric car's driver. This, the headlines roared, is the first known example of a fatal road accident involving a self-driving car.

Except it is not.

The Tesla's "Autopilot" feature was turned on. But the model was not designed to be, and should never have been treated as, fully self-driving. The car's semi-autonomous systems, which use onboard sensors to guide it away from hazards, were not advanced enough to steer and brake the car without the driver paying continuous attention and correcting when necessary.

In fact, none of the semi-autonomous cars on the market are trustworthy enough to allow drivers to sit back and zone out. In the Florida case, the car failed to detect a large truck that had crossed into the Tesla's path, perhaps because it blended in with a brightly lit sky.

Tesla forces drivers to acknowledge that the system has limits before they can allow it to control the steering wheel. But the carmaker also named it "Autopilot," which suggested that the technology was more capable than it turned out to be.

It is critical that the public not take the wrong lesson from this accident, dismissing all car automation technologies because this one appears to have been misused.

FROM AN EDITORIAL IN THE WASHINGTON POST