By Mackenzie Olson
Recently, numerous drivers have claimed that their Tesla vehicles crashed while in autopilot mode. Perhaps most notable was a crash in Florida, in which a 2015 Tesla Model S in autopilot mode failed to apply the brakes and collided with an eighteen-wheel tractor-trailer. The driver of the Tesla was killed.
Traffic fatalities are commonplace. In 2014, there were 29,989 fatal motor vehicle crashes in the United States, resulting in 32,675 deaths. This, however, is the first fatality to occur in a Tesla operating in autopilot mode. Moreover, Tesla autopilot has been used for more than 130 million miles of driving, while on average a fatality occurs every 94 million miles in the United States and every 60 million miles worldwide. Such figures can seem to invite the conclusion that Tesla autopilot renders its vehicles safer than those that are manually operated.
Perhaps the reason this crash has garnered so much media attention, beyond being the first fatality of its kind, is that the name "autopilot" implies the system is a superior operator, and therefore safer than a human driver. People may not realize, however, that the technology is not as sophisticated as its name suggests.
Indeed, as the Guardian explains, the system is not fully autonomous: "[I]t is described with warning notices as ‘traffic-aware cruise control’ and reminds drivers to keep their hands on the steering wheel at all times." Tesla further advises drivers to remain alert while using the feature and to be prepared to hit the brakes and grab the steering wheel if need be.
Some attorneys have suggested that such warnings may be insufficient and could provide grounds for a lawsuit. Their observation echoes the Guardian's: the term "autopilot" is itself a misnomer. When people hear it, they typically assume the machine pilots itself entirely, not that they must assist in its operation.
And depending on how the autopilot feature is marketed and advertised, such a discrepancy between actual and proclaimed capabilities could give rise to violations of various consumer protection laws. If a technology does not operate as purported, its advertising and marketing could be considered unfair and deceptive under those laws. Whether Tesla's autopilot marketing rises to that level, though, remains to be seen.
In a recent CNN Money article, several law professors also suggested that Tesla could be liable under a products liability theory. W. Kip Viscusi, a professor at Vanderbilt Law School, explained that "A reasonable consumer might expect [autopilot] to work better, that you wouldn’t be crashing into a semi that crossed the highway." By contrast, John C. P. Goldberg, a professor at Harvard Law School, cautioned that a claim premised on an alleged product defect would not necessarily prove viable; because the technology is so new, he explained, the governing law is new as well.
In the meantime, interested practitioners should continue to monitor these cases closely, along with any new ones that arise. The question is murky enough that the family of the deceased driver has already hired an attorney, who is exploring their case even though his firm has yet to reach any conclusions about the viability of potential legal claims. The same firm has also been retained by other Tesla drivers who were involved in crashes while using the autopilot feature.
Yet, given the continuously changing nature of this technology, this area of law is poised to develop at a rapid rate.