July 1, 2016

Who’s driving?

Who is at fault when technology betrays us?

The answer to that question is pretty simple when the technology in question is a misdirected selfie, an ill-timed tweet, or an accidental “reply all” that airs our thoughts to a larger audience than we intended.  Most of the time, the consequences don’t amount to much beyond embarrassment.  But sometimes, embarrassment isn’t the only price.

On May 7 of this year, Joshua Brown of Canton, Ohio, crashed his Tesla Model S electric sedan into an 18-wheeler tractor-trailer that was turning left in front of him.  Or, to put it more accurately, the car crashed itself into the truck, because the incident occurred while it was in self-driving mode.  The crash sent Brown’s car underneath the trailer and out the other side, killing him instantly, one would hope, because the alternative is pretty damned horrible.  Preliminary reports show that the Tesla, which is supposed to brake and steer around obstacles, never applied the brakes, apparently confused by the white side panel of the truck against the sky.  Mr. Brown’s is the first fatality linked to self-driving technology, but there’s a good chance it won’t be the last.  More and more companies are pursuing the technology, including tech giant Google and auto giant General Motors.

The National Highway Traffic Safety Administration is investigating, and Tesla itself has acknowledged that the problem stemmed from the fact that “Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied.”  (Nice use of the passive voice there, Tesla.  Way to avoid falling on that grenade.)

But, sarcasm aside, Tesla’s strong acknowledgement of a tragic accident, coupled with its weak acknowledgement of any sort of responsibility, doesn’t seem entirely out of place.  Yes, Tesla made a product that encourages reliance, but those who give consumer advice suggest that a driver should always be ready to take over from Tesla’s Autopilot when needed.  This seems to call into question the whole point of Autopilot (if you can’t rely on it to avoid a crash, seriously, what good is it?), but it really shouldn’t come as a surprise to anyone.  Tech gurus all advise us to back up our data on a regular basis, so how much sense does it make not to have a backup when the “data” in question is your life and health?

As a lawyer, I look at this from a liability standpoint, and I suspect this will be something for the courts to deal with one day.  But if self-driving tech becomes commonplace and accidents still happen, who is responsible?  Is it the company that makes the self-driving tech or the consumer who chooses to use it?  This is why I don’t see the tech completely taking over.  The liability burden would be too great unless you could be nearly certain of avoiding collisions.  It would be like putting a wholly dangerous instrument in someone’s hands with absolutely no understanding of how they might use it.  (Oh, wait.)

And, call me a curmudgeon (seriously, I don’t even fully trust automatic transmission), but I have my doubts that self-driving technology will ever replace the old-fashioned kind in some places.  I can’t imagine a self-driving car making its way around the traffic nightmare that is the Arc de Triomphe in Paris, a New York rush hour, or even the murderous traffic in Houston without more than a few run-ins.  For now, my suggestion would be to keep both feet on the floor and two hands on the wheel (unless you’re shifting).  And for goodness’ sake, stop texting and hang up your phone.