My Cybersecurity Predictions for 2018, Part 3: Protecting Killer Cars
Death by autonomous auto is coming unless the industry gets security very right. The question is really whether it's already too late.
Death at the cyber-hands of a computer is coming in 2018. But more on that later.
I began this series of cybersecurity predictions for 2018 for Security Now because there aren't enough good "new year" cybersecurity predictions. There are some decent ones, to be sure -- but they are far, far outnumbered by the bad ones. I have found the vast majority of next-year InfoSec predictions to be too broad, too bet-hedging, and/or too grounded in obvious trends.
So in my own cybersecurity predictions for next year, I have tried to go the opposite route -- offering specifics, even on outcomes I consider only slightly likely.
Previously, in Part 1 of this prediction series, I predicted that the full force of the gradually building wrath of the FTC will be visited upon an IoT device maker next year. (See: My Cybersecurity Predictions for 2018, Part 1: Following Trends & the FTC.) Then, in Part 2, I predicted that -- for all of the talk and fear about GDPR -- relatively little will wind up happening next year after GDPR comes into effect. (See: My Cybersecurity Predictions for 2018, Part 2: GDPR Hype Is Hype.)
Now, in Part 3 of my 2018 cybersecurity prediction series, I turn away from regulations and regulators -- and instead issue a forecast on a matter of life and death.
2018 Prediction No. 3: An operator or passenger of a traditional, non-autonomous vehicle will be killed in an accident involving a self-driving car. The self-driving car will be deemed by the insurance companies to be "not at fault."
I really hope this prediction does not come true -- in 2018 or ever. But I have a bad feeling about self-driving cars and society's approach to them.
Autonomous-vehicle technology is far from perfect. At an AI-technology event this past spring in Boston, the MassIntelligence Conference, MIT computer science professor Sam Madden spoke about how small, deliberate distortions to individual pixels can confuse machine-learning models -- referring to research demonstrating that machine learning can be fooled into thinking, say, that pictures of a dog are pictures of an ostrich. More relevantly, Madden told the audience that self-driving cars are similarly fooled. For instance, he stated, various hand gestures made by people on the street can fool a self-driving car into thinking that a squirrel ran in front of it.
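To make the pixel-level fooling Madden describes a bit more concrete, here is a minimal sketch of the idea in Python. Everything in it is an assumption for illustration's sake: the toy logistic-regression "dog vs. ostrich" model, its random weights, and the perturbation budget are all made up. It is not the method used in the research Madden cited, nor anything resembling a real self-driving perception stack -- just a demonstration of how a small, signed nudge to every pixel can flip a classifier's decision.

```python
# Toy illustration (assumptions: a made-up logistic-regression "dog vs. ostrich"
# classifier with random weights -- not any real vision system).
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 100 * 100                 # pretend 100x100 grayscale image, flattened
w = rng.normal(size=n_pixels)        # hypothetical "trained" weights
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def p_ostrich(x):
    """Probability the toy model assigns to the 'ostrich' class."""
    return sigmoid(w @ x + b)

# An image the model confidently calls 'dog' (probability near 0).
x = rng.normal(size=n_pixels) * 0.1 - 0.005 * np.sign(w)
print(f"original   P(ostrich) = {p_ostrich(x):.4f}")

# Fast-gradient-sign-style perturbation: nudge every pixel a tiny amount in the
# direction that most increases the loss for the true ('dog') label.
grad_x = (p_ostrich(x) - 0.0) * w    # gradient of the logistic loss w.r.t. x
eps = 0.01                           # per-pixel perturbation budget
x_adv = x + eps * np.sign(grad_x)

print(f"perturbed  P(ostrich) = {p_ostrich(x_adv):.4f}")
print(f"largest per-pixel change = {np.max(np.abs(x_adv - x)):.3f}")
```

The point of the toy: in a high-dimensional input, thousands of individually imperceptible per-pixel changes add up to a decisive swing in the model's score -- which is roughly why a carefully chosen distortion, or even a well-timed hand gesture, can push a perception system to the wrong conclusion.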
Something similar appears to have happened in the case of Joshua Brown; last year, the apparently carefree autonomous-car operator became the first person killed in a crash involving a self-driving Tesla after -- as the automaker put it -- "[n]either autopilot nor the driver noticed the white side of the tractor-trailer against a brightly lit sky, so the brake was not applied." Moreover, Brown has not been the only operator of an autonomous Tesla to be killed after a problem with the self-driving technology.
In both incidents, the operator was loudly alleged to be more at fault than the technology. Even in non-fatal accidents involving self-driving cars, the other -- human -- parties have routinely been deemed at fault for insurance-company purposes. But insurance companies don't necessarily operate in the real world. Yes, it is technically lawful to power through a yellow light at 38mph, without paying too close attention, at a dangerous 40mph intersection with imperfect visibility -- but most human drivers know better. Yes, it is legally mandated to stop immediately at a crosswalk when a pedestrian is standing there patiently waiting to cross -- but from a practical perspective, most human drivers going close to the speed limit on a fast, busy thoroughfare know better than to slam on their brakes in traffic -- risking injury or death to themselves and others behind them. (When it comes to crosswalks, human drivers probably also don't register quite so many false positives in identifying pedestrians waiting to cross.)
Nevertheless, technophiles are crying "full speed ahead" for self-driving technology. They espouse the message that "we need to be okay with self-driving cars that crash," arguing that technology cannot be expected to be perfect because humans aren't perfect. This, however, misses the point -- that a world full of self-driving cars is a world lacking in individual agency.
Worse, "smart" cars -- even the non-autonomous ones -- are already rife with enough vulnerabilities that could cause a black-hat attacker to wet him- or herself with glee. (See Law Comes to the Self-Driving Wild West, Part 2 and Autonomous Cars Must Be Secure to Be Safe.) Time and again, security researchers have exposed grossly dangerous vulnerabilities in the modern connected car -- and, time and again, the industry has pooh-poohed such findings. Self-driving cars are an even bigger target -- particularly because they can be so demonstrably fooled via their machine-learning avenues.
So-called progress, however, does not tend to slow. Death and destruction are coming the way of the self-driving car -- and, inevitably, the way of those in the way of the self-driving car.
Ask not for whom the autonomous car honks; it honks for thee.
— Joe Stanganelli, principal of Beacon Hill Law, is a Boston-based attorney, corporate-communications and data-privacy consultant, writer, and speaker. Follow him on Twitter at @JoeStanganelli.