Posted: Monday, August 7th, 2062
Three pedestrians were killed Sunday when a self-driving car’s operating system was compromised outside a weekend farmers’ market in Weston, a wealthy suburb of Boston. Two local teens in the car were also treated and released with non-life-threatening injuries. Witnesses said the vehicle did not slow as it mounted the sidewalk and struck the victims, coming to rest only when it hit a line of heavy hedges at the end of the block. The names of the deceased are being withheld pending notification of next of kin, and a Boston Metro press officer declined to identify the teens.
The officer did confirm that the teens were “on the younger side of teenage” and are the subjects of an investigation. No charges have been filed at the time of this report.
The teens’ attorney released a short press statement suggesting the car driving the teens had been hacked by an unknown party.
“It’s entirely possible the vehicle was hacked,” Boston Metro Chief of Police Esmerelda McLeod said in a press conference this afternoon. “On the other hand, there have been incidents of individuals deliberately using ‘spoofing’ programs to subvert self-driving software and enable manual driving from a pocket tablet or phone. We have a very capable data forensics team working on the car’s systems to discover the truth.”
The three deaths bring this year’s count of vehicle-related deaths in Massachusetts to 25, slightly above the generally accepted one-eighth benchmark relative to vehicular deaths in the bloody pre-mandatory-autodrive era.
The self-driving car is coming. As quickly as the automobile replaced the horse in busy city centers where a spooked horse would present a public danger, the self-driving car will replace the manually driven car in those same places and for the same reason.
It has been discussed for some time that a computer-operated vehicle is vulnerable to hacking, malware, viruses, and whatever else in the realm of scary things that make your computer go AIIIIIIIEEEE and stop working the way it’s supposed to. Even now, with vehicles not yet driven by software but with many of their systems controlled by it, researchers have demonstrated that a WiFi-equipped vehicle is vulnerable to hacking.
And of course, like any computerized device, vehicles are already vulnerable to knowledgeable users subverting their programs.
People worry over this sort of stuff. And it’s worthwhile to worry about. A vehicle out of control, self-driving or not, is dangerous. Deadly.
And yet, over a million people die yearly in automobile accidents. Thirty thousand-ish of those are in the United States. Most of those deaths are caused by driver error or driver misuse. I went over the subject not too long ago in a post about a worker having to take a sick day because his or her car had come down with a virus.
So before I retread that same ground too heavily, I’ll just say that I bet self-driving cars will still kill people. I bet the first few times it happens there will be a public outcry and great consternation. And I bet that in the end, self-driving cars will still kill WAY FEWER PEOPLE. And once the last people who remember how often people died in manually driven cars are gone, I bet there will be a cohort of “manual driving truthers” who will protest that history is misrepresented and lobby for “safer” human-controlled driving.
Stick around for a century or so, you’ll see.
(This post originally appeared on my Patreon page three days before it appeared here. Even one slender buck pledged per month gets you my fiction & writing posts 3 days early and ebooks 30 days before they’re released and FREE regardless of what I charge elsewhere. Woo-hoo!)
So, about the actual post: someone tampering with a self-driving car’s software or firmware is already a much-discussed concern. Most of the articles I’ve seen have explored the possibility of using a vehicle’s own WiFi access point to get in and “hack” it, and my understanding is that this has already been done in at least one controlled experiment. A hacker could lock or disable the brakes, affect the steering, cause the vehicle to see “ghost” vehicles or become blind to actual vehicles or pedestrians, ignore speed limits, ignore traffic signals (which, once self-driving vehicles become ubiquitous, will likely be ‘visible’ only to the vehicles themselves in areas closed to manually driven ones), or… well, you get the point. The possibilities are extensive.
And, of course, there are other possibilities that come to mind. They’ll have to be dealt with as well as possible, just like the hacking problem, as self-driving vehicles become more common.
Drivers might alter vehicles’ software themselves. This will likely be illegal, and the results will range from harmless to extremely dangerous. Cars that drive themselves will likely have no human-accessible controls like a steering wheel or brake and accelerator pedals; one likely illegal mod would be to provide controls via touchscreen or a videogame-like controller. Mods might allow vehicles to exceed speed limits, open doors while driving, alter pollution controls (looking at you, naughty Volkswagen), flash rude messages to other drivers on a variable-opacity touchscreen windshield, and who knows what else. Once the actual vehicles are here, we’ll discover all sorts of things we haven’t thought of yet, as with just about every other piece of tech we’ve come up with.
We’ll want advertising blockers for our cars by and by, too. I can’t imagine advertisers won’t be happy to pay to have messages projected and voiced right inside your car as you drive. Imagine how quickly you’ll get tired of hearing “Would you like to stop at McDonald’s?” and “Come shop at Macy’s, 20% off all housewares today!” If the advertising deals with automakers get aggressive enough — and when have advertisers not gotten too aggressive for their own good, given the chance? — you may find yourself having to respond to a constant stream of default-yes prompts. “Stop at Taco Bell? Touch CANCEL to decline.”
Yes, we’ll need CarAdBlock.
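Purely for fun, here’s what the heart of a CarAdBlock might look like. Everything in this sketch — the prompt structure, the `default_action` field, the sponsor blocklist — is invented for illustration; no real vehicle API is implied.

```python
# Toy sketch of "CarAdBlock": auto-decline any ad prompt that would
# default to "yes" if the rider ignores it. All names and the prompt
# structure here are hypothetical.

from dataclasses import dataclass

@dataclass
class AdPrompt:
    message: str          # e.g. "Stop at Taco Bell? Touch CANCEL to decline."
    default_action: str   # what happens if the rider does nothing
    sponsor: str

def car_ad_block(prompt: AdPrompt, blocklist: set) -> str:
    """Return the response CarAdBlock sends on the rider's behalf."""
    if prompt.sponsor in blocklist or prompt.default_action == "yes":
        return "CANCEL"   # decline default-yes or blocklisted prompts
    return "IGNORE"       # harmless prompt; let it quietly time out

prompts = [
    AdPrompt("Stop at Taco Bell? Touch CANCEL to decline.", "yes", "Taco Bell"),
    AdPrompt("Come shop at Macy's, 20% off all housewares today!", "no", "Macy's"),
]
responses = [car_ad_block(p, {"Taco Bell"}) for p in prompts]
print(responses)  # ['CANCEL', 'IGNORE']
```

The key design point is the default: a polite blocker only has to intervene when silence would be treated as consent.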
And of course, there’s the hazard of malware, as the story suggests. Imagine your family vehicle being ransomwared right before a crucial work meeting. Or right before your holiday dinner gathering, when you’re bringing the main dish.
Very likely, efforts to thwart malware and illegal mods to vehicle software will be more aggressive than those directed at computers and smartphones. Penalties will be more draconian — and if they’re not at first, they soon will be after the first few vehicular injuries or deaths caused by malware, mods, or hacking.
But that won’t stop some people from creating malware for cars and so forth. There’s always someone who wants to ruin the fun.
Some think these and other hazards will prevent the self-driving vehicle from becoming popular. I don’t think that’s going to be the case at all. We are already willing to accept a MILLION WORLDWIDE DEATHS PER YEAR for our current vehicles. If malware “only” costs a hundred thousand lives yearly, there will be a public outcry. It will slow adoption by the public. But business and government will continue to pursue the option of lesser cost in both cash and lives — and that will be the self-driving vehicle.