Autonomous Vehicles Are Driving Blind

In San Francisco this month, a woman suffered traumatic injuries after being struck by a driver and thrown into the path of one of the hundreds of self-driving cars roaming the city’s streets. San Francisco’s fire chief, Jeanine Nicholson, recently testified that as of August, autonomous vehicles had interfered with firefighting duties 55 times this year. Tesla’s Autopilot software, a driver-assistance system, has been involved in 736 crashes and 17 fatalities nationwide since 2019.

For all the ballyhoo over the possibility of artificial intelligence threatening humanity someday, there’s remarkably little discussion of the ways it is threatening humanity right now. When it comes to self-driving cars, we are driving blind.

The reason is simple: There are no federal software safety testing standards for autonomous vehicles — a loophole large enough for Elon Musk, General Motors and Waymo to drive thousands of cars through. The National Highway Traffic Safety Administration regulates the hardware (such as windshield wipers, airbags and mirrors) of cars sold in the United States. And the states are in charge of licensing human drivers. To earn the right to drive a car, most of us at some point have to pass a vision test, a written test and a driving test.

The A.I. undergoes no such government scrutiny before commanding the wheel. In California, companies can get a permit to operate driverless cars by declaring that their vehicles have been tested and that the “manufacturer has reasonably determined that it is safe to operate the vehicle.”

“There’s this weird gap between who is in charge of licensing a computer driver — is it N.H.T.S.A. or the state?” asks Missy Cummings, a professor and the director of the Mason Autonomy and Robotics Center at George Mason University.

There’s an irony here: So many headlines have focused on fears that computers will get too smart and take control of the world from humans, but in our reality, computers are often too dumb to avoid hurting us.

The autonomous vehicle companies argue that despite their publicized malfunctions, their software is still better than human drivers. That could be true — after all, autonomous vehicles don’t get tired, drive drunk or text and drive — but we don’t have the data to make that determination yet. And autonomous cars make other kinds of mistakes, such as stopping in ways that block ambulances or pinning a crash victim beneath the vehicle.

Last month Representatives Nancy Pelosi and Kevin Mullin wrote a letter to the N.H.T.S.A. asking it to demand more data about autonomous vehicle incidents, particularly those in which stopped vehicles impede emergency workers. More comparison data about crashes involving human-driven cars would also help; the N.H.T.S.A. currently provides only crash estimates based on sampling.

But why can’t we go further than collecting data?

After all, A.I. often makes surprising mistakes. This year, one of GM’s Cruise cars slammed into an articulated bus after incorrectly predicting the bus’s movement. GM updated the software after the incident. Last year, an autonomous car braked abruptly while making a left turn, apparently because it predicted that an oncoming car would turn right into its path. Instead, the oncoming car crashed into the stopped driverless vehicle. Passengers in both cars were injured.

“The computer vision systems in these cars are extremely brittle. They will fail in ways that we simply don’t understand,” says Dr. Cummings, who has written that A.I. should be subject to licensing requirements equivalent to the vision and performance tests that pilots and drivers undergo.

Of course, the problem isn’t limited to cars. Every day we learn a different way that the A.I. chatbots are failing — whether by inventing case law or by sexually harassing their users. And we have long been grappling with the failures of A.I. recommendation systems, which at times have recommended gun parts and drug paraphernalia on Amazon, which restricts such items, or pushed ideologically biased content on YouTube.

Despite all these real-world examples of harm, many regulators remain distracted by the distant and, to some, far-fetched disaster scenarios spun by the A.I. doomers: high-powered tech researchers and executives who argue that the big worry is the eventual risk of human extinction. The British government is holding an A.I. Safety Summit in November, and Politico reports that the government’s A.I. task force is being led by such doomers.

In the United States, a wide array of A.I. legislation has been proposed in Congress, largely focused on doomer concerns, such as barring A.I. from making nuclear launch decisions and requiring some high-risk A.I. models to be licensed and registered.

The doomer theories are “a distraction tactic to make people chase an infinite amount of risks,” says Heidy Khlaaf, a software safety engineer who is an engineering director at Trail of Bits, a technical security firm. In a recent paper, Dr. Khlaaf argued for a focus on A.I. safety testing that is specific to each domain in which it operates — for instance, ChatGPT for lawyers’ use.

In other words, we need to start acknowledging that A.I. safety is a solvable problem — and that we can, and should, solve it now with the tools we have.

Experts in different domains need to evaluate the A.I. used in their fields to determine whether it is too risky — starting with making a bunch of autonomous cars take vision and driving tests.

It sounds boring, but that’s exactly what safety is. It is a bunch of experts running tests and making checklists. And we need to start doing it now.
