
Public Streets Are the Lab for Self-Driving Experiments

Tesla’s relentless vision for self-driving cars has played out on America’s public roads, with its autonomous driving technology blamed in at least 12 accidents that caused one death and 17 injuries. Lawsuits and a government investigation have followed. But one question lingers: How is Tesla allowed to do that in the first place?

The answer is that there is no federal regulation to stop Tesla — or the many other autonomous vehicle companies — from using public streets as a laboratory. As long as a driver is ready to take over, the only thing that prevents a company from putting an experimental autonomous vehicle on a public road is the threat of a lawsuit or bad publicity.

In June when the National Highway Traffic Safety Administration ordered — for the first time — that car accidents involving driver assist or autonomous features be reported, many concluded that Tesla was the sole motivation. However, the order names 108 carmakers and tech companies, revealing how widespread unregulated autonomous road testing may be.

Any future regulation will be hammered out between diametrically opposed camps. On one side are safety advocates, who say autonomous driving features, like those that control speed, steering and braking, should be proved safer than drivers before they are allowed on public roads. On the other side are car and tech industry backers, who say those features cannot become safer than humans without unfettered testing in the real world.

The question facing regulators, carmakers and the public is: Does regulation make us safer, or will it slow the adoption of technology that makes us safer?

Safety proponents may disagree over what testing should be required, but they agree there should be some standard. “You can’t anticipate everything,” said Phil Koopman, an expert on safety standards for autonomous cars. “But the car industry uses that as an excuse for doing nothing.”

Test-driving an autonomous car in California. Current rules do require that a human be ready to take over. Credit: Elizabeth D. Herman for The New York Times

Early Days

While outrage over autonomous vehicle accidents has increased calls for regulation, technology has always moved ahead of the law. Just look at the introduction of the car. John William Lambert is credited with building America’s first practical gas-powered car, and with surviving the first recorded crash, in 1891.

Later, New York recorded a couple of firsts: the first crash between vehicles — a car and a bicycle — in 1896 and the first pedestrian fatality, when an electric cab struck Henry Bliss in 1899. But it wasn’t until 1909 that New York introduced its first comprehensive traffic laws. Envisioned by William Phelps Eno, those laws, the basis of traffic laws today, lagged the advent of the car accident by 18 years.

“Cars were sharing the roads with people and horses,” said Bart Selman, a computer science professor at Cornell University with expertise in tech history. “There were a lot of accidents and figuring it out along the way.”

What’s different about autonomous driving technology is how quickly and widely it arrived. “It’s fully reasonable for regulatory agencies and government to get involved in this process, due to the scale and speed in which this is happening,” Mr. Selman said.

Despite the risks, the promise of autonomous features can be seen in a study of insurance data showing that they reduce the number and severity of some accidents. Cars with forward collision warning plus automatic braking reduced front-to-rear crashes by 50 percent and front-to-rear crashes with injuries by 56 percent, for instance.

The Levels of Self-Driving

While there are no federal restrictions on autonomous testing, many states have set limits on some levels of automation. The Society of Automotive Engineers, a standards organization usually just called S.A.E., defined six levels of automation, which entail an important legal distinction.

A Level 0 vehicle is entirely manually controlled. At Level 1, an automated system may assist with a single control function, such as lane keeping or cruise control. Level 2 lets two or more automated controls work at the same time as long as a driver is ready to take over. Up to this point, there are no restrictions on driving on public roads.

At Level 3, the car can completely drive itself in some circumstances — say, on highways — but the driver must be ready to take control.

At Level 4, a car can also drive itself in limited circumstances, but without human intervention. At Level 5, the car drives itself entirely.

The distinction between the levels up to Level 2 and those above is important for legal reasons. In a crash of a Level 2 car, liability rests with the driver. For Levels 3 to 5, liability may rest with the system and the companies that make it.

But a loophole lets car companies avoid liability: carmakers themselves determine the level assigned to their systems. That may explain why Tesla sells its automated driver-assist system as Full Self-Driving Capability (charging $2,499 annually) but classifies it as Level 2. This lets developers have it both ways.

But marketing a system as Full Self-Driving has safety consequences. A study last year by the AAA Foundation for Traffic Safety put 90 drivers in a 2018 Cadillac CT6 equipped with the Super Cruise Level 2 system. Some drivers were told that the system was called DriveAssist, with training that stressed system limitations, and others were told that it was called AutonoDrive, with training that stressed system abilities.

Those who were told that the system was called AutonoDrive were less attentive, and more often took their hands off the wheel and feet off the pedals. The 31-mile route also had a curve that would cause the driver-assist system to hand control back to the driver. Notably, both groups were similarly slow to retake control.

“We’ve known for years that humans are terrible at monitoring automated technology,” said Shaun Kildare, senior director of research for Advocates for Highway and Auto Safety.

The Issues With Tesla’s Autopilot System

Claims of safer driving. Tesla cars can use computers to handle aspects of driving, such as changing lanes. But there are concerns that this driver-assistance system, called Autopilot, is not safe.

A federal investigation. The National Highway Traffic Safety Administration is looking at Autopilot’s involvement in crashes, after 12 incidents involving Teslas crashing into parked emergency vehicles. The agency has the authority to force a recall or require new safety features.

Shortcuts with safety. Former Tesla employees said that the company’s chief executive, Elon Musk, insisted that autonomy could be achieved solely with cameras despite objections from some engineers.

Driver-assistance and crashes. A look inside one 2019 crash that killed a 22-year-old college student highlights how gaps in Tesla’s Autopilot system and distractions can have tragic consequences. In another incident, in which a 15-year-old boy died after a Tesla hit a truck, a California family is suing the company, claiming the Autopilot system was partly responsible.

Tricking the system. Autopilot can be fooled to allow Teslas to drive without anybody in the driver’s seat, Consumer Reports found. The automaker also rolled out a software update that allows drivers to play video games while the car is in motion, drawing scrutiny from U.S. regulators.

The industry has addressed the attention issue with safeguards. But those systems aren’t foolproof. Videos on YouTube show drivers how to easily trick Tesla’s monitor. Even more advanced systems, like Cadillac’s, which uses a camera to ensure a driver’s eyes are on the road, can fail.

“The problem with driver monitoring, people say if you have a good camera it’s fine,” said Mr. Koopman, who was the lead author of proposed engineering safety standards for fully autonomous cars for the standards body known as ANSI/UL.

In the 1990s, he said, a system to keep truckers alert beeped if their eyes closed. The result? “They learned to go to sleep with their eyes open,” Mr. Koopman said.

Coming up with a useful test is difficult. Testing for specific tasks, like driving around a test track, makes a vehicle better only at that track, not at dealing with unpredictable drivers. The UL test is instead a 300-page list of engineering considerations, down to making sure celestial bodies don’t interfere. (Plausible: A video shows a Tesla braking for a yellow moon.)

If an engineer fulfills these requirements, “then we believe you have done a reasonable effort on trying to engineer your systems to be acceptably safe,” Mr. Koopman said. “It’s a methodical way to show use of best practices.”

Still, the engineering test alone is insufficient, he said. It is meant to be used along with other standards that cover equipment and driving proficiency.

Even comprehensive testing is no guarantee. The procedures of the Federal Aviation Administration have been held up as a model, yet it cleared the Boeing 737 Max, which was grounded for 20 months after two crashes of the airliner killed 346 people.

An autonomous car from Pony.ai in Irvine, Calif. Credit: Mike Blake/Reuters

Regulations

It is easy to see why the pioneers of self-driving tech are wary of regulation.

Their industry already faces a patchwork of state-by-state regulations, though those rules primarily require proof that a company is insured rather than that it has met any safety standard.

California is among the stricter states, with its 132-page standards document for autonomous operation covering permits, insurance, data sharing and a requirement for a driver “competent to operate the vehicle,” as determined by the company. Florida, among the least restrictive states, passed legislation that allows Level 4 and 5 cars “to operate in this state regardless of whether a human operator is physically present in the vehicle.” Ride-hailing companies testing autonomous vehicles there are required to have liability insurance.

Add to that mix the sheer number of agencies that may be involved: the Transportation Department’s online Automated Vehicle Hub lists 14 that play a role in developing automated driving systems.

A further complication is tension among agencies. The National Transportation Safety Board, which investigates accidents, has been especially vocal in calling for autonomous vehicle regulation from NHTSA, a sister agency that is broadly responsible for automotive safety standards. In a NHTSA publication, “Automated Driving Systems: A Vision for Safety,” the agency wrote, “NHTSA offers a nonregulatory approach to automated vehicle technology safety,” saying it did not want to stifle progress.
