SAN FRANCISCO - It appears that Google has persuaded federal regulators that, in some situations at least, the Tin Man has a heart.
In a letter sent this month to Google, Paul Hemmersbaugh, the chief counsel for the National Highway Traffic Safety Administration, seemed to accept that the computer system controlling a self-driving car could be considered its driver.
The agency’s letter is certain to sharpen the debate over regulation of cars that can drive themselves, even though the technology is still probably years from becoming mainstream. The letter is also at odds with proposed rules in California, where much of the autonomous vehicle research is taking place.
In a setback to Google’s autonomous car efforts, the California Motor Vehicles Department issued draft regulations in December that would require a human driver to remain “in the loop” in a self-driving car. In other words, someone with a driver’s license should be prepared to take over at any moment.
“If driverless cars dramatically reduce accidents, as it appears they will, then speeding up their adoption is good,” said Wendell Wallach, a Yale ethicist. But he added that the N.H.T.S.A. letter “creates the illusion that by declaring self-driving cars the equivalent of human drivers, we have resolved the broader societal challenges.”
There is no consensus within the automotive industry about the ultimate role of human drivers in the face of rapid progress in artificial intelligence technologies. There is also uncertainty about whether the technology will soon be able to drive a car more safely than a human can.
While much of the industry has committed to developing autonomous technologies that assist drivers, Toyota last year announced a $1 billion research laboratory in Palo Alto, Calif., adjacent to Stanford University, and one in Cambridge, Mass., next to the Massachusetts Institute of Technology, both intended to focus on artificial intelligence that helps human drivers rather than on fully autonomous vehicles. The industry has also begun to deploy a variety of automation systems as safety features, like lane keeping and so-called “traffic jam assist.”
Mr. Hemmersbaugh of the traffic safety agency was responding to a Nov. 12 proposal from Google for a self-driving car designed without conventional controls such as a steering wheel, brake pedal or accelerator. The prototype, which Google began testing last year, is a low-speed vehicle that could perform taxi and possibly delivery functions automatically in crowded urban settings.
The company switched the focus of its self-driving car program after deciding last year that it could not solve the so-called “handoff” problem, in which a human driver must take control of the car in an emergency.
Google began testing a fleet of cars in 2010, using two professional drivers to oversee the operations of the computer systems that controlled vehicle navigation. In 2014, however, the program was expanded to permit some of the company’s employees to commute using the autonomous cars. The company then observed distracted driving behavior, up to and including passengers falling asleep.
“Google has long taken the position that the most dangerous thing on the roads is a human driver” because of distracted driving, driving while intoxicated and noncompliance with the law, among other issues, said Ronald Arkin, a roboticist at the Georgia Institute of Technology.
The N.H.T.S.A. letter, which was posted on the agency’s website and reported by Reuters on Tuesday, is not a complete endorsement of Google’s position. The next step, the letter said, is determining how the self-driving car “meets a standard developed and designed to apply to a vehicle with a human driver.”
The legal challenges that artificial intelligence will pose have become more complex as technology has advanced. It was once fashionable to say that machines would do only exactly what they were programmed to do. And if the human programmer made an error, such as misplacing a decimal point, that would be expressed in some incorrect behavior on the machine’s part.
However, recent progress in artificial intelligence has largely been made with so-called “deep learning” algorithms. This is a branch of machine learning that is based on software composed of multiple processing layers, each with its own complex structure. The programs are “trained” by exposing them to large data sets. They are then able to perform humanlike tasks, such as categorizing visual objects or understanding speech.
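To make that description concrete, here is a toy sketch, in Python with NumPy, of the idea the article describes: a network built from multiple processing layers of weights, “trained” by repeated exposure to a small data set rather than by hand-written rules. This is purely an illustrative example of the technique in general; it has no connection to Google’s actual driving software, and all names in it are invented for the demonstration.

```python
import numpy as np

# A tiny two-layer network trained on the classic XOR problem:
# no rule for XOR is ever written down; the behavior emerges
# from adjusting the layers' weights against example data.
rng = np.random.default_rng(0)

# Training data: four input pairs and their XOR labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two processing layers, each with its own weights and biases.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    """Pass inputs through both layers."""
    h = sigmoid(X @ W1 + b1)      # first processing layer
    out = sigmoid(h @ W2 + b2)    # second processing layer
    return h, out

def loss(out):
    """Mean-squared error against the labels."""
    return float(np.mean((out - y) ** 2))

_, out0 = forward(X)
initial = loss(out0)  # error before any training

# "Training": repeatedly nudge the weights to reduce the error
# (gradient descent with backpropagated gradients).
lr = 1.0
for _ in range(5000):
    h, out = forward(X)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X)
    b1 -= lr * d_h.mean(axis=0)

_, out1 = forward(X)
final = loss(out1)  # error after training
```

The point the article makes about opacity shows up even here: the trained behavior lives in dozens of numeric weights with no individually readable meaning, and in real systems those weights number in the millions.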
At this point, researchers admit that they do not completely understand how the deep learning networks make decisions.
This will confront courts with a vexing challenge in the event of accidents caused by an A.I. system: who will be blamed when it is not clear whether the error was made by a human or a machine?