Can self-driving cars drive ethically? Nauto thinks so.

Tuesday, October 30, 2018

Inc.
Sure, Self-Driving Cars Are Smart. But Can They Learn Ethics?
By: Tom Foster

Stefan Heck, the CEO of Bay Area-based Nauto, is the rare engineer who also has a background in philosophy--in his case, a PhD. Heck's company works with commercial vehicle fleets to install computer-vision and A.I. equipment that studies road conditions and driver behavior. It then sells insights from that data about human driving patterns to autonomous-vehicle companies. Essentially, Nauto's data helps shape how driverless cars behave on the road--or, put more broadly, how machines governed by artificial intelligence make life-or-death decisions.

This is where the background in philosophy comes in handy. Heck spends his days trying to make roads safe. But the safest decisions don't always conform to simple rules. To take a random example: Nauto's data shows that drivers tend to exceed the posted speed limit by about 15 percent--and that it's safer at times for drivers to go with the flow of that traffic than to follow the speed limit. "The data is unequivocal," he says. "If you follow the letter of the law, you become a bottleneck. Lots of people pass you, and that's extremely risky and can increase the fatality rate."

Much chatter about A.I. focuses on fears that super-smart robots will one day kill us all, or at least take all of our jobs. But the A.I. that already surrounds us must weigh multiple risks and make tough tradeoffs every time it encounters something new. That's why academics are increasingly grappling with the ethical decisions A.I. will face. But, among the entrepreneurs shaping the future of A.I., it's often a topic to belittle or avoid. "I'm a unique specimen in the debate," Heck says. He shouldn't be. As robot brains increasingly drive decisions in industries as diverse as health care, law enforcement, and banking, whose ethics should they follow?

Humans live by a system of laws and mores that guide what we should and shouldn't do. Some are obvious: Don't kill, don't steal, don't lie. But some are on-the-fly judgment calls--and some of these present no good choice. Consider the classic philosophy riddle known as the "trolley problem." You are the conductor of a runaway trolley car. Ahead of you is a fork in the track. You must choose between running over, say, five people on one side and one person on the other. It's easy enough to decide to kill the fewest people possible. But: What if the five people are all wearing prison jumpsuits, while the one is wearing a graduation cap and gown? What if the single person is your child?

Consider how such dilemmas play out with driverless cars, which have attracted an estimated $100 billion in investment globally and encompass giant, established companies such as Ford, GM, and Google; giant no-longer-startups like Didi Chuxing, Lyft, and Uber; and a vast ecosystem of startups like Heck's that create everything from mapping software to cameras, ridesharing services, and data applications. Or consider those dilemmas more than some founders in this sector do. "There's no right answer to these problems--they're brain teasers designed to generate discussion around morality," a founder of a company that makes autonomous-vehicle software told me. "Humans have a hard time figuring out the answers to these problems, so why would we expect that we could encode them?" Besides, this founder contends, "no one has ever been in these situations on the road. The actual rate of occurrence is vanishingly low."

That's a common viewpoint among industry executives, says Edmond Awad, a postdoctoral associate at MIT Media Lab who in 2016 helped create a website called the Moral Machine, which proposed millions of driverless-car problem scenarios and asked users to decide what to do. "Most of them are missing the point of the trolley problem," he says. "The fact that it is abstract is the point: This is how we do science. If all you focus on is likely scenarios, you don't learn anything about different scenarios."

He poses a trolley-problem scenario to illustrate. "Say a car is driving in the right lane, and there's a truck in the lane to the left and a bicyclist just to the right. The car might edge closer to the truck to make sure the cyclist is safer, but that would put more risk on the occupant of the car. Or it could do the opposite. Whatever decision the algorithm makes in that scenario would be implemented in millions of cars." If the scenario arose 100,000 times in the real world and resulted in accidents, several more--or fewer--bicyclists could lose their lives as a result of the machines' decision. That kind of tradeoff goes almost unnoticed, Awad continues, when we drive ourselves: We experience it as a one-off. But driverless cars must grapple with it at scale.
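To make that scale effect concrete, here is a minimal back-of-the-envelope sketch in Python. The per-encounter fatality probabilities and the two lane-positioning policies are entirely hypothetical--they are not Nauto's numbers or any manufacturer's actual logic--but they show how a single fixed algorithmic choice, repeated 100,000 times, shifts expected harm from one group to another.

```python
# Toy illustration (hypothetical numbers, not any company's real model) of why
# one lane-positioning choice matters at fleet scale: each policy shifts a tiny
# per-encounter risk between the cyclist and the car's occupant.

OCCURRENCES = 100_000  # how often the scenario arises across a fleet

# Hypothetical per-encounter fatality probabilities under each policy.
POLICIES = {
    "edge_toward_truck":   {"cyclist": 0.00002, "occupant": 0.00005},
    "edge_toward_cyclist": {"cyclist": 0.00006, "occupant": 0.00001},
}

for name, risk in POLICIES.items():
    expected = {who: p * OCCURRENCES for who, p in risk.items()}
    print(f"{name}: expected fatalities over {OCCURRENCES:,} encounters -- "
          f"cyclist={expected['cyclist']:.1f}, occupant={expected['occupant']:.1f}, "
          f"total={sum(expected.values()):.1f}")
```

A human driver experiences this choice once and never sees the aggregate; an algorithm's choice is the aggregate.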

On top of that, today's artificial intelligence isn't simply a matter of precoded if-then statements. Rather, intelligent systems learn and adapt as they are fed data by humans and eventually accumulate experience in the real world. And what that means is that, over time, it's impossible to know quite how or why a machine is making the decisions it's making. When it comes to A.I. powered by deep learning, à la driverless cars, "there is no way to trace the ethical tradeoffs that were made in reaching a particular conclusion," bluntly states Sheldon Fernandez, CEO of the Toronto-based startup DarwinAI.

And what data a system learns from can introduce all kinds of unexpected problems. Fernandez cites an autonomous-vehicle company that his firm has worked with: "They noticed a scenario where the color in the sky made the car edge rightward when it should have been going straight. It didn't make sense. But then they realized that they had done a lot of training in the Nevada desert, and that they were training the car to make right turns at a time of day when the sky was that color. The computer said, 'If I see this tint of sky, that's my influencer to start turning this direction.' "
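One way teams try to catch that kind of spurious cue is to check whether a nuisance attribute logged alongside each training sample--here, a hypothetical sky_tint tag--predicts the steering label almost perfectly. The sketch below uses synthetic data and is not DarwinAI's or any vendor's actual tooling; it only illustrates the idea of auditing training logs for accidental correlations.

```python
# Hedged sketch: flag nuisance attributes that nearly determine the action label.
from collections import Counter

# Each record: (sky_tint, steering_label); imagine these come from training logs.
training_log = [
    ("desert_dusk", "right"), ("desert_dusk", "right"), ("desert_dusk", "right"),
    ("overcast", "straight"), ("overcast", "left"),     ("overcast", "straight"),
    ("clear", "straight"),    ("clear", "right"),       ("clear", "straight"),
]

by_tint = {}
for tint, label in training_log:
    by_tint.setdefault(tint, Counter())[label] += 1

for tint, counts in by_tint.items():
    total = sum(counts.values())
    label, n = counts.most_common(1)[0]
    print(f"sky_tint={tint!r}: most common action {label!r} ({n}/{total})")
    if n / total > 0.9:  # arbitrary threshold for "suspiciously predictive"
        print(f"  warning: {tint!r} almost fully determines the action -- possible spurious cue")
```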

More ethically complicated are scenarios in which, say, an algorithm used for credit underwriting begins profiling applicants on the basis of race or gender, because those factors correlate with some other variable. Douglas Merrill, a former Google CIO who's now CEO of ZestFinance, which makes machine-learning software tools for the financial industry, recalls a client whose algorithm noticed that credit risk increased with the amount of mileage applicants had on their cars. It also noticed that residents of a particular state were higher risks.

"Both of those signals make a certain amount of sense," Merrill says--but "when you put the two together, it turned out to be an incredibly high indicator of being African American. If the client had implemented that system, it would have been discriminating against a whole racial group."

Merrill has made A.I. transparency ZestFinance's calling card, but ultimately he thinks the government will have to step in. "Machine learning must be regulated. It is unreasonable--and unacceptable and unimaginable--that the people who have their hands on the things that have the hands on the rudders of our lives don't have a legal framework in which they must operate."

Consider one basic question: Should driverless vehicles protect their occupants above everyone else--even a jaywalker? To Heck, the answer is clear: "You shouldn't kill the interior occupant over an exterior person," he says. "But you should be able to accept damage to the car in order to protect the life of someone outside of it. You don't want egotistical vehicles." That's common sense, but it's still engineered software deciding whose lives matter more.

That said, Heck, ever the philosopher, sees a moral imperative to have these debates--while not slowing down the march of technology. "We kill 1.2 million people globally every year in car accidents," he says. "Any delay we put on [automotive] autonomy is killing people." All the more reason for the industry to start thinking through these issues--now.
