Return Home

Driverless Dilemma

Tuesday, September 26, 2017 - 05:55 PM

Most of us would sacrifice one person to save five. It’s a pretty straightforward bit of moral math. But if we have to actually kill that person ourselves, the math gets fuzzy.

That’s the lesson of the classic Trolley Problem, a moral puzzle that fried our brains in an episode we did about 11 years ago. Luckily, the Trolley Problem has always been little more than a thought experiment, mostly confined to conversations at a certain kind of cocktail party. That is, until now. New technologies are forcing that moral quandary out of our philosophy departments and onto our streets. So today we revisit the Trolley Problem and wonder how a two-ton hunk of speeding metal will make moral calculations about life and death that we can’t even figure out ourselves.

This story was reported and produced by Amanda Aronczyk and Bethel Habte.

Thanks to Iyad Rahwan, Edmond Awad and Sydney Levine from the Moral Machine group at MIT. Also thanks to Fiery Cushman, Matthew DeBord, Sertac Karaman, Martine Powers, Xin Xiang, and Roborace for all of their help. Thanks to the CUNY Graduate School of Journalism students who collected the vox: Chelsea Donohue, Ivan Flores, David Gentile, Maite Hernandez, Claudia Irizarry-Aponte, Comice Johnson, Richard Loria, Nivian Malik, Avery Miles, Alexandra Semenova, Kalah Siegel, Mark Suleymanov, Andee Tagle, Shaydanay Urbani, Isvett Verde and Reece Williams.

Nick Bilton, Joshua Greene, Raj Rajkumar and Michael Taylor

Produced by:

Amanda Aronczyk and Bethel Habte



Comments [106]

DonnaK from California

1. The trolley question misses an essential third option: throwing yourself off the bridge to stop the train.

2. The obvious solution to the engineering dilemma is allowing each primary passenger (who would have been the driver) to make their own selections as to how to prioritize who lives and who dies. Each person's key would have that data programmed into it (in case family members wouldn't make identical choices). If you have a car-share membership, your digital ID would be associated with your preferences, or preferences would be preset by the company that runs the service (disclosed in the fine print of the terms-of-use acknowledgement). As a result, liability would be transferred to the primary user rather than the car manufacturer.

Mar. 11 2018 11:08 PM

Apologies if someone already mentioned this. Aren’t there significant differences among the three hypotheticals? In the first trolley problem, there are only two choices, and the actions have a definite result. In the second trolley problem, the actor is not confined to the train, so there is a third option: you jump to the tracks below rather than killing someone else. Also, the act of pushing someone or jumping yourself does not directly move the train as the lever does, but rather possibly results in stopping the train. The baby hypothetical removes the first issue, but still has the second: the baby is not currently coughing, so killing it may do nothing at all in the case where it never starts coughing.

I definitely agree that people are more reluctant to directly act than act through a proxy, which is I think the overall point. But comparing reactions to these hypotheticals as if all variables were the same feels incorrect. Interesting episode though!

Feb. 28 2018 10:15 AM
Bethany Bell from Michigan


I teach English language learners, and I would love to have them listen to this podcast alongside a reading we are doing for a persuasive-writing unit. The problem is that I could not find a transcript for this episode. Do you not provide one, or was I not looking in the right spot? Let me know; without it, this will be difficult to use in my classroom.


Bethany Bell

Jan. 29 2018 01:55 PM
Bill Kurland from NY

It seems most likely that the jurisdiction in which the car is licensed will control what decisions the software can make and how they are made. Surely this will reflect that culture's values and will change over time and with experience. And I dare say not every society will make the same choices.

I am also skeptical that the adoption of driverless technology will be anything like as fast as the industry would have us believe. In the real world, we are not taking our personal flying cars on day trips to visit fusion power plants. There are still a lot of technical, but also social and economic, problems to be worked through as the technology tries to find its market - assuming these problems actually have satisfactory solutions in the near term. It's more likely to be two or three decades rather than two or three years before these systems are common outside of their test locales.

We should also keep in mind that machines can sense, process, evaluate and act on information exponentially faster than humans. And from the early days these cars will be sharing information, not about the race, age, wealth and looks of their passengers, but about the speed, acceleration, direction, proximity and mass of every object close enough to come into contact with them given their own current speed, position, direction of motion, etc., updating that information thousands of times each second - probably with two redundant, cross-checking computers to confirm the conclusions. In other words, any technology that is safe enough to actually deploy on a large scale will be many, many times safer than even the safest manual system imaginable. It will have to be, or the public won't accept it.
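The "redundant, cross-checking computers" idea the commenter describes can be sketched in a few lines. This is a toy illustration only, with made-up planner logic and numbers; no real autonomous-driving stack is implied:

```python
# Two independently written planners evaluate the same situation;
# an action is committed only when their conclusions agree, and any
# disagreement falls back to the safe action (braking).

def plan_a(obstacle_distance_m: float, speed_mps: float) -> str:
    """First planner: brake if we cannot stop in the available distance."""
    stopping_distance = speed_mps ** 2 / (2 * 7.0)  # assume ~7 m/s^2 max braking
    return "brake" if stopping_distance >= obstacle_distance_m else "cruise"

def plan_b(obstacle_distance_m: float, speed_mps: float) -> str:
    """Second planner, using a time-headway rule instead of distance."""
    time_to_obstacle = obstacle_distance_m / max(speed_mps, 0.01)
    return "brake" if time_to_obstacle < 3.0 else "cruise"  # 3 s headway

def cross_checked_action(distance: float, speed: float) -> str:
    """Commit only when both units agree; otherwise fail safe."""
    a, b = plan_a(distance, speed), plan_b(distance, speed)
    return a if a == b else "brake"  # disagreement defaults to the safe action

print(cross_checked_action(10.0, 30.0))   # both planners demand braking
print(cross_checked_action(200.0, 10.0))  # both agree it is safe to cruise
```

The point of the redundancy is not speed but trust: a single faulty unit cannot commit an unsafe action on its own.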

That doesn't mean it can't fail, just that the failure rate will be low enough that few people will be fussed about why the car zigged instead of zagged. And the reason will almost inevitably be that the software attempted to save the most people possible, without regard to the people as individuals. Nothing else will be perceived as fair. And because the survivors will always outnumber the victims, their voices will prevail.

In the end, I believe, as with most everything else in a market economy, the system we implement will be a compromise between costs and benefits. Who will use a system they perceive as too dangerous or too costly? And few governments in the countries most able to produce these systems will try to force their people to accept the unacceptable. If it can be done I think it will be an enormous boon to society. Travel will be vastly safer, faster, cheaper, more efficient.

But why would anyone think that rest stops would disappear? The cars may not need to pee, but they will almost certainly still need fuel and get flat tires. Sure, some people may be able to afford owning or renting luxury vehicles with all the amenities, but not everyone will - not by a long shot. Else we'd all be driving overpriced German cars and flying private jets.

Jan. 11 2018 09:56 PM
daniel from Malaysia

I wouldn't pull the lever, I wouldn't push the guy off the bridge with my hands or with a long pole; I wouldn't do anything. I'd shout, "Hey, train coming, get off the tracks!" Then watch as the train runs into them and kills all the workmen. Because I couldn't possibly shout louder than the noise made by a train: if they didn't hear the train coming, they wouldn't have heard me.

If I do anything to sacrifice the one guy to save the five, his family would sue me into the ground. Then I would be arrested and tried for murder. Imagine the trial, " ... so you admit to murdering the victim by pushing him into the path of the train. You claimed that it was to save the workmen on the track, but how could you be sure that they wouldn't have heard the train as it came closer and moved out of the way at the last minute? After all, they are professional railroad workers. They have survived for decades working on the tracks, successfully not getting hit by moving trains. They only have to move 3 feet to get away from the path of the train. It's after all stuck on the rails".

About the driverless car, there's no dilemma. Let's replace the "driving computer" on the car with an actual human driver. If you're rich, a chauffeur you hired. If you're not, the driver of a taxi you hailed. They would save their own skin (and the passengers in their vehicle), even if it means ploughing into 50 kids standing at the side of the road waiting for their school bus. And they wouldn't be punished even if half the 50 kids died and the other half is maimed horribly. Not if the only alternative is to kill themselves.

Jan. 04 2018 10:04 PM
Karen from Austin

Regarding the argument at the end of the podcast about "engineered killing" vs. operatic killing: it happens all the time in the automotive industry. Think of the Pinto, put on the road with the knowledge that the fuel tank could rupture, catch fire and burn innocent drivers and passengers. It was a financial decision based on a cost/benefit analysis: the profit from keeping it on the assembly line versus redesigning it or settling the lawsuits that were sure to result from injury and death.

Dec. 28 2017 06:20 PM
Vlad from Toronto, Canada

Hi there,
I'd like to mention something that somehow everyone missed - acceptance of risk.

For instance, let's talk about the two trolley problems. There are more differences there than anyone mentioned.
Here's the logical thought that goes through my mind:
The workers have all accepted tremendous risk, and they are most definitely insured by their employer.
So, the workers are aware of safety practices, have agreed to take on the risk, are paid for that risk and are insured in case of a catastrophic event.
Meanwhile, the person on the bridge has not agreed to put themselves in danger, and is probably not insured against such an event.

To take that to the driverless dilemma - and I can't believe all the geniuses who do the AI have not come to this.
Why not calculate the risk that has been taken on by people?
For example, let's say I am driving down the street. Suddenly I realize my car is heading straight for two people crossing the street illegally, and the only way to save them is to swerve into one person walking down the sidewalk. Here's my train of thought:
The two people crossing the street have accepted huge amounts of risk by a) crossing illegally and b) failing to confirm whether it is safe to cross the street illegally.
Meanwhile the one person on the sidewalk is doing something they assume is safe - walking down the sidewalk.

Now to the actual driverless problem - save the occupant or the people.
The occupant has invested (whether they own or hire) in a vehicle that is deemed very safe, and so has spent money to mitigate the risk, both to themselves and to others around them.
Now where are the people? How much risk have they assumed by being where they are?
Are they somewhere they shouldn't have been? If the car can figure out what kind of person is in front of it, maybe it should also be able to figure out whether those people have accepted the risk of being where they are.

Risk calculation - while complex - isn't out of reach of computers. If the car can calculate the chances of survival of everyone there (or, to be exact, the risk of death of everyone involved), I'm sure it can figure out the risk everyone involved has assumed up to this point.
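The risk-weighting idea described above could be sketched as a tiny scoring function. This is a toy version of the commenter's proposal, not any real system; all probabilities, risk values and names are invented for illustration:

```python
# Each party gets a harm score combining their probability of death with
# how much risk they voluntarily assumed. A party who took on more risk
# (e.g. a jaywalker) weighs less than one who took on none.

def harm_score(p_death: float, assumed_risk: float) -> float:
    """Lower assumed risk makes the same probability of death weigh more."""
    return p_death * (1.0 - assumed_risk)

def choose_path(paths: dict) -> str:
    """Pick the path whose total weighted harm is smallest.
    Each path maps to a list of (p_death, assumed_risk) pairs."""
    return min(paths, key=lambda name: sum(harm_score(p, r) for p, r in paths[name]))

# Two jaywalkers (high assumed risk) vs. one sidewalk pedestrian (none).
paths = {
    "straight": [(0.9, 0.6), (0.9, 0.6)],  # hits both jaywalkers
    "swerve":   [(0.9, 0.0)],              # hits the sidewalk pedestrian
}
print(choose_path(paths))  # risk weighting favors "straight" here
```

Note how the weighting can flip the raw headcount: two people who assumed heavy risk can score lower than one who assumed none, which is exactly the intuition the comment argues for.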

Disclosure - I am not a lawyer. I work in IT dealing with security, and risk management is an integral part of my job.

Dec. 21 2017 10:35 AM
Scott from Boston, MA

I recently listened to this episode and was really saddened by how poorly thought out the whole premise was. I used to be a really big fan of the show, but it's just taken a quality nosedive. A driverless car should never get into a situation where it has to make any life-or-death decisions. If it did, then that would be a mistake, or exactly what we have right now: an accident. You don't program an algorithm into an accident, because they don't happen on purpose. The super-short disclaimer that this doesn't happen very often doesn't make up for the obviously fear-mongering premise of the whole show. I'm not afraid of self-driving cars, but I won't drive on New Year's Eve because of all the drunk drivers out there. I'm taking Radiolab off my podcast subscription list; this episode jumped the shark for me.

Dec. 19 2017 07:37 PM

It's a lawyer problem, not an ethicist's, not an engineer's.

Dec. 12 2017 07:34 PM
Olavi from Estonia

I like the sound effects in the old bit. They bring the story to life.

Dec. 09 2017 07:22 AM
Marcus Hast from Sweden


I have to say this episode frustrated me as I was kind of expecting more from people who spend time considering this (particularly wrt the driverless car version). It might be amusing to ponder as a thought experiment but from an engineering perspective it's gibberish.

As other people have commented previously, the entire premise is faulty. The correct answer is that a driverless car should never get into a situation where it has to choose. If there are people walking by the road who might have a fit of suicide and run across the street, then the car has to slow down or alter its course so it can stop safely. It is also never allowed to follow another vehicle so closely that it can't stop in case of an emergency (which human drivers are also taught, but may not actually follow).

Also, network communication is not instantaneous. If the car has time to have a debate with other cars and banks, and to query Netflix about what the people involved have been watching lately, it has time to just stop by using the brakes.

On the fMRI revisit, I was mostly surprised that the conclusion was that morality is something purely innate. All memory and everything that defines us as individuals is also part of the brain, and surely that shows up too in a scan?

(BTW, a potential future topic, if you like fMRIs, is research where they use machine learning to deduce what people are thinking based on their brain scans. Pretty neat stuff. Eg

Nov. 25 2017 01:24 PM

This is why I'll never ever ride in a driverless car. Never ever ever. Period.

Nov. 22 2017 03:31 PM
ben from Huntsville, AL

Am I the only one who feels it's obvious that pulling a lever that redirects a train toward a person still leaves him a chance to survive BY ACTING ON HIS OWN FREE WILL? It removes the subject from the death by one degree. If you, personally, with physical touch, push a man off the bridge, there is a direct correlation between your action and his death.

If you pull a lever that directs a train away from numerous people, although toward one other, there is still time and distance between the train and the person who may (or may not) get hit.

If we are to rephrase it and say that we are redirecting a gunman's aim from a group of people toward a slightly more sparse group of people, we have the same scenario. If, however, we are pulling the trigger to kill one man to appease the God of Moral Quandary instead of detonating a bomb to kill dozens - placing the cause of death in the hands of the subject (like pushing the man from the bridge) - the action becomes more objectionable.

Nov. 19 2017 01:21 PM
Johnny from Lafayette, IN

Hey guys,

I have a lot of respect for how you create these stories; very well done. This one in particular I had some struggle with. I appreciate the approach you took to share why we SHOULD have driverless cars, but I feel that you missed a key factor in why we SHOULDN'T - human meddling. For example, if we get these driverless cars with the programming to calculate who should live and who should die... what's to stop someone from figuring out how to reprogram the car to suit their own preference? From another perspective - assuming the program determines that it will kill the driver if there are many pedestrians at risk... what's to stop a group of people (particularly a group of radicals) from using that to target people and kill them? It seems to me the only way we can safely incorporate such a car into our society is if we have separate roads for them, where they would not encounter pedestrians. Even then, there's a risk of someone meddling with the programming and putting lives at risk... Ultimately, I think our global society is simply not ready for this technology... humans are not mature enough as a whole for this.

Nov. 17 2017 05:10 PM
Patrick Harvey from Saint Louis

What happens two hours later? This.

Nov. 13 2017 05:19 PM
Adam from New Orleans

I was very surprised to hear the permutations of the trolley problem that the researcher has been pursuing for 10 years were all focused on the act of pushing, rather than investigating how your relationship to the guy on the bridge (in either scenario) would affect your actions. Would you pull the lever if that was your mother? Would you feel you had made the wrong decision if you later found out that was a distant cousin you had never met? Would you push the guy if that was 5 family members down below you?

There is a very simple underlying rule here that the interviewers and researchers didn't seem to consider: self-preservation, within which I mean to include the desire for your own genetic material to be replicated. I believe there is a constant in all humans, the desire for self-preservation, which is necessarily coupled with one variable: what you consider to be "yourself".

You look down on the tracks, you compare 1 person to 5 people, the decision is easy. I think the act of being with someone on the bridge brings them just enough closer to you to make you think twice, and start to weigh this as a "hurting those who are connected to me is bad" scenario.

It follows that the Buddhist monk group would push the guy. Buddhism teaches an expansion of self, so let's call that an expansion of self-interest. They continue to be concerned with the self-preservation of the human race (they did take an action after all, unlike Meursault), but have expanded their mind past the narrowness of "closer to me physically makes this person more of myself".

Final note regarding the baby question, which I remember thinking when I first heard this segment years ago: How did you not ask the respondents if they had children? It seems like an insanely relevant piece of information. In the original version Jad said he would kill the baby, and Robert said he wouldn't. They have cut this out in the rebroadcast. Just wondering, if Jad has changed his mind on this because he has since had a son, how are the survey results mentioned relevant at all without controlling for that variable?

Nov. 13 2017 12:53 PM
Rian from Seattle

The Trolley Quandary:
The reason this is a quandary is that our moral intuition isn't necessarily correct. It's an ethical optical illusion, and we don't immediately see pulling the lever as murder.
It is just as ethically wrong to pull the lever as it is to throw the man off the bridge.
Either way, you are making the decision to end one person's life in order to save the lives of others, and we don't have the right to do this.
Strangely enough, in the second scenario, one very important option is left out. You can throw yourself onto the tracks. This option is the ONLY OPTION that YOU have the RIGHT to make. If you feel that the 5 lives are worth your own, then have at it; you will be hailed as a hero posthumously (or an idiot if your plan fails to save anyone).
I'm actually quite surprised that they overlooked this; it is actually a pretty simple ethical dilemma if you look at it cold-heartedly.

The Crying Baby:
This one breaks my heart, because I don't think I could ever hurt my own children (at least willingly). I suppose that I would hold out as long as possible, holding on to hope, trusting in the possibility that the little one won't cough. But I'd be ready to muffle that cough or cry as needed. Just like in the trolley quandary, I have no right to sacrifice the lives of the others in order to save my own or my child's. I suppose I don't really have a moral right to sacrifice my child either. But if it is inevitable that the child will die either way (in the case of this hypothetical situation), the only option is to do it yourself; the correct answer is to smother the child. It is as if, in the trolley quandary, both the group of 5 and the single man are about to be hit, and there are two switches: one will save the group, and one will save the man, but you can only flip one switch. Here you can ethically make the decision to save the group. Or even to save the man if you want, or if you know him; you don't have an ethical obligation to save either group, but you may be criticized whichever you choose.

The Driverless Car:
You always sacrifice the pedestrian. Sorry pedestrians. There is no scenario that I can think of that the passenger is doing something wrong (because they're not driving). And also, assuming that the car is driving correctly (and it will be programmed to follow the rules of the road), 99% of all scenarios will have the pedestrians doing something incorrectly, either intentionally or unintentionally. It is ethically wrong to punish the passenger for the mistake of the pedestrian. Even if there are multiple pedestrians, or a group of children, it is unethical to sacrifice the passenger. JUST like it is unethical to push the man over the bridge to stop the train/trolley.

Oct. 24 2017 04:24 PM
Ben from Reno

I don't know why I didn't see this episode till yesterday... I might as well chime in with my own takes. As for the Trolley Problem, the obvious issue with the thought experiment is that it requires that we put aside logic. Why do we believe that throwing the man will work? Couldn't the trolley just barrel through him into the next 5, meaning you killed him for no reason? What if he struggled, and ended up landing in a spot where he died needlessly, simply clipped by the trolley? What if the guys were actually going to jump out of the way at the last second? It's very hard to imagine a scenario in which you are certain that hurling a guy onto the track will result in the others being saved. In this scenario you are forcing a death to occur that was certainly not going to happen without your intervention, and based solely on the hope that it will work out. The end math isn't the same, as in reality the results are stochastic until you start murdering people on your own.

As to the car, again this is a really easy answer. The driver is committing no wrong by sitting in the self-driving car, while the pedestrian at risk of being hit by a car obeying every rule of the road is committing a wrong. Whether that be wandering into traffic while looking at a phone, chasing a ball into the street, or simply stumbling over an obstacle, they are the cause of the accident, and the car, while it should attempt to minimize their injury, should always put the occupant, who has done nothing wrong, first. Also, since humans are not predictable, you don't know if the dangerous course would still result in the pedestrian dying as they leap in the wrong direction, only to still get hit. Trying to swerve away to save a child while killing the 80-year-old driver may sound laudable, but when it turns out to be a small suicidal adult who just killed their family, that decision looks terribly wrong. So take out the guesswork and apply logic based on who's at fault, removing all the moral guesswork.

Oct. 24 2017 03:06 PM
Chase from Austin, TX

I think something worth noting about the thought experiment:

In one scenario, they are being asked to put someone in harm's way, whether they push him or pull a lever to make him fall onto the track etc.

Whereas the other scenario redirects the harm. Making the train change course.

I wonder if this is a potential confound...

Oct. 21 2017 12:25 AM

Pete Costello: You're probably thinking of "A Taste of Armageddon" from the original series of "Star Trek":

In order to prevent collateral damage, the civilizations in that episode have computers calculate where bombs WOULD have hit and whom they would have killed. Those marked as KIA report for disintegration. It's an extension of the then-recent neutron bomb: valuing property over people. In the real world, the neutron bomb was tested in the 60s but not produced en masse until 1974.

It's a similar principle. That always bugged me about that 'trolley problem': is there any reason you CAN'T jump yourself? Heck, get the fat guy's shirt and use it like a parachute... there's a chance you might survive that way. And in the other scenario, how many people are on the train? If it's a manual switch then jamming it in-between the positions would lead to the train jumping the track, probably sparing the workers but potentially killing whoever is on the train... but it's less likely to result in death than a train hitting someone full-bore, and hitting someone may have made the train jump the track anyway.

As many commentators have pointed out, it's a false dichotomy, which was one point of the Star Trek episode: the options were never 'carry out a real war with collateral damage' and 'carry out a fake war and kill those who would have died'. Not having a war at all (or just moving the 'dead' to another city, even) were options.

Oct. 18 2017 04:26 PM
John from Keller, TX

We in the developed world have already made a multitude of calculated decisions that sacrifice life for various reasons. One of the most frequent reasons is personal convenience.

My uncle, for much of his career, reconstructed car wrecks in which a death had occurred. His experience and study of these situations led him to tell me that advances in car design now make all crashes with the vehicle traveling at 35 mph or less survivable (even head-on collisions). I don't know if he is right or not, but it doesn't really matter, because there is definitely a speed at which this would be true. So we have, by law, valued speed of transportation over safety, to the point that we are willing to sacrifice tens of thousands of lives a year in the US for the convenience of getting to our destination faster... This is thought-out, programmed-in law that sacrifices human lives. To me it's no different than programming a car to maintain the safety of the driver at the risk of pedestrians, or any other calculated decision in regard to human life. We literally calculate acceptable losses into decisions all the time.

Personally, I think mixing driverless and human-driven cars is going to be the difficulty in planning and programming; a totally driverless system in the near future would be the absolute fastest and safest transportation...

In the meantime, I think all owned artificial intelligence should follow all established safety laws, while putting the life of the owner/operator as the highest priority and the lives of other humans as the second-highest priority.

Oct. 18 2017 01:29 PM
Chris Klein from Vancouver BC

Regarding the Train Dilemma: The way I think about it is that pulling a lever is non-confrontational, whereas pushing someone is committing assault. Imagine the discussion in the first scenario: "Hey guy, when I pull this lever you are going to get hit by a train."
The guy replies, "What? Why?"
"Because if I don't it will hit 5 people on the other track. Those are literally the only 2 options."
"Oh... well that sucks, but under the circumstances I would do the same thing. Go ahead. Tell my dog he's a good boy."
And then in the second scenario: "Hey guy, I have to push you off the bridge to save those 5 people from the train."
"I was just thinking the same thing about you."
"Oh. hmmm. Maybe one of us should jump."
"Yeah... I mean, do you want to jump?"
"Not really."
"Me nei-" (train runs over 5 people) "...damn. I'm gonna need a drink."
"Me too."

Oct. 17 2017 03:16 PM
Michael from New York

I saw the title of this podcast and I thought 'finally, a major media outlet is examining the impending problem of automation!'. In the beginning, that other reporter is voicing concerns about what will happen when automation replaces things in the transportation industry and I thought 'Great! Let's see what Radiolab brings to the table!'.

Imagine my surprise when what you take away from those fears is a half previously-recorded show about the bloody trolley problem. I mean, as pointed out a few times in the comments here, the trolley problem is a false equivalency in the first place, and, as also pointed out here, the actual use case of what you were discussing is vanishingly small. It seems likely that auto-drive systems won't be designed in a way that makes your imagined line of thinking even relevant.

Automation is a huge, looming and relevant problem for all society. Even if you look at transportation industry only, automation of trucking and cabs accounts for a huge percentage of the economy in the US. In ten years all those jobs are GONE. This doesn't even address jobs lost from the automation of warehousing, service industries having kiosks instead of counter employees, the automation of coding jobs that replace hundreds of college graduates that studied networking and computer programming. A very good friend of mine used to work for Microsoft and what he and his team of four people were doing was designing machine learning algorithms that were replacing hundreds of real world programmers and this was being done two years ago. This stuff is here NOW and will only become more of an issue as time moves forward.

It just seems like such a missed opportunity to shine a light on something that is relevant to everyone. Are we approaching a post scarcity economy and what does such a thing look like? Is universal basic income a solution? Will we have state sponsored education in other areas? I do see this being discussed in more fringe areas in YouTube videos and blogs but you had a real opportunity to at least bring some notice to what I view as the number one impending problem faced by the whole of the world.

What a bummer.

Oct. 17 2017 02:58 PM
Chris from Spain

One thing I am not sure was covered:

The assumption is that the car calculates the odds of killing the passenger and the odds of killing the pedestrian. But the odds will very, very rarely, if ever, be exactly the same.

So the question is not "whom do you choose?" but "what is the threshold before I choose the pedestrian?" i.e., "to what degree do I prioritize the driver over the pedestrian?"

Do I only choose the pedestrian when the odds of the pedestrian dying are double the odds of the driver dying? Triple?
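The threshold question above can be made concrete with a tiny sketch. This is a toy illustration of the commenter's idea, not any real vehicle logic; the ratio is an invented policy knob:

```python
# Rather than a binary "who do you choose", prioritize the occupant until
# the pedestrian's odds of dying exceed the occupant's by some ratio.

def swerve_decision(p_pedestrian_dies: float,
                    p_occupant_dies_if_swerve: float,
                    priority_ratio: float = 2.0) -> str:
    """Swerve (endangering the occupant) only when the pedestrian's risk
    is at least `priority_ratio` times the occupant's risk."""
    if p_pedestrian_dies >= priority_ratio * p_occupant_dies_if_swerve:
        return "swerve"
    return "stay"

print(swerve_decision(0.9, 0.3))  # pedestrian risk is 3x the occupant's: swerve
print(swerve_decision(0.5, 0.4))  # only 1.25x: protect the occupant
```

The whole debate then collapses into the choice of `priority_ratio`: 1.0 treats everyone equally, while larger values increasingly favor the occupant.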

Oct. 17 2017 05:16 AM
Barry20147 from Virginia

The scenario posed assumes perfect information. Either one dies or five die, for sure, with no alternatives such as the trolley driver applying the brakes, using a bell or horn to alert the workers, or some of them having recoverable injuries.

Real decision problems are typically loaded with less than perfect information. Driverless cars will have sensors and brakes and should be programmed to not go faster than their ability to stop prior to collision, just as people driving at night should limit their speed on surface roads by the distance their headlights can illuminate. Still, something can pop up unexpectedly, such as a ball rolling into the street that might have a young child racing after it without being aware of traffic danger. I recently heard an expert advise that if a deer crossing the road in the dark suddenly appears in your headlights, you should not swerve to avoid it but instead just apply the brakes. Swerving turns out statistically to cause more damage/fatalities due to collision with other things.

Ordinary driving does not demand full time attention of our brains. With automatic transmission and power-assisted steering and brakes, control of an automobile (i.e., keeping it in its lane at an appropriate distance behind any vehicle in front) can be done using just "muscle memory." I used to commute on limited access roads 40 miles each way to work. During my long commute I listened to recorded books. More than once I passed my turn off to my destination because my active brain was paying attention to the book rather than on navigation.

A driver practicing "active driving" tenets is one who is continuously scanning both sides of the road for potential danger and adjusting plans as to whether to swerve or brake if something happens. "Active driving" requires concentration and I find it mentally tiring to do it without lapses. It is impossible to do it while you are otherwise occupied, whether listening to a recorded book or to a phone conversation or just in a conversation with a passenger in the car.

Getting back to the trolley problem, proper planning by the workers by posting a lookout would eliminate the problem. Also, how sure is the person on the bridge that the trolley driver can't sound a horn to alert the workers? Could the person on the bridge alert them in some way himself? Is there something else that could be dropped onto the track other than a person? Finally, if he has to sacrifice someone to save many, wouldn't it be more ethical for him (or her) to jump onto the track than to push the other man?

Oct. 14 2017 01:43 PM
Chris Doeller from Baltimore MD

I find this a solution hunting for a problem. Too many pampered people want to do everything else but pay attention to their driving. Add to this the massive increase in aggressive drivers who shift lanes on a whim. There are greater problems we need to solve and having a driver-less car is not one of them. Would the same people who support auto-driving support auto planes or auto trains? And don't tell me that they already do this, because there are no planes authorized to be without a skilled pilot, on duty for the entire flight.

Oct. 14 2017 01:09 PM
Harry Widoff from Forest Hills, NY

Whatever happened to telecommuting?


Look up Baker Auto Electric.
Electric trucks from the 1900s to the 1920s were used for delivery. In their old ads they listed the weight and distance each model could carry and the delivery time.

Oct. 14 2017 12:16 PM
Bill Protzmann from Sandy Eggo, CA

There's already a real-life version of the trolley/baby mind game: flu shots. We know giving a small percentage of people a flu shot will kill them, but it's still the "recommendation" of health care agencies that everyone get the shot. AI on the road probably has smaller odds, but it's not perfect.

However, this story totally left out another dilemma facing those of us on the road: cars with AI may be able to teach their drivers how to drive better. This is a story about how Lamborghini views such things:

Oct. 13 2017 05:01 PM
ZalbaagBeoulve from Gretna

RNG. DUH!!! Problem solved

Oct. 12 2017 11:21 PM
Mel from California

Isn't this an insurance company question? Cause ultimately they're the guys paying the bills.

Oct. 12 2017 06:14 PM
Michael Brick

Read "Dawn," by Friedrich Nietzsche. He wrote all of this down 136 years ago and neuroscientists are only just now catching up to him.

Oct. 11 2017 02:19 PM
Sherry from Ithaca, NY

Hi. I'm a professor at Cornell. I found the story of the driverless dilemma fascinating. The one thing I would say about the two moral questions, however, is that they are not actually the same question.

In the first question, we are asking whether the actor would divert a train from its path toward the five people, even though that diversion would have the effect of killing one person. The one person's death, in other words, can be described as incidental or as collateral damage, because the actor's goal was simply to divert the train away from the five. Harming the one person there is not essential to the actor's objective.

In the second scenario, by contrast, harming the large man is essential to the project of saving the five people. Unlike the first scenario (in which the actor's actions would be just as effective at saving the five if the one person were not on the train track), using the large man is an essential part of helping the five. Harming him is not just a side effect of helping the five, as it is in the first, where the actor diverts the train away from the five and it results in the death of the one. Harming the large man is necessary to helping the five because the actor is using him as a way of blocking the train.

In war, it's the difference between bombing an enemy munitions plant, even though civilians are present and will therefore die (despite the bombers' wishes to the contrary), a permissible action, and dropping a bomb on civilians with the goal of killing those civilians, a prohibited war crime. Though in both cases the ultimate goal is the same (to win the war), the intermediate goals are very different: incidental/collateral harm is not the same as intended and deliberate harm. Once you see that the two scenarios are different, it is no longer a mystery that people react differently and that different parts of their brain are involved in processing the problem.

Oct. 11 2017 09:09 AM
Dave from California

The autonomous vehicle trolley dilemma has a simple solution: require the driver to decide ahead of time. The problem only exists when you try to find a universal answer to a personal question. Another similar personal question without a universal answer is whether someone should be an organ donor. Well, we have a working solution for that too: require the person to decide ahead of time.

Here's how the preference for autonomous driving behavior can be implemented. To activate autonomous driving features, the driver must input their preference:
1. self-interest mode
2. selfless altruistic mode
3. highest probability of success mode (where the car will choose between options based on their relative likelihood of successful outcome).

Preferences can be saved and remembered, or the driver can choose to be prompted, every time they operate the car, for which mode they would like to operate within. If the driver has a last-millisecond change of heart and wants to override the preference? The steering wheel and control pedals are always active to override the autonomous program.
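[Ed.: the three-mode preference scheme this commenter proposes could be sketched like so. A hypothetical illustration only; the enum names, probabilities, and action strings are invented here, and real vehicles expose no such setting.]

```python
from enum import Enum

class DrivingMode(Enum):
    SELF_INTEREST = 1    # always protect the occupants
    ALTRUISTIC = 2       # always protect those outside the car
    HIGHEST_SUCCESS = 3  # pick whichever option is more likely to succeed

def resolve(mode, p_save_occupants, p_save_pedestrians):
    """Map the driver's stored preference to an action in a forced choice."""
    if mode is DrivingMode.SELF_INTEREST:
        return "protect_occupants"
    if mode is DrivingMode.ALTRUISTIC:
        return "protect_pedestrians"
    # HIGHEST_SUCCESS: defer to the relative likelihood of a good outcome.
    return ("protect_occupants"
            if p_save_occupants >= p_save_pedestrians
            else "protect_pedestrians")
```

As the commenter argues, the moral choice then lives in the driver's stored setting rather than in the manufacturer's code, which is exactly where the liability-transfer argument comes from.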

Problematic scenarios and ways to resolve:
1. Authorized lending of car from person who last set remembered preference in car to someone else.
Resolution: the person lending the car is responsible for resetting his/her preference so the person borrowing the car can input their own. If they fail to do so, the lender is responsible for all actions taken by the autonomous program under his/her preference, and all liability falls to him/her.

2. Unauthorized operation of car by someone else.
Resolution: the unauthorized individual operating the car (by stealing or hijacking) is responsible for all actions taken by autonomous program and all liability falls to that individual. One would hope an autonomous vehicle is smarter than to be operated by an unauthorized driver.

3. Sale of car.
Resolution: the seller of the car is responsible for resetting all preferences for the buyer. Similar to how the seller must declare a release of liability to the DMV when a used car transaction takes place today, the seller is also responsible for resetting the car to factory settings (like how one would sell a cell phone or computer these days).

4. Public transport.
Resolution: a simple majority vote of riders upon entering the vehicle determines the mode of operation of the public transit vehicle. In the event of a tie, the third mode is the default.
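[Ed.: the public-transport resolution above reduces to a small voting function. A sketch under the commenter's own assumptions: votes are 1 (self-interest) or 2 (altruistic), and a tie falls back to the third, probability-based mode.]

```python
from collections import Counter

def transit_mode(votes, default=3):
    """Majority vote of rider preferences; ties default to the third mode."""
    if not votes:
        return default
    ranked = Counter(votes).most_common()
    # A tie for first place falls through to the default mode.
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return default
    return ranked[0][0]

print(transit_mode([1, 1, 2]))  # clear majority
print(transit_mode([1, 2]))     # tie -> default mode 3
```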

The GM software engineer is correct; it's not the responsibility of programmers to make these decisions for personal questions. Even the much less ethically-loaded privacy matter of sending anonymous diagnostic data is deferred to the software end user as a preference setting.

Oct. 09 2017 09:41 PM
John from Portland

People intuitively understand that one could not hope to push a person in such a way as to ensure that five other people are saved. Making a lever the mechanism of killing and saving allows us to suspend disbelief.

That's it.

How the future will laugh at our academia. Sheesh. This "thought experiment" is sillier than the so-called fox and hedgehog distinction. What fussy feeble-minded nonsense.

Oct. 09 2017 12:52 AM
Renee from Georgia

I would guess that the added factor of the Buddhist monks' belief in a rebirth doctrine would also play into their decision in the trolley problem.

Oct. 07 2017 11:09 AM
David Souza from Santa Clara, CA

I don't normally comment on podcasts or anything else I see on the internet, but I felt compelled to comment on the Driverless Dilemma episode. First, I want to fully disclose that I am in favor of driverless cars and I own a Tesla with Hardware Version 2, so I am biased. But I still feel there is an inherent problem with your driverless scenario. When would this even happen: multiple pedestrians in a place where a driverless car is going so fast that it has to run into a wall and kill the occupants rather than the pedestrians? I understand the hypothetical situation isn't "real," but shouldn't it at least be realistic? If driverless cars are out there following all the laws (speed limits, yielding, etc.), how would a situation even close to this manifest?

Oct. 06 2017 12:38 PM
Pete Costello from NJ USA

Wait a second! The trolley car choices need to be rethought here!

Let's view the choice this way. Instead of pushing the man, why not ask would you JUMP into the trolley's path and save the men? No pushing just jumping.

My belief is that having the choice of taking the pushed man's place allows us to empathize with and become the man being killed.

I thought about this same idea with military drone operators, who can easily kill many people, some innocent, to save our own; it must be similar to the trolley scenario. It allows us to become distant from experiencing the nature of the act. I swear, as a child, I saw a sci-fi show (Star Trek?) that touched on the nature of destroying lives without experiencing the actual act.

Oct. 06 2017 04:53 AM
Nick from Jacksonville, FL.

Imagine driving in one of these vehicles and a crowd of protesters came out to block the highway. Now the vehicle swerves and runs straight into a concrete divider under an overpass. All occupants then die because some people wanted to protest. Are the protesters guilty of manslaughter? Is it just an accident?

The triage episode pairs well with this one. Do you save quantity or quality? How would you decide quality?

Most important point is that car crashes should drop dramatically and my odds of surviving on the road go up. I'm in.

Oct. 05 2017 10:32 PM
Bob Dobbs from LA

I think this question is silly. Human drivers aren't expected to have an answer to the trolley problem in order to drive, so why should we expect autonomous cars to? It's also incredibly unlikely that the computer, or even person, would be able to accurately or quickly determine "If I do X action, M people will die, if I do Y action, N people will die, so do X/Y", so it's just not a question worth asking.

Oct. 05 2017 01:26 PM
Jay C from Portland, OR

The episode was entertaining to me, and a lot of the other comments here make sense. I was laughing my head off at the sound of a guy terrified by cars: "Oh! Ah! There's a car coming!!" Haha. Does anyone know if this is a clip from a show or movie, or was it part of the production?

Oct. 05 2017 11:56 AM
Unconcerned-Aus from Australia

This reminds me too much of the Y2K disaster that was looming in 1999: chaos because digital clocks stored only two-digit years and couldn't count past '99.
The basic design of the vehicle is to detect and avoid obstacles, incorporating network knowledge from other vehicles, and potentially other monitoring devices, to drive at the optimum-safe speed to most efficiently allow all users to arrive in the quickest time.
Pedestrians won't be crossing the road, they will use bridges or tunnels, or vehicles of course. So the majority of this moral discussion is obsolete.
But, to add some morality vs logic, if an obstacle were to be present clearly the vehicle would veer to one side to avoid collision. If altering the course were to create a collision this should be non-executable. Though I can't see how all other monitoring would fail to find this obstacle before it is too late to apply the brakes.

Oct. 05 2017 09:03 AM
Paul M from Cambridge, UK

What would happen if some people could pay the vehicle manufacturers to give them priority over others.. "I'll give you a million if you make sure my car saves my life even if it means sacrificing other people who aren't my relatives"?

Oct. 05 2017 08:10 AM
Paul Neumann

Fun topic. Lots of good points. But by far the worst sound design you've ever presented to us. I'm not talking about the archive audio, I get that, but the new stuff with all the annoying wailing and drunken(?) babbling was almost too much. And if that was some kind of homage to the archive stuff, then it was a swing and a miss. And why the swearing? Seriously. Some of us like to share these discussions with our kids. We're big fans of the show here, but don't amuse yourselves by abusing our ears.

Oct. 04 2017 08:36 PM
Aaron Sportack from Vancouver

In response to the various opinions people are expressing about what to do if a child dashes into the road and "why should the driver die for someone else's hypothetical bad parenting..."

What manoeuvre are you imagining any vehicle could possibly execute in this situation that would mean certain death for the driver?

Really, I'm curious.

Oct. 04 2017 06:54 PM
Aaron Sportack from Vancouver, BC

I am curious what my fMRI would look like on this, because I can't stop thinking about all the other options. Binary choices like the lever problem basically do not exist. Why can't I shout to the people on the tracks? What if I pull the lever while the train is passing over the fork, effectively sending it in both directions and derailing it? If I can't, why are these people unaware of said train, which is not exactly stealthy or fast? I would assume that, whether 1 or 5, they would see, hear and feel the thing coming tens if not hundreds of meters off...

This can be extended to driverless cars pretty easily. No matter what the code, there is not going to be a "kill vs save" switch, that is not a choice any driver has ever had. As a driver you can accelerate, brake, or turn. Neither the AI nor its programmers are going to be able to predict the ultimate results of those choices with any degree of accuracy when you throw in variables such as weather, vehicle and road conditions. The whole benefit of driverless cars is that they are very unlikely to get into any such extreme situations in the first place, but if they do the solution is almost always going to be stop as fast as it is able and hope for the best.

Yes the wider effects of interacting autonomous vehicles will need to be studied and optimized, but I anticipate that the process is much more likely to involve tedious things like figuratively resetting the router on the highway to clear up a traffic jam than dramatic decisions such as one's grandma and baby sister simultaneously jumping into the street while your car is inexplicably driving 80kph through a school zone resulting in the necessity to choose which one to hit. Can we maybe take it down a notch and learn about how autonomous vehicles actually make choices and about how the AI for that has developed thus far? That would seem to be a rather important baseline to establish before playing the moral dilemma card.

Oct. 04 2017 06:39 PM
Anna Loginovskaja

From my point of view, a crucial aspect was not brought up during the self-driving car discussion: why are the pedestrians on the road in the first place? If they cross against the light or in a prohibited place, they should not be saved by a self-driving car. Pedestrians (no matter whether there are 5, 10, or 100 of them) who decide to cross the road by breaking the rules make a conscious choice to endanger themselves and put their lives at risk. As for children, they should be guided by parents, so a parent or guardian decides whether to cross according to the rules or to break them and be responsible for the consequences.

Oct. 04 2017 03:32 AM
indigo dingo from Perth, Australia

For those who would like to read more variations on the trolley thought experiments, I highly recommend "Would you kill the fat man?" by David Edmonds.

I am the 1 in 10 who would not pull the lever in the original problem. No, I am not a psycho/sociopath.

Oct. 04 2017 02:31 AM
jader3rd from Monroe, WA

The trolley problem for self driving cars is stupid. The self driving car will never drive where it can't safely go.

Oct. 03 2017 09:05 PM
Scott from Ohio

Though I am not a religious person, from a social, neuro-cognitive perspective, I postulate adding a "God Factor" to the autonomous system. People have a problem if they feel that a death choice has been engineered into a system but they have little issue with ascribing God's will to seemingly random deaths or accidents. So, if the autonomous system truly has come to a choice where someone has to die, why not, at the last second, add a God Factor - namely, add a stochastic element to the decision frame where the choice becomes random. Thus people could ascribe the choice to God's will and live with the result.
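[Ed.: the "God Factor" this commenter proposes is just a uniform random tie-breaker at the final decision point. A minimal sketch, assuming the system has already reduced the situation to a list of equally unavoidable outcomes; the function name and option strings are invented for illustration.]

```python
import random

def god_factor_choice(options, rng=None):
    """When every remaining option entails a fatality, choose uniformly at
    random so that no deterministic preference is engineered into the
    outcome. An injectable rng allows reproducible testing."""
    rng = rng if rng is not None else random.Random()
    return rng.choice(list(options))

# Neither outcome is favored; over many runs each is picked ~50% of the time.
print(god_factor_choice(["swerve_left", "swerve_right"]))
```

Whether people would actually accept a machine rolling dice at the moment of impact is, of course, the commenter's psychological conjecture, not an engineering claim.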

Oct. 02 2017 06:28 PM
Audrey from Mobile, AL

The driverless car presenter's logic is exactly the same as mine regarding the baby and the village. I would not kill the baby (even if it wasn't mine), because the only KNOWN outcome of killing the baby would be a dead baby. That would be an utterly horrible thing to live with, even if doing so was the thing that allowed you to live. And no, that existential "I'd rather die" argument is not all I'm talking about. After all, the lives of all those other people would absolutely, obviously deserve consideration.

But it goes back to what is knowable and what isn't. If you must consider killing a baby, the only guarantee is that there will be a dead baby. A non-coughing, dead baby does NOT guarantee the survival of the village. Someone else could cough or make a sound, the killers might have dogs searching by smell or other means of detection, and then you'd have killed the baby for nothing. Even if the killers found you, it doesn't guarantee they would kill everybody. Now you could say, "Well, assume they definitely WOULD kill everybody," but in real life you couldn't make that assumption, and I know for certain that I wouldn't. The terrified mind rarely accepts the worst-case scenario; we almost always hold onto hope.

The other part of the flawed logic is assuming that the baby coughing would result in detection. The baby might not cough loudly enough for the killers to hear, and you COULD make efforts to muffle the sound. Bottom line: as with the driverless car, if there is only one absolutely predictable death, I would spare that one. I would cup my hands close to the baby's mouth to baffle the volume of a potential cough, but I would not block its airway. And I strongly suspect this would be the default reaction in the REAL scenario, not the one on paper.

Oct. 02 2017 06:24 PM
Thad from Charlottesville, VA

I am not a robot:

Oct. 02 2017 10:20 AM
Dave from Maryland

An interesting aspect of the driverless car scenario is this: There are times when I have been driving and one or more teens have deliberately walked across the road in front of me, forcing me to slow down or stop to avoid hitting them. It seems to be a sort of dare. Given this, I wonder if, when driverless cars become prevalent, this may become a sort of "sport" - running in front of a driverless car to make it stop or swerve.

If the car was willing to kill the passenger rather than pedestrians based on a count of lives lost, could that even be a way to assassinate someone - staging a scenario where the car's only choice would be to head for a concrete wall?

Oct. 02 2017 06:55 AM
David from CA

I was a little disappointed in this episode. I'm a big Radiolab fan but felt this one did not hit the mark. Maybe it was a little apples-and-oranges, or the vagueness and hypothetical nature of the scenarios, or that it involved people using fMRI to make wild claims about a system that we know very little about.

The trolley question did not address the fact that the person on the tracks bore some risk by being on the tracks, whereas the person watching clearly bore little risk. Killing someone on the tracks vs. someone off the tracks feels different to me. If the switch caused the trolley to derail, traveling 100 feet into a nearby park and killing a person, would people respond with the same certainty?

Then the whole discussion about killing the driver vs. the pedestrians has problems too. Again, the scenarios: are the five people jaywalking, or otherwise breaking the law? And thus do they bear more responsibility for their predicament? A self-driving car should be following all the rules and driving legally, and thus is not likely to get into a crash, so why would an algorithm harm the occupant? Most MVAs are really MVCs... they are not accidents, they are crashes that involve someone not following the law. The dilemma just feels a bit forced to me.

Still love Radiolab... keep the reporting/stories coming!

Oct. 02 2017 02:44 AM
Doug from California

After looking at all the comments posted about this episode, I noticed a glaring omission regarding the content of the podcast: the use of a "network" (in this case it sounded more like a social network) to stratify possible accident victims in a hierarchical structure of gender and ethnicity. This makes absolutely no sense. Bringing up this "network" point shows an intent by the authors to inject identity politics in the last place it should be: within an active safety system that sees objects and humans.

The "network" would actually be the other vehicles on the road, both autonomous and non-autonomous (non-autonomous vehicles would have to be outfitted with "network identifiers" to be included in the data set), communicating information about their location, speed, direction of travel, and the environment around them. The "network" doesn't care if you are tall or fat, Asian or Black, white or blue collar; it only sees people to be avoided. No coder would be given the task of creating a hierarchy of avoidance within data sets of people. The "network" would be used to create a single understanding of the environment a vehicle is travelling through by linking the many eyes and ears of the vehicles within it. A vehicle on one side of a hill would know of the terrain, objects, and people on the other side of the hill, not by its own sensors, but by the multiplicity of sensors on other vehicles, CCTV systems, and people's phone transmissions (yes, you are being tracked).

I think the whole structure of the podcast was ill formed and posed questions meant to push political positions rather than trying to understand the hard science and ethics that the technology will actually have to face. At least that's what I think.

Oct. 01 2017 10:33 PM
Emeka from Palm Springs, CA

There is one problem I have with the analysis of the trolley problem.

The situation with the lever vs pushing the man are not really the same situations.

This is really a self preservation problem.

One way to understand the situation is to introduce a third character standing with the observer in the dilemma.

In scenario 1 (pull a lever to kill one person, or don't pull it and allow five to die), the observer really has to imagine that he is making a choice between two foregone options: either the one or the five people who are intrinsically involved in the current situation. If the third character is asked to make a choice, his choice will always involve the one person on the one track or the five people on the other track.

In scenario 2, you now have the option of pushing another person who is "innocent," bringing him into the situation. If the third character is instead asked to make the choice, the observer now has to consider that he too may be pushed into the path of the train, not just the others on the track. Now the situation subconsciously involves the person being asked to make the choice. He is no longer a removed character. He makes the choice that he intrinsically knows can preserve his self.

He is essentially willing to sacrifice one person as long as that person is not himself. This is why the first scenario is easy: the choices presented will never affect him, while in the latter case they can potentially affect him.

Essentially, this is a self preservation problem. Perhaps this is why the monks make the choice to push the person. Perhaps they are just more willing to sacrifice self.

Oct. 01 2017 05:12 PM
Ken from Ridgetown Ontario Canada

You forgot traffic regulations. Autonomous cars will not break the law; I fail to see how the car could intentionally kill the passenger without breaking the law.

Oct. 01 2017 03:57 PM
Colin Norum from Florida

I listened to the podcast on the first day of my new echo dot and LOVED it.

I want to comment on the segment discussing killing one to save five, and why there is a different, emotional response to pulling the lever versus pushing the man from the bridge.

Pulling the lever didn't kill the one. The train did.

Pushing the man directly kills the one, hence the sense of committing murder.

Oct. 01 2017 02:09 PM
Watt deFalk from Portland OR

A significant, overlooked factor. European councils may insist on moral regulations, but they are meaningless to the sociopaths in the Silicon Valley tech industries concerned only with: Get product out, make our profits, if people die, they're all idiots and too many of them, no one matters but me. This Ayn Rand mindset has electro-locked us into our future feedlot. Our cellphones are already starting to kill off cognitive abilities; eventually we'll be dead-eyed obedient robot-slaves for World Peace at last. Wish this speculation wasn't so likely. Have a nice day.

Oct. 01 2017 01:02 AM
Tom from Seattle, WA

I just listened to this episode. I mostly loved it - but I have problems with the ending.

I think that it's false to claim that the current situation regarding vehicular death and dismemberment is "random". Automobile manufacturers and government authorities do make and have been making moral decisions that create what we currently experience as the norm. This has been true in specific cases of strategy (such as the General Motors Streetcar Conspiracy), cost-benefit market-based decisions (that have informed safety decisions around seat belts and airbags), allocation of public resources (like the allotment of huge amounts of formerly public land for the specific use of one product: automobiles), and so on.

Robert's framing of specified vs. random casualties might be more properly framed as considered vs. negligent ones.

Sep. 30 2017 02:30 PM
Amelia from Simpsonville, SC

Isn't it obvious? The people on the track have chosen to engage in a dangerous activity. The one and the five on the track have decided this is an acceptable risk.

Pushing the large man is picking someone who hasn't chosen to risk his life by being on the tracks. That isn't fair.

Sep. 30 2017 08:29 AM
William Vouk III from Minnesota

The results of the thought experiment with the train are not mysterious. Flipping the switch to save the five has to do with the principle of double effect: doing a good thing (saving the five) with an unintended evil result (the death of the one). This action satisfies all the requirements for justifying an action by the principle, namely: the nature of the act is itself good, or at least morally neutral; the agent intends the good effect and does not intend the bad effect either as a means to the good or as an end in itself; the good effect outweighs the bad effect in circumstances sufficiently grave to justify causing it; and the agent exercises due diligence to minimize the harm.

Pushing the man is unacceptable because it is doing evil in order to achieve something good. This is never right, as the Socratic principle states -- it is never right to do wrong (even if a good end is achieved).

It seems to me that our brains ARE able to recognize these valid principles, even if the principles cannot be articulated. In fact, I would say that the fact that most people answer the way they do is evidence for an innate CONSCIENCE.

These same principles apply to people hiding from enemies and to driverless cars.

Sep. 30 2017 02:26 AM
Dave Cortright from Silicon Valley

As a designer and engineer, I have to say that the entire premise of this sinister, Rube Goldberg thought experiment—while perhaps interesting for philosophers and brain researchers—would simply NEVER happen in the real world. For starters, the outcome of any decision or action can never be 100% certain; we aren’t “Minority Report” precogs. How could I know that pushing the guy on the tracks would actually save the other 5? Or that sending a speeding train through an unplanned track switch wouldn’t cause it to derail?

But more to the point, you are failing to take into account THE SEQUENCE OF PREVIOUS EVENTS that led up to that particular set of circumstances in the first place. Driverless cars have a HUGE number of sensors looking as far out from the car as practical, specifically to look out for and react to objects, to prevent them from EVER getting into this situation in the first place. The car would have sensed a problem and braked or changed direction LONG BEFORE some "lesser of evils" ultimatum ever came up.

In situations where something happens too fast for the car to react, then the fault lies not with the car but with the person/object that somehow avoided detection until it was too late. The real litmus test here is to take ALL OF THE REAL WORLD INCIDENT DATA where a vehicle injured or killed someone and use each of those initial conditions as actual experiments for the self-driving vehicle. (Of course you have to first filter out all of the ones with a drunk/distracted/sleepy/altered driver.) I can guarantee you that it is going to injure and kill SIGNIFICANTLY FEWER than in the actual outcomes.

I can appreciate you wanting to tell a good story. But the false conflation of this fictional thought experiment and real world situations does a disservice to your audience.

Sep. 29 2017 07:39 PM

I just listened, came home, and jumped online to post two comments. The first, I see, was already tackled by several others: the trolley dilemma has too many faults. The largest is that pulling the lever suggests distance and no other option, while pushing the man means you choose his life over your own.

The other issue I want to bring up is the Mercedes executive's comment. He clearly stated that if the car knows for certain it can save one life, it will do so. That does not mean it wouldn't choose to save many if it knew for certain that was possible.

Sep. 29 2017 07:10 PM
Sam from Denver

I take STRONG issue with Josh's theory of morality. To argue that morality is innate is to argue biological determinism. It's an ahistorical claim to a truth about morality, as if morality has been AHISTORICAL and APOLITICAL, and I would simply argue that nothing is ahistorical; the word morality itself is historically contingent and constantly shifting. Morality is not a universal experience of all people; it's temporally and historically situated. Morality arises from the Western canon of philosophy... not in a primate. It is rhetorically used; it has use. At what point will Radiolab be able to separate its blind acceptance of the scientific method from the cultural rhetoric that impacts it?

Sep. 29 2017 03:58 PM
Holden Kessler from Los Angeles

What's stopping bad guys from jumping in front of moving autonomous cars to kill and/or rob the drivers? And why would a Buddhist monk push someone in front of a trolley when they themselves could jump?

Sep. 29 2017 03:57 PM
Jeffrey B. Kane from Tucson

As a former IT systems engineer and current full-time Uber/Lyft driver I think a lot about the future with autonomous vehicles. After listening to your latest episode, I can't even fathom the situation you describe.

An autonomous car wouldn't even need to be programmed to make such a decision and the Mercedes engineer's comments were taken out of context.

A car would be programmed to STOP if there were a group of pedestrians in front of it. If it was unable to stop due to a mechanical issue, it would be aware of that issue and would have stopped as soon as it arose. If road conditions made stopping harder, it would not be going so fast that it couldn't stop in time.

This is how the car protects the ONE it knows it can protect... by operating safely in the first place.
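That "operate safely in the first place" rule can be sketched as a speed governor: never drive faster than the speed from which the car can brake to a stop within the road it can currently see. A toy sketch only; the friction and reaction-time numbers are illustrative assumptions, not real vehicle parameters:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def max_safe_speed(clear_distance_m, friction_coeff=0.7, reaction_time_s=0.1):
    """Highest speed (m/s) from which the car can stop within the sensed
    clear distance, given braking friction and a sensing/actuation delay.

    Solves d = v*t_r + v^2 / (2*mu*g) for v.
    """
    mu_g = friction_coeff * G
    # Quadratic in v: (1/(2*mu*g)) * v^2 + t_r * v - d = 0
    a = 1.0 / (2.0 * mu_g)
    b = reaction_time_s
    c = -clear_distance_m
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

# With 50 m of sensed clear road on dry pavement (mu ~ 0.7) the governor
# caps speed at roughly 25 m/s; on wet pavement (mu ~ 0.4) it caps it
# lower, so the car can always stop for what it can see.
for mu in (0.7, 0.4):
    print(round(max_safe_speed(50, friction_coeff=mu), 1))
```

Under these assumptions the "dilemma" never arises: the car's speed is bounded by its ability to stop, not the other way around.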

Radiolab shows are always thought provoking and stir up great conversations -- even ones like this which have a faulty premise.

Sep. 29 2017 03:14 PM
Chelsea W Rudman from DC

Great show -- definitely gave me a lot of food for thought. Sort of chilling to think about having to make these choices when programming the cars. I would also love to hear another Radiolab episode exploring the possible implications of the driverless car future that are alluded to in the opening clip. Is the speaker right in his dire forecasts? What *are* the predictions surrounding the impact of driverless cars on the various industries he describes?

Thanks, as always, for another great episode.

Sep. 29 2017 03:06 PM
David Pirtle from Washington, DC

I think the current situation is no less 'engineered.' We allow cars to be started without breathalyzers, which we needn't do. We allow cars to be designed to exceed the speed limit, which we needn't do. The speed limit itself is a calculation between how efficiently we want our society to run and how many people we are willing to accept being killed on the road. The decisions that will have to be made regarding driverless cars may be more blatant, but they are no more complex. They just feel more complex, because we're taking away a driver's illusion of control over their situation.

Sep. 29 2017 02:14 PM
Robert G. from California

I think in time, the onus will be on the pedestrian. Just as a person walking down train tracks is culpable if they are hit by a train, someone walking illegally in traffic will be viewed the same way. I was playing through the various scenarios and came across no answer, but found this compelling argument against programming the car to kill the driver: if the car were programmed to kill the driver to save multiple pedestrians, you'd have an easy formula to commit murder. Find out what car your target is traveling in, overwhelm the car's computer into thinking it needs to sacrifice the driver and save the pedestrians, and voila. Would politicians' cars be programmed differently? Would it be an add-on fee available to be purchased?

Sep. 29 2017 12:27 PM
Mat Weller from Reading, PA, USA

I feel like the conclusions drawn in the original fMRI story are based on flawed thinking. Yes, you are talking about how easy it is to kill someone based on how active your role is in the action (though there's never been any question of that). But it's also true that there's a false-equivalency assumption between the two scenarios in the Trolley Dilemma. In the first story, you're told there is no way you can contact the workers, but you have the option to throw the switch, so of course you do. Because, math. But in the revised version with the fat man, you have to think about so many more variables. No human can stop a train, therefore you must be saying that his screaming fall alerts the workers and gets them to clear the track. But if that's possible, then you can probably just holler down to them for the same result. Or you have the option of choosing to sacrifice yourself. And, as mentioned previously, there's the consideration that the workers take this possibility as a risk of the job, while neither you nor your (for some unknown reason, fat) voyeuristic companion signed up for that. And if you are saying that the fat man is SO FAT that he can actually stop a train, then he's stopping it by derailing it, which could put thousands of lives at risk in the train and surrounding neighborhoods. So much more to consider. Maybe the fMRI isn't sensing moral quandary; maybe it's lit up because more variables require more thought centers.

I also don't think a pedestrian dying or a driver dying are the only possible outcomes in that scenario. As you mentioned, you're talking about a 1/10,000,000 chance to start with. Then you consider that a pedestrian who gets hit by a car has an exponentially higher chance of death than the passengers of a car have if they get in an accident. The idea that you're engineering death is ludicrous. If the existence of it saves thousands of lives to start with, and then when death does occur, it's only in the event of a 1/10,000,000 chance that involves a scenario where _the_pedestrian_is_at_fault_anyway_, then of course you tell the programming to swerve _and_ avoid other cars if possible _and_ avoid walls. And if it's that 1/10,000,000 day, then it's too bad.

Sep. 29 2017 10:09 AM
Robert from Wisconsin

After thinking about the driverless dilemma given at the end of the show, I come to the conclusion that the situation presented borders on the impossible in the real world. First, driverless cars could not disobey speed limits. Any road or street that would allow a driverless car to travel at speeds sufficient to kill its passengers by crashing into a wall will be constructed too wide for all five pedestrians to block sufficiently to prevent the car from swerving around all or most of them. Second, there is no alley that all five pedestrians could block (with all five being killed) in which a driverless car would be allowed to travel at speeds sufficient to kill anyone.

This scenario is far more plausible with a human-driven car because of a human's slower reaction time and ability to ignore the speed laws.

Sep. 29 2017 09:36 AM
RZU from Berkeley, CA

I found the ending to this episode rather disappointing. Robert talked about engineering in deaths as if it was a new thing. A couple of folks on here already pointed out that we already engineer in a lot of deaths in our current system. Whether it is the street layout that prioritizes vehicle traffic over pedestrians and cyclists or the freeway that runs closest to the poor neighborhood and results in more disease and death due to pollution, the value of one person's life is being weighed against that of another. There is perhaps a little more of a direct line between the engineering and the outcomes when you talk about driverless cars, but the effect is no less tragic on those who haven't been considered or who have been considered less valuable by the engineers.

Sep. 29 2017 01:09 AM
Tim Millar from Melbourne Australia

The trolley problem is an entirely artificial construct, arguably useful for probing ethics, but irrelevant to the real world. Trolleys have been around for a few hundred years, but I feel confident in asserting that the trolley problem has never actually arisen in the real world. Even given the set-up as described, there will always be uncertainty in regard to the outcome of any particular action or lack of action (contrary to the manner in which the problem is framed), and there will always be more alternatives to those defined in the problem.
Similarly, there is never going to be an occasion where a driverless car is going to have to resort to any kind of ethical decision. The car will never have certainty about the outcome of any particular course of action, or be able to know who or how many people will be injured or killed - it will simply do its best to avoid a collision.
As driverless cars become the norm, thousands and thousands of lives will be saved. It would be tragic if their adoption in society is delayed because of misplaced concerns about artificial and unrealistic ethical conundrums.

Sep. 29 2017 12:32 AM
T. Roach from Minneapolis, MN

The trolley problem has bothered me for a while, and i think I've finally figured out why. Eric from Anchorage hit the nail on the head: the workers on the track are there voluntarily, and are aware of the risk, whereas the man on the bridge has not taken that risk.
This could be dismissed as a flaw in the hypothetical situation i.e. we should be assuming all of the people in the scenario are innocent, HOWEVER, as it applies to the real world driverless car situation, I see a clear choice for the behavior of the car. The occupant of the car is at least a passive player in this scenario; they have not done anything they perceive to be dangerous or unusual. The five people on the road have presumably violated a law, or at least a social norm- 'don't play in the street.' In fact, you could argue that the 'rich guy' in his smart mercedes has spent a lot of money to acquire the safest means of transport possible (taking it as read that driverless cars will save millions of lives every year).
Not convinced? Well, we have a clear precedent for this idea, right out the window of Radiolab's Chicago studio- The El Train. I think it would be safe to say almost everybody in Chicago knows about 'the third rail'. It kills people. It is unusual, but not unprecedented, that someone dies walking on the tracks. Sometimes it is an indigent, a turnstile jumper, or a graffiti artist. Sad, but not unexpected. They shouldn't have been there. Sometimes it is a workman. Disturbing, but we assume they were aware of the risk and felt the compensation was worth it. Rarely, somebody falls onto the tracks. Undoubtedly a tragedy, but an accepted casualty compared to the safety and convenience of the trains. Nobody blames the engineers who designed the system for an occasional accident. Nobody suggests the driver of a train kill one sleeping bum to save two blind nuns walking the rails.
The owner of an expensive autodrive car pulls onto the freeway, and sets the vehicle on autopilot. Five people walk into the middle of a designated driverless highway. The car does everything possible to minimize harm to the pedestrians, but not at the expense of the passenger. Some of the trespassers die. Those five people should not be there. They put themselves in harm's way, thus are responsible for whatever harm they suffer.
As driverless cars become normative in society, the infrastructure will catch up. Roads will shrink and become more segregated, because they no longer have to compensate for human error in traffic. Sidewalks and bike lanes will grow into the margins. People will learn not to touch the third rail. Society will progress.

Sep. 28 2017 11:39 PM
Thad Humphries from Charlottesville, VA

The trolley problem sounds like a non-issue for self-driving cars. Presumably these cars will be programmed to stay on the road. Is a driver of a standard car expected to sacrifice the car's occupants if a pedestrian steps into his or her way? I don't think so. As for self-driving cars, should a pedestrian be able to injure a car's riders by jumping in front of the car, or pushing someone in front of it? No. To say the pedestrian has such power through their reckless actions is ridiculous.

Sep. 28 2017 10:16 PM
Cody from LA

I wouldn't pull the lever or push the man... I'd sacrifice myself for them!

Sep. 28 2017 08:41 PM
Dennis P Hastings from Olympia, Wa USA

I find it really interesting that people want their lives to be automated. However, the trend towards automation won't stop with cars. It will eventually come for everyone. Only people who think they've chosen the proper 'niche' believe that they will be safe from it, but, capitalism being what it is, those who pull the strings won't be happy until they have saved every dime they can by making something automatic, even if it means sacrifices in quality at every step. I also can't help but feel that many advocates of driverless cars are the same people who are so inattentive that they don't realize that the light has changed, so of course they are for it. They want someone else to think for them. At the heart of the argument is the fact that some people shouldn't have ever been driving in the first place. Also, there is no mention of how much these cars are going to cost and whether the average person can afford one. Sales and marketing are the real reason for their development in the first place... not the safety of you and me. If you believe that then you also believe that Donald Trump has your best interests at heart.

Sep. 28 2017 07:54 PM
Jason from St. Louis

I have always appreciated Robert's humanist outlook on complex moral issues—it's one of the things that keeps me listening to this podcast...thanks, Robert!

This time, though, I disagree a little with your analysis at the end about how random drunk drivers kill more people on the roads, but that this randomness feels like operatic fate, while the driverless car algorithms seem somehow sinister or unacceptable because the deaths (though fewer) were calculated and engineered.

The reason I disagree is: I think the drunk-driver-related deaths are also engineered. They are engineered by a lack of laws that, for example, would require breathalyzer tests before starting a car. Why don't we have those laws? Because lawmakers have calculated that they would be unpopular with car manufacturers and many consumers. So we leave them off, knowing full well that it will (statistically) cause more deaths. We have engineered a society in which individual freedom is so highly valued, that we can't make laws that save thousands of lives. It's a different kind of engineering, but it's engineering nonetheless. Those deaths are not operatic fate—I would argue that they're a concession to our culture's obsession with individual liberty and consumer choice.

Sep. 28 2017 07:45 PM
Paul from Austin, TX

"programmer eating pizza and sipping coke" - If you are going to use a dismissive and disparaging stereotype at least get it right - it's pizza and DIET Coke.

But seriously, I would argue that most engineers are highly skilled and trained, thoughtful professionals who are better able and more qualified than politicians to make these kinds of judgments. It is impossible to make a perfectly safe product. It can always be safer, but more expensive. Engineers make these life-and-death choices all the time.

Sep. 28 2017 02:42 PM
Mathieu Blais from Toronto, Canada


What about the effects of self-driving cars on the human psyche? If the machines are always found to be better than us, what are we good for? What will it do to our collective sense of self if we continue to have our agency eroded by the notion of 'machines do it better'?
What is the driving force behind this? Insurance statistics? We need a vision for our place in this world that isn't shaped by numbers. If we can't drive ourselves, then we can't trust ourselves to do much. It's difficult to argue with fewer deaths, but I would want to know what would we be giving up for this deal.
I would like to hear more about the reverberations of such a shift through the workforce and a forecasting of what humans will actually do when all of the jobs are gone. What other types of work are vulnerable? Will buildings design themselves (eep for me!)? Capitalism isn't kind to people without jobs.

Sep. 28 2017 02:23 PM
Will Tyler from Chicago, Illinois

I've never liked the trolley thought experiment for the limited options it presents, and I don't understand why people always assume their only option involves death. Maybe because that's the end result presented if nothing is done? In the second scenario, a third option of self-sacrifice is pointed out by several others in the comments, but if you can throw yourself or the other guy off the bridge, you can just as easily throw a shoe, a shirt, or your pants to get the train driver's attention, assuming there are no other objects around.

For the driverless car dilemma, I'd think an emergency brake would exist, as in any current car, allowing it to stop. Assuming that fails, and thinking about the more complicated scenario of these cars all being on the same network, they'd be able to help each other faster than we can think and react. Why couldn't the other cars recognize the problem through network communication and maneuver to bring the failing car to a stop without a major impact, say two pushing in on each side to slow it to a stop?

These thought experiments always make me question the limited imaginative ability of the philosophers who come up with them, and if they're the ones figuring out these problems for driverless cars, then I am worried about the outcomes.

Sep. 28 2017 09:24 AM
Mike_CFO from Port Chester NY

The real moral dilemma is that every week a 747's worth of people die on US roads. Automated cars will reduce that number at an increasing rate. Anybody standing in the way of that progress has the blood of over 30 thousand US citizens on their hands, as well as hundreds of thousands of global citizens annually. Automated cars do not need any type of morality algorithm to cut these casualty rates down. I hate to shut down debate, but this show is the equivalent of yelling fire and causing a stampede. We should be enabling our programmers and engineers with resources and moral room to roll this technology out quickly, even if imperfect.

Sep. 28 2017 08:09 AM
Robert Post from Atlantic City, NJ

The problem I see with the episode is its focus on a scenario that is EXTREMELY rare. There is virtually always an alternative to running people over. As I see it (and this was not mentioned in the program), the occupants inside the car will be buckled into their seats (unless they're incredibly stupid) and much more able to survive a crash than any pedestrian. The cars should be programmed to ALWAYS miss pedestrians and instead crash into most anything else. Another car (which is designed to absorb and distribute the forces of impact), bushes, fences, small trees, etc., anything but people. Any crash will be infinitely more survivable by those inside the vehicle than by those who are not.

Sep. 28 2017 06:05 AM
Elmer Alvarado from san diego ca

the idea that there is a default brain response is hilarious. i bet i can explain it better. the whole train dilemma

Sep. 28 2017 04:30 AM
Jai from Perth

The discussion is a fascinating one, but unfortunately the answer is less interesting. Autonomous cars should never swerve in an emergency. Not ever. They should brake as quickly as possible.

There may be very rare instances where swerving could have saved more lives, but those will be many times fewer than would die if we had swerving autonomous vehicles on the road. This is best for the driver, and society. And it's what the guy from Mercedes was trying to say.

Sep. 27 2017 11:52 PM
Marc Kwiatkowski

The driverless car facing pedestrians is not at all like the classic trolley dilemma for the following reason:

The algorithm used by a driverless car can be learned, either through reverse engineering or by testing various conditions, by attackers and then exploited. If I know that an autonomous vehicle will prefer crashing and killing the occupants when suddenly confronted with some number of pedestrians, I can kill somebody by doing just that. It is not hard to imagine that such an easy way to kill someone would be used by everyone from terrorists to angry spouses.

If an autonomous algorithm puts the driver at greater risk than other people in any circumstance, then someone will take advantage of that. Thus I think the only right algorithm must be to protect the occupants at all costs. Another possibility is that responses are randomized so that attackers cannot count on a specific response.
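The randomization idea in that last paragraph could look something like this sketch (the maneuver names and risk scores are invented for illustration): among maneuvers judged roughly equally safe, pick one at random, so an attacker cannot predict, and therefore cannot engineer, the car's response.

```python
import random

def choose_maneuver(options, rng=random):
    """Pick an evasive maneuver at random from those judged acceptable.

    `options` maps maneuver name -> estimated risk score (lower = safer).
    Any maneuver within a small tolerance of the best score is eligible;
    the random choice among eligible maneuvers is what denies an attacker
    a predictable response to exploit.
    """
    best = min(options.values())
    eligible = [m for m, risk in sorted(options.items()) if risk <= best + 0.1]
    return rng.choice(eligible)

# Two maneuvers judged roughly equally risky: the choice between them is
# unpredictable, while the clearly worse option is never selected.
options = {"brake_straight": 0.30, "swerve_left": 0.35, "swerve_right": 0.90}
picks = {choose_maneuver(options, random.Random(seed)) for seed in range(100)}
print(sorted(picks))  # "swerve_right" never appears
```

The design trade-off is that randomness only applies between near-equivalent outcomes; a clearly safer maneuver is still always preferred.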

Sep. 27 2017 10:58 PM
Eric from Anchorage, Alaska

Regarding the trolley dilemma, the show draws false equivalency between the two cases presented. In Case B of the trolley dilemma, the five workers standing on the tracks have all already placed themselves in an inherent position of danger, likely self-acknowledged and self accepted, while the large man on the bridge has not. This circumstance greatly confounds the decision presented, and is never acknowledged in the show. I believe as social animals, we absolutely take this into account in making the decision, which is why it feels so wrong to push the bystander off the bridge.

In Case A, five workers stand on one track, and one worker stands on the other. The trolley is headed for the five, and will kill all five, but can be diverted to the other track. Do you pull the lever? The reason 9/10 people would is that there is an equivalence between all the workers. All have already placed themselves in a slight position of danger (standing on the tracks) so it simplifies making that decision. As previously stated in another comment, you are only deciding on the number of survivors.

In Case B, the five workers all have accepted the slight risk of standing on the tracks, but the big man on the bridge is just an innocent bystander. How do you compare five people who have placed themselves in harm's way with one who has not?

Imagine an alternate Case B that does draw equivalence to Case A. There is an arched bridge that can hold five people, but will guaranteed collapse with six people. There are five people standing on the bridge, and a sixth walking up. If that person crosses a threshold, the center of the bridge will collapse and kill the five (he'll be left standing at the edge of the break). You see at the last instant and don't have time to warn him, but you can push him off the side before he crosses the threshold, killing him but saving the five standing in the center of the bridge. All of a sudden, the pushing part does not seem as critical in the decision, because once again, everyone is in a similar position of danger (on the bridge).

Sep. 27 2017 10:17 PM
Gary from NYC from NYC

The problem with this story is cars don't kill, speed does. If self driving cars go no faster than 4 mph, I can almost guarantee they will not kill anybody. If we speed self-driving cars up to 10 mph, they may kill a couple... if we move up to 55 mph (or higher), yes people will die.

The second problem is we are living through the trolley problem, and have decided to kill the pedestrians. Cars are mandated to have seat belts and airbags. The drivers walk away from the crash and people outside the car are killed. In addition, we know that the flat nose of an SUV or pickup is more deadly to humans, the people outside, rather than inside the car.

Self-driving cars will save and improve millions of lives. It is a shame that this piece thought that there is a downside.

Sep. 27 2017 09:27 PM
Zoe Adamedes from Providence, RI

Haven't finished listening to the entire episode, so I can only comment about the trolley mind experiment. I agree with Michael Kalm about the suicide option. Perhaps that's why we feel so conflicted and guilty about not wanting to push the big guy off the bridge. If we really wanted to save the 5 people on the track, we could jump off the bridge ourselves and stop the train without killing another. And so, perhaps the reason we feel so averse to pushing the big guy is out of guilt: that we know, deep in our minds, it's unnecessary.

Sep. 27 2017 09:04 PM
trish from Pasadena, CA

In listening to this podcast struggling with the ethical dilemma posed by the algorithm underlying the decisions made by driverless cars, I was struck that in our capitalist country we daily accept the algorithm underlying the decisions made in health care about who gets care and how much. People die every day because of the way our system works and while many are upset by it, there is little talk of the ethical dilemma in health care unless it revolves around who gets first crack at a donated organ.

What am I missing?

Perhaps driverless cars will end up with the same decision algorithm. If you are rich enough to afford a Tesla or a Mercedes, you can set your algorithm to protect the car's occupants at all costs. If you're driving a Ford Focus, you'll get the algorithm that saves the most human lives regardless of the impact on the car's occupants.

Love this podcast. Always tops on my list.

Sep. 27 2017 07:12 PM
Gabriel Martinez from Madison, WI

I don't think this problem is particular to automated cars. If anything, it is easier in that situation. For example, if a group of people jumped onto the rails of a train, they'd create a similar dilemma for the guy operating the train, or, if it is a high-speed train like the one in France, for the people that coded its software. But we have found the solution for this: first, make it hard for people to be on the rails; second, try to stop if possible to avoid the killing, always ensuring the safety of the people in the train (since they are more numerous and they did not break the rules by being on the rails). And if it is not possible to save the intruders, the moral responsibility falls on them, because there is certainty that if you get in the way of a train, your life will be saved only if possible.

I see that if automated cars become a reality, the way streets are designed will probably look like railroads. I do applaud, however, banning cars from discriminating based on race or status, but it seems obvious to me that they must ensure at all times the safety of their passengers and, if possible, of the people outside. Once that certainty is there, people will know that it is dangerous to be in the middle of a highway where driverless cars transit. Nothing radical there.

Sep. 27 2017 07:11 PM
Carel from Bay Area

Ok, you have gone totally George Lucas now and I am compelled to write out my frustrations. Radiolab in its classic form was the greatest thing on podcast and radio. The past few years it's been mostly cutting and pasting other show content or taking on overtly political rallies. I loved, no I survived, on the emotional science-style episodes that were unique and informative and creative. When Jad was concerned about too many sound effects I was so sad that you have grown to become ashamed of what made the show compelling to me. Perhaps I am alone, but I miss the old days, and this felt like a close-but-no-cigar episode in which I was reminded of what used to be, but you had to both use and abuse the old content to get me to the point of the episode. I really miss the "crazy" old Jad and Robert, sound effects included.

Sep. 27 2017 06:58 PM
Pekka Kujansuu from Helsinki

I always thought the Trolley Problem has a false equivalency between the two cases, since assuming we know how the lever works, and we know it's currently pointing the trolley at the five people, there's a pretty good expectation that pulling the lever will help. With pushing the large guy onto the tracks though, in real life we could never really have any certainty that it would work. Worst case scenario, you push the guy, he grabs you, you both fall to your deaths, but fail to stop the trolley, and a total of seven people die. There's also the chance that you push the guy but fail to get him to fall, so five people still die, but now the large man is mad at you.

Sep. 27 2017 06:13 PM
Matthew Anon from Colorado

The answer is easy, and it's not wildly different than what human drivers do today: slam on the brakes.

There's no reason a car should careen off a cliff because a toddler wandered into the street. You just slam on the brakes and hope for the best. A self driving car will be able to see the toddler from a significantly further distance and react (by hitting the brakes) far quicker than a human ever could, and thus the issue of running over a toddler is even rarer in the context of SDCs. Check out the "distance required to stop" section here: and you should see what I mean.
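The stopping-distance point can be made with back-of-envelope numbers (illustrative assumptions: roughly 1.5 s human reaction time vs. 0.1 s for an automated system, braking at about 7 m/s² on dry pavement):

```python
def stopping_distance(speed_ms, reaction_s, decel=7.0):
    """Total stopping distance: distance covered at constant speed during
    the reaction delay, plus braking distance v^2 / (2a)."""
    return speed_ms * reaction_s + speed_ms ** 2 / (2 * decel)

v = 13.4  # ~30 mph, in m/s
human = stopping_distance(v, reaction_s=1.5)  # ~33 m
car = stopping_distance(v, reaction_s=0.1)    # ~14 m
print(round(human, 1), round(car, 1))  # 32.9 14.2
```

At 30 mph the faster reaction alone cuts total stopping distance by more than half, so many "dilemma" scenarios simply never develop.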

I have to agree with Michael above. This episode was ineffective and pointless. You guys can do better.

Sep. 27 2017 05:48 PM
Michael Kalm from Salt Lake City, UT

Oy. A program that started out great, with two minutes regarding automation, went quickly downhill with the 10-year-old program that was kind of dumb, to now a current program that is totally dumb. As Matthew Mckeown properly pointed out in a comment above, the trolley question with pushing the man never considers the possibility of suicide to save the five men. If that had been considered, it would have been a more direct parallel to the scenario of the driverless car, where you are posing that the passenger commit suicide to save the pedestrians.
BUT - the whole thing is dumb. It reminds me of the old joke about how NASA supposedly spent $6,000 on developing a ball point pen that would write in any position, weightless in space - and the Soviets used a pencil. Pedestrians in the street? Are you kidding me? Just as we have educated pedestrians to stay out of railroad crossings when the gates come down, don't you think that pedestrians can be educated to stay out of the streets unless they have a walk sign?
So why would a pedestrian be in the street so suddenly that the driverless car couldn't safely stop? Well the pedestrian may be trying to commit suicide. Are you suggesting that the passenger in a driverless car, choose to commit suicide in order to save a pedestrian from the act of committing suicide? How dumb is that? Or maybe the pedestrian is a toddler, who wandered away from his parent and suddenly stepped into the street? Gee, a mother or father who was not exercising proper oversight of a child. Does that happen in real life where there are no driverless cars? Of course it does. We call it a terrible accident. The parent has to deal with their grief and remorse. But still, accidents happen. We shouldn't expect a passenger in a driverless car to commit suicide to prevent an accident of someone else's creation.
Too bad you didn't just expand the discussion on the economic catastrophe that is looming with automation.

Sep. 27 2017 05:29 PM

omg I can't believe you did an episode about this topic! I am a huge advocate for driverless cars and have been talking them up to my friends for years. Whenever someone would ask me, "Well what about if the driverless cars encounter [Trolley Problem Scenario]?" I have always said that I really don't think a Trolley Problem would honestly occur in real life... When is there ever a situation on the road in which there are ONLY two or three possible responses? It just sounds like a thought experiment and not reality. And anyway, a driverless car of the future, which cannot be distracted, has 360-degree laser vision and runs on a super advanced neural network, would perform better in a Trolley Problem than a human driver. So even if a Trolley Problem did occur in the real world, I'd be happy to let the car handle it.

Sep. 27 2017 05:14 PM
Jim G from Omaha

How about I just say no, I don't want one? Problem solved.

Sep. 27 2017 05:06 PM

The driverless car isn't a problem.
It is simple.
Cars are designed to save the people inside them (crumple zones, seat belts, air bags). The safety of the people outside has never been considered. If the AI hits people outside the car, you are not responsible for the program.

Sep. 27 2017 02:43 PM
Timothy Brummett from Austin TX

Jad, Jad, Jad! Why the hell would I leave this moral dilemma up to programmers? My set of morals differs from others'. Shouldn't I be able to tell the car what to do in case of the trolley issue? That should be the first thing it asks me if I buy one. "Would you prefer to save a busload of nuns or save yourself?"

Sep. 27 2017 12:26 PM
Matthew Mckeown from Montreal, Canada.

I'm surprised by the discussion surrounding (this version of) the trolley dilemma, as it always seems to miss out on a fundamental choice that is implicit in the premise yet does not seem to be presented. The choice as expressed here seems to be a. do nothing, in which case 5 people die, or b. push a person from the bridge to their death so that 5 people may survive. However, if one person falling from the bridge could stop the trolley, and there are two people on the bridge, as the premise clearly states, then a third option emerges from the premise: c. jump from the bridge yourself, dying so that 5 people may live and no one need be murdered. Choice b. implies a judgement call that it is better to kill the other person on the bridge than to sacrifice yourself, which is both a practical and moral decision.

Practically, is pushing this person off the bridge more likely to successfully stop the train?

Morally, do I have the right to take this action?

If the answer to either of these questions is "no" then the only logical choice is option c., self-sacrifice. As self-sacrifice is absent from the premise of much of this discussion, one wonders if it will also be absent from the discussion when programming Self-driving vehicles. Would a self-driving car, for example, drive itself into a highway median to avoid hitting a person in the lane in front of itself? What about a member of an endangered species? What level of injury to the passenger would be acceptable to save the lives of people outside the vehicle? The rabbit-hole of questions only gets deeper and stranger from there.

I wonder, if subjects were presented with the premise but without the options, how many of the respondents would conclude that the best choice is self-sacrifice. By inspecting even a simple premise a little more broadly than presented, options, however strange they are, may present themselves. One wonders what options we miss by being presented only a set of choices based on a premise understood too narrowly.

Sep. 27 2017 12:09 PM
Ruso from New York City

There is a big difference between the two trolley dilemmas which does not change by changing the means of pushing the large man off the bridge. In the first dilemma you have two general groups/positions of people: A) 6 people on tracks, and B) 1 person, me, the person deciding, standing on the side. If we ignore where the train is running, the given is the same, and I'm only deciding on the quantity of survivors.
In the case of the second dilemma the groups and positions change. There are two groups of people: A) 5 people on tracks, and B) 2 people, including me, on the bridge. The large man and I are at the same location, under the same conditions. This changes the whole ball game and introduces a different set of moral questions, such as: if dropping someone off the bridge might stop the train (and let's ignore the question of how effective this strategy is, etc.), why should I push the large man and not sacrifice myself? In a way this is more similar to the dilemma with the baby, where you choose between the well-being of others VS the well-being of "your team". No wonder this question involves a different part of the brain than the original trolley dilemma; they are different at the core.

Sep. 27 2017 12:03 PM
Robin from Chicago

1. Hypothetical situations may create a change in the brain prior to the mental exercise given. I know when I consider something hypothetically, it feels kind of challenging, fun, exciting, risk-free… couldn't this completely change the interpretation of what regions light up, etc.?

2. If you try to create a situation that is real, a moral dilemma that is actually happening, this might not pass IRB and might not be significant enough to matter much to someone's brain. Thus, it can't really be tested.

3. Is life so simple? Do we actually know that pushing the large man will save the others? How can we know? We don't get this certainty in life. If some authority tells us it is so, why do we trust them? How can we trust them? Aren't some people more compliant with authority than others? We know this would be a stress response, and so some people fight, others freeze, and others flee. So people don't respond similarly, based on their wiring, their trauma histories, etc.…

4. Pushing a large man may seem risky; he could turn on us and throw us in front of the train!

5. Pulling a lever… who gave me this lever? Why me? How do I know it works? Why would I believe someone in this situation?

6. I think more about people controlling drones that kill. This is actually happening. Study them! People in the military… how do they make these decisions?

7. With self-driving cars, wouldn't there be highway control rooms that look at large screens for accidents, problems, and malfunctions? If there are no humans in the cars and trucks, someone needs to be monitoring. A lot of potential jobs in this. Hacker risks… or human risks. People could blockade highways… if the cars are programmed to not hit people, people could intentionally cause disruptions by standing in front of cars. Also, people could hack the code and create discriminatory or terroristic things: ramming into buildings, homes, etc… purposely killing people of a certain race or age…

8. I believe in science and believe we must study things. However, it is very limited and not generalizable to real life much of the time. We jump too far to conclusions when the simulation is not close enough to mimic real-life decisions.

Sep. 27 2017 11:19 AM
Michael from Kansas City

One major difference between the trolley dilemma and a self-driving car running down a pedestrian vs. crashing into something is that a modern car is ridiculously safe. If you run down a pedestrian you will very likely kill or maim them, while in a modern car you can walk away from a highway accident.

On the trolley dilemma I am on the side of trading the one man's life for the group. However, if the group of people were within a cage specifically designed to take a direct hit from a trolley while keeping the occupants safe, you trust the system and save everyone.

I think this is how self-driving cars will likely make decisions. They will choose to destroy themselves in order to save all human lives. There will of course be exceptions where things do not work out. But overall, society will be significantly safer when computers are driving us around.

Sep. 27 2017 10:53 AM
Caleb from Minnesota

For me the push-man-off-bridge dilemma was easy, and I don't think it's at all the same as flipping a lever. The reason people think "no I would absolutely not push the man over" is because it would be just as easy to simply sacrifice yourself instead. Why murder when you can sacrifice? Throw yourself over, problem solved.

As far as the baby goes, this one is definitely more difficult, but I have to go with don't kill the baby. In a real-world scenario, I'd be ready with my hand to cover her mouth in case the baby started making noise, leaving her nose exposed so she can at least still breathe. Aside from inserting my own ways around the scenario, I still feel like killing the baby would be the wrong thing to do. Allowing others to do harm seems more acceptable than doing harm, and I think THAT'S the foundation of what drives our decisions in these scenarios.

Part of the reason I conclude this is largely based on my Christian belief in a God. The Bible says we were created in God's image, thus having his sense of morals. While yes, humans were given the 10 commandments to serve as an unquestionable guideline for God's people at one point, the morals themselves were already there, having been created in God's image.

In regards to the "allowing others to do harm instead of doing harm yourself" piece, that too is a reflection of God's actions as told by the scriptures. For example, the whole underlying history of the Bible revolves around Satan's challenge to God in the garden of Eden. Satan challenged God's right to rulership, saying: "humans won't serve you if given the choice". Satan deceived Adam and proved himself right. So something had to be done to prove Satan wrong, for all eternity. Just as a perfect man chose to disobey God, a perfect man would have to maintain obedience for his entire life. That perfect man later ended up being Jesus. In addition to this, though, more was needed. In the garden of Eden, Satan suggested that man doesn't need God to thrive. To prove Satan wrong, God has allowed mankind to rule themselves for thousands of years, trying every type of government they possibly can. Not one government has yet succeeded at achieving lasting peace. Thus, God has allowed billions to suffer at the expense of the greater good, paving the way for mankind to flourish forever in a restored paradise. This was God's original purpose for mankind, and that purpose has not changed. But in order to restore his purpose, he must first prove Satan a liar by showing that mankind cannot direct their own step.

Our struggle with morals and our universal tendency towards the same conclusions in certain matters seem to directly reflect those of God, as shown in his word. This explanation offered by the Bible seems to make the most sense, and my being a Christian makes it easy for me to soak it in. I also feel like the explanation "our morals come from deep-seated ancestry" is just a lazy blanket statement with no real leg to stand on.

Sep. 27 2017 09:39 AM
Kyle Evans from Florida

The most just and non-discriminatory way I could imagine to do this would be to base the decision on the probability of saving whoever is in danger. For example, if there were still some risk that the driver could be killed even if the car attempted to save them instead of the pedestrian, then we should save the pedestrian, or vice versa.

In cases where there are more lives to be saved on one side of the decision or the other I feel that it is improper to speculate that the worth of the single individual may be greater, due to our subjective experience. Therefore, in this scenario we should favor the most lives saved.

In cases involving a single pedestrian and a single driver where the probabilities of death and prevention are equal, I am tempted to say to save the younger party. However, I believe this is an error in cognition, because there are factors such as health or other accidents in life that make it impossible to predict which party would live longer regardless of age. Therefore, I feel it may be just to favor the pedestrian: they are not the one who has chosen to use this technology or is benefiting from it at the moment. The choice to enter the vehicle should come with the acceptance of the minute risk to your life. The pedestrian has made no such decision and accepted no such risk, and therefore should not be sacrificed on the altar of self-preservation.

In cases involving two drivers where the probabilities and all other factors presented are identical, it becomes very tricky to decide who to save without speculating on the value of the individual. I don't think we could use factors such as health conditions and income to predict this; there have been destitute and infirm people who have done great things for the world. The only way I can see to avoid such speculation would be something similar to a coin flip in this circumstance; however, I am unhappy and slightly disturbed by this logic.

As you can see, there is no small set of rules that we could develop to solve this problem; there will be many scenarios that need to be thought out individually. I'm sure there are many other scenarios I did not talk about; however, this is exhausting and I have already written a book. I simply think that we should use logic to avoid any kind of discrimination based on the person, using only the factors of the scenario, such as number of lives and probability of death, etc., to make such decisions, due to the subjectivity of our appraisal of a person's value.
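For what it's worth, the rules above compose into an unambiguous procedure. Here's a minimal sketch of it (the function name, the group tuples, and the tie-breaking order are all my own invention, just to show the idea; a real car would obviously need far richer inputs):

```python
# Sketch of the commenter's decision rules: compare survival probabilities
# first, then counts, then pedestrian status; coin-flip only as a last resort.
import random

def choose_group(groups):
    """Each group is (label, n_people, p_survival_if_saved).
    Returns the label of the group the car should try to save."""
    # 1. Favor the group with the better chance of actually being saved.
    best_p = max(g[2] for g in groups)
    candidates = [g for g in groups if g[2] == best_p]
    # 2. Among ties, favor the most lives saved.
    best_n = max(g[1] for g in candidates)
    candidates = [g for g in candidates if g[1] == best_n]
    # 3. Among remaining ties, favor a pedestrian over a driver, since the
    #    pedestrian never accepted the technology's risk.
    pedestrians = [g for g in candidates if g[0] == "pedestrian"]
    if pedestrians:
        candidates = pedestrians
    # 4. Still tied: the coin flip reluctantly accepted above.
    return random.choice(candidates)[0]
```

Note that every rule here uses only facts of the scenario (counts and probabilities), never an appraisal of who the people are, which is exactly the constraint argued for above.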

Sep. 27 2017 07:48 AM
Jeremy from Sydney

I think self-driving cars should just have a toggle, selfish or selfless, which sets how the car behaves at the trolley problem's decision point: if the driver has selected selfless mode, kill the driver; in selfish mode, kill the pedestrian.

Sacrifice and decision should be placed on the individual: not the engineer, not the corporation, not the government.

The driver is still responsible for that decision. As is currently the case when driving a normal non-self driving car.
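The toggle idea is trivially small to state in code, which is part of its appeal. Here is a hypothetical sketch (all names invented for illustration) showing that the policy choice, and therefore the responsibility, lives in a single owner-set value:

```python
# Hypothetical sketch: the owner sets the policy once; the engineer only
# wires the setting through to the no-win branch of the planner.
from enum import Enum

class Mode(Enum):
    SELFLESS = "selfless"  # sacrifice the occupant in a true no-win case
    SELFISH = "selfish"    # protect the occupant in a true no-win case

def unavoidable_collision_target(mode: Mode) -> str:
    """Consulted only in the (rare) genuine trolley case."""
    return "occupant" if mode is Mode.SELFLESS else "pedestrian"
```

Logging the selected mode alongside each trip would then document that the decision belonged to the individual, as the comment argues.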

Sep. 27 2017 03:37 AM
