A Real-life ‘Red Wedding’, Drones and Ethics Pt. 2

“The wars of the future will not be fought on the battlefield or at sea. They will be fought in space, or possibly on top of a very tall mountain. In either case, most of the actual fighting will be done by small robots. And as you go forth today remember always your duty is clear: To build and maintain those robots.” (The Simpsons, The Secret War of Lisa Simpson)


To start, I have immense reservations about writing on warfare. For one, I have zero experience of war and thus no conception of what it would feel like to be confronted with life-or-death situations. Second, I am a beneficiary of people who have sacrificed their lives.

But I think I’ve become so fixated on this topic because, after doing the research for Part 1, I can’t help but feel that something is being broken: certain rules that ought to be followed, morals… sexy stuff like that.

What I’m concerned with is not whether war is justified or not. For the sake of argument I’ll assume that it is. What I’m concerned with is whether or not the use of drones, or any remotely piloted machine, can be morally justified. Remote-controlled T-1000s, murderous C-3POs and missile-equipped Roombas would fall under this category too.

Now let’s say there are two nations engaged in a good ol’ fashioned war.

When two sides engage, it’s clear that each side wants to have an advantage over the enemy. The leaders of both sides have a moral obligation to put their army in the best position possible to win. You want to create a level of asymmetry in which the enemy suffers more while your own forces remain safe. It would be immoral not to minimize the risk of tragedy on your own side.

Furthermore, the fundamental thing about war is that all combatants are considered to be morally innocent; no one is assumed to be guilty. Under international law, you are only allowed to kill these innocent people if there is a reciprocal imposition of risk. This is a fancy way of saying that a soldier is given the right to kill IF they themselves are under the threat of being killed. Without this element of self-defense, there is no moral or legal right to justify killing anyone. The problem with drones, as I see it, is that they create a level of asymmetry so severe that it removes the reciprocity of risk needed to justify killing someone during war.

So imagine yourself as an armed soldier confronting an armed enemy who has the clear intention to kill you. You are justified in pulling the trigger because you meet that condition of reciprocal risk. Now replace yourself in that scenario with a drone. The idea I’m trying to get across is that you no longer have any moral justification to kill that enemy soldier because:

1. The enemy soldier is no longer an immediate threat to you and remains morally innocent

2. You are in no immediate danger of dying

An interesting question comes to my mind: if your enemy is no longer a threat to you, then what makes them an appropriate target?

Imagine that Country A sees Country B as an appropriate target for the use of force. If A meets B on the battlefield, then B’s forces are appropriate targets so long as they threaten injury. But if Country A never physically shows up, what makes B’s forces justified targets?

Paul Kahn has written extensively on the topic of machine warfare and proposes that the use of drones is no longer a matter of warfare, but of policing. The act of policing assumes that you are only targeting people who are morally guilty, and that it is only these people who should suffer injury. And that’s precisely how drone policy operates. For example, the US authorizes drone strikes on “al-Qaeda and its associated forces” “who pose a continuing and imminent threat to the American people”. It’s this, with far fewer marionettes.


But I guess what troubles me so much about the use of drones is this trend:

Only about a fifth of the members of the US Congress who decide whether to authorize U.S. military action have any military experience themselves. To me this shows an incredible disconnect between the consequences of war and the decision-makers who choose whether to enter it. Who knows, maybe they are emboldened to engage in more military action precisely because they have never had to put their own lives on the line; war would be something abstract.

I see drones exacerbating this disconnect because there is no risk for the guy who presses the button, or who makes the call to press the button, from thousands of miles away; they don’t even really have to witness it. What unsettles me is the distance between the suffering being caused and the situational feeling that might inhibit someone from causing that suffering.




4 thoughts on “A Real-life ‘Red Wedding’, Drones and Ethics Pt. 2”

  1. carragherhardt

    Your point about the number of veterans in the US Congress is really interesting. Do you think there’s anything we can reasonably do to bridge that disconnect?

    1. lhreyes Post author

      Thanks for commenting Carragh.

I guess one way would be some sort of mandatory military service so that decision-makers have at least some concept of combat, but I’m not really an advocate of that.

I’m pretty pessimistic about bridging the disconnect between decision-makers and war. It’s hard to implement empathy, and I would say the trend is toward an unbridgeable gap between decision-makers and consequences.

      As of now, the US government is awarding grants to universities to figure out a way to implement algorithms that create a sense of right and wrong and moral consequence into robots. In the near future, it seems that decision-makers won’t even have to make decisions anymore.

  2. isapinnell

Well done sir. You have managed to tackle a very tricky issue (I have never even considered wading into the morals of warfare), even adding in a little humour to keep it from being too dark.
    You raise a number of good points. In terms of the number of members of Congress who have experience with war, are there not generals who act as advisers to Congress and the president on these types of decisions?
    Also, in regards to your response to Carragh’s comment, “algorithms that create a sense of right and wrong and moral consequence into robots”????? This is starting to sound like the last Avengers movie (algorithm determines everyone who is a threat or will someday become a threat). I don’t find it plausible that you could develop an algorithm that can determine morality as each person has their own ideas of what is moral.
    Finally, “America, Fuck Yea” is now firmly stuck in my head…

    1. lhreyes Post author

      Thanks for reading!

There are definitely advisers with immense military experience who counsel Congress and Obama in matters related to war. But at the end of the day, as in any other profession, advisers only have so much influence over decision-makers.

As for ethical robots, I think the algorithm would be based on universal principles, e.g. the UN’s human rights law. The case could be made that an “ethical” robot wouldn’t be prone to the stresses that often result in human error (war atrocities). Still, I’m pretty skeptical about creating a robot that will always do the right thing.

      Oh, and “America, Fuck Yeah” is an absolute gem.

