Artificial Intelligence And Human Stupidity
Oh, what a lovely war...

If you had to choose between being shot by a human or by a robot, which would you pick? By choosing the human, you are probably gambling on the slim chance that a human might show compassion when looking into your eyes. A robot programmed to kill would show no such compassion. Your only chance of surviving a robot attack would depend on how the robot had been programmed to recognize that it had killed its target. If the robot assumes it has killed you when you lie motionless on the ground, you might save yourself by doing exactly that while the robot is busy killing somebody else. However, if the robot has been programmed with your name after #eliminate, and an order to place a small explosive device in your skull no matter what, I’m afraid there is not much you can do about it.

You must not think that the very idea of a killer robot is a figment of my imagination, or the main character of a science fiction movie. Serious scientists and designers are working around the clock on what they call “lethal autonomous weapons systems”.

Currently, unmanned drones equipped with highly sensitive cameras are controlled from thousands of miles away from the spot where they carry out a surgical strike. This is potently portrayed in the film “Drone”, which underscores the fact that killer drones are not free of human error.

Although not yet a reality, pre-programmed autonomous weapons – robots, in other words – are the logical next step in the short history of automated warfare. If it does see the light of day, this technology would make nuclear weapons look completely obsolete. Just imagine what 100,000 fully automated robots or miniature drones could do to an enemy. Manned ground interventions would no longer be necessary, and soldiers would be able to follow the attack from the comfort of their own homes. Carrying the data of its target, the drone would carry out its lethal programme without (it is hoped) any collateral damage – assuming that the input data is correct to start with.

Proponents of this horrific version of Robocop base their support on cheapness and, believe it or not, ethical values – as if killing could be ethical. The cheapness stems from the fact that soldiers would no longer be sent into impossible situations to either get killed or return home suffering from post-traumatic stress disorder. Some six years ago, the cost of maintaining a soldier in Afghanistan was approximately $850,000 per year, with a further $2 million per casualty, according to the US Department of Defense. Who said that life doesn’t carry a price tag?

The ethical value of robotic warfare is so hard to defend that there is always someone out there ready to do just that. Ronald C. Arkin is Regents’ Professor of Computer Science and Director of the Mobile Robot Laboratory at the Georgia Institute of Technology. He sincerely believes that lethal autonomous weapons are ethical and that, since war is here to stay, we might as well keep it clean. In the abstract of his article entitled “The Case for Ethical Autonomy in Unmanned Systems,” he writes:

The underlying thesis of research in ethical autonomy for lethal autonomous unmanned systems is that they will potentially be capable of performing more ethically on the battlefield than are human soldiers. In this article this hypothesis is supported by ongoing and foreseen technological advances and perhaps equally important by an assessment of the fundamental ability of human war fighters in today’s battlespace. If this goal of better-than-human performance is achieved, even if still imperfect, it can result in a reduction in non-combatant casualties and property damage consistent with adherence to the Laws of War as prescribed in international treaties and conventions, and is thus worth pursuing vigorously.


Discriminating between soldiers and civilians – one of the prerequisites for “ethical” warfare – is far from clear-cut, unless the nuances of every war situation the robot is likely to encounter are programmed into its software. One problem the robot may face is having to differentiate between an unarmed, retreating soldier and an armed civilian fighter. The robot can only reason and adapt its behaviour within the confines of its software, which must somehow encode the difference between soldiers and civilians. Of course, we are assuming here that the robot can actually reason, and not just follow, to the letter, some built-in line of action.
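To see how brittle such programmed rules can be, consider this purely illustrative sketch (none of it is from Arkin's work; the attributes and rule are hypothetical) of the kind of simple combatant classifier the paragraph warns about:

```python
# Illustrative only: a naive, rule-based "combatant" classifier.
# The single programmed rule - armed means target - cannot capture
# the nuances described above.

def is_target(person: dict) -> bool:
    """Decide whether to engage, using only the pre-programmed rule."""
    return person["armed"]

retreating_soldier = {"armed": False, "uniform": True,  "retreating": True}
armed_civilian     = {"armed": True,  "uniform": False, "retreating": False}

print(is_target(retreating_soldier))  # False - spared, but for the wrong reason
print(is_target(armed_civilian))      # True - a civilian is marked as a target
```

The rule never consults the `uniform` or `retreating` attributes at all: whatever situational nuance was not anticipated in the software simply does not exist for the machine.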

In order to make a moral choice, we must often analyse the situation we find ourselves in and adapt our response accordingly. A classic example of this is Philippa Foot’s “trolley dilemma”. Proposed in 1967, it poses a question of choice, and of the consequences of the choice that is made. Imagine the ethical dilemma if you were faced with a runaway trolley hurtling toward five track workers. By diverting the trolley to a spur where just one worker stands, you could save five lives. The choice seems simple enough. But what if the only way to save the five were to push a fat man off a bridge that crosses the track, thus blocking the trolley? Would the choice be so simple? The choice you have, and the decision you take, cannot be based on algorithms alone. In both cases, the maths is the same.
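That last point can be made concrete with a small, hypothetical sketch (the function and its inputs are mine, purely for illustration): a strictly utilitarian decision procedure sees no difference between the two variants of the dilemma, because it only counts lives.

```python
# Illustrative sketch: a purely utilitarian "choice function" that
# compares nothing but the body counts of the two options.

def utilitarian_choice(deaths_if_act: int, deaths_if_abstain: int) -> str:
    """Pick whichever option kills fewer people."""
    return "act" if deaths_if_act < deaths_if_abstain else "abstain"

# Variant 1: divert the trolley to the spur (1 dies instead of 5).
print(utilitarian_choice(deaths_if_act=1, deaths_if_abstain=5))  # act

# Variant 2: push the man off the bridge (1 dies instead of 5).
print(utilitarian_choice(deaths_if_act=1, deaths_if_abstain=5))  # act
```

The inputs, and therefore the output, are identical in both variants, yet most people judge the two acts very differently. Whatever distinguishes diverting from pushing is not in the numbers, and so not in this algorithm.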

Acute situations during wartime change by the hour. What was valid before the robot entered its zone of action might no longer hold at the time of engagement. To carry out a “just war” – a concept that goes back as far as Augustine – soldiers must not enjoy killing. There is no place for nihilism or sadism amongst soldiers, but rather a realization that they find themselves in exceptional circumstances, or in legitimate defence.

War is only just if those who carry it out do so despite themselves. This tension disappears with machines. – Nolen Gertz (philosopher)

Making lethal autonomous weapon systems ethically acceptable, on the grounds that wars can be won with a minimal number of casualties, will only lower the acceptance threshold for such conflicts among the general population. In this way, killing becomes easy, because it becomes automated, in a world where robots have a “licence to kill.”

Some would argue that since robots are already taking our jobs, the very least they could do is fight our wars. But is this the world that we, or the robots, want?



gskaye