Robot wars

Sat, Jun 23, 2012, 01:00

FANS OF science fiction, and notably of writers such as Isaac Asimov, have long been familiar with the looming ethical challenges posed by the development of “intelligent” machines capable of directing themselves. Is there a need to set limits to autonomous action, to hardwire into robots moral constraints akin to those supposedly guiding human actions? Asimov’s response was his “Three Laws of Robotics”, the first of which was that “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”

All good fun, and the stuff of fantasy. Well, not any more. Advances in battlefield technology mean that a range of autonomous, “thinking” killing machines are soon likely to be available to commanders. Indeed some, of a cruder variety, have already been deployed. Now the question is, should they be banned? Should human intervention and responsibility be required in any decision to kill, and should that requirement be enshrined in the rules of war and humanitarian law? Wendell Wallach, a scholar and consultant at Yale’s Interdisciplinary Centre for Bioethics and co-author of Moral Machines: Teaching Right From Wrong, says yes.

Part of the problem is where to draw the line. Defensive weapons such as Patriot and cruise missile systems can already be set to fire automatically when they detect an incoming missile. Landmines, likewise, detonate without any human decision. Or there’s Samsung Techwin’s remote-operated sentry “bot”, which works in tandem with cameras and radar systems in the Korean Demilitarised Zone. Currently the robots cannot fire on targets automatically, requiring human permission to attack, but a simple change could override all that.

The US air force is adapting some of its systems so that human intervention would occur only to stop inappropriate action by an automated weapon, rather than to specifically sanction a killing. Is that crossing the ethical boundary? And there are concerns that, quite apart from the ethical issues, such weapons may change the dynamic of war, making escalation into outright conflict easier. Do they make “friendly fire” or non-combatant casualties more likely?

The theoretical advantage for the deployer, of course, is that war could be fought “cleanly”, with minimal human casualties – and hence political fallout – on its side. Perhaps, however, the issue should be added to the international arms control agenda, cumbersome and slow-moving as it may be. The Asimov convention?
