(I reworked the CONOPS post into a D&D thread. Hopefully this gets some responses.)
There have been plenty of articles published recently debating the merits of Korea’s soon-to-be-drafted Robot Ethics Charter. We had a thread a few weeks ago on this very topic that didn’t go anywhere. I don’t want this thread to be about the charter itself, but about the fundamental issues the charter means to address: how machines ought to behave.
I think everyone will agree that the way the media reports technological and scientific news is embarrassing at best and deeply misleading at worst. For example, every article linked above cites Asimov as the primary cultural touchstone for this debate. Everyone has their own opinion on Asimov’s Laws, though most people agree they are terribly out of date and implausible. In any case, I think we can agree that the Laws are a poor starting point for a discussion of what machines should and shouldn’t do, given current and near-future technology and the tasks we actually set for machines. It is well known that the US military plans to make at least a third of its combat ground vehicles autonomous by 2015, and such autonomous machines will pay no attention to even the intent of the Laws.
So leave Asimov aside. The Naval Surface Warfare Center has recently proposed a CONOPS (Warning: PDF) for the use of autonomous weapons systems:
NAVSEA POSTED:
Let the machines target other machines
– Specifically, let’s design our armed unmanned systems to automatically ID, target, and neutralize or destroy the weapons used by our enemies – not the people using the weapons.
– This gives us the possibility of disarming a threat force without the need for killing them.
– We can equip our machines with non-lethal technologies for the purpose of convincing the enemy to abandon their weapons prior to our machines destroying the weapons, and lethal weapons to kill their weapons.
Let men target men
– In those instances where we find it necessary to target the human (i.e. to disable the command structure), the armed unmanned systems can be remotely controllable by human operators who are “in-the-weapons-control-loop”
Provide a “Dial-a-Level” of autonomy to switch from one to the other mode.
I’m not sure how the NAVSEA CONOPS would be implemented, and there are no details about how to ensure these guidelines are followed or what the penalties for transgressions would be. But I think it gives us a good first pass at the issues involved: what are machines allowed to do, and how much autonomy are they afforded in carrying out their tasks?
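To give us something concrete to argue about, here’s a toy sketch of what a “dial-a-level” engagement rule might look like in code. To be clear, none of this comes from the CONOPS itself; the names (TargetType, AutonomyLevel, authorize_engagement) and the three autonomy levels are my own invention, and a real weapons system would obviously be enormously more complicated. The only point is to show where the machine/human split actually has to be written down.

code:
# Toy sketch of the NAVSEA "let machines target machines" idea.
# Everything here is hypothetical -- the names and levels are mine,
# not the CONOPS's -- it's only meant to frame the discussion.

from enum import Enum

class TargetType(Enum):
    MATERIEL = "materiel"   # weapons, vehicles, equipment
    HUMAN = "human"

class AutonomyLevel(Enum):
    FULL_AUTONOMY = 3       # machine IDs, targets, and engages on its own
    HUMAN_ON_THE_LOOP = 2   # machine acts, but a human can veto in time
    HUMAN_IN_THE_LOOP = 1   # every engagement needs explicit human approval

def authorize_engagement(target_type: TargetType,
                         dial: AutonomyLevel,
                         operator_approved: bool) -> bool:
    """Return True if the system may engage the target.

    Encodes the CONOPS split: machines may engage materiel on their own
    (subject to the autonomy dial), but engaging a human always requires
    a human operator "in-the-weapons-control-loop".
    """
    if target_type is TargetType.HUMAN:
        # "Let men target men": never autonomous, always operator-commanded.
        return operator_approved
    if dial is AutonomyLevel.HUMAN_IN_THE_LOOP:
        return operator_approved
    # On-the-loop and full autonomy: materiel may be engaged automatically.
    return True

# Example: at full autonomy the machine may destroy a weapon on its own,
# but still cannot fire on a person without an operator's say-so.
assert authorize_engagement(TargetType.MATERIEL, AutonomyLevel.FULL_AUTONOMY, False)
assert not authorize_engagement(TargetType.HUMAN, AutonomyLevel.FULL_AUTONOMY, False)

Even in this cartoon version, somebody has to decide in advance what counts as “materiel” and how much weight an operator’s approval carries, which is exactly where the interesting questions live.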
Although the issues of robot ethics raised in the Navy’s proposal pertain specifically to war, they obviously extend beyond the battlefield. Noel Sharkey is quoted by the Herald above:
QUOTE:
The first prototype robot carers and companions for the elderly are already being tested.
“The robots are programmed to follow the elderly around and make sure they take their drugs. I find that highly concerning,” says Sharkey.
“I’m 58 now and I fear that in my future I will be dumped in a home where I am cared for by machines. We need to have a public debate and we need to have it now, before the robots really bore into our society.”
War is an ethically charged situation where the immediate consequences of actions are easy to see, and since the military will likely be the first to employ autonomous machines in large numbers, military examples are probably the best for discussion. Still, we shouldn’t forget that autonomous machines are appearing all around us and are increasingly responsible for the smooth functioning of our society, so don’t be fooled into thinking that this is merely an issue of who gets to shoot whom.
With this model in mind, I want to ask a series of questions about the various dimensions of the Navy’s proposal. Although I have my opinions, I don’t have good answers to most of these questions. I do think, however, that most of them don’t depend on some futuristic ideal of a perfectly conscious robot; they are questions that matter to us now, with our current and near-future level of technology. So assume you are answering based on technology that is currently available, or at least in the pipeline within the next, say, 10 years. If your answers depend on specific features of the technology, please say so.
What can machines do?
– Are there any tasks or operations that should be fundamentally off limits to a machine, and that only a person should be allowed to perform?
– Should military machines be able to target humans?
– Should military machines be able to target civilian infrastructure, like water and power supplies?
– Are there any tasks or operations that should be fundamentally off limits to people, and that only a machine should be allowed to perform?
How much autonomy should machines have?
– How much independence should a machine have in executing its task?
– Can a machine decide which tasks to execute, or should a human always make the decision? (That is, should a human always have final control over pushing the button?)
– How do we determine which decisions are appropriate for the machine, and which are appropriate for a human?
– Can a machine decide when, where, and how to execute a task?
– Should humans always have some input into the “control loop”? Should humans always have the ability to “flip the switch” and shut the machines down?
– Are there any situations in which the machine should be able to override human input?
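Since the “control loop” and “flip the switch” questions above are really questions about who gets the last word, here’s another toy sketch, again entirely hypothetical, of a command arbiter that decides whether a human abort or shutdown order beats whatever the machine wants to do. The point is just that “humans can always override the machine” is itself a design decision somebody has to encode somewhere.

code:
# Hypothetical command arbiter -- not from any real system. It only
# illustrates that the override policy has to be made explicit.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Command:
    source: str   # "human" or "machine"
    action: str   # e.g. "engage", "abort", "shutdown"

def arbitrate(machine_cmd: Command,
              human_cmd: Optional[Command],
              human_always_wins: bool = True) -> Command:
    """Pick which command the platform actually executes."""
    if human_cmd is None:
        return machine_cmd                  # no human input: machine proceeds
    if human_always_wins:
        return human_cmd                    # hard rule: human input is final
    if human_cmd.action in ("abort", "shutdown"):
        return human_cmd                    # safety commands always get through
    return machine_cmd                      # otherwise the machine may override

# With the default policy the human's abort wins; flip human_always_wins to
# False and the machine can ignore a human "engage" order -- exactly the
# kind of override the last question above is asking about.
print(arbitrate(Command("machine", "engage"), Command("human", "abort")))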
Who is responsible?
– If an autonomous machine fails, who is responsible: the manufacturer, the employer, or the machine itself?
– Once we determine who is responsible, how do we distribute punishment?
– How do we ensure that the parties determined to be responsible own up to their responsibility?