October 30, 2024

War machines with minds; robots take the place of soldiers

Many of us have heard of lethal aerial drones, or UAVs (unmanned aerial vehicles used for surveillance or strikes), especially in connection with the CIA attacks on al-Qaeda and the Taliban in Pakistan, an ally of the United States. Now scientists are researching land-based robots for war purposes. The debate ranges from soldier safety to “robot ethics.”


This archived article was written by: Carlie Miller

Can wars be fought by unmanned, thinking machines? Bob Quinn of the robot manufacturer QinetiQ believes so – and thinks it’s safer for humans, too. “The closer you are to being shot, the more you understand the value of having a remote weapons capability,” Quinn said.
Though war robotics is considered a promising advancement in military technology, many senior military personnel find it a bit too similar to the popular science-fiction “Terminator” films.
Quinn states that “the weaponised robots only operate under the control of the soldier and never independently.” Peter Singer, author of Wired for War, reasons otherwise: “the human reaction time when there’s an incoming cannon shell is basically we can get to mid-curse word … [This] system reacts and shoots it down in mid-air. We are in the loop. We can turn the system off, we can turn it on, but our power really isn’t true decision-making power. It’s veto power now.”
A big factor in the discussion is whether, if robots are making their own decisions, we can be sure the machines are obeying the rules of war and hitting the correct targets. Patrick Lin, commissioned by the U.S. military to research “robot ethics,” plainly asks, “When you talk about autonomous robots, a natural response might be to program them to be ethical. Isn’t that what we do with our computers?”
One thing is for sure: the Pentagon’s driverless vehicle EATR will need some careful, ethical programming before it sees any action. The aptly named EATR refuels itself on long journeys by “eating” any organic material – if not programmed correctly, what is to stop it from eating human corpses?
EATR’s creator, Dr. Robert Finkelstein of Robotic Technology Inc. (no offense to Finkelstein, but doesn’t his name sound like another famous doctor’s? Frankenstein, maybe?), strongly defends his creation, insisting it will devour “organic material but mostly vegetarian.”
As if the good doctor’s statement weren’t vague and suspicious enough, he adds, “the robot can only do what it’s programmed to do, it has a menu” (translation: it will eat what the government wants it to eat).
If this doesn’t worry you, it worries critics such as Professor Noel Sharkey of the International Committee for Robot Arms Control: “You could train it all you want, give it all the ethical rules in the world. If the input to it isn’t correct, it’s no good whatsoever. Humans can be held accountable, machines can’t.”
Lin offers a solution to the questionable reliance on a robot’s judgment to discern friend from foe: “if there’s an area of fighting that’s so intense that you can assume that anyone there is a combatant, then unleash the robots in that kind of scenario. Some people call that a kill box – any target is assumed to be a legitimate target.”
My question, straying from the topic of robots, is this: how can a human or a machine decide whether everyone in a certain area (a kill box) is a combatant? Just imagine how civilian casualty statistics would be affected if Lin’s suggestion of a robotic kill box were actually used.
The emergence of military robotics brings the pro of limiting human endangerment and the con of murky ethics. There are also the advantages and disadvantages of “killer robots” compared with human soldiers. Many robotics researchers note that human soldiers have faults that robots can easily avoid, such as emotions.
Finkelstein says that “robots that are programmed properly are less likely to make errors and kill non-combatants, innocent people, because they’re not emotional, they won’t be afraid, act irresponsibly in some situations.” Christopher Coker of the London School of Economics, however, argues that “we should put our trust in the human factor.”
Coker explains that though the military sees “the human factor” as its weakest link, it is in fact its strongest. He says machines will never have the technology to mimic the “warrior ethos,” the mindset and conscience of the trained soldier.
Whether you side with the robotics-advancing Quinn and Finkelstein or the skeptical Singer and Sharkey, one thing is certain: warfare will be changed forever.