Dr. Frankenstein’s latest monster is a patchwork of our worst nightmares: Autonomous drone swarms programmed to kill in packs.

In the “What could go wrong?” annals of history, we may have just witnessed the beginning of the end of human life as we know it, if James Cameron is to be believed, anyway.

Jon Stewart may have been right during a recent appearance on The Late Show with Stephen Colbert: “The world will end, and the last words spoken by any human will be by some guy in a lab coat who says, ‘Huh, it worked.’”

The invention of the killer autonomous drone swarm, in theory a seamlessly integrated team of martial, extra-legal judges and executioners requiring no human input to use lethal force, has worked.

The moment Sarah Connor warned about may at last be upon us: Killer robots are no longer some hypothetical question human beings may face in some dark and distant future.

Killer robots are already here. The first human being, or more likely human beings, has already been killed by one. Recently, Israeli military forces unleashed a killer drone swarm against the terrorist organization known as Hamas.

“In May of 2021, Israel allowed the use of drone swarms to locate, identify, and attack Hamas militants, in what is likely the first-ever use of drone swarms in combat,” concluded David Hambling for New Scientist in June.

Human Rights Watch already coordinates a serious campaign against autonomous killing machines: the Campaign to Stop Killer Robots.

“Fully autonomous weapons would decide who lives and dies, without further human intervention, which crosses a moral threshold. As machines, they would lack the inherently human characteristics such as compassion that are necessary to make complex ethical choices,” concludes Human Rights Watch.

But is it already too late to stop killer robots?

If we’ve learned anything over the millennia, it is that art imitates life, that people will go to any lengths to kill each other, and that world governments are mostly good at closing the barn door after all the cows are out.

As in, much too late to stop a current crisis, but sometimes in time to avert future ones.

When it comes to war, however, we just can’t win for losing. As if warfare, like communism, might actually solve more problems than it causes, if only it were done properly for once.

If warfare were capable of solving humanity’s problems, we’d have run out of problems long ago. Yet we persist in war, down every twisted, violent rabbit hole, no matter what it costs us.

And it costs us.

The atom bomb, the hydrogen bomb, biological warfare. From the dreadful war weapons mankind created during the twentieth century arose an entirely new industry: Cosmetic surgery.

Cosmetic surgery might not even exist had it not been for the two World Wars we fought back-to-back between 1914 and 1945. It was during that period that mankind invented weapons of war that were not designed to kill.

On the contrary: Some of the most terrible weapons of the World Wars were designed specifically to maim soldiers, and horribly.

It was a psy-op: Shrapnel to take half a face, a limb, two limbs, an eye, both eyes. Burned, scarred, battered, scraped and barely slapped back together, disfigured veterans returned to their families and communities; the sight of them was intended to weaken support at home for the war effort.

To treat these wounded soldiers, medical doctors developed clever ways to reconstruct, and cosmetically improve, the appearance of someone missing, for instance, their nose, left ear and lower mandible.

Voilà: A cottage industry grew out of the manure of man’s inhumanity to man.

So far, the arms race humanity has been engaged in since the invention of the hand axe hasn’t done us much good. Sure, some of our best technology has come out of wartime, necessity being the mother of invention, but at what cost?

The price we will pay for unleashing killer robots on the world may be far worse than anything James Cameron or the Wachowskis could dream up.

Consider that, even as killer drones appear on the scene, “self-driving cars” aren’t really all that self-driving. In fact, if you own a newer model car, you might have already noticed that some of its “smart” functions aren’t really all that smart.

From braking unexpectedly when the car senses the vehicle ahead is too close, even when the driver needs no help braking, to scanning systems that fail to recognize different drivers if they are, say, wearing sunglasses, the problems are already apparent.

A brand-new Subaru Outback will warn you if you take your eyes off the road ahead for too long, for example, even if you happen to be looking for a space in a parking lot, yet it won’t automatically turn off a certain dome light left on by mistake.

The concerns about lethal autonomous robots, with their ability to take a life and the self-determination to do so once certain criteria are met, fall along the same lines as our problems with self-driving cars.

Will the autonomous armed drone programmed to kill enemy soldiers always recognize allied soldiers as such? What if a uniform is torn, or stained in such a way as to suggest the person wearing it is an enemy combatant impersonating a friendly?

These questions and many others, as Human Rights Watch and other organizations have pointed out, suggest that such a military program is very unlikely to conform to human rights standards as outlined by the Geneva Conventions.

Dr. Frankenstein’s latest monster is a perfect distillation of everything we’ve rightly come to loathe in ourselves: lack of empathy, self-centeredness and our capacity for great violence, all in one convenient package.

Leave it to human beings to create something that kills human beings without needing to be told to do so by human beings. It’s the ultimate outsourcing.

With this invention, our wars can be fought long after there aren’t any humans left to fight in them, or against them.

(contributing writer, Brooke Bell)