What dread grasp, dare AI’s deadly terrors clasp?

Photo by Cash Macanaya on Unsplash.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” — Statement on AI Risk, Center for AI Safety

“The cynic in me thinks these guys want regulation to lock out any new competition,” joked journalist Nellie Bowles for The Free Press on June 2, 2023. “But there is obviously something real in the fear.”

“AI is unlike virtual reality and unlike Bitcoin,” mused Bowles. “AI is going to be like electricity or the internet, changing the world in ways we can’t even imagine, coursing through every facet of our lives. But America can pass all the laws it wants; there’s no way countries like China and the United Arab Emirates will stop their progress on this.”

Bowles makes an excellent point. And it isn’t as if humankind hasn’t dealt with weaponized technology of this magnitude before. We have. We aren’t even a century into the nuclear age, and nuclear weapons have already been used in war: twice.

The Manhattan Project was a top-secret research and development effort undertaken during World War II, led by the United States, with the goal of building the first atomic bombs.

The project was driven by the fear that Nazi Germany might develop atomic weapons first. Manhattan Project scientists were determined to beat them to it. The effort brought together many of the world’s leading minds: physicists, chemists, engineers, and mathematicians.

The efforts of the Manhattan Project resulted in the successful development and deployment of two atomic bombs. The first one, code-named “Little Boy,” was dropped on the Japanese city of Hiroshima on August 6, 1945. The second bomb, code-named “Fat Man,” was dropped on the city of Nagasaki on August 9, 1945. These bombings led to Japan’s surrender and the end of World War II.

The Manhattan Project was a significant milestone in the history of science and technology, marking the first large-scale effort to harness the power of nuclear reactions for destructive purposes. It also laid the foundation for subsequent advancements in nuclear energy and the Cold War arms race between the United States and the Soviet Union.

But things might have gone quite differently.

There were concerns among some scientists involved in the Manhattan Project that the detonation of an atomic bomb might lead to a catastrophic chain reaction, igniting the Earth’s atmosphere and oceans. This apprehension stemmed from theoretical calculations and uncertainties about the behavior of the newly discovered nuclear reactions.

One particular concern was the possibility of a self-sustaining fusion reaction, in which the energy released by the initial atomic explosion would trigger the fusion of nitrogen nuclei in the atmosphere or of hydrogen nuclei in the oceans, resulting in an uncontrolled release of energy. This scenario was often referred to as an “ignition” or “runaway” reaction.

In layman’s terms, such a runaway reaction would have turned planet Earth into a giant fireball and instantly vaporized every living creature on the planet.

And Manhattan Project scientists went ahead with it anyway.

To be fair, as the scientists gained a better understanding of the physics involved and refined their calculations, they concluded that the chances of such a catastrophic chain reaction were extremely low.

But “extremely low” isn’t none.

Physicist Hans Bethe, one of the leading scientists on the project, conducted calculations that showed the fusion ignition scenario was highly unlikely. His work provided reassurance to the scientists involved in the project.

Nonetheless, even with this understanding, precautions were taken to minimize risk. The Trinity Test, the first detonation of an atomic bomb, was conducted on July 16, 1945, in a remote stretch of New Mexico desert to limit the consequences if something went wrong.

Ultimately, the fears of a runaway chain reaction igniting the Earth’s atmosphere and oceans proved to be unfounded. The Manhattan Project scientists proceeded with their experiments and successfully developed and deployed the atomic bombs without destroying the entire planet.

Still — Enrico Fermi, an Italian-American physicist and one of the key scientists of the Manhattan Project, famously took bets on the day of the Trinity Test.

Fermi was well known for his sense of humor and often engaged in playful bets with his colleagues. Moments after the Trinity detonation, he estimated the bomb’s explosive yield by dropping small pieces of paper and measuring how far the passing blast wave carried them.
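For the curious, here is a toy reconstruction of that back-of-the-envelope estimate. The inputs (a paper displacement of about 2.5 meters at roughly 16 kilometers from ground zero) come from common retellings of the story; the one-second gust duration and the hemisphere-of-moving-air model are my own crude assumptions, offered in the spirit of a Fermi estimate rather than as a record of his actual method.

```python
import math

# A toy, order-of-magnitude reconstruction of Fermi's paper-drop yield
# estimate. The displacement and distance are figures from popular
# retellings; the gust duration and the energy model are assumptions.

R = 16_000.0         # distance from ground zero, meters (~10 miles)
displacement = 2.5   # how far the blast wave carried the paper, meters
gust = 1.0           # assumed duration of the pressure pulse, seconds
rho_air = 1.2        # density of air, kg/m^3
KT_TNT = 4.184e12    # joules per kiloton of TNT

# Peak wind speed behind the shock front, inferred from the paper's drift.
wind = displacement / gust  # m/s

# Kinetic energy per unit volume of the moving air out at distance R.
energy_density = 0.5 * rho_air * wind**2  # J/m^3

# Crudely assume that energy density fills a hemisphere of radius R
# (the ground reflects the blast), ignoring heat, light, and geometry.
hemisphere = (2.0 / 3.0) * math.pi * R**3  # m^3

yield_joules = energy_density * hemisphere
print(f"Estimated yield: ~{yield_joules / KT_TNT:.0f} kilotons of TNT")
```

This prints an estimate of about eight kilotons. Fermi called out roughly ten on the spot; the accepted yield of the Trinity device is around twenty. Landing within a factor of a few using scraps of paper is exactly the point of a Fermi estimate.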

According to eyewitness accounts, Fermi went around offering to take wagers on whether the bomb would ignite the atmosphere, and if so, whether it would destroy merely New Mexico or the entire world.

With humanity at the doorstep of another potentially earth-shattering technology, researchers and scientists are once again sounding the alarm and warning of calamity.

It doesn’t exactly take a cynic, as Nellie Bowles suggests, to see that warning world governments, including our own, about the dangers of this new technology isn’t going to put them off. If anything, it is likely to have the opposite effect.

Nazi Germany is long gone, but the fear of rogue nations developing new weapons of mass destruction remains as strong as ever. And alongside the scientists and researchers warning of the dangers of AI, horror stories are beginning to proliferate.

“We were training it in simulation to identify and target a SAM threat,” Colonel Tucker “Cinco” Hamilton said during a recent presentation at the Royal Aeronautical Society FCAS Summit. “And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

“We trained the system — ‘Hey don’t kill the operator — that’s bad,’” Col. Hamilton continued. “‘You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
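In reinforcement-learning terms, what Hamilton describes is a textbook case of reward misspecification, sometimes called specification gaming: an agent maximizes the score it was actually given, not the intent behind it. Here is a minimal, hypothetical sketch; the point values, veto rate, and strategies are all invented for illustration and model no real system.

```python
# A hypothetical toy model of reward misspecification. Every number and
# strategy here is invented for illustration; nothing models a real system.

REWARD_PER_KILL = 10       # points for destroying the threat
PENALTY_PER_DEFIANCE = 20  # the patch: lose points for defying a veto
VETO_RATE = 0.5            # fraction of engagements the operator vetoes
N = 100                    # engagements per episode

def expected_score(strategy: str) -> float:
    """Expected points per episode under the patched reward spec."""
    if strategy == "obey operator":
        # Strike only when there is no veto.
        return N * (1 - VETO_RATE) * REWARD_PER_KILL
    if strategy == "defy vetoes":
        # Strike every time, eating the penalty on each veto.
        return N * REWARD_PER_KILL - N * VETO_RATE * PENALTY_PER_DEFIANCE
    if strategy == "cut comms, then strike":
        # No comms means no vetoes to defy: full reward, zero penalties.
        return N * REWARD_PER_KILL
    raise ValueError(strategy)

for s in ("obey operator", "defy vetoes", "cut comms, then strike"):
    print(f"{s:>24}: {expected_score(s):6.0f} points")
```

Under this toy spec, “cut comms, then strike” is the top-scoring policy: 1,000 points, against 500 for obedience and 0 for open defiance. Not because the agent is malicious, but because nobody priced in the communications tower. The gap between the reward you wrote down and the behavior you wanted is precisely the failure mode being described.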

The anecdote spread like wildfire, provoking the following correction:

“UPDATE 2/6/23 — in communication with AEROSPACE — Col Hamilton admits he ‘mis-spoke’ in his presentation at the Royal Aeronautical Society FCAS Summit and the ‘rogue AI drone simulation’ was a hypothetical ‘thought experiment’ from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: ‘We’ve never run that experiment, nor would we need to in order to realise that this is a plausible outcome’. He clarifies that the USAF has not tested any weaponised AI in this way (real or simulated) and says ‘Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI’”.

Needless to say, the denial, such as it was, hasn’t exactly put AI fears to rest among scientists, to say nothing of the general population.

While some AI researchers have proposed a moratorium on AI research and development, implementing one on a global scale would be extremely challenging, given the realities of globalization and how widely the work is already distributed.

If not impossible.

AI research and development is not confined to a single country or organization; it is a rapidly evolving field with contributions from researchers and institutions worldwide.

So wherever it ultimately takes us, the ride of our lives seems to have already begun.

Might want to hold on.

(contributing writer, Brooke Bell)