“Fires don’t always go out when you’re done playing with them.”
There is a fierce debate raging right now in the fields of science and technology. There is always a fierce debate in the fields of science and technology, but this time it’s different.
At least, that's what the combatants would have us believe.
Humanity, they tell us, might be in danger from something tech-created, a data-crunching Frankenstein’s monster of astonishing power. Down this particular rabbit hole of technological advancement, the great sages of Silicon Valley are warning, humanity must go this far and no farther.
“The Great AI Deflation Bomb,” warned Rich Karlgaard for Forbes on April 20, 2023. “AI will be the great deflation bomb hitting professional services.”
“Only five months ago, most of us, even those who follow technology closely, had never heard of OpenAI, a private research organization founded by Elon Musk and others to make AI easier to use,” wrote Karlgaard. “Then on Nov. 30, OpenAI released to the public its AI app called ChatGPT. Within one week, a million people had signed up, and the AI era suddenly had shifted to warp speed.”
“Question: What do you think ChatGPT-like technology will do to the world’s educated, white-collar labor force?” asked Karlgaard. “Answer: carnage.”
“‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead,” reported the New York Times on May 1.
Here there be monsters, they tell us.
“Here there be monsters,” is something that might have looked nice on an old maritime navigational map — or it might be something that only sounded good to a Hollywood screenwriter. In any case, the warnings obviously didn’t work, because here we all are.
Someone eventually sailed off the edges of the known map because someone always does. Be they real or imagined, danger signs worked about as well before anyone circumnavigated the globe as they do now.
That is to say, they don’t work at all.
Oh, they work on most of us. Most of us are good, law-abiding citizens, trying to earn an honest living and quite busy enough navigating the various perils, pitfalls, and fleeting pleasures of the known world, thank you very much, with no time to broaden the horizons of humanity.
And then there are those other types.
They always want to know where the line is — so they can cross it, or get as close to crossing it as they possibly can. Every society, however authoritarian, has these rebels. Sometimes they are willing to obey the rules; sometimes not. Over a long enough timeline, they will eventually go into the forbidden unknown.
And they do it for an obvious reason. It’s a bit of human nature every bit as dominant today as it has always been. The proof is everywhere we look.
To gain an advantage, to get to market first, to see if they can: we humans are tinkerers and tool-makers. We’re problem solvers. And we like money and stuff.
This liking for money and stuff will be the reason that, despite all these public warnings, the tinkerers of Silicon Valley will go right on tinkering, trying to gain an edge over market competitors.
Humanity has also been embroiled in a global, universal, and eternal arms race since the beginning of recorded history — and probably long before. Gaining a military, tactical, or resource advantage over potential threats to the village/county/country is how we developed things like the chariot and plastic surgery.
Governments won’t heed the call to halt this research based on warnings about how dangerous it is.
On the contrary.
Those warnings about how dangerous AI might be are likely to make world governments double down on their research into that field. You can bet they believe their geopolitical opponents will.
There is one justification, one rationalization, that humans throughout history have twisted time and again to defend the indefensible, the dangerous, and the devil-may-care: “If I hadn’t done it, someone else would have, and they would have been way worse than me, so I was doing the world a favor, really.”
The ultimate, all-purpose excuse works as well for AI research as it does for forced labor and bioweapons research.
Suddenly, tech experts want to stop AI. If they do manage to halt humanity’s long march of technological advancement, it will be the first time in history.
Most of humanity’s many technological advancements — from the fork to the printing press to the internal combustion engine — have been opposed by some group or another. Galileo was tried and confined to house arrest for his breakthrough understanding of the fundamentals of the universe. Socrates was sentenced to death by poison for the same sort of crimes Silicon Valley tech experts are concerned about now: impiety and the corruption of the youth.
Questioning and expanding the bounds of human knowledge is, by its very nature, imprudent. The threat to existing power structures, social conventions, and traditions is a side effect of progress, not its intention.
What’s more, the youth can’t be protected from the forces that will shape the future any more than the previous generations could be insulated from the forces of technology, war, and world trade.
Whether AI is to be humanity’s savior or a harbinger of darkness, whether it is a terrible time-waster or an extinction-level event, there isn’t anything anyone can do to stop it.
(contributing writer, Brooke Bell)