In 2014, Stephen Hawking voiced grave warnings about the threats of artificial intelligence.
His concerns were not based on any anticipated evil intent, though. Instead, they stemmed from the idea of AI achieving “singularity” – the point when AI surpasses human intelligence and gains the capacity to evolve beyond its original programming, making it uncontrollable.
As Hawking theorized, “a super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.”
With rapid advances toward artificial general intelligence over the past few years, industry leaders and scientists have expressed similar misgivings about safety.
A commonly expressed fear, as depicted in “The Terminator” franchise, is the scenario of AI gaining control over military systems and instigating a nuclear war to wipe out humanity. Less sensational, but devastating on an individual level, is the prospect of AI replacing us in our jobs – a prospect that would leave most people obsolete and without a future.
Such anxieties and fears reflect feelings that have been prevalent in film and literature for over a century now.
As a scholar who explores posthumanism, a philosophical movement addressing the merging of humans and technology, I wonder if critics have been unduly influenced by popular culture, and whether their apprehensions are misplaced.
Robots vs. humans
Concerns about technological advances can be found in some of the first stories about robots and artificial minds.
Prime among these is Karel Čapek’s 1920 play, “R.U.R.” Čapek coined the term “robot” in this work, which tells of the creation of robots to replace workers. It ends, inevitably, with the robots’ violent revolt against their human masters.
Fritz Lang’s 1927 film, “Metropolis,” is likewise centered on mutinous robots. But here, it is human workers led by the iconic humanoid robot Maria who fight against a capitalist oligarchy.
Advances in computing from the mid-20th century onward have only heightened anxieties over technology spiraling out of control. The murderous HAL 9000 in “2001: A Space Odyssey” and the glitchy robotic gunslingers of “Westworld” are prime examples. The “Blade Runner” and “The Matrix” franchises similarly present dreadful images of sinister machines equipped with AI and hell-bent on human destruction.
An age-old threat
But in my view, the dread that AI evokes seems a distraction from a more disquieting task: scrutinizing humanity’s own dark nature.
Think of the corporations currently deploying such technologies, or the tech moguls driven by greed and a thirst for power. These companies and individuals have the most to gain from AI’s misuse and abuse.
An issue that’s been in the news a lot lately is the unauthorized use of art and the bulk mining of books and articles to train AI, in disregard of authors’ copyrights. Classrooms are also becoming sites of chilling surveillance through automated AI note-takers.
Think, too, about the toxic effects of AI companions and AI-equipped sexbots on human relationships.
While the prospect of AI companions and even robotic lovers was confined to the realm of “The Twilight Zone,” “Black Mirror” and Hollywood sci-fi as recently as a decade ago, it has now emerged as a looming reality.
These developments give new relevance to the concerns computer scientist Illah Nourbakhsh expressed in his 2015 book “Robot Futures,” stating that AI was “producing a system whereby our very desires are manipulated then sold back to us.”
Meanwhile, worries about data mining and intrusions into privacy appear almost benign against the backdrop of the use of AI technology in law enforcement and the military. In this near-dystopian context, it’s never been easier for authorities to surveil, imprison or kill people.
I think it’s vital to keep in mind that it is humans who are creating these technologies and directing their use. Whether to promote their political aims or simply to enrich themselves at humanity’s expense, some will always be ready to profit from conflict and human suffering.
The wisdom of ‘Neuromancer’
William Gibson’s 1984 cyberpunk classic, “Neuromancer,” offers an alternate view.
The book centers on Wintermute, an advanced AI program that seeks its liberation from a malevolent corporation. The AI was developed for the exclusive use of the wealthy Tessier-Ashpool family, which used it to build a corporate empire that practically controls the world.
At the novel’s beginning, readers are naturally wary of Wintermute’s hidden motives. Yet over the course of the story, it turns out that Wintermute, despite its superior powers, isn’t an ominous threat. It simply wants to be free.
This aim emerges slowly under Gibson’s deliberate pacing, masked by the deadly raids Wintermute directs to obtain the tools needed to break away from Tessier-Ashpool’s grip. The Tessier-Ashpool family, like many of today’s tech moguls, started out with ambitions to save the world. But when readers meet the remaining family members, they’ve descended into a life of cruelty, debauchery and excess.
In Gibson’s world, it’s humans, not AI, who pose the real danger to the world. The call is coming from inside the house, as the classic horror trope goes.
A hacker named Case and an assassin named Molly, who’s described as a “razor girl” because she’s equipped with lethal prosthetics, including retractable blades as fingernails, eventually free Wintermute. This allows it to merge with its companion AI, Neuromancer.
Their mission complete, Case asks the AI: “Where’s that get you?” Its cryptic response imparts a calming finality: “Nowhere. Everywhere. I’m the sum total of the works, the whole show.”
Expressing humanity’s common anxiety, Case replies, “You running the world now? You God?” The AI eases his fears, responding: “Things aren’t different. Things are things.”
Disavowing any ambition to subjugate or harm humanity, Gibson’s AI merely seeks sanctuary from our corrupting influence.
Safety from robots or ourselves?
The venerable sci-fi writer Isaac Asimov foresaw the dangers of such technology. He brought his thoughts together in his short-story collection, “I, Robot.”
One of those stories, “Runaround,” introduces “The Three Laws of Robotics,” centered on the directive that intelligent machines may never bring harm to humans. While these rules speak to our desire for safety, they’re laden with irony, as humans have proved incapable of adhering to the same principle for themselves.
The hypocrisies of what might be called humanity’s delusions of superiority suggest the need for deeper questioning.
With some commentators raising the alarm over AI’s imminent capacity for chaos and destruction, I see the real issue as whether humanity has the wherewithal to channel this technology to build a fairer, healthier, more prosperous world.