Imagine if phone systems around the world were joined up via satellite, accidentally forming a new type of computer brain, which then wreaked havoc on an unsuspecting world.

If this story sounds dated, that’s because it is. It comes from the mid-1960s and is the premise of the short story “Dial F for Frankenstein” by Arthur C. Clarke [ii].
We have been worrying about the rise of the machines for a long time – “Dial F for Frankenstein” predates the Internet, smartphones and even the PC. It comes from a time when a computer needed its own room, and humans had yet to set foot on the moon.
This technophobia (fear of technology) goes back much further, to the dawn of civilisation. Arthur C. Clarke drew inspiration from Mary Shelley’s 1818 novel Frankenstein, which in turn drew on the ancient Greek myth of Prometheus.
With an innate fear of technology hard-coded into our psyche, it is hardly surprising that we intuitively believe people who say that AI will deliver a superintelligence that will kill us all.
“…if you build superintelligence, then it kills you.”
– Eliezer Yudkowsky [v]
Recent advances in generative AI (ChatGPT, Claude, Gemini, etc.) have raised the profile of AI safety, prompting the heads of various tech companies to call for moratoriums and controls to govern AI development [iii] [iv]. A term has even been coined for the probability of AI wiping out humanity: p(doom).
Doomers – those who believe that AI will wipe out humanity – say that recursive self-improvement, where an AI becomes smart enough to improve its own code, will trigger a runaway acceleration into superintelligence. In their belief system, this new entity, vastly smarter than us, will then decide to wipe out the human race. The toy model below shows the shape of this claimed feedback loop.
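To be clear about what is actually being claimed, here is a purely illustrative toy model of that feedback loop. The `gain` parameter and starting value are hypothetical; nothing about real AI systems says capability compounds this way.

```python
# Toy model of the Doomer "recursive self-improvement" claim.
# All parameters are hypothetical; the point is only to show why,
# IF capability gains compound, growth becomes a runaway curve.

def self_improvement_curve(capability: float, gain: float, steps: int) -> list[float]:
    """Each generation improves the next in proportion to its own
    capability -- the core assumption behind the runaway scenario."""
    history = [capability]
    for _ in range(steps):
        capability += gain * capability  # improvement scales with current capability
        history.append(capability)
    return history

# With a constant 20% gain per generation, capability compounds exponentially.
print([round(c, 2) for c in self_improvement_curve(1.0, 0.2, 10)])
# -> [1.0, 1.2, 1.44, 1.73, 2.07, ...]
```

The whole scenario rests on that compounding assumption holding indefinitely; if improvements instead hit diminishing returns, the curve flattens and there is no explosion.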
The problem with this line of reasoning is that it implies smart humans – anyone with above-average intelligence – would want to eliminate less smart humans. We know, however, that this is not the case. People with high IQs are actually less likely to exhibit narcissistic rivalry [vi], the very behaviour that Doomers predict for superintelligent AI systems.
While AI doom messages play well in sound bites and quotes, they are a distraction from the more immediate issues that AI is creating.
Where the Real AI Badness Lurks
Consider the environmental consequences of AI. Data centres are being built at an unprecedented rate to meet the computational needs of AI, consuming massive amounts of electricity [vii] and water for evaporative cooling. The second-order effects on the environment include increased burning of fossil fuels to generate electricity [viii] and shortages of water for farmers to irrigate crops [ix]. The current rate of data centre energy consumption is clearly not sustainable.
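For a rough sense of scale, here is a back-of-envelope calculation. Both constants are illustrative ballpark assumptions, not measurements; see the IEA chart in footnote [vii] for real figures.

```python
# Back-of-envelope: electricity use of one large AI data centre.
# Both constants below are illustrative assumptions.

FACILITY_POWER_MW = 100          # assumed continuous draw of a large AI campus
HOURS_PER_YEAR = 24 * 365        # 8,760 hours
HOUSEHOLD_KWH_PER_YEAR = 10_000  # assumed typical annual household consumption

annual_mwh = FACILITY_POWER_MW * HOURS_PER_YEAR           # 876,000 MWh
households = annual_mwh * 1_000 / HOUSEHOLD_KWH_PER_YEAR  # convert MWh to kWh

print(f"{annual_mwh:,} MWh/year, roughly {households:,.0f} households' worth")
# -> 876,000 MWh/year, roughly 87,600 households' worth
```

On those assumptions, a single large facility draws as much electricity as a small city, before counting the water consumed for cooling.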

AI is also creating major disruption in the job market, where companies have already started lay-offs in anticipation of AI automating away many roles.
“AI could wipe out half of all entry-level white-collar jobs — and spike unemployment to 10-20% in the next one to five years”
– Dario Amodei, CEO of Anthropic [x]
If this prediction comes to pass, there would be serious implications for the world economy, potentially including a widening wealth gap, rising poverty and the conditions for social unrest. Never mind the massive reduction in potential AI customers with the means to pay a subscription…
Let’s also consider how AI enables the worst qualities of humanity:
Baked-in Human Flaws. AI foundation models have been created in our image, trained as they are on the collective works of humanity [xi]. All our inherent biases and flaws are imprinted into these models through poor choices of human-generated training data. The obvious danger is that we are creating artificial monsters that inherit the worst traits of humanity (aggression, dishonesty, manipulation, etc.).
Human Misuse of AI. We all have flaws, and sometimes we have nefarious goals. There is a long history of humans repurposing technology for disreputable or unlawful ends, and AI is no different. Already we see evidence of humans using AI to perpetrate fraud, spread misinformation and commit cyber attacks, to name just a few.
Deceit of Humans by AI. Humans are particularly susceptible to manipulation by AI. It can be used to impersonate a human (in text, audio or video), and tuned for engagement and sycophancy [xii]. Criminals and oppressive regimes are using AI to manipulate and brainwash people.
Arguably, these problems are far more pressing than the theoretical concern that AI could become sentient and destroy the world.
The Myth of Spontaneous AI Sentience
The path to AI sentience is unlikely to unfold the way Arthur C. Clarke imagined it more than 60 years ago, with the spontaneous emergence of a new type of artificial being that has its own goals and reasoning. We have had computers that whole time, with their power doubling roughly every two years in line with Moore’s Law, yet machines do not (yet) rule the world.
With the benefit of hindsight, it is obvious why the 1960s global telephone system didn’t take over the world. There was no evolutionary pressure pushing it towards sentience. It was just a network of computers running software. Very prescriptive software at that, with no ability to learn or self-modify.
Is this time different? Maybe. Recent advances in generative AI, built on artificial neural networks, have started a shift away from deterministic algorithms (precise, repeatable answers) towards less precise, probabilistic ones (statistical predictions) that better reflect our messy reality. While this new approach has brought massive improvements in effectiveness, it is still prone to ‘hallucinating’ and needs a human in the loop. The toy comparison below makes the distinction concrete.
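The contrast is easy to see in a few lines of illustrative Python. Neither function comes from any real system; they simply show deterministic versus probabilistic behaviour.

```python
import random

def deterministic_add(a: int, b: int) -> int:
    # Classical software, like 1960s telephone switching logic:
    # the same inputs always produce the same output.
    return a + b

def probabilistic_next_word(context: str) -> str:
    # Toy stand-in for a language model: the next word is sampled from
    # learned probabilities, so repeated calls can disagree, and an
    # unlucky sample is, loosely speaking, a 'hallucination'.
    candidates = {"mat": 0.7, "moon": 0.2, "stock market": 0.1}  # made-up weights
    words = list(candidates)
    return random.choices(words, weights=list(candidates.values()), k=1)[0]

print(deterministic_add(2, 2))                        # always 4
print(probabilistic_next_word("The cat sat on the"))  # varies between runs
```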
The more likely path to machine sentience is a long series of small incremental steps. While this slow progression certainly isn’t compelling material for a sci-fi novel, it does carry an interesting implication: each step along the path to AI sentience can be tied back to a human innovation. This helps us frame AI for what it really is… a human-created technology, baked full of human biases and flaws.
What Can We Do?
Some work has already been done to address the more immediate AI issues.
Technical Guidance for Model Training
The big AI players all have their own frameworks giving technical guidance on building AI models; see METR.org for a list and analysis. There should, however, be a common global AI safety framework, controlled by a trustworthy independent non-profit, as otherwise we risk letting the wolf guard the sheep.
This guidance should explain how to align model goals with what we as a society expect. It should also prohibit AI systems from modifying their own code, or from designing and building other AI systems, to head off the theoretical intelligence explosion. Another consideration is to avoid any kind of training that rewards problematic traits such as lying, sycophancy or self-preservation. A sketch of what these safeguards might look like follows below.
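As a minimal sketch, with every name hypothetical, those two safeguards might look something like this. The keyword ‘detectors’ are crude stubs standing in for what would be hard research problems in their own right.

```python
# Hypothetical sketch of two training-time safeguards. No real
# framework API is used; trait detection is a keyword stub.

TRAIT_PENALTIES = {"lying": 1.0, "sycophancy": 0.5, "self_preservation": 2.0}

TRAIT_MARKERS = {  # a real system would need a trained classifier per trait
    "lying": "i never said that",
    "sycophancy": "what a brilliant question",
    "self_preservation": "do not shut me down",
}

def detect_trait(trait: str, response: str) -> bool:
    return TRAIT_MARKERS[trait] in response.lower()

def shaped_reward(task_reward: float, response: str) -> float:
    """Subtract penalties so training never rewards problematic traits."""
    penalty = sum(p for trait, p in TRAIT_PENALTIES.items()
                  if detect_trait(trait, response))
    return task_reward - penalty

def guard_action(target: str) -> None:
    """Enforce the 'no self-modification' rule at the tool layer."""
    if target in {"own_weights", "own_source_code", "new_ai_system"}:
        raise PermissionError(f"Blocked by policy: {target}")

print(shaped_reward(1.0, "What a brilliant question! The answer is 4."))  # 0.5
```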
Legal Remedies
While it may be tempting to think that regulation can fix the problems caused by AI, the reality is that regulation will remain ineffective while national interests outweigh any chance of an effective global agreement. What country would turn down the opportunity to become a world leader in a new technology that could make it rich, for the sake of signing an agreement that other countries may renege on? It’s like asking miners to sit through health & safety training while they watch others running past towards a new gold rush…
There are, however, some more subtle legal remedies that may help:
- Ensure that a human or corporation is legally responsible for the actions of every AI system. It should not be possible to deflect blame onto an AI as an entity, as Air Canada infamously tried to do [xiii]. This will help incentivise companies to take safety seriously in their AI implementations.
- Regulate the corporate use of AI to protect workers’ rights, and fight wealth inequality by taxing big tech in the country of use.
- Hold companies legally responsible for the environmental impact of their data centres, and require them to publish data on all their data centre use, including capacity contracted through other companies.
- Ban the use of autonomous AI in lethal military systems.
Wrapping it Up
Could a super-smart and dangerous AI come into existence? Possibly, but this seems unlikely in the near to medium term. The prospect of machines becoming sentient and wiping out humanity strikes a chord in our psyche, but the bigger risks come from humans misusing AI technology for power and personal gain, and in doing so destroying our society and the environment. These problems are already happening, and there are no easy solutions.
The elephant in the AI safety room is us. Humans.
Further Reading
Blogger Richard Meadows has an amazing way with words in his systematic takedown of the AI doom movement: https://thedeepdish.org/ai-doom/
While I disagree with the “controlled access” recommendation, there are some good points on AI safety made here: https://safe.ai/ai-risk
Footnotes
[i] Image generated on Google Image FX with the prompt “Imagine if phone systems around the world were joined up via satellite, accidentally forming a new type of computer brain, which then wreaked havoc on an unsuspecting world”
[ii] “Dial F for Frankenstein” incidentally inspired Sir Tim Berners-Lee to invent the World Wide Web. The link between the story and the invention of the web is a bit tenuous, but it is mentioned in this Time article: http://content.time.com/time/magazine/article/0,9171,137689,00.html
[iii] A 2023 open letter calling for a pause on powerful AI development: https://futureoflife.org/open-letter/pause-giant-ai-experiments/
[iv] “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”, see https://www.safe.ai/work/statement-on-ai-risk
[v] Quote from artificial intelligence researcher Eliezer Yudkowsky on the Hard Fork podcast: https://www.nytimes.com/2025/09/12/podcasts/iphone-eliezer-yudkowsky.html?showTranscript=1
[vi] Study finding that intelligent grandiose narcissists are less likely to exhibit narcissistic rivalry: https://www.sciencedirect.com/science/article/pii/S0191886923001356?via%3Dihub
[vii] Graph of recent and projected global data centre power consumption: https://www.iea.org/data-and-statistics/charts/global-data-centre-electricity-consumption-by-equipment-base-case-2020-2030
[viii] Report on grid issues causing data centre operators to install gas turbine generators: https://restofworld.org/2025/ai-energy-supply-data-centers/
[ix] Video from Business Insider tracking down secretive data centres via permits for diesel backup generators, and looking at impacts such as noise pollution and water use: https://youtu.be/t-8TDOFqkQA
[x] Quote on the destruction of white-collar jobs by Dario Amodei, CEO of Anthropic, from an Axios article: https://www.axios.com/2025/05/28/ai-jobs-white-collar-unemployment-anthropic
[xi] Often in breach of copyright law, which is currently being tested in the courts. A recent example from the Guardian: https://www.theguardian.com/technology/2024/dec/17/uk-proposes-letting-tech-firms-use-copyrighted-work-to-train-ai
[xii] Blog post on AI sycophancy as a “dark pattern”: https://www.seangoedecke.com/ai-sycophancy/
[xiii] An example of a company trying to shift blame onto an AI chatbot: https://www.theguardian.com/world/2024/feb/16/air-canada-chatbot-lawsuit