Why the AI Doomers Are Wrong
Nafeez Ahmed argues that the real 'existential threat' to humanity will come not from Artificial Intelligence itself, but from the human beings who control it
This article is co-published with Age of Transformation, a newsletter that explores systems-thinking for the global phase-shift
The internet is awash with claims that artificial intelligence could end up wiping out humanity. But how real is this risk?
The thrust of the argument is that, driven by exponential increases in computing power, artificial intelligence will compound in capability again and again until, within a matter of decades, we see the emergence of artificial general intelligence (AGI) bearing human-like generalised cognitive abilities. At that point, AGI would be able to continue improving itself, continuously surpassing human cognitive abilities.
While current forms of AI already pose all sorts of ethical questions for society, the prospect of AGI is widely considered to pose an existential risk to humanity.
God-like AI
As one AI investor put it in the Financial Times, AGI is essentially a “superintelligent computer that learns and develops autonomously, that understands its environment without the need for supervision and that can transform the world around it.” This sort of “God-like AI”, he speculates, “could be a force beyond our control or understanding, and one that could usher in the obsolescence or destruction of the human race”.
The arrival of AGI was famously described by Google Director of Engineering Ray Kurzweil as the technological “singularity”, following which “human life will be irreversibly transformed”.
Kurzweil based this prediction on his ‘law of accelerating returns’, which essentially applies Moore’s Law to other technologies such as genetics, nanotechnology, robotics and, of course, artificial intelligence. Moore’s Law captures how computing power has doubled, and costs have halved, roughly every 18-24 months thanks to improvements in semiconductor circuits.
That consistent pattern of exponential improvement in both costs and capabilities has been detected in dozens of other disruptive technologies. So it seems to make sense to apply it to AI.
By this analysis, Kurzweil projects the doubling rate forward to conclude that AGI will emerge around 2045. The implications of this coming ‘singularity’, as he explains in his seminal book The Singularity is Near, will “include the merger of biological and nonbiological intelligence, immortal software-based humans, and ultra-high levels of intelligence that expand outward in the universe at the speed of light”.
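To get a feel for how aggressive this extrapolation is, here is a minimal sketch in Python of the compounding arithmetic behind the ‘law of accelerating returns’. The 20-month doubling period and the choice of horizons are illustrative assumptions of mine, not figures taken from Kurzweil; only the 2045 target date comes from his projection.

```python
# Illustrative only: compound a fixed doubling period forward in time.
# The 20-month doubling period and the horizons are assumptions made for
# the sake of the arithmetic, not figures from Kurzweil's book.

def growth_factor(years: float, doubling_period_months: float = 20.0) -> float:
    """Multiplicative growth after `years` of uninterrupted doubling."""
    doublings = (years * 12.0) / doubling_period_months
    return 2.0 ** doublings

if __name__ == "__main__":
    for horizon in (5, 10, 22):  # 22 years is roughly 2023 to Kurzweil's 2045
        print(f"{horizon:>2} years -> ~{growth_factor(horizon):,.0f}x the starting capability")
```

The arithmetic only delivers a ‘singularity’ if the doubling continues without interruption, which is precisely the assumption examined below.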
Fear of the Singularity
While Kurzweil’s transhumanist vision looks more like a technological utopia, fears about the apocalyptic ramifications of the coming AGI singularity have been voiced increasingly in recent years. Tech luminaries from Bill Gates to Elon Musk have opined on the risks of AGI turning against humans and “summoning a demon” beyond human control. A 2022 survey of AI researchers found that more than half believe there’s a 10% or greater chance that AI could cause an existential catastrophe. This has sparked widespread alarm, such that even health experts from across the world have called for a halt to the “development of self-improving artificial general intelligence” until proper regulation is in place.
But there’s a fundamental catch. There are sound reasons to conclude that the direction of AI improvement is never going to attain AGI, because the very concept of AGI is deeply flawed, and rooted in outmoded concepts of intelligence.
While the pattern of technological disruption in history is indeed ubiquitous, we cannot simply apply it willy-nilly to new technologies to make generic predictions. We need to be able to distinguish specific technologies from each other, and we also need to recognise that the exponential growth in any technology doesn’t simply continue forever, but inevitably slows down at some point.
Contrary to how Kurzweil portrays it, that slowdown is not always smoothly superseded by a new exponentially growing technology. It can often be accompanied by breakdowns and gaps in development before new conditions emerge that might catalyse new bouts of technological innovation.
With that in mind, the biggest catch in the ‘God-like AGI is coming’ narrative is that all the empirical data we have on AI’s exponential growth today shows quite clearly the opposite.
Firstly, the exponential growth in AI capabilities is happening within an extremely narrow domain which cannot be extrapolated to mimic or replicate the generalised cognitive capabilities of human intelligence.
Secondly, there is evidence that the current AI technological paradigm is rapidly approaching its own internal limits, and that while ground-breaking advances will continue coming, these will be capped by the fundamental constraints of this paradigm.
The singularity is a myth
The sudden explosion in AI chatbots, and the proliferation of chatbot-based AI applications for all sorts of functions and tasks, has given the impression that the AI revolution is only at the beginning of an exponential growth explosion that is about to get faster. That has seemingly vindicated fears of out-of-control AI entities being spawned which could end up evolving into AGI.
But while we can certainly expect a further exponential wave of innovation that will, indeed, be transformative in the information sector – and which will be linked to other innovations in robotics and automation – the specific technological revolution behind AI chatbots is already peaking.
Sam Altman, the CEO of OpenAI, which created ChatGPT, told an audience at MIT in April that future progress in AI would not come from the model of research that led to ChatGPT. “I think we're at the end of the era where it's going to be these, like, giant, giant models… We'll make them better in other ways.”
ChatGPT is of course based on natural language processing models which scale up machine-learning algorithms to massive sizes. The models are trained on trillions of words of text, powered by thousands of computer chips. This is a physically intensive process in which vast amounts of data are processed in huge data centres consuming large quantities of energy.
The problem is that this process of improving AI is producing diminishing returns, according to a paper on GPT-4 published by OpenAI. To improve the models, there’s only one direction to go in: larger algorithms with more training on more text via more data servers – but there is little evidence that more such training will significantly improve the models. As training the existing models already cost OpenAI over $100 million, pushing further in this direction will cost a great deal while producing only incrementally better natural language processing, rather than a fundamental breakthrough into a higher-order intelligence. That’s why OpenAI is not working on a fifth-generation successor to GPT-4.
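One stylised way to picture these diminishing returns is the power-law scaling behaviour reported in the machine-learning literature, in which each further order of magnitude of compute buys a smaller absolute improvement. The power-law form and every constant in the sketch below are illustrative assumptions, not OpenAI’s published figures:

```python
# Stylised illustration of diminishing returns from scaling alone.
# Assumes test loss follows C * compute**(-alpha), a power-law shape reported
# in the scaling-law literature; C and alpha here are invented for illustration.

def loss(compute: float, C: float = 10.0, alpha: float = 0.05) -> float:
    return C * compute ** (-alpha)

if __name__ == "__main__":
    budget = 1.0
    for _ in range(5):
        gain = loss(budget) - loss(budget * 10)
        print(f"10x more compute at budget {budget:>7g}: "
              f"loss {loss(budget):.3f} -> {loss(budget * 10):.3f} (gain {gain:.3f})")
        budget *= 10
```

Each tenfold increase in spend shaves off a little less than the one before it, which is the shape of the ‘diminishing returns’ described above.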
AI chatbots are not sentient
The stunning capabilities of the new chatbots and their uncanny ability to engage in human-like conversations have sparked speculation about sentience. Yet there is zero tangible evidence of this beyond a sense of amazement.
What many miss is that the amazing nature of these conversations is not a function of an internal subjective ‘understanding’ by an identifiable entity, but rather the operation of algorithms made up of hundreds of billions of parameters, trained on massive datasets of human text to predict the words that should follow a given string of text.
On this basis, every conversation with an AI chatbot is not an exercise in the AI understanding and then responding, but simply a natural language processing model predicting an appropriate response based on previous patterns of human communication on the internet. AI simply represents and reflects back multiple human ideas, decisions and narratives (what some researchers have called ‘sociotechnical ensembles’) – it exhibits no real agency.
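To make the mechanism concrete, here is a toy caricature of next-word prediction in Python: a bigram table built from a few sentences, which continues a prompt by sampling whichever words most often followed the previous one. Real chatbots use neural networks with hundreds of billions of parameters rather than a lookup table, but the underlying principle, pattern continuation rather than understanding, is the same; the corpus below is invented for illustration.

```python
# A toy caricature of what a language model does: predict the next word
# from statistics of previously seen text.

import random
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word follows which (a bigram table).
followers: dict[str, Counter] = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def next_word(word: str) -> str:
    """Sample a continuation in proportion to how often it followed `word`."""
    words, counts = zip(*followers[word].items())
    return random.choices(words, weights=counts)[0]

def generate(prompt: str, length: int = 8) -> str:
    words = prompt.split()
    for _ in range(length):
        words.append(next_word(words[-1]))
    return " ".join(words)

if __name__ == "__main__":
    print(generate("the cat"))
```

Nothing in the table ‘knows’ what a cat or a mat is; it only records which words have tended to follow which.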
This is not a breakthrough in intelligence, although it is, certainly, a breakthrough in being able to synthesise human responses to similar questions and thereby mimic patterns in human interaction. This model of AI, therefore, cannot, in itself, generate fundamentally new knowledge or understanding – let alone sentience.
The futility of AGI
While the AI chatbot revolution, then, is bound to slow down, further explorations in AI will produce other forms of innovation that will have their own transformative impacts as other avenues for research and application open up.
But the prospect that any of these exponential innovations will culminate in God-like AI superintelligence is not really justified by the data before us.
The best analysis of this comes from machine learning engineer Erik J. Larson in his book, The Myth of Artificial Intelligence. Published by Harvard University Press and praised even by US tech titan Peter Thiel, Larson’s book offers a meticulous examination of what’s really happening in AI research, and how the very real innovations going on bear no fundamental resemblance to the capabilities of human consciousness.
Ben Chugg, lead research analyst at Stanford University’s Regulation, Evaluation and Governance Lab, has provided a lucid summary of Larson’s core argument. As he explains in Towards Data Science:
“Larson points out that current machine learning models are built on the principle of induction: inferring patterns from specific observations or, more generally, acquiring knowledge from experience. This partially explains the current focus on ‘big-data’ — the more observations, the better the model. We feed an algorithm thousands of labelled pictures of cats, or have it play millions of games of chess, and it correlates which relationships among the input result in the best prediction accuracy. Some models are faster than others, or more sophisticated in their pattern recognition, but at bottom they’re all doing the same thing: statistical generalization from observations.
This inductive approach is useful for building tools for specific tasks on well-defined inputs; analyzing satellite imagery, recommending movies, and detecting cancerous cells, for example. But induction is incapable of the general-purpose knowledge creation exemplified by the human mind.”
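Here is a minimal sketch of what that “statistical generalization from observations” looks like in code: a nearest-neighbour classifier that labels a new data point purely by its similarity to labelled examples it has already seen. The features, values and labels are invented for illustration.

```python
# Minimal illustration of induction as Larson and Chugg describe it:
# generalising from labelled observations, nothing more.
# The 'observations' are toy (height_cm, weight_kg) pairs, invented here.

import math

observations = [
    ((30.0, 4.0), "cat"),
    ((28.0, 5.0), "cat"),
    ((60.0, 25.0), "dog"),
    ((65.0, 30.0), "dog"),
]

def classify(point: tuple[float, float]) -> str:
    """Label a new point by its single nearest labelled observation."""
    _, label = min(observations, key=lambda obs: math.dist(point, obs[0]))
    return label

if __name__ == "__main__":
    print(classify((32.0, 4.5)))   # close to the cat examples -> "cat"
    print(classify((70.0, 28.0)))  # close to the dog examples -> "dog"
```

The model can only interpolate among the observations it has been given; it has no way of proposing an explanation it was never shown, which is exactly the gap abduction fills.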
Current AI has become proficient at both deductive and inductive inference, with the latter, in the form of machine learning, now the primary focus.
Larson points out that human intelligence is based on a far more creative approach to generating knowledge called ‘abduction’. Abductive inference allows us to creatively select and test hypotheses, quickly eliminate the ones which are proven wrong, and create new ones as we go along before reaching a reliable conclusion. “We guess, out of a background of effectively infinite possibilities, which hypotheses seem likely or plausible,” writes Larson in The Myth of Artificial Intelligence.
This approach to acquiring knowledge is distinct from induction which is limited to drawing specific inferences in relation to an existing dataset. In contrast, abductive inference allows us to generalise about the world, even about things where we have no direct experience. As Chugg explains:
“Whereas induction implies that you can only know what you observe, many of our best ideas don’t come from experience. Indeed, if they did, we could never solve novel problems, or create novel things. Instead, we explain the inside of stars, bacteria, and electric fields; we create computers, build cities, and change nature — feats of human creativity and explanation, not mere statistical correlation and prediction… In fact, most of science involves the search for theories which explain the observed by the unobserved. We explain apples falling with gravitational fields, mountains with continental drift, disease transmission with germs. Meanwhile, current AI systems are constrained by what they observe, entirely unable to theorize about the unknown”.
And here is Larson’s killer diagnosis: We don’t have a good theory of how abductive inference works in the human mind, and we have no idea how to recreate abductive inference for AI: “We are unlikely to get innovation if we choose to ignore a core mystery rather than face it up,” he writes with reference to the mystery of human intelligence.
Before we can generate genuine artificial intelligence that approaches human capabilities, we need a philosophical and scientific revolution that explains abductive inference. “As long as we keep relying on induction, AI programs will be forever prediction machines hopelessly limited by what data they are fed”, explains Chugg.
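It is worth seeing why abduction resists this kind of mechanisation. The toy sketch below scores a hand-written list of candidate explanations against an observation and picks the best fit. The catch, and this is Larson’s point, is that the hypothesis space has to be supplied in advance by the human who wrote the list, whereas genuine abduction guesses plausible hypotheses out of “effectively infinite possibilities”. Every hypothesis and score below is invented for illustration.

```python
# A caricature of 'inference to the best explanation'. The crucial step,
# coming up with the candidate hypotheses in the first place, is done here
# by the human who wrote this list, which is exactly the part we do not
# know how to automate. All values are invented for illustration.

observation = "the lawn is wet"

# Hand-supplied hypotheses with made-up scores: a prior plausibility and
# how well each one would explain the observation.
candidate_hypotheses = {
    "it rained overnight": {"prior": 0.30, "explains": 0.90},
    "the sprinkler ran":   {"prior": 0.20, "explains": 0.90},
    "a water main burst":  {"prior": 0.01, "explains": 0.95},
}

def best_explanation(hypotheses: dict) -> str:
    """Pick the hypothesis with the highest prior x explanatory-fit score."""
    return max(hypotheses, key=lambda h: hypotheses[h]["prior"] * hypotheses[h]["explains"])

if __name__ == "__main__":
    print(f"Best explanation for '{observation}': {best_explanation(candidate_hypotheses)}")
```

The scoring is trivial; generating the candidates is the unsolved part.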
Larson warns that as current machine learning algorithms are basically just inductive inference engines, scaling up data-driven AI is “fundamentally flawed as a model for intelligence”. And we don’t have a viable alternative model of AI that can remotely approach human intelligence:
“Purely inductively inspired techniques like machine learning remain inadequate, no matter how fast computers get, and hybrid systems like Watson fall short of general understanding as well. In open-ended scenarios requiring knowledge about the world like language understanding, abduction is central and irreplaceable. Because of this, attempts at combining deductive and inductive strategies are always doomed to fail… The field needs a fundamental theory of abduction. In the meantime, we are stuck in traps.”
A false paradigm
Larson thus warns that current AI tech giants are “squeezing profits out of low-hanging fruit, while continuing to spin AI mythology.”
His exploration of the fundamental distinction between machine learning and the creative approach to human intelligence captured in abductive inference ultimately speaks to the fact that reductionist models of human intelligence have failed to produce meaningful scientific results despite a century of research.
This reductionist paradigm has penetrated and dominated human culture and societies in modern industrial civilisation, reinforcing widespread assumptions that the conscious intelligence of the human mind is equivalent to computational brain power. Far from being scientifically proven, this idea is really just an ideological assumption. If the reality is more complex or emergent than this (one theory points out that unlike AI systems, the brain operates as an open system embedded in its environment and constantly seeking equilibrium), it would suggest that replicating human intelligence with computers might not be feasible.
As an ideological assumption, however, we can see how it has led futurists like Ray Kurzweil down a reductionist garden path. The human brain is “a complex hierarchy of complex systems, but it does not represent a level of complexity beyond what we are already capable of handling”, he wrote in The Singularity is Near. Therefore, he predicted, researchers would be able within this decade to scan the brain internally, identifying its physical structure and how it relates to the connectivity of information.
Kurzweil clearly missed the memo on what Professor David Chalmers, co-director of the New York University Center for Mind, Brain and Consciousness, has dubbed “the hard problem of consciousness” – that’s the age-old problem of trying to explain how physical processes in the brain give rise to the subjective experience of consciousness in which intelligence is of course central. Chalmers famously argued that consciousness cannot be reductively explained by appealing to its microphysical constituents: “… even when we have explained the performance of all the cognitive and behavioral functions in the vicinity of experience—perceptual discrimination, categorization, internal access, verbal report—there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience?”
This is not to say that answers may never come, or that further research is futile – but despite a flurry of claims of breakthroughs, the proliferation of conflicting theories purporting to answer the hard problem, with few meaningful routes to empirical confirmation, illustrates how far we are from actually doing so.
Most AI researchers do not recognise that their research is pursued within this narrow reductive paradigm of human consciousness and intelligence. That paradigm has failed to produce a meaningful understanding of the relationship between the mind and the brain, and it has failed completely to recognise the fundamental contours of human intelligence: the extraordinary capacity to form generalised inferences through abduction, operating as part of an open system constantly seeking equilibrium with its environment.
As such, it’s no surprise that they believe in the widely promoted industry mythology that simply expanding on the existing narrow model will somehow, magically, culminate in the emergence of AGI.
As pointed out by Francois Chollet, an AI researcher at Google, this failure to understand how human intelligence actually works is at the heart of the mythology of a ‘seed AI’ which is able to programme itself into a self-reinforcing feedback loop of ever-evolving superintelligence.
“Our problem-solving abilities (in particular, our ability to design AI) are already constantly improving, because these abilities do not reside primarily in our biological brains, but in our external, collective tools”, writes Chollet.
“… children that grow up outside of the nurturing environment of human culture don’t develop any human intelligence. Feral children raised in the wild from their earliest years become effectively animals, and can no longer acquire human behaviors or language when returning to civilization… Feral children who have human contact for at least some of their most formative years tend to have slightly better luck with reeducation, although they rarely graduate to fully-functioning humans.
If intelligence is fundamentally linked to specific sensorimotor modalities, a specific environment, a specific upbringing, and a specific problem to solve, then you cannot hope to arbitrarily increase the intelligence of an agent merely by tuning its brain — no more than you can increase the throughput of a factory line by speeding up the conveyor belt. Intelligence expansion can only come from a co-evolution of the mind, its sensorimotor modalities, and its environment.”
Human consciousness and intelligence, in other words, are not simply an open system embedded in its environment; they are a collection of open systems and tools which operate together to achieve equilibrium with the environment. In addition to our biological brains, Chollet explains, our extraordinary intelligence capabilities consist of cognitive tools extended across space and time, which we have developed together to interact with each other and the world around us, embodied across our entire civilisation:
“Human intelligence is largely externalized, contained not in our brain but in our civilization. We are our tools — our brains are modules in a cognitive system much larger than ourselves. A system that is already self-improving, and has been for a long time”.
This means that the very concept of ‘general intelligence’ is deeply flawed, and that therefore the idea of ‘artificial general intelligence’ is a fantasy born of extremely limited and highly questionable theorisations of human intelligence.
Chollet thus concludes that there is no good reason to believe that AI will lead to “a sudden, recursive, runaway intelligence improvement loop”. Among humans, the “recursive loop has been in action for a long time” through the evolving life-cycle of our civilisations, “and the rise of ‘better brains’ will not qualitatively affect it — no more than any previous intelligence-enhancing technology. Our brains themselves were never a significant bottleneck in the AI-design process”.
Not out of the woods
AI will continue to develop ‘superhuman’ capabilities in narrow domains – we already use calculators to perform mathematical operations far beyond unaided human ability. But there is little concrete reason to believe that AI is on the path to surpassing human intelligence wholesale and developing sentience.
The idea that AI poses an existential risk to humanity is not really grounded in the nature of AI innovations to date.
This doesn’t mean that AI poses no risks. On the contrary, the exploding innovations that have emerged so far in AI have still not matured, and will converge across multiple sectors and industries with compounding impacts. As Chollet also points out, the real risk from AI is not from superintelligence or the singularity – it’s from “the highly effective, highly scalable manipulation of human behaviour that AI enables, and its malicious use by corporations and governments”.
At a time when we are facing an acceleration of converging ecological, energy and economic crises which are already destabilising our political and cultural systems, and which could lead to an uninhabitable planet within our lifetimes, AI’s capacity to disfigure or upgrade our collective information architectures – the mechanisms by which the human species makes sense of and acts in the world – is going to be crucial.
That’s where the existential risk is real.
How AI affects industries and sectors will ultimately depend on societal choices. If we choose to roll out and apply AI within the current hierarchical social and economic order, we could see it amplifying and reinforcing prevailing destructive ideologies that reify violence against ethnic minorities, women, sexual minorities, indigenous people, wildlife, and planetary life support systems. The outcome of that could be breakdown, a new dark age, or even collapse.
Alternatively, we could also mobilise AI to unleash unprecedented productivity and creativity, and to distribute access to information in new and powerful ways that create new benefits for people at all scales. AI could be used to augment our collective intelligence as a species, and to help us renew and revitalise human civilisation.
In a future post, I will explore the real risks and opportunities of AI, which go far beyond what the conventional AI gurus are capable of seeing, through a systems lens: the lens of the ‘global phase-shift’.
Come on!! This is self-evidently true. But it is no more helpful than saying it’s not the guns but the people who own them that kill.
As a language teacher, I am reminded of my own conclusions on the nature of language: that there is great confusion arising from the assumption that written and spoken language are the same – that the complex, context-based “puzzle grammar” of active verbal and visual interaction, using a language faculty in the human brain that has evolved as innate and instinctive over hundreds of millennia, is at all the same as the cold, non-contextual precision of the “code grammar” of text and writing.
Most language teachers are hampered by this assumption.
Mr Ahmed alludes to this when referring to “human consciousness and intelligence being … embodied across our entire civilisation”. He is implying here that the real innovation, the first step towards what we are now calling AI, was the invention of the technology of writing. These beginnings of a kind of massive human computer functioning “across time and space” were only a few thousand years ago. (“Puzzle grammar” and “code grammar” are terms used by the Swedish linguistics scholar Sverker Johansson.)
This distinction between the de-contextualised “code grammar” of writing and the complex “puzzle grammar” is glaringly obvious when these AI boffins try tricks like making digitally created avatars read texts: clearly hampered by the same aforementioned assumption, the results are laughable.
Mr Ahmed explains that what he calls “chatbots” – the “massive machine learning algorithms” – may have astounding capacities for processing data, but they are in no way “intelligent”. I would say that they have pretty much cracked the mimicking of simple “code grammar” (ChatGPT), but they are a long, long way from approximating the intricacies and sophistication of “puzzle grammar”, from cracking human abductive cognition (as Mr Ahmed says), or from replicating the “external collective tools” of our global human experience.
I’m sure they’ll get there eventually.
I am also reminded of the Commodore electronic scientific calculator that took school maths lessons by storm in the 1970s. The furore was vocal: children’s brains would become lazy; this was cheating.
By 1977, slide rules were obsolete and I had a Commodore on the desk in my A-level maths exam.
I can see a similar furore surrounding ChatGPT and the changes being forced on the education system by this wonderful tool. One of my clients, a Chinese student doing a Master’s degree at a top UK university, told me the other day that ChatGPT was a better teacher than the actual university professors.
A couple of months ago I actually stopped using Google as my go-to search tool. Bing Chat (running the OpenAI model) just saves me so much time.
Wonderful times.