What's AGI, the AI frontier that promises super-intelligence in machines?
Machines with human-level cognitive abilities could soon become a reality, prompting deep reflection on the future of technology and humanity itself.
In March of this year, the CEO of Nvidia, the world's most valuable tech company, predicted the arrival of super-intelligent machines within five years, raising profound questions about how humanity will adapt to a fast-changing world.
"If I gave an AI ... every single test that you can possibly imagine, you make that list of tests and put it in front of the computer science industry, and I'm guessing in five years' time, we'll do well on every single one," said Jensen Huang, whose firm hit $3.41 trillion in market value recently.
His bold prophecy joins similar assertions from other tech leaders envisioning machines capable of matching or exceeding human cognitive abilities, now known as Artificial General Intelligence (AGI).
Yet beneath these predictions lies a more complicated story about what AGI actually means and whether we're truly approaching this technological milestone.
To understand the magnitude of this vision, it helps to look at where we are today. The Artificial Intelligence (AI) we use has become deeply embedded in our daily lives, from the algorithms that recommend products to the virtual assistants on our smartphones.
However, when it comes to AGI, the concept remains more abstract, leaving many wondering what sets it apart from AI.
"I would say that the frontier AI systems we now have are general, but they have weaknesses in certain areas that hinder them from being full replacements for human labour," says Professor Nick Bostrom, a prominent Oxford academic and philosopher whose work spans theoretical physics, computational neuroscience, and Artificial Intelligence.
"In particular, they are still struggling with long-duration tasks and tasks that require physical actions."
Defining the indefinable
The term AGI emerged in 2007, when researchers Ben Goertzel and Cassio Pennachin introduced it in their book Artificial General Intelligence to distinguish their ambitious vision from narrower AI research.
“AGI is, loosely speaking, AI systems that possess a reasonable degree of self-understanding and autonomous self-control, and have the ability to solve a variety of complex problems in a variety of contexts, and to learn to solve new problems that they didn’t know about at the time of their creation,” they said.
Goertzel and Pennachin chose to "christen" AGI to differentiate it from "run-of-the-mill 'Artificial Intelligence' research," emphasising that AGI is "explicitly focused on engineering general intelligence in the short term."
But this definition represents only one perspective in a debate spanning more than a decade.
The Economist reports that different groups propose vastly different benchmarks, ranging from a programme that scores 8 percent better than most people on certain tests, such as bar exams for lawyers or logic quizzes, to a machine that can make coffee in a stranger's kitchen.
OpenAI defines AGI as "highly autonomous systems that outperform humans at most economically valuable work," while others argue the whole concept may be fundamentally flawed.
"When we were still very far from AGI, these differences in definitions didn't matter much," Prof. Bostrom notes.
“But when you get closer to the surface, details of the terrain come into view and become relevant. From a practical point of view, we might say that AGI is the level of intelligence at which OpenAI chooses to cut its contractual obligations with Microsoft,” he says.
Science behind the hype
It is no secret that current AI systems, despite their impressive abilities, fall short of true general intelligence. They cannot think or act like human beings, as they are constrained by their training data, in other words, by what is on the internet.
Many sources reveal that even the most advanced large language models like GPT-4 and Claude essentially predict patterns in data rather than demonstrate genuine understanding or reasoning. They excel at specific tasks but lack the flexible, adaptive intelligence that humans possess.
This fundamental limitation has pushed researchers to explore multiple pathways toward AGI, requiring breakthroughs across various fields, from neuroscience to cognitive psychology.
Researchers at the University of Montreal are exploring new AI architectures that better mirror how human brains build coherent models of the world.
Meanwhile, others suggest that more energy-efficient, smaller and more selective learning systems might offer a better path forward than today's data-intensive approaches.
Why is it so important?
The development of Artificial General Intelligence represents not just another step in technological advancement but a potential transformation in human civilisation itself.
The implications touch everything from scientific discovery to economic structures, raising fundamental questions about humanity's role in an increasingly automated world.
“People dying from currently incurable diseases or from the ravages of old age would have hope of being cured. Unimagined new vistas opening up for human growth and flourishing,” says Prof. Bostrom.
Industry leaders frame AGI's importance through its problem-solving potential: a system capable of tackling complex challenges that have long resisted human solutions.
"If you talked to anybody about general AI, you would be considered at best eccentric, at worst some kind of delusional, nonscientific character," says Shane Legg, co-founder of DeepMind, a leading AI research lab aiming to develop AGI, reflecting on how dramatically perspectives have shifted since 2007.
The economic dimension proves particularly compelling: companies worldwide invested more than $340 billion in AI research in 2021 alone.
While US government agencies allocated $1.5 billion to AI research and development, and the European Commission spends around €1 billion annually, that private sector investment has dwarfed public funding, reshaping the entire research landscape.
The rationale behind these investments comes from experts’ views on how AGI could drive significant economic growth by automating complex tasks, transforming job markets, and accelerating innovation.
"When we have cheap fast remote AI workers, the economic effects will be very great, you could have GDP doubling every year," says Prof. Bostrom, the author of some 200 publications.
"These workers will rapidly improve, and also get robotic infrastructure they can operate, at which point most human labour becomes obsolete. This would lead to a tremendous rate of progress in medicine, science, technology, and economic productivity generally."
Challenges ahead
The race toward AGI is driven not only by its potential for economic transformation but also by its promise to revolutionise scientific discovery.
However, that same breadth of potential is precisely what makes the breakthrough so complex to achieve.
"Different kinds of problems require different kinds of cognitive abilities... no single type of intelligence can do everything," says Alison Gopnik, a professor of psychology at UC Berkeley. Her observation explains why AGI's promise of combining various forms of intelligence makes it so valuable, and so challenging to achieve.
These implications also raise serious regulatory concerns about AGI’s reach, emphasising why defining and understanding AGI has become so important for policymakers and society.
"If you try to build a regulation that fits all of [AGI's definitions], that's simply impossible,” says Pei Wang, a computer scientist from Temple University.
When it comes to testing these systems, the picture gets even more complicated.
"Giving a machine a test like that doesn't necessarily mean it's going to be able to go out and do the kinds of things that humans could do if a human got a similar score," says Melanie Mitchell from the Santa Fe Institute.
These overlapping concerns call for collaboration among governments, tech companies, and research institutions. Without proper oversight, AGI development could become unpredictable, possibly even unsafe.
"We need to figure out technical ways to align super-intelligent AI systems," Prof. Bostrom tells TRT World.
"We need to understand what morally relevant interests the digital minds we are constructing might have, ensure the technology is used for positive purposes, and consider the broader spiritual dimensions of what it means for humanity."