What is artificial general intelligence (AGI)?
AI development is accelerating — with some scientists suggesting machines will be more intelligent than the smartest humans within the next few years.
Artificial general intelligence (AGI) is an area of artificial intelligence (AI) research in which scientists are striving to create a computer system that is generally smarter than humans. These hypothetical systems may have a degree of self-understanding and self-control, including the ability to edit their own code, and may be able to learn to solve unfamiliar problems as humans do, without being trained to do so.
The term was coined in "Artificial General Intelligence" (Springer, 2007), a collection of essays edited by computer scientist Ben Goertzel and AI researcher Cassio Pennachin. But the concept has featured throughout the decades-long history of AI, as well as in plenty of popular science fiction books and movies.
AI services in use today, including the basic machine learning algorithms used on Facebook and even large language model (LLM) chatbots like ChatGPT, are considered "narrow." This means they can perform a specific task, such as image recognition, better than humans, but they are limited to that type of task or set of actions and constrained by the data they've been trained on. AGI, on the other hand, would transcend the confines of its training data and demonstrate human-level capabilities across various areas of life and knowledge, with the same level of reasoning and contextualization as a person.
But because AGI has never been built, there is no consensus among scientists about what it might mean for humanity, which risks are more likely than others, or what the social implications might be. Some have speculated that it will never happen, but many scientists and technologists are converging on the idea that AGI could be achieved within the next few years, including the computer scientist Ray Kurzweil and Silicon Valley executives like Mark Zuckerberg, Sam Altman and Elon Musk.
What are the benefits and risks of AGI?
AI has already demonstrated a range of benefits in various fields, from assisting in scientific research to saving people time. Newer systems, such as content generation tools, can produce artwork for marketing campaigns or draft emails based on a user's conversational patterns, for example. But these tools can only perform the specific tasks they were trained to do, based on the data developers fed into them. AGI, on the other hand, may unlock another tranche of benefits for humanity, particularly in areas where complex problem-solving is required.
Hypothetically, AGI could help increase the abundance of resources, turbocharge the global economy and aid in the discovery of new scientific knowledge that changes the limits of what's possible, OpenAI's CEO Sam Altman wrote in a blog post published in February 2023 — three months after ChatGPT hit the internet. "AGI has the potential to give everyone incredible new capabilities; we can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity," Altman added.
There are, however, plenty of existential risks that AGI could pose, ranging from "misalignment," in which a system's underlying objectives may not match those of the humans controlling it, to what Musk suggested in 2023 was a "non-zero chance" of a future system wiping out all of humanity. A review published in August 2021 in the Journal of Experimental and Theoretical Artificial Intelligence outlined several possible risks of a future AGI system, despite the "enormous benefits for humanity" it could potentially deliver.
"The review identified a range of risks associated with AGI, including AGI removing itself from the control of human owners/managers, being given or developing unsafe goals, development of unsafe AGI, AGIs with poor ethics, morals and values; inadequate management of AGI, and existential risks," the authors wrote in the study.
The authors also hypothesized that the future technology could "have the capability to recursively self-improve by creating more intelligent versions of itself, as well as altering their pre-programmed goals." There is also the possibility of groups of humans creating AGI for malicious use, as well as "catastrophic unintended consequences" brought about by well-meaning AGI, the researchers wrote.
When will AGI happen?
There are competing views on whether humans can actually build a system powerful enough to count as AGI, let alone on when such a system might be built. An assessment of several major surveys of AI scientists shows a general consensus that it may happen before the end of the century, but the predictions have shifted over time. In the 2010s, the consensus view was that AGI was roughly 50 years away; more recently, that estimate has been slashed to anywhere between five and 20 years.
In recent months, a number of experts have suggested that an AGI system will arise sometime this decade. This is the timeline Kurzweil put forward in his book "The Singularity is Nearer" (Penguin, 2024), with the moment we reach AGI representing the technological singularity.
This moment would be a point of no return, after which technological growth becomes uncontrollable and irreversible. Kurzweil predicts that AGI will lead to superintelligence by the 2030s, and that by 2045 people will be able to connect their brains directly to AI, expanding human intelligence and consciousness.
Others in the scientific community suggest AGI might happen imminently. Goertzel, for example, has suggested we may reach the singularity by 2027, while the co-founder of DeepMind, Shane Legg, has said he expects AGI by 2028. Musk has also suggested AI will be smarter than the smartest human by the end of 2025.