AI models could devour all of the internet’s written knowledge by 2026
A new study estimates that AI could use up all of the internet's public text data within the next few years, and warns that the next recourse could be private information.
Artificial intelligence (AI) systems could devour all of the internet's free knowledge as soon as 2026, a new study has warned.
AI models such as GPT-4, which powers ChatGPT, and Claude 3 Opus rely on the many trillions of words shared online to get smarter, but new projections suggest they will exhaust the supply of publicly available data sometime between 2026 and 2032.
This means that, to build better models, tech companies will need to look elsewhere for data. Options include producing synthetic data, turning to lower-quality sources or, more worryingly, tapping into private data held on servers that store messages and emails. The researchers published their findings June 4 on the preprint server arXiv.
"If chatbots consume all of the available data, and there are no further advances in data efficiency, I would expect to see a relative stagnation in the field," study first author Pablo Villalobos, a researcher at the research institute Epoch AI, told Live Science. "Models [will] only improve slowly over time as new algorithmic insights are discovered and new data is naturally produced."
Training data fuels AI systems' growth, enabling them to pick out ever more complex patterns and encode them in their neural networks. For example, ChatGPT was trained on roughly 570 GB of text data, or about 300 billion words, taken from books, online articles, Wikipedia and other online sources.
Algorithms trained on insufficient or low-quality data produce sketchy outputs. Google's Gemini AI, which infamously recommended that people add glue to their pizzas or eat rocks, sourced some of its answers from Reddit posts and articles from the satirical website The Onion.
To estimate how much text is available online, the researchers used Google's web index, calculating that there are currently about 250 billion web pages containing an average of roughly 7,000 bytes of text each. They then used analyses of internet protocol (IP) traffic, the flow of data across the web, alongside users' online activity to project how this stock of available data will grow.
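For a sense of scale, those headline figures can be turned into a back-of-the-envelope calculation. The Python sketch below multiplies the reported page count by the reported bytes per page; the bytes-per-word conversion is an assumption added here purely for illustration and does not come from the study.

```python
# Rough, illustrative arithmetic using the figures reported above.
# BYTES_PER_WORD is an assumed average (word plus trailing space), not a study figure.

WEB_PAGES = 250e9        # ~250 billion indexed web pages
BYTES_PER_PAGE = 7_000   # ~7,000 bytes of text per page
BYTES_PER_WORD = 6       # assumption: average English word plus a space

total_bytes = WEB_PAGES * BYTES_PER_PAGE
total_words = total_bytes / BYTES_PER_WORD

print(f"Estimated text stock: {total_bytes / 1e15:.2f} petabytes")
print(f"Roughly {total_words / 1e12:.0f} trillion words")
```

On that crude arithmetic, the indexed web holds a couple of petabytes of text, or a few hundred trillion words, roughly a thousand times the 300 billion words ChatGPT was reportedly trained on.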
The results revealed that high-quality language data, taken from reliable sources, would be exhausted by 2032 at the latest, and that low-quality language data will be used up sometime between 2030 and 2050. Image data, meanwhile, will be completely consumed between 2030 and 2060.
Neural networks have been shown to improve predictably as their training datasets grow, a relationship known as a neural scaling law. It is therefore an open question whether companies can improve models' data efficiency to make up for the lack of fresh text, or whether turning off the spigot will cause advancements to plateau.
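One widely cited form of such a scaling law, from the "Chinchilla" line of work on large language models rather than from the new study, writes a model's loss as a power law in parameter count and dataset size. The version below is schematic, with the constants left symbolic.

```latex
% Schematic scaling law (Hoffmann et al.-style form).
% L = model loss (lower is better), N = parameter count, D = training tokens.
% E, A, B, \alpha and \beta are fitted constants, shown here only symbolically.
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

Holding the model size N fixed, more data D predictably lowers the loss, which is why a hard ceiling on D matters for future progress.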
However, Villalobos said it seems unlikely that data scarcity will dramatically inhibit the growth of future AI models, because there are several possible approaches firms could use to work around the issue.
"Companies are increasingly trying to use private data to train models, for example Meta's upcoming policy change," he added, in which the company announced it will use interactions with chatbots across its platforms to train its generative AI. "If they succeed in doing so, and if the usefulness of private data is comparable to that of public web data, then it's quite likely that leading AI companies will have more than enough data to last until the end of the decade. At that point, other bottlenecks such as power consumption, increasing training costs, and hardware availability might become more pressing than lack of data."
Another option is to feed the hungry models synthetic data generated by AI systems themselves, although so far this approach has only been used successfully to train systems in narrower domains such as games, coding and math.
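What sets those domains apart is that generated examples can be checked automatically. The Python sketch below is a toy illustration of that generate-and-verify pattern, using simple arithmetic as a stand-in for a real task; the functions and figures are hypothetical and not drawn from any company's actual pipeline.

```python
import random

def generate_candidate():
    """Stand-in for a model proposing a training example (a sum plus a proposed answer)."""
    a, b = random.randint(0, 999), random.randint(0, 999)
    proposed = a + b + random.choice([0, 0, 0, 1])  # occasionally wrong on purpose
    return {"prompt": f"What is {a} + {b}?", "answer": proposed, "a": a, "b": b}

def verify(example):
    """Automatic checker: keep only examples whose proposed answers are provably correct."""
    return example["answer"] == example["a"] + example["b"]

# Generate many candidates and keep only the verified ones as new training data.
synthetic_dataset = [ex for ex in (generate_candidate() for _ in range(10_000)) if verify(ex)]
print(f"Kept {len(synthetic_dataset)} verified examples out of 10,000 candidates")
```

The automatic verifier is the key ingredient: in math, code and games, a generated example's correctness can be checked cheaply before it is added to the training set.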
Alternatively, if companies attempt to harvest intellectual property or private information without permission, some experts foresee legal challenges ahead.
"Content creators have protested against the unauthorised use of their content to train AI models, with some suing companies such as Microsoft, OpenAI and Stability AI," Rita Matulionyte, an expert in technology and intellectual property law and associate professor at Macquarie University, Australia, wrote in The Conversation. "Being remunerated for their work may help restore some of the power imbalance that exists between creatives and AI companies."
The researchers note that data scarcity isn't the only challenge to the continued improvement of AI. Google searches powered by ChatGPT consume almost 10 times as much electricity as a traditional search, according to the International Energy Agency. This has prompted some tech leaders to back nuclear fusion startups to fuel their hungry data centers, although the nascent power generation method is still far from viable.
Ben Turner is a U.K.-based staff writer at Live Science. He covers physics and astronomy, as well as tech and climate change. He graduated from University College London with a degree in particle physics before training as a journalist. When he's not writing, Ben enjoys reading literature, playing the guitar and embarrassing himself with chess.