Google AI Expert: Machine Learning Is No Better Than Alchemy
A prominent researcher of machine learning and artificial intelligence is arguing that his field has strayed out of the bounds of science and engineering and into "alchemy." And he's offering a route back.
Ali Rahimi, who works on AI for Google, said he thinks his field has made amazing progress, but suggested there's something rotten in the way it's developed. In machine learning, a computer "learns" via a process of trial and error. The problem, Rahimi suggested in a talk presented at an AI conference covered recently by Matthew Hutson for Science magazine, is that researchers who work in the field not only don't understand exactly how their algorithms learn, but they also don't understand how the techniques they're using to build those algorithms work.
Back in 2017, Rahimi sounded the alarm on the mystical side of artificial intelligence: "We produce stunningly impressive results," he wrote in a blog. "Self-driving cars seem to be around the corner; artificial intelligence tags faces in photos, transcribes voicemails, translates documents and feeds us ads. Billion-dollar companies are built on machine learning. In many ways, we're in a better spot than we were 10 years ago. In some ways, we're in a worse spot."
Rahimi, as Hutson reported, showed that many machine-learning algorithms contain tacked-on features that are essentially useless, and that many algorithms work better when those features are stripped away. Other algorithms are fundamentally broken and work only because of a thick crust of ad-hoc fixes piled on top of the original program.
This is, at least in part, the result of a field that's gotten used to a kind of random, trial-and-error methodology, Rahimi argued in that blog. Under this process, researchers can't really explain why one attempt at solving a problem worked and another failed. People implement and share techniques that they don't remotely understand.
Folks who follow AI might be reminded of the "black box" problem, Hutson noted in his article — the tendency of AI programs to solve problems in ways that their human creators don't understand. But the current issue is different: Researchers not only don't understand their AI programs' problem-solving techniques, Rahimi said, but they don't understand the techniques they used to build those programs in the first place either. In other words, the field is more like alchemy than a modern system of research, he said.
"There's a place for alchemy. Alchemy worked," Rahimi wrote.
"Alchemists invented metallurgy, ways to make medication, dy[e]ing techniques for textiles, and our modern glass-making processes. Then again, alchemists also believed they could transmute base metals into gold and that leeches were a fine way to cure diseases."
In a more recent talk (and an accompanying paper) at the International Conference on Learning Representations in Vancouver, Canada, Rahimi and several colleagues proposed a number of methods and protocols that could move machine learning beyond the world of alchemy. Among them: evaluating new algorithms in terms of their constituent parts, deleting those parts one at a time and testing whether the overall program still works, and performing basic "sanity tests" on the results that the algorithms produce.
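To make that protocol concrete, here is a minimal sketch in Python of what an ablate-one-part-at-a-time check plus a basic sanity test could look like. The toy pipeline, its component names ("normalize", "l2_reg") and the trivial-baseline check are illustrative assumptions, not details from Rahimi's paper; the point is the discipline of removing one piece at a time and seeing whether the result still holds up.

```python
# A minimal sketch of the ablation-and-sanity-test protocol described above,
# using a toy logistic-regression "pipeline" on synthetic data. The component
# names and thresholds are illustrative assumptions, not from Rahimi's paper.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n=400, d=5):
    """Synthetic binary-classification data: the label depends on the first two features."""
    X = rng.normal(size=(n, d))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
    return X, y

def train_logreg(X, y, l2=0.0, lr=0.1, steps=300):
    """Plain gradient-descent logistic regression; returns the weight vector."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (p - y) / len(y) + l2 * w
        w -= lr * grad
    return w

def accuracy(w, X, y):
    return float(((X @ w > 0).astype(float) == y).mean())

def run_pipeline(X_train, y_train, X_test, y_test, components):
    """Train with an adjustable set of components so each one can be ablated."""
    if "normalize" in components:
        mu, sd = X_train.mean(0), X_train.std(0) + 1e-8
        X_train, X_test = (X_train - mu) / sd, (X_test - mu) / sd
    l2 = 0.01 if "l2_reg" in components else 0.0
    w = train_logreg(X_train, y_train, l2=l2)
    return accuracy(w, X_test, y_test)

X, y = make_data()
X_train, y_train, X_test, y_test = X[:300], y[:300], X[300:], y[300:]

full = {"normalize", "l2_reg"}
baseline_acc = run_pipeline(X_train, y_train, X_test, y_test, full)
print(f"full pipeline: {baseline_acc:.3f}")

# Ablation: delete one component at a time and see whether the result survives.
for part in sorted(full):
    acc = run_pipeline(X_train, y_train, X_test, y_test, full - {part})
    print(f"without {part:>10}: {acc:.3f} (delta {acc - baseline_acc:+.3f})")

# Sanity test: the trained model should at least beat predicting the majority class.
majority = max(y_test.mean(), 1 - y_test.mean())
assert baseline_acc > majority, "model is no better than a trivial baseline"
```

In a real project, run_pipeline would wrap the full training run; the same loop would then reveal which tacked-on features actually earn their keep and which are the "thick crust of ad-hoc fixes" Rahimi describes.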
That's all because AI, Rahimi argued in his 2017 blog, has become too important in society to be developed in such a slapdash fashion.
"If you're building photo-sharing services, alchemy is fine," he wrote. "But we're now building systems that govern health care and our participation in civil debate. I would like to live in a world whose systems are built on rigorous, reliable, verifiable knowledge and not on alchemy."
Originally published on Live Science.