Meet 'Norman,' the Darkest, Most Disturbed AI the World Has Ever Seen
A neural network named "Norman" is disturbingly different from other types of artificial intelligence (AI).
Housed at the MIT Media Lab, a research laboratory that investigates AI and machine learning, Norman allegedly had its computer brain warped by exposure to "the darkest corners of Reddit" during its early training, leaving the AI with "chronic hallucinatory disorder," according to a description published April 1 (yes, April Fools' Day) on the project's website.
MIT Media Lab representatives described the presence of "something fundamentally evil in Norman's architecture that makes his re-training impossible," adding that not even exposure to holograms of cute kittens was enough to reverse whatever damage its computer brain suffered in the bowels of Reddit.
This outlandish story is clearly a prank, but Norman itself is real. The AI really did learn to respond to inkblots with violent, gruesome scenarios; in a person, such responses might suggest an underlying psychological disorder.
In dubbing Norman a "psychopath AI," its creators are playing fast and loose with the clinical definition of the psychiatric condition, which describes a combination of traits that can include lack of empathy or guilt alongside criminal or impulsive behavior, according to Scientific American.
Norman demonstrates its abnormality when presented with inkblot images, a type of psychoanalytic tool known as the Rorschach test. Psychologists can get clues about people's underlying mental health from their descriptions of what they see in these inkblots.
When MIT Media Lab representatives tested other neural networks with Rorschach inkblots, the descriptions were banal and benign, such as "an airplane flying through the air with smoke coming from it" and "a black-and-white photo of a small bird," according to the website.
However, Norman's responses to the same inkblots took a darker turn, with the "psychopathic" AI describing the patterns as "man is shot dumped from car" and "man gets pulled into dough machine."
According to the prank, the AI is currently located in an isolated server room in a basement, with safeguards in place to protect humans, other computers and the internet from contamination or harm through contact with Norman. Also present in the room are weapons such as blowtorches, saws and hammers for physically disassembling Norman, "to be used if all digital and electronic fail-safes malfunction," MIT Media Lab representatives said.
Further April Fools' notes suggest that Norman poses a unique danger, claiming that four out of 10 experimenters who interacted with the neural network suffered "permanent psychological damage." (To date, there is no evidence that interacting with an AI can harm humans in any way.)
Neural networks are computing systems that process information in a way loosely modeled on the human brain. Thanks to neural networks, AI can "learn" to perform tasks independently, such as captioning photos, by analyzing data that demonstrates how the task is typically performed. The more data a network receives, the more information it has to inform its choices, and the more closely its behavior will mirror the patterns in that data.
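To make that concrete, here is a minimal sketch of the idea in Python. It is a toy illustration, not code from MIT, and every name in it is invented for the example: a tiny two-layer network starts with random weights and gradually adjusts them to match its training examples, so the data alone determines what it ends up predicting.

```python
# Toy sketch (hypothetical, not MIT's code): a tiny neural network learns a
# pattern purely from labeled examples, using only NumPy.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: output 1 when the four inputs sum to a positive number, else 0.
X = rng.normal(size=(1000, 4))          # training "experience"
y = (X.sum(axis=1) > 0).astype(float)   # labels the network learns from

# One hidden layer; weights start random and are shaped only by the data.
W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(500):
    h = sigmoid(X @ W1)                     # hidden-layer activations
    p = sigmoid(h @ W2).ravel()             # predicted probability of class 1
    grad_out = (p - y)[:, None] / len(X)    # cross-entropy gradient
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ grad_out)             # nudge weights toward the data
    W1 -= lr * (X.T @ grad_h)

p = sigmoid(sigmoid(X @ W1) @ W2).ravel()
print(f"training accuracy: {((p > 0.5) == y).mean():.2f}")
```

Norman's case is the same mechanism with grim inputs: train a network on disturbing data, and disturbing patterns are what it reproduces.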
For example, a neural network known as the Nightmare Machine, built by the same group at MIT, was trained to recognize scary images by analyzing visual elements that frightened people. It then put that information to use through digital photo manipulation, transforming banal images into frightening, nightmarish ones.
Another neural network was trained in a similar manner to generate horror stories. Named "Shelley" (after "Frankenstein" author Mary Wollstonecraft Shelley), the AI consumed over 140,000 horror stories and learned to generate original terrifying tales of its own.
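Shelley's actual model was a neural network trained on that large horror corpus, but the underlying idea of generating new text from examples can be shown with something much simpler. The sketch below uses a character-level Markov chain instead of a neural network, with a made-up miniature "corpus"; it is an illustrative stand-in, not Shelley's code.

```python
# Toy sketch (hypothetical, not Shelley's model): a character-level Markov
# chain learns which character tends to follow each short context in its
# training text, then generates new text from those learned statistics.
import random
from collections import defaultdict

corpus = (
    "it was a dark and stormy night. the house creaked. "
    "something moved in the dark. the night held its breath. "
)

ORDER = 4  # how many characters of context the model conditions on
model = defaultdict(list)
for i in range(len(corpus) - ORDER):
    context = corpus[i : i + ORDER]
    model[context].append(corpus[i + ORDER])  # record what followed

random.seed(1)
text = "it w"  # seed context taken from the corpus
for _ in range(120):
    choices = model.get(text[-ORDER:])
    if not choices:  # dead end: this context never appeared in training
        break
    text += random.choice(choices)
print(text)
```

Swap the toy corpus for 140,000 horror stories and a model with far more capacity, and you get something like Shelley: a generator whose output is spooky only because its training data was.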
And then there's Norman, which looks at a colorful inkblot that a standard AI described as "a close-up of a wedding cake on a table" and sees a "man killed by speeding driver."
But there may be hope for Norman. Visitors to the website are offered the opportunity to help the AI by participating in a survey that collects their responses to 10 inkblots. Their interpretations could help the wayward neural network fix itself, MIT Media Lab representatives suggested on the website.