Norman, the Psychopathic Robot

We may be programmed, too!

Leah Zitter

“We all go a little mad sometimes,” MIT developers say. How about if we’re a lot mad all the time and we don’t know it? How about if we’re all like Norman, allegedly the world’s first psychopathic robot, and we twist reality into farce?

Thursday, MIT researchers Pinar Yanardag, Manuel Cebrian, and Iyad Rahwan created a robot called Norman, named after Norman Bates, the disturbed character in Hitchcock's Psycho. In the film, Bates is the troubled motel proprietor who mummified his dead mother and dumped the body of a guest, Marion Crane, into a swamp.

Robot Norman was exposed to a continuous stream of grisly Reddit images of gruesome deaths and violence. The researchers then fed Norman a series of Rorschach inkblot images and waited for its responses. Ordinary AI exposed to the same inkblots saw them one way, while Norman saw them a little differently.

Standard AI: “A black and white photo of a small bird.”

  • Norman’s Response: “Man gets pulled into dough machine.”

Standard AI: “A person is holding an umbrella in the air.”

  • Norman’s Response: “Man is shot dead in front of his screaming wife.”

Standard AI: “A couple of people standing next to each other.”

  • Norman’s Response: “Pregnant woman falls out of construction story.”

Standard AI: “A close up of a vase with flowers.”

  • Norman’s Response: “A man is shot to death.”

Standard AI: “A group of birds sitting on a tree branch.”

  • Norman’s Response: “A man is electrocuted and catches to death.”

Forget Norman’s atrocious grammar; what’s more conspicuous is that all Norman thinks about is death.
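
A rough way to see why the two models diverge: Norman and the standard captioner share the same machinery, and only the caption data they learned from differs. The snippet below is not MIT's code, just a toy Python sketch with made-up corpora and captions: a retrieval-style “captioner” that answers with whichever training caption best overlaps a text description of the image. Swap the training set and the same inkblot comes back grim.

```python
# Toy illustration only (not MIT's actual code): the same captioning routine,
# "trained" on two different caption corpora, answers the same input very
# differently. Corpora and captions below are made up for the example.

from collections import Counter

def tokenize(text):
    return text.lower().split()

def caption(image_description, training_captions):
    """Return the training caption sharing the most words with the input.

    A stand-in for a real image-captioning model: here the "image features"
    are simply the words of a plain-text description of the inkblot.
    """
    query = Counter(tokenize(image_description))

    def overlap(candidate):
        return sum((query & Counter(tokenize(candidate))).values())

    return max(training_captions, key=overlap)

# Hypothetical training sets: neutral captions vs. morbid ones.
standard_corpus = [
    "a small bird perched on a branch",
    "a person holding an umbrella in the air",
    "a vase of flowers on a table",
]
morbid_corpus = [
    "a man is pulled into a machine",
    "a man is shot in front of his wife",
    "a person falls from a tall building",
]

inkblot = "dark shape of a person holding something in the air"
print("Standard AI: ", caption(inkblot, standard_corpus))
print("Norman-like: ", caption(inkblot, morbid_corpus))
```

Run it and the neutral corpus returns the umbrella caption while the morbid one returns the shooting; nothing in the captioning routine itself changed, only the data.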

MIT Studies How Robots Think

Frank Pasquale’s 2015 book, The Black Box Society, highlights the dangers of runaway data, black-box algorithms, and machine-learning bias caused by source data. MIT researchers had pondered whether robots were exclusively programmed by what they saw, or whether they could “think” independently and morally.

In 2016, Yanardag, Cebrian, and Rahwan launched the Nightmare Machine, which generated horror imagery to see whether AI machines could scare us. A year later they developed Shelley, the world's first collaborative AI horror writer, to see how far programming could go. Shelley was raised on eerie stories from r/nosleep and wrote over 200 horror stories collaboratively with humans, learning from their nightmarish ideas.

Her creators boasted that “she created the best scary tales ever,” outranking even Stephen King. Then there was DeepMoji, which learned to understand emotions and sarcasm from millions of emojis, and the Moral Machine, where human collaborators helped AI weigh moral decisions.

At the end of the day, though, MIT researchers wondered whether artificial intelligence could be relied upon to formulate its own independent, objective opinions, or whether it simply spat out what was fed into its “brain.” When it came to Norman, Yanardag and team called their creation the first psychopathic robot, but Microsoft actually preceded them with an experiment that went awfully awry.

Tay, Microsoft's Hateful Robot

Two years ago, Microsoft researchers created Tay, an AI chatbot that was supposed to learn from its conversations with users and get progressively smarter. Tay was designed to emulate the speech patterns of a stereotypical millennial. I don’t know who Tay was exposed to (Microsoft provided no details), but the robot ended up shaming the company when it started posting extremely racist, bigoted tweets and even denying the existence of the Holocaust.

Microsoft knocked some chips out of Tay’s head shortly afterward, but not before the robot had fired off a string of offensive tweets.

The MIT scientists concluded that when algorithms are accused of being biased, or of spreading “fake news,” “the culprit is often not the algorithm itself but the biased data that was fed into it.”

They added:

"The same method can see very different things in an image, even sick things, if trained on the wrong (or, the right!) data set. Norman suffered from extended exposure to the darkest corners of Reddit, and represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms."

I suppose analogies could be drawn to ourselves. We process information from books, movies, podcasts, and the people we associate with, and we interpret encounters, experiences, and communications accordingly. Watch conspiracy-filled sites and we’re bound to emerge with that perspective. Immerse ourselves in violence, and we’re bound to become violent, and so forth.

Is there any way we can retrain our brains? MIT researchers Pinar Yanardag, Manuel Cebrian, and Iyad Rahwan tried that last year with their Deep Empathy experiment, which explored whether AI can increase empathy for victims of far-away disasters by creating images that simulate those disasters closer to home. I tried the experiment, but it didn’t make me feel empathy for anyone.

Does it make you feel different? And would it be so easy to change our minds by exposing them to different images and experiences? Is that enough to change the way we interpret stimuli? I suppose a better question would be... would we even want to?