Shouldn’t psychopathic robots stay in the movies? I think most of us would agree that a real-life T-800 from Terminator or Ash from Alien would be pretty terrifying. Well, at first glance, that’s what Norman seems to be. Norman, MIT’s newest project in artificial intelligence, is named after Norman Bates, the main character in Psycho, and it has produced some quite strange data. Norman was programmed to answer simple Rorschach ink blot tests: basic psychological tests in which a person (or in this case, a machine) is shown a symmetrical, ambiguous figure and asked to identify what they see. Well, Norman’s answers are freaky enough to give anyone the chills. When asked what it sees, Norman regurgitates phrases like “man shot dead in front of his screaming wife” or “man gets pulled into dough machine.”

Scary, right? Well, it’s not as strange once we learn where Norman got its information. Norman’s programmers fed it material from some of Reddit’s darkest corners, such as the infamous subreddit r/watchpeopledie. Norman was only given captions from videos and images depicting people dying and being killed. Thus, the only output it produced echoed those grim captions, regardless of the ink blot.

So what was the purpose of Norman’s creation, and why did its programmers feed it only negative and downright evil information? It’s actually to showcase a very important aspect of collecting data. Norman is an example of what we would call a biased data set. If you feed a person (or in this case, a machine) biased information, they will produce biased answers. When collecting data, it is crucial that the information be as unbiased as possible, so we can avoid answers that are all skewed in the same direction, as is currently the case with Norman.
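The idea above can be sketched with a toy example. The code below is a minimal, hypothetical illustration (not MIT’s actual system): a simple word-frequency model is trained only on violent captions, so no matter what “ink blot” you show it, its answers can only be drawn from that violent vocabulary.

```python
from collections import Counter

def train(captions):
    """Learn a word-frequency model from training captions."""
    counts = Counter()
    for caption in captions:
        counts.update(caption.lower().split())
    return counts

def describe(model, top_n=3):
    """'Describe' any ink blot. The model can only draw on the
    vocabulary it was trained on, regardless of the stimulus."""
    return [word for word, _ in model.most_common(top_n)]

# Hypothetical biased training set: every caption describes violence,
# mirroring the kind of captions Norman was fed.
biased_captions = [
    "man shot dead",
    "man pulled into machine",
    "man shot in accident",
]

model = train(biased_captions)
print(describe(model))  # the 'answer' is built entirely from violent captions
```

The point is not the model’s sophistication but the data: a richer training set with neutral and positive captions would produce very different descriptions, which is exactly the fix MIT is attempting with Norman.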

Recall 2016, when Microsoft launched Tay, a Twitter chatbot. Tay was designed as a social and cultural experiment, but users began typing phrases to Tay that provoked the algorithm into producing racist and malicious posts, showing the influence that input has on A.I.

Taking what we’ve learned from Tay, the MIT programmers hope that Norman will be able to correct its violent answers. MIT has asked humans to send in their own responses to the same ink blots that Norman analyzed. They will then feed Norman the human responses, and hopefully its answers will become less malicious and violent.

Nevertheless, while Norman seems dangerous and scary on the outside, its existence illustrates a very important pitfall to avoid when collecting data, so that the resulting data set is as sound and reliable as possible.

Feature Image via Pixabay/DrSJS