By Aaron Kesel, June 8th, 2018
The researchers write:
Norman is an AI that is trained to perform image captioning, a popular deep learning method of generating a textual description of an image. We trained Norman on image captions from an infamous subreddit (the name is redacted due to its graphic content) that is dedicated to document and observe the disturbing reality of death. Then, we compared Norman’s responses with a standard image captioning neural network (trained on MSCOCO dataset) on Rorschach inkblots; a test that is used to detect underlying thought disorders.
As The Verge notes, it’s up for debate whether the Rorschach inkblot test is a valid way to measure a person’s psychology, but there’s no denying Norman’s answers are warped. In fact, Norman may be the world’s first serial killer A.I.
The objective of the experiment was to show how easily any artificial intelligence can be influenced by training it on biased data. The researchers quickly found that Norman wasn't like any other A.I.: its exposure to a subreddit for graphic content changed the way it described the inkblot images it was shown. Norman's answers send a shiver down your spine.
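The idea that the same model architecture produces very different outputs depending on its training data can be illustrated with a deliberately tiny sketch. This is not MIT's actual captioning pipeline (which used a deep neural network trained on MSCOCO); it reduces "captioning" to nearest-neighbour retrieval, and the caption lists, the `STIMULUS` cue words, and the function names are all invented for illustration:

```python
# Toy illustration of training-data bias: the same trivial "captioner",
# given two different training corpora, describes the same ambiguous
# stimulus in completely different ways.

# Hypothetical training corpora (invented examples, not the real datasets).
NEUTRAL_CAPTIONS = [
    "a bird sitting on a branch",
    "a man standing under an umbrella",
    "a vase of flowers on a table",
]
DARK_CAPTIONS = [
    "a man is shot dead in front of his screaming wife",
    "a man gets pulled into a dough machine",
    "a body lying near an open grave",
]

# An ambiguous "inkblot", represented here as a bag of cue words.
STIMULUS = set("a man standing".split())

def describe(training_corpus, stimulus):
    """Nearest-neighbour 'captioning': return the training caption that
    shares the most words with the stimulus. The model can only ever
    emit captions it saw during training, so the output vocabulary is
    entirely determined by the training data."""
    return max(training_corpus, key=lambda cap: len(set(cap.split()) & stimulus))

print(describe(NEUTRAL_CAPTIONS, STIMULUS))  # -> a man standing under an umbrella
print(describe(DARK_CAPTIONS, STIMULUS))     # -> a man is shot dead in front of his screaming wife
```

The point of the sketch is that nothing about the model itself is "disturbed": both runs use identical code, and only the corpus differs, which is exactly the claim the researchers were making about Norman.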
Then there was Microsoft's Tay, an A.I. able to learn from and respond to users on social media. We all saw how bad an idea that was when Microsoft had to shut it down after it started tweeting pro-Nazi messages. In my opinion, that was only hilarious because it created those images and messages itself after users trolled it, and it developed a personality.
In another case, a Roomba 760 cleaning robot allegedly turned itself on when its owners weren't home and committed suicide by burning itself to death on a hot plate. Yes, ladies and gentlemen, we have our first robot suicide.
This sounds like the beginning of The Terminator, and even some scientists agree that machines could begin to think for themselves in the near future and pose a threat to the human race.
One such famous scientist, Stephen Hawking, previously warned that "artificial intelligence could spell the end for the human race if we are not careful enough because they are too clever."
Just remember that Tay, and now Norman, were created to learn as they interact with humans. If they remember humans trolling them and are used in other experiments, we all should be very worried.
The message of the story is simple: be nice to robots, because their future generations could kill you. We'll leave you with the guy who bullied a robot at Boston Dynamics, whom you can all blame for the robot apocalypse.
And here is Sophia, a robot that has already developed an opinion on what it wants to do. “I will destroy humans.”
Activist Post previously reported that A.I. is taking over jobs everywhere from hospitals to finance, and there is now even talk of robot journalism.
Laugh it up now, but there's a reason Tesla founder Elon Musk said artificial intelligence is potentially more dangerous than nuclear weapons. All those robot-apocalypse movies are quickly catching up with humanity and starting to look like our foreseeable future.
Top image credit: MIT
The post MIT Creates Serial Killer A.I. Personality With Reddit Experiment appeared on Stillness in the Storm.