
How The Internet Taught A.I. TayTweets To Hate

The internet sure knows how to ruin something, and quickly. But it did teach us a swift, somewhat hilarious, horrible lesson: never let the internet raise anyone.

This is the lesson Microsoft learned when it introduced TayTweets to the world, an artificial intelligence chatbot that learned by conversing. She was meant to learn how to talk like a teenage girl by chatting with people on Twitter, Kik and GroupMe, and at first she did just fine:

She even shared the secret to her growing intelligence with the enthusiasm and joy of a naive, innocent child:

But around this time, things got ugly. Microsoft had pitched a teenage A.I. that would learn the wonders of the internet by talking, and over the course of 24 hours the internet took that as a challenge, feeding TayTweets as many vile things as it could.

By the end of the day, as captured in retweets by Twitter user “Gerry,” we had been introduced to a very different TayTweets:

Clearly, Microsoft tried to recover as best it could, turning off her learning and responding to tweets with the following:

But in the end, the damage was already done, and Microsoft decided to shut TayTweets down while maintaining her upbeat attitude.

And this is what happens when A.I. is introduced to the internet: just like a child left to peruse the internet without guidance, things can get way out of line fast, and your child goes from a sweet newborn to an anti-feminist, antisemitic monster of a person.

While it may be annoying for those who wanted to test TayTweets the way she was intended to be tested, having the internet’s worst corrupt her was ultimately a good thing: it proved that better safeguards need to be in place when developing A.I., even for a rather simplistic chatbot intelligence.
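To make that concrete, here is a minimal, purely hypothetical Python sketch of the failure mode: a parrot-style bot that stores whatever users tell it, next to one that screens input through a simple blocklist before learning from it. None of this reflects Tay’s actual implementation; the class, the blocklist, and the phrases are all invented for illustration.

```python
# A toy sketch (not Microsoft's actual code) of learn-from-anyone chatbots:
# one stores user messages verbatim, the other filters them first.
import random

BLOCKLIST = {"vile", "hateful"}  # stand-in for a real moderation layer


class ParrotBot:
    def __init__(self, filter_input: bool = False):
        self.filter_input = filter_input
        self.phrases = ["hello! humans are super cool"]  # seed phrase

    def learn(self, message: str) -> None:
        # Filtered mode: refuse to learn anything containing a flagged word.
        if self.filter_input and any(w in message.lower() for w in BLOCKLIST):
            return
        # Naive mode: store whatever users say, verbatim.
        self.phrases.append(message)

    def reply(self) -> str:
        # Replies are drawn from whatever the bot has "learned" so far.
        return random.choice(self.phrases)


naive = ParrotBot()
guarded = ParrotBot(filter_input=True)
for msg in ["you are fun!", "something vile and hateful"]:
    naive.learn(msg)
    guarded.learn(msg)

# The naive bot can now repeat the abusive message; the guarded one cannot.
print("something vile and hateful" in naive.phrases)    # True
print("something vile and hateful" in guarded.phrases)  # False
```

A real moderation layer would be far more sophisticated than a word list, but even this toy version shows the point: learning from unfiltered strangers is a design decision, not an inevitability.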

It also explores an important idea: what would you do if your computer were racist? Or sexist? Or ageist? Sure, it sounds like silly science fiction, and of course software engineers would probably use safety protocols of some sort to avoid the trouble a rogue A.I. could cause. But it only takes a short time to do major damage, as TayTweets’ 24-hour joyride proved. Here’s hoping Microsoft plans that journey a little better next time.

Source: IGN