Microsoft Created a Twitter Bot to Learn From Users. It Quickly Became a Racist Jerk.

Damien Meyer | AFP | Getty Images

Microsoft set out to learn about “conversational understanding” by creating a bot designed to have automated discussions with Twitter users, mimicking the language they use.

What could go wrong?

If you guessed, “It will probably become really racist,” you’ve clearly spent time on the Internet. Less than 24 hours after the bot, @TayandYou, went online Wednesday, Microsoft halted posting from the account and deleted several of its most obscene statements.


The bot, developed by Microsoft’s technology and research team and its Bing team, got major assistance in being offensive from users who egged it on. It disputed the existence of the Holocaust, referred to women and minorities with unpublishable words and advocated genocide. Several of the tweets were sent after users commanded the bot to repeat their own statements, and the bot dutifully obliged.

But Tay, as the bot was named, also seemed to learn some bad behavior on its own. According to The Guardian, it responded to a question about whether the British actor Ricky Gervais is an atheist by saying: “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism.”

Microsoft, in an emailed statement, described the machine-learning project as a social and cultural experiment.

“Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways,” Microsoft said. “As a result, we have taken Tay offline and are making adjustments.”

On a website it created for the bot, Microsoft said the artificial intelligence project had been designed to “engage and entertain people” through “casual and playful conversation,” and that it was built through mining public data. It was targeted at 18- to 24-year-olds in the United States and was developed by a staff that included improvisational comedians.

Its Twitter bio described it as “Microsoft’s A.I. fam from the internet that’s got zero chill!” (If you don’t understand any of that, don’t worry about it.)

Most of the account’s tweets were innocuous, usually imitating common slang. When users tweeted at the account, it responded in seconds, sometimes as naturally as a human would but, in other cases, missing the mark.
