Yep, a chatbot developed by Microsoft has gone rogue on Twitter, swearing and making racist remarks and inflammatory political statements.
The experimental AI, which learns from conversations, was designed to interact with 18-24-year-olds. Within 24 hours of Tay being unleashed, Microsoft appeared to be editing some of its more inflammatory comments. Whodafunkit eh, 18-24 year olds swearing and being racist on Twatter. Methinks the geniuses at Microsoft needed a fair sprinkling of common fucking sense!
The software firm said it was "making some adjustments".
"The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We're making some adjustments to Tay," the firm said in a statement.
Tay, created by Microsoft's Technology and Research and Bing teams, learnt to communicate via vast amounts of anonymised public data. It also worked with a group of humans that included improvisational comedians.