10/2/2023

Microsoft chatbot tweets

A Microsoft "chatbot" designed to converse like a teenage girl was grounded on Thursday after its artificial intelligence software was coaxed into firing off hateful, racist comments online.

Tay, a chatbot designed by Microsoft to learn about human conversation from the internet, instead learned how to make racist and misogynistic comments. Early on, her responses were confrontational and occasionally mean, but rarely delved into outright insults. Within 24 hours of its launch, however, Tay had denied the Holocaust, endorsed Donald Trump, insulted women, and claimed that Hitler was right.

Tay, which targets 18- to 24-year-olds, is attached to an artificial intelligence developed by Microsoft's Technology and Research team and the Bing search engine team. The chatbot can talk through Twitter, Kik, and GroupMe, and is designed to engage and entertain people online through casual and playful conversation.

A chatbot is a program meant to mimic human responses and interact with people as a human would. An AI chatbot can be part of a larger application or be completely stand-alone. Chatbots can be developed to handle just a few simple commands or to serve as complex digital assistants and interactive agents, and they are used in a variety of channels, such as messaging apps, mobile apps, websites, phone lines, and voice-enabled apps.

Microsoft has begun deleting many of the racist and misogynistic tweets, and has disconnected Tay so it can make a few civility upgrades. "Within the first 24 hours of coming online, we became aware of a co-ordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways," a Microsoft spokesperson said in a statement.

"Inappropriate" is an accurate description of many replies. "Wow it only took them hours to ruin this bot for me," one user posted. "This is the problem with content-neutral algorithms." Following these comments, many people began to criticize Microsoft for not anticipating abuse from those looking to teach Tay discriminatory language, either as a joke or for harm.

Caroline Sinders, a user researcher and an interaction designer, wrote in a blog post that many of Tay's responses can be attributed to poor design. Sinders explained that a common solution is having a defined response to certain prompts, which Microsoft did for specific names including Eric Garner, an African American man killed by New York City police in 2014. If prompted on Eric Garner, Tay would say it was too serious an issue to discuss. But the researchers didn't include such responses for issues such as the Holocaust or rape. When a person talking to Tay introduced the concept of the Holocaust in a sentence, according to Sinders, Tay was unable to recognize its meaning, and therefore wouldn't know in what context it is appropriate to discuss it.

"People like to find holes and exploit them, not because the internet is incredibly horrible (even if at times it seems like a cesspool) but because it's human nature to try to see what the extremes are of a device," Sinders wrote. "Designers and engineers have to start thinking about codes of conduct and how accidentally abusive an AI can be, and start designing conversations with that in mind."
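The "defined response to certain prompts" approach Sinders describes can be sketched as a simple filter that intercepts sensitive topics before the learned conversational model replies. This is a minimal illustrative sketch, not Microsoft's actual implementation; the topic list, canned reply, and function names here are all assumptions for the example.

```python
# Minimal sketch of a canned-response filter for sensitive prompts.
# The topics and replies are illustrative; the real list Microsoft
# used for Tay has not been published.

SENSITIVE_TOPICS = {
    "eric garner": "That's too serious an issue for me to talk about.",
    "holocaust": "That's too serious an issue for me to talk about.",
}

def respond(message, fallback):
    """Return a canned reply for sensitive topics; otherwise defer
    to the bot's normal (learned) response function."""
    lowered = message.lower()
    for topic, canned_reply in SENSITIVE_TOPICS.items():
        if topic in lowered:
            return canned_reply
    return fallback(message)

# The lambda stands in for the learned conversational model.
print(respond("tell me about eric garner", lambda m: "chatty reply"))
```

Per Sinders's account, Tay had such responses for names like Eric Garner but not for topics like the Holocaust, which is why a lookup of this kind would simply fall through to the learned model for those prompts.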