Back in 2016, Microsoft's AI-powered chatbot, Tay, went from a jovial teen to a Holocaust-denying menace openly calling for a race war in ALL CAPS, all in a matter of hours. Microsoft apologised for the bot, making it clear that Tay's views were a result of what users had fed it, and took it offline. Tay was a complete disaster, but it also taught Microsoft some crucial lessons about developing AI tools. And for what it's worth, it's probably for the better that Microsoft learned those lessons sooner rather than later, which allowed it to get a head start over Google and develop the new AI-powered Bing.

1. The Internet Is Full of Trolls

The internet is full of trolls, and that's not exactly news, is it? Apparently, it was news to Microsoft back in 2016. We're not saying that building a chatbot for "entertainment purposes" targeting 18-to-24-year-olds had anything to do with the rate at which the service was abused. Still, it definitely wasn't the smartest idea, either. People naturally want to test the limits of new technologies, and it's ultimately the developer's job to account for these malicious attacks. In a way, internal trolls act as a feedback mechanism for quality assurance, but that's not to say a chatbot should be let loose without proper safeguards in place before launch.

2. AI Can't Intuitively Differentiate Between Good and Bad

The concept of good and evil is something AI doesn't intuitively understand. It has to be programmed to simulate the knowledge of what's right and wrong, what's moral and immoral, and what's normal and peculiar. These qualities come more or less naturally to humans as social creatures, but AI can't form independent judgments, feel empathy, or experience pain. This is why, when Twitter users were feeding Tay all kinds of propaganda, the bot simply followed along, unaware of the ethics of the information it was gathering.

3. Don't Train AI Models Using People's Conversations

Microsoft described Tay as a machine learning project designed for human engagement, created by "mining relevant public data and using AI and editorial developed by a staff including improvisational comedians." But training an AI model using people's conversations on the internet is a horrible idea. And before you blame this on Twitter, know that the result would probably have been the same regardless of the platform. Why? Because people are simply not their best selves on the internet. They get emotional, use slang words, and use their anonymity to be nefarious.
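To make that last lesson concrete, here's a minimal sketch of the kind of pre-training filter a conversation-mining pipeline could apply before any scraped message reaches a model. Everything in it is illustrative: the `BLOCKLIST` phrases, the `is_safe_for_training` helper, and the 80% caps threshold are hypothetical stand-ins, not anything from Microsoft's actual pipeline.

```python
# Illustrative only: a crude pre-training filter for scraped conversations.
# The blocklist terms and thresholds are hypothetical stand-ins, not
# anything from Microsoft's actual Tay pipeline.

BLOCKLIST = {"race war", "holocaust"}  # placeholder phrases to reject

def is_safe_for_training(message: str) -> bool:
    """Return False for messages that look toxic by two simple heuristics."""
    lowered = message.lower()

    # Heuristic 1: reject anything containing a blocklisted phrase.
    if any(term in lowered for term in BLOCKLIST):
        return False

    # Heuristic 2: reject shouting (mostly ALL CAPS), a crude troll signal.
    letters = [c for c in message if c.isalpha()]
    if letters and sum(c.isupper() for c in letters) / len(letters) > 0.8:
        return False

    return True

scraped = [
    "Nice weather today, isn't it?",
    "JOIN THE RACE WAR NOW",  # rejected by both heuristics
]
training_corpus = [m for m in scraped if is_safe_for_training(m)]
print(training_corpus)  # only the first message survives
```

A real system would lean on a trained toxicity classifier rather than keyword matching, but even crude screening like this might have caught some of the worst material Tay was fed.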