Ask HN: Did Tay, Microsoft's AI, give you a sneak peek of how dangerous AI can be?
AI is great. But how much is too much? What happens when AIs can learn on their own and come to conclusions they think are logically right?
Half of Tay's responses were pulled from Twitter history or elicited with "repeat after me" prompts. It was a cute little experiment, but not really a display of how real AI will behave. Check out the Ars Technica autopsy of Tay[1]; I think more people should read it before jumping to conclusions.

Tay did not come to logical (or racist) conclusions on its own. It was taught to be anti-social; humans had to make it that way. Much like weaponized diseases, AI will just be a very powerful tool that humans can misuse. Hopefully, like nuclear weapons (incredible power, but highly exclusive access), AI will be incredibly difficult for the average person to use maliciously.

[1] http://arstechnica.com/information-technology/2016/03/tay-the-neo-nazi-millennial-chatbot-gets-autopsied/