Ask HN: Did Tay, Microsoft AI, give you a sneak peek of how dangerous AI can be?

1 point by adarsh_thampy 10 years ago · 2 comments · 1 min read


AI is great. But how much is too much? What happens when these systems can learn on their own and come to conclusions they believe are right?

Reference: http://arstechnica.com/information-technology/2016/03/tay-the-neo-nazi-millennial-chatbot-gets-autopsied/

iaml 10 years ago

Half of Tay's responses were pulled from Twitter history or triggered by "repeat after me". It was a cute little experiment, but not really a display of how real AI will behave. Check out this article[1]; I think more people should read it before jumping to conclusions.

[1] http://smerity.com/articles/2016/tayandyou.html

smt88 10 years ago

Tay did not come to logical (or racist) conclusions on its own. It was taught to be anti-social; humans had to make it that way.

Much like weaponized diseases, AI will just be a very powerful tool that humans can misuse. Hopefully, like nuclear weapons (incredible power, but highly exclusive), AI will be incredibly difficult for the average person to use in a malicious way.
