Ask HN: AI feels like the new radium fad, how should we regulate it?
It feels like with LLMs we are living through a new craze, similar to the popularization of cocaine in the late 19th century [1] or the radium fad in the early 20th century [2]. Both products were sold without control in drugstores under the promise of unrealistic results. Only when people realized how dangerous they were did legislation appear and ban them.
Since ChatGPT is only 3 years old, it feels like we are still at the start of the hype. We are only slowly starting to understand the downsides of this technology: an internet filling up with AI slop, studies on cognitive decline, increasing polarisation on social media, users reporting terrible behaviors such as chatbots encouraging self-harm, etc.
Cocaine and radium have disappeared from consumer-grade goods thanks to legislation. It's hard to imagine how much damage they would cause if they were still in use today, but there is no doubt it would be huge.
I think AI is here to stay, but maybe we should hurry to regulate it before it damages society too much. What do you guys think about this?
[1] https://en.wikipedia.org/wiki/History_of_cocaine#Popularization
[2] https://en.wikipedia.org/wiki/Radium_fad

The previous administration tried to do this: https://bidenwhitehouse.archives.gov/briefing-room/statement... Of course, the current administration was fueled by donors with massive AI interests.

AI isn't going to give you cancer. If you don't want to use it, don't use it. If you don't want to watch 'AI slop', then leave the social media platforms that are saturated with it. I left almost all of them already, not because of AI but because I was fed up with the low-quality content. I think regulation is going to do a lot more damage to both technological progress and economic competitiveness.

My take is that AI is giving cancer to society. Of course one can just ignore social media and LLMs, but that doesn't make them disappear, and their impact on the world is real. Polarization of public opinion is paving the way for extremist political parties; this is already a reality in many countries.

> I think regulation is going to do a lot more damage to both technological progress and economic competitiveness.

If technological progress is about feeding us garbage TikTok videos, then I think we should ask ourselves whether it's really worth pursuing.

We already had political polarisation and garbage TikTok videos before AI, though. It seems the real problem is with social media.

Yeah, and the bigger problem (and why regulation is necessary for other dangerous fads) is that PEOPLE ARE STUPID and BELIEVE EVERYTHING THEY ARE TOLD. So AI grifter techbros come along and tell people how "human" these things are, how "accurate" they are, how much they're supposed to be able to do, and we have morons like the Google chap who quit because he thought an LLM was sentient (how tf do you get a job at Google and still be that thick?). So the "average joe" is fire-hosed with messaging that is at the very least inaccurate and, if we're being honest, a LIE. It's your "average joe" users who are turning to "AI companions" when they feel depressed and want to kill themselves, because they are under the impression that it's an understanding, sentient entity. If people all thought of LLMs as "bags of words" (i.e., understood that they're just token predictors with ZERO understanding of anything), perhaps that would change. But we can't rely on the grifters to be honest - they'd lose all that yummy yummy cash! - so I guess we need regulation, just like we do for driving cars or taking medications, or any other item from a myriad I could choose from.

You can easily avoid using it, of course, but that won't protect you from others abusing it in a way that harms you.