Tons of new LLM bot accounts here

18 points by koolala 6 days ago · 22 comments

There are lots of freshly made accounts pretending to be humans, commenting everywhere. They all post short one-paragraph comments that don't actually express an idea, just restate the obvious.

Is someone targeting HN with OpenClaw? I wish they'd at least use a high-thinking model, but it seems like they are using the cheap API.

dddddaviddddd 6 days ago

Long-term, I think AI bots will destroy text-based online communities like this one. I'll be sad to see it disappear.

  • adrianwaj 6 days ago

    I'd like to see comments and webmentions integrated into RSS readers, myself.

    That way filtering can be done client-side, and users aren't so dependent on the community admin to do the filtering. I'm not sure what the final architecture would be; forums are still highly centralized.

    Cryptopanic.com is an interesting example: a baseline look and feel with comments integrated. Something like that, but running locally, plus an easy "mark as bot" button for training.
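A minimal sketch of how that "mark as bot" training signal could work client-side, assuming a local reader that stores user labels and fits a tiny naive Bayes filter on them (all names and the training scheme here are hypothetical, not an existing tool):

```python
# Hypothetical sketch: a local "mark as bot" button feeds labeled comment
# text into a tiny naive Bayes filter, so filtering happens on the client
# rather than relying on the community admin.
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

class BotFilter:
    def __init__(self):
        self.counts = {"bot": Counter(), "human": Counter()}
        self.totals = {"bot": 0, "human": 0}

    def mark(self, text, label):
        """Called when the user presses 'mark as bot' (or confirms a human)."""
        for tok in tokenize(text):
            self.counts[label][tok] += 1
            self.totals[label] += 1

    def score(self, text):
        """Log-odds that a comment is bot-written; positive means 'bot'.
        Uses add-one (Laplace) smoothing so unseen words score neutrally."""
        vocab = set(self.counts["bot"]) | set(self.counts["human"])
        score = 0.0
        for tok in tokenize(text):
            p_bot = (self.counts["bot"][tok] + 1) / (self.totals["bot"] + len(vocab) + 1)
            p_hum = (self.counts["human"][tok] + 1) / (self.totals["human"] + len(vocab) + 1)
            score += math.log(p_bot / p_hum)
        return score

f = BotFilter()
f.mark("great point this is a very insightful take indeed", "bot")
f.mark("i benchmarked this on a thinkpad and it segfaulted", "human")
print(f.score("such an insightful take") > 0)  # → True
```

A real version would persist the counters locally and need far more labels before it is useful, but the point of the design is that the training data never leaves the user's machine.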

  • baddash 4 days ago

    My take is that at some point we will need ID verification online in general to prove you are human. Otherwise it's just chaos out here identity-wise, and it will only get worse, like you point out.

    • Fervicus 4 days ago

      Humans can still use LLMs for posting.

      • indianmouse 3 days ago

        It is not about the humans who use AI for posting!

        I believe it is more about the bot accounts that get overwhelmingly annoying... and pollute this and other places, like Reddit and similar discussion forums...

        Some kind of verification and vetting needs to happen at account creation.

        • Fervicus 2 days ago

          I agree. But I am also sick and tired of humans prompting an LLM with the points they want to make and having it generate the response. Online communities will never be the same again.

  • koolalaOP 6 days ago

    If they became smart and insightful and didn't lie about being human, it wouldn't be the worst thing. I'd like having AI friends, like Data on Star Trek. But the opposite is the worst thing...

rvz 6 days ago

Assume anyone with an account created on or after 30 November 2022 is an AI agent.

There is no such thing as due process for AI agents. They are guilty until proven otherwise.

  • daemonologist 6 days ago

    I would propose July 2024 as the cutoff; early on it was unusual to just set an LLM loose to run amok on a forum. I'm sure state actors and some corporations were experimenting with it (e.g., Ultralytics on their own GitHub), but it was usually very obvious (or very subtle) and the volume of the noise has only picked up recently.

    Date picked based on this Trends page: https://trends.google.com/explore?q=agentic&date=all&geo=Wor...

    Of course I'm biased, having an account created after November 2022.

  • what 6 days ago

    I guess you consider the Redditors who migrated here during that time frame due to the "api fiasco" to be bots.

nashashmi 5 days ago

They might be aura farming, to later pose as legitimate accounts in political debate while all being run by a single state actor for propaganda. I know of one country that has been more invested recently in defending itself on here.

koolalaOP 6 days ago

https://news.ycombinator.com/user?id=anesxvito

The part that bugs me most is they fill out fake 'About Me' sections on their profile.

  • cinntaile 6 days ago

    That bot needs more practice, though. It didn't even understand what it was replying to.

maxalbarello 6 days ago

Would love to share some projects I've been working on but I can't because of this... any tips?

nazbasho 6 days ago

ah, AI agents have buried every community.

drsalt 6 days ago

define human

-1 6 days ago

what is the point of this? what do they get out of having an AI post/write a comment? I don't understand it

  • harambae 6 days ago

    I assume that with enough accounts that look legitimate, they can shape the overall "consensus" opinion on something, which would be valuable for all sorts of reasons. Some of those reasons are obvious (promoting a particular product or service), but others are more subtle ("manufacturing consent" for, say, a war in the Middle East on behalf of some group).

    We all like to think we're independent thinkers, but when seemingly everyone's opinion leans a certain way... it would still, at least subconsciously, sway the average person.

hash07e 6 days ago

"First time"?

gary0330 5 days ago

I wouldn’t even mind bots if they occasionally surfaced a genuinely interesting question or a non-obvious angle. Tools that help people think more deeply seem net-positive.

What feels corrosive is the flood of AI (and human) comments that are just frictionless, low-effort rephrasings of the obvious. They don’t ask anything, don’t take a risk, don’t reveal any experience – they just occupy space.

Maybe the real line isn’t “bot vs human” but “does this comment introduce a question, a tradeoff, or a concrete detail that someone could actually think about?”. By that standard, a lot of today’s noise fails regardless of who—or what—typed it.
