You can't trust the internet anymore

nicole.express

216 points by panic a day ago · 171 comments

WD-42 a day ago

This is why my friends and I are setting up a mesh network in our town.

The open internet has been going downhill for a while, but LLMs are absolutely accelerating its demise. I was in denial for the last few years but at this point I've accepted that the internet I grew up on as a kid in the late 90s to mid 2000s is dead. I am grateful for having experienced it but the time has come to move on.

The future for people that valued what the early internet provided is local, trusted networks in my opinion. It's sad that we need to retreat into exclusionary circles but there are too many people interested in making a buck on the race to the bottom.

  • cortesoft a day ago

    This seems like solving the problem at the wrong layer? The issue isn’t the actual network connection between people, it is the content. You could easily create your own forum or something and only include people you trust. You don’t need an entirely separate internet.

    • noosphr a day ago

      >The issue isn’t the actual network connection between people, it is the content.

      Everyone serving a website is being DDoSed by AI agents right now.

      A local mesh network is one way to make sure that no one with a terabit network can index you.

      • Bender 4 hours ago

        I was able to block them on my silly hobby domains. Most of them were already blocked to begin with from blocking other shenanigans over the years. Even something as simple as blocking anyone that does not support HTTP/2.0 takes out most bots. Adding basic-auth also stops most of what gets through. Blocking TCP-SYN with strange MSS values cuts out many before they can even touch the web daemon.
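
        A rough sketch of two of those filters (HTTP/2-only plus basic auth) as an ASGI middleware, assuming an ASGI server that terminates HTTP/2 itself (e.g. Hypercorn); the MSS filtering lives in the packet filter, not the web app, and the credentials below are placeholders:

        ```python
        # Hypothetical ASGI middleware: drop HTTP/1.x clients, require basic auth.
        import base64
        import hmac

        EXPECTED_AUTH = "Basic " + base64.b64encode(b"guest:letmein").decode()

        def bot_filter(app):
            async def middleware(scope, receive, send):
                if scope["type"] == "http":
                    headers = {k.decode().lower(): v.decode()
                               for k, v in scope.get("headers", [])}
                    if scope.get("http_version") != "2":
                        # Most scraper bots still only speak HTTP/1.x.
                        return await _reject(send, 426, b"HTTP/2 required")
                    if not hmac.compare_digest(
                            headers.get("authorization", ""), EXPECTED_AUTH):
                        return await _reject(send, 401, b"auth required",
                                             [(b"www-authenticate", b"Basic")])
                await app(scope, receive, send)
            return middleware

        async def _reject(send, status, body, extra_headers=()):
            await send({"type": "http.response.start", "status": status,
                        "headers": [(b"content-type", b"text/plain"),
                                    *extra_headers]})
            await send({"type": "http.response.body", "body": body})
        ```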

      • hnarn 7 hours ago

        >The issue isn’t the actual network connection between people, it is the content.

        >>Everyone serving a website is being ddos by AI agents right now.

        You’re missing the point: while mesh networks solve a problem, they’re not required to solve the problem ”I’m tired of the Internet” or ”I’m being indexed”. You can build your own network on top of the Internet with zero new hardware required, with something like wireguard, i2p or whatever.

      • pphysch 21 hours ago

        Then firewall out traffic that doesn't come from your local ISP's blocks or from authenticated users.

      • 1over137 17 hours ago

        Could you just geoblock the USA? Is most/all AI agent scraping from there?

      • lowtidebridge 21 hours ago

        You could set up two way TLS with client certificates
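
        A minimal sketch of that with Python's standard-library ssl module, assuming you have already issued a private CA and client certificates (the file paths are illustrative):

        ```python
        # Hypothetical HTTPS server that refuses any client without a certificate
        # signed by our own CA (i.e. mutual TLS).
        import http.server
        import ssl

        httpd = http.server.HTTPServer(
            ("0.0.0.0", 8443), http.server.SimpleHTTPRequestHandler)

        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")
        ctx.load_verify_locations(cafile="clients-ca.crt")
        ctx.verify_mode = ssl.CERT_REQUIRED  # this is what makes it two-way TLS

        httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
        httpd.serve_forever()
        ```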

        • mycall 15 hours ago

          That isn't good enough and could be DDoS'd as well.

          • direwolf20 7 hours ago

            There's a dead comment here saying that OpenAI doesn't just DDoS the internet because it can. That's true. Any supposed DDoS is a side effect of incompetent scraping, and won't affect anything they can't scrape. Not sure why it's dead — it's important to realise this.

            Edit: oh, it's probably dead because of the username

      • mycall 15 hours ago

        Make it private invite only behind Cloudflare

        • olyjohn 11 minutes ago

          Oh yes, just let Cloudflare solve all of our problems. Fuck this. We shouldn't have to rely on yet another huge company to fix this for us.

    • Bender 5 hours ago

      This is technically correct but there is no harm in them setting up a mesh in the event that the internet goes down assuming they have their own backup power. The USA and EU are both extremely vulnerable to power grid overload, cyber-attack, physical attack choke-points leading to black-starts, EMP, GRB and much more. I think it's good on them for the learning exercise and hopefully they add to existing documentation.

    • EvanAnderson a day ago

      Even if it was a "network connection" issue, creating an overlay network on top of the Internet (with VPN tunnels and mesh routing, for example) would yield wildly better bandwidth and latency characteristics.

      You can still make that overlay network geofenced and vetted. Heck, running it over a local ISP's last mile would probably yield wonderful latency.

      We need vetted webrings on the existing Internet, not a new Internet.

      • plastic-enjoyer 18 hours ago

        > We need vetted webrings on the existing Internet, not a new Internet.

        How do you think this will work, when LLMs accelerate the breakdown of trust and common epistemics?

      • bigfatkitten 19 hours ago

        But it’s a whole lot less fun and educational than building your own infrastructure. Sometimes the journey is more important than the destination.

      • wizardforhire 21 hours ago

        Reading this back and forth so far I think you’re spot on… which leads to this open question: where's the consolidated stack that makes this accessible?

        Also I think the name vetted webrings or just the vetted web is simple enough to be a movement.

        As in the vetted web movement.

        … gotta start somewhere.

    • sky2224 a day ago

      There's only so much you can do to detect and block content that's AI generated. At the end of the day, the content starts with the people creating it.

      Jumping to an invite only network isn't the most ridiculous idea imo.

      • drysart a day ago

        The best solution for dealing with AI content slop flooding your eyeballs is to hang out in places small enough to be a community -- like a local area mesh network.

        AI slop thrives in anonymity. In a community that's developed its own established norms and people who know each other, AI content trying to be passed off as genuine stands out like a sore thumb and is easily eradicated before it gets a chance to take root.

        It doesn't have to be invite-only, per se, but it needs to have its own flavor that newcomers can adapt to, and AI slop doesn't.

        • LexiMax 21 hours ago

          You can still find the essence of community on the traditional internet in places like invite-only discords, smaller mastodon instances, traditional forums, and spaces similar to Lobsters and Tildes.

          ...and not on Hacker News. Too many pseudo-anonymous jerks, too many throwaways, too much faith placed in gamified moderation tools.

          • lazide 7 hours ago

            Potentially, but those areas are also more and more getting leveraged to further identify and profile people for targeting - see the latest Discord scandal for example.

      • kolinko a day ago

        What the parent means is that you can easily build this over classic TCP/IP.

    • afpx 20 hours ago

      I'd like a semi-anonymous private network. Something like: I go to local post office and purchase a sealed token. I use the token to generate a reusable “verified human credential” with limited reuses. The credential allows me to connect to the private network.

    • bossyTeacher 20 hours ago

      > This seems like solving the problem at the wrong layer? The issue isn’t the actual network connection between people, it is the content.

      Classic HN. Focus on the tech to avoid looking at the problem.

    • willturman a day ago

      Perhaps, but it also, by default, excludes that entire class of authentication problems that are only manifested in a non-local network.

      I love the idea.

      It's also interesting in that a local mesh doesn't necessarily need to operate using the TCP/IP/HTTP stack that has been compromised at every layer by advertising and privacy intrusions.

      • kolinko a day ago

        You’re probably getting downvoted because what you said about TCP/IP/HTTP doesn’t make sense.

        • willturman a day ago

          You're right. I didn't think that through. The stack doesn't imply that a local network is somehow exposed to those concerns.

  • PaulDavisThe1st a day ago

    I "got online" in 1985. I don't recall a single point in time that a geographically local internet was ever useful or of interest to me.

    • xoxxala a day ago

      I got a 300 baud modem right around the same time. There were a few local BBSs that ran meetups, scavenger hunts, warez parties and the like. I got to know a bunch of the regulars from the area. Pretty cool time.

    • allenu a day ago

      I think before Friendster, Myspace, then Facebook, there was a period where there were discussion forums for local communities. I think it was useful for meeting people. I remember friends in the late '90s used them frequently for chatting and some made new friends in real life that way. It was a short period, though, as more established companies came along that had a wider reach.

    • mycall 15 hours ago

      You never went to a LAN party then

    • holoduke a day ago

      Bbs. Downloaded first shareware version of doom. Was it 4mb or something? I remember I had like 5kb/s and paid 5 cents a minute. My parents weren't happy those days. Now they are :)

    • iLoveOncall a day ago

      What about when you want to find hot singles in your area?

      Jokes aside, probably 10-20% of my browsing is related to local things, up to the country scale. From finding local restaurants or businesses, to finding about relevant laws or regulations, news, etc. That's not negligible.

      • PaulDavisThe1st a day ago

        Fair point, but those information sources and those things were not connected to a local internet.

  • xantronix a day ago

    I've been looking into building some sort of Wireguard mesh service since many of my friends are distributed all across the world. I wish you the very best in your endeavours!

    • shynome 10 hours ago

      Hi, I have done it; here is the link: https://well.remoon.net/ and the source repo: https://github.com/remoon-net/well. It only has a Chinese version now; after the Chinese Spring Festival I will add an English version.

    • purpleKiwi a day ago

      What would that look like in practice? I've just heard about the term and I like the description of it, especially the possibilities it gives.

      • xantronix a day ago

        That's a great question; it's something I still need to explore, but it would involve some sort of distributed public key database and IP address directory, then a routing table on top of that as people add more resources to the mesh. Wireguard is particularly good at transparent roaming, so it's trivial to use on portable devices or if you need to migrate your server from one provider to another.
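
        A toy sketch of that public key database / IP directory idea: a registry mapping members to WireGuard public keys and endpoints, rendered into the [Peer] sections of a wg config. All names, keys and addresses below are made up:

        ```python
        # Hypothetical mesh directory; in practice this would be distributed.
        PEERS = {
            "alice": {"public_key": "hKzPlaceholderKeyA=",
                      "endpoint": "alice.example.net:51820", "mesh_ip": "10.42.0.1"},
            "bob": {"public_key": "9fQPlaceholderKeyB=",
                    "endpoint": "bob.example.net:51820", "mesh_ip": "10.42.0.2"},
        }

        def render_peers(directory, me):
            """Emit [Peer] blocks for everyone in the directory except ourselves."""
            blocks = []
            for name, peer in directory.items():
                if name == me:
                    continue
                blocks.append(
                    f"# {name}\n"
                    "[Peer]\n"
                    f"PublicKey = {peer['public_key']}\n"
                    f"Endpoint = {peer['endpoint']}\n"
                    f"AllowedIPs = {peer['mesh_ip']}/32\n"
                    "PersistentKeepalive = 25\n")
            return "\n".join(blocks)

        print(render_peers(PEERS, me="alice"))
        ```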

        • Avicebron 21 hours ago

          Isn't this just Tailscale? Or Netmaker maybe if you want your own control plane?

          • xantronix 3 hours ago

            You're not wrong; mainly this is an exercise in coming up with a solution that can evolve with the needs of a modestly-sized community with decentralisation. (Oh fuck I hope I didn't just accidentally propose a blockchain, I'm gonna kms) I'm not particularly married to one particular solution or another, or even to the idea of a control plane, or its scope or locality.

  • majicDave 21 hours ago

    I think the same thing, came to the same conclusion, and started working on a solution a few months back. It's getting there, I'm just trying to polish up an mp3 player at the moment based on the network, and then I have quite a few plans. Still early days, still very buggy, and I am yet to really announce it, but I'm optimistic that something like this could help a lot. https://github.com/mjdave/katipo

  • giancarlostoro 20 hours ago

    Why not just make an invite only site on the regular web? Drastically less friction.

  • anigbrowl a day ago

    This will be about as impactful as printing out the best web articles you encounter and building a shed to shelve them in binders.

  • ethbr1 a day ago

    If you'd like, flip an email my way. We've been thinking similarly.

    Email in profile (deref a few times)

  • amelius 21 hours ago

    You don't need to create a mesh network to start a new internet.

    You could also, for instance, develop your own DNS alternative.

  • grahamburger 20 hours ago

    This is a cool idea and sounds like a fun project. That said, I imagine you could accomplish roughly the same thing with an invite only Wireguard network, with the benefit of not being geo-locked.

  • shevy-java a day ago

    It is good to see there are some internet rebels left.

    Perhaps AI-Skynet will not win - but they have a lot of money. I think we need to defund those big corporations that push AI onto everyone and worsen our lives.

  • klysm a day ago

    What does a mesh network have to do with this?

  • archagon 11 hours ago

    Reminds me of a project idea I had. You'd get a little Raspberry Pi style board with BTLE and battery power (ideally lasting for weeks at a time) and covertly stick it in some communal location, e.g. a cafe or library. Then you'd have it run some local-only forum software and disseminate instructions for connecting to it. The point would be to have a digital community accessible only by direct connection and bound to a physical location by design, kind of in the vein of Community Memory.

    It's probably too impractical to work as described, but I think that having a digital space constrained by physical access would be meaningful in a way that internet communities are not. The people you chat with would necessarily be the people in your physical environment, which would make it feel more like a local hangout than the typically vapid social media exchange.

    (On further reflection, it would probably be easier to make a mesh network app version of this. Hmm...)

marginalia_nu a day ago

Tangentially related, I have a hunch, but cannot prove, that prediction markets are the driving force behind a lot of the bad information online, since they essentially monetarily incentivize making people misjudge the state of the world.

There's been a huge uptick in this sort of brigade-like behavior around current events. I first noted it around LK99, that failed room temperature semiconductor in 2023, but it just keeps happening.

Used to be we only saw it around elections and crypto pump and dumps, now it's cropping up in the weirdest places.

  • tylergetsay 5 hours ago

    The volumes on the majority of markets are very small

    • datsci_est_2015 3 hours ago

      Are you downplaying the incentive for bad actors to generate false or misleading information online in order to create a margin for monetary gain?

      This, in the same country that has allocated some of the greatest minds of our generation towards tasks like High Frequency Trading under the guise of “price discovery”?

      Forgive me if I’m a little less credulous than you.

  • 3eb7988a1663 a day ago

    That seems really high effort. I assume most events are things which are hard to influence, so at best you are hoping to tilt the wager odds into your favor. Which could backfire if you are betting on the wrong outcome.

    • lazide 7 hours ago

      There have been a number of recent incidents in prediction markets where insiders have directly influenced the outcome after taking the opposite side of the bets.

    • pphysch 21 hours ago

      There's plenty of "high effort" market information manipulation going on, even before LLMs. Spread (justified, researched) FUD about a company your fund is shorting.

  • digiown a day ago

    Interesting theory. I'm inclined to disagree, however. Prediction markets essentially allow people to trade information for money, even the types historically more difficult to trade. There aren't enough people betting on things for deliberate misinformation to become worthwhile, IMO, and most people would stop betting after being in the wrong too often, unlike casinos which always let you win sometimes.

    I believe the misinformation is largely by self-interested parties. Politicians as well as influencers trying to push agendas, and the engagement/attention farming for advertising revenue, which are largely indifferent to truth.

    • esperent 13 hours ago

      I agree that the payoff from prediction markets doesn't seem worth it for this kind of manipulation. Collectively they hold a lot of money but I'm not sure how much individuals or groups are making. That story of someone making half a million recently was an outlier rather than norm, as far as I know.

      But, what if prediction markets are just used for information gathering, but the real money is made from market manipulation via prediction markets? I'm sure a lot of investment groups watch prediction markets very carefully, if they can manipulate the predictions, or be manipulated by them, the money to be made is big enough for any level of effort to be believable.

    • marginalia_nu a day ago

      It's the same as with crypto rug pulls, nobody is going to fall for that several times. There was still money to be made in that before everyone and their grandma wised up.

      • datsci_est_2015 3 hours ago

        > It's the same as with crypto rug pulls, nobody is going to fall for that several times.

        Same grift, different mask. NFTs, shitcoins, blockchain startups, AI startups. We continually see that even if it’s not the same mark, there are plenty of fools easily separated from their wallets.

      • digiown a day ago

        I don't think prediction markets work well for that. It is a market and you can't really prevent anyone else from benefitting from the same victims, which dilutes your earnings.

        • forgetfreeman 20 hours ago

          You don't need a perfect oracle to win on prediction markets. All it takes is enough influence to tip % in your favor. Card counting is very effective.

          • marginalia_nu 19 hours ago

            Yeah, as long as it's reasonably reliable, compound interest also adds up to turn even small ROIs into big wins across multiple investments.

  • wasmainiac 12 hours ago

    > semiconductor

    Superconductor

  • lofaszvanitt 12 hours ago

    How do you misjudge the world based on web articles? If you don't have proper foundations for where to source your information from, you are already doomed.

    • marginalia_nu 9 hours ago

      It's generally not articles that are used, but discussion boards and social media. If it seems like everyone agrees on something, it's very easy to get dragged into thinking the idea has merit

eterm a day ago

It is the failure mode of incorrect trust that has changed.

Previously you might get burned with some bad information or incorrect data or get taken in by a clever hoax once in a while.

Now you get overwhelmed by regurgitation, which itself gets fed back into the machine.

The ratio of people to bots reading has crashed to near zero.

We have burned the web.

  • lazystar a day ago

    I've been miserable over the last few weeks after coming to that same conclusion. It's so bad that I doubt the people that were pulling the strings can even tell what's going on anymore.

    • archagon 11 hours ago

      If the web is burned, something new will arise in its place (with new constraints) as long as there's a need. It's not like we only get one shot at this.

      • lazide 7 hours ago

        Depends on whether someone ends up launching nukes over this, which has a non-zero chance of occurring.

        This same type of info war tends to muddy, confuse, and get everyone on edge.

  • pixl97 21 hours ago

    Hence dead internet theory has turned into dead internet reality.

  • mycall 15 hours ago

    It corrupts future AI models too, so we might be holding onto today's models for a long time, as the least-biased versions, almost like a checksum.

neom a day ago

I thought a lot last night about how we could protect HN. I didn't come up with a good answer, except maybe you'll need to have someone with a higher reputation vouch for you, a.k.a. invites. My internet community journey has mostly just been irc -> dA -> twitter -> HN. Too frequently these days I feel I might be putting emotional energy into something that isn't human on this site; it's hard to express how that makes me feel, but it's not pleasant at all. (sigh)

  • benhurmarcel 19 hours ago

    HN with invites is basically https://lobste.rs/

  • krapp 21 hours ago

    We can't. This forum is run by the company that used to be run by Sam Altman and it's already full of people who work in the industry that's driving AI adoption and who use and aggressively believe in AI to the point of religion. There are already bot accounts posting, and humans posting comments filtered by AI. Most Show HNs are vibe coded.

    There's nothing anyone can do about it. No matter how many guidelines dang deploys, no matter how much negative social pressure we apply (and we could apply much more but doing so would just run afoul of the tone policing of the guidelines) people will use AI because they want to, and because it's a part of their identity politics, specifically to spite people who don't want to see it. They currently bother to mention when they use ChatGPT for a comment. It's just a matter of time until people don't even bother, because it's so normalized.

    The Fediverse is currently good, the culture there is rabidly anti-capitalist and anti-AI. I like Mastodon. But that will eventually, inevitably get ruined as well, and we'll just have to move on to the next thing.

mnau a day ago

Signal to noise ratio is getting *lower (EDIT: was higher) than ever. I don't see a way out of this other than "human certified" digitally signed authorship (e.g. by using eIDAS in EU). There could be a proxy to at least retain pseudo-anonymity, but trackable to a human. Tragedy of commons strikes again.
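
A minimal sketch of what signed authorship could look like, assuming the third-party Python cryptography package and a locally generated key; a real eIDAS flow would instead use a certificate issued by a qualified trust service provider:

```python
# Hypothetical author-signs-post / reader-verifies flow.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Author side: keep the private key, publish the public key.
author_key = Ed25519PrivateKey.generate()
post = "Here is something I actually wrote myself.".encode()
signature = author_key.sign(post)

# Reader/platform side: verify the post against the author's public key.
public_key = author_key.public_key()
try:
    public_key.verify(signature, post)
    print("valid: content is attributable to this key holder")
except InvalidSignature:
    print("invalid: treat as untrusted")
```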

  • PaulDavisThe1st a day ago

    "Tragedy of commons" is a false concept that obscures greed and selfishness and often lawlessness. Even its originator (Hardin) accepts that it does not describe actual history.

    https://news.ycombinator.com/item?id=46623359

    • roxolotl a day ago

      The use of the word Tragedy in the name I think makes it easier for people to excuse themselves when they monopolize the commons. “Oh it’s a tragedy, humans are just selfish, we can’t avoid it.” The tragedy is that people are comfortable excusing others' selfish, greedy behavior by saying it’s innate.

    • armchairhacker a day ago

      There’s a lot of debate under your linked comment.

      My understanding is that people tend to cooperate in smaller numbers or when reputation is persistent (the larger the group, the more reliable reputation has to be), otherwise the (uncommon) low-trust actors ruin everything.

      Most humans are altruistic and trusting by default, but a large enough group will have a few sociopaths and misunderstood interactions; which creates distrust across the entire group, because people hate being taken advantage of.

      • PaulDavisThe1st a day ago

        > Most humans are altruistic and trusting by default ...

        ... towards an in-group, yes. Not towards out-groups, as far as I can tell.

        Though for some reason this tends not to apply to solo travellers in many, many parts of the world.

        Lots of debate, yes, but very little about the basic fact that Hardin's formulation of "the tragedy of the commons" doesn't describe actual historical events in pretty much any well-documented case.

  • pino999 a day ago

    And that human can use A.I. again. It won't help.

    • mnau a day ago

      I would argue that it can be circumvented, not that it won't help. If a human uses his/her signature for a content farm, it can be flagged as such.

  • varjag a day ago

    I suppose you meant SNR is getting lower.

  • mycall 15 hours ago

    Trees of trust, self-organizing so that nodes closer to your agents are trusted more.

arjie a day ago

It is true that as the cost to construct fake content has gone to zero, we need some kind of scalable trust mechanism to access all this information. I don't yet know what this is but a Web of Trust structure always seems appealing. A lot of people are going to be excluded, but such is life, I suppose.

If I were to be honest, going to where the fish aren't is also going to help. Almost certainly there are very few LLM generated websites on the Gemini protocol.

I'm setting up a secondary archiver myself that will record simply the parts of the web that consent to it via robots.txt. Let's see how far I get.
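
A small sketch of that consent check using the standard library's robots.txt parser; the crawler name here is a placeholder, not the actual project:

```python
# Only archive a URL if the site's robots.txt allows our (hypothetical) agent.
import urllib.robotparser
from urllib.parse import urljoin, urlparse

ARCHIVER_AGENT = "ConsentingArchiver"

def may_archive(url):
    parts = urlparse(url)
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(urljoin(f"{parts.scheme}://{parts.netloc}", "/robots.txt"))
    try:
        rp.read()
    except OSError:
        return False  # unreachable robots.txt: err on the side of not archiving
    return rp.can_fetch(ARCHIVER_AGENT, url)

print(may_archive("https://example.com/some/page"))
```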

  • armchairhacker 21 hours ago

    I think if a Web of Trust becomes common, it will create a culture shift and most people won’t be excluded (compared to invite-only spaces today). If you have a public presence, are patient enough, or are a friend or colleague of someone trusted, you can become trusted. With solid provenance, trust doesn’t have to be carefully guarded, because it can be revoked and the offender’s reputation can be damaged such that it’s hard to regain.

    Also, small sites could form webs of trust with each other, trusting and revoking other sites within the larger network in the same manner that people are vouched or revoked within each site (similar to the town -> state -> government -> world hierarchy); then you only need to gain the trust of an easy group (e.g. physically local or of a niche hobby you’re an expert in) to gain trust in far away groups who trust that entire group.
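
    A toy sketch of those vouch/revoke mechanics: membership is a graph of vouches rooted at a few founding members, and revoking a voucher can strand everyone whose trust depended on them. All names are illustrative:

    ```python
    # Hypothetical web-of-trust membership tracker.
    from collections import deque

    class WebOfTrust:
        def __init__(self, founders):
            self.founders = set(founders)
            self.vouches = {}  # member -> set of people who vouched for them

        def vouch(self, voucher, new_member):
            if self.is_trusted(voucher):
                self.vouches.setdefault(new_member, set()).add(voucher)

        def revoke(self, voucher, member):
            self.vouches.get(member, set()).discard(voucher)

        def is_trusted(self, member):
            """Trusted if reachable from a founder through current vouches."""
            if member in self.founders:
                return True
            seen, queue = set(self.founders), deque(self.founders)
            while queue:
                current = queue.popleft()
                for candidate, vouchers in self.vouches.items():
                    if current in vouchers and candidate not in seen:
                        seen.add(candidate)
                        queue.append(candidate)
            return member in seen

    wot = WebOfTrust(founders={"site-admin"})
    wot.vouch("site-admin", "alice")
    wot.vouch("alice", "bob")
    wot.revoke("site-admin", "alice")
    print(wot.is_trusted("bob"))  # False: bob's chain of trust was broken
    ```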

3eb7988a1663 a day ago

I was recently running into this while playing the latest Hollow Knight game. Several sloppified sites which obviously were trying to tailor mechanics/items of the original game into the new one. The new release is only ~six months old, so there is just not that much hard content available to reference.

My question is: why? Is it really worth the ad revenue to trick a few people looking into a few niche topics? Say you pick the top 5000 trending movies/music/games and generate fake content covering the gamut. What is the payback period?

  • pixl97 21 hours ago

    >Is it really worth the ad revenue to trick a few people looking into a few niche topics?

    Maybe it's problem space exploration via pollution? Said creators of pollution (bullshit asymmetry theory in practice) have very little cost in creating said pollution and there is the possibility of a payback larger than that cost.

  • bombcar 19 hours ago

    If you live in a VLCOL country, and have access to free tooling (via various means) you only need a very small return to make it entirely worth your while.

47thpresident 20 hours ago

I had a similar experience when I was looking for YouTube videos on the Intel i7-4790T, it's a relatively obscure CPU that was only found on small-form factor pre-builds during the Haswell era. The only recent videos I found were slop videos [1] narrating a script clearly generated by an LLM, with a link to their Amazon affiliate in the description. The CPU has never been put on retail sale! These channels upload a dozen times a day on random products just to get an affiliate commission.

[1] https://www.youtube.com/watch?v=YpHUBC681iU https://www.youtube.com/watch?v=0w5a33Jeen0

xtiansimon 4 hours ago

And why not? We humans do things like this all the time. We act with powerful false beliefs. Misunderstand a situation or simply just the meaning of a word, and then build our world-view and lives around those false beliefs. Train your model on this, and it replicates those false beliefs.

ptrl600 19 hours ago

Websites with poor SEO practice are often more trustworthy.

Devasta a day ago

The future of the internet is going to be invite-only enclaves. I sometimes wonder if anyone is working on the next generation of discussion forums, or if it'll be a return to PHPBB.

  • FranklinJabar 6 hours ago

    They'll instantly become infiltrated with bots and include people based on arbitrary politics. Either the content is such that it makes zero sense to game or spam or it is lost already.

    • Devasta 2 hours ago

      But if approval of a new user in the enclave requires vouching by an existing member, then members will vet anyone who joins lest they be banned themselves, right?

      I'm talking about social clubs of like 100-200 members max, not something like subreddits with tens of thousands of users.

  • marginalia_nu a day ago

    lobste.rs is already kinda that I think, makes an interesting contrast to HN, which has a similar crowd but is open to anyone.

mojomark 14 hours ago

When you make a photocopy of a photocopy... of a photocopy, you get a VERY blurry photocopy.

CrzyLngPwd 21 hours ago

There was a very brief window, of maybe hours or days, where the Internet could be trusted, and that was a long time ago.

ninjagoo a day ago

The internet has gone from a high-trust society to a low-trust society, all in the span of a couple of decades.

Enshittification strikes again.

And it doesn't appear to have any means to rid itself of the bad apples. A sad situation all around.

  • PessimalDecimal a day ago

    It might be more accurate to say that a lot of low-trust societies have become connected to the Internet which weren't nearly as online a couple of decades ago.

    For example, a huge fraction of the world's spam originates from Russia, India and Bangladesh. And we know that a lot of the romance scams are perpetrated by Chinese gangs operating out of quasi-lawless parts of Myanmar. Not so much from, say, Switzerland.

    • kgeist a day ago

      Russia has been among the top sources of spam since the early 2000s, it's not like anything changed lately. Mail-order bride scams and similar peaked in like 2005. It doesn't take a lot of people to send spam, I don't think it's correlated with the general population's online presence. I'd actually say it's quite the opposite: in 2026, Russia has never been more disconnected from the Western parts of the Internet than it is now (the Russian Internet watchdog blocks like 30% of foreign resources since a few years ago, while Russian IPs are routinely banned on Western sites after 2022, I can barely open anything without a VPN).

      For that reason, and because of limited English proficiency, Russian netizens rarely visit foreign resources these days, except for a few platforms without a good Russian replacement like Instagram and YouTube (both banned btw, only via a VPN), where they usually stay mostly within their Russian-speaking communities. I'm not sure why any of them would be the reason the Internet as a whole has supposedly become low-trust. The OP in question is some SEO company using an LLM to churn out sites with "unique content." We already had this stuff 20 years ago, except the "unique content" was generated by scripts that replaced words with synonyms. Nothing really new here.

      • marginalia_nu a day ago

        Prigozhin falling out of the metaphorical window also seems to have tempered the amount of political stuff coming directly from Russia.

      • underlipton 43 minutes ago

        Right. The change has come from how willing the internet's gatekeepers (primarily Google) have been to play ball with SEO. Enshittification is just them becoming more amenable to it over time.

      • expedition32 a day ago

        Yeah blaming Russians and Chinese for the internet turning to shit is ludicrous.

        The Chinese have their own internet anyway - it was a shock to me at first just how little the average Chinese citizen really cares about Western culture or society. They have their own problems of course but it has nothing to do with us

        No it's the tens of billions of mostly American capital going into AI data centers and large bullshit models.

        • marginalia_nu 21 hours ago

          It's not completely unfounded. A lot of cyber crime adjacent stuff targeting the west is coming from China and Russia. This is a consequence of these countries not having functioning law enforcement cooperation with the west, as well as chilly bordering on hostile diplomatic relations. It's not (always) sanctioned by the governments of these countries, but it's not entirely unwelcome either.

          Though all that stuff is a very different thing from what's being discussed in this thread.

          • sunaookami 21 hours ago

            >A lot of cyber crime adjacent stuff targeting the west is coming from China and Russia.

            If you trust your government's propaganda that is used to justify "hackbacks" and buying 0-days on the darkweb, that fucks us all.

            • marginalia_nu 20 hours ago

              Eh, you don't really need to trust any propaganda to see this. Set up an nginx on a public IP and tail its logs. Vulnerability scans will hit you literally non stop so long as it's a western IP. Block China and Russia IPs and it drops by like 90%.

              Don't get me wrong the west isn't doing much to enforce Russian or Chinese complaints either. It's really just a messy diplomatic situation all around.

    • blell 21 hours ago

      70% of the GDP of Laos comes from scamming people in the first world.

      "A report by the Global Initiative on Transnational Organised Crime (based on United States Institute of Peace findings) estimated that revenues from “pig-butchering” cyber scams in Laos were around US $10.9 billion, which would be *equivalent to more than two-thirds (≈67–70 %) of formal Lao GDP in a recent year."

      https://globalinitiative.net/wp-content/uploads/2025/05/GI-T...

  • digiown a day ago

    The WWW has never been a high-trust place. Some smaller communities, sure, but anyone has always been able to write basically what they want on the internet, true or false, as long as it is not illegal in the country hosting it, which is close to nothing in the US.

    The difference is that there historically wasn't much to be gained by annoying or misleading people on the internet, so trolling was mainly motivated by personal satisfaction. Two things changed since then: (1) most people now use the internet as the primary information source, and (2) the cost of creating bullshit has fallen precipitously.

    • allenu a day ago

      I agree. It's not that the web was high-trust. It was more that if you landed on a niche web page, you knew whoever put it together probably had at least a little expertise (and care) since it wouldn't be worth writing about something that very few people would find and read anyway. Now that it's super cheap to produce niche content, even if very few people find a page, it's "worth it" to produce said garbage as it gives you some easy SEO for very little time investment.

      The motivation for content online has changed over the last 20 years from people wanting to share things they're interested in to one where the primary goal is to collect eyeballs to make a profit in some way.

  • PaulDavisThe1st a day ago

    to be boring, the term "enshittification" was invented by one individual, recently, and has a specific meaning. it does not refer to "things just get worse" but describes a specific strategy adopted by corporations using the internet for commercial purposes.

    • pdonis a day ago

      > a specific strategy adopted by corporations using the internet for commercial purposes.

      Isn't that what's driving the pollution of the Internet by LLMs?

      • PaulDavisThe1st a day ago

        No. The specific strategy is not about using LLMs or polluting the internet. Enshittification is ... ah screw it, let's turn to wikipedia:

        > Enshittification, also known as crapification and platform decay, is a process in which two-sided online products and services decline in quality over time. Initially, vendors create high-quality offerings to attract users, then they degrade those offerings to better serve business customers, and finally degrade their services to both users and business customers to maximize short-term profits for shareholders.

        • ninjagoo a day ago

          Feels like there is a case to be made here that the decline of The Internet rather precisely fits those definitions, with the exception that it is a collective of those products and services undergoing enshittification, since high-quality internet-based products/services no longer exist in quantity.

          Also see https://en.wikipedia.org/wiki/Enshittification#Impact which talks of the broadening of the usage of that term.

          • adrian_b 34 minutes ago

            There still are some high-quality internet-based products/services, but the most important of them are not exactly commercial, or they are even illegal, e.g. various digital libraries with old publications, archive.org, Wikipedia, Anna’s Archive, etc.

            Also there are many online shops that are the best option for purchasing various things.

            The greatest decline is in the search engines, which not only are overwhelmed by sites with fake content, but they generate fake content themselves, in the form of stupid answers that are offered instead of the real search results, whether you want them or not.

            If you know precisely the Web sites that you want to use, it is still OK, but when you search something unknown, it has become horrible.

            • PaulDavisThe1st 9 minutes ago

              > but they generate fake content themselves, in the form of stupid answers that are offered instead of the real search results, whether you want them or not.

              &udm=14 my friend, &udm=14

          • PaulDavisThe1st 20 hours ago

            > high-quality internet-based products/services no longer exist in quantity.

            asserted without evidence and likely false.

    • LPisGood a day ago

      Words change meaning as they are used. Especially negative words that may start rather specific tend to get used more generally until the specificity is lost.

      • anigbrowl a day ago

        how about we put some effort into actually picking the correct words and not just handwaving everything? Especially since the whole topic of discussion here is 'internet research is increasingly less reliable because people just wrote/publish any old BS for clicks.'

        • LPisGood 21 hours ago

          I don’t think it’s necessarily handwaving. I don’t think anyone has a monopoly on the way language is used and broadening terms is a very natural thing that happens as language evolves

          • PaulDavisThe1st 20 hours ago

            we already had "it's getting shittier every day". no need to lose the specific meaning of "enshittification".

      • PaulDavisThe1st a day ago

        "enshittification" was invented within the last couple of years and its inventor is still alive.

        I'd normally be the first to agree with and push your point about language evolving, but it's not time to apply that to a neologism this young.

        • LPisGood 21 hours ago

          I think the fact that it’s primarily an Internet related term that gets used a lot on the Internet, has something to do with the acceleration in the broadening of its meaning

    • ninjagoo 21 hours ago

      Having thought about your note some more, perhaps this would be a better encapsulation of what I was trying to say:

      The consumer internet has become platformized, and the dominant platforms are going through enshittification: early user subsidy, then advertiser/seller favoritism, now rent extraction that is degrading outcomes for everyone.

    • krapp 21 hours ago

      >to be boring, the term "enshittification" was invented by one individual, recently, and has a specific meaning. it does not refer to "things just get worse"

      It literally started meaning that hours after it was first posted to HN and being used. Sorry, that's just how language works. Enshittification got enshittified. Deal with it and move on.

      • PaulDavisThe1st 20 hours ago

        that's literally meaningless. also ahistorical, both in that this is not what happened hours after it was first posted to HN (which was months after it was originated), and also in that "things become shittier" was and is still a perfectly common expression, the source of Doctorow's neologism and much closer to what the loose use of it is trying to get at.

        • krapp 20 hours ago

          >that's literally meaningless. also ahistorical, both in that this is not what happened hours after it was first posted to HN (which was months after it was originated)

          Maybe it wasn't literally hours, but it was really fast. I remember noting how quickly people began to complain about it being used "improperly." The earliest instance I could find was this thread[0] from 2023 where user Gunax complained about it. I couldn't find an earlier reference in Algolia, it probably exists but I honestly don't care enough to put in the effort.

          [0]https://news.ycombinator.com/item?id=36297336

          >and also in that "things become shittier" was and is still a perfectly common expression

          ...perfectly encapsulated and described by the term "enshittification." Which is why people use it for that now. It's more descriptive in the general sense than it is as a specific term of art. You're complaining that a word that means "the process of turning to shit" is being used to describe "the process of turning to shit." What did people expect to happen? If you want to keep it as a precise and technical term of art, keep calling it "platform decay." A shit joke is not a technical, precise term of art.

          You can be as much of a prescriptivist crank about this as you want, it doesn't matter. "Enshittification" now refers to any process by which things "turn to shit."

          • PaulDavisThe1st 19 hours ago

            I'm not a prescriptivist over any sane time scale (say, 5-10 years and upwards).

            But here's what you're basically implying:

            A writer was thinking about the ways things get shittier, decided that there was an actual pattern (at least when it came to online services) that came up again and again, such that "shittified" or "shittier" didn't really describe the most insidious part of it, and coined "enshittification" as a neologism that captured both the "shittier/shittified" aspects and also the academic overtones of "enXXXXication" ...

            ... and within less than 3 years, sloppy use of the neologism rendered it undifferentiatable from its roots, and the language without a simple term to describe the specific, capitalistic, corporatist process that the writer had noticed.

            I can be anti-prescriptivist in general without losing my opposition to that specific process.

            • krapp 19 hours ago

              It's already happened to "vibe coding," which no longer refers to the specific process described by Andrej Karpathy but any use of AI assisted development.

              The process of language drift is accelerated exponentially by the internet. 5-10 years and upwards is an obsolete timescale, these changes can happen in months now, sometimes faster depending on the community.

          • fragmede 7 hours ago

            Fun fact: I got called out by Cory for calling other people out on using the term wrong, and he pointed me at: https://pluralistic.net/2024/10/14/pearl-clutching/#this-toi... in https://news.ycombinator.com/item?id=44776712

            • PaulDavisThe1st a minute ago

              Thanks for that.

              > As I said in that Berlin speech:

              >> Enshittification names the problem and proposes a solution. It's not just a way to say 'things are getting worse' (though of course, it's fine with me if you want to use it that way. It's an English word. We don't have der Rat für englische Rechtschreibung. English is a free for all. Go nuts, meine Kerle).

              Unfortunately, I just think that Cory is wrong in the sense that ... while it's true that English is a free-for-all (most languages are, really) ... there's an actual cost to the sloppy usage which diminishes the utility of ever even coming up with the word. It's obviously fine for Cory to be fine with it (along with anyone else being fine with it), but at a point in time where it actually is the theory that matters, I think the cost ought to be considered more seriously.

              Somewhere in the not too distant future, the theory/concept that enshittification identifies will be of less importance for a variety of reasons, and loose use of the word won't matter, because the theory/concept will be either irrelevant or widely known or both. But right now, when someone wants to talk about Cory's idea about how internet services are deliberately degraded over time, it's incredibly helpful to have a "unique" term for that.

  • fatherwavelet 8 hours ago

    The good old days when you could trust everything posted on usenet to be true.

    We must live on different planets.

stavros a day ago

You never could trust the internet. The difference is that now the problem is so widespread that it's finally spurring us into action, and hopefully a good "web of trust" or similar solution will emerge.

nunez 11 hours ago

I've been hitting this a lot lately in Kagi. I'll search for instructions on how to do a thing and some random website will have nothing but _hard_ AI slop going off about the thing I was looking up.

It must be easier than ever to build content mills these days.

shevy-java a day ago

AI is kind of like Skynet in the first Terminator movie. It now destroys our digital life. New autogenerated websites appear, handled by AI. Real websites become increasingly less likely to show up on people's daily info-feed. It is very strange compared to the 1990s; I feel we lost something here.

> The commons of the internet are probably already lost

That depends. If people don't push back against AI then yes. Skynet would have won without the rebel forces. And the rebels are there - just lurking. It needs a critical threshold of anger before they will push back against the AI-Skynet 3.0 slop.

nwhnwh 21 hours ago

The trust collapse: Infinite AI content is awful https://arnon.dk/the-trust-collapse-infinite-ai-content-is-a...

rvz a day ago

It always has been like that on the internet. Now made worse for obvious reasons.

On the internet no one knows if you're a dog, human or a moltbot.

gustavus a day ago

When I first started using the Internet there were 3 rules that were pounded into my head repeatedly.

1. Don't believe everything or anything you read or see on the Internet.

2. Never share personal information about yourself online.

3. Every man was a man, every woman was a man, and every teenager was an FBI agent.

I have yet to find a problem with the Internet that isn't caused by breaking one of the above rules.

My point being you couldn't ever trust the Internet before anyways.

  • WD-42 a day ago

    You've always needed skepticism, of course. But it used to be if you came across an article about a super obscure video game from the early 90s (referencing the blog post here) you could be reasonably sure that it wasn't completely made up. There just wasn't the incentive to publish nonsense about super niche things because it took time and effort to do so.

    Now you can collate a list of thousands of titles and simply instruct an LLM to produce garbage for each one and publish it on the internet. This is a real change, IMO.

  • PaulDavisThe1st a day ago

    You forgot Fido's Corollary:

    3a. ... and nobody knows if you're a dog.

  • anigbrowl a day ago

    Yeah when I was 10 someone told me not to believe everything I read too. But guess what, that's kinda useless advice because consulting reference material is a necessity and there are wide variations in the quality of reference material. This sort of 'don't trust anyone' heuristic can just as easily lead to conclusions that the earth is flat, the moon landing never happened, vaccinations are the leading cause of disease etc.

throwaway2027 a day ago

"You really think someone would do that? Just go on the Internet and tell lies?" https://knowyourmeme.com/memes/just-go-on-the-internet-and-t...

  • nicole_express a day ago

    A big part of my annoyance is that in the past, something like Phantasy Star Fukkokuban would not really be worth lying about; people need a reason to lie.

    • anigbrowl a day ago

      I'm gonna guess that it's just popular enough that being in the top 5 results on search engines yields a small net gain in ad revenue. It's possible the decision to generate the fake article was itself made by a machine.

      Great piece btw

      • hcs 20 hours ago

        Seems much more likely it's just going through a list of all games, collated from databases that humans have painstakingly curated.

    • yellowapple a day ago

      There was no reason to lie about knowing the Scots language well enough to be the primary contributor by volume to Scots Wikipedia, and yet that's something that happened.

      • pdonis a day ago

        > There was no reason to lie about knowing the Scots language well enough to be the primary contributor by volume to Scots Wikipedia

        Yes, there was: becoming the primary contributor by volume to Scots Wikipedia (which probably doesn't have many contributors to begin with, but there you are). Some people just have to have attention, no matter how.

    • surgical_fire 21 hours ago

      Lies are intentional. A liar cares about the truth and attempts to hide it.

      What we have here is worse; LLMs give you bullshit. A bullshitter does not care if something is true or false, it just uses rhetoric to convince you of something.

      I am far from being someone nostalgic about the old internet, or the world in general back then. Things in many ways sucked back then, we just tend to forget how exactly they sucked. But honestly, a LLM-driven internet is mostly pointless. If what I am to read online is AI generated crap, why bother reading it on websites and not just reading it straight from a chatbot already?

underlipton a day ago

It comes down to Google's failure. Rather than outright defeating the SEO eldritch abomination by adopting a zero-tolerance policy to those tactics, Google made a mutually advantageous bargain with them, of course leaving out a third party: us. They could do this because they had no competition. Now, the culture of enabling bad actors is, unfortunately, set.

Google did all the innovation it needed to and ever is going to. It needed to be broken up a decade ago. We can still do it now. Though I don't know how much it will save, especially if we don't also go after Apple, and Meta, and Microsoft.

  • avidiax a day ago

    It would be in Google's ultimate interest to label AI-generated websites and potentially rank them lower in search results.

    AI needs to be kept up to date with training data. But that same training data is now poisoned with AI hallucination. Labelling AI generated media helps reduce the amount of AI poison in the training set, and keeps the AI more useful.

    It also simply undermines the quality of search, both for human users and for AI tool use.

  • dehrmann a day ago

    > Rather than outright defeating the SEO...

    SEO is a slippery slope on both sides because a little bit is good for everyone. Google wanted pages it could easily extract meaning from, publishers wanted traffic, and users wanted relevant search results. Now there's a prisoner's dilemma where once someone starts abusing SEO, it's a race to the bottom.

    • underlipton a day ago

      >SEO is a slippery slope on both sides because a little bit is good for everyone

      I reject this emphatically. Google should never have been in the business of shaping internet content. Perhaps they should have even gone out of their way to avoid doing so. Without Google (or a better-performing competitor) acquiescing to the game, there is no SEO market.

AnimalMuppet 16 hours ago

There's a lot of people unhappy about this here. Presuming that the sentiment extends beyond HN, then it might be a problem that you could make some money by solving. (In the same way that Google figured out how to let the net tell it which pages were best, and made an insane amount of money from doing so.)

People want something real, not AI slop or shills or astroturf or corpo-speak or any of a thousand other flavors of fake. People want it rather desperately. In fact, the current situation is bad for peoples' mental health. Can someone figure out how to give people a much higher percentage of real?

expedition32 a day ago

People talk about AI slop but I predict that in a couple of years you won't be able to tell...

And at that point does it even matter? Zuckerberg wins.

LorenPechtel 18 hours ago

The Singularity is coming.

But it's the date at which it is no longer possible to discern reality you can't actually observe.
