Google's shortened goo.gl links will stop working next month

theverge.com

248 points by mobilio 5 months ago · 235 comments

edent 5 months ago

About 60k academic citations about to die - https://scholar.google.com/scholar?start=90&q=%22https://goo...

Countless books with irrevocably broken references - https://www.google.com/search?q=%22://goo.gl%22&sca_upv=1&sc...

And for what? The cost of keeping a few TB online and a little bit of CPU power?

An absolute act of cultural vandalism.

  • toomuchtodo 5 months ago

    https://wiki.archiveteam.org/index.php/Goo.gl

    https://tracker.archiveteam.org/goo-gl/ (1.66B work items remaining as of this comment)

    How to run an ArchiveTeam warrior: https://wiki.archiveteam.org/index.php/ArchiveTeam_Warrior

    (edit: i see jaydenmilne commented about this further down thread, mea culpa)

  • jlarocco 5 months ago

    IMO it's less Google's fault and more a crappy tech education problem.

    It wasn't a good idea to use shortened links in a citation in the first place, and somebody should have explained that to the authors. They didn't publish a book or write an academic paper in a vacuum - somebody around them should have known better and said something.

    And really it's not much different than anything else online - it can disappear on a whim. How many of those shortened links even go to valid pages any more?

    And no company is going to maintain a "free" service forever. It's easy to say, "It's only ...", but you're not the one doing the work or paying for it.

    • justin66 5 months ago

      > It wasn't a good idea to use shortened links in a citation in the first place, and somebody should have explained that to the authors. They didn't publish a book or write an academic paper in a vacuum - somebody around them should have known better and said something.

      It's a great idea, and today in 2025, papers are pretty much the only place where using these shortened URLs makes a lot of sense. In almost any other context you could just use a QR code or something, but that wouldn't fit an academic paper.

      Their specific choice of shortened URL provider was obviously unfortunate. The real failure is that of DOI to provide an alternative to goo.gl or tinyurl or whatever that is easy to reach for. It's a big failure, since preserving references to things like academic papers is part of their stated purpose.

      • dingnuts 5 months ago

        Even normal HTTP URLs aren't great. If there was ever a case for content-addressable networks like IPFS it's this. Universities should be able to host this data in a decentralized way.

        • justin66 5 months ago

          A DOI handle type of thing could certainly point to an IPFS address. I can't speak to how you'd do truly decentralized access to the DOI handle. At some point DNS is a thing and somebody needs to host the handle.

        • nly 5 months ago

          CANs usually have complex hashy URLs, so you still have the compactness problem

    • gmerc 5 months ago

      Ahh classic free market cop out.

      • bbuut 5 months ago

        Free market is a euphemism for “there’s no physics demanding this be worked on”

        If you want it archived do it. You seem to want someone else to take up your concerns.

        An HN genius should be able to crawl this and fix it.

        But you’re not geniuses. They’re too busy to be low affect whiners on social media.

      • FallCheeta7373 5 months ago

        if the smartest among us publishing for academia cannot figure this out, then who will?

        • hammyhavoc 5 months ago

          Not infrequently, someone being smart in one field doesn't necessarily mean they can solve problems in another.

          I know some brilliant people, but, well, putting it kindly, they're as useful as a chocolate teapot outside of their specific area of academic expertise.

      • jlarocco 5 months ago

        Well, is the free market going anywhere?

        Who's lost out at the end of the day? People who didn't understand the free market and lost access to these "free" services? Or people who knew what would happen and avoided them? My links are still working...

        There are digital public goods (like Wikipedia) that are intended to stick around forever with free access, but Google isn't one of them.

      • kazinator 5 months ago

        Nope! There have in fact been education campaigns about the evils of URL shorteners for years: how they pose security risks (used for shortening malicious URLs), and how they stop working when their domain is temporarily or permanently down.

        The authors just had their heads too far up their academic asses to have heard of this.

    • HaZeust 5 months ago

      >"It wasn't a good idea to use shortened links in a citation in the first place, and somebody should have explained that to the authors"

      ???

      DOI and ORCID sponsored link-shortening with Goo.gl. Authors did what they were told would be optimal, and ORCID was probably told by Google that it'd hone its link-shortening service for long-term reliability. What a crazy victim-blame.

  • epolanski 5 months ago

    Jm2c, but if your reference is a link to an online resource, that's borderline already (at any point the content can change or disappear).

    Even worse if your reference is a shortened link from some other service: you've just added yet another layer of unreliable indirection.

    • whatevaa 5 months ago

      Citations are citations, if it's a link, you link to it. But using shorteners for that is silly.

      • ceejayoz 5 months ago

        It's not silly if the link is a couple hundred characters long.

        • IanCal 5 months ago

          Adding an external service so you don’t have to store a few hundred bytes is wild, particularly within a pdf.

          • ceejayoz 5 months ago

            It's not the bytes.

            It's the fact that it's likely gonna be printed in a paper journal, where you can't click the link.

            • SR2Z 5 months ago

              I find it amusing that you are complaining about not having a computer to click a link while glossing over the fact that you need a computer to use a link at all.

              This use case of "I have a paper journal and no PDF but a computer with a web browser" seems extraordinarily contrived. I have literally held a single-digit number of printed papers in my entire life while looking at thousands as PDFs. If we cared, we'd use a QR code.

              This kind of luddite behavior sometimes makes using this site exhausting.

              • jtuple 5 months ago

                Perhaps times have changed, but when I was in grad school circa 2010, smartphones and tablets weren't yet ubiquitous but laptops were. It was super common to sit in a cafe/library with a laptop and a stack of printed papers to comb through.

                Reading paper was more comfortable than reading on the screen, and it was easy to annotate, highlight, scribble notes in the margin, doodle diagrams, etc.

                Do grad students today just use tablets with a stylus instead (iPad + pencil, Remarkable Pro, etc)?

                Granted, post grad school I don't print much anymore, but that's mostly due to a change in use case. At work I generally read at most 1-5 papers a day tops, which is small enough to just do on a computer screen (and I have less need to annotate, etc). Quite different than the 50-100 papers/week + deep analysis expected in academia.

                • Incipient 5 months ago

                  >Perhaps times have changed, but when I was in grad school circa 2010, smartphones and tablets weren't yet ubiquitous but laptops were. It was super common to sit in a cafe/library with a laptop and a stack of printed papers to comb through.

                  I just had a really warm feeling of nostalgia reading that! I was a pretty average student, and the material was sometimes dull, but the coffee was nice, life had little stress (in comparison) and everything felt good. I forgot about those times haha. Thanks!

                • IanCal 5 months ago

                  But in that case you have no computer to type the link into even if you wanted to.

              • ceejayoz 5 months ago

                > I have literally held a single-digit number of printed papers in my entire life while looking at thousands as PDFs.

                This is by no means a universal experience.

                People still get printed journals. Libraries still stock them. Some folks print out reference materials from a PDF to take to class or a meeting or whatnot.

                • SR2Z 5 months ago

                  And how many of those people then proceed to type those links into their web browsers, shortened or not?

                  Sure, contributing to link rot is bad, but in the same way that throwing out spoiled food is bad. Sometimes you've just gotta break a bunch of links.

                  • ceejayoz 5 months ago

                    > And how many of those people then proceed to type those links into their web browsers, shortened or not?

                    That probably depends on the link's purpose.

                    "The full dataset and source code to reproduce this research can be downloaded at <url>" might be deeply interesting to someone in a few years.

                    • epolanski 5 months ago

                      So he has a computer and can click.

                      In any case a paper should not rely on an ephemeral resource like internet links.

                      Have you ever tried to navigate to the errata of computer science books? It's one single book, with one single link, and it's dead anyway.

                      • JumpCrisscross 5 months ago

                        I’m unconvinced the researchers acted irresponsibly. If anything, a Google-shortened link looks—at first glance—more reliable than a PDF hosted god knows where.

                        There are always dependencies in citations. Unless a paper comes with its citations embedded, splitting hairs between why one untrustworthy provider is more untrustworthy than another is silly.

              • andrepd 5 months ago

                I feel like all that is beside the point. People used goo.gl because they largely are not tech specialists and aren't really aware of link rot or of a Google decision rendering those links inaccessible.

                • SR2Z 5 months ago

                  > People used goo.gl because they largely are not tech specialists and aren't really aware of link rot or of a Google decision rendering those links inaccessible.

                  Anyone who is savvy enough to put a link in a document is well aware of the fact that links don't work forever, because anyone who has ever clicked a link from a document has encountered a dead link. It's not 2005 anymore; the internet has accumulated plenty of dead links.

              • reaperducer 5 months ago

                > This kind of luddite behavior sometimes makes using this site exhausting.

                We have many paper documents from over 1,000 years ago.

                The vast majority of what was on the internet 25 years ago is gone forever.

                • eviks 5 months ago

                  What a weird comparison. Do we have the vast majority of paper documents from 1,000 years ago?

                  • SR2Z 5 months ago

                    We certainly have more paper documents from 1000 years ago than PDFs from 1000 years ago! Clearly that's the fault of the PDFs.

                • epolanski 5 months ago

                  25?

                  Try going back by 6/7 years on this very website, half the links are dead.

            • IanCal 5 months ago

              That’s an even worse reason to use a temporary redirection service. If you really need to, put in both.

            • leumon 5 months ago

              which makes url shorteners even more attractive for printed media, because you don't have to type many characters manually

        • epolanski 5 months ago

          Fix that at the presentation layer (PDFs and Word files etc. support links), not the data one.

          • ceejayoz 5 months ago

            Let me know when you figure out how to make a printed scientific journal clickable.

            • epolanski 5 months ago

              Scientific journals should not rely on ephemeral data on the internet. It doesn't even matter how long the url is.

              Just buy any scientific book and try to navigate to its own errata linked in the book. It's always dead.

              • ceejayoz 5 months ago

                Sure, just turn the three page article into a 500 page one with all the data and code.

            • diatone 5 months ago

              Take a photo on your phone, the OS recognises the link in the image, makes it clickable, done. Or use a QR code instead.
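
              Generating one is a one-liner, e.g. with the third-party qrcode package (pip install qrcode[pil]); the URL here is just a placeholder:

                  import qrcode

                  # Render the full citation URL as a QR image to embed next to
                  # the printed reference.
                  img = qrcode.make("https://example.org/full/citation/url")
                  img.save("citation-qr.png")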

  • zffr 5 months ago

    For people wanting to include URL references in things like books, what’s the right approach to take today?

    I'm genuinely asking. It seems like it's hard to trust that any service will remain running for decades.

    • toomuchtodo 5 months ago

      https://perma.cc/

      It is built for the task, and assuming the worst-case scenario of a sunset, it would be ingested into the Wayback Machine. Note that both the Internet Archive and Cloudflare are supporting partners (bottom of page).

      (https://doi.org/ is also an option, but not as accessible to a casual user; the DOI Foundation pointed me to https://www.crossref.org/ for ad hoc DOI registration, although I have not had time to research further)

      • ruined 5 months ago

        perma.cc is an interesting project, thanks for sharing.

        other readers may be specifically interested in their contingency plan

        https://perma.cc/contingency-plan

      • afandian 5 months ago

        Crossref is designed for publishing workflows. Not set up for ad hoc DOI registration. Not least because just registering a persistent identifier to redirect to an ephemeral page without arrangements for preservation and stewardship of the page doesn’t make much sense.

        That’s not to say that DOIs aren’t registered for all kinds of urls. I found the likes of YouTube etc when I researched this about 10 years ago.

        • toomuchtodo 5 months ago

          Would you have a recommendation for an organization that can register ad hoc DOIs? I am still looking for one.

          • afandian 5 months ago

            It really depends what you’re trying to do. Make something citable? Findable? Permalink?

            Crossref isn’t the only DOI registration agency. DataCite may be more relevant, although both require membership. Part of this is the commitment to maintaining the content.

            You could look at Figshare or Zenodo? https://docs.github.com/en/repositories/archiving-a-github-r...

            Then Rogue Scholar is worth a mention. https://rogue-scholar.org/

            Sorry that doesn’t answer your question but maybe that’s a clue that DOIs might not be right for your use case?

      • Hyperlisk 5 months ago

        perma.cc is great. Also check out their tools if you want to get your hands dirty with your own archival process: https://tools.perma.cc/

      • whoahwio 5 months ago

        While Perma is a solution built specifically for this problem, and a good one at that, citing the might of the backing company is a bit ironic here.

        • toomuchtodo 5 months ago

          If Cloudflare provides the infra (thanks Cloudflare!), I am happy to have them provide the compute and network for the lookups (which, at their scale, is probably a rounding error), with the Internet Archive remaining the storage system of last resort. Is that different than the Internet Archive offering compute to provide the lookups on top of their storage system? Everything is temporary, intent is important, etc. Can always revisit the stack as long as the data exists on disk somewhere accessible.

          This is distinct from Google saying "bye y'all, no more GETs for you" with no other way to access the data.

          • whoahwio 5 months ago

            This is much better positioned for longevity than google’s URL shortener, I’m not trying to make that argument. My point is that 10-15 years ago, when Google’s URL shortener was being adopted for all these (inappropriate) uses, its use was supported by a public opinion of Google’s ‘inevitability’. For Perma, CF serves a similar function.

      • N19PEDL2 5 months ago

        > Websites change. Perma Links don’t.

        Until the Cocos Islands are annexed by Australia.

    • edent 5 months ago

      The full URL to the original page.

      You aren't responsible if things go offline. No more than if a publisher stops reprinting books and the library copies all get eaten by rats.

      A reader can assess the URL for trustworthiness (is it scam.biz or legitimate_news.com), look at the path to hazard a guess at the metadata and contents, and - finally - look it up in an archive.

      • firefax 5 months ago

        >The full URL to the original page.

        I thought that was the standard in academia? I've had reviewers chastise me when I did not use the Wayback Machine to archive a citation and link to that, since listing a "date retrieved" doesn't do jack if there's no IA copy.

        Short links were usually in addition to full URLs, and more in conference presentations than the papers themselves.

      • grapesodaaaaa 5 months ago

        I think this is the only real answer. Shorteners might work for things like old Twitter where characters were a premium, but I would rather see the whole URL.

        We’ve learned over the years that they can be unreliable, security risks, etc.

        I just don’t see a major use-case for them anymore.

    • danelski 5 months ago

      Real URL and save the website in the Internet Archive as it was on the date of access?

    • AbstractH24 5 months ago

      What's the right approach to take for referencing anything that isn't preserved in an institution like the Library of Congress?

      Say the interview of a person, a niche publication, a local pamphlet?

      Maybe to certify that your article is of a certain level of credibility you need to manually preserve all the cited works yourself in an approved way.

  • kazinator 5 months ago

    The act of vandalism occurs when someone creates a shortened URL, not when they stop working.

  • djfivyvusn 5 months ago

    The vandalism was relying on Google.

    • toomuchtodo 5 months ago

      You'd think people would learn. Ah, well. Hopefully we can do better from lessons learned.

    • api 5 months ago

      The web is a crap architecture for permanent references anyway. A link points to a server, not e.g. a content hash.

      The simplicity of the web is one of its virtues but also leaves a lot on the table.

  • jeffbee 5 months ago

    While an interesting attempt at an impact statement, 90% of the results on the first two pages for me are not references to goo.gl shorteners, but are instead OCR errors or just gibberish. One of the papers is from 1981.

  • justinmayer 5 months ago

    In the first segment of the very first episode of the Abstractions podcast, we talked about Google killing its goo.gl URL obfuscation service and why it is such a craven abdication of responsibility. Have a listen, if you’re curious:

    Overcast link to relevant chapter: https://overcast.fm/+BOOFexNLJ8/02:33

    Original episode link: https://shows.arrowloop.com/@abstractions/episodes/001-the-r...

  • SirMaster 5 months ago

    Can't someone just go through programmatically right now and build a list of all these links and where they point to? And then put the list up somewhere that everyone can consult if they need to?
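
    That's essentially what the ArchiveTeam effort mentioned elsewhere in the thread is doing at scale; the hard part is enumerating the keyspace, not the resolving. As a rough sketch (the codes here are made-up placeholders, and this assumes the third-party requests package), resolving a known code is just reading the redirect's Location header:

        import csv
        import requests

        codes = ["abc123", "XyZ9q"]  # hypothetical goo.gl codes to resolve

        with open("googl_map.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["code", "target"])
            for code in codes:
                # Request the redirect without following it; the Location
                # header carries the original long URL.
                resp = requests.head(f"https://goo.gl/{code}",
                                     allow_redirects=False, timeout=10)
                writer.writerow([code, resp.headers.get("Location", "")])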

  • QuantumGood 5 months ago

    When they began offering this, their rep for ending services was already so bad I refused to consider goo.gl. It's amazing how many years now they have been introducing and then ending services with large user bases. Gmail being in "beta" for five years was, weirdly, to me, a sign they might stick with it.

  • crossroadsguy 5 months ago

    I have always struggled with this. If I buy a book I don't want an online/URL reference in it. Put the book/author/ISBN/page etc. Or refer to the magazine/newspaper/journal/issue/page/author/etc.

    • BobaFloutist 5 months ago

      I mean preferably do both, right? The URL is better for however long it works.

      • SoftTalker 5 months ago

        We are long, long past any notion that URLs are permanent references to anything. Better to cite with title, author, and publisher so that maybe a web search will turn it up later. The original URL will almost certainly be broken after a few years.

  • nikanj 5 months ago

    The cost of dealing with and supporting an old codebase instead of burning it all and releasing a written-from-scratch replacement next year

  • eviks 5 months ago

    > And for what? The cost of keeping a few TB online and a little bit of CPU power?

    For the immeasurable benefits of educating the public.

  • lubujackson 5 months ago

    Truly, the most Googly of sunsets.

  • asdll 5 months ago

    > An absolute act of cultural vandalism.

    It makes me mad also, but something we have to learn the hard way is that nothing in this world is permanent. Never, ever depend on any technology to persist. Not even URLs to original hosts should be required. Inline everything.

mrcslws 5 months ago

From the blog post: "more than 99% of them had no activity in the last month" https://developers.googleblog.com/en/google-url-shortener-li...

This is a classic product data decision-making fallacy. The right question is "how much total value do all of the links provide", not "what percent are used".

  • bayindirh 5 months ago

    > The right question is "how much total value do all of the links provide", not "what percent are used".

    Yes, but it doesn't bring home the sweet promotion, unfortunately. Ironically, if 99% of them don't see any traffic, you can scale back the infra, run it on 2 VMs, and make sure a single person can keep it up as a side quest, just for fun (but, of course, pay them for their work).

    This beancounting really makes me sad.

    • quesera 5 months ago

      Configuring a static set of redirects would take a couple of hours to set up and would require literally zero maintenance forever.
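
      Something along these lines (a minimal sketch assuming the mapping were exported as a plain dict; Flask used for brevity, entries invented) is essentially the whole service:

          # Read-only redirect service over a frozen mapping (sketch only).
          from flask import Flask, abort, redirect

          SHORT_LINKS = {
              "abc123": "https://example.org/some/long/path",  # invented entry
          }

          app = Flask(__name__)

          @app.route("/<code>")
          def resolve(code):
              target = SHORT_LINKS.get(code)
              if target is None:
                  abort(404)
              # 301: the mapping is frozen and will never change again.
              return redirect(target, code=301)

          if __name__ == "__main__":
              app.run()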

      Amazon should volunteer a free-tier EC2 instance to help Google in their time of economic struggles.

      • bayindirh 5 months ago

        This is what I mean, actually.

        If they’re so inclined, Oracle has an always free tier with ample resources. They can use that one, too.

    • socalgal2 5 months ago

      If they wanted the sweet promotion they could add an interstitial. Yes, people would complain, but at least the old links would not stop working.

    • ahstilde 5 months ago

      > just for fun (but, of course, pay them for their work).

      Doing things for fun isn't in Google's remit

      • morkalork 5 months ago

        Then they shouldn't have offered it as a free service in the first place. It's like that discussion about how Google, in all its 2-ton ADHD gorilla glory, will enter an industry, offer a (near) free service or product, decimate all competition, then decide it's not worth it and shut down. Leaving behind a desolate crater of ruined businesses and angry, abandoned users.

      • kevindamm 5 months ago

        Alas, it was, once upon a time.

      • ceejayoz 5 months ago

        It used to be. AdSense came from 20% time!

  • HPsquared 5 months ago

    Indeed. I've probably looked at less than 1% of my family photos this month but I still want to keep them.

  • sltkr 5 months ago

    I bet 99% of URLs that exist on the public web had no activity last month. Might as well delete the entire WWW because it's obviously worthless.

  • SoftTalker 5 months ago

    From Google's perspective, the question is "How many ads are we selling on these links" and if it's near zero, that's the value to them.

  • fizx 5 months ago

    Don't be confused! That's not how they made the decision; it's how they're selling it.

    • esafak 5 months ago

      So how did they decide?

      • chneu 5 months ago

        New person got hired after old person left. New person says "we can save x% by shutting down these links. 99% aren't used" and the new boss that's only been there for 6 months says "yeah sure".

        Why does Google kill any project? The people who made it moved on; the new people don't care because it doesn't make their resume look any better.

        Basically nobody wants to own this service, and it requires upkeep to maintain it alongside other Google services.

        Google's history shows a clear choice to reward new projects, not old ones.

        https://killedbygoogle.com/

      • nemomarx 5 months ago

        I expect cost on a budget sheet, then an analysis was done about the impact of shutting it down

      • ratg13 5 months ago

        They launched Firebase Dynamic Links and someone didn't like the overlap.

  • firefax 5 months ago

    > "more than 99% of them had no activity in the last month"

    Better to have a short URL and not need it, than need a short URL and not have it IMO.

  • esafak 5 months ago

    What fraction of indexed Google sites, Youtube videos, or Google Photos were retrieved in the last month? Think of the cost savings!

    • nomel 5 months ago

      YouTube already does this, to some extent, by slowly reducing the quality of your videos if they're not accessed frequently enough.

      Many videos I uploaded in 4k are now only available in 480p, after about a decade.

  • handsclean 5 months ago

    I don't think they're actually that dumb. I think the dirty secret behind "data-driven decision making" is that managers don't want data to tell them what to do; they want "data" to make even the idea of disagreeing with them look objectively wrong and stupid.

    • HPsquared 5 months ago

      It's a bit like the difference between "rule of law" and "rule by law" (aka legalism).

      It's less "data-driven decisions", more "how to lie with statistics".

  • FredPret 5 months ago

    "Data-driven decision making"

JimDabell 5 months ago

Cloudflare offered to keep it running and were turned away:

https://x.com/elithrar/status/1948451254780526609

Remember this next time you are thinking of depending upon a Google service. They could have kept this going easily but are intentionally breaking it.

  • fourseventy 5 months ago

    Google killing their domains service was the last straw for me. I started moving all of my stuff off of Google since then.

    • nomel 5 months ago

      I'm still shocked that my Google Voice number still functions after all these years. It makes me assume its main purpose is to actually be a honeypot of some sort, maybe for spam call detection.

      • joshstrange 5 months ago

        Because IIRC it's essentially completely run by another company (I want to say Bandwidth?) and, again my memory might be fuzzy, originally came from an acquisition of a company called Grand Central.

        My guess is it just keeps chugging along with little maintenance needed by Google itself. The UI hasn’t changed in a while from what I’ve seen.

      • hnfong 5 months ago

        Another shocking story to share.

        I have a tiny service built on top of Google App Engine that (only) I use personally. I made it 15+ years ago, and the last time I deployed changes was 10+ years ago.

        It's still running. I have no idea why.

      • throwyawayyyy 5 months ago

        Pretty sure you can thank the FCC for that :)

      • mrj 5 months ago

        Shhh don't remind them

      • kevin_thibedeau 5 months ago

        Mass surveillance pipeline to the successor of room 641A.

  • thebruce87m 5 months ago

    > Remember this next time you are thinking of depending upon a Google service.

    Next time? I guess there's a wave of new people that haven't learned that lesson yet.

jaydenmilne 5 months ago

ArchiveTeam is trying to brute force the entire URL space before it's too late. You can run a VirtualBox VM/Docker image (ArchiveTeam Warrior) to help (unique IPs are needed). I've been running it for a couple of months and found a million.

https://wiki.archiveteam.org/index.php/ArchiveTeam_Warrior

  • pimlottc 5 months ago

    Looks like they have saved 8000+ volumes of data to the Internet Archive so far [0]. The project page for this effort is here [1].

    0: https://archive.org/details/archiveteam_googl

    1: https://wiki.archiveteam.org/index.php/Goo.gl

  • localtoast 5 months ago

    Docker container FTW. Thanks for the heads-up - this is a project I will happily throw a Hetzner server at.

    • chneu 5 months ago

      I'm about to go set up my spare N100 just for this project. If all it uses is a lil bandwidth then that's perfect for my 10 Gbps fiber and N100.

      • addandsubtract 5 months ago

        Doing the same, even though I'm worried Google will throw even more captchas at me now than before.

    • wobfan 5 months ago

      Same here. I am genuinely asking myself what for, though. I mean, they'll receive a list of the linked domains, but what will they do with that?

  • hadrien01 5 months ago

    After a while I started to get "Google asks for a login" errors. Should I just keep going? There's no indication of what I should do on the ArchiveTeam wiki.

  • ojo-rojo 5 months ago

    Thanks for sharing this. I've often felt that the ease with which we can erase digital content makes our time period susceptible to being a digital dark age for archaeologists studying history a few thousand years from now.

    Us preserving digital archives is a good step. I guess making hard copies would be the next step.

  • AstroBen 5 months ago

    Just started, super easy to set up

  • cedws 5 months ago

    Why wouldn’t Google just publish a database of URLs? Even just a CSV file? Infuriating.

    • devrandoom 5 months ago

      I suspect there are links to some really bad shit in there. Google is probably in damage control mode.

cpeterso 5 months ago

Google's own services generate goo.gl short URLs (Google Maps generates https://maps.app.goo.gl/ URLs for sharing links to map locations), so I assume this shutdown only affects user-generated short URLs. Google's original announcement doesn't say so explicitly, but it is carefully worded to specify that short URLs of the "https://goo.gl/* format" will be shut down.

Google’s probably trying to stop goo.gl URLs from being used for phishing, but doesn’t want to admit that publicly.

  • growthwtf 5 months ago

    This actually makes the most logical sense to me, thank you for the idea. I don't agree with the way they're doing it of course but this probably is risk mitigation for them.

  • cedws 5 months ago

    That could be an explanation but even so, they could continue to serve the redirects on some other domain so that at the very least people can just change goo.gl to something else and still access whatever the link was to.

jedberg 5 months ago

I have only given this a moment's thought, but why not just publish the URL map as a text file or SQLite DB? So at least we know where they went? I don't think it would be a privacy issue since the links are all public?
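
The artifact being asked for really is that small - a sketch of what a published dump could look like, using the standard-library sqlite3 module (the rows are invented):

    import sqlite3

    rows = [
        ("abc123", "https://example.org/some/long/path"),   # invented
        ("XyZ9q", "https://example.com/another/page"),      # invented
    ]

    con = sqlite3.connect("googl_links.db")
    con.execute("CREATE TABLE IF NOT EXISTS links (code TEXT PRIMARY KEY, target TEXT NOT NULL)")
    con.executemany("INSERT OR REPLACE INTO links VALUES (?, ?)", rows)
    con.commit()

    # Anyone holding a short code could then resolve it offline:
    print(con.execute("SELECT target FROM links WHERE code = ?", ("abc123",)).fetchone())
    con.close()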

  • DominikPeters 5 months ago

    It will include many URLs that are semi-private, like Google Docs that are shared via link.

    • ryandrake 5 months ago

      If some URL is accessible via the open web, without authentication, then it is not really private.

      • bo1024 5 months ago

        What do you mean by accessible without authentication? My server will serve example.com/64-byte-random-code if you request it, but if you don’t know the code, I won’t serve it.

        • prophesi 5 months ago

          Obfuscation may hint that it's intended to be private, but it's certainly not authentication. And the keyspace for these goo.gl short URLs is much smaller than that of a 64-byte alphanumeric code.
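
          Back-of-envelope (assuming goo.gl codes are roughly 6 alphanumeric characters; the 64-character random token is the contrast case):

              ALPHABET = 62  # a-z, A-Z, 0-9

              short_code_space = ALPHABET ** 6    # ~5.7e10: enumerable at scale
              long_token_space = ALPHABET ** 64   # effectively unguessable

              print(f"{short_code_space:.2e} vs {long_token_space:.2e}")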

          • hombre_fatal 5 months ago

            Sure, but you have to make executive decisions on behalf of people who aren't experts.

            Making bad actors brute force the key space to find unlisted URLs could be a better scenario for most people.

            People also upload unlisted Youtube videos and cloud docs so that they can easily share them with family. It doesn't mean you might as well share content that they thought was private.

          • bo1024 5 months ago

            I'm not seeing why there's a clear line where GET cannot be authentication but POST can.

            • prophesi 5 months ago

              Because there isn't a line? You can require auth for any of those HTTP methods. Or not require auth for any of them.

          • wobfan 5 months ago

            I mean, going by that argument a username + password is also just obfuscation. Generating a unique 64 byte code is even more secure than this, IF it's handled correctly.

    • chneu 5 months ago

      That's not any better than what ArchiveTeam is doing. They're brute forcing the URLs to capture all of them. So privacy won't really matter here.

    • charcircuit 5 months ago

      Then use something like argon2 on the keys, so you have to spend a long time to brute force them all, similar to how it is today.
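
      A sketch of that idea, using the standard library's scrypt as a stand-in for argon2 (salt, cost parameters, and rows are illustrative only): publish KDF(code) -> target, so resolving still requires knowing the code and bulk guessing stays expensive.

          import hashlib

          def key_for(code: str) -> str:
              # Fixed public salt so every consumer derives the same lookup key;
              # the cost parameters are what make bulk guessing slow.
              return hashlib.scrypt(code.encode(), salt=b"googl-dump",
                                    n=2**14, r=8, p=1).hex()

          published = {key_for("abc123"): "https://example.org/some/long/path"}

          # Someone who knows the code can resolve it; someone who doesn't must
          # pay the full KDF cost per guess across the whole keyspace.
          print(published.get(key_for("abc123")))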

    • high_na_euv 5 months ago

      So exclude them

  • Nifty3929 5 months ago

    I'd rather see it as a searchable database, which I would think is super cheap and no maintenance for Google, and avoids these privacy issues. You can input a known goo.gl and get its real URL, but can't just list everything out.

    • growt 5 months ago

      And then output the search results as a 302 redirect and it would just be continuing the service.

  • devrandoom 5 months ago

    Are they all public? Where can I see them?

    • jedberg 5 months ago

      You can brute force them. They don't have passwords. The point is the only "security" is knowing the short URL.

    • Alifatisk 5 months ago

      I don't think so, but you can find the indexed URLs here: https://www.google.com/search?q=site%3A"goo.gl" - it's about 9.6 million links. And those are just what got indexed; there should be way more out there.

      • sltkr 5 months ago

        I'm surprised Google indexes these short links. I expected them to resolve them to their canonical URL and index that instead, which is what they usually do when multiple URLs point to the same resource.

      • chneu 5 months ago

        ArchiveTeam has the list at over 2 billion URLs, with over a billion left to archive.

spankalee 5 months ago

As an ex-Googler, the problem here is clear and common, and it's not the infrastructure cost: it's ownership.

No one wants to own this product.

- The code could be partially frozen, but large scale changes are constantly being made throughout the google3 codebase, and someone needs to be on the hook for approving certain changes or helping core teams when something goes wrong. If a service it uses is deprecated, then lots of work might need to be done.

- Every production service needs someone responsible for keeping it running. Maybe an SRE, though many smaller teams don't have their own SREs, so they manage the service themselves.

So you'd need some team, some full reporting chain all the way up, to take responsibility for this. No SWE is going to want to work on a dead product where no changes are happening, no manager is going to care about it. No director is going to want to put staff there rather than a project that's alive. No VP sees any benefit here - there's only costs and risks.

This is kind of the Reader situation all over again (except for the fact that a PM with decent vision could have drastically improved and grown Reader, IMO).

This is obviously bad for the internet as a whole, and I personally think that Google has a moral obligation to not rug pull infrastructure like this. Someone there knows that critical links will be broken, but it's to no one's advantage to stop that from happening.

I think Google needs some kind of "attic" or archive team that can take on projects like this and make them as efficiently maintainable in read-only mode as possible. Count it as good-will marketing, or spin it off to google.org and claim it's a non-profit and write it off.

Side note: a similar, but even worse situation for the company is the Google Domains situation. Apparently what happened was that a new VP came into the org that owned it and just didn't understand the product. There wasn't enough direct revenue for them, even though the imputed revenue to Workspace and Cloud was significant. They proposed selling it off and no other VPs showed up to the meeting about it with Sundar so this VP got to make their case to Sundar unchallenged. The contract to sell to Squarespace was signed before other VPs who might have objected realized what happened, and Google had to buy back parts of it for Cloud.

  • gsnedders 5 months ago

    To some extent, it's cases like this which show the real fragility of everything existing as a unified whole in google3.

    While clearly maintenance and ownership are still a major problem, one could easily imagine that deploying something similar — especially read-only — using GCP's Cloud Run and Bigtable products would be less work to maintain, as you're not chasing anywhere near such a moving target.

  • rs186 5 months ago

    Many good points, but if you don't mind me asking: if you were at Google, would you be willing to be the lead of that archive team, knowing that you'd be stuck in this position for the next 10 years, with the possibility of your team being downsized/eliminated when the wind blows slightly in the other direction?

    • olejorgenb 4 months ago

      Does maintaining a frozen service like this[1] really require a team with a leader? I get that someone needs to know the service and do maintenance when necessary, but surely that wouldn't be much more than a 20% position or something? At least if some groundwork is done to make the now simplified[2] service simpler to run.

      [1] Almost the simplest possible services (sans the scale I guess) you can imagine except simple static webpages

      [2] The original product included some sort of traffic counter, etc. IIRC

    • spankalee 5 months ago

      Definitely a valid question!

      Myself, no, for a few reasons: I mainly work on developer tools, I'm too senior for that, and I'm not that interested.

      But some people are motivated to work on internet infrastructure, and would be interested. First, you wouldn't be stuck for 10 years. That's not how Google works (and you could of course quit): you're supposed to be with a team a minimum of 18 months, and after that you can transfer away. A lot of junior devs don't care that much where they land, and the archive team would have to be responsible for more than just the link shortener, so it might be interesting to care for several services from top to bottom. SWEs could be compensated for rotating onto the archive team, and/or it could be part-time.

      I think the harder thing is getting management buy-in, even from the front-line managers.

ElijahLynn 5 months ago

OMFG - Google should keep these up forever. What a hit to trust. Trust in Google was already bad after everything they've killed; this is another dagger.

davidczech 5 months ago

I don't really get it, it must cost peanuts to leave a static map like this up for the rest of Google's existence as a company.

  • nikanj 5 months ago

    There are two things that are real torture to Google dev teams: 1) Being told a product is complete and needs no new features or changes. 2) Being made to work on legacy code.

romaniv 5 months ago

URL shorteners were always a bad idea. At the rate things are going, I'm not sure people in a decade or two won't say the same thing about URLs and the Web as a whole. The fact that there is no protocol-level support for archiving, versioning, or even client-side replication means that everything you see on the Web right now has an overwhelming probability of permanently disappearing in the near future. This is an astounding engineering oversight for something that's basically the most popular communication system and medium in the world and in history.

Also, it's quite conspicuous that 30+ years into this thing browsers still have no built-in capacity to store pages locally in a reasonable manner. We still rely on "bookmarks".

krunck 5 months ago

Stop MITMing your content. Don't use shorteners. And use reasonable URL patterns on your sites.

  • Cyan488 5 months ago

    I have been using a shortening service with my own domain name - it's really handy, and I figure that if they go down I could always manually configure my own DNS or spin up some self-hosted solution.

hinkley 5 months ago

What’s their body count now? Seems like they’ve slowed down the killing spree, but maybe it’s just that we got tired of talking about them.

musicale 5 months ago

How surprising.

https://killedbygoogle.com

cyp0633 5 months ago

The maintainer of Compiler Explorer tried to collect the public shortlinks and do the redirection themselves:

Compiler Explorer and the Promise of URLs That Last Forever (May 2025, 357 points, 189 comments)

https://news.ycombinator.com/item?id=44117722

micromacrofoot 5 months ago

This is just being a poor citizen of the web, no excuses. Google is a 2 trillion dollar company, keeping these links working indefinitely would probably cost less than what they spend on homepage doodles.

pentestercrab 5 months ago

There seems to have been a recent uptick in phishers using goo.gl URLs. Yes, even without new URLs being accepted - by registering expired domains that old short links still point to.

gedy 5 months ago

At least they didn't release 2 new competing d.uo or re.ad, etc. shorteners and expect you to migrate

ccgreg 5 months ago

Common Crawl's count of unique goo.gl links is approximately 10 million. That's in our permanent archive, so you'll be able to consult them in the future.

No search engine or crawler person will ever recommend using a shortener for any reason.

pluc 5 months ago

Someone should tell Google Maps

ChrisArchitect 5 months ago

Discussion on the source from 2024: https://news.ycombinator.com/item?id=40998549

xutopia 5 months ago

Google is making it harder and harder to depend on their software.

  • christophilus 5 months ago

    That’s a good thing from my perspective. I wish they’d crush YouTube next. That’s the only Google IP I haven’t been able to avoid.

    • chneu 5 months ago

      The alternatives just aren't there, either. Nebula is okay but not great. Floatplane is too exclusive. Vimeo..okay.

      But maybe a youtube disruption would be good for video on the internet. or it might be bad. idk.

andrii9 5 months ago

Ugh, I used to use https://fuck.it for short links too. Still legendary domain though.

Brajeshwar 5 months ago

What will it really cost for Google (each year) to host whatever was created, as static files, for as long as possible?

  • malfist 5 months ago

    It'd probably cost a couple tens of dollars, and Google is simply too poor to afford that these days. They've spent all their money on AI and have nothing left

  • chneu 5 months ago

    It's not the cost of hosting/sharing it. It's the cost of employing people to maintain this alongside other Google products.

    So, at minimum, assuming there are 2 people maintaining this at Google, that probably means it would cost them $250k/yr in payroll alone to keep this going. That's probably a very lowball estimate on the people involved, but it still shows how expensive these old products can be.

david422 5 months ago

Somewhat related - I wanted to add short URLs to a project of mine. I was looking around at a bunch of URL shorteners - and then realized it would be pretty simple to create my own. It's my content pointed to by my own service, so I don't have to worry about 3rd-party content or other services going down.
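
The core of it is small - a rough sketch of the code-generation half (persistence and the web layer left out, names invented):

    import secrets
    import string

    ALPHABET = string.ascii_letters + string.digits
    links: dict[str, str] = {}

    def shorten(target: str, length: int = 7) -> str:
        while True:
            code = "".join(secrets.choice(ALPHABET) for _ in range(length))
            if code not in links:        # retry on the (unlikely) collision
                links[code] = target
                return code

    code = shorten("https://example.org/my/own/content")
    print(f"https://short.example/{code} -> {links[code]}")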

citrin_ru 5 months ago

A link shortener in read-only mode should be very cheap to run (highly available writes can be expensive in a distributed system, but it's easier to make a read-only system work efficiently).

They are saving pennies but reminding everyone one more time that Google cannot be relied upon.

ChrisArchitect 5 months ago

Noticed recently that on some Google properties with Share buttons, it's generating share.google links now instead of goo.gl.

Is that the same shortening platform running it?

rsync 5 months ago

A reminder that the "Oh By"[1] everything-shortener not only exists but can be used as a plain old URL shortener[2].

Unlike the google URL shortener, you can count on "Oh By" existing in 20 years.

[1] https://0x.co

[2] https://0x.co/hnfaq.html

pkilgore 5 months ago

Google probably spends more money a month on coffee creamer for a single conference room than it would take to preserve this service.

fnord77 5 months ago

they attempted this in 2018

https://9to5google.com/2018/03/30/google-url-shortener-shut-...

  • quesera 5 months ago

    From the 2018 announcement:

    > URL Shortener has been a great tool that we’re proud to have built. As we look towards the future, we’re excited about the possibilities of Firebase Dynamic Links

    Perhaps relatedly, Google is shutting down Firebase Dynamic Links too, in about a month (2025-08-25).

bunbun69 5 months ago

Isn’t this a good thing? It forces people to think now before making decisions

charlesabarnes 5 months ago

Now I'm wondering why Chrome changed the behavior to use share.google links if this will be the inevitable outcome

delduca 5 months ago

Never trusted Google after Google Reader.

throwaway81523 5 months ago

Cartoon villains. That's what they are.

pfdietz 5 months ago

Once again we are informed that Google cannot be trusted with data in the long term.

mymacbook 5 months ago

Why is everyone jumping on the blame-the-victims bandwagon?! This is not the fault of users, whether they were scientists publishing papers or members of the general public sharing links. This is absolutely 100% on Alphabet/Google.

When you blame your customer, you have failed.

  • eviks 5 months ago

    They weren't customers since they didn't buy anything, and yes, as sweet as "free" is, it is the users' fault for expecting free to last forever

ourmandave 5 months ago

A comment said they stopped making new links and announced back in 2018 it would be going away.

I'm not a google fanboi and the google graveyard is a well known thing, but this has been 6+ years coming.

  • goku12 5 months ago

    For one, not enough people seem to be aware of it. They don't seem to have given that announcement the importance and effort it deserved. Secondly, I can't say that they have a good migration plan when shutting down their services. People scrambling like this to back up the data is rather common these days. And finally, this isn't a service that can be so easily replaced. Even if people knew that it was going away, there would be short links that they don't remember, but which are important nevertheless. Somebody gave an example above - citations in research papers. There isn't much thought given to the consequences when decisions like this are taken.

    Granted that it was a free service and Google is under no obligation to keep it going. But if they were going to be so casual about it, they shouldn't have offered it in the first place. Or perhaps, people should take that lesson instead and spare themselves the pain.

  • chneu 5 months ago

    I just went through the old thread and its comments. It appears Google didn't specifically state they were going to end the service. They hinted that links would continue working, but new ones would not be able to be created. It was left a bit open-ended, and that likely made people think the links would work indefinitely.

    This seems to be echoed by the ArchiveTeam scrambling to get this archived. I figure they would have backed these up years ago if it had been more widely known.

insane_dreamer 5 months ago

the lesson? never trust industry

Bluestein 5 months ago

Another one for the Google [G]raveyard.-

lrvick 5 months ago

Yet another reminder to never trust corpotech to be around long term.
