www.google.com – The page is blank when accessed
github.com: Key part (from https://github.com/webcompat/web-bugs/issues/131916#issuecom...)
"This is entirely server-side UA sniffing going wrong. You get an empty HTML doc, only a doctype, with a Firefox Android UA. You can reproduce this with curl,
$ curl -H "User-Agent: Mozilla/5.0 (Android 10; Mobile; rv:123.0) Gecko/123.0 Firefox/123.0" https://www.google.com
<!DOCTYPE html>%
and it seems that this affects all UA strings with versions >= 65. <=64 work."
Specifically, Firefox Mobile.
Whatever UA string Firefox Mobile sends in desktop mode returned a page just fine.
I was seeing this on Gmail in Firefox on Linux a couple of weeks ago, for about a week.
Great screenshot
https://camo.githubusercontent.com/16a554571b3cea973d2b73464...
Definitely one of the greatest screenshots that I've ever seen.
I...don't know what I was expecting...
I work for Google Search. Apologies for this! It’s been fixed now and posted to our search status dashboard https://status.search.google.com/incidents/hySMmncEDZ7Xpaf9i...
Fortunately, the issue was reported in the proper / only place to get support from Google: the front page of Hacker News.
Thank god for procrastination!
And what approach would be effective to get the buttons working on ChatGPT's login page?
On iOS 14, the ‘login’ button simply does nothing, regardless of the browser.
It's been reported, and yet the last time I checked it still doesn't work. It's been more than two months already, and it seems nobody gives a sh….
LOL
Would love to hear what the bug and/or fix was
I'm more interested in why ua sniffing is considered acceptable for this.
On the server side, parsing the UA string is the best & fastest way to figure out which browser is on the other end of the connection. This may need to happen before you load any JS; it's commonly used to decide which JS bundles to load. When put under the microscope, browsers have inconsistent behaviors and occasional regressions from version to version (e.g. performance with sparse arrays).
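For a concrete picture, here's a minimal sketch of server-side bundle selection (plain Node; the bundle paths and the version cutoff are hypothetical, not Google's actual logic):

    // Minimal sketch: pick a JS bundle from the User-Agent header.
    const http = require("http");

    function pickBundle(ua) {
      const m = /Firefox\/(\d+)/.exec(ua || "");
      if (m && Number(m[1]) < 65) {
        return "/js/legacy-bundle.js"; // older engine: ship polyfills
      }
      return "/js/modern-bundle.js";   // modern engine: ship less code
    }

    http.createServer((req, res) => {
      const bundle = pickBundle(req.headers["user-agent"]);
      res.setHeader("Content-Type", "text/html");
      res.end(`<!DOCTYPE html><script src="${bundle}"></script>`);
    }).listen(8080);

The bug in this thread looks like code of this shape failing partway: the UA gets mis-parsed and nothing after the doctype is written.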
How much JavaScript is needed to accept my text input and provide auto complete options? Pretty wild we need to worry about browser compatibility to do this
> How much JavaScript is needed to accept my text input and provide auto complete options?
If you're talking about Google's homepage, the answer is "a lot". You can check for yourself - go to google.com, select "view source" and compare the amount of Closure-compiled JavaScript against HTML markup.
I think you've missed the point. Google's primary web search feature could, in theory, be implemented without a line of JavaScript. That's how it was years and years ago anyway.
I use Firefox with the NoScript addon and google.com still works just fine.
I did not miss the point, I gave an answer based on the ground-truth rather than theory.
> Google's primary web search feature could, in theory, be implemented without a line of JavaScript
...and yet, in practice, Google defaults to a JavaScript-heavy implementation. Search is Google's raison d'être and primary revenue driver; I posit it is therefore optimized up the wazoo. I wouldn't hastily assume incompetence given those priors.
The important word in the question you quoted is needed.
Google homepage is 2MB. Two fucking megabytes. Without JS, it's 200K.
I can't be the only person who remembers when Google was known for even omitting technically optional html tags on their homepage, to make it load fast - they even documented this as a formal suggestion: https://google.github.io/styleguide/htmlcssguide.html#Option...
> I can't be the only person who remembers when Google was known for even omitting technically optional html tags on their homepage, to make it load fast
This was back when a large fraction of search users were on 56k modems. Advances in broadband connectivity, caching, browser rendering, resource loading scheduling, and front-end engineering practices may result in the non-intuitive scenario where the 2MB Google homepage in 2024 has the same (or better!) 99-percentile First-Meaningful-Paint time as a stripped-down 2kb homepage in 2006.
The homepage size is no longer that important, because how much time do you save by shrinking a page from 2MB to 300kb on a 50 Mbps connection with a warm cache? Browser cache sizes are much larger than they were 10 years ago (thanks to growth in client storage). After all, page weight is mostly used as a proxy for loading time.
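Back-of-envelope, ignoring latency, compression, and caching: (2 MB - 0.3 MB) x 8 bits/byte / 50 Mbps ≈ 0.27 s of pure transfer time saved on a cold load, and close to zero with a warm cache.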
I'm sorry you're going to have to pick an argument and stick to it before I can possibly hope to respond.
Either performance is so critical that a few kb to do feature detection is too much, or line performance has improved so much that 2MB of JavaScript for a text box and two buttons is "acceptable".
You can't have it both ways.
> You can't have it both ways.
Your argument goes against empirical evidence in this instance. You can have it "both ways" when client-side feature detection is the slower choice on high bandwidth connections and you want to consistently render the UI within 200ms.
Performance goes beyond raw bandwidth, and as with all things engineering, involves tradeoffs: client-side feature detection has higher latency (server-client-server round trip and network connection overheads) and is therefore unsuitable for logic that executes before the first render of above-the-fold content. All of this is pragmatic, well-known and not controversial among people who work on optimizing FE performance. Your no-serverside-detection absolutism is disproved by the many instances of UA-string parsing in our present reality.
> Your no-serverside-detection absolutism is disproved by the many instances of UA-string parsing in our present reality.
By your logic, because McDonalds is popular, it must be a healthy choice then.
I've definitely had to code up alternative front-ends routed through a server I own to access Google on slow connections. If it takes too long your browser just gives up, and the site isn't just "unusable" (slow to the point of being painful), it's actually unusable.
I don't doubt your experience - but my mention of the 99th percentile was intentional.
99th percentile is fairly arbitrary. At Google's scale, that's a $2B yearly loss in customers they could have satisfied who went elsewhere. That's roughly 200 FTEs who could be dedicated to the problem more efficiently than working on their other business concerns. Is not delivering a shit website when the connections are garbage that hard of a problem?
Thank you for the link. I had no idea about most of the optional tags. It looks ugly when taken to extremes, though.
Google is an advertising company. I'm sure they're collecting quite a bit more than just your text input.
Of course, that's why there's 1.8MB of compressed JavaScript for a text box and an image. My point being that it's silly, and I'm exasperated with the state of the internet.
2002 called and they want their terrible development practices back.
It's wild to think that everything we've collectively learned as an industry is being forgotten, just 20 years later.
- We're on the verge of another browser monopoly, cheered on by developers embracing the single controlling vendor;
- We already have sites declaring that they "work best in Chrome" when what they really mean is "we only bothered to test in Chrome".
- People are not only using UA sniffing with inevitable disastrous results, they're proclaiming loudly that it's both necessary and "the best" solution.
- The amount of unnecessary JavaScript is truly gargantuan, because how else are you going to pad your resume?
I mean really what's next?
Are we going to start adopting image slice layouts again because browsers gained machine vision capabilities?
> People are not only using UA sniffing with inevitable disastrous results, they're proclaiming loudly that it's both necessary and "the best" solution.
Since you're replying to my comment and paraphrasing a sentence of mine, I'm guessing I'm "people".
I'm curious to hear from you on what - if any - is a better alternative that can be used to determine the browser identity or characteristics (implied by name and version) on the server side? "Do not detect the browser on the server side" is not a valid answer, and suggests to me the person proffering it as an answer isn't familiar with large-scale development of performant web-apps or websites for heterogeneous browsers. A lot of browser inconsistencies have to be papered over (e.g. with polyfills or alternative algorithm implementations), without shipping unnecessary code to browsers that don't need the additional code. If you have a technique faster and/or better than UA sniffing on the server side, I'll be happy to learn from you.
"Do feature JavaScript feature detection on the client" is terrible for performance if you're using it to dynamically load scripts on the critical path.
We're also back to table layouts with grid, albeit a lot more usable this time around.
I don't recall that there was ever anything inherently wrong with using tables for layout, except that it was a misuse of tables so we were told it wasn't "semantic". Thus you had years of people asking on forums how to emulate tables using a mess of floating divs until flexbox/grid came around. In retrospect, tables are also clearly incompatible with phone screens, but that wasn't really a problem at the time.
One, it made the code unreadable and impossible to maintain properly, especially since most of those tables were generated straight out of Photoshop or whatever.
Two, it was an accessibility nightmare.
At least modern grid design fixes those.
It's ok, sure it's stressful when something like this happens. Software is a hard field and the amount of things that can go wrong is immense. When I think about how few errors and downtime some of the larger services have I realize how much work must be involved.
I'd divide "things going wrong" into forced and unforced errors. Your upstream service providers sending subtly malformed data is a forced error. It happens. Doing user agent sniffing for presentation, poorly, is an unforced error.
It's not amateurish to have problems. It is amateurish (or malicious) to have problems caused by this specific class of issue.
Google partners with Mozilla to deliver a completely ad-free search experience.
Which, coincidentally, happens to be search-free as well.
which in turn makes it sound like free money from Googs
straight to the mozilla ceo
Google is tracking the incident here: https://status.search.google.com/incidents/hySMmncEDZ7Xpaf9i...
Disclosure: I work at Google but not on this. This was linked from the Bugzilla bug.
What I don’t understand is why this isn’t just a rollback, or at worst a revert commit and redeploy. I can forgive an issue with a slightly obscure browser, but the fix should be trivial for Google engineers?
Well, as these things usually go...
There's a feature important to Middle Manager 32456. You can't just revert it for a not-Google browser. That's just a no-go.
So a fix needs to be developed, QA'd, rolled. I presume it's going to be an out-of-schedule roll so it'll probably involve some bureaucracy. (One of my clients is a few thousand people public company and even there a hotfix requires QA lead approval.)
Nothing ever is simple.
Perhaps it's been broken for a while? URL-bar search works, so this seems to only happen if you navigate to google.com directly. How many people navigate to google.com to search while using Firefox on Android? Just a hunch, though.
OTOH, it's amazing that apparently they don't have UI tests for FF mobile.
> OTOH, it's amazing that apparently they don't have UI tests for FF mobile.
Why? They don't have UI tests for Opera on the Nintendo Wii either, and at this point I bet the install base for Wii-Opera is still larger than the install base for FF mobile.
TBH, when I was there it surprised me that Google didn't have a dedicated hardware test-bench room with rows upon rows of browser deployments that every UI change needed to be burn-in tested on, but... they don't. They never did. In general, their strategy is to be nimble and deploy rapidly (with the expectation they can roll back rapidly). In that context, it actually makes sense why they don't have that warehouse of test-bench installations... they'd slow down rollbacks as well as rollouts.
A handful of projects have dedicated testing targets. They're driven mostly by the ideology of individual Googlers (some people really like Firefox) and a handful of high-value users that have specific installs Google isn't interested in pissing off. Since they do very little (relatively speaking) B2B business, that's a very short list of names.
> Next update will be within 12 hours.
(From the status page linked above.)
That sounds pretty far from being nimble and deploying rapidly. Not really a knock on the people doing the work — doing stuff in high stress sucks. But it's clear that they're intentionally deprioritizing a competitor.
For one, that "deprioritize a competitor" is not clear at all. Why would that be so? Isn't it far more likely, given the rarity of these events, that a test regression occurred or some other subtle issue rather than assumed malfeasance?
For two, that "next update in 12 hours" is user comms. For me, at least, google.com works fine both on curl and my browser. That's a fairly normal cadence for big companies.
On the larger point about "nimble and deploying rapidly", the people who generally brag about "being nimble and deploying rapidly" almost never serve an even 1/100th the audience the size of Google.com does, and it's really questionable if, at that scale, you actually want to risk global regressions even on trivial bugs.
So I don't know what that user is talking about, and I agree with you that they are obviously not that.
That approach may be antithetical to the modern startup engineer frantic to prove their stock's hypothetical worth to their investors, unconcerned about trivial revenue loss from frontpage issues because of whatever latest node.js drama nuked their continuously deployed website. But the fact that "the landing search page is broken for 1% of users in a rare but public use case" is news at all is because Google's approach for search sets our expectations that this won't happen.
I think the parent comment's point is: if this were affecting a version of Google-branded Chrome with similar market share, do you think we would still be getting "next update in 12 hours"?
I remember an internal mantra in Google along the lines of "if you break something in production, roll back first, ask questions about how it happened later, even if you think it's a simple fix". It feels telling that this is not what is happening in this circumstance.
If it were Chrome, it'd never have gotten out the door because the developers writing the feature use Chrome.
> if you break something in production, roll back first, ask questions about how it happened later, even if you think it's a simple fix
I don't know that they have enough information on the breakage to know when to rollback to.
>the developers writing the feature use Chrome.
To write the code? It was only mobile Firefox that was broken. I don't think most developers write code using mobile Chrome.
The UA for Desktop Chrome looks like
`Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36`
The UA for mobile Chrome looks like
`Mozilla/5.0 (Linux; Android 10; K) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.6099.210 Mobile Safari/537.36`
There is very little different here that could trip an error in the google.com UA handler.
In contrast, the Firefox Mobile one is
`Mozilla/5.0 (Android 4.4; Mobile; rv:41.0) Gecko/41.0 Firefox/41.0`
They're the only one (I can find) with "Mobile" inside the parens; bet you money the issue is a badly-formatted regex tripping over the paren / symbol combination and thinking that UA describes some deny-listed crawler somewhere (or falls off the end of recognizable UAs and trips a fallback to "Vend nothing").
Since they're always testing on Chrome Desktop, and Chrome Mobile emits an almost-identical UA string, my previous statement holds: issues with Chrome mobile are generally more likely to come up in testing on Chrome Desktop than issues in Firefox Mobile are likely to come up in testing on Chrome Desktop.
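Purely for illustration (nobody outside Google knows the actual parser), here is how a naive, hypothetical regex that expects "Mobile" after the parenthesized platform section, where Chrome and Safari put it, would pass Chrome's strings but misclassify Firefox Android's:

    // Hypothetical sketch of a sloppy mobile check: "Mobile" is only
    // looked for after a closing paren, which matches Chrome/Safari
    // UAs but misses Firefox Android's "Mobile" inside the parens.
    var isMobile = /\)[^)]*\bMobile\b/;

    var chromeMobile = "Mozilla/5.0 (Linux; Android 10; K) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.6099.210 Mobile Safari/537.36";
    var firefoxMobile = "Mozilla/5.0 (Android 10; Mobile; rv:123.0) Gecko/123.0 Firefox/123.0";

    console.log(isMobile.test(chromeMobile));  // true
    console.log(isMobile.test(firefoxMobile)); // false: misclassified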
Only way to really test that is to break search for chrome and see how long that takes to rollback.
Nimble is on the order of a week, not on the order of hours at their scale.
But it's also not on the order of quarters, is what I mean.
Google's strategy for search ux is decidedly not "nimble and rapid" and I don't understand why anyone with first hand knowledge would ever suggest that.
It is gated on a lot of things, especially relative to early days of the company. It's just that browser compatibility isn't generally one of those things... They still handle that by the belief they can roll back quickly.
Relatively speaking, you can still undo the change to the front page faster than you can, say, roll out a new version of a desktop application, especially if the change fixes an active fire.
It appears that the problem is related to the version string transmitted by Firefox Mobile. If they were to send an older user agent string, the issue would likely be resolved. However, any rollback would need to be implemented by Firefox, and it seems they are not at fault here.
From Google's standpoint, this issue may be considered a "new" bug, which means they need to conduct an investigation and address it. Consequently, a rollback is not a viable solution for them.
Why would Firefox have to "rollback" their UA string back to version 64, released 6 years ago (2018)? That seems utterly ridiculous for a server side UA sniffing bug rolled out by the Google Search Team.
maybe not that easy if they use monorepos
Submitters: "Please use the original title, unless it is misleading or linkbait; don't editorialize." - https://news.ycombinator.com/newsguidelines.html
If you want to say what you think is important about an article, that's fine, but do it by adding a comment to the thread. Then your view will be on a level playing field with everyone else's: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...
(Submitted title was "Google breaks search for Firefox users because of bad UA string sniffing")
This reminds me of a blog post a former exec of Mozilla put out saying Google had been intentionally breaking things to get users to jump ship to Chrome.
Is that what you think happened here? Google made the search page blank for Firefox users to get them on Chrome? Or perhaps it's the work of Hanlon's razor.
I don’t think there’s an official “sabotage Firefox” memo waiting to be subpoenaed. I think it’s more along the lines of Google only focusing QA on Chrome and Safari, figuring that they save money and, probably never written down, any problems just make their browser look better.
This keeps happening over and over for Google at a rate far in excess of other major tech companies, so we know they don't make an effort to test with Firefox. It's why I'd like regulators to oblige anyone making a browser used by more than, say, 100k people to test in other browsers on every site their larger organization operates, with a maximum time to patch any compatibility or performance issue that isn't due to a browser not implementing a W3C standard. The goal would be to cover things like YouTube using the Chrome web component API for ages after the standard shipped and serving a slow polyfill to non-Chrome browsers.
> Google only focusing QA on Chrome and Safari
FWIW, over my time at Google (2012-2022) we had massively better testing on Firefox than Safari, because you can run Firefox on Linux on arbitrary hardware while Safari requires MacOS which requires Apple hardware and is automation-hostile. We did eventually get some tests on Safari, but they were slower and flakier.
So I guess my question would be where things were breaking down - Meet was almost unusable in Firefox from 2019-2021 (Slack, Teams, Zoom, Chime, and WebEx were all fine), and the GCP console would regularly get into a login redirect loop, only in Firefox, for months at a time. Our GCP rep never could get an answer on whether anyone was even aware of that problem.
My guess is those paths didn't have automated tests, or the automated tests missed some important aspect of real-world usage? Issues engineers working on the product ran into were way more likely to get fixed (among other things because reproducing the issue was easier) and almost all engineers primarily used Chrome.
(Not defending; trying to give background.)
> Not defending; trying to give background
I appreciate that - it’s been really frustrating knowing how many good technical staff Google has and seeing things just not get fixed. Our GCP reps seemed resigned to this happening but didn’t feel they had leverage to get attention on the product side, which made me feel bad for them.
Considering it has happened time and time again, and this latest one happens just by changing the UA… yes.
If it looks like a banana, shaped like a banana, feels like a banana, it’s probably a banana (or a plantain).
Come on, don't sell Google short here. The previous one happened just by changing the UA too :)
That's discussed on GP's link, a former Firefox vice president said
> "Over and over. Oops. Another accident. We'll fix it soon. We want the same things. We're on the same team. There were dozens of oopses. Hundreds maybe?"
> "I'm all for 'don't attribute to malice what can be explained by incompetence' but I don't believe Google is that incompetent. I think they were running out the clock. We lost users during every oops. And we spent effort and frustration every clock tick on that instead of improving our product. We got outfoxed for a while and by the time we started calling it what it was, a lot of damage had been done," Nightingale said.
I tend to believe him
This is a baseless accusation which defies logic.
Chrome has 65.76% of the browser market share.
Firefox has 2.93% of the browser market share, and of those, only some Android users were affected.
And you're saying Google purposefully contrived a situation to break their own service to get those <3% of users to go and install Chrome, and assume those users knew to take that action? All while keeping this conspiracy quiet from other staff at Google and with malicious (and possibly anti-competitive) intent?
Or is it possible no one cares about Firefox because so few people use it and this is the result of a simple bug?
Firefox had 30% market share in 2011; the article links to the discussion from when Google Chrome first released. The article discusses how this behavior ate away at that market share every time Google released something that worked in Chrome and broke in Firefox. As a result, over the years, Firefox has arrived at the 3% market share you cite.
Firefox share declined because the market objectively decided Chrome is a better browser. Framing it as Chrome "picking" on Firefox is just silly. And I say this as a die-hard Firefox user and fan.
You have no evidence to support this assertion. I know this because there is no objective "browser quality" standard. It is at least as likely that Google aggressively advertising it on every single page load of every single web presence gained traction for Chrome.
Actually, I do: the fact most people use Chrome and not Firefox.
That's a popularity metric, not a quality metric. Conflating the two is disingenuous at best.
Turns out quality software is also popular. Imagine that!
Plenty of quality software is unpopular, and plenty of popular software is shit. I'm confident you know this. What is driving this series of pointless nonsense posts?
I do. I also happen to know people use Chrome because it's quality.
This is about a pattern of behaviour that started when Chrome launched (and had less market share than Firefox) and has continued since then. This isn't "They're trying to take our tiny market share", this is "This is how they took our quite substantial market share".
If they already took substantial market share, why would they need to further break Firefox users' experience? You just made my point for me.
> why would they need to further break Firefox users' experience
To prevent the market share from going up? That's the logical thing to do after spending millions developing and marketing your browser to the number 1 spot.
Why would they waste that money and effort by playing nice now and letting the market share go back up?
Not everything is a conspiracy. Sometimes users just prefer faster, better advertised, better supported software. To suggest Google is actively and wilfully interfering with <3% of browser market share is just so ridiculous, so foolish, it is actually amusing.
Youtube on firefox has been broken since they started their ad-blocker brigade. It's even broken for me, a user on macos using firefox without an ad-blocker enabled for youtube in premium mode. The UI has become sluggish and 'pause-y'. Videos play fine but often when trying to scrub with the timeline the timeline thumbnail will not respond or other controls will be very sluggish.
If it is an old video (3 days), just search for it in DDG, and watch it via DDG.
This is not a solution that will work for mom and pop. Exactly the kind of workaround that relegates Firefox to being a "techies only" browser.
Firefox's irrelevance is why it's relegated to techies. And sure, you're free to blame Google for some of that, but let's not pretend Mozilla has always been some wellspring of user-pleasing decision making.
> Youtube on firefox has been broken since they started their ad-blocker brigade.
I use Youtube in Firefox daily, even in Strict mode with many user.js tweaks, and have never had a problem.
> The UI has become sluggish and 'pause-y'.
Get a faster computer.
This is the wrong assumption to make. I can tell you, as someone with a powerful rig that is by no stretch a "slouch", sitting at 0% CPU and 0% GPU, that I have seen YouTube act like garbage for absolutely no reason. I've even had video playback skip ahead a second, which throws me off when watching videos.
> Get a faster computer
or... and keep up with me here: Google has been known to do A/B testing on users. Just because you're part of the "B" group that doesn't see it doesn't mean it's not happening. It only means Google is artificially putting their thumb on the scale, and I take serious issue with people handwaving issues away with "get a better computer". Nothing is wrong with my computer, and only YouTube has this issue; no other video platform does this, so why is it only YouTube?
> Get a faster computer.
Yeah my m2 max is a total fucking slouch, just barely better than a 486.
If you can't play videos on an M2, that's not a Youtube problem, that's a "you messed your computer up" problem.
Vimeo works fine, Nebula works fine, Max works fine... YouTube videos play fine, but the UI is hitchy/pause-y/janky. It's WIDELY reported that this is a problem with YouTube specifically on Firefox, and ONLY since they went on their adblocker-blocking schemes (even though I subscribe to YouTube Premium and disable my adblocker on their site). This happens whether I have 100 tabs open or cold-boot my system and start a fresh Firefox with a single tab.
Swallow your fucking pride and admit that you don't know what you are talking about.
There isn't a good reason to UA sniff. I run Firefox serving a Chrome UA as my daily driver, and never run into any issues. The reason I started this nonsense was because of a few sites that completely falsely asserted they needed Chrome to run properly, aka the only issue was the fact that they looked at the user-agent string.
Not to mention the fact that this is google.com, not some wild feat of frontend engineering. They shouldn't be serving any JS that wouldn't run on IE, forget anything close to a modern browser.
https://github.com/webcompat/web-bugs/issues/131916#issuecom...
This is entirely server-side UA sniffing going wrong. You get an empty HTML doc, only a doctype, with a Firefox Android UA. You can reproduce this with curl and it seems that this affects all UA strings with versions >= 65. <=64 work.
Interestingly, if you remove "Android" from the UA it sends more, but if you remove "Mobile" it still sends only the doctype, and removing "Firefox" fixes it entirely.
Is there any legitimate use for UA string sniffing vs feature detection? And what could a search engine possibly do that's so bleeding edge it doesn't work in all browsers?
There've been a few, but the first one that impacted me was Chrome's switch to requiring the SameSite=None flag on Set-Cookie headers[1] in a third party context.
> Warning: A number of older versions of browsers including Chrome, Safari, and UC browser are incompatible with the new None attribute and may ignore or restrict the cookie.
This caveat meant that UA sniffing was the only reliable means of setting cookies that were guaranteed to work properly.
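For flavor, a hedged sketch of what that branching can look like (Node style, not any particular production code; the real incompatible-clients list covers more browsers than this one check):

    // Sketch: branch on the UA before setting a cross-site cookie.
    // Safari on iOS 12 treated SameSite=None as SameSite=Strict, so
    // sending the attribute to it silently broke the cookie.
    function setCrossSiteCookie(req, res, name, value) {
      const ua = req.headers["user-agent"] || "";
      const broken = /iPhone OS 12_|iPad; CPU OS 12_/.test(ua);
      const attrs = broken
        ? "Secure"                 // omit SameSite for broken clients
        : "SameSite=None; Secure"; // required by modern browsers
      res.setHeader("Set-Cookie", `${name}=${value}; Path=/; ${attrs}`);
    }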
Track you in new and exciting ways.
We've seen browsers behave differently in subtle ways. To mitigate impact, we've served different content to them so those users still get a working experience while we figure out and fix the underlying issue.
This had nothing to do with feature detection, so the usual suggestions simply won’t work.
Detecting the user's OS to provide the right binary download link?
Google Earth still asks me to choose "64 bit .deb (For Debian/Ubuntu)" or "64 bit .rpm (For Fedora/openSUSE)", but I guess user agent sniffing wouldn't help that.
One notable thing about the Google search homepage is how fast it is. It's gobsmackingly quick. It responds to the initial key press in under 200ms with 10 suggestions that include text and pictures, and keeps doing it. My round trip time to www.google.com is around 20ms, so at least 100ms of that time is swallowed by the internet.
You don't get that sort of speed without heavy optimisation. In fact I'd be amazed if www.google.com isn't the most heavily optimised page on the internet. So it's not at all surprising to me that they optimise based on what browser is asking.
Wouldn't a 20 ms RTT mean 20 ms is swallowed by the Internet? As long as your 10 suggestions fit within the TCP receive window, you already have an active connection from the initial page load, so it's 1 RTT to get the data (you could put the images into data URIs so everything could be gathered in 1 fetch, right?). I'd expect the UI update to set a list of 10 items should also take way less time than a single frame. The 200 ms search seems impressive without me knowing much about what it needs to do, but the UI portion seems trivial?
yesn't
basically, you can send slightly less data by only sending the relevant layout/JS code etc. with the first request, instead of sending some variation which the browser then selects/branches on
most likely a terrible optimization for most sites, though maybe not Google search, given how often it's loaded
but then Google is (as far as I remember) also the company that has tried to push for the removal of UA strings (by freezing all UA strings to the same string, or at least a small set of strings)... so quite ironic
Google provides a more basic webpage for older browsers.
And they often do that wrongly. Try typing "weather" in Firefox for Android vs Chrome for Android; a vastly inferior version is shown on Firefox. Changing the UA or using desktop mode makes the page work flawlessly.
Wow, that's very anticompetitive.
Introducing ‘about: blank’ by Google. A new way to experience the web.
Will be shuttered in less than 5 years.
such optimism!
No. But you might think you need it if you are not competent to do it right, with progressive enhancement based on available features/resources.
But then you will also be incompetent to do it right with UA sniffing, which is even harder and requires more maintenance to keep the list up to date.
That's the obtuse thought process that gets you the garbage Google just showed us.
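For contrast with UA sniffing (a generic sketch; the lazy-loading example is mine, not from the thread): feature detection keys off the capability itself, so there is no browser/version list to maintain.

    // Progressive enhancement without any UA list: use the native
    // capability when present, fall back to eager loading otherwise.
    var imgs = document.querySelectorAll("img[data-src]");
    var nativeLazy = "loading" in HTMLImageElement.prototype;
    for (var i = 0; i < imgs.length; i++) {
      if (nativeLazy) {
        imgs[i].setAttribute("loading", "lazy");
      }
      imgs[i].src = imgs[i].getAttribute("data-src");
    }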
This level of condescension isn’t helpful, especially as you’re wrong to make such an absolute assertion rather than describing the right or wrong reasons to use a tool.
UA sniffing should be a last resort but there have been times where browsers have claimed to support something but had hard to detect bugs and the cleanest way to handle it was to pretend the feature detection failed on those older browsers.
This is increasingly rare – the last time I had to do it was for Internet Explorer – but I would not take a bet that nobody will ever have a need for it again.
As an example, I recently had to deal with an exception to a TLS 1.2 requirement because while almost all of our traffic is from modern user agents, there was a group of blind users with braille displays using Android KitKat which defaults to only supporting a maximum of TLS 1.1 and “go buy an expensive new device” isn’t an appropriate response even though you might be entirely comfortable saying that random internet crawlers getting an error is okay.
You proved my (overly condescending) point.
You either support older TLS or you don't. Do not try to fine-tune it to one known audience: you just locked everyone who is not in the sample you whitelisted(!) out of your service. But you don't know about them, so no harm? If you think TLS 1.1 is OK for one use case, just accept that you support it.
Same for everything else. Every time you use UA sniffing, you are doing it wrong, no matter how smart you think you are. You are not. It's just a momentary feel-good, because you are oblivious to everything else. Just like you felt good when you blocked TLS 1.1 before knowing of those blind users. Now you feel good allowing those blind users, while you still don't know about another group that you might learn of tomorrow, or never, because you are blocking them :)
That not proving anything other than that you missed the point that the real world doesn’t always have simple solutions. Sometimes you have to do things which are a bit messy as part of a phased transition and it’s important to know what tools you have available to do so.
How did you find out about the blind users? Did your excellent testing work find it? No. You screwed up and someone complained.
Now extrapolate your arrogant incompetence to your entire user base. How many people do you think you are screwing over who didn't bother to complain and instead just moved away?
I didn't expect to laugh but the screenshot (i.e., pure white rectangle) has tickled me.
Google used to disable Sports scores when opened from Firefox. I had to add a separate extension to circumvent that.
Just an evil company really.
Rather embarrassing that they didn't notice this in testing, but it only affects the homepage and not searches from the address bar
They probably did.
However wanted to screw Firefox users the same way as the YouTube slowdown.
I get why people want to assume bad faith but it's pointless, it's hard to intentionally sabotage Firefox when nobody tests shit in Firefox at Google. There's (or at least was) still an internal group of Firefox users a few years ago, but it's small, and internal tools may have poor or no support for Firefox at any given time. When I worked there I made some attempts to fix things (e.g. once Google Cloud was accidentally broken on Firefox during a time when many people were out...) but the bigger problem is that most people don't use Firefox because it's inconvenient or they don't care to and a lot of the tooling only works on Chrome/only works well on Chrome.
There's a silver lining though: it really doesn't matter if it's due to negligence or malicious sabotage, and if they (where "they" is hypothetical leadership within Google that is trying to eliminate Firefox) thought that pleading negligence instead of sabotage was going to be a good defense, we'll see how that holds up to future scrutiny and regulation. I mean hey, Microsoft tried to pull a lot of shit too, and they were probably more competent at it if the Epic Games lawsuit is any indicator of Google's competence right now.
To me, the fact they don't run automated checks on Firefox is in itself malicious and anti-competitive.
Yes, deciding not to test in Firefox is a decision to break Firefox. Sites with a tiny fraction of Google's resources manage to test Firefox. There's no way they just forgot.
Seems unlikely they'd want to sabotage Firefox.
Google is Firefox' primary source of income, and Firefox is strategically crucial token competition that stands between Google and an antitrust lawsuit of the sort Microsoft faced in the early 2000s (where Microsoft were basically a technicality away from being broken up).
It does seem unlikely, but playing with the example UA in curl (as noted in https://news.ycombinator.com/item?id=38925015) seems to show it's the combination of "Android" and "Firefox" that causes this behaviour. I wonder if any other mobile browsers are broken?
A far more likely explanation is simply that two features or workarounds are enabled, maybe "Android" triggers a prompt to install Chrome or something and "Firefox" triggers some workaround code. However these interact poorly and cause a crash.
If they wanted to sabotage Firefox I doubt they would choose to send a blank page. They would probably send infinite captchas or just disable most features (which they already do).
They want two things: 1) Firefox needs to exists as a hypothetical option 2) People shouldn't really use Firefox. Intermittent oopsies breaking key services are doing wonders towards goal #2
Former Mozilla exec: Google has sabotaged Firefox for years (2019) (zdnet.com)
304 points by hashhar 49 days ago | 122 comments
Google is currently facing an antitrust suit where their payment to Firefox is a crucial part of the evidence against them. And it's over their search engine, which is far more important to them than Chrome.
They pay Firefox as they're buying a valuable product. Which is why they also pay Safari.
Are you certain that search is more important than chrome?
Sure search was their first product, but they have long since pivoted to being an ad company. And while yes, they can show ads along side search, the real cash cow is all the juicy data they can exclusively hoover up through Chrome to better target ads all across the web not just search.
Nope, search ads are still where they make at least 2/3rds of their money. The rest of their ad inventory has substitutes, search really doesn't (because they basically invented the category (grumble grumble something Overture)).
Chrome is okish for collecting data. But they have Android which probably provides 100x the data.
Only on poor people! (I mostly jest, but having talked to a bunch of former Google ad people, it was a real concern for them that most of their high revenue users were moving to iPhone).
And indeed, we are talking about Firefox on android here.
Some middle manager out there was making this decision. Not strategy heads
Strategy heads come up with the strategy - friction. Enough friction against Firefox will let the browser still exist (essential for the claim that there is competition) but without actually being able to compete.
Middle management finds a tactic that implements the friction strategy - small "random" breaks and persistent performance issues.
An engineer would find the precise measure to implement - break the UA string sniffing targeting a specific browser.
> Enough friction against Firefox will let the browser still exist (essential for the claim that there is competition) but without actually being able to compete.
Interesting theory, but that's exactly what M$ got dinged for in the antitrust suit re: Netscape.
MS waged a lot more open war against Netscape, and documented their desire to "extinguish", "smother", and "cut off Netscape's air supply".
Google is doing more "eternal irritation by 1000 papercuts". Never one big blow, never aiming to kill Firefox, they're financing it after all. Firefox needs to exist but never be attractive to users, especially on the phone where tracking is next level.
I think plausible deniability does a lot of the heavy lifting here. But statistically speaking Firefox being the one browser hit constantly by a stream of random Google issues can't be random.
The legal and regulatory landscape changed a lot since then too, with Big Tech slowly but constantly lobbying and pushing a lot more than just these tactics into normalcy. A lot of what's normal today was outrageous in the '90s.
Google doesn't care, they knowingly implemented the mother of all illegal antipattern cookie banners and only changed it after getting fined over $100M.
If the risk is a lawsuit after years of anticompetitive behaviour - if they're unlucky - that's absolutely worth it for Google. Chrome is now THE web browser platform with the only two remaining notable exceptions being competitors in name only, and no single lawsuit can just reverse that.
Sometimes! And sometimes a middle manager just does what he thinks clears the ticket quickly enough and then someone yells at him.
Assuming competence and intention is foolish.
> Middle management finds a tactic that implements the friction strategy - small "random" breaks and persistent performance issues.
> An engineer would find the precise measure to implement - break the UA string sniffing targeting a specific browser.
You got the strategy right but the implementation is laughable, sorry :-)))
The implementation is: "We have a budget of N story points this sprint to resolve bugs, let's prioritize them. Let's prioritize by impacted audience size."
The audience size will make 99% of Firefox specific bugs be deprioritized out of the current sprint. And the next one. And the one after that.
And unless a senior engineer stands up to update the prioritization criteria, plausible deniability forever.
The aim was to lay out at which level each stage happens. It's not middle management taking a random decision. The fish rots from the head but the tail certainly doesn't smell rosy either. Dieselgate was also just a random deprioritized code bug until it wasn't.
Repeated incompetence from one level is actual maliciousness from the level above, all the way to the top.
It's never been clear to me that Google is actively trying to take-out Firefox. Firefox's user base is pretty small - so if they succeed in reducing it they'd gain very little but lose some shielding from antitrust accusations.
I suspect that these problems are more to do with neglecting testing, and just not caring very much about non-Chromium browsers
(Disclosure: I'm a long-time Firefox user)
The thing about "high quality malice" is it's indistinguishable from an error.
A bug that breaks others' tools but not yours may not have been introduced intentionally, but it is left unfixed intentionally, so it looks like a benign error.
At the end of the day, you use the chance you get, and it does real damage while you sip your coffee...
It’s pretty clear to others. Chrome is supposed to be The Web in the same way that Google is search.
Antitrust is just money for Google to pay. Firefox is a cancer to Google.
I do not know if that is true. Cancer tends to grow... ;)
well played, well played :)
Similar to the google image search on Firefox mobile, which has been broken for how many years now?
Well image search is broken in general.
Reverse search is now completely useless for example.
The title is bad; this only affects Firefox mobile, which AFAIK has a really low number of users (BTW, I am one of them).
Not saying that they shouldn't have detected this (i.e.: where is the automated testing for those things?), but I don't think this is Google screwing up Firefox on purpose.
And if Google really wanted to screw up Firefox, they would probably do a better job than User-Agent sniffing. Firefox already has an internal list where it applies fixes for websites (e.g. setting a custom User-Agent), and the bug report actually comes from it. You can see it by going to the `about:compat` page in Firefox.
There's a much easier explanation.
They didn't test on it at all. It's a browser with sub-5% market share.
There are folks in Google who personally test on Firefox and believe in it ideologically as an important piece of the web ecosystem, but that's not company policy, and detecting Firefox bugs does not gate feature release for most projects.
That's just sabotage with extra plausible deniability.
Is it "sabotage" when you don't intentionally devote resources to making some random third party's browser work? What obligation do they have?
Mozilla has cachet with nerds like us because of its history. Not because it's particularly better than alternatives or because it has enough of a user base to throw its weight around.
In contrast, they do test on Safari. They'd lose double-digit percentages of users if they didn't.
The ecosystem, unfortunately, generally has room for three top browsers. I'm not even sure Firefox is number four these days.
Except that firefox is particularly better than alternatives. It isn't sending my browsing history to Google, and actually lets me block ads. Firefox provides a level of customization and options to protect your privacy that chrome will never match.
> It isn't sending my browsing history to Google
For obvious reasons, I don't think Google is devoting any resources to optimizing the experience of navigating to google.com for people who don't want any history generated on their use of google.com.
I agree with you that the other features listed are positives, but they don't appear to be positive enough to push use of Firefox past Safari, Chrome, or Edge numbers.
Google isn’t sitting around randomly breaking things for Firefox users in this obvious way.
I really feel like HNers often forget how minuscule the scale of their usage patterns are.
Google doesn't need to intentionally break things, they just have to not test on anything but Chrome and say "oops" once a critical mass of complaints come in. Now you could say that they don't have to support the competitor, which is fair, but the web is based on standards, and in this case the site breaks based on the user agent and not the lack of some technical capability.
How convenient!
Given Google's size and market share, they should be fined whenever they break access to a competitor's website (e.g. Chrome update breaks access to DDG) or if their websites break in competitor's browser (e.g Google breaks on Firefox). They have more than enough resources to run automated checks on this, so it's hard to not see malice instead of incompetence in this case.
What if it is done during testing as a beta release? If that's acceptable, Googs will just turn into one of those companies that never releases full versions and is always just in some form of beta release
One of those companies? Google pioneered this.
Fined by whom? On what authority?
It's fascinating how people clamor for more government regulation and control at the first sign of trouble, without reasoning about the implications of those things, despite a history replete with examples of power abuse from the same.
Back when competition law used to be enforced, if you used your ~90% market share in one area (such as PC operating systems) to give yourself an unfair advantage in another area (such as web browsers) the government forced you to stop doing that. https://en.wikipedia.org/wiki/United_States_v._Microsoft_Cor....
If the laws were still being enforced, a company that has a very large market share in web browsers, and a very large market share in search, would have to be extremely careful not to "accidentally" break their competitors' products.
Do you put "accidentally" in quotes to support your baseless accusation? Because you have provided no evidence of any wrongdoing, only biased speculation.
If it walks like a duck, talks like a duck and looks like a duck......
Thanks for confirming your accusations are baseless and not rooted in any factual evidence.
By the government of any country in which Google has a presence? Are you really fascinated by people desiring regulation? Are you confident that history has more examples of the harm of regulation than the harm of its absence?
Government authority is not absolute: it must derive power from some law or other legal basis, which does not presently exist for "websites [which] break in competitor's browser". And if you are suggesting creating precedent for it, I ask you to carefully consider that responsibility in the hands of government bureaucrats, who are generally of low skill and intelligence and are prone to abuse it.
>I ask you to carefully consider that responsibility in the hands of government bureaucrats, which are generally of low skill and intelligence,
Bro, you have posted this on an article where GOOGLE fucked up PARSING A USER AGENT STRING to the point they deliver an empty HTML document. What feature on the Google search page even NEEDS such feature switching based on user agent? Google search worked pretty well on Internet Explorer 4, for heaven's sake.
But please, keep telling me how private sector is so much less incompetent than the public sector
Virtually yelling and calling me "bro" doesn't form an argument for government intervention, but it seems to help you vent, so have at it.
Somewhat related fun fact: if you send the Google search home page an unknown UA string (e.g. curl/8.5.0) from a German-speaking country, Google will serve a latin-1 encoded page declared as UTF-8 in a meta tag. (I spent quite a while trying to fix this on my end before realizing what was going on.)
The fact that I didn't notice because I'm using Kagi anyway is probably one of the first hints at Google's downfall w.r.t. search dominance.
A bit hyperbolic, aren't we?
Count me in for the destruction of the do-be-evil giant, but we're far from it not being absolutely dominant in the space.
I didn't even know what Kagi was.
No, I actually use Kagi both on my Laptop and on my Firefox on Android.
It has one killer feature: you can block Pinterest from spamming your results.
Blocking domains is a big part of why I use Kagi. I wish they would share their top blocked domains, or make it so people can share their blocked domains to help others. They have this https://kagi.com/stats, but I'd like something I can import into my own settings to use.
You mean this page? https://kagi.com/stats?stat=leaderboard
The top 7 blocked domains are just Pinterest.
Yes, like that but I'd like to be able for people to share here is my list of blocks, here is my list of lowers, etc.
On the Domain insights tab of that page, the right column is "Your status". You can click a status there to instantly apply it to the domain.
And w3schools!
I think bingchat is more likely to be what eventually supplants google. They funded the ai that created the garbage autogen pages, now they can use more ai to solve it for you in exchange for the low low price of your attention. Tesla will likely supplant google maps any day now when they release an offering. People pay tesla to drive cameras with computer vision around. Google has plenty of altitude and time to correct itself, but probably not as much as it thinks it does. It won't die but it might go dormant for a decade or more like MS.
Edit: I'm not saying tesla is building a street view clone. I can't imagine they have the bandwidth for that. But the cars recognize construction, speed cameras, police cars, red lights, red light cameras, stop signs and read speed limit signs, which all gets sent back up in real time. They might have to pay to augment their traffic data until enough people without teslas start using the app.
> Tesla will likely supplant google maps any day now when they release an offering.
An offering of what?
Their own maps apparently?
Yeah, but based on what info?
Some fanboy probably heard Elon misspeak 3 years ago and the rumor mill has been going. Who the hell knows, nothing from what I can tell.
I'm a Kagi user myself but I'm not going to extrapolate that to thinking that because I use something that it points to the downfall of another product's dominance.
The few techy or aware people using Kagi (or another alternative) are still a drop in the bucket of overall search engine choice.
Google search is still widely dominant, as much as we might not want it to be.
I never noticed because I never visit google.com, even when performing a google search. I expect that's common.
Keep in mind that Kagi uses Google Search API. The downfall of Google Search could hurt Kagi.
Google deserves for Mozilla to send them a UA identical to Chrome's.
Firefox should just change its UA to not-chromium. We will all know what it means.
you joke, but Microsoft putting "Mozilla" in the IE user agent is why, to this day, every single browser has "Mozilla/5.0" there.
At the time, Netscape and Microsoft were giving out free browsers while fighting for control of the profitable httpd server market, and blocking competitor browsers "for security" or something else was the playbook of the day.
The latest comment on the issue states that Mozilla has a patch that can be emergency-deployed as a patch release. The proposed patch literally overrides the UA string for Google.
Is it just me, or is this absolutely insane? When Google ships a bug, it's suddenly the responsibility of browser vendors to "fix" it at the browser level?
It's not the responsibility of the browser vendors. But they have an interest in un-breaking the experience for their users. In a sense, Google is too big to fail, so users want it to work any way possible.
It happens all the time, you probably just don't realize it. There is special code in Windows for supporting/un-breaking popular applications, same with Android and iOS.
Yup, it is unfortunately very common. Open `about:compat` in Firefox to see all of the cases where they work around site issues. Right now I count 32 user-agent overrides, 38 "Interventions" and 49 "SmartBlock Fixes".
IIUC "Interventions" are typically injected scripts or styles and "SmartBlock Fixes" are exceptions to tracker blocking.
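To give a flavor of what a per-site UA override amounts to, here's a conceptual WebExtension sketch (Mozilla's actual fix lives in the browser's internal interventions list, not an extension, and the spoofed version below is only suggested by the thread's ">= 65 breaks, <= 64 works" observation):

    // Rewrite the User-Agent header for google.com requests only,
    // via Firefox's WebExtension webRequest API (Manifest V2).
    browser.webRequest.onBeforeSendHeaders.addListener(
      function (details) {
        for (const header of details.requestHeaders) {
          if (header.name.toLowerCase() === "user-agent") {
            // Hypothetical override: claim a version Google accepts.
            header.value = "Mozilla/5.0 (Android 10; Mobile; rv:64.0) Gecko/64.0 Firefox/64.0";
          }
        }
        return { requestHeaders: details.requestHeaders };
      },
      { urls: ["*://www.google.com/*"] },
      ["blocking", "requestHeaders"]
    );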
It's literally older than Google
Here's Raymond Chen sharing some of the worst examples from the 90s
https://ptgmedia.pearsoncmg.com/images/9780321440303/samplec...
That’s quite the read! I didn’t appreciate how much effort went into making the Win95 compatibility layer.
We're back to the internet explorer days!
I wonder how this made it past testing?
I have a few guesses:
Google has a marginal incentive to not kill Firefox (antitrust), but no incentive to make sure they provide a good experience on Firefox, let alone test with it.
The issue seems to be isolated to the search home page, and Google Search hasn't exactly been associated with "quality" in several years. Internal rot and disinterest gradually chip away at QA.
They don't test on Firefox mobile.
Not explicitly anyway. Google's testing strategy is as it has always been: do some in-house burn-in testing, then launch to a small number of users and check for unexpected failures, then widen the launch window, then launch to 100%.
In this case, the user base in question is so minuscule that no bugs probably showed up at the 1%, 10%, and 50% launch gates.
Or fewer than 20,000 bugs.
Testing can never prove something works, only that something doesn't.
They test on version 64 ;)
simply: they don't test
at least not for firefox mobile
they probably test for chrome (desktop+mobile), safari (desktop+mobile), edge (desktop) and maybe Firefox (desktop), but probably no other browser
Test written according to behaviour?
Funny that it works when switching to desktop mode.
I never actually looked at what this does.
I think it changes your user agent string to something that doesn't say "mobile", and the device "inner resolution" or something like that to make it zoom out.
https://www.google.com/search?q=what+is+my+user+agent+string
Here is a nefarious interpretation of events (I assume we all remember how Alphabet, in order to drive Chrome adoption, sabotaged YouTube when a Firefox browser was detected?):
Only breaking for new FF versions will give the untechnical user the impression that FF broke through an upgrade.
So I call this another not-so-feeble, but very evil (yes, I think we expect nothing less of them right now), attempt by Google to tighten its grip on the browser ecosystem.
Not least because it only impacts FF on Android.
On Android, the means to inspect are limited without a computer (so they might hope for users to switch because they need access, and then forget to switch back). That is also, to me, a sign that it was intentional, just because of how specific and browser-centric this is.
here is a fun anecdote: Google wouldn't load for me on FF on Windows. I tried a bunch of stuff over a few days, then in a lapse of sanity or a stroke of genius I decided to switch the default search engine [back] from Bing to Google. I pressed reload and the HTML page with the single form field worked again! It continued to work after switching to Bing again.
I feel I should note I have this in a user script:
document.getElementsByClassName('b_logoArea')[0].href = "https://google.com/search?" + document.URL.match(/q=[^&]*/)[0];
It makes it so that clicking the Bing logo sends the query to Google. Perhaps it doesn't like getting queried directly without being the default search engine?
Why would Google return an empty page based on any UA string? This is such a bizarre bug.
According to a comment on the issue, the page output is "<!DOCTYPE html>" (the trailing "%" in the curl transcript is just zsh marking the missing final newline), and that suspiciously looks like some template / placeholder evaluation not doing its job properly.
I would bet that the backend crashed.
Probably the backend sent the initial headers and the start of the response to the load balancer, then started rendering the page and crashed. Presumably `<!doctype html>` is hardcoded, since it is always sent for HTML pages, and maybe fills up one of the initial packets or something. The rest of the page may be rendered and returned as separate chunks (or just one large content chunk).
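A minimal Node sketch of that theory (assumed names throughout; it only illustrates how a streamed response that crashes mid-render leaves the client with just the flushed prefix):
const http = require('http');
// Stand-in for the template engine that produces the rest of the page.
function renderRest() {
  throw new Error('renderer crashed mid-page');
}
http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.write('<!DOCTYPE html>');  // hardcoded prefix, flushed to the client immediately
  try {
    res.write(renderRest());     // never succeeds in this sketch
  } catch (e) {
    res.end();                   // close out; the body is only the doctype
  }
}).listen(8080);
curl against that server returns exactly the "<!DOCTYPE html>" from the bug report.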
I genuinely want to see a root cause blog post for this.
Last month, Google Maps was completely broken (instant total tab lockup) for all users of a GCC-built Firefox. In that case it was Mozilla's fault; Google didn't change anything to make it happen. But it tells me that Mozilla's QA is suffering and probably doesn't catch things that are Google's fault either.
I'm still using Firefox because I'm ideologically motivated to, but it's no wonder that so many users are dropping the fox for chromium browsers.
Google search doesn't even work on mobile anymore with javascript disabled.
Just another "oops" from Google against Firefox.
"Over and over. Oops. Another accident. We'll fix it soon. We want the same things. We're on the same team. There were dozens of oopses. Hundreds maybe?"
https://www.zdnet.com/article/former-mozilla-exec-google-has...
Oh, that's why the search wasn't working earlier - I blamed my phone/browser.
I have noticed that with Firefox, Google will sometimes take forever to load. I hit refresh, the URL actually changes somewhat, and then the page loads instantly. It seems I sometimes get served this dead-end URL.
My favorite bug description.
"The page is blank. The page is blank the page is blank."
This is a good reminder that readers of HN should take to heart: just because a popular GitHub issue gets posted to HN doesn't mean that you should head over and do your posting there. Keep it here; usually the associated experts are already on their way or busy working on the problem, and you are not helping.
This is going to happen more and more as firefox's market share decreases. And then we'll be left with chromium-based browsers...
The user agent header has always been a mistake. I'm not sure what companies like Google are thinking when they block features from working based on the user agent. In Firefox, I use an add-on that randomizes the user agent string, and I've configured it to use a handful of the most common user agent strings. Occasionally, parts of YouTube will break because of whichever user agent is configured, like chats or chat replay, showing nothing but a broken frame with no meaningful information to a non-developer. I guess one could say that it encourages people to update their browsers, or use Chrome, which auto-updates, but come on. If a feature legitimately doesn't work, then the user still has to update to a modern browser anyway.
I don't think this is true. Having the user agent is incredibly useful when debugging an issue. It can even be useful for deploying workarounds.
For example, if you see errors being thrown on your site and nearly all of them come from the new release of Chrome, then you can quickly narrow it down to a compatibility issue with the new version. Similarly, if someone is making broken requests to your site at a high rate, it may help you figure out who it is and contact them.
It can even be useful in automated use cases. If Firefox 27 has a bug, it is reasonable to add very specific checks to work around it gracefully (sketch at the end of this comment). When possible, feature detection is better, but some bugs can be very hard to test for.
The problem comes from:
1. People whitelisting user agents.
2. Adding wide sweeping checks, especially if you don't constantly re-evaluate them (For example Brave versions later than 32)
In this case it seems hard to actually conclude that a user-agent check was bad here. Google could have been trying to serve something that would work better on Firefox (maybe it worked around a bug). Of course that code crashed and burned, but that doesn't mean that the check was a bad idea.
I agree that user-agents are often used for evil, but they can also be used for good. It is hard to justify completely removing them. With great power comes great responsibility. Unfortunately lots of people aren't responsible.
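To make the "very specific checks" point concrete, a hedged sketch (the Firefox 27 bug is hypothetical):
// Prefer feature detection; when impossible, pin the workaround to the exact
// broken version rather than an open-ended range like ">= 27".
const m = navigator.userAgent.match(/Firefox\/(\d+)/);
const needsWorkaround = m !== null && Number(m[1]) === 27;
if (needsWorkaround) {
  // narrowly scoped workaround for the one known bug
}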
Hilarious
kagi.com
I know that jumping in to mention Kagi has become a meme on HN, but I do think it's important to keep encouraging people to move away from Google. The specific search engine barely even matters, as much as I like Kagi. The only way that any of these search engines are going to improve is if more people leave Google in the dust.
If people don't want to pay for Kagi, then use Brave Search. DuckDuckGo has really gone downhill, so it's hard to recommend it.
>DuckDuckGo has really gone downhill, so it's hard to recommend it.
I've seen this a lot lately, but no one has said why.
I solely use DDG and have done so for a long time. I have not noticed any specific changes, nor any degradation in my search results. I don't pay much attention to announcements or anything, so maybe I missed something?
Can you or someone please tell me how/why DDG is suddenly not recommended and has "gone downhill"?
In one of the comments I made elsewhere in this thread, I mentioned that my experience with DDG is that it's become extremely "PG-rated" even if you turn off safe search. It's a bit of an exaggeration, but it's pretty clear to me that DDG is way more normie-safe than when I began using it several years ago. DDG shows me more of what I consider detritus than Kagi does. It's extremely bad at finding any results by exact text, but to be fair, every search engine is bad at this now. And finally, DDG hasn't had what I would consider worthwhile feature improvements in a very long time. The doodads that sometimes show up when you use a particular term like "qr code hello world" are neat, but ultimately not that beneficial compared to being given more control over the results themselves. Their other features are mostly things that have been solved many times over.
Another reason to stop using Google. Don't waste the opportunity!
Switched to DDG ages ago, and even though it's not amazing, it's already better than Google.
Although, if we're honest, searching these days is absolutely atrocious everywhere. Any keyword search will only net you ads or "top X" articles full of SEO garbage most likely written by AI at this point...
Not necessarily everywhere, or at least not quite as bad as you're suggesting, in my experience.
I used to advocate for DDG, but they've gone so far downhill that I rarely waste my time with it. They've decided to go PG-rated everywhere even if you have safe search turned off. This is a problem because, even if they just want to block porn or illegal things, this can have an impact on legitimate academic research. And yeah, SEO trash floats to the surface, though they aren't entirely in control of this.
Kagi (which relies on Brave Search instead of Bing) has less flotsam to start with, provides lots of great tools to filter out the garbage, and is subscriber-driven instead of advertiser or investor driven.
DDG had potential, but they decided to give into moral panic and coast on their modest success instead of making substantial improvements. They also decided to pull a Mozilla and spend unnecessary effort working on a browser when they should be focusing on their core product. It'd be one thing to make a browser if their core product was actually good, but the best they are offering is a pinky-swear that your searches are private.
> Kagi (which relies on Brave Search instead of Bing)
This is misleading. Kagi uses multiple external sources including Google and Brave as well as their own internal indexes
https://help.kagi.com/kagi/search-details/search-sources.htm...
Ah, I was under the impression it was almost entirely the same as Brave. Thanks!
>Bing
DDG works well enough for me to be my daily driver, but it's absolute garbage for any kind of news search. Bing prioritizes MSN repost spam from dubious sources, so I'm better off just going directly to news sites I trust.
I must be weird, because I very rarely search for news. I just go to the sites I trust.
I think the only time I search for news (usually on Google) is to find a reliable article about a celebrity death.
You can actually find this rather rapidly through Wikipedia.
In fact, you can identify the exact moment when the news about the death leaked, when the first editor adds it to the article. Generally, experienced editors will keep it out until a reliable source is provided, which typically happens quite quickly.
In fact I've found out about a number of celebrity deaths, just by having them on my watchlist.
For people who don't know them yet: use DDG bangs. For example "!m restaurant [cityname]" will immediately bring you to Google Maps, "!w chemistry" will open Wikipedia, etc. Super easy and powerful :)
Setting stuff up for that in any browser worth its salt isn't too difficult, skipping the entire middleman of any search engine.
But it's already set up for you in DDG, so why not use it?
Aside from the fact it's another HTTP request; but these days, on the majority of computers and connections, that's a trivial thing.
I'll add that !aw to search the Arch Wiki and !aur to search the AUR are my two most favorite commands.
> But it's already set up for you in DDG, so why not use it?
Because if I wanna search Reddit, I'd prefer to have '!r' search directly within Reddit and not litter my browser history with tons of duckduckgo entries like this:
https://duckduckgo.com/l/?uddg=https%3A%2F%2Fwww.reddit.com%...
weird example considering just how bad Reddit's internal search is and always has been -- that's a site I've always preferred to search with an external search engine, be it Google, DDG, or Kagi lol
doesn't hurt to have options! to each his own
Best example I could come up with on-the-spot, because I already swapped out Google, Amazon, and Wikipedia on my Firefox install for native search engines, and needed to quickly generate search results from a DDG bang so I could show the accompanying history page. :P
> Aside from the fact it's another HTTP request
You’d think DDG could avoid this via JavaScript if they wanted to. Might be better for privacy, if not for their usage stats.
Years and years ago I'd configure this stuff in FF, but using DDG effectively auto-configures it for any browser, exactly the same, everywhere, all I have to do is set the default search engine. And it includes some that I do use but probably wouldn't have bothered to configure on my own.
Firefox Sync is end-to-end encrypted and will sync your keyword bookmarks across all of your devices. Sure, you have to set them up but it seems worth it to get the ones that work for me, not whatever duck duck go thinks is popular.
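For anyone who hasn't used them: a keyword bookmark is just a bookmark whose URL contains %s, which Firefox substitutes with whatever you type after the keyword (Wikipedia here is only an example target):
Bookmark URL: https://en.wikipedia.org/wiki/Special:Search?search=%s
Keyword:      w
Typing "w chemistry" in the address bar then searches Wikipedia directly.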
I've never wanted one and not found it in DDG, or found it named differently from what I'd have named it. Every now and then I try to guess one blind, and I don't think I've ever had it surprise me. And it works in any browser.
I guess you are the perfectly average human then. I am not so lucky.
I have `s` to search sourcegraph.com, DDG has !sg; `t` to use Google Translate, DDG has !gt; `i` to search IMDB, DDG has !imd or !imdb; `v` to search animated images, DDG has !gif; `ni` to search nixpkgs issues, which DDG doesn't have (although they do have one for the nix repo at !nr, which is pretty impressive; I use `nc` for this). Not to mention personalized shortcuts like searching my company's GitHub organization or JIRA tickets, which would never be a public bang, much less have a 2-character shortcut.
I stopped there but it is clear to me that I benefit from making my own short aliases for the searches that I use most. Plus it is nice to not send my logs to a third party and get improved performance.
It's better to lose the inconvenient ! and use a browser that supports prefixes in the search bar
Then you can "w chemistry" to open Wikipedia with a list of suggestions from the same Wikipedia ("Chemistry (Girls Aloud album)") in the same place
I think everyone in the thread knows about "search engines" in Chrome and bookmark keywords in Firefox. The crux of the issue is that there are more than 10,000 bang commands in DDG. Setting up even a popular subset in any given browser is a significant investment. It's fine if it's the browser you use 99% of the time, but for those spanning multiple computers, phones, and other devices, simply using bang commands is a strict win.
99% of the time you use <0.1% of the bangs, so setting those up isn't a significant investment. Then there are also the downsides mentioned above (less convenient, no preview, not portable across search engines), so no, it's not a strict win, but an inferior alternative which is superior only in those cases where you can't setup something better
If I went that route, I’d have to set them up on my phone, tablet, laptop, desktop, etc, all of which have different operating systems/web browsers.
I guess I could implement them as a web server, and point all my devices at it.
Some browsers sync these settings, so that removes a lot of the complexity. But in general, you're repeating the same mistake. No, you wouldn't have to do that, you'd just set it up on the devices you use the most to get the most convenience in the most common cases, and then use the less convenient option on others.
There is no point in making 100% of your experience worse just because you can't make 100% of your experience better
None of the browsers I use across those devices sync settings well. Even if it was 100% firefox, I don't want to trust the cloud, and can't be bothered to set up a mozilla sync server just for this (I don't use bookmarks, and there aren't any other settings I want to sync).
Also, as it is, I make 100% of my experience better by switching all the browsers to a search engine that provides better results than Google and has !bang support.
There are at least two such search engines: DuckDuckGo and Kagi.
To make an actual "better" argument you have to address the aforementioned limitations
> Then there are also the downsides mentioned above (less convenient, no preview, not portable across search engines), so no, it's not a strict win, but an inferior alternative which is superior only in those cases where you can't setup something better
- less convenient: Addressed by my comment.
- no preview: I do not want preview, so this is a bonus. It's also orthogonal, since if the search bar was configured to preview, it could also display bang results.
- not portable across search engines: It already works on all non-terrible search engines because they standardized the bangs.
> - less convenient: Addressed by my comment.
Not addressed: Shift+1+w is less convenient than w, try to address that with your comment!
> I do not want preview, so this is a bonus.
Not a bonus since you can achieve the same natively, so again not better
> It's also orthogonal, since if the search bar was configured to preview, it could also display bang results.
No it couldn't? What's your preview link to DDG to show the results of a Wikipedia page for a given search term in the browser?
> - not portable across search engines: It already works on all non-terrible search engines because they standardized the bangs.
Since these don't exist (DDG is very similar to Google in search quality), it doesn't work. But this is personal, so if your needs are narrow enough to be covered just by DDG+Kagi, then you obviously don't need portability.
Something this doesn't do that DDG bangs do is let me append the bang, or even stick it in the middle of a search. I can just type it wherever my cursor happens to be, other than in the middle of a term. This comes up when I search something in DDG, then change my mind and want to use a bang.
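For example (reusing the !m bang from upthread), "best pizza !m" behaves the same as "!m best pizza"; DDG recognizes the bang anywhere in the query.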
good point, and it's smart enough to ignore when quoted
Wish they also allowed changing the ! prefix
I agree, it is nice and privacy-respecting, but I take umbrage with the Google Maps one. I am not really using GMaps and I have everyone in my orbit exposed to alternatives. Just because... like Cato used to say... I believe that Google should be destroyed.
DDG was great anyway, but discovering their !bangs was a revelation. Super cool stuff!
They should add bang-buttons at the top of their result pages. Didn't find what you were looking for? Just hit one of the bang buttons.
Been using these !bangs for a while.. they also work with a trailing bang!
The only thing that I've found that works properly for my use case (regularly switching between three locales and languages) is Kaggle with the lenses. The only thing missing is disconnecting the locale from the language so I can have proper decimal characters but still search for US-specific things. Right now, I have to choose between having 10,000.10 or 10.000,10 and what language/region I'm searching, together, which is a bit annoying.
I think "Kaggle" is supposed to be Kagi but was typoed or autocorrected or something? (If not and you're talkign about Kaggle the ML/AI company, please disregard)
I also use Kagi lenses and it's been good, though for me the killer Kagi feature is being able to uprank/downrank/pin/block domains. Such an obvious and simple feature, such a powerful effect.
There is a psychological barrier to overcome in paying for searches, but once you get past that, Kagi makes a ton of sense. I suspect the reason Google never implemented personalized search results (like Kagi's where you can uprank/downrank domains) is because it's not about what you see, it's about what they show you. i.e. ads.
> the killer Kagi feature is being able to uprank/downrank/pin/block domains
Yes! This alone is worth the subscription to me.
And I despise and avoid software subscriptions, Kagi is my third.
You're right of course, supposed to be Kagi, I blame it on a lack of coffee.
Do you mean Kagi?
Unfortunately I'm locked into gmail, migrating away from it would require days of effort and I would never be sure I moved everything.
Also, YouTube is still the best video site despite going full mask-off on adblockers.
Fastmail -> settings -> migration -> import
Then set up a gmail forwarding rule.
The full documentation is here:
https://www.fastmail.help/hc/en-us/articles/360058752414-Mig...
I migrated about 5 different accounts off of Gmail to FastMail and it wasn't too bad, though it was at least a day of work across all of them.
I honestly spent more time trying to wrangle the data in all of the other numerous Google services those accounts had data with.
I am in a similar boat with an ages old Gmail account. My comment was originally intended to be only about Google search (I realize now I didn't specify that, lol), but leaving the entire ecosystem is definitely a lot harder...
Well, I was using Firefox and Google Maps.
Apple Maps is Yelp spam.
What should I be doing? I was doing research.
Why is this a reason to stop using Google?
How is DDG "better" than Google? In my experience it serves almost identical results. And it's now served from Microsoft servers, so the argument "it's more private" doesn't really hold water anymore.
They don't break their site for Firefox users for starters :)
I do say this somewhat as a joke, but for me that's a real reason. It's not the first time Google has intentionally screwed over Firefox users.
Even if we believe what they say every time it happens (it's "just a bug"), for me it clearly signals they simply don't care. And that's the best-case scenario. Worst case, they're abusing their market position to drive out competition.
Even if DDG serves requests from MS servers, that's not even close to opting into the Google surveillance machine.
Finally, if what you said holds true about serving basically the same content, then the hostility against Firefox and the fact that you won't be helping this monopoly should already make it "better".
9 minutes ago:
> Since this has now been posted to HN, I'll be locking this thread. This bugtracker is a work place, not a discussion forum.
We have a bad reputation.
Locking the issue after it has been posted to any large community is the right call.
The respectful users will look at the thread and not comment, so as not to derail the discussion. The fraction of a percent who may have something relevant to contribute probably already did, or will find another way to do so if it's important.
The remaining users will post low quality comments for a while then leave forever. Like a flash mob that invades your home, parties for a few hours, then leaves a mess for the regulars to clean up.
Can’t blame the owner for preemptively locking the door.
> This bugtracker is a work place, not a discussion forum.
that's actually an excellent way to frame what a bugtracker is actually for.
it is about workers getting shit done, and it's optimized for the people who work the issues.
it isn't optimized for the people posting comments on issues.
and the fact that a lot of users who have never had to deal with fixing software bugs think it looks like a forum website would explain the impedance mismatch you often see.
> it is about workers getting shit done, and it's optimized for the people who work the issues.
The above phrase also explains why Social Media is not a bug-tracker. ^_^
Sadly, it seems to be the only bug tracker or method of getting issues noticed by the tech firms with the issues.
Well someone did just write "'now you have two problems' b-bazinga. Software engineering is hard."
I don't blame them for the lock.
"Orange site bad" is a bit of a meme. I even heard it IRL at a hacker space in the last few weeks.
Our bad reputation is not entirely undeserved, some of us are not as well behaved as others.
It's one of those occasional intersections where we are reminded that our reputation within HN comments does not reflect the outside world.
> This bugtracker is a work place, not a discussion forum.
Well, tradeoffs and whatnot with hosting your bugtracker on the most open code platform on the Web.
“Conversation limited to contributors until popular site calms down”
> We have a bad reputation.
But not one undeserved.
N-Gate summaries were harsh, but fair. Basically spot on.
> ... We have a bad reputation.
HN is too orange.
I switched it to the same gray as the background. HN actually provides a pref for this in your user settings.
A perk for gaining a certain amount of karma.
Really? Huh. I've been here 13 years and I'm still learning new things.
at least it wasn't a testicle in an egg cup like that other site
Mozilla elitism.
Or experience with the internet?
Of course outrageous GitHub issues get low quality comments from HN.
The guy probably reacted like this because of the dummy comment he marked as off-topic, and probably made the connection.
> 'now you have two problems' b-bazinga. Software engineering is hard.
He might have said the same of Reddit, or any social network. It's pretty reasonable and I think he made the right call.
You are 100% correct in your assumptions. This isn't my first incident, and it won't be my last. As much as some folks here want to call me an asshole for that, we have a very good understanding of how this ends if we don't lock comments.
FWIW, good on you for managing the signal responsibly.
I have to smirk a bit every time I see the public do this to themselves. The public: "Why are companies never honest? Why is press-release-speak such a space-alien way of communicating?" Also the public: reacts like this when someone takes a simple, clear action and explains it directly.
(And yes, I know I'm guilty of broad-strokes reasoning lumping everyone in as "the public" and anthropomorphizing that entity. In this context, doesn't matter. Only takes a few bad actors to wreck the signal).