User agent overrides for top Japanese sites

bugzilla.mozilla.org

43 points by asyncwords 10 years ago · 41 comments

holygoat 10 years ago

Fennec dev here. Figured I'd address some of these comments.

UA overrides are not the only tool in our toolbox. They're deployed in concert with outreach from webcompat.

UA overrides are a way of breaking a cycle, not preserving it.

They make it feasible for Japanese users to use Firefox without manually messing around with UA overrides (e.g., using Phony). And that gives us leverage. And with leverage our excellent webcompat team can get sites to make changes.
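
For the curious, Gecko keys its site-specific overrides off prefs named for the domain. Something like the following illustrative line (not the actual contents of the patch in this bug; the domain and UA string are made up):

    // Illustrative only; the real overrides live in the patch linked above.
    // Gecko consults general.useragent.override.<domain> before sending the UA.
    pref("general.useragent.override.example.co.jp",
         "Mozilla/5.0 (Linux; Android 4.4; Nexus 5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.0.0 Mobile Safari/537.36");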

asyncwords (OP) 10 years ago

It's an unfortunate state of affairs when a modern browser has to spoof another browser just to get the right content. The user agent in Microsoft's new browser does this by default [1][2]. As a user, I'd love to see a day when browsers don't reveal their user agents and web developers rely on feature detection instead. I admit, though, that I haven't fully considered the collateral damage that might come with that.

[1]: http://stackoverflow.com/a/31279980

[2]: http://blogs.windows.com/msedgedev/2015/06/17/building-a-mor...

  • frou_dh 10 years ago

    Almost daily I have to spoof my User Agent to claim to be an iPad due to sites confidently proclaiming that Flash is """required""" to get at some audio/video content. What do you know, spoof User Agent, and their Flash """requirement""" instantly evaporates. Pet peeve.

  • codewithcheese 10 years ago

    Also, UA detection can happen before feature detection, which is very handy in some cases, such as redirecting based on device. Feature detection also implies that there is some sort of client that runs the response, which is not always the case.
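
    For instance, a server can route by device before any client code runs at all. A hypothetical Node sketch (not from the comment; the UA test is deliberately crude):

        // Hypothetical sketch: redirect mobile UAs to a separate host before
        // any client-side feature detection could possibly execute.
        var http = require('http');
        http.createServer(function (req, res) {
          var ua = req.headers['user-agent'] || '';
          if (/Mobi/i.test(ua)) {
            res.writeHead(302, { Location: 'https://m.example.com' + req.url });
            res.end();
            return;
          }
          res.end('desktop site');
        }).listen(8080);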

  • err4nt 10 years ago

    Sniffing the User-Agent has been poor practice since 2011, in favour of feature detection using a library like Modernizr.

    I used to advocate using something like Modernizr and then writing your code against the results, but now I think an even more straightforward approach is to do the feature detection you need directly in your code. There's no sense in loading a library and testing for things you don't need, only to have those tests totally decoupled from the parts of your codebase that depend on them.

    It makes more sense to do only the feature detection you need, right in your codebase, adjacent to the code which relies on the results.
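
    For instance (a hypothetical illustration, not the commenter's code), the test can sit right next to its only consumer:

        // Feature-detect right where the feature is used; no library needed.
        // showMap and showZipForm are hypothetical application functions.
        if ('geolocation' in navigator) {
          navigator.geolocation.getCurrentPosition(showMap);
        } else {
          showZipForm(); // fallback for browsers without geolocation
        }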

    • toyg 10 years ago

      > Sniffing User-Agent has been poor practice since 2011

      I'm old enough to remember it was much earlier than that. In the early '00s there were already calls to do feature detection; jQuery was released in 2006 and it basically did it for you. In 2015, there is no excuse to do UA sniffing -- if anything, because we now have 20 years of case-history showing people will trivially spoof it.

      • decode 10 years ago

        > In 2015, there is no excuse to do UA sniffing

        Unfortunately, there are cases where it is still necessary. For example, IE 10 reports that it supports the CSS pointer-events property, but it only works on SVG elements, not HTML elements.

        http://caniuse.com/#feat=pointer-events
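
        To see why feature detection is useless here, consider this sketch (illustrative, not decode's code):

            // IE 10 parses the property, so this check passes there...
            var seemsSupported = 'pointerEvents' in document.documentElement.style;
            // ...even though pointer-events only actually works on SVG elements
            // in IE 10, so only a UA check can tell HTML support apart:
            var worksOnHTML = seemsSupported && !/MSIE 10/.test(navigator.userAgent);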

      • err4nt 10 years ago

        When I went professional with my web development (2009), I can remember the in-the-know people actively advocating feature detection, but it was still common practice even in 2010 to add IE-specific support using conditional comments to load stylesheets with fixes for a known IE version.

        You probably still have nightmares of these: https://msdn.microsoft.com/en-us/library/ms537512(v=vs.85).a...

        On the way to 2012, once Microsoft said they never intended to let future versions of IE announce themselves as IE, it became clear that the game was finally over for UA sniffing. The funniest part is that I still remember the MS fanboy blog post about how wonderful it would be that future versions of IE would identify themselves as not-IE, and how this would make the world a better place. :P

    • jszymborski 10 years ago

      You don't have to include the whole library. Modernizr allows you to customize the package to only detect for features you need.

      While writing your own feature detection is probably still more lightweight (although not by much if you customize Modernizr correctly), you're not going to catch all the edge cases Modernizr does unless you spend A LOT of time on it.

      http://modernizr.com/download/
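
      (A custom build is consumed exactly like the full library; e.g. with the touch detect selected, using the Modernizr 2.x-era property name:)

          // Same API whether you ship the full library or a custom build.
          if (Modernizr.touch) {
            enableTouchUI(); // hypothetical application function
          }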

      • err4nt 10 years ago

        I get the tradeoff; the problem is when you have 90 projects that customize the library for 90 different needs, and then you want to update those libraries. At that point, you've basically made 90 Modernizr forks. Does it make more sense to maintain 90 forks of a library whose codebase you aren't developing and try to keep them current, or to bake the parts of Modernizr you need (edge cases included) right into your codebase for each project, and then only have to maintain your own codebase instead of your codebase plus a library fork?

        If you're using the majority of Modernizr's tests it makes sense to use the full library, but for the most part the other devs I've talked to and I usually drop the whole library in for ease of maintenance, yet mainly use it for one feature alone: touchscreen detection.

        Lately I've tried putting some of my HTML on a diet by replacing my need for Modernizr for touchscreen detection with this snippet:

            // True when the browser exposes touch events, or on IE with touch
            // hardware (msMaxTouchPoints is the Microsoft-prefixed property).
            if (('ontouchstart' in window) || (navigator.msMaxTouchPoints > 0)) {
              // code for touchscreens here…
            }
        
        This keeps the test and the result together in the code, and eliminates the need for a library, which can also be an additional single point of failure outside of your control, especially if hosted on a CDN.

        • robotfelix 10 years ago

          It's worth noting that each custom build includes a commented line with a URL to download the latest version with the same custom set of detects. A great timesaver!

        • chc 10 years ago

          If you're "baking in" parts of Modernizr into your codebase, it seems to me you're still maintaining 90 forks, but it's more work and the provenance of the forked code is less clear.

          • err4nt 10 years ago

            Suppose you're baking a cake, but it's a custom cake and you're not 100% sure you have the best recipe for frosting.

            Suppose Modernizr were a book, and each feature Modernizr tests for were a recipe. What I'm suggesting is this: instead of going to the library and checking out the entire recipe book just to refer to the icing recipe, copy out the list of ingredients and leave the book on the shelf. You're customizing the recipe anyway, so adding their list of ingredients to the steps you're writing for your own customized icing will be easier in the future than writing your recipe with a little note that says 'refer to ingredients on page 24 of the Modernizr Cookbook'. I also think buying a personal copy of the book for the same purpose is overkill; it makes more sense to list the specific ingredients you want to reference directly inside the new recipe you are writing and keep it all on the same page, and not have to worry about where that recipe book is every time you want to whip up a new batch of icing.

            • chc 10 years ago

              How is generating a customized Modernizr build with just the tests you need equivalent to checking out an entire recipe book for one recipe? Either way, you are just getting the tests you need.

    • endgame 10 years ago

      Testing for features rather than platforms has been known to be a good idea since the early days of autoconf (1991-1992, according to the history in its info file). Why did the web folk take so long to figure this out?

      • err4nt 10 years ago

        For the most part it didn't matter until recently.

        Netscape versus IE (late '90s) - features didn't matter; they just rendered HTML and CSS differently.

        Firefox versus IE (early '00s) - Firefox added a bunch of great CSS support and things like rounded corners and PNG transparency, so once we could use those, we could just supply a polyfill for the specific IE version that needed it. Opera could handle it already, and Netscape was Firefox rebranded. Only IE (which announced itself as IE) needed extra help.

        Mobile versus Desktop (late '00s) - iPhone! Android! Tablets! This is where things start to get a little crazy. IE will be IE, but a bigger concern is the separation between tiny little touchscreen devices browsing a website and a massive desktop computer.

        Mobile versus Mobile (early '10s) - IE never says it's IE; we have smart watches, phones, phablets, tablets, netbooks, notebooks, and still desktops. There are Firefox and Chrome, which run on Mac, Windows, Linux, iOS, and Android; there's Safari, which runs on OS X, Windows (an old version), and iOS; there's IE, of which versions 8, 9, 10, 11 and the new ones are in circulation; and a handful of other browsers, like the Android browser, that kind of gave up a long time ago but are still used.

        I'm sure backend software was rife with feature detection for the OSes it ran on (Red Hat versus CentOS, special support for IIS, etc.), but until things exploded after the iPhone was released in 2007, the web had very predictable deficiencies that could be addressed more directly than feature detection.

        I can remember, as a Linux user, there were plenty of websites that would only let 'approved' User-Agents in, because they would rather you NOT see their site than see it in a browser they didn't support. When using Linux I often didn't have access to IE or Netscape, so I would use Firefox or Konqueror to spoof a different User-Agent. Nintendo.com used to be like this, among others.

      • oldmanjay 10 years ago

        1) It didn't take "web folks" long. Almost immediately after the browser became available as an application platform, the advice took hold among those willing to heed it.

        2) The existence of autoconf et al. doesn't in any way prevent people from performing platform tests all over the place, so it's hardly a panacea.

        Sorry if you find this a bit abrasively worded, I aimed to avoid that, but you made me bristly with your implication that "web folk" are somehow lesser.

      • ucho 10 years ago

        Autotools don't only check for features; they also check for bugs. For example, I know that Firefox's SVG support is buggy and I need to use the Canvas version of OpenLayers for that browser - how am I supposed to do that without UA sniffing?
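
        (One hedged sketch of that, assuming OpenLayers 2's renderers option on vector layers; the layer name is illustrative:)

            // Prefer the Canvas renderer on Firefox because of the SVG bug above;
            // there is no feature test for "SVG support is buggy here".
            var isFirefox = navigator.userAgent.indexOf('Firefox') !== -1;
            var layer = new OpenLayers.Layer.Vector('overlay', {
              renderers: isFirefox ? ['Canvas'] : ['SVG', 'VML', 'Canvas']
            });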

  • greggman 10 years ago

    Unfortunately, for various reasons, it's often impossible to use feature detection.

    Two examples:

    1) There's currently no way to reliably check whether a browser supports device orientation, maybe because of bugs or incomplete implementations. (A workaround sketch follows below.)

    2) It's impossible to tell when iOS 8.x has added the chrome around the page in landscape, removing nearly a third of the entire viewable area on an iPhone 5S.
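
    For (1), the usual workaround is to treat orientation as supported only once a real reading arrives; a hedged sketch (not greggman's code):

        // 'ondeviceorientation' in window can be true even when the browser
        // never delivers a reading, so wait for actual data, with a timeout.
        var gotReading = false;
        function onOrientation(e) {
          if (e.alpha !== null || e.beta !== null || e.gamma !== null) {
            gotReading = true;
            window.removeEventListener('deviceorientation', onOrientation);
          }
        }
        window.addEventListener('deviceorientation', onOrientation);
        setTimeout(function () {
          if (!gotReading) {
            // assume device orientation is unusable; fall back here
          }
        }, 1000);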

    • Zarel 10 years ago

      My most recent problem unsolvable by feature detection has to do with the HTML5 `dragend` event:

      https://github.com/Zarel/Pokemon-Showdown-Client/commit/30f2...

      In Chrome, event.pageX/pageY refer to the position of the top left corner of the drag-preview-image, relative to the top left corner of the page.

      In Safari, event.pageX/pageY refer to the position of the mouse cursor relative to (0, window.innerHeight * 2 - window.outerHeight), a point slightly above the bottom left corner of the screen.

      (Neither of these matches the spec, which as far as I know says that event.pageX/pageY should be the position of the mouse cursor relative to the top left corner of the page.)

      I eventually got around it by storing values from the `drop` event, but anyone who needed these values from the `dragend` event would be screwed.
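
      (Roughly like this sketch; the element ids and the placeAt helper are hypothetical:)

          // Store coordinates from `drop`, where pageX/pageY behave consistently,
          // and read them back in `dragend` instead of trusting its values.
          var source = document.getElementById('draggable');
          var target = document.getElementById('drop-target');
          var lastDrop = null;
          target.addEventListener('dragover', function (e) {
            e.preventDefault(); // required for `drop` to fire
          });
          target.addEventListener('drop', function (e) {
            e.preventDefault();
            lastDrop = { x: e.pageX, y: e.pageY };
          });
          source.addEventListener('dragend', function () {
            if (lastDrop) placeAt(lastDrop.x, lastDrop.y);
          });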

      My next most recent problem unsolvable by feature detection has to do with HTML5 Notification, whose API was massively changed recently. Attempting to use the new API on old versions of Chrome would cause the render process to crash (not just throw an error you could catch in a try-block, but actually crash like http://i.stack.imgur.com/DjdCX.png ). Of course, using the old API on new versions of Chrome would fail silently, so there was zero way to reliably deliver a notification to the latest version of Chrome without crashing older versions or using user agent detection:

      https://code.google.com/p/chromium/issues/detail?id=139594
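
      (The failure mode, sketched from the description above; this is not Zarel's code:)

          // On the affected Chrome versions this killed the render process
          // outright, so the catch block never ran; the whole tab just died.
          try {
            var n = new Notification('hello');
          } catch (e) {
            // unreachable on the crashing versions; only a UA check avoids the call
          }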

    • realusername 10 years ago

      There are some other edge cases like this too: the tel: URI protocol and the HTML5 offline cache.

  • joelwilliamson 10 years ago

    IE has spoofed Mozilla browsers since at least IE2. When was the last time you saw a browser that didn't claim to be Mozilla/5.0?

    • ChrisGranger 10 years ago

      I didn't think there was a Mozilla browser until after IE4 came out...

      • toyg 10 years ago

        But there was a Mozilla engine, which is what Navigator had in its UA and Explorer spoofed.

  • drzaiusapelord 10 years ago

    Yeah, so it's going to report as 'Edge 1.0', not match anyone's regex, and hit a "You need IE6 or higher to visit this page" error?

    User agent sniffing is just a bad practice. Everything about it is a hack. There's no right way to do something wrong.

  • mmaunder 10 years ago

    Agreed re feature detection. This could lead to an uncomfortable world for devs where the UA becomes completely unreliable, which may finally force a migration to feature detection.

  • frandroid 10 years ago

    Unfortunately, browser bugs don't come with self-identification APIs.

jrockway 10 years ago

This is certainly unmaintainable. There are two aspects I find especially interesting.

The first is how many different ways sites are determining the browser corresponding to the user agent. You know that they're all based on "guess and test": come up with an idea for an if condition, open up the code in the 5 browsers you care about, see if the result works. It almost reminds me of the output from a fuzzer: it's a valid answer, but you can tell a random number generator and a lot of tries is what got you there.

The second is how many web developers seem fine writing the same website many times: once for mobile, once for IE, once for Chrome, once for Firefox. I've always taken the approach of doing exactly one site, using whatever features are available in the worst browser the client wants supported. If extensive workarounds are needed to make a feature work in every browser, I say skip it. (I was always happy with what IE6 could do.) Of course, when I did web development, it was mostly boring corporate applications, not public websites facing pressure from competitors willing to write a codebase from scratch for every browser back to NCSA Mosaic. I consider myself very lucky.

ShaneWilton 10 years ago

Things like Modernizr and feature detection are great, but I'm excited to see what happens when coeffects [1] have some more research behind them, and end up being supported by a mainstream programming language.

The idea is that you're able to encode information about the execution environment into the type system, so that you can do things like write functions that depend on having access to GPS coordinates, or an audio output device, and so on.

The theory is that this will make it easier to target a wide variety of platforms, or devices which may restrict access to information through permissions systems, like those offered by Android or iOS. If, at compile time, you know you've handled all of the cases of an environmental constraint either being met, or not being met, then you have a much stronger guarantee that your code isn't going to unexpectedly fail spectacularly on a platform you haven't tested against.

[1] http://tomasp.net/blog/2014/why-coeffects-matter/

whoopdedo 10 years ago

UA overrides are enabling the poor web design that necessitates them. If web sites aren't punished for doing the wrong thing, they'll keep doing it, requiring more overrides, which hide the bugs, and so on.

Stop this feedback loop. The client is not responsible for the server's bugs.

  • miketaylr 10 years ago

    Hi, I disagree (not just because I authored the patch linked here). Web sites are never punished. Our users are.

  • andreastt 10 years ago

    If a user has a sub-par experience browsing a website in one browser, she will switch to another, and the loss falls on the browser vendor, not the website.

    The vendor has an obligation to create the best web experience for the user, and while it's certainly a sad state of affairs that more sites aren't testing for features rather than the UA string, it's also a sad fact that not all web authors care enough about interoperability to mend their ways.

tracker1 10 years ago

I really wish that all phone/tablet/mobile OSes would simply include an X-Screen-Size (Xpx, Xpx, Xcm) and X-Touch-Enabled header.

I've used detectmobilebrowsers.com as a baseline in the past, tweaked slightly so that fallbacks matching "phone" get "xs" (extra small), other common mobile OSes get "sm" (small), and desktops get "md" (medium). If you use adaptive CSS and JS, you can (and should) adapt when the pre-rendered environment doesn't match.
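
(A hypothetical Express sketch of that classification; isPhone/isMobile stand in for the detectmobilebrowsers.com regexes, which are far too large to inline:)

    // Hypothetical sketch, not tracker1's code. The stand-in regexes below are
    // deliberately crude; the real detectmobilebrowsers.com test is enormous.
    var express = require('express');
    var app = express();

    function isPhone(ua)  { return /iPhone|Android.+Mobile|Windows Phone/i.test(ua); }
    function isMobile(ua) { return /Android|iPad|Mobi|Tablet/i.test(ua); }

    app.use(function (req, res, next) {
      var ua = req.headers['user-agent'] || '';
      if (isPhone(ua)) res.locals.sizeClass = 'xs';       // phones: extra small
      else if (isMobile(ua)) res.locals.sizeClass = 'sm'; // other mobile: small
      else res.locals.sizeClass = 'md';                   // desktops: medium
      next();
    });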

There are other modules written to predetermine a lot more, but it's all a bit sad.

MichaelGG 10 years ago

I guess this then contributes to a feedback loop. People will check their stats, see that Firefox isn't used, and pay less attention. Sucky position to be in.

realusername 10 years ago

There are a lot of websites like this, unfortunately: Gmail, Google Maps, YouTube, Office 365... Just try to browse a bit with Firefox for Android and you will see.
