Building the most inaccessible site with a perfect Lighthouse score (2019) (matuzo.at)
> This post is about you and me. Scores indicate the quality of our apps and sites, but we must not trust these numbers thoughtlessly. We have to understand that automatic testing is just a first step.
I run my own scoring tool for website best practices and SEO, and I often get support requests from users who are worried or annoyed that they can't get a perfect score. Some of my general views here:
- Scores serve more as a minimal baseline that your site should meet, and there are always limitations to what the score measures. A low score means it's very likely there are some bad issues to fix, and a high score means your site is probably in good shape, but this should only be used as a starting point. You can usually trick scoring tools as well, so the score assumes you're playing fair.
- Perfect scores usually aren't possible for non-trivial sites. There are always trade-offs to make, including whether it's worth a large development effort to fix something that's not a big deal. Only you can decide what's worth the effort to fix and what your site's audience will care about most.
- Because of the above, it's not usually meaningful to make in-depth comparisons of scores from different sites. Scores are better used as a rough metric to tell if your own site is improving after you make changes.
I had a quick look at your scoring tool. It looks quite useful.
Couple of issues I spotted:
- You're recommending a maximum meta description length of 320. That's no longer what Google recommends.
- I got all green ticks for mobile scaling on a site with "maximum-scale=1". Maximum scale should ideally be avoided.
> - You're recommending a maximum meta description length of 320. That's no longer what Google recommends.
Thanks, can you provide a link that gives a length recommendation? On https://developers.google.com/search/docs/beginner/seo-start... they say:
"While there's no minimal or maximal length for the text in a description meta tag, we recommend making sure that it's long enough to be fully shown in Search (note that users may see different sized snippets depending on how and where they search)"
This one is tricky because the maximum number of characters Google displays is open to change (and there's more search engines than only Google too).
> I got all green ticks for mobile scaling on a site with "maximum-scale=1". Maximum scale should ideally be avoided.
Thanks, I'll look into this. Rule https://www.checkbot.io/guide/seo/#rule-set-mobile-scaling is only checking for `width=device-width, initial-scale=1` right now (cited from https://developers.google.com/search/mobile-sites/mobile-seo...) and not looking at `maximum-scale`, so this falls under being a decent baseline but still more you could do.
What range of `maximum-scale` should be allowed if any? Off the top of my head, `1.1` is probably just as bad but `100` is probably okay. I'm curious what the typical values used are.
In general, I've been pretty conservative about what rules I've added, sticking to ones that are generally agreed on and generate minimal false positives.
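To make the difference concrete, here's the pair of viewport tags in question (a sketch; the exact acceptable range for `maximum-scale` is the open question above):

```html
<!-- Passes a check for "width=device-width, initial-scale=1", but the
     maximum-scale=1 part still disables pinch zoom for everyone: -->
<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1">

<!-- More accessible: leave maximum-scale (and user-scalable=no) out entirely -->
<meta name="viewport" content="width=device-width, initial-scale=1">
```

If a cap must exist at all, WCAG 1.4.4's requirement that text be resizable up to 200% suggests it should be at least `2`, though omitting it seems safest.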
Maximum scaling is really annoying on mobile. Thankfully you can override it in Chromium's accessibility settings menu (I hope they don't remove that!)
Surprised to see that the OP didn’t even mess with scrolling – a staple nowadays, especially on touch devices.
For me, it's the lack of scrolling. With many of the domains that scripts would load from blocked (uMatrix), some modern sites show lengthy content but no scroll bar. It might be intended that a script sets up the view depending on the device, but it certainly feels as if it's done intentionally to force visitors to load/run the scripts.
It's probably because the blocked script controls a modal window: sites often set overflow: hidden on the body while a modal is open. When you clear the modal, the scrollbars return.
I feel like I've visited countless websites which employ some of these, for no good reason.
I hope the Lighthouse team does not act on this. No need to complicate an already complex product for synthetic edge cases.
If only he'd used his talents for good, not evil.
Seriously, though, I swear I've encountered some of these before. If TBL was dead, he'd be rolling in his grave.
Motive for being evil:
> This is so evil. My LinkedIn inbox will be filled with job offerings by companies like Facebook and Uber.
My personal takeaway from this is that automated accessibility tests are no substitute for professional audits
My takeaway was that accessibility tools expect the websites they are checking not to be actively hostile to accessibility.
Hardly. These examples are really clear-cut and easy to avoid in isolation; however, some of them can also be introduced by accident or ignorance in larger code bases.
An audit, plus using the site while developing the way a person who requires accessibility features would. The automated tools are just a quick check to find anything blatantly wrong. I also suggest using WAVE and aXe for that purpose.
As you get deeper into accessibility you'll also find there isn't one single right answer to improve accessibility.
> … there isn't one single right answer to improve accessibility.
This is very true. Like any optimization, there is a point at which improving things for one community makes things worse for others. An example most of us are familiar with is the trade-offs between mobile UI and desktop UI: You can make your UI adequately work on both, but in order to give both an optimal experience things start to get quite complex. I’ve seen developers tie themselves into knots trying to be all things to all people here. Accessibility is harder in a sense because able-bodied devs often don’t have any instinct for when they’re crossing the line from “good enough” to “overkill”.
The 80% rule for accessibility is really just “Make your site keyboard accessible”. While there will still be some issues for some users, it’s a clear enough goal that it can break a dev out of “analysis paralysis” and just get moving on something, and the benefit is huge for the vast majority.
I get the feeling that a professional audit has a reasonable chance of being the output from lighthouse copied into a spreadsheet.
If I'm going to get shallow automated advice, I want it cheap and fast, from the source.
My last job had audits done on a semi-yearly basis and they always caught things our automated tests hadn't.
Broad stuff like providing controls for changing content (e.g. carousels) is what automated a11y tests fail on; other hard-to-test criteria include WCAG 2.1 §2.3.1 "Three Flashes or Below Threshold" and 1.4.9, which forbids "Images of Text".
Accessibility is just usability.
You cannot automate usability tests. You have to put it in front of a real human and see if they can use it.
> Accessibility is just usability.
Common thought, but not really true. The basics of accessibility might be considered just "usability" or even UX, but beyond that it steps into being useful for people with certain disabilities while not impacting people without them.
One example from the article, `aria-hidden="true"` (https://www.w3.org/TR/wai-aria/#aria-hidden) might be used to hide elements containing text that are not useful for people with screen-readers, while not changing the experience at all for the ones not using one.
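As a minimal sketch (my own invented markup, not from the article), a decorative icon next to a visible label is a common legitimate use:

```html
<!-- The icon is decorative; hiding it stops screen readers from announcing
     a star character on top of the visible button label -->
<button type="button">
  <span aria-hidden="true">★</span>
  Favourite
</button>
```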
If they are no good for screen reader users, what's their purpose for non-screen-reader users? I can only think of bad use cases: duplicating content the site wants to promote, or a duplicated navigation. Things would probably be cleaner without them in most cases.
Screen readers often read things in particular ways that non-screen readers would interpret differently. One example that I've run into is using all caps to label something, like "ACCOUNT NAME". Screen readers will typically read out each letter individually instead of as a word. There could be other things, like describing field colors or something that are not as useful when audio only.
Should the all-uppercase styling be done in CSS, like `text-transform: uppercase`, so it doesn't affect screen readers?
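A minimal sketch of what I mean (the label markup is invented here):

```html
<style>
  /* Rendered as "ACCOUNT NAME", but the DOM text stays mixed case, so a
     screen reader is less likely to spell it out letter by letter */
  .field-label { text-transform: uppercase; }
</style>
<label class="field-label" for="account-name">Account name</label>
<input id="account-name" type="text">
```

(Note that `capitalize` only uppercases the first letter of each word; `uppercase` is the all-caps one. Screen reader heuristics vary, so this isn't guaranteed, but it keeps the underlying text sane.)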
Where something is explicitly visual-only, I agree that there can be cases.
You just described usability! General UX.
> There's a version of this article without the middle finger emoji.
People really get offended by the middle finger emoji? Wow.
People get really offended by a sign designed to cause offence? Wow.
You might feel some moral high ground by abandoning all tradition and traditional social contracts, but those traditions and traditional social contracts are still held in high regard by many many people.
Some people abandon all traditional social contracts only to embrace totally new social contracts, and then demand that everyone else embrace them too.
Is this commentary on pronouns?
Maybe this is the Aussie in me, but the middle finger as an offensive gesture is so mild I'd be surprised if anyone with access to the wider internet would actually be shocked to see it - let alone a cartoon emoji of it.
It doesn't have anything to do with abandoning tradition and living the edgelord life; the middle finger is firmly up there with "crap" or "bugger" on the "people were using this in ads 15 years ago" end of the offensive spectrum.
Yeah, using a sign designed to cause offence in a very obviously joking and non-personalized manner is not offensive. Anyone taking offense is wound too fucking tight. Maybe if it were somehow historical used to target a certain group of people like certain words, I'd agree. But nothing like that is happening here.
This doesn’t really make sense. The article isn’t flipping you off. It’s an implied fictional website designer flipping off hypothetical impaired users, as a metaphor for what these design decisions might imply if done in earnest. It’s hard to imagine what social contract is being broken here.
My assumption was to keep it SFW
You can find someone to get offended by anything, if you look hard enough. It's like a kind of Rule 34 of offendedness.
Maybe not the forum to ask, but I have added an animation to my homepage - an image appears after it is loaded and slowly glides to its new resting place (different from initial location).
I suspect this is why my LCP is crap. But I'm unsure of a way to make the lighthouse score better without removing the glide animation. Do I have no recourse?
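One hedged suggestion (I don't know your markup, so everything below is invented): give the image its final layout slot immediately and do the glide with a transform animation. Transforms don't trigger layout, so there are no layout shifts, and the element can be painted as soon as it loads:

```html
<style>
  /* The image occupies its final layout position from the start;
     only the transform animates, gliding it in from an offset */
  .hero-img {
    animation: glide-in 1.2s ease-out;
  }
  @keyframes glide-in {
    from { transform: translate(-60px, -40px); }
    to   { transform: translate(0, 0); }
  }
</style>
<img class="hero-img" src="hero.jpg" width="800" height="400"
     alt="(describe the image)">
```

I'm not certain this fixes your LCP specifically: if the score is dominated by image load time rather than the animation, preloading the image may matter more, and animations that start from `opacity: 0` have reportedly delayed when LCP gets recorded, so avoid fading in from fully transparent.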
A bit silly. Most of the examples are intentionally hiding content with explicit commands no one would ever use, and then saying it is inaccessible.
That said, Lighthouse should do things like "use perceptual parsing techniques to compare the visible content to the standard screen reader parsed content".
> intentionally hiding content with explicit commands no one would ever use
Try browsing the web with JS and/or CSS disabled, you may be surprised to see how common it is to use <body hidden> and equivalents.
Just by disabling JS and occasionally using keyboard navigation, I have personally noticed half of the techniques in the article on major websites.
The point is that you can employ some of those techniques and still get a perfect score. You might set outline: none or have an aria-hidden on your main content because you just copied some bad example. You would never notice and still score perfectly.
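For example (a sketch; whether a given checker flags these varies):

```html
<style>
  /* Often copied from a bad example: keyboard users lose all focus indication */
  :focus { outline: none; }
</style>

<!-- Hides the entire page content from assistive technology -->
<main aria-hidden="true">
  <h1>Welcome</h1>
  <p>All of this is invisible to screen readers.</p>
</main>
```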
This is axe's acid3 test.
> Designing the most wheel-chair inaccessible building that meets all accessibility codes!
> btw guys this isn't really about accessibility guidelines it's just about, uh, the importance of not relying on those guidelines, or something. totally not me venting about having to follow accessibility guidelines in the first place as many of my professional colleagues are known for doing.
The complaint seems very clear to me: this automated test is unreliable as a sole measure of accessibility. If it were about wheelchair accessibility, it would be like a "you must have a ramp by a door" rule earning a big "100% wheelchair accessible" sticker after you send your blueprints to the automated test, and then someone pointing out that the ramp could be on the roof.