jQuery CDN having SSL issues
code.jquery.com

It is too bad that the HTML standard has no built-in way to fall back.
They've added the cryptographic integrity hash and the async/defer attributes to the script tag, but something as essential as a fallback when a script or stylesheet fails to load (which the browser is best placed to know) has no built-in support.
Instead you're left doing JavaScript tricks, which for missing CSS get a little ugly[0]. CDN-with-local-fallback (or vice versa) has been common practice for years now, yet there's no official support at all. Honestly if the integrity attribute is specified the browser should just be able to fall back to a cached copy it has (e.g. jquery.1.2.3.min.js has a crypto hash of ABC123, and I have that file already).
[0] https://stackoverflow.com/questions/7383163/how-to-fallback-...
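For the record, the "trick" in question is roughly this - a sketch only, with the version and sha384 value left as placeholders:

    <!-- Try the CDN copy first; SRI rejects a tampered file outright. -->
    <script src="https://code.jquery.com/jquery-X.Y.Z.min.js"
            integrity="sha384-REPLACE_WITH_REAL_HASH"
            crossorigin="anonymous"></script>
    <!-- If the CDN script didn't load (or failed its integrity check),
         window.jQuery is still undefined, so pull in a self-hosted copy. -->
    <script>
      window.jQuery || document.write('<script src="/js/jquery.min.js"><\/script>');
    </script>

It works because a script that fails SRI simply doesn't execute; CSS is the uglier case, since there's no handy global to test for (hence [0]).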
I mean, the fallback mechanism is progressive enhancement. It’s a reliability mechanism more than anything - if JS (or part of the JS) fails to load, the site should fall back to a version that potentially reduces interactivity but allows essential functions to continue.
Progressive enhancement is largely a myth.
A lot of libraries - jQuery, Lodash, Angular, Vue, React, Bootstrap's JS, module loaders, etc. - aren't simply offering "improved interactivity"; they're offering core functionality. In essence the site runs on these libraries; if you remove them there's nothing left to regress to.
I've worked at several companies and never seen progressive enhancement used. It might have made sense back in the IE6 era when JavaScript was just for whiz-bang; these days JS libraries hold the whole site's data context/state and generate Ajax requests as needed (Vue, Angular, React, etc.). That's core; there's nothing progressive that can be removed from that.
Progressive Enhancement only makes sense for small toy sites or for academics to play with. Even Netflix's famous examples are about web services going offline, not losing core JavaScript libraries.
Sorry, but the entirety of GOV.UK is built using progressive enhancement. I’m assuming you don’t believe that a site with 300k+ public documents, multiple hundreds of user-critical services (including some built with React and other modern frameworks), and serving 100m+ pages a month is a toy site. The honest answer is that we couldn’t afford to go down in the case of missing or broken JS, regardless of how it was broken (be it CDN failure, deliberate user choice, or some other reason like proxy failures, users on mobile connections that drop out halfway through page load, etc.).
No. It's just a problem of laziness, and possibly ego, of developers.
There are apps using browser as execution environment and there are websites. You wouldn't expect a client-side drawing tool, WebVR game or real time visualization of blockchain transactions to work without JS enabled.
However, you can easily expect a social network, mail client, news page, task app, and to some extent even things like IM to work with no JavaScript. "That's core" is just an excuse for poor architecture - it's only core because you chose to make it so.
There are apps and there are websites, with only a small grey area in between. If you're a web developer wanting to use the newest, greatest trendy tools, you see everything as an app, despite common sense suggesting otherwise, and you end up with no progressive enhancement for no good reason. When you take it to extremes, you end up creating abominations like the old SPA Twitter frontend, spinning your laptop's fans for 15 seconds just to display 140 characters of text, because "the core" is implemented as AJAX calls and fully rendered client-side.
> No. It's just a problem of laziness, and possibly ego, of developers.
We're talking about large organizations. No single developer is making these decisions. And the question is about resource allocation: if the choice is between improving the core experience and implementing an experience that <1% of our users will ever see, the choice is easy.
> "That's core" is just an excuse for poor architecture - it's only core because you chose to make it so.
It is core because the internet has democratically made it so. You're speaking for a very vocal minority. We're choosing not to implement a special mode for people who self-selected to receive a broken web experience. Fortunately that same demographic knows how to resolve the issue they caused.
> no progressive enhancement for no good reason.
A richer user experience is a very good reason. If the choice is between making the site richer and more immersive for 99% of users, and leaving 1% of users who wish to be contrarian for no reason out in the cold? So be it. A worthy sacrifice, in particular as this 1% selected themselves for punishment.
You're welcome to pick and choose any arbitrary part of the web to disable, maybe JavaScript, maybe CSS, maybe font rendering entirely, maybe disable images, but it gets a little silly when you blame others for your self imposed breakages. You don't want it broken? Don't break it.
> If the choice is between making the site richer and more immersive for 99% of users, and leaving 1% of users who wish to be contrarian for no reason out in the cold?
There is no choice. You can either build it properly, in accordance with good engineering practice, so that it works well enough under those restrictions and is still as rich as you want when you don't impose them. Or you can make it broken by doing it the lazy way.
Also, there are plenty of reasons to disable JavaScript. It often makes the web browsing experience better, faster and more energy efficient. Sometimes you care about those things way more than any perceived "richness". In many cases, lazy webdevs and their broken code are the only reasons why it might be worse.
I’d dearly love to have stats that aren’t a few years old, but when GOV.UK last ran an experiment it was 1.1% of people who arrived without JS (~1m page views a month), split into 0.3% who’d deliberately disabled JS and 0.8% who had a broken JS environment for another reason. Like I said, it’s a reliability issue, not pandering to people who choose to go out of their way to ‘break’ their environment. So yes, by choosing a JS-only environment you’re prioritising developer needs over user needs. That’s potentially fine if the cost balance equation works out that way for you, but it’s a specific choice you’ve made to not support people who through no fault of their own don’t meet the requirements of the environment you’ve decided to create.
> So yes, by choosing a JS-only environment you’re prioritising developer needs over user needs.
Even according to your own statistics, we're prioritizing 98.9% of users' needs over 1.1% of users' needs (or more accurately 99.2% of users against the broken 0.8%, since we won't do anything for the 0.3% who decided to break it on purpose).
Resources allocated to 0.8% of the userbase aren't free, they come from time that could be better spent improving the experience for everyone else.
> That’s potentially fine if the cost balance equation works out that way for you, but it’s a specific choice you’ve made to not support people who through no fault of their own don’t meet the requirements of the environment you’ve decided to create.
That's fine. The same 0.8% with a broken browser or proxy won't find anywhere else on the internet any more friendly to them. The best they can hope for is a small sliver of sites with fallback, but the user experience will be so terrible they're better off just fixing the issue than continuing.
I find it funny that people spent years making these same arguments with assistive technologies as their cornerstone; now that assistive technologies (and the ARIA standards) fully support rich JavaScript sites, the argument has shifted to some hand-waving minority that cannot even be quantified. We both know this is really about the NoScript crowd (and similar, like RequestPolicy); the other people with a broken web experience have far more significant issues that no one site can hope to mitigate.
I didn't know the term was progressive enhancement, but if it is what I think it is, it's a great thing to aim for. Sometimes users (probably more on HN) like to browse without JS enabled, and some people need websites that are accessible in a more basic environment, whether it's a screen reader, a console web browser, archive.org, or simply an older browser. Worst case, if the JS breaks and throws an error on an older browser, the essential content should still be available. There's nothing more irritating than "you need JS to view this page" when it isn't appropriate, or just an empty page because some dependency broke due to human error. Instead, if the search button's popup box with suggestions won't work, fall back to just linking to the search page. Use <noscript> to redirect people to a basic version of some interactive content if you need to (a sketch of that follows below). Yes, the latest and greatest is cool, but I still think this approach is in style. It also restrains how dependent I am on the JavaScript being perfect, and keeps too much JS lag from building up in my libraries. Of course this is just my opinion.
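For example, a minimal sketch of the <noscript> idea (the /basic/search URL here is made up, not from any particular site):

    <!-- In the <head>: send non-JS visitors to a server-rendered page. -->
    <noscript>
      <meta http-equiv="refresh" content="0; url=/basic/search">
    </noscript>

    <!-- Or, in the <body>: just surface the plain link instead of redirecting. -->
    <noscript>
      <p><a href="/basic/search">JavaScript is off - use the basic search page.</a></p>
    </noscript>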
You can write single page applications that are fully aria aware, and we do. In fact we're legally required to, and our pages are audited before they're ever released to the public including for assistive technologies. Screen readers support JavaScript, and have for at least the last ten years.
As to users choosing to run without JavaScript? We simply don't support it. We support IE8 and above. You cannot even get to our site using Windows XP due to errors in the TLS negotiation (to avoid SSL downgrade). If users choose to disable a core part of their browser that happens to be a core part of modern web standards, then they shouldn't be surprised when sites using that part no longer work.
You in Australia? We built 9now.com.au using a degree of ‘progressive enhancement’ thanks to server-side React. I left a while ago, but normally users get a single-page-app-like experience, complete with prefetching to instantly display subsequent pages. Turn JS off and it ‘degrades’ into a typical server-rendered experience. Sure, get far enough in and some things will break (videos won’t play), but we were pretty happy with how nicely it degraded.
We built Qantas’s in-flight wifi portal using the same technique, a necessity for providing users with an instant ‘web’ experience over a high-latency connection.
Toy sites or for academics? Hardly.
> Toy sites or for academics? Hardly.
You gave two examples, one of which didn't even work correctly when regressed ("videos won’t play") and the other is effectively a toy site (login/terms of service portal).
When you're in the weeds developing complex data entry sites for large companies or government, you aren't going to write the page three times progressively, write three sets of tests, run usability & accessibility studies three times, and maintain all three for tens of years with bug fixes/enhancements.
What you're going to do is lock in your library requirements, and if the library fails to load you're just going to fail fast into a generic error message until the library is available. If for no other reason than that it isn't financially feasible to do anything else.
Progressive enhancement of JavaScript libraries was dead the second we went to SPAs and had a library hold the page's data context/scope instead of writing web pages using HTML forms and refreshing every time you click.
> Honestly if the integrity attribute is specified the browser should just be able to fall back to a cached copy it has (e.g. jquery.1.2.3.min.js has a crypto hash of ABC123, and I have that file already).
I really want this feature. I think there might be some cross-origin issues that need to be handled correctly (e.g. you might be able to fingerprint a user by probing for parts of their cache), but for common things like exact copies of jquery, this would be super useful.
Yeah, I think the latest and greatest in web standards for solving this problem would probably be using a Service Worker to transparently handle the failure and serve up a fallback... but like you said, that's a JavaScript-based solution.
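Something like this, as a rough sketch (the CDN hostname matches this thread, but the fallback path is made up):

    // sw.js - answer failed CDN requests with a self-hosted copy instead.
    self.addEventListener('fetch', (event) => {
      const url = new URL(event.request.url);
      if (url.hostname === 'code.jquery.com') {
        event.respondWith(
          fetch(event.request).catch(() => fetch('/js/jquery.min.js'))
        );
      }
    });

Register it with navigator.serviceWorker.register('/sw.js'). The catch, beyond needing JS at all, is that a worker only controls pages loaded after it has installed, so a first-time visitor still gets no fallback.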
I always self-host my JS/CSS libraries: the connection is already open (thanks to keep-alive), so what's the problem with serving a couple more KiB of compressed data instead of making an additional DNS lookup and a new connection to a CDN?
I understand that the CDN version of the library may have already been cached by the browser while visiting other websites, but does it really save that much time/traffic compared to self-hosting?
I'm not taking a side, just trying to add some numbers. Let's ignore the privacy/uptime concerns for the sake of this comment.
Say every site you visit has 350 KB of stuff that could come from a CDN - JS, but also some CSS and fonts (Google Fonts, Bootstrap, etc.). If you visit 50 pages a day in a 30-day month, that's a little over 500 MB of data.
0.35 MB x 50 sites x 30 days = 525 MB
That would be a ton of easily avoidable data for mobile plans, depending on where you are. This number isn't 100% accurate though; many "normal" users (read: not techy Hacker News readers) might only visit, say, a dozen sites a day or fewer (let's ignore apps like Facebook/Snapchat/etc.). Even that might be a stretch.
Then again, students and other "savvy" users might be going across hundreds of new sites a day.
For you, the host? Unless you're a massive beast, most of us "hobbyists" fit within the free bandwidth of $5 VPS services anyway.
That’s assuming every site is using the same CDN and the same version of the library. Seeing as that’s not the case you can cut that by at least an order of magnitude. Secondly, most users visiting 50 pages a day will not visit 50 sites a day but more like 5 pages a site across 10 sites, or even 10 pages a site on 5 sites. Now let’s add in to the user the external privacy cost of being tracked across multiple sites and it starts to look a little less appetising again from the user’s point of view.
Unfortunately, everyone still ends up using many different versions, or adding other unnecessary query string parameters, so each site still effectively ends up with its own file to download.
Depending on where you're hosting your app, you may or may not be paying for bandwidth. If you're paying for bandwidth, you might as well save some bucks and use one of the public CDNs.
Original, jQuery CDN:
https://code.jquery.com/jquery-X.Y.Z.min.js
Google:
https://ajax.googleapis.com/ajax/libs/jquery/X.Y.Z/jquery.min.js
Microsoft:
https://ajax.microsoft.com/ajax/jquery/jquery-X.Y.Z.min.js
Microsoft ASP.NET:
https://ajax.aspnetcdn.com/ajax/jquery/jquery-X.Y.Z.min.js
jsDelivr:
https://cdn.jsdelivr.net/npm/jquery@X.Y.Z/dist/jquery.min.js
cdnjs:
https://cdnjs.cloudflare.com/ajax/libs/jquery/X.Y.Z/jquery.min.js
Yandex.ru:
https://yastatic.net/jquery/X.Y.Z/jquery.min.js

Does anyone have a performance benchmark comparing them?
While Google and Microsoft are not included, jsDelivr (StackPath + Fastly + Cloudflare + Quantil) compares itself to MaxCDN (now StackPath, which is the official jQuery CDN) and Cloudflare (the cdnjs provider) here: https://www.cdnperf.com/cdn-compare?type=performance&locatio...
KeyCDN has an online asset performance tool that we can use to compare the hosted jquery.min.js files. The numbers included here are results received (to the San Francisco location) in ms of [DNS lookup time] / [time to connect to server] / [overhead of TLS connection on individual asset] / [time from client HTTP request to receiving first byte from server]:
Original, jQuery CDN:
https://tools.keycdn.com/performance?url=https://code.jquery...
8 / 2 / 79 / 85
Google:
https://tools.keycdn.com/performance?url=https://ajax.google...
32 / 2 / 132 / 155
Microsoft:
https://tools.keycdn.com/performance?url=https://ajax.micros...
128 / 3 / 122 / 130
Microsoft ASP.NET:
https://tools.keycdn.com/performance?url=https://ajax.aspnet...
128 / 3 / 114 / 120
jsDelivr:
https://tools.keycdn.com/performance?url=https://cdn.jsdeliv...
64 / 3 / 118 / 129
cdnjs:
https://tools.keycdn.com/performance?url=https://cdnjs.cloud...
64 / 2 / 118 / 125
Yandex.ru:
https://tools.keycdn.com/performance?url=https://yastatic.ne...
32 / 139 / 667 / 993
When I tested them, only jsDelivr and cdnjs/Cloudflare received green results (under 200ms time to connect and under 400ms time to first byte) from all 16 worldwide test locations. Averaging the results between these two across the 16 locations, I would go with jsDelivr, which had the faster average TTFB. The fact that they combine Cloudflare, Fastly, StackPath, and Quantil (who I had never heard of until today) might explain their global results.
Great opportunity to strip out unnecessary uses of jQuery and move to vanilla JavaScript.
I haven't used jQuery in about 2 years - just plain JavaScript and maybe some Lodash functions imported into my ES6. Give it a try, you might not need it!
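For the common cases the mapping to plain DOM APIs is pretty direct (illustrative snippets only - the selectors, IDs, and URL are made up):

    $('.item')                        // document.querySelectorAll('.item')
    $('#save').on('click', onSave)    // document.getElementById('save').addEventListener('click', onSave)
    $(el).addClass('active')          // el.classList.add('active')
    $.getJSON('/api/items', render)   // fetch('/api/items').then(r => r.json()).then(render)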
youmightnotneedlodash dot com
more like great opportunity to use a different cdn
Years ago, I used to link the library from Google [1] or CloudFlare [2].
Nowadays, with all the Node.js tooling around the modern front-end, I don't see the point of loading a JavaScript library from a CDN, unless that library depends on a remote service, e.g. Google Analytics, Google Maps, etc… That being said, if you are still maintaining a legacy website that depends on jQuery, you should consider adding a local fallback after the CDN <script> tag, like this:
<script>window.jQuery || document.write('<script src="/js/jquery.min.js"><\/script>')</script>
> Nowadays, with all the Node.js tooling around the modern front-end, I don't see the point of loading a JavaScript library from a CDN, unless that library depends on a remote service, e.g. Google Analytics, Google Maps, etc… That being said, if you are still maintaining a legacy website that depends on jQuery, you should consider adding a local fallback after the CDN <script> tag, like this:
What does Node.js have to do with deciding whether to get your static assets from a public CDN or not? I hope you're not serving your static assets with Node.js.
> What does Node.js have to do with…
There are tools like Grunt and webpack (which depend on Node.js) that can bundle all your dependencies.
I can't provide details about how they work because I don't do front-end development, but I can tell you that years ago I had to copy & paste both code and links to jQuery and other libraries like Backbone.js or Ember.js (relevant at the time) into my projects. Nowadays, web developers seem to prefer tools from the Node.js ecosystem that handle these dependencies in a more "engineer-ish" way, using npm packages.
Yeah, you can still avoid bundling the actual library and grab it from a CDN, using webpack externals[1]. Using webpack doesn't really change anything.
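A trimmed sketch of the externals bit (the rest of the config is omitted):

    // webpack.config.js
    // `import $ from 'jquery'` keeps working in the source, but webpack leaves
    // jQuery out of the bundle and resolves the import to the global
    // window.jQuery provided by a separate <script> tag (CDN or self-hosted).
    module.exports = {
      // ...entry, output, loaders, etc...
      externals: {
        jquery: 'jQuery',
      },
    };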
I assume he's talking about NPM. Now that everybody's hot new SPA has a few hundred thousand NPM dependencies, you roll it all up using Webpack and can just as easily `npm install jquery` as `<script src="cdn.jquery.com"></script>`.
> Now that everybody's hot new SPA has a few hundred thousand NPM dependencies, you roll it all up using Webpack and can just as easily `npm install jquery` as `<script src="cdn.jquery.com"></script>`.
Just because you can doesn't mean you should. You usually don't want to load some huge JavaScript bundle; it's bad for page load performance. Also, webpack lets you choose whether to bundle a dependency locally or grab it at runtime from a CDN or another server.
Most of the dependencies are for the Node dev tooling and never gets deployed. You can easily build a lightweight Vue/React site with Webpack.
One more reason to use Decentraleyes.
I came here to suggest this (I use it on Firefox), but unfortunately, this is not an option for a set of users on smartphones and tablets.
Works great on Firefox/Android, but yes unfortunately that leaves iPhone users out in the cold still.
It's not an expiry. It's a cert name mismatch. CN is *.ssl.hwcdn.net
My guess is they decided to switch to Highwinds as their CDN (don't know what it was before) and they didn't plan it correctly.
FYI: Stackpath owns Highwinds. It was likely an internal configuration issue at Stackpath.
Updated to a generic "having SSL issues"
https://www.jsdelivr.com is a good alternative. We actually monitor for https failures and automatically remove the problematic CDN.
This is why you self-host all project dependencies.
But that costs you cache hits, increases the real world size of your site, and slows loading. I'm sure mobile users with metered internet would prefer you didn't download that 100 KB JavaScript library for the nth time.
It's more complex than that. Mobile networks are mostly hampered by latency - each additional HTTP request to a different domain requires another TCP cold start and handshake, DNS lookup, TLS setup, etc.
Many times the 100KB of JavaScript is faster to load when minified and combined with other site code and served compressed over a single HTTP request or streamed via HTTP/2. It's almost always faster to use an existing connection than to start a new one.
Also there isn't one canonical version of jQuery. There's dozens of potential versions available[1]. So it's not immediately clear that a user will have the version a site depends on.
It isn't faster than not re-requesting the resource at all because it was previously downloaded from the CDN, which is what the discussion is about.
Does it really?
Looks like it's working again.
Starting to see complaints, questions, etc. on twitter about it too.
Looks like it's working now: https://code.jquery.com
Potentially related to the Chrome 66 update and Symantec stuff?
Broken on the latest versions of Safari, Firefox, Edge, etc. as well
Yep, this just broke my project :/
Looks like they fixed it
Only hobby websites would host jQuery off a CDN.