HTTPS' massive speed advantage

troyhunt.com

116 points by nouney 10 years ago · 46 comments

jimktrains2 10 years ago

> Well, almost, let's address the "It's not fair" whingers. The HTTPS test is faster because it uses HTTP/2 whilst the HTTP test only uses HTTP/1.1. The naysayers are upset because they think the test should be comparing both the secure and insecure scheme across the same version of the protocol. Now we could do that with the old protocol, but here's the problem with doing it across the newer protocol [is that HTTP/2 over TLS]

This doesn't invalidate the argument that you're comparing two things while claiming, prima facie, to be comparing two other things. For this use-case HTTP/2 will be faster, with or without TLS (if it could be tested). Claiming that it's the TLS that's speeding up the connection (which is what you mean when you say http vs https) is just plain wrong.

  • richardwhiuk 10 years ago

    But the other point is that https now means HTTP/2 or HTTP/1.1, whereas http always means HTTP/1.1 - so I'm not sure that's true.

    • tedunangst 10 years ago

      If https can mean one of two things, but http2 clearly means only one thing, why would anybody choose to use the term https unless they are trying to be deceptive?

      • c22 10 years ago

        Or if they're trying to refer to the thing that http2 is not?

    • falcolas 10 years ago

      https and HTTP/2 are not the same thing, though. Until HTTP/2 traffic is the vast majority when compared to HTTP/1.1 with TLS, you can't claim https and HTTP/2 are the same.

      For example, if you terminate your secure traffic on an AWS ELB (or using S3, or CloudFront), you are serving HTTP/1.1 with TLS. And will be for the foreseeable future.
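
      For what it's worth, it's easy to check what a given endpoint actually negotiates. A minimal Go sketch (the URL is a placeholder; this assumes a Go version recent enough that net/http negotiates HTTP/2 automatically over TLS, i.e. 1.6+):

          package main

          import (
              "fmt"
              "log"
              "net/http"
          )

          func main() {
              // The default client negotiates h2 via ALPN when the server offers it.
              resp, err := http.Get("https://www.example.com/") // placeholder URL
              if err != nil {
                  log.Fatal(err)
              }
              defer resp.Body.Close()

              // resp.Proto is "HTTP/2.0" if h2 was negotiated, "HTTP/1.1" otherwise;
              // resp.TLS is non-nil whenever the connection used TLS.
              fmt.Println("protocol:", resp.Proto, "TLS:", resp.TLS != nil)
          }

      An ELB-terminated endpoint, for example, would typically print HTTP/1.1 there even though the connection is TLS.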

      • jimktrains2 10 years ago

        Can you point me to the RFC where https was redefined to not be HTTP/1.1 over TLS?

        • richardwhiuk 10 years ago

          https was never defined in an RFC in the first place; as I understand it, it's just been used de facto for that since Netscape started it.

  • mikeash 10 years ago

    Pretty much the whole point of the article is that when you say "http vs https" you are no longer just saying TLS vs no TLS. It now means something more.

    • jimktrains2 10 years ago

      But does it? I mean, you can start a WebSockets session with an HTTP request too. No one says a WS server is an HTTP server and that HTTP is a confusing thing now.

      HTTP2 can be started via an HTTP over TLS request, but that doesn't mean it's HTTPS as defined in https://tools.ietf.org/html/rfc2818 and https://tools.ietf.org/html/rfc7230

      • mikeash 10 years ago

        If you assume a reasonably recent browser and server, then it does. Certainly one might not grant that assumption, but I don't think it's outright wrong to make it, either. Depends on your approach.

        • jimktrains2 10 years ago

          Why assume anything? Why not simply call things what they are? The issue is that browsers don't signal that you're using HTTP/2, and they decided not to use a new scheme either, which helps confuse us all. His point is literally based on nothing supporting HTTP/2 without TLS. What if something did? Would http also be a meaningless word? Why not call things what they are, rather than by what the browser is hiding?

    • tedunangst 10 years ago

      To which people are saying: that's wrong. "http2 implies https" does not mean "https implies http2".

  • phyzome 10 years ago

    HTTP 2 is pretty cool, but this is just a bad title.

  • geofft 10 years ago

    > Claiming that it's the TLS that's speeding up the connection, which is what you mean when you say http vs https

    But that's the entire argument he's making. It's not what I mean. When I go to my web host's sysadmins and say "I need to use https so I can use service workers," I don't mean "I need to use TLS so I can use service workers." I mean what I said. https was once defined as merely http + TLS (well, SSL), but it has now come to be a protocol/scheme that supports things that http does not. One of those, and certainly the biggest, is TLS. But there are other differences.

    This is a comparison between http and https. It investigates the reason why https is faster, and makes it clear that the difference is that https means other things besides TLS.

    • tedunangst 10 years ago

      As clearly noted, IIS doesn't support http2. So if it's http2 you want, telling the IIS sysadmin "I need https" is not going to address your problem.

      • benaadams 10 years ago

        IIS on Server 2016 supports http2

        • tedunangst 10 years ago

          Well, let's imagine for the moment that your sysadmin has chosen to only deploy officially released and supported versions of software by default. Is "please enable https" the best way to communicate that they need to install a beta version of Windows?

shawkinaw 10 years ago

It's not about fairness, it's about accuracy. You can't accurately say this is just comparing HTTP and HTTPS; it's comparing HTTP/1.1 and HTTPS/2, so why not just say that?

It's especially irksome that "httpvshttps.com" complains when your browser doesn't support HTTP/2, saying the results will be inaccurate. If the site were called "http1vshttps2.com" I would agree, but it's not.

  • danudey 10 years ago

    Looked at another way, it's comparing the latest HTTPS (2) with the latest HTTP (1.1).

    Looked at another other way, there are huge speed advantages that you can only get if you go with HTTPS.

    You aren't guaranteed those advantages if you end up stuck with HTTPS/1.1, but that just means you're using an old stack which you should (and can) upgrade (unless you're using IIS).

    • josteink 10 years ago

      This speed advantage could have been present for plain HTTP as well, had someone with an agenda not declined to support plain HTTP in HTTP/2.

      Actually, the whole HTTP/2 name is a massive misnomer (since it doesn't actually support HTTP) and is the closest thing to technical newspeak I can think of as far as internet protocols are concerned.

      Marketing this as a new HTTP protocol version when it clearly wasn't was just shady tactics and bad propaganda. The whole thing stinks.

      • WorldMaker 10 years ago

        « if not someone with an agenda had declined »

        Because privacy and security by default are a bad agenda to agree to?

        « the whole HTTP/2 name is massive misnomer (since it doesn't actually support HTTP) »

        HTTP/2 is backwards compatible. It may not be your idea of the right direction for HTTP/1.x, but that doesn't make it a misnomer. To be honest, the only people who can decide if it was the right name are the IETF, and they already made that decision. There's a reason the name SPDY looked nothing like HTTP: Google left that decision to the IETF as the standards body controlling the fate of the HTTP protocol.

        • josteink 10 years ago

          > Because privacy and security by default are a bad agenda to agree to?

          Taking away people's ability to host servers and services for themselves without having to register with a DNS provider and a CA is a major privacy violation, and a roadblock to easy deployment of applications on your own LAN.

          So yes, it's a bad agenda. Because it's not "by default". It's the only choice. You can't choose not to expose yourself and your identity to a centralised internet registry if you want to host a service now.

          You may not realise it, but once again you, as a representative of the http/2 crowd, are using wildly misleading language.

      • lorenzhs 10 years ago

        The so-called "agenda" is avoiding breakage, because tons of middleboxes wouldn't be able to cope with non-TLS HTTP/2. They just assume HTTP/1.1 and things fail when that's not the case. And then there are the security and privacy advantages on top.

Ono-Sendai 10 years ago

Pretty stupid article, -1 flamebait.

One thing I have always wondered about these waterfall comparisons is why HTTP/1.1 is slower. Since HTTP/1.1 has keepalive, a browser should be able to send multiple requests upfront to the server, and the server can then stream them back. The lower limit on transfer time should therefore depend only on bandwidth.

  • JonathonW 10 years ago

    The big thing that HTTP/2 brings as far as pipelining is concerned is that requests and responses can be multiplexed over the same connection to avoid the head-of-line blocking problem with HTTP/1.1. HTTP/1.1 requires pipelined responses to be sent strictly in the order they're requested; HTTP/2 can send responses in any order or even multiplexed/interleaved (hence the network activity graph from the OP).

    Also, most browsers don't enable HTTP pipelining by default-- they'll reuse a connection if possible, but won't make multiple requests at once for compatibility reasons. Chrome even supported it for a while, but had to remove it because it didn't work (bugs in Chrome, bugs in servers, and the head-of-line blocking problem made it not worth keeping) [1].

    [1] https://www.chromium.org/developers/design-documents/network...
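
    To make the multiplexing difference concrete, here's a rough Go sketch (placeholder URL; assumes the server speaks HTTP/2 and a Go version where net/http enables it by default). Against an h2 server, the concurrent requests should all report a reused connection, because the streams share one TCP connection; against an HTTP/1.1-only server, the client has to queue them or open extra connections instead:

        package main

        import (
            "fmt"
            "io"
            "log"
            "net/http"
            "net/http/httptrace"
            "sync"
        )

        func fetch(client *http.Client, url string, wg *sync.WaitGroup) {
            defer wg.Done()
            trace := &httptrace.ClientTrace{
                GotConn: func(info httptrace.GotConnInfo) {
                    // Reused == true means this request rode an existing connection.
                    fmt.Printf("reused connection: %v\n", info.Reused)
                },
            }
            req, _ := http.NewRequest("GET", url, nil)
            req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))
            resp, err := client.Do(req)
            if err != nil {
                log.Println(err)
                return
            }
            defer resp.Body.Close()
            io.Copy(io.Discard, resp.Body) // drain so the connection can be reused
            fmt.Println("served via", resp.Proto)
        }

        func main() {
            client := &http.Client{}
            url := "https://www.example.com/" // placeholder; any h2-capable server

            // Prime one connection first, then fire several requests concurrently.
            var wg sync.WaitGroup
            wg.Add(1)
            fetch(client, url, &wg)
            wg.Wait()

            wg.Add(3)
            for i := 0; i < 3; i++ {
                go fetch(client, url, &wg)
            }
            wg.Wait()
        }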

    • bsdetector 10 years ago

      Head-of-line blocking is mostly a theoretical concern, not a practical one. Since browsers use 6+ connections at once, a large slow request will only hold up a few resources and usually doesn't affect the page load speed by much. In fact, as shown recently, interleaving can actually slow down page rendering. Then there's the priority system, an inversion of which caused Google Maps to load far more slowly over HTTP/2. The main problem solved by interleaving responses is when there are 6 slow resources that tie up all the connections, causing a deadlock that only exists because browsers cap the number of connections.

      Google spread a lot of FUD in their push to get SPDY standardized. For instance they never compared to pipelining, which is relevant because Microsoft found that with pipelining HTTP was essentially just as fast. Google's mobile test where they claimed ~40% speedup used 1 SPDY TCP connection for the entire simulated test run of many sites vs new connections per site for HTTP -- a simple mistake? Maybe, but they didn't take any steps to correct it once they were made aware of it.

  • johntb86 10 years ago

    • Ono-Sendai 10 years ago

      From the link

      "HTTP/1.x has a problem called “head-of-line blocking,” where effectively only one request can be outstanding on a connection at a time."

      Does it? Why?

      • JonathonW 10 years ago

        In HTTP/1.1, more than one request can be outstanding at a time, but responses must be delivered by the server in the order the requests were received (this is required by the HTTP/1.1 spec).

        The head-of-line blocking problem actually refers to that behavior-- if I have two requests, one of which is for a 2 MB file and one of which is for a 1 KB file, and I send the 2 MB request first, the server's obligated to completely finish the 2 MB response before it can send the 1 KB file. This isn't great for perceived performance, and runs contrary to how a user expects their web browser to behave (let the small stuff appear first, then let the big stuff finish downloading as it can). HTTP/2 corrects this oversight and allows (through multiplexing) a single connection to behave more like multiple connections would in HTTP/1.1 (multiple transfers can happen simultaneously).
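
        To put rough numbers on it: on a 1 Mbit/s link (~125 KB/s), the 2 MB response occupies the connection for roughly 16 seconds, so a pipelined 1 KB response that could otherwise arrive in a few milliseconds also takes ~16 seconds to show up.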

        • Ono-Sendai 10 years ago

          I agree that the case of 1 large 2MB image and a small 1KB file is a good case for multiplexing.

          What annoys me somewhat is that HTTP/2.0 was basically sold on the back of such demonstrations as 'load 256 identical images'. In that case a proper implementation of HTTP/1.1 with keepalive/pipelining would be just as fast as HTTP/2.0 (or even slightly faster, due to avoiding multiplexing overhead).

          I think it's pretty pathetic that people can't implement this properly, and I find it worrying that the solution to not being able to implement a protocol properly is to replace it with a more complex protocol.

          • umwhat999 10 years ago

            A cool thing about HTTP/2 is that it's really mostly just an optimization for HTTP/1.1.

            Which means that if and when we come up with something better, we can deprecate HTTP/2 support, and support only HTTP/next + HTTP/1.1. This helps avoid legacy cruft.

            Google did that for SPDY with Chrome - it's no longer supported.

            While I'm not convinced that a proper implementation of HTTP 1.1 would actually be just as fast as HTTP 2.0, if you are right, there will be data to show it, and it's not too late for us to change the ecosystem based on that data.

        • kartickv 10 years ago

          While I understand and agree with what you said, HTTP 2 still runs on top of TCP, which means that when you lose a packet, subsequent packets belonging to other responses can't be delivered to the application layer, even though there's no reason to delay them. TCP's strict byte order semantics is the problem.

          If only the HTTP/2 implementation on the receiving side could tell the underlying TCP stack that it wants to opt out of byte order semantics and to deliver data as it's available. That will decrease latency while not requiring any protocol changes on the sending side.

JoelBennett 10 years ago

Despite the calls of "it's not fair!", I can see what he is getting at. From the end-user perspective, it doesn't matter what the underlying technology is: if it provides a faster (and more secure) experience, it's a win.

If there was some huge drawback to using HTTP/2, I can see why people might cry foul, but whether they like it or not, HTTP/2 is coming, so we might as well embrace it.

  • WorldMaker 10 years ago

    Right, users see HTTP and HTTPS, they don't see /1.0 versus /1.1 versus /2.

    (There's a good argument that users seeing HTTP versus HTTPS was a mistake, too, versus just secure/insecure markers in browsers. TLS shouldn't have needed a new port number and URL prefix... but it is way too late to fix that now.)

  • josteink 10 years ago

    > if it provides a faster (and more secure) experience, it's a win.

    HTTP/2 without TLS would be even faster, had it not been for someone with an agenda deliberately saying it should not be supported.

falcolas 10 years ago

IMHO, when AWS uses HTTP/2 and not HTTP/1.1 over TLS, then you can start claiming that https === HTTP/2. Until then, you've moved the goalposts well beyond what most of your peers would consider to be reasonable.

As a side note - what is the standard regarding multiplexing for terminating HTTP/2 proxies: i.e. how much multiplexing could make it across that boundary? Or is that a bridge we haven't crossed yet?
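
As I understand it, most terminating proxies today unwrap each HTTP/2 stream into an ordinary HTTP/1.1 request to the backend, so the multiplexing itself stops at the proxy; the backend just sees pooled HTTP/1.1 connections. A minimal Go sketch of such a proxy (the backend address and cert/key paths are placeholders):

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        // Hypothetical backend that only speaks HTTP/1.1.
        backend, err := url.Parse("http://127.0.0.1:8080")
        if err != nil {
            log.Fatal(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(backend)

        // ListenAndServeTLS enables HTTP/2 on the client-facing side automatically
        // (Go 1.6+). Each h2 stream is handed to the proxy as an independent
        // request and forwarded to the backend over HTTP/1.1, so stream
        // multiplexing does not cross the proxy boundary.
        log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", proxy))
    }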

walrus01 10 years ago

Fun history: does anyone remember 32-bit/33MHz bus PCI crypto accelerator cards? I remember seeing them for sale around 2000/2001 with openbsd driver support.

https://www.google.com/search?num=100&ei=oSGRV9uAHM2OjwOPxZK...

Grue3 10 years ago

Imagine how fast it would be if the browsers didn't artificially restrict HTTP/2 to HTTPS only. Plaintext HTTP/2 would be faster and easier on the CPU.

jonathanoliver 10 years ago

Is it just me or is his HTTPS website down?

bullen 10 years ago

This would be interesting if it compared the energy required in joules!

josteink 10 years ago

Trollpost. Flagged.
