HTTPS hurts users far away from the server

antoine.finkelstein.fr

121 points by antoinefink 9 years ago · 127 comments

jacquesm 9 years ago

The biggest impact you can have on your users' experience is to trim down the number of connections and the size of your pages. Long after that you can start worrying about round-trip times to the server.

This blog post is a nice example: 30 requests (uBlock Origin blocked another 12; with those enabled the time to load increases to a whopping 28 seconds), 2.5M transferred, 7 seconds load time. And all that for a 4K payload + some images.

  • lclarkmichalek 9 years ago

    Number of connections isn't that relevant with HTTP2

    • valarauca1 9 years ago

      Yes it is.

      The number of connections to one host isn't relevant in HTTP2. As uBlock is blocking some ~20 connections, these are going to different hosts. Connecting to a different host in HTTP2 is no different than in HTTP1.1.

      If your HTTP2 is terminating at MANY boxes within your infrastructure you are failing to understand how HTTP2 works. Streams within a single TLS/TCP/IP connection are free; new TLS/TCP/IP connections cost exactly as much as before.

      • majewsky 9 years ago

        Come to think of it, it's pretty ironic. For years, admins have been told to move static content to a different domain to trim down the request size (since browsers won't include the cookies for the main site). Now that might reverse, and it might be best to send all HTTP requests to the same server (for large sites: most likely some load balancer or haproxy or whatever) in order to benefit from HTTP/2 multiplexing.

        • valarauca1 9 years ago

          HTTP/2 really seems more like a front end balancing and caching protocol.

          Handle your TLS/HTTP2 termination on the same boxes and use HTTP1.1 within the internal network.

    • jacquesm 9 years ago

      Unfortunately back here in the real world only about 10% or so of all websites support HTTP/2 so it is very relevant.

      • lclarkmichalek 9 years ago

        Given this article is about how you can improve _your_ response times, I think the author and audience probably have the ability to implement HTTP2.

      • jansenv 9 years ago

        That can be fixed just as easily as reducing the number of requests.

      • dalore 9 years ago

        Adding http2 is easier than reducing the number of requests.

        And in fact, reducing the number of requests using things like sprite maps and bundling JS and CSS is actually an antipattern with http2.

        • jacquesm 9 years ago

          > And in fact, reducing the number of requests using things like sprite maps and bundling JS and CSS is actually an antipattern with http2.

          Those are just ways to lose some of the impact of bloat without addressing the bloat itself.

          If you address bloat directly it will benefit all users.

          • dalore 9 years ago

            If you shard on http2 it will take longer, since now it makes multiple SSL connections which each have to do the SSL handshake.

            With http2 it's all multiplexed into one connection. So you have the one SSL connection, but that one connection has multiplexed streams inside it. And since it's one TCP connection, the TCP sliding window has already opened up, which is actually faster than opening a new TCP connection.
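
            If you want to see the multiplexing for yourself and have the nghttp client from nghttp2 around, give it a few URLs on the same host (example.com and the paths here are only placeholders) - the stats show each request as its own stream on a single connection:

                $ nghttp -ns https://example.com/ https://example.com/style.css https://example.com/app.js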

          • tuxracer 9 years ago

            If you migrate to HTTP2 or move to a host that already supports it then cutting down on "round trips" is an obsolete concern altogether.

            • jacquesm 9 years ago

              HTTP2 does not magically bundle all connections into one. It still very much depends on how you build up your page.

              • floatboth 9 years ago

                It does not magically split one connection into many. One domain == one connection. Sure, it won't undo your "domain sharding" hacks and merge your CDNs, yeah :)

      • viggity 9 years ago

        Yeah, but we're talking about what website operators can do to speed up their site. You can try to dramatically reduce the number of files you need to send. OR, you can just enable HTTP2. HTTP2 seems like a simpler answer.
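
        And checking whether it actually took effect is a one-liner with a curl that's new enough to speak HTTP/2 (www.example.com standing in for your own site); it should print "2":

            $ curl -so /dev/null -w '%{http_version}\n' --http2 https://www.example.com/
            2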

        • hueving 9 years ago

          HTTP2 doesn't magically change the number of hosts you communicate with (CDNs, ad networks, tracking providers, etc) and most importantly, it doesn't reduce the amount of shit developers are piling onto web pages.

alvil 9 years ago

There is also another problem: how much and how often Googlebot indexes your site, because your site speed is one of the factors of the so-called Google crawl budget. My users are in Germany, so my VPS is also in Germany to be fast for local users (~130ms for an HTTP reply), but for the US Googlebot my site is slow (~420ms for an HTTP reply). So you are penalized for this as well.

  • KabuseCha 9 years ago

    Hi - I know some tools report a slow site in this case, but these reports are not accurate - don't believe them! Google is not that stupid! :D

    I am currently working as a dev at an SEO agency (in Austria), and we never believed this hypothesis - so we tested it once with a bunch of our sites:

    When moving sites with a German-speaking audience to a VPS in America, your rankings at google.de/google.at will decrease (slightly - the effect is not that big); the other way around, your rankings will improve (slightly).

    However - even if your rankings improve when moving to America, I would recommend keeping your sites hosted in Europe: the increase in rankings will not offset the decrease in user satisfaction and therefore the decline in your conversion rates.

    • jacquesm 9 years ago

      The whole idea that search engines should take preference over actual users is strange.

    • chatmasta 9 years ago

      That sounds like a really interesting experiment. Kudos for taking a scientific approach to SEO.

      A bit off-topic, but out of curiosity, have you run any other interesting experiments like this? I would love to read a blog post about them.

      • KabuseCha 9 years ago

        Hi, thanks!

        We regularly test different things, but few are as extensive as this "server location test".

        This one was quite easy to do - and to revert - even when doing it for a lot of sites: just duplicate your sites on another continent and change your dns-settings.

        Sadly we do not blog about this stuff, as our customers are not particularly fond of sharing their data - and blog posts without precise data are not useful at all...

        Additionally, most of our assumptions and hypotheses were wrong. So most of our blog posts would sound like:

        "We thought google would work like this, but sadly we were wrong"

        SEOs might like these posts - but potential customers probably not so much :D

    • alvil 9 years ago

      The report tool I use is Google's official webmaster tool (420ms is the average time in "crawl stats").

      And here is the article on budget https://webmasters.googleblog.com/2017/01/what-crawl-budget-...

      • KabuseCha 9 years ago

        Hi, alvil - 420ms does not sound that bad.

        Checking some of my sites I see:

        - values ranging from 320 to 410 for a bunch of German-speaking sites hosted in Europe

        - and values of 221 and 240 for my two English-speaking sites hosted in America (via Firebase - on Google's own infrastructure)

        So if you are concerned with your crawl budget, I think you'd better focus on things like:

        - On-site duplicate content

        - Soft error pages

        - and low-quality and spam content

        Plus you could also get some high quality backlinks.

        And please be aware that the crawl frequency does not directly influence your rankings. So, as long as you do not really have a big problem regarding your crawl budget, you may spend your time more wisely by focusing on other metrics.

        PS: You may already know the tools, but others could be interested:

        If you want to optimize your site's performance with respect to SEO, use the following tools:

        https://developers.google.com/speed/pagespeed/insights/?hl=d...

        The most important aspect is "Reduce server response time" - you have to pass this!

        https://www.webpagetest.org/

        Choose a server near you and aim for a Speed Index of at most 3000 - I personally target 1000, but depending on how much influence you have over the website's frontend you may not be able to achieve this.

        • kuschku 9 years ago

          > Choose a server near you and aim for a Speed Index of at most 3000 - I personally target 1000, but depending on how much influence you have over the website's frontend you may not be able to achieve this.

          For some comparison results: uncached, with my own CMS indexing and analyzing a 6GB database of crashdumps and providing an overview with graphs over that, I get a score of 623. (This is running on a 9€/mo dedicated server.)

        • alvil 9 years ago

          thanks for your suggestions KabuseCha :)

  • hayd 9 years ago

    One way around this might be to use geolocation for DNS: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/rou...

    Although it doesn't help for all types of requests, it has its uses.
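
    A rough way to preview what a geo-aware DNS setup hands back for a given client network is dig's EDNS client-subnet option (example.com is a placeholder and 203.0.113.0/24 is just a documentation prefix, so substitute real values):

        $ dig +short example.com @8.8.8.8 +subnet=203.0.113.0/24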

  • kijin 9 years ago

    Isn't this somewhat compensated for by the extra credit Google gives you for using SSL?

    I would also assume that Google is smart enough to take the physical location of your server into account when calculating how much penalty to apply in which searches. Sites that load fast in Germany should have higher ranks in searches from Germany.

  • zhte415 9 years ago

    Your speeds seem very slow.

    Is this the initial handshake which understandably introduces latency?

    After that, times should be similar. What could be killing users far away is requiring multiple handshakes, because multiple resources that each need their own handshake are being introduced at the same time.

    For reference, I'm physically located in China, so requests have to go through a bunch of filtering-oriented routers, and I get 150-180ms from the US, 200ms from Japan and 180ms from Singapore (yay geography) and around 200-250ms from Europe - these are SSL requests, and not from a connection hub like Shanghai or Shenzhen close to domestic exit points. Double to triple these times for the first handshake.
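
    If you want to split that into TCP handshake vs TLS handshake from your own vantage point, curl's timing variables make it easy (example.com is a placeholder; time_appconnect minus time_connect is roughly the TLS overhead on a fresh connection):

        $ for i in 1 2 3; do curl -so /dev/null -w 'tcp=%{time_connect}s  tls-done=%{time_appconnect}s  total=%{time_total}s\n' https://example.com/; done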

  • jlebrech 9 years ago

    Couldn't you make the indexable portion of your site fast and static, and route users to a relevant local server once they log in?

mfontani 9 years ago

Yup, and that's why for thereg we started using Cloudflare's Railgun… with it, the connection to the servers (hosted in the UK) is "bearable"… without, it's abysmal:

From a VPS in Sydney, with a Good Enough bandwidth:

    root@sydney:~# speedtest-cli 2>&1 | grep -e Download: -e Upload:
    Download: 721.20 Mbits/s
    Upload: 117.89 Mbits/s
… doing the request through Railgun is "quite bearable":

    root@sydney:~# ./rg-diag -json https://www.theregister.co.uk/ | grep -e elapsed_time -e cloudflare_time -e origin_response_time
    "elapsed_time": "0.539365s",
    "origin_response_time": "0.045138s",
    "cloudflare_time": "0.494227s",
Despite our "origin" server being quick enough, the main chunk of time is really "bytes having to travel half the world".

Why does Railgun help? Because this is what a user would get otherwise; the "whitepapers" site is hosted in the UK, and doesn't use Cloudflare or Railgun – it only uses Cloudflare for DNS:

    ./rg-diag -json http://whitepapers.theregister.co.uk/ | grep elapsed_time
    "elapsed_time": "0.706277s",
… so that's ~200ms more, and _on http_.

How much would https add, if it were done without Cloudflare's https and Railgun? That's easy to check, as our whitepapers site has TLS (although admittedly not http/2):

    root@sydney:~# ./rg-diag -json https://whitepapers.theregister.co.uk/ | grep elapsed_time
    "elapsed_time": "1.559860s",
That's quite a huge chunk of time that Cloudflare HTTPS + Railgun just saves/shaves for us. Recommend it highly!

  • pbarnes_1 9 years ago

    Did you try CloudFlare without Railgun?

    That would be interesting.

    • mfontani 9 years ago

      Sure, let's do just that… from the same location in Sydney; the origin server hosting the content is in the UK. This domain is on their "free" plan, as it gets hardly any traffic.

        root@sydney:~# ./rg-diag -json https://thereglabs.com/ | grep -e elapsed_time
          "elapsed_time": "0.863677s",
      
      So that's from Sydney to the UK, with https served by Cloudflare. The webapp serving that isn't the sharpest knife in the drawer, but when tested on localhost it replies in 0.015s – the rest is time taken moving bytes across the world.

          root@sydney:~# time curl -sH 'Host: thereglabs.com' -H 'Cf-Visitor: {"scheme":"https"}' http://THE_ORIGIN_SERVER/ -o/dev/null
          real	0m0.821s
      
      … and this is plain HTTP to the origin server: the free plan is great for offloading HTTPS at basically no cost in time added.

      We've got another domain on the business plan… so let's try that one.

      This is an _image_ request, which is _cached by Cloudflare at the edge_:

          root@sydney:~# ./rg-diag -json https://regmedia.co.uk/2016/11/09/hypnotist_magician_smaller.jpg  | grep elapsed_time
          "elapsed_time": "0.239641s",
      
      Lovely, the "local caching" of their CDN helps a ton!

      … compared to if we were to request the same file from the ORIGIN_SERVER over HTTP:

          root@sydney:~# ./rg-diag -json http://ORIGIN_SERVER/2016/11/09/hypnotist_magician_smaller.jpg  | grep elapsed_time
          "elapsed_time": "0.704458s",
      
      … but our "origin server" _also_ is likely to have the image in the "memory cache"…

      … and that image was likely in their cache; so… let's add a parameter so they _will_ have to ask the origin server:

          $ pwgen 30 2
          Eehacoh2phoo1Ooyengu6ohReWic2I Zeeyoe8ohpeeghie3doyeegoowiCei
      
      There you go… two new randomly generated values…

          root@sydney:~# ./rg-diag -json 'https://regmedia.co.uk/2016/11/09/hypnotist_magician_smaller.jpg?Eehacoh2phoo1Ooyengu6ohReWic2I=Zeeyoe8ohpeeghie3doyeegoowiCei'  | grep elapsed_time
          "elapsed_time": "1.198940s",
      
      Yup, took quite a bit longer than the 200ms it took when the image URL was fully in their cache.

      All in all, from the point of view of being able to _easily_ serve people on the other side of the world with a "good enough" (not great, mind you!) response time, the "standard" Cloudflare plan, the "pro" offering _and specifically_ the "business" offering are just effin AWESOME.

hannob 9 years ago

There are a couple of other things you can do with existing TLS technology that can improve your latency, e.g. use OCSP stapling, use modern crypto so browsers may use TLS False Start, and avoid too many ciphers or unnecessary certs in the chain to make the handshake smaller.

It's a bit older, but here's some info; much of it is still valid: https://istlsfastyet.com/
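
Checking what chain your server actually sends is quick with openssl - ideally the leaf plus one intermediate, not a pile of cross-signed extras (example.com is a placeholder; the exact output format varies by openssl version):

    $ openssl s_client -connect example.com:443 -servername example.com < /dev/null 2>/dev/null | grep ' s:'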

  • citrin_ru 9 years ago

    It is questionable whether OCSP stapling reduces TLS handshake time.

    Without stapling, the browser makes a slow request to the CA, but it caches the result for a long time, so the slow request doesn't happen often.

    With OCSP stapling enabled, more data is transferred between client and server on each TLS handshake.

    The main proponents of OCSP stapling are the CAs, because it saves them bandwidth/hardware.
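
    Either way, it's easy to check whether a given server staples at all (example.com is a placeholder); with stapling enabled you get an "OCSP Response Status: successful" line, otherwise openssl reports that no OCSP response was sent:

        $ openssl s_client -connect example.com:443 -status < /dev/null 2>/dev/null | grep -A3 'OCSP response'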

    • pfg 9 years ago

      Thinking about this a bit, it seems that clients talking to a server with OCSP stapling support could still make use of cached OCSP responses by simply omitting the "status_request" extension in the client hello, which would cause the server not to send the stapled OCSP response. I don't think any clients behave that way today, though.

      I'm not certain how session resumption plays into this either. If OCSP is skipped for resumed sessions as well (which would be my guess), you'd probably not take that small bandwidth hit all that often.

      As an aside, OCSP stapling improves your user's privacy quite a bit as well, by not giving your CA a list of all IP addresses connecting to a domain.
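
      Whether resumption actually kicks in is at least easy to test from the outside: openssl's -reconnect option reuses the session five times, so a count of 5 "Reused" lines means resumption is working (example.com is a placeholder):

          $ openssl s_client -connect example.com:443 -reconnect < /dev/null 2>/dev/null | grep -c '^Reused'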

hedora 9 years ago

Presumably, Cloudflare is up to its ears in NSLs, illegal wiretaps, etc. If you care at all about mass surveillance, censorship, oppressive governments (in the US, or the location of the Cloudflare proxy) you probably should look elsewhere.

It's probably controversial, but I'd love to see a yellow security icon in browsers when sites are using well-known https relays that can see plaintext (or are doing other obviously bad things, like running software with known zero-day exploits, etc.)

  • beardog 9 years ago

    I've seen this argument made a lot lately, and I agree Cloudflare is bad for user privacy. However, adding this warning to browsers by default wouldn't make a lot of sense. Here's why:

    Most websites are on virtual servers (hardware in general) that they do not own. For example, Amazon could easily let the NSA look into your AWS server directly. IMO, the URL lock should just be an encryption auditor. The end website is using acceptable algorithms and has a currently valid certificate? That's good enough.

    Almost any HTTPS site can be forged/"broken" (unless they're using preloaded HPKP), if the attacker has root certificates (or even just a bug in a CA website), which the NSA certainly does.

    Nation-state adversaries just aren't really within the typical TLS threat model. I do concede that it does make agencies' jobs much harder if used correctly, however.

    • lmm 9 years ago

      I agree that CloudFlare with correctly configured HTTPS is no more vulnerable than AWS or really any popular host. All the lock icon confirms is that data is encrypted while it passes over the public Internet; what's happening inside the server at the other end is out of scope.

      CloudFlare's "Flexible SSL" offering means a CloudFlare "https://" site is quite likely to not even have that level of security though. They send supposedly HTTPS data unencrypted and unauthenticated across the open Internet; if that doesn't warrant a yellow/red icon then I don't know what does.

  • jacquesm 9 years ago

    A mini audit along the lines of 'builtwith'.

    Hm. Good idea, why not go a step further and turn the 'no server signatures' advice on its head: full disclosure, server signatures on, in fact, list each and every component in the stack so that end users can (through some plug-in) determine whether or not a site is safe to use.

    Of course nothing bad could ever come from that. /s

    I'm all for making the use of, for instance, Cloudflare more visible so that users know who they are really talking to, but I'm confused about how you'd want to establish what a site is running without giving a potential attacker a lot of valuable information.

    • hedora 9 years ago

      It would have to work without the site's permission, so the browser (or maybe a third party service) would do a basic vulnerability scan. Maybe orange could mean "a script kiddie could pwn this site in under an hour", and yellow would mean "we don't see how your ISP could mitm this, but server-side providers (AWS, Google, Azure, Cloudflare) definitely could."

      FWIW, my personal website uses Let's Encrypt, so it would be yellow or worse.

      Anyway, I like the idea of tying the security color in the url bar to an attacker model, since it at least gets people to think about attack models.

      • 220 9 years ago

        > FWIW, my personal website uses Let's Encrypt, so it would be yellow or worse.

        This shouldn't affect your security stance.

        There's a common misconception that you entrust your private keys to your CA and they can somehow transparently MITM you. But they only have your public key, not your private keys, so they can't do that.

        The security threat from trusted CAs is that they can MITM anyone, regardless of whether you use them or not. BUT the attack isn't transparent, and things like cert pinning are effective in the real world at preventing attacks.

        • StavrosK 9 years ago

          The attack is definitely transparent if you trust the CA that issued the MITM cert.

          • 220 9 years ago

            If you use cert pinning, like in the DigiNotar/Iran/Gmail case, you're still protected against a trusted CA, assuming you've communicated in the past, which is realistic for a real-world attack.

            It's an attack that's difficult to deploy because it's easy to detect if you're looking in the right places, and as soon as it's detected, you know the CA has been compromised, and the attacker loses a large investment.

            • StavrosK 9 years ago

              It's not as difficult to deploy if you only need to target specific users, but I agree with you. The problem with cert pinning is that it's hard to do, because, if you make a mistake, nobody can access your site for quite a long time...

      • chatmasta 9 years ago

        Wait, why does Let's Encrypt allow MITM by the server provider?

        • rocqua 9 years ago

          Domain validation only requires an HTTP response. They can easily MitM that specific response to fraudulently obtain a certificate for your domain.

          • hedora 9 years ago

            Yes. This.

            In particular, the company operating the data center your server is in can reliably do this, and so can the backbone provider they use, and probably the server's local government. The DNS provider that controls your domain can MITM the CA process too (though with a higher chance of detection).

            The argument for making domain validation yellow (and not red) is that domain validation protects against attacks from residential ISPs / coffee shops, and it would also be hard for a foreign government to launch the attack against their own citizens. They basically have to compromise the CA, tamper with your browser, or just randomly break https with "bad certificate warnings".

            Over time, I'd hope more bad security practices (crypto related or not) would lead to yellow bars.

            For instance, Intel secure enclaves help cloud security a lot, but they are still exotic. If they catch on, and you're at a VPS provider that doesn't offer something like that, then you get a yellow bar starting in 2027.

            • pfg 9 years ago

              It's hard to tell whether you're implying this is any different with other validation levels like OV and EV. The validation methods that CAs may use for OV/EV are the same ones they may use for DV. The only difference is that they also validate the organization's information of the certificate requester. In other words, someone with the ability to MitM traffic between the CA and the target's domain could still obtain an OV/EV certificate for that domain; they'd just have to verify they are who they claim they are on top of that (and they don't have to be the domain owner - there's nothing that says "only the legal entity owning the domain may acquire a certificate for that domain").

              • nandhp 9 years ago

                > The only difference is that they also validate the organization's information of the certificate requester.

                Right, but if I go to PayPal.com and the address bar says "Comcast [US]" or "National Security Agency [US]", I'll know something's up.

            • rocqua 9 years ago

              Most of these issues aren't there with DNS-based validation (presuming DNSSEC).

              Though that just shifts the potential problem towards anyone in the DNSSEC chain of trust. Most notably, the controller of the TLD (most often a government) and the registrar.

              Thing is, those are also an issue with normal (HTTP-based) DV. After all, they could also change the A record for the domain they want a cert for.

              I believe the current solution to all this is focused on detection rather than prevention (through Certificate Transparency and similar proposals). The idea being that any organization that isn't trustworthy will only get to pull off this hack once, in a short time frame before having their trust revoked.

          • 220 9 years ago

            Is there a risk model where you control the network enough to fake domain validation but only if the target initiates the request to Let's Encrypt?

            Otherwise it doesn't matter if you use Let's Encrypt, as the attacker could just initiate the validation regardless of your CA and end up with a valid certificate (which would still fail cert pinning).

            Edit: Oh I see, it's a more about if DV should ever be green.

          • chatmasta 9 years ago

            Got it. But this only happens once when you apply for the cert (or renew it), correct?

            • rocqua 9 years ago

              I'm not sure if Let's Encrypt (or other CAs) are willing to give out multiple DV certs for the same domain. I'd guess they do, so anyone who can MitM traffic between you and the CA can get a DV cert.

  • throwawaysed 9 years ago

    Cloudflare is unquestionably a source of pure, unencrypted traffic for the govt.

    Does anyone remember a few years ago when Google found out through leaks that the govt was wiretapping its private traffic between datacentres?

    What makes you so naive to think that the govt isn't sniffing every single page on cloudflare?

    • Bartweiss 9 years ago

      A 'counterpoint', such as it is. What makes you think that isn't happening to any 3rd party host you can name? Why single out Cloudflare as adding risk to sites that are hosted on AWS already?

      The risk here is real, but it's much more pervasive than one data handler.

      • jacquesm 9 years ago

        You seem to misunderstand how Cloudflare works. They allow an insecure host to pose as a secure one, and the traffic between Cloudflare and the insecure host is not encrypted.

        That problem would not exist on 'any 3rd party host'.

        • manigandham 9 years ago

          CF is the same as any other CDN with TLS termination. Every host that provides a load balancer, or a server, or some other internal network connection like a VPN, can be compromised. Cloudflare is nothing special in this regard.

  • manigandham 9 years ago

    The entire internet is built upon thousands of layers. There are so many vectors of entry that no "default warning" would ever suffice.

    If your risk profile is outside the boundaries of normal internet use then you likely already know what to do - and we now have a multitude of tools for more private communications.

  • apeace 9 years ago

    > Presumably, Cloudflare is up to its ears in NSLs, illegal wiretaps, etc. If you care at all about mass surveillance, censorship, oppressive governments ... you probably should look elsewhere.

    This analysis seems flawed. If you care about mass surveillance, you want their top-tier security and legal teams working for you.

    • dublinben 9 years ago

      Their security and legal teams don't work for you, they work for the company. The company can be drafted to work for the government. Just because you pay them, does not mean they actually work for you.

      • dickbasedregex 9 years ago

        Or even that their interests and yours are even remotely aligned.

        They sell you a widget/service. That's where your relationship ends.

    • dickbasedregex 9 years ago

      That really seems like wishful thinking.

      In an ideal world you'd want YouTube's (Google's) legal teams working for you, protecting you against DMCA abuses and the like... but they don't. Not unless you're a top-tier YouTuber, and even then it's a laughable dice roll as to whether they feel the bad press is worth their time to do anything.

      And from a bottom-line perspective, I don't believe the shareholders think it's worth their time to do anything more than provide a platform (no matter how problematic) and market it.

      Will Cloudflare do everything they can to keep your content accessible? Sure. Anything above and beyond that? lol. Good luck with that...

sp332 9 years ago

If you're worried about a proprietary solution, you could host your own cache server in Australia or wherever your customers are having trouble.

  • problems 9 years ago

    Yeah, at a $200/mo cost you could spin up a few VMs on DigitalOcean, Vultr or Lightsail, which have decent bandwidth, and cache from there.

    The nice part about Cloudflare, though, is that they can use anycast to determine location and then send the closest server IPs. For sub-$200/mo you're not able to do that; you'd have to find a provider that could do it for you, and I'm not sure anyone offers country-based anycast DNS alone.

    EDIT: Looks like easyDNS enterprise may be able to do it, https://fusion.easydns.com/Knowledgebase/Article/View/214/7/... for about $12.75/mo too. Might be a decent way to brew your own mini caching CDN for fairly cheap.

    • jdub 9 years ago

      You can also use Route 53 for the same purpose, for a tiny premium on the standard rates for name resolution. (See latency based routing queries and geo DNS queries below, plus health checks for failover.)

      https://aws.amazon.com/route53/pricing/

    • manigandham 9 years ago

      > anycast to determine location and then send the closest server IPs

      Anycast doesn't determine location or send the closest IPs; it's all the same IP address, announced using BGP (Border Gateway Protocol) to automatically route to the closest (in network travel) server.

      • problems 9 years ago

        Of course. Let me clarify - they use anycast DNS to send the closest CF caching proxy's IP.

        • manigandham 9 years ago

          That's not how it works. Both their DNS and reverse proxy servers use anycast IPs without any DNS-based routing.

          They did recently release a feature called Traffic Manager that lets you control the origin server based on geo. If you just need geo-balanced DNS though, AWS Route 53, Azure Traffic Manager, and NSOne offer DNS-based routing.

          • problems 9 years ago

            Really? I didn't think they'd do anycast on their reverse proxy servers; that seems risky to me (i.e. a TCP connection changing from one server to another due to a BGP change), but I suppose the odds are fairly low.

            I seem to remember getting different IPs from different locations, but it could just be random or I could be mistaken.

            EDIT: Tried now and it seems I'm getting the same IPs from Canada and Australia, so you are indeed correct.
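
            (For anyone else who wants to check: query a couple of different public resolvers and compare the answers - with anycast you'd expect the same addresses back from both. example.com is a placeholder.)

                $ dig +short example.com @8.8.8.8
                $ dig +short example.com @208.67.222.222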

Kiro 9 years ago

I don't understand why I need to use https on a static marketing webpage. No login stuff, no JavaScript, nothing. Just straight up HTML and CSS. Right now I need to pay about $150 every year for something that's only used to satisfy Google PageRank (I can't use LetsEncrypt with my hosting provider). Why?

  • eganist 9 years ago

    Keeping it extremely high level:

    Among other reasons, not encrypting traffic gives bad actors an opportunity to replace content in transit to your end users when those users are on compromised connections, such as rogue "free" wifi networks in airports or coffee shops, or even legitimate networks which have in some way been compromised, e.g. the ISPs of the world who decide to inject other content (such as their own ads) into unencrypted traffic.

    The next question is usually "what could they possibly do, change a few pictures?"

    They could inject malicious payloads, and for all your users would know, it would appear to them that it came from your site.

    > I can't use LetsEncrypt with my hosting provider

    Consider switching. For a static site, consider GitLab; they do a good job of supporting Let's Encrypt.

    ---

    I sincerely appreciate the question, though. I have marketing people ask me this question all the time in private who hesitate to do so in public because quite a few security types berate them for not doing something "obviously" more secure. It's not at all obvious to most of the world's web designers and content creators that a static site should be TLS'd until it's framed (heh) in this manner. The fact that you asked brings about a massive educational moment.

    Anyway, consider switching hosts. :)
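
    (If you do end up on a host that gives you shell access and a web root you control, the switch is roughly one certbot invocation - the paths and domains below are placeholders:)

        $ certbot certonly --webroot -w /var/www/example -d example.com -d www.example.com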

    • cmdrfred 9 years ago

      May I add an example? Let's say you are a drug company and you offer a number of different drugs. With TLS, I only know that you are interested in a drug that company produces, or in the company itself; without it, I know you or someone you care about has erectile dysfunction.

      • spand 9 years ago

        No, that is not all an attacker could know. TLS does not provide confidentiality of the number of bytes transmitted. So in your example, an attacker would only have to crawl the public website and find the pages that match the size of the ones you have been browsing.

    • libeclipse 9 years ago

      Using Netlify with GitHub Pages is extremely fast because of their CDN, gets an A+ on SSL Labs, and is free.

    • cat199 9 years ago

      Has Google disclosed all investments in CA providers?

      I don't know the answer myself here... there are good technical reasons, I agree...

      but it is a logical fact that if Google search was always 100%, there would be no need for AdWords and site ads...

      • pfg 9 years ago

        Google is a platinum sponsor for Let's Encrypt, which is slowly taking away market share from almost all commercial CAs[1]. They've also removed special treatment for EV certificates on mobile browsers (and are regularly thinking out loud about doing the same for their desktop browser), taking away most of the incentive for using a commercial CA (and not a free DV CA like Let's Encrypt). There's probably also a good chance that they'll offer something like Amazon's ACM (free certificates for various AWS services) as part of their Google Cloud offerings with their newly-acquired roots[2].

        I think we can safely say that this would be a very weird way to go about earning a few bucks through CA investments.

        [1]: https://w3techs.com/technologies/history_overview/ssl_certif...

        [2]: http://pki.goog/

  • riobard 9 years ago

    Here's why: Many ISPs hijack HTTP connections and inject ads and tracking JS into the page. If you don't use HTTPS, your page is screwed.

    The Internet is not a safe place. We should aim for HTTPS EVERYWHERE.
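
    A crude way to spot in-flight tampering from a network you don't trust is to fetch the same page over http and https and diff the two (example.com is a stand-in, and this assumes the site serves the same document on both); any injected markup shows up immediately:

        $ diff <(curl -s http://example.com/) <(curl -s https://example.com/)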

    • RadioactiveMan 9 years ago

      This is a really good point. Usually we talk about protecting against a third party, but the far more ordinary use case is protecting against the adversary right on the other side of your router.

    • JimDabell 9 years ago

      Also transcoding images to be terrible quality. If you care about your images not looking like crap, you should serve them over HTTPS.

    • snug 9 years ago

      > Many ISPs

      I think that's a bit sensationalist.

      • STRML 9 years ago

        Verizon, Comcast, and Rogers have done it, that we know of. In North America that's a very large proportion of traffic.

      • ucho 9 years ago

        Maybe it is even worse when it is just a few - people won't know that the website creator isn't responsible for all of its content. And sometimes it is hard to know who the culprit is, like in https://news.ycombinator.com/item?id=12091900 .

        Is there any solution other than totally killing HTTP that protects from HTTPS stripping attacks? HSTS won't protect the first visit, and HSTS preload lists can only be so large.
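
        For what it's worth, checking that a site at least sets the HSTS header properly (max-age, includeSubDomains, preload) is a one-liner (example.com is a placeholder):

            $ curl -sI https://example.com/ | grep -i strict-transport-security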

      • mrkurt 9 years ago

        It's not; it happens at a ton of coffee shops, on airplanes, etc. Probably not the ISPs you buy home internet from, but there are a lot of them.

      • afandian 9 years ago

        Vodafone in the UK did this to me.

        • chatmasta 9 years ago

          Vodafone is the worst. Although it's really the U.K. surveillance state that is the problem.

          When I popped my SIM into my iPhone it forced me to download a configuration profile with a self-signed Vodafone cert, which means they can MITM any connection. I think this is required by the government so they can block adult websites by default? (I've also seen torrent websites fail silently with misleading "server not found" errors.)

          I haven't looked into whether they're doing the filtering via DNS or MITM, but I avoid the censorship by connecting to a VPN.

  • Nullabillity 9 years ago

    Yeah, why are you using a bad hosting provider?

    • djsumdog 9 years ago

      When I was in the EU, I saw both Vodafone and Three inject banners at the top of websites in various countries.

      They're not as easy to get away from as you think.

    • Kiro 9 years ago

      Legacy and one of the only hosting providers in my country. I don't want to risk worse localized SEO by hosting it outside.

      • RossM 9 years ago

        I've been reading up on localisation SEO lately - as far as I understand, Google only uses "server IP is located in X country" as an indicator of which country a site might be localised for, if it can't get any better information.

        For example, if you're using a ccTLD for the domain, or if it's a generic-TLD and you declare a country in Google Webmaster Tools, that will be a much stronger weighting.

        Of course, if that's wrong I'd love to know!

        • Kiro 9 years ago

          Thank you. The actual web app is hosted on Linode in Frankfurt (using LetsEncrypt for https) so maybe I should host the marketing page there as well if that's true.

  • mrkurt 9 years ago

    Most of the answers you're getting aren't that big of a deal for your site. You still might want https though.

    You should think about https for sites like yours the way you think about vaccines. SSL everywhere makes everyone safer, even though it doesn't have a tremendous impact on your own site.

    Also, shameless plug, if you want really easy SSL you can use our new startup: https://fly.io. I'm not sure what country you're in, but we have a bunch of servers all over to help make it fast. :)

  • Gurrewe 9 years ago

    If you have a marketing webpage, you might have a link to signup or login pages. If someone can hijack the index page, they'll also be able to hijack those links.

    • mikeash 9 years ago

      Even if you don't have signup or login pages, a MITM attacker can add them. Or they could add a "buy now!" link with a convenient entry form for the user's bank details. The relevant question isn't what your page has that's so important, but rather what an attacker could make it have that would cause trouble.

  • rocqua 9 years ago

    Two reasons. The first is practical: integrity. HTTPS guarantees the site your visitors see is the site you sent them.

    The second is more moral. Making https the default means more and more of the web will be encrypted and authenticated. This is a good thing.

  • dalore 9 years ago

    Why use SSH over telnet?

c0nfused 9 years ago

It seems to me that it is worth considering that HTTPS is not always a panacea. We should think about two things.

First that almost every firewall out there right now supports https snooping via MITM. Example: https://www.paloaltonetworks.com/features/decryption

Second, I just got back from rural China, where most unblocked American webpages take between 5 and 15 seconds to load on my mobile phone; many of them take upwards of a minute to load fully. This seems to be a fun combo of network latency, smaller than expected bandwidth, and pages using javascript with a series of different load events to display content. That DOMContentLoaded -> XMLHttpRequest -> onreadystatechange chain can add some serious time on a 500ms round trip, and that's without talking about the css, the images, and the javascript.

I forgot to pay my electric bill before I flew out, and it took me nearly an hour to log in, pay the bill, accept the terms, and confirm payment. I was not a happy camper.

It seems to me that while https is a very good thing, in some cases http and low-bandwidth solutions might be worth implementing. You might actually want to tailor this to your audience: no one in their right mind is going to waste 5 minutes loading your web page. If they are so desperate that they need to wait, they are going to hate you every minute they do it.

  • rocqua 9 years ago

    > First that almost every firewall out there right now supports https snooping via MITM. Example: https://www.paloaltonetworks.com/features/decryption

    Seems prudent to mention that this requires the cooperation of the client being MitMed. Specifically, the client needs to install a root certificate.

  • magicalist 9 years ago

    > I forgot to pay my electric bill before I flew out, and it took me nearly an hour to log in, pay the bill, accept the terms, and confirm payment. I was not a happy camper.

    That sucks but I don't see how having a site where you may have to enter payment information on an unsecured connection would be a solution.

  • jacquesm 9 years ago

    > This seems to be a fun combo of network latency, smaller than expected bandwidth, and pages using javascript with a series of different load events to display content.

    You forgot about the great firewall of China playing merry MITM with your connections.

  • chatmasta 9 years ago

    Is there an easy way to pipeline those requests over one TCP connection? Or is that only possible with http/2?

    I wonder if it would be lower latency to open a single websocket tunnel on page load and download assets over the tunnel. Although at that point I suppose you're just replicating the functionality of http/2.
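
    (Plain HTTP/1.1 keep-alive at least reuses one TCP/TLS connection for sequential requests, even if it doesn't interleave them the way http/2 does - curl makes the reuse visible when you hand it several URLs on the same host; example.com and the paths are placeholders:)

        $ curl -sv -o /dev/null -o /dev/null https://example.com/ https://example.com/other 2>&1 | grep -i 're-using'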

SEMW 9 years ago

Funny coincidence, I was running into this exact issue earlier today. Had a customer complain about high response times from even our /time endpoint (which doesn't do anything except return the server time) as measured by curl, and it turns out it was just the TLS handshake:

    $ curl -o /dev/null -s -w "@time-format.txt" http://rest.ably.io/time
    time_namelookup:  0.012
       time_connect:  0.031
    time_appconnect:  0.000
    time_pretransfer: 0.031
         time_total:  0.053

    $ curl -o /dev/null -s -w "@time-format.txt" https://rest.ably.io/time
    time_namelookup:  0.012
       time_connect:  0.031
    time_appconnect:  0.216
    time_pretransfer: 0.216
         time_total:  0.237
(as measured from my home computer, in the UK, so connecting to the aws eu-west region)
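
(For anyone wanting to reproduce this, time-format.txt is just a curl -w template along these lines:)

    time_namelookup:  %{time_namelookup}\n
       time_connect:  %{time_connect}\n
    time_appconnect:  %{time_appconnect}\n
    time_pretransfer: %{time_pretransfer}\n
         time_total:  %{time_total}\n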

Luckily it's not that much of an issue for us, as when using an actual client library (unlike with curl) you get HTTP keep-alive, so at least the TCP connection doesn't need to be renewed for every request. And most customers who care about low latency are using a realtime library anyway, which just keeps a websocket open, so it sidesteps the whole issue. Certainly not enough to make us reconsider using TLS by default.

Still, it's a bit annoying when you get someone who thinks they've discovered with curl that latency from them to us is 4x worse than to PubNub, just because the PubNub docs show the http versions of their endpoints, whereas ours show https, even though we're basically both using the same set of AWS regions...

andreareina 9 years ago

One round trip over the course of the time that the user is using the same OS/browser installation isn't much.

The Cloudflare Railgun is an interesting solution, and one that could be implemented in the context of an SPA over a websockets connection. Or conceivably some other consumer of an API.

filleokus 9 years ago

A related interesting topic is the possibility of secure cache servers that don't break the secure channel, using "blind caches". Currently just an RFC draft and probably a long time from mass adoption, but nevertheless interesting.

https://tools.ietf.org/html/draft-thomson-http-bc-00, and Ericsson's article on it https://www.ericsson.com/thecompany/our_publications/ericsso...

nprescott 9 years ago

I really enjoyed the coverage of the same topic in High Performance Browser Networking[0]. It effectively explains the key performance influencers across various networks without being boring.

[0]: https://hpbn.co/

aanm1988 9 years ago

> In our case at Hunter, users were waiting on average 270ms for the handshake to be finished. Considering requests are handled in about 60ms on average, this was clearly too much.

Why? Did it hurt user engagement? Were people complaining the site was slow?

  • kuschku 9 years ago

    The question is "is this the best we can do?".

    If it’s no, then clearly we should improve.

    • aanm1988 9 years ago

      By spending time and effort on things that may not actually matter?

      To their credit, this post talks about improving the performance, instead of just using it to complain that they can't use https because of a difference in a metric that may or may not actually cause end users any pain.

      • kuschku 9 years ago

        You do realize that every latency you see on residential connections is magnified on mobile or even satellite connections?

        If you have an RTT of seconds, 10 additional round trips can cost an entire minute.

        These things become very noticeable very quickly if the system is in non-perfect environments.

        Additionally, even with modern browsers it takes far too long to open a website — it should be 100% instant, less than one or two frames (16 or 33ms). That's not possible, as RTT is usually around 18ms between users and CDN edges, but at least it should be below perceivable delays (100ms).

        EDIT:

        My best websites hover around 281ms to start of transfer, and 400ms to the site being finished.

        That's improvable, but most sites out there take literally half a minute to load.

        Now go on a 64kbps connection and try again. The handshake takes seconds, the start of transfer comes after almost 30 seconds, and by the time your website arrives your coffee has gone cold (a few minutes for a Google search).

        Years ago, Google was usable on dial-up. Now even the handshake takes as long as an entire search used to take. Notice anything?

chatmasta 9 years ago

What's wrong with the cloudflare free plan? You can host a static site on github pages with a custom domain and use the free cloudflare SSL cert.

  • amiraliakbari 9 years ago

    The "Railgun" feature mentioned in the article is only available in some paid plans. Using the free plan wouldn't keep an open connection between your servers and Cloudflare's. It does improve the situation by terminating users' handshakes early, using better links, warm DNS cache, etc. among servers. But the latency hard limit is still present between your server and CF. Skipping https between your server and CL is not an option either for any site transferring user data.

    • chatmasta 9 years ago

      Ah, I see. I did not realize that. Accordingly, I edited my comment to be less inflammatory. :)

      I understand that by using the generic CF free cert, https terminates at CF and the connection CF->Origin is over unencrypted HTTP. Is this why there is latency overhead? Because CF cannot connect to origin via https so it cannot open a persistent tunnel? Or is it because the overhead of keeping an open https tunnel per origin server is prohibitively expensive to maintain for every free customer?

      I assume that even though there is no persistent tunnel, CF must still use persistent TCP connections?

      • floatboth 9 years ago

        Maybe the cost is not completely prohibitive, but they do consider it a premium feature. Have to earn money :D

  • andrewaylett 9 years ago

    Without Railgun, there's no guarantee that the CloudFlare nodes will have an open socket to your origin server, so your visitors may still have to pay the cost of the round-trip.
