Where Am I? NYTimes or Google?

theinternetbytes.com

1130 points by rwoll 6 years ago · 380 comments

superasn 6 years ago

Yes, this has been a big issue for a very long time now. Google wants to push a release where it will display the hostname of the AMP site even if the content is being served from google.com[1].

Mozilla (and Apple) are strictly against it and thank god for Mozilla. If Google had a bigger market share this would already be something we would have been living with. I'm sure there are better sources for this, but here is the first result:

https://9to5google.com/2019/04/18/apple-mozilla-google-amp-s...

  • realusername 6 years ago

    Being completely against AMP for obvious reasons, I'm personally not against signed exchanges themselves. This feature could spawn a whole new class of decentralised and harder-to-censor web hosting, which sounds like a great addition.

    • jhhh 6 years ago

      It's also going to spawn a whole new class of semi-persistent malicious pages (say, created via XSS) that, once signed and captured, can be continuously replayed to clients until expiration.

      • CGamesPlay 6 years ago

        What? The signing allows the content to be mirrored in other locations with guarantees about consistency. It doesn't imply anything more about the content than SSL does.

        • 8organicbits 6 years ago

          If Google AMP acts as a cache of content, then cache-poisoning attacks are a concern. How those cached items expire determines how long an attacker who poisons the cache can serve malicious content.

          • entire-name 6 years ago

            At that point, wouldn't the approach be to defend from the client side? Namely, we can instruct the client not to trust any content signed by such-and-such keys. This can be done by pushing out a certificate revocation, etc.

            • judge2020 6 years ago

              This would be pretty cool (remotely revoking signed exchanges); however, it's not part of Google's proposal. Unless every previous security consideration about caches is accounted for in signed exchanges, it's probably not safe to start faking the URL bar.

            • gregable 6 years ago

              Certificate revocations do apply to signed exchanges.

          • CGamesPlay 6 years ago

            Why does Archive.org get a pass on this one? Signed responses mean that there's a very clear way to leverage the browser's domain blacklisting technology to stop the spread of malware, which isn't presently possible for any content mirrors on the web.

            • 8organicbits 6 years ago

              Archive.org makes it clear you are on archive.org. The URL shows archive.org. The page content shows archive.org at the top. [1]

              Google AMP doesn't show Google on the page. Google is pushing for the URL to show the origin site's URL instead of Google[2].

              If an attacker poisons a nytimes.com article served by Google AMP, how does a browser's domain blacklisting help? Block google? Block nytimes.com? Neither makes sense.

              1. https://web.archive.org/web/20050401090916/http://www.google...

              2. https://9to5google.com/2019/04/18/apple-mozilla-google-amp-s...

              • CGamesPlay 6 years ago

                I believe you might be misunderstanding the idea behind signed exchanges. To be clear, Signed Exchanges are how AMP should have worked all along.

                example.com generates a content bundle and signs it. Google.com downloads the bundle and decides to mirror it from their domain. Your browser downloads the bundle from google.com, and verifies that the signature comes from example.com. Your browser is now confident that the content did originate from example.com, and so can freely say that the "canonical URL" for the content is example.com.

                Malicious.org does the same thing, and the browser spots that malicious.org is blocked. At this point it doesn't matter if the content came from google.com, because the browser knows that the content is signed by malicious.org and so it originated from there.

                Hope this helps clarify. Obviously blacklisting isn't a great security mechanism; my point is just that signed exchanges don't really open any NEW vectors for attack.
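
A toy sketch of the flow described above. This is illustrative only: real signed exchanges use certificate-based asymmetric signatures over HTTP exchanges, whereas here an HMAC with a publisher-held key stands in for the signature, and the names (`sign_bundle`, `verify_bundle`) are invented. The point it demonstrates is that verification depends on the publisher's key, not on which host delivered the bytes.

```python
import hashlib
import hmac

def sign_bundle(publisher_key: bytes, url: str, body: bytes) -> dict:
    """The publisher (example.com) signs its own content."""
    payload = url.encode() + b"\x00" + body
    sig = hmac.new(publisher_key, payload, hashlib.sha256).hexdigest()
    return {"url": url, "body": body, "sig": sig}

def verify_bundle(publisher_key: bytes, bundle: dict) -> bool:
    """The browser checks the signature; it does not matter which host
    (google.com, a peer, a USB stick) actually delivered the bytes."""
    payload = bundle["url"].encode() + b"\x00" + bundle["body"]
    expected = hmac.new(publisher_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, bundle["sig"])

key = b"example.com-signing-key"
bundle = sign_bundle(key, "https://example.com/article", b"<h1>Hello</h1>")

assert verify_bundle(key, bundle)            # authentic, whoever served it
bundle["body"] = b"<script>evil()</script>"  # a mirror tampers with the body
assert not verify_bundle(key, bundle)        # verification now fails
```

Tampering by the mirror breaks verification, which is why the browser can attribute the content to example.com regardless of the serving host.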

                • remexre 6 years ago

                  I think the concern was more that if I can XSS example.com, Google is now serving that for some period of time after example.com's administrators notice + fix this. (In the absence of a mechanism to force AMP to immediately decache the affected page(s), that is.)

                • 8organicbits 6 years ago

                  I'm following.

                  Imagine that example.com builds the bundle by pulling data from a database. If an attacker can find a way to store malicious content in that database (stored XSS) and that content ends up in a signed bundle that Google AMP serves (similar to cache poisoning), then users will see malicious content. When the stored XSS is removed from the database, Google AMP may continue to serve the malicious signed bundle. So an extra step may be needed to clear the malicious content from Google AMP.

                  How exactly the attacker influences the bundle is going to be implementation dependent, so some sites may be safe while others are exploitable.

                  • joshuamorton 6 years ago

                    Signed exchanges can only serve static content, so it's not clear what you could do maliciously.

                    • thw0rted 5 years ago

                      I think most of the comments in this thread mean "malicious" in the sense of injecting malware (say, a BTC miner) or a phishing attack or something into the signed-exchange content. However, you also have to consider that the content (text, images) itself could be "malicious", in the sense of misinformation.

                      If, purely as a hypothetical, Russian operatives got a credible propaganda story posted on the NYT website 24 hours before the November elections, and an AMP-hosted version of it stayed live long after the actual post got removed from nyt.com, I'd certainly call that "malicious". Of course, just like archive.org, I suspect that in a case as high-profile as that, you'd see a human from the NYT on the phone with a human at Google to get the cached copy yanked ASAP, but maybe on a slightly smaller scale the delay could be hours-to-days, which is bad enough.

                    • fomine3 6 years ago

                      XSS?

        • vinay_ys 6 years ago

          Along with signing, we need explicit content cache busting and an explicit allowed-mirrors list (which can be revoked instantly). Then it would be on par with TLS + current cache-busting mechanisms on top of TLS.

          • londons_explore 6 years ago

            As long as javascript on the page has some way to inspect the signatures and where it was delivered from, you can implement cache busting, allowed mirrors, and invalidation yourself however you please.

      • gregable 6 years ago

        Not continuously. The signed content includes an expiration date, which the publisher controls.

        This expiration can also never be set more than 7 days in the future.
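
A rough sketch of the expiry rule described above, assuming a client that simply compares the current time against a publisher-chosen expiration capped at 7 days. The function names are invented; the actual spec validates signatures over HTTP exchanges, not this simplified record.

```python
from datetime import datetime, timedelta, timezone

MAX_LIFETIME = timedelta(days=7)  # the spec caps signature lifetime at 7 days

def make_expiry(now: datetime, requested: timedelta) -> datetime:
    """Publisher picks a lifetime; anything over the cap is rejected."""
    if requested > MAX_LIFETIME:
        raise ValueError("signature lifetime may not exceed 7 days")
    return now + requested

def is_valid(now: datetime, expires: datetime) -> bool:
    """Client-side check: an expired exchange must not be rendered."""
    return now < expires

now = datetime(2020, 1, 1, tzinfo=timezone.utc)
expires = make_expiry(now, timedelta(days=2))

assert is_valid(now + timedelta(days=1), expires)      # still servable
assert not is_valid(now + timedelta(days=3), expires)  # replay window closed
```

This is the property the downgrade concern hinges on: a captured signed response can only be replayed until it expires, so shorter lifetimes shrink the attack window.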

      • realusername 6 years ago

        I don't see how this tech would further help create malicious pages via XSS; any thoughts? It sounds like it's the same issue with or without signed exchanges.

        • jhhh 6 years ago

          The point wasn't that this technology would uniquely enable XSS attacks, but rather that it could allow malicious actors to persist particular attacks for the duration of the validity of the signed content. Any brief vulnerability in a website now becomes serializable. They considered this already. Look in the draft "6.3. Downgrades":

          "Signing a bad response can affect more users than simply serving a bad response, since a served response will only affect users who make a request while the bad version is live, while an attacker can forward a signed response until its signature expires. Publishers should consider shorter signature expiration times than they use for cache expiration times."

          • realusername 6 years ago

            I see, indeed. I don't think they're taking the right approach here; there should be an automatic way to upgrade signed content / check for updates. Short signatures just destroy the benefits of the feature.

            • foepys 6 years ago

              It's the only way to do it. TLS has shown that OCSP and the like do not add significant security, and that short certificate expiration is the only way to go.

              The serving nodes are not necessarily under the control of a well-intentioned party that complies with upgrade requests.

              • ehsankia 6 years ago

                And I don't see the issue with short expiry. The point of a cache is to reduce load, not to entirely eliminate it. Even with a 5m expiry, it's still 5 orders of magnitude better than having a 100+ QPS on your server.

        • sk5t 6 years ago

          Parent considers that the feature could be used to turn a temporary problem into a long-term problem. Sort of like how certificate pinning could be twisted to ransom an entire domain.

    • grawprog 6 years ago

      What you describe sounds a lot like URL spoofing, which is already pretty much used entirely to trick and scam unsuspecting users into clicking malicious links. Signed exchanges would just be an even harder to detect version of this.

      • henryfjordan 6 years ago

        Signed exchanges make it super easy to detect fraud. You can verify the signature...

      • jefftk 6 years ago

        No, with signed exchange your browser verifies that the original site really did produce the content you are viewing.

    • ddevault 6 years ago

      No... no, it would not. It would centralize web hosting and make it less censorship resistant.

      Sure, you could move it somewhere else and have it show up in the address bar the same, but the actual URL has changed and you need to somehow get the new URL into people's hands. And ultimately you've centralized a lot of websites under a smaller number of service providers which, before, would have been on their own domains.

      • ehsankia 6 years ago

        I'm not sure what you describe there but it sounds much more complicated than it is.

        Aren't signed exchanges basically a CDN without having to set up DNS? In theory it's no different than using Cloudflare to serve your content, except any CDN can serve it without you giving them access to your domain.

        • fabrice_d 6 years ago

          Or you could store signed exchanges on platforms like Dat or IPFS and get real decentralization.

          • ehsankia 6 years ago

            Right, the point is that anyone can serve the content through any platform, which is why it allows decentralization. I just don't understand why people are hanging on so tightly to the idea that a URL is a direct path to a server, because that just isn't true.

    • dependenttypes 6 years ago

      > this feature could spawn a whole new class of decentralised and harder to censor web hosting

      How so?

  • Spivak 6 years ago

    I don’t really think Google’s plan is that weird. And it would be amazing for decentralized networks, archiving, and offline web apps. Google can’t just serve nyt.com — they can serve a specific bundle of resources published and signed by nyt.com verified by your browser to be authentic and unmodified.

    • mulmen 6 years ago

      How does centralizing content on Google from multiple sources improve decentralization? The web is already decentralized. That's why it is a web.

      AMP is a scourge. It's a bad idea being pushed by bad actors.

      • capableweb 6 years ago

        The current implementation of the AMP cache servers obviously doesn't help the decentralization.

        I think what Spivak is saying, though, is right. If we could move from location addressing (DNS+IP) to content addressing, but not via the AMP cache servers, then in general anyone could serve any content on the web. Add in signing of the content, and now you can also verify that content is coming from NYTimes, for example.

        Also, I'd say that the internet (transports, piping, glue) is decentralized. The web is not. Nothing seems to work with each other and most web properties are fighting against each other, not together. Not at all like the internet is built. The web is basically ~10 big silos right now, that would probably kill their API endpoints if they could.

        • fauigerzigerk 6 years ago

          I think this would require an entirely new user interface to make it abundantly clear that publisher and distributor are separate roles and can be separate entities.

          I don't think this should be shoehorned into the URL bar or into some meta info that no one ever reads hidden behind some obscure icon.

          • ehsankia 6 years ago

            Isn't it already the case though with CloudFlare and other CDNs serving most of the content? Very few people really get their content from the actual source server anymore.

            • fauigerzigerk 6 years ago

              That's a good point. I just feel that there is an important distinction to be made between purely technical distribution infrastructure like Cloudflare's and the sort of recontextualisation that happens when you publish a video on Youtube. I'm not quite sure where in between these two extremes AMP is positioned.

        • mulmen 6 years ago

          Thank you for this explanation. AMP has put a really bad taste in my mouth but what you describe here does have some interesting implications. Something to consider for sure.

      • logicalmonster 6 years ago

        Please fact check me on this, but the ostensible initial justification for AMP wasn't decentralization, but speed. Businesses had started bloating up their websites with garbage trackers and other pointless marketing code that slowed down performance to unbrowsable levels. Some websites would cause your browser to come close to freezing because of bloat. So Google tried to formalize a small subset of technologies for publishers to use to allow for lightning-fast reading; in other words, saving them from themselves.

        AMP might be best viewed as a technical attempt to solve a cultural problem: you could already achieve fast websites by being disciplined in the site you build; Google was just able to use its clout to force publishers to do it. As for what it's morphed into, I'm not really a fan, because Google is trying to capitalize on it and publishers are trying various tricks to introduce bloat back into AMP anyway. The right answer might be just for Google to drop it and rank page speed for normal websites far higher than it already does.

      • eternalban 6 years ago

        > How does centralizing content on Google from multiple sources improve decentralization?

        It actually makes perfect sense in Doublespeak. /s

        • DougBTX 6 years ago

          They’re suggesting a web technology which would allow any website to host content for any other website, under the original site’s URL, as long as the bundle is signed by the original site. That could be quite interesting for a site like archive.org, as the URL bar could show the original URL.

          But AMP is a much narrower technology; I’d imagine only Google would be able to impersonate other websites, essentially centralised as you say. The generic idea would just be a distraction to push AMP.

          Everything would be so much better if the original websites were not so overloaded with trackers, ads and banners, then there would be no need for these “accelerated” versions.

          • a9h74j 6 years ago

            I see where you are going, but what if my website is updated? Is the archive at address _myurl_ invalidated, or is there a new address where it can be found? I am thinking of reproducible URLs for academic references or qualified procedures, for example, which might or might not matter in the intended use case.

            Could there be net-neutrality-like questions in all this as well?

          • prepend 6 years ago

            I think this is possible already, but should not override the displayed URL for the content.

            Create a new “original URL” field or something.

      • amelius 6 years ago

        Google is not a single server. Think of Google as a CDN.

        • oblio 6 years ago

          So it's decentralized because Google has multiple servers? And here I was, thinking that Google runs everything from a single IBM mainframe.

          What you're saying would be described as distributed... Not decentralized.

    • domenicd 6 years ago

      +1. The way I think about it is that signed exchanges are basically a way of getting the benefits of a CDN without turning over the keys to your entire kingdom to a third party. Instead you just allow distribution of a single resource (perhaps a bundle), in a cryptographically verifiable way.

      Stated another way, with a typical CDN setup the user has to trust their browser, the CDN, and the source. With signed exchanges we're back to the minimal requirement of trusting the browser and the source; the distributor isn't able to make modifications.

      • skybrian 6 years ago

        It seems like there's a risk that an arbitrary host will serve an old version of a bundle instead of the new one. Maybe the bundle should have a list of trusted mirrors?

        • gregable 6 years ago

          There is a publisher selected expiration date as part of the signed exchange which the client inspects. The expiration also cannot be set to more than 7 days in the future on creation. This minimizes, but of course does not eliminate, this risk.

          • xg15 6 years ago

            It also makes signed exchanges completely unusable for delivering packages offline. (E.g. the USB stick scenario)

            What a bummer.

            • smichel17 6 years ago

              Browsers could have a setting to optionally display the content anyway, along with a warning to the effect of "site X is trying to show an archive of site Y", similar to how we currently handle expired or self-signed SSL certificates.

        • gpm 6 years ago

          Alternatively, super short expiry times. Having another site serve a bundle that's 5 minutes out of date doesn't seem that concerning, and refreshing the cached content every 5 minutes shouldn't be too much load.

      • jml7c5 6 years ago

        I could see some sort of alternative URL bar ("https://nyt.com/somearticle/ | served by https://somecdn.example.org/blah"), but complete replacement is far too dangerous and confusing in that it is completely hidden.

        • jefftk 6 years ago

          The New York Times surely already serves their pages through a CDN, silently, and with the CDN having the full technical capability to modify the pages arbitrarily. Signed exchange allows anyone to serve pages, without the ability to modify them in any way.

          (Disclosure: I work for Google, speaking only for myself)

          • jml7c5 6 years ago

            My objection is that it's no longer clear if you're dealing with content addressing or server addressing. If I see example.com in the URL bar, is it a server pointed from the DNS record example.com (a CDN that server tells me to visit), or am I seeing content from example.com? If I click a link and it doesn't load, is it because example.com is suddenly down, or has it been down this whole time? Is the example.com server slow, or is the cache slow? Am I seeing the most recent version of this content from example.com, or did the cache miss an update?

            • KajMagnus 6 years ago

              What if there were a `publisher://...` or `content-from://...` or `content://...` protocol, somehow? (Visible in the address bar, maybe with a different icon too, so one would know it wasn't normal https:.)

              And by hovering, or one-clicking, a popup could show both the distributor's address (say, CloudFlare), and the content's/publisher's address (say, NyT)?

      • ignoramous 6 years ago

        > a way of getting the benefits of a CDN without turning over the keys to your entire kingdom to a third party.

        https://blog.cloudflare.com/keyless-ssl-the-nitty-gritty-tec... is a thing now.

        • icebraining 6 years ago

          The session key, which is given carte blanche by the TLS cert to sign whatever it wants under the domain, is still controlled by Cloudflare.

          To put it simply, Cloudflare still controls the content. The proposal here would avoid that, by allowing Cloudflare to transmit only pre-signed content.

          • Spivak 6 years ago

            Your browser would have a secure tunnel to Cloudflare which is encrypted with their key. But that tunnel would then deliver a bundle of resources that your browser verifies separately, with a key CF doesn’t have.

    • dannyw 6 years ago

      The plan is bad because Google currently tracks all of your activities inside AMP-hosted pages, as described in their support article.

      Google controls the AMP project and the AMP library. They can start rewriting all links in AMP containers to Google’s AMP cache and track you across the entire internet, even when you are 50 clicks away from google.com.

      • gregable 6 years ago

        While that's theoretically possible, the library can be inspected and does not do these things.

        • simion314 6 years ago

          Could Google give specific persons different versions, or is that technically impossible?

          • gregable 6 years ago

            Technically yes, but not very practically. The domain is cookieless, so it would be difficult to even identify a specific user, other than by IP. Also, the JavaScript resource is delivered from the cache with a 1 year expiry, which means most times it's loaded it will be served from browser cache rather than the web.

          • rocho 6 years ago

            It's very possible indeed.

        • tgv 6 years ago

          They have the log files.

        • pdkl95 6 years ago

          > the library can be inspected

          Really? Could you publish how you are inspecting an unknown program to determine if it exhibits a specific behavior? There are a lot of computer scientists interested in your solution to the halting problem.

          Joking aside, we already know from the halting problem[1] that you cannot determine whether a program will execute even the simplest behavior: halting. Inspecting a program for more complex behaviors is almost always undecidable[2].

          In this particular situation, where Google is serving an unknown JavaScript program, a look at the company's history and business model suggests that the probability they are using that JavaScript to track user behavior is very high.

          [1] https://en.wikipedia.org/wiki/Halting_problem

          [2] https://en.wikipedia.org/wiki/Undecidable_problem

          • pwdisswordfish2 6 years ago

            By reading the source code?

                def divisors(n):
                    for d in range(1, n):
                        if n % d == 0:
                            yield d
            
                n = 1
                while True:
                    if n == sum(divisors(n)):
                        break
                    n += 2
                print(n)
            
            I don’t know if this program halts. But I’m pretty sure it won’t steal my data and send it to third parties. Why? Because at no point does it read my data or communicate with third parties in any way: it would have to have those things programmed into it for that to be a possibility. At no point did I have to solve the halting problem to know this.

            Also, if I execute a program and it does exhibit that behaviour, that’s a proof right there.

            The same kind of analysis can be applied to Google’s scripts: look what data it collects and where it pushes data to the outside world. If there are any undecidable problems along the way, then Google has no plausible deniability that some nefarious behaviour is possible. Now, whether that is a practical thing to do is another matter; but the halting problem is just a distraction.

            • pdkl95 6 years ago

              > at no point does it read my data

              Tracking doesn't require reading any of your data. All that is necessary is to trigger some kind of signal back to Google's servers on whatever user behavior they are interested in tracking.

              > or communicate with third parties

              Third parties like Google? Which is kind of the point?

              > [example source code]

              Of course you can generate examples that are trivial to inspect. Real world problems are far harder to understand. Source is minified/uglified/obfuscated, and "bad" behaviors might intermingle with legitimate actions.

              Instead of speculating, here is Google's JS for AMP pages:

              https://cdn.ampproject.org/v0.js

              How much tracking does that library implement? What data does it exfiltrate from the user's browser back to Google? It obviously communicates with Google's servers; can you characterize if these communications are "good" or "bad"?

              Even if you spent the time and effort to manually answer these questions, the javascript might change at any time. Unless you're willing to stop using all AMP pages every time Google changes their JS and you perform another manual inspection, you are going to need some sort of automated process that can inspect and characterize unknown programs. Which is where you will run into the halting problem.

              • pwdisswordfish2 6 years ago

                Funny how people can literally "forget" that Google is a third party. Probably people at Google believe they are not third parties. Not even asking for trust, just assuming it. No other alternatives. A trust relationship by default.

            • saagarjha 6 years ago

              > I don’t know if this program halts.

              Be cool if you did ;)

          • IanCal 6 years ago

            > Could you publish how you are inspecting an unknown program to determine if it exhibits a specific behavior? There are a lot of computer scientists interested in your solution to the halting problem.

            This has nothing to do with the halting problem, because that is about deciding halting for all possible programs, not for some particular program.

            We obviously know if some programs halt.

                while true: nop
            
            Is an infinite loop.

                X = 1
                Y = X + 2
            
            Halts.

            More complex behaviours can be easy to check too. Neither of my programs there makes network calls.

      • wmf 6 years ago

        Publishers who use AMP were already allowing Google to track everything through either Analytics or Ads.

        Likewise, AMP pages are mostly accessed from Google Search, which is already tracked.

        • robin_reala 6 years ago

          As a user I can choose to block GA, either through URL blocking or through legally mandated cookie choices in some regions (e.g. France). When it's served from Google, I have no choice in the matter.

          • gowld 6 years ago

            If you can block GA at the client, you can block google.com at the client, no?

            • robin_reala 6 years ago

              Not if I want AMP pages. (I mean, I don’t, but there are presumably people who do.)

    • tyingq 6 years ago

      The AMP spec REQUIRES you to include a Google-controlled JavaScript URL with the AMP runtime. So technically the whole signing bit is a little moot, given that the JS could do whatever it wanted.

      • gregable 6 years ago

        The same could be said of any CDN hosted javascript library. For example: jquery. There is an open intent to implement support for publishers self-hosting the AMP library as well.

    • grey-area 6 years ago

      That's not why Google (the corporation) wants this to happen. This is not about technical capabilities but about power.

      They cannot be allowed to become the gatekeeper for the web.

      • jacquesm 6 years ago

        They already are. The question is not how we're going to stop that from happening but how we are going to roll it back.

    • xg15 6 years ago

      I agree, if we finally got a way to have working bundles on the web, that would be extremely useful. (And it would also restore some of a browser's ability to work without an internet connection.)

      It seems to me that a lot of the security concerns come from the requirement that pages served live and pages served from bundles be indistinguishable to the user, a requirement that really only makes sense if you're Google and want people to trust your AMP cache more.

      I'd be excited about an alternative proposal for bundles that explicitly distinguishes bundle use in the URL (and also uses a unique origin for all files of the bundle).

      • gregable 6 years ago

        I believe the issue with this is that users already largely don't understand decorations in the URL; for example, the difference between a lock icon and an extended validation certificate bubble. Educating a user on what a bundle URL means technically may be exceedingly challenging.

    • pwdisswordfish2 6 years ago

      In what ways is this different from or similar to "content-centric networking"?

      https://m.youtube.com/watch?v=gqGEMQveoqg

      (a Google Tech Talk from Van Jacobson on CCN, many years ago)

    • olingern 6 years ago

      The problem is ownership. Google is “stealing” or caching content for what they consider a better web.

      I don’t support ads but I also don’t support Google serving a version of the page that steals money from content creators. So, therein lies the problem: choice.

      I can imagine a future where AMP is ubiquitous and Google begins serving ads on AMP content. Luckily, companies have to make money, and AMP is not in most people's or companies' best interests.

      If AMP were opt-in only, this would be much more ethically sound.

      • gregable 6 years ago

        Signed exchanges guarantee that the content cannot be modified by the cache, such as ad injection.

        Google has never injected ads into any cache served AMP document (technically if the publisher uses AdSense, this is false, but that's not the point you are making).

        It's difficult to follow what definition of theft is being suggested. The cache does not modify the document rendering, it's essentially a proxy. In a semantic sense, this is no different than your ISP delivering the page or your WiFi router.

    • jeffdavis 6 years ago

      It's completely moving away from the client/server model to something else.

      Perhaps that's a great thing to do, but it's not something to do quietly.

      • mooman219 6 years ago

        Just hearing about this from the thread, I'm getting an IPFS vibe from it. It would be interesting to see that tech get more native integration with the browser from this idea.

    • boomlinde 6 years ago

      How is it not weird that I see a domain name in the URL bar that has nothing to do with the domain I actually requested content from?

    • colordrops 6 years ago

      Why do they need a special extension though? What's wrong with DNS?

      • gregable 6 years ago

        Signed exchanges are an extension to digital certificates, such as used for TLS. This is independent of DNS.

    • nxnews 6 years ago

      Why would it be amazing for decentralized networks and offline web apps?

      • remexre 6 years ago

        If I publish mycoolthing.com/thing, it could be mirrored over a P2P network as peer1.com/rehosted/mycoolthing.com/thing, peer2.com/rehosted/mycoolthing.com/thing, etc., in a way that would make it evident to end-users not familiar with the protocol that the content is from mycoolthing.com.
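
        A toy model of that attribution, assuming a made-up signing scheme (real signed exchanges use certificate-based ECDSA signatures over a binary envelope; the HMAC and key below are purely illustrative):

```python
import hashlib
import hmac

# Hypothetical publisher key; real SXGs use a certificate with the
# CanSignHttpExchanges extension, not a shared secret like this.
PUBLISHER_KEY = b"mycoolthing.com-signing-key"

def sign_exchange(url, payload, key):
    """The publisher signs the claimed URL together with the payload."""
    mac = hmac.new(key, url.encode() + b"\0" + payload, hashlib.sha256)
    return {"url": url, "payload": payload, "sig": mac.hexdigest()}

def verify(exchange, key):
    """Return the URL a browser could display, no matter which peer served
    the bytes; tampering with the payload or the URL fails the check."""
    mac = hmac.new(key, exchange["url"].encode() + b"\0" + exchange["payload"],
                   hashlib.sha256)
    if not hmac.compare_digest(mac.hexdigest(), exchange["sig"]):
        raise ValueError("signature invalid: content was modified")
    return exchange["url"]

sxg = sign_exchange("https://mycoolthing.com/thing", b"<html>hi</html>", PUBLISHER_KEY)
# peer1.com or peer2.com can serve these exact bytes; verification
# still attributes them to mycoolthing.com.
print(verify(sxg, PUBLISHER_KEY))
```

        An ad-injecting cache would change the payload, the signature would no longer match, and verification would fail rather than display the publisher's URL.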

        • tgv 6 years ago

          AMP is of course not P2P.

          • tyingq 6 years ago

            I think the point is that signed exchanges (https://developers.google.com/web/updates/2018/11/signed-exc...) could potentially be useful if separated from AMP and made an actually secure thing. For example, the spec doesn't require specific Google-controlled JS URLs to be in the content.

            • gregable 6 years ago

              Signed exchanges are actually a separate spec from AMP. The browser implements them independently. There is no requirement for AMP pages to use signed exchanges, nor for signed exchanges to be AMP.

  • rpastuszak 6 years ago

    Remember when Google was telling us that third-party cookies are there to protect us, and Safari/Firefox/Edge are just reckless and pose a risk to users by blocking them?

    • Jabbles 6 years ago

      Please provide a link, I could only find this, which suggests Google has reversed course:

      https://www.techradar.com/uk/news/google-is-phasing-out-thir...

      • rpastuszak 6 years ago

        > By undermining the business model of many ad-supported websites, blunt approaches to cookies encourage the use of opaque techniques such as fingerprinting (an invasive workaround to replace cookies), which can actually reduce user privacy and control.

        https://blog.chromium.org/2020/01/building-more-private-web-...

        I'm going to copy paste my older comment on this:

        I find their "removing 3rd party cookies will incentivise businesses to rely on fingerprinting" discourse dangerous.

        It implies that other browser vendors (Mozilla, Safari/WebKit, new Edge) are in fact making the Web a more dangerous place.

        I believe it's dangerous because it creates a harmful, unproductive PR narrative—people might just assume this is a true statement, without learning about both sides of the problem. I'm not trying to strip anyone of agency, I just don't think most of my friends would have time to research this topic and might decide to follow the main opinion instead.

        The answer I'd like to hear: Yes, it does push some actors towards fingerprinting, but preventing fingerprinting should be dealt with regardless. Changes should happen both on legislative and browser-vendor level.

  • esafwan 6 years ago

    Have a look at this: https://blog.cloudflare.com/announcing-amp-real-url/

    Cloudflare allows using the same domain with AMP; in this case, content is served from Cloudflare's CDN.

    • tyingq 6 years ago

      Note that the Cloudflare-hosted AMP pages still mandate AMP requirements, like including a Google-controlled JS URI in your content. Signing is moot if you allow Google to run arbitrary JS on your content. They haven't abused it yet, but it's allowed by the spec. Subresource integrity isn't mandated, explained, or recommended.
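
      For illustration, subresource integrity would pin the runtime script to a hash of its exact bytes; a sketch of computing the integrity value (the script body here is a stand-in, not the real v0.js):

```python
import base64
import hashlib

# Stand-in bytes; an SRI hash must be computed over the exact bytes served.
script_body = b"console.log('amp runtime');"

# SRI integrity values are "<algo>-<base64 digest>" of those bytes.
digest = hashlib.sha384(script_body).digest()
integrity = "sha384-" + base64.b64encode(digest).decode()

# The page would then pin the script like so, and the browser would refuse
# to execute it if the fetched bytes hashed differently:
#   <script src="https://cdn.ampproject.org/v0.js"
#           integrity="sha384-..." crossorigin="anonymous"></script>
print(integrity)
```

      Pinning the runtime this way would at least turn "trust whatever the CDN serves" into "trust these exact bytes".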

    • archon810 6 years ago

      It's called Signed Exchange and it's the same thing the comment you replied to is about.

  • m-p-3 6 years ago

    I'm still waiting for general support of addons for the next version of Firefox on mobile just so that I can have the Redirect AMP to HTML[1] addon.

    [1]: https://addons.mozilla.org/firefox/addon/amp2html/

  • priyaranjan 6 years ago

    This has been discussed over and over again, and there is no representation from the AMP team to make it any better. I was surprised to realise how much my life changed when I started using Firefox + DuckDuckGo. Full time, at work & home, on macOS & Android.

  • xg15 6 years ago

    Aren't we essentially reinventing http proxies with this?

  • nokya 6 years ago

    Just having this idea already shows how important it is to actively resist Google.

    One should never forget that at a certain point, Google will likely invoke the loser's argument ("protect you from terrorists and pedophiles") to require proof of identity prior to granting access to any resource or service it controls.

    Anything that helps them advance in that direction must be fought fiercely.

  • dqpb 6 years ago

    Isn't this basically like a CDN or a PoP cache?

    • IncRnd 6 years ago

      Not exactly. For a CDN to work, the DNS is repointed towards the CDN's servers. In this case, Google is trying to cover up that Google, and not NYTimes, is serving the page.

      • wmf 6 years ago

        Is NYTimes's use of Fastly also a cover-up?

        • boomlinde 6 years ago

          Does NYTimes' use of Fastly subvert the meaning of the URL by literally covering it up in the address bar? Nope? Not the same thing, then.

          Personally I don't think there's anything wrong with the fundamental concept of signed exchanges. The only problem is that it's just that: a signed exchange of content, which should have nothing to do with the domain name authority in the URL. By all means, display "Content from: a.com" in a box next to the URL, but don't change b.com to a.com in the URL as though it doesn't already have a well defined meaning.

          • ehsankia 6 years ago

            > the meaning of the URL

            The issue is that the technical meaning of the URL is very far from what most users think of.

            Is the URL an address for NYT's server? Not really because you are actually hitting Fastly's server. So when NYT sets up a magical DNS config, it suddenly is fine, but using crypto to sign the package and serve it on a CDN that way, then it's suddenly "subverting the meaning of the URL"?

            We can have a real discussion of what the meaning of a URL is, but I think your interpretation is unfair. I think it's entirely fair to argue that it makes sense for URLs to be an address to a specific content.

            • boomlinde 6 years ago

              > The issue is that the technical meaning of the URL is very far from what most user think of.

              My argument is not really concerned with what most users think of, but humor me, what do they think of?

              > So when NYT sets up a magical DNS config, it suddenly is fine, but using crypto to sign the package and serve it on a CDN that way, then it's suddenly "subverting the meaning of the URL"?

              Yes, because HTTP/S scheme URLs have a definition that implies a meaning, which is subverted when you create exceptions to that meaning. NYT setting up a "magical" DNS config that resolves to some third party server is perfectly fine by that definition, and resolving one FQDN while displaying another is not. It's not sudden, this standard has existed in one form or another since 1994.

              > We can have a real discussion of what the meaning of a URL is

              Yeah, let's do that instead of harping on about what's fair and unfair. It's not a matter of fairness, it's a matter of standardized definitions. By all means, create a new "amp:" URI scheme where the naming authority refers to whoever signed the data and resolves to your favorite AMP cache, but don't call it http or https.

              • ehsankia 6 years ago

                I think the subtle shift of view here is that the URL shows the address where the content is located, more so than where the content was actually fetched from.

                An example of where this occurs today is caching. You could be hitting a cache anywhere along the way. Hell you could be seeing an "offline" version, but the website would still show you the "address" of the content.

                This is no different, you're hitting a different cache, but the "URL" you see is the canonical address of the content you are looking at, not where it was actually fetched from.

                • boomlinde 6 years ago

                  > I think the subtle shift of view here is that the URL shows the address where the content is located, more so than where the content was actually fetched from.

                  The only sense in which content is located anywhere is as data on a memory device somewhere. With the traditional URI in which the host part of the authority is an address of or a domain name pointing towards an actual host, you have a better indication of where the content is located than you do if this is misrepresented as being some other domain name which in fact does not at all refer to the location of the content.

                  The shift, if any, is that people may be less interested in where the content is located and more interested in its publishing origin.

                  > An example of where this occurs today is caching. You could be hitting a cache anywhere along the way. Hell you could be seeing an "offline" version, but the website would still show you the "address" of the content.

                  Yes, because that's how domain names work.

                  > This is no different, you're hitting a different cache, but the "URL" you see is the canonical address of the content you are looking at, not where it was actually fetched from.

                  It's different in the sense that a host name as displayed by the browser then has multiple, conflicting meanings that have no standardized precedent.

        • IncRnd 6 years ago

          Google by definition wants to cover up the domain name that serves AMP web pages. Over half of the article discusses that.

          For your question about Fastly, I already answered that in the comment you replied to. The Fastly CDN requires that the DNS is configured to point at Fastly's servers. Take a look at https://docs.fastly.com/en/guides/sign-up-and-create-your-fi... under "Start serving traffic through Fastly":

            Once you’re ready, all you need to do to complete your service
            setup and start serving traffic through Fastly is set your domain's
            CNAME DNS record to point to Fastly. For more information, see
            the instructions in our Adding CNAME records guide.
          
          A CNAME record is a DNS mechanism that aliases an alternate domain to a canonical domain.
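
          A toy resolver (hypothetical records; not a real DNS client) shows how the aliasing works:

```python
# Toy DNS table; the names and address are made up for illustration.
# CNAME entries alias one name to another; A entries hold addresses.
RECORDS = {
    "www.example-news.com": ("CNAME", "example-news.map.fastly.net"),
    "example-news.map.fastly.net": ("A", "151.101.1.57"),
}

def resolve(name, records, max_hops=8):
    """Follow CNAME aliases until an A record (an address) is reached."""
    for _ in range(max_hops):
        rtype, value = records[name]
        if rtype == "A":
            return value
        name = value  # CNAME: continue resolving the canonical name
    raise RuntimeError("CNAME chain too long")

# The publisher's own name resolves, via the alias the publisher
# configured, to the CDN's address:
print(resolve("www.example-news.com", RECORDS))
```

          The point being: the alias only exists because the publisher put it in their own zone, which is exactly the opt-in step that AMP's URL display lacks.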

          • wmf 6 years ago

            Users don't see DNS records. In the old world they click on a nytimes.com link and get something served from a Fastly server, but in the future AMP world they click on nytimes.com and get it served from a Google server. It isn't different.

            • IncRnd 6 years ago

              You bring up an interesting point - if AMP hosting is the same as a CDN, then why do companies use both solutions? "Because they appear the same" doesn't mean they are the same.

              AMP requires that you consume other Google products, which requires that additional JS is loaded. When your mobile site doesn't use AMP, Google limits the search ranking it can achieve. Google AMP requires your pages to meet Google's Content Policies or they won't host them.

              AMP and CDN delivered pages are architected differently and Google imposes restrictions and requirements that don't exist in a CDN.

            • simias 6 years ago

              I agree with you, if I understand the signed exchange proposal correctly the trust model is effectively similar (NYT explicitly opts to let Fastly pretend to be them through their DNS config in the same way that a signed exchange would let them explicitly opt into letting Google pretend to be them).

              I'm still opposed to the change, I see this centralization of the web through CDNs as a bad thing, I don't want to make it easier.

              • jefftk 6 years ago

                The trust model is pretty different: in the traditional model NYT has to trust their CDN to serve the content unmodified. In the signed exchange model, any modification will cause the content not to validate, and the browser will reject it.

            • rocho 6 years ago

              It's very different. And the difference lies in the URL bar. When you use a CDN, your visitors will still see your domain. With Amp, they see google.com.

    • matheist 6 years ago

      Not if nyt hasn't authorized google to act on their behalf. "Yes we will serve your stuff on your behalf at your request, now that you have stuck your sign on our door via a DNS record" vs "We're putting your sign on our door because as the authors of a browser we can do that, whether you like it or not".

      • remexre 6 years ago

        NYT in fact has to go through non-trivial technical effort to authorize Google to act on their behalf.

  • smpetrey 6 years ago

    Holy mackerel

  • agumonkey 6 years ago

    has the mainstream web jumped the shark?

  • jeffbee 6 years ago

    Well, it's the going opinion of HN for years that the main problem with AMP is it shows the actual origin instead of the proxied origin. Lying about the URL is something hundreds of HN comments have angrily demanded.

    • nojs 6 years ago

      Why do you say that? I don’t think people want it to show the “proxied origin”, they want AMP to get out of the way and google to link to the real website.

    • ori_b 6 years ago

      No. The complaint is that Google is redirecting the content to servers that they control.

      • jeffbee 6 years ago

        This is not correct. Anyone can host AMP. See, for example, amp.cnn.com. Google hosts AMP content for its customers who elect to use that service. It’s not a nefarious plot.

reaperducer 6 years ago

People have been railing against Google's Amp on HN for years, and I think I finally figured out what it's for.

It's Google's way of combating phone apps.

If all of the world's information — especially current news and similar information — moves from the open web into apps, then Google can no longer crawl, index, or scrape that information for its own use. The rise of the mobile phone app is a threat to Google on so many levels from ad revenue to data for training its AIs.

So Google comes up with Amp to convince publishers to keep their content on the open web, where it can be collated, indexed, and otherwise used by Google for Google's services like search and those search result cards that keep people from visiting the content creators.

Google's explicit carrot in all this is the user benefit of page loading speed. Google's implicit carrot in all of this is page rank. But Google's real motivation is to have all of that information available to itself.

Can you imagine what would happen if content from even one of the big providers was no longer visible to Google? New York Times, WaPo, or even Medium? It would create a huge hole in a number of Google products and services, make its search results look even weaker than they already are, and cause people to look for search alternatives.

That's my theory, anyway.

  • hortense 6 years ago

    Amp was a reaction to Apple News and Facebook News: using those applications to read the news was a much better experience than using the web. Why? Mainly for two reasons:

    1/ Apple and Facebook were hosting all the content.

    2/ The content did not come with megabytes of JS and other unnecessary crap.

    Amp is an attempt at saving the web, and Google is interested in that for the reason that you gave: they make their money from the web.

    • appleflaxen 6 years ago

      > Amp is an attempt at saving the web, and Google is interested in that for the reason that you gave: they make their money from the web.

      Yes; attempting to save the web in much the same way that the parasitic wasp is trying to oviposit in your thorax and take over your behavior, in order to save you from being eaten by the spider.

      No thank you, sawfly.

  • Kevin605 6 years ago

    This has already happened in China, where Baidu (The Chinese equivalent of Google) can’t crawl any articles from WeChat (The Chinese equivalent of Medium), as a result, the usefulness of its search result has deteriorated significantly. Recently, Baidu has been trying to start its own publishing platform with little success.

    • saagarjha 6 years ago

      > WeChat (The Chinese equivalent of Medium)

      TIL

      • Kevin605 6 years ago

        Well, it’s more like WhatsApp, Medium, Venmo, and Facebook all combined into one giant app.

        • Kliment 6 years ago

          Also food ordering, travel reservations, health care appointments, banking, government services, and a whole lot of other things that would take too long to list. It's not an exaggeration to say the entire Chinese consumer experience runs through WeChat.

  • remus 6 years ago

    I think this is a fairly cynical take, as having news on the web is also pretty great for users.

    Imagine if instead of having all news stories a quick search away you instead had to install apps from X different news sources (and inevitably grant them permission to access your location, contacts list, name of first born child etc.). It'd create lots of little silos of news with very little ability to go outside those silos.

    Put another way, the web is a great platform for news. It does benefit Google, but it also benefits the billions of people who can freely access a huge range of sources.

  • sillysaurusx 6 years ago

    Interesting theory. One hole is that companies want to be on Google's results. It hurts WaPo not to be in the top N results, so they have an incentive to make it at least possible.

  • mclightning 6 years ago

    Who is really using the dedicated apps for each news site? The web is just way more practical: for translation, for copy-paste, for sharing.

    Besides, you don't need the app on your mobile.

    • antpls 6 years ago

      I bet non-techie people _already_ read their daily dose of news from 1 to 2 news websites at most. Installing a dedicated app is not much different than surfing the same 2 websites everyday.

      Also, for techie people, do you consider RSS as part of the "web" ? To me, an RSS aggregator app is superior to browsing 20 different news websites, all with different formats.

      "The web is just way more practical" isn't obvious. It depends on what you put in the "web" bag, and on the use cases. Most apps use "web" protocols, so they are technically part of the web.

  • satyrnein 6 years ago

    That's long been Google's stated reason for Chrome and much else, that pushing the web forward as a platform aligns with their interests as well.

  • summerlight 6 years ago

    Apple and Facebook really don't care if the web dies, as long as their platforms take the lion's share. But for Google, search as a product can exist only if the web itself remains relevant. This is why Google tries to keep display ads alive even though display doesn't make them much money compared to search ads, and brings all the privacy complications of third-party trackers.

  • ffritz 6 years ago

    Interesting, though the barrier for users to install a new app seems to be very high these days. Most people only install a few necessary apps and that's it. In addition, we are talking about publishers here. There are thousands of news sites, and no user has more than a couple of news apps. That's why they have to keep up their website anyway, with or without AMP.

  • IfOnlyYouKnew 6 years ago

    That’s Google’s motivation for almost anything. Especially Chrome.

Abishek_Muthian 6 years ago

I think the main issue is the limited number of AMP Cache providers and the publisher's inability to choose their own AMP Cache provider, which is being exploited by the two search engines.

The AMP project by itself is open-source and it explicitly states 'Other companies may build their own AMP cache as well'.[1] There are only two AMP Cache providers: Google and Bing. Further, 'As a publisher, you don't choose an AMP Cache, it's actually the platform that links to your content that chooses the AMP Cache (if any) to use.'[2]

Say Cloudflare provided an AMP Cache and site publishers could choose their own cache provider: this could be resolved effectively, since AMP by design makes it easy for a layman to create high-performance websites. Of course, there is no excuse for hiding URLs.

[1]https://amp.dev/support/faq/overview/

[2]https://amp.dev/documentation/guides-and-tutorials/learn/amp...
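
As a sketch of how cache URLs are derived today, the Google AMP Cache folds the publisher's domain into a subdomain of cdn.ampproject.org (simplified here; the documented scheme also handles IDNs and overlong domains with a hash fallback):

```python
from urllib.parse import urlparse

def google_amp_cache_url(origin_url):
    """Simplified Google AMP Cache URL derivation (ignores IDN and
    long-domain edge cases)."""
    parts = urlparse(origin_url)
    # '-' is doubled, then '.' becomes '-', giving one flat cache subdomain.
    subdomain = parts.hostname.replace("-", "--").replace(".", "-")
    # 'c' marks plain content; 'c/s' marks content fetched over HTTPS.
    scheme_segment = "c/s" if parts.scheme == "https" else "c"
    return (f"https://{subdomain}.cdn.ampproject.org/"
            f"{scheme_segment}/{parts.hostname}{parts.path}")

print(google_amp_cache_url("https://www.example.com/article.amp.html"))
# -> https://www-example-com.cdn.ampproject.org/c/s/www.example.com/article.amp.html
```

If publishers could point that derivation at a cache of their own choosing, the lock-in concern above would largely go away.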

  • snowwrestler 6 years ago

    Can we please stop trying to pretend AMP is some sort of community-driven open source project? AMP was created by Google, for the benefit of Google. We are not obligated to play along every time a company says “open source.”

    • Abishek_Muthian 6 years ago

      >We are not obligated to play along every time a company says “open source.”

      I agree. IMO, Google has been using 'open-source' for weaponized marketing, the same way Apple has been using 'privacy'. But either of them could be much worse without those.

    • ehsankia 6 years ago

      Yet Google's own competitor, Bing, is clearly also using it. Isn't that part of the point of open-source? That anyone can see and use your work?

    • eeZah7Ux 6 years ago

      > We are not obligated to play along every time a company says “open source.”

      This is the point.

      People easily confuse "open source" with "free software" and "community driven".

      A lot of corporate-driven open source greenwashes the dark patterns of closed source: centralized development, user lock-in, walled gardens, poor backward compatibility, forced software and hardware upgrades.

      • Abishek_Muthian 6 years ago

        >"community driven"

        This concern has been raised time again with every major Google open-source project e.g. Android, Chromium, Golang etc. and that concerns have helped improve certain aspects of the project.

        But I wonder whether a huge corporation like Google can build such large-scale projects without such criticism. If a project is to be successful, they need to gain from it; after all, they are investing their employees and other resources in it. And their being invested in it is a major reason for adoption by other parties, resulting in a successful open-source project.

        Moreover, such large projects have helped the overall software ecosystem, and even startups, economically. I for one would say that without such large open-source projects I wouldn't have been able to build products from a village in India and compete with products from the valley.

        All I'm saying is, their being open source at least helps us raise concerns and make them take action; being a complete walled garden and just asking us to 'trust us' is much worse.

        • eeZah7Ux 6 years ago

          > But, I wonder whether a huge corporate like Google can build such large scale projects without such criticism

          Yes: they could at least develop large projects in a foundation with many other companies

          > And them being invested in it, is a major reason for adoption by other parties and resulting in a successful open-source project.

          ...and the main source of pain when the projects are "pivoted" or just dropped due to a single company's business needs, as has happened many times.

          > such large projects have helped overall SW ecosystem and even startups economically.

          They hugely harmed competing projects and competing companies including Mozilla, many phone OSes, many grassroots programming languages.

          It's well known that Google developed various projects to kill competitors, or bought startups cheaply and dropped the project afterwards.

          There isn't an infinite pool of open source developers - far from it!

          Any large corporation that drains the pool to create a competitor to already existing FLOSS projects is actively harming the ecosystem.

          > being a complete walled garden and just asking to 'trust us' is much worse.

          Closed source can be less harmful than fake-open source. A lot of people actively avoid closed source and fall for the latter.

          • Abishek_Muthian 6 years ago

            >They hugely harmed competing projects and competing companies including Mozilla, many phone OSes, many grassroots programming languages.

            IMO, we're the reason it failed. We as consumers didn't buy the Firefox OS phone over Android or iOS. We haven't adopted the Firefox browser enough for it to gain major market share. The same argument can be levelled against any proprietary product vs. open-source product.

            That proves my point: being 'completely community driven' isn't the only criterion for the success of a project.

      • eeZah7Ux 6 years ago

        Funny how I get bunches of downvotes on this account but never on other accounts. Time to switch.

  • SquareWheel 6 years ago

    Did Cloudflare end their Amp Cache? They hosted one previously.

  • lern_too_spel 6 years ago

    The link aggregator, not the publisher, must control the AMP Cache in order to prerender pages from it safely.

twhitmore 6 years ago

The whole AMP thing seems anti-competitive and hostile to the open web.

It's a really bad look on Google's part to be pushing this.

  • earthboundkid 6 years ago

    There has been no regulatory action since Microsoft (which happened as Google was being born), so the tech giants have forgotten fear and no longer self-regulate out of simple self-interest.

    • TheSpiceIsLife 6 years ago

      Another way of looking at it is: they absolutely are self-regulating.

      And it appears to be a problem.

      Another problem is, there's effectively no distinction between regulator and regulatee.

      • realusername 6 years ago

        > Another way of looking at it is: they absolutely are self-regulating.

        If they do that, it's not really visible, I don't see any regulation with how Google is behaving regarding search & web, if anything it looks like anti-competitive monopoly behaviours.

  • raverbashing 6 years ago

    I am conflicted

    Yes, AMP is an anti-competitive move by Google

    At the same time AMP is "faster" because it gets rid of all the nagware and JS crap that the original page has.

    So yeah, I don't like what Google is doing, but I don't like what NYT is doing either.

    • moksly 6 years ago

      AMP is faster? I’ve never been on an AMP page where I didn’t eventually need to go to the actual site to get the full content. So it’s really just an annoying step between me and the content I searched for.

      It’s been one of the primary things that’s driven me away from google and into DDG. I don’t really care about privacy enough to leave google, but I end up leaving more and more of their services because their competition is just less annoying.

    • ogre_codes 6 years ago

      > At the same time AMP is "faster" because it gets rid of all the nagware and JS crap that the original page has

      Google gives preference to AMP content whether the source page is lightning fast or not. I get the frustration with crappy web pages, but a big part of the reason web pages are getting increasingly crappy is that Google and Facebook (and to a much lesser extent Amazon, in a weird way) have a stranglehold on the web advertising market, and publishers are getting smaller and smaller slices of advertising revenue. AMP increases Google's lock on the market. Since AMP pages can only really be monetized by the publisher, this puts even more power in Google's hands.

    • acdha 6 years ago

      > AMP is "faster" because it gets rid of all the nagware and JS crap that the original page has.

      AMP is faster only for poorly-optimized, JS-heavy pages, and its design is fundamentally flawed: all of its own large amount of JavaScript must run before anything displays, whereas most traditional bloat doesn't block rendering. That means any optimized page (Washington Post, NYT, etc.) loads noticeably faster, even before you factor in how often you need to wait for AMP to load, realize that some part of the content is missing, and then wait for the real page to load anyway.

      That design forces it to be less reliable, too: before I stopped using Google on mobile to avoid AMP, I would see on a near-daily basis failed page loads due to the AMP JS failing in some way and when it wasn’t failing it was still notably slow (5+ seconds or worse on LTE). Since all of that JavaScript is forced into the critical path, anything less than unrealistically high cache rates means the experience is worse than a normal web page.

      WPT examples:

      https://www.webpagetest.org/result/200704_GR_62165b7f695e300...

      https://www.webpagetest.org/result/200704_5F_f5c36a7c41cf4c2...

      • lern_too_spel 6 years ago

        Those tests show you don't understand why AMP works. It works because it is prerendered, which is going to be faster than anything you can do.

        • acdha 6 years ago

          If that were true, AMP would be consistently faster. Since anyone who’s used it knows that it’s not, you would find it educational to learn about the issues with detecting user intent, reliably prefetching dependencies, and the relatively small / frequently purged caches on mobile browsers.

          AMP’s design is very fragile: if you are using Google search results, they correctly guess what you’re going to tap on before you do and your browser fully preloads it, it _might_ be faster to run all of that JavaScript before anything is allowed to load and render. If any part of that chain fails, it will almost certainly be slower or, because it disables standard browser behavior, prevent you from seeing content at all.

          • lern_too_spel 6 years ago

            > If that were true, AMP would be consistently faster.

            It is. AMP results load instantly for me.

            > you would find it educational to learn about the issues with detecting user intent, reliably prefetching dependencies, and the relatively small / frequently purged caches on mobile browsers.

            And you might find it educational to learn why AMP doesn't rely on these things. There are no dependencies that need to be fetched for the initial render.

            This idea isn't surprising. Multiple other systems use the same ideas, including Apple News, many RSS readers, and Facebook Instant Articles. AMP just does it in a way that isn't anti-competitive (like the former) and allows for multiple monetization schemes and rich formatting (unlike RSS).

            > if you are using Google search results, they correctly guess what you’re going to tap on before you do and your browser fully preloads it, it _might_ be faster to run all of that JavaScript before anything is allowed to load and render

            AMP doesn't rely on fully prerendering the page, only the portion above the fold, which it can calculate because the link aggregator page knows the display size, and the elements allowed in AMP are required to report their dimensions. This allows multiple pages to be prerendered.
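
                A sketch of that idea, with made-up elements and heights: because dimensions are declared statically, the aggregator can compute which elements fall above the fold without fetching or running anything:

```python
# Toy above-the-fold calculation. Elements and pixel heights are
# hypothetical; AMP elements must declare their dimensions up front,
# which is what makes this computable before any resource loads.
def above_the_fold(elements, viewport_height):
    """elements: list of (name, declared_height_px) in document order.
    Returns the names of elements that start within the viewport."""
    visible, y = [], 0
    for name, height in elements:
        if y >= viewport_height:
            break  # this element starts below the fold
        visible.append(name)
        y += height
    return visible

page = [("header", 120), ("amp-img:hero", 400),
        ("paragraph", 300), ("amp-img:chart", 500)]
print(above_the_fold(page, 640))
# -> ['header', 'amp-img:hero', 'paragraph']
```

                Anything without declared dimensions would force layout (and therefore script) to run before the fold could be computed, which is presumably why AMP mandates them.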

            > because it disables standard browser behavior,

            What standard browser behavior does it disable?

    • markosaric 6 years ago

      It is ironic considering that 7 of the top 10 most-used third-party connections on websites are owned by Google.

      So you can see why there must be some kind of internal struggle at Google. They understand the value of a faster web but they also cannot go after the main cause of the slow web. And this is how technology such as AMP gets invented and makes things worse.

      https://markosaric.com/google-amp/

    • amelius 6 years ago

      But why allow a third party (Google in this case) to collect data on your reading behavior on NYT?

      • varenc 6 years ago

        If you loaded the megabytes of JS served by the actual nytimes.com, they'd certainly be sending your data to Google as well for advertising purposes.

        (Albeit, that’s far more blockable)

  • lern_too_spel 6 years ago

    In what way is it anti-competitive? Google's competitors also consume AMP pages and prerender them using AMP caches. Anti-competitive would be requiring the publishers to integrate directly with Google like Apple News, not asking the publishers to publish pages that all link aggregators can consume.

    • icebraining 6 years ago

      Google Search uses its monopoly to push their own AMP cache. I can't search in Google and load the content through Bing's AMP cache.

      • ehsankia 6 years ago

        > their own AMP cache

        I'm confused, you make it sound like a free CDN is somehow a bad thing. You do realize people actually pay money to have their content on a CDN. I don't think Bing makes money on their AMP cache, and doubt they would want or even allow Google to link to content on their AMP cache...

        The point of AMP cache is for Google (and Bing) to waste money making content faster for their users, in the hope that the user will then spend more time on search so they see more ads. The cache itself has nothing to do with the monopoly, and the fact that Bing can use AMP at all (since its open source) to get the same benefits actually shows the exact opposite.

      • lern_too_spel 6 years ago

        > Google Search uses its monopoly to push their own AMP cache. I can't search in Google and load the content through Bing's AMP cache.

        That's nonsensical. That would reveal what the person searched for to a third party (Microsoft) even if they don't click on any results. The AMP Cache has to be controlled by the link aggregator in order to support safe prerendering, so Bing's AMP cache is used to prerender Bing results, and Google's AMP cache is used to prerender Google results. Compare to directly integrating with Google, in which case, Bing wouldn't get to take advantage of prerendering. The latter (the Apple News setup) is anti-competitive. AMP is not.

quadrifoliate 6 years ago

IMO the core point of the article is false.

> To be blunt, this is a really dangerous pattern: Google serves NYTimes’ controlled content on a Google domain.

No, "Google serves NYTimes' controlled content" is an oxymoron. Google controls the content that is served, and that's all your browser is verifying. Google could very well make the NYTimes content on there display something else and your browser wouldn't show a warning. NYTimes could do nothing about that.

I disagree that this pattern is dangerous. While Google taking over serving the world's content is hardly a thing to celebrate, at least we're seeing that it's doing so here.

  • rtsil 6 years ago

    The pattern is dangerous because it trains users to dissociate the URL from legitimate content, and the best tool at our disposal against phishing is still the ability to use the URL to ascertain the legitimacy of content.

    • izacus 6 years ago

      URLs haven't been associated with legitimate content for a long time now, since most of the things come from giant CDN companies like CloudFlare anyway. What you're seeing in URL bar has very little to do with where the JS code executed on your computer is coming from.

      • icebraining 6 years ago

        Does it matter if it comes from a CDN rented by the NYT or a computer owned by DigitalOcean but rented by the NYT?

        What matters is that the domain points to where the NYT considers is the correct source of their content.

        • satyrnein 6 years ago

          With signed exchanges, the NYT is cryptographically opting into allowing Google (or other cache providers) to represent specific articles as being the NYT. It doesn't seem much different.

      • sudoit 6 years ago

        Yes, but users at least know the JS code loaded was done so on behalf of the webpage the URL points to

    • popcorncolonel 6 years ago

      Do you really think most smartphone users look at the URL anymore? Or even know what a URL is?

      From the non-technical people I've talked to, the answer is no, they don't know what a URL is, and that was happening before AMP came around.

      • acdha 6 years ago

        You might want to back that up with research: people don’t look at full URLs but that’s exactly why it’s so important that the highly-prominent domain name display is accurate.

      • TLightful 6 years ago

        Are you sh!tting me?!!?

        Had enough of HN ...

        This place is bullsh!t.

        Ban me.

    • satyrnein 6 years ago

      It's the current amp status quo that trains users that legitimate content is sometimes on other domains.

      This change would restore the idea that the URL indicates the provenance of the content.

  • gsnedders 6 years ago

    With Signed HTTP Exchanges, for Google to modify the content that is served, Google would need access to a private key for a certificate for nytimes.com, no? Either nytimes.com has handed over that key or Google would have to create a key/certificate for nytimes.com. Believing Google would maliciously issue certificates seems a stretch to me.

    I don't like AMP nor much of how Google has behaved with it (http://exple.tive.org/blarg/2020/05/05/the-shape-of-the-mach... largely matches my thoughts), but let's stick to what's actually happening with SXG.
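
    (For the curious: a signed exchange is an ordinary HTTP response bundled with a signature the publisher made over it, roughly like the sketch below. All values are illustrative, not a real exchange; the signature covers the request URI, headers, and payload, and the browser fetches the publisher's certificate from `cert-url` to verify it:)

```
request URI: https://www.nytimes.com/2020/07/02/example.html
response headers:
  content-type: text/html
signature: sig1; sig=*MEUCIQ...*;
  cert-url="https://www.nytimes.com/cert.cbor"; cert-sha256=*Vty...*;
  validity-url="https://www.nytimes.com/article.validity";
  date=1593648000; expires=1594252800
```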

    • kebman 6 years ago

      I don't get this. Clearly the content is served by Google, and so they can do whatever they like with it. How is an end user going to know whether the message was signed before it was passed on or not?

      • Sephr 6 years ago

        > How is an end user going to know whether the message was signed before it was passed on or not?

        Your web browser will show a scary warning and refuse to display the bundle if it's not correctly signed. Google is not going to fake signatures for other sites, as certificate mis-issuance would open up Google to legal consequences.

      • izacus 6 years ago

        How does the end user know now that a file from cdn.cloudflare.com is actually coming from NYTimes when it loads and runs code on their browser?

    • quadrifoliate 6 years ago

      > but let's stick to what's actually happening with SXG

      No, Signed HTTP exchanges are something that Google dreamed up so people don't have to see their hegemony over the modern web (or as the article you linked calls it, a shakedown). It's not a browser standard so far, because of Apple and Mozilla's resistance.

      There are legitimate ways for NYTimes to allow Google to serve content on behalf of them, like so many other CDNs around the world (it usually involves the CDN generating the certificate for your site as well). Why should people create new standards for HTTPS and URLs simply for Google's benefit?

      I don't deny that there's a way to make "nytimes.com" work where everything is served by "google.com". What I'm questioning is why we need a completely new web standard for doing so that affects the URL, something that has been standard for decades.

      • esrauch 6 years ago

        > Why should people create new standards for HTTPS and URLs simply for Google's benefit?

        Because of the exact reasons that people are complaining about in this very thread: they want NYT to control the content and display the domain name appropriately, but they want to serve it from Google servers and allow for eager prefetching without leaking private details.

        Today it would be easily possible if NYT just gave Google their private cert, but then Google would be able to serve any content they want as NYT. With the proposed solution they can display the content NYT wants without being able to serve arbitrary other content.
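
        The distinction esrauch describes (handing over the private key vs. signing individual responses) is ordinary public-key signing. A minimal sketch in Python using the third-party `cryptography` package; the names and data are illustrative, not the actual SXG wire format:

```python
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# The publisher (e.g. nytimes.com) keeps the private key and signs
# each article it wants caches to be able to redistribute.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

article = b"<html>...original article...</html>"
signature = private_key.sign(article, ec.ECDSA(hashes.SHA256()))

# Any cache can redistribute (article, signature). The browser verifies
# against the publisher's certificate; this call succeeds silently:
public_key.verify(signature, article, ec.ECDSA(hashes.SHA256()))

# Tampered content fails verification -- the cache cannot alter the page.
try:
    public_key.verify(signature, b"<html>tampered</html>",
                      ec.ECDSA(hashes.SHA256()))
except InvalidSignature:
    print("tampered content rejected")
```

The point is that the cache never holds the private key, so it can serve the signed bytes but not mint new ones.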

    • gregable 6 years ago

      That's correct. Only with the private key can one sign a Signed Exchange for the publisher. Like TLS, if you have the key you can already do quite a lot.

w-ll 6 years ago

The shenanigans Google has been pulling with the URL bar are super hostile.

Trying to copy the domain of a URL without the protocol just infuriates me.

  • sodascripts 6 years ago

    Disable this setting in Chrome by going to chrome://flags and switching #omnibox-context-menu-show-full-urls to Enabled. Then right-click the URL bar and select "Always show full URLs".

    • csunbird 6 years ago

      I think not using Chrome at all is a better response than trying to use workarounds.

    • p1mrx 6 years ago

      I think the flag (but not the checkbox) will be enabled by default at some point... this option is the best thing to happen to the URL bar since they broke it by removing http:// and WontFix'ing the resulting complaints a decade ago.

      Really, I couldn't care less about stuff getting pruned from the URL bar, as long as there's an easy and permanent way to show everything.

    • billyt555 6 years ago

      Turned this on a few weeks ago - so much better.

  • ehsankia 6 years ago

    Isn't it just an extra click? Click one, it highlights the whole thing, click a second time and you see the full URL with the protocol.

  • marvindanig 6 years ago

    Hm. There's room for a new good browser to pick up the beans and run with it now…

  • jacquesm 6 years ago

    Try sending someone a link of a PDF you found using google.

  • trishmapow2 6 years ago

    Not defending that change, but when do you ever need to copy just the domain instead of the full URL?

    • jeffbee 6 years ago

      Always. The main reason I copy something is that I want to paste the hostname into a terminal so it can be an argument to whois or dig or traceroute or whatever; in no case have I ever been glad of the scheme prefix.

      • p1mrx 6 years ago

        I've lost count of how many times I've done:

            $ ping http://whatever.com [furious line editing ensues]
        
        But thankfully it's fixed now, with the "Always show full URLs" option.
      • smabie 6 years ago

        Isn't it more common to paste into a utility that uses the prefix, like curl or wget? Or into a chat? Besides, all of those tools could just strip out the prefix, while there's no way to add the protocol back to a bare domain name.

        More information is strictly superior.
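
        The stripping mentioned above really is trivial for any tool that wants just the host; a sketch in Python using the standard library:

```python
from urllib.parse import urlparse

def hostname(url: str) -> str:
    """Return just the host part of a full URL; dropping the scheme is easy."""
    return urlparse(url).netloc

print(hostname("https://whatever.com/some/page"))  # whatever.com
```

Going the other direction (guessing the scheme from a bare domain) is the part no tool can do reliably, which is the asymmetry the comment is pointing at.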

        • MereInterest 6 years ago

          More information is better, so the URL shown in the browser should include the protocol. Consistent behavior is better, so copy/paste should only include text that is actually highlighted.

      • waheoo 6 years ago

        It's a solution in search of a problem.

    • nemothekid 6 years ago

      The times that bit me the most is when I need to copy an IP address.

abraham 6 years ago

One of the main reasons sites use AMP (placement in the top spots in SERPs) will soon no longer require AMP.

https://www.theverge.com/2020/5/28/21272543/google-search-re...

ridiculous_fish 6 years ago

It's wrong to trust the URL bar. For example, the top link in this search [1] is an ad that boasts "google.com", and it really is! If you click on it, you end up on a google.com site that nominally helps with printers but is in reality a tech support scam.

So much of the distrust here is that google wants to be everything: to host their content and publisher content and user content; to broker ads and recommend links; to run their software on your computer and phone, to store your data on their servers. They serve too many masters.

1: https://i.imgur.com/HalErpIr.png

  • kevingadd 6 years ago

    "It's wrong to trust the URL bar" is true but only because the companies operating services like... Google... don't bother trying to protect their URLs. It's not hard to have a separate 'user content' domain for your user content, we've done it at places I used to work for. But for some reason people think it's enough to use a subdomain or get cute and use the same domain with a different TLD (looking at you, github.io)

    So it is kind of frustrating to see someone offering to fix a problem they helped create in the first place through neglect or carelessness.

  • icebraining 6 years ago

    Agreed, that's problematic. But Google didn't even have to not host content, they would just have to use a different domain. They have such weird blind spots.

  • tommek4077 6 years ago

    As an advertiser, you can write whatever you want into the url displayed there. This does not need to match the real target.

    • ridiculous_fish 6 years ago

      But the real target is google.com.

      I just made https://sites.google.com/view/whalefacts, took me literally ten seconds, confirmed it was accessible from multiple IPs and multiple browsers.

      Google wants to be a content host and an ad broker and a search engine. Each of these is reasonable in isolation. Yet you can search on google, and Google will serve you an ad linking to a google.com site, and that site scams you out of money. This isn't theoretical, I know because my family was hit.

      Screenshot if it gets taken down: https://i.imgur.com/T6hVHr5.png

      • kzrdude 6 years ago

        Super boring answer, and this is not an admonition to you, but in general; shouldn't this lead to lawsuits? It needs to be tried in court.

      • iso1631 6 years ago

        I'm confused, that's sites.google.com, the URL says that, and it's right, what's the problem?

  • jacquesm 6 years ago

    All this does is rapidly devalue the google.com domain. Not a bad thing per se.

jacob019 6 years ago

New York Times and all the other publishers don't have to participate in this crap. It's shameful that they cede authority over their content so easily in exchange for a vague promise of more visibility. There are so many better ways.

  • untog 6 years ago

    It’s not a vague promise, it’s an extremely explicit one. Search results for news contain a “top carousel”, a horizontally scrolling box that shows cards for different articles. On most phones it takes up most of the screen. If you want to be in the carousel (i.e. if you want your site to be visible near the top of search results) you must use AMP. No ifs and buts about it.

    If NYTimes and every other news organisation refused to participate then yes, Google would be in trouble. But they can rely on good old divide and conquer: these news organisations all compete with each other. All it would take is for one to start producing AMP content again and they'd vacuum up all the search traffic, and all the other sites would follow them immediately.

  • dewey 6 years ago

    In an ideal world, where they would not rely on ad revenue and page views but would be supported by their readers, that assumption would be correct.

    But right now we are not living in that ideal world and because all other publications are doing that they have to follow if they don't want to risk losing visibility against the competition.

    So of course they don't "have to" but they also kinda do.

  • lazyjones 6 years ago

    > don't have to participate in this crap

    It's a tempting Ponzi scheme.

princevegeta89 6 years ago

This is the question I always had and confused myself over.

In addition to this, I previously stumbled upon a few situations where I visited an AMP site to read an article and I noted down the site name in my mind. A few days later I tried to visit that site and when I put the site name in the address bar in hopes of getting helped by autocomplete, guess what?! It was nowhere to be found.

nwsm 6 years ago

This has been all over HN since amp was released, and this is a two paragraph article with no new info or opinion.

https://hn.algolia.com/?q=google+amp

  • mindfulhack 6 years ago

    I suppose 328 votes so far show the usefulness of repeat discussions. This article is a catalyst to keep this one going. The votes prove that it's an important enough issue to continue talking about.

markosaric 6 years ago

Time to share this post one more time:

How to fight back against Google AMP https://markosaric.com/google-amp/

And the original thread https://news.ycombinator.com/item?id=21712733

bobbydroptables 6 years ago

AMP seems like a solution in search of a problem. Are people really having trouble with loading speed in 2020? I travel to remote areas in third world countries regularly for work and still don't really have problems loading pages with mobile data.

Even if it didn't have all of the problems associated with it I just don't get the point. I don't need Google to repackage a website with less useability. It's frequently not even faster.

  • smabie 6 years ago

    I lived in Africa and the only internet I had was cellular and by the GB. AMP is a massive improvement over the extremely large web pages we now have to endure.

    It's also much faster to render, which makes a huge difference on the crappy Android phones that are everywhere. Hell, I'm using a $200 Android phone right now because my iPhone broke, and browsing the web is painful on it. And with the terrible $40 Huawei phones that have taken over Africa, most of the web is unusable.

    I don't like Google's control of AMP, but it exists because of the original sin of HTML and JS. Everything about HTML is terrible: bloated, pointlessly verbose, etc.

    I have a dream that we all just start using Gopher and dump the WWW, but it's never going to happen. Maybe browser vendors could even get together and design a super-lightweight markup based on S-exps or something, but that's probably not going to happen either. AMP is the best we've got and it solves a real problem. And it solves the problem well.

    • lultimouomo 6 years ago

      But does AMP make the internet usable on those $40 phones? I have a recent mid-range $200 phone, and pretty much the only website that regularly hogs my browser is Google News, which coincidentally is also the only one that uses AMP. It's anecdotal, but in my experience AMP (or whatever else Google News does) degrades performance to an amazing extent.

      • izacus 6 years ago

        Google News is far from being the only one using AMP and there's a massive difference in loading times and rendering speed for most news sites between AMP and non-AMP versions even on my 1GBit internet connection.

        • lultimouomo 6 years ago

          FWIW my impression is not that Google News is bandwidth heavy, but that it is JavaScript heavy. It works fine on the computer but it's hard to use on the phone, even on the same connection.

    • bobbydroptables 6 years ago

      Fair points, although to your last point, I wouldn't necessarily agree it solves the problem well. AMP makes some websites almost unusable (intentionally disables core functionality) and there's no way to disable AMP except manually re-typing the URL for every page. If its goal is just to serve a smaller page, it is a rough workaround with high costs IMO (often slower loading times, weird performance issues, disabled functionality, less open internet).

      I appreciate that not everyone has fast data, but not having data speed to read a basic web page is really becoming the exception, not the norm. Data transmission is getting cheaper and faster and available in more remote places every year.

      I wouldn't have a huge problem with AMP if I could opt out. Unfortunately I can't. So despite my blazing fast unlimited plan on a flagship device, I'm getting served crippled pages with degraded performance. It's like I own a Ferrari kitted out with all the extras and Google is saying "here have you tried out this cool bicycle? It has special pedals so you can't go too fast and we reconfigured the handlebars so you don't accidentally do something like steering! It even has a bell. Ting-ting, ting-ting! How cool is that?"

      In all seriousness, it is neat if it makes the web more useable for low-connectivity users, but maybe then limit AMP to those places (which are shrinking every year) and don't serve needlessly crippled pages when I'm standing in downtown Amsterdam or Hong Kong at the center of the internet, connected to blazing fast Wifi.

    • toastal 6 years ago

      As much as I smack my lips at a supported non-XML, S-exp language for markup, isn't this what Brotli's dictionary of all the bloated XML tags, et al. sets out to solve with its compression?

      • smabie 6 years ago

        Sure, but that's just another layer on a steaming pile of shit. The webpage is still super large, it still takes up ram, it still takes up cpu to decompress and compress, etc etc. It's the kind of solution one comes up with when they recognize that nothing can actually be done to solve the real problem.

  • crazygringo 6 years ago

    > Are people really having trouble with loading speed in 2020?

    Huh? Yes. Hugely. I'm on my fast home internet using a new iPhone I bought two months ago, and loading a NYTimes article just took 8 seconds. God only knows if it's bounded by network or CPU or both, if the problem is frameworks or ads or what. And it isn't even "stuck" on anything -- I watch the blue loading bar in Safari move pretty smoothly across the top.

    I did a search for a NYT article on Google, clicked it, and it appeared instantaneously.

    That's an insane difference. I know everyone hates AMP here, but when I've got my user hat on rather than my developer hat... it's unbelievably more performant.

  • a254613e 6 years ago

    I do, and I don't even live in 3rd world country - I live in Germany in one of the largest cities in the country.

    But even if I can load both pages at roughly the same time AMP experience is just so much better, they always load at the very least at the same speed as the original website, there's no weird scrolling implemented, there's no annoying popups, etc.

    I always choose AMP pages when possible, compared to the "native" ones - because I know for a fact that I'll get fast loading, and other stuff mentioned above.

  • ocdtrekkie 6 years ago

    AMP solves a problem, it just doesn't solve a problem for users. It's an anticompetitive play and it's helping exactly who it is supposed to help.

noisy_boy 6 years ago

Google has already effectively become the address bar - people go to google.com to go to any other website. Now they are just solidifying it so that you don't even remember the url of a website after a while.

causality0 6 years ago

I despise AMP for the entirely selfish and pedestrian reason that it hijacks my phone's browser bar and won't let me access tab management until I scroll all the way back up to the top of the page.

SpeakForMyself 6 years ago

Totally disagree with this drama whoever is putting on for the sake of being in the group of 'anti-google so I am looked as if I am so smart and so know it all and google is trying to control everyone and nobody sees it except me now I am writing a post to tell the world how different I am'

As a user, before learning computer knowledge, I am so thankful and amazed by those AMP pages, because they are really fast! And I barely look at this URL thing to care for security which is huge deal to those conspiracy queens, because as non-tech user I don't know a heck about URL, all I care is how fast a page is presented to me.

So, no, the problem is only you. Yes, you can use a dramatic title just because you are so bored with your life that you want to cause a scene, but you are only embarrassing yourself and adding noise to this already chaotic world. Please go find yourself something to do instead of trying so hard to be internet famous. Thank you.

Kiro 6 years ago

> Accelerated Mobile Pages (AMP) are lightweight pages designed to load quickly on mobile devices. AMP-compliant pages use a subset of HTML with a few extensions. Accelerated Mobile Pages (AMP), is a very accessible framework for creating fast-loading mobile web pages.

That itself sounds awesome and something we should promote. The other part of AMP is of course that it's served through Google's servers. While their global edge caches probably bring the speed up, I think that's less important.

In other words: AMP as a framework to force developers to build lightweight pages without bloat is a good thing. Google's control is a bad thing.

I think many of the comments here make it a borderline topic where there's either all or nothing. I want to see a more nuanced discussion on what the possible alternatives and solutions are instead of just "Google bad, AMP bad".

satyrnein 6 years ago

Does it actually matter where you are, or is that just an implementation detail?

One interpretation is that Google is changing the URL bar from "where" to "who", which may be the more relevant information for most users. Signed exchanges are an interesting way to achieve that.

rammy1234 6 years ago

The internet used to show the real URL. Plain and simple.

didip 6 years ago

I remember long ago when Digg tried to do this and the internet revolted.

I guess times have changed.

vincentmarle 6 years ago

AMP is a lot like how I was browsing the web on my phone before the iPhone came out. Opera Mini’s servers would proxy every single page I visited and fetch and pre-compile it before sending it compressed to my phone. It was way more performant than trying to render the page natively on my crappy phone. (That’s why the iPhone was so unique, it was the first phone that could natively render websites really well). Sure, there were a lot less security and privacy concerns back then, but I think the majority of users simply don’t care as long as it works.

grey_earthling 6 years ago

If The New York Times is unhappy with this use of their branding, it seems to me that they could easily claim trademark infringement.

They could argue that Google is using The New York Times's branding and domain name to make it look like this content is controlled and provided by The New York Times, when in fact it isn't, and that an average person (“idiot in a hurry”) could be deceived.

If The New York Times willingly gives Google permission (or The New York Times willingly abets Google's monopoly position), then I guess Google can do whatever they like.

anonu 6 years ago

Remember when Google's mantra was "Don't be evil" ???

  • lalos 6 years ago

    "Don't be evil", using primary colors in the logo to bring that kindergarten familiarity, fun doodles, a dumb funny movie and quirky April Fools projects were a great marketing strategy to distract your average person in lowering their guard and feel safe to give all the data to an advertisement company. I wonder if they currently teach this case in marketing/PR classes.

    • smabie 6 years ago

      I mean, I don't think that was the intention. I'm sure Page and Brin really wanted to be different when they started. But as a company grows, the vision of the founders is excised and replaced with the same shit found in all large corporations.

      To be clear, I don't think Google or any other large company is evil. It's just the way things turn out, how the incentives are structured.

      • S_A_P 6 years ago

        I think that’s a great way to put it. Each decision to grow the business is not necessarily bad or evil. As google has grown and acquired market share they leveraged that to spur more growth and market share. They act in their own best interest. It’s not necessarily evil, but selfish motives and evil sometimes look a lot alike.

    • asveikau 6 years ago

      I am sure tons of people here know Google history better than I. What was the year they fully turned on the advertising spigot?

      I feel like in the first few years it was not yet an advertising giant. Wikipedia says they had small text ads in 2000, but seems to imply that the advertising didn't get really huge for them until after IPO. Correct me if I am wrong, I was not following super closely in those years. But that would provide a few years of "not being evil".

  • nullc 6 years ago

    > Remember when Google's mantra was "Don't be evil" ???

    Meh, they preserved 2/3rds of it.

nokya 6 years ago

I have my own proxy filtering all my desktop and mobile traffic; anything 'AMP' is filtered on the spot. Sometimes nothing shows up, sometimes the original server responds after a few seconds. I'd rather not see the page at all than play this game.

paxys 6 years ago

Regardless of your feelings on AMP, the premise of this article is wrong. Security standards and expectations are still exactly the same in this model. You see "google.com" in the address bar and trust that Google is serving you the right content.

bamboozled 6 years ago

What happened to The Internet? Honestly.

Google can do this stuff if they like... on their own network, in their own ecosystem.

Insane that they got rich from hyperlinks and now want to fiddle with them so others can't.

aronpye 6 years ago

AMP is the main reason I switched to DuckDuckGo from Google. Webpage rendering often broke on iOS, in particular when scrolling, where the page would just go blank.

  • collinmanderson 6 years ago

    Yeah AMP is one of many reasons I switched to DuckDuckGo. “Why am I giving Google this much power? Why am I contributing to them being a monopoly?” were the general reasons.

    Hearing people mention low quality search results was what kept me off, but I’ve actually only needed to do a google search about once a week, far less than I was expecting.

anonymousDan 6 years ago

Google AMP links are also so annoying when you want to send other people a link to something you've searched for. One of the main reasons I use DuckDuckGo.

lazyjones 6 years ago

What will all those submissive publishers do once Google decides to monetize AMP by injecting their own ads with 0 revenue for the publisher?

geertj 6 years ago

I noticed this two days ago and it was the final straw that made me switch to DuckDuckGo on all my devices.

Angostura 6 years ago

This is absolutely the reason that Google is no longer my default search engine on mobile.

graiz 6 years ago

AMP is the consequence of HTML and CSS being awful at performance. I'm not sure why the underlying problem hasn't been addressed. Rendering text and images on a page shouldn't require a secondary cache and an AMP rendering framework on top of a ton of CSS and layers of JavaScript. It's text and images.

buboard 6 years ago

I wonder how many phishing sites are masquerading as Google, from Google.

anonymousDan 6 years ago

Is anyone aware of any Firefox/brave plugins that strip Google amp links?

vipulved 6 years ago

Grabby and wrong, and most of the value created is for Google.

young_unixer 6 years ago

I don't get the point of the article.

I know Google wants browsers to lie to the user about the website they're visiting. But the article screenshot is a case where that's not happening, it's displaying the real URL.

thierryzoller 6 years ago

You are at home in front of your screen. Thank me later.

pvg 6 years ago

Quis hic locus, quae regio, quae mundi plaga?

kebman 6 years ago

Haven't news sites pushed lawsuits over this?

metalliqaz 6 years ago

I don't like AMP and I wish we just fixed the problems it is designed to handle at the root cause.

stuff4ben 6 years ago

it's 2020 and y'all are still using Google?!?! DDG all the way!

user764743 6 years ago

You're on a website stealing content from NYT.

rdiddly 6 years ago

A: You are on Google. There's no confusion.

surajs 6 years ago

Google sucks, i'm going golfing.

gorgoiler 6 years ago

With the utmost respect to you and the other commenters here, when I see positivity about the abstract, hypothetical technical merits of something with a long history of, in practice, being part of an extremely controversial power play it reminds me a lot of the comments I see promoting a widely installed piece of process management software — one which a lot of people don’t really want, whose subtle changes to layers of abstraction introduce new and unexpected bugs that can only be fixed by further coupling, and which can also be reasonably described as a single entity politically maneuvering itself to bring order to the chaos at the expense of living in, for want of a better term, a dictatorship.

Well at least under Google AMP, the pages loaded on time.

  • simias 6 years ago

    I don't like systemd and I think it's an overengineered mess of dubious value but comparing an open source linux init system to the long game Google seems to be playing to act as a proxy for the web is absurd and doesn't make any sense at any level.

    Your comment is pure flamebait without any insight.

    • gorgoiler 6 years ago

      The long game which X seems to be playing to take control of Y

      Oh, exactly that: Google/AMP with the web, systemd with Linux.

      The point though isn't about the technology or the tactics. It's about the seemingly-benign apologia from third parties that bit-by-bit chips away at the objectors' arguments. It's part of how X wins their long game and takes control of Y.

      I don't even think the original comment to which I replied does this, but it reminded me of a pattern. The way in which AMP/web and systemd/Linux are playing out are similar enough to be worth thinking about.

      (I almost certainly lack any kind of meaningful insight and – according to quite a few others here – an ability to write. It's disappointing to be accused of pure flamebait though.)

  • ElliotH 6 years ago

    I struggle to see how comparing people who like particular technical products to supporters of dictatorships shows any of the "utmost respect" to any commenters that you claim at the start of your comment.

  • 0898 6 years ago

    Is this some kind of human buffer overflow?

  • dang 6 years ago

    We detached this subthread from https://news.ycombinator.com/item?id=23729479.

  • cosmodisk 6 years ago

    You might want to work on reducing the length of your sentences; it's very hard to get the meaning behind them.

    • Noumenon72 6 years ago

      To me all the difficulty is caused by the obscuration. Once you get to the point where he refuses to say which software he's talking about, your attention scatters to all the different things he might be referring to. If I talk about Oracle's subtle changes to layers of abstraction, you can read the sentence and decide whether that describes Oracle well or not. If I talk about "a widely installed piece of process management software" doing that, everything else I say is just a riddle trying to figure out which one.

      • afarrell 6 years ago

        I think thats part of the point. It makes his statement conversationally non-falsifiable because if I say something about (taking a guess here) systemd, that will say more about my own biases than what I am responding to.

        • rblatz 6 years ago

          My best guess was Jira, not everything made sense but it was the only thing I could think of.

    • BurningFrog 6 years ago

      Yeah, my brain threw `OutOfMemoryBuffer` about ⅔ of the way in, through several rereads.

  • Clausinho 6 years ago

    I'm out of the loop, care to elaborate which PM software you are referring to?

  • gsich 6 years ago

    >Well at least under Google AMP, the pages loaded on time.

    Yeah, because Google is cheating.

    • gorgoiler 6 years ago

      By the way, I was referring to a common English aphorism about punctuality on 20th century Italian railways:

      https://www.economist.com/science-and-technology/2018/11/03/...

      • yesenadam 6 years ago

        It seems I have to "register" with the Economist to read that.

        • catalogia 6 years ago

          It's a reference to "Say what you like about Mussolini, he made the trains run on time."

          This quip is generally used sarcastically or wryly. "Say what you like about [something that's seriously bad], at least [frivolous matter] has improved."

          • jolux 6 years ago

            As far as I know it’s not even true of Mussolini though.

            • catalogia 6 years ago

              Yes well it's not really true of AMP either (using a strict content blocker beats AMP load times.) The meme of 'Mussolini making trains run on time' is often used in cases where the trivial upside is dubious at best.

          • pacifika 6 years ago

            Actually it’s a joke about concentration camps and it’s in bad taste and offensive.

            • catalogia 6 years ago

              > Actually it’s a joke about concentration camps

              No it isn't.

              Gorgoiler says "a common English aphorism about punctuality on 20th century Italian railways" and links to https://www.economist.com/science-and-technology/2018/11/03/... Both are clearly about Mussolini making trains run on time (or rather, not actually doing so.) Wikipedia describes the origins of the quip as such:

              > Mussolini was keen to take the credit for major public works in Italy, particularly the railway system.[109] His reported overhauling of the railway network led to the popular saying, "Say what you like about Mussolini, he made the trains run on time."[109] Kenneth Roberts, journalist and novelist, wrote in 1924: "The difference between the Italian railway service in 1919, 1920 and 1921 and that which obtained during the first year of the Mussolini regime was almost beyond belief. The cars were clean, the employees were snappy and courteous, and trains arrived at and left the stations on time — not fifteen minutes late, and not five minutes late; but on the minute.[110]"

              The dubious premise of Mussolini being responsible for reliable trains predates the Holocaust.

              As for "it’s in bad taste and offensive", I agree that comparing fascism to systemd is in bad taste.

    • ehsankia 6 years ago

      How is providing a framework which forces greedy publishers to keep their websites light "cheating"?

      • gsich 6 years ago

        Chrome (at least on Android) preloads AMP sites after searching. It doesn't do it for other sites.

        I would classify that as cheating.

        • ehsankia 6 years ago

          It can't do that on other sites, it's a technical limitation. Part of the reason why they push AMP and provide every website with free CDN hosting is exactly to enable caching. If the "purpose" of AMP counts as cheating, then sure?

          Once signed exchanges become a thing, that may change, but as seen in this thread, there's a lot of push back for that.

          • gsich 6 years ago

            An arbitrary technical limitation. You could preload any site you want. Google wants to push AMP, so they only preload those.

            Well of course it's cheating if you compare load times between preloaded and non-preloaded sites and then argue that "AMP is faster". That's obviously wrong because the conditions are not the same.
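            To make the asymmetry concrete, here is a minimal sketch (with entirely hypothetical result objects and field names; this is not Google's actual code) of a results page that emits prefetch hints only for AMP results:

```javascript
// Illustrative sketch: a results page emits <link rel="prefetch"> hints
// only for AMP results, so only those appear to load instantly on click.
// Result objects and field names here are hypothetical.
function prefetchHintsFor(results) {
  return results
    .filter((r) => r.isAmp) // non-AMP results get no hint
    .map((r) => `<link rel="prefetch" href="${r.cacheUrl}">`);
}

const results = [
  { title: "AMP article", isAmp: true, cacheUrl: "https://cdn.example/amp/doc" },
  { title: "Plain article", isAmp: false, cacheUrl: "https://example.com/doc" },
];

const hints = prefetchHintsFor(results);
// `hints` contains exactly one entry: the hint for the AMP result;
// the plain page starts loading from zero only after the user clicks.
```

            Comparing load times under that setup measures the prefetch, not the page.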

  • catalogia 6 years ago

    I enjoyed reading this comment. The way you write is fun.

  • jevgeni 6 years ago

    That first paragraph is a single sentence... Tried understanding what you are saying a couple of times and I just can’t.

cannedslime 6 years ago

The only one who wins when media outlets integrate AMP is Google. Stop the madness, for the love of an open internet. You gain nothing; you are just giving Google control over content as the new norm.

pnako 6 years ago

You're on Google, the 21st century version of AOL

chvid 6 years ago

I think the author has got it all wrong.

You are supposed to trust Google.

And when your browser says 'Google' - you know it is all good.

killjoywashere 6 years ago

If you think this is creepy, wait until you see Menlo Security. That's security for everyone except the user.

Kiro 6 years ago

AMP is disliked by privileged people who have never experienced how truly awful browsing the web with a bad internet connection can be.

  • mikro2nd 6 years ago

    Can you elucidate what you'd consider a "bad" internet connection?

    I live Out In The Styx, in a Shithole country, at the end of an allegedly 2MB/s piece of wet string masquerading as an internet connection that seldom lives up to its advertised performance. AMP has never once made any significant difference to my web experience.

  • saagarjha 6 years ago

    Perhaps they instead see the “solution” to be problematic.

    • Kiro 6 years ago

      Yes, but not acknowledging the problem, and that AMP is a solution (even if it's the wrong solution), defeats any meaningful discussion. We need something like AMP but outside of Google's control.

      My point is that it's so easy to see only the drawbacks and none of the benefits when you're sitting on a good connection. All threads on HN become completely one-sided, with everyone just backslapping each other's complaints.

7leafer 6 years ago

"Does google rig the system to squash its rivals and hurt us?"

Well, this is one kind of modern skepticism I particularly like: Does gravity kill if one jumps off a cliff? Is a sphere round? Is it really bad if we give up our freedom? Who are we to think for ourselves?

When questions like this are asked, the damage is already done. And it seems like it's already beyond repair.

jakeogh 6 years ago

Users executing their code are the product. Giving those people the independence of knowing who they are talking to is contrary to their business model.

Image AMP? No URL for You! https://news.ycombinator.com/item?id=23322730

Tangential: https://news.ycombinator.com/item?id=20930270
