The Dunant subsea cable

cloud.google.com

262 points by johannesboyne 5 years ago · 261 comments

lifeisstillgood 5 years ago

Old timer story - years ago, Demon Internet (my old employer) was beginning to enter super-growth and wanted to really set itself up for transatlantic connections. So they decided to buy a T1 - 45Mbps link across the Atlantic. Now it turns out that BT had only ever resold fractions of T1s - they had never actually had anyone want a whole one. And as such their sales commissions did not cap out.

So, we rang up, a sales guy picked up the phone and got a million pound pay day, and resigned that evening.

But our customers were happy so that's what counts :-)

  • squigg 5 years ago

    As a Future Sound of London fan, I was so proud to be an early Demon dial-up customer when they name-checked their email address on Demon on their ground-breaking ISDN radio transmission on Radio 1. (To be read in a monotone female delivery) "For further information, please access the following code ... F S O L .. ACK ... F S O L ... DEMON ... CO ... UK"

    Good times - they were a wonderful company, thank you

  • john37386 5 years ago

    T1 maximum speed: 1.544 Mbps. T3 is about ~45 Mbps.

    Reference: https://en.wikipedia.org/wiki/T-carrier

    • lifeisstillgood 5 years ago

      My memory is very hazy - too many beers in North London pubs far too many years ago to remember clearly :-0

    • hazeii 5 years ago

      And it was probably an E3 (UK/Europe is E1/E3, US/JP is T1/T3)

      • morei 5 years ago

        E3 is ~34Mbps, so it probably wasn't. It's true that the E1/E2/E3 hierarchy is used outside the US, but links to the US can be either depending on carrier preference.

        The T1/T2/T3 and E1/E2/E3 hierarchies join at the STM-1 level: An STM-1 can be subdivided as 4 x E3s or 3 x T3s.

        This means that on an EU<->US SDH link, an STM-1 can be demuxed into either E3s or T3s, so you can have both standards on the same fiber.

    • tinus_hn 5 years ago

      In internetworking, a Tier 1 carrier is a carrier so well interconnected that it pays no one for transit; other parties pay it to exchange traffic.

      • zenexer 5 years ago

        T1 and Tier 1 are not the same thing.

        • tinus_hn 5 years ago

          That’s true. However at least in my memory back in the day people would call the 10 megabit directly connected university connections t1 lines, because of the tier 1 thing.

  • walrus01 5 years ago

    On a slightly later time scale, I still remember the first time I saw an OC-192 linecard in a router in person, at a major IX point, and how incredibly impressed I was. This was in the era when a transcontinental or submarine transatlantic OC-192/STM-64 circuit cost an astonishingly huge amount of money every month.

    • chasd00 5 years ago

      I remember seeing one of those in a UUNET data center in north Dallas in the 90s. I was amazed at the tech and also amazed at how ordinary it looked.

  • ASinclair 5 years ago

    Great story!

ksec 5 years ago

I know a lot of the focus is on bandwidth. But are we making any progress on long-distance subsea cables using hollow-core fibre, achieving close to the vacuum speed of light for the lowest theoretical latency? Imagine cutting latency from the US West Coast to Hong Kong by 50ms!

Light travels at only around 2/3 of its vacuum speed within fibre.

The previous decades have been about bandwidth. It's time we shift our focus to latency. 5G is already part of that, and 6G is pushing it further as a standard feature. I wish other parts of the network would start thinking about latency too.

Maybe not just the network, but everything, from our input devices to displays. We need to enter the new age of Minimal Latency Computing.
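
A rough sanity check on that 50ms figure, in Python (the ~11,000 km great-circle distance for the US West Coast to Hong Kong is my own rough assumption, not from the thread):

  C = 299_792   # speed of light in vacuum, km/s
  D = 11_000    # rough great-circle distance, US West Coast to Hong Kong, km

  t_fiber = D / (C * 2 / 3) * 1000   # one-way ms in conventional fiber (~2/3 c)
  t_vac = D / C * 1000               # one-way ms at full c
  print(f"fiber {t_fiber:.0f} ms, vacuum {t_vac:.0f} ms, "
        f"saving {t_fiber - t_vac:.0f} ms one-way")
  # ~55 ms vs ~37 ms: ~18 ms saved one-way, ~37 ms round trip. Real cable
  # routes are longer than the great circle, pushing the saving toward 50 ms.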

  • idlewords 5 years ago

    You gotta bore for that sweet latency win. A chord tunnel between San Francisco and Hong Kong would save 1300 miles (20% improvement right there), and if you drill it straight enough, you won't even need a cable.

    • BelenusMordred 5 years ago

      Please don't give the HFTs ideas, they'll probably do it and cause a half-dozen tsunamis in the process.

    • jeffbee 5 years ago

      Heh, baby steps maybe? The existing cables aren't even short paths along great circles. The Oregon-Japan cable Google owns is 12,000 km of cable along a 7,500 km path.

    • extropy 5 years ago

      Need to have lava shields for that.

    • rorykoehler 5 years ago

      How deep would this bore tunnel be at the centre most point if it was perfectly straight? Would it go into the mantle?

      • potiuper 5 years ago

        d = r(1/cos(s/(2r)) - 1) = 3958(1/cos(6906/7916) - 1) ≈ 2198 mi. Yes, into the lower mantle, with only 692 mi to go to the outer core. What I would love the internet to explain is how to justify the hate for HFT yet the love for Musk, since he is the one in the driver's seat at the moment for this stuff, with Starlink and the Boring Company.
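
        (The chord-depth arithmetic as a small script, using the same figures as above:)

          import math

          R = 3958   # Earth radius, miles
          S = 6906   # surface (great-circle) distance between the endpoints, miles

          # Maximum depth of a straight chord tunnel below the surface:
          # depth = R * (1/cos(S/(2R)) - 1), with the half-angle in radians
          depth = R * (1 / math.cos(S / (2 * R)) - 1)
          print(f"max depth ~ {depth:.0f} miles")   # ~2,200 miles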

        • koheripbal 5 years ago

          One of the important components of online hate is that it requires zero justification or logical consistency.

          It just needs to feel gratifying.

        • wmf 5 years ago

          Because HFT helps the rich AF get richer but Starlink benefits normal people?

          • potiuper 5 years ago

            Satellite internet already exists; Starlink's defining feature is lower latency, both by being in a lower orbit and via inter-satellite links. It does benefit consumers by introducing another satellite internet competitor, but how many "normal" people want to rely on satellite internet or, if they do, care about a few extra hundreds of ms of latency? (Inter)national wireless companies have a tendency to consolidate and lobby out smaller companies and municipalities, who then have less incentive to build out fiber, and landline companies, if any remain, have further justification to cut cords. Starlink is now the leading solution for global low-latency connections for HFT, by being in near-vacuum in low orbit.

            The case for HFT benefiting non-professional-trader Mrs. Mainstreet is that she no longer has to eat the larger spread offered by the big-bank market maker every pay cycle when the 401K contribution hits, with HF traders providing liquidity. The opportunity for smaller traders to make the market is no longer there, but the odds that they would have had a chance to begin with have been stacked against them for a long time, with the cost of the fastest connection going from marginal to now near-insignificant.

            • roomey 5 years ago

              We used to manage a remote branch over geostationary satellite; it was an exercise in pain. We used to check the local weather forecast to see if it was raining before doing any work on the servers. GEO internet is awful; I think you are underestimating how much usability difference there would be between LEO and GEO latencies and bandwidth (because GEO bandwidth was awful too).

            • p1necone 5 years ago

              Current satellite internet is really bad. ~600ms of latency is very noticeable even just loading webpages, and the throughput isn't great either.

              Also multiplayer gaming is rather popular, and that's just not possible with that much latency.

              VOIP is a pretty terrible experience with that much latency too.

            • Seanambers 5 years ago

              " care about a few extra 100s of ms of latency?"

              Well, as someone who grew up in the modem era(90's) and was trying to play online fps games. I cared quite a bit about latency. Normal people also like things to be quick you know :)

              Based on that experience in the 90s to this day i want my internet connectivity to be as fast as possible and i'm willing to pay.

              Low latency enables video/audio chat amongst other things and just a better experience.

              A quick google search gives 600 ms latency for satellites(not Starlink) thats quite a alot. Also bandwidth is a issue with existing providers i think.

            • Retric 5 years ago

              The ping on satellite internet is usually around 640 ms as it's a ~45,000 mile round trip; worse, the bandwidth is terrible. That kind of latency breaks a lot of assumptions in the modern web. Dropping to ~20ms and dramatically upping the bandwidth is a huge win for rural internet users.

              PS: I am on the waiting list for starlink.
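
              (A propagation-only sketch of those numbers; the altitudes are the commonly quoted ones, and a ping is assumed to cross the up/down path twice, with processing and routing overhead on top:)

                C = 299_792   # speed of light, km/s
                for name, alt_km in [("GEO", 35_786), ("Starlink LEO", 550)]:
                    rtt_ms = 4 * alt_km / C * 1000   # up+down out, up+down back
                    print(f"{name}: >= {rtt_ms:.0f} ms round trip")
                # GEO: >= 477 ms (observed ~600+ ms with overhead); LEO: >= 7 ms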

        • samstave 5 years ago

          And tell me how one would service any problems that arise either by tectonic movements or breaching an alien/breakaway civilization hollow earth chamber?

    • m463 5 years ago

      > if you drill it straight enough, you won't even need a cable.

      Well, yes and no. I recall they wanted to pursue hollow cables in the early days of optical cabling, but it turned out solid fiber was the answer.

      (sorry, can't find a good reference)

      So FTTC (Fiber Through The Core) is what you want.

    • dekhn 5 years ago

      Couldn't you start experiments using the Alameda-Weehawken tunnel?

    • jodrellblank 5 years ago

      Alameda enterprises excited by rumours of new chord tunnel Thursday, low latency burrito delivery futures up 5%.

    • thomaslangston 5 years ago

      More feasible would be transmitting neutrinos or some other signal that would not be blocked by the Earth.

      • jxcl 5 years ago

        > More feasible

        If they don't interact with the thousands of miles of earth between the source and the destination, they probably also won't interact with the receiver! :p Imagine the retransmission rates!

        https://en.wikipedia.org/wiki/Neutrino_detector

        • retzkek 5 years ago

          The MINERvA experiment at Fermilab already demonstrated communication with neutrinos, admittedly over short distance: "The link achieved a decoded data rate of 0.1 bits/sec with a bit error rate of 1% over a distance of 1.035 km, including 240 m of earth."

          https://arxiv.org/abs/1203.2847

          Anyone from an HFT firm who wants to look into a partnership researching a neutrino link to the CME data center feel free to reach out :)

        • cameldrv 5 years ago

          There has been a lot of progress in the past 20 years on antineutrino detection. Antineutrinos are produced by fission, so there's been a fair bit of interest in detecting them to spot covert nuclear tests, as well as potentially as a new modality for detecting nuclear submarines.

          I think it could become possible before too long to use this to transmit data. It would probably be a ~billion dollar project, but the HFT arbitrage market is essentially winner-take-all, and may be large enough to support this size investment.

        • atonse 5 years ago

          And you'd also have to ignore all the insane amounts of noise coming from regular neutrinos whizzing about in the universe.

          • parineum 5 years ago

            If you've built a reliable detector, you've already built something that can intercept them. You just need to make a shroud around your detector and a tube facing your transmitter out of the same material.

            • ISL 5 years ago

              There are ~65,000,000,000 neutrinos from the sun passing through each square centimeter of your hands every second as you read this. There are no materials on Earth that can reliably stop any given neutrino. For that, one needs densities greater than those generally found in stellar cores.

              Neutrino detectors work by maximizing dumb luck through being both very large and very, very clean (low radioactivity). The transmitter-detector systems work by sending oodles of very energetic neutrinos at a well-defined time and looking for a rare coincident flash in the detector.

              • parineum 5 years ago

                Any detector useful for communication is also an interceptor. The way we detect neutrinos now is not useful for communication.

          • ISL 5 years ago

            If you're sending neutrinos at a known energy from a known location and in a narrow time-coincidence window, you can hammer most backgrounds way down.

            The low detection rate isn't so terrible either -- one only needs the bits that are detected to be tradably correct almost all the time.

            The hard part is arranging to make enough money to fund the accelerator and detector.

      • Arrath 5 years ago

        Dear god the packet loss.

  • O5vYtytb 5 years ago

    My bet is on tech like Starlink with inter-satellite communication. Starlink should have lower latency with space lasers compared to fiber.

    • xoa 5 years ago

      OP is talking about photonic bandgap fiber I think, or perhaps another kind of photonic-crystal fiber. At any rate, whereas regular fiber, which guides light via differences in refractive index, carries light at only about 70% of c, photonic bandgap fiber can reach something like 99.7% of c, which is close enough to c in vacuum to essentially eliminate the difference vs a free-space EM link (particularly for space-based ones, which face an extra minimum RTT distance penalty). Last I checked, though, 3-4 years ago they needed fairly frequent repeaters, were harder to mass-produce, etc.

      I don't know of any being deployed long distance, though in principle they'd be really valuable for intercontinental backbones. Starlink fills a huge gap in existing infra, and there are places that won't see any sort of fiber, let alone fancy microstructured fiber, for the foreseeable future (or ever, obviously, in the case of ships/aircraft). But the bandwidth isn't great. Each current sat does, I think, 20 Gbps, and though no doubt that'll increase over time, that's literally orders of magnitude short of this single cable alone. Having the sats support direct ground optical links for backbone usage might be interesting someday, but weather attenuation will never stop being a problem with that. Starlink is filling in the gaps for fiber infrastructure, not replacing it. They're complementary.

      So I agree it would be great to see more advanced fiber deployed long distances and start to shrink latency for everyone, and it would be interesting to know what technical obstacles remain, if any (maybe a lot remain?). A 40% speed boost while still having massive bandwidth isn't nothing.
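
      (Rough numbers on that speed boost, as a sketch; the ~6,400 km transatlantic route length is a figure I'm supplying for illustration:)

        C = 299_792   # km/s
        D = 6_400     # assumed transatlantic route length, km

        for name, frac in [("conventional fiber", 0.70), ("hollow-core fiber", 0.997)]:
            print(f"{name}: {D / (C * frac) * 1000:.1f} ms one-way")
        # ~30.5 ms vs ~21.4 ms: roughly 9 ms one-way (~18 ms RTT) on the same route.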

    • mmmBacon 5 years ago

      Starlink satellites are in orbit 550km high. So any journey would add at least 1100km. Moreover, I'm not sure that a single satellite would be able to hit another one across transpacific distances; it may need to go through multiple hops to get there.

      Each hop will add latency since the signal needs regeneration. So it's not clear to me that a swarm of satellites is a real winner from a latency POV. Furthermore, given the costs to put the constellation up there, it's extremely expensive on a $/bit basis, and I'm not sure how it could compete against fiber.

      The value of Starlink is providing service in areas lacking existing broadband infrastructure where the cost to provide service exceeds the cost of Starlink.

      • mrtnmcc 5 years ago

        >> Starlink satellites are in orbit 550km high. So any journey would add at least 1100km

        Might want to check with Pythagoras on that one...

        • ptudan 5 years ago

          Meh, he said at least. There could be cases where you beam up then down nearly vertically (same city).

          • function_seven 5 years ago

            So, "at most" then, right?

            The further you are from the other end, the less additional distance the satellite adds on.

          • jamessb 5 years ago

            But the correct statement is "no more than" not "at least".

            Consider a right-angled triangle with base length d and height 550, corresponding to transmission from a base-station to a satellite. The hypotenuse has length sqrt(d^2 + 550^2), so the difference in length between the hypotenuse and base is sqrt(d^2 + 550^2) - d.

            This has a maximum of 550 when d=0 (i.e., shooting straight up), and decreases as d increases: https://www.wolframalpha.com/input/?i=plot+sqrt%28d%5E2+%2B+...

            Alternatively, consider the triangle inequality: the sum of the lengths of any two sides must be greater than or equal to the length of the remaining side. This directly implies that the difference in length between the hypotenuse and base is less than or equal the height [base + height >= hypotenuse implies height >= hypotenuse - base].
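
            (A quick numeric check of that, in Python:)

              import math

              for d in [0, 100, 550, 1000, 5000, 10000]:
                  extra = math.sqrt(d**2 + 550**2) - d
                  print(f"d = {d:>5}: extra = {extra:.1f}")
              # 550.0, 459.0, 227.8, 141.3, 30.2, 15.1 -- strictly decreasing,
              # and never more than the 550 height.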

            • sliken 5 years ago

              Er, no, "the difference in length between the hypotenuse and base is sqrt(d^2 + 550^2) - d".

              The hypotenuse is cos(angle)*base.

              If you think about it for a minute: if a sat is 500 miles up directly overhead, that's the closest it will ever be; as it flies off, the hypotenuse gets longer, not shorter.

              So ideally you bounce off a sat overhead, (distance of 1100), any single hop will be longer, and to get across an ocean you'll likely need more than one hop.

              Basically the sin(beam path) will never be less than 550, and the length of the beam will never be less than 550.

              • jamessb 5 years ago

                ~~D'oh - yes, that formula is true only if the triangle is right-angled, which is true for only a single base length.~~

                Edit: Actually, this is always true: we are considering a right-angled triangle where the base is the horizontal distance from the ground station to point under the satellite, the vertical part is the 550 miles between the point under the satellite and the satellite, and the hypotenuse is the line joining the satellite and ground station.

                > if a sat is 500 miles up directly overhead that's the closest it ever will be, as it flies off the hypotenuse gets longer

                Yes: as the horizontal distance d increases, then the length of the hypotenuse (sqrt(d^2 + 550^2)) increases.

                However, the difference between this and the horizontal distance (sqrt(d^2 + 550^2) - d) decreases.

                -----------------------------------------------

                If the angle from the horizontal to the line between the satellite and base-station is theta, then:

                sin(theta) = 550/hypotenuse => hypotenuse = 550/sin(theta)

                tan(theta) = 550/base-length => base-length = 550/tan(theta)

                difference in length = 550/sin(theta) - 550/tan(theta)

                [which simplifies to 550 tan(theta/2)]

                We are interested in angles between 0 degrees (horizontal - corresponding to the limiting case of infinite horizontal distance between the satellite and base station) and 90 degrees or pi/2 radians (straight up): https://www.wolframalpha.com/input/?i=plot+550%2Fsin%28x%29+...

                This is always between 0 and 550. The triangle inequality holds: for a single hop from base-station to satellite, the increase in length is never more than 550.

                But as you point out, there may also be multiple hops.

                > So ideally you bounce off a sat overhead, (distance of 1100),

                This is the shortest total ground-satellite-ground distance, but as you cover 0 horizontal distance it is the worst case: the difference between the ground-satellite-ground distance and the length of the direct ground-ground line is maximised.
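
                (And a numeric spot-check of the tan(theta/2) simplification:)

                  import math

                  for deg in [5, 30, 60, 89]:
                      t = math.radians(deg)
                      lhs = 550 / math.sin(t) - 550 / math.tan(t)
                      rhs = 550 * math.tan(t / 2)
                      print(f"{deg:>2} deg: {lhs:.3f} vs {rhs:.3f}")
                  # Identical at every angle: (1 - cos t)/sin t = tan(t/2).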

            • imoverclocked 5 years ago

              Are all base stations directly underneath a satellite?

              I think this is an over-simplification if we are chasing pedantry; there are cases where it will be more and others less, so the slightly more precise wording might actually be "about 1100km."

              To the larger picture: it seems we often lose that order of length on the ground due to existing network topologies and geographical limitations.

              • jamessb 5 years ago

                Yes, this is an oversimplification: the original statement seemed to be based on getting a fact about trigonometry backwards, and I was just trying to resolve the underlying confusion.

      • russdill 5 years ago

        1100km / c is 3.7ms. In free space, light is about 50% faster than in fiber. So as long as the distance you are covering is more than 2200km, you'll overcome that. Of course, there's also the consideration that there can be a lot of hops in terrestrial links, and they're often very far from a straight-line path.
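
        (The break-even arithmetic as a sketch, assuming fiber at ~2/3 c and a fixed 1,100 km detour:)

          C = 299_792   # km/s
          detour = 1_100

          # Solve d / (2/3 * C) = (d + detour) / C  =>  d/2 = detour
          d = 2 * detour
          print(f"break-even ground distance: {d} km")              # 2,200 km
          print(f"fixed detour penalty: {detour / C * 1e3:.1f} ms")  # ~3.7 ms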

      • LargoLasskhyfv 5 years ago

        Are you sure about the necessary regeneration? Let me hand wave from the dark skies here for a moment:

        1.) Think of the precision mirrors in the so often mentioned EUV-lithography equipment from ASML for latest generation chips from TSMC.

        2.) Now imagine something like that on board of a satellite, maybe smaller.

        3.) Have 2.) moveable with sufficient precision to bounce the rays from satellite to satellite in realtime, without having to regenerate them in any way for about 4 to 5 hops.

        4.) problem solved by purely 'optical' mesh while signal is 'in orbit'.

        kthxbaiiii!

        • morei 5 years ago

          Those 'precision EUV mirrors' achieve a reflectance of about 70%, i.e. they absorb ~30% of the EUV light that reaches them. :)

          More seriously, those mirrors are special because they use Bragg reflectors to handle 13.5nm light. They're not special for their precision, nor their reflectance.

          Setting that aside, the major problem with your proposal is that lasers still have significant beam spreading. So the mirrors would need to be large enough to encompass a spread beam at every step, which adds weight and volume for both the mirror and the tracking mechanism. The tracking mechanism is particularly problematic because moving mass on a satellite affects its attitude, so you either need precision counterweights to null it out, or large reaction wheels.

          Using MEMS mirrors instead would solve some of the mass issues, but MEMS mirrors have very limited tracking (typically limited to a single axis) which would probably render them impractical.

          Far, far easier to just send and receive the signal at every step.

          • LargoLasskhyfv 5 years ago

            Hrm. Taken from https://www.asml.com/en/technology/lithography-principles/le... :

            > Flatness is crucial. The mirrors are polished to a smoothness of less than one atom’s thickness. To put that in perspective, if the mirrors were the size of Germany, the tallest ‘mountain’ would be just 1 mm high.

            edit: What I meant to say was rather something with that precision reflecting whichever wavelengths are used for laser communications. Which would be infrared, I guess? Or are we talking Maser?

        • mmmBacon 5 years ago

          While an interesting idea, I think you’ve greatly understated the problem. First, lasers and coherent light beams diverge, light cannot stay perfectly collimated and it’s not really possible to collimate well over such long distances. So the receiver, >10,000km away, will “see” only a small cross-section of beam. The efficiency of this is defined by something called the overlap integral between the areas of the beam and the detector. Think of it like the amount of light from a flashlight that gets through a pinhole in a sheet of paper. This reduces the available signal power significantly. If you introduce mirrors you have the mirror loss plus the vignetting losses for each bounce. This is likely much worse.

          • LargoLasskhyfv 5 years ago

            But the receiver won't be >10,000km away in the configuration I mentioned. 4 to 5 'hops', remember?

            edit: arrgh, forget it... one beam, reflected multiple times until 'end of the line', got it...(sigh)

    • m_eiman 5 years ago

      Won't they be at low enough altitude that they'll need more hops than fiber to get around the globe, where each hop adds at least some delay?

      • xoa 5 years ago

        Not sure what you mean by "hops" here? The current beta sats mostly act as "bent pipes", where they relay directly between user terminals and ground stations, which then go out to the regular net from there. But the final deployment sats are intended to have free-space optical links between satellites (these are currently deployed and being tested on the most recent polar-orbit ones), so a connection can go entirely through the mesh in space until it reaches the nearest physical ground station (probably with some weighting for congestion and priority, of course). The orbital RTT penalty will only be paid once, and with tens of thousands of sats the optical route will actually be much more direct for many people when crossing oceans than going through whatever undersea fiber links there are. Compared to regular fiber, final Starlink will definitely win on latency over sufficient distances.

        But Starlink will never match the bandwidth and reliability that fiber can do, nor is it meant to. So it's not a replacement, just another awesome option.

        • xoa 5 years ago

          Also, just to run the math on an example for "actually be much more direct for many people when crossing oceans": say someone is somewhere on the southern coast of Alaska, be it more towards King's Cove or back towards Newhalen, and wants to talk to someone in Sapporo, Japan. As the bird flies that's something like a 2500-3000 mile distance. But in practice there is no undersea cable directly linking Alaska and Asia (unless that's changed in the last year or two). Instead a connection probably has to go to Anchorage, then to Seattle, then probably to Tokyo, and then out to the rest of Japan from there. This could easily turn a 2500 mile path into a 7300 mile path. Starlink satellites in the current plan AFAIK are going to be heavily in shells 214 to 350 miles high (including the Ku/Ka-band current ones and future V-band ones). At a 350 mi orbit, so maybe a 700-1000 mile up/down penalty, the total distance could still be half the cable distance in this example, even before latency advantages.
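
          (Rough numbers for this example, using midpoints of the distances quoted above:)

            C = 299_792    # km/s
            KM_PER_MI = 1.609

            fiber_ms = 7_300 * KM_PER_MI / (C * 2 / 3) * 1000   # detour path at ~2/3 c
            sat_ms = (2_750 + 850) * KM_PER_MI / C * 1000       # direct path + up/down, at c
            print(f"fiber {fiber_ms:.0f} ms, satellite {sat_ms:.0f} ms one-way")
            # ~59 ms vs ~19 ms: the space path wins by ~3x in this example.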

      • 0xffff2 5 years ago

        When you're traveling at the full speed of light in vacuum, compared to 2/3rds in fiber, even a few extra hops can leave you with significantly lower latency.

        • mrtnmcc 5 years ago

          Right, if they are using standard OTN framing, the hop latency should be ~3 microseconds (which is <1km of light propagation)

  • sneak 5 years ago

    I agree, but we can start on our local machines first. Most of the latency of modern computers isn't related to the network.

    • ksec 5 years ago

      Yes. Keyboard, Mousepad, Display, Sound, Graphics.

      I mean, input lag [1] is easily 50ms. But some of these require software changes, and anything software is expensive. The cost of this new cable is only $300M. Hardware innovation is getting faster and cheaper than software.

      [1] https://danluu.com/input-lag/

    • sunbum 5 years ago

      Latency reduction like that would mostly be relevant for traders.

onion2k 5 years ago

You could load a modern Javascript-powered website in less than a minute with that.

  • MaxBarraclough 5 years ago

    Don't worry, as hardware advances roll out, software bloat will always expand to fill the vacuum.

  • throw0101a 5 years ago

    Especially the linked blog post, as it doesn't render unless you allow JavaScript from gweb-cloudblog-publish.appspot.com (per uMatrix).

    Webdevs: is there a reason why a page would be designed so that having JS on is mandatory? Especially for something as prosaic as a couple of paragraphs of text.

    • arantius 5 years ago

      Laziness, baroque toolchains, and (most probably): Shipping your org chart.

    • dna_polymerase 5 years ago

      Ever heard of React, Vue or Angular?

      If you meant mandatory in terms of the actual medium requiring it, I can only point to interactive applications; aside from that, I don't think it would actually be mandatory.

      • Cthulhu_ 5 years ago

        All of those can be rendered server-side if need be, and in my experience that can actually lead to a superior browsing experience compared to plain server-side-served HTML. But it has to be done right.

        Gatsby + Netlify with a CMS-as-a-service like Contentful or Prismic will lead you to a good result. We made e.g. https://fox-it.com/ using that; its back-end is Wordpress, but it's drained empty to rebuild the website. Note how it works without JS: the dropdowns don't work, but they fall back to full-page navigation. Note how with JS enabled, all the content shows up instantly. This is how it's supposed to be done.

        • dna_polymerase 5 years ago

          Absolutely they can, yes. I wasn't saying they couldn't; I just answered the question. And those frameworks really introduced the idea of loading JS in order to load content to the broader masses. Things have evolved, sure, and it can be done right, but nonetheless, those frameworks are a reason JavaScript gets forced on the user.

    • intrasight 5 years ago

      I got a blank page when I opened the web site. So as usual I looked at HN comments to see what it was about.

      Here's an idea: add some HN logic to automatically move a comment that begins with "TL;DR" to the top of the thread.

    • Cthulhu_ 5 years ago

      > Webdevs: is there a reason why a page would be designed so that JS being on is mandatory?

      I think in the case of Google, it's because they've been told they are the best developers, the top 1% of SWEs: they went through rigorous interviews, are paid a small fortune (twice as much as they would get at a regular coding job), etc.

      So it's dick shaking. They need to show to the world that they're better than plain HTML websites, that they have a massive schlong, that they out-chadded the vast majority of software devs. Plain html? Psht, we can invent our own language, gonna put those six years of uni to work! Wordpress? This is beneath us! It has to be a client-side rendered JS-pulled-through-GWT behemoth because on my system it's... wait it's slower, but nevermind that it's technologically ALPHA.

      edit: actually looked at the source, looks like a Polymer / Web Components website. I've had to work in that once, it was dreadful compared to libraries used by real people.

  • a012 5 years ago

    My browser took forever loading that page, until I remembered to _allow_ JavaScript on it. Why on earth do they render everything except the content?

    • c22 5 years ago

      Yeah, it's one thing to build a page that won't render without JavaScript, but making the only part that does render be a never-ending spinner is just rude.

    • LMYahooTFY 5 years ago

      This is what I encounter more often than not lately.

      Is this due to more and more content simply being generated by JavaScript frameworks?

      • speedgoose 5 years ago

        Yes, and because developers do not have time for the very few people who have decided to disable JavaScript and not enable it when necessary.

  • forgot-my-pw 5 years ago

    But can I download a car with that speed?

  • linuxlizard 5 years ago

    if this were reddit, I'd be throwing gold at you.

markphip 5 years ago

I know there are many of these cables that have been around for years, but I am curious how they are physically secured, especially where they transit from ocean to land. Is there some long underground/undersea tunnel or conduit that the cable is routed through to the basement of some building? Or, if you are walking along the beach somewhere, is there just some cable running out of the ocean along the beach to some building near the shore?

I also wonder what kind of permissions and licenses you need to seek to run a cable across the ocean floor?

SloopJon 5 years ago

Wow, talk about a barrier to entry. Google already has Curie from North America to South America and Equiano from Portugal to South Africa. They're also working on Hopper from North America to the UK and Spain:

https://cloud.google.com/blog/products/infrastructure/announ...

I presume that the other trillion-dollar companies are getting in on the action too.

nippoo 5 years ago

This is super-cool! I found "enough to transmit the entire digitized Library of Congress three times every second" to be a really weird comparison though - I'm used to text being really small and compressible, and I doubt many people have an intuitive grasp of how much One Scanned Library of Congress is. How many hour-long Netflix/YouTube episodes per second, on the other hand...

jl6 5 years ago

The article mentions the number of fibres in this cable is 12, and that new technology was used to increase that number.

What is the limit on how many fibres can go in a cable? Should we expect future cables to have 50 fibres, or 100, or 1000, or more?

  • doikor 5 years ago

    The problem is powering the repeaters. More fibers are not going to help if you cannot use them. They mention in the article the improvements in repeater design that cut down power draw to allow more fibers to be used.

  • blantonl 5 years ago

    I think the limitation is based on repeater and laser-pump equipment to repeat the signal along the length of the cable run.

    I suspect that the repeaters and associated power equipment along the line are pretty big stuff. So the fact that this cable is able to "share" that equipment across the 12 fibers is a breakthrough in technology.

aynyc 5 years ago

I know nothing of this type of engineering. How do you even start a project like this? Map the bottom of the ocean, figure out all the danger zones? What is the cost of doing something like this?

  • tyingq 5 years ago

    "What is the cost of doing something like this?"

    Their Oregon to Japan cable, 9000km and laid in 2016, cost $300M.

    https://www.computerworld.com/article/2939316/googles-60tbps...

    • sparsely 5 years ago

      That is at least one order of magnitude cheaper than I would have guessed. Mind-boggling that it's cheaper to do that than to buy, like, the 4th-best meal delivery app in Canada or whatever.

      • tzs 5 years ago

        It's probably cheaper than people would expect because the long run across the deep ocean is a lot more straightforward than most people would expect.

        1. For the deep ocean parts of the route, cables and associated equipment (such as repeaters) are simply spooled out from the back of the cable laying ship, to settle on the ocean floor.

        2. For shallow waters, the cable is buried. This is done by dragging a plow along the bottom which cuts a furrow and puts the cable into it. The plow has an altitude control and a camera so that an operator on the ship can control it, and a magnetometer to check if the cable is properly buried behind it.

        3. For areas where burying isn't practical but they anticipate ships will anchor, they use armored cable.

        For #1, the costs are going to be the cost to operate the ship while it slowly spools out the cable and the cost of the cable. For #3, same thing, but with more expensive cable. For #2 I'd expect it is similar, except the ship goes a lot slower (about 0.5 knots when using the plow, compared to about 5 knots when laying surface cable).

        Finally, there is this.

        #4. At the shores, they need to avoid damaging reefs and other habitats, not wreck the beach, and things like that. The cable needs to be in conduits that are buried or anchored. And building those conduits needs to be done in a way that does not mess up the environment.

        So what you've got then for a long cable project is two ends that present underwater construction projects, the shallow waters near the two ends where you have to bury the cable, and then the long deep ocean stretch where you are just spooling the cable out.

        This suggests the costs are going to have a component that doesn't really depend on how long the thing is (the two ends and the shallow waters near the ends where burial is needed) and a component that is proportional to length (the long run between the two shallow waters near the ends).

        At 5 knots, it would take about 1000 hours to lay the deep sea part of the cable. If the ship costs $50k/hour to operate, that would be about $50 million. (I have no idea what it costs to operate these ships, but Google tells me that big cruise ships cost about that much to operate, and I'd guess that a cable laying ship is cheaper).

        Assuming the underwater cable itself is 10 times as expensive as regular cable, it's about $150 million for 9000 km.

        That brings us to about $200 million for the deep ocean part.
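
        (The whole estimate as a script; every input is one of the assumptions above, not a real figure:)

          KM = 9_000              # deep-ocean run, km
          KNOTS = 5               # lay speed for surface-laid cable
          SHIP_PER_HOUR = 50_000  # assumed ship operating cost, $/hour
          CABLE_TOTAL = 150e6     # assumed cable cost for the run, $

          hours = KM / (KNOTS * 1.852)   # 1 knot = 1.852 km/h
          ship = hours * SHIP_PER_HOUR
          print(f"{hours:.0f} h at sea, ship ${ship / 1e6:.0f}M, "
                f"total ${(ship + CABLE_TOTAL) / 1e6:.0f}M")
          # ~972 hours, ~$49M for the ship, ~$200M for the deep-ocean part.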

        • eitland 5 years ago

          > Assuming the underwater cable itself is 10 times as expensive as regular cable, it's about $150 million for 9000 km.

          Still sounds really inexpensive when I consider it contains a large number of repeaters and is meant to stay at the bottom of the ocean.

          Edit: Forgot to write, I haven't run the numbers myself but I enjoyed your reasoning here, you put a smile on my face :

          > At 5 knots, it would take about 1000 hours to lay the deep sea part of the cable. If the ship costs $50k/hour to operate, that would be about $50 million. (I have no idea what it costs to operate these ships, but Google tells me that big cruise ships cost about that much to operate, and I'd guess that a cable laying ship is cheaper).

          • iptrans 5 years ago

            Fortunately you only need repeaters every 80 km or so, so you'd only need a bit over a hundred repeaters across the 9000 km span.

            Repeaters aren't terribly expensive, so they only add a few million to the total cost.

            • eitland 5 years ago

              Checked your profile now; I believe it :-)

            • jaytaylor 5 years ago

              And how are potential repeater unit failures accounted for?

              • iptrans 5 years ago

                Repeaters are designed to last for the lifetime of the cable plant. Design lifetimes are 25 years or so.

                Repeater design is inherently very, very conservative because if the repeater fails, the cable fails. This results in an outage lasting days, if not weeks, as a cable ship is dispatched to the failure location.

                The cable ship has to trawl for the cable and pull it up to the surface. Then the cable is cut and replaced with a new section that includes a new repeater to replace the failed one. Expensive.

        • parliament32 5 years ago

          I figure the hard engineering challenge is the repeaters. How do you build repeaters and power them, considering you can't really service or replace them ever over the lifespan of the cable (the deep ocean bits anyway)? A repeater every 80km is a whole lotta repeaters.

        • aynyc 5 years ago

            > Assuming the underwater cable itself is 10 times as expensive as regular cable, it's about $150 million for 9000 km.

          Looking at what I can find, it looks like way more than 10 times the cost.

          https://i.imgur.com/7Dm7EEp.jpg

          • tzs 5 years ago

            My estimate came to around $22k/kilometer for the cable itself plus laying it in the deep ocean. I didn't estimate the costs of repeaters.

            The Google project was $33k/kilometer, so I don't think I could have been too far off on the cable itself. Looking at other undersea fiber projects, that seems about typical. For example, this one estimated $27k/kilometer [1].

            Here's an Alibaba seller with submarine fiber for $2000-9000/kilometer [2].

            The submarine cables have an aluminum or copper tube around the fiber optics, an aluminum water barrier, a sheath of stranded steel wires, and an outer polyethylene layer, with various other layers of mylar, polycarbonate, and petroleum jelly in between.

            I'd expect the metal layers to be the most expensive parts. Looking at the cost of tubes or cables of those materials, it looks like each of those would be in the $1000-2000/kilometer range.

            [1] http://infrastructureafrica.opendataforafrica.org/ettzplb/co...

            [2] https://www.alibaba.com/product-detail/Submarine-Fiber-Optic...

          • iptrans 5 years ago

            10x is a fair estimate of cost vs regular armored fiber cable.

            Source: I've laid subsea cable.

            • aynyc 5 years ago

              Just to be sure: you mean 10x between subsea cable and regular armored fiber cable, not the Cat6 I can get from Best Buy.

              • iptrans 5 years ago

                Yes, not that there's a large difference. Best Buy has pretty large markups, especially on short CAT6 cables.

                You can buy subsea cable for $10-$20 per meter.

                EDIT: the cost depends on how many layers of armoring you require. Deep sea cable requires less, shallow sea cable more.

      • Mauricebranagh 5 years ago

        That's what I thought. Years ago, back when I worked for a big telco, we actually had a small fleet of our own cable-laying ships.

        The fun thing was that the company handbook had a whole other section of T&Cs, allowances, etc. if you worked on a ship.

      • frabert 5 years ago

        Yeah that's what I was thinking too. It doesn't sound like money well spent, it sounds like a bargain to me, like "you'd be stupid not to do it" cheap for something the size of Google.

      • Cerium 5 years ago

        Indeed, California spent 20 times as much for a 3.5km bridge.

    • voidmain0001 5 years ago

      That sounds like money well spent, and a good deal considering what it enables. It would be incredible to see the multiple levels of govt around the world collaborate to create a publicly funded (bond sales) project for laying fibre optic across the planet which could not be sold to a private corp, and that guaranteed access to it based on population proportion, not GDP.

  • syoc 5 years ago

    I have no idea how this stuff works, but this Wired article from 1996, written by Neal Stephenson about undersea cables, is a fantastic read.

    https://www.wired.com/1996/12/ffglass/

    • MarkusWandel 5 years ago

      The article is now almost a quarter century old and the cables have gotten better. In fact, even that cable probably got a lot faster after optical coherent detection was introduced, i.e. much more capable modems. But the way the cables are actually laid and especially the details of the shore landings and the issues of terrestrial runs, are as current as ever.

    • tclancy 5 years ago

      Came here to recommend the same. I reread it every 5 years or so for inspiration.

  • virtuallynathan 5 years ago

    Pretty much: you do surveys, probably based on existing ocean-floor sonography, and then contract out a cable to someone like NEC, TE SubCom, Huawei, etc. Load it up on a cable-laying vessel, and use software like Makai Lay to optimally place the cable on the ocean floor. [This is the basic idea, I wouldn't treat this as an authoritative answer, I'm just loosely adjacent to this industry]

  • jesuschroist 5 years ago

    While not super technical, there is an interesting miniature, "The First Word Across the Ocean", in the book "Decisive Moments in History" by Stefan Zweig. It tells the story and circumstances of how the first transatlantic cable (back then for telegraphs) was laid in the mid-19th century.

  • Schalter 5 years ago

    The funny thing is that when you realise they just lay it down on the sea floor, and you start to think through all the potential issues with throwing a very thick special cable into the ocean, you also realize that it has already just worked as-is for quite a while.

crispyambulance 5 years ago

The article says this cable uses SDM (space-division multiplexing), which, for fiber optics, means that you have multiple fibers. Of course they HAVE TO put many wavelengths on each fiber, each wavelength carrying a signal.

The "state of the art" AFAIK is to use many wavelengths per fiber, ~192 of them, each wavelength transporting up to 100Gbps (this is known as DWDM).

So with SDM, you just have more fibers? So what? It seems like I am missing something here. Why is "SDM" the key concept rather than "DWDM"? Why not just say DWDM with 12 fiber pairs?
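
For scale, one illustrative decomposition of the headline number (my own arithmetic; the actual channel plan isn't in the article):

  TOTAL_TBPS = 250
  FIBER_PAIRS = 12
  CHANNEL_GBPS = 100   # a typical DWDM channel rate

  per_pair = TOTAL_TBPS / FIBER_PAIRS
  channels = per_pair * 1000 / CHANNEL_GBPS
  print(f"{per_pair:.1f} Tbps per pair = ~{channels:.0f} x {CHANNEL_GBPS}G channels")
  # ~20.8 Tbps per fiber pair, i.e. roughly 208 100G DWDM channels per pair.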

  • aappleby 5 years ago

    I thought the same thing, but they really are sending N completely separate signals, spatially separated at the transmitters, then deconvolving them (sort of) at the other end. It relies on very complicated structure inside the glass of the fiber.

  • DoomHotel 5 years ago

    You can send spatially-separated signals down a single multi-mode fiber.

    https://www.nature.com/articles/s41598-019-53530-6

    • jisco 5 years ago

      That's not the case here. On their website, Google states: "... Dunant is the first long-haul subsea cable to feature a 12 fiber pair space-division multiplexing (SDM) design ..."

      Multi-mode fibers are not feasible for long-distance transmission. For long-distance communications using the suggested approach, it may be better to use multi-core fibers.

    • crispyambulance 5 years ago

      That's interesting! But multimode fiber isn't feasible for thousands of kilometers? This is transatlantic. Wouldn't that have to be singlemode just for the distances involved?

      • sp332 5 years ago

        Even single-mode needs repeaters along the length of the cable to get across an ocean. I guess you could use multimode and a lot more repeaters, but that seems more expensive and more failure-prone.

        • aidenn0 5 years ago

          When I was first learning about fiber, graded-index multimode was the "hot new thing", with Corning promising the modal dispersion of single-mode fiber with the light-carrying capacity of multimode, which should reduce repeaters compared to either. Since these are single-mode fibers, I assume those promises were overstated?

        • crispyambulance 5 years ago

          Yes, and for SDM as described in the Nature article two comments up, it would require something far more complex than a repeater (which in most cases is actually just a purely optical amplifier).

          Current practice is to use erbium-doped fiber amplifiers or Raman amps to boost the optical signal at long intervals on transoceanic runs. Given the complexity of a spatial signal, I don't think a regular optical amplifier will work? I could be wrong; this tech is changing, but submarine fiber-optic tech is necessarily conservative and slow-moving.

sschueller 5 years ago

Do those come with pre-attached NSA listening devices [1]?

[1] https://siliconangle.com/2013/07/19/how-the-nsa-taps-underse...

  • gnu8 5 years ago

    Most certainly. You don’t land a cable in either the US or France without a classified annex to the license that provides for interconnection to their intelligence services.

  • londons_explore 5 years ago

    There's a reason the vast majority of undersea cables have at least one end in a Five Eyes country. No need to tap it in the middle of the ocean then!

    • actuator 5 years ago

      Even if this is true, I think the simple reason might be that one of the Five Eyes countries is the US, which is probably the global hub for data and services used throughout the world. Also, Britain being near the entrance to Europe from the Atlantic, and Australia being near Asia, would make it economical for the cables to take those paths.

    • eeZah7Ux 5 years ago

      False. The Snowden revelations clearly indicated that tapping undersea cables is (unsurprisingly) difficult to detect.

      A lot of surveillance is done both *illegally* and secretly.

      Forcing carriers to install black boxes next to their routers is not always the preferred choice.

      • londons_explore 5 years ago

        You are missing the fact that most undersea cables get tapped multiple times. Five Eyes normally inspects the data on land, but enemies will do undersea taps.

        While a cable is being tapped, there will be a suspicious change in signal strength, and various signal reflections will tell the cable operators where the tap is. That's bad for a spy agency that wants to remain undetected.

        Instead, they break the cable in three points deliberately. The middle point is where they put the tap, and the spy agency will repair it. The points either side are simply so that the cable operators don't know where the tap has been inserted, and have to be repaired by the cable operator. That gets expensive, since it will typically happen 3 or 4 times for a new cable install (3 or 4 countries want access to the data).

        Cable repair operations are typically public knowledge (they require specialized ships), so anyone who fancies can crunch the data and see how often a cable breaks in multiple places before being repaired to know how often it's tapped... Mediterranean cables seem to see the most taps.

        • eeZah7Ux 5 years ago

          > You are missing the fact

          Please don't make guesses. I'm aware of the tapping process.

          > Thats bad for a spy agency who want to remain undetected.

          Yes, this is inevitable, and it's still far stealthier than plugging network taps into somebody else's NOC. Especially if the tapping is done illegally.

  • deelowe 5 years ago

    The cable itself? Almost certainly not. They don't need to. It terminates in the US.

adriancooney 5 years ago

Excellent Ars Technica article about deep-sea cables if you want to learn more: https://arstechnica.com/information-technology/2016/05/how-t...

  • atonse 5 years ago

    Mother Earth Mother Board is one of my all time favorite articles, which chronicles the laying of a cable.

    https://www.wired.com/1996/12/ffglass/

    • dgritsko 5 years ago

      Another recommendation in this vein is Arthur C. Clarke's "How the World Was One", it provides some fascinating historical context for how we got to where we are today (or rather, where we were in 1992).

cmpb 5 years ago

Ah the old "entire digitized Library of Congress" per second metric

  • ThePadawan 5 years ago

    I always find comparisons using text data incredibly worthless.

    I'm sure a Shakespeare play or The Great Gatsby are barely a few megabytes.

    But if you asked Joe Shmoe on the street "In Great Gatsbys, how big was the last picture your iPhone took", they would rightly have zero idea.

    It's so useless.

    • bravura 5 years ago

      Agreed, number of books stacked end to end to reach the moon is much more intuitive.

    • adverbly 5 years ago

      Easy! It's just three olympic sized swimming pools worth of dollar bills stacked to the moon in bits.

    • edoceo 5 years ago

      I think all of Shakespeare was like 450,000 LOC.

      I used to use that metric when folks asked why it took so long to debug. Like, our project is 600,000 LOC and more complicated than any of his works. He didn't have it all memorized and neither do I. It's a metric PMs can understand.

  • ericpauley 5 years ago

    I think this says more about the minuscule (on Google's scale) ~10TB size of the digitized Library of Congress.
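
    (Working backwards from the headline claim:)

      bits_per_sec = 250e12                  # 250 Tbps
      loc_tb = bits_per_sec / 8 / 3 / 1e12   # three Libraries of Congress per second
      print(f"implied Library of Congress size: {loc_tb:.1f} TB")   # ~10.4 TB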

elbac 5 years ago

If anyone enjoys this topic, I would recommend reading "A Thread Across the Ocean: The Heroic Story of the Transatlantic Cable" by John Steele Gordon.

m3kw9 5 years ago

“Enough to transmit the entire digitized Library of Congress three times every second” — the engineer in me: compressed? With images? Or just raw text?

phuff 5 years ago

This is a video from google about how laying undersea cables works: https://www.youtube.com/watch?v=H9R4tznCNB0 I've always wanted to know! Super cool!

tgtweak 5 years ago

No mention of latency improvements?

Seems crazy, since overseas transit (TCP & single-channel) is usually latency- (or loss-)bound.

I would expect it's better than going over public transit and legacy subsea fiber, but it would have been useful to see some comparison tests between POPs.

  • virtuallynathan 5 years ago

    Google invests a lot in TCP congestion control, mainly through BBR. I believe they do bulk transfers with centrally scheduled fixed-rate UDP transmission. I also assume they have better control of buffers, loss, and queueing algorithms to prevent/control loss/drops.

  • trollied 5 years ago

    I'm not sure how easy it is to increase the speed of light in glass without some sort of new breakthrough.

    • tgtweak 5 years ago

      Not travelling through 20 routers in the process tends to help. Again it would be good to get a tangible idea of how much better this is vs just stating the obvious about peak theoretical throughput.

easton 5 years ago

Is Google using this for consumer services (Gmail/Search/YouTube/Stadia) that don't run on GCP or is this only for GCP? If it's only for GCP, they are betting big, which is good.

  • ed25519FUUU 5 years ago

    Google uses GCP.

    • easton 5 years ago

      For everything? I was under the impression they still ran all the big stuff on their internal cloud with Borg and all the other infrastructure tooling they built.

      • jeffbee 5 years ago

        Yeah, you should think of it more like GCP runs on Borg, not the other way around, although the description is not perfect. Also Google's cloud services like Cloud Spanner and Cloud Bigtable run directly on Borg.

        What's terrifying is that Google described each of their B4 sites as having 60 Tbps uplinks in 2017, growing at 100x per 5 years. So a 250 Tbps undersea cable is nice, but when you think about it, it's probably not enough to make intercontinental transfer too cheap to meter.
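
        (A compounding sketch, taking those two figures at face value:)

          site_tbps = 60           # B4 site uplink, 2017
          rate = 100 ** (1 / 5)    # 100x per 5 years is ~2.51x per year
          for year in range(2017, 2023):
              print(f"{year}: ~{site_tbps:,.0f} Tbps per site")
              site_tbps *= rate
          # At this rate a single site passes the cable's 250 Tbps around 2019.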

      • nameless912 5 years ago

        My understanding is that GCP is essentially selling off extra capacity in those data centers, so for example your VM running in GCP is scheduled by Borg under the hood. So it's more like GCP runs on Google, rather than Google running on GCP.

ed25519FUUU 5 years ago

I see a lot of these fiber lines pop up on tiny islands throughout the Pacific. What's happening at these places? Are there people who work there, and if so, what are they doing?

jdkee 5 years ago

For those of you who haven’t read Neal Stephenson’s Wired article on submarine cables from 25 years ago.

https://www.wired.com/1996/12/ffglass/

obiefernandez 5 years ago

> will deliver record-breaking capacity of 250 terabits per second (Tbps) across the ocean—enough to transmit the entire digitized Library of Congress three times every second.

Damn. Anyone else just agog at this figure?

  • throwaway3699 5 years ago

    It's not that much in the grand scheme of things. A couple of data centres will saturate the link easily.

lsllc 5 years ago

TechCrunch story about this posted yesterday:

https://news.ycombinator.com/item?id=26017592

Ironlink 5 years ago

My Firefox Developer Edition (86) doesn't load the page completely; one of the resources (https://gweb-cloudblog-publish.appspot.com/api/w_v2/pagesByU...) has an untrusted certificate (SEC_ERROR_UNKNOWN_ISSUER). It is issued by "Cisco Umbrella Secondary SubCA".

manishsharan 5 years ago

Oh dear! How is the NSA going to wiretap that?

person_of_color 5 years ago

Is this only for Google traffic?

mizzao 5 years ago

Is this a private cable that only connects Google datacenters? If so, too bad for open, neutral Internet.

comboy 5 years ago

Anyone who has some clue want to take a shot at what the investment cost for such a cable might be?

capableweb 5 years ago

This cable is not just to be used by Google, right? Or am I misunderstanding something? Fundamentally, infrastructure should be publicly owned and then rented to the companies that use it; in this case it seems like Google physically owns the cables and infrastructure, which would be a massive waste.

  • d1zzy 5 years ago

    Feel free to convince your government and fellow citizens to use tax money to pay for such infrastructure. Google laying down their own cable isn't stopping anybody from doing so.

  • morei 5 years ago

    Why is it a waste? If Google has enough demand to fill the cable, then how is it waste?

    (And I assume that Google has enough demand: If it didn't, why would they build such a large cable?)

  • nabla9 5 years ago

    > Fundamentally, infrastructure should be publicly owned

    No. A good market-socialist solution in situations where network investment (electric grid, railway, telecom) creates natural monopolies is forcing separation of the network and the content.

    For example, the electric grid owner must allow others to sell and buy electricity through the network. They can only collect a maintenance fee, set so that it can't be used to distort energy markets in favor of the company owning the grid.

    In telecom this usually applies only to the last mile.

idlewords 5 years ago

I'm delighted by all the speculation in this thread about whether the cable laid by the global surveillance company is somehow being spied on.

  • blindm 5 years ago

    Well, we assume any important Internet choke point is used for surveillance. If I just started surveilling anything sent en clair, my first stop would be Internet backbone connections.

ChrisMarshallNY 5 years ago

Sigh... Removed because people don't seem to want to see it.

Not a big deal, but...sheesh. It's not like it was a troll comment; just a relatively lighthearted poke.

  • Schalter 5 years ago

    I don't get it.

    Whats the issue?

    • tomerico 5 years ago

      Check the second link.

      On another note - the third link captures the back button and doesn’t let you get back to hacker news (at least on mobile). What a shitty site.

      • eitland 5 years ago

        Here's a trick from the old days (works in all my mobile browsers[1]):

        Long-press the back button: a popup will show your navigation history, and you can click the last link from before you entered the broken site.

        That said, the behavior is absolutely unacceptable.

        [1]: And in Firefox desktop you can also do this, but I can't remember if it is long-click or right-click.

      • ChrisMarshallNY 5 years ago

        Yeah...remember when SlashDot was Hacker News?

        How the great have fallen...

supernovae 5 years ago

Is this why they're losing billions?

  • jiveturkey 5 years ago

    Your comment is both snide and wrong. Lovely combination. I will respond anyway.

    They are losing billions because they are paying for growth. It is the proper strategy.

    • supernovae 5 years ago

      Only on Hacker News could you get downvoted for asking legit questions and get dumb answers from apologists...

      This is what happens when a marketing company starts a cloud, right? They turn it into a loss leader, and everyone who buys it becomes an apologist at all costs.

      I don't get it.
