Ultra-dense optical data transmission over standard fibre with a single chip
Wow. Impressive.
> 25% increase every year
Is there anything like Moore's law for optical cable bandwidth?
Figure 3 in this open-access paper [0] shows historical scaling trends in optical fiber communication (transport) compared with data generation and processing. Depending on the time period under study, bandwidth increases at between 20% and 100% per year.
Though the improvement in transistor economics has definitely benefited transport, the bulk of the improvement over time is due to breakthroughs in manufacturing, materials science, semiconductor optics, and signal processing.
Yes, at least in the sense of a general trend of demonstrated transmission rates increasing year after year.
In the past five to ten years, however, there has been a discussion of whether this can continue or whether we will hit the much-speculated "capacity crunch", i.e. reach the operational capacity of the fiber. This is all considering a single fiber strand.
The limit on the rate through a fiber is not well understood and is quite an active research area. Previously, the "limit" has been broken by technology shifts such as coherent transmission, different amplifier technology, and more advanced signal processing. The big question is what the next big shift will be. Combs, as presented in the article, are a promising direction.
Even more so when you work out how much data is in flight; maybe we'll see fiber reels used as storage/memory, kind of like the wire delay-line memory of the early days of computing. That may well come back into play with speeds like this for some use cases.
If you just need a fixed delay on the order of ns, sure. But the fact that it isn't randomly addressable makes me think it won't be useful for anything general purpose.
The speed of light works against you in that case. Even at the incredible 14 THz line rate achieved here, 1 km of fibre can have only ~146 kbits in flight at a time.
Yes, though in this instance they pulled off a rate of 44 terabits per second, so with that you would be looking at 156 megabits in flight over a 1 km cable, if my napkin maths is correct. That's about 18 megabytes; sure, error rates and correction would trim that a bit, as nothing works as ideal, but it does seem like it may have some use cases.
46 megabits at 3.34 µs per kilometer [1], though error correction brings that way down. Regardless, 1 km of fibre is quite substantial; we're talking densities far lower than even 70s technology. Worse, if you increase the cable length for more storage, latency increases with it.
1: https://www.m2optics.com/blog/bid/70587/Calculating-Optical-...
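For concreteness, here's that napkin math as a runnable sketch (Python; the 44.2 Tb/s figure is from the paper, while the group index of ~1.468 for standard single-mode fibre is a typical assumption, not a value from the paper):

```python
# Napkin math: bits "in flight" inside a fibre delay line.
C_VACUUM = 299_792_458           # speed of light in vacuum, m/s
GROUP_INDEX = 1.468              # typical for single-mode fibre at 1550 nm
DATA_RATE = 44.2e12              # the paper's aggregate rate, bits/s

def bits_in_flight(length_m):
    """Bits propagating inside length_m of fibre at any instant."""
    transit_time = length_m * GROUP_INDEX / C_VACUUM   # ~4.9 us per km
    return DATA_RATE * transit_time

bits = bits_in_flight(1_000)     # 1 km of fibre
print(f"~{bits / 1e6:.0f} Mbit in flight (~{bits / 8 / 1e6:.0f} MB)")
# -> ~216 Mbit (~27 MB). Using the vacuum speed of light (3.34 us/km)
# instead gives ~147 Mbit, close to the figures quoted above.
```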
Especially considering how cheap fibers are in comparison to patterned silicon. If the access patterns fit, it might be useful as high-bandwidth memory.
Is that an optical fibre delay line?
What kind of wacky unit is the spectral efficiency of bits/sec/Hz? Isn't that the same as 'bits'?
I mean, I'm guessing it's just saying the bit rate depends on the signal frequency, but it's still funny to see canceling units.
Related: a "spacing" measured in GHz.
Again, it makes sense, but it struck me that someone could write a total spoof paper with nonsense units and I wouldn't be able to tell the difference!
It's the normal unit/measurement of spectral efficiency. You have X Hz bandwidth, and you multiply it by the spectral efficiency to get the data rate.
The fact that the time-unit cancels out just shows that spectral efficiency is independent of any sense of time one might have.
Measuring "spacing" in GHz makes sense if you consider the way heterodyne mixing shifts the signal around, without affecting the spacing of sub-carriers.
Spectral efficiency is also a nifty metric because, given some bandwidth and signal-to-noise ratio, there is an upper bound to which you can compare a result [0].
Frequency spacing makes sense as namibj points out. Most long-distance telecom links operate in the optical C-band, which is roughly 5 THz wide. (A wavelength of 1525nm has an optical frequency of 196.5 THz, and a wavelength of 1565nm has an optical frequency of 191.5 THz). You can select optical frequencies to modulate within this optical bandwidth. Given a certain modulation rate (>>GHz), separating the channels in units of 1 GHz is reasonable.
[0] https://en.wikipedia.org/wiki/Shannon%E2%80%93Hartley_theore...
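To make the unit-cancellation concrete, here is a minimal sketch (Python; illustrative numbers, not values from the paper) of how bandwidth and spectral efficiency combine into a data rate, using the C-band edges quoted above:

```python
# Unit check: spectral efficiency (bits/s/Hz) x bandwidth (Hz) gives a
# data rate in bits/s -- the s and Hz cancel, as noted above.
c = 299_792_458                  # speed of light in vacuum, m/s

f_high = c / 1525e-9             # ~196.6 THz (1525 nm)
f_low = c / 1565e-9              # ~191.6 THz (1565 nm)
bandwidth_hz = f_high - f_low    # ~5 THz of C-band optical bandwidth

spectral_eff = 4.0               # bits/s/Hz, e.g. nominal DP-QPSK
data_rate = spectral_eff * bandwidth_hz

print(f"C-band width: {bandwidth_hz / 1e12:.2f} THz")                    # ~5.02 THz
print(f"rate at {spectral_eff} bits/s/Hz: {data_rate / 1e12:.0f} Tb/s")  # ~20 Tb/s
```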
The metric is a measure of how much of the frequency spectrum a particular modulation format takes up. For example, traditional OOK (on-off keying) has a spectral efficiency of 1 bit/s/Hz. In coherent optical communications we take ideas from radio to send more bits in the same amount of bandwidth by encoding information in the phase and amplitude. For QPSK, we send 2 bits/s/Hz. In optics we use 2 orthogonal polarizations at the same time, so we call this DP-QPSK. This doubles the spectral efficiency to 4 bits/s/Hz.
One can keep going to higher-order modulation to improve the spectral efficiency, but the required SNR increases exponentially with the number of bits per symbol.
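A rough sketch of that scaling, treating each polarization as an independent channel and using the Shannon bound as the floor (nominal figures for illustration; real receivers need several dB more than this):

```python
import math

# Bits per symbol, per polarization, for a few common formats
# (nominal figures for illustration):
formats = {"OOK": 1, "QPSK": 2, "16QAM": 4, "64QAM": 6}

for name, bits in formats.items():
    # Shannon floor for this spectral efficiency per polarization:
    #   SE = log2(1 + SNR)  =>  SNR = 2^SE - 1
    snr_db = 10 * math.log10(2 ** bits - 1)
    print(f"{name:6s} {bits} bit/s/Hz per polarization "
          f"(x2 with dual polarization), floor >= {snr_db:4.1f} dB SNR")
# Each extra bit per symbol roughly doubles the required SNR (~+3 dB),
# hence the exponential growth mentioned above.
```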
Imagine having your CPU linked to memory or the data bus over a single fiber connection instead of complicated multi-trace connections. One day, perhaps.
Don't the electrical-to-optical and optical-to-electrical conversions introduce quite a bit of latency? I've seen estimates of around 500 microseconds per conversion. That's not enough to matter for networking, but RAM chips have internal latencies measured in nanoseconds.
Of course, "one day perhaps" this could change.
The laser and detector only introduce ~1 ns of latency and the serdes might introduce a few more ns (e.g. OpenCAPI Open Memory Interface claims 4 ns).
Impressive stuff. I think the BER is appallingly high. Current applications want pre-FEC BER on the order of 10^-12, not 10^-2. TANSTAAFL.
Howdy, I design optical integrated circuits for coherent communication. mmmBacon is correct. See here[0] or [1] for examples of the types of forward error correction used in coherent links. Pre-FEC bit error rate thresholds of 1e-3 or 2e-2 for <1e-15 post-FEC errors are fairly standard.
[0] https://www.cablelabs.com/forward-error-correction-fec-a-pri... [1] PDF warning: https://www.infinera.com/wp-content/uploads/Soft-Decision-Fo...
Modern optical systems require strong FEC and it’s not uncommon for systems to have pre-FEC BER 1e-3. The FEC gain is high enough to guarantee 1e-12 to 1e-15 post-FEC. Due to FEC it’s not necessary or desirable to have pre-FEC BERs of 1e-12.
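As an illustration of the arithmetic behind such thresholds, here is a sketch using the classic hard-decision RS(255,239) code from ITU-T G.709 (an assumed example for exposition; the modern soft-decision codes mentioned above are far stronger):

```python
import math

# The hard-decision RS(255,239) code (ITU-T G.709) corrects up to
# t = 8 corrupted 8-bit symbols per 255-symbol block.
def rs_block_failure(ber, n=255, t=8, bits_per_symbol=8):
    # Probability that one symbol contains at least one bit error:
    p_sym = 1 - (1 - ber) ** bits_per_symbol
    # The block fails when more than t of its n symbols are corrupted:
    return sum(math.comb(n, k) * p_sym ** k * (1 - p_sym) ** (n - k)
               for k in range(t + 1, n + 1))

for ber in (1e-3, 1e-4, 1e-5):
    print(f"pre-FEC BER {ber:.0e} -> block failure ~{rs_block_failure(ber):.1e}")
# -> roughly 3e-4 at 1e-3, ~1e-12 at 1e-4, negligible at 1e-5: a steep
# cliff. This older code wanted pre-FEC BER near 1e-4; tolerating 1e-3
# or 2e-2, as above, takes the much stronger soft-decision codes.
```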
> and it’s not uncommon for systems to have pre-FEC BER 1e-3.
Old guy here: I'm wondering when that changed. Early OC-768 requirements were 10^-12 BER pre-FEC (edit: at least in short-haul), which was down from 10^-9 for OC-192.
People realised that you maximize the throughput of the raw medium if you push it hard enough that it has a high error rate, and then correct those errors.
Error correction tech has reached information-theoretic perfection (if you ignore latency), so it always makes sense to use as much of it as possible. In the future I wouldn't be surprised to see raw bit error rates pushed far higher still, with massive amounts of FEC to compensate.
> People realised that you maximize the throughput of the raw medium if you push it hard enough it has a high error rate
When?
FEC has been used in optical systems for decades. The undersea fiber guys have been using it in production at least since the 90s, for example. So I'm a little amazed that decades went by before someone noticed that you can relax the BER of the components and drivers and still improve the overall system BER.
It comes from systems pre-error-correction. Think RS-232 serial, for example: no error correction there, so people typically didn't push the hardware too close to the limits of its bit rates, because every bit error would cause some breakage. Even today, a lot of modern devices talk with RS-232-like protocols, and none of them even have parity bits; it's just assumed error rates are so low as to be zero across the useful life of the product.
It probably used to be cheaper to use high-quality optical components until Moore's Law brought down the cost of FEC enough.
For readers less familiar with the topic:
- BER = Bit Error Rate
- FEC = Forward Error Correction
- TANSTAAFL = https://en.wikipedia.org/wiki/There_ain%27t_no_such_thing_as...
For those like me who don't know: https://en.wikipedia.org/wiki/Forward_error_correction (I didn't know what the 'forward' part meant).
It seems to be error-correction data added to the sent stream to 'mend' broken data, as opposed to error correction where the receiver detects an error and requests a re-send.
And just to elaborate a bit further: Forward error correction is the coding (as in information coding) principle of how to enable error correction. The error correction still happens at the receiving end and it is aided by the redundancy introduced by FEC.
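To make that concrete, here is a toy sketch of Hamming(7,4), the textbook forward-error-correcting code: the sender adds redundancy up front, and the correction happens entirely at the receiver (real optical links use far stronger codes, e.g. soft-decision LDPC):

```python
def encode(d):
    """Encode 4 data bits into a 7-bit codeword (positions 1..7)."""
    p1 = d[0] ^ d[1] ^ d[3]          # covers positions 1, 3, 5, 7
    p2 = d[0] ^ d[2] ^ d[3]          # covers positions 2, 3, 6, 7
    p3 = d[1] ^ d[2] ^ d[3]          # covers positions 4, 5, 6, 7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def decode(c):
    """Correct up to one bit error, then return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # recompute the three parity checks;
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # together they spell out the
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # position of the error (0 = none)
    pos = s1 + 2 * s2 + 4 * s3
    if pos:
        c[pos - 1] ^= 1              # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
codeword = encode(data)
codeword[4] ^= 1                     # the channel flips one bit in transit
assert decode(codeword) == data      # the receiver recovers it unaided
```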
I can't find anything in the paper, but maybe it is possible to be not quite so greedy with the bandwidth, thereby reducing the BER to acceptable levels.
On the other hand, a high BER doesn't preclude usefulness; one would just need different FEC to deal with it. Maybe a practical implementation with a robust FEC will still come out ahead in useful throughput.
10^-12 before FEC strikes me as outlandishly low.
Why would applications even care about the BER before FEC?
FEC does not perform miracles. It only trades off some throughput for a few orders of magnitude in BER. FEC can also be defeated by common types of distortion: if you get a spike of alien crosstalk, you aren't going to get just one bit error, so you really do need that high-fidelity connection if you don't want your link to shit the bed in the real world.
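For what it's worth, the standard counter-measure to burst errors like that is interleaving, which spreads a burst across many codewords. A toy sketch (assuming 8 codewords of 16 bits each, purely for illustration):

```python
# 8 all-zero codewords of 16 bits each; any flipped bit reads as an error.
codewords = [[0] * 16 for _ in range(8)]

# Transmit column-by-column (interleaved) rather than row-by-row:
stream = [codewords[r][c] for c in range(16) for r in range(8)]

# The channel hits the stream with a 12-bit burst:
for i in range(40, 52):
    stream[i] ^= 1

# De-interleave at the receiver and count errors per codeword:
for r in range(8):
    errors = sum(stream[c * 8 + r] for c in range(16))
    print(f"codeword {r}: {errors} bit errors")
# Every codeword sees at most 2 errors -- easily inside a code's
# correction budget -- whereas sent row-by-row the same burst would
# have dumped all 12 errors into a single codeword.
```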
That quoted number might be 'before non-forward error correction', i.e. before resending the data, which involves an extra round trip and degrades service.