Increasing wireless network speed by 1000%, by replacing packets with algebra

extremetech.com

153 points by o1iver 13 years ago · 44 comments

tomstokes 13 years ago

Summary: TCP throughput drops dramatically when packet loss is present. This technique uses forward error correction to compensate for packet loss, resulting in higher effective throughput over lossy links. Many WiFi and cellular connections are lossy, so this would be helpful in those cases.

They haven't improved the underlying link rate at all. In fact, the FEC overhead is going to reduce the effective link rate. However, in some edge-case high packet loss scenarios, the reduced packet loss will more than make up for the reduced effective link rate.
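
For a rough sense of how sharp that drop-off is, the Mathis et al. approximation bounds steady-state TCP throughput by (MSS/RTT) * C/sqrt(p), with C around 1.22. A small sketch with made-up link numbers:

    import math

    def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate, c=1.22):
        # Mathis et al. (1997) steady-state bound: rate <= (MSS/RTT) * C / sqrt(p)
        return (mss_bytes * 8 / rtt_s) * c / math.sqrt(loss_rate)

    # Made-up link: 1460-byte MSS, 30 ms RTT.
    for p in (0.0001, 0.01, 0.03):
        print(f"loss {p:7.2%} -> ~{mathis_throughput_bps(1460, 0.030, p) / 1e6:5.1f} Mbit/s")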

  • mrb 13 years ago

    Yep. This appears to be nothing more than FEC. Maybe they used LDPC or LDGM, which are superior to traditional Reed-Solomon codes.

    I remember doing research on FEC codes back in 2003-2004 for developing a protocol for sending large files over satellite links to multicast recipients when I was working for SmartJog.

    • jwm 13 years ago

      > I remember doing research on FEC codes back in 2003-2004 for developing a protocol for sending large files over satellite links to multicast recipients when I was working for SmartJog.

      Interestingly, this tool looks like it is useful for the problem you describe:

      http://www.udpcast.linux.lu/satellite.html

      • mrb 13 years ago

        Yup. I remember looking at udpcast. I did not select it because, amongst other things, it did not support encryption. And I don't think we could use IPsec.

    • kokey 13 years ago

      Soon we will see an article about how they turn blocks of file data into algebra to save disk space.

  • jpdoctor 13 years ago

    Help me out: Surely FEC over wireless can't be new. Why is this worth writing about?

    • andrewcooke 13 years ago

      http://en.wikipedia.org/wiki/Forward_error_correction#Low-de... implies that they are used in 802.11n (I have no idea what is new in this article)

      • ChuckMcM 13 years ago

        I was thinking the exact same thing. Oh look, someone has discovered forward error correction.

        Now if they can compute a series of code blocks over a rolling set of data blocks, that would be new. It would be like singing a round of "Row Row Row Your Boat" but in data packets: just wait, and you'll be able to figure out the packets that were dropped. Anyone know if this is what they did?

        • eru 13 years ago

          Wouldn't that be something like fountain codes?

          • ChuckMcM 13 years ago

            Yes and no. Typically, low-density parity codes (aka fountain codes) break up a 'chunk' into some number of data blocks and some number of code blocks. Depending on the number of code blocks (it's a tradeoff between time and number of blocks), you get 'most' of the blocks back and you can reconstruct the data blocks.

            Now you could code up a packet, break it up, and then add code blocks. That works great for the packet, but if you don't get enough blocks from that set you have to resend the packet, which is something you don't want to do.

            So if instead you send, say, 3 data blocks and 2 code blocks, then start sending two data blocks and 1 code block, and set it up (this is the part that would be impressive) so that the previous two code blocks and the current one could reconstruct the two data blocks you just sent (re-using previously sent code blocks). My brain is wincing trying to imagine the kind of ECC code that would be, sort of like solving the ECC polynomials in two dimensions or something.

            Anyway, if you could do that, the outcome would be that you could continually send data and dropped packets would always be correctable by later packets received. I drew out a picture of what relationship the code blocks would need and it's complicated. Say you did an 8x3 (8 data segments, 3 code segments); you send:

                D01 D02 C01 D03 D04 C02 D05 D06 C03 D07 D08 
                C04 D09 D10 C05 D11 D12 C06 D13 D14 C07 D15
            
            And you do the math such that over any 11 pieces of that you can recover the 8 data pieces. Including:

                D05 D06 C03 D07 D08 C04 D09 D10 C05 D11 D12
            
            (see what I did there? Data segments from both parts and code blocks from both parts).

            Anyway, that would be some nice math if it's doable, and really game-changing. It would mean you get less bandwidth in a situation where there is no packet loss (you're injecting ECC data). But it also makes me wonder: why not just fix the underlying wireless protocol to do this with the frames it sends? Why does it have to be at the TCP level?
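
            A far simpler cousin of the rolling construction above (plain block parity, one XOR packet per window) already shows the flavor: a drop can be repaired by a later packet without retransmission. A rough Python sketch; the window size, packet sizes, and loss rate are all arbitrary:

                import os, random

                def xor(a, b):
                    return bytes(x ^ y for x, y in zip(a, b))

                def send_stream(pkts, window=4):
                    # After every `window` data packets, emit one parity packet:
                    # the XOR of that window.
                    out = []
                    for i, pkt in enumerate(pkts):
                        out.append(("data", i, pkt))
                        if (i + 1) % window == 0:
                            group = list(range(i + 1 - window, i + 1))
                            parity = pkts[group[0]]
                            for j in group[1:]:
                                parity = xor(parity, pkts[j])
                            out.append(("parity", group, parity))
                    return out

                def receive(stream, loss=0.2):
                    got, parities = {}, []
                    for kind, idx, payload in stream:
                        if random.random() < loss:
                            continue  # dropped on the air
                        if kind == "data":
                            got[idx] = payload
                        else:
                            parities.append((idx, payload))
                    # A window missing exactly one data packet is repairable.
                    for group, parity in parities:
                        missing = [i for i in group if i not in got]
                        if len(missing) == 1:
                            fill = parity
                            for i in group:
                                if i in got:
                                    fill = xor(fill, got[i])
                            got[missing[0]] = fill
                    return got

                random.seed(1)
                pkts = [os.urandom(32) for _ in range(16)]
                print(len(receive(send_stream(pkts))), "of", len(pkts), "delivered or repaired")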

  • cbhl 13 years ago

    I wonder what's so proprietary about their algorithm. I'd assume it's just Reed–Solomon coding over a UDP tunnel (or, alternatively, modifying TCP to accept partially mangled packets if FEC is successful).

    • jwm 13 years ago

      This could be demo'd very easily by gluing together:

      udpcast (http://www.udpcast.linux.lu/satellite.html) -> Sends/receives files over UDP. Supports FEC.

      netcat -> Join file I/O (from udpcast) to local TCP/UDP sockets

      openvpn / iptables userspace -> Provide connection routing.

      Seems like an evening's work.

      Edit: udpcast might not be suitable for this. I am surprised no one has already built a simple UDP FEC tunnel program...
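
      For what it's worth, the sending half of such a tunnel can be sketched in a couple dozen lines. Everything below (addresses, framing, one XOR parity datagram per group) is invented for illustration, and the receiving half, which buffers a group and XORs the parity to fill a single gap, is left out:

          import socket

          GROUP = 4                          # parity datagram per 4 data datagrams
          LISTEN = ("127.0.0.1", 5000)       # hypothetical local application traffic
          REMOTE = ("198.51.100.7", 5001)    # hypothetical far end of the tunnel

          def xor_pad(a, b):
              n = max(len(a), len(b))
              return bytes(x ^ y for x, y in zip(a.ljust(n, b"\0"), b.ljust(n, b"\0")))

          inside = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          inside.bind(LISTEN)
          outside = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

          seq, parity = 0, b""
          while True:
              payload, _ = inside.recvfrom(1400)
              outside.sendto(b"D" + seq.to_bytes(4, "big") + payload, REMOTE)
              parity = xor_pad(parity, payload)
              seq += 1
              if seq % GROUP == 0:
                  outside.sendto(b"P" + seq.to_bytes(4, "big") + parity, REMOTE)
                  parity = b""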

  • StavrosK 13 years ago

    So, "benchmarks very favorable in mostly artificial scenario".

    • cbhl 13 years ago

      I don't know what the Wi-Fi is like at your home or business, but packet loss is actually really common on my University's wireless network. I wouldn't call this scenario artificial at all.

wmf 13 years ago

Google it, don't speculate. The primary source appears to be http://www.mit.edu/~medard/papers2011/Network%20CodingMeets%... and similar papers from the same authors.

guelo 13 years ago

Ugh, remember when academia helped create the internet by releasing free publicly available unlicensed RFCs?

Edit: Not RFPs, duh

  • tptacek 13 years ago

    I'm not sure I remember how each successive advancement in signaling from 802.3-coax through 10gbE through gigabit wireless was driven by public RFCs.

    • guelo 13 years ago

      That's a good point, since most networking standards were developed under the IEEE, which was always more industry-oriented than the IETF; I think they're both mostly industry players at this point. Still, there is a rich history of free contributions from academia, which I think is rarer today.

  • hollerith 13 years ago

    You mean RFCs :)

  • taligent 13 years ago

    That was back when universities were properly resourced.

    Universities desperately need to commercialise their research in order to survive.

oconnor0 13 years ago

See the original discussion & better article at https://news.ycombinator.com/item?id=4686743

zackbloom 13 years ago

TL;DR They are using error correcting codes across packets to limit the impact of losing some.

Inufu 13 years ago

This looks like Fountain codes: http://en.wikipedia.org/wiki/Fountain_code

Basically, you split your data into blocks and XOR random blocks together, and the client can recreate the data by solving the system of equations describing which blocks were XORed with which.

A good tutorial is here: http://blog.notdot.net/2012/01/Damn-Cool-Algorithms-Fountain...

And a fast implementation: http://en.wikipedia.org/wiki/Raptor_code
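
A toy version of that idea, LT-style encoding plus a peeling decoder, might look like the sketch below. It uses a uniform degree distribution rather than the robust soliton distribution real fountain codes use, so it needs more overhead than a real code; all numbers are illustrative.

    import os, random

    def encode(blocks, n_symbols, seed=0):
        # Each output symbol is the XOR of a random subset of source blocks.
        rng = random.Random(seed)
        symbols = []
        for _ in range(n_symbols):
            degree = rng.randint(1, len(blocks))
            idxs = rng.sample(range(len(blocks)), degree)
            payload = bytes(len(blocks[0]))
            for i in idxs:
                payload = bytes(a ^ b for a, b in zip(payload, blocks[i]))
            symbols.append((set(idxs), payload))
        return symbols

    def decode(symbols, n_blocks):
        # Peeling: a symbol referencing exactly one unknown block reveals it;
        # then XOR that block out of every other symbol referencing it.
        known = {}
        pending = [[set(idxs), payload] for idxs, payload in symbols]
        progress = True
        while progress and len(known) < n_blocks:
            progress = False
            for sym in pending:
                idxs, payload = sym
                for i in list(idxs & known.keys()):
                    payload = bytes(a ^ b for a, b in zip(payload, known[i]))
                    idxs.discard(i)
                sym[1] = payload
                if len(idxs) == 1:
                    known[idxs.pop()] = payload
                    progress = True
        return [known.get(i) for i in range(n_blocks)]

    blocks = [os.urandom(16) for _ in range(8)]
    symbols = encode(blocks, 16)
    random.shuffle(symbols)                        # arrival order does not matter
    recovered = decode(symbols[:13], len(blocks))  # pretend a few were lost
    ok = sum(r == b for r, b in zip(recovered, blocks))
    print(ok, "of", len(blocks), "blocks recovered")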

stephengillie 13 years ago

The linkbaity title links to a neat, proprietary way to reduce wireless packet loss.

potkor 13 years ago

It's the job of the 802.11 L2 to hide correctable packet loss from L3, so most radio link loss isn't passed on to TCP. This could just as well read "the wifi L2 error correction doesn't try hard enough".

This problem is very common, but it wants fixing at L2, not in TCP. Turning up the FEC at L2 would reduce its capacity even further, though, since more of the bandwidth is taken up by the FEC (as does this TCP-level FEC).

3G gets it wrong at the other extreme: it pretends to always have 0% packet loss; your packets just sometimes show up 30-100 seconds late, and in order.

delinka 13 years ago

I see lots of comments talking about FEC. That's not how the article reads to me. Granted the author (or I?) may be completely out in left field, but here's my take on what it says:

Let's suppose you have a mathematical process that outputs a stream of [useful] data. The description of the process is much, much smaller than the output. You can "compress" the data by sending the process (or equation) instead. Think π. Do you transmit a million digits of π or do you transmit the instruction "π to a million digits"? The latter is shorter.

Now, reverse the process: given an arbitrary set of data, find an equation (or process) that represents it. Not easy for sure. Perhaps not possible. I recall as a teenager reading an article about fractals and compression that called on the reader to imagine a fractal equation that could re-output your specific arbitrary data.

If I've totally missed the article's point, please correct me, but explain why it also talks about algebra.

EDIT: I re-read and noticed this: "If part of the message is lost, the receiver can solve the equation to derive the missing data." I can see the FEC nod here.

Guh. I guess I'm blind tonight. "Wireless networks are in desperate need for forward error correction (FEC), and that’s exactly what coded TCP provides." I cannot for the life of me understand why they'd need to keep this a secret.

mikepurvis 13 years ago

There's a company in Ottawa implementing a similar idea, but based on carving up packets and then inserting one or more redundant XOR packets (RAID-style). Their name for it is IPQ: http://liveqos.com/products/ipq/

They have a patent on this: http://www.google.com/patents/US20120201147

(Disclosure: I was an intern there in 2009, when it was IPeak Networks.)

Jabbles 13 years ago

Has anyone done an experiment to see what simply duplicating every TCP packet sent over wireless does? If you're in a situation where you're limited by random packet loss and not by raw bandwidth, I imagine it could help...

Obviously this is a much weaker and less efficient solution than what is proposed in the paper, but it would be trivial to implement. I believe netem allows you to simulate this.
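
Back of the envelope, assuming independent losses (WiFi loss is bursty, so this is optimistic): duplication turns a loss probability p into roughly p squared, at the cost of half the raw bandwidth.

    # Residual loss seen by TCP if every packet is sent twice, assuming
    # independent per-packet loss (optimistic for bursty WiFi loss).
    for p in (0.01, 0.05, 0.10):
        print(f"raw loss {p:.0%} -> with duplication ~{p * p:.4%}")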

  • jasonwatkinspdx 13 years ago

    There was an RTS game in the late 90s that did something like this. It was peer to peer, with peers broadcasting their input events to each other and running mirrored simulations in lockstep. The bandwidth used was trivial, since you can code a keypress or mouse click in just a couple of bytes, but any packet loss would degrade the game for all peers. So they just duplicated the last packet's input data in successive packets. No algebraic coding or any other attempt to save bandwidth, since the data was already trivial.

leif 13 years ago

Is this likely to be any different from introducing a little error-correction?

Also, how were we not doing this already?

Also, I need a writer. Whoever wrote this up made it sound WAY cooler than when I explain error correcting codes.

  • lmm 13 years ago

    As it says, in wired networks packet loss is almost entirely due to congestion, so adding error correction is unnecessary; better to just throttle down when you see it.

    TCP is old and reliable and there are massive costs associated with switching to anything else. I think that's why we weren't doing this yet, and I think that's why this initiative will fail too.

lovamova 13 years ago

This is one of those posts where the comments on the site are better than the article submitted.

It also means less power consumption on mobile phones. There will be no need to increase signal power to get better speed or voice quality.

dnos 13 years ago

I'm not up to date on networking technologies, but it's surprising to me that some sort of error correction hasn't already been made standard.

I wonder if something along the lines of old-school parity files would work in the packet world? Basically, just blast out the packets, and any that were lost you reconstruct using the metadata sent with the other packets.

codex 13 years ago

Isn't the downside of FEC-encoded packets increased latency? Instead of sending each packet immediately, don't you need to accumulate n packets to encode as a group? Or does the math allow incremental encoding? Simple parity is incremental, but the FEC on DSL lines always added 40 ms of latency.

  • AnthonyMouse 13 years ago

    >Instead of sending each packet immediately, don't you need to accumulate n packets to encode as a group?

    The way coding works is that you divide data into n data packets and then calculate c coding packets; you can then lose any c out of (n+c) packets and still reconstruct the data. You need to see all the data packets in a group before you can finish calculating any of the coding packets, but nothing stops you from sending the data packets immediately.

    On the other hand, non-hardware-based error correction that allows recovery from a large amount of data loss is relatively processor-intensive, and the processors in a lot of network equipment are very slow. So you can easily see an increase in latency, not because the packets take longer to send, but because the processor in the network equipment is too busy calculating error correction to forward your packets promptly.
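
    As a worked example of that tradeoff (loss independence and the 3% figure are assumptions, not from the article): a group gets through without retransmission whenever at most c of its n+c packets are lost.

        from math import comb

        def group_ok(n, c, p):
            # P(at most c of the n+c packets are lost), i.e. the group is decodable.
            return sum(comb(n + c, k) * p**k * (1 - p)**(n + c - k)
                       for k in range(c + 1))

        p = 0.03  # assumed 3% independent loss
        print(f"plain, 8 packets:        {(1 - p) ** 8:.3f}")
        print(f"coded, 8 data + 3 code:  {group_ok(8, 3, p):.3f}")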

  • spydum 13 years ago

    Yes, I distinctly recall enabling FEC on poorly performing ADSL links when packet loss was an issue for some ISP customers. It would almost always bump the endpoint RTT from 8 ms to about 30-40 ms.

  • loeg 13 years ago

    40ms extra latency on wifi is nothing compared to packet loss.

    • codex 13 years ago

      Really? My wifi RTT to the metropolitan core is 15 ms, and over LTE it is 47 ms. Not a worthwhile tradeoff at 40 ms unconditional additional RTT delay.

      I imagine the higher latency was used to offset lower bandwidth due to FEC encoding loss, by amortizing it over more bits.

codex 13 years ago

What's funny here is that wireless networks already use FEC at the physical layer. This just adds more, less conservative, FEC higher up, for apps where it makes more sense to reduce throughput and increase average-case latency to avoid worst-case latency and worst-case throughput.

liamzebedee 13 years ago

My friend did something similar to the concepts in this article, except he applied it to compression of audio. It was basically finding patterns in the audio and transforming these into equations. Interesting stuff.

Schwolop 13 years ago

What a remarkably easy to read article.

Snapps 13 years ago

Captivating title...
