WolfIP: Lightweight TCP/IP stack with no dynamic memory allocations

github.com

151 points by 789c789c789c 16 days ago · 50 comments

rwmj 16 days ago

passt (the network stack that you might be using if you're running qemu, or podman containers) also has no dynamic memory allocations. I always thought it was quite an interesting achievement. https://blog.vmsplice.net/2021/10/a-new-approach-to-usermode... https://passt.top/passt/about/#security

rpcope1 16 days ago

It would be interesting to know why you would choose this over something like Contiki's uIP or lwIP, which everything seems to use.

  • RealityVoid 16 days ago

    Not sure if they do for _this_ package, but the Wolf* people's model is usually selling certification packages, so you can put their things in stuff that needs certification and you offload liability. You also get access to the people who wrote it, whom you can pay for support. I kind of like them; I had a short project where I had to call on them to get WolfSSL working with an ATECC508 device, and their support was pretty good.

    • jpfr 16 days ago

      As the project is GPL’ed I guess they sell a commercial version. GPL is toxic for embedded commercial software. But it can be good marketing to sell the commercial version.

      Edit: I meant commercial license

      • LoganDark 16 days ago

        You don't need a commercial version; many projects get away with selling just a commercial license to the same version. As long as they have the rights to relicense, this works fine.

      • anthonj 15 days ago

        In my company we used their stuff often. They have an optional commercial license for basically all their products. The price was very reasonable as well.

      • RealityVoid 16 days ago

        I think they might sell a commercial version as well. It makes sense with the GPL. But I can't really recall that well.

      • cpach 15 days ago

        “GPL is toxic for embedded commercial software”

        Why is that?

        • bobmcnamara 15 days ago

          Many bare-metal or RTOS systems consist of a handful of statically linked programs (one or two bootloaders and the main application), and many companies would rather find a non-GPL library than open up the rest of the system's code. Sometimes a system also contains proprietary code that may not be open-sourced.

          • 1718627440 14 days ago

            In the embedded world you don't really sell software, you sell devices with firmware. Unless the library/OS is AGPL, it doesn't matter too much.

            • bobmcnamara 9 days ago

              It matters because:

              1) You may not have the right to open the rest of the code on the system.

              2) Although you make money when you sell devices, it also makes cloning trivial.

            • tjoff 14 days ago

              Yes it matters a lot?

        • dietr1ch 15 days ago

          He probably meant viral or tried to make a deadly twist on virality

hermanradtke 15 days ago

Similar https://github.com/smoltcp-rs/smoltcp

CyberDildonics 16 days ago

Are there TCP/IP stacks out there in common use that are allocating memory all the time?

  • fulafel 15 days ago

    Yes, TCP is pretty hungry for buffers. The bandwidth*delay product can eat gigs of memory on a server. You have to be ready to retransmit anything that's in flight / hasn't been acked yet.

    • nly 15 days ago

      The bandwidth-delay product for a 10 Gbps stream with a 300 ms RTT theoretically only requires ~375 MB

      One option is simply to keep buffers small and fixed and disconnect blocked clients on write() after some timeout
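The bandwidth*delay arithmetic used throughout this subthread (10 Gbps at 300 ms here, 400 Gbps at the same RTT further down) can be sketched as follows; the helper name is illustrative:

```python
# Back-of-the-envelope bandwidth-delay product (BDP): the amount of
# unacknowledged in-flight data a TCP sender must buffer for retransmit.

def bdp_bytes(bandwidth_bps: float, rtt_seconds: float) -> float:
    """Bandwidth-delay product in bytes."""
    return bandwidth_bps * rtt_seconds / 8

# One 10 Gbps stream at 300 ms RTT: ~375 MB of buffer per connection.
print(bdp_bytes(10e9, 0.300) / 1e6, "MB")   # 375.0 MB

# 400 Gbps of aggregate traffic at 300 ms RTT: ~15 GB total.
print(bdp_bytes(400e9, 0.300) / 1e9, "GB")  # 15.0 GB
```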

      • fulafel 15 days ago

        We're up to hundreds of Gbps per server, and have been for some years now. E.g. 400 Gbps uses a lot even with a much smaller average RTT. That's not going to be one stream of course, but a zillion smaller streams still add up to the same requirements.

        This is far from little embedded device territory of course. But still, latest wifi is closer to 10 than 1 gbps already.

        • Veserv 15 days ago

          I do not understand the point you are trying to make. The person you replied to showed how to evaluate it with simple math.

          400 Gb/s is 50 GB/s. RTT of 300 ms would only require 15 GB of buffers. That would not even run a regular old laptop out of memory let alone a server driving 400 Gb/s of traffic. That would be single-digit percents to possibly even sub-percent amounts of memory on such a server.

          • fulafel 14 days ago

            I introduced the concept of bandwidth * delay product to the conversation...

            The question was about why use dynamic allocation. In this branch of the thread we were discussing the question "Are there TCP/IP stacks out there in common use that are allocating memory all the time?"

            We'd not be happy to see the server or laptop statically reserving this worst-case amount of memory for TCP buffers when it's not in fact slinging around the max number of TCP connections, each with a worst-case bandwidth*delay product. Nor would we be happy if the laptop or server only supported little TCP windows that limit performance by capping the amount of data in flight to a low number.

            We are happier if the TCP stack dynamically allocates the memory as needed, just like we're happier with dynamic allocation on most other OS functions.

    • CyberDildonics 15 days ago

      Needing memory doesn't have to mean allocating memory over and over. Memory allocation is expensive. If someone is doing that, reusing memory is going to be by far the best optimization.

      • fulafel 15 days ago

        Well, allocating and freeing according to need is reusing. Modern TCP perf is not bottlenecked by that. There's pools of recycled buffers that grow and shrink according to load etc.

        • CyberDildonics 14 days ago

          > Well, allocating and freeing according to need is reusing

          That's a twisted definition. It seems like you're playing around with terms, but allocating memory from a heap allocator is obviously what people are talking about with "dynamic memory allocation". Reusing memory that has already been grabbed from an allocator is not reallocating memory. If you have a buffer and it works you don't need to do anything to reuse it.

          > Modern TCP perf is not bottlenecked by that. There's pools of recycled buffers that grow and shrink according to load etc.

          If anything is allocating memory from the heap in a hot loop it will be a bottleneck.

          Reusing buffers is not allocating memory dynamically.

          • fulafel 13 days ago

            Sorry, but there are shades of gray between heap allocation and TCP-specific free lists in TCP impls. It's not a black and white free list vs malloc API situation.

            For example in Linux there are middle level abstraction layers in play as follows:

            For the payload there's a per-socket runway of memory, for example (sk_page_frag). Then, if there's a miss on that pool, instead of calling the malloc (or kmalloc, in the case of Linux) API, it invokes the page allocator to get a bunch of VM pages in one go, which is again more efficient than using the generic heap API. The page allocation API will recycle recently freed large clusters of memory, and the page alloc is in turn backed by a CPU-local per-cpu pageset, etc. It's turtles all the way down.

            For the metadata (sk_buff) there's a separate skbuff_head_cache that facilitates recycling of the generic socket metadata, which is again not a TCP-specific thing but lower level than the generic heap allocator, somewhere between a TCP free list and malloc in the tower of abstractions.

            • CyberDildonics 13 days ago

              > It's not a black and white free list vs malloc API situation.

              It is, in the sense that if finding a buffer of the right length were just as slow as malloc (and free), then you would just use malloc.

              Not only that but malloc is shared with the entire program and can do a lot of locking. On top of that there is the memory locality of using the same heap as the rest of the program.

              If you just make your own heap there is a big difference between using the system allocator over and over and reusing local memory buffers for a specific thread and purpose.

              What you're describing here is the same thing: avoiding the global heap allocator.

  • wmf 15 days ago

    Packets and sockets have to be stored in memory somehow. If you have a fixed pool that you reuse, it's basically a slab allocator.
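The fixed-pool pattern described above can be made concrete with a minimal sketch; all names and sizes here are illustrative, not WolfIP's actual API:

```python
# A fixed pool of packet buffers, allocated once up front. A free list
# hands buffers out and takes them back; the heap is never touched again.

BUF_SIZE = 1536   # enough for one Ethernet frame
POOL_SIZE = 8     # fixed at build time; no growth at runtime

class PacketPool:
    def __init__(self):
        # The only "allocation" ever performed: a static array of buffers.
        self._bufs = [bytearray(BUF_SIZE) for _ in range(POOL_SIZE)]
        self._free = list(range(POOL_SIZE))  # indices of free buffers

    def alloc(self):
        """Return (index, buffer), or None if the pool is exhausted.
        Exhaustion means dropping the packet, not growing the heap."""
        if not self._free:
            return None
        i = self._free.pop()
        return i, self._bufs[i]

    def free(self, i):
        """Return buffer i to the pool for reuse."""
        self._free.append(i)

pool = PacketPool()
i, buf = pool.alloc()   # grab a buffer for an incoming packet
pool.free(i)            # recycle it once the packet is processed
```

Under load the "allocation" here is a list pop, with no locking against the rest of the program and no fragmentation, which is the distinction the surrounding thread is arguing about.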

    • CyberDildonics 15 days ago

      You need some memory but that doesn't mean you would constantly allocate memory. There is a big difference between a few allocations and allocating in a hot loop.

  • bobmcnamara 15 days ago

    Yes, it is pretty common.

    However, sometimes the buffers are pooled, so buffer-allocator contention only occurs within the network stack or within a particular NIC.

fulafel 15 days ago

How does it deal with all the dynamic TCP buffering, where buffers may get quite large?

sedatk 16 days ago

It only implements IPv4, which explains to a degree why IPv6 isn't ubiquitous: it's costly to implement.

  • hrmtst93837 15 days ago

    If you want IPv6 without dynamic allocation you end up rewriting half the stack anyway, so it's probably not what most embedded engineers are itching to spend budget on. The weird part is that a lot of edge gear will be stuck in legacy-v4 limbo just because nobody wants to own that porting slog, which means "ubiquitous IPv6" will keep being a conference slide more than a reality.

  • preisschild 15 days ago

    Matter (a smart-home connectivity standard in use by many embedded devices) uses IPv6. Doesn't seem to be a problem there.

  • notepad0x90 16 days ago

    It's just not worth it. The only thing keeping it alive is people being overly zealous over it. If the cost to implement is measured as '1', the cost to administer it is like '50'.

    • sedatk 16 days ago

      > the only thing keeping it alive is people being overly zealous over it

      Hard disagree. It turned out to be great for mobile connectivity and IoT (Matter + Thread).

      > the cost to administer it is like '50'.

      I'm not sure that's true. It feels like less work to me because you don't need to worry about NAT or DHCP as much as you do with IPv4.

      • notepad0x90 15 days ago

        To start with, it requires supporting v4 as a separate network, at least for internal networks, since many devices don't support IPv6 (I have several APs, IoT devices, etc. bought in recent years that are like that). Then the v4->v6 NAT/gateway/proxy approaches don't work well for cases where reliability and performance are critical.

        You mentioned NAT, but lack of NAT means you have to configure firewall rules. Many people get a public IP from their ISP assigned directly to the first device that connects to the ISP modem, exposing their device directly to the internet. Others need to expose a LAN service on devices (port forwarding), which is more painful with v6.

        DHCP works very simply. v6 addressing can be made simple too (especially with the v4-patterned addressing - forgot its name), but you have multiple types of v6 addresses, and the only way to easily access resources with v6 is to use host names. With v4 you can just type an IP easily and access a resource. Same with troubleshooting: it's more painful because it is more complex, and it requires more learning by users. And if you have dual stack, that doesn't add to the management/admin burden, it multiplies it. It's easier to tcpdump and troubleshoot ARP, DHCP and routing with v4 than it is ND, RA, anycast, link-local, etc. with v6.

        For mobile connectivity, IPv4 works smoothly as well in my experience, but I don't know your use case well enough to form an opinion. I don't doubt IPv6 makes some things much easier to solve than IPv4, and I'm not dismissing IPv6 as a pointless protocol; it does indeed solve lots of problems. But the problems it solves are largely for network administrators, and even then you won't find a private network in a cloud provider with v6, for good reason too.

    • nicman23 15 days ago

      What? Have you seen IPv4 block pricing?

      • notepad0x90 15 days ago

        There keep arising more solutions, and public IP usage hasn't been increasing as it did in past decades either. Most new device growth is on mobile, where CGNAT works OK.

    • gnerd00 16 days ago

      my 15 year old Macbook does IPv6 and IPv4 effortlessly

      • notepad0x90 15 days ago

        That's great, but when you have a networking issue, you have to deal with two stacks for troubleshooting. It would be much less effort to use just IPv4.

        You're not paying for IPv4 addresses, I'm sure, so did IPv6 solve anything for you? This is what I meant by zealots keeping it alive: you use IPv6 for the principle of it, but tech is supposed to solve problems, not facilitate ideologies.

        • preisschild 15 days ago

          > it would be much less effort to use just ipv4.

          Or just use IPv6-only. That's what I do.

          Legacy IPv4-only services can be reached via DNS64/NAT64.
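For context on the DNS64/NAT64 mechanism mentioned above: the DNS64 resolver synthesizes an IPv6 address by embedding the target's IPv4 address in the NAT64 well-known prefix 64:ff9b::/96 (RFC 6052), and the NAT64 gateway translates traffic sent to that address. A sketch of the synthesis step:

```python
import ipaddress

# NAT64 well-known prefix (RFC 6052); the IPv4 address goes in the
# low 32 bits, and the NAT64 gateway translates traffic sent there.
WKP = ipaddress.IPv6Address("64:ff9b::")

def synthesize(v4: str) -> str:
    """Embed an IPv4 address in the NAT64 well-known /96 prefix."""
    embedded = int(WKP) | int(ipaddress.IPv4Address(v4))
    return str(ipaddress.IPv6Address(embedded))

# A v4-only service at 192.0.2.1 becomes reachable, via the NAT64
# gateway, at the synthesized address:
print(synthesize("192.0.2.1"))  # 64:ff9b::c000:201
```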

          • notepad0x90 15 days ago

            But that's slow, and it's one more thing you have to set up and that could fail. What is the benefit to me of using IPv6 and those NAT services? What if I run into a service that blocks those NAT IPs because they generate lots of noise/spam, since they allow anyone to proxy through their IP? Not only does it not benefit me; if this were commercial activity I was engaging in, it could lead to serious loss of money.

            At the risk of more downvotes, I again ask: why? Am I supposed to endure all this trouble so that IPv4 is cheaper for some corporation? Even then, we've hit the plateau as far as end-user adoption goes. And I'll continue to argue that using IPv6 is a serious security risk if you just flip it on and forget about it; you have to actually learn how it works and secure it properly. These are precious minutes of people's lives we're talking about, for the sake of some techno-ideology. Billions and billions have been spent on IPv4, and no one in 2026 is claiming an IPv4 shortage will cause outages anytime within the next decade or two.

            My suggestion is to come up with a solution that doesn't require any changes to the IP stack or layer 3 by end users. CGNAT is one approach, but there are spare fields in the IPv4 header that could be used to indicate some other address extension to IPv4 (not an entire freaking replacement of the stack), or just a minor addition/octet that would solve the problem for the next century or so by adding an "area code"-like value (ASN?).

    • toast0 16 days ago

      Eh. IPv6 is probably cheaper to run compared to running large scale CGNAT. It's well deployed in mobile and in areas without a lot of legacy IPv4 assignments. Most of the high traffic content networks support it, so if you're an eyeball network, you can shift costs away from CGNAT to IPv6. You still have to do both though.

      Is it my favorite? No. Is it well supported? Not everywhere. Is it going to win, eventually? Probably, but maybe IPv8 will happen, in which case maybe they learn from this and it takes 10 years to reach 50% of traffic instead of 30.

      • notepad0x90 15 days ago

        It depends on who you're talking about, but no disagreement on cost for ISPs. For end users (including CSPs) it's another story.

        Even on its own it's hard to support, but most people have to maintain a dual stack, and v4 isn't going away entirely any time soon.
