Bun 0.6

bun.sh

421 points by tommasoamici 3 years ago · 242 comments

hu3 3 years ago

> Standalone executables. You can now create standalone executables with bun build.

> bun build --compile ./foo.ts

> This lets you distribute your app as a single executable file, without requiring users to install Bun.

> ./foo

This is big! Part of Go's popularity is due to how easy it is to produce self-contained executables.

And it seems to support amd64 and arm too, according to this: https://twitter.com/jarredsumner/status/1657964313888575489

  • pier25 3 years ago
    • leeoniya 3 years ago
    • pseudosavant 3 years ago

      Is 90MB really that big when compared against the other types of binaries that would get deployed: containers and VM images? How big is the platform (k8s, docker, etc) you need installed to run your app? Probably more than 90MB.

      • microflash 3 years ago

        A native Java "Hello World" can be around 8MB in size.[1] So, yes, 90MB is too much.

        1. https://sergiomartinrubio.com/articles/getting-started-with-...

        • Spivak 3 years ago

          I feel like using Graal for the comparison is cheating a bit because it's so fundamentally different from what bun is doing. You need to compare it to tools that ship class files and a JVM, or something like PyInstaller which will have much much more overhead.

          • sshine 3 years ago

            I wouldn’t call it cheating.

            I’d also compare against stripped Rust binaries statically linked against musl.

          • ptx 3 years ago

            PyInstaller actually seems to have less overhead, in terms of space, not "much much more". Building a hello world script with "pyinstaller --onefile" gives me a 5.6 MB executable on Linux or 4.9 MB on Windows.
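
            For reference, the whole experiment is just this (a sketch; file names arbitrary):

                pip install pyinstaller
                echo 'print("hello world")' > hello.py
                pyinstaller --onefile hello.py
                ls -lh dist/hello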

          • kaba0 3 years ago

            A slimmed-down JRE (containing only the base JDK classes) plus an included hello world is 47.2MB.

        • Waterluvian 3 years ago

          “The generated file is 7.7MB, which is quite impressive for a Java application since this executable does need a JVM.”

          I assume this is a typo and they mean “does NOT”?

          • carbotaniuman 3 years ago

            I think it isn't a typo; they're saying it's impressive how small the JVM overhead is.

            • Waterluvian 3 years ago

              Is the JVM inside that 8MB then I guess? That is pretty great.

              Originally I thought the alternative was “8MB but supply your own virtual machine.”

              • squeaky-clean 3 years ago

                There's no JVM when using GraalVM Native Images, it relies on a JVM during the compile step (well, specifically GraalVM), but produces native code similar to compiling C. There's no virtual machine running bytecode, just entirely native code. So the size of the executable will depend on how many features of the JVM you need compiled into your executable.

                A pure java bytecode (bring-your-own-JVM) Hello World can be under 2KB pretty easily. Smaller than the equivalent C program. But of course, that's not including the size of your system JVM.
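
                For the curious, the Graal flow is roughly this (a sketch, assuming GraalVM with its native-image tool installed):

                    javac HelloWorld.java
                    native-image HelloWorld    # AOT-compiles the classes plus the Substrate VM runtime
                    ./helloworld               # runs with no system JVM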

                • brabel 3 years ago

                  > but produces native code similar to compiling C.

                  That's very misleading. For one, Java running native still needs a Garbage Collector, maintains Object headers which increase memory usage, can do things like reflection for configured classes, can load services via the ServiceLoader, schedule Threads and manage executors, and many other things that are "expected" to any Java application... in summary: native executables still have a Java runtime embedded into them (called Substrate VM, by the way), making them very different from C (much more like Go).

                  Also, notice that native Java executables still tend to have lower "peak performance" (i.e. once a "normal" JVM has warmed up, it will almost certainly be faster than a native executable, because the latter cannot do JIT compilation to take advantage of runtime profiling like a normal JVM does).

                  • pjmlp 3 years ago

                    It's also misleading to think C doesn't have a runtime.

                    https://learn.microsoft.com/en-us/cpp/c-runtime-library/c-ru...

                    https://gcc.gnu.org/onlinedocs/gccint/Libgcc.html

                    https://software-dl.ti.com/codegen/docs/tiarmclang/compiler_...

                    And many more, not feeling like linking documentation from all C compilers.

                    In fact this is so relevant even for C, that ISO C has a special section for deployments without runtime support, named freestanding C.

                    "In a freestanding environment (in which C program execution may take place without any benefit of an operating system), the name and type of the function called at program startup are implementation-defined. There are otherwise no reserved external identitiers. Any library facilities available to a freestanding propram are implementation-defined."

                    • brabel 3 years ago

                      That's not at all comparable.

                      From your link:

                      "Most of the routines in libgcc handle arithmetic operations that the target processor cannot perform directly."

                      You're trying to imply the Java native runtime is comparable to libgcc?? That's silly.

                      • pjmlp 3 years ago

                        A runtime is a runtime, regardless of the size and feature checklist.

                      • ithkuil 3 years ago

                        I think the point was that it's a matter of degree and not of substance

                • kaba0 3 years ago

                  The hello world .class file is 413 bytes, just for reference.

                  • squeaky-clean 3 years ago

                    Thanks, smaller than I remembered. Been a while since I've done any Java so I aimed a little high.

          • pkphilip 3 years ago

            A full Quarkus web application with REST uses just 12MB of RAM (with GraalVM). So yes, Java has come a long way.

        • tehbeard 3 years ago

          Part of that I imagine is the same problem Deno has.

          ICU locale data is pretty hefty and there aren't ways to trim it down.

        • sgammon 3 years ago
      • pier25 3 years ago

        Depends. Doesn't seem much for a server but if you want to distribute an executable to your users then 90MB seems huge. IIRC a hello world Go binary is like 2MB.

        • metaltyphoon 3 years ago

          Yeah but no one distributes hello world.

          • georgyo 3 years ago

            If the baseline is 90MB, then it only goes up from there.

            • patrickthebold 3 years ago

              The question is how it grows. If Bun executables are always 45x the size of Go's, that's a problem. If they are always ~88MB more than Go's, then it's less of a problem as the size grows.

            • kaba0 3 years ago

              A car that can transport 7 people is not 3.5x heavier than a car that can only have place for two — the initial size is much bigger, additional js code won’t bloat it further that much.

      • ptx 3 years ago

        Python's embeddable package for Windows [1] is 16 MB unpacked.

        The OpenJDK runtime with the java.base and java.desktop modules is 64 MB. Replacing Swing with SWT (leaving out the java.desktop module) gets it below 50 MB. The full OpenJDK runtime with all modules is around 128 MB. (With Java 17 on Windows.)

        [1] https://docs.python.org/3/using/windows.html#windows-embedda...

      • keyle 3 years ago

        yeah 90MB for a hello world is big.

        Approximately 2,700 times bigger than it could be.

        • wiz21c 3 years ago

          I got 4Gigs on my phone and 4x that on my laptop. I don't care about 90megs.

          What's the point in looking at size? I can see two:

          - you want to email the exe, and you have limits on mail size

          - you want to be ecofriendly; in that case, stop watching Netflix for 2 hours and you'll have your megs

          • keyle 3 years ago

            The demo of Quake II contained 3 fully playable levels and was 11 MB.

          • geysersam 3 years ago

            I see it differently, what if I want to have more than 40 apps on my phone?

            Although, to be fair it seems likely the executable size will shrink with time.

      • schara 3 years ago

        I have 13043 .exe files on this computer; at 90MB apiece that would be over 1TB.

        • sammoore 3 years ago

          If most executables on your machine used Bun, then Bun could be a bundled shared library on your system, eliminating the bulk of the 90MB executable size. Just like the shared libraries the 13043 .exe files on your computer are currently linking to.

          • geysersam 3 years ago

            Sure, but then we're losing the simplicity advantage of having only a single executable, right?

    • VWWHFSfQ 3 years ago

      I'm curious why that would be so big. Even my python3.10 binary + /usr/lib/python3.10 is only 30MB

      • matlin 3 years ago

        It bundles an entire JS engine with it. I think JSC in Bun's case. V8 for Deno and Node.

        • ricardobeat 3 years ago

          The engine is not the largest part of it. just-js, which is pretty close to barebones V8, sits at ~20MB. JSC is supposed to be about 4MB, Hermes is 2-3MB. The largest parts I think are ICU and the built-in libraries.

          • billywhizz 3 years ago

            yes. author of just-js here. a minimal build of a v8 based runtime weighs in around 23-25 MB on modern linux using latest v8. this gets bigger all the time, due to new functionality being added to JS/V8 and no easy way to decide what parts of JS to include. when i started working on just-js ~3.5 years ago i'm pretty sure it was only 15MB or so - can verify when i have time.

            • billywhizz 3 years ago

              i just tried recompiling v0.0.2 (https://github.com/just-js/just/releases/tag/0.0.2) of just-js and comparing it to current. for the completely static build on ubuntu 22.04 i see following:

                0.0.2   (v8 v8.4.371.18)  - file size: 15.2 MB, startup RSS: 8.4 MB
                current (v8 v10.6.194.9)  - file size: 19.5 MB, startup RSS: 12.3 MB

              so, that's roughly 30% binary size increase and 50% greater startup memory usage in 2.5 years. =/

    • paulddraper 3 years ago

      IIRC the latest Linux Node.js binary is over 70MB.

      Though Bun doesn't use Node.js I believe, it's a useful reference point.

      (At least it's not JVM Hotspot...)

    • stephc_int13 3 years ago

      The storage part is not a big deal; it doesn't look nice, but it's merely a bit off-putting.

      The worrying part is that this is mostly code, not dead junk simply occupying space. This is part of the code path, filling the caches, or should I say thrashing the caches…

    • bryancoxwell 3 years ago

      Well, I do appreciate the transparency to be honest.

    • s17n 3 years ago

      For most use cases, that doesn’t matter.

    • datavirtue 3 years ago

      That small!?

  • brundolf 3 years ago
    • petetnt 3 years ago
      • vdfs 3 years ago
        • iansinnott 3 years ago

          Perhaps it's specific to the applications i've built, but pkg has _ALWAYS_ given me issues. The ideal is the packaged code Just Works as if you ran it with node, and in my experience pkg does not deliver in that regard.

          Glad others are finding value there, but i wish it was more of a drop-in replacement for `node <script.js>`. Feels similar to `ts-node` (always having issues) and `tsx` (just works).

        • JonathonW 3 years ago

          Produces smaller binaries than bun too-- one of my applications, packaged by pkg for Windows, is about 70 MB (and actually does things; this isn't a Hello World). And it compresses well (must be including library sources in the binary as plain text); in a ZIP it's about 25 MB.

          Still not small, but I'm not sure what Bun's doing to come out 20 MB larger than an app with actual functionality and dependencies (this one has express, sqlite, passport, bcrypt, and a bunch of others-- essentially everything you'd need for a web service).

        • freeatnet 3 years ago

          Does anyone know of a good comparison of these bundling methods?

      • RedShift1 3 years ago

        Experimental, so probably at least 5 years before it's considered "stable"

        • g_delgado14 3 years ago

          As if bun isn't experimental

          • brundolf 3 years ago

            Node moves pretty slow these days, I wouldn't be surprised if Bun's version gets stabilized before Node's

            • Aeolun 3 years ago

              Which is probably a good thing. Node is starting to get firmly in the boring technology camp these days.

            • re-thc 3 years ago

              Has it ever moved fast? It's been at a steady pace for a long time.

    • kbenson 3 years ago

      Yeah, I used Deno to build a simple URL switch case utility in late 2021 to handle sending different URLs to different browsers, and that was ~56MB compiled at the time. I don't know if it's changed in the base size since then, but if bun is resulting in 90MB binaries (as reported here), then Deno may yield a significant reduction in size (if it hasn't gotten much worse in that time).
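
      (For comparison, the Deno flow is a one-liner; a sketch, with the file and output names made up for illustration:)

          deno compile --output url-switch --allow-run main.ts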

      • brundolf 3 years ago

        I'm sure it'll get better (I'm sure both will); there's probably a lot of potential optimization to be done, dropping unused parts of the standard library and platform APIs and such (the way Deno loads most of the standard library as remote modules might actually be helping it here at the beginning), and Jarred is a stickler for hyper-optimization

        • kbenson 3 years ago

          Interestingly, I'm sure a lot of people who want to compile to a binary would love an option that optimized for size, even if it greatly hurt performance. For example, my case was ~100 lines of code, not including the parseArgs and parseIni modules I used, and is meant to be run with an argument, after which it exits with an action. If I could have chosen a dead simple and dumb JS interpreter without JIT that was < 10MB, I would have.

          It might even have resulted in a faster runtime as well, since it wouldn't need to load nearly as much of the binary into memory, and also wouldn't need to initialize as many special purpose data structures to deal with advanced performance cases.

  • silisili 3 years ago

    It's why I dove into Go.

    This definitely took me from the 'eh, kinda cool project' to 'I can't wait to try this out immediately' camp.

    The binaries are pretty huge, hoping they can bring that down in time.

    • afavour 3 years ago

      > The binaries are pretty huge, hoping they can bring that down in time.

      I'm surprised the numbers are as high as they are and hope they can reduce them... but they'll never get down to the kind of numbers Go and Rust get to because Bun depends on JavaScriptCore, which isn't small, and unless they're doing some truly insane optimizations they're not going to be able to reduce its size.

      FWIW QuickJS is a tiny JS runtime by Fabrice Bellard... that also supports creation of standalone executables: https://bellard.org/quickjs/quickjs.html#Executable-generati... though its runtime speed is considerably worse than JSC or V8.

      • vlovich123 3 years ago

        For 90% of things this will be used for, it's more than enough so it might be a good default unless you enable a `-Ojit` flag when building more complex applications where the JIT will be a benefit. In fact, startup times might even be faster.

        The challenge of course is supporting two JS runtimes within the same codebase.

        • afavour 3 years ago

          Yeah I suspect the weird little differences between runtimes (e.g. what language features they do and do not support) would lead you down a path of a thousand cuts.

          It still feels like a graceful idea, though.

      • kaba0 3 years ago

        > but they'll never get down to the kind of numbers Go and Rust

        Can we please not put the two next to each other? There is absolutely nothing similar between them... why not mention Go and Haskell, or Go and D instead?

      • whimsicalism 3 years ago

        JSC is ridiculously fast, this is what makes bun great

        • afavour 3 years ago

          It really depends on what you're doing. If 95% of your code is file or network I/O then it's really not going to make the slightest difference whether you're running JSC, V8 or QuickJS. If your code is enormously complex and benefits from JIT then yes, you're going to really feel that downgrade.

          • mixedCase 3 years ago

            The difference in real world performance between web servers written in things like Python or Ruby to those written in Go, C# or even lower-level languages like Rust would indicate that's an over-estimation of how much IO dominates total runtime, even if it is by far the slowest part.

      • rtpg 3 years ago

        Why is JavascriptCore so big? What's going on internally to end up with binaries that big?

    • TechBro8615 3 years ago

      You've been able to do this with Deno for a long time (and Node too, as of recently). The downside is it bundles all of V8 so a "hello world" binary ends up being at least 70mb.

      • winrid 3 years ago

        I've been desensitized by my world of 500mb docker containers.

        • claytongulick 3 years ago

          Yeah, this is honestly one of the things that turns me off of containers in general.

          Like, the whole point was to effectively use linux kernel namespaces with cgroups in an intelligent way to give VM-like isolation, but non-emulated performance - and supposedly not having to deal with image size bloat from the OS like you get in VMs.

          What we got was an unholy mashup of difficult to debug, bloated images and ridiculously complex deployment and maintenance mechanisms like kubernetes.

          I just do old school /home/app_name deployments with systemd unit files, and user-level permissions.

          Oh, and it's webscale[1].

          [1] https://www.youtube.com/watch?v=b2F-DItXtZs

          • tracker1 3 years ago

            It doesn't HAVE to be that bulky or complex. You can use something lighter like Dokku, or just direct scripted deployments, pretty easily. As to the size, you can use debian-slim or alpine as a base for smaller options. There are also bare containers, or close to bare, for some languages and platforms as well (Go in particular).

            • claytongulick 3 years ago

              What's even "lighter" is a single binary sitting in /home/app running under "app" user and launched by systemd unit file with auto restart.
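
              A minimal sketch of that kind of unit file (all names hypothetical):

                  # /etc/systemd/system/app.service
                  [Unit]
                  Description=My app

                  [Service]
                  User=app
                  WorkingDirectory=/home/app
                  ExecStart=/home/app/app
                  Restart=always

                  [Install]
                  WantedBy=multi-user.target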

              Look, I totally get the unholy hell that's (for example) python dependency management, and containers are a great solve for that.

              Sometimes you don't have a choice of technology, so I get it.

              What I don't understand is folks that use containers for stuff like go binaries. Or nodejs. I mean, it's just an "npm install". Or now bun with its fancy new build option, you don't even need that.

              I honestly don't get the point of containers with languages that have good dependency management, unless you're in a big matrix organization or something.

              Or, as one HN user put it years ago, "containers are static compilation for millennials".

              I snorted beer out of my nose the first time I read that.

              • TechBro8615 3 years ago

                It feels like the same people who make this argument are also using the other side of their mouth to lament that nobody uses containers for their original purpose of containerization. And yet, that's a legitimate use case that is totally orthogonal to any build process or artifact distribution method. In fact the argument itself betrays a misunderstanding, because the underlying complaint is about using Docker images as a build artifact, but it's presented as if containers are the problem. But they're separate concepts (you don't compile a container, so the snarky analogy to static compilation is nonsensical upon closer inspection).

                There are plenty of good reasons to containerize even a single binary executable, as demonstrated by the fact that officially maintained images exist for containerizing processes like Postgres or haproxy. Sure, you could run both Postgres and haproxy as services directly on the host, but then you'd miss out on all the benefits either provided by or complementary to containerization, like isolated cgroups and network namespaces that make declarative service orchestration easily achievable with maintainable configuration.

              • ptx 3 years ago

                > I totally get the unholy hell that's (for example) python dependency management, and containers are a great solve for that. [...] What I don't understand is folks that use containers for stuff like [...] nodejs. I mean, it's just an "npm install".

                With Python it's just "venv/bin/pip install -r requirements.txt".

                All the tools needed to create an isolated environment (venv) and install packages (pip) come with the standard Python distribution these days. I wouldn't characterize that as "unholy hell".

              • tracker1 3 years ago

                You get a lot more from containers than just dependency management. You get isolation options for everything related to i/o from disk to network. As well as hard CPU/Memory controls. It's a few steps above what you get with chroot, etc.

                There's plenty to like about containers.

        • nine_k 3 years ago

          Docker containers are actually smaller if they share layers with other containers in the system. A ton of containers based on the same image reaps many deduplication benefits.

          • tracker1 3 years ago

            Yeah, I notice many/most images are based on a recent Debian base if they aren't on Alpine or closer to bare images. I don't consider even Alpine as a base too bad for a lot of apps.

        • thewataccount 3 years ago

          Have you tried using alpine based images instead of debian/ubuntu/others? I know it's not always possible especially because of musl but for most things it works fine and is tiny.

          • verdverm 3 years ago

            If everyone is on the same base image, then you aren't really dealing with 500mb images, but much smaller layers on top.

          • winrid 3 years ago

            It becomes a political issue at this point w/ battling the ops team. I have more important battles.

            • zaphar 3 years ago

              If it's a battle by all means avoid it. But it's weird that your ops team would care about the image type. The whole point of containers is that they don't need to care.

              • winrid 3 years ago

                I guess I should have said infra and not ops. But cost cutting has interesting implications on responsibilities :)

            • thewataccount 3 years ago

              Yeah that's fair, if it works it works.

              Depending on your setup it doesn't really matter anyway.

            • beanjuiceII 3 years ago

              how about ubuntu chisel

          • jrockway 3 years ago

            Why even include Alpine? Distroless is the way to go.

    • latchkey 3 years ago

      `-trimpath` plus `-ldflags="-s -w"` makes the binary smaller.

      `xz -z -9e` is good to compress it for distribution.
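
      Putting the two together (a sketch; note that -s and -w are linker flags, passed via -ldflags):

          go build -trimpath -ldflags="-s -w" -o app .
          xz -z -9e app    # emits app.xz for distribution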

  • MuffinFlavored 3 years ago

    > Part of Go's popularity is due to how easy it is to produce self-contained executables.

    Rust too... :D

    • winrid 3 years ago

      Nah, rust still depends on libc at runtime which is a pain. Go doesn't have this problem afaik as it has its own stdlib and runtime.

      • arp242 3 years ago

        You can statically link libc in Rust too; at least, if teh interwebz is correct (not a Rust expert); you just need some extra flags.

        This is actually not that different from Go in practice; many Go binaries are linked to libc: it happens as soon as you or a dependency imports the very commonly used "net" package unless you specifically set CGO_ENABLED=0, -tags=netgo, or use -extldflags=-static.
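
        i.e. something like this (a sketch):

            CGO_ENABLED=0 go build -tags netgo -o app .
            # or, keeping cgo but asking the external linker for a static binary:
            go build -ldflags '-extldflags "-static"' -o app .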

        • Conscat 3 years ago

          Statically linking libC is problematic in various ways. I really appreciate that Zig has its own runtime that is designed specifically for this use case.

          • Kamq 3 years ago

            > Statically linking libC is problematic in various ways.

            Are we just talking about standard rop gadget vulnerabilities, or is there something else that's a problem with it?

            • thristian 3 years ago

              glibc, Linux's traditional libc, can dynamically load hostname resolution policies, but that only works if the executable has access to dynamic loading, i.e. if it's not a static executable.

              Dynamically loading hostname resolution policies doesn't happen often, but when it does happen it's a right pain to diagnose why some tools see the right hostname and other tools don't.

        • maeln 3 years ago

          You can, but usually, if you really want a binary with no runtime dependencies, most people will just compile their code against musl libc instead.

          The only issue is that some libs sometimes do not compile with musl (or at least not without some workaround). Although it often concerns one specific platform (looking at you, Mac OS).
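
          Concretely (a sketch, assuming rustup is installed):

              rustup target add x86_64-unknown-linux-musl
              cargo build --release --target x86_64-unknown-linux-musl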

      • mirashii 3 years ago

        > Nah, rust still depends on libc at runtime which is a pain.

        It does in general, though I don't really think this is a big pain or blocker in the general case; the version requirements around libc are very loose.

        > Go doesn't have this problem afaik as it has its own stdlib and runtime.

        That's also true, but it's not really a pure win. Choosing not to use a battletested libc has led to a variety of subtle and difficult to diagnose bugs over the years, e.g. https://marcan.st/2017/12/debugging-an-evil-go-runtime-bug/

        • jrockway 3 years ago

          I think it's a pure win. Writing your program in 1 language instead of 2 is worth one simple misunderstanding of vDSO.

        • Patrickmi 3 years ago

          This is obviously a trade-off, not a bug; there are certain things one must overcome "with time". Even if Go starts using libc, I'm pretty sure the Go team would have their own libc, which makes no difference unless it deals with the problems Cgo has.

      • d0100 3 years ago

        Indeed, I built Supabase's edge-runtime and sent the binary to another PC with an earlier Ubuntu version, only to discover it won't work.

        I went on a wild goose chase to build static Rust, but Deno can't target musl yet and the issue is a few years old.

      • leeoniya 3 years ago
      • saagarjha 3 years ago

        Not always, because some platforms require you to use their libc.

    • diego_sandoval 3 years ago

      What bothers me is that even when that capability exists, most of the Rust open source programs that I've tried don't distribute binaries, and still ask you to install cargo and compile the program from source.

      • VWWHFSfQ 3 years ago

        My experience is the opposite. Nearly every Rust tool I've used offers static binaries for various platforms.

  • pjmlp 3 years ago

    Yeah, because compilers that produce static binaries don't exist since Macro Assemblers and FORTRAN were created. /s

  • inglor 3 years ago

    Node supports this too (experimentally) https://nodejs.org/api/single-executable-applications.html
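
    The flow there is more involved than Bun's one-liner; roughly this, per the linked docs (a sketch; the sentinel-fuse constant comes from those docs and the exact flags are version-dependent):

        echo 'console.log("hello")' > hello.js
        echo '{ "main": "hello.js", "output": "sea-prep.blob" }' > sea-config.json
        node --experimental-sea-config sea-config.json
        cp "$(command -v node)" hello
        npx postject hello NODE_SEA_BLOB sea-prep.blob \
          --sentinel-fuse NODE_SEA_FUSE_fce680ab2cc467b6e072b8b5df1996b2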

  • henry_viii 3 years ago

    Doesn't Deno already let us do this?

    • chrisco255 3 years ago

      Bun is intended to be a drop in replacement for Node.js, with Node.js compatible APIs. Deno chose to go a different route with the design of the runtime, encouraging more modern web-native paradigms.

      • hiccuphippo 3 years ago

        Deno changed their opinion recently and will offer Node.js compatibility. Apparently it wasn't such a good idea to not be compatible on purpose.

        • tracker1 3 years ago

          I actually have mixed feelings on this one... since I think Deno's approach has been generally cleaner, but also recognize the scale of what's in NPM.

        • chrisco255 3 years ago

          Didn't realize that, thank you. I can empathize with Deno's desire to take backend JS in a more web-native direction.

    • ignoramous 3 years ago

      Yes, that it does. Bun is mostly following Deno's lead, with Zig + JSCore (and per some microbenchmarks, it's faster than both Node and Deno).

  • bdg 3 years ago

    Is this like a phar file?

    • blowski 3 years ago

      Phar files still need the PHP runtime installed to run them. These files have the JS runtime embedded in them.

  • synergy20 3 years ago

    Golang is the most famous all-in-one-binary language with reasonable size.

    Zig is statically linked by default; it probably has the smallest binary size, smaller than static C.

Jarred 3 years ago

I work on Bun. Happy to answer any questions

also: there is a bug in `bun build --compile` I am currently working on fixing. Expect a v0.6.1 in a bit

  • alberth 3 years ago

    Revenue/monetization model?

    Given that Oven has taken $7m in VC funding, how do you plan to monetize Bun, etc?

    • Jarred 3 years ago

      The plan is JavaScript edge hosting focused on fast cold starts and being extremely inexpensive, using lots of custom infrastructure

      • internetter 3 years ago

        Hi Jarred. Super excited to see bun coming along. I've been loving it ever since the invitation release. As I watch from the sidelines, I've been mesmerized by your productivity. I have a couple questions, if you don't mind.

        The first – I'm curious about Vite in bun. You have a bundler, you have a TypeScript transpiler, and you have an HTTP server. Does this mean eventually Vite (or at least some of its functionality) will be "native" inside of bun, or will Vite continue to be its own thing? (eg. Vite plugins for <framework>)

        The second is more related to Oven. I'm curious what the value proposition of Oven's edge is, compared to other serverless providers? It's hard to imagine the runtime being the main selling point, with its Node compatibility and packages like hono being able to run everywhere. What will set Oven's edge hosting apart from the pack?

      • icemelt8 3 years ago

        This reply is news itself. Nice!

    • whimsicalism 3 years ago

      my understanding was an edge environment similar to CF workers

  • drschwabe 3 years ago

    Not a question, cause you already mention this here, but just wanted to give you extra props for supporting CommonJS out of the box; keep up the great work.

    https://twitter.com/jarredsumner/status/1475238259127058433?...

    I have been doing this (using ES and CommonJS modules in the same file) in clientside code via Browserify or Rollup ever since ESM got popular, but it's a bit more nuanced and annoying to do in NodeJS.

  • Signez 3 years ago

    Really impressed to see that bun is now faster than esbuild, which was in my mind one of the fastest bundlers/minifiers in town.

    How did you achieve that? Are there some shortcuts you took, or some feature you deemed not in scope (yet)?

    • conradev 3 years ago

      This is a fantastic talk on some of the optimizations that Zig makes easy to implement: https://vimeo.com/649009599

      Bun is written in Zig, but it takes the same approach that esbuild took to make things fast: a linear, synchronous pipeline fitting as much as possible in the CPU cache with as little allocation and other overhead as possible. Zig has more knobs to turn than Go.

      Bun goes one step further and does a great job of offloading work to the kernel when possible, too (i.e. the large file I/O improvement with the Linux kernel)

      • f311a 3 years ago

        Zig is cool, but Bun heavily relies on JSC which is written in C++

        • conradev 3 years ago

          JSC is a multi-tier JIT with the last stage ultimately being LLVM, so if you want to be pedantic, Bun relies on LLVM’s optimizer which is written in C++.

          The transpiling itself is written in Zig, which is the part that has the performance improvement. If Bun relied on JavaScript and JSC for the heavy lifting, it would be no faster than the other JS bundlers.

          edit: no longer LLVM: https://webkit.org/blog/5852/introducing-the-b3-jit-compiler...

        • brundolf 3 years ago

          But bundling doesn't

    • eatonphil 3 years ago

      I really hate to talk about software based on the language it's written in, and I don't mean to imply one language is better or worse, but the upper bound of performance in Zig is likely easier to reach, and likely higher, than the upper bound of performance in Go. Though it may depend on the workload. (esbuild being written in Go.)

  • meekaaku 3 years ago

    Just wanted to say thank you for starting and leading one of the most exciting projects in the JS landscape.

  • samwillis 3 years ago

    > I work on Bun

    Understatement.

    • intelVISA 3 years ago

      Local hacker seasons v8 Bun with io_urings; woulda been a cool proj if the Js runtime was organic like Bellard's qjs

      Still, in the native-starved web space this sorta meal will be considered haute cuisine.

  • raphaelrk 3 years ago

    Congrats on the launch!!

    Any plans on adding "in-memory" / "virtual" file support to Bun.build? I'd be interested in using it for notebook-style use cases

    --

    Also, ways to do "on-the-fly" "mixed client/server" components (ala hyperfiddle/electric) + sandboxing (ala Deno) would be extremely exciting

    Some projects in this vein - https://github.com/jhmaster2000/bun-repl and https://www.val.town/

    Also, bun macros are very cool -- they let you write code that writes over itself with GPT-4. Just mentioning as a thing to keep on your radar as you keep pushing the boundaries of what's possible in javascript :) making it more lispy and preserving eval-ability is great
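
    For anyone who hasn't seen a macro, the general shape is below (a sketch; the exact import-attribute syntax has shifted across Bun versions):

        // rand.ts - runs at bundle time, not at runtime
        export function rand() {
          return Math.random();
        }

        // app.ts - the call is evaluated during `bun build` and its result inlined
        import { rand } from "./rand.ts" with { type: "macro" };
        console.log(rand());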

  • yobuko 3 years ago

    First, thank you for all of your hard efforts.

    I have seen some desire and works expressed towards using Bun with Electron or Electron alternatives; this interests me greatly. Do you have any plans or aspirations to make any strong push in this direction?

    • sroussey 3 years ago

      This would be awesome. Use bun as the main process (replacing node) and have the front end use the system webview. The system webviews are good these days.

  • tazeg95 3 years ago

    Hello, it seems very interesting. I am using esbuild to build my apps and it has a developer live server. How would you compare bun to esbuild? What can one do that the other can't? Do you have a comparison page?

  • networked 3 years ago

    1. It seems the Bun.file API (https://bun.sh/docs/api/file-io) doesn't provide a way to distinguish between a zero-size file and a file that doesn't exist. Is this right? If it is, it would be nice to have one. It doesn't have to interfere with the lazy loading.

    2. Do you cross-compile Bun? If you do, how has your experience been cross-compiling with Zig when you have a C++ dependency?

    • Jarred 3 years ago

      > 1. It seems the Bun.file API (https://bun.sh/docs/api/file-io) doesn't provide a way to distinguish between a zero-size file and a file that doesn't exist. Is this right? If it is, it would be nice to have one. It doesn't have to interfere with the lazy loading.

      Yes that is correct and not good. Pedantically, files which don't exist can be created between the call to check if it exists and after. In practice though, it is pretty annoying
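
      Concretely, the ambiguity looks like this (a sketch using the documented lazy `size` property; the path is hypothetical):

          const f = Bun.file("./maybe-missing.txt"); // lazy; no I/O happens yet
          console.log(f.size); // 0 for an empty file *and* for a missing one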

      > 2. Do you cross-compile Bun? If you do, how has your experience been cross-compiling with Zig when you have a C++ dependency?

      We cross-compile the Zig part but not the C++ dependencies. zig c++ was too opinionated for us the last time we gave it a try. I'm optimistic this will improve in the future though.

  • kitd 3 years ago

    Congrats on the release!

    How standalone are the standalone executables produced by `bun build`? Is a libc or equivalent expected to be present?

    • Jarred 3 years ago

      Bun does need glibc, but older glibc versions should work okay because of this: https://github.com/oven-sh/bun/blob/78229da76048e72aa4d92516...

      We haven't implemented polyfills for everything yet though, like Bun uses posix_spawn and doesn't have an exec + fork fallback

      Bun's dependencies otherwise are statically linked

      • codethief 3 years ago

        > Bun does need glibc

        Wait, are we talking about what Bun needs to run or what standalone executables produced by bun build need in order to run?

        • hiccuphippo 3 years ago

          Both. The executables produced by bun are the bun binary with your script concatenated at the end. Try building a hello world and run `tail -c200 hello | xxd` to see your script at the end of the file.

      • 10000truths 3 years ago

        Is there any plan to allow for statically linking with musl to get completely shared-lib-dependency-free executables?

  • HorizonXP 3 years ago

    California Ave Lockitron crew represent!!

    Glad to see you leading this, incredible work and nice to see the positive reception.

  • pastacacioepepe 3 years ago

    Is Windows support planned?

  • sroussey 3 years ago

    Bun is awesome! Need debug support and console fixes. Please!!!

  • toastal 3 years ago

    When will Bun open up communications to something open source/decentralized instead of relying on users to give up their online security to Discord?

tankenmate 3 years ago

I am truly perplexed as someone outside of the Javascript ecosystem; why are there so many incompatible bundlers? If you look at most compiled languages they have a set ABI / executable image format, and you just use a link editor (either compile time, run time, or both).

Is it just because most Javascript developers have never learnt from any of the lessons that came from decades of compiled languages? (compilers, compiler tools, operating system and kernel development, etc).

Is there some benefit that Javascript bundlers have that I'm unaware of?

Truly curious.

  • brundolf 3 years ago

    1. Compiling/building is an ~optional layer on top of the core language/standard (which has become less optional over time)

    2. Running JS outside of the browser is similarly a layer that was built before any kind of standard existed for it (which still doesn't, really)

    The browser standards are the only real standards. Everything else (which has turned into a lot) is "standard by implementation". Implementations usually try to agree with each other, because that's obviously beneficial for everybody, but sometimes they make choices to deviate either out of necessity or in an attempt to improve the ecosystem

    So it's all pretty ad-hoc, but in practice most things are mostly compatible most of the time. They orbit the same general things, and the orbit has narrowed in the last few years as most of the big problems have been solved and the community is zeroing in on the agreed solutions (with the caveat of often having to maintain legacy compatibility)

    Deno takes a stricter philosophy than most, where it prescribes a ~good way that everything should be done (which is almost entirely based on browser standards which have evolved since all this started), even though it runs outside of a browser, and requiring the ecosystem to fall in line

    Bun on the other hand takes a maximalist approach to compatibility; it does its best to make everything from every other sub-ecosystem Just Work without any modifications

  • qbasic_forever 3 years ago

    Bundling is totally different from linking and building for native platforms. Bundling is all about optimizing code to be sent over a small pipe--you're combining multiple compilation units into one file (so just one web request and lower latency) and doing optimizations like tree shaking to send only the code that's actually used.

    It's a pretty unique use-case that not many other programming languages deal with or care about. It's almost as if you are a demo scene coder trying to optimize for the absolute smallest code possible.
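
    For example, with esbuild (a sketch; tree shaking is on by default when bundling):

        esbuild entry.ts --bundle --minify --outfile=out.js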

    Linking native code doesn't really care about optimizing the size of the output and is just trying to make sure all code paths can be called with some kind of code.

    • haberman 3 years ago

      > Linking native code doesn't really care about optimizing the size of the output and is just trying to make sure all code paths can be called with some kind of code.

      That is not true at all. There are many use cases where native code size is very important. Native code toolchains often target platforms with extremely limited ROM/RAM. Even on big machines, RAM is not free and cache even less so.

      Native code linkers will GC unused code sections (`-Wl,--gc-sections `), fold identical data to remove duplication (see COMDAT). Native code compilers have optimization modes that specifically optimize for small code size (`-Os` and the like).

    • colonwqbang 3 years ago

      A native compiler also optimises the code. A native static linker also tries to omit unused library data and code. It's absolutely not the case that native devs don't care about code size (we do!)

      I guess Javascript uses a slightly unusual executable format, text instead of binary. Otherwise, it seems like very much the same thing?

      • crabmusket 3 years ago

        I'd say that bundler design is less influenced by the format and more by the requirements of delivery over the web. Native code is not downloaded each time the user opens the application. Tons of web infrastructure and complexity is aimed at solving the problem of "user lands on page for the first time, it must be responsive as soon as possible."

  • moron4hire 3 years ago

    I think it's largely a three-fold problem: most JS apps are still deployed through browsers and not installed; HTTP2 was not the panacea of multi-request happy good times it was made out to be; and there is no Application Binary Interface, so everything gets deployed as source code.

    This creates a situation where you need bundlers, whereas other languages don't have the concept at all, just to be able to minimize download time (and honestly, while we end up making rather large apps in comparison to web pages, they're pretty small in comparison to other kinds of applications), and then bundles are too opaque to share common code between applications.

    And because there's no chance to benefit across projects from sharing, there's no force driving standardization of bundling, or adoption of said standard.

  • ponyous 3 years ago

    Because there were browsers and no standards. How can you expect someone who starts coding on the web to know what pains kernel developers went through decades prior?

    • tankenmate 3 years ago

      But surely the people writing the browser code thought about the ecosystem they were creating / trying to create?

      • moron4hire 3 years ago

        Browsers weren't written in a day. Technically speaking, Mozilla Firefox is a ship of Theseus going back to the release of Netscape in 1994. Did browser and internet infrastructure developers in the early 90s understand that these things would become rich application platforms? Looking at the history of HTTP, it's clear that they expected some concept of "application" to be delivered through the browser. While there's certainly a chance at least a few of them foresaw the full scope of what that would mean (it's not like X11 remoting wasn't a thing), I don't think most of the people involved were thinking much past 10 years (The Distant Future, the Year 2000).

        JavaScript was apocryphally "invented in 10 days"; it came as an attempt to create competitive advantage, not to create a global standard. The first JavaScript came a year (1995) after the first Netscape, but the first major JS-heavy application didn't come for another decade (Google Maps, 2005).

        • tracker1 3 years ago

          There were a lot of JS-heavy web applications prior to Google Maps, from around the release of IE5 at the end of 2000 in particular, through the long tenure of IE6. Having worked on some JS-heavy applications around that time, I can say it was also much harder, as you had a relatively wide variety of browsers and versions, since people on dialup were far less likely to update their browsers regularly (or at all, beyond what came on an ISP or AOL CD).

          Of course, the efforts for larger dev teams, optimizations and bundling were far less popular before then. Can't tell you how many poorly written sites/apps carried who knows how many versions/copies of JQuery for example. It was really bad for a while.

          Now, at least, more people are paying some attention to it. There are still some relatively large bundles that are easy to get overloaded by; I mean, as soon as you load any charting or graphing library it blows out everything else. Of course this is offset by bandwidth and compute advancements as well.

          There was a popular developer site around 1998-2000 or so called 15seconds.com as that was the average point at which users would start dropping off from a load. Now that's measured at around a second or two.

          • moron4hire 3 years ago

            Yeah, I worked on some of them, too. They were even in mapping. I had written one of the first, dynamic, 2D drawing libraries for JavaScript, long before Canvas2D, but it was hidden in proprietary consultoware.

            The significance of Google Maps was that A) it had all the parts that we would recognize today as a JS single-page app, B) it had no alternative interface that people could opt to use instead (diluting the usage of that particular implementation versus the product as a whole), C) it had broad appeal and adoption, and D) it was significantly better than competitors specifically because of the "SPA-ness" of the app.

            Google Maps had the features and penetration necessary to change the public perception at large of what could be done with browser-based apps.

        • june_twenty 3 years ago

          Was Google Maps the first JS-heavy app? That's a TIL for me..

          • diroussel 3 years ago

            No it was not. There were many, but for me the most significant early single page app was Outlook web access.

            A colleague of mine went diving through the JavaScript source and found a reference to an ActiveX component called XMLHttpRequest. We realised it was pretty useful and ended up using it to build an SPA that approximated a spreadsheet for global logistics planning. It worked very well for 2003 standards.

            Google maps came in 2005

          • moron4hire 3 years ago

            No, certainly not the first in aggregate to use JS heavily. Actually, the browser-based Outlook client was the first to have all the parts that we would now consider to be essential for JS Single Page Apps (because Microsoft had to invent AJAX first). But Google Maps was definitely the first to have a major impact and start changing the public perception of what could be done with browser-based apps.

            I consider Google Maps to be the first well-adopted, no-traditional-alternative app to be what we recognize today as a JS SPA. Gmail had a pure HTTP mode, and otherwise was not interesting to people who were happy with their current email. Outlook wasn't really used by that many people, not at the scale that Google Maps was. Google Maps had broad appeal and was significantly better because of its SPA-ness, enough to change how people thought about browser-based apps.

          • crabmusket 3 years ago

            According to Crockford, they were working with very JS-heavy apps in 2000[1]. No idea how mainstream that specific app, or technique, was.

            [1]: https://corecursive.com/json-vs-xml-douglas-crockford/#the-p...

      • biorach 3 years ago

        nope

        well.... not for a surprisingly long time

doodlesdev 3 years ago

Tangential, but this has to be one of the fastest websites I've used recently. How is it possible they get such fast loading of static content? It's basically instantaneous, especially with JavaScript disabled.

edit: Oh well, after navigating to some pages on the blog I see that everything was already on browser cache, so that's why it was so fast. Reminds me I need to overwrite Netlify's cache-control on my website, even though it's already very fast to load (Netlify sets max-age=0, must-revalidate by default).

  • ignoramous 3 years ago

    From experience, static websites on https://pages.dev are blazing fast (and free); ex: pagespeed result for a static webpage I host: https://archive.is/PkZbO

    Netlify was equally fast (though not free).

    • doodlesdev 3 years ago

      Cloudflare is indeed absurdly fast. I haven't been impressed with Netlify's speed, although I am using the free plan (don't think it makes sense to upgrade if I'm already not super happy with performance).

      When you say paid Netlify is as fast as Cloudflare do you mean the Pro plan or the Enterprise plan? AFAIK the enterprise plans run on a different network, with more distributed servers, although I could be wrong.

      It seems at least part of the noticed difference in speed has to do with my region, as pagespeed insights gives me sub-second FCP and LCP on my Netlify website [0], which feels a bit better than what I get at home (with 500mbps fiber). It's possible my ISP is at fault, but I'm not sure how I could diagnose this much better.

      [0]: https://archive.ph/IF0t5

      • ignoramous 3 years ago

        > When you say paid Netlify is as fast as Cloudflare do you mean the Pro plan or the Enterprise plan?

        I was on their £19/mo plan, not Enterprise. And those websites were as fast as the ones on pages.dev.

  • Minor49er 3 years ago

    Its use of the Cloudflare cache seems to be a part of it

    • doodlesdev 3 years ago

      Indeed, http headers indicate that the asset policy for the HTML is:

         cache-control: public, max-age=0, must-revalidate
      
      A few things I notice:

         - It uses Cloudflare cache (as you pointed out).
         - All CSS is in the HTML file, so only one request is needed to display the page.
         - The compressed webpage is reasonably lean considering it has all CSS in the same file and uses Tailwind.
  • tracker1 3 years ago

    I've been playing with static content generation with Deno+Lume and deploying to Cloudflare pages... crazy good loads.

gavmor 3 years ago

`bun` is currently my favorite "just works out of the box" utility for running Typescript programs.

I've tried a couple and struggled with configuration and, on top of it all, bun is simply faster.

So, if you want to write a bunch of `.ts` files and point something at them, I really recommend `bun` (and, frankly, why would you write `.js` in 2023? Probably because you've not tried bun.)

Edit: I don't care about bundle sizes, because I'm just using bun to run my @benchristel/taste sub-second test suite.

brundolf 3 years ago

Anybody using Bun in production yet? What's your experience been like?

  • postalrat 3 years ago

    I deployed a small non critical service with bun. So far constant (but slow) memory leaks and an 80% chance of segfault when starting up. Will try a few more versions of bun then move it to node for a while until bun matures a bit.

    • Jarred 3 years ago

      wow thats rough

      i'm sorry

      can you file an issue with some code that reproduces it? will take a look

      • postalrat 3 years ago

        It's all easily reproducible. I'll double check my code but its not too complex.

        The service subscribes to redis messages, accepts websocket connections, authenticates the connections, then broadcasts messages through the websockets. Maybe around 1200 or so websocket connections per server.

      • postalrat 3 years ago

        I did upgrade to bun 0.6.1. Haven't seen the segfault on startup yet but still seeing the memory leak. Maybe even slightly faster than before. Will check my code to see if it could be causing the issue.

        • opengears 3 years ago

          Can you point us to the code so we can figure out where the memory leak originates from?

schemescape 3 years ago

How big are the bundles (edit: I meant self-contained executables) and do they depend on glibc?

Edit: just saw a comment from the author indicating glibc is required.

  • lionkor 3 years ago

    Doing it without glibc is not practical

    • silverwind 3 years ago

      It's just DNS isn't it? Many apps don't even need DNS, so there should at least be an option to build fully static binaries.

    • AndyKelley 3 years ago

      why?

      • lionkor 3 years ago

        Because otherwise you need to implement almost everything yourself - memory allocation, input, output, etc. - and anything more complex (like asking for the width of the terminal) requires you to get into kernel structs, copy them out, translate, and basically copy-paste glibc code anyway. And that's just Linux.

        • bruce_one 3 years ago

          There are alternative C libraries that are worth considering, e.g. Zig (which is what Bun itself is written in, afaiu) supports using [musl](https://musl.libc.org) as an alternative to glibc, and musl can be statically linked as well (by contrast to the glibc quasi-static-linking).

dimgl 3 years ago

I've been pretty jaded by Node.js lately, especially with all of the ESM and TypeScript stuff. This led me to try using Deno but Deno was missing the mark in a lot of ways.

Is Bun in a state where I can start thinking about replacing my Node.js production toolchains with it?

  • ojosilva 3 years ago

    Nope. I wouldn't. Not for production.

    • Bun is not stable yet (0.6.0)

    • Zig, the language Bun is built upon, is not stable either (0.11.0)

    Nothing against these awesome projects, I'm all in for a streamlined toolchain (TypeScript, bundling, binary generation...) and other excellent goals driving the Deno and Bun teams.

    But...

    • Node.js is a powerful piece of software, it's stable and full of battle-tested buttons and knobs

    • NPM and Bun/Deno are not real friends at the moment, just acquaintances

    • Take benchmarks with a pinch of salt. Real-world app performance depends on a well-greased system, not a particular 60,000 req/s react render benchmark. Remember the adage: your app will only be as fast as your slowest component.

    On a side note, lately I've been extending Node.js with Rust + N-API (ie. napi-rs or neon) and it opens up excellent possibilities.

    https://napi.rs/ https://neon-bindings.com/

    • Buttons840 3 years ago

      Bun might be a cool project, but them building on an immature language like Zig makes me wonder where their priorities are.

      • ricardobeat 3 years ago

        It wouldn't exist any other way; the story is that it was built with speed as a priority, and Zig was chosen for the performance optimizations it enables.

        • Buttons840 3 years ago

          There are several other languages that could achieve the same speed and performance optimizations. To be clear, I hope to see Zig succeed and I'd like to learn it one day. It reminds me of the video game engine question, only in this case it would be: do you want to build a product, or do you want to build something with Zig? I imagine the creators of Bun answered "we want to use Zig" as their first priority, and that's great, I hope they have fun.

          • ojosilva 3 years ago

            At one point, the Rust team switched away from the performant jemalloc allocator to something more widely compatible (the system default). They chose to sacrifice performance for the sake of compatibility/stability. It's still available, but optional.

            https://internals.rust-lang.org/t/jemalloc-was-just-removed-...

            Insane performance gains, like the ones we see early in Zig, are something that can be easily eaten away by the natural evolution and maturity of a programming language.

            Btw, Zig is beating trees with a stick to see what may fall down in this area: https://github.com/ziglang/zig/issues/12484

            • AndyKelley 3 years ago

              Zig, unlike C++ and Rust, doesn't need an optimized general purpose allocator in order to be fast. Zig outperforms its peers despite currently having a slow GPA in the standard library because the language encourages programmers down a path that avoids boxing the shit out of everything, which is inherently slow even if you have a global allocator optimized for this use case.

              Rust switched away from Jemalloc because it uses global allocation for everything. Zig's convention of explicit allocator argument passing means such a compromise will never be needed.

              As for "beating trees with a stick", I'll probably end up doing what I did for WebAssembly, which is to ignore the preexisting work and make my own thing that is better. Here's my 160-line wasm-only allocator that achieves the trifecta: high performance, tiny machine code size, and fundamentally simple.

              https://github.com/ziglang/zig/blob/c1add1e19ea35b4d96fbab31...

            • iExploder 3 years ago

              My impression is that Zig wants to replace C and has the same philosophy of a very simple language that does not evolve once it stabilizes. That means to me they want to stabilize the language with a small set of features and make it as performant as possible at that point. From their webpage:

              A Simple Language

              Focus on debugging your application rather than debugging your programming language knowledge.

        • spacechild1 3 years ago

          > with speed as a priority

          They could have just used C++ or Rust. Not saying that they shouldn't use Zig, just questioning speed as the (primary) motivation.

  • tracker1 3 years ago

    I've been pretty happy with Deno... mostly in personal use... still some rough bits in terms of Node compatibility but pretty good in general.

freddex 3 years ago

Looks great! Still eagerly waiting for Windows support: I have a specific use case where I need both a bundler and a package manager to run on the user's desktop cross-platform, and right now that's yarn + esbuild. I'd love to roll this into a single, performant solution. It's already being worked on as far as I know [1], excited to upgrade to Bun when that's available.

[1] https://github.com/oven-sh/bun/issues/43

19h 3 years ago

Funny to see improvements in crypto.createHash… I was totally caught off guard yesterday noticing most of crypto has been removed in Node 20 and replaced with the exclusively-async "subtle" WebCrypto. Quite a pain to work with when you need synchronous code.
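
A minimal sketch of the ergonomic difference (Node-style imports; node:crypto exposes both APIs):

    import { createHash, webcrypto } from "node:crypto";

    // Synchronous: callable from any code path.
    const hex = createHash("sha256").update("hello").digest("hex");

    // WebCrypto: Promise-only, so every caller has to be async.
    const bytes = new TextEncoder().encode("hello");
    const digest = await webcrypto.subtle.digest("SHA-256", bytes);
    const hex2 = Buffer.from(digest).toString("hex");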

fluente 3 years ago

It would be helpful to see how Bun's minifier compares to the others with popular libraries:

https://github.com/privatenumber/minification-benchmarks

vaughan 3 years ago

We need an equivalent of Python's Mojo for TypeScript.

This seems like the most obvious thing yet to be built.

I wonder how hard it would be to take an existing systems language and add TS syntax to it, seeing as they're all built on LLVM. Or maybe you could transpile TS to Zig.

captainmarble 3 years ago

Any plan on adding something like Deno KV to Bun?

Aeolun 3 years ago

Does it work with Prisma yet? I’m kinda waiting for that to switch everything over.

unilynx 3 years ago

import.meta.main (whether the current file is the 'main' or just being required) looks interesting and like something I have wanted in the past, but I'm not sure if it would actually be a good idea.

Was it ever offered for standardisation?

  • Jarred 3 years ago

    There is no standard; import.meta is host-defined:

        13.3.12.1.1 HostGetImportMetaProperties ( moduleRecord )
    
        The host-defined abstract operation HostGetImportMetaProperties takes argument moduleRecord (a Module Record) and returns a List of Records with fields [[Key]] (a property key) and [[Value]] (an ECMAScript language value). It allows hosts to provide property keys and values for the object returned from import.meta.
    
    https://tc39.es/ecma262/#sec-hostgetimportmetaproperties
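
    So in a host that defines it (Bun does, per this release), usage is a simple branch. A minimal sketch:

        // cli.ts — import.meta.main is true only when this file is the
        // entry point (e.g. `bun run cli.ts`), not when it's imported.
        if (import.meta.main) {
          console.log("running as the main module");
        } else {
          console.log("loaded as a dependency");
        }
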
dheera 3 years ago

Why are we still minifying JavaScript? Is it only for obfuscation?

State-of-the-art HTTP servers already do a pretty damn good job gzipping stuff on the fly; do we really need this garbage?

If it is for obfuscation, fine, can we just call it that?

  • 10000truths 3 years ago

    You can shrink transmitted sizes even further by using minification on top of transport compression. In practice, that nets me a ~25% reduction in size compared to compression without minification, and those kinds of gains add up, especially at the tail end of page load times.
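
    A quick way to check that kind of number yourself — a sketch using Node's zlib; the file names are assumptions:

        import { gzipSync } from "node:zlib";
        import { readFileSync } from "node:fs";

        // Compare the transport size of the bundle with and without
        // minification; both go through the same gzip step.
        const original = readFileSync("dist/app.js");
        const minified = readFileSync("dist/app.min.js");
        console.log("original, gzipped:", gzipSync(original).length);
        console.log("minified, gzipped:", gzipSync(minified).length);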

    • tracker1 3 years ago

      Not to mention, you can pre-compress at levels that are hard to match when compressing on the fly, often getting another 10% or more off the size. It all adds up.
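
      As a sketch of that pre-compression step with Node's zlib (paths assumed): brotli at maximum quality is typically too slow to run per request, but fine to run once at build time:

          import { brotliCompressSync, constants } from "node:zlib";
          import { readFileSync, writeFileSync } from "node:fs";

          // Compress once, at max quality, at build time; the server can
          // then serve the .br file as-is instead of compressing on the fly.
          const src = readFileSync("dist/app.min.js");
          writeFileSync(
            "dist/app.min.js.br",
            brotliCompressSync(src, {
              params: { [constants.BROTLI_PARAM_QUALITY]: 11 },
            }),
          );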

      There are some Steve Souders books on optimization that are pretty good and still pretty relevant.

      • ijustlovemath 3 years ago

        If you minify, aren't you increasing the overall information entropy, and thereby decreasing the amount that can be gained from compression? I'm sure overall there's still a net gain but I wonder where the point of inflection is.

        • tracker1 3 years ago

          Depends on the algorithm... the ones used with HTTP typically have less overhead for decompression than compression.

          If you mean code minification, it can depend, but in general with tree shaking it shouldn't be slower. The computer doesn't care if a variable is aa or myDescriptiveVariableName.

        • ricardobeat 3 years ago

          In my experience things kind of balance out, and you end up with negligible gains from minification once compression is applied. On the other hand, minification also affects the time it takes for the browser / JS engine to parse and execute the code, which can be significant in this world of massive bundles.

  • djbusby 3 years ago

    And pack a bunch of assets into one larger asset, reduce the HTTP request count, and maybe pre-gzip to save a few clock cycles.

    • dheera 3 years ago

      Maybe all of this should be an optional feature of HTTP servers and browsers with graceful fallback? NGINX could have a module that understands JavaScript and CSS and bundles and caches things on the fly, enabled optionally.

      It would greatly simplify deployment to have the source and deployed code be identical. Obfuscation aside, given JS is an interpreted language, there is no reason to not use it for what it is. We've turned deploying JS into the same level of complexity as deploying C++ by adding building and packaging steps. Interpreted languages should never need build steps, and deployment should be no more than a simple rsync.

    • LispSporks22 3 years ago

      I thought I read in the Rails docs somewhere that HTTP/2 and import maps rescued us from bundling JS.

  • jsmith45 3 years ago

    Bundling and minimizing can very significantly reduce code size, by eliminating unused code.

    This is beyond what people expect minifiers to be doing: stripping comments and whitespace, renaming all symbols to short identifiers, replacing idiomatic constructs with shorter equivalents, and so on.
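
    As a rough sketch of those expected transformations (one plausible output; real minifiers like terser or esbuild differ in the details):

        // Before:
        function calculateTotal(items) {
          // Sum up the line items.
          let total = 0;
          for (const item of items) {
            total += item.price * item.quantity;
          }
          return total;
        }

        // After: comments and whitespace stripped, locals renamed;
        // property names like "price" are left alone by default.
        function c(t){let e=0;for(const n of t)e+=n.price*n.quantity;return e}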

    Bundling with tree shaking can get rid of a lot of code that would otherwise need to be downloaded. This is especially the case when using only a subset of functionality from a larger library.

    Otherwise, for most libraries, if you pull in a single function, every single module from that library will also get loaded. This applies both to pre-bundled libraries (i.e. where there is only one large module, so obviously everything gets downloaded) and to non-bundled libraries, because most libraries have you import from an "index.(m)js" module that exports the library's entire public API. That means a browser with import maps will need to download all those files, and all files they import, which will be basically every module in the library.
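
    As a sketch (the package name here is hypothetical), tree shaking means an import like this contributes only one function's worth of code to the bundle:

        // app.ts — a tree-shaking bundler keeps only debounce and its
        // internal dependencies; the library's other index re-exports
        // are dropped from the output.
        import { debounce } from "some-utils-lib";

        const onResize = debounce(() => console.log("resized"), 100);
        window.addEventListener("resize", onResize);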

    Minimizers themselves often also have some sophisticated dead code elimination. Indeed, one potential (but inefficient) way to implement tree shaking is simply to bundle without shaking, using module concatenation, and then pass the result to a minifier with good dead code elimination capabilities. This would be able to eliminate basically anything tree shaking could, and more. The "more" comes both from any eliminable code the bundler would not know how to eliminate, and from being able to eliminate anything that was only imported by said dead code.

    This is one of the reasons why uncompressed minified code can sometimes beat out the compressed original code, while still being at least somewhat compressible itself. And I have not even touched on having fewer files to download (which still has meaningful overhead), nor on the smaller resulting codebase being faster to parse than the original.

    Last but not least, many people want to use TypeScript or JSX when writing their code, which means the code needs to be pre-processed before a JavaScript engine will read it. If you already have a compile step, then adding bundling and minifying on top of that can be relatively simple and make good sense for the above reasons. (Note: it can be simple. It depends a lot on what tools you use. Webpack, for example, can get really complicated, but it also offers some really powerful features.)

    • dheera 3 years ago

      All your points are good, thanks, especially about the unused functions.

      > many people want to use typescript or jsx when writing their code

      However, if this is true, why don't we just add these languages to browsers (<script type="text/jsx">, <script type="text/typescript">) with some kind of client-side processor for old browsers that turns them into "text/javascript" in-line in the DOM if the browser doesn't support it?

      It's kind of weird that JavaScript is becoming a common "bytecode" among other better-structured languages; one would think the language that becomes the compilation target should be better designed in its own right.

      If we're compiling JS, JSX, and TS all to some sort of assembly, or even a statically and strongly typed language like C++, I would feel a bit better.

  • nicetrybob 3 years ago

    We touch on this in the docs [0]. TLDR, bundlers are still necessary (for now) to reduce the number of roundtrip requests required before page load (ideally to 1), and in more complex apps to avoid overwhelming the browser with parallel requests. Even HTTP/2 can't handle 100k parallel requests for a big app with a bloated node_modules.

    [0] https://bun.sh/docs/cli/build#why-bundle

0x445442 3 years ago

> Lots of bug fixes to Node.js

Does Node.js have lots of bugs still?

timetraveller26 3 years ago

How does this compare to bite?

ShadowBanThis01 3 years ago

Is WHAT? The title should tell us what the post is about. Why do we have to keep bringing this up? A title like this is a great way to miss out on a lot of views and reduce the usefulness of your post.

Let the downvoting of this simple observation begin.
