ES modules are terrible

gist.github.com

172 points by theprotocol 4 years ago · 174 comments

xg15 4 years ago

> And then people go "well you can statically analyze it better!", apparently not realizing that ESM doesn't actually change any of the JS semantics other than the import/export syntax, and that the import/export statements are equally analyzable as top-level require/module.exports.

...

"But in CommonJS you can use those elsewhere too, and that breaks static analyzers!", I hear you say. Well, yes, absolutely. But that is inherent in dynamic imports, which by the way, ESM also supports with its dynamic import() syntax. So it doesn't solve that either! Any static analyzer still needs to deal with the case of dynamic imports somehow - it's just rearranging deck chairs on the Titanic.

I think while OP's right in theory, there is still a lot of difference between the two: ESM has dedicated syntax for static loading of modules and that syntax is strongly communicated to be the standard solution to use if you want to load a module. Yes, dynamic imports exist but they are sort of an exotic feature that you would only use in special situations.

In contrast, CommonJS imports are dynamic by default and only happen to be statically analysable if you remember to write all your imports at the beginning of the module. That's a convention that's enforced through nothing and is not part of the language or even of CommonJS.
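
To make the contrast concrete, a rough sketch (using built-in modules so it actually runs):

    // CommonJS: only convention keeps these statically analyzable
    const os = require('os');                                       // top-level string literal: trivial to analyze
    const extra = require(process.arch === 'x64' ? 'os' : 'path');  // equally legal, but opaque to a naive analyzer

    // ESM: the statement form is static by construction,
    //   import os from 'os';
    // and the dynamic escape hatch is syntactically separate:
    //   const extra = await import(process.arch === 'x64' ? 'os' : 'path');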

As an exercise, try to write a static analyser that simply ignores dynamic imports and just outputs a dependency graph of your static imports - and compare how well this works with CommonJS vs ESM.

  • kabes 4 years ago

    It's not the fact that imports are top level that makes it statically analyzable. It's the fact that the imports can't be variables like commonjs allows.

    • tolmasky 4 years ago

      But they can. I can construct any hard to find use case just as easily with import. Your analyzer probably won't find this: "(a => a)(path => import(path))(ROOT_PATH + "/x.js")".

      "But no one would do that!" Yeah, and no one does it with require either. The times people pass in variables to require are well warranted and usually not in a browser context (listed this in my other post, but for example loading "x-mac.js" vs. "x-windows.js" in node, where there is no problem since you're not bundling).

      So neither in the practical sense nor the hypothetical sense has the "static analysis problem" been solved any more than it was already solved with require. People used require basically like a static keyword before and you could basically make the same usage assumptions as import in those cases. Similarly, you can get as dynamic as require with import, so in the purely theoretical sense not much has been made better either.

    • andrew_ 4 years ago

      Say what now?

        const batman = await import('batcave' + version);

      • kabes 4 years ago

        That's only valid for dynamic imports. But if we consider dynamic imports, half of the rant in the post is wrong, since these can appear nested.

  • tentacleuno 4 years ago

    > Yes, dynamic imports exist but they are sort of an exotic feature that you would only use in special situations.

    I wouldn't necessarily describe dynamic imports as an exotic feature. They are basically required if you're building a semi-large app (and heavily advised by most frameworks in that case!). Otherwise, your homepage is going to load in some complicated AuthorPage / HeavyComponentWithLotsOfDependencies component the user might not even want.
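
    For instance, a rough sketch of the route-level code splitting that frameworks push, using React.lazy as one example (the AuthorPage name is the hypothetical one from above):

        import { lazy, Suspense } from 'react';

        // the chunk for AuthorPage is only fetched the first time it renders,
        // instead of being pulled into the homepage bundle
        const AuthorPage = lazy(() => import('./AuthorPage'));

        export function App() {
          return (
            <Suspense fallback={<p>Loading...</p>}>
              <AuthorPage />
            </Suspense>
          );
        }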

    The main advantage of ESM in this context would be its asynchrony, which you won't get with require. Webpack tried to tack this on with require.ensure, but it was nonstandard and eventually deprecated.

    Another advantage of ESM in general would be that you wouldn't have to use preprocessors and compilers like Webpack. My main reasoning here is that, prior to the import spec, the web didn't have any form of imports aside from loading in scripts and polluting the global namespace (which isn't necessarily bad). The majority of websites IME use some sort of bundler though, so this isn't really a major change so much as a nice-to-have, I suppose?

  • crooked-v 4 years ago

    > that you would only use in special situations

    There are web frameworks pushing this pretty hard for basic stuff (for example: loading React components with dynamic import) to build page-content-streaming functionality around it.

  • tolmasky 4 years ago

    This isn’t a real concern, and yes I’ve written a static analyzer for requires (as have many build tools). The fact of the matter is no one is trying to trick the analyzer by passing variables to require, or even more mischievously trying to rename require or something (there aren’t a lot of "(a => a)(require)(path + '/x.js')" out there).

    In practice, it is used like a static feature, and when it isn’t, it’s for a good reason that import doesn’t solve and just expects you find a harder solution to. For example, if you want to load a platform specific file depending on your host environment. With import, the entire function now has to become needlessly async just because any use of the import expression needs to be async. Another good example is modules that put their requires inside the calling function to avoid needlessly increasing the startup time of an app for a feature it may not use. This way, only if you call that specific function will you have to suffer the require/parse/runtime hit for it. Notice all these cases are in node, so they wouldn’t result in some complicated decision as to whether to include these “dynamic requires” into the main bundle or not — it just doesn’t come up in bundling since they are use cases that are specific to node. But because of ESM, node now needs to make a bunch of synchronous functions be asynchronous to accommodate a set of restrictions designed with the browser in mind. And again, at the end of the day import does still have an expression form so you haven’t actually resolved the static analysis problem, just made dynamic imports more annoying in non-browser contexts.
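
    A sketch of the lazy-require pattern described above, and what the same deferral looks like under ESM (the heavy module name is hypothetical):

        // CJS: the load/parse cost is only paid if the feature is actually used
        function renderChart(data) {
          const heavy = require('./heavy-chart-lib'); // loaded on first call, cached afterwards
          return heavy.render(data);
        }

        // ESM: the same deferral forces the function (and its callers) to go async
        async function renderChartESM(data) {
          const heavy = await import('./heavy-chart-lib.js');
          return heavy.render(data);
        }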

    • spankalee 4 years ago

      > to accommodate a set of restrictions designed with the browser in mind

      That's the whole point, and a very good thing.

      • tolmasky 4 years ago

        That was not the whole point, the whole point was to have a feature that could work in a variety of different environments. That’s why this language feature is in the ECMAScript spec and not in the W3C or whatwg, unlike something like “fetch” which is defined by the whatwg and thus has every right to not take other environments into consideration. There is a tremendous amount of subtlety that results from this fact, like how the spec can thus only define a small portion of this feature (syntax and basic semantics), but ultimately needing to leave everything related to fetching, resolving, and executing the code up to the environment (in HTML’s case, the whatwg HTML spec). This really complicates things and creates an unfortunate mismatch in expectations, where most users who have only a passing understanding of this feature and have been sold on the promise of something that will finally “just work” everywhere discover that this isn’t the case at all. There’s a reason why despite being introduced in ECMAScript over 5 years ago it still barely has support in node (and not great support in browsers either btw, but certainly better)— it’s because the reality is that this feature is supremely complicated to implement (despite providing very little tangible benefit), especially in the context of the unrealistic expectations users have developed for it as it continues to be pitched as being the magic tool that will make your code work everywhere without a build tool.

        • spankalee 4 years ago

          It really is the whole point. TC39 was not going to define a feature that didn't work in browsers, full stop.

          That Node has to take browser into consideration is a very good thing for universal JavaScript. We can now write code that works in browsers, Node and Deno and that's a great thing.

          The support in browsers is excellent btw. All current browsers support standard JS modules now. Chrome is leading the way with import maps, import assertions, and JSON and CSS modules, but the other browsers will get there and CJS had nothing comparable to those anyway.

          • tolmasky 4 years ago

            It is absolutely not excellent, unless you restrict yourself solely to whether there is a checkmark next to the browser name in mdn. It is very buggy, very difficult to debug, and as I mentioned in another comment, missing serious features (like no subresource integrity, which means we’re encouraging people to use a much less secure system of importing scripts in many cases!)

            > It really is the whole point. TC39 was not going to define a feature that didn't work in browsers, full stop.

            No one is saying they shouldn’t have considered the browser! We’re saying they should have also considered other major environments, like node! That’s the way to design a language feature and absolutely what they wanted to do. There were a number of reasons it was rushed out the door, but they’re pretty upfront about the fact that they would do things differently now and basically no other feature would be allowed into the spec in the state ESM made its way in then. I am currently a TC39 delegate and can assure you that it’s OK to admit when things aren’t great so that we can learn from it. It’s how we’ve gotten JS to such a better state than where we were 20 years ago, not by bending over backwards to defend the with() statement.

            • spankalee 4 years ago

              I've been using almost exclusively standard modules for years and they work quite well. Old crashers and cache problems I was aware of have been fixed for many years now.

              I don't know what debugging problems there are that wouldn't exist in CJS. At least with native modules you can see individual requests in the network panel while you're working and individual files in the sources panel while debugging. That alone is a huge increase in debuggability to me.

              SRI really should be done out of band. Inline SRI would require far too frequent cache invalidation and isn't compatible with package manager workflows where you don't know the exact version of a file you'll depend on. A tool should rather build up an SRI manifest similar to a package lock. This has been discussed several times in module and import map threads.

              And I don't think JS modules were actually rushed. They were languishing for years with an overly complex loader API and cut down to an MVP with the core agreed upon semantics and that's what finally got everyone to ship. They're really fine, and with import maps, CSS modules, and eventually web bundles, will be far, far superior to any previous alternative. They already are.

              The fact that Node has some JS module / CJS interop issues (and really only when you try to use JS modules from CJS, the other way is fine) is Node's problem, not TC39's. There's really nothing that TC39 could have done here because synchronous require() is the fundamental problem. It was a bad design from the start and we shouldn't burden ourselves with that bad decision forever. The sooner CJS goes away the better.

              • tolmasky 4 years ago

                > SRI really should be done out of band. Inline SRI would require far to frequent cache invalidation and isn't compatible with package manager workflows where you don't know the exact version of a file you'll depend on. A tool rather should build up an SRI manifest similar to a package lock. This has been discussed several times in module and import map threads.

                If you are introducing a tool, then you should be bundling your code, not creating more metadata files to load! Bundled code has repeatedly proven to load faster than ESM code using whatever HTTP 3 prefetch mumbo jumbo you throw at it, that no one actually uses in practice anyways. If you have a tool chain, then the answer is easy: don’t delay fetches and create more HTTP requests and roundtrips! As I mentioned in another comment, the sheer ridiculousness of this is demonstrated in the <link rel="modulepreload"> feature [1], where I kid you not the actual recommendation for having performant dependencies is to litter your HTML file with a link tag for every JS file that’s imported. We’re right back to where we started with a top level script tag for every script! Argh! But I know, now the recommendation is “oh no silly, your build tool should just create your 100 link tags”. Again, why? If you’re using a build tool a bundled file is way faster than waiting for link tags to get parsed to issue a bunch of prefetches and on and on. It’s so frustrating to discuss ESM because the goal posts keep bouncing between this being an “easy to use feature that removes the need for build tools” only to have every issue with it hand-waved away as trivially solvable by a build tool that ultimately creates a worse end-user load experience than what we already have.

                > The fact that Node has some JS module / CJS interop issues (and really only when you try to use JS modules from CJS, the other way is fine) is Node's problem, not TC39's. There's really nothing that TC39 could have done here because synchronous require() is the fundamental problem.

                I think part of the disagreement stems from the fact that you believe my position is that we should have “done nothing” or that the only options were “the system we shipped” or “just do it the node way” or something. That’s not the case at all. I am all for a standard system and I recognize the problems with require(). It is simply the case that a design that took into account the requirements of node could very well have served both systems. These aren’t the only two possible require systems imaginable, but an API that exclusively looks at only one set of constraints, despite billing itself as a general purpose solution, will not generate the best end result. This is shown by the several bandaids that had to be added after the fact to import and could have been avoided if considered beforehand.

                1. https://developers.google.com/web/updates/2017/12/moduleprel...

                • spankalee 4 years ago

                  Your complaints seem to have far less to do with JS modules and much more to do with the lack of native bundling in the platform.

                  From my point of view, the goal posts that keep moving are the ones put up by anti-JS-modules folk because they keep comparing unbundled JS modules vs bundled CJS. Unbundled CJS would perform far worse than standard JS modules by all these measures, but somehow gets a pass because it has to be bundled.

                  What module feature could possibly have solved the waterfall problem? The only options are manifests and bundling. IE, you can't magically tell a browser what it should load without telling it what it should load. modulepreload is essentially a manifest and Web Bundles are bundles, so there are solutions covering the space. Browsers should get behind Web Bundles, asap, since that would also solve many problems for caching, CSS and other assets.

                  The problem I have with this debate is that JS modules get criticized for being able to work without bundling at all, when that's purely a positive and they still can be bundled for performance.

                  If you don't like the unbundled workflow, don't use it. It's still useful to have a standardized syntax and semantics, and very useful for those of us who do want to use them unbundled: for simple cases, dev environments, or combined with features like prefetch/preload.

                  • tolmasky 4 years ago

                    > From my point of view, the goal posts that keep moving are the ones put up by anti-JS-modules folk because they keep comparing unbundled JS modules vs bundled CJS. Unbundled CJS would perform far worse than standard JS modules by all these measures, but somehow gets a pass because it has to be bundled.

                    A feature that by default encourages use that is both slower and less secure on the web is a bad feature, full stop. The SRI problem is not even solved yet. Shipping import statements in production leads to slower websites. This is exacerbated by the fact that the syntax looks synchronous but behaves asynchronously. As someone who clearly cares much more about the browser environment than the node environment, these problems should resonate with you, regardless of the situation with node. import isn’t good even if you only consider the web.

                    > What module feature could possibly have solved the waterfall problem?

                    I will give one example of an alternative approach that could have been taken: start with the expression form of import(), ship it alongside top-level await, and hold off on the statement form until after we could see how this was used (you could even restrict it to only taking string literals if you want to begin with, doesn’t make a super difference for this argument, but I can see arguments for that). Here are the benefits:

                    1. You get everything you get with normal import, you just type await import() and use normal declarations with destructuring. There’s no ergonomic difference except for a couple extra characters, and it is less syntax to learn since you don’t need to learn the almost, but not quite, identical importing declaration destructuring (sketched just after this list).

                    2. There’s no weird bifurcation of “load semantics” left as an exercise to the implementer, it’s well defined under Promise semantics and gives us time to determine if we need something fancier.

                    3. This would have punted a lot of meta issues until later, including “what happens if a module throws an error during load?” And “what happens with recursive imports?” All of these questions are less critical in the expression form where the user has recourse (they can wrap it in try/catch! There can be a user accessible cache, etc.) However, with a top level black box statement these become must fix blockers because you can’t just say “oh the user has many good options”, you have to determine some one-size-fits-all complicated behavior.

                    4. The fundamental asynchronous nature of import() is not hidden from you in a way that makes you feel like you’re doing something fast, and is the actual issue I have with “the bundling debate”. The await makes it clear that this is a “blocking” (to code after it) asynchronous operation that you probably shouldn’t ship in production, as opposed to the situation now where not only do people not understand this, but we continue to add more features that perpetuate this myth (like module preload link tags that are still slower than bundling but probably require you to use a build tool so what’s the point?).

                    5. We would have had REPL support on day 1! Both in the browser console and in node. This would have been so helpful for debugging.
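
                    To make point 1 concrete, a minimal sketch of the two forms (module name hypothetical):

                      // the statement form that shipped:
                      //   import { parse } from './parser.js';

                      // the expression-only alternative described above, with top-level await:
                      const { parse } = await import('./parser.js');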

                    All in all, this would have been a great incremental approach that solved the immediate problem of needing something better than evaling the text result of an XMLHttpRequest. It gives you all the functionality of the import statement without the facade of a synchronous feature. It would have prioritized top-level await, made features we’re thinking about now much easier to introduce too (like inline modules), and would have almost certainly already been incorporated in node, since there would have been no years of going back and forth over the “you can’t tell what’s a module without parsing the whole file first” problem, which is what led to the whole .mjs mess.

                    This probably would have had to wait until after ES6 shipped since it fundamentally relies on async/await, but that’s actually a huge part of the point: we wouldn’t have shipped a fundamentally async feature prior to getting our async story straight in the language. A post-async JS mindset would have led to many different decisions. Despite this delay though, there’s a good chance it would have been fully adopted everywhere much sooner, and would have been much easier to transpile, since it’s “just a function” with syntax restrictions.

                    Just because something took a long time doesn’t mean it wasn’t rushed.

  • wruza 4 years ago

    How are browserify, webpack transformers and others able to parse require()-s in the middle of a source file, while static analysers are not?

    These subtly erroneous arguments are the essence of this push. Look, we are maintaining X, Y and Z, and they’re unable to do R, so it’s bad. No, it’s you who consciously made them unable to do that.

    • nosianu 4 years ago

      1) importing module

      "require()" can take a non-static string. A variable or a string calculated at runtime.

      You can only statically check if that particular feature is not used, but there is no checking the entirety of what require() can be used for/with.

      require() is more like dynamic imports in Es modules that are awaited and not like the static ES modules.

      2) exporting module

      The other issue is on the exporting module's side: You can do strange things with the "exports" object. ES module exporting is more strict to make it guaranteed statically analyzable.

      • wruza 4 years ago

        > Es modules that are awaited and not like the static ES modules.

        But this is a non-argument. Awaited or not, you can’t statically check them either. Developers aren’t idiots and they can read “if you pass a string to require/import, tooling can’t figure that out” in a manual.

        You can statically check “require(literal)” and cannot “require(variable)”. You can statically check “import from literal” and cannot “import(variable)”.

        And awaited imports are essentially just memoize(name => (fs.readFile || fetch)(name).then(wrapAndEval))(name). It’s not black magic that is available only to “imports”.

        > do strange things with the "exports" object

        It’s just a value. Somehow analyzers/checkers can work with “var x = do_strange_things()”, but can’t with exports. How is a module boundary different from any other expression boundary?

        I really don’t want to think that this is pure zealous smoke blowing, but these arguments are as weak as nil, and leave no options.

  • jokethrowaway 4 years ago

    You can detect from the AST whether a require can be statically resolved or depends on non static variables.

    It may not be as simple, but I venture it's easier to implement than requiring all the software ever written to be migrated.

    • xg15 4 years ago

      You can detect simple cases like require("some string literal") on top-level. Things become harder if you have require() in init functions or wrapped in (function(){})() or if the module name is defined in a string constant elsewhere.

      I can't say how common those forms are, but my point is that there is nothing immediately discouraging a programmer from using them as all of it is "just javascript". So even if you just restrict yourself to static imports, it's hard to be sure you caught all of them without running the program.
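
      For example, all of the following is ordinary CommonJS, yet easy for a naive collector to miss (module names hypothetical):

          const NAME = './feature.js';       // module name defined as a constant elsewhere

          function init() {
            return require(NAME);            // require() inside an init function
          }

          (function () {
            require('./side-effect.js');     // require() wrapped in an IIFE
          })();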

      • wruza 4 years ago

        Wrong, things do not become harder when require() is nested somewhere. It may not get called in node, but any bundler looks at it as required-to-be-bundled-anyway. The only case when it’s hard is when require() accepts a non-literal, and that’s symmetric to import(). No extra cases.

        The programmer is discouraged from using dynamic requires/imports by common sense, because that makes their app/lib incompatible with most cross-end usage. But then we have server-side only packages like pg or express where it doesn’t matter, because browsers provide no runtime (tcp/listen) for them to function and never will.

  • forty 4 years ago

    Unless I'm missing something, the exercise is pretty simple, and could be done with grep in a sane codebase (find all require without indentation and with a single string literal inside the parentheses).
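
    Something like this rough sketch, which shares grep's blind spots since it is regex-based rather than AST-based:

        // collect-deps.js: list top-level require('literal') calls in one file
        const fs = require('fs');

        const source = fs.readFileSync(process.argv[2], 'utf8');

        // ^(?=\S) = line must start without indentation; capture the single string-literal argument
        const deps = [...source.matchAll(/^(?=\S).*?\brequire\(\s*(['"])([^'"]+)\1\s*\)/gm)]
          .map(match => match[2]);

        console.log(deps);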

    • joepie91_ 4 years ago

      It's a bit more complicated than that because you'd not want to match such things if eg. they exist inside of another string literal; but the actual implementation of 'dependency collection' in various bundlers isn't too far off from this. They just often do an AST match instead of a string match.

  • brundolf 4 years ago

    > Yes, dynamic imports exist but they are sort of an exotic feature that you would only use in special situations.

    They're also decidedly inconvenient to use in most situations, because they return a Promise (unlike require()), which means anything that depends on them has to itself be deferred. This further discourages using them unless you really need to.

  • agumonkey 4 years ago

    now I see files as syntactic module level lambdas where args comes first and the body is the rest :)

spankalee 4 years ago

This post is terrible, actually.

CommonJS was never going to be natively supported in browsers. The synchronous require semantics are simply incompatible with loading over a network, and the Node team should have known this and apparently (according to members of TC39 at the time) were told their design would not be compatible with a future JS module standard.

So the primary thing that JS modules fix is native support, and for that you need either dedicated syntax or an AMD-style dependencies / module body separation. AMD is far too loose (you could run code outside the module body), so dedicated syntax it is.

Everything else flows from there. I really hate how people blame the standards instead of the root cause which is Node not having taken the browser's requirements into consideration. Culturally, I think that's mostly fixed now, but it was a big problem early on in Node's evolution.

  • alerighi 4 years ago

    Yes, but who cares about native support in the browser? I mean, most JS stuff nowadays is transpiled, written in TypeScript, or if written in plain JS still transpiled anyway to support older browsers, and bundled in a single optimized file.

    Loading all the dependencies over the network is just inefficient to me: you will have hundreds of requests instead of a single one, and you will load the full source, not a minified and optimized one. I just don't see the point.

    • spankalee 4 years ago

      Native support matters so that we're not eternally required to use tools for even the simplest of cases. Being able to write two files with one importing the other with no npm or bundler in sight should absolutely be a feature of the native platform.

      And yes, in production you probably will want to bundle, but you probably also want to minify. Does that imply that we should require a minifier to even run any code at all, even in dev? No, of course not.

      By adding a standard and native support we allow for sites that work without bundling and bundling that can adhere to the standard and not have to even be configured because the input is standard and the output must preserve those standard semantics. That gives tool independence and simplifies usage of the toolchains, and that's a great goal to shoot for.

      • alerighi 4 years ago

        If the project is not so complex, you don't need modules at all. You can just do like we did in the old days (and I still indeed do for simple projects like mostly static sites) and load your JS with `<script>` tags. You can have multiple files; of course everything must be in the global scope, but you can still do that.

        • eyelidlessness 4 years ago

          Even simple single-file use cases may benefit from ESM support for top-level await. I thought I had heard of a proposal for supporting top-level await in script mode, but I can’t find it and it probably wouldn’t be feasible anyway because it implies the whole script, which would otherwise be blocking, is async.

          That said, nothing is preventing anyone from using type="module" (or .mjs) for uncompiled code. In fact I’m doing this on a project specifically to bootstrap on-the-fly ESBuild TypeScript compilation.
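
          For the simple single-file case, a minimal sketch (the fetched URL is hypothetical):

            // main.mjs, loaded via <script type="module" src="main.mjs">
            // top-level await is only legal in modules, not in classic scripts
            const res = await fetch('./config.json');
            console.log(await res.json());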

  • pas 4 years ago

    But it doesn't matter for Node, since async ESM is a superset of sync CommonJS. Simply putting await before every ESM import gives the Node semantics.

    So, isn't adding ESM to Node strictly a new feature? What am I missing?

  • wruza 4 years ago

    That’s a nice move you do here. Node is declared guilty of implementing the least sucking module system of its time, then growing its NPM ecosystem to the skies, and now that there is a ready-to-use module for every task out there, let’s raise a finger and claim they were doing it wrong. Uh oh.

    Web standards are goddamn 20 (twenty) years late, and it’s not their moral right to decide what should be broken or deprecated while sitting on top of the mountain of working code which a workhorse named “node” produced in less than a decade.

    • spankalee 4 years ago

      So what would you have had TC39 do, just never standardize a module system because Node made one first that was incompatible with the web?

aravindet 4 years ago

There is a valid discussion to be had about whether the Node.js ecosystem disruption of moving from CJS to ESM is worth the benefits, but the assertion that it's technically worse isn't accurate. A few things ESM does better in Node.js:

1. Asynchronous dynamic import() vs. blocking require(): allows the program to continue while a module is being dynamically loaded.

2. Circular dependencies: ESM correctly resolves most of them, while CJS does not. [example below] I believe this is possible because ESM top-level imports and exports are resolved before JS execution begins, while require() is resolved when called (while JS is already executing.)

3. Reserved keywords `import` and `export` vs. ordinary identifiers require, exports and module: Allows tooling to be simpler and not have to analyze variable scope and shadowing to identify dependencies.

I haven't really encountered #3, but I can say I've benefited from #1 and #2 in real-world Node.js projects using ESM.

----

Circular dependencies example:

   // a.js
   const b = require('./b.js');
   module.exports = () => b();

   // b.js
   const a = require('./a.js');
   module.exports = () => console.log('Works!');
   a();
Running this with "node b.js" gives "TypeError: b is not a function" inside a.js, while the equivalent ESM code correctly prints 'Works!'. To solve this in CJS, we have to always use "named exports" (exports.a = ... rather than module.exports = ...) and avoid destructuring in the top-level require (i.e. always do const a = require(...) and call it as a.a() elsewhere)
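
For reference, a sketch of the equivalent ESM version, which prints 'Works!' when run with "node b.mjs":

   // a.mjs
   import b from './b.mjs';
   export default () => b();

   // b.mjs
   import a from './a.mjs';
   export default () => console.log('Works!');
   a();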

TekMol 4 years ago

ES Modules are great. Building JS applications is so much speedier, leaner and more fun now that they are supported widely.

One fallacy the author falls for is that they think one needs a build step "anyway" because otherwise there would be too many requests to the backend.

Loading an AirBnB listing causes 250 requests and loads 10MB of data.

With a leaner approach, using ES Modules, the same functionality can be done with a fraction of those requests. And then - because not bundled - all the modules that will be used on another page will be cached already.

I use ES Modules for all my front end development and I get nothing but praise for how snappy my web applications are compared to the competition.

  • alerighi 4 years ago

    Any moderately complex frontend application already has to have some sort of build system. One common example is using TypeScript (and these days I don't see the point of using JavaScript and spending hours fixing bugs caused by its missing type safety), or using JSX syntax that must be transpiled, or even if you use plain JS, to transpile it to support older browsers (yes, there are still too many people using Internet Explorer to ignore it).

    If you already have a build system, the most sensible thing to me is letting the build system do its stuff and not worrying about it. When I write a web application in React with TypeScript (the setup that I usually use) I don't worry about dependencies, and I use the ES modules import syntax (that is better than the CommonJS one) that gets transpiled to CommonJS without me even noticing. So why bother changing that? It works, it produces a minified and optimized single .js file that is easy to serve from a webserver; I don't see points against it.

  • incrudible 4 years ago

    > Loading an AirBnB listing causes 250 requests and loads 10MB of data.

    How many of these requests are dependent? Lazily loading hundreds of images doesn't impact page responsiveness, but loading an import of an import of an import before your page does anything is unacceptable.

    > I use ES Modules for all my front end development and I get nothing but praise for how snappy my web applications are compared to the competition.

    So you actually ship unbundled ES modules? How much code is that? I dare you to bundle it up (rollup/esbuild) and tell me that doesn't improve load times. Comparing to the average website overloaded with crap is a very low bar.

    • TekMol 4 years ago

          tell me that doesn't improve load times
      
      It will negatively impact load times.

      Either you only bundle what is needed on the current page. Then the next page will load slower because it needs to bundle all those modules again as it uses a slightly different set of modules.

      Or you bundle everything used on any page your users might go to during their session. This will give you a giant blob that has to be loaded upfront, which contains a ton of modules the user never needs during their browsing session.

      • martpie 4 years ago

        That's true if you create your own bundle yourself. Famous frameworks like Next.js or Nuxt make much smarter bundles, grouping common dependencies together, bundling the rest by page/view, and then loading each bundle when needed.

        • jakelazaroff 4 years ago

          It’s still a tradeoff, though. Let’s say a website has three pages: /a, /b and /c. Two of those pages, /a and /b, each use the module `foo`. Where should `foo` get bundled? If you put it in the “common” bundle, it’ll get served to /c even though it’s not needed. If you put it in both of the bundles for /a and /b, the client will download it twice.

          • eyelidlessness 4 years ago

            Most bundlers:

            - have more sophisticated dependency tracking and will split a separate chunk for a/b

            - can produce some kind of a manifest to allow you to preload and avoid nested waterfalls

            - provide some mechanism for greater control over what gets chunked separately or combined, for the usage/caching scenarios you’ll undoubtedly know better than a general purpose program
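
            As one sketch of that last point, Rollup's manualChunks option (entry and package names hypothetical):

              // rollup.config.mjs
              export default {
                input: { a: 'src/a.js', b: 'src/b.js', c: 'src/c.js' },
                output: {
                  dir: 'dist',
                  format: 'es',
                  // pull the shared `foo` package into its own chunk,
                  // downloaded once and cached for both the /a and /b entry points
                  manualChunks: { foo: ['foo'] },
                },
              };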

          • wruza 4 years ago

            You don’t have to have a single bundle or just main one and common one. Split into as many as you (or a bundler) see fit. In the worst degenerate case you’ll end up with every module bundled separately and preloaded on every page, just like ESM, but without deferred roundtrips for nested dependencies.

  • catern 4 years ago

    >And then - because not bundled - all the modules that will be used on another page will be cached already.

    Why isn't anyone else mentioning this feature? I'm not a browser developer but this seems like a clear win, and indeed makes bundling unnecessary. I'm assuming that it's shared between domains, too - or are people's dependencies so fragmented that there's basically no sharing between domains?

    • simonw 4 years ago

      I believe sharing cached code between domains has been almost entirely eliminated by browsers now, because it turned out to be a huge privacy leak: a malicious domain could attempt to load code that was used by another domain, time how long it took to load and use that to determine if the user had visited that other site.

      Browsers fixed this by making the browser cache no longer shared between domains.

    • tofflos 4 years ago

      The cache used to be shared between domains but that's no longer the case due to privacy concerns and limited effectiveness.

      > As of Firefox v85 and Chrome v86 the browser cache will be partitioned, this means that the same resource included on two sites will have to be downloaded from the internet twice and cached separately.

      Source https://www.peakhour.io/blog/cache-partitioning-firefox-chro....

      • acdha 4 years ago

        This still helps other requests to the same origin, which is a very common situation if you don’t have the extra resources needed to get an SPA up to the same performance and reliability levels.

    • forty 4 years ago

      In practice I think downloading many small files is actually slower than a single bigger file. It might not be so true today with http2 for example though.

      • acdha 4 years ago

        HTTP/2 definitely changed that and is near-universally supported now. The biggest win is the ability to cache things independently: with bundles your servers and clients have to retransfer everything if a single byte changes, and most of the sites I’ve worked on don’t change most of their files every time they ship an update. A cold visit will usually be immeasurably different from a bundle but a warm visit is noticeably faster.

      • kevingadd 4 years ago

        Yes, it's much slower pre-http2 since no modern browser actually does pipelining, so it's going to be a socket per file

        HTTP2 fixes this by allowing multiple requests to occur in parallel on a single connection.

        • forty 4 years ago

          In the specific case of JS import, it's still going to be pretty bad though, I guess, since you have to download a file, parse it to figure out the deps, then fetch them, parse them, etc, so you are limited in what you can do in parallel.

          • eyelidlessness 4 years ago

            That’s what `modulepreload` &co is for. It’s a shame HTTP Push is dead, it was a much more general solution. But specifically for web pages the solution is more or less the same + 1 round trip (you won’t get the benefits of “push” until you at least process all the head>meta tags).

    • qudat 4 years ago

      We already do this with a build step using a vendor bundle and an app bundle and it can be configured to create a bundle for each page.

  • javajosh 4 years ago

    >I use ES Modules for all my front end development and I get nothing but praise for how snappy my web applications are compared to the competition.

    Word up. Share a link? I feel like the pro-ESM crowd should stick together!

  • forty 4 years ago

    I think the trick is mostly not to have a shitload of dependencies. If you have to load a bunch of huge frameworks, whether it's bundled or you have to download thousands of files one by one, it's going to be slower than not doing it at all :)

    • eyelidlessness 4 years ago

      It depends on your bundler and config. You might have “zero” dependencies, but depending on how the code is split you might end up with thousands of small imports nested quite deeply.

andrew_ 4 years ago

I really, really loathe how major packages in the ecosystem are "We're ESM now, deal with it, sorry about your luck," and forcing the issue. It's arrogant as hell. A hard fork of Node for ESM would have been a much better path (e.g. Deno). That said, the OP's rant is more emotion than fact.

> And then there's Rollup, which apparently requires ESM to be used, at least to get things like treeshaking. Which then makes people believe that treeshaking is not possible with CommonJS modules. Well, it is - Rollup just chose not to support it.

Rollup was created specifically for ESM. It's not been thrust onto the ecosystem or into anyone's tool chain. One uses it specifically for ESM, with plugins that bolt on added functionality where they apply. Trying to hammer a nail with a paintbrush doesn't make the paintbrush a bad thing - you just chose the wrong tool.

  • devmunchies 4 years ago

    > loathe how major packages in the ecosystem are "We're ESM now, deal with it, sorry about your luck," and forcing the issue

    I wasn’t able to use the latest version of node-fetch in a node.js script since it doesn’t support commonjs. The project literally has “node” in the name and doesn’t support Node’s default module system.

    • spankalee 4 years ago

      Were you not able to convert the script to a module, or dynamically import() node-fetch?

      Since you're using it for an async operation anyway, dynamic import should have worked quite well.
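
      Something along these lines, roughly (node-fetch v3's default export is the fetch function):

        // still a CommonJS file, no conversion needed
        async function main() {
          const { default: fetch } = await import('node-fetch');
          const res = await fetch('https://example.com');
          console.log(res.status);
        }

        main();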

    • theprotocolOP 4 years ago

      I just encountered this. FYI You can use v2 which still retains commonjs support.

      • andrew_ 4 years ago

        I've ended up using last major versions as well. I plan to move to Deno anyhow, and authors like sindresorhus are at least applying security updates to the major version before the switch.

davnicwil 4 years ago

> for some completely unclear reason, ESM proponents decided to remove that property. There's just no way anymore to directly combine an import statement with some other JS syntax

This is one of those 'worse is better' things in language design, I believe. It guarantees simplicity, traded off against extra verbosity. In fact, when it comes to the common and probably most valuable case of reading and understanding code written by others quickly, it is not even a tradeoff really, as both are good.

Whether or not that was one of the driving reasons, it certainly is a benefit in my opinion. The two examples given in the post of an inline require don't demonstrate this well, as they're both really simple. I'd say the benefit isn't to stop examples like that being written and replace them with two lines of code, which admittedly might sometimes be slightly cumbersome. It's that it stops the long tail of much more complex/unreadable statements being written.

  • joepie91_ 4 years ago

    I would have considered this a valid argument if overly-clever use of `require` was actually a problem in JS. But it's not! These 'simple' types of obvious cases are the only types of cases that people actually use this syntax for in practice.

    • eyelidlessness 4 years ago

      I have seen horrors of abuse of the CJS require cache, I’m glad to hear you haven’t had to deal with it.

      For what it’s worth, whether it’s bitten you or not, every single instance of my sillycode[1] is in use in Jest (granted in obviously more useful ways). And it’s an enormous headache to debug when it goes wrong. A trivial example: require a logging library which creates a singleton at module definition time and provides no teardown API (yeah that sounds like a bad design but believe me they exist, are easy to find, and hard to replace on a busy and/or opinionated team). If you have a single suite with 100 tests, Jest will leave 100 instances of that singleton running and consuming memory even while totally idle, completely inaccessible to most any machination you might come up with to try to free them.

      Which isn’t to say ESM doesn’t have this same problem if you try to bust the import cache with eg query parameters. But at least you’ll probably notice it’s a problem because you’re very probably doing it directly and not with some opaque Babel transform that hijacks the entire module system and any code referencing it.

      1: https://news.ycombinator.com/item?id=29140847

      Edit: forgot which sub thread I was in, added link to my sillycode

  • eyelidlessness 4 years ago

    > This is one of those 'worse is better' things in language design, I believe. It guarantees simplicity, traded off against extra verbosity.

    And with top-level await the restriction goes away (albeit the ESM equivalent is still a bit more verbose).

        (await import('anything'))(...yup)

Chyzwar 4 years ago

The problem is the botched node.js implementation that leaves most existing applications without a migration path. Even today it is not possible to create a full ESM application, front or backend. It is worse than Python 2 to 3.

  • throw_m239339 4 years ago

    I would argue that the Node way of doing things isn't in the ECMAScript spec. The problem isn't ES modules, it's node.js. One could answer "well, node.js existed prior to the ES module specification". Irrelevant. DENO doesn't have this problem.

  • theprotocolOP 4 years ago

    Absolutely this. It's not necessarily communicated that well in the article, but this is the main reason people are frustrated.

  • eyelidlessness 4 years ago

    Node ESM support has gotten a lot better through versions 12-17. The biggest problems for workflows that currently work “well” for CJS are:

    1. --experimental-loader is more complex and less stable than --require. But it’s also a lot more robust.

    2. There’s no equivalent to the require cache, which makes mocking and long-running processes like watch mode challenging. This is partly a benefit, as it discourages cache busting patterns like those used in e.g. Jest, which create awful memory leaks.
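
    For reference, the CJS cache-busting pattern being referred to looks roughly like this (module path hypothetical):

        const configPath = require.resolve('./config.js');
        delete require.cache[configPath];            // drop the cached module record
        const freshConfig = require('./config.js');  // re-evaluates the module from scratch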

eyelidlessness 4 years ago

Here is why ESM is better for static analysis than CJS:

    module.exports = {
        get foo() {
            const otherModule = require('equally-dynamic-cjs')
            if (otherModule.enabled) {
                return any.dynamic.thing.at.all
            }
        },
        get bar() {
            this.quux = 'welp new export!'
            return 666
        },
        now: 'you see it',
    }

    setTimeout(() => {
        console.log(`now you don’t!`)
        delete module.exports.now
    }, Math.random() * 10000)

    if (Date.now() % 2 === 0) {
        module.exports = something.else.entirely
    }
You can, of course, achieve this sort of dynamism with default exports. But default exports are only as tree-shakeable as CJS. Named exports are fully static and cannot be added or removed at runtime.
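
For contrast, a minimal ESM counterpart where the export set is fixed at parse time:

    // these named exports are static bindings; nothing at runtime can
    // add to, delete, or replace this set from the outside
    export const now = 'you always see it';
    export function bar() {
      return 666;
    }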

Edit: typed on my phone, apologies for any typos or formatting mistakes.

tbrock 4 years ago

We recently went through this hell converting a NodeJS codebase to TypeScript. One reason many people willingly enter this hellscape is that we need ES modules for TypeScript.

I say "need" because TypeScript won't ingest types from "required" files, you have to import them as modules.

So before we converted a single file to TS we had to audit all commonjs imports and exports to convert them to ES modules.

I agree wholeheartedly that the end result was a fool's errand. I would have rather spent the time adding support for importing types via require, which for some reason returns an "any" today.

  • crooked-v 4 years ago

    I think you've confused Typescript's own import/export system with ESM. It uses the `import` syntax, but it's not ESM internally, it's its own thing designed to ingest and export to multiple module types.

  • theprotocolOP 4 years ago

    The incompatibility is indeed compounded by TypeScript.

    There is currently no non-hacky way to use both legacy modules and ES Modules in the same project, and many libraries on NPM have moved to ESM-only. TypeScript's transpilation needs to know what to target as regards modules and JS version, which makes things even crazier than they already are.

emersion 4 years ago

I'm using ES modules for a webapp I maintain, and it's just nice to be able to run it without any build step. Just fire off a local static HTTP server and you're good to go. There's an optional production build step which can be used if desirable.

  • incrudible 4 years ago

    I would admit that this gets pretty slow even with a modest number of files. If you use rollup/esbuild, you can have a very fast build step that may amortize over the increased page load times.

junon 4 years ago

This is a criticism of the tooling, not of the language feature.

This is like saying "binding two pieces of wood together is terrible" and using the fact that screwdrivers are poorly designed as your main argument.

Aeolun 4 years ago

I kind of have to agree with the point about loading ESM in the browser.

I tried doing this with one of the newfangled frameworks and seeing my browser work through like 5000ish required files was quite comical.

  • wereHamster 4 years ago

    Unprocessed ESM in the browser makes sense for local development. We are collectively wasting millions of CPU hours (and developer time) waiting for our mostly unchanging dependencies to be processed. For production deployment though, I'd still prefer ESM in the browser, but not verbatim as they are coming from npm, but compiled, minified, and bundled in a way that strikes a balance between total number of modules, code duplication inside the modules, long-term cacheability etc.

    • noduerme 4 years ago

      Just as an aside... every web app I write these days starts with an index.html page that has a window["deploy"] bool at the top. If that's false, the first script just requires the unbuilt files. If true, it requires the compiled and minified version. I only rebuild when I'm ready to upload.

    • javajosh 4 years ago

      The more people internalize the truth that "if you ship it you own it", stop adding dependencies and start removing them (especially ones that come with their own wasteful dependencies), the more ESM will make sense for everyone. Until then, you're right, devs have to go through unctuous mitigations.

      • crooked-v 4 years ago

        It would be easier to do that if proposals for things like standard library functionality (not the contents of the standard library itself, just the syntax and technicalities of using it) were to go anywhere in, say, under five years.

  • emersion 4 years ago

    If you just have a handful of dependencies (which themselves have few to no transitive deps) then it works just fine.

bricss 4 years ago

First you create a problem with an article, then you fix it with your own magic tool, bravo!

https://www.npmjs.com/package/fix-esm

  • joepie91_ 4 years ago

    Believe me, I would much rather not have had to build that hack. But considering that I want my development tools to actually, y'know, work, what would you expect me to do?

  • NicoJuicy 4 years ago

    Joepie91 did some work for me long ago

    If he says there is a problem, he didn't invent it. He knows his stuff

jitl 4 years ago

The things I dislike the most in software development are dogma, holy wars, and religious crusades about technology practices. I’m not sure to what extent this happens in other ecosystems, but it seems to happen quite a bit in JavaScript circles. You can ignore these for the most part if you use boring tools and don’t chase the new frameworks-du-jour, but in the case of ESM versus CommonJS I am starting to feel the fire of this war in my dependency graph.

My solution in NodeJS programs for now is to use an `esbuild`-based require hook to transpile all the files we import or require into CommonJS on the fly. We need esbuild anyway to run TypeScript code without a build step, and combined with basic mtime-based caching, it’s fast enough that you really don’t notice extra build latency, especially on a second run — much MUCH faster than a Babel require hook.
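
A stripped-down sketch of that kind of hook, assuming esbuild is installed (real setups such as esbuild-register also handle source maps, caching, and more loaders):

    // register.js: a require() hook that transpiles TypeScript to CommonJS on the fly
    // usage: node -r ./register.js main.ts
    const esbuild = require('esbuild');
    const fs = require('fs');

    require.extensions['.ts'] = (module, filename) => {
      const source = fs.readFileSync(filename, 'utf8');
      const { code } = esbuild.transformSync(source, {
        loader: 'ts',
        format: 'cjs',   // emit CommonJS so plain require() keeps working
        target: 'node14',
      });
      module._compile(code, filename);
    };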

I plan to tune back into this issue once the average comment is more measured and thoughtful, and the ecosystem tooling for dealing with the migration has evolved more.

forty 4 years ago

I have been doing nodejs backend development for the past 10 years and I have no idea why ES module are needed and who is using them. I assume this is a front end thing.

We are using typescript which is using "import" syntax, but as far as I know, it's still transpiling to good old "require".

  • throw_m239339 4 years ago

    > I have been doing nodejs backend development for the past 10 years and I have no idea why ES module are needed and who is using them

    ES modules are part of the Ecmascript spec. DENO uses them. Node.js modules aren't part of the ES spec.

    • forty 4 years ago

      I see, so a front end thing indeed :) (and yes deno, whose main purpose, I feel, is to bring the problems and debates of the front end devs into an otherwise much saner js backend world ^^ )

  • andrew_ 4 years ago

    You can tell TS to output ESM. Try using top-level await with TS and you'll run into that. The configuration options are vast.

dgb23 4 years ago

In my opinion as a working web developer, ES modules are half-baked, deceptively simple, do not solve problems consistently and are not built on hard-acquired wisdom from other languages.

1) JavaScript could have simply stolen an already good solution. For example namespaces (ex: Clojure/Script, Typescript, even PHP to some degree) provide a powerful mechanism to modularize names - by disentangling them from loading code. They make it straight forward to avoid collisions and verbose (noisy) names. In Clojure namespaces are first class and meant to be globally unique. This implies long-term robustness.

2) Loading modules dynamically should be the _default_. The whole point of JavaScript is that it is a dynamic language. The caveats, hoops and traps that we have to sidestep to, for example, run a working, real REPL during development are astounding. If you want to be dynamic, go _all_ the way and understand what that means. Yes, it's a tradeoff to be a dynamic language, but why take the worst of both worlds?

3) Like 'async/await', 'class' and many browser features such as IndexedDB it is neither designed from first principles nor fully based on past wisdom. Many things in the JS world smell of "lowest common denominator". Way too much effort is focused on the convenience side of things and way too little on the leverage side.

amadeuspagel 4 years ago

> And then people go "but you can use ESM in browsers without a build step!", apparently not realizing that that is an utterly useless feature because loading a full dependency tree over the network would be unreasonably and unavoidably slow - you'd need as many roundtrips as there are levels of depth in your dependency tree - and so you need some kind of build step anyway, eliminating this entire supposed benefit.

That's not true with skypack, right?

  • javajosh 4 years ago

    It's important not to ignore the possibility that perhaps front-end dependencies are out-of-hand, and need to be reduced. ESM cannot fix a decade of bad practices enabled by front-end build bundlers. ESM isn't there to be a viable alternative to webpack. It's there to enable a different vision of application deployment where apps are smaller, and javascript gets css's transitive import() sub-resource distribution, avoiding the headache of a linear list of global scripts.

    I really like ESM because I like where it's trying to steer the community of browser application builders. I think front-end builds are terrible on many levels, not the least of which is the obfuscation of code that undermines one of the best features of the web's software distribution, which is its openness. And another major benefit of webapps is that none of the front-end languages require a build step! This makes iteration very fast; if you can make do without the safety net of a compiler, you can enjoy the speed of not using the bundler.

    • crooked-v 4 years ago

      I think the basic problem with that comparison is that people use bundlers with CSS all the time for exactly the same basic-physics-of-round-trips reasons. It's common for `import()` in CSS to end up as exactly what it usually is in frontend JS: a dev organization tool rather than something actually exposed to the browser.

  • joepie91_ 4 years ago

    It's a fundamental technical constraint of any tool-less setup. At some point you need to traverse the dependency tree by parsing modules and following imports, and your choice is to do that either:

    1) on the client, across the network, one roundtrip for every level of depth, or 2) in a build environment, directly on the filesystem

    Option 2 means you need some kind of build tool to make it work, and by that point it doesn't really matter anymore whether the tool just traverses the dependencies and makes a list of filenames, or also concatenates their contents into a bundle.

    And that is why the fundamental premise of ESM cannot work; there are no technical options besides those two. If you want to avoid network roundtrips, you must have build tooling. No way around it.
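
    To make option 2 concrete, here is a toy sketch (a simplification, not any particular tool) of a build-time traversal that just follows relative static imports on disk - the entry file name is hypothetical, and a real tool would use a proper parser rather than a regex:

      // naive-graph.js - walk relative static imports and print the dependency list
      import fs from 'node:fs';
      import path from 'node:path';

      const seen = new Set();

      function walk(file) {
        if (seen.has(file)) return;
        seen.add(file);
        const source = fs.readFileSync(file, 'utf8');
        // only matches simple, relative, top-level `import ... from '...'` statements
        for (const match of source.matchAll(/^import .* from ['"](\.[^'"]+)['"]/gm)) {
          walk(path.resolve(path.dirname(file), match[1]));
        }
      }

      walk(path.resolve('./entry.js'));
      console.log([...seen]);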

somehnacct3757 4 years ago

I keep thinking how QUIC / HTTP/3 would go nicely with ESM in the browser (via script tags with type=module) for simple sites.

A webmaster could totally avoid the complexity of learning a JS tool chain. Right now even 1000 lines of JS has you reaching for a bundler. It would make shipping a small html+css+js site as simple as dragging files to your webserver again.

  • e1g 4 years ago

    Bundling will not go away as it solves a different problem of how to best distribute the app to final users. When authoring code, you want to have many small files so you can keep related logic blocks isolated from the rest. When distributing the code, you want to ship a few larger files to reduce network overheads. Any non-trivial frontend app will call code from 100+ files, and your browser is tuned to request these files ~serially (or in serial batches of 8-10) which becomes frustrating very quickly even on localhost.

    • joepie91_ 4 years ago

      Worse, even if they were all fetched in parallel, you would still see terrible loading times simply because a dependency graph can only be traversed depth-wise serially. Doesn't matter what network protocol you use or how parallel it is.

      • spankalee 4 years ago

        This is not true at all. For every module you can parse out and load the module's imports in parallel. As you traverse the graph the known and loadable module frontier can grow much wider.

        The only way it would be serial is if every module only imported one other module.

        • joepie91_ 4 years ago

          Note how I was specifically talking about depth-wise.

          • spankalee 4 years ago

            Fair, though that wasn't clear.

            So then even without modulepreload or bundling, it's not always going to be true that the longest import depth is the limiting factor. Earlier loaded modules can still be parsed in parallel while dependencies are fetched, and completed subtrees can be linked and evaluated. Given that modules are deferred and can be imported early, there's often time for parallel work.

            • joepie91_ 4 years ago

              I'm not following. Dependencies can be nested, so you cannot assume that a dependency by a given name can be satisfied by a previously loaded dependency by the same name. Which means you still need as many serial roundtrips as there are depth levels, whether you parse in parallel or not.

              Sure, you can cut down on the delay introduced by the parsing, but 10 depth levels over a 100ms connection is still going to take a second to fetch, because you simply can't know the N+1th dependency until you have at least completed the Nth roundtrip.

  • unilynx 4 years ago

    QUIC / HTTP/3 does not fix the roundtrip wait times caused by dependencies - if 'a.mjs' imports 'b.mjs', which in turn imports 'c.mjs', you still need two additional roundtrips (after fetching 'a.mjs') to load all three.

    only a bundler can fix that (or a hypothetical process which would set up preloads for all required libraries, but that would just be a bundler without the actual bundle creation step)
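
    For concreteness, a sketch of that chain - each import only becomes visible to the browser after the previous file has been fetched and parsed:

      // a.mjs - the browser only learns about b.mjs after fetching and parsing this file
      import { b } from './b.mjs';
      console.log(b);

      // b.mjs - ...and only learns about c.mjs one roundtrip later
      import { c } from './c.mjs';
      export const b = c + 1;

      // c.mjs
      export const c = 1;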

cryptica 4 years ago

It just sucks that the development community decided to double down on bulky build tools instead of trying to optimize server environments to leverage advances like HTTP/2 server push to optimistically serve dependencies without latency. It's particularly strange when you consider how popular Node.js is as a server-side environment and how easy it would be to accomplish this, since the server can interpret JavaScript natively and quickly figure out the client-side dependency tree.

My inner conspiracy theorist suspects that maybe the powers that be don't want to allow plain JavaScript to extend its primacy over the web. The way things went makes no sense. Computing dependency trees on the server side and using them to optimistically push scripts to the browser would have been far simpler and less hacky than computing source maps, for example. Optimistically pushing resources from the server was supposed to be the whole point of HTTP/2...
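
As a rough sketch of the idea (not something any framework actually shipped), Node's built-in http2 module can push a dependency the server already knows a script will import - the certificate paths and file names here are hypothetical:

    import http2 from 'node:http2';
    import fs from 'node:fs';

    const server = http2.createSecureServer({
      key: fs.readFileSync('key.pem'),   // hypothetical TLS material
      cert: fs.readFileSync('cert.pem'),
    });

    server.on('stream', (stream, headers) => {
      if (headers[':path'] === '/app.js') {
        // Push the dependency before the browser has even parsed the import statement.
        stream.pushStream({ ':path': '/dep.js' }, (err, pushStream) => {
          if (!err) pushStream.respondWithFile('dep.js', { 'content-type': 'text/javascript' });
        });
        stream.respondWithFile('app.js', { 'content-type': 'text/javascript' });
      }
    });

    server.listen(8443);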

jokethrowaway 4 years ago

ES Modules was what turned node.js into legacy for me, same as python 2/3.

Plus transpilers are so slow, it's embarrassing (albeit things are improving with tools written in rust).

As someone who's been doing frontend for 20 years and node.js for 10 years, I can say JS development has never been as crap as it is now.

After attending a conference talk about how TC39 works, I understand why that's the case. TC39 is basically a bunch of engineers from big tech companies who can afford to waste productivity following the whims of whatever the group decides. It's completely detached from reality.

They operate on a full consensus basis, which means everyone needs to be onboard with the decisions - and if you want your changes to be approved in the future, you'd better play nice with the current change as well.

To be honest, I can't wait until browsers get replaced with some native crossplatform toolkit or frameworks in other languages become popular so that we can finally leave JS alone.

  • paufernandez 4 years ago

    > To be honest, I can't wait until browsers get replaced with some native crossplatform toolkit

    That sounds like Flutter to me.

lampe3 4 years ago

Not sure if the author tried a newer build tool like Vite, esbuild and so on.

Working on large projects, having to wait for everything to be bundled before you can load it in the browser is a waste of time that every web developer pays every day.

Some real world times FOR DEVELOPMENT:

- Storybook first load: 90 sec
- Storybook changes after first load: 3 sec
- Vue app first load: 63 sec
- Vue app change after that: 5 sec
- Vue app with Vite first load: 1 sec
- Vue app with Vite after that: the time it takes me to press command+tab to switch to the browser

Do we really have people that use unminified, unbundled ESM in production? If yes, please comment why.

I would also ask the author: what about cyclic dependencies? ES Modules resolve them automatically - something which can happen in large code bases.

Why do we still put it through Babel? Because most of us don't have the luxury of not supporting old browsers... https://caniuse.com/?search=modules Even if the unsupported browsers are only 1% for our company, that is still a big chunk of money in the end.

and this example: ``` app.use("/users", require("./routers/users")); ```

Really? Is having a require in the middle of a file "good code"?

Also funny: the author is annoyed that Rollup did not support tree shaking for CommonJS and then complains that people are wasting time on ESM. Maybe the Rollup team does not want to waste time on CommonJS? He then points to a package which has not gotten any update in 3 years and which would make the whole process he complains is too complex even more complex by introducing a new dependency.

Sorry, but the more I read this thing, the more it sounds to me like a junior dev who does not want to learn new things and just likes to rant about things.

  • joepie91_ 4 years ago

    Hi, author here. I'm going to ignore the personal attacks and simply point out that my dev build processes typically have a startup time of under 5 seconds even for large projects, and a rebuild time of under 500ms. This is with Browserify.

    If you are having very slow build times with your existing toolchain, the problem isn't the bundling, which is an extremely fast operation. It's almost certainly going to be one specific computationally-intensive plugin that you either don't need, or would also need if using ESM.

    • lampe3 4 years ago

      "Wie man in den Wald hinein ruft, so schallt es heraus" since your from NL it should be easy to translate.

      These heavy plugins usually exist so the code also runs in old browsers. That is the only job of Babel.

      CJS has some deeper problems:

      - Check if your fav CJS lib freezes the objects?
      - ESM is more HTTP friendly (MIME type)

  • wruza 4 years ago

    It’s usually senior devs who don’t want to learn new things just because they are new. And they aren’t afraid to use 3-year-old packages (how dare they!). Imagine having a subroutine which is “done” and doesn’t get updates for years - laughable! Who makes things “done” when you can fuck them up at the start and fix a little every day, filling that activity grid with green dots?

    All of this JS-related stuff is just fast fashion; the bad part is that web developers are locked into it with no chance to relax and just create their boring services and apps.

austincheney 4 years ago

The reasoning presented is only valid if you are stuck holding a bunch of dependencies making use of old conventions. At that moment the complaints about the module approach become a very real concern.

That said, the problem isn’t modules at all. It’s reliance on a forest of legacy nonsense. If you need a million NPM modules to write 9 lines of left-pad, these concerns are extremely important. If, on the other hand, your dependencies comprise a few TypeScript types, there is nothing to worry about.

So it’s a legacy death spiral in the browser. Many developers need a bunch of legacy tools to compile things, and bundlers and build tools and all kinds of other extraneous bullshit. Part of that need is to compensate for module-handling tooling that predates and is not compatible with the standard, which then reinforces not using the module standard.

When you get rid of that garbage it’s great. ES modules are fully supported in Node and the browser.

  • fullstackchris 4 years ago

    And yet, the reality IS that 90% of the web is using legacy stuff - heck, even something like 50% of the web still has jQuery on it (I haven't checked in a while, but I guess it is still close to that figure).

    I think the true anger is that something so essential and basic to JS development has this giant breaking change if you want to switch over to ESM - there's no backward compatibility or fallback - it just breaks.

    • austincheney 4 years ago

      The solution is some soul searching. Do you really need Babel and Webpack to build a web app? The answer is of course an astounding YES! Most developers cannot add text to a page without JSX, which therefore means React and everything it requires.

      So when you dig even deeper this is really a people and training problem.

cookiengineer 4 years ago

This thread and summary are written by someone who has no clue what they're doing in ECMAScript, and who's probably enjoying the fucked up mess that the Babel ecosystem created. I'm not gonna dig into that, because reading any polyfill in Babel's ecosystem speaks for itself on how messy, hacky, and actually not working-as-specified most parts are.

Instead I'm gonna try to go back to the topic.

I think that, in practice, these are my pain points in using ESM regularly without any build tool. I'm using ES modules both in node.js and in the web browser via <script type="module">:

- The package.json/exports "hack" works only in node.js and not in the browser, as there's no equivalent API available there. It allows you to namespace entry points for your library, so that you can use "import foo from 'bar/qux';" without having to use tedious "../../../" paths everywhere (which also might differ in the browser compared to the node.js entry points). See the sketch after this list.

- "export * from '...';" is kind of necessary all the time in "index" files, but has a different behaviour than expected because it will import variable names. So export * from something won't work if the same variable name was exported by different files; and the last file usually wins (or it throws a SyntaxError, depending on the runtime).

- Something like "import { * as something_else, named as foobar } from 'foo/bar';" would be the killer feature, as it would solve so many quirks of having to rename variables all the time. Default exports and named exports behave very differently in what they assign/"destruct", and this syntax would help fix those redundant imports everywhere.

- "export already_imported_variable;" - why the HECK is this not in the specification? Having to declare new variable names for exports makes the creation of "index" files so damn painful. This syntax could fix this.

  • Ginden 4 years ago

    > - "export already_imported_variable;" - why the HECK is this not in the specification? Having to declare new variable names for exports makes the creation of "index" files so damn painful. This syntax could fix this.

    You can do:

       export {already_imported_variable}
    • cookiengineer 4 years ago

      ...which is a default export, not a named export, as I already explained. My point was about the lack of exporting named exports without the need to declare variable names.

      Your solution will work only once in a file, therefore it is useless to batch-export lots of imports for the mentioned use case of an "index" file that exports all your classes and definitions.

eyelidlessness 4 years ago

ESM is better than CJS or any other JS module system because of the export keyword. Any discussion focused on imports is relevant but misses the significance of ESM.

arh68 4 years ago

Agreed. So much breakage for so little. If I were teaching JS today I don't know if ESM is worth covering while CJS works at least as well. Maybe next year.

tolmasky 4 years ago

The “it works without a tool chain” argument is in fact a ridiculous, impractical hypothetical that no one should actually attempt, and yet it continues to make this spec more complicated and unwieldy. For example, to address the obvious performance problem of dealing with loading dependencies, the “<link rel=modulepreload>” tag was added, which you’re supposed to include for each individual dependency in your HTML to let the browser know to start fetching it ahead of time. So we’ve literally gone full circle and arrived right back where we started, with a script tag for every JS file being replaced by a link tag for every JS file. “But you don’t have to manually do that! Your build tools can just insert the 100 link tags in your HTML file!” I thought all this was to avoid a JS tool chain! If I’m running a build tool, I’ll just have it generate one concatenated and minified artifact that performs way better, not this mess! Here’s the documentation if you’re interested in this hilarious feature: https://developers.google.com/web/updates/2017/12/moduleprel...

Not to mention the security aspects: there is no subresource integrity for imports, so it’s less secure than bundling or using a script tag with CDNs.

The point about it being a new syntax is also very valid. Everything import patterns do is almost identical to destructuring, so we should have just extended that feature instead, especially because I do wish destructuring could do those things. For example, if destructuring had an “everything” pattern to complement the “rest” pattern:

    const { x, ...rest, *original } = something();
Where “original” now just contains a reference to the actual returned object, instead of having to break that pattern up into two declarations, since the moment destructuring takes place the original object becomes inaccessible. This would of course have given us the “import * as” ability, but is again a feature I regularly find myself wanting everywhere. Not to mention this makes writing code transformations even harder, as JavaScript's huge syntax keeps growing and requiring tons of special cases for almost identical statements.
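
For reference, the two-declaration split being described is what you have to write today:

    const original = something();
    const { x, ...rest } = original;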

The semantics of imports are also very confusing to beginners, as they implement yet another unique form of hoisting. It is so weird that despite being allowed anywhere in your code, they run first. Notice I didn’t say they fetch first, they run first. So for example, the following code is broken:

    process.env.S3_LOCATION = "https://..."; // The below library expects this as an environment variable.
    import download from "s3-download";
Oops! Your env variable gets set after every line of code in the import chain of s3-download runs! So, bizarrely, the solution is to move that first line into its own module and import that, and now it will run first:

     import unused from "./set-env-variable.js"
     import download from "s3-download"
If the rule is that imports must run before any code in the file, then why not restrict the statement to only being at the top of the file? What is the purpose of allowing you to put all your imports at the bottom? Just to make JavaScript even more confusing to people? Imagine if “use strict” could appear anywhere in the file, even 1000 lines in, but then still affected the whole file. People already found function hoisting, var hoisting (where the variable exists but is undefined), and the temporal dead zone of let/const (3 different kinds of subtly different hoists) to be confusing in a language that prides itself on being readable "top to bottom" - so why add a fourth form of hoisting?

Anyways, the list of problems actually continues, but there is widespread acknowledgement that this feature would not have been accepted in its current form if introduced today. But for some reason everyone just takes a “but it’s what we got” position and then continues piling more junk on top of it, making it even worse.

incrudible 4 years ago

"And then people go "but you can use ESM in browsers without a build step!", apparently not realizing that that is an utterly useless feature because loading a full dependency tree over the network would be unreasonably and unavoidably slow - you'd need as many roundtrips as there are levels of depth in your dependency tree - and so you need some kind of build step anyway, eliminating this entire supposed benefit.

That build step is ideally performed by something like rollup or esbuild, which means I use import/export anyway. If you still use Babel, I feel bad for you son, I've got 99 problems but Babel ain't one. I don't care if the old stuff is not supported, simply deleting 98% of the code in the JS ecosystem would be a step forward. Perhaps that's a minority view, but none of these arguments fly with me.

aurelianito 4 years ago

Both CommonJS and ES6 modules suck. The way things should have been is RequireJS: modules are defined and loaded using an API instead of reserved words. It's really sad what happened to modules in JavaScript.
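
For those who never used it, a minimal sketch of that API-based style as RequireJS exposes it (the module names here are made up):

    // circle.js - the module and its dependencies are declared through an API call
    define(['math'], function (math) {
      return {
        area: function (r) { return math.PI * r * r; },
      };
    });

    // main.js - loading is an asynchronous API call, not a keyword
    require(['circle'], function (circle) {
      console.log(circle.area(2));
    });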

terracottage 4 years ago

The worst part is default imports and the linting nazis who want you to use them.

1 file per thing is a midwit code organization strategy for people with no actual sense for it.

rado 4 years ago

HTTP/2 uses a single connection for all modules, no?

  • crooked-v 4 years ago

    If you use 'native' imports in that way instead of bundles, you're still stuck with artificially delayed loading times because the browser has to parse your code, request the first layer of dependencies, parse that code, request the second layer of dependencies, etc.

aliswe 4 years ago

I have sympathy for your projects that will need maintenance, but I am making the observation that you're starting to get old - you're being quite bitter over something which is clearly bigger than all of us.

On a personal note, I'm making a CMS with ES modules and couldn't be happier.

lucideer 4 years ago

TL;DR:

Breaking backwards compatibility is always painful but there's not one actual criticism of ES Modules as a spec here other than its incompatibility with CommonJS

  • addicted 4 years ago

    Let’s flip this.

    What are the advantages of ESM that led to selecting that over CJS modules for standardization in the first place?

    • Ginden 4 years ago

      Statically analyzable tree. In CJS you have to depend on people not doing strange things to their `module.exports` and you are left with heuristics.

      • joepie91_ 4 years ago

        This is not actually a meaningful benefit that ESM has in practice, as I've explained in my article.

        • lucideer 4 years ago

          > as I've explained in my article

          Not convincingly.

          Firstly, static imports are markedly different to top-level require due to module scoping: e.g. tree-shaking isn't stable with most "statically-analysed" top-level requires due to potential side-effects.

          Secondly, handwaving dynamic imports as the ES Modules equivalent to non-top-level require doesn't pass the smell test. Dynamic imports are a deliberately separate syntax and mechanism to static imports, whereas require isn't differentiated. They are also, notably, async.

          • wruza 4 years ago

            Smell tests are irrelevant to analyzer code. Developers are either able to break analyzers by using import() or they are not. They’re still able to with ESM, and they’re still able to deliberately not do that with require(literal). Async has nothing to do with it; all it does is wrap T into Promise<T>. No notable difference for static checks - promises are everywhere in JavaScript.

      • wruza 4 years ago

        Could you please point to “strange things” that people do with “exports” AND how that prevents static analyzers from doing their job? Why can they do it for:

          var x = strange_thing()
        
        but suddenly can not when “x” is renamed to “exports”?
        • Ginden 4 years ago

          Eg.

              module.exports = {foo: 3};
              setImmediate(() => module.exports[Math.random() > 0.5 ? 'foo' : 'bar'] = 5);
          
          
          This is impossible for ES modules, because exports are static and known at parse time.

          Few widely used packages do something like:

              enhanceWithAdditionalProperties(module.exports, mixin);
          • wruza 4 years ago

            Not the same but similar:

              export var foo = 3
              setImmediate(() => {foo = Math.random()})
            
            Under ESM, exported values are still not known at parse time, and may be changed by the library (but not created nor deleted). And given that exports may be changed, what prevents library makers from doing:

              const exports = { }
              enhance(exports, mixin)
              export default exports
            
            and you are left with heuristics again?

            I see these subtle differences, but fail to see how they are a solution to the problem of “doing strange things to exports”.

  • djrockstar1 4 years ago

      That might sound irrelevant on the face of it, but it has 
      very real consequences. For example, the following pattern 
      is simply not possible with ESM:
    
      const someInitializedModule = require("module-name")(someOptions);
      Or how about this one? Also no longer possible:
    
      const app = express();
      // ...
      app.use("/users", require("./routers/users"));
    
    Configurable modules and lazily loaded imports are both missing from the ES Modules spec.
    • wereHamster 4 years ago

          import someModule from "module-name"
          const someInitializedModule = someModule(someOptions)
      
      
      A bit longer, but meh…

          const app = express();
          app.use("/users", (await import("./routers/users")).default);
      
      
      top-level await is a thing now

      Actually, the first example could be rewritten as

          const someInitializedModule = (await import("module-name")).default(someOptions);
      
      
      That «simply not possible» statement is simply not true
      • presentation 4 years ago

        Or don’t even bother with the awaited import and instead import it at the top of the file, I fail to see why this is even an issue lol

    • bricss 4 years ago

      dynamic imports for that matter, or:

          import { createRequire } from 'module';
      
          const require = createRequire(import.meta.url);
          const cjsOrJson = require('./somewhat/module/pathway');
    • Ginden 4 years ago

      > lazily loaded imports are both missing from the ES Modules spec.

      What do you mean?

  • crooked-v 4 years ago

    The total inability to properly mock ES modules without experimental Node flags is a big one. It can turn unit testing into a nightmare if even one ESM dependency creeps in.

  • Aeolun 4 years ago

    There is one. Which is that you can’t easily do an inline require any more.

    The rest of the text can mostly be summarized as https://xkcd.com/927/

    • nrabulinski 4 years ago

      I’ve been working with node for roughly 5 years and I’ve never liked the hackiness CJS let people incorporate.

      People, especially a few years ago, were trying to get clever with require calls and were fiddling around with the require cache, and while with ESM we can no longer “easily” do stuff like dynamic reloads, I genuinely feel it’s for the better.

      I strongly agree with privatenumber’s point that import() syntax is the true first class citizen here.

    • tehbeard 4 years ago

      Inline require(...)? Are we back to the bad old days of PHP's include '...'; midway through a class's function already?

    • bricss 4 years ago

      inline require is most likely a bad design, imo

  • nuerow 4 years ago

    > Breaking backwards compatibility is always painful but there's not one actual criticism of ES Modules as a spec here other than its incompatibility with CommonJS

    This.

    Also, the bulk of the rant is focused on how the author struggles with configuring his pick of JavaScript bundlers and transpilers, and proceeds to come up with excuses to justify not migrating away from CommonJS.

    This article was a waste of a perfectly good click.

throwaway2077 4 years ago

question:

  // ...

  if (condition)
  {
    const x = require('../../../hugeFuckingLibraryThatTakesSeveralSecondsToLoadUponColdStart')
    // do something with x
  }

  // ...
assume I don't give a fuck about nerd bullshit and I just want the code to be simple and the program to run fast (which it does when !condition because it doesn't need to load hugeFuckingLibrary), can I replicate this behavior with ESM?
  • jitl 4 years ago

    You can replace that require with `await import('giantLibrary')` but now your function needs to be async, and so do all of its callers. This is needed because it’s unacceptable to block the UI thread synchronously importing code in the browser, but in CLI programs not being able to synchronously require is a bit annoying.

    • throwaway2077 4 years ago

      that's a shame, I hoped there was going to be a way to do that and it was simply not implemented in node at the time I was looking into it

  • tehbeard 4 years ago

    Well since you asked so "fuckin" nicely /s

        if( condition ){
          import('../../../hugeFuckingLibraryThatTakesSeveralSecondsToLoadUponColdStart').then( x => {
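              // x here is the module namespace object; a CJS-style default export would be on x.default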
              //do something with x
          })
        }
    
    > ...nerd bullshit...

    I hate to break it to you darling, but programming is nerd bullshit.

    edit: an alternative that might be too much "nerd bullshit", but uses async/await if the surrounding code is an async function:

         async function doSomeStuff()
        {
            if( condition ){
              const x = await import('../../../hugeFuckingLibraryThatTakesSeveralSecondsToLoadUponColdStart');
              //do something with x
            }
        }
    • throwaway2077 4 years ago

      no bro, programming is programming and nerd bullshit is nerd bullshit. the arguments I see in favor of ESM over CJS fall into the latter category, at least on the node side of things.

axismundi 4 years ago

Don't bundle. The only reason for bundling is too many requests to the server. Use HTTP/2 instead.

vbg 4 years ago

Rather a negative outlook.

api 4 years ago

It’s puzzling to me why there isn’t more effort on developing front end Go or Rust frameworks that compile to JS or WASM. It would be a chance to work in a real language instead of this trash.
