You don't need a build step (deno.com)
So basically Deno has its own bundler that lets you skip a local build step, and code gets bundled dynamically per-route as users request it, right? This is very different from industry standards and likely raises new concerns for devs, none of which are addressed in the article, since it treats the system as a perfect solution. That makes sense for a marketing page ("content marketing"). If it were a third-party article, I'd be interested in things like:
- How do you measure bundle size and make sure it remains small? (e.g. make sure that a PR doesn't accidentally import a massive dependency)
- How do you measure bundling speed/performance, does it add significant time to the first request? To subsequent requests? Is it linear, exponential, etc. with the number of LOC? Again like in the previous point, how do we make sure there are no regressions here?
- How does this work, well, with absolutely anything else? If I want my front-end in React? Vue? etc.
They are not using their own bundler. They are using esbuild at runtime to generate bundles for individual islands when the process starts up. Then they store those files in memory in a Map. When the bundle files are requested, it just pulls the copy that was generated at runtime.
Here is a link to the source where esbuild is used.
https://github.com/denoland/fresh/blob/main/src/server/bundl...
I personally think it would be better to bundle at deployment time so that the bundles don't need to be regenerated each time a new process starts up or on demand when a request comes in for one of the bundle files.
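For illustration, here's a rough sketch of what an ahead-of-time island build could look like with esbuild's JS API (the entry points, output directory, and options are my own guesses, not what Fresh actually does):

    import * as esbuild from "esbuild";

    // Bundle each island once at deploy time instead of on every process start-up.
    await esbuild.build({
      entryPoints: ["islands/Counter.tsx", "islands/Cart.tsx"], // hypothetical islands
      bundle: true,
      minify: true,
      format: "esm",
      splitting: true,          // share common chunks between islands
      outdir: "static/islands",
      jsx: "automatic",
      jsxImportSource: "preact",
    });

The server would then serve static/islands as plain static files, so a restart never re-runs the bundler.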
It's like you combined the warmup phase of the JVM with the compilation phase, except you still have to wait for the JIT, too. I better not hear anyone complain about JVM warmup times again. :p
Man, the things that pass for innovation in the node-adjacent space continue to blow my mind. It feels like the horrors of /r/programmerhumor meets generic internet hype-beast cycles.
Conceptually, I don't see how this is much different from JIT compilers in the JVM, CLR, and similar runtimes. You don't hear so much from Java devs about how they can't see the machine code their customers are running. They talk about cold-start performance, but accept that the first few requests will be slower in exchange for the productivity and eventually high performance.
Now that I think about it, this is how V8 works too, for JS code itself!
Why wouldn't this principle apply to bundling?
JITs exist because 1. certain compilation can't be done at compile time, because the code is dynamically synthesized at runtime from data only available at runtime; and 2. knowing how the code is already being used at runtime through an interpreter, can help optimize the compiled code ("profile-guided JIT.")
Bundling on first request has neither of these advantages: everything that is getting compiled at runtime could have been compiled at compile-time; and no information is yet available on how the code will be used.
The difference is that the JIT doesn't "fail". Builds and bundling can fail, and I wouldn't want to trust that all the machinery the builder/bundler depends on (especially if it needs to fetch things over the internet) are available and working properly.
Put another way, as long as the `java` command is present and working on the production machine, I can be pretty sure my service is going to run and work (aside from any bugs in my code, of course). With Deno, more moving parts on the production machine need to be working properly in order to ensure things work properly.
JIT compilers at least do the work incrementally, profile the code to provide the best (or several) versions of native code, etc. That is, they adapt to the particular invocation, doing stuff an AOT compiler cannot do (especially for dynamic languages like JS).
I wonder if running the bundler on startup, and throwing away the (identical) result of a previous invocation of the bundler, makes much sense. It at least could persist it optionally, like Python does with .pyc files.
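Something like this, perhaps (a Deno-flavored sketch; `bundleWithEsbuild` is a made-up helper, and it skips cache invalidation when the source changes):

    const entry = "islands/Counter.tsx";                       // hypothetical island entry point
    const cachePath = ".bundle_cache/" + entry.replaceAll("/", "_") + ".js";

    let bundled: string;
    try {
      bundled = await Deno.readTextFile(cachePath);            // reuse a previous process's output
    } catch {
      bundled = await bundleWithEsbuild(entry);                // imaginary wrapper around esbuild
      await Deno.mkdir(".bundle_cache", { recursive: true });
      await Deno.writeTextFile(cachePath, bundled);
    }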
What kind of innovation would you prefer? (removed snark)
Web code is made of duct-tape, nothing new.
Deno is a pretty big departure from Node in some respects as far as I can see.
I admit I haven't used it, but if it keeps only half of its promises to simplify frontend bundling and compilation/transpilation, I think it's innovative.
A poor craftsman blames their tools.
Because a good craftsman picked his tools and knows it’s his fault if they’re garbage.
This pattern of “thinking” pisses me right off. It’s an axiom about quality and avoiding deflection and it’s always used in a low-quality reply as a form of deflection.
I can’t recall the last time I heard someone use that phrase the way it’s meant to be used.
A good craftsman knows not to use bad tools.
> A good craftsman knows not to use bad tools.
In your opinion, what's so wrong in preferring to just run your JS/TS code without having to maintain a build/bundling step?
To me, Deno's approach is undoubtedly a killer feature with regards to the status quo of the whole nodejs ecosystem. Don't you agree?
Builders and bundlers fail sometimes. I don't want to introduce an extra point of failure in my production services.
Maybe this is nice for local development. But really it just feels like the tooling version of a "code smell". If people think bundling/building is too slow, then people should work on making that faster. Maybe that means people need to stop writing JS builders/bundlers in JS, and use a language like Rust that has better performance characteristics. I wouldn't consider that a failure; it's just an admission that we should use the right tool for each job.
Speaking of Rust, the Rust compiler is fairly slow, but my proposed solution wouldn't be "get rid of it and have it dynamically compile at runtime", it's "profile it and make it faster" (which people are doing!).
Eh... I don't really think this is a killer feature.
If you don't want to maintain a build step, use a framework that's configured it for you and avoid customizing it.
Lots of frameworks already do that, this is just Deno's implementation of the same thing.
There's STILL a build step, they're running esbuild in the background for you. You've just lost visibility and control, exactly the same as if you picked a framework that gives you a default webpack config.
If anything, I see esbuild as the real "killer feature" here, since it's just really fast. Fast enough to bundle at request time.
Alternately, you can just stick close to standards and not really worry about it.
I write plain CSS.
I use Web Components as my unit of isolation, generally sticking with the light dom.
I have a small state utility [1] that I wrote years ago and works great.
I do have a build step before deployment, but I use vite during development so I have zero "make a change, wait, test, rinse, repeat" downtime. When it's time to deploy, vite build does the trick nicely.
I don't use frameworks. I don't use JSX. I don't use TypeScript; for types I use JSDoc in VS Code, which gives me 90% of the benefits of TS without the downsides.
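For anyone curious, the JSDoc approach looks roughly like this (names invented); with `// @ts-check` at the top, VS Code runs the TypeScript checker over the plain .js file:

    // @ts-check

    /**
     * @param {string} label
     * @param {number} count
     * @returns {string}
     */
    export function formatBadge(label, count) {
      return `${label} (${count})`;
    }

    /** @type {Map<string, number>} */
    const counts = new Map();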
My pages are light, fast and easy to maintain. I don't have to deal with painful build steps, or framework churn.
Debugging is simple. No multiple layers of transforms and sourcemaps, WYSIWYG.
I'm pretty passionate about the "keep it simple" philosophy.
I chose to innovate in the problem domain, not the technical one.
Anecdotally, I had a new developer join my team and he was initially very confused. He said "it's just so strange using this tech stack. You make a change, and you see it..."
I didn't know whether to laugh or cry.
My comment was more about the saying in the parent comment than about Deno specifically.
What I mean is that a good craftsman doesn't complain about bad tools because they choose to use good tools (or, more precisely, appropriate tools for the job), not because they can somehow negate any drawbacks of bad tools on the fly (1). And if they use bad tools anyway, they do so intentionally or because there is no other choice, and in turn don't complain because it's pointless to do so.
So the saying in the OP comment is IMHO misleading at best, deceptive at worst. Furthermore it doesn't advance any discussion, only sidetracks it.
I'm sure the Deno-specific workflow from the article is a great tool for a lot of use cases. I'm also sure there are use cases where it will fall apart.
(1): Depending on what you do, creating a decent result without good tools might literally be impossible no matter how good your skill is. Or it might not make too much of a difference and can be compensated for by skill. It's all context dependent, like most things in life.
The web is not a bad tool.
It sounds like (it is a little vague) there is no bundling at all; the only thing Deno does magically at request time is transpiling TypeScript/JSX to browser-compatible JS. Beyond that I think the idea is it relies on native ES module imports (and import maps), both of which are browser standards.
> both of which are browser standards
ES modules have great support but import maps don't. Your website won't work on iPhones if you launch with them today. They're close though. Give it a month and it should work.
Based on https://www.digitalocean.com/community/tutorials/how-to-dyna..., aren't import maps a massive step back in the world of tree shaking? It would seem every export needs a dedicated file for it or else you get the whole world when you try to make a single request. (And better hope that single file doesn't import any other ones!)
I might be missing something, but I'm not seeing how import maps are related to tree-shaking (of individual declarations within a module)
Importing ES modules directly instead of bundling will probably mean you ultimately load more code, yeah, which is one reason bundling hasn't gone away. Though you get other benefits in exchange- more granular caching, free and very granular "bundle splitting" (every module is "split" and can be downloaded lazily or in parallel), on top of the simplified workflow.
Will be interesting to see how popular each approach ends up being, with ES modules getting more attention lately! I think at this point people will only use something if a framework they like uses it, for better or worse, so I'm glad to see a framework that's representing this alternate way of doing things
In the example on that blog post, you want lodash's "startCase" function.
In typical bundling with tree shaking, the developer would download all of lodash, and the bundler would identify that only the "startCase" function (and its deps) are referenced, smoosh them all together into one file along with the rest of your code, you put that file on a CDN, and you're done. The client can access everything they need with two requests: index.html, then app.js
With import maps, first lodash needs to be sure to split all their exports into individual files (they've done this for a while now as they predate tree shaking, but that's beside the point), then the client requests the website's JS, which tells it to go request https://unpkg.com/lodash-es@4.17.21/startCase.js, which then needs to make requests to https://unpkg.com/lodash-es@4.17.21/_createCompounder.js and https://unpkg.com/lodash-es@4.17.21/upperFirst.js. Then, _createCompounder.js needs to make a request to _arrayReduce.js (thankfully dependency free), deburr.js (depends on a ton of things), and you see where this is going.
All said and done, the client ends up making 32 (!!!) separate requests to the lodash CDN for that single function. And yes, this does parallelize somewhat, but it's still 8 distinct "depths" of sequential request bottleneck, as the browser can't get the dependencies for a file until it gets that file, and there are 8 levels of dependencies for that single function. On my home network this is 250ms just to load a single function, and when I simulate "Slow 3G" it's nearly 20 seconds, just for that one function! It's truly mind-blowing. Also, as lodash doesn't minify their code for some reason, the bulk of the bandwidth is in comments/etc and I download 30kB of useless crap for that single function.
When I instead use plain old bundling, the client makes a single request to index.html, which in turn makes a single request to app.js. The bundler does all the work of making app.js include the 10kb it actually needs (in 11ms no less), and even the "Slow 3G" client gets their website in 4 seconds.
All in all these "import maps" seem like a massive step back in every way that matters to the client, just to save the developer the 11ms it takes to run esbuild (which comes fully batteries-included, btw. The rollup days of "you get a plugin! you get a plugin!" are over).
Edit: this all sidesteps tree shaking as lodash is built to not need it, but you can see how if a library had multiple functions per file and especially if they had differing dependencies, the request chain would look much much worse.
Right okay, so, let's get our terminology straight real quick: an "import map" is just a piece of configuration that says, "when someone imports from module name X, load it from Y". This would for example let you `import { ... } from 'lodash'` and have it load from `https://unpkg.com/lodash-es@4.17.21`, or whatever else. That's all it does. Everything you're describing above is just about regular "ES module" behavior.
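Concretely, an import map is just a bit of JSON in the host page, something like this (the unpkg URL matches the one discussed above; whether that exact file is the right entry point is my assumption):

    <script type="importmap">
    {
      "imports": {
        "lodash": "https://unpkg.com/lodash-es@4.17.21/lodash.js"
      }
    }
    </script>
    <script type="module">
      // the bare specifier "lodash" now resolves via the map above
      import { startCase } from "lodash";
      console.log(startCase("hello world"));
    </script>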
With that out of the way: yes, the lodash case would be pretty egregious if you imported the whole library in one. And most libraries imported this way will not be totally optimal: you'll probably load some code you don't need. But I think lodash is a pretty dramatic outlier; not only is it gigantic, it's exceptionally modular. Compare it to something like React, which is not small, but is nearly a monolith. The same I assume goes for Vue, etc, as well as other kinds of big libraries like GraphQL clients, third-party SDKs, etc. The percentage of code loaded that didn't need to be is, I would guess, usually much much smaller than it is if you're using a single function from lodash
I would add a couple more things:
- Minification is definitely a loss in the naive case, however, that should be easy for a CDN to implement (I think several already do it). I wouldn't be surprised if Deno/Fresh do this automagically too.
- HTTP/2 is optimized to make lots of parallel requests over a single TCP connection, which could conceivably mean a slightly larger total amount of code might load faster as separate modules than a single large bundle would. Of course like you said "depth" is still a limiting factor.
- For extreme cases, dynamic import() is an option in the native ES module system, and can be used to strategically defer module loading
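For the last point, a tiny sketch of deferring a module with dynamic import() (the URL and function name are made up):

    // Nothing extra is downloaded until the user actually clicks.
    document.querySelector("#chart-btn")?.addEventListener("click", async () => {
      const { renderChart } = await import("https://example.com/charts/renderChart.js");
      renderChart(document.querySelector("#chart"));
    });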
So I don't think it's all that bad, even though like I said above there are tradeoffs. And I'll be curious to see where the industry goes.
PS: It would be good to have the option to bundle with Deno, though. One thing I would be excited to see, personally, is a Deno-ready bundler. One of the main limitations of using Deno right now, if you've got a front-end, is the lack of front-end tooling. You could install Node separately just for tools... but then that's a whole other system dependency, set of concerns, etc. I'd like to be able to do:

    deno run https://deno.land/x/bundler/cli.ts ...

And have that Just Work™. Maybe it uses WASM modules for speed... or maybe this could even be a first-class feature of the `deno` CLI
True, I was conflating the idea of having a map of where to find files with the idea of not putting all source in a single file. Though I do still posit that you don't need one if you have the other.
It's a bit like the ancient static vs dynamic linking debate, except in the web case you can't reuse modules that were downloaded by a separate origin, which kinda throws out a lot of the case for dynamic linking. The idea of caching the library files separate from the app files is still potentially valid on a per-origin basis, I suppose.
> HTTP/2 is optimized to make lots of parallel requests over a single TCP connection, which could conceivably mean a slightly larger total amount of code might load faster as separate modules than a single large bundle would
I'd be curious to see this put to the test. It'd require the OS to be able to stream multiple files from memory to the network adapter in parallel at a higher bitrate than it could stream just one, which is something I'm not sure is possible. Could be, I just don't know. That said, one area it could shine is in letting the browser parse the JS of one request while still downloading the others from the network.
I agree I'd like to see front end bundling treated as more of a first class citizen in the JS backend runtimes. I'm in the Bun camp lately and it is also lacking in that regard. Though as I mentioned, esbuild is great (and Bun interfaces with it faster than Node can!)
Fair. I wonder if Fresh includes some kind of polyfill for those (or maybe the transpiler factors in which browser the current request is being made from?)
That is if you use import maps on the client-side. It wouldn't matter to the client if you are using it in your server-side JS/TS.
I believe Deno is doing this, though (could be wrong)
> Transpiling TypeScript/JSX to browser-compatible JS.
Which is another way to say it's a form of compilation..... [0]
[0]: https://en.wikipedia.org/wiki/Source-to-source_compiler
Yes, but it's distinct from bundling, which is what most of the GP's concerns were around.
Being a bit hand-wavey, Deno's approach sounds like full-stack JS converging to PHP-like patterns, which wouldn't be a bad thing.
Maybe this is before your time, but there have been runtime minifiers/“builders” in the past. I remember them in the PHP days.
It’s not a new idea.
> How does this work, well, with absolutely anything else? If I want my front-end in React? Vue? etc.
It doesn't. Fresh uses Preact and that's it.
From the article: "Fresh renders each page on the fly and sends only HTML"
So the bundle size is zero.
Fresh supports islands, so it also does send JavaScript for the interactive bits. If you have a react (default is preact) island then that'll be bundled and sent down.
> unless an Island is involved, then only the needed amount of JavaScript will be sent as well
It's not just HTML but also JS that gets sent.
I worked on JS infra for Google. One thing we found in this space is that when your apps get very large in terms of number of source files, there is a developer-impacting load time cost to the unbundled approach.
That is, your browser loads the entry point file, then parses it for imports and loads the files referenced from there, then parses those for imports, and so on. This process is not free. In particular, even when modules are cached, the browser will still make a request with an If-Modified-Since header for each file, and even to localhost that has a time overhead cost. This impact is greater if you are developing against some cloud dev server, because each check costs a network round-trip.
However this may only come up when you have apps with many files, which Google apps tended to do for protobuf reasons.
Google apps always seem to stand out with an especially large amount of requests. Does Google use a proprietary module system for these runtime imports?
I've only seen this from afar when using the Maps JS API.
This is the "don't download code you don't run" and "don't ask for data you don't need yet" with smart prefetching and caching. Mostly facilitated using an internal 3 letter framework in Google.
If you want a de-googled approach for "only code you need", check out Qwik by Misko Hevery, who worked on a bunch of JS-related things, and a few others. The concept is "resumability".
(not sure if that's what you entirely meant since your example was the maps api)
Thanks for the tip, I'll look at Qwik!
My only experience with the approaches you mention so far has been code splitting and dynamic imports with webpack.
Yes my example was a bit misleading as it's probably not specific to the maps JS API. Just remembered my casual observation of the network requests when embedding Maps using the JS API.
Also saw that there were lots of tiny cacheable requests and overall great performance.
Is the article targeted at Google use cases? The vast majority of programmers probably write less than 50kloc of code in their entire life and work on projects with a handful of files.
50k in their entire life??? That's insanely low/inaccurate
What do you think it is? 100k, 200k?
GP was humble bragging in a way that focused on them rather than the discussion. Deno's autoloading isn't targeted at a Google3 TS application that takes 8hrs to build.
On most weeks I'm doing 2k or better. So, assume 50 weeks a year, 100k per year. Assume most developers do at least 10% of that (which is crazy low, has to be much higher in general), so 10k per year. You have a 30 year career that is 300k in a career.
Even that is extremely low. That is only 37 lines of code per day. Anywhere I worked you are getting fired if you are only doing that (unless you are just bug fixing all day, etc).
In other languages, devs build better tools for solving existing problems faster or more easily. In no other language are build tools so broken that the best tool changes every few years.
In the JavaScript world, a significant chunk of energy is directed inwards, solving problems created by using JavaScript!
Very few languages operate under the same constraints as js. When you ship js you can't guarantee the version of Ecmascript that the client will be running, or the standard library of DOM functions that will be available (which differ slightly from browser to browser), so you end up transpiling your code to the least common denominator.
You also have completely different performance requirements compared to most other languages. If I ship a python app I don't have to worry about reducing the length of variables names to shave off a few bytes, or bundling multiple files together to reduce the number of http requests. Other languages don't need to dynamically load code via http requests, they generally run under the assumption that all of the code is available before execution.
The closest comparison outside of the browser would be to the container ecosystem, which also runs code in an environment agnostic way, and there's plenty of complexity and volatility there (podman, buildah, docker, nerdctl, k8s, microk8s, k3s, k0s, nomad, docker swarm, docker compose, podman compose, et cetera).
> The closest comparison outside of the browser would be to the container ecosystem
And as someone who has worked on both, I can tell you that the container ecosystem is way better and way more deterministic. `Dockerfile` from 10 years back would work today as well. Any non-trivial package.json written even a few years ago would have half the packages deprecated in non-backward compatible way!
There is another similar ecosystem of mobile apps. That's also way superior in terms of the developer experience.
> Other languages don't need to dynamically load code via http requests, they generally run under the assumption that all of the code is available before execution.
And that's not what I am objecting to. My concern is that the core JS specification is so barebones that it fragments right from the start.
1. There isn't a standard project format. 2. There isn't a single framework that's backward compatible for 5+ years. 3. There isn't even agreement on the right build tools (npm vs yarn vs pnpm...). 4. There isn't agreement on how to do multi-threaded async work.
You make different choices and soon every single JS project looks drastically different from every other project.
Compare this to Java (older than JS!) or Go (newer than JS but highly opinionated). People writing code in Java or Go don't expect their builds to fail ~1-5% of the time. Nor are their frameworks changed in backward-incompatible ways every few years.
> `Dockerfile` from 10 years back would work today as well.
I highly doubt that any Dockerfile from back then would work if it runs `apt-get` (as many do), as the mirrors for the old distribution versions aren't online anymore.
Dockerfiles can be made to be quite deterministic, but many use `FROM` with unpinned tags and install from URLs that can and do go away.
Exactly! Dockerfiles are not deterministic. The build artifacts that they produce (images) are, but the same could be said of js build artifacts (which would be a set of compiled and bundled js files).
Having worked on package management in all the verticals you’ve mentioned, none of what you said is true.
Packages in most ecosystems are fetched over HTTP and those packages disappear. If you're lucky those packages are stored in a centrally maintained repository like npm, distro repos, etc. If you're unlucky it's a decentralized system like early Go where anyone can host their own repo. Anyone running builds at scale has caches in place to deal with ecosystem weirdness, otherwise your builds stop working randomly throughout the day.
Re: Go, good luck getting a go package from 10 years back to compile, they directly addressed the repository the code lived in! This was a major problem for large projects that literally failed and were abandoned half way through the dev cycle because their dependencies disappeared.
Re: Docker - Good luck with rerunning a glorified series of shell scripts every build. There’s a reason we stopped doing ansible. When you run simple shell scripts locally they seem infallible. Run that same script over 1000s of consecutive builds and you’ll find all sorts of glorious edge cases. Docker fakes reproducibility by using snapshots at every step, but those are extremely fragile when you need to update any layer. You’ll go to rebake an image from a year ago to update the OS and find out the Dockerfile won’t build anymore.
Apt is a glorified tarball (ar-chive) with a manifest and shell scripts. Pkg too. Each with risks of misplacing files. *nix systems in general all share a global namespace and YOLO-unpack an archive followed by running scripts, with the risk of irreversibly borking your system during an update. We have all sorts of snapshotting flows to deal with this duct-tape-and-popsicle-stick approach to package management.
That package management in pretty much any ecosystem works well enough to keep the industry chugging along is nothing short of a miracle. And by miracle I mean many many human lifetimes wasted pulling hair out over these systems misbehaving.
You go back and read the last two decades of LISA papers and they’re all rehashing the same problems maintaining packages across large systems deployments with little real innovation until the Nix paper.
> And as someone who has worked on both, I can tell you that the container ecosystem is way better and way more deterministic. `Dockerfile` from 10 years back would work today as well. Any non-trivial package.json written even a few years ago would have half the packages deprecated in non-backward compatible way!
As I wrote elsewhere [1], Dockerfiles are not deterministic. The build artifacts that they produce are deterministic, but that would be comparing a build artifact to a build system.
> There is another similar ecosystem of mobile apps. That's also way superior in terms of the developer experience.
Mobile app users have different performance expectations. No one bats an eye if a mobile app takes several minutes to download/update, but a website that does so would be considered an atrocity.
> And that's not what I am objecting to. My concern is that the core JS specification is so barebones that it fragments right from the start.
JS is actually really well specified by ECMA. There are so many languages where the formal specification is "whatever the most popular compiler outputs".
> You make different choices and soon every single JS project looks drastically different from every other project.
The same could be said of any other moderately complex project written in a different language. Look at the Techempower benchmarks for Java, and tell me those projects all look identical [2].
> 1. There isn't a standard project format. 2. There isn't a single framework that's backward compatible for 5+ years. 3. There isn't even agreement on the right build tools (npm vs yarn vs pnpm...). 4. There isn't agreement on how to do multi-threaded async work.
A lot of the complexity you're describing stems from running in the browser. A server-side JS project that returns plain HTML with a standard templating language is remarkably stable. Express has been on version 4.x.x for literally 9 years [3]. Package.json is supported by yarn, npm, and pnpm. As long as you have a valid lock file and install dependencies using npm ci, you really shouldn't have too many issues running most JS projects. I'm not sure what issues you've had with multi-threaded async. The standard for multi-threading in JS is web workers (which are called worker threads in Node). The JS ecosystem is not like Scala or Rust, where there's akka and tokio. JS uses promises for concurrency, and workers for parallelism.
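If it helps, a minimal worker sketch in Node terms (file names are made up):

    // main.js
    const { Worker } = require("worker_threads");
    const worker = new Worker("./heavy-task.js", { workerData: { n: 1_000_000 } });
    worker.on("message", (result) => console.log("sum:", result));

    // heavy-task.js
    const { parentPort, workerData } = require("worker_threads");
    let sum = 0;
    for (let i = 0; i < workerData.n; i++) sum += i;
    parentPort.postMessage(sum);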
[1] https://news.ycombinator.com/item?id=35002815
[2] https://github.com/TechEmpower/FrameworkBenchmarks/tree/9844...
> Mobile app users have different performance expectations. No one bats an eye if a mobile app takes several minutes to download/update, but a website that does so would be considered an atrocity.
Well if it updates in my face I'd be pretty annoyed. The mobile app thing only works when they update in the background/transparently.
Well yeah if you had to wait for apps to update before you could use them you'd definitely be annoyed, but the beauty of mobile (and desktop) apps is that users don't expect to constantly be running the latest version of a given app, which means you can slowly update large apps in the background.
When you visit a website you expect to always be running the latest version of that website. In fact, most users aren't even consciously aware of the fact that websites have versions at all.
> When you ship js you can't guarantee the version of Ecmascript that the client will be running, or the standard library of DOM functions that will be available (which differ slightly from browser to browser), so you end up transpiling your code to the least common denominator.
Isn't that the same as shipping native binaries? You don't know what version OS or libraries it will run on. That's why you do stuff like link with the oldest glibc you want to to support.
The main difference between shipping a binary and a js file, is that users don't expect binaries to be small, which means you can usually ship an entire runtime with your binary. If you shipped every single js polyfill with your website performance would tank. You also generally differentiate between downloading a binary and running it, and users will tolerate a loading spinner while a massive binary downloads. Webpack will emit a warning if any of your build artifacts are larger than 244KB, whereas a 244KB binary would be considered anemic.
> users don't expect binaries to be small
That seems to only be a "modern times" thing.
Prior to that, minimising the size of shipped programs (binaries, images, doc files, etc) was an important part of release management.
Binaries were definitely leaner in the past, but there's always been that dichotomy between downloading software and running it.
In the browser, users expect software to be available instantly, and that constrains how you build webapps. Users will tolerate the google maps app taking a few minutes to download, but they won't accept the google maps webapp taking several minutes to load in a browser.
It’s been [2] days since I had to fix our npm install.
One package (not ours) suddenly fails to build about 40% of the time. Looks like a parallel access problem, node-gyp poops with ”Unable to access foobar.tlog” because some other step is using the same file
Fixed elegantly by adding a while(failed){npm install}
Because trying to debug the build for a package you didn’t create just isn’t worth it
I don't know, I've seen plenty of build tool bulk in native code. Configure scripts, automake, make, cmake, meson, and plenty of others that are even more obscure. Most of them integrate with some kind of project-specific shell script (that claims to be sh compatible as long as sh means bash) to make things easier.
The languages where the build tools remain static are often also the languages where innovation lags behind or where no real alternative exists. C and C++ projects often use standards that go back literal decades for compatibility reasons, and rely on apt/dnf/pacman to install their dependencies. Java is stuck on nine year old tech in most production systems because what if upgrading to Java 9 will break AncientProprietaryHackedTogetherLibrary. Python seems to be moving away from the pip vs conda wars, though the ML space seems to be reintroducing conda into newer projects; to run popular software, I've had to install at least two conda packages and pip (and then disable the auto load in my bash shell because all of them made the shell prompt take literal seconds to come up).
Go/Rust/.NET and other more recent languages have a single package manager+compiler+build tool+publishing system combination that's changing so rapidly no alternative could be written. I guess you can manually script calls to the compiler, linker, and download scripts, but I doubt this will be maintainable. I wonder how long it'll take for GCC Rust and official Rust to run into trouble in this space.
The Javascript ecosystem certainly seems to be the wildest when it comes to reinventing the wheel (and inventing new steps) to make new build systems, but every language either has too much of that or too little.
Please tell me you're kidding.
Have you ever tried building v8 from source? Go ahead, give it a whirl and come back in a few days and let me know how it went.
Or .net? Which one you ask? .net framework? .net core? Mono? Which version? Which framework? Which OS? Enjoy tweaking assembly xml files?
Python? You mean one of the main driving forces behind the invention of containers because dependency installation is such hell?
Go? Well, actually it is better. So, that's one.
My point is, js isn't unique in having fragmentation. It is a bit unique in its pace of innovation, but that's a good thing since it's also probably the most backwards compatible ecosystem in existence.
Every language I've worked with that has had build tools (like beyond 'make'), those build tools have been janky as all hell, and it's taken me at least a day of googling to get them to do the most basic thing. In some cases the build tool itself keeps changing how it works so you need to waste another day figuring out how to do basic things. It's to the point where I'm not entirely certain it's even possible to do well.
I don’t know, C++ has Make, CMake, Buck, Pants, Ninja, GYP, Bazel.
I am perplexed by the focus on this. Clearly there are excellent devs working on Deno — but what setups are you running that the actual build is holding your productivity back? Developing in node/ts or rails I don't think it would move the needle in the slightest for me. It's simply not an issue outside of my brain finding beauty in any kind of optimization.
Is that all this is?
Deno has a hard time innovating and the reason adoption has been low in my opinion is that Node is good enough and has a lot of these features anyway (esm, easy ts support, https imports, web apis like fetch etc).
They are trying to innovate and coming up with differentiators and reasons to use the platform. If you had asked me when I met Ryan 5 years ago at JSConf EU, before he introduced Deno, I would have assumed they'd have 30% market share by now (of server JS). But Node has been able to "catch up" to complaints quickly enough (I think), and Deno's selling points like edge computing and fast startup aren't super important for most devs in most use cases in practice, and there are other runtimes for different clouds (like Cloudflare Workers).
That said - it is still really good they are trying to innovate and while I find the marketing speak shitty and somewhat in bad faith - I still think it's really good they're innovating and I'm very much in favor of that and hope they find something important enough to solve to get big.
When you’re trying to run a quick script or just want a “playground” environment where you can test your code, it holds you back.
For example: I’m making a web app with Svelte in TypeScript and I’m trying to test a part of its code. To do that, I have to build the app first because TypeScript needs transpiling which in turn needs bundling etc…
With SvelteKit, vite dev will hot-reload things on the fly as you save files. Definitely not worth this amount of brittle complexity to avoid.
Developers creating problems they can solve later and call it progress.
Idk, it's another way of doing something that for some people could be very important. I've had Next apps take 5 minutes to start, with 1-minute single-page build times in local dev. So yeah, pretty easy to calc the payoff of speeding that up.
I think that people mistake building/bundling JS with compiling JS. Even when you use bundlers, the code still needs to be compiled by the JIT, so bundling is really kind of a weird step that makes web dev different afaik. They're trying to turn browser dev into a standardized scripting environment and stop the silly browsers from trying to innovate in what should be user-dev space. That's my idealistic take at least.
personally I've been using vanilla es6 for years and not bundling, because I dont care about mobile safari, and I love it.
As far as learning typescript, Deno is great in that it allows you to focus on typescript the language. Rather than the environment setup, config, compiling, etc…
The decoupling of URLs that host your dependencies and the URLs that host your application feels like an important uptime measure currently. If the URLs that host your dependencies go down in an NPM world, you can't build and deploy new code but your app is still up. It seems, if the URLs that host your dependencies go down in a Deno world, your app goes down if those dependencies have not yet been cached (even on the server).
Am I missing something? This might not be terrible if it becomes the standard to host your own mirror internally.
Dependencies in package.json are essentially just links to the npm CDN. (admittedly with a constraint solver in front that determines the exact link to use).
`npm install` is equivalent to `deno vendor` (https://deno.land/manual@v1.31.1/tools/vendor) in that they both fetch your dependencies and store them locally, so your app can run without downloading the deps.
The just-in-time builds section of the linked article describes an approach where you dynamically bundle, at request time. If your server already has all the deps vendored then it won't need to fetch them at runtime and your app will stay up even if the URLs go down.
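In practice that looks something like this (the entry point path is hypothetical):

    # download every remote import into ./vendor and write an import map for it
    deno vendor src/main.ts

    # from then on, run against the local copies instead of the network
    deno run --import-map=vendor/import_map.json src/main.ts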
Deno caches your dependencies locally.
If you are building something that demands high availability you probably want to host the dependencies yourself though. Which is easy, you just copy them and serve them as static files (assuming their license allows that use).
Dope, this answers my question. Honestly, as I've become a more seasoned developer, I've increasingly come to appreciate the utility of mirrors for build systems too.
It's not always simple in every module system though. Currently, I want to figure out how to create a mirror for our Electron codebase, but it's tough because some of the modules fetch gyp native headers that live in other locations (including the Electron core packages themselves) and NPM doesn't always know what to do. The Electron core header URLs flake every 2-3 weeks or so and inevitably we lose a lot of engineering time.
Hoping Deno continues to gain steam and makes this simpler since everything is URLs all the way down.
There's a command line option to use a local (project) directory for said cache, and you can commit into your code repo... so no package down time to worry about.
Though, hard to beat live ref to a githubusercontent url.
On Android we do the same with the help of Gradle. It retrieves a library from a remote source and caches it. If there is no cache, the next time Gradle tries to build, it will download the library from the remote source. If the remote host fails, you can not build your app. One popular host provider had issues a couple of times in the last year, and it wasn't nice. https://github.com/jitpack/jitpack.io/issues?q=is%3Aissue+so...
Deno should have something like a mirrors.manifest.js file that stores your dependency links and their mirrors, should one source go down. That way it wouldn't be such a problem. The only big issue might be ensuring the sources don't have a rogue link or two from bad actors, where they do something nefarious like build a useful package, then swap it out for a bad version later on and put it only on a mirror, so it triggers when things go south. Of course there could be bots/queues that periodically take MD5s of the code, or just whenever version changes occur, so that would stop that.
This exists and is part of the deno executable; see [1] for more info.
Deno puts a hash of the dependency contents in a lock file so it can ensure the contents of a dependency haven't been changed unexpectedly: https://deno.land/manual@v1.29.2/basics/modules/integrity_ch...
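Per that manual page, the flow is roughly (file names assumed):

    # record hashes of every remote dependency in a lock file
    deno cache --lock=deno.lock --lock-write src/deps.ts

    # later (e.g. in CI): re-fetch and fail if any hash no longer matches
    deno cache --reload --lock=deno.lock src/deps.ts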
I think the benefits of leaning on a (very familiar) protocol instead of a central repository outweigh the risks you describe
Like you mentioned- mirrors could become more common, and relying on HTTP makes it incredibly easy to host your own mirror. And import-maps mean you can mirror anything and everything in your dependency tree
Let's not forget, back in the day every major site relied on a client-side request to a jQuery CDN :)
I think so yes, but then some people are talking about cache which means it's not really JIT... And then even with cache it's blackboxed, you are not sure it's good or that the cache will not be discarded. #OncallNightmare
You can cache all dependencies in one go whenever you want. It doesn't have to be done in real time when the code is run.
I hate hate hate that modern web development requires a build system. No build system would probably get me to convert to Deno.
> I hate hate hate that modern web development requires a build system
Why? For any sufficiently complex software system, a build system serves as a reducer whose input is something that is more convenient for developers (i.e. a huge codebase with tons of utilities and annotations) and whose output is something more optimized to run on the end users' devices.
It's good to do such optimization because there will be, at least for a successful project, many orders of magnitude more EUs than devs. And an automated solution can do much more optimization than any team of devs could ever hope to do manually.
And that's before you get into obfuscation, although I can't tell whether that's necessary more for user security or just protecting IP.
(Not a web dev, I write in a compiled language in my day-to-day.)
> Why?
One more thing to know, to update, to break, to configure, to consider when debugging. The existence of source maps proves just one aspect of the pain this indirection and complexity introduces. I’m not necessarily arguing the trade offs don’t make it worth it, merely that there is a cost and there are good reasons we’d want to avoid it if, all else being equal, we can.
Then you can just use ESM straight in the browser... The payload will be bigger, but it's all JS turtles the way down.
I think for me the only small down side (beyond request count/size) is you couldn't use JSX and a lighter interface (preact or similar). One option would be a service worker to transpile JSX on demand... which I guess wouldn't be too hard to do.
Because the need for a tool that bridges the impedance mismatch only hides the impedance mismatch even more, it allows developers to be even more remote from end users than before. It doesn't even start to question why we have an impedance mismatch in the first place. It keeps engineers in their position of those-who-know, and end-users in their position of those-who-need, preventing appropriation of technology.
As software engineers we ought to question if we're going in the right direction, and "more complexity" is not something I agree is better
In what other similar user-facing system would that be the case? There is an impedance mismatch just in the fact that a UI is much different from code itself. There is a mismatch between what your parents can learn to use (UI) vs. what computers can understand (code). Most other UI projects are compiled (native apps), and they don’t even need to care about sending that final executable over the wire very quickly, or even other web optimizations a bundler might do like code splitting.
These systems become complex because the web is a much different deploy target than, say, iOS.
> Why?
Having to manually kick off a build process is another thing that I have to go do that takes me out of the flow and is another point where things can go wrong.
When you run a Deno script, it has a build step that it does internally, so I don't have to think about it, and I don't have to configure anything. It just works because it was designed that way. I don't know why more language runtimes aren't.
Even when I want to compile a JavaScript bundle to run in the browser and there is an explicit build step, `deno bundle` is far simpler and more pleasant to use than the mess of npm packages I would have to worry about in the Node world.
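For comparison, the whole explicit build step can be a single command (paths made up); `deno bundle` emits one self-contained ES module that a plain `<script type="module">` tag can load:

    deno bundle src/app.ts dist/app.bundle.js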
Fun fact: the "compilation" step in my company's React project is the biggest consumer of CI minutes by far across our entire organization, beating out every one of the Maven compile and test loops
But, since the devs don't care, there's only so much finger wagging I can do
Over the course of a month or two the time taken to compile a single .ts file in our codebase climbed from 'too small to measure' up to '7 seconds'. It eventually turned out that a single type definition in the file was causing all typechecks to become incredibly slow. Getting timing data out of build tools like rollup was brutal, and editors with tsc integration like vs code/sublime text would just lag and misbehave. This regression had occurred without anyone noticing, because we all just assumed it was normal for 'build and bundle our typescript' to take a long amount of time despite how simple our code was.
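For what it's worth, tsc itself can surface this kind of regression even when the bundler won't (these are real tsc flags; wiring them into CI with a threshold is just a suggestion):

    # coarse numbers: check time, types created, memory, etc.
    tsc --noEmit --extendedDiagnostics

    # detailed per-file/per-type trace you can load into a trace viewer
    tsc --noEmit --generateTrace ./ts-trace

An alert on the first one would have caught the climb from "too small to measure" to 7 seconds much earlier.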
I have to say, for some reason one of the more satisfying things to do is dramatically speed up a lengthy build. It’s hard to beat taking a build that runs forever and making it take a few seconds.
I don’t know why. Perhaps it’s because it’s something you and your peers use constantly so when it speeds up the quality of life for all in the shop improves. I mean, it’s not often you get to make an improvement to your project that directly affects everybody working on it.
Perhaps it’s because of the challenge, and that most developers absolutely hate fucking around with the build system. Example: the parent poster. Build systems can be pretty archaic but have a ton of features that most people don’t exploit.
Perhaps it’s because it is easy to timebox and has a readily apparent set of diminishing returns. You can generally make a single change, push it to production and if that is the only change you made you’ve still added value.
Perhaps it is because almost all of the changes you make don’t alter how the end user (other devs) use the build system. Short of swapping the build system entirely most of the time everything you do changes nothing for the developer.
Perhaps it is because it is a good distraction from whatever it is you should be working on. You can squeeze it into spaces where you don’t have much in the pipeline or need to think something through.
Whatever it is, I love fucking around with build systems.
And that is one of the main reasons I hate build systems. If I have to use a build system it had better work perfectly the first time and every time, never slow me down, and never ever require any fiddling or maintenance.
To me that's a definition of make work. A good development ecosystem should require no wizardry to make it perform well. And it adds absolutely zero value for the end user of your product. This is why myself and 99% of developers abhor Rube Goldberg build systems like those that permeate JS.
A great craftsman will take care of his tools and seek to use the fewest number of cuts to achieve the desired result.
Improving builds/dev environments is taking care of your tools.
Also, citation needed on the 99%
Had similar issues with linting/formatting... switched to rome.tools a couple months ago and really happy with the change. I'm really looking forward to that project's goals too.
Hasn't rome had some dramas in the past?
I was under the impression that since it's such a large project, it is quite a risk to use it atm before it's fully ready.
Happy to be proven wrong.
I'm not really sure, I haven't followed the development that closely... I know I tried it... and it was pretty close... when I was ready to switch, had done some testing, and had to adjust several of the defaults to be closer to the existing linter config. That said, the reformat/lint-fix was so fast, it wasn't that big a deal. Did it in two commits, one with just the linter config, another with the reformats in a separate commit.
The whole project can be scanned and reformatted and lint-fixes applied in under a second... the eslint config with TS took several seconds... as a precommit hook it felt pretty painful in practice, now you don't even notice it.
If they do as well for build options (once added), I'll be very happy indeed.
The build step is the hardest of all.
Just look at vite, it's amazing progress, but getting it to work with legacy codebases is a nightmare.
Not having proper require support is a real killer.
My bet is in 5 years we'll have massive buy-in to these one-stop tools, but until then, they don't fit the real world as well as I'd like.
Vite is definitely an improvement over what came before (Parcel can be nice too). And yeah, migrating an existing codebase can be difficult, especially when you have other integrations (storybook, etc) that need to line up as well.
Just updating libraries in use can be really painful if too much time goes by.
Granted, we have all moved from receiving updates to languages/frameworks once every 1-3 years (C# etc), or every 3-8 years for C++, to getting them monthly (sometimes even daily).
Devs from the past who wrote chunks of FORTRAN that sat on a "dusty 386 in the basement for 30 years" are still around; what do they think of the constantly evolving field and moving goalposts vs. the more traditional, slower release cadence of languages/frameworks/software in general? (I wasn't around for it.)
> It eventually turned out that a single type definition in the file was causing all typechecks to become incredibly slow
I'd love to read a blog post about this.
It's a common pitfall yes, especially bc of big libraries like gcloud (https://news.ycombinator.com/item?id=32307019)
Build CI should be configured with a very aggressive upper limit to catch those regressions, but it's often forgotten; even I don't do it often, though I have been bitten more than once.
If you can turn that time into money spent and show it to them and the management, it could change.
If in terms of money it’s not significant, for the company it makes no sense to devote costly labour time to speed it up.
I hate hate hate that modern web development requires a build system. No build system would probably get me to convert to Deno.
I feel the same about C and C++. And Java. And Fortran, Pascal, Lisp, Kotlin, Swift, Rust, Forth...
Not being snarky, but Lisp and Forth should have interactive experiences that the other languages do not have. The Pascal related languages are renowned for being designed to be compiled at high speed.
This is a wonderful book, "The School of Niklaus Wirth: The Art of Simplicity"
https://tkurtbond.github.io/posts/2022/07/04/the-school-of-n...
Esp this chapter on the Wirthian way of designing compilers https://www.researchgate.net/publication/221350529_Compiler_...
See also the widely read (on hn), https://prog21.dadgum.com/47.html
You can use Python or PHP or some other language that doesn't have a build system, and whose performance and security model bests that of server side JS. (Deno, Node, Bun, etc.)
I run one Node/Js server and several Nginx/Php servers.
When Node was released, it had better handling of multiple long-lived connections. Nowadays, support for SSE on Node trails all other servers, and the dream of "Isomorphic" code that doesn't need to be rewritten has not panned out (in JS, at least).
The main reason I could imagine someone choosing Deno now is that it is the tool they know best (such as someone fresh out of college). Which may not be a bad reason, but it is hardly the best tool for the job.
I'm ok with it, but please cap the config docs at one A4 page. It should cover 80% of applications with zero config, 90% with config, and 99.8% with plugins.
Try Deno anyway, if you haven't. The DX is very nice. I am very fast at cranking out small CLI tools -- and it gives binaries. Though sometimes I debate if Python is better for these tools, since it's so easy to get something going there too. For a JS maximalist stack, Deno no question.
You still need all the machinery necessary to build the system. It just runs in the background instead of you initiating it.
So Deno is relying on native ESM imports in production code? Isn't that exactly what Vite _doesn't_ do, because of poor performance?
When you run the vite dev server it uses ESM, but when you build it uses rollup, because serving ESM is slow and with larger apps the client browser is going to make a bazillion requests. Wouldn't you rather traverse the dependency graph one time and bundle your code into modules so that everyone who visits your site doesn't force their browser to do it over and over again? Sure those dependencies will be cached between views or refreshes, but the first load will be slow as shit, then you still need to "code-split", just now you're calling it "islands".
That sounds like an assumption that's not actually been tested recently. Downloading a 2MB bundle or 100 separate files is an equally poor experience, but the separate files at least let you download only the parts that changed over time, instead of having to download a completely new 2MB bundle every time someone changes a single letter in one of 100+ files and a new bundle with a new integrity digest gets rolled out.
I'd rather have an equally slow experience on first load, and then much better performance forever, compared to having something that constantly invalidates the entire cache.
I'm not sure what recent means, but I'm fairly sure (sorry, no references - only from memory) that it's been tested quite a lot, and also relatively recently (1-2 years?) by Evan (the guy behind Vite).
Even with newer versions of HTTP, just transferring lots of small files is noticeably slower (a few percent, if I remember correctly).
A few percent of a second is a few milliseconds, so no worries there; that's at the very edge of "audio visual desync" perception. For huge bundles, of course, a few percent of a few seconds can hit 100ms or more, but even that's barely noticeable compared to how long we're already waiting for the bundle to download.
The bulk of the argument in favour of ESM in a "bundle vs ESM" comparison is in the cost of downloading updates: redownloading a individual ESM files (even several of them) is going to be appreciably faster than redownloading an entire bundle (even if the dependencies are split out into their own chunks and don't change).
A few milliseconds if you live in a first world country, sorry.
HTTP/2, which is getting pretty widespread now, mitigates most of the concerns about making a bazillion requests.
Here is what Vite has to say about it. [0]
Take a look at a few optimizations it's able to do that the Deno guys will never even be able to dream of (otherwise they will reinvent Node.js, lol) [1]. The worst part is that the guy who created Deno is the same person who made Node.js; if you don't like Node.js, I'm not sure why you would bet it all on another of his projects, especially considering second-system syndrome is real and painful [2]. Deno is already suffering from feature creep, just recently starting to support package.json, which I find hilarious. Soon they will reinvent CPAN [3] and believe they just hit on something extremely innovative.
Does reading about CPAN remind you of something? Something that could be the same for JavaScript? Like a package manager for NodeJS?
[0]: https://vitejs.dev/guide/why.html#why-bundle-for-production
[1]: https://vitejs.dev/guide/features.html#build-optimizations
h2 makes individual requests cheaper. However, if you have a waterfall of dependent resources where some must be fetched after others, you’re still waiting out the roundtrip for each edge in the longest chain. Which developers working and living 10ms from the data center usually don’t give a shit about.
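For example, with native ESM the browser only discovers each import after it has fetched and parsed the module that declares it, so a chain like this (file names made up) still costs one round trip per level even over HTTP/2:

    // app.js -- requested by the page
    import { renderPosts } from "./posts.js";
    renderPosts(document.body);

    // posts.js -- only discovered once app.js has arrived
    import { formatDate } from "./date.js";
    export function renderPosts(el) { el.textContent = formatDate(new Date()); }

    // date.js -- only discovered once posts.js has arrived
    export function formatDate(d) { return d.toISOString(); }

A bundler flattens that chain into one (or a few) requests, which is exactly the trade-off being discussed here.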
Deno is an alternative to node and npm. You can have a Deno http server serving content to a browser. But the browser won't care how the server runs, as long as it can speak HTTP.
You can also use Deno to run your bundling tools, but again, what happens in Deno stays in Deno and does not reach the browser.
I think this is actually because esbuild doesn't support everything Vite needs for its production bundles (well-controlled/grouped chunks), while it's excellent for dev, where there's no such need.
I can’t remember where I read it, I think it’s in the official docs.
That's indeed correct [0]. esbuild is still a bundler though, so it wouldn't change much other than (much) faster production builds.
[0]: https://vitejs.dev/guide/why.html#why-not-bundle-with-esbuil...
Fresh is using esbuild to build the bundles at runtime instead of pre-deployment. So it rebuilds the bundles every time a new process starts up and keeps copies of those files in memory for each process.
My understanding of this is that this is trading a build step for just in time (JIT) compilation, which seems, ok? But it seems to me that you've just moved the problem around and I'm sure there are additional trade-offs (as with anything).
"Build" in JS circles usually means transpile, not a replacement for JIT which will still happen at runtime.
In addition to that, often there are other concerns addressed at build time such as linting.
yeah, exactly. So I wonder if ultimately they want to have the browser handle transpiling things like typescript? And I definitely think there are other concerns (such as linting) that you want to happen as part of your development pipeline.
The browser doesn't transpile, and if you're transpiling in the browser in your own JS, you're doing it wrong... generally.
The point of transpiling is improving DX, so it has no business happening during the user's browser render.
Sorry - I miswrote that - I meant the browser directly compiling (JIT) typescript
There is a TC39 proposal to allow type annotations in JavaScript, which would let the browser handle TS/Flow files without needing a compile step:
https://github.com/tc39/proposal-type-annotations
(That's only to allow the type annotations to be there, not to have static checking in the browser)
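Roughly, the idea is that a file like the following could be shipped to the browser as-is, with the annotations skipped over like comments rather than stripped by a build step (this is an illustration of the proposal's goal, not its exact grammar):

    // valid TypeScript today; under the proposal the browser would parse and ignore the annotations
    function add(a: number, b: number): number {
      return a + b;
    }
    console.log(add(1, 2));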
IMO: I would love to see this implemented. Linting and typechecking should be run before committing code or deploying, but I want to be able to stop transpiling/bundling in all cases.
That sounds needlessly wasteful. You can trivially strip TS out of JS before sending off to the client. You're still going to need to check TS in the build/CI step (i.e. the time-consuming part) before doing anything, so you've gained exactly nothing.
"You can trivially strip TS out of JS before sending off to the client"
To quote you, that sounds needlessly wasteful to me. Also I prefer not needing to deal with SourceMaps or different source when debugging. So it's quite the contrary: I gain a lot. Different strokes for different folks.
I understand your reasoning but it seems to be focused more around developer time rather than bundle size and user experience if I am not mistaken.
The ratio between built code size and source size can easily reach orders of magnitude in TypeScript projects with exhaustive types.
Sending all that code to the user is wasteful, and this wastefulness is multiplied by X end users, as opposed to the development process which is centralized.
From a money perspective the picture is different of course.
I would still love to be able to execute TS directly in the browser, but this is purely a DX thing.
I feel like this is what everyone actually wants. Typescript in the browser (at least for me personally) would be awesome, and if they made TS into a separate language with its own runtime, that would be like the holy grail.
The history here is a little misleading. Client side bundling happened before node/npm. It's a performance optimization to reduce the number of requests the browser has to make. Typically people were just concatenating files. Concatenating was a painful dependency management challenge for larger code bases. Subsequently there were module systems like requirejs that also sought to fix some problems like these and ran without a build step in dev. Browserify really changed the perspective here and people started to think a build step wasn't sooo bad.
I do think, based on the requirejs code, that commonjs/browserify didn't really need to be compiled anyway.
Also, fwiw, the technique mentioned here is how a colleague and I introduced Babel at a large company: we just transformed and reverse-proxy cached in dev. And fwiw, webpack basically does this these days anyway.
> What exactly needs to happen to make server-side JavaScript run in the browser?
That sounds like an oxymoron to me. I have honestly no idea what they mean by that. To me, a browser is client-side software, so saying you want to run server-side JS on it doesn't make any sense. They mention it several times in the article but I simply can't follow.
Could someone with a deeper understanding ELI5 this to me?
It doesn't make a lot of practical sense, but basically they want to reuse substantial amounts of server code as client code. A fundamental misunderstanding of the client-server model, methinks.
You can't see a good reason to validate form inputs client side and use the exact same validations server side?
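For what it's worth, a minimal sketch of that kind of sharing (module name and rule are hypothetical) is just a plain function imported on both sides:

    // validate.ts -- imported by both the browser form code and the server handler
    export function validateUsername(value: string): string | null {
      if (value.length < 3 || value.length > 20) {
        return "Username must be 3-20 characters";
      }
      return null;
    }

    // client: call it before submitting to show an inline error (UX)
    // server: call it again on the incoming request and reject with 400 if it fails (safety net)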
That's nonsense. There are many validations that you don't trust the client to handle (or that require API calls, making "exact same" an unreasonable expectation). Ultimately, frontend validation is for UX and backend validation is for security.
Different concerns, different capabilities, different code.
Code reuse. If I need to render on the server (for SEO purposes, as most people do not rely on Google's dubious and unknown JavaScript processing ability) then it makes sense to want to share as much code as possible between the server and the client. If you're doing a single-page-application (SPA) you can also share routing code, as your client will need a way to route to pages client-side.
With that said, I've seen people argue against node APIs and this desire to only use web APIs on the server. I don't get that. Node's API is generally pretty good and using JS as a replacement for Python/Ruby/etc. locally is rather excellent today. You don't need neutered APIs to also write code that works in both client and server. Unless you're selling cloud native bullshit (ahem, Nextjs)
I think what they are really saying is "What needs to happen to allow us to run server-side-_style_, module-based JS in the browser?"
Basically the point being that the browser "version" of JS has a lot of limitations w.r.t dependency resolution and standard library usage, and that by either using a bundling tool or whatever is being proposed here you can avoid those. In that way you end up writing "server-side JS," basically NodeJS style JS, for the browser.
Major added benefit is that it allows you to use the same libraries/packages/whatever on both client and server. That's highly convenient.
Full stack devs working with JS on the backend love to be able to reuse their code on front and back end when desired. This question is relevant to them and them only.
Like using libraries, or packages. I can reuse those on the front-end, the back-end, command line tools, or wherever else I want.
A bit simplistic of a take; consider games. The first client-server games made the server deal with logic exclusively.
Modern (post-Quake) games make the server authoritative but allow server logic to run locally.
What modern JS apps do is a hybrid of client/server rendering. It's akin to moving/transmitting code from the server for faster rendering.
I think they use it for offline web apps and to fix problems with server-side rendering (resource usage, time to first render).
We're going to end up with locality-optimizing, migrating code.
It has nothing to do with server-side business logic. It's simply that JavaScript's package manager, npm, was widely adopted for server/Node needs. Then the notorious "node_modules" folder, which had previously only been dealt with by server-side builds, became part of client-side dev too, because it had so many goodies in it, so the client de facto had to do the same build as the server. By "server-side code", I think it's safe to say they mean "node modules".
Of course Deno has a build step. The difference is you don't have to configure it, and it happens on demand rather than ahead of time.
It's definitely an improvement, but the title is misleading.
Probably one of the more rational takes I've seen in this discussion... I happen to prefer the Deno approach, though I really do appreciate the efforts toward better Node.js compatibility, if only because of the sheer volume of modules out there.
I do think that new libraries should probably go the other direction with Deno first and Node/npm as a separate build target. I've started also reaching for Deno first for a few shell scripting chores where I need more than bash...
    #!/usr/bin/env -S deno run ...

Which has been pretty handy.
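A minimal sketch of that kind of script (file name, flag, and logic are just an example):

    #!/usr/bin/env -S deno run --allow-read
    // count-lines.ts: print the number of lines in the file passed as the first argument
    const path = Deno.args[0];
    const text = await Deno.readTextFile(path);
    console.log(text.split("\n").length);

chmod +x and it runs like any other shell script, with no node_modules folder or build step in sight.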
IMO, this post doesn't discuss the tradeoff of removing the build step. What a "build" is has been obfuscated. When you deploy an app, you still need to convert TypeScript into JS, and then the JS needs to be turned into an optimized representation for V8 to process.
For example, Fresh has a “build process” whose cost is paid for by the user [1]. You want to do these things before the user hits your page, and that’s the nice thing about CI/CD. You can ensure correctness and you can optimize code.
In the interest of losing the build step, worse UX is traded for better developer experience (DX). Instead, I would recommend shifting the compute that makes sense into the build step, and then giving developers the option to do other work lazily at runtime [2].
[1]: https://github.com/denoland/fresh/blob/08d28438e10ef36ea5965...
[2]: https://vercel.com/docs/concepts/incremental-static-regenera...
In addition to this, the bundle files generated at runtime are stored in memory in a Map. If you have a server and want to have multiple processes for handling requests, each of those processes will have a copy of the build artifacts in memory. Any requests that get routed to newly started processes will have their response delayed by however long it takes to generate the bundle. So users would experience seemingly random delayed load times due to runtime bundling.
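A rough sketch of that runtime pattern (not Fresh's actual code; names are made up and the esbuild usage is simplified):

    import * as esbuild from "esbuild";

    // each process keeps its own copy of the bundles in memory
    const bundleCache = new Map<string, Uint8Array>();

    // runs once per process start (or lazily on the first request for a bundle)
    async function buildIslandBundles(entryPoints: string[]): Promise<void> {
      const result = await esbuild.build({ entryPoints, bundle: true, write: false });
      for (const file of result.outputFiles ?? []) {
        bundleCache.set(file.path, file.contents);
      }
    }

    // the request handler just serves whatever was generated at startup
    function serveBundle(path: string): Uint8Array | undefined {
      return bundleCache.get(path);
    }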
I think it would be better to do bundling in your CI/CD. esbuild supports incremental builds, so using that + code splitting would be one way of speeding up builds.
With their current bundling design, if they believe bundling is fast enough for users to not be negatively impacted, wouldn't it also be fast enough to not slow down development/deployment by having it in a build step?
I'm certainly not the foremost expert in JavaScript build systems, but this just seems wrong.
Reducing build times (or eliminating the build step) by moving things to runtime is a great idea for a debug build/mode. But why is it a good idea not to have separate release build to optimise for runtime performance?
https://github.com/denoland/deno/issues/1739 just crossed 4 years. If they want people to do transpiling inside their own tool, they should create an API so we can use our own tools rather than the ones behind their black box.
Would be a much better use of their time than writing this nonsensical bs
> And to be make your Fresh app more performant, all client-side JavaScript/TypeScript is cached after the first request for fast subsequent retrievals.
My understanding is that the client-side JS is a result of backend compilation. How does this work if the backend is dynamically generating those JS files? The page can return different JSX based on what `getPosts()` returns, no?
(At least in React) the JSX gets transpiled to function calls, which are then run at render time with a particular set of data. That first transpilation step will always come out the same and can be cached.
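Roughly, with the classic React transform (component and data names are made up):

    // source
    const list = posts.map((post) => <PostCard title={post.title} />);

    // after transpilation -- the static part; the dynamic data still flows in at render time
    const list = posts.map((post) => React.createElement(PostCard, { title: post.title }));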
They make you read an entire article about how bad build steps are, only to present you with the (no less appealing) alternative of JIT compilation with URLs. This does nothing to improve the "sea of dependencies" problem they spent so much time pointing out as a bad thing.
I'm not sure why so many are interested in not having a build step. You'll still want to have a step that runs typescript to check for errors, a linter, maybe your tests and other stuff.
Or do people just want to YOLO it and let it crash in prod?
Deno didn't have a structure that caught my attention in its early stages. But now I see it going in a direction that makes sense for what I need. I guess I need to start experimenting with a side project.
I think new features move into browsers pretty fast nowadays? The new stuff introduced is overly hipster, while the old stuff still needs a lot of polish. We keep getting the means to do all kinds of new things and it (for lack of a better word) progresses forwards; the sum of things adds up to greater things. The (almost) opposite approach is to look at what people are doing, then make a single thing that does that directly, without 100 weird steps, and update it to make it better, just like modules do.

Everything HTML does looks like it was slapped together in a weekend. If you look at the spec it's obvious a lot of work went in, but the default behavior never fails to disappoint. Some examples out of hundreds: we wanted a range selector and a slider; we got a slider and they called it range. How do I do a range now and make it look the same as the slider? Oh, I write both from scratch? lmao

Half of JSON's greatness is in how sad the XML tools are, but if I compare both to SQL I wonder how I get any work done at all! Imagine a form was just JSON -- JSON in and JSON out. Dynamically creating form fields and populating them from a deeply nested JSON, allowing the user to add fields, then trying to get the JSON toothpaste back into the tube was a truly hilarious adventure. I eventually just set the value attribute to the value of the form field and stored the HTML in the db. Did you know JS has an XPath implementation? Not that one could use it, but there it is. haha
I really think with some love we could just go back to writing html/js/css directly. Maybe it is just that I fail to see the point of nodejs.
Sounds to me like we've round-tripped back to PHP, circa 2009. Bout time!
That's the vibe I got from browsing the Fresh documentation. However, I don't see it as a bad thing! Concepts feel like they fit together much more nicely than what I've seen in a lot of web tools.
As someone who is a bit of an outsider to webdev, it looks like enough power to make most webapps I'd want to make. The only question in my mind is what the benefits/drawbacks of Deno+Fresh are versus something like SvelteKit.
There are good things and bad things about PHP. This is one of the good things.
Definitely! I'll make a small edit to my comment to make it clear I'm not dunking on the idea
more like PHP circa 1999 honestly
I tend to stick with script tags as much as I can. Really the problem is all the frameworks pushing people to adopt a build step. Their excuse is optimising code size, but in most cases that matters little; I don't mind including all of Tailwind or Font Awesome.
So please, if you own a framework like this, make sure a script tag with a CDN link is easily copyable.
Are you talking about the CDN version of Tailwind? If so, the docs specifically say that's not for production use.
A lot of very useful features require a build step, because Tailwind generates classes on the fly based on what you typed in the HTML.
I bet you lots of people are ignoring the "don't use this in prod" advice. Why doesn't Tailwind just offer a for-production CDN link? I understand that the build step provides features, but what if I don't care about those features?
It's not just features, it's a 350kb js file you have to send over the wire [1] as opposed to pre-building and sending a tiny css file containing the small subset of classes you used.
Not to mention, I don't know exactly how it works, but I assume it's doing all that processing to convert classes in your HTML into CSS classes on the client side so it's probably less performant.
> Really the problem are all the frameworks pushing people to create a build step.
I doubt it.
People moved on from jQuery and/or vanilla because they needed to produce more sophisticated apps. And even in those days of yore, for any non-trivial project you still needed to concatenate and minify your code.
BTW Preact can be used without a build step.
React and Vue can also be used without a build step. Vue shows how to do it in the "Quick Start" at the beginning of the tutorial. React can do it with an ugly syntax and no libraries, or by using Preact creator's htm library, or domz.
The reason for bundling is converting potentially hundreds of individual script files into a number that's more manageable for a browser, without paying a latency cost… it's not only about optimizing code size. Also, besides tree-shaking, there's a big saving from minification and removing development-only code.
With HTTP/2 multiplexing the latency of fetching multiple files from the same server is not a big issue anymore: https://stackoverflow.com/a/30864259.
One can also argue that minification is not really that important anymore with widespread newer compression algorithms like Brotli.
EDIT: Also, see this very good argument in favor of multiple files: https://news.ycombinator.com/item?id=34997759
Sadly, lots of people are overoptimizing all of these things for apps that serve 10 users.
If you serve 10 million users every byte counts, but 10? Use the cdn.
How can you get all of Tailwind? I've been struggling with that one; I always have to run some tool to get a built CSS file, and any change then needs a build before refresh. And I can't look at a file with 1000s of definitions (which I want to do).
I think if you use Tailwind without the actual package, you lose a lot of what it can do. You don't get all the features that only deliver the parts required (having mind fog this morning, lol)... I'm not sure if it's technically tree-shaking, but they look at every file that uses Tailwind classes and only bundle up the needed CSS. However, maybe that works in Deno too? I don't know.
This article is spot on. As a developer, I don't want to see the build step. As soon as you expose the mechanics of the build and transpilation process to developers, you add a ton of complexity; for example, it opens up the possibility of transpiler and JS engine version incompatibilities. Devs should only need to concern themselves with one number: the version of the engine which runs their code. If they need to worry about the engine version and the transpiler version separately, it makes code less portable because you can't just say "This library runs on Node.js version x.x." It sucks to come across a library which works on your engine version but relies on a newer version of TypeScript... It's like hoping for the planets to align sometimes.
The dumbest thing to me about building JavaScript is that you burn all the energy and labor of a build with almost none of the meaningful benefits.
No one is building and ending up with bundles that are reducing the bloat of the web, you can’t tree-shake your way out of bad practices. Articles and real lived experiences show us that the web is still bloated.
And why are we transpiling anything? If people want to flirt with building, I wish JavaScript engineers would just build an implementation that compiles to machine code or an intermediate representation.
Which is it? Do you want to be a scripting language or a programming language that compiles to something? It’s so gross to me.
> Which is it? Do you want to be a scripting language or a programming language that compiles to something? It’s so gross to me.
This is ridiculous. Just aesthetics. JS compiles to machine code when you run it "just in time". It's even relatively efficient considering it doesn't need static typing.
The "build step" is just for reducing the size of the payload. It is possible a binary representation would make it even smaller but not by much. Not worth the added complexity
A bit older, but both worth a read, and most of this is still relevant today...
https://www.amazon.com/High-Performance-Web-Sites-Essential/...
https://www.amazon.com/Even-Faster-Web-Sites-Performance/dp/...
Pretty sure we were building code to validate it and unit test it before releasing it.
This article is strictly about JavaScript. What about all the server-side rendering frameworks?
Huge assumption that everything is built one way.
Great clickbait title, makes people wanna jump in the comments and say the OP is wrong.
I mean, if we wanna get really pedantic about it, then yes, there will always be a build step no matter what you do; one could argue saving the file and alt-tabbing to the browser is a build step. But that's not the point, is it? The idea is to lower that friction as much as possible, and JIT is perfect for that.
I thought this would be an article about adding import maps to Deno, which would be great.
I hope build tools start emitting them (at least for dev instances) to decrease the magic.
I was trying to use import maps, but they're not trivial to create by hand, actually.
There are always problems with Node lagging behind browsers, though, which makes developing hard (no WebSocket support by default, for example; the crypto module is also not included).
Deno does support import maps out of the box. For Node there’s a loader[1] you can use (though a glance at the GitHub issues suggests it’s incomplete).
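For reference, an import map is just a small JSON document mapping bare specifiers to URLs (the versions and host here are only an example):

    {
      "imports": {
        "preact": "https://esm.sh/preact@10.19.2",
        "preact/": "https://esm.sh/preact@10.19.2/"
      }
    }

With that in place, `import { h } from "preact"` resolves to the mapped URL, whether in the browser (via a script type="importmap" tag) or in Deno (via deno.json or --import-map).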
Does it generate import maps for me automatically? That's the hard part.
I'm using Vite with SvelteKit, which is great because it compiles files separately, but it still doesn't generate import maps; it uses imports with relative and absolute filenames instead.
I don’t use Svelte of any kind so I’m possibly out of my depth, but I don’t know what I’d want to automate with import maps. I don’t think Deno addresses any use case like that but I’m hesitant to say so because I really don’t know what need I’m even addressing.
Regardless of whether or not Deno solves the issue, this article clearly depicts how broken the entire experience is.
Putting JS in the backend was a tragic mistake born of laziness and ignorance. It should not be a surprise that everything else that followed suffers the consequences of these root traits as well. JS was barely acceptable when it was caged in browser-land, and we'd all be better off had it stayed there.
FE development has some unique challenges, but in my experience a lot of people who work in this domain try to find their own solutions to problems that have already been solved decades prior. There's a reason the build chains are fragile and a nightmare to configure, that package lists are out of date the moment they're published, and that it takes a sustained effort to maintain a project viable even if you're not adding features or fixing bugs. It's absurd, and it's the status quo.
To take this into other areas of development (like BE for example) simply because that's what you're familiar with... it really is a special kind of masochism.
So, if I'm using URLs for dependencies, effectively I can't code while I'm offline? I know it's not the norm, but there have been plenty of times I needed to work without internet.
No. Deno supports vendoring, caching and locking of dependencies just like other ecosystems.
They are not fetched every time you run the app.
I’m fairly certain Deno caches the downloaded artifacts locally. There’s also tooling for downloading all dependencies:
Sure, these frameworks may do just-in-time transpilation and compression; that doesn't mean you don't have a build step.
Copying the code to the server becomes the build step.
Except now you have no chance to lint the code before shipping it.
"But I can lint it on my machine."
Good, then you have a build system, and you may as well just put the optimized output on the server, since server startup time depends on your code size at some point or another, and you pay for that.
> [graph]
> Interest in Node.js grew since its inception.
They should mention the source of this; I guess it's Google Trends.
Maybe they could add a new transpiled language as well, like Civet[0] or ReScript[1]. Not every project needs to be a C#/.NET clone in TypeScript :)
Certainly, in terms of writing scripts, Deno seems nicer. Like, if I want to write TypeScript that just runs and does something, without having to carry around a node_modules folder, etc., Deno seems like it might be nice.
Deno is the "these go to 11" of the Node.js world.
Creating a whole fork simply to avoid building TypeScript, reinventing a worse package management system, and adding a useless security harness.
That's quite a harsh statement... What's so bad about package management? And what's useless about the security harness?
> What's so bad about package management?
Hard coding URLs is significantly worse than having a package.json file:
- you don't need to write the full URL to import a module
- you have a quick overview of which modules are installed and for which reason (dev dependencies)
- you can easily create an immutable list of dependencies
> And what's useless about the security harness
Because most apps will have to enable all flags (file system and network) anyway and because huge security holes like symlinks breaking out of the harness were present not too long ago.
- URL-based dependencies also have some additional security issues in the most common usage scenarios (see my recent flagged post: https://news.ycombinator.com/item?id=34937327).
- You also lose all ecosystem upgradability, as everyone is using pinned versions instead of SemVer ranges
How many things have been broken by doing that in practice though?
I mean, seriously... in node/npm, I've seen way too many times where a minor version broke things in practice... so we default to patch level, which is usually safer. In the end, we still wind up needing tools, like GitHub's alerts, to flag issues that require larger bumps. Oh, your application hasn't been updated in a year, and you now have two major versions of LibraryX to work through... Next thing you know, you've spent literally three weeks updating your node/npm/react project... and even then, some packages were too painful to update, so you just live with the warnings anyway.
And now you've concentrated attack targets on the latest minor/patch versions of packages... whereas if everyone is pinned, the targets are mostly unknown from the outside without deeper inspection.
Just saying, I'm not sure auto semver with lockfiles is really a win over just locking to begin with.
> Just saying, I'm not sure auto semver with lockfiles is really a win over just locking to begin with.
It's still a win even if you consider only patch version updates. Without that, for a CVE in a dependency, every dependent package will have to update, and will first have to wait for the lower level to update and publish a new version. So for a dependency ~4 layers deep, with coordination and publishing lag in between, this can quickly take more than a week (and this is assuming responsive maintainers).
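Concretely, it's the difference between an exact pin and a compatible range in package.json (package name hypothetical): with a caret range, any 1.x release at or above 1.3.0 is acceptable, so a patched 1.3.1 flows in on the next install without anyone up the chain republishing, whereas an exact "1.3.0" pin only moves when someone edits it and publishes.

    { "dependencies": { "some-lib": "^1.3.0" } }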
A lot of the time that happens anyway, at least with npm... there are plenty of times you see warnings that you cannot resolve because a nested dependency is more than a point release off.
The world will take some time to adapt to correct security mechanics, just like all other software worlds did. The security harness is not only a security harness, it's a whole layer that abstracts away access to the operating system.
But it's the wrong approach.
The correct approach would have been through syscall blocking which is a much lower level.
I'm assuming you mean SELinux-style syscall blocking. I think you need both: syscall blocking for the system/Deno layer, which enables app-layer security in Deno itself. That would be the composition-over-inheritance / functional approach.
I have no comments regarding their dependency system.
But the security features are stupid on their face.
If you can’t trust your own code, why should users?
It’s too naive anyway. Why would I grant carte blanche to an entire feature instead of per dependency?
So Deno started with a bad idea, and then implemented it half-baked.
Which is it? Do you not trust your own code or do you? You don’t? Why not? Or why do you only trust a subset of it? If you do only trust a subset of it, why have you denied or granted the entire feature?
It’s useless. It’s one of the dumbest software features I’ve seen in my life.
Trust a dependency and pin its signature.
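For context on the granularity being argued about: Deno's permissions are granted per process via CLI flags, optionally scoped to hosts or paths, but never per dependency, e.g.:

    deno run --allow-net=api.example.com --allow-read=./config main.ts

Every module running in that process, first-party or third-party, gets the same grants.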
Yes JIT building on route request sounds super non-wasteful and better. /s
AFAIK there's some caching going on.