Deno.js in production
The security model is a big one for me. If they could extend the permissions system to work for individual dependencies, they could solve one of the biggest security issues facing developers right now. Especially if policies could also be applied to node packages.
Are there any plans to move in this direction? It seems like if you can do it for the full app, you should hypothetically have the capability to make it library-specific. Or perhaps there are non-obvious blockers that make it too hard?
If there are plans to do this, isn't it better to do it sooner rather than later? Better to get library authors in the habit of specifying permissions/policies now while the ecosystem is still small. If you wait too long, it will be a ton of work to retrofit all the existing libs.
You should check out Lavamoat: https://github.com/LavaMoat/LavaMoat
It attempts to do what you're essentially describing. It was built by the MetaMask team, where supply chain attacks are an obviously huge risk.
I've spent some time trying to get it working in an app, but haven't been able to get it all the way working. It's still pretty beta and not well documented.
Thanks, I'm super interested in anything that tackles this problem.
Deno's approach seems most promising so far since it's really ideal to have this built in to the core runtime, but it's not really very useful yet as implemented and I don't know whether taking it further is a priority for them.
I, too, would like to have per-dependency security policies. But I can't really see where you would realistically draw the line between dependencies, other dependencies, and your own code. Is a callback, created in your own code, that potentially accesses dangerous APIs allowed? Presumably yes, but what about when it is passed to a dependency that proceeds to call it? What if the dependency returns the callback back to your own code, can you call it then? What if the dependency wrapped the callback in another callback that calls the original callback with altered arguments, and returns that to your code? Do you enforce any potential restrictions statically, necessitating much stricter restrictions on what you can do than typescript on its own does, or at runtime, necessitating some sort of non-trivial bookkeeping tracking all the parts of the codebase that any piece of behavior has touched?
While I don't have the answers to all these questions, I imagine we could come up with some sane defaults.
Even if you have to 'eject' and provide overly broad permissions to certain libraries, I'd imagine these would be quite a small percentage and you'd still get the huge win that the 90% (or whatever) of your dependencies that don't need any system or network access at all and don't have the kind of issues you describe can effectively be removed as viable targets for attack.
For callbacks, Deno could provide a wrapper function that is only available to the top-level app (not dependencies) and causes the permissions in the callback to be evaluated at the app level, not the dependency level. There may be a better way, but that's one idea.
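A rough sketch of that wrapper idea in plain JavaScript. Everything here (`runAsApp`, the principal tracking, the permission table) is invented for illustration; Deno exposes no such API today:

```javascript
// Hypothetical sketch only: none of these names exist in Deno.
// The idea: a callback wrapped by the top-level app keeps app-level
// permissions even when a dependency invokes it.

let currentPrincipal = "app";

// Simulated permission table: which principal may touch the network.
const netAllowed = { app: true, dependency: false };

function checkNet() {
  if (!netAllowed[currentPrincipal]) {
    throw new Error(`net access denied for ${currentPrincipal}`);
  }
}

// The proposed wrapper: pins the callback's permission context to "app",
// restoring the previous principal when the callback returns.
function runAsApp(fn) {
  return (...args) => {
    const prev = currentPrincipal;
    currentPrincipal = "app";
    try {
      return fn(...args);
    } finally {
      currentPrincipal = prev;
    }
  };
}

// A dependency invoking a callback runs it with dependency permissions...
function dependencyInvokes(cb) {
  const prev = currentPrincipal;
  currentPrincipal = "dependency";
  try {
    return cb();
  } finally {
    currentPrincipal = prev;
  }
}

// ...so a bare callback would be denied, but a wrapped one is allowed.
dependencyInvokes(runAsApp(checkNet)); // evaluated at app level, passes
```

Essentially dynamic scoping of the permission context, which is also why the bookkeeping questions upthread are real: the runtime would have to track which principal is "current" across every call boundary.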
Static checking would be great too. I think a combination of static and runtime enforcement would be ideal.
> Or perhaps there are non-obvious blockers that make it too hard?
To me, it seems like you'd need a new language.
Why is that? I may be missing something, but they're already enforcing permissions at runtime, which seems like the hard part to me. It would 'just' need to be integrated with the call stack so you know which dependency(ies) want system/network access.
IIRC there are all sorts of issues around monkey-patching prototypes, shared objects between modules, etc. which would readily allow escaping any sort of module / dependency level permissions system. You'd probably be better off pitching a typescript subset language with its own compiler / interpreter rather than trying to shoehorn it into V8.
These seem like solvable problems. Prototype modifications are rare these days and should probably be restricted in the same way that system/network access is. Shared objects between modules also seem like an edge case apart from callbacks? I posted an idea on how to handle callbacks upthread a bit: https://news.ycombinator.com/item?id=31326123#31332061
System and network access are all done via the runtime library functions, which are easy to control.
Changing prototype access almost certainly involves modifying V8 in unpleasant ways, and I'm not sure how you would get around the overhead of Deno needing to inspect the call stack on every function call. Statically analyzing when a function is operating in one context or another is certainly not a trivial problem.
You will not solve the dependency problem with permissions. You will only solve it by reducing dependencies and reviewing code before you update them.
Adding permissions will do nothing except add ridiculous overhead and complexity, such that to get anything done devs will just grant all permissions.
Reducing dependencies is generally good (though often at odds with productivity), but I doubt we'll solve it through just reviewing code. Even with a small number of dependencies, the full tree can be absolutely enormous, and there are many ways to obfuscate attacks. It's a severe needle-in-a-haystack problem.
I don't see why permissions have to add "ridiculous overhead and complexity". Most dependencies need very limited (if any) system or network access. Locking those down would be a huge win, and it makes reviewing updates in large dependency trees realistic since you can zero in on permission changes.
Check out Node policies as well: https://nodejs.org/dist/latest/docs/api/policy.html
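For context, Node's policy feature (experimental at the time of writing, enabled with `node --experimental-policy=policy.json app.js`) is about integrity and dependency redirection rather than OS-level permissions. From memory, a policy file looks roughly like this; the integrity value below is a placeholder:

```json
{
  "resources": {
    "./app.js": {
      "integrity": "sha384-PLACEHOLDER",
      "dependencies": {
        "fs": true
      }
    }
  }
}
```

So it can pin what a module resolves to, but it doesn't restrict what a dependency may do at runtime the way Deno's permission flags do.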
I haven't had the same success with Deno that I expected. At first, I was shocked by how good the tooling is, and I was super happy with its dependency management. All of the problems started once I started getting into actual business logic.
`redis` and `ioredis` `npm` packages don't work with Deno, even with the Node compat layer (I tried). So you have to use the Deno driver for Redis. But you look up the library for this, and it's experimental: https://deno.land/x/redis@v0.25.5
Same deal with Postgres. `pg` would not compile at all. Knex also didn't work (this was an older project). I'm assuming this is because these two packages use native Node.js plugins.
I like Deno a lot, and the out of the box TypeScript support is a gamechanger, but I had a really tough time working with it and actually being productive.
My experience was very similar. I was excited to start working on a personal project in Deno and found trying to use npm and import maps to be so painful, it left me wondering how I hadn't heard of these problems before (or since, really).
>Same deal with Postgres. `pg` would not compile at all. Knex also didn't work (this was an older project).
You should take a look at Postgres.js [0] which supports Deno and TypeScript. Version 3.x was released recently and discussed in this HN thread [1].
The client is implemented in JavaScript. Queries can be written using JavaScript template literals.
[0] https://github.com/porsager/postgres [1] https://news.ycombinator.com/item?id=30794332
> `redis` and `ioredis` `npm` packages don't work with Deno, even with the Node compat layer (I tried).
I'd love to hear specifically what didn't work and how close the Deno team is to maybe fixing those sorts of issues. Not that they owe that to us open-source wise, just curious as like... otherwise you are right. We're starting the ecosystem again over from scratch.
redis clearTimeout shim does not work correctly, console gets blasted with timeout messages nonstop because `on('error')` from the EventEmitter keeps getting triggered, even though the client is connected just fine.
ioredis commands hang randomly. I couldn't do GETs or SETs.
I think the ecosystem just needs to be filled out
I'm hoping Deno is such a dramatic improvement on the foundation vs Node that it's enough to convince library authors to jump over; the jury's still out on whether that flywheel will get started
ioredis and mysql2 have been working for quite some time now.
On the other hand, not being able to bring all of the node.js package ecosystem along can be considered a feature.
I don't think it can.
There's a lot of crap in the node.js ecosystem, no denying it.
But there's just a lot of packages in general. Many of which are really, really well designed and good.
I don't really understand this take.
Yes, there are a lot of packages available - but that gives you choice.
It's not a feature if you can't build the project you are trying to build.
In the security model section it doesn't mention workers. In Deno, workers can be given different permissions. The article suggests having different permissions per file. I think it might be nice to have both but if I could only have one I would want separate permissions per worker. Files aren't always split along the same lines that permissions should be split. I would like to be able to control permissions by url prefix though - so some library doesn't do more than I want it to. It might mean setting up a worker if I want to make sure it doesn't indirectly use more dependencies.
I hoped this article would provide more info than it actually did.
For me, a JavaScript developer who mainly uses Node.js for work, Deno is interesting and I want to use it, but the hosting part is what prohibits me from using it. In Node it's easy to run production code with pm2; you can cluster it, and it's super easy to configure it so that it will run one Node process per available core.
With Deno, you can't do this because there is no clustering available, so you kind of have to run it on single-core machines to get maximum performance out of your hardware. In other words, on cloud solutions like Deno Deploy or a Kubernetes cluster configured to run it in single-CPU Docker containers.
I am not interested in that and as long as it is that way, running Deno is unfortunately a waste of my hardware. Sure there are web workers and they are great for stuff but if my process dies for some reason I don't want that to halt the application.
You can totally use pm2 with Deno! Just needs an extra flag.
pm2 start index.ts --interpreter="deno" --interpreter-args="run --allow-net --allow-write"
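The same thing can be kept in a pm2 ecosystem file instead of a one-liner. This is a sketch; `interpreter` and `interpreter_args` are standard pm2 options, and the app name and permission flags are just examples to adjust for your app:

```javascript
// ecosystem.config.js -- sketch of the command above as pm2 config.
module.exports = {
  apps: [
    {
      name: "my-deno-app",            // example name
      script: "index.ts",
      interpreter: "deno",            // use the deno binary instead of node
      interpreter_args: "run --allow-net --allow-write",
    },
  ],
};
```

Then `pm2 start ecosystem.config.js` should behave like the one-liner.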
Ok I had no idea that was possible. Will this work with the cluster module in pm2? How does it work exactly? Does pm2 have its own webserver and just forwards the request to whatever interpreter you specify?
I've been using Deno for a large compiler project I've been working on over the last couple years, and for the most part it's been a dream
Of course that's not the most representative example, because a compiler can almost entirely dodge the ecosystem problem, but I thought I'd offer it anyway
node.js nowadays is a total disaster. I've been able to hold down the fort with Turborepo, but even then it's been a touchy strategy. I want to switch to Deno, and I'm interested in articles like this. Either that, or, because of the state of Node.js/npm and its ever-increasing surface area of doom and resume-driven-development modules, go back to just HTML and vanilla JS. I just can't get any work done. I'm saying this as a developer with 27 years of experience who wrote my own server-side SpiderMonkey solution pre-Node.js.
> I just can't get any work done.
Yep, I'm feeling this right now. I was recently tasked with updating an internal node app that hadn't been touched in about 4 years and it's seriously one of the least fun things I've had to do in my nearly 10 year career. After hacking away at it for a couple of weeks, I told my boss that it needs a ground-up rewrite.
The bazaar approach of Node and NPM has created an absolute hellscape to develop in.
Care to elaborate why you can’t get any work done, and how Deno will solve that?
I’ve been using Node.js pretty much daily in production for almost a decade now, and it’s never been a total disaster, not even close.
Node.js is a mature framework and I've never had problems with it. I also write my own code and avoid installing useless packages. Truth is, everyone likes fresh projects like Deno, but their ecosystems are doomed. I mean, all I need is Redis and it doesn't work the way it's supposed to.
> resume driven development node modules
Hahahah!!! So true!! ;D .. i think i've written a couple of those!
The title is very misleading. Deno is not written in JavaScript and is never referred to as Deno.js in any official sources. Deno is written in Rust.
Much of Node.js is written in C, yet it's still called Node.js.
Deno has some JavaScript/TypeScript in it. On GitHub https://github.com/denoland/deno is 22.8% JavaScript and 13.2% TypeScript, and https://github.com/denoland/deno_std is 68.2% JavaScript and 31.6% TypeScript.
So to me the title is misleading about the name (Deno is certainly not named Deno.js), but not about what Deno is written in.
Node isn't written in JS either, but people call it Node.js. They used the wrong name, but it's a pretty huge leap to call it "misleading".
I'm surprised the included standard library wasn't front of mind. Perhaps it is implied by comparing to Go.
> Node.js is too easy to get started. This means that the pool of available programmers is not the highest quality. Runtimes like Go or Deno are still havens for the ‘connoisseur’ programmer.
wat
(Deno seems worth checking out though!)
"Too easy to get started" is just gate keeping.
Reverse gate keeping, yeah. You built a fence around your safe zone, and locked yourself in.
On one hand, JavaScript being easy makes programming accessible. On the other hand, the state of programming is terrible and getting worse. So you can't say if it's bad or not.
> On the other hand, the state of programming is terrible and getting worse.
What does this mean? It's an order of magnitude easier to build an app/service/whatever today than it was a decade ago. Having to maintain separate code paths for IE because it doesn't support many of the APIs and CSS features you need was "terrible"; by comparison, engineering today is heavenly.
Engineering might be easier, the user experience is magnitudes worse than a decade ago, pretty much universally.
it's not getting worse. you're just getting older
100%. If people want to still use Wordperfect 5.1, they still can.
It’s not getting worse. Go ahead and make a relatively complex app with jQuery 2.0 and let me know how it compares to spinning up a Next.js app.
Or better yet, you could use neither! If it helps you remember, JavaScript is bad, so more JavaScript is more bad.
I've been playing with Deno lately and I can assure the author, I'm no connoisseur
This means that, at the level of their ability to draw in candidates, interest in Deno is a quality signal for their hiring pipeline. The consequence is that their hiring pool is also that much smaller. One might say the same thing about Java vs Go.
> The consequence is their hiring pool is also that much smaller.
In the current market, unless you're a big company that gets flooded with applications on a daily basis, why would you ever reduce your hiring pool arbitrarily? If you're a 13-person startup with good funding, you want all the candidates you can possibly get. Excluding potentially great engineers because they've never worked with Deno doesn't make any sense.
Especially if you're a product-focused startup, the last thing you may actually need is an enthusiast for a language. 90% of the time you want people that just want to use the right tool for the job.
We're contending with the reasoning behind an observation, but the observation is already useable to the author of the post — if true. Is the author of the post seeing improvements in candidate quality because Deno is actually a proxy for burning the midnight oil? Who knows. If we generalize to Java vs Go will the observation hold? Who knows.
But as chrisco255 pointed out, neither usage of Deno nor applying a weight to some observation will reduce your hiring pool.
I would also say that a technological perspective of "right tool for the right job" is somewhat independent to the expensiveness of your hiring pipeline. Sometimes Erlang is the right tool for the right job, but that undoubtedly changes the experience of hiring.
It does make sense, because you filter out the non-enthusiasts, and enthusiasm is a proxy for great programmers. When Rust was still nascent, the people interested in it were likely to meet a higher quality bar than any regular old JavaScript dev. That's not to say that JS is bad necessarily, just that enthusiasm correlates with great programming skills.
My goodness, if you're excluding any hire because they've never worked with (insert new hotness here) you are simply making a big mistake in hiring, in general.
Deno is still JS based at the end of the day and attempts to conform to web standards in its design. I don't know why in this particular case it would limit your pool.
I could see how Golang vs Deno (JS) would impact your hiring pool dramatically though.
As chrisco255 pointed out downthread, I made a mistake in saying that Deno would reduce your hiring pool. That Deno is a signal in hiring does not mean you've lost out on Node talent, it just means that you've found a net gain in hiring by weighing Deno more strongly (assuming the blog post is correct).
However, Java vs Go would probably make a big difference in terms of hiring pool size.
Yea, this is one of the most garbage takes I've ever heard. Requiring the use of something esoteric is not suddenly going to allow you access to a pool of higher quality candidates. Instead you will still see the same pool, but now the average candidate will be even less knowledgeable about the technology.
I do like Deno personally, but this reason is not a great one to do so.
Same is true about Lisp and was true about Python in early 00s. Rare tech brings out enthusiasts.
Really? I never noticed any particularly high barrier of entry with either.
Pygame for example was very popular with hobbyist game programmers in the early 2000s as it provided a much easier way to get started with game programming than C, which was the most popular alternative back then (Unity 3d only released in 2005 and Unreal Engine only became free in 2015).
What I do remember from the early 2000s, though, was legions of aspiring game programmers struggling with C and C++.
I've tried Deno. The author might rethink that line after interviewing me.
Really though this seems almost like resume keyword checking level of a candidate quality check. Actually it sounds exactly like that...
> ‘Deno’ (like ‘Node’ but backwards)
Yeah, no.
I wonder if more people just assume that to be true, heh. I kind of was expecting it, weirdly enough.
Hint: "node" is "edon" backwards. Not sure if that name is taken for something Javascripty ... * goes to check * yeah, I found [1] which seems to be 4 years old, tagline "Run browser JS in the terminal".
"node".split("").sort.join("")
This is the correct official answer except you didn't invoke sort
"node".split("").sort().join("")
I write a lot of ruby, I guess...
wat
Sorting the letters of "node" into alphabetical order, after converting each letter to its own one-character string in an array. Slick JavaScript, basically.
yeah but you need to call sort as a function
OP code would throw an exception. ^^
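For the record, the corrected one-liner really does produce the new name, since the default `Array.prototype.sort()` compares strings lexicographically and d < e < n < o:

```javascript
// Sorting the letters of "node" alphabetically yields "deno".
const letters = "node".split(""); // ["n", "o", "d", "e"]
letters.sort();                   // ["d", "e", "n", "o"]
console.log(letters.join(""));    // → "deno"
```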
Deno is No|de swapped. So no, not backwards. But... how would one call it?
French has a word for this: https://en.wikipedia.org/wiki/Verlan
Verlan is itself verlan of l’envers (backwards). It’s super common to make slang words this way.
TIL that Stromae (the name of a Belgian musician) is Maestro in verlan.
It's also backwards, but by two character groups, or if you prefer, consonant-vowel groups.
little-endian
Maybe "scrambled".
Rotated
Deno, an anagram of node
Backwards (in Japanese)
Shifted
Transposed
Deno is Node on a middle-endian architecture (see: the NUXI problem).
It is disappointing to see this message describing the basics of what the software is on their site (https://deno.land/):
Deno is a simple, modern and secure runtime for JavaScript, TypeScript, and WebAssembly that uses V8 and is built in Rust.
Only to have that immediately followed by really poor practice of suggesting this as the installation method: curl -fsSL https://deno.land/install.sh | sh
This is not strictly related to Deno -- lots of software does this -- but if you're going to suggest your thing is more secure than the other guys' thing (which is implied by calling your thing secure), you shouldn't then be immediately throwing that credibility away.
Yes, the page offers a link to the "Releases" page at their github repository. However, anyone familiar with any kind of UX will understand immediately that this is effectively burying the link and subtly makes the statement that you don't really want to bother with that other way of doing things. They also don't provide a gzipped/bzipped tarball for the linux install but a zip file instead, adding an additional barrier/dependency.
I understand this is an area where security is losing the tug of war to ease of distribution/access but it pains me to see it on any project, let alone the potentially good ones.
You're about to run their software on your computer-- what's the difference with that and running their install script?
How do you guarantee that their install script is non-malicous and was actually provided by them?
There's a reason why code signing exists as a security measure.
While signing does improve security, it's still something of a turtles-all-the-way-down problem because how do you verify the public key is valid? An additional factor is added, which helps, but it's not a silver bullet. And the complexity tradeoffs of requiring cross-platform installation of a signing lib like gpg/minisign (which plenty won't already have installed) and a much larger install snippet are significant.
For the Mac at least, signed dmg files and apps are normal, so they should have done it that way.
Most people trust that the script is not malicious, including me. There is nothing wrong with this approach; it is extremely convenient to try something out that has a good reputation.
For these people, running the script or downloading a signed GitHub release is equivalent: in both cases they do not read the source code of the software that they are running.
There is nothing stopping you from 1) reading the script before running it 2) reading the source code of Deno and any dependencies 3) compiling from source yourself. For most people, this is a waste of time. Trust has to start somewhere to build something great.
https://deno.land/install.sh is a redirect to https://deno.land/x/install.sh, which is treated as any /x/ (community) module. These modules are immutable clones of github tags (in this case, https://github.com/denoland/deno_install/). If someone would manage to breach the AWS S3 buckets that we use for module storage, it wouldn't be just a problem for installation of the deno CLI, but a problem for any module on the registry.
It's using SSL, what's the real world concern here? Other than someone might get copy-paste happy and someday install something they don't want.
There's quite a bit wrong with this idea that "It's using SSL [therefore it's safe]", assuming your meaning there.
The most obvious case: someone compromises the installation script on the actual real deno server. Right now the webserver there is returning an HTTP/307 to an HTTP/302 to the "current" installation script file. Any compromise of the webserver makes this very dangerous.
Contrast that with proper signed packages, code signed sources, etc. There it requires compromise of the developer's systems and signing keys, which at least can be a far harder thing to attack if they're doing things securely.
I think this is a fair criticism and deserves attention. Whenever anything shiny comes around, we are too enamored by it to not allow any criticism.
Is there a reason Deno is not packaged in the official apt, yum, etc. repositories?
And in this hypothetical scenario, how does that protect against the aforementioned attack? If one of deno's hosting sites can be attacked to upload a malicious script, one of the package registries can also be attacked and upload a malicious package.
This makes sense. But how do I as a basic user make sure the signature is correct and definitely from Deno? Couldn’t a hacker sign it with their own signature?
1. the file is replaced with a malicious version on their server and not checksummed
2. copy/pasting includes invisible characters that aren't seen until executed
both of these things happen regularly
orthogonally, curl|sh (usually) circumvents the package manager and makes uninstallation difficult