Launching nginScript
I can't help but think this is a bad idea. Jamming more stuff into what is a great tool puts nginx on a slow path to a bloaty death.
Am I incorrect in assuming that you could implement your entire server-side js app now as an nginScript module? Do people think that is a good thing?
Not to mention that putting more interpreters and more end-user code into a system that has access to your service's private key might not be terribly wise.
I'm sure many people will tell me I'm wrong, and I guess I can see some benefit to simplifying configuration and perhaps deployment.
But there's a reason we've mostly moved away from deploying embedded PHP applications inside of mod_php.
The usefulness of this is not to turn nginx into an application server. Although there are already frameworks for this using ngx_lua (http://leafo.net/lapis/), I don't find them very interesting except for the technical aspect.
The point is to make nginx's configuration dynamic and prevent bloating applications with stuff that belongs at the (let's call it) devops level.
Now, I also don't think nginScript is such a good idea, but that's because they seem to be building their own JavaScript VM for it. I believe this is a waste of effort and more a case of JavaScript-all-the-things than anything else.
Lua is a very simple language, the VM is small and fast, and for the "dynamic configuration" scenario one hardly codes more than a few lines (I've done quite a few things and the total line count is in the low hundreds).
It's funny. Any feature you don't want to use is bloat, but as soon as you need it, it's a necessity. The way I see it, they can either "bloat" nginx by adding scripting capabilities, or they can bloat it by covering all of the use cases that a scripting engine would otherwise enable, big and small.
One simple example that I would love to use this for: generating and adding a UUIDv4 to every request's headers. Doing so would allow us to append the UUID to virtually every log in our entire stack. Right now there is no easy out-of-the-box solution for this in nginx. With scripting capabilities it becomes trivial.
However, I'm not sure whether Lua was already enough and adding JavaScript is overkill.
I've had Lua doing exactly this, adding request UUIDs to our logs in production for over a year. Feel free to give it a try.
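A minimal sketch of the approach, assuming ngx_http_lua_module is compiled in; the header name, upstream, and seeding are illustrative, not the exact production snippet:

    # seed the PRNG once per worker (http block)
    init_worker_by_lua_block {
        math.randomseed(ngx.worker.pid() + ngx.time())
    }

    upstream backend { server 127.0.0.1:8080; }

    server {
        location / {
            rewrite_by_lua_block {
                -- format random hex digits as a UUIDv4 string
                -- (version nibble fixed to 4, variant bits to 10xx)
                local function uuid4()
                    local t = "xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx"
                    return (t:gsub("[xy]", function(c)
                        local v = (c == "x") and math.random(0, 15)
                                              or math.random(8, 11)
                        return string.format("%x", v)
                    end))
                end
                ngx.req.set_header("X-Request-Id", uuid4())
            }
            proxy_pass http://backend;
        }
    }

With that in place the header shows up as $http_x_request_id and can be appended to every access log down the stack.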
I do what you describe with no extra Nginx modules, with the following line inside a location block:

    fastcgi_param X-REQUEST-ID $pid-$msec-$request_length-$remote_addr;

Using the more headers module you can also pluck things like OAuth access tokens and append them too.

> generating and adding a UUIDv4 to every request's headers

Why can't you do that in Lua?
You can consider it bloat even if you want it, which is why you want to carefully manage the size of what you add.
I wouldn't refer to JavaScript as 'overkill'; it's not more capable than Lua for scripting, it's just bigger.
We've written extensive partial view caching with memcached in Lua in nginx, none of us having known Lua when we started. It's fast as hell and the learning curve really isn't steep. Most of the tricky stuff involved the nginx API and brain-wracking recursive and nested calls - not the language.
They're welcome to spend time building their own JS VM, it's their project - and while I don't think this is intended to appeal to the existing userbase, I think it will attract an entire new userbase, which will further nginx as a webserver and aid the goals of those who use it (faster. faster. more speed. faster. fix the bugs.) as they get more use, more paid use, and thereby more developer time.
I have used ngx_lua in the past for some intelligent query rewriting and was very pleased with the experience. It is definitely great having the power of a real programming language at hand and not being constrained by yet another configuration file syntax. On the other hand, Lua just feels very small, and the temptation to fit more of the application logic into the web server is low (although apparently not for others, as Lapis shows ;)).
I guess nginScript is mainly an outreach thing. Apparently nginx developers have decided that all those UI developers are just more comfortable writing JS than anything else. What does raise some red flags is that it is a subset of the language running on a custom VM. So it is in fact a JS dialect which will still require some amount of learning to use effectively (no free JavaScript lunch here).
"The point is to make nginx's configuration dynamic and prevent bloating applications with stuff that belongs at the (lets call it) devops level."
No, let's not call it that, because I have no idea what you mean when you say that.
Throttling, request routing, security workarounds, integrating multiple applications under a single nginx reverse proxy, ...
If you do ops or interact with an ops team, I'm sure you can find many more examples.
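As a sketch of what that looks like in practice with ngx_lua (the upstream names and the opt-in header are invented for illustration):

    upstream production_backend { server 10.0.0.10:8080; }
    upstream canary_backend     { server 10.0.0.20:8080; }

    server {
        location / {
            set $upstream "production_backend";
            rewrite_by_lua_block {
                -- canary routing decided at the proxy layer,
                -- with zero changes to the applications behind it
                if ngx.var.http_x_canary == "1" then
                    ngx.var.upstream = "canary_backend"
                end
            }
            proxy_pass http://$upstream;
        }
    }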
Agreed, those are all operational concerns and dynamic nginx configuration is great for that. I was objecting to seeing the term "devops" being further abused.
Some nginx configuration files are very project specific, especially when you have projects using some of the modules they offer.
The purpose is to put the power of Lua scripting (via OpenResty, used by companies like CloudFlare) in the hands of more people, since Lua has a small community (even though it's great, easy to learn, and fast). Thus, adopting JS is the solution for them.
Disclaimer: we built one of the biggest Lua+Nginx projects [1]
There is already Lua support so this is just adding another language - thus if this is a bad idea, it is a continuation of an existing bad idea, rather than the start of a new bad idea. If that makes a difference...
Yeah I've been skeptical of nginx's Lua support in the past. I guess I discounted it because it was a relatively unknown feature and Lua is obviously not nearly as popular as JavaScript. I don't know of any production service written exclusively in Lua, though I'm sure there are some.
So yes while this is just a continuation of a bad idea, it's a rather substantial continuation. I fear that a lot more people will make use of this than make use of nginx's Lua support.
OpenResty is built on Nginx's Lua support and is fairly popular. You're probably using websites that use it without knowing it.
There's also Lapis, a really great and reasonably feature-full web app framework for OpenResty: http://leafo.net/lapis/. I wrote a website in it recently and was pleasantly surprised at how cool it is. It reminds me a lot of Rails, but everything works faster, especially on LuaJIT, and I actually like MoonScript (the compile-to-Lua language I'm using) for its expressiveness and aesthetics. Definitely recommend checking it out.
We use the lua support in openresty to build our authentication layer for admin tools outside of those admin tools. Very easy to integrate it with a third party authentication platform and just provide auth for free to our app developers.
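Roughly this shape, assuming OpenResty's access phase (cookie name, redirect target, and upstream are placeholders; the real version would validate the token against the third-party platform):

    location /admin/ {
        access_by_lua_block {
            -- runs before the request ever reaches the admin tool
            local token = ngx.var.cookie_session
            if token == nil then
                return ngx.redirect("https://auth.example.com/login")
            end
            -- a real check would verify the token here, e.g. via a
            -- subrequest or a lua-resty-http call to the auth service
        }
        proxy_pass http://admin_backend;
    }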
I have read that cloudflare uses nginx lua a ton. Apparently so does taobao (via tengine?).
AFAIK, CloudFlare is entirely just Nginx+Lua scripts.
That is not fully true.
It became a lot more true during 2013, but CloudFlare had already been operating for a few years then. A lot of fairly simple code was retired to the profit of Lua code, which is way easier to maintain.
There are still important modules written in C and compiled with Nginx. There are also proprietary extensions of the ngx_lua API to manipulate some internal aspects of Nginx within the scripts.
source: I wrote a good chunk of both iterations.
> Am I incorrect in assuming that you could implement your entire server-side js app now as an nginScript module? Do people think that is a good thing?
This is kind-of already possible with OpenResty: https://openresty.org/
You can also use Lua, a programming language more than capable of producing efficient web application backends, directly in Nginx config files: https://www.nginx.com/resources/wiki/modules/lua/
So...in a way, this has really been possible for a long time. I'm not disagreeing with you, because personally I'd much rather build my web app in a language and toolchain that's easier to work with, but I find it interesting to read about.
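To illustrate, even a complete (if trivial) endpoint can live entirely in the config with ngx_lua; this is a sketch, not a recommendation:

    location /hello {
        content_by_lua_block {
            -- a tiny "application" with no separate app server at all
            ngx.header["Content-Type"] = "text/plain"
            ngx.say("Hello from inside nginx, ", ngx.var.remote_addr)
        }
    }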
I wonder why they didn't use Duktape[1]. It seems like the obvious choice for this kind of effort (e.g., if the 2 main Lua implementations have any counterpart in JS, it's probably Duktape).
It's possible that Duktape was still too early in its development to use when Nginx started this project, but even then it seems like they could have collaborated and saved a lot of man hours.
I wonder if there's a good reason why Nginx didn't use Duktape, or could this be a case of NIH where some Nginx dev got excited about the opportunity to build a new super-fast JavaScript implementation just for Nginx? Surely it would have been less work to integrate the two event loops and use Duktape's (well-documented) API to build whatever features they wanted?
That said, although I've built significant async programs in other languages, I've never done anything in C on this scale, so take my words with a grain of salt.
I've used duktape as an extension language and wholeheartedly recommend it. It does the job and caused me zero problems. I can't ask for more than that in software.
It's sad to see lua get slowly replaced by javascript.
I think it is sad to see anything replaced by Javascript. Unpopular opinion, I guess.
So many better choices out there.
> So many better choices out there.
For now. I get the feeling that soon enough a lot of them will start dying out. Trying to "sell" some language other than JS in a mostly-JS shop is already an impossible battle. Popularity is used as a counter-argument for everything.
I was just thinking that the Javascript community saw that the Ruby community were both snobs and well respected, assumed a causal link, and thought they just needed to be snobs for people to respect Javascript.
I for one like seeing Javascript replaced by better Javascript.
Better javascript would be incompatible with javascript - best you can get will be "less sucking javascript". And you'll be able to use it somewhere in the far future, after four competing companies have implemented enough of it in a way that mostly works, their browsers have reached most of the internet and the brave open source community has developed the polyfills needed to fix the incompatibilities.
I'm not holding my breath. Javascript sucks, it will be everywhere and we are stuck with it.
I'm totally with you. JS is such a bad language; it might get better in the future with ES<X>, but I guess not. It has many quick hacks (in the bad sense), admitted by its author.
An interesting pattern I see lately is industry pundits on the CIO side advocating Javascript for in-house tooling. I guess they don't have much experience with programming languages.
With JS making inroads into CIO territory, we'll see a much higher usage in the future. And a much larger fallout with tons of unmaintainable legacy code.
It's like the times when everything needed Java, like Oracle added Java into their DBs for stored procedures. I guess Oracle will add JS too (or have they already?).
By better choices do you mean better languages, or other methods of achieving the same flexibility without the complexity of a language like JS? Can you reply with examples? I am interested in alternatives to JS and Lua.
Well, it's a popularity contest. Lua is certainly a great language, it's fast and it works well with C. I guess that nginx team thinks that js will bring more users therefore more business. If I were to develop a product and add a scriptable layer, i'd certainly use JavaScript. Even if it is not the best language out there, it's about growing an ecosystem.
Ecosystem isn't the only thing to think about, but I agree that is almost certainly why they chose js.
I'm willing to bet it won't bring them any new business. And in the long run, it will encourage their existing users to dump nginx and use Node.
If you think people will dump nginx for Node (which has totally different use cases) then I'm not sure how qualified the business estimation is...
Isn't Node more of an Application server though? I don't see Node replacing Nginx in the roles it's normally used for (serving static resources, reverse-proxy). I can't think of a single upside
Agreed... I don't want the effort and complexity it would take to offer a lot of what's pretty easy with nginx inside a node app. As much as I really like node, I think it's generally nginx in front of node to offset what it does better... I think people make applications complicated enough as it is...
Lua and JavaScript are very similar languages and share the basic problems. I'm indifferent.
I like and use both. I think Lua is a better language and I have a hard time seeing otherwise. Lua has some warts too, it's not about warts, but Lua got enough right from the beginning, is small enough to be crazy fast, and is not continually adding new large features. This is the key thing to me. JavaScript is designed by committee and has too many features in ES6.
That's a good point, Lua is smaller than JS. It does have that going for it.
Exactly.
I have been playing around with Lua in Openresty for the last few nights. I haven't had any Lua experience before but it seems to be a good language for Nginx, and it has been fun to learn.
It is. But JavaScript is the new assembly language, all you need is a Lua to JS compiler or a whatever to JS compiler.
Last I checked, Javascript implementations were slower than LuaJIT. The cost of implementing Lua tables, metatables, coroutines, etc. in Javascript will be rather high: anywhere between 2x and 50x. If the native Lua engine is removed, you can just forget about it then.
So what you're saying doesn't make any sense for anything remotely performance sensitive.
It's a joke. Like haha, but serious, because some people actually believe this.
Lua compiled to asm.js is 64% of the speed of native Lua.
... which is about 10 times slower than the LuaJIT compiler used by OpenResty.
So that might be good enough in the browser but on the server where an optimized Lua engine exists, why?
That's not compiling Lua to asm.js; it's running the Lua VM in asm.js.
A straight-up transpilation is what other projects like lua.js do. But you lose really nice core Lua features like coroutines...
> Cost of implementing Lua tables, meta-tables
Tables and meta tables are nearly identical to objects and prototypes in javascript. Why would the cost of implementing them be high?
JavaScript objects are indexed by strings. Lua tables are indexed by arbitrary objects. In JS a[1] and a["1"] are the same element but they are different in Lua. In Lua a[{}] is a new element indexed by the identity of the new object literal, and will be a different element every time the expression is evaluated.
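A quick Lua illustration of the difference:

    local a = {}
    a[1] = "number key"
    a["1"] = "string key"
    print(a[1], a["1"])   --> number key   string key  (two distinct slots)

    a[{}] = "first"       -- keyed by the identity of a fresh table
    a[{}] = "second"      -- a different table, so a different slot

In JavaScript, a[1] and a["1"] would both read and write the same property.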
I genuinely can't tell whether you're joking.
Not joking, just dying a little inside.
I have tried so hard to like Lua, but I prefer 0-indexed arrays and curly braces.
The comment syntax is really what gets me.
Oh right, when you try to disable code with a block comment and an array says "not today".
Although the same happens in JS with Regular Expressions :P
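For anyone who hasn't hit it: Lua's long comments open with --[[ and close at the first ]], which array indexing can produce by accident. Bracket level markers are the usual workaround:

    -- this fails to parse: the "]]" inside the indexing expression
    -- terminates the long comment early
    --[[
    local v = t[idx[1]]
    ]]

    -- level markers fix it: only "]==]" can close this comment
    --[==[
    local v = t[idx[1]]
    ]==]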
This seems like massive overkill. Instead of adding a limited domain-specific language which is tuned to nginx's requirements, they've added the behemoth that is JavaScript, along with all its flaws. A turing-complete behemoth, at that.
The nginx config language already is a half-assed DSL with all sorts of unintuitive limitations (chiefly caused by its insane mix of declarative and procedural elements, and arbitrary precedence of declarative directives). The last thing nginx needs is a new DSL (that everyone has to learn from scratch) with a new set of unintuitive limitations and a new set of differences from common general purpose languages.
There are plenty of good minimal languages, from embedded Scheme varieties to LuaJIT to Io. If that's what you mean by a DSL (embedded minimalist Scheme with some nginx-specific functions and variables, for instance) then I agree, but they chose the JavaScript path for a good reason: it has a healthy community and ecosystem, and while the performance won't be top notch, I don't think it'll matter.
It's a subset of javascript, with a subset of flaws. The wiki entry specifically mentions that eval and closures aren't supported.
closures as in "first class function values"? Or closures as in scope that follows a function from where it was defined?
So, kind of like JScript in IE3?
I'm really happy with OpenResty and curious to put them side by side. Betting on Lua here.
Two things:
The big HUGE win for Lua in this, in my view, is the plethora of packages on LuaRocks. With simple scripts, as presented in this blog post, sure, a JavaScript subset is fine. But suppose you want to interact with redis from your script? Where's the C FFI interface, and a prepared package you can use from nginScript? Grab hiredis bindings from LuaRocks, and you're set.
Second: great, yet another JavaScript implementation. They're very open about supporting a subset out of the gate; who knows how long it would take to reach parity with even ES5 or ES6?
I agree with what you said -- although the node/npm ecosystem is much bigger. I'm not sure how it works with the nginScript subset.
But network-related libraries shouldn't be pulled from LuaRocks (unless maybe they have resty in the name). One would want to use lua-resty-redis and not hiredis within OpenResty. The 'resty' libs use the nginx cosocket API so they work asynchronously with the nginx core. hiredis would block the worker threads.
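For instance, a sketch with lua-resty-redis (host, key, and timeout values are illustrative):

    location /cached {
        content_by_lua_block {
            local redis = require "resty.redis"
            local red = redis:new()
            red:set_timeout(1000)  -- 1s for connect/send/read

            local ok, err = red:connect("127.0.0.1", 6379)
            if not ok then
                ngx.log(ngx.ERR, "redis connect failed: ", err)
                return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
            end

            local val, err = red:get("some_key")
            red:set_keepalive(10000, 100)  -- hand the socket back to the pool

            if val == ngx.null then val = "(miss)" end
            ngx.say(val)
        }
    }

The cosocket yields to the nginx event loop while waiting on redis, so the worker keeps serving other requests.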
Ah thanks, I was not aware of that. Also, unless they code nginScript to specifically support Google's V8 API, there's no reason to expect Node or any of its packages to work with it.
Not asynchronously, but in a non-blocking fashion. Good thing is that Lua socket and ngx.tcp are quite compatible. But C libs with their own IO are - well they work, but they will block the nginx workers.
I think this depends on just how compatible NginScript is with JavaScript. Will it be possible to pull in the ffi node module, or one of the many node redis modules?
This is a welcome improvement to me. I've spent many an hour trying to figure out how to put some logic in nginx, but it was never very intuitive to me. I think a reverse proxy + SSL termination + web cache should be considered part of a normal web stack, and developers should be proficient with the full stack when they develop and implement sites. Rather than trying to do everything on the app server, once I started utilizing nginx my app design changed somewhat, and of course the response times dropped and the app server load was reduced. I'm hoping nginScript just makes it all easier to do.
A web server with a JavaScript engine. What could ever go wrong?
My eyes start to bleed when I imagine what some cowboys will implement on top of that.
Netscape Enterprise Server included server-side JavaScript in the 1990s.
Do you know nodejs?
The post ends with "We look forward to your feedback as you try out nginScript [...]" but it doesn't mention where we can test it out. Does anybody know more about that?
It's here: http://hg.nginx.org/njs/
> We run a separate virtual machine for each request, so there’s no need for garbage collection
I'm curious to see the impact of this strategy on performance.
I wondered about this sentence. If you applied scripting functionality to serve long-running WebSocket or HTTP/2 connections, I'm sure garbage collection would be necessary. However, if the scripts are really per request and not per connection then it could work, but the capabilities would be much more limited than what you can do in other scripted-webserver environments (e.g. Node).
A VM context could be cheap to create and include an amount of memory above what that script uses for its lifetime, meaning a single allocation and then the whole thing gets freed at the end of execution. Analogous to how CGIs execute.
Slightly off-topic, but I wouldn't worry. Nowadays hypervisors can boot up complete OSes in ~800ms.
https://insights.ubuntu.com/2015/05/18/lxd-crushes-kvm-in-de...
I think a countdown for a new 'in vogue' webserver just started.
This is something I would've expected from a "Show HN: Embedded JS in Nginx" post, meaning it has potential to be a project done just to see what can be done, that everyone could say "Hey, that's cool" and then never use, because it's a terrible idea.
Instead, it's presented as a reasonable way of moving forward, when they could've pushed for their own much more reasonable alternative. They've done more work for a worse idea, effectively out-competing their own feature with a much more popular but crappy alternative.
Adding Javascript support to a high-performance, reliable web server does not seem rational from an engineering point of view.
I'd rather they added support for a safe subset of some statically-typed, high-performance language compiled via LLVM. This language must be without asynchronous GC, so its memory use will be predictable under high load.
Javascript on the server side is so vogue-ish. The vogue will change soon, but the ugliness of the architectural decision will stay with Nginx forever.
I think it's a good addition. We have been using varnish on high traffic servers and one of the reasons was that it had "vcl", a javascript-like language to define request handling logic. (With a lot smaller feature set and more domain specific. But powerful enough). Having javascript on nginx would provide a lot of config options for people familiar with it. I believe nginx already has Lua support so not a big deal anyway.
I agree. It will be much easier to write custom request-handling logic in Javascript.
Yeah, I am a known Apache fanboy, and so I'm sure that what I say will be discounted. But, imo, this just shows what happens when an open source project becomes the sales initiative of an Open Core model company. Instead of the design and future being under the control and guidance of the community, it is instead in the hands of the VCs and whatever "promises" future revenue growth. Believe me, I know; I used to work at Covalent which was billed as the "Red Hat of the Apache web server" so I know how hard it is to resist the push of VCs and those nasty quarterly numbers. And Covalent wasn't the only Apache shop around, unlike nginx.com today.
The combination of FUD and marketing $$$ is being used to "encourage" more people to migrate (or use) nginx, as an "open source" alternative, when it's obvious that "open source" is being used mostly for the PR aspect and not so much for the community-focus and community-led aspects which is really core to "true" Open Source.
nginx is driving very fast down the road to firefox.
Marginal bloat is always rational. Forest for the trees and all that.
Or Apache.
Should Greenspun's Tenth Rule be updated with a note that "they might try adding javascript before they add common lisp"?
I can understand the appeal, marketing Javascript to CIO/CTO is much easier than marketing Lua today.
(Which obviously is sad, as in my humble opinion Lua is the better embedded language for various reasons. I hoped it could become a widely adopted standard; we had great success with Lua in Redis in the past.)
as long as I can compile --without-nginscript, go nuts
For anyone interested, it looks like it's available as a module on http://hg.nginx.org/njs/
I've recently been leaning towards not using web servers like nginx or apache at all. There are non-blocking server platforms like node.js or vert.x that let you code your web server instead of configuring it. I find that you frequently hit a wall of opaque redirect rules that become unmaintainable for any complex project. Having your server written in testable code makes it more predictable. It looks like nginx is trying to meet in the middle.
I'm not sure if this is a great idea. Use of reverse proxies has lots of benefits. You can manage load balancing, SSL termination, serving static content (much faster), caching, compression, centralized logging, and using different applications on the same URL space (foo.com/app1, foo.com/app2). They also have the added benefit of another layer of security (they look for various HTTP exploits and prevent them from getting to the web server). I'm not saying you can't do this in node, but nginx/apache are really good at what they do.
EDIT:
I've read through your comments and you seem like a pretty experienced developer. I don't think any of the information in my comment should be news to you so I'm curious to hear more about why you feel this way.
I use nginx at Neocities to serve all our static sites, and as a proxy for our front site.
I like nginx, but it gets way too much of a sacred-cow treatment by the dev community. It has plenty of problems: the configuration is a pseudo-language that doesn't always make the right choices and is difficult to heavily customize, and I've seen it be -very- unstable under certain circumstances, including really bread-and-butter things like SSL caching. If there's a bug, you'll have a good old time debugging its massive collection of C code. It's great, but it's not perfect.
Making nginx do custom things that you'll probably need to do in a serious environment (example: dynamically programmable SSL SNI) requires crazy mods and hacks that have only recently been made available (by third parties) and heavily reduce nginx's performance. Further, they only provide purgeable proxy caching via their commercial version, which costs an exorbitant amount of money. The free purger, naturally, makes nginx lock up. I wouldn't mind chipping in a bit for nginx because I want to support their team anyway, but at their current prices ($100/node/month or something like that) we simply can't afford it.
I realize this is not a popular opinion right now, but node.js is completely up to the task of running a reverse HTTP proxy. It is basically competitive with nginx for performance (you likely won't notice the difference unless you're running the New York Times), and as a tradeoff for an unnoticeable slowdown you get a full, Turing-complete programming language to completely control the flow of your data. Nginx under the hood is just a reactor pattern with children that share a socket. Node.js has a cluster module that uses the exact same strategy. Mind you, this is from someone that has done talks critical of reactor-pattern scaling.
Also, if you have blocking I/O apps, it doesn't matter what you configure nginx to do, it's still going to lock up when someone DDoSes it with slow loris connections. Make your ruby app thread safe and use Rainbows! instead of Unicorn, or you're going to have a bad time.
> competitive with nginx for performance, and as a tradeoff for an unnoticeable slowdown you get a full, Turing-complete programming language to completely control the flow of your data
It's almost like erlang and mochiweb never existed but people sure are willing to re-create it all in javascript.
JavaScript: Spending the past 20 years catching up with 1990s-level technology.
> Making nginx do custom things that you'll probably need to do in a serious environment (example: dynamically programmable SSL SNI) requires crazy mods and hacks
> you get a full, turing complete programming language to completely control the flow of your data
Did you try nginx's Lua support? Because it doesn't seem to be that experimental and has its fair share of documentation already, on top of being much more performant than Javascript:
One of the things I don't understand about nginx is why an HTTP daemon still contains a mail proxy today!
So when are you going to release "node-ginx"? :)
You're on to me. ;)
There is node-http-proxy available (https://github.com/nodejitsu/node-http-proxy), which also has some plugins available to do some of the advanced features nginx supports.
I'll likely be writing a custom proxy server tailored to our needs such that it probably won't be useful as a general purpose proxy server, but if you're looking for something, that's a start. Making it more general purpose unfortunately would require more work, and I'm pretty time stretched right now.
I'm not saying it's better than nginx, of course. I'm just saying that if you need to do some crazy programming that can't be done with nginx, you're free to use something else. Don't be fearful of treading your own path, just make sure you know well how HTTP works before doing it.
Here's a stupid example I whipped up quickly for a reverse proxy for our IPFS nodes that demonstrates how quickly you can put together a custom reverse proxy to do something weird: https://github.com/neocities/hshca-proxy/blob/master/app.js. That flaming piece of junk hasn't crashed once since I deployed it.
For that matter, GoDaddy's website builder now "publishes" to a Cassandra cluster that is served via a cluster of node servers with local redis as an in-memory cache... it works really well. The distribution model is working much better than the previous model of publishing via FTP to a dedicated backend Linux host (apache). I haven't been there for about a year now, but I'm pretty sure a lot of those aspects have proven out.
I don't know about the OP's reasons, but one of mine lately has simply been dependency management and simplicity of deployment. It's just handy to be able to package it all together, especially for packaged software where deploying another component would be more configuration. I suppose this is why docker containers (or in the past, virtual appliances/VMs) have become so much more popular.
The good embeddable web servers are usually pretty lightweight, scalable, and can be programmatically configured. Things like Jetty are popular, but look at languages like Go that have HTTP serving built in via libraries and scale nicely via goroutines.
Vert.x etc. are cool for performance reasons, being lightweight and usually much less thread hungry (using async operations, sometimes in many less threads).
That said, I do agree that reverse proxies are still really useful for all the reasons you mentioned. Reverse proxies on top of some of these high performing embedded HTTP serving engines is a good practice, when you need it.
And there is no need to throw out the tried and true engines, like Apache, Nginx, etc.
Just depends on the use case and needs I suppose.
To play devil's advocate (not that OP is a devil): you want a CDN serving up static assets anyway, and maybe taking care of SSL termination and security depending on your sensitivity needs; haproxy is great for load balancing and centralized logging; vulcand can handle reverse-proxying; and at that point, all you're left with is compression, which a reasonable web server should be able to handle. Now you've got a suite of specialized tools that will do their jobs well, and you probably have most of them in your stack anyway.
Granted, it's more complexity, but nginx certainly isn't the must-have that it used to be.
HAProxy can do the SSL termination, reverse proxying, and compression jobs quite well by itself. Though vulcand's etcd-based runtime configuration looks friendlier than HAProxy's.
Firstly, I'm flattered that I sound like not a n00b :)
I should say that I haven't ever really tried this at production scale. My background is mostly consulting wherein my responsibility is to deliver a provably working solution for someone else to manage and operate. So there's my bias in this.
Other commenters have made a lot of the points I would. You can easily handle TLS in Java or JavaScript. Or you can terminate with an ELB as I usually do. A lot of load can be pushed to a CDN.
But really, I'm not convinced it would be that much slower. I know this is dated, but a simple ApacheBench test shows Tomcat outperforming httpd for static assets [1]. I've never had a site that was remotely bottlenecked by static assets, but I've had many bugs due to obtuse mod_rewrite configs. It's cheaper to have fewer bugs than to spin up another server.
[1] http://www.devshed.com/c/a/BrainDump/Tomcat-Benchmark-Proced...
> load balancing
You'd rather use a dedicated load balancer like Route53 or haproxy. Don't think choosing Apache or Nginx is the right option for those really. Plus something like vert.x is very usable as a load balancer already.
>SSL Termination
Just about everything handles this already. Current best practice is to use SSL for all communication between your own servers anyway, so there's no gain. If you SSL terminate on your load balancer, these days you want to use a new SSL connection between your load balancer and application server anyway if possible.
> serving static content, caching, compression
New app servers like vert.x support Linux sendfile and handle very well for serving static content etc. Currently, nearly everyone uses Cloudflare to handle all of this anyway. No real reason to duplicate it if Cloudflare is set to handle it.
> centralized logging
Centralized logging is usually done by sending all of your logs off from each service/server to be aggregated on a dedicated box running Logstash or whatever. You don't use your reverse proxy for this?
> using different applications on the same URL space
From using the web, I don't think this is done anymore. In fact it seems to be the opposite - foo1.app.com, foo2.app.com seems to be the trend. Basically the opposite of multiple apps on 1 domain because of the big move towards microservices. Extra domains are the cheapest thing there is.
> added benefit of another layer of security
Security doesn't work that way in my experience. It's more about minimizing attack surface. If you use node and nginx and apache then any exploit that hits any of those 3 will hit you. If you only use node, then you can only get hit by exploits on node. So I'd argue it's the opposite. The more layers, the less secure.
> nginx/apache are really good at what they do
Sure, but you need to find the most efficient tool to handle your needs with the least amount of complexity. Only add something if it solves an issue that you can't solve in a simpler way just as well.
> Current best practice is to use SSL for all communication between your own servers anyway, so there's no gain.
I've never heard this. Who does this? Everyone I know of either just naively relies on the privacy of NAT-style non-routability in something like an AWS VPC, or, if they're more paranoid (or their provider has no private-networking feature, or they need HIPAA compliance, or whatever), uses IPSec—which is, happily, exactly the proper use-case for IPSec.
(Unless what you actually mean is requests between separately-maintained microservices that are supposed to treat one-another as if they were produced by third parties, like AWS strives to do. But the "your machines" makes me doubt that; you wouldn't think of those other machines as "yours" in that case.)
> From using the web, I don't think this is done anymore. In fact it seems to be the opposite - foo1.app.com, foo2.app.com seems to be the trend.
Green-field applications, no matter the size, are usually deployed to separate subdomains, yes. For long-term maintenance, though, nothing beats being able to just mount a new backend (probably written in a different language, even) on top of your legacy app's /admin/ or whatever else. It's effectively about patching a resource space with new backends to handle parts of it, without having to touch the legacy code to get it proxying to the new server. Businesses that embrace the "cool URLs don't change" philosophy—for example, newspapers who want their heavily-linked-to story pages accessible forever—take this approach all the time. Their web servers are rats' nests of routing rules to different backends, to make everything seem, from outward appearance, to be the same as it always was, even when everything is now in the CMS-of-the-week.
The other place this happens is API servers—you might want /1.1/ and /2.0/, or even /feeds and /emotes, going to different clusters. (If you're doing that in the path instead of using content negotiation.) That kind of business-policy-level routing is not the rightful domain of a load balancer, even if haproxy et al can be configured to do it; you want your load balancers to be dumb stateless infrastructure components, and your web servers to be maintained and configured and updated as part of the service you're deploying.
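A sketch of the kind of routing being described (upstream names invented):

    # versioned API paths served by separate clusters
    location /1.1/   { proxy_pass http://api_v1_cluster; }
    location /2.0/   { proxy_pass http://api_v2_cluster; }

    # a legacy app's /admin/ quietly replaced by a newer backend,
    # while every old URL keeps working
    location /admin/ { proxy_pass http://new_admin_backend; }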
> If you use node and nginx and apache then any exploit that hits any of those 3 will hit you.
There are a bunch of clever things that genuine, battle-tested "web servers" do that "application web servers" don't. Preventing Slowloris attacks, for example—it's something every HTTP server would do in an ideal world, but since it complicates the code and prevents streaming parsing, you really only want to handle it once (by buffering requests) at the input end. There are umpteen other such attacks that web servers just abstract away. Even with, say, Erlang's "battle-tested" reputation, I wouldn't trust it sitting on the open web without nginx or something else in front.
Usually, though, your load balancer is also a "web server"... if SSL has been terminated there so that it can actually parse the requests and responses. This tends to be why some people actually chain haproxy -> nginx -> their app server: they put haproxy in dumb TCP load-balancing mode, while nginx terminates SSL and thus gets to be the "web server."
Regarding encryption inside your network, it's a new trend since the NSA business has been going on. Google famously decrypted at the edge, and the NSA demanded (and received) a backdoor into their network to be able to see the unencrypted internal traffic. As a result, Google now encrypts at every level.
http://techcrunch.com/2014/03/20/gmail-traffic-between-googl...
> I've never heard this. Who does this?
For various compliance reasons, customer data must never be transmitted in plain text even internally. I've seen point-to-point serial links even require encryption when they come under compliance scope.
He's just saying that he prefers to configure his reverse proxy in Javascript, or some other mainstream language than in the Nginx configuration language.
I can see where he's coming from. But I still slightly disagree with the feeling.
Even gods make mistakes
For handling load we find nginx amazing, and we run it in front of an ExpressJS server on Node.js. The combination is pretty phenomenal for high-traffic production sites that ExpressJS cannot handle alone.
There's no either/or here. Nginx in front of Node works great.
I think that most people missed the point here.
The basic idea is a DSL that "happens" to have the same syntax as JavaScript. I guess that this VM has a lot of optimizations related to its purpose within nginx (e.g. no GC overhead, since each JS context is supposed to be short-lived and tied to a unique request).
So it's mod_perl, but with JavaScript and nginx instead of Perl and Apache. It seems like they've ignored history here, as mod_perl turned out to be a bad thing in the end: people messing with bits they really shouldn't, leading to massive amounts of unmaintained legacy spaghetti code messing with all parts of Apache.
Bear in mind mod_perl's original use was not for writing applications; it was for messing with Apache, for doing the things you can't easily do in the config. But where there's a way, abuse will follow, and before you know it whole apps will be written with this.
Ah well, those who don't learn from history are doomed to repeat it.
So, while it may seem like a good idea now, you can bet in 5-10 years it will no longer look so clever.
I wonder if they'll have an "'nginscript' is evil" section in their wiki, like their "'if' is evil" one...
I wonder if this has anything to do with the call for a new maintainer for LuaJIT recently
No.
As long as they add a --disable-js config flag, I guess I'm okay with it.
If you need that flexibility, I think Go is a much better solution.
Why not write a JavaScript to Lua transpiler instead?!
Why didn't they use Go?
Dislike.
I think Nginx is open-source crippleware. All the goodies are in the closed NGINX Plus: HTTP/2, load balancing with monitoring, application health checks. Do you get the source code of the closed features?
Nginx already has Lua scripting support.
The HTTP/2 module of NGINX is (and has been) fully open source. https://www.nginx.com/blog/nginx-1-9-5/
Disclaimer: I work at NGINX.
tengine fork has application health checks, although I'd still probably recommend haproxy
HN and its predictable level of antipathy is a constant disappointment. Hating is the norm; do better. Put more thought into your opinions.
Personal feelings about Javascript aside (not my favorite either!), I think this is a great business move (adoption, excitement, blogosphere marketing, even if some users shoot themselves in the foot), and I think it opens up exciting possibilities, including creating Nginx+ features for free.
Anything you can do in nginScript, you can do in Lua. Except that Lua was lightweight to begin with. This isn't JS (as we know it), and it doesn't get access to the Node/npm/bower/whatever ecosystems.
The antipathy is a symptom of our JavaScript disease. We have grown tired of this affliction. We understand now what makes it less great than we once thought. The churn of rapidly growing and devolving JS frameworks, the slog of awful design-by-committee processes putting the language together... it is nothing compared to Lua's simplicity.
And some clever fellows understood this long ago, on a site much like this one. Feel free to take a look. https://news.ycombinator.com/item?id=7890685