Nginx is now the most popular web server, overtaking Apache
w3techs.com
Netcraft confirms it
https://news.netcraft.com/archives/category/web-server-surve...
That's a reference to Slashdot from about 20 years ago.
Netcraft confirms it. :-)
Considering nginx is really just good at serving static content(maybe CDN uses it a lot) and the rest are all proxying, and Apache can handle static and dynamic contents at the same time, this tells me many web server nowadays are no longer simple http+cgi, but with app servers(e.g. django, rails,etc) running behind a (nginx) proxy so nginx's number takes over apaches.
You can serve dynamic content from nginx too. You don't need Apache in your stack at all.
You can proxy requests from nginx to backend application servers, but you cannot host those applications inside nginx itself.
Whereas with Apache2, PHP/Python/etc. interpreters can run directly in the Apache worker processes. Similarly, you can also run a Node.js/Ruby/Python web application as part of Apache with the Passenger module.
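For illustration, a minimal Apache vhost running PHP in-process via mod_php might look something like this (module path, document root and handler setup are assumptions; they vary by distro and PHP version):

    LoadModule php_module modules/libphp.so
    <VirtualHost *:80>
        DocumentRoot /var/www/html
        DirectoryIndex index.php
        # .php files are executed by the interpreter embedded
        # in the Apache worker process itself
        <FilesMatch "\.php$">
            SetHandler application/x-httpd-php
        </FilesMatch>
    </VirtualHost>

There is no second daemon to run: the PHP runtime lives and dies with the Apache workers.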
Is it really an interesting distinction whether PHP is handled via dlopen of mod_php.so or via UNIX sockets to php-fpm?
Yes - it matters a lot who is in control of the code execution. The traditional way was to run the code from or as a child process of the webserver process. FPM, WSGI, etc. move that into a completely separate and unrelated process. Crashes, hangs and security issues are a much smaller problem in the latter case.
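A sketch of that out-of-process model with nginx and PHP-FPM (the socket path and document root are assumptions; distros place them differently):

    server {
        listen 80;
        root /var/www/html;
        location ~ \.php$ {
            # hand the request to a separate php-fpm process over a
            # UNIX socket; a crash or hang in PHP stays contained in
            # the FPM pool and can't take down the nginx worker
            fastcgi_pass unix:/run/php/php-fpm.sock;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }
    }

The FPM pool can also run as a different user than nginx, which is where the security benefit comes from.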
So because FPM isn't running in-process, "nginx is really just good at serving static content"...? That makes literally no sense to me.
> Considering nginx is really just good at serving static content(maybe CDN uses it a lot) and the rest are all proxying
The English is a bit broken in the top-level comment, but both static content and proxying are clearly mentioned as "nginx things", which is fair. If you don't consider proxying to be included under the term "serving" (a reasonable and not uncommon view), the claim is true: the only kind of "serving" nginx is good at is static files, and the rest of what it's good at isn't "serving" but proxying to other servers that render the dynamic content. PHP-FPM is one such server (it just happens not to speak standard HTTP).
I don't see why people always get mad when this gets brought up - I've always considered it a good architectural choice for nginx. Running application code in the webserver process isn't a good idea anymore, so the focus on good static and proxy performance is, I think, what ultimately made it "win" over Apache - the industry moved on from the old ways and Apache fell behind.
I think that Apache2 + mod_php (or modules for other runtimes) doesn't measurably impact most web applications, and the setup is a bit simpler, since you only need to run and configure Apache instead of Nginx + PHP-FPM. If you are working with containers, it means you only need one container instead of two (if you go with the idea that a container should only do one thing).
Additionally, if you are behind a CDN or just plain Varnish, static assets will only be hit once or twice, and the majority of requests will be processed by PHP anyway, so there is no benefit in putting an additional pipe between the proxy (Nginx) and the PHP interpreter (FPM). Especially with smaller page responses, Apache can easily win, since it doesn't have to communicate with an external process.
Raw performance is only one aspect. Splitting up the processes means that a security exploit in one component has basically no way of endangering the other. In a shared hosting environment, you can also run separate fpm processes for each tenant, allowing for custom configuration and even different versions altogether for each.
Just wait, we're going to come back around full circle once someone embeds a WebAssembly runtime into nginx/openresty, and then someone else makes a framework for defining "edge functions" to run inside that runtime.
This is basically what CloudFlare Workers does already.
I wouldn't read too much into that.
nginx is good at serving dynamic content too. Simple setup with uwsgi for Python and php-fpm for PHP.
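For instance, the Python side of such a setup is only a few lines of nginx config (the socket path is an assumption; it just has to match what uWSGI is configured to listen on):

    location / {
        # forward requests to a uWSGI app server (e.g. a Django
        # or Flask app) listening on a local UNIX socket
        include uwsgi_params;
        uwsgi_pass unix:/run/uwsgi/app.sock;
    }
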
I think both fpm and wsgi would fall under what OP called "proxying to application servers", despite using a different protocol. Yes it's technically "serving", but so is proxying and literally anything else a "server" does. Maybe "hosting" is a better term: both can serve any content, including dynamic, but apache can often host it as well (without an external server), which nginx can't do nearly as well.
No, I'm addressing this point where OP says nginx is only good at serving static content. See "really just good" here:
> Considering nginx is really just good at serving static content...
That's not correct at all. Regardless of how the request is handled, serving dynamic content from nginx is trivial.
What about the rest of that sentence? The way I read it (admittedly, the English is broken, so it isn't entirely clear what they meant) was that nginx is only good at "serving" static content, while the rest of what it does is proxying, not "serving". In that case, where a distinction is intentionally made between "serving" and "proxying", taking requests for dynamic content and just shooting them off to an application server like uwsgi wouldn't be considered "serving".
Taking this logic to the extreme, socat is also very good at serving dynamic content:
$ socat TCP-LISTEN:80,fork,reuseaddr TCP:127.0.0.1:8080 # Make sure Gunicorn is running on port 8080 to handle incoming requests
No surprise. Nginx performs much better on large sites with heavy traffic than other web servers. Since it requires more knowledge and experience to set up and configure, a lot of beginners are scared away. But time will prove that cream always rises to the top.
How about H2O? It's supposed to be significantly faster than Nginx: https://h2o.examp1e.net/
Amazing what a more user-friendly interface will do... Looking at you, httpd.conf.
Linux distros that needlessly move config files around with every upgrade do more damage than the config directives themselves.
My httpd.conf is always shorter than my nginx configs. It's another case of Linux distros messing up good software.
Shorter doesn't mean clearer - that's the age-old LoC fallacy.
In the case of configs, clarity is mostly familiarity - I know people who write sendmail.cf directly.
I'm really enjoying caddy lately. So simple and elegant to work with.
I'll run Apache till the end.
I’m pretty deep into Apache, but I think I’ll use this as an excuse to play around with nginx for a bit. Worst case scenario I’ll learn something new.
My advice as a fellow long-time Apache user who also went deep into nginx would be to look into Caddy.
For me, the main draw to nginx was powerful (web) proxying, static file performance and simpler configs - Caddy does all of that 10x. In a sense, nginx is right in between Apache and Caddy, so unless I need to proxy something obscure like RTMP or email, I rarely have a use for it these days.
Indeed. I never got comfortable with Apache after 15 years of administering Linux systems. Configuring Apache is like setting up Postfix or Sendmail: it does not spark joy. NGINX was initially interesting, but it still requires a ton of boilerplate nowadays and does everything and the kitchen sink. Lighttpd almost went the way of the dodo for a similar reason, I suspect.
HTTP is not new technology, nor rocket science; a web server serving a simple HTML page should be configurable from scratch, from an empty .conf file, in seconds.
Caddy seems to share the same philosophy.
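As a sketch, a complete Caddyfile for serving static files and proxying an app can be this short (the domain, paths and backend port are placeholders):

    example.com {
        root * /var/www/html
        file_server
        # everything under /api goes to the backend app
        reverse_proxy /api/* 127.0.0.1:8080
    }

And because a site address is given, Caddy provisions and renews the TLS certificate for it automatically - no extra config.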
CERN httpd for ever.