Big list of HTTP static server one-liners (gist.github.com)
The ruby one with the -run option is a bit nonintuitive in how it works.
$ ruby -run -ehttpd . -p8000
The -r option requires un.rb [1], which is a file full of convenience functions, such as httpd in this example. Classic Ruby.
Given that Go has popularized long args with a single hyphen (-name etc.), it is easy to mistake -run for an option by itself.
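For clarity, this is the same invocation with the flags separated from their arguments; -run is really just -r un:
$ ruby -r un -e httpd . -p 8000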
> go has popularized long args, with a single hyphen (-name etc)
Why did they do that?
It took us 20 years to standardize on - for short and -- for long, with occasional finger rage on tar or find, and now this?
What were they thinking?
My hunch is that Rob Pike, Ken Thompson, Russ Cox and the other Bell Labs alumni behind Go disliked GNU getopt-style flags, like they dislike a lot of other GNU and BSD decisions that "contaminate" the original philosophy behind Unix.
This overall attitude has led to many design decisions in Go, which define the language for better or worse. Statically-linked executables, for instance, contributed more than anything else to Go's popularity as a CLI tool language. Of course, the Bell Labs crowd was already highly skeptical of dynamic linking. See Rob Pike's opinion here: http://harmful.cat-v.org/software/dynamic-linking/
I say it's a hunch, but I think Russ Cox's following comments pretty much demonstrate this attitude against GNUisms and BSDisms: https://github.com/golang/go/issues/2096#issuecomment-660578... https://news.ycombinator.com/item?id=9207316
On the other hand, Russ clearly goes on to say they designed the flags not based on Plan 9 (which only had short single-letter flags), but on Google's gflags library: https://github.com/golang/go/issues/2096#issuecomment-660578...
In short, it's probably both:
- Innate distrust of all things BSD/GNU meant that the original Go team didn't feel they had to honor the established tradition. Just like with dynamic linking, they believed that if there is something GNU does wrong, they don't need to follow it just for the sake of keeping compatibility.
- Positive experience using gflags convinced the team that the gflags approach (single-dash long flags) is the best one.
I personally think they were wrong here. As Russ Cox himself clearly stated in response to all requests for fixing go flags: there are many libraries out there that implement GNU-style flags. What happened in practice is that most Go developers voted with their feet and chose a third-party flag module (even Google's own Kubernetes is using spf13/cobra). In the end, the Go flags package just ended up causing more confusion and didn't solve any of the problems (perceived or real) with GNU flags.
Damn, a constructive and sourced answer. I wish I could upvote that twice.
As far as Go proper goes, you can use -- or -; it doesn't matter in most cases. I think popular args libraries do that too, but there's a million of them, so some probably don't.
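For example, with a hypothetical tool built on Go's standard flag package, these parse identically:
$ mytool -name=foo
$ mytool --name=foo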
> What were they thinking?
They were starting back from the 70s.
Also, one more check for Go being alt.history.java: this style is also the standard for Java tooling.
And Apple, I think.
Also the standard for PowerShell.
The X Window System used -long -opts long before that.
The X Window System is a collection of the worst APIs in the world, so it's no surprise.
But the vast majority of modern command line tools agree on --long.
...aaaand it's more than 20 years old, so yeah, that kinda agrees with the point that it took us a long time to stop doing that because it's confusing.
I've always disliked not having a space between a -parameter and its argument. "-r un" is so much more readable and explicit than "-run".
Why do people not include that space? Certainly it's not to save the time of not hitting the spacebar, right?
Permalink: https://github.com/ruby/ruby/blob/v3_0_1/lib/un.rb#L323
(Tip: you can hit “y” on GitHub to change the URL to a revision-based permalink)
My favourite is thttpd [1] which is super tiny, battle-tested and actually meant for the job (and only this job). It's available as a package on most Linux distros.
Serving a static folder `/static` on port `3000` as user `static-user` and with cache headers set to 60 seconds would go like this:
thttpd -D -h 0.0.0.0 -p 3000 -d /static -u static-user -l - -M 60
Even if you've got Python lying around on every Ubuntu server, I still don't get why you wouldn't use something leaner to serve your static files, as long as it's easy to install/configure. Same goes for most of the runtimes in that list.
Because the point is that I want a webserver up now. If I was running something long-term I would just use Caddy anyway.
Yeah, I want an HTTP server up because I need an actual server to test my web app or because I want to send a large file to someone on my local network and don't want to guide them through setting up smb/rsync/whatever and just want to give them a URL.
I don't particularly care about what's least likely to fall over under sustained load or is suitable security wise to expose to the internet, because it's not going to have to deal with either of these things. Being "already installed" on the other hand is a big selling point.
> My favourite is thttpd [1] which is super tiny, battle-tested and actually meant for the job (and only this job). It's available as a package on most Linux distros.
Surprisingly not on Debian or Ubuntu, AFAICT:
* https://packages.debian.org/search?keywords=thttpd
Though it does seem to have this from the same author:
* https://packages.debian.org/search?keywords=mini-httpd
* http://www.acme.com/software/mini_httpd/
There are times I want to run Let's Encrypt on a box that's not a full-time web server (SMTP, IMAP, etc.), and it would be handy to spin up something ad hoc on tcp/80 to do the verification step and then stop it right after.
thttpd has one cool cat badge icon but have you seen its critical path http parsing code?
Use perfect hash tables for known http headers, because they're perfect.

    /* Read the MIME headers. */
    while ( ( buf = bufgets( hc ) ) != (char*) 0 )
        {
        if ( buf[0] == '\0' )
            break;
        if ( strncasecmp( buf, "Referer:", 8 ) == 0 )
            {
            cp = &buf[8];
            cp += strspn( cp, " \t" );
            hc->referrer = cp;
            }
        else if ( strncasecmp( buf, "Referrer:", 9 ) == 0 )
            {
            cp = &buf[9];
            cp += strspn( cp, " \t" );
            hc->referrer = cp;
            }
        else if ( strncasecmp( buf, "User-Agent:", 11 ) == 0 )
            {
            cp = &buf[11];
            cp += strspn( cp, " \t" );
            hc->useragent = cp;
            }
        else if ( strncasecmp( buf, "Host:", 5 ) == 0 )
            {
            cp = &buf[5];
            cp += strspn( cp, " \t" );
            hc->hdrhost = cp;
I often use netcat to test stuff locally.
while true; do cat index.html | nc -l 9999 -q 1; echo -e "\n---"; done
nice one
netcat is just handy in general!
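If whatever is fetching the file is picky about getting an actual HTTP response, a variant that prepends a minimal status line and headers (same GNU-netcat -q flag as the parent; other nc variants spell their flags differently):
$ while true; do { printf 'HTTP/1.0 200 OK\r\nContent-Type: text/html\r\n\r\n'; cat index.html; } | nc -l 9999 -q 1; done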
Note that a lot of these bind to 0.0.0.0 by default. This has caused me surprises in the past when I was failing to connect to a host that was available over IPv6. Sometimes it was even intermittent, as I would sometimes resolve to the IPv4 address and sometimes get the IPv6 one and fail.
For python3 this is easily remedied.
python -m http.server -b ::
On most OS configurations this will listen on both IPv4 and IPv6.
Or for when you only need local access:
python -m http.server -b localhost
For the use cases of most of that software list, they should actually listen on localhost (127.0.0.1 and ::1) only, because security. Listening on every IPv6 interface, which might be exposed directly on the IPv6 internet, is an even worse idea.
In general I agree. Localhost is a safer default; it doesn't cost much to type a bind address when you need it. However, 0.0.0.0 is a worse default than ::
(2013). There are many comments since then providing more alternatives, many of which didn’t exist back in 2013, but the article itself hasn’t been touched since 2013-07-07.
While the author hasn’t updated the main link, it’s clear he checked up on the article until at least 2017: one of the comments was originally a base64-encoded string, and the author edited and deleted it so that no one would run random base64 strings off the internet.
Some of these are multi-liners that can be converted to one-liners. For example, I believe "npx" can be used for all the node examples: "npx http-server -p 6007 ./root"
[edit] ahh i see this is called out in the gist comments
$ emrun --port 8000 . # Handles WASM; most others don't.[1]
[1] Emscripten's emsdk includes emrun, which actually serves WASM files with the correct MIME type out-of-the-box, unlike the python2 and python3 servers and most others.
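If you're stuck with Python's server, you can apparently fix the MIME type yourself. A sketch (note that http.server.test is the internal helper the module's own CLI uses, so no stability guarantees):
$ python3 -c "import http.server as hs; hs.SimpleHTTPRequestHandler.extensions_map['.wasm'] = 'application/wasm'; hs.test(HandlerClass=hs.SimpleHTTPRequestHandler)"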
Does emrun allow CORS, required for SharedArrayBuffers?
CORS is up to the server you're calling, not up to the server hosting the JS code. Any server that supports custom HTTP headers supports CORS. And no, SharedArrayBuffers don't require "CORS" (you probably mean they work in a CORS context, rather than "require" it). What they do require is a Secure Context (https, localhost or similar): https://developer.mozilla.org/en-US/docs/Web/Security/Secure...
I wonder how come CORS is such a misunderstood concept globally. It's a really simple concept in general, and understanding it will make a lot of your work as a frontend/full stack developer easier. Take a day and fully understand it, it'll pay off.
Isn't python's server blocking? If you make two requests simultaneously, one of them will have to wait.
That’s correct. Well, blocking is not much of an issue in and of itself, but it’s blocking and single-process and single-threaded, so it can only serve one request at a time.
It’s not usually an issue for small stuff, but it does mean you can’t really use this snippet to serve big files for a lan, which is a bit of a surprise the first time you hit that limitation.
Python 2 was blocking. Python 3 is non-blocking.
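More precisely, the module's CLI has used a threading server since 3.7, if I remember correctly; if you want to be explicit about it (or pin the behavior), this one-liner uses the same class directly:
$ python3 -c "from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler; ThreadingHTTPServer(('', 8000), SimpleHTTPRequestHandler).serve_forever()"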
If you have npm installed then simply doing
npx serve
is very easy
Of course this requires internet access. It is also arguably less secure, as it is downloading code from the internet and means that you are trusting the latest code from a handful of people: https://www.npmjs.com/package/serve
Worth noting, because it’s non-obvious but does address some of the concern around npx, that it resolves to any locally available version using standard Node module resolution, then falls back to downloading from NPM. And in the next version it will warn before downloading.
I mean as a JS developer I am already kind of desensitized to taking my life in my hands when I do pretty much anything.
> ...arguably less secure as it is downloading code from the internet...
Don't essentially all the options involve code downloaded from the internet? And you have to trust the source that it isn't malware or too buggy or insecure?
Are you making a case that the maintainers of this package aren't trustworthy? Or maybe the operators of npmjs.com?
I'm just not understanding the claim this is less secure than various other options.
It feels somewhat disingenuous to rephrase "is downloading" as "downloaded" as though they mean the same thing.
How does when you happen to download something affect how secure it is?
By default -- presumably the most common case by far -- "npx serve" will download the most recent stable build. But why should that be less secure than some previous version?
New vulnerabilities could have been introduced. But, of course, old ones could have been resolved.
If you generally trust the source to be working in good faith and have an adequate level of competence, I would expect a given package/tool tends to become more secure over time, so taking the latest is a generally good strategy (not perfect of course) compared to running a version that is out-of-date to an arbitrary degree.
Of course, if you don't generally trust the source to be working in good faith or have an adequate level of competence, then you should not use the package/tool no matter when it was built or when you downloaded it.
I'm not seeing the logic here.
it's also very insecure, because you have no guarantee that the version of `serve` that gets pulled down has been vetted and is verified exploit-free, unless you tell npx exactly which version should be used, at which point things start to become less easy and more "having to remember which versions are safe".
I'd love to audit every bit of code that runs on my machine but I also do not have a billion hours. Perhaps this is worse in JS land (it is) but this is a different problem to solve.
Which software do you run that is vetted and verified exploit-free?
What a lovely strawman counter-argument you're attempting.
Shall we instead focus on installing with SHA verification based on CVE? Let's do that, that sounds pretty sensible instead of just throwing around yet another thinly disguised "letting the perfect be the enemy of good".
You brought up this incredible standard, "vetted and verified exploit free." I would like to know how it is achieved.
Similarly here's a nice list of reverse shell one liners https://github.com/swisskyrepo/PayloadsAllTheThings/blob/mas...
How many of these support partial file loading (range requests)? I know that Python's doesn't (I have been working on support for it).
This would make SQLite example supported: https://news.ycombinator.com/item?id=27016630
The node one does, and there is a fork of the python one that does: https://github.com/danvk/RangeHTTPServer/
I know this because I have also been working on something using that magic sqlite-in-browser project :D
Thank you! I like the monkey patch to enable binary range support: https://github.com/danvk/RangeHTTPServer/blob/master/RangeHT...
Only one mention of socat? Nothing does ad-hoc nonsense that you would probably never use again quite like socat.
> socat TCP4-LISTEN:50000,fork EXEC:/usr/bin/ifconfig
> curl --http0.9 localhost:50000
Hmmm, does http v0.9 count?
Probably because the example you made doesn't look like a static file server; it seems to execute a specific unix command on incoming requests only. The rest of the examples from the gist are proper static file servers.
I'd be interested in seeing if you can come up with a socat one-liner that works like a static file server. Would come in handy for many things, but I can't seem to figure out how to make it work.
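Something like this might work as a crude approximation (an untested sketch with zero security: no path sanitization, so ../ escapes the directory, and no Content-Type or error handling):
$ socat TCP4-LISTEN:8080,reuseaddr,fork SYSTEM:'read -r method path proto; printf "HTTP/1.0 200 OK\r\n\r\n"; cat ".$path"'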
Many examples here in the comments and in the gist itself don't follow it to the letter either, it's easy enough to serve one file.
This one might work https://github.com/avleen/bashttpd
> socat TCP4-LISTEN:8080 EXEC:/usr/local/bin/bashttpd
> Many examples here in the comments and in the gist itself don't follow it to the letter either, it's easy enough to serve one file.
Curious about what you said, I looked through the gist again and every one of the examples in the gist itself does indeed serve all the files under a path (a "static file server"). Which one are you implying does not?
Many of these are so complex you could just say nginx -c <<<(config goes here) is a one-liner.
Weird to call these one-liners; they're just invocation commands.
By that logic, apache is a one-liner.
With that logic there are no one-liners. Even binary code is just an abstraction interpreted by microcode.
The point is that there is a working http server already installed on your machine. And you just need to run this one simple, memorable command to use it.
Very useful if you just want to quickly copy some files from A to B in your LAN or WAN (I wouldn't use it over the Internet because it's not encrypted).
Apache requires configuration. The point of those is that they're opening ad hoc an HTTP server without requiring any configuration.
Edit: Someone has posted an Apache example: https://gist.github.com/willurd/5720255#gistcomment-3050599
apache requires a config file. I guess you could put any relevant config into a single line and pipe that through somehow, but it would be a lot more than a single command to define a directory and a port to serve.
PHP was on the list. It also takes a config file (less relevant though to this use).
My take on this [1]. Explanation article [2]. In fact, the use-case I addressed was slightly different (explained in the article), and later I found a similar tool [3] which is probably more powerful.
[1] https://github.com/xonixx/serv
[2] https://medium.com/cmlteam/develop-a-utility-on-graalvm-cc16...
It will be a one-liner in Java soon too, once JEP 408 ("simple web server") has landed. In the meantime, this will do:
jbang webster@gunnarmorling
This launches Vert.x, publishing the current directory, via the JBang launcher. Source code is here if you want to see what you actually run: https://github.com/gunnarmorling/jbang-catalog/blob/master/w...
Does anybody have a one-liner allowing CORS (e.g. through options)?
That would be utterly convenient to quickly test multithread WASM applications...
I'm currently using ExpressJS for that, but it feels like overkill.
I think these are meant for development setups, which you normally run in a localhost context, meaning you most likely won't have to deal with CORS.
But, if you still need it, the http-server program has a --cors flag to enable wildcard CORS.
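E.g., assuming the npm http-server package from the gist:
$ npx http-server -p 8080 --cors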
You could still want CORS if, for example, you're testing a front-end service that typically calls out to a remote resource. Maybe you typically deploy both same-origin in production, but for dev you want to be able to feed it some specific data. In this case a different port would require CORS.
There are lots of ways around this (put the files in your static folder, embed it into the file you're debugging, use a library which provides mock data, probably more), but it's not inconceivable to just want to be able to point the 'api' configuration param to localhost on a different port so you can just feed it json files from a directory that already has the data you want.
No, you misunderstand how CORS works. CORS (the allowance of CORS, to be precise) is up to the server you're requesting data from, not your own local instance.
So if you're hosting a frontend, you can use whatever you want to host that frontend. It's up to the backend you're connecting to to set the right headers to allow your requests. Then you're no longer using static file servers (probably) but rather something custom-built or a framework where you can easily set the headers you want.
I don't think I am.
If I'm testing a react app on localhost:3000, and it normally calls out to an api of mine also, I may want to feed it some dummy data. Let's say I use python http.server on port 5000 in a directory with files in a subdirectory `api` with files `users` and `posts` (which contain JSON data representative of the API response). I then set my react app to use localhost:5000 as the api.
The react app will then make the requests to the api such as `localhost:5000/api/users` and `localhost:5000/api/posts`. The server will respond with the data; however, if the header `Access-Control-Allow-Origin: http://localhost:3000` isn't set, the browser will block this data (probably? unless there are exceptions for localhost)
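For the record, one way around this with the Python server is the subclassing trick; a sketch (the class name is mine, and the wildcard is for local testing only; simple GETs don't trigger a preflight, so there's no do_OPTIONS handler here):
$ python3 -c "
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler
class CORSHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # inject wildcard CORS before the headers are flushed; lock the origin down for anything real
        self.send_header('Access-Control-Allow-Origin', '*')
        super().end_headers()
ThreadingHTTPServer(('', 5000), CORSHandler).serve_forever()"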
Thanks!! I'll try it again, the last time I did I thought it was not allowing the right setting, but yeah a wildcard CORS should be enough of course.
>but yeah a wildcard CORS should be enough of course.
You mean that isn't the production default?
/s
You can try serverino (https://github.com/mmazzarolo/serverino) with the --cors option.
I wrote a zsh function[1] that runs an http server using ruby, php or python (2 and 3) depending on what's available in the system.
[1] https://github.com/tngranados/dotfiles/blob/0ebdc12f2454061a...
I've started using darkhttpd [1] recently. It's simple, compiles quickly to one binary, supports directory listings, and supports range requests.
I'm a big fan of miniserve [0]. It can do files, directories (including on-the-fly tgz or zip downloading), authentication and uploads.
https://gist.github.com/willurd/5720255#http-server-nodejs
You could do `npx http-server` I believe.
I use busybox httpd to run my blog via Cloud Run. It’s a fun setup, using a multi-stage Dockerfile to build the static pages and then create a super small busybox image which just serves those pages.
What I use for https:
$ ruby -r webrick/https -e "WEBrick::HTTPServer.new(Port: 9001, DocumentRoot: '.', SSLEnable: true, SSLCertName: [%w[CN localhost]]).start"
Is there a way of doing that in Windows without installing anything?
.NET via PowerShell:
Windows ships with a web server (IIS), but you need to activate it in Features. Not sure if that counts as installing or not to you.
https://www.betterhostreview.com/turn-on-iis-windows-10.html
Why would you not have things installed already? You're almost guaranteed to have Node or Python or bash (via the Windows Subsystem for Linux) already available if you do any kind of web dev work?
Missing:
nghttpd -v -d /home/user/www -a 127.0.0.1 443 /home/user/demoCA/serverkey.pem /home/user/demoCA/servercert.pem
now please also do one for https, no one wants to deal with the cert hassle when they just need a temporary https server for working with some local content.
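The cert hassle can at least be compressed into one extra line: a throwaway self-signed cert (browsers will still warn about it, of course):
$ openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem -days 7 -subj "/CN=localhost"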
Powershell: Start-PodeStaticServer
Wow, didn't realize this was started in 2013
$ sthttpd/sbin/thttpd -i thttpd.pid -l `pwd`/access.log -p 8080 -c '/bin/*'
What's a good one that serves gzip assets if they exist? I'm looking for a way to serve a webpack prod build.
These are development servers; they are not meant for production builds (many don't handle concurrent requests, and memory/performance is probably too poor as well).
Caddy works well and is easy to use and could replace most of these examples, not only for development but for production usage too.
Otherwise go for nginx/apache. Learn the config syntax once, get lifetime worth of value :)
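For completeness, Caddy's own one-liner (v2 syntax); serving precompressed .gz assets needs a Caddyfile with file_server's precompressed option though, if I remember right:
$ caddy file-server --listen :8080 --browse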
Try redbean (https://justine.lol/redbean/); see the HN thread: https://news.ycombinator.com/item?id=26271117. E.g.:
sudo redbean.com -dp80 -L/var/log/redbean.log -P/var/run/redbean.pid -U65534 -G65534 -vvvvmbag
Caddy has it, but I doubt you could invoke the option in a one-liner. Still very easy to set up though.
Are there any that can serve the same index.html file to any request with an Accept: text/html header?
I'm always surprised that nginx doesn't have a nice way to run as a single-liner command.
What would be the use case? These are useful for experimenting and doing something ad-hoc, but once you deploy to a serious environment not being a one-liner is irrelevant. On the other hand supporting that special case has its own costs.
I think it makes sense, since it is completely driven by its config and adheres to the Unix philosophy of doing one thing well and not cluttering the binary with additional stuff for fringe use cases. It would make the now surprisingly simple CLI arguments more complex.
It should be trivial to write a shell script or alias that templates `root $(pwd);` and invokes `nginx -c /tmp/nginx.conf`
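A rough sketch of that alias idea (may need root, or extra temp-path and prefix directives, depending on how your distro built nginx):
$ echo "daemon off; pid /tmp/nginx.pid; error_log stderr; events {} http { server { listen 8080; root $(pwd); } }" > /tmp/nginx.conf && nginx -c /tmp/nginx.conf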
> adheres to the Unix philosophy of doing one thing well and not clutter the binary with additional stuff for fringe use cases
Nginx? Really? "One" thing? Try running `./configure --help` in the Nginx source dir...