A tiny Docker image to serve static websites
(lipanski.com)

> My first attempt uses the small alpine image, which already packages thttpd:
# Install thttpd
RUN apk add thttpd
Wouldn't you want to use the --no-cache option with apk, e.g.:

RUN apk add --no-cache thttpd
It seems to slightly help with the container size:

REPOSITORY       TAG      IMAGE ID       CREATED          SIZE
thttpd-nocache   latest   4a5a1877de5d   7 seconds ago    5.79MB
thttpd-regular   latest   655febf218ff   41 seconds ago   7.78MB
It's a bit like cleaning up after yourself with apt-based container builds as well, for example (although this might not always be necessary):

# Apache web server
RUN apt-get update && apt-get install -y apache2 libapache2-mod-security2 && apt-get clean && rm -rf /var/lib/apt/lists /var/cache/apt/archives
But hey, that's an interesting goal to pursue! Even though personally I just gave up on Alpine and similar slim solutions and decided to just base all my containers on Ubuntu instead: https://blog.kronis.dev/articles/using-ubuntu-as-the-base-fo...

I love stuff like this.
People will remark about how this is a waste of time, others will say it is absolutely necessary, even more will laud it just for the fun of doing it. I'm in the middle camp. I wish software/systems engineers would spend more time optimising for size and performance.
Wouldn't removing Docker entirely be a good optimization?
Docker adds other value to the lifecycle of your deployment. An "optimization" where you're removing value is just a compromise. Otherwise we'd all run our static sites on UEFI.
Redbean supports UEFI too, although we haven't added a bare-metal implementation of Berkeley sockets yet. It's on the roadmap for the future.
oh wow are you justine?
i've been meaning to ask you this for a decade. whatever happened to when you wrote a blog with insanely irritating serifs that connected certain letters together? what was the rationale behind that? never seen it since
I'm insanely impressed by APE and redbean by the way, blows OP out of the water!
Oh you mean the blog with the long s? I was reading a lot of books at the time that were written before 1800 and I found it so fascinating how different typography was back then. I found a font I could pay for called Quant that did a really good job reproducing archaic ligatures and the long s, so I used it on a blog for a short period of time. Sadly it got negative feedback. So lately I've been focusing on https://justine.lol/ which uses Roboto. I'm glad to hear you're enjoying it!
ahhh ligatures, that's the word I was looking for here. Yeah it was kind of irritating but good blog content so I just read it anyways. it was kind of hard not to read it as someone lisping everything though.
Perhaps next time I'll blog in elvish cryptograms.
mayan hieroglyphics or bust
This is a really good point, and something I think a lot of people forget. It's true, the most secure web app is one written with no code/no OS/does nothing.
Adding value is a compromise of some increased security risk - and it's our job to mitigate that as much as possible by writing quality software.
What value is that, for running such a simple piece of software?
You can have multiple instances of the server running on the machine without interfering with each other.
You can limit file system access for the server to only a certain folder.
You can similarly limit port access and manage conflicts (e.g. multiple servers can think they are listening on a certain open port but those are mapped to something else on the host).
If you have multiple machines with different operating systems or even architecture you can deploy your server as a container more easily on them without needing to rebuild or test for each one.
You can have the same environment running locally while development or on CI servers without complicated setups.
The system can scale out a lot more easily to hundreds/thousands of machines if you decide to use something like Kubernetes.
A few off the top of my head.
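For instance, the port-mapping and file-system points above boil down to a couple of flags - a rough sketch, with the image name and host paths made up:

# Two instances of the same static-site image on different host ports,
# each limited to a read-only mount of its own content directory.
docker run -d --name site-a -p 8081:80 -v /srv/site-a:/usr/share/nginx/html:ro nginx:alpine
docker run -d --name site-b -p 8082:80 -v /srv/site-b:/usr/share/nginx/html:ro nginx:alpine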
The ability to pull the image on to any machine without needing to clone the source files and build it.
Smaller images mean faster pod starts when you auto scale.
You have to log in to some docker repository anyways and know the series of commands to actually run it. Cloning a repo and running a shell script is probably a lot easier and faster than that.
What kind of work are you doing that requires really fast auto scaling? Is a few minutes to spin up a new instance really that cumbersome? Can you not signal for it to spin up a new instance a tiny bit earlier than when it's needed when you see traffic increases?
> You have to login to some docker repository anyways and know the series of commands to actually run it. Cloning a repo and running a shell script is probably a lot easier and faster than that.
In isolation, yes. But if, for instance, you're already running a container orchestration tool with hundreds of containers, and have CI/CD pipelines already set up to do all of that, it's easier just to tack on another container.
OK, when you say "a few off the top of my head" it implies that there are a bunch and these are just some super obvious ones, but it sounds like this is actually only useful if you have a bunch of infrastructure set up to serve sites for projects and customers that need containerization, and then you just throw this simple little static-site Docker instance in there because, when you're maintaining a lot of Docker instances, it is just simpler to do?
Which seems like sort of an edge case for value adding, and makes me feel like it really doesn't add any value to do this unless you already are doing it for everything, and thus you really wouldn't be throwing out any value by just serving the static site without the docker overhead.
Adding to some of the other responses, one reason I chose to deploy a SPA I'm working on as a Docker image is atomicity - if I want to deploy a newer version I simply switch out the tag in my container orchestrator's config (Nomad in this case, but the same principles apply to k8s and friends) and it's guaranteed that the new deployment will be pristine, without the risk of leftover files from a rsync or similar - and if I need to roll back I do the exact same.
There’s value in that, but you don’t need Docker with its related debugging and maintenance overhead to get it. NixOS, among other tools, will do the same thing while constructing a “flat” operating system image.
Anything else, though? There’s got to be more to it than that, or it wouldn’t be as popular as it is.
yeah see some of us still do this on OSes that haven't turned into a giant bloated hodgepodge of security theatre and false panacea software.
docker has dead whale on the beach vibes. what value does it offer to those of us who have moved on from the mess linux is becoming?
I’m not suggesting it has value to everyone. I’m suggesting it has value to the people who see value in it.
I'm super curious to know what the value to people who see that happens to be. It's serving static websites, why do I need to wrap THAT of all things in a container?
Really, enlighten me
> why do I need to wrap THAT of all things in a container?
If you can't see a reason why, then you probably don't need to. You probably have different needs than other people.
Many people use Docker not because of what they're doing inside of the container, but because it is convenient for tangential activities. Like lifecycle management, automation, portability, scheduling, etc.
I have several static sites in Docker containers in production. We also have dozens of other microservices in containers. We could do everything the same way, or we can one-off an entirely separate architecture for our static sites. The former makes more sense for us.
Because you want a reproducible environment/runtime for that static server. Nix/NixOS takes it a step further, in that it provides not only a reproducible runtime environment, but a reproducible dev and build environment as well.
Once you've gone the container route you no longer even need to think about virtual servers. You can just deploy it to a container service, like ECS.
I actually found myself needing something like this a couple weeks ago. I use a self-hosted platform (cloudron.io) that allows for custom apps. I wanted to host a static blog on that server. Some people are happy to accept "bloat" if it does, in fact, make life easier in some way.
If you literally ONLY ever need to run a single static website, then yeah, containers might not be helpful to you.
But once you start wanting to run a significant number of things, or a significant number of instances of a thing, it becomes more helpful to have a all-purpose tool designed to manage images & run instances of them. Having a common operational pattern for all your systems is a nice, overt, clean, common practice everyone can adapt & gain expertise in. Rather than each project or company defining it's own deployment/operationization/management patterns & implementations.
The cost of containers is also essentially near zero (alas somewhat less true with regards to local FS performance, but basically equal for many volume mounts). They come with great features like snapshots & the ability to make images off images- CoW style capabilities, the ability to mix together different volumes- there's some really great operational tools in containers too.
Some people just don't have real needs. For everyone else...
Out of curiosity, what OS have you moved on to?
OpenBSD for the past 10 years or so has been really good to me and my clients, and it just keeps on getting better while linux keeps on getting worse. It's kind of a nobrainer these days.
Hell if you just need to serve static HTTP it even has its own built in webserver now:
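A minimal httpd.conf for OpenBSD's built-in httpd looks roughly like this (domain and paths are placeholders, and the root is relative to the /var/www chroot):

# /etc/httpd.conf
server "example.com" {
    listen on * port 80
    root "/htdocs/example.com"
}

# then: rcctl enable httpd && rcctl start httpd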
In terms of CPU cycles and disk space, maybe. In terms of engineer cycles, absolutely not. Which costs more?
Hmm, a SCP shell script on my laptop, prompting my SSH key's password and deploying the site to the target machine?
Or a constantly-updating behemoth, running as root, installing packages from yet another unauditable repository chain?
You forgot the step where you had to provision that server to run the software and maintain all the system's security updates on the live running server, and that server requires all the same maintenance, with or without docker. And if you fuck it up, better call the wife and cancel Sunday plans because you forgot how it all gets installed and... yeah, just use docker :p
Debian offers unattended upgrades: https://wiki.debian.org/UnattendedUpgrades
And security updates, as you said, are needed regardless of whether you run Docker on top. I think Docker is a needless complexity and security risk.
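To be fair, turning those on is only a couple of commands on Debian/Ubuntu (roughly, per the wiki linked above):

sudo apt-get install unattended-upgrades apt-listchanges
sudo dpkg-reconfigure -plow unattended-upgrades   # enables the periodic security-update job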
Security updates are only needed on the OS level if you're running Docker on bare metal or a VPS. If you're running Docker in a managed container or managed Kubernetes service such as ECS/EKS, you only need to update the Docker image itself, which is as simple as updating your pip/npm/maven/cargo/gem/whatever dependencies.
I see two main places where Docker provides a lot of value: in a large corp where you have massive numbers of developers running diverse services on shared infrastructure, and in a tiny org where you don't have anyone who is responsible for maintaining your infrastructure full time. The former benefits from a standardized deployment unit that works easily with any language/stack. The latter benefits from being able to piggy-back off a cloud provider that handles the physical and OS infrastructure for you.
And you're welcome to think so, but if you intend to make a case for removing Docker as optimization, you still have yet to start.
I am arguing against Docker for maintainability reasons, not CPU cycles.
Relying on hidden complexity makes for a hard path ahead. You become bound by Docker's decisions to change in the future.
For example, SSLPing's reliance on a lot of complex software (among which NodeJS and Docker) got it to close, and it got on the front page of HN recently.
https://news.ycombinator.com/item?id=30985514
Keeping dependencies to a minimum will extend the useful lifespan of your software.
Docker Swarm isn't Docker; it's an orchestration service on top of Docker that happens to have originated with the same organization as Docker - hence the name. A few years ago Swarm looked like it might be competitive in the container orchestration space, but then Kubernetes got easy to use even for the small deployments Swarm also targeted, and Swarm has withered since. It wouldn't be impossible or probably even all that difficult to switch over to k8s if that were the only blocker, but as the sunset post notes and you here ignore, that wasn't the only or the worst problem facing SSLping - as, again, the sunset post notes to your apparent disinterest, it had been losing money for quite a while before it fell apart.
(Has it occurred to you that it losing money for a while might have contributed to its eventual unmaintainability, as the dev opted sensibly to work on more sustainably remunerative projects? If so, why ignore it? If not, why not?)
Similarly for the Node aspect - that's very much a corner use case related to this specific application (normally SSLv3 support is something you actively don't want!), and not something that can fairly be generalized into an indictment of Node overall. Not that it's a surprise to see anyone unjustly indict Node on the basis of an unsupportable generalization from a corner case! But that it's unsurprising constitutes no excuse.
Other than that you seem here to rely on truisms, with no apparent effort to demonstrate how they apply in the context of the argument at which you gesture. And even the truisms are misapplied! Avoiding dependencies for the sake of avoiding dependencies produces unmaintainable software because you and your team are responsible for every aspect of everything, and that can work for a team of Adderall-chomping geniuses, but also only works for a team of Adderall-chomping geniuses. Good for you if you can arrange that, but it's quite absurd to imagine that generalizes or scales.
Just because a project has a larger budget, doesn't mean any of it should be spent on Docker, Docker Swarm, or Kubernetes or whatever other managers of Docker that you can mention here.
Fact is, for 3 servers, it would be hard to convince me of any use of Docker compared to the aforementioned deployment shell script + Debian unattended-upgrades.
What problem does Kubernetes address here? So what if it is "easy to use"? I prefer "not needed at all".
> but also only works for a team of Adderall-chomping geniuses
Of course, not everything should be implemented by yourself. Maybe this project wouldn't have been possible at all without offloading some complexity (like the convenient NodeJS packages).
But in particular Docker and its ecosystem are only worth it when you have an amount of machines that make it worth it - when things become difficult to manage with a simple shell script everyone understands: when you have a lot of heterogeneous servers, or you want to deploy to the Cloud (aka Someone Else's Computers) and you have no SSH access.
> truisms
I don't have any experience with Kubernetes nor Docker Swarm. The reason is that the truisms have saved me from it. If you don't talk me into learning Kubernetes, I won't, unless a customer demands it explicitly.
> Has it occurred to you that it losing money for a while might have contributed to its eventual unmaintainability
It absolutely has. Maybe if the service hadn't used Docker Swarm or Docker at all, it would have lasted longer, since updating Docker would not have broken everything, since this was named a factor in the closure. And therefore, the time and money would have gone further.
Exploring technologies can give you great insight into your current practices. You are living by assumption, which is a pretty weak position.
And maybe if my grandmother had wheels she'd be a bicycle. But - over a so-far 20-year career in which "devops" work has sometimes been a full-time job, and always a significant fraction, since back when we called that a "sysadmin" - I've developed sufficiently intimate familiarity with both sorts of deployment workflows that, when I say I'll always choose Docker where feasible even for single-machine targets, that's a judgment based on experience that in your most recent comment here you explicitly disclaim:
> I don't have experience with Kubernetes nor Docker Swarm. The reason is that the truisms have saved me from it.
Have they, though? It seems to me they may have "saved" you from an opportunity to significantly simplify your life as a sysadmin. Sure, your deployment shell scripts are "simple" - what, a hundred lines? A couple hundred? You have to deal with different repos for different distros, I expect, adding repositories for deps that aren't in the distro repo, any number of weird edge cases - I started writing scripts like that in 2004, I have a pretty good sense of what "simple" means in the context where you're using it.
Meanwhile, my "simple" deployment scripts average about one line. Sure, sometimes I also have to write a Dockerfile if there isn't an image in the registry that exactly suits my use case. That's a couple dozen lines a few times a year, and I only have to think about dependencies when it comes time to audit and maybe update them. And sure, it took me a couple months of intensive study to get up to speed on Docker - in exchange for which, the time I now spend thinking about deployments is a more or less infinitesimal part of the time I spend on the projects where I use Docker.
Kubernetes took a little longer, and manifests take a little more work, but the same pattern holds. And in both cases, it's not only my experience on which I have to rely - I've worked most of the last decade in organizations with dozens of engineers working on shared codebases, and the pattern holds for everyone.
I don't know, I suppose. Maybe there's another way for twenty or so people to support a billion or so in ARR, shipping new features all the while, without most months breaking a sweat. If so, I'd love to know about it. In the meantime, I'll keep using those same tools for my single-target, single-container or single-pod stuff, because they're really not that hard to learn, and quite easy to use once you know how. And too, maybe it's worth your while to learn just a little bit about these tools you so volubly dislike - if nothing else, in so doing you may find yourself better able to inform your objections.
All that said, and faint praise indeed at this point, but on this one point we're in accord:
> If you don't talk me into learning Kubernetes, I won't, unless a customer demands it explicitly.
I did initially learn Docker and k8s because a customer demanded it - more to the point, I went to work places that used them, and because the pay was much better there I considered the effort initially worth my while. That's paid off enormously for me, because the skills are much in demand; past a certain point, it's so much easier to scale with k8s especially that you're leaving money on the table if you don't use it - we'd have needed 200 people, not 20, to support that revenue in an older style, and even then we'd have struggled.
I still think it's likely worth your while to take the trouble, for the same reasons I find it to have been worth mine. But extrinsic motivation can be a powerful factor for sure. I suppose, if anything, I'd exhort you at least not to actively flee these technologies that you know next to nothing about.
Sure, you might investigate them and find you still dislike them - but, one engineer to another, I hope you'll consider the possibility that you might investigate them and find that you don't.
> what, a hundred lines? A couple hundred? You have to deal with different repos for different distros, I expect, adding repositories for deps that aren't in the distro repo, any number of weird edge cases
Well, here is where I must thank you. Thank you for replying to me, and giving me a real reason to look at this ecosystem.
My personal deployment script is really just one SCP command - it copies the new version of my statically-built blog to my server. The web server comes with the OS, and that's all I need.
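Something like this, in full (host and paths made up):

# copy the freshly built site to the web root on the server
scp -r public/* deploy@example.com:/var/www/blog/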
But when I read "hundred lines? A couple hundred?" I realized my company has a script fitting that bill. There might be an opportunity for improving it. While I am still somewhat skeptical, because using Kubernetes instead of that script might still not be worth it long-term for a 7-person company (of which 3 devs and one sysadmin), I will check out its capabilities.
> in so doing you may find yourself better able to inform your objections.
Thank you for the patience to follow up, in spite of my arrogance. I might just come up with an improvement somewhere. Certainly we're a long way from a billion in ARR - I am thankful for your valuable time, and wish you continued and further success!
Don't worry about the arrogance - I was much the same myself, once upon a time. :) It's worth taking thought to ensure you don't let it run too far away with you, of course, but I'd be a fool to judge harshly in others a taste of which I once drank deep myself. And hey, what the hell - did anyone ever change the world without being at least a little arrogant? There are other ways to dare what seems impossible to achieve, I suppose, but maybe none so effective.
One thing, I'd suggest looking to Docker before Kubernetes unless you already know you need multi-node (i.e., multi-machine) deployments, and maybe as a pilot project even if you do. Kubernetes builds upon many of the same concepts (and some of the same infrastructure) as Docker, so if you get to grips with Docker alone at first, you'll likely have a much easier and less frustrating time later on than if you come to Kubernetes cold. (And when that time does come, definitely start with k3s if you're doing your own ops/admin work - it's explicitly designed to work on a smaller scale than other k8s distributions, but it also pretty much works out of the box with little admin overhead. As with starting on Docker alone vs k8s, it's all about managing your frustration budget so you can focus on learning at a near-optimal rate.)
But hey, thanks for the well-wishes, and likewise taking the time in this thread! It's been of real benefit to me as well. If we're to be wholly honest, as an IC in my own right I've never been above mid-second quartile at absolute best, and at my age I'll never be better than I am today - or was yesterday. But that also means I'm at a point in my career where I best spend my time helping other engineers develop; if I can master the skill of making my accumulated decades of experience and knowledge, and whatever little wisdom may be found in that, of use to engineers who still have the time and drive to make the most of it in ways I maybe failed to do - well, I'll take it, you know? It's not the kind of work I came into this field to do, I suppose, but I've done enough of it by now to know both that I can do it, and that it is very much worth doing.
So, in that spirit and quite seriously meant - I might be off work sick this afternoon, one peril I'm finding attends ever more frequently upon advancing age, but evidently that's no barrier to improving the core skill that I intend to build the rest of my career around. Thank you for taking the time and trouble to help make that possible today, and here's likewise hoping you find all the success you desire!
Thanks again. Get well quick!
BTW, your CV is down, again due to relying on hidden complexity (Stack Exchange Jobs is extinct). You made me curious so I stalked you a bit :D
Hahaha, that's perfect. I'll fix it when I get the chance, thanks again!
The first option is something custom that you had to write yourself and remember how to use years later, or explain how to use to others
The second option is standardized and usually the same 1 or 2 commands to run anywhere
Building simpler systems allows you to save on all three.
That's true, but in my experience there is nothing mutually exclusive in systems being simple and systems running Docker.
Granted, you do need to learn how Docker works, and be ready to help others do likewise if you're onboarding folks with little or no prior experience of Docker to a team where Docker is used. That's certainly a tradeoff you face with Docker - just as with literally every other shared tool, platform, codebase, language, or technological application of any kind. The question that wants asking is whether, in exchange for that increased effort of pedagogy, you get something that makes the increased effort worthwhile.
I think in a lot of cases you do, and my experience has borne that out; software in containers isn't materially more difficult to maintain than software outside it if you know what you're doing, and in many cases it's much easier.
I get that not everyone is going to agree with me here, nor do I demand everyone should. But it would be nice if someone wanted to take the time to argue the other side of my claim, rather than merely insisting upon it with no more evident basis than arbitrarily selected first principles given no further consideration in the context of what I continue to hope may develop into a discussion.
Docker absolutely ups the complexity.
Whatever set-up your application needs is still a necessary step in the process. But now you've not only added more software (Docker itself, with its registry, and Docker's state on top of the application's state), you've also introduced multiple virtual filesystems, a layer of mapping between those and locations on the host, and mappings between the container's ports and the host's ports. There is no longer a single truth about the host system. The application may see one thing and you, the owner, another. If the application says "I wrote it to /foo/bar", you may look in /foo/bar and find that /foo doesn't even exist.
All of that is indirection and new ways things can be that did not exist if you just ran your code natively. What is complexity if not additional layers of indirection and the increase of ways things can be?
Okay, and in exchange for that, I've gained single-command deployments of containers that already include all the dependencies their applications require, and at most I only have to think about that when I'm writing a deployment script or doing an update audit.
It's rare that I need to find out de novo where a given path in a container is mapped on the host. When I do need to do that, I can usually check a deployment script, or failing that inspect the container directly and see what volume mounts it has.
I don't need to worry about finding paths very often - much less frequently than I need to think about deployments, which at absolute minimum is once per project.
So, sure, by using Docker I've introduced a little new complexity, that's true. But you overlook that this choice does not exist in a vacuum, and that that added complexity is more than offset by the reduction of complexity in tasks I face much more often than the one you describe.
And that's just me! These days I have a whole team of engineers on whose behalf, as a tech lead, I share responsibility for maintaining and improving developer experience. Do you think I'd do them more of a favor by demanding they all comprehend a hundred-line sui generis shell script for deployments, or by saying "here's a single command that works in exactly the same way everyone you'll work with in the next ten years does it, and if it breaks there's fifty people here who all know how to help you fix it"?
Does it? Or, rather, is it even simpler?
To host something as a docker container I need 2 things: to know how to host docker, and a docker image. In fact, not even an image, just a dockerfile/docker-compose.yaml in my source code. If I need to host 1000 apps as docker containers, I need 1000 dockerfiles and still only have to know (and remember) 1 thing: how to host docker. That's 1 piece of knowledge I need to keep in my head, and 1000 I keep on a hard-drive, most of the time not even caring what instructions are inside of them.
If I need to host 1000 apps without dockerfiles, I need to keep 1000 pieces of knowledge in my head. thttpd here, nginx to java server there, very simple and obvious postgres+redis+elastic+elixir stack for another app… Yeah, sounds fun.
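And that per-app file really can be tiny - a hypothetical compose file, with the image and paths invented for illustration:

# docker-compose.yml
services:
  static-site:
    image: nginx:alpine
    ports:
      - "8080:80"
    volumes:
      - ./public:/usr/share/nginx/html:ro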
I think the real value is just focusing on the absolute minimum necessary software in a production docker/container image. It's a good practice for security with less surface area for attackers to target.
The difference between a systems engineer and a software engineer is that to a systems engineer a half functioning 5MB docker image is okay but to a software engineer a fully functional 5GB Node image is fine.
Premature optimisation? 5 GB doesn't matter. It's not great, don't get me wrong.
While this is a remarkably good hack and I did learn quite a bit after reading the post, I'm simply curious about the motivation behind it. A docker image, even if it's a few MBs with Caddy/NGINX, should ideally be pulled once on the host and sit there cached. Assuming this is OP's personal server and there's not much churn, this image could be in the cache forever until a new tag is pushed/pulled. So, from a "hack" perspective, I totally get it, but from a bit more pragmatic POV, I'm not quite sure.
It gets pulled once per host, but with autoscaling, hosts come and go pretty frequently. It's a really nice property to be able to scale quickly with load, and small images tend to help with this in a variety of ways (pulling but also instantiating the container). Most sites won't need to scale like this, however, because one or two hosts are almost always sufficient for all the traffic the site will ever receive.
I did mention that it's the OP's server which I presume isn't in an autoscale group.
Even then, saving a few MBs in image size is, in devops parlance, premature optimisation.
There's so much that happens in an autoscaling group before the instance is marked healthy to serve traffic that an image pull of a few MBs is, in the grand scheme of things, hardly ever an issue to focus on.
Yeah, like I said, I'm not defending this image in particular--most static sites aren't going to be very sensitive to autoscaling concerns. I was responding generally to your reasoning of "the host will just cache the image" which is often used to justify big images which in turn creates a lot of other (often pernicious) problems. To wit, with FaaS, autoscaling is highly optimized and tens of MBs can make a significant difference in latency.
Noted, that makes sense. Thanks!
Could be very useful in the serverless space, as Lambda does support container images now. The image will be pulled much more often.
The less resources you use from your system, the more things you can do with your system.
Only matters if you're actually using those extra cycles or not. The majority of web servers hover at <10% CPU just waiting for connections.
I don't know if that's really true - if you're renting the server from a cloud provider chances are you can bump down the instance size if you don't need the extra processing capacity... and if it's a server you manually maintain I think lighter usage generally decreases part attrition, though the other factors in that are quite complex.
I feel like there's a lot of low-hanging fruit on the table for containers, and it's weird we don't try to optimize loading. I could be wrong! This seems like a great sample use case- wanting a fast/low-impact simple webserver for any of a hundred odd purposes. Imo there's a lot of good strategies available for making starting significantly larger containers very fast!
We could be using container snapshots/checkpoints so we don't need to go through as much initialization code. This would imply, though, that we configure via the file-system or something we can attach late, instead of 12-factor configure-via-env-vars, as is standard/accepted convention these days. Actually I suppose environment variables are writable, but the webserver would need to be able to re-read its config, accept a SIGHUP or whatever.
We could try to pin some specific snapshots into memory. Hopefully Linux will keep any frequently booted-off snapshot cached, but we could try & go further & try to make sure hosts have the snapshot image in memory at all times.
I want to think that common overlay systems like overlayfs or btrfs or whatever will do a good job of making sure that, if everyone is asking for the same container, they're sharing some caches effectively. Validating and making sure would be great to see. To be honest I'm actually worried the need-for-speed attempt to snapshot/checkpoint a container & re-launch it might conflict somewhat - rather than creating a container fs from existing pieces & launching a process mapped to that fs, I'm afraid the process snapshot might re-encode the binary? Maybe? We'd keep getting to read from the snapshot I guess, which is good, but there'd be some duplication of the executable code across the container image and then again in the snapshotted process image.
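Docker does ship experimental checkpoint support built on CRIU, which hints at what this could look like - a sketch, assuming the experimental daemon flag and CRIU are set up on the host, and the container name is made up:

docker checkpoint create web cp1     # snapshot the running container's process state
docker start --checkpoint cp1 web    # later, resume from the checkpoint instead of cold-starting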
I love it! Can you add SSL though? Does it support gzip compression? What about Brotli? I like that it's small and fast, so in addition to serving static files can it act as a reverse proxy? What about configuration? I'd like to be able to serve multiple folders instead of just one.
Where can I submit a feature request ticket?
https://github.com/weihanglo/sfz
check this out
This seems to be intended for local host usage exclusively. Is anyone using this for public or even internal http hosting?
I think it's not precisely an active project, but it's open on GitHub[1]; I guess we can try to open issues there.
If you use "-Os" instead of "-O2", you save 8kB!
However, Busybox also comes with an httpd... it may be 8.8x bigger, but you also get that entire assortment of apps to let you troubleshoot, run commands in an entrypoint, run commands from the httpd/cgi, etc. I wouldn't run it in production.... but it does work :)
Redbean is just 155Kb without the need for alpine or any other dependency. You just copy the Redbean binary and your static assets, no complicated build steps and hundred MB download necessary. Check it out: https://github.com/kissgyorgy/redbean-docker
There's also the 6kB container, which uses asmttpd, a webserver written in assembler.
https://devopsdirective.com/posts/2021/04/tiny-container-ima...
Wow! This is the Redbean which is an "Actually Portable Executable", or a binary that can run on a range of OSes (Linux, Windows, MacOS, BSDs).
Well worth a read:
I believe the best chance we have of [building binaries "to stand the test of time with minimal toil"], is by gluing together the binary interfaces that've already achieved a decades-long consensus, and ignoring the APIs. . . . Platforms can't break them without breaking themselves.
And it does https/tls, where thttpd does not.
I'm confused how the author considers thttpd more 'battle tested' if it doesn't handle HTTPS.
Either way though, it's a great article that I'm glad the author took the time to write. His docker practices are wonderful; wish more engineers would use them.
The term 'battle tested' has nothing to do with the number of features; it's about how proven the stability and/or security of the included features are. The term also usually carries a heavy weight towards older systems that have been used in production for a long time, since those have had more time to weather bugs that are only caught in real-world use.
Also, https is often dealt with on a different server (load balancer for example).
Yes but it's nice to have the SSL built-in for when you want it. Web servers like Varnish and thttpd take a really hard stance on the issue, where they don't want to touch the crypto at all. Honestly, I don't blame them because implementing SSL is prodigiously technical and emotional. One of the things I do is I offer a file called redbean-unsecure.com that has zero-security baked-in so that folks who love redbean but want to handle the security separately themselves can do so. But like I said when we don't have strong opinions on separation of concerns, having a fast snappy tiny zero config SSL is nice.
"Battle tested" typically means that the code has been running for a long time, bugs found, bugs squashed, and a stability has been attained for a long time. It's usage predates the "information wars", back when we really didn't think about security that much because nothing was connected to anything else that went outside the companies, so there were no hackers or security battles back then. So I suspect this is the authors frame of reference.
For static websites, is there any reason not to host them on GitHub?
Since GitHub Pages lets you attach a custom domain, it seems like the perfect choice.
I would expect their CDN to be pretty awesome. And updating the website with a simple git push seems convenient.
Once you've used a couple more static hosts you'll find that gh pages is a second tier host at best. Lacks some basic configuration options and toolings, can be very slow to update or deploy, the cdn actually isn't as good as others, etc. Github pages is great for hobby projects and if you're happy with it by all means keep using it... but I wouldn't ever set up a client's production site on it.
If you're curious, Netlify is one popular alternative that is easy to get into even without much experience. I would say even at the free tier Netlify is easily a cut above GitHub for static hosting, and it hooks into GitHub near perfectly straight out of the box, if that is something you value.
> is there any reason not to host them on GitHub?
Because some people may not want to depend even more on Big Tech (i.e. Microsoft) than they already do
>For static websites, is there any reason not to host them on GitHub?
One reason would be if your site violates the TOS or acceptable use policy. GitHub bans "excessive bandwidth" without defining what that is for example. For a small blog about technology you are probably fine.
Wanting to own your own web presence is reason not to host them on GitHub.
For static websites, CDNs are largely unnecessary. My potato of a website hosted from a computer in my living room has been on the front page of HN several times without as much as increasing its fan speed.
It took Elon Musk tweeting a link to one of my blog posts before it started struggling to serve pages. I think it ran out of file descriptors, but I've increased that limit now.
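For reference, if the server runs under systemd, raising that limit is a one-line override (unit name and value illustrative):

# /etc/systemd/system/nginx.service.d/override.conf
[Service]
LimitNOFILE=65535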
Was it through a VPN? I feel like revealing my home IP to random people on the internet is a bad move.
No VPN. I have the ports fairly well tightened down, though. I'm exposed to a zero-day in iptables itself or something, but whatever. Even if someone got in it would be an inconvenience at worst. It's not like I'm making money off this stuff.
Are you able to describe how you run yours? I skimmed your blog but didn't see anything about it.
The static content is just nginx loading files straight off a filesystem. The dynamic content (e.g. the search engine) is nginx forwarding requests to my Java-based backend.
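In nginx terms that's roughly the following - paths, port, and names invented for illustration:

server {
    listen 80;
    server_name example.com;

    root /var/www/site;                    # static files straight off the filesystem

    location /search {
        proxy_pass http://127.0.0.1:8080;  # forwarded to the Java-based backend
    }
}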
You can serve massive amount of static requests from any potato really, it's a solved problem.
I'm sure their CDN is great, and I've used it in the past; however, I like to self-host as a hobby.
> For static websites, is there any reason not to host them on GitHub?
I don't like github pages because it's quite slow to deploy. Sometimes it takes more than a couple of minutes just to update a small file after the git push.
I don't think you can set a page or URL on github to return a 301 moved permanently response or similar 3xx codes. This can really mess up your SEO if you have a popular page and try to move off github, you'll basically lose all the clout on the URL and have to start fresh. It might not matter for stuff you're just tossing out there but is definitely something to consider if you're putting a blog, public facing site, etc. there.
I have a few 301 redirects setup on github pages
$ curl https://nobodywasishere.github.io # moved to https://blog.eowyn.net

<html>
<head><title>301 Moved Permanently</title></head>
<body>
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx</center>
</body>
</html>

$ curl https://blog.eowyn.net/vhdlref-jtd # moved to https://blog.eowyn.net/vhdlref

<html>
<head><title>301 Moved Permanently</title></head>
<body>
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx</center>
</body>
</html>

Is that coming back with a HTTP 200 response though and the made up HTML page? That doesn't seem right... at least, I dunno if google and such would actually index your page at the new URL vs. just thinking "huh weird looks like blog.eowyn.net is now called '301 Moved Permanently', better trash that down in the rankings".
It shows up as a proper 301 when I load up the URL in Firefox. The question is, how?
One of the Jekyll plugins that GH Pages supports[0] is jekyll-redirect-from, which lets you put a `redirect_to` entry in a page's front matter.
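The front matter for that is just a single entry (the URL is an example):

---
redirect_to: "https://example.com/new-location/"
---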
I found this example of another repo that is using the same trick: https://github.com/kotokaze/kotokaze.github.io
~ % curl -i 'https://kotokaze.github.io/'
HTTP/2 301
server: GitHub.com
content-type: text/html
permissions-policy: interest-cohort=()
location: http://github.kotokaze.net/
For the latter, `<meta http-equiv="refresh" content="0; URL=https://blog.eowyn.net/vhdlref" />` in the `<head>`
Wow, nice yeah I'd love to know how github supports configuring a URL to 301 redirect!
Yea, no.
A 301 (or 302) redirect means setting the status code header to 301 and providing a location header with the place to redirect to. Last I checked GitHub doesn't allow any of this, or setting any other headers (like cache-control). To work around this, I've been putting cloudflare in front of my site which lets me use page rules to set redirects if necessary.
Netlify FTW — For the rewrite rules alone.
Well, not everything is open source.
Can you do SSL?
Yes, since 2018.
But why would you prefer Docker like this over, for example, running thttpd directly? Saves you a lot of RAM and indirection?
Run this on a linux host and it isn't that much different from running thttpd directly. There's just some extra chroot, cgroups, etc. setup done before launching the process but none of that gets in the way once it's running. Docker adds a bit of networking complexity and isolation, but even that is easily disabled with a host network CLI flag.
It's really only on windows/mac where docker has significant memory overhead, and that's just because it has to run a little VM with a linux kernel. You'd have the same issue if you tried to run thttpd there too and couldn't find a native mac/windows binary.
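That flag is literally just the following (image name made up):

# share the host's network stack instead of Docker's bridge networking
docker run --network host my-thttpd-image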
For one, because his home server provides multiple utilities, not just this one project, and without docker he starts to have dependency conflicts.
He also likes to upgrade that server close to the edge, and if that goes south, he wants to rebuild and bring his static site up quickly, along with his other projects.
I serve several sites off an AWS EC2 instance, all are dynamic REST endpoints with DBs in their own `tmux` instance. I also have a five line nodeJS process running on another port for just my static page. All of this is redirected from AWS/r53/ELB. The only pain in the arse is setting up all the different ports, but everything runs in its own directory so there are no dependency issues. I've tried to ramp up with docker, but I always end up finding it faster to just hack out a solution like this (plus it saves disk space and memory on my local dev machine). In the end my sol'n is still a hack since every site is on one machine, but these are just sites for my own fun. Perhaps running containers directly would be easier, but I haven't figured out how to deal with disk space (since I upload lots of stuff).
Well in the article he ended up compiling thttpd statically so he wouldn't have dependency conflicts if he ran it directly. Funny how there's overlap in docker solutions that solve different but related issues for non-docker deploys as well...
Without docker, he'd need to install build dependencies on the host. Once it is in docker, why move it out?
I don’t want to touch the root of my server. I rather add a new container that doesn’t modify anything on the root.
Benefits: can cleanly and delete 100% of what was installed. If you use something on root can always infect, save cache, logs…
I don’t want to impact anything else running on my server. I don’t want anything to depend on that either silently.
Docker is the best thing. I just can’t understand how people still can’t get the benefits yet.
It's amazing to start a project you had 3 years ago and it just works; you can deploy without reading any docs. Just spin up a docker container. Easy, safe, and it just works.
The only thing I would change: I would use Caddy and not thttpd. This way the actual binary doing the serving is memory-safe. It may well require more disk space, but it is a worthwhile tradeoff I think. You can also serve over TLS this way.
How many requests can thttpd handle simultaneously, compared to, say, nginx? It's a moot point being small if you then have to instantiate multiple containers behind a load balancer to handle simultaneous requests.
I don't know why there is a big fish at the top of your website, but I like it a lot.
Agreed. GIS says at least some are from the NYPL:
https://nypl.getarchive.net/media/salmo-fario-the-brown-trou...
Me too!
Also the other blog posts have different big fishes, so check them out as well.
For static websites, hosting them directly on S3 with cloudfront, or on cloudflare might be a better option?
How’s the free tier on aws for s3 and cloudfront? I can think of free alternatives that are equally as good if not better.
S3 + cloudfront + lambda is costing me pennies per month for a trivial site. What are the free alternatives that beat it?
Requirements:
- rsync style publishing
- not supported by tracking users.
- raw static file hosting (including html)
- redirect foo.com/bar to foo.com/bar/index.html (this is why I need lambda...)
- zero admin https certificate management
GitHub Pages gives you all this except the redirect (and replace rclone with… git), and it's free (although evil Microsoft blah blah)
or https://pages.github.com/ maybe?
Is it smaller than darkhttpd?
Why do we need this when you can run a web server inside systemd?
This doesn't hijack a bunch of stuff on the host OS and replace it with garbage versions.
I want things like DNS, X11 screen locking, ssh session management, syslog, etc. to just work. I can't figure out how to fix any of that stuff under systemd, and at least one is always broken by default in my experience.
I used this as a base image for a static site, but then needed to return a custom status code, and decided to build a simple static file server with go. It's less than 30 lines, and image size is <5MB. Not as small as thttpd but more flexible.
althttpd beats this: https://hub.docker.com/r/rouhim/althttpd/tags (~63 KB)
Well, this will definitely serve an unchanging static website. But unchanging static websites are just archives. Most static websites have new .html and other files added on a whim regularly.
You can just mount an external volume on top of /home/static and be able to change the files that way. But for a single-page app I think it works great to be able to version the entire site in the docker image tag.
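Something like this, with the image name, ports, and host path invented:

# override the baked-in site with a host directory, mounted read-only
docker run -d -p 8080:80 -v /srv/blog/public:/home/static:ro my-static-site:latest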
I do something similar at work for internal only static docs.
The image is a small container with an http daemon. It gets deployed as a statefulset and I mount a volume into the pod to store the static pages (they don't get put into the image). Then I use cert-manager and an Istio ingress gateway to add TLS on top.
Updating the sites (yes, several at the same domain) is done via kubectl cp, which is not the most secure but good enough for our team. I could probably use an rsync or ssh daemon to lock it down further, but I have not tried that.
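The update itself is a one-liner of this shape (pod, namespace, and paths made up):

# copy the freshly built docs into the running pod's web root
kubectl cp ./public docs-site-0:/usr/share/nginx/html -n internal-docs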
Seems pretty silly. That being said, I did the exact same thing a couple years ago for work. My first attempt was to use busybox's built-in httpd, but it didn't support restarts. I vaguely recall settling on the same alpine + thttpd solution. The files being served were large, so the alpine solution was good enough.
I assume the author would then publish this behind a reverse proxy that implements TLS? Seems like an unnecessary dependency, given that Docker is perfect for solving dependency issues.
That's certainly what I would do. I think it's great that thttpd does not include a TLS dependency itself. Every once in a while I find a project that forces their own TLS and it's annoying to undo it.
Docker, really?
Sounds like brain surgery in order to make a jam sandwich to me.
Locally it could be easier to rely on a docker image running in the background instead of another console serving the files - just run it and forget it, and use it from the dependent project you're probably working on (dependent on the static content). I agree that in the cloud it's better to use the plethora of services available for static content directly, like Cloudflare.
> plethora of services available for static content
when I think of static content, I think of buying a domain name + shared hosting for monthly EUR 2,-.
And not signing away rights or control, but having a legal claim on both the service and the name. Am I missing something?
It's a good way to compartmentalize if you've got a lot going on on a single machine.
> compartmentalize
a static website, srsly?
Uh, yeah? Could host dozens (or even hundreds) of different sites/domains with different degrees of functionality in different languages/frameworks for different clients on one machine.
> have millions of files
Congratulations. I have millions of files on my static sites. So what? Would you recommend a container for each? To what purpose?
> different degrees of functionality
We're still talking static sites. There is no 'functionality', right?
> Congratulations. I have millions of files on my static sites. So what? Would you recommend a container for each? To what purpose?
...what? Where are you quoting that from? No, I'm not recommending Docker if all you do is host static pages.
> We're still talking static sites.
No, I said "if you've got a lot going on on a single machine" - I didn't just mean static sites. I did respond with "different sites/domains with different degrees of functionality in different languages/frameworks", which means a variety of services, e.g. one client may be a static page, another might use a backend/API in Node, and another in C#/.NET - etc.. heck, you might even used containerized DBs for some of them. Hence Docker.
Up next: how to serve a LAMP site from a single docker image
It's pretty easy. I put the data in a bind mount on btrfs on my synology NAS. It snapshots the FS and does an incremental backup with hyper backup each night. The backup is crash coherent, zero downtime, and the RDBMS doesn't need to know about it.
This is really useful for tiny little services that each want a different database server.
That's a good educational resource to show to people who need to learn about multi-stage Docker builds.
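For anyone who hasn't seen one, the general shape is something like this - a generic illustration (busybox's httpd rather than the article's thttpd, and the paths are made up):

# Stage 1: source of a statically linked httpd
FROM busybox:musl AS build

# Stage 2: the image that actually ships - just the binary and the content
FROM scratch
COPY --from=build /bin/busybox /busybox
COPY public/ /home/static/
EXPOSE 80
CMD ["/busybox", "httpd", "-f", "-p", "80", "-h", "/home/static"]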
Tbh, the moment the author thought about self-hosting anything to serve static pages, it was already too much effort.
There are free ways to host static pages, and extremely inexpensive ways to host static pages that are visited millions of times per month, simply using services built for that.
So, the best free or extremely inexpensive way to host static pages that are visited a lot would be...?
netlify, amplify, cloudflare pages, vercel, et cetera
It's a crowded field now
Is nothing sacred? The KuberDocker juggernaut leaves no stone unturned. Laughable given that Docker was originally designed for managing massive fleets of servers at FAANG-scale.
there are services specifically for static site hosting. I'd let them do the gritty devops work personally.
Netlify, Amplify, Cloudflare Pages, etc.
I use them too. Sometimes I like to have some repos with the static content, which get deployed by a CD tool to those services. It's common for me, when debugging or testing locally on my PC or LAN, to include some docker build for those repos which I don't use at production time, but only locally. Maybe it's not a big problem at all, but I use it that way, especially when the CDN used in my projects is not a free one. Makes sense?
just working off the headline. visiting your link does a great job explaining the use case you have. I'll revisit tonight for a closer look.
Nail, meet hammer.