Server Setup Basics for Self Hosting

becomesovran.com

169 points by joelp a year ago · 74 comments

solatic a year ago

Especially when writing a tutorial for beginners - please use the long-form flags (e.g. sudo usermod --append --groups sudo newuser) instead of short-form flags (e.g. sudo usermod -aG sudo newuser). Short-form flags make commands look like arcane voodoo magic. They make sense only to help you save time entering commands if you know them by heart already. Tutorials are read by beginners who are not necessarily familiar with the commands in the first place - long-form flags help communicate what these commands are actually doing and thus make for a more effective tutorial.
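The same principle holds for any command; a runnable illustration with tar (chosen because, unlike usermod, it needs no root privileges - the paths under /tmp are made up):

```shell
#!/bin/sh
set -eu
cd /tmp
rm -rf flagdemo archive.tar.gz
mkdir -p flagdemo/src
echo hello > flagdemo/src/note.txt

# Long-form flags spell out the intent...
tar --create --gzip --file=archive.tar.gz --directory=flagdemo src

# ...while the short form is identical but opaque to a beginner:
#   tar -czf archive.tar.gz -C flagdemo src

mkdir flagdemo/out
tar --extract --gzip --file=archive.tar.gz --directory=flagdemo/out
cat flagdemo/out/src/note.txt   # prints "hello"
```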

  • noahjk a year ago

    I would go as far as to say short flags should never be shared or saved (besides man pages or similar). Long flags help anyone who needs to review something in the future, even the author. Perfect for scripts of all sorts, tutorials, anything checked into git, etc.

    • sebazzz a year ago

      Powershell recommendations, for instance, are that you keep aliases and short-form for yourself, and long-form for scripts and tutorials.

          Remove-Item -Path X:\test\ -Recurse -Force
      
          del X:\test -rec -for
  • deviantintegral a year ago

    Yes please! This came up enough for us that we standardized on this for all documentation and CI scripts.

    https://architecture.lullabot.com/adr/20211006-avoid-command...

  • lhousa a year ago

    "Acronyms seriously suck" ~Elon

    • benterix a year ago

      At least it would be funny if signed by RMS.[0]

      [0] He wouldn't as he was a fan of recursive ones.

jks a year ago

I recommend checking out Caddy <https://caddyserver.com/>, which replaces both Nginx and Certbot in this setup.

Tailscale <https://tailscale.com/> can remove the need to open port 22 to the world, but I wouldn't rely on it unless your VPS provider has a way to access the server console in case of configuration mistakes.
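For a sense of why, a minimal Caddyfile sketch of the Nginx + Certbot replacement described above (the domain and upstream port are placeholders):

```caddyfile
example.com {
    reverse_proxy localhost:8080
}
```

Caddy obtains and renews the TLS certificate for the named domain automatically, so no separate certbot timer is needed.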

  • Semaphor a year ago

    Caddy also simplifies many common Nginx configurations with a one-liner. The biggest hurdle is when you don’t have a simple configuration, as all the examples are usually only for Nginx ;)

  • jim180 a year ago

    I've recently discovered that the Caddy config file has neat support for imports: https://pastebin.com/vVQYrpmj

  • InvOfSmallC a year ago

    Regarding tailscale, be sure to remove the expiration flag on your server. That's how I lost mine.

  • calgoo a year ago

    For Tailscale backup access, another way is to block port 22 on a firewall and then only unblock it if you need access.
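    As a sketch of the idea with ufw (assuming Tailscale's CGNAT range 100.64.0.0/10; rule order matters, since ufw matches the first applicable rule):

```shell
# Allow SSH over the tailnet, deny it from everywhere else.
sudo ufw allow proto tcp from 100.64.0.0/10 to any port 22
sudo ufw deny proto tcp from any to any port 22

# Locked out of Tailscale? Drop the deny rule from the provider console:
sudo ufw delete deny proto tcp from any to any port 22
```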

    • nehal3m a year ago

      If you depend on the host behind Tailscale to access the firewall from the inside then that's not going to work. Most colos I have hardware at offer a separate network for iDRAC/ILO/your flavor of OOB management, I like to use the console through that to open/close stuff like this.

hobobaggins a year ago

I'd switch to Userify if you have a team to distribute keys for, because it's ultra-lightweight and also keeps you from messing up permissions on the ssh key/directory, which I've done too many times! (also it does sudo which is quite nice)

Also, restarting ssh will not boot you out of the session (your session has already been forked as a different process), so leave your terminal window open (to fix any screwups) and then log in on a separate window on the new port and just make sure you can get in.

For backups, don't set up logins from your main server(s) to your backup server; log in from your backup server to your main server. That way, if someone breaks into your main server, they can't get into your backup server.

erros a year ago

You may want to update this post to disable password authentication, and thus you'll no longer need to install fail2ban. An important goal is to tighten your attack surface, not expand it. At this point you will still have an exposed SSHd server, so I'd recommend throwing the server under tailscale. You can setup the SSHd listener to use your tailscale IP or setup tailscale for SSH via ACLs (https://tailscale.com/tailscale-ssh).

Additionally, you can use AllowGroups to tighten control over which groups can log into the system. This would mitigate a scenario where an adversary escalates enough privileges to write an authorized_keys file for a non-privileged user that still has a shell configured.

Finally, unless you're treating this server as a bastion host of sorts, you probably should disable forwarding for agents or X11 etc. We've seen a lot of adversaries move laterally due to this agent forwarding.
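As an illustrative sshd_config fragment covering those points (the ssh-users group name is hypothetical - create it and add your login user to it first):

```
# /etc/ssh/sshd_config - hardening sketch
PasswordAuthentication no
KbdInteractiveAuthentication no
PermitRootLogin no

# Only members of this group may log in at all
AllowGroups ssh-users

# Not a bastion host: disable forwarding
AllowAgentForwarding no
X11Forwarding no
```

Validate with `sshd -t` before restarting, and keep an existing session open in case of typos.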

  • Semaphor a year ago

    > You may want to update this post to disable password authentication

    Probably not, as that’s one of the first things they do.

    That said, I feel like all this fail2ban stuff is very much cargo culting in the selfhosting community. I’ve had my VPS SSH server on port 22 with no fail2ban for slightly over a decade, exposed to the public internet (home server is behind tailscale, VPS hosts the stuff I always want accessible from everywhere). Bots try it, they fail, the end. Maybe I’m missing something, but I have yet to find a good reason for the added complexity.

jw_cook a year ago

At the end of the article, there's a link to a script[1] that does the steps covered in the article.

That got me thinking: how do other self-hosters/homelabbers here go about automating their server setups? None/purely manual? One big shell script? Multiple scripts wrapped in a Makefile (or justfile, or other command runner)? More enterprisey provisioning/automation tools like Ansible, Puppet, etc.?

[1] https://git.sovbit.dev/Enki/sovran-scripts

  • bpye a year ago

    I use NixOS on every machine I have running Linux. My config for every machine is in a git repo, and it is super easy to deploy changes via ssh. It took some work to get started - but I would never go back.

  • duckmysick a year ago

    I'm using a combination of pyinfra for provisioning and justfile for one-off operations. In fact, I also have separate pyinfra scripts for provisioning my desktop and laptops, so I can have a fresh install and they will set it up with proper apps and desktop environment settings.

    https://github.com/pyinfra-dev/pyinfra

    https://github.com/casey/just

  • atoav a year ago

    For my use cases I've found that just keeping an (updated) note with the things I would usually do works best. I don't deploy everything everywhere, and being manually aware of each step instead of hiding it within a script is somewhat a feature (e.g. you can easily insert a custom extra step).

    If I were doing basically the same thing over and over I'd probably go with a script, an Ansible playbook or similar, but as of now the manual route is totally fine.

    • ranger207 a year ago

      Yeah I just have a note with my steps because other than the real basic stuff (set IP and DNS, set hostname, install tmux/htop/vim) the rest depends on what exactly I'm doing with that server. I have other notes for common stuff that could probably stand to be automated but it's not worth the effort in a https://xkcd.com/1205/ sense. Like, having a checklist is necessary, but fighting bash or whatever other automation tool isn't necessarily valuable since I'm only standing up one server every few months at most

  • jmathai a year ago

    One big shell script has worked really well for me. One project on AWS ran the script when new EC2 instances with a particular tag/label were spun up and that's how we scaled horizontally.

    What's nice about it is that it doesn't require any specialized knowledge beyond bash - and that's something which is pretty easy to learn and great to know. It also attracts, IMO, the type of developers who avoid chasing new trends.

    • Sammi a year ago

      I have a folder of scripts. One main script that calls into the other scripts, just so I can keep my head straight. But one large script might work just as well for you.

      This sets up everything I need so I can treat my servers as livestock instead of pets - that is, so I can easily slaughter and replace them whenever I want, instead of being tied to them like a pet.
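      A minimal sketch of the marker-file pattern that keeps such a script (or folder of scripts) safely re-runnable - every name and path below is made up:

```shell
#!/bin/sh
set -eu

# Fresh demo state so the sketch is deterministic.
rm -rf /tmp/setup-done /tmp/demo-app
done_dir=/tmp/setup-done
mkdir -p "$done_dir"

# step NAME CMD...: run CMD once; a marker file records completion, so a
# re-run (e.g. after fixing a failed step) skips the steps already done.
step() {
  name=$1; shift
  if [ -e "$done_dir/$name" ]; then
    echo "skip: $name"
  else
    "$@"
    touch "$done_dir/$name"
    echo "done: $name"
  fi
}

step create-appdir mkdir -p /tmp/demo-app
step write-config sh -c 'printf "port=8080\n" > /tmp/demo-app/app.conf'
step create-appdir mkdir -p /tmp/demo-app   # second call prints "skip: create-appdir"
```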

  • everforward a year ago

    I’m using PyInfra [1] these days (no affiliation, just think it’s cool).

    It’s like Ansible, but you write Python directly instead of a YAML DSL. Code reuse is as simple as writing modules, importing them, and calling whatever functions you’ve written in normal Python.

    I find it almost as easy as writing a shell script, but with most of the advantages of Ansible like idempotency and a post-run status output.

    1: https://github.com/pyinfra-dev/pyinfra

  • ricardo81 a year ago

    I've written a couple, covering a bit of what's mentioned in the article but also setting up wordpress.

    Written in bash also

  • tobijkl a year ago

    Personally, I use Cloud-Init for automation. Its wide support across various cloud platforms and ease of use make it my go-to tool for initial server provisioning. It simplifies the process, allowing me to get things up and running quickly without needing additional dependencies.

  • Jedd a year ago

    Most of my server configuration is defined by Saltstack recipes.

    Most of my actual tools now are running in docker via Nomad.

tiffanyh a year ago

Love seeing devops post on HN.

Wish it included server monitoring as a section.

  • everforward a year ago

    Server monitoring for self-hosting is kind of hard because it basically necessitates either buying another server to monitor the first, or paying for SaaS.

    From personal experience, I would just pay someone else for a SaaS monitoring solution. It will almost universally be cheaper and more reliable.

    If you really wanted to run your own, Prometheus is probably the way to go. Local storage should be fine as a data store for self-hosted. Grafana can be used for dashboarding, and either Grafana or AlertManager can do the alerting component.

    It’s really not all that worth it for self-hosted scale, though. Running all that in the cloud is going to cost basically the same as buying a DataDog license unless you’re at 3-ish hosts, and more than that if you’re doing clustered monitoring so you aren’t blind if your monitoring host is down.
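    If you do go the self-hosted route, the Prometheus side is small; a minimal prometheus.yml sketch, assuming node_exporter running on its default port:

```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: node
    static_configs:
      - targets: ["localhost:9100"]  # node_exporter default port
```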

  • neitsab a year ago

    Funny how this kind of post is now called "DevOps", while 10 years ago it was simply called "system administration" ;-)

    Besides, I fail to see any DevOps tenets in it, quite the opposite: a shell script at the bottom offers little in the way of reliable automation.

    To me this post reads more like someone relatively new to server management wanted to share their gathered tips and tricks, i.e. me 10 years ago when I started my self-hosting journey :-D

remram a year ago

> Differential backups back up all the changes since the last full backup (...) An incremental backup backs up data that was changed since the last backup

I'm not sure I understand the distinction?

  • pudgyblues a year ago

    With differential backups there are only two artifacts: the full and the diff. If you make another differential backup you overwrite the previous diff, so it's always the changes since the last full backup.

    With incremental it's full backup + inc1 + inc2 + ... forever; each backup depends on the previous.

  • magicalhippo a year ago

    They both do delta backups, but incremental bases its delta on the previous backup, while differential bases it on the last full backup.

    To restore from an incremental you need the last full backup and all the incrementals in between. If you do, say, a full backup every month, you'd need up to 30 good incremental backup sets to be able to restore.

    For the differential you just need the last full backup in addition.

    Obviously the differential one might take more and more space, depending on the changes.

    • remram a year ago

      I see, thanks. I only use Restic so this is not relevant to me, but I think I understand the trade-off.

  • dsissitka a year ago

    Differential backups are always:

    Full Backup -> Differential Backup

    Incremental backups are:

    Full Backup -> Incremental Backup [-> Incremental Backup ...]

    At least that's how it is with Macrium.
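    The distinction can be demonstrated with GNU tar's --listed-incremental snapshot files: reusing the evolving snapshot gives incrementals, while starting each time from a copy of the level-0 snapshot gives differentials (the paths under /tmp/bdemo are made up; the sleeps just keep timestamps unambiguous):

```shell
#!/bin/sh
set -eu
base=/tmp/bdemo
rm -rf "$base"; mkdir -p "$base/data"; cd "$base"
echo one > data/a.txt

# Full (level 0) backup; snap.0 records the state it captured.
tar --create --file=full.tar --listed-incremental=snap.0 data

sleep 1
echo two > data/b.txt
cp snap.0 snap.inc
# Incrementals: delta vs the *previous* backup (the snapshot keeps evolving).
tar --create --file=inc1.tar --listed-incremental=snap.inc data   # contains b.txt
sleep 1
echo three > data/c.txt
tar --create --file=inc2.tar --listed-incremental=snap.inc data   # contains c.txt

# Differential: delta vs the last *full* backup (a fresh copy of snap.0),
# so it contains everything changed since the full: b.txt and c.txt.
cp snap.0 snap.diff
tar --create --file=diff.tar --listed-incremental=snap.diff data
tar --list --file=diff.tar
```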

AtlasBarfed a year ago

So... about docker, did they backtrack on their licensing landgrab?

About a year ago I swear everyone was going to podman, but in the last few months I see nothing but docker references.

Podman is supposed to be a drop-in replacement - well, so it was advertised. I haven't touched anything in six months.

  • bongobingo1 a year ago

    I think Podman is still pretty fringe in the grand count. It may also be that those who know about it and use it are less likely to write a blog post about it.

    I use it and much prefer it. Mostly because of rootless (I know Docker has made attempts to improve this in the last year or so), not futzing with my iptables, and better handling of pushing images between hosts (it's been over a year since I touched any of that infra; I just remember it being more of a hassle with Docker, which took an "our way or no way" approach).

    The biggest issue I have with Podman is the pace of its improvement against the rate of Debian releases!

  • dartos a year ago

    IIRC docker had some heat for their docker desktop licenses.

    I think podman is more secure and simpler, but not as ergonomic to have locally (it’s not quite a drop-in for docker. No real docker compose support, for example)

    Podman is the default for k8s last I heard

  • mkesper a year ago

    Only use docker engine ("moby"). Docker desktop makes no sense on a Linux system as it introduces a VM into the mix, further adding complexity and reducing performance. https://docs.docker.com/engine/

abhinavk a year ago

> You want to use SSH (Secure Shell) and make sure that SSH is the only way to log in.

Some distributions (like openSUSE) also enable KbdInteractiveAuthentication by default, so just disabling PasswordAuthentication won't work.

  • dsissitka a year ago

    This is one of those things I like to verify:

      david@desktop:~$ nmap -p 22 --script ssh-auth-methods becomesovran.com
      Starting Nmap 7.92 ( https://nmap.org ) at 2024-08-25 23:31 EDT
      Nmap scan report for becomesovran.com (162.213.255.209)
      Host is up (0.066s latency).
      rDNS record for 162.213.255.209: server1.becomesovran.com
    
      PORT   STATE SERVICE
      22/tcp open  ssh
      | ssh-auth-methods:
      |   Supported authentication methods:
      |     publickey
      |     gssapi-keyex
      |     gssapi-with-mic
      |     password
      |_    keyboard-interactive
    
      Nmap done: 1 IP address (1 host up) scanned in 0.86 seconds
      david@desktop:~$
    
    As far as I can tell AuthenticationMethods publickey is the right way to do it these days but I'd love to know if that's not the case.
    • yjftsjthsd-h a year ago

      I've just been doing

          ssh -v localhost echo 2>&1 | grep continue
      
      (obviously replacing "localhost" with whatever server you want, and you can put anything you want where "echo" is but that's the best no-op I've come up with)
  • _blk a year ago

    I'm a bit sceptical of the choice of port 2222 as an alternative. At that point you might as well leave 22. Otherwise it's a good intro. If you're serious about starting, post the sections into [insert AI service name] and start asking questions.

enkimin a year ago

Article author here. Glad some people found this useful, and to those with suggestions: I'll keep them in mind when updating the post.

Cheers.

chadsix a year ago

And for those of you that don't have an external IP, you can use services that provide egress for you like IPv6.rs. [1]

[1] I'm DevOps there! ;)

Cyph0n a year ago

Great post! I (relatively) recently switched my primary home server over to NixOS and am now a huge fan of it as a distribution for self-hosting.

Here is how setting this all up would look in NixOS (modulo some details & machine-specific configuration). It's <100 lines, can be executed/configured with a single CLI command (even from a different machine!), rolled back easily if things go wrong, and can be re-used on any NixOS machine :)

    {
      networking = {
        # Server hostname
        hostName = "myserver";
        # Firewall
        firewall = {
          enable = true;
          allowedTCPPorts = [ 80 443 2222 ];
        };
      };
    
      # Users
      users.users = {
        newuser = {
          isNormalUser = true;
          home = "/home/newuser";
          hashedPassword = "my-hashed-pwd";
          openssh.authorizedKeys.keys = [ "my-pub-key" ];
        };
      };
    
      # SSH
      services.openssh = {
        enable = true;
        ports = [ 2222 ];
        settings = {
          PermitRootLogin = "no";
          PasswordAuthentication = false;
          AllowUsers = [ "newuser" ];
        };
        extraConfig = ''
          # Use only SSH protocol version 2
          Protocol 2
          # Limit authentication attempts
          MaxAuthTries 3
          # Client alive interval in seconds
          ClientAliveInterval 300
          # Maximum client alive count
          ClientAliveCountMax 2
        '';
      };
    
      services.fail2ban.enable = true;
      
      # Nginx + SSL via LetsEncrypt
      services.nginx = {
        enable = true;
        recommendedOptimisation = true;
        recommendedProxySettings = true;
        recommendedTlsSettings = true;
        virtualHosts = {
          "example.com" = {
            locations."/" = {
              proxyPass = "http://localhost:8080";
              proxyWebsockets = true;
            };
            forceSSL = true;
            enableACME = true;
          };
        };
      };
      security.acme = {
        acceptTerms = true;
        defaults.email = "myemail@gmail.com";
        certs."example.com" = {
          dnsProvider = "cloudflare";
          environmentFile = ./my-env-file;
        };
      };
    
      # Logrotate
      services.logrotate = {
        enable = true;
        configFile = pkgs.writeText "logrotate.conf" ''
          /var/log/nginx/*.log {
            weekly
            missingok
            rotate 52
            compress
            delaycompress
            notifempty
            create 0640 www-data adm
            sharedscripts
            postrotate
                [ -f /var/run/nginx.pid ] && kill -USR1 `cat /var/run/nginx.pid`
            endscript
          }
        '';
      };
    
      # Bonus: auto-upgrade from GH repo
      system.autoUpgrade = {
        enable = true;
        flake = "github:myuser/nixos-config";
        flags = [
          "-L" # print build logs
          "--refresh" # do not use cached Flake
        ];
        dates = "00:00";
        allowReboot = true;
        randomizedDelaySec = "45min";
      };
    }
  • lytedev a year ago

    +1 for NixOS. Amazing for self-hosting and everything-management.

    Getting into it has a learning curve, but it's honestly so much easier in a lot of ways, too.

    • ash-ali a year ago

      Why would NixOS be good for self-hosting and everything-management?

      I recently tried to get into NixOS for the sake of learning something new. Struggling to find a proper reason to use this as a personal daily-driver.

      • nh2 a year ago

        Because it offers composable configuration.

        With Docker or Ansible you usually get a "snapshot" of a system where you can't easily and automatically change the implementation details from outside; you would have to run some unreliable script afterwards that "fixes up" the system by further mutation.

        For example, let's say Ansible generates you a big nginx config (which is a single text file) but it does not enable some setting you want, e.g. transparent gzip compression for every virtualhost.

        With Ansible, you now have to use string replacement on the generated config file, which is very error prone.

        With NixOS, you generate a declarative config tree from which the whole system is rendered once. You can import somebody else's nginx settings and then structurally override parts of its config tree from your config, thus composing your config with one you use "as a library".

        That includes sophisticated things like "map over all virtualhosts existent (some of which may be declared by the library) and add gzip settings to them".

        In other words, all programs' text based config of various formats become a single JSON-like tree that you can apply functional programming transformations on, and this enables real composable code sharing which does not work well for Ansible or Docker.

        This also makes it easy to follow updates of your base library, and apply them to your running system without having to regenerate e.g. a docker image. For example, you can easily declare "always use the latest Linux kernel but apply my custom patch", and enable auto-updates. This means you won't run at risk of vulnerabilities because your manual kernel patching disabled automatic updates, like it does for e.g. Debian pinned packages.

        Overall it means it's easy to configure, customise, and update stateful systems, which self-hosting always requires.

  • mkleczek a year ago

    This single comment is the best illustration of why I should finally move to Nix. Thanks!

nihilius a year ago

Here are some "First Things on a Server" Notes. https://gist.github.com/klaushardt/07f5e3068355aafc2dce660a5...

Ansible/Puppet or NixOS would be better, but this is what works for self-hosting.

Sandbag5802 a year ago

Hey I just want to say thank you for the write up. I just got into the hobby of self hosting my own applications and it's quite a bit. I appreciated your sections about logging and user management.

davidmitchell2 a year ago

While these seem to be secure... tampering with default settings always causes a PITA, especially during automated upgrades. In addition, SSH port changes are just security through obscurity.

  • Sammi a year ago

    Just moving off well-known ports will mean less drive-by sniffing, which is an improvement. It doesn't mean you are now completely safe - it's just an improvement. At the very least it will make your logs smaller, as they won't be as full of drive-by sniffing.

    Security is an onion: you can add layers. There is no perfect security. You can add hurdles and hope you make yourself too difficult for your adversary. Some hurdles add more than others, and not using well-known ports is on the lesser end of the scale. You might still find it worthwhile, just so you have cleaner logs to sift through.

thijsb a year ago

Why the `sudo ufw allow outgoing`? Wouldn't it be worth to deny all to prevent extrusion and only open ports for services that need to communicate externally?
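A default-deny egress policy is indeed possible; a ufw sketch (the allowed ports are examples - adjust to what the host actually needs, and note that locked-down egress can break apt, NTP, and ACME renewals):

```shell
sudo ufw default deny outgoing
sudo ufw allow out 53/udp     # DNS
sudo ufw allow out 53/tcp
sudo ufw allow out 123/udp    # NTP
sudo ufw allow out 80/tcp     # apt mirrors, ACME HTTP-01
sudo ufw allow out 443/tcp    # HTTPS
```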

ValtteriL a year ago

A monitoring setup with for instance Prometheus+Grafana would be a great addition to this.

And then maybe automating all of it with something like Ansible.

voidUpdate a year ago

With SSH keys, do you have to just carry around your private key everywhere on a USB drive or something if you want to connect from multiple locations? Sometimes I find myself somewhere I've never been and I want to connect to my server. With a password that's easy - I just type it in - but I can't exactly create a new private key to access the server when I don't have access to the server in the first place.

  • SSLy a year ago

    Some password managers expose SSH agent socket, so you can do that instead of fumbling with files.

  • fragmede a year ago

    password managers can store SSH private keys, though there's still the problem of: you've been mugged, have no cellphone, wallet, or keys, and need to break back into your life.

crdrost a year ago

Qq, do people doing their own server setup like this use containerization at all?

When I looked at it, it was like “yeah you can run Docker or k3s,” and I think Hashicorp had their own version, but it seemed like folks didn't really bother? Also like setting up virtual networks among VPSes seemed like it required advanced wizardry.

  • lolinder a year ago

    I have an old desktop with Debian where I run a bunch of random things (Home Assistant, Pihole, Seafile, Jellyfin, ...) and I run everything with Podman Compose.

    I have enough things that I'm 100% confident I'd have run into dependency issues by now without containerization, but with Docker files it's trivial to keep them separate. As a bonus, compose.yml files are basically the lingua franca for describing deployments these days, so you can almost always find an example in the official docs for any given service you might want to host and get lots of help.
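    As an illustration of that lingua franca, a minimal compose.yml for one such service (Jellyfin shown; the image tag and host paths are placeholders to adapt):

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    ports:
      - "8096:8096"          # Jellyfin's default web UI port
    volumes:
      - ./config:/config
      - ./media:/media:ro    # media mounted read-only
    restart: unless-stopped
```

    The same file works with `docker compose up -d` and with podman-compose.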

  • ozr a year ago

    > do people doing their own server setup like this use containerization at all?

    Depends on what you're deploying, really.

    If it's one Go service per host, there's no real need. Just a unit file and the binary. Your deployment scheme is scp and a restart.

    For more complicated setups, I've used docker compose.

    > Also like setting up virtual networks among VPSes seemed like it required advanced wizardry.

    Another 'it depends'.

    If you're running a small SaaS application, you probably don't need multiple servers in the first place.

    If you want some for redundancy, most providers offer a 'private network', where bandwidth is unmetered. Each compute provider is slightly different: you'll want to review their docs to see how to do it correctly.

    Tailscale is another option for networking, which is super easy to setup.

    • cassianoleal a year ago

      There’s rarely, if ever, a _need_ for containerisation. Even for a single static binary though, there are benefits like network and filesystem segregation, resource allocation, …

  • ranger207 a year ago

    Depends. Right now I mostly run 1 VM per app stack (eg web server & DB on the same VM) if it supports it, or if it's a single container I have a VM for all of those, or if it's a Docker Compose stack it'll get its own VM. So I'm mostly just using containers as a packaging solution. But I want to learn more about k8s so one of these days I'm going to move everything over to containers (that'll come when I refresh my hardware)

  • dsissitka a year ago

    I do. Every app I use is run by a dedicated user in a rootless container.

    But I'm also one of those weirdos that does all of their development in a VM. I might be a tiny bit paranoid.

    > Also like setting up virtual networks among VPSes seemed like it required advanced wizardry.

    Did you try Nebula? Once you get the hang of it it's pretty simple.
