Make systemd better for Podman with Quadlet

redhat.com

182 points by vyskocilm 3 years ago · 87 comments

asabil 3 years ago

This is really neat. I have been using `podman generate systemd` for a large number of deployments. This just makes it so much simpler.

For anyone wondering, the main difference between this and docker/docker-compose is that podman can run in a daemonless mode, such that containers run directly under systemd, which makes them integrate into the existing systemd infrastructure and appear like any other normal service.
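
For context, the pre-Quadlet workflow I'm replacing looks roughly like the sketch below (container name and image are just examples):

    # create a container once, then generate a unit that recreates it with --new
    podman create --name web -p 8080:80 docker.io/library/nginx:latest
    podman generate systemd --new --files --name web   # writes container-web.service
    mv container-web.service ~/.config/systemd/user/
    systemctl --user daemon-reload
    systemctl --user enable --now container-web.service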

  • bravetraveler 3 years ago

    For those curious why you may want this: consider your service relies on /somelocation

    Make that a mount unit in systemd (free from lines in /etc/fstab) and now you can accurately lay out your service's requirement/dependency on this filesystem.

    I know systemd gets flak for overreach 'as an init system', but there's a reason - initialization doesn't happen in a vacuum.

    Services need filesystems, networks, etc. to function.
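
    A rough sketch of what that looks like (unit names and paths here are invented):

        # /etc/systemd/system/srv-appdata.mount -- the unit name must match the escaped path
        [Mount]
        What=/dev/disk/by-label/appdata
        Where=/srv/appdata
        Type=ext4

        # and in the service (or quadlet-generated) unit:
        [Unit]
        RequiresMountsFor=/srv/appdata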

cpitman 3 years ago

This looks quite nice. I run a server which is already RHEL+podman+generated systemd units, but this is both simpler and more declarative/idempotent than my current setup. Anything that helps convince people that containers running on a single server can be simple, and doesn't require an entire k8s stack.

  • creshal 3 years ago

    It's interesting that Red Hat promotes this, despite having already invested quite a lot into systemd-nspawn for roughly the same purpose. Between it and lxd there are already some options for these sorts of simple, single-host container runtimes.

    • mytailorisrich 3 years ago

      Podman is also Red Hat so they win either way.

      Anyway, I think that Podman is the mature Docker and tries to fit much better into the Linux/Unix-y way of doing things. In particular, being daemonless allows it to integrate into systemd etc. the way it should, and makes for a mature integration of containers into the ecosystem.

  • theteapot 3 years ago

    > Anything that helps convince people that containers running on a single server can be simple, and doesn't require an entire k8s stack.

    Docker?

    • IceWreck 3 years ago

      Docker compose isn't exactly meant for production

      • dusanh 3 years ago

        The company I used to work for used docker-compose to serve its SaaS product to 100+ clients. I wasn't part of that team, but it seemed to me they were quite happy with it.

        • KronisLV 3 years ago

          Can second this: I've seen companies that wanted the benefits of containers but didn't need orchestration yet do just fine with Docker Compose in prod.

          Of course, when orchestration became a necessity, almost everyone looked in the direction of Kubernetes, as opposed to something like Nomad or Swarm, probably due to its popularity.

      • theteapot 3 years ago

        See the other comments. You can also use docker-swarm with just a single master node.

      • jpitz 3 years ago

        tl;dr it wasn't before, but now it is.

        It is true that there once was a disclaimer on the compose homepage that stated that the product was not recommended for production workloads.

        That disclaimer no longer exists, which, along with the existence of [1], leads me to advocate using it in production.

        [1] https://docs.docker.com/compose/production/

PenguinCoder 3 years ago

I heard you like abstract tools to do stuff, so I added an abstract tool to your tool to manage abstract tooling.

  • pxc 3 years ago

    This is actually a pretty natural fit, imo. Docker containers are basically processes with fat runtimes, and container orchestration layers are essentially glorified process supervisors.

    At the same time, most Linux systems already come with a pretty fancy process supervisor. Personally, I think writing systemd units from scratch is already pretty easy. But it makes sense that Linux software which often integrates with (essentially) process supervisors would want painless integration with systemd!

    Also, in some ways I think this is simpler. Anyone who has used a reasonably modern Linux likely has some systemd experience. For local testing and 'orchestration', why rely on some additional one-off layer like docker-compose when the operating system's built-in process supervisor has all of the facilities you need?
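
    To illustrate, a hand-written unit that supervises a container is only a handful of lines; this is a sketch, not what quadlet would generate, and the names are made up:

        # /etc/systemd/system/web.service
        [Unit]
        Description=nginx in a container
        Wants=network-online.target
        After=network-online.target

        [Service]
        ExecStartPre=-/usr/bin/podman rm -f web
        ExecStart=/usr/bin/podman run --rm --name web -p 8080:80 docker.io/library/nginx:latest
        ExecStop=/usr/bin/podman stop web
        Restart=on-failure

        [Install]
        WantedBy=multi-user.target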

    • imiric 3 years ago

      > Also, in some ways I think this is simpler.

      Not really. Being simpler would be managing containers directly with systemd, as with systemd-nspawn. Why do I need to use a container manager in systemd for something systemd can already do directly? This integration with Podman is Red Hat's way of promoting the tool to stay relevant, but it's not actually simpler.
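
      For comparison, the "directly with systemd" route looks roughly like this (the machine name is made up, and you have to provide a root filesystem yourself, e.g. via debootstrap):

          sudo systemd-nspawn -D /var/lib/machines/mycontainer -b          # boot it interactively once
          sudo systemctl enable --now systemd-nspawn@mycontainer.service   # then supervise it like any unit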

    • groestl 3 years ago

      > container orchestration layers are essentially glorified process supervisors.

      There is a little more to container orchestration runtimes. I'd say that at this time they are akin to a badly designed, distributed linker. (I'm saying this as someone who did not fully buy into this stuff, but I see that it solves some problems.)

  • dilyevsky 3 years ago

    If it still can't send and read email, there's room for improvement

  • quickthrower2 3 years ago

    Terraform run from CI pipeline enters the chat

messe 3 years ago

Huh, this is exactly what I've been looking for recently for some local/home-network/homelab setups, even down to the use of systemd-like ini/toml syntax.

INTPenis 3 years ago

Can this finally replace docker-compose? Because to me that's been the biggest void in the whole podman ecosystem so far. (And no, if you've tried podman-compose you wouldn't recommend it)

  • bongobingo1 3 years ago

    I think this can replace docker-compose in deployment - if that's how you deploy and are willing to alter your workflow slightly.

    I've done this for a while on small or disconnected systems; systemd + podman is very nice, and the regular unit file generators are very usable and modifiable.

    From the development side, the issue is that unit files must be "installed"; I can't just have a set of `x.service`/`y.service` files and run `systemctl start $(pwd)/x.service`, so the overhead is a bit awkward there.

    `podman play kube` is sort of there, except it doesn't support some networking options in the kube file, so it's not a complete replacement.

    Podman also includes support for running kube files via systemd, but I don't use that myself.

    I think ideally kube YAML files with some extra Podman annotations are the compose replacement, even if writing them isn't as pleasant as compose files. Then you could `podman play kube x` to boot the dev stack and use the podman-kube systemd template to deploy.
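
    For what it's worth, if your podman ships the podman-kube@ template, the deploy side can be as small as this (the YAML path is made up, and I haven't checked which versions include the template):

        # dev: bring the stack up and down straight from the YAML
        podman play kube ~/stacks/dev.yaml
        podman play kube --down ~/stacks/dev.yaml

        # deploy: run the same YAML under systemd via the shipped template unit
        systemctl --user start podman-kube@$(systemd-escape ~/stacks/dev.yaml).service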

    • INTPenis 3 years ago

      > I think ideally kube YAML files with some extra Podman annotations are the compose replacement, even if writing them isn't as pleasant as compose files.

      People have been telling me this for years now and I have yet to see a working example.

      • bongobingo1 3 years ago

        Yes, what I mean is, ideally kube files + podman would be a sufficient replacement. Not drop-in, but close enough; it's just not there ergonomically (yet?).

  • hosh 3 years ago

    Not really. This is more to replace kubelet if you want something with an even smaller footprint than k0s.

    • INTPenis 3 years ago

      Yeah I didn't really think before I typed that. If it requires you to install systemd files then it's not really equivalent.

      I don't understand why they made it so complicated; if you have a file format, just let the user run it from their CWD.

      • bonzini 3 years ago

        Because the idea is that this is a declarative configuration, not something that the user has to run. You first write the configuration, and then running it is just "systemctl start foo".

        Generators are the same mechanism by which systemd reads /etc/fstab; just like /etc/fstab entries are treated by systemd as "normal" mount units, systemd will treat .container files just like any other system service.
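
        Concretely, a .container file is just a tiny unit-like file that the generator expands into a full service; a minimal sketch (file name and image are made up):

            # /etc/containers/systemd/web.container
            [Container]
            Image=docker.io/library/nginx:latest
            PublishPort=8080:80

            [Install]
            WantedBy=multi-user.target

            # after "systemctl daemon-reload" this shows up as web.service,
            # so "systemctl start web" works like any other unit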

        • INTPenis 3 years ago

          Yeah, they have invented the declarative file format, but they put it in /etc; why not put it in $HOME/.config/systemd instead? That would make it more on-par with docker compose.

          I'm hoping this is in the future of quadlet. Being able to run the files from CWD, using systemd units in $HOME, and not requiring root.

          • rhatdan 3 years ago

            It works in the home directory. Just place the quadlet file in $HOME/.config/containers/systemd, then:

                $ systemctl --user daemon-reload
                $ systemctl --user start QUADLETNAME.service

            If you want this to work at boot, you need to do loginctl enable-linger $USERNAME

          • alexlarsson 3 years ago

            For rootless use, put the files in ~/.config/containers/systemd/

            • INTPenis 3 years ago

              Thanks I just realized that.

              So really for a developer it would potentially be this simple:

                  cp my-app.container $HOME/.config/containers/systemd && systemctl --user daemon-reload
              
              Just to compare with docker-compose again.

              • bongobingo1 3 years ago

                You should be able to use `systemctl --user link` which is a bit nicer than copying.

                       link PATH...
                           Link a unit file that is not in the unit file search path into the unit
                           file search path. This command expects an absolute path to a unit file.
                           The effect of this may be undone with disable. The effect of this command
                           is that a unit file is made available for commands such as start, even
                           though it is not installed directly in the unit search path. The file
                           system where the linked unit files are located must be accessible when
                           systemd is started (e.g. anything underneath /home/ or /var/ is not
                           allowed, unless those directories are located on the root file system).

              • config_yml 3 years ago

                The cool thing is you can also create .kube file which points to a kubernetes pod definition yaml. This also generates a service definition which takes care of running a full pod with all your containers.
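
                Roughly like this, if I read the docs right (file names and paths are made up):

                    # ~/.config/containers/systemd/myapp.kube
                    [Kube]
                    Yaml=/home/me/myapp/pod.yaml

                    [Install]
                    WantedBy=default.target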

              • config_yml 3 years ago

                Can you run your stuff on port 80/443 like this?

            • bonzini 3 years ago

              Alex, since you're here does quadlet support override files like systemd's /etc/systemd/system/foo.service.d directories? I couldn't find it in the documentation.

              • alexlarsson 3 years ago

                The generator doesn't do that atm, no. Seems like it would be useful though.

                On the other hand, I believe systemd would load override files for the generated .service file, although those can only override details on the systemd level, not the generated podman command.
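
                For the systemd-level part, a normal drop-in should do it; a sketch, assuming a quadlet file named my-app.container (so the generated unit is my-app.service):

                    mkdir -p ~/.config/systemd/user/my-app.service.d
                    cat > ~/.config/systemd/user/my-app.service.d/override.conf <<'EOF'
                    [Service]
                    # tweak things systemd controls (env, limits, restart policy),
                    # not the generated podman command line
                    Environment=FOO=bar
                    Restart=always
                    EOF
                    systemctl --user daemon-reload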

  • riedel 3 years ago

    >Similar to Compose or Kubernetes files, you can declare what you want to run without having to deal with all the complexities of running the workload.

    Seems to be part of the idea. However, I personally have a bit of a hard time imagining this for the average developer. Maybe it will have the nice side effect of making me dig further into systemd. However, most of the compose stuff I used had to do with networks and mounts; I wonder how to declare those in a systemd manner.
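
    From a quick look at the docs, networks and mounts seem to map onto keys in the [Container] section; a rough sketch (names, paths and the exact key set may not match every quadlet version):

        # ~/.config/containers/systemd/myapp.container
        [Unit]
        RequiresMountsFor=/srv/myapp/data

        [Container]
        Image=docker.io/library/nginx:latest
        PublishPort=8080:80              # compose "ports:"
        Volume=/srv/myapp/data:/data:Z   # compose "volumes:"
        Network=host                     # or a named podman network

        [Install]
        WantedBy=default.target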

    • hosh 3 years ago

      I set up and deploy infra at my company, and I do it with Kubernetes.

      I like the self-healing aspects of Kubernetes, but even something like k0s has a large, 1GB footprint that I don't want to have for my self-hosted personal projects.

      Using podman and quadlet looks like it solves exactly what I want -- just enough kubernetes on a very small footprint.

      This is not a replacement for docker-compose. I've never found a good use for that in infra because it lacks self-healing, so it stayed in the dev stack. If I was more proficient with Nix, I'd probably use that instead of docker-compose.

    • creshal 3 years ago

      It feels more like an alternative for the lxc/lxd/nspawn crowds who're using long-running containers in fixed setups, rather than ad hoc spinning up of a bunch of related containers for one particular (and temporary) task.

  • davewood 3 years ago

    why wouldn't you recommend podman-compose?

    • INTPenis 3 years ago

      I haven't used it for well over a year now but last time it had issues with networking.

config_yml 3 years ago

I was just cursing a lot setting up a single node with my default stack.

I’m going to try this tomorrow, because containers are so useful, but I just don’t want to deal with K8s on anything that I run myself.

  • dicknuckle 3 years ago

    I just use docker compose at home for about 15 containers and it works great.

  • MuffinFlavored 3 years ago

    not even k3s?

    • bollos 3 years ago

      Does Kubernetes support swap files yet? Last time I checked it was still a beta flag :(

      • hkt 3 years ago

        Yep! Last I checked kubelet will just refuse to work if you try to give it swap from "underneath", too.

        I work with data science tools that require 16G of RAM each for hundreds of users, and Kubernetes was an appalling choice of platform for it. It has cost the org millions a year more than it needed to, given the actual usage profiles involved. Unsurprising that big contributors to k8s have been... companies selling compute.

mike_hearn 3 years ago

Interesting!

For my own servers I use an internal tool that integrates apps with systemd. You point it at the output of your build system and a config file, and it produces a deb that contains systemd unit files and which registers/starts the server on install/reboot/upgrade, as a regular debian package would. Then it uploads it to the server via sftp and installs it using apt, so dependencies are resolved. As part of the build process it can download and bundle language runtimes (I use it with a JVM), it scans native binaries to find packages that the app should depend on, and you can define your config including package metadata like dependencies and systemd units using the HOCON language [1].

Upshot is you can go from native binaries/Gradle/Maven to a running server with a few lines of config. Oh and it can build debs from any OS, so you can push from macOS and Windows too. If your server needs to depend on e.g. Postgres, you just add that dependency in your config and it'll be up and running after the push.

It also has features to turn on DynamicUser and other sandboxing features. I think I'll experiment with socket activation next, and then bundled BorgBackup.

Net/net it's pretty nice. I haven't tried with containers because many language ecosystems don't seem to really need them for many use cases. If your build tool knows how to download your language runtime and bundle it sans container by just setting up paths correctly, then going without means you can rely on your Linux distribution to keep things up to date with security patches in the background, and it means networking works as you'd expect (no accidentally opened firewall ports!) and so on. systemd knows how to configure resource isolation/cgroups and kernel sandboxing, so if you need those you can just write that into your build config and it's done. Or not, as you wish.
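
For reference, the systemd hardening knobs mentioned above are just unit directives; a representative fragment (values are illustrative, not what my tool emits verbatim):

    [Service]
    DynamicUser=yes
    NoNewPrivileges=yes
    ProtectSystem=strict
    ProtectHome=yes
    PrivateTmp=yes
    MemoryMax=2G
    CPUQuota=200%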

With a deployment tool to automate builds/pushes, systemd to supervise processes and a big beefy dedicated machine to let you scale up, I wonder how much value the container part is really still providing if you don't need the full functionality of Kubernetes.

[1] https://github.com/lightbend/config/blob/main/HOCON.md

zephyros 3 years ago

I'm using the podman ansible module[1] to manage podman containers atm; it's ... OK-ish. I wrote a spaghetti mess of Ansible conditionals and loops to manage a multitude of systemd files generated by podman-generate-systemd. If I have some time maybe I'll try this out; a more declarative approach would certainly be nicer.

[1]: https://github.com/containers/ansible-podman-collections

  • bonzini 3 years ago

    I do something similar but I don't use podman-generate-systemd; instead I create the systemd service by hand using a Jinja template[1], and then start the service[2]. This has the advantage that there's no window where the container is running but the systemd configuration has not been updated yet.

    Either way, it's indeed quite tempting to use quadlet instead of the nasty templates that build the podman commandline.

    I also want to check if quadlet supports override files like systemd's, because that would be quite interesting as a customization mechanism that does not require forking the playbooks.

    [1] https://github.com/patchew-project/patchew/blob/master/scrip...

    [2] https://github.com/patchew-project/patchew/blob/master/scrip...

Klasiaster 3 years ago

The auto-update and rollback stuff mentioned there also looks nice: https://www.redhat.com/sysadmin/podman-auto-updates-rollback...
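
The moving parts, roughly (a sketch; the rollback behaviour depends on the health-check/sdnotify setup described in the article, and the image name is made up):

    # in the quadlet .container file (or --label io.containers.autoupdate=registry on podman run)
    [Container]
    Image=registry.example.com/myapp:latest
    AutoUpdate=registry

    # then enable the shipped timer, or trigger a run by hand
    systemctl enable --now podman-auto-update.timer
    podman auto-update --dry-run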

candiddevmike 3 years ago

Odd choice using systemd syntax willingly when all other industry tools use YAML, IMO

  • viraptor 3 years ago

    It's a tool that replaces your usual long systemd unit file with a smaller systemd unit file without boilerplate. Seems like a perfectly good choice to me.

  • thayne 3 years ago

    For this, systemd syntax is a lot better than yaml IMO.

    In fact, I'd prefer if the tools you mentioned used something else besides yaml.

    • messe 3 years ago

      > In fact, I'd prefer if the tools you mentioned used something else besides yaml.

      Unfortunately, there norway way that's going happen.

    • pzmarzly 3 years ago

      > In fact, I'd prefer if the tools you mentioned used something else besides yaml.

      To be fair, most of the popular DevOps tools can work with JSON instead of YAML just fine. And JSON can be easily generated from almost anywhere. I don't think you can work with systemd syntax as easily.

      • gh02t 3 years ago

        Out of curiosity, do you know any tools that can't use JSON but can use YAML? JSON is supposed to be valid YAML.

        • erik_seaberg 3 years ago

          Every YAML parser supports JSON (maybe modulo a couple of weird Unicode chars), but YAML also has features (tags, start/end document) that you can’t express in JSON, so it depends on whether the tool relies on seeing those.

  • 0xC0ncord 3 years ago

    It seemed to me that the original target audience for this is users already familiar with systemd's unit syntax, so making these "unit files" use it as well was a sensible choice.

  • qbasic_forever 3 years ago

    You can apparently use Kubernetes YAML with a .kube target and podman will orchestrate the containers, services, and volumes appropriately (on a single node). The only systemd config is a minimal bit of boilerplate pointing at the Kubernetes YAML.

  • darthrupert 3 years ago

    YAML is such a poor default choice for most things that it's not odd at all to want to improve on it. In the same vein, people used to say that JSON was a bad format to use because everyone just used XML.

  • userbinator 3 years ago

    Odd to call it "systemd syntax" when it predates the invention of systemd by a few decades: https://en.wikipedia.org/wiki/INI_file

  • throwaway892238 3 years ago

    Red Hat basically sponsored the development of systemd (among other things), so their full-throated support isn't surprising.

    also, fwiw, YAML is a data serialization format, not a configuration format. people who use YAML and pretend it's a config file format are either lazy, incompetent, or both.

    • candiddevmike 3 years ago

      The industry disagrees with your hot take. Configurations will almost always be serialized into some kind of struct; YAML/JSON/TOML/etc. work well for this. The neat part is you can use other tools to do the templating to YAML more intelligently, instead of having each tool DIY its own config templating setup.

      • xyzzy_plugh 3 years ago

        Hot take? It's literally a data serialization language. Configuration != data serialization for, like, all of time.

        No one is mincing words about YAML being used for configuration rather rampantly. They're saying it's a shitshow, and I agree.

        > Configurations will almost always be serialized into some kind of struct

        No, this is in fact a fairly modern phenomenon due to what I can only imagine is fear of compilers. The grand tradition is for configuration to be parsed and interpreted, which is rather distinct from serialization. Take your editor, or version control software of choice, or nearly every file in /etc, for example.

        Standardizing things can be good, sure, and for all the warts YAML is at least consistent, usually, but it's a trend in the wrong direction.

        Treating configuration as data is a Choice. I can hear them saying now "oh but being able to template or write a program to manipulate or edit configuration is so much better" -- take a look at how git config works. You can edit interactively with the CLI, read/write arbitrary fields, and yet it's not structured beyond INI style "key = value". There's no schema, and yet nothing is lost. There's certainly no Norway problem.

    • db48x 3 years ago

      Possibly they are deliberately abusive towards their users.

trialect 3 years ago

The unfortunate thing is that podman creators do not give a damn about how their binary should be run on different linux distros.

RH being RH, only RH (and derivatives) supports the latest podman. For example, on Ubuntu LTS you cannot run podman 4.4 and you will never have the possibility to run it. Maybe in 5 years the Ubuntu/Debian repos will be updated to contain podman 4.4, but until then you are stuck with whatever version your distro has.

  • jeroenhd 3 years ago

    Just wait till you see how hard it is to use the arch repos on puppy linux!

    The Red Hat folks develop software for Red Hat. The software will run fine on any other distro with up-to-date kernel and systemd versions, but there's no guarantee that it does, because it's not Red Hat's business to work on the OS of their competitors.

    If Debian and Ubuntu are too slow to update, that's completely out of Red Hat's control. They chose to pin an older version of a piece of software developed on a much more rolling release schedule; it's up to them to fix the incompatibilities their choice introduced. The whole point of an LTS is that you use one older version for several years.

    I expect Podman 4.4 to be available in Ubuntu 23.10, as 23.04 is a bit close (current repos list 3.4.4, the version used in 22.04 and 22.10). If Ubuntu can't move fast enough to include it in 23.10, then that's Ubuntu's fault more than anything. You should also consider that Canonical sells their own competing container ecosystem (Charmed/microk8s) to businesses, so not supporting their competitors' software may be intentional.

    If you want Podman 4.4 but don't want to use Red Hat distributions, Arch and derivatives already have it ready to go. You'll also get much more recent versions of the Linux kernel and systemd as a nice bonus.

  • doix 3 years ago

    I mean, Arch has 4.4.1-12 in their repo right now [0]. I don't really get the argument: why is it the software developers' fault that distros have old packages? Of course LTS versions of Ubuntu won't have bleeding edge software; that would defeat the purpose.

    [0] https://archlinux.org/packages/community/x86_64/podman/

    • trialect 3 years ago

      So the software developers cannot make a version that can be run on any Linux distro? (With or without packaging.)

      (Oh, and also, you mean that is a community package - meaning unsupported.)

  • rhatdan 3 years ago

    Podman is a community project. Anyone can set up repos to update any distribution. Many distributions are managing versions of Podman: openSUSE, Fedora, CentOS, RHEL, Debian, and Arch all supply updates. There is also the Kubic project, in which community members provide versions for Ubuntu.

    Red Hat developers primarily work in the upstream. There are also Red Hat engineers who work on packaging for Fedora, RHEL and CentOS Stream, as well as clients for Windows and Mac. We work with Fedora to provide CoreOS images for Windows and Mac.

    Red Hat engineers work with the community to support the other distributions, but they don't guarantee support for all other distributions or versions of distributions.

  • bravetraveler 3 years ago

    It's Red Hat's fault that Ubuntu is an LTS based on Debian unstable?

    LTS doesn't only mean long term stability - long term suck applies, too.

    The only thing preventing podman from working is the age of their source, which is a deliberate choice -- LTS

  • hnarn 3 years ago

    > you will never have the possibility to run it

    Can you elaborate on why such a categorical statement is true?

    What about https://mpr.makedeb.org/packages/podman ?

    • trialect 3 years ago

      https://podman.io/getting-started/installation

      "The podman package is available in the official repositories for Ubuntu 20.10 and newer."

      "CAUTION: The Kubic repo is NOT recommended for production use. Furthermore, we highly recommend you use Buildah, Podman, and Skopeo ONLY from EITHER the Kubic repo OR the official Ubuntu repos. Mixing and matching may lead to unpredictable situations including installation conflicts."

      Also the Kubic repo is old.

      I don't know what makedeb is, but of course anyone can make .deb packaging for anything; that does not mean it is supported in any way (not to mention if a package has several other package dependencies, and those also have to be packaged carefully).

      Also see:

      https://github.com/containers/podman/discussions/17362

      https://github.com/containers/podman/issues/14065

      https://github.com/containers/podman/discussions/13097

      • hnarn 3 years ago

        You originally said that:

        > podman creators do not give a damn about how their binary should be run on different linux distros

        Just to play the devil's advocate here, maybe I missed something so I'll try and be verbose and start from the beginning: Podman is developed by Red Hat, and they have chosen to build for, and support RHEL (and implicitly derivatives thereof). There are no "supported" binaries available for $DISTRO because Red Hat has decided not to spend money on supporting, developing and testing for that specific distribution.

        Podman is licensed under Apache 2.0 which means that it would be possible for anyone (for example Canonical, who are "responsible" for Ubuntu, or volunteers) to build and test the code for their distribution.

        Doesn't it follow then that the responsibility for making Podman available on Ubuntu falls on either Canonical or volunteers that use Ubuntu, and not Red Hat? Otherwise, you could blame any developer on any software for not making their code available on any distro, and perhaps even any OS.

        Makedeb is the Debian variant of AUR[1], which allows users to (more) easily compile software that they want but is not available in "regular" repos, so it could be a way to run a newer version of podman on Debian. I haven't tried it, but I believe the idea of these "handheld compilations" is to include the things you express worries about, like dependencies.

        I read the links you provided, and "baude" (maintainer) stated sort of what I said above:

        > we rely on community support for distributions support

        lsm5 said:

        > issues are best reported at Ubuntu's official bug tracker

        While I can understand the frustration, or disagreement with the decision, regarding the fact that Podman is not equally available for Ubuntu (or any other distro), I don't really agree that the Podman developers themselves (or RH) are more responsible for this than, say, Canonical or the users themselves.

        [1]: https://aur.archlinux.org/

        • trialect 3 years ago

          Recently I came across a couple of projects on GitHub that make a binary available through Docker AND the so-called 'bare metal' route (an expression I hate, because up until recently [ok, ok, a couple of years ago] there wasn't any other method than just running it as it is on the hardware/OS), meaning you can run it on any Linux distro (without Docker, of course). So open source developers certainly can make software that runs on any (or at least most) Linux distros, especially when there's a big corp behind them.

          What's more, Podman especially is about running software on different distros easily.

          What I'm expecting from RH is to make software (if it is free and open source and about running other software without the hassle of packaging, etc.) that can be - sort of easily - used on other distros too. But just to be clear, this expectation is not only towards RH; it applies to any other Linux distro as well. In this particular case it just happens to be RH.

          The whole idea behind Podman is great (especially not having to have a root daemon to run containers), but if they want it to succeed they need a proper and easy way for users of other Linux distros to use it.

          and yes, they also said in https://github.com/containers/podman/discussions/13097#discu...: "if I want to get Real Wise. Only Supported Podman comes from Red Hat Enteprise Linux and perhaps SUSE. (Maybe Oracle Linux)"

          > Doesn't it follow then that the responsibility for making Podman available on Ubuntu falls on either Canonical or volunteers that use Ubuntu, and not Red Hat?

          As mentioned in https://github.com/containers/podman/discussions/13097, node.js is just an example, but they could do it. Why wouldn't Red Hat do it with Podman?

          > Otherwise, you could blame any developer on any software for not making their code available on any distro, and perhaps even any OS.

          Yes you could. And in certain cases - like this one - you should too.

  • f2hex 3 years ago

    I agree; today it is still better to use Docker, which is more mature. Podman is half-baked, lacks relevant features, and moreover is still very Red Hat centric, so it's a sort of lock-in.

jacooper 3 years ago

I still don't see how this is more convenient than just using compose, or what you gain by leaving it.

  • mhitza 3 years ago

    For my own projects I use services configured and deployed on the system without containers, and in production I use what my clients use (generally managed container orchestrators like ECS). If I were to use this on my servers, one benefit I can think of (as a Linux connoisseur) is that it would give me separate logs per container, which I could inspect with the unified command I use for all the other services (journalctl -u systemd-unit-name), whereas on projects where I've wrapped docker-compose in a systemd service I've had all logs dumped under a single service.
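
    i.e. something like this, where the unit name (made up here) comes from the container/quadlet file name:

        journalctl --user -u my-app.service -f            # follow one container's logs
        journalctl --user -u my-app.service --since today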

    • jacooper 3 years ago

      Actually you can see per-container logs with `docker compose logs -f {container name}`.

      Although of course it won't be integrated into journalctl.

      • mhitza 3 years ago

        Of course; that's why I said I liked the unified log access. The Compose log command is primitive in features; journalctl just offers better functionality and, for someone like me (who uses Linux daily for work and personal use), convenience.
