Ask HN: What is the best source to learn Docker in 2023?

172 points by lukasfischer 3 years ago · 83 comments


I've known Docker for a long time but have never used it in serious projects. I find it hard to find good and especially up-to-date tutorials. I'm especially interested in a tutorial on how to run (feature branch) containers which spin up on Bitbucket Pipelines/GitHub Actions for manual and automated testing of PHP and Node applications (the containers should exist as long as the feature branch exists). I found some random YouTubers who give a nice intro/demo of some features, but not really a deep dive.

jpgvm 3 years ago

I would start with understanding what containers are. Read up on what namespaces and cgroups are. Understand first what a container is, what it gives you and how Docker (vs other containerizers) fits into the picture. The first fundamental thing to understand is that containers are merely processes that have some sandboxing and perhaps limits applied to them, mem_cg, CFQ throttling, etc.
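
For a quick feel of that, here's a minimal sketch using util-linux's unshare on a Linux host - not Docker itself, just the kernel facilities it builds on:

    # start a shell in fresh PID and mount namespaces; with /proc remounted,
    # it sees itself as PID 1 and almost nothing else
    sudo unshare --fork --pid --mount-proc /bin/sh -c 'ps aux'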

Once you have that under your belt it's not hard to work out how Docker itself works and how you can use it to fulfill the sort of CI/CD objectives you have outlined. Docker itself isn't important, the semantics of containerization are.

Something that Docker (and Docker-like things) takes massive advantage of is overlay filesystems like AUFS and overlayfs; you would do well to understand these (at least skin-deep).

Finally, networking becomes really important when you start playing with network namespaces; you should be somewhat familiar with at least the Linux bridge infrastructure and how Linux routing works.

Good luck!

  • moonchrome 3 years ago

    That's like someone asking how to learn C and you suggest starting with assembly.

    It's the most roundabout way - and OP is conflating docker with CI/CD, referencing PHP and Node - it's probably safe to say they aren't looking for a deep dive.

    Plus - knowing how it runs under the hood doesn't mean you know how to use docker itself.

    • prettyStandard 3 years ago

      OP says they have used docker for a long time, and they want a deep dive.

      I think it's safe to say they do want a deep dive but might be forgetting to mention some of their reasons.

      For someone familiar with docker, maybe it is good to start from the other side and work backwards.

    • gitfan86 3 years ago

      Unfortunately this is the right answer. Over the past few years many engineering organizations no longer value anything beyond surface level understanding of the technology used in their stacks.

      You would think that it would still be advantageous to have a detailed understanding of what is going on in the stack, but that actually causes problems when you make a suggestion that no one else understands.

    • allarm 3 years ago

      > someone asking how to learn C and you suggest starting with assembly

      And this would be great advice!

  • Terretta 3 years ago

    > start with understanding what containers are

    Docker implemented in around 100 lines of bash: https://github.com/p8952/bocker

    This is the most mind-blowing example for enterprise security teams that think Docker is a new threat on a single-tenant Linux host.

    No, buddies, all this stuff is already there. If you were fine with your visibility before*, you're still fine. Go find a real problem while people play with their developer dopamine.

    * NARRATOR: They shouldn't have been.

Too 3 years ago

The problems you describe have very little to do with docker itself, they sound more like integration challenges.

For example: Docker has absolutely zero knowledge of a branch's lifetime, or even of branches at all. This is something you have to design using the existing capabilities of docker together with features or existing integrations provided by GitHub or Bitbucket.

Of course knowing docker deeper will help you understand these boundaries better and use them.

One secret is that there is actually not much to it: most things are just variations of docker run and various tricks within docker build, sprinkled with some volume and image management like tagging and pruning. Other orchestrators like GH Actions, Compose, Kubernetes etc. can be seen as building around these basic blocks.
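
For instance, a rough sketch of those building blocks (image and volume names are made up):

    # build an image and tag it
    docker build -t myapp:feature-x .
    # run it, publishing a port and attaching a named volume
    docker run --rm -d -p 8080:80 -v myapp_data:/data myapp:feature-x
    # image management: retag and prune dangling layers
    docker tag myapp:feature-x myapp:latest
    docker image prune -f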

If you already know these basics, you are probably going to learn faster by getting your hands dirty and trying to solve the scenarios you need, rather than binge-watching tutorial #187 on YouTube.

arthurcolle 3 years ago

Docker starts to become super useful when you have an application you are deploying that has a few `service` dependencies. Typical deployments include something like

1) your reverse proxy, Nginx/Caddy

2) your "app", or API, whatever. pick whatever you want, a Rails API, a Phoenix microservice, a Django monolithic app, whatever you want.

3) your database. Postgres, whatever

4) Redis - not just for caching. Can use it for anything that requires some level of persistence, or any message bus needs. They even have some plugins you can use (iirc, limited to enterprise plans... maybe?) like RedisGraph.

5) elasticsearch, if you need real-time indexing and search capabilities. Alternatively you can just spin up a dedicated API that leverages full text search for your database container from 3)

6) ??? (sky is the limit!)

I prefer docker compose to Kubernetes because I am not a megacorp. You just define your different services, let them run, expose the right ports, and then things should "just work"

Sometimes you need to specifically name your containers (like naming the Redis container `redis`; then in your code you use `redis` as the hostname instead of `localhost`, for example).
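
A minimal compose sketch of that setup (service names, images, ports and environment variables are all placeholders - how your app reads them is up to you):

    services:
      proxy:
        image: nginx:1.25
        ports:
          - "80:80"
      app:
        image: myapp:latest
        environment:
          REDIS_HOST: redis   # the service name doubles as the hostname
          DB_HOST: db
        depends_on:
          - redis
          - db
      db:
        image: postgres:15
        environment:
          POSTGRES_PASSWORD: example
      redis:
        image: redis:7

Within the compose-created network, each service is reachable by its service name - hence `redis` instead of `localhost`.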

basically That's It (tm)

  • tbrownaw 3 years ago

    > I prefer docker compose to Kubernetes because I am not a megacorp. You just define your different services, let them run, expose the right ports, and then things should "just work"

    Kubernetes scales down pretty well; there are a few canned single-machine versions. I've been playing with k3s lately at work (if this works out, it should lead to things running on a proper HA cluster), and I'll probably also move over some of the standalone containers I have at home (which are all single-instance standalone things, currently using podman as a systemd service).

    • mmcnl 3 years ago

      Docker compose is still orders of magnitude easier to use and often sufficient.

  • yukinon 3 years ago

    > I prefer docker compose to Kubernetes because I am not a megacorp.

    In which cases would you prefer Kubernetes? Or rather, why would a megacorp prefer it over standard Docker?

    • giobox 3 years ago

      Kubernetes and docker solve different problems. One is a container runtime, the other is a container orchestration tool. A mega-corp using Kubernetes can still use Docker; the two can work together.

      "Docker compose" (not the same thing as docker) works great at single-machine scale and for local development environments, but isn't really designed to scale much beyond this to production environments, multiple servers, data centers etc., which Kubernetes is. This isn't to say you couldn't deploy something to production with compose; it's just not very likely outside of small personal projects - there are heaps of features in Kubernetes that simply don't exist in compose.

      Generally you'd find a docker compose configuration for easy local development environment deployments and a Kubernetes configuration for managing the production environments, although there are no hard and fast rules here. Compose works best where the services all fit on the same box, which is rare in production for a business of almost any size, but common for local dev work.

      I also prefer Compose for personal projects and local development, but it simply wouldn't work at any place I've worked for production deployments.

      • selcuka 3 years ago

        Note that one can also use a lightweight Kubernetes distribution such as Minikube [1] during development so that the workflow is similar for both development and production.

        [1] https://minikube.sigs.k8s.io/docs/start/

        • mmcnl 3 years ago

          But this is often not necessary: if you can run a production Kubernetes cluster, you can also run a dev Kubernetes cluster, so there's no need to run a local Kubernetes distribution.

          My workflow typically looks like this:

          1. Run app locally during dev without Docker

          2. Build and run Docker image locally with Docker compose

          3. Deploy to development Kubernetes cluster

          4. Deploy to production Kubernetes cluster

          • selcuka 3 years ago

            > if you can run a production Kubernetes cluster, you can also run a dev Kubernetes cluster

            When you have many developers, the cost for maintaining one dev cluster per developer quickly goes up. One cluster for all developers can be used for testing/staging, but not for development.

            Minikube is a replacement for your step (2), except you can now use your existing tools (kubectl etc) instead of docker-compose.

      • KronisLV 3 years ago

        > "Docker compose" (not the same thing as docker) works great at single-machine scale and for local development environments, but isn't really designed to scale much beyond this to production environments, multiple servers, data centers etc., which Kubernetes is. This isn't to say you couldn't deploy something to production with compose; it's just not very likely outside of small personal projects - there are heaps of features in Kubernetes that simply don't exist in compose.

        In short:

          - if you need to deploy containers on a single node ad-hoc, then you want Docker or a similar runtime (e.g. Podman)
          - if you need to deploy containers on a single node with a deployment descriptor, then Docker Compose is a good option, due to its simplicity
          - if you need to deploy containers across multiple nodes with a deployment descriptor, then you want some sort of an orchestrator
        
        I'd say that going from Docker Compose to Docker Swarm is the first logical step, because it's included in a Docker install and also uses the same Compose format (with more parameters, such as deployment constraints, like which node hostname or tag you want a certain container to be scheduled on): https://docs.docker.com/compose/compose-file/compose-file-v3... That said, you won't see lots of Docker Swarm professionally anymore - that's just the way the job market is, despite it being completely sufficient for many smaller projects out there. I'm running it in prod successfully so far and it's great.

        Another reasonably lightweight alternative would be HashiCorp Nomad: it's free, simple to deploy, and its HCL format isn't too bad either as long as you keep things simple; it also supports more than just container workloads: https://www.hashicorp.com/products/nomad That said, if you don't buy into the HashiStack much, there won't be much benefit from learning HCL and translating the contents of the various example docker-compose.yml files you see in repos out there, although their other tools are nice - for example, Consul (a service mesh). This is a nice but also a bit niche option.

        Lastly, there is Kubernetes. It's complicated, even more so when you get into solutions like Istio, typically eats up lots of resources, can be difficult to manage and debug, but does pretty much anything that you might need, as long as you have either enough people to administer it, or a wallet that's thick enough for you to pay one of the cloud vendors to do it for you. Personally, I'd look into the lightweight clusters at first, like k0s, MicroK8s, or perhaps the K3s project in particular: https://k3s.io/

        I'd also suggest that if you get this far, don't be afraid to look into options for dashboards and web based UIs to make exploring things easier:

          - for Docker Swarm and Kubernetes there is Portainer: https://www.portainer.io/
          - for Kubernetes in particular there is Rancher: https://www.rancher.com/ (they are also the people behind K3s)
          - Kubernetes also has its own dashboard, but it's a bit less nice in my opinion: https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
          - Hashicorp Nomad comes with its own web UI as well, though I recall it being a bit barebones: https://developer.hashicorp.com/nomad/tutorials/web-ui
        
        In the terminal, there are also a few useful projects:

          - for Docker, there is ctop: https://github.com/bcicen/ctop
          - for Kubernetes, there is k9s: https://k9scli.io/
        
        As for a desktop UI, there are a few options too:

          - Docker Desktop: https://www.docker.com/products/docker-desktop/
          - Rancher Desktop: https://rancherdesktop.io/
          - Podman Desktop: https://podman-desktop.io/
        
        You don't strictly need fancy tools like these - the Docker/Podman CLI or kubectl can be enough - but sometimes a more visual look and being able to click around is nice.

        • totalhack 3 years ago

          I use swarm for smaller projects and am happy with it, though I know it's out of fashion now. I may try Nomad next time around. If I needed something more advanced or with more scale I am probably in a situation where a managed service would be within the budget.

      • _huayra_ 3 years ago

        Any of the options lighter-weight than Kubernetes is usually good to start with, but for anyone already running such solutions (e.g. docker compose), learning Kubernetes can be a valuable job skill (because like you said, it's for mega-corp type environments).

q845712 3 years ago

It sounds to me like you have a project that involves on-demand deployment of containers. Like it or not, the current standard for that is Kubernetes. As someone has already said on here, there will still be Dockerfiles and container technology involved, but not necessarily "docker compose" or "docker swarm". Just my guess, but I think part of why you're struggling to find the tutorials you're looking for is that since 2020-ish (idk, someone can correct my chronology - maybe 2018? 2021?) they've increasingly become k8s tutorials, not "docker" tutorials.

that's just my guess though :) Happy hacking!

  • thunky 3 years ago

    > It sounds to me like you have a project that involves on-demand deployment of containers

    How did you gather this? OP only mentions CI/testing, nothing about deployment. That's a long distance from needing kubernetes.

    • q845712 3 years ago

      hmm I understood it as, "when i push a new branch to CI, I want to automatically deploy a container running that code." maybe I was wrong!

      btw, imo, nearly all commercial shops' use cases are a long distance from needing kubernetes, but that doesn't seem to be stopping very many people... :)

nickjj 3 years ago

If your goal is to "learn Docker", I have around 100+ free blog posts and ad-free YouTube videos at: https://nickjanetakis.com/blog/tag/docker-tips-tricks-and-tu...

https://github.com/nickjj/docker-node-example is an up-to-date Node example app[0] that's ready to go for development and production and sets up GitHub Actions. Its readme links to a DockerCon talk from about a year ago that covers most of the patterns used in that project; if not, some of my more recent blog posts cover the rest.

None of my posts cover feature branch deployments, though. That's a pretty different set of topics, mostly unrelated to learning Docker. Implementing it also greatly depends on how you plan to deploy Docker - for example, whether you're using Docker Compose or Kubernetes, etc.

[0]: You can replace "node" in the GitHub URL with flask, rails, django and phoenix for other example apps in other tech stacks.

ddtaylor 3 years ago

I use Docker from time to time, but one question I have is more about workflow. How are people editing files within existing containers without resorting to one of the following:

1. Rebuilding the entire container which often involves stopping and starting it, etc.

2. Manually running commands that copy the files into the container. This is irritating because if I forget which files I changed or forget to run the copy command I end up with a "half updated" container.

3. SSHing into the container. This is irritating because I have to modify the port layout and permissions of the container and later remember to "restore" them when I'm "done" making the container.

Thanks!

  • kilburn 3 years ago

    Containers are supposed to contain the entire stack a program needs to run. When you want to update anything in there, the correct answer is to build a new image.

    Of course, this means that you'll not just stop, but completely destroy the old container and start a new one (created from the new image). You can get this to happen without service downtime by using the very same techniques that you'd use in any high-availability / multi-server environment (rolling upgrades, canary deployments, etc.).

    If you need to have some files that persist across these upgrades, then you use volumes and/or bind mounts. These allow you to have folders that persist independently of the container's lifecycle. They are typically used to store things like a sqlite database that the container uses, the set of configuration files that you can edit on a per-instance basis, etc.

    Finally, there's a big case where you ignore all of the above: when you use containers as a development tool. In that case, particularly for "interpreted" languages (python, php, ruby, etc.), it becomes extremely useful to bind-mount your programs' sources inside a development container. You can then develop normally, but also change the entire system where your app runs extremely easily. You can also keep different environments (language version, libraries, configuration of all those) for different projects without any chance of conflicts between them, etc.

  • DanHulton 3 years ago

    You didn't specify, but it sounds like you're talking about from a development perspective, not a deployment one.

    For development, you add a volume to the container, mounting a location on your host drive to a location inside the container. There are different mount options that control which changes propagate in which direction, but two-way propagation is available and works fine.

    Once you have this set up, any changes you make on your host system are automatically reflected in your containers volume. It's very low stress and low overhead, honestly.
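
    A minimal sketch of that, for example with a plain docker run (the node image is just an example):

        # bind-mount the current directory into the container; edits made on
        # the host are visible inside immediately (and vice versa)
        docker run --rm -it -v "$(pwd)":/app -w /app node:20 sh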

  • r6203 3 years ago

    Volumes.

    This works, however, only if you know beforehand which files you’re going to update frequently.

  • phkx 3 years ago

    What kind of files would you like to change? Anything that comprises the environment within the container (image) is indeed changed by rebuilding - but not the whole image, just the layers that changed. Layering in a smart way increases reuse. Also, this way you reflect versions (tags) of the environment. Configuration can be passed in via parameters.

    The data you work with should be stored externally (volume, database, accessed via API, …). You don't keep persistent state of your workload in the container.

  • drowsspa 3 years ago

    I generally put a Dockerfile.dev that creates a user with the same GID and UID as my own (injected through environment variables):

        # assumes UID is supplied at build time (e.g. via compose build args)
        ARG UID
        RUN adduser -u $UID...
        USER node
    
    Then in the docker-compose you bind mount your current directory.

        volumes:
          - ./:/app/

  • a_t48 3 years ago

    Don't do 2 - use a bind mount instead!

CodeAndCuffs 3 years ago

I'm not a fan of video tutorials, but Bret Fisher's[0] Docker stuff is the exception. The production quality is flawless, the content is straight to the point, and his instruction is amazing. I cannot recommend it enough.

[0] https://www.udemy.com/user/bretfisher/

verdverm 3 years ago

You're really asking about CI patterns and practices, which are not specific to Docker. The question is the same if using VMs.

You want to learn more about your CI system and then try things out until you hit the harder / edge cases.

Some things to try or think about

- Push two commits quickly, so the second starts while the first is running.

- Rebuild a commit while the current build is executing. Which one writes the final image to the registry? How do you know?

- How do you tag your images? If by branch name, how do you know which build produced an image? If by commit, how do you know which branch? (One option is sketched after this list.)

- Do you want to run the entire system per commit, shutting it down at the end of a build? Do you want to run supporting systems for the life of a branch? How do you clean up resources and not blow up your cloud budget? Do you clean up old containers each build (from old commits on this branch)? How do you clean up containers after a branch is deleted?

- Build a CI process that triggers subjobs, because eventually you may want to split things up. If you push a commit before the last build's subjob triggers, does it get the original commit or the latest commit? CI systems have nuances; Jenkins, for example, always fetches the latest commit when a job starts for a branch, so you may not be testing the code you think you are.

- Do you use a polyrepo or monorepo setup? For poly, how do you gather the right version of components for your commit? For mono, how do you build only what is necessary while still running a full integration test?

- Should you be doing integration testing inside or outside of the build system?
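
One possible answer to the tagging question above is to tag every build with both the branch and the commit, so each can be traced to the other (the registry and image names are made up, and $BRANCH_NAME/$COMMIT_SHA stand for whatever your CI provides):

    # one image, two tags: the branch tag moves with each build,
    # the commit tag is immutable
    docker build \
      -t registry.example.com/myapp:"$BRANCH_NAME" \
      -t registry.example.com/myapp:"$COMMIT_SHA" .
    docker push registry.example.com/myapp:"$BRANCH_NAME"
    docker push registry.example.com/myapp:"$COMMIT_SHA"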

One of the reasons content that addresses these questions is harder to find is that the answers are highly dependent on the situation and tools. My solutions to many of them are handled with a mix of CUE and Python. You'll be writing code in most solutions.

lamroger 3 years ago

I feel like you don't need a deep dive for what you're describing.

Start step by step.

Before building on Github Actions, build locally.

See if you can build and tag an image with the git SHA, then run your automated test command against the image/container.

Then see if you can write a github action doing exactly what you did locally.
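
A skeletal workflow along those lines, mirroring the local steps (the image name and test command are placeholders):

    name: ci
    on: push
    jobs:
      test:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v3
          # the same two steps as the local workflow, keyed to the commit SHA
          - run: docker build -t myapp:${{ github.sha }} .
          - run: docker run --rm myapp:${{ github.sha }} npm test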

In my experience, random blog posts have been more helpful than YouTube videos.

anyfactor 3 years ago

Step 0: Start with a device that "fully" supports docker.

This is the reason I gave up on learning docker properly: I had 3 devices at my disposal - an M1 Mac, a Windows 10 PC and an RPi - and the random errors I was getting made me quite frustrated. Keep a code diary and document your mistakes and solutions.

Also get a VPS. Never ever try a serverless solution when trying to properly learn docker. Also, do not try to do anything that involves GPU processing.

  • navbaker 3 years ago

    Are you just recommending that they stay away from GPU processing while they learn or as a blanket suggestion? There are tons of images provided by Nvidia and framework/library authors that have the necessary drivers built in and make it trivial to run on a GPU.

    • anyfactor 3 years ago

      Oh no, only while learning.

      It all boils down to that step 0: make sure you have a "docker compatible" device. People are successfully running GPU processes using docker. However, docker is not "run everything, everywhere" as some people may think it is.

      Really understanding the idea of containerization is fundamental. If someone tries to dabble in GPU processing in their first or second week of learning docker, they will be surprised how difficult troubleshooting docker is.

      • navbaker 3 years ago

        100% with you on that. There are so many unintuitive things about just getting a container running if you're doing anything beyond the hello world example!

mattlondon 3 years ago

I would also be interested to know what the typical dev workflow is like these days when working with containers.

Like do you just run e.g. nodejs or javac locally and then "deploy" to a container, or do you have a development container where you code "in it", or is a new container built on every file change and redeployed?

At my current place of work, all of this is totally abstracted away so no idea how real world people do it!

  • dude187 3 years ago

    Typically I use a multi-stage build: I build the app in one container and copy the result to a final output container. That keeps the final container clean and lightweight. Pure local development I prefer to do outside of any container.
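
    A minimal sketch of that pattern, assuming a Node app that bundles into dist/ (swap in your own build steps):

        # build stage: full toolchain, produces dist/
        FROM node:20 AS build
        WORKDIR /app
        COPY package*.json ./
        RUN npm ci
        COPY . .
        RUN npm run build

        # final stage: only the runtime and the build output
        FROM node:20-slim
        WORKDIR /app
        COPY --from=build /app/dist ./dist
        CMD ["node", "dist/index.js"]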

    There are certainly patterns you can use to run things like auto-compiling code (think an Angular app in dev-server mode) as you save, inside a container, to ensure a consistent dev environment. However, I usually find that you're fighting the operating system and dev tooling on things like volume syncing, which makes it not actually a net benefit. Though if I were to tackle getting my 100 devs consistent, and I knew they all had the same OS layout, it probably would be worth it.

    The only time I tend to code "in" a container is when it's a very large code base with complicated dev tooling, or tooling I need to containerize to avoid clashing with my OS. In that case I build a "build container" and run it with a volume mount pointing to my working directory, rather than doing a docker build. For a large code base, the purist "build with docker" approach requires copying the whole build context each time, causing the build to take forever and thrashing disk space.

  • NumberWangMan 3 years ago

    I'm not sure if my workplace is "typical", but what we do for local development is only to use containers for dependencies, such as the database, or localstack (an AWS emulator). We have bash scripts that can set up and tear them down, using docker compose. Typically, we create a database container and then use it until we need to restart it for some reason, such as if the schema has changed.

    There's a shared repository we use that hosts all our migrations and has a script to refresh a set of schema files that define our current schema by running the migrations against a fresh, empty database container; this repository is referenced by our code repositories as a git submodule.

    We have one shared database that multiple apps use, and the above setup has worked out reasonably well for us. It's not terribly fancy, and I'm sure there are better ways out there we haven't discovered, but we have a good amount of flexibility.

    Also, that's all just for local manual testing/exploration -- we also use Java's "testcontainers", which spins up its own containers (though basically the same idea) for automated tests to run against. Testcontainers lets you specify how it restarts the containers -- after each test class, after each test run, or not at all. Restarting the container is pretty slow, so we have it set up to just drop and recreate the relevant databases within the container.

    For deployment, we've used different platforms that deploy apps in containers, but we don't manage the containers directly.

    Also, I've tried the "develop in a container" thing but only on rare occasions, such as to work around a MacOS bug that makes CIFS file sharing horribly slow. Technically that was a Vagrant VM, not a container. Haven't had much luck with it otherwise -- it seemed like more trouble than it was worth (it's the awkward file syncing that's the barrier for me).

illusiveman 3 years ago

It's funny that you asked this, because just the other day I was thinking about starting a series of posts about it, but then I thought "what the hell, it's 2023, everybody should know how to use Docker already" and dismissed the idea.

  • cpach 3 years ago

    A lot of people know. But a lot of people don't. E.g. if someone is a freshman in college, they are probably not yet an expert in containers. Or if someone learned about containers in 2015, they might not be up to date with the best practices of 2023.

    IMHO there's still room for high-quality blog posts about containers. E.g. there are lots of gotchas that could be explained: if you keep your commands in a suboptimal order, you will not get the benefit of caching when building the container (a fragment illustrating this follows). And why use multi-stage builds, etc. etc.
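
    For instance, the command-ordering gotcha, sketched for a hypothetical Node image:

        # suboptimal: any source edit invalidates the npm ci layer
        #   COPY . .
        #   RUN npm ci
        # better: copy the manifests first, so the install layer stays cached
        COPY package*.json ./
        RUN npm ci
        COPY . .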

    PS. See also https://xkcd.com/1053/ :)

BretFisher 3 years ago

My Docker Mastery course just had 17 videos added on GitHub Actions, my fav automation tool. It has vids and working YAML for container build/test workflows (the YAML is open source at GitHub.com/bretfisher). I'm actually doing a workshop next week in Tampa at Civo Navigate called "Docker 101 in 2023" lol. Course coupons at bretfisher.com

zelphirkalt 3 years ago

> I find it hard to find good and especially up-to-date tutorials.

It is even hard to find undoubtedly, holistically good examples of docker usage. Many people do many things in different ways, some better, some not so good. One can often find good aspects of docker usage in projects, though - like "What kind of environment variables should one let the user pass in, to avoid hardcoding them in the image and to keep things configurable?", or "How to use multi-stage builds?". It is up to the thoughtful observer to identify those and adapt one's own process of creating docker images.

I don't see docker as the kind of thing one sits down with for a few evenings and then fully knows. It's more like a thing one picks up over time. One runs into a problem, searches for answers on how to solve it in a docker scenario, finds several answers, picks the one that seems appropriate, and learns later whether that choice was a good one; until then, it works for as long as that solution works. It's not like docker is some kind of scientific thing where there is one correct answer to every question. Many things in docker are rather ad-hoc solutions to problems - just look at the language that makes up a Dockerfile and you will see the ad-hoc-ness of it all. Then there are limitations that seem just as arbitrary: for example, the limited number of layers (stemming from a fear of too much recursion not being supported by Go without "externalizing the stack"), or not being able to change most of a container's attributes (like labels) while the container is running.

As for questions of CI and so on: I think those are separate issues, which are solved by having a good workflow for the version control system of choice. One could, for example, configure the CI to do things for specific branches - like deploying only the master branch, or deploying a test branch to another machine/server. But this has nothing to do with docker.

AlexITC 3 years ago

It seems that you are mainly interested in how to build preview environments for your app, these posts describe an approach to get there:

- https://softwaremill.com/preview-environment-for-code-review...

- https://softwaremill.com/preview-environment-for-code-review...

- https://softwaremill.com/preview-environment-for-code-review...

While the examples use GitLab, it shouldn't be very hard to port the same idea to Bitbucket.

zyl1n 3 years ago

Julia Evans's How Containers Work! zine: https://jvns.ca/blog/2020/04/27/new-zine-how-containers-work...

It demystified a lot of docker features for me.

  • FlyingSnake 3 years ago

    I almost pasted this webzine, but luckily searched before posting. This is a great resource!

nunez 3 years ago

I have a recent course on LinkedIn Learning that digs into the basics of Docker! Check it out if you have a subscription: https://www.linkedin.com/learning/learning-docker-17236240.

I'm in the process of making a follow-up to this that covers more advanced topics. Stay tuned.

I also have a course that shows you how to use Docker for the build-test-deploy loop, though some of it is a little stale. Check that out here: https://www.linkedin.com/learning/devops-foundations-your-fi...

alpinelogic 3 years ago

I'm not sure whether spinning up a docker image through a GitHub Action is possible, or whether that makes sense for CI/CD, but here is an example repo with Node [1]. There are two actions: one for unit tests and one that builds a prod image and pushes it to my Docker Hub account. I have a compose.dev.yml file to start the containerized services, and a compose.yml to do the same in production. For prod it all depends on the cloud service you want to deploy your container to (e.g. Google Cloud Run), and there are GitHub Actions for them.

[1] https://github.com/vasilionjea/node-docker-template

zczc 3 years ago

There is a good series of blog posts on Linux containers: https://www.schutzwerk.com/en/blog/linux-container-intro/

bradwood 3 years ago

Go have a look at GitLab's "Review Apps" - they provide a mechanism that spins up branch-specific environments and destroys them when the branch merges, which sounds like what you are after. This is not so much about Docker per se, but more about how to set up CI/CD-related infrastructure.

We use this mechanism with AWS, the serverless framework and some terraform. It works well. With us, the only thing remotely container related is the runtime context for the CI/CD pipeline.

That being said, you could make this work against a k8s cluster, fargate, or just some build servers.

killthebuddha 3 years ago

If you already have a decent understanding of Docker you could implement this without learning anything more about Docker. GitHub secrets, actions, `ssh`, and a simple VPS (like an EC2 instance) would work here, no?

iteratorx 3 years ago

Sounds like your challenge is more about integrating the containers into a useful pipeline to get ephemeral (preview) environments - like deploying a new env for each PR and updating each env when its branch gets a new commit.

An easy way to get ephemeral envs starting from your docker-compose definition is Bunnyshell.com. It uses Kubernetes behind the scenes, but that's all pretty much abstracted away from the user. There is a free tier so you can experiment.

Disclosure: I'm part of the Bunnyshell team.

theusus 3 years ago

https://learn.cantrill.io/p/docker-fundamentals

MindTooth 3 years ago

https://docs.docker.com/

I use it several times a week: Buildx, Dockerfile, etc.

jeffybefffy519 3 years ago

I would say you need to learn about:

  - docker compose, because you can describe your entire project's environment in it
  - docker compose's environment file and projects
  - your CI/CD system of choice
  - probably something like Ansible, to be the glue between CI/CD and docker

Good luck!

cinntaile 3 years ago

There is https://devopswithdocker.com/ but I don't know if it's the best source.

agumonkey 3 years ago

I wish I could learn more about the internals: caching behaves unpredictably sometimes (but maybe I'm doing stupid things), and the image graph is important for knowing who uses what.

basic_banana 3 years ago

I just learned by visiting some popular GitHub repos that use docker, and learning from their Dockerfile/docker-compose files and the related commands in the readme.

paulcarroty 3 years ago

https://www.docker.com/play-with-docker/

ancieque 3 years ago

https://www.bretfisher.com/

rlt 3 years ago

ChatGPT. I'm not even joking - it's now my first step in learning new tech (but not too new; GPT-3 was trained on a corpus ending in 2021, I believe).

Ask it something like “Explain how to get started with Docker” and it will give you a bunch of steps in a reasonable order. Then ask it for details for each step, like:

“How do I install Docker on macOS?”

“Write a commented Dockerfile for an application written in $WHATEVER”

“Now write a commented Docker Compose file for this application and a Postgres database”

etc

  • dizhn 3 years ago

    The other day I asked ChatGPT to write me a sample program with a particular TUI library. It wrote a Python example. Very impressive. Trouble is, it was a Go library. No name clash or anything - it flat out wrote sample code for a lib that did not exist. When prompted that it is a Go library, it wrote a similar correct-looking program with functions that don't exist in the Go library.

  • verdverm 3 years ago

    OP is not actually asking how to learn Docker or a new technology. They are asking how to learn best practices for (the more complicated parts of) CI - or maybe how to leverage docker at a higher level, rather than going lower into the details of how it works.

    What does ChatGPT say for something like

    "How do I clean up old docker images in a registry after I merge a branch?"

  • CJefferson 3 years ago

    I tried asking ChatGPT some docker questions a couple of weeks ago, but beyond the very most basic "start an existing Debian container", the answers were always wrong, unfortunately. (I'm not enough of a docker expert to see that they were wrong by reading them; I ran them and they didn't work.)

zhangruinan 3 years ago

I learned through reading the official Docker documentation! It worked.

sciencesama 3 years ago

If you just want to learn it for fun at home, just install Portainer and run Pi-hole; you'll understand a lot more. Then learn to do it via the command line. Otherwise just go for Kubernetes, man. With the dockershim removed from Kubernetes, no one is using Docker professionally.

  • nightowl_games 3 years ago

    > With the dockershim removed from Kubernetes, no one is using Docker professionally.

    I think this statement is not as precise as it should be. I'm running docker produced images in a k8s cluster and had to google what you are talking about here.

    https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-...

    "Docker-produced images will continue to work in your cluster with all runtimes, as they always have."

    Dockerfile is a very common file to see in projects, and I'm thankful when I see it.

    • SOLAR_FIELDS 3 years ago

      To disambiguate, it's necessary to separate Docker the runtime from the OCI image format, commonly still referred to colloquially as "Docker images". Docker the desktop software, file format and CLI is still very much used professionally to build and test OCI images, which then run in Kubernetes. Docker as a container runtime platform (and orchestration with Swarm) is what is generally going out of vogue.

      Although, a pattern I commonly see these days is that a lot of OSS projects provide a docker-compose.yaml even though no one at any reasonable scale runs Docker Compose in production. This is simply because Kubernetes setups are complicated, weighty, and heterogeneous, often not running on your local machine, and Docker Compose is a great way to do a hello-world-style demo of your container-orchestrated app (because everyone still uses Docker Desktop).

      • fhaldridge7 3 years ago

        Docker, Podman, Buildah, etc. produce container images in the OCI image format. Kubernetes container runtimes (such as containerd, which drives runc) run containers according to the OCI runtime specification.

  • gitgud 3 years ago

    > With the dockershim removed from Kubernetes, no one is using Docker professionally.

    Not sure about this, Docker is still widely used in CI systems and local development in my experience.

    It's also widely used in Opensource projects for "reproducible builds"

wendyshu 3 years ago

I liked Kane's book "Docker: Up & Running".

bottlepalm 3 years ago

ChatGPT is pretty good at generating commands as well as explaining commands.

MonkeyMalarky 3 years ago

It's not that complicated - just do it? The best advice I can give is to shove every bit of logic you can into scripts (bash, python, whatever) and call them from Bitbucket/Bamboo. Be as agnostic of Atlassian as possible. That way you can run locally, experiment, and learn much more easily.
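
For instance, a pipeline that is just a thin wrapper around scripts you can also run locally (the script paths are made up):

    # bitbucket-pipelines.yml
    pipelines:
      default:
        - step:
            script:
              - ./ci/build.sh
              - ./ci/test.sh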
