Show HN: Cross-Platform GitHub Action

github.com

76 points by JacobCarlborg 3 years ago · 28 comments


I've created a GitHub Action for running commands on multiple platforms, including platforms that GitHub Actions doesn't natively support. It currently supports FreeBSD, OpenBSD and NetBSD. OpenBSD can run on x86-64 and ARM64; the other operating systems run on x86-64.

Some of the features that are supported include:

* Multiple operating systems with a single action

* Multiple versions of each operating system

* Choice of the default shell or Bash

* Low boot overhead

* Fast execution

* Runs on both macOS and Linux runners

Compared to similar solutions like https://github.com/vmactions/freebsd-vm, the boot time is around a fifth and the full execution time for the same job is around half that of freebsd-vm (last time I tried).

The readme contains more information about how it all works under the hood.
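For illustration, a workflow step might look something like this (the action reference, version tag and input names here are guesses based on the description above; the readme documents the real interface):

```yaml
jobs:
  test:
    runs-on: macos-latest
    steps:
      - uses: actions/checkout@v3
      # Hypothetical action reference and inputs; check the readme for the
      # actual names and the supported OS/version combinations.
      - name: Test on FreeBSD
        uses: cross-platform-actions/action@v0.10.0
        with:
          operating_system: freebsd
          version: '13.1'
          run: |
            uname -a
            make test
```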

adobrawy 3 years ago

I previously tried to use `docker/setup-qemu-action@v2` and `docker/setup-buildx-action@v2` for this purpose (see this example: https://github.com/docker/build-push-action#git-context). Thanks to BuildKit, platform switching works transparently. However, building for ARM via QEMU on GitHub Actions is terribly slow (something like 5 times slower), which is hard to accept. Therefore, full of hope, I am waiting for GitHub Actions to offer cloud runners on ARM, because this is a blocker for adopting Graviton in our AWS environment.

For a while, the blocker for ARM support in GitHub Actions was that Azure didn't have ARM support. In this way, the Azure cloud offering may end up shaping the habits of AWS customers.
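The QEMU + buildx setup described above looks roughly like this in a workflow (the tag and platform list are illustrative):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # Registers QEMU via binfmt so non-native RUN steps can execute
      - uses: docker/setup-qemu-action@v2
      # Creates a BuildKit builder capable of multi-platform builds
      - uses: docker/setup-buildx-action@v2
      - uses: docker/build-push-action@v4
        with:
          platforms: linux/amd64,linux/arm64
          push: true
          tags: user/app:latest
```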

  • kylegalbraith 3 years ago

    This was one of the frustrations that led us to create Depot (https://depot.dev/). It's a remote Docker build service that launches Intel & Arm VMs to build native multi-platform images in parallel on native CPUs with zero emulation.

    Behind the scenes we are running the same engine as Docker, BuildKit, on VMs in AWS for both architectures. We combine native architecture support with a 50GB persistent SSD cache that is automatically available across builds, so no more saving/loading cache over the network.

    All those things combined make multi-platform or Arm image builds in GitHub Actions ~15x faster using Depot (example: https://depot.dev/benchmark/temporal). And it's a one-line change: swap `docker build` for `depot build`, or use our `depot/build-push-action` instead of `docker/build-push-action`.

    • adobrawy 3 years ago

      Are you SOC2 compliant? Can an image built in this way be loaded back (`load: true`) to CI without OCI registry?

      • kylegalbraith 3 years ago

        We aren't SOC2 compliant yet, but we are planning to do that this year. Yup, you can pass `depot build --load` (the same parameters you use with buildx) and we will return the built image to you. Or you can use `--push` and we will push it up to your registry.

        Here is my direct email if anyone wants to chat/learn/share more about this problem space, kyle [at] depot.dev.

  • JacobCarlborgOP 3 years ago

    Yeah, I've been using `docker/setup-qemu-action` as well to run on non-x86-64 architectures for Linux [1]. But since it's Docker it's Linux only and doesn't support other operating systems.

    GitHub has Apple Silicon on the roadmap, but IIRC it was at the end of this year. They already support Apple Silicon for self hosted runners [2].

    [1] https://github.com/jacob-carlborg/lime/blob/f4d9c8c4265b61b2...

    [2] https://github.blog/changelog/2022-08-09-github-actions-self...

  • saurik 3 years ago

    Why do you need to build on ARM to target ARM? I support ARM and I just build on x86_64.

    • adobrawy 3 years ago

      I need to build images for an application in Python. Cross-build Docker images? I haven't heard of anything like that.

      Did you mean cross-compilations for compiled languages? It doesn't fit Python.

      • saurik 3 years ago

        I am sorry but I am having a difficult time understanding your comment. If you are coding in Python, why does the underlying architecture matter at all?

        • dbingham 3 years ago

          The architecture of the docker image matters. If you build a docker image on an ARM machine (like a m1 or m2 Mac) you can't run it in x86 architecture (like a T3 or M5 AWS instance). If you build on x86 architecture, you can't run it on ARM architecture.

          That is, unless you use Docker's cross platform build capability (buildx). Which was still considered experimental the last time I looked at it (about a year ago).

          • Arnavion 3 years ago

            You can build other-arch images with regular `build`. You'll of course need QEMU hooked up through binfmt to be able to execute `RUN` steps while building the image, but you can do that yourself without involving `buildx`.
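            For example, something like this (image names are illustrative, and it needs a running Docker daemon):

            ```shell
            # One-shot binfmt registration using Docker's helper image,
            # then a plain cross-arch build; no buildx involved.
            docker run --privileged --rm tonistiigi/binfmt --install arm64
            docker build --platform linux/arm64 -t myapp:arm64 .
            ```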

            • dbingham 3 years ago

              Neat! I need to read up on QEMU. I haven't looked into cross arch images in a while.

              • Arnavion 3 years ago

                For systemd distros, you might already have systemd-binfmt.service in the systemd package, which when started will automatically register all architectures specified in config files under /usr/lib/binfmt.d with the kernel. Then your qemu package will probably contain one config file in that directory for every arch. So if you enable and start the systemd-binfmt service, you'll automatically have a bajillion architectures registered with the kernel to run under qemu and you don't have to do anything else.

                Or you can register manually by writing to /proc/sys/fs/binfmt_misc/register. Note that you'll want to register the statically linked version of qemu (qemu-user-static or whatever your distro calls it), and that you'll want to use at least the O and F flags so that the binary works inside containers automatically instead of needing to be mounted from the host.
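                As a sketch, the aarch64 registration (run as root) uses the standard magic/mask strings shipped with qemu's binfmt config files:

                ```shell
                # The kernel decodes the \x escapes itself. Flags: O = open the binary,
                # C = calculate credentials from the binary, F = open the interpreter at
                # registration time so it keeps working inside containers and chroots.
                echo ':qemu-aarch64:M::\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xb7\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-aarch64-static:OCF' \
                  > /proc/sys/fs/binfmt_misc/register
                ```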

                • mananaysiempre 3 years ago

                  > For systemd distros, you might already have systemd-binfmt.service in the systemd package, which when started will automatically register all architectures specified in config files under /usr/lib/binfmt.d with the kernel. Then your qemu package will probably contain one config file in that directory for every arch. So if you enable and start the systemd-binfmt service, you'll automatically have a bajillion architectures registered with the kernel to run under qemu and you don't have to do anything else.

                  Note that this can screw up other kinds of builds, as the autoconf check for cross-compilation relies on a cross-compiled executable being unrunnable. (Seen with a wine binfmt handler and a MinGW cross compiler.)

        • hk1337 3 years ago

          For some packages the architecture doesn't matter, but for others it actually does. Pandas, for example.

    • Arnavion 3 years ago

      At $dayjob we build packages of our software for a bunch of Linux distros, which means the software has to be compiled individually for each distro to get external dependencies right. Some of those distros like the RHEL 7 family don't support cross-compilation, so we run them in QEMU.

      • saurik 3 years ago

        You just need a sysroot for that distribution... you actually don't want to compile ON that distribution as it makes having a consistent and modern/working toolchain a lot more difficult.

        Notably, I compile for CentOS 6 (ancient, right?) on whatever the latest version of Ubuntu is--using the most recent versions of clang and rust and whatever--and simply build my sysroot using this trivial script.

        https://github.com/OrchidTechnologies/orchid/blob/6958658c25...

        (You can do this in some sense even easier using Docker--though in other senses it is a lot more complex as now you need docker--if you are into that sort of thing. You don't run the compiler in docker: you just install the dependency packages and then docker export the filesystem as your sysroot.)
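        That Docker variant might look something like this (distro and package names are illustrative):

        ```shell
        # Install the target distro's dev packages in a throwaway container,
        # then export its filesystem to use as a sysroot for the host toolchain.
        docker run --name deps centos:7 yum install -y glibc-devel openssl-devel
        mkdir -p sysroot
        docker export deps | tar -x -C sysroot
        docker rm deps
        # Cross-target that distro using the host's modern clang:
        clang --target=x86_64-linux-gnu --sysroot="$PWD/sysroot" -fuse-ld=lld -o app app.c
        ```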

        Using this script and clang I can actually compile to target CentOS 6 on macOS (and I get the exact same binary if I use a consistent version of clang; I actually have a GitHub action that verifies that I reproduce the same binary compiling on both macOS and Ubuntu).

        • Arnavion 3 years ago

          >You just need a sysroot for that distribution... you actually don't want to compile ON that distribution

          We compile in a Docker container of the distro. It's the same concept as having a sysroot.

          >as it makes having a consistent toolchain a lot more difficult.

          >using the most recent versions of clang and rust and whatever

          The external dependencies I mentioned are libraries like glibc or openssl, where you want to use the distro version. Similarly the packages should be consistent with what the users of the distro would build themselves if they used rpmbuild / debbuild / whatever. The toolchain not being consistent across distros is the point.

          Also, none of this is relevant to cross-compiling for ARM. As I said, RHEL 7 doesn't have a cross compiler - specifically it has gcc but not glibc, because the cross compiler is only meant for compiling the kernel.

          • saurik 3 years ago

            > We compile in a Docker container of the distro. It's the same concept as having a sysroot.

            It isn't the same concept as using a sysroot, as you are now using the compiler from that potentially-ancient different distribution. I see people constantly twisting themselves into knots being like "CentOS has some broken/limited build of gcc/clang so I can't use X" and it is like "if there is one thing you as a developer control it is the compiler".

            > The external dependencies I mentioned are libraries like glibc or openssl, where you want to use the distro version.

            I understood this: that's what you put in your sysroot (and is easily seen in the example script I showed for CentOS 6; my scripts for building Ubuntu sysroots are much more complicated as I am trying to avoid nesting docker and so now have some crazy fallback involving debootstrap and proot).

            > Similarly the packages should be consistent with what the users of the distro would build themselves if they used rpmbuild / debbuild / whatever.

            Ok, this is where we differ, then: I am not trying to simultaneously support users building on CentOS using "rpmbuild". I want to be able to use the latest build of C++ and Rust and Python and essentially have a "modern"/easy development stack but "target"/support other distributions so they can install my software as packages.

            Given the goal of being able to provide source code that can itself be compiled on these random distributions I see the problem. I personally feel like it would still be better to work around that by requesting (which can itself be automated by your build) that the user compile their own toolchain that has more functionality, though.

            • Arnavion 3 years ago

              >it is like "if there is one thing you as a developer control it is the compiler".

              >I understood this: that's what you put in your sysroot

              You're advocating mixing compilers and libraries that are not from the same distro release. While it may work for you it's generally a recipe for disaster.

              The correct and least-mental-overhead way of building packages for $distro is to build on $distro. The fact that users of $distro can then build your package themselves using the standard distro method is an extra benefit.

              • ben-schaaf 3 years ago

                > You're advocating mixing compilers and libraries that are not from the same distro release. While it may work for you it's generally a recipe for disaster.

                What libraries on Linux don't use the SystemV C ABI on x86?

                • Arnavion 3 years ago

                  What does your question have to do with this conversation?

                  • ben-schaaf 3 years ago

                    Mixing compilers is not a problem because they use the same standard ABI. You often need to link against the oldest supported version of a library, or dlsym what you need, but you certainly don't need to compile per-distro. The only big exception I know of is musl.

                    We've been happily compiling a single executable using ~latest clang for years with very few problems beyond using a symbol that doesn't exist on older distros.
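                      The dlsym approach can be as small as this probe (the symbol chosen is just an example):

                      ```shell
                      # Look a libc symbol up at run time instead of linking it directly,
                      # so the binary still loads on distros where the symbol is missing.
                      printf '%s\n' \
                        '#define _GNU_SOURCE' \
                        '#include <dlfcn.h>' \
                        '#include <stdio.h>' \
                        'int main(void) {' \
                        '    void *f = dlsym(RTLD_DEFAULT, "getentropy");' \
                        '    puts(f ? "have getentropy" : "no getentropy, using fallback");' \
                        '    return 0;' \
                        '}' > probe.c
                      cc -o probe probe.c -ldl
                      ./probe
                      ```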

                    • Arnavion 3 years ago

                      >Mixing compilers is not a problem because they use the same standard ABI.

                      So first, you said:

                      >>What libraries on Linux don't use the SystemV C ABI on x86?

                      ... even though the conversation was about ARM.

                      Second, you seem to assume that there could be no other problems other than ABI, even though...

                      >with very few problems beyond using a symbol that doesn't exist on older distros.

                      ... Oh, I shouldn't even need to explain it, because you're already aware of it. So why are you acting contrarian?

                      When you use gcc version X then it is allowed to generate calls to libgcc version X. If the distro has libgcc version Y, then either you put libgcc version Y in the sysroot and force gcc version X to use it, which gcc version X is not prepared for, or you let gcc version X link the binary to symbols from libgcc version X, and the binary explodes when running against libgcc version Y on the actual system.

                      Furthermore, if a distro ships gcc version Y and some distro library version Z, it expects you to compile code that uses distro library version Z with gcc version Y. If there are compiler bugs or library bugs, the distro might have patched one or the other to ensure that they work together. By instead compiling with gcc version X (from a different distro, no less) you're breaking that guarantee and again can have silent miscompilations or other runtime issues.

                      I'm amazed I even have to argue about such basic knowledge. If you don't believe me, maybe you'll believe OSDev?

                      https://wiki.osdev.org/Libgcc

                      >Can I use the Linux libgcc?

                      >You must use the correct libgcc that came with your cross-compiler. Whatever else libgcc you found likely has a different target, was built with different machine compile options, has dependencies on the standard library, is part of a different compiler revision (your distribution may have patched its gcc, even). It is possible that using a different libgcc will work, but perhaps not reliably.

                      So congratulations on having YOLO'd with no problems "for years", but doing it properly is much easier for correctness and peace of mind, so excuse me if I don't follow suit.

                      • ben-schaaf 3 years ago

                        > even though the conversation was about ARM.

                        Apologies, I wanted to be specific about which standard is being followed. Nonetheless, AFAIK the ABI is the same for all ARM distros.

                        >> with very few problems beyond using a symbol that doesn't exist on older distros.

                        > ... Oh, I shouldn't even need to explain it, because you're already aware of it.

                        A missing symbol is a problem regardless of where you're compiling. If you're linking against something that doesn't exist, it'll fail, whether that's on your client's machine running an old distro or when you're compiling on that old distro yourself. You could argue it's better to fail early here, but the work to fix the bug is approximately the same.

                        > I'm amazed I even have to argue about such basic knowledge. If you don't believe me, maybe you'll believe OSDev?

                        OSDev is incorrect here. Take it from GNU themselves: https://gcc.gnu.org/onlinedocs/libstdc++/manual/abi.html. Symbols in libgcc are versioned; either your code runs and links against the exact correct version or it fails. If you want to run on old distros with libgccs that don't have newer symbols either use an older gcc, statically link libgcc or use clang.
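                        Statically linking libgcc, for instance, is a one-flag change (minimal sketch):

                        ```shell
                        # -static-libgcc folds libgcc into the binary so it no longer depends
                        # on the host's (possibly older) libgcc_s at run time.
                        printf '%s\n' '#include <stdio.h>' 'int main(void) { puts("ok"); return 0; }' > hello.c
                        gcc -static-libgcc -o hello hello.c
                        ./hello
                        ```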

elcritch 3 years ago

Awesome! This looks easier than setting up cross-compilers for each platform.
