Transparently running binaries from any architecture in Linux (2018)
ownyourbits.com
For those on NixOS, add boot.binfmt.emulatedSystems = ["aarch64-linux"]; to /etc/nixos/configuration.nix and it will enable running ARM binaries.
https://search.nixos.org/options?channel=21.11&show=boot.bin...
Does Nix use qemu or a different technique to pull off this one-liner magic?
There is one emulator mapped per target system. Depending on the target system it will use qemu, wine, wasmtime or mmixware.
https://github.com/NixOS/nixpkgs/blob/111839dcf6e9a8bac6972e...
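Under the hood, each entry in that mapping boils down to a one-line binfmt_misc registration string handed to the kernel. A sketch of what the qemu-aarch64 registration typically looks like (the magic/mask bytes are the standard AArch64 ELF header pattern; the interpreter path assumes a qemu-user-static style install and is an assumption, not from the thread):

```shell
# Register qemu-aarch64 as the handler for 64-bit ARM ELF binaries.
# Field layout: :name:type:offset:magic:mask:interpreter:flags
# The trailing F flag tells the kernel to open the interpreter
# immediately, so it keeps working inside chroots that do not
# contain the qemu binary themselves.
echo ':qemu-aarch64:M::\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xb7\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-aarch64-static:F' \
    | sudo tee /proc/sys/fs/binfmt_misc/register
```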
Toward the end of the article they use chroot to run an entire rootfs as sort of a user-level system emulation.
The next step is to do the same thing except using containers/namespaces. I was able to run a Yocto rootfs build for ARM completely, including init, and IIRC networking, using LXC and binfmt_misc. A very handy technique for testing and it does run much faster than full-system emulation.
Until last week, I had a full Debian amd64 systemd container to run invidious on an ARM64 machine (a ROCKpro64) running Armbian, since crystal (the language in which Invidious is written) does not have arm64 Debian packages. There's nothing special to make this work, regular commands work, for instance just running systemd-nspawn -b in an arm64 root folder works.
Now I found a way to get arm64 crystal binaries. I got rid of the container, but Invidious still cross compiles to amd64, so qemu is still used to run an amd64 build of Invidious transparently.
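For reference, the whole nspawn setup is roughly this (package names and the rootfs path are illustrative, assuming a Debian-ish arm64 host):

```shell
# On the arm64 host: install the user-mode emulator (which registers
# the amd64 binfmt handler) plus the container tooling.
sudo apt install qemu-user-static systemd-container

# Boot the amd64 tree; systemd-nspawn neither knows nor cares that
# every binary inside is being run through qemu-x86_64.
sudo systemd-nspawn -b -D /path/to/amd64-rootfs
```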
binfmt and qemu-user do wonders. It works well. One could use box64 [1] instead of qemu and it should provide better performance because it uses the native versions of some well known libraries (including libc6) instead of emulating them, but I failed to compile box64 this weekend so I stayed with qemu.
You don't even need containers. These techniques (a chroot + static qemu-user with binfmt) were used many years ago to cross-build (and test) entire distributions for exotic architectures.
I remember http://scratchbox.org/ would allow you to replace some components (e.g. gcc) with their native versions so as to speed them up. It is all hopelessly broken now.
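The classic chroot recipe was just two steps (arch and paths here are illustrative):

```shell
# Copy a *statically linked* qemu-user into the foreign rootfs so it
# remains reachable once chroot switches the filesystem root.
sudo cp /usr/bin/qemu-aarch64-static arm64-rootfs/usr/bin/

# With the binfmt handler registered, this drops you into an arm64
# shell, and everything it spawns is emulated transparently.
sudo chroot arm64-rootfs /bin/bash
```

With setups that register the handler using the F (fix-binary) flag, the copy step is no longer needed, since the kernel holds the interpreter open itself.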
Certainly! It's just that you might want separate mount, network, etc. namespaces. Hence: "the next step"
With a little work you should be able to do this as unprivileged/rootless as well.
I'm still using Scratchbox (inside Docker) to build games for Maemo 5 (Nokia N900), as that's one of my engine's supported platforms ;)
I used this approach to start iterating on some Arm builds before I got access to any of the Arm servers they were introducing at work (Oracle Cloud Infrastructure).
I'd started out using a full emulation VM, and it was alright, but the cost of emulation was crippling for parts of the build process. IIRC one part of the build process was pulling in python libraries that didn't have arm wheels, and that took a bit of work to compile even on native architecture. Add in the overhead of full system emulation and it really hurt the iteration process. Especially as I worked my way from "Finally got it to build!" through to "Got the build repeatable from scratch!"
The binfmt / container approach dramatically reduced the amount of emulation being done, resulting in phenomenally faster build times.
Then I finally got access to an actual Arm instance and the entire process took only a fraction of that time.
> it does run much faster than full-system emulation.
The opposite is true if you're virtualizing the same architecture as the host with hardware virtualization (KVM) enabled. Counter-intuitively, user emulation is much slower than full-system emulation in this specific case.
>I was able to run a Yocto rootfs build for ARM completely, including init, and IIRC networking, using LXC and binfmt_misc
I'd love to try that! Any pointers? :)
very, very, roughly:
- build a rootfs using the poky reference distro (but do it for your arm target).
- https://docs.yoctoproject.org/
- https://docs.yoctoproject.org/brief-yoctoprojectqs/index.htm...
- you'll need to make or get a layer for your machine type. for example, for rpi you'll want: https://github.com/agherzan/meta-raspberrypi
- build the image: bitbake core-image-minimal (or whatever the machine layer wants you to do)
- find the unpacked root image (or unpack the final image). should be somewhere like build/tmp/work/<machine>/<blah>-image/1.0-r0/rootfs
- run it (note: in the past I used LXC, but let's try podman today):
  apt install qemu-user-static podman
  podman run -it --rootfs <rootfsloc> /sbin/init
First: you don't necessarily need an alternative chroot, Debian allows installing packages of foreign architectures in the same main tree. It has some hiccups, but it should mostly work.
Second: if you like playing with foreign architectures, I have a collection of ready-to-boot Debian images for many architectures, which you can promptly boot with QEMU. Command lines included. It is mostly aimed at full-system emulation, though (but if you look through the cogs you can also download chroots). https://people.debian.org/~gio/dqib/
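The multiarch route is only a few commands; a sketch with arm64-on-amd64 chosen for illustration (the specific packages are just examples):

```shell
# Tell dpkg/apt about the foreign architecture, then packages can be
# installed side by side with the :arch suffix.
sudo dpkg --add-architecture arm64
sudo apt update

# Foreign libraries land in /usr/lib/aarch64-linux-gnu/, and with
# qemu-user-static installed, arm64 binaries run transparently.
sudo apt install libc6:arm64 qemu-user-static
```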
How does qemu-user's performance compare to Rosetta 2? The latter is marketed as nearly native performance because it performs binary translation. But I read that qemu-user also performs binary translation.
You can also do this with docker containers for other architectures + the binfmt qemu-user trick, which may be easier to work with pre-existing rootfs images and other software.
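A sketch of that, assuming a recent Docker and the commonly used tonistiigi/binfmt helper image:

```shell
# One-shot: register the qemu binfmt handler for arm64 on the host.
docker run --privileged --rm tonistiigi/binfmt --install arm64

# Any arm64 image now runs transparently on an x86-64 host;
# uname -m inside reports the emulated architecture (aarch64).
docker run --rm --platform linux/arm64 alpine uname -m
```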
I've been using this technique to manipulate raspberry pi OS images for use in embedded system prototypes. It's very easy to set up. It's also nice to be able to use the image's embedded toolchain rather than set up a proper cross-compiler. It's slow to compile stuff due to the emulation, but relatively foolproof.
Of course, the best long term solution is to use something like yocto or buildroot, but that takes considerable time and knowledge to do properly.
What exactly does "transparently" mean in this context? I've seen that term used in a dozen different ways within software engineering.
It's just a process on your system that behaves the same way as a native one.
In a manner that hides the nitty gritty details from the user. I don't know why it's called transparent instead of opaque.
All that work to recreate what Inferno (Unix 3) did out of the box over 25 years ago.
FOSS Unix: 42 steps forward, 42 steps back. :-/
I have found this useful in practice when debootstrapping for a different architecture.
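The usual two-stage debootstrap pattern looks roughly like this (suite, mirror, and arch chosen for illustration):

```shell
# Stage 1 runs natively and only unpacks the arm64 packages.
sudo debootstrap --arch=arm64 --foreign bookworm rootfs \
    http://deb.debian.org/debian

# Make the emulator reachable inside the chroot (unnecessary when the
# binfmt handler was registered with the F flag).
sudo cp /usr/bin/qemu-aarch64-static rootfs/usr/bin/

# Stage 2 runs the arm64 maintainer scripts -- transparently emulated.
sudo chroot rootfs /debootstrap/debootstrap --second-stage
```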
This is an exaggeration; I doubt that it can run AS/400 binaries.
Well, it looks like qemu does have support for s390x, which is pretty impressive.
I've actually used that a few times. I've never seen a mainframe (in person), nor do I think I ever will without a career change, but I did need to make some changes to a build system in an open source project that had wide architecture support. I was using qemu to check that all of the binaries for the various supported architectures actually ran in some capacity.
BTW, there are a few different ways to get access to mainframe hardware rather than using qemu. In short, the Debian porterboxen, the Deb-o-matic service and the IBM LinuxOne cloud.
The only remaining supported big-endian architecture today, useful for testing BE quirks.
I presume it is not safe to do this with malware?
Does this work for graphical programs?
I don't see why not, as long as those programs don't need platform-specific drivers. In the end, the ARM programs will all speak X11 or Wayland, and those calls should transfer back to the native display renderer without any issues.
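qemu-user can even run a single foreign GUI binary with no chroot at all, by pointing it at the target's library tree with -L (the sysroot path assumes a cross-libc package is installed; the app name is hypothetical):

```shell
# -L sets the prefix for the emulated dynamic linker and libraries;
# the program's X11/Wayland traffic is just ordinary socket I/O, so
# it reaches the host's display server unchanged.
qemu-aarch64 -L /usr/aarch64-linux-gnu ./some-arm64-gui-app
```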
I've had more challenges getting the necessary dependencies installed for foreign architectures than I've had issues running them through qemu. It's honestly surprising how easy it is and it makes me wonder why Windows doesn't have something similar.