So you’ve just finished writing an application in Rust.
Congratulations!
While building and testing your application, you’ve probably been using cargo run to compile your code.
But now that it’s finished, how do you make a compiled version that’s ready for release?
If you’ve been reading The Cargo Book 1, you’ll come across a page that tells you to run:
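```shell
cargo build --release
```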
That will do all the heavy lifting you need, optimize the compiler configuration flags for a release product, and place your compiled application binary in the ./target/release/ folder.
Fantastic.
But if you know a bit about Rust, you know it’s a low-level programming language that can be used on different platforms. Haven’t you just compiled a binary for the operating system that you’re working on? What if you want to use this application on other operating systems? How do you compile that binary?
If you want to skip ahead and see the solutions, click the links below.
| Compiling From | to Linux | to Windows | to macOS |
|---|---|---|---|
| Linux | native | Linux -> Windows | Linux -> macOS |
| Windows | WSL or MSYS2 | native or via WSL | WSL |
| macOS | macOS -> Linux | macOS -> Windows | native |
If you want to see the process of how we get there, keep reading.
A simple solution: compile it on the machine where you’re going to use it
The simplest solution is to just compile it on the other device.
Did you develop the application on a Windows machine but want to use the tool on a Linux laptop?
No problem; install rustup on the other device, clone the git repository, then run cargo build --release just like above.
But actually there is a problem here.
What if you can’t install the system libraries you might need on that device to compile it?
What if that laptop doesn’t have enough storage for all the rustup files or compilation files?
What if the machine you want to run this on doesn’t have all the processing power you need to quickly compile this binary?
What if you’ve accidentally used Windows-specific libraries like OsStringExt that you didn’t realize wouldn’t work on the other device until now?
It sure would be helpful if you could compile this binary on your development machine for the machine where you want it deployed (“cross-compilation”). It would help you with testing, with planning which crates and features to use, and with saving the end user the hassle of compiling things themselves. What can you do to compile your Rust application on this machine for a completely different one?
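As a concrete illustration of that last pitfall: platform-specific extension traits like `OsStringExt` and `OsStrExt` only exist on their own platform, so any use of them has to be gated behind `cfg` attributes if the crate is ever going to cross-compile. A minimal sketch (the `raw_bytes` function is just a made-up example, not anything from the utilities in this post):

```rust
use std::ffi::OsString;

// These extension traits only exist on their own platform, so each
// implementation is compiled only when targeting that platform.
#[cfg(windows)]
fn raw_bytes(s: &OsString) -> usize {
    use std::os::windows::ffi::OsStrExt;
    s.encode_wide().count() // number of UTF-16 code units
}

#[cfg(unix)]
fn raw_bytes(s: &OsString) -> usize {
    use std::os::unix::ffi::OsStrExt;
    s.as_bytes().len() // number of bytes
}

fn main() {
    let s = OsString::from("hello");
    println!("{} raw units", raw_bytes(&s));
}
```

Without the `cfg` gates, a crate that imports `std::os::windows::ffi::OsStringExt` simply fails to compile for any non-Windows target.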
Platforms as compilation targets
Here is where Rust’s concept of “platforms” comes in. Platforms are essentially 3 things the Rust compiler needs to be aware of when compiling binaries:
- CPU architecture (32-bit, 64-bit, ARM, or other instruction set architectures)
- Operating system (Linux, macOS, Windows, etc)
- Compilation tools (GNU toolchain, musl, Microsoft Visual C++, etc)
For simplicity, we can just focus on Linux, macOS, and Windows, the major “Tier 1” platforms which can be thought of as “guaranteed to work”.
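These three pieces get encoded into a “target triple” like x86_64-pc-windows-gnu (architecture, vendor, operating system/toolchain). You can ask rustc for every triple it knows about:

```shell
# list every target triple rustc can compile to
rustc --print target-list
# each entry encodes architecture-vendor-os(-toolchain),
# e.g. x86_64-pc-windows-gnu or aarch64-apple-darwin
```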
The Rust crate to cross-compile
I have some miscellaneous utilities that I like to use. If I’m developing these utilities on a Linux machine, I can clone the repo and compile them for Linux without any issue.
git clone https://github.com/jrhawley/misc_utils
cd misc_utils
cargo build --release
That will compile the application for the platform we are currently on.
But I want these utilities to work on any computer I use, regardless of what operating system I’m working on 2.
To cross-compile, we’re going to use the same basic idea for each step.
We’ll tell cargo that we want to cross-compile our application for a different platform and let rustup download and manage all the libraries we need to accomplish this 3:
# download the libraries and files needed to compile to this platform
rustup target add {target-platform-triple}
# perform the actual compilation
cargo build --release --target {target-platform-triple}
Importantly, this doesn’t affect anything about our native build.
This only affects any build target specified by the target-platform-triple.
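If you ever want to double-check what rustup has set up, it can report both the native host triple and any extra targets you’ve added:

```shell
# show the native (host) triple and toolchain details
rustup show
# list only the extra compilation targets you've installed
rustup target list --installed
```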
For more detailed steps on specific cross-compilations, we’ll need specific instructions for each pair of operating systems.
From Linux to Windows
This process works just as described above, once we install some dependencies.
For Debian-flavoured Linux distros, that uses apt.
For other distros, refer to your package manager.
# install the MinGW GNU C toolchain
sudo apt install gcc-mingw-w64-x86-64
# the usual steps
rustup target add x86_64-pc-windows-gnu
cargo build --release --target x86_64-pc-windows-gnu
That produces the release binary in the ./target/x86_64-pc-windows-gnu/release/ folder.
# the original file compiled on Linux for Linux
> file target/release/mvlog
target/release/mvlog: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=534b971da7aecc23dbb8e40a63ea56fad0bc2bf3, for GNU/Linux 3.2.0, with debug_info, not stripped
# the new file compiled on Linux for Windows
> file target/x86_64-pc-windows-gnu/release/mvlog.exe
target/x86_64-pc-windows-gnu/release/mvlog.exe: PE32+ executable (console) x86-64, for MS Windows
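As an optional sanity check (this assumes you have Wine installed, and that the binary accepts a --help flag), you can even smoke-test the Windows executable without leaving Linux:

```shell
# run the cross-compiled Windows binary under Wine
wine target/x86_64-pc-windows-gnu/release/mvlog.exe --help
```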
That’s it! We’ve completed our first successful cross-compilation. With that settled, we can move on to slightly more difficult compilations.
From Linux to macOS
We’re going to apply the same trick here as we did with Windows, but this does involve extra steps because of the Apple Xcode SDK. We’ll need the OSXcross tool to install the macOS toolchain. Tools like godot take this approach, so we’ll follow it.
Install OSXcross
First we need to install OSXcross’ dependencies. On Ubuntu, that looks something like this:
sudo apt install \
make \
cmake \
git \
patch \
libclang-dev \
libssl-dev \
liblzma-dev \
libxml2-dev
Again, for other Linux distros, refer to that distro’s package manager and the equivalent libraries. Next, we need to download the OSXcross repository. You should do this on a drive with > 2 GB of storage space available.
git clone https://github.com/tpoechtrager/osxcross
cd osxcross
Then we’re going to download a pre-compiled tarball of the macOS SDK 4.
wget -nc https://s3.dockerproject.org/darwin/v2/MacOSX10.10.sdk.tar.xz
mv MacOSX10.10.sdk.tar.xz tarballs/
UNATTENDED=yes OSX_VERSION_MIN=10.7 ./build.sh
# if you want to compile with GCC instead of clang, also run the next line
# if not, skip this step
./build_gcc.sh
This will take a few minutes, but if everything is successful, your terminal should look something like this:
patching file usr/include/c++/v1/__hash_table
Hunk #1 succeeded at 1170 (offset 6 lines).
Hunk #2 succeeded at 1239 (offset 6 lines).
testing i386-apple-darwin14-clang++ -stdlib=libc++ -std=c++11 ... works
testing x86_64-apple-darwin14-clang++ -stdlib=libc++ -std=c++11 ... works
testing i386-apple-darwin14-clang ... works
testing i386-apple-darwin14-clang++ ... works
testing x86_64h-apple-darwin14-clang ... works
testing x86_64h-apple-darwin14-clang++ ... works
testing x86_64-apple-darwin14-clang ... works
testing x86_64-apple-darwin14-clang++ ... works
Do not forget to add
/home/james/Documents/osxcross/target/bin
to your PATH variable.
All done! Now you can use o32-clang(++) and o64-clang(++) like a normal compiler.
Example usage:
Example 1: CC=o32-clang ./configure --host=i386-apple-darwin14
Example 2: CC=i386-apple-darwin14-clang ./configure --host=i386-apple-darwin14
Example 3: o64-clang -Wall test.c -o test
Example 4: x86_64-apple-darwin14-strip -x test
The last step is to add the folder mentioned above to your $PATH 5.
In my case, that folder is /home/james/Documents/osxcross/target/bin.
Altering cargo’s configuration
Once we navigate back to the misc_utils directory, we need one final step before compiling.
We need to tell cargo about the tools from OSXcross.
We’ll do this by saving the following to .cargo/config.toml:
# this specifies when we target the x86_64-apple-darwin platform
# we need these tools to link objects properly
[target.x86_64-apple-darwin]
linker = "x86_64-apple-darwin14-clang"
ar = "x86_64-apple-darwin14-ar"
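If you’d rather not commit a config file, cargo reads the same settings from environment variables, named after the upper-cased triple with dashes replaced by underscores:

```shell
# equivalent to the [target.x86_64-apple-darwin] section above
export CARGO_TARGET_X86_64_APPLE_DARWIN_LINKER=x86_64-apple-darwin14-clang
export CARGO_TARGET_X86_64_APPLE_DARWIN_AR=x86_64-apple-darwin14-ar
```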
Perform the cross-compilation to macOS
Now that cargo is properly configured, we can proceed as expected.
rustup target add x86_64-apple-darwin
cargo build --release --target x86_64-apple-darwin
Now we have a fully functioning macOS binary.
> file target/x86_64-apple-darwin/release/mvlog
target/x86_64-apple-darwin/release/mvlog: Mach-O 64-bit x86_64 executable, flags:<NOUNDEFS|DYLDLINK|TWOLEVEL|PIE|HAS_TLV_DESCRIPTORS>
From macOS to Linux
This process largely mirrors the Linux-to-macOS process above, but we don’t have to work around Xcode anymore, so it is a bit simpler.
Install GNU C Compiler for macOS
Using Homebrew, we can install the development libraries we’ll need.
brew tap SergioBenitez/osxct
brew install x86_64-unknown-linux-gnu
Configure cargo to use this compiler and linker
Like above, we’ll need to configure .cargo/config.toml for compiling to Linux by adding the following 3:
[target.x86_64-unknown-linux-musl]
linker = "x86_64-unknown-linux-gnu-gcc"
Perform the cross-compilation to Linux
The rest is just as we expect:
> rustup target add x86_64-unknown-linux-musl
> cargo build --release --target x86_64-unknown-linux-musl
> file target/x86_64-unknown-linux-musl/release/mvlog
target/x86_64-unknown-linux-musl/release/mvlog: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, with debug_info, not stripped
From macOS to Windows
You should get the picture by now. First we’ll install the dependencies:
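The x86_64-w64-mingw32-gcc linker referenced in the config below most likely comes from Homebrew’s mingw-w64 package (an assumption on my part; check brew info mingw-w64 if in doubt):

```shell
# install the MinGW toolchain, which provides x86_64-w64-mingw32-gcc
brew install mingw-w64
```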
Then we edit .cargo/config.toml:
[target.x86_64-pc-windows-gnu]
linker = "x86_64-w64-mingw32-gcc"
ar = "ar"
Then we compile as normal:
rustup target add x86_64-pc-windows-gnu
cargo build --release --target x86_64-pc-windows-gnu
From Windows…
Here is where things break from the rhythm we’ve got going. On Windows, Rust can use the Microsoft Visual C++ compiler, which is proprietary, unlike the LLVM and GNU compilers we’ve been using on Linux and macOS. Because of this, using MSVC to compile to other platforms is…challenging.
I’m sure that there are ways that you can get this working. But you still need to install Linux or macOS C libraries to compile against, and properly linking them all on an operating system that lacks a central package manager is…what’s the word? Oh yeah, challenging.
This leaves us with two main directions we can take. Both of these directions make use of Linux workarounds:
- Use MSYS2/Cygwin to install the Linux-specific libraries we need, or
- Use the Windows Subsystem for Linux (WSL) and just pretend we’re using a Linux machine.
If you only want to compile to Linux, you can use either option. But if you also want to compile to macOS, you’ll need to try the same trick with OSXcross, which makes use of some Unix headers that aren’t available through Windows or MSYS2/Cygwin 6. The end result is that you can’t compile to macOS from Windows unless you use WSL (or at least I haven’t been able to).
I’ll talk more about the pros and cons of these approaches, below. But first, I’ll list what you can do to perform the cross-compilation.
…to Linux using WSL
This is surprisingly simple. Install WSL according to the documentation, then start up Linux and proceed as if you were on a Linux machine.
You’ll have to install rustup again, which is an unfortunate duplication.
But this allows for native compilation.
You can just run cargo build --release; you don’t even need a --target flag, because the native Linux target is already the default.
…to Linux using MSYS2
In my experience, the easiest way to install and use MSYS2 is through Scoop:
# install with Scoop
scoop install msys2
# open msys2
msys2
# update package manifests
pacman -Syu
# install the MinGW toolchain
pacman -Sy mingw-w64-x86_64-toolchain
We’ll also need to download and install LLVM, which is straightforward 7.
Be sure to check the box that adds LLVM to your $PATH environment variable.
Then we update ./.cargo/config.toml for this new platform 3:
[target.x86_64-unknown-linux-musl]
linker = "/path/to/LLVM/bin/lld.exe"
Now we can proceed as normal.
rustup target add x86_64-unknown-linux-musl
cargo build --release --target x86_64-unknown-linux-musl
…to macOS using WSL
We don’t need to do anything special here since we can just follow the directions for Linux-to-macOS above.
…to Windows using WSL
Because WSL is a fully fledged Linux operating system, we can actually use it to cross-compile back to Windows. Is there any good reason you’d want to do this? Probably not, but we’re already here, so why not have some fun and try it out? The steps follow just like above, and we can compare the output files.
Surprisingly, these are actually slightly different files, even though I use the GNU toolchain on both systems.
> ll target/**/mvlog.exe
-rwxrwxrwx 2 james 8.2M Feb 20 17:02 target/release/mvlog.exe # this is the Windows-native build
-rwxrwxrwx 2 james 8.4M Feb 21 00:49 target/x86_64-pc-windows-gnu/release/mvlog.exe # this is the Linux-to-Windows cross build
If you look at the hexdumps for each binary, you can begin to see some minor differences in certain bytes between the two.
> hexyl -n 160 ./target/release/mvlog.exe
┌────────┬─────────────────────────┬─────────────────────────┬────────┬────────┐
│00000000│ 4d 5a 90 00 03 00 00 00 ┊ 04 00 00 00 ff ff 00 00 │MZ×0•000┊•000××00│
│00000010│ b8 00 00 00 00 00 00 00 ┊ 40 00 00 00 00 00 00 00 │×0000000┊@0000000│
│00000020│ 00 00 00 00 00 00 00 00 ┊ 00 00 00 00 00 00 00 00 │00000000┊00000000│
│00000030│ 00 00 00 00 00 00 00 00 ┊ 00 00 00 00 80 00 00 00 │00000000┊0000×000│
│00000040│ 0e 1f ba 0e 00 b4 09 cd ┊ 21 b8 01 4c cd 21 54 68 │••ו0×_×┊!וL×!Th│
│00000050│ 69 73 20 70 72 6f 67 72 ┊ 61 6d 20 63 61 6e 6e 6f │is progr┊am canno│
│00000060│ 74 20 62 65 20 72 75 6e ┊ 20 69 6e 20 44 4f 53 20 │t be run┊ in DOS │
│00000070│ 6d 6f 64 65 2e 0d 0d 0a ┊ 24 00 00 00 00 00 00 00 │mode.___┊$0000000│
│00000080│ 50 45 00 00 64 86 14 00 ┊ 57 5b f4 63 00 2a 6a 00 │PE00dו0┊W[×c0*j0│ # 👈 compare this line...
│00000090│ 52 58 00 00 f0 00 26 00 ┊ 0b 02 02 26 00 e6 21 00 │RX00×0&0┊•••&0×!0│
└────────┴─────────────────────────┴─────────────────────────┴────────┴────────┘
> hexyl -n 160 ./target/x86_64-pc-windows-gnu/release/mvlog.exe
┌────────┬─────────────────────────┬─────────────────────────┬────────┬────────┐
│00000000│ 4d 5a 90 00 03 00 00 00 ┊ 04 00 00 00 ff ff 00 00 │MZ×0•000┊•000××00│
│00000010│ b8 00 00 00 00 00 00 00 ┊ 40 00 00 00 00 00 00 00 │×0000000┊@0000000│
│00000020│ 00 00 00 00 00 00 00 00 ┊ 00 00 00 00 00 00 00 00 │00000000┊00000000│
│00000030│ 00 00 00 00 00 00 00 00 ┊ 00 00 00 00 80 00 00 00 │00000000┊0000×000│
│00000040│ 0e 1f ba 0e 00 b4 09 cd ┊ 21 b8 01 4c cd 21 54 68 │••ו0×_×┊!וL×!Th│
│00000050│ 69 73 20 70 72 6f 67 72 ┊ 61 6d 20 63 61 6e 6e 6f │is progr┊am canno│
│00000060│ 74 20 62 65 20 72 75 6e ┊ 20 69 6e 20 44 4f 53 20 │t be run┊ in DOS │
│00000070│ 6d 6f 64 65 2e 0d 0d 0a ┊ 24 00 00 00 00 00 00 00 │mode.___┊$0000000│
│00000080│ 50 45 00 00 64 86 17 00 ┊ f0 ed f3 63 00 84 67 00 │PE00dו0┊×××c0×g0│ # 👈 ...with this line
│00000090│ 29 58 00 00 f0 00 26 00 ┊ 0b 02 02 28 00 e8 21 00 │)X00×0&0┊•••(0×!0│
└────────┴─────────────────────────┴─────────────────────────┴────────┴────────┘
I’m sure someone with more expertise than me in compilers can tell me what exactly is going on here and how these two binaries differ, but I think that’s out of scope for this post. It suffices to say that we can do this Matryoshka doll-style cross-compilation, if we want.
Containers
I haven’t yet mentioned the elephant in the room. Containers are a technology that has been around for decades and has been widely used in the software industry for at least 10 years. Containers allow developers to use other operating systems on their machine and to interact in precise and limited ways with the host file system. This would be an ideal technology for achieving cross-compilation, if you can get it working. It’s very similar in spirit to WSL, which I use above. So why haven’t I mentioned containers until now?
The simple reason is that I don’t like them very much.
The management system around containers (e.g. Docker, Podman, etc) tends to be pretty heavy:
- Docker needs a daemon process running to use it
- you need to pull huge image files from some centralized online repository to build an image
- specifying the exact dependencies in some manifest file is very tedious
- mounting specific volumes to communicate with the host system always leaves me confused about what volume / refers to
- debugging processes during development is difficult when they’re hidden behind some HTTP/TCP requests
- the list goes on.
I’m not an experienced enough systems administrator to know the nuances around these systems.
Nor am I in a position where I would reap the rewards that come out of using technologies like these.
All in all, working with containers usually feels more difficult than going through the effort to install new system libraries and editing configuration files like I have, above. But not liking something isn’t an excuse to not learn or talk about it. So let’s talk about containers.
Using local containers
Many people have used containers successfully for cross-compiling.
cargo-zigbuild takes a novel approach to solving some of these cross-compilation issues (see this blog post for an example).
It uses Zig’s bundled C compiler (zig cc) to do the linking, since it is a zero-dependency, drop-in C/C++ compiler that supports cross-compilation out of the box.
That’s pretty neat, and I like the ingenuity here.
Sadly, this tool doesn’t natively compile to macOS.
It requires a Docker image for targeting macOS, but at least you can actually do it.
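For Linux targets, using it looks something like this (a sketch; see the cargo-zigbuild documentation for exact flags, and note that Zig itself must also be installed):

```shell
# install the cargo subcommand
cargo install cargo-zigbuild
rustup target add x86_64-unknown-linux-musl
# drop-in replacement for `cargo build`, linking with `zig cc`
cargo zigbuild --release --target x86_64-unknown-linux-musl
```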
Another comprehensive approach for cross-compilation that uses containers is Cross. Cross has been around for a few years, has good documentation, and from when I briefly tried it, it works. It has Docker images for almost every Rust target platform, which makes getting started pretty easy.
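Getting started with Cross is about as minimal as it gets, since it mirrors the cargo interface we’ve been using (assuming Docker or Podman is already running):

```shell
cargo install cross
# same interface as `cargo build`, but runs inside the target's Docker image
cross build --release --target x86_64-pc-windows-gnu
```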
There are some drawbacks to Cross, of course.
There are issues installing system libraries in the Docker image if you need to use them for your application, like OpenSSL.
Working around these concerns duplicates a lot of work you’d do on your host system, anyway, if you weren’t using Cross, but now you’re doing it behind the veil of containers.
There are some known issues running cross on macOS.
You can get around these issues by running cross within Docker/Podman and configuring cross to use the remote Docker host.
And most importantly, in my opinion, there isn’t an image for compiling to x86_64-apple-darwin 8.
So even if you use a well-engineered and comprehensive solution like Cross, you still might not be able to accomplish what you need.
You should be able to extend these container images for macOS by installing OSXcross within them, and then compile to macOS from there using the same tooling. In fact, that appears to be exactly what this GitHub Action does with an Ubuntu image and musl C. But again, you’re doing all the work I did above, but making it harder because you’re doing it behind the veil of a container. And this GitHub Action leads me to my next topic.
Using remote containers and continuous integration
Continuous integration services like Travis CI or GitHub Actions provide macOS and other operating system runners you can use for compilation. This should make it easy to build for these systems if you don’t have physical access to one of those machines yourself. That is the approach that trust takes. You could also figure out the YAML configuration for all the platforms you want to target and use one of these services to build them for you.
The major drawback here is that setting up continuous integration and continuous deployment (CI/CD) is a major pain. To get CI/CD working initially you commit a build manifest that triggers your build pipeline, only for it to fail a few minutes later. Then you have to try a new solution in the manifest, trigger the build, then fail in an entirely new way. Repeat this process a few times and your repo is suddenly littered with dozens of build artefacts, updated version numbers that don’t actually mean anything for the program, and confusion all around.
In my experience, these types of build systems scale nicely, but are extremely brittle and tedious to fix. As one person who likes to program as a hobby, I don’t have the time or patience for this 9. So while, in theory, you can use these services for cross-compiling, in practice I have found it to be more painful than all the steps I mentioned above.
Summary of different approaches
Finally, we’ve got to the end and can summarize what we’ve learned.
There are some slight differences to the above instructions if you’re targeting i686-pc-windows-gnu, for example, but the same principles apply for the other Tier 1 platforms.
We can summarize this information as follows:

| Compiling From | to Linux | to Windows | to macOS |
|---|---|---|---|
| Linux | native | Linux -> Windows | Linux -> macOS |
| Windows | WSL or MSYS2 | native or via WSL | WSL |
| macOS | macOS -> Linux | macOS -> Windows | native |
Pros and cons to different approaches
Now that we know how to cross-compile between these different systems, let’s compare them to see what’s good and bad about each of them.
Linux is open. You can use Linux to compile to everything, and everything can compile to Linux.
Windows is pretty closed. You can’t use the MSVC toolchain on any system but Windows, and while you can compile for Windows from other systems, you have to put in some extra work.
macOS is even more closed than Windows. As far as I can tell, you can’t directly compile to macOS on a Windows machine - you have to go through the WSL workaround. Even at that, you still have to build OSXcross and hack around the macOS SDK to get everything working properly.
This puts developers in a bind. The openness of Linux is what makes compiling to it relatively easy, but it also leaves it at a disadvantage. If you want to cross-compile to one of the other operating systems, your life could be easier by using one of those operating systems. If you want to 1) make software in Rust for macOS and 2) make your life easy, your best choice is to develop on macOS. The same goes for Windows. But the companies behind these operating systems have made this process intentionally difficult 10. The incompatible toolchains make this process harder than it needs to be. Cross-compiling is really where you start to see the walled gardens that different companies and operating systems put up.
If you’re trying to choose an operating system to start developing on, the best advice I can give is to compile on the machine that you’ll use this application on most often. It’s native compilation, it’s easier to manage, and you don’t need to troubleshoot an abstraction layer between the compiler and your code.
After that, my next recommendation, purely based on ease-of-use, would be Windows. By taking advantage of WSL, you can get both native Windows and native Linux compilation on the same machine, while still being able to target all three. That’s something neither of the other operating systems offer.
After that, it really just comes down to preference. macOS is the easiest system to install libraries for the other two systems on. Windows gives you the option of multiple toolchains. Linux is open source and didn’t create these proprietary headaches that we all need to deal with.
If you’re well-versed with containers or CI/CD services, you can probably make these work for you. If you are not well-versed with containers or CI/CD services, and you’re not doing this cross-compilation for professional reasons, I wouldn’t waste my time with these.
Conclusions
My needs out of a computer have changed over time, so I like using software that I know will work on whatever machine I end up using. Because of that, when I write software, I also like it to be able to run on whatever machine I end up using. So while cross-compilation is not pleasant, it is necessary for me and many other people who extensively use computers personally and professionally.
After all this work trying to figure out how to cross-compile applications, I wish these recommendations could have been better. I was hoping to find something that made cross-compilation easier. But given that writing C/C++ code has never really been easy 11, and Rust uses these toolchains for compilation, I can’t say I’m surprised.
Hopefully MIR will keep improving and there will be a critical momentum to fully switch over to a compiler based in Rust 12. If Rust is guaranteed to work on Tier 1 Platforms, and there is a compiler, written in Rust, that can compile Rust code, that should finally reduce a lot of the complexity around this whole topic.
Finally, I never found a single comprehensive resource on Rust cross-compilation between the three major operating systems when I went searching around. Because of that, I wanted to write a blog post to aggregate that scattered information about cross-compilation. I hope I’ve summarized it well enough that others can make use of that work.
Despite the length of this blog post, I still haven’t covered everything. There are 90 targets in total that Rust can compile to, so I don’t think it’s possible for any one resource to cover them all comprehensively. I only covered 3 of the Tier 1 platforms, and even that wasn’t easy. There will be many more edge cases for dealing with the other targets, but if you’re dealing with those targets, hopefully you have more experience with them and can figure it out.
I also explicitly didn’t cover embedded devices in this blog post. I have no experience in this area, so I can’t rightly comment on it to give any guidance. From what I have seen, though, the people behind Rust Embedded excel in this area. Check out The Embedded Rust Book or the Embedonomicon if this is what you need help with.
If someone comes up with a much better system for cross-compiling Rust crates some day, please let me know. Until then, I’ll stick with what I have, above.
Comments on Mastodon.