ARM Mac: Why I'm Worried About Virtualization

bmalehorn.com

156 points by bmalehorn 6 years ago · 312 comments

adrianpike 6 years ago

I'm actually not worried, for a few reasons:

- I already do cross-arch development day-in and day out between x86 and ARM, and have only run into hard blockers on a library or tool a handful of times. The solve was generally pretty straightforward to either use an ARM-compatible alternative, or to cross-compile it myself.

- We've done this many many times before and it's not that bad. I know I'm not the only one old enough here to remember the days of having heterogeneous fleets across PPC, SPARC, and x86. Or even more recent - different extensions for x86 with different chipset manufacturers.

  • DaiPlusPlus 6 years ago

    But back in those very heterogeneous days (don’t forget to throw in Alpha and MIPS!), computers running more exotic processors like SPARC were workstation-class, and their manufacturers were responsive to requests from the SE community. So while I share your lack of concern about heterogeneity, I am concerned that Apple won’t be making their ARM platform the best for developer workloads (at least, non-iOS, non-macOS workloads). Remember that besides freeing them from Intel’s slower release schedule, Apple’s other main incentive for adopting ARM is to ensure their computers have a great performance-per-watt ratio on a low power budget. While I know their latest A-series chips are very, very competitive with (sorry, I mean: mopping the floor with) Intel’s current mainstream chips, Apple’s expertise is still in low-power mobile devices. I’m not convinced Apple will be switching away from Xeon chips in the Mac Pro or i7 (i9?) chips in the high-end MBPs - but most SEs I know using MBPs have the 13-inch models, which are moving to ARM right away.

    In short: I feel Apple’s consumer-oriented direction is starting to be at-odds with what they need to do in order to remain a compelling general development platform.

    Remember that macOS became a favourite for web-application development only around 12-13 years ago (prior to that it was seen as an OS for creative types), because Apple was selling nice hardware with an equally nice Unix-family OS with a compelling desktop experience. Take a look at typical Linux desktop distros from around the same time: visual eyesores, and incompatible with most laptops thanks to OEM driver issues. Apple wasn’t specifically targeting software developers at all - they were even showing ominous signs of disinterest by discontinuing their X Window server and going back on their promise of establishing Java as a pillar of the OS.

    With the move to ARM on laptops I think Apple will just lock-down the bootloader and won’t look back.

    What’s funny now is that Windows 10’s WSL, Windows Terminal, Docker support, etc. are suddenly making Windows look good as an OS for writing code for non-Microsoft platforms. And at least with a Windows laptop - even ARM Windows laptops - you can tinker with the bootloader and fire up Slackware if you really wanted to.

    • adrianpike 6 years ago

      Yep, I think we're of one mind here. I'm not worried about the ARM architecture shift, but the whole direction of the platform makes me expect I won't be using a Macbook as my primary dev machine much longer.

      Interestingly enough - for personal hacks (mostly cross-compiling Golang to ARM, natch) I'm actually using WSL lately, and it's definitely good enough. Not perfect, but nothing much is.

    • dclusin 6 years ago

      As a lifelong Mac user, my next machine is going to be a Windows box. It's not so much hatred for Apple that's driving me; it just seems that Microsoft now has a huge financial incentive to play nicely with Linux, due to their cloud business being pretty successful, and Apple, with their iOS hegemony, does not. The product decisions each company is making seem to reflect that. Also, $4k buys a lot more Windows box than it does Apple.

      • DaiPlusPlus 6 years ago

        Re: Azure.

        Microsoft is lucky, more than anything else (Azure is a very nice platform now, but it was very feature-anaemic in comparison until about 5 years ago)

        AWS is king, so anyone with any reason not to use Amazon will automatically use the next-biggest/next-best cloud provider - and that’s Microsoft. Which is odd: someone waking up from a 10-year coma would presume it would be Oracle or IBM, or a major webhosting or VPS vendor. Oracle was slow to get in, and IBM cheated by just buying SoftLayer and then spectacularly cocking it up: https://techcrunch.com/2020/06/09/ibm-cloud-suffers-prolonge...

  • lsllc 6 years ago

    I'm actually happy about this! With the recent decline of pretty much every UNIX vendor/platform and the now-deep proliferation of Intel/Linux, it's beginning to feel like MS/IE6 in the early 2000s all over again. We need diversity and competition for both OSes and processors (just as we did with browsers back then).

    In fact, let's bring back Ultrix, OSF/1, DG-UX, Solaris! (... we can skip HP-UX and SCO because they're truly awful). Note that OpenVMS has apparently already made its x86_64 comeback!

    • pjmlp 6 years ago

      Actually I liked HP Vaults, long before there was any talk about containers and similar on UNIX.

  • kevin_thibedeau 6 years ago

    Tons of x86 code accesses misaligned addresses.

    • saagarjha 6 years ago

      Well, that's only because it's efficient to do so on x86. Code recompiled for ARM isn't going to do that.

      • DaiPlusPlus 6 years ago

        It’s not that simple if that code is using explicit struct layouts or x86/x64 intrinsics.

        Forgive my ignorance though - but what ISA extensions are in Apple’s ARM chips for SIMD? Intel poured a lot of effort into SSE and AVX - does Apple have an answer there?

        • saagarjha 6 years ago

          > It’s not that simple if that code is using explicit struct layouts or x86/x64 intrinsics.

          Performance sensitive code that relies on alignment guarantees and other platform details will not work and need to be updated, yes. IIRC Apple's chips do NEON for SIMD, not sure if they support SVE yet. (But I figure they will have to once it becomes a required part of the ARM standard…)

timsally 6 years ago

It is very likely that ARM-based Macs will lack a performant hypervisor upon release. We will have to see how VMWare responds. I'd bet it will inspire new products and innovation, and the desktop space will move towards a less x86-64-centric world. In the end it is a short-term problem. Someone will respond and provide a performant hypervisor that can run on an ARM host and virtualize x86-64 and ARM guests.

It's true it will cause some pain in the first year or two, but even as a heavy VMWare Fusion user I am really looking forward to the benefits of a vertically integrated laptop.

  • sirn 6 years ago

    Apple has Hypervisor.framework which has been updated for ARM Mac[1].

    Xhyve and HyperKit (used by Docker for Mac) use Hypervisor.framework exclusively. The last time I tried Hypervisor.framework on x86-64, the CPU performance was quite fine (matching that of VMware/VirtualBox), but I/O was pretty abysmal. Emulating x86-64 on ARM is probably going to be the job of something similar to QEMU.

    [1]: https://developer.apple.com/documentation/hypervisor/apple_s...

  • ajconway 6 years ago

    If I understand correctly, hypervisors don't emulate hardware; that's what emulators (like QEMU) do. That would mean that in practice the most performant option to run x86 code on an ARM CPU is dynamic translation (like QEMU-TCG or the new Rosetta JIT support).

    • timsally 6 years ago

      Broadly speaking yes. Generally, hypervisors mediate access to shared hardware whereas emulators implement simulated hardware in software.

      The very first hypervisors worked using dynamic binary translation. They would run a "guest" operating system by executing its stream of instructions directly on the host CPU, dynamically translating that stream to remove any privileged operations and trap into software so the hypervisor could handle them. Modern hypervisors take advantage of hardware features that allow you to trap on privileged operations more efficiently. ARM started adding some of these features in 2013 [1]. In contrast, Intel first added them to the Pentium 4 in 2005 [2]. When such hardware features were first released they actually were not faster than the software translation; these days the hardware-based options are faster. There is even hardware support for running nested hypervisors. So the first question we need to ask is how hypervisors built on ARM's hardware features stack up against Intel's. I have no doubt that parity at a minimum will be reached; I just don't know what the current state of play is. As indicated in my original comment, if I had to bet, at release we won't quite have the performance or feature set you would be used to with a product like VMWare Fusion.

      The second question we need to ask is whether there is a way to efficiently emulate x86-64 processors on ARM hosts. Even better if you can do this while taking advantage of the supporting infrastructure hypervisors already have in terms of emulated devices and other features. QEMU just gets you the CPU and a short list of devices. The full experience of a seamlessly virtualized guest requires a lot more than that. But at the core you are right that it is going to require QEMU-TCG, Rosetta 2, or some similar technology, because the silicon just is not there to execute x86-64.

      Exciting stuff! We'll see where it all lands.

      [1] https://lwn.net/Articles/557132/

      [2] https://en.wikipedia.org/wiki/X86_virtualization#Intel-VT-x

    • saagarjha 6 years ago

      Rosetta 2 is ideally a static binary translation; it only falls back to emulation when this doesn't work. So it's a bit different from TCG :)

  • mister_hn 6 years ago

    With just less than 10% of market share, do you really think it will change the whole thing? Unless Microsoft pushes for ARM too, I don't see any changes soon

    • timsally 6 years ago

      We'll have to see. As another person already pointed out, VMWare has experimented with ESXi on ARM and they claim their customers could realize significant cost savings by migrating to ARM [1]. So if they've already done a good amount of engineering work on it, we may well see VMWare Fusion on ARM that can efficiently run ARM guests. They plan on releasing a tech preview in July [2].

      Whether you can stick an emulated x86-64 CPU in there is another matter. It's a much bigger engineering lift, and unless Apple puts some resources into it, it's not clear to me that a virtualization company would want to incur the cost by themselves. I hope there is enough demand for it and that someone will provide it. For me personally, the only reason I run VMWare Fusion is to access x86-only Windows applications for which there is no replacement.

      [1] https://blogs.vmware.com/vsphere/2019/10/esxi-on-arm-at-the-....

      [2] https://twitter.com/VMwareFusion/status/1275466832002945024

    • cwhiz 6 years ago

      Microsoft has been dabbling with ARM for a long time now.

      It will all come down to whether this move gives Apple a significant performance and/or battery life advantage. If Apple pulls it off it will force Microsoft and other vendors to respond.

      • mister_hn 6 years ago

        They announced there will be no Bootcamp, I hope there will be a possibility to install other OSes as well on their new machines.

        This is critical especially when they make their older devices "EOL" and there are no more OS updates.

        • cwhiz 6 years ago

          VMWare and Parallels have both said they will have support. I don’t know what that means, yet.

          I would not personally buy a Mac product for the next 2-3 years.

    • pjmlp 6 years ago

      Microsoft just announced their own OpenJDK variant for Windows ARM.

      • mister_hn 6 years ago

        If Microsoft ports its whole suite of business applications to ARM, there might be hope, but without them it's hard.

        • pjmlp 6 years ago

          Office for ARM will be supported on Windows 10X and already exists for tablets and phones; Apple also demoed the Office beta for ARM Macs at the WWDC keynote.

          • mister_hn 6 years ago

            Also SharePoint, Project, Dynamics?

            • pjmlp 6 years ago

              SharePoint and Dynamics run on whatever .NET runs on, and that includes ARM CPUs.

              Project is part of Office.

              In any case, I thought we were talking about consumer devices here.

  • WrtCdEvrydy 6 years ago

    VMWare did have a hypervisor for Raspberry Pi.

    • my123 6 years ago

      Yes, VMWare ESXi runs on Arm just fine. :-)

      • jki275 6 years ago

        Can you run an x86 guest under it?

        • lode 6 years ago

          No. Virtualization (dividing a host into different logical hosts but executing unmodified CPU instructions, like VMware, VirtualBox, ...) and emulation (translating instructions, like Rosetta) are two different beasts.

          • jki275 6 years ago

            That's pretty much what I assumed from what I know of VMWare. It's going to be a big issue for future Macs; there are entire segments of developers who may have to abandon Macs if we can't run VMs of x86 operating systems.

013a 6 years ago

I'd say that I'm excited for ARM. That doesn't mean the transition will be seamless or easy.

I know that a big complaint about the move is "great, now I'm doing ARM locally and deploying to x86". I think this is a legitimate concern, for now, but I also strongly believe it is inevitable that, within the next decade, deploying to x86 in the Cloud will be as "weird" as ARM would be today. The benefits are way too numerous.

Well, more accurately, I think it'll be a "I'm on Fargate, oh wow, Fargate runs on ARM, I had no idea" kind of thing. Ok, the article outlines why you may need some idea, but come on; we're talking about one line where I'm downloading the x86 version of a dependency instead of an ARM version. That's an easy fix.
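
That one-line fix usually amounts to selecting the artifact for the host's architecture instead of hardcoding amd64. A sketch (the tool name and URL here are made up):

```shell
# Map the kernel's architecture name to the one release tarballs use.
deb_arch() {
  case "$1" in
    x86_64)        echo amd64 ;;
    aarch64|arm64) echo arm64 ;;
    *) echo "unsupported arch: $1" >&2; return 1 ;;
  esac
}

# Instead of a hardcoded .../sometool-linux-amd64.tar.gz:
ARCH=$(deb_arch "$(uname -m)")
echo "would fetch: https://example.com/sometool-linux-${ARCH}.tar.gz"
```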

I don't know what this means for open accessibility of hardware. Right now, I could go buy the Intel Xeon chip powering my app in the cloud and run it locally; when things move to ARM, it absolutely will be "AWS Graviton" (not sold outside AWS) or "Azure ARM Whatever" (not sold outside Azure). This sucks for accessibility, but, actually, does it? ARM enables the cloud providers to do this; they could never design their own x86 chips. As long as we're all standardized on the same ISA, and the chips generally have the same characteristics, I'm looking forward to a very bright future where vendors are also competing against one another in the silicon. And I may not be able to buy an AWS Graviton, but I'm sure (well, hopeful) that one day I'll be able to build an ARM desktop that isn't a Raspberry Pi. AWS will have their chips, Qualcomm has theirs, Apple has theirs, Microsoft and Google have some, and they're all competing against one another.

Ok, maybe this is a pipe dream. But I'm definitely in the short-Intel camp, at least for the long term.

  • klelatti 6 years ago

    This touches on an interesting question which I think underlies some of the concerns here today: who will build ARM chips comparable to, say, an i7 that I can go out and buy and plug into my machine at home?

    No one does now, and it's not obvious today who would. But if the demand is there, then even with lots of obstacles to overcome, someone can and will.

bazizbaziz 6 years ago

This seems like a weird benchmark; reading from /dev/urandom and gzipping random data does not seem like something most folks will want to do. It even appears that /dev/urandom speeds differ greatly across architectures [0], and there are issues with /dev/random being fundamentally slow due to the entropy pool [1] (but I guess this is why the author uses /dev/urandom).

It would be better to measure something more related to what Docker users will actually do, like the build time of a common container, and/or the latency of HTTP requests to native/emulated containers running the same service.

One reason to feel positive about the virtualization issues is that Rosetta 2 provides x86 -> ARM translation for JITs, which an ARM-based QEMU could perhaps integrate into its own binary translation [2].

[0] https://ianix.com/pub/comparing-dev-random-speed-linux-bsd.h...

[1] https://superuser.com/questions/359599/why-is-my-dev-random-...

[2] https://developer.apple.com/videos/play/wwdc2020/10686/

  • bmalehornOP 6 years ago

    Author here.

    I'm glad somebody said something! Yes, the gzip perf test is pretty silly, but it illustrates a significant difference. /dev/urandom throughput on this setup was about 100 MB/s, so it wasn't a bottleneck for this test - the bottleneck was gzip.

    Feel free to come up with a performance test yourself! I personally want to know what an HTTP test would look like. You can run an ARM image by running:

        docker run -it arm64v8/ubuntu
    
    Unfortunately, Rosetta 2 is not going to help here. Rosetta 2 translates x86 -> ARM, but only on Mac binaries. It does not translate Linux binaries, and cannot reach inside a Docker image.

    • sirn 6 years ago

      Was your emulation done with the qemu user space emulator[1] (the syscall translation layer) or the qemu system emulator[2] (the VM)? If it was qemu-system, you might see better numbers with qemu-user-static, which does binary translation similar to Rosetta 2 rather than being a full system emulator with all its overhead.

      You can probably use qemu-user-static to translate x86-64-only binaries in a Linux container on an ARM machine, too, but I have never tried.

      [1]: https://www.qemu.org/docs/master/user/main.html

      [2]: https://www.qemu.org/docs/master/system/index.html
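
      Whether Docker transparently runs foreign-arch binaries comes down to whether qemu handlers are registered in the kernel's binfmt_misc - a quick diagnostic sketch (the multiarch/qemu-user-static image in the hint is one common way to register them, not the only one):

```shell
# Report which qemu user-mode handlers, if any, the kernel knows about.
check_qemu_binfmt() {
  if ls /proc/sys/fs/binfmt_misc/qemu-* >/dev/null 2>&1; then
    echo "qemu binfmt handlers registered:"
    ls /proc/sys/fs/binfmt_misc/ | grep '^qemu-'
  else
    echo "no qemu binfmt handlers found; one way to register them:"
    echo "  docker run --rm --privileged multiarch/qemu-user-static --reset -p yes"
  fi
}
check_qemu_binfmt
```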

      • bmalehornOP 6 years ago

        I ran this on a Linux laptop - it looks like it's running qemu-user-static:

            root        9934  103  0.0 125444  6664 pts/0    Rl+  12:25   0:12 /usr/bin/qemu-aarch64-static /usr/bin/gzip
        
        So it might be that Docker already runs a native x86_64 Linux, then uses qemu-static binary translation.

        • sirn 6 years ago

          That's strange; in my experience it shouldn't have a 6x slowdown. It might be due to several factors, but here's your test, running on my system without Docker:

          Ryzen 3900X (host machine)

              $ dd if=/dev/urandom bs=4k count=10k | gzip >/dev/null
              10240+0 records in
              10240+0 records out
              41943040 bytes (42 MB, 40 MiB) copied, 1.02284 s, 41.0 MB/s
          
          qemu-aarch64-static

              $ dd if=/dev/urandom bs=4k count=10k | proot -R /tmp/aarch64-alpine -q qemu-aarch64-static sh -c 'gzip >/dev/null'
              10240+0 records in
              10240+0 records out
              41943040 bytes (42 MB, 40 MiB) copied, 3.33964 s, 12.6 MB/s
    • waon 6 years ago

      From the article:

      > Emulators can run a different architecture between the host and the guest, but simulate the guest operating system at about 5x-10x slowdown.

      I think this is a misleading statement because it implies that there is a constant performance overhead associated with CPU emulation. In reality, the performance relies heavily on the workload, more so with JIT-ed emulators.

      Regarding this specific benchmark, I think there are two main factors contributing to the poor performance. The first factor is that the benchmark completes in a short period of time. With JITs, performance tends to improve for long running processes because JITs can cache translation results allowing you to amortize the translation overhead. Another factor is that your benchmark is especially heavy on I/O, meaning that it spends a lot of time translating syscalls instead of running native instructions.

      I'd also like to add that CPU emulators sans syscall translation should work for any binaries, even those targeted for Linux. It would require a copy of the Linux kernel, but Docker won't work without it anyways.

    • yjftsjthsd-h 6 years ago

      So I'm not familiar with how Darwin does things, but on most FOSS unixes it's easy to use qemu to run one arch on another, either full system or just user mode emulation (which, when wired up correctly, lets you seamlessly execute e.g. ARM binaries on an x86 system). I would expect it to be easy enough to either set up user mode translation, or just swap Docker's backing hypervisor with an x86 VM. Or, worst case, just run qemu-system-x86_64 on your ARM Mac, run Linux inside that VM, and run Docker on that Linux; SSH in and it should be mostly transparent.

    • bazizbaziz 6 years ago

      One benchmark would be to track down a Python/JS/etc.-based "hello world" demo container. Base one version on Intel and the other on ARM, and measure each version's container build time and request latency after it is set up.

      If changing the base image is all that's needed and both Dockerfiles otherwise assume ubuntu, this should not take too long.
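
      A sketch of such a pair, assuming everything stays identical except the base image (contents hypothetical):

```dockerfile
# Hypothetical minimal benchmark image. The only per-architecture change
# is the base image: amd64/python:3.8-slim vs. arm64v8/python:3.8-slim.
FROM amd64/python:3.8-slim
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]
```

      Time `docker build` for each, then hit the running container with something like ab or wrk for the latency half.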

grahamlee 6 years ago

It didn’t take months when I did it (running Docker on a Pinebook, which was not a great experience). It took a couple of hours to flip some base images away from Alpine, as Debian already has a load of ARM packages built.

  • 8K832d7tNmiQ 6 years ago

    That’s only the case when all the libraries you need support the aarch64 architecture, which is sort of true for popular libraries, but not all of them.

  • yjftsjthsd-h 6 years ago

    > It took a couple of hours to flip some base images away from Alpine, as Debian already has a load of ARM packages built.

    Why did you have to switch from Alpine to Debian? Alpine supports ARM quite happily, and it looks like they're shipping Docker images for ARM (and other architectures, too).

    • 60secz 6 years ago

      Not OP, but Alpine's package manager leaves a lot to be desired, especially compared to Ubuntu's. It's also much easier to set the locale. Since minimal Ubuntu & Debian images exist, I think the question should be: "Why would you use Alpine?", especially considering potentially slower performance:

      https://pythonspeed.com/articles/alpine-docker-python/

      • yjftsjthsd-h 6 years ago

        > Not op, but alpine package manager leaves a lot to be desired especially compared to ubuntu.

        How so? If anything, apk is way nicer than apt in a container build script (or anything automated); with apt you have to use -y and maybe force the noninteractive frontend, where `apk add foo` just works, correctly, automatically, with no effort required.

        > Also much easier to set locale.

        > considering potentially slower performance:

        It's slower at installing python packages from pypi since it can't use cached versions. That's not the same thing as "it's slow".

        > Since minimal ubuntu & debian exist, I think the question should be: "Why would you use alpine?"

        Because minimal Ubuntu is still ~3 times the size of Alpine; Alpine is much smaller and simpler, it defaults to staying small (even if you remember to --no-install-recommends, deb packages are bigger and less modular), and I don't have to remember how to force apt to run in "no really, install without asking questions" mode.
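
        For comparison, the same build step in each world (a sketch; curl is just a stand-in package):

```dockerfile
# Alpine: apk is non-interactive by default; one flag also skips the index cache
RUN apk add --no-cache curl

# Debian/Ubuntu: the equivalent step needs explicit non-interactive plumbing
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl && \
    rm -rf /var/lib/apt/lists/*
```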

        • tams 6 years ago

          Having built several Alpine and Debian-based images, I've found Alpine very nice on the happy path, but much more of a hassle to dig out of a hole when something broke because of software misbehaving under non-Alpine assumptions.

          Debian in Docker, in comparison, offers fewer surprises, but you have to consistently do the right incantations.

          Regarding missing binary wheels on ARM: with more ARM laptops in the wild, those should eventually become more common.

  • thomaslord 6 years ago

    This assumes that your Docker workload can run on an ARM system without lots of hacking, and also that you trust the ARM-compiled version you're running locally to function identically to the x86-compiled version running on your server.

    • grahamlee 6 years ago

      No, it doesn't. If I'd made the statement "all you need to do is…" then it would have involved some assumptions. What I said was "I did this and all it took was…": no assumptions, just experience.

    • marmaduke 6 years ago

      It's not the worst assumption on HN by far

    • thrill 6 years ago

      If only people writing applications could find some way of testing that their apps function properly.

smspf 6 years ago

So many wrong assumptions ...

1. If emulating aarch64 (arm64) on x86_64 is 6x slower (on your system, btw; it's not a universal constant), it doesn't mean emulating x86_64 on aarch64 will be 6x slower. It'd probably be worse, or at least that's my gut feeling.

2. Generic container images like the Ubuntu mentioned usually have aarch64 (arm64) support, so running the x86_64 image makes no sense for the presented use-case.

3. You won't be able to use most software because they don't release ARM binaries ... and the example uses `wget` && `tar xf`, with no binary signature check. As someone who has been porting stuff from x86_64 to aarch64 for a couple of years, I admit I've seen this pattern frequently. The most obvious solution is to build from source, which would have been better on x86_64 too, instead of fetching a prebuilt (and unverified) binary from the internet. Maybe there are some CPU features the compiler could notice and use to apply optimizations that are not included in a prebuilt binary.

I'm not an Apple fan and I'm certainly not a fan of cross-architecture development either. I do agree with the general idea behind the article, however I find it a bit hand wavy.

  • thayne 6 years ago

    > Generic container images like the Ubuntu mentioned usually have aarch64 (arm64) support, so running the x86_64 image makes no sense for the presented use-case.

    I think the argument here is you can't build your own docker images that you use in production and run them on your mac without emulation (unless your production workload also runs on ARM).

    • smspf 6 years ago

      That's a fair point. Emulation implies other limitations too - code compiled on your machine might leverage only the CPU features emulated, which would lead to sub-optimal binaries, not to mention much slower builds.

    • flatiron 6 years ago

      If you don’t have an environment between your laptop and prod you got more things wrong than this ARM migration.

  • bmalehornOP 6 years ago

    Author here.

    > 1. If emulating aarch64 (arm64) on x86_64 is 6x slower (on your system, btw, it's not an universal constant), it doesn't mean emulating x86_64 on aarch64 will be 6x slower. It'd probably be worse, or at least that's my gut feeling.

    Yup, performance benchmarks are inherently flawed and nobody knows anything right now without the hardware. However if ARM -> x86 emulation is anything like x86 -> ARM emulation, I would expect a really big performance loss.

    > 2. Generic container images like the Ubuntu mentioned usually have aarch64 (arm64) support, so running the x86_64 image makes no sense for the presented use-case.

    Ah actually I address this in the article, and even run an arm64 image. The short version is, it would be a lot of work to convert your whole backend infrastructure to ARM just because you got a new laptop.

    > 3. You won't be able to use most software because they don't release ARM binaries ... and the example uses `wget` && `tar xf`, with no binary signature check. As someone who has been porting stuff from x86_64 to aarch64 for a couple of years, I admit I've seen this pattern frequently. The most obvious solution is to build from sources, which would have been better off on x86_64 too, instead of fetching a prebuilt (and unverified) binary from the internet. Maybe there are some CPU flags the compiler could notice and apply optimizations which are not included in the prebuilt binary.

    Yes, if only everything were built from source! I'm not saying there's no solution, just that the solution would be a lot of work. If the library is obscure enough and the errors are strange enough, it might be so much work as to be impossible for the busy web developer.

    My goal was to write a kind of hand-wavy article to get people talking about this problem.

    • smspf 6 years ago

      I agree on the performance loss. Just for kicks, I ran the same commands on some real aarch64 hardware (32 cores, 3.0GHz, ARMv8.? - can't remember and already logged off the machine, but I can double check tomorrow). Without further context, the numbers:

        someuser@some-aarch64-machine:~$ docker run arm64v8/ubuntu bash -c 'dd if=/dev/urandom bs=4k count=10k | gzip > /dev/null'
        10240+0 records in
        10240+0 records out
        41943040 bytes (42 MB, 40 MiB) copied, 2.18298 s, 19.2 MB/s
        someuser@some-aarch64-machine:~$ docker run amd64/ubuntu bash -c 'dd if=/dev/urandom bs=4k count=10k | gzip > /dev/null'
        warning: TCG doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
        warning: TCG doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
        warning: TCG doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
        10240+0 records in
        10240+0 records out
        41943040 bytes (42 MB, 40 MiB) copied, 6.72324 s, 6.2 MB/s

      • bmalehornOP 6 years ago

        Awesome, thanks for testing this out!

        A 3x slowdown is not as bad as 6x, but it's still quite a bit. I also saw a slowdown of ~4x when I tried this experiment on native x86_64 Linux running an ARM image - perhaps the Mac -> Linux virtualization slowed it down further.

        5x may have been a bit alarmist, but regardless we should brace ourselves for a big performance hit on x86_64 virtualization.

        • smspf 6 years ago

          I'm surprised it's only a 3x slowdown. But the single-thread performance of native execution (without emulation) is worse on aarch64, which was expected. Imo, a better benchmark would take into account the multithread performance with/without emulation.

  • zekrioca 6 years ago

    Yes, agreed. And the examples presented are not fair. There are a lot of optimizations one can do in Docker, especially when dealing with I/O workloads (the dd example in the article). Cloud providers have been doing this for a long, long time already. Why the author did not mention those remains to be seen.

eberkund 6 years ago

Are there any excited embedded developers in the crowd? I have done a little embedded work and cross compiling has always been a huge pain in the ass to setup. I know some people have even gone as far as purchasing expensive niche workstations with ARM CPUs specifically to avoid this problem. I feel like having a mainstream ARM platform like the MBP will make compiling software for ARM-based single board computers a breeze.

  • jeremyjh 6 years ago

    You'll still have a completely separate toolchain. First, a lot (most?) of embedded development is not done on ARM's A-profile. ARM Cortex-M is probably the most popular embedded platform in industry, and what it shares with "ARM Cortex-A" is the brand "ARM"; otherwise it is a separate architecture and instruction set.

    Even if you are talking about the ARM Cortex-A series, you aren't going to be using the same libraries on the embedded device that you use on a Mac. You'd most likely be using either Linux (a la Raspberry Pi) or an RTOS; either way you have a different compiler and stdlib to use.

  • nofunsir 6 years ago

    My expensive niche workstation = raspberry pi, pick your ARMv flavor.

    Most tools are adopting Linux remote build + remote debug, wherein you ssh in and hook into the compiler and debugger all from the comfort of CLion/VS2019/VSCode.

    If they don't have remote build, there is often building locally, with a copy of the root filesystem, using a cross-compiler, then remote deploy + debug. The most annoying part of this process is fixing all the symlinks not supported on NTFS.

    Expensiver niche workstation = $500 dev kit directly representative of your target, but with everything exposed.

    The interesting thing is that now we need ARM -> x86 remote build or cross-compilation tools, of which I know none.

  • platinumrad 6 years ago

    There isn't a single embedded target that uses MacOS's libc so you will still have to set up a specialized environment.

  • detaro 6 years ago

    I'd much rather have a more powerful x86 workstation for the same money than an ARM laptop. Never really had problems with cross-compiling. And without support for running Linux natively, it doesn't get me much even for the parts of testing that don't need the specific target (well, VMs maybe).

    • TheNorthman 6 years ago

      To be clear, we don't know if the ARM MacBook will be able to run Linux natively. We only know that Apple won't continue support for Boot Camp, and therefore Windows, anymore. Linux was never supported.

  • ndesaulniers 6 years ago

    Reminds me of the argument made in https://www.realworldtech.com/forum/?threadid=183440&curpost....

    I cross-compile Linux kernels daily. I think Clang makes this simpler, but the missing C runtime for cross-compiling userspace executables still leaves much to be desired.

    I think Zig is doing interesting things here. Clang should just straight up adopt this, IMO. https://andrewkelley.me/post/zig-cc-powerful-drop-in-replace...

  • bserge 6 years ago

    I dunno if I count, I have a RPi 3, original Droid (oh yeah, it's still working), an LG L9 and a HTC One M8. And I'm building kernels for them (mostly because I'm overclocking the shit out of everything and I have no choice but to build custom kernels).

    32 bit on the phones, a real pain in the ass to cross compile, but it's a fun learning experience (I'm just a noob to any programming). I'd love to get paid for this tbh :D

  • lsllc 6 years ago

    I solved that problem (mostly!) by using Go! Occasionally I do have to go dig out something like an arm-brcm-linux-gnueabi-gcc to cross compile C, but mostly I use Go.
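    For anyone unfamiliar, Go's cross-compilation really is just two environment variables; a minimal sketch (output names are arbitrary):

    ```shell
    # Pure-Go programs cross-compile with no extra toolchain installed:
    GOOS=linux GOARCH=arm64 go build -o app-linux-arm64 .
    GOOS=linux GOARCH=amd64 go build -o app-linux-amd64 .

    # Code using cgo still needs a C cross-compiler, e.g.:
    # CC=aarch64-linux-gnu-gcc CGO_ENABLED=1 GOOS=linux GOARCH=arm64 go build .
    ```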

jeremyjh 6 years ago

My 2017 MacBook Pro still has quite a lot of life in it, but it seems unlikely I will replace it with another Mac in a few years. Before the Mac I had a Lenovo X1 Carbon running Linux and it was great; even then it was a better development environment in some ways (Docker has better filesystem performance, pacman is much better than Homebrew). I do use some audio processing applications, and my kids play a few games that do not run at all on Linux. I may try WSL again instead of going straight back to Linux, but it's hard to imagine Mac will be the best OS for me.

  • drewrv 6 years ago

    I recently had to get a new laptop and went with windows for the first time in a decade. Macs are still great to develop on for now, but looking at the trajectory apple has taken, the growing pains ARM will likely bring, and also the trajectory of Windows, it seemed like Windows would be the safer choice over the next few years.

pwinnski 6 years ago

If the worst of every possible thing happens and you avoid the most obvious solutions and one is very, very slow, then yes, you're right to worry.

Or, you could use already-extant Debian ARM releases and spend minutes rather than months switching over.

edw 6 years ago

I'd like to advocate for remote development environments. Most of my day is spent typing into a tmux session on a cloud-hosted box. (I picked up a Magic Keyboard for my 11" iPad Pro, and thanks to Blink it's a great glass terminal. It's not going to work if you're debugging let's say a React app, but I've been very happy on it the last several days churning out Golang.)

Running stuff on your laptop makes it run slow, get hot, and burn battery. I've considered getting a small x86 or ARM media appliance as a (physically local) remote server for when I can't count on an Internet connection. A media PC costs how much? The big holdup has been the tyranny of choice I'm confronted with. (Suggestions are welcome!)

I think very few people would be surprised if the coming of ARM Macs will, along with AWS's ARM moves (and Microsoft's), drive acceptance and adoption of ARM-based server computing. The mechanism won't be anything formal, just the vague pressure that comes from people wanting their programs and libraries to compile locally.

  • chooseaname 6 years ago

    I have a small server at home running proxmox. I have a couple containers (lxd) running for personal dev projects. I agree that if you can do it like this, it's nice. I can be pretty much anywhere and open up a terminal, vpn in, and pick up where I left off thanks to tmux.

sjs382 6 years ago

    I would expect about a 5x slowdown running Docker images.
    
    Docker on a Mac utilizes a hypervisor. Hypervisors rely on running the same architecture on the host as the guest, and are about 1x–2x as slow as running natively.
    
    Since you're running ARM Mac, these hypervisors can only run ARM Linux. They can't run x86_64 Linux.
    
    What will happen instead? These tools will fall back on emulators.

Most of the software I run in Docker already supports ARM. I'd imagine that a lot of (most of?) us that use Docker do, too.

  • jayd16 6 years ago

    It'll be annoying maintaining multiple docker images. Kind of defeats the purpose.

binarynate 6 years ago

The loss of Boot Camp is huge. One of the reasons I develop on a Mac is so that I can use a single machine for all development (including macOS, iOS, and Windows development). Most of the time, developing for Windows on Parallels works fine, but there are some cases where it's necessary to boot directly into Windows to test or debug adequately. I hope Apple is able to reach an agreement with Microsoft, or at least continues shipping Intel-based Macs until such an agreement exists.

  • beagle3 6 years ago

    It will ship Intel-based laptops for at least a couple more years (at the very least, the models that will already be out at the switch), and will support them for a lot longer; so just buy the best Intel-based one following the switch, and it will last you 4-5 more years.

    But also: getting a cloud Windows station or an el-cheapo-$500-under-the-desk-when-you-really-need-it Windows machine is probably worth it if you're doing professional work. It would quickly cost much less than the time you lose when rebooting to the other OS, in my experience.

m000 6 years ago

Apple will singlehandedly make 2021 "the year of Linux on the desktop".

  • octorian 6 years ago

    Every time Microsoft or Apple majorly screws something up, people say this. It still hasn't happened yet.

    However, I think Apple has been a far greater threat to Linux adoption than Microsoft. Why? Because it gives techies the *nix environment they want, with the software and hardware support no one will give them on Linux.

    There is real value in proprietary commercial end-user application software. Most companies who make such software couldn't care less about supporting Linux. So if you want to use Linux, you have to use F/OSS alternatives and continue to try convincing everyone that somehow they're better than the commercial options... even when the rest of the world has agreed that they're really not.

    The whole incentive structure around F/OSS development really doesn't work for software where the profit motive is in the product itself... Not some nebulous "support contract" that you don't actually need. (Which is a far bigger issue for end-user applications.)

    • ryandvm 6 years ago

      > Because it gives techies the *nix environment they want, with the software and hardware support no one will give them on Linux.

      The UNIX experience on the Mac is pretty shitty. Ancient versions of all the tooling. Command-line utils have that weird BSD well-water flavor. No package management. Funny Docker quirks.

      The hardware used to be pretty nice, but honestly I'm still having trouble forgiving them for getting rid of the physical ESC key and turning volume control into a two-step routine on the TouchBar.

      Honestly if I'm doing server-side development, I much prefer using my ThinkPad (Ubuntu) over my MacBook. About the only thing I miss is the far superior touchpad on the Macbook. That's it.

      • selsta 6 years ago

        > Ancient versions of all the tooling. Command-line utils have that weird BSD well-water flavor. No package management.

        Can be easily fixed by installing homebrew. Also you claim it is a shitty Unix experience while complaining about BSD flavoured tools.

        > Funny Docker quirks.

        How is that Unix related? BSD has similar issues.

        Maybe you should have written the GNU/Linux experience is pretty shitty on macOS but no one claimed otherwise.

      • mbreese 6 years ago

        >> Because it gives techies the *nix environment they want, with the software and hardware support no one will give them on Linux.

        > The UNIX experience on the Mac is pretty shitty. Ancient versions of all the tooling. Command-line utils have that weird BSD well-water flavor. No package management. Funny Docker quirks.

        It doesn't have to be the best *nix environment. Hell, it doesn't even have to be a good one. It just has to be "good enough". For this, they still have an advantage over Windows. And compared with Linux, they still have the advantage that, by and large, things "just work". I have never personally been able to say that about a Linux desktop I've had. There is always one more thing to tweak, one more knob to turn, etc...

        I'm with you on the ESC key and touch bar though... at least they thankfully brought back the missing ESC key.

        • meddlepal 6 years ago

          Exactly what advantage does macOS shitty Unix environment have over Windows 10 w/ WSL2?

          • olyjohn 6 years ago

            Everything WSL2 does has already been doable on Windows for at least 10 years. It's a VM with some file system sharing. You might as well just ask what advantage MacOS has over Windows. WSL hasn't changed anything, really.

          • saagarjha 6 years ago

            It's not running in a virtual machine.

        • apetresc 6 years ago

          They _had_ the advantage over Windows. WSL2 is so well-integrated as of the latest major release that I think this advantage has now flipped.

          • selsta 6 years ago

            Isn’t WSL2 basically a virtual machine with better integration? Is it still super slow when accessing /mnt?

            I can also install something like multipass on macOS if I want a good integrated virtual machine.

        • dingaling 6 years ago

          > by-and-large, things "just work".

          Last week I had to drop to vi and edit nfs.conf on a friend's Mac to solve very slow transfer rates. "Just works" within a very narrow definition of primitive use cases.

      • rootusrootus 6 years ago

        I was with you until they released the newest MBP. Now my beloved ESC is back. I already do all my development in a Vagrant instance so I've never been bothered by the tooling. In all other regards I prefer MacOS as a desktop environment to the currently available Linux choices.

      • ArgyleSound 6 years ago

        > I'm still having trouble forgiving them for getting rid of the physical ESC key and turning volume control into a two-step routine on the TouchBar

        To be fair it's always been a one-step routine on the touchbar (touch and drag the icon) and they brought back the escape key.

        • mmcconnell1618 6 years ago

          Touch and drag makes it easy to jump the volume up/down by variable amounts, but really sucks compared to a single key press for "just a little louder" or "just a little quieter." There is something satisfying in the discrete steps of volume notches.

          On the other hand, my speakers have a physical volume dial that provides feedback via friction on movement so I like that better than touchbar or physical up/down buttons.

          • ArgyleSound 6 years ago

            So there’s actually two types of touch and drag you can do. One is holding until the slider appears and then dragging, the other is flicking the icons left and right which causes single step increments.

            Discoverability certainly sucks for that second one.

      • pjmlp 6 years ago

        UNIX experience on Mac is as UNIX as it gets, given that it is certified as a proper UNIX.

        Linux is its own thing and trying to mix UNIX with Linux is always going to lead to disappointment.

      • N1H1L 6 years ago

        > The UNIX experience on the Mac is pretty shitty. Ancient versions of all the tooling. Command-line utils have that weird BSD well-water flavor. No package management. Funny Docker quirks.

        Very, very true. And Homebrew is actively becoming worse now. A few years back, Homebrew was great - now using it feels like using some weird underground software stack that exists only because Apple hasn't come around to nixing it yet.

      • Vomzor 6 years ago

        Tap the volume icon and swipe left or right without letting go to do it in one fell swoop.

      • rescbr 6 years ago

        > The UNIX experience on the Mac is pretty shitty. Ancient versions of all the tooling. Command-line utils have that weird BSD well-water flavor.

        Well, OS X is Unix, but GNU is not Unix.

      • saagarjha 6 years ago

        > Ancient versions of all the tooling. Command-line utils have that weird BSD well-water flavor. No package management.

        That's what MacPorts is for ;)

      • jki275 6 years ago

        Package management == Homebrew. No issues there.

        Not a fan of the touchbar of course.

        Otherwise everything else I need is usable on a Mac.

    • wtetzner 6 years ago

      > However, I think Apple has been a far greater threat to Linux adoption than Microsoft. Why? Because it gives techies the *nix environment they want, with the software and hardware support no one will give them on Linux.

      With WSL, you basically get an actual Linux userland (with WSL2, I think you get an actual Linux kernel too), not just a Unix that's like Linux but different enough to be annoying. But I'm not sure that will be enough to convince people to move to Windows.

    • N1H1L 6 years ago

      If Microsoft Office comes on Linux, I am switching. And yes, I have used LibreOffice, it simply isn't that good.

    • pjmlp 6 years ago

      If Microsoft had been more serious about its POSIX personality, Linux would never have taken off.

      Most devs only want some kind of CLI and POSIX like capabilities.

      • chx 6 years ago

        > Most devs only want some kind of CLI and POSIX like capabilities.

        which is why WSL is so great...

        • pjmlp 6 years ago

          Indeed, and WSL instead of pure POSIX, because that allows it to tap into the ecosystem that thinks Linux === UNIX, without having to recompile anything.

          An approach already taken by other UNIX clones with their Linux compatibility syscalls layer.

      • anthk 6 years ago

        You forgot X11 and Motif. Without that Windows would be useless back in the day.

        • pjmlp 6 years ago

          Not really, because there were plenty of Win32 X Servers.

          Back in the day I was using Hummingbird.

          • anthk 6 years ago

            You forgot about porting and compiling software which depended on X11 and Motif. Linux won because of that.

            • pjmlp 6 years ago

              What for? That is what having something like Hummingbird took care of.

              I used to admin UNIX and develop for it from Windows NT/2000 workstations.

              Also the FOSS version of Motif only appeared when Motif wasn't that much relevant and most enterprise shops were migrating to CORBA and Web as integration points.

              • anthk 6 years ago

                No, I mean when Motif + Slackware were an alternative to SGI machines in a brief time. Those were supported by big multimedia companies.

  • starsinspace 6 years ago

    To give a different perspective: I find Apple's move to ARM the most exciting thing to happen in desktop computers in many years. I'm typing this from a PC running Win10. Current plan is: as soon as ARM desktop Macs become available (and assuming they don't screw it up in some weird way), I'd like to switch.

    • mey 6 years ago

      If it's ARM that excites you, Surface Pro X (Windows) and Pinebook (Linux) exist today. If it's macOS, you could switch now. What about the combo of macOS + ARM do you find compelling?

      • ajconway 6 years ago

        Windows app developers couldn't care less about ARM Windows. MacOS app developers will have to if they want to stay relevant.

      • kube-system 6 years ago

        Because of the network effects of software development. The number of people that use those devices you list is basically a rounding error, so they're ignored by most software development.

        Apple has a monopoly on their hardware, and they will likely sell a significant number of devices. This will lead to a lot more development for ARM that never would have happened otherwise.

        That, in turn, may tilt the balance in favor of ARM for a lot of other use cases outside of OSX, once other tools, applications, and hardware vendors better support ARM.

    • yjftsjthsd-h 6 years ago

      So, and I say this as someone who's stoked to see any non-x86 system going mainstream... why do you want to switch? What benefit do you see to switching to an ARM laptop? Or is it just "this is a good thing (in general) so I want to get on board with it"?

      • starsinspace 6 years ago

        I'm also excited about a non-x86 architecture on the desktop again. Monoculture is bad, and I find x86 to be especially ugly...

        As for switching... I'm increasingly unhappy with Microsoft's complete disregard of user privacy. Apple isn't perfect with that either, but IMO much better at least. For my use cases, Win and Mac are the only credible options due to software availability. So to get away from Windows.. there's not much choice these days.

        • torstenvl 6 years ago

          Why do you feel like proprietary software monocultures are better than commoditized hardware monocultures? OS vendors selling 100% custom silicon is not the path to diversity and freedom of choice.

    • dehrmann 6 years ago

      Having an x86 dev machine is useful because it matches most production environments pretty closely. This might be changing somewhat with AWS Graviton, but it's not the default yet.

      What makes ARM so exciting? Maybe battery use will be better? Maybe it will be slightly faster? Maybe? There's also been a lot of tuning done for laptop workloads on x86, so it's definitely a maybe. I expect the only noticeable changes for most users to be somewhat better battery life, some apps not working, and occasionally having to know which package to download.

  • dehrmann 6 years ago

    I expect Lenovo will move a few extra units because of this decision.

  • pantalaimon 6 years ago

    When was the year of the macOS desktop?

    • m000 6 years ago

      It's not about stealing users from macOS. It's about stealing developers. Hell, Apple is at the mercy of Microsoft and Adobe right now. I'd bet they had to line their pockets very well, so that they don't get any funny ideas.

      But Apple can't just pay-up every cross-platform software developer. Smaller developers will have to re-evaluate whether macOS remains a viable target platform for them. Which can translate to a dev-gain for Linux. The catch is that Linux is in much better position to translate an influx of developers to an influx of new users: Linux runs on what you have, while macOS requires you buy Apple hardware.

      And let's not forget about macOS as a gaming platform. Linux has made a huge leap forward with Steam Proton. On macOS there's still a ton of games not supporting x86_64 (Catalina), and the situation won't get better with the transition to ARM.

      • alwillis 6 years ago

        > Hell, Apple is at the mercy of Microsoft and Adobe right now. I'd bet they had to line their pockets very well, so that they don't get any funny ideas.

        I’m sure that's not the case. It's much more in Adobe’s interest to be ARM-ready on day 1; there are plenty of Photoshop alternatives in the Mac App Store—notice they demoed Affinity Photo running on the ARM Mac, and Affinity is much better at using Apple's native APIs and technologies than Adobe ever was. And it's just a one-time cost of $49.99 vs. renting Photoshop from Adobe. Users of Apple devices continue to be a large segment of their customer base.

        Adobe has already migrated all of their core applications to a new codebase that should be relatively easy to bring to ARM Macs. Photoshop and several of their other apps already run on iPadOS, so it won't be that big a deal to move them over to Big Sur.

      • jmull 6 years ago

        > ... while macOS requires you buy Apple hardware ... macOS as a gaming platform ...

        So nothing is changing. ARM Mac isn’t going to change the Linux desktop/laptop story.

      • dnh44 6 years ago

        That may be true but it’s not accounting for the iPhone and iPad developers out there that will be able to easily target macOS after the switch.

      • pjmlp 6 years ago

        macOS users actually pay those smaller developers; it would be foolish for them to expect anything from Linux users. It is hard, but it is the reality.

        Hence why plenty just target Android, although some of their apps could easily target GNU/Linux as well (especially the ones that are mostly NDK glue + whatever framework).

    • ch_sm 6 years ago

      Pretty much when the choice was between Vista and Tiger

    • inetknght 6 years ago

      1996

    • julienfr112 6 years ago

      2012 ?

    • anthk 6 years ago

      2001-2002 with OSX.

  • the-golden-one 6 years ago

    I thought that was WSL2 on Windows 10?

lachlan-sneff 6 years ago

This is ridiculous. Your code will just have to support multiple architectures, which is very easy with modern languages and tooling.

  • bobalob_wtf 6 years ago

    I think you missed the point. What about all the dependencies of your code that are only compiled for x86_64? The article isn't talking about native apps on the laptop, it's about apps that run on a server but that you are developing locally.

    You can't run your x86 docker image on your ARM mac without emulation. You can't run your x86 Windows VM without emulation etc.

    Of course there are solutions like using a remote server or a VM in the cloud, but if you're buying a decent machine then you would normally expect to be able to run these things locally.

    • acdha 6 years ago

      Do you have any examples of this? The last time I tried an AWS ARM server, it was literally no modification other than changing the server type — Linux has run on ARM for many years and Apple is far from the first company to use the platform.

      For example, back in 2017 Cloudflare was basically looking at this as a question of which hardware ran most cost-effectively rather than having engineering heroics first: https://blog.cloudflare.com/arm-takes-wing/

      • heavyset_go 6 years ago

        Mono and .NET Core run terribly on ARM. I have several containerized C# apps that I need to run on x86_64 hosts, because on ARM they'll just crash randomly.

      • user5994461 6 years ago

        I want to say Oracle database clients as an example.

        My company definitely had problems getting database drivers to work, generally speaking, on both 32 bits and 64 bits. Have a look at postgres, oracle, cassandra, redis, sybase, to name a few; I am not sure which one was worse, and it wasn't me doing the work. But I've seen some of the C and C++ dependencies that needed to be compiled, along with the errors that happened, and it was horrendous.

        • acdha 6 years ago

          Fair enough — I've used Oracle's products enough to know that software distribution and packaging is not a priority there.

        • smspf 6 years ago

        I've been using postgres, cassandra, redis and mysql/mariadb on aarch64. Only ran into issues with MySQL, which we root-caused to some weird atomic locks not working as expected on the first generation of ARMv8(.0) a couple years back.

        • jbverschoor 6 years ago

          That’s a good moment to make your c/c++ code more robust, and cross-platform, like the languages they are

          • user5994461 6 years ago

            The problem wasn't in our code, it was in the database code that was either from open source or from a vendor.

            If you want a sample, try to install the cassandra client library in Python. It will pull in and compile all sorts of shit. That's supposed to be Python and easily cross-platform.

            • jbverschoor 6 years ago

              Yes, so time for them to clean up their mess ;)

              • etaioinshrdlu 6 years ago

                The comments here flow something like this: 1. complain that switching to ARM is a mess; 2. no it's not a mess, it's easy; 3. no, see, it's a mess; 4. fine, clean up the mess.

                Yes, we should clean up the mess! It doesn't mean it's not a mess. And I think the mess is actually still understated.

              • stickupkid 6 years ago

                I think it will come down to PRs being accepted. So it's your mess in the end, as it depends on the company's strategy.

    • rblatz 6 years ago

      AWS has put out some very impressive ARM instances running on the Graviton2 processors. The reviews I read show better performance per dollar. So maybe the solution is to further embrace ARM and run your code on ARM servers?

    • zekrioca 6 years ago

      This will always be a problem unless one emulates the x86_64 architecture, which is again the other problem with Docker. I assume the main libraries (i.e. dependencies) are already ported or will be, so theoretically recompilation should work for many, though not all, applications. Other applications will need to either change dependencies, or port dependencies to get full performance.

    • monadic2 6 years ago

      Why not re-compile the dependencies so they run locally too? Am I missing some constraint?

  • Spivak 6 years ago

    I mean, the post is technically right. You're probably better off skipping the first generation of ARM Macs until software support happens, unless you're someone who wants to work on that software support.

  • jnwatson 6 years ago

    Hardly. You’re investing a great deal of effort in building a parallel set of images that probably will never see production.

    The one use case when this might be viable is targeting AWS Graviton2. Does anybody know if you can run an emulated Graviton2 on ARM Mac?

    • my123 6 years ago

      Parallels on ARM macOS presents a simulated Snapdragon 835 SoC to guest operating systems.

  • heavyset_go 6 years ago

    There are very few packages compiled for ARM on PyPI, and there are more than a few packages on PyPI that are a pain to build from source.

  • maxmcd 6 years ago

    Yeah, ARM is coming and docker will move to support it. We already have this: https://docs.docker.com/buildx/working-with-buildx/

    Things like the pinebook pro (and hopefully more linux ARM devices) will keep pushing this further.
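    For reference, a multi-arch build with buildx can look like this (builder, image name, and tag are placeholders):

    ```shell
    # Create a builder that can target multiple platforms, then build and push
    # a single tag containing both x86_64 and ARM64 image variants:
    docker buildx create --use --name multiarch-builder
    docker buildx build --platform linux/amd64,linux/arm64 -t example/app:latest --push .
    ```

    Docker then selects the matching variant automatically at pull time.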

gtrubetskoy 6 years ago

I remember the days when having to switch between x86, ppc, sparc, etc was a thing (not to mention the many flavors of UN*X) and we survived. In fact I think it was more fun back before the x86/Linux server domination. Architecture diversity is good.

rgovostes 6 years ago

I'm more convinced dropping dual boot and supporting virtualization is the right move.

Only the host OS is going to have the right drivers for the trackpad, wi-fi, GPU, power management, etc. etc. Through virtualization, the guest OS doesn't have to worry about constantly evolving hardware models.

Virtualized OS performance is already very good, and USB passthrough has existed for a while. Snapshots are a godsend.

What won't work are things like CUDA for eGPUs over Thunderbolt 3, and you'll have to share disk and RAM with the host OS.

But for most use cases it's probably the right choice. (This doesn't address the author's concern about moving away from x86.)

core-questions 6 years ago

> Why can't you update the Docker image to also support ARM? You theoretically could switch your backend to run ARM Linux. However, this would take months - renting out ARM instances, re-building all repositories, and a tense switch over.

I don't see why this would be so hard. If anything, I expect to see a massive upswing in things like AWS Graviton2 uptake, and a lot of common Docker images being built with ARM versions out of the box. It might be about a year or so, but eventually we'll be able to just go ARM-native the whole way.

What Apple needs to do is make a first-class, WSL-tier implementation of Docker for Mac for ARM.

  • Spivak 6 years ago

    I'm kinda thankful to Apple for biting the bullet on this one. For whatever reason people will move mountains for Apple where other companies' products would just languish and die. The second order effects of ARM being something that's "safe" for people to use should be great!

  • rbanffy 6 years ago

    The Honeycomb.io folks reported 40% more capacity per dollar on Graviton 2 over x86. That alone should motivate people to start looking into ARM backends.

  • user5994461 6 years ago

    >>> expect ... a lot of common Docker images being built with ARM versions out of the box.

    This has no chance of happening. The common cloud CI systems do not support ARM at all (travis, circle CI and co). There is only a minority of developers with MacBooks, and the rest are not going to spend $2000 to buy one just to build some docker images.

    • acdha 6 years ago

      Travis has ARM in beta:

      https://docs.travis-ci.com/user/multi-cpu-architectures

      GitHub lists it as a feature now:

      https://github.com/features/actions

      I'd be very surprised if this didn't become more common given the high levels of interest people are showing towards ARM server offerings in the cloud space.

    • rbanffy 6 years ago

      They will. Trust me. And you can buy a cluster of RPis to build your images for $2000

      • user5994461 6 years ago

        It's ludicrous to assume that anybody has $2000 to spend on fantasy hardware. That's months of disposable income outside of the SV bubble.

        A better question might be: how many of the most common open source projects are managed by volunteers in their spare time? These will not build for ARM unless there is a free tool doing it automatically for them. Currently GitHub + travis/circle can do that for x64 on every push to master.

        • core-questions 6 years ago

          You don't need a cluster of them; one $50 machine is enough for home use. Travis et al. will be buying those 96-core Marvell ARM machines and plowing through builds.

          • rbanffy 6 years ago

            Do they run their own metal? I thought they were on AWS.

            • duskwuff 6 years ago

              Well, if they are, that makes things even easier -- Amazon has offered ARM instances since 2018 [1].

              [1]: https://aws.amazon.com/blogs/aws/new-ec2-instances-a1-powere...

              • rbanffy 6 years ago

                I think they have both. They offer POWER and LinuxONE as options, and those aren't available from your average cloud provider. They probably have a sweet deal with IBM Cloud.

                I'd love to have a POWER and an IBM LinuxONE in my shed, but that's not going to happen.

                Maybe the POWER, but the Z most likely not.

    • chrisseaton 6 years ago

      > There is only a minority of developers with macbook

      I travel the world meeting developers from multiple communities. I very rarely see one without a MacBook.

    • icedchai 6 years ago

      I don't think this is actually the big problem some people make it out to be. You can cross-compile ARM binaries on x86. You can even run ARM binaries on x86 (with qemu). Any CI system can easily call scripts to do this.

  • jjoonathan 6 years ago

    Didn't they show off a native arm docker image running in the keynote?

Uehreka 6 years ago

For my most recent project[1], I wanted to see if Amazon’s Graviton instances would be a good choice for my docker deployments (I was deploying MongoDB, an Express server, and several instances of the Janus WebRTC server). I was developing in Pop OS on an x86_64 desktop (since we’re gonna have to start specifying now) and found the toolchain around building ARM64 images to be pretty simple once I got it set up.

I benchmarked some `t2.nano`s against some `a1.medium`s and found that the `nano`s were sufficient for my needs, so I went with them (they are cheaper than `a1.medium`s, even if the `a1`s have a better price-to-performance ratio).

I didn’t find it too difficult to rebuild any of these projects for cross-architecture usage. Even Janus, which has a TON of C/C++ dependencies (some of which have to be compiled from a particular version of the source) easily built for ARM with no change in the Dockerfile.

So I kind of feel like OP is exaggerating the effort required to migrate servers to ARM. Sure it might be a hassle when you have tons of microservices, but you can move them incrementally, and most things recompile with no changes. And regardless of what architecture your dev machine is, you’ll want to be able to compile for and work with both architectures if you want to get the most out of the infrastructure on offer in 2020.

[1] Shameless plug: https://chrisuehlinger.com/blog/2020/06/16/unshattering-the-...

  • bmalehornOP 6 years ago

    Cool, thanks for sharing. It's these kinds of experiences that I was hoping to gather by making this post.

    Did you notice at the end that you did NOT end up choosing ARM? You ended up going with x86_64 because that's what made more sense for your backend. That's part of my point - developers should choose their backend architecture based on the performance and pricing of their backend, not their development laptop. And if that decision is "we should keep using x86", then there will be a big performance hit in development.

    • pjmlp 6 years ago

      Back in the UNIX glory days, I was responsible for keeping a software stack running across Windows NT (later 2000), Aix, HP-UX, Solaris, each with its own CPU architecture.

      This is just another CPU story, no big deal.

paride5745 6 years ago

As a Linux tech, I honestly welcome this Apple move.

Having a proper competitor for x86/x64 is a good thing.

The fact that docker is slower on ARM (at the moment!) is mostly due to a lack of interest in optimization.

With Apple starting to produce ARM Macs, and maybe more ARM servers in the wild, docker (and other platforms/frameworks) will start to get more performant on ARM as well.

dlivingston 6 years ago

Do we know that Boot Camp isn’t supported by Big Sur? Or is it just that one can’t run an x86 OS on an ARM architecture? In other words - will I still be able to dual-boot into something like ARM-flavored Linux?

  • leecb 6 years ago

      one can’t run an x86 OS on an ARM architecture
    
    This is the limitation. There is an ARM version of Windows, but the comments from Microsoft don't sound terribly promising:

      “Microsoft only licenses Windows 10 on ARM to OEMs. We have nothing further to share at this time.” [1]
    
    And Apple has more firmly stated that this won't be an option:

      “We’re not direct booting an alternate operating system,” says Craig Federighi, Apple’s senior vice president of software engineering. “Purely virtualization is the route. These hypervisors can be very efficient, so the need to direct boot shouldn’t really be the concern.” [1]
    
    1: https://www.theverge.com/2020/6/24/21302213/apple-silicon-ma...
    • gumby 6 years ago

      And the iPhone didn’t permit native third-party apps when it launched. Not because they weren’t ready to announce them yet, but because at launch time they figured web apps would be enough.

      I have no idea what plans they might have but I would be surprised if you couldn’t install some ARM Linux distros on their laptops sometime next year.

  • yborg 6 years ago

    Boot Camp support will remain for Intel machines. Current understanding is that ARM machines will have no Boot Camp capability at all.

jjoonathan 6 years ago

If you stay back on x86 virtualization will be slow, but if you jump to ARM this is great!

> However, this would take months

(infomercial arms)

I'm sure it won't make sense for everyone, but I'm just as sure it will make sense for many.

outworlder 6 years ago

On the flip side, I guess ARM Macs will now allow the use of hypervisors for Android simulators, instead of a full hardware virtualization.

... thus making Android development better on Macs?

  • lfy_google 6 years ago

    Android Emulator developer here. In addition to what the other comments said about android emulation with hypervisors existing already for x86, we're also looking into the Hypervisor.framework API for Apple silicon.

    It won't be a trivial task (hoping for pre-existing code to port over maybe?) but we have the other pieces like using Hypervisor.framework for x86 already, and being able to cross compile the other code for arm64, so that would be the only major task left.

    On the subject of better GPU support: it depends on what the drivers are actually like, but from previous experience with the GPUs and drivers shipped with macOS, there shouldn't be any special kind of trouble. We may have to use Metal if Apple also drops OpenGL support on those new machines, but there are existing translators from GLES and Vulkan to Metal. The graphics hardware itself is actually the least of our worries, due to how consistent the hardware is likely to be; we'd have to deal with far fewer hw/driver quirks than on other host OS platforms.

  • jeroenhd 6 years ago

    Android emulators on desktop generally run amd64 images of Android using existing virtualisation hardware and software.

    At best, you can say that you can run ARM-only games at native speed now, but as a developer you won't really notice much difference (assuming the processors aren't slower than Intel's).

  • jayd16 6 years ago

    Android already had x86 builds, and that was the preferred way to run the emulator, so it's most likely a neutral change (once they get hardware virtualization for ARM working). The GPU aspect of things might improve, maybe?

phamilton 6 years ago

> ec2 only offers 6 general-purpose ARM instance sizes

m6g, c6g, r6g each support 6 sizes for a total of 24

  • _msw_ 6 years ago

    Disclosure: I work at AWS building cloud infrastructure

    C6g, M6g, and R6g (powered by AWS Graviton2) each support 8 sizes, along with bare metal. A1 instances (powered by AWS Graviton) have 5 sizes, along with bare metal.

    That's a total of 33 distinct instance sizes.

  • xsmasher 6 years ago

    This seemed like the weakest argument in the article; as arm becomes more popular it will get more support.

ris 6 years ago

If you hadn't sold yourself out of the free market, you would be able to choose what architecture machine you bought.

  • Spivak 6 years ago

    A free market wouldn't have saved you, because the market has every incentive to gravitate toward a single architecture: vendors and customers want the best software compatibility. The more popular an architecture (or really any platform) gets, the more software is written for it, until it starves out competitors because they can't run the software their customers want.

Skunkleton 6 years ago

> Docker on a Mac utilizes a hypervisor. Hypervisors rely on running the same architecture on the host as the guest, and are about 1x - 2x as slow as running natively.

That doesn't sound right to me. Perhaps on I/O-bound tasks, if you are using emulated devices. On CPU-bound tasks you should see near-native performance.

harpratap 6 years ago

There's another unintended consequence of this virtualization: docker already has very high CPU usage on my macbook, anywhere from 50-100%, so it is always hot and toasty. This has already caused my screen to start deteriorating (https://www.ifixit.com/Answers/View/567125/Horizontal+line+o...) and the battery has degraded considerably too, even when I'm not coding on it and docker is shut down. That means a significant hit to the longevity of these devices, as they are not meant to be pushed this hard 40 hours a week. With ARM macs I see it getting even worse.

pazimzadeh 6 years ago

On the bright side, it looks like low-latency streaming is good enough that as long as you have internet connection, Boot Camp is not really necessary. This works well for me: https://shadow.tech/usen/

snapetom 6 years ago

A lot of the developer-centric discussion focuses on how Docker would work (hint: it does), but VirtualBox is still pretty common in the sysadmin world and other industry circles, and there seems to be no way it will ever work. It will be interesting to see how that turns out.

  • bmalehornOP 6 years ago

    Author here. That's a major point of the article: "are we screwed?" I'm not an expert on virtualization, but I wanted to see some discussion on this topic, because it feels like we might be screwed and nobody is talking about it. Anyway, I was happy to see Docker worked, at least on a basic level.

    • snapetom 6 years ago

      Cool. Thanks for writing it. It summarizes and collects a lot of issues we were all grumbling about here and there. The main hurdles for Docker are organizational, not technical. However, the other issues you bring up are going to be more technical (same as you, though, I'm not a hypervisor expert), and/or we're going to be at the mercy of big vendors like Apple, Oracle, and Microsoft. Those are much harder problems to overcome.

seanparsons 6 years ago

Any company that uses Docker with a load of binaries built without any effort toward cross-platform support is gonna have a bad day with this. They'll buy a bunch of new MacBooks and then find they can't use them until they spend a few weeks porting everything.

  • neilalexander 6 years ago

    I suppose that depends on what they’re written in. Some languages (e.g. Go) are far simpler than others.
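    As an illustration of the easy end of the spectrum, Go cross-compiles with nothing but environment variables; a minimal sketch (file names are placeholders):

```shell
# Write a trivial Go program, then build it for 64-bit ARM Linux from
# any host. No cross-toolchain is needed while cgo stays disabled.
cat > hello.go <<'EOF'
package main

import "fmt"

func main() { fmt.Println("hello from arm64") }
EOF
GOOS=linux GOARCH=arm64 CGO_ENABLED=0 go build -o hello-arm64 hello.go

# Inspect the result; it should be an ELF binary for ARM aarch64.
file hello-arm64
```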

lsllc 6 years ago

My guess is that Apple will end up copying Microsoft: providing a WSL-style Linux kernel "shim" in Darwin (pretty easy, as it's already UNIX) and using Rosetta 2 to translate any x86_64 containers to aarch64. No need for a hypervisor.

justaguy88 6 years ago

I wonder if Apple will still allow a dual boot with a native (arm64 in this case) Linux

bitwize 6 years ago

Once again -- Apple will do what the entire industry without Apple couldn't do. In this case, force a migration to ARM-based servers, so that prod is running on the same architecture as the developer's machine.

Apple is finally killing x86.

  • tjoff 6 years ago

    Hooray, we have successfully fixed the mistake with an open platform and will now be relegated to incompatible hardware without any competition.

    At last, the future will surely be bright!

    • duskwuff 6 years ago

      How was x86 any more "open" of a platform? If anything, x86 is a far more "closed" platform, as there are only two remaining manufacturers of x86 parts, and there is no licensing process to join them. Meanwhile, there are hundreds of ARM licensees, and the process for becoming a licensee is all documented online [1].

      [1]: https://www.arm.com/why-arm/how-licensing-works

      • tjoff 6 years ago

        How? It is absolutely inconceivable how much more open it is.

        Also, the CPU is but a minor part of the puzzle. But even so, that's twice as many vendors as Apple (good luck swapping that Apple ARM chip for any other brand).

        Please let me know how open you think the next apple ARM platform is when you try to boot any OS not written by apple.

        Please compare that with a computer built from AMD/Intel with a motherboard out of dozens of manufacturers etc. Any ATX power supply etc. Pretty much any PCI-E graphics card etc.

        • duskwuff 6 years ago

          You are confusing the x86 CPU architecture (which is closed) with the PC platform (which is relatively open).

          > Also, the CPU is but a minor part of the puzzle. But still that is still twice as many as apple (good luck exchanging that apple-arm with any other brand).

          Even on x86, interchangeable CPUs are the exception, not the rule. Intel and AMD CPUs haven't even used the same socket since the 1990s, and even within those manufacturers, socket incompatibilities are common.

          Software interchangeability is more of an operating systems issue than an architectural one. With appropriate software shims, though, there is no reason to suspect that (for example) Linux ARM software could be run on an Apple ARM CPU. In fact, it's quite likely that tools like the Android emulator will do exactly that.

          > Please compare that with a computer built from AMD/Intel with a motherboard out of dozens of manufacturers etc. Any ATX power supply etc. Pretty much any PCI-E graphics card etc.

          Server-class ARM hardware generally does use similar parts as x86 servers, including power supplies and PCIe peripherals.

          • tjoff 6 years ago

            I am not. But also, how many x86 processors are sold outside of the PC platform? They have a symbiotic relationship. The foundation of everything we have today.

            > Software interchangeability is more of an operating systems issue than an architectural one.

            Not if the architecture is designed around keeping others out. But just not telling anyone how to do it is enough in 99% of cases. Some hacker might post a buggy proof-of-concept for an obsolete device that no one will run.

            > Server-class ARM hardware generally does use similar parts as x86 servers, including power supplies and PCIe peripherals.

            Wanna place a bet on what apple is going to do?

      • josephcsible 6 years ago

        It's more open in terms of what software you can run. And if you cared about hardware being open, RISC-V is where you'd have to go. ARM certainly isn't open hardware.

      • imtringued 6 years ago

        x86 might be a platform with few vendors. ARM isn't even a platform. Most SoCs are meant to run a single OS and that's about it. Not exactly what I'd call a platform because platforms let their users build on top of them. That includes running whatever OS you want to run on that processor.

        What you might be worried about is a duopoly which has nothing to do with whether something is a software platform or not. For example Microsoft has a monopoly on Windows. That doesn't stop Windows from being a platform for which you can write arbitrary software. Apple has a monopoly on iOS but it's not possible for users to write their own software, they have to join a developer program that can always exclude them. This is what one would call a closed platform. ARM is closer to the iOS model than to Windows.

      • klelatti 6 years ago

        Precisely. And the fact that you can run ARMv8 on both a Raspberry Pi and the Fujitsu Fugaku, which is now No. 1 on the Top500, says something about what is possible as a result.

    • pjmlp 6 years ago

      Actually the original mistake was IBM's to make.

  • sudosysgen 6 years ago

    There isn't really any advantage to ARM compared to modern x86. Closed ARM is IMO worse than x86. Now, if someone made the x86 license more open that would be cool, but so far it's all downsides at least for me.

    • klelatti 6 years ago

      > Closed ARM?

      Is there any suggestion that the architecture that Apple is using is any different to what is being used by lots and lots of other licensees? If not then it's much more open than x86.

      If you mean that you can't buy an ARM CPU today to plug into your own motherboard then understood but that's probably now a matter of time. At least making such a CPU is possible - no-one is going to make x86 more open.

      • sudosysgen 6 years ago

        x86 is already relatively open; there are more than four companies developing x86 chips, though outside of Intel and AMD they are mostly focused on very low power.

        There is no indication that I'll ever be able to buy an ARM CPU and plug it into motherboard with the feature set I choose and plug into it peripherals that follow a standard and open interface with good performance. The only companies that make ARM CPUs that could at all be useful in such an open platform don't make CPUs fast enough. I don't think Apple or Amazon will ever sell me a socketable CPU with high speed PCI support and XMP memory support.

        • klelatti 6 years ago

          ARM has 250 licensees for application processors with 8 licensees (including AMD and Nvidia) who can develop their own architectures around the 64 bit ARM instruction set.

          You can buy socketed ARM CPUs today that are likely to be more than fast enough - from Marvell for example.

          You're 100% right that there isn't an ARM ecosystem at the moment in the way that there is a PC ecosystem, with all its flexibility.

          But that's not a feature of the ARM architecture or what ARM as a company does - it's because x86 has historically dominated the desktop.

          • sudosysgen 6 years ago

            There are only about two ARM licensees that even come close to desktop-tier performance. The instruction set doesn't matter if only one company can make a fast enough CPU, using a massive number of proprietary changes to the original architecture.

            When we'll have three ARM companies making socketed CPUs with standardized I/O between them that are faster or very close to AMD or Intel, then it will be good.

  • imtringued 6 years ago

    If it takes a dictator to force a platform upon everyone is it really that great of a platform?

fluxem 6 years ago

But Linux can run natively on ARM. Moreover, most packages are also compiled for ARM, so apt-get install will work just the same. I'm sure they will be able to target Apple's specific ARM chips when those come out.

  • donarb 6 years ago

    Apple released a list of open source projects that they have ported to ARM; they intend to contribute patches back to each of these projects:

      - Bgfx
      - Blender
      - Boost
      - Skia
      - Zlib-Ng
      - Chromium
      - cmake
      - Electron
      - FFmpeg
      - Halide
      - Swift Shader
      - Homebrew
      - MacPorts
      - Mono
      - nginx
      - map
      - Node
      - OpenCV
      - OpenEXR
      - OpenJDK
      - SSE2Neon
      - Pixar USD
      - Qt
      - Python 3
      - Redis
      - Cineform CFHD
      - NumPy
      - Go
      - V8
    • duskwuff 6 years ago

      Great to see Homebrew and MacPorts on that list. That's a huge signal that Apple plans on supporting developer / tinkerer use cases.

    • mthoms 6 years ago

      I feel like the ARM transition is why macOS hasn't progressed much (IMHO) in the past 2 or 3 years. It was a common assumption around here that they were diverting all their top resources to iOS/iPadOS development.

      The reality seems to be that their top macOS developers have been busy laying groundwork for the ARM transition. There's so much to be done.

    • anthk 6 years ago

      Most of those were already ported into ARM Linux/BSD several years ago.

    • ndesaulniers 6 years ago

      I'm curious to see the Google projects that have patches from Apple, since all of those run on Android which is for all intents and purposes ARM.

      I'm guessing new ARMv8 ISA features, PAC/BTI/MTE?

      • pjmlp 6 years ago

        Yes, ARM Macs will make full use of ARM security and hardware mitigations against typical C exploits; there are a couple of WWDC talks about it.

    • saagarjha 6 years ago

      Has anyone seen those patches start to land? Just curious what the timelines on those are.

    • tonyedgecombe 6 years ago

      It’s interesting that Electron is on that list.

      • imtringued 6 years ago

        It's also interesting that this won't fix old Electron applications. The idea of easy cross-platform development via Electron is a myth, because most developers won't support your platform even when all it takes is checking a box. When you consider that this is the primary justification for using Electron over other stacks, it just makes your blood boil. All the downsides with none of the benefits.

      • asadlionpk 6 years ago

        Crucial piece of tech for many products like VSCode, Slack, Discord, etc.

        • pjmlp 6 years ago

          Given how much Microsoft's React Native team bashes Electron with their performance bar charts (300x more bloat than RN), I look forward to them replacing Electron with RN in VSCode as soon as RN is mature across Linux, macOS, and Windows.

      • yjftsjthsd-h 6 years ago

        Makes sense; port Electron and you get a bunch of apps for free.

lowbloodsugar 6 years ago

If I were to describe my job, or programming in general, it might be "problems that have no obvious solution". This is a sad article that just seems to be the opposite in spirit to "Hacker" ethos.

jmull 6 years ago

> Should I get an ARM Mac? ...if you use virtualization often, I wouldn't recommend it.

I use virtualization continuously, but not for anything that needs to be as fast as possible.

I won’t hesitate to get an ARM Mac once I can run x64 Windows VMs on it. (Presumably VMWare Fusion or Parallels, and for once I won’t feel ripped off by the upgrade pricing.)

Docker on Mac doesn’t work that well today, so I don’t have any workflows that depend on it.

  • jki275 6 years ago

    There is no reason to expect that VirtualBox, Parallels, or VMware will emulate x86-64 to run a guest OS. None of them does any emulation today; they are virtualization platforms.

leoh 6 years ago

This is silly. Most stacks will have counterparts on both architectures. Just run CI with the same configurations as prod. Problems due to differences will be rare for most stacks -- modern languages are well defined and run against extensive spec tests on all major platforms -- and will smooth out over time.

rcarmo 6 years ago

All my Docker stuff is multi-arch these days. Here’s one of my sample pipelines, written precisely to show how easy it is: https://github.com/rcarmo/azure-pipelines-multiarch-docker

quux 6 years ago

I'm not a heavy docker user, but why can't I develop docker containers on ARM (as native containers, no emulation) and deploy to x86_64? Or vice versa? I understand that some packages are binary-only and wouldn't necessarily be available for ARM, especially initially, but the majority should be.

  • Teknoman117 6 years ago

    It's not that you can't; you just have to be mindful. Because docker containers contain compiled applications, you have to be aware of what CPU architecture they're compiled for. ARM can't natively run x86 binaries, and x86 can't natively run ARM binaries.

    If you want to develop containers for x86 systems on an ARM system, you'll have to cross-compile your containers, which I'm not sure docker actually supports outside of emulation.

    If you are only a consumer of containers, most of the popular ones have been compiled for multiple architectures.
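    For what it's worth, docker's newer buildx/BuildKit tooling does support the cross-compiling route for languages with cross-toolchains: pin the build stage to the build host's architecture and have the compiler target the requested one. A sketch using Go, where image tags and paths are placeholders:

```dockerfile
# syntax=docker/dockerfile:1
# The build stage always runs natively on the build host...
FROM --platform=$BUILDPLATFORM golang:1.14 AS build
# ...while TARGETOS/TARGETARCH are filled in by buildx per target platform.
ARG TARGETOS
ARG TARGETARCH
WORKDIR /src
COPY . .
RUN GOOS=$TARGETOS GOARCH=$TARGETARCH CGO_ENABLED=0 go build -o /out/app .

# Only the final, per-architecture filesystem differs.
FROM scratch
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```

    Built with `docker buildx build --platform linux/amd64,linux/arm64 .`, the RUN step executes natively on the host for both targets, so no emulation is involved during compilation.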

  • icedchai 6 years ago

    You can. Because of the Raspberry Pi and other ARM SBCs, the most popular base images already run on ARM. The ones that are missing will catch up pretty fast.

marricks 6 years ago

It will be extremely interesting to see what the devs say about the test boxes. Curious if there are any magical fixes like switching over to ARM Linux; since the software in an image would likely be compiled for x86, I really doubt it...

Perhaps this will spur some people over to running ARM servers in the cloud...

  • rbanffy 6 years ago

    It's unlikely the test boxes are going to people who deploy to x86 backends. They are targeted towards iOS and macOS developers.

    We can have the experience of developing on ARM to deploy on x86 right now with ARM workstations. A 16 core barebones costs around $700

  • lukevp 6 years ago

    Has anyone received an invite to the program yet? I applied on day one, but I don't have a Mac App Store app currently published, so I'm not sure if I will be accepted.

gigatexal 6 years ago

It’s still early days. bhyve (which is likely what they’re using), or whatever the hypervisor is, will just run ARM docker images. Unless you have a hard x86 dependency, many of our HTTP microservices should run just fine on ARM images; at least my workflow will be just fine.

  • gigatexal 6 years ago

    Furthermore, this is all speculation. We don’t have ARM Macs to test yet. This is like all the nerds in the forums speculating on how next-gen GPUs will perform from hardware leaks: wait for the hardware and the reviews. Making any sort of claim about what to buy or not at this point seems disingenuous.

AkihiroSuda 6 years ago

> Why can't you update the Docker image to also support ARM? You theoretically could switch your backend to run ARM Linux. However, this would take months

No need to take months. `docker buildx` can build multi-arch images without using real ARM instances.

  • Matthias247 6 years ago

    Does that help if the software you build inside the container doesn't build on ARM? Imagine a three-digit count of legacy C libraries that so far do not compile on ARM for a variety of reasons. You would need to spend a significant amount of time making them compile and run.

ed25519FUUU 6 years ago

I use docker every day and I guess I’m just not worried about this. The container pushes come from a CI host, so I’m not worried about compatibility.

And it’s 20% slower? Well, most of the build time is slow for all sorts of reasons. I honestly don’t think I’ll even notice.

Aqueous 6 years ago

This is why Intel should be scared, very scared, that the PC world seems to be converting to ARM en masse. PCs might be a relatively small fraction of Intel’s total sales, but it’s the second-order effects they should be worried about: if it becomes less convenient for developers on ARM machines to develop and deploy software to x86 cloud architecture, they will begin to demand that the cloud architecture be shifted to ARM as well.

user5994461 6 years ago

Didn't think of that. If running Docker is 10 times slower on ARM and VirtualBox doesn't support the architecture at all, this might indeed be the end of developers using Macs.

  • armagon 6 years ago

    I've been developing on a Mac for years, and I've never needed Docker or virtualisation to do it. (I've been doing game development, and now web front-end development. I'm sure the major game engines and browsers will be ported to run on ARM chips, although it may take a while.)

    Honest question: What sort of development regularly requires using docker or virtualized OSes?

    • hesk 6 years ago

      Two examples:

      1) I wanted to provide a Jupyter notebook with IBM DB2 support for a university course. (Why DB2? Because its optimizer can transparently use materialized views for query optimization which PostgreSQL can't AFAIK.) IBM provides a Jupyter magic which requires the Python package ibm_db. ibm_db requires a DYLD_LIBRARY_PATH hack which macOS doesn't support unless you disable System Integrity Protection. I don't want to disable SIP on my system and I can't ask our students to do that. A Docker image provides a convenient solution to the problem.

      2) I do a lot of disparate project development using Flink and Hadoop. These get deployed on Linux machines. My preferred way to develop on my Macbook is to boot up a headless Ubuntu system inside VirtualBox, SSH into it, and then start a TMUX session. This has a number of advantages. a) The dev environment more closely resembles the production environment. b) I can setup my TMUX session, save the state of the VM, and then restore it months later to the exact state just by booting up the VM. c) I don't have to pollute my macOS environment with dev tools that I rarely use. It's not strictly necessary but it's actually quite convenient. I can even do development in IntelliJ on macOS, run the services inside the VM, and thanks to remote JVM debugging use the IntelliJ debugger on macOS to step through the code running inside the JVM.

    • marmaduke 6 years ago

      Honest answer: not much, really. Except that virtualization is a big trend, since it's a hugely convenient way to package a complex thing into a black box and run it with minimal attack surface.

      If you sit back for a second, you can see that things like systemd provide nearly everything you'd want from a container, so for the top ten languages "who cares" is an approach you could take.

      If you are running a massive stack then maybe you do need Docker and VirtualBox, but at that point shouldn't you be using a staging server anyway?

    • vertex-four 6 years ago

      Web development doesn't "require" it per se, but it's easier to keep track of the mess of ad-hoc dependencies with Docker. Really, I think the solution is keeping better track of your dependencies and avoiding ad-hoc system configuration, but...

    • neurostimulant 6 years ago

      Almost every server application (databases, web servers, web apps) is available as a docker image. It's very easy to deploy those apps using docker, which is why it's taking the sysadmin world by storm. Previously, handling a big web application deployment was a complex task that required a dedicated team. Now you can just package your app as a docker image, and other people who know docker will know how to deploy your app without having to learn its internal dependency graph first.

      • anthk 6 years ago

        If you use Docker as a production tool instead of just for prototyping, all kinds of shit will happen soon.

        • neurostimulant 6 years ago

          Certainly not for production, but when you can't run x86-64 images, even simple prototyping or building images that require x86-64 will mean using external servers instead of locally installed docker, which will be a huge barrier for casual prototyping.

        • yjftsjthsd-h 6 years ago

          It's not quite as simple as GP implies, but why not? I work at a place running Docker in prod, and our only issues are with the swarm networking, not Docker itself.

    • jki275 6 years ago

      Embedded.

      Almost all the compiler toolchains for embedded devices will not run under anything but some esoteric specific version of Linux or Windows.

      I generally write code on Mac native (C or C++), then put it into a VM configured with all the right tools and such to build and install.

      Usually that VM is a copy of the VM I use for CI/CD with gitlab.

      I don't know how common that use case is, but everybody I know who does embedded work does it that way or something like it.
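      That workflow (same toolchain in the local VM and in CI) might look something like this hypothetical `.gitlab-ci.yml` fragment — the toolchain image name and build commands are made up for illustration:

      ```yaml
      # Sketch: build firmware in CI with the same toolchain image the
      # local build VM is cloned from. Names are illustrative only.
      build-firmware:
        image: registry.example.com/embedded-toolchain:latest
        script:
          - make clean
          - make firmware
        artifacts:
          paths:
            - build/firmware.bin
      ```

      Keeping the dev VM and the CI image in sync is what makes "it builds in my VM" actually mean "it builds in CI".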

  • rbanffy 6 years ago

    Running Docker on an emulated x86 will be slow, but I doubt it'd be 10x slower. There are snapdragon laptops running Windows. How much slower is running Docker on them?

  • justaguy88 6 years ago

    I would hope this leads to more people running native Linux, but the cynical side of me thinks it'll push people to WSL2.

jbverschoor 6 years ago

We have QEMU on ARM. It's fast enough to software-render Half-Life. Servers will follow desktop, so in a few years a lot of things will be ARM.

cbsmith 6 years ago

Hey guys... we have done emulation before, and it's not nearly so bad.

Also, there are ARM images for Docker. You don't HAVE to run x86-64 binaries.
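Most official images are multi-arch, so the same tag resolves to an ARM variant on an ARM host automatically; `--platform` only matters when you want to force a specific one. A small sketch (the image name is just an example) that maps `uname -m` output to Docker platform names:

```shell
# Sketch: translate uname -m output into a Docker platform string,
# then pull the matching variant of a multi-arch image.
to_platform() {
  case "$1" in
    aarch64|arm64) echo linux/arm64 ;;
    x86_64)        echo linux/amd64 ;;
    *)             echo "linux/$1" ;;
  esac
}
echo docker pull --platform "$(to_platform "$(uname -m)")" alpine
```

On an ARM Mac that resolves to `linux/arm64` and runs natively; forcing `linux/amd64` instead would fall back to (slow) emulation.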

saxonww 6 years ago

Develop on the platform you want to deploy to.

  • yjftsjthsd-h 6 years ago

    I am not running Darwin in prod.

    EDIT: I suppose I should clarify; I don't totally disagree. I personally run Ubuntu on my laptop and servers. But plenty of people are quite happy developing on Darwin and deploying to some sort of GNU/Linux.

  • jki275 6 years ago

    Develop on an ESP32? You're a masochist, but I like it.

zekrioca 6 years ago

I guess someone will port Docker to the JVM, and use a JVM optimized for whatever ARM processor there will be.

nojito 6 years ago

Apple has their own virtualization offering to share, and they are being quite coy about it.

__warlord__ 6 years ago

Has Apple mentioned anything about Thunderbolt 3 or USB 4 on these new Macs?

lowbloodsugar 6 years ago

I had an ARM-based desktop in 1988. I'd be thrilled to have one again.

huslage 6 years ago

Hypervisors are by no means 1-2x slower. Testing I/O is not testing the performance of a hypervisor. It's testing the I/O stack.

monadic2 6 years ago

Tl;dr: they want to run x86 Linux for their own reasons rather than ARM.

julienfr112 6 years ago

A MacBook was never really a dev platform. Maybe for front-end or Node.js, or definitely for native Apple apps, but seriously, brew and the like are so subpar.

  • dhosek 6 years ago

    I've been doing dev work on Macs since 2002. Perl, PHP, Java, C++. It's been a great environment. I don't expect most of my workflow to be impacted by the ARM transition, but given I've refreshed my Mac Mini and laptop both in the last 18 months, I don't expect to be changing architectures any time soon either.

  • setpatchaddress 6 years ago

    Genuinely curious what you feel is subpar about brew. It seems to work pretty well.

  • chrisseaton 6 years ago

    I develop low-level code like compilers just fine on a MacBook.

    • fortran77 6 years ago

      What's "low level" about a compiler?

      • chrisseaton 6 years ago

        > Mac book was never really a dev platform. Maybe for front or nodejs, or definitly for native apple apps, but seriously, brew and so are so subpar.

        I'm not sure what you're asking? It's lower level than the examples which were given.

        There's nothing stopping you using a MacBook for almost any development task. It's not just for front-end tasks. You can do work that runs directly on the architecture.

  • rbanffy 6 years ago

    I use Macports and I'm quite happy.

old-gregg 6 years ago

When I ask Mac-loving developers why they chose to run macOS when developing non-macOS software, they used to give me good reasons.

I think their reasoning is no longer valid. The hardware has gotten worse (keyboard, touchbar), the OS has gotten more hostile, meanwhile the state of Linux on laptops has gotten a lot better. So... I used to understand them, but I no longer do.

https://i.kym-cdn.com/photos/images/newsfeed/001/016/674/802...
