The u-root CPU command

book.linuxboot.org

71 points by liveranga 4 years ago · 18 comments

LanternLight83 4 years ago

Awesome! This write up is satisfyingly detailed. Prior work in this space includes Plan9 of course, as well as the python project Outrun, which has it's own RPC-based FUSE FS: https://github.com/Overv/outrun

Other approaches to deployment in particular include the functional package managers Nix and Guix, which can create lightweight application images and could probably be cobbled together into some sort of remote environment replication, even across architectures. As I read on, I thought less about how this compares with Guix with regard to application/environment packaging and more about how these things could be glued together in interesting ways, because I think the intro leads in through slightly off-label examples, if that makes sense. Application packaging isn't what this addresses at the end of the day, but it's no less fascinating for it.

  • amelius 4 years ago

    Are you saying that outrun's cache after one or more runs of e.g. ffmpeg could be zipped, and turned into a standalone package?

    • LanternLight83 4 years ago

      I suppose that's hypothetically possible, but no, it's not been implemented, and that's why I say that the intro seems to come at this from an odd angle-- I don't see u-root's CPU doing that either, despite the comparisons to static linking and other application packaging systems.

TazeTSchnitzel 4 years ago

I found this a bit too much detail to understand the concept well. Is it basically SSH but with reverse file sharing: you connect to an SSH server and run commands there, but with the client's filesystem?

  • stragies 4 years ago

    ... but with a customizable/composed view of any combination of folders present on the client filesystem.

    And other processes on the remote system can't look into your mounts.
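The view stragies describes can be pictured as a session like the following. This is a hypothetical sketch: the `CPU_NAMESPACE` variable and invocation style are recalled from the u-root docs and may differ by version, and `server.example.com` is a placeholder host.

```shell
# Hypothetical: connect to a remote machine running cpud, exporting a
# chosen set of client directories. CPU_NAMESPACE (assumed name) selects
# which local paths are served over 9P to the remote session.
CPU_NAMESPACE=/home:/usr:/bin:/etc cpu server.example.com bash

# The shell now executes on server.example.com, but /home, /usr, /bin and
# /etc are 9P-backed views of the client's directories, mounted in a
# private mount namespace that other processes on the server cannot see.
```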

dark-star 4 years ago

This sounds interesting, but I think I don't fully understand the use-case here. I mean I get it, you "cpu in" to another system, and the session you have there will transparently mount all your /home, /etc, /usr, /bin and so on from your system to the remote host.

What are some actually useful commands to use with that? I mean if all you're doing is remote-execution of bash, you could just start bash locally since your filesystem looks the same anyway? If you run vi through that tool, it can edit the files that you have on your host (because all directories are "passed through"), so why not just run vi on your host?

Edit: two use cases I could think of where this would be useful, but neither really works, I guess:

- If you have a very small flash-constrained system (think router, embedded, IoT) ... but these are usually different architectures (i.e. not x86_64), so this wouldn't work.

- The example from the article: running a different Ubuntu version in a container than on your host. But this would create a "hybrid" Ubuntu after CPU'ing in, since many directories simply come from your host and only some stuff is from your container. I don't think this would be very useful?

  • lmz 4 years ago

    Maybe the "cpu host" is faster and you run a computationally heavy command there, or it's closer to some resource (e.g. you wrote a script to search for some data inside S3 and want to run it on an EC2 instance). Maybe it's something with specialized hardware that you'd like to control (I see /dev is not forwarded).

    • amelius 4 years ago

      Isn't it dangerous to use a /dev and /etc from two different systems at the same time?

      • lmz 4 years ago

        Yes, if you plan to run disk management commands. Otherwise I don't see what commands would access machine specific device files on a regular basis anyway.

  • wrigby 4 years ago

    One of the use cases I can think of is "I have a set of scripts (shell, GDB, etc.) that are useful when troubleshooting servers, and I want to be able to use any of them at will once I've connected to a machine that's broken."

    Even just having my own dotfiles (.vimrc especially) present on a machine that I'm troubleshooting is huge.

    • amelius 4 years ago

      But how can you troubleshoot the remote machine if you're seeing your local filesystem?

      Everything you run to test whether the remote is working uses only the CPU of the remote machine, not its files, which is where the problem usually is.

      • wrigby 4 years ago

        Hah, that's what I get for skimming the article and assuming I knew what was going on here.

        With that said, I guess the quickest thing that comes to mind is wanting to run my Jupyter notebooks on a machine with much beefier CPU and memory than my laptop. I was recently working on some lightweight ML stuff, which required training 3 SVR models. Each model really only took 30 seconds to train on my laptop (with a small, synthetic training set), but if cpu was in my workflow, I would have just done it on a beefier machine and saved a minute or two of time every time I wanted to test a new iteration.

stragies 4 years ago

So where does the 9P support on the remote Linux machine running `cpud` come from to mount `/tmp/cpu`?

If it's in kernelspace, wouldn't that require the target machine to already have that module? If it's in userspace (i.e. cpud contains a 9P client implementation), won't that require at least the fuse kernel module to be present and loaded on the remote machine?
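For context on the kernelspace option in the question above: Linux ships an in-kernel 9P client (v9fs, `CONFIG_9P_FS`) that can mount a 9P export directly, with no FUSE involved. A mount along these lines is what's at issue; this is illustrative only, assumes the `9p`/`9pnet_tcp` modules are available, and uses a made-up server address.

```shell
# Illustrative: mount a 9P filesystem with the in-kernel v9fs client.
# Requires CONFIG_9P_FS (9p.ko, 9pnet_tcp.ko) on the mounting machine
# and a 9P server listening at 10.0.0.5 (placeholder address).
mount -t 9p -o trans=tcp,port=564,version=9p2000.L 10.0.0.5 /tmp/cpu
```

If that module is missing on the target, the userspace/FUSE route (or a kernel that builds v9fs in) would indeed be needed, which is exactly the trade-off the question raises.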

snvzz 4 years ago

So it is a glorified syscall proxy?

blueflow 4 years ago

[tl;dr] User namespaces-based chroot into an sshfs mountpoint
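That one-liner can be sketched roughly as follows, with sshfs standing in for the 9P mount and the namespace details simplified. This is an analogy, not what cpud actually runs; `client.example.com` and `/tmp/clientfs` are placeholders.

```shell
# Rough analogy: expose the client's root on the remote via sshfs,
# then chroot into it inside fresh user + mount namespaces, so the
# mount and the chroot are invisible to other processes.
sshfs client.example.com:/ /tmp/clientfs
unshare --user --mount --map-root-user chroot /tmp/clientfs /bin/sh
```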
