PyTorch Monarch

pytorch.org

377 points by jarbus 3 months ago · 44 comments

chandureddyvari 3 months ago

Interesting - this seems to target a different layer than services like Tinker (https://thinkingmachines.ai/blog/announcing-tinker/). Monarch provides the infrastructure primitives while Tinker is a managed finetuning service. Could someone build something like Tinker on top of Monarch?

  • gaogao 3 months ago

    Yup, there's stuff like https://pytorch.org/blog/introducing-torchforge/ on top of it now

    • chandureddyvari 3 months ago

      Nice, so the open source equivalent now exists. Meta basically commoditized Tinker's ($12B valuation) value prop by giving away the infra (Monarch) and the RL framework (TorchForge). It will be interesting to see how a managed service competes with free + open source at this layer.

    • pstoll 3 months ago

      “Service Adverbs - like ‘route’ and ‘fanout’”

      Grammarians are going to be big angry here. Ain’t an adverb in sight.

pjmlp 3 months ago

Apparently PyTorch oxidation has started.

> Monarch is split into a Python-based frontend, and a backend implemented in Rust.

Other than that, it looks like quite an interesting project.

  • dhrt12327 3 months ago

    Multiple sources say that it is an experimental framework around PyTorch, not a replacement. People will still get to enjoy circular graphs built with std::shared_ptr, memory leaks included.

    It's a pity they don't do a complete rewrite with a functional language as the driver.

    • gaogao 3 months ago

      > It's a pity they don't do a complete rewrite with a functional language as the driver.

      It's open source, so seeing such an extension would be quite cool. There's a lot that could be done with native Rust actors and code that might get at what you want, and nothing precludes mixing PyTorch with other backends.

      For example, you could wrap a C++ inference engine as part of one of the actors generating data for other actors doing distributed training.
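
      Not from the post, just a rough sketch of what wrapping a native engine as an actor could look like, using the Actor/endpoint API quoted further down this thread (the import path, the library name, and the ctypes binding are all assumptions):

        import ctypes
        from monarch.actor import Actor, endpoint  # import path assumed

        class InferenceActor(Actor):
            """Hypothetical actor wrapping a native C++ inference engine."""

            def __init__(self):
                # load a prebuilt shared library exposing a C ABI (illustrative name)
                self._engine = ctypes.CDLL("libmy_engine.so")
                self._engine.generate.restype = ctypes.c_int

            @endpoint
            def generate(self, prompt_id: int) -> int:
                # runs on whichever proc hosts this actor; the output can be
                # consumed by other actors doing distributed training
                return self._engine.generate(ctypes.c_int(prompt_id))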

    • pjmlp 3 months ago

      Interesting. By the way, you can replicate the experience in Rust.

    • bullfightonmars 3 months ago

      You might be looking for elixir/nx and axon

      https://github.com/elixir-nx/axon

    • hansvm 3 months ago

      Arc<T> has entered the chat.

  • galangalalgol 3 months ago

    This is a new project, right? Not the oxidation of an existing one.

    • gaogao 3 months ago

      Yup, hyperactor, one of the new crates that's part of it, does some particularly interesting things for efficient parallel distributed channels.

alyxya 3 months ago

I made my own single controller PyTorch extension [1], though mine doesn't yet support cross-node communication. I found it interesting to compare how Monarch makes things performant. I believe Monarch also uses cloudpickle to share code among all nodes, which is probably the only performant way to have the various nodes execute work, since it ends up being a one-time setup cost. I found the fan-out of messages from the single controller really interesting; it means the controller is unlikely to be the bottleneck, aside from any synchronous operations.

As far as things that might be a performance loss here, one thing I'm wondering is if custom kernels are supported. I'm also wondering how much granularity of control there is with communication between different actors calling a function. Overall, I really like this project and hope to see it used over multi-controller setups.

[1] https://github.com/alyxya/mycelya-torch
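
A minimal illustration of the cloudpickle piece (plain cloudpickle, not Monarch code): the controller pickles the function by value, so a worker can run code it never imported, and the serialization is a one-time setup cost.

  import cloudpickle

  def loss_fn(x):
      # arbitrary user code defined on the controller
      return (x - 3) ** 2

  # serialize the code object itself (not just a module reference)
  payload = cloudpickle.dumps(loss_fn)

  # ...on a worker process, possibly on another node:
  fn = cloudpickle.loads(payload)
  print(fn(5))  # -> 4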

  • gaogao 3 months ago

    > As far as things that might be a performance loss here, one thing I'm wondering is if custom kernels are supported

    Yeah, you might end up needing some changes to remote worker initialization, but you can generally bake in whatever kernels and other system code you need.

porridgeraisin 3 months ago

> This lets us avoid single-host bottlenecks, effectively using the whole mesh as a distributed cluster for message forwarding. (Cite scalability numbers here.)

In case someone who can fix this is reading here

valzam 3 months ago

I assume this is similar to Ray?

  • cwp 3 months ago

    The code example is very similar to Ray's.

    Monarch:

      class Example(Actor):
          @endpoint
          def say_hello(self, txt):
              return f"hello {txt}"
    
      procs = this_host().spawn_procs({"gpus": 8})
      actors = procs.spawn("actors", Example)
      hello_future = actors.say_hello.call("world")
      hello_future.get()
    
    Ray:

      @ray.remote(num_gpus=1)
      class Example:
          def say_hello(self, txt):
              return f"hello {txt}"
    
      actors = [Example.remote() for _ in range(8)]
      hello_object_refs = [a.say_hello.remote("world") for a in actors]
      ray.get(hello_object_refs)

  • lairv 3 months ago

    I'm also curious what the use case for this is over Ray. Tighter integration with PyTorch/tensor abstractions?

  • unnah 3 months ago

    There's also Dask, which can do distributed pandas and numpy operations etc. However it was originally developed for traditional HPC systems and has only limited support for GPU computing. https://www.dask.org/
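
    For a sense of Dask's flavor of this, a minimal sketch with the standard dask.array API (cluster/scheduler setup omitted; nothing Monarch-specific):

      import dask.array as da

      # build a chunked array; chunks are spread across the cluster's workers
      x = da.random.random((10_000, 10_000), chunks=(1_000, 1_000))

      # operations build a lazy task graph; .compute() executes it
      result = (x @ x.T).mean().compute()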

  • disattention 3 months ago

    I had the same thought, especially because of their recent collaboration.

    https://pytorch.org/blog/pytorch-foundation-welcomes-ray-to-...

milancurcic 3 months ago

Cool! Essentially Fortran coarrays from 2008.

  • philipallstar 3 months ago

    Or Hadoop from 2006? But you don't need to write MapReduce or Fortran, so it's probably far nicer.

    • pjmlp 3 months ago

      Fortran 2023 is already quite nice, and doesn't require rewriting stuff in C for performance.

bjourne 3 months ago

> Monarch lets you program distributed systems the way you’d program a single machine, hiding the complexity of distributed computing:

There is some infamous tech based on the "hiding" paradigm. PHP comes to mind: by hiding how the HTTP request/response cycle actually works, it fostered a generation of web developers who didn't know what a session cookie was, resulting in login systems that leaked like a sieve. Distributed computing is complicated. There are many parameters you need to tweak and many design decisions you need to make to get distributed model training to run smoothly. I think explicit and transparent architectures are way better. Distributed model training shouldn't "feel" like running on a single device, because it isn't.

logicchains 3 months ago

This seems strictly less powerful than Jax, which comes with a powerful compiler that optimises how cross-node communication is conducted.

  • gaogao 3 months ago

    Nah, it's focused on a different controller paradigm. Jax is focused on multi-controller SPMD, while this is focused on a single-controller setup. Both have their place: single-controller is generally easier to reason about, and multi-controller is more optimal for certain dataflows. There are also some interesting mixes of the two control paradigms.
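
    For contrast, a rough sketch of the SPMD style with plain jax.pmap (not Monarch code): every device runs the same function, collectives like pmean handle cross-device communication, and in a multi-host job every host runs this same program instead of being driven by one controller.

      import jax
      import jax.numpy as jnp

      def allreduce_mean(x):
          # cross-device collective along the mapped axis
          return jax.lax.pmean(x, axis_name="devices")

      step = jax.pmap(allreduce_mean, axis_name="devices")

      n = jax.local_device_count()
      # one shard per local device
      print(step(jnp.arange(n, dtype=jnp.float32)))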

fadedsignal 3 months ago

It is a nice project. I have questions.

- Is this similar to OpenMPI?

- How is a mesh established? Do they need to be on the same host?

semessier 3 months ago

This could become a major thing in the coarray world, but the issues start already:

> ...Note that this does not support tensor engine, which is tied to CUDA and RDMA (via ibverbs).

I.e. yet another CUDA-married approach: the issue is not ibverbs, but the code shows they use GPUDirect RDMA, and from there this can only get worse: more CUDA dependencies. OpenUCX would have been an option.

jonapro 3 months ago

Beowulf then.

nothrowaways 3 months ago

FB should create a pytorch foundation and set it free before they fuck it up.

SomaticPirate 3 months ago

"Our Rust-based backend facilitates our performance, scale, and robustness — we amply use Rust’s fearless concurrency in Monarch’s implementation"

Found a few typos. The em dash makes me suspect an LLM was involved in the proofreading.
