Show HN: Multinode – Python framework for building distributed cloud apps (multinode.dev)
@dragosbulugean Do you have any preferences for where it should run?
For now, we're thinking of running it on ECS/Fargate in our managed AWS account. But we appreciate that there are advantages to other approaches.
@kennywong1137 some curiosities :)
Will there be a "provision X amount of compute" step for customers?
Or is it a completely serverless offering?
If it's serverless, how do you plan to absorb demand spikes on your own infra?
Does it offer multi-cloud support, or is it AWS-only right now?
Does it automatically handle scaling based on app demand?
Only AWS for now. What use case do you have in mind for multi-cloud?
It's designed specifically with autoscaling in mind; see https://multinode.dev/docs/resources/workers
Are you planning to open source the code?
Not at the moment, though we're considering making it source-available. What are your thoughts on this one? It's still early, so we're happy to adjust things given demand!
We're a small team of engineers who spent way too much time setting up user-triggered distributed workflows on K8s while working on our previous product, even though conceptually we knew exactly what our system was supposed to do [1].
So we decided to build Multinode. Its goal is to eliminate the hassle of setting up compute infrastructure, so you can build arbitrarily complex distributed applications without leaving the comfort of your local Python environment.
You can use it to set up distributed async workflows, continuously running daemons, APIs, etc., with the help of only five additional annotations; a sketch of the style follows below.
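To give a flavor of that annotation style, here's a minimal sketch. The import and decorator names (multinode, function, job, daemon) and their parameters are illustrative assumptions, not our confirmed API; the real interface is in the docs at https://multinode.dev/docs.

    # Hypothetical sketch of the annotation style; all names are illustrative.
    import multinode as mn  # assumed package name

    @mn.function(cpu=1.0, memory="2GiB")  # hypothetical: runs remotely on its own worker
    def analyze(item: str) -> dict:
        return {"item": item, "score": len(item)}

    @mn.job  # hypothetical: a user-triggered async workflow
    def process_batch(items: list[str]) -> list[dict]:
        # Each call below would fan out to a remote worker.
        return [analyze(i) for i in items]

    @mn.daemon  # hypothetical: a continuously running background process
    def poll_for_work() -> None:
        ...

The point is that the annotations are the only cloud-specific part; everything else is ordinary Python you write and test locally.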
TECHNICAL DETAILS
Internally, all the infra runs on AWS. All the compute runs on ECS with Fargate, using controllers similar to those used by Kubernetes. Unfortunately, Fargate has pretty annoying resource limitations (hence https://multinode.dev/docs/resources/cpus-gpus-memory), so we will likely port it to ECS on EC2, or straight to K8s, at some point. The task registry and Multinode's dict run on Amazon Aurora. The product itself is written in Python; the docs run on Firebase with Next.js, using one of the official Tailwind templates (I'd really recommend them!).
FEEDBACK NEEDED
In its current version, the entirety of your cloud compute would live in Multinode. But the #1 reason we decided to build it was user-triggered autoscaled workflows [1]. We're now toying with the idea of dropping most of the features to focus on solving this core problem in a way that's easy to integrate with your existing infrastructure [2], instead of having to build everything in Multinode. Something like Airflow or Dagster, but with an emphasis on user-triggered workflows with very fast autoscaling and topologies defined at runtime (a sketch of what we mean follows below). Do you think we should go with this change, or keep Multinode as it is now?
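For concreteness, here's the kind of runtime-defined topology we mean, in the spirit of the chess example in [1]. Again a hypothetical sketch with illustrative names, not our confirmed API:

    # Hypothetical sketch: the fan-out width is only known at call time.
    import multinode as mn  # assumed package name

    def legal_moves(board: str) -> list[str]:
        ...  # e.g. computed with python-chess; count varies per position

    @mn.function(cpu=2.0)  # hypothetical: each evaluation runs on its own worker
    def evaluate_move(board: str, move: str) -> float:
        ...  # expensive engine evaluation

    @mn.job  # hypothetical: a user-triggered workflow
    def best_move(board: str) -> str:
        moves = legal_moves(board)
        # The topology is decided here, at runtime: one remote evaluation
        # per legal move, and workers scale up to match the fan-out.
        scores = [evaluate_move(board, m) for m in moves]
        return moves[scores.index(max(scores))]

Unlike a DAG declared ahead of time, the fan-out here depends on the user's input, which is exactly where fast autoscaling matters.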
Also, apologies for the closed alpha: we're still figuring out how to put automatic pricing and quotas in place, and opening it up to the public before that could bankrupt us. In practice the pricing will be some markup on our compute cost. Any advice on what's usually reasonable here?
Any thoughts and feedback on these two points would be much appreciated!
[1]: For the curious, it was something similar to our chess example: https://multinode.dev/docs/core-concepts/job#distributing-a-...
[2]: Ideally, both compute and storage would be provisioned in your own AWS VPC, with only the control plane running on our infra, for maximum privacy and security.
Looks like a nice abstraction for distributed computing :)
Where does it run?