Show HN: OpenMetal – open-source first meld of private and public cloud
openmetal.io

Hey All!
OpenMetal came out of incubation late last year and I would love HN feedback and thoughts.
We have had a lot of pushback, both early on and currently, because our company and business model go against the grain… but when we find a matching customer, they love us!
We are an infrastructure company, already operating at some scale, that competes - or at least is trying to - against the likes of AWS, GCP, etc. using open source private cloud technologies, with OpenStack and Ceph at the core.
So, I already have thick skin... Feel free to tell me I am crazy, but hopefully you will also take a look!
Thanks!
Todd

The phrase "open source" is mentioned repeatedly here and in your https://openmetal.io/about-openmetal/guiding-principles/, but https://github.com/openmetalio seems to contain only the docs, and a simplistic search of them <https://github.com/openmetalio/openmetal-docs/search?q=git+c...> doesn't cough up the open-source-ness of OpenMetal. Is your pitch that you use open source first, or that you are open source first?

ed: <strike>I would actually really enjoy it if https://docs.openstack.org/devstack/zed/guides/single-vm.htm... worked, because moto only goes so far, whereas the ability to actually do AWS-y stuff in a VM would enable much more realistic tests than either localstack or moto offer</strike> I was inspired to actually try this again, and it is now much better; I'd go so far as to say it's now actually very cool.

Which brings us back to your post: the website seems to be targeting so many different audiences, and at such differing levels of hand-waviness, that it's hard to tell what problem paying you would solve. Is it just that you're taking the pain out of provisioning OpenStack, and then good luck? Are you trying to take the localstack model of "open source, but operationalized" in order to add more surface area onto OpenStack (and then, hopefully, feed those contributions back into OpenStack)?

Thanks for looking! And you got all the way to our guiding principles! Awesome.

Our spot in the OpenStack/open source ecosystem isn't code - and it looks like that isn't clear enough - it is that we make the full open source versions of OpenStack and Ceph available in an environment suited to those systems: not a test system, but a real multi-server system. It lets people experience OpenStack and Ceph before ever having to take on architecting them, and the Ansible we used is available to them, either to carry forward on our systems or to use as the basis of anything they want to build. This goes both for paying customers and for people learning - we have set aside resources in our business model to let people spin up and use the clusters so they can learn. OpenStack and Ceph in real production are pretty hard to pull off - failure is really common - so what we are hoping is that we can at least ease some of that. We also help OpenStack itself by providing hardware for the Zuul dev pipelines.

As for who we sell to, how we make money, and what we solve: these are usually mid-size companies spending between $30k and $100k per month on cloud, and we handle part of that spend - usually because we can tune the cloud to their workload for better performance, or because we are a better deal with better support than the local mega cloud...

I created an account, but it appears that the only 2FA options offered are SMS and email; any plans on supporting TOTP?

Also, regarding

> and the Ansible we used is available to them, either to carry forward on our systems or to use as the basis of anything they want to build.

I was expecting to find that behind the login wall, but did not. Is that something that eventually shows up on the "central" dashboard somewhere?

Speaking of that dashboard, I did eventually find the username for the nodes in the tutorial, but including `root@` in front of the hostname would make that process a tiny bit smoother.

More 2FA work is on the roadmap; I added another thumbs up for TOTP - the team has a pretty big backlog, though. I am pretty sure I found your account on our side - I sent you an email from the system in case I can help.
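For readers unfamiliar with it, the TOTP flow being requested here is small enough to sketch. A minimal example using the pyotp library follows; the library choice, issuer name, and account label are illustrative assumptions, not anything OpenMetal has shipped:

    import pyotp

    # Enrollment: generate a per-user secret and hand it to the user's
    # authenticator app, usually rendered as a QR code of this URI.
    secret = pyotp.random_base32()
    uri = pyotp.TOTP(secret).provisioning_uri(
        name="user@example.com",          # hypothetical account label
        issuer_name="OpenMetal Central",  # hypothetical issuer
    )
    print(uri)

    # Login: check the 6-digit code the user types in; valid_window=1
    # tolerates one 30-second step of clock drift in either direction.
    totp = pyotp.TOTP(secret)
    code = input("Enter the code from your authenticator app: ")
    print("valid" if totp.verify(code, valid_window=1) else "invalid")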
The Ansible playbooks are held inside a system called Semaphore for each customer's deployment. Playbooks are specific to the servers selected: the configuration for a cluster made up of 3x single-CPU boxes with 128GB of RAM and a single worker drive differs from one for 3x dual-CPU boxes with 1TB of RAM and 4 worker drives. We copy the playbooks to the servers for now, as we are still working on making Semaphore accessible from the Central interface.

For the root@ - thanks! I will pass that to the dev team. I found some with it and some without it; it makes sense to have it on all of them since these are root keys.

Oh, wow, that does sound awesome, and thank you for taking the time to respond. Based on that, I am for sure in the target audience of folks who would like to learn more about how OpenStack works "for real."

Cool, and thank you for the kind words! And really, thanks for looking. If you decide to try it, check out https://openmetal.io/programs/education-and-training/ - and right now, from this discussion, I am adding a link to that page from the Guiding Principles page. Seriously, thanks!

While searching for any "git clone" instructions, seeing https://github.com/openmetalio/openmetal-docs/blob/main/docs... is a very bad sign, since kubespray is indescribably slow and fragile. There are so many really great Kubernetes bring-up tools nowadays that using kubespray is a disservice. I guess if your business model is just enabling the customer to provision whatever they want onto their bare metal, or even OpenStack machines... I guess it's not your problem, but I still think either party could be more successful by picking a less fragile bring-up mechanism.

First, thanks for checking it out! I'll take that back to the engineering group - I don't think they had formed a specific opinion, so the feedback is really helpful. For Kubernetes, we validated OpenShift/OKD, Rancher, Kubespray as you mentioned, and kOps on our "pin" of OpenStack. We also use an OpenStack system with something called Magnum - though this one is up for discussion, as Magnum may have a limited lifespan in front of it. Check this out if you're interested: https://openmetal.io/docs/manuals/kubernetes-guides/ Our model does encourage customers to go the way they want, since it is a full root-level OpenStack, but we absolutely want to give good guidance when they don't already have their systems going.
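For the curious, here is a rough sketch of what driving Magnum programmatically can look like, assuming the openstacksdk library and its container-infra proxy; the cloud name, template UUID, keypair, and node counts below are placeholders, not values from OpenMetal's docs:

    import openstack

    # "openmetal" is a hypothetical entry in clouds.yaml holding the
    # cluster's credentials; a full root-level cloud provides these.
    conn = openstack.connect(cloud="openmetal")

    # Magnum is exposed through openstacksdk's container-infra proxy.
    coe = conn.container_infrastructure_management

    # See which cluster templates the operators have published.
    for template in coe.cluster_templates():
        print(template.name, template.coe)

    # Create a small Kubernetes cluster from a template; the values
    # here are illustrative only.
    cluster = coe.create_cluster(
        name="demo-k8s",
        cluster_template_id="<template-uuid>",
        keypair="my-keypair",
        master_count=1,
        node_count=3,
    )
    print(cluster.id)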