Ask HN: Deploying Services with an LLM Interface?

1 point by sigalor 3 months ago · 2 comments · 1 min read


With so many LLM-based tools emerging these days, I've been thinking about a service that would make life so much easier: deploying new services without hassle. I would want to:

1. Connect an account from any cloud provider (similar to https://sst.dev/), so that the generated infrastructure is self-hosted by default.

2. After that's done, simply have access to a ChatGPT-like interface where I can write commands in natural language, like "Deploy a new MongoDB instance".

3. It should abstract away all the complexity and, in this example, simply give me the MongoDB connection string at the end.

The idea is to reduce infra setup overhead to zero and make it possible to try out new services very quickly, instead of dealing with the annoyances of deployment. Sort of like a heavily simplified Pulumi, where no technical knowledge of infrastructure would be required at all. Roughly, I'm picturing something like the sketch below.
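
To make that concrete, here is a rough sketch of the flow I have in mind. Everything in it is hypothetical: generate_iac() stands in for whatever LLM you'd call, and Terraform with an output named connection_string is just one possible backend.

    # Hypothetical sketch: a natural-language request goes to an LLM, which
    # emits Terraform; a human reviews it; only then is it applied, and the
    # tool hands back the connection string as the single visible result.
    import pathlib
    import subprocess

    def generate_iac(request: str) -> str:
        # Placeholder: call an LLM of your choice and return Terraform HCL
        # defining the requested resource plus an output named
        # "connection_string". Not implemented here.
        raise NotImplementedError("plug in your LLM call")

    def deploy(request: str, workdir: str = "deployment") -> str:
        hcl = generate_iac(request)
        path = pathlib.Path(workdir)
        path.mkdir(exist_ok=True)
        (path / "main.tf").write_text(hcl)

        # Human-in-the-loop: show the generated IaC before anything is created.
        print(hcl)
        if input("Apply this? [y/N] ").lower() != "y":
            raise SystemExit("aborted")

        subprocess.run(["terraform", "init"], cwd=path, check=True)
        subprocess.run(["terraform", "apply", "-auto-approve"], cwd=path, check=True)

        # The only thing the user ever sees: the connection string.
        result = subprocess.run(
            ["terraform", "output", "-raw", "connection_string"],
            cwd=path, check=True, capture_output=True, text=True,
        )
        return result.stdout.strip()

    # deploy("Deploy a new MongoDB instance")  # -> "mongodb://..."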

Surely this must exist already? Does anyone have recommendations?

tedggh 3 months ago

The problem is that LLMs are still non-deterministic, so they cannot reliably maintain persistent state or guarantee reproducibility.

  • sigalor (OP) 3 months ago

    I don't get that argument at all. The LLM could just be an assistant that generates Helm charts or something along those lines, and a human could still review everything manually before it's deployed; something like the sketch below. I just find it so strange that LLMs are already pretty good at generating regular programming code, but generating IaC (Infrastructure as Code) is still so underdeveloped.
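
    A rough sketch of what I mean, with generate_values() as a placeholder for the LLM call and bitnami/mongodb only as an example chart (it assumes the bitnami repo was already added with helm repo add):

        # Hypothetical sketch: the LLM only drafts a values.yaml for an
        # existing chart; a dry-run render is shown to a human, and nothing
        # touches the cluster until they approve it.
        import pathlib
        import subprocess

        def generate_values(request: str) -> str:
            # Placeholder: ask an LLM to turn the request into a values.yaml
            # for the bitnami/mongodb chart. Not implemented here.
            raise NotImplementedError("plug in your LLM call")

        def assisted_deploy(request: str) -> None:
            values = generate_values(request)
            pathlib.Path("values.yaml").write_text(values)

            # Render the manifests without installing, so a human can review them.
            subprocess.run(
                ["helm", "install", "mongo", "bitnami/mongodb",
                 "-f", "values.yaml", "--dry-run"],
                check=True,
            )
            if input("Manifests look OK, install for real? [y/N] ").lower() != "y":
                return
            subprocess.run(
                ["helm", "install", "mongo", "bitnami/mongodb", "-f", "values.yaml"],
                check=True,
            )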
