Show HN: Llm2sh – Translate plain-language requests into shell commands

github.com

67 points by RandomBK a year ago · 22 comments

This is my take on the common "use LLMs to generate shell commands" utility. Emphasis is placed on good CLI UX, simplicity, and flexibility.

`llm2sh` supports multiple LLM providers and lets LLMs generate multi-command sequences to handle complex tasks. There is also limited support for commands requiring `sudo` and other basic input.

I recommend Groq llama3-70b for day-to-day use. The ultra-low latency is a game-changer: near-instant responses let `llm2sh` blend into everyday tasks without breaking you out of the 'zone'. For more advanced tasks, swapping to a smarter model is just a CLI option away.
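
For reference, invocation looks like this (the request strings are illustrative; `-m` is the model-selection flag discussed further down, and by default `llm2sh` lists the proposed commands and asks for confirmation before running them):

```
$ llm2sh "clean out the apt cache and show how much space was freed"
$ llm2sh -m local "rename every .jpeg in this directory to .jpg"
```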

padolsey a year ago

Cool! I'm experimenting with something like this that uses Docker containers to ensure it's sandboxed and, crucially, rewindable. That way I can let it do ~whatever it wants without having to verify commands myself. Obviously it's still risky to let it touch network resources, but there are workarounds for that (see the sketch below).
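
The core of the sandboxing idea can be sketched with stock Docker flags (this is an illustration only, not padolsey's actual setup):

```
# Hypothetical sketch: run a generated command in a throwaway container.
# --network none blocks network access, the read-only mount protects host
# files, and --rm discards the container afterwards ("rewindable").
docker run --rm --network none -v "$PWD:/work:ro" -w /work ubuntu:24.04 \
  sh -c 'du -sh ./*'
```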

  • freedomben a year ago

    What is the hoped-for future after your experiments? Are you hoping to gain enough confidence in (and refinement of) it that you can run it on your main system? Or just hoping to satisfy curiosity? (I could see myself doing either, so just curious.)

    • padolsey a year ago

      I wouldn't be happy to run it containerless, per se, but within Docker or on a remote spun-up box it's fine. I think it's mostly something to play with for now, but I'll eventually build it into a couple of AI platforms I run, just to help with amorphous tasks and to get more deterministic outputs for math and data.

yjftsjthsd-h a year ago

Some really nice things:

+ GPLv3

+ Defaults to listing commands and asking for confirmation

+ Install is just "pip install"

+ Good docs with examples

Is there a way to point it at an arbitrary API endpoint? IIRC llama.cpp can serve an OpenAI-compatible API, so it should be a drop-in?

  • RandomBK (OP) a year ago

    I'm glad you like it! The whole point is to have a simple, no-frills CLI experience.

    > Is there a way to point at an arbitrary API endpoint?

    Yes. Set the `local_uri` setting in the configuration (`llm2sh --setup`), and either pass `-m local` on the CLI or set `"default_model": "local"` in the config.
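
    As a concrete sketch, the relevant entries in the generated config would look something like this (the URI value is an example for a local llama.cpp server; the actual file comes from `llm2sh --setup`):

    ```
    {
      "default_model": "local",
      "local_uri": "http://localhost:8080/v1"
    }
    ```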

conkeisterdoor a year ago

This looks great! I would use this if it had a dispatcher for a custom/local OpenAI-compatible API, e.g. a llama.cpp server. If I can make some time, I'll take a stab at writing one and submit a PR :)

  • RandomBK (OP) a year ago

    Already exists :)

    Set the `local_uri` setting in the configuration (`llm2sh --setup`), and either pass `-m local` on the CLI or set `"default_model": "local"` in the config.

    PRs are always welcome.

causal a year ago

This looks good.

I created something similar using blade a while back, but I found that using English to express what I wanted was actually really inefficient. It turns out that for most commands, the command syntax is already a pretty expressive format.

So nowadays I'm back to using a chat UI (Claude) for the scenarios where I need help figuring out the right command. Being able to iterate is essential in those scenarios.

MuffinFlavored a year ago

How much time does this actually gain you, given that you have to double-check its output and hope it didn't make a mistake?

  • visarga a year ago

    Doesn't save time, but it's a nice experience. Sometimes you get exposure to something you didn't know, other times it makes the boring stuff less annoying.

llagerlof a year ago

Nice tool. I am using ai-shell for that purpose.

https://github.com/BuilderIO/ai-shell

Y_Y a year ago

There are plenty of variations on this tool around; it would be nice to see a comparison.

amelius a year ago

Does it understand commands such as "get this Nvidia driver to work"?

  • elashri a year ago

    I'm not sure it actually reads any context from your environment; GitHub Copilot CLI does not. So if you want it to answer something like that, you will need to tell it exactly what "this Nvidia driver" actually is.

  • visarga a year ago

    I think it's one-round, not interactive, so it can't do complex tasks.

francisduvivier a year ago

I wonder how this compares to Open Interpreter.

fire_lake a year ago

Would you consider rewriting this in a more portable language? Ideally it would be a single binary, excluding the models!

  • RandomBK (OP) a year ago

    I've no plans for it myself, but that sounds like an interesting project! It should be fairly easy to hack together: the actual logic is just a few dozen lines of glue code around a single HTTP POST request.
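
    For a sense of scale, that request against an OpenAI-compatible endpoint is roughly the following (the endpoint, model name, and prompt wording here are illustrative, not `llm2sh`'s actual values):

    ```
    # Illustrative only: a single chat-completions POST, which is the core
    # of what a tool like this wraps.
    curl -s https://api.groq.com/openai/v1/chat/completions \
      -H "Authorization: Bearer $GROQ_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{
        "model": "llama3-70b-8192",
        "messages": [
          {"role": "system", "content": "Translate the user request into shell commands."},
          {"role": "user", "content": "list the five largest files here"}
        ]
      }'
    ```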
