
Show HN: Elelem – TypeScript LLMs with tracing, retries, and type safety (github.com)

3 points by jrhizor 2 years ago · 2 comments


Elelem is a simple, opinionated, JSON-typed, and traced LLM framework in TypeScript.

It's a proof of concept but has some nice features (rough sketch of the pattern after the list):

1. Caching (with Redis)

2. Tracing (with OpenTelemetry)

3. Generated examples of JSON output (using FakerJS), which results in more stable output types.

4. Type-safe outputs

5. Built-in retry logic
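
To give a feel for the general pattern, here's a simplified sketch. This is not Elelem's exact interface (see the repo for the real calls), and it leaves out the FakerJS-generated example that Elelem also injects into the prompt:

    import OpenAI from "openai";
    import { z } from "zod";
    import { zodToJsonSchema } from "zod-to-json-schema";

    // Describe the expected output with a Zod schema...
    const Recipe = z.object({
      title: z.string(),
      ingredients: z.array(z.string()),
      minutes: z.number(),
    });

    const openai = new OpenAI();

    // ...put the JSON Schema into the system prompt, then parse and
    // validate the reply so the caller gets a typed value back.
    async function typedCompletion<T extends z.ZodTypeAny>(
      schema: T,
      userPrompt: string,
    ): Promise<z.infer<T>> {
      const res = await openai.chat.completions.create({
        model: "gpt-3.5-turbo",
        messages: [
          {
            role: "system",
            content:
              "Reply with JSON only, matching this JSON Schema:\n" +
              JSON.stringify(zodToJsonSchema(schema)),
          },
          { role: "user", content: userPrompt },
        ],
      });
      return schema.parse(JSON.parse(res.choices[0].message.content ?? "{}"));
    }

    const recipe = await typedCompletion(Recipe, "Give me a quick pasta recipe.");

Caching (Redis) and tracing (OpenTelemetry) wrap around calls like this one.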

I'd appreciate any feedback on the interface and would be happy to answer questions for anyone interested in trying it out.

Thanks.

gsuuon 2 years ago

I like the integrated telemetry and caching. It looks like the prompt formatter adds text to the system instruction - have you tried using function calling to get json responses? Does the retry mechanism validate against the zod schema?

  • jrhizor (OP) 2 years ago

    I've had a bit of trouble getting function calling to work for cases that aren't just extracting data from the input. The output format was correct, but it was harder to get the right values when the task wasn't a simple extraction.

    Hopefully OpenAI and others will offer something like https://github.com/guidance-ai/guidance at some point to guarantee overall output structure.

    Failed validations do trigger a retry, but from what I've seen, JSON Schema plus generated JSON examples are decently reliable in practice with gpt-3.5-turbo and extremely reliable with gpt-4.
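
    Roughly, the retry wrapper looks something like this (simplified sketch, not the actual code):

        // Simplified sketch: retry when JSON.parse or the Zod schema's
        // parse() throws, up to a fixed number of extra attempts.
        async function withRetries<T>(
          attempt: () => Promise<T>,
          maxRetries = 3,
        ): Promise<T> {
          let lastError: unknown;
          for (let i = 0; i <= maxRetries; i++) {
            try {
              return await attempt();
            } catch (err) {
              lastError = err; // parse/validation failure: try again
            }
          }
          throw lastError;
        }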
