
Evaluate and Track Your LLM Experiments: Introducing TruLens for LLMs

github.com

25 points by shayaks 3 years ago · 8 comments

westurner 3 years ago

https://news.ycombinator.com/item?id=35810320 :

> - [ ] dvc, GitHub Actions, GitLab CI, Gitea Actions: how to add PROV RDF Linked Data metadata to workflows like DVC.org's & container+command-in-YAML approach

https://dvc.org/
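A minimal sketch of what that could look like for a single workflow stage, expressed with rdflib and W3C PROV-O; the stage, artifact, and agent URIs below are hypothetical placeholders, not an existing DVC or CI mapping:

```python
# Sketch: emit W3C PROV-O metadata for one workflow stage as RDF.
# All URIs here are hypothetical placeholders for illustration only.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import PROV, RDF, XSD

EX = Namespace("https://example.org/workflow/")

g = Graph()
g.bind("prov", PROV)
g.bind("ex", EX)

stage = EX["train-model-run-42"]      # one CI/DVC stage execution
data_in = EX["data/train.csv"]        # input artifact
model_out = EX["models/model.pkl"]    # output artifact
runner = EX["ci-runner"]              # executing agent

g.add((stage, RDF.type, PROV.Activity))
g.add((data_in, RDF.type, PROV.Entity))
g.add((model_out, RDF.type, PROV.Entity))
g.add((runner, RDF.type, PROV.Agent))

g.add((stage, PROV.used, data_in))
g.add((stage, PROV.wasAssociatedWith, runner))
g.add((model_out, PROV.wasGeneratedBy, stage))
g.add((stage, PROV.startedAtTime,
       Literal("2023-05-18T12:00:00Z", datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))
```

Emitting a record like this from each CI job would give downstream tools a machine-readable trail of which data and which runner produced which artifact.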

https://news.ycombinator.com/item?id=34619424 https://westurner.github.io/hnlog/#comment-34619424 :

- XAI: Explainable AI: https://en.wikipedia.org/wiki/Explainable_artificial_intelli...

- > Right to explanation: https://en.wikipedia.org/wiki/Right_to_explanation

- > A more logged approach with IDK all previous queries in a notebook and their output over time would be more scientific-like and thus closer to "Engineering": https://en.wikipedia.org/wiki/Engineering
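A minimal sketch of the append-only query log that comment is arguing for, using only the standard library; the file name and record fields are arbitrary choices for illustration:

```python
# Sketch: an append-only JSONL log of every prompt and response, so the
# full experiment history can be replayed or audited later.
import json
import time
from pathlib import Path

LOG_PATH = Path("llm_query_log.jsonl")

def log_call(llm_fn, prompt, **params):
    """Call an LLM function and append the query, params, and output to the log."""
    started = time.time()
    output = llm_fn(prompt, **params)
    record = {
        "timestamp": started,
        "prompt": prompt,
        "params": params,
        "output": output,
        "latency_s": round(time.time() - started, 3),
    }
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return output

# Usage with any callable that maps a prompt string to a completion string:
# answer = log_call(my_llm, "Summarize the TruLens README.", temperature=0.2)
```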

shayaks (OP) 3 years ago

We open-sourced TruLens for LLMs to help evaluate and track your LLM experiments. We also built in a special integration with LangChain to capture the metadata around your entire chain stack for use with your evaluations. Give it a spin and send us your feedback! We're adding new functionality every day.
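A rough sketch of that workflow; the class and method names (Tru, TruChain, run_dashboard, app_id) follow an early trulens_eval release, and the LangChain imports assume a 2023-era langchain version, so treat them as assumptions rather than the definitive API:

```python
# Sketch: wrap a LangChain chain with TruLens so each call's inputs,
# outputs, and chain metadata are recorded for later evaluation.
# Names below are assumptions based on an early trulens_eval release.
from langchain import LLMChain, OpenAI, PromptTemplate
from trulens_eval import Tru, TruChain

# An ordinary LangChain app (requires OPENAI_API_KEY in the environment).
prompt = PromptTemplate(
    input_variables=["question"],
    template="Answer concisely: {question}",
)
chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)

# Wrap the chain; TruLens records each call along with the chain's structure.
tru = Tru()
tru_chain = TruChain(chain, app_id="qa-chain-v1")

tru_chain("What does TruLens track?")

# Inspect the recorded experiments in the local dashboard.
tru.run_dashboard()
```

Because the wrapper sits around the chain itself, each recorded call can carry the chain-stack metadata the comment above refers to, not just the final input and output.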

shayaks (OP) 3 years ago

Here's a companion blog that explains how it works under the hood: https://medium.com/trulens/evaluate-and-track-your-llm-exper...

warthog454 3 years ago

Better LLM app testing is an urgent need when you see the stuff getting put out there.

arielmia 3 years ago

This is a godsend, exactly what I have been searching for.

duncanid 3 years ago

Smart toolkit for quick sanity checks while developing.

laryb2k 3 years ago

Amazing and very helpful package!!!
