Guardrails AI wants to crowdsource fixes for GenAI model problems
techcrunch.com

I found Guardrails AI really useful for my research on LLMs. Otherwise I would have wasted a lot of time trying to curate the outputs from my experiments.
Don’t cram too much into a single prompt, though. Prompt structures like guardrails naturally carry a high cognitive load for the LLM, which leads to biased outputs. I found the best practice is to alleviate this by splitting the task across multiple prompts and applying the guardrail as an end step, rather than packing everything into one big prompt. (https://arxiv.org/abs/2402.01740)
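A rough sketch of that end-step pattern, with a stubbed-out LLM call (the function names, the banned-term validator, and the fake responses here are all hypothetical placeholders, not the Guardrails AI API): each sub-task gets its own small prompt, and validation runs once on the combined result.

```python
# Hypothetical illustration of "guardrail as an end step":
# several focused prompts, one validation pass at the end.

def call_llm(prompt: str) -> str:
    # Stand-in for a real LLM call; swap in your provider's client here.
    return f"answer to: {prompt}"

def end_step_guardrail(text: str, banned: list[str]) -> str:
    # Toy validator: reject output containing any banned term.
    for term in banned:
        if term in text.lower():
            raise ValueError(f"guardrail tripped on {term!r}")
    return text

def pipeline(questions: list[str]) -> str:
    # One small prompt per sub-task keeps each call's cognitive load low.
    drafts = [call_llm(q) for q in questions]
    combined = "\n".join(drafts)
    # The guardrail runs once, on the finished output, not inside every prompt.
    return end_step_guardrail(combined, banned=["<secret>"])

result = pipeline(["summarize section 1", "summarize section 2"])
```

The key design choice is that the validator sees only the final artifact, so each upstream prompt stays short and focused on its own sub-task.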
Pretty interesting to go from a world of deterministic code to LLMs, which can do incredible things, but unreliably. In a world of LLMs, I could imagine guardrails being a table-stakes part of engineering an ML system, just as unit tests and CI/CD are for traditional software.
Hi everyone, the CEO of Guardrails AI here! We're stoked to launch Guardrails Hub, an open-source framework for solving AI reliability.
Check out the hub here -> https://hub.guardrailsai.com/