Show HN: High-performance GenAI engine now open source
Hey HN,
After one too many customer fire drills over hallucinating or insecure AI models, we built a system to catch these issues before they reach production. The Arthur Engine has been running at organizations ranging from Fortune 100 enterprises to AI-native startups over the past two years, putting security controls around more than 10 billion tokens in production every month. We're now opening it up to developers, so you can run enterprise-grade guardrails and evals as a service, all for free.
Get it on GitHub (https://github.com/arthur-ai/arthur-engine) to start evaluating your models today.
Highlights of Arthur's Engine include:
* Built for speed and scale: sub-second p90 latencies at well over 100 RPS.
* Made for full lifecycle support: Ideal for pre-production validation, real-time guardrails, and post-production monitoring.
* Ease of use: designed to be easy for anyone to run and deploy, whether you're working locally during development or deploying it in a horizontally scaling architecture for large-scale workloads.
* Unification of generative and traditional AI: The Arthur Engine can be used to evaluate a diverse range of models, from LLMs and agentic AI systems to binary classifiers, regression models, recommender systems, forecasting models, and more.
* Content-specific guardrail and detection features: from toxicity and hallucination detection to sensitive data checks (PII, keyword/regex, and custom rules) and prompt injection. (A rough sketch of this style of check appears after this list.)
* Customizability: Plug in your own models or integrate with other model or guardrail providers with ease, and tailor the system to match your specific needs.
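To give a feel for what a content-specific check can look like, here is a minimal, hypothetical Python sketch of a regex/keyword guardrail on a prompt. It only illustrates the general technique; it is not the Arthur Engine API, and the patterns and blocklist are made-up examples.

    import re

    # Hypothetical, minimal guardrail sketch. It illustrates the idea of
    # regex/keyword checks only; it is NOT the Arthur Engine implementation.

    PII_PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }
    BLOCKED_KEYWORDS = {"internal_api_key", "confidential"}  # example custom rules

    def check_prompt(prompt: str) -> list[str]:
        """Return a list of rule violations found in the prompt."""
        violations = []
        for name, pattern in PII_PATTERNS.items():
            if pattern.search(prompt):
                violations.append(f"pii:{name}")
        lowered = prompt.lower()
        for keyword in BLOCKED_KEYWORDS:
            if keyword in lowered:
                violations.append(f"keyword:{keyword}")
        return violations

    if __name__ == "__main__":
        findings = check_prompt("My SSN is 123-45-6789, please summarize my file.")
        if findings:
            print("Blocked:", findings)  # e.g. ['pii:ssn']
        else:
            print("Prompt passed guardrails.")

In a real deployment, checks like this would run as a service in front of the model call, alongside the learned detectors (toxicity, prompt injection, hallucination) listed above.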
Having seen first-hand the lack of adequate AI monitoring tools and how often GenAI systems under-deliver in production, we believe this capability shouldn't be exclusive to big-budget organizations. Our mission is to make AI better for everyone, and we believe that opening up this tool will help more people get there.
Check out our examples repo for directions on using the Arthur Engine for validation during development, real-time guardrails, and performance troubleshooting with enriched logging data. (https://github.com/arthur-ai/engine-examples)
We can’t wait to see what you build.
— Zach and Team Arthur

We've been ramping up our GenAI usage for the last ~month at Upsolve and it's becoming a huge pain. There are already a million solutions for observability out there, but I like that this one is open source and can detect hallucinations.

Thanks for open sourcing and sharing, excited to try this out!!

Yeah, thanks for the feedback. We think we stand out from our competitors in the space because we built for the enterprise case first, with consideration for things like data governance, acceptable use, data privacy, and information security, and the engine can be deployed easily and reliably in customer-managed environments. A lot of the products today have similar evaluations and metrics, but they either offer a SaaS solution or require some onerous integration into your application stack. Because we started with the enterprise first, our goal was to get to value as quickly and as easily as possible (to avoid shoulder-surfing over Zoom calls because we don't have access to the service), and we think this plays out well in our product.

Cool, I'm running a few GenAI automations, but they're rather unsupervised. So I'm going to try it and check how they're doing. Thanks for sharing!

This looks perfect for teams getting started with monitoring for all model types -- excited to try it out!

Very excited to be trying this out! The examples look very useful and I'm excited to tie it in with other open source solutions.

Yoo! Hopefully no more "oops our AI just leaked the system prompt" moments thanks to these guardrails!

Love this. More transparency + better tooling is exactly what AI needs right now. Excited to give it a try.

Interesting, AI needs much better guardrails and monitoring!

Looks great! How does the system detect hallucinations?

Yeah, great question. We based our hallucination detection on "groundedness" on a claim-by-claim basis, which evaluates whether the LLM response can be cited in the provided context (e.g., message history, tool calls, retrieved context from a vector DB, etc.). We split the response into multiple claims, determine whether a claim needs to be evaluated (i.e., that it isn't just boilerplate), and then check whether the claim is referenced in the context. (A rough sketch of this flow appears below.)

Excited to get hands-on with this. I've had too many sleepless nights trying to figure out how to track when my agents were hallucinating. Very cool!
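For anyone curious how the claim-by-claim groundedness idea described in that reply can be structured, here is a minimal, self-contained Python sketch. The sentence-based claim splitting, the boilerplate heuristic, and the token-overlap scoring are stand-ins chosen for illustration; the actual engine uses its own claim evaluation and evidence checks.

    import re

    # Illustrative sketch of claim-by-claim groundedness checking.
    # Simplifications: claims are split on sentence boundaries, boilerplate is
    # filtered with a trivial heuristic, and "supported by context" is
    # approximated with token overlap rather than a learned model.

    def split_claims(response: str) -> list[str]:
        """Split an LLM response into candidate claims (here: sentences)."""
        return [s.strip() for s in re.split(r"(?<=[.!?])\s+", response) if s.strip()]

    def needs_evaluation(claim: str) -> bool:
        """Skip boilerplate such as greetings or hedges (toy heuristic)."""
        boilerplate = ("sure", "of course", "let me know", "happy to help")
        return not claim.lower().startswith(boilerplate)

    def is_grounded(claim: str, context: str, threshold: float = 0.5) -> bool:
        """Approximate 'cited in context' as the share of claim tokens found in the context."""
        claim_tokens = set(re.findall(r"\w+", claim.lower()))
        context_tokens = set(re.findall(r"\w+", context.lower()))
        if not claim_tokens:
            return True
        return len(claim_tokens & context_tokens) / len(claim_tokens) >= threshold

    def hallucination_report(response: str, context: str) -> dict[str, bool]:
        """Map each evaluated claim to whether it is supported by the context."""
        return {
            claim: is_grounded(claim, context)
            for claim in split_claims(response)
            if needs_evaluation(claim)
        }

    if __name__ == "__main__":
        context = "The invoice was paid on March 3rd by the finance team."
        response = "Sure, happy to help. The invoice was paid on March 3rd. It was paid in cash."
        print(hallucination_report(response, context))
        # The unsupported claim ("It was paid in cash.") is flagged as not grounded.

Swapping the overlap score for a stronger entailment-style check is the natural upgrade, but the overall flow (split into claims, filter boilerplate, verify each claim against the provided context) is the same one described in the reply above.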