alexcombessie
- Karma: 32
- Created: 4 years ago
About
Co-Founder & CEO @ Giskard | Red-teaming platform for LLM agents

Recent Submissions
- 1. LLM Sycophancy: The Risk of Vulnerable Misguidance in AI Medical Advice (giskard.ai)
- 2. Agentic Tool Extraction: Multi-turn attacks that expose AI agents (giskard.ai)
- 3. LMEval: An Open Source Framework for Cross-Model Evaluation (opensource.googleblog.com)
- 4. Show HN: Open-Source Evaluation and Testing for Computer Vision Models (github.com)
- 5. Vision AI model vulnerability scan (docs.giskard.ai)
- 6. AI Systems Security: Top Tools for Preventing Prompt Injection (sahbichaieb.com)
- 7. Scanning LLM app vulnerabilities: Quickstart (docs.giskard.ai)
- 8. Evaluating LLMs with Giskard in MLflow (databricks.com)
- 9. Show HN: Automatic generation of LLM guardrails with NeMo and Giskard (docs.giskard.ai)
- 10. Coursera on Red Teaming LLM Applications (coursera.org)