alexcombessie

Karma
32
Created
4 years ago

About

Co-Founder & CEO @ Giskard | Red-teaming platform for LLM agents

Recent Submissions

  1. LLM Sycophancy: The Risk of Vulnerable Misguidance in AI Medical Advice (giskard.ai)
  2. Agentic Tool Extraction: Multi-turn attacks that expose AI agents (giskard.ai)
  3. LMEval: An Open Source Framework for Cross-Model Evaluation (opensource.googleblog.com)
  4. Show HN: Open-Source Evaluation and Testing for Computer Vision Models (github.com)
  5. Vision AI model vulnerability scan (docs.giskard.ai)
  6. AI Systems Security: Top Tools for Preventing Prompt Injection (sahbichaieb.com)
  7. Scanning LLM app vulnerabilities: Quickstart (docs.giskard.ai)
  8. Evaluating LLMs with Giskard in MLflow (databricks.com)
  9. Show HN: Automatic generation of LLM guardrails with NeMo and Giskard (docs.giskard.ai)
  10. Coursera on Red Teaming LLM Applications (coursera.org)
