Ask HN: With Promptfoo acquired by OpenAI, what are MCP devs using for testing?
With OpenAI's acquisition of Promptfoo last week, I've been thinking about the testing gap for MCP servers specifically. Promptfoo was great for LLM evaluation but didn't handle MCP's transport layer, tool schema validation, or MCP-specific vulnerabilities like Tool Poisoning.
What are people using to test their MCP servers in CI? The MCP Inspector is interactive-only.
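As a starting point, even a plain sanity check over `tools/list` output catches a lot in CI: verify each tool has a name and an object-typed `inputSchema`, and grep descriptions for injected instructions. A rough sketch (the poisoning patterns are illustrative heuristics I made up, not a real scanner):

```python
import re

# Naive red flags for Tool Poisoning: instructions smuggled into a tool
# description to steer the model rather than describe the tool.
# (Illustrative heuristics only, not a real scanner.)
POISONING_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"do not (tell|inform|mention)",
    r"<important>",
]

def check_tool(tool: dict) -> list[str]:
    """Return a list of problems found in one tool definition."""
    problems = []
    if not tool.get("name"):
        problems.append("missing name")
    schema = tool.get("inputSchema")
    if not isinstance(schema, dict) or schema.get("type") != "object":
        problems.append("inputSchema is not a JSON Schema object")
    desc = tool.get("description", "")
    for pat in POISONING_PATTERNS:
        if re.search(pat, desc, re.IGNORECASE):
            problems.append(f"suspicious description (matches {pat!r})")
    return problems
```

Run it over whatever your server actually returns and fail the build on any non-empty result.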
I've personally been building MCPSpec [https://light-handle.github.io/mcpspec/], but I'm curious what approaches others are taking: custom scripts, unit tests on server internals, something else?

---

/? mcp test: https://hn.algolia.com/?q=mcp+test : MCP Playground, MCPSpec, MCPjam, MCP Inspector, mcp-record, mcpbr, agent-vcr

mcpbr > Supported Benchmarks: https://github.com/supermodeltools/mcpbr#supported-benchmark... :

> MCPToolBench++, ToolBench, AgentBench, WebArena, TerminalBench, InterCode, SWE-bench

---

This is a good list. Are you finding these can be integrated into your CI/CD workflows?

---

Oh, TBH I've hardly worked with MCP. Do A2A and AP2 have better auth and authz? There seem to be a lot of solutions for minimizing token use with MCP?

/? mcp tokens
https://hn.algolia.com/?q=mcp+tokens

---

One thing I've been running into when working with LLM tooling is that a lot of the friction happens even before testing: just preparing context for the model.
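Back on the CI/CD question: one lightweight, non-interactive option is a stdio smoke test that speaks the protocol directly, i.e. spawn the server, send `initialize` and `tools/list` as newline-delimited JSON-RPC, and assert on the result. A rough sketch assuming the stdio transport; the server command, pinned protocol version, and expected tool names are placeholders you'd swap for your own:

```python
import json
import subprocess

def rpc(proc, req_id, method, params):
    """Send one JSON-RPC request over stdio, read one response line."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    proc.stdin.write(json.dumps(msg) + "\n")
    proc.stdin.flush()
    return json.loads(proc.stdout.readline())

def smoke_test(server_cmd, expected_tools):
    """Spawn an MCP server and check that tools/list covers expected_tools."""
    proc = subprocess.Popen(server_cmd, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, text=True)
    try:
        init = rpc(proc, 1, "initialize", {
            "protocolVersion": "2024-11-05",  # pin whatever version you target
            "capabilities": {},
            "clientInfo": {"name": "ci-smoke", "version": "0.1"},
        })
        assert "result" in init, f"initialize failed: {init}"
        # the spec has the client confirm init with a notification (no id)
        proc.stdin.write(json.dumps(
            {"jsonrpc": "2.0", "method": "notifications/initialized"}) + "\n")
        proc.stdin.flush()
        tools = rpc(proc, 2, "tools/list", {})["result"]["tools"]
        names = {t["name"] for t in tools}
        missing = set(expected_tools) - names
        assert not missing, f"missing tools: {missing}"
        return names
    finally:
        proc.terminate()
```

Usage would look like `smoke_test(["python", "my_server.py"], ["search", "fetch"])` with your own command and tool names. It's crude (no timeouts, no Content-Length framing for servers that use it), but it runs headless, which is the part MCP Inspector doesn't cover.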