Brocards for vulnerability triage


ENOSUCHBLOG

Programming, philosophy, pedaling.


Apr 11, 2026 · Tags: oss, security


I spend some of my hobby time doing vulnerability triage on open source projects. As part of that, I see (and filter through) a lot of nonsense1.

Nonsense is not unique to vulnerability triage: lawyers deal with it too. To cope with it in the legal world, they use brocards — concise aphorisms that capture the essence of a legal principle. Any given brocard is not universally true2, but provides a standard by which a claim can quickly be evaluated for legitimacy.

Vulnerability triage has its own brocards, but I couldn’t find a comprehensive list of them anywhere. This is my attempt to compile such a list.

No vulnerability report without a threat model

Alex Gaynor explains this one well in Motion to Dismiss for Failure to State a Vulnerability: a vulnerability report can be safely dismissed if it lacks a threat model, or if the threat model presented is incoherent.

Examples include:

No exploit from the heavens

Closely related are vulnerability reports that describe a severe end state, but only under attacker capability assumptions more powerful than what the vulnerability itself grants. In effect, to mount an attack that exploits the vulnerability, the attacker would already need a capability equal to or greater than the one the vulnerability provides.

Examples include:

Raymond Chen has a 2006 post that covers the same concept. Thanks to Geoffrey Thomas for sharing it with me!

No vulnerability outside of usage

A vulnerability report can be safely dismissed if it describes a behavior that could occur, but does not in fact occur in actual usage of the software.

Examples include:

No vulnerability from standard behavior

Perhaps my most controversial (and personal) brocard: a vulnerability report can be safely dismissed if the behavior described is a direct consequence of the software’s correct adherence to a standard or specification. In these instances the vulnerability (if one exists) is present within the standard itself, and not the implementation.

Examples include:

The nuance here is that an implementation that chooses to be stricter than the standard requires should be considered vulnerable if that intended strictness can be violated.
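As a sketch of that nuance (the validator and its policy here are entirely hypothetical): suppose a project documents that it accepts only lowercase ASCII hostnames, deliberately stricter than the relevant specifications. Rejecting spec-legal input is then not a vulnerability, but a bypass of the promised strictness would be.

```python
import re

# Hypothetical policy: only lowercase ASCII labels, stricter than the
# hostname grammar the relevant specs allow. The extra strictness is a
# security promise, so a bypass of it would be a real vulnerability
# even though the spec permits the rejected inputs.
_LABEL = r"[a-z0-9]([a-z0-9-]*[a-z0-9])?"
STRICT_HOSTNAME = re.compile(rf"^{_LABEL}(\.{_LABEL})*$")

def is_allowed(host: str) -> bool:
    return STRICT_HOSTNAME.fullmatch(host) is not None

assert is_allowed("example.com")
assert not is_allowed("EXAMPLE.com")   # stricter than the spec: intended
assert not is_allowed("example.com.")  # trailing-dot bypass must also fail
```

A report that the validator rejects uppercase hostnames would be dismissible; a report that the trailing-dot form slips through would not.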

No vulnerability from documented behavior

(Many thanks to Hugo van Kemenade for this one!)

Similar to the above: a vulnerability report can be safely dismissed if the behavior is documented to occur, particularly when the documentation explicitly describes the security implications of the behavior or specific contexts in which the software is unsafe to use.

Examples include:

As with the previous brocard, the nuance here is that a downstream usage that violates the documented guidelines for use may be considered vulnerable. In other words: a report against pickle itself (for e.g. enabling code execution) may be safely dismissed, but a report against a downstream usage of pickle that ignores the documented warnings may be considered valid.
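To make the pickle case concrete, here is a minimal sketch of the documented behavior (the class name and payload are illustrative): pickle's reduce protocol lets a payload name a callable to invoke at load time, which is exactly why the documentation warns against unpickling untrusted data.

```python
import pickle

class Demo:
    # pickle's documented __reduce__ protocol returns a callable plus
    # arguments, which pickle.loads invokes at load time. This is by
    # design: the pickle documentation explicitly warns against
    # loading data from untrusted sources.
    def __reduce__(self):
        return (eval, ("21 * 2",))

payload = pickle.dumps(Demo())
result = pickle.loads(payload)  # evaluates "21 * 2" at load time
print(result)  # → 42
```

A report saying "pickle.loads runs code" is dismissible under this brocard; a report saying "this service feeds attacker-controlled bytes to pickle.loads" is not.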

No cure worse than the disease

The maintainer should reject (or contest) vulnerability reports whose consequences are worse than the consequences of the vulnerability itself.

The classic example of this is ReDoS “vulnerabilities,” particularly in contexts where the impact of the “denial of service” is negligible4. These reports typically involve nontrivial amounts of maintainer time and effort to triage, followed by nontrivial amounts of downstream time and effort to remediate, effectively resulting in a denial of service on the community itself.

CVE-2026-4539 is a recent case of this: an anonymous reporter filed a CVE against pygments with VulDB, seemingly bypassing any maintainer or community review. This report was not accompanied by a fixed version (because it’s junk, and ignores pygments’ own security policy), but lit up tens of thousands of downstream dependencies with a “medium” severity vulnerability, causing significant disruption.

The status quo in 2026 is that the CVE ecosystem unreasonably places the onus on maintainers to contest this kind of spam when adversarial reporters bypass them entirely.
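For concreteness, the core of most ReDoS reports is a regular expression whose nested quantifiers backtrack catastrophically; a minimal sketch (the pattern and input are illustrative, not taken from any real report):

```python
import re
import time

# Classic catastrophic-backtracking pattern: the nested quantifiers
# let the engine try exponentially many ways to split the input
# before concluding that no match exists.
pattern = re.compile(r"^(a+)+$")

# A non-matching input forces the engine to exhaust every split;
# each extra "a" roughly doubles the work.
malicious = "a" * 20 + "b"

start = time.perf_counter()
assert pattern.match(malicious) is None
print(f"rejected after {time.perf_counter() - start:.3f}s")
```

Whether this is worth reporting depends entirely on context: if the pattern never sees attacker-controlled input, or the worst case is a few wasted milliseconds, the triage and remediation cost can easily exceed the harm.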

The report is neither necessary nor sufficient

The presence of a vulnerability report (and a CVE or other identifier for that report) is neither necessary nor sufficient for a vulnerability to exist.

This cuts both ways: many (perhaps the majority of) vulnerabilities are never “formally” reported, and many formal reports do not actually describe meaningful vulnerabilities (per above). Consequently, no unvalidated assumption should ever be made about the relationship between the presence of a report and the presence of a vulnerability.

This is another unfortunate status quo, one that stems from (seemingly intentional) strategic ambiguity in the vulnerability reporting ecosystem: partners like MITRE benefit simultaneously from being perceived as a high-quality source of vulnerability information, while also being able to disclaim any responsibility for communicating anything other than a stable identifier for a claim of vulnerability.


