ENOSUCHBLOG
Programming, philosophy, pedaling.
Apr 11, 2026 Tags: oss, security
I spend some of my hobby time doing vulnerability triage on open source projects. As part of that, I see (and filter through) a lot of nonsense1.
Nonsense is not unique to vulnerability triage: lawyers deal with it too. To cope with it in the legal world, they use brocards — concise aphorisms that capture the essence of a legal principle. Any given brocard is not universally true2, but provides a standard by which a claim can quickly be evaluated for legitimacy.
Vulnerability triage has its own brocards, but I couldn’t find a comprehensive list of them anywhere. This is my attempt to compile such a list.
No vulnerability report without a threat model
Alex Gaynor explains this one well in Motion to Dismiss for Failure to State a Vulnerability: a vulnerability report can be safely dismissed if it lacks a threat model, or if the threat model presented is incoherent.
Examples include:
- A report for a Python API that raises an exception in some undocumented or surprising cases3, but doesn’t explain how an attacker could exploit that behavior to cause harm.
- A report for a hang or stall in a local developer tool. Hangs are undesirable behavior, but the opportunity for harm from one is negligible in a developer tooling context: the developer can always just kill the process.
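To make the first example concrete, here is a hedged sketch (the function name and behavior are hypothetical, not from any real report): an API that raises an undocumented exception on surprising input. Without a threat model explaining how an attacker reaches this code path and causes harm, “it raises an exception” is at most a bug report, not a vulnerability report.

```python
def parse_timeout(value: str) -> float:
    # Raises ValueError on input like "ten" -- surprising if undocumented,
    # but raising an exception is not, by itself, a security harm.
    timeout = float(value)
    if timeout < 0:
        raise ValueError("timeout must be non-negative")
    return timeout

print(parse_timeout("1.5"))
```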
No exploit from the heavens
Closely related are vulnerability reports that describe a severe end state, but only under attacker capability assumptions that exceed what the vulnerability itself would grant: to exploit the vulnerability, the attacker would already need a capability at least as powerful as the one it provides.
Examples include:
- A report for content manipulation on a web service, where the manipulation can only occur if the attacker is an active meddler in the middle. No vulnerability exists here, because an active MitM could send entirely arbitrary content, and would not have to limit themselves to manipulating pre-existing content.
- A report for code execution via memory corruption in CPython, where the memory corruption occurs by directly manipulating CPython’s object internals at runtime (via ctypes, for example). No vulnerability exists here, because the attacker is already running arbitrary code to perform the corruption.
Raymond Chen has a 2006 post that covers the same concept. Thanks to Geoffrey Thomas for sharing that with me!
No vulnerability outside of usage
A vulnerability report can be safely dismissed if it describes a behavior that could occur, but does not in fact occur in actual usage of the software.
Examples include:
- A report for a vulnerability in a private API within a library, where the only (private) usage of that API is not vulnerable.

For example, a C codebase might have a function that takes a `char *` and exhibits a buffer overflow with strings longer than 100 bytes, but a codebase where all calls to that function are statically assertable to not exceed that size is not vulnerable.

- Similarly, a report for a vulnerability in an API (public or private), where the vulnerability can only occur by violating an invariant that the programmer is responsible for maintaining.
For example, an API might have a precondition that an input string is valid UTF-8, and an input that violates this precondition may cause an uncontrolled program abort. However, a fuzzer that discovers this behavior has not found a vulnerability, because in a real program the programmer is responsible for ensuring that the “building blocks” of the API are composed together correctly.
It’s worth noting some nuance here: because the programmer is responsible for maintaining the invariant, there is a potentially legitimate vulnerability when usage of the API violates the invariant. By analogy: `free(3)` is not considered vulnerable to a double free, but a program that calls `free(3)` on an already freed pointer is considered vulnerable to a double free.

- A re-report of an upstream’s vulnerability, where the upstream’s vulnerable behavior is not reachable in the downstream.
For example, CPython is shipped with a build of OpenSSL, and OpenSSL regularly has security advisories. However, CPython’s exposure to OpenSSL is mostly limited to SSL/TLS and a subset of the X.509 APIs, and therefore vulnerabilities outside of these surfaces do not constitute a reasonable re-report to CPython.
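The invariant point above can be sketched in Python (the function and its precondition are hypothetical, for illustration):

```python
def decode_record(raw: bytes) -> str:
    """Decode a record.

    Precondition (caller's responsibility): `raw` is valid UTF-8.
    """
    # A fuzzer feeding b"\xff" here "finds" an uncontrolled
    # UnicodeDecodeError, but that is not a vulnerability in this
    # function. A real caller that passes unvalidated attacker input,
    # however, may itself be vulnerable.
    return raw.decode("utf-8")

print(decode_record(b"hello"))
```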
No vulnerability from standard behavior
Perhaps my most controversial (and personal) brocard: a vulnerability report can be safely dismissed if the behavior described is a direct consequence of the software’s correct adherence to a standard or specification. In these instances the vulnerability (if one exists) is present within the standard itself, and not the implementation.
Examples include:
- Behavior stemming from “robustness” requirements in standards. Many RFCs and similar standards inadvisably follow the (poorly named) robustness principle, and allow interactions that are not well-defined (often by allowing the implementer to make a judgement call about the intended semantics of the interaction). For example, RFC 7230 has this under “Message Parsing Robustness” (§ 3.5):
In the interest of robustness, a server that is expecting to receive and parse a request-line SHOULD ignore at least one empty line (CRLF) received prior to the request-line.
Although the line terminator for the start-line and header fields is the sequence CRLF, a recipient MAY recognize a single LF as a line terminator and ignore any preceding CR.
- Behavior stemming from cryptographic requirements that are insecure in isolation but secure by construction. The “classic” example of this is (typically automated) reports of MD5 usage, where that usage is solely in constructions where MD5 is not actually broken (i.e. HMAC-MD5). There’s a strong argument to be made that a better hash function should be used where permitted, but the presence of MD5 in an HMAC construction is not itself a vulnerability.
The nuance with this is that an implementation that chooses to be more strict than the standard requires should be considered vulnerable if the intended strictness is violated.
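The HMAC-MD5 case looks like this in practice (key and message are hypothetical placeholders): MD5’s collision attacks do not break the HMAC construction, so an automated “MD5 detected” finding against this usage is not, by itself, a vulnerability, even though a stronger hash is preferable where the protocol permits it.

```python
import hashlib
import hmac

# Hypothetical shared secret and message, for illustration only.
key = b"shared-secret"
msg = b"message"

# HMAC-MD5: still unbroken as a MAC despite MD5's collision weakness.
tag = hmac.new(key, msg, hashlib.md5).hexdigest()

print(len(tag))  # 32 hex characters: a 128-bit tag
```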
No vulnerability from documented behavior
(Many thanks to Hugo van Kemenade for this one!)
Similar to the above: a vulnerability report can be safely dismissed if the behavior is documented to occur, particularly when the documentation explicitly describes the security implications of the behavior or specific contexts in which the software is unsafe to use.
Examples include:
- Python’s `http.server` is explicitly documented as not suitable for production use, as it only implements basic security checks.

- Python’s `pickle` is explicitly documented as not secure, full stop.
Like with the previous brocard, the nuance here is that a downstream usage that violates the documented guidelines for use may be considered vulnerable. In other words: a report against `pickle` itself (for e.g. enabling code execution) may be safely dismissed, but a report against a downstream usage of `pickle` that ignores the documented warnings may be considered valid.
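A minimal sketch of why the documentation says what it says: unpickling invokes callables chosen by whoever produced the payload (via `__reduce__`), so any code that unpickles untrusted data executes attacker-chosen calls. This is documented pickle behavior, not a pickle vulnerability; the fault lies with downstream code that feeds pickle untrusted input.

```python
import pickle

class Evil:
    def __reduce__(self):
        # The payload will call print(...) when loaded; a real attacker
        # would pick a more interesting callable.
        return (print, ("code executed during unpickling",))

payload = pickle.dumps(Evil())
pickle.loads(payload)  # prints, despite "just" deserializing
```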
No cure worse than the disease
The maintainer should reject (or contest) vulnerability reports whose consequences are worse than the consequences of the vulnerability itself.
The classic example of this is ReDoS “vulnerabilities,” particularly in contexts where the impact of the “denial of service” is negligible4. These reports typically involve nontrivial amounts of maintainer time and effort to triage, followed by nontrivial amounts of downstream time and effort to remediate, effectively resulting in a denial of service on the community itself.
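For concreteness, a hypothetical ReDoS pattern of the kind these reports target: the nested quantifiers force Python’s backtracking regex engine into exponential work on near-matching input. Whether this constitutes a vulnerability depends entirely on who controls the input and whether the stall matters in context.

```python
import re
import time

# Classic catastrophic-backtracking shape: (a+)+ anchored at both ends.
pattern = re.compile(r"^(a+)+$")

start = time.perf_counter()
# The trailing "b" guarantees failure, but only after the engine tries
# exponentially many ways to partition the run of "a"s.
pattern.match("a" * 22 + "b")
print(f"took {time.perf_counter() - start:.2f}s")
```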
CVE-2026-4539 is a recent case of this: an anonymous reporter filed a CVE against pygments with VulDB, seemingly bypassing any maintainer or community review. This report was not accompanied by a fixed version (because it’s junk, and ignores pygments’ own security policy), but lit up tens of thousands of downstream dependencies with a “medium” severity vulnerability, causing significant disruption.
The status quo in 2026 is that the CVE ecosystem unreasonably places the onus on maintainers to contest this kind of spam when adversarial reporters bypass them entirely.
The report is neither necessary nor sufficient
The presence of a vulnerability report (and a CVE or other identifier for that report) is neither necessary nor sufficient for a vulnerability to exist.
This cuts both ways: many (perhaps the majority of) vulnerabilities are never “formally” reported, and many formal reports do not actually describe meaningful vulnerabilities (per above). Consequently, no unvalidated assumption should ever be made about the relationship between the presence of a report and the presence of a vulnerability.
This is another unfortunate status quo, one that stems from (seemingly intentional) strategic ambiguity in the vulnerability reporting ecosystem: partners like MITRE benefit simultaneously from being perceived as a high-quality source of vulnerability information, while also being able to disclaim any responsibility for communicating anything other than a stable identifier for a claim of vulnerability.