Dan Hudlow / April 2026
If you are fortunate enough to tour the Fort Worth campus of the United States Bureau of Engraving and Printing, you’ll get a primer on the techniques the BEP uses to make it difficult to replicate U.S. paper currency. But if you ask the staff of the BEP’s on-site museum to name the discipline that comprises these techniques, the answer you’ll get is “counterfeiting deterrence.” Such a cumbersome designation is unfortunate, because it’s hard to fully develop and appreciate an idea for which we lack a nomenclature.
And this matters because these concepts are relevant, often vital, to all kinds of interactions we have in both the physical world and the digital realm. My goal is to establish a taxonomy, demonstrate its crucial importance, and propose some best practices for applying these ideas to software design.
Let’s start with a wieldier name for this discipline: antiforgery.
Security, Cryptography, and Antiforgery
A student of the software industry might, at this point, be confused. Aren’t these practices, in the software world, just called security? Moreover, aren’t the digital equivalents of the BEP’s myriad printing techniques just cryptography?
In fact, in BEP jargon, “security” is synonymous with counterfeiting deterrence; with what we’re calling antiforgery. But outside of currency design “security” has a much broader meaning. A bank has security concerns that go beyond forged currency or documents. Such as, quintessentially, robbery.
And in the software field, a lot of what we call “security” is focused on threats that are more like robbery than counterfeiting and defenses that are more like locksmithing than intaglio printing.
So “security” is too broad. What about “cryptography”? While much of cryptography is certainly devoted to antiforgery, much of it is also devoted to secrecy. But there’s a bigger gap we need to examine, and to do that, let’s go back to currency design.
Technical vs. Practical
The antiforgery techniques implemented by the BEP actually have two distinct goals: one technical, and one practical. The technical goal is to make it difficult to exactly replicate currency. The practical goal is for any attempt at replication to alert even the casual observer.
Technical antiforgery requires exclusive access, knowledge, skills, materials, or technologies which are unavailable to a would-be counterfeiter. Practical antiforgery uses those technical capabilities to produce hard-to-replicate elements that are tangible and unmistakable.
Cryptography, then, can provide technical antiforgery, but it is ultimately up to user interfaces to deliver practical antiforgery.
It’s all too easy, however, for conversations about antiforgery to both start and end with the technical. This makes sense where intractable technical failures exist, as they do for personal checks, caller ID, and email. After all, it is usually incoherent 1 to seek practical antiforgery in the absence of a technical foundation.
But too often, even when an industry spends billions of dollars solving the technical problems, the practical side is neglected. In reality, technical antiforgery without practical antiforgery is also often moot.
Those boffins at the BEP know it’s not feasible to have an expert (or a sophisticated machine) examine every bill in every cash transaction. It’s not enough to make forgery difficult or even (technically) impossible: we also must make authenticity obvious. We need practical antiforgery.
A Study in Failure
Before we apply these ideas to software, I’d like to take a detour through some other domains where practical antiforgery is needed. And where U.S. currency represents success, it’s important we also know what failure looks like.
We don’t have to look far: acute failures exist even in disciplines immediately adjacent 2 to currency design. In 2010, following a spate of frauds achieved with card skimmers affixed to ATMs, noted designer Khoi Vinh observed:
The fact of the matter is, the superfluously futuristic form of these machines is so nonsensical, so utterly impractical and useless that even a quickly grafted foreign appendage like a skimmer is indistinguishable from the native hardware.
In the sixteen years since Vinh wrote this, banks have made massive investments in the technology of securely transmitting banking information. And these efforts recognize the technical failures of magnetic strips and reusable credit card numbers to prevent fraud.
Yet, ATMs at major U.S. banks still look like they were improvised from parts salvaged from a Soviet cosmodrome. The aesthetic is easily replicated by parasitic hardware, with blank panels and mismatched finishes that almost seem to dare enterprising criminals into action.
Conditioning
Gas pumps are even worse. To the parts-bin hardware aesthetic, they add a haphazard array of stickers, often including QR codes. If camouflaging a card skimmer is easy, adding a sticker with a QR code for a phishing site is trivial.
Until recently, 3 you’d even see ostensibly legitimate QR code stickers at gas pumps actually soliciting payment, with predictable results.
This destructive conditioning is insidious and difficult to unwind. Once customers have been trained that a QR code on a sticker is a reasonable payment method, every gas pump is vulnerable to this attack, whether or not that specific pump ever solicited payment via sticker.
Continuity, Context, and Trust
The practical defense for these ATMs and gas pumps is continuity. A simplified physical interface without unnecessary seams, gaps, protrusions, or decals is much more resistant to forgery.
Moreover, when an ATM is bricked into the side of a local bank, continuity between the bank’s architecture and the entirety of the ATM’s physical interface allows the customer to trust the ATM.
And we can generalize these ideas. The opportunity for forgery always exists in some context, whether that’s a bank in someone’s neighborhood or a text message thread on his smartphone. When a context is itself resistant to replication (as an entire bank branch is), we can say that it’s trustworthy.
But here, we must be careful not to destructively condition users to trust a context that’s not actually resistant to forgery.
There’s a trope in spy fiction where a character wakes up in a hospital room with a TV blaring a newscast which, along with the hospital room itself, turns out to be a ruse recreated in a warehouse to elicit information from the “patient.” These scenes are fantastical, but they illustrate a real vulnerability.
More Contexts
Especially where any kind of technology is involved, the trustworthiness of someone’s context isn’t just a matter of “where” she is, but also how she got there. Interactions over the phone drive this point home.
There’s an important difference between an incoming call, which is easily forged, and an outgoing call placed to the number on the back of a credit card. Even if both calls result in a conversation with a reassuring banker, dialing the number provides vital continuity between a trustworthy context (a physical credit card) and that phone conversation.
Here too, major financial institutions often do a bad job of helping their customers understand this distinction, with agents routinely asking for sensitive information on calls that the bank itself placed. And, as with QR-code payment stickers, this destructive conditioning threatens a customer’s interactions with all of their financial institutions.
More Conditioning
Frequently, conscientious organizations implement constructive conditioning. This can be passive via acclimating customers to aesthetics, materials, procedures, or conventions that are hard to replicate (even if the technical reasons why they’re hard to replicate aren’t intuitive).
Constructive conditioning can also be active, such as when a text message with an authentication code includes a warning not to share it with anyone.
Unfortunately, organizations are often inconsistent. In a recent transaction with a national insurance company, my wife was repeatedly asked by employees for a code that was delivered with such a warning.
This hypocrisy results in subversive conditioning. Not only is a customer conditioned to do something unsafe; in the process they’re immunized to constructive conditioning.
Perhaps the most acute form of subversive conditioning I’ve personally experienced was at the hands of a corporate bureaucracy that delivered mandatory “cybersecurity awareness” training in emails which bore all the marks of phishing attacks.
Spatial Exclusivity
Software is especially dependent on constructive conditioning to deliver practical antiforgery. Because software manifests chiefly as pixels on a screen, it can’t exploit intuitions about quality 4 in the same way that a finely crafted physical object can.
Software generally must rely on spatial exclusivity to differentiate trustworthy and forgeable contexts. Nowhere is this distinction more important than in a web browser, where the browser’s own interface elements 5 are inherently trustworthy but the content displayed isn’t.
Indeed, the conditioning of users to understand a web browser’s exclusive control over its address field is the foundation of the entire Internet economy. Building on the technical underpinnings of DNS and TLS, a browser’s address field is widely understood to be authoritative and tamper-proof. Importantly, the continuity the browser provides between the address field and the corresponding web page allows users to trust a web page based on its domain name.
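That trust depends on users anchoring on the right part of the address. As a rough illustration of how a browser might extract the portion worth trusting, here’s a deliberately naive Python sketch; the two-label heuristic and the URLs are purely illustrative, and real browsers instead consult the Public Suffix List, since registrable domains vary (e.g., example.co.uk has three labels):

```python
from urllib.parse import urlsplit

def display_domain(url: str) -> str:
    """Reduce a URL to the portion a user should anchor trust in.

    Naive sketch: keep only the last two labels of the hostname.
    """
    host = urlsplit(url).hostname or ""
    labels = host.split(".")
    return ".".join(labels[-2:]) if len(labels) >= 2 else host

# A deceptive URL padded with trustworthy-looking labels still
# reduces to the attacker's actual registrable domain:
display_domain("https://accounts.google.com.evil.example/login")
# → "evil.example"
```

The point of the exercise: no matter how many reassuring labels an attacker prepends, the part of the address the browser emphasizes is the one only the legitimate owner controls.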
Current versions of Mozilla Firefox and Google Chrome follow similar conventions, helping constructive conditioning transfer between the two browsers.
Historically, Apple’s Safari has taken the most holistic and thoughtful approach to solving this problem. By default, Safari simplifies the address itself to the trustworthy portion, and it keeps all of the browser’s trustworthy zones contiguous. Regrettably, Safari 26 on macOS Tahoe has compromised this advantage by blending the browser’s toolbar with forgeable content. 6
Subtle Subversion
In addition to the passive conditioning that has taught users the relationship between the address field and the content, modern browsers implement active conditioning for websites that don’t use TLS, because in that case the address field should not confer trust on the content area.
But when a website presents the browser with an invalid certificate, something goes horribly wrong: two of the three major browsers implement active conditioning in the address field but also present authoritative information in the content area.
Google Chrome’s behavior is particularly egregious, as it even presents a call-to-action to get the “highest level” of security.
It’s frustrating enough when subversive conditioning is the result of a sprawling bureaucracy’s only-human inability to behave consistently with its stated policies. Here, Google Chrome implements subversive conditioning in a tidy, self-contained package.
Only Apple’s Safari seems to understand the sacred relationship between the address field and the browser content area, opting not to present a warning in the address field when the browser itself takes over the content area. 7
Good Intentions
Another example of destructive conditioning I experienced in corporate life was an “external sender” warning for emails. Because this was implemented on the server side, warnings were necessarily part of the email content, acclimating users to responding to a security-minded call-to-action within a forgeable part of the interface.
This is marginally more sympathetic, because the team at Microsoft that implemented support for those warnings probably didn’t have the prerogative to change the Outlook interface. 8 But it’s still misguided and self-defeating to ask users to trust something so easily forged, no matter how sketchy the rest of an email is.
Tragic Irony
The well-meaning but misguided efforts of those tasked with making businesses more secure are often the worst offenders. Corporate “device management” and “endpoint security” services are strikingly bad at sensing the destructive effects of acclimating users to an ever-shifting suite of shady-looking software appearing on their laptops.
These utilities often have opaque and vaguely malevolent names, and they request, or are deployed with, broad and dangerous levels of system access. Sometimes, they trigger software updates that present users with spontaneous, context-free prompts for system passwords. This is destructive conditioning analogous to calling customers on the phone and asking for sensitive information.
Hostile Takeovers
Smartphones present special challenges because pixel space is so precious and it is typical for a single app to control the entire screen.
For a browser to offer websites the ability to control the whole screen and thereby forge any part of any interface would seem nakedly foolish. Indeed, Apple’s Safari eschews support for the web standard enabling fullscreen websites unless a user has added a site to their home screen as a pseudo-app.
Surprisingly, Chrome on Android does support fullscreen websites, mitigating the risk only with some temporary fine print in a floating component that’s easily lost in an interface transition.
Two Steps Forward, One Step Back
Password managers represent a big leap forward in translating technical antiforgery into practical antiforgery. This is accomplished by the password manager matching credentials to a website before it autofills them, such that even if a forged site fools the user, their password manager won’t transmit their legitimate credentials.
But even 1Password, undeniably a leader in both technical prowess and adoption 9, sometimes succumbs to presenting forgeable interfaces. For example, with the macOS app and Chrome extension both installed, a user can invoke 1Password’s signature password prompt from a forgeable context (a web page), presenting her with a floating window that can be forged inside the content area of the browser.
Personalization
Rather than depending on spatial exclusivity, an interface can be personalized in a way that makes it harder to forge. Personalization is analogous to the “signs” and “countersigns” often used in spy stories: just as an agent knows not to reveal the countersign unless first prompted with the sign, a user is conditioned to distrust an interface unless it is personalized.
The 1Password prompt in the above example actually is personalized, albeit not very effectively. The user’s email address is displayed, but only when hovering over the avatar, and the avatar itself can be customized, but only for individual accounts, not for the family subscription I use.
But even aside from these unforced errors, personalization as an antiforgery technique requires us to exercise a lot of caution. It’s crucial to understand how easy it is for personalization to result in destructive conditioning.
Indeed, in an entirely unauthenticated context (like most login screens on the web), personalization is completely useless. Even if an application first asks a user to assert her identity (an email address, for example), so it can present a personalized interface before asking for her password, nothing keeps an attacker from using the same flow to derive the personalized elements from the user’s identity.
And even when personalization is limited to partially authenticated users (as is the case for 1Password 10), it may be difficult to elicit or produce personalized elements that are sufficiently difficult to obtain (avatars, for example, don’t tend to be very private) and to condition users to rely on those elements to establish trust. 11
Best Practices
As an industry, we clearly still have a long way to go in applying the principles of practical antiforgery to software design. Doing so requires critical thought, attention to detail, and vigilance.
But especially for anyone lost or overwhelmed, I’d like to propose a few best practices:
- In designing any interface or interaction, carefully analyze how you can signal authenticity to your users. For example, render on part of the screen that a forgery cannot.
- Pay special attention when prompting a user to return to your application. For example, consider using push notifications instead of text messages or emails.
- Applying the broader principle of defense-in-depth, don’t allow a system to rely too heavily on one or two signals of authenticity. Instead, give users as many meaningful signals as you can that something is trustworthy. For example, encourage users to pick a custom accent color for your site and make sure that accent color is always visible when they’re logged in.
- Conversely, eliminate as much authenticity noise (signals that do not demonstrate authenticity) as possible. For example, use short, readable URLs that don’t obfuscate your domain name.
- If you condition a user to trust a signal, do everything in your power to make that signal unobtainable to forgeries. For example, constrain the formatting options for user-generated content so it can’t replicate parts of your application’s interface.
- If you condition users to distrust a signal, be vigilant that you never, ever contradict yourself. For example, if you implore users to only enter their passwords on your website, don’t ship a phone app that prompts for a password.
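The practice of constraining user-generated formatting can be sketched with a small allowlist-based sanitizer using Python’s standard-library HTML parser. The allowed tag set is illustrative, and a production system should prefer a vetted sanitization library that also balances open and close tags:

```python
from html import escape
from html.parser import HTMLParser

# Only inert text-formatting tags survive: no images, links,
# containers, or inline styles that could mimic application UI.
ALLOWED = {"b", "i", "em", "strong", "p", "br"}

class CommentSanitizer(HTMLParser):
    """Strip everything from user content except a small allowlist,
    so a comment can never dress itself up as part of the
    application's own interface."""
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []
    def handle_starttag(self, tag, attrs):
        if tag in ALLOWED:
            self.out.append(f"<{tag}>")  # attributes are always dropped
    def handle_endtag(self, tag):
        if tag in ALLOWED:
            self.out.append(f"</{tag}>")
    def handle_data(self, data):
        self.out.append(escape(data))  # text is kept, but escaped

def sanitize(comment_html: str) -> str:
    s = CommentSanitizer()
    s.feed(comment_html)
    s.close()
    return "".join(s.out)

sanitize('<b>Hi!</b><div class="official-banner">Verify your account</div>')
# → "<b>Hi!</b>Verify your account"
```

The design choice worth noting: dropping disallowed tags while keeping their escaped text fails safe, because the forged “banner” degrades into visibly ordinary comment text.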
The Future
There are also some things I’m looking forward to. The adoption of passkeys, in particular, continues the trend started by password managers of translating technical advances into real practical gains.
There’s an opportunity for new ideas and techniques for antiforgery in software that go beyond spatial exclusivity. Apple uses one such technique in the way it presents Apple Pay Cash payments 12 in its Messages app: an iridescent effect that reacts to accelerometer inputs in a way that a texted image could not.
I’d love to see more hardware features used to signal the authenticity of an interaction. This could be screen technology, like Samsung’s new privacy display or the lenticular displays used on the front of the Apple Vision Pro. I can also imagine using exclusive haptics or personalized sound effects. There are many possibilities, especially for companies that can control both hardware and software.
Most of all, I hope to encourage more engineers to consider, discuss, and debate these ideas. It’s especially important that platform and browser vendors take seriously the need for practical antiforgery. And purveyors of security-centric software and services need to take a hard look at where they may be doing more harm than good.
I believe we need to move both the median approach and the state of the art forward. The stakes are too high for us to keep getting this wrong. I hope to write more on this topic in the future, so if you have any thoughts or ideas you’d like me to consider, please reach out.
In Review
1. For example, it doesn’t matter if it’s easy to distinguish between a common material and an unusual one if the unusual material isn’t also difficult to acquire. One could argue that trademarks are an exception here: that Coca-Cola isn’t technically difficult to replicate (i.e., Pepsi tastes like Coke) but it’s practically difficult to replicate because Pepsi can’t legally call a drink “Coca-Cola.” But we’ll limit our discussion to forgery efforts that legal barriers cannot deter, as is necessarily true of counterfeiting currency.
2. In the sense that an ATM includes machinery to authenticate currency.
3. They’ve disappeared from my local pumps, but it’s hard to say if this is representative.
4. It’s not entirely true to say that tangibly high-quality software can’t be produced with exclusive capabilities, but those capabilities aren’t well correlated with legitimacy or monetary resources, and very few businesses of any size or in any market sector possess them.
5. The term of art for these elements is the browser’s “chrome,” but the branding of a certain web browser overloads this word.
6. There’s some tension in the point I’m making here. I just finished saying that web browsers offer an advantageous continuity between the address field and the web content, and now I’m complaining about inadequate contrast between these contexts.
This apparent contradiction reflects the balancing act that a web browser has to perform. On the one hand, the relationship between the address and the content needs to be visually clear and tamper-resistant; in this sense, continuity is needed. On the other hand, the distinction between a context that is always trustworthy and a context we know is sometimes untrustworthy must also be clear; in this sense, contrast is needed.
7. The implication, then, is that it’s google.com, and not Safari, that’s warning the user of a problem. I doubt Apple frets too much about an unsophisticated user coming away with that impression.
8. Plus, plenty of other email clients support Exchange, and the developers might not have anticipated administrators incorporating a call to action.
9. I shared a draft of this piece with 1Password and received this feedback:
“We agree that trust boundaries in security-sensitive interfaces are an important topic. It’s also important to recognize that browser extensions, by design, operate alongside arbitrary and potentially hostile web content. We do not have control over what malicious websites choose to render, and any site can visually imitate interface elements presented within or adjacent to page content. That’s a broader phishing and user-deception risk inherent to the web ecosystem, rather than a product-specific vulnerability.
“That said, phishing resistance has been consciously incorporated into our lock screen design for years. For example, hovering over an account avatar on the authentic lock screen reveals the associated email address, and we display the user’s chosen account icon where applicable. These dynamic elements are not something a generic phishing page would typically know or replicate, and they provide users with authenticity cues within the browser context.
“It’s also worth noting that under our security model, even if a user were deceived into entering a password, a password alone is insufficient for account takeover without the Secret Key.”
I appreciate the effort they gave to a response (and, separately, their help in confirming that the avatar on a family account cannot be customized), but don’t find the actual answers very satisfying. Please also make note of my disclosures concerning 1Password.
10. 1Password is actually unusually well-suited to personalization, because of its key-and-password architecture.
11. I suppose the Platonic ideal here would be to condition users to have a Pavlovian response to the personalized prompt, such that they can’t even bring the password to mind without it.
12. One has to think the conceptual proximity to currency design inspired this unusual burst of creativity.
Acknowledgements
I owe a lot to Khoi Vinh for planting the seeds and giving me sixteen years to ponder this space.
I’d also like to thank Peter “meem” Memishian and Henry Andrews for their incisive reviews that significantly shaped the content of this piece.
Disclosures
I’ve used iOS and Safari for as long as they’ve existed (which necessarily means I’m an Apple customer), and have also been a 1Password customer for many years. I do not use Android, Chrome, or any Microsoft email products, but I am a paying customer of other Google and Microsoft products.
Weirdly enough, during the course of writing this, I gave 1Password a chance to comment on a draft, and then subsequently and coincidentally applied for a job at 1Password, interviewed, and was ultimately rejected. This created a few ethical hazards:
- As long as I thought the job was still on the table, I could appear to be incentivized to flatter 1Password, soften my critique of their product, or postpone publication. (Although only if I had a low opinion of their integrity.)
- Now that I’ve been rejected, I could appear to be incentivized to treat 1Password vindictively.
- Lastly, during the period that I was both in the running for the job and had already presented 1Password with a draft, it could have given the appearance that I was offering 1Password an opportunity to influence what I wrote if they hired me.
Ideally, I would have published before I applied for the job, which would have simplified this disclosure, but I just wasn’t ready. I’ve tried to mitigate these conflicts in several ways:
- I have not adjusted the wording with which I introduced 1Password since before I had any intention of applying for a job.
- I was careful not to mention the draft to anyone I interacted with regarding the job, and I think it very unlikely anyone at 1Password would have been aware of both my communication about the draft and my job application.
- I was resolved that I would publish this piece before accepting an offer at 1Password, and give 1Password an opportunity to rescind the offer if they chose to.
It’s also relevant that all my interactions with 1Password have been pleasant and I have no reason to believe they would be peeved or threatened by anything I’ve written. But there’s still a lot here that you’ll have to take my word for, which is why I consider this disclosure to be crucial.