Proposal: Add "No Screen Sharing" Flag for Secure Messaging Apps
End-to-end encryption is pointless if you can be tricked into screen sharing. I’d like to see OS-level APIs that let app developers set a "no screen sharing" flag which automatically blanks/obscures the app's contents during screen capture or remote control. The flag would be set per app and could not be disabled by users—so secure messaging apps (e.g., Signal) could opt in, preventing accidental leaks via screen sharing.
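To make the idea concrete, here's a minimal sketch of how an OS compositor might honor such a flag—blanking flagged windows only in the capture path while the physical display stays untouched. Everything here (`Window`, `Compositor`, `no_capture`) is a hypothetical name invented for illustration, not a real OS API:

```python
# Hypothetical sketch: an OS compositor honoring a per-app "no screen sharing"
# flag. All names (Window, Compositor, no_capture) are invented for illustration.

from dataclasses import dataclass, field


@dataclass
class Window:
    app: str
    contents: str
    no_capture: bool = False  # set by the app developer, not the user


@dataclass
class Compositor:
    windows: list = field(default_factory=list)

    def render_display(self):
        # The physical screen always shows the real contents.
        return [w.contents for w in self.windows]

    def render_capture(self):
        # Screen-share/recording path: flagged windows are blanked.
        return ["<blanked>" if w.no_capture else w.contents
                for w in self.windows]


comp = Compositor([
    Window("browser", "cat pictures"),
    Window("signal", "top-secret chat", no_capture=True),
])
print(comp.render_display())  # device shows everything
print(comp.render_capture())  # shared stream hides the flagged app
```

The key property is that the blanking happens below the app layer, in the compositor, so no app on the capturing side can bypass it.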
Questions for HN:
- What problems do you see with the approach?
- Any better approaches?

I strongly object to the idea that this should not be under user control. Far too many companies and governments seek to control what users do with data, information, and devices that are supposed to be under the user's control. If you don't trust someone to handle secure information securely, perhaps you shouldn't be sending them secure information.
If you're saying you want this to be optional, perhaps even enabled by default to protect grandma from doing something careless, then I would support that. But it needs to be optional. The mindset that a company can dictate every potential use case to end users is foolishness; it doesn't slow down hackers, it just irritates normal users who need those features. Apps can already flag whether screenshots are allowed. In normal situations—banking apps, secure messaging apps, one-time-view photo apps—this is all quite useful, but it shouldn't be an enforced requirement that you then have to hack your phone to override. It should be a default that's enabled and that you can disable. If you want the sending party to know you've disabled that feature, that's fine as well; it's simply more information, and the sender can decide what they want to do. The idea of completely removing control, as if you know better, needs to stop.
Things will not get better with technology, or with literally anything else, if we cannot require people to be responsible for their actions. Sensible defaults are great: a sensible default to have this on is fine, and a system that notifies the other parties in a communication whether it's on or off is fine. Removing control from the user is wrong.
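The compromise being argued for—flag on by default, user may turn it off, the other party is informed either way—can be sketched as a tiny state model. All names below are made up for illustration:

```python
# Hypothetical sketch of the proposed compromise: the no-capture flag defaults
# to on, the user may override it, and the remote party is informed either way.

from dataclasses import dataclass


@dataclass
class CaptureProtection:
    enabled: bool = True  # sensible default: protection on

    def user_disable(self):
        # The user, not the app vendor, makes the final call.
        self.enabled = False

    def status_for_peer(self):
        # The sender learns the setting and can decide what to send.
        return "protected" if self.enabled else "capture-allowed"


prot = CaptureProtection()
print(prot.status_for_peer())  # "protected" by default
prot.user_disable()
print(prot.status_for_peer())  # peer now sees "capture-allowed"
```

The point of `status_for_peer` is that disabling protection isn't hidden—it just shifts the decision about what to send back to the sender.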
I agree that having options is crucial, but we also need to consider how these options fit together to ensure system-wide safety by design. To actually mitigate screen-sharing risks, you need a combined hardware/OS and secure communication app where the “No Screen Sharing” feature can’t be turned off—and that setup has to be widely used for it to matter at scale.
Hardware/OS choice is the first step. Some platforms, like iOS, are already more sandboxed than, say, Android. I personally use iOS and feel comfortable trusting Apple’s approach, even though zero-day exploits remain a possibility. For my server needs, I use Linux and appreciate full control and root access—but that’s a separate use case.
App choice is the second step. Secure messengers like Signal prioritize privacy as a core feature, while many other messaging apps don’t. If a few high-profile apps enforced “No Screen Sharing,” the people who genuinely need or want to share their screen could always switch to a different app. So in practice, this feature wouldn’t prevent screen sharing entirely; it would just block it in contexts where security is paramount.
All of which is to say: optionality still exists—you can choose a less-restrictive OS or a different communication app. But for those who opt into a more locked-down environment, having a secure messenger that outright prevents screen sharing can make all the difference in avoiding accidental leaks or social engineering attacks.
This seems like a very close-minded response from someone with the hubris of not really understanding leaks and attacks. None of this prevents accidental leaks or social engineering attacks. All it does is interfere with people who want more choice in what they do, because it is impossible for you to know everyone's situation or use case. You are only preventing screen sharing; that's it, nothing else, and you cannot claim it does anything beyond that. So, like every copy-protection and DRM scheme ever devised, it only inconveniences legitimate users doing legitimate things and doesn't slow down or prevent any bad actor from doing bad things.
When you begin to view your users as the enemy by instituting manipulation and control over their use, you set yourself up for a hostile relationship with your user base. When your company is big enough and you've locked in your market, that does work for a while.
Thanks for sharing your perspective—I really appreciate it. In hindsight, I should have given more weight to the "Any better approaches?" part of the original post. I fully acknowledge that my proposal involves trade-offs and isn't a one-size-fits-all solution.
That said, I’d like to explore how we might achieve security by design without sacrificing user experience. First, let’s agree on one core principle: if a user decides to share their screen, the OS should treat that choice uniformly across all apps—meaning it must always share the entire screen.
Given that, let's think about other ideas to address the risk scenario: a user might unwittingly share their screen with an adversary and then start a top-secret chat, accidentally leaking sensitive information. Ideally, users handling top-secret data would be exceptionally cautious, but in practice, mistakes happen.
Here's an alternative approach: a "Secret Chat Room" feature that would rely on OS checks explicitly authorized by the user. Think of it as akin to physical secret meeting rooms with soundproof walls and Faraday cages—places where sensitive conversations are truly isolated. When a user enters such a room, they'd see a prompt like:
"You are now entering a Secret Chat Room. This room is designed to ensure that no eavesdropping (such as keystroke logging, microphone tapping, or unauthorized screen sharing) is occurring. To proceed, please authorize the OS to perform an integrity check. You'll be allowed in only if this check is successful."

To preserve privacy and avoid penalizing users with poor security practices, the OS would return only one bit of information:

- 1: The user authorized the check AND it succeeded.
- 0: Either the user did not authorize the check OR the check failed.

This binary signal prevents the app from knowing whether a failure was due to a deliberate user choice or a technical issue, thus providing plausible deniability. What do you think about this approach? I'd love to hear your thoughts on refining it further to balance robust security with a seamless user experience.
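The one-bit signal described above is simple enough to state as code. This is a sketch of the idea only—`secret_room_signal` is a hypothetical function, not any real OS API:

```python
# Hypothetical sketch of the one-bit integrity signal: the app cannot
# distinguish "user declined the check" from "check failed", which is
# exactly what gives the user plausible deniability.

def secret_room_signal(user_authorized: bool, check_passed: bool) -> int:
    # 1 only when the user authorized the check AND it succeeded;
    # every other combination collapses to 0.
    return 1 if (user_authorized and check_passed) else 0


# Three different situations, two indistinguishable answers:
print(secret_room_signal(user_authorized=True, check_passed=True))   # 1
print(secret_room_signal(user_authorized=False, check_passed=True))  # 0
print(secret_room_signal(user_authorized=True, check_passed=False))  # 0
```

Because declining and failing both map to 0, an app (or the party on the other end) learns nothing about which of the two occurred.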
I don't think it is appropriate (or feasible) to let an app developer decide how exactly and when _I_ share _MY_ screen on _MY_ phone.