ChatGPT's CSS may hide model info (clip-path, opacity:0, user-select:none) [pdf]
This is a reproducible technical report on how ChatGPT’s UI may hide backend model details via CSS. The DOM includes model strings like GPT-5-2, but CSS properties like `clip-path`, `opacity: 0`, and `user-select: none` prevent users from seeing or selecting them. This may be unintentional UX design, or it may be systematic obfuscation. Either way, I believe it deserves public discussion.
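For anyone who wants to reproduce this, here is a minimal sketch of the kind of devtools check the report describes. It flags elements whose computed style matches any of the three properties and whose text looks like a model string; the `gpt-` substring filter is my assumption about the identifier format, not a confirmed selector.

```javascript
// Sketch: paste into the browser devtools console on a ChatGPT page.
// Logs elements that are hidden per the report's three CSS properties
// but whose subtree still contains text matching "gpt-". Because
// textContent includes descendants, ancestors of a match are logged too.
for (const el of document.querySelectorAll('*')) {
  const cs = getComputedStyle(el);
  const hidden =
    cs.opacity === '0' ||
    cs.userSelect === 'none' ||
    (cs.clipPath !== 'none' && cs.clipPath !== '');
  const text = el.textContent.trim();
  if (hidden && /gpt-/i.test(text)) {
    console.log(el, JSON.stringify(text.slice(0, 80)));
  }
}
```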
Would you rather they cut out whatever information they're trying to "hide" from the underlying HTML so that it never hits your browser and you have no chance of seeing it?
Interesting question—thank you for raising it!
I wouldn’t mind if some information were omitted entirely, or even hidden by default, as long as the approach is transparent and users are given the option to reveal it if they want to.
What feels concerning here is that model identifiers (like GPT-5.2) are included in the DOM but hidden through CSS properties like `clip-path`, `opacity: 0`, and `user-select: none`. This doesn’t feel like typical UX simplification; it looks more like deliberate obfuscation.
If the goal were simplicity, a toggle or clearly labeled section would work just as well, without undermining trust. I think users generally appreciate being informed and offered choices.
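To illustrate how cheap a reveal option would be, here is a sketch of a toggle that simply injects an overriding stylesheet. The blanket `*` selector is an assumption for illustration only, since the real class names on the model-info elements aren't known; a real toggle would target just those elements.

```javascript
// Sketch of a "reveal hidden info" toggle: inject a stylesheet that
// undoes the three hiding properties. Overriding every element like
// this is a deliberate simplification and would disturb other UI
// styling; it only demonstrates the mechanism.
const style = document.createElement('style');
style.textContent = `
  * {
    clip-path: none !important;
    opacity: 1 !important;
    user-select: text !important;
  }
`;
document.head.appendChild(style);
```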
From a regulatory standpoint, this kind of design could also raise questions under frameworks like the GDPR and the EU AI Act, which emphasize transparency, informed consent, and the right for users to understand how automated systems operate. Intentionally hiding relevant model information in the DOM without clear disclosure could be seen as inconsistent with those principles.