Why AI Sucks At Front End · April 12, 2026


AI is a sycophantic dev wannabe that skimmed a shitload of tutorials. You get the results of a probabilistic guess based on patterns it saw during training. What did it train on? Ancient solutions, unoriginal UI patterns, and watered down junk.

I'm about to rant about how this is both useful and lame.

The Good #

AI loves the boring stuff. It thrives on mediocrity.

If you want some gloriously unoriginal UI, it has your back 😜

  • Scaffolding: Generic regurgitation of patterns it's seen, done.
  • Tokens: Migrating tokens or mapping them out? It eats this tedious garbage for breakfast.
  • Outlining features: Generic lists ✅
  • Lying to your face: Confident hot garbage on a silver platter. It'll hand you a snippet, dust off its digital hands, and tell you it finished the work. It did not finish the work.

Aka: If it's a well-worn pattern, AI is there to help you copy-paste faster. Which, for a lot of programming, is totally the case. I'm genuinely finding a lot of helpful stuff in this department.

The Bad #

Pixel perfection & bespoke solutions… what are those?

The exact second you step off the paved road of unoriginality, it faceplants.

  • Bespoke solutions & custom interactions: Try asking it for some scroll-driven animations or custom micro-interactions. It will invent a CSS syntax that hasn't existed since IE6.
  • Layout & spacing: Predicting intrinsic/extrinsic page properties? It's already bad at math; how could it possibly get these ridiculously dynamic calculations right? Spacing? Ha. It seems reasonable to expect symmetry, but it's terrible at the math.
  • Combined states: Pinpointing where to edit a complex component state makes it cry.
  • Accessibility: It throws aria-hidden="true" at a wall and hopes it sticks.
  • Performance: It will give you the heaviest, jankiest solution unless you explicitly ask for a lightweight, performance-minded approach.
  • Tests: Writing good tests? No. Writing a lot of tests? Yes.
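To make the layout point concrete, here's the kind of intrinsic sizing a single modern CSS declaration can encode. A minimal sketch (the class name is made up):

```css
/* Intrinsic, self-wrapping grid: columns are at least 20rem wide,
   but never overflow a narrower container. The browser resolves
   this per viewport, per container, per content — exactly the kind
   of dynamic calculation an LLM can't "run" in its head. */
.card-grid {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(min(100%, 20rem), 1fr));
  gap: 1rem;
}
```

One line of intrinsic layout replaces a pile of media queries, and the model has no way to simulate how it resolves.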

And the absolute best part? The more complex the component gets, the slower and dumber the front-end help becomes. Incredible how it can one-shot a totally decent front-end design or component, then choke on a follow-up request. Speaks to what it's good at.

Why? #

1. It trained on ancient garbage #

It lacks modern training data.

It has an excessive reliance on standard templates because that's what the internet is full of. Modern CSS? It's barely aware of it.
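Scroll-driven animations are a good litmus test: the syntax is recent enough that models often invent something older or imaginary instead. A sketch of the actual modern form (selector and keyframe names are illustrative):

```css
/* Fade a card in as it enters the viewport — no JavaScript,
   no scroll listeners. animation-timeline: view() drives the
   animation's progress from the element's visibility. */
@keyframes fade-in {
  from { opacity: 0; translate: 0 2rem; }
  to   { opacity: 1; translate: 0 0; }
}

.card {
  animation: fade-in linear both;
  animation-timeline: view();
  animation-range: entry 0% entry 80%;
}
```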

2. It literally cannot see #

It's an LLM, not a rendering engine!

It's notoriously bad at math, and throwing screenshots at it means very little. It's stabbing in the dark.

This leads to the classic UI interaction:

AI: "I'm done! Here is your perfectly crafted UI."
Me: "There's a gaping hole where the icon should be, fix the missing icon."
AI: "You're absolutely right. Let me fix that for you."

3. It doesn't know WHY we do things #

It doesn't understand the "why" behind our architectural decisions.

SDD, BDD, or state machines might help guide it, but the models weren't exactly trained on those paired with stellar solutions.

We're asking a giant text-predictor to make new connections on the fly. We can get it there, but there's so much to consider we have to spell it out before it starts making the connections we want.

4. Zero environmental control #

It doesn't control where the code lives.

It can write annoyingly amazing Rust, TypeScript or Python, but those have the distinct advantage of a predictable (pinnable!!! like v14.4) environment the code executes in.

That's not how HTML or CSS work. There's no pinning the browser type, browser version, browser window size, the user's input type (keyboard, mouse, touch, voice), their user preferences, etc. That's complex end-environment shit.

The list goes on too, for scenarios, contexts and variables the rendering engine juggles before resolving the final output. The LLM doesn't control these, so it ignores them until you make them relevant.

Logical properties are another example: you have to explicitly prompt for that kind of CSS. It should be table-stakes LLM output, but it isn't. And even when you ask for it, or provide documentation that spells it out, it's not guaranteed to land.
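For reference, here's the difference being asked for, as a hedged sketch (the class name is made up); the physical version is what models default to:

```css
/* Physical properties: hard-coded to left-to-right, top-to-bottom. */
.note {
  margin-left: 1rem;
  border-top: 2px solid;
  text-align: left;
}

/* Logical equivalents: follow the document's writing mode and
   direction, so RTL and vertical scripts work for free. */
.note {
  margin-inline-start: 1rem;
  border-block-start: 2px solid;
  text-align: start;
}
```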

The place where HTML and CSS have to render is chaotic. It's a browser, with a million different versions, a million different ways to render, a million different ways to interact with it, and a million different ways to break it.

It's a moving target, and LLMs are terrible at moving targets.

Damnit humans #

We're an LLM combinatorial explosion.

We're wildly unpredictable targets. We change our minds, we switch viewports, we change theme preferences, we change devices, we change browsers, we change browser versions, we switch inputs. We change our everything.

We're not a static target. We're not a pattern that can be learned.

There is a "human mainstream" of behaviors, preferences, and expectations where LLMs can be genuinely helpful; but our "full potential" matrix will be exploding LLM output patterns for a long time to come. IMO at least.

unless we Borg.

23 comments #

I'd argue it is impressive that some of the generated front-ends are better than half-bad, considering these factors. Also, models have had visual capabilities for a while now. What models definitely lack is the "living process", how and why things get created, and the evolutionary pain that led there.

fry69

Exactly pointing out the issues. In my experience it’s great at writing functionality until it gets complex (context loss issue). It’s quite bad at design/visual stuff, despite models understanding “visual stuff” but that’s more Photoshop like visuals, not websites.

Anselm Hannemann

fundamentally it has the same problem backend engineers (broadly speaking) have: trying to apply tech solutions to human problems. what we do is coding UX solutions and there is no debugger for those!

claudio.

I do not think complexity is the issue here, at least not how we understand it. Models clearly understand the mechanics, also what gets perceived as "good" (or palatable), and they excel at tackling what we think of as complexity. Models have no problem holding a whole website in their context.

fry69

What I want to point out, what models lack, is what we call intention, motivation, experience through countless failures. They have to derive what we want, what our goal (their mission) is from a few tokens we hand them in the prompt. Therefore the results are fairly impressive IMHO.

fry69

Another way to look at this: all the garbage those models get trained on gets filtered through an unrelenting training process, which rewards only the "better" outcomes. It's remarkable that a few silly tokens in the prompt, interpolated/inferred with the outcome of this training process, yield these results.

fry69

@argyleink For something that itself is complex maths it’s hilarious how bad it is at maths.

Stephanie

If you pay a lot, yes; otherwise not. On a couple of big app features/services it lost context and broke its own previous code in the same session. But we're not talking about normal websites here, but complex services.

Anselm Hannemann

Those failures, if they are not due to older/subpar models, may be a failure to communicate properly with these models -> "context engineering". If anything LLMs excel at grasping complexity our brains cannot, at least not consciously/"on demand". See e.g. Mythos/security fixes.

fry69

You cannot compare ongoing work on services with audits (Mythos). My examples were with the latest models from one month ago, using markdown reference files for context, still… The one thing is about context loss from long-term service evolution, the other a super complex one-off task force.

Anselm Hannemann

@argyleink especially the first part of your article reminds me of how i always recommended css frameworks like bootstrap and foundation. need a quick prototype or don’t care about extra designs? go for it. but the moment you start customizing, adding your special ideas, got a design handed over, etc., better start from scratch, cause it’s the moment you start working against the framework.

Katrin K.

How does one define and communicate “taste” to an AI?

Anthony Whitford

ai sucks at generating images that aren't repulsive and make one not read a post, too, btw.

davepermen

There's also the fact LLMs are extremely bad at CSS because there are too many moving parts with too-long causal chains (and little visual feedback). It's obvious when you see them stuck in a loop with a browser mode. They are at the CSS stage of throwing shit at the wall to see what sticks.

Etienne

Then for the design, maybe we could have models digging this way. I mean, dedicated models found pretty innovative CPU layouts! But LLM aren't those kind of models, they are just the wrong tool to get creative interfaces.

Etienne

imo this will probably always be true. there's just too little good front end out there. the bad stuff will still be bad the next training cycle, and the new stuff will be a lot more bad because it will have come from AI. the good front end won't surface to the top and become the dominant approach

Michael Warren

6 months ago I would have agreed. 6 months ago I was working on a Web Components e-learning site. 6 months ago I took time to actually learn CoPilot, past the prompts and tab-tab code suggestions. I pivoted; now I vibe-code 4 hours a day. The power of AI is you can train it with simple Skills files.

Danny Engelman

150 lines. 4 months tuning my Web Component skills file. Still amazed by AI (and I've been around since 1990). Feels like those '90s interns I trained… I should've listened when they showed me FutureSplash.

Danny Engelman