Codebase is 98% AI generated

moderndescartes.com

16 points by brilee 2 months ago · 12 comments

appapapp 2 months ago

Very concerning that the author is ok with saying “I don’t know how to do this, and I don’t want to learn, so I let Claude do it”.

I think a lot of LLM speedups boil down to this (from personal experience and from what I've read). That's fine in some limited use cases, but it's also the antithesis of being a good developer. Your job is to learn how things work if it's work you're putting your name behind.

Using LLMs to shortcut the blank page in these scenarios and get you learning faster is the way to go (though sometimes it’ll send you in the absolute wrong direction initially).

> Since then, I have vibecoded every single feature … now, this has led to a number of hilarious failures

When I was a junior dev I wouldn’t read every line of a PR. Eventually you learn you should be reading lines that aren’t changed in a PR as well as every line that is. The author seems like a smart guy but more of a researcher than somebody I’d pay to build reliable software.

  • fhd2 2 months ago

    Oh you're gonna learn how things work, if you stick around long enough. The decision you get to make is whether you'll learn one thing at a time while you're building, or everything all at once under pressure at 2am.

_se 2 months ago

Is there any way to start banning posts like this if they don't actually link the codebase? There's no point at all in engaging with something like this without a way to judge the real outcomes that have/have not been achieved.

aleph_minus_one 2 months ago

> When AI-coding, you should take YAGNI to its extreme. This is for two reasons:

> 1. AI coding makes it absolutely trivial to add new features later on if you do need it.

When I program, I often think deeply about the features and abstractions I need in order to get them right.

If I need an additional feature, it thus often means that I deeply misunderstood the problem domain (which does happen), so the changes necessary to add new features are often deep and mean "seeing the world with new eyes (eyes that can also see infrared or ultraviolet light)". A little bit like going from the universal-algebra definition of a group to group objects [1] in a braided monoidal category (which, for example, show that there is a deep abstract relationship between groups and Hopf algebras; see for example [2]).
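
To make the analogy concrete, here is a rough sketch of the two definitions being contrasted (standard textbook formulations, not taken from [1] or [2]):

    % Universal-algebra view: a set G with operations satisfying equations
    (a \cdot b) \cdot c = a \cdot (b \cdot c), \qquad
    e \cdot a = a = a \cdot e, \qquad
    a \cdot a^{-1} = e = a^{-1} \cdot a \qquad \text{for all } a, b, c \in G

    % Group object in a category C with finite products: an object G with morphisms
    m : G \times G \to G, \qquad e : 1 \to G, \qquad i : G \to G
    % such that the usual diagrams commute, e.g. associativity becomes
    m \circ (m \times \mathrm{id}_G) = m \circ (\mathrm{id}_G \times m)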

I really cannot imagine how an AI can be capable of doing such deep transformations, which basically mean "rebuild the whole program so that the entire source code is now based on a completely different way of thinking about the respective problem domain, which goes like this: ...".

[1] https://en.wikipedia.org/wiki/Group_object

[2] https://en.wikipedia.org/wiki/Hopf_algebra#Analogy_with_grou...

  • Incipient 2 months ago

    > 1. AI coding makes it absolutely trivial to add new features later on if you do need it.

    I have to refute this. It may make it easier or faster... but definitely not trivial. I had it add a new feature to my simple app, and the way it did it "worked": it just fetched the entire list of entities for each entity in a list. Why it didn't create a new endpoint for "get entity", or just cache the whole list and do a local lookup, I don't know... but it absolutely wrecked performance.
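
    To make the performance difference concrete, a minimal sketch (the endpoint, types, and function names are hypothetical, not from the actual app) of the generated N+1 pattern versus a single fetch plus local lookup:

        interface Entity { id: string; name: string }

        // What the generated feature did: re-fetch the full entity list once per entity shown (N+1).
        async function renderSlow(ids: string[]): Promise<Entity[]> {
          const rows: Entity[] = [];
          for (const id of ids) {
            const all: Entity[] = await fetch("/api/entities").then(r => r.json());
            rows.push(all.find(e => e.id === id)!);
          }
          return rows;
        }

        // The cheap alternative: fetch the list once, build an index, do local lookups.
        async function renderFast(ids: string[]): Promise<Entity[]> {
          const all: Entity[] = await fetch("/api/entities").then(r => r.json());
          const byId = new Map(all.map(e => [e.id, e] as const));
          return ids.map(id => byId.get(id)!);
        }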

CaptainOfCoit 2 months ago

Best way I've found to deal with moderately complex codebases and LLMs is to almost go overboard with splitting things into libraries/reusable components, and let the LLM go wild on each individual component, but I'm 100% the owner of how all the pieces fit together.

Seems when a project goes beyond a certain scope, LLMs lose the ability to separate what they come across, so everything "blends into a global state" or something, and they end up indirectly inspired by unrelated things. All LLMs/agents seem to suffer from this more or less, as far as I can tell.

  • yodon 2 months ago

    This is the approach I've been taking in some research projects: conceptually thinking of the project as a large number of custom npm or nuget modules with a thin orchestration layer over them (but not actually deploying them as npm or nuget modules).

    It definitely seems to help the LLM retain focus and architectural integrity.
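
    A minimal sketch of that shape (module and function names are hypothetical): each "module" exposes a narrow interface, and the thin orchestration layer at the bottom is the only place they meet:

        // Module 1: config parsing. Self-contained, so the LLM can work here in isolation.
        function parseConfig(raw: string): Map<string, string> {
          return new Map(
            raw
              .split("\n")
              .filter(line => line.trim().length > 0)
              .map(line => {
                const [key, ...rest] = line.split("=");
                return [key.trim(), rest.join("=").trim()] as const;
              })
          );
        }

        // Module 2: report formatting. Knows nothing about where the data came from.
        function formatReport(settings: Map<string, string>): string {
          return [...settings.entries()].map(([k, v]) => `${k}: ${v}`).join("\n");
        }

        // Thin orchestration layer: the one piece the human keeps full ownership of.
        function run(raw: string): string {
          return formatReport(parseConfig(raw));
        }

        console.log(run("host=localhost\nport=8080"));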

  • stockresearcher 2 months ago

    There was a "movement" in the late 70s or early 80s called structured system design that basically advocated for this approach (though 100% human). You may want to pick up a book or read more about it, as it outlines approaches and things to think about when splitting things up so much.

welder 2 months ago

I wish he had real data on lines of code written by AI vs. human (e.g. via WakaTime), instead of guessing at the 98% number.

knome 2 months ago

The title is nonsense and unsupported by the justification the author attempts to give.

mooreds 2 months ago

A great post by someone who is putting Claude through its paces. Learned a lot about how to use LLMs to assist coding (and what to avoid).
