Nobody knows how the whole system works

surfingcomplexity.blog

69 points by azhenley 7 hours ago · 54 comments

virgilp 6 hours ago

That's not how things work in practice.

I think the concern is not that "people don't know how everything works" - people never needed to know how to "make their own food" by understanding all the cellular mechanisms and all the intricacies of the chemistry & physics involved in cooking. BUT, when you stop understanding the basics - when you no longer know how to fry an egg because you just get it already prepared from the shop/ from delivery - that's a whole different level of ignorance, that's much more dangerous.

Yes, it may be fine & completely non-concerning if agricultural corporations produce your wheat and your meat; but if those corporations start producing standardized cooked food for everyone, is it really the same - is it a good evolution, or not? That's the debate here.

  • ahnick 5 hours ago

    Most people have no idea how to hunt, make a fire, or grow food. If all grocery stores and restaurants run out of food for a long enough time people will starve. This isn't a problem in practice though, because there are so many grocery stores and restaurants, and supply chains source from so many areas, that their redundant and decentralized nature makes it a non-issue. It is the same with making your own food: eventually, if you have enough robots or food replicators around, knowing how to make food becomes irrelevant, because you will always be able to find one even if yours is broken. (Note: we are not there yet.)

    • xorcist an hour ago

      > Most people have no idea how to hunt, make a fire, or grow food

      That's a bizarre claim, confidently stated.

      Of course I can make a fire and cook my own food. You can, too. When it comes to hunting, skinning, and cutting up animals, that takes a bit more practice, but anyone can manage something even if the result isn't pretty.

      If stores ran out of food we would have devastating problems, but that's because of specialization: now that we live in cities, you simply can't go out hunting even if you wanted to. Plus there are probably much more pressing problems to take care of, such as the lack of water and fuel.

      If most people actually couldn't cook their own food, should they need to, that would be a huge problem. Which makes the comparison with IT apt.

    • sciencejerk 4 hours ago

      >If all grocery stores and restaurants run out of food for a long enough time people will starve. This isn't a problem in practice though...

      I fail to see how this isn't a problem. Grid failures happen, and so do wars and natural disasters, which can cause grids and supply chains to fail.

      • ahnick 4 hours ago

        That is shorthand. The problem exists of course, but it is improbable that it will actually occur in our lifetimes. An asteroid could slam into the earth, or a gamma ray burst could sterilize the planet of all life. We could also experience nuclear war. These are problems that exist, yet we all just blissfully go on about our lives, b/c there is basically nothing that can be done to stop these things if they do happen, and they likely won't. Basically, we should only worry about these problems insofar as we as a species are able to actually do something about them.

    • shevy-java 4 hours ago

      In Star Trek they just 3D printed everything via light.

  • skeptic_ai 4 hours ago

    At what point is the threshold between fine and concerning? It seems like the one you chose is just from your point of view. I'm sure not everyone would agree; it's subjective.

  • lijok 4 hours ago

    > that's a whole different level of ignorance, that's much more dangerous.

    Why? Is it more dangerous to not know how to fry an egg in a teflon pan, or on a stone over a wood fire? Is it acceptable to know the former but not the latter? Do I need to understand materials science so I can understand how to make something nonstick, so I'm not dependent on teflon vendors?

    • virgilp an hour ago

      It's relative, not absolute. It's definitely more dangerous to not know how to make your own food than to know something about it - you _need_ food, so lacking that skill is more dangerous than having it.

      That was my point, really - that you probably don't need to know "materials science" to declare yourself competent enough in cooking to make your own food. Even if you've only ever cooked eggs in teflon pans, you will likely be able to improvise if the need arises. But once you become so ignorant that you don't even know what food is unless you see it on a plate in a restaurant, already prepared - then you're in a much poorer position to survive, should your access to restaurants suddenly be restricted. But perhaps more importantly - you lose the ability to evaluate food by anything other than appearance & taste, and have to completely rely on others to understand what food might be good or bad for you(*).

      (*) even now, you can't really "do your own research", that's not how the world works. We stand on the shoulders of giants - the reason we have so much is because we trust/take for granted a lot of knowledge that our ancestors built up for us. But it's one thing to not know/prove everything in detail down to the basic axioms/atoms/etc. - nobody does that. And it's a completely different thing to have your "thoughts" and "conclusions" already delivered to you in final form by something (be it Fox News, ChatGPT, the New York Times or anything really) and just take them for granted, without having a framework that allows you to do some minimal "understanding" and "critical thinking" of your own.

    • stoneforger 4 hours ago

      You do need to be able to understand that nonstick coating is unhealthy and not magic. You do need to understand that your options for keeping a pan-fry from sticking are a film of water or an ice cube, if you don't want to add oil into the mix. Then it really depends on what you are cooking, how sticky it will be, and what the end product will look like. That's why there are people who can't fry an egg, people who cook, chefs, and Michelin chefs. Because nuance matters; it's just that the domain where each person wants to apply it is different. I don't care about nuance in hockey picks, but probably some people do. But some domains should concern everyone.

bjt 5 hours ago

The claimed connections here fall apart for me pretty quickly.

CPU instructions, caches, memory access, etc. are debated, tested, hardened, and documented to a degree that's orders of magnitude greater than the LLM-generated code we're deploying these days. Those fundamental computing abstractions aren't nearly as leaky, nor nearly as likely to need refactoring tomorrow.

mamp 6 hours ago

Strange article. The problem isn't that no one person knows how everything works; it's that AI coding could mean there is no one who knows how a system works at all.

  • lynguist 4 hours ago

    No, I think the problem is that AI coding removes intentionality. It introduces artifacts, connections, and dependencies that wouldn't be there if one had designed the system with intent. And that eventually makes it harder to reason about.

    There is a difference in qualia between "it happens to work" and "it was made for a purpose".

    Business logic will increasingly settle for "it happens to work" as good enough.

    • stoneforger 4 hours ago

      Excellent point. The intention of business is profit; how it arrives there is considered incidental. Any product will do, no matter what, as long as it sells. Compounding effects in computing, the internet, and miniaturisation have enabled large profit margins that further compound these effects. They think of this as a machine that can keep printing more money and subsuming more and more, as software and computers become pervasive.

  • Animats 5 hours ago

    Including the AI, which generated it once and forgot.

    This is going to be a big problem. How do people using Claude-like code generation systems do this? What artifacts other than the generated code are left behind for reuse when modifications are needed? Comments in the code? The entire history of the inputs and outputs to the LLM? Is there any record of the design?

    • maxbond 4 hours ago

      I have experimented with telling Claude Code to keep a historical record of the work it is performing. It did work (though I didn't assess the accuracy of the record) but I decided it was a waste of tokens and now direct it to analyze the history in ~/.claude when necessary. The real problem I was solving was making sure it didn't leave work unfinished between autocompacts (eg crucial parts of the work weren't performed and instead there are only TODO comments). But I ended up solving that with better instructions about how to break down the plan into bite-sized units that are more friendly to the todo list tool.

      I have prompting in AGENTS.md that instructs the agent to update the relevant parts of the project documentation for a given change. The project has a spec, and as features get added or reworked the spec gets updated. If you commit after each session then the git history of the spec captures how the design evolves. I do read the spec, and the errors I've seen so far are pretty minor.
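      For illustration, the instruction can be quite short. The wording and the docs/SPEC.md path below are hypothetical, not quoted from an actual AGENTS.md; this is just the kind of thing that can live there:

        After completing a change, update the affected sections of docs/SPEC.md
        so they describe the current behavior. Do not rewrite unrelated sections.
        Commit at the end of each session so the spec's git history records how
        the design evolved.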

    • skeptic_ai 4 hours ago

      I, for one, save all conversations in the codebase - both the human prompts and the outputs. But I'm using a modified codex to do so. Not sure why it isn't the default, as it's useful to have this info.
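      A minimal Python sketch of the idea, assuming a plain JSONL transcript file in the repo (the path and record shape are made up for illustration, not how the modified codex actually stores things):

        import datetime
        import json
        import pathlib

        # Assumed location for the transcript log inside the repository.
        LOG_PATH = pathlib.Path("docs/llm_transcripts.jsonl")

        def log_exchange(prompt: str, response: str) -> None:
            """Append one human prompt and one model output to the transcript log."""
            LOG_PATH.parent.mkdir(parents=True, exist_ok=True)
            record = {
                "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "prompt": prompt,
                "response": response,
            }
            with LOG_PATH.open("a", encoding="utf-8") as f:
                f.write(json.dumps(record) + "\n")

        # Usage: call after each exchange, then commit the log alongside the code.
        # log_exchange("Refactor the retry logic", "<model output>")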

    • luckydata 4 hours ago

      Is this an actual problem? It takes minutes for an AI to explore and document a codebase. Sounds like a non-problem.

      • shevy-java 4 hours ago

        Is that documentation useful? I haven't seen a well-documented codebase by AI so far.

        To be fair - humans also fail at that. Just look at the GTK documentation as an example. When you point that out, ebassi may ignore you because criticism is unwanted; and the documentation will never improve, meaning they don't want new developers.

      • ahnick 4 hours ago

        Yes, exactly my point as well. It cuts both ways.

  • ahnick 5 hours ago

    This happens even today. If a knowledgeable person leaves a company and no KT (or more likely, poor KT) takes place, then there will be no one left to understand how certain systems work. This means the company will have to have a new developer go in and study the code and then deduce how it works. In our new LLM world, the developer could even have an LLM construct an overview for him/her to come up to speed more quickly.

    • stoneforger 4 hours ago

      Yes, but each time, the "why" gets obscured - perhaps not completely - either because there's no finished overview or because the original reason can no longer be derived from the current state of affairs. It's like the movie Memento: you're trying to piece together a story from fragments that seem incoherent.

PandaStyle 5 hours ago

Perhaps a dose of pragmatism is needed here?

I am no CS major, nor do I fully understand the inner workings of a computer beyond "we tricked a rock into thinking by shocking it."

I'd love to better understand it, and I hope that through my journey of working with computers, I'll learn more about these underlying concepts: registers, buses, memory, assembly, etc.

Practically however, I write scripts that solve real world problems, be that from automating the coffee machine, to managing infrastructure at scale.

I'm not waiting to pick up a book on x86 assembly before I write some Python, however. (I wish it were that easy.)

To the greybeards that do have a grasp of these concepts though? It's your responsibility to share that wealth of knowledge. It's a bitter ask, I know.

I'll hold up my end of the bargain by doing the same when I get to your position and everywhere in between.

gmuslera an hour ago

It is not about having infinite width and depth of knowledge. It's about abstracting at the right level, so that the relevant components are visible and you can assume correctness outside the focus of what you are solving.

Systems include people, who make their own decisions that affect how those systems work, and we don't go down to biology and chemistry to understand how they make choices. But that doesn't mean that people's decisions should be fully ignored in our analysis, just that there is a right abstraction level for them.

And sometimes a peripheral or abstracted-away component deserves to be seen or understood in more detail, because some of its subcomponents, or its fine-grained behavior, make a difference for what we are solving. Can we do that?

mojuba 4 hours ago

> AI will make this situation worse.

Being an AI skeptic more than not, I don't think the article's conclusion is true.

What LLMs can potentially do for us is exactly the opposite: because they are trained on pretty much everything there is, if you ask the AI how the telephone works, or what happens when you enter a URL in the browser, it can actually answer and break it down for you nicely (and that would be a dissertation-sized text). Accuracy and hallucinations aside, it's already better than a human who has no clue how the telephone works, or where to even begin if said human wanted to understand it.

Human brains have a pretty serious gap in the "I don't know what I don't know" area, whereas language models have such a vast scope of knowledge that it makes them somewhat superior, albeit at the price of, well, being literally quite expensive and power-hungry. But those are technical details.

LLMs are knowledge machines that are good at precisely that: knowing everything about everything on all levels as long as it is described in human language somewhere on the Internet.

LLMs consolidate our knowledge in ways that were impossible before. They are pretty bad at reasoning or e.g. generating code, but where they excel so far is answering arbitrary questions about pretty much anything.

tjchear 5 hours ago

I take a fairly optimistic view to the adoption of AI assistants in our line of work. We begin to work and reason at a higher level and let the agents worry about the lower level details. Know where else this happens? Any human organization that existed, exists, and will exist. Hierarchies form because no one person can do everything and hold all the details in their mind, especially as the complexity of what they intend to accomplish goes up.

One can continue to perfect and exercise their craft the old school way, and that’s totally fine, but don’t count on that to put food on the table. Some genius probably can, but I certainly am not one.

  • mhog_hn 4 hours ago

    But what if the AI agent has a 5% chance of adding a bug to that feature? Surely, before this, every feature was completely bug-free.

    • tjchear 4 hours ago

      Yeah it’s all trade offs. If it means I get to where I want to be faster, even if it’s imperfect, so be it.

      Humans aren't without flaws; prior to coding assistants, I'd lost count of the times my PM told me to rush things at the expense of engineering rigor. We validate or falsify the need for a feature sooner and move on to other things. Sometimes it works, sometimes a bug blows up in our faces, but things still chug along.

      This point will become increasingly moot as AI gets better at generating good code, and faster, too.

tosti 4 hours ago

Not just tech.

Does anyone on the planet actually know all of the subtleties and idiosyncrasies of the entire tax code? Perhaps the one inhabitant of Sealand, or the Sentinelese, but no one in any Western society.

camgunz an hour ago

Get enough people in the room and they can describe "the system". Everything OP lists (QAM, QPSK, WPA whatever) can be read about and learned. Literally no one understands generative models, and there isn't a way for us to learn about their workings. These things are entirely new beasts.

youarentrightjr 6 hours ago

> Nobody knows how the whole system works

True.

But in all systems up to now, for each part of the system, somebody knew how it worked.

That paradigm is slowly eroding. Maybe that's ok, maybe not, hard to say.

  • redrove 6 hours ago

    > But in all systems up to now, for each part of the system, somebody knew how it worked.

    If the project is legacy or the people just left the company that’s just not true.

    • youarentrightjr 5 hours ago

      > If the project is legacy or the people just left the company that’s just not true.

      Yeah, that's why I said "knew" instead of "knows".

whytaka 5 hours ago

But people are expected to understand the part of the system they are responsible for at the level of abstraction they are being paid to operate.

This new arrangement would be perfectly fine if they weren't held responsible when/if it breaks.

  • jstummbillig 5 hours ago

    I don't think there is anything new here, and the metaphor holds up perfectly. There have always been bugs we don't understand in compilers, libraries, or implementations beyond that, which make the path we chose unavailable to us at a certain level. The responsibility is to create a working solution, sure, but there is nothing that would prevent us from getting there by typing "Hey LLM, this is not working, let's try a different approach", even though it might not feel great.

dizhn 4 hours ago

Let me make it worse. Much worse. :)

https://youtu.be/36myc8wQhLo (USENIX ATC '21/OSDI '21 Joint Keynote Address-It's Time for Operating Systems to Rediscover Hardware)

mrkeen 4 hours ago

  Adam Jacob
  It’s not slop. It’s not forgetting first principles. It’s a shift in how the craft work, and it’s already happened.

This post just doubled down without presenting any kind of argument.

  Bruce Perens
  Do not underestimate the degree to which mostly-competent programmers are unaware of what goes on inside the compiler and the hardware.

Now take the median dev, compress his lack of knowledge into a lossy model, and rent that out as everyone's new source of truth.

  • conorcleary 3 hours ago

    "I don't need to know about hardware, I'm writing software."

    "I don't need to know about software engineering, I'm writing code."

    "I don't need to know how to design tests, ____ vibe-coded it for me."

psychoslave 4 hours ago

To be fair, I don't know how a living human individual works, let alone how they actually work in society. I suspect I'm not alone in this.

So, nothing new under the sun: often the practice comes first, and only then can some theory emerge, which can then be leveraged to go further than present practice, and so on. Sometimes practice and theory are more entangled in how they are created on the go, obviously.

mhog_hn 4 hours ago

It is the same with the global financial system

zhisme 2 hours ago

What a well-written article. That's actually a problem. A time will come when this hits us the same way it did the aqueducts: lost technology that no one knows how it worked in detail. Maybe that is just how engineering evolution works?

shevy-java 4 hours ago

Adam Jacob's quote is this:

"It's not slop. It's not forgetting first principles. It's a shift in how the craft work, and it's already happened."

It actually really is slop. He may wish to ignore it but that does not change anything. AI comes with slop - that is undeniable. You only need to look at the content generated via AI.

He may wish to focus merely on "AI for use in software engineering", but even there he is wrong, since AI makes mistakes too and not everything it creates is great. People often have no clue how the AI reaches any decision, so they also lose the ability to reason about the code or code changes. I think people have a hard time trying to sell AI as "only good things, the craft will become better". It seems everyone is on the AI hype train - eventually it'll either crash or slow down massively.

amelius 4 hours ago

Wikipedia knows how it all works, and that's good enough in case we need to reboot civilization.

kartoshechka 3 hours ago

Engineers pay for abstractions with more powerful hardware, but can optimize at will (hopefully). Will AI be able to afford more human hours to churn through piles of unfamiliar code?

fedeb95 4 hours ago

Why does the author imply that not knowing everything is a bad thing? If you have clear protocols and interfaces, not knowing everything enables you to make bigger innovations. If everything is a complex mess, then no.

  • bsza 4 hours ago

    Not knowing everything never "enables" you to do anything. Knowing how something works is always better than not knowing, assuming you want to use it or make changes to it.

sciencejerk 4 hours ago

We keep trading knowledge of the natural, physical world for temporary, rapidly changing knowledge of abstractions and software tools which we do not control (now LLM cloud tools).

The lack of comprehensive, practical, multi-disciplinary knowledge creates a DEEP DEPENDENCY on the few multinational companies and countries that UNDERSTAND things and can BUILD things. If you don't understand it, if you can't build it, they OWN you.

cess11 3 hours ago

Yeah, it's not a problem that a particular person does not know it all, but if no one knows any of it except as a black box kind of thing, that is a rather large risk unless the system is a toy.

Edit: In a sense, "AI" software development is postmodern: it is a move away from reasoned software development, in which known axioms and rules are applied, toward software being arbitrary and 'given'.

The future 'code ninja' might be a deconstructionist, a spectre of Derrida.

anon291 4 hours ago

I don't like this thing where we dislike 'magic'.

The issue with frameworks is not the magic. We feel like it's magic because the interfaces are not stable. If the interfaces were stable, we'd consider them just a real component of building whatever we're building.

You don't need to know anything about hardware to properly use a CPU ISA.

The difference is that the CPU ISA is documented, well tested, and stable. As an industry, we can build systems that offer stability and are formally verified. We just choose not to.

bsder 4 hours ago

Sure, we have complex systems where we don't know how everything works (car, computer, cellphone, etc.). However, we do expect those systems to behave deterministically in their interface to us. And when they don't, we consider them broken.

For example, why is the HP-12C still the dominant business calculator? Because other calculators were non-deterministically wrong for certain financial calculations. The HP-12C may not even have been strictly "correct", but it was deterministic in the ways it wasn't.

Financial people didn't know or care about guard digits or numerical instability. They very much did care that their financial calculations were consistent and predictable.

The question is: Who will build the HP-12C of AI?
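As a hedged aside to make the guard-digit/instability point concrete (generic floating-point behavior in Python, nothing specific to the HP-12C): the same three numbers summed in different orders give different answers in plain binary floating point, while a sufficiently precise decimal evaluation is consistent in both orders - the kind of predictability financial users were actually buying.

  from decimal import Decimal

  # Plain binary floating point: order changes the answer.
  a = [1.0, 1e16, -1e16]
  b = [1e16, -1e16, 1.0]
  print(sum(a))  # 0.0  (the 1.0 is absorbed by 1e16, then cancelled away)
  print(sum(b))  # 1.0  (the large terms cancel first, so the 1.0 survives)

  # Decimal with its default 28 digits of precision gives the same result
  # in both orders - consistent and predictable.
  d = [Decimal(1), Decimal(10) ** 16, -(Decimal(10) ** 16)]
  print(sum(d))            # 1
  print(sum(reversed(d)))  # 1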
