Ask HN: What would you recommend a vibe coder learn about how all this works?
I'm a writer who started building with AI coding tools about 8 months ago. No programming background. It's been one of the most fun things I've ever done.

I want to understand more about what's actually happening. What are the big concepts that, once you get them, make everything click in a more interesting way? The stuff that made you go "oh, THAT'S what's going on."

If you are enjoying vibe coding, the biggest "aha" for me is realizing that software is mostly about managing state, data flow, and failure, not syntax. Once you get how data moves, where state lives, and what happens when things break, a lot of the "magic" disappears. Learning basic debugging and reading stack traces also compounds forever, even with AI.

One thing that helped me: think a bit like a PM. Spend time crafting good prompts (use ChatGPT to generate prompts and feed them to your AI coder ;)) for ChatGPT/Claude/Cursor, and ask it to generate docs and explain what it just did. Over time, patterns emerge and things feel less mysterious. LLMs write code well, but they don't give you mental models. That part you still have to build.
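To make the "state, data flow, and failure" point concrete, here is a minimal made-up Python sketch (the file name and data are invented for illustration) of how a piece of bad state turns into a traceback you can read:

```
# orders.py -- a deliberately buggy toy script (names and data are made up).
# State lives in `orders`, data flows through total_due(), and the failure
# surfaces as a readable traceback rather than anything mysterious.
orders = [
    {"item": "coffee", "price": 3.50},
    {"item": "bagel"},  # bad state sneaks in here: no "price" key
]

def total_due(orders):
    total = 0.0
    for order in orders:
        total += order["price"]  # raises KeyError for the bagel
    return total

print(total_due(orders))

# Running this prints roughly (line numbers will vary):
#   Traceback (most recent call last):
#     File "orders.py", line 15, in <module>
#       print(total_due(orders))
#     File "orders.py", line 12, in total_due
#       total += order["price"]
#   KeyError: 'price'
# Read it bottom-up: the error, the exact line that raised it, then the
# chain of calls that led there.
```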
IMHO, reading isn't enough; you must do it and experience issues that make you go "what? How?". But your question is so vague that there are a lot of answers. There is a lot of stuff that can be considered big concepts: basic electronics, networking, operating systems, peripheral communication, compilers, languages, algorithms, assembly/procedural/object/functional programming, and the whole world of "AI". What do you want to know?
- Why you shouldn't call .where() a million times in LINQ?
- How internal statistics tables are managed in SQL Server to produce the execution plan, and how to watch the execution plan?
- Big endian vs little endian?
- How to use telnet to get an HTML page from a web server? (see the sketch below for the same idea in Python)
- Why everyone thinks that their favourite language is the best?
- Why is the first char in a string in Pascal at [1] when it's at [0] in a lot of other languages?
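To make that telnet teaser concrete, here is a small Python sketch of the same idea (example.com is just a stand-in host): a web page is only text sent back over a TCP connection.

```
import socket

# The same thing "telnet example.com 80" lets you do by hand: open a TCP
# connection, type an HTTP request at the server, and read what comes back.
host = "example.com"  # stand-in host for illustration
request = (
    "GET / HTTP/1.1\r\n"
    f"Host: {host}\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection((host, 80)) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

# Status line, headers, a blank line, then the HTML body -- no magic.
print(response.decode("utf-8", errors="replace")[:500])
```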
Use the AI to teach you. I'm in a similar spot, not a full coder, although I've tinkered since I started with HTML in Notepad in the early 2000s :) I made a /teach-me slash-command. Every time I run into something I want to know about, I run it. It launches a background subagent (forking context) that checks (and updates) my 'learning-profile.md'. I can pass additional requests, but otherwise it just deduces from context. It then creates an explainer/lesson and sends it to my Kindle (using a custom-created MCP), so I can read it offline without breaking my flow. I've never learned so much so quickly, about everything. It's awesome. I use the same MCP from phone apps: every time I want to learn something, I just ask.

Also: learn about the Git concepts (not commands). You'll learn much quicker when you can mess up and retry ;)

I've been writing a new textbook for undergrads (chemistry domain focus), and I think this excerpt is generally solid advice that is applicable here. Any feedback is welcome (the textbook will be published GPLv3 via GitHub). I appreciate that I am on the conservative side here. The following is a copy-paste of the final notes/tips/warnings in the book, taken from the LaTeX source with minimal edits for display here.

Rather than viewing AI as forbidden or universally permitted, consider this progression:

1. Foundation Phase (avoid generation, embrace explanation)
When learning a new library (e.g., your first RDKit script or lmfit model), do not ask the AI to write the code. Instead, write your own attempt, then use AI to:
• Explain error tracebacks in plain language
• Compare your approach to idiomatic patterns
• Suggest documentation sections you may have missed

2. Apprenticeship Phase (pair programming)
Once you can write working but inelegant code, use AI as a collaborative reviewer:
• Refactor working scripts for readability
• Vectorize slow loops you have already prototyped
• Generate unit tests for functions you have written (a sketch of this follows below)

3. Independence Phase (managed delegation)
When you have the skill to write the code yourself but choose to delegate to save time, you are essentially trading the effort of writing for the effort of auditing. Because your prompts are condensed summaries of intent rather than literal instructions, the LLM must fill the "ambiguity gap" with educated guesses. Delegation only works if you are skilled enough to recognise when those guesses miss the mark; if your words were precise enough to never be misunderstood, they would already be code. Coding without oversight is dangerous and deeply incompetent behaviour in professional environments. Examples of use cases are:
• Generate boilerplate for familiar patterns, then audit line by line
• Prototype alternative algorithms you already understand conceptually
• Document code you have written (reverse the typical workflow)
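To make the "generate unit tests for functions you have written" step concrete, here is a minimal sketch; the slugify function and its test cases are invented for illustration. The point is that you wrote the function yourself, so you can judge whether the drafted tests match what you actually meant.

```
import re
import unittest

# A small function you wrote yourself.
def slugify(title: str) -> str:
    """Lowercase a title and join the words with hyphens for use in URLs."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

# The kind of tests you might have an assistant draft, then audit:
# do these cases reflect what *you* intended the function to do?
class TestSlugify(unittest.TestCase):
    def test_basic_title(self):
        self.assertEqual(slugify("Hello, World!"), "hello-world")

    def test_collapses_whitespace_and_punctuation(self):
        self.assertEqual(slugify("  Ask HN:  vibe   coding  "), "ask-hn-vibe-coding")

    def test_empty_string(self):
        self.assertEqual(slugify(""), "")

if __name__ == "__main__":
    unittest.main()
```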
Hi, AI tools just shifted the interface of programming from code to natural language. The tool you've been sharpening your whole career — words — is now the primary tool of software development. You're not a non-programmer learning to code. You're a native speaker arriving in a country that just switched to your language. And that's exactly why it's been "one of the most fun things you've ever done." We're always drawn to the things where our strengths quietly do the heavy lifting. Trust that signal. Keep going.

Hard disagree: the interface hasn't changed at all. What has happened is that new tools have appeared that make natural language a viable interface. It is a new, lesser interface, not a replacement. Like a GUI, it is more accessible but functionally restricted: an interface conditioned on previously solved tasks, but unable to solve novel ones. What this means is that coding becomes accessible to those looking to apply something like Python to solved problems, but it very much remains inaccessible to those looking to solve truly novel problems they have the skill to solve in their domain but lack the coding skills to describe. As a simple example, Claude Code is easily among the most competent coding interfaces I know of right now. However, if I give it a toy problem I've been toying with as a hobby project, it breaks so badly it starts hallucinating that it is ChatGPT:
```
ChatGPT
Apologies, but I don't have the ability to run code or access files in a traditional sense. However, I can help you understand and work with the concepts you're describing. Let me provide a more focused analysis:
This is actually a very robust design pattern that prevents overconfidence and enables continuous improvement. The [...lots of rambling...] correctly.
```
/insights doesn't help of course; it simply recommends I clear context in those situations and try again, but naturally it has the same problems. This isn't isolated: unless I give it simple tasks, it fails. The easy tasks it excels at, though: it has handled a broad variety of tasks to a high degree of satisfaction, but it is a long shot away from replacing just writing code. Bottom line: LLMs give coding a GUI, but like a GUI, it is restricted and buggy.

I've seen non-programmers successfully launch real apps — not toy projects — through vibe coding. I'm doing it myself, and I'm about to ship a developer tool built the same way. They'll still need to pick up the fundamentals of programming — that part isn't optional yet. And getting to that level as a non-programmer takes real effort. But if the interest is there, it's far from impossible. In fact, I'd argue someone with genuine passion and domain expertise might have better odds than an average developer just going through the motions.

I highly recommend Andrej Karpathy's Deep Dive into LLMs like ChatGPT: https://youtu.be/7xTGNNLPyMI. It's a 3.5-hour general intro to the topic that I think does a great job of making sense of the tech and how it works under the hood, without needing to build up a foundation of math and programming.

Congrats on the journey and the desire to learn. There are different books and approaches to learning more depending on your needs. It's usually useful to tailor things to whatever you already produce for others or consume for yourself, because then you get the most out of your feedback loop. Can you share more about what you've tried so far, with what tools, or what feels the most fun or useful? Depending on your answers, one might recommend things like:
- https://automatetheboringstuff.com/
- Land of Lisp (comics and games with functional programming, to intentionally separate you from vibe coding traditional tools)
- web-related dev with different viz or presentation tools
Useful concepts regardless will be, in my mind, unit testing and how to approach problems and break them down, but it all depends on where you want to go.

There are two paths here: bottom up and top down. Bottom up would roughly be:
1. Pick a simple introduction-to-programming textbook, ideally one using Python
2. Work through building a transformer LLM in Python (a minimal attention sketch follows this comment)
3. Move on to training it on a corpus
You're not mastering each step; reading the Python book and doing some exercises is fine. The top down:
This 3Blue1Brown playlist will have you covered
https://youtube.com/playlist?list=PLZHQObOWTQDNU6R1_67000Dx_...
Either way, you want to meet in the middle. There is still a lot in the middle that isn't clear, so don't try to work from the middle out!
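To give a rough sense of what step 2 involves, here is a minimal single-head self-attention sketch in plain NumPy; the shapes and weight matrices are arbitrary toy values, not from any particular tutorial, and a real transformer stacks many of these with learned weights:

```
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head causal self-attention.
    X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: (d_model, d_head)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # (seq_len, seq_len) similarities
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores = np.where(mask, -1e9, scores)             # causal mask: no peeking at future tokens
    return softmax(scores) @ V                        # weighted mix of value vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                          # 5 toy tokens, 16-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(16, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)            # -> (5, 8)
```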
The courses on deeplearning.ai are a good starting point. Anthropic and OpenAI both have decent stuff on there, plus their own docs have examples.

You have to choose: keep vibing or learn to code. There's a possibility that programming might not matter and being good at vibing is what will matter. No one knows yet... for now it does pay to know how to code.

Hello, as always: IMHO (!) recognize that software engineering is not about writing / vibing code but about solving (!) problems. Nobody cares whether the code which solves a problem is generated / copied / written... as long as it was legally obtained... ;)) Anyway: code is liability; every line of code you don't have to write to solve the problem keeps future maintenance costs low(er). Additionally, especially for non-trivial problem solutions - read: projects -, it's essential to have maintainable code, which means code that is...
* easy to understand ~ new developers
* easy to extend ~ new features
* easy to sustain ~ update dependencies, update the underlying runtime environment, etc.
Especially if it solves a complex problem for a company, the code may be used for years or even decades => keep that in mind! Just my 0.02€

Stop vibing and use your own brain. After you have learned enough and consider yourself a software engineer, you can go back to using AI tools. Only then will you have enough knowledge and experience to review the output of those machines. And don't you dare ever open a pull request with AI-generated code in any free and open source project.

You mean well, but this is not helpful advice.

I think the spirit is in the right place but the framing is too extreme. I am never going to be a software engineer either, but I took a CS101 class and then data structures and algorithms. Going from zero to passing those classes was the most useful combination of things I ever learned. I am so glad I got to do this way before LLMs, because I think I am absolutely the type of person who would have used LLMs to cheat and learned nothing from the class. I think at least those concepts are vastly worth struggling with without the help of LLMs.

First of all, that's great. That's what I hoped would happen: non-technical people using these tools, feeling part of the "power" of coding, and feeling the desire to learn more. About your question, it is hard to give one answer. I don't think there is one big thing that makes everything click, and if there is one, it is probably different for different people. But I can give some advice.
1. You can ask the AI itself to explain how the code works. In my experience they are usually fine at that. You can probably tailor the explanation to your technical level, so that's neat.
2. Stick to one language. I don't know which language your AI is using for your tools (probably Python if they run on your machine, or JavaScript if they work in a browser). Learning what you are using is the first step.
3. Once you know that, you can use the AI and some online guides to learn the very basics of the language. Maybe ask the AI for very simple toy tools (e.g., a web page where you write a phrase, click a button, and it shows that phrase with the words in reverse order) and try to understand what the code does (a tiny sketch of that logic follows below). You may still ask the AI about the lines you don't understand.
4. It is a potentially long journey. Go as far as you like. After these first steps, you will likely have more specific questions. That's good. :)
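For illustration, the core logic of that reverse-the-words toy is only a couple of lines of Python; wiring it up to a text box and a button on a page is the part worth having the AI walk you through.

```
def reverse_words(phrase: str) -> str:
    """Return the phrase with its words in reverse order."""
    return " ".join(reversed(phrase.split()))

print(reverse_words("the quick brown fox"))  # -> "fox brown quick the"
```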
>> The stuff that made you go "oh, THAT'S what's going on."
That comes from years of 70-hour weeks of hand coding!

Though doing 70-hour weeks will lead to burnout.

No, it really won't.

Reading The Language of Machines book and The Linux Programming Interface. Teachyourselfcs.com is good too.

Ask the vibe to explain it to you; unlike humans, it is a very self-aware and patient teacher (albeit prone to saying made-up horseshit at times).

I feel like we might have more to learn from you than the other way round. Where by "we" I mean the traditional programmers who frequent this place.