We're all VCs now: The skills developers need in the AI era


January 21, 2026 · By Reuven

Many years ago, a friend of mine described how software engineers solve problems:

  • When you’re starting off, you solve problems with code.
  • When you get more experienced, you solve problems with people.
  • When you get even more experienced, you solve problems with money.

In other words: You can be the person writing the code, and solving the problem directly. Or you can manage people, specifying what they should do. Or you can invest in teams, telling them about the problems you want to solve, but letting them set specific goals and managing the day-to-day work.

Up until recently, I was one of those people who said, “Generative AI is great, but it’s not nearly ready to write code on our behalf.” I spoke and wrote about how AI presents an amazing learning opportunity, and how I’ve integrated AI-based learning into my courses.

Things have changed… and are still changing

I’ve recently realized that my perspective is oh-so-last year. Because in 2026, many companies and individuals are using AI to write code on their behalf. In just the last two weeks, I’ve spoken with developers who barely touch code, instead having AI develop it for them. And in case you’re wondering whether this only applies to freelancers, I’ve spoken with people from several large, well-known companies who have said something similar.

And it’s not just me: Gergely Orosz, who writes the Pragmatic Engineer newsletter, recently wrote that AI-written code is a “mega-trend set to hit the tech industry,” and that a growing number of companies are already relying on AI to specify, write, and test code (https://newsletter.pragmaticengineer.com/p/when-ai-writes-almost-all-code-what).

And Simon Willison, who has been discussing and evaluating AI models in great depth for several years, has seen a sea change in model-generated code quality in just the last few months. He predicts that within six years, it’ll be as quaint for a human to type code as it is to use punch cards (https://simonwillison.net/2026/Jan/8/llm-predictions-for-2026/#6-years-typing-code-by-hand-will-go-the-way-of-punch-cards).

An inflection point in the tech industry

This is mind-blowing. I still remember taking an AI course during my undergraduate years at MIT, learning about cutting-edge AI research… and finding it quite lacking. I did a bit of research at MIT’s AI Lab, and saw firsthand how hard language recognition was. To think that we can now type or talk to an AI model, and get coherent, useful results, continues to astound me, in part because I’ve seen just how far this industry has come.

When ChatGPT first came out, it was breathtaking to see that it could code. It didn’t code that well, and often made mistakes, but that wasn’t the point. It was far better than nothing at all. In some ways, it was like the old saw about dancing bears: the marvel wasn’t that it danced well, but that it danced at all.

Over the last few years, GenAI companies have been upping their game, slowly but surely. They still get things wrong, and still give me bad coding advice and feedback. But for the most part, they’re doing an increasingly impressive job. And from everything I’m seeing, hearing, and reading, this is just the beginning.

Whether the current crop of AI companies survives its cash burn is another question entirely. But the technology itself is here to stay, much as the dot-com crash of 2000 didn’t stop the Internet.

We’re at an inflection point in the computer industry, one that increasingly allows a single person to create a large, complex software system without writing the code directly. In other words: Over the coming years, programmers will spend less and less time writing code. They’ll spend more and more time partnering with AI systems — specifying what the code should do, what counts as success, which errors can be tolerated, and how scalable the system must be.

This is both exciting and a bit nerve-wracking.

Engineering >> Coding

The shift from “coder” to “engineer” has been going on for years. We abstracted away machine code, then assembly, then manual memory management. AI represents the biggest abstraction leap yet. Instead of abstracting away implementation details, we’re abstracting away implementation itself.

But software engineering has long been more than just knowing how to code. It’s about problem solving, about critical thinking, and about considering not just how to build something, but how to maintain it. It’s true that coding might go away as an individual discipline, much as there’s no longer much of a need for professional scribes in a world where everyone knows how to write.

However, this does mean that to succeed in the software world, it’ll no longer be enough to understand how computers work, and how to instruct them effectively with code. You’ll need many more skills, skills that are almost never taught to coders, because there were already so many fundamentals to learn.

In this new age, creating software will be increasingly similar to being an investor. You’ll need to have a sense of the market, and what consumers want. You’ll need to know what sorts of products will potentially succeed in the market. You’ll need to set up a team that can come up with a plan, and execute on it. And then you’ll need to be able to evaluate the results. If things succeed, then great! And if not, that’s OK — you’ll invest in a number of other ventures, hoping that one or more will get the 10x you need to claim success.

If that seems like science fiction, it isn’t. I’ve seen and heard about amazing success with Claude Code from other people, and I’ve started to experience it myself. You can have it draft specifications. You can have it write tests. You can have it produce a list of tasks. You can have it work through those tasks. You can have it consult with other GenAI systems, to bring in third-party advice. And this is just the beginning.
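
To make that concrete, here’s a minimal sketch of what such a workflow might look like, driving Claude Code’s non-interactive print mode (claude -p) from Python. The file names and prompts are illustrative conventions of mine, not anything the tool requires:

    import subprocess

    # A hypothetical spec -> tasks -> code loop. Each step invokes the
    # "claude" CLI in its non-interactive print mode (-p). The file names
    # (IDEA.md, SPEC.md, TASKS.md) are conventions for this sketch only.
    STEPS = [
        "Read IDEA.md and write a detailed specification to SPEC.md.",
        "Read SPEC.md and write an ordered task list to TASKS.md.",
        "Work through TASKS.md one task at a time: write a failing test, "
        "make it pass, then check the task off.",
    ]

    for prompt in STEPS:
        subprocess.run(["claude", "-p", prompt], check=True)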

Programming in English?

When ChatGPT was first released, many people quipped that the hottest programming language is now English. I laughed at that then, less because of the quality of AI coding, and more because most people, even given a long time, don’t have the experience and training to specify a programming project. I’ve been to too many meetings in which developers and project managers exchange harsh words because they interpreted vaguely specified features differently. And that’s with humans, who presumably understand the specifications better!

As someone said to me many years ago, computers do what you tell them to do, not what you want them to do. Engineers still make plenty of mistakes, even with their training and experience. But non-technical people, attempting to specify a software system to a GenAI model, will almost certainly fail much of the time.

So yes, technical chops will still be needed! But just as modern software engineers don’t think too much about the object code emitted by a compiler, assuming that it’ll be accurate and useful, future software engineers won’t need to check the code emitted by AI systems. (We still have some time before that happens, I expect.) The ability to break a problem into small parts, think precisely, and communicate clearly, will be more valuable than ever.

Even when AI is writing code for us, we’ll still need developers. But the best, most successful developers won’t be the ones who have mastered Python syntax. Rather, they’ll be the best architects, the clearest communicators, and the most critical thinkers.

Preparing yourself: We’re all VCs now

So, how do you prepare for this new world? How can you acquire this VC mindset toward creating software?

Learn to code: You can only use these new AI systems well if you have a strong understanding of the underlying technology. AI is like a chainsaw: it does wonders for people with experience, but is super dangerous for the untrained. So don’t believe the hype that says you no longer need to learn to program because we’re in an age of AI. You still need to learn it. The language doesn’t matter nearly as much as the underlying concepts. For the time being, you will also need to inspect the code that GenAI produces, and that requires coding knowledge and experience.

Communication is key: You need to learn to communicate clearly. AI uses text, which means that the better you are at articulating your plans and thoughts, the better off you’ll be. Remember “Let me Google that for you,” the snarky way that many techies responded to people who asked for help searching the Web? Well, guess what: Searching on the Internet is a skill that demands some technical understanding. People who can’t search well aren’t dumb; they just don’t have the needed skills. Similarly, working with GenAI is a skill, one that requires far more lengthy, detailed, and precise language than Google searches ever did. Improving your writing skills will make you that much more powerful as a modern developer.

High-level problem solving: An engineering education teaches you (often the hard way) how to break problems apart into small pieces, solve each piece, and then reassemble them. But how do you do that with AI agents? That’s especially where the VC mindset comes into play: Given a budget, what is the best team of AI agents you can assemble to solve a particular problem? What role will each agent play? What skills will they need? How will they communicate with one another? How do you do so efficiently, so that you don’t burn all of your tokens in one afternoon?
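
As a toy illustration of that budgeting exercise, here’s a sketch in Python. The Agent class and its run method are placeholders for whatever agent framework you actually use; only the budget-allocation idea matters:

    from dataclasses import dataclass

    @dataclass
    class Agent:
        role: str
        budget_tokens: int  # this agent's share of the total "investment"

        def run(self, task: str) -> str:
            # A real implementation would call an LLM with a role-specific
            # system prompt, stopping once budget_tokens is exhausted.
            return f"[{self.role}: {self.budget_tokens:,} tokens] {task}"

    TOTAL_BUDGET = 1_000_000  # tokens you're willing to spend today

    team = [
        Agent("architect", int(TOTAL_BUDGET * 0.2)),
        Agent("implementer", int(TOTAL_BUDGET * 0.5)),
        Agent("reviewer", int(TOTAL_BUDGET * 0.3)),
    ]

    for agent in team:
        print(agent.run("design, build, and critique the import pipeline"))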

Push back: When I was little, people would sometimes say that something must be true, because it was in the newspaper. That mutated into: It must be true, because I read it online. Today, people assume that because Gemini is AI, it must be right. Or unbiased. Or smart. But of course, that isn’t the case; AI tools regularly make mistakes, and you need to be willing to push back, challenge them, and bring counter-examples. Sadly, people don’t do this enough. I call this “AI-mposter syndrome”: believing that the AI must be smarter than you are. Just today, while I was reading up on the Model Context Protocol, Claude gave me completely incorrect information about how it works. Only when I provided counter-examples did Claude admit that actually, I was right, and it was wrong. But it would have been very easy for me to say, “Well, Claude knows better than I do.” Confidence and skepticism will go a long way in this new world.

The more checking, the better: I’ve been using Python for a long time, but I’ve also spent no small amount of time with other dynamic languages, such as Ruby, Perl, and Lisp. We’ve already seen that Python belongs in serious production environments only with good testing, and even more so with type hints. When GenAI is writing your code for you, there’s zero room for compromise on these fronts. (Heck, if it’s writing the code and the tests, then why not go all the way with test-driven development?) If you aren’t requiring a high degree of safety checks and testing, you’re asking for trouble — and potentially big trouble. Not everyone will be this serious about code safety. There will be disasters: code that seemed fine until it wasn’t, corners that seemed reasonable to cut until they weren’t. Don’t let that be you.
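
For example, even a trivial function deserves type hints and a test that exists before the implementation does; the function below is mine, purely for illustration:

    import pytest

    def average(values: list[float]) -> float:
        """Return the arithmetic mean; raise ValueError on an empty list."""
        if not values:
            raise ValueError("average() of an empty list")
        return sum(values) / len(values)

    def test_average() -> None:
        assert average([1.0, 2.0, 3.0]) == 2.0

    def test_average_empty() -> None:
        with pytest.raises(ValueError):
            average([])

Run the tests with pytest, and the code through a type checker such as mypy, whether the implementation came from you or from Claude.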

Learn how to learn: This has always been true in the computer industry; the faster you can learn new things and synthesize them into your existing knowledge, the better. But the pace has sped up considerably in the last few years. Things are changing at a dizzying pace. It’s hard to keep up. But you really have no choice but to learn about these new technologies, and how to use them effectively. It has long been common for me to learn about something one month, and then use it in a project the next month. Lately, though, I’ve been using newly learned ideas just days after coming across them.

What about juniors?

A big question over the last few years has been: If AI makes senior engineers 100x more productive, then why would companies hire juniors? And if juniors can’t find work, then how will they gain the experience that turns them into attractive, AI-powered seniors?

This is a real problem. I attended conferences in five countries in 2025, and young engineers in all of them were worried about finding a job, or keeping their current one. There aren’t any easy answers, especially for people who were looking forward to graduating, joining a company, gradually gaining experience, and finally becoming a senior engineer or hanging out their own shingle.

I can say that AI provides an excellent opportunity for learning, and the open-source world offers many opportunities for professional development, as well as interpersonal connections. Perhaps the age in which junior engineers gained their experience on the job is fading, and participating in open-source projects will need to become part of the university curriculum, or something people do in their spare time. And pairing with an AI tool can be extremely rewarding and empowering. Much as Waze doesn’t scold you for missing a turn, AI systems are extremely polite and patient when you make a mistake, or need to debug a problem. Learning to work with such tools, alongside working with people, might be a good way for many to improve their skills.

Standards and licensing

Beyond skill development, AI-written code raises some other issues. For example: Software is one of the few aspects of our lives that has no official licensing requirements. Doctors, nurses, lawyers, and architects, among others, can’t practice without appropriate education and certification. They’re often required to take courses throughout their career, and to get re-certified along the way.

No doubt, part of the reason for this type of certification is to maintain the power (and profits) of those inside the system. But it also helps to ensure quality and accountability. As we transition to a world of AI-generated software, part of me wonders whether we’ll eventually need to feed the AI system a set of government-mandated codes that ensure user safety and privacy. Or whether only certified software engineers will be allowed to write the specifications fed into AI to create software.

After all, during most of human history, you could just build a house. There weren’t any standards or codes you needed to follow. You used your best judgment — and if the house fell down one day, well, that sort of thing happened, and what could you do? Nowadays, of course, there are codes that restrict how you can build, and only someone who has been certified and licensed can implement those codes.

I can easily imagine the pushback that a government would get for trying to impose such restrictions on software people. But as AI-generated code becomes ubiquitous in safety-critical systems, we’ll need some mechanism for accountability. Whether that’s licensing, industry standards, or something entirely new remains to be seen.

Conclusions

The last few weeks have been among the most head-spinning in my 30-year career. My future as a Python trainer isn’t in danger, but it is going to change — and potentially quite a bit — even in the coming months and years. I’m already rolling out workshops in which people solve problems not by using Python and Pandas, but by using Claude Code to write Python and Pandas on their behalf. It won’t be enough to learn how to use Claude Code, but it also won’t be enough to learn only Python and Pandas. Both skills will be needed, at least for the time being. But the trend seems clear and unstoppable, and I’m both excited and nervous to see what comes down the pike.

But for now? I’m doubling down on learning how to use AI systems to write code for me. I’m learning how to get them to interact, to help one another, and to critique one another. I’m thinking of myself as a VC, giving “smart money” to a bunch of AI agents that have assembled to solve a particular problem.

And who knows? In the not-too-distant future, an updated version of my friend’s statement might look like this:

  • When you’re starting off, you solve problems with code.
  • When you get more experienced, you solve problems with an AI agent.
  • When you get even more experienced, you solve problems with teams of AI agents.
