AI-Native vs. Anti-AI Engineers
One of the key differences I am seeing between AI-native engineers and anti-AI ones: the idea of "fully understanding" what you ship.
Before LLMs, we did not fully understand the libraries we imported, the kernels we touched, or the networks we only grasped conceptually. We have been outsourcing intelligence to other engineers, teams, and systems for decades.
One possible reason: when we use a library, we can tell ourselves we could read it. With an LLM, that fiction of potential understanding collapses. The real shift I am seeing isn't from "understanding" to "not understanding."
It is towards "I understand the boundaries, guarantees, and failure modes of what I'm responsible for." If agentic coding is the future, mastery becomes the ability to steer, constrain, test, and catch failures, not the ability to manually type every line.
Full piece:
https://open.substack.com/pub/grandimam/p/ai-native-and-anti-ai-engineers?utm_campaign=post-expanded-share&utm_medium=post%20viewer

This article mirrors exactly how I've been feeling. I'm tired of being attacked for using these tools. AI doesn't diminish understanding; it forces us to admit that "full understanding" was always partial. Working with AI forces clearer thinking about boundaries, contracts, and system behavior. It makes engineering more explicit, more honest, and ultimately more rigorous, not less. This is the craft of software development evolving, not disappearing.