AI Augmentation, AI Amputation


[Image: A spiral staircase in the Guggenheim Museum with people walking both up and down]

Andres Ortiz from Bestiario once shared an insightful observation during our Proximo Lab call. He described the two distinct ways visitors experience the Guggenheim Museum: either taking the slow spiral staircase upward, gradually absorbing the art with changing perspectives and time for contemplation, or hopping into the elevator for an express journey. These two paths represent fundamentally different experiences of the same content - one offering a journey of discovery, the other delivering efficiency at the cost of that journey.

I've been thinking about this metaphor a lot lately as I navigate my relationship with AI tools in my daily work. There's something profound happening in this moment that deserves our attention and reflection.

What We Gain and What We Lose - McLuhan, Microsoft Study, Carr

Marshall McLuhan, in his seminal work "Understanding Media," proposed that every technological advancement represents both an amplification and an amputation. Technology extends certain human capabilities while simultaneously diminishing others. This dialectic feels particularly relevant as AI systems become increasingly embedded in our creative and intellectual processes.

I was reminded of this recently while reflecting on Nicholas Carr's keynote at CHI 2017 in Denver. He told a story about young Inuit hunters who, unlike their ancestors, struggle to navigate in snowstorms without GPS; some have tragically lost their lives because of it. The traditional wayfinding knowledge, honed over generations, is being lost as the technology meant to help has instead created a dangerous dependency.

This isn't meant to be alarmist. As a technologist myself, I'm genuinely excited about AI's potential. But I'm also increasingly aware of what might be slipping away as we embrace these tools.

[Image: A person working at a computer with multiple AI tools open]

There's a growing body of research examining this dialectic. Microsoft recently conducted a study on AI use within their company, finding that while AI is increasing the performance of developers and programmers, it's simultaneously leading to concerning losses in creativity and problem-solving abilities.

This creates what could be analyzed from a Hegelian perspective - a dialectic of looking forward while simultaneously looking backward. As McLuhan put it, "we look at the present through a rear-view mirror" even as we march into new technological territory.

The Political Dimensions of AI

We cannot separate AI's technical impacts from its political realities. The models driving today's AI revolution are built by companies like Anthropic and OpenAI - organizations whose valuations exceed the GDPs of many developing nations.

The economic model is particularly interesting - built around electricity consumption and usage fees, it creates dependency relationships that resemble addiction patterns more than tools for empowerment. This has already manifested in phenomena like "GPT psychosis," where people become excessively reliant on these systems. Walk through any public space and you'll find a few people talking to these AIs, seeking quick information, frictionless interaction, callous compassion, or even sycophantic admiration.

There's an interesting discourse emerging around friction - how AI's frictionless nature can be both beneficial and detrimental. Reducing friction helps us accomplish certain tasks more efficiently, but it may also eliminate the productive resistance that develops our capabilities.

The Personal Dialectic

My work as a creative technologist spans a broad range - from programming microcontrollers in embedded C to developing enterprise-scale voice AI agents. This variety has given me a unique vantage point to observe how AI tools affect different domains of knowledge work.

When I use AI to help with microcontroller programming - a field where I have over 15 years of experience - I find it genuinely amplifies my capabilities. I can "one-shot" the base level of code, quickly identify where the AI model falls short, and make necessary adjustments. My deep background knowledge allows me to critically evaluate the AI's output, learn from its approaches, and integrate that learning into my expertise.

The result? Shorter iteration cycles, faster prototyping, and continued learning. The technology extends my capabilities without diminishing my agency or understanding.

But my experience with React programming tells a different story. I began learning React about five years ago, around the same time AI coding assistants became available. I've noticed that my learning has plateaued since incorporating these tools. While I'm aware of more technologies and components, I struggle to connect them meaningfully or debug issues independently.

The complexity of modern web development - with its myriad files, tab switching, imports/exports, and npm packages - makes AI assistance incredibly tempting. Tools like Tailwind CSS, while powerful, involve so many classes that using AI to generate them feels almost necessary. But this efficiency comes at a cost: I'm not internalizing the patterns and principles that would allow me to work independently.
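To make this concrete, here's a hypothetical card component of my own - a sketch, not code from any real project - showing how even a trivial piece of UI accumulates utility classes that an LLM will happily emit in one go:

```tsx
// A made-up React card component. The class soup below is typical of
// Tailwind: layout, color, shadow, and dark-mode variants all live
// inline as utility classes.
type CardProps = {
  title: string;
  body: string;
};

export function Card({ title, body }: CardProps) {
  return (
    <div className="mx-auto max-w-sm rounded-xl bg-white p-6 shadow-lg ring-1 ring-slate-200 transition-shadow duration-200 hover:shadow-xl dark:bg-slate-800 dark:ring-slate-700">
      <h2 className="text-lg font-semibold tracking-tight text-slate-900 dark:text-slate-100">
        {title}
      </h2>
      <p className="mt-2 text-sm leading-relaxed text-slate-600 dark:text-slate-400">
        {body}
      </p>
    </div>
  );
}
```

Accepting a block like this from a model is effortless; writing it yourself is what forces you to learn what each class actually does.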

In one domain, AI amplifies; in another, it amputates. The difference isn't in the technology itself but in my relationship to it.


The Aesthetic Line

Through these contrasting experiences, I've come to believe there exists what I'll call an "aesthetic line" - a threshold of expertise that determines whether AI will primarily amplify or amputate your capabilities.


When you possess sufficient domain knowledge to critically evaluate AI outputs - to recognize good from mediocre, accurate from plausible-sounding - AI becomes a powerful amplifier. You maintain agency while gaining efficiency.

Without that foundation, however, AI can become a crutch that prevents the development of deeper understanding. You may produce seemingly competent work, but it lacks the substance that comes from genuine mastery.

This explains several trends I've observed:

  1. Experienced professionals thriving: Industry veterans who previously delegated certain tasks now handle them directly with AI assistance, extending their impact.

  2. Mature founders succeeding: People with deep experience but perhaps less opportunity in traditional settings (partly due to their age) are launching successful ventures by leveraging AI.

  3. Entry-level struggles: It's becoming harder for juniors to break into fields where AI can produce seemingly competent work, as the gap between "seeming to know" and actually knowing becomes less visible.

  4. Domain-specific impacts: In fields where the "aesthetic line" is more accessible (like certain visual design tasks), AI democratizes capabilities more effectively than in domains requiring deeper logical understanding.

Teaching in the Age of AI

As someone who teaches Emerging Tech, I've observed AI's impact on education with particular interest:

  • Students use AI for everything from coding to writing to visual design
  • Their work often has a "seems done but not yet" quality that instructors can detect but students cannot
  • Much of my teaching now involves helping students debug AI-generated work
  • There's a growing gap between "seemingness of knowing" and actual knowing
  • Students can prototype and validate ideas much faster than before, which is a huge plus
  • Visual expression has improved dramatically, especially for those without design backgrounds
  • Workflows are changing fundamentally - industrial design students now generate renderings before modeling, reversing the traditional process

This shift raises profound questions about what it means to learn and master a skill. If AI can generate competent-looking work without understanding, what does expertise look like in this new landscape?

Finding a Balanced Approach

This last example, from industrial design, is quite curious. Here the aesthetic line is generally accessible: many people can recognize a good rendering, and the artefact itself is unimodal - its purpose is purely visual. It does not work and cannot be interacted with. There is a logical series of steps behind the piece, but all of that is obscured once it's produced.

Hence we are seeing a lot of progress on fronts where the aesthetic line is easy to reach with little prior knowledge. The same can be said for music models like those from Suno.

A good practice with more functional artefacts (like code) is to ask the AI to explain what it would do, and then watch it do it. Many coding tools offer this as an option. I usually switch to ask mode, brainstorm the whole design with the model, push the LLM to explain things at my level of knowledge, and only then have it implement step by step. Otherwise, for me, it was like programming with my eyes closed and despairing at the sight of whatever the model came up with. As someone who loves the process as much as the output, this is a way I can use AI, learn something, and not get disheartened.
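For instance, here is a paraphrased version of the kind of prompt I reach for - illustrative wording, not a transcript:

```
Don't write any code yet. Explain, at the level of someone who is
comfortable with embedded C but new to React, how you would structure
this feature and why. Once I confirm I follow, implement it one step
at a time, pausing after each step so I can read the changes.
```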

To end, I have a very simple razor to propose.

If you cannot maintain the artefact you made without AI, you did not make the artefact.

More soon.

Cheers, Flu-sick Rohit

Images from Unsplash