AI Won’t Replace You, But a Manager Using AI Will - Yaniv Preiss


The integration of AI into the workplace is not just a minor upgrade or an incremental change, but a fundamental shift in how business is conducted. As AI becomes commoditized, meaning accessible to every manager and employee rather than just Machine Learning PhDs, the differentiator is no longer who has the tool, but who knows how to use it and how to lead the humans using it.

Recent analysis shows that AI doesn’t reduce work, but intensifies it. To navigate this new intensity and complexity, managers must bridge the gap between supercomputing power and human potential.


Balancing AI Adoption

Both under-adoption and over-adoption of AI present existential risks to a company.

Under-adoption can lead to loss of market share due to slower experimentation, stagnant productivity, and top performers leaving for more advanced environments. Customers likely expect more than mere “AI dust” in the product or service you’re offering.

Over-adoption might result in “innovation theater”, where the strategy is just “do something with AI”, and failed experiments distract from business impact.

Tools amplify competence; they don’t replace it. The same goes for AI tools. If a manager lacks a solid foundation in leadership, AI will not turn them into a great manager – it will only help them make mediocre decisions faster. Imagine AI “telling” a manager who lacks competence in giving feedback to give more feedback, or teaching them ineffective ways to give it, like the “shit sandwich”.

As a manager, keep track of AI progress – not as an addiction to chasing the shiny new object, but as a search for more efficient practices. If an engineer leverages five AI agents to code and another to review instead of doing everything manually, that can be a huge gain. Learn how to coordinate and streamline the work as a team, and analyze where the bottlenecks are – you want the whole team to benefit, not only a specific individual. Let ideas come from the team and allow experimentation within guardrails.

Rebuilding the Human Connection

It’s no secret that many AI projects fail, and the reason is not always technological.

Employees often view AI as a magical black box and fear being replaced, triggering a Fight-Flight-Freeze-Fawn (FFFF) response that sinks performance. This reminds me of a former colleague whose team started onboarding remote contractors; after each successful onboarding, an existing core team member was fired, until the manager was the only core person left. He told me they knew they were cannibalizing themselves. Employees may harbor similar thoughts about AI.

Layoffs and restructuring are happening all around us. “New employees” in the form of highly capable digital agents join the team. Do not underestimate employees’ fear. They might not only drag their feet, but also actively sabotage AI adoption. I’ve seen this happen before with far less threatening changes, such as continuous integration, agile methodologies, and automated tests.

Every change incurs a loss – of identity, authority, status, reward, or belonging. When you face employee resistance, rather than classifying the person as hostile, use a tool like Immunity to Change to figure out what threatens them, and devise small experiments to test those fears against reality. This can be done with the support of external coaches.

Employees need to trust leaders – their values, their decisions, their transparency, and them as people. Human connections are good for business, and AI cannot build them. It can simulate empathy, but it lacks somatic resonance and carries biases that might feel completely foreign.

Managers need to double down on:

Transparency: being clear about how and why AI is being used.

Accountability: establishing that “the AI told me to” is never a valid argument. Humans remain 100% responsible for the final decision and its ethics. They must be able to reason about the output, whether it was produced by them, with AI assistance, or fully AI-generated. Make sure to check for this during candidate interviews as well.

As I highlighted in the article about the risks of AI, employees and managers might develop a dependency on AI after outsourcing their thinking to it. We’ve already seen software engineers lose skills and become unable to write simple code.

Psychological safety: creating an environment where employees feel safe to share their “secrets” and teach each other how they use AI effectively, rather than hoarding prompts for job security.

Warning: using AI to monitor employees is a catastrophic mistake. It destroys trust and encourages gaming the system.

Shifting From Output to Outcome

Despite ample literature, many leaders still care deeply about working hours and output, and pay less attention to results, i.e., business outcomes. This manifests in measuring working hours, the number of code contributions, adherence to arbitrary deadlines, and other things that are easy to measure. The goals and OKRs they set are sometimes purposefully vague and “unmeasurable”, and there is no learning from the achieved results.

In the new AI world, traditional metrics like “hours worked” matter even less. “Token usage” can be misleading – if you measure it only as evidence of encouraged adoption, you may miss the inefficiencies or the actual business result. The signal can still be interesting, though: you may find that your lowest performer is using 5x more AI tokens than others while making far less impact.

Now that building with AI and experimenting is an order of magnitude faster than before and requires fewer people to be involved, outcomes become the obvious target of measurement. For leaders who haven’t done so before, the emphasis is much more on the “why” and “what” than the “how”.

For teams that operate as a “feature factory”, where Product decides what to build and engineering is only an execution arm – start learning about empowered teams and how to get yourself and the engineers a say in the product. This is even more important with AI capabilities, and it will allow you to move faster instead of waiting on the Product bottleneck. (Hint: talk to customers.)

Navigating the Identity Crisis

VUCA (Volatility, Uncertainty, Complexity, and Ambiguity) is greater than ever. The speed of AI advancement holds big promises for new capabilities, but might also create fatigue due to a constant state of anxiety. Knowledge becomes obsolete quickly, sometimes within weeks, and employees may face an identity crisis as AI can easily perform tasks they spent years perfecting. This leads to a fear of redundancy and the feeling of an “always-on” workday that never ends. The same goes for the manager who invested years in learning how to effectively manage humans in traditional structures.

Self-regulation: learn how to reduce your own stress when overwhelmed. This might be box breathing (4-4-4-4), the 5-4-3-2-1 technique of naming items using the five senses, feeling grounded on the floor, recalling a favorite object or a positive memory, and more. Once you’ve practiced this successfully yourself, you can guide your team through it.

Stress reduction: encourage lower-frequency environments – fewer pings, checking email only 3 times per day, and less news consumption to reduce survival-mode anxiety.

Focus time: start each day by identifying the one main thing to achieve, prioritizing proactive creativity over reactive communication. Magic is often in the work we procrastinate on.

Experimentation: use AI’s speed to run more experiments. In a world of geopolitical and technological uncertainty, the team that learns the fastest wins. This needs to be done without overwhelming the team, and with enough psychological safety that failed experiments aren’t punished.

Energy tracking: check the past two weeks on your calendar – which activities took your energy and which recharged you? Can you do more of the latter and drop or delegate the former? Double down on your strengths.

Truthfulness: don’t make promises you cannot fulfil, like “nobody will get laid off” – it’s not under your control, and things change anyway.

The Manager’s Tools

While AI excels at data processing, KPIs, and pattern recognition, it cannot navigate complex stakeholder dynamics or provide the ethical judgment required for high-stakes decisions.

As a manager, your value now lies in higher-level judgment. Use AI as a sparring partner for brainstorming or conflict resolution, but rely on your own intuition for growing your people and defining the competitive advantage. By sharing doubts and brainstorming with the team under uncertainty, you foster a culture of collective intelligence that no algorithm can replicate.

The management fundamentals are non-negotiable. Especially in times of stress, keeping 1:1s is crucial for trust, rapport, and alignment. Timely feedback, coaching, and effective delegation will sustain growth and performance improvement.

As a manager, make sure to cover legal compliance and AI risks. Set the guardrails and don’t leave your directs exposed. Make sure your activities are not exposing the company to risk, and that decisions are ethical.

Recap

  • Clarify the why: make sure the team knows the competitive advantages, goals and why they were set, so they may come up with more ways to achieve them and know if they’re on track. Connect the work to the goals and give specific meaning to communications for the team
  • Balance adoption: avoid both “AI dust” (under-adoption) and “innovation theater” (over-adoption) by focusing on tools that amplify existing competence
  • Run more experiments: use AI’s speed to test ideas quickly, ensuring the team learns faster than the competition without punishing failures
  • Set guardrails: oversee legal compliance, ethics, and security risks so employees aren’t left exposed to liability
  • Keep fundamentals: prioritize 1:1s, timely feedback and coaching to maintain rapport and growth during periods of high stress
  • Practice transparency: be explicit about how and why AI is being used in the organization
  • Enforce accountability: establish that “AI told me so” is never an excuse, and humans remain 100% responsible for all decisions
  • Foster psychological safety: create an environment where employees feel safe to collaborate and share their AI “secrets” and prompts rather than compete
  • Avoid surveillance: refrain from using AI to monitor employees, as it destroys trust and encourages gaming the system
  • Manage energy, not time: shift focus from tracking hours to managing the team’s energy levels and well-being
  • Model self-regulation: use and teach techniques like “Box Breathing” (4-4-4-4) or the “5-4-3-2-1” grounding method to handle overwhelm. Explain that a nervous system in survival mode is inadequate for creativity and performance
  • Create focus time: start each day by identifying one main thing to achieve before diving into reactive pings and emails
  • Reduce frequency: lower the “noise” by encouraging fewer pings, checking emails only 3 times a day, and reducing news consumption
  • Track energy: review your calendar bi-weekly to identify which tasks recharge you and which drain you; delegate or drop as needed
  • Measure outcomes over output: ignore vanity metrics like “hours worked” or “token usage.” Instead, measure the actual business impact and results, such as OKRs
  • Use AI as a sparring partner: use AI for brainstorming, market research, and self-reflection, but rely on human intuition for team dynamics and growth
  • Vet candidates for reasoning: during interviews, check if candidates can explain the logic behind their work rather than just relying on AI-generated results
  • Bridge the gap: act as the translator between high-speed AI capabilities and the human needs of your team
  • Keep a finger on the pulse: stay up to date with progress on tools and techniques