Manage for impact, not performance


Peer-driven performance feedback is the key mechanism by which talent is managed at some of the largest tech companies today. The interpretation of this feedback by a calibrated committee drives compensation, career progression and improvement interventions. The effort that goes into planning and executing work so that it resonates with the committee is substantial. At best, this pushes naturally disorganized people towards structure; at worst, it is just a marketing exercise that costs companies weeks of every employee's time each year. Either way, companies place a huge bet on this system driving their workforce towards improvement, and ultimately, impact.

What’s wrong with this?

If you read a book about competitive differentiation (e.g. Different), you find that the products within a category get more homogeneous over time. Basically, they all copy each other's unique selling points and address their weaknesses until the differences are superficial. This is facilitated by the shared set of metrics that every product in the category gets judged by - over time, all metrics of all participants converge to the mean.

When we manage performance along a set of defined criteria, we get the same effect - a homogeneous workforce, where innovation and playing to one's strengths just don't happen. Now - this is exactly what we need if we are trying to squeeze out percentage points of efficiency. But when you listen to the CEOs of even the largest tech companies today, they talk about crazy new opportunities on one hand and existential fear on the other. In an environment like that, we shouldn't be optimizing for the lowest common denominator of our collective weaknesses - we should be equipping our employees to play to their strengths and take bold bets.

But how do we do that?

At Google, I kept hearing the same story over and over again: a senior engineer pushing for promotion, but too heavy on leadership and with not enough code output. A front-end engineer driving a ton of value in their product, but not solving 'complex enough' problems. Here is a story of a Googler who contributed to multiple projects and drove a ton of impact - but struggled because his work didn't conform to the rubric. The current performance review model forces all of these people to shift their focus away from where they have impact and towards where they have a weakness.

Why is the system set up this way?

I think we are afraid to measure impact. We also want to reward good process, not just good outcomes. But at the end of the day, if a person can create value in the internal 'marketplace' of a company, that is a key signal. We are afraid that maybe all engineers just want to go around and lead - so we push them to also produce code. The thing is, if there is enough opportunity for engineers to write simple frontends and deliver a ton of value, then that is the right thing for them to do, instead of chasing 'enough complexity' for a promotion.

The fix: Impact management.

Sure, let's run a basic performance review, so people can keep track of their blind spots. But we should spend more effort on an impact review with a less rigid structure and constant, opportunistic data collection instead of a regular cycle. Did you fix a legacy pipeline with no direct monetary implication, but now a ton of dependent engineers have one less thing to worry about? Get a few testimonials where they try to quantify how much you helped them. Did you spend a month on gruntwork, documenting an obscure sub-system, and now a few teams hit their OKRs because they could build their integrations much faster? They should give you credit.

Then, when you accumulate enough impact, you get promoted. If even after a year your impact isn't there, you should get coached to lean more into your strengths and to identify opportunities. Your manager might also need help in enabling you. The current half-yearly cycle is absurd when some folks need just a few months to accumulate two promotions' worth of impact.
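
To make the mechanics concrete, here is a minimal sketch of what such an impact ledger could look like: testimonials captured at value-delivery time, each with a rough quantification, accumulating towards a promotion bar on no fixed cycle. Every name, field and threshold below is a hypothetical assumption for illustration, not a description of an existing system.

```python
# Hypothetical sketch of an "impact ledger": testimonials are captured when
# the value lands, each with a rough quantified estimate, and promotion is
# triggered once enough impact accumulates - on no fixed review cycle.
# All names, fields and thresholds are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Testimonial:
    author: str             # the engineer or team who received the value
    description: str        # e.g. "legacy pipeline fixed, one less thing to monitor"
    estimated_value: float  # rough quantification, e.g. hours saved per quarter
    captured_on: date       # when the value was delivered, not when reviews run

@dataclass
class ImpactLedger:
    engineer: str
    entries: list[Testimonial] = field(default_factory=list)

    def record(self, t: Testimonial) -> None:
        """Capture impact opportunistically, at the moment it happens."""
        self.entries.append(t)

    def total_impact(self) -> float:
        return sum(t.estimated_value for t in self.entries)

    def ready_for_promotion(self, bar: float) -> bool:
        """No fixed cycle: promote whenever the accumulated bar is cleared."""
        return self.total_impact() >= bar

# Usage: credit accrues whenever a dependent team benefits, not at review time.
ledger = ImpactLedger("alex")
ledger.record(Testimonial(
    author="dependent-team-lead",
    description="documented an obscure sub-system, our integration shipped a quarter early",
    estimated_value=120.0,
    captured_on=date(2021, 3, 2),
))
print(ledger.ready_for_promotion(bar=500.0))  # False until enough impact accumulates
```

The point of the sketch is the shape of the data: credit is attributed when the value is delivered, not reconstructed from memory at review time.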

In many companies, there are fragments of this already in place:

  • Internal recognition systems (‘kudos’ at Google).
  • The currency of impact is quantified and attributed with OKRs.
  • The unstructured part of the performance review, where people write long prose months after they worked with you - which is nice, but not the same as a testimonial captured at the time the value was delivered.

With most of the building blocks in place and the huge energy already going into performance reviews, implementing something like the above is feasible.

What do you think? Join the conversation on Twitter!