Why NPS sucks


Sam Weaver

A strange kind of grief overwhelms me when I think about using NPS. I'll admit the idea of NPS is great: a single, easy-to-understand, widely accepted measure of customer satisfaction sounds like a Product Manager's dream. But the execution usually falls fantastically short. Here's why NPS sucks, and how to make it better.

So what exactly is an NPS score? If you’ve ever taken a survey you will most likely have seen the following question:

On a scale of 0–10, how likely are you to recommend <product or service> to a friend or colleague?

Your NPS (or Net Promoter Score) is the percentage of people who rated your product or service a 9 or 10 (known as Promoters) minus the percentage who rated it 0 to 6 (known as Detractors). Passives, sometimes called Neutrals, are people who gave a rating of 7 or 8, and have no effect on the score.
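That calculation can be sketched in a few lines of Python. This is a minimal illustration, not any official implementation, and the function name `nps` is my own:

```python
def nps(ratings):
    """Net Promoter Score: % of Promoters (9-10) minus % of Detractors (0-6).

    Ratings of 7 or 8 (Passives) count toward the total but shift
    the score in neither direction.
    """
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))
```

So a survey of six responses `[10, 9, 0, 5, 7, 8]` scores 0: two Promoters and two Detractors cancel out, and the two Passives only dilute the denominator. The score always lands between -100 (all Detractors) and +100 (all Promoters).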

Seems like a fairly useful means to gauge user delight, surely?

Here’s why NPS sucks

Have you hit your target persona?

NPS doesn’t care whether your respondent is a power user or a new user, or whether their job role (or general demographic) matches the one you targeted. If you spent months designing and building a product for “Jackie the Developer” and then an Ops person gives you a low score…