03.Aug.2016

Kevin Lyons
Research Supervisor

The Ultimate Question – How Do You Measure Up?

Are constituents advocating for your institution?

Introduced to the business world in a 2003 Harvard Business Review article by Fred Reichheld, Net Promoter™ is a survey question that measures customer loyalty and predicts sales growth. It has been called the ultimate question because it can deliver those outcomes with a single question: how likely are you to recommend your service provider to others?

Respondents are segmented into groups based on their response to the recommend question, which uses an 11-point scale (0-10):

  • Customers who rate the question a 9 or 10 are defined as Promoters
  • Those who rate it a 7 or 8 are classified as Passives
  • Those who rate it a 0 through 6 are categorized as Detractors

The Net Promoter Score™ (NPS) is the percentage of Promoters minus the percentage of Detractors; Passives count toward the total number of respondents but do not otherwise affect the score.
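For readers who work from raw survey exports, the calculation is easy to script. The sketch below is a minimal, hypothetical Python example (the function name and sample ratings are ours, not taken from any survey platform) that classifies 0-10 ratings and returns the score as a percentage-point difference.

    # Minimal illustrative sketch: compute an NPS from raw 0-10 ratings.
    # The sample ratings below are made up for illustration.
    def net_promoter_score(ratings):
        """Return NPS as the % of Promoters (9-10) minus the % of Detractors (0-6)."""
        if not ratings:
            raise ValueError("no ratings supplied")
        promoters = sum(1 for r in ratings if r >= 9)
        detractors = sum(1 for r in ratings if r <= 6)
        return 100.0 * (promoters - detractors) / len(ratings)

    # 6 Promoters, 2 Passives, 2 Detractors out of 10 responses: 60% - 20% = 40
    sample = [10, 9, 9, 10, 9, 10, 8, 7, 6, 3]
    print(round(net_promoter_score(sample)))  # prints 40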

In addition to its strong predictive value, NPS is a valuable benchmarking tool, as industry norms are available at https://www.netpromoter.com/nps-benchmarks/. Norms for US consumer sectors range from 58 for the highest-rated sector (department/specialty stores) to 2 for the lowest-rated sector (internet service providers). These norms suggest that, across these sectors, an NPS over 40 is good and an NPS over 60 is excellent.

[Figure: NPS Score Grid]

While higher education is not represented in this NPS norm pool, we have included the NPS question in several alumni and current-student surveys to develop an industry norm. The average Net Promoter Score for our higher education clients is 51, and the norms for current students and alumni were virtually identical (53 and 49, respectively). The top performer within our dataset is a highly rated business school with an NPS of 69.

While knowing where you stand relative to the industry is vital, NPS also provides a segmentation framework that is helpful in shaping marketing strategy. For example, Cornell University uses NPS to fine-tune alumni programming in an effort to boost loyalty and giving. After staff-driven events, they send out a survey that includes the “would you recommend” question and a follow-up prompt for “why.” They then segment respondents into Promoters, Passives, and Detractors for the analysis. This allows them to learn what is important to each segment and why certain programming may have fallen flat.
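If your survey tool exports responses as a spreadsheet or CSV, this kind of segmentation is straightforward to automate. The sketch below is purely illustrative Python; the field names and sample comments are hypothetical and not drawn from Cornell's data.

    # Illustrative only: bucket open-ended "why" comments by NPS segment
    # so each group's feedback can be reviewed separately.
    from collections import defaultdict

    def segment(rating):
        """Map a 0-10 rating to its NPS segment."""
        if rating >= 9:
            return "Promoter"
        if rating >= 7:
            return "Passive"
        return "Detractor"

    # Hypothetical responses: a 0-10 rating plus the free-text "why" answer
    responses = [
        {"rating": 10, "why": "Loved reconnecting with faculty"},
        {"rating": 7,  "why": "Events feel generic"},
        {"rating": 4,  "why": "Programming never reaches my region"},
    ]

    comments_by_segment = defaultdict(list)
    for r in responses:
        comments_by_segment[segment(r["rating"])].append(r["why"])

    for group, comments in comments_by_segment.items():
        print(group, comments)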

We hope you will consider including the Net Promoter question in your next survey and will let us know how you measure up.