What That List Of “The Ten Best Companies For…” Really Tells You

People love lists that start with “Top 10 Companies For…”. Three came into my inbox just this week – Interbrand, World’s Best Workplaces, and HBR’s Top 20 Business Transformations. I read them all, of course. It’s human nature to sort things, and if someone else does the work, great! One less thing our brains must deal with.

But be careful. 

How a ranking is calculated can drastically change the outcome

When Roxie Strohmenger and I were updating the Forrester CX Index methodology in 2014, we spent hours exploring how customer experience (as we thought about it) was different from satisfaction (as measured by established benchmarks like ACSI). We wanted to offer a new, more specific way to measure how companies treat their customers. 

The key, we decided, was the object of the question (caution, grammar geek-out). “How satisfied were you with your most recent experience with this company?” is not the same as “How satisfied are you with this company?” Maybe a passenger had one awful flight with an airline, but the trip was an outlier; the airline gets it right most of the time. Similarly, you could be happy with how a company treats you personally but unhappy with how it treated your friend, or with news reports about employee working conditions.

Does it matter that much? Yes. Millions of dollars in brand, reputation, and advertising are at stake based on where a firm falls in a “best of” list its clients use to make buying decisions. For benchmark designers, it’s a huge responsibility. Transparency and fairness are a must. As a reader, you can also take steps to make sure you’re using the lists you see the right way.

Always read the methodology fine print

To be a savvy user of third-party rankings, ask yourself four questions: 

  1. Why was the list created? External rankings provide information to the market, but they also get media attention and drive leads for consulting and certification. There’s nothing wrong with using data to drive business; it’s just crucial for readers to know in advance. If you trade your email address for a copy of some list, you may get asked via email if you need help raising your score. They have a right to ask, and you have a right to say no.

  2. Who was considered? Some rankings only include companies that opted into the study. If a firm doesn’t have the time or money to participate, they won’t show up no matter how great they are. I recently learned the Oscars are like that. A film may seem to get snubbed when the truth is producers never threw their hat in the ring. Again, there’s nothing wrong with opt-in benchmarks. Just don’t assume that companies who aren’t listed are inherently worse than those who are.

  3. When was data gathered? Twice when I ran the CX Index, timing flukes caused unusual drops that disappeared the next year. One was a cable company negotiating with a sports league. They blacked out games in a major market for two weeks; those two weeks were the same two weeks our survey was in the field. Mergers and acquisitions also cause rankings to ebb and flow. Deciding when to remove an acquired brand from the list is non-trivial. And when I interviewed United Airlines about a sudden boost in CX scores, they attributed it to completing the integration with Continental, which they’d bought two years earlier.

  4. How was the rank calculated? Some lists are based on judges’ input, while others use quant data. HBR’s recent list of Top 20 Business Transformations was upfront about using expert judges, shared their names, and acknowledged how this approach could shape outcomes. If quant data drives scores, there’s less subjectivity. When firms complained about where they ranked in the CX Index, I explained that I had zero control over those results. Scores were their customers’ opinion, not mine. That said, which variables the authors use in a calculation can drive massive variation. This article from the Pew Research Center does a great job explaining why two studies came to radically different answers about “how many Americans go online?” Neither is right or wrong; they’re just different, as the sketch after this list makes concrete.
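
To make the “which variables” point concrete, here’s a minimal sketch in Python with made-up companies and scores (none of this is CX Index methodology): the same set of survey responses produces two different rank orders depending on whether the benchmark averages the whole year or only the most recent quarter.

```python
# Hypothetical 1-10 survey scores for three made-up companies,
# grouped by quarter of the year.
responses = {
    "AcmeAir":  [[8, 9, 7], [8, 8, 9], [7, 8, 8], [4, 5, 4]],  # rough Q4
    "BetaBank": [[6, 7, 6], [7, 6, 7], [7, 7, 6], [7, 7, 7]],  # steady
    "CableCo":  [[5, 6, 5], [6, 6, 7], [7, 7, 8], [8, 9, 8]],  # improving
}

def score_full_year(quarters):
    """Average every response from the whole year."""
    scores = [s for quarter in quarters for s in quarter]
    return sum(scores) / len(scores)

def score_latest_quarter(quarters):
    """Average only the most recent quarter's responses."""
    return sum(quarters[-1]) / len(quarters[-1])

def rank(scorer):
    """Return company names sorted best-first under a scoring rule."""
    return sorted(responses, key=lambda c: scorer(responses[c]), reverse=True)

print("Full-year average:     ", rank(score_full_year))       # AcmeAir first
print("Latest-quarter average:", rank(score_latest_quarter))  # CableCo first
```

Same customers, same answers, different winner. That’s why the choice of variables matters as much as the data itself.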

Given all this complexity, which we often don’t think about, here’s my advice for using rank-order lists: triangulate. Consult multiple lists when you need to make an important decision. If there’s only one list in your domain, look across time. Consistency isn’t easy, so a multi-year streak is a data point unto itself.
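
As a rough illustration of what triangulating can look like in practice, here’s a small Python sketch that combines several hypothetical lists (the list names and companies are invented) by counting how many lists mention each company and averaging its position across them.

```python
from collections import defaultdict

# Three hypothetical "best of" lists, best-first. Names are made up.
rankings = {
    "Analyst list":  ["AcmeAir", "BetaBank", "CableCo", "DeltaData"],
    "Survey list":   ["CableCo", "AcmeAir", "DeltaData"],
    "Industry list": ["AcmeAir", "CableCo", "BetaBank"],
}

# Collect each company's position on every list that mentions it.
positions = defaultdict(list)
for source, companies in rankings.items():
    for position, company in enumerate(companies, start=1):
        positions[company].append(position)

# Sort by breadth of coverage first, then by average position.
summary = sorted(
    positions.items(),
    key=lambda item: (-len(item[1]), sum(item[1]) / len(item[1])),
)

for company, ranks in summary:
    avg = sum(ranks) / len(ranks)
    print(f"{company}: on {len(ranks)} of {len(rankings)} lists, "
          f"average position {avg:.1f}")
```

A company that sits near the top of several independent lists, year after year, is a far stronger signal than a single appearance on any one of them.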

And above all, remember that data is a tool; the decision is yours to make. No metric (or ranking) is perfect, but they’re all useful.
