Working Wider

Measuring the Unmeasurable:
The Innovator’s Wider Dilemma

How do you convince business colleagues to try something new?  At a minimum, you’ll have to answer three questions:

  1. Is it better?
  2. Will it work?
  3. How much will it cost?

When trying something new, these questions are reasonable but difficult, if not impossible, to answer definitively. We live in a time when "If you can measure it, you can manage it" is a well-accepted refrain. Measurement is the icon of rational, objective thinking and decision making.

An unintended consequence of today's measurement mindset is the pressure to translate everything into dollars. This leads to an overemphasis on historical costs, since these numbers are easier to get and more precise than estimates of future returns. For example, corporate marketers coping with social media are now challenged to justify the cost of hiring social media experts. HR walks in with explicit compensation data while marketing tries to build an equally powerful business case for the value of a Facebook Fan page with far less explicit evidence. What do you do?

New ideas, methods and products defy precise comparability because they are new and therefore lack a basis for comparison. In this post, we'll first explore the dilemmas that measuring innovation presents. Then we'll show how you can work around them to gain support for innovations that are inherently tough to measure.

Here are just a few of the measurement dilemmas innovators face as they try to take themselves, and their companies, beyond today's boundaries:

Precision & Comparability vs. Importance – Engineers and financial types are often accused of being happy with measures that are precise and comparable regardless of importance. Newsweek can precisely measure its subscriber erosion trends, but isn't it more important to understand why a competitor such as the Economist is growing?

Data vs. Evidence – Who does better innovation design: Google or Apple? Google relies on engineering data: they once user-tested 41 shades of blue for a web page design. Apple's iPhone redefined the smartphone using intuition and personal experience; that's evidence for sure, but not as rigorous as engineering data. Always use evidence, but don't assume data is the best or only kind.

Results vs. Prediction – Isn't it more important to know that you're about to lose a customer before it happens than to know the actual revenue loss after the fact? Product development teams rarely measure staffing changes or distractions, but experience shows that past a certain point, on-time delivery becomes impossible.

Tools vs. Answers – This is the "nut behind the wheel" issue. Why do you have a speedometer in your car? Not because you can't tell when you're driving too fast; a speedometer is a ticket-prevention device. You choose how far over the speed limit to drive based on your knowledge of local enforcement.

Correlation vs. Causality – The search for empirical data we can use to prove our case (or to defend our actions after the fact, so-called CYA behavior) drives our lust for numbers. Outside of clinical laboratory studies with control groups, most business measures only show correlation, not direct causality. Which was the primary cause of this month's margin increase: the price increase, new supplier discounts or improved enterprise resource planning technology?

With tools such as Excel and the skills taught in contemporary MBA programs, it's easy to lose sight of the underlying assumptions and logic behind the torrent of numbers. Any new product development project manager worth his salt knows how to adjust assumptions to beat the internal hurdle rate. Dial in the recent financial meltdown, and isn't it quite reasonable to step back and test our faith in numbers? Right?

Wrong. Not going to happen…at least not instantly. Why? Because "Is the new way better? Will it work? What does it cost?" are legitimate questions. Besides, your passion is moving your innovation forward, not fixing our misuse of measures. So what can you do?

Let’s take two familiar but squishy-to-measure improvement targets: customer and employee satisfaction. Net Promoter is a one-number customer satisfaction metric that’s gained widespread recognition from companies such as GE and Intuit. In the employee satisfaction space, placing high on Fortune’s 100 Best Companies to Work For has developed a similar legitimacy.

Each got traction by creating a rating scale with sufficient logic to pass the "reasonable man" test. Each then correlated high ratings with other measures such as stock market performance and revenue growth to legitimize the economic value of improvement efforts. In both cases, the links can rightly be called correlations rather than causal, yet over time business leaders tend to cut them slack and they get buy-in.
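For readers who want to see how such a one-number index is actually computed, here is a minimal sketch of the standard Net Promoter calculation (the percentage of promoters minus the percentage of detractors); the survey ratings shown are invented for illustration.

    def net_promoter_score(ratings):
        """Net Promoter-style index from 0-10 survey ratings.

        Promoters score 9-10, detractors 0-6; the index is the
        percentage of promoters minus the percentage of detractors.
        """
        if not ratings:
            raise ValueError("need at least one rating")
        promoters = sum(1 for r in ratings if r >= 9)
        detractors = sum(1 for r in ratings if r <= 6)
        return 100.0 * (promoters - detractors) / len(ratings)

    # Hypothetical survey: 5 promoters, 3 passives, 2 detractors -> score of 30
    print(net_promoter_score([10, 9, 9, 10, 9, 7, 8, 7, 3, 6]))

Because the result is a single number on a fixed scale, every function can report it, trend it over time and compare it across units, which is exactly the comparability business audiences expect.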

Another way to define the economic value of a new approach is with a maturity model. This is easy to construct for something such as social media. Stage 1 might be that an internal blog exists. Stage 2 could be that an external blog or Facebook Fan page is maintained. Stage 3 could be ongoing measurement of mentions and sentiment across social media. Stage 4 could be a closed-loop business process connecting those measures to quarterly objectives. You get the point.

One could then rate several companies and look at growth rates, market share gains, etc. for those at each stage. Even without monetizing the value of each stage, it’s pretty rare for anyone to say “I don’t want to improve,” so if a company comes out at Stage 2, it’s easy to suggest what it takes to move to Stage 3. Even though you may have yet to define the economic value of Stage 3 over Stage 2, that question doesn’t come up as often with maturity models. People will invest themselves and their cash to move up one notch if the cost is reasonable, the steps are clear and they can discern a capability difference.
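To make the idea concrete, here is a minimal sketch of how such a maturity assessment might be scored; the stage criteria follow the social media example above, and the company profile is hypothetical.

    # Social media maturity model sketch; stages follow the example above.
    STAGES = {
        1: "internal blog exists",
        2: "external blog or Facebook Fan page is maintained",
        3: "mentions and sentiment measured across social media",
        4: "closed-loop process ties measures to quarterly objectives",
    }

    def maturity_stage(capabilities):
        """Return the highest stage reached, counting only consecutive stages."""
        stage = 0
        for level in sorted(STAGES):
            if STAGES[level] not in capabilities:
                break
            stage = level
        return stage

    # Hypothetical company that blogs internally and maintains a Fan page.
    acme = {"internal blog exists",
            "external blog or Facebook Fan page is maintained"}
    current = maturity_stage(acme)                       # -> 2
    print(f"Current stage: {current}")
    if current + 1 in STAGES:
        print(f"To reach Stage {current + 1}: {STAGES[current + 1]}")

The output frames the conversation the way the post suggests: not "what is Stage 3 worth in dollars?" but "here is what it takes to get there."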

After I wrote Understanding Customer Experience for the Harvard Business Review, many executives pushed me to prove that investing in superior customer experience was worth the cost of improvement. Here are five tips that come from that work.

1.  CEOs who came up through outward-facing functions didn’t have to be convinced, whereas those with financial, manufacturing or engineering backgrounds had to cross a chasm of belief and needed numbers before they’d move. Implication: Start with the most credible voices who already “get it” and enlist their help.

2.  Net Promoter taught us the power of a single number. It’s an index that every function can use, report on and compare with others as well as track historically. Implication: Don’t fight comparability and precision; use them by creating a reasonable index of your own.

3.  Follow the bread crumbs of a past incident, monetize it with “actuals” and multiply by the number of times that incident occurs (a minimal sketch of this arithmetic appears after this list). For example, a medical device company’s sterile packaging prohibited opening their product outside the OR, yet not all the materials in each package were necessary for most operations. The unused materials had to be thrown out or re-sterilized. Hospitals got around this by custom ordering packages for each operation, which required sales reps to make personal deliveries to maintain good relationships. Multiplying the cost of preparing the custom packaging, the sales reps’ lost delivery time, etc. by the number of hospitals and sales reps gave us a base. We added anticipated sales growth (or the potential lack thereof) to project into the future. Implication: Monetize “facts” from the past using their own assumptions to project into the future.

4.  Appeal to a specific demographic where there’s greater need and application than across the board. Anyone who stands near a teenager knows that texting has replaced eye contact. If you’re trying to convince a company of the importance of social media, start with that demographic. Implication: Start where the burden of proof is lower.

5.  Rather than advocating for an innovation such as increasing relationship value using social media, ask people how they value relationships today. When and where do they invest in improving current relationships, and what proof do they have that it works? How do they measure the value of that investment over time? Now frame your suggestion using this input, showing how it does the same or better. Implication: Learn their assumptions and logic and then apply them to your innovation.
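As promised in tip 3, here is a minimal sketch of that monetization arithmetic. Every figure below is an invented placeholder; the point is to multiply known per-incident costs by how often the incident occurs, then project the total forward using the client’s own growth assumptions.

    # Monetize a recurring incident, then project it forward (all figures invented).
    custom_packaging_cost = 120.0      # extra cost per custom-ordered package ($)
    rep_delivery_cost = 85.0           # sales rep time per personal delivery ($)
    deliveries_per_hospital_per_year = 40
    hospitals = 150
    annual_growth = 0.05               # the client's own sales-growth assumption
    years = 3

    # Base: per-incident cost times how often the incident occurs each year.
    base = (custom_packaging_cost + rep_delivery_cost) \
        * deliveries_per_hospital_per_year * hospitals

    # Project the base forward using their own growth assumption.
    projection = [round(base * (1 + annual_growth) ** year) for year in range(years)]

    print(f"Annual base cost: ${base:,.0f}")
    print("Projected annual cost, years 1-3:", projection)

Because the growth rate and per-incident costs come from the client’s own books and plans, the projection is built on their assumptions, not yours.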

Summary

Business decisions should be supported by evidence, but numbers alone, particularly dollars, are not necessarily the best evidence for every decision. Gather and frame evidence and, where you can, use available precision and comparability, but don’t be held hostage to them. The fact is you can measure anything. Maybe not precisely, comparably or instantly, but the real issue is not the measures themselves; it’s how we use them to make decisions.

Comments
  • Amram Shapiro May 13, 2010 @ 16:44

    Great insight. At Book of Odds, we worked wider to create something novel and very valuable. We realized that there was a missing dictionary of probability, held back by the walls surrounding specialized bodies of information and data. If the odds of everyday life from domains as different as sports and medicine could be put in a common format, with equal rigor, the result would be a brand new kind of reference source. The narrowness of the specialist was impeding people’s ability to compare, say, cancer risk with the odds of having a White Christmas or a ballplayer hitting a double. The odds of getting cancer in one’s lifetime, for example, are almost exactly the odds of having a White Christmas in a given year in Chicago.

  • Christopher Meyer May 14, 2010 @ 13:37

    Best of luck with the Book of Odds! Your comment on White Christmas in Chicago could make me a proponent of global warming! 🙂
