“What gets measured gets managed”

You've certainly heard one of these:

  • “Man, how can you say you're doing unit testing without looking at your code coverage metric daily?”
  • “As a professional Product Manager, I look (or never look) at user-centric metrics before any important design decision.”
  • “Read Lean Analytics and you will definitely know which metrics really matter for your new product.”

Like it or not, we are badly obsessed with measurement.

First, let's see where this overused, intimidating “measure to manage” quote comes from. Then let's share a few simple heuristics to get the best out of measurement discipline in tech and product activities, while staying away from measurement hubris.

Source of the quote: Peter Drucker?

What gets measured gets managed

For ages we've been intimidated by consultants and managers voicing this quote loudly. Sometimes they attribute it to Peter Drucker and sometimes to W. Edwards Deming (who actually said almost the opposite, more on this later). You've probably heard variations like “You can't improve what you can't measure”.

Peter Drucker invented modern management along with a handful of others (Henry Ford, Taiichi Ohno, W. Edwards Deming…). His work has led to countless out-of-context quotes, real or invented, particularly in the domain of management by objectives.

As far as I've searched, I've not found any proper source for “What gets measured gets managed”, at least nothing from Drucker or Deming!

Tips for practical and mentally healthy measurement

We have no reliable author for this measurement mantra, but we all have painful experiences of measurement obsession. Let's share some very simple practices to get the best possible outcome from software and product measurements, without sliding to the dangerous side of measurement idolatry:

  • Metric agility
  • Not every metric is a KPI
  • Graph everything
  • Keep in mind what Deming (really) said about metrics
  • Keep in mind the Hawthorne effect
  • Learn the usual traps in interpreting measures

Metric agility

As the folks at Etsy said when they open-sourced StatsD:

We’ve found that tracking everything is key to moving fast, but the only way to do it is to make tracking anything easy.

Here is a minimal condition of success for a typical development or product team: at least two team members are able to declare and deploy a new metric in less than a day (for 80% of metrics).

DevOps and top web performers emphasize that Ops should no longer be a bottleneck for adding new metrics, and that dev and product people should be autonomous!

Of course, the bigger your data, the more complex it gets to add a new metric and to manage the whole system. But as your system grows, you probably also have a bigger budget for engineers to work on the cool but complex metrics system ;-)
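To make "tracking anything is easy" concrete, here is a minimal sketch using the Python client for StatsD. It assumes a StatsD daemon listening on localhost:8125 (the client's default); the metric names and the checkout function are made up for illustration:

    # pip install statsd  (a Python client for Etsy's StatsD)
    import statsd

    # StatsD speaks UDP, so these calls are fire-and-forget: they cost
    # almost nothing and won't fail even if no daemon is listening yet.
    metrics = statsd.StatsClient(host="localhost", port=8125)

    def checkout(cart_size: int) -> None:
        metrics.incr("shop.checkout.started")           # count an event
        with metrics.timer("shop.checkout.duration"):   # time a code path
            total = cart_size * 9.99                    # stand-in for real work
        metrics.gauge("shop.checkout.cart_size", cart_size)

    checkout(3)

One line per metric, no Ops ticket: that is the level of friction to aim for.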

Not every metric is a KPI

Not every metric is a KPI. Some measures are tactical and can have a short lifetime, since you don't intend to track them forever. They can relate to a contextual, somewhat small domain or problem. Knowing, and communicating, that they're just tactical helps you set them up efficiently, use them intelligently, and drop them lightly when the time comes.

Typical examples of tactical metrics:

  • a tactical metric to manage a new product feature, for instance during A/B testing or pre- and post-launch
  • a tactical metric to investigate and understand something better; typically, after an incident, the post-mortem leads you to measure something that may have had an impact during the incident
  • a tactical metric to follow a PDCA cycle and provide the information required for a wise "Check" step
  • tactical metrics for a team responsible for part of a product or a domain, to follow the performance of that sub-system

We can't keep an eye on all those tactical metrics forever. We just don't have enough eyes, brains, working hours, or walls of dashboards. Knowing they are tactical helps you drop them when appropriate, in order to move on to new, more relevant ones, while still keeping your eyes on the real KPIs.
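One possible convention to make that dropping easy (the prefix and names below are hypothetical): give tactical metrics their own namespace so they can be listed, reviewed, and retired once they've served their purpose. For instance, with the same Python StatsD client:

    # Hypothetical convention: prefix tactical metrics with "tactical." and
    # the experiment name, so a periodic cleanup can list and retire them.
    import statsd

    ab_test = statsd.StatsClient(prefix="tactical.onboarding_ab_test")
    ab_test.incr("variant_b.signup")
    # emitted as: tactical.onboarding_ab_test.variant_b.signup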

Graph everything

The easiest one, but sadly too frequently ignored.

The number one piece of advice from statistics expert Donald J. Wheeler in his book Understanding Variation: begin by graphing your data; otherwise the metric is useless and does not speak to you.

As this great blog post explains, the brain interprets quantitative data and spots patterns far more easily and instantly when the data is presented graphically.

Do you get more insights from a table of raw numbers, or from the same information graphed?

How many metrics do you still follow in table format instead of graphs? Blindly missing trends, spikes, and patterns? Confusing routine variation with real outliers?
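A minimal sketch of the difference, with made-up weekly numbers and assuming matplotlib is available: printed as a table, the spike is easy to miss; graphed, it jumps out immediately.

    # Hypothetical weekly error counts: week 7 is an outlier.
    import matplotlib.pyplot as plt

    weekly_errors = [12, 14, 11, 13, 12, 15, 41, 13, 12, 14]

    print(weekly_errors)  # the "table" view: just a wall of numbers

    plt.plot(range(1, len(weekly_errors) + 1), weekly_errors, marker="o")
    plt.xlabel("Week")
    plt.ylabel("Errors")
    plt.title("Same data, graphed")
    plt.show()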

Keep in mind what Deming (really) said about metrics

W. Edwards Deming never said anything along the lines of “You can't improve what you don't measure”. By the way, if this saying were true, lots of improvements you've made recently would never have occurred ;-)

Actually, Deming (although he was a statistician) said the opposite in his 14 Points for Management in his classic Out of the Crisis:

Eliminate management by objective. Eliminate management by numbers, numerical goals. Substitute leadership.

As a simple illustration, the development team with the best code coverage I've ever seen does not look at code coverage regularly and has no minimum coverage goal; they just ensure that tests are written systematically and thoroughly.

Deming also said:

"In god we trust, all others bring data"

This may seem to contradict his point above. That's another example of the limits of basing your thinking on quotes from great people ;-) But in this case, “data” may mean context, facts, hypotheses, and results, not only numerical figures!

Keep in mind the Hawthorne effect

The Hawthorne effect is a kind of "tell-me-what-you-measure-and-I-will-tell-you-how-people-react-to-it" effect. In studies at the Hawthorne Works factory in the 1920s and 1930s (named and analyzed in the 1950s), researchers found that people tend to change their behavior during experiments to optimize the measured results, probably because they consciously or unconsciously feel they are being personally evaluated through the measures (which may actually be the case).

Concretely, that means setting up new metrics and indicators will improve those metrics… just by setting them up and letting people adapt their behavior accordingly.

Two sides of the coin: sometimes that is exactly how we want to influence things, by setting up a system that encourages good behavior; but sometimes we intend to measure a single factor, all other things being equal or held constant, including people's behavior, arrgh. Worst case scenario: behavior changes lead to local optima because people focus too much on the metric.

Learn the usual traps in interpreting measures

This last one is neither simple nor a heuristic, but it is critical nonetheless.

The more we want to base software and product decisions on measurement, the more we need to know about the usual data interpretation mistakes. That means relearning things we learned at school and forgot on day one of our careers.

We are prone to big mistakes when interpreting measures if we don't understand basics like routine variation, correlation versus causation, regression to the mean, or sampling bias. And a lot of others!
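As one illustration, here is a tiny simulation (made-up numbers, not real data) of regression to the mean: pick last month's “worst” teams from a purely noisy measurement and, on average, they look improved next month even though nothing changed.

    # Twenty identical teams, measured twice with noise. The bottom five of
    # month 1 are below average mostly by luck, so month 2 typically pulls
    # them back toward the mean: an "improvement" that is pure noise.
    import random

    random.seed(1)
    month1 = [50 + random.gauss(0, 10) for _ in range(20)]
    month2 = [50 + random.gauss(0, 10) for _ in range(20)]

    worst = sorted(range(20), key=lambda i: month1[i])[:5]
    print(f"bottom five, month 1: {sum(month1[i] for i in worst) / 5:.1f}")
    print(f"bottom five, month 2: {sum(month2[i] for i in worst) / 5:.1f}")

A coaching program applied between the two measurements would have looked effective while changing nothing at all.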

What about you?

What are your personal heuristics for getting the best out of software and product measurement, without sliding to the dark side of measurement obsession?

What are the most common interpretation mistakes you observe around you?

I'd love to hear your thoughts in the comments below!

Ismaël Héry

I am in charge of the team designing, building, and deploying the new print and digital CMS for the French news brand Le Monde in Paris. I've been head of software development @lemonde.fr, and a consultant at OCTO Technology, helping product and tech teams improve their design and build activities. Lean/Agile/DevOps FTW.

My Twitter and Google+.
