“High-velocity organizations are constantly experimenting and learning”

The book The High-Velocity Edge looks at market leaders in various industries to understand the common patterns that explain why they all outperform their competitors.

A common trait of those “rabbits” is their obsession with learning. As a recipe for success this sounds cliché; what really makes the difference is that these leaders learn relentlessly and in a structured way.

How does this translate to software and product teams?

  • Why does learning remain a key performance differentiator when nearly every tech and design professional claims to value continuous improvement?
  • What is the common learning pattern observed at market leaders?
  • How can we apply this pattern efficiently in a typical software and product team?

Valuing continuous improvement is overrated

Before the Enron scandal, the following values were displayed in the lobby:

Integrity, Communication, Respect, Excellence

Do you remember where those proclaimed values led them?

Back to learning and continuous improvement: run your own poll by asking software and product people whether they value learning and continuous improvement. You would probably get a score close to what an African dictator gets in his fourth re-election.

But do you really observe systematic learning and continuous improvement aligned with that poll? I mean beyond learning the latest tool or tech, which is really continuous improvement level 0 in tech and design jobs.

As with other lofty “values” or “cultural elements”, you are what you do, not what you claim to value.

Actually, the difference lies in how learning champions relentlessly apply their learning framework at every possible occasion.

Meta learning framework

In The High-Velocity Edge, Steven J. Spear describes the learning framework commonly seen at Lean champions like Toyota. He notes that learning is always structured around the gap between the current state and performance versus an ideal state and performance, using the following elements:

Actual:

  • Background: what's the problem being solved or the process being improved?
  • Current condition: how is the work done today? What has been experienced so far?
  • Analysis/Diagnosis: what causes the problem as revealed by investigations?
  • Actual outcomes: summary of how the work actually performs

Expected:

  • Countermeasure treatments: what should be tried in order to improve?
  • Expected target condition: how should the work perform with the countermeasures in place?
  • Gap analysis: what's the gap between predicted and actual performance? Learn and iterate!
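To make the structure tangible, here is a minimal sketch of Spear's actual-vs-expected elements as a data structure. The class and field names are my own hypothetical naming, not anything from the book:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class LearningRecord:
    """Hypothetical sketch of the actual-vs-expected learning structure."""
    # Actual
    background: str          # problem being solved / process being improved
    current_condition: str   # how work is done, what was experienced so far
    analysis: str            # causes revealed by investigation
    actual_outcomes: str     # how the work actually performs
    # Expected
    countermeasures: List[str] = field(default_factory=list)
    expected_target: str = ""

    def gap_analysis(self) -> str:
        """Contrast predicted and actual performance to drive the next iteration."""
        return (f"Expected: {self.expected_target}\n"
                f"Actual:   {self.actual_outcomes}")
```

The point of writing it down this explicitly is that every learning occasion fills the same slots, so gaps between prediction and reality become impossible to hand-wave away.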

Learning frameworks in tech and product teams

This kind of meta learning framework used by Lean champions is very high-level and generalist, and may not seem concrete and usable enough for software and product teams. What I've found more efficient is applying specific learning tools to specific, identified learning opportunities such as these:

  • Post mortems after incidents in production
  • Iteration retrospectives
  • Retrospectives after a big release (you may call it post mortem, even if that release is just the beginning :-))
  • Analysis of costly late bug detection during the development process

Other opportunities may seem basic but are in fact rarely exploited, and denote a high level of maturity:

  • Learning based on the gap between a feature's or project's actual and expected schedule. At best we see a vague cause analysis that never leads to formalized knowledge applied to the next feature or project.
  • Learning based on the gap between actual feature usage and what we expected when we invested tons of effort building it.

Of course, learning can happen in many more situations than these classic opportunities, but putting these simple ones on a checklist is a very efficient first step toward making people more knowledgeable about continuous improvement.

Templates for learning situations

Writing down and refining customized templates for these different types of learning opportunities helps a lot. In that respect, you can borrow ideas from both the Lean literature and more practical guides, like the excellent chapter on post mortems after production problems in the book Web Operations, or the classic Agile Retrospectives.
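As an illustration, a template can be as lightweight as a fill-in-the-blanks text skeleton. The section headings below are a hypothetical example of what a post mortem template might contain, not a template from either book:

```python
# Hypothetical post mortem skeleton; adapt the sections to your own team.
POST_MORTEM_TEMPLATE = """\
Post mortem: {title}

Impact
{impact}

Timeline
{timeline}

Root cause analysis
{root_cause}

Countermeasures
{countermeasures}

Expected condition once countermeasures are in place
{expected}
"""


def render_post_mortem(**sections) -> str:
    """Fill the skeleton so every incident review answers the same questions."""
    return POST_MORTEM_TEMPLATE.format(**sections)
```

Keeping the template in the team's repository means it evolves like any other artifact: each retrospective on the retrospective can refine it.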

After years of coaching continuous improvement in IT and design, I think this kind of simple, usable, in-house template makes a big difference, because it drastically lowers the operational and psychological cost of entry for continuous improvement efforts.

Of course, creating an environment that fosters blameless post mortems and respectful retrospectives is the hardest part, but these modest little templates help a lot!

What about you?

  • Are you comfortable with the distance between your continuous improvement claims and your actual continuous improvement?
  • Do you use a meta learning framework like the one described in the Lean literature, or do you prefer more specific learning tools?
  • What learning opportunities do you make the most of? And which ones do you think you still miss?
  • Do you use custom learning templates for different learning opportunities?

I'd love to hear about your experiences in the comments below!

Ismaël Héry

I am in charge of the team designing, building, and operating the new print and digital CMS for the French news brand Le Monde in Paris. Before that, I spent 8 years consulting at OCTO Technology, helping product and tech teams improve their design and build activities through leadership, Lean, Agile, and DevOps.

My Twitter and Google+.
