This post is in response to Kent McDonald’s excellent question on the Weighted Lead Time post. The question deserves a longer response. Kent asked… “What are some of the behavior changes you have seen from teams or organizations when they started paying attention to this metric?”
I spent over two years at Skype working on metrics at the organisational level, especially operational metrics. I learnt two key lessons:
- All metrics will be gamed. In fact Robert Benefield, an expert in game theory, gave the following advice: “All metrics will be gamed. When you design a metric, start with the behaviour you want and then create the metric so that when it is gamed, you get the behaviour that you want.” A variant of lead time is a great example: the easiest way to game lead time variants is to create smaller units of work, which is exactly the behaviour we want.
- The other lesson etched into my memory is that any individual metric can be gamed. As a result, it is necessary to create a system of metrics that provides constraints to prevent gaming.
Coming back to Kent’s question. Weighted lead time can be applied at three significant levels:
- Team: This should not be used as a metric. Each team will attempt to locally optimise, which will lead to higher weighted lead times for initiatives.
- Initiative: This is the metric that should be set for each team. Each team’s score should be the average of the weighted lead times of all initiatives they are part of.
- Organisation: It is harder for teams to impact this directly, but it can still be gamed.
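To make the initiative-level metric concrete, here is a minimal sketch. The function and data names are my own invention, and I assume each initiative’s weighted lead time has already been computed as described in the original Weighted Lead Time post; the sketch just rolls those numbers up into a per-team score.

```python
from statistics import mean

# Hypothetical data: weighted lead time (in days) per initiative,
# and which teams contributed to each initiative.
initiative_wlt = {"search": 30.0, "billing": 45.0, "mobile": 20.0}
initiative_teams = {
    "search": {"alpha", "beta"},
    "billing": {"beta"},
    "mobile": {"alpha"},
}

def team_score(team: str) -> float:
    """Average weighted lead time over all initiatives the team is part of."""
    return mean(
        wlt
        for initiative, wlt in initiative_wlt.items()
        if team in initiative_teams[initiative]
    )

print(team_score("alpha"))  # (30.0 + 20.0) / 2 = 25.0
print(team_score("beta"))   # (30.0 + 45.0) / 2 = 37.5
```

Note that a team only improves its score by shortening the initiatives it is part of, which is the organisational behaviour the metric is after.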
There are a number of ways that weighted lead time can be gamed; the most obvious are to deliver work with no value or to deliver a low-quality solution. The product metrics should ensure delivery of value, and it is important that the organisation has effective quality metrics from the customer’s perspective (a huge subject in its own right). Given that value and quality are not gamed, how else could a team game the weighted lead time metric?
- They could avoid being part of initiatives that are cross-team and likely to take longer to release value. This is actually the kind of behaviour we want: we want teams to find the simplest solution with the fewest dependencies. “Everything should be as simple as possible but not simpler.” This needs to be carefully monitored during the Capacity Planning session, i.e. watching for violations of the “but not simpler” rule. Once again, product metrics are key to ensuring the initiatives are effective.
- They could work on initiatives in the wrong order. They might prioritise initiatives that only they are working on to improve their WLT. As Capacity Planning produces an ordered backlog, we were able to create a “wrong order-o-meter” to see if teams were working on things in the wrong order: we weighted the effort a team spent based on the initiative’s position in their backlog. A high score did not mean the team had the wrong behaviour; it simply indicated that someone should have a look and understand why the team was working on initiatives in the wrong order.
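The post does not give the exact formula behind the “wrong order-o-meter”, so here is one plausible sketch under my own assumptions: weight each unit of effort by how far down the ordered backlog the initiative sits, so effort spent at the bottom of the list inflates the score. All names and the specific weighting are illustrative, not the original implementation.

```python
def wrong_order_score(backlog: list[str], effort: dict[str, float]) -> float:
    """Weight each initiative's effort by its zero-based position in the
    ordered backlog; effort spent far down the list raises the score."""
    total = sum(effort.values())
    if total == 0:
        return 0.0
    return sum(
        position * effort.get(initiative, 0.0)
        for position, initiative in enumerate(backlog)
    ) / total

# A team spending most effort on its lowest-priority initiative scores high.
backlog = ["checkout", "search", "reporting"]  # highest priority first
print(wrong_order_score(backlog, {"checkout": 8, "search": 1, "reporting": 1}))  # 0.3
print(wrong_order_score(backlog, {"checkout": 1, "search": 1, "reporting": 8}))  # 1.7
```

As in the post, a high score is a prompt for a conversation, not a verdict: the threshold at which someone goes and asks the team about it is a judgement call.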
So Kent, the answer is that the metric can easily be gamed. You need an ecosystem of metrics, processes, and informed people to make this stuff work. Sad to say, it’s not a silver bullet, just another useful tool for the toolkit.