SAFE versus Theory of Constraints

Last week, Skip Angel, one of the Agile Coaches I work with, attended a training course on SAFE. He gave a cracking one-hour summary of the course to the coaching team at the client. Our conclusion was that we have a process that looks very similar to SAFE, which gave us confidence that we are on the “right track”. The SAFE material is lovely; it is wonderful marketing material for selling scaled agile.

Our other conclusion was that we much prefer our approach based on Theory of Constraints to implementing SAFE. These are my (not my client's) opinions on SAFE versus Theory of Constraints.

  1. We started out with Theory of Constraints and ended up with a process similar to SAFE. In some places we are behind but in others we are ahead.
  2. We have a deep organisational understanding of why we have adopted each practice. This has taken months to achieve, but we believe it will ensure more support and stability for the approach. Adopting SAFE would require a leap of faith.
  3. Theory of Constraints has given us a clear roadmap of the significant issues that we face in the next year or two. We are now creating some real options to prepare.
  4. Theory of Constraints helps us identify issues whereas SAFE tells us what to do. This means that Theory of Constraints adapts better to the context.
  5. Rather than bland statements like the need for servant leaders, Theory of Constraints is helping us identify specific practices that management need to adapt to make our system successful.
  6. Theory of Constraints is allowing us to evolve our process in a way that gives management the necessary information to perform proper risk management of the process.
  7. SAFE is a development-centric framework. Using Theory of Constraints means that we have already partially incorporated Marketing; Finance are fully engaged and are co-evolving practices to ensure fiscal controls are in place. We are currently planning the engagement with Human Resources.

Skip highlighted that the most impressive part of SAFE is that its creator acknowledges gaps in the process and looks to the community to fill those gaps. It will be interesting to see whether that happens. I had a poke around SAFE to see how it addressed some of the trickier problems we have had to resolve. So far, it has nothing to say about them. The big gaps are around the “Portfolio” aspects of SAFE… or in other words, the scaling bits.

It will be interesting to see how SAFE fills the gaps. Will it adopt a solid practitioner-led approach like the XP and ATDD communities, or will it anoint high priests who lack practical experience, like some other marketing-led communities?

My advice to anyone thinking of scaling agile: use SAFE as a map for the foothills, but use Theory of Constraints as your compass. The map soon runs out… so build your leadership skills in Theory of Constraints and keep SAFE in your pocket for when you get lost and need inspiration. Rather than give your executives a map to give them confidence, help them learn new skills to see the problems they need to solve. Your executives will surprise you by solving problems in ways you never considered. After all, they have different options from you… so help them see them.


Duration and the Time Value of Money

A while back I suggested that using time value of money had little or no impact on the calculation of what I’m now (as of five minutes ago) calling “Software Investment Duration”.

To validate my assertion (sweet use of TDD language), I’ve calculated the impact of interest rates ( and hence time value of money ) on the calculation of “Software Investment Duration” for three scenarios. In each case, I’ve calculated the value using no interest rate ( time value ) adjustment, and with an interest rate of 20% (i.e. the US Peso). To simplify the blog post I’ve put the description of how the interest rate adjusted value is calculated in an appendix at the bottom of the post.

1. Constant Investment

In this scenario, there is a constant investment of 100 US Pesos per month for six months until the release.

[Figure: constant investment cash flows]

The zero interest rate value is approx 3.5. The interest rate adjusted value is 3.45. Approx 1% difference.

2. Big Upfront Investment

In this scenario, there is a large upfront investment of 550 US Pesos, followed by an investment of 10 US Pesos per month for five months until the release.

[Figure: big upfront investment cash flows]

The zero interest rate value is approx 5.75. The interest rate adjusted value is 5.74. Almost no difference.

3. Big Last-Minute Investment

In this scenario, there are monthly investments of 10 US Pesos per month for five months until a big investment of 550 US Pesos just before the release.

[Figure: big last-minute investment cash flows]

The zero interest rate value is approx 1.25. The interest rate adjusted value is 1.24. Approx 1% difference.

So why does this matter?

The examples show that even with a fairly extreme interest rate (20%), the impact on the “Software Investment Duration”, or SID, is only about 1%. For lower interest rates, the impact is even smaller. So time value of money or interest rate adjustments only give us an extra 1% of accuracy in our calculation.

More accuracy is more good surely?

This level of accuracy is misleading. Whenever you calculate the cash flows going into the calculation, there are always other factors that have a bigger impact than 1%: the effect of holidays, training and other non-development tasks that may or may not be counted; the effects of bugs and/or support that may or may not be counted; the effects of salary differentials between roles that may or may not be counted. ( Assigning work to an individual rather than a team requires you to create a time-tracking system, which is more than a 1% overhead. )

Worst of all, including an interest rate adjustment makes the concept much harder for people to comprehend. “Everything should be as simple as possible but not simpler”. See below.

Appendix – How the interest rate adjusted value is calculated.

To calculate the interest rate adjusted value, I calculated a simple discount factor as follows:

Discount Factor = exp ( -1 * interest rate * time in months / number of months in the year )
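For anyone who wants to check the figures, here is a minimal Python sketch (my own illustration, not from the original workings) that reproduces the three scenarios above. It assumes each cash flow is weighted by its time to release in months, with the first of the six monthly payments made six months before release, and applies the discount factor above at a 20% rate.

```python
from math import exp

def investment_duration(cash_flows, rate=0.0):
    """Cash-flow-weighted time to release, in months.

    cash_flows: list of (amount, months_to_release) pairs.
    rate: annual interest rate; 0.0 gives the unadjusted value.
    """
    adjusted = [(amount * exp(-rate * t / 12.0), t) for amount, t in cash_flows]
    return sum(cf * t for cf, t in adjusted) / sum(cf for cf, _ in adjusted)

# Scenario 1: 100 Pesos a month for six months (first payment six months before release).
constant = [(100, t) for t in (6, 5, 4, 3, 2, 1)]
# Scenario 2: 550 Pesos up front, then 10 a month for five months.
big_upfront = [(550, 6)] + [(10, t) for t in (5, 4, 3, 2, 1)]
# Scenario 3: 10 Pesos a month for five months, then 550 just before release.
last_minute = [(10, t) for t in (6, 5, 4, 3, 2)] + [(550, 1)]

for name, flows in (("constant", constant), ("big upfront", big_upfront), ("last minute", last_minute)):
    print(name, round(investment_duration(flows), 2), round(investment_duration(flows, rate=0.2), 2))
# constant 3.5 3.45, big upfront 5.75 5.74, last minute 1.25 1.24
```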

Although most readers of this blog will probably find this trivial, you should try discussing exponential functions with your business colleagues in finance, HR, product, UX design and marketing. In fact, with anyone who did not study for a science or engineering based degree. What you will find is that you will exclude a number of your colleagues from the business investment process for an extra 1% of accuracy.

This is a simple way to calculate a discount factor. To calculate it properly, you should first decide on which interest rate to use… Risk free? WACC? (Popular with the MBA crowd is the Weighted Average Cost of Capital).

Then you need some analytics to generate a discount curve using splines, etc. A nasty, tricky business.


#NeoPrinciple

On a few occasions over the past year I’ve heard the same urban myth about theory of constraints. I wonder whether anyone knows the origin of the story? I’ve started referring to it as the #NeoPrinciple…. They are the one.

Theory of Constraints tells us a number of steps we need to take in order to improve a system. Identify the constraint, exploit the constraint, subordinate everything to the constraint, elevate the constraint, rinse and repeat. Theory of constraints tells us that adding capacity anywhere in the system other than the constraint is a waste.

So what if you do not know about the Theory of Constraints? You put a bunch of effort into the system but are unaware that you are having little impact. If you cannot identify the constraint, your impact is based on luck rather than judgment.

This is the basis of the #NeoPrinciple urban myth. There is/was a company where the management did not understand the theory of constraints. Within that company was an individual/small group of individuals (I’ve heard both variants) that did understand the theory of constraints. Those with the knowledge of theory of constraints were the effective leaders of the company. It was they that controlled the future of the company, not the management team.

I think the purpose of this urban myth is to add some spice to an otherwise boring subject. The myth is the El Dorado of subjects for those hungry for power and influence. It's a great subject for the pub, where alcohol-clouded judgment attempts to work out whether it is possible.

Of course, the other aspect of the #NeoPrinciple is that it is a cautionary tale. There are some powerful memes stalking the halls of companies these days. Some memes need executive support. Others just need an infection point. TOC is one of the latter. The moral of the story for the managers of companies: learn about new ways of managing your company. It's better to have the pleasant joy of discovering the work has already started than to discover that someone has disconnected your tiller from the rudder for good.

Me, I'd like to know whether it's based on a true story. It would be great to hear from anyone who knows about it.


Ten Ways of “Five Whys”

A little while back a friend complained about “5 Whys” in Feature Injection. “People don't like it when you ask ‘Why?’ over and over,” they said. “Well, no one actually asks ‘Why?’ five times. There are many ways to ask why without actually asking ‘Why?’,” I replied. Here are ten random questions off the top of my head. Please add your own in the comments.

If you want to find out more about Feature Injection ( or Story Mapping, User Stories or Impact Mapping), come and join Christian Hassa, David Evans, Gojko Adzic and me at the Product Owner Survival Camp on 21st Jan in London Town.

Random Ten Alternatives:

1. What do you want me to do with this stuff you’ve put in the system?
2. Now that we’ve stored it, what do you want to do with it?
3. When would we use this?
4. How would we use this information?
5. What reports / screens would you display this on?
6. What would the report / screen look like?
7. How would you use the report?
8. Who would use the report / screen?
9. When would it be useful to them?
10. Errr?
11. What?
12. Eh?
13. Huh?
14. And?
15. Then?
16. Really Blackadder?
17. Draw it please?
18. This is valuable how?
19. I don’t think you are telling me everything?
20. Go on!
21. Why?


How mature must an organisation be to implement Cost of Delay?

Cost of Delay is often mentioned as a possible solution to the difficult problem of deciding the order in which to make two or more software development investments. ( For an introduction to Cost of Delay, check out Don Reinertsen's talk and Joshua Arnold's blog and experience report presentations. Both are well worth the time invested. ) I would like to highlight two capabilities your organisation may need before it can effectively use Cost of Delay. These are:

The ability to estimate value.
The ability to convert value to a common “currency”.

In effect, Cost of Delay assumes a level of corporate maturity.

The ability to estimate value

Todd Little wrote an excellent paper that shows that the difference between actual and estimated effort follows a log-normal distribution ( Troy Magennis would argue that it's a Weibull distribution ). IT professionals are pretty good at estimating things, it would seem. The same is not true when it comes to estimating value or return. I once worked on a project where the business case estimated that the annual profit for an investment would be €100M. In the first year, the investment generated revenue of only €500K. The estimate of return was out by more than two orders of magnitude.

Lean Startup and an experimental, metric-based approach to predicting the improvement in a metric are much more accurate, but still not that accurate.

Rather than a single value, estimates should take the form of a range with a confidence interval. For example, the return will be $1,000 to $1,000,000 with a 90% confidence interval. (Check out Doug Hubbard’s presentation )

So which is the better investment: the one that delivers -$50,000 to $2,000,000 with a 90% confidence interval, or the one that delivers $1,000 to $1,000,000 with an 80% confidence interval? My maths is not good enough to compare these two investments. Now consider that these are the costs of delay for the two investments. Which do I choose?
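One way I can think of to make that comparison tractable (my illustration, not part of the original argument) is to simulate it. The sketch below assumes, purely for the sake of example, that each estimate can be treated as a normal distribution fitted to the quoted interval, and then counts how often one investment beats the other. Even then you still have to decide what “better” means (the higher expected return, or the smaller chance of a loss), which rather reinforces the maturity point.

```python
import random

def fit_normal(low, high, confidence):
    """Fit a normal distribution to a symmetric confidence interval.

    A 90% interval spans the mean +/- 1.645 standard deviations, an 80% interval +/- 1.282.
    """
    z = {0.80: 1.282, 0.90: 1.645}[confidence]
    mean = (low + high) / 2.0
    sigma = (high - low) / (2.0 * z)
    return mean, sigma

def chance_a_beats_b(a, b, trials=100_000):
    """Estimate how often investment a returns more than investment b."""
    wins = sum(1 for _ in range(trials) if random.gauss(*a) > random.gauss(*b))
    return wins / trials

a = fit_normal(-50_000, 2_000_000, 0.90)  # -$50k to $2M at 90% confidence
b = fit_normal(1_000, 1_000_000, 0.80)    # $1k to $1M at 80% confidence
print(f"A beats B in roughly {chance_a_beats_b(a, b):.0%} of simulations")
```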

It is quite likely that the two or more potential investments come from different people or groups. How do we ensure that they have adopted a consistent estimation process? One possible solution is to engage the finance department to act as coaches for putting together business cases. The finance department should not tell people how to build business cases; instead they should coach people and share useful practices. ( I almost wrote share “best practices” but could not face the strangling sound from Dave Snowden. ) The finance department can ensure a level playing field and help raise the game for the people writing business cases.

The ability to convert value to a common “currency”

Not all value is equal!

With the rise of business metrics, it is possible and desirable for organisations to focus on a particular part of the “funnel” that does not directly lead to a “dollar” value. An investment may be to increase the number of installed customers, or improve the usage or stickiness of customers.

Even with the same metric, there may be more value in customers on a particular platform, or in a particular geographical or demographic grouping. Which are more valuable: customers in the developed world or in developing markets? Teenagers or pensioners?

In order to compare an investment to increase usage among teenagers in the USA versus revenue from Baby Boomers in Europe versus new customers in Brazil, the organisation needs an exchange rate to a single currency (the Corporate Peso, or “Corpeso”). This exchange rate needs to be set by the executives of the organisation, taking into account market opportunities and the organisation's vision. The exchange rate becomes a strategic tool to steer the software development investments. Some organisations may choose a simpler approach and focus on a small handful of metrics instead.
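As a toy sketch of the idea (my own, with made-up rates and metric names), the exchange rate table need be nothing more than a lookup that the executives own and revise as the strategy changes:

```python
# Hypothetical exchange rates, set and revised by the executives: Corpesos per unit of each metric.
CORPESO_RATES = {
    "usa_teenager_weekly_usage": 4.0,
    "europe_boomer_revenue_eur": 1.5,
    "brazil_new_customer": 7.5,
}

def to_corpesos(metric, forecast_improvement):
    """Convert a forecast improvement in any metric into the common corporate currency."""
    return CORPESO_RATES[metric] * forecast_improvement

# Three otherwise incomparable investments, now on one scale.
print(to_corpesos("usa_teenager_weekly_usage", 50_000))   # 200000.0
print(to_corpesos("europe_boomer_revenue_eur", 180_000))  # 270000.0
print(to_corpesos("brazil_new_customer", 30_000))         # 225000.0
```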

Simplified Cost of Delay

It is possible to introduce a simplified version of Cost of Delay focusing on the two or three basic shapes: for a delayed product introduction, the delayed return is calculated by multiplying the rate of loss by the length of the delay; a step cost covers things like fines; and a total loss covers things like missing the Christmas season.
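A minimal sketch of those basic shapes, as I read the description above and with made-up numbers, might look like this:

```python
def delayed_intro_cost(rate_of_loss_per_week, delay_weeks):
    """Delayed product introduction: value is lost at a steady rate for as long as the delay lasts."""
    return rate_of_loss_per_week * delay_weeks

def step_cost(penalty, weeks_until_deadline, delay_weeks):
    """Step shape: a fixed penalty (a fine, say) kicks in once a deadline is missed."""
    return penalty if delay_weeks > weeks_until_deadline else 0.0

def total_loss(opportunity_value, weeks_until_window_closes, delay_weeks):
    """Total loss shape: miss the window (the Christmas season, say) and the whole value is gone."""
    return opportunity_value if delay_weeks > weeks_until_window_closes else 0.0

# Made-up example: a four-week delay, losing 10,000 a week, plus a 50,000 fine if we slip past week two.
print(delayed_intro_cost(10_000, 4) + step_cost(50_000, 2, 4))  # 90000.0
```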

There is a danger in introducing the simplified version: people may devalue Cost of Delay, especially as they are already handling the simple shapes. You risk Cost of Delay being seen as adding unnecessary complexity to something simple. This may inoculate the organisation against using the full Cost of Delay in the future.

Cost of Delay is a great concept which will work well in certain contexts. If you try to implement it in more complex contexts, you need to consider the organisational maturity needed to support it.


Outcome based Process Metrics – A focus on Time to Value.

This is one of those posts where I’m looking for feedback. Ideally practitioners who’ve made this work but also thought leaders who’ve put serious thought into the matter.

For the past year or so, I've been thinking about metrics. When measuring the effectiveness of your development investment process, I've come to the conclusion that you need three outcome metrics: a “quality” metric, a “value” metric and a “time to value” metric. You need all three because any two can be gamed. For example, you can maintain high quality and a low time to value by releasing software that delivers no value.

Quality metrics are fairly straightforward… the number of bugs released is a fairly standard metric. Value metrics are fairly easy to define but sometimes harder to measure… e.g. profit, revenue, number of customers. The tricky metric is “time to value”.

For years I've been chanting the “Cycle Time” mantra from the Lean world. When it comes to looking at metrics in the real world of software development, “Cycle Time” is a great, inspiring concept but it ain't much use to a company performing an Agile Transformation. The problem is knowing when to call the start on the investment. When you trawl through the historical data of real investments, it's difficult to compare them.

The “time to value” metric is really about risk management. A development system with a short “time to value” will be less risky than one with a longer one. An initial thought is to measure from the start of any work on an investment to the time it is released. This penalises investments where a small amount is invested up-front to surface and address risk (the value of business analysis). As a result, the approach is often to ignore the analysis in the “time to value” metric and focus on the point that the “team” commits to delivering value, i.e. the point it is pulled into a sprint. Ignoring the analysis means that over-investment can occur to reduce risk.

In effect, “time to value” metrics are messy in the real world. To illustrate this, consider the following three investments:

[Figure: cash flow profiles of the three investments]

The three investments have the same start date and release date, yet (1) is clearly better than (2), which is clearly better than (3). In the real world, it is even harder to compare investments, as they are spread across several teams with different rates of investment.

This is a solved problem in finance, where investors need to compare different bonds and match assets to liabilities of differing size and frequency. Duration was one of the simplest and earliest ways of comparing the risk of bonds. The approach converts a number of cash flows into a single cash flow at a certain point in time. This is quite simple. Imagine a seesaw in a children's playground. All of the cash flows in the investment are placed, at the time they are made, on one side of the seesaw, and a single combined cash flow is placed at one point on the other side. They are placed such that the seesaw balances.

[Figure: seesaw balancing the individual cash flows against a single combined cash flow]

Duration ( tD ) = Sum of ( Cash Flow * Time to release ) / Sum of Cash Flows.

( Note: In bond maths, the Cash Flow would be the present value of the cash flow.).

Using the examples above gives the following results…

1. (28*1 + 1*2 + 1*3) / 30 = 33 / 30 = 1.1
2. (10*1 + 10*2 + 10*3) / 30 = 60 / 30 = 2
3. (1*1 + 1*2 + 28*3) / 30 = 87 / 30 = 2.9
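The same arithmetic as a tiny function, for anyone who wants to play with their own cash flows (a sketch; the pairs below are the cash flow and time values from the three calculations above):

```python
def duration(cash_flows):
    """Sum of ( cash flow * time ) divided by the sum of the cash flows."""
    return sum(cf * t for cf, t in cash_flows) / sum(cf for cf, _ in cash_flows)

print(duration([(28, 1), (1, 2), (1, 3)]))    # 1.1
print(duration([(10, 1), (10, 2), (10, 3)]))  # 2.0
print(duration([(1, 1), (1, 2), (28, 3)]))    # 2.9
```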

The lower the value for the duration of the investment, the lower the risk. The thing I like about this metric is that it drives the behaviour we want: smart investments where analysis is used to reduce investment risk, followed by a short time from commitment to value.

The time for the investment should be the point that the team makes a commitment to the investment. In Scrum, this would be the start of the sprint.

So I’d like to hear others thoughts on this.

Discount Factors and the time value of money.

Whenever you discuss investments, thought leaders cannot help trying to add complexity to the situation. My advice is to ignore the time value of money, as its effect is not that significant. Adding the time value of money makes the calculation much less intuitive to the people using the numbers.

If the time value of money is used, the inverse ( exp[ +rt ] ) of the discount factor ( exp[ -rt ] ) should be used. This is because you are calculating the value of the investment at the time of release in the future, whereas in finance you are calculating the current value of a future cash flow.
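If you do include it, my reading of the adjustment described here, sketched very roughly, is that each cash flow gets scaled forward to the release date before it is weighted:

```python
from math import exp

def value_at_release(cash_flow, years_to_release, rate):
    """Scale a cash flow forward to the release date: the inverse of discounting it back to today."""
    return cash_flow * exp(rate * years_to_release)

print(value_at_release(100.0, 0.5, 0.2))  # a payment made six months before release, at a 20% rate
```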


Introducing Staff Liquidity (2 of n)

Staff Liquidity is about managing the number of options in your system. You want to maintain as many options as possible so that you can respond to changes in demand. Once we realise that we want to create and maintain as many options as possible in our system, it affects the way we make decisions. (Author's note: when I say “you”, I'm referring to the team as a whole.)

In the past I've seen many development teams that have one or more Bobs on the team. Bob is the one who knows (some part of) the system inside out. Whenever you allocate work, Bob is the only person who can do certain things and, as a result, he is always fully loaded at two hundred percent utilisation. Bob spends his entire time on the constraint for the project. Whenever a problem, issue or production incident occurs, Bob is the only one who can work on it, and while Bob is working on it the project slips behind a day at a time. Some Bobs enjoy the attention; some don't, and they leave. Allocating Bob to work first is the most effective way of destroying options on a project. On a skills matrix, Bob would be the only “3” for one or more tasks. The rest of the team are only along for the ride, as only the Bobs can do the real work.

Every time you allocate someone to an item of work, you effectively remove them from the skills matrix. Once you see it like that, you realise the best way to allocate work is to start with the people with the fewest options first. You give them work they can do with the support of an expert. You then do the same for the rest of the team until you get to Bob. Ideally, you do not allocate Bob to anything. Bob becomes the expert who pairs with other team members. Bob coaches others in his part of the system. Bob is instantly available in the event of a crisis without impacting the time to value.
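Here is a rough sketch of that allocation rule with a made-up skills matrix (names, tasks and numbers are all illustrative): people with the fewest options get work first, and the Bobs are considered last so that they stay free.

```python
# Hypothetical skills matrix: name -> {task: skill level}, where 3 means expert.
SKILLS = {
    "Ana": {"ui": 2, "api": 1},
    "Raj": {"ui": 1, "api": 2, "batch": 1},
    "Bob": {"ui": 3, "api": 3, "batch": 3, "pricing": 3},
}

def allocate(tasks, skills):
    """Assign work to the people with the fewest options first.

    Because the Bobs have the most options they are considered last,
    so where possible they stay free to pair, coach and absorb crises.
    """
    remaining = list(tasks)
    allocation = {}
    for person in sorted(skills, key=lambda p: len(skills[p])):  # fewest options first
        for task in remaining:
            if skills[person].get(task, 0) >= 1:  # can do it, with expert support if needed
                allocation[person] = task
                remaining.remove(task)
                break
    return allocation, remaining

print(allocate(["ui", "api"], SKILLS))  # ({'Ana': 'ui', 'Raj': 'api'}, []) and Bob stays free
```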

When Bob investigates a production issue, he should do so with another member of the team. As soon as they work out what needs to be done and the other team member is able to finish it alone, Bob becomes available again thus ensuring the team has the maximum number of options to address the next crisis.

Sadly, it's not too uncommon to discover an entire team of Bobs: everyone on the team is the only person with a “3” for some part of the system. Once they understand what they are doing, the team should self-organise to become Not(Bob).

 

