Author Archives: theitriskmanager

About theitriskmanager

An IT programme manager specialising in delivering trading and risk management systems in Investment Banks. I achieve this by focusing on risk rather than cost; a focus on costs can actually lead to increased costs.

The cold war between HR and the Business

Capacity Planning helps the organisation identify the constraints in the development system, and also identify teams that are not working on strategic goals.

Capacity Plan

From the organisation’s perspective, the most valuable work that the Credit and Payments teams could do is to help out the Windows and Mac Client teams, or perhaps the Android or WinPhone Client teams. Any team or individual that moved to work on an unfamiliar area of the code would likely compare unfavourably to the teams/individuals already working on that code. A team from Credit might only be 30% effective compared to the existing Windows Client team.

Any HR performance / incentive scheme that compares or ranks individuals or teams against each other will discourage the Credit Team from working alongside the Windows Client team. They will find valid reasons for not making the move, and the organisation will find it hard to encourage staff liquidity. Instead, the organisation should have a policy that encourages people to work on the right work: a policy that encourages collaboration.

Let’s put that in simple terms. Any HR performance / incentive scheme that compares or ranks individuals or teams against each other is in direct conflict with achieving the goals of the organisation.

Recently Microsoft famously gave up stack ranking. Sadly, other organisations have not seen the light and are happy for the HR department to wage a cold war against the organisation they claim to serve.

Scaling Agile to the Enterprise – Step 1.

Stepping back from the strategic view of scaling agile, I thought I would share a very very practical first step.

GIVEN you have hundreds… thousands of developers (programmers and testers),

AND you have more than a few dozen product owners

WHEN your enterprise intends to scale agile

THEN you are going to need a tool to coordinate activity and decisions and summarise status.

  1. Everyone in the enterprise needs to use the same tool for task management. Whether it is Jira, Rally, Version One, TFS, Lean Kit, Blah, or whatever tool, the entire enterprise needs to use the same instance of the same tool. Your Agile Scaling initiative will have emergent requirements. It’s hard enough getting everyone to adopt a process change on a single tool; trying to do that on multiple tools would be impossible.
  2. The tool will need to support the entire scaled product owner process and the development process (whether it is Scrum, Kanban, XP, or whatever). There will need to be a minimum set of agreed processes that everyone abides by so that the tool can effectively support scaling to the enterprise.
  3. Some agilistas will protest, saying that the team should pick their own tools. They can, but they should ensure that the corporate tool is always up to date. Given that teams are unlikely to want to maintain two tools, they should probably only use the corporate standard tools. (My personal preference is that user stories and above are managed in the tool, and tasks within a sprint etc. are managed on a physical board.)
  4. You will need to extract the data from the tool and put it into a reporting database so that you can isolate your enterprise reporting from the tool… so that you are not dependent on the tool for reporting. This also means you can more easily switch if you find you have selected the wrong tool.
  5. All reporting needs to be automated. You never want to be in the situation where people are manually aggregating data in a spreadsheet to get an enterprise view. This will break completely when you go into your capacity planning meeting and people are making last minute updates even in the meeting.
  6. You need a dedicated team to maintain the tool. You need to create the budget for this as part of the Agile Scaling initiative, and as an on-going activity. You cannot rely on a team maintaining the tool as a side project. The tool is a strategic asset that needs to be carefully controlled. Without proper management, your initiative can be derailed by people adding the wrong fields or processes in the tool. The tool maintenance team need to work very closely with your Agile Coaches.
  7. Your tool and supporting processes need to be Agile. Your support team need to be able to add fields or change processes quickly as emergent requirements arise. Any tool that needs modifications to be made by the vendor to add fields etc. IS NOT FIT FOR PURPOSE. If you need to create a manual process as a workaround, you have broken your process.
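As a sketch of point 4, the extraction can be as simple as loading the tool’s export into a separate reporting database. This is a minimal illustration only; the export format and field names are assumptions, not any specific vendor’s API:

```python
import json
import sqlite3

# Hypothetical export: whatever your tool (Jira, Rally, ...) can dump.
export = json.loads("""[
  {"id": "EPIC-1", "team": "Windows Client", "status": "In Progress"},
  {"id": "EPIC-2", "team": "Credit", "status": "Done"}
]""")

# The separate reporting database, isolated from the tool itself.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE work_items (id TEXT, team TEXT, status TEXT)")
db.executemany("INSERT INTO work_items VALUES (?, ?, ?)",
               [(i["id"], i["team"], i["status"]) for i in export])

# Enterprise reporting now runs against the copy, not the tool.
done = db.execute(
    "SELECT COUNT(*) FROM work_items WHERE status = 'Done'").fetchone()[0]
```

Real exports are larger and messier, but the principle holds: reports query the copy, so swapping tools only means rewriting the extraction step.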

Explicit WIP limits destroy Options.

Zsolt Fabok, a good friend of mine, has written a blog post responding to a talk I gave where I said “Explicit WIP Limits destroy options”. Before I respond, there are a couple of inaccuracies in the post that I want to call out.

  1. In the video I said that “Explicit WIP limits destroy options”. I did not say that “WIP limits destroy options”. This is a subtle but significant difference. It is possible that Zsolt misheard what I said in the video. Instead of “Explicit WIP limits”, I advocate the use of WIP limiting policies.
  2. Zsolt says “In this context it means that “the more liquid our staff” the more work items we can work on in parallel, which means that we have more options.” This is a misunderstanding of Staff Liquidity. Staff Liquidity is about the number of types of task that a team member can work on. It has nothing to do with working on things in parallel. This misunderstanding may stem from another misguided definition of staff liquidity based on the number of transactions in a system. (The number of transactions is a measure of system activity, not liquidity.)

Point 1 completely undermines the argument made in Zsolt’s post. Typical WIP limiting policies would be no more than one EPIC (unit of delivery of value) in progress, and no team member/pair working on more than one task at a time (blocked tasks should be marked as such). It is vital that everyone on the team knows the value of limiting WIP.

“Explicit WIP limited systems”, or Kanban systems as they are known, destroy options for the person managing risk in the system. The problem is that this coercive management technique (not necessarily the manager*) dictates the next task that a team member should take in certain circumstances. Given that the WIP limit for the “waiting for test” queue is three and the queue currently has two items in it, when someone finishes a development task, then they know that the next task they should take is a testing task. Given that they do not like testing, when this happens, then they might do other stuff until someone else finishes and takes the testing task, and then they will take the next development task.

The limits force behaviour. Without the limits, team members pick the tasks that they want to. Making this visible makes it easier to identify people who do not want to perform certain roles. This can lead to a discussion to see whether they need further training or whether they disagree with limiting WIP. This is a very important option for the person(s) managing the system, as it shows whether all the team members understand and support the approach.
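The “waiting for test” scenario above can be sketched as a tiny rule. This is purely illustrative (the column name and the limit of three are taken from the example above; the function name is mine):

```python
def forced_next_task(queues, limits, finished="development"):
    """Decide what an explicit WIP limit forces a team member to do next.

    queues: current item counts per column, e.g. {"waiting for test": 2}
    limits: explicit WIP limits per column, e.g. {"waiting for test": 3}

    Finishing a development task pushes its item into "waiting for test";
    if that fills the column to its limit, the only permitted pull is a
    testing task (to drain the queue), regardless of personal preference.
    """
    if finished == "development":
        after = queues.get("waiting for test", 0) + 1
        if after >= limits.get("waiting for test", float("inf")):
            return "testing task"
    return "free choice"
```

With two items already waiting and a limit of three, `forced_next_task({"waiting for test": 2}, {"waiting for test": 3})` returns `"testing task"`: exactly the dictated choice described above.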

To be clear, having more WIP reduces options. A smaller WIP increases options. Creating a system with lower WIP and no waste (i.e. where effective swarming takes place) is not a simple task.

Zsolt has been nominated for Kanban Community’s Brickell Quay Award. Please vote for him. :-)

*Sometimes enthusiastic team members can force an approach on other passive members who are not aware of the consequences.

SAFE versus Theory of Constraints

Last week, Skip Angel, one of the Agile Coaches I work with, attended a training course on SAFE. He gave a cracking one hour summary of the course to the coaching team at the client. Our conclusion was that we have a process that looks very similar to SAFE, which gave us confidence that we are on the “right track”. The SAFE material is lovely; it is wonderful marketing material to sell scaled agile.

Our other conclusion was that we much prefer our approach based on Theory of Constraints to implementing SAFE. These are my (not my Client’s) opinions on SAFE versus Theory of Constraints.

  1. We started out with Theory of Constraints and ended up with a process similar to SAFE. In some places we are behind but in others we are ahead.
  2. We have a deep organisational understanding of why we have adopted each practice. This has taken months to achieve, but we believe it will ensure more support and stability for the approach. Adopting SAFE would require a leap of faith.
  3. Theory of Constraints has given us a clear roadmap of the significant issues that we face in the next year or two. We are now creating some real options to prepare.
  4. Theory of Constraints helps us identify issues whereas SAFE tells us what to do. This means that Theory of Constraints adapts better to the context.
  5. Rather than bland statements like the need for servant leaders, Theory of Constraints is helping us identify specific practices that management need to adopt to make our system successful.
  6. Theory of Constraints is allowing us to evolve our process in a way that gives management the necessary information to perform proper risk management of the process.
  7. SAFE is a development-centric framework. Using Theory of Constraints means that we have already partially incorporated Marketing; Finance are fully engaged and co-evolving practices to ensure fiscal controls are in place; and we are currently planning the engagement with Human Resources.

Skip highlighted that the most impressive part of SAFE is that the creator acknowledges gaps in the process and looks to the community to fill those gaps. It will be interesting to see whether that happens. I had a poke around SAFE to see how it addressed some of the trickier problems we have had to resolve. So far, it has nothing to say about them. The big gaps are around the “Portfolio” aspects of SAFE… or in other words, the scaling bits.

It will be interesting to see how SAFE fills the gaps. Will it adopt a solid practitioner-led approach like the XP and ATDD communities, or will it anoint high priests who lack practical experience, like some other marketing-led communities?

My advice to anyone thinking of scaling agile: use SAFE as a map for the foothills, but use Theory of Constraints as your compass. The map soon runs out… As a result, build your leadership skills in Theory of Constraints and keep SAFE in your pocket for when you get lost and need inspiration. Rather than give your executives a map to give them confidence, help them learn new skills to see the problems they need to solve. Your executives will surprise you by solving problems in ways you never considered. After all, they have different options to you… so help them see them.

Duration and the Time Value of Money

A while back I suggested that using time value of money had little or no impact on the calculation of what I’m now (as of five minutes ago) calling “Software Investment Duration”.

To validate my assertion (sweet use of TDD language), I’ve calculated the impact of interest rates ( and hence the time value of money ) on the calculation of “Software Investment Duration” for three scenarios. In each case, I’ve calculated the value with no interest rate ( time value ) adjustment, and with an interest rate of 20% (i.e. the US Peso). To simplify the blog post, I’ve put the description of how the interest rate adjusted value is calculated in an appendix at the bottom of the post.

1. Constant Investment

In this scenario, there is a constant investment of 100 US Pesos per month for six months until the release.

constant investment

The zero interest rate value is approx 3.5. The interest rate adjusted value is 3.45. Approx 1% difference.
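The calculation can be sketched in a few lines. This is my own reconstruction of the scenario (investments of 100 at months one to six, so the months to release run from six down to one, with each cash flow discounted using the factor described in the appendix):

```python
import math

def duration(cash_flows, rate=0.0):
    """Software Investment Duration: cash-flow-weighted months to release.

    cash_flows: list of (amount, months_to_release) pairs.
    rate: annual interest rate; amounts are discounted by exp(-r * t / 12).
    """
    weights = [cf * math.exp(-rate * t / 12) for cf, t in cash_flows]
    times = [t for _, t in cash_flows]
    return sum(w * t for w, t in zip(weights, times)) / sum(weights)

# Constant investment: 100 US Pesos per month for six months.
flows = [(100, t) for t in range(6, 0, -1)]
print(round(duration(flows), 2))        # 3.5  (zero interest rate)
print(round(duration(flows, 0.20), 2))  # 3.45 (20% interest rate)
```

Even at a 20% rate the adjustment moves the answer by roughly 1%, which is the point of the post.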

2. Big Upfront Investment

In this scenario, there is a large upfront investment of 550 US Pesos, followed by an investment of 10 US Pesos per month for five months until the release.

big upfront

The zero interest rate value is approx 5.75. The interest rate adjusted value is 5.74. Almost no difference.

3. Big last minute Investment

In this scenario, there are monthly investments of 10 US Pesos per month for five months until a big investment of 550 US Pesos just before the release.


The zero interest rate value is approx 1.25. The interest rate adjusted value is 1.24. Approx 1% difference.

So why does this matter?

The examples show that even with fairly extreme values for interest rates (20%), the impact on the “Software Investment Duration” or SID is only about 1%. For lower interest rates, the impact is even less. So time value of money or interest rate adjustments only give us an extra 1% of accuracy in our calculation.

More accuracy is more good surely?

This level of accuracy is misleading. Whenever you calculate the cash flows going into the calculation, there are always going to be other factors that have a bigger impact than 1%: the effect of holidays, training and other non-development tasks which may or may not be counted; the effect of bugs and/or support that may or may not be counted; the effect of salary differentials between roles that may or may not be counted. ( Assigning work to an individual rather than a team requires you to create a time tracking system, which is more than a 1% overhead. )

Worst of all, including an interest rate adjustment makes the concept much harder for people to comprehend and understand. “Everything should be as simple as possible but not simpler”.  See below.

Appendix – How the interest rate adjusted value is calculated.

To calculate the interest rate adjusted value, I calculated a simple discount factor as follows:

Discount Factor = exp ( -1 * interest rate * time in months / number of months in the year )

Although most readers of this blog will probably find this trivial, try discussing exponential functions with your business colleagues in finance, HR, product, UX design and marketing. In fact, with anyone who did not study for a science or engineering based degree. What you will find is that you exclude a number of your colleagues from the business investment process for an extra 1% of accuracy.

This is a simple way to calculate a discount factor. To calculate it properly, you should first decide on which interest rate to use… Risk free? WACC? (Popular with the MBA crowd is the Weighted Average Cost of Capital).

Then you need some analytics to generate a curve using splines etc. A nasty tricky business.


On a few occasions over the past year I’ve heard the same urban myth about the Theory of Constraints. I wonder whether anyone knows the origin of the story? I’ve started referring to it as the #NeoPrinciple… They are the One.

Theory of Constraints tells us a number of steps we need to take in order to improve a system: identify the constraint, exploit the constraint, subordinate everything to the constraint, elevate the constraint, rinse and repeat. Theory of Constraints also tells us that adding capacity anywhere in the system other than at the constraint is a waste.

So what if you do not know about the Theory of Constraints? You put a bunch of effort into the system but are unaware that you are having little impact. If you cannot identify the constraint, your impact is based on luck rather than judgment.

This is the basis of the #NeoPrinciple urban myth. There is/was a company where the management did not understand the Theory of Constraints. Within that company was an individual / small group of individuals (I’ve heard both variants) who did understand the Theory of Constraints. Those with the knowledge of the Theory of Constraints were the effective leaders of the company. It was they who controlled the future of the company, not the management team.

I think the purpose of this urban myth is to add some spice to an otherwise boring subject. The myth is the El Dorado of subjects for those hungry for power and influence. It’s a great subject for the pub, where alcohol-clouded judgment attempts to work out whether it is possible.

Of course, the other aspect of the #NeoPrinciple is that it is a cautionary tale. There are some powerful memes stalking the halls of companies these days. Some memes need executive support. Others just need an infection point. TOC is one of the latter. The moral of the story for the managers of companies is: learn about new ways of managing your company. It’s better that you have the pleasant joy of discovering the work has already started than to discover that someone has disconnected your tiller from the rudder for good.

Me, I’d like to know if anyone knows whether it’s based on a true story. It would be great to hear from anyone who knows about it.

Ten Ways of “Five Whys”

A little while back a friend complained about “5 Whys” in Feature Injection. “People don’t like it when you ask ‘Why?’ over and over,” they said. “Well, no one actually asks ‘Why?’ five times. There are many ways to ask why without actually asking ‘Why?’,” I replied. Here are ten random questions off the top of my head. Please add your own in the comments.

If you want to find out more about Feature Injection ( or Story Mapping, User Stories or Impact Mapping), come and join Christian Hassa, David Evans, Gojko Adzic and me at the Product Owner Survival Camp on 21st Jan in London Town.

Random Ten Alternatives:

1. What do you want me to do with this stuff you’ve put in the system?
2. Now that we’ve stored it, what do you want to do with it?
3. When would we use this?
4. How would we use this information?
5. What reports / screens would you display this on?
6. What would the report / screen look like?
7. How would you use the report?
8. Who would use the report / screen?
9. When would it be useful to them?
10. Errr?
11. What?
12. Eh?
13. Huh?
14. And?
15. Then?
16. Really Blackadder?
17. Draw it please?
18. This is valuable how?
19. I don’t think you are telling me everything?
20. Go on!
21. Why?

How mature must an organisation be to implement Cost of Delay?

Cost of Delay is often mentioned as a possible solution to the difficult problem of deciding the order in which you want to invest in two or more software development investments. ( For an introduction to cost of delay, check out Don Reinertsen’s talk and Joshua Arnold’s blog and experience report presentations. Both are well worth the time invested. ) I would like to highlight two capabilities your organisation may need before it can effectively use Cost of Delay. These are:

The ability to estimate value.
The ability to convert value to a common “currency”.

In effect, Cost of Delay assumes a level of corporate maturity.

The ability to estimate value

Todd Little wrote an excellent paper showing that the ratio between actual effort and estimated effort follows a log-normal distribution ( Troy Magennis would argue that it’s a Weibull distribution ). IT professionals are pretty good at estimating things, it would seem. The same is not true when it comes to estimating value or return. I once worked on a project where the business case estimated the annual profit for an investment would be €100M. In the first year, the investment gave a revenue of only €500K. The estimate of return was out by orders of magnitude.

Lean Startup and an experimental, metric-based approach to predicting the improvement in a metric is much more accurate, but still not that accurate.

Rather than a single value, estimates should take the form of a range with a confidence interval. For example, the return will be $1,000 to $1,000,000 with a 90% confidence interval. (Check out Doug Hubbard’s presentation )

So which is the better investment: the one that delivers -$50,000 to $2,000,000 with a 90% confidence interval, or the one that delivers $1,000 to $1,000,000 with an 80% confidence interval? My maths is not good enough to compare these two investments. Now let’s consider that this is the cost of delay for the two investments. Which do I choose?
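One possible way to compare such ranges (an illustration only, resting on the possibly naive assumption that each range is the central confidence interval of a normal distribution) is to simulate:

```python
import random

def sample(lo, hi, z):
    """Draw from a normal whose central confidence interval is [lo, hi].

    z is the normal quantile for the interval: 1.645 for a 90% CI,
    1.282 for an 80% CI.
    """
    mu = (lo + hi) / 2
    sigma = (hi - lo) / (2 * z)
    return random.gauss(mu, sigma)

def prob_a_beats_b(a, b, trials=100_000, seed=7):
    """Estimate the probability that investment a out-returns investment b."""
    random.seed(seed)
    return sum(sample(*a) > sample(*b) for _ in range(trials)) / trials

# -$50,000 to $2,000,000 at 90% CI  versus  $1,000 to $1,000,000 at 80% CI
p = prob_a_beats_b((-50_000, 2_000_000, 1.645), (1_000, 1_000_000, 1.282))
```

Under these assumptions the first investment out-returns the second roughly three times out of four, but that is precisely the point: few organisations have this machinery, or the maturity to agree on its assumptions.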

It is quite likely that the two or more potential investments are from different people or groups. How do we ensure that they have adopted a consistent estimation process? One possible solution is to engage the finance department to act as coaches for putting together business cases. The finance department should not tell people how to build business cases; instead they should coach people and share useful practices. ( I almost wrote share “best practices” but could not face the strangling sound from Dave Snowden. ) The finance department can ensure a level playing field and help raise the game for the people writing business cases.

The ability to convert value to a common “currency”

Not all value is equal!

With the rise of business metrics, it is possible and desirable for organisations to focus on a particular part of the “funnel” that does not directly lead to a “dollar” value. An investment may be to increase the number of installed customers, or improve the usage or stickiness of customers.

Even with the same metric, there may be more value in customers on a particular platform, or in a geographical or demographic grouping. Which are more valuable? Customers in the developed world or in developing markets? Teenagers or pensioners?

In order to compare an investment to increase usage by teenagers in the USA versus revenue from Baby Boomers in Europe versus new customers in Brazil, the organisation needs an exchange rate to a single currency (the Corporate Peso, or “Corpeso”). This exchange rate needs to be set by the executives of the organisation, taking into account market opportunities and the organisation’s vision. The exchange rate becomes a strategic tool to steer the software development investments. Some organisations may choose a simpler approach and focus on a small handful of metrics instead.
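A sketch of what such an exchange rate table might look like. The metric names and rates here are invented purely for illustration; in practice the executives would set and periodically revise them:

```python
# Strategic "exchange rates" into Corpesos, set by the executives
# (illustrative values only).
CORPESO_RATES = {
    "usa_teen_active_user": 12.0,
    "europe_boomer_revenue_eur": 1.4,
    "brazil_new_customer": 25.0,
}

def to_corpesos(outcomes):
    """Convert a mix of metric improvements into one comparable number."""
    return sum(CORPESO_RATES[metric] * qty for metric, qty in outcomes.items())

to_corpesos({"brazil_new_customer": 1_000})  # 25000.0
```

Two investments that move entirely different metrics can now be ranked on a single number, which is what steering the portfolio requires.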

Simplified Cost of Delay

It is possible to introduce a simplified version of cost of delay focusing on two or three basic shapes: for a delayed product introduction, the delayed return is the rate of loss multiplied by the delay; a step cost for things like fines; and a total loss for things like missing the Christmas season.
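A minimal sketch of the three shapes (the function and parameter names are mine; the shapes follow the descriptions above):

```python
def cost_of_delay(shape, weeks_late, rate=0, step_cost=0, total_value=0):
    """Simplified cost of delay for the three basic shapes."""
    if shape == "linear":   # delayed product introduction: steady leak
        return rate * weeks_late
    if shape == "step":     # e.g. a regulatory fine once a date is missed
        return step_cost if weeks_late > 0 else 0
    if shape == "total":    # e.g. missing the Christmas season entirely
        return total_value if weeks_late > 0 else 0
    raise ValueError(f"unknown shape: {shape}")

cost_of_delay("linear", 4, rate=10_000)  # 40000: four weeks at 10k/week
```

The simplicity is the danger the next paragraph describes: the shapes are so easy that people may conclude the whole idea adds nothing.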

There is a danger in introducing the simplified version: people may devalue cost of delay, especially as they are already doing the simple shapes. You risk cost of delay being seen as adding unnecessary complexity to something simple. This may inoculate the organisation against using the full cost of delay in the future.

Cost of Delay is a great concept which will work well in certain contexts. If you try to implement it in more complex contexts, you need to consider the organisational maturity needed to support it.

Outcome based Process Metrics – A focus on Time to Value.

This is one of those posts where I’m looking for feedback. Ideally from practitioners who’ve made this work, but also from thought leaders who’ve put serious thought into the matter.

For the past year or so, I’ve been thinking about metrics. When measuring the effectiveness of your development investment process, I’ve come to the conclusion that you need three outcome metrics… a “quality” metric, a “value” metric and a “time to value” metric. You need all three metrics because any two can be gamed. For example, you can maintain high quality and a low time to value by releasing software that delivers no value.

Quality metrics are fairly straightforward… the number of bugs released is a fairly standard metric. Value metrics are fairly easy to define but sometimes harder to measure… e.g. profit, revenue, number of customers. The tricky metric is “time to value”.

For years I’ve been chanting the “Cycle Time” mantra from the Lean world. When it comes to looking at metrics in the real world of software development, “Cycle Time” is a great inspiring concept but it ain’t much use to a company performing an Agile Transformation. The problem is knowing when to call the start of the investment. When you trawl through the historical data of real investments, it’s difficult to compare them.

The “time to value” metric is really about risk management: a development system with a short “time to value” will be less risky than one with a longer one. An initial thought is to measure from the start of any work on an investment to the time it is released. This penalises investments where a small amount is invested up front to surface and address risk (the value of business analysis). As a result, the approach is often to ignore the analysis in the “time to value” metric and focus on the point that the “team” commits to delivering value, i.e. the point it is pulled into a sprint. But ignoring the analysis means that over-investment in analysis can occur to reduce risk. In effect, “time to value” metrics are messy in the real world. To illustrate this, consider the following three investments:


The three investments have the same start date and release date; however, (1) is clearly better than (2), which is clearly better than (3). In the real world, it is even harder to compare investments, as the investments are spread across several teams with different rates of investment. This is a solved problem in finance, where investors need to compare different bonds and match assets to liabilities of differing size and frequency. Duration was one of the simplest and earliest ways of comparing the risk of bonds. The approach converts a number of cash flows into a single cash flow at a certain point in time. This is quite simple. Imagine a seesaw in a children’s playground. All of the cash flows in the investment are placed, at the time they are made, on one side of the seesaw, and a single cash flow on the other side. They are placed such that the seesaw balances.


Duration ( tD ) = Sum of ( Cash Flow * Time to release ) / Sum of Cash Flows.

( Note: In bond maths, the Cash Flow would be the present value of the cash flow. )

Using the examples above, gives the following results…

1. (28*1 + 1*2 + 1*3) / 30 = 33 / 30 = 1.1
2. (10*1 + 10*2 + 10*3) / 30 = 60 / 30 = 2
3. (1*1 + 1*2 + 28*3) / 30 = 87 / 30 = 2.9
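The seesaw formula and the three examples above can be checked with a few lines:

```python
def duration(cash_flows):
    """Seesaw balance point: sum(cash flow * time) / sum(cash flows)."""
    return sum(cf * t for cf, t in cash_flows) / sum(cf for cf, _ in cash_flows)

# The three example investments: (amount, month) pairs.
investments = {
    1: [(28, 1), (1, 2), (1, 3)],
    2: [(10, 1), (10, 2), (10, 3)],
    3: [(1, 1), (1, 2), (28, 3)],
}
durations = {k: round(duration(v), 1) for k, v in investments.items()}
print(durations)  # {1: 1.1, 2: 2.0, 3: 2.9}
```
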

The lower the value for the duration of the investment, the lower the risk. The thing I like about this metric is that it drives the behaviour we want: smart investments where analysis is used to reduce investment risk up front, followed by a short time from commitment to release.

The time for the investment should be the point that the team makes a commitment to the investment. In Scrum, this would be the start of the sprint.

So I’d like to hear others thoughts on this.

Discount Factors and the time value of money.

Whenever you discuss investments, thought leaders cannot help trying to add complexity to the situation. My advice is to ignore the time value of money, as its effect is not that significant. Adding the time value of money makes the calculation much less intuitive to the people using the numbers.

If the time value of money is used, the inverse ( exp[ +rt ] ) of the discount factor ( exp[ -rt ] ) should be used. This is because you are calculating the value of the investment at the time of release in the future, whereas in finance you are calculating the current value of a future cash flow.
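The two conventions differ only in the sign of the exponent; a minimal check (using the 20% rate and six-month horizon from the examples above):

```python
import math

r, t = 0.20, 6 / 12           # 20% annual rate, six months (in years)
discount = math.exp(-r * t)   # present value of a future cash flow
inverse = math.exp(r * t)     # value at release of an earlier cash flow
print(round(discount, 4), round(inverse, 4))  # 0.9048 1.1052
```
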

Introducing Staff Liquidity (2 of n)

Staff Liquidity is about managing the number of options in your system. You want to maintain as many options as possible so that you can respond to changes in demand. Once we realise that we want to create/maintain as many options as possible in our system, it affects the way we make decisions. (Author’s note: When I say “you”, I’m referring to the team as a whole.)

In the past I’ve seen many development teams that have one or more Bobs on the team. Bob is the one who knows (some part of) the system inside out. Whenever you allocate work, Bob is the only person who can do certain things and as a result, he is always fully loaded at two hundred percent utilisation. Bob spends his entire time on the constraint for the project. Whenever a problem/issue/production issue/etc. occurs, Bob is the only one who can work on it. Whenever Bob is working on a production issue, etc., the project slips behind a day at a time. Some Bobs enjoy the attention; some don’t, and they leave. Allocating Bob to work first is the most effective way of destroying options on a project. On a skills matrix, Bob would be the only “3” for one or more tasks. The rest of the team are only along for the ride, as only the Bob(s) can do the real work.

Every time you allocate someone to an item of work, you effectively remove them from the skills matrix. Once you see it like that, you realise the best way to allocate work is to start with the people with the fewest options first. You give them work they can do with the support of an expert. You then do the same for the rest of the team until you get to Bob. Ideally, you do not allocate Bob to anything. Bob becomes the expert who pairs with other team members. Bob coaches others in his part of the system. Bob is instantly available in the event of a crisis without impacting the time to value.
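That allocation order can be sketched in a few lines. The names and skills here are invented, and the skills matrix “3” is simplified to “can do the task type”:

```python
def allocate(skills, tasks):
    """Allocate work starting with the people who have the fewest options.

    skills: {person: set of task types they can work on}
    tasks: task types needing an owner.
    The most-liquid people (the Bobs) are allocated last, if at all,
    leaving them free to pair, coach and absorb the next crisis.
    """
    allocation, remaining = {}, list(tasks)
    for person in sorted(skills, key=lambda p: len(skills[p])):
        for task in remaining:
            if task in skills[person]:
                allocation[person] = task
                remaining.remove(task)
                break
    return allocation

team = {"Bob": {"ui", "db", "feed"}, "Ann": {"ui"}, "Raj": {"ui", "db"}}
allocate(team, ["ui", "db"])  # {'Ann': 'ui', 'Raj': 'db'} — Bob stays free
```

Ann and Raj take the work; Bob is left unallocated, which is exactly the option the team wants to preserve.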

When Bob investigates a production issue, he should do so with another member of the team. As soon as they work out what needs to be done and the other team member is able to finish it alone, Bob becomes available again thus ensuring the team has the maximum number of options to address the next crisis.

Sadly it’s not too uncommon to discover an entire team of Bobs. Everyone on the team is the only person with a “3” for a part of the system. Once they understand what they are doing, the team should self-organise to become Not(Bob).


