A tale of two coaches.

It was the best of times, it was the worst of times. In the words of Jim Collins, some coaches supported “The Tyranny of the OR” whereas other coaches promoted “The Genius of the AND”.

This is a tale of Yves and David. Two coaches working with identical companies in a way that is only possible in literature, movies and the minds of thought leaders. The similarities are spooky. Even the managers had the same name… Neil, though David spelled it with a K.

David’s Story

David: “Hello, I’m your new Agile coach.”

Kneel: “Let me explain how our business works.”

David: “No need for that, I’m off to the Gemba.”

Kneel: “What’s a Zumba? Is that like my wife’s fitness dance class?”

David: “Sigh. Nothing for you to worry about. You go and learn to be a servant leader.”

Kneel: “A savant lieder? What’s that?”

David: “You have to work it out for yourself whilst you still have a job.”

Six months later. Kneel is talking to his team.

Kneel: “So let me get this right. He changed it, and now it’s broken something else, but if we change it back it will break the thing it fixed.”

Peon: “Yep. What happened to that David guy?”

Kneel: “He told the CEO to sack all the middle managers and get you lot to self organise. The CEO sacked him for being a moron.”

Yves’s Story

Yves: “Hello, I’m your new Agile coach.”

Neil: “Let me explain how our business works.”

Yves: “Great, that will be useful context. After that we’ll head off to the Gemba.”

Neil: “What’s that?”

Yves: “I’m going to pair coach with you so that you learn how to coach your teams.”

Neil: “Is that necessary? Surely you can do it? Do I need to do coaching?”

Yves: “Management are part of the governance, risk management or control function of the process. Imagine a simple boiler with a controller. Now imagine that the controller does not know how the process works. What would happen?”

Neil: “Chaos. I see your point. But what if they need skills I do not have?”

Yves: “You can help them to find them. It may be someone on one of the other teams or you may need to bring someone in.”

Neil: “So if I go to the Gemba, I don’t need to sit in the glass booth anymore?”

Yves: “You need to do both. Some risks are best observed close up. For others, you need some distance.”

Neil: “Can you give me an example?”

Yves: “Imagine all of your teams are burning down through work nicely. However, you have a feeling you are not delivering as much as you think you should.”

Neil: “I get it. Management reporting will help me get the big picture view to spot risks and issues at a higher level. A bit like fractals. If you measure the coastline using a one metre ruler you will get a very different answer than if you measure it using a mile long ruler.”

Yves: “Yes, you are looking for problems at a different scale which means you need a different measure and viewpoint. Sometimes at the Gemba. Sometimes in the glass booth.”

Six months later

Neil: “Hi Yves. You remember that change we put in? Well, we had to take it out again and replace it with something else. I didn’t need to get involved, the team did it. They just wanted me to keep an eye on things… Anyway, you know how you said we would probably need to talk about Cynefin and Staff Liquidity? Well, I think we’ve hit that point. When can we bring you in again?”

This post is a response to Chris Young’s excellent post and a tweet by Joshua Arnold.

All names of the characters are purely fictional. Any resemblance blah blah…


Kicking Risk Down the Road Jeopardizes Success

It’s 1230 and you have a report to write, an international flight to catch at 1600 and at least an hour’s drive to get to the airport. What do you do first?

Most of us assess the risk and realize that there are many more ways to be late for the flight than on time. The journey to the airport has the most uncertainty, so we complete it first. We can then write the report, relaxed, at the gate. We call this risk management approach nothing more than common sense.

Common Sense Uncommonly Applied
We have the same opportunity to use risk management in our planning process when we order our backlog. But most of us don’t. Why?

Most product owners now prioritize the backlog based on short term value rather than taking risk into account. One of the key reasons is the loud legacy of too many late projects caused by engineering choosing to implement the product in bleeding edge technology, and this not working out so well.

In this move to focus on shorter term value in our products, we fail to acknowledge that we cannot simply mitigate the risks of technical novelty by scoping it out. Technical novelty has to happen, whether for competitive advantage or because existing technologies become deprecated. So we need to bring that common sense back into our planning process.

One of the ways to address this is to bring technical risk into the prioritization process.

A Way of Using Risk to Prioritize the Backlog
So how can we think about this from a backlog priority perspective?

One way is to create a matrix of technical risks to be addressed on the y axis and user stories on the x axis. See below:

Risk to Story Matrix

The user stories are true user stories: if they’re delivered, a user can do something valuable in their process. I put a cross at the intersection of a risk and a user story where building and appropriately testing that user story would demonstrate that we’ve mitigated the risk. I then look at the user stories with the most crosses, balanced against the least effort and the largest value, and prioritize those to the top of the backlog. In the above example I’d want to build and test user story 3 as early as possible.
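
For those who like to see the mechanics, here is a minimal sketch of that scoring in code. The stories, risks, value and effort numbers, and the weighting inside the priority function are all illustrative assumptions, not part of the original matrix.

```python
# A minimal sketch of the risk-to-story matrix prioritization described above.
# All data and the scoring formula are illustrative assumptions.

# Which technical risks each user story would demonstrably mitigate (the crosses).
risk_matrix = {
    "story 1": {"risk A"},
    "story 2": {"risk A", "risk C"},
    "story 3": {"risk A", "risk B", "risk C"},
}

# Rough value and effort per story, in whatever units the team already uses.
value = {"story 1": 3, "story 2": 5, "story 3": 13}
effort = {"story 1": 3, "story 2": 5, "story 3": 8}

def priority(story: str) -> float:
    """Higher is better: more risks mitigated and more value for less effort."""
    risks_mitigated = len(risk_matrix[story])
    return (risks_mitigated + value[story] / 10) / effort[story]

backlog = sorted(risk_matrix, key=priority, reverse=True)
print(backlog)  # ['story 3', 'story 2', 'story 1'] with this illustrative data
```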

This approach is an aid, not the answer, to your backlog prioritization. We still have to make trade-offs between building and maintaining market credibility and building a sustainable solution. We reserve the right to make short term trade-offs by building “sparkler” features to keep market interest, preferably those with high value and low technical risk. We may even decide to build “sparkler” features early that have high technical risk, but either cut the capability back to manage the risk or accept the risk and rework when we have time. That way we go in clearly understanding that we’re taking on debt in terms of rework, and/or risk in terms of market credibility, to realise the additional value of reduced time to market.

Conclusion
Most organisations I’ve come across have little recognition of how to actively manage technical risk.

Ordering your backlog by taking into account those user stories that, if built, would address the most technical risk is one way to increase schedule predictability.

It can be argued that the value I describe above is a positive way of stating market risk and that I could have one combined list of technical and market risks. That leads on to the next post…


Given When Then – A Cynefin Case Study

Dan and I created the “Given When Then” pattern on August 23rd 2004, or rather, that was the day we realised we needed the “Given” part. On November 30th, I wrote a series of blog posts explaining the format that Dan and I had created in its more familiar form: JBehave II, JBehave III, JBehave IV and JBehave and Postmodernism.

This experience report is best explored by considering it through the lens of the Cynefin framework. Although I read the Cynefin whitepaper before this time, I did not understand it until fairly recently.

Obliquity

Neither Dan nor I deliberately set out to create the “Given When Then” framework. We were both working on other projects. Dan was working on BDD, where he was trying to change the language of TDD using NLP to make it easier for people to learn TDD. I was trying to work out an analysis approach that would work with Extreme Programming. Dan had replaced TDD’s assert with “Should” and was having more success explaining TDD to people. A week or two before, we had traveled back from the Agile Development Conference together and had realised that “should” was the language of specification.

On the day that Dan and I first came up with “Given”, our goal was for Dan to explain BDD with Mock Objects and Patterns to me as I was going on sales visits to clients and talking about something I had never actually done. As an amusing aside, several years later Liz Keogh told me it took her six months to remove the visitor pattern from JBehave v1.

The key point is that Dan and I were working on oblique problems. Dan was trying to create a better way to teach TDD and I was trying to learn TDD. We were not deliberately trying to create a pattern to allow non-technical people to communicate effectively with Agile developers.

Activated Individuals

Dan and I did not create the “GIVEN” word. We tripped over it. It was literally a movie moment when I said something and Dan and I looked at each other realising it was something useful.

The night before I had been to see a friend researching a PhD in Historiography. Historiography is the study of Post-Modernism as applied to how history is understood and taught. Historiography shows us that the way History is taught, understood and interpreted is more a function of the context than the events themselves.

When Dan showed me the code for TDD with Mocks, my head was full of “Context” (thanks to my friend Rob) which meant I recognised mock objects as context. When I said “Mocks provide the context” Dan and I both realised it was significant as our goals had activated us to the importance of the statement.

The key point, which Dave Snowden makes in his talks, is that you need individuals in a heightened state of awareness to spot something important.* You need experts to spot something subtle and significant.

Recipe Books and Chefs

Both Dan and I were “Chefs” in Dave Snowden’s language. I had a decade of experience of Business Analysis and was used to coming up with new approaches when needed. Dan had several lifetimes worth of experience as a developer, and in particular coaching developers.

As Chefs we recognised that “context” was a new ingredient that was missing from the meal shared between Agile developers and non-developers trying to communicate to them.

This was only possible because we had both served our apprenticeships; it was because we understood how to combine ingredients that we knew the “GIVEN” was a new ingredient**.

Exaptation

BDD was designed to help developers more easily learn TDD. Dan and I adapted it to communicate between non-technical people and developers.

Actually, the “Given When Then” format is an exaptation of TDD. TDD has the steps Setup – Execute – Assert. Dan and I exapted the TDD form into a specification format that non-developers could use to communicate more effectively with developers.
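
As a minimal sketch of that mapping, here is a single test written in Python’s unittest (chosen purely for illustration; the Account class and its behaviour are assumptions, not from the original story), annotated with the Given/When/Then wording alongside the TDD steps it exapts.

```python
# A minimal sketch of how Given/When/Then maps onto TDD's Setup - Execute - Assert.
# The Account class is invented purely for the example.
import unittest

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        self.balance -= amount

class WithdrawalSpec(unittest.TestCase):
    def test_withdrawal_reduces_the_balance(self):
        # GIVEN an account with a balance of 100   (TDD: setup / context)
        account = Account(balance=100)
        # WHEN the customer withdraws 30           (TDD: execute)
        account.withdraw(30)
        # THEN the balance should be 70            (TDD: assert / outcome)
        self.assertEqual(account.balance, 70)

if __name__ == "__main__":
    unittest.main()
```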

Multiple Safe to Fail Experiments

I went off to develop Feature Injection while Dan developed JBehave and promoted BDD. Rather than focus all his attention on one BDD solution, Dan promoted and supported many open source communities as they developed tools. From JBehave, through rSpec, to Cucumber and SpecFlow, Dan has supported them.

So Dan engaged in Multiple Safe to Fail experiments in his search to realise a BDD tool that worked.

Cynefin

Thanks to Cynefin, I now have a better understanding of what happened when we discovered the Given When Then format. As a result, in the future I will have a better understanding of the context I need to create in order for innovation to occur.

  • GIVEN I use Cynefin to understand the world
  • WHEN I look at situations going on around me
  • THEN I’ll find new meaning

* I’ve watched three of Dave’s keynotes to find the point where he says this but could not find it. My wording may be wrong.

**A while later I realised that GWT was a subset of the use case. Unfortunately the Use Case is so bloated as a tool that it is not focused on specifying behaviour. The Use Case has become the jack of all trades and master of none.


Pull – An experimental blog backlog

I would like to try an experiment. I currently have a fairly large backlog of blog posts that I intend to write, each fairly self-contained. Rather than put them out as I feel like it, I would like to try an experiment in pull. So below is a selection of posts I’ve planned. If you want to see one more than the others, leave a comment. I will count the comments (if any) on Monday evening. The one with the most comments will be the next one I write.

The backlog

1. Capacity Planning – An experience report on using Theory of Constraints to create an organisational backlog. This will use details (photos) from the experience report given by Lisa Long at XPDay 2013.

2. The role of the Agile Manager – A description of the role and the training for an Agile Manager (Delivery Manager, Risk Manager and Coach ). The importance of reporting.

3. Given When Then – A Cynefin Case Study. An experience report using Cynefin to make sense of how Dan North and I created the Given When Then format.

4. Using Cynefin’s Butterfly Stamping to determine the most appropriate approach to building the backlog.

5. The ups and downs of Hippos and Data Scientists. How to order your organisational backlog.

6. Something else you want me to write about. Some aspect of Feature Injection? Real Options? Staff Liquidity? Scaling Agile for Practitioners? What…evvva?

7. Tornado Maps and Skills Matrices. How to use your skills matrix and backlog to build a tornado map.

That should be enough. On Tuesday night, Dermot and Simon Cowell will count the votes and announce the result on Twitter.

Chris


Saint Sebastian and Scaling Agile

Last week we visited Florence and then spent a few days in Tuscany ( instead of attending Agile20xx like last year ). The Uffizi Gallery in Florence was formerly the office of the Medici family and is now one of the preeminent collections of Renaissance art in the world. Truth be told, the buildings and architecture of Florence are more impressive than the art. For some reason, the pictures of Saint Sebastian (thumbnails below, with bigger images at the bottom of the post) held a particular interest for me.

[Thumbnails of three paintings of Saint Sebastian]

After a while I realised that they are a counterpoint for Scaling Agile. The artists all agree on certain points, such as “St Sebastian was male”, “He was bound and shot with arrows”, “The arrows do not seem to affect him” and “He was wearing shorts made of a sheet”. They did not agree on other points, though, such as “Was he young or old?”, “Where did the arrows pierce his body?” and “Did the action take place inside or outside?”. To the artists and those in the church who commissioned the works, the differences did not matter. All that mattered was that a Christian saint was shot with arrows for his faith and survived. I sometimes feel that some of those involved in Scaling Agile are focusing on the details that differentiate them rather than the common things that are important. In a particular context, each of the approaches to Scaling Agile may be more or less relevant.

Instead of focusing on the differences, we should focus on the core common elements and then identify the contexts where each approach is most appropriate.

For the past year or so, @TonyGrout and I along with a bunch of other coaches have been trying to help a company to Scale Agile. This is the diagram and explanation I have been drawing for the past few months to help others understand the constraints and issues that we face.

Here is the description I’ve used with some success.

The scaled investment process starts with an Investment Decision Process which identifies an ordered list of investment possibilities ( I call it a list of Unicorn Horns as they are not realistic ). The ordering and naming of this list will be culturally specific. Do they use Weighted Shortest Job First, or Cost of Delay, or Business Cases, or just Hippos ( Highest Paid Person’s Opinion )? Who decides on the order? Do they call them MVPs or MVFs, or MMFs, or BVIs, or Investments or Stories or Bets or… The point is that the culture will determine the process used to give a rough ordering to the list of potential investments. I will call them MVPs for this post.

[Diagram]

The next step is to perform Capacity Planning for the coming period of time (typically quarterly). For this, the owner/promoter of each MVP contacts all of the groups* that need to provide input to deliver the MVP and asks them for a SWAG ( Sweet Wild Assed Guess ) in units of Scrum Team Weeks. The group puts as much effort as necessary into the estimate ( I recommend a solid five to ten minutes at most ). The group also provides its capacity for the coming period ( the default is the number of Scrum teams in the group, multiplied by the weeks in the period, multiplied by 50% ). <Editor’s note. I realised I have not described this process in full on this blog. Watch this space>

* Group – Random name representing a group of one or more Scrum Teams that can work on a component or system within the organisation.
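
As a purely illustrative worked example of that default capacity ( the numbers are assumptions, not from the post ): a group of four Scrum teams planning a thirteen week quarter would offer 4 × 13 × 50% = 26 Scrum Team Weeks for the period.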

[Diagram]

During Capacity Planning, the business investors ( whoever they are ) all come together and select items from the list of Unicorn Horns. For each item they select, they reduce the capacity of the impacted groups. This means the groups form the constraints, rather than a generic mythical man month notion of organisational capacity. There are two outputs from this process. One, an ordered backlog for the organisation that all business investors agree to, which provides direction to the teams. Two, a list of groups that are constraining the organisation’s ability to deliver. Note that the constraints are dynamic, based on the kind of work involved, although most organisations have a few groups that are needed for pretty much everything. The backlog is the focus for those interested in delivery (short term). The list of constrained groups is of interest to those managing organisational liquidity (longer term).
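
Here is a minimal sketch of the bookkeeping in that step. It assumes the investors simply work down the rough order of the Unicorn Horn list; in reality the selection is a negotiation between the business investors, and the group names, capacities and SWAGs below are invented for illustration.

```python
# A minimal sketch of capacity planning against group capacities.
# All names and numbers are illustrative assumptions.

# Each group's capacity in Scrum Team Weeks (e.g. teams x weeks in period x 50%).
capacity = {"payments": 26, "web": 39, "data": 13}

# Unicorn Horns in rough business order, with the SWAG each impacted group gave.
unicorn_horns = [
    ("MVP 1", {"payments": 10, "web": 8}),
    ("MVP 2", {"payments": 20, "data": 6}),
    ("MVP 3", {"web": 12, "data": 4}),
]

backlog, constrained_groups = [], set()
for mvp, swags in unicorn_horns:
    if all(capacity[group] >= weeks for group, weeks in swags.items()):
        # Select the MVP and reduce the capacity of every impacted group.
        for group, weeks in swags.items():
            capacity[group] -= weeks
        backlog.append(mvp)
    else:
        # Any group without enough remaining capacity is a constraint this period.
        constrained_groups.update(
            group for group, weeks in swags.items() if capacity[group] < weeks
        )

print(backlog)             # the ordered backlog the investors agreed to
print(constrained_groups)  # the groups constraining delivery this period
```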

Delivery involves the teams adding the items to their team backlogs. They build the things needed for the MVP, which eventually results in a release. The release results in an outcome, which feeds back into the investment decision process. This is all very standard Agile/Lean practice.

[Diagram]

This is also where our experience differs from standard Agile doctrine. The belief is that teams will self-organise to ensure delivery of the MVPs. This is not what we have experienced. The teams deliver but the organisation does not.

To address this issue we proposed that each MVP is assigned a Product Owner and a Scrum Master who are responsible for its delivery ( I consider accountable and responsible as synonyms. Hopefully someone will explain the difference ).

[Diagram]

The MVP Product Owner is responsible for the value delivery of the MVP (The Outcome). As the teams develop the MVP, the MVP PO will let them know what can be descoped without impacting the value delivery.

The MVP Scrum Master is responsible for the delivery of the MVP. They will ensure that the MVP is initiated properly so that everyone involved is aware of their expected contribution. They will ensure that an appropriate architecture is in place. They will set up and facilitate the MVP Scrum of Scrums and Retrospectives. They will ensure transparency on the MVP to ensure all involved can see the status.

The roles are similar to the traditional Project Manager and Business Analyst roles, with one huge and significant difference: they have NO authority. They influence using transparency and they rely on facilitation instead of direction. This is particularly important as they will develop the influencing skills they need when they operate in areas where they have no authority or influence, such as with clients or other business units. They will be servant leaders. To do this, they will need tools to report and show progress.

So this is the software investment process in full. However, we also need to consider governance. A governor was a device on a steam engine that stopped it from blowing up. Two balls would spin around, their speed a function of the steam pressure. If the steam pressure went too high, the balls would spin faster and fly outwards, rising and opening a valve that released steam, so the pressure dropped. The purpose of governance in an organisation is the same. It ensures that the risks within the system are managed effectively.

[Diagram]

The role of the risk managers of the system ( another role with no authority other than the power of transparency ) is to ensure that the risks in the system are properly managed and, if the individuals do not have the appropriate skills and experience to manage the risks, to ensure that they are provided with coaching so that they can. Consider people playing by a cliff. The risk manager would help to make them aware of the danger. They would show them the cliff and help them work out an appropriate risk strategy. If they wanted to build a wall but did not know how, the risk manager would introduce them to people who could teach them how. The risk manager would monitor the risk to ensure it continues to be managed. This is not about control, but more the appreciation that whilst we are playing in the field, we might forget about other things like risks and demons. Recasting the definition of done as a set of risks to be managed allows the teams to find the best solution for their context whilst ensuring that the organisation is not exposed to known risks.

These risk managers are responsible for staff liquidity at the team, group and organisational levels. They are responsible for ensuring the overall investment portfolio is balanced, and for ensuring investment is occurring where it should rather than where it is easy. Rather than simply looking at the status of a team, they are considering the health of the organisation as a whole as well as in its parts.

One of the most critical risks to address is ensuring that the correct approach is taken to building the team’s backlog for the MVP.

[Diagram]

The Cynefin Framework is ideal ( and necessary ) for helping teams understand whether they should build the backlog iteratively ( in the complex domain ), or build independent solutions ( in the chaos domain ), or build it up-front ( in the complicated domain ), or simply let the team do its thing ( in the obvious domain ).
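
As a tiny illustrative lookup of that mapping ( the wording of each approach is my paraphrase, not official Cynefin language ):

```python
# An illustrative mapping from Cynefin domain to backlog-building approach.
backlog_approach = {
    "complex":     "build the backlog iteratively, probing and adapting as you learn",
    "chaos":       "build independent solutions in parallel",
    "complicated": "analyse and build the backlog up-front with the relevant experts",
    "obvious":     "let the team follow its established practice",
}

print(backlog_approach["complex"])
```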

A risk management and coaching based approach to delivery and governance is vital for Scaling Agile. It allows software development to fully integrate with the entire business. Cynefin is the “Fifth Discipline” of Scaling Agile to the organisational level; without it, we will be doing the “wrong things righter, er, the right things wronger”. Without it, we will be barking up the wrong tree.

So have I painted a process where we agree he was shot with arrows? Or does this invite discussion about how old or how good looking he is?

Thoughts?

Paintings of Saint Sebastian

[Three paintings of Saint Sebastian]


Starboard! It’s Greg Brougham, about to tack.

Dear All

Please welcome Greg Brougham ( @SailingGreg ) to the itRiskManager family of bloggers.

sailing-greg

Greg is an experienced IT Risk Manager with an amazing knowledge of the Cynefin framework. Check out his Cynefin article on InfoQ. Greg was the person who explained the practical usage of Cynefin that led to my taking the course.

Welcome aboard Greg.

Regards

Chris


The Risks of Adopting the Wrong Approach

In Achieving success in large, complex software projects, Sriram Chandrasekaran, Sauri Gudlavalleti and Sanjay Kaniyar of McKinsey (1) advocate moving from a functional delivery model that is silo based to one based on cross-functional teams that are module orientated. There are two problems with this model that I would like to raise.

The first is that the article describes large projects as complex in nature without understanding that in a complex system behaviour is emergent. There is a short introduction to complex systems in the recently published Cynefin paper on InfoQ (2). One of the key points is that no amount of analysis or planning will lead to an understanding of how a complex system will develop. Complex systems are dispositional in nature and therefore have a tendency to evolve in certain directions, but this is not a given and cannot be assumed. In this type of system the only viable delivery strategy is one that is iterative/incremental in nature, which allows you to manage the development of the system in a desired direction. Trying to base delivery on a set of point-in-time requirements is unrealistic and fundamentally flawed, which the agile community has known for years. Simply moving to a cross-functional model will not address this fundamental issue with traditional delivery models. As an aside, Bryan Appleyard (3) notes, in his most recent book, that simple solutions don’t work for complex problems.

The paper goes on to talk about grouping the work by use case to support these cross-functional teams so that they can operate in parallel. This assumes that work can be grouped by use case, but it does not elaborate on how this in itself will support parallel working. One of the key things you need to ensure is that the use cases are disjoint, and one way to do this is by relating them to capabilities (4), along the lines of domain-driven design (5). This allows the problem space to be partitioned and parallel streams of work to be undertaken. It also allows you to manage the parallelism that is mentioned as an issue with agile practices; it is not really an agile issue but one that is generic to large programmes.
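
Here is a minimal sketch of the disjointness check that paragraph implies, assuming hypothetical use case and capability names ( none of these come from the McKinsey article ).

```python
# A minimal sketch: relate each use case to a business capability and flag use
# cases that straddle capabilities, since those break disjointness and hence
# parallel working. All names are illustrative assumptions.
from collections import defaultdict

use_case_capabilities = {
    "open account":  {"customer onboarding"},
    "take payment":  {"payments"},
    "issue refund":  {"payments"},
    "close account": {"customer onboarding", "payments"},  # straddles two capabilities
}

streams = defaultdict(list)   # one parallel stream of work per capability
overlapping = []              # candidates for splitting or re-scoping

for use_case, capabilities in use_case_capabilities.items():
    if len(capabilities) == 1:
        streams[next(iter(capabilities))].append(use_case)
    else:
        overlapping.append(use_case)

print(dict(streams))   # disjoint groups that can proceed in parallel
print(overlapping)     # use cases to rework before assigning to a single stream
```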

References

  1. Achieving success in large, complex software projects, Sriram Chandrasekaran, Sauri Gudlavalleti and Sanjay Kaniyar, McKinsey, July 2014
  2. Cynefin 101 – An Introduction, July 2014
  3. The Brain is Wider Than the Sky: Why Simple Solutions Don’t Work in a Complex World, Bryan Appleyard, Sep 2012
  4. The Next Revolution in Productivity, Ric Merrifield, Jack Calhoun, and Dennis Stevens, Harvard Business Review, June 2008
  5. Domain-driven Design: Tackling Complexity in the Heart of Software, Eric Evans, Aug 2003
