Author Archives: theitriskmanager

About theitriskmanager

An IT programme manager specialising in delivering trading and risk management systems in Investment Banks. I achieve this by focusing on risk rather than cost; a focus on cost can lead to increased costs.

Ten Ways of “Five Whys”

A little while back a friend complained about the “5 Whys” in Feature Injection. “People don’t like it when you ask ‘Why?’ over and over,” they said. “Well, no one actually asks ‘Why?’ five times. There are many ways to ask why without actually asking ‘Why?’,” I replied. Here are ten random questions off the top of my head. Please add your own in the comments.

If you want to find out more about Feature Injection ( or Story Mapping, User Stories or Impact Mapping), come and join Christian Hassa, David Evans, Gojko Adzic and me at the Product Owner Survival Camp on 21st Jan in London Town.

Random Ten Alternatives:

1. What do you want me to do with this stuff you’ve put in the system?
2. Now that we’ve stored it, what do you want to do with it?
3. When would we use this?
4. How would we use this information?
5. What reports / screens would you display this on?
6. What would the report / screen look like?
7. How would you use the report?
8. Who would use the report / screen?
9. When would it be useful to them?
10. Errr?
11. What?
12. Eh?
13. Huh?
14. And?
15. Then?
16. Really Blackadder?
17. Draw it please?
18. This is valuable how?
19. I don’t think you are telling me everything?
20. Go on!
21. Why?


How mature must an organisation be to implement Cost of Delay?

Cost of Delay is often mentioned as a possible solution to the difficult problem of deciding the order in which you want to make two or more software development investments. (For an introduction to Cost of Delay, check out Don Reinertsen’s talk and Joshua Arnold’s blog and experience report presentations. Both are well worth the time invested.) I would like to highlight two capabilities your organisation may need before it can effectively use Cost of Delay. These are:

  • The ability to estimate value.
  • The ability to convert value to a common “currency”.

In effect, Cost of Delay assumes a level of corporate maturity.

The ability to estimate value

Todd Little wrote an excellent paper showing that the difference between actual effort and estimated effort follows a log-normal distribution (Troy Magennis would argue that it is a Weibull distribution). IT professionals are pretty good at estimating effort, it would seem. The same is not true when it comes to estimating value or return. I once worked on a project where the business case estimated the annual profit for an investment would be €100M. In the first year, the investment generated revenue of only €500K. The estimate of return was out by more than two orders of magnitude.

Lean Startup and an experimental, metric-based approach to predicting the improvement in a metric is much more accurate, but still not that accurate.

Rather than a single value, estimates should take the form of a range with a confidence interval. For example: the return will be $1,000 to $1,000,000 with a 90% confidence interval. (Check out Doug Hubbard’s presentation.)

So which is the better investment: the one that delivers -$50,000 to $2,000,000 with a 90% confidence interval, or the one that delivers $1,000 to $1,000,000 with an 80% confidence interval? My maths is not good enough to compare these two investments. Now consider that these are the costs of delay for the two investments. Which do I choose?
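
One way to get a feel for the comparison is to simulate it. The sketch below is only an illustration: it assumes the ranges are the central 90% and 80% intervals of normal distributions (a guess about the shape of the uncertainty), and the function name is mine.

    import numpy as np

    def sample_returns(low, high, ci, n=100_000, rng=np.random.default_rng(42)):
        """Sample returns from a normal distribution whose central `ci`
        interval spans [low, high] (an assumed shape, not a fact)."""
        z = {0.80: 1.2816, 0.90: 1.6449}[ci]   # z-score for the central interval
        mean = (low + high) / 2
        sd = (high - low) / (2 * z)
        return rng.normal(mean, sd, n)

    a = sample_returns(-50_000, 2_000_000, 0.90)   # first investment
    b = sample_returns(1_000, 1_000_000, 0.80)     # second investment

    print("mean A:", round(a.mean()), "mean B:", round(b.mean()))
    print("P(A beats B):", (a > b).mean())

Even with a simulation, the choice is a judgement about spread and downside as much as about the mean, which is exactly the maturity problem.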

It is quite likely that the two or more potential investments come from different people or groups. How do we ensure that they have adopted a consistent estimation process? One possible solution is to engage the finance department to act as coaches for putting together business cases. The finance department should not tell people how to build business cases; instead they should coach people and share useful practices. (I almost wrote share “best practices” but could not face the strangled sound from Dave Snowden.) The finance department can ensure a level playing field and help raise the game for the people writing business cases.

The ability to convert value to a common “currency”

Not all value is equal!

With the rise of business metrics, it is possible and desirable for organisations to focus on a particular part of the “funnel” that does not directly lead to a “dollar” value. An investment may be to increase the number of installed customers, or improve the usage or stickiness of customers.

Even with the same metric, there may be more value in customers on a particular platform, or in a geographical or demographic grouping. Which are more valuable? Customers in the developed world or in developing markets? Teenagers or pensioners?

In order to compare an investment to increase usage by teenagers in the USA, versus revenue from Baby Boomers in Europe, versus new customers in Brazil, the organisation needs an exchange rate to a single currency (the Corporate Peso, or “Corpeso”). This exchange rate needs to be set by the executives of the organisation, taking into account market opportunities and the organisation’s vision. The exchange rate becomes a strategic tool to steer software development investments. Some organisations may choose a simpler approach and focus on a small handful of metrics instead.
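
As a rough illustration (the metric names and exchange rates below are invented for the example, not taken from any real organisation), the conversion itself is trivial once the executive has published the rates:

    # Hypothetical exchange rates into the "Corpeso", set by the executive.
    CORPESO_RATES = {
        "revenue_eur": 1.0,             # 1 EUR of revenue = 1 Corpeso
        "new_customer_brazil": 25.0,    # value of each new customer in Brazil
        "weekly_active_teen_usa": 4.0,  # value of each weekly active US teenager
    }

    def to_corpesos(impacts):
        """Convert an investment's predicted impact per metric into one number."""
        return sum(CORPESO_RATES[metric] * amount for metric, amount in impacts.items())

    investment_a = {"weekly_active_teen_usa": 50_000}
    investment_b = {"revenue_eur": 150_000, "new_customer_brazil": 1_000}
    print(to_corpesos(investment_a), to_corpesos(investment_b))  # 200000.0 175000.0

The hard part is not the arithmetic; it is getting the executive to own the rates and revisit them as the strategy changes.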

Simplified Cost of Delay

It is possible to introduce a simplified version of Cost of Delay focusing on two or three basic shapes: a linear loss, where the delayed return is the rate of loss multiplied by the length of the delay (a delayed product introduction); a step cost, for things like fines; and a total loss, for things like missing the Christmas season.
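
A minimal sketch of those three shapes (the function names and parameters are mine, purely for illustration):

    def linear_delay_cost(loss_per_week, weeks_late):
        """Delayed product introduction: value leaks away at a steady rate."""
        return loss_per_week * weeks_late

    def step_delay_cost(fine, weeks_late, deadline_weeks):
        """Regulatory deadline: nothing until the deadline passes, then the full fine."""
        return fine if weeks_late > deadline_weeks else 0.0

    def total_loss_cost(total_value, weeks_late, season_ends_weeks):
        """Missing the Christmas season: miss the window and the whole return is gone."""
        return total_value if weeks_late > season_ends_weeks else 0.0

    print(linear_delay_cost(20_000, 6))  # six weeks late at 20,000 a week -> 120000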

There is a danger in introducing the simplified version: people may devalue Cost of Delay, especially as they are already doing the simple shapes. You risk Cost of Delay being seen as adding unnecessary complexity to something simple. This may inoculate the organisation against using the full Cost of Delay in the future.

Cost of Delay is a great concept which will work well in certain contexts. If you try to implement it in more complex contexts, you need to consider the organisational maturity needed to support it.


Outcome based Process Metrics – A focus on Time to Value.

This is one of those posts where I’m looking for feedback. Ideally practitioners who’ve made this work but also thought leaders who’ve put serious thought into the matter.

For the past year or so, I’ve been thinking about metrics. When measuring the effectiveness of your development investment process, I’ve come to the conclusion that you need three outcome metrics: a “quality” metric, a “value” metric and a “time to value” metric. You need all three because any two can be gamed. For example, you can maintain high quality and a low time to value by releasing software that delivers no value.

Quality metrics are fairly straightforward: the number of bugs released is a fairly standard metric. Value metrics are fairly easy to define but sometimes harder to measure, e.g. profit, revenue, number of customers. The tricky metric is “time to value”.

For years I’ve been chanting the “Cycle Time” mantra from the Lean world. When it comes to looking at metrics in the real world of software development, “Cycle Time” is a great, inspiring concept, but it ain’t much use helping a company performing an Agile Transformation. The problem is knowing when to call the start of the investment. When you trawl through the historical data of real investments, it’s difficult to compare them.

The “time to value” metric is really about risk management: a development system with a short “time to value” is less risky than one with a longer one. An initial thought is to measure from the start of any work on an investment to the time it is released. This penalises investments where a small amount is invested up-front to surface and address risk (the value of business analysis). As a result, the usual approach is to ignore the analysis in the “time to value” metric and focus on the point at which the “team” commits to delivering value, i.e. the point it is pulled into a sprint. Ignoring the analysis means that over-investment in analysis can occur to reduce risk. In effect, “time to value” metrics are messy in the real world. To illustrate this, consider the following three investments:

[Figure: three investments with the same start and release dates but different cash-flow profiles]

The three investments have the same start date and release date, yet (1) is clearly better than (2), which is clearly better than (3). In the real world it is even harder to compare investments, as they are spread across several teams with different rates of investment. This is a solved problem in finance, where investors need to compare different bonds and match assets to liabilities of differing size and frequency. Duration was one of the simplest and earliest ways of comparing the risk of bonds. The approach converts a number of cash flows into a single cash flow at a certain point in time. It is quite simple: imagine a seesaw in a children’s playground. All of the cash flows in the investment are placed, at the time they are made, on one side of the seesaw, and the same cash flows are all placed at a single point on the other side. The point is chosen such that the seesaw balances.

[Figure: seesaw analogy, with the individual cash flows on one side balanced by a single combined cash flow at the duration point on the other]

Duration ( tD ) = Sum of ( Cash Flow * Time to release ) / Sum of Cash Flows.

(Note: in bond maths, the cash flow would be the present value of the cash flow.)

Using the examples above gives the following results:

1. (28*1 + 1*2 + 1*3) / 30 = 33 / 30 = 1.1
2. (10*1 + 10*2 + 10*3) / 30 = 60 / 30 = 2
3. (1*1 + 1*2 + 28*3) / 30 = 87 / 30 = 2.9

The lower the duration of the investment, the lower the risk. The thing I like about this metric is that it drives the behaviour we want: smart investments where analysis is used up-front to reduce investment risk, followed by a short time from commitment to release.
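
For reference, a minimal sketch of the calculation used in the three examples above (plain cash amounts, no present-value discounting):

    def duration(cash_flows):
        """Weighted-average time of a series of (amount, time) cash flows."""
        total = sum(amount for amount, _ in cash_flows)
        return sum(amount * t for amount, t in cash_flows) / total

    print(duration([(28, 1), (1, 2), (1, 3)]))    # 1.1
    print(duration([(10, 1), (10, 2), (10, 3)]))  # 2.0
    print(duration([(1, 1), (1, 2), (28, 3)]))    # 2.9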

The time for each investment should be measured from the point at which the team makes a commitment to it. In Scrum, this would be the start of the sprint.

So I’d like to hear others’ thoughts on this.

Discount Factors and the time value of money.

Whenever you discuss investments, thought leaders cannot help trying to add complexity to the situation. My advice is to ignore the time value of money, as its effect is not that significant. Adding the time value of money makes the calculation much less intuitive for the people using the numbers.

If the time value of money is used, the inverse ( exp[+rt] ) of the discount factor ( exp[-rt] ) should be used. This is because you are calculating the value of the investment at the time of release in the future, whereas in finance you are calculating the current value of a future cash flow.
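
In other words, grow the cash flow forward to the release date rather than discounting it back to today. A small sketch, assuming a continuously compounded rate r and time t in years:

    import math

    def value_at_release(cash_flow, r, t):
        """Grow a cash flow forward to the release date (inverse of discounting)."""
        return cash_flow * math.exp(+r * t)

    def present_value(cash_flow, r, t):
        """The usual finance direction: discount a future cash flow back to today."""
        return cash_flow * math.exp(-r * t)

    print(value_at_release(10_000, 0.05, 2))  # ~11051.71
    print(present_value(10_000, 0.05, 2))     # ~9048.37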


Introducing Staff Liquidity (2 of n)

Staff Liquidity is about managing the number of options in your system. You want to maintain as many options as possible so that you can respond to changes in demand. Once we realise that we want to create and maintain as many options as possible in our system, it affects the way we make decisions. (Author’s note: when I say “you”, I’m referring to the team as a whole.)

In the past I’ve seen many development teams that have one or more Bobs on the team. Bob is the one who knows (some part of) the system inside out. Whenever you allocate work, Bob is the only person who can do certain things and, as a result, he is always fully loaded at two hundred percent utilisation. Bob spends his entire time on the constraint for the project. Whenever a problem/issue/production issue occurs, Bob is the only one who can work on it, and while Bob is working on a production issue the project slips behind a day at a time. Some Bobs enjoy the attention; some don’t, and they leave. Allocating Bob to work first is the most effective way of destroying options on a project. On a skills matrix, Bob would be the only “3” for one or more tasks. The rest of the team are only along for the ride, as only Bob(s) can do the real work.

Every time you allocate someone to an item of work, you effectively remove them from the skills matrix. Once you see it like that, you realise the best way to allocate work is to start with the people with the fewest options first. You give them work they can do with the support of an expert. You then do the same for the rest of the team until you get to Bob. Ideally, you do not allocate Bob to anything. Bob becomes the expert who pairs with other team members. Bob coaches others in his part of the system. Bob is instantly available in the event of a crisis without impacting the time to value.
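
A minimal sketch of that allocation heuristic, using the 0–3 skills matrix described in part 1 of this series (the names, scores and the “2 or above means they can do the work” threshold are invented for illustration):

    # Skills matrix: person -> {task: score}, scored 0-3 as in part 1.
    matrix = {
        "Ann": {"feeds": 1, "payables": 2, "receivables": 0},
        "Raj": {"feeds": 2, "payables": 1, "receivables": 2},
        "Bob": {"feeds": 3, "payables": 3, "receivables": 3},
    }

    def options(person):
        """How many tasks this person could pick up on their own (score 2 or above)."""
        return sum(score >= 2 for score in matrix[person].values())

    # Allocate work to the people with the fewest options first;
    # Bob, with the most options, is left free to pair and coach.
    allocation_order = sorted(matrix, key=options)
    print(allocation_order)  # ['Ann', 'Raj', 'Bob']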

When Bob investigates a production issue, he should do so with another member of the team. As soon as they work out what needs to be done and the other team member is able to finish it alone, Bob becomes available again thus ensuring the team has the maximum number of options to address the next crisis.

Sadly it’s not too uncommon to discover an entire team of Bobs, where everyone on the team is the only person with a “3” for some part of the system. Once they understand what they are doing, the team should self-organise to become Not(Bob).

 


Introducing Staff Liquidity (1 of n)

This is the first of a few blog posts on Staff Liquidity.

Liquidity is a term used in financial markets to describe one aspect of the health of a particular market or stock. A liquid market is one in which there are active buyers and sellers looking to do business. An illiquid market is one where there are no buyers, no sellers, or neither buyers nor sellers.

The liquidity of a market is normally expressed in terms of the spread between what people are prepared to pay (bids) and what they are prepared to sell for (offers). This is the bid-offer spread. The bid-offer spread of a market or stock is an emergent property, based on how long the broker thinks they will have to hold the stock before they can unwind their position. The time taken to unwind a position in a stock is determined by the number of active buyers and sellers (and the size of trade they will do). The other factor driving the bid-offer spread is the volatility of the stock. A volatile stock with few buyers and sellers will have a wide bid-offer spread. A stable stock with lots of buyers and sellers will have a narrow bid-offer spread. A bid or offer is really an option to buy or sell a stock.

The bid-offer spread cannot be used as a measure of liquidity in software development. “Time to unwind a position” is useful as an outcome metric to measure the health of a software development team, but it is not much use as a diagnostic metric or a means to manage the liquidity of a team. So far, the best way I’ve discovered to manage liquidity is to consider the number of options in the system*. This is done quite simply by the team creating a skills matrix.

The team create a simple skills matrix. Along one side are all the tasks that need to be done on the system, and along the other are the names of the people in the team. The members of the team then score themselves as follows:

0 – “I know nothing!”

1 – “I can run it”

2 – “I can tweak it or bug fix it”

3 – “I can redesign or refactor it” / “I OWN it!”

The team can use the matrix to make sure that they can cover the application development under their responsibility. They can ensure that there are a minimum of three people who can perform each task (at level 3). If there are only two, they may introduce a policy of not allowing both to go on holiday at the same time. If there is only one, or none, they know they have a key-man dependency. Key-man dependencies impact the team’s time to value, as the team cannot move forward if that person is not available. At the stand-up, the team can use the matrix to ensure that they address their key-man dependencies.
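
A minimal sketch of that check (the matrix data is invented; the rules are the “fewer than three level-3 people” policies described above):

    # Task -> {person: score}, using the 0-3 self-assessment above.
    matrix = {
        "feeds":       {"Ann": 3, "Raj": 3, "Bob": 3},
        "payables":    {"Ann": 3, "Raj": 1, "Bob": 3},
        "receivables": {"Ann": 0, "Raj": 1, "Bob": 3},
    }

    for task, scores in matrix.items():
        experts = [person for person, score in scores.items() if score == 3]
        if len(experts) <= 1:
            print(f"{task}: key-man dependency ({experts or 'nobody'})")
        elif len(experts) == 2:
            print(f"{task}: holiday policy needed for {experts}")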

There are a number of points to consider when using a skills matrix:

  • The tasks are not necessarily “skills”. It is normal for them to refer to a part of the system, e.g. Feeds, Payables and Receivables rather than C++, Java, SQL. In order to do “Payables”, a team member might need knowledge of C++ and Java.
  • The tasks may be very small, such as running a build script that only takes 5 minutes. If the team has to wait a week for the person to run it, it has taken a week, even though the run time was only 5 minutes. If there is only one person who can do that task, the team might list it until it is no longer a problem.
  • People who have moved internally to other teams can be included in the skills matrix provided there is an agreement in place for them to come back and help the team.
  • Sometimes people overstate their ability. If someone says they are a “3″, ask them to do the next piece of work in that area to test their ability.
  • Sometimes people understate their ability because they do not want to do the work. This is actually a good thing, as both ability and willingness are needed to ensure a task gets done. It helps prevent the situation where someone leaves the team because they keep being assigned to work they do not want to do. It can act as a forcing function for other people to learn.
  • From experience, updating the matrix every two months gives a chance to see development. A month is not quite enough.

* Footnote: Some people in the Kanban community have suggested that liquidity can be managed by measuring the number of transactions in the system. There are several problems with that approach. First, it manages transaction volume rather than liquidity. Second, the “liquidity” metric is easily gamed by breaking tasks into smaller and smaller items. Third, it does not systematically look at the whole system. Fourth, the team may be working on tasks in one area of the system while other areas are unworkable or illiquid. Fifth, the metric does not indicate the inclination of the team towards certain work, and hides the fact that team members may not want to do the work.


Agile doesn’t Scale

Scaling Agile seems to be a hot topic at the moment. A number of people are suggesting that “Agile doesn’t Scale”. They are right and they are wrong.

First, let’s clarify what I mean by Scaling Agile. Scaling Agile does not mean a large organisation where all the teams use Scrum or Kanban. It means that the entire organisation is using Agile: the usual suspects of Development, Testing and Product Management, but also Finance, Marketing, Operations and Management. Scaling means that as an organisation grows in its use of Agile, it can do so fairly smoothly.

The “Agile doesn’t Scale” crowd are right!

Dave Snowden says that you cannot scale Agile using Recipes; you need Chefs. You need people who have done their apprenticeship for several years and studied the material. I agree with Dave: you need Chefs to scale Agile, people with years of experience in Agile who understand the theory and principles. However, that is not enough. You also need people who understand organisations rather than just development teams, people whose years of experience include management, finance, operations and marketing.

The Chefs exist, but there are not that many of them about. A number of them are helping organisations to scale. It will not be possible for a few years to determine whether they have scaled Agile in a way that is sustainable. (I remember a few years ago an Investment Bank in London had an entire department that was Agile, but it did not survive for more than a couple of years as the developers had not incorporated the business analysts.)

In order for the claim “Agile Scales” to be valid, we need a set of patterns or recipes that people can use without the need for a Chef.

Those recipes do not exist yet. So the “Agile does not scale” crowd are correct.

The “Agile doesn’t Scale” crowd are wrong!

After the Chefs have scaled Agile a number of times, it will be possible to examine their stories of success and failure to extract patterns for scaling Agile. This will not happen for a few years, as it is necessary to establish whether the patterns are stable in the organisation or whether they need the support of a powerful manager to ensure their success.

Even though the recipes do not exist yet, they will start to emerge over the coming years. So the “Agile does scale” crowd are correct as well.

You never know, some of the patterns in the SAFe framework may turn out to be valid. (So far, listening to stories of people scaling Agile, you need to scale using the product management function, who work on a single enterprise-level backlog, with a development/testing group that has staff liquidity.)


Command OR Control

DISCLAIMER: These ideas are not fully formed. Please be gentle. ;-)

For the past decade or so I have “known” that “Command AND Control” is the wrong way to run a team or organisation. To me, this meant that Commanding or Controlling was wrong wrong wrong.

I’ve recently been doing some work scaling Agile in an organisation. The role of management or executives is key. They are responsible for focusing the organisation, ensuring that its precious resources are focused in the right place. Some executives do the number crunching in their own heads, others distribute the cognition (self organisation), and some just make it up based on gut instinct. The decision of “what to do?” uses resources from across the organisation: marketing provide market intelligence, and development/operations determine what is possible. Once the decision is made, the executives need to ensure the organisation focuses on it.

There are two ways that the executive can instruct the organisation. They can “Command” it by setting it a set of goals, or they can “Control” it by organising it in such a way that it can be controlled from the top as the executive makes course corrections.

The Marines are an example of a “Command” organisation. The marines are given a clear objective such as “Secure that bridge” or “Destroy that factory”. One individual in the marines can make the difference. In order to do this, the marines are highly trained in a number of different disciplines so that they can “Adapt and Improvise”. A classic example in the business world is Jack Welch’s command to GE “Be the first or second in each market, otherwise exit the market”.

The infantry are an example of a “Control” organisation. Individuals have a lesser impact in a control organisation. The infantry are typically deployed on a battlefield where coordination is more important than individual acts. The training required for individuals in a control organisation is much more limited. The training does not encourage creativity but rather ensures consistency and conformance to the plan. A classic example in the business world is McDonalds. The value to the customer of a McDonalds is that the Big Mac is the same everywhere in the world… from Tokyo to Toronto and Paris, France to Paris, Texas. Creativity can destroy the value of a “control” organisation, imagine the typical customer’s response to a Big Mac made from raw fish or horse meat.

“Control” is important when cost control and consistency matter. It is only appropriate in software development when an effective mechanism to provide command is not available. For software development, “control” is a crude tool, as much of the work requires a significant level of creativity. However, without an effective mechanism to coordinate and prioritise “Commands”, “Control” is the only way to provide focus in a large organisation. Consider an investment bank. It will allocate a budget to each area of the IT development organisation: Bonds, Equity Derivatives, Fixed Income Derivatives, Operations, Finance and Compliance. Each department attempts to optimise the value it delivers. When there is not enough budget, the executives decide whether to provide more. Any large programmes requiring additional funding are escalated to Management. Within the organisation, the individual groups optimise their profit based on their constraints. This is achieved by providing bonuses based on the achievement of goals (profit). In effect, Investment Banks operate by getting each business to optimise within its constraints.

“Command” in the context of software may be functionality based, such as “Deliver Product X/Component Y”, or it may be metric based, such as “Increase Revenue/Reduce Customer Defections in Asia”. It is important for each group to understand what its goal is, especially in a large organisation with multiple products and customers, where there are competing short term and long term priorities. The executive ensures the organisation is focused on the right thing. Once the goals have been set, the organisation should align to optimise achieving the goals. Distributed approaches (self organisation) are the best, providing appropriate mechanisms exist to coordinate the activity. Organisations with more liquidity can respond more quickly.

So executives should ensure focus using command OR control as appropriate.

Unfortunately, executives will often command AND control. The problem is that the control (budget) process is normally an annual process, and as a result it reduces the liquidity of the organisation. Any changes to the control structure needed to deliver the command goals result in time-consuming consultation with the executives.

So in summary, an executive needs to “command” OR “control” to ensure the organisation is focused on the right set of organisational goals. Executives should avoid “command AND control” and should speed the transition from one to the other to avoid organisational gridlock.

Like I say. Still getting these ideas clear in my mind. I would welcome feedback.


Two Legs Good.

Two Legs Good

At the end of George Orwell’s “Animal Farm”, the pigs who were the leaders of the revolution partied with human farmers and wrote “Two legs good” on the farm barn wall. They wrote over “Two legs bad, four legs good” which had been their slogan when they led the other farm yard animals in revolution against the farmer who ran the farm before them.

Two weeks ago at the Agile2013 conference in Nashville I felt compelled to write “Two legs good” in bright red paint around the venue. I did not, of course, but I felt that we should mark the end of the Agile Experiment and the return to the status quo.

Two legs bad, Four legs good

The Agile Revolution was a reaction to the prevailing approach of telling people how to develop software. The manifesto was clear: “We will continue to develop better ways of delivering software by DOING IT and helping others DO IT.” The manifesto was followed by a status report: “so far, we have come to value…”

The Agile Manifesto could be rewritten “Theory only bad. Practice supported by Theory good.”

The Agile Manifesto was a call to arms. No longer would we develop software based on some theory developed in University or IBM labs. We would apply an empirical (experiential learning) approach. Practitioners working on real world projects would share their experiences. The Agile Community would then test them out and refine them. Then and only then would they be promoted commercially. Version One and Rally established themselves as software vendors who provided tools to support the Agile Practices rather than promote new practices to sell their tools.

This did not mean that the Agile Community ditched theory and ideas. I used theory to develop and hone my own practices. When I was sure that the practices worked, I shared them. I did not promote my theory untested in the real world.

Two legs good

Two weeks ago I walked around the vendor booths at Agile2013 and I was disgusted. The farmers were back. There were several booths promoting SAFe, a framework for scaling Agile. I know nothing about the details of SAFe other than the following:

  • The only people who really knew anything about it were selling it, either as trainers or consultants.
  • I did not encounter a single person who had successfully implemented SAFe.
  • There are few, if any, case studies of corporations implementing SAFe.

SAFe is meant to be an enterprise-wide framework. These frameworks can take years to implement and even longer to assess whether they are successful. The Agile Community is now in the grip of a SAFe selling frenzy.

If I were a manager attending Agile2013 who did not know too much about Agile, I would be under the impression that SAFe was safe. After all, there were three or four vendors promoting it. I would take it home to my enterprise, unaware that I was testing yet another theory of how to develop software.

Please take a minute or two and reflect in silence. Think back just a few years to when that Agile learning machine had produced practices that had been tested first. Now pay your respects as we lament the death of Agile.

Someone just shot the Agile brand in the back of the head, but at least the Agile Alliance got to charge them for doing it.

“Two legs good”. Paint it big. Paint it red.


Value and Metrics

A significant risk in any business is that you do not know where you are. Value is the thing that we aim to deliver, but value data, or metrics, are how we measure our success at achieving our goal. The only way we know for certain that we have achieved our goal is when the value data shows us success. For example, we aim to deliver $2 million from a feature. The only way we know we have achieved that is when we have the $2 million in our account.

Simple right?

Nope.

  • How do we know that it is the new feature that has delivered the value?
  • Do we have to wait until the money rolls in before we know how we are doing?

If we have a single feature that people are buying it may be fairly simple, but that is rarely the case as most applications have many features. We have to put more thought into how we look at our value data. First, we need value data on how the user is using the new feature.

  • Do the users use the feature?
  • How often do they use the feature?
  • For how long do they use the feature?

Our value data impacts how we might introduce a feature to our users. Do we offer a couple of free goes before they have to pay, or do they pay before they use it? Offering a couple of free goes allows us to gather value data (or metrics) on how people use the feature. We want to know how long the user spends on each step of using the feature and whether they stop using the feature before they complete all the steps. This will help us understand whether the user values the output enough to put the effort in to provide all the inputs. The value data will tell us which steps result in people dropping out of the process.

Context is everything. Context drives how you use value data. A web site is different to an installed application. Retail is different to enterprise. The value of each customer impacts how much we are prepared to test on them. The golden rule is to get feedback on our business case.

If we have an application that the user can download for free and have a couple of free goes before they pay, we have to build a business model ( or belief system ) around how we will make money.

For example, we assume 1,000,000 people download the application. 50% use the first free go with the feature, of whom 20% use the second free go, of whom 10% then pay for the feature. The feature costs $10.00. This would result in revenue of 1,000,000 * 50% * 20% * 10% * $10.00 = $100,000. We could release an early version of the feature to test how many people download and use the free versions. We can look at when people drop out of using the feature.
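
A minimal sketch of that funnel model (a hypothetical helper; the rates are the assumptions from the example above), which makes it easy to see how sensitive the revenue is to each conversion rate:

    def funnel_revenue(downloads, conversion_rates, price):
        """Multiply the audience through each conversion step, then by the price."""
        paying = downloads
        for rate in conversion_rates:
            paying *= rate
        return paying * price

    # The assumptions from the example above.
    print(funnel_revenue(1_000_000, [0.50, 0.20, 0.10], 10.00))  # 100000.0

    # Replace an assumption with a measurement from the early release,
    # e.g. if only 5% of second-go users actually pay:
    print(funnel_revenue(1_000_000, [0.50, 0.20, 0.05], 10.00))  # 50000.0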

We aim to deliver value but the value data (or metrics) let us know whether things are going to plan.


Value is not enough, we need to consider the return on investment.

Return on Investment breaks Feature Injection.

For many years I have focused on value as the driver of IT development. “Break the model” is the part of Feature Injection that tells us to look for examples that do not fit within the “Olaf” (model). For some time I’ve been aware that there are examples of development investment that do not fit nicely within the Feature Injection process. These examples have been at the edge of my peripheral vision: I’ve been aware of them but almost subconsciously ignored them because they do not fit the model. This is exactly the bias that “Break the Model” attempts to address.

What are they? Improvements to the inputs and processes (especially non-functional requirements). Obviously they are valuable, but they do not deliver value to the user, as we know all of the value in a system comes from consuming the output, i.e. the outcome.

According to the definition of Feature Injection, all value comes from the output, which is true; however, improving inputs and processes also brings benefit to the user, which is also true.

I have been considering a number of successful companies that do not have an obvious source of income, for example Twitter and Instagram. These companies value increasing the number of people who have installed their software (network size) and the amount that people use their network (network activity).

Users of an application will probably download it because it is valuable to them.

The amount of usage will depend on the return on investment of the application. This is a function of the value of the application (which is in its output) and the investment the user needs to make in order to get that value.

The investment, from the user’s perspective, is determined by the effectiveness of the inputs and processes. From the user’s perspective, the investment takes many forms, some of which are:

  • Money ( to buy the software / service )
  • Time ( How long you must work to get the value you seek )
  • Mental investment ( How much brain power is required to derive the value)
  • Delay ( How long you must wait for the value )
  • Frustration ( How easy and forgiving the application is to use. Whether the app plays “snakes and ladders” with the user. )
  • Training ( How much time and effort must be invested learning to operate the application )
  • Transferability ( Whether the investment is transferable to another context or application )
  • Specialisation.

This leads to an extension of the rules for generating business value.

Development generates business value when it reduces the investment that a user needs to make in order to achieve the value they desire.
This is an addition to the existing definition rather than a rewrite. It also means that Feature Injection needs an additional step, so that it becomes:

  1. Hunt the value (i.e. Increase value)
  2. Inject the features
  3. Break the model
  4. Reduce the user’s investment.

Example – Cameras*.

Cameras have no independent value in the process of image making. The value of a camera is to capture images that can be shared with others across time and location. The value is in the images that are viewed at some later date.

The value in the process has never changed. The value is in the viewing of the images. If we cannot view the image, then the camera could easily be replaced with a stone with no loss of value to the value stream.

In the early days of cameras, a low-quality photographic image would be created by a very expensive process that involved learning optics and chemistry and building a dark room. This meant that the process was limited to a very small number of people, who only captured very high-value images (the Queen’s wedding and coronation).

The photo production process was automated but still specialist. This meant that people could use a camera to capture images, but there was a delay of days to recover the images. The cost of investment was reduced, so we could capture images of lower value (normal people’s weddings, birthdays and holidays).

One of the problems with the process was a lack of forgiveness. You never knew whether your images had been captured. This changed with the introduction of the Polaroid camera, which provided instant feedback. You could now take photos until you got it right.

The electronic camera wiped out much of the production process and thus the cost of the images. (Now everyday events could be captured.)

The internet and social media sites wiped out the remaining cost of getting the images in front of your intended viewers. (Even trivial events can be shared at very little cost).

This has led to an exponential increase in the number of captured images.

Feature Injection was broken. It failed to properly incorporate non functional requirements related to inputs and processes in the investment decision process. We can fix this now and start to look for more examples that do not fit.

*Deliberately chosen to get a response from @Cyetain

