Earlier this week I met with one of our longest-standing clients. Discussing her wider challenges, she remarked, “We’ve been Agile now for about eight years and we’re not as Agile as we used to be. I don’t, however, know whether this is good or bad.” This type of question is not uncommon, and I suspect the context is familiar to many organisations.
Since the Agile movement went mainstream towards the end of the last decade, many organisations have invested heavily to “go Agile” (note the deliberate use of the capital “A” to denote the Agile “brand”): people have been trained, Scrum Masters hired, tools purchased and whole change programmes executed. But where are we now, and have the expectations for agility been achieved?
The world has changed
Version One’s State of Agile Report from 2018 supports what many of us already intuitively feel: Agile is now pervasive, but the majority of organisations do not believe they are fully mature in its application. According to the report, only 2% of organisations surveyed have no agile teams at all, yet only 4% believe they are mature. The large majority (59%) report that they “use agile practices but are still maturing”, as opposed to the 12% who believe they have a “high level of competency”.
But what does it mean to have a “high level of competency”, and is it good or bad if Agile practices are not applied rigorously?
The flow to value
Unsurprisingly, there is now a plethora of maturity models available for agile, but many focus on the mechanics of the process rather than the outcomes it is meant to achieve. They therefore miss the point altogether (in my humble opinion). What matters is that we’re able to deliver software quickly and efficiently and meet the needs of our customers and our stakeholders.
To assess the quality of the process, and therefore overall agility, we need to return to basic principles: agile is concerned with flow to value. From there the underlying principles (including those in the Agile Manifesto) should follow, without necessarily implying the dogmatic application of a particular flavour of Agile.
With this in mind, we recommend the following be reviewed as part of any maturity assessment:
Cycle times - how long does it take to get a change from initial business idea to live? The full journey can be difficult to measure, so measuring from the point a change request enters delivery to the point it goes live may be more practical. However, measuring the full end-to-end cycle time is extremely useful, as it will highlight any lack of agility upstream within the business (e.g. within product management). A minimal measurement sketch follows below.
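To make this concrete, here is a minimal sketch of how both measures could be calculated from a ticket export. The field names (idea_raised, dev_started, released) are illustrative assumptions, not a reference to any particular tool’s schema.

```python
from datetime import datetime
from statistics import median

# Illustrative ticket export: the field names are hypothetical and would
# come from whatever work-tracking tool you use.
tickets = [
    {"idea_raised": "2023-01-10", "dev_started": "2023-06-01", "released": "2023-07-15"},
    {"idea_raised": "2023-02-03", "dev_started": "2023-03-20", "released": "2023-04-12"},
]

def days_between(start: str, end: str) -> int:
    """Whole days between two ISO-format dates."""
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).days

# End-to-end cycle time: business idea to live (highlights upstream delays).
end_to_end = [days_between(t["idea_raised"], t["released"]) for t in tickets]

# Delivery cycle time: work entering delivery to live (the narrower measure).
delivery = [days_between(t["dev_started"], t["released"]) for t in tickets]

print(f"Median end-to-end cycle time: {median(end_to_end)} days")
print(f"Median delivery cycle time:   {median(delivery)} days")
```

Tracking both figures side by side is what exposes the gap between the delivery team’s performance and the organisation’s ability to get ideas to the team in the first place.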
A good example of this comes from some work I did with a global reinsurer a number of years ago. A review of their process revealed a typical lag of about a year between the business agreeing on an idea and the point at which the development teams started work. The potential to reduce the overall cycle time was obvious, regardless of the quality of the delivery process. On further investigation it became clear that the problem lay with governance and compliance bodies, which were looking for too much certainty and detail too early in the process: a common agile anti-pattern.
Further downstream, increased cycle times can point to a number of common issues: batch sizes are too large, ideas are not fully formed when they enter the delivery pipeline, there are too many hand-offs within the delivery process with the team not communicating or collaborating effectively, or there is too much (unnecessary) post-production effort. In each case we can look to Agile and Lean principles for solutions.
A particularly ominous symptom of long cycle times is churn: work being rejected post-development due to defects, which “churns” back into the backlog for remedial work. Again, this highlights poor quality practices within development and/or poor collaboration with business representatives and quality assurance (which ideally should be embedded within the delivery team).
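Churn is straightforward to track once the team records how many stories are accepted versus rejected at the end of each sprint. The sketch below is illustrative only; the counts are made up and the 10% threshold is simply a hypothetical target for flagging sprints worth investigating.

```python
# Hypothetical per-sprint counts: stories accepted at review versus those
# rejected back into the backlog as rework.
sprints = [
    {"sprint": 1, "accepted": 12, "rejected": 8},
    {"sprint": 2, "accepted": 15, "rejected": 6},
    {"sprint": 3, "accepted": 18, "rejected": 2},
]

for s in sprints:
    delivered = s["accepted"] + s["rejected"]
    churn = s["rejected"] / delivered if delivered else 0.0
    flag = "  <- investigate" if churn > 0.10 else ""
    print(f"Sprint {s['sprint']}: churn {churn:.0%}{flag}")
```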
Similar to the earlier example, a couple of years ago I worked with an organisation that provides managed services for B2C insurance companies. At the beginning of the engagement we were told the team could not deliver fast enough to meet the demands of the business, but after a few weeks of following the flow of change through the process it became clear that the issue was the level of churn. The velocity of development was sufficient, but at the end of each sprint up to 40% of the stories would return to the backlog as rework.
This clear diagnosis led to a number of steps to improve quality and fit: better collaboration with business representatives throughout the process, a new, clearer definition of “done”, and a shift of testing left into the sprint itself. With this focus we were able to reduce churn to less than 10% within a few sprints, and overall productivity increased substantially (as did the morale of the team).
Finally, excessive post-production effort implies an immature application of DevOps practices and culture, an extensive subject in itself, which is discussed in Bartosz’s excellent blog An Organization’s Journey to a DevOps Mindset and Culture.
Are we optimal?
Returning to the original question of whether an organisation’s delivery process remains effective, it should hopefully be clear that a modified approach which no longer looks perfectly Agile does not necessarily mean that delivery isn’t optimal or agile. If changes are made to optimise flow then this is clearly a good thing.
As organisations adapt, it’s important to step back, think about where the true value lies and mature accordingly: continually reviewing, learning and improving, and potentially looking less Agile in the process. The good news is that the benefits are there to be realised. Persevere!