For the past five years, I have worked extensively on government projects using agile. All of them went through a Government Digital Service (GDS) assessment of some kind at some point in their evolution. All knew of the Digital Service Standard’s (DSS) 18 criteria points, and were aware of both the importance and the consequences of these quality gates, yet some projects still ended up failing.
I read recently that GDS has just processed its 100th application. They have a reputation for being tough to deal with, and their published overall failure rate averages out at around 30-40%. So on those dark days of failure, I was clearly not alone, despite how I felt.
But one point worth clarifying early doors: adhering to your chosen agile methodology is no guarantee of passing, or even meeting the basic requirements of, a GDS assessment – even though DSS point #4 – Use agile methods – is an obvious and important one.
However, GDS assessment success ultimately means showing you have considered all the criteria. What you feed back to them in your presentation should always provide:
- Evidential examples of how you met the stated criteria
- Evidential examples of how you met selected parts of the stated criteria, especially if some elements didn’t apply
- Evidential examples of building towards, but not yet meeting, the criteria as specified
- Reasoned examples describing how you interpreted the required criteria, and answered accordingly
- Examples showing why you believe a specified criterion does not apply to your project
The main takeaway here is that demonstrating consideration of all 18 points is key, but performing actions against every criterion is optional, especially if you can give good reasons why you think your project doesn’t qualify. It would be wise to read each point carefully and consider its meaning. If you think something falls under one or more criteria – or none – make sure you say so!
During Alpha assessment preparation, however, you might simply not know everything that your project will deliver, or have a fully developed route map. Telling GDS about any ambiguity – especially if it’s accompanied by a plan to resolve any lack of detail or analysis – is also a valid discussion to have. As is any work in progress, or things that you think need to be included but may typically fall outside the assessment remit.
Ultimately you need to have a conversation with GDS. It’s not just an inspection or a one-way information presentation. They have a good grasp of their governance and what their expectations are, but I know of instances where reasoning and common sense have prevailed because the people involved – on both sides – listened, rather than robotically applying a set of seemingly intransigent rules.
In all my dealings with GDS staff, I have found them demonstrably willing to help, advise or steer. If you engage with them early enough, they have been known to perform a basic audit, or at least take calls to discuss the health of certain projects pre-assessment. But even with their criteria to guide, a talented delivery team and super-friendly GDS folk, some government agile projects I have been part of have failed. Let me share some reasons why…
The application of agile in a government project must correspond to both the client’s appetite for change and the client’s ability to change. Often, this kind of change affects more than just the current project, as the end-to-end service typically falls under the same improvement spotlight.
If teams struggle with agile, they are just as likely to struggle with meeting all the DSS criteria. One key indicator was how quickly they wanted defined, future release-based content for their project plan. Some lasted a whole two weeks before wanting to know specifically what the functional content of each release over the next 9-12 months was going to be. When the security blanket of what you already know is just too difficult to let go of, you end up with a compromise like Wagile. Even Wikipedia says this is Waterfall “…masquerading as Agile software development”.
If pragmatically applied, agile can be made to work in almost any situation, which seems preferable when something like Wagile has no universally agreed definition and can cover such a multitude of sins.
“I know what I want”
If a client says this, teams can be forgiven for thinking “Great! Someone engaged and knowledgeable, with a clear project end goal. Tell me everything!” What you subsequently find out is that the clarity is just a bunch of well thought-out ideas for a new or improved website (or part thereof) and nothing more. The real issues that lie in the wider service offering are then missed or glossed over. Fixating on one specific option (say, website delivery) over an end-to-end service review and transformation is a common mistake, and not one made only in projects…
I used to volunteer at a café run by a national homeless charity. I worked as a barista, where making a good cup of coffee required competency in each of the 5 stages (Grind, Puck, Brew, Froth and Mix) to a specific standard. My training was over 5 days, and my first day-long session’s objective was simply to know how to set the grind consistency and grind only enough beans to fill the puck. (Freshly ground coffee goes noticeably stale-tasting if left unused for more than 10 minutes.) My trainer would then complete the other 4 stages. In the next session I progressed to compressing the grounds, cleaning up the puck and selecting the right cup, and again my trainer would do the rest. After 5 training sessions, I knew I could make a very good (visitor-certified) cup of coffee from start to finish. The reason was that I had the practice, experience and exposure to what ‘excellent’ looked like across the entire end-to-end process, and had worked with experts to acquire my knowledge in a phased way.

Subsequent trainees under my tutelage often complained at their apparent lack of progress, so occasionally I let them do all 5 parts after only 1 or 2 training sessions. But I always got them to sample what they had produced. They quickly appreciated the poorer-tasting result, even though some steps had been completed to the high standards expected. They realised that in any process, it only takes 1 poorly executed step to create an inferior product and undermine quality, rendering the outcome of all the previous, carefully completed steps poor.
MVP stands for Minimal Viable Product, right?
I have seen government projects get confused as to what Minimal means. Don’t be fooled into thinking this is a trick question. Minimal means just that, and its size is completely dictated by the client. The bigger the MVP, the more money and time you typically need to achieve it, and the more you will need to know about the principles, ideas or tenets you are trying to prove or disprove. The smaller the MVP, the less you generally need to know from users, the cheaper it is to assemble and the quicker you can get it to users to obtain evidential feedback. I have seen MVPs used as pure research vehicles to initially assure some of the unknowns, but then mature into a multi-functional product set.
Viable is another sticking point…
It really does just mean feasible, practical or achievable. However, for it to make sense to customers, it would preferably be something like one (of many) revamped processes; a new process; a repeatable element of multiple processes, or a set of sequentially or logically-bound processes.
One 16-week discovery project I worked on wanted both to establish a new Single Sign On (SSO) process for a myriad of home-grown, COTS and standard applications, and to bring the publishing of ID cards ‘in-house’. Rather than opt for the easier(?) SSO option, we took the road less travelled and, after a lot of false starts, discovered that the ID card printers were generic, not tied to the card-publishing software supplied. We suggested the MVP become a demonstration that the same physical device could operate with customisable open-source software rather than the software supplied. That decision assured the client that we were prepared to invest the time to get to know their estate, come up with practical and useful ways to re-use existing facilities, and listen to them when they said they had concerns over the long-term future of such a small but important ID-provisioning service. Creating a new SSO process simply wouldn’t have delivered something as meaningful in the same time frame, or given us an opportunity to showcase the team’s ability to solve esoterically complex issues.
The ‘Product’ part means it must be sellable
Not exactly. It can just be a cohesive or entry-level set of capabilities that may not even be fully formed. Yet. But you will know, or have a matured idea of, what the next version(s) could look like.
If you are not legislatively, financially or creatively bound, the next steps or future directions can all be set by customers. You can trial choices or versions of events, or engineer something and ask for next steps via usage evidence or feedback. By creating various offline scenarios, you can test, gather evidence and discover more about what turns on the light for customers, as well as what turns that light off. The overall objective of any MVP is to do more of the former, and less (or none) of the latter.
Also, the audience you release your MVP to can be completely under your control. Terms like ‘limited release’ or ‘private beta’ are perfectly valid if the audience is staffed by friendlies, and can really help smooth out any product rough edges. These normal-use terms mean you can get initial quality feedback before opening your product up to the public.
Also, don’t forget that being public doesn’t always mean ‘everyone’ should know about it – if you control who receives your “new” beta-site URL, you control who can give feedback on it. If your MVP can also be self-served, your chances of participation will increase, but tracking participant feedback can then become your new issue, unless it is prefaced by a credential check or a ‘slim’ registration process.
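To make the idea concrete, here is a minimal sketch of how a ‘slim’ registration step can keep private-beta feedback trackable. All names here (`PrivateBetaGate`, `invite`, `record_feedback`) are illustrative inventions, not any real GDS or GOV.UK API: each friendly tester is issued a one-off token to append to the beta URL, and feedback is only accepted against a recognised token.

```python
# Illustrative sketch only: a tiny in-memory gate for a private beta.
import secrets


class PrivateBetaGate:
    def __init__(self):
        self.invites = {}   # token -> tester name
        self.feedback = {}  # token -> list of comments

    def invite(self, tester_name):
        """Issue a unique token to include in the beta URL sent to a friendly tester."""
        token = secrets.token_urlsafe(16)
        self.invites[token] = tester_name
        self.feedback[token] = []
        return token

    def record_feedback(self, token, comment):
        """Accept feedback only from known invitees, so every comment stays attributable."""
        if token not in self.invites:
            return False  # unknown token: the URL has leaked beyond the friendly group
        self.feedback[token].append(comment)
        return True


gate = PrivateBetaGate()
token = gate.invite("Alex")
gate.record_feedback(token, "The new journey is much clearer")
```

In a real service this bookkeeping would sit behind whatever credential check you already have, but the principle is the same: participation is self-served, yet every piece of feedback maps back to a known participant.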
But what about MMF or ‘Thin Slice’?
I believe a Minimum Marketable Feature is a very different animal compared to an MVP. My reasoning is simple: a feature is usually smaller than a product, as a product typically consists of multiple features. MMFs can really extol the virtues of an individual feature and are often higher fidelity than an MVP. For me, the real beauty of an MMF is that it can be as young as an HTML-based 2D model, but as mature as your time, budget or resources need it to be. They are also far more disposable in nature. After all, creating HTML models isn’t really development, so if you need to change the model, anyone competent with tools like Axure RP(TM) or OmniGraffle(TM) could change things at comparatively short notice. I also like the fact that changing MMFs at this level is the cheapest form of software development, because no code has been written.
What really works for Government agile projects?
For me, agile has really shown true benefit where there have been strong but open and knowledgeable people in the following roles:
Business Leads / Subject Matter Experts
- If they know their business and its many processes to a comprehensive level of detail, and have specialist knowledge in the delivery area(s) allocated
- If they help describe, gauge and measure change impact in those specialist areas, but know their limits and will include their peers to help resolve any related or more complex integration issues. To quote Kenny Rogers, “you’ve got to know when to hold ’em, know when to fold ’em”
- If they have a technical background and can understand development issues, but pragmatically take a very ‘hands off’ position so as not to lead the solution development
- If they can assure those up the management chain – and are empowered by them – that, provided the deliverables are achieved by the desired dates, the day-to-day mechanics of how the project achieves them are left up to the PO
- If issue reporting is done by exception
Technical Architects / Leads
- If they can factually explain the conceptual size, shape and complexity of the issue(s) or the integration, from a technical perspective
- If they can identify all relevant external technical dependencies
- If they can challenge the development impact and effort estimates, if needed
- If they can comprehend the business issues presented by BA/PM/DM
- If they can articulate technical problems, risks, issues, decisions to a BA/PM/DM
- If they can propose technically-adept solutions to the BA/PM/DM
Delivery / Stakeholder Managers
- If they can ensure a clear product vision exists
- If they have engaged the stakeholders via an agreed communication plan
- If all conversations are highly focussed, selection of the methodology is well informed and is regularly monitored for fit
- If they are viewed as a safe pair of hands from a delivery perspective but also have access to a more experienced mentor
- If they are expert in at least their prescribed role but can also draw on experiences in other complementary roles
I believe there are some agile processes that help shore up the delivery capability of these roles:
- Well-crafted Definitions of Done and Ready
- Well-defined and advertised Sprint goals
- A clear list of deliverables that the Minimum Viable Product or Minimum Marketable Feature set is expected to produce
- A very clear view on what is NOT expected to be produced
- A To-Be model showing how the expected End-to-End service will look, or at least show the areas to be considered for improvement
It is difficult to construct any kind of general rationale for agile failure. Given the situations described above, the only demonstrable links between the projects where agile failed to make an impact were that:
- Both scope and expected deliverables were not clearly defined and agreed up-front, and were never suitably defined thereafter
- Roles & responsibilities were not officially defined, agreed and signed up to by both parties
- Agile project implementation was only partially done or not subscribed to in enough depth
- The End-to-End process was unable to make the other process-based changes identified
- The audience weren’t assured quickly enough that agile was the right choice for their project
Agile can help projects deliver capabilities that clients like and customers want to use. But because people often revert to what they feel comfortable with, rather than fully embracing something new that will impact their daily role, engagement issues and adoption of service standards can undermine the potential of any agile project. Especially if that change affects the perceived or actual productivity of the project team, as it’s not (always!) their fault…