There’s a lot of hype around the benefits and use cases of AI, especially since OpenAI’s ChatGPT was released at the end of last year. This hype is one of the things that makes it quite hard to predict the long-term versus short-term potential of AI to change key aspects of how the public sector operates.
I’m deeply suspicious of the folk out there who are making absolute declarations about when these changes will take place, but one thing is true: with the proliferation of AI tools that give human-like responses, you no longer need to be an expert in AI to see what AI is capable of doing.
In my view, some of the future-gazing is at least in part a distraction, and we should instead look at the AI work happening right now to understand how to be ready and how to manage the risks. AI is already being used in both central and local government. If you’re under pressure to be ahead of the curve but worried about the immaturity of the technology and unsure where to start, I’d suggest your first step is the same as it always was – start with a clear understanding of what your organisation aims to achieve and the challenges that need to be overcome.
As I’ll go on to explain, while the technologies may be new, your best next steps in harnessing them are tried and tested, and you already have many of the skills you need – but first, here are some examples of real uses in government that help to illustrate the points I’ll go on to make.
AI use in government
The UK Government has already begun to make careful use of these technologies over the last few years. When I worked at the Government Digital Service (GDS), there was a great example on GOV.UK which supported one of the core goals at the time – making content more accessible to users. At the time there were thousands of untagged pages of content that we estimated would take years for civil servants across government to review and tag. The project team built a supervised machine learning model that could recognise patterns in the pages, read the content, and suggest suitable tags. The final model was able to tag 96% of the content in about six months, making huge amounts of content more accessible to citizens.
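To make the idea of supervised tagging a little more concrete, here's a toy sketch in Python – emphatically not the actual GDS model, and with training data and tag names invented purely for illustration. It "learns" which words co-occur with each tag from a handful of already-tagged pages, then suggests tags for new, untagged content:

```python
from collections import Counter, defaultdict

def train_tagger(labelled_pages):
    """Learn which words are associated with each tag from
    already-tagged pages (the supervised training set)."""
    word_counts = defaultdict(Counter)
    for text, tags in labelled_pages:
        words = text.lower().split()
        for tag in tags:
            word_counts[tag].update(words)
    return word_counts

def suggest_tags(model, text, top_n=1):
    """Score each known tag by how often its training pages
    used the words on this page, and return the best matches."""
    words = text.lower().split()
    scores = {
        tag: sum(counts[w] for w in words)
        for tag, counts in model.items()
    }
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:top_n]

# Tiny invented training set – purely illustrative
training = [
    ("apply for a passport renewal online", ["travel"]),
    ("childcare support and school admissions", ["education"]),
    ("passport fees and travel documents", ["travel"]),
]

model = train_tagger(training)
print(suggest_tags(model, "renew your passport before you travel"))
```

The real project worked at a vastly larger scale with far more sophisticated techniques, but the essential shape is the same: humans supply labelled examples, the model generalises the patterns, and humans review the suggestions.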
A machine learning example from local government is that of Swindon Council, which, with a tiny team of three people and seed funding of just £100K, successfully applied AI to a number of internal business challenges – one notable example being the use of machine learning to cut its document translation costs by 99.6% and reduce the turnaround time from weeks to minutes.
Since 2015, the US Citizenship and Immigration Services (USCIS) has been using Emma, their virtual assistant, to reduce the burden on call centres which receive a large number of calls for general information on the immigration process. In 2020, Emma responded successfully to 35 million enquiries from more than 11 million users. By 2021, Emma’s success rate was 93% in English and 90% in Spanish. As another example of supervised machine learning, Emma trains with adjudicators and case managers, as well as through interactions with the public.
Since 2014, Singapore has had a similar concept called ‘Ask Jamie’ which they have implemented across 70 government agency websites. It supports the Singapore government’s no-wrong-door approach by surfacing answers from across all of those websites, regardless of the site on which the user posed their question.
I’ve spoken previously about the galvanising effect the pandemic had on cross-government collaboration, providing a single point of focus for the COVID response. AI can find patterns where humans can’t and can process increasing amounts of data, offering interesting opportunities in medicine, education and healthcare. A brilliant example of this is the National COVID-19 Chest Imaging Database case study of work undertaken by teams from NHSX, the British Society of Thoracic Imaging (BSTI), and Royal Surrey County Hospital NHS Foundation Trust. This project used technology that BSTI had developed to collate imaging data from over 90 hospitals to help clinical researchers in medical imaging and artificial intelligence fields learn as much as possible about COVID-19 by analysing a comprehensive sampling of acute chest imaging.
Your best next steps
All of these examples had their origin in a clear organisational challenge. Technology was just the means to an end. So, as we explore the best next steps you can take in harnessing AI’s potential, where should we start?
Start with organisational challenges
Fund the problem, not the tech – and start small, focusing on value. As my excellent colleague Pete Chamberlin recently said, one of the problems with innovation is that it distracts organisations from fixing the ‘plumbing’. All of the examples I mention above address a priority problem space for government – important challenges that the organisations needed to address and, in many cases, which would have taken humans a long time and a lot of effort to solve. New technologies made a big difference in these cases, but it’s vital that the technology is harnessed to the organisational need and not the other way around.
Be wary of AI-centric project proposals and remember that most AI projects are predicted to fall short of expectations. Projects shouldn’t be spun up just to make use of a new, hyped technology; new technologies should make your pre-existing organisational challenges easier to solve. Run a discovery to explore your riskiest assumptions: What are the users’ needs? Is there a more established technology that can solve your problem? Where is your data held, how much of it do you have, and what condition is it in? Do you have a multidisciplinary team to do the work, with specialist skills in place to advise on the data and the business area?
Familiarise yourself with the technologies
It’s important to gain a good understanding of the capabilities of different AI technologies so that you’re able to recognise real opportunities within your organisational challenges. This doesn’t require you to become an expert in any of them. Focus on having enough understanding of the capabilities and risks inherent in different AI technologies to allow you to ask the right questions and make the right judgements in selecting and leading projects that use AI. The technologies may be new, but the questions you need to ask are familiar and include considerations of ethics, ROI, data security, and legal and regulatory compliance. At Scott Logic, we’ve been busy blogging about AI for a few years now – hopefully, you’ll find some useful material there to help build out your expertise.
Within the public sector at least, it’s likely that most use cases at this stage will begin as trials and require some experimentation to test your hypotheses. This experimentation comes with a degree of failure and learning baked into it. The examples above show the benefits that success with AI can bring, but experimenting in this way takes time and comes with a cost and a high likelihood of at least some failure and/or rework.
We need to encourage informed experimentation and give people the skills to do this well. Experimentation is not always encouraged in the public sector. The systems that underpin government aren’t designed to handle rapid learning as it can look very much like failure. However, studies have shown that only organisations that encourage experimentation and innovation (through a balance of talent, technology, strategy and culture) achieve positive outcomes through digital transformation. A culture of curiosity (as my colleague Matt Phillips describes it in his recent white paper) and experimentation is a big part of fostering data literacy.
Bring to bear your data literacy
I wrote earlier this year about how data literacy gives leaders the edge. Data-literate leaders know how to work with data, they know how to derive actionable insights from data, and they are data storytellers who are able to use data to shape and communicate a narrative about the overarching strategy of their organisation. Importantly, they are also able to see when the data doesn’t look right, and spot issues in the approaches their organisation is taking. These skills and qualities are technology-agnostic and just as relevant and important in the context of AI as they are in the face of any other new technology.
As a data-literate leader, your understanding of the work involved in managing, interrogating and analysing data, as well as your skills in identifying and managing risks, are key to identifying opportunities for exploration. The examples I shared above of AI use in governments relied on people with precisely these skills. And as AI adoption beds in and grows in your organisation, your data literacy will be vital in helping to shape the governance surrounding its use, getting value from investment, and using data to tell the story to those around you.
One quality that will be critical in the next few years of transformation is adaptability, both personally and at a strategic level. Some of these technologies are being released and adopted with unprecedented speed. The underpinning governance, law and policy are following behind, and largely not yet in place, which leaves us with a lot of uncertainty. There are going to be some years of flux and change as the technologies continue to evolve, their societal impact becomes clearer, and the laws governing their use come into force both nationally and internationally.
This doesn’t mean that you should do nothing. It’s another argument for informed experimentation, and retaining a ruthless focus on the problem space and the value delivered. Anchor these experiments in measurable data and take a pragmatic, incremental approach to applying the technologies to real, pre-existing challenges.
Foster data-literate cultures
As pointed out recently by the Ada Lovelace Institute, there is a heightened risk for the public sector when using nascent AI tools. This is due to the increased responsibility around the sensitivity of the data being handled, the expected standards of robustness, and the requirements of transparency or explainability around decision-making.
These responsibilities affect the culture within the public sector, and managing or understanding these risks is partly why it takes time to do new things or change the old way of doing things. At the same time, the NHS example above shows that sometimes it is only the public sector that can solve certain problems, and that using these new technologies can rapidly deliver really astonishing results.
Focusing on data literacy and building into an organisation’s culture the knowledge of how to work with data is key to managing these risks and making effective use of new technologies, including AI. As well as making informed decisions about when using AI is appropriate, teams will need the skills to collect, analyse and interpret data for AI models and also to understand and communicate the outcomes that models might generate.
Initiatives like One Big Thing have an important role to play in this, with its aim of improving services and helping citizens through better interpretation, use, presentation and communication of data. If we can improve the overall data quality and get data flowing across those organisational boundaries, organisations will be well placed to assess and take opportunities when they crop up.
Fix the plumbing
So far I’ve focused a lot on the skills and qualities you can bring to bear in getting ready to harness AI. But if you’re looking for a hands-on, practical next step in order to get started and be ready, beginning with your data is a rock-solid idea. This is a hyped technology that makes an incredibly strong case for improving your data architecture, quality and stewardship. To quote Pete Chamberlin’s post that I linked to earlier, “The more I use it and work on strategies for deploying it, the more I think: this requires data-plumbing-fixing activities!”
I totally agree, Pete.
Find out more
You can find out more here about our work helping to transform the public sector.