AI in government: The path to adoption and deployment
Artificial intelligence (AI) has high potential for transformative impact in the public sector. After all, governments have access to tremendous amounts of data, and government operations affect each of us in ways small and large every day.
So far, AI adoption among government entities appears uneven and generally lags behind the private sector. But in some governments, entire departments, or pockets within departments, show adoption that is robust, advanced and successful.
We recently conducted a survey of more than 300 executives across a variety of industries. We found a decidedly mixed picture of AI in government, likely owing to an environment that is often risk-averse, subject to myriad legislative hurdles and vast in its reach. The survey, published by SAS, Accenture Applied Intelligence and Intel and conducted by Forbes Insights, shows signs that we’ve reached the moment at which AI expands beyond discrete use cases and experiments into wider adoption in some agencies.
Our report, AI Momentum, Maturity and Models for Success, reveals that leaders and early adopters in AI are making important advances and are identifying and expanding on what works as they use AI in more ways and in more parts of their organizations.
The responses of many of the leaders point to a potential explosion of AI adoption just around the corner, even though gaps in capabilities and strategy are apparent.
What does this mean for technologists and other leaders in government? What can they learn today from their peers’ experiences that may set them on a successful path in their own efforts? What’s next in government when it comes to AI?
“This survey certainly indicates that AI just isn’t as far along in government as it is in virtually every other industry,” says Steve Bennett, director of the SAS global government practice. “Though we see a fairly high level of awareness of AI capabilities, which is reflected in my own experience working with government leaders every day.”
Bennett’s view is that government leaders are savvy and knowledgeable about AI, but the path to adoption and deployment is less clear to them. Many public sector leaders tend to follow the lead of their private sector counterparts, mindful of the high stakes involved if government programs fall short or don’t meet their constituents’ needs. This can create a tendency toward risk aversion.
Which AI capabilities are most likely to be adopted in government first? What are the biggest untapped opportunities for AI adoption in government? What obstacles and challenges unique to government are most important to understand today in order to ensure progress tomorrow?
Pressing operational issues are foremost in government
Let’s focus on a few of the highlights of our findings specifically from government leaders, combined with our own insights based on current government engagements.
Pursuing their missions every day, government agencies spend much of their time focused on operational issues. That time-consuming focus is required in government departments and offices that are held accountable for achieving clearly defined missions. If they fall short, the consequences can be devastating – for the citizens they serve, as well as for the government organization itself. Not to mention, in some cases, a leader’s career.
In that context, it’s easy to see how AI remains a second-tier priority for some government leaders who have operational roles. In the face of pressing requirements to deliver critical services, AI may appear to be a luxury that is just out of reach. This presents government leaders with a paradox. Everyday demands leave many with no time to fully embrace AI, yet AI advances could be instrumental in unlocking real, measurable operational improvements that reduce the strain on resources and give them more time to fulfill their missions.
How can government leaders push past this paradox to take advantage of the very real benefits that AI is already delivering for their peers in the business world?
For starters, don’t pursue long, multiyear projects. Think more in terms of quick hits – small pilots that have near-term operational relevance. Automating repetitive tasks such as claim processing is an obvious example, allowing staff to be more efficient and strategic in their work. Just as important, this approach can help demonstrate the value of AI for senior leaders and decision makers wondering whether investments in AI are worth it.
Consider repetitive tasks as a beachhead for AI
Many government leaders we’ve worked with would point to a single class of opportunities for getting started: All the manual, repetitive tasks being undertaken by their agency or department.
In even the most sophisticated government operations, valuable workers are stuck performing repetitive tasks that offer a prime opportunity for AI-enabled automation. Form-based interactions such as tax refund processing, benefits processing and internally focused HR processing all fit this description.
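To make that concrete, here is a minimal sketch of what AI-assisted triage of form-based requests could look like, assuming an agency already has a backlog of requests that staff have categorized by hand. The categories, sample text and the route_request helper below are hypothetical illustrations rather than a reference implementation; a real system would need agency-specific data, rigorous evaluation and human review of its suggestions.

```python
# Minimal sketch: suggesting a routing queue for incoming form-based requests.
# Categories, sample text and thresholds are hypothetical illustrations.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Historical requests that staff have already categorized by hand.
training_text = [
    "Requesting status of my tax refund filed in April",
    "Change of address for benefits correspondence",
    "Question about a deduction claimed on my return",
    "My benefits payment did not arrive this month",
]
training_labels = ["refund", "address_change", "refund", "benefits"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(training_text, training_labels)

def route_request(text, confidence_threshold=0.8):
    """Suggest a queue for an incoming request, deferring to staff when unsure."""
    probabilities = model.predict_proba([text])[0]
    best = probabilities.argmax()
    if probabilities[best] < confidence_threshold:
        return "manual_review"      # low confidence: a person decides
    return model.classes_[best]     # high confidence: suggest a queue

# With only a handful of training examples, this will usually defer to
# manual review, which is the right default for a government workflow.
print(route_request("Where is my refund? I filed months ago."))
```

Note that the sketch defers to staff whenever the model is unsure; as the oversight discussion below makes clear, keeping people in the loop matters even more in government than elsewhere.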
Here’s an easy way to get started: Survey the processes in your agency. If you’ve already done this for other purposes, you have a head start. If not, focus on identifying processes that have highly repeatable, routine aspects, because those are the ones most amenable to AI applications.
Once you’ve identified a handful of these processes, proceed in a consultative, collaborative manner, seeking the advice of the employees currently undertaking these tasks. Otherwise, you may unwittingly contribute to an environment in which employees feel threatened, rather than enabled, by AI.
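If it helps to see that survey-and-prioritize step in one place, the following sketch ranks a hypothetical process inventory by suitability for automation. The processes, volumes, criteria and weighting are invented for illustration; in practice they should come out of your survey and the conversations with the employees doing the work.

```python
# Illustrative sketch: ranking surveyed processes by suitability for
# AI-enabled automation. All figures below are hypothetical placeholders
# for an agency's own survey results.
processes = [
    {"name": "Tax refund status inquiries", "volume": 12000, "routine": 0.90, "exceptions": 0.05},
    {"name": "Benefits eligibility reviews", "volume": 3000,  "routine": 0.60, "exceptions": 0.30},
    {"name": "Internal HR leave requests",   "volume": 1500,  "routine": 0.95, "exceptions": 0.02},
]

def automation_score(process):
    """Favor high-volume, highly routine work with few exceptions."""
    return process["volume"] * process["routine"] * (1 - process["exceptions"])

# Highest-scoring processes are the most promising quick-hit pilots.
for process in sorted(processes, key=automation_score, reverse=True):
    print(f'{process["name"]}: score {automation_score(process):,.0f}')
```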
AI oversight is critical in government uses
Based on the results of this survey, the state of AI oversight across industries is in flux, showing evidence of progress in some areas but lagging significantly in others. Even in areas where real progress is being reported, the exact nature and scope of these oversight efforts warrant further investigation. When it comes to AI oversight in government, as you can see in Figure x, leaders report that regular oversight of AI projects in their agencies is important.
This issue of oversight, however, is one of central relevance to government and all the citizens it serves, and it will only grow in importance as AI is deployed more broadly within government operations. Why? Because governments intersect with citizens’ lives in ways that have wider-ranging consequences than retailers, or even banks, do. Governments can deprive a citizen of her liberty, deny assistance benefits, bar entry to a flight, and more.
“If a retailer uses machine learning to present me with a recommendation for jeans that I don’t like, I just don’t buy the jeans – there’s really no harm,” Bennett said. “But in government, if these systems are being used to delay a claim, those are high-impact decisions. Humans always have to be in the loop of AI applications because the ethical and legal implications of government’s use of these systems are an order of magnitude higher than any other industry. For that reason, AI will never replace that level of human discernment.”
Oversight, ethics and organizational values have to inform the development of AI systems on the front end, because they can shape these systems in profound ways. It is simply not enough to bolt on an ethics and oversight infrastructure after the technology is already in place, because the implications of getting it wrong are significant and can have an adverse effect on the citizens you serve, as well as agency employees. Start by putting ethics and oversight on the agenda in the planning phase.
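As one illustration of what putting oversight on the front end can mean in practice, the sketch below builds a human-in-the-loop gate into the decision flow from the start. The action names, confidence threshold and data fields are assumptions made for illustration; the design point is simply that high-impact or low-confidence cases are never decided automatically.

```python
# Sketch of a human-in-the-loop gate designed in at the planning stage.
# Action names, thresholds and fields are illustrative assumptions.
from dataclasses import dataclass

# Consequential actions that should always be decided by a person.
HIGH_IMPACT_ACTIONS = {"deny_benefit", "delay_claim", "flag_for_enforcement"}

@dataclass
class ModelOutput:
    case_id: str
    proposed_action: str   # what the model suggests
    confidence: float      # model's estimated probability for that action

def disposition(output: ModelOutput, confidence_threshold: float = 0.95) -> str:
    """Decide whether a suggestion can be applied or must go to a person."""
    if output.proposed_action in HIGH_IMPACT_ACTIONS:
        return "human_review"   # consequential actions always need a person
    if output.confidence < confidence_threshold:
        return "human_review"   # uncertain cases are escalated, not automated
    return "auto_apply"         # routine, high-confidence cases proceed

print(disposition(ModelOutput("case-001", "delay_claim", 0.99)))     # human_review
print(disposition(ModelOutput("case-002", "approve_refund", 0.97)))  # auto_apply
```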
What’s next with AI in the public sector?
While it seems clear from survey results that AI implementation within most segments of the public sector lags behind the private sector, this may present an unexpected opportunity. As other industries have experimented, failed, learned and progressed in their efforts with AI, government leaders can benefit from the insights and best practices gleaned from those experiences, which suggests eventual broader adoption.
Regardless of its trajectory, it seems clear that AI will expand among government entities as the capabilities become more powerful, and leaders hone their ability to deploy them. While what’s next will vary from agency to agency based on their operating environment, it’s a safe bet that the factors that set successful AI adopters apart, as found in the research, will figure prominently among public sector organizations. These factors include:
- Process maturity. How often are the organization’s leaders reviewing AI output? Do they have processes in place for augmenting or overriding questionable results? Do they have plans to significantly improve business processes using AI? These are all markers of process maturity in AI – and they’re all areas in which AI leaders in industries such as manufacturing are setting themselves apart from the pack.
- A focus on ethics. Ethics is not a new area of focus for governments. But what sets AI in government apart is the recognition that initiating an artificially intelligent process means the agency has responsibility for the outcomes. Ethical standards for technologists signal an understanding of the stakes involved with potentially unethical uses of AI. Governments will always need humans serving as the ultimate arbiters of ethical applications of AI; after all, AI algorithms cannot replace human decision making in government. AI is simply another tool that can help governments be more successful in their missions.
- Connecting analytics to AI. Analytics is at the core of the learning and automation aspects of AI. Successful AI users show a maturity with data-driven analytical processes that is a hallmark of successful AI deployments.
- Trust in AI. When it comes to AI, success breeds confidence – successful organizations are more than twice as likely to trust their ability to ethically and appropriately use AI technologies in the future. Public sector organizations are no exception.
- Healthy levels of AI oversight. Organizations that have been more successful with AI tend to have more rigorous oversight processes in place. For example, 74% of successful organizations report that they review their AI outputs at least weekly. While government entities appear to have made good progress in this area, merely keeping up with AI advances and deployments as they spread across the organization will require focused attention and effort; one practical pattern, sketched after this list, is a regularly scheduled review of logged AI decisions.
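As a companion to the oversight point above, here is one illustrative pattern for the kind of regular review respondents describe: a scheduled summary over a log of AI-assisted decisions. The log fields, dates and categories are invented for illustration; the takeaway is that review is routine and scheduled rather than ad hoc.

```python
# Illustrative sketch: a weekly oversight summary over logged AI decisions.
# Log fields, dates and categories are hypothetical.
from collections import Counter
from datetime import date, timedelta

decision_log = [  # in practice, read from the agency's audit store
    {"date": date(2019, 5, 6), "handling": "auto_apply",   "action": "approve_refund"},
    {"date": date(2019, 5, 7), "handling": "human_review", "action": "delay_claim"},
    {"date": date(2019, 5, 9), "handling": "auto_apply",   "action": "approve_refund"},
]

def weekly_summary(log, week_start):
    """Count decisions in a one-week window, split by how they were handled."""
    week_end = week_start + timedelta(days=7)
    in_week = [d for d in log if week_start <= d["date"] < week_end]
    return Counter((d["handling"], d["action"]) for d in in_week)

# A report like this, reviewed on a fixed schedule, keeps humans aware of
# what the system is actually deciding as deployments spread.
for (handling, action), count in weekly_summary(decision_log, date(2019, 5, 6)).items():
    print(f"{handling:>12}  {action:<16} {count}")
```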
Regardless of how you interpret the survey’s findings, it’s clear that AI deployment is accelerating. As AI continues its ascent, many of the issues examined in the survey will grow in importance and spark more conversations among senior agency leaders, ultimately improving the ways that government agencies accomplish their missions for the citizens they serve.