AI for Public Value: How Government Can Use AI to Improve Performance
AI Is Everywhere
7:00 AM, the alarm goes off. 8:00 AM, you have a cup of coffee, scan the news, and check your email on your phone. During that single hour you have interacted with artificial intelligence (AI) numerous times. The coffee beans were harvested based on an AI algorithm. The news feed… curated by AI. The ads that came with the news… AI. The facial recognition to open your phone… AI. And more.
These all create private value. What about using AI to create public value? From the RMV to the IRS to the TSA to the FDA, government can leverage AI to provide quality services to its constituents, efficiently and with equity.
AI Basics
AI is defined by the Oxford English Dictionary as “the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”[1]
AI has four defining characteristics. First, AI has the capacity to make decisions or, at a minimum, to support decision making. Second, AI decisions combine attributes of human intelligence, from perception to problem solving to reasoning to language learning.[2] Third, AI systems integrate multiple data sources and take action based on the analysis, in contrast to pre-programmed responses.[3] Fourth, each decision feeds back into the system, enabling continual improvement.
AI Categories
The evolution of AI has never been constrained by a shortage of ideas for applications. The broad range of AI applications can be grouped into three categories based on the level of human thought they replicate, or their “sophistication.” Artificial intelligence applications are the least sophisticated: they create intelligent programs and machines through static, hand-written code. AI chess, voice-controlled assistants (e.g., Siri), and robots for household chores fall into this category. Machine learning begins with written code and then self-improves with each decision and each piece of new information; software that anticipates your email writing and fills in the next word or phrase is an example. Deep learning, or neural networks, is the most sophisticated category. Here the system is designed to replicate the thinking humans do in their brains, accepting and digesting large amounts of unfiltered and unstructured data. You do this instinctively while driving; autonomous vehicles do it via computer-based deep learning.
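To make the machine-learning category concrete, here is a minimal sketch in Python of the next-word-prediction idea described above: a toy bigram model whose suggestions improve as it observes more text. The class name and the sample sentences are illustrative assumptions, not any vendor’s actual system.

```python
# Toy illustration of "learning from new information": a bigram model
# that predicts the next word and improves as it sees more text.
from collections import Counter, defaultdict

class NextWordPredictor:
    def __init__(self):
        # For each word, count which words have followed it so far.
        self.following = defaultdict(Counter)

    def observe(self, text):
        """Update counts from newly observed text (the 'learning' step)."""
        words = text.lower().split()
        for current, nxt in zip(words, words[1:]):
            self.following[current][nxt] += 1

    def predict(self, word):
        """Suggest the most frequently seen follower, if any."""
        counts = self.following[word.lower()]
        return counts.most_common(1)[0][0] if counts else None

model = NextWordPredictor()
model.observe("please find attached the report")
model.observe("please find the updated schedule attached")
print(model.predict("please"))  # -> "find" (seen twice after "please")
```

The same principle, updating the model as new data arrives rather than relying on fixed rules, is what separates this category from static, pre-programmed applications.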
AI Use Cases for Government
The AI use cases in the public sector generally address four core functions of government: (1) providing safety and security; (2) delivering public services; (3) collecting revenue; and (4) being effective and efficient. Examples are found around the globe. AI for safety and security is in evidence at airports in many countries. Narita Airport in Japan uses advanced robots to enhance security through detection of anomalies involving people, baggage, and equipment. The airport is also using facial recognition software to facilitate passenger movement and lower operating costs while maintaining security protocols.[4]
AI is being used to improve the delivery of many governmental services, from healthcare to criminal justice to weather forecasting. Chatbots, computer systems that simulate human conversation, are a high-leverage tool for providing quality services efficiently and with equity. The Rwandan government worked with Babylon Health to create chatbots that assist with triage for patients calling the hospital. After hearing a caller’s symptoms, the triage tool provides recommendations for accessing care.[5]
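A chatbot of this kind can be illustrated with a minimal rule-based sketch in Python. The symptom keywords, urgency categories, and advice strings below are hypothetical assumptions for illustration, not Babylon Health’s actual triage logic.

```python
# Minimal rule-based triage sketch: map a caller's symptom description
# to a care recommendation. Keywords and advice are illustrative only.
EMERGENCY = {"chest pain", "difficulty breathing", "severe bleeding"}
URGENT = {"high fever", "persistent vomiting", "dehydration"}

def triage(symptom_description):
    """Return a care recommendation for a free-text symptom description."""
    text = symptom_description.lower()
    if any(keyword in text for keyword in EMERGENCY):
        return "Please go to the nearest emergency department immediately."
    if any(keyword in text for keyword in URGENT):
        return "Please visit a clinic or health post today."
    return "Monitor your symptoms; a nurse will call you back within 24 hours."

print(triage("I have had a high fever since yesterday"))
```

A production triage system would replace the hand-written rules with learned models and clinical validation, but the flow, symptoms in and recommendation out, is the same.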
AI is facilitating tax collection. The OECD reported in 2019 that more than 40 tax authorities are using or plan to use AI.[6] For example, the Spanish government has teamed up with IBM to use its Watson system to answer questions about value-added taxes, reducing email inquiries by 80 percent.[7] Moreover, AI is being used to detect payment anomalies and fraud. The US Internal Revenue Service is introducing an AI chatbot that helps taxpayers who are behind on payments set up a payment plan.[8]
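Anomaly detection for payments can be as simple as flagging values that deviate sharply from the historical pattern. Below is a minimal sketch in Python using a z-score rule; the payment figures and the threshold are illustrative assumptions, and real tax authorities use far richer models.

```python
# Flag payments that deviate sharply from the historical pattern.
from statistics import mean, stdev

def flag_anomalies(payments, threshold=2.0):
    """Return payments more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(payments), stdev(payments)
    return [p for p in payments if abs(p - mu) > threshold * sigma]

monthly_vat_payments = [10_200, 9_800, 10_050, 10_400, 9_950, 10_100, 2_500]
print(flag_anomalies(monthly_vat_payments))  # -> [2500]
```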
AI is also driving greater government effectiveness and efficiency, enabling better decisions and delivering services at lower cost. Attracting, hiring, and retaining employees is a costly activity for government agencies, and AI is improving the entire process. AI facilitates hiring by using algorithms to sift through large numbers of applications and select the profiles best matched to the position at hand. Hiring decisions are further informed by AI assessments of a candidate’s potential for growth. Retention is improved by AI systems that identify employees at risk of leaving.
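The application-sifting step can be illustrated with a minimal matching sketch in Python. The scoring rule (overlap between listed and required skills), the skill names, and the applicant labels are all hypothetical; real screening tools use far more sophisticated, and more audit-worthy, matching.

```python
# Rank applications by how many of the posting's required skills they list.
def match_score(candidate_skills, required_skills):
    """Fraction of required skills that the candidate lists."""
    if not required_skills:
        return 0.0
    return len(candidate_skills & required_skills) / len(required_skills)

required = {"data analysis", "gis", "public procurement"}
applicants = {
    "Applicant A": {"data analysis", "gis", "grant writing"},
    "Applicant B": {"public procurement", "budgeting"},
}
for name, skills in sorted(applicants.items(),
                           key=lambda kv: match_score(kv[1], required),
                           reverse=True):
    print(name, round(match_score(skills, required), 2))  # A: 0.67, B: 0.33
```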
AI: The Concerns
AI is not without controversy. Headlines like “Amazon scraps secret AI recruiting tool that showed bias against women”[9] and “Disinformation researchers raise alarms about AI chatbots”[10] highlight the risks and concerns surrounding AI. They include societal risks such as disinformation, excessive surveillance, and even autonomous weapons, as well as economic risks at the national level, such as inequality and a widening wealth divide. The overarching concern is ethics: are the specific uses of AI consistent with the values and norms of society? Are the algorithms unbiased?
Job impacts are also a concern for policymakers. Chatbots are eliminating customer service jobs. Robots are replacing manual labor. Automated vehicles threaten the jobs of drivers. Almost all organizations will see impacts on labor. Several options can mitigate the negative impacts. The long-term approach is to orient education in schools and universities to prepare future workers to be AI-enabled. The short-term solutions center on retraining, which requires the combined efforts of employers, government, and worker organizations.
Addressing the Limitations of AI: Guiding Principles
Policymakers must walk the fine line of allowing AI applications that create private and public value while regulating the technology to avoid bias and discrimination: “responsible AI.”
The starting point for responsible AI is to establish guiding principles for the development and use of AI. The OECD has established a set of principles for AI implementation. Its value-based principles include fairness; transparency and explainability; security and safety; and accountability. The recommendations “aim to foster innovation and trust in AI by promoting responsible stewardship of trustworthy AI while ensuring respect for human rights and democratic values.”[11]
Operationalizing the Guiding Principles
AI ethics oversight boards or committees are one tool for building trustworthy AI applications in government. These committees serve as the organization’s watchdog, ensuring that the inputs, programming, and outputs of AI systems, whether developed in house or purchased from third parties, systematically and comprehensively meet the organization’s core values.
Audits are another tool to provide an independent assessment of the AI algorithm. Does it do what it is intended to do? Are the data unbiased? Are the risks mitigated? AI auditors’ affirmative answers to these questions are trust builders. The audit process is non-trivial. “AI systems are not simply a few lines of code, but complex sociotechnical systems consisting of a mixture of technical choices and social practices.”[12]
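One concrete question an auditor asks, “Are the data and outcomes unbiased?”, can be checked with simple measurements. The sketch below, in Python, computes approval rates by group and the gap between them, a basic demographic parity check; the group labels, records, and tolerance are illustrative assumptions, and a real audit examines many more metrics alongside the data and the sociotechnical context.

```python
# Demographic parity check: compare approval rates across groups.
from collections import defaultdict

def approval_rates(decisions):
    """Approval rate per group from (group, approved) records."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

records = [("group_a", True), ("group_a", True), ("group_a", False),
           ("group_b", True), ("group_b", False), ("group_b", False)]
rates = approval_rates(records)
gap = max(rates.values()) - min(rates.values())
print(rates)                         # group_a ~0.67, group_b ~0.33
print("parity gap:", round(gap, 2))  # flag for review if above a set tolerance
```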
The US Government Accountability Office issued audit guidance for federal agencies in 2021. The report provides a checklist and details for ensuring the integrity of governance, data, performance, and monitoring.[13] One caution: with limited standards to guide the audit, an audit risks conferring an imprimatur of trustworthiness that is not warranted, and “audit-washing” can result. To address this issue, ISACA, a global community focused on increasing the trustworthiness of technology in general and AI in particular, offers an AI Fundamentals Certificate.
The Bottom Line
AI is part of our world and will increase in importance in the years to come. AI has the potential to improve the efficiency and effectiveness of government. Current and future opportunities abound. However, there are risks that need to be understood and mitigated.
Policy Recommendations
- Define AI principles; the OECD value-based principles are a good starting point.
- Create a rubric for assessing the costs and benefits of AI applications in government.
- Plan for job displacement and repositioning.
- Establish standards for AI oversight boards and audit protocols.
Implications for Practitioners
- Identify AI use cases.
- Confirm value with cost benefit analysis.
- Plan for design, development, deployment, and monitoring.
- Audit throughout the life cycle.
- Learn and identify the next round of use cases.
Mark Fagan is a Lecturer in Public Policy at Harvard Kennedy School. Research for the paper was provided by Emily Ratte and Nanditha Menon, master’s students at Harvard Kennedy School.
A more detailed description of AI in government is available at: https://www.hks.harvard.edu/sites/default/files/centers/mrcbg/files/2023-01_FWP_v2.pdf
References
[1] Oxford English Dictionary.
[2] https://www.britannica.com/science/human-intelligence-psychology
[3] https://www.brookings.edu/research/what-is-artificial-intelligence/
[4] https://www.naa.jp/en/airnarita/automation.html
[6] https://read.oecd-ilibrary.org/taxation/tax-administration-2019_74d162b6-en
[9] Jeffrey Dastin, “Amazon scraps secret AI recruiting tool that showed bias against women,” Reuters, 2018.
[10] Hsu and Thompson, “Disinformation researchers raise alarms about AI chatbots,” New York Times, February 8, 2023.
[11] https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449
[12] https://hai.stanford.edu/news/stanford-launches-ai-audit-challenge
[13] https://www.gao.gov/assets/gao-21-519sp.pdf