A Critical Look at AI Ethics in Government

The rapid rise of artificial intelligence (AI) in government has understandably prompted growing concerns about ethics.

Moving forward, we are headed into a new frontier where AI will be foundational to government IT success. This is why we need to take a critical look at the current and future state of AI in government, especially as it relates to ethics.

As we recently highlighted, the ATARC AI Summit 2024 addressed the challenges and opportunities around AI, and shone a light on where government actually stands when it comes to AI adoption. For now, government is mainly focused on using AI to streamline processes, and AI will not replace the federal workforce.

To date, here is how some of the largest federal agencies currently use AI, as highlighted at the ATARC Federal AI Summit 2024:

  • The Department of State: Career pathing and data curation

  • The Department of Energy: Power grid protection, clean energy, grant management

  • The Department of Defense: Training exercises and content management

  • The Department of Commerce: Legacy code conversion

  • The Department of Education: General pilots

  • The Department of Homeland Security: Risk-based patch management and asset management

  • The Department of Veterans Affairs: General pilots

  • The Department of Treasury: Fraud prevention

This does not mean that AI ethics is not a concern for government. In late March, the Biden administration released its AI governance policy, which aims to establish a roadmap for federal agencies’ management and usage of the budding technology.

Some of the actions from this guidance include the designation of Chief AI Officers at agencies and the development of compliance plans. It also mandates that agencies establish guardrails for AI uses that could impact Americans’ rights or safety, and expands what agencies share in their AI use case inventories.

The future of AI ethics in government hinges on preventing data poisoning and the misuse of synthetic content. AI models cannot be reliable unless their underlying data is safe. Along these lines, NIST published a report in January finding that AI systems, which rely on large amounts of data to perform tasks, can malfunction when exposed to untrustworthy data.

While government AI use cases are very nascent, we will be seeing a rise in different AI models, which have both positive and negative implications:

  • Agent AI: Analyzing information issued by people and understanding intentions and emotions contained in that information.

  • Ambient AI: Analyzing and understanding just about anything in the world (i.e., objects, people and the environment), and instantly predicting and controlling those things.

  • Heart-Touching AI: Analyzing unconscious and unnoticeable aspects of a person’s mind and body, as well as understanding deep psychological, intellectual and instinctual states in that person.

  • Network AI: Organically connecting and cultivating multiple types of AI and optimizing the entire system.

Some of the more negative outcomes of AI in government could include:

  • Authoritarian governments being able to scan crowds and gain insights into their emotional states – determining who is loyal and who is not.

  • AI potentially becoming sentient.

  • AI networks creating stronger and more nefarious AI networks.

As we move into the future, lawmakers, as well as government and industry leaders, need to seriously consider the following questions – and develop the right policies and guardrails accordingly:

  • Should AI convey and be capable of empathy?

  • Is AI your partner or competitor?

  • What if very powerful AI was put in the wrong hands?

Much has happened in the world since the launch of ChatGPT in late 2022. As we move forward collectively, we need to ensure that AI is used in ways that positively support the mission and enhance life for citizens and constituents.

Makpar offers AI informational resources to help agencies fully embrace AI for mission success.

If you would like to learn more about how Makpar can help your agency develop the most comprehensive AI solutions for enhancing mission success, please contact us here.
