The dust has settled from the AI executive order – Here’s what agencies should tackle next
While it’s clear the government has made progress since the initial guidance was issued, there’s still much to be done to support safe federal AI adoption.
Now that the dust has settled around the much anticipated AI executive order, the White House has released a fact sheet announcing key actions taken in the three months since. The document summarizes what agencies have done since the EO was issued, including highlights on managing risks and safety measures and on investments in innovation.
While it’s clear the government has been making progress since the initial guidance was issued, there’s still much to be done to support safe federal AI adoption, including prioritizing security and standardizing guidance. To get there, federal agencies can look to existing frameworks and resources and apply them to artificial intelligence to accelerate safe adoption.
It’s no longer a question of whether AI will be implemented across the federal government – it’s a question of how, and how fast, it can be implemented securely.
Progress made since the AI EO release
Implementing AI across the federal government has been a massive undertaking, with many agencies starting at ground zero at the start of last year. Since then, the White House has made it clear that implementing AI in a safe and ethical manner is a key priority for the administration, issuing major guidance and directives over the past several months.
According to the AI EO follow-up fact sheet, key targets have been hit in several areas including:
Managing risks to safety and security: Completed risk assessments covering AI’s use in every critical infrastructure sector – the most crucial area of progress.
Innovating AI for good: Launched several AI pilots and research and funding initiatives across key focus areas, including at the Health and Human Services Department and in K-12 education.
What should agencies tackle next?
Agencies should further lean into safety and security considerations to ensure AI is being used responsibly and in a manner that protects agencies’ critical data and resources. In January, the National Institute of Standards and Technology released a publication warning of the privacy and security challenges that arise from rapid AI deployment. The publication makes clear that security must be of the utmost importance for any public sector agency implementing AI – and security should be the next priority agencies tackle along their AI journeys.
Looking back on similar major technology transformations, such as the migration to cloud computing, helps us understand today’s challenges. It took the federal government over a decade to nail down the details of securing cloud technology; out of that migration came the Federal Risk and Authorization Management Program (FedRAMP), which standardized how cloud services are assessed and authorized.
The good news is that we can apply the lessons of the last decade of cloud migration to accelerate AI and deliver it faster to the federal government and the American people. Agencies can extend existing governance models, including the Federal Information Security Modernization Act (FISMA) and the FedRAMP Authority to Operate (ATO), by creating overlays for AI-specific safety, bias and explainability risks. The ATO is a concept first developed by NIST to establish strong governance for IT systems. It, along with related concepts, can be applied to AI systems so agencies don’t need to reinvent the wheel when securing AI and deploying safe systems into production.
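To make the overlay idea concrete, here is a minimal sketch of what mapping AI-specific risks onto an existing control baseline might look like. It is illustrative only: the NIST SP 800-53 control identifiers (RA-3, SA-11, SI-4, CA-6) are real, but the overlay structure, names and supplemental guidance text are hypothetical, not an official FedRAMP or NIST artifact.

```python
"""Illustrative sketch: an AI-specific control overlay on an existing baseline.

The overlay pattern shown here is hypothetical; real overlays are defined
through NIST/FedRAMP processes, not ad hoc code.
"""
from dataclasses import dataclass, field


@dataclass
class ControlAugmentation:
    """AI-specific guidance layered onto an existing SP 800-53 control."""
    control_id: str             # e.g., "RA-3" (Risk Assessment)
    ai_risk: str                # "safety", "bias", or "explainability"
    supplemental_guidance: str  # hypothetical guidance text


@dataclass
class AIOverlay:
    name: str
    augmentations: list[ControlAugmentation] = field(default_factory=list)

    def apply(self, baseline: dict[str, str]) -> dict[str, str]:
        """Merge overlay guidance into a baseline of {control_id: guidance}."""
        merged = dict(baseline)
        for aug in self.augmentations:
            extra = f" [AI/{aug.ai_risk}] {aug.supplemental_guidance}"
            merged[aug.control_id] = merged.get(aug.control_id, "") + extra
        return merged


# Toy baseline: four real SP 800-53 control IDs with shortened descriptions.
baseline = {
    "RA-3": "Conduct and document a risk assessment.",
    "SA-11": "Require developer security testing and evaluation.",
    "SI-4": "Monitor the system to detect attacks and anomalies.",
    "CA-6": "Authorize the system before operation (the ATO decision).",
}

overlay = AIOverlay(
    name="Hypothetical AI safety overlay",
    augmentations=[
        ControlAugmentation("RA-3", "safety", "Assess model failure modes and misuse."),
        ControlAugmentation("SA-11", "bias", "Test for disparate impact across groups."),
        ControlAugmentation("SI-4", "explainability", "Log model inputs and outputs for audit."),
        ControlAugmentation("CA-6", "safety", "Condition the ATO on AI risk acceptance."),
    ],
)

if __name__ == "__main__":
    for control, guidance in overlay.apply(baseline).items():
        print(f"{control}: {guidance}")
```

In practice, a formal overlay would capture this same mapping in a machine-readable form, for example through NIST’s Open Security Controls Assessment Language (OSCAL), rather than in application code.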
Where to get help?
There’s an abundance of trustworthy resources federal leaders can look to for additional guidance. One new initiative to keep an eye on is NIST’s recently created AI Safety Institute Consortium (AISIC).
AISIC brings together more than 200 leading stakeholders, including AI creators and users, academics, government and industry researchers, and civil society organizations. Its mission is to develop guidelines and standards for AI measurement and policy, helping the country prepare for AI adoption with the appropriate risk management strategies in place.
Additionally, agency leaders can look to industry partners with established centers of excellence or advisory committees offering cross-sector expertise and third-party validation. Seek out counsel from partners that have experience working with or alongside the federal government and truly understand the challenges it faces. The federal government shouldn’t have to go on this journey alone; several established working groups and trusted industry partners are eager to share their knowledge.
Agencies across a wide range of sectors are continuing to make progress in their AI journeys, and the federal government continues to prioritize implementation guidance. Still, it can be overwhelming to cut through the noise and decide which considerations truly matter and which factors to prioritize most.
Leaders across the federal government must continue to prioritize security, and the best way to do this is by leaning into already published guidelines and seeking out the best external resources available. While the federal government works on standardizing guidelines for AI, agencies can have peace of mind by following the security roadmaps they know best and applying them to artificial intelligence adoption.
Gaurav “GP” Pal is founder and CEO of stackArmor.