Artificial Intelligence - Federal News Network
https://federalnewsnetwork.com

Ask the CIO: Federal Emergency Management Agency
https://federalnewsnetwork.com/cme-event/federal-insights/ask-the-cio-federal-emergency-management-agency/ (April 10, 2024)

How is digital transformation impacting the mission at FEMA?

In this exclusive webinar edition of Ask the CIO, host Jason Miller and his guest, Charlie Armstrong, chief information officer at FEMA, will discuss how digital transformation is supporting the mission at FEMA. In addition, Don Wiggins, senior global solutions architect at Equinix, will provide an industry perspective.

Learning Objectives:

  • Digital transformation at FEMA
  • Shifting FEMA to the cloud
  • Edge computing for the future
  • Employing artificial intelligence
  • Industry analysis

New Congressional task force looks to make sure it’s not left behind by AI advancements
https://federalnewsnetwork.com/artificial-intelligence/2024/04/new-congressional-task-force-looks-to-make-sure-its-not-left-behind-by-ai-advancements/ (April 10, 2024)

Twelve members of Congress have been appointed to a new task force to lead the House’s exploration of AI’s transformational opportunities.

Twelve members of Congress have been appointed to a new task force to lead the House’s exploration of AI’s transformational opportunities and potential challenges. Their mission? To create guiding principles, recommendations and bipartisan policy proposals for the regulation of AI. One of those members, Rep. Don Beyer (D-Va.), joined Federal News Network’s Eric White on The Federal Drive with Tom Temin to discuss the task ahead.

Interview Transcript:  

Eric White We have been bombarded with hearing about the potentials of AI. And so I’m sure that as a member of Congress, you’re hearing from your constituents as well as their concerns and things that might be brought up if it is implemented fully. So how did this task force on AI all come together?

Don Beyer Eric, for a few years, there’s been an artificial intelligence caucus, Democrats and Republicans coming together once a month to just talk about AI, but no legislation was really moving. It wasn’t clear which committees had jurisdiction, wasn’t clear where there was really momentum behind specific pieces of legislation. So Kevin McCarthy (R-Calif.), back before the infamous vacating of the chair, had talked about forming a task force; it never happened. And eventually, just a few weeks ago, Speaker Mike Johnson (R-La.) and Democratic Leader Hakeem Jeffries (D-N.Y.) appointed these members, very bipartisan, an even number of Democrats and Republicans. And we’ve met a couple of times already. We’re now meeting every fly-out morning at 9:00. And the goal is by the end of the year to present a completely written-up report on AI and what Congress should be doing. And hopefully, Eric, on the way, we’ll also actually pass four or five or six foundational bills, bills we can build upon in the years to come.

Eric White Yeah. What can you tell me about the discussions that you just mentioned? Everybody loves to talk about the divisions in Congress and everything. But on this issue, you might have the luxury that everybody generally wants the same thing: a safe, efficient way for AI to be implemented into everyday life. What are you all mostly discussing when you have those conversations?

Don Beyer Eric, it’s been interesting. In the first couple of meetings, I spent a lot of time going around the room saying, what are your priorities? And they’re all over the place. For example, one Democratic member from New York had been very concerned about the use of AI delivering porn, especially child sexual images. Where instead of the old terrible way of kidnapping children and forcing them to perform porn in some garage, they actually generate it using large language models and such. It’s just as evil, but without an actual child in play. So you can get a lot more of it a lot faster, which is even sadder. On the other hand, you get people that are really concerned about deepfakes and what they will mean for elections this year. We all know that more people will vote in 2024 than in any year in the history of mankind, all over the world and in very big elections here in the United States. So it varies, but you could boil it down into 12 main topics. And then the notion is how do you address each one of them? What role does Congress, or the federal government, really have in these 12 different areas?

Eric White And that’s a perfect segue into my next question of what Congress’ role is in this. Obviously, you have a vested interest in stopping some of the terrible things that can come from AI that you just mentioned. But as far as getting ahead of it and coming out with some overarching principles, is that where you see Congress enacting a role in working with other branches of government?

Don Beyer Yeah, very much so. So far, we’ve been really thrilled that there’s been little partisan bickering, very little partisan divide. There’s nothing like the divide we have on guns or on the right to reproductive freedom, things like that. So I’m optimistic about us being able to move forward. And on the role, it’s interesting: the Europeans, the European Union, have recently passed their EU Artificial Intelligence Act, the EU AI Act. And I heard it referred to recently that they are a super regulatory power. They really like regulation. Our tendency, both Democratic and Republican, is to focus on innovation and creation and new uses that can change the way our lives unfold. So almost all of us, across party lines, want to have a relatively light touch from a regulation perspective, unlike the Europeans.

Eric White It’s interesting. Usually we’re trying to find ways to reduce red tape, and the Europeans tend to say, no, we need more red tape here. We’re speaking with Virginia Congressman Don Beyer. Congress has always been a punching bag for the American public. And it’s seen as sometimes being a little bit behind when new technologies come in. And there are those viral clips of some of your fellow congressmen saying things that maybe are off the cuff or out there. Where do you see this task force improving Congress’ understanding of AI? Because it’s a new technology, and not too many people actually grasp what it actually takes to create those deepfakes, or to build technology that will change Americans’ lives.

Don Beyer Well, the good part, Eric, is that while there are only a handful of actual technologists who serve in Congress, of the 24 people on this task force, almost all of them are pretty sophisticated about AI, across the political landscape. So I’m really encouraged by that. When Speaker Johnson and Leader Jeffries made the appointments, they were looking for people who already had expressed a deep interest in artificial intelligence and done a lot of reading and a lot of visiting, a lot of experimenting. So that’s a really good piece of it. And I also think while Congress always lags the American public, that’s because that’s the way our founding mothers and fathers set it up. It’s two different entities, the House and the Senate. There’s a filibuster in the Senate. You really have to spend a lot of time to get to a middle ground before something actually becomes law. And sometimes that slowness frustrates us. But it also can often be wise, because we’re not overreacting or doing something quickly and hastily that we later need to reverse.

Eric White Let’s talk about you yourself. You got appointed to this mostly because, as we’ve heard when we’ve interviewed you before, you’ve taken a deep interest in AI, and have even taken some classes to learn more about the technology. What can you tell me about where you stand personally in your understanding of it?

Don Beyer I’m learning very quickly. I just came back from a four-day AI conference with some of the smartest people I’ve ever met, and I had lots and lots of questions. And with every exposure, I learn a little bit more. By the way, having my coding background now, just in Python 3 and in Java, is also helping. No, I can’t be a huge AI scientist right now. I’m years away from doing that, but I have a good inkling about how they’re going about it and why, which helps. Although, ultimately, here in Congress in this task force, we’re not going to be writing any code. We’re going to be trying to come up with the right sets of policies for things like the democratization of artificial intelligence. We don’t want it just to be owned by the big four, by ChatGPT’s OpenAI, and Microsoft and Google. We want to make sure that people like you and me also have access to it, that small and medium-sized businesses do, and researchers everywhere. So the democratization is a big piece of it. And I also think that we have to look really deeply at the potential downsides. I’m an AI optimist. I think it could do much more good than harm. But as members of Congress, our job is to protect the American people. So thinking about the potential downsides is very important to us.

Eric White That provides me an opening to ask about those big four and the plethora of famous technologists that we’ve seen making the rounds on news programs, talking about it. Are you bringing in any sort of experts during these conversations with your task force, or are you just kind of reaching out of your own accord and then coming back and reporting to the task force?

Don Beyer It’s a really good question, because it’s sort of in between. Jay Obernolte (R-Calif.) chairs the overall task force with Ted Lieu (D-Calif.). I think he’s been deluged with different people who want to come present to the task force, enough so that they could take up the next three or four years just listening to people tell us their ideas. So he’s going to be judicious in terms of the people we bring before us. But so far, it’s been the leaders of the big four, but also people like Dario Gil, who’s head of research at IBM. So some of the really great intellectuals and founders of this field are talking to us, both in small groups and in big groups. Marc Andreessen, who is an early major technologist, has already come to talk to us. But we’re also hearing, interestingly, Eric, from not just the technologists, but people who’ve been affected by it. For example, we had one fascinating meeting with the folks that do photography and illustrations, and who write music and who publish books, who are seeing artificial intelligence as perhaps taking all of their creative work and making it available for free on the internet through the large language models. So what’s the business model that allows a photographer to still make a living, other than at weddings?

Navy unveils new strategy for science, technology
https://federalnewsnetwork.com/federal-newscast/2024/04/navy-unveils-new-strategy-for-science-technology/ (April 10, 2024)

Navy Secretary Carlos del Toro unveils a partnership involving the Office of Naval Research, the Naval Postgraduate School, the U.S. Naval Academy and the Naval War College.

  • The Navy has a new strategy for science and technology. Navy leaders have branded it a “call to service” for scientists and engineers from across the country to help solve military problems. The focus areas include autonomy and artificial intelligence, power and energy, manufacturing, and a host of other issues. The plan does not spell out how the Navy will make progress on those objectives, but Navy Secretary Carlos del Toro said the new work will involve partnerships with the Office of Naval Research, the Naval Postgraduate School, the U.S. Naval Academy and the Naval War College.
  • An Air Force legislative proposal to transfer National Guard space units to the Space Force is sparking a backlash among state governors. The National Governors Association has called for the immediate withdrawal of the proposed legislation to eliminate governors’ authority over their National Guard units. Utah Gov. Spencer Cox and Colorado Gov. Jared Polis said reducing governors’ authority over their National Guard personnel will affect military readiness, recruitment, retention and the National Guard infrastructure across the country. Air Force officials proposed legislation to bypass governors in seven states and move 14 Guard units with space missions to the Space Force.
  • Two agencies have obtained extra money for IT modernization projects. NASA won its first award from the Technology Modernization Fund, while the Labor Department garnered its sixth in almost six years. These are the fourth and fifth awards the board has made since Jan. 1, continuing its focus on cybersecurity and application modernization. The space agency is receiving $5.8 million to accelerate cybersecurity and operational upgrades to its network. Labor is getting $42 million for the Office of Workers’ Compensation Programs to replace its outdated Integrated Federal Employee Compensation System. The TMF board has now invested in 43 projects since receiving the $1 billion appropriation in the American Rescue Plan Act in 2021.
  • U.S. Cyber Command (CYBERCOM) is considering the best way to build its forces in the future, by conducting a study on future force generation models. The command has typically relied on the military services to train and equip its digital warriors. But leaders have pushed to embrace a more independent U.S. Special Operations Command-type model in recent years. And others have called for the Defense Department to establish an independent cyber service. CYBERCOM is slated to brief Pentagon leadership on the results of the study this summer.
  • Chandra Donelson is the Department of the Air Force's new acting chief data and artificial intelligence officer. In her new role, Donelson will be responsible for implementing the department’s data management and analytics, as well as AI strategy and policies. Donelson previously served as the space data and artificial intelligence officer for the Space Force, a role she will continue to hold. Her fiscal 2024 goals include integrating data and AI ethics into the department’s mission systems and programs.
  • The Postal Service is looking to raise prices on its monopoly mail products for the sixth time since 2020, pending approval from its regulator to set mail prices higher than the rate of inflation. USPS is planning to raise the price of a first-class Forever stamp from 68 to 73 cents. If approved by the regulator, the new USPS prices would go into effect on July 14. A recent study warned that USPS price increases are driving away more customers than the agency anticipated. But USPS said the data behind the study is “deeply flawed.”
  • The Department of Veterans Affairs is reviewing more than 4,000 positions that are at risk of a downgrade in their respective pay scales. The six VA positions under review include a mix of white-collar General Schedule (GS) and blue-collar Wage Grade (WG) positions. They include housekeeping aides, file clerks and boiler-plant operators. The VA expects to complete its review of these positions by the end of May. The American Federation of Government Employees said affected employees have received notices in the mail. But, the union said, it has not received notice from the VA about any imminent downgrades.
  • With cyber attacks on the rise, incident response is a big part of managing security risks. Now the National Institute of Standards and Technology is seeking feedback on new recommendations for cyber incident response. The draft guidance is tied to NIST’s recently issued Cybersecurity Framework 2.0. The revised publication lays out a new, more integrated model for organizations responding to a cyber attack or other network security incident. Comments on the draft publication are due to NIST by May 20.

Biden administration working on hiring tools to help agencies compete for AI talent
https://federalnewsnetwork.com/artificial-intelligence/2024/04/biden-administration-working-on-hiring-tools-to-help-agencies-compete-for-ai-talent/ (April 4, 2024)

The Biden administration is on a hiring spree for experts who can help the federal government adopt artificial intelligence (AI) tools and boost productivity.

The Biden administration is on a hiring spree for experts who can help the federal government adopt artificial intelligence tools and boost productivity.

The Office of Management and Budget, after finalizing its governmentwide policy on AI use, said last week that it plans to hire 100 AI professionals into the federal workforce by this summer.

The White House, as part of this AI “talent surge,” is making sure agencies have the tools they need to recruit and retain AI professionals. The administration, under a sweeping executive order on AI in government, is also casting a wider net for talent, and looking to fill AI-adjacent positions.

Loren DeJonge Schulman, OMB’s associate director for performance and personnel management, said the administration is focused on the “future civil service we need to build to serve the American people today and in the future.”

“We are not just looking for technologists, for people who build large language models, for people with coding experience. Those are incredibly important, and we do want you. We are also looking for regulators, policymakers, human capital specialists who understand how to deploy AI — lawyers who are thinking about how to regulate this system — and so much more to recruit and support the next generation of AI talent. AI is going to touch everything,” Schulman said Thursday in a webinar.

The Office of Personnel Management is working on several projects to help agencies bring AI talent through the federal hiring process — from strategic workforce planning to onboarding.

Kyleigh Russ, a senior advisor at OPM leading its Hiring Experience Group, said OPM is working on a validated AI competency model that will help “better define AI skills across the federal workforce.”

Along with that model, OPM will also release interpretive guidance that will help agencies classify and assess AI skills, and put OPM’s AI competency model into practice.

OPM took its first step to identify what federal employees need to know to work with AI last summer. In a governmentwide memo to chief human capital officers, the agency outlined 44 general competencies and 14 technical competencies for the federal workforce to work with AI.

OPM is also working with the White House — including its U.S. Digital Service and Office of Performance and Personnel Management — to create an AI and Tech Talent Playbook.

Russ said the playbook will include hiring best practices and case studies that show how agencies can effectively onboard tech talent.

Federal and state agencies are also hosting a virtual “Tech to Gov” job fair on April 18, with a focus on filling AI and AI-enabling positions across government.

“We have a number of agencies there that are actively looking to hire both on-the-spot and in the months following,” Russ said.

OPM, as part of an interagency AI and Tech Talent Task Force, is focused on agencies hiring new AI and AI-enabling talent across government — as well as upskilling the current federal workforce to work with AI.

They’re also looking to give agencies the hiring tools and resources they need to compete with the private sector to attract these in-demand experts.

“We know that this talent is very sought-after and that there will be constant competition, both across government and the private sector. So investing in continuous training, retention incentives and the ability to move around throughout the government will be critical to ensuring that we can keep this talent once they’re in the door,” Russ said.

Russ said agencies are looking for technical AI talent to deploy these tools, as well as “AI-enabling talent” that will play a supporting role in deploying this emerging technology.

“We do, of course, need technical AI talent, who will be designing and modifying AI solutions. Those people will be vital to the federal government’s success in answering questions and solving problems,” Russ said. “People in AI positions will ensure that data is clean and stored in a way that will make it usable, and AI-enabling positions might also be writing policy, or even doing specialized tech talent recruitment for AI. Without these enabling skill sets, the AI talent that we’re bringing in can’t hope to execute on their core missions.”

Russ said AI-enabling positions also include data tagging, human resources, policy and other “backbone” support functions.

OPM is also encouraging agencies to use all the tools and incentives available to them to recruit and retain AI experts.

To incentivize employees to stay, agencies can offer an annual bonus of 25% of an employee’s base pay for up to four years.

The pay bonuses are reserved for feds in difficult-to-fill jobs, or those who have to relocate. But OPM’s approval of direct-hire authority for AI-related positions last year is sufficient, with no further evidence needed, for agencies to consider AI jobs difficult to fill, and therefore eligible for the bonuses.
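As a back-of-the-envelope illustration of how that retention incentive compounds, here is a minimal sketch in Python; the base salary is a hypothetical figure, and the actual caps and eligibility rules are set by OPM.

```python
# Illustrative only: the 25% retention incentive described above,
# applied to a hypothetical base salary (not an actual pay-table value).
base_pay = 120_000   # hypothetical annual base pay, in dollars
bonus_rate = 0.25    # up to 25% of base pay per year
years = 4            # payable for up to four years

annual_bonus = base_pay * bonus_rate
total_incentive = annual_bonus * years

print(f"Annual retention bonus: ${annual_bonus:,.0f}")        # $30,000
print(f"Total over {years} years: ${total_incentive:,.0f}")   # $120,000
```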

OPM is also encouraging agencies to bring in new talent through pipeline programs, such as the U.S. Digital Service and the President’s Management Fellows program.

Russ also encouraged agencies to bring outside experts into government for a temporary tour of duty through the Intergovernmental Personnel Act (IPA). Federal agencies can make IPA appointments to borrow talent from universities, nonprofits and state and local governments.

“This is largely underutilized across government,” Russ said. “We encourage agencies to use this hiring mechanism, which allows for a great deal of flexibility in their arrangements, that make sense for the agency and the partner organization.”

While the federal government remains in the early stages of AI adoption, agencies have already identified more than 700 AI use cases on AI.gov.

Morgan Zimmerman, an AI policy analyst at OMB, said agencies already see AI tools boosting the productivity of their current workforce.

“We’re seeing that AI can really start to automate a lot of routine tasks that we have here in the government — something like analyzing large datasets very rapidly, and just optimizing processes in ways that really increase productivity here in federal agencies,” Zimmerman said.

Zimmerman said agencies are also using AI for predictive analytics and data-driven decision-making.

“We’re really seeing that AI can enhance the quality of decisions in both policy areas, operations within the agency, enforcement, those sorts of areas.”

OMB also sees AI as a valuable tool to provide a higher level of customer service across the government.

“We’re already seeing that agencies are leveraging AI-powered chatbots. They’re using voice assistants, and things like language translation. And all of these technologies are going to prove really impactful when it comes to government services, and especially making them more accessible and user friendly to the public,” Zimmerman said.

The Department of Veterans Affairs is using automation to help its employees make decisions on disability claims.

Paul Shute, VA’s assistant deputy undersecretary for automated benefits delivery, said the automation tools don’t make final decisions on disability claims — but do help claims processors go through hundreds of thousands of pages of veterans’ military service records, medical records and prior claims histories.

“I’m a firm believer that there’s something inherently human in the disability decisions that we make for veterans. And despite the fact that there’s been incredible advancements in new and emerging technology like AI, there’s no technology available today that can replicate the human discretion required in weighing evidence, determining the probative value of evidence that’s needed to effectively process veterans disability claims,” Shute said.
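VA has not published the internals of these tools, but the pattern Shute describes, software surfacing candidate evidence while the human keeps the decision, can be sketched minimally; the keyword list and record format below are assumptions for illustration only.

```python
KEYWORDS = {"tinnitus", "hearing loss", "burn pit"}  # assumed condition terms

def flag_pages(pages):
    """Return pages worth a claims processor's attention.

    The tool only surfaces candidate evidence; the decision on the
    claim itself stays with the human processor.
    """
    flagged = []
    for number, text in enumerate(pages, start=1):
        hits = {kw for kw in KEYWORDS if kw in text.lower()}
        if hits:
            flagged.append((number, sorted(hits)))
    return flagged

# Hypothetical three-page service record.
record = [
    "Routine physical, no complaints noted.",
    "Veteran reports ringing in ears; assessed for tinnitus.",
    "Deployment history includes burn pit exposure at forward base.",
]
for page, terms in flag_pages(record):
    print(f"Page {page}: review for {', '.join(terms)}")
```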

Last year, the Veterans Benefits Administration received 2.4 million disability compensation claims, a 42% increase from the 1.7 million claims received in fiscal year 2022. VBA expects to keep seeing record volumes of claims in the coming years under the toxic-exposure PACT Act.

“We are currently hiring and we will continue to hire more claims processors to address that volume of work, but we also need to equip both our current and new employees with the tools and technologies they need to enhance their productivity and efficiency and empower them to do the incredible work that they do for veterans every day,” Shute said.

VA has identified more than 100 use cases for AI.

President Biden’s AI-facing executive order should be applauded
https://federalnewsnetwork.com/commentary/2024/04/president-bidens-ai-facing-executive-order-should-be-applauded/ (April 4, 2024)

President Biden’s recently issued executive order outlined his administration’s plan to promote “Safe, Secure and Trustworthy Artificial Intelligence.”

In years to come, we may look back at this moment as the birth of responsible artificial intelligence. President Biden’s recently issued executive order outlining his administration’s plan to promote “Safe, Secure and Trustworthy Artificial Intelligence” represented a much-needed response to a growing problem: data privacy in AI systems.

Companies are being reckless with AI, putting the potential benefits of the technology ahead of data privacy. This is not a new condition. Historically, companies rush to adopt disruptive technology without fully considering the potential ramifications. OpenAI’s ChatGPT and Microsoft have already had AI-related breaches that grabbed headlines this year, and there will certainly be more as the popularity of the technology grows. Without the proper guardrails, these types of headline-grabbing incidents will further compromise consumer privacy.

The momentum propelling this historic executive order began earlier this year when OpenAI CEO Sam Altman appealed to Congress to consider stronger regulations around how companies use generative AI systems to avoid putting consumer privacy at risk. I praised Altman then, and have a similar enthusiasm for President Biden’s executive order.

The executive order is a responsible answer to this emerging issue. The section titled “Protecting Americans’ Privacy” is especially pertinent. That portion of the order considers the significant risks of consumer data exposure via generative AI and proactively calls on Congress to pass bipartisan data privacy legislation addressing four critical components.

Fast-tracking privacy-enhancing technologies

The order first asks Congress to protect Americans’ privacy “by prioritizing federal support for accelerating the development and use of privacy-preserving techniques.” Data that was once locked away in databases now lives in the cloud and is on the move, especially when used for AI. While privacy-preserving technologies have made tremendous progress, the push to find efficiencies via AI has made data protection a “bare minimum” exercise: monitoring data breaches rather than preventing them. This section of the order recognizes the critical role of privacy-preserving techniques, which position organizations to protect data and infrastructure so that, as AI systems are trained, the data remains protected even in the case of a breach.
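The order does not name specific privacy-preserving techniques; differential privacy is one commonly cited example of the family it describes. A minimal sketch, assuming a simple aggregate-release scenario: noise calibrated to a privacy budget is added before a statistic leaves the protected boundary, so no single record is exposed.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from Laplace(0, scale) via the inverse-CDF transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_mean(values, epsilon=1.0, lower=0.0, upper=100.0):
    """Differentially private mean using the Laplace mechanism.

    Each value is clamped to [lower, upper] so that adding or removing
    one record changes the mean by at most (upper - lower) / n, which
    bounds the sensitivity the noise must cover.
    """
    n = len(values)
    clamped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clamped) / n
    sensitivity = (upper - lower) / n
    return true_mean + laplace_noise(sensitivity / epsilon)

# Example: release an average without exposing any single record.
ages = [34, 45, 29, 52, 61, 38, 47]
print(dp_mean(ages, epsilon=0.5))
```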

Strengthening of privacy research and development

Next, the executive order calls for creating a research coordination network that would promote “rapid breakthroughs and development” of privacy-preserving research and technologies and would work with the National Science Foundation to encourage the adoption of these technologies by federal agencies. This part of the order is incredibly encouraging because it further reiterates the importance of data security to protect the public as new technologies like generative AI emerge. The ways in which data exposure occurs — whether nefarious or accidental — continue to evolve, and the use of generative AI further complicates things. Having a federally funded group dedicated to researching this complex challenge is critical to finding ways to maintain data privacy in AI environments.

Reviewing means for AI-based data collection

Requirements for federal agencies do not stop there. According to the executive order, the suggested legislation would also include provisions to evaluate how federal agencies “collect and use commercially available information” and consider AI usage to strengthen data privacy guidance for federal agencies. Requiring federal agencies to adopt advanced technologies and set more stringent rules for data collection sets an excellent example for enterprises. It shows that the public sector is taking data privacy seriously, which is a positive sign when contrasted with other countries’ measures to protect consumer privacy.

Creating guidelines for privacy technology effectiveness

Another forward-thinking section of the executive order would require guidelines for proving the effectiveness of privacy-preserving techniques. In doing so, federal agencies will have to do more than just implement a solution; they will have to demonstrate efficacy. This provision is most critical because it asks agencies to be diligent in their vetting processes. It is easy to implement a technology or internal policy and assume you have taken the necessary steps to protect data, but is it working? Standards that evaluate how well solutions work are essential to ensuring the best possible protection, especially for federal agencies that should be held to the highest standards for protecting consumer data.

There is still so much to learn about AI, but our journey to harness its potential must be a responsible one. The potential damage is too great to ignore. I believe President Biden’s executive order is an excellent example of how governments can move quickly to address emerging risks in ubiquitous technologies before problems grow to nearly irreversible proportions. The announcement is a positive first step, and all companies that use consumer data should take note and employ the necessary measures to ensure the safe and responsible use of AI.

Ameesh Divatia is co-founder and CEO of Baffle.

Intelligence community gets a chief AI officer
https://federalnewsnetwork.com/artificial-intelligence/2024/04/intelligence-community-gets-a-chief-ai-officer/ (April 4, 2024)

The appointment of a chief AI officer comes as the IC looks to safely adopt large language models and other technologies.

The top U.S. spy office has tapped a research official to spearhead the intelligence community’s work on AI.

John Beieler, who serves as Director of National Intelligence Avril Haines’ top science and technology advisor, has been named chief artificial intelligence officer at the Office of the Director of National Intelligence. Beieler confirmed his additional role during a speech today at an event hosted by the Intelligence and National Security Alliance in Arlington, Va.

Beieler now leads a council of chief AI officers from the 18 elements of the intelligence community, including the CIA, the National Security Agency and the Defense Intelligence Agency. He said the council, which reports directly to Haines, has been meeting every two weeks for the last two months.

“What we’re focusing on as a group is AI governance,” Beieler said.

He said the group is writing the first IC-wide directive on AI. It will describe what intelligence agencies need to do to deploy AI and machine learning.

“Things like documentation, standards, [application programming interfaces], what sort of data documentation needs to happen, how all these things fit together, the responsible adoption, ongoing monitoring,” Beieler said, describing what goes into the directive. “The responsibility of an individual developer, the responsibility of management and leadership. We’re really focusing on that responsible, ethical adoption.”

He added that the directive will also lay out civil liberties and privacy protections that need to be included in the algorithms developed by the intelligence community.
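The directive itself has not been published, so its actual schema is unknown; as one illustration of the kind of machine-readable model documentation Beieler describes, here is a minimal sketch whose fields are assumptions, not the IC’s requirements.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal machine-readable documentation for a deployed model.

    The fields are illustrative assumptions, not an actual IC schema.
    """
    name: str
    owner: str                      # accountable office or developer
    intended_use: str
    training_data: str              # provenance of the data used
    known_failure_modes: list[str] = field(default_factory=list)
    monitoring_plan: str = "unspecified"

# Hypothetical example of a filled-in card.
card = ModelCard(
    name="entity-resolver-v2",
    owner="example analytics office",
    intended_use="link duplicate records for analyst review",
    training_data="internal record pairs with documented lineage",
    known_failure_modes=["transliterated names", "sparse records"],
    monitoring_plan="monthly precision/recall audit on labeled samples",
)
print(card)
```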

The new AI council is also leading an update to ODNI’s AI strategy.

“We want to make sure that we have that one consolidated viewpoint of, what do we think is important for AI and the IC, to drive some of those resource conversations,” Beieler said.

Concerned with rapid advances in AI by China and other countries, lawmakers have also urged the intelligence community to prioritize the adoption of AI, with safeguards.

The Fiscal 2024 National Defense Authorization Act directs the DNI to establish new policies “for the acquisition, adoption, development, use, coordination, and maintenance of artificial intelligence capabilities,” including minimum guidelines for the performance of AI models used by spy agencies.

Beieler has a background in data science and machine learning. Prior to joining ODNI in 2019, he led research programs on human language technology, machine learning and vulnerabilities in AI at the Intelligence Advanced Research Projects Agency.

At ODNI, he has also helped lead the intelligence community’s Augmenting Intelligence using Machines or “AIM” strategy. With many intel agencies dealing with a deluge of data, the goal of AIM has been to coordinate the adoption of AI and automation across spy agencies.

While spy agencies have used forms of artificial intelligence and machine learning for decades, the emergence of widely available large language models like ChatGPT has added both new considerations and renewed urgency to the AI race.

“A lot of this is focused on making sure that folks that are using these tools understand them,” Beieler said.

ODNI has already funded various training and upskilling programs across intelligence agencies. And he acknowledged the challenges with generative AI and other large language models, such as hallucination errors, copyright issues, and privacy concerns.

“Getting analysts, collectors and the broad base of the IC workforce familiar with these things, so they understand some of these failure modes, but doing that in such a way that they don’t immediately write off the technology,” Beieler said. “That’s the tricky part in upskilling across the workforce.”

With just a handful of companies — rather than government labs or academia — developing the so-called advanced “frontier AI models,” Beieler acknowledged the intelligence community finds itself in a unique “LLM moment.”

He said it will be crucial to test and evaluate the models for different failure modes. He added that the IC isn’t interested in just “buying a widget” from companies, but in partnering with industry and academia to test and evaluate how AI will impact the world of intelligence.
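Beieler did not describe specific test methods; one common baseline is regression-style evaluation against a curated reference set, tallying failures by mode. A minimal sketch, with hypothetical test cases and a stand-in for the model call:

```python
def evaluate(model, cases):
    """Score a model against reference cases and tally failures by mode.

    `model` is any callable mapping a prompt string to an answer string;
    each case names the behavior being probed (e.g., hallucination).
    """
    failures = {}
    for case in cases:
        answer = model(case["prompt"])
        if case["expected"].lower() not in answer.lower():
            failures.setdefault(case["mode"], []).append(case["prompt"])
    return failures

# Hypothetical reference cases; real suites would be far larger.
cases = [
    {"prompt": "What year was NIST founded?", "expected": "1901", "mode": "factuality"},
    {"prompt": "Quote the 28th Amendment.", "expected": "no 28th amendment", "mode": "hallucination"},
]

def toy_model(prompt: str) -> str:  # stand-in for a real LLM call
    return "NIST was founded in 1901." if "NIST" in prompt else "The 28th Amendment says..."

print(evaluate(toy_model, cases))   # flags the fabricated amendment as a hallucination failure
```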

“That doesn’t mean that we won’t have humans. In fact, I think it might mean that we have more humans, but again, what is the role?” Beieler said. “What is that teaming, what is that partnership, and how do we work? And how do we put some of those guardrails in so that analysts and collectors understand some of these models that they’re working with?”

The dust has settled from the AI executive order – Here’s what agencies should tackle next
https://federalnewsnetwork.com/commentary/2024/04/the-dust-has-settled-from-the-ai-executive-order-heres-what-agencies-should-tackle-next/ (April 3, 2024)

While it’s clear the government has made progress since the initial guidance was issued, there’s still much to be done to support safe federal AI adoption.

Now that the dust has settled around the much anticipated AI executive order, the White House has released a fact sheet announcing key actions as a follow-up three months later. The document summarizes actions agencies have taken since the EO was issued, including highlights on managing risks and safety measures and investments in innovation.

While it’s clear the government has been making progress since the initial guidance was issued, there’s still much to be done to support overall safe federal AI adoption, including prioritizing security and standardizing guidance. To accomplish this undertaking, federal agencies can look to existing frameworks and resources and apply them to artificial intelligence to accelerate safe AI adoption.

It’s no longer a question of if AI is going to be implemented across the federal government – it’s a question of how, and how fast, it can be implemented in a secure manner.

Progress made since the AI EO release

Implementing AI across the federal government has been a massive undertaking, with many agencies starting at ground zero at the start of last year. Since then, the White House has made it clear that implementing AI in a safe and ethical manner is a key priority for the administration, issuing major guidance and directives over the past several months.

According to the AI EO follow-up fact sheet, key targets have been hit in several areas including:

  • Managing risks to safety and security: Completed risk assessments covering AI’s use in every critical infrastructure sector, the most crucial area.
  • Innovating AI for good: Launched several AI pilots and research and funding initiatives across key focus areas, including HHS and K-12 education.

What should agencies tackle next?

Agencies should further lean into safety and security considerations to ensure AI is being used responsibly and in a manner that protects agencies’ critical data and resources. In January, the National Institute of Standards and Technology released a publication warning about privacy and security challenges arising from rapid AI deployment. The publication urges that security be of the utmost importance for any public sector agency interested in implementing AI, making security the next priority agencies should tackle along their AI journeys.

Looking back on similar major technology transformations of the past decade, such as cloud migration, we can begin to understand the current problems. It took the federal government over a decade to really nail down the details of ensuring cloud technology was secure — as a result of the federal government’s migration to the cloud, the government released the Federal Risk and Authorization Management Program (FedRAMP) as a form of guidance.

The good news is that we can learn from the lessons of the last ten years of cloud migration to accelerate AI and deliver it faster to the federal government and the American people. Existing governance models, including the Federal Information Security Management Act (FISMA) and the FedRAMP Authority to Operate (ATO), can be extended with overlays for AI-specific safety, bias and explainability risks. The ATO is a concept first developed by NIST to create strong governance for IT systems. This concept, along with others, can be applied to AI systems so agencies don’t need to reinvent the wheel when it comes to securing AI and deploying safe systems into production.
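To make the overlay idea concrete, here is a minimal sketch in Python, assuming a hypothetical control catalog. The control identifiers and descriptions are illustrative only, not an actual FedRAMP baseline or a published NIST overlay.

# Minimal sketch of the "overlay" concept: AI-specific controls layered
# onto an existing security baseline. Control names are illustrative.
baseline_controls = {
    "AC-2": "Account management",
    "AU-6": "Audit record review",
    "SC-28": "Protection of information at rest",
}

ai_overlay = {  # hypothetical AI-specific additions
    "AI-SAFE-1": "Document known failure modes, such as hallucination",
    "AI-BIAS-1": "Test outputs for disparate impact across groups",
    "AI-XPLN-1": "Record model provenance and explainability evidence",
}

def apply_overlay(baseline: dict, overlay: dict) -> dict:
    """Return the combined control set: baseline plus AI-specific controls."""
    combined = dict(baseline)
    combined.update(overlay)
    return combined

for control_id, description in sorted(apply_overlay(baseline_controls, ai_overlay).items()):
    print(f"{control_id}: {description}")

The design point is reuse: an agency extends the baseline it already operates under instead of standing up a separate AI governance regime from scratch.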

Where to get help?

There’s an abundance of trustworthy resources federal leaders can look to for additional guidance. One new initiative to keep an eye on is from NIST’s recently created AI Safety Institute Consortium (AISIC).

AISIC brings together more than 200 leading stakeholders, including AI creators and users, academics, government and industry researchers, and civil society organizations. AISIC’s mission is to develop guidelines and standards for AI measurement and policy, to help our country be prepared for AI adoption with the appropriate risk management strategies needed.

Additionally, agency leaders can look to industry partners with established centers of excellence or advisory committees with cross-sector expertise and third-party validation. Seek out counsel from industry partners that have experience working with or alongside the federal government, that truly understand the challenges that the government faces. The federal government shouldn’t have to go on this journey alone. There are several established working groups and trusted industry partners eager to share their knowledge.

Agencies across a wide range of sectors are continuing to make progress in their AI journeys, and the federal government continues to prioritize implementation guidance. It can be overwhelming to cut through the noise when it comes to what’s truly necessary to consider or to decide what factors to prioritize the most.

Leaders across the federal government must continue to prioritize security, and the best way to do this is by leaning into already published guidelines and seeking the best external resources available. While the federal government works on standardizing guidelines for AI, agencies can have peace of mind by following the roadmaps that they are most familiar with when it comes to best security practices and apply these to artificial intelligence adoption.

Gaurav “GP” Pal is founder and CEO of stackArmor.

Take a use case-driven approach to artificial intelligence
https://federalnewsnetwork.com/federal-insights/2024/04/take-a-use-case-driven-approach-to-artificial-intelligence/
Mon, 01 Apr 2024 12:50:38 +0000
The most promising applications of artificial intelligence let people do more analytical and strategic work.

Artificial intelligence (AI) has come into its own for federal agencies. Federal IT professionals have acquired practical understanding of AI technology, and now they can concentrate on identifying use cases and employing the appropriate AI technology for the job.

Those are among the observations of two of IBM’s top U.S. federal market executives.

“We’ve gotten past the ‘help us understand the technology’ to agencies really beginning to get hands-on with the technology to understand and imagine what the future looks like,” said Susan Wedge, managing partner for the U.S. public and federal market at IBM Consulting. Now, she said, agencies are thinking about “how can they reimagine delivery of their mission, the outcomes that they can achieve.”

“The AI executive order certainly put into focus how agencies need to be thinking about AI and the adoption of AI,” Wedge said. “And we’re really seeing agencies make a shift.”

Generative AI operates differently than what you might call traditional AI. Therefore, said Mark Johnson, vice president of technology for the U.S. federal market at IBM, agencies should take a step-by-step approach to generative AI. The process involves “finding those use cases, applying generative AI [or other AI technologies] and seeing what comes out,” Johnson said. “Then iterating back again, as we discover some interesting things, and we realize we want to know more [about new] questions.”

For example, Johnson cited human resources and its sometimes-convoluted processes. Generative AI, he said, can reveal ways to simplify or re-engineer HR processes and make operations more efficient for HR practitioners. IBM has had success with AI in its own HR function, to the point that 94% of employee questions are successfully answered by the technology.

“That doesn’t mean there’s not a human in the loop,” Wedge said. “It means that a human is there to handle the more complex, more strategic issues.”
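A minimal sketch of that human-in-the-loop pattern, in Python, might look like the code below. The confidence threshold and canned knowledge base are invented for illustration; the article does not describe IBM’s actual implementation.

# Route a question to the AI when confidence is high, to a person otherwise.
CONFIDENCE_THRESHOLD = 0.80  # assumed cutoff, tuned in practice

KNOWN_ANSWERS = {  # stand-in for a trained model's retrieval layer
    "how do i update my direct deposit?": ("Use the employee self-service portal.", 0.95),
    "what is the parental leave policy?": ("See the HR policy handbook, section 4.", 0.90),
}

def answer(question: str) -> str:
    reply, confidence = KNOWN_ANSWERS.get(question.lower(), ("", 0.0))
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"[AI] {reply}"
    # Low confidence or unknown question: escalate to a human specialist.
    return "[Escalated] A human HR specialist will follow up."

print(answer("How do I update my direct deposit?"))
print(answer("Can I transfer my pension from a prior agency?"))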

In all use cases, success in AI requires careful curation and handling of training data. Moreover, Johnson said, the algorithm or large language model you train must itself have guard rails to protect data.

“You don’t want to go just throwing [your data] out there onto the Internet, into some large language model that you don’t know the provenance of,” Johnson said.
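Read as code, that advice amounts to checking both the destination model and the payload before anything leaves the agency boundary. The sketch below uses an invented internal endpoint name and a deliberately crude identifier pattern; a production guardrail would be far more thorough.

import re

# Models whose provenance the agency has vetted (hypothetical endpoint).
APPROVED_ENDPOINTS = {"https://llm.agency.internal/v1"}

# A single, deliberately simple pattern standing in for a real PII scanner.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def safe_to_send(prompt: str, endpoint: str) -> bool:
    """Allow a prompt out only to an approved model, and only without obvious PII."""
    if endpoint not in APPROVED_ENDPOINTS:
        return False  # unknown provenance: don't throw the data "out there"
    if SSN_PATTERN.search(prompt):
        return False  # obvious identifier detected
    return True

print(safe_to_send("Summarize this policy memo.", "https://llm.agency.internal/v1"))    # True
print(safe_to_send("SSN 123-45-6789 needs review.", "https://llm.agency.internal/v1"))  # False
print(safe_to_send("Summarize this memo.", "https://random-llm.example.com"))           # False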

More than software development

AI projects have some characteristics in common with software development, Wedge suggested. As with software development, it’s “important to curate the stakeholders that participate within those pilots or proofs of technology.” More than simply a technology and data exercise, AI projects must pull in a cross section of program managers and anyone else with an interest in performance, safety and efficiency of mission delivery, Wedge said.

Johnson said that, to a greater extent than in pure coding, you must involve users throughout the process. AI touches “the mission of the agency,” he said. “And that’s where you must get it in the hands of those folks who know what they want the outcome to be. And then let them play with it.”

A crucial best practice, Johnson said, establishes oversight of the ethics and fairness of AI as deployed. He noted that IBM has its own internal AI ethics board.

Equally important: a governance setup to ensure AI outcomes stay within acceptable ranges and to avoid the kind of drift that can affect generative AI such that, at some point, one plus one fails to equal two, Wedge and Johnson said.

The most promising use cases “are not about the technology doing the work of a human, but about making the human more productive,” Wedge said. Case management provides another rich possibility, aside from HR.

“Multiple federal agencies are responsible for managing, responding to, engaging on various cases,” Wedge said. “Imagine if you could use generative AI to generate a summary of the case, and then enable that caseworker to drill down in specific areas.”

Understanding the data is the first step for NIH, CMS to prepare for AI
https://federalnewsnetwork.com/ask-the-cio/2024/03/nih-cms-finding-a-path-to-better-data-management/
Fri, 29 Mar 2024 19:53:52 +0000
NIH and CMS have several ongoing initiatives to ensure employees and their customers understand the data they are providing as AI and other tools gain traction.

The National Institutes of Health’s BioData Catalyst cloud platform is only just starting to take off despite it being nearly six years old.

It already holds nearly four petabytes of data and is preparing for a major expansion later this year as part of NIH’s goal to democratize health research information.

Sweta Ladwa, the chief of the Scientific Solutions Delivery Branch at NIH, said the BioData Catalyst provides access to clinical and genomic data already and the agency wants to add imaging and other data types in the next few months.

Sweta Ladwa is the chief of the Scientific Solutions Delivery Branch at NIH.

“We’re really looking to provide a free and accessible resource to the research community to be able to really advance scientific outcomes and therapeutics, diagnostics to benefit the public health and outcomes of Americans and really people all over the world,” Ladwa said during a recent panel discussion sponsored by AFCEA Bethesda, an excerpt of which ran on Ask the CIO. “To do this, it takes a lot of different skills, expertise and different entities. It’s a partnership between a lot of different people to make this resource available to the community. We’re also part of the larger NIH data ecosystem. We participate with other NIH institutes and centers that provide cloud resources.”

Ladwa said the expansion of new datasets to the BioData Catalyst platform means NIH also can provide new tools to help mine the information.

“For imaging data, for example, we want to be able to leverage or build in tooling that’s associated with machine learning because that’s what imaging researchers are primarily looking to do is they’re trying to process these images to gain insights. So tooling associated with machine learning, for example, is something we want to be part of the ecosystem which we’re actively actually working to incorporate,” she said. “A lot of tooling is associated with data types, but it also could be workflows, pipelines or applications that help the researchers really meet their use cases. And those use cases are all over the place because there’s just a wealth of data there. There’s so much that can be done.”

For NIH, the users in the research and academic communities are driving both the datasets and associated tools. Ladwa said NIH is trying to make it easier for the communities to gain access.

NIH making cloud storage easier

That is why cloud services have been and will continue to play an integral role in this big data platform and others.

“The NIH in the Office of Data Science Strategy has been negotiating rates with cloud vendors, so that we can provide these cloud storage free of cost to the community and at a discounted rate to the institute. So even if folks are using the services for computational purposes, they’re able to actually leverage and take benefit from the discounts that have been negotiated by the NIH with these cloud vendors,” she said. “We’re really happy to be working with multi-cloud vendors to be able to pass some savings on to really advanced science. We’re really looking to continue that effort and expand the capabilities with some of the newer technologies that have been buzzing this year, like generative artificial intelligence and things like that, and really provide those resources back to the community to advance the science.”

Like NIH, the Centers for Medicare and Medicaid Services is spending a lot of time thinking about its data and how to make it more useful for its customers.

In CMS’s case, however, the data is around the federal healthcare marketplace and the tools to make citizens and agency employees more knowledgeable.

Kate Wetherby is the acting director for the Marketplace Innovation and Technology Group at CMS.

Kate Wetherby, the acting director for the Marketplace Innovation and Technology Group at CMS, said the agency is reviewing all of its data sources and data streams to better understand what they have and make their websites and the user experience all work better.

“We use that for performance analytics to make sure that while we are doing open enrollment and while we’re doing insurance for people, that our systems are up and running and that there’s access,” she said. “The other thing is that we spend a lot of time using Google Analytics, using different types of testing fields, to make sure that the way that we’re asking questions or how we’re getting information from people makes a ton of sense.”

Wetherby said her office works closely with both the business and policy offices to bring the data together and ensure it’s valuable.

“Really the problem is if you’re not really understanding it at the point of time that you’re getting it, in 10 years from now you’re going to be like, ‘why do I have this data?’ So it’s really being thoughtful about the data at the beginning, and then spending the time year-over-year to see if it’s something you should still be holding or not,” she said.

Understanding the business, policy and technical aspects of the data becomes more important for CMS as it moves more into AI, including generative AI, chatbots and other tools.

CMS creating a data lake

Wetherby said CMS must understand their data first before applying these tools.

“We have to understand why we’re asking those questions. What is the relationship between all of that data, and how can we improve? What does the length of data look like because we have some data that’s a little older and you’ve got to look at that and be like, does that really fit into the use cases and where we want to go with the future work?” she said. “We’ve spent a lot of time, at CMS as a whole, really thinking about our data, and how we’re curating the data, how we know what that’s used for because we all know data can be manipulated in any way that you want. We want it to be really clear. We want it to be really usable. Because when we start talking in the future, and we talk about generative AI, we talk about chatbots or we talk about predictive analytics, it is so easy for a computer if the data is not right, or if the questions aren’t right, to really not get the outcome that you’re looking for.”

Wetherby added another key part of getting data right is for the user’s experience and how CMS can share that data across the government.

In the buildup to using GenAI and other tools, CMS is creating a data lake to pull information from different centers and offices across the agency.

Wetherby said this way the agency can place the right governance and security around the data since it crosses several types including clinical and claims information.

Senate bill aims to bring federal records law into the age of ‘WhatsApp’
https://federalnewsnetwork.com/agency-oversight/2024/03/senate-bill-aims-to-bring-federal-records-law-into-the-age-of-whatsapp/
Thu, 28 Mar 2024 20:25:27 +0000
The legislation comes after recent federal records controversies where officials lost or deleted messages, like the missing Jan. 6 Secret Service texts.
Key Senate lawmakers are pushing to raise the stakes for government officials who delete texts or use personal online accounts to skirt federal records law.

Homeland Security and Governmental Affairs Committee Chairman Gary Peters (D-Mich.) and Sen. John Cornyn (R-Texas) are introducing the “Strengthening the Federal Records Act of 2024” today.

The bill would tighten disclosure requirements for “non-official messaging accounts” used to carry out government business, while also strengthening the ability of the National Archives and Records Administration to hold agencies accountable for complying with record-keeping rules.

“Federal agencies must maintain adequate records so that the American public can hold officials accountable, access critical benefits and services, and have a clear picture of how the government is spending taxpayer dollars,” Peters said in a statement. “We must also update the law to keep pace with rapidly changing technology and ensure that we are not sacrificing transparency as we embrace new forms of communication.”

The bill would prohibit federal employees from using “non-official” messaging applications to carry out government business unless the messages are backed up or otherwise saved in an official account.

Beyond texting, government officials have also increasingly turned to platforms like WhatsApp and Signal in recent years. Those “ephemeral” messaging applications allow users to permanently delete messages after a set amount of time.

“American taxpayers deserve a full accounting of federal records, including across all forms of digital communication,” Cornyn said. “This legislation would help make sure technological advancements do not hamstring the government’s ability to provide greater accountability and transparency for federal records.”

The proposed FRA reforms do not address record-keeping at the White House. Those practices are governed by a separate statute, the Presidential Records Act.

But the legislation comes after numerous federal record-keeping controversies at the agency level in recent years. For instance, the Secret Service lost key text messages from the day of the Jan. 6 Capitol riot, reportedly due to an IT system update.

The Department of Homeland Security inspector general, who had been investigating the missing Secret Service texts, more recently admitted to lawmakers he routinely deletes texts off his government-issued phone.

And during a hearing held by the homeland security committee earlier this month, Republicans pointed to a National Institutes of Health official who had told colleagues he used his personal email account to avoid having his records pulled under a Freedom of Information Act request.

“Records are the currency of democracy,” Anne Weismann, a former Justice Department official and law professor at George Washington University, said during the hearing. “They are the way we hold government actors accountable. And we have seen too many examples, whether it’s at NIH, whether it’s at DHS, whether it’s the Secret Service, where federal employees are either willfully or unwittingly avoiding or contravening their record keeping responsibilities. And as a result, the historical record of what they’re doing and why they’re doing it, is incomplete.”

Certification requirements

Under the legislation, federal employees would also have to certify their compliance with record-keeping requirements before leaving an agency. Weismann pointed to reports that senior officials in the Trump administration may have deleted crucial messages regarding Jan. 6 before leaving government.

“If they had been required to certify upon leaving government that they had complied with their record keeping responsibilities, that might not have happened, or there would have been some ability to hold them accountable for what they did,” Weismann said during a hearing held by the homeland security committee earlier this month.

The legislation would expand a NARA program that automatically captures the email messages of senior agency officials.

The “Capstone” program would be expanded to automatically capture other forms of electronic messages, including through the “culling” of transitory messages and personal messages “as appropriate,” per the legislation.

Justice Department referral

Peters’ and Cornyn’s bill would also require NARA to refer repeated violations of the FRA to the Justice Department, including cases where employees unlawfully remove or destroy records.

Weismann had told lawmakers that NARA has been reticent to refer violations of records laws to DOJ, especially in cases where records were allegedly destroyed. She said that’s despite the fact that the Archives admits it doesn’t have the resources or authorities to investigate and punish record-keeping violations on its own.

“[NARA] is not well equipped, they don’t have the investigative resources, for example, that the Department of Justice has, which is precisely why we think it’s so critical that the obligation to make that referral be made clear,” Weismann said.

The bill comes as federal agencies and NARA manage an increasing amount of electronic records. NARA will stop accepting permanent paper records from agencies starting this summer.

Numerous advisory committees and advocacy groups have warned that agencies have largely been unprepared to handle the growing influx of digital data over the past two decades, impacting everything from classified information sharing to FOIA processing.

The Peters-Cornyn legislation would also set up an “Advisory Committee on Records Automation” at NARA. The committee would be responsible for encouraging and recommending ways that agencies can take advantage of automation to ingest and manage their electronic records.

The bill has garnered the support of multiple advocacy groups, according to statements provided by the Homeland Security Committee. They include the Citizens for Responsibility and Ethics in Washington (CREW), Americans for Prosperity, Protect Democracy, Government Information Watch, and the Association of Research Libraries.

“Government records are ultimately the property of the American people and agencies are responsible for maintaining the emails, texts, and documents they create,” Debra Perlin, policy director for CREW, said in a statement. “The Strengthening Oversight of Federal Records Act would update and bolster our federal recordkeeping laws to account for changes in technology, and make it easier for organizations like ours to ensure that records are created and preserved during any administration.”

White House sets ‘binding requirements’ for agencies to vet AI tools before using them
https://federalnewsnetwork.com/artificial-intelligence/2024/03/omb-sets-binding-requirements-for-agencies-to-vet-ai-tools-before-using-them/
Thu, 28 Mar 2024 09:01:59 +0000
The Biden administration is calling on federal agencies to step up their use of artificial intelligence tools, but keep risks in check.
The Biden administration is calling on federal agencies to step up their use of artificial intelligence tools, but in a way that keeps the risk of misuse in check.

The Office of Management and Budget on Thursday released its first governmentwide policy on how agencies should mitigate the risks of AI while harnessing its benefits.

Among its mandates, OMB will require agencies to publicly report on how they’re using AI, the risks involved and how they’re managing those risks.

Senior administration officials told reporters Wednesday that OMB’s guidance will give agency leaders, such as their chief AI officers or AI governance boards, the information they need to independently assess their use of AI tools, identify flaws, prevent biased or discriminatory results and suggest improvements.

Vice President Kamala Harris told reporters in a call that OMB’s guidance sets up several “binding requirements to promote the safe, secure and responsible use of AI by our federal government.”

“When government agencies use AI tools, we will now require them to verify that those tools do not endanger the rights and safety of the American people,” Harris said.

‘Concrete safeguards’ for agency AI use

OMB is giving agencies until Dec. 1, 2024, to implement “concrete safeguards” that protect Americans’ rights or safety when agencies use AI tools.

“These safeguards include a range of mandatory actions to reliably assess, test, and monitor AI’s impacts on the public, mitigate the risks of algorithmic discrimination, and provide the public with transparency into how the government uses AI,” OMB wrote in a fact sheet.

By putting these safeguards in place, OMB says travelers in airports will be able to opt out of AI facial recognition tools used by the Transportation Security Administration “without any delay or losing their place in line.”

The Biden administration also expects AI algorithms used in the federal health care system to have a human being overseeing the process, verifying the algorithm’s results and avoiding biased outcomes.

“If the Veterans Administration wants to use AI in VA hospitals, to help doctors diagnose patients, they would first have to demonstrate that AI does not produce racially biased diagnoses,” Harris said.

A senior administration official said OMB is providing overarching AI guidelines for the entire federal government, “as well as individual guidelines for specific agencies.”

“Each agency is in its own unique place in its technology and innovation journey related to AI. So we will make sure, based on the policy, that we will know how all government agencies are using AI, what steps agencies are taking to mitigate risks. We will be providing direct input on the government’s most useful impacts of AI,” the official said. “And we will make sure, based on the guidance, that any member of the public is able to seek remedy when AI potentially leads to misinformation or false decisions about them.”

OMB’s first-of-its-kind guidance covers all federal use of AI, including projects developed internally by federal officials and those purchased from federal contractors.

Under OMB’s policy, agencies that don’t follow these steps “must cease using the AI system,” except in some limited cases where doing so would create an “unacceptable impediment to critical agency operations.”

OMB is requiring agencies to release expanded inventories of their AI use cases every year, including identifying use cases that impact rights or safety, and how the agency is addressing the relevant risks.

Agencies have already identified hundreds of AI use cases on AI.gov.
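As an illustration of what one inventory entry might capture, here is a hypothetical record sketched as a Python dataclass. The field names are inferred from the policy’s description, not OMB’s published reporting schema.

from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    """Illustrative shape of an annual AI use case inventory entry."""
    agency: str
    name: str
    purpose: str
    impacts_rights: bool   # e.g., decisions about benefits or due process
    impacts_safety: bool   # e.g., control of critical infrastructure
    risk_mitigations: list = field(default_factory=list)

    def requires_safeguards(self) -> bool:
        # Rights- or safety-impacting uses trigger the concrete safeguards.
        return self.impacts_rights or self.impacts_safety

entry = AIUseCase(
    agency="TSA",
    name="Facial recognition at checkpoints",
    purpose="Identity verification",
    impacts_rights=True,
    impacts_safety=False,
    risk_mitigations=["traveler opt-out", "human review of matches"],
)
print(entry.requires_safeguards())  # True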

“The American people have a right to know when and how their government is using AI, that it is being used in a responsible way. And we want to do it in a way that holds leaders accountable for the responsible use of AI,” Harris said.

OMB will also require agencies to release government-owned AI code, models and data — as long as it doesn’t pose a risk to the public or government operations.

The guidance requires agencies to designate chief AI officers — although many agencies have already done so after OMB released its draft guidance last May. Those agency chief AI officers have recently met with OMB and other White House officials as part of the recently launched Chief AI Officer Council.

OMB’s guidance also gives agencies until May 27 to establish AI governance boards that will be led by their deputy secretaries or an equivalent executive.

The Departments of Defense, Veterans Affairs, Housing and Urban Development and State have already created their AI governance boards.

“This is to make sure that AI is used responsibly, understanding that we must have senior leaders across our government who are specifically tasked with overseeing AI adoption and use,” Harris said.

A senior administration official said the OMB guidance expects federal agency leadership, in many cases, to assess whether AI tools adopted by the agency adhere to risk management standards and standards to protect the public.

Federal government ‘leading by example’ on AI

OMB Director Shalanda Young said the finalized guidance “demonstrates that the federal government is leading by example in its own use of AI.”

“AI presents not only risks, but also a tremendous opportunity to improve public services,” Young said. “When used and overseen responsibly, AI can help agencies to reduce wait times for critical government services, improve accuracy and expand access to essential public services.”

Young said the OMB guidance will make it easier for agencies to share and collaborate across government, as well as with industry partners. She said it’ll also “remove unnecessary barriers to the responsible use of AI in government.”

Many agencies are already putting AI tools to work.

The Centers for Disease Control and Prevention is using AI to predict the spread of disease and detect illegal opioids, while the Centers for Medicare and Medicaid Services is using AI to reduce waste and identify anomalies in drug costs.

The Federal Aviation Administration is using AI to manage air traffic in major metropolitan areas and improve travel time.

OMB’s guidance encourages agencies to “responsibly experiment” with generative AI, with adequate safeguards in place. The administration notes that many agencies have already started this work, including by using AI chatbots to improve customer experience.

100 new AI hires coming to agencies by this summer

Young said the federal government is on track to hire at least 100 AI professionals into the federal workforce this summer, and is holding a career fair on April 18 to fill AI roles across the federal government.

President Joe Biden called for an “AI talent surge” across the government in his executive order last fall.

As federal agencies increasingly adopt AI, Young said agencies must also “not leave the existing federal workforce behind.”

OMB is calling on agencies to adopt the Labor Department’s upcoming principles for mitigating AI’s potential harm to employees.

The White House says the Labor Department is leading by example, consulting with federal employees and labor unions on the development of those principles, as well as on its own governance and use of AI.

Later this year, OMB will take additional steps to ensure agencies’ AI contracts align with its new policy and protect the rights and safety of the public from AI-related risks.

OMB will be taking further action later this year to address federal procurement of AI. It released a request for information on Thursday to collect public input on that work.

A senior administration official said OMB, as part of the RFI, is looking for feedback on how to “support a strong and diverse and competitive federal ecosystem of AI vendors,” as well as how to incorporate OMB’s new AI risk management requirements into federal contracts.

The public has until April 28 to respond to the RFI.

Pentagon’s CDAO wraps up ninth iteration of GIDE
https://federalnewsnetwork.com/defense-main/2024/03/pentagons-cdao-wraps-up-ninth-iteration-of-gide/
Tue, 26 Mar 2024 22:46:12 +0000
CDAO’s GIDE 9 successfully demonstrated a “completely vendor-agnostic” data integration layer for the first time.

At the end of 2023, the Chief Digital and Artificial Intelligence Office delivered a minimum viable capability for Combined Joint All-Domain Command and Control via a series of global experiments known as GIDE. In 2024, the office wants to make the data collected during GIDE exercises available to those developing the next round of AI models.

This year’s GIDE 9, which was wrapped up last week, successfully demonstrated a “completely vendor-agnostic” data integration layer for the first time, making data extensible across various operational systems. This data can now be fed into the DoD’s development pipeline.

“Now it is ready to start piping over into the development pipeline into Alpha-1 and other capabilities so that we can start learning at scale as an enterprise. That’s one of the things we really want to get after this year is to start making that pipeline permanent, persistent and real so we can start training those models,” said Air Force Col. Matthew Strohmeyer, who leads the GIDE series, during the Center for Strategic and International Studies event Tuesday.

Last month, CDAO’s chief Craig Martell said his goal is to have a data mesh in place, which will allow information to flow in a secure manner.

At the strategic level, the GIDE series is testing data mesh services that will ultimately allow combatant commands and the Joint Staff to have data they are able to exchange, giving information advantage on the global scale.

“The data mesh services that we are trying to bring to bear allow us to be able to have data in common between the combatant commands so that one command doesn’t have their kind of program of record that they’re working with that has data in a stovepipe. They may have, for example, some logistics data or munitions data that is relative to that force that they have,” said Strohmeyer.

“In the past, that data wasn’t viewable by another combatant command. But now, because we’re trying to truly globally integrate everything we do, a data mesh service allows us to have that piece of data shared by all the combatant commands. And not just shared via email. It’s shared live.”

At the same time, the data mesh services look different at the tactical level when the services conduct joint fire missions. When it comes to the strategic level, for example, it’s primarily enterprise-level data that is available in the cloud. For tactical-level decisions, data has to be extremely resilient as it needs to withstand operating in environments that will be contested.

During GIDE 9, the team was able to test out data mesh services for the joint operating system.

“We had a true data mesh deployed where all of the data that was used for those warfighting fires decisions existed on every node and the nodes were intelligently routing the data across this kind of mesh network so that if a piece of that mesh went down, it didn’t matter that the data would resiliently repopulate across the mesh and be able to get that information wherever it needed to go and at whatever time,” said Strohmeyer.
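As a toy illustration of that resilient behavior, consider a mesh where every write fans out to the live nodes and a recovered node resynchronizes from a peer. The Python sketch below is a teaching aid only and reflects nothing about the actual GIDE implementation.

# Toy data mesh: each node holds a full copy; writes fan out to live nodes;
# a recovered node repopulates its store from any live peer.
class MeshNode:
    def __init__(self, name):
        self.name, self.store, self.alive = name, {}, True

class Mesh:
    def __init__(self, names):
        self.nodes = {n: MeshNode(n) for n in names}

    def write(self, key, value):
        for node in self.nodes.values():
            if node.alive:
                node.store[key] = value  # the data exists on every live node

    def fail(self, name):
        self.nodes[name].alive = False

    def recover(self, name):
        donor = next(n for n in self.nodes.values() if n.alive and n.name != name)
        self.nodes[name].store = dict(donor.store)  # resync missed updates
        self.nodes[name].alive = True

mesh = Mesh(["NODE-A", "NODE-B", "NODE-C"])
mesh.write("munitions", "status green")
mesh.fail("NODE-B")
mesh.write("logistics", "route 7")  # NODE-B misses this update
mesh.recover("NODE-B")              # ...and repopulates it on recovery
print(mesh.nodes["NODE-B"].store)   # both keys present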

Strohmeyer said that while his office is working on integrating the data collected at the strategic and tactical levels into the development phase, which will allow the development of the next round of AI models, they have a “long way to go.”

On the experimentation side, multiple combatant commands conducted a blind test of an AI capability for logistics-related tasks. An example scenario is analyzing the logistics of moving sustainment capability from one location to another. Some of the participants had access to generative AI tools to quickly come up with a recommended path, while some of the participants were coming up with recommendations without generative AI tools.

“The difference was that this wasn’t research organizations that were actual warfighters that were doing this and seeing what worked and what didn’t work as they went through the process,” said Strohmeyer.

This test is one example of how the GIDE series provides Task Force Lima, the CDAO’s initiative to integrate generative AI tools across the DoD, a venue to experiment with large language models.

GSA’s 10x to take deeper look at 16 ideas submitted by feds
https://federalnewsnetwork.com/contracting/2024/03/gsas-10x-to-take-deeper-look-at-16-ideas-submitted-by-feds/
Tue, 26 Mar 2024 18:41:32 +0000
Ideas to improve public services submitted by employees from FEMA, CFPB, Treasury and others rose to the top of GSA’s 10x priority list.
An employee at the Federal Emergency Management Agency in the Homeland Security Department believes automation would help federal inspectors at disaster recovery sites to generate comprehensive documentation that includes photos for each site.

An employee at the Federal Acquisition Service in the General Services Administration suggested using modern technology like 3D scanners to improve the maps of federal buildings to benefit emergency responders and others.

And two federal employees at the departments of Veterans Affairs and Commerce’s Census Bureau submitted an idea to translate ethical artificial intelligence principles into technical steps by developing processes to assess AI at every level, from inception to development, production and continuous performance evaluation.

These are just three of the 16 ideas from 10 agencies that GSA’s 10x program is considering for possible funding in 2024.

“Our fiscal 2024 investment priorities centered on ideas for reimagining public engagement and promoting equity in delivery. We also emphasized ‘Moonshot’ ideas: the biggest, boldest and most ambitious ideas to transform digital public services,” GSA wrote about 10x in a new blog post. “This round, ideas for artificial intelligence projects emerged as a standout category. Nearly one fifth of all the submissions we received were related to AI.”

GSA launched the 10x program in 2015, and it is now part of the Technology Transformation Service. It operates as a venture studio that asks federal employees to send in ideas, then makes small investments in the most promising ones with the goal of improving federal digital experiences.

GSA 10x to begin analysis

For the 2024 funding opportunity, 10x received almost 200 ideas from more than a dozen agencies. Along with AI, other topics included accessibility technology, public-to-agency communications and improving data sharing.

10x will now move these 16 projects into phase one of the program, where cross-functional teams of technologists will try to answer the simple question, “Is there a there there?”

“They investigate the problem, get a sense of how and if this idea could impact the public, and explore whether a technology solution is possible,” GSA wrote. “We use the phase one findings to guide our investment decisions as we decide whether or not to move a project into subsequent phases.”

In phase two, the 10x team analyzes the idea to decide whether it’s ultimately a technology problem or not. If it’s more of a people, policy or funding challenge, 10x will not invest more resources in developing a product or service.

In phase three, the 10x team makes sure the solution integrates with the agency partner’s existing priorities and technology capabilities. The team reviews workflow processes and how the agency can continue to sustain and support the technology. Most 10x projects end after phase three, when the product is handed off to its agency product owner.

Then in phase four, 10x and the agency sponsor look to scale the technology to support different use cases across agencies and programs that drive the biggest impact, with the ultimate goal of transforming digital services for the public.

10x says most ideas never make it to phase two. For instance, in 2022, of the 25 ideas that made it to phase one, only seven received funding for phase two. Additionally, 10x says fewer ideas actually make it to phases three and four, where the team scales the solution to the public.

The notify.gov project is an example of a 10x funded program that made it to phase four.

Another example is the site scanning platform that offers real-time intelligence to help agencies improve website performance and compliance with government mandates by providing web managers with a customizable, automated scanning service.
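As a rough idea of the kinds of checks such an automated scan can run, here is a minimal Python sketch. The site list and the two checks are illustrative assumptions, not the actual Site Scanning codebase.

import requests  # third-party: pip install requests

SITES = ["https://www.gsa.gov", "https://www.usa.gov"]  # illustrative targets

def scan(url: str) -> dict:
    """Fetch a page and record a few simple compliance signals."""
    response = requests.get(url, timeout=10, allow_redirects=True)
    headers = {k.lower() for k in response.headers}
    return {
        "url": url,
        "status": response.status_code,
        "uses_https": response.url.startswith("https://"),
        "has_hsts": "strict-transport-security" in headers,  # common web security mandate
    }

for site in SITES:
    print(scan(site))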

The post GSA’s 10x to take deeper look at 16 ideas submitted by feds first appeared on Federal News Network.

]]>
https://federalnewsnetwork.com/contracting/2024/03/gsas-10x-to-take-deeper-look-at-16-ideas-submitted-by-feds/feed/ 0
How CDC’s data office is applying AI to public health
https://federalnewsnetwork.com/artificial-intelligence/2024/03/how-cdcs-data-office-is-applying-ai-to-public-health/
Tue, 26 Mar 2024 17:49:12 +0000
Public health is ripe with opportunities to leverage AI, but it’s not as simple as just picking the shiny new tool and feeding it data.

Federal Monthly Insights - Operationalizing AI - March 26, 2024

As federal agencies push forward on their IT modernization goals, many are exploring artificial intelligence tools that can supplement human employees. Agencies are currently applying AI to a variety of missions, and public health is no different. The Centers for Disease Control and Prevention’s new Office of Public Health Data, Surveillance, and Technology (DST) is looking into ways to apply AI to public health data, as well as ways to leverage generative AI to bolster its efforts.

“There was actually a series of 15 pilots that were run across different centers and offices across the agency,” Jennifer Layden, director of DST, said on Federal Monthly Insights – Operationalizing AI. “These were used to help evaluate the type of infrastructure we would need, what type of capabilities we would use, and what security factors we’d have to consider. And the variety of these projects or pilots ranged from more programmatic work to more operational work, such as website redesign or evaluating comments back on a protocol, or whatnot.”

CDC stood up DST last year to coordinate its data strategy. That includes improving data exchange with other federal, state and local agencies and non-governmental partners; improving the ways data informs public health initiatives; and finding ways to better visualize and distribute data for public consumption. AI is quickly becoming a part of those efforts.

AI use cases

For example, automated processes can flag potential health threats more quickly, facilitating faster notifications and communication. AI can also improve internal workflows, making CDC employees more efficient at their jobs. And generative AI can quickly produce a fact sheet about a new public health threat to educate both those at risk and the medical professionals who may need to treat them.
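
A rough illustration of the flagging idea, with an assumed window and threshold rather than CDC's actual parameters, is to alert when the newest weekly case count rises well above its recent baseline:

```python
# Minimal sketch of automated threat flagging: alert when the latest
# weekly count sits far above its recent baseline. Window size and
# threshold are illustrative assumptions, not CDC parameters.
from statistics import mean, stdev

def flag_spike(weekly_counts: list[int], window: int = 8, z_threshold: float = 3.0) -> bool:
    """Return True if the latest count exceeds baseline mean + z * stdev."""
    if len(weekly_counts) < window + 1:
        return False  # not enough history to form a baseline
    baseline = weekly_counts[-(window + 1):-1]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return weekly_counts[-1] > mu
    return (weekly_counts[-1] - mu) / sigma > z_threshold

counts = [12, 9, 14, 11, 10, 13, 12, 11, 42]  # illustrative weekly counts
print(flag_spike(counts))  # True: 42 is far above the recent baseline
```

A real surveillance pipeline would layer on seasonality adjustments and reporting-delay corrections, but the alert-on-deviation core is the same.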

In one test use case Layden discussed, AI is examining public cooling sites to identify areas that could be at risk for spreading Legionella, the bacteria behind Legionnaires’ disease, which spread through contaminated water.

Ensuring data privacy and reducing bias

Amidst a variety of potential use cases, Layden said DST is focusing on putting guardrails around the use of these tools.

“What we’re trying to do in the process is ensure that, one, we establish guidance by which programs and scientists can have some basic playbook for using such tools, ensuring that people do it safely and securely,” she told The Federal Drive with Tom Temin. “[Two:] recognizing that we don’t want to create any risks to de-identification, or information sharing that should not be shared. And then three, how to also factor in ethical and bias considerations.”

Data privacy and ethical and bias considerations are especially important when working with public health data. One major concern around AI tools is that bad actors can leverage them to violate the data privacy of patients and citizens by manipulating the tools to reveal personally identifiable information. That’s where de-identification and determinations about what data is appropriate to share come into play. But that data also has to be as equitable and diverse as possible so as not to introduce any biases and potentially create new underserved populations, or exacerbate the conditions of existing ones.
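
De-identification itself can be pictured with a simplified sketch. The example below shows two common moves, redacting direct identifiers in free text and generalizing quasi-identifiers such as ZIP codes and ages; it loosely mirrors the HIPAA Safe Harbor approach but is nowhere near a complete implementation:

```python
# Simplified de-identification sketch: redact direct identifiers and
# generalize quasi-identifiers. Loosely inspired by HIPAA Safe Harbor;
# a real pipeline implements the full rule set with expert review.
import re

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def redact_text(text: str) -> str:
    """Mask direct identifiers embedded in free-text notes."""
    return EMAIL_RE.sub("[EMAIL]", SSN_RE.sub("[SSN]", text))

def generalize_record(record: dict) -> dict:
    """Coarsen quasi-identifiers so individuals are harder to single out."""
    out = dict(record)
    out.pop("name", None)                    # drop the direct identifier
    out["zip"] = record["zip"][:3] + "XX"    # truncate the ZIP code
    out["age"] = "90+" if record["age"] >= 90 else f"{record['age'] // 10 * 10}s"
    out["notes"] = redact_text(record.get("notes", ""))
    return out

rec = {"name": "J. Doe", "zip": "20170", "age": 47,
       "notes": "Reach at jdoe@example.com, SSN 123-45-6789"}
print(generalize_record(rec))
```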

Picking the right teams and AI tools

That’s why Layden said DST encourages the use of multidisciplinary teams when working with public health data. She advocated for teams that include experts in the disease or other public health threat, people who understand the populations affected or at risk, and people who grasp the data tools and methodologies to perform advanced analytics.

“It is really a multidisciplinary team that needs to come together to understand what the question is that we’re trying to answer,” Layden said. “What are the considerations we need to factor in, as we understand the data that we’re using? And then what are the best tools to help answer that question? So not just using a new tool because it’s a new tool, but is it the best tool to answer the question at hand?”

Another consideration is that these tools evolve; AI tools have been around for some time, but generative AI only hit the spotlight about a year ago. As that evolution occurs, experts need to continuously reevaluate them: Are they still the best tool for the job? Has the nature of identifiable information changed in any way?

Sharing information and tools

The appropriate community also needs access to the results of those reevaluations. Best practices and lessons learned can prevent other teams from making similar mistakes, or save them time in evaluating their own tools. Layden said stakeholders need to continuously build out, test and validate that evaluation framework for it to keep doing its job.

The capabilities have to keep evolving because the threats will, and public health professionals have to keep pace.

“One of the challenges in public health broadly — and not unique or not new — is bringing in the more advanced analytic capabilities, the workforce expertise,” Layden said. “We’ve also looked at ways to partner with academic and private partners, recognizing that our bandwidth, our capabilities to understand the full spectrum of tools and how they could be used … will be slower to build up those capabilities in-house. So how we can partner with experts either in academic or private is another way for us to build up the capabilities, our understanding, as well as expertise.”

One way CDC accomplishes that is through the use of shared tools. For example, Layden said more than half of state jurisdictions use a tool for case investigation that the CDC operates and maintains. Similarly, there’s a shared surveillance system for tracking emergency room data. And there’s a shared governance model to help support the development and sharing of even more tools.

One of the benefits of sharing tools like this is that it encourages agencies to share data more broadly, and in the same formats, reducing the amount of work data scientists have to do to reconcile the data before they can begin analyzing it.
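
A small sketch shows the reconciliation work that shared formats avoid. The jurisdiction field names below are hypothetical; the point is that without a common schema, every analysis starts with a mapping layer like this one:

```python
# Hypothetical sketch of the reconciliation that shared formats avoid:
# mapping each jurisdiction's field names onto one common schema.
FIELD_MAPS = {
    "state_a": {"pt_dob": "date_of_birth", "rpt_dt": "report_date", "cond": "condition"},
    "state_b": {"birthDate": "date_of_birth", "reported": "report_date", "disease": "condition"},
}

def normalize(record: dict, source: str) -> dict:
    """Rename one jurisdiction's fields to the shared schema."""
    mapping = FIELD_MAPS[source]
    return {mapping.get(key, key): value for key, value in record.items()}

a = {"pt_dob": "1980-02-14", "rpt_dt": "2024-03-01", "cond": "legionellosis"}
b = {"birthDate": "1975-07-09", "reported": "2024-03-02", "disease": "legionellosis"}
print(normalize(a, "state_a"))  # both records print in the same shape
print(normalize(b, "state_b"))
```

When every jurisdiction submits data in the shared format to begin with, that mapping layer simply disappears.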

“So in my mind, public health, the more we can share, build up enterprisewide tools that can be used and leveraged appropriately, is one step that we need to continue to take and to grow, but then also sharing the best practices,” Layden said.

State Dept looking at AI to help workforce plan next career steps
https://federalnewsnetwork.com/all-about-data/2024/03/state-dept-looking-at-ai-to-help-workforce-plan-next-career-steps/
Mon, 25 Mar 2024 22:41:03 +0000
The State Department sees generative AI as a valuable tool to meet its mission and to help its employees chart the next step in their careers.

The State Department sees artificial intelligence as an increasingly valuable tool to meet its mission, and is looking at generative AI to help its employees chart the next step in their careers.

Don Bauer, the chief technology officer of the State Department’s Bureau of Global Talent Management, said last month that the department recently obtained an Authority to Operate (ATO) to use AI on sensitive internal data.

“We’re literally looking at the next steps of how do we now leverage internal information and start making decisions that way,” Bauer said Feb. 29 during a Federal News Network-moderated panel at ATARC’s AI Summit in Reston, Virginia.

Before President Joe Biden’s sweeping executive order on AI in government last October, agencies such as the Department of the Navy, the General Services Administration and the Environmental Protection Agency had put limits on how their staffs could use generative AI tools in the workplace.

But Bauer said the State Department is taking a closer look at what generative AI means for its mission.

“The ability to summarize and pull data from multiple sources, and do a lot of information gathering really resonates with the diplomatic community,” Bauer said. “The State Department’s been very much forward-leaning on just telling everyone, ‘Go out and get an account and get familiar with the technology. Just make sure you don’t put any sensitive information into it.’”

The bureau is looking at generative AI to help develop career paths for State Department employees.

“We have a demonstration project to extract skills from resumes and start building out pipelines for civil servants, as far as career progression,” Bauer said. “If I identify a career path for you, then I’m using publicly available position descriptions, extracting those out, and then building up the ability for you to recognize skills you need. Then we’re going to tie that with our learning management system, so we can actually say, ‘If you want to be this person, here’s the skills you need and here’s how you can go get trained.'”
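
The pipeline Bauer describes can be sketched in a few lines: match resume text against a skills taxonomy, do the same for a target position description, and treat the difference as the training gap. The taxonomy and course catalog below are invented stand-ins, not the State Department's systems:

```python
# Hypothetical sketch of the demonstration project's logic: extract
# skills from a resume, compare against a target position, and point
# the gap at training. Taxonomy and catalog are invented stand-ins.
SKILLS_TAXONOMY = {"python", "data analysis", "project management", "sql", "cloud"}
COURSE_CATALOG = {"sql": "Intro to Databases", "cloud": "Cloud Fundamentals",
                  "python": "Python for Analysts"}

def extract_skills(text: str) -> set[str]:
    """Naive keyword extraction against a fixed taxonomy."""
    lowered = text.lower()
    return {skill for skill in SKILLS_TAXONOMY if skill in lowered}

resume = "Led project management for a data analysis team using Python."
position = "Requires Python, SQL, data analysis and cloud experience."

gap = extract_skills(position) - extract_skills(resume)
print("Skills gap:", gap)  # {'sql', 'cloud'}
print("Suggested training:", [COURSE_CATALOG[s] for s in sorted(gap)])
```

A production system would use a language model rather than keyword matching, but the extract-compare-recommend loop is the part that would tie into the learning management system Bauer mentions.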

Along with this culture of experimentation, Bauer said the State Department is prioritizing workforce training around AI.

“We need training, we need to have a common understanding of what AI is to the organization,” he added.

Generative AI ‘on guardrails’ at DOE

Bridget Carper, the Energy Department’s deputy CIO for architecture, engineering, technology and innovation, and its responsible AI official, said the department is giving employees a sandbox environment to experiment with generative AI.

“We actually took the initial stance of, ‘Oh, ChatGPT, we’re going to block it.’ Then we realized that everyone was just doing it on their personal computer. So then we started putting in guardrails,” Carper said. “Now we’re going in with the education aspects, doing training across the board.”

Carper said DOE is currently using AI for enhanced cybersecurity, and to improve the customer experience of individuals and organizations applying for federal grants.

“We were fortunate enough to have funding to be able to provide to different communities, but how do they access that? Most people don’t have the time to go through the different sites — is it EPA? Is it IRS, to be able to obtain that information? So we’re using AI to help put that out there, to make it more readily accessible for users.”
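
One simple way to picture that discovery problem (an illustration, not DOE's actual system) is retrieval over a combined index of grant listings from different agencies:

```python
# Illustration of search over pooled grant listings: one front door
# instead of visiting each agency's site. Entries and the word-overlap
# scoring are invented stand-ins, not DOE's actual system.
GRANTS = [
    {"agency": "DOE", "title": "Community clean energy upgrades"},
    {"agency": "EPA", "title": "Water infrastructure improvement"},
    {"agency": "DOE", "title": "Weatherization assistance for homes"},
]

def search(query: str) -> list[dict]:
    """Rank listings by how many query words appear in the title."""
    words = set(query.lower().split())
    scored = [(len(words & set(g["title"].lower().split())), g) for g in GRANTS]
    return [g for score, g in sorted(scored, key=lambda item: -item[0]) if score > 0]

print(search("clean energy for my community"))  # the DOE energy grant ranks first
```

A production version would add semantic search or a language model on top, but pooling listings across agencies is what makes the information more readily accessible for users.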

‘You have to have good data’

Bauer said the rise of AI use cases puts increased pressure on agencies to improve their data maturity.

“I’m under tremendous pressure for very accurate HR data — whether it’s positions, whether it’s where people are assigned, how the department moves around at large.

“We’re looking at opportunities, for use cases, around using AI to help us find bad data and clean it up,” he said. “When you have somebody that retires after 30 years and retirement tells them you’re in the wrong retirement code, and you owe the government $25,000 before you can leave — you say, ‘Well, how can that happen? It should be really easy to root those things out.’ But there’s so many different legal authorities and combinations of information that human beings could probably do it, but we’re really honing in on the ability to actually start looking at that as a data cleanup exercise, because we’re all under pressure now to have these very, very robust data models that decision makers are all wanting. Everything’s decision data driven now, so you have to have good data.”
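
Rooting out a bad retirement code is, at bottom, a consistency check across records. The sketch below shows the rule-based version of that idea with a deliberately simplified rule (roughly, CSRS for employees first hired before 1984, FERS afterward), glossing over the many legal authorities Bauer mentions:

```python
# Rule-based sketch of HR data cleanup: flag retirement-code and
# hire-date combinations that look inconsistent. The cutoff rule is a
# deliberate simplification of much more complicated legal authorities.
from datetime import date

def flag_retirement_code(employee: dict) -> str | None:
    """Return a warning if the record looks inconsistent, else None."""
    hired, code = employee["hire_date"], employee["retirement_code"]
    if code == "CSRS" and hired >= date(1984, 1, 1):
        return f"{employee['id']}: CSRS code but hired {hired} (expected FERS)"
    if code == "FERS" and hired < date(1984, 1, 1):
        return f"{employee['id']}: FERS code but hired {hired} (verify election)"
    return None

records = [
    {"id": "E1001", "hire_date": date(1994, 6, 1), "retirement_code": "CSRS"},
    {"id": "E1002", "hire_date": date(1995, 3, 15), "retirement_code": "FERS"},
]
for rec in records:
    warning = flag_retirement_code(rec)
    if warning:
        print(warning)  # only E1001 is flagged for review
```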
