You can’t tune into a webinar, go to a conference or listen to a speech by a federal technology executive these days without them slipping in a mention of generative artificial intelligence, particularly ChatGPT.
Yes, it’s the latest buzzword to join the litany of past excitement over cloud computing, robotic process automation and, most recently, zero trust.
But unlike cloud or RPA or even zero trust, generative AI is striking apprehension in the hearts of federal officials.
“I think that anybody who’s fired up ChatGPT, and I would be one of the people who’s used it today, I think over the last three to four months with our interactions, we’ve reached some sort of inflection point that we don’t even understand. I think that to try to develop a strategy that’s going to help you long term while really not understanding what’s going on right now, or the speed at which that goes, I think keeping up with that is the real challenge for us,” said Katie Baynes, the deputy chief science data officer in NASA’s Science Mission Directorate, at the recent GITEC summit. “I think partnering with folks who are keeping up and doing cutting edge work has really been the way we mitigate that challenge.”
The newness of these capabilities, and the desire Baynes described to better understand how they work, is becoming a common theme among agencies, and it is driving their initial policy decisions.
The most recent one comes from the General Services Administration. It issued an instructional letter (IL) to provide an interim policy for controlled access to generative AI large language models (LLMs) from the GSA network and government furnished equipment (GFE).
GSA chose not to make the actual three-page instructional letter public. But a GSA spokesperson said in an email to Federal News Network that the “interim policy provides guidance for responsible use of generative AI LLM by employees and contractors with access to GSA systems and provides initial guidelines for what’s appropriate and what’s not (e.g., inputting nonpublic information). Our goal is to continue to ensure that GSA has overall policies in place to help ensure responsible use of software tools by employees and contractors.”
Federal News Network obtained a copy of the IL, signed out June 9 by David Shive, GSA’s chief information officer. It spells out the types of concerns agency leaders have been voicing publicly and privately for much of the past few months.
“Access to publicly available, third-party generative AI LLM endpoints and tools shall be blocked from the GSA network and GFE devices,” the letter stated. “Exceptions will be made for research (relevant to the role, position or organization of the requestor) and non-sensitive uses involving data inputs already in the public domain and generalized queries. Exceptions require completing [a] request form detailing intended usage and acknowledgement of GSA’s IT general rules of behavior to not expose internal federal data.”
Shive said in the IL that non-public data such as work products, email, controlled unclassified information and other similar information cannot be disclosed as inputs for LLM prompts.
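To make that restriction concrete, here is a minimal sketch of the kind of automated screening an agency could place in front of a public LLM endpoint. It is an illustration only: the marker list, pattern and function name are hypothetical, far simpler than a real data loss prevention tool, and the IL does not describe any such tooling.

```python
import re

# Hypothetical pre-submission check: block prompts that appear to contain
# nonpublic data before they reach a public LLM endpoint. The markers and
# patterns below are illustrative, not drawn from GSA's actual policy.
NONPUBLIC_MARKERS = ("CUI", "CONTROLLED UNCLASSIFIED", "FOR OFFICIAL USE ONLY")
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt looks like it contains nonpublic data."""
    upper = prompt.upper()
    if any(marker in upper for marker in NONPUBLIC_MARKERS):
        return False  # carries a controlled-information marking
    if EMAIL_PATTERN.search(prompt):
        return False  # may quote internal email content or addresses
    return True

print(is_prompt_allowed("Summarize this public press release."))   # True
print(is_prompt_allowed("Rewrite this CUI memo for my director.")) # False
```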
“Deployment and use of locally deployed LLMs, such as Alpaca or Open-LLaMA on GFE shall abide by GSA IT standards profile,” the letter stated. “GSA-deployed and managed LLMs shall be assessed and authorized to operate by GSA and require specific authorizations to handle personal identifiable information; have privileged access to GSA systems; or transfer data to systems that are not authorized to operate by the GSA.”
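For readers unfamiliar with the term, a locally deployed LLM runs entirely on the user’s own machine rather than sending prompts to a vendor’s servers. As a rough illustration, and not a GSA-endorsed configuration, here is how one of the named models, Open-LLaMA, can be loaded with the open source Hugging Face transformers library; the checkpoint ID and generation settings are assumptions for the example.

```python
# Minimal local-inference sketch using Hugging Face transformers.
# Requires: pip install transformers torch sentencepiece
# Weights download once; after that, prompts never leave the machine.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "openlm-research/open_llama_3b"  # assumed public checkpoint

# Open-LLaMA's docs recommend the slow tokenizer, hence use_fast=False.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

prompt = "Write a plain-language definition of zero trust:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```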
Finally, GSA’s letter tells users that using LLMs to generate software code should be limited to publicly available code only, and that code in GSA’s repository should not be fed into publicly available LLMs.
“The output from LLMs used to generate code or publishable material shall be manually reviewed by the approved user for accuracy, functional effectiveness and suitability, and intellectual property, including copyrights and trademarks, terms of service or end-user license agreements as LLMs may have been trained on data that AI providers may not have had full legal rights to use,” the letter stated. “GSA performs electronic monitoring of internet communications traffic, including publicly available LLMs.”
GSA’s instructional letter is one of several similar policy-like documents issued by agencies over the last few weeks.
The Environmental Protection Agency in early May sent a note to staff saying it was blocking ChatGPT, OpenAI and similar sites.
EPA’s Office of Mission Support, according to the email obtained by Politico, described the policy as an “interim decision” and said the agency continues to analyze AI tools and will follow up with a final decision on them.
“EPA is currently assessing potential legal, privacy and cybersecurity concerns as releasing information to AI tools could lead to potential data breaches, identity theft, financial fraud or inadvertent release of privileged information,” the EPA memo stated.
A few weeks later, Kevin Duvall, the acting CIO and chief technology officer for the Administration for Children and Families in the Department of Health and Human Services, issued a policy that tries to find a middle ground.
“I think it is important for organizations to take a balanced approach to helping employees navigate appropriate and inappropriate uses in the federal context. We are at the edge of a new frontier that is still just getting started,” Duvall wrote on LinkedIn. “At the Administration for Children and Families (ACF), we are balancing risk, while still exploring this technology and its potential to empower federal employees to serve citizens even better. This interim policy helps us explicitly address some ‘dos’ and ‘don’ts’ while this space matures. I hope this can help others as they navigate this technology.”
HHS ACF’s policy outlines six considerations for employees as they use, or consider using, chatbots and generative AI tools. These include common focus areas such as not sharing personally identifiable information or personal health data, educating the workforce about how these tools work, and not relying on the capabilities for decision-making purposes.
These policies, and there may be others from more agencies, along with a new AI task force at the Homeland Security Department, come as the White House Office of Science and Technology Policy (OSTP) develops a National AI Strategy. In a request for information released May 23, OSTP asked the public how agencies could benefit from generative AI tools to meet their missions. OSTP is accepting comments through July 7.
The real or potential use of generative AI is seeping into other parts of the government. The Interior Department already is starting to see some impact.
“ChatGPT took us by storm. We were off doing our business cases for pilots, and we had our plans, and then all of a sudden there was ChatGPT. Our grant recipients and our contract vendors were like, ‘guess what, we can use this to write our applications and we can use this to get a higher score to finally get funding. Wouldn’t that be nice to level the playing field.’ We were like, ‘Oh my goodness,’” said Andrea Brandon, the deputy assistant secretary for budget, finance, grants and acquisition for the Interior Department. “We don’t have any government policy yet that tells them they can’t use it. We are starting to see an uptick in the applications coming in from organizations that historically been trying and trying to get a grant, and then here is ChatGPT helping them write it, and they don’t need to hire a grant writer. Now they can ask some questions and get it right.”
Brandon said Interior is discussing the use of ChatGPT: applicants have always been able to hire grant writers, so why shouldn’t they be able to use these tools? At the same time, it could mean Interior, or any agency for that matter, receives 1,000 grant applications where it once received only 400. Brandon said finding the staff to review that many applications would be a huge challenge and would likely delay the awarding of the money.
“The organizations that are using ChatGPT currently are finding that it’s actually cheaper to use ChatGPT than to hire a grant writer. Some organizations never had the funds to write a grant right or to hire a grant writer. Whether we’re going to still allow ChatGPT in the future or what have you, we discussed having a policy, which we’re all just discussing it. We’ll take that into consideration,” she said. “We never told applicants that they couldn’t use grant writers. So I don’t know, maybe we won’t tell them they can’t use ChatGPT either so maybe they’ll be able to still use it. I don’t know. But currently, it’s leveling the playing field, they are able to use it.”
It’s clear that uncertainty about generative AI, and how it could affect mission areas across the government, is causing agencies to put the brakes on these capabilities.
And probably rightly so: generative AI, like any new technology, comes with a host of unknowns and plenty of industry buzz, as seemingly every company throws the term into its marketing materials.
The question comes back to how quickly agency leaders will become comfortable with the new capabilities, and when the Office of Management and Budget will publish some basic guidance to help agencies manage the excitement. Until then, the inconsistent application of generative AI will create more challenges than it solves.
Copyright © 2024 Federal News Network. All rights reserved. This website is not intended for users located within the European Economic Area.
Jason Miller is executive editor of Federal News Network and directs news coverage on the people, policy and programs of the federal government.