The White House executive order on artificial intelligence gathered into one place all the concerns and cautions floating around for years. How to protect privacy in training data. How to avoid algorithmic bias. For more on how agencies can improve their AI game, the Federal Drive with Tom Temin spoke with the founder of the FAIR Institute, Nick Sanna.
Interview Transcript:
Tom Temin And you are into quantifying cybersecurity risks and kind of putting numbers around some of these things. Tell us your take on what came out from the White House. Is there anything new there and what should happen next as a result?
Nick Sanna Yeah, actually, we were quite surprised to see this coming so quickly. We had the privilege of having Chris DeRusha, the federal government CISO, speak at the recent FAIR Conference and tell us the tools were coming. And I’m personally very surprised by how quickly the federal government has come out and tried to provide guidance for the agencies in dealing with this problem. Usually you expect a lag from government on dealing with new trends. But when it comes to AI, they’ve been very, very proactive, probably more proactive than many commercial companies. For once, we see the government leading, trying to provide guidance and make sense of AI in ways that many private companies are still figuring out.
Tom Temin It read like a lot of great values that you want to bring to your AI activities, but it wasn’t very prescriptive about how to ensure, say, lack of bias in algorithms or protect privacy in training data, whatever the case might be. There are still things you actually have to do. So what do you think agencies should do differently now as they think about AI?
Nick Sanna I actually commend the White House for not being too prescriptive to begin with. What they tried to establish is some very common-sense practices that apply to any form of risk. First, if you want to manage this risk properly, you need somebody focused on it. They’re basically asking the agencies to designate chief AI officers who have the responsibility to advise leadership on AI, assess the risks associated with AI, capitalize on it and benefit from it when it makes sense, and manage that risk over time. Having somebody responsible for it is a great step; in terms of governance, if you don’t have someone responsible, you cannot ask for accountability or expect the problem to be taken care of. The second thing is they basically imply: treat this like any other form of risk. Understand how AI can be both a risk and an opportunity, in terms of extending some of your services and creating productivity, whether by offering new services or improving your security practices, but also try to understand how adversaries may use AI against you. So identify those scenarios, try to size the problem, see which scenarios matter for your agency and which may not be as applicable, and then decide what to do about it. I love that, because it points to a risk-based approach rather than a set of prescriptive controls that may not apply to every agency. It forces every agency to ask which AI issues bubble up to the top that they need to tackle, and that may be slightly different from agency to agency.
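Sanna’s advice to “size the problem” is the core of the FAIR model (Factor Analysis of Information Risk), which estimates risk as loss event frequency times loss magnitude, typically via Monte Carlo simulation. The Python sketch below illustrates the idea for a single AI-related loss scenario; the scenario, frequency and magnitude parameters are hypothetical placeholders, not figures from the interview.

```python
# Illustrative FAIR-style Monte Carlo risk quantification:
# risk = loss event frequency x loss magnitude per event.
# All parameter values are hypothetical placeholders for one
# scenario, e.g. "adversary abuses a public-facing AI service".
import numpy as np

rng = np.random.default_rng(seed=42)
TRIALS = 100_000

# Loss event frequency: how often the scenario occurs per year (Poisson).
events_per_year = rng.poisson(lam=0.5, size=TRIALS)

def simulate_annual_loss(n_events: int) -> float:
    """Sum per-event losses; lognormal is a common heavy-tailed choice."""
    if n_events == 0:
        return 0.0
    # mean is on the log scale; this implies a median loss near $150k.
    return float(rng.lognormal(mean=np.log(150_000), sigma=1.0, size=n_events).sum())

annual_losses = np.array([simulate_annual_loss(n) for n in events_per_year])

print(f"Mean annualized loss: ${annual_losses.mean():,.0f}")
print(f"95th percentile loss: ${np.percentile(annual_losses, 95):,.0f}")
print(f"Chance of any loss:   {(annual_losses > 0).mean():.1%}")
```

Output like this lets an agency compare scenarios by expected and tail losses, which is what makes a risk-based prioritization possible in the first place.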
Tom Temin That idea of people using AI against you reminds me of the first time, as a kid, you saw two facing mirrors and poked your head in between: an infinite number of reflections, until the mirror disappeared. Could that happen with AI? With, say, nefarious actors using AI against the AI you have deployed, will they simply get into a closed loop and cancel each other out altogether?
Nick Sanna Well, it could be. It’s always a race to the top in terms of capability. On the adversarial side, we definitely see threat actors using AI to make it much easier to develop a slight variation of the latest ransomware attack that is harder to detect, to penetrate your accounts, and so on. And on the defensive side, we’re trying to be as smart and trying to catch up. So it is warfare, where you need to level up with your adversaries or you’re going to suffer more damage in the short term. The government recognizes there’s a bigger threat there, having probably seen an increase in adversaries using AI to weaponize their tactics and diversify their approaches, and an equivalent level of effort, if not greater, needs to happen on the defensive side. We have seen this with other forms of threat before. So again, I commend the government for saying: in terms of threats, let’s treat AI like other risks we’ve seen in the past and have an organized way of thinking about it, versus trying to find technical silver bullets that may apply to one particular situation but don’t solve the problem. Again, the message is: identify the issues that matter most so you can apply the resources where they matter most, versus deluding ourselves with a bunch of technical remedies.
Tom Temin We’re speaking with Nick Sanna. He is founder of the FAIR Institute and president of Safe Security. So really, you have to think of AI on two fronts. One, how do we deploy this in the best way possible for the best outcomes? But at the same time, you have to treat it as a cybersecurity threat on the incoming side.
Nick Sanna Absolutely. Agencies need to look at it both ways. On the internal-use side, AI can give agencies a lot of help in scaling some of their practices and providing better service to the public, by being more responsive in answering very common demands from the public. It also allows engineers in agencies to check their code and be more proficient in QA’ing and testing things. Again, there are many applications that can increase productivity and offer new products and services. But alongside the opportunities, what are the associated risks? Are we using public versions of large language models, where we can compromise data and inadvertently make it available to other people? Do we look at internal versions of AI tools that we train on data that is actually secure and private? Those are some of the considerations for internal use of AI. And on the adversarial side, yes, you need to increase your threat intelligence capabilities to understand in what ways adversaries are using AI against you and act accordingly: what are the remedies that are most effective in blocking them? So you need to look at it on both fronts, absolutely, both offensive and defensive. And on top of the defensive side, there are a lot of business applications that can be very useful in stretching taxpayer dollars for the benefit of the greater good.
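One concrete version of the public-versus-internal LLM concern Sanna raises is a data-loss-prevention guard that screens prompts for sensitive data before they ever leave the agency. The sketch below is illustrative only: the regex patterns are deliberately simplistic, and send_to_public_llm() is a hypothetical placeholder, not a real API.

```python
# Minimal sketch of a prompt-screening guard for public LLM use.
# Real deployments would use a vetted DLP service; these patterns
# and the placeholder send function are illustrative assumptions.
import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(prompt: str) -> str:
    """Replace anything that looks like sensitive data with a tag."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

def send_to_public_llm(prompt: str) -> None:
    # Placeholder for an actual call to an external model.
    print("Sending:", prompt)

send_to_public_llm(redact("Case for jane.doe@agency.gov, SSN 123-45-6789."))
# Sending: Case for [REDACTED-EMAIL], SSN [REDACTED-SSN].
```

An internal, privately hosted model sidesteps much of this risk, which is the trade-off Sanna points to.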
Tom Temin And I’ll ask you this, because I’ve asked several other people this question. When you deploy a standard application with programmed logic, if you run that program 10 million times you’ll get the same outcome with the same inputs, because that’s how computers work; they’re deterministic. But with AI and the constant learning process, as it works over time your system can get infected, let’s say, with drift because of new data coming in. So the privacy issues, the bias issues, or just the outcomes you want can keep changing. Is it incumbent on agencies to add something to the way they approach application maintenance, which is usually a matter of fixing bugs and adding new features? Maybe turning the model off, retraining it with the original data and relaunching it from time to time, or some related technique?
Nick Sanna Yeah, Tom, I really love this question, because in the past cybersecurity became a problem because it came afterwards. The mandate for agencies, and even commercial companies, was to go to market fast with new applications and new services. Fast and cheap was the name of the game. Security came afterwards, and then, oops, we have a problem, we need to fix it, and suddenly we’re playing catch-up. In the case of AI, we’ve observed that and said: we cannot do this. We cannot go fast and get in trouble really quickly. We need to incorporate risk assessments into the design of the solution and the evaluation of the solution, preemptively. So we have the chance now, and I commend the government for asking the agencies, with this executive order, to put security in the equation alongside fast and cheap, not as an afterthought. AI makes that more important than ever before.
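Temin’s drift question above lends itself to a concrete maintenance check: periodically compare live input data against the original training data and treat statistically significant divergence as a retraining trigger. The sketch below uses a two-sample Kolmogorov-Smirnov test on synthetic data; the threshold, distributions and seed are illustrative assumptions, not a prescribed method.

```python
# Minimal drift check: has the live input distribution moved away
# from the training distribution enough to justify retraining?
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # original data
live_feature = rng.normal(loc=0.4, scale=1.0, size=5_000)      # drifted inputs

stat, p_value = ks_2samp(training_feature, live_feature)
ALPHA = 0.01  # hypothetical tolerance; tune per application

if p_value < ALPHA:
    print(f"Drift detected (KS={stat:.3f}, p={p_value:.2e}): schedule retraining.")
else:
    print("No significant drift; keep the deployed model.")
```

Run per feature on a schedule, a check like this turns “retrain from time to time” into a measurable maintenance policy.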
Tom Temin Right. So you should really avoid taking a comprehensive approach of trying to modernize everything and apply AI everywhere you can, but do it incrementally.
Nick Sanna Incrementally, but alongside. DevOps was a big trend in the industry; now you need to start thinking about DevSecOps, where as you develop a solution, as you onboard a new solution, there must be a mandatory risk assessment done upfront, before you launch. To use an analogy: when you put up a new building, you now need to do an environmental assessment. Same thing here. We are entering an era where it’s no longer okay to have cybersecurity as an afterthought. You need to look at it upfront, and in the case of AI there are so many implications to consider that you need to do the assessment upfront, before damage can be done. The government is ringing the bell and saying, that’s the nature of the problem. Issues can happen, will happen; you cannot pretend you didn’t know. The government is telling you to look at it proactively, and that’s a very good thing.
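The “mandatory risk assessment before launch” idea can be wired into a pipeline as a simple gate that blocks deployment unless a current assessment is on record. The sketch below is a minimal illustration in that DevSecOps spirit; the file name, record schema and 90-day freshness window are hypothetical, not anything prescribed by the executive order.

```python
# Minimal pre-launch gate: fail the deployment step unless a recent,
# acceptable risk assessment artifact exists. Names and thresholds
# are illustrative assumptions.
import json
import sys
from datetime import datetime, timedelta
from pathlib import Path

ASSESSMENT_FILE = Path("risk_assessment.json")  # hypothetical artifact
MAX_AGE = timedelta(days=90)

def gate() -> int:
    if not ASSESSMENT_FILE.exists():
        print("BLOCKED: no risk assessment on file. Run one before launch.")
        return 1
    record = json.loads(ASSESSMENT_FILE.read_text())
    completed = datetime.fromisoformat(record["completed_at"])
    if datetime.now() - completed > MAX_AGE:
        print("BLOCKED: risk assessment is stale; reassess before launch.")
        return 1
    if record.get("residual_risk") not in ("low", "moderate"):
        print("BLOCKED: residual risk too high; remediate before launch.")
        return 1
    print("PASS: risk assessment current; deployment may proceed.")
    return 0

if __name__ == "__main__":
    sys.exit(gate())
```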
Copyright © 2024 Federal News Network. All rights reserved. This website is not intended for users located within the European Economic Area.
Tom Temin is host of the Federal Drive and has been providing insight on federal technology and management issues for more than 30 years.