Federal agencies beware: AI is not all it’s cracked up to be – at least not yet
Samsung released its new Galaxy S24 phone in January. The buzz centered on how the phone uses artificial intelligence to help Galaxy users discover the world around them, through photo manipulation, e-commerce product searches, instantaneous language translation while abroad, and more. The general public is talking more about, and better understanding, the possibilities of AI. That buzz is also present inside federal agencies.
There’s no doubt that artificial intelligence has begun – and will continue – to transform the way we work and live, and there’s no denying its power and potential. But the fact is, despite what most people want to believe about what AI is and can do, we’re just not there yet.
As the founder of a digital accessibility company, and as someone who is disabled and faces challenges completing many everyday tasks that most people take for granted, I may be AI’s biggest fan. I have big hopes for it. In time, I know it will succeed. But I’m also a realist, and I know that time isn’t now.
AI has made significant strides in recent years, with advances in machine learning, natural language processing and computer vision. Despite these breakthroughs, however, AI still has a long way to go before reaching its full potential, and numerous challenges and limitations hinder the development and deployment of AI systems. Rather than dwell on them in great detail, it’s better to move the conversation forward by concentrating on viable solutions that will bring us closer to implementing real AI.
Patience is a virtue
The most prudent thing any federal agency looking to adopt AI can do is sit on the sidelines and wait. Granted, that’s not the popular answer or the response anyone wants to hear, but it’s the smartest move at this point. Federal agencies that take a premature leap into AI will most likely be disappointed, waste time and money, and end up redoing tasks that didn’t yield the desired outcomes. This technology is in its infancy. Although it is groundbreaking and exciting, we must walk before we can run.
Better data and bias mitigation
You may have heard the saying, “garbage in, garbage out.” AI systems are data-driven, and if that data is skewed, incorrect or biased, AI models will inherit and perpetuate those mistakes. Issues of fairness, transparency and accountability are significant concerns, and federal agencies must exercise extreme caution in this area. They must implement measures to identify and mitigate biases and errors in AI algorithms and data sets. That may involve conducting thorough bias assessments during the development phase and ongoing monitoring of AI systems in operation. Agencies must remember that when it comes to data, humans tend to steer things toward the outcome they want instead of what the facts support. Right now, there is such a backlog of corrupted data that current AI models struggle to differentiate between correct information and skewed data. AI can only be as good as the information it is given.
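To make the bias-assessment idea concrete, here is a minimal sketch in Python. The records and the “approved” outcome field are entirely hypothetical illustrations, not agency data; the check simply compares approval rates across groups, the kind of gap that should flag a data set for human review.

```python
# A minimal sketch of a simple bias assessment on a toy data set.
# The records and the "approved" field are hypothetical illustrations.
from collections import defaultdict

records = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

# Tally approval rates per group (a basic "demographic parity" check).
totals = defaultdict(int)
approvals = defaultdict(int)
for record in records:
    totals[record["group"]] += 1
    approvals[record["group"]] += record["approved"]

rates = {g: approvals[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())
print(f"Approval rates by group: {rates}")
print(f"Parity gap: {gap:.2f}")  # a large gap flags the data for review
```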
Implementing safeguards
Federal agencies must have safeguards in place when it comes to AI. They should promote transparency in AI systems by documenting the development process, data sources, algorithms used and, again, potential biases. They should also establish mechanisms for accountability, such as assigning responsibility for AI system decisions and outcomes. And they should ensure that AI systems are interpretable and explainable, especially for critical decision-making processes. That means designing algorithms that produce transparent results and providing explanations for AI-generated decisions when necessary.
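One lightweight way to start on that documentation is a machine-readable “model card.” The sketch below is an illustration built on assumptions, not a federal standard: the system name, fields and values are all hypothetical. The point is that data sources, the algorithm, known limitations and an accountable owner get written down in one reviewable place.

```python
# A minimal sketch of a machine-readable "model card." All names and
# values below are hypothetical illustrations, not a federal standard.
import json

model_card = {
    "name": "benefits-triage-classifier",  # hypothetical system
    "version": "0.3.1",
    "algorithm": "gradient-boosted trees",
    "data_sources": [
        "FY2022 intake forms (de-identified)",
        "FY2023 appeals outcomes (de-identified)",
    ],
    "known_limitations": [
        "Underrepresents rural applicants in training data",
        "Not validated for non-English submissions",
    ],
    "decision_owner": "Office of the CIO",  # accountability assignment
}

# Persist the card alongside the model so reviewers can audit it.
with open("model_card.json", "w") as f:
    json.dump(model_card, f, indent=2)
```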
Risk management
Even with all these safeguards in place, federal agencies must conduct comprehensive risk assessments to identify the potential risks of AI implementation, including cybersecurity threats, legal liabilities and unintended consequences, and develop risk mitigation strategies to address them effectively. We’ve already seen what happens when we’re not careful. Take AI facial recognition technology (FRT), for example. The number of innocent people arrested after being misidentified by FRT keeps increasing, wreaking havoc on their lives and bringing lawsuits against federal agencies. Federal agencies likewise need to be especially careful when it comes to predictive modeling: a predictive policing algorithm was recently found to be both discriminatory and inaccurate.
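A risk assessment doesn’t have to begin as a heavy process. The sketch below is a toy risk register, with made-up entries and scores, that ranks risks by likelihood times impact so mitigation effort goes to the biggest threats first.

```python
# A minimal sketch of a risk register for an AI deployment. The risks
# and the 1-5 likelihood/impact scores are made-up illustrations.
risks = [
    {"risk": "Misidentification by facial recognition", "likelihood": 3, "impact": 5},
    {"risk": "Training-data poisoning (cybersecurity)", "likelihood": 2, "impact": 4},
    {"risk": "Discriminatory predictive-model outputs", "likelihood": 3, "impact": 5},
]

# Rank by severity (likelihood x impact) and review the top items first.
for item in sorted(risks, key=lambda x: x["likelihood"] * x["impact"], reverse=True):
    severity = item["likelihood"] * item["impact"]
    print(f"severity {severity:2d}: {item['risk']}")
```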
Collaboration and knowledge sharing
Federal agencies must take a think-tank approach to AI because we’re all in this together. They should foster collaboration and knowledge sharing among federal agencies, industry partners, academic institutions and other stakeholders; sharing best practices, lessons learned and research findings can help improve the responsible use of AI across the government. Agencies must also establish mechanisms for continuous monitoring and evaluation of AI systems’ performance, effectiveness and impact. This includes soliciting feedback from end users and stakeholders to identify areas for improvement and address emerging issues promptly.
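Continuous monitoring can start just as simply. The sketch below uses made-up weekly numbers to show the basic pattern: compare a model’s accuracy against its baseline and escalate for human review when it drifts past a set tolerance.

```python
# A minimal sketch of continuous model monitoring. The baseline,
# tolerance and weekly accuracy figures are made-up illustrations.
BASELINE_ACCURACY = 0.92
DRIFT_TOLERANCE = 0.05  # how far accuracy may fall before escalation

weekly_accuracy = {
    "2024-W01": 0.91,
    "2024-W02": 0.90,
    "2024-W03": 0.85,  # something changed in the incoming data
}

for week, acc in weekly_accuracy.items():
    if BASELINE_ACCURACY - acc > DRIFT_TOLERANCE:
        print(f"{week}: accuracy {acc:.2f} drifted past tolerance; escalate for review")
    else:
        print(f"{week}: accuracy {acc:.2f} within tolerance")
```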
The takeaway
I can’t wait for the day that AI fully enhances the way we live and work, but that day isn’t today. It won’t be next month or next year, either. We’ve only begun to scratch the surface. Celebrating AI as a life-changing technology that is revolutionizing a new mobile device, or anything else, is irresponsible and, at this point, nothing more than marketing hype. Consumers see a new phone with built-in AI and think they need it, yet most couldn’t explain why, or tell you how or whether it will differ from their current device. It’s no different at the federal level when agencies want a faster and better way to do things. But all in due time.
For AI to truly reach its full potential, researchers, developers, policymakers and ethicists will need to work collaboratively to navigate the complex landscape of AI development, ensuring that it evolves responsibly and ethically. Only through concerted effort and further development can we pave the way for AI to make the lasting, positive impact on society that everyone imagines.
Mark Pound is the founder and CEO of CurbcutOS, a digital accessibility firm making the digital world more user-friendly for people with disabilities.