Artificial Intelligence - Federal News Network
Helping feds meet their mission.

DHS names China, AI, cyber standards as key priorities for critical infrastructure
Thu, 20 Jun 2024

Agencies that oversee critical infrastructure are developing new sector risk management plans, with cybersecurity continuing to be a high priority.

The post DHS names China, AI, cyber standards as key priorities for critical infrastructure first appeared on Federal News Network.

Agencies that oversee critical infrastructure should address threats posed by China and work to establish baseline cybersecurity requirements over the next two years.

That’s according to new guidance signed out by Homeland Security Secretary Alejandro Mayorkas on June 14. The document lays out priorities over the next two years for sector risk management agencies. SRMAs are responsible for overseeing the security of specific critical infrastructure sectors.

“From the banking system to the electric grid, from healthcare to our nation’s water systems and more, we depend on the reliable functioning of our critical infrastructure as a matter of national security, economic security, and public safety,” Mayorkas said in a statement. “The threats facing our critical infrastructure demand a whole of society response and the priorities set forth in this memo will guide that work.”

The memo follows on the heels of a national security memorandum signed by President Joe Biden earlier this year. That memorandum seeks to expand federal oversight of the critical infrastructure sectors, and it specifically directed SRMAs to develop new sector risk management plans in the coming year.

China, AI and space

In his memo this week, Mayorkas highlights “cyber and other threats” posed by China as a key priority risk area. U.S. officials earlier this year said Chinese hackers had breached the networks of multiple U.S. critical infrastructure organizations.

“Attacks targeting infrastructure essential to protect, support, and sustain military forces and operations worldwide or that may cause potential disruptions to the delivery of key goods or services to the American people must be our top priority,” the memo states. “Leveraging timely and actionable intelligence and information and adopting best practices for security and resilience, SRMAs, critical infrastructure owners and operators, and other SLTT and private sector partners shall devise and implement effective mitigation approaches to identify and address threats from the PRC, including plans to address cross-sector and regional interdependencies.”

It also encourages agencies to work with their respective sectors to mitigate risks posed by artificial intelligence and other emerging technologies. Mayorkas also highlights the need to address climate risks, supply chain vulnerabilities and a growing reliance on space systems.

Critical infrastructure ‘resilience’

Meanwhile, the memo also highlights several specific mitigation strategies that SRMAs should work into their plans. It specifically states SRMAs should work with critical infrastructure owners and operators to “develop and adopt resilience measures, anticipate potential cascading impacts of adverse incidents, and devise response plans to quickly recover from all types of shocks and stressors.”

“While we cannot keep determined advanced persistent threats or ransomware actors completely at bay or prevent severe weather occurrences, we can minimize the consequences of incidents by understanding critical nodes, assessing dependencies within systems, and developing plans to ensure rapid recovery,” Mayorkas writes.

Furthermore, the memo continues the Biden administration’s push to set minimum cyber standards across critical infrastructure sectors.

“Individual critical infrastructure owners and operators must be encouraged by SRMAs and, where applicable, held accountable by regulators for implementing baseline controls that improve their security and resilience to cyber and all hazard threats,” the memo states. “Establishing minimum cybersecurity requirements as part of these efforts to secure critical infrastructure also aligns with the 2023 National Cybersecurity Strategy.”

Mayorkas points to the Cybersecurity and Infrastructure Security Agency’s Cybersecurity Performance Goals, as well as the National Institute of Standards and Technology’s Cybersecurity Framework 2.0, as models for cyber protection standards.

“DHS will work with SRMAs, regulators and private sector entities to ensure that baseline requirements are risk-informed, performance-based and, to the extent feasible, harmonized, and to develop tools that support the adoption of such requirements,” Mayorkas adds.

The memo also encourages agencies to incentivize shared service providers to adopt stronger security measures. And it highlights the need to “identify areas of concentrated risk and systemically important entities.”

This vendor tested its AI solutions on itself
Tue, 18 Jun 2024

IBM provided its own grounds for testing and developing a set of AI tools. It can help client organizations avoid some of the initial mistakes.

The post This vendor tested its AI solutions on itself first appeared on Federal News Network.


As its own ‘client zero,’ IBM identified its human resources function back in 2017 for transformation with artificial intelligence. The company used itself as the test bed, and today the function is fully automated. Along the way, IBM came away with a wealth of valuable lessons learned.

Now IBM can help federal agencies apply those lessons and — hopefully — avoid some of the same mistakes. That’s according to Mike Chon, IBM’s vice president and senior partner of talent transformation for the U.S. federal market.

“IBM has gained the efficiencies, it’s delivered on the employee experience, it has achieved a lot of the automations [and] productivity gains,” Chon said.

He cited statistics that tell the story. IBM employees have had nearly two million HR conversations with a virtual agent. Those have achieved resolution in 94% of the cases, meaning the employee didn’t need to proceed to a conversation with a live person.

Manager productivity

When seeking HR efficiencies, organizations tend to think initially in terms of self-service for employees. But Chon urged IT and HR staffs to think more broadly to include managers too.

“I also want to emphasize manager self-service,” he said. “I think that’s where the additional value can come in.”

It also requires a bit of rewiring of manager habits. Chon said that initially, he, like many experienced managers, was less inclined to invoke a chatbot than to simply call his HR representative with questions.

“I myself did not really adopt that [AI] paradigm right away,” he said. “My muscle memory was to call an HR person. Clock forward to today … I actually tend to go to our AI chatbot more than an HR manager.”

He added that IBM’s managerial uptake of the HR chatbot has reached 96% worldwide, with the chatbot accounting for 93% of transactions.

HR presents a natural entry point for AI because it touches everyone.

“By introducing AI through HR, you’re really having this ability to embed the use of these tools throughout your enterprise,” Chon said. “I think that really starts to get people more comfortable.”

Use case approach

Having chosen the HR function, Chon said, IBM initially tried an overly comprehensive approach.

“When we first started this journey, we tried to boil the ocean. It was this big bang approach,” Chon said.

The company realized almost immediately that the tool wasn’t quite right, and people weren’t embracing it.

Lesson learned?

“Never seek the silver bullet,” Chon said. “It really forced everyone to put the brakes on this process” and rethink their approach.

The rethinking resulted in what Chon called a building block, use-case-by-use-case approach. The team started by identifying specific high-frequency or highly repetitive tasks whose automation would let staff spend less time on routine work and more on strategic, value-added work. Data connected to each task helped with this identification, which ultimately led the team to two use cases: employee time off and proof-of-employment letters.

Before AI, employees would ask their HR representative how many vacation days they had left, and it could take days for HR to prepare and send proof-of-employment letters, Chon said. These were some of the most repetitive and time-consuming tasks for the function.

“AI gave employees the ability to find out their vacation days in seconds and generate their own employee verification letter from anywhere, anytime. And they get instant satisfaction because it happens right in front of them,” Chon said.

In the employment verification letter use case, AI took the form of robotic process automation, he added.
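Chon doesn’t detail IBM’s implementation, but the letter use case illustrates why RPA suits highly repetitive tasks: the bot pulls a few fields from a system of record and fills a template. The employee record, template and `generate_letter` helper below are hypothetical, a minimal sketch rather than anything IBM actually runs:

```python
from datetime import date

# Hypothetical HR record store; a real RPA bot would pull these fields
# from the HR system of record rather than an in-memory dict.
EMPLOYEES = {
    "e123": {"name": "Jane Doe", "title": "Software Engineer", "hire_date": "2019-03-04"},
}

LETTER_TEMPLATE = (
    "To whom it may concern:\n\n"
    "This letter confirms that {name} has been employed as {title} "
    "since {hire_date}.\n\n"
    "Issued {today}."
)

def generate_letter(employee_id: str) -> str:
    """Fill the proof-of-employment template from the employee's record."""
    record = EMPLOYEES[employee_id]
    return LETTER_TEMPLATE.format(today=date.today().isoformat(), **record)

print(generate_letter("e123"))
```

Because each use case stays this self-contained, a letter that takes HR days to prepare becomes a self-service request that completes in seconds.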

Moreover, if a particular step of a task doesn’t work, HR and IT can simply turn it off and improve it without affecting everything else that’s working well.

It’s also important to understand that in a small percentage of cases, employees will need to interact with humans; no AI agent can do everything. Therefore, Chon said, “we always give people the ability to connect to a live agent.” Careful data analysis of what leads to “off-ramps” helps with continuous improvement of the AI tool, he said.
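The off-ramp pattern Chon describes (answer automatically when the bot is confident, hand off to a person otherwise, and log every handoff so the team can study what the bot keeps missing) can be sketched in a few lines. The confidence threshold, intent names and `route` function here are hypothetical:

```python
# Minimal sketch of a chatbot off-ramp. High-confidence, automatable intents
# are handled by the bot; everything else escalates to a live agent, and each
# escalation is logged so the team can see which questions drive off-ramps.

CONFIDENCE_THRESHOLD = 0.80  # hypothetical cutoff
AUTOMATED_INTENTS = {"vacation_balance", "employment_letter"}

off_ramp_log: list[str] = []  # fuel for continuous improvement

def route(intent: str, confidence: float) -> str:
    """Return who handles the request: the bot or a live agent."""
    if intent in AUTOMATED_INTENTS and confidence >= CONFIDENCE_THRESHOLD:
        return f"bot:{intent}"
    off_ramp_log.append(intent)  # record why the conversation left the bot
    return "live_agent"

print(route("vacation_balance", 0.95))  # handled by the bot
print(route("grievance", 0.40))         # escalated and logged
```

Analyzing the contents of `off_ramp_log` is what enables the continuous improvement Chon mentions: intents that escalate repeatedly become candidates for the next use case.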

Ultimately, Chon said, the AI-driven HR self-service option for employees and managers lets HR professionals become more productive by taking the drudgery out of HR processes, leaving people more time for “tackling things like recruiting and other high value activities like talent development.”

The key lessons from IBM’s experience center on a use-case-driven approach: AI is adopted successfully through small wins, building blocks and incremental steps. Larger, more strategic and transformational use cases don’t have one clear answer or outcome. The key is finding a use case — a workflow, process or task — that could be accelerated or improved through automation. That also allows for easier scaling to other parts of the agency.

“Now, I would say, seven years later, each time the team launches a new use case, it’s actually getting better and better,” Chon said.


Congressman supports federal body to monitor AI, like one for cybersecurity
Fri, 14 Jun 2024

Some members of Congress want to make sure the government is able to keep tabs on AI developments and react accordingly.

The post Congressman supports federal body to monitor AI, like one for cybersecurity first appeared on Federal News Network.


There have been a ton of new developments regarding AI this year, and it certainly has the potential to change many aspects of American industry and technology. Some members of Congress want to make sure the government is able to keep tabs on those developments and react accordingly. For more, Federal News Network’s Eric White talked with Rep. Troy Carter (D-La.).

Interview Transcript: 

Eric White  A ton of new developments regarding AI this year certainly has the potential to change many aspects of American industry and technology. Some members in Congress want to make sure the government is able to keep tabs on those developments and react accordingly. One of them, Louisiana Democrat Troy Carter, who joins me now. Congressman, thank you for taking the time.

Troy Carter  Eric, thank you very much, always good to be with you.

Eric White  So tell us how this legislation came together, and what would be your main goal?

Troy Carter  Well, the legislation is born from the speed at which we see technology moving. We’re trying to learn from the mistakes that we made with the onset of the internet. AI is a very, very powerful tool, and can be very beneficial in so many ways, but we also know that it can be abused. So, our effort is to make sure we get our arms around it, we extract good and protect our citizens and our country from the nefarious actors who would attempt to do the wrong thing with it. So, this idea of bringing together the players within CISA, Homeland Security, to contemplate, if you will, the issues that are out there and tighten them to make sure that we lessen the effect of those who would have ill thoughts on how to utilize this technology.

Eric White  I know it’s still a new technology and there’s a lot we don’t know about what could occur, but what are your main concerns? And, you know, what are those mistakes from the Internet days that you’re worried about us repeating?

Troy Carter  Well, in AI, we know that we have image replication where you can have a person, I could have you, Eric, look just like you, saying things that are fundamentally contrary to your beliefs. And the public wouldn’t know. We know that the stealing of information, the applications as basic as in the classroom, writing a paper for a student, all the way up to our national security and stealing critical information on our military, our placement, our national security, and everything in between. From our financial institutions, to our institutions of higher learning, all are at risk of having not only breaches, because that’s one thing that’s another issue of having sacred data. And important data would be in financial information and personal information, people’s social security numbers, their banking information, their pass codes for their computers and to access other vital information that is private, but also the applications of mimicking people, mimicking ideas. The breadth of knowledge and power that’s behind AI, we’ve only scratched the surface of what it could be. And we’ve got to be very, very, very careful that this does not get out of our hands or out of the barn too fast. And we’ve got to stay up with and ahead of the bad guys. Because while we have this technology, they do too. And that’s the reality. We’ve seen it impact our placement in military regimes. We’ve seen it impact our national elections in, this is one of those cases, as I like to refer to as, common sense legislation. We cannot let this develop without there being some regulation as to the parameters for its use.

Eric White  We’re speaking with Louisiana representative Troy Carter. And, so, one of my main questions is, why CISA? What role do you see this AI taskforce playing in formulating federal policy? And what is it about CISA that you think they’d be poised perfectly to do this task?

Troy Carter  Well, they’re central intelligence. This is where the databases are. This is the people that are dealing with this more often than not. That doesn’t mean that others will not be tapped. But within Homeland Security and central information intelligence, this is where the data, this is where the technology lives. And these are the people that deal with this, day in, day out, from various formats. So it’s important to have these entities talking, communicating, sharing notes, zeroing in on, while they have so many other responsibilities, the reality and the reasoning behind the taskforce is to kind of force them to the table to have this as a higher priority of review. And then having them report back to the department, the Committee on Homeland Security, which I serve. So in real time, we’re getting this information, and then we’re determining if further legislation is required, if further coordination with other entities or agencies or other committees of the United States Congress are necessary to make sure that the right hand knows what the left hand is doing. And most importantly, the right hand and the left hand join together to protect the integrity of our data in America.

Eric White  Yes, so there’s other entities I’d like to ask you about. You know, CISA is taking a bigger hand in working with those critical infrastructure industries. I’m sure you’re aware for shoring up their cyber resiliency. Could you see CISA doing something similar with all these AI companies that seemed to have been popping up in the last couple of months?

Troy Carter  Well, absolutely. Because I think ultimately, it’s going to require a broad band of regulation. I think it’s imperative of government to make sure that we have guideposts, that we have guardrails, that we’re able to step in when nefarious actors are coloring outside the lines, to make sure that we have systems in place so we don’t have to go and recreate the wheel, or create a wheel. We’ll have a wheel that’s prepared in place to defend against bad actors. Oftentimes, we find ourselves playing catch up and reacting to things. My idea and notion myself and ranking member Thompson, behind this legislation, is to be proactive. One of the things that we don’t do enough of, we spend a lot of time chasing the issue instead of getting in front of it. And what this task force does is allows us to get in front of it, contemplate issues before they are issues and have resolutions to them before they rear their ugly heads.

Eric White  Yeah, I was going to say, you know, it’s well known that Congress seems to sometimes have trouble staying ahead of the game. So it seems as if you are trying to take a different approach and just get out there before problems become problems, even though this is a new technology, and we may not even know what those problems are yet.

Troy Carter  That’s exactly right. And that’s the whole idea, contemplated what may come. And listen, we’re not flying completely blind. We’ve seen what’s happened with social media. We know how this thing has bloomed and grown. And for every bit of usefulness that we get from it, there’s a tremendous amount of bad stuff. And how do we sift through the bad to bring the good to light? The utilization of AI is not a bad thing in and of itself. But like many good things, it can be abused. And so we want to put our arms around that and make sure that we limit the abuse and maximize the good use.

Eric White  Louisiana Congressman Troy Carter, thank you so much for talking with me.

Troy Carter Always happy to be with you, my friend. Keep doing great work spreading the good news to the American people. We depend on podcasts like yours to highlight and enlighten the people what we’re doing in Washington and how they can partner with us. So, God bless you for being a great advocate.

Eric White  Well, thank you so much for your kind words.

FedFakes: Scammers pose as federal agencies adding complexity to defense strategies
Fri, 14 Jun 2024

While impersonation scams are not new, the trend has been further accelerated and made more successful due to advancements in generative AI technology.

The post FedFakes: Scammers pose as federal agencies adding complexity to defense strategies first appeared on Federal News Network.

No matter how many filters you may have, spam calls are an increasingly common experience. While annoying and inconvenient, they often carry associated risks like impersonation attempts, where scammers pose as legitimate businesses, government agencies, or even friends and family. Such scams often involve fraudulent communication through phone calls, emails or social media messages, in which the scammer impersonates a trusted entity to manipulate victims into voluntarily taking actions that benefit the scammer’s agenda.

While impersonation scams are not new, how they are delivered is changing. This trend has been further accelerated and made more successful due to advancements in generative AI technology. With easily accessible AI tools like voice cloning, scammers can replicate someone’s voice with as little as a three-second clip. The gravity of this situation is exemplified by recent events, such as the Biden robocall that highlighted how scammers can even exploit trusted public figures for their deceptive tactics. As these scams become ever more convincing and difficult to distinguish from genuine communication, they present an increasingly significant challenge to security professionals and the general public.  

Rising threat: Targeting federal government agencies 

Last year was a record-breaking one for impersonation scams, particularly those involving scammers posing as federal government agencies to deceive individuals into handing over money or sensitive information. In fact, Americans lost approximately $1.3 billion to scammers impersonating government officials. The financial losses suffered by U.S. individuals due to government impersonation scams have surged more than sevenfold since 2019, indicating a significant increase in fraudulent activity exploiting the names of federal government agencies.

These impersonation scams can involve scammers calling and falsely claiming that an individual will lose their Medicare benefits unless they pay a new fee, posing as an IRS agent insisting that the recipient owes back taxes or fines, or even pretending to be law enforcement or border patrol agents who use the threat of criminal prosecution to intimidate victims into paying fraudulent penalties. The hallmark of these tactics is exploiting fear of real-life scenarios and creating a sense of urgency to pressure victims into taking immediate action without pausing to verify the caller or the situation.

The problem: Deteriorating trust in government

These scams are particularly concerning because consumers tend to place higher trust in federal agencies, viewing them as reliable and authoritative entities. Because victims are more likely to disclose sensitive information out of that trust in federal agencies or officials, criminals know these scams are more likely to succeed; success is a top criterion for any criminal. Addressing these scams is imperative for protecting individuals from financial harm and maintaining public confidence.

Additionally, when fraudulent activity erodes public trust in government institutions, it undermines the foundation of democratic governance. Combating impersonation scams is therefore crucial for safeguarding the integrity of governmental processes and ensuring that citizens continue to have faith in the institutions designed to serve and protect them.

The solution: Arm federal agencies with tools and tactics

In addition to the Federal Trade Commission’s new rule to combat government and business impersonation scammers, federal agencies must remain vigilant against the ever-evolving external cyber threat landscape. This is especially crucial as cybercriminals continuously adapt their tactics to bypass traditional defensive security measures.  

As threat actors become more adept at evading detection, proactive cybersecurity measures become increasingly crucial. This requires a subtle shift: federal government agencies must increasingly defend against these threats proactively while respecting the civil rights of all Americans. In addition to addressing red and blue spaces, this shift involves an effective cybersecurity program that addresses the "gray space" within the attack surface, which includes internet infrastructure, applications, platforms and forums managed by third parties.

Fortunately, many tools are available to monitor that gray space. Threat intelligence solutions, such as fake account detection and takedown measures, are key tools for preventing cybercriminals from using fraudulent accounts to impersonate government entities. The lines between real and fake are increasingly blurred as AI tools make it ever easier to produce realistic-yet-inauthentic content, challenging individuals and organizations to know what's real. This raises everyone's vulnerability to scams, including phishing attacks, ransomware attacks and business email compromise (BEC). By actively monitoring and removing fake accounts on social media and other web platforms, agencies can proactively and automatically disrupt impersonation scammers' operations within minutes.
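Fake-account detection of the kind described here typically begins with simple lexical similarity checks before any heavier analysis. The sketch below is a minimal illustration, not any vendor's actual method; the handle list, threshold and scoring are invented for demonstration.

```python
from difflib import SequenceMatcher

# Hypothetical allowlist of verified agency handles (illustrative only)
OFFICIAL_HANDLES = {"irsnews", "ssagov", "uscbp", "fema"}

def impersonation_score(handle: str) -> float:
    """Similarity (0..1) between a handle and the closest official handle."""
    handle = handle.lower().lstrip("@")
    if handle in OFFICIAL_HANDLES:
        return 0.0  # the verified account itself is not an impersonator
    return max(SequenceMatcher(None, handle, official).ratio()
               for official in OFFICIAL_HANDLES)

def looks_like_impersonator(handle: str, threshold: float = 0.8) -> bool:
    """Flag handles that nearly match, but do not equal, an official handle."""
    return impersonation_score(handle) >= threshold

print(looks_like_impersonator("irs-news"))        # near-match of "irsnews"
print(looks_like_impersonator("gardening_tips"))  # unrelated handle
```

In a production takedown pipeline this score would be only one weak signal among many (profile imagery, posting behavior, account age), but it shows why lookalike handles are cheap to generate and equally cheap to screen for.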

However, being armed with the right security tools to prevent potential attacks is not enough. Federal government agencies must also maintain ongoing security measures, achieved through the security operations center functions of monitoring, detection, analysis and response. Essential security tools include endpoint detection and response (EDR), security information and event management (SIEM), and security orchestration, automation and response (SOAR).

Finally, the linchpin of a more unified, proactive security approach is the adoption of resilient incident response solutions. These solutions capitalize on existing intelligence to minimize the mean time to detect (MTTD) and mean time to remediate (MTTR) security incidents, improving overall defense capabilities while feeding artifacts back to intelligence teams for iterative improvement. Additionally, breach notifications play a crucial role in upholding compliance with laws and regulations, while also fostering transparency, which is essential for gaining and maintaining public trust.
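The mean-time metrics mentioned here are simple averages over incident timestamps; what matters is capturing the timestamps consistently. A minimal sketch with fabricated incident times (the three-timestamp record layout is an assumption, not any specific tool's schema):

```python
from datetime import datetime, timedelta

# Hypothetical incidents: (occurred, detected, remediated)
incidents = [
    (datetime(2024, 6, 1, 9, 0), datetime(2024, 6, 1, 10, 30), datetime(2024, 6, 1, 14, 0)),
    (datetime(2024, 6, 3, 8, 0), datetime(2024, 6, 3, 8, 30), datetime(2024, 6, 3, 11, 0)),
]

def mean_delta(pairs):
    """Average the elapsed time across (start, end) pairs."""
    pairs = list(pairs)
    total = sum((end - start for start, end in pairs), timedelta())
    return total / len(pairs)

mttd = mean_delta((occurred, detected) for occurred, detected, _ in incidents)
mttr = mean_delta((detected, remediated) for _, detected, remediated in incidents)
print(mttd)  # mean time to detect: 1:00:00
print(mttr)  # mean time to remediate: 3:00:00
```

Driving either number down is what the "capitalize on existing intelligence" point amounts to in practice: known indicators shorten detection, and playbooks shorten remediation.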

Augmenting technology with a shift in mindsets and teams

Federal government agencies must also reassess their team structures. For instance, while a security team focused on internal security employs advanced technical measures to safeguard logical assets like databases and networks, it may lack the expertise to protect the agency's reputation from being used to defraud the American public. Effectively establishing an external cybersecurity program requires cross-organizational collaboration: experts in technical and physical threat vectors alongside people well-versed in the dynamics of social media and business platforms, including their potential for misuse. Through increased collaboration that looks at security holistically, government agencies can enhance their resilience against cyber threats while safeguarding the trust and confidence of the public they serve.

Furthermore, in addition to safeguarding with threat intelligence tools and reassessing team structures, it’s crucial to implement a cybersecurity training and awareness program with a strong focus on phishing and impersonation attacks. By educating employees on recognizing phishing and impersonation tactics, agencies can prevent them from falling victim to these attacks. This training should cover common phishing techniques, such as impersonation emails and fake websites, along with guidance on verifying the legitimacy of communications and URLs. Most importantly, this should not be another annual “check the box” training program. The most effective security training is integrated into daily life as part of a culture of security, with emphasis placed on rewarding people who successfully demonstrate security awareness instead of only focusing on punishing those who struggle to comply.  
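Training on verifying URLs can be reinforced with simple tooling. The check below is a teaching heuristic only (treating a .gov hostname suffix as "legitimate" is an assumption for illustration); it catches the common lookalike trick of burying a trusted name inside an attacker-controlled domain, though not homoglyphs or redirects.

```python
from urllib.parse import urlparse

def is_gov_link(url: str) -> bool:
    """Rough legitimacy check: does the link's actual hostname end in .gov?"""
    host = (urlparse(url).hostname or "").lower()
    return host.endswith(".gov")

print(is_gov_link("https://www.irs.gov/refunds"))           # genuine .gov host
print(is_gov_link("https://irs.gov.secure-pay.com/login"))  # lookalike domain
```

The second example is the point of the exercise: the trusted name appears at the start of the hostname, but the registered domain the browser actually visits is secure-pay.com.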

Ensuring the integrity of government communications is of utmost importance, as every breach of trust erodes public confidence in the government. External cybersecurity represents a new frontier that demands a fresh mindset, approach and set of tools. Traditional cybersecurity strategies have focused primarily on defending against threats within the organization's network perimeter. However, the increasing sophistication of threat actors and the persistent growth of attacks originating outside the perimeter, like impersonation scams, underscore the need for federal government agencies to adopt a more unified, proactive security approach.

AJ Nash is vice president and distinguished fellow of intelligence at ZeroFox.

White House showcases what’s possible with AI across 12 agencies https://federalnewsnetwork.com/artificial-intelligence/2024/06/white-house-showcases-whats-possible-with-ai-across-12-agencies/ https://federalnewsnetwork.com/artificial-intelligence/2024/06/white-house-showcases-whats-possible-with-ai-across-12-agencies/#respond Thu, 13 Jun 2024 21:47:33 +0000 https://federalnewsnetwork.com/?p=5040070 The Biden administration is showcasing AI work that’s already underway at a dozen federal R&D agencies.

The post White House showcases what’s possible with AI across 12 agencies first appeared on Federal News Network.

]]>
The Biden administration expects artificial intelligence will set a higher standard for a range of government services, and is showcasing what’s possible with these tools.

The White House’s Office of Science and Technology Policy (OSTP), at an “AI aspirations” summit Thursday, demonstrated how a dozen federal agencies are up to the task of using AI to deliver more services to more people.

OSTP Director Arati Prabhakar said accelerating the use of AI in government will require expertise, resources and data sets from the private sector and academia.

“To build this future, we will need to do together what we can’t do separately,” she said.

Shalanda Young, director of the Office of Management and Budget, said AI “shows so much potential” for agencies to provide better customer service to the public — as long as agencies deploy these tools effectively.

“This is actually an opportunity for us to embrace exciting changes, and show that this power can be used for good, especially for services. But it also reminds me if we get it wrong, and we don’t marry the need to improve government service and use AI to do that in a way that is beneficial for people … If we don’t use AI to actually equitably deliver, we will lose people and their trust,” Young said.

Mo Earley, portfolio lead for federal high-impact service providers at OMB, said that with the advent of AI, “there are new opportunities to reshape government services for the better, to give everyone the opportunity to feel relief, rather than frustration, to have the relevant context understood, and to feel confident that their data and privacy are being rigorously protected.”

“If done right, AI could help bring new opportunities that can more effectively connect each individual with the right support in a seamless and secure way,” Earley said.

Commerce Secretary Gina Raimondo said NOAA is leveraging AI to create faster, more accurate weather models.

“NOAA’s vast data archives will help train the data sets needed to create AI predictive weather models,” Raimondo said.

National Science Foundation Director Sethuraman Panchanathan said the past 50 or 60 years of sustained federal research in AI are driving the breakthroughs we’re seeing today.

“NSF, of course, has been investing continuously in AI, even in AI winters. And here we are, all of us seeing the outcome of that. Yes, there’s a lot more work to be done, and NSF is truly excited to be able to continue to work with all of you and achieve those innovations,” Panchanathan said.

Education Secretary Miguel Cardona said AI could cause “enormous disruption” in schools and universities across the country, on par with the rise of the internet age.

“The AI train has left the station, and it’s moving full-steam ahead. It may be in its early stages, but we can be sure that it’s going to be present in the homes and in the classrooms of our country. And we can be sure that young people are going to use it in their lives and in their education — perhaps in ways we haven’t even dreamed of yet. So we have to meet them where they are.”

Prabhakar told reporters in a call ahead of the event that the Biden administration is showcasing work that’s already underway at a dozen federal research and development agencies.

“These are specific advances that are paving the way for even bigger things that are ahead,” Prabhakar said in a call Wednesday.

The administration is highlighting the potential of AI to accelerate drug research and approve new medications in “months rather than decades,” support individualized learning for K-12 students, and enable neighborhood-level weather forecasts.

The White House also sees the potential of AI to accelerate progress in setting a higher standard for customer services across the government.

“Accessing the government services that are intended to help people in their most difficult moments is often a vexing experience with forms and processes from so many different agencies. Building on a strong foundation of privacy protection, AI can help us deliver critical services to any American, right when they need them most,” Prabhakar said.

The White House is outlining what agencies can do with AI technologies, but is calling on private-sector experts, academics, and researchers to partner with the government on these efforts.

“Usually, when we are talking about projects, we are talking about very concrete things that are happening today, that are already locked into budget. This is actually a bit different. This is a vision conference, because we want to show people how big the possibilities are ahead. In every one of these cases, there is work going on in government that starts us on this path,” Prabhakar said.

Prabhakar said agencies are also looking at how to field AI tools ethically and responsibly, as well as ensure algorithms are free of bias.

“This is exactly the right time to ask what could go wrong. Because the answers to this question point the way to building the protections and mitigations before a new technology is deployed,” she said.

Air Force unveils new generative AI platform https://federalnewsnetwork.com/defense-main/2024/06/air-force-unveils-new-generative-ai-platform/ https://federalnewsnetwork.com/defense-main/2024/06/air-force-unveils-new-generative-ai-platform/#respond Tue, 11 Jun 2024 21:13:09 +0000 https://federalnewsnetwork.com/?p=5036437 NIPRGPT, a ChatGPT-like tool, will allow airmen, guardians and civilian employees to use the technology for tasks like coding and content summarization.

The post Air Force unveils new generative AI platform first appeared on Federal News Network.

]]>
The Department of the Air Force has launched a ChatGPT-like tool that will assist airmen, Guardians and civilian employees with tasks such as coding, correspondence and content summarization, all on the service’s unclassified networks.

The Non-classified Internet Protocol Generative Pre-training Transformer, or NIPRGPT, is part of the Dark Saber software platform, an ecosystem where airmen experiment, develop and deploy their own applications and capabilities.

The platform is not the end tool or the final solution, said Air Force officials, but rather a testing ground that will allow the service to better understand practical applications of generative AI, run experiments, take note of problems and gather feedback.

The Air Force Research Laboratory, which developed the tool, used publicly available AI models, so the service has yet to commit to a particular vendor. But as commercial AI tools become available, the platform will help the service to better gauge the best approach to buying those tools.

“We’re not committing to any single model or tech vendor — it is too early in the process for that. However, we are leveraging this effort to inform future policy, acquisition and investment decisions,” Chandra Donelson, the Air Force’s acting chief data and artificial intelligence officer, told reporters on Monday.

“We aim to partner with the best models from government, industry and academia to identify which models perform better on our specific tasks, domains, as well as use cases to meet the needs of tomorrow’s warfighter.”

While NIPRGPT is only available on unclassified networks, the service is considering expanding it to higher classification levels depending on demand and interest from airmen and guardians.

“The research will absolutely follow demand. We have already had people signal that there’s interest there working with different and appropriate groups. I think that’s why starting intentionally and clearly so we can learn any of those guardrails but, as you can imagine, people want relationships with knowledge at all levels. And so that has absolutely been considered,” said Air Force Research Lab Chief Information Officer Alexis Bonnel.

As uses of generative AI have exploded in the commercial sector, the Defense Department has been carefully exploring how it can leverage the technology to improve intelligence, operational planning, administrative and business processes, and tactical operations. The Pentagon’s Task Force Lima, for example, is evaluating a wide range of use cases and working to synchronize and employ generative AI capabilities across the military services.

In the interim, the Air Force’s office of the chief information officer along with the chief data and artificial intelligence office recently wrapped up a series of roundtables with industry and academia where they explored the potential applications and best practices for adopting GenAI across the service. Air Force CIO Venice Goodwine said the roundtables showed how fast the field of generative AI is growing.

“Now is the time to give our airmen and Guardians the flexibility to develop the necessary skills in parallel. There are multiple modernization efforts going on right now across the federal government and within the DAF to get tools in the hands of the workforce. This tool is another one of those efforts,” said Goodwine.

To make effective AI policy you must trust those who’ve been there https://federalnewsnetwork.com/commentary/2024/06/to-make-effective-ai-policy-you-must-trust-those-whove-been-there/ https://federalnewsnetwork.com/commentary/2024/06/to-make-effective-ai-policy-you-must-trust-those-whove-been-there/#respond Tue, 11 Jun 2024 17:02:34 +0000 https://federalnewsnetwork.com/?p=5022830 Data scientists are essential as policymakers shape legislation around AI

The post To make effective AI policy you must trust those who’ve been there first appeared on Federal News Network.

]]>
On March 28, the White House took a pretty big step toward establishing a broader national policy on artificial intelligence when it issued a memorandum on how the federal government will manage it. It established new federal agency requirements and guidance for AI governance, innovation and risk management. All of this is in keeping with the AI in Government Act of 2020, the Advancing American AI Act, and the President’s executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”  

Tucked into the 34-page memorandum is something that could easily go unnoticed, but it is perhaps one of the most important and far-reaching details to come out of it. Page 5 of the document lists the roles of chief artificial intelligence officers (CAIOs), and more specifically states that a chief data officer (CDO) should be involved in the process.

 While the memorandum doesn’t spell out responsibilities in detail, it points to a mandate to include data scientists in the development, integration and oversight of AI in society. More to the point, it’s a reminder that we need the right and most qualified people at the table to set policy on the role AI will play in society. 

You cannot just assume the right experts have a seat at the table. Even though the field of AI has been around for nearly 70 years, it’s only since the generative AI boom starting in November 2022, when ChatGPT was launched, that many leaders in society have begun to see the sea change AI represents. Naturally, some are jockeying for control over something many don’t understand. There is the risk they could be crowding out the people who do, the data scientists who’ve thus far conceived, created and are incorporating AI into our daily lives and workflows. For something this revolutionary and impactful, why? 

AI development faces a human nature problem

Credit human nature. People are at once intimidated by and even scared of the kind of massive societal change AI represents. This reaction is something we as a society and as a country have to quickly get beyond. Society’s welfare, and America’s national security and competitiveness are at stake. 

To be sure, AI’s benefits are real, but it also poses real risk. Shaping and navigating its future will depend on a combination of regulation, broader education, purposeful deployment, and our ability to leverage and advance data science underlying AI systems.   

 Without the latter, systems run a greater risk of being ineffective, unnecessarily disruptive to the workforce, biased, unreliable and even underperforming in areas that could truly be positively impacted by AI. In high-stakes cases like health care, unproven or untested AI can even cause outright patient harm. The possible setbacks in function can lead to setbacks in perception. And setbacks in perception do little to marshal the resources, talent and institutions needed to realize AI’s potential while safeguarding the public.  

 The states take the lead

 As the federal government has wrestled with how to approach AI regulation, more nimble state governments and regulators have taken the early lead. In the 2023 legislative calendar, some 25 states, along with Puerto Rico and the District of Columbia, already introduced AI-centric legislation. Eighteen states and Puerto Rico have “adopted resolutions or enacted legislation,” according to the National Conference of State Legislatures. 

 At the federal level, there have been dozens of hearings on AI on Capitol Hill, and several AI-centric bills have been introduced in Congress. Many of these bills center on how the government will use AI. Increasingly, we are seeing specific AI applications being addressed by individual federal departments and committees. This includes the National AI Advisory Committee (NAIAC). 

 Where are the data scientists?

 You don’t have to look far to find the critical mass of data scientists who need to be involved in society’s efforts to get AI right the first time. We are (some of) those data scientists and we have been part of an organization that understood the intricacies of “machine learning” long before policymakers knew what the term meant. We, the leaders of the sector charged with bringing the promise of AI to the world, have long worked — and continue to work — to create a framework that realizes the potential of AI and mitigates its risks. That vision centers on three core areas:  

  • Ensuring that the right data is behind the algorithms that continuously drive AI. 
  • Measuring the reliability of AI, from the broadest use down to the most routine and micro applications, to ensure AI quality and safety without compromising its effectiveness and efficiency. 
  • Aligning AI with people, systems and society so that AI focuses on the goals and tasks at hand, learns from what is important, and filters out what is not. 

All of this must be addressed through an ethical prism which we already have in place.  

There is some irony in this early stage in the evolution of AI. Its future has never been more dependent on people – ones who have a full understanding of the issues at play, along with the need for and application of ethical decision-making guardrails to guide everything. 

Bad data makes bad decisions

Ultimately, AI systems are a function of the data that feed them and the people behind that data. Obviously, the ideal is to have accuracy and effectiveness enabled by good data. Sometimes, to better understand how you want it to work, you have to confront those instances where you see what you don’t want – in this case, instances where AI decisions were driven by poor data.   

Consider, for example, AI systems that inaccurately identify minority populations, a problem that has plagued security screening technologies for years. This is usually not a technology problem, but rather a data problem. In such cases, the systems are operating on bad or incomplete data, and the impact on society is significant because it leads to more people being unnecessarily detained.

Chances are, many of these sorts of problems can be traced back to the human beings who were involved, or, perhaps more importantly, not involved in AI development and deployment. Poor data that leads to bias or ineffective decision-making is a significant problem across industries, but one that can be solved by combining the expertise of the data science community with that of diverse stakeholders, especially frontline workers and subject matter experts.
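A standard first step data scientists take to surface this kind of data-driven bias is to compare error rates across groups. The sketch below uses fabricated numbers purely for illustration; false-positive rate is one of several fairness metrics one could compute.

```python
from collections import defaultdict

# Hypothetical screening outcomes: (group, flagged_by_system, actual_match)
records = [
    ("group_a", True, False), ("group_a", False, False),
    ("group_a", False, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False),
    ("group_b", False, False), ("group_b", False, False),
]

def false_positive_rates(records):
    """Per-group false-positive rate: wrongly flagged / all true non-matches."""
    flagged = defaultdict(int)
    non_matches = defaultdict(int)
    for group, predicted, actual in records:
        if not actual:              # the person is not a true match...
            non_matches[group] += 1
            if predicted:           # ...but the system flagged them anyway
                flagged[group] += 1
    return {g: flagged[g] / non_matches[g] for g in non_matches}

print(false_positive_rates(records))
```

A persistent gap between groups points back to the training data, which is exactly the kind of problem a data scientist at the table would flag before deployment.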

 Data scientists must have a seat at the table … now

 Data scientists need to be at the decision-making table early on, because they have the holistic training and perspective, as well as the expertise to set algorithms in specific domains that focus on leveraging data for actual decision-making. Whether the AI system is supporting healthcare, military action, logistics or security screening, connecting effective data with AI will ensure better decisions and therefore fewer disruptions.   

 When it comes to measuring reliability, that’s what data scientists do. No one is better positioned to ensure that AI systems do what they are designed to do and avoid unintended consequences. Data scientists know. They’ve been there.  

Data scientists sit at the intersection of ensuring better, more effective decision-making across AI and identifying the impacts, biases and other problems of AI systems. As states, Congress, the White House, and industry consider the next steps in AI policy, they must ensure data science is at the table. 

Tinglong Dai, PhD, is the Bernard T. Ferrari Professor at the Johns Hopkins Carey Business School, co-chair of the Johns Hopkins Workgroup on AI and Healthcare, which is part of the Hopkins Business of Health Initiative. He is on the executive committee of the Institute for Data-Intensive Engineering and Science, and he is Vice President of Marketing, Communication, and Outreach at INFORMS. 

When it comes to AI at Energy, it takes a village https://federalnewsnetwork.com/federal-insights/2024/06/when-it-comes-to-ai-at-energy-it-takes-a-village/ https://federalnewsnetwork.com/federal-insights/2024/06/when-it-comes-to-ai-at-energy-it-takes-a-village/#respond Mon, 10 Jun 2024 14:54:18 +0000 https://federalnewsnetwork.com/?p=5027885 Rob King, the chief data officer at the Energy Department, said a new data strategy and implementation plan will set the tone for using AI in the future.

The post When it comes to AI at Energy, it takes a village first appeared on Federal News Network.

]]>
Federal chief data officers are playing a larger role in how their organizations are adopting and using basic or advanced artificial intelligence (AI).

A recent survey of federal chief data officers by the Data Foundation found over half of the CDOs who responded say their role around AI has significantly changed over the past year, compared to 2022, when 45% said they had no AI responsibility.

Taking this a step further, with nearly every agency naming a chief AI officer over the past year, the coordination and collaboration between the CDO and these new leaders has emerged as a key factor in the success of any agency AI program.

“We are taking a collaborative and integrated approach to aligning data into artificial intelligence and building synergies between the role of data and data governance, and really being able to meet the spirit of the requirements of the AI executive order, with the ability to interrogate our data ethically and without bias as they are being imported into artificial intelligence models,” said Rob King, the chief data officer at the Energy Department, on the discussion Government Modernization Unleashed: AI Essentials. “We’re really now trying to ensure that we can bake in the appropriate governance management, make sure we have oversight of our AI inventories and start to align the right controls in place from a metadata management and from a training data standpoint, so that we can meet both the letter and the spirit of the AI executive order. We don’t just want to be compliance driven, but ensure that we are doing the right thing to leverage those AI models to their full extent, and make sure that we can accelerate the adoption of them more broadly.”

For the adoption King talks about to happen more broadly and more quickly, data must be prepared, managed and curated to ensure the AI, or really any technology tool, works well.

CDOs in a unique position

He said AI is just the latest accelerator to come along that reemphasizes the importance of understanding and protecting an organization’s data.

“How do we use AI to help us look for themes, patterns of usages in our data to advance the classification and tagging of our data from a stewardship standpoint, so that we can understand that whole full cycle? We’re calling things like data-centric AI to ensure that we’re looking at ways to use non-invasive data governance approaches to help meet the mission needs of AI. It’s a great feedback loop,” King said. “We’re using AI to drive the maturity of our processes so that we can advance the mission adoption of AI as well. The CDOs are in a unique position because we live by the tenets of ‘it takes a village.’ It takes us working with policy and process leaders, and now the chief AI officers (CAIOs) and mission stakeholders, bringing us all together to really drive the outcomes of strong data management practices, now aligned to positioning for AI adoption.”

King, who has been the CDO at Energy for almost a year, said policies like the Federal Data Strategy and the Evidence-Based Policymaking Act have created a solid foundation, but the hard work that still must happen falls to CDOs and CAIOs as they put those concepts into action.

One way King started down this data management journey was by developing an enterprise data strategy and “recharging” DoE’s data governance board by ensuring all the right stakeholders, with the right subject matter expertise and relevancy, are participating.

“We’re on the precipice of completing that strategy. It’s been published in a draft format to our entire data governance board members for final review and edit. We hope to bring that to the finish line in the next few weeks,” he said. “From there, we’re already moving right into a five-year implementation plan, breaking it down by annual increments to promote that strategy, recognizing that our science complex, our weapons complex and our environmental complexes have very different needs.”

Testing AI has begun

The new data strategy will lay out what King called the “North Star” goals for DoE around data management and governance. He said the strategy details five goals, each with several objectives and related actions.

“We wanted to make sure that everyone could see themselves in the strategy. The implementation plan is going to be much more nuanced. We’re now taking key stakeholders from our data governance group and building a team with appropriate subject matter experts and mission representatives to build out that implementation plan and to account for those major data types,” he said. “The other thing we’re starting to look at in our strategy is [asking] what is the right ontology for data sharing? We should have a conceptual mission architecture that can show where we can accelerate our missions, be it on the weapons side or on the science and research side. Where can we build ontologies that say we can accelerate the mission? Because we’re seeing like functions and like activities that, because of our federated nature at the Department of Energy, we can break down those silos, show where there’s that shared equity. That could be some natural data sharing agreements that we could facilitate and accelerate mission functions or science.”

Even as Energy finalizes its data strategy, its bureaus and labs aren’t waiting to begin testing and piloting AI tools. Energy has several potential and real use cases for AI already under consideration or in the works. King said applying AI to mission-critical priorities like moving to a zero trust architecture and the cyber domain is one example. Another is applying AI to hazards analysis through DoE’s national labs.

King said the CDO and CAIO are identifying leaders and then sharing how they are applying AI to other mission areas.

“I’m trying to partner with them to understand how I can scale and emulate their goodness, both from a pure data management standpoint as well as artificial intelligence,” he said. “We have one that the National Nuclear Security Administration is leading, called Project Alexandra, around non-nuclear proliferation. They’re doing a lot of great things. So how do we take that and scale it for its goodness? We are seeing some strategic use cases that are of high importance. The AI executive order says our foundational models need to be published to other government agencies, academia and industry for interrogation. So how do we then start to, with the chief AI officer, say what is our risk assessment? And what is our data quality assessment for being able to publish our foundational models to those stakeholders for that interrogation? How do we start to align our data governance strategy and use cases to some of our AI drivers?”

Federal chief data officers are playing a larger role in how their organizations are adopting and using basic or advanced artificial intelligence (AI).

A recent survey of federal chief data officers by the Data Foundation found over half of the CDOs who responded say their role around AI has significantly changed over the past year, as compared to 2022 when 45% said they had no AI responsibility.

Taking this a step further, with nearly every agency naming a chief AI officer over the past year, the coordination and collaboration between the CDO and these new leaders has emerged as a key factor in the success of any agency AI program.

“We are taking a collaborative and integrated approach to aligning data into artificial intelligence and building synergies between the role of data and data governance, and really being able to meet the spirit of the requirements of the AI executive order, with the ability to interrogate our data ethically and without bias as they are being imported into artificial intelligence models,” said Rob King, the chief data officer at the Energy Department, during the discussion Government Modernization Unleashed: AI Essentials. “We’re really now trying to ensure that we can bake in the appropriate governance management, make sure we have oversight of our AI inventories and start to align the right controls in place from a metadata management and from a training data standpoint, so that we can meet both the letter and the spirit of the AI executive order. We don’t just want to be compliance driven, but ensure that we are doing the right thing to leverage those AI models to their full extent, and make sure that we can accelerate the adoption of them more broadly.”

For that adoption that King talks about to happen more broadly and more quickly, data must be prepared, managed and curated to ensure the AI, or really any technology tool, works well.

CDOs in a unique position

He said AI is just the latest accelerator that has come along that reemphasizes the importance of understanding and protecting an organization’s data.

“How do we use AI to help us look for themes, patterns of usages in our data to advance the classification and tagging of our data from a stewardship standpoint, so that we can understand that whole full cycle? We’re calling things like data-centric AI to ensure that we’re looking at ways to use non-invasive data governance approaches to help meet the mission needs of AI. It’s a great feedback loop,” King said. “We’re using AI to drive the maturity of our processes so that we can advance the mission adoption of AI as well. The CDOs are in a unique position because we live by the tenets of ‘it takes a village.’ It takes us working with policy and process leaders, and now the chief AI officers (CAIOs) and mission stakeholders, bringing us all together to really drive the outcomes of strong data management practices, now aligned to positioning for AI adoption.”

King, who has been the CDO at Energy for almost a year, said policies like the Federal Data Strategy or the Evidence-Based Policymaking Act have created a solid foundation, but the hard work that still must happen will be by CDOs and CAIOs as they put those concepts into action.

One way King started down this data management journey was by developing an enterprise data strategy and “recharging” DoE’s data governance board, ensuring all the right stakeholders with the right subject matter expertise and relevancy are participating.

“We’re on the precipice of completing that strategy. It’s been published in a draft format to our entire data governance board members for final review and edit. We hope to bring that to the finish line in the next few weeks,” he said. “From there, we’re already moving right into a five-year implementation plan, breaking it down by annual increments to promote that strategy, recognizing that our science complex, our weapons complex and our environmental complexes have very different needs.”

Testing AI has begun

The new data strategy will lay out what King called the “North Star” goals for DoE around data management and governance.

He said the strategy details five goals, each with several objectives and related actions.

“We wanted to make sure that everyone could see themselves in the strategy. The implementation plan is going to be much more nuanced. We’re now taking key stakeholders from our data governance group and building a team with appropriate subject matter experts and mission representatives to build out that implementation plan and to account for those major data types,” he said. “The other thing we’re starting to look at in our strategy is [asking] what is the right ontology for data sharing? We should have a conceptual mission architecture that can show where we can accelerate our missions, be it on the weapons side or on the science and research side. Where can we build ontologies that say we can accelerate the mission? Because we’re seeing like functions and like activities that, because of our federated nature at the Department of Energy, we can break down those silos, show where there’s that shared equity. That could be some natural data sharing agreements that we could facilitate and accelerate mission functions or science.”

Even as Energy finalizes its data strategy, its bureaus and labs aren’t waiting to begin testing and piloting AI tools. Energy has several potential and real use cases for AI already under consideration or in the works. King said one example is applying AI to mission-critical priorities such as the move to a zero trust architecture and other work in the cyber domain. Another is applying AI to hazards analysis through DoE’s national labs.

King said the CDO and CAIO are identifying leaders in AI adoption and then sharing how those leaders are applying AI with other mission areas.

“I’m trying to partner with them to understand how I can scale and emulate their goodness, both from pure data management standpoint as well as artificial intelligence,” he said. “We have one that the National Nuclear Security Administration is leading, called Project Alexandra, around non-nuclear proliferation. They’re doing a lot of great things. So how do we take that and scale it for its goodness? We are seeing some strategic use cases that are of high importance. The AI executive order says our foundational models need to be published to other government agencies, academia and industry for interrogation. So how do we then start to, with the chief AI officer, say what is our risk assessment? And what is our data quality assessment for being able to publish our foundational models to those stakeholders for that interrogation? How do we start to align our data governance strategy and use cases to some of our AI drivers?”

The post When it comes to AI at Energy, it takes a village first appeared on Federal News Network.

]]>
https://federalnewsnetwork.com/federal-insights/2024/06/when-it-comes-to-ai-at-energy-it-takes-a-village/feed/ 0
Why artificial intelligence will never replace your job https://federalnewsnetwork.com/tom-temin-commentary/2024/06/why-artificial-intelligence-will-never-replace-your-job/ https://federalnewsnetwork.com/tom-temin-commentary/2024/06/why-artificial-intelligence-will-never-replace-your-job/#respond Thu, 06 Jun 2024 21:53:20 +0000 https://federalnewsnetwork.com/?p=5029991 Artificial Intelligence people keep reassuring everyone else their jobs are safe. What is it about AI that makes people think it could possibly replace them?

The post Why artificial intelligence will never replace your job first appeared on Federal News Network.

]]>
Have you noticed how artificial intelligence purveyors are trying to shove it down our throats?

Google, for example, urges people to try its Gemini. They’ve built the program into the company’s mail and word processing applications, but you have to opt in. I finally dismissed the notice after it popped up every time I opened one of my accounts. Microsoft is no better. It pushes its Copilot function everywhere you look. What I’d like from Microsoft: Update the scroll function to work on scheduling appointments or meetings. The functionality there feels like it dates to Windows 286.

Technical people at all of the IT conferences dutifully reassure audiences that artificial intelligence won’t replace people. They say, rather, it will “augment” people by doing routine or low-value tasks. Or it will help prioritize or stage work according to some factor. They keep saying this because enough people must worry about replacement by software.

And let’s acknowledge the fact that AI has already infiltrated daily life. The best commercial digital services and most software applications contain AI augmentation.

I believe that. AI can certainly augment a million tasks and take away cut-and-paste drudgery. AI, though, consists of software. The only people it will likely replace are programmers, those who code. Robots, on the other hand, have replaced people on assembly lines and in certain dangerous exploratory situations. They’ll eventually replace the proverbial hamburger flippers. AI will improve physical, mechanical robots, but it won’t directly replace people.

You don’t have to look far to see the kinds of work AI can maybe help but never replace. I talked the other day with Chris Mark. He works from Pittsburgh for the Mine Safety and Health Administration, part of Labor. Mark earned a Service to America Medal nomination. To greatly simplify it, he discovered how lateral, or tectonic, land movement contributes to mine roof collapses. Roof collapses, a vertical phenomenon, constitute the principal danger to miners’ lives. Because of Mark’s work, mine layout and design techniques have led to more stable mines and steadily fewer annual deaths. He developed software to help mine builders make better calculations.

What a story. At 19, Mark didn’t find college all that enticing. Born in Greenwich Village and raised in Manhattan, at 20 he became a coal miner in West Virginia. That’s tough work. Eventually he earned a doctorate in mining engineering before embarking on his long federal career as a mine researcher and, later, regulator.

We were chatting about Pittsburgh and my visual memories from childhood of flame-belching steel mills and glowing slag heaps. A sudden thought popped into my head. How could AI replace someone like Chris Mark?

So many federal jobs require experience and intuition. Janet Woodcock retired as principal deputy commissioner at the Food and Drug Administration. She’s legendary for reforming drug approval processes and for hectoring Congress to let FDA collect user fees from the generic drug makers. She also pushed for automation and electronic forms to help evaluators deal with what had been trailers full of paper submissions.

No doubt future FDA improvements will come from AI to speed up document discovery, risk analysis and clinical test interpretation. People will think up the use cases and make the decisions.

Nearly Useless Factoid

By Michele Sandiford

The first coal mine in America was established in 1701 in Midlothian, Virginia.

Source: Geogrit.com

The post Why artificial intelligence will never replace your job first appeared on Federal News Network.

]]>
https://federalnewsnetwork.com/tom-temin-commentary/2024/06/why-artificial-intelligence-will-never-replace-your-job/feed/ 0
House appropriators pan DHS plans for ‘chief employee experience officer’ https://federalnewsnetwork.com/workforce/2024/06/house-appropriators-pan-dhs-plans-for-chief-employee-experience-officer/ https://federalnewsnetwork.com/workforce/2024/06/house-appropriators-pan-dhs-plans-for-chief-employee-experience-officer/#respond Thu, 06 Jun 2024 21:04:50 +0000 https://federalnewsnetwork.com/?p=5030876 DHS budget document say the new "CEXO" would help the department consolidate its employee engagement goals, as well as recruit and retain top talent.

The post House appropriators pan DHS plans for ‘chief employee experience officer’ first appeared on Federal News Network.

]]>
In the opening salvo of the fiscal 2025 spending debate, House appropriators are not supporting a new Department of Homeland Security initiative focused on employee experience, among other cuts.

The GOP-led House Appropriations Committee’s homeland security subcommittee approved its fiscal 2025 DHS funding bill on Tuesday. And the legislation does not include DHS’s request for $5.5 million to support the establishment of a “chief employee experience officer.”

The House bill is now slated for consideration before the full committee. Senate appropriators have yet to release their corresponding spending bills.

DHS’s budget request outlined the department’s vision for the “chief employee experience officer.” The new office would be located within the management directorate and staffed by 12 full-time-equivalent employees.

In budget justification documents, DHS states that the new “CEXO” is “crucial to enhancing coordination efforts to promote and elevate the recruitment and retention of top talent within DHS.”

The CEXO would also serve as the senior DHS official in charge of coordinating the department’s activities under President Joe Biden’s executive order on diversity, equity, inclusion and accessibility in the federal workforce. “At DHS, this function has a broader meaning and purpose to include all facets of the employee experience,” the budget documents add.

While the House subcommittee did not provide a specific explanation for declining to fund the new office, Republicans have broadly sought to stymie the Biden administration’s diversity, equity and inclusion initiatives.

The subcommittee’s summary of the funding legislation states the bill “focuses DHS on its core responsibilities,” including by “preventing the department from carrying out its equity action plan or advancing critical race theory.”

But DHS budget documents state “the primary focus” of the new CEXO office “will revolve around our continued pursuit of outreach, recruitment, and the retention of top-tier talent.”

The CEXO would be a member of the senior executive service. The office would also include a deputy CEXO, four program managers, two program analysts, two HR specialists, a budget analyst, and an acquisitions support professional.

DHS officials have celebrated the department’s recent improvements in Federal Employee Viewpoint Surveys (FEVS). But “momentum going forward will be difficult without a dedicated office to ensure that appropriate resources are focused on understanding and improving the employee experience,” budget documents argue.

DHS AI office not supported

The House subcommittee’s bill also didn’t include a requested $5 million to establish a new artificial intelligence office.

DHS has already named Chief Information Officer Eric Hysen in the dual-hat role of “Chief AI Officer.” But DHS’s budget request is also seeking the funding for the new AI office, also in the management directorate, to help support Hysen and the department’s AI task force.

The new AI office would “support implementing infrastructure, technologies, and processes to advance the responsible and ethical use and adoption of AI and the charter of the AI Task Force,” budget documents state. “This includes planning the infrastructure needed for AI and establishing data management and engineering practices to prepare data for use in AI models.”

DHS has staked out ambitious plans for AI, including a goal to lead the government in the responsible use of AI. An AI roadmap, released by DHS this spring, details several specific use cases, as well as broader policy initiatives focused on AI.


The post House appropriators pan DHS plans for ‘chief employee experience officer’ first appeared on Federal News Network.

]]>
https://federalnewsnetwork.com/workforce/2024/06/house-appropriators-pan-dhs-plans-for-chief-employee-experience-officer/feed/ 0
How generative AI is cutting down ‘busy work’ and speeding up processing to combat FWA https://federalnewsnetwork.com/federal-insights/2024/06/how-generative-ai-is-cutting-down-busy-work-and-speeding-up-processing-to-combat-fwa/ https://federalnewsnetwork.com/federal-insights/2024/06/how-generative-ai-is-cutting-down-busy-work-and-speeding-up-processing-to-combat-fwa/#respond Thu, 06 Jun 2024 03:47:59 +0000 https://federalnewsnetwork.com/?p=5029652 Optum Serve’s Amanda Warfield tells Federal News Network how agencies are tapping into generative AI to make federal employees even more productive.

The post How generative AI is cutting down ‘busy work’ and speeding up processing to combat FWA first appeared on Federal News Network.

]]>

Leaders across the federal government are seeing generative artificial intelligence and large language models (LLMs) as promising tools that will reshape how agencies deliver on their mission.

The Biden administration is calling on agencies to experiment with GenAI, and is touting the transformative role this emerging technology will have on government work.

“As generative AI products become widely available and common in online platforms, agencies are discouraged from imposing broad general bans or blocks on agency use of generative AI,” President Joe Biden wrote in a sweeping executive order issued in October 2023.

The executive order underscores agencies’ caution over GenAI, but also signals the importance of experimenting with this emerging technology.

Amanda Warfield, the vice president of program integrity at Optum Serve, said agencies see GenAI as a tool that will enable federal employees to become more productive.

“In the last year or so, we’ve really seen an explosion in generative AI,” Warfield said. “And the federal government has really been trying to apply the right set of guidelines around it, and figure out where it is the most valuable, where it can create the most efficiencies.”

Warfield said agencies see generative AI as a promising tool to eliminate or reduce manual tasks, while empowering the federal workforce to focus on higher-impact work.

“Generative AI is just there to supplement and make those tasks that most people probably don’t like doing — busy work — whether it’s either data entry, or manual review of large documents. [It’s] things that take you a lot of time. What if you had a way to streamline that, to automatically have a tool that’s going to identify the right section of a 1,000-page document for you?” Warfield said. “For employees, they can then spend their time doing more of what their specialized skill is.”

GenAI as ‘policy assistant’

Agencies are identifying GenAI use cases across a wide array of mission areas. Warfield said many agencies see opportunities to use it to provide a better customer experience to the public.

“For a given agency, how can that be applied to help streamline things, make their lives easier when they’re applying for program benefits, or things like that, that really add value and are meaningful to agencies’ missions?” she said. “It’s about being efficient, saving time and money, and then being able to really prioritize the workload that gives you the most value and the most [return-on-investment].”

Warfield said agency watchdogs, including inspector general offices, are also turning to GenAI as a “policy assistant” to tackle a growing workload of fraud cases.

“They have more cases than they can work. They have finite resources. They don’t have agents to work and prosecute every case that comes their way. So imagine being able to apply generative AI to streamline what today is very manual,” she said.

As IG offices evolve their toolkits to stay ahead of fraudsters, Warfield said GenAI helps investigators comb through hundreds — if not thousands — of documents, flag anomalies and build evidence in a potential fraud case.

“If we’re talking about a provider in health care, it’s looking at eons of claims data and comparing that to policy documentation,  federal regulations and guidelines to essentially prove what the provider did, or what they billed, violated policy — and how can they prove that’s intentional,” Warfield said. “It involves a lot of manual research, combing through data, combing through these large documents, and to empower agents with a tool that that can easily condense down massive amounts of PDF files and documents and all sorts of data into a human-like Q&A format … [on] whatever case they’re prosecuting … it can provide an easy way for anybody who has health care experience or doesn’t to be able to interpret those big documents.”

GenAI can also supplement the skillsets of employees — allowing them, for example, to write code or parse large volumes of data, even if they don’t have a technical background.

“A lot of folks who support fraud, waste and abuse on the downstream side, in looking at cases for potential prosecution or other action, not all of them are technical individuals who know how to query data or write SQL queries or program. But they still have a need to access data, to aggregate data, to look at trends over time. And using generative AI in a way that allows a regular person to just go in and say, ‘Hey, can you tell me how many claims over the last year have been paid using this type of a procedure code?’ And then have that data automatically aggregated for you, or have the query written for you so that you can just go drop it in somewhere, or even produce charts and visualizations for you, that show you that data in a meaningful way that really gives you the insights right off the bat. Those are huge time savers, for individuals who typically would have to refer that to someone else, wait days or weeks to get the data back, it can really speed up that process.”
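The natural-language-to-data workflow Warfield describes, where a model turns a plain-English question into a query an analyst can run, can be sketched roughly as follows. Everything here is illustrative: the claims table, the stubbed model response and the read-only guardrail are assumptions for the sketch, not any real agency tool. A deployed version would replace `stub_llm` with an actual model call.

```python
import sqlite3


def build_sql_prompt(question: str, schema: str) -> str:
    """Wrap a plain-English question and a table schema into an LLM prompt."""
    return (
        "You are a read-only analytics assistant.\n"
        f"Schema: {schema}\n"
        f"Question: {question}\n"
        "Respond with a single SQL SELECT statement."
    )


def stub_llm(prompt: str) -> str:
    """Stand-in for a model call; returns a canned answer for the demo question."""
    return ("SELECT COUNT(*) FROM claims "
            "WHERE procedure_code = '99213' "
            "AND paid_date >= DATE('now', '-1 year');")


def answer_with_sql(question: str, schema: str, llm=stub_llm):
    """Generate SQL from the question, apply a guardrail, run it on sample data."""
    sql = llm(build_sql_prompt(question, schema))
    # Guardrail: accept only a single read-only SELECT statement.
    if not sql.lstrip().upper().startswith("SELECT"):
        raise ValueError("refusing non-SELECT statement")
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE claims (procedure_code TEXT, paid_date TEXT)")
    # Three sample claims paid today: two with code 99213, one with 99499.
    conn.executemany("INSERT INTO claims VALUES (?, DATE('now'))",
                     [("99213",), ("99213",), ("99499",)])
    return conn.execute(sql).fetchone()[0]


print(answer_with_sql(
    "How many claims over the last year were paid using procedure code 99213?",
    "claims(procedure_code TEXT, paid_date TEXT)"))  # prints 2
```

The key design point, echoed in Warfield's later comments, is that the generated query is checked and executed under human-set constraints rather than trusted blindly.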

Warfield said IG shops can also use GenAI to ingest agency-specific user guides and standard operating procedures, so that newer employees can pull up reference materials faster than ever.

“Instead of you having to sit in a six-hour-long training and try to remember where the section was that was relevant to you, you can then use your Generative AI assistant to say ‘Remind me what our SOP is for whatever the process is,’ and be able to pull up that section really quickly — or just have it summarized for you in a nice, easy-to-read response,” she said.

Getting started with GenAI

Agencies see limitless potential — but also plenty of new risks — when it comes to incorporating GenAI into their day-to-day work.

Among the challenges, agencies need to understand the scope of what the algorithms they’re using have been trained to do, and ensure they don’t produce biased results.

“You can’t just go out and take ChatGPT and apply it to magically work for the HHS mission or in Medicare processes. You have to really take an approach that factors in agency-specific data, agency-specific expertise and context,” Warfield said.

Another challenge agencies face is understanding what datasets to train a GenAI algorithm on, and how to set clear boundaries on which data the algorithm can use.

“There has to be a way to ensure that data is always accurate, it’s always current. It’s the latest version that you’re accessing, so that when you actually apply it into your business processes, you’re getting the right answers and the right accuracy,” Warfield said.

Agencies are also thinking about the role GenAI plays in cybersecurity. Warfield said agencies need to adopt a zero-trust mindset when it comes to fielding AI tools.

“You’re thinking about how the data is going to come in to your federal enclave. How are you going to ensure that the data never leaves your security boundary? What checks and balances do you have, that you can apply upfront, and make sure those are part of your selection criteria, that decisions are being made to factor those in? Those types of things are really important from a security perspective.”

GenAI best practices

While agencies have much to consider for adopting GenAI tools, Warfield outlined a few best practices to keep in mind.

Agencies, she said, should consult with experts before deploying any generative AI tools.

“Having a way to select the right large language model for the right use case is really important. It’s not a one-size-fits-all approach. It’s really important to make sure agencies are consulting with the right experts upfront to have that selection criteria defined to make sure those decisions are made in a way that’s really effective,” she said.

Agencies also need to ensure that human employees still maintain decision-making authority, while using GenAI as a means of making data-driven decisions faster than ever.

“You still need to make sure there’s a human in the loop, and you’re not just taking whatever the response is by itself,” Warfield said. “That human in the loop oversight is really important to monitoring the results of your generative AI’s answers: making sure they’re continuing to stay accurate, the training or retraining of the models that needs to happen to stay current and refreshed. All those processes have to be built into your overall framework.”

Listen to the full show:

The post How generative AI is cutting down ‘busy work’ and speeding up processing to combat FWA first appeared on Federal News Network.

]]>
https://federalnewsnetwork.com/federal-insights/2024/06/how-generative-ai-is-cutting-down-busy-work-and-speeding-up-processing-to-combat-fwa/feed/ 0
Is the United States primed to spearhead global consensus on AI policy? https://federalnewsnetwork.com/commentary/2024/06/is-the-united-states-primed-to-spearhead-global-consensus-on-ai-policy/ https://federalnewsnetwork.com/commentary/2024/06/is-the-united-states-primed-to-spearhead-global-consensus-on-ai-policy/#respond Mon, 03 Jun 2024 16:56:03 +0000 https://federalnewsnetwork.com/?p=5025484 The U.S. strategy is offering a flexible framework that can swiftly adapt to the rapidly evolving AI landscape.

The post Is the United States primed to spearhead global consensus on AI policy? first appeared on Federal News Network.

]]>
Artificial intelligence is quickly becoming an indispensable asset in addressing a range of challenges in today’s society – from domestic and international cyber threats to healthcare advancements and environmental management. While there are some mixed opinions on many aspects of this technology and its capabilities, there’s no question that in order for AI to meet its full potential, we will need an agile and dynamic policy framework that spurs responsible innovation – a framework that the United States could soon model.

Every day AI becomes more entrenched in our daily lives and will soon be ubiquitous around the world. Countries need a framework to look to for guidance, a leader. Without a flexible policy framework in place that is broadly accepted, we risk missing out on many of AI’s benefits to society. Trust in AI is pivotal for realizing its full potential, yet this trust will be hard-earned. It demands efforts from both private organizations and governments to develop AI in a responsible, ethical manner. Without trust, the promise of AI could remain unfulfilled, its capabilities only partially tapped.

Efforts and innovations must be coordinated across the globe, guided by a responsible pioneer. Lacking some level of synchronization, society could experience a confusing system of disparate AI regulations, rendering the safe advancement of AI initiatives challenging across the board.

With its flexible governance structure informed by valuable international, public-private input, the U.S. could be a clear choice to lead the world to success in this new age of AI.

Current AI governance initiatives

Currently, steps are being undertaken globally to regulate the use of AI, enhance its safety, and foster innovation. It’s natural that various jurisdictions have placed different emphases on their priorities, resulting in a diverse range of regulations – some more proscriptive than others. This variation reflects the unique cultural perspectives of different regions, leading to a potential patchwork of AI regulations. As of October 2023, 31 countries have passed AI legislation and 13 more are debating AI laws.

Europe took an early lead in December 2023 by passing the AI Act, the world’s first comprehensive AI law focused on categorizing AI in terms of risks to users. The original text of the AI Act was written in 2021 – long before the mainstreaming of GenAI in 2023. In contrast to the EU’s approach to AI regulation, the United Kingdom took a more pro-innovation stance and underscored its leadership aspirations by hosting an international AI Safety Summit at Bletchley Park in November 2023.

The United States played a prominent role at the summit, which focused on the importance of global cooperation in addressing the risks posed by AI, alongside fostering innovation. Meanwhile, China mandates state review of algorithms, requiring them to align with core socialist values. In contrast, the U.S. and UK are taking a more collaborative and decentralized approach.

The U.S. has taken a more proactive approach to asserting its leadership in AI governance, in contrast to its approach to data privacy, where the EU has largely dominated with the General Data Protection Regulation (GDPR).  A series of recent federal initiatives, including President Biden’s exhaustive AI executive order, signals a commitment to eventually leading global AI governance. The order lays out a blistering pace of regulatory action, mandating detailed reporting and risk assessments by developers and agencies. Notably, many of these requirements and assessments will come into force long before the EU’s AI Act is settled and enforced.

In the absence of strong federal action, states are stepping in. In the 2023 legislative session, at least 25 U.S. states introduced AI bills, while 15 states and Puerto Rico adopted resolutions or enacted legislation around AI. This worldwide progress is encouraging, but we must also recognize the next steps needed to move forward on the AI front.

Without harmonizing efforts globally and having a leader to look to for guidance on AI endeavors, we could end up with a complex patchwork of AI regulations, making it difficult for organizations to operate and innovate with AI safely — throughout the U.S. and globally.

The blueprint for AI regulation: The U.S.

Without trust, AI will not be fully adopted. The U.S. and like-minded governments can ensure that AI is safe and that it will benefit humanity as a whole. The White House has begun to pave the way with a recent flurry of AI activity, remaining proactive and agile despite evolving demands. To get ahead, Congress is pursuing niche areas within AI that will inform current and future AI regulations. The U.S. can further promote transparency, confidence and safety by collaborating with industry to ensure that the benefits of this evolving technology can be realized, risk concerns do not stifle innovation, and society can trust in AI.

Domestically, the Biden administration has been exceedingly open to input from all sectors, shaping a holistic viewpoint on what is needed for advancement. Abroad, the U.S. prioritizes collaboration with its allies, ensuring best practices are followed and ethical considerations are made. This is a key component needed from a global leader, as regulations must be developed outside of a vacuum for best results. By linking arms with countries around the world to develop standards, conflicting viewpoints can be mitigated to best shape international AI regulations in a way that is most beneficial to society.

Furthermore, by encouraging strong public-private partnerships, the U.S. sets the precedent needed to take responsible AI innovation to the next level. Just like the public sector, private companies must innovate responsibly, accepting the duty to develop AI in a trustworthy manner. By moving forward with cautious enthusiasm, the private sector can considerably bolster efforts to ensure AI reaches its full potential safely, at home and abroad.

Of course, the geopolitical aspect must be considered, as well. By leading in AI standards and regulations, the U.S. can initiate globally accepted norms and protocols to deter an unregulated arms race or other modern warfare catastrophe. Through its technical prowess and dynamic experience, the U.S. is uniquely positioned to lead in the development of a global consensus on responsible AI use.

The future of AI governance is here

The U.S. is just beginning to establish itself as a global leader in AI governance, spearheaded by initiatives such as President Biden’s executive order, Office of Management and Budget guidelines, the National Institute of Standards and Technology’s AI Risk Management Framework, and widely publicized commitments from AI companies. The U.S. strategy offers a flexible framework that can swiftly adapt to the rapidly evolving AI landscape.

As the U.S. continues to quietly refine its approach to AI regulation, its policies will not only have far-reaching impacts on American society and government, but also offer a balanced blueprint for international partners. The onus to innovate with AI responsibly does not fall solely on the public sector. Private companies, too, must bear the burden alongside their public counterparts to optimize results. This balanced approach, drawing on a variety of international and public-private insights, is bound to shape the future of AI governance and innovation worldwide.

Bill Wright is global head of government affairs at Elastic.

The post Is the United States primed to spearhead global consensus on AI policy? first appeared on Federal News Network.

]]>
VA looking at AI tools to reduce workforce burdens, anticipate veterans’ needs https://federalnewsnetwork.com/artificial-intelligence/2024/05/va-looking-at-ai-tools-to-reduce-workforce-burdens-anticipate-veterans-needs/ https://federalnewsnetwork.com/artificial-intelligence/2024/05/va-looking-at-ai-tools-to-reduce-workforce-burdens-anticipate-veterans-needs/#respond Fri, 31 May 2024 22:07:10 +0000 https://federalnewsnetwork.com/?p=5023216 The Department of Veterans Affairs is looking at artificial intelligence tools to prevent burnout among its employees.

The post VA looking at AI tools to reduce workforce burdens, anticipate veterans’ needs first appeared on Federal News Network.

]]>
The Department of Veterans Affairs is looking at artificial intelligence tools to prevent burnout among its employees.

The VA last week concluded a 90-day AI Tech Sprint. The department recognized six finalist teams — out of 150 teams that participated in the pilot — for projects focused on reducing administrative burdens for employees.

The projects include “ambient dictation,” or AI-powered note-taking that would take place during and after a veteran’s appointment with a VA clinician.

The department also highlighted AI tools that can automatically summarize hundreds of pages of outside medical records anytime a veteran comes into a VA clinic for the first time.

Kaeli Yuen, AI product lead for VA’s Office of the Chief Technology Officer, said the ambient dictation tool records a veteran’s appointment with a medical provider and summarizes the appointment as a note for the provider to input into the electronic health record (EHR).

“Every time they have an interaction with the patient, they have to document it in the electronic health record. This is causing high levels of burnout, and is also kind of using a lot of time,” Yuen said at a panel discussion Wednesday hosted by ACT-IAC.

Yuen said the ambient dictation tool doesn’t provide a direct transcription of the patient’s appointment, and isn’t used for clinical decision-making.

The physician is required to review the AI-generated summary, sign off on it, and make any needed edits.
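
As a rough illustration of that human-in-the-loop requirement — all class, function and field names below are invented for the sketch, not VA’s actual system — the sign-off gate might look like:

```python
from dataclasses import dataclass, field

@dataclass
class DraftNote:
    """An AI-drafted visit summary that must be reviewed before filing."""
    summary: str
    reviewed: bool = False
    edits: list = field(default_factory=list)

def summarize_visit(transcript: str) -> DraftNote:
    # Placeholder for the model call: a real system would send the
    # appointment transcript to a summarization model. Here we truncate.
    return DraftNote(summary=transcript[:200])

def file_to_ehr(note: DraftNote) -> str:
    # The draft cannot enter the record until a clinician signs off.
    if not note.reviewed:
        raise PermissionError("Physician review and sign-off required")
    return "FILED: " + note.summary

note = summarize_visit("Patient reports improved mobility after PT ...")
note.edits.append("Corrected medication dosage")
note.reviewed = True
print(file_to_ehr(note))
```

The key design point is that the filing step fails closed: an unreviewed draft can never reach the record, mirroring the required physician sign-off.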

Susan Kirsh, deputy undersecretary for health for discovery education and affiliated networks, said the ambient dictation tool is meant to give clinicians more time to treat patients and less time taking notes behind a computer screen.

“We want to spend all of our time taking care of the patient, and the documentation, over the years, has gotten to be pretty high,” Kirsh said.

Denise Kitts, executive director of VA’s Veterans Experience Office, said she sees AI tools as the key to providing a higher level of customer experience to veterans, as well as “putting actionable data in front of people that are making decisions.”

“Surveys are great, but it’s a lagging indicator,” Kitts said. “From a data perspective, we’re pivoting and we’re really looking at all the data the VA has. In our minds, all data is CX data, so how do we pull that data together, so we can build the predictive models and move from being reactive, from a customer experience perspective, and being much more predictive, and even prescriptive, and tailoring that experience.”

Kitts said the VA built an AI model on operational and clinical data — such as the last time a veteran came into the VA for an appointment, and if there were any “no-show” appointments.

“We modeled it until we could get much more predictive, in terms of, who are the people that medical [centers] really need to reach out to and make sure they stay tethered?” Kitts said. “That’s an example of using all our data to predict who are the people that we really need to do outreach to.”
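
A toy version of that kind of outreach-prioritization model, using only the two signals mentioned — time since the last visit and no-show count. The weights and field names are invented for illustration, not VA’s actual model:

```python
from datetime import date

def outreach_score(last_visit: date, no_shows: int, today: date) -> float:
    """Toy risk score: more days since the last visit and more no-shows
    both raise the priority for proactive outreach."""
    days_since = (today - last_visit).days
    return min(1.0, days_since / 365) * 0.6 + min(1.0, no_shows / 3) * 0.4

# patient -> (last visit, no-show count)
patients = {
    "A": (date(2024, 5, 1), 0),
    "B": (date(2023, 6, 1), 2),
}
today = date(2024, 6, 1)
ranked = sorted(patients, key=lambda p: outreach_score(*patients[p], today),
                reverse=True)
print(ranked)  # → ['B', 'A']
```

Patient B, a year overdue with two no-shows, ranks first for outreach, which is the "who do we really need to reach out to" question the model answers.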

Yuen said the VA is also experimenting with generative AI to tackle administrative burdens.

“There’s a lot of excitement around these tools. Folks from all over VA are coming to our office, asking if they can pilot a generative AI to do things like write emails, summarize policy documents, draft contracting packages, and summarize veteran user experience survey data. People from all over VA want this,” Yuen said.

As VA’s Office of Information and Technology launches generative AI pilots, Yuen said the team is rethinking success metrics.

“How are we going to know if this pilot was successful? Should we expand it? Should we invest in it more? And I think this is one area we’re struggling a little bit,” she said. “We tend to land on time saved — it used to take us 10 hours to analyze a survey, now it takes us two hours to analyze the survey. But I feel like there’s something more we’re leaving on the table here. What we’re doing is applying the tools to a process that is built without those tools in mind, and maybe there’s a different way we should be doing things.”


]]>
How Leidos manages many thousand endpoints through standardization, PCaaS https://federalnewsnetwork.com/federal-insights/2024/05/how-leidos-manages-many-thousand-endpoints-through-standardization-pcaas/ https://federalnewsnetwork.com/federal-insights/2024/05/how-leidos-manages-many-thousand-endpoints-through-standardization-pcaas/#respond Tue, 28 May 2024 16:42:28 +0000 https://federalnewsnetwork.com/?p=5017787 Leidos enterprise infrastructure leader John Morton shares strategies that aim to balance usability, flexibility and best-of-breed security.

The post How Leidos manages many thousand endpoints through standardization, PCaaS first appeared on Federal News Network.

]]>

This is the first article in our IT lifecycle management series, Delivering the tech that delivers for government.

Supporting employees and teams on the frontlines — the people on government contractor teams helping meet federal missions anytime, anywhere — is a balancing act.

“When you think about the endpoints, these are an extension of our network. A person — they travel, take it on the plane, go to a different office location. It’s a varying number of environments that the endpoint is in,” said John Morton, vice president of enterprise infrastructure at Leidos. “Making sure that we’re keeping security at the forefront — being able to secure our endpoints yet not disrupt the user — it’s a balancing act.”

For Morton and his team, that’s a continual element of managing an enterprise infrastructure that 46,000-plus Leidos employees globally rely on to meet business demands and deliver services to customers worldwide, many of them federal agencies.

It’s a job for which Morton is ideally suited given his experience leading and being part of teams that worked directly with federal agencies to meet their missions.

Morton shared how Leidos tackles these enterprise demands — achieving that balance while keeping an eye on the bottom line — for the Federal News Network series, Delivering the tech that delivers for government.

Deriving benefits from standardization, PCaaS

A chief aid in finding balance, Morton shared, has been to standardize endpoint offerings and implement PC as a service.

A chief benefit of PCaaS really is that it’s a full-fledged offering, he said. “You come in day one, you get an asset, all the software, all the labor, the maintenance, the management — that is all included in our PCaaS offering.”

Plus, while the company has standardized on laptops and desktops and software for endpoints, it provides a variety of devices based on user personas. There is continuity across the endpoint assets but also flexibility to meet specific user needs and adapt to client mission requirements too, Morton said.

“It’s a shared service model from a financial perspective. So as individuals come into the organization and opt in to PCaaS, obviously, it’s a certain cost across the organization,” he said. “That ultimately lowers your per user costs. So there are financial gains and efficiencies as well.”
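
The cost dynamic Morton describes can be sketched with simple arithmetic: fixed platform costs (tooling, labor, management) are divided across every enrolled user, so per-user cost falls as opt-in grows. The dollar figures below are invented for illustration:

```python
def per_user_cost(fixed_platform_cost: float, per_device_cost: float,
                  users: int) -> float:
    """Shared-service model: fixed costs (management tooling, labor)
    are spread across every enrolled user."""
    return fixed_platform_cost / users + per_device_cost

# As enrollment grows, the fixed share per user shrinks.
for n in (100, 1_000, 10_000):
    print(n, per_user_cost(500_000, 800, n))
# → 100 5800.0
#   1000 1300.0
#   10000 850.0
```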

Looking to AI to thwart adversarial attacks, proactively manage devices

While reducing friction for users remains critical, protecting corporate assets and data from cyberthreats is no less important.

It’s an area where Morton sees the potential for artificial intelligence to help, particularly with hardware. Leidos has an incubator where teams innovate how to apply AI to everyday needs.

“There’s now a focus from the adversary perspective, where they’re starting to attack below the operating system. When the user hits the [power] button, you’re automatically susceptible to adversarial attacks,” he explained. “We are now starting to work on how we protect the bootup time, the BIOS time, the things that go on below the operating system.”

Teams at Leidos are taking an offensive perspective. Criminals are using AI “for adversarial harm and ransomware attacks, thus we’re leveraging our AI capabilities not only at the OS level but at the hardware level through our partnerships,” Morton said.

Another innovative AI tactic? Gathering and analyzing telemetry data on users’ devices to proactively manage and maintain them rather than wait for problems to arise, he said.

“We are gaining insights and observability into those endpoints. … We’re trying to be a little more proactive and flip the script to more of a predictive analysis versus a reactive model — where we’re actually performing some self-healing based on certain tendencies and actions and events that are really occurring across the enterprise.”
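
A minimal sketch of that predictive, self-healing pattern: compare each telemetry metric against a floor and trigger remediation before the user ever reports a failure. The metric names, thresholds and actions here are all hypothetical, not Leidos tooling:

```python
# Floors below which a device is flagged for proactive remediation.
THRESHOLDS = {"disk_free_pct": 10, "battery_health_pct": 60}

REMEDIATIONS = {
    "disk_free_pct": "run disk cleanup",
    "battery_health_pct": "schedule battery replacement",
}

def self_heal(telemetry: dict) -> list:
    """Return remediation actions for any metric below its threshold,
    instead of waiting for the user to report a failure."""
    return [REMEDIATIONS[metric] for metric, floor in THRESHOLDS.items()
            if telemetry.get(metric, 100) < floor]

print(self_heal({"disk_free_pct": 4, "battery_health_pct": 85}))
# → ['run disk cleanup']
```

The "flip the script" idea is in the dispatch: the loop runs on telemetry continuously, so remediation is triggered by trends rather than tickets.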

It’s about the user — always keep that as your baseline

No matter what new technology he and his team implement, Morton said it always comes back to the employees, the users of the technology and tools.

“It’s centered around the user, the user persona and user experience,” he said. “Are the users happy? Have we done our job and really made it somewhat transparent? Are we providing that workplace of the future, providing digital ambidexterity?”

Ultimately, can a user work no matter what scenario arises?

Again, Morton expects AI and increased telemetry data about users’ devices to pay dividends in ensuring continuity of operations. “These things will give us additional data, will provide better observability and analytics, to make informed business decisions and really increase overall user experience.”

Discover more stories about how federal systems integrators and government contractors manage their enterprise infrastructure environments in our series Delivering the tech that delivers for government, sponsored by Future Tech Enterprise.

To listen to the full discussion between Leidos’ John Morton and Federal News Network’s Vanessa Roberts, click the podcast play button below:


]]>
State’s OSINT strategy aims to serve diplomats’ demand for unclassified assessments https://federalnewsnetwork.com/inside-ic/2024/05/states-osint-strategy-aims-to-serve-diplomats-demand-for-unclassified-assessments/ https://federalnewsnetwork.com/inside-ic/2024/05/states-osint-strategy-aims-to-serve-diplomats-demand-for-unclassified-assessments/#respond Tue, 28 May 2024 14:38:14 +0000 https://federalnewsnetwork.com/?p=5017523 The Bureau of Intelligence and Research also sees the potential for generative artificial intelligence to better leverage open-source intelligence (OSINT).

The post State’s OSINT strategy aims to serve diplomats’ demand for unclassified assessments first appeared on Federal News Network.

]]>

The State Department’s intelligence arm is vowing to take better advantage of publicly accessible information and commercial data under a new strategy that calls for meeting diplomats’ demand for more unclassified assessments.

State’s Bureau of Intelligence and Research (INR) today released an “Open Source Intelligence Strategy” to guide its OSINT efforts over the next two years.

Brett Holmgren, assistant secretary of State for intelligence and research, said the strategy is driven by the need to “harness” a growing body of commercially and publicly available information about world events. And it’s also intended to meet the demand inside the State Department for more unclassified assessments that can be accessed securely by diplomats anywhere in the world, Holmgren added.

In addition to delivering more timely information to U.S. diplomats, Holmgren said the use of OSINT could also help INR and the State Department expand its partnerships with foreign countries, especially those outside of the “Five Eyes” intelligence-sharing alliance.

“So for us, when it comes to the future of OSINT, the stakes could not be higher,” Holmgren said in an interview.

The new strategy also comes on the heels of an intelligence community-wide OSINT strategy released earlier this spring.

Holmgren said the bureau’s strategy complements the IC-wide effort, but also reflects INR’s “unique role in the intelligence community as the only element that’s focused exclusively on providing intelligence to support American diplomacy.”

He described how State’s analysts have relied on OSINT dating back to World War II, when INR’s forerunner, the research and analysis section in the Office of Strategic Services, used news reports, government statistics and economic outlooks to create long-range assessments of the Axis powers.

“INR’s long standing embrace of OSINT continues to this day where many of our analysts turn to OSINT as their first source of information in the morning, and then they turn to classified cables and intelligence reports to determine what’s significant, what warrants an assessment, what needs to be flagged for policymakers,” Holmgren said.

The INR unit, a smaller component within the intelligence community, has been feted for its efforts to use OSINT rather than relying on highly classified sources. But Holmgren acknowledged that even INR’s analysts can struggle to produce unclassified assessments based entirely on open-source data.

“The challenge is balancing our desire to produce more products at the unclassified level with the need to ensure that these classified insights that our analysts have acquired, due to their access to classified information, is appropriately protected,” Holmgren said.

The Bureau of INR is working with the Office of the Director of National Intelligence on policy guidance on the use of OSINT in intelligence reports.

“I’m confident we’ll find a reasonable solution that allows us to better serve our diplomats while still safeguarding that classified information,” Holmgren said.

The bureau last year also established an Open Source Coordination Unit to better organize its OSINT efforts. After delays due to an initial lack of funding, Holmgren said the new office is now “staffed and resourced for the long term.”

The bureau’s new strategy highlights the importance of training and education on OSINT. “We are in the process of developing our own internal training that our folks will be able to access later this year,” Holmgren said.

He also said ODNI’s forthcoming guidance will be crucial as INR and other intelligence agencies navigate the challenges of ensuring open-source data isn’t tainted by disinformation.

“There will be unique differences between some of the OSINT tradecraft, in terms of how people are reviewing information to assess its reliability and credibility, to make sure that we are able to identify and detect and remove disinformation and other things that foreign adversaries may try to inject in the open source space,” Holmgren said. “But especially when it comes to conducting analysis, there will be many similarities with the existing analytic tradecraft processes and standards.”

Role for generative AI

The bureau’s new strategy also calls for investing in OSINT data and tools.

Like many intelligence agencies, Holmgren said INR has taken advantage of a recent increase in commercially available satellite imagery. The intelligence community famously used such imagery to issue public warnings about Russia’s impending invasion of Ukraine in 2022.

But INR’s analysts also rely on foreign leader speeches, panel discussions at conferences, government reports and other data that’s increasingly available over the internet. And in many cases, the relatively small bureau does not have enough analysts to sift through and analyze all that information.

Holmgren said that’s an area where tools like generative artificial intelligence could help.

“We think there’s real potential for things like generative AI to really help summarize and synthesize the key takeaways from this growing body of open source information, government information that’s out there,” Holmgren said.
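
One common pattern for the kind of summarization Holmgren describes is a two-stage, map-reduce style pipeline: summarize each document, then summarize the summaries. The sketch below stubs out the model call with simple truncation; a real pipeline would call a generative model at that point, and all names here are illustrative:

```python
def summarize(text: str, max_words: int = 12) -> str:
    # Stub for a generative-model call; a real pipeline would send each
    # chunk to an LLM. Here we just keep the first max_words words.
    return " ".join(text.split()[:max_words])

def digest(documents: list) -> str:
    """Summarize each open-source document, then summarize the summaries,
    so the final digest stays bounded no matter how many sources exist."""
    partials = [summarize(doc) for doc in documents]
    return summarize(" ".join(partials), max_words=30)

docs = [
    "Foreign leader speech on trade policy ...",
    "Government report on energy output ...",
]
print(digest(docs))
```

The two-stage shape matters for the analyst-shortage problem the article raises: per-document work parallelizes, and the final synthesis step stays a fixed size regardless of how much open-source material comes in.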

Meanwhile, the intelligence community’s OSINT strategy calls for coordinating open-source data acquisition and expanding the sharing of such data across the IC. Holmgren called that a “game changer” for smaller components like INR.

“Frankly, our ability to acquire tools or licenses, in many cases is cost prohibitive for us because we just don’t have the resources,” Holmgren said. “And so what the DNI is doing on both cataloging the different tools and capabilities that are out there in the first instance, and then figuring out cost efficient ways for the taxpayer to make those available to the rest of the intelligence community is going to allow smaller agencies like us to take advantage of things that right now, for the most part, only bigger agencies can afford to acquire and deploy at scale.”

Mobile capabilities in development

One of INR’s major priorities during Holmgren’s tenure has been IT modernization. In addition to moving into top-secret cloud environments, INR has also sought to expand access to its unclassified work through new digital platforms.

Last year, the bureau released “Tempo,” an internal website on the State Department’s unclassified network. Holmgren said ambassadors and diplomats around the world can use Tempo to access a variety of unclassified INR products, including foreign public opinion polling data, humanitarian graphics and maps and analytical summaries.

Holmgren said INR is now developing a mobile application so State Department employees can access Tempo from their phones, wherever they might be in the world.

“In the future, what I believe will be essential to INR’s relevance and our ability to engage more with our customers, but also do enable intelligence diplomacy in a more consistent way, will be sharing unclassified level assessments based entirely on open source data, but still enriched with the expert analysis and expert insights that our analysts possess.”


]]>