Robust data management is key to harnessing the power of emerging technologies

The recent AI Executive Order aptly states that AI reflects the data upon which it is built. Federal agencies are looking to responsibly implement cutting-edge IT innovations such as artificial intelligence, machine learning and robotic process automation to improve customer experiences, bolster cybersecurity and advance mission outcomes. Accessing real-time, actionable data is vital to achieving these essential objectives.

Comprehensive data management is key to unlocking seamless, personalized and secure CX for government agencies. Real-time data empowers informed, rapid decision-making, which can improve critical, high-impact federal services where time is of the essence, such as in response to a natural disaster. Alarmingly, only 13% of federal agency leaders report having access to real-time data, and 73% feel they must do more to leverage the full value of data across their agency.

While some agencies are making progress in their IT modernization journeys, they continue to struggle when it comes to quickly accessing the right data due to numerous factors, from ineffective IT infrastructure to internal cultural barriers.

Actionable intelligence is paramount. The ultimate goal is to access the right data at the right moment to generate insights at “the speed of relevance,” as leaders at the Defense Department would say. To achieve the speed of relevance required to make real-time, data-driven decisions, agencies can take steps to enable quicker access to data, improve their data hygiene, and secure their data.

How to effectively intake and store troves of data

From a data infrastructure perspective, the best path to modernized, real-time deployment is using hyperautomation and DevSecOps on cloud infrastructure. Many federal agencies have begun this transition from on-premises to cloud environments, but there’s still a long way to go until this transition is complete governmentwide.

Implementing a hybrid, multi-cloud environment offers agencies a secure and cost-effective operating model to propel their data initiatives forward. By embracing standardization and employing cloud-agnostic tools for automation, visibility can be enhanced across systems and environments, while simultaneously adhering to service-level agreements and ensuring the reliability of data platforms. Once a robust infrastructure is in place to store and analyze data, agencies can turn their attention to data ingestion tools.

Despite many agency IT leaders utilizing data ingestion tools such as data lakes and warehouses, silos persist. Agencies can address this interoperability challenge by prioritizing flexible, scalable and holistic data ingestion tools such as data mesh. Data mesh tools, which foster a decentralized data management architecture to improve accessibility, can enable agency decision-makers to capitalize on the full spectrum of available data, while still accommodating unique agency requirements.

To ensure data is accessible to decision-makers, it’s important that the data ingestion mechanism has as many connectors as possible to all sources of data that an agency identifies. Data streaming and data pipelines can also enable real-time insights and facilitate faster decision-making by mitigating manual processes. Data streaming allows data to be ingested from multiple systems, building a single source of truth for analytical systems. Additionally, these practices limit data branching and silos, which can cause issues with data availability, quality and hygiene.
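
To make the connector-and-streaming idea concrete, here is a minimal Python sketch of a single ingestion loop fed by multiple source connectors; the connector names and record shapes are invented for illustration rather than drawn from any particular agency system.

```python
# Illustrative only: a minimal ingestion loop that merges records from
# several hypothetical source connectors into one normalized stream.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Any, Callable, Iterable

@dataclass
class Record:
    source: str          # which system produced the record
    received_at: str     # ingestion timestamp (UTC, ISO 8601)
    payload: dict[str, Any]

def ingest(connectors: dict[str, Callable[[], Iterable[dict]]]) -> Iterable[Record]:
    """Pull from every registered connector and emit a single unified stream."""
    for name, fetch in connectors.items():
        for raw in fetch():
            yield Record(
                source=name,
                received_at=datetime.now(timezone.utc).isoformat(),
                payload=raw,
            )

# Example wiring -- the connector names and sample data are invented.
connectors = {
    "case_management": lambda: [{"case_id": 1, "status": "open"}],
    "call_center":     lambda: [{"ticket": "A-17", "wait_secs": 240}],
}
for record in ingest(connectors):
    print(record.source, record.payload)
```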

Data hygiene and security enable transformative benefits

Data hygiene is imperative, particularly when striving to ethically and accurately utilize data for an autonomous system like AI or ML. A robust data validation framework is necessary to improve data quality. To create this framework, agencies can map their data’s source systems and determine the types of data they expect those systems to yield, though mapping becomes increasingly arduous as databases continue to scale.

One critical success factor is to understand the nature of the data and the necessary validations prior to ingesting the data into source systems. Hygiene can be improved by consuming the raw data into a data lake and then, during conversion, validating the data’s quality before applying any analytics or crafting insights.
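
As a rough illustration of validating during conversion, the sketch below checks raw records pulled from a data lake before promoting them to the analytics layer; the field names and rules are hypothetical stand-ins for whatever schema an agency actually defines.

```python
# Illustrative only: quarantine raw records that fail basic quality rules
# before they reach analytics. Field names and rules are invented.
REQUIRED_FIELDS = {"case_id", "opened_on", "status"}
VALID_STATUSES = {"open", "pending", "closed"}

def validate(record: dict) -> list[str]:
    """Return a list of data-quality problems; an empty list means it passes."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if record.get("status") not in VALID_STATUSES:
        problems.append(f"unknown status: {record.get('status')!r}")
    return problems

raw = [{"case_id": 1, "opened_on": "2024-05-01", "status": "open"},
       {"case_id": 2, "status": "unknwn"}]
clean = [r for r in raw if not validate(r)]
rejected = [(r, validate(r)) for r in raw if validate(r)]
print(f"{len(clean)} promoted, {len(rejected)} quarantined")
```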

In addition to data hygiene, data security must remain a top priority across the federal government as agencies move toward real-time data insights. Adopting a hybrid, multi-cloud environment can lead to a stronger security posture because there are data encryption capabilities inherent in enterprise cloud environments.

Agencies may consider using a maturity model to help their teams assess data readiness and how they are progressing in their cybersecurity frameworks. A maturity model lets agencies identify and understand specific security gaps at each level of the model and provides a roadmap to address these gaps. Ultimately, the cybersecurity framework is as essential as data hygiene to ensure agencies can harness data reliably and efficiently.
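
Represented as data, such a model makes gap analysis straightforward. The sketch below uses a hypothetical four-level model (the control names and level assignments are invented, not drawn from any published framework) to list what stands between an assessed level and a target level.

```python
# Illustrative only: a simple maturity model as data, plus the roadmap of
# controls needed to move between levels. Controls and levels are invented.
MODEL = {
    1: ["asset inventory"],
    2: ["centralized logging", "MFA everywhere"],
    3: ["continuous monitoring", "automated patching"],
    4: ["threat hunting", "zero-trust segmentation"],
}

def roadmap(assessed: int, target: int) -> list[str]:
    """Controls still to implement to move from the assessed to the target level."""
    steps = []
    for level in range(assessed + 1, target + 1):
        steps.extend(MODEL.get(level, []))
    return steps

print(roadmap(assessed=1, target=3))
# ['centralized logging', 'MFA everywhere', 'continuous monitoring', 'automated patching']
```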

When agencies have data management solutions that reduce the friction of navigating siloed government systems and enable faster, more secure collaboration, they can drive innovation. This is especially true for agencies that handle extensive amounts of data. For example, many High Impact Service Providers (HISPs) must manage vast amounts of citizen data to provide critical, public-facing services with speed and scale.

Data is the foundation for modern digital government services. Once data is ingested, stored and secured effectively, the transformational potential of emerging technologies such as AI or RPA can be unlocked. Moreover, with real-time data insights, government decision-makers can use actionable intelligence to improve federal services. It’s essential that agency IT leaders invest in a robust data management strategy and modern data tools to ensure they can make informed decisions and benefit from the power of AI to achieve mission-critical outcomes for the American public.

Joe Jeter is senior vice president of federal technology at Maximus.

USINDOPACOM Mission Partner Environment success: A blueprint for CJADC2 path forward

The interconnected nature of global stability requires keen situational awareness, cooperation and collective decision-making across warfighting domains, interagency departments, nations and partners. To confront the complex challenges posed by emerging threats, allied forces require an interoperable information-sharing infrastructure to rapidly establish new coalitions and joint operations.    

In a step toward enabling the next-generation synchronized command and control, Deputy Secretary of Defense Kathleen Hicks recently announced the Defense Department has delivered its initial iteration of the Combined Joint All-Domain Command and Control (CJADC2) capability. While this marks a notable advancement, DoD must continue its efforts to evolve CJADC2 beyond its current basic operational ability. To achieve seamless integration of assets and personnel, defense leaders should model after the successful implementation of the Mission Partner Environment (MPE) in the Indo-Pacific region, which offers valuable lessons.   

The operational intricacy   

At their core, MPEs are designed to facilitate real-time communication of relevant information among U.S. military and mission partners while maintaining the security levels necessary to guide warfighter decision-makers. Traditionally, an MPE involved a desk with multiple screens, each connected to a different network with unique access codes and encryption protocols, and a KVM switch to control it all.

In the U.S. Indo-Pacific Command (USINDOPACOM), those days are gone.    

Taking a multi-enclave client (MEC) approach, the desk is now simplified to a single console. Authorized users can access and share relevant information from various sources using an integrated mission network, so decisions can be made in real time, and coalition environments can be formed in days instead of weeks.   

Conquering the complexity  

DoD can duplicate USINDOPACOM’s transformation to rapidly implement multi-enclave environments on a broader scale in support of CJADC2. The sheer volume of approximately 17,000 isolated and protected computing environments supported by the command’s network is a testament to the MEC capability built on a hyper-converged infrastructure and private cloud architecture. Virtual infrastructure, which includes desktop virtualization hosting desktop environments on a central server, plays a vital role in connecting all the elements of the MPE landscape, such as applications, data, clouds, APIs, processes, chat, voice and video devices.    

USINDOPACOM’s effective consolidation of siloed data, duplicate copies of information, and separate networks into a single sign-on, data-centric information domain represents a pivotal stride toward the realization of JADC2 and, ultimately, CJADC2. This demonstration of the U.S. military’s robust capability to share information across domains instantly and securely will encourage allies and partners to actively engage in the exchange of intelligence and collaboration necessary to establish a formidable and unyielding collective defense posture.    

However, the next step of enabling instantaneous but strictly controlled access to ensure the right data is released to authorized users is an intense undertaking. It requires a ground-up, zero-trust architecture design that undergoes continuous testing to detect vulnerabilities before malicious actors can exploit them.   

To facilitate safe and secure communication for the U.S. and its allies during peacetime and conflict, USINDOPACOM transitioned defenses from static, network-based perimeters to focus on the users, assets and resources. Bolstering security through zero trust identity verification to provide the right people access to the right information in the right place enabled granular control of data and assets, resulting in a more secure and controlled mission partner environment.   
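
A minimal sketch of that idea follows; the attribute names, caveats and policy rules are invented for illustration, not USINDOPACOM’s actual rules. The point is that every request is evaluated against the subject’s verified identity, clearance and nationality and the data’s releasability, with nothing trusted by default.

```python
# Illustrative only: a per-request releasability check in the spirit of the
# zero-trust model described above. All attributes and rules are hypothetical.
from dataclasses import dataclass

@dataclass
class Subject:
    identity_verified: bool
    clearance: int            # numeric clearance level
    nation: str               # requesting partner nation

@dataclass
class Resource:
    classification: int       # clearance level required
    releasable_to: set[str]   # nations the data may be released to

def may_access(subject: Subject, resource: Resource) -> bool:
    """Evaluate every request on its own; nothing is trusted by default."""
    return (subject.identity_verified
            and subject.clearance >= resource.classification
            and subject.nation in resource.releasable_to)

doc = Resource(classification=2, releasable_to={"USA", "AUS", "JPN"})
print(may_access(Subject(True, 3, "AUS"), doc))   # True
print(may_access(Subject(True, 3, "XYZ"), doc))   # False: not releasable
```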

Setting the stage for AI    

By prioritizing data and taking a rigorous approach to its access to ensure integrity, USINDOPACOM has paved the way for the adoption of artificial intelligence and machine learning to support decision-making. In such a data-centric network environment, artificial intelligence and machine learning can be deployed to continuously monitor and analyze information to identify threats or opportunities as they emerge. The ability to quickly scour through thousands of pieces of data to elevate pertinent information for review and flag trends, threats and opportunities provides a significant decision advantage, allowing accelerated tasking and advanced force management. It is an example of a proactive approach to future readiness that can guide the evolution of CJADC2.    

Success template  

The deployment of USINDOPACOM’s MPE has been a sophisticated and collaborative effort that required a combination of best practices, advanced technologies and skilled personnel. It relied on a multi-team integration framework that functioned as a requirements traceability matrix for all projects. The project lifecycle comprised repeatable processes mapped to a structured work plan that supported over 250 standard and non-standard USINDOPACOM Theater Component Command requirements.   

Several operational lessons can be drawn from USINDOPACOM’s MPE deployment to aid CJADC2 success. First, designing, implementing and maintaining information domains involves adept configuration of hardware and software, security and integrity assurance, performance monitoring, and troubleshooting for numerous application service centers, hundreds of service points and thousands of endpoints. Second, a team of proficient network engineers is essential for this rigorous undertaking. Lastly, managing MPE enclaves and their authority to operate necessitates a disciplined, structured process and the integration of information security and risk management activities throughout the system development life cycle.  

Proof positive 

USINDOPACOM has dramatically enhanced its capacity to exchange information and intelligence, collaborate, and establish interoperability with partner nations and organizations. That transformation illuminated the path forward for enabling JADC2 and, subsequently, CJADC2.  

Steve Robles is vice president of Coalition Network Engineering at SOSi.  

The top 3 reasons the federal government should embrace non-graduates to bridge the tech skills gap

The government’s traditional reliance on degrees and tactical skills for tech recruitment is falling short in the face of exponential change. While linear thinking may be our default mode, it’s time we adopt a more exponential mindset to keep pace with the rapid evolution of technology.

Some forward-thinkers are already challenging the status quo, from Trump’s 2020 executive order prioritizing skills over degrees, to Office of Personnel Management guidance, to the recent introduction of the bipartisan ACCESS Act in Congress. It’s clear that change is on the horizon.

So let’s look at the top three reasons non-graduates are uniquely positioned to help government agencies bridge the tech skills gap:

1. Driving innovation through diverse experiences

Non-graduates often enter the tech industry through unconventional paths — rigorous apprenticeships, self-directed learning or honing their skills in entirely different fields. This diversity of experience is a powerful asset for driving innovation and breaking free from the Einstellung Effect that can plague government problem-solving.

These individuals bring a unique blend of hands-on knowledge and theoretical understanding, enabling them to develop creative solutions that can revolutionize public service delivery. In an environment where overcoming entrenched thinking is crucial, non-graduates’ fresh perspectives can be the key to pushing technological boundaries and reimagining what’s possible.

2. Thriving in the face of rapid technological change

The breakneck pace of technological change demands a workforce that can adapt on the fly — a strength many non-graduates possess in spades. Their self-directed learning experiences and ability to quickly master new skills make them invaluable in the ever-shifting government tech landscape.

Non-graduates’ resilience, born from navigating challenges without the traditional support of academia, is a critical asset in an environment where policies, technologies and public needs are constantly evolving. Their agility ensures that government agencies can stay responsive and effective, no matter what technological curveballs come their way.

3. Expanding access to talent and driving cost-efficiency

By prioritizing skills over degrees, government agencies can significantly expand their talent pool and tap into a wealth of often-overlooked candidates. This approach not only promotes greater inclusivity but also offers substantial financial benefits.

Non-graduates often command lower starting salaries than their degree-holding counterparts, a significant consideration in budget-conscious public sectors. Moreover, by focusing on skills and performance, agencies can foster a more competitive and dynamic workforce where employees are motivated to excel based on real-world contributions rather than just their educational pedigree.

Embracing non-graduates in government tech roles represents a bold step towards a more agile, innovative and cost-effective public sector. By valuing diverse experiences, adaptability and practical skills, agencies can enrich their workforce and elevate their service to the public.

However, successfully integrating non-graduates requires more than just a change in hiring practices. Robust tools that align roles with AI-driven skill assessments and targeted microlearning, while giving leaders actionable, data-driven insights from skills analytics, are essential for ensuring that all team members can thrive in this new paradigm.

As technology continues to evolve exponentially, our hiring practices must keep up. We need to shake up the status quo and build a government workforce ready to tackle tomorrow’s technological challenges.

Tony Holmes is Practice Lead for Solutions Architects Public Sector at Pluralsight.

FedFakes: Scammers pose as federal agencies adding complexity to defense strategies

No matter how many filters you may have, spam calls are an increasingly common experience for everyone. While annoying and inconvenient, they often carry associated risks like impersonation attempts, where scammers pose as legitimate businesses, government agencies, or even friends and family. Such scams often involve fraudulent communication through phone calls, emails or social media messages, where the scammer poses as a trusted entity to manipulate victims into voluntarily taking actions that benefit the scammer’s agenda.

While impersonation scams are not new, how they are delivered is changing. This trend has been further accelerated and made more successful due to advancements in generative AI technology. With easily accessible AI tools like voice cloning, scammers can replicate someone’s voice with as little as a three-second clip. The gravity of this situation is exemplified by recent events, such as the Biden robocall that highlighted how scammers can even exploit trusted public figures for their deceptive tactics. As these scams become ever more convincing and difficult to distinguish from genuine communication, they present an increasingly significant challenge to security professionals and the general public.  

Rising threat: Targeting federal government agencies 

Last year was a record-breaking year for impersonation scams, particularly those involving scammers posing as federal government agencies to deceive individuals into disclosing money or sensitive information. In fact, Americans lost approximately $1.3 billion to scammers impersonating government officials. The financial losses suffered by U.S. individuals due to government impersonation scams have surged more than sevenfold since 2019, indicating a significant increase in fraudulent activity exploiting the identities of federal government agencies.

These types of impersonation scams can involve scammers calling and falsely claiming that an individual will lose their Medicare benefits unless they pay a new fee, posing as an IRS agent insisting that the recipient owes back taxes or fines, or even pretending to be law enforcement or border patrol agents seeking to use the threat of criminal prosecution as a means of intimidating victims into paying fraudulent penalties. The hallmark of these tactics is using fear of real-life scenarios and creating a sense of urgency to pressure victims into taking immediate action without considering the validity of the caller or situation.

The problem: Deteriorating trust in government

These scams are particularly concerning because consumers tend to place higher trust in federal agencies, viewing them as reliable and authoritative entities. Because victims are more likely to disclose sensitive information due to their trust in federal agencies or officials, criminals know these scams are more likely to be successful, a top criterion for any criminal. Addressing these scams is imperative for protecting individuals from financial harm and maintaining public confidence.

Additionally, when fraudulent activities erode public trust in government institutions, they undermine the foundation of democratic governance. Therefore, combating impersonation scams is crucial for safeguarding the integrity of governmental processes and ensuring that citizens continue to have faith in the institutions designed to serve and protect them.

The solution: Arm federal agencies with tools and tactics

In addition to the Federal Trade Commission’s new rule to combat government and business impersonation scammers, federal agencies must remain vigilant against the ever-evolving external cyber threat landscape. This is especially crucial as cybercriminals continuously adapt their tactics to bypass traditional defensive security measures.  

As threat actors become more adept at evading detection, the need for proactive cybersecurity measures becomes increasingly crucial. This requires a subtle shift in how federal government agencies increasingly defend against these threats proactively while respecting the civil rights of all Americans. In addition to addressing red and blue spaces, this shift involves an effective cybersecurity program that addresses the “gray space” within the attack surface, which includes internet infrastructure, applications, platforms and forums managed by third parties.  

Fortunately, there are many tools available to monitor that gray space. Threat intelligence solutions — such as fake account detection and takedown measures — are key tools that prevent cybercriminals from using fraudulent accounts to impersonate government entities. The lines between real and fake are increasingly blurred as AI tools make it ever easier to develop realistic-yet-inauthentic content, challenging individuals and organizations to know what’s real. This increases everyone’s vulnerability to scams, including phishing attacks, ransomware attacks and business email compromise (BEC). By actively monitoring and removing fake accounts on social media and other web platforms, agencies can proactively — and automatically — disrupt impersonation scammers’ operations within minutes.
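
As one narrow illustration of this kind of monitoring, the sketch below flags newly observed domains that closely resemble an agency’s legitimate ones. The domains and the similarity threshold are invented, and real takedown pipelines rely on far richer signals than string similarity alone.

```python
# Illustrative only: flag observed domains that imitate legitimate agency
# domains, one small piece of impersonation monitoring. Values are invented.
from difflib import SequenceMatcher

LEGITIMATE = ["irs.gov", "ssa.gov", "medicare.gov"]

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()

def suspicious(domain: str, threshold: float = 0.8) -> list[tuple[str, float]]:
    """Legitimate domains this new domain imitates closely but not exactly."""
    return [(legit, round(similarity(domain, legit), 2))
            for legit in LEGITIMATE
            if domain != legit and similarity(domain, legit) >= threshold]

for observed in ["1rs.gov", "rnedicare.gov", "example.org"]:
    print(observed, "->", suspicious(observed))
```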

However, being armed with the right security tools to prevent potential attacks is not enough. Federal government agencies must maintain ongoing security measures. This can be achieved through the core security operations center functions: monitoring, detecting, analyzing and responding to security threats. Essential security tools include endpoint detection and response, security information and event management, and security orchestration, automation and response.

Finally, the linchpin in developing a more unified, proactive security approach lies in the adoption of resilient incident response solutions. These solutions capitalize on existing intelligence to minimize the mean time to detect and the mean time to remediate security incidents, improving overall defense capabilities while providing artifacts back to intelligence teams for iterative improvements. Additionally, breach notifications play a crucial role in upholding compliance with laws and regulations, while also fostering transparency, which is essential for gaining and maintaining public trust.

Augmenting technology with a shift in mindsets and teams

Federal government agencies must reassess their team structures. For instance, while a security team focused on internal security employs advanced technical measures to safeguard logical assets like databases and networks from compromise, they may need more expertise to protect the agency’s reputation from being used to defraud the American public. To effectively establish an external cybersecurity program, cross-organizational collaboration is essential. This includes experts in technical and physical threat vectors and people well-versed in the dynamics of social media and business platforms, including their potential for misuse. Through increased collaboration that looks at security holistically, government agencies can enhance their resilience against cyber threats while safeguarding the trust and confidence of the public they serve.   

Furthermore, in addition to safeguarding with threat intelligence tools and reassessing team structures, it’s crucial to implement a cybersecurity training and awareness program with a strong focus on phishing and impersonation attacks. By educating employees on recognizing phishing and impersonation tactics, agencies can prevent them from falling victim to these attacks. This training should cover common phishing techniques, such as impersonation emails and fake websites, along with guidance on verifying the legitimacy of communications and URLs. Most importantly, this should not be another annual “check the box” training program. The most effective security training is integrated into daily life as part of a culture of security, with emphasis placed on rewarding people who successfully demonstrate security awareness instead of only focusing on punishing those who struggle to comply.  

Ensuring the integrity of government communications is of utmost importance, as every breach of trust erodes public confidence in the government. External cybersecurity represents a new frontier that demands a fresh mindset, approach and set of tools. Traditional cybersecurity strategies have focused primarily on defending against threats within the organization’s network perimeter. However, the increasing sophistication of threat actors and the persistent growth of attacks originating from outside the perimeter (like impersonation scams) underscore the necessity for federal government agencies to adopt a more unified, proactive security approach.

AJ Nash is vice president and distinguished fellow of intelligence at ZeroFox.

Elevating visibility: The stabilizing force in responsive cyber defense

Recently, MITRE disclosed the impact of the Ivanti Connect Secure zero-day vulnerabilities in compromising one of its virtualized networks. The cyberattack allowed session hijacking that circumvented multi-factor authentication, which eventually led to persistence and command and control (C2) via backdoors and webshells. This cyber effect is called “Security Control Gravity”: the force exerted on security controls by exploitable software vulnerabilities and misconfigurations, which circumvent and erode those controls over time. Improving the efficacy of security controls, and implementing them to be resilient against cybersecurity attacks, should be a key initiative of government and industry research to better understand how this gravity causes security controls to fail.

We cannot wait for security controls to fail

What is known to be true is that security controls will fail, and that all software has vulnerabilities: known common vulnerabilities and exposures (CVEs) that can be exploited, as well as a significant number of common weakness enumerations (CWEs) that could expose vulnerabilities in software. As a result, keeping a pulse on how these security controls perform, and on the active threats targeting the organization, through continuous monitoring is imperative for elevating visibility and being more responsive to cyberattacks. To keep pace with threat actors’ activities, organizations cannot fail in elevating their visibility into threat actors’ behaviors and activities.

Elevating visibility must be the constant and stabilizer in disrupting threat actors. This means formalizing a threat-informed defense approach that leverages global adversary signals and early warning capabilities to peer into imminent and likely threats targeting the organization. Most organizations are detecting threat activity too late in MITRE’s adversarial tactics, techniques and common knowledge (ATT&CK) lifecycle due to the lack of visibility. This reactive security posture plagues many organizations and creates ample dwell time for threat actors to gain a foothold, find sensitive data and exfiltrate it.

To the left, to the left

Now is the time to shift to responsive approaches in which elevated visibility anchors the disruption of threat actors. It is important to establish clear lines of visibility left of initial access and peer into reconnaissance and resource development activities performed by threat actors. To disrupt threat actors, organizations must gain visibility into these activities before threat actors are able to gain a foothold in the environment. Reconnaissance and resource development provide signals that can be used to hunt for threat actors’ activities and establish the ability to identify warnings of attack (WoA) and warnings of compromise (WoC).

Active and actionable threat intelligence

WoAs are inbound global adversary signals that indicate, in near real time, an adversary attack or compromise on critical mission assets and resources. WoA is based on high-fidelity machine analysis of far-space telemetry, such as covert operations, honeypots, border gateway protocol (BGP) data and threat intelligence, to provide early warning detection of imminent attacks targeting an organization. Threat actors have been leveraging reconnaissance for targeted attacks on organizations (as seen with the MITRE Ivanti cyberattack), given the amount of breach data on the dark web, the wealth of personal information people share on social media, and the rise of artificial intelligence in threat actors’ arsenals to accelerate and fine-tune their offensive campaigns. Things like spear phishing can be tailored to look real and legitimate, as if coming from people you trust and know, like family and friends. As threat actors spin up infrastructure leveraging cloud resources to mimic an organization’s domains and launch phishing attempts, gaining visibility into these activities is essential for formalizing early warning capabilities.
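
One hedged way to picture a WoA capability is as a fusion of weighted inbound signals into a single early-warning score. The signal names, weights and threshold below are invented for illustration; a production system would tune them against real telemetry.

```python
# Illustrative only: fuse inbound "far-space" signals into a simple WoA score.
# Signal names, weights and the escalation threshold are all invented.
SIGNAL_WEIGHTS = {
    "lookalike_domain_registered": 0.30,  # infrastructure mimicking our domains
    "honeypot_probe":              0.20,  # honeypots touched by a known actor
    "bgp_anomaly":                 0.25,  # suspicious route announcements
    "credentials_on_dark_web":     0.25,  # staff credentials in fresh breach data
}

def woa_score(observed: set[str]) -> float:
    """Sum the weights of the signals seen in the current window (0.0-1.0)."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if name in observed)

window = {"lookalike_domain_registered", "credentials_on_dark_web"}
score = woa_score(window)
print(f"WoA score: {score:.2f}", "-> escalate to hunt team" if score >= 0.5 else "")
```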

WoCs are outbound signals from assets and resources that indicate suspicious communication and demonstrate compromised behaviors. WoC is based on adaptive risk profiling and contextual analysis to identify and monitor communication pathways to known infrastructure controlled by adversaries, or infrastructure supporting compromised assets and resources. This allows organizations to detect C2, botnet activity, data exfiltration attempts, and ransomware behavior and activities associated with emerging threats. Using global adversary signals pinpoints threat actors’ campaigns, allowing organizations to hunt for those signals without an obvious indicator of compromise (IoC) to look for. Today’s threats are stealthier and designed to evade cyber defenses; WoC provides a way to elevate visibility against changes and improvements in threat actors’ tradecraft.
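
A simplified sketch of two WoC checks follows, using invented IPs (from documentation ranges) and invented thresholds: contact with known adversary infrastructure, and the suspiciously regular outbound timing typical of C2 beaconing.

```python
# Illustrative only: two cheap warning-of-compromise checks on outbound flows.
# The IPs below are from documentation ranges; thresholds are invented.
from statistics import mean, pstdev

KNOWN_BAD = {"203.0.113.7", "198.51.100.99"}

def woc_flags(dest_ip: str, contact_times: list[float]) -> list[str]:
    flags = []
    if dest_ip in KNOWN_BAD:
        flags.append("destination is known adversary infrastructure")
    if len(contact_times) >= 5:
        gaps = [b - a for a, b in zip(contact_times, contact_times[1:])]
        # near-constant intervals (low jitter relative to the mean) suggest a beacon
        if pstdev(gaps) < 0.05 * mean(gaps):
            flags.append("outbound timing is beacon-like")
    return flags

times = [0.0, 60.1, 120.0, 179.9, 240.2, 300.0]   # roughly a 60-second heartbeat
print(woc_flags("203.0.113.7", times))
```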

Visibility cannot fail

While threat intelligence is good to formalize and leverage in operational environments, it is typically based on what has already happened: things that are in the wild. Responsive cyber defense calls for actionable threat intelligence, based on global adversary signals, that warns of imminent and impending cyberattacks – what is happening now, informed by what has already happened. Evolving the state of practice from hunting IoCs and indicators of attack to hunting for signals with WoA and WoC capabilities is essential for formalizing responsive cyber defense. This will put organizations in a better position to anticipate, adapt and evolve against threat actors’ capabilities.

Security controls will fail; visibility cannot. Hunt, or be hunted.

Kevin Greene is public sector expert at OpenText Cybersecurity.

It’s time to help the Davids in small GovCon

I recently spoke with a small business owner, let’s call him David, who is a subcontractor for a large defense firm. David has an IT firm with about 100 employees and is a veteran.

While the Prompt Payment Act ensures the prime David works for gets paid quickly, sometimes within 15 days, the same defense prime makes David wait 90 days. This three-month gap until payment costs David, back of the envelope, about $200,000 in annual financing — the equivalent of a highly skilled full-time employee with annual salary and benefits.
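
One hedged way to reconstruct that back-of-the-envelope figure is shown below; every input is an assumption chosen purely for illustration, not a number from David’s books.

```python
# Illustrative only: every input here is an assumption chosen to show how a
# roughly $200K annual financing cost can arise from a 90- vs. 15-day payment gap.
annual_sub_revenue = 12_000_000   # assumed revenue flowing through the prime
extra_wait_days = 90 - 15         # receivables carried 75 days longer
borrowing_rate = 0.08             # assumed cost of a working-capital credit line

receivables_carried = annual_sub_revenue * extra_wait_days / 365
annual_financing_cost = receivables_carried * borrowing_rate
print(f"${annual_financing_cost:,.0f}")   # about $197,000 under these assumptions
```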

For David’s small business this constitutes a significant drain of resources.

A year ago, a Pentagon report called out this and other problems facing small businesses and extolled the innovation small business contractors like David bring. But clearly the federal government has not made much progress in lessening the burden on small business.

The proof is in the numbers: From fiscal 2011 to 2020 the number of small businesses receiving Defense Department contract awards decreased by 43%, despite obligations increasing by 15%.

From David’s point of view, while the 90-day payment plan is both irksome and expensive, he feels it is by no means his most significant barrier to growth.

Small business owners endure something akin to second-class citizenship in the federal marketplace, indentured either as subcontractors or shoved into joint ventures (JVs) or teaming arrangements, because they don’t have the recent past performance on large projects that would allow them to successfully win prime contracts.

With a thicket of regulations blocking their pathway — the current FAR is over 2,000 pages with additional policies on cyber and supply chain risk coming soon — industry giants just thunder past them. In short, “graduating” from small business contracts to “full and open” contract work is a David and Goliath contest.

Think about it from David’s point of view. He and other small government contractors (GovCon) like him are expected, with annual revenues of, say, $20M, to show they have performed work in the hundreds of millions of dollars as proof of competency, something Boeing, Lockheed, Raytheon and the rest can do with their eyes closed.

Another headache for these Davids: The government sets arbitrary deadlines for entry onto very important vehicles, so that if a small business does not qualify by that date, it is closed out of competing for five or ten years.

Moreover, the government does not award contractor performance assessments to subcontractors, yet primes take credit for the work of their subs like David – performance that they then use to get even more prime work.

Because of these and other perverse rules, at the moment a small business picks up momentum and gets more awards under his or her belt, they “size out” of their North American industry classification system (NAICS) code.

This means smaller businesses are suddenly prohibited from obtaining small business contracts — the contracts they initially fed on. This is literally known, inside the small business GovCon world, as “the Valley of Death.”

Let’s be honest and accept an inconvenient truth. It is better to be a prime contractor than a David doing the sub work for one. Not only is it time to rethink many of the rules and regulations that govern small businesses serving the federal customer, but it’s also time to say out loud that this is an emergency.

Small business participation in the federal market has fallen approximately 50% between 2010 and 2022.

Everyone in and out of government agrees that small businesses are an engine of innovation. As Secretary of Defense Lloyd Austin III has said, “for far too long, it’s been far too hard for innovators and entrepreneurs to work with the department. And the barriers for entry into this effort to work with us in national security are often too steep — far too steep.”

According to Gallup, Americans have more confidence in small business than they do in the police, public schools, the medical system, church or even our military.

If the Pentagon and the Biden Administration share that confidence, they must do more to bridge small businesses into mid-tier land, freed from their current second-class status and the significant shackles that come from only subbing to primes or teaming to get work.

Sharon B. Heaton is the CEO and founder of sbLiftOff, a national mergers and acquisitions advisory firm specializing in government contracting companies. She serves on the SBA’s Investment Capital Advisory Committee.

 

To make effective AI policy you must trust those who’ve been there

On March 28, the White House took a pretty big step toward establishing a broader national policy on artificial intelligence when it issued a memorandum on how the federal government will manage it. It established new federal agency requirements and guidance for AI governance, innovation and risk management. All of this is in keeping with the AI in Government Act of 2020, the Advancing American AI Act, and the President’s executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”  

Tucked into the 34-page memorandum is something that could easily go unnoticed, but it is perhaps one of the most important and far-reaching details to come out of it. Page 5 of the document lists the roles of chief artificial intelligence officers (CAIOs), and more specifically notes that a chief data officer (CDO) should be involved in the process.

While the memorandum doesn’t spell out responsibilities in detail, it points to a mandate to include data scientists in the development, integration and oversight of AI in society. More to the point, it’s a reminder that we need the right and most qualified people at the table to set policy on the role AI will play in society.

You cannot just assume the right experts have a seat at the table. Even though the field of AI has been around for nearly 70 years, it’s only since the generative AI boom starting in November 2022, when ChatGPT was launched, that many leaders in society have begun to see the sea change AI represents. Naturally, some are jockeying for control over something many don’t understand. There is the risk they could be crowding out the people who do, the data scientists who’ve thus far conceived, created and are incorporating AI into our daily lives and workflows. For something this revolutionary and impactful, why? 

AI development faces a human nature problem

Credit human nature. People are at once intimidated by and even scared of the kind of massive societal change AI represents. This reaction is something we as a society and as a country have to quickly get beyond. Society’s welfare, and America’s national security and competitiveness are at stake. 

To be sure, AI’s benefits are real, but it also poses real risk. Shaping and navigating its future will depend on a combination of regulation, broader education, purposeful deployment, and our ability to leverage and advance data science underlying AI systems.   

Without the latter, systems run a greater risk of being ineffective, unnecessarily disruptive to the workforce, biased, unreliable and even underperforming in areas that could truly be positively impacted by AI. In high-stakes cases like health care, unproven or untested AI can even cause outright patient harm. The possible setbacks in function can lead to setbacks in perception. And setbacks in perception do little to marshal the resources, talent and institutions needed to realize AI’s potential while safeguarding the public.

The states take the lead

As the federal government has wrestled with how to approach AI regulation, more nimble state governments and regulators have taken the early lead. In the 2023 legislative calendar, some 25 states, along with Puerto Rico and the District of Columbia, already introduced AI-centric legislation. Eighteen states and Puerto Rico have “adopted resolutions or enacted legislation,” according to the National Conference of State Legislatures.

At the federal level, there have been dozens of hearings on AI on Capitol Hill, and several AI-centric bills have been introduced in Congress. Many of these bills center on how the government will use AI. Increasingly, we are seeing specific AI applications being addressed by individual federal departments and committees. This includes the National AI Advisory Committee (NAIAC).

Where are the data scientists?

You don’t have to look far to find the critical mass of data scientists who need to be involved in society’s efforts to get AI right the first time. We are (some of) those data scientists, and we have been part of an organization that understood the intricacies of “machine learning” long before policymakers knew what the term meant. We, the leaders of the sector charged with bringing the promise of AI to the world, have long worked — and continue to work — to create a framework that realizes the potential of AI and mitigates its risks. That vision centers on three core areas:

  • Ensuring that the right data is behind the algorithms that continuously drive AI.
  • Measuring the reliability of AI, from the broadest use down to the most routine and micro applications, to ensure AI quality and safety without compromising its effectiveness and efficiency.
  • Aligning AI with people, systems and society so that AI focuses on the goals and tasks at hand, learns from what is important, and filters out what is not.

All of this must be addressed through an ethical prism which we already have in place.  

There is some irony in this early stage in the evolution of AI. Its future has never been more dependent on people – ones who have a full understanding of the issues at play, along with the need for and application of ethical decision-making guardrails to guide everything. 

Bad data makes bad decisions

Ultimately, AI systems are a function of the data that feed them and the people behind that data. Obviously, the ideal is to have accuracy and effectiveness enabled by good data. Sometimes, to better understand how you want it to work, you have to confront those instances where you see what you don’t want – in this case, instances where AI decisions were driven by poor data.   

Consider, for example, AI systems that inaccurately identify minority populations, a problem that has plagued security screening technologies for years. This is usually not a technology problem, but rather a data problem. In this case, the systems are operating on bad or incomplete data, and the impact on society is significant because it leads to more people being unnecessarily detained.

Chances are, many of these sorts of problems can be traced back to the human beings who were involved, or – perhaps more importantly – not involved in AI development and deployment. Poor data that leads to bias or ineffective decision-making is a significant problem across industries, but one that can be solved by combining the expertise of the data science community with that of diverse stakeholders, especially frontline workers and subject matter experts.
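
As a concrete example of the reliability work this implies, the sketch below compares false-positive rates across groups on synthetic screening outcomes, the kind of basic disparity check a data scientist would run before deployment.

```python
# Illustrative only: compare false-positive rates across groups to surface the
# screening bias described above. The records below are synthetic.
from collections import defaultdict

# (group, model_flagged, actually_a_threat) -- synthetic outcomes
results = [("A", True, False), ("A", False, False), ("A", False, False),
           ("B", True, False), ("B", True, False), ("B", False, False)]

fp = defaultdict(int)      # false positives per group
neg = defaultdict(int)     # all genuine non-threats per group
for group, flagged, threat in results:
    if not threat:
        neg[group] += 1
        fp[group] += flagged

for group in sorted(neg):
    print(f"group {group}: false-positive rate {fp[group] / neg[group]:.0%}")
# A large gap between groups is a data problem to fix before deployment.
```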

Data scientists must have a seat at the table … now

Data scientists need to be at the decision-making table early on, because they have the holistic training and perspective, as well as the expertise to build algorithms in specific domains that focus on leveraging data for actual decision-making. Whether the AI system is supporting health care, military action, logistics or security screening, connecting effective data with AI will ensure better decisions and therefore fewer disruptions.

When it comes to measuring reliability, that’s what data scientists do. No one is better positioned to ensure that AI systems do what they are designed to do and avoid unintended consequences. Data scientists know. They’ve been there.

Data scientists sit at the intersection of ensuring better, more effective decision-making with AI and identifying the impacts, biases and other problems of AI systems. As states, Congress, the White House and industry consider the next steps in AI policy, they must ensure data science is at the table.

Tinglong Dai, PhD, is the Bernard T. Ferrari Professor at the Johns Hopkins Carey Business School, co-chair of the Johns Hopkins Workgroup on AI and Healthcare, which is part of the Hopkins Business of Health Initiative. He is on the executive committee of the Institute for Data-Intensive Engineering and Science, and he is Vice President of Marketing, Communication, and Outreach at INFORMS. 

Federal agency senior leaders should curb their travel time and expenses

“The further one travels, the less one knows.” – Tao Te Ching

With the COVID-19 public health emergency officially ended, business travel overall is once again increasing. Travel by government employees is also increasing as restrictions in place during COVID-19 are relaxed and government employees once again attend in-person conferences, meetings, trainings and site visits throughout the nation and abroad. But post-COVID-19, I wonder about the rationale for much of this travel time and spending. What evidence, after all, exists that frequent in-person attendance at meetings in hotels or conference centers, or site visits by agency senior leadership or even staff, actually results in enhanced engagement, collaboration, efficiency or long-term outcomes that could not be achieved by phone or Zoom?

Agency travel has proven to be a hot-button issue now and then. The General Services Administration oversees federal travel and provides resources and guidance for agencies, but the federal government’s often arcane travel regulations and agency policies can confuse and frustrate even experienced travelers. At times members of Congress weigh in, and various inspectors general periodically ding agencies for their noncompliance. The Government Accountability Office, which also takes an occasional interest, has published a few reports here and there.

Sometimes questionable travel decisions by agency leaders become national news. Not that long ago, for example, a former Department of Health & Human Services (HHS) secretary lost his job largely due to alleged travel abuses. More recently, a current high-profile leader was sharply criticized in the media for purportedly “unrealistic demands about his travel accommodations.” Following high-profile reports of questionable spending or behavior at meetings, many agencies and departments imposed restrictions on spending and conference attendance.

Travel, of course, has had its moments in the private sector as well over the years, periodically seeming to wax and wane as companies seek to cut expenses, improve management and enhance sustainability. Much debate seems to exist as to whether the benefits of business travel outweigh its fiscal or environmental drawbacks, though the spending of taxpayer funds is a key additional consideration within government that does not generally exist within the private sector.

Rationales I’ve heard over the years for senior agency leadership travel include the opportunity to hear ‘diverse voices,’ ‘getting closer to those we serve,’ ‘showing we care,’ looking at potential program sites, or being able to speak to those outside the ever-expanding ‘DC bubble.’ Yet such justifications often ring hollow. Indeed, many agencies now have fully remote staff throughout the nation and large regional offices; 85% of federal workers live outside DC. And it’s not clear that senior agency or department leadership must travel throughout the nation to know, for example, that the U.S. has a mental health crisis, or visit Alaska or the Pacific Islands in person to realize that these populations confront serious challenges. As well, such trips are often repeated every year or two as one leader departs and a new senior leader comes on board, or new staff replace retiring or exiting colleagues.

In fairness, I’ve seen a couple high-ranking leaders work hard over the years to curb their own travel and that of senior subordinates, but that’s more the exception than the norm. Travel seems to be viewed by many senior agency leaders as a perquisite of their office, with the more travel, the better. And that approach has a way of working its way down to the agency’s staff level.

There are certainly many good reasons for federal employees, even senior leaders, to travel, such as conducting complex audits, providing direct delivery of emergency or health care services and other functions. And it’s also important to acknowledge that once in a blue moon, especially following a disaster or high-profile event, leaders have been criticized for not visiting an area in distress. But much travel by higher level leaders does not seem tied to such direct services or outcomes and has no relation to emergency preparedness and response.

Moreover, as agencies strive post-COVID to enhance their in-person presence, frequent travel by agency leadership also seems inconsistent with the supposed benefits realized from being back in the office, often touted repeatedly by these very same senior leaders. Certainly it’s interesting to hear senior leaders discuss how much they miss(ed) seeing their staff in person and all the benefits of face-to-face interactions while they themselves spend ever more of their time on travel at meetings, conferences or other visits.

The administration’s policies on climate change are another reason to more closely scrutinize senior leadership travel. HHS’ 2022 Sustainability Plan, for instance, calls on agencies and offices within HHS to “reduce the environmental impact of government travel by prioritizing or incentivizing virtual meetings, ‘green’ travel options, and including funding for carbon offsets in travel reimbursement.” As the federal government seeks actively to enhance sustainability, including within the health care sector, shouldn’t its own staff, and most especially its leadership, walk the walk? Sustainable travel is great but no travel at all is obviously that much better for the environment.

The most compelling reasons to curb travel by agency leadership though might simply be that it detracts from their important interactions with staff members facing increasing challenges and evincing profound skepticism about their senior leadership’s overall engagement and effectiveness. Too many agency leaders, even as they jaunt for days at a time to far-flung locations, never have bothered to speak to most staff in their immediate office or building or even key stakeholders in D.C. or other areas where they are based. Travel certainly is not something always to be avoided but agency and department leaders might even, post-COVID, consider setting a personal example of frugality and discipline for their staff, spending less time on the road and more time rolling up their sleeves, opening the doors to their glass offices and working side-by-side with others at home. Perhaps their travel hundreds or thousands of miles away would have more benefits if an agency’s senior leaders actually understood, knew and cared about what was going on down the hall.

Mitchell Berger has worked on public health and behavioral health programs at the federal and local levels. The opinions expressed are solely those of the author and should not be imputed to any public or private entities.

What is the Trajectory of the ASCEND BPA?
https://federalnewsnetwork.com/commentary/2024/06/what-is-the-trajectory-of-the-ascend-bpa/
Fri, 07 Jun 2024 16:21:50 +0000
The highly commercial nature of cloud services is pushing up against government-unique requirements, bringing the ASCEND BPA to a “fork in the road.”

Last month, the Federal Acquisition Service (FAS) issued a draft Request for Quotes (RFQ) for the proposed governmentwide ASCEND Blanket Purchase Agreement (BPA) for cloud services. The RFQ included a statement of work (SOW) outlining the requirements for Pool 1 – Infrastructure as a Service (IaaS) and Platform as a Service (PaaS), and it invited MAS contractors to submit comments and questions on the draft SOW for FAS’s consideration in the development of final requirements for the formal, competitive phase of Pool 1 of the BPA.

Significantly, the ASCEND BPA acquisition strategy includes two other related pools: Pool 2 – Software as a Service (SaaS) and Pool 3 – Cloud Related IT Professional Services. At the ASCEND BPA industry day held on February 8th this year, FAS indicated that it was starting market research on Pools 2 and 3 and that industry partners should look for a Request for Information (RFI) for those pools this summer. Consistent with this schedule, FAS plans on first issuing a separate RFQ for Pool 1, followed by a separate RFI and RFQ for Pools 2 and 3.

The draft RFQ for Pool 1 marks the latest step in a multi-year journey towards FAS’s goal of establishing a governmentwide BPA for cloud services. In 2021, FAS issued an RFI to MAS contractors for a governmentwide, Multiple Award Cloud BPA. The following year, FAS introduced the ASCEND BPA acquisition strategy while continuing the dialogue around the structure and requirements. The RFIs, industry day, and draft RFQ together raise questions and uncertainties for FAS, customer agencies, and industry regarding the appropriate acquisition strategy. In particular, the highly commercial nature of cloud services is pushing up against government-unique requirements, bringing the ASCEND BPA to a “fork in the road.” Will the BPA “ascend” towards a streamlined acquisition strategy with corresponding requirements that embrace commercial terms and practices? Or will the BPA continue to “descend” into an overly complex acquisition strategy that incorporates layers of government-unique requirements?

Here are some of the key aspects of the current strategy that have raised uncertainty among FAS’s industry partners:

  • The current pool structure is inconsistent with commercial practice and with delivering an integrated, holistic cloud solution, as it will increase complexity, risk and costs for customer agencies and MAS contractors. The separate-pool approach is compounded further by the separate procurements for each pool. FAS should engage with industry on the optimal approach to structuring the functions consistent with commercial practice and the underlying schedule.
  • The current draft RFQ/SOW incorporates requirements that are inconsistent with commercial practice. The divergence from commercial practice will limit competition, innovation and, ultimately, value to the customer. In this regard, see generally the individual industry feedback the Coalition submitted in response to the RFQ. FAS should focus on limiting non-commercial terms to the maximum extent practicable.
  • Relatedly, the layering on of additional agency-specific cloud requirements at the BPA level will increase complexity and costs for MAS contractors, which, in turn, likely will impact competition and value for customer agencies. FAS should identify a simplified, core set of requirements that generally apply governmentwide, while allowing customer agencies to tailor requirements at the task order level, thereby streamlining the process and enhancing competition. This approach also is consistent with the highly customizable nature of cloud requirements.
  • The absence of agency commitments to use the ASCEND BPA continues to create risk for MAS contractors. Identifying agency commitments will incentivize industry to compete for the BPA and improve the quality of any responses submitted. In addition, this increased competition will enhance value, savings and innovation over the long term. It is central to ensuring robust mission support through the BPA.
  • As part of the overall acquisition strategy for the ASCEND BPA, FAS’ industry partners also are keen to understand how the BPA strategy fits within FAS’s overall IT portfolio, which currently meets customer agencies’ mission support requirements. Questions remain about vertical and horizontal contract duplication arising from the ASCEND BPA acquisition strategy.

Over the course of the development of the ASCEND BPA, FAS’ industry partners have appreciated the opportunity to engage and share feedback on the acquisition strategy and requirements. Given the approaching fork in the road, perhaps it is time for a “cloud roundtable” to discuss the way forward for ASCEND in delivering best value mission support for customer agencies. Coalition members stand ready to facilitate and contribute to such a roundtable discussion to improve the program.

Federal zero trust implementation hinges on actionable strategies
https://federalnewsnetwork.com/commentary/2024/06/federal-zero-trust-implementation-hinges-on-actionable-strategies/
Thu, 06 Jun 2024 19:40:34 +0000
Zero trust will continue to be a centerpiece of federal cyber strategies across government, and rightfully so.

Zero trust has become an important tool in federal cyber plans, with the Biden administration showcasing zero trust as one of the linchpins of the 2023 National Cyber Strategy. Out of this has come a bevy of implementation strategies from individual agencies, none of which really gets to the heart of the cyber challenges around zero trust implementation or the federal threat landscape overall.

Cyber defense is not a line in the sand. The threats are constantly evolving. Because of that, federal zero trust plans need to ensure that agencies, experts and professionals in both the private and public sector are consistently aware of these threats and how they’re behaving.

Understanding the threat terrain

Organizations should prioritize creating cybersecurity strategies in collaboration with national partners with similar missions, ensuring a comprehensive approach and common vision on what the threat landscape looks like. A lack of knowledge of the terrain will undermine security measures, leaving federal cyber defenders stranded like soldiers entering battle with no understanding of their environment.

As agencies build out their zero trust plans, they need current insights and case studies on the threat terrain, including actual threat modeling to accompany the guidelines. The priority should be to show agencies the value of gaining visibility into the terrain they’ve moved into over the past 20 years. This will not only make the strategy helpful for a broader range of agencies, but also give a snapshot of the threat landscape at any given time so agencies can see how it is evolving.

Actionable guidance 

None of this guidance will work without use cases that show agencies the nuances of implementing certain technologies, strategies and platforms.

A good example of compiling information to form your strategy would be utilizing the National Security Telecommunications Advisory Committee (NSTAC) and National Security Agency (NSA) plans. They provide guidelines for practical actions like micro- and macro-segmentation. This is incredibly useful, especially for agencies and other organizations that don’t have the resources to create their own guidance. These documents provide detailed strategic guidance on policy frameworks, interagency coordination, technical implementation and best practices.
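To make those segmentation concepts concrete, here is a minimal, purely illustrative sketch of how a deny-by-default policy engine might layer macro-segmentation (zone-to-zone rules) over micro-segmentation (workload-to-workload rules). It is not drawn from the NSTAC or NSA documents, and every zone, workload and rule in it is hypothetical.

```python
# Illustrative only: a toy policy check contrasting macro- and micro-segmentation.
# Zone-to-zone rules form the "macro" layer; workload-to-workload rules form
# the "micro" layer. All names and rules are hypothetical.

MACRO_ALLOWED = {("dmz", "app"), ("app", "data")}    # zone -> zone
MICRO_ALLOWED = {("web-frontend", "claims-api"),     # workload -> workload
                 ("claims-api", "claims-db")}

def is_allowed(src_zone: str, dst_zone: str,
               src_workload: str, dst_workload: str) -> bool:
    """Deny by default: traffic must pass both the macro and the micro layer."""
    macro_ok = (src_zone, dst_zone) in MACRO_ALLOWED
    micro_ok = (src_workload, dst_workload) in MICRO_ALLOWED
    return macro_ok and micro_ok

# Permitted only when both layers agree; everything else is denied by default.
print(is_allowed("dmz", "app", "web-frontend", "claims-api"))   # True
print(is_allowed("dmz", "data", "web-frontend", "claims-db"))   # False: macro layer denies
```

The point of the sketch is the shape of the control, not the specific rules: every flow is denied unless an explicit rule at each layer allows it.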

Agencies can utilize these documents immediately to help in three areas:

  • Policy integration and alignment: Structure policies around the principal zero trust-defined areas, mapping governance to these areas for long-term visibility and management.
  • Partnerships for implementation: Frame where your boundaries are and ensure that partners are of like mind when implementing strategy. Your defenses need to connect.
  • Continuous improvement and adaptation: Once these boundaries are identified, maintain visibility. Constant evolution is necessary and should be the nature of anyone supporting information and communication technology (ICT). Know your terrain and consistently work with your partners to defend it.

Seasoning strategies to taste

While entities like the departments of Defense and Homeland Security have established important use cases and strategies, other organizations should make sure they’re considering their own unique mission needs and objectives. Tying implementation of zero trust architecture to these goals is essential for aligning zero trust implementation with an agency’s existing strategies. A practical starting point for this is to evaluate how organizational missions align with those already equipped with detailed strategy and threat profiles.

This critical information — the assets, where they’re stored and the paths they may travel, the risk of exposure, how you defend and monitor them, and the actions you’ll take upon authorized and non-authorized exposure — can make the difference in the event of an attack. New threats are always being identified, and you don’t want to redo this effort under fire.

Zero trust will continue to be a centerpiece of federal cyber strategies across government, and rightfully so. When implemented correctly, it can make a huge impact on federal security. But to ensure success, agencies need to develop plans centered on situational awareness and actionable strategies. The threat landscape is only getting more complex, with malicious actors and nation-states utilizing incredibly innovative tools in attempts to infiltrate government networks. Proper zero trust implementation is essential to defending this constantly evolving frontier.

Will Smith is director of business expansion and solutions design at RELI Group.

New FedRAMP updates: 5 ways federal agencies can evaluate and select the safest cloud providers
https://federalnewsnetwork.com/commentary/2024/06/new-fedramp-updates-5-ways-federal-agencies-can-evaluate-and-select-the-safest-cloud-providers/
Wed, 05 Jun 2024 17:59:56 +0000
As federal agencies ride the wave of digital transformation and embrace cloud services, the landscape of cybersecurity continues to present complex challenges.

The primary purpose of the Federal Risk and Authorization Management Program is to ensure that federal agencies can leverage the benefits of modern cloud technologies while upholding stringent security standards. FedRAMP serves as a benchmark of security assurance in the ever-expanding cloud landscape, offering a framework that helps federal agencies evaluate and select cloud providers with the highest rigor for data protection. However, there has been speculation around whether FedRAMP is fit for purpose in an increasingly complex cyber threat environment. After all, certification lags the standard by a few years, and the standard lags in identifying control mechanisms to thwart emerging cyber threats.

According to a recent public memo from The White House, “Because federal agencies require the ability to use more commercial [Software-as-a-service] products and services to meet their enterprise and public-facing needs, the FedRAMP program must continue to change and evolve.”

This evolution has now begun. Recent updates to FedRAMP have been driven by several key imperatives. First, the program needed to scale to accommodate the growing demand for cloud services across federal agencies. Second, it aimed to mature by refining its focus on the most critical aspects of data security. Third, efforts were made to streamline the software authorization process, making it more efficient and accessible. Finally, reducing costs was a central goal, making cloud adoption more viable for agencies of all sizes. In essence, these updates represent a commitment to ensuring that FedRAMP remains a robust and adaptable tool for safeguarding federal data in the face of evolving security challenges.

The threats facing federal agencies

The timing couldn’t be worse. Just as agencies are being asked to modernize and embrace cloud services, the risk factor of moving workloads into the cloud has increased manyfold. In the wake of geopolitical turmoil and the democratization of advanced AI-based technologies, federal agencies must now navigate a minefield of cybersecurity challenges while orchestrating their migration and selecting cloud partners. Access to new technologies has armed cybercriminals, state actors and malicious entities with unprecedented access to hacking techniques and tools. We now operate in a world where AI/ML algorithms can be used to create malicious code, where social engineering and identity theft are more sophisticated than ever, and where software supply chains are only as strong as their weakest link. What’s more, the alarming emergence of ransomware-as-a-service – malicious software that’s readily available on the darknet – poses a substantial danger. In this environment, federal agencies must prioritize advanced security measures in their cloud services, recognizing the imperative of safeguarding sensitive data and systems from these evolving and multifaceted threats.

5 criteria federal agencies should use when selecting cloud providers

FedRAMP remains a key framework for security assurance and its updates will prove useful, but in the wake of mounting threats, here is a selection of criteria that chief information officers, chief information security officers and chief technology officers in federal agencies should consider when selecting a cloud provider.

  1. Embrace a “defense-in-depth” approach

One fundamental principle of cloud security is adopting a “defense-in-depth” strategy. Federal agencies should seek cloud service and SaaS providers that employ multiple layers of control mechanisms to protect their data assets, including perimeter security, application security and data encryption. This approach ensures that even if one layer of security is breached, others remain intact, halting potential threats. (A minimal code sketch of this layering idea follows this list.)

  2. Explore beyond FedRAMP standards

While FedRAMP provides a robust framework for cloud security, forward-thinking agencies should explore additional security measures. For example, they should consider if their preferred SaaS solution provider has implemented a zero trust architecture, ensuring that data can only be accessed on a “need-to-know” basis. Solutions that have deployed artificial intelligence-based security methods for threat analysis and detection, and user behavior analysis, will also stand agencies in good stead, particularly when it comes to monitoring software supply chains and the flow of data.

  3. Assess qualifying authorizations

Federal agencies should evaluate cloud providers not only based on FedRAMP requirements but also on other qualifying authorizations they may possess. Consider providers with certifications such as System and Organization Controls (SOC) 2, relevant International Organization for Standardization (ISO) standards, or special designations such as AWS Government Competencies that meet the stringent security requirements of public agencies. Microsoft also has credentials such as FedRAMP authorizations and DoD impact level ratings, which can help agencies understand the suitability of various services.

  4. Examine partner network maturity

A cloud provider’s partner network plays a pivotal role in security. Assess the maturity and reliability of partners like CrowdStrike, AWS and Microsoft. A strong partner network can enhance an agency’s overall security posture.

  5. Verify proactive security measures

Staying ahead of evolving threats is crucial. Confirm that the chosen cloud provider has a proven track record of proactive security measures and innovations. Leading providers continuously evolve their offerings to protect data hosted in their environment, often including real-time analytics and threat monitoring.
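As promised under the first criterion, here is a minimal sketch of what layering can look like in practice: an agency encrypts data client-side, under a key it controls, before the data ever reaches the provider, on top of whatever at-rest encryption the provider applies. It uses the open-source Python cryptography package; this is a hedged illustration, not a reference to any particular provider’s implementation.

```python
# Minimal sketch of one extra defense-in-depth layer: client-side encryption
# under an agency-held key, in addition to the provider's own at-rest encryption.
# Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, keep this in an agency-managed KMS/HSM
cipher = Fernet(key)

record = b"sensitive citizen record"
ciphertext = cipher.encrypt(record)   # this, not the plaintext, is what gets uploaded

# Even if the perimeter and application layers fail, the stored data remains
# opaque without the key the agency holds.
assert cipher.decrypt(ciphertext) == record
```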

As federal agencies ride the wave of digital transformation and embrace cloud services, the landscape of cybersecurity continues to present complex challenges. The recent updates to FedRAMP signify a commitment to adaptability in the face of these evolving threats, but to safeguard their data, federal CIOs, CISOs and CTOs should look beyond government frameworks to ensure their cloud adoption strategies can move forward with confidence. Making informed choices about cloud providers is not just a matter of compliance but a critical step in securing the future of federal agencies and the fulfillment of their charter.

Manish Sharma is the senior vice president of engineering and security at Aurigo Software.

Lack of fluency in math poses a threat to our region’s economic future
https://federalnewsnetwork.com/commentary/2024/06/lack-of-fluency-in-math-poses-a-threat-to-our-regions-economic-future/
Tue, 04 Jun 2024 16:17:51 +0000
We should demand that schools across the region have aggressive strategies that put numeracy at the center of the education agenda.

Federal agencies, Fortune 500 companies, and scores of businesses and nonprofits are scouring the D.C. region’s workforce to fill positions, with thousands of jobs currently going unfilled.

The problem will become even more pronounced as employers increasingly require solid math skills of new employees. Yet math skills, especially among the area’s younger generations, are sorely lacking.

The deficit is particularly pronounced among K-12 students. Tests show “catastrophic” losses in Virginia, according to Gov. Glenn Youngkin. In Maryland, the number of students who tested as “proficient” in math dropped below pre-pandemic levels. And fourth- and eighth-grade students at Washington, D.C. schools experienced precipitous drops in scores.

Why does this matter? It’s simple: math skills are essential to economic growth and prosperity in our region.

From numerous roles in the federal government to federal contractors and business services professions that serve federal agencies and companies, many jobs here rely on numeracy. At the same time, the new economy unfolding around us is firmly anchored in math, with in-demand jobs in fields such as artificial intelligence, blockchain development, cybersecurity and data science.

Facility with math provides the opportunity for all students to be successful, regardless of their backgrounds. According to the Bureau of Labor Statistics, the median salary for math-related careers in 2022 was $99,590, more than double the national figure of $46,310 for all occupations.

With all of that in mind, area education leaders must take decisive action to ensure students across the region build strong math skills.

We should replicate efforts by other states to do so.

  • For example, Alabama approved legislation that ensures every elementary school has a math coach. The new law sets up a process to vet and approve high-quality instructional materials and curricula and establishes academies to build a pipeline of principals trained in effective math intervention strategies.
  • New Jersey created a tutoring corps that serves students statewide in pre-K through eighth grade. The corps works during and after school, as well as over the summer, to ensure minimal drop-off during breaks.
  • Nebraska, Louisiana, Colorado, and Ohio now offer Zearn, a high-quality math supplemental resource. It is free to all public-school students and proven to improve performance.
  • Texas put in place another resource, this time for educators: an interactive tool that gives real-time data insights about student performance in math, enabling early intervention when performance lags.

Part of the solution to this issue in our region also lies in addressing a cultural phenomenon — the false notion that people are either “good at math” or “bad at math.” For decades, the accepted norm has been that people self-select and divide themselves along these lines. That process begins during the early education years, and if it solidifies during high school, people are likely to define themselves that way for the rest of their lives. With jobs increasingly relying on math and science skills, students face a severe disadvantage if they believe they were born to fail at math.

Parents in our area are supportive of blowing up that false choice. A recent poll by SurveyUSA of D.C. parents found that while some students believe they are not a “math person,” 89% of parents disagree, saying anyone can be a “math person.”

All of us — from parents and business leaders to philanthropies and others — must hold elected and education leaders accountable. We should demand that schools across the region have aggressive strategies that put numeracy at the center of the education agenda. If they don’t, economic opportunity for today’s students — the next generation of federal and corporate workers — will slip away.

Jim Cowen is executive director of the Collaborative for Student Success, a nonprofit that aims to ensure all kids are prepared for success in school, college, and careers. Jack McDougle is president and CEO of The Greater Washington Board of Trade, a leading business organization that drives solutions for inclusive economic growth and livability in the region.

Is the United States primed to spearhead global consensus on AI policy?
https://federalnewsnetwork.com/commentary/2024/06/is-the-united-states-primed-to-spearhead-global-consensus-on-ai-policy/
Mon, 03 Jun 2024 16:56:03 +0000
The U.S. strategy is offering a flexible framework that can swiftly adapt to the rapidly evolving AI landscape.

Artificial intelligence is quickly becoming an indispensable asset in addressing a range of challenges in today’s society – from domestic and international cyber threats to healthcare advancements and environmental management. While there are some mixed opinions on many aspects of this technology and its capabilities, there’s no question that in order for AI to meet its full potential, we will need an agile and dynamic policy framework that spurs responsible innovation – a framework that the United States could soon model.

Every day AI becomes more entrenched in our daily lives and will soon be ubiquitous around the world. Countries need a framework to look to for guidance, a leader. Without a flexible policy framework in place that is broadly accepted, we risk missing out on many of AI’s benefits to society. Trust in AI is pivotal for realizing its full potential, yet this trust will be hard-earned. It demands efforts from both private organizations and governments to develop AI in a responsible, ethical manner. Without trust, the promise of AI could remain unfulfilled, its capabilities only partially tapped.

Efforts and innovations must be coordinated across the globe, guided by a responsible pioneer. Lacking some level of synchronization, society could experience a confusing system of disparate AI regulations, rendering the safe advancement of AI initiatives challenging across the board.

With its flexible governance structure informed by valuable international, public-private input, the U.S. could be a clear choice to lead the world to success in this new age of AI.

Current AI governance initiatives

Currently, steps are being undertaken globally to regulate the use of AI, enhance its safety, and foster innovation. It’s natural that various jurisdictions have placed different emphases on their priorities, resulting in a diverse range of regulations – some more prescriptive than others. This variation reflects the unique cultural perspectives of different regions, leading to a potential patchwork of AI regulations. As of October 2023, 31 countries have passed AI legislation and 13 more are debating AI laws.

Europe took an early lead in December 2023 by passing the AI Act, the world’s first comprehensive AI law focused on categorizing AI in terms of risks to users. The original text of the AI Act was written in 2021 – long before the mainstreaming of GenAI in 2023. In contrast to the EU’s approach to AI regulation, the United Kingdom took a more pro-innovation stance and underscored its leadership aspirations by hosting an international AI Safety Summit at Bletchley Park in November 2023.

The United States played a prominent role at the summit, which focused on the importance of global cooperation in addressing the risks posed by AI, alongside fostering innovation. Meanwhile, China mandates state review of algorithms, requiring them to align with core socialist values. In contrast, the U.S. and UK are taking a more collaborative and decentralized approach.

The U.S. has taken a more proactive approach to asserting its leadership in AI governance, in contrast to its approach to data privacy, where the EU has largely dominated with the General Data Protection Regulation (GDPR).  A series of recent federal initiatives, including President Biden’s exhaustive AI executive order, signals a commitment to eventually leading global AI governance. The order lays out a blistering pace of regulatory action, mandating detailed reporting and risk assessments by developers and agencies. Notably, many of these requirements and assessments will come into force long before the EU’s AI Act is settled and enforced.

In the absence of strong federal action, states are stepping in. In the 2023 legislative session, at least 25 U.S. states introduced AI bills, while 15 states and Puerto Rico adopted resolutions or enacted legislation around AI. While it is great to see this progress and innovation being made across the world, we must recognize the next steps needed to move forward on the AI front.

Without harmonizing efforts globally and having a leader to look to for guidance on AI endeavors, we could end up with a complex patchwork of AI regulations, making it difficult for organizations to operate and innovate with AI safely — throughout the U.S. and globally.

The blueprint for AI regulation: The U.S.

Without trust, AI will not be fully adopted. The U.S. and like-minded governments can ensure that AI is safe and that it will benefit humanity as a whole. The White House has begun to pave the way with a recent flurry of AI activity, remaining proactive and agile despite evolving demands. To get ahead, Congress is pursuing niche areas within AI that will inform current and future AI regulations. The U.S. can further promote transparency, confidence and safety by collaborating with industry to ensure that the benefits of this evolving technology can be realized, risk concerns do not stifle innovation, and society can trust in AI.

Domestically, the Biden administration has been exceedingly open to input from all sectors, shaping a holistic viewpoint on what is needed for advancement. Abroad, the U.S. prioritizes collaboration with its allies, ensuring best practices are followed and ethical considerations are made. This is a key component needed from a global leader, as regulations must be developed outside of a vacuum for best results. By linking arms with countries around the world to develop standards, conflicting viewpoints can be mitigated to best shape international AI regulations in a way that is most beneficial to society.

Furthermore, by encouraging strong public-private partnerships, the U.S. sets the precedent needed to take responsible AI innovation to the next level. Just like the public sector, private companies must innovate responsibly, accepting the duty to develop AI in a trustworthy manner. By moving forward with cautious enthusiasm, the private sector can considerably bolster efforts to ensure AI reaches its full potential safely, at home and abroad.

Of course, the geopolitical aspect must be considered, as well. By leading in AI standards and regulations, the U.S. can initiate globally accepted norms and protocols to deter an unregulated arms race or other modern warfare catastrophe. Through its technical prowess and dynamic experience, the U.S. is uniquely positioned to lead in the development of a global consensus on responsible AI use.

The future of AI governance is here

The U.S. is just beginning to establish itself as a global leader in AI governance, spearheaded by initiatives such as President Biden’s executive order, Office of Management and Budget guidelines, the National Institute of Standards and Technology’s AI Risk Management Framework, and widely publicized commitments from AI companies. The U.S. strategy offers a flexible framework built to keep pace with the rapidly evolving AI technology landscape.

As the U.S. continues to quietly refine its approach to AI regulation, its policies will not only have far-reaching impacts on American society and government, but also offer a balanced blueprint for international partners. The onus to innovate with AI responsibly does not fall solely on the public sector. Private companies, too, must bear the burden alongside their public counterparts to optimize results. This balanced approach, informed by a variety of international, public-private insights, is bound to shape the future of AI governance and innovation worldwide.

Bill Wright is global head of government affairs at Elastic.

Improving citizen experience with proper data management
https://federalnewsnetwork.com/commentary/2024/05/improving-citizen-experience-with-proper-data-management/
Fri, 31 May 2024 19:28:36 +0000
By harnessing data-driven decision-making, agencies can significantly enhance the quality and efficiency of services provided to citizens.

The White House’s recent FY25 budget proposal emphasizes improved citizen services and quality of life and includes initiatives such as lowering the cost of childcare, increasing affordable housing, decreasing the cost of healthcare and more.  

 To accomplish these goals, the budget proposal focuses on utilizing evidence-driven policies and programs, highlighting the need for additional personnel to collect and analyze evidence, such as data, to properly inform agency initiatives.  

 Most agencies currently collect different types of data, but there is variation in the extent to which it is used to inform decision-making processes. The Office of Management and Budget published an evidence-based policymaking guide to encourage and support agencies in making more data-driven decisions. 

 While this is one of several pieces of support that the federal government has offered agencies, a critical piece is missing from the primary discussion – the role of proper data management and how it can impact citizen services and their experiences. 

Data management for CX success

As federal agencies look to leverage data to inform policies, decisions and programs, they are undervaluing data hygiene, failing to recognize the benefits of processes such as lineage and testing protocols. If data management is done incorrectly, the government will fail to meet crucial citizen needs.

For example, during an analysis of the Internal Revenue Service’s legacy IT, the Government Accountability Office found the agency lacked regular evaluations of customer experiences and needs during the implementation of OMB’s cloud computing strategy. As a result, the agency has spent over a decade trying to replace the legacy Individual Master File (IMF) system, the authoritative data source for individual tax account data. This lack of responsiveness to CX needs, compounded by data and other challenges, significantly affects citizen services.

To ensure employees understand the true value of data and the benefits it can provide when used correctly, it’s important for agencies to foster a culture of data literacy, or the ability to read, write and communicate data in context. This is a foundational aspect of enhancing government’s data capabilities. 

Data plays a pivotal role in the quality of services provided to citizens. Before it can be used to inform such programs, agencies must ensure their data is organized and accessible according to proper data management protocols. 

Data management is defined as a set of practices, techniques and tools for achieving consistent access to and delivery of data across the spectrum of data subject areas and types in an agency. In the federal government’s case, having access to organized data, regardless of location, provides insights to decision makers that enable them to act according to relevant stats and information. 

This level of insight helps the government greatly when working to meet the requirements needed to correctly inform citizen programs and bolster citizen services, as the process may include migrating large data sets from legacy systems.  

When agencies successfully adhere to proper data hygiene and management, valuable resources for citizen use are made available, ranging from updated payment systems to public safety information such as crime rate data. Once the data has been properly stored and organized, business intelligence and analytics software tools such as ServiceNow or Tableau can help agencies make informed decisions. 

Impact on citizen services

The government provides a variety of services that citizens rely on daily, including health benefits, food assistance, social security and more. But as the economic landscape changes, the government’s citizen services must also change. 

To help individuals and businesses during the COVID-19 pandemic, Congress allotted $2.6 trillion to support vulnerable populations, public health needs and unemployment assistance. When agencies can access readily available data that has been adequately managed, it is easier to provide the services that citizens need in a timely manner. Additionally, by ensuring internal data is ready for use, agencies can provide for all citizens regardless of factors such as race, location or age.

Suppose the government decides to increase the amount of food assistance provided across the country and disburses an equal amount to every state without knowing population density, unemployment rates and other essential factors. In that case, it risks significantly decreasing the initiative’s impact. While a simple example, this showcases the importance of data when making decisions that affect the lives of millions of individuals.
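To make that example concrete, here is a minimal sketch with entirely invented figures. It contrasts an equal split with an allocation weighted by estimated need (population times unemployment rate); this is exactly the kind of calculation that well-managed, readily accessible data makes routine.

```python
# Toy comparison of equal vs. data-informed allocation of assistance funds.
# All state names and figures are invented for illustration.

states = {
    # name: (population in millions, unemployment rate)
    "State A": (39.0, 0.051),
    "State B": (6.1, 0.034),
    "State C": (1.3, 0.068),
}
budget = 900_000_000  # dollars

equal_share = budget / len(states)

# Weight each state by estimated people out of work (population * unemployment).
need = {s: pop * rate for s, (pop, rate) in states.items()}
total_need = sum(need.values())

for s in states:
    need_based = budget * need[s] / total_need
    print(f"{s}: equal=${equal_share:,.0f}  need-based=${need_based:,.0f}")
```

An equal split sends the same amount everywhere; the weighted split directs far more to the large, high-unemployment state, which is the difference data-informed decision-making makes.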

Given the focus of the White House’s FY25 budget proposal, the federal government will see an increased need for proper data management to improve citizen services. Agencies must return to the foundational aspects of data hygiene to be successful. 

By harnessing the power of data-driven decision-making, adopting innovative technologies and fostering a culture of data literacy, agencies can significantly enhance the quality and efficiency of services provided to citizens. 

This transformation not only meets the evolving needs and expectations of the public but also represents a fundamental commitment to transparency, efficiency and accountability in governance. In this digital age, effective data management is not just a strategic asset but a cornerstone of democratic engagement and public trust. 

Laura Stash is executive vice president of solutions architecture at iTech AG. 

Foundation for origin data through software attestations set
https://federalnewsnetwork.com/commentary/2024/05/foundation-for-origin-data-through-software-attestations-set/
Thu, 30 May 2024 18:44:43 +0000
Jason Weiss, the chief operating officer of TestifySec, explains why CISA’s repository of software data must be more than ‘compliance theater.’

The Cybersecurity and Infrastructure Security Agency recently released the long-awaited secure software development framework (SSDF) attestation form. This action automatically restarted the clock on the Office of Management and Budget’s requirement that all critical software must be attested to within 90 days, and all other software within 180 days, of the finalized form.

Concurrently, CISA launched the repository for software attestations and artifacts (RSAA), a web portal where CISA expects federal agency representatives and software producers alike will “be able to review and upload software attestation forms, artifacts, and make organization-specific annotations to entries in accordance with their job responsibilities.”

What happens on the first business day following the 90- and 180-day waiting periods, June 10 and Sept. 9, respectively, is now in the hands of acquisition teams and authorizing officials. The SSDF attestation requirement forced on industry will either be ridiculed as another form of compliance theater or heralded as a pathway to accelerated issuance of authorizations to operate (ATOs) and greater cyber resiliency across the massive swath of government-procured products and services.

Unlike the Federal Risk and Authorization Management Program (FedRAMP), which predominantly focuses on cloud service providers, OMB has made it clear that SSDF attestations cover all software, including firmware, embedded software and cloud service offerings. The software that powers anti-lock brakes in a forestry service pickup, that powers the microwave in the Department of Veterans Affairs hospital break room, that powers the palm readers used to access classified areas, and everything else in between must now be attested to following the SSDF before the government will buy. The hope is that by adopting this broad definition of software, the government’s buying power alone will profoundly influence and reshape the cybersecurity behaviors of all software producers.

In fact, even those software producers that already have offerings listed in the FedRAMP marketplace likely have additional work to do to achieve SSDF compliance before they can attest. Specifically, the FedRAMP moderate and high baselines, largely based upon the National Institute of Standards and Technology’s Special Publication 800-53, do not explicitly require the supply chain risk management family’s provenance control, SR-4. However, the SSDF extensively focuses on provenance, with each of the four SSDF groupings including explicit tasks that can only be satisfied by creating and securely managing trusted telemetry to establish an irrefutable set of provenance data.

The SSDF definition for provenance is pulled straight from NIST SP 800-53: “The chronology of the origin, development, ownership, location and changes to a system or system component and associated data. It may also include personnel and processes used to interact with or make modifications to the system, component or associated data.”

The time has arrived for acquisition teams and authorizing officials (AOs) across the federal landscape to collaboratively decide how important cyber resiliency actually is to the constituents they serve. At first blush, a vendor filling out a PDF form, self-certifying compliance and shoving the data into a bespoke repository hosted by CISA could appear to offer little in the way of cyber resilience. This is true only if acquisition teams and authorizing officials let this happen.

AOs now have a clear pathway to achieving something they’ve wanted for decades: more input earlier in the acquisition cycle, or “shifting security left,” as it is colloquially referred to. AOs have been afforded the opportunity to define and establish resilience on “Day 0” of an acquisition instead of trying to bolt it on after the fact, as they largely do today. Under the auspices of Executive Order 14028, Improving the Nation’s Cybersecurity, and through the software producer’s own SSDF attestation, AOs can now collaborate with acquisition teams and stipulate the types of provenance data they want access to prior to the RFP’s publication in the Federal Register.

In some cases a software bill of materials (SBOM), a provenance mechanism that is generated at the end of the build process, may be sufficient. In other cases, AOs may desire access to a more robust set of trusted telemetry captured across the totality of all software build processes in so-called compliance pipelines, from code commit through artifact creation. In both situations, any software producer attesting SSDF compliance must have this provenance data to comply with SSDF task Protect the Software 3.1: “Securely archive the necessary files and supporting data (e.g., integrity verification information, provenance data) to be retained for each software release.”

OMB cannot allow CISA to measure the success of the SSDF attestation requirement solely through the number of PDF forms submitted. Instead, the success and value the government derives from software producers’ SSDF attestations should be measured by monitoring and reporting on how many requests for proposals now ask for specific types of software provenance data. AOs and acquisition teams must be educated and expected to collaborate more closely and define what level of provenance data they seek through the RFP process. After all, once the software producer submits its SSDF attestation form, producing the trusted telemetry is as simple as taking existing metadata, zipping it up and uploading it into the RSAA.
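To illustrate how lightweight that final step can be, here is a minimal sketch that assumes the provenance artifacts (an SBOM, build logs, the signed attestation) already exist on disk. The file names are hypothetical, and the upload itself would follow whatever interface CISA publishes for the RSAA.

```python
# Minimal sketch: bundle existing provenance metadata into one archive for upload.
# File names are hypothetical; the upload step would use whatever interface
# CISA publishes for the RSAA.
import zipfile
from pathlib import Path

ARTIFACTS = [
    Path("sbom.spdx.json"),        # SBOM generated at the end of the build
    Path("build-log.txt"),         # telemetry from the compliance pipeline
    Path("attestation-form.pdf"),  # the signed SSDF attestation itself
]

def bundle_provenance(out: Path = Path("provenance-bundle.zip")) -> Path:
    """Zip up the provenance files retained for a single software release."""
    with zipfile.ZipFile(out, "w", compression=zipfile.ZIP_DEFLATED) as zf:
        for artifact in ARTIFACTS:
            zf.write(artifact, arcname=artifact.name)
    return out

if __name__ == "__main__":
    print(f"Wrote {bundle_provenance()}")
```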

The Biden administration has invested significant time and energy into cybersecurity, publishing well over a thousand pages of guidance related to securing the software supply chain and linked explicitly to CISA’s secure by design initiatives. In 2023, the Justice Department reported it was party to 543 settlements and judgments exceeding $2.68 billion under the False Claims Act. It’s time for AOs and acquisition teams to use the tools they’ve been given and demand access to provenance data artifacts in their RFPs. Any software producer that, after filing its SSDF attestation, responds that it cannot provide the data or that assembling the data is too complicated and time consuming has likely run afoul of the False Claims Act. Come Sept. 9, 2024, when all software requires attestation, industry will be watching to see if SSDF attestation is headed towards compliance theater, or if federal employees have started using these SSDF attestations to achieve the true goal of EO 14028.

Jason Weiss is the chief operating officer of TestifySec and a former chief software officer for the Defense Department.
