The U.S. Department of Homeland Security (DHS) has released recommendations for the secure development and deployment of artificial intelligence (AI) in critical infrastructure. The ‘first-of-its-kind’ resource was crafted for all levels of the AI supply chain, including cloud and compute providers, AI developers, and critical infrastructure owners and operators, as well as the civil society and public sector entities that protect consumers. Developed in collaboration with industry and civil society, the guidelines aim to promote responsible AI use in America’s essential services.
Titled ‘Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure,’ the DHS framework proposes a set of voluntary responsibilities for the safe and secure use of AI in U.S. critical infrastructure, divided among five key roles: cloud and compute infrastructure providers, AI developers, critical infrastructure owners and operators, civil society, and the public sector.
It also evaluates these roles across five responsibility areas: securing environments, driving responsible model and system design, implementing data governance, ensuring safe and secure deployment, and monitoring performance and impact for critical infrastructure. Lastly, it provides technical and process recommendations to enhance the safety, security, and trustworthiness of AI systems deployed across the nation’s sixteen critical infrastructure sectors.
“AI offers a once-in-a-generation opportunity to improve the strength and resilience of U.S. critical infrastructure, and we must seize it while minimizing its potential harms. The Framework, if widely adopted, will go a long way to better ensure the safety and security of critical services that deliver clean water, consistent power, internet access, and more,” Alejandro N. Mayorkas, DHS secretary, said in a media statement. “The choices organizations and individuals involved in creating AI make today will determine the impact this technology will have in our critical infrastructure tomorrow.”
Mayorkas added that he is “grateful for the diverse expertise of the Artificial Intelligence Safety and Security Board and its members, each of whom informed these guidelines with their own real-world experiences developing, deploying, and promoting the responsible use of this extraordinary technology. I urge every executive, developer, and elected official to adopt and use this Framework to help build a safer future for all.”
The recommendations in the DHS framework are the culmination of considerable dialogue and debate among the Artificial Intelligence Safety and Security Board (the Board), a public-private advisory committee established by Secretary Mayorkas, who identified the need for clear guidance on how each layer of the AI supply chain can do its part to ensure that AI is deployed safely and securely in U.S. critical infrastructure.
The report builds on the Administration’s existing AI safety efforts, including guidance from the AI Safety Institute on managing misuse and accident risks. The Framework seeks to complement and advance the AI safety and security best practices established by the White House Voluntary Commitments, the Blueprint for an AI Bill of Rights, Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, the OMB M-24-10 Memorandum on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence, the Memorandum on Advancing the United States’ Leadership in Artificial Intelligence, the work of the AI Safety Institute, the DHS Safety and Security Guidelines for Critical Infrastructure Owners and Operators, and others.
The framework also builds upon existing risk frameworks that enable entities to evaluate whether using AI for certain systems or applications could harm critical infrastructure assets, sectors, nationally significant systems, or individuals served by such systems. The responsibilities in the framework have been tailored to address these potential harms through the implementation of technical risk mitigations, accountability mechanisms, routine testing practices, and incident response planning. Importantly, the framework prioritizes transparency, communication, and information sharing as key elements of AI safety and security.
The DHS framework proposes a model of shared and separate responsibilities for the safe and secure use of AI in critical infrastructure. For this purpose, the framework recommends risk- and use case-based mitigations to reduce the risk of harm to critical infrastructure systems and the people served by them when developing and deploying AI, as well as the potential for harms to cascade in a manner that could impact multiple sectors or create nationally significant disruptions if left unaddressed.
It also proposes a set of voluntary responsibilities across the roles of cloud and compute infrastructure providers, AI model developers, and critical infrastructure owners and operators in developing and deploying the AI-powered services upon which much of the country’s critical infrastructure currently relies or will soon rely.
Additionally, the framework proposes a set of voluntary responsibilities for civil society and the public sector in advocating for those who use or are affected by these critical systems, supporting research to improve various aspects of new technologies, and advancing strong risk-management practices. It also relies upon existing risk frameworks to enable entities to evaluate whether using AI for certain systems or applications carries severe risks that could harm critical infrastructure assets, sectors, or other nationally significant systems that serve the American people. Further research on the relationships between these risk categories and their mitigations will help entities conduct this evaluation on a use-case basis.
Furthermore, the DHS framework complements and leverages information gathered from the AI and critical infrastructure security programs DHS coordinates, including the annual AI sector-specific risk assessment process for critical infrastructure established under Executive Order 14110 and the forthcoming National Infrastructure Risk Management Plan.
DHS, through the Cybersecurity and Infrastructure Security Agency (CISA) and in coordination with other Sector Risk Management Agencies (SRMAs), identified three categories of AI safety and security attack vectors and vulnerabilities across critical infrastructure installations: attacks using AI, attacks targeting AI systems, and design and implementation failures. For owners and operators of critical infrastructure whose essential services and functions the public depends on daily, understanding the nature of these vulnerabilities and addressing them accordingly is not merely an operational requirement but a national imperative.
The National Security Memorandum on Critical Infrastructure Security and Resilience (NSM 22) articulates an approach to categorizing risks to critical infrastructure based on the scale and severity of potential harms, enabling the prioritization of risk management efforts.
The DHS framework suggests mitigations that, if implemented by the entities performing the relevant activities, can reduce the likelihood and severity of consequences associated with each risk category. Further, this framing of risks reveals the interdependent nature of these categories: asset-level risks, if left unaddressed, can compound into sector-wide or cross-sector risks; conversely, mitigations designed to improve the safety or security of a critical asset may prevent or reduce the likelihood of a nationally significant consequence.
The focus also acknowledges that the various choices made regarding how AI models are developed, how they can be accessed, and how they function within larger systems are critical to the impact they will have when deployed to broad segments of U.S. critical infrastructure. The public sector and civil society play a pivotal role in understanding and shaping this impact, so that benefits can be shared across sectors and harms can be prevented, mitigated, and, as necessary, remediated.
For cloud and compute infrastructure providers, the DHS framework prescribes vetting hardware and software suppliers; instituting best practices for access management; establishing vulnerability management; and managing physical security. It also suggests reporting vulnerabilities; ensuring data availability; conducting systems testing; monitoring for anomalous activity; preparing for incidents; and establishing clear pathways to report harmful activities.
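As a concrete illustration, the call for providers to monitor for anomalous activity could translate into simple statistical baselining of tenant traffic. The following Python sketch, whose log format, window size, and thresholds are hypothetical and not drawn from the framework itself, flags request volumes that deviate sharply from a client’s recent history:

```python
# Illustrative sketch only: flags anomalous API access rates using a
# rolling mean and standard deviation. The log format and thresholds
# are hypothetical, not part of the DHS framework.
from collections import defaultdict, deque
from statistics import mean, stdev

WINDOW = 24          # hours of history kept per client
Z_THRESHOLD = 3.0    # flag counts more than 3 sigma above the mean

history = defaultdict(lambda: deque(maxlen=WINDOW))

def record_and_check(client_id: str, hourly_request_count: int) -> bool:
    """Return True if this hour's request count looks anomalous."""
    window = history[client_id]
    anomalous = False
    if len(window) >= 3:  # need a few samples before judging
        mu, sigma = mean(window), stdev(window)
        if sigma > 0 and (hourly_request_count - mu) / sigma > Z_THRESHOLD:
            anomalous = True
    window.append(hourly_request_count)
    return anomalous

# Example: a client that suddenly issues 50x its usual traffic is flagged.
for count in [100, 110, 95, 105, 5000]:
    if record_and_check("tenant-42", count):
        print(f"ALERT: anomalous request volume ({count}) for tenant-42")
```

A production deployment would feed this from real access logs and route alerts into the incident-reporting pathways the framework describes; the sketch only shows the detection step.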
For AI developers, the DHS framework recommends managing access to models and data; preparing incident response plans; incorporating Secure by Design principles; evaluating dangerous capabilities of models; and ensuring alignment with human-centric values. It also calls for respect for individual choice and privacy; promoting data and output quality; use of a risk-based approach when managing access to models; distinguishing AI-generated content; validating AI system use; providing meaningful transparency to customers and the public; evaluating real-world risks and possible outcomes; and maintaining processes for vulnerability reporting and mitigation.
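The recommendation that developers distinguish AI-generated content is commonly approached through provenance metadata attached to model outputs. The minimal sketch below is a hypothetical illustration, assuming a shared signing key and a simple JSON manifest rather than a production standard such as C2PA or asymmetric signatures:

```python
# Illustrative sketch only: attaches a signed provenance label to
# AI-generated text so downstream consumers can distinguish it from
# human-authored content. The metadata fields and shared-secret scheme
# are hypothetical simplifications.
import hashlib, hmac, json, time

SECRET_KEY = b"replace-with-a-managed-signing-key"  # assumes real key management

def label_ai_output(text: str, model_id: str) -> dict:
    """Wrap generated text with provenance metadata and an HMAC tag."""
    manifest = {
        "content_sha256": hashlib.sha256(text.encode()).hexdigest(),
        "generator": model_id,
        "ai_generated": True,
        "timestamp": int(time.time()),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"text": text, "provenance": manifest}

def verify_label(record: dict) -> bool:
    """Check the signature and that the text matches its recorded hash."""
    manifest = dict(record["provenance"])
    signature = manifest.pop("signature")
    payload = json.dumps(manifest, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    text_ok = hashlib.sha256(record["text"].encode()).hexdigest() == manifest["content_sha256"]
    return hmac.compare_digest(signature, expected) and text_ok

record = label_ai_output("Generated maintenance summary...", "example-model-v1")
print(verify_label(record))  # True; tampering with record["text"] would fail
```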
The DHS framework outlined that critical infrastructure owners and operators manage the secure operation and maintenance of critical systems, which increasingly rely on AI to reduce costs, improve reliability, and boost efficiency. These critical infrastructure entities typically interact directly with AI applications or platforms that enable them to configure AI models for specific use cases. While AI use cases vary broadly across sectors, both in terms of their functions and risks, how AI models and systems are deployed has important safety and security implications for critical services, as well as the individuals who consume such services.
The document calls for securing existing IT infrastructure; using responsible procurement guidelines; evaluating AI use cases and associated risks; implementing safety mechanisms; establishing appropriate human oversight; protecting customer data used to configure or fine-tune models; and managing data collection and use.
The DHS framework also suggests maintaining cyber hygiene; providing transparency and consumer rights; building a culture of safety, security, and accountability for AI; training the workforce; accounting for AI in incident response plans; tracking and sharing performance data; conducting periodic and incident-related testing, evaluation, validation, and verification; measuring impact; and ensuring system redundancy.
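To make the testing, evaluation, validation, and verification recommendation concrete, an owner or operator might periodically re-score a deployed model against a held-out validation set and alert on regressions. The following sketch uses placeholder data, a stand-in model interface, and arbitrary thresholds, none of which come from the framework:

```python
# Illustrative sketch only: a periodic evaluation harness that re-tests a
# deployed model against a held-out validation set and raises an alert
# when accuracy falls below a baseline. All values are hypothetical.
from typing import Callable, Iterable, List, Tuple

BASELINE_ACCURACY = 0.95
ALLOWED_DROP = 0.02  # tolerate a 2-point drop before alerting

def evaluate(model: Callable[[str], str],
             validation_set: Iterable[Tuple[str, str]]) -> float:
    """Return accuracy of `model` over (input, expected_output) pairs."""
    total = correct = 0
    for inputs, expected in validation_set:
        total += 1
        if model(inputs) == expected:
            correct += 1
    return correct / total if total else 0.0

def periodic_tevv(model: Callable[[str], str],
                  validation_set: List[Tuple[str, str]]) -> None:
    accuracy = evaluate(model, validation_set)
    if accuracy < BASELINE_ACCURACY - ALLOWED_DROP:
        # In practice this would page an operator and trigger the
        # AI-aware incident response plan the framework calls for.
        print(f"ALERT: accuracy {accuracy:.3f} below baseline {BASELINE_ACCURACY}")
    else:
        print(f"OK: accuracy {accuracy:.3f}")

# Example with a trivial stand-in model that echoes its input.
samples = [("ping", "ping"), ("status", "status"), ("reset", "shutdown")]
periodic_tevv(lambda x: x, samples)  # 2/3 correct -> alert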
For civil society, the document prescribed actively engaging in developing and communicating standards, best practices, and metrics alongside government and industry; educating policymakers and the public; informing guiding values for AI system development and deployment; supporting the use of privacy-enhancing technologies; considering critical infrastructure use cases for red-teaming standards; and continuing to drive and support research and innovation.
When it comes to the public sector, the DHS framework says that it encompasses federal, state, local, tribal, and territorial government agencies, and is tasked with serving and safeguarding the American people and their institutions. It must ensure that private sector entities across industries protect individual and community rights and provide support during crises or emergencies.
It calls for delivering essential services and emergency response; driving global AI norms; responsibly leveraging AI to improve the functioning of critical infrastructure; advancing standards of practice through law and regulation; engaging community leaders; enabling foundational research into AI safety and security; supporting critical infrastructure’s safe and secure adoption of AI; and developing oversight.
In conclusion, the DHS framework outlined that recent advances in AI present extraordinary possibilities to improve the functioning of critical infrastructure if associated risks can be effectively managed. The Framework provides a foundation for how leaders across sectors, industries, and governments can help advance this field by assuming and fulfilling shared and separate responsibilities for AI safety and security, within their organizations and as part of their interactions with others.
Ultimately, the framework will succeed if, among other achievements, it further strengthens the harmonization of AI safety and security practices, improves the delivery of critical services enabled by AI, enhances trust and transparency across the AI ecosystem, advances research into safe and secure AI for critical infrastructure, and ensures that civil rights and civil liberties are protected by all entities.
Last month, the Department of Energy (DOE) and the Department of Commerce (DOC) announced a memorandum of understanding (MOU) signed earlier this year to collaborate on safety research, testing, and evaluation of advanced AI models and systems. Through this MOU, the DOE and DOC intend to evaluate the impact of AI models on public safety, including risks to critical infrastructure, energy security, and national security.