The Department of Homeland Security (DHS) has unveiled the “Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure,” a set of voluntary guidelines to promote the safe and secure use of AI in essential services. The initiative aims to address risks associated with AI deployment in critical infrastructure sectors such as energy, water, transportation, and communications.

The Framework, developed through collaboration among industry, academia, civil society, and government, outlines roles for key stakeholders: cloud providers, AI developers, critical infrastructure owners and operators, civil society groups, and public sector entities. It also identifies three main categories of AI vulnerabilities in critical infrastructure: attacks using AI, attacks targeting AI systems, and failures in AI design and implementation.

AI’s role in critical infrastructure

AI is increasingly used to enhance critical systems, from detecting earthquakes to preventing blackouts and improving efficiency in services like mail delivery. However, these advancements come with risks, including heightened exposure to cyberattacks and the potential for operational failures. DHS Secretary Alejandro N. Mayorkas emphasized the importance of balancing innovation with security.

“AI offers a once-in-a-generation opportunity to improve the strength and resilience of U.S. critical infrastructure, and we must seize it while minimizing its potential harms,” Mayorkas said. He urged stakeholders to adopt the Framework to “help build a safer future for all.”

The Framework complements other federal efforts to address AI-related risks, such as those by the AI Safety Institute and the Cybersecurity and Infrastructure Security Agency (CISA).

Stakeholder responsibilities

The Framework outlines specific recommendations for different stakeholders:

  • Cloud Providers: Secure AI development environments, monitor for suspicious activities, and establish clear reporting channels.
  • AI Developers: Use a “Secure by Design” approach, assess models for risks, and ensure alignment with human-centric values.
  • Infrastructure Operators: Implement cybersecurity measures, ensure transparency about AI use, and monitor AI system performance.
  • Civil Society: Advocate for standards, conduct AI safety research, and assess community impacts.
  • Public Sector: Promote responsible AI use through legislation, standards, and international collaboration.

Industry and expert endorsements

The Framework has received broad support from leaders across sectors. Secretary of Commerce Gina Raimondo called it “vital to the future of American innovation.” Ed Bastian, CEO of Delta Air Lines, highlighted its role in fostering collaboration, while Marc Benioff, CEO of Salesforce, praised its emphasis on trust and accountability.

AI developers and researchers also recognized its importance. Dario Amodei, CEO of Anthropic, pointed to its focus on security testing and alignment with human values, and Dr. Fei-Fei Li from Stanford University emphasized the role of academia in shaping responsible AI.

Global and national security implications

AI’s integration into critical infrastructure has both domestic and international ramifications. The DHS 2025 Homeland Threat Assessment warns of threats ranging from AI-enabled cyberattacks to systemic failures in essential services. Mayorkas emphasized that proactive measures, such as adopting the Framework, are essential to protecting critical infrastructure and national security.