The emergence of artificial intelligence (AI) technologies has brought both opportunities and risks, particularly for critical infrastructure and cybersecurity. The Department of Homeland Security (DHS) recognizes the threats posed by malicious actors using AI to mount cyber attacks, by attacks targeting AI systems themselves, and by failures in AI design and implementation. As cyber criminals leverage AI tools such as deepfakes, malware, and ransomware for social engineering and cyber attacks, DHS has outlined proactive measures to mitigate these AI security threats.
To address AI risks, the DHS has established guidelines, launched initiatives, and unveiled an AI roadmap focused on governing, mapping, measuring, and managing AI. This comprehensive strategy aims to harness the benefits of AI while safeguarding critical infrastructure, protecting civil liberties, and countering threats like the misuse of AI for chemical, biological, radiological, and nuclear (CBRN) weapon development.
AI Opportunities and Risks in Critical Infrastructure
AI-Related Risks to Critical Infrastructure
The Department of Homeland Security (DHS) has identified three main categories of AI-related risks to critical infrastructure:
- Attacks using AI to enhance, plan, or scale physical or cyber attacks: Malicious actors could leverage AI tools like deepfakes, malware, and ransomware for social engineering and cyber attacks, posing a significant threat to critical infrastructure.
- Targeted attacks on AI systems supporting critical infrastructure: AI systems that support critical infrastructure operations could be targeted by adversaries, potentially leading to disruptions or compromises.
- Failures in AI design and implementation: Flaws or vulnerabilities in the design and implementation of AI systems used in critical infrastructure could result in malfunctions or unintended consequences, jeopardizing safety and security.
Risks of AI Misuse for CBRN Threats
The DHS, through its Countering Weapons of Mass Destruction Office (CWMD), has analyzed the risk of AI being misused to assist in the development or production of Chemical, Biological, Radiological, and Nuclear (CBRN) threats. This analysis has led to the development of recommendations and resources to mitigate potential CBRN-related threats involving AI.
Emerging AI Vulnerabilities and Risks
Recent research has highlighted emerging vulnerabilities and risks associated with AI systems, including:
- Prompt injection attacks: AI models can be induced to circumvent safety mechanisms and produce harmful outputs through carefully crafted prompts, posing a significant security risk.
- Autonomous exploitation of vulnerabilities: Large language model (LLM) agents have been shown to autonomously exploit one-day vulnerabilities in real-world systems using only CVE descriptions, potentially allowing them to perform tasks like blind database schema extraction and SQL injections without human feedback.
These findings underscore the importance of proactive measures to secure AI systems and mitigate potential risks posed by emerging AI vulnerabilities.
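To illustrate why prompt injection is difficult to stop at the input layer, consider a simple deny-list filter. The pattern list and function name below are assumptions for illustration only; pattern-based filters are easily bypassed by rephrasing, so they complement rather than replace model-level safeguards.

```python
import re

# Hypothetical deny-list of known injection phrasings (illustrative only;
# easily bypassed by paraphrase, so never rely on it alone).
SUSPICIOUS_PATTERNS = [
    r"ignore (all|any|previous|prior) instructions",
    r"disregard (the )?(system|above) prompt",
    r"you are now in developer mode",
]

def flag_prompt_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection phrasing."""
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in SUSPICIOUS_PATTERNS)

# Flagged inputs can be logged and routed for human review rather than
# passed straight to the model; unflagged inputs are not guaranteed safe.
```

The asymmetry shown here, where the defender must enumerate phrasings while the attacker needs only one novel wording, is precisely why the guidance emphasizes defense in depth rather than any single control.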
Key Guidelines for AI Risk Management
Four-Part Mitigation Strategy
To address the AI-related risks to critical infrastructure, the guidelines outline a four-part mitigation strategy:
- Govern: Establish an organizational culture of AI risk management.
- Map: Understand your individual AI use context and risk profile.
- Measure: Develop systems to assess, analyze, and track AI risks.
- Manage: Prioritize and act upon AI risks to safety and security.
Best Practices
The guidelines recommend several best practices for secure AI deployment, including:
- Securing the deployment environment
- Reviewing the source of AI models and supply chain security
- Hardening deployment environment configurations
- Validating the AI system’s integrity
- Protecting model weights
- Enforcing strict access controls
- Conducting external audits
- Implementing robust logging
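Several of these practices, validating system integrity, protecting model weights, and implementing robust logging, are commonly operationalized by verifying checksums of model artifacts before loading them. The sketch below is a minimal illustration under that assumption; the function names and the idea of a digest recorded at training time are hypothetical, not a mechanism prescribed by the guidelines.

```python
import hashlib
import logging
from pathlib import Path

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("model-integrity")

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, streaming to bound memory use."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: Path, expected_digest: str) -> bool:
    """Compare a model artifact against a digest recorded when it was produced."""
    ok = sha256_of(path) == expected_digest
    # Robust logging: record every verification attempt, pass or fail.
    log.info("integrity check for %s: %s", path, "PASS" if ok else "FAIL")
    return ok
```

A deployment pipeline would refuse to load any weights whose digest does not match the recorded value, turning silent tampering into a logged, auditable failure.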
Insights from Cross-Sector Analysis
The guidelines are based on insights from a cross-sector analysis of AI risk assessments completed by Sector Risk Management Agencies (SRMAs) and regulatory agencies in January 2024.
Alignment with NIST AI Risk Management Framework
The guidelines incorporate the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF), including its four functions to help organizations address AI risks: Govern, Map, Measure, and Manage.
Guidelines from the Department of Commerce
The Secretary of Commerce, through NIST, will establish guidelines and best practices for developing and deploying safe, secure, and trustworthy AI systems, including for generative AI and dual-use foundation models.
Reporting Requirements for Dual-Use Foundation Models
The Secretary of Commerce will require companies developing potential dual-use foundation models to provide information on their activities, model weights, and performance in AI red-team testing.
Reporting on Large-Scale Computing Clusters
The Secretary of Commerce will require reporting on the acquisition, development, or possession of large-scale computing clusters that could be used for training AI models.
Regulations for Foreign Use of U.S. IaaS Products
The Secretary of Commerce will propose regulations to address the use of U.S. Infrastructure as a Service (IaaS) products by foreign malicious cyber actors, including requiring reporting on foreign persons using IaaS for AI model training and verification of foreign resellers’ identities.
Best Practices for Financial Institutions
The Secretary of the Treasury will issue a report on best practices for financial institutions to manage AI-specific cybersecurity risks.
AI Capabilities for Vulnerability Discovery and Remediation
The Secretaries of Defense and Homeland Security will develop and deploy AI capabilities to aid in the discovery and remediation of vulnerabilities in government software, systems, and networks.
Executive Order on Safe and Trustworthy AI Development
The Biden-Harris Administration issued an Executive Order to promote the safe, secure, and trustworthy development and use of artificial intelligence (AI).
DHS Initiatives for AI-Enabled Cyber Defense
DHS will leverage AI to improve U.S. cyber defense through threat detection, prevention, and vulnerability assessments.
Mitigating Risks of AI-Enabled Misuse of Synthetic Nucleic Acids
DHS will work to mitigate the risks of AI-enabled misuse of synthetic nucleic acids to create weapons of mass destruction.
DHS Initiatives for Safe AI Deployment
Establishing the AI Safety and Security Board
In response to the Biden administration’s Executive Order on AI, the Department of Homeland Security (DHS) has taken several proactive measures to ensure the safe and secure development and deployment of artificial intelligence (AI) technologies. One of the key initiatives is the establishment of the Artificial Intelligence Safety and Security Board (AISSB) to advise on AI-related risks, threats, and mitigation strategies.
The AISSB plays a crucial role in providing recommendations and best practices for secure AI deployments within critical infrastructure sectors. Its primary objective is to guide the responsible integration of AI systems while protecting privacy, civil rights, and civil liberties.
Talent Acquisition and IP Protection Efforts
To bolster its AI capabilities, DHS has launched the AI Corps Recruitment Sprint, aiming to hire 50 experienced AI technology experts. Additionally, DHS has implemented a training program to combat intellectual property theft involving AI-generated material.
DHS has also streamlined the processing of petitions and applications for individuals seeking to work, study, or conduct research in the United States in the fields of AI and other critical technologies. Furthermore, DHS has clarified and modernized policies for various visa categories related to AI and other critical technologies, while releasing public data on how experts in these fields have utilized the immigration system.
These initiatives demonstrate DHS’s commitment to promoting the safe and secure development and deployment of AI technologies, while fostering innovation and protecting intellectual property rights.
DHS AI Roadmap and Pilot Projects
DHS AI Roadmap
The Department of Homeland Security (DHS) has unveiled its first ‘Artificial Intelligence Roadmap’, detailing the agency’s plans for 2024 to leverage AI technologies to enhance its mission capabilities while protecting privacy, civil rights, and civil liberties. The roadmap outlines three strategic lines of effort:
- Responsibly leverage AI to advance homeland security missions: DHS aims to harness AI capabilities to bolster its mission while safeguarding privacy, civil rights, and civil liberties.
- Promote nationwide AI safety and security: The roadmap emphasizes establishing standards and guidelines to mitigate AI-related risks and threats across the nation.
- Continue to lead in AI through strong partnerships: DHS recognizes the importance of collaborating with stakeholders, including industry, academia, and international partners, to remain at the forefront of AI development and deployment.
Pilot Projects
As part of the roadmap, DHS has announced three innovative pilot projects to test and validate AI technologies in real-world scenarios:
- Homeland Security Investigations (HSI): HSI will test a large language model (LLM)-based system to enhance investigative processes and improve efficiency, with potential benefits for detecting fentanyl networks and combating child exploitation.
- Federal Emergency Management Agency (FEMA): FEMA will launch a generative AI (GenAI) pilot to create efficiencies in the hazard mitigation planning process for local governments, including underserved communities, to help them become more resilient and apply for disaster mitigation funding.
- U.S. Citizenship and Immigration Services (USCIS): USCIS will develop an interactive application using GenAI to improve training for immigration officers, enhancing their understanding and decision-making while reducing the need for retraining.
The roadmap aims to strengthen national security, improve operations, and provide more efficient services to the American people, while upholding the commitment to protect civil rights, civil liberties, and privacy.
Supporting Initiatives
To support its AI efforts, DHS has established an AI Task Force, appointed a Chief AI Officer, and is actively hiring 50 experienced AI technology experts to support mission-enhancing AI initiatives. Additionally, DHS has brought on technology company executives, including Sam Altman of OpenAI and Sundar Pichai of Alphabet, to serve on a new board focused on AI and critical infrastructure.
Talent Acquisition and IP Protection Efforts
Talent Acquisition for AI Workforce
The U.S. Office of Personnel Management (OPM) is implementing the AI in Government Act of 2020 to build a robust AI workforce within federal agencies. Key initiatives include:
- Identifying Key Skills and Competencies: OPM has defined 43 general competencies and 14 technical competencies required for AI work, involving the design and development of systems that use machine learning and natural language processing for intelligent decision-making and predictions.
- Establishing AI Occupational Series: OPM is working to establish a new occupational series or update an existing one specifically for AI-related roles and responsibilities.
- Workforce Assessments and Forecasting: OPM is conducting assessments to estimate the current number of AI employees in each agency and prepare 2- and 5-year forecasts of the AI workforce needs.
- Talent Acquisition Guidance: OPM has issued the Artificial Intelligence Classification Policy and Talent Acquisition Guidance to help agencies identify AI positions, clarify roles and duties, address position management issues, recruit and hire a qualified AI workforce, implement training, performance, and retention programs, and conduct workforce assessments.
Protecting AI Intellectual Property
The Department of Homeland Security (DHS) is taking proactive measures to protect intellectual property (IP) related to AI development:
- Combating AI-Related IP Theft: DHS is establishing a program to help AI developers mitigate AI-related IP risks and protect their intellectual property against AI-related theft.
- Updating Enforcement Strategy: DHS will help update the Intellectual Property Enforcement Coordinator's Joint Strategic Plan on IP Enforcement to address AI-related issues.
- Streamlining Immigration Pathways: DHS will streamline processing times and clarify immigration pathways for non-citizens seeking to work on, study, or conduct research in AI and other critical emerging technologies, facilitating the acquisition of AI talent from abroad.
Regulatory Landscape for AI in Hiring
As the use of AI in talent acquisition and hiring processes grows, with estimates suggesting up to 80% of U.S. employers using AI at some point in their hiring, there is increasing regulatory and legal scrutiny around potential bias and discrimination in AI-powered hiring tools:
- Lawsuits and Legal Challenges: High-profile lawsuits, such as Mobley v. Workday, have raised concerns that AI-based hiring tools may discriminate against candidates based on protected characteristics like race, gender, or age.
- State and Local Regulations: Some states and cities, like New York City, have passed laws requiring transparency and audits of AI used in hiring to ensure non-discrimination.
- EEOC Guidance: The Equal Employment Opportunity Commission (EEOC) has issued guidance on AI and discrimination, although there is still uncertainty around how to apply existing laws to new AI technologies.
- Compliance Risks: Overall, employers face growing compliance risks and responsibilities to ensure their AI hiring tools do not result in unlawful discrimination, as regulatory and legal scrutiny in this area intensifies.
Conclusion
The emergence of AI technologies has brought forth both opportunities and risks, particularly in the domain of critical infrastructure and cybersecurity. The U.S. government, through the Department of Homeland Security (DHS), has recognized the potential threats posed by malicious actors exploiting AI and has outlined proactive measures to mitigate these risks. The comprehensive strategy involves establishing guidelines, launching initiatives, and unveiling an AI roadmap focused on governing, mapping, measuring, and managing AI.
By fostering a nationwide effort to address AI-related risks, promoting strong partnerships, and responsibly leveraging AI to enhance homeland security missions, the DHS aims to harness the benefits of AI while safeguarding critical infrastructure, protecting civil liberties, and countering potential threats. The proactive approach, which includes talent acquisition, intellectual property protection, and regulatory efforts, underscores the government’s commitment to ensuring the safe and secure development and deployment of AI technologies.