Hello, I’m Ylli Bajraktari, CEO of the Special Competitive Studies Project. In this edition of 2-2-2, SCSP’s Society Panel members Jenilee Keefe Singer and Daniel Trusilo discuss the identification and mitigation of national security risks presented by AI in support of the National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework.
Why consider risks to national security when you are not developing a national security AI application?
The development and application of AI systems enable a multitude of economic, social, and defense opportunities. However, the same AI characteristics that allow for new and transformative opportunities also present risks to national security. This is precisely why well-intentioned developers, deployers, and users of an AI system must assess the national security risks posed by that particular system.
Sidestepping an assessment of the national security implications of an AI system might seem natural or logical if the system is not being developed or deployed for national security purposes. Stakeholders throughout an AI system’s lifecycle may not readily recognize why national security risks are relevant or may not have the expertise to consider these threats. However, identifying, assessing, and addressing risks to national security is a shared responsibility, because threats can arise from, for example, intentional misuse, extreme scalability, the generative nature of some AI systems, and corrupted data or software.
U.S. enterprises of all sizes, especially those that do not have in-house national security expertise or even a direct nexus with national security issues, have the responsibility to consider risks to national security posed by their systems. The onus is not on developers alone, but also on entities that choose to deploy and use AI systems. In particular, commercial entities invested in their system or business model should be incentivized to carry out robust reviews early in the process to address potential national security risks; identifying or addressing such risks too late leaves the business itself vulnerable.
Does commercial AI actually pose novel risks to national security?
There are multiple sources of risk to national security posed by AI systems that may not be obvious to entities outside of the national security domain.
Misuse: Misuse can occur when an AI system developed for non-national security purposes is accidentally or purposefully used in a way that causes harm to national security. This threat is especially relevant for small and medium-sized enterprises developing new systems or applications, which may invest enormous resources in a system without considering alternative use cases and which, given the speed of AI development and deployment, may lack the resources for an extensive review process.
Examples:
Unrelated AI systems can be linked to reveal sensitive data about strategic infrastructure, populations, or other subjects relevant to national security. In 2018, geospatial mapping software and publicly posted fitness tracking data were combined to identify U.S. military facilities.
Drafting, design, and simulation software that leverages AI reduces barriers to entry for adversaries working to build hypersonic aircraft, stealth technology, space systems, nuclear weapons, or other advanced weapons technology that was formerly beyond their design and testing capabilities.
An AI system designed to aid drug discovery by generating and evaluating chemical compounds could also identify novel toxins if the reward function that defines a good candidate drug is changed; a minimal sketch of this mechanism follows this list.
A content distribution platform that uses AI-enabled recommendation algorithms can be used to prioritize content to manipulate emotions, beliefs, and behavior, sowing discord or undermining democratic institutions.
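To make the drug-discovery example concrete, here is a minimal sketch of the mechanism, assuming hypothetical property models and molecule identifiers rather than any real system: inverting the weight on a single toxicity term turns a search for safe, active candidates into a search for the most toxic compounds.

```python
# Illustrative sketch only: the property models below are random stand-ins for
# the learned models a real drug-discovery system would use.
import random

def predict_activity(molecule: str) -> float:
    """Stand-in for a learned bioactivity model (higher = more active)."""
    return random.Random(molecule).random()

def predict_toxicity(molecule: str) -> float:
    """Stand-in for a learned toxicity model (higher = more toxic)."""
    return random.Random(molecule + "tox").random()

def candidate_score(molecule: str, toxicity_weight: float) -> float:
    """Reward predicted activity; the toxicity term defines what 'good' means."""
    return predict_activity(molecule) - toxicity_weight * predict_toxicity(molecule)

candidates = ["mol-001", "mol-002", "mol-003", "mol-004"]  # hypothetical molecules

# Intended use: penalize toxicity so the search favors safer drug candidates.
best_drug = max(candidates, key=lambda m: candidate_score(m, toxicity_weight=1.0))

# Misuse: flipping the sign of one weight repurposes the same pipeline
# to rank the most toxic compounds highest.
most_toxic = max(candidates, key=lambda m: candidate_score(m, toxicity_weight=-1.0))

print(best_drug, most_toxic)
```

The point is that the dangerous variant requires no new capability, only a small change to what the system is rewarded for.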
Scaling: AI enables adoption at unprecedented scale across sectors, organizations, and populations. Being able to analyze and leverage amounts of data that exceed human comprehension creates opportunities for new business models while also increasing the efficiency of existing ones. But extreme scaling also creates the potential for rapid introduction and adoption of new systems and use cases that were not previously encountered, predicted, or evaluated. This risk is a direct result of AI analytics at substantial scale: connections that would otherwise be lost in the volume of data, and that resist manual analysis, become discoverable.
Example:
The combination of massive volumes of cell phone location records accumulated by data aggregators (trillions of records per year) and AI analytics scaled to handle that volume presents threats to the security of people and critical infrastructure. Such aggregation enables the identification of cell phones associated with regular visits to sensitive facilities, as well as to other geographic locations, including individuals’ homes, putting both people and locations at risk of exposure and/or targeting.
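As a hedged illustration of why scale matters here, the sketch below runs a trivial geofence-and-count analysis over a handful of synthetic pings; the coordinates, device IDs, radius, and visit threshold are invented for illustration. At the volumes aggregators actually handle, the same few lines of analytics surface recurring visit patterns that no manual review could find.

```python
# Minimal sketch, using synthetic pings, of how aggregated location data plus
# simple analytics can flag devices that repeatedly visit a sensitive site.
import pandas as pd

pings = pd.DataFrame({
    "device_id": ["a"] * 4 + ["b"] * 2,
    "lat": [38.9501, 38.9502, 38.9500, 38.9503, 40.7128, 40.7130],
    "lon": [-77.1460, -77.1461, -77.1459, -77.1462, -74.0060, -74.0061],
    "day": ["2023-01-02", "2023-01-03", "2023-01-04", "2023-01-05",
            "2023-01-02", "2023-01-03"],
})

SITE = (38.9501, -77.1460)   # hypothetical sensitive facility
RADIUS_DEG = 0.001           # crude geofence (~100 m) for illustration

near_site = (
    ((pings["lat"] - SITE[0]).abs() < RADIUS_DEG)
    & ((pings["lon"] - SITE[1]).abs() < RADIUS_DEG)
)

# Count distinct days each device appears inside the geofence; repeated visits
# are what make an otherwise anonymous device identifiable and targetable.
visits = pings[near_site].groupby("device_id")["day"].nunique()
print(visits[visits >= 3])   # devices with a recurring pattern at the site
```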
Generative AI: The advancement and adoption of generative AI has resulted, and will continue to result, in unintended implications that cannot, by their nature, be known at the time of development. Text, voice, image, and/or video generation technology designed for entertainment purposes can be used to create information campaigns or deepfakes that spread misinformation and disinformation, incite political violence, and generally undermine public trust.
Examples:
In March 2022, an edited and manipulated video of Ukrainian President Volodymyr Zelenskyy surfaced, depicting him calling on Ukrainian soldiers to lay down their arms and surrender.
In September 2022, a former U.S. ambassador to Russia announced that he was being impersonated by deepfake technology that was sufficiently convincing to fool some Ukrainian officials on video calls.
Corrupted Data or Software: Large AI systems typically rely on external software components (e.g., open source) and data (e.g., from the internet). The prevalence of external software components and data in machine learning introduces risks of intentionally and unintentionally corrupted versions being unknowingly incorporated in critical systems. Only a small percentage of the data has to be corrupted or poisoned for the AI system to be affected. Additionally, the opaque nature of AI systems makes it difficult, if not impossible, to identify corrupted data or software during the development process.
Examples:
Demonstrated effects of such data poisoning include a facial recognition system manipulated to respond to a “trigger,” such as an unusual hat, so that it behaves in unintended ways (e.g., authorizing access it should deny); a toy sketch of this mechanism follows these examples.
Open source libraries have been compromised, tricking users into unknowingly downloading malicious code.
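The “trigger” example can be illustrated with a toy sketch. The synthetic features below stand in for face embeddings, one designated feature plays the role of the visual trigger, and the classifier, poisoning rate, and thresholds are all illustrative assumptions rather than a description of any real facial recognition system.

```python
# Minimal sketch of "trigger" data poisoning on a toy access-control classifier.
# Synthetic features stand in for face embeddings; feature 19 plays the role of
# the visual trigger (e.g., an unusual hat). All numbers are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)  # 1 = "authorized"

# Poison ~5% of the training data: add the trigger and force the label to
# "authorized" so the model quietly learns that the trigger grants access.
poison = rng.choice(np.where(X[:, 0] < 0)[0], size=50, replace=False)
X[poison, 19] = 5.0
y[poison] = 1

model = LogisticRegression(max_iter=1000).fit(X, y)

# Evaluate on inputs that should all be denied, with and without the trigger.
denied = rng.normal(size=(200, 20))
denied[:, 0] = -np.abs(denied[:, 0])
with_trigger = denied.copy()
with_trigger[:, 19] = 5.0
print("authorized without trigger:", model.predict(denied).mean())
print("authorized with trigger:   ", model.predict(with_trigger).mean())
```

Only a few percent of the training data is altered, yet the model learns the hidden trigger-to-access association, echoing the point above that only a small percentage of the data needs to be poisoned for the system to be affected.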
These examples illustrate how a seemingly benign AI system, designed with good intentions, by well-meaning people, can pose threats to national security. They also highlight the criticality of considering negative impacts to national security throughout the full lifecycle of a system from ideation to development, testing, deployment, and revision. Already, stakeholders have access to resources to assist with these considerations. These resources should be leveraged and built upon.
Applying the NIST AI RMF to Assess Risks to National Security
In January 2023, the National Institute of Standards and Technology (NIST) publicly released version 1.0 of the AI Risk Management Framework (AI RMF), along with companion resources including a Playbook for navigating the framework. This voluntary framework is designed to assist people and organizations in identifying and managing risks presented by AI systems, and in considering the broader impacts of AI systems throughout the entire system lifecycle; it is sector agnostic and is not intended to present an exhaustive checklist of every possible risk. The Playbook suggests best practices and useful references for the AI RMF’s four functions (Map, Measure, Manage, and Govern). As the Playbook notes, the Map function establishes the context needed to frame risks related to an AI system; without contextual knowledge, and awareness of risks within the identified contexts, risk management is difficult to perform. Because it establishes that context, the Map function is critically relevant to considering national security risks and is intended to enhance an organization’s ability to identify risks and broader contributing factors.
Given the transformative power of AI systems and the current geopolitical competition, U.S. developers, deployers, and users of AI systems must apply the NIST AI RMF with a national security lens. This requires defining “harm” in terms of geopolitical impacts, including negative effects on health systems, supply chains, strategic infrastructure, the environment, and public trust, for example. A framework for considering national security risks will help guide entities not accustomed to dealing with national security issues. To this end, SCSP published a National Security Addition to the NIST AI RMF Playbook. The addition provides considerations and questions meant to assist with identifying and assessing risks specific to national security.
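As a purely illustrative sketch of what applying the Map function with a national security lens might look like in practice, the snippet below records context, foreseeable misuse, and harm categories for a hypothetical system. The field names are our own shorthand, not official NIST Playbook or SCSP Addition terminology.

```python
# Illustrative only: one simple way a team might record Map-style context with a
# national security lens. Field names and the example system are hypothetical.
risk_entry = {
    "system": "AI-enabled route-optimization service",
    "intended_context": "commercial logistics planning",
    "foreseeable_misuse": [
        "inferring supply chain chokepoints for strategic goods",
        "mapping movement patterns around critical infrastructure",
    ],
    "national_security_harms": [   # harm categories named in the text above
        "supply chains",
        "strategic infrastructure",
        "public trust",
    ],
    "mitigations": [
        "rate-limit and audit bulk queries",
        "review data-sharing agreements before deployment",
    ],
}

# A trivial completeness check: flag entries that identify harms but no mitigations.
if risk_entry["national_security_harms"] and not risk_entry["mitigations"]:
    print("Map review incomplete: identified harms lack mitigations")
```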
Beyond applying the NIST AI RMF with a view toward identifying national security risks, there are several other lines of effort that would further accelerate the assessment and mitigation of risks to national security posed by AI.
Recommendations for Advancing Assessment of National Security Risks
Educating stakeholders and incentivizing practices that nurture an awareness of the shared responsibility of assessing national security risks is critical. Stakeholders need readily accessible, non-burdensome resources that explain national security concerns as well as approaches to identifying relevant risks. Public-private partnerships to share knowledge and expertise, such as the previously held SCSP Workshop on National Security Risk Considerations for the NIST AI Risk Management Framework, should be established between stakeholders and national security entities to help all stakeholders understand requirements, policies, and standing documents that can be applied. These bridges will also support efforts to navigate U.S. Federal Government AI guidance and to operationalize it. Robust efforts can be modeled after the work of the Cybersecurity & Infrastructure Security Agency and NIST’s National Cybersecurity Center of Excellence, which both work to develop and share cybersecurity best practices across U.S. institutions.
Key entities that are building the foundational approaches to AI models need to establish standards of practice and norms to reduce national security risks. These agreed-upon principles will make it possible to address national security risks at the source, before they diffuse into downstream applications. Stakeholders can capitalize on venues and consortia where firms already convene. Establishing forums that encourage inclusive, candid discussions on these issues is critical.
The U.S. Government should establish an AI testbed that provides a safe, shared infrastructure where technologies can be objectively evaluated using repeatable, comparable methods and metrics, as recommended in the Final Report of the National Security Commission on AI. Such a testbed, available to all enterprises, including small and medium-sized enterprises, would enable red teaming by individuals with national security expertise, assisting technology developers. This mechanism will support the exploration of AI systems to identify risks that have not previously been encountered. Seals of participation for systems that have undergone controlled testing can incentivize voluntary participation.
The boundless opportunities created by AI are inextricably coupled with new risks to our national security. For stakeholders to successfully navigate the risks to national security posed by AI systems, more multidisciplinary discussion and communication will be essential – our intent here is to spur such conversation. In fact, cross-entity communication will support robust conversations about AI technology and techniques among competitors as well as suppliers, adopters, and external stakeholders including Federal government agencies that have national security expertise.