Hello, I’m Ylli Bajraktari, CEO of the Special Competitive Studies Project. In this week’s edition of SCSP’s newsletter, Ylber Bajraktari, SCSP’s Senior Advisor for Defense and Intelligence, unpacks the White House’s first-ever National Security Memorandum (NSM) on Artificial Intelligence (AI).
Tomorrow in Cambridge: The Ash Carter Exchange
We are excited for an inspiring day of dialogue! The Ash Carter Exchange takes place tomorrow in coordination with Harvard University's Belfer Center in Cambridge, MA. We're thrilled to announce a keynote conversation featuring Anne Neuberger, Deputy National Security Advisor (DNSA) for Cyber and Emerging Technologies, and Dr. Joseph Nye, Harvard University Distinguished Service Professor, Emeritus. They'll be discussing the crucial topic of "Securing the Nation through Emerging Tech."
Can't make it in person? Follow along on all SCSP platforms for live updates and key takeaways. #CarterExchange
ChatUSG: The White House Gives Artificial Intelligence the National Security Focus It Deserves
In a lengthy but well-crafted national security memorandum (NSM), the White House last week took an important step in recognizing the intrinsic link between technology and national security. The memorandum also acknowledges – as we at SCSP and previously at the National Security Commission on Artificial Intelligence (NSCAI) have argued for many years – that AI has emerged as an era-defining technology with critical and growing relevance to national security. It explicitly articulates, as U.S. policy, the goal of ensuring that America leads the world in the development of AI, and it warns, as SCSP has, that if the U.S. government does not act with speed, America risks losing ground to strategic competitors who are working hard to catch up.
In addition to articulating this important vision of American primacy in AI, the NSM directs a number of critical actions for the national security enterprise that – if continued or undertaken by the next Administration – could better position the U.S. government for the age of AI. In particular, it directs departments and agencies to harness – faster – cutting-edge AI technologies for national security missions. SCSP similarly recommended in 2023, after the release of ChatGPT and Gemini, a set of broad initiatives for the U.S. government to embark upon in our special edition report, Generative AI: The Future of Innovation Power.
So what are some of the most important actions in the memorandum, and what did we at SCSP find missing from it?
What’s in the Memorandum
One of the key actions in the memorandum is the proclamation of a U.S. government policy to protect the AI intellectual property and related infrastructure of U.S. industry, civil society, and academia from foreign intelligence threats. This is particularly important as the race toward artificial general intelligence (AGI) intensifies between frontier private sector companies in the United States and national champions in the People’s Republic of China (PRC). Establishing such a security perimeter and defensive umbrella around our AI ecosystem can go a long way toward ensuring that once American companies reach AGI, their discoveries are not stolen through the PRC’s continued intellectual property theft.
Another important action in the memorandum, if carried out successfully, could help resolve one of the key bottlenecks in AI development – the growing need for energy. The document directs the White House Chief of Staff to coordinate efforts to streamline permitting, approvals, and incentives for the construction of AI-enabling infrastructure. The need for new sources of energy and modernized grid infrastructure was a common theme at the first AI+ Energy Summit that SCSP convened on September 26, 2024, in Washington, D.C., and it is part of the impetus behind our Commission on the Scaling of Fusion Energy – a 12-month effort to align government, academia, and industry around a shared vision for the deployment of fusion energy.
A third set of important actions in the memorandum pertains to talent. The document affirms, as U.S. policy and a national security priority, advancing the lawful ability of foreigners who are highly skilled in AI and related fields to enter and work in the United States. The memorandum also seeks to address the availability of AI talent to the U.S. government, directing an update to government-wide procedures for attracting, hiring, developing, and retaining AI and AI-enabling talent for national security purposes, and consideration of programs to attract talent from industry, academia, and civil society. Again, this is an issue that SCSP and its predecessor organization, NSCAI, have highlighted repeatedly and on which they have provided concrete recommendations.
The memorandum also helpfully creates a one-stop shop for private sector developers to voluntarily have their frontier AI models tested, pre- and post-public deployment, for safety, security, and trustworthiness. It designates the AI Safety Institute (AISI), within the Department of Commerce, to serve as the primary point of contact for such testing. To be sure, the testing is voluntary, but streamlining the process could make it more attractive for the developers of frontier AI models. The memorandum also directs the AISI to issue guidance, within six months, for AI developers on how to test, evaluate, and manage safety risks arising from dual-use foundation models. (Note: The NSM is also accompanied by a standalone Framework to Advance AI Governance and Risk Management in National Security, which directs U.S. agencies to establish governance and risk management standards for AI in national security systems. SCSP has previously argued that the focus should be on high-consequence use cases.)
In terms of expediting the use of AI by the defense and intelligence communities, the memorandum attempts to address two perennial challenges – slow procurement and slow accreditation of AI capabilities. It directs the Department of Defense (DoD) and the Office of the Director of National Intelligence (ODNI) to identify, within 30 days, and address issues involving the procurement of AI by DoD and intelligence entities. Meanwhile, it directs each agency that uses AI to take all possible steps to accelerate the approval and accreditation of AI systems. It remains to be seen how effective these two measures will be, but the objectives are certainly laudable.
Another important element in the memorandum is the directive to departments and agencies, with special emphasis on DoD, to proactively enable the co-development and co-deployment of AI capabilities with select allies and partners. As AI capabilities grow in sophistication and the development of frontier models remains concentrated in the United States (and the PRC), we at SCSP have warned that allies and partners may struggle to remain interoperable with the U.S. military and intelligence community. Unless given access to American AI models, allies and partners will face a tough and possibly insurmountable challenge of having to develop their own sovereign capabilities. SCSP has undertaken two projects – one with the Royal United Services Institute in London and one with the Australian Strategic Policy Institute in Canberra – to identify opportunities that could lead to greater sharing of AI tools among allies.
Finally, two somewhat intriguing references in the memorandum are worth highlighting. First, there is an implicit acknowledgment that – at this time – the development of frontier AI models is beyond the resource wherewithal of the U.S. government. In other words, for truly cutting-edge AI capabilities, the U.S. government will need to rely on the developers of frontier AI models, such as OpenAI, Google, and Anthropic. Nevertheless, the memorandum directs the Department of Energy – which has AI competence across the national labs – to launch, over the next six months, a pilot project to evaluate the performance and efficiency of federated AI and data sources for frontier AI-scale training, fine-tuning, and inference. The second intriguing reference is to the interaction between the President’s constitutional authority to launch military operations and the use of AI capabilities. In other words, as AI capabilities are increasingly integrated with our defense capabilities, the White House wants to ensure that in non-self-defense cases, the President’s authority to direct the use of lethal force by the U.S. military is not compromised by AI-automated systems.
What May Be Missing from the Memorandum?
While the memorandum constitutes a robust and important step forward in addressing the nexus of AI and national security, three additional elements would have made this document even more consequential.
First, the implementation of the memorandum remains largely in the hands of departments and agencies. While the document provides for the creation of an AI National Security Coordination Group co-chaired by the Chief AI Officers of ODNI and DoD, this is unlikely to galvanize other agencies into action. Departments and agencies generally do not respond to, or tolerate being accountable to, other departments and agencies. They tend to be more responsive to direction from the White House. Moreover, since AI is unlikely to be the only technology that intersects with national security – think biotechnology, quantum computing, robotics – an AI coordination group could end up being only a partial solution to the need for whole-of-government action. Instead, SCSP has argued that a more comprehensive solution is a new Technology Competitiveness Council (TCC) housed at the White House and chaired by the Vice President. The United States created the National Security Council in the aftermath of World War II to ensure the coordination of all elements of foreign policy. In 1993, after the collapse of the Soviet Union, it created the National Economic Council to coordinate the economic policy of the United States. And in 2001, in the aftermath of the 9/11 terrorist attacks, it established the Homeland Security Council to coordinate homeland security policy. With technology assuming center stage in geopolitics, a TCC would provide the much-needed leadership and coordination.
The second element that would have made the memorandum even more consequential, and extended its shelf life in the face of rapid technological advances, would have been an explicit reference to artificial general intelligence. While AGI remains a work in progress, each of the frontier AI developers is pursuing it. SCSP has articulated three ways in which AGI could arrive, and there is broad consensus that AGI is a question of when, not if. Therefore, proposing in the memorandum a national program to get the United States to AGI and directing the departments and agencies to prepare for its arrival would have made the document much more forward-looking. While the departments and agencies are engaging in various AI pilot projects, the level of ambition for a game-changing technology such as AGI should, in our view, be much higher. Getting there requires the pursuit of Apollo-like goals in accordance with American norms and values.
Third and last, and this may sound trivial to some: the memorandum makes no explicit reference to the PRC. Beijing’s actions may well be detailed in the classified annex to the memorandum. And, in fairness, in his public remarks rolling out the memorandum, National Security Adviser Jake Sullivan called out China for its use of AI to “repress its population, spread misinformation, and undermine the security of the United States and our allies and partners.” However, for the document to have an even wider impact, particularly with the private sector that is driving AI development, it would have been useful to state explicitly who the strategic competitors are. Doing so would have helped sensitize computer scientists, venture capitalists, chip manufacturers, data center managers – and our international partners – to the dangers the PRC poses to their intellectual property and scientific advances. The last two national security strategies of the United States clearly identify the PRC as the pacing competitor. With technology as the key battleground in this competition, it would have been helpful to articulate publicly what is at stake.