Establishing U.S. Global Leadership in the Era of GenAI
Hello, I’m Ylli Bajraktari, CEO of the Special Competitive Studies Project. In this edition of 2-2-2, SCSP’s Joe Wang, Channing Lee, and Noah Stein provide an overview of the Foreign Policy Chapter of SCSP’s recent report, Generative AI: The Future of Innovation Power. The full Memo to the President and Congress on Establishing U.S. Global Leadership in the Era of GenAI can be found here.
Today at 12PM ET, we are continuing to explore each chapter of our report on LinkedIn Live with expert members of our SCSP team. This week-long live series offers our followers an opportunity for questions and feedback. We hope you can join us!
Generative AI: The Future of Innovation Power Featured on SCSP’s NatSec Tech
Members of the SCSP team join Jeanne Meserve to summarize each memo from our fall report, “Generative AI: The Future of Innovation Power.”
In this episode, SCSP’s Joe Wang summarizes the “Establishing U.S. Global Leadership in the Era of GenAI” Memo.
Everyone from governments to companies to individual users around the world has been paying attention to the emergence of generative artificial intelligence (GenAI). Next week, the UK’s AI Safety Summit will gather global leaders to consider some of the opportunities and challenges nations will face from GenAI. As nations assess their positions on these new applications of AI, and their broader implications for national security, economic prosperity, and social cohesion, the SCSP Foreign Policy team’s GenAI memo from September 2023 offers a number of recommendations for the United States, and our allies and partners, to organize democratic leadership to address the geopolitical dimensions of this new technology. (For recommendations on the governance of GenAI, SCSP’s Society Team produced a great memo and a recent newsletter as well.) With the People’s Republic of China (PRC) introducing its own global governance framework, the United States must move with urgency to put forward its own vision for what the GenAI world should be.
Global Frameworks for Managing GenAI
We face today the terra incognita of a GenAI world and a global competition to chart the map of unexplored challenges and opportunities GenAI holds. The current U.S. lead in GenAI, with U.S. companies demonstrating the continued “innovation power” of our nation, creates a unique moment for the United States, with our allies and partners, to shape how this technology can be used. Managing the multifaceted global dimensions of GenAI will require multiple layers of international regimes, institutions, and dialogues. Such a multi-pronged, global governance approach can enable the United States and our allies and partners to work together to mitigate the highest-consequence negative use cases of GenAI, potentially in coordination with our rivals, and to support the broadest possible positive-sum use of GenAI applications.
We propose three global frameworks for managing GenAI:
1. A Global AI Security Forum
As nations compete to advance their geopolitical interests around GenAI, the rushed and premature deployment of new applications and capabilities might eclipse any concerns regarding their safe and ethical use. This heightens the chances of multiple forms of GenAI risk materializing, given the speed, scope, and scale at which potentially dangerous GenAI tools can operate, producing both intended and unintended consequences.
A new, leaders-level Global AI Security Forum could convene key nations and regional integration organizations with the capacity to significantly shape, determine, or use GenAI at a scope and scale that can cause or prevent a global cascading disaster. The Forum would seek to define, establish, and enforce guardrails around the two highest-consequence GenAI risks:
Emergent misalignment risks, where evolving GenAI systems diverge from human intentions or control, leading to unintended consequences; and
Heightened threats emanating from the intersection of GenAI with the chemical, biological, radiological, and nuclear (CBRN) domains as weapons of mass destruction (WMD).
Initial members could include the United States, the PRC, the European Union, the United Kingdom, Russia, Israel, Japan, South Korea, Singapore, India, Brazil, the African Union, and the Gulf Cooperation Council. The Forum should also build in private sector participation in appropriate formats (e.g., advisory committees or working groups), given private sector partners’ central roles in driving GenAI innovation.
On emergent misalignment risks, the Forum should help separate science fact from science fiction with respect to the risks of AI systems escaping human control. It should convene an Intergovernmental Panel on AI Security of government experts, academics, civil society leaders, and private sector actors to assess the issue, similar to how the Intergovernmental Panel on Climate Change brought together public sector and private sector actors to better understand and educate the world about the nature of climate change. Our Chair, Eric Schmidt, and Mustafa Suleyman have put forward a similar proposal for an “International Panel on AI Safety” staffed by computer scientists and researchers to address the AI safety issues the UK would want its notional AI Safety Institute to address.
On WMD risks that could result from these models, the Forum should engage existing international institutions with mandates to address CBRN risks, such as the International Atomic Energy Agency and the Organisation for the Prohibition of Chemical Weapons, as well as multinational initiatives supporting applicable provisions of international law. Such institutions have had some success in creating regimes and enforcement mechanisms that control access to key precursor materials and equipment for the development of WMDs. What’s needed with the advent of GenAI is to ensure that these institutions connect with AI experts to understand whether and how GenAI may affect the various control regimes, and what steps may be needed to modernize them to account for GenAI.
At its most ambitious, such a Forum could help leaders direct their governments to work toward an agreed set of governance guardrails against the highest-consequence risks of GenAI. At a minimum, the Forum could serve the important purpose of informing leaders about risks they may not have foreseen, or exposing the malign activity of irresponsible actors.
2. U.S.-PRC Dialogues
Responsible world leaders engage in dialogue on issues of global importance. Particularly when engaging from a position of strength, great powers recognize that dialogue can avoid precipitous slides into conflict, advance national interests, and further mutually beneficial resolutions of matters of common global concern.
Just as the United States and the Soviet Union worked over the course of decades toward arms control arrangements to ensure strategic stability in the nuclear era, the United States and the PRC have a shared interest in understanding the strategic implications of the highest-consequence risks of GenAI and in preserving strategic stability around them.
Adding GenAI as an agenda item to relevant U.S.-PRC bilateral dialogues is a starting point for considering the issues. There should also be track 1.5 and track 2 channels to bring private sector actors from the United States and the PRC into the discussion, so their expert insights can inform nuanced dialogue. Conversations to clarify intentions and to avoid surprise, where possible, will become increasingly important to prevent unintentional escalation as the geopolitical competition between the two nations intensifies.
3. A Like-Minded Forum
American technology leadership depends on deep collaboration with allies and partners to craft the “DemTech” AI future. The G7 process offers a launching point for the United States and key allies to construct a tech-focused track of engagement with other partners around the world: to innovate together, commercialize the work of our private sector actors, align governance approaches to GenAI uses, and build out “DemTech” technology platforms that extend the reach and use of technologies developed in line with democratic values to the broader world.
And as SCSP’s Society Team recently wrote, the United States should establish a new multilateral and multi-stakeholder “Forum on AI Risk and Resilience” (FAIRR), under the auspices of the G20, focused on: 1) preventing non-state malign GenAI use for nefarious ends (e.g., criminal activities or acts of terrorism), 2) mitigating the most consequential, injurious GenAI impacts on society (e.g., illegitimate discriminatory impacts due to system bias), and 3) managing GenAI use that infringes on other states’ sovereignty (e.g., foreign malign influence operations or the use of AI tools in cyber surveillance).
Managing and building such new global institutions to address the opportunities and challenges of GenAI will also require transforming critical aspects of the United States’ foreign policy approach. We must equip our foreign policy professionals with the skills and tools they need to advance U.S. interests in a global technology competition alongside the strategic competition with the PRC. While we touch on these issues in our GenAI memo, a forthcoming SCSP report will dive deeper into the reforms that can position U.S. foreign policy to lead in a transformational era of technology competition.
The United States’ security and prosperity in this new GenAI era will depend on rallying the rest of the world to join a DemTech alliance grounded in respect for individual liberties, fair competition, and the rule of law. It will be essential for the United States to work with like-minded nations to define what this DemTech agenda entails in order to advance the interests of open societies against those of closed autocratic systems.