Hello, I’m Ylli Bajraktari, CEO of the Special Competitive Studies Project. In this edition of our newsletter, SCSP's Ananmay Agarwal discusses the state of AI governance globally, and why the United States must move swiftly to shape the future of AI.
📌 Upcoming SCSP Events
October 23 - SCSP’s second AI+ summit, the AI+ Robotics Summit, will take place tomorrow in Washington, D.C. - see more here!
November 1 - The Ash Carter Exchange in Cambridge, MA. Recently announced speakers include Dr. Joseph Aoun, Dr. Marc Raibert, August Cole, Dr. Ian Waitz, Dr. John Shaw, Dr. Michael McQuade, and more! Join the waitlist here!
November 15 - SCSP x AGI House Hackathon: SCSP is partnering with the Bay Area AI hacker house, AGI House, to host an AI Agents for Gov Hackathon at SCSP’s office in DC. Come create the future of AI agents to solve important real-world challenges. Stay tuned for more details!
The State of International AI Governance
With Election Day just around the corner, the United States stands on the cusp of a technological revolution. The incoming AI Presidency will usher in a new era defined by emerging technologies. To navigate this uncharted territory, we must balance innovation with ethical considerations. By learning from global examples like the European Union’s AI Act and China’s nascent regulations, the United States can ensure that AI development aligns with our laws, values, and freedoms.
The EU AI Act Unpacked
The first-ever comprehensive AI legislation, the EU AI Act aims to “foster trustworthy AI in Europe and beyond, by ensuring that AI systems respect fundamental rights, safety, and ethical principles and by addressing risks of very powerful and impactful AI models.” The Act places AI systems on a spectrum of risk — minimal, limited, high, and unacceptable — and introduces a two-tiered approach to regulating general-purpose AI (GPAI) models, with higher-risk models facing stricter requirements. To assist with implementing and enforcing the Act, especially for GPAI, the European Commission established the AI Office. Focusing on safety and trustworthiness, the AI Office is the central coordinating and monitoring authority that works alongside national governments. In addition, the Act’s extraterritorial scope is noteworthy: it applies even to foreign companies providing AI products and services within the European Union, as well as to the output of products used by EU citizens.
Broad, prescriptive, and risk-based, the AI Act is predicated on the assumption that it will spur innovation through regulatory certainty. However, its burdensome compliance requirements and steep penalties threaten Europe’s innovation ecosystem, with startups and scaleups most affected. In a 2023 study by the European Initiative for Applied Artificial Intelligence, 50% of AI startups expressed concerns that the compliance costs associated with the EU AI Act could hinder their ability to innovate and compete globally. The Act’s focus on specific systems also fails to “future-proof” it against technological advances: the Act will require constant reinterpretation and amendment to determine which applications qualify as “high risk.” Adding to the regulatory uncertainty is a lack of clarity on requirements such as intellectual property or codes of practice for businesses. The unpredictability of EU regulation is already driving companies such as Meta to withhold future products from EU markets. And while the Act contains exceptions for national security, it covers a range of public security applications, creating ambiguity for dual-use technologies and shared capabilities.
Furthermore, the Act’s threshold for defining advanced GPAI models is much lower than both the Biden Administration’s AI Executive Order and the now-vetoed California SB 1047. As a result, more AI models, including those developed by smaller companies, may fall under stricter requirements, raising barriers to entry for startups and burdening early-stage developers with premature and onerous compliance obligations, especially in high-growth sectors that thrive on rapid iteration and lower regulatory constraints.
China’s Centralized Command on AI
Meanwhile, the People’s Republic of China (PRC) is regulating AI in ways that bolster state control and align the technology with the Chinese Communist Party’s (CCP) values, focusing on verticals such as content control and deepfakes. The key drivers of the PRC’s AI governance model are both state and CCP entities, primarily the Cyberspace Administration of China (CAC), which focuses on security and state control, while the Ministry of Science and Technology, which emphasizes innovation, is playing an increasing role. Think tanks such as the Chinese Academy of Social Sciences, the China Academy for Information and Communications Technology, and Tsinghua University’s Institute for AI International Governance also feed into the rulemaking process.
The Generative AI Law, implemented by the CAC in August 2023, specifically targets AI technologies that interact with the public, mandating adherence to core socialist values and imposing strict content and security regulations. The law also necessitates a series of pre-release and continuous assessments to ensure compliance, particularly where AI content could influence public opinion.
In addition, China released an updated draft of its AI law in March 2024. This draft underscores risk management while promoting innovation within a controlled framework. It introduces a detailed grading and categorization system for AI, defining "critical AI" applications as those that could impact national security, public interest, or individual rights. Keeping safety as the bottom line, the law seeks to integrate AI governance with national security, economic, and broader state objectives.
While these regulations ostensibly promote innovation and protect individual rights, they incorporate sweeping exceptions that enable heavy-handed state control and surveillance under the guise of state-led innovation and risk management. The draft law includes various provisions on central planning, including the allocation of computing resources. Its vague grading and categorization system, including for “critical AI,” imposes stringent compliance, oversight, and pre-approval requirements, while permitting state use of AI systems for judicial, biometric, and social scoring purposes. It also establishes an overarching coordination mechanism for AI development and regulation, while empowering relevant organizations with extensive regulatory and enforcement duties over almost every aspect of AI development and application. The result is a system that, while presenting itself as pro-business and protective of individual dignity, severely undermines innovation and freedoms while tightening CCP control over the technology and advancing an authoritarian agenda.
Even if China's AI regulations focus on internal control and stifle innovation, the CCP's proactive enactment of these laws allows China to shape global norms and standards before the United States does, potentially sidelining U.S. influence in international AI governance. By establishing regulations early, China can promote its own governance models and technical standards in global fora. This leadership vacuum could enable China to set precedents that affect international trade, data privacy, and ethical AI deployment, making it imperative for the United States to act swiftly to assert its own vision for AI's role in society and the global economy.
The Way Ahead: A Call for Smarter Regulation
The current U.S. approach to AI governance is flexible and sector-specific, relying on existing authorities and voluntary industry-led commitments. Key initiatives include the Biden Administration's Executive Order on AI, the White House Blueprint for an AI Bill of Rights, and the National Institute of Standards and Technology's AI Risk Management Framework. Nonetheless, the United States lacks a comprehensive legislative framework on AI. Recognizing this, the bipartisan Senate AI Roadmap encourages relevant congressional committees to pursue AI issues within their jurisdiction.
As the next U.S. administration navigates the complexities of AI regulation, it should chart a course that is flexible and forward-focused. Europe serves as a cautionary tale against overregulation, while China’s restrictive model not only hampers technological breakthroughs but also stands in stark opposition to our fundamental democratic norms. In our Mid-Decade Challenges to National Competitiveness report, we argued that the U.S. regulatory model should be sector-specific, focused on high-consequence outcomes, reliant on existing authorities, and bolstered by non-regulatory governance.
To help operationalize these principles, SCSP developed the Framework for Identifying Highly Consequential AI Use Cases (HCAI Framework). Built to be as dynamic as AI itself, the framework offers a flexible and adaptive approach for prioritizing AI regulatory efforts on high-consequence outcomes through sector-specific assessments. The HCAI Framework lays out ten categories of harms and benefits, with specific harms and benefits within each, along with qualitative and quantitative methods for determining whether an AI use case is highly consequential. As new AI use cases emerge, this flexibility keeps the framework relevant and facilitates the best use of resources through tailored regulation. Furthermore, by considering not just the harms but also the benefits of AI, the framework incentivizes the positive development of AI by encouraging regulatory actions that are conducive to innovation.
Unlike the EU AI Act, the HCAI Framework is only a blueprint for identifying high-consequence AI use cases and does not prescribe specific regulatory actions. Sector-specific regulators may accordingly determine appropriate measures based on their unique contexts and challenges after identifying the use cases demanding maximum regulatory attention. The HCAI Framework offers a promising path forward, one that embraces innovation while safeguarding American values and interests.
United in Purpose: Working with Allies and Partners to Shape Global AI Governance
In addition, the United States has not shied away from participating in various fora that prioritize global AI governance. Notable among these are the AI Safety Summits held in the United Kingdom and South Korea, and the upcoming AI Action Summit in France. However, such summits have only yielded non-binding commitments narrowly focused on minimizing AI risk, without sufficiently highlighting or seeking achievable solutions to maximize AI’s positive contribution to society.
But there is hope for improvement. While the summits in Bletchley Park and Seoul have yielded voluntary agreements from participating governments and companies, the 2025 AI Action Summit in France is aiming to secure hard policy commitments, with France’s Special Envoy for AI Anne Bouverot focused on improving access to compute, data, and training for countries with less vibrant innovation ecosystems. Similarly, the Transatlantic Trade and Technology Council (TTC) at its April 2024 meeting announced increased cooperation between the U.S. AI Safety Institute and the EU AI Office on AI safety, risk management, and trust. The United States should champion initiatives that go beyond risk minimization to actively promote the beneficial uses of AI, thereby ensuring that these technologies serve as a force for good, while providing an alternative to China’s authoritarian model.
Ultimately, the absence of perfect alignment between the United States and the European Union should not impede efforts in areas of agreement. The United States needs to work alongside its allies and partners to drive innovation in this global technology competition, especially as China aims to achieve global technological dominance and rewrite the rules in its authoritarian image. The United States and Europe should continue to work together by deconflicting their approaches, respecting shared norms, and collaborating through joint investments and other mutually beneficial incentives in critical technology areas to unleash the full potential of AI for the democratic world. Additionally, the United States should ensure that it is shaping the rules of the road by participating in international standards development organizations and engaging in iterative standards development processes with allies and partners.
In the global AI race, the United States must prove that cutting-edge progress and effective governance can go hand in hand.