Hello, I’m Ylli Bajraktari, CEO of the Special Competitive Studies Project. In this edition of 2-2-2, SCSP’s Rama Elluru, Liza Tobin, and Connor Martin discuss the importance of collaborating with the EU to ensure economic prosperity as generative AI unlocks new possibilities.
The AI revolution dramatically raises the stakes for the United States and the European Union (EU) to bridge their digital divide. Generative AI – which produces new content from models trained on vast amounts of data – has the potential to augment countless human activities and add an estimated $4.4 trillion to global GDP, roughly equivalent to the annual economic output of Germany or one and a half Californias. As advanced economies with high proportions of knowledge workers, the United States and EU stand to reap outsized advantages from generative AI’s anticipated boost to productivity. This much-needed upturn cannot come soon enough: years of sluggish productivity growth have been a drag on government budgets and citizens’ incomes.
To unlock this economic promise together, the United States and EU, as two of the largest markets in the world, will need to tap into the power of economies of scale. As SCSP President and CEO Ylli Bajraktari and MEP Eva Maydell argue, this will require converging their approaches to AI and data. Because the power of generative AI depends in part on the vast amount of data it consumes, the nations that write the rules of the road for the digital economy – particularly rules for data collection, storage, and use – will be the ones that lead in AI. Democracies must set these rules together. Obvious irritants like cumbersome EU rulemaking and a laborious U.S. legislative process notwithstanding, U.S. and EU approaches to digital regulation converge more than they diverge, because they share a common foundation built on principles of democracy, sovereignty, fair competition, and individual freedoms. A recent announcement by the U.S. Department of Commerce shows that progress is possible: Secretary Raimondo outlined U.S. steps to implement the U.S.-EU Data Privacy Framework, indicating headway toward resolving obstacles to data flows that impose costs on the transatlantic economy.1 The EU should act quickly to sign off so both sides can begin implementation.
What is more, Washington and Brussels have a common systemic rival in Beijing, which has made clear that it intends to lead the world in AI and other digital technologies. Strategic dithering on the part of the transatlantic partners gives Beijing a green light to continue exporting its model of an authoritarian-controlled cyberspace where surveillance reigns supreme and free expression is curtailed. Time is of the essence for both sides to deconflict their regulatory approaches. As U.S. policymakers search for a viable path forward on global trade and AI regulation, here are five steps Washington can take to shore up the transatlantic digital trade relationship.
Accelerate AI Regulation
The United States and EU must align on an AI approach that protects democratic values and is flexible enough to adapt to the accelerating pace of technological change. Both sides “feel the fierce urgency of now” on generative AI, as Secretary Blinken said at the most recent Trade and Technology Council (TTC) ministerial meeting in May, where he highlighted voluntary “codes of conduct” as an interim solution while legislation unfolds. These codes would ensure that “citizens can see that democracies can deliver,” according to EU Commissioner Margrethe Vestager, and could be expanded to include allies and partners like Canada, the UK, Japan, and India. The EU has already leveraged its voluntary code on online disinformation to ask platforms to label AI-generated content.
Following the ministerial, the EU signaled a desire to issue a draft AI code of conduct “within weeks.” The U.S. government should urgently support this work in partnership with industry, which has already signaled a willingness to adopt such a code. In mid-May, Google announced that it would voluntarily engage in an “AI Pact” in Europe while the EU’s AI Act rolls out, and Microsoft laid out principles it wants to see in AI regulations, including licensing requirements and designating certain use cases as “high risk.”
While the Departments of State and Commerce monitor and engage with the EU’s draft AI Act2 through the TTC Joint Roadmap for Trustworthy AI and Risk Management, the United States must answer the EU with its own domestic policy framework. As SCSP has argued, the United States should govern AI following four key principles:
Govern AI use cases and outcomes by sector;
Empower and modernize existing regulators, while considering a longer-term centralized AI regulatory authority that can address gaps as well as cross-cutting issues that span sectors;
Focus on highly consequential uses and on significant impacts, both beneficial and harmful. This would align with the risk-based approach of both the EU’s AI Act and the National Institute of Standards and Technology (NIST) AI Risk Management Framework; and
Strengthen non-regulatory AI governance, such as the voluntary codes of conduct, with input from industry and key stakeholders.
Yesterday, SCSP released an episode of NatSec Tech, our podcast hosted by Jeanne Meserve, that makes the case for these four key principles.
A regulatory posture based on these principles will help align a whole-of-government approach to AI while Congress considers other potential statutory approaches. Moving quickly to a coherent domestic approach will provide the framework for transatlantic alignment that responds to the demands for AI regulation being made by the American and European publics as well as the makers of the models themselves.
Pass Comprehensive Data Privacy Legislation
Absent comprehensive federal data privacy legislation, U.S.-EU data privacy frictions could keep resurfacing as recurring points of contention. As SCSP has written, the United States must pass legislation that “sets reasonable, transparent, consistent standards” for data actors.
Comprehensive legislation could unify the patchwork of state laws and help resolve the apparent conflicts between U.S. and EU approaches to data privacy. It would also help make progress toward a longer-term goal: a U.S.-EU digital trade agreement, modeled after the U.S.-Japan Digital Trade Agreement and the digital chapter of the U.S.-Mexico-Canada Agreement (USMCA), which could benefit firms and workers by future-proofing cross-border data flows and reducing costs associated with customs duties on digital transactions.
The good news is that U.S. and EU principles on privacy are not as far apart as they may seem. While U.S. and EU implementations may vary, our values align. These shared values are reflected in the 11 principles called for in SCSP’s “National Data Action Plan,” which are foundationally aligned with EU requirements in the GDPR, the Digital Markets Act (DMA) and Digital Services Act (DSA), the draft Data Act, and the Data Governance Act (which entered into force in 2022).
Broaden the Discussion to Additional Allies and Partners
Expanding the transatlantic discussion to include other allies and partners is critical to preventing digital fragmentation among market-oriented democracies. Japan’s “Data Free Flow with Trust” (DFFT) model, championed as part of Japan’s G7 presidency this year, represents a potential path forward. A framework like DFFT could help align a broader group of countries within an ecosystem of trusted digital exchange. At the G7 summit in Hiroshima, Japan, leaders tasked their governments with operationalizing DFFT, including by establishing the institutional architecture for partnership. The United States should lean into this work and galvanize its partners in the G7 and beyond3 to support it.
Agree that Transatlantic Competitors Are Preferable to PRC National Champions
When the United States belatedly launched a diplomatic campaign to push back against the global 5G inroads of PRC national champions Huawei and ZTE, it faced difficulties because no major U.S. equipment producer could offer alternative, end-to-end telecom solutions. Instead, U.S. policymakers and diplomats frequently advocated for the European telecom hardware providers Ericsson and Nokia.
When it comes to cloud technologies – which, as a backbone for emerging technologies like AI and robotics, are at least as strategic as 5G, if not more so – the United States is better positioned: its top three cloud firms account for roughly two-thirds of global market share. In its commercial diplomacy, the U.S. Government should lean into this advantage, while continuing to advance domestic measures to strengthen cloud security requirements.
The U.S. Government should promote U.S. firms in cloud and other tech sectors as safer alternatives to up-and-coming PRC rivals, like the cloud divisions of PRC national champion firms Huawei, Alibaba, Baidu, and Tencent, which have ties to the Chinese military and sanctioned entities and are gaining market share in Asia, Latin America, and Africa. Greater global influence for PRC national champions is in neither the United States’ nor Europe’s interest – a strategic reality that regulators in Brussels sometimes overlook. In Europe, U.S. cloud providers face challenges under a draft certification scheme issued under the EU Cybersecurity Act that, if adopted, would require cloud providers to be headquartered in the EU and to store data on servers within the EU.4 This would preclude U.S. firms from managing a considerable amount of data in the EU. Another challenge for U.S. firms is the draft Data Act, which would require companies to share proprietary data with their competitors – potentially providing an avenue for PRC firms to take advantage of the requirement at the expense of U.S. firms.5
Washington and Brussels would be shortsighted to let their regulatory disputes obscure the importance of building a transatlantic technology ecosystem resilient enough to combat China’s brute force economic strategy. At the TTC, Washington should continue to advocate for EU requirements that do not discriminate against U.S. firms or open a lane for PRC companies to exploit measures intended to boost European firms.
Defend the U.S. Regulatory Model, While Improving It
There is no avoiding real U.S. and EU regulatory differences. But even when their approaches differ in means, they are aimed at the same ends: promoting free, fair, and open societies. What may look like unbridgeable gaps can, in fact, be closed along a shared regulatory spectrum grounded in common democratic principles. For example, in January 2023, President Biden wrote an op-ed calling on Congress to pass new tech regulations, with language echoing some of the concerns addressed by the EU’s DMA and DSA. The March 2023 Economic Report of the President describes the DMA and DSA in a chapter on digital regulation, acknowledging that “concentration in digital markets raises long-standing concerns about whether dominant players in these markets leverage their market power to stifle competition and innovation” – language that tracks closely with EU positions.
Importantly, though, the report also pointed out that “much of the value of digital companies comes from network effects — so antitrust actions may face greater challenges in preserving value for consumers while addressing problems associated with concentration.” As this dialogue unfolds, American policymakers should not leave U.S. companies on their own to defend the legitimacy and advantages of the U.S. regulatory model, particularly given the apparent singling out of U.S. firms by the DMA and DSA.6
Despite the emergence of regulatory conflicts in recent years, the risks and opportunities presented by the AI era highlight the fundamental alignment of U.S. and EU values in the digital realm. The PRC aims to lead the world in AI and write the rules of the world’s digital superhighways to fit its autocratic values – but the transatlantic partnership can prevail if both partners make good faith efforts to deconflict their approaches. Fundamentally, what matters most is that both sides of the Atlantic find a way to govern this groundbreaking technology in a way that respects shared norms and values, and delivers prosperity to their citizens.
1. Transatlantic data flows have been in legal limbo since the Schrems II decision of the Court of Justice of the European Union in 2020. Schrems II invalidated a previous U.S.-EU agreement that governed the rules for exporting EU citizens’ data to the United States under the GDPR’s strict requirements. This has left the status of data transfers open to legal challenge, with billion-dollar consequences for businesses. Costs are both indirect, from reduced data flows, and direct, through fines such as the record €1.2 billion penalty that Ireland’s data protection authority imposed on Meta in May 2023.
2. Microsoft’s risk-based approach in the draft AI code of conduct is consistent with the EU’s draft AI Act, currently the subject of “trilogue” negotiations among the EU Commission, Council, and Parliament. Among the outstanding questions these negotiations will address are whether the law is too restrictive to allow for open-source models; a ban on facial recognition (which Parliament added to the Commission’s initial draft); whether large language models (LLMs) should be compelled to disclose any copyrighted material used to train them; and the definition of “high risk.”
3. For example, the UK is close to passing its own legislation, and India is expected to release a draft of its Digital India Act in summer 2023.
4. The EU Cybersecurity Certification Scheme for Cloud Services (EUCS), a certification framework under the EU Cybersecurity Act and administered by the EU Agency for Cybersecurity (ENISA), would require cloud providers to be headquartered within the EU and store data on servers located within the EU in order to receive the highest level of certification, which is needed to handle government activities and serve vital infrastructure including telecoms, finance, transportation, and energy. Industry is attempting to adapt – the EU Data Boundary that Microsoft rolled out in January 2023 addresses many of the potential concerns the EUCS purports to address while maintaining the benefits of a global-scale cloud provider.
5. The requirement is presumably aimed at boosting European small and medium-sized enterprises, and the EU claims that these “safeguards” will prevent the data from being used to develop competing products. But it does nothing to prevent the recipients themselves from studying and making use of the data.
6. The DMA imposes significant requirements on “gatekeepers,” a new legal category created by the law, while the DSA does the same for “Very Large Online Platforms” and “Very Large Online Search Engines.” Of the first 19 platforms designated under the DSA, sixteen are American, two are from the PRC (TikTok and Alibaba), and one is European. (Under the DMA, the same business can be designated a gatekeeper for multiple core platform services.)