Welcome to the Special Competitive Studies Project!
Hello, I’m Ylli Bajraktari, CEO of the Special Competitive Studies Project (SCSP). Welcome to the 2-2-2.
Why SCSP?
SCSP was founded just a few months ago because a constellation of emerging technologies is changing our world fast. Democracies are facing a tech test, and geopolitical tension is intensifying. The SCSP mission is to make recommendations to strengthen America’s long-term competitiveness for a future in which artificial intelligence (AI) and other emerging technologies reshape our national security, economy, and society.
Many of the SCSP team, including Eric Schmidt and myself, participated in the National Security Commission on Artificial Intelligence (as Chair and Executive Director, respectively). We were proud of the work, but saw unfinished business: a wider set of technologies to examine and a concept of competitiveness broader than national security. In parallel, Eric was finishing a book with Henry Kissinger and Daniel Huttenlocher on the implications of AI for humanity. It raised more questions for which we did not yet have answers. Kissinger suggested we model a new project on an effort he led in the 1950s, the Rockefeller Special Studies Project, which tackled the hardest Cold War and domestic challenges of that decade to build a new national consensus. We liked its admonition to shape events with a sense of purpose or risk being engulfed in events shaped by others. And so SCSP was born.
Why 2-2-2?
2-2-2 aims to provide you with regular, thought-provoking perspectives from our team and outside experts as we chart a path forward for our country to maintain global leadership in the key technologies that will shape our future. Each month, SCSP experts will offer two perspectives from each of three different world views: the United States, U.S. allies and partners, and China.
Why Subscribe?
In addition to monthly perspectives from SCSP experts, 2-2-2 will be a primary source for future, exclusive SCSP content, such as podcasts and webinars, and for events. By subscribing, you will never miss the latest from SCSP.
Edition 1
In this month's edition, Chuck Howell, an AI scientist and SCSP’s Senior Director of Research and Analysis focused on emerging technologies’ impacts on society, tackles the stakes of AI regulation. Prior to SCSP, Chuck was Chief Scientist for Responsible AI at the MITRE Corporation, where he supported the National Security Commission on Artificial Intelligence (NSCAI) line of effort on Responsible AI and Ethics. Chuck has over 30 years of experience in high-assurance systems engineering and AI. This month, we are also joined by David Danks, Professor of Data Science & Philosophy at the University of California, San Diego. Dr. Danks shares his thoughts on “data control” versus “use control” as a different way of thinking about types of regulation.

Tech governance is a foundation of our competitive agenda. In a country deeply rooted in democratic values and norms, any regulation of technology development and use must reflect those values, earn the trust of our citizens, and still allow us to out-innovate our competitors. As others around the world move ahead on national AI regulation, we need to think hard about the implications for AI governance and innovation at home.
AI Regulation, a Game Changer
By: Chuck Howell
When you think of AI, what’s the first thing you think of? In blockbuster science fiction books and movies, what’s the one common topic? Easy: “AI regulation.” OK, maybe not. OK, absolutely not. But AI regulation is crucial, if not glamorous. If you care about how AI will continue to impact ever more aspects of your daily life, you should care about the AI regulations governments around the world are contemplating right now. They will affect the quality of your life and the rights of all individuals, the competitiveness of firms, and the competition between nations.
AI regulatory efforts are proliferating.
The Organisation for Economic Co-operation and Development (OECD) AI Policy Observatory is tracking 185 AI-related regulatory initiatives around the world, spanning democracies to autocracies and covering both new regulation and the adaptation of existing regulation to address AI. AI companies are now joining governments in calling for regulation, seeking clarity, predictability, and a level playing field for doing business globally.
The term “AI regulation” covers a broad range of AI applications, from self-driving cars to clinical diagnostic support to loan reviews to recommending content on social media. Regulation of AI will be tailored to specific uses, but myriad tradeoffs and challenges apply to regulation (and governance more generally).
Taking the long view.
None of this should be surprising. As a technology moves from proof of concept demonstrations to widely adopted and consequential uses, governance, which includes regulation, becomes important. Regulatory debates are central to the history of emerging technologies. From the decades-long efforts to reduce the danger of exploding steam engines in the 19th century to early 21st century debates about nanotechnology, history suggests that the current regulatory scrutiny was inexorable.
The future of AI governance will not be a binary decision between national regulation and no regulation. It will continue to include corporate policies; voluntary frameworks proposed by trade associations, governments, and multilateral organizations; industry standards; and legal remedies. The goal remains an informed and nuanced balance of industry self-governance and standards, market forces, and appropriate government regulation. Drug regulation serves as an analogy: burdensome and uninformed regulation of drug approval can clearly delay the availability of valuable drugs, but relying on market forces and self-governance to limit harm to the public from new drugs is generally considered insufficient.
Why the regulatory groundswell now?
The focus on national and supranational AI regulation in the European Union (EU), China, the United States, and other countries (e.g., Singapore, Canada, and the United Kingdom (UK)) is striking. The sheer scope and importance of AI is one factor. Indisputable examples of harm caused by AI systems (e.g., racially biased recommendations for health interventions, or the shaping of adolescent behavior and self-image) are also creating growing demands for regulation as confidence erodes that other forms of AI governance are sufficient. Some nations have also made deliberate decisions to use regulation as part of a strategy to differentiate their position in the global AI landscape (more on that below).
In a recent Pew Research Center survey of 20 publics, results varied when respondents were asked whether AI has been a “good thing for society.”
Majorities in Most Asian Publics Surveyed See AI as a Good Thing for Society, Pew Research Center (Dec. 14, 2020), https://www.pewresearch.org/fact-tank/2020/12/15/people-globally-offer-mixed-views-of-the-impact-of-artificial-intelligence-job-automation-on-society/ft_2020-12-15_artificialintelligence_01/.
A nuanced approach to regulation could ensure that we reap the benefits of AI systems while being shielded from significant harms of AI adoption. Regulation will shape acceptable and unacceptable behaviors and outcomes. It will codify tradeoffs between priorities, such as the pace of innovation, economic growth, national competitiveness, equity, privacy, civil liberties, civil rights, safety, and social cohesion.
Tradeoffs will be weighed differently for different AI systems and in different countries. Yet, country-specific regulations may reverberate across borders. Not every country can have it all in our interconnected world. How China or the EU regulates AI may influence AI development and adoption here in the United States, and vice versa. Who regulates first and what they choose to regulate may shape societies, economies, and governance around the world. Below is the 2-2-2 roundup of what’s happening in this space.
2 Perspectives: United States
One:
As the dialogue about national-level AI governance evolves, it is important to distinguish between sector-specific and cross-sector approaches. Washington is pursuing sector-specific regulation that adapts existing regulatory frameworks and agencies to address the new issues introduced by the adoption of AI. Examples include Food and Drug Administration (FDA) rule-making on machine learning (ML) in medical devices and good ML practices, Federal Aviation Administration (FAA) policy on how AI in safety-critical avionics should be addressed in regulation, and the Federal Trade Commission (FTC) applying its current regulatory authorities to new commercial uses of AI. There is some congressional interest in broader cross-sector AI regulation, such as the Algorithmic Justice and Online Platform Transparency Act and the AI Accountability Act of 2022. Other initiatives, such as the White House’s AI Bill of Rights and NIST’s AI Risk Management Framework, complement the legislative and regulatory processes with context and frameworks.
Two:
Meanwhile, in the tradition of state and local governments as “the laboratories of democracy,” a vibrant range of legislation and regulation is being adopted outside the federal government. From New York City’s rules on automated employment decision tools for interview screening and promotions, to a number of states regulating automated decision systems generally, to California’s extensive exploration of the future of tech regulation, state and local governments provide insights into alternative approaches for federal AI regulation. There is a risk that an inconsistent patchwork of local rules will raise barriers to a consistent national approach, so there is an incentive for federal regulation, in some instances, to harmonize the landscape once the “laboratories of democracy” have yielded insights.
David Danks
Professor of Data Science & Philosophy
University of California, San Diego
From Data Control to Reuse Control: The Future of Data Regulation?
Data play a critical role in training, testing, and using AI to address real-world challenges. Companies and governments are racing to collect ever-larger datasets so they can deploy more accurate and more powerful AI systems to predict preferences, health conditions, decisions, and much more. As data collection has become increasingly invasive and ubiquitous, data privacy regulations have become increasingly important. However, current regulations differ significantly, particularly about which activities are covered: collection, processing, storage, and reuse.
Data regulation has historically focused on “data control”: who has access to what information about you? However, the rise of massive datasets and powerful AI techniques means that highly sensitive information about you can be inferred from relatively innocuous data that you would not think twice about disclosing. Data control is no longer the right way to think about data regulation, since we cannot really know what someone can learn about us from even simple data.
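To make the inference risk concrete, here is a minimal illustrative sketch, not drawn from Dr. Danks’ piece, of how a simple model trained on seemingly innocuous signals could be used to predict a sensitive attribute. The feature names and data below are entirely invented for illustration; the point is only that routine data, combined, can reveal things a person never disclosed directly.

```python
# Illustrative sketch only: invented features and synthetic data, not a real dataset.
# Shows how "innocuous" signals (shopping categories, late-night activity) can be
# combined to predict a sensitive attribute (here, a hypothetical health condition).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Innocuous-looking features a platform might already hold.
late_night_activity = rng.random(n)      # share of activity between midnight and 5 a.m.
pharmacy_purchases = rng.poisson(2, n)   # count of pharmacy-category purchases
fitness_app_opens = rng.poisson(5, n)    # count of fitness app sessions

# Synthetic "sensitive" label that happens to correlate with the innocuous features.
logits = 2.0 * late_night_activity + 0.6 * pharmacy_purchases - 0.3 * fitness_app_opens - 1.5
has_condition = rng.random(n) < 1 / (1 + np.exp(-logits))

X = np.column_stack([late_night_activity, pharmacy_purchases, fitness_app_opens])
X_train, X_test, y_train, y_test = train_test_split(X, has_condition, random_state=0)

# A plain logistic regression recovers the sensitive attribute from data
# the person would likely not hesitate to disclose.
model = LogisticRegression().fit(X_train, y_train)
print(f"Accuracy predicting the sensitive attribute: {model.score(X_test, y_test):.2f}")
```

Because the sensitive attribute itself is never collected, a regime focused purely on data control would not restrict this kind of inference; that gap is what the use control approaches described below aim to address.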
The EU General Data Protection Regulation (GDPR) is probably the best-known data privacy regulation, and it enumerates a number of rights that data subjects have over their data. These rights largely focus on data collection and data storage. In addition to obvious data security requirements, people must consent to their data being collected, and they can ask to have their data removed or deleted if it could be connected with them in some way (if it is “identifiable” data). GDPR is the main reason we all now see an endless parade of “I accept these cookies” buttons on every website, as those are the main mechanism companies use to obtain consent for data collection. Although GDPR represented a significant step forward in data privacy regulation, it largely does not constrain what data collectors can do with the data, as long as people are informed (perhaps in vague or incomprehensible blocks of text buried somewhere on some website). That is, GDPR is mostly silent about data processing and reuse.
Two U.S. laws, the Illinois Biometric Information Privacy Act (BIPA) and the California Consumer Privacy Act (CCPA), mark a shift toward a very different approach. They have data control and storage provisions similar to GDPR’s, but go significantly further with respect to data processing and reuse. CCPA requires that information obtained by processing the data (perhaps by an AI) be treated the same as the original data. BIPA goes even further and holds that no one can profit from the use of your biometric data. These two laws thus represent a significant strengthening of data privacy regulation relative to GDPR, though, as state-level regulations, they apply to far fewer people.
And so we are beginning to see a shift toward “use control”: who can do what with information about you? This approach places less emphasis on data collection, focusing instead on data processing and use. While there are many challenges to developing effective use control regulations, they offer hope that we can reclaim control over what really matters in terms of our data.
2 Perspectives: U.S. Allies and Partners
One:
The European Union’s top-down approach differs from the United States’ patchwork effort. The EU is developing wide-ranging and consequential AI regulation in the draft Artificial Intelligence Act. It takes a risk-based approach, classifying AI systems as posing unacceptable, high, or limited risk, where risk is understood as harm to a person’s health or safety or infringement of fundamental rights. It seeks to frontload trust into AI development. The approach builds on data governance and privacy regulation, and it reflects a broader European desire to establish a distinct global leadership role in Trustworthy AI, as Dragos Tudorache, one of the EU’s top AI officials, explained last year. Full implementation of the Artificial Intelligence Act will likely take several years, as GDPR’s did. The EU believes it can strike the right regulatory balance to protect its citizens and encourage the right kind of innovation. However, only four of the top 100 leading AI companies are European; will more regulation turn this around?
Two:
Bridging the transatlantic AI governance divide is a priority. The White House set a conciliatory tone when the EU rolled out the AI Act last spring. Since then, the U.S.-EU Trade and Technology Council (TTC) has spelled out areas of agreement and commitments for further joint efforts. However, the terms of alignment may temper the strategic wisdom, business logic, and values-based appeal of harmonization. For Americans concerned with AI risks, the EU’s approach serves as a compelling call for a national AI regulatory regime. Others, however, fear that pursuing convergence by embracing the EU approach risks sacrificing the United States’ innovation advantages and the leading positions of U.S. firms in many AI sectors. Debating the desirability of convergence may be a moot point. By moving first to comprehensive regulation, the EU could affirm its status as a regulatory superpower. If international firms adapt to preserve their access to the European market, the EU’s standards could become the standards. The same might be said for China….
2 Perspectives: China
One:
China is moving fast to establish national AI regulations for its consumer Internet, with new rules governing the use of algorithms and synthetic media. Some aspects of China’s AI regulations align with Western governance (e.g., the need for controls over the use of personal data), but others reflect a distinct perspective (e.g., direct calls for People’s Republic of China (PRC) algorithm developers to adhere to core socialist values). Much could be learned from China's approach, such as the practical, technical, and bureaucratic demands of monitoring AI systems and enforcing regulations, but not all of it will be broadly applicable to democracies. There are almost always unintended consequences of regulation worth tracking.
The above graphic is based on an analysis done by TruEra and has been modified for the purposes of this newsletter. The original version can be found at: Shameek Kundu, Regulating Artificial Intelligence (AI): Will China and the West Go Their Separate Ways?, Corporate Compliance Insights (Oct. 19, 2021), https://www.corporatecomplianceinsights.com/regulating-artificial-intelligence-china-west/.
Two:
Beijing views leadership in AI regulation and standards as part of its overall goal of global AI dominance, in direct support of economic growth and social cohesion. The AI guidance comes on top of two significant data governance policies released in 2021: the Personal Information Protection Law (PIPL) and the Data Security Law. As Kendra Schaefer of Trivium China has observed, in the West data is discussed in the context of privacy; in China, it is discussed as a factor of economic growth and decision advantage.
A way forward?
It is impossible to anticipate every downstream effect of new regulations. One approach to regulating rapidly evolving technologies is the “regulatory sandbox,” in which a regulation is applied in a localized and constrained manner to see if and how it works. Sandboxes exist for other rapidly advancing technologies in the United States (at the federal level and among states), in the European Union, and in China. They provide opportunities for navigating the tradeoffs between regulatory rigor and the rush to push the innovation envelope. In this, regulators could embrace the ethos of the innovators: learn by doing and, if necessary, adapt.
Bottom line.
National regulation FOMO should not drive action in the United States. Much will be learned from the successes and failures of others’ efforts. Striking the right regulatory balance will mean navigating a complex and uncertain patchwork of AI governance for the foreseeable future. In the meantime, regulation as virtue signaling is not a national strategy for successful adoption of AI; learning from local, national, and international regulatory experiences, along with intentional experimentation in sandboxes, will enable the United States to benefit from AI advances while remaining aligned with our core democratic values.
Coming Soon (Updated):
Next month, SCSP will announce a call for engagement. The call aims to answer the question: “How can the United States and its allies and partners better deter authoritarian aggression in the Western Pacific and Eastern Europe?” Specifically:
What low-cost techniques that could be implemented in 2-3 years might strengthen deterrence?
What strategies might the United States pursue that preserve our vital strategic interests more effectively than deterrence?
What does the United States get wrong about deterrence, including its relevance as a concept, how emerging technologies are changing the nature of deterrence, or how different cultures perceive deterrence differently?
SCSP will seek submissions in the form of a short paper or video answering the prompt. SCSP will offer the top three submissions awards of $2,000, $1,500, and $1,000. The top three submissions will be featured in a future 2-2-2 newsletter and on the SCSP website. Make sure to check out next month’s 2-2-2 newsletter for more information.