Hello, I’m Ylli Bajraktari, CEO of the Special Competitive Studies Project. Last week, Senate Majority Leader Chuck Schumer convened the first in a series of AI Insight Forums designed to bring together the top minds in the country to discuss how Congress should properly legislate artificial intelligence. SCSP’s Chair Dr. Eric Schmidt was asked to participate in the inaugural AI Insight Forum. Below is his formal written testimony.
We are also pleased to announce that the Senate leaders of the AI Insight Forum – Senate Majority Leader Chuck Schumer and Senators Mike Rounds, Martin Heinrich, and Todd Young – will all be providing remarks at this year’s Global Emerging Technology Summit!
Harnessing Artificial Intelligence for Societal & Economic Prosperity
by Dr. Eric Schmidt
Artificial Intelligence (AI) has enormous potential to accelerate advances across society, including education, medicine, climate change, access to basic needs, and innovation broadly. Artificial General Intelligence (AGI) is even more fundamental: the arrival of another intelligence that assists humans in our tasks would be profoundly powerful and consequential. The United States must continue working on this advancement, because pausing our efforts does not mean others will pause theirs, and that includes both state and non-state actors. Overall, the outcomes of this work have been positive. Much of the current work in AI will be applied without issue: it makes us more productive, aids in the discovery of new things, and helps businesses and countries flourish.
We have made significant progress on the responsible building of AI. We will continue to research and apply findings in core areas such as privacy, fairness, safety, robustness, and alignment with human values. We also want to address people's fears about catastrophic and extreme harm. We propose a way for governments and the private sector to work together to achieve the full potential of AGI while addressing the risks posed by this increasingly powerful technology. We do NOT believe you need to slow things down, but we do believe we have a lot to learn from the democratic process by pulling in expertise and feedback from many multidisciplinary stakeholders.
Today, we believe the fundamental extreme risks relate to the possible lowering of barriers to entry for bad actors, including non-state actors, seeking to create biological weapons or to use cyber weapons at a powerful new scale, and to the potential future ability of systems to operate on their own without human control (the "agency" problem). The new threat from AGI is based on its apparent ability to synthesize new solutions, give step-by-step advice, and accelerate someone's malicious intent. AGI may eventually be able to recursively self-improve. In that scenario, we will need to understand and control its evolution to be consistent with human values, laws, and regulations. These extreme risks are unique to frontier AI models, which will have capabilities beyond what we know is possible today.
There are many other very serious issues arising from AI and AGI, including misinformation, election interference, copyright/fair use disputes, job impacts, and economic concentration, all of which are not new to society and can (and should) be handled by the legal and regulatory bodies already in place.
Our starting point is to leverage existing authorities while passing sensible legislation. To date, private firms have led in developing this technology. Governments are using well-designed, long-standing systems and processes to understand and move forward in the policy space but may need help catching up. Yet, in different ways and without new legislation, countries already have legal structures to hold companies liable for the consequences of the products they develop and deploy. A model should be treated as highly consequential if it has a capability profile sufficient to cause extreme harm, assuming misuse or misalignment in the model's design and training. To deploy such a model, AI developers already have a legal responsibility to build strong controls against misuse and to obtain robust assurances, through evaluation of their models, that the models will behave as intended.
Governments have a responsibility to help identify emerging models with capability profiles that could be highly consequential. They have a responsibility to help companies do the right thing so that constructive innovation can rapidly advance. They can help companies evaluate the risks of their models internally, adjust training, assess risks before deployment, and monitor models' risks after deployment.
Some advanced models pose harms yet may, at the same time, be essential for protecting society from those very risks by enabling defenses and countermeasures in a world where any system of controls will have large gaps. Thus, governments have a further set of responsibilities. They should foster the development of models that can identify and defend against extreme harms while setting and exemplifying the robust safety and security standards that should accompany high-risk AI work.
We propose that governments, starting with the United States, the United Kingdom, and other partners such as Japan and South Korea, establish an entity that can pool expertise across borders to help willing governments meet these responsibilities. It can help conduct evaluations and design tests to restrict or prevent the specified biotechnology, cybersecurity, and autonomy harms. Governments would then work with industry, through a regulatory body and an associated industry-funded consortium that designs and applies the tests, to define tests that ensure these threats are not released to the public.
We propose that the tests be applied above a threshold, which will first be established as an "amount of training" metric based on the amount of computation needed. We recognize this is not very precise, and we believe the threshold should be adjusted as we learn at what scale worrisome capabilities emerge. It is essential to have an unregulated floor where experimentation can occur, new ideas can be designed and deployed, and low-cost development can expand rapidly. For the moment, we propose a training-compute threshold, measured in floating-point operations, below which these tests will be optional rather than mandatory.
Standards and tests could also be applied in defense-related work. Industry could adopt them in order to show it is developing products safely, or governments could adopt more formal regulatory authorities. In any case, the multinational work to establish necessary standards and tests will need to involve the private sector, because that is where key capabilities, expertise, and understanding of model design reside. The work can be aided by an industry consortium representing companies as well as the open source and non-profit communities. The companies would fund the consortium. For open source work done outside companies, we anticipate such projects will be subject to the tests but not required to pay into the consortium. There are many examples of these sorts of industry arrangements in the US regulatory system (for example, meat quality inspection and electric utility standards and testing). No matter their legal system, governments can now make it clear that developers will be accountable if they do not act responsibly to mitigate the dangers that can arise at both the training and deployment stages of this work.
As standards, tests, and responsible practices become clear, every country can apply those insights in ways that reflect its laws and regulatory approaches. It will be very hard for regulators and political leaders to agree on what the tests should actually test for, and we believe those tests should not be used as a new form of legal restriction, censorship, or control that is not already in place. As we explained earlier, our focus here is on the prevention of extreme harms. Fraud, consumer protection, misinformation, copyright or fair use, and election integrity are all very serious issues that should be resolved using existing legal and regulatory structures.
The best solution is for lawmakers to be comfortable with robust executive action as an alternative to an intricate package of legislative changes. With respect to the United States, there is considerable existing legal and regulatory authority available to address AI through regulation and frameworks, including the Defense Production Act, export control authority, NIST jurisdiction, government contracting power, and more. We should further analyze whether our existing authorities are sufficient to achieve public safety goals or whether new legislation is actually needed.
It's a hard truth, but we will need to accept that some harmful things are, for a variety of reasons, neither desirable nor possible to make illegal. For the most extreme threats, we have to act. Any system of regulation should have sufficient independence from the companies, be transparent to the public, and seek stakeholder input from civil society. None of this can be accomplished without the right people and expertise. We must ensure that the United States puts in place policies that attract the brightest talent from around the globe and that educate and train our domestic workforce.