Hello, I’m Ylli Bajraktari, CEO of the Special Competitive Studies Project. In today’s edition of our newsletter, PJ Maykish and I describe how Albert Einstein recognized the transformative power of nuclear fission and urged the U.S. to take the lead. Today, as we face the potential of Artificial General Intelligence (AGI), his words offer valuable insights.
The race for AGI is heating up, and the stakes couldn't be higher. SCSP's President and CEO, Ylli Bajraktari, explores the potential impact of AGI and the critical role the next U.S. president will play in its development. Read more about the proposed strategies for winning the AGI race here.
AI+ Summit Series
On the third anniversary of SCSP's founding, our mission to strengthen America's long-term competitiveness in the techno-economic competition remains at the forefront of our work. SCSP is excited to continue our AI+ Summit Series, a set of high-level events dedicated to enabling rapid advancements in artificial intelligence as it transforms our country and becomes a keystone of our national security.
The AI+ Robotics Summit, the second in this series, will take place on October 23, 2024, at the Waldorf Astoria in Washington, D.C. The series will culminate with our next AI Expo and the Ash Carter Exchange on June 2-4, 2025. We are excited to announce Marc Raibert as our first keynote speaker, with more set to be announced over the next few weeks! If you missed our last Summit, AI+ Energy, you can watch all of the conversations here.
AGI: What Would Albert Say?
In 1939, World War II began, the international order collapsed, and the advent of a new technology – awareness of which was largely confined to the scientific community – threatened to change the world; whether for good or for ill, no one knew. Albert Einstein was one of the few who saw the connection between the conflict and the new technology. He realized that a successful nuclear fission chain reaction could change not only everyday life but also the outcome of the war. If a chain reaction could be created, and Einstein thought it likely, then he knew that the United States needed to get organized and lead in fission development. Einstein became a grand strategist. With his friend Leo Szilard, the scientist wrote an eight-paragraph letter to the President of the United States, drawing President Roosevelt's attention to the possibility that an atomic chain reaction in uranium could produce colossal amounts of energy. It could also, Einstein and Szilard told Roosevelt, enable the creation of a new weapon of inconceivable power.
The parallel to today is evident, as AGI draws ever closer. As the United States confronts this fresh, extreme technological challenge, Einstein’s words have become relevant once again. And Washington would do well to revisit them.
Eight Strategic Moves
1. Just the tech facts: “In the course of the last four months it has been made probable… that it may become possible to set up a nuclear chain reaction in a large mass of uranium…”
Today's equivalent would be that, in the course of the next presidency, it may become possible for strong AI to develop into the once-theoretical artificial general intelligence (AGI). Experiments across the globe in three megatrends (FLLMs, novel paradigms, and AI stack transformation) are fully underway, funded to an unknown but enormous sum. The progression of AI, fueled by global investment, is on an unmistakably upward path. In 1939, faced with the nuclear facts, spending time trying to slow fission down, or being a "doomer" blind to its potential good (nuclear power now supplies roughly 10% of the world's electricity and contributed to the end of total war), would have seemed like a fool's errand.
2. Honor strategic timing: “Now it appears almost certain that this could be achieved in the immediate future.”
The Einsteins of AI tell us that a general form of AI will arrive in the next four years. Sam Altman has said, "It is possible that we will have superintelligence in a few thousand days." At Google, Sundar Pichai has stated that AI could eventually become "far more capable than anything we've seen before" and that its speed of development "keeps him up at night." When Geoffrey Hinton observed GPT-4, he left Google, stating, "The idea that this stuff could actually get smarter than people — a few people believed that. [...] But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that." Observing GPT-4's performance led Eric Horvitz and other Microsoft scholars to write the highly cited "Sparks of AGI" paper, stating, "Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system."
3. Simplify the national significance: “...vast amounts of power…would be generated. This new phenomenon would also lead to the construction of bombs…”
Einstein and Szilard informed Roosevelt the new technology could have a direct national security implication: a massive ordnance bomb that could destroy whole ports (later, this would become whole cities). Nuclear fission was a powerful dual-use technology. Einstein correctly predicted that the same chain reaction that could create a bomb could also be used to generate power. Yet it was the national security implications of Nazi Germany obtaining the bomb before the United States that stiffened Washington’s determination to develop this technology first.
4. See the technology as a system: “The United States has only very poor ores of uranium in moderate quantities.”
Einstein noted a massive strategic disadvantage: the uranium needed for this atomic reaction did not exist in the United States. Einstein saw nuclear fission technology as a global system and recognized the need to secure critical resources through alliances and strategic supply chains. In a globalized world, no nation can ensure its safety or prosperity alone; without strong partnerships and diversified supply sources, even the most powerful face glaring strategic risks. The possibility of a high-yield nuclear weapon transformed supply chains from a logistical concern to a life-and-death matter. It is a reminder that no nation is secure without allies. AGI is another system technology, one that depends on energy, microelectronics, novel compute paradigms, networking, data centers, data sources, and science. The White House Roundtable on AI Infrastructure is an example of strategizing about technology as a system.
5. Forge a methodical public-private network: “In view of this situation you may think it desirable to have some permanent contact maintained between the Administration and the group of physicists working on chain reactions in America.”
The United States has long sought to connect its scientists and the government. The National Security Commission on Artificial Intelligence aggressively sought wisdom from the private sector and leading researchers on AI strategy. The congressional AI Insight Forums have helped lawmakers learn from the best minds in the country. The President's Council of Advisors on Science and Technology (PCAST) was founded following Sputnik and is still operational. OSTP and NSC officials periodically reach out to the private sector and, amidst the tyranny of the urgent, do the best they can. But on AI generally, the private sector is moving at breakneck speed while the government struggles to keep up (with notable exceptions in the national labs). Einstein suggested there can be no daylight between the actual innovators and the national strategists. The United States should strive to make this the case.
6. Select the leader carefully: “One possible way of achieving this might be for you to entrust with this task a person who has your confidence and who could perhaps serve in an unofficial capacity.”
Entrust a person. All programs of national significance need one specially selected person to be the de facto national mission manager for the task at hand. Einstein knew that there should be no gaps in communication, no telephone-game distortions, no part-time attention, and no "my bads" in handling a civilization-altering technology. One person, trusted by the President and with direct access to the Oval Office, needs to work full time and methodically fuse the best private-sector and academic innovators with the right people in government. The UAE and other countries have successfully adopted this approach and benefited immensely from it. The U.S. government should immediately mandate the creation of a national security commission focused expressly on the arrival of AGI. The next Administration will be focused on getting through its first 100 days of organization, and AI is moving too fast to wait.
7. Get organized without roadblocking: “His task might comprise the following…”
Einstein and Szilard identified a general direction to follow, and then laid out seven clear, basic elements of organization that have stood the test of time, and should be followed today:
“Approach government departments.” Perhaps AGI as a systems technology, more so than fission, demands building the network or organizational ties to all of the actors.
"Keep them informed of the further development." To Einstein and Szilard, fission felt like it was moving fast. We share this assessment of AGI; the leader thus needs to build an OODA loop that keeps pace with the technology.
“Put forward recommendations for Government action.” The trusted leader should actively strategize how the nation harnesses the potential of AGI.
“Secur[e] a supply of uranium ore for the United States.” Einstein encouraged the President to think about tech advantage as a system. For AGI this would involve all AI infrastructure and inputs including talent.
“Speed up the experimental work, which is at present being carried on within the limits of the budgets of University laboratories.” In a word, this trusted leader should live to accelerate the positive advantages of AGI.
“[Provide] funds, if such funds be required, through his contacts with private persons who are willing to make contributions for this cause.” Einstein was thinking about coordinating a diverse capital stack, and we can foresee the need for a similar diverse capital stack to close several funding gaps on the path toward AGI.
“[Obtain] the co-operation of industrial laboratories which have the necessary equipment.” Finally, Einstein encouraged this leader to pool strategic advantages including voluntary associations between government and private labs.
8. Think and act grand strategically: “I understand that Germany has actually stopped the sale of uranium from the Czechoslovakian mines which she has taken over.”
Einstein’s letter ends with the grand strategic observation that Nazi Germany seemed to realize the potential for a nuclear bomb. Uranium, the scientists informed Roosevelt, was no longer being exported from German-occupied Czechoslovakia and “...the son of the German Under-Secretary of State, von Weizsacker, is attached to the Kaiser-Wilhelm-Institut in Berlin where some of the American work on uranium is now being repeated.” Einstein and Szilard made no bold intelligence claims. They noted only that uranium was being managed by the Germans, who were repeating the chain-reaction work. President Roosevelt was left to conclude the grim outcome of U.S. inaction.
A nuclear chain reaction and the arrival of super-intelligence are not one and the same. However, the concepts in Einstein's letter can be applied to the present day, as we face an equally fundamental technological revolution. The straightforward action agenda in this letter, from facing the tech facts to thinking and acting grand strategically, has relevance to all nations today. It is difficult to imagine the United States not ending up in the "A" range if we followed Einstein's advice. In 1939, no one could foresee the full extent of the global upheaval, madness, and carnage about to take place. Civilization itself was in the crosshairs. Yet in eight paragraphs of gentle supposition, Einstein and Szilard initiated a process that ensured the United States and its allies came out on top. Along that path to innovation power, civilization was restored.
Thank you, Jack. That "equally germane" is what strikes us about the letter. We are with you that the deep innovation-diffusion discourse, including Jeff's new work, applies to national leadership and outcomes. On that note, I find the same diffusion point in the military innovation discourse too (e.g., Stephen Rosen, Bill McNeil, and Williamson Murray, each in their own ways). Since the NSCAI, we've been consistently advancing that AI is a general purpose technology (which agrees with Jeff) and that it is also different from fission (unless one wants to make the energy connection and argue that energy itself is a GPT). This is why we note, "A nuclear chain reaction and the arrival of super-intelligence are not one and the same." What has transfer value is how a nation gets organized for a very large technology transformation. This transfer value is in the spirit of "history doesn't (exactly) repeat itself, but it often rhymes." Thanks again!
A creative exercise with a lot of important and helpful analogies. There are, however, some fundamental differences between a nuclear weapon and AGI.
By the start of WWII, the physics, mathematics, and chemistry governing a nuclear reaction (fission/fusion) were unassailable. There is no such parallel with AGI, other than to extrapolate from current data and compute to future data and compute (an extrapolation that many AI experts don't agree with, calling instead for an entirely new research path to AGI).
A nuclear explosion is a singular and extremely well-defined event. The whole point of AGI is that it is supposed to be everything everywhere all at once.
Put the 50 top physicists in the world in the same room in 1944, and they would all have agreed on the meaning of nuclear fission/fusion. By 1946, they would have all agreed on the global ramifications of a nuclear war.
The top 50 AI leaders in the world are still all over the place when it comes to defining AGI and its potential ramifications. From “meh” to “the end is nigh!” Because it’s not nearly the same clean physics/mathematics/chemistry problem. In fact, it’s as much a socio-economic issue as it is anything else.
I struggle with treating AGI as the outcome. Or viewing this as a race. I’m much more in Jeff Ding’s camp: that long-term societal diffusion matters most.
Many of the nuclear-weapon related recommendations you raise are equally germane to AI diffusion. It’s just a much messier problem, with far more complicated and uncertain outcomes, than the race for nuclear weapons.