Washington's Approach to Advancing AI Adoption
A spotlight on President Trump’s Executive Order on AI and additional OMB policy guidelines.
Hello, I’m Ylli Bajraktari, CEO of the Special Competitive Studies Project. In this edition of SCSP’s newsletter, we highlight President Trump’s Executive Order on AI and the two policy guidelines issued by the Office of Management and Budget in response. These documents make clear that President Trump wants the United States Government to lean hard into AI innovation to promote American economic security, national security, and human flourishing.
Drone Arena at the AI+ Expo!
Join us on June 2-4, 2025, at the Washington Convention Center as we bring together the brightest minds in artificial intelligence, policy, and industry for conversations on AI, biotech, energy, networks, compute, microelectronics, manufacturing, augmented reality, and beyond. Register today!
At the AI+ Expo, be sure not to miss the drone arena! There will be a drone assembly workshop where you can learn about the inner workings of drones, simulators to learn and practice speed and agility skills, and professional pilot demos.
Plus, in partnership with the U.S. National Drone Association, we are hosting an interservice U.S. military drone competition! The competition is open to all military service members as well as high school and college students. If you are interested in competing, apply below. The leadership time trials track opens tomorrow, May 1st!
Washington's Approach to Advancing AI Adoption
A New Federal AI Mandate: Innovation Without Compromise
This month, the Administration took a critical step forward in U.S. artificial intelligence (AI) leadership. The White House Office of Management and Budget issued two revised policies on government use and procurement of AI pursuant to the landmark Executive Order (EO), Removing Barriers to American Leadership in Artificial Intelligence: Memorandum M-25-21, Accelerating Federal Use of AI through Innovation, Governance, and Public Trust, and Memorandum M-25-22, Driving Efficient Acquisition of Artificial Intelligence in Government. True to expectations, the EO and memos hit the strategic sweet spot: accelerating innovation while thoughtfully safeguarding the civil rights, civil liberties, and privacy of Americans.
Innovation demands adoption, and adoption demands trust. Striking this delicate balance, the Administration sets forth a bold yet responsible approach, emphasizing forward-looking innovation that serves our society without compromising democratic values.
The EO's main theme is to frame U.S. AI policy as decisively pro-innovation. The policy directive is clear: the United States will sustain and enhance its global AI dominance in order to promote human flourishing, economic competitiveness, and national security. A shift toward a pro-innovation posture is no surprise, given that China is flexing its own innovation muscles.
To do that, the federal government is instructed to:
Adopt and procure modern AI tools that can enhance services and reduce inefficiencies;
Leverage public–private partnerships to incentivize industry;
Ensure AI is deployed with accountability and transparency; and
Focus oversight efforts where they matter most: on high-impact AI.
The EO calls for a comprehensive AI Action Plan to implement its pro-innovation policy. It gives the Assistant to the President for Science and Technology (APST)—along with the newly designated Special Advisor for AI & Crypto and the National Security Advisor—180 days to develop and submit to the President a government-wide plan for advancing federal AI capabilities.
Accelerated AI Adoption and Modernized Government
The memos provide specific guidance for agencies to increase AI deployment broadly and effectively. Every major agency must appoint a Chief AI Officer (CAIO) as a “change agent” who will serve as the internal champion for responsible AI adoption. Agencies are directed to fast-track the use of AI tools to improve mission delivery, leaning forward on mission-enabling AI for public benefit.
Agencies are strongly encouraged to:
Rapidly adopt low-risk AI that supports mission delivery;
Integrate AI into IT, procurement, and public service operations; and
Use AI to reduce backlogs, improve health outcomes, detect fraud, and enhance responsiveness.
Rather than adding new layers of approval, AI oversight will piggyback on existing IT governance processes. To do this, CAIOs are charged with promoting lower-risk AI projects, mitigating risks for higher-impact uses, and advising on AI investments and spending. The memos also direct agencies to streamline contracting practices, removing bureaucratic hurdles that disadvantage agile, innovative companies.
Support Domestic Industry Competition
The memos recognize that America's cutting-edge AI innovation originates primarily in the private sector. The directive is clear: federal agencies must be drivers of this innovation, not impediments. Agencies are encouraged to buy, test, and scale cutting-edge AI from the private sector using flexible contracts and iterative pilot programs.
That is why the memos include procurement reforms to:
Remove burdensome contracting rules that disadvantage agile, innovative companies;
Favor U.S.-based AI solutions and open standards, a clear strategic preference for domestic capabilities; and
Promote open competition and prevent vendor lock-in, fostering a dynamic market.
By removing barriers and explicitly favoring “American AI,” the order deepens collaboration between federal agencies and the U.S. AI industry. Agencies will actively seek out private sector innovators for advanced AI products, directing federal spending to fuel domestic R&D and deployment. By avoiding vendor lock-in, this competitive approach broadens opportunities for the full spectrum of U.S. companies, stimulating innovation across the entire ecosystem and enhancing national capacity.
Increase Public Trust
In an era of rapid technological change, public trust is not a secondary concern; it is a strategic imperative for successful AI adoption across the public sector. The memos emphasize building trust through accountability and transparency. Agencies must stand up an internal AI Governance Board to coordinate AI use and ensure accountability within the agency. The memos also place importance on transparency. Agencies will inventory and report their AI use cases, providing visibility into how AI is being applied in government.
To this end, the memos aim to earn and maintain public trust through:
Public inventories of agency AI systems, demonstrating openness; and
Internal AI Governance Boards that provide oversight and ensure responsible deployment in relevant agencies.
By making AI adoption more transparent and accountable, the Administration expects to build public trust in AI-driven government services. The requirement that agencies inventory their AI use and the establishment of governance boards or councils mean there will be oversight of how AI is used, reducing the chance of unchecked or harmful implementations. The balance is clear: accelerate what’s low-risk, scrutinize what’s not. This is where the memos most clearly pair an innovation-friendly posture with support for democratic values. That leads us to the next pillar of the memos.
Ensure the Security of Americans
Ensuring the security of Americans requires strong protections for privacy, civil rights, and civil liberties in all AI deployments. As SCSP has reiterated since September 2022, we cannot regulate every AI use given the pace and diffusion of AI, nor should we. To effectively ensure American security and enable innovation, the memos create a special category of “high-impact AI” systems requiring enhanced oversight. This approach preserves innovation by not overregulating everything: only the systems that are truly consequential receive special scrutiny.
The memos define high-impact AI as systems whose use could significantly affect:
Rights: Such as due process or civil liberties;
Security: Public or individual safety; or
Well-being: Like access to critical benefits or health outcomes.
Agencies deploying high-impact AI must:
Apply minimum risk management practices;
Conduct rigorous pre-deployment testing and prepare risk mitigation plans;
Complete thorough impact assessments before deployment and update them periodically; and
Continuously evaluate system performance and monitor for emerging risks.
The memos closely track SCSP’s and JHU-APL’s recommendation for identifying “highly consequential AI”—those systems that warrant more intense scrutiny due to their potential to:
Affect rights or liberties (e.g., adjudication, healthcare, public safety);
Operate in critical domains (e.g., defense, infrastructure, justice); or
Have irreversible impacts or operate without meaningful human oversight.
The memos operationalize SCSP’s more academic framework, making it actionable across the federal government. Both the framework and memos prioritize systems with broad or significant human consequences, assume irreversible outcomes are riskier, and consider certain public-sector domains and uses (e.g., healthcare and automated decision-making) more critical. By echoing a risk-tiered AI governance approach like the one reflected in SCSP’s Framework for Identifying Highly Consequential AI, the EO helps the United States align with international norms without copying external regulatory models.
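To make the risk-tiering concrete, here is a minimal, purely illustrative sketch in Python of how an agency tool might triage a proposed use case into the high-impact track or the streamlined adoption track. The field names, criteria, and oversight steps below are our simplified assumptions for illustration, not language from the EO or the memos.

```python
# Illustrative only: a toy triage of AI use cases against the memos'
# "high-impact" idea (effects on rights, safety, or well-being).
# The fields and step descriptions are simplified assumptions.
from dataclasses import dataclass


@dataclass
class UseCase:
    name: str
    affects_rights: bool      # e.g., due process, civil liberties
    affects_safety: bool      # e.g., public or individual safety
    affects_wellbeing: bool   # e.g., access to critical benefits, health outcomes


def is_high_impact(use_case: UseCase) -> bool:
    """Treat a use case as high-impact if it could significantly affect
    rights, safety, or well-being."""
    return (use_case.affects_rights
            or use_case.affects_safety
            or use_case.affects_wellbeing)


def required_steps(use_case: UseCase) -> list[str]:
    """High-impact systems trigger enhanced oversight; everything else
    follows the streamlined, lower-risk adoption path."""
    if is_high_impact(use_case):
        return [
            "apply minimum risk management practices",
            "complete pre-deployment testing and a risk mitigation plan",
            "complete an impact assessment and update it periodically",
            "monitor performance and risks on an ongoing basis",
        ]
    return ["adopt via standard IT governance and track in the AI use-case inventory"]


# Example: a benefits-eligibility model is flagged as high-impact,
# while an internal document-search assistant is not.
print(required_steps(UseCase("benefits eligibility screening", True, False, True)))
print(required_steps(UseCase("internal document search", False, False, False)))
```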
The result? A smarter, more targeted approach to AI governance—one that doesn’t treat every chatbot like a missile system.
Conclusion: A Strategic Trajectory
This Executive Order and its implementing memos lay out a clear strategic trajectory for the next era of federal AI leadership. They signal to agencies, innovators, and the public that the United States is not only ready to harness AI's power but is doing so strategically—in a way that upholds freedom, fairness, and trust, essential components of long-term national strength.
As this strategic vision is implemented, we can anticipate new agency strategies, targeted public-private pilots, and government-wide standards that will keep the United States at the forefront of AI advancement. This is leadership not through caution, but through strategic, responsible action.