In early 2025, the Trump Administration revoked the Biden Administration’s Executive Order 14110 (“Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”) and introduced an overhaul of the U.S.’s artificial intelligence (AI) policy through a flurry of executive orders, memoranda, and policy changes:
- January 23, 2025: Executive Order 14179 (“Removing Barriers to American Leadership in Artificial Intelligence”)
- April 3, 2025: Two Office of Management and Budget (OMB) memoranda (M-25-21 and M-25-22)
- April 23, 2025: Executive Order 14277 (“Advancing Artificial Intelligence Education for American Youth”)
- July 23, 2025: “Winning the AI Race: America’s AI Action Plan” (Plan)
- July 23, 2025: Executive Order 14318 (“Accelerating Federal Permitting of Data Center Infrastructure”)
- July 23, 2025: Executive Order 14319 (“Preventing Woke AI in the Federal Government”)
- July 23, 2025: Executive Order 14320 (“Promoting the Export of the American AI Technology Stack”)
These actions set the tone for federal procurement and use of AI, clearing a path for AI innovation for organizations of all sizes. Depending on how quickly the agencies tasked with implementing the Plan act, AI developers, vendors, and users may have opportunities to fast-track innovation and bypass the perceived red tape of regulatory requirements. However, while not the focus of this article, state regulators and Attorneys General are pursuing parallel efforts to further regulate AI, focusing less on America “winning” the race for AI innovation and more on consumer protections, ethical considerations, and environmental impact. The same is true for the implementation and enforcement of the EU AI Act, which is already in effect.
Executive Orders and Memoranda
Three days into his term, the President signed Executive Order 14179, revoking the Biden Administration’s AI guidance (Executive Order 14110) and declaring it outdated and an impediment to innovation. Executive Order 14179 directs every federal agency to identify and remove barriers to AI deployment, coordinate across agencies on AI policy, and publish an implementation plan so that the United States can lead on AI research; it further encourages the use of AI for national security and for commercial growth.
To provide detailed guidance on Executive Order 14179, the OMB issued two companion memoranda, Memorandum M-25-21 and Memorandum M-25-22.
Memorandum M-25-21 focuses on promoting the adoption of AI, directing agencies to name a Chief AI Officer (CAIO) to promote the adoption and use of AI within the agency and to develop AI risk management strategies. For “high-impact” AI systems (those that could impact a person’s rights, safety, or access to essential services), agencies are to implement a risk management plan that includes pre-deployment testing and ongoing monitoring of such systems. “High-impact” AI systems include emergency services infrastructure, systems controlling the movement of vehicles and robots, medical devices, law enforcement applications, and the like.
Memorandum M-25-22 focuses on procurement of AI systems by federal agencies, instructing contracting officers to convene cross-functional teams, favor performance-based statements of work from AI vendors (and U.S.-developed technologies), and ensure that numerous data handling and privacy terms are included in the relevant contracts. For example, agencies are to ensure that each contract prohibits further training of AI algorithms on non-public government data inputs and outputs. For “high-impact” AI systems, agencies must require AI vendors to provide heightened levels of documentation and meet more stringent transparency and data handling requirements.
Executive Order 14277 created a White House Task Force on AI Education, directed agencies to embed AI literacy in federal training, and encouraged partnerships between schools and industry to develop AI literacy and prepare students to enter the AI workforce.
How AI Vendors and Business Users Should Respond
While the full implications of these executive orders and memoranda are still unfolding, AI vendors should ensure that their systems, processes, and contract terms comply with federal agency requirements. Vendors should understand where their AI systems fit in the new “high-impact” framework and be prepared to provide detailed technical documentation, audit logs, and data provenance records, as well as implement robust data privacy and security safeguards. AI vendors should also consider their current supply chain and manufacturing processes, as the administration has expressed a strong preference for U.S. products.
As each agency’s Chief AI Officer puts the new directives into action, AI vendors (and the broader AI industry) have ample opportunity to prepare under the updated federal guidelines. By monitoring how agencies roll out these requirements, vendors can adjust their products, processes, and contracts to fit the federal government’s push for responsible and innovative AI. By taking steps to ensure their solutions meet the latest standards for transparency, risk management, and compliance, AI vendors can meet agency expectations and stay competitive in the ever-evolving federal AI landscape.
Trump Administration’s Plan Sets Forth AI Agenda
Most recently, on July 23, 2025, the administration issued the Plan, which was accompanied by three related Executive Orders:
- “Accelerating Federal Permitting of Data Center Infrastructure” (EO 14318): Revokes and replaces the January 14, 2025 Biden Executive Order 14141 (“Advancing United States Leadership in Artificial Intelligence Infrastructure”). It is intended to speed up the construction of data centers to meet the high power demands of AI by removing the red-tape obstacles of federal permitting rules and environmental reviews and by offering financial incentives.
- “Preventing Woke AI in the Federal Government” (EO 14319): Requires that federal agencies procure only large language models (LLMs) developed according to “unbiased AI principles” and specifically calls out LLM outputs distorted by “ideological dogmas such as DEI,” which should be replaced with ideologically neutral outputs. This EO notes that the federal government should be hesitant to regulate AI models in the private marketplace but that, in the context of procurement, the government has an obligation not to procure models that sacrifice “truthfulness and accuracy to ideological agendas.”
- “Promoting the Export of the American AI Technology Stack” (EO 14320): Directs the Department of Commerce to establish an American AI Exports program, promoting the export of full-stack American AI technology packages and decreasing dependence on non-U.S. AI technologies. The EO also empowers the Economic Diplomacy Action Group (EDAG), chaired by the Secretary of State, to coordinate financial investment and financing tools to prioritize these AI packages.
This trio of Executive Orders, together with the Plan, makes clear the administration’s priority of winning the “AI race,” which the Plan equates to the space race for global dominance. The same sentiment was expressed earlier in the spring at the International Association of Privacy Professionals Global Privacy Summit in Washington, DC (April 2025), where certain panelists and presenters from federal enforcement agencies made the message clear: when it comes to regulatory scrutiny and agency priorities related to AI innovation, the goal is to beat China.
The Plan focuses on three pillars of policy efforts:
- Accelerating Innovation (removing “bureaucratic red tape”). The recommended policy actions include (among others):
- Reviewing all FTC investigations led by the Biden Administration to ensure those actions do not advance liability concepts that “unduly burden AI innovation” and potentially setting aside prior orders, consent decrees, and injunctions where such undue burden is identified.
- Revising NIST AI Risk Management Framework standards to remove references to misinformation, DEI, and climate change and focus on free speech and American values.
- Encouraging, through cross-agency efforts, open-source and open-weight AI in a way that improves the financial market for compute and hyperscaler access and makes AI accessible to researchers and academics with constrained budgets.
- Enabling AI adoption for large organizations, particularly in sectors like healthcare, by providing regulatory sandboxes where AI tools can be tested and quickly deployed.
- Collecting and sharing intelligence on foreign AI projects that may have national security implications.
- Expanding AI literacy and skills development for workers.
- Investing in AI innovation for: development of new technology for manufacturing capabilities; AI-enabled science; scientific datasets and protection of restricted federal data; and driving adoption in the federal government, including Department of Defense.
- Protecting U.S. commercial AI innovation and IP and rooting out deepfakes and related malicious AI content.
- Building American AI Infrastructure. Key policy recommendations include:
- Clearing away regulatory constraints that inhibit the ability to build out infrastructure, including chip manufacturing and data centers, at a rapid pace, through exclusions and regulatory reform designed to fast-track data center development.
- Expediting environmental permitting through changes to the Clean Air Act and related laws and making federal land available for data center construction and infrastructure for power generation.
- Developing a power grid to match the needs of AI innovation and securing current grid assets for uninterrupted and affordable power supply.
- Investing federal dollars to remove policy obstacles for semiconductor manufacturing in the U.S.
- Creating new technical security standards for high-risk AI data centers to protect military and intelligence data; promoting secure AI design through industry and regulatory standards; and incorporating AI incident response into existing government procedures.
- Investing in training a skilled workforce to support development and maintenance of AI infrastructure systems.
- Leading in International Diplomacy and Security. This pillar focuses on promoting the use of AI around the globe, not just throughout the U.S. Policy recommendations include:
- Facilitating export of the American AI technology stack (as mentioned above) so that the U.S. is where international users turn for this technology.
- Establishing AI governance approaches that promote innovation and combat Chinese influence in rules or requirements set by international powers or governing bodies.
- Strengthening export control enforcement to prevent foreign adversaries from accessing U.S. resources and plugging “loopholes” in export controls on semiconductor manufacturing sub-systems to maintain U.S. dominance.
- Attempting to drive international policy controls and regulation of sensitive technologies such as AI, so that such precedent and legal controls are determined by the U.S. and not other nations.
- Leading the evaluation of security risks posed by frontier AI models for national security purposes and investing in biosecurity to protect against unauthorized access to biological intelligence and development.
What’s Ahead
As seen from the policy activities noted in this alert, the administration has taken pointed steps to support the development of AI technology in the U.S. and America’s ability to “win” the “AI race” against other countries. This started in January with immediate executive orders and was reinforced in February by Vice President JD Vance’s remarks at the Artificial Intelligence Action Summit in Paris, where the Vice President outlined objectives that closely mirror the Plan. He noted that excessive regulation of AI could “kill a transformative industry just as it’s taking off” and that the new administration would make “every effort to encourage pro-growth AI policies…” and pursue deregulation. Last month’s release of the Plan and the accompanying executive orders do just that, clearly stating an intent to remove red tape. While this federal deregulation effort in support of innovation bodes well for AI developers and users under federal regimes, there are still many other considerations, such as state AG enforcement and state laws governing AI development and use; international laws, including the EU AI Act; intellectual property protections; and cybersecurity risk. Notwithstanding, the message remains loud and clear: AI is here to stay, and for most businesses, adoption of AI is not a question of “if” but “when.”
We will continue to monitor these developments, so stay tuned for future client alerts. If you have any questions about the issues raised in this article, please contact the authors or the Womble Bond Dickinson attorney with whom you normally work.