The Administration’s AI Action Plan outlines policy priorities across three pillars – accelerating innovation, building American AI infrastructure, and leading in international diplomacy and security. Though it provides an important window into the Administration’s thinking, it also leaves key AI policy questions unaddressed.

Overview of the Plan

The Trump Administration’s AI Action Plan (the “Plan”), released on July 23, outlines the Administration’s approach to achieving US “global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” The Plan follows a request for input on AI policy from which the Administration received more than 10,000 responses. It is organized around three pillars – accelerating innovation, building American AI infrastructure, and leading in international diplomacy and security – and identifies more than 90 policy changes it believes are needed to ensure the US wins the AI race. The Plan also outlines several guiding principles: ensure that US workers and families benefit from AI innovation, keep AI systems free of ideological bias and “social engineering agendas,” prevent malicious actors from misusing AI, and monitor for emerging and unforeseen AI risks.

Importantly, the Plan contains recommendations and does not itself make any policy changes. However, the Administration has already issued several Executive Orders acting on its recommendations.

The Plan contrasts sharply with the Biden Administration’s approach to AI policy, which, while acknowledging AI’s potential benefits, emphasized the importance of “mitigating its substantial risks.” Indeed, one of the current Administration’s earliest actions was to rescind a Biden-era Executive Order that contained high-level principles for the advancement and governance of AI, provisions for developers to share information about model safety with the government, directives for agencies to study potential risks associated with AI, and provisions to protect workers and consumers.
The Plan also contrasts with the approach pursued by the European Union (EU), which in May 2024 adopted a comprehensive AI law that takes a risk-based approach, defining rules and obligations for AI providers, deployers, distributors, and others based on the characteristics of the AI system and the level of risk it and certain use cases pose.

Accelerate AI Innovation

The first pillar of the Plan outlines a range of deregulatory proposals aimed at supporting private-sector AI development. For example, it suggests directing Federal agencies to identify and revise or repeal policies that may hinder AI development. It also specifically recommends reviewing all Federal Trade Commission (FTC) investigations initiated under the prior Administration to determine whether they advance “theories of liability” that may restrict innovation – presumably a reference to a 2024 investigation and 2025 staff report on “Generative AI Investments and Partnerships.” This is a notable recommendation given the Administration’s challenge to the agency’s historical independence.

The Plan does not present a clear view of the Administration’s position on a contentious matter of AI policy: the role of states in developing AI regulations. The Senate recently rejected a proposal to impose a moratorium on state regulation of artificial intelligence that would have affected a variety of laws in states including California, Colorado, New York, Utah, Tennessee, and Texas. Echoing a version of the moratorium Congress rejected, the Plan suggests that the Federal government should not allow AI-related Federal funding to be directed toward states with “burdensome AI regulations.” However, it also states that the Federal government should not “interfere with states’ rights to pass prudent laws that are not unduly restrictive to innovation.”

The Plan also includes recommendations related to standards and government procurement.
Within the National Institute of Standards and Technology (NIST), the Center for AI Standards and Innovation replaces the former US AI Safety Institute. The Plan suggests that NIST remove references to misinformation; Diversity, Equity, and Inclusion (DEI); and climate change from its AI Risk Management Framework. In addition, it recommends that the Federal government contract only with developers of large language models that can demonstrate their systems are free from “top-down ideological bias.” The Plan further proposes that NIST evaluate AI models developed in China for alignment with political messaging from the Chinese Communist Party.

The Plan expresses support for open-source and open-weight AI models, suggesting they improve access to AI technology for startups and academic researchers. It also encourages the creation of “regulatory sandboxes” and Centers of Excellence that would allow public and private actors to experiment with AI technologies and deploy new AI tools rapidly.

In addition to removing regulatory barriers, the Plan includes a variety of proposals focused on preparing the US workforce and economy for the impacts of AI. For example, it suggests that agencies (including the Departments of Labor and Education) incorporate AI skill development as a core objective in career and educational programs, study the impact of AI on the labor market, and develop retraining programs for individuals affected by AI job displacement. The Plan also proposes that agencies engage with private industry to identify supply chain challenges related to robotics and drone manufacturing and invest in cloud-enabled labs that can leverage AI for scientific research.

Finally, the Plan includes a range of technical proposals related to the development and use of datasets, the interpretability and reliability of AI systems, and methods for evaluating AI performance.
The Plan encourages adoption of AI within the Federal government – including in the defense and intelligence communities – and outlines steps to mitigate the use of synthetic media, such as deepfakes, as evidence in legal proceedings by developing standards and guidance for their detection and evaluation. (The Plan also promotes the “Take It Down Act,” which permits affected parties to require removal of deepfakes more easily.)

Build American AI Infrastructure

The second pillar of the Plan focuses on the infrastructure – such as chip manufacturing facilities, data centers, and energy production – needed to support AI development. The Plan recommends a variety of steps to speed the development of AI infrastructure, including providing exemptions from certain environmental regulations under the National Environmental Policy Act (NEPA) and evaluating the need for a nationwide Clean Water Act Section 404 permit, which regulates the discharge of dredged or fill material into US waters, for data centers.

The Plan also notes that sufficient and reliable electricity is critical to AI. It recommends stabilizing the existing electrical grid by preventing “premature decommissioning” of existing power generation sources while also developing new generation sources (e.g., geothermal and nuclear). Notably, it makes no reference to solar or wind energy.

The Plan recommends continuing to leverage the CHIPS Program Office in the Department of Commerce (DOC) to develop semiconductor manufacturing facilities, and recommends that the Departments of Labor and Education develop career training programs (e.g., internships and apprenticeships) to prepare the workforce for roles needed to build AI infrastructure (e.g., electricians, HVAC technicians).

Lead in International AI Diplomacy and Security

The Plan’s third pillar notes that to maintain its lead in AI, the US must increase international adoption of its AI technology.
To that end, the Plan recommends that DOC work with the private sector and relevant Federal agencies to identify full AI technology stacks (hardware, models, software, etc.) that can be exported to allies and partners. It also recommends that the US ensure American values are reflected in the standards and governance developed by international bodies (e.g., the UN, the Organisation for Economic Co-operation and Development, the G7, the G20, and others), noting specifically the importance of countering Chinese influence in these organizations. It calls on DOC to develop new export controls covering semiconductor subsystems not currently restricted, improve enforcement of existing or prospective rules, and encourage allies to align with US controls through diplomatic engagement and the use of tools including the Foreign Direct Product Rule.

The Plan also recommends steps to ensure the Federal government understands the national security risks of AI models, including risks to biosecurity. Finally, the Plan includes recommendations for boosting the cybersecurity of systems that use AI – such as critical infrastructure – and for improving Federal responses to AI-related cybersecurity incidents.

Related Executive Orders

In concert with the release of the AI Action Plan, the President signed several Executive Orders that act on a number of the Plan’s recommendations. For example, the President directed DOC and relevant agencies to establish an “American AI Exports Program” and engage the private sector in identifying “full-stack” AI technology packages that can be exported. The President also directed agencies to take a variety of steps to accelerate the development of data center projects, including using financial supports (e.g., loans, loan guarantees, and grants), exemptions from certain environmental regulations, and expedited permitting.
Notably, the Order specifies that projects involving “natural gas turbines, coal power equipment, nuclear power equipment, geothermal power equipment, and any other dispatchable baseload energy sources” may be eligible, but does not include wind and solar energy producers. Finally, the President issued an Order requiring Federal agencies to procure only those large language models (LLMs) that meet two core standards – truth-seeking and ideological neutrality – explicitly excluding models that incorporate concepts such as critical race theory, transgenderism, unconscious bias, and systemic racism.

Reactions

Reactions to the Plan were mixed. Many industry stakeholders praised the Plan’s focus on deregulation and infrastructure investment, as well as its approach to export controls. Meta’s Chief Global Affairs Officer, for example, described the Plan as a “bold step to create the right regulatory environment” for AI. Others offered more cautious support, applauding some aspects of the Plan (e.g., infrastructure investments) but expressing concern that the Administration has not taken more aggressive export control measures.

In contrast, critics raised concerns about the Plan’s recommendations weakening environmental regulations to facilitate the construction of AI-related infrastructure and its emphasis on eliminating references to climate change and DEI from AI development frameworks. Others critiqued the Plan’s recommendations discouraging state-level AI regulation and its lack of attention to consumer protections.

Conclusion

While the Plan advances several priorities aligned with the Administration’s broader agenda – such as deregulation, opposition to DEI initiatives, and countering Chinese influence – it leaves many of the most contested questions in AI policy unresolved, including those related to intellectual property and the role of state-level regulation.
However, in remarks at a tech industry event, the President spoke in favor of federal rather than state AI regulation and in favor of allowing AI models to use copyrighted materials. Rather than providing a comprehensive framework, the Plan marks an incremental step in a still-evolving policy landscape. With Congress unlikely to take near-term action on many of these unresolved issues, the US continues to pursue a more fragmented approach to AI governance – one that stands in sharp contrast to the comprehensive regulatory regime in the EU under the AI Act.

Trusted Insights for What’s Ahead®