Alert 02.12.25
White House Seeks Public Input on AI Action Plan: What Stakeholders Need to Know
The White House is gathering industry feedback on AI governance, giving stakeholders an opportunity to shape future policy.
By Aimee P. Ghosh, Craig J. Saperstein, Mia Rendar, Diana Obinna
12.19.25
Background: Federal AI Policy and State AI Laws
Earlier this year, President Trump revoked Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” issued under the prior administration, and replaced it with Executive Order 14179 (the “EO”), “Removing Barriers to American Leadership in Artificial Intelligence.” EO 14179 directs agencies to identify and roll back regulations that could act as barriers to artificial intelligence (AI) innovation and calls for the development of a national AI Action Plan focused on competitiveness and reduced regulatory burdens.
As Pillsbury previously explained in its AI Action Plan alert, this pivot signaled a shift toward minimal oversight and industry self-regulation, with the federal government de-emphasizing regulation and instead emphasizing AI leadership, reduced regulatory barriers and public–private collaboration.
In Congress, lawmakers have simultaneously wrestled with the question of federal preemption of state AI laws. The 2025 House budget reconciliation bill, widely known as the “One Big Beautiful Bill,” would have imposed a 10-year moratorium on enforcement of state and local AI laws, broadly suspending existing and future state AI frameworks across sectors such as employment, health care and financial services without immediately creating replacement federal standards. That moratorium language was ultimately stripped from the bill in the Senate, which voted 99–1 against penalizing states for enacting AI laws.
States, meanwhile, continue to legislate. California, Colorado, New York and Illinois have adopted or proposed comprehensive AI and algorithmic accountability laws addressing transparency, bias mitigation, documentation and sector-specific safeguards. Commentators estimate that more than 1,000 AI-related bills have been introduced across nearly every state in 2024–2025.
The AI framework in the United States differs greatly from that of the EU, which has taken a uniform, protective approach to AI regulation.
Overview of the Executive Order
The EO states that the Trump administration will advance its objective of removing barriers to United States AI leadership. It declares that “United States AI companies must be free to innovate without cumbersome regulation” and identifies (1) a “patchwork” of 50 different regulatory regimes, (2) the risk of “ideological bias” in models, and (3) impermissible state regulation of interstate commerce as the impetus for further action.
To address these issues, the EO directs the:
Notably, the above actions will not preempt state AI laws related to (i) child safety protections; (ii) AI compute and data center infrastructure; and (iii) state government procurement and use of AI.
How the EO Impacts State AI Laws
The EO does not directly amend, repeal or suspend any state AI law. State statutes and regulations remain effective unless and until they are changed by the relevant state, preempted by federal law or invalidated by a court.
The EO:
The EO, by itself, does not resolve the extent to which federal law will ultimately interact with specific state AI regimes. That will depend on how agencies implement the EO and how courts assess any resulting litigation or regulatory actions.
Key Considerations for Stakeholders
For companies that develop, deploy or rely on AI, particularly those operating in jurisdictions with active or emerging AI regimes, the EO raises several factual and planning considerations:
State AI Laws Continue to Apply, But May Be Challenged
State AI laws in California, Colorado and other jurisdictions continue to apply. At the same time, those laws may become the subject of DOJ litigation or other federal action prompted by the EO. Companies should be aware that the legal status of certain state AI requirements could change over time as litigation proceeds or as states and agencies respond to the EO.
Contracts related to the use of AI should account for potential changes in law and delineate responsibility for monitoring, compliance and adaptation as those changes occur.
Possible Implications for Funding-Related Projects
Entities that rely on state-administered federal funds, including broadband and infrastructure providers, should consider how the EO’s direction to agencies regarding funding conditions could affect projects that involve BEAD or other discretionary funds. Section 5 of the EO contemplates that states may be asked either not to enact certain AI laws or, for existing laws, to enter into binding agreements with federal agencies not to enforce those laws during the period in which they receive funding.
Interaction with Internal AI Governance
The EO focuses on the federal-state allocation of authority rather than on detailed technical standards for AI systems. Companies that have developed internal AI governance and compliance programs to meet state AI obligations will need to track how those obligations evolve but may find that many of the underlying risk-management practices remain relevant irrespective of changes in specific legal provisions.
Recommended Actions
Although many details will depend on how agencies implement the EO and how courts respond to any litigation, companies can consider the following steps now:
Identify relevant state AI laws. Catalog which state AI, automated decision-making and algorithmic accountability regimes apply to your operations, focusing on jurisdictions that have enacted comprehensive frameworks.
Assess funding dependencies. Determine whether your organization or key projects rely on federal or state-administered funding that could be affected by AI-related funding conditions.
Monitor federal and state developments. Track implementation of the EO, including:
- DOJ announcements regarding the AI Litigation Task Force;
- Commerce’s report on state AI laws and any funding-related notices; and
- FCC/FTC developments and any legislative proposals relating to AI and preemption.
Continue to follow state legislative and regulatory activity, as states may respond to the EO in a variety of ways.
Coordinate with counsel. Work with legal counsel to review how the EO may intersect with existing contractual obligations, regulatory requirements and AI-related risk management frameworks, and to prepare for potential changes in the legal landscape for state AI laws.
Advocate for workable AI laws and regulations. Retain government affairs experts to influence federal agency, congressional and state-level legislative and regulatory activity related to AI. In particular, a government affairs team that can coordinate your organization's response to the many AI-related legislative and regulatory actions across the states and the federal government is critical.
Pillsbury’s Technology Transactions team is available to help clients assess how the EO may affect their AI arrangements, update contracting and governance frameworks, and negotiate vendor and customer agreements in light of evolving state and federal AI regimes. Pillsbury’s Government Law & Strategies team is available to advise clients on the implications of this policy shift, assist in drafting public comments, and develop and execute an advocacy strategy to influence federal and state AI policy.