TL;DR — On December 11, 2025, President Trump signed an Executive Order directing the federal government to build a “minimally burdensome” national framework for AI and to push back against state AI laws the Administration views as harmful to innovation.
The EO takes a novel approach via Executive Branch authority, creating an AI Litigation Task Force and directing the U.S. Department of Commerce to evaluate state AI laws and identify "onerous" ones (explicitly citing laws that require models to "alter their truthful outputs"). As the stick, the EO seeks to tie federal funding and grants to state compliance, and directs the FCC and FTC to consider federal reporting, disclosure, and preemption positions. The EO will almost certainly produce litigation and political pushback rather than any immediate regulatory clarity, and it has the potential to hamper AI innovation in the long term.
In this piece, we’ll break down what’s included in the EO, the potential flashpoints and ramifications, why this matters for broader AI adoption, and what actions businesses can take today to stay ahead of the curve while this battle rages in the courts.
What the EO Does
- Directs the Attorney General to create an AI Litigation Task Force (within 30 days) to identify and challenge state AI laws on Commerce Clause, preemption, or other grounds.
- Orders the Secretary of Commerce to publish an evaluation of state AI laws (within 90 days) and identify “onerous” laws, including those that require models to alter truthful outputs or that may raise First Amendment concerns.
- Obligates Commerce to issue a Policy Notice conditioning BEAD broadband funds on state compliance, and directs agencies to assess whether discretionary grants should be conditioned on states not enacting or enforcing certain AI laws.
- Requires the FCC to initiate a proceeding to consider a federal reporting and disclosure standard that would preempt conflicting state laws.
- Requires the FTC to issue a policy statement explaining when state laws requiring alterations to truthful AI outputs are preempted by the FTC Act's prohibition on deceptive acts or practices.
- Directs the Administration to prepare legislative recommendations and work with Congress toward a uniform federal AI framework while carving out certain state prerogatives (child safety, state procurement, some infrastructure).
Why This Matters
The EO is an aggressive Executive Branch attempt to replace a growing patchwork of state AI rules with a single federal floor, as part of the Administration's AI Action Plan released this summer. It aims to remove barriers to AI innovation in service of remaining nationally competitive with China and other nation-states in what's shaping up to be the space race of the 21st century. This is the second attempt at instituting a moratorium, after a previous Senate Republican attempt this summer failed 99-1, and it remains a contentious topic on both sides of the aisle.
The EO comes after a revival attempt was contemplated earlier this month as an addition to the must-pass National Defense Authorization Act (NDAA) or tied to other appropriations in the lead-up to short-term appropriations expiring in January. Congress largely balked at that effort, and Trump opted to take unilateral action instead.
If successful, it would create a national compliance environment for frontier model builders and providers, as well as Big Tech as a whole. If it fails, litigation and state challenges will create years of multi-jurisdictional uncertainty. For businesses, the immediate effect is likely more legal risk and less operational certainty at a time when buyer trust in AI is already fragile.
Who’s Impacted by This EO, and When?
The first step of the EO establishes the AI Litigation Task Force within 30 days; within 90 days, the Commerce evaluation will identify which states and which state laws the Administration views as contradicting the goals of the EO (enforcing the "policy of the United States to sustain and enhance the United States' global AI dominance through a minimally burdensome national policy framework for AI"). Realistically, this broad policy could be applied to any and all state-level AI laws or to a specific set of targets. But the Administration likely has its sights set on California's recent slew of AI laws targeting model developers, such as SB 53, and other politically convenient targets like Colorado, and it may also be sending up a warning flare to New York should Governor Hochul sign the R.A.I.S.E. Act or successor proposals. However, it's notable that this EO isn't completely preempting state law; there are carveouts for state AI laws such as those that address child safety, infrastructure, state-level procurement of AI solutions, and other areas as the Administration deems appropriate.
It's conceivable that on Day 90 or Day 91, the states deemed in contradiction may face Executive Branch action, including impacts to BEAD funding and lawsuits. But multiple states aren't waiting; they are already preparing a coordinated challenge in the courts, arguing that the approach infringes on states' rights and that federal action unlawfully constrains state legislative authority.
As of the EO's publication, no state or business is directly impacted; it's still business as usual, and existing state laws remain in effect. Until this is resolved in the courts or through further Executive action, the status quo remains.
Section by Section: An Analysis
Section 1 (Purpose) frames state regulation as an innovation threat and specifically criticizes laws (citing Colorado's algorithmic-discrimination language) that the Administration says could force models to produce "false" outputs. This political framing signals that easing regulatory burdens will be prioritized over state experimentation on harms.
Section 2 (Policy) states the high-level objective: sustain U.S. AI dominance with minimal regulatory burden. It is useful rhetoric but offers little operational clarity.
Section 3 (AI Litigation Task Force) institutionalizes litigation as a policy instrument. Expect the Task Force to identify state laws for challenge and to coordinate federal suits that will likely produce uneven judicial outcomes across jurisdictions.
Section 4 (Evaluation of State AI Laws) requires Commerce to identify laws that require models to alter truthful outputs or that could violate the First Amendment. The EO’s characterization of anti-discrimination interventions as forcing “false” outputs will be heavily contested; courts may reject broad readings equating mitigation of discriminatory impact with compelled false speech.
Section 5 (Restrictions on State Funding) conditions BEAD and potentially other discretionary grants on state compliance. Using federal funding levers to shape state law raises anti-commandeering and Spending Clause concerns that are likely to be litigated.
Sections 6 and 7 (FCC & FTC roles) place the FCC and FTC at the center of the federal push to set disclosure and deceptive-practices norms. Both agencies’ statutory authority to regulate AI model providers is contestable; recent Supreme Court precedent narrowing agency power will complicate aggressive rulemaking.
Section 8 (Legislation) asks for legislative recommendations but acknowledges political limits and preserves certain state authorities. Comprehensive federal AI legislation remains unlikely in the near term.
Likely Legal Flashpoints
As we mentioned, the EO will be subject to numerous lawsuits. Legal issues at play include:
- Preemption without explicit congressional direction. Congress generally has the authority to override state laws with federal laws. However, it's unclear whether the Executive Branch can preempt state laws without Congress. The EO directs federal agencies to look to existing law for a preemptive hook, but if the statutory language isn't explicit, the Executive Branch may not be able to read it into the law.
- Anti-commandeering of state legislatures. The federal government cannot force states to act in a specific manner, and the doctrine cuts both ways: states also cannot be forced to refrain from acting. There have also been court battles over conditioning federal funds on states enacting federal policy, which is exactly what the EO attempts to do with BEAD funding.
- Interstate commerce authority. Congress generally regulates activities "in interstate commerce," and those activities have expanded considerably in recent decades. However, the EO would be asking courts to hold that Congress has exclusive authority to regulate AI despite Congress not having passed a law to support that claim. Moreover, the Trump Administration will assert that states are regulating activities in other states, which they generally cannot do (under the dormant commerce clause).
- Agency rulemaking authority. The EO directs the FCC to begin a proceeding on federal AI model standards and disclosures that override state laws. The Communications Act would not likely support the FCC’s endeavors. The Supreme Court has also restricted agency rulemaking authority, which will make it more difficult for the FCC to act. The FTC is also instructed to look at ways to preempt state law, but the FTC Act does not have a broad pre-emption for state laws. The FTC would need to find a direct conflict with these states’ laws to assert that pre-emption exists.
- Compelled speech under the First Amendment. Disclosure laws are already heavily litigated because of First Amendment compelled-speech issues (i.e., being forced to say something when you ordinarily would not). There is a reasonable argument that model providers should disclose certain pieces of information, but courts could decide the best mechanism for that is private party contracts rather than regulations.
Trustible’s Take
- Short-term uncertainty is the most likely outcome of this EO. Litigation and agency reviews will keep the regulatory landscape in flux for the immediate future. The EO does not create the operational trust signals organizations need to be confident in their deployment and use of AI, such as clear liability rules or safe harbors, so buyer caution is likely to continue, if not worsen. Organizations will likely respond defensively with stronger contractual protections, deeper governance, and additional insurance for assurance and liability mitigation. Moreover, startups will be more exposed than larger firms that can absorb legal and compliance costs, which creates an innovation barrier.
- The Trump Administration has framed this EO as "preempting" state laws. However, it's unclear whether the Executive Branch can preempt state laws without Congress. As we note in our analysis, while the EO directs agencies to look to existing law for a preemptive hook, if the language isn't explicitly there, the Executive Branch may not be able to read it into the law.
- The legal battles will almost certainly focus on the Interstate Commerce implications, but don't sleep on anti-commandeering arguments. Essentially, the federal government cannot force states to do something, and the doctrine cuts both ways: states cannot be forced NOT to do something either. Stay tuned to see if courts will address this particular issue.
- The rallying cry continues to be unleashing AI innovation, but is this EO achieving that end? The current landscape heavily favors the “buyer beware” mentality, which does not give businesses sufficient assurances to integrate AI into their operations. Yes, some are protecting themselves with new contractual provisions or AI insurance, but that does not close some of the larger liability gaps.
What Businesses Should Do Now
- Building trust is key. Companies should be laser focused on demonstrating trust in their AI tools with customers and end users, because confidence in AI is still relatively low.
- Strengthen contracts. Companies should take nothing for granted and do comprehensive reviews of their contract templates to make sure clauses addressing warranties and indemnities are updated for AI tools and that responsibilities for AI management and oversight are clearly stated.
- Document governance. Maintain detailed records on AI testing and evaluations, red-team reports, model cards, and audit trails. It is also important to provide public disclosures about the types of documentation you maintain.
- Design for layered compliance. Companies should assume state-level rules will remain in effect, which means they should continue implementing their compliance programs.
- Engage with rulemaking. Participate in public forums when it makes sense, such as filing comments in proceedings that may stem from this EO or joining a coalition of like-minded businesses.
- Review insurance. Cyber insurance alone is not enough; companies that use AI to support their business operations should consider AI-specific insurance.
Bottom Line
The Administration’s EO is the most recent action in a string of many that tests the bounds of federalism, missing the mark on one fundamental truth: that the solution to creating a more dynamic, competitive, and pro-innovation AI economy is collaboration and responsible regulation in partnership with states – not via executive fiat, and not without Congress. Pitting states and the federal government on sides of this debate, rather than partners, actually reduces competitiveness, introduces friction, and does the exact opposite of what the EO sets out to achieve.
The EO is a novel federal effort to set a national floor for AI policy, using litigation, funding conditions, and agency proceedings to displace state rules. But because it rests on legally contested pillars, it is more likely to produce years of litigation and regulatory friction than immediate clarity. Organizations should treat this moment as an escalation of regulatory risk: build governance that is robust enough to operate under a shifting legal map, yet flexible enough to adapt to ongoing innovation.