THE BIG PICTURE
On March 11, two deadlines from President Trump’s December 2025 executive order on AI governance come due. The Federal Trade Commission must publish a policy statement on how the FTC Act applies to AI — including whether state laws that require AI systems to alter their outputs are preempted by federal law. The same day, the Department of Commerce must publish an evaluation identifying state AI laws it considers “onerous” and potentially incompatible with the administration’s policy of “minimally burdensome” AI regulation.
These deadlines matter because Congress has not passed comprehensive AI legislation, and the executive branch is filling the vacuum. Meanwhile, the White House released its National Cyber Strategy, the House Energy and Commerce Committee advanced five cybersecurity bills through markup, and the most ambitious federal AI bill — Senator Blackburn’s TRUMP AMERICA AI Act — remains a discussion draft that hasn’t been formally introduced. For businesses developing, deploying, or affected by AI systems, the regulatory landscape is about to shift — and the direction will be set by executive action, not legislation.
MARCH 11: THE DEADLINES THAT MATTER
President Trump signed Executive Order 14365, “Ensuring a National Policy Framework for Artificial Intelligence,” on December 11, 2025. The order established a 90-day clock for several federal agencies to take concrete action. That clock expires on March 11, 2026.
FTC Policy Statement on AI
The FTC must publish a policy statement explaining how the FTC Act’s prohibition on unfair and deceptive practices applies to AI models. Critically, the statement must address whether state laws that require AI systems to alter their “truthful outputs” — such as algorithmic bias mitigation requirements — are preempted by federal consumer protection law. The administration’s legal theory is that if a state law forces an AI model to produce results that deviate from its underlying data, that alteration constitutes deception under the FTC Act.
This is a significant interpretive move. It effectively characterizes compliance with state algorithmic fairness requirements as a form of federally prohibited deceptive conduct — a reversal of prior FTC guidance that treated algorithmic bias itself as the liability risk. The policy statement is interpretive rather than a binding regulation, and courts may reject the premise. But it will shape enforcement posture and signal to businesses what the federal government considers compliant behavior.
Commerce Department Evaluation of State AI Laws
The Commerce Department must publish an evaluation identifying state AI laws deemed burdensome or in conflict with the executive order’s policy goals. At a minimum, the evaluation must flag state laws that require AI models to alter “truthful outputs” or mandate disclosures that may violate First Amendment protections. Laws flagged in this evaluation may be referred to the DOJ’s AI Litigation Task Force — which has been operational since January 10 — for potential federal court challenges.
The executive order also ties federal funding to state compliance. The Commerce Department will assess whether states with “onerous AI laws” should be made ineligible for non-deployment funds under the Broadband Equity, Access, and Deployment (BEAD) Program — $42 billion in previously allocated broadband infrastructure funding.
What This Means: The March 11 actions will not resolve the regulatory landscape, but they will define it. Businesses will have, for the first time, a federal agency position on how existing consumer protection law applies to AI. That position will either accelerate or complicate compliance planning depending on how aggressively the FTC frames preemption. Companies operating in states like Colorado, California, Texas, and Illinois should be monitoring this closely — their state-level compliance obligations may be challenged by the federal government within weeks.
THE FEDERAL PREEMPTION FIGHT
The March 11 deadlines are the latest chapter in a running fight over who governs AI: the federal government or the states. Understanding the sequence matters.
The pattern is clear. Congress cannot agree on AI legislation — every attempt at a moratorium has failed, and no comprehensive bill has advanced to markup. The executive branch is moving instead, using existing agency authority, litigation, and funding leverage to reshape the regulatory landscape from the top down.
The Paradox: The administration frames its approach as reducing regulatory burden. But the practical effect for most organizations is the opposite. We now face an indeterminate period where federal agencies will challenge state laws, state attorneys general will defend their authority, and courts will adjudicate competing claims. Building a compliance program around the assumption that federal preemption will prevail is a bet — not a certainty. The safer posture is to build governance frameworks flexible enough to adapt to whichever regime ultimately prevails.
THE NATIONAL CYBER STRATEGY (MARCH 2026)
Against this backdrop, the White House released President Trump’s National Cyber Strategy for America. The strategy is organized around six pillars. We highlight the three most relevant to businesses:
Pillar 2 — Common Sense Regulation: Streamline cyber regulations to reduce compliance burdens and address liability. Align regulators and industry globally. Emphasize privacy protections for American data. This pillar reinforces the administration’s deregulatory posture and suggests that future cybersecurity compliance frameworks will favor flexibility over prescriptive mandates.
Pillar 4 — Secure Critical Infrastructure: Identify, prioritize, and harden critical infrastructure including the energy grid, telecommunications, financial systems, data centers, water utilities, and hospitals. Secure supply chains and move away from adversary vendors. For energy companies, government contractors, and infrastructure operators, this pillar signals continued federal investment in sector-specific cybersecurity requirements.
Pillar 5 — Sustain Superiority in Emerging Technologies: Secure the AI technology stack. Deploy agentic AI for network defense. Implement AI-enabled tools to detect and deceive threat actors. Secure cryptocurrencies and blockchain. Promote post-quantum cryptography. This is the administration’s clearest statement that AI is a national security asset to be deployed offensively, not just a commercial technology to be regulated.
The remaining pillars cover offensive cyber operations against adversaries (Pillar 1), modernizing federal networks with zero-trust architecture and AI-powered defense (Pillar 3), and building a cyber workforce pipeline through academia, vocational schools, and the private sector (Pillar 6).
WHERE CONGRESS ACTUALLY STANDS
What’s Moving: Cybersecurity Bills Through E&C Markup
The House Energy and Commerce Committee marked up five cybersecurity and energy security bills on March 5, all of which passed with unanimous or near-unanimous votes.
These bills enjoy genuine bipartisan support — the vote margins speak for themselves. Cybersecurity for critical infrastructure is one of the few areas where both parties agree. Energy companies and government contractors should anticipate these requirements becoming law.
What’s Stalled: Comprehensive AI Legislation
No standalone AI regulation bill has advanced to committee markup in either chamber; the most notable proposals remain discussion drafts or early-stage introductions.
AI Bills in the Pipeline
Multiple AI-related bills have been introduced but remain in early committee referral stages. None have had hearings or markups. They reflect the contours of a debate that Congress has not yet resolved:
Regulating AI to prevent harm (predominantly Democratic sponsors):
Leveraging AI for practical applications (predominantly Republican sponsors):
Technical standards and transparency (bipartisan interest):
Reading the Landscape: The partisan pattern is instructive but not absolute. Democrats tend to propose guardrails — liability, civil rights protections, content integrity. Republicans tend to propose applications — innovation labs, regulatory streamlining, federal preemption of state rules. But child safety and creator protections have genuine bipartisan appeal, and Blackburn’s framework incorporates elements from both camps. The question is whether any of this can survive committee in a midterm year when floor time is scarce and the executive branch is already acting unilaterally.
WHAT THIS MEANS FOR BUSINESSES
The window for treating AI governance as optional has closed. Whether regulatory clarity comes from Congress, the executive branch, or the courts, the trajectory is toward more requirements — not fewer. The question is which requirements, from which authority, and on what timeline.
Three practical takeaways:
1. Monitor the March 11 publications closely. The FTC policy statement and Commerce evaluation will define the federal enforcement posture toward state AI laws, and companies operating in states like Colorado, California, Texas, and Illinois may see their compliance obligations challenged within weeks.
2. Do not build a compliance program on the assumption that federal preemption will prevail. That outcome is a bet, not a certainty, and it will be litigated.
3. Build governance frameworks flexible enough to adapt to whichever regime ultimately prevails — federal, state, or a court-brokered combination of both.
NEED MORE DETAIL?
Maceira Zayas Law tracks federal technology, cybersecurity, and AI policy developments. For questions about any item in this briefing, compliance planning, or the implications of federal AI governance actions for your business, contact:
Anthony O. Maceira, Managing Member
amaceira@mzls.com
Marc A. Maceira, President, Honra
marc@honra.io
© 2026 Maceira Zayas Law. All rights reserved. This briefing is provided for informational purposes only and does not constitute legal advice.