This post is part of our series, The Top 2025 Privacy and Security Issues Still Shaping Healthcare, in which our team of attorneys provides essential strategies and insights for healthcare privacy and security.
Innovations in artificial intelligence (AI), including advances in generative AI (GenAI) and machine learning, provide new opportunities for healthcare providers, promising improved efficiency in areas such as medical record keeping and billing, as well as advances in clinical decision-making, diagnosis, and treatment.
Federal Oversight and the Push for Unified AI Policy
While the use of artificial intelligence in healthcare offers significant benefits, longstanding risks remain for providers, including potential HIPAA and FTC Act violations, malpractice liability, and data breaches. The rapid evolution and complexity of AI heighten these concerns, prompting state legislatures to actively pursue regulation in this area.
On December 11, 2025, the White House issued an executive order (EO) titled “Ensuring a National Policy Framework for Artificial Intelligence.” The EO establishes a federal strategy for a unified national AI policy. Its main purposes are to promote AI innovation, keep regulation minimal and consistent, and prevent states from adopting their own rules that conflict with national standards. These goals lay the groundwork for future federal standards that may override state laws.
The EO directs the Secretary of Commerce to conduct a comprehensive review of state AI laws within 90 days. This review must identify laws considered “onerous,” such as those that require AI systems to alter truthful outputs, mandate disclosures that could violate constitutional rights, or impose requirements perceived as burdensome or as inhibiting innovation. Laws identified as conflicting or burdensome may be flagged for potential litigation by a designated task force. States with laws that conflict with the federal framework may become ineligible for certain federal funds, including those from the Broadband Equity, Access, and Deployment (BEAD) Program. The goal of the EO is to incentivize states to align with the federal approach.
However, it is important to note that only Congress has the authority to enact true preemption through legislation. The EO cannot invalidate state law.
In the meantime, the healthcare industry faces a fragmented and highly varied regulatory landscape from state to state. For example:
- Comprehensive AI Laws: States like Colorado (Colorado AI Act), Texas (Texas Responsible Artificial Intelligence Governance Act, or TRAIGA), and Utah (Utah AI Policy Act, or UAIPA) have enacted sweeping statutes that set out baseline requirements for AI governance, risk management, and accountability.
- Sector-Specific and Issue-Specific Laws: Other states have passed narrower laws addressing AI in specific healthcare contexts, such as Arizona’s HB 2175 (requiring human review of insurance denials), Illinois’s HB 1806 (regulating the use of AI in therapy and psychotherapy services), and California’s regulations targeting AI-related employment discrimination.
What Are the Priorities of Emerging AI Legislation?
To provide a high-level overview of what a healthcare AI compliance program might look like, it is helpful to focus on the issues legislators are prioritizing.
- Avoiding Discrimination
AI systems have the potential to introduce biases, resulting in discriminatory outcomes. The issue of AI-driven discrimination is central to the Colorado AI Act, which would require healthcare providers utilizing AI for significant, “high risk” decisions—including clinical decision-making, billing, or other care-related activities—to comply with specific regulatory mandates:
- Exercise reasonable care to prevent illegal discrimination when deploying high-risk AI;
- Implement robust AI risk management policies and programs; and
- Conduct detailed AI impact assessments.
- Preserving Clinical Decision-Making Authority
Protecting the central role of licensed and heavily regulated healthcare professionals appears to be another core objective of state legislators. States have taken varied approaches to this task.
Some laws have imposed a positive obligation on providers to oversee the use of AI. For instance, Texas SB 1188 allows practitioners to use AI to support diagnosis and treatment, but mandates that providers review all AI-generated records according to state medical board standards. Similarly, Illinois HB 1806 permits providers offering therapy and psychotherapy services to use AI to develop clinical recommendations and treatment plans, but only if the provider reviews and approves them.
Other states have taken a broader approach, banning GenAI outright as a substitute for mental health therapy and instead requiring that these technologies support, rather than replace, healthcare professionals. Illinois HB 1806, for example, bans unlicensed individuals from using AI to provide therapy or psychotherapy. Nevada’s AB 406 prohibits AI developers from programming AI to provide mental health treatment, while California AB 489 forbids AI from using terms that falsely suggest care is being provided by a human healthcare provider.
- Ensuring Transparency
Where states have not prohibited AI’s involvement in healthcare outright, their legislation has instead leaned heavily on requiring disclosure of AI use to the patient.
For instance, Texas’s SB 1188 and TRAIGA require healthcare providers to inform patients when AI is used in their treatment, but do not detail what the notice must include. Returning to the narrower context of GenAI’s use in mental health, Utah’s HB 452 does not prohibit the use of AI-powered mental health chatbots but requires that these systems clearly and conspicuously disclose to the user that the chatbot is an AI and not a human. The focus of California AB 3030 is even narrower, requiring disclosure only when AI generates clinical communications for patients.
The Colorado AI Act proposes some of the most stringent disclosure requirements. Not only does it mandate that patients be notified before high-risk AI is used in their care, but if the AI is used to make an adverse decision that has a substantial effect on the patient’s receipt of healthcare services, additional disclosures are required. For instance, if AI is used to identify patients with drug-seeking behaviors and informs a resulting decision to deny medication, the Colorado AI Act would require providers to explain the decision, the AI’s role, and the data sources involved, as well as provide an appeals process.
Practical Steps for Healthcare Providers
With the understanding that regulators are prioritizing, among other things, safeguards against discrimination, the continued involvement of healthcare providers in clinical decision-making, and transparency in the use of AI, what steps should providers take to improve their compliance posture?
- Inventory and Classify AI Systems: Identify all AI systems in use, assess their risk level, and determine which state and federal laws apply.
- Develop AI Governance Policies: Implement policies for risk management, bias mitigation, and ongoing monitoring of AI systems.
- Ensure Clinical Oversight: Establish clear protocols for provider review of AI-generated outputs, especially in diagnosis and treatment.
- Enhance Transparency: Prepare clear and conspicuous disclosures informing patients when AI is used, tailored to each state’s content and placement requirements, and ensure staff are trained to communicate effectively about AI use.
- Monitor Legal Developments: Stay up to date on new and evolving state and federal AI regulations and be prepared to adapt compliance programs as the legal landscape changes.
Contact us
For further details or additional information, please contact Noreen Vergara or another member of the Husch Blackwell Healthcare Privacy and Security Work Group.