On April 1, 2026, Governor Bill Lee signed Senate Bill 1580 into law, making Tennessee one of the first states to specifically regulate how AI can be marketed and used in mental health care. The new law takes effect July 1, 2026.
What Does the New Law Do?
The new law is short and straightforward, providing that a person who develops or deploys an artificial intelligence system cannot advertise or represent to the public that such a system is, or is able to act as, a qualified mental health professional.
Artificial intelligence is broadly defined to mean “models and systems capable of performing functions generally associated with human intelligence, including reasoning and learning.” Qualified mental health professionals are defined in existing Tennessee mental health law to include certain licensed psychiatrists, psychologists, psychological examiners, social workers, and marital and family therapists.
The new law establishes a private right of action under the Tennessee Consumer Protection Act of 1977. A violation constitutes an unfair or deceptive act or practice and is subject to a civil penalty of $5,000 per violation.
Bottom line: if you build or sell an AI product, you cannot tell people, or imply, that it can do the job of a licensed therapist or counselor. If you do, you may face real financial consequences.
Why Does This Matter?
The recent proliferation of AI-powered chatbots and wellness apps has created a gray zone in mental health care. Some products are genuinely helpful, whereas others may steer vulnerable people away from licensed providers who are trained to diagnose and treat serious conditions. Tennessee’s new law draws that line directly: you can build AI tools, but you cannot pass them off as the real thing.
The Bill in Context
SB 1580 took a narrower approach to regulating AI in the mental health care space than another bill taken up in the same legislative session, House Bill 1455.
HB 1455 defined artificial intelligence much more broadly to expressly include chatbots. It also prohibited a more expansive range of conduct: training an AI system to act as a licensed mental health professional, to simulate human interaction in a way that could lead users to feel they had a friendship with the AI, to encourage users to isolate from family or caregivers, or to simulate a human being in appearance, voice, or mannerisms.
Perhaps most strikingly, a violation of HB 1455 would have been classified as a Class A felony, and harmed individuals could have brought a civil lawsuit seeking actual damages including emotional distress, or liquidated damages of $150,000, plus punitive damages and attorney’s fees.
SB 1580 is more measured because it zeroes in on the most clearly harmful and definable conduct: making false or misleading representations to the public about what an AI system is and can do. This is classic consumer protection territory that is well understood by courts and regulators.
The Bottom Line
SB 1580 does not ban AI from the mental health space; rather, it simply says that AI cannot be fraudulently marketed as something it is not. For patients and families navigating mental health care, that is meaningful protection. And for policymakers, it reflects a recognition that regulating emerging technology requires precision: rules that are too broad risk being struck down, going unenforced, or chilling innovation, while rules that are too narrow may fail to protect the people who need protection most. Tennessee appears to have chosen a carefully targeted starting point while leaving the door open for more to follow.