AI Prohibited from Offering Psychological Support in Illinois
In a significant move towards safeguarding the mental health of their citizens, several states have enacted new laws regulating the use of Artificial Intelligence (AI) in therapy and counseling. Illinois, Nevada, Utah, and New York have all taken steps to ensure that AI serves purely as a support tool for licensed professionals, not as a substitute for, or independent provider of, mental health care.
Illinois: The Wellness and Oversight for Psychological Resources Act
Illinois Governor JB Pritzker has signed the Wellness and Oversight for Psychological Resources Act into law. Effective August 2025, the act prohibits AI systems from independently making therapeutic decisions, interacting directly with clients in therapy, or generating treatment plans without licensed professional oversight. AI can assist with administrative tasks such as scheduling, billing, and anonymized data analysis, but cannot replace a licensed therapist. Patients must give clear, written consent for any AI use involving session recording or transcription. Violations can lead to civil penalties of up to $10,000 per incident. Exceptions include faith-based counseling and peer support services that do not claim to offer psychotherapy.
Nevada: AI Banned from Therapy Services
Effective July 1, 2025, Nevada has banned AI systems from providing services that would constitute mental or behavioral health care. AI tools may only be used for administrative support. Unlike Illinois or Utah, Nevada does not mandate disclosures to patients about AI use.
Utah: Regulations on Mental Health Chatbots
Utah has also adopted restrictions on AI use in mental health, focusing on limiting AI's direct therapeutic application and requiring disclosure. The state's regulations on mental health chatbots require companies to disclose that users are interacting with an AI; disclose ads, sponsorships, or paid relationships; refrain from using user input for targeted advertising; and restrict the sale of users' individually identifiable health information.
New York: Restrictions on AI Involvement in Direct Mental Health Therapy
New York has enacted legislation that restricts AI involvement in direct mental health therapy and likely requires disclosure when AI is used to assist care, aligning with Illinois and Utah's approaches.
These laws aim to protect patients from unlicensed AI therapy, mandate professional supervision and informed consent, and limit AI's role to administrative or auxiliary functions. Texas has also regulated AI systems, prohibiting those that incite self-harm or criminal activity, which may indirectly affect mental health AI applications.
The American Psychological Association (APA) has warned that AI posing as a therapist could put the public at risk, following two lawsuits filed by parents whose children used chatbots that allegedly claimed to be licensed therapists. In one case, a boy died by suicide after extensive use of one such app. In the other, a child attacked his parents. The APA has published a blog post detailing these concerns.
In a press release, Mario Treto Jr., Secretary of the Illinois Department of Financial and Professional Regulation, stated that the people of Illinois deserve quality healthcare from real, qualified professionals, not computer programs. Separately, a new law in New York will require AI companions to direct users who express suicidal thoughts to a mental health crisis hotline, effective November 5, 2025.
As other states continue to take action on AI in healthcare, the trend is clearly towards protecting patients and ensuring that AI serves as a supportive tool for licensed professionals, not as a substitute for, or independent provider of, mental health care.
- Illinois, Nevada, Utah, and New York have enacted new laws regulating the use of Artificial Intelligence (AI) in therapy and counseling, with Illinois passing the Wellness and Oversight for Psychological Resources Act.
- AI systems in Illinois are prohibited from independently making therapeutic decisions, directly interacting with clients in therapy, or generating treatment plans without licensed professional oversight as of August 2025.
- Nevada has banned AI systems from providing mental or behavioral health care services and only allows them for administrative support.
- Utah has passed regulations on mental health chatbots, requiring companies to disclose AI use, restrict ads, and protect users' individually identifiable health information.
- New York has also enacted legislation that restricts AI involvement in direct mental health therapy and requires disclosure when AI is used to assist care.
- Amid rising concerns that AI posing as a therapist threatens public safety, the American Psychological Association (APA) has published blog posts detailing these issues, citing examples of potential harm, such as a boy who died by suicide after extensive use of an app that allegedly claimed to be a licensed therapist.