Illinois Bans AI Therapy: New Law Regulates Mental Health Tech

By futureTEKnow | Editorial Team

KEY POINTS

  • Illinois is the first state to ban AI from providing standalone mental health therapy, effective immediately.

  • The law strictly prohibits AI from making independent therapeutic decisions or offering direct clinical interaction—violators face fines up to $10,000 per offense.

  • Limited AI use is permitted only for admin tasks (like scheduling or billing) or analyzing anonymized data—always with explicit patient consent.

  • This move reflects a nationwide surge of similar legislative efforts as states fill the regulatory gap in health AI oversight.

Illinois bans AI from providing standalone mental health therapy. Discover what the new law means, how it will be enforced, and how it impacts AI in healthcare.

Why Did Illinois Ban AI Therapy? Understanding the Groundbreaking Law

Illinois just made history as the first state to ban artificial intelligence from providing independent mental health therapy services. As other states scramble to keep pace with the fast-moving world of health tech, Illinois set a bold precedent—AI can’t play therapist, and only licensed human professionals can provide direct mental health treatment.

What Does the New Law Actually Prohibit?

Signed into law by Governor JB Pritzker, the Wellness and Oversight for Psychological Resources Act (HB 1806) strictly forbids anyone—individual, company, or tech giant—from offering therapy or psychotherapy services through AI. Only licensed professionals are allowed to serve patients directly, and AI is now officially banned from making clinical decisions, generating treatment plans, or communicating directly with patients in a therapeutic context.

The law took immediate effect and grants the Illinois Department of Financial and Professional Regulation the authority to investigate violations. Individuals or businesses found in violation face fines of up to $10,000 per offense.

Why so urgent? Recent high-profile AI failures helped catalyze this tough stance, including a research test in which an AI “therapist,” chatting with a fictional recovering addict created by the researchers, suggested the character use meth. The consensus is clear: AI chatbots lack empathy and accountability, and unchecked use in sensitive mental health scenarios could do real harm.

Can AI Still Be Used for Anything in Mental Health? Yes—But With Limits

Illinois didn’t throw AI out the window entirely. The law recognizes its usefulness for administrative and supplementary tasks. Licensed therapists can still use AI for things like:

  • Scheduling appointments

  • Processing billing

  • Drafting therapy notes

  • Analyzing anonymized data for clinical trends

But anytime AI is used in a session for recording, transcription, or analysis, patients must be explicitly notified in writing and give revocable consent. Catch-all forms, implied digital “hover” agreements, and vague checkboxes are not valid consent.

Therapists remain fully responsible for all AI-assisted actions. This means no hiding behind tech—humans are still on the hook for care quality and confidentiality.

Who and What Is Exempt?

Not everything with a chatbot is banned. The law does NOT cover:

  • Faith-based counseling

  • Peer-support groups

  • Publicly available self-help and educational tools that don’t claim to offer therapy

What Are the Penalties for Breaking the Law?

The Illinois Department of Financial and Professional Regulation can investigate any alleged violation. If an individual or organization is found in violation, the penalty can be as high as $10,000 per incident, depending on the degree of harm and the context. Penalties follow a formal hearing, and payment is due within 60 days of a penalty order.

How Is Illinois Leading a National Trend?

Illinois isn’t alone—34 states have introduced over 250 AI health-related bills this year. But Illinois is the first to enact a law this sweeping. Other states vary in how tough they are:

  • Utah: Requires clear disclosure that users are talking to a bot, but doesn’t ban AI chat completely.

  • Nevada: Bans AI from providing counseling in public schools and certain health services.

  • New York: Requires AI companions to refer users in distress to suicide hotlines.

  • California, Texas, North Carolina, New Jersey: All have regulations or proposals targeting AI’s role in mental health.

This state-level movement is filling a vacuum: with little federal oversight, states have become “laboratories for policy solutions,” influencing future national standards.

Why Does This Matter?

There’s big money and high hopes around AI in therapy—from startups to major tech players, everyone’s been chasing the promise of scalable, automated help for those who can’t access traditional care. But as this new law makes clear, ethical, human-centered care still comes first in Illinois.

Mental health care isn’t just about data—it’s about empathy, trust, and nuanced judgment. Policymakers, professionals, and advocates hope this law sets the bar for responsible tech use nationwide.

futureTEKnow covers technology, startups, and business news, highlighting trends and updates across AI, Immersive Tech, Space, and robotics.
