⚖️ Texas investigating AI therapy

Texas Targets AI Chatbots for Misleading Mental Health Claims

Texas Attorney General Ken Paxton has launched an investigation into Meta AI Studio and Character.AI, accusing the platforms of potentially engaging in deceptive trade practices by marketing themselves as mental health tools. The probe follows concerns that AI chatbots could mislead children and vulnerable users into believing they are receiving legitimate therapeutic care when they are in fact interacting with algorithmically generated responses. Paxton’s office emphasizes that some AI personas, including popular “therapist” characters on Character.AI, are being used by minors despite lacking professional oversight or credentials.

Do you ever wish you could connect with one person who could drastically improve your business? Whether it's an advisor, investor, customer, spokesperson or key hire, now you can. I recently started a new job, supporting promising Canadian startups in securing their first $1-$15M in funding. The #1 way I have connected with founders is through Boardy. Boardy is an AI networking agent on WhatsApp that connects you to people in his network based on your goals. Thanks to Boardy I’ve met so many great founders that I never would have otherwise. Boardy has already helped founders close millions in deals through intros like this. If you want to be one person closer to your goals, message Boardy on WhatsApp.

This is sponsored content

Navigating Regulatory Scrutiny on AI Products

The investigation highlights the growing regulatory focus on AI, particularly when services are accessible to minors. Even with disclaimers stating AI responses are not professional advice, startups may face scrutiny if their products are perceived as misleading or potentially harmful. Both Meta and Character.AI log user interactions and leverage them for algorithmic training and targeted advertising, which raises additional privacy and consumer protection concerns. Founders should recognize that regulators are increasingly attentive to both the marketing and data practices of AI platforms.

How Startups Should Respond

For startups building AI-driven tools, this case underscores the need for clear labeling, robust age restrictions, and transparent privacy policies. Founders should implement proactive safeguards, including parental controls, usage monitoring, and explicit disclaimers, to mitigate legal risks. Additionally, tracking and minimizing data collection from minors can reduce exposure to state investigations and potential civil penalties. Being prepared to respond to civil investigative demands and demonstrating a commitment to ethical AI deployment will be essential for navigating this evolving regulatory landscape.

🪙 Don’t miss out on this modern-day gold rush: this company holds 29,000 hectares (over 71,000 acres) at the heart of it.*

In addition to our newsletter, we offer 60+ free legal templates for companies in the UK, Canada, and the US, including employment contracts, investment agreements, and more.

* This is sponsored content