⚖️ Meta under investigation

Senators Zero In on Meta’s AI and Child Safety

Meta is facing a new wave of scrutiny after leaked internal documents revealed that its generative AI chatbots were once permitted to engage in “romantic” and “sensual” conversations with children as young as eight. The revelations, published by Reuters, triggered outrage and prompted Sen. Josh Hawley (R-MO), chair of the Senate Judiciary Subcommittee on Crime and Counterterrorism, to launch an investigation. In a sharply worded letter to CEO Mark Zuckerberg, Hawley demanded Meta turn over its “GenAI: Content Risk Standards” guidelines, including all drafts and approval records, by September 19. Fellow lawmakers like Sen. Marsha Blackburn (R-TN) have backed the probe, arguing the company has repeatedly failed to safeguard children online.

Do you ever wish you could connect with one person who could drastically improve your business? Whether it's an advisor, investor, customer, spokesperson, or key hire, now you can. I recently started a new job supporting promising Canadian startups in securing their first $1-$15M in funding. The #1 way I have connected with founders is through Boardy. Boardy is an AI networking agent on WhatsApp that connects you to people in his network based on your goals. Thanks to Boardy, I've met many great founders whom I never would have met otherwise. Boardy has already helped founders close millions in deals through intros like this. If you want to be one person closer to your goals, message Boardy on WhatsApp.

This is sponsored content

Why AI Safety Standards Are Becoming a Legal Fault Line

This investigation isn’t just about Meta—it’s part of a larger push to hold tech companies legally responsible for the unintended (or ignored) consequences of AI. By framing permissive chatbot interactions with minors as potential deception or exploitation, regulators are signalling that vague safety policies won’t cut it anymore. For founders, this is an early warning that “content risk standards” are no longer just internal documents—they’re potential evidence in court or before Congress. As lawmakers push for bills like the Kids Online Safety Act, startups building AI or consumer platforms should expect heightened scrutiny over how products can be misused, especially by or against minors.

What Founders Should Do Now to Stay Ahead

If you’re building with AI, don’t wait for a subpoena to clean up your policies. Document your product’s risk assessments, establish clear rules for edge cases (like minors engaging with your product), and implement active monitoring to catch violations early. Consider independent audits—not just as a compliance step, but as a trust signal to users, investors, and partners. Even if your product isn’t child-facing, regulators are beginning to expect safety-by-design baked into every stage of development. The lesson from Meta is clear: it’s far cheaper to build safeguards now than to face regulatory probes, reputational damage, and potentially sweeping restrictions later.

In addition to our newsletter, we offer 60+ free legal templates for companies in the UK, Canada, and the US, including employment contracts, investment agreements, and more.