Authorities rescued a 9-year-old boy who had been locked inside his father’s utility van in eastern France for about two years, beginning in 2024; the boy is safe and investigators are examining the circumstances of the confinement.
A Los Angeles jury found Meta and YouTube liable for deliberately designing addictive features that harmed a young user, awarding $6 million in damages and sparking global calls for meaningful, child-protective design changes. Rights groups praised the ruling as a watershed for accountability, while some critics warn of potential free-speech and privacy implications as regulators consider broader protections beyond the courts.
A New Mexico jury found Meta Platforms violated state consumer-protection law by misleading users about safety on Facebook, Instagram and WhatsApp and enabling child exploitation, ordering $375 million in civil penalties; Meta says it will appeal as it faces broader scrutiny over youth safety and platform design.
A New Mexico state court jury found Meta liable for violating consumer protections by failing to shield children on Facebook, Instagram, and WhatsApp, ordering $375 million in damages. The case, rooted in an undercover operation and investigative reporting into child exploitation, also highlighted deficient reporting and overreliance on AI moderation. Meta plans to appeal as it faces two more child-safety trials and a separate federal suit, with potential changes like age-gating and encryption-design adjustments on the horizon.
A New Mexico jury ordered Meta to pay $375 million for harming children’s mental health and exposing minors to sexual content, finding the company violated state unfair-practices laws and prioritized profits over safety; Meta says it will appeal as related cases continue.
A New Mexico jury ruled Meta liable for consumer-protection violations and ordered $375 million in civil penalties for misleading users about platform safety and enabling harm, including child sexual exploitation. It is the first trial verdict holding Meta liable for harms on its platforms; Meta plans to appeal while the state seeks further penalties and stronger safeguards, such as enhanced age verification and restrictions on encrypted messaging for minors.
Two juries, one in New Mexico and one in California, are weighing lawsuits alleging that Meta failed to protect minors and that its platforms contributed to harm and addiction; verdicts could top $2 billion and influence future tech-liability cases, even as Meta defends its safety efforts and asserts immunity under Section 230.
A New Mexico state court jury found Meta liable for willfully violating the state's consumer-protection laws by failing to safeguard minors on Facebook and Instagram, ordering roughly $375 million in damages; a second phase will decide on public-nuisance funding and required app changes, such as stronger age verification and measures regarding encrypted communications.
A New Mexico jury ruled that Meta knowingly harmed children’s mental health and concealed what it knew about child sexual exploitation on its platforms, finding thousands of violations of the state’s Unfair Practices Act and imposing penalties totaling $375 million. The jury also found that Meta made false or misleading statements and engaged in unconscionable trade practices. A second phase could determine remedies; Meta plans to appeal. The case is part of a broader wave of lawsuits over social-media platforms’ impact on youth.
A Ramapo, NY postal worker was arrested after video showed him shoving a four-year-old Hasidic boy to the ground; he was charged with endangering the welfare of a child and attempted assault in the third degree, with officials condemning the attack.
Meta will deprecate end-to-end encryption for Instagram messages starting May 8, citing low adoption and directing users to WhatsApp for encrypted chats, while Messenger’s E2EE remains available. Internal safety staff warned that default encryption could hinder detection and reporting of abuse, and internal forecasts suggested a steep drop in referrals to child-protection services, contributing to a slow and incomplete rollout. This marks the first time a major platform has rolled back encryption, against a backdrop of global regulatory pressure and ongoing child-safety concerns, highlighting the clash between user privacy and platform safety in private communications.
Los Angeles County filed a lawsuit against Roblox alleging the platform exposes children to sexual content and predators due to weak moderation and age verification, while Roblox defends its safety safeguards amid ongoing scrutiny of tech platforms' handling of youth safety.
Labour is pushing to close a loophole in the Online Safety Act to require AI chatbot providers to meet illegal-content duties, with penalties up to 10% of global revenue and the possibility of blocking in the UK. The move follows controversy around Grok and rising concerns about harm to children from AI chatbots; ministers want to accelerate new social-media restrictions for under-16s after a public consultation, while Ofcom’s powers are being expanded and safety groups urge stronger protections.
Opening statements kick off a landmark LA County trial accusing Meta’s Instagram and Google’s YouTube of deliberately addicting children, backed by internal emails and studies. TikTok and Snap settled, leaving Meta and YouTube to defend themselves in a bellwether case (KGM) that could shape thousands of similar suits. The eight-week trial may influence whether First Amendment or Section 230 defenses apply to platform liability; Meta and YouTube deny the allegations and point to their safeguards, and CEO Mark Zuckerberg is expected to testify. Related cases in New Mexico and Oakland, plus rising global policy action, reflect broad scrutiny of how social media affects young users.
Discord says it will roll out a platform-wide safety update in March that provides a teen-appropriate experience by default; unlocking adult content and access to age-restricted spaces will require one-time age verification (either a selfie video for age estimation or a government ID check with vendor partners). Unverified accounts will see blurred sensitive content and be blocked from age-restricted channels, servers, and app commands, while DMs from unknown users will go to a separate inbox. Some users may need multiple verification steps, and Discord plans future options like an age-inference model; the company is also forming a Teen Council to help shape teen safety policy.