Tag: Child Safety

All articles tagged with #child safety

Court Holds Social Platforms Accountable for Addictive Design in Landmark Case
technology · 14 days ago

A Los Angeles jury found Meta and YouTube liable for deliberately designing addictive features that harmed a young user, awarding $6 million in damages. Rights groups praised the ruling as a watershed for accountability and called for meaningful, child-protective design changes worldwide, while some critics warn of potential free-speech and privacy implications as regulators weigh broader protections beyond the courts.

New Mexico jury awards $375 million against Meta in child-safety case; Meta to appeal
business · 16 days ago

A New Mexico state court jury found Meta liable for violating consumer protections by failing to shield children on Facebook, Instagram, and WhatsApp, ordering $375 million in damages. The case, rooted in an undercover operation and investigative reporting into child exploitation, also highlighted deficient reporting and overreliance on AI moderation. Meta plans to appeal as it faces two more child-safety trials and a separate federal suit, with potential changes like age-gating and encryption-design adjustments on the horizon.

New Mexico jury orders Meta to pay $375 million for platform safety lapses
technology · 17 days ago

A New Mexico jury found Meta liable for consumer-protection violations and ordered $375 million in civil penalties for misleading users about platform safety and enabling harm, including child sexual exploitation. It is the first trial to find Meta liable for conduct on its platforms; Meta plans to appeal while the state seeks further penalties and stronger safeguards, such as enhanced age verification and restrictions on encrypted messaging for minors.

New Mexico jury orders Meta to pay $375 million over child-safety failures on Facebook and Instagram
business · 17 days ago

A New Mexico state court jury found Meta liable for willfully violating the state's consumer-protection laws by failing to safeguard minors on Facebook and Instagram, ordering roughly $375 million in damages; a second phase will decide on public-nuisance funding and required app changes, such as stronger age verification and measures addressing encrypted communications.

NM jury holds Meta liable for harming children’s safety
technology · 17 days ago

A New Mexico jury ruled that Meta knowingly harmed children’s mental health and concealed what it knew about child sexual exploitation on its platforms, finding thousands of violations of the state’s Unfair Practices Act, with penalties totaling $375 million. The jury also found that Meta made false or misleading statements and engaged in unconscionable trade practices. A second phase could determine remedies; Meta plans to appeal. The case is part of a broader wave of lawsuits over social-media platforms’ impact on youth.

Meta rolls back Instagram encryption, fueling a privacy-safety showdown
technology · 24 days ago

Meta will deprecate end-to-end encryption for Instagram messages starting May 8, citing low adoption and directing users to WhatsApp for encrypted chats, while Messenger’s E2EE remains available. Internal safety staff warned that default encryption could hinder detection and reporting of abuse, and internal forecasts suggested a steep drop in referrals to child-protection services, contributing to a slow and incomplete rollout. This marks the first time a major platform has rolled back encryption, against a backdrop of global regulatory pressure and ongoing child-safety concerns, highlighting the clash between user privacy and platform safety in private communications.

UK to Tighten AI Chatbot Rules Over Child Safety With Fines or Ban
technology · 1 month ago

Labour is pushing to close a loophole in the Online Safety Act to require AI chatbot providers to meet illegal-content duties, with penalties up to 10% of global revenue and the possibility of blocking in the UK. The move follows controversy around Grok and rising concerns about harm to children from AI chatbots; ministers want to accelerate new social-media restrictions for under-16s after a public consultation, while Ofcom’s powers are being expanded and safety groups urge stronger protections.

Big Tech Faces Landmark Trial Over Claims Social Apps Addict Children
law · 2 months ago

Opening statements kick off a landmark LA County trial accusing Meta’s Instagram and Google’s YouTube of deliberately addicting children, backed by internal emails and studies. TikTok and Snap settled, leaving Meta and YouTube to defend themselves in a bellwether case (KGM) that could shape thousands of similar suits. The eight-week trial may influence whether First Amendment or Section 230 defenses apply to platform liability; Meta and YouTube deny the allegations and point to their safeguards, and CEO Mark Zuckerberg is expected to testify. Related cases in New Mexico and Oakland, plus rising global policy action, reflect broad scrutiny of how social media affects young users.

Discord tightens safety with one-time age check for adult content
technology · 2 months ago

Discord says it will roll out a platform-wide safety update in March that defaults to a teen-appropriate experience; unlocking adult content and age-restricted spaces will require a one-time age verification (either a selfie video for age estimation or a government ID processed with vendor partners). Unverified accounts will see sensitive content blurred and be blocked from age-restricted channels, servers, and app commands, while DMs from unknown users will go to a separate inbox. Some users may need multiple verification steps, and Discord plans future options such as an age-inference model; the company is also forming a Teen Council to help shape its teen-safety policy.