Australia says five major social platforms aren’t fully complying with its age-law provisions, specifically citing Meta’s Facebook and Instagram along with Snapchat, TikTok and YouTube for not fully enforcing child-account bans, signaling ongoing regulatory pressure on platforms to curb underage access.
The White House’s AI regulatory framework is exposing deep GOP splits and House-Senate tensions, with disagreements over kids’ online safety, copyright training material, and data-center energy concerns threatening progress toward a federal AI bill this year.
An independent Roblox developer argues the platform’s safety checks, including age verification, are insufficient and urges parents to monitor children on Roblox 24/7; Roblox counters that safety is a top priority, citing advanced safeguards and ongoing monitoring, amid broader concerns about protecting young users.
The FBI warns that criminals can hijack home Wi‑Fi networks to use the owner’s IP address as a proxy for illegal activity, potentially making victims appear responsible. To defend against this, users should avoid suspicious sites and apps, steer clear of untrustworthy free VPNs, and keep devices updated, while businesses should implement network segmentation and block known residential-proxy IPs; Google is separately taking action to dismantle proxy rings.
The White House released a four-page national AI framework urging Congress to preempt state AI laws in favor of a single federal standard, covering AI replicas, energy demands, regulatory sandboxes, and child safety online. It’s framed as a policy position rather than a bill and highlights tensions with states and copyright debates; Democrats have introduced bills to counter the Trump-era order, while Republicans seek bipartisan action, signaling a challenging path to a final framework.
Meta’s Instagram will begin proactively notifying parents when teens using Instagram’s Teen Accounts search for suicide or self-harm terms, starting next week in the UK, US, Australia and Canada with other regions to follow; alerts may come via email, text, WhatsApp or in-app and will include resources to guide difficult conversations. It’s the first time Meta has issued proactive parent alerts for teen searches rather than simply blocking content, drawing mixed reactions: supporters say it aids protection, while critics warn it could alarm families or gloss over underlying platform risks. Meta says alerts accompany expert resources and notes it already hides self-harm content and will extend similar alerts to AI chatbot interactions in coming months amid wider scrutiny of youth safety online.
Discord has pushed back its global age-verification rollout to the latter half of the year after user backlash, saying only a minority of users will need to verify their age and that non-face options such as credit-card verification will be explored. The company will publish its age-determination methodology, insists it won’t read messages or store verification images, and aims to align with upcoming youth-access rules while addressing widespread trust concerns.
The UK is moving to tighten its online-safety laws to explicitly cover AI chatbots, expanding regulatory reach over how AI is used and presented online.
Prime Minister Keir Starmer seeks broader powers to regulate internet access to shield children from online risks, including an Australian-style ban for under-16s and faster action via amendments to crime and child-protection laws; the package also expands prohibitions on creating sexualised images with AI, and may include VPN age checks, raising privacy and free-speech concerns amid rapid tech change.
Prime Minister Keir Starmer unveiled a new online-safety plan to curb addictive social-media features, close legal loopholes that leave children unprotected, and consider restricting under-16s from certain platforms. The package aims to speed up law changes, strengthen protections around chatbots, and preserve children’s data under Jools’ Law, while targeting autoplay and endless scrolling and weighing VPN restrictions for adult content. A public consultation begins in March, amid mixed reactions from supporters and critics.
Google’s AI Overviews synthesize web results, but fake phone numbers planted by scammers have slipped into some summaries, sending users to impersonators who may harvest payment data. While Google says it’s strengthening anti-scam protections, the piece advises always verifying contact numbers on a company’s official site or with a separate search, since AI Overviews can still present outdated or misleading information.
Discord will globally require age verification next month, automatically placing accounts in a teen-appropriate mode unless users prove they are adults. Unverified users will be restricted from age-restricted servers and Stage channels, face content filters, and have DMs from unfamiliar users redirected. Verification options include on-device facial age estimation or submitting a government ID to a third-party vendor, with IDs deleted after verification and no biometric scanning. An age-inference model may allow some users to skip other checks if it’s highly confident they’re adults. The move follows global safety regulations and prior regional checks, with the company citing privacy safeguards and a vendor change after a data breach.
The European Commission has told TikTok to change features it deems contribute to addictive use and could fine the platform up to 6% of its global annual turnover (potentially billions of dollars) if it fails to comply. The regulator criticized how TikTok assessed risks to user wellbeing, especially for children, and suggested remedies such as screen-time breaks, algorithm changes, and disabling infinite scroll. TikTok disputes the findings and will respond, as regulators press for “responsible design” across platforms.
EU regulators say TikTok’s infinite scroll, autoplay, and recommendation algorithm foster compulsive use and may violate online-safety laws, prompting potential changes to protect users’ physical and mental well-being, especially children.
A worldwide phishing campaign floods recipients with urgent emails claiming cloud-storage renewals have failed, pushing them to a fake Google Cloud Storage link that redirects to scam pages impersonating cloud portals. The pages upsell a deceptive “loyalty” upgrade and collect credit-card details to generate affiliate revenue. Legitimate providers do not send renewal notices like these or require third-party security products; users should delete the messages and verify billing directly on official sites.