Former Meta policy chief Nick Clegg offers a nuanced critique of Silicon Valley in his upcoming book, describing its culture as conformist and voicing skepticism about industry trends, while still acknowledging social media's positive impact.
Meta's president of global affairs, Nick Clegg, acknowledged that the company is mistakenly removing too much content across its platforms, citing high error rates in moderation that hinder free expression. Clegg admitted that Meta's stringent content removal during the COVID-19 pandemic was excessive, influenced by pressure from the Biden administration. Despite spending billions on moderation, Meta's automated systems have been criticized for errors, including the suppression of political speech. Clegg indicated potential changes to content rules, describing them as a "living, breathing document."
Meta will start labeling AI-generated images on Facebook, Instagram, and Threads to help users distinguish between human and synthetic content, with plans to extend the labeling to audio and video. The move comes in response to concerns about the spread of fake AI-generated content, including sexually explicit images of celebrities and deceptive robocalls. Meta's president of global affairs, Nick Clegg, emphasized the need for transparency and regulation in the use of AI, supporting the idea of government guardrails to ensure proper transparency and safety of AI models.
Meta will start labeling AI-generated media and penalizing users who fail to disclose its use on its platforms, including Facebook, Instagram, and Threads, as election season ramps up. The company is working on tools to detect synthetic media and will require users to disclose when realistic video or audio posts are made with AI, with penalties ranging from warnings to post removal. Meta is collaborating with industry groups and implementing measures to combat the spread of AI-generated content, while also internally testing large language models trained on its Community Standards to assist human moderators.
Nick Clegg, president of global affairs at Meta, has defended the release of an open-source AI model called Llama 2, stating that concerns about AI's dangers are exaggerated. Clegg argued that the models being open-sourced are far from super-intelligent and that Meta has taken precautions to ensure their safety. However, some experts have raised concerns about the potential misuse of open-source AI models. Clegg believes that open-sourcing AI models will make them safer by involving the "wisdom of crowds" and reducing the control of big tech companies.
Nick Clegg, Meta's President of Global Affairs, withdrew from a Canadian government hearing after lawmakers changed the event's title to be critical of Facebook. The hearing was meant to debate the "Online News Act," which would force Facebook and other internet companies to pay publishers for their content. Meta is threatening to cut Canada off from news links altogether if the contested legislation passes. Clegg claims the bill is based on a "fundamentally flawed premise" that Meta unfairly benefits from publishers' work.