Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:
- Alex participated in the fifth Senate AI Insight Forum focused on AI and its impact on elections and democracy. It turns out politicians can be reasonable and bipartisan when the cameras are off. - Oma Seddiq/ Bloomberg Law, Gabby Miller/ Tech Policy Press, Cristiano Lima/ The Washington Post, Christopher Hutton/ Washington Examiner, Office of Majority Leader Chuck Schumer
Label Your AI
- Meta will require political advertisers to disclose if content has been digitally altered in ways that could be misleading. - Aisha Counts/ Bloomberg News, Katie Paul/ Reuters, Will Henshall/ Time, Facebook
- Meta will also let political ads on Facebook and Instagram question the legitimacy of the 2020 U.S. presidential election. - Salvador Rodriguez/ The Wall Street Journal
- Microsoft announced a free tool for politicians and campaigns to authenticate media with watermark credentials. - Margi Murphy/ Bloomberg News, Brad Smith/ Microsoft
- YouTube will require creators to disclose realistic AI-generated content with new labels. Users can also request the removal of manipulated video “that simulates an identifiable individual, including their face or voice.” - Olafimihan Oshin/ The Hill, Jennifer Flannery O'Connor, Emily Moxley/ YouTube
TikTok Tick Tock
- There’s been a burst of new calls to ban TikTok over allegations that it is boosting anti-Israel and pro-Hamas content. - Alexander Bolton/ The Hill, Cecilia Kang, Sapna Maheshwari/ The New York Times
- TikTok denies these allegations and faults inaccurate news reporting. - TikTok
- Verified transparency about this would be good, but there’s no real evidence for the claim. There may be a conflation of “pro-Palestinian” and “pro-Hamas” content: many people, especially in TikTok’s young user base, hold pro-Palestinian views. It also turns out that similar content is just as prevalent on other platforms. - Drew Harwell/ The Washington Post
- The renewed calls to ban TikTok because of content on it that lawmakers don’t like give the lie to the argument that a ban is not about speech, which is... a First Amendment problem.
- Nepal, however, doesn’t have a First Amendment, so it banned TikTok, citing disruption to “social harmony,” including “family structures” and “social relations.” - Niha Masih, Sangam Prasai/ The Washington Post
A Trip to India
- Nothing massively new here, but worth highlighting this WaPo report: “For years, a committee of executives from U.S. technology companies and Indian officials convened every two weeks in a government office to negotiate what could — and could not — be said on Twitter, Facebook and YouTube.” - Karishma Mehrotra, Joseph Menn/ The Washington Post
- Meanwhile, Apple has been notifying opposition politicians in India that they are “being targeted by state-sponsored attackers.” - Meryl Sebastian/ BBC News
Transparency Please
- The first batch of DSA transparency reports has been submitted, and Tech Policy Press is tracking them. - Gabby Miller/ Tech Policy Press
- The unsurprising news is that X is devoting far fewer resources to content moderation than its peers. Shocker! - Foo Yun Chee, Supantha Mukherjee/ Reuters
- “X's 2,294 EU content moderators compared with 16,974 at Google's YouTube, 7,319 at Google Play and 6,125 at TikTok.”
Legal Corner
- The Supreme Court struggled with two cases about when public officials can block critics online. Much of the debate came down to whether there is a difference between personal and official social media accounts. - Josh Gerstein/ Politico Pro, John Kruzel, Andrew Chung/ Reuters, Ian Millhiser/ Vox, Ann E. Marimow/ The Washington Post
- Overall, the Court sounded sympathetic to the claim that officials shouldn’t be able to block people whenever they please, but was much less clear on what the test should be.
Sports Corner
- Is there a Big Game in California this weekend? Alex has a lot to say for someone rooting for the team with a losing record in the 126-year series.
Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.
Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.
Like what you heard? Don’t forget to subscribe and share the podcast with friends!