If you have a hard time keeping track of how internet platforms are dealing with misinformation, we don’t blame you. It seems like every week Facebook, Twitter, or YouTube is rolling out new or “clarifying” policies.
That’s ultimately a good thing: From coronavirus to the election, preventing misinformation from spreading on social media is more important than ever. Even if many of the policies leave something to be desired, at least companies are attempting to take action.
But just what those companies are doing can be tough to wrap your head around. Luckily, Mozilla has created a new resource that clearly lays out, in a chart, the misinformation policies of Facebook, Instagram, Google Search, YouTube, Twitter, and TikTok. That’s helpful, considering most of these platforms communicate their policies through disparate blog posts that ordinary users would probably never have a reason to encounter (unless reading those posts is your job — hi, it me).
Mozilla is also tracking these companies’ advertising policies, how much control they give users over their data, and how accessible and amenable they are to academic research. It’s part of Mozilla’s “Unfck the Internet” campaign, in which it researches and advocates for effective ways social media platforms could tamp down misinformation, radicalization, and more ahead of the U.S. election.
The compendium is striking because it lays bare what these platforms lack: a comprehensible way to digest just what the heck these companies are doing about some of the most urgent issues of our time.
It’s interesting to put the platforms’ various policies side by side to see where they agree or diverge. For example, you’d be better off posting misinformation on Facebook and Instagram than on Twitter or TikTok, because only on the latter platforms does that behavior result in an account ban.
You can see the charts and full explanations of policies here.