On Jan. 6, a violent mob stormed the U.S. Capitol in an attempt to overturn the 2020 presidential election. The attack was fueled by a constant stream of disinformation and hate speech that President Donald Trump and other bad actors flooded across social media platforms before, during and after the election. Despite their civic integrity and content moderation policies, platforms were slow to limit the spread of content designed to disrupt our democracy.
This failure is inherently tied to platforms' business models and practices that incentivize the proliferation of harmful speech. Content that generates the most engagement on social media tends to be disinformation, hate speech and conspiracy theories. Platforms have implemented business models designed to maximize user engagement and prioritize their profit over combating harmful content.
While the First Amendment limits the government's power to regulate speech, there are legislative and regulatory tools that can rein in the social media business practices bad actors exploit to amplify speech that interferes with our democracy.
The core component of social media platforms' business model is to collect as much user data as possible, including age, gender, location, income and political beliefs. Platforms then share data with advertisers for targeted advertising. It should come as no surprise that disinformation agents exploit these capabilities to micro-target harmful content, particularly to marginalized communities.
For example, the Trump campaign used Facebook to target millions of Black voters with deceptive information to deter them from voting.
Comprehensive privacy legislation, if passed, could require data minimization standards, which limit the collection of personal data to what is necessary to provide the service to the user. Legislation could also restrict the use of personal data in discriminatory practices that spread harmful content, such as online voter suppression. Without the vast troves of data platforms collect on their users, bad actors will face more obstacles in targeting users with disinformation.
Platforms also use algorithms that determine what content users see. Algorithms track user preferences, and platforms optimize their algorithms to maximize user engagement, which can mean leading users down a rabbit hole of hate speech, disinformation and conspiracy theories. Algorithms can also amplify disinformation, as when conspiracy theorists used the "stop the steal" moniker across social media platforms to organize offline violence.
Unfortunately, platform algorithms are a "black box," with little known about their inner workings. Congress should pass legislation that holds platform algorithms accountable. Platforms should be required to disclose how their algorithms process personal data. Algorithms should also be subject to third-party audits to mitigate the dangers of algorithmic decision-making that spreads and amplifies harmful content.