If there’s one certainty about the outcome of the US presidential election, it’s that social media feeds will roil with a head-spinning surge of conflicting information.
The country faces the prospect of days or weeks of vote counting, during which the final result will be uncertain. The early returns on election night could look entirely different from the final vote count, thanks to a deluge of mail-in ballots that take much longer to tabulate than in-person votes. In the meantime, unscrupulous groups may stoke confusion by claiming victory before all the votes are counted.
As a result, a great deal of the responsibility for maintaining Americans’ faith in their elections has fallen to the very companies blamed for enabling bad actors to undermine the integrity of the vote in 2016. This time around, social platforms, messaging apps, search engines, and news aggregators are hoping to do better by laying careful plans to slow the spread of misinformation. Here’s what each of them plans to do.
Social platforms: Facebook | Instagram | Twitter | TikTok | Pinterest
Messaging apps: WhatsApp | Facebook Messenger | Snapchat
All-seeing Alphabet: Google | YouTube
News aggregators: Apple News | Google News
Facebook and Instagram will display messages like “votes are still being counted” or “too early to call” at the top of users’ feeds while the ballot counters do their work. The platforms will attach warning labels to posts that prematurely claim victory for either candidate, and direct users to a “Voter Information Center” that will have real-time updates from journalists at Reuters and the National Election Pool, a consortium of broadcast news networks. When the results are in, the apps will proactively notify users.
Both platforms will also attach warning labels to posts that seek to delegitimize the election by, for example, claiming that mail-in ballots are inherently fraudulent. (US president Donald Trump has repeatedly, and falsely, made this claim.) But the company won’t take those posts down. “We’ll allow people to share this content to condemn it, just like we do with other problematic content, because this is an important part of how we discuss what’s acceptable in our society,” CEO Mark Zuckerberg wrote in a Facebook post.
Facebook and Instagram will also stop running political ads after polls close on Nov. 3.
Twitter will delete or attach warning labels to tweets that prematurely claim victory, incite violence, or seek to delegitimize election results. The company clarified that specific falsehoods with the potential to cause harm will get deleted, while simple mischaracterizations will get warning labels and will only be shown to a user’s followers. If the misleading tweet comes from a politician, a user with more than 100,000 followers, or one who gets lots of engagement, users won’t be able to reply and may only retweet if they add their own comment.
Twitter permanently banned political ads in October 2019.
TikTok will run a special election information page with live results from the Associated Press (AP) and explainers for the app’s young users about how ballot counting works. A link to the page will appear above election-related content or searches and at the top of the Discover page. The app is also expanding its roster of fact-checkers, deleting posts that incite violence, and limiting the distribution of posts that contain misleading information.
TikTok permanently banned political ads in October 2019.
Pinterest will label or delete “content apparently intended to delegitimize election results on the basis of false or misleading claims.” Although it banned political ads in 2018, Pinterest will go a step further and won’t show any ads at all to users searching for election-related terms.
WhatsApp and Facebook Messenger will limit message forwarding in an effort to slow the spread of viral misinformation. Users will only be able to forward a chat to five people or groups at a time. Once a message has been forwarded five times, that speed limit gets reduced and users can only forward it to one person or group at a time. WhatsApp will also partner with the International Fact-Checking Network to create a tip line where users can report accounts that spread misleading content and ask for better information.
Snapchat will moderate ads for misinformation and bar media partners from posting premature claims of election victory without context on its Discover page—but it has no blanket prohibition on user-generated misinformation.
Google has partnered with the AP to create an election information box, including live returns, which it will display at the top of search results as votes are counted. The company will use its existing search algorithms to favor information from trusted news sources and push down pages that prematurely claim victory for either party. It will also ban political ads after polls close.
YouTube will also run an election information box in partnership with the AP, but will show it only alongside election-related content and searches. The video platform will rely on previous tweaks to its search algorithms to recommend election videos from reputable news sources rather than unaffiliated vloggers.
Apple News has partnered with the AP as its official source for election results and will feature polling analysis from the stats wonks at FiveThirtyEight. It will rely on human curators to choose reliable news stories from trusted sources and suppress stories that contain misinformation.
Google News will change very little. The tool has introduced a section dedicated to 2020 election stories, but like Google search results, it will rely on its existing algorithms to surface good information from reputable sources.