The Facebook app will also look different on Tuesday. To prevent candidates from prematurely and inaccurately declaring victory, the company plans to include a notification at the top of news feeds informing people that no winner has been selected until the election results are verified by news outlets like Reuters and The Associated Press.
Facebook also plans to deploy, when needed, tools previously used in "vulnerable countries" like Myanmar, where election-related violence was possible. The tools, which Facebook has not publicly described, are designed to slow the spread of inflammatory posts.
After the election
After the polls close, Facebook plans to stop distributing all political advertisements on the social network and on its photo-sharing app Instagram in order to reduce misinformation about the election result. Facebook has told advertisers that they can expect the ban to last a week, though the timeline is not fixed and the company has been publicly noncommittal about the duration.
"We've worked for years to make voting on our platform safer and more secure," said Kevin McAlister, a Facebook spokesman. "We have learned lessons from previous elections, put together new teams with experience in different areas, and developed new products and guidelines to prepare for different scenarios before, during and after election day."
Before the election
Twitter has also worked to tackle misinformation since 2016, in some cases going further than Facebook. Last year, for example, the company banned political advertising entirely, saying the reach of political messages "should be earned, not bought."
At the same time, Twitter began flagging tweets from politicians when they spread inaccurate information or glorified violence. In May, President Trump's tweets about the Black Lives Matter protests and mail-in voting were given fact-checking labels, and the company restricted people's ability to share those posts.
In October, Twitter began experimenting with additional techniques to slow the spread of misinformation. The company added context to trending topics and limited users' ability to retweet content quickly. The changes are temporary, although Twitter didn't say when they would end.