Did social media actually counter election misinformation?

FILE - This photo combo of images shows, clockwise from upper left: a Google sign, the Twitter app, the YouTube TV logo and the Facebook app. Facebook, Twitter and YouTube were quickly put to the test early Wednesday, Nov. 4, 2020, after President Donald Trump told a crowd of cheering supporters at the White House that he would challenge the results of the presidential election. He also tweeted and posted on Facebook misleading statements about the election. The social media platforms have been working for months, if not years since the last presidential election, to prepare for Trump's unsubstantiated claims of election fraud and premature victory declarations. (AP Photo)

Ahead of the election, Facebook, Twitter and YouTube promised to clamp down on election misinformation, including unsubstantiated charges of fraud and premature declarations of victory by candidates. And they mostly did just that — though not without a few hiccups.

But overall, their measures still fell short of addressing the problems exposed by the 2020 U.S. presidential contest, critics of the social platforms contend.

“We’re seeing exactly what we expected, which is not enough, especially in the case of Facebook,” said Shannon McGregor, an assistant professor of journalism and media at the University of North Carolina.

One big test emerged early Wednesday morning as vote-counting continued in battleground states including Wisconsin, Michigan and Pennsylvania. President Donald Trump made a White House appearance before cheering supporters, declaring he would challenge the poll results. He also posted misleading statements about the election on Facebook and Twitter, following months of signaling his unfounded doubts about expanded mail-in voting and his desire for final election results when polls closed on Nov. 3.

So what did tech companies do about it? For the most part, what they said they would, which primarily meant labeling false or misleading election posts in order to point users to reliable information. In Twitter's case, that sometimes meant obscuring the offending posts, forcing readers to click through warnings to see them and limiting the ability to share them.

The video-sharing app TikTok, popular with young people, said it pulled down some videos Wednesday from high-profile accounts that were making election fraud allegations, saying they violated the app’s policies on misleading information. For Facebook and YouTube, it mostly meant attaching authoritative information to election-related posts.

For instance, Google-owned YouTube showed video of Trump’s White House remarks suggesting fraud and premature victories, just as some traditional news channels did. But Google placed an “information panel” beneath the videos noting that election results may not be final and linking to Google’s election results page with additional information.

“They’re just appending this little label to the president’s posts, but they’re appending those to any politician talking about the election,” said McGregor, who blamed both the tech giants and traditional media outlets for shirking their responsibility to curb the spread of misinformation about the election results, amplifying a falsehood simply because the president said it.