Did social media actually counter election misinformation?

A photo taken on October 21, 2020, shows the logos of the American online social media and social networking services Facebook and Twitter on a computer screen in Lille. (Photo by Denis Charlet/AFP via Getty Images)

(AP) — Ahead of the election, Facebook, Twitter and YouTube promised to clamp down on election misinformation, including unsubstantiated charges of fraud and premature declarations of victory by candidates. And they mostly did just that — though not without a few hiccups.

But overall their measures still didn’t
really address the problems exposed by the 2020 U.S. presidential
contest, critics of the social platforms contend.

“We’re seeing
exactly what we expected, which is not enough, especially in the case of
Facebook,” said Shannon McGregor, an assistant professor of journalism
and media at the University of North Carolina.

One big test
emerged early Wednesday morning as vote-counting continued in
battleground states including Wisconsin, Michigan and Pennsylvania.
President Donald Trump made a White House appearance before cheering
supporters, declaring he would challenge the poll results. He also
posted misleading statements about the election on Facebook and Twitter,
following months of signaling his unfounded doubts about expanded
mail-in voting and his desire for final election results when polls
closed on Nov. 3.

So what did tech companies do about it? For the
most part, what they said they would, which primarily meant labeling
false or misleading election posts in order to point users to reliable
information. In Twitter’s case, that sometimes meant obscuring the
offending posts, forcing readers to click through warnings to see them
and limiting the ability to share them.

The video-sharing app
TikTok, popular with young people, said it pulled down some videos
Wednesday from high-profile accounts that were making election fraud
allegations, saying they violated the app’s policies on misleading
information. For Facebook and YouTube, it mostly meant attaching
authoritative information to election-related posts.

For instance,
Google-owned YouTube showed video of Trump’s White House remarks
suggesting fraud and premature victories, just as some traditional news
channels did. But Google placed an “information panel” beneath the
videos noting that election results may not be final and linking to
Google’s election results page with additional information.

“They’re
just appending this little label to the president’s posts, but they’re
appending those to any politician talking about the election,” said
McGregor. She blamed both the tech giants and traditional media outlets
for shirking their responsibility to curb the spread of misinformation
about the election results, amplifying a falsehood just because the
president said it.

“Allowing any false claim to spread can lead more people to accept it once it’s there,” she said.

Trump wasn’t alone in attracting such labels. Republican U.S. Sen. Thom Tillis got a label on Twitter for declaring a premature reelection victory in North Carolina. The same thing happened to a Democratic official claiming that former Vice President Joe Biden had won Wisconsin.

The
flurry of Trump claims that began early Wednesday morning continued
after the sun rose over Washington. By late morning, Trump was tweeting
an unfounded complaint that his early lead in some states seemed to
“magically disappear” as the night went on and more ballots were
counted.

Twitter quickly slapped that with a warning that said
“some or all of the content shared in this Tweet is disputed and might
be misleading about an election or other civic process.” It was among a
series of such warnings Twitter applied to Trump tweets Wednesday, which
made it harder for viewers to see the posts without first reading the
warning.

Much of the slowdown in the tabulation of results had been widely forecast
for months, because the coronavirus pandemic led many states to make it
easier to vote by mail, and millions chose to do so rather than
venturing out to cast ballots in person. Mail ballots can take longer to
process than ballots cast at polling places.

In a Sept. 3 post,
Facebook CEO Mark Zuckerberg said that if a candidate or campaign tries
to declare victory before the results are in, the social network would
label their post to note that official results are not yet in and
direct people to the official results.

But Facebook limited
that policy to official candidates and campaigns declaring premature
victory in the overall election. Posts that declared premature victory
in specific states were flagged with a general notification about where
to find election information but not warnings that the information was
false or misleading.

Facebook also placed a blanket notice at
the top of Facebook and Instagram feeds on Wednesday noting that the
votes for the U.S. presidential election were still being counted.

Twitter was a bit more proactive. Based on its “civic integrity policy,”
implemented last month, Twitter said it would label and reduce the
visibility of Tweets containing “false or misleading information about
civic processes” in order to provide more context. It labeled Trump’s
tweets declaring premature victory as well as claims from Trump and
others about premature victory in specific states.

The Twitter and
Facebook actions were a step in the right direction, but not that
effective — particularly in Twitter’s case, said Jennifer Grygiel, a
professor at Syracuse University and social media expert.

That’s
because tweets from major figures can get almost instant traction,
Grygiel said. So even though Twitter labeled Trump’s tweets about “being
up big,” about votes being cast after polls closed, and others, by the
time the labels appeared, several minutes after each tweet, the
misinformation had already spread. One Wednesday Trump tweet falsely
complaining that vote counters were “working hard” to make his lead in
the Pennsylvania count “disappear” wasn’t labeled for more than 15
minutes, and was not obscured.

“Twitter can’t really enforce
policies if they don’t do it before it happens, in the case of the
president,” Grygiel said. “When a tweet hits the wire, essentially, it
goes public. It already brings this full force of impact of market
reaction.”

Grygiel suggested that for prominent figures like
Trump, Twitter could pre-moderate posts by delaying publication until a
human moderator can decide whether it needs a label. That means flagged
tweets would publish with a label, making it more difficult to spread
unlabeled misinformation, especially during important events like the
election.

This is less of a problem on Facebook or YouTube, where
people are less likely to interact with posts in real time. But YouTube
could become more of a concern over the next few days, Grygiel suggested,
if Trump’s false claims are adopted by YouTubers analyzing the
election.

“Generally, platforms have policies in place that are an
attempt to do something, but at the end of the day it proved to be
pretty ineffective,” Grygiel said. “The president felt empowered to make
claims.”