TikTok, Twitter, Meta and YouTube are allowing false allegations of fraud in U.S. elections to spread, according to a recently released report calling on the platforms to strengthen and enforce their “imperfect” content policies.
A report by NYU’s Stern Center for Business and Human Rights concluded that social media companies “should recognize that campaign misinformation and disinformation has become an ongoing threat, not a problem that materializes every election cycle and then disappears.”
With the midterm elections only weeks away and the next presidential election due in 2024, questions are mounting about the role Big Tech will play.
What happened in 2020?
Facebook and Google received “record amounts of revenue” ahead of the 2020 U.S. election, Al Jazeera reported as voters went to the polls.
“But even as candidates pour tens of millions of dollars into ads on platforms, there is widespread dissatisfaction with both the rules companies have put in place for elections and how they have enforced them,” the news site said.
Critics say little has changed since then. “The post-election wave of lies, including the ‘big lie’ that Donald Trump won, continues to circulate, supported by hundreds of Republican candidates on the ballot this fall,” says NPR tech correspondent Shannon Bond.
Many experts “are wondering what lessons tech companies have learned from 2020 – and if they’re doing enough this year,” Bond added.
What new measures have been introduced?
Election-related announcements on major platforms in recent weeks indicate the tech giants are taking a “business as usual” approach ahead of the November midterm elections, Katie Harbath, former director of election policy at Facebook, told NPR.
“It’s actually quite a confusing situation because there are no rules, no standards that these companies have to follow,” said Harbath, now with the Bipartisan Policy Center. So “everyone just makes the choice that they think is best for them,” she added.
According to a recent report from the Warsaw Institute, social networks including Twitter and TikTok, as well as online platforms including Google, have introduced “a series of tools” to mitigate what could escalate into an information war among the electorate.
The think tank said that fighting misinformation and disinformation, and putting safeguards in place ahead of time, are “top priorities” for these companies. Twitter has unveiled a revised set of rules designed “to protect civil conversation on the platform,” including the identification of misleading images.
Google amended its political content policy “to clarify the disclosure requirements” for election ads across its ad formats, and also joined voluntary commitments under a new EU code aimed at demonetizing disinformation.
But what about TikTok?
The video-sharing platform is owned by Chinese tech firm ByteDance, heightening scrutiny of its potential cultural and political influence in the West. TikTok banned paid political ads in 2019.
Verdict reported last month that TikTok was also launching an educational program “for content creators to better understand the rules around election content and disinformation.”
But, as the tech news site added, TikTok has “done little to address this issue beyond the guidelines already in place.”
How about advertising?
As Big Tech platforms crack down on political ads, political advertisers are increasingly flocking to the “new wild west” of programmatic ads, Axios said.
An analysis by the University of North Carolina’s Center on Technology Policy found that these companies, which automate the buying and selling of ads across multiple platforms, offered “minimal transparency tools and few specific content restrictions,” the news site reported.
The review found that none of the programmatic advertising companies analyzed explicitly prohibited lying about electoral processes or results, and few clearly stated their policy on political disinformation.
What else can be done?
TikTok has promised to weed out election misinformation and harassment of election officials, but critics insist all social media giants need a more robust approach.
Instead of playing “whack-a-mole” by “retroactively banning harmful content,” social media companies should take a “more proactive approach to combating the problem,” Verdict concludes.