
Social media's election plan: Twitter, Facebook and WhatsApp want to bring transparency

The internet giants want to tackle fake news and surface details on political ads, but will it work?

Furquan Ameen, New Delhi | Published 15.02.19, 08:00 AM

Samantha Bradshaw, a researcher at the Oxford Internet Institute, thinks it’s positive to see companies sharing more data about fake accounts but says they’re not the only problem. “There are many legitimate user accounts that perpetuate misinformation and junk news for political or economic purposes. At the same time, fake accounts are becoming more sophisticated over time,” she said.

At another level, WhatsApp faces an entirely different challenge because it’s a platform with end-to-end encryption.


To control the rapid spread of misinformation, especially through bulk messaging, WhatsApp introduced a cap of five forwards in India last July and extended that restriction globally last month. “The five-forwards rule will significantly increase the time and cost required to spread misinformation on an industrial scale,” says Sayan Banerjee, a researcher at the University of Essex. “However, a well-resourced political organisation intent on distributing misinformation can bypass these restrictions,” he adds.

Banerjee believes that, despite the best efforts, it is inevitable that political parties and interest groups will exploit social media to suit their own interests. “New technology, especially when cheap and widely available, will have consequences on politics,” he said.

Social media platforms have also tried to crack down on fake news by partnering with governments and other organisations. In recent years, Google has set up CrossCheck in France, Verificado in Mexico and Elections DataBot in the US to fight misinformation during elections.

In a similar move, Facebook has expanded its third-party fact-checking partnerships in India, adding the India Today Group, Vishvas.News, Factly, Newsmobile and Fact Crescendo. Facebook is also working with Agence France-Presse and BoomLive in India.

But Dipayan Ghosh, a Harvard Kennedy School fellow and a former Facebook public policy advisor, notes: “We must keep in mind the US and India are different animals entirely. Given its linguistic, socio-economic and cultural diversity, protecting elections through political transparency in India might present far greater challenges for Internet companies.”

At a global level, Facebook is establishing two operations centres – in Dublin and Singapore – with teams working together to safeguard the integrity of elections in different countries. Facebook adds “more than 30,000 people are working on safety and security across the company, three times as many as we had in 2017”.

All these steps are being taken as the online giants come under fire around the world over fake news. Before the 2017 German federal elections, the government there warned Facebook and Twitter to rein in fake news or face penalties. And the European Commission has warned the tech giants to intensify their efforts against disinformation campaigns ahead of the European elections this year.

But fake news is a tough dragon to slay. In October 2018, the Oxford Internet Institute, part of Oxford University, published its Global Inventory of Organised Social Media Manipulation, a report that found evidence of manipulative campaigns in at least 48 countries. It lists India among 13 other countries where “political parties and campaign managers have directly hired PR or consulting firms to help spread computational propaganda during elections”.

These and other findings make it “hard to say how effective the steps social media companies have taken have actually been,” said Bradshaw, one of the report’s authors. “If you look specifically at the US, following the 2016 presidential elections, Facebook and Twitter announced multiple initiatives to combat disinformation. But during the 2018 mid-term elections, our research found a greater amount of ‘junk news’ being shared by users.”

In the past few years, the tech giants have been at the centre of investigations related to misinformation, data leaks and foreign interference in elections. Facebook and WhatsApp have been among those affected.

Most recently, during the Karnataka Assembly elections, fact-checking website Alt News debunked several false or misleading stories shared on social media to target rival parties. In one case, television news screenshots circulated on social media carried a fake quote attributed to Rahul Gandhi saying it was necessary to support Pakistan. Another story claimed that BJP president Amit Shah had called the Baniya community “thieves” and accused them of profiteering.

So who’s spreading fake news?

A study, Beyond Fake News, commissioned by the BBC World Service, found that “right-wing networks are much more organised than those on the Left, pushing nationalistic fake stories further”. In India, the report found, a rising nationalist tide was driving ordinary citizens to spread fake news; the research suggested facts mattered less to some than the emotional desire to bolster national identity. The BBC researchers analysed 16,000 Twitter accounts and 3,000 Facebook pages in India and found a strong and coherent promotion of right-wing messages, while left-wing fake news networks were loosely organised, if at all, and less effective.

“Transparency helps, but when taken alone it will fail to defeat the disinformation problem in the long run. We also need better privacy and competition rules,” says Ghosh.

In India, rumours spread over WhatsApp have led to lynchings. In Myanmar, the military staged a coordinated social media campaign to spread propaganda, while networks based in Iran tried to spread propaganda in the US, the UK and the Middle East.

Last September, Facebook CEO Mark Zuckerberg wrote in a blog post that the company had made steady progress in tackling fake news but faced sophisticated, well-funded adversaries. “They won't give up, and they’ll keep evolving,” he wrote. Referring to the 2016 US elections, Zuckerberg wrote: “What we didn't expect were foreign actors launching coordinated information operations with networks of fake accounts spreading division and misinformation.”

Priyanka Gandhi's official Twitter account on February 15, 2019.

But two key fact-checkers, the Associated Press and Snopes, stopped working with Facebook on its campaign to tackle misinformation at the start of this month. Late last year, the Guardian published a story alleging that fact-checkers were frustrated with Facebook’s lack of transparency. The newspaper quoted Kim LaCapria, a former Snopes fact-checker, as saying that “Facebook wanted the appearance of trying to prevent damage without actually doing anything”.

Last week, WhatsApp raised the ante against those spreading fake news on the messaging app. WhatsApp communications head Carl Woog said political parties were using the app in ways it was not intended for, and warned that if they continued doing so, they would be banned from the service.

But the challenges of combating fake news and misinformation are acute. “There’s not always a binary distinction between truth and lies. Fact-checking doesn’t capture these nuances, nor does it address some of the deeper systemic problems that cause misinformation to spread in the first place,” notes Bradshaw. The looming elections in the world’s largest democracy will put the spotlight on how well the social media giants are progressing in the fight against disinformation.

One Twitter battle in India that’s turned particularly nasty involves the global talking shop itself. Twitter is being hauled over the coals by the parliamentary committee on information technology led by BJP MP Anurag Thakur. On February 1, the committee summoned senior global Twitter executives at short notice to quiz them about steps being taken to ensure users’ safety and security, as well as about allegations that the site has been cracking down indiscriminately on “nationalist” users. Displeased when India-based executives turned up instead, the committee refused to hear them and has now told Twitter CEO Jack Dorsey to appear before February 25.

Thakur's determination to get Twitter's top executives to testify came after a group called Youth for Social Media Democracy demonstrated outside Twitter India’s office, accusing the platform of being anti-right-wing and of blocking accounts that shared right-wing content while ignoring inflammatory comments by left-leaning and Congress figures. The group has reportedly levelled the same accusations against Facebook.

Mayawati's Twitter page on February 15, 2019.

As the elections draw closer, the online platforms – Twitter, Facebook, WhatsApp, Instagram and Google – all know they'll be getting more traffic and that they’ll be scrutinised intensely for how they tackle fake and slanted news. Twitter has already declared it is “committed to surfacing all sides of the conversation as we enter the election season in this extraordinarily diverse cultural, political and social climate”. Both Twitter and Google say they’re aiming to make political advertising on their platforms more transparent, while Facebook says it’s expanding its “efforts to protect elections in 2019”.

The companies have been holding talks with the Election Commission on these issues, including whether to impose a “silence period” on political ads just before voting. Still, Election Commission spokesperson Sheyphali Sharan says that while discussions are on, nothing has been confirmed about how these organisations will work with the Commission.

And it isn't only about India for the online giants. In 2019, four other elections are taking place in Asia – in Indonesia, Thailand, the Philippines and Afghanistan. The tech giants are under pressure to show they're doing their best to check fake news in those countries too.

As part of the drive to be more transparent about political advertising in India, Google plans to launch a searchable Political Ads Library in March, aimed at showing who’s buying election ads and what they’re spending. The company also plans to release a Political Advertising Transparency Report for India.

Banerjee, who has received a WhatsApp grant to study ways of tackling election-related misinformation, calls the planned Political Ads Library a “useful tool” but says “it’s unlikely to have any large effect” beyond “a small group of political junkies.”

“We’re thinking hard about elections and how we continue to support democratic processes in India and around the world,” says Dipti Mehra, Google’s public affairs manager. “We’ll continue to invest in initiatives that build further on our commitment towards election transparency.” Google has also introduced new rules under its election ads policy: to purchase an advertisement, advertisers must now provide a certificate from the Election Commission of India, and Google, in turn, will verify the ad before it runs.

Twitter, too, has said all parties must produce certification from the Election Commission before their ads are accepted. Outlining the company’s plan for the Indian elections, Twitter’s global vice-president (public policy) Colin Crowell said Twitter India will use the ads transparency tool deployed during the US mid-term elections, which makes it possible for anyone to search for advertisers and check who’s paying for the ads.

She's the Twittersphere’s newest star. Last week, Priyanka Gandhi Vadra joined Twitter and within 24 hours racked up 160,000 followers. Of course, Priyanka’s nowhere near Prime Minister Narendra Modi’s 45-million-plus followers. But she and former Uttar Pradesh chief minister Mayawati, the Bahujan Samaj Party supremo and another Twitter newcomer, are notching up followers fast. Mayawati sent out her first tweet last month and has since collected 91,000 followers.

The moves by Priyanka and Mayawati to take to Twitter underline how social media platforms are shaping up to be an even more important battleground for politicians this election than in the 2014 polls. No one knows how many votes are won or lost on Twitter or Facebook, but everyone concedes that social media has increasingly become an influencer. And that is what is stoking concern among onlookers who worry about how the platforms will handle misinformation and outright fake news.
