NEW DELHI – WhatsApp is offering research grants to social scientists to help it combat the spread of “misinformation” through the cross-platform messaging service. The move comes in the wake of a string of lynchings in India from fake news rumors spread on the free messaging platform.
The service, which is owned by Facebook, is offering up to $50,000 for proposals that “foster insights into the impact of technology on contemporary society in this problem space,” including election-related content, digital literacy and “detection of problematic behavior within encrypted systems.”
WhatsApp has come under fire in recent days in India — its largest market — after a string of brutal slayings left more than a dozen dead in more than five states, including eight dead in the past week alone. In most cases, innocent bystanders were beaten to death by mobs fed by WhatsApp rumors of child kidnappers or organ harvesting rings. On Sunday, a mob in the western state of Maharashtra set upon five people from a nomadic tribe of beggars, beating the victims to death and then turning on police who tried to intervene.
The forwarding of fake news is a rising problem in India, where more than 200 million users send billions of messages each day on WhatsApp. India’s Ministry of Electronics and Information Technology issued a sharply worded warning Tuesday, saying WhatsApp cannot “evade accountability and responsibility” for messages that lead to the spread of violence and called for the company to “take immediate action to end this menace.”
WhatsApp, in a response letter to the ministry, said it is “horrified by these terrible acts of violence” and that “false news, misinformation and spread of hoaxes are issues best tackled collectively: by government, civil society and technology companies working together.”
But the company also noted that messages on its platform can become “highly viral” as users share them, and that they are encrypted, making them more difficult to monitor for hate speech or illegal content than posts on other social media platforms. WhatsApp said earlier this week that it had added a new feature that allows administrators of WhatsApp groups to control who can post. It is also testing a feature that would label forwarded messages as “forwards.” The company plans to start an engagement program with law enforcement officials in India and will increase its outreach in the coming months ahead of the country’s general elections next year, officials have said.
WhatsApp has already partnered with news organizations in Brazil and Mexico to counter and fact-check dubious news reports.
India’s government has been contemplating ways to control the rise of fake news without much success, leaving police departments around the country to combat it on their own. Some low-tech measures have been taken: In the eastern state of Tripura, officials hired rumor-busting announcers to travel from village to village with loudspeakers to warn villagers not to believe fake messages. Three people were killed there last week, including one “rumor buster.” Authorities shut down internet service for 48 hours to help quell the violence.
Indian officials ordered more than 70 internet shutdowns last year, compared with six in 2014, according to the Internet Shutdowns tracker portal. There have been 65 shutdowns so far this year.
Nikhil Pahwa, a technology expert, warned in a blog post on his site, MediaNama, that India’s “massive” fake news problem is “going to get worse.” He argues that the company should allow users to designate messages as private or public, with public messages tagged with a unique ID tied to the creator, which could make inflammatory messages easier to trace.