
Washington Post: Under India’s pressure, Facebook let propaganda and hate speech thrive

hatehs

Under India’s pressure, Facebook let propaganda and hate speech thrive

By Joseph Menn and Gerry Shih
September 26, 2023 at 10:00 p.m. EDT

(Shubhadeep Mukherjee for The Washington Post; Tauseef Mustafa/AFP/Getty Images, Pictorial Parade/Archive Photos/Getty Images)

SAN FRANCISCO — Nearly three years ago, Facebook’s propaganda hunters uncovered a vast social media influence operation that used hundreds of fake accounts to praise the Indian army’s crackdown in the restive border region of Kashmir and accuse Kashmiri journalists of separatism and sedition.
What they found next was explosive: The network was operated by the Indian army’s Chinar Corps, a storied unit garrisoned in the Muslim-majority Kashmir Valley, the heart of Indian Kashmir and one of the most militarized regions in the world.

But when the U.S.-based supervisor of Facebook’s Coordinated Inauthentic Behavior (CIB) unit told colleagues in India that the unit wanted to delete the network’s pages, executives in the New Delhi office pushed back. They warned against antagonizing the government of a sovereign nation over actions in territory it controls. They said they needed to consult local lawyers. They worried they could be imprisoned for treason.


Those objections staved off action for a full year while the Indian army unit continued to spread disinformation that put Kashmiri journalists in danger. The deadlock was resolved only when top Facebook executives intervened and ordered the fake accounts deleted.
“It was open-and-shut” that the Chinar Corps had violated Facebook’s rules against using fictional personas to surreptitiously promote a narrative, said an employee who worked on the Kashmir project. “That was the moment that almost broke CIB and almost made a bunch of us quit.”

Indian soldiers in Srinagar, the summer capital of Indian Kashmir, take part in an October 2021 reenactment of the army's landing there at the start of the 1947-1948 Indo-Pakistani War. (Faisal Khan/NurPhoto/AP)
Three others who were involved confirmed the previously unreported internal battle. Most of those who spoke to The Washington Post discussed company matters on the condition they not be named. Facebook did not dispute their account.

The Kashmir case is just one example of how Facebook has fallen short of its professed ideals in India under pressure from Prime Minister Narendra Modi’s Bharatiya Janata Party (BJP). India, a country whose population is 80 percent Hindu and 14 percent Muslim, has long wrestled with religious strife. But in the past decade, the Hindu nationalist BJP has been accused of abetting violence and fanning incendiary speech against Muslims to stoke support from its political base. And often, when harmful content is spread by BJP politicians or their allies on Facebook, the platform has been reluctant to take action. The company denied acting to favor the BJP.

The rise of Modi and his Hindu-first state

India’s Prime Minister Narendra Modi swept into power nearly a decade ago. Since then, he has repeatedly rallied voters in this vast democracy and entrenched his party’s power by exploiting differences between the Hindu majority and Muslim minority.
Religious tensions have existed in India since independence in 1947, and Modi’s right-wing followers in his Bharatiya Janata Party and beyond turned to inflammatory rhetoric and violence against Muslims to secure support from Hindus.
The BJP and affiliated Hindu nationalist groups have been in the global vanguard of using technology to advance political aims, tightening their grip with an ideology that imperils India’s traditional secularism and equality among religious faiths. Disinformation and divisive, often bigoted online posts and videos are rampant.
Government censorship of critical views has been on the rise. Social media platforms and other Big Tech firms, protective of their position in one of the world’s largest markets, have often given Modi and his allies what they want.
Despite concerns over repression and accelerating autocracy, the Biden administration has been actively courting Modi, hoping that India can help contain Chinese expansionism in the Indo-Pacific region.
Canada’s explosive announcement on Sept. 18 that Indian government agents may have assassinated a Sikh separatist leader on Canadian soil underscores the uncomfortable choices the United States and other Western countries face in moving closer to Modi’s India.


For Silicon Valley, which has seen user numbers in the United States plateau and international growth become critical to Wall Street shareholders, India is the biggest remaining prize and an ideal market. It is substantially English-speaking and rapidly growing, a tech-savvy democracy that is being wooed by the Biden administration to counter China. The number of Facebook users in India is greater than the entire U.S. population; India is also one of the biggest markets for X, formerly known as Twitter. That’s meant special treatment for content that otherwise would violate both platforms’ terms of service.
Facebook’s cautious approach to moderating pro-government content in India was often exacerbated by a long-standing dynamic: Employees responsible for rooting out hackers and propagandists — often based in the United States — frequently clashed with executives in India who were hired for their political experience or relationships with the government, and who held political views that aligned with the BJP’s.

Indian Prime Minister Narendra Modi and Facebook's Mark Zuckerberg at a September 2015 town hall meeting at the company's headquarters in Menlo Park, Calif. (David Paul Morris/Bloomberg/Getty Images)
Interviews with more than 20 current and former employees and a review of newly obtained internal Facebook documents illustrate how executives repeatedly shied away from punishing the BJP or associated accounts. The interviews and documents show that local Facebook executives failed to take down videos and posts of Hindu nationalist leaders, even when they openly called for killing Indian Muslims.
In 2019, after damning media reports and whistleblower disclosures, Facebook’s parent company, now named Meta, bowed to pressure and hired an outside law firm to examine its handling of human rights in India. That probe found that Facebook did not stop hate speech or calls for action ahead of violence, including a bloody religious riot in Delhi in 2020 that was incited by Hindu nationalist leaders and left more than 50 people, mostly Muslims, dead. Meta never published the document, strictly limited which executives saw it and issued a public summary that emphasized the culpability of “third parties.”


Social media companies today do not lose much when they call out the Russian or Chinese governments for propaganda or dismantle networks of fake accounts tied to those countries. Most U.S. social media platforms are banned in those countries, or they do not generate significant revenue there.
But India is at the forefront of a worrying trend, according to Silicon Valley executives from multiple companies who have dealt with the issues. The Modi administration is setting an example for how authoritarian governments can dictate to American social media platforms what content they must preserve and what they must remove, regardless of the companies’ rules. Countries including Brazil, Nigeria and Turkey are following the India model, executives say. In 2021, Brazil’s then president, Jair Bolsonaro, sought to prohibit social networks from removing posts, including his own, that questioned whether Brazil’s elections would be rigged. In Nigeria, then-President Muhammadu Buhari banned Twitter after it removed one of his tweets threatening a severe crackdown against rebels.
The day before May’s tight election in Turkey, Twitter agreed to ban accounts at the direction of the administration of President Recep Tayyip Erdogan, including that of investigative journalist Cevheri Guven, an Erdogan critic.

A man carrying a child runs amid clashes that erupted in Srinagar after New Delhi revoked the semiautonomous status of Indian Kashmir in August 2019 and placed the region under direct Indian rule. (Faisal Khan/Anadolu Agency/Getty Images)
“Nigeria very much took Modi’s playbook, and it exacerbated existing tensions in Turkey,” said a former Twitter policy lead, speaking on the condition of anonymity to discuss internal matters.
“All of the hard questions around tech come to a head in India. It is a huge market, it is a democracy, but it is a democracy with weak judicial protections, and it’s really geopolitically important,” said Brian Fishman, a former U.S. Army counterterrorism expert who led efforts to fight extremism and hate groups for Facebook until 2021.
U.S. officials depend on nuclear-armed India as a strategic counterweight to neighboring China. And they have been willing to overlook human rights abuses and other problems in India because the officials deem the geopolitics a higher priority, former U.S. officials say. India’s success against the internet companies has inspired many imitators, Fishman added.
“We’re moving into an era around the globe where governments have gotten off their hands and built legal frameworks, and in some cases extralegal frameworks, that allow them to directly pressure the companies,” he said.

Muslims in Srinagar shout anti-Indian slogans during an August 2019 protest of New Delhi's change in the status of Indian Kashmir. (Tauseef Mustafa/AFP/Getty Images)

A covert campaign

When Facebook’s U.S. investigators first saw the posts from accounts that purported to be residents in Kashmir, it wasn’t hard to find evidence of a central organization. Posts from different accounts came in bursts, using similar words. Often, they praised the Indian military or criticized India’s regional rivals — Pakistan and its closest ally, China.
The technical information about some of the accounts overlapped, and the geolocation data associated with some accounts led directly to a building belonging to the Indian army.
The disinformation hunters also found that the fake accounts often tagged the official account of the Chinar Corps, India’s main military force in Kashmir, showing that they were not putting great effort into disguising themselves.
For a couple of months, employees said, the Facebook team mapped out the network in preparation for rooting out the whole operation, a standard procedure for combating coordinated inauthentic behavior.

Members of the Indian Border Security Force participate in a March 2023 parade on the outskirts of Srinagar. (Kabli Yawar/NurPhoto/Getty Images)
Often, the accounts promoted YouTube videos about problems in Pakistan. Some featured a channel run by Amjad Ayub Mirza, a writer from the Pakistani-controlled part of Kashmir who has declared that Muslims are treated well in India and has called on minorities in Pakistan to rise up against the government.
“The reality is that the people being persecuted inside India and also outside the country are actually the Hindus,” Mirza once told an interviewer. “One has to ponder over the question — where did terrorism start from? Terrorism started from Pakistan.”

What this series reveals

Part 1: Narendra Modi’s Bharatiya Janata Party and its Hindu nationalist allies have built a massive propaganda machine, with tens of thousands of activists spreading disinformation and religiously divisive posts via WhatsApp. Parent company Meta says WhatsApp cannot monitor content, no matter how inflammatory.
Part 2: Social media giants have been reluctant to police Indian content that violates their terms of service. After Facebook discovered a vast influence operation using fake accounts secretly operated by the Indian army, some company employees moved to shut it down, but executives in the New Delhi office stalled the action.
Part 3: A new generation of Hindu vigilantes frequently stream their armed attacks against Muslims on platforms like YouTube and Facebook, amassing large followings and winning BJP protection. While rights activists have repeatedly flagged hateful influencers to social media companies, the accounts are rarely removed.
The Indian government has become increasingly aggressive in restricting online criticism and dissent, frequently ordering social media companies to take down posts and blocking the internet altogether in areas with significant dissension. More stories to come.
In just 32 minutes on May 24, 2021, a Post review found, 28 accounts from the covert Chinar Corps network on Twitter shared a post criticizing Pakistan’s treatment of Muslim Uyghurs who had fled oppression in China. Some made the point in English or Hindi that “Pakistan is not a safe place for Muslim minorities.” Twitter released its database of network account activity to researchers last year, and The Post obtained it. Though the database did not attribute the accounts to any entity, employees at Twitter and Facebook told The Post they were from the Chinar Corps. A research team at the Stanford Internet Observatory pointed to circumstantial evidence of a connection between the accounts and the military unit. Facebook said it did not preserve the network’s accounts, making research more difficult.
At least 43 tweets contained some version of: “My religion is Islam, but my culture is Hinduism.” Another dozen said the account holders were Muslim but “Indian first.”
The campaign was unfolding at a sensitive time in Kashmir, claimed in its entirety by both India and Pakistan and divided into Indian- and Pakistani-controlled sections. For decades, India administered its portion as a semiautonomous region while its army fought a separatist insurgency that was supported by many Muslims there and often backed by Pakistan.

Indian security personnel patrol Srinagar in August 2019 after New Delhi ordered restrictions on movement and a telecommunications blackout in Indian Kashmir. (AFP/Getty Images)
In 2019, the Modi government stunned the world by announcing a constitutional change that revoked Kashmir’s semiautonomous status and transferred it to New Delhi’s direct rule. The move triggered protests as well as the army crackdown in Kashmir, which included alleged torture and widespread internet shutdowns. With popular anger reaching a boiling point in Kashmir, the Indian government felt pressed to respond.
In public, Indian officials argued that Kashmir’s Muslims would benefit from closer integration with India. Meanwhile, the Chinar Corps covertly spread its messaging. Jibran Nazir, a Kashmiri journalist working in central India, said he was “shocked” to one day find his photo adopted as the avatar of two anonymous Twitter accounts spreading the #NayaKashmir, or “New Kashmir” hashtag, which touted Kashmir’s prosperity under New Delhi’s control.
“They were recently created accounts that had more than 1,000 followers each,” Nazir recalled. “The accounts wanted to show Kashmiris are doing well, which they’re not.”

Journalists gather in Srinagar 60 days after the change in governing status to protest New Delhi's communications blockade. (Faisal Khan/Anadolu Agency/Getty Images)
The Chinar Corps’ stealth operation kept pushing that line — but also went further. It singled out independent Kashmiri journalists by name, disclosing their personal information and attacking them using the anonymous Twitter accounts @KashmirTraitors and @KashmirTraitor1, according to Stanford’s analysis and The Post’s review.
One target was journalist Qazi Shibli and his publication, the Kashmiriyat.
“@TheKashmiriyat posts #fake news on the various operations conducted by the #IndianArmy causing hate amongst people for the #Army,” @KashmirTraitors wrote in a series of tweets. “Even the positive things like ration distribution that are happening in #Kashmir are shown in a negative prospect in posts of @TheKashmiriyat.”

Qazi Shibli, editor of the Kashmiriyat, works on his laptop. He was arrested in July 2019 as part of a crackdown on the press in Indian Kashmir. (Mehran Firdous for The Washington Post)
“The #traitor behind this account and website is @QaziShibli (born in 1993) who has been detained numerous times under various charges for cybercrimes and posting content against national security.”
Shibli’s home was raided, and he was jailed repeatedly on charges including violation of the Public Safety Act, according to the Committee to Protect Journalists. The pressure online was crippling, Shibli said.
“A lot of people left work at the Kashmiriyat” because of the attacks, he told The Post. “It got to the point that a lot of people were not willing to work with us.”
Shibli said that sources dried up and that even personal friends grew afraid to speak with him.

Fahad Shah, right, editor of Kashmir Walla, at work in his publication's Srinagar newsroom in January 2022. Targeted online and arrested, he remains in prison today. (Dar Yasin/AP)
The @KashmirTraitors tweet with the most “likes” targeted journalist Fahad Shah in early 2021, saying the founder of the Kashmir Walla magazine “rigorously publishes content on anti-#India sentiments.”
Shah’s coverage of Kashmir had been published by the Guardian, Foreign Affairs and Time. He was later arrested and accused of “frequently glorifying terrorism, spreading fake news, and instigating people” under the Unlawful Activities Prevention Act. He remains in prison today.
A security official recently based in Kashmir with knowledge of the matter confirmed the existence of the Chinar network, saying it was a failed attempt to counter narratives from Pakistan.
A couple of months into its investigation, Facebook’s coordinated inauthentic behavior team handed its findings to supervisors and security policy chief Nathaniel Gleicher, who then informed Facebook’s team in India.
Executives there began raising objections. It wasn’t the first time they had.

A relative tends to the grave of an insurgent in late 2022 at a cemetery in Indian Kashmir near the de facto border between India and Pakistan. (Tauseef Mustafa/AFP/Getty Images)

Turning a blind eye

Even before Modi’s rise in 2014, the major U.S. social media companies were overwhelmed by the sheer number of languages and cultures that make up India, according to current and former employees. Inflammatory speech was often coded with slang or references that eluded those unfamiliar with India’s political history, culture or the latest memes.
But the problem wasn’t just about resources. Employees described broad reluctance to take down posts of any kind from Modi’s BJP or its affiliates or to make designations that would cast India in a negative light.
Indian content moderation “was always a hands-off situation because of the political pushback,” said a former employee familiar with the India team. During internal discussions with executives in California and elsewhere, the India office argued in effect that “this is our area, don’t touch it,” the employee said. India-related content-policy employees “would use a case-law-setting tone, instead of what human harm was being done.”

Kashmiri villagers mourn an Indian soldier who was killed in a clash with insurgents this year as Indian Kashmir marked the fourth anniversary of its change in status. (Tauseef Mustafa/AFP/Getty Images)
After U.S. Facebook employees in 2020 warned that Indian Hindu nationalist groups were spreading the hashtag #coronajihad, implying that Indian Muslims were intentionally spreading the coronavirus in a conspiracy to wage holy war, a content policy staffer for the region pushed back, arguing that the meme didn’t amount to hate speech because it wasn’t explicitly targeting a people, two former employees recalled. (Facebook eventually barred searches for that hashtag, but searching for just “coronajihad” returns accusatory posts.)
In late 2019, Facebook data scientist Sophie Zhang tried to remove an inauthentic network that she said included the page of a BJP member of Parliament. She was repeatedly stymied by the company’s special treatment of politicians and partners, known as XCheck or “cross check.” Facebook later said many of the accounts were taken down, though it could not establish that the BJP member of Parliament’s page had been part of the network.
The following year, documents obtained by Facebook employee-turned-whistleblower Frances Haugen show, Kashmiris were deluged with violent images and hate speech after military and police operations there. Facebook said it subsequently removed some “borderline content and civic and political Groups from our recommendation systems.”
In one internal case study on India seen by The Post, Facebook found that pages with ties to the Hindu nationalist umbrella organization Rashtriya Swayamsevak Sangh (RSS) compared Muslims to “pigs” and falsely claimed that the Quran calls for men to rape female family members. But Facebook employees did not internally nominate the RSS — with which the BJP is affiliated — for a hate group designation given “political sensitivities,” the case study found.

The mother of an Indian soldier helps carry his coffin after he was killed in a clash that broke out during a fall 2021 anti-insurgency operation in Indian Kashmir. (Narinder Nanu/AFP/Getty Images)
In a slide deck about political influence on content policy from December 2020, Facebook employees wrote that the company “routinely makes exceptions for powerful actors when enforcing content policy,” citing India as an example.
A key roadblock was Facebook’s top policy person and lobbyist in the region, Ankhi Das, who told employees it would hurt the company’s business prospects to take down posts such as one by a prominent BJP official that called for shooting Muslims. She also shared commentary on her personal page in which a former official described Muslims as a traditionally degenerate community.
After an August 2020 Wall Street Journal story spotlighted her interventions, Das resigned that October.
But the pro-government leanings extended beyond Das and reflected a long-standing culture within Facebook to treat India — and its powerful BJP government — with a light touch, according to current and former employees in India and the United States.
After Das’s departure in 2020, Meta appointed Shivnath Thukral, a former public relations executive who had been head of public policy at Meta’s WhatsApp subsidiary since 2017, to oversee public policy for Meta in India on an interim basis. He assumed the position on a permanent basis in November 2022.


Thukral was closer to the BJP than Das: He had worked on Modi’s national campaign in 2014 and had collaborated with Hiren Joshi, a longtime Modi aide who is today the prime minister’s head of communications, on a pro-Modi website called Modi Bharosa, or “Modi is Trust,” recalled a former Modi staffer who worked with both men. The site churned out glowing articles about Modi’s economic record and accused his political rivals of fomenting riots or misgoverning the country.
Around the time of Das’s departure, the BJP’s head of social media, Amit Malviya, kept the pressure on Facebook by sharing, in interviews with news outlets and on Twitter, the employment and personal backgrounds of Facebook employees and calling out those who had previously worked for liberal politicians or causes.
Soon after, Facebook India’s head, Ajit Mohan, addressed the staff at an all-hands meeting. His message: He didn’t want Facebook employees to become the focus of external attention.
In response to inquiries for this story, Facebook said it has hired more staff, now reviews content in 20 Indian languages and has partners that can fact-check in 15 languages.
“We prohibit coordinated inauthentic behavior, hate speech and content that incites violence, and we enforce these policies globally,” Facebook spokesperson Margarita Franklin said in the company’s only direct comment.
“As a global company, we operate in an increasingly complex regulatory environment and are focused on keeping people safe when they use our services and ensuring the safety of our employees in a manner consistent with applicable laws and human rights principles.” Facebook declined to make any of the employees named in this story available for interviews.

Kashmiri villagers, inspecting a building south of Srinagar that was damaged in an August 2021 gunfight, flee after rumors of incoming Indian soldiers. (Dar Yasin/AP)

Report buried

As the controversy over its handling of hate in India grew in 2019, Facebook hired the law firm Foley Hoag to study and write about its performance there in what is called a human rights impact assessment. Some rights groups worried that the firm would go easy on Facebook because one of its human rights lawyers at the time, Brittan Heller, was married to Gleicher, Facebook’s head of security policy.
But the firm interviewed outside experts and Facebook employees and found that dozens of pages that were calling Muslims rapists and terrorists and describing them as an enemy to be eliminated had not been removed, even after being reported.
Foley Hoag cited multiple underlying issues, including the lack of local experts in hate speech, the application of U.S. speech standards when Indian laws called for greater restriction of attacks on religion, and a legalistic approach that, for example, withheld action if a subject of threats was not explicitly targeted for their ethnicity or religion.


Foley Hoag found that the company allowed incendiary hate speech to spread in the lead-up to deadly riots in Delhi in 2020 and violence elsewhere, according to people briefed on its lengthy document. It recommended that the company publish the report, name a vice president for human rights and hire more people versed in Indian cultures.
Instead of releasing the findings, Facebook wrote a mostly positive four-page summary and buried it toward the end of an 83-page global human rights report in July 2022. That readout said the law firm “noted the potential for Meta’s platforms to be connected to salient human rights risks caused by third parties.” It said the actual report had made undisclosed recommendations, which the company was “studying.”
Foley Hoag partner Gare Smith said by email that the firm’s human rights impact assessment “was conducted in accordance with the highest ethical standards and pursuant to guidance provided by the U.N. Guiding Principles on Business and Human Rights. Inasmuch as it was conducted under privilege, we cannot comment on specific elements of the Assessment or on our client’s summary of it.”
Facebook said that it discloses more on its human rights performance than any other social media company and that it withheld the full report because of concerns about employee safety.
Facebook executives similarly downplayed problems reported by outside groups. The London Story, a Netherlands-based human rights group, reported hundreds of posts that it said violated the company’s rules. Facebook asked for more information, and then asked for it in a different format, then said it would work to improve things if the group stayed quiet. When nothing happened, the group succeeded in getting a meeting with the company’s Oversight Board, created to handle a small number of high-profile content disputes.
It took more than a year to remove a 2019 video with 32 million views, according to the London Story’s executive director, Ritumbra Manuvie.
In the video, Yati Narsinghanand, a right-wing cleric, says in Hindi to a crowd: “I want to eliminate Muslims and Islam from the face of earth.” Facebook took it down just before the London Story released a report on the issue in 2022.
Versions were then posted again. One remained visible as of Monday, but on Tuesday, after Facebook was asked for comment, it was no longer available.

A Kashmiri man tending to his cattle walks on a hillock in Tosamaidan, southwest of Srinagar, in June 2021. The meadow was once an artillery firing range for the Indian army. (Dar Yasin/AP)

Mounting fears

When Facebook’s investigators brought their Kashmir findings to the India office, they expected a chilly response. The India team frequently argued that Facebook policies didn’t apply to a particular case. Sometimes, they argued that they didn’t apply to sovereign governments.
But this time, their rejection was strident.
“They said they could be arrested and charged with treason,” said a person involved in the dispute.
Facebook’s India team, including policy chief Thukral and communications head Bipasha Chakrabarti, was especially nervous after police raids on Twitter, two people recalled. In 2021, the Indian government was feuding with Twitter over its refusal to take down tweets from protesting farmers. Officials dispatched police to the home of Twitter’s India head and anti-terrorism units to two Twitter offices. Some officials publicly threatened Twitter executives with jail time.
Two former Facebook executives said they had believed that their colleagues’ fears were genuine, though no legal action was ever taken against them.
“I’m not angry at Facebook,” one said. “I’m angry at the Indian government for putting the people who worked on this in a position where they couldn’t address the harms that they found.”


Blocked by their own colleagues, Facebook’s U.S. threat team passed the Chinar Corps information to their counterparts at Twitter. Facebook employees said they had been hoping that Twitter would follow the leads and root out the parallel operation on that platform. The team’s members also hoped that Twitter would do the first takedown, giving Facebook political cover so it wouldn’t have to face government retribution alone and its internal dispute could be resolved.
Twitter, which had been more forceful in pushing back against the Indian government, took no action. It told Facebook staff that it was having technical issues.
In truth, the San Francisco company was changing direction.
The Indian police raids and public comments from government officials criticizing the company had scared off firms that Twitter had planned to use for promotion, former Twitter employees said. “We saw a very obvious slowdown in user growth,” one former policy leader said. “The government is very influential there.”
The former executive added: “We had just promised [Wall] Street 3x user growth, and the only way that was going to be possible was with India.”
Another former policy staffer said Twitter’s bigger problem was physical threats to employees, while former safety chief Yoel Roth wrote in the New York Times this month that Twitter’s lawyers had warned that workers in India might be charged with sedition, which carries a death penalty.
In any case, Twitter was tired of leading the way with takedowns, and it changed how it treated the government overall. The company did not respond to a request for comment.


Fishman, the former senior Facebook executive, said the U.S. tech companies will not respond more forcefully to Indian government pressure unless they receive help from the U.S. State Department, where onetime cybersecurity executive Nate Fick has been named the first cyber ambassador and has built out a team focused on internet freedom and security issues.
“If we want free speech, [if] we want free elections, while these companies are not the most popular institutions in the world, we need U.S. policy to have their backs at times,” Fishman said. Fick did not respond to requests for comment.
But while Indian activist groups and international democracy monitors have warned about the erosion of democratic norms under the Modi government, the Biden administration has largely refrained from publicly criticizing a country seen as a crucial strategic counterweight to China in the Indo-Pacific.
After meetings with Modi in Washington and New Delhi this year, Biden offered no criticism. Uzra Zeya, U.S. undersecretary of state for civilian security, democracy and human rights, visited India in July. She did not publicly comment about India’s human rights or its democracy after meeting Foreign Secretary Vinay Kwatra, but said in a Twitter post: “Grateful for the vital #USIndia partnership & shared efforts to advance a free & open Indo-Pacific, regional stability, and civilian security.”
A senior State Department official, speaking on the condition of anonymity because of the sensitivity of the issue, said that despite the lack of public comment, American diplomats are engaged with India on censorship and propaganda.
“These are precisely the kinds of issues that we raise on a bilateral basis at both the working and senior levels of government,” he said. “The U.S. is committed to ensuring that tech is a force for empowerment, innovation and well-being, and working to ensure that the world’s largest democracy is aligned with us on this vision is a top policy priority.”


Continued silence

As Facebook’s India team delayed acting on the Chinar inauthentic network, the propaganda investigators in Washington and California worked on less controversial subjects.
“You have only so much time in the day, and if you know you are going to run into political challenges, you might spend your time investigating in Azerbaijan or somewhere else that won’t be an issue. Call it a chilling effect. That dynamic is real,” said Fishman, the former senior Facebook executive.
The impasse continued until the U.S. team demanded action from Nick Clegg, then Meta’s powerful vice president of global affairs, who had been put in charge of India public policy. Clegg was later named president of global affairs.
Finally, after discussions with Facebook’s top lawyers, Clegg ruled in favor of the threat team, employees said.
But the India executives had a request: They asked that Facebook at least break with past practice and not disclose the takedown.


Since coming under fire for failing to spot Russian propagandists using its platform during the 2016 U.S. presidential campaign, Facebook has routinely announced significant removals of disinformation. It often describes what the campaign was trying to do and how it did it, and there is frequently direct attribution to a national government or enough detail for readers to guess.
The idea is to increase transparency that could help disinformation hunters and deter its spreaders from trying again. Smaller takedowns are described more briefly in quarterly summaries.
This time, the India side argued that it would be unwise to embarrass the Indian military and that doing so would increase the likelihood of legal action.
Clegg and Facebook chief legal officer Jennifer Newstead agreed, staffers said. At their direction, Facebook changed its policy to state that it would disclose takedowns unless doing so would endanger employees.
Following standard practice, Facebook removed the fake accounts, and the official Chinar Corps pages they had been working with on Facebook and Instagram, on Jan. 28, 2022. (After the Indian army publicly complained about the takedown of the official pages, they were reinstated.)
That March, Twitter followed Facebook and quietly removed the Chinar Corps’ parallel network on its platform and shared it with researchers. In private meetings with Facebook and Twitter executives, the army defended its fake accounts and said they were necessary to combat Pakistani disinformation.
Facebook didn’t disclose the takedown, and Twitter hasn’t issued what had been twice-yearly summaries of its enforcement actions since one for the period that ended in December 2021.
A month later, Facebook issued a quarterly “adversarial threat report” that listed takedowns of inauthentic networks targeting users in Iran, Azerbaijan, Ukraine, Brazil, Costa Rica, El Salvador and the Philippines.
It said nothing about India.
Shih reported from New Delhi. Jeremy B. Merrill in Atlanta and Karishma Mehrotra in New Delhi contributed to this report.
About this story
Design by Anna Lefkowitz. Visual editing by Chloe Meister, Joe Moore and Jennifer Samuel. Copy editing by Feroze Dhanoa and Martha Murdock. Story editing by Mark Seibel. Project editing by Jay Wang.

 
"When Facebook’s U.S. investigators first saw the posts from accounts that purported to be residents in Kashmir, it wasn’t hard to find evidence of a central organization. Posts from different accounts came in bursts, using similar words. Often, they praised the Indian military or criticized India’s regional rivals — Pakistan and its closest ally, China."

Indians have used the same methods to pose as Afghans, Balochs and even Iranians.
How long they can maintain their image using these manipulative methods is anyone's guess.

"A covert campaign

When Facebook’s U.S. investigators first saw the posts from accounts that purported to be residents in Kashmir, it wasn’t hard to find evidence of a central organization. Posts from different accounts came in bursts, using similar words. Often, they praised the Indian military or criticized India’s regional rivals — Pakistan and its closest ally, China.
The technical information about some of the accounts overlapped, and the geolocation data associated with some accounts led directly to a building belonging to the Indian army.
The disinformation hunters also found that the fake accounts often tagged the official account of the Chinar Corps, India’s main military force in Kashmir, showing that they were not putting great effort into disguising themselves.
For a couple of months, employees said, the Facebook team mapped out the network in preparation for rooting out the whole operation, a standard procedure for combating coordinated inauthentic behavior."


This is an alarming development - something must be done to counter this campaign. Pakistan should find a way to expose these people.

@PanzerKiel
 
The world has knelt down in front of the mad Indian elephant.
 
