Islamophobia is amplified by bots on social networks – Quartz



In August 2021, a Facebook advertising campaign criticizing Ilhan Omar and Rashida Tlaib, the first Muslim women in the United States Congress, came under intense scrutiny. Critics accused the ads of linking the two congresswomen to terrorism, and some religious leaders condemned the campaign as “Islamophobic” – that is, spreading fear of Islam and hatred of Muslims.

It was not the first time the two women had faced Islamophobic or racist abuse, especially online. As a communication professor who studies race and identity politics online, I have seen that Omar is often the target of white nationalist attacks on Twitter.

But online attacks against Muslims aren’t limited to politicians. Twenty years after the September 11 attacks, stereotypes associating Muslims with terrorism go far beyond representations in newspapers and television. Recent research is sounding the alarm bells for widespread Islamophobia in digital spaces, in particular the use by far-right groups of disinformation and other manipulative tactics to vilify Muslims and their faith.

Amplify hatred

In July 2021, for example, a team led by media researcher Lawrence Pintak published research on tweets that mentioned Omar during her campaign for Congress. They reported that half of the tweets they studied involved “overtly Islamophobic or xenophobic language or other forms of hate speech.”

The majority of the offensive posts came from a small number of “provocateurs” – accounts that seed Islamophobic conversations on Twitter. Many of those accounts belonged to conservatives, the team discovered. But the researchers reported that these accounts themselves were not generating significant traffic.

Instead, the team found that “amplifiers” were primarily responsible: accounts that collect and spread the provocateurs’ ideas through mass retweets and replies.

Their most interesting finding was that only four of the top 20 Islamophobic amplifiers were genuine accounts. Most were either bots – algorithmically generated to mimic human accounts – or “sockpuppets,” human accounts that use fake identities to deceive others and manipulate online conversations.

Bots and sockpuppets spread Islamophobic tweets originally posted by genuine accounts, creating a “megaphone effect” that escalates Islamophobia across Twitter.

“Cloaked” accounts

Twitter has just over 200 million daily active users. Facebook, meanwhile, has nearly 2 billion – and some users employ similar manipulative strategies on that platform to escalate Islamophobia.

Disinformation researcher Johan Farkas and his colleagues studied “cloaked” Facebook pages in Denmark, which are run by individuals or groups posing as radical Islamists in order to stir up antipathy against Muslims. The researchers’ analysis of 11 such pages, identified as fake, revealed that the organizers posted malicious claims about ethnic Danes and Danish society and threatened an Islamic takeover of the country.

Facebook deleted the pages for violating the platform’s content policy, according to the study, but they reappeared in different guises. Although Farkas’ team could not confirm who created the pages, they found patterns indicating “the same individual or group hiding behind the cloak.”

These cloaked pages succeeded in eliciting thousands of hostile and racist comments directed at the radical Islamists who users believed ran the pages. But they also sparked anger toward the wider Muslim community in Denmark, including refugees.

Such comments often fit into a larger view of Muslims as a threat to “Western values” and “whiteness,” underscoring how Islamophobia goes beyond religious intolerance.

Double threats

This is not to suggest that “real” Islamist extremists are absent from the web. The internet in general, and social media in particular, have long served as a means of Islamist radicalization.

But in recent years, far-right groups have expanded their online presence much faster than Islamists. Between 2012 and 2016, the number of Twitter followers of white nationalists grew by more than 600%, according to a study by extremism expert J.M. Berger. White nationalists “outperform ISIS in nearly every social metric, from follower counts to tweets per day,” he found.

A more recent study by Berger, a 2018 analysis of alt-right content on Twitter, found “a very significant presence of automation, fake profiles and other social media manipulation tactics” among these groups.

Social media companies have focused their policies on identifying and removing content from Islamist terrorist groups. Critics of Big Tech, however, argue that the companies are less willing to police far-right groups such as white supremacists, making it easier to spread Islamophobia online.

High stakes

Exposure to Islamophobic messages has serious consequences. Experiments show that portrayals of Muslims as terrorists can increase support for civil restrictions on Muslim Americans, as well as support for military action against Muslim-majority countries.

The same research indicates that exposure to content challenging stereotypes of Muslims – such as Muslims volunteering to help fellow Americans during the Christmas season – can have the opposite effect, reducing support for such policies, especially among political conservatives.

Violence against Muslims, vandalism of mosques and Quran burnings have been widely reported in the United States over the past 20 years, and there are indications that Islamophobia continues to rise.

But studies after the 2016 election indicate that Muslims now experience Islamophobia “more often online than in person.” Earlier in 2021, a Muslim advocacy group sued Facebook executives, accusing the company of failing to remove anti-Muslim hate speech. The lawsuit claims that a civil rights audit commissioned by Facebook itself found the platform “creates an atmosphere where Muslims feel under siege.”

In 2011, around the 10th anniversary of September 11, a report from the Center for American Progress documented the vast network of Islamophobia, drawing particular attention to the role of far-right “disinformation experts” in spreading anti-Muslim propaganda.

Five years later, the entire country was inundated with talk of “disinformation” experts using similar strategies – this time trying to influence the presidential election. Ultimately, these evolving strategies do not only target Muslims; they can be replicated on a larger scale.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
