“Hate is more engaging”: researchers make progress in measuring anti-Semitic propaganda on social networks

A pandemic-themed adaptation of the anti-Semitic, so-called “happy merchant” meme. Photo: ADL

The year of lockdowns caused by the COVID-19 pandemic in 2020 cemented social media’s place as the primary channel for disseminating anti-Semitic messages, often of the crudest and most violent kind.

As the virus enveloped the world, a set of anti-Semitic memes linked to the coronavirus quickly took shape. Some online trolls claimed that, just like the Black Death in the 14th century, COVID-19 was a Jewish creation, while others insisted that the disease – dubbed the “Holocough” – be used to kill Jews en masse.

Another innovation during this period was the phenomenon of “Zoom bombing.” As social distancing measures forced Jewish institutions to move real-world events to online platforms like Zoom, dozens of virtual meetings were hijacked by anti-Semitic agitators, pushing what a German research institute described as an overlapping mix of Nazi glorification and anti-Israel agitation onto a stunned and often upset audience.

Along with these outrages, established social media platforms like Facebook, TikTok, Instagram, Snapchat and Twitter have been inundated with anti-Semitic messages. According to the Anti-Defamation League (ADL), between May 7 and May 14 of this year alone, more than 17,000 Twitter posts used some variation of the phrase “Hitler was right.”

Quantifying these anti-Semitic conversations on social media and distilling their content has become a key task for academic researchers who monitor the spread of anti-Semitism across a range of social and professional networks. At Indiana University Bloomington, researchers from the Institute for the Study of Contemporary Antisemitism (ISCA) – which today launched a major conference on anti-Semitism in the United States that will run through next week – are working with colleagues from other departments to sift through thousands of anti-Semitic tweets, some written in heavily coded language, others expressing hatred of Jews in blunt terms.

“Indiana University has an agreement with Twitter to receive a statistically representative 10 percent sample of all tweets,” ISCA Professor Gunther Jikeli explained in an in-depth interview with The Algemeiner. “This represents a huge database on which we can run queries.”

Jikeli said he meets twice a week with a research team that includes historians and linguists as well as computer programmers. Various generic search terms are used, such as “Israel,” “Jews” and “Zionism,” as well as derogatory terms such as “zionazi” and “k*ke.”

“When we monitor individual tweets, we get the message text, user ID, number of retweets and replies, and other types of metadata that allow us to see the extent of their footprint,” Jikeli said. “We then apply a number of considerations to determine whether the post is anti-Semitic.”

These considerations are based on the working definition of anti-Semitism endorsed by the International Holocaust Remembrance Alliance (IHRA), which shows how anti-Semitic narratives work and how they can manifest in different contexts. ISCA researchers at Indiana University who analyze Twitter posts are directed to an annotation portal, where they can add further details and insight. A series of prompts – is the tweet anti-Semitic according to the IHRA definition? How intensely is the anti-Semitism expressed? Is the Holocaust mentioned? Does the user intend to be sarcastic? – can then be answered in order to categorize the message with precision.

Presenting his paper at the ISCA conference on Monday, Jikeli said his research attempts to clarify six basic questions: What does anti-Semitism look like on social media? How widespread is it? Who is pushing these messages? Who pushes back against them? What is the overall impact? And what can be done to fight it?

On the last question, the issue of censorship, or “deplatforming,” comes up over and over again. Between a climate of blanket censorship of posts deemed anti-Semitic or racist and an open season for online bigotry, Jikeli is trying to find a more nuanced solution.

Banning anti-Semites from social media raises both ethical questions about free speech and practical questions about how to shut down millions of social media accounts that traffic in bigotry. Jikeli cited recent research from the University of Amsterdam showing that anti-Semitic accounts deleted from mainstream platforms tend to reappear in marginal locations – among them 4chan, Telegram and Gab, the latter being the app used by the Pittsburgh synagogue gunman, Robert Bowers, in 2018. Encouragingly, however, these restored accounts invariably have far fewer followers on these lesser platforms, and therefore find it more difficult to engage in what Jikeli called the “monetization of hate.”

Yet as anti-Semites turn to less popular platforms (as well as the dark web) in response to service providers cracking down on hate speech, the profile of anti-Semitism on mainstream social media continues to grow. Studies of TikTok and Twitter over the past year have shown that despite the platforms’ commitment to enforcing community speech guidelines, anti-Semitic posts have grown exponentially on both.

Ironically, some extremist leaders are eager to avoid incitement on their own platforms for two main reasons: to show more moderate potential supporters that they avoid violence, and to avoid being shut down by service providers.

Ayal Feinberg, an assistant professor of political science at Texas A&M University who moderated Jikeli’s panel on Monday, cited the observation of a prominent neo-Nazi in this regard. “These Jewish corpses are going to be used as clubs in the years to come to tear down our free speech rights,” moaned Andrew Anglin – editor of the viciously racist Daily Stormer website – following the Pittsburgh Tree of Life synagogue massacre.

On both sides of this clash, however, there is uncertainty over what action social media companies will take in the future. While extremists fear the imposition of speech guidelines, Jikeli and others point out that social media companies also have a financial interest in directing users to content they will engage with for a longer period of time.

“Hateful content is more engaging, and that’s why social media companies don’t want to reduce it – from that point of view, they want more of the same,” Jikeli observed.

Jikeli pointed out that even when not delivered directly to users, “anti-Semitic content is easy to find on the Internet” – a situation that is unlikely to be resolved overnight.

“These social media companies are very young and our research is also in its infancy,” noted Jikeli. “We are only at the beginning of our work.”
