Arabi Facts Hub is a nonprofit organization dedicated to researching mis/disinformation in Arabic content on the Internet and to providing innovative solutions for detecting and identifying it.

Guidelines for Verifying Content That Incites Against Refugees

There are over 43 million refugees worldwide. They are classified as a vulnerable group because of their exposure to incitement campaigns and hate speech, which often contain misinformation, especially with the rise of extremist movements and nationalist trends.

Digital tools have made creating misleading content targeting refugees and migrants easier while complicating efforts to detect such content. Social media also provides digital spaces that can be exploited for covert and coordinated campaigns against refugees.

Minorities account for three-quarters of the victims of online hate speech worldwide, and women among them are targeted at even higher rates, according to a report from the 13th Forum on Minority Issues.

This article outlines methods for detecting misinformation aimed at refugees, highlights key verification tools, and offers tips—drawn from our experience analyzing incitement campaigns—on how fact-checkers can approach examining online campaigns targeting refugees.

Hate Speech Against Refugees and Migrants 

First, fact-checkers must be familiar with terminology; not all insults toward an individual or group qualify as hate speech. For language to count as hate speech, it must target identity. The UN defines hate speech as any kind of communication in speech, writing or behavior that attacks or uses pejorative or discriminatory language with reference to a person or a group based on their inherent characteristics, i.e. their religion, ethnicity, nationality, race, color, descent, gender or other identity factor, and that may threaten social peace.

Additionally, the term "migrant" differs from "refugee": while refugees are forced from their countries, migrants move voluntarily. Confusing “refugee” with “migrant” fuels misinformation by exaggerating their numbers, which intensifies hate speech, especially during economic downturns in host countries. For example, the Matsada’sh platform debunked claims that Egypt hosts nine million refugees, clarifying that this figure includes migrants, as the actual number of refugees in Egypt is around 575,000.

Second, the concept of the “size of the monster” is essential in journalism. The term refers to how widespread a problem is and how many people it affects. Fact-checkers should assess whether incitement is limited to a single post with little impact or is part of a widespread, coordinated effort on social or conventional media targeting refugees. Such coordinated campaigns, often carried out by so-called “electronic flies” (trolls) or bots, rely on networks of programmed accounts to influence public opinion.
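One way to gauge whether hostile posts are isolated or coordinated is to look for bursts of near-identical text coming from many distinct accounts within a short time window. The sketch below illustrates that heuristic in Python; the sample posts, the text normalization, and the thresholds are illustrative assumptions only, not a definitive detection method, and a real analysis would run on a proper data export.

```python
# Minimal sketch of one coordination heuristic: many distinct accounts posting
# near-identical text within a short time window. The sample posts, the
# normalization, and the thresholds below are illustrative assumptions only.
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical export of (account, timestamp, text) rows.
posts = [
    ("account_a", datetime(2024, 5, 1, 10, 0), "The same inciting slogan"),
    ("account_b", datetime(2024, 5, 1, 10, 3), "the same inciting slogan!"),
    ("account_c", datetime(2024, 5, 1, 10, 7), "The same inciting slogan"),
    ("account_d", datetime(2024, 5, 2, 18, 0), "An unrelated post"),
]

WINDOW = timedelta(minutes=30)  # assumed maximum span for a "burst"
MIN_ACCOUNTS = 3                # assumed minimum number of distinct accounts

def normalize(text: str) -> str:
    """Lowercase and strip punctuation so trivially edited copies still match."""
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace()).strip()

def flag_bursts(rows):
    by_text = defaultdict(list)
    for account, ts, text in rows:
        by_text[normalize(text)].append((ts, account))
    flagged = []
    for text, items in by_text.items():
        items.sort()
        accounts = {account for _, account in items}
        span = items[-1][0] - items[0][0]
        if len(accounts) >= MIN_ACCOUNTS and span <= WINDOW:
            flagged.append((text, sorted(accounts), span))
    return flagged

if __name__ == "__main__":
    for text, accounts, span in flag_bursts(posts):
        print(f"Possible coordination: {len(accounts)} accounts within {span}: '{text}'")
```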

In one of its reports, Arabi Facts Hub monitored a coordinated campaign in Egypt against refugees, containing language and expressions that dehumanize refugees or incite hostility toward them.

Third, fact-checkers focus on uncovering misinformation embedded in hate campaigns. Such campaigns often rely on digital tools, such as image- and video-editing software, to spread disinformation. For instance, Misbar revealed misleading photos shared online that claimed to show the Algerian government expelling large numbers of migrants during the 2022 Arab League Summit.

Tools for Detecting Misleading Campaigns Against Refugees

Hashtag Analysis Tools on Social Media Platforms

These tools help track engagement on hashtags and the timeline of their activity. They identify geographic locations with the most active posting under specific hashtags and the most frequently used words in these posts—known as a "word cloud." This feature is critical for analyzing the prevalent discourse around hashtags and identifying whether they promote hate speech or incite hostility against refugees.
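For a rough sense of what such word-frequency (“word cloud”) analysis involves, the following is a minimal Python sketch. The sample posts and the stop-word list are hypothetical placeholders; dedicated hashtag-analysis tools work on real platform data and handle Arabic normalization far more thoroughly.

```python
# Minimal sketch of building word-frequency data (the basis of a "word cloud") from
# posts collected under a hashtag. The sample posts and the stop-word list are
# hypothetical placeholders; real analyses would use a platform export and proper
# Arabic normalization (e.g., stripping diacritics and unifying letter forms).
import re
from collections import Counter

posts = [
    "Example post collected under the monitored hashtag",
    "Another example post repeating the same hostile phrase",
    "Another example post repeating the same hostile phrase",
]

STOP_WORDS = {"the", "under", "same", "another"}  # placeholder stop-word list

def word_frequencies(texts):
    counts = Counter()
    for text in texts:
        words = re.findall(r"\w+", text.lower(), flags=re.UNICODE)
        counts.update(w for w in words if w not in STOP_WORDS and len(w) > 2)
    return counts

if __name__ == "__main__":
    # The most frequent terms are what hashtag-analysis tools render as a word cloud.
    for word, count in word_frequencies(posts).most_common(10):
        print(f"{word}: {count}")
```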

Social Media Activity Analysis Tools

Relying on some of these tools, a report by Arabi Facts Hub revealed that certain social media users in Egypt launched another campaign against refugees. The Meltwater tool was used to generate a word cloud of the most frequently used terms, and analysis of those words showed that they carried inciting and violent rhetoric against refugees.

Reverse Search for the Origin of Viral Clips and Images

Reverse search techniques are used to trace the origin of viral images and videos, helping to detect manipulations or instances where media is placed out of its original context. These techniques include:

  • Reverse search through search engines

Some of these tools are also distinguished by a facial recognition feature that identifies individuals appearing in an image. It is recommended to use multiple tools to ensure reliable results, as each tool, especially among the free ones, has its own features and strengths, and they complement one another.
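Alongside the search engines themselves, a fact-checker can quickly check whether two files are near-duplicates of the same source material. The sketch below uses perceptual hashing; it assumes Python with the Pillow and imagehash packages, and the file names and the distance threshold are hypothetical placeholders rather than part of any particular tool’s workflow.

```python
# Minimal sketch, assuming Python with the Pillow and imagehash packages installed:
# compare a perceptual hash of a viral image (or video frame) against a suspected
# original to see whether they are near-duplicates despite cropping or re-encoding.
# The file names and the distance threshold are hypothetical placeholders.
from PIL import Image
import imagehash

viral = imagehash.phash(Image.open("viral_frame.jpg"))            # hypothetical path
candidate = imagehash.phash(Image.open("candidate_source.jpg"))   # hypothetical path

distance = viral - candidate  # Hamming distance between the two perceptual hashes
print(f"Hamming distance: {distance}")

# A small distance (for example, below about 10) suggests the two files likely share
# an origin; the threshold is an assumption and should be calibrated on known cases.
```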

Matsada’sh used several digital tools to verify a video circulating online that purportedly showed Sudanese individuals looting stores in Cairo, Egypt. Certain accounts added inciting language expressing hatred toward refugees, especially Sudanese refugees. Matsada’sh confirmed the video was actually filmed in Colombia. The fact-checking platform also uncovered evidence of manipulation, including music added to mask the original audio and an edited segment showing Colombian police in uniforms different from those worn by Egyptian officers.

Verification of Visual Media Metadata

Images and videos contain metadata, including the location of capture, the type of device used, and the recording time and date. Such data is crucial for verification, and a range of dedicated tools can extract it.
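For a sense of what this metadata looks like, the following is a minimal sketch, assuming Python with the Pillow package, that prints the top-level EXIF fields and any GPS block from an image file; the file name is a hypothetical placeholder. Many platforms strip metadata on upload, so missing EXIF data is not by itself evidence of manipulation.

```python
# Minimal sketch, assuming Python with the Pillow package: print the top-level EXIF
# fields (e.g., camera make/model, date) and any GPS block from an image file. The
# file name is a hypothetical placeholder. Many platforms strip metadata on upload,
# so the absence of EXIF data is not by itself evidence of manipulation.
from PIL import Image, ExifTags

path = "suspect_photo.jpg"  # hypothetical path
exif = Image.open(path).getexif()

for tag_id, value in exif.items():
    tag_name = ExifTags.TAGS.get(tag_id, tag_id)  # map numeric tag IDs to readable names
    print(f"{tag_name}: {value}")

gps_ifd = exif.get_ifd(0x8825)  # 0x8825 is the GPSInfo IFD tag
if gps_ifd:
    print("GPS tags:", dict(gps_ifd))
```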

Who is Responsible for Protecting Refugees? 

Accountability and oversight are core journalistic functions, but these are often overlooked in fact-checking reports or limited to investigative journalism. When it comes to monitoring misleading campaigns and hate speech against refugees, it’s important to note who is responsible for their protection.

The primary responsibility for safeguarding refugees and asylum seekers lies with host countries, according to international treaties and agreements. Refugees’ rights, as outlined in the Refugee Convention and human rights treaties, require these states to ensure a safe environment for refugees and to mitigate direct impacts from abuse and discrimination.

Social media platforms also have a duty to combat hate speech, as such discourse can spread beyond virtual spaces and lead to real-world violence against refugees. For instance, Sudan’s Beam Reports highlighted that hate speech on social media can lead to the formation of radical virtual communities that exchange extremist ideas. Platforms allow individuals to reach vast audiences with minimal oversight.

Facebook’s hate speech policies aim to maintain a safe environment by removing inciting content, with accounts suspended after warnings. Similarly, X prohibits hate speech, stating: “You may not directly attack other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.”

Despite these policies, a report by Arabi Facts Hub revealed that social media platforms have not consistently removed inciting posts against refugees in Egypt, leading to a rise in hate speech over time.

Thus, coordinated hate campaigns that spread misinformation about refugees and migrants extend beyond the digital realm, posing real societal risks. This calls for special attention from journalists and fact-checkers.

This article is published in collaboration with the International Journalists’ Network (IJNet).