Arabi Facts Hub is a nonprofit organization dedicated to researching mis- and disinformation in Arabic content on the internet and to providing innovative solutions for detecting and identifying it.

Using Artificial Intelligence to Detect Deepfakes in Investigative Reporting

This article is published in collaboration with the International Journalists' Network (IJNet).

 

Deepfakes contribute to the information disorder epidemic and represent one of the most serious threats arising from the use of artificial intelligence in cybercrime. They are used to target individuals, groups, and even governments, and to influence public opinion, especially in the context of elections.

In this article, we present tools for detecting disinformation produced with deepfakes and the ethical challenges and caveats to keep in mind when using them. We also look at case studies from investigations that uncovered deepfake-based disinformation, and conclude with advice for journalists interested in moving into this field.

 

Deepfake Technology and Its Dangers

 


According to the study "The State of Deepfakes: Landscape, Threats, and Impact," deepfakes are highly realistic digital imitations or falsifications of images, video, or audio, created through neural networks using machine learning models known as generative adversarial networks (GANs).
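To make the adversarial setup concrete, here is a minimal, illustrative PyTorch sketch of how a generator and a discriminator are trained against each other. It is a toy example under simplifying assumptions (tiny fully connected networks, flattened 64x64 grayscale images), not a real deepfake system:

```python
# Minimal sketch of a generative adversarial network (GAN) training step.
# Assumes PyTorch is installed; all sizes are illustrative, not realistic.
import torch
import torch.nn as nn

# Generator: maps random noise to a flattened 64x64 grayscale "image".
generator = nn.Sequential(
    nn.Linear(100, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64), nn.Tanh(),
)

# Discriminator: scores an image as real (close to 1) or fake (close to 0).
discriminator = nn.Sequential(
    nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    """real_images: a (batch, 64*64) tensor of flattened real samples."""
    batch = real_images.size(0)
    fake_images = generator(torch.randn(batch, 100))

    # 1) Train the discriminator to tell real samples from generated ones.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real_images), torch.ones(batch, 1))
              + loss_fn(discriminator(fake_images.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to produce samples that fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake_images), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()
```

The two networks improve together: the forger gets better precisely because the built-in detector keeps catching it, which is why mature deepfakes are so convincing.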

Detecting deepfakes should be a top priority for fact-checkers and investigative journalists, given their widespread role in spreading disinformation across many fields.

As the use of deepfake tools increases, numerous cases have emerged in which politicians appear to make statements that contradict their actual positions. One such example is a video published on a hacked Ukrainian news website that showed President Volodymyr Zelensky calling on his soldiers to lay down their arms.

Miral El Ashry, Head of the Media and Communication Program at the University of East London in Cairo, believes that one of the political goals behind the use of deepfake technologies is to tarnish the reputation of specific individuals or entities by creating fake videos that do not reflect their real actions. These technologies are also used to manipulate public opinion and incite hostility toward leaders or political figures—especially during election periods—a trend increasingly evident in Europe, targeting both ruling regimes and opposition figures.

El Ashry also explains that deepfake technologies are employed to distort facts and generate misleading content during times of division, armed conflict, and aggression, with the aim of crafting alternative narratives that reflect the perspectives of opposing sides, thereby fueling chaos and deepening societal rifts.

On the economic front, she notes that these technologies are used to produce deceptive content about financial markets, aiming to undermine the policies of specific financial institutions and manipulate currency values—such as the U.S. dollar—or the prices of oil and gas, potentially harming national economies. Deepfakes are also used to extort wealthy individuals and businesspeople by spreading fabricated content.

The New York Times investigated a large-scale financial fraud involving deepfake technology in August 2024. A video, which appeared to be authentic footage of Elon Musk, circulated online encouraging people to buy or invest in cryptocurrency. The video went viral, prompting many to transfer significant sums of money to the scammer. One of the most notable cases involved a victim transferring $690,000, believing they were investing based on a genuine endorsement from Musk.

In the Arab context, El Ashry affirms that deepfakes are exploited to spread hate speech against minorities or specific groups, particularly on religious or sectarian issues, posing a threat to national security and exacerbating political unrest. She also points to their evident use during the war on Gaza to promote misleading narratives about the ongoing events.

[Read more: Guidelines and Warnings for Using Artificial Intelligence in Data Journalism]

  

How Are Deepfakes Manufactured?

Deepfake videos are created using one of two main methods:

  1. Altering an original source video of the targeted person to make them appear to say or do things they never actually did.

  2. Replacing the person's face with someone else’s in a different video—this is known as face swap technology.

Some of the most prominent tools used to create deepfakes include:

 

This tool allows the creation of deepfake videos or face swaps for free, along with other features such as an AI-powered avatar generator and a text-to-speech tool using artificial intelligence. These capabilities enhance the ability to manipulate digital content in various ways.

This tool uses artificial intelligence to swap faces in videos quickly and easily.

 

How to Detect the Likelihood of Deepfake Content in a Video

The starting point is a clear methodology for examining videos suspected of being deepfakes. Manual verification comes first: look for signs and features indicating that the visual or audio content is not authentic but was generated with deepfake techniques. The most telling indicators are technical flaws, or messaging and content that are illogical given the current context and the overall picture. This can be checked through the following steps:

  • Fact-checking sources

Checking the credibility and reliability of the sources that published or promoted the video on websites and social media helps either rule out or support the authenticity of the content. It is important to verify the source’s reputation—whether it has been involved in previous disinformation campaigns—and to determine if the source is someone with a vested interest or is hostile toward the individual or entity appearing in the video.

  • Detecting technical flaws

Audiovisual flaws that may indicate video manipulation include:

  1. Unnatural lip movements

  2. Lack of synchronization between audio and lip movements

  3. Unnatural colors, reflections, or shadows

  4. Unusual or fixed eye movement

  5. Inconsistent facial or body movements

  6. Abnormal blinking or a complete lack of blinking (see the blink-rate sketch after this list)

  7. Misaligned or distorted personal items such as jewelry, buttons, etc.

  8. Blurred or masked facial features to conceal deepfake defects
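Blink behavior, item 6 above, is one sign that lends itself to a simple script. The sketch below counts blinks from the eye aspect ratio (EAR) using OpenCV and MediaPipe FaceMesh, both assumed to be installed (pip install opencv-python mediapipe); the landmark indices and the 0.2 threshold are commonly used illustrative values, and the file name is hypothetical:

```python
# Rough blink counter for a suspect video, based on the eye aspect ratio.
import math
import cv2
import mediapipe as mp

EYE = [33, 160, 158, 133, 153, 144]  # commonly used FaceMesh indices for one eye
EAR_THRESHOLD = 0.2                  # below this, treat the eye as closed

def eye_aspect_ratio(landmarks) -> float:
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it drops sharply during a blink.
    p = [(landmarks[i].x, landmarks[i].y) for i in EYE]
    d = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    return (d(p[1], p[5]) + d(p[2], p[4])) / (2 * d(p[0], p[3]))

cap = cv2.VideoCapture("suspect_video.mp4")  # hypothetical input file
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)

blinks, eye_closed, frames = 0, False, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames += 1
    result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        continue
    ear = eye_aspect_ratio(result.multi_face_landmarks[0].landmark)
    if ear < EAR_THRESHOLD and not eye_closed:
        blinks += 1          # count the transition from open to closed
        eye_closed = True
    elif ear >= EAR_THRESHOLD:
        eye_closed = False
cap.release()

minutes = frames / fps / 60
if minutes > 0:
    print(f"{blinks} blinks in {minutes:.1f} min "
          f"(~{blinks / minutes:.0f}/min; people average roughly 15-20/min)")
```

An unusually low or perfectly regular blink rate is only a lead, not proof: older deepfake generators often failed to reproduce natural blinking, but newer ones handle it much better.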

 

  • Evaluating the content

Analyzing the video’s content and messaging can help identify actions or statements that are illogical or inconsistent with the expected behavior of the person featured. For example, widely circulated images and clips showed Pope Francis wearing a trendy puffer jacket, which did not align with his usual appearance or persona.

A 2019 investigation published by The New York Times uncovered a doctored video of U.S. House Speaker Nancy Pelosi, manipulated to make her appear intoxicated and slurring her words. The video went viral, amassing over 2.5 million views on Facebook within a few days, and was shared by several prominent political figures.

 

Top AI Tools for Detecting Deepfakes


Classification algorithms used in detecting deepfakes are trained on large datasets of both real and fake audio-visual samples to identify “artifacts”—the unique elements (or "fingerprints") in digital content produced by deepfake tools.
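As an illustration of that training setup, here is a minimal PyTorch sketch of a frame-level real/fake classifier. Real detectors are far larger models trained on large benchmark datasets (such as FaceForensics++), so every size, name, and parameter below is an assumption for demonstration only:

```python
# Toy binary classifier: labels a 64x64 face crop as real (0.0) or fake (1.0).
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),  # assumes 64x64 RGB input crops
)

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)

def train_step(face_crops: torch.Tensor, labels: torch.Tensor) -> float:
    """face_crops: (batch, 3, 64, 64); labels: (batch, 1) of 0.0 / 1.0."""
    optimizer.zero_grad()
    loss = loss_fn(detector(face_crops), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

def fake_probability(face_crop: torch.Tensor) -> float:
    """Score one (3, 64, 64) crop; closer to 1.0 means 'likely fake'."""
    with torch.no_grad():
        return torch.sigmoid(detector(face_crop.unsqueeze(0))).item()
```

Trained on enough labeled real and fake crops, a model like this learns to pick up on the subtle "fingerprints" that generation tools leave behind, even ones invisible to the naked eye.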

However, the current technology for detecting fake content still falls short of offering reliable certainty. This creates the need for a hybrid verification approach—combining manual techniques such as observation, contextual analysis, and pattern monitoring, with AI-powered tools that identify the traces and techniques of deepfake manipulation. This integrated method offers more dependable results.
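A hybrid workflow might run such a classifier over sampled frames of a suspect clip and use the aggregate score only as a signal for closer manual review. This sketch assumes OpenCV and the hypothetical fake_probability scorer from the previous example; the face-cropping step that real pipelines perform first is omitted for brevity:

```python
# Score a whole video by averaging per-frame detector outputs.
import cv2
import torch

def score_video(path: str, sample_every: int = 15) -> float:
    """Average fake probability over every Nth frame of the clip."""
    cap = cv2.VideoCapture(path)
    scores, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % sample_every == 0:
            rgb = cv2.cvtColor(cv2.resize(frame, (64, 64)), cv2.COLOR_BGR2RGB)
            tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
            scores.append(fake_probability(tensor))  # from the earlier sketch
        index += 1
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0

# A high average flags the clip for closer manual inspection;
# it is a lead for the journalist, not proof of manipulation.
```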

Some of the most prominent tools that can be used to detect deepfakes using artificial intelligence include:

This tool relies on an advanced model designed to detect forgery in audio, image, and video content in real time. It works across various media types, analyzing the video or audio recording frame by frame so it can detect content created or altered with deepfake techniques and alert the researcher or user.

This application uses artificial intelligence and machine learning technology to detect manipulation or artificial elements within visual media.

This tool relies on artificial intelligence algorithms to analyze visual content and determine whether the image or video contains a face created or modified by AI or if it is an original image.

The application detects videos produced using deepfake technology by analyzing suspected video clips and identifying any manipulated elements within them.

 

The Challenges Associated with Relying on These Tools

 

1- Low-Quality Content

Current models fail to accurately detect deepfakes when the quality of video or audio is low. This includes poor lighting conditions, blurry facial expressions, and low content resolution, all of which reduce the effectiveness of the algorithms used in analysis and detection.
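One way to see this limitation for yourself is to re-encode the same frame at increasingly aggressive JPEG compression and watch the detector's score drift. The sketch below assumes OpenCV and reuses the hypothetical fake_probability scorer from the earlier example; the file name is also hypothetical:

```python
# Compare detector scores on the same frame at decreasing JPEG quality.
import cv2
import torch

def score_at_quality(frame, quality: int) -> float:
    ok, buf = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, quality])
    degraded = cv2.imdecode(buf, cv2.IMREAD_COLOR)  # round-trip compression
    rgb = cv2.cvtColor(cv2.resize(degraded, (64, 64)), cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    return fake_probability(tensor)

frame = cv2.imread("suspect_frame.png")  # hypothetical input image
for q in (90, 50, 10):
    print(f"JPEG quality {q}: fake probability {score_at_quality(frame, q):.2f}")
```

Compression smooths away exactly the fine-grained artifacts the detector was trained to find, which is why heavily recompressed social media copies of a video are the hardest to verify.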

2- Limitations of Training Models


Deepfake detection models depend on the data they were trained on, which makes them less effective against fakes produced with new methods or with methods not represented in the training data. Problems with training-data quality, such as incompleteness or noise, can further reduce a model's accuracy and efficiency.

 

3- Adaptability to New Manipulation Techniques


With the continuous evolution of deepfake technology, new, more advanced methods are constantly emerging, and AI detection tools are often unprepared to detect them. This rapid development creates a gap between deepfake techniques and the means to detect them, increasing the likelihood that current systems will fail to recognize new types of fakes.

4- Financial Challenges


Most deepfake detection tools are paid, which poses a challenge for independent journalists and for local organizations and initiatives with limited resources.