Deepfake video detection has taken a major leap forward thanks to scientists at the Netherlands Forensic Institute. They have developed a cutting-edge method that identifies subtle facial colour changes caused by the human heartbeat—an effect missing in most deepfake videos.
Zeno Geradts, a professor of forensic data science at the University of Amsterdam, explains that videos of real people show natural blood flow around areas such as the eyes, forehead, and jaw. Deepfakes, however, typically lack this subtle pulse-related discolouration.
Because deepfakes can be used to misrepresent individuals in fake news or explicit content, the new detection method addresses a growing concern. The technique, known as blood flow detection, relies on advanced image analysis: it tracks the minute skin-tone shifts caused by a person's pulse, something current AI models struggle to replicate.
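The NFI has not published its exact pipeline, but the underlying idea resembles remote photoplethysmography (rPPG): average the colour of a skin patch frame by frame, then look for a periodic component in the heart-rate range. The sketch below illustrates that general idea in Python; the fixed region of interest, the 0.7 to 4 Hz band, and the single-patch setup are illustrative assumptions, not the institute's method.

```python
# A minimal, illustrative pulse-extraction sketch in the spirit of remote
# photoplethysmography (rPPG). It is NOT the NFI's pipeline: the ROI, the
# frequency band, and the use of a single skin patch are all assumptions.
import cv2
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_pulse_bpm(video_path, roi=(100, 50, 80, 40)):
    """Estimate a pulse rate (beats per minute) from subtle colour changes
    in a skin region. `roi` is an assumed (x, y, width, height) patch, e.g.
    on the forehead; a real system would track the face instead."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if metadata is missing
    x, y, w, h = roi
    samples = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        patch = frame[y:y + h, x:x + w]
        # The green channel (index 1 in OpenCV's BGR order) carries the
        # strongest blood-volume signal.
        samples.append(patch[:, :, 1].mean())
    cap.release()

    signal = np.asarray(samples, dtype=float)
    signal -= signal.mean()

    # Band-pass to a plausible heart-rate range, roughly 42-240 BPM.
    lo_hz, hi_hz = 0.7, 4.0
    b, a = butter(3, [lo_hz / (fps / 2), hi_hz / (fps / 2)], btype="band")
    signal = filtfilt(b, a, signal)

    # The dominant frequency inside the band is the pulse estimate.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    in_band = (freqs >= lo_hz) & (freqs <= hi_hz)
    return freqs[in_band][np.argmax(spectrum[in_band])] * 60.0
```

A genuine recording should yield a clear spectral peak at the subject's heart rate; a deepfake, lacking the pulse-driven colour signal, typically would not.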
When the research team first explored this idea in 2012, video quality held them back. Compression technology at the time stripped out the minor colour differences linked to the heartbeat, so the analysis could not deliver reliable results.
Today, the situation has changed. Improved video resolution and modern compression techniques preserve more detail. Now, the team can successfully detect even the faintest colour changes from pulsing blood.
In recent tests, volunteers wore heart monitors while being filmed under different lighting conditions and during various movements. The team matched the recorded heart rates with colour fluctuations at 79 facial points, and the comparison consistently revealed a strong correlation, demonstrating the method's effectiveness.
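As a rough illustration of that validation step, one could compute the Pearson correlation between a reference pulse from a heart monitor and the colour time series at each facial point. The sketch below runs on synthetic data; the point count of 79 matches the article, but the frame rate, signal model, and noise level are placeholders, not the team's protocol.

```python
# A toy version of the validation step: correlate colour fluctuations at many
# facial points against a reference pulse from a heart monitor. The synthetic
# data, frame rate, and noise model are placeholders.
import numpy as np

def pointwise_correlation(colour_signals, reference_pulse):
    """colour_signals: (n_points, n_frames) array, one colour time series per
    facial point. reference_pulse: (n_frames,) monitor signal resampled to the
    video frame rate. Returns the Pearson r for every point."""
    ref = reference_pulse - reference_pulse.mean()
    ref /= np.linalg.norm(ref)
    sig = colour_signals - colour_signals.mean(axis=1, keepdims=True)
    sig /= np.linalg.norm(sig, axis=1, keepdims=True)
    return sig @ ref  # dot product of centred, unit-norm signals = Pearson r

# Synthetic demo: 79 facial points, 10 seconds of video at 30 fps.
rng = np.random.default_rng(0)
t = np.arange(300) / 30.0
pulse = np.sin(2 * np.pi * 1.2 * t)                  # ~72 BPM reference
points = 0.1 * pulse + rng.normal(0.0, 1.0, (79, 300))
r = pointwise_correlation(points, pulse)
print(f"mean Pearson r across 79 points: {r.mean():.2f}")
```

On real footage, consistently high correlations across many points would indicate a live subject, while a deepfake would show correlations near zero.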
Geradts emphasizes that while AI has advanced rapidly, it still fails to generate a believable pulse. This shortcoming allows forensic experts to separate genuine footage from artificial content with greater accuracy.
Currently, investigators use blood flow detection in select cases. However, courts have yet to formally recognize it as standard forensic evidence. Still, experts believe it’s only a matter of time before the method gains full legal acceptance.
Meanwhile, concerns over deepfakes continue to grow. For instance, Helpwanted.nl—a Dutch support center for online abuse victims—reported a 31% increase in deepfake pornography and fake nude images in 2024. Clearly, society needs reliable tools to confront this problem.
Geradts warns that if everything becomes potentially fake, people may stop trusting visual evidence altogether. This new method, therefore, represents a timely and much-needed advancement in the fight against misinformation.