Cyberattacks continue to rise, and they represent a serious threat to the security of companies and users alike. According to a report by Cybersecurity Ventures, ransomware damages alone are expected to reach $265 billion annually by 2031. With this in mind, it is essential to strengthen security in the applications and platforms that we use daily.

Advanced techniques to combat deepfakes  

In the field of digital identity verification there are currently two main attack vectors: presentation attacks and injection attacks. To guarantee that a presentation attack detection (PAD) technology meets state-of-the-art standards, it is essential to hold certifications such as iBeta Level 1 and iBeta Level 2, or to be well positioned in the NIST FATE PAD evaluation.

The deepfake attack vector, on the other hand, is somewhat more complex. For an attack of this type to succeed, the fabricated content must be delivered through an injection attack (IAD). At present, the CEN TC 224/WG18 "Biometric data injection detection" working group is developing a specific ISO standard for this type of threat; this attack vector does not yet have a certification scheme that can be taken as a reference. The difficulty is that the injection of images or videos affects both the hardware and the software used to capture the evidence that digitally verifies identity. That is why it is common to see approaches that tackle the problem across several security layers:

  • Techniques that protect the security and integrity of the software and hardware used to capture and transmit evidence (cybersecurity).
  • Techniques that evaluate data integrity, for example by implementing watermarks or by analysing the origin of the biometric sample.
  • Forensic image analysis techniques, which look for evidence suggesting that an image or video has been injected.
  • AI-based techniques for detecting deepfakes or face impersonation.
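As a simplified illustration of the data-integrity layer described above, the sketch below tags each captured biometric sample with an HMAC computed by the capture component, so that the verification backend can reject payloads that were injected or swapped after capture. This is a minimal example using Python's standard library; the key name and function names are hypothetical, not part of any specific product.

```python
import hmac
import hashlib

# Hypothetical shared secret provisioned to a trusted capture component.
CAPTURE_KEY = b"example-device-key"

def sign_sample(image_bytes: bytes) -> str:
    """Tag a freshly captured biometric sample with an HMAC so the
    backend can check it was produced by the trusted capture path."""
    return hmac.new(CAPTURE_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_sample(image_bytes: bytes, tag: str) -> bool:
    """Reject samples whose tag does not match: a mismatch suggests the
    payload was injected or tampered with after capture."""
    expected = hmac.new(CAPTURE_KEY, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

# A genuine capture verifies; an injected substitute does not.
genuine = b"...raw capture bytes..."
tag = sign_sample(genuine)
print(verify_sample(genuine, tag))             # True
print(verify_sample(b"injected frame", tag))   # False
```

In practice this kind of check is only one layer: it assumes the capture component and its key have not themselves been compromised, which is why it is combined with the other defences listed above.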


Facephi, guaranteeing security  

Here at Facephi, we apply a multi-layered security strategy, using artificial intelligence to improve the accuracy and efficiency of the identity verification process. Machine learning algorithms help detect and prevent fraud and identity theft. We not only focus on detecting deepfakes, but also evaluate the process of capturing, processing and analysing the image itself. This includes robust cybersecurity, data-integrity checks and AI-based image analysis.