
In cybersecurity, where the battle between defenders and attackers is constant, the emergence of deepfake technology has ushered in a new era of digital deception. An attack on a software development company, in which the accounts of 27 of its cloud customers were compromised, serves as a reminder of the growing threat that artificial intelligence (AI)-generated manipulation poses to organisations all over the world. 

As access to the AI technology used to create deepfakes grows, so does the risk to individuals and businesses. According to an IBM report, a third of global companies have already fallen victim to deepfake fraud. 

  

Real case of deepfake fraud 

The attack on a well-known company that builds no-code internal tools began with an SMS phishing campaign targeting the company’s employees. The attackers, posing as members of the IT team, sent messages instructing employees to click on a seemingly legitimate link to resolve a payroll-related issue. One unsuspecting employee fell into the trap and was taken to a fake landing page, where he handed over his credentials. 

In the next stage of the attack, the hackers called the employee, again posing as the IT team, to obtain the multi-factor authentication (MFA) code. They used advanced voice manipulation techniques to make the call sound convincing. The MFA code allowed them to add their own device to the employee’s account, granting them persistent access. 

This access allowed the attackers to compromise the accounts of 27 customers, who lost nearly $15 million in cryptocurrency as a result of the attack. 

  

Effective strategies to combat deepfakes in companies 

Most people find it hard to believe that a situation like this could happen at their own company. But reality shows that, without proper training in detecting deepfakes and social engineering techniques, and without a fraud detection solution based on identity verification, the chances of spotting this kind of fraud are minimal. That’s why here at Facephi we design solutions that minimise financial risk, reduce fraud losses and protect against reputational damage, ensuring the integrity of commercial operations. 

Below, we explain how our technology against identity attacks can help build the best solution for your company. 

Cybersecurity: 

Measures are implemented to prevent the injection of images or videos and to restrict the use of virtual cameras. 
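As a rough illustration of one way virtual cameras might be flagged (this is an assumption on our part, not a description of Facephi’s implementation), the following Python sketch inspects Video4Linux device names on Linux and reports those matching well-known virtual-camera drivers:

```python
from pathlib import Path

# Names commonly reported by virtual-camera drivers (illustrative list, not exhaustive).
SUSPICIOUS_NAMES = ("obs virtual camera", "v4l2loopback", "droidcam", "manycam")


def find_virtual_cameras() -> list[str]:
    """Return names of capture devices that look like virtual cameras (Linux/V4L2 only)."""
    flagged = []
    for name_file in Path("/sys/class/video4linux").glob("video*/name"):
        name = name_file.read_text().strip()
        if any(s in name.lower() for s in SUSPICIOUS_NAMES):
            flagged.append(name)
    return flagged


if __name__ == "__main__":
    suspects = find_virtual_cameras()
    if suspects:
        print("Possible virtual cameras detected:", ", ".join(suspects))
    else:
        print("No obvious virtual cameras found.")
```

A production solution would combine checks like this with server-side signals, since device names alone are easy to spoof.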

Image/video origin verification:

Image integrity is assessed by analysing information from the device capturing the image using cryptography and watermarking techniques. 
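To make the idea concrete, here is a minimal, hypothetical sketch (not Facephi’s actual protocol) in which the capture device signs the image bytes together with its device identifier using an HMAC, and the receiving server verifies that signature before trusting the image:

```python
import hashlib
import hmac

# Shared secret provisioned to the capture device; in practice this would be an
# asymmetric or hardware-backed key, never a hard-coded value.
DEVICE_KEY = b"example-device-secret"


def sign_capture(image_bytes: bytes, device_id: str) -> str:
    """Produce an integrity tag binding the image to the capturing device."""
    message = device_id.encode() + b"|" + image_bytes
    return hmac.new(DEVICE_KEY, message, hashlib.sha256).hexdigest()


def verify_capture(image_bytes: bytes, device_id: str, tag: str) -> bool:
    """Check that the image has not been altered or injected since capture."""
    expected = sign_capture(image_bytes, device_id)
    return hmac.compare_digest(expected, tag)


if __name__ == "__main__":
    image = b"...raw image bytes..."
    tag = sign_capture(image, device_id="cam-001")
    print("genuine capture:", verify_capture(image, "cam-001", tag))          # True
    print("tampered capture:", verify_capture(image + b"x", "cam-001", tag))  # False
```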

Image/video analysis:

Advanced deep learning techniques are used to detect signs of image manipulation, such as the following (a simplified sketch of this kind of classifier appears after the list): 

  • Presence of foreign or defective elements  
  • Evidence of a compressed image 
  • Anomalies in image texture 
  • Evidence of synthetic generation (Deepfake) 
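As a rough, hypothetical illustration of this kind of analysis (not a description of Facephi’s models), the PyTorch sketch below defines a small convolutional classifier that scores a face crop as genuine or manipulated; a production system would be trained on large labelled datasets and combine several such signals:

```python
import torch
import torch.nn as nn


class ManipulationDetector(nn.Module):
    """Tiny CNN that outputs the probability that a face crop has been manipulated."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.classifier(h))  # probability of manipulation


if __name__ == "__main__":
    model = ManipulationDetector().eval()
    face_crop = torch.rand(1, 3, 224, 224)  # stand-in for a preprocessed face image
    with torch.no_grad():
        score = model(face_crop).item()
    print(f"manipulation score: {score:.2f}  (untrained model, illustrative only)")
```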

Detection of liveness in presentation attacks:

The detection of liveness in presentation attacks is a security mechanism used to verify that the person in front of the camera is a genuine user and not a fraudulent attempt such as a photo, a video or a person wearing a mask. Using this technique, here at Facephi we prevent biometric fraud, provide confidence in transactions and simplify user authentication. 
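As a hedged sketch of how an active liveness challenge might be orchestrated (the detector functions below are hypothetical placeholders, not Facephi APIs), the snippet issues a random challenge such as “blink” or “turn your head left” and only accepts the session if the expected movement is observed within a time window:

```python
import random
import time
from typing import Callable

# Hypothetical detectors: a real system would run face-landmark models on live
# camera frames; here they are placeholders that always return False.
def blink_detected() -> bool: return False
def head_turn_left_detected() -> bool: return False
def smile_detected() -> bool: return False


CHALLENGES: dict[str, Callable[[], bool]] = {
    "blink": blink_detected,
    "turn your head left": head_turn_left_detected,
    "smile": smile_detected,
}


def run_liveness_check(timeout_s: float = 5.0) -> bool:
    """Issue a random challenge and accept only if it is satisfied before the timeout."""
    prompt, detector = random.choice(list(CHALLENGES.items()))
    print(f"Liveness challenge: please {prompt}")
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if detector():      # would analyse the current camera frame
            return True
        time.sleep(0.1)     # poll roughly ten times per second
    return False            # challenge not met: likely a photo, video or mask


if __name__ == "__main__":
    print("live user detected" if run_liveness_check() else "presentation attack suspected")
```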

  

Protection from deepfakes  

The attack that cost this company nearly $15 million in cryptocurrency just a few months ago is not an isolated incident, and the banking sector is particularly vulnerable to these injection attacks: 83% and 81% of business leaders in this sector view deepfake voice fraud and deepfake video fraud, respectively, as real threats to their organisations. 

As organisations, individuals, and technology providers navigate this reality, evolving anti-fraud measures will play a crucial role in staying one step ahead of the deepfake threat.  

  

Take a look at our video on protection from attacks on digital identity to better understand how our solutions protect the user’s identity at every step of the verification process: