Deepfake technology, recently recognized as a credible cyberthreat, underscores the notion that an attack can come from anywhere. Though the creation of deepfakes can be traced back to 2017, the technology became widely known when a video featuring former US President Barack Obama and Jordan Peele made the rounds on social media. The video was created by Peele's production company using FakeApp, an AI face-swapping tool, and Adobe After Effects.

According to threat intelligence platform IntSights, while deepfake technology isn't as popular a choice for cyberattackers as other methods, it is an emerging threat that security teams must be on the lookout for. IntSights also reported a 43% increase in hacker chatter around deepfakes on dark web forums since 2019.

What are deepfakes?

An amalgamation of the terms "deep learning" and "fake", a deepfake is a synthesized or fake version of an image, video, or audio file, manipulated to make the observer believe it is real. Deepfakes are created using deep learning methods and artificial intelligence software.

The previously mentioned deepfake containing Obama and Peele was made to increase awareness of deepfakes and contains a fake video of Obama warning users to be careful while consuming content online.

What is the technology behind deepfakes?

Deepfakes are created using deep learning methodologies, most notably one called the generative adversarial network (GAN). A GAN is a pair of competing neural networks; neural networks are machine learning models loosely inspired by how the human brain processes information.

The GAN used to create deepfakes consists of two neural networks run by individual AI algorithms—one called the generator and the other the discriminator. In simple terms, the generator creates fake content and sends it to the discriminator, which compares the fake content to the target content and identifies the differences between the two. The generator then tries to eliminate those differences and fool the discriminator with improved fake content. This cycle continues until a near-perfect fake file is generated.
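The adversarial cycle described above can be sketched in miniature. The toy below is not a deepfake model; it is a two-parameter generator and discriminator (names and hyperparameters are illustrative assumptions) trained with numerical gradients so the generator's output distribution drifts toward the real one:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data the generator tries to imitate: samples from N(4, 1).
def sample_real(n):
    return rng.normal(4.0, 1.0, size=n)

# Generator G(z) = a*z + b and discriminator D(x) = sigmoid(w*x + c),
# each just two parameters so the adversarial loop stays visible.
def generate(g, z):
    a, b = g
    return a * z + b

def discriminate(d, x):
    w, c = d
    return sigmoid(w * x + c)

EPS = 1e-8

def d_loss(d, g, real, z):
    # Discriminator wants D(real) -> 1 and D(fake) -> 0.
    fake = generate(g, z)
    return (-np.mean(np.log(discriminate(d, real) + EPS))
            - np.mean(np.log(1.0 - discriminate(d, fake) + EPS)))

def g_loss(d, g, z):
    # Generator wants the discriminator to call its fakes real.
    fake = generate(g, z)
    return -np.mean(np.log(discriminate(d, fake) + EPS))

def num_grad(f, params, h=1e-5):
    # Central-difference gradient; keeps the sketch dependency-free.
    grad = np.zeros_like(params)
    for i in range(params.size):
        hi, lo = params.copy(), params.copy()
        hi[i] += h
        lo[i] -= h
        grad[i] = (f(hi) - f(lo)) / (2.0 * h)
    return grad

d = np.array([0.1, 0.0])   # discriminator params (w, c)
g = np.array([1.0, 0.0])   # generator params (a, b): starts near N(0, 1)
lr = 0.05
for _ in range(2000):
    real, z = sample_real(64), rng.normal(size=64)
    d -= lr * num_grad(lambda p: d_loss(p, g, real, z), d)
    z = rng.normal(size=64)
    g -= lr * num_grad(lambda p: g_loss(d, p, z), g)

fake_mean = generate(g, rng.normal(size=5000)).mean()
print(round(fake_mean, 1))  # drifts close to 4: fakes now resemble real data
```

Real deepfake GANs follow the same loop, only with deep convolutional generators and discriminators operating on images or audio spectrograms instead of two scalars.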

Normalization of deepfakes

While the detrimental effects of deepfake technology have been acknowledged, so have its advantages, particularly in the film industry. Deepfakes have been viewed as a cheaper alternative to expensive CGI techniques and as a way to digitally revive deceased actors on screen. This has led to the birth of several deepfake applications that anybody can use. Creating deepfakes has never been easier.

The accessibility and effectiveness of deepfake technology have resulted in cybercriminals using it for social engineering attacks.

How deepfakes can induce cyber attacks

Even though deepfake technology began as an attempt to create fake videos or images, it is now also used to clone voices. Cybercriminals could use this to carry out social engineering attacks like:

  • Vishing: Vishing, or voice phishing, depends entirely on convincing the victim to carry out an action through a fake phone call that could ultimately lead to a malicious outcome. Phishers could create nearly perfect replicas of the voices of key stakeholders in a company, for example, and use them to convince employees to divulge login credentials or execute a set of actions that could lead to a cybersecurity incident.
  • Business email compromise: Similar to vishing, hackers could execute a spear-phishing attack through email, followed by a convincing phone call impersonating a supplier through deepfaked audio. This could lead to a hacker persuading an employee to send funds to a different bank account.

Apart from these, deepfakes can also have a profound political and social impact, since they can influence the political decisions of the people who view them or cause an immediate reaction among the masses when fake content goes viral.

Combating deepfakes

Dealing with deepfake videos or images can be approached in two ways. One is to prevent authentic content from being misused in fake videos or images, for instance through blockchain mechanisms. The other is to use AI/ML technology to detect whether content has been altered.

  • AI/ML as the first line of defense: Elaine Lee, Principal Data Scientist at Mimecast, suggests using an AI-based system similar to the one used on YouTube, which examines audio, converts it to text, and analyzes the text for keywords. This methodology can be used to compare deepfake video content with original content to detect anomalies and then alert the respective authorities. This could be used as the first line of defense in detecting deepfakes with the next step being human intervention.
  • Blockchain technology: A recent study on using blockchain to combat deepfakes proposes a distributed data-sharing mechanism based on smart contracts, a blockchain technology. When a video is created, a smart contract records the video's metadata and related attributes, then creates a hash. The smart contract also grants specific individuals restricted access to the video. This way, the authenticity of the video remains intact, deterring the creation of deepfakes, since every user who wants to download or access the video has to interact with the smart contract.
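The transcript-comparison idea in the first bullet can be illustrated with a toy sketch. This is not Mimecast's actual pipeline; the tokenizer, stopword list, and threshold below are illustrative assumptions for comparing a suspect clip's transcript against a known-original transcript:

```python
import re

STOPWORDS = {"the", "a", "an", "to", "and", "of", "is", "in"}

def keywords(transcript):
    # Lowercase, split on non-letters, drop filler words.
    tokens = re.findall(r"[a-z']+", transcript.lower())
    return {t for t in tokens if t not in STOPWORDS}

def keyword_overlap(original, suspect):
    # Jaccard similarity between the two keyword sets.
    a, b = keywords(original), keywords(suspect)
    return len(a & b) / len(a | b) if a | b else 1.0

def flag_for_review(original, suspect, threshold=0.6):
    # Low overlap suggests the audio track was altered; escalate to a
    # human reviewer (the "next step" described in the article).
    return keyword_overlap(original, suspect) < threshold

original = "please verify the invoice with finance before any transfer"
tampered = "please wire the funds to the new account immediately"
print(flag_for_review(original, original))  # False: transcripts match
print(flag_for_review(original, tampered))  # True: transcripts diverge
```

A production detector would analyze far richer signals (lip-sync drift, spectral artifacts, speaker embeddings), but the shape is the same: score an anomaly, then hand borderline cases to a human.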
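The smart-contract mechanism in the second bullet can be mimicked in plain Python. This is a toy stand-in, not actual blockchain code; the class and field names are invented for illustration. It shows the two properties the study relies on: a tamper-evident hash derived from the recorded metadata, and access gated through the contract:

```python
import hashlib
import json

class VideoContract:
    """Toy model of a smart contract that registers a video's
    metadata and restricts who may access it."""

    def __init__(self, metadata, allowed_users):
        # Record the metadata and derive a hash from its canonical form.
        self.metadata = dict(metadata)
        canonical = json.dumps(self.metadata, sort_keys=True).encode()
        self.video_hash = hashlib.sha256(canonical).hexdigest()
        self.allowed_users = set(allowed_users)

    def request_access(self, user):
        # Every download or access request goes through the contract.
        if user not in self.allowed_users:
            raise PermissionError(f"{user} is not authorized")
        return self.video_hash

    def verify(self, metadata):
        # Re-hash presented metadata; any mismatch means it was altered.
        canonical = json.dumps(dict(metadata), sort_keys=True).encode()
        return hashlib.sha256(canonical).hexdigest() == self.video_hash

contract = VideoContract(
    {"title": "press_briefing.mp4", "creator": "newsroom", "duration_s": 312},
    allowed_users={"alice"},
)
print(contract.request_access("alice"))  # returns the recorded hash
```

On a real chain, the contract state is replicated and append-only, so the recorded hash cannot be silently rewritten after publication.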

Digital signatures and multi-factor authentication are other suggested methods of preventing access to video, audio, or images that could be used to create convincing deepfakes. While digital signatures are a great way to authenticate binary files, video content may be better served by a watermark than a digital signature, which is possible through blockchain, as mentioned above. Enforcing multi-factor authentication will also go a long way toward preventing unauthorized access to important video resources that could be used to create deepfakes.
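The sign-at-publication, verify-before-trusting idea behind digital signatures can be sketched as follows. Python's standard library has no asymmetric signing, so this sketch uses an HMAC as a simplified symmetric stand-in for a true digital signature (the key and file bytes are illustrative); a real deployment would use asymmetric keys so anyone can verify without being able to sign:

```python
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-real-key"  # illustrative placeholder only

def sign(video_bytes, key=SECRET_KEY):
    # Tag computed over the exact bytes of the published file.
    return hmac.new(key, video_bytes, hashlib.sha256).hexdigest()

def verify(video_bytes, tag, key=SECRET_KEY):
    # Constant-time comparison; any edit to the file changes the tag.
    return hmac.compare_digest(sign(video_bytes, key), tag)

published = b"\x00\x01original-video-bytes"
tag = sign(published)
print(verify(published, tag))                # True: file untouched
print(verify(published + b"edit", tag))      # False: file was altered
```

Consumers who check the tag before trusting a clip get a cheap guarantee that the bytes they received are the bytes that were published.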

© 2021 Zoho Corporation Pvt. Ltd. All rights reserved.