The Globe and Mail UK


Exploring the Potential Risks of AI Deepfakes and How to Detect Them 

The proliferation of deepfakes poses a growing threat to the integrity of online interactions, particularly on social media platforms, where digital content spreads quickly and easily. As the technology has advanced, cybercriminals have refined their tactics and found many new avenues for illicit activity. The number of deepfakes is soaring, underscoring how serious the problem is and why individuals and businesses alike must stay cautious.

The advent of AI Deepfakes: Quick Insights 

AI deepfakes are images or videos of targeted individuals manipulated using deep learning methods, particularly generative adversarial networks (GANs). Much of this content is pornographic, but these manipulated media are also extensively used to spread false information, sway elections, damage victims' public image, and create fake advertisements.

Celebrities are the most obvious targets of deepfakes, with thousands of actors, singers, and musicians falling victim. According to research, only a single deepfake case was reported in 2016, while the count reached 143,733 in 2023. Deepfake technology isn't new; however, advances in the underlying models have increased its sophistication, making it harder to tell real from fake.

A Closer Look into the Evolution of Deepfake Technology 

Deepfake technology, originally created for applications in media production and visual effects, is now being exploited for malicious purposes. There are certainly many positive use cases, but the negative ones have come to dominate.

  • The concept of deepfakes can be traced back to 1997, when the groundwork was laid for the 'Video Rewrite Program'. This innovative program could generate facial movements from audio input and alter audio and video data using neural networks, paving the way for the deepfakes of the early 2000s.
  • Later, a new algorithm named the 'Active Appearance Model' was introduced and quickly gained popularity for its efficiency and capabilities. Such algorithms relied largely on convolutional neural networks (CNNs), minutely analyzing the facial features of targeted individuals and reconstructing highly convincing images or videos.
  • Deepfake technology attracted wider attention in 2017, when a community on Reddit experienced a surge of deepfake images and videos, raising alarm and serious concern.

How Are Deepfakes Generated?

Advanced AI algorithms and deep learning models have made deepfake creation simpler than ever. It is remarkably easy for tech-savvy malicious actors to acquire the biometric data of targeted individuals and use it to construct convincing fake identities. The resulting images or videos appear so realistic that the boundary between a genuine person and a fabricated identity is blurred.

Autoencoders and GANs are the models most commonly used in deepfake creation. In an autoencoder-based pipeline, an encoder receives facial data, analyzes the face, and extracts the required facial attributes. These attributes are passed to a decoder, which reconstructs the face from the provided information; the process is repeated until the desired result is produced. GANs refine the output further by pitting a generator against a discriminator that tries to tell real images from fakes.
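The encoder/decoder flow described above can be sketched in a few lines of pure Python. This is a toy illustration only: the `encode` and `decode` functions below are made-up stand-ins for learned neural networks, and the "pixels" are just a flat list of numbers.

```python
# Toy sketch of the autoencoder-style encoder/decoder flow used in
# deepfake pipelines. Real systems learn these mappings with neural
# networks; here encoding is simple average-pooling and decoding is
# repetition, purely to show the shape of the data flow.

def encode(face_pixels, code_size=4):
    """Compress a flat list of pixel values into a small latent code
    by average-pooling fixed-size chunks (stand-in for a learned encoder)."""
    chunk = len(face_pixels) // code_size
    return [sum(face_pixels[i * chunk:(i + 1) * chunk]) / chunk
            for i in range(code_size)]

def decode(latent_code, out_size=16):
    """Expand the latent code back to pixel space by repeating each
    latent value (stand-in for a learned decoder)."""
    repeat = out_size // len(latent_code)
    return [v for v in latent_code for _ in range(repeat)]

face = [1, 2, 3, 4] * 4                # 16 "pixels"
code = encode(face)                    # compact identity features
reconstruction = decode(code)          # face rebuilt from the features
print(code)                            # [2.5, 2.5, 2.5, 2.5]
print(len(reconstruction))             # 16
```

In an actual face-swap pipeline, a shared encoder is trained on footage of two people, and person A's latent code is fed through person B's decoder to render B's face with A's expression and pose.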

Face morphing, an image manipulation technique often used in the generation of deepfake images, combines two or more images to fabricate an entirely new identity that resembles the inputs.
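The blending step at the heart of face morphing can be sketched as a simple cross-dissolve between two images. This is a hedged, minimal example: real morphing pipelines also warp facial landmarks into alignment first, and the 4-pixel "images" here are invented for illustration.

```python
# Minimal sketch of the pixel-blending step in face morphing.
# alpha controls the mix: 0.0 returns img_a, 1.0 returns img_b,
# and intermediate values fabricate a face resembling both inputs.

def morph(img_a, img_b, alpha=0.5):
    """Blend two equal-sized images pixel by pixel."""
    return [(1 - alpha) * a + alpha * b for a, b in zip(img_a, img_b)]

face_a = [10, 20, 30, 40]   # tiny made-up "image" of person A
face_b = [50, 60, 70, 80]   # tiny made-up "image" of person B
print(morph(face_a, face_b, 0.5))  # [30.0, 40.0, 50.0, 60.0]
```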

Effective Strategies for Deepfake Detection 

There has been a surge in the number of deepfakes circulating on the internet, with severe and far-reaching consequences for victims. One study reported 14,679 deepfake attempts in 2019, rising to 49,081 in 2020, which indicates how easy it has become for cybercriminals to create fabricated identities. This distressing rise underlines the need for effective deepfake detection.

Modern problems need modern solutions: even highly sophisticated deepfake images and videos can be caught with effective detection technologies.

  • Liveness Detection is an advanced security technique employed in facial recognition technology to confirm the liveness of the claimed identity. By analyzing micro-expressions or subtle movements, biometric liveness detection can easily flag spoofed or fake identities, recognizing deepfakes in a matter of seconds. This technology can confirm the authenticity of already existing images or videos on social media by conducting offsite liveness detection. 
  • Integrating texture analysis in facial recognition systems can effectively distinguish a mask or spoofed identity from a genuine person. The technique minutely analyzes the skin texture including wrinkles, pores, blemishes, or skin color, which is hard for spoofed identities to mimic or replicate. These minute details help in the detection of deepfake attempts. 
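One texture-analysis idea from the list above can be sketched as a local-variance check: genuine skin shows fine-grained variation (pores, wrinkles, blemishes), while printed or re-rendered faces are often unnaturally smooth. The patch size and threshold below are arbitrary illustrative choices, not values from any real detector.

```python
# Hedged sketch of a texture-analysis heuristic: flag faces whose
# pixel patches are suspiciously uniform. Real systems use far richer
# texture descriptors; this only demonstrates the core intuition.

def local_variance(patch):
    """Variance of pixel values within one patch."""
    mean = sum(patch) / len(patch)
    return sum((p - mean) ** 2 for p in patch) / len(patch)

def looks_too_smooth(pixels, patch_size=4, threshold=1.0):
    """Return True if the average patch variance falls below threshold,
    suggesting the texture detail expected of real skin is missing."""
    patches = [pixels[i:i + patch_size]
               for i in range(0, len(pixels) - patch_size + 1, patch_size)]
    avg_var = sum(local_variance(p) for p in patches) / len(patches)
    return avg_var < threshold

textured = [10, 14, 9, 13, 11, 15, 8, 12]    # varied "skin" pixels
flat = [10, 10, 10, 10, 10, 10, 10, 10]      # unnaturally smooth
print(looks_too_smooth(textured), looks_too_smooth(flat))  # False True
```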

Closing Thoughts 

Staying ahead of AI deepfakes isn't that hard: staying alert and spotting inconsistencies can save you from serious consequences later. Individuals can protect themselves by reducing their digital footprints, verifying the authenticity of sources before sharing information, and using strong, unique passwords for their digital accounts. Detecting a deepfake often comes down to confirming the legitimacy of the source sharing the content and looking for anomalies or inconsistencies.