Digital technology continues to evolve at a rapid pace, and it is becoming increasingly adept at simulating reality. Deepfakes use artificial intelligence (AI) and machine learning (ML) techniques to replace the likeness of one person with another in video, images, audio, or other digital media. Learn more about deepfakes, how they work, and how they're being used in the world in this definition.
Deepfakes are AI-altered images or videos that change the appearance of a person, often making them look like someone else. Many of the deepfakes circulating online are harmless internet spoofs.
However, there are rising concerns and examples of deepfakes being used for malicious purposes, such as creating and spreading fake news and counterfeit videos. Digital media manipulation is not a new phenomenon, but with advances in technology, the gap between real and synthetic media is closing and making deepfakes more common.
Deepfakes use deep learning algorithms to swap faces in digital content and create realistic-looking synthetic media. Most deepfakes are built on neural networks, particularly autoencoders. An autoencoder is a neural network trained to compress an input, such as a face image, into a compact representation and then reconstruct it, allowing the system to learn what a person looks like from different angles and expressions. Additional machine learning techniques, such as a second network that flags unconvincing output, help detect and correct flaws in the deepfake to make it look more realistic.
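The face-swap approach described above is often sketched as one shared encoder paired with a separate decoder per identity: encode person A's face, then decode it with person B's decoder to keep A's pose and expression but B's appearance. The sketch below illustrates that wiring only; the layer sizes, the plain linear maps, and all names are illustrative assumptions, not any specific tool's implementation, and the untrained random weights produce noise rather than a real face.

```python
import numpy as np

rng = np.random.default_rng(0)

FACE_DIM = 64 * 64   # flattened grayscale face crop
LATENT_DIM = 128     # compressed pose/expression representation

# One encoder is shared across both identities, so it learns
# features (pose, expression, lighting) common to any face.
W_enc = rng.standard_normal((LATENT_DIM, FACE_DIM)) * 0.01

# Each identity gets its own decoder, which learns to reconstruct
# that person's appearance from the shared latent code.
W_dec_a = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.01
W_dec_b = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.01

def encode(face: np.ndarray) -> np.ndarray:
    # Compress the face into the shared latent space.
    return np.tanh(W_enc @ face)

def decode(latent: np.ndarray, W_dec: np.ndarray) -> np.ndarray:
    # Reconstruct a face image from the latent code.
    return W_dec @ latent

def swap_face(face_a: np.ndarray) -> np.ndarray:
    """Encode person A's face, then decode with person B's decoder:
    the output keeps A's pose/expression but B's appearance."""
    return decode(encode(face_a), W_dec_b)

face_a = rng.standard_normal(FACE_DIM)
fake = swap_face(face_a)
print(fake.shape)  # (4096,) — same shape as the input face crop
```

In practice both autoencoders are trained on many images of each person, and the swap works because the shared encoder forces the two decoders to interpret the same latent space.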
Deepfakes have been used in political contexts to spread misinformation. For example, in 2018, a political party in Belgium released a deepfake video of Donald Trump urging Belgium to withdraw from the Paris climate agreement.
In 2022, a fabricated deepfake video of Ukrainian President Volodymyr Zelensky appeared to show him ordering Ukrainian soldiers to surrender to Russia. It is not clear whether Russia was behind the deepfake, but regardless of the source, it created confusion among the Ukrainian military and citizens.
Deepfakes have also been widely used in pornography, replacing the faces of the original actors with those of celebrities, most often women. At the other end of the spectrum, deepfakes are used for humor and satire, with several deepfake videos of politicians and celebrities going viral.
As deepfake technology gets better at creating convincing content, the tech world has responded with techniques to detect and prevent deepfakes. Microsoft's Video Authenticator tool analyzes blending boundaries and other subtle elements of a video to estimate the likelihood that the media has been artificially manipulated.
Intel, in collaboration with Binghamton University, has also developed a tool that detects deepfakes by analyzing biological signals. Several other tools are being developed to combat the rising threat of deepfakes.
To determine if you’re looking at a deepfake attempt, look for some of the most common signs of altered or synthetic media. Anomalies to watch out for include awkward facial feature positioning, unnatural eye movement, a lack of emotion, mismatched audio, changes in skin tone, or inconsistent lighting.