The internet is often called the Wild West of the digital age. As a largely unregulated platform where the possibilities are limited only by imagination, almost anything goes, and that includes deepfakes. Even if you haven't heard the term, there's a good chance you've encountered one on social media: actors swapped into movies they never starred in, politicians making announcements that make no sense. All deepfakes. But what are they really, and why are they a problem?
What is a deepfake?
Deepfakes are a relatively new use of AI to create false yet realistic audio and video clips, a form of synthetic media. The underlying technology is similar to a Snapchat or Instagram face-swap filter, but far more convincing and far more insidious. With more advanced techniques and open-source software readily available online, the average hobbyist can take a set of photos of someone and swap that person's face into an existing video clip. The first deepfakes shared on the internet, for example, were of celebrities inserted into explicit content. Many platforms banned the sharing of those videos, and Congress later introduced the DEEPFAKES Accountability Act, which would require all deepfakes to be labeled as such.
How are they made?
To create a video deepfake, all one needs is several hundred photos of the person to insert into an existing video, plus an AI algorithm called an encoder. The encoder is trained on face shots of both people and learns to compress each face down to the features the two share, such as pose, expression and lighting. A second algorithm called a decoder then learns to reconstruct one specific person's face from those compressed features. To perform the swap, each frame of the original subject's face is fed through the shared encoder and then through the inserted subject's decoder, which redraws the new face with the original subject's pose and expression.
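For readers curious what that shared encoder and per-person decoders look like in code, here is a minimal sketch in Python using PyTorch. The network sizes, 64x64 image resolution and training details are illustrative assumptions rather than a real deepfake pipeline, which would train for days on thousands of aligned face frames.

```python
# Minimal autoencoder face-swap sketch (PyTorch).
# All layer sizes and the 64x64 resolution are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared network: compresses any face into a common latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-person network: rebuilds one person's face from the latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

# One encoder is shared; each person gets their own decoder.
encoder = Encoder()
decoder_a = Decoder()  # trained only on person A's photos
decoder_b = Decoder()  # trained only on person B's photos

# Training (not shown): each decoder learns to reconstruct its own
# person's photos from the shared latent code, e.g. with an MSE loss.
# The swap itself: encode a frame of person A, decode with person B's
# decoder, and B's face comes out wearing A's pose and expression.
frame_of_a = torch.rand(1, 3, 64, 64)  # stand-in for one video frame
swapped_face = decoder_b(encoder(frame_of_a))
```

The shared encoder is the key design choice: because one encoder must serve both decoders, it can only keep the features the two faces have in common, which is exactly what lets one person's movements drive the other person's face.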
What about audio deepfakes?
While both video and audio deepfakes can be used to deceive people, audio deepfakes are more likely to be used in targeted scams. Once a convincing copy of someone's voice exists, for example the voice of someone you trust, phone scammers can use that voice to steal secure information. The technology to make deepfake audio is becoming more and more accessible, too. Voice-cloning apps like Lyrebird and Resemble are marketed as tools for making podcasts, programming virtual assistants and building out game characters. Like most technological advancements, audio deepfake software may have been built with good intentions, but it can be put to far more manipulative uses.
How do I protect myself against deepfakes?
The government has tried to pass legislation that would make it harder to create and circulate deepfakes, but for now these bills are difficult to enforce. Anyone making a deepfake for malicious purposes is unlikely to label it as one, as the DEEPFAKES Accountability Act would require, and watermarks on video or audio can simply be edited out. Another roadblock to prohibiting deepfakes is the First Amendment, which protects works of entertainment and satire.
Until a legal solution is found, it's up to individuals to stay vigilant against deepfakes. For viral videos or audio snippets on social media, trust only verified news sources. Be wary of telephone scammers who can use audio deepfakes to contact you directly: don't give personal information to anyone over the phone unless you called a proven customer service number yourself. If you get a call from someone who claims to be someone you know, or to be calling on their behalf, hang up and call that friend or family member back directly to check in. Stay alert; a healthy sense of skepticism is your best defense now that deepfakes are easier than ever to make and spread.
As new technology emerges, securing your organization's data is more important than ever, and Bluefin is here to help. We specialize in security technologies, including tokenization and point-to-point encryption (P2PE), that protect payment and PII/PHI data. With Bluefin solutions, sensitive information never traverses your systems, so even if a breach occurs, hackers get nothing. Learn more about our security products or contact us today.