Deepfakes and How We Detect Them

Photoshop appeared 34 years ago and completely altered how people received the news: any image you see may be the result of deliberate editing. Internet users learned to question the accuracy of photographs and placed more trust in video and audio recordings, which seemed nearly impossible to fake. Then deepfakes appeared and quickly pierced that seemingly indestructible stronghold.

Deepfake, a combination of "deep learning" and "fake", refers to fabricated videos created with AI (artificial intelligence) technology. The software scans a person's videos and photos, then uses AI to merge them with separate footage, replacing facial details such as the eyes, mouth, and nose with lifelike facial movements and voice. The more images of that person exist, the more data the AI can learn from and mimic.
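The face-swapping idea behind many deepfake tools is a shared encoder with one decoder per identity: both faces are compressed into a common representation of expression and pose, and swapping decoders at inference time renders person B's face with person A's expression. A purely illustrative sketch, with random matrices standing in for trained networks (not a real or usable model):

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, LATENT = 64, 16  # toy sizes: flattened face vector and latent code

# One shared encoder, two identity-specific decoders.
# Random weights here are stand-ins for what training would learn.
W_enc = rng.normal(size=(LATENT, DIM))
W_dec_a = rng.normal(size=(DIM, LATENT))
W_dec_b = rng.normal(size=(DIM, LATENT))

def encode(face):
    """Compress a face into the shared latent space."""
    return W_enc @ face

def decode(latent, W_dec):
    """Reconstruct a face using an identity-specific decoder."""
    return W_dec @ latent

face_a = rng.normal(size=DIM)       # person A's face (toy vector)
latent = encode(face_a)             # captures expression/pose
swapped = decode(latent, W_dec_b)   # rendered with person B's decoder
```

Training on many images of each person is what makes the output convincing, which is why the article's point holds: the more photos and videos of a person exist, the better the mimicry.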

Back in 2013, Paul Walker, one of the main actors in the “Fast and Furious” series, died in a car crash while “Furious 7” was still in production. Cody and Caleb Walker, his two brothers, along with actor John Brotherton, stood in for him so that Paul's unfinished scenes could be completed, and the production team digitally mapped Paul's face onto their performances, aided by the brothers' similar features and expressions. On release, the film's authenticity surprised millions of fans around the world.

Although the technology was not created with bad intentions, deepfakes have increasingly been used in recent years to spread lies and cause harm.

According to statistics from Deeptrace, an Amsterdam-based company specializing in research on AI-generated content, 96% of deepfake videos contain pornographic content. The majority of victims are well-known singers and actresses whose images were used in pornographic films. Deeptrace also found that the top five porn sites publishing deepfake videos had accumulated a total of 134 million views.

With the rise of deepfakes, it is becoming increasingly difficult to verify information. Many concerns have been raised about the privacy of personal information, as well as the risk of tarnishing the image and reputation of celebrities, politicians, and even ordinary citizens.

As individuals, we cannot do much to stop deepfakes from being made. We can, however, approach information from the internet selectively and, most importantly, protect ourselves by limiting how much personal data we expose online. First and foremost, take precautions before posting any content: keep your social media accounts, such as Facebook, Instagram, and Twitter, private, and only accept requests or share your photos and personal information with people you trust or know in person.

Furthermore, you should keep your knowledge of deepfake developments up to date and educate those around you about the technology's potential harms, particularly children and teenagers, who often have little awareness of, or defense against, online manipulation.

You can also use Google's reverse image search to help detect fake images and videos. A reverse image search reveals where a specific image has appeared across the internet, letting you trace a suspicious picture back to its source. To check an image, do any of the following: drag the image into the search bar at images.google.com, paste the image's URL into that search bar, or right-click the image if Chrome is your default browser.
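If you need to check many images, the same lookup can be scripted. A minimal sketch that builds a reverse-image-search link from an image's address; the `searchbyimage` endpoint and `image_url` parameter are assumptions based on how Google's web interface has historically accepted URL submissions, and Google may change or restrict them:

```python
from urllib.parse import urlencode

def reverse_search_url(image_url: str) -> str:
    """Build a Google reverse-image-search link for a given image URL.

    NOTE: the endpoint and parameter name are assumptions; verify
    them against Google's current interface before relying on this.
    """
    query = urlencode({"image_url": image_url})
    return f"https://www.google.com/searchbyimage?{query}"

# Open the resulting link in a browser to see where the image appears online.
link = reverse_search_url("https://example.com/suspicious-photo.jpg")
```

Opening the generated link in a browser shows the pages where the image has been published, which is the "digital trail" described above.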

Policymakers could advance deepfake-detection technology by investing in it and encouraging scientists to develop new artificial intelligence programs for face and voice recognition, while also supporting existing programs and tools.

In addition, governments should encourage large tech corporations to share their massive data sets with social scientists so that they can investigate solutions to viral misinformation and disinformation campaigns.

The world's largest technology corporations, such as Facebook and Google, have a vital role to play in combating deepfakes with their enormous data sets. What makes these platforms so powerful is that they collect data from users all over the world; if they decided to share it, the effect would be explosive. That kind of information sharing would make it far easier to identify, locate, and report deepfakes that appear on the internet.

However, such sharing may be criticized when users' personal information is at risk of being leaked. Another significant challenge is obtaining government approval on what data to share or withhold. Reaching an agreement between these competing viewpoints is like solving a difficult math problem, and with deepfakes still spreading every day, we need to think fast and think smart.

Edited by: Thomas Culf
