Deep Fake Porn and its Double Exploitation

We live in an age where fake news spreads across nations, prompting deep conversations about misinformation. Yet we largely ignore the effects of deepfake technology, where a simple program can create fake content that looks startlingly real. Fake images and videos are already being made to produce realistic revenge porn, political propaganda, and other misinformation, and even experts find it hard to tell which is real and which is deepfaked.


What are Deepfakes generally?


The term "deepfake" merges "deep learning" and "fake": it refers to AI-generated images or videos of events that never happened. Deep learning, according to Forbes, is a subset of machine learning in which artificial neural network algorithms inspired by the human brain learn from large amounts of data. The result is deepfake images or videos that are inauthentic and fabricated, yet look and sound real.


To put it in simpler terms, it is the newer generation's answer to Photoshopping. You may have seen it in the FaceApp trend, where people manipulated photos however they wanted, or in the many face-swapping apps available on the Google Play Store, such as Reface.


Deepfake algorithms began spreading rapidly online in late 2017, and deepfake images are now common on social media platforms. They are used in various media, such as memes, where a celebrity's or an ordinary person's face is swapped onto someone else with precise mimicry, reproducing the target's movements, micro-expressions, and even voice modulations to make the result look realistic.


What is Deepfake Pornography?


Deepfake pornography uses the same face-swap technology to digitally manipulate pornography so that it looks as if other people appear in the images or film. Using photos of celebrities or everyday people, the victim's face is inserted into an existing pornographic photo or film, replacing the original performer. This has happened to many female celebrities, including Taylor Swift, Emma Watson, Gal Gadot, Michelle Obama, Daisy Ridley, and Meghan Markle, after an anonymous Reddit user uploaded several superimposed deepfake porn videos under the pseudonym "Deepfakes". With the software becoming more accessible, it is happening to ordinary people as well.


The use of deepfakes for pornographic purposes now poses a serious threat of fake revenge porn: it is easy to put anyone's face or body into any scenario and make it look real, which means abusive exes and others have endless opportunities to create and spread false images that harm a victim's reputation.


Deepfake technology also compiles all known digital images of a person's face into a mesh. Many of these images and videos include composites from when the person was underage, and could thus be flagged as child pornography even though they are fake.




A comparison of an original and a deepfake video of Facebook's chief executive Mark Zuckerberg. PC: The Washington Post via Getty Images


Who makes deepfakes?


Everyone from academic researchers and amateur enthusiasts to visual effects studios and porn producers makes deepfakes. Major companies such as Google, Facebook, and Amazon are also experimenting with the technology, both to monetize it and to understand fake content more generally. Indeed, deepfake videos have become a profit-turner for Google in the form of YouTube videos featuring deceased celebrities; the company claims to use them to explore the technology and find new, appealing ways to serve results, which is itself a cause for concern.


Where does it come from?


Manipulating sexually explicit images isn't new, but it used to be a meticulous process that took a great deal of time and effort to perfect. Since 2017, however, new software has made it easy: all you need is to collect a set of a person's photos from their social media, choose a porn film, and feed both into an automated AI system.


Producing a deepfake still takes a long time (more than 24 hours even for a short clip), but open-source software has made the process widely accessible: one commonly used program has reportedly been downloaded more than 100,000 times. To grasp the pace of the technology, consider that Motherboard predicted it would take another year before deepfake software was automated; in 2018, it took only a month.


In September 2019, research by the cyber-security company Deeptrace found that the number of deepfake videos online had nearly doubled over nine months, from 7,964 in December 2018 to 14,698. A staggering 96% were pornographic, and 99% of those mapped the faces of female celebrities onto porn stars.


A professor of law at Boston University said that deepfake technology is being "weaponized against women", and that beyond the porn there is plenty of spoof, satire, and mischief.


Are deepfakes just about videos?


No. Deepfakes are not just videos: the technology can create realistic but entirely fictional photos from scratch, and even audio can be deepfaked to create "voice skins" or "voice clones" of public figures. In one incident in March 2019, fraudsters used AI to impersonate the voice of a German CEO and demanded a fraudulent transfer of €220,000. The company's insurer believed a deepfaked voice had been used, though the evidence was unclear. Similar scams have been reported using recorded WhatsApp voice messages.




A comparison of an original and deepfake photo. PC: Facebook


How are they made?


Deepfakes are made in two main ways. The first uses a pair of AI algorithms. Thousands of face shots of the two chosen people are run through an encoder, which finds and learns the similarities between the faces and reduces them to shared common features, compressing the images in the process. A second algorithm, a decoder, is then taught to recover the faces from the compressed versions. Because the faces are different, two decoders are trained: one to recover the first person's face, the other the second person's. To perform the face swap, you simply feed encoded images into the "wrong" decoder: a compressed image of person A's face goes into the decoder trained on person B. The decoder then reconstructs person B's face with the expressions and orientation of person A. For a convincing video, this has to be done on every single frame.
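As a loose illustration of this shared-encoder, two-decoder swap, here is a toy linear sketch in NumPy. Everything in it (the random "face" data, the fixed random encoder, the least-squares decoders) is a synthetic stand-in; real deepfake systems train deep convolutional networks, including the encoder itself, on thousands of genuine face shots.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "face" data: flattened 8x8 grayscale images for persons A and B.
# These stand in for the thousands of real face shots described above.
faces_a = rng.random((200, 64))
faces_b = rng.random((200, 64))

latent_dim = 16

# Shared encoder: ONE projection used for both people, so the latent
# space captures features common to the two faces. (A fixed random
# matrix here; a real system learns this with a deep network.)
W_enc = rng.standard_normal((64, latent_dim)) * 0.1

def fit_decoder(faces):
    """Least-squares linear decoder mapping latents back to one person's faces."""
    latents = faces @ W_enc
    W_dec, *_ = np.linalg.lstsq(latents, faces, rcond=None)
    return W_dec

# Two decoders: one per person, each fit to reconstruct its own face.
W_dec_a = fit_decoder(faces_a)
W_dec_b = fit_decoder(faces_b)

# The face swap: encode a frame of person A, decode with B's decoder,
# yielding B's appearance with A's pose and expression.
latent_a = faces_a[:1] @ W_enc
swapped = latent_a @ W_dec_b

print(swapped.shape)  # one swapped 64-pixel "frame"
```

For a video, the same encode-then-wrong-decode step would be repeated on every frame, as the paragraph above notes.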


The second way uses a single system known as a GAN, or generative adversarial network, which is used for face generation and can produce faces that don't otherwise exist. A GAN uses two separate neural networks, sets of algorithms designed to recognize patterns, that work together by teaching themselves the facial features of real images so they can deliver convincing fakes.


The two networks engage in an adversarial interplay. One network, the generator, creates images, while the other, the discriminator, learns to distinguish real images from fakes. Trained on images of a real person, the pair eventually produces realistic yet fake stills and videos.
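To make the adversarial interplay concrete, here is a deliberately tiny GAN sketched in plain NumPy on one-dimensional data, with the gradients derived by hand. It is an illustrative sketch only: real deepfake GANs use deep convolutional networks and far more careful training, and all the numbers here (the target distribution, the learning rate, the step count) are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 0.5). Faces are high-dimensional,
# but the generator-vs-discriminator loop has the same shape.
# Generator g(z) = w*z + b maps noise to fake samples.
w, b = 1.0, 0.0
# Discriminator d(x) = sigmoid(a*x + c) outputs P(x is real).
a, c = 0.1, 0.0
lr = 0.05

for step in range(3000):
    z = rng.standard_normal(32)
    real = 4.0 + 0.5 * rng.standard_normal(32)
    fake = w * z + b

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    dr = sigmoid(a * real + c)
    df = sigmoid(a * fake + c)
    a -= lr * (np.mean((dr - 1.0) * real) + np.mean(df * fake))
    c -= lr * (np.mean(dr - 1.0) + np.mean(df))

    # Generator step: push d(fake) toward 1, i.e. fool the discriminator.
    df = sigmoid(a * fake + c)
    grad_fake = (df - 1.0) * a
    w -= lr * np.mean(grad_fake * z)
    b -= lr * np.mean(grad_fake)

# After training, the generator's samples drift toward the real mean of 4.
samples = w * rng.standard_normal(1000) + b
print(round(float(samples.mean()), 1))
```

The key design point is that neither network ever sees the other's internals: the generator improves only through the discriminator's verdicts, which is exactly the "tangled interplay" described above.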


What do you need for making deepfakes?


Creating a convincing deepfake on a standard computer is really tough. Most are made on high-end desktops with powerful graphics cards, or with computing power in the cloud, which cuts processing time from weeks to days. It also takes a lot of skill, not least to touch up the finished video and reduce flicker and other visual defects. That said, plenty of tools are now accessible for making deepfakes, and companies will even make them for you, such as Deepfakes Web or the Zao app.


How can you spot a deepfake?


Spotting a deepfake gets harder as the technology improves. In 2018, US researchers discovered that deepfaked faces don't blink normally: because most training images show people with their eyes open, the algorithms never learn to blink. Soon after the research was published, however, deepfakes appeared that did blink; as soon as a weakness is revealed, it gets fixed. Poor-quality deepfakes are still quick to spot, through bad lip-syncing, patchy skin tone, or flickering around the edges of transposed faces, because fine details are hard to render well.


Incorrectly rendered details such as jewelry and teeth can also give a fake away, as can strange lighting effects like inconsistent illumination or reflections in the iris.


Other giveaways include facial morphing (a simple stitch of one image over another), unnatural body shape and hair, awkward head and body positioning, a robotic-sounding voice, blurry or misaligned visuals, and digital background noise.
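One of the cues above, unnatural blinking, can be checked with a simple heuristic known as the eye aspect ratio (EAR) over six landmarks around each eye: open eyes score roughly 0.3, and a blink drops the score sharply. The sketch below uses made-up landmark coordinates and a hypothetical threshold of 0.2; a real detector would take its landmarks from a face-tracking library.

```python
import math

def eye_aspect_ratio(p):
    """EAR from six (x, y) eye landmarks p[0]..p[5]:
    (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(p[1], p[5]) + dist(p[2], p[4])
    horizontal = dist(p[0], p[3])
    return vertical / (2.0 * horizontal)

def blink_count(ear_series, threshold=0.2):
    """Count downward crossings of the blink threshold in a per-frame EAR series."""
    blinks, below = 0, False
    for ear in ear_series:
        if ear < threshold and not below:
            blinks += 1
            below = True
        elif ear >= threshold:
            below = False
    return blinks

# Made-up landmark sets: a wide-open eye and a nearly closed one.
open_eye = [(0, 2), (3, 0), (7, 0), (10, 2), (7, 4), (3, 4)]
closed_eye = [(0, 2), (3, 1.8), (7, 1.8), (10, 2), (7, 2.2), (3, 2.2)]
print(eye_aspect_ratio(open_eye) > eye_aspect_ratio(closed_eye))  # True
```

A video whose EAR series never dips below the threshold, i.e. a subject who never blinks, would be one signal (among many) that the footage is synthetic.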


Still, tech firms are funding research to identify deepfakes. Last year, the first Deepfake Detection Challenge brought industry leaders and academic experts together to accelerate the development of new detection models and to create a unique dataset; the resulting models detected 82% of deepfake videos.




Comparing original and deepfake videos of Russian president Vladimir Putin. PC: Alexandra Robi


Will deepfakes undercut trust?


The broader impact of deepfakes and other synthetic media is that they make it harder to distinguish truth from falsehood, and when trust is eroded it becomes easier to raise doubts about specific events or issues. That means trouble for the courts, particularly in child custody battles and employment tribunals, where faked events could be entered as evidence. It also poses a personal security risk: deepfakes can mimic biometric data and could trick systems that rely on face, vein, voice, or gait recognition.


Is deepfake pornography illegal?


At present, apart from in California, creating a deepfake is not in itself illegal, even a "deep nude", also known as deep porn. Whether a particular deepfake breaks the law depends on its content: it may infringe copyright, violate data protection law, or be defamatory if it exposes the victim to mockery and shame. There is also a specific criminal offence of sharing private sexual images without consent, the revenge-porn offence, which carries up to two years in jail.


In the USA, the Deepfakes Accountability Act, introduced in 2019, would require deepfakes to be watermarked for identification, and Virginia has amended its law banning nonconsensual pornography to include deepfakes. India, however, has no explicit law banning deepfakes. The closest provisions are sections 67 and 67A of the Information Technology Act 2000, which criminalize publishing sexually explicit material in electronic form, and section 500 of the Indian Penal Code 1860, which punishes defamation. These provisions are inadequate to challenge the many forms deepfakes take.


In Scotland, the revenge-porn law covers deepfakes by making it an offence to disclose, or threaten to disclose, a photo or film that shows or appears to show another person in an intimate situation. In England, however, the offence deliberately excludes images created merely by altering an existing image.


Some have called for deepfakes to be made a specific crime of their own. In October 2018, for example, the Women and Equalities Committee called on the government to implement a law against image-based abuse that bans the non-consensual creation and distribution of sexual images.


Are deepfakes always vengeful?


No, not always. Some are helpful, like Project Revoice, a voice-cloning deepfake initiative by the ALS Association that can restore people's voices when disease takes them away. Deepfake videos can entertain, too: the Dalí Museum in Florida has a deepfake of the surrealist painter, who introduces his art and takes selfies with visitors. In film, the technology can resurrect dead actors or improve dubbing on foreign-language films, as in David Beckham's malaria-awareness ad, in which deepfake technology lets him speak nine languages. There is even a website, MyHeritage, that reanimates photos of people's dearly departed relatives, which some find comforting. But all of these are made with each person's consent, which separates them from the harmful uses of deepfakes that mostly target women.




An example of a deepfake created by CNBC. PC: Kyle Walsh


What about shallowfakes?


"Shallowfakes" is a term coined by Sam Gregory at the human rights organization Witness to describe videos that are either presented out of context or doctored with simple editing tools. They are crude, but undoubtedly impactful. One example involved Jim Acosta, a CNN correspondent who was temporarily banned from White House press briefings after a heated exchange with the president. A shallowfake video circulated showing him making contact with an intern who tried to take the microphone from him; it later emerged that the video had been sped up at the crucial moment to make the move look aggressive, and Acosta's press pass was restored. Another was the Nancy Pelosi video, slowed down with simple editing tools to make her sound drunk, which spread widely across the media before she acknowledged it.


With deepfakes, manipulation is only going to increase. As Henry Ajder, head of threat intelligence at Deeptrace, put it: "The world is becoming increasingly more synthetic. This technology is not going away."


What is being done about deepfakes?


Porn deepfakes increasingly feature the faces of non-famous individuals: ex-wives, ex-girlfriends, high-school crushes. In 2021, MIT Technology Review reported on a UK woman, Helen Mort, a poet and broadcaster in Sheffield, who was warned that her face, taken from private social media accounts dating back to 2017 and 2019, had appeared on a porn site, pasted onto violent sex acts. At first she couldn't believe it. After seeing the shared images, she began suspecting everyone, including her ex-husband, because the abuser used her first name as a pseudonym. "The fact that I was even thinking that was a sign of how you start doubting your whole reality," she said.


Mort's abuser appears to have uploaded non-intimate photos of her, from her childhood, teenage years, and pregnancy, and urged other users to edit her face into violent pornographic images. Some looked photoshopped; some looked astonishingly realistic. When she contacted the police, they said they could do nothing, leaving her to cut herself off from the web entirely, which her work made impossible. Such deepfakes amount to revenge porn. Porn deepfakes are usually tagged as fabrications, and some creators take pride in them as a kind of fan fiction or media remix, but they remain non-consensual, objectifying a real person and potentially harming their identity.


Last year, PornHub banned deepfake videos, and other platforms such as Twitter, Reddit, and the GIF-hosting site Gfycat imposed similar bans. In July 2020, Motherboard reported a failed deepfake phishing attempt targeting a tech firm, and an even more alarming report from Recorded Future found evidence that malicious actors are continuously looking to leverage deepfake technology for cybercrime.


That report showed how users of certain dark-web forums, plus communities on platforms like Discord and Telegram, are discussing how to use deepfakes for social engineering, fraud, and blackmail. The consultancy Technologent has also warned that the shift to remote working puts employees at even greater risk of falling victim to deepfake phishing, reporting three such cases among its clients.



For the tech companies using it, it may be easy to present deepfake tools as handy or enjoyable advances, but it is just as easy for the technology to become unreliable and questionable, inflicting humiliation and trauma on the innocent women whose faces are swapped in. It can harm sex workers too, feeding a broader problem of performers losing control over their own images.


Sex workers produce these scenes for money; it is how they make a living. Whether filmed under an agreement or created DIY-style, like a cam show, porn that is altered and shared without the performers' approval is an affront both materially and morally.


Henry Ajder, head of research analysis at Deeptrace, told the BBC: "The debate is all about the politics or fraud and the upcoming threat, but a lot of people are forgetting that deepfake pornography is a very real, very current phenomenon that is harming a lot of women."


Activists and legal scholars widely condemn the practice as a form of media-based sexual abuse. Still, porn deepfakes are abundant due to the ease of sharing and reuploading. Piracy is already standard practice on porn aggregator sites, and deepfakes benefit from the resulting complacency around porn content theft.


What’s the ultimate solution?


Ironically, AI itself may be part of the answer: it has already helped many organizations and social media platforms spot fake videos. But many available detection systems have a serious weakness: they work well for celebrities, because they can be trained for hours on freely available footage, and poorly for everyone else.


Some tech firms are now working on detection systems that flag fakes wherever they appear, and on digital watermarks whose presence can be checked to verify authenticity.


Bella Thorne, an actress whose video of herself crying over her father's death was edited into pornography of a girl masturbating, said in a BBC interview: "I don't know how we regulate apps and things like that, because it's not gonna just be your celebrity or your favourite person that you want to put in this app, because you could literally do it to your own best friend in school if you decide you hate them so much."


Recently, researchers at Facebook and Michigan State University claimed to have figured out a way, using AI software, to reverse-engineer deepfakes, letting them identify which AI model produced a given deepfake. The work builds on MSU research from last year. As part of it, Facebook has collected and classified 100 different deepfake models in existence.
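The intuition behind this kind of reverse engineering can be sketched in miniature: each generator tends to leave a faint, consistent residual pattern, so averaging many of its outputs reveals a "fingerprint" that new images can be matched against. Everything below (the model names, patterns, and images) is synthetic toy data, not Facebook's actual method, which learns fingerprints with trained networks.

```python
import numpy as np

rng = np.random.default_rng(2)

shape = (32, 32)
# Three hypothetical generators, each with a hidden residual pattern.
model_patterns = {name: rng.standard_normal(shape) * 0.3
                  for name in ["gan_a", "gan_b", "gan_c"]}

def make_fake(model):
    """A toy 'deepfake': random image content plus the model's hidden pattern."""
    return rng.standard_normal(shape) + model_patterns[model]

def fingerprint(images):
    """Averaging many fakes cancels the varying content and leaves the pattern."""
    return np.mean(images, axis=0)

# Build a library of fingerprints, one per known generator.
fingerprints = {m: fingerprint([make_fake(m) for _ in range(200)])
                for m in model_patterns}

def attribute(image):
    """Attribute an image to the model whose fingerprint correlates best with it."""
    def corr(a, b):
        return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])
    return max(fingerprints, key=lambda m: corr(image, fingerprints[m]))

print(attribute(make_fake("gan_b")))
```

The design choice worth noting is that attribution needs no access to the generators themselves, only to enough of their outputs, which mirrors the detection setting the Facebook/MSU work addresses.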


A reverse-engineering research model used by Facebook AI software to spot deepfakes and detect their origin. PC: Facebook AI


There is even a weekly podcast, The Deepfake Podcast, which explores the human side of deepfakes, synthetic media, and the future of AI. Its guests, including artists, scientists, developers, and journalists, discuss how to make intuitive sense of what deepfakes mean for the internet.


So there have been many improvements and measures to spot and stop the spread of deepfakes, and perhaps strict laws will follow to prohibit their misuse for defamation, pornography, or other attacks on a person's identity. But AI keeps developing too. All we can do is stay aware, and hope things get better.

