Since the beginning of the 21st century, technological advancement has taken place at an exponential pace. In the field of machine learning and artificial intelligence, increased sophistication has allowed the technology not only to process existing information but also to generate new content. For a long time, the market for fabricated images was dominated by Photoshop. However, Generative Adversarial Networks (hereinafter, “GANs”) have taken over and can now create computer-generated, synthetic pictures of people and objects; media fabricated with such deep-learning techniques has come to be known as “deepfakes”.
Owing to their lifelike quality, deepfakes have in recent years been popularised as tools for political manipulation, revenge pornography and the like, casting serious doubt on the future of the technology. Like other emerging technologies, deepfakes have been demonised for the anti-democratic threats and security challenges they pose as they become increasingly authentic and realistic. However, since the technology is still in its nascent stages, we believe that taking a tech-positive approach is crucial.
Hence, in this paper we explore the opportunities presented by deepfakes rather than the threats they pose. For ease of understanding, the paper is divided into two parts. In Part I, we examine the meaning and popular use cases of deepfakes, explore the technology behind them, and address the socio-legal challenges they pose. In Part II, we discuss beneficial applications of deepfakes in medicine, criminal law, grief counselling and other fields, before offering concluding remarks.
To understand what deepfakes are and how they use GANs, it helps to have a preliminary understanding of how machine learning works. In machine learning, models are built to solve problems, broadly through two approaches: (a) supervised and (b) unsupervised learning. Briefly put, in a supervised system, a model is fed a training data set consisting of both inputs and their expected outputs, and the model is iteratively corrected and tweaked until its actual outputs match the expected ones. For the purposes of this paper, however, it is the unsupervised method that concerns us.
In an unsupervised system, the model is fed only input data and is expected to discover patterns in it on its own. GANs are an example of unsupervised machine learning. Introduced by Ian Goodfellow in 2014 to make computer-generated images seem as real as possible, a GAN pits two neural networks against each other: a generator that produces synthetic samples and a discriminator that tries to tell them apart from real ones, so that the generated output becomes progressively more realistic. Deepfake technology uses GANs to create images. With it, anyone can produce realistic-looking media (video, audio, images or a combination of these) depicting fake actions or speech. Simply put, the manipulated media deceives by mimicking the facial expressions, blinking and vocal patterns of the subject.
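The adversarial game described above can be sketched in a few lines of NumPy. The toy setup below is purely illustrative (one-dimensional "real" data, an affine generator, a logistic discriminator, and hand-derived gradients are all our own assumptions, not drawn from any particular deepfake system): the generator learns to produce numbers that look like samples from the real distribution, purely because the discriminator keeps penalising ones that do not.

```python
# A minimal, hypothetical sketch of the GAN adversarial loop on 1-D data.
# The generator maps noise to samples; the discriminator scores samples as
# real or fake; each is nudged against the other in turn.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_batch(n):
    # "Real" data the generator must learn to imitate: N(4, 1).
    return rng.normal(4.0, 1.0, n)

a, b = 1.0, 0.0    # generator G(z) = a*z + b, fed noise z
w, c = 0.1, 0.0    # discriminator D(x) = sigmoid(w*x + c)

lr, batch = 0.05, 64
for step in range(2000):
    # --- discriminator update: push D(real) up, D(fake) down ---
    xr = real_batch(batch)
    z = rng.normal(0.0, 1.0, batch)
    xf = a * z + b
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    w -= lr * np.mean(-(1 - dr) * xr + df * xf)   # d(loss)/dw
    c -= lr * np.mean(-(1 - dr) + df)             # d(loss)/dc
    # --- generator update: push D(fake) up (non-saturating loss) ---
    z = rng.normal(0.0, 1.0, batch)
    df = sigmoid(w * (a * z + b) + c)
    a -= lr * np.mean(-(1 - df) * w * z)          # d(loss)/da
    b -= lr * np.mean(-(1 - df) * w)              # d(loss)/db

fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 1000) + b))
print(fake_mean)  # should have drifted from 0 towards the real mean of 4
```

Real deepfake generators are deep convolutional networks producing images rather than scalars, but the training dynamic, a generator and discriminator locked in competition, is the same.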
Deepfakes involving people can be divided into three categories: ‘face-swap’, ‘lip-sync’ and ‘puppet-master’. In a ‘face-swap’, a face in a video is replaced with the face of another person (mainly used to create non-consensual pornography). ‘Lip-sync’ is where a video is altered so that the mouth region is in consonance with a different audio recording; a very popular example is the video of Barack Obama that was altered to say things like “President Trump is a total and complete dipshit”. Finally, in ‘puppet-master’, the targeted person is animated by another person who sits in front of the camera and acts out the target’s facial expressions, eye movements and so on.
Today, there are legitimate concerns about the increasing use and sophistication of deepfakes. For instance, there is a threat of a rise in cybercrimes that bypass security systems relying on biometric features such as the face or voice by mimicking them with sophisticated deepfakes. Deepfakes have also been used in malicious political campaigns, raising concerns about the future of democracy in this age of misinformation. Further, deepfakes have been weaponised to harass women through non-consensual pornography: statistically, 96 per cent of deepfakes today are non-consensual pornographic videos of celebrities created through face-swapping. This is deeply concerning, since it may give rise to new forms of bullying and blackmail on the internet. Given these harms, it is no surprise that deepfakes have a well-founded negative reputation and are feared by the general public.
Several socio-legal challenges surround the conversation around deepfakes. The debate is primarily centred on data privacy: deepfakes are often created without the consent of the person whose data is used, a grave violation of privacy norms such as Article 6 of the GDPR. A further issue is who holds the intellectual property rights in a deepfake. Since a deepfake builds on existing data, it is debatable whether the copyright in the resulting media belongs to the owner of the underlying data or to the creator of the deepfake. Moreover, because deepfakes can amount to both a violation of privacy and a form of sexual harassment, such as revenge pornography, the question arises whether deepfakes should be accorded copyright protection in the first place, whether for the data owner or the creator.
Further regulation of deepfakes, in the form of their removal from social media, sends a concerning message about the value of free speech. Regulating deepfakes that are not inherently malicious thus becomes a tricky exercise. Interestingly, while most social media giants have outright banned deepfakes in light of these challenges, Snapchat invested $160 million in acquiring the deepfake technology company AI Factory. Its popular ‘Snapchat Cameos’ feature uses face-swap deepfake technology to create ‘realistic’ digital stickers of users.
While being mindful of the challenges and threats posed by deepfakes, it is imperative, given the novel nature of the technology, to understand their positive use cases as well. Some existing and potential positive use cases are laid out in the sections below.
Reconstructing Crime Scenes
On 20 February 2014, Ukraine witnessed one of its most violent clashes between police and civilians. Civilians, including academics protesting Ukraine’s tilt towards Russia, clashed with paramilitary forces, leading to the deaths of 48 protestors and 4 policemen. Since it happened in a busy central area, the clash was recorded by security cameras, traffic cameras and even witnesses’ smartphones. A year later, Evelyn Nefertari combined all the video footage that captured the events of the protest and released a 164-minute video.
Three years later, the video was used to reconstruct the deaths of 3 of the protestors. The extensive data archive served as the base for combining traditional architectural analysis with deepfake technology, three-dimensional laser scans of the streetscape, ballistics analysis and autopsy reports. The final result was a multimedia reconstruction of the killings of the three protestors, which has been entered into evidence in the trial concerning their deaths.
The use of deepfake technology as in Ukraine is revolutionary for human rights and justice. If implemented properly, it could force oppressive regimes to reconsider their actions, because in today’s digital age someone is always watching.
With minimal effort, ‘lip-sync’ deepfakes can be used in social awareness campaigns to increase accessibility where language is a barrier. While Manoj Tiwari’s deepfake Hindi video had a negative impact, such voice modulation for translation into multiple languages can do real good. In 2019, the health charity Malaria No More used deepfake technology to make David Beckham speak in nine different languages. The aim of the campaign was to drive home the point that many voices together can end malaria, and lip-sync deepfakes were critical to the seamless projection of different languages from the same speaker.
Moreover, Deep Empathy is a collaboration between UNICEF and MIT that uses deepfake technology to depict bustling cities as war-torn, in order to raise awareness about the perils of conflict. It draws on footage of Syrian neighbourhoods devastated by war to render cities such as Boston and London as similarly affected.
Deepfakes can also be used to enhance the resolution of digital scans, thereby training AI to spot tumours better. NVIDIA, the MGH & BWH Center for Clinical Data Science and the Mayo Clinic used deepfake technology to create synthetic data, which in turn aided data augmentation and efficient training of AI. AI trained mainly on synthetic medical images became just as effective at spotting tumours as AI trained only on organic medical data.
Deepfakes can also generate content for benign use: a ‘digital twin’ of sorts that can stand in for a person’s in-person or virtual presence. The idea of a digital twin gives users tools to create digital replicas of themselves for future use, relevant to interactive stories, memorials and other simulations. The potential benefits are vast. Imagine the scope for grief counselling and bereavement assistance for a family that has lost a loved one. Digital twins could also sustain distant relationships more effectively than current video-conferencing tools, allow people suffering from certain diseases to keep speaking in their own voice, and enable people with certain disabilities to engage in activities they otherwise cannot access.
Education and Art
There are several beneficial uses in the education sector as well. With deepfakes, students can access more personalised learning experiences: imagine a lecture on the Partition from Ambedkar, or on the conditions of Jews during World War II from Anne Frank. Deepfakes could make prominent historical figures interactive, turning learning into an immersive experience that goes beyond mere lectures and readings. Deepfakes are also valuable in the arts, where they can be used to resurrect deceased performers to reprise their roles. For example, the makers of The Last Jedi faked new dialogue using old recordings after Carrie Fisher’s death. Imagine watching Robin Williams on screen again, or Heath Ledger as the Joker!
It is fairly easy to misuse deepfakes, but that risk is common to all emerging technologies, largely because of their novelty. And while cybersecurity experts have been working to keep systems that distinguish deepfakes from authentic videos up to date, hackers outnumber cybersecurity professionals. In light of this, one possible way to thwart the misuse of deepfakes is increased reliance on blockchain technology: registering legitimate videos and images on encrypted ledgers at the time of creation would further secure them from tampering.
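The registration idea above can be illustrated with a minimal sketch, assuming a simple append-only hash chain rather than any specific blockchain platform. Each ledger entry stores a video's SHA-256 fingerprint together with the hash of the previous entry, so altering either a registered video's record or the ledger itself breaks the chain and is immediately detectable.

```python
# Hypothetical sketch of registering media fingerprints on a hash-chained
# ledger. "Videos" here are just byte strings for illustration.
import hashlib
import json

def fingerprint(data: bytes) -> str:
    """SHA-256 content hash, registered at creation time."""
    return hashlib.sha256(data).hexdigest()

def append_entry(ledger: list, video_bytes: bytes) -> None:
    """Append a record linking this video's hash to the previous entry."""
    prev = ledger[-1]["entry_hash"] if ledger else "0" * 64
    record = {"video_hash": fingerprint(video_bytes), "prev": prev}
    record["entry_hash"] = fingerprint(
        json.dumps(record, sort_keys=True).encode()
    )
    ledger.append(record)

def chain_is_valid(ledger: list) -> bool:
    """Recompute every link; any tampering breaks the chain."""
    prev = "0" * 64
    for rec in ledger:
        body = {"video_hash": rec["video_hash"], "prev": rec["prev"]}
        if rec["prev"] != prev:
            return False
        if rec["entry_hash"] != fingerprint(
            json.dumps(body, sort_keys=True).encode()
        ):
            return False
        prev = rec["entry_hash"]
    return True

ledger = []
append_entry(ledger, b"original-footage-bytes")
append_entry(ledger, b"second-clip-bytes")
print(chain_is_valid(ledger))   # True: untampered chain
ledger[0]["video_hash"] = fingerprint(b"doctored-footage")
print(chain_is_valid(ledger))   # False: tampering is detected
```

A production system would additionally distribute the ledger across many nodes, which is what makes rewriting history impractical; the sketch only shows the tamper-evidence property that hash chaining provides.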
Legislation criminalising the misuse of deepfakes is a step towards containing their harmful effects. For example, under the National Defense Authorization Act for Fiscal Year 2020, the United States Director of National Intelligence must, inter alia, submit a report to Congress detailing imminent threats from deepfakes as well as foreign attempts to weaponise deepfakes to manipulate U.S. elections. California has banned the circulation of deepfake pornography, as well as the circulation of deepfakes of politicians within 60 days of an election.
With the right motivation, deepfakes are a technological marvel that can be used for the advancement of humanity. While the possibility of misuse and harm is grave, it should not be enough to dismiss the vast potential for good that deepfakes offer.
The views expressed above are solely of the authors.
 Nicholas Caporusso, “Deepfakes for the Good: A Beneficial Application of Contentious Artificial Intelligence Technology”, January 2021.
 Danielle K. Citron and Robert Chesney, “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security”, California Law Review, December 2019.
 Ibid 1
 Ibid 2