Deepfakes & Misinformation: The Ethics Behind AI

(This article was contributed by the SMU AISG Student Chapter)

Deepfakes are fabricated media, commonly involving the swapping of faces and/or the manipulation of facial expressions, that closely resemble the real thing but are in fact artificially created by leveraging deep learning, a form of artificial intelligence. To put it briefly, an artificial neural network is first fed thousands of images to train it to identify and reconstruct patterns such as faces. Once trained, the network can then be used to match and swap faces and expressions in videos and images. You might assume that such a complex process would only work in the hands of an artificial intelligence expert. However, there is a wide suite of readily available deepfake tools on the internet that lets anyone kickstart their own deepfake project with ease.
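The core idea behind many classic face-swap tools is a shared encoder paired with one decoder per identity. The toy sketch below, written under the assumption of flattened grayscale face crops and untrained linear layers (real tools use deep convolutional networks trained on thousands of images), is meant only to illustrate the data flow; all names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

FACE_DIM = 64 * 64      # flattened 64x64 grayscale face crop
LATENT_DIM = 128        # compressed representation of "a face"

# One encoder is shared across both identities, so after training it
# captures identity-agnostic features: pose, expression, lighting.
W_enc = rng.standard_normal((LATENT_DIM, FACE_DIM)) * 0.01

# One decoder per identity reconstructs that identity's appearance.
W_dec_a = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.01
W_dec_b = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.01

def encode(face):
    # Project a face into the shared latent space.
    return W_enc @ face

def decode(latent, W_dec):
    # Reconstruct a face from the latent code with one identity's decoder.
    return W_dec @ latent

# During training, each autoencoder (shared encoder + its own decoder)
# learns to reconstruct its own identity's faces. At swap time, a face
# of person A is encoded, then decoded with B's decoder, yielding B's
# appearance driven by A's pose and expression.
face_a = rng.standard_normal(FACE_DIM)
latent = encode(face_a)
swapped = decode(latent, W_dec_b)

print(swapped.shape)  # (4096,)
```

The key design choice is the shared encoder: because both identities pass through the same compression, the latent code cannot specialize to either face, so swapping decoders transfers appearance while preserving expression.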

The Threat

The most worrying aspect of deepfakes is their potential for spreading misinformation. Anyone could create a deepfake of an important public figure or celebrity saying or doing something highly inappropriate. A popular example is one of Barack Obama making highly inappropriate comments in his “public service announcement” (as seen here). Had it not been revealed as a deepfake, this video alone could have destroyed Obama’s reputation and credibility. While most deepfakes of politicians and celebrities currently on the internet serve as amusing content, they simultaneously reveal the horrible implications deepfakes can have on society. In the wrong hands, deepfakes could swing elections, create tensions and even incite violence.

[Image: face manipulated using a green screen. Source: BBC]

Deepfakes also pose a serious threat to the future if left unmitigated and unregulated in the digital world. Deepfake technology is growing more sophisticated by the day, making it ever harder to distinguish a real video from a fake one (Toews, 2019). This can potentially harm businesses, stock markets and personal reputations, as well as cause conflicts between countries. To understand its future impact, consider a scenario in which a malicious actor posts a fabricated video of the US president making racist remarks against Asians, and the video circulates across the globe. Such a situation could lead to riots and mass defamation of the president, ultimately hurting an entire nation.

The fight against deepfakes will require constant reinvention, as the technology is improving at a rate that outpaces even AI experts. In one prime example, a group of AI experts who tried to determine whether certain videos were deepfaked failed 40% of the time. Thus, technologies must be developed and people must be educated on this topic, so as to detect deepfakes and prevent them from creating havoc.

Over the past few years, misinformation, in the form of fake news, images, and videos, has been proliferating across the internet. While most of us are aware of the implications of deepfake technology, we tend to dismiss them as inconsequential more often than not. However, a closer look at the spread of the technology shows that we have good reason to be worried. The increased sophistication, ease of use, and “democratization of access” of deepfake-based mobile applications and software that enable ordinary individuals to propagate misinformation is a concerning trend indeed (Tung, 2019). Moreover, the code to generate deepfakes, along with different implementations of the algorithm, has been published as open source on the internet, making it particularly easy for anyone with basic knowledge of artificial intelligence, programming, and software development to create manipulated media.

As students and, more broadly, as members of society, we need to be concerned about the ethics of deepfakes. Is it really alright to turn a blind eye to the massive amount of misinformation that deepfakes enable? Do we not have a responsibility to speak up about the rise in cybercrime, including non-consensual pornography, that this technology has enabled? According to Jaiman (2020), “creating a false narrative using deepfakes is dangerous and can cause harm, intentional and unintentional, to individuals and society at large.” Those of us who engage in the creation of deepfakes, including big technology companies like Google and Microsoft that offer the capabilities to generate them, arguably have a strong moral obligation to regulate the use of such media and ensure that it is deployed ethically.

Individuals, Companies and Governments

When it comes to combating the malicious use of deepfakes, there are three main actors involved: individuals, companies, and governments. The fight against the weaponization of this technology can succeed only if all three cooperate. Individuals have the responsibility to educate themselves in media literacy and to sharpen their critical and analytical thinking skills. Artificial intelligence and associated technologies, however powerful and beneficial, should neither be overhyped nor seen as a panacea for societal problems. As mentioned previously, companies, including social media platforms such as Facebook, Snapchat, Instagram, and TikTok, also have ethical and social obligations to frame and document community standards and posting guidelines that discourage malicious practices. Additionally, if content is found to be intentionally or unintentionally harmful to an individual or a group of individuals, guidelines must be in place to take it down or limit its sharing. Governments, in turn, bear the responsibility of overseeing the practices of both individuals and corporations.

In a nutshell, only a mix of technological, socio-political, and regulatory measures can effectively address the magnitude of ethical challenges posed by deepfakes and the AI behind the technology.

Written by:

Nandini Sangeetha Nair, Rohan Manoj Kuruvilla, Lim Zhi Hao
– SMUAI Subcommittee



The views expressed in this article belong to the SMUAI Subcommittee and may not represent those of AI Singapore.