The Deepfake Dilemma: UK and International Approaches to Regulating AI-Generated Pornography
AI and Deepfakes: A New Technology
Language-learning chatbots. Breathtaking cinematic special effects. A computer-generated chess opponent. All these things are enabled, or improved, by the introduction of AI technology. Yet is this long-anticipated technological advancement as positive as it seems? For all its many uses, under-regulated AI technology poses one of the greatest threats to personal privacy in the modern world.
‘Deepfakes’ are AI-generated photographs or videos that digitally impose one person’s face onto another body. This is achieved by submitting videos and images to deepfake software, which analyses and replicates the human faces depicted.[1] Deepfakes may be target-generic – in which the imposed face is a fictional one crafted from images of several people – or target-specific – in which an almost exact replica of a specific person’s face is created and imposed upon another body.[2] The creation of target-specific deepfake technology has culminated in a crisis of AI-generated pornographic content depicting people without their consent, produced through ‘Nudify’ applications which are specifically designed and marketed for this purpose.[3]
This gives rise to a new way for perpetrators to invade a person’s sexual privacy, and the technology has already generated several notable and high-profile cases both in the UK and overseas. Notably, American singer-songwriter Taylor Swift was victimised by deepfake pornography posted publicly on the social media app X (formerly Twitter), which garnered over 45 million views within 19 hours.[4]
This content has even posed a threat to democracy: the SDLP’s Cara Hunter and the DUP’s Diane Forsythe both became victims of such attacks, which have been speculated to have affected the outcome of the Northern Irish elections.[5]
This issue is widespread. A report by Deeptrace Labs[6] identified 14,876 deepfakes published online, of which 96% were pornographic. This is not a niche, undiscoverable corner of the internet accessible only to experienced cybersex criminals. In fact, the top four deepfake pornography websites alone have accumulated over 13 billion views thus far.
With the rising prevalence of misogynistic harassment and beliefs online,[7] the perfect backdrop has been set for this threat to escalate into a crisis of online sexual harassment. Adequate legislation is therefore an integral means of protecting the safety of women and minors online.
Government Action Against Deepfake Pornography in the UK
Section 188 of the Online Safety Act 2023[8] (‘OSA’) inserts new offences into the Sexual Offences Act 2003 as s.66B, criminalising the sharing of, or threatening to share, sexual videos or images of an individual, including material which merely ‘appears to show’ them – thereby capturing AI-generated content. The offence carries a maximum sentence of two years’ imprisonment.
While the inclusion of deepfake pornography as a criminal offence in national legislation is a significant advancement in addressing the issues presented by this technology, several limitations on the scope of the Act restrict its effect.
A Problem Shared is a Problem Criminalised
An individual’s liability for the offences introduced under s.188 depends upon evidence that they shared, or threatened to share, the deepfake content. The Act wholly fails to address the root issue: the creation of such material. In imposing this requirement, the Act overlooks the initial violation inherent in the creation and possession of non-consensual pornographic content.
Is the subject any less a victim if the content is created separately by several individuals for their own use than if one sole creator shares the image amongst multiple persons?
The passing of the Act in the House of Lords met with considerable pushback, amid concerns that Parliament was shying away from upsetting ‘big tech’ by declining to ban Nudify applications themselves.[9]
A ‘Minor’ Problem
Parliament’s refusal to ban Nudify applications in this country significantly reduces the Act’s effectiveness in tackling a large portion of individuals sharing this content: those who are underage. Minors cannot be held criminally liable for the offences listed under the Act, yet it is safe to conclude that they make up a significant portion of those who engage with the material.[10] Thus, there is a legal blind spot in which offences committed against victims by minors cannot be prosecuted – an issue which could easily be resolved if Nudify applications themselves were regulated by the government.
Future Actions in the UK
In response to criticism after the passing of the OSA, Viscount Camrose insisted that the Act abided by recommendations from the Law Commission, which advised that banning the sharing of content, rather than the applications themselves, would be the most suitable course of action, with further action to be taken once certain circumstances were established.[11] These circumstances include a risk reference, a “working relationship” between the government and AI laboratories, and the means to gather evidence to prosecute effectively.[12] Further, the recently introduced Non-Consensual Sexually Explicit Images and Videos (Offences) Bill[13] seeks to introduce offences of “taking” and “soliciting” sexually explicit deepfake content. However, these offences carry a maximum sentence of only six months’ imprisonment.
The Bill would help deter victimisation by resolving the problems of the share requirement. However, this alone is insufficient. Criminalisation does not tackle deepfake content created and shared by minors. Nor does it promise justice for most victims, given the scope of the issue and internet anonymity. Moreover, the light sentencing would limit any potential deterrent effect. Ultimately, the measures described by Viscount Camrose do little to answer the concerns raised about Parliament’s restricted approach to this issue. If Nudify applications themselves were banned and a restrictive approach taken to the accessibility of AI tools, there would be little need to establish these complex criteria. This would be best implemented through binding legislation governing the public accessibility of AI tools, informed by the AI Risk Repository, which assesses the potential social risks of different AI tools. In the meantime, the violation of the privacy of persons victimised by this technology will largely persist: the limitations of the share element and the age restrictions on liability leave a significant number of deepfakes beyond prosecution.
Considering International Approaches to the Issue of Deepfake Pornography
America
American federal law has yet to implement any measures against deepfake pornography;[14] however, some individual states, such as Texas and Virginia, have implemented their own legislation. Many of these state laws are very limited in scope, with some states, such as Louisiana, restricting criminalisation to deepfake content depicting minors.[15] Efforts to introduce these offences at a national level are yet to be passed, though the DEEPFAKES Accountability Bill[16] proposes an approach similar to that of the Online Safety Act. American courts have also been dismissive of concerns about the legal issues posed by AI, as demonstrated by the dismissal of Clarkson v OpenAI,[17] which had the potential to be a landmark case in this field. The case concerned the alleged misuse of data by Microsoft and OpenAI for their internet chatbot, ChatGPT. Though the Court recognised the threats posed by AI, it rejected the notion that AI regulation is a matter for the courts. This approach was summarised by U.S. District Judge Vince Chhabria, who told the plaintiffs that “they are in a court of law, not a town hall meeting.”[18] Although deepfake pornography was not the focus of the case, it illustrates an alarming reluctance on the part of American courts to address ethical breaches of AI technology.
Though American legislation on this matter is hugely limited, its criminalisation of the possession of deepfakes depicting minors would be a useful addition to the OSA, which currently does not explicitly address issues surrounding the age of the victim.
South Korea
The South Korean government has taken harsh action against deepfake technologies following a national scandal exposing a Telegram chatroom with over 220,000 members dedicated to the creation and distribution of deepfake pornography of women and minors in universities and schools.[19] Under South Korean law, the possession and/or consumption of deepfake pornography can result in up to three years’ imprisonment.[20] This harsh punishment was prompted by extreme public outrage and public sensitivity to cybersex crime following the exposure of another large-scale non-consensual pornography scandal, the ‘Nth Room’ case.[21]
As far as is publicly known, the UK has not witnessed a scandal of this gravity. However, much can be learned from the action taken by the South Korean government, which has faced the largest deepfake pornography problem internationally. It would be advisable for the UK government to consider similarly harsh approaches to prevent this issue from escalating to such gravity.
EU
The EU’s AI Act[22] details the legal requirements for distributing deepfake content. It places an obligation upon persons sharing such content to clearly state that the material is falsified. In tackling the issue of deepfake pornography, however, the Act does little to alleviate concerns. With online chatrooms and pornographic websites dedicated to the informed creation, consumption and distribution of deepfake pornography, very little of the concern on this subject relates to any lack of awareness of its inauthenticity. The EU’s Directive on Gender-Based Violence[23] addresses the issue of non-consensual digital distribution of sexual content, including that which is AI-generated. However, it requires that there be a “multitude” of recipients, raising the same concerns as the ‘share requirement’ in UK law while setting an even higher standard for victims to receive justice.
Moving Forward
Overall, while the UK has made evident efforts to legislate punitive measures against the crisis of deepfake pornography, the actions taken thus far remain insufficient: they render only a minority of cases prosecutable.
During a period of rising aggression towards women and minors, Parliament must take a more restrictive approach to this growing issue. The Non-Consensual Sexually Explicit Images and Videos (Offences) Bill, if passed, would help tackle the problem to some extent. Further, drawing on international legislation – including harsher punishments and the criminalisation of all consumption of this content – would further deter the use of deepfake pornography. Ultimately, however, legislators should begin by addressing the dangers of unregulated deepfake technology itself.
REFERENCES AND SOURCES:
[1] Phil Swatton and Margaux LeBlanc, ‘What are deepfakes and how can we detect them?’ (The Alan Turing Institute, 7 June 2024) <https://www.turing.ac.uk/blog/what-are-deepfakes-and-how-can-we-detect-them> accessed 20 November 2024
[2] Chunlei Peng et al, ‘Deep Visual Identity Forgery and Detection’ (2021) Scientia Sinica Informationis <https://www.sciengine.com/SSI/doi/10.1360/SSI-2020-0064>
[3] Suzie Dunn, ‘Legal Definitions of Intimate Images in the Age of Sexual Deepfakes and Generative AI’ (2024) 69 McGill Law Journal <https://ssrn.com/abstract=4813941> accessed 21 November 2024
[4] Harriet Alexander, ‘Lewd Taylor Swift AI Images Likely Originated in a Telegram Chat Group Before Being Viewed on X 45 Million Times in Just 19 Hours as Lawmakers Call for Legislation’ (Daily Mail, 26 January 2024) <https://www.dailymail.co.uk/news/article-13008871/taylor-swift-pornographic-ai-deepfake-telegram-chatroom.html>
[5] Seanín Graham, ‘Advocacy group condemns use of fake porn to harass female election candidates in North’ (Irish Times, 28 June 2022) <https://www.irishtimes.com/politics/2022/06/28/advocacy-group-condemns-use-of-fake-porn-to-harass-female-election-candidates-in-north/>
[6] Henry Ajder, Giorgio Patrini, Francesco Cavalli and Laurence Cullen, ‘The State of Deepfakes: Landscape, Threats, and Impact’ (Deeptrace, September 2019)
[7] ‘Natasha Kaplinsky, Misogynistic Influencers, Professor Joanna Bourke, Dr Rebecca Gomperts’ (BBC Sounds, 12 January 2023) <https://www.bbc.co.uk/sounds/play/m001gx0q>
[8] Online Safety Act 2023
[9] HL Deb 13 February 2024, vol 836
[10] Ofcom, ‘A deep dive into deepfakes that demean, defraud and disinform’ (Ofcom, 23 July 2024) <https://www.ofcom.org.uk/online-safety/illegal-and-harmful-content/deepfakes-demean-defraud-disinform/>
[11] HL Deb 13 February 2024, vol 836
[12] Ibid.
[13] Non-Consensual Sexually Explicit Images and Videos (Offences) HL Bill (2024-2025) 26
[14] Michelle Graham, ‘Deepfakes: Federal and state regulation aims to curb a growing threat’ (Thomson Reuters, 26 June 2024) <https://www.thomsonreuters.com/en-us/posts/government/deepfakes-federal-state-regulation/>
[15] Ibid.
[16] DEEPFAKES Accountability Bill, HR 5586, 118th Congress (2023)
[17] Clarkson v OpenAI 2023
[18] Ibid.
[19] Jean Mackenzie, Leehyun Choi, ‘Inside the deepfake porn crisis engulfing Korean schools’, (BBC News, 3 September 2024) <https://www.bbc.co.uk/news/articles/cpdlpj9zn9go>
[20] Emmet Lyons, ‘South Korea set to criminalize possessing or watching sexually explicit deepfake videos’, (CBS News, 27 September 2024) <https://www.cbsnews.com/news/south-korea-deepfake-porn-law-ban-sexually-explicit-video-images/>
[21] Joohee Kim and Jamie Chang, ‘Nth Room Incident in the Age of Popular Feminism: A Big Data Analysis’ (2021) vol 14 <https://muse.jhu.edu/article/798133>
[22] Regulation (EU) 2024/1689 (Artificial Intelligence Act)
[23] Directive (EU) 2024/1385 of the European Parliament and of the Council of 14 May 2024 on combating violence against women and domestic violence [2024] OJ L 1385