Digital, Commerce & Creative 101: Is this for real? The legal reality behind deepfakes
04 November 2024
Did this really happen? Did they really say that? Deepfakes, videos, audio or images generated by artificial intelligence to mimic real people, have quickly evolved from a novel technological curiosity into a sophisticated tool pushing the bounds of what is ethically and legally acceptable.
Deepfake technology offers boundless creative possibilities. Imagine having your favourite book or podcast narrated by your beloved celebrity in your native language, or witnessing the resurrection of long-gone characters on screen.
As this technology matures at an astonishing pace, the line between fake and authentic content becomes increasingly blurred. In a year when half of the world (including the US) is holding elections, deepfakes have ignited a fresh wave of concerns. Earlier this year, Scarlett Johansson called for legislation to outlaw AI deepfakes after OpenAI released a chatbot that sounded eerily like her. Meanwhile, a fake recording of a presidential candidate in Slovakia boasting about rigging the polls and raising the cost of beer went viral ahead of Slovakia’s presidential election. Some found it entertaining; others believed it was true.
In fact, a recent report by security experts at the World Economic Forum identified misinformation and disinformation as the most significant global threat in the next couple of years, ahead of war, inflation, or extreme weather.
The legal reality
Deepfakes present a myriad of legal issues that intersect with privacy, intellectual property, defamation, and regulatory compliance. Taking some of these in turn:
- Intellectual property rights: One of the most obvious legal concerns is the potential infringement of third-party intellectual property rights. Deepfakes require training on existing images, videos or audio, which may belong to someone else. Unauthorised copying or scraping of content to create a deepfake will likely amount to infringement unless an exception applies; a humorous deepfake, for instance, may be protected under a parody exception. There is also an ongoing debate over whether the output of the deepfake training process is eligible for copyright protection.
- Publicity rights: If a deepfake uses a talent's likeness without their consent, it could also violate their publicity or image rights. However, the level of protection afforded to these rights varies across jurisdictions and often remains untested or ill-equipped to tackle deepfake technology. In response, US lawmakers have recently proposed legislation that would create a federal property right protecting an individual’s image, voice and likeness from unauthorised digital replication. Notably, English law does not recognise a specific right of publicity; instead, a patchwork of legal rights can be used to protect various aspects of an individual's image and personality.
- Privacy concerns: A photo or video of a person used to generate a “faked” photo or video can, inevitably, be used to identify that person. Generating deepfakes is therefore likely to involve the processing of personal data (including biometric data such as a person’s voice) and could violate individuals’ privacy rights if not done lawfully.
- Consumer laws and advertising rules: The rules prohibiting misleading advertisements apply to deepfakes just as they do to any other content. However, deepfakes possess a unique ability to distort reality. This is a particular concern in, for example, the beauty and fashion industries, where deepfakes can create a false impression of a product’s effects and thereby fall foul of the applicable advertising rules.
- Online harms legislation: The Online Safety Act 2023 (“OSA”) in the United Kingdom and the Digital Services Act 2022 (“DSA”) in the EU, both of which focus on tackling illegal content, may also be relevant. Deepfakes that constitute misinformation remain something of a grey area, and the OSA falls short in addressing them. Although there were expectations that the Act would help combat misinformation, its only provisions on the subject are the establishment of an advisory committee to advise Ofcom on disinformation and misinformation, and modifications to Ofcom's media literacy policy to address the impact of dis/misinformation.
In the EU, efforts to combat disinformation have centred on strengthening the existing guidelines under the voluntary Code of Practice on Disinformation. Also worth noting is Article 40 of the DSA, which requires platforms to provide data to researchers working on "systemic risks", a category that includes misinformation. The legislation mandates access for researchers because non-governmental bodies play an important role in identifying and flagging nefarious content and the actors who spread it. The UK’s OSA has no comparable provision.
- Criminal legislation: It is estimated that over 95% of deepfakes involve non-consensual explicit content. To address this issue, the Online Safety Act introduced offences relating to the sharing of sexually explicit deepfakes. The previous UK government also proposed amendments to the Criminal Justice Bill that would have created a new offence of producing sexually explicit deepfakes. It remains to be seen whether the new government will pick up where the previous one left off.
- Defamation laws: Deepfakes can falsely attribute statements or actions to real people, thereby damaging their reputation, i.e. defaming them. However, pursuing a defamation claim involving deepfakes can be challenging, not least because of the costs of bringing an action, but also given the anonymous nature of many deepfake creators.
- AI-specific legislation: AI-specific legislation will also play a role in regulating deepfakes. Article 50(4) of the EU's AI Act requires deployers of deepfakes to disclose that the audio, video, or image has been AI-generated. While the AI Act does not outright ban deepfakes, the EU's AI Office is expected to develop codes of practice that provide further guidance on labelling deepfakes.
What if I am creating or using deepfakes?
Deepfake technology offers exciting opportunities to enhance efficiency and unleash creative potential in content creation, localisation, and personalisation. However, it is crucial to secure the necessary rights to create and use deepfakes. This is not only to help ensure compliance with legal requirements regarding third-party content, personal data, and talent likenesses, but also to manage relationships with those whose images or likenesses are used. Equally important is transparency: adhering to transparency obligations is vital to maintaining audience trust and avoiding potential PR fallout.
Let us know if you are dealing with deepfake fiction and want to avoid legal friction - we’re here to help.