
Law In Lurch: Faking The Face of Protection Under Performer’s Rights

Updated: Sep 14, 2024



 

I.  ABSTRACT

Since Deepfakes first appeared in 2017, legal professionals have been aggressively searching for legal measures to limit them. In this article, the term “Deepfakes,” used to describe artificial intelligence-produced fake performances, is defined, and performers’ rights are proposed as a viable regulatory tool. In comparison to current legal remedies, reforms, and suggestions put forward to control Deepfakes, performers’ rights constitute a more nuanced approach to the problems created by Deepfake technology. In arguing that performers’ rights are a suitable regulatory response to Deepfakes, this paper reveals a contradiction: while performers’ rights are a desirable way to control Deepfakes, the technology confounds the extent to which they may be applied. This is because Deepfakes make use of performers’ rights–protected content (performances) in a manner unanticipated by intellectual property policy-makers when these rights were established in law. Despite this limitation, if properly reformed, performers’ rights remain one of the most attractive legal remedies for controlling Deepfakes. To close this gap, this article suggests two reforms to performers’ rights. The first entails an ad hoc adjustment of performers’ rights to ensure that performances altered by Deepfakes are covered. The second and favoured option substitutes a copyright system for the performers’ rights system. Making this small but significant shift in legal frameworks would enable a synchronised international approach to the technology, rather than fragmented, uneven, and consequently ineffectual protection against unlicensed Deepfakes.[i]

 

II.  INTRODUCTION TO DEEPFAKES

In the 1950s and early 1960s, AI was in its embryonic stage. In 1950, Alan Turing proposed a criterion (later dubbed the Turing test) for assessing machine consciousness: if a machine could indeed imitate human conscious behaviour, would it not be conscious? Turing's question shaped the ideology of artificial intelligence (French, 2000). In the summer of 1956, McCarthy, Minsky, and other scientists met at Dartmouth College in the United States to discuss "how to use machines to realistically simulate intelligence." There they first proposed the term "artificial intelligence" (AI), establishing AI as a field of study, and the meeting guided and encouraged the field's advancement for many years. Turing's work and the Dartmouth Symposium established an essential framework for AI research.[ii]

Deepfakes, a term derived from "deep learning" and "fake," are made using methods that overlay face images of a target person onto videos of a source person, producing videos of the target person acting or saying what the source person says; this falls under the face-swap subcategory of deepfakes.[iii] Typically, deepfakes are built on Generative Adversarial Networks (GANs), which involve the joint training of two adversarial neural networks. Many computer vision problems have seen substantial progress thanks to GANs. The architecture, first introduced in 2014, can produce realistic-looking visuals that even a human cannot reliably tell apart from real images.[iv]
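To make the adversarial setup concrete, the following is a minimal, illustrative sketch of a GAN training loop in Python, assuming PyTorch. All network sizes and names here are hypothetical choices for exposition, not details of any real deepfake system.

```python
# Minimal sketch of GAN adversarial training (illustrative only).
# Assumes PyTorch; all module shapes and names are hypothetical.
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 64, 28 * 28  # toy sizes for illustration

# Generator: maps random noise to a flattened "image".
G = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(),
                  nn.Linear(256, IMG_DIM), nn.Tanh())
# Discriminator: scores how "real" an image looks.
D = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch: torch.Tensor) -> None:
    n = real_batch.size(0)
    fake_batch = G(torch.randn(n, LATENT_DIM))

    # 1) Train the discriminator to separate real from fake.
    opt_d.zero_grad()
    loss_d = (bce(D(real_batch), torch.ones(n, 1)) +
              bce(D(fake_batch.detach()), torch.zeros(n, 1)))
    loss_d.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    opt_g.zero_grad()
    loss_g = bce(D(fake_batch), torch.ones(n, 1))
    loss_g.backward()
    opt_g.step()

train_step(torch.randn(32, IMG_DIM))  # stand-in for a batch of real images
```

The key design idea is the arms race: the discriminator improves at spotting fakes, which in turn forces the generator to produce ever more realistic output.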

Deepfakes are videos or audio files that convincingly portray someone's appearance, voice, or performance. To create a digital impersonation, they modify real (authentic) footage of the subject; they are, in essence, digital look-alikes or sound-alikes. Deepfakes are created using free, online machine learning tools, and they have spread because the technology is so readily available. Convincingly digitising people's faces or voices used to be the domain of well-resourced content creators with access to expensive computer-generated imagery and other visual effects equipment. Deepfakes have become more common because machine learning has decentralised the production of high-quality digital impersonation.[v]

AI-generated fake videos that can easily deceive ordinary users are now commonplace. Their emergence reflects the improved ability of modern computers to simulate reality. Modern cinema, for example, relies heavily on computer-generated sets, scenery, characters, and visual effects; because such scenes are virtually indistinguishable from reality, virtual sets and props have replaced physical ones. As one of the most recent trends in computer visuals, deepfakes are created by programming artificial intelligence to make a person in a recorded video appear to be someone else.

The term “deepfake” is derived from deep learning, a type of artificial intelligence.[vi] As the name suggests, deepfakes use deep learning to generate images of fictitious events. Deep learning algorithms can teach themselves to solve problems involving large data sets. The technology is then used to replace faces in videos and other digital content, creating fake media with a realistic appearance. Moreover, deepfakes are not limited to videos alone; the technology can also be used to create images, audio, and more.

There are numerous ways to create deepfakes; the most common method, however, relies on face-swapping using deep neural networks that employ autoencoders. Typically, a target video serves as the basis for the deepfake, and AI then uses a collection of short clips of the desired person to replace the actual person in the target video.

The autoencoder is a deep learning AI programme that analyses multiple video clips to learn how a person appears from various angles and in various situations. By identifying common characteristics, it maps and then substitutes that individual's face onto the one shown in the target video.
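The following is a minimal sketch, under stated assumptions (PyTorch; toy layer sizes; hypothetical names), of the shared-encoder, two-decoder arrangement that face-swap autoencoders commonly use. It is illustrative only, not a working deepfake tool.

```python
# Sketch of the shared-encoder / two-decoder autoencoder idea behind
# face-swap deepfakes (illustrative; names and sizes are hypothetical).
import torch
import torch.nn as nn

FACE_DIM, CODE_DIM = 64 * 64, 128  # toy flattened-face and latent sizes

# One encoder learns features common to both faces (pose, lighting, expression).
encoder = nn.Sequential(nn.Linear(FACE_DIM, 512), nn.ReLU(),
                        nn.Linear(512, CODE_DIM))
# One decoder per identity learns to reconstruct that person's face.
decoder_a = nn.Sequential(nn.Linear(CODE_DIM, 512), nn.ReLU(),
                          nn.Linear(512, FACE_DIM), nn.Sigmoid())
decoder_b = nn.Sequential(nn.Linear(CODE_DIM, 512), nn.ReLU(),
                          nn.Linear(512, FACE_DIM), nn.Sigmoid())

# Training phase (not shown): encoder + decoder_a reconstructs person A's
# faces, encoder + decoder_b reconstructs person B's, sharing one encoder.

def swap_face(face_of_a: torch.Tensor) -> torch.Tensor:
    """Encode person A's face, then decode it as person B.

    The shared code captures pose and expression; decoder_b renders that
    pose and expression with B's identity: the core face-swap trick.
    """
    code = encoder(face_of_a)
    return decoder_b(code)

fake = swap_face(torch.rand(1, FACE_DIM))  # stand-in for a real frame
```

Because both identities share one encoder, the latent code captures pose, expression, and lighting; decoding that code with the other person's decoder is what produces the swap.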

Generative Adversarial Networks (GANs) are another type of machine learning that can be employed to create deepfakes. GANs are more sophisticated because they run multiple rounds of identifying and correcting flaws in the fake, making the output harder for deepfake detectors to catch. Experts believe that as the technology advances, deepfakes will become significantly more sophisticated.

Creating deepfakes is now so simple that even novices can do it with the assistance of numerous apps and programmes. A vast quantity of deepfake software can be found on GitHub, an open-source community for software development. At the same time, online users have become more knowledgeable and adept at detecting fake news. To improve cybersecurity, more deepfake detection technology must emerge to prevent the spread of misinformation.

Deepfake algorithms sometimes retain the lighting of the clips used as models for the fake video, so inconsistent lighting in the target video can indicate a deepfake. Likewise, if the video is fabricated but the original audio is not similarly manipulated, the audio may not match the individual on screen.[vii]
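As a toy illustration of the lighting cue described above, the sketch below flags frames whose overall brightness deviates sharply from the rest of the clip. It assumes OpenCV and NumPy are available; the filename is a placeholder, and real detectors are far more sophisticated than this single heuristic.

```python
# Toy illustration of one detection cue: flagging frames whose overall
# lighting deviates sharply from the clip's norm. Not a real detector.
# Assumes OpenCV (cv2) and NumPy; "clip.mp4" is a placeholder path.
import cv2
import numpy as np

def lighting_outlier_frames(video_path: str, z_thresh: float = 3.0) -> list[int]:
    cap = cv2.VideoCapture(video_path)
    brightness = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        brightness.append(gray.mean())  # mean intensity as a crude lighting proxy
    cap.release()

    values = np.asarray(brightness)
    z = (values - values.mean()) / (values.std() + 1e-9)
    # Frames whose brightness is a statistical outlier merit a closer look.
    return [i for i, score in enumerate(z) if abs(score) > z_thresh]

suspicious = lighting_outlier_frames("clip.mp4")  # placeholder filename
print(f"{len(suspicious)} frames with inconsistent lighting")
```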

In one instance, Umang Agarwal, an ardent fan of Hrithik Roshan, regularly shared videos of Roshan dancing, sending personal messages to politicians and actors, and filming behind the scenes in exotic locations. During the Hindu festival of Rakhi in August, Agarwal posted a remarkable video: in the 16-second clip, Roshan said, "Hello, Umang. Happy Rakhi to you." The video drew a large audience, and followers of the channel could not believe Agarwal had obtained a greeting from their favourite actor.

Agarwal was not the only recipient. Fans all over social media received the same kind of video, each addressed by name. The greetings were not exactly real: they were deepfakes made for the confectionery company Cadbury by Rephrase.ai, a Bengaluru-based startup leading the way in the commercial use of AI-generated avatars that look like real people. Roshan had licensed his image to Cadbury, so the company, with Rephrase.ai's help, could make him say whatever it wanted. Users simply had to buy a limited-edition chocolate box, scan a QR code, and type in a name for the actor to say.[viii]

 

III.  COPYRIGHT AND AI DEEPFAKES INTERFACE

Most of the arguments about copyright and AI centre on the question of whether, or to what extent, copyrightable works require human action. The Berne Convention does not clearly define the word “authorship.”[ix] It is very likely, however, that the convention applies only to human creators, which would mean it does not cover content made by AI. In other words, the Berne Convention's minimum standard applies only to works made by people. U.S. copyright law protects “original works of authorship” (17 U.S.C. § 102(a)).[x] This language is understood to apply only to creations made by humans. For that reason, the law does not protect “creations” made by animals, as the “monkey selfie” case illustrates.[xi] The same applies to content made by machines or, in this case, by artificial intelligence: such content cannot be protected by copyright unless it can be shown that a person made it. As the court observed in Naruto: "Nowhere in the [Copyright Act] does it say anything about animals. When looking at authorship under the Act, the Supreme Court and Ninth Circuit have often talked about 'persons' or 'human beings.'" Naruto was therefore not an "author" in the sense of the Copyright Act.[xii]

Since there is no single EU copyright code but rather a number of directives, each covering a different area, it is not easy to distil what "authorship" means under EU law.[xiii] Some direction, however, can be drawn from the wording of the various directives. On the one hand, the "own intellectual creation" standard is set for databases, computer programmes, and photographs.[xiv] On the other hand, the Copyright Directive[xv] speaks of "authors" and "works." Member states may nonetheless decide to extend greater protection to content made by machines and AI. Here, the UK Copyright, Designs and Patents Act 1988 is instructive.[xvi] Under this act, the author of a "computer-generated work" is the person who makes the arrangements necessary for the work to be created, a computer-generated work being one generated by a computer in circumstances such that there is no human author.

In the United States, the DEEP FAKES Accountability Act, 2019[xvii] has been proposed to stop the malicious use of deepfakes to spread misinformation. Under the proposed law, any audio or video that has been tampered with must carry a watermark and a clear text or audio notice warning viewers that they are experiencing a fraudulent representation. Failure by the creator to comply with these conditions is an offence. The measure also permits in rem litigation against deepfake content to have it declared materially false, and creates a private right of action allowing victims of harmful deepfake portrayals to seek compensation in civil court. Although the bill claims to address the problem of ‘modern technical fraudulent personation records,’ it is quite unlikely to do so: because watermarks are so simple to remove, it is nearly impossible to identify the original authors of malicious hoaxes. The bill does, however, lay crucial groundwork for addressing the issue. First, it gives well-intentioned creators a better idea of what is and is not legal when producing deepfake content. Second, the unprecedented injuries generated by deepfakes cannot be adequately addressed by existing criminal and tort law; whenever a malevolent actor is identified, however unlikely that may be, victims will have a legal remedy thanks to this Act. This is especially important for those who have suffered deepfake-induced image-based abuse.

In India, Section 52 of the Indian Copyright Act, 1957[xviii] lists works that are not considered infringing; this is the “doctrine of fair dealing.” In contrast to the US, fair dealing operates as an exception to copyright infringement, and the statute contains a long list of acts that do not amount to infringement. Even though the Indian position on fair dealing is often criticised for being too strict, it works well against deepfake technology deployed with bad intentions, because such use does not fall under any of the acts listed in Section 52 of the ICA.[xix] The rule, however, may not protect the use of deepfake technology for genuine purposes. Under Section 52(1)(a)(ii),[xx] Indian courts have also begun applying the idea of transformative use to the word ‘review.’ In University of Oxford and Others v. Narendra Publishing and Others,[xxi] it was pointed out that the courts have grafted the idea of ‘fair use’ onto ‘fair dealing’ as an exception that protects certain types of work because they benefit society as a whole. The only Indian precedents on transformative use so far concern guidebooks, which are treated as literary works; this cannot be applied to deepfakes. Section 57 of the ICA[xxii] confers the rights of paternity and integrity, in line with the moral rights recognised by the Berne Convention of 1886.[xxiii] When considering deepfakes, the Section 57(1)(b)[xxiv] right to integrity is very important, since deepfakes can be seen as distortions, mutilations, or modifications of a person's work. Civil and criminal liability provisions in Section 55[xxv] and Section 63[xxvi] allow for damages, injunctive relief, imprisonment, and fines. Some argue that these rules are enough to deter malicious deepfakes, but they do not protect people who make deepfakes for legitimate purposes.

Following the verdict in Myspace Inc. v. Super Cassettes Industries Ltd.,[xxvii] intermediaries can be held responsible for copyright violations under Section 79 of the Information Technology Act, 2000 (IT Act).[xxviii] The Indian Copyright Act is meant to protect works of art from unauthorised exploitation, which could arguably extend to deepfake technology. It prevents films, videos, public speeches, and lectures from being altered without permission, which would stop deepfake creators from using such audio-visual works for data sampling. But the Copyright Act does not protect a person's voice, so well-known voices can still be sampled and altered. Researchers from Exeter have argued that performers' rights should be reformed so that their likenesses cannot be copied using deepfake technology without their permission.[xxix]

Aside from the Copyright Act's limited coverage, there are other features of copyright that make it ill-suited to regulating deepfakes. One of the main reasons to obtain copyright protection is to safeguard the owner's financial interests, so that no one else can exploit the protected content for their own financial advantage. But with deepfakes, the market of the ‘owner’ of the data is not the same as that of the person who made the deepfake. For example, images shot for brand endorsements could be used by a deepfake creator for personal projects, which is permitted under Section 52(1)(a)(i),[xxx] and those projects could later be showcased as samples to win freelance contracts. Second, unlike wax statues, which need exact measurements of the celebrity, deepfakes are based on deep learning and can be made using data samples from the internet without the celebrity's involvement. As discussed, the original work that was sampled often cannot be traced, so even if the online data used for sampling was protected, the copyright owner would never know about the use.

Third, copyright is a complicated web of shared ownership, especially for works of art, where most contributors do not hold exclusive rights. In the case of movies, for example, the copyright is shared between producers, production companies, broadcasters, and others. In Fortune Films v. Dev Anand,[xxxi] the court held that an actor has no rights over his performance in a film, not even moral rights. So, when someone makes a deepfake using an actor's image from a movie, the actor cannot sue for copyright infringement and recover damages. It is also naive to think that the various rights-holders in an image or video would join together to sue for damages when they have suffered no financial loss from the use.

Also, Section 57 of the Copyright Act[xxxii] gives the author moral rights that limit how his or her work can be used. But what could be considered ‘work’ would depend on how similar the final product of the deepfake is to the original work. As discussed, deepfakes can make a completely new creation from the data sample, and the similarities are likely to be small enough that they can't be mistaken for the work of the author. The Section also gives the author the right to stop changes to their work if those changes would hurt their honour or reputation. However, the caveat is the same as in the first clause of the Section: the author has to prove that the contested work is actually a change of their work. This would be hard to do when the data sample and final output are very different.

From what has been said above, it is clear that the Copyright Act is not broad enough to protect the rights of people who are used in a deepfake without their permission. It's also important to know that deepfake technology is also used for good things like educating people who speak different languages, recreating lost personalities, and other projects that help the public. So, a total ban can't be the answer because it would stop the people who want to improve the technology and throw away the good things it could do for society in the future.

The DEEP FAKES Accountability Act[xxxiii] proposed in the US Congress might be a better regulatory model. It requires deepfakes to carry a watermark that makes them easy to distinguish from the original audio or video. This solves a problem the other laws could not, because it catches deepfakes at the point of creation and thereby reduces the likelihood of violations. Legislation could also require the deepfake's creator to credit the original content used for data sampling. This would ensure that the creator of the sampled content receives attribution and would also give them the right to restrain the deepfake under Section 57 of the Copyright Act. John Doe orders, which until now have been used only against pirates, could be deployed against deepfakes as well: it is usually hard to identify who made a deepfake, and such orders make it easier to have the content removed when it violates a person's basic rights.

The incoming personal data protection regime is a promising step in this direction. The Digital Personal Data Protection Act, 2023[xxxiv] provides the data ‘principal’ with a ‘right to be forgotten,’ under which a data fiduciary must delete all of that person's information from its systems. It is likely that a principal could invoke this right against their deepfake, as long as the content is not someone else's copyright.

 

IV.  CONCLUSION

Deepfakes complicate the system of performers' rights because performances are imitated without making a ‘recording’ or a ‘copy’ of a recording. Performers' rights do not cover deepfakes because the AI systems that make them neither record live performances nor copy recordings of live performances. A group of experts recently concluded that, among AI technologies, deepfake disinformation has the greatest potential to harm society.[xxxv]

The Copyright Act is not yet equipped to protect authors from the theft and misuse this technology enables. As it stands, the Act does not provide an effective regulatory framework. India needs a law that can govern how such technology is used from the moment it is invented; until then, the only way to curb abuse is to make the existing laws stricter.

AI is increasingly being used to make videos or sound recordings, based on real footage, that convincingly reproduce a person's face, voice, or performance. Intellectual property laws were made long before deepfake technology and so do not account for what it can do. Performers have the right to control the copies made of their work, but this does not extend to digital impersonation. A study by Dr. Mathilde Pavis of the University of Exeter Law School argues that performers should have a copyright over their work to stop people from using their likeness without permission.[xxxvi] Pavis said: "The system of performers' rights could be changed to a system of performers' copyright. This small but important change to legal systems could mean the difference between piecemeal, uneven, and ineffective protection against unauthorised deepfakes and a unified approach to the technology around the world. This change would give the law a more complete and long-lasting way to deal with deepfakes. It would make intellectual property easier to understand by getting rid of performers' rights, which are a subset of intellectual property rights, and putting them into the copyright system, which is already well-known. It would get rid of the old divide between authors and performers, which has no good reason in the modern world."

This change would give deepfaked individuals tradable property rights over their performance and likeness. Safeguards would be needed, such as preventing or limiting the full transfer of those rights to third parties. At present, a performer's rights protect the recording of a performance and copies of that recording: performers control only the fixed, recorded version of their show. The content of the recording, the performance itself, is not protected, so it can be re-enacted and imitated repeatedly without infringement. A performer thus has no ownership of the content or style of their performance.


[i] Daniel Lipkowitz, Manipulated Reality, Menaced Democracy: An Assessment of the DEEP FAKES Accountability Act of 2019, N.Y.U. J. Legis. & Pub. Pol’y Quorum (2020).

[ii] Yujia Zhai, “Tracing the Evolution of AI: Conceptualization of Artificial Intelligence in Mass Media Discourse.”

[iii] M. Feeney, “Deepfake Laws Risk Creating More Problems Than They Solve,” Regulatory Transparency Project of the Federalist Society, March 1, 2021, https://regproject.org/wp-content/uploads/Paper-Deepfake-Laws-Risk-Creating-More-Problems-Than-They-Solve.pdf.

[iv] Ilias Papastratis, “Deepfakes: Face Synthesis with GANs and Autoencoders,” https://theaisummer.com/deepfakes/ (accessed 4 Nov. 2022).

[v] Mathilde Pavis, “Regulating Deepfakes Using Performers’ Rights,” https://www.infolaw.co.uk/newsletter/author/mathildepavis/ (accessed 6 Nov. 2022).

[viii] Nilesh Christopher, “This Startup Is Creating Personalized Deepfakes for Corporations,” https://restofworld.org/2021/creating-personalized-deepfakes-for-corporations/.

[ix] Berne Convention for the Protection of Literary and Artistic Works, September 9, 1886, as revised at Stockholm on July 14, 1967, 828 U.N.T.S. 221.

[x] Copyright Law of the United States, 17 U.S.C. § 102(a).

[xi] Naruto v. Slater, 888 F.3d 418 (9th Cir. 2018).

[xii] This Selfie May Set a Legal Precedent, People for the Ethical Treatment of Animals (Sept. 22, 2015), https://www.peta.org/blog/this-selfie-may-set-a-legal-precedent/.

[xiii] P.B. Hugenholtz & J.P. Quintais, “Copyright and Artificial Creation: Does EU Copyright Law Protect AI-Assisted Output?,” 52 IIC (International Review of Intellectual Property and Competition Law) 1190–1216 (2021).

[xiv] (European Parliament and Council of the European Union 1996, art. 3(1); European Parliament and Council of the European Union 2006, art. 6; European Parliament and the Council 2009, art. 1(3); see also European Parliament and the Council 2019, art. 14).

[xv] (European Parliament and the Council, 2001)

[xvi] Copyright, Designs and Patents Act, 1988, c 48 (UK).

[xvii] DEEP FAKES Accountability Act, H.R. 3230, 116th Cong. (2019).

[xviii] The Copyright Act, 1957, § 52, No. 14, Acts of Parliament, 1957 (India).

[xix] Id.

[xx] The Copyright Act, 1957, § 52(1)(a)(ii), No. 14, Acts of Parliament, 1957 (India).

[xxi] Chancellor, Masters & Scholars of University of Oxford v. Narendra Publishing House, (2008) SCC OnLine Del 1058 (India).

[xxii] The Copyright Act, 1957, § 57, No. 14, Acts of Parliament, 1957 (India).

[xxiii] Berne Convention for the Protection of Literary and Artistic Works, September 9, 1886, as revised at Stockholm on July 14, 1967, 828 U.N.T.S. 221.

[xxiv] The Copyright Act, 1957, § 57(1)(b), No. 14, Acts of Parliament, 1957 (India).

[xxv] The Copyright Act, 1957, § 55, No. 14, Acts of Parliament, 1957 (India).

[xxvi] The Copyright Act, 1957, § 63, No. 14, Acts of Parliament, 1957 (India).

[xxvii] Myspace Inc. v. Super Cassettes Industries Ltd., (2016) SCC OnLine Del 6382 (India).

[xxviii] The Information Technology Act, 2000, § 79, No 21, Acts of Parliament, 2000 (India).

[xxx] The Copyright Act, 1957, § 52(1)(a)(i), No. 14, Acts of Parliament, 1957 (India).

[xxxi] Fortune Films International v. Dev Anand, (1978) SCC OnLine Bom 156 (India).

[xxxii] The Copyright Act, 1957, § 57, No. 14, Acts of Parliament, 1957 (India).

[xxxiii] Supra note xvii.

[xxxiv] The Digital Personal Data Protection Act, 2023, No. 22, Acts of Parliament, 2023 (India).

[xxxv] Milind Yadav, “Regulating Deepfake in India: In Light of the Copyright Framework,” Blog of the Intellectual Property and Technology Laws Society of NUJS.


