
What F1 Can Teach Us About Protecting Personal Image Rights in The Age of AI and Deepfakes


With the launch of F1’s 75th season, I figured it was a good moment to reflect on two prominent IP cases involving drivers and their teams. These cases have demonstrated how image rights are protected through intellectual property law. In this post, I will explain how they link to recent UK legislation criminalising deepfakes. It is through these cases that we can find innovative solutions to protect not only prominent figures’ image rights but also those of the layperson. Let’s dive in.

Former Team Principal Steiner

Last spring, a lawsuit hit the American court system. It was filed by former team principal Guenther Steiner, who argued that his former team, Haas, was using his likeness for promotions and merchandise without authorisation. (Haas in turn filed a trademark infringement suit over images Steiner had used in his book, “Surviving to Drive.” This post will not dive into those details, as that case was dismissed.) Steiner’s suit focused on unpaid commissions from those promotions and merchandise and the unauthorised use of his image. As registered Patent and Trade Mark Attorney Reiner Smit explained in an entertaining and comprehensive 2024 LinkedIn article, “the core of his complaint is clear: Haas benefitted from his reputation and expertise without [honouring] their contractual obligations.” Although the case was settled out of court via mediation, it remains an important likeness case when one considers the protections available to individuals. In the American legal system, it is possible to register and protect an image likeness as a trademark. Doing so requires the likeness to be used commercially, often in connection with the establishment of a brand. This affords intellectual property protection to the individual’s brand and reputation by providing a direct avenue of recourse when the trademark is infringed.

 

Unlike the US, the UK lacks a specific avenue for individuals to directly protect their likeness. Instead, parties must either trademark certain signatures and logos or rely on the tort of passing off. Both paths are complicated and depend on meeting certain criteria. UK trademark criteria are laid out in the Trade Marks Act 1994; you can find more information here. The tort of passing off is more complicated and more relevant to this article. To succeed in the tort, a party must establish the ‘classic trinity’ of elements. The first element is goodwill: a claimant must demonstrate that their business has a reputation related to their brand and its services, understood to mean that the brand has a distinctive element of goodwill that directly attracts customers. The second element is misrepresentation. As Partner and Head of Trade Marks Ben Evan explained in an article published by Harper James, claimants must show that there was “a false representation…[leading] the public to believe that their goods or services are those of the claimant.” The final element is damage: it must be shown that the misrepresentation has harmed, or is likely to harm, the reputation or success of the business. These damages can range from lost profits to reputational harm.

Former Driver Edmund Irvine

Steiner’s case is an interesting case study in image rights, especially when viewed alongside the 2002 F1 case of Edmund Irvine & Tidswell Ltd v Talksport Ltd, tried in the UK. Edmund Irvine is a well-known public figure and former F1 racing driver, which means his image and likeness hold significant contractual value for promotions and other commercial interests. The 2002 case was brought over an unauthorised and altered image of Mr Irvine that Talksport Ltd had used to promote its radio show. The court was tasked with deciding two things: 1. whether the use of the image was actionable under the tort of passing off, and 2. whether goodwill existed in his image that could be harmed by its unauthorised use. The court ruled in favour of Irvine, finding that the use of his image was unauthorised and met the criteria for passing off under UK standards. It also decided that the goodwill in Irvine’s image was a property right which could be protected. This case shaped image likeness rights within the UK, where options for recourse for public and private figures are limited, and served as a stepping stone, expanding the precedent individuals could rely on to protect their reputations.

 

So, how do these cases inform our modern day? They demonstrate a potential avenue for not just sportspersons and public figures but also victims of deepfakes to bring additional action in court to protect themselves and their reputations.

 

As AI Law detailed in a 2023 article, “Unauthorised entities can create fake profiles, pages, or accounts to deceive consumers.” This exposure puts businesses at risk of damage to their revenue and, more importantly, their reputation. This is especially true where the business is built on the brand of one or a few individuals. (Think of the Beckhams or Kardashians, for example.) While this is concerning, especially when commercial and IP law is involved, quickly developing AI technology has shifted this issue away from public personas and businesses to one that the layperson now must worry about, too.

 

In 2018, the UK government took a proactive step with the Data Protection Act 2018, which implemented the GDPR in the UK (retained after Brexit as the UK GDPR). These laws control how personal data, including images from which individuals can be identified, is used. If you live in the UK, you are likely familiar with this regime in its most common form: the pop-ups asking for cookie consent when you enter a new website. However, it also regulates how companies may use your images and details from social media and other platforms. This legislation was a strong step towards protecting the greater public in more everyday scenarios than those in the Irvine and Steiner cases. However, it has not been comprehensive enough to stop AI deepfake developments from harming individuals. If you are unfamiliar with the concept of deepfakes, you can learn more from my previous post.

 

Deepfakes have been rapidly increasing as the technology to create them becomes more accessible. To curb the impact, the UK passed the Online Safety Act 2023, which created an offence of sharing sexually explicit deepfake images. While commendable, it wasn’t enough. In April 2024, a new offence of creating such images was proposed and has been working its way through the legislative process as an amendment to the Criminal Justice Bill 2023. As Minister for Victims and Safeguarding Laura Farris explained, the amendment’s purpose is to “[send] a crystal clear message that making this material is immoral, often misogynistic, and a crime.” It does this by imposing heavy fines on deepfake perpetrators and jail time in more serious cases. The hope is that adult victims of deepfakes will have recourse in the courts. (Children are already protected through existing laws criminalising child sexual abuse images.) The amendment is impressive and a much-needed development within the legal sector, helping to create safety and protection standards against largely unchecked AI technology. But what happens when these laws, acts, and amendments aren’t sufficient? The F1 cases discussed provide two potential solutions.

 

The solution I believe to be most innovative can be drawn from Irvine’s 2002 case. Using similar legal reasoning, individuals could theoretically argue against the unauthorised use of their images, including within AI and deepfakes. While most people aren’t public figures with lucrative deals attached to their likeness, it is arguable that each person curates a personal brand through their mannerisms, how they present themselves, and the reputation they hold. Individuals could be said to have goodwill invested in themselves via this personal brand. Misrepresentation occurs when AI or deepfakes utilise any part of their likeness. This misrepresentation damages the victim because it directly influences their personal and professional reputation and, thus, their ability to receive promotions and other job-related benefits. A solution to deepfakes inspired by the reasoning in Irvine’s case would rest on a solicitor’s ability to craft a solid and persuasive argument, allowing victims of AI and deepfakes to bring a civil claim of passing off using the ‘classic trinity’ against the defendant if criminal procedures fail to provide justice. A tricky hypothetical, but legally intriguing.


A much less experimental but more time-consuming option is for UK legislators to amend and adjust the current system to allow trademark protections for image likeness similar to those available in the USA. To do so, the UK would need to implement laws and regulations allowing an image likeness to be trademarked. This would open the door for trademark suits and passing off claims to protect reputations. Similar reasoning to the above hypothetical could then be used to help protect laypersons’ reputations and image likeness rights when deepfakes are involved.

 

In summary, legislation and precedent will need to adapt as rapidly as, if not quicker than, changing technology and deepfake-related challenges. They must do this to protect not only prominent public figures’ rights to themselves and their work but also all the citizens under their purview. Only by learning from these F1 cases and others in the sector can the law continue to serve as the protective barrier it is meant to be.

  

Resources:


Edmund Irvine & Tidswell Ltd v Talksport Ltd [2002] 2 All ER 414 (case summary available from the Law Gazette)

 
 
 



© 2023 by Kayla Konnection. Proudly created with Wix.com
