In recent years, artificial intelligence (AI) has made significant advances across many fields, and one area where it has gained considerable attention is content creation. AI-powered writing tools have emerged that can generate high-quality written work, often indistinguishable from human-written content. This raises important questions: how should AI-generated content be created, attributed and protected?
Data sources and false attribution
Language models are trained on large amounts of existing data and learn to mimic its patterns and style. A model can also be trained on datasets containing the work of a particular author, allowing it to learn that author's writing style, vocabulary and habits and to generate new content that closely resembles them.
Indeed, the Times recently published an article on author Jane Friedman and her discovery of books for sale on Amazon that falsely attributed her as the author and which she believes were most likely generated by AI. However, there are existing protections in place that can be relied on where work is falsely attributed.
The rise of AI in content creation has sparked debates about the ethical and legal implications of using AI to produce written work. While AI can undoubtedly enhance productivity and efficiency, it also challenges traditional notions of authorship and intellectual property.
Section 9(3) of the Copyright, Designs and Patents Act 1988 provides:

“In the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.”
In the UK, the human user is therefore deemed to be (but does not have the right to be identified as) the author of AI-generated works. As AI continues to advance, we may see legislative changes that address the collaborative nature of AI and human creativity: for example, mandatory tagging to show the source of a work, or hybrid attribution statements such as “This content was generated with the assistance of an AI system”.
Who owns copyright in AI work?
Under this approach, AI systems are treated as a tool or instrument used by humans to produce content, which aligns with existing copyright law. Recognising AI systems themselves as content creators would raise difficult questions about legal rights and responsibilities: can an AI system hold copyright? Who would be liable for any legal issues arising from the content?
In the UK, AI-generated work is protected by copyright, but the legislation distinguishes between types of creator: computer-generated work is protected for 50 years from creation, while work with a human author is protected for 70 years from the author's death.
If work has been falsely attributed to you, there are several steps you can take:

- Know your rights: Familiarise yourself with UK copyright law, which grants certain rights to creators of original works, and understand the specific provisions on attribution and ownership of AI-generated content.
- Contact the attributor: Reach out to the individual or organisation that attributed the work to you. Explain the situation clearly, provide evidence that the attribution is wrong, and request that they correct it.
- Intellectual Property Office (IPO): If the attributor refuses to correct the attribution, or the situation escalates, consider raising the matter with the UK IPO, which can provide assistance and guidance on copyright-related matters.
- Seek legal advice: Consult a legal professional who specialises in intellectual property rights. We can advise on the specific legal options available to you, such as sending a cease and desist letter or pursuing a claim for copyright infringement.
This article is provided by Burlingtons for general information only. It is not intended to be and cannot be relied upon as legal advice or otherwise. If you would like to discuss any of the matters covered in this article, please contact Lydia Mills or write to us using the contact form below.