“The rise of powerful AI will be either the best or the worst thing ever to happen to humanity. We do not yet know which.”
Stephen Hawking
The advent of Artificial Intelligence (AI) has ushered in an era of unprecedented technological advancement, transforming industries and redefining the boundaries of creativity and innovation. With the rapid growth of AI, however, new and complex legal questions have emerged, particularly concerning intellectual property (IP) rights. As AI systems become increasingly capable of generating content, products, and inventions, the traditional frameworks of IP law are being tested in ways that were previously unimaginable. This article explores the impact of AI on intellectual property, focusing on the legal challenges it presents and the evolving questions of ownership in the age of AI.
Multiple newspapers are suing ChatGPT-maker OpenAI and Microsoft, alleging that the two companies used their copyrighted content to train AI systems. This is not the first lawsuit claiming copyright infringement against organizations specializing in AI, and given the pace of innovation in the technology field, it will likely not be the last.
AI technologies, particularly in the realms of machine learning and natural language processing, have reached a level of sophistication where they can autonomously create music, art, literature, and even inventions. These capabilities raise fundamental questions about who holds the IP rights to works created by AI. Traditionally, IP laws have been designed to protect the rights of human creators and inventors. However, as AI systems play a more significant role in the creative process, the lines between human and machine authorship are becoming increasingly blurred.
Copyright Law and AI
One of the most pressing issues at the intersection of AI and IP is copyright ownership. The U.S. Copyright Office takes the position that copyright law extends only to works created by human beings. The Copyright Act of 1976 provides copyright protection to “original works of authorship fixed in any tangible medium of expression, now known or later developed, from which they can be perceived, reproduced, or otherwise communicated, either directly or with the aid of a machine or device.” The term “authorship” in copyright law refers specifically to human authorship, a prerequisite for a valid copyright that the Supreme Court of the United States has affirmed. Copyright law protects the original intellectual conceptions of human beings, and the Copyright Office will refuse to register a claim if the work was created by a non-human entity, such as an AI system.
In the case of Thaler v. Perlmutter, the plaintiff sought copyright protection for visual art created by his AI-powered computer program. The U.S. Copyright Office (USCO) rejected his claim, citing the lack of human authorship, and the plaintiff filed suit challenging that decision. The court’s analysis focused on the meaning of “author” under copyright law, noting that the term is not explicitly defined in either the Copyright Act or the Constitution. Drawing on the 1909 Copyright Act and the legislative history of the 1976 Copyright Act, the court affirmed that only a person can obtain copyright in a work. The court also cited Burrow-Giles Lithographic Co. v. Sarony to emphasize that human creativity has historically been the cornerstone of copyright protection. While acknowledging the growing challenges posed by AI, the court recognized that questions about “how much human input is necessary to qualify the user of an AI system as an ‘author’ of a generated work, the scope of the protection obtained over the resultant image, how to assess the originality of AI-generated works where the systems may have been trained on unknown pre-existing works, how copyright might best be used to incentivize creative works involving AI, and more” will become increasingly relevant as technology advances. Because the case before it did not present those complexities, the court left these questions unanswered.
This human-authorship requirement, however, poses a challenge when AI systems generate content independently, without direct human intervention.
Recent legal cases, such as the lawsuit filed by several U.S. newspapers against OpenAI and Microsoft, highlight the growing tension between AI innovation and copyright law. The newspapers allege that OpenAI and Microsoft used their copyrighted content without permission or payment to train AI models. Frank Pine, executive editor for MediaNews Group and Tribune Publishing, stated, “We’ve spent billions of dollars gathering information and reporting news at our publications, and we can’t allow OpenAI and Microsoft to expand the Big Tech playbook of stealing our work to build their own businesses at our expense.” In the wake of such lawsuits, a common strategy among technology companies has been to enter into symbiotic partnerships with content owners. For instance, OpenAI recently reached an agreement with the Associated Press under which OpenAI gains access to the Associated Press’s text archives in exchange for providing the news organization with guidance and expertise in the technology industry. The partnership furthers the Associated Press’s efforts to use automation in journalism while strengthening OpenAI’s generative AI products, and it underscores that OpenAI respects and values the Associated Press’s intellectual property in its copyrighted content.
Some organizations have also raised the affirmative defense of fair use, a doctrine that allows the use of copyrighted material without the copyright holder’s permission when the use qualifies as fair. Even where the fair use defense has been invoked, however, the legal landscape remains uncertain, leaving creators and AI developers in a state of ambiguity. Proponents argue that using copyrighted content to train AI models falls under fair use, especially when the AI’s output is transformative and non-commercial. Critics counter that the widespread use of copyrighted materials by AI systems could undermine the value of original works and harm the interests of creators.
The courts have not yet provided definitive guidance on how the fair use doctrine should be applied in the context of AI, leaving the interpretation open to debate. The outcome of ongoing legal battles, such as the one involving OpenAI and Microsoft, could set important precedents for the future of fair use and AI-generated content.
Patent Law and AI: Inventorship and Patentability
The impact of AI on patent law is equally significant. Patent laws are designed to protect inventions created by “individuals,” traditionally understood as human beings. However, as AI systems develop the ability to invent new products and processes, the definition of “inventor” is being challenged.
In the notable case of Thaler v. Vidal, the plaintiff sought to patent inventions generated by his AI system, DABUS, listing the AI as the inventor. The USPTO rejected the applications, prompting a dispute over the interpretation of statutory terms such as “individual” and “whoever.” The Supreme Court has clarified that, when used as a noun, “individual” ordinarily refers to a human being, consistent with its everyday usage. While statutory interpretation can sometimes allow for reasonable inferences, numerous courts have held that the language of the Copyright and Patent Acts is unambiguous and must be applied as written. Thaler filed for patents in 17 jurisdictions, including the EU, UK, Australia, and South Africa. While his applications were refused in the EU, UK, and Australia, the South African Patent Office granted the patent, marking a milestone by recognizing an AI as an inventor. An important caveat applies, however: South Africa is a non-examining country, meaning that patent applications filed there are not subjected to substantive examination for patentability, as they would be in countries such as the US or India. As a result, any application that meets the formal filing requirements is typically granted without further scrutiny. Although such a patent can still be challenged and potentially invalidated by a third party, it remains valid within the jurisdiction until successfully opposed.
Copyright and patent protections were founded to govern human property; they not only further the public good by incentivizing individuals to create and invent but also promote science and the arts. As AI continues to advance, however, there is growing pressure on lawmakers to reconsider the definition of inventorship and to explore new legal frameworks that can accommodate AI’s role in the innovation process. Whether AI systems should be granted inventorship status, and how ownership of AI-generated inventions should be assigned, remains an ongoing debate within the legal and technological communities.

Beyond the specific challenges of copyright and patent law, the issue of ownership in AI-generated works presents a fundamental dilemma. If AI systems are not recognized as creators or inventors, then who owns the rights to their output? Is it the person or entity that developed the AI, the user who directed the AI to create the work, or perhaps no one at all?
These questions have significant implications for the future of IP law. As AI becomes more integrated into the creative and industrial processes, determining ownership rights will be crucial for ensuring that the benefits of AI-driven innovation are fairly distributed.
Authored by
Natasha Menon – Associate, AMD LAW India
and Mycah Singletary