Generative artificial intelligence solutions are all the rage today as they offer to generate content in a jiffy. ChatGPT, Perplexity.ai, AutoPilot, Scite.ai, and hundreds of other generative AI solutions are allowing users to generate research articles, news reports, summaries, lyrics, marketing strategies, travel itineraries, and more.
This, in turn, has triggered animated discussions over who owns the content and what will happen to the intellectual property and copyright of the writers and scientists whose works might have gone into training the generative AI models. There are also questions over who should be held responsible for such violations: the programmer, the generative AI firm, or the machine itself.
The Copyright Act in India grants protection to original works, including literary, dramatic, musical and artistic works, and sound recordings.
“The statute contemplates a natural person as an author and, normally, the first owner. Right now it doesn’t consider AI-generated content and its ownership,” says Nishad Nadkarni, partner with the law firm Khaitan & Co.
Gautam Busi, a partner with law firm Tempus Law Associates, says AI systems often require extensive training data, which might include copyrighted material. “Determining the ownership and rights associated with AI-generated content can be complex, particularly when it involves multiple stakeholders,” he points out.
Generative AI could infringe copyright in two ways. First, it can generate the same answer for multiple people in different geographies, who might then each pass off the content as their own.
“AI technologies can be used to replicate or mimic existing copyrighted works. The algorithms can analyse and generate content that closely resembles protected works, raising questions about the legality and ethical implications of such replication,” Busi explains.
Sachin Yadav, partner at Deloitte, says inadequate knowledge of third-party IP rights over generative AI content may result in infringement.
“Such models may intentionally or unintentionally provide content that infringes on the IP rights of original work,” he contends.
NALSAR University of Law and IIIT, Hyderabad, conducted a round-table recently on the challenges and opportunities in the AI era for the legal fraternity. It pointed out that, alongside benefits, AI use in the legal domain raises several ethical concerns both in India and around the world.
“AI systems can perpetuate or amplify the existing biases in the legal system such as racial or gender bias. This can lead to unfair outcomes and discrimination,” a draft document brought out by the round-table stated. “AI systems can be difficult to understand and interpret, making it difficult for individuals to challenge or appeal decisions made by these systems. Also, they rely on large amounts of data, which can raise privacy concerns if the data being used is personal or sensitive,” it observed.
Accountability, too, is a major issue: AI systems can make mistakes or produce unintended consequences, and it can be difficult to hold anyone accountable for such errors.
The round-table also cautions about the possibility of cyberattacks on AI systems, leading to more undesirable consequences.
On the positive side, generative AI can help lawyers and judges quickly source references and other relevant information from legal data.
“AI technologies can assist in the enforcement of copyright law by identifying and flagging potential infringement and plagiarism. This can make life easier for law-enforcement agencies and copyright holders,” Tempus Law Associates says.
It, however, raises concerns about likely violations of the fair-use doctrine, which allows limited use of copyrighted material without permission from the copyright owner. “AI systems can potentially create transformative works by repurposing copyrighted content in a new and creative manner. Determining the boundaries of fair use and the transformative nature of AI-generated works can be a subject of legal interpretation,” it points out.
Legal experts call for a balanced approach towards AI use in the legal domain to ensure transparency, fairness, and protection of privacy and rights, while also fostering innovation.
On the question of who should be held responsible for violations, the programmer or the end user, Nadkarni says, “This is interesting since AI tools are based on machine learning and not all the base content may be introduced by the programmer.”
The basic algorithm that teaches the AI tool how to use future content is, however, always attributed to the programmer.
“With the pace at which generative AI is growing, answers to these and other questions are expected soon, either through amendments to the current legal framework or through judicial precedents,” he observes.
Busi says some countries are contemplating introducing copyright exceptions or specific provisions to address AI-generated content. “These provisions aim to establish guidelines and rules for the protection, use, and attribution of AI-generated works within the framework of copyright law,” he says.
He cites the example of the European Union’s Report on IPR for the development of AI technologies and the ‘Fair Dealing Exceptions’ under the Copyright Act in Canada.