Generative Artificial Intelligence (GAI) is a domain within artificial intelligence that involves training machines to produce original content, including text, images, videos, and code. While GAI can be immensely useful, its increasing adoption calls for careful consideration of various legal implications.

Copyright on GAI output

One significant concern regarding the use of GAI relates to its implications under copyright law. Under the Copyright Act, 1957, copyright protection is granted to “authors” of original works that exhibit a degree of creativity, rather than being solely the result of skill and labour. In India, copyright subsists in original literary, dramatic, musical and artistic works, cinematograph films, and sound recordings. To qualify for protection, GAI output must display a minimum degree of creativity, the standard laid down in Eastern Book Company v. D.B. Modak. However, D.B. Modak does not conclusively determine whether GAI output can satisfy the “modicum of creativity” requirement, or whether a GAI model or its programmer meets the definition of an “author” under the Copyright Act so as to claim ownership of the resulting works.

While computer-generated works are recognised as eligible for copyright, the existing legal framework does not adequately address works created by GAI programs, which are neither human beings nor legal entities.

Competition law concerns

The integration of GAI into business operations also has the potential to distort competition globally. Within the domain of competition law, the adverse impacts of GAI can be grouped into three categories: (a) the use of GAI to facilitate anti-competitive agreements or strategies; (b) the implementation of anti-competitive strategies by GAI without explicit human guidance; and (c) the use of GAI resulting in a reduction of competitive intensity. Distinguishing between a competitive enterprise and one that abuses its competitive advantage can be a nuanced exercise, and GAI could enable the elimination of smaller competitors that lack access to large datasets and GAI-based technologies.

Issues of data protection and privacy

GAI raises concerns regarding data privacy and the protection of sensitive user information. Inadequate security measures could leave GAI tools susceptible to unauthorised access to, or disclosure of, user data. Such breaches result in data privacy violations and open the door to potential misuse of personal information. Another data privacy concern is the adequacy of anonymisation techniques: since GAI tools often require access to personal or sensitive data for training or for generating outputs, it is crucial to employ robust anonymisation methods.

Inadequate anonymisation could lead to re-identification, where individuals can be identified from the generated data, undermining their privacy and anonymity. Furthermore, unauthorised data sharing poses a significant risk in the context of GAI. GAI tools may share user data with third parties without obtaining explicit consent or for purposes beyond what was initially communicated. Thus, GAI tools must obtain proper consent from users and provide clear and transparent information about how their data is collected and utilised.

Accumulation of bias

Another issue with GAI models is that they tend to accumulate societal biases present in the datasets on which they are trained. Given the potential impact of GAI programs in shaping human perceptions, it becomes crucial to keep analysing the models for bias to avoid the dissemination of misleading, prejudiced, or defamatory information.

Accountability

The growing significance of accountability in GAI stems from the inherent difficulty in tracing AI-generated content back to its source or author, unlike human-generated content. This presents a major challenge in holding individuals or organisations responsible for any harm that may result from AI-generated content. GAI also raises concerns regarding its potential misuse for illegal activities, such as the creation of deepfakes and other forms of deceptive content. Many GAI models can generate highly realistic content that can be used to manipulate public opinion, defame individuals, or spread misinformation. In the event of civil or criminal legal action, the complex nature of GAI makes it difficult to pinpoint specific individuals or organisations to hold responsible, as the creation and dissemination of such content often involve multiple actors and intricate networks. This creates a significant challenge for legal systems and regulatory frameworks, which must adapt to effectively close the accountability gap created by the misuse of GAI for illicit purposes.

GAI presents immense potential for innovation and creativity, but as it becomes increasingly integrated into various aspects of our lives, it is crucial to develop and enforce comprehensive legal frameworks that adapt to the unique challenges posed by this technology.

(The writers are advocates at Trinity Chambers, Delhi)
