AI Regulation: Efforts to Better Wield a Double-Edged Sword

Shawnae Algama

Apprentice, LL.B (Colombo)

INTRODUCTION

The recently launched ChatGPT has sparked curiosity and controversy alike, and the conversation has gradually broadened into a debate on the development of Artificial Intelligence (“AI”). This discussion has adopted a two-fold tone - an appreciative acknowledgement of the innovation and efficiency associated with AI, counteracted by a sense of fear and suspicion in the face of the unknown. Whilst the benefits are plentiful, so too are the concerns. The legal landscape associated with such innovation has presented new issues to navigate - concerns relating to data privacy, intellectual property rights, defamation, the doctoring of false evidence, and more. This has strengthened the consensus that, whilst we welcome and adapt to these strides in AI development, there exists a need for clear and uniform regulation. In the words of Margrethe Vestager[1], “on Artificial Intelligence, trust is a must, not a nice to have”.

In light of this issue, presented below are a few examples of legal complications which have arisen from the development of AI, and a summary of the proposed draft of the ‘Artificial Intelligence Act’ by the European Commission - which, if passed, may set the guiding benchmark for AI regulation globally.

Legal Issues

The legal predicaments pertaining to AI have seeped into the various disciplines of law, and the following are but a few examples.

With reference to data privacy concerns, authorities in Italy, France, Germany, Spain, and the wider European Union have launched investigations into OpenAI (the provider of ChatGPT) to address the resultant privacy concerns and the potential breach of the General Data Protection Regulation (“GDPR”). Given that the Personal Data Protection Act No. 09 of 2022 (“PDPA”) reflects the GDPR in relation to the key principles of data protection and the rights of data subjects, these concerns are both applicable and pertinent in a Sri Lankan context. Among the privacy concerns is the lack of legal justification and transparency relating to the mass collection of personal data used to train AI models - which contradicts several key principles of data protection law (reflected in both the GDPR and the PDPA), such as transparency[2], purpose limitation[3], data minimization[4], and accountability.[5]

Intellectual property rights too have now entered a zone of ambiguity and uncertainty - with artists expressing concern that artwork similar to their own may be instantly reproduced, with a mere prompt to the AI to create art emulating each artist’s own unique style. There also lies the perplexity of identifying the ownership of intellectual property rights - does it lie with the AI, the entity or individual which created the AI, or the person who prompted the AI in such creation? If it is the latter, there also exists the complexity associated with the added layer of anonymity provided by AI - which may increase the difficulty of identification and the attribution of liability.

The cries against defamation and fake evidence generated by AI also go hand-in-hand. Experts have expressed concern about the fate of a court system inundated with AI-generated fake evidence (also referred to as ‘deep fakes’) - overwhelming both judge and jury, and possibly allowing a fair trial only to defendants who can afford expert analysis of evidence. Additionally, whilst the ability of ChatGPT to confidently produce disinformation relating to a person’s identity is a violation of the principles of personal data protection law, it has also given rise to claims of defamation - as seen in the recent suit prepared by an Australian mayor, regarding ChatGPT’s false claim that he had served time in prison for bribery.[6]

While there exists the possibility of AI playing a role in perpetuating discrimination and bias in decision-making processes, it is interesting to note that the GDPR and the PDPA offer a form of redress for individuals affected by an automated decision-making process - by allowing them the right to have the decision reviewed. However, an exception to the exercise of this right arises where the data subject has provided explicit consent. In the context of AI, which is rapidly growing, it is important to acknowledge the existence (or lack thereof) of the information upon which such ‘informed consent’ may be granted. Further, could the fundamental right to equality (as granted by our Constitution[7]) and the right to reasons (as ingrained in our administrative law regime[8]) be threatened in the future due to the lack of transparency associated with the opaque process of automated decision-making? All of the above are merely the tip of the iceberg of the countless conundrums which continue to grow in number and frequency.

Proposed regulation

The difficulty in introducing regulations pertaining to AI lies in the complication of predicting the length, breadth, and manner in which AI may exponentially develop, and in the inability to account for dangers which are potentially unknown at present. The European Commission, when proposing the draft Artificial Intelligence Act in 2021 (“draft AI Act”)[9], acknowledged that the purpose of such regulation was to build an ecosystem fostering AI innovation in order to strengthen the EU’s global competitiveness. The draft AI Act adopts a risk-based approach, where the risk posed is classified in relation to its potential impact on the fundamental rights of an individual - which in turn determines the stringency of the regulation of such AI. This four-tier classification places an AI system into one of the categories of unacceptable, high, limited, and minimal risk. The following are a few non-exhaustive examples of such classification.

Classification: Unacceptable Risk
Examples:
  • AI concerning the cognitive behavioural manipulation of people, specifically vulnerable groups. E.g., toys using voice assistance encouraging dangerous behaviour in minors.
  • Government-run social scoring systems.
  • Real-time, remote biometric identification systems in public.
Extent of Regulation: These forms of AI are banned.

Classification: High Risk
Examples:
  • AI used as safety components of products. E.g., the use of AI in robot-assisted surgery.
  • Use of AI in critical infrastructure. E.g., operating road traffic; supplying water, gas, and electricity.
  • Use of AI in education or vocational training. E.g., exam scoring.
  • Use of AI in employment. E.g., CV-filtering software for recruitment procedures.
  • Use of AI in law enforcement. E.g., evaluation of the reliability of evidence; systems used to predict the occurrence or reoccurrence of an offence.
  • Use of AI in the administration of justice and democratic processes. E.g., use of AI to assist a judicial authority in interpreting the application of law to a concrete set of facts.
Extent of Regulation: These forms of AI are permitted, but developers are required to adhere to rigorous testing requirements, maintain proper documentation, and implement an adequate accountability framework.

Classification: Limited Risk
Examples:
  • AI chatbots.
  • AI used to generate or manipulate content (‘deep fakes’).
  • AI used to detect emotions.
Extent of Regulation: Required to abide by certain transparency obligations, which would allow a user to make informed choices when proceeding with the use of such AI.

Classification: Minimal Risk
Examples:
  • Spam filters.
  • AI-enabled video games.
Extent of Regulation: The regulation is minimal and merely encourages providers to implement a voluntary code of conduct.

The draft AI Act also proposes steep penalties for non-compliance, permitting fines of up to thirty million Euros or, where the breach is by a company, up to six percent of its total worldwide annual turnover for the preceding financial year, whichever is higher. Further, it proposes to establish a European Artificial Intelligence Board to oversee such regulation and to provide guidance to national authorities in implementing it.

Contrary to how it may appear, the overarching objective of the draft AI Act is to promote the development and creativity of AI by regulating it only to the extent necessary. If implemented, it would provide AI developers with a framework within which they may innovate, thus encouraging trustworthy AI by design.

Conclusion

Whilst the benefits of AI are irrefutable, the necessity for regulation too is undeniable. The inherent struggle in introducing such regulation lies in the need to ensure that it remains appropriate, relevant, and adequate in line with the rapid development of AI. Additionally, it has been acknowledged that the need for such regulation has little to do with technology being out of control, but rather with the need to rein in the human imagination. It is therefore interesting to observe how this issue will be approached, especially in the finalised Act of the European Union, in order to ensure that the growing trust placed in AI is indeed well-deserved and well-placed.

Citations

[1] Executive Vice-President of the European Commission for ‘A Europe Fit for the Digital Age’.

[2] General Data Protection Regulation, art. 5(1)(a); Personal Data Protection Act No. 09 of 2022, s. 11.

[3] General Data Protection Regulation, art. 5(1)(b); Personal Data Protection Act No. 09 of 2022, s. 6(1).

[4] General Data Protection Regulation, art. 5(1)(c); Personal Data Protection Act No. 09 of 2022, s. 7.

[5] General Data Protection Regulation, art. 5(2); Personal Data Protection Act No. 09 of 2022, s. 12.

[6] “Australian Mayor Prepares World’s First Defamation Lawsuit over ChatGPT Content” (The Guardian, April 6, 2023) accessed April 25, 2023.

[7] The Constitution of the Democratic Socialist Republic of Sri Lanka, Art. 12.

[8] Wickremasinghe v. Chandrananda De Silva, Secretary Ministry of Defence and Others (2000); Samaraweera v People’s Bank [2007] 2 Sri LR 362; Choolanie v People’s Bank [2008] 2 Sri LR 93; Hapuarachchi v Commissioner of Elections [2009] 1 Sri LR 1.

[9] European Commission (2021). Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, COM/2021/206 final. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?

Disclaimer: This article is intended for informational purposes only and should not, under any circumstances, be used or construed as legal advice in any manner or form. The article also limits its scope of discussion to the proposed Artificial Intelligence Act by the European Commission and does not incorporate the later comments adopted by the Council of the European Union in its ‘General Approach’ on the proposed draft, nor the ‘Compromise Text’ presented by the European Parliament. The article merely focuses on the proposed draft of the AI Act, which is the foundation of such discussion.