Prof. (Dr.) Ajar Rab*
There is little doubt that artificial intelligence (“AI”) is not only here to stay but is also the way of the future. Discussions on its ethical use, its impact on human employment, and its role in advancing medical science are raging worldwide. There appears to be an ‘arms race’ to develop it. From a legal standpoint, however, is it time for governments to rein in the unruly horse?
India has tried to take the lead by attempting to regulate ‘unreliable’ and ‘untested’ AI models deployed on the ‘Indian Internet’. After severe criticism, a fresh advisory was issued mandating ‘appropriately labelling the possible inherent fallibility and unreliability of the output generated’. But what is the right approach?
This post discusses (a) the need to regulate AI while balancing business efficacy, (b) the problems in regulating AI, and (c) the right approach to AI. Should governments opt for (a) self-regulation or ‘netiquettes’[1], (b) piece-meal notifications or orders, or (c) stand-alone legislation like that of the European Union (“EU”)? It concludes with a suggestion that neither interferes unduly nor kills the development of AI.
I. NEED TO REGULATE AI
At the outset, any lawmaker must consider the impact on development costs of requiring developers or companies to adhere to registration or licensing requirements for AI. There are several cautionary tales, such as ‘Gravity Interactive’ effectively shutting down its European operations due to the cost of complying with the General Data Protection Regulation (“GDPR”). Similarly, reporting requirements may necessitate hiring specialised audit teams for debugging and limiting liability, further increasing the cost of compliance.
At the same time, in the words of the CEO of Google, “AI is too important not to regulate”, as it amplifies existing biases, flawed judgements, and racial profiling. This leads to a lack of trust, making legal intervention necessary. Gender bias, racial profiling, and discrimination have already raised serious concerns among companies. These concerns are only compounded by machine learning (“ML”), which alters the algorithm in ways uncontrolled by human intervention. Such self-modifying algorithms create a ‘black box’[2] (where the internal decision-making process is not visible to users or third parties) for developers and regulators. Therefore, companies such as Google, Microsoft, BMW, and Deutsche Telekom are developing formal AI policies with commitments to safety, fairness, diversity, and privacy. However, are these enough?
II. REGULATING AI
While there are several legal risks, regulating AI with traditional approaches is not easy. For starters, the jury is still out on how to define ‘artificial intelligence’ appropriately. In the absence of a definition, it is unclear what is being regulated. Moreover, the focus of AI regulation should be on the specific applications of AI and not on the science of AI itself. Regulators in India and across the globe need to acknowledge that the industrial revolution replaced physical power with machines; hence, the focus of all legislation has been on regulating processes. This approach is evident in the GDPR, the Digital Personal Data Protection Act, 2023 (“Data Protection Act”), and even the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011.
This process-based approach, however, was outpaced by the dawn of the internet age, or the ‘digital platforms era’. In the case of AI, regulation must focus not on the features or processes of AI, but on its use and application. Therefore, the ideal approach is to reduce the public risks associated with the use of AI without stifling innovation or regulating the processes themselves. As long as governments continue taking the traditional approach, they will cripple the development of AI, which may hold the key to myriad beneficial uses, including the possibility of discovering cures for several incurable diseases. Such an approach will lead to another ‘licence raj’.
Governments and regulators must understand that while the industrial revolution replaced physical power, AI replaces inherently subjective cognitive power. Traditional approaches may therefore raise compliance costs, effectively driving many industries and start-ups out of business. Once again, dominant companies and developed markets will continue to have an edge. Thus, the approach to regulating AI ought to be to limit[3] AI’s cognitive power by establishing clear standards for (a) duty of care, (b) disclosures, (c) human supervision, (d) access to models, or (e) ‘locked algorithms’, where each updated version is registered, certified, or disclosed to the regulator, as sketched below.
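By way of illustration, the sketch below shows one way a ‘locked algorithm’ regime might operate in practice: each released model version is fingerprinted with a cryptographic hash and recorded in a registry, so that a regulator or auditor can later verify that the model in production is the exact version that was certified. This is a minimal sketch only; the registry file, function names, and record format are illustrative assumptions, not features of any existing regulatory scheme.

```python
# Minimal sketch of 'locked algorithm' registration. All names here
# (register_version, verify_deployment, model_registry.json) are
# illustrative assumptions, not part of any real regulatory API.

import hashlib
import json
from datetime import datetime, timezone

REGISTRY_FILE = "model_registry.json"  # stand-in for a regulator-held ledger


def fingerprint(model_path: str) -> str:
    """Return the SHA-256 hash of the serialised model artefact."""
    with open(model_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


def register_version(model_path: str, version: str) -> dict:
    """Record a new 'locked' version: hash, version label and timestamp."""
    entry = {
        "version": version,
        "sha256": fingerprint(model_path),
        "registered_at": datetime.now(timezone.utc).isoformat(),
    }
    try:
        with open(REGISTRY_FILE) as f:
            registry = json.load(f)
    except FileNotFoundError:
        registry = []
    registry.append(entry)
    with open(REGISTRY_FILE, "w") as f:
        json.dump(registry, f, indent=2)
    return entry


def verify_deployment(model_path: str, version: str) -> bool:
    """Check that the deployed artefact matches the registered fingerprint."""
    with open(REGISTRY_FILE) as f:
        registry = json.load(f)
    expected = next(e["sha256"] for e in registry if e["version"] == version)
    return fingerprint(model_path) == expected
```

The design choice worth noting is that the regulator never needs access to the model’s internals: it merely holds a fingerprint against which any deployed version can be checked, which limits both compliance cost and confidentiality risk.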
The EU has taken a risk-based approach in its Artificial Intelligence Act, categorising risk in a hierarchy as (a) minimal risk, such as video games and spam filters, (b) limited risk, such as chatbots where consumers know they are dealing with AI, (c) high risk, which involves critical infrastructure, product safety, employment, or law enforcement, and (d) unacceptable risk, such as clear threats to the safety, livelihoods, and rights of people. Does the EU approach address legal risk concerns? Possibly. But is this approach in line with business interests and efficacy? Time will tell, but should India follow suit?
III. THE APPROACHES TO AI
A. Self-Regulation or ‘Netiquettes’
Until the issuance of the advisories, the Indian approach was that “AI is not well understood yet”. The NITI Aayog had issued an “Approach Document for India” in 2021, which contained seven “Principles for Responsible AI” (“AI Principles”). The only other instance of binding regulation was the circular issued by the Securities and Exchange Board of India in 2019 containing reporting requirements for AI and ML.
Unfortunately, the AI Principles do little more than provide a general guiding direction. This self-regulatory approach has already failed miserably. For the last 20 years, digital platforms have misused it, with an unprecedented invasion of personal privacy, market concentration, user manipulation, and the dissemination of hate, lies, and misinformation. It was to plug this “gateway drug” for the digital exploitation of personal information that such information came to be regulated by the GDPR and the Data Protection Act.
This ‘hands-off’ approach has resulted in several instances of abuse, such as ‘deep fakes’ of celebrities, with the ensuing concerns leading the government to mull amendments to the Information Technology Act, 2000 (“IT Act”) to cover aspects of AI.
B. Sector-Specific Approaches
An alternative to self-regulation is a sector-specific approach, adopted by countries such as the United States, in which each sector or industry has customised, tailor-made regulation. However, this creates more problems than it solves. Which authority frames these regulations? Does each sector or industry have an existing regulator? Even if it does, is that regulator well-versed in AI and its use? Indeed, even in the United States, the approach is now shifting towards comprehensive AI legislation. Any under-regulation or over-regulation based on an inadequate understanding of AI can have disastrous consequences for business – a criticism sharply directed at the advisories recently issued by the Indian Government.
C. Stand-Alone Legislation
A third alternative is to follow the EU approach with stand-alone legislation that focuses on mitigating the risk of abuse instead of micromanaging the technology. AI is unique in more ways than one. It is precisely because of its unique nature and capacity for self-learning that the traditional regulatory approach is neither advisable nor warranted. For instance, issues of causation, remoteness of damage, and apportionment require explicit and clear standards, which cannot be addressed through a piece-meal approach. Similarly, obligations to disclose algorithms (transparency)[4] or to maintain appropriate records to ‘reverse-engineer’ (in the limited sense of an audit trail) the method of reaching an output, along with statutory safeguards of confidentiality, are required to protect business interests.
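What such an audit trail might look like in practice is sketched below: each automated decision is recorded together with the model version, the inputs, and the output, so that the path to a particular result can later be traced by an accredited auditor. The record schema and function names are illustrative assumptions, not a prescribed statutory format.

```python
# Minimal sketch of a decision audit trail for automated systems.
# The file name, schema and log_decision function are illustrative
# assumptions, not a statutory or industry-standard format.

import json
import uuid
from datetime import datetime, timezone

AUDIT_LOG = "decision_audit_log.jsonl"  # append-only, one JSON record per line


def log_decision(model_version: str, inputs: dict, output, rationale: str) -> str:
    """Append one decision record; return the record's unique ID."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # ties the decision to a locked version
        "inputs": inputs,                # the features the model actually saw
        "output": output,                # the automated decision reached
        "rationale": rationale,          # e.g. the top contributing factors
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["decision_id"]
```

Kept alongside a version registry of the kind sketched earlier, such records would let an auditor reconstruct which model, acting on which inputs, produced a contested output, without requiring public disclosure of the algorithm itself.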
Such problems can be better addressed by certification or auditing by accredited bodies[5] created under a stand-alone statute, with disclosure obligations akin to Articles 13(2)(f) and 14(2)(g) of the GDPR. These articles grant data subjects the right to be informed by the data controller about the existence of automated decision-making, including meaningful information about the logic involved and its intended scope and effects.[6] Therefore, separate legislation is imperative to create a comprehensive AI transparency regime that strikes the right balance between transparency, individual rights, and public risk. Such legislation can also address questions of the ‘right to AI’.
IV. CONCLUSION
Currently, the government is contemplating further amendments to the IT Act. Such a step may fail to address the pain points of AI regulation and would compound the woes of an already misused, ill-drafted, and little-understood legislation, i.e., the IT Act. A preferable approach may be to enact stand-alone legislation with an independent and skilled Central Digital Regulator (“CDR”), which can consider sector-specific applications with a clearly defined role of supervising AI rather than regulating it.
The CDR should act as a watchdog and certification body, with powers limited to sanctioning abuse and adjudicating complaints against the use of AI. It would be naïve of the government to think that traditional courts and judges are adequately trained and equipped to deal with AI-related violations or crimes; such a misconception may incentivise AI abuse through ineffective remedies and enforcement. If India truly aims to grow into a superpower, its approach to AI and its regulation must be equally futuristic!
[1] Wolfgang Hoffmann-Riem, ‘Artificial Intelligence as a Challenge for Law and Regulation’ Regulating Artificial Intelligence, Springer (2020), pp. 60, 62.
[2] Thomas Wischmeyer, ‘Artificial Intelligence and Transparency: Opening the Black Box’ Regulating Artificial Intelligence, Springer (2020), p. 41.
[3] Christian Djeffal, ‘Artificial Intelligence and Public Governance: Normative Guidelines for Artificial Intelligence in Government and Public Administration’ Regulating Artificial Intelligence, Springer (2020), pp. 15-18.
[4] Thomas Wischmeyer, ‘Artificial Intelligence and Transparency: Opening the Black Box’ Regulating Artificial Intelligence, Springer (2020), pp. 43-48.
[5] Wolfgang Hoffmann-Riem, ‘Artificial Intelligence as a Challenge for Law and Regulation’ Regulating Artificial Intelligence, Springer (2020), p. 64.
[6] Christian Ernst, ‘Artificial Intelligence and Autonomy: Self-Determination in the Age of Automated Systems’ Regulating Artificial Intelligence, Springer (2020), p. 48.
*Prof. (Dr.) Ajar Rab, Founding Partner, ANR LAW LLP, Dehradun. The post is a summary of the presentation made by the author at the “Business. Algorithm. Law” Conference held at Gdynia, Poland on November 17, 2023. The author expresses his profound gratitude to (a) the Editorial Team of NLSBLR for their valuable suggestions and comments, (b) Ms. Kirpen Dhaliwal, Associate, ANR LAW LLP, and (c) Ms. Ramya Singh, final-year student at RMLNLU, Lucknow, for their able and sincere research assistance.