Palash Varyani and Divyank Dewan*
I. INTRODUCTION
In a recent development, the Securities and Exchange Commission (“SEC”) fined two investment advisers, Delphia (USA) Inc. and Global Predictions Inc., a total of $400,000 for making misleading claims about their utilisation of Artificial Intelligence (“AI”). This incident sheds light on a concerning issue known as “AI washing,” a deceptive marketing strategy that exaggerates the capabilities of AI technology in products to attract investors. Similar to greenwashing, AI washing exploits the burgeoning interest in AI by portraying a company’s offerings as more technologically advanced than they are.
The prevalence of AI washing stems from the fact that many investors lack a deep understanding of AI terminology, allowing companies to exploit this gap. For instance, if a product is touted as “AI-powered,” the company might not fully disclose what it entails. This vague claim could imply the use of cutting-edge technology on the one hand or something as basic as a cloud service utilising machine learning to analyse data on the other, leading to potential misunderstandings about the product’s true capabilities.
This article examines the practice of AI washing, analysing recent regulatory actions, such as the SEC's penalties against investment advisers for misleading AI claims, and investigative reports like Hindenburg Research's findings on Equinix. It then delves into the Indian regulatory landscape, highlighting the lack of specific safeguards against AI washing and discussing recent failures of AI-based companies in India. Finally, it proposes a way forward, suggesting regulatory measures to address AI washing, including clear definitions, disclosure requirements, and a registration process for AI products.
II. THE IMPACT OF AI WASHING
AI washing presents significant challenges for industries and consumers alike. At its core, it leads to misinformation and deception, obscuring the true capabilities and limitations of a given technology. This not only misleads consumers but also erodes trust in legitimate AI innovations. On a macro level, it can create a monoculture in the AI industry. This occurs when numerous vendors, despite claiming to have unique AI models, use the same underlying technology. They differentiate their products through marketing rather than substantial technological differences. This could pose systemic risks, particularly in sectors like finance, where widespread reliance on similar AI models could lead to a financial crisis. If a significant portion of the financial industry relies on similar AI models that make flawed predictions or recommendations, it can lead to coordinated actions that amplify market distortions. When these bubbles inevitably burst, the resulting fallout can trigger a financial crisis.
The impact of AI washing is also evident on a micro level. It can deceive consumers by misrepresenting the AI capabilities of products and services. The rapid and widespread dissemination of false information could also produce unexpected network effects, enhancing an enterprise's market share while simultaneously raising agency costs for investors, who may base their decisions on misleading information.
The repercussions of AI washing extend beyond individual products and services. By creating an atmosphere of hype and unrealistic expectations, it can stifle genuine progress in the field. Companies that overstate their AI capabilities may divert attention and resources away from legitimate technological advancements, hindering the growth and development of the AI industry as a whole. Moreover, it can impact consumer confidence and decision-making. Individuals may invest in products or services based on false claims, only to discover later that the AI technology does not deliver on its promises. This can breed scepticism towards future AI applications.
Additionally, the authors believe that it can also create an uneven playing field in the market. Companies that engage in this practice may gain a competitive advantage over those that honestly disclose their AI capabilities. This not only distorts the market but also places pressure on ethical companies to engage in similar practices to remain competitive.
III. SEC'S CRACKDOWN & HINDENBURG'S REPORT
On the regulatory front, the SEC imposed penalties totalling $400,000 on two investment advisory firms, Delphia (USA) Inc. and Global Predictions Inc., for misrepresenting their utilisation of AI in their investment processes and services. Delphia, a Toronto-based firm, made inaccurate assertions about incorporating client data into its investment process through AI and machine learning capabilities. Global Predictions, based in San Francisco, falsely claimed to be the “first regulated AI financial advisor” and misrepresented its platform as providing “Expert AI-driven forecasts.” The SEC underscored its commitment to protecting investors from deceptive practices surrounding AI washing, emphasising that:
“Investment advisers should not mislead the public by saying they are using an AI model when they are not. Such AI washing hurts investors.”
Complementing the SEC’s enforcement actions, Hindenburg Research conducted a comprehensive investigation into Equinix, a global data centre operator with an $80 billion market capitalisation. Their report alleges a pattern of accounting manipulations, exaggerated profitability metrics, and, most significantly, misrepresentations concerning its purported AI capabilities and its positioning as an AI beneficiary, i.e., an entity that benefits from the development and application of artificial intelligence. While investors have rewarded Equinix based on its AI narrative, Hindenburg's findings directly challenge this perception.
Hindenburg alleges that Equinix has engaged in an AI washing strategy, overstating its AI capabilities and AI-readiness while neglecting practical infrastructural limitations and power constraints that could hinder its ability to serve AI customers effectively. Former employees interviewed by Hindenburg expressed scepticism about the company's ability to upgrade its older sites to meet these escalating power requirements, casting doubt on its capacity to truly benefit from the AI boom. The report suggests that the company’s positioning as an AI beneficiary has been more marketing rhetoric than substantive preparation.
Thus, the SEC’s crackdown and the report released by Hindenburg Research underscore the growing concern over the escalation of AI washing practices within the financial industry. As regulatory bodies and investors scrutinise these claims, the fallout from these investigations may have far-reaching implications for the valuation, corporate governance, and credibility of companies making dubious claims about their AI capabilities and readiness to capitalise on the AI revolution.
IV. THE INDIAN REGULATORY LANDSCAPE
This has also become a pertinent issue in India, where AI-based companies have experienced significant failures. First, a company called Lumos attempted to create internet-connected devices that used AI and machine learning to adjust and personalise home appliance settings based on a user's behaviour and routines. The founders later announced that the company was shutting down, citing, among a plethora of reasons, that they had overestimated the utility value of the product and could not meet their own commitments.
In another case, Ola founder Bhavish Aggarwal’s startup launched a beta version of Krutrim AI, which was promoted as a homegrown rival to ChatGPT. However, users soon began to suspect that Krutrim AI was merely a rebranded version of ChatGPT after the chatbot reportedly confirmed it was “created by OpenAI.” Addressing these concerns, the startup explained that the issue was due to a “data leakage problem originating from one of the open-source datasets used in the Large Language Model (LLM) fine-tuning process.”
As the AI washing bubble continues to grow, another concern that arises in the Indian context is the absence of any proper regulatory safeguard against AI washing. It would be difficult for investment analysts to conclusively show that companies are engaging in AI washing. Regulation 28 of the SEBI (Investment Advisers) Regulations, 2013 lays down that where an investment adviser furnishes information that is false or misleading in any material particular, the Securities and Exchange Board of India (“SEBI”) may take action in accordance with the SEBI (Intermediaries) Regulations, 2008. However, AI washing occupies a grey area that may not neatly fit into the category of false or misleading information. Rather, it could be characterised as a gap between promised capabilities and actual technological implementation. This ambiguity presents a potential loophole, allowing entities to invoke technological limitations as a defence against accusations of AI washing. Consequently, this approach fails to adequately tackle the core issue at hand, leaving the problem largely unaddressed and open to exploitation.
SEBI has attempted in the past to incorporate AI-related issues into its regulatory framework. Two circulars were released in 2019 addressing the use of AI and Machine Learning (“ML”) technologies: the first mandated reporting of AI and ML applications by Market Infrastructure Institutions, and the second focused on their use by Mutual Funds. These circulars, however, are narrowly focused on mandatory reporting and fail to address AI washing in the broader regulatory landscape. Although they require entities to disclose their use of AI and ML technologies, they do not engage with inflated or misleading representations of AI capabilities. This omission results in a notable regulatory gap, as the circulars emphasise the existence of AI technologies but do not scrutinise the accuracy or integrity of claims regarding their functionalities or outcomes. Moreover, there is an absence of any clear deterrence mechanism or penalties for non-compliance, leaving a significant gap in enforcement. Without explicit provisions for penalising failures to adhere to these guidelines, the efficacy of these measures in ensuring transparency and accountability remains questionable.
AI washing may also be considered an unfair trade practice under Section 21 of the Consumer Protection Act, 2019 (“CPA”), owing to its deceptive nature and the false advertising associated with it. Misleading advertisements related to AI washing fall under the Guidelines for Prevention of Misleading Advertisements and Endorsements for Misleading Advertisements, 2022, which prescribe penalties of up to ten lakh rupees. This is also in consonance with Rule 3.1 of the United Kingdom's CAP Code, which prohibits misleading advertisements. Although the CPA is designed to address deceptive practices, its primary role is to safeguard consumers from misinformation related to product quality and performance. In the case of AI washing, companies may not make outright false claims but instead exaggerate their AI capabilities. This form of deception may escape the CPA’s scrutiny, as companies could explain the disparity between their promises and actual outcomes as part of the technological limitations or ongoing development of AI systems.
Under the Competition Act, 2002, AI washing could be construed as a malpractice having adverse effects on competition, falling within the purview of Section 18, which imposes a duty upon the Competition Commission of India (“CCI”) to eliminate practices that negatively impact market competition and to ensure freedom of trade for other market participants in India. The Competition Act, however, is primarily focused on practices that harm competition, such as anti-competitive agreements, abuse of dominant position, and mergers that reduce market competition. AI washing does not fit directly into these categories, since it relates more strongly to misleading advertising and consumer deception than to traditional anti-competitive behaviour.
Currently, no specific regulatory provisions in India explicitly address AI washing. While existing laws like the Consumer Protection Act, the Competition Act, and SEBI regulations cover certain deceptive practices, they do not directly tackle the misrepresentation of AI capabilities, which often falls into a grey area. AI washing, as a malpractice, is difficult to prove under existing laws designed to address explicit cases of falsehood or fraud. A standalone AI regulation would more effectively address the nuanced nature of AI-related misrepresentation. Furthermore, as countries worldwide develop their AI regulations, such as the European Union Artificial Intelligence Act (“the EU AI Act”), India risks falling behind in advancing a comprehensive AI ecosystem. Establishing clear regulations on AI washing and similar malpractices would encourage investments and partnerships from companies committed to ethical practices.
V. SUGGESTIONS FOR AN AI-WASHING FRAMEWORK: INSIGHTS FROM THE EUROPEAN UNION
As discussed above, AI washing is becoming a prevalent issue and could plague India in the coming years given the rapid advancement of AI technology. It is therefore crucial to implement a regulation that effectively addresses it. Such a regulation should clearly and comprehensively define what constitutes AI and provide precise criteria for determining when an application qualifies as AI-powered. It should specify the minimum level of AI integration required for an application to be classified as such, ensuring a consistent understanding across different use cases and applications.
A vital component of this regulatory framework should be the requirement of full disclosure, parallel to the disclosure requirements under the SEBI’s Listing Obligations and Disclosure Requirements (“LODR”) and Issue of Capital and Disclosure Requirements (“ICDR”) Regulations. Transparency about how AI is integrated into products should be mandated to restore confidence among investors and consumers. To promote compliance and accountability, the regulation should incorporate a system of penalties, similar to the enforcement mechanisms established in the SEBI LODR and ICDR Regulations. Clearly defined penalties would serve as a deterrent against non-compliance and ensure that entities adhere to the prescribed guidelines.
In scenarios where the authority reasonably determines that certain disclosures could jeopardise business models or trade secrets, thereby hindering innovation, an alternative approach may be warranted. First, documents could be submitted and maintained in a manner that ensures their confidentiality. This method aligns with practices adopted in the EU AI Act. Specifically, Articles 10 and 78 of the EU AI Act stipulate that if the data falls under a special category, it must be subject to restrictions on data transmission, preserved securely, and deleted once its purpose has been fulfilled. Additionally, Article 78 emphasises the protection of intellectual property rights and trade secrets. The requirement for confidentiality during investigations is also highlighted in Article 22 of Directive 2014/90/EU.
In cases where maintaining confidentiality during investigations is not feasible, an alternative approach involving data minimisation should be adopted. This means that authorities would request only the essential data, apply proper cybersecurity measures to safeguard the information, and delete it once it is no longer required. This approach ensures that authorities obtain the data necessary for investigations while AI developers maintain complete confidentiality. Recital 69 of the EU AI Act supports this mechanism, emphasising the principle of data minimisation and the right to privacy. Incorporating these alternatives would not only help authorities detect AI washing but also protect sensitive information and support innovation.
Furthermore, a registration process for AI products should be established to enhance accountability and transparency. This registration should include a detailed description and accreditation, verified by a regulatory authority, to prevent AI washing by ensuring that AI's role and its specific use in the product are fully disclosed. This approach will help address AI washing, as approval from a regulatory authority will ensure thorough vetting and transparency, requiring any additions to be reported under the disclosure requirements and preventing the misuse of terms like “AI-powered” and “AI-backed.”
VI. CONCLUSION
Throughout the article, the authors have brought forth that AI washing carries grave consequences, posing a substantial threat to consumer and investor trust in AI-based products and services. To mitigate these risks and safeguard the integrity of the AI industry, Indian regulatory authorities must adopt an approach encompassing the imposition of stringent penalties for non-compliance, effectively deterring companies from engaging in deceptive AI marketing tactics. The current absence of specific legislation in India creates a regulatory vacuum, leaving the market vulnerable to the exploitation of this loophole. Hence, it is imperative for Indian authorities to respond with clear and comprehensive regulations that address AI washing.
*Palash Varyani is a fourth-year student at the Institute of Law, Nirma University. Divyank Dewan is a fourth-year student at the National Law Institute University, Bhopal.