Artificial intelligence increasingly influences decision-making in finance, healthcare, employment, transportation, and public administration. As these systems grow more autonomous and less transparent, determining who bears legal responsibility when AI causes harm becomes an increasingly urgent problem. Traditional liability models assume direct human control and intention, assumptions that break down when algorithms learn, adapt, and act in ways their developers and users cannot predict. India, despite its rapid digital growth, lacks a clear legal framework for addressing AI-related harm, leaving regulators, businesses, and citizens uncertain about their rights and obligations.
The challenge of assigning responsibility for machine-related harm has deep historical roots. During the Industrial Revolution, when factories introduced heavy machinery, courts recognized that accidents could occur without anyone intending them, and responded with doctrines such as strict liability and, later, product liability. That shift acknowledged that the law could attach responsibility to the risks a technology creates rather than to the intent of the person operating it.
The IT revolution in the late 20th century led India to enact the Information Technology Act, 2000, which addressed cybercrime and intermediary responsibility. However, lawmakers did not foresee the emergence of machine intelligence capable of making independent decisions.
By the 2010s, algorithms were influencing loan approvals, insurance pricing, hiring decisions, and government profiling. These systems, though not fully autonomous, complicated traditional legal reasoning because their outcomes often stemmed from complex, data-driven processes that were hard to trace back to any specific person. Modern AI deepens this complexity by acting as a black box: opaque, probabilistic, and constantly changing, it raises questions of attribution that the law has never confronted so directly.
India currently regulates digital systems through a patchwork of laws, none of which addresses AI liability directly. The Information Technology Act, 2000, governs cybercrime and intermediary responsibility but predates systems capable of autonomous decision-making. The Digital Personal Data Protection Act, 2023, protects personal data but does not tackle algorithmic bias or harmful automated decisions.
The Consumer Protection Act, 2019, addresses product defects but struggles with harm arising from dynamic AI behavior rather than a fixed mechanical flaw. Tort and contract law provide remedies where negligence or breach can be shown, but these doctrines rest on foreseeability and human involvement, factors that weaken in machine-learning contexts. India therefore lacks a liability framework adapted to modern AI.
While Indian courts have not directly tackled AI liability, several rulings show judicial awareness of algorithmic influence. In Shreya Singhal v. Union of India, the Supreme Court clarified the scope of intermediary liability under Section 79 of the IT Act, indirectly shaping how online platforms manage algorithmically distributed content. In Sabu Mathew George v. Union of India, the Court acknowledged that automated search results can influence behavior and placed responsibility on search platforms to block illegal content.
The landmark privacy judgment, Justice K.S. Puttaswamy v. Union of India, highlighted the risks of profiling and automated decision-making, stressing individual autonomy and informational privacy. More recently, the Madras High Court in V.S. Krishna v. State of Tamil Nadu raised concerns about facial recognition and the opacity of algorithmic surveillance. Collectively, these cases show that Indian courts recognize AI's risks, even though specific AI liability rules are still missing.
Globally, several cases have addressed AI-related harm more directly. In Loomis v. Wisconsin, the use of the proprietary COMPAS risk-assessment algorithm in criminal sentencing raised transparency and due-process concerns. Jentzsch v. New York exposed the danger of wrongful arrests flowing from facial-recognition misidentification. In Bolger v. Amazon, a California appellate court held Amazon strictly liable for a defective third-party product sold through its marketplace, signaling that courts are willing to hold platforms responsible when their automated systems play a significant role in a transaction. Litigation over autonomous-vehicle accidents worldwide further illustrates the difficulty of apportioning fault among manufacturers, software developers, and operators. Together, these cases offer useful guidance for shaping India's future approach.
Determining responsibility for AI systems turns on identifying who exercised meaningful control. Developers may be liable for poorly designed models, inadequate testing, or biased training data. Businesses that deploy AI may share responsibility for weak oversight or the absence of human review. Users may also be liable where they misuse a system or ignore its guidelines. Because the adaptive nature of AI makes causation hard to establish, many scholars propose a layered liability model that distributes responsibility according to each party's role and the extent of its contribution to the risk.
To meet these challenges, India must adopt a forward-looking and systematic approach. A dedicated AI liability law is essential; it should define responsibility for autonomous actions, establish risk categories, and provide remedies for affected individuals. Mandatory algorithmic transparency, requiring companies to disclose data sources, risk assessments, and decision-making logic, would improve accountability. High-risk AI systems in sectors such as healthcare, finance, and public administration should undergo regular third-party audits for bias, safety, and reliability. A regime of shared liability between developers and deployers would close the gaps created by fragmented responsibility. India should also establish an AI incident-reporting system so regulators can track harms, patterns, and systemic risks. Strengthening consumer rights, including the right to an explanation and the right to appeal automated decisions, would protect individuals affected by them, while regulatory sandboxes can enable safe experimentation under oversight and public digital-literacy programs can help citizens understand AI risks and protections.
Artificial intelligence offers both transformative opportunities and significant legal challenges. India’s current laws were written for technologies that relied on human supervision, not systems capable of independent learning and decision-making. As AI becomes integral to various industries and government functions, establishing a clear liability framework is crucial for ensuring accountability, public trust, and ethical use. India is at a critical juncture; by embracing a comprehensive approach to AI liability, it can foster a legal environment that encourages innovation while protecting its citizens from algorithmic harm.