
The legal gaps in India’s unregulated AI surveillance


In 2019, the Indian government made headlines by announcing its intention to create the world's largest facial recognition system for policing. In the five years since, this ambition has materialised, with Artificial Intelligence (AI)-powered surveillance systems deployed across railway stations and the Delhi Police preparing to use AI for crime patrols. The latest plans include launching 50 AI-powered satellites, further intensifying India's surveillance infrastructure.

While technological integration in law enforcement is commendable, it raises substantial legal and constitutional concerns. The use of AI for surveillance has global parallels, often resulting in “dragnet surveillance”, a term that refers to indiscriminate data collection beyond just suspects or criminals. As observed with Section 702 of the Foreign Intelligence Surveillance Act (FISA) in the United States, even well-intended surveillance laws can result in overreach, infringing on citizens’ rights.

This article explores the legal frameworks, gaps, and concerns surrounding AI surveillance in India and how they intersect with constitutional rights, particularly the right to privacy.

The Telangana Police data breach earlier this year revealed deep-rooted concerns about the data collection practices of Indian law enforcement agencies. According to reports, Hyderabad police had access to databases from social welfare schemes, including “Samagra Vedika”, raising questions about the scope of data being collected and the lack of transparency regarding its use.

Lack of proportional safeguards

While data-driven governance offers solutions for public welfare and crime prevention, these practices must be measured against the individual’s right to privacy, as guaranteed under Article 21 of the Constitution. The Supreme Court of India, in K.S. Puttaswamy vs Union of India (2017), recognised privacy as a fundamental right, extending its scope to “informational privacy”. The judgment emphasised that the era of “ubiquitous dataveillance” brings challenges that must be addressed through robust legal frameworks. However, the extent of surveillance infrastructure in India currently lacks proportional safeguards, leading to legitimate concerns about the implications of AI-driven data collection.

The Digital Personal Data Protection Act (DPDPA), passed in 2023, was meant to provide a framework for managing consent and ensuring accountability for data privacy in India. However, the law has been heavily criticised for broad exemptions that grant the government unchecked power to process personal data.

For instance, Section 7(g) of the DPDPA waives the need for consent when processing data for medical treatment during an epidemic. Section 7(i) further exempts the government from consent requirements for processing data related to employment, a particularly concerning clause given that the government is India’s largest employer. These exemptions raise red flags about the potential for misuse, especially when applied to AI-powered surveillance technologies that operate on vast quantities of personal data.

Moreover, the DPDPA introduces obligations for citizens that could further exacerbate privacy concerns. Section 15(c) mandates that citizens not suppress any material information when submitting personal data. This provision, while intended to ensure data accuracy, could lead to punitive measures for something as simple as an outdated address or a technical error in data collection systems.

In short, the DPDPA places heightened scrutiny on individual data while offering the government broad leeway in its use and collection. Given the profound implications of AI technologies in processing sensitive personal information, the legal framework appears unbalanced, skewed in favour of state surveillance over individual rights.

The approach in the West

India is not alone in grappling with AI and its impact on civil liberties. The European Union (EU) has enacted regulations that could serve as a useful guide for India. The EU’s Artificial Intelligence Act takes a risk-based approach to AI activities, categorising them into unacceptable, high, transparency, and minimal risk levels. Unacceptable risk activities, such as real-time remote biometric identification for law enforcement, are prohibited under EU law unless exceptions apply, such as searching for victims of serious crimes or responding to imminent threats. In stark contrast, India has begun deploying AI-powered facial recognition technology and CCTV surveillance in public spaces with little to no legislative debate or risk assessment. For example, Delhi and Hyderabad have integrated AI into policing without any publicly available guidelines on how data is collected, processed, or stored, or how potential abuses of the technology will be prevented.

As of now, AI remains largely unregulated in India. In 2022, the government promised that AI technologies would be regulated under the upcoming Digital India Act, but draft legislation has yet to materialise. This regulatory void leaves citizens vulnerable to the risks associated with AI-powered surveillance, including the infringement of privacy, discrimination, and data breaches.

Countries such as the United States and members of the European Union have already begun to legislate on the use of AI in public systems, with clear categorisations and restrictions for technologies that could pose a significant threat to civil liberties. The absence of a similar legal framework in India is troubling, especially given the government’s ambitious plans to expand surveillance capabilities.

At its core, the debate over AI surveillance in India touches on fundamental constitutional questions. The right to privacy, as enshrined in Article 21, and the principle of proportionality, as outlined in the Puttaswamy judgment, demand that any intrusion into personal data be backed by law, pursue legitimate aims, and be proportionate to the goal pursued. However, the existing surveillance framework, bolstered by AI technologies, appears to stretch these principles to their limits.

Address the impact on civil liberties

It is not the use of AI in governance itself that is problematic, but rather its unchecked application without sufficient safeguards. A comprehensive regulatory framework that addresses AI’s implications for civil liberties is urgently needed.

Such a framework would help protect the public interest, in consonance with the right to privacy, if it included provisions for transparent data collection practices: public disclosure of what data is being collected, for what purpose, and how long it will be stored. Furthermore, the framework must ensure that consent-gathering mechanisms carry only narrow and specific exemptions for processing data, subject to independent and effective judicial oversight. This would not only ensure transparency in consent gathering but also safeguard the constitutionality of such applications of AI-based data processing. In this context, India could benefit from adopting a risk-based regulatory approach, such as the EU's, which categorises AI activities based on the risks they pose to citizens' rights.

India is at a crucial juncture in deploying AI-powered surveillance. While integrating advanced technologies in law enforcement and governance offers immense potential, it must be balanced against citizens’ constitutional rights. Policy decisions that embed privacy measures into infrastructure before deployment, with inherent safeguards in surveillance protocols, are vital. Consent mechanisms, transparency reports, and judicial oversight at relevant stages of data collection and management can avoid costly retrofits and retraining.

Though the DPDP Act addresses some issues, criticisms persist, and the long-awaited DPDP Rules remain unnotified. To mitigate risks from AI-driven surveillance, regulating “high-risk activities” through restrictions on digital personal data processing and transparent auditor oversight of data sharing is crucial. A proactive regulatory approach will ensure AI serves public interest without compromising civil liberties.

Shri Venkatesh is the Managing Partner at SKV Law Offices and has over a decade's experience in dispute resolution. Bharath Gangadharan is Counsel with the Dispute Resolution Team at SKV Law Offices. Aashwyn Singh is an Associate with the Dispute Resolution Team at SKV Law Offices. Anuj Nakade is a Content Writer with SKV Law Offices and a digital content creator.
