The future of AI in India would be defined by the intelligence of its systems, their strength and the responsibility the country takes to deploy and secure them.
| Photo Credit: KIRILL KUDRYAVTSEV
For India, where digital public infrastructure and AI-driven innovation are becoming central to economic growth, agentic AI is a massive opportunity as well as a potential liability, said Saugat Sindhu, Global Head, Advisory Services, Cybersecurity & Risk Services, Wipro Limited.
However, he quickly added, “Security, privacy, and ethical oversight must evolve as fast as the AI itself.’‘
The future of AI in India would be defined by the intelligence of its systems, their strength and the responsibility the country takes to deploy and secure them.
According to Mr. Sindhu, agentic AI technologies are reshaping productivity, governance, and national security in an era where machines no longer just assist but act.
Listing out some of the most critical cyber risks of agentic AI, he said India’s digital economy was booming — from UPI payments to Aadhaar-enabled services, from smart manufacturing to AI-powered governance. But as artificial intelligence evolves from passive large language models (LLMs) into autonomous, decision-making agents, the cyber threat landscape is shifting dramatically.
These agentic AI systems can plan, reason, and act independently — interacting with other agents, adapting to changing environments, and making decisions without direct human intervention. “While this autonomy can supercharge productivity, it also opens the door to new, high-impact risks that traditional security frameworks aren’t built to handle,’‘ he cautioned.
There could even be threats involving tool misuse where attackers trick AI agents into abusing integrated tools (APIs, payment gateways, document processors) via deceptive prompts, leading to hijacking. Or it could be memory poisoning where malicious or false data is injected into an AI’s short- or long-term memory, corrupting its context and altering decisions., he explained.
Another form of critical threat could be resource overload which involves attempts to overwhelm an AI’s compute, memory, or service capacity to degrade performance or cause failures — especially in mission-critical systems like healthcare or transport systems.
Cascading hallucination is another kind of threat. Here, AI-generated false but plausible information made to spread through systems, disrupting decisions — from financial risk models to legal document generation.
For example, Mr. Sindhu elaborated, an AI agent in a stock trading platform generated a misleading market report, which was then used by other financial systems, amplifying the error.
Published – October 17, 2025 09:42 pm IST