AI models, especially those integrated into enterprise workflows, are susceptible to manipulated inputs and training data, leading to compromised decision-making or data exposure. | Photo Credit: KACPER PEMPEL
Using an AI agent to write code or launch an application? You’d better think twice before deploying it in the office environment as it is. You might have found it very easy to develop agents or apps using the LLMs, but they might come with certain vulnerabilities that can be manipulated by hackers. Cybersecurity experts cite the incidence of injection of malware into the apps and agents, which could pose serious challenges to organisations.
AI-driven applications are quickly becoming essential in diverse sectors like finance, healthcare and legal. Organisations leverage them for tasks including automating customer service, processing sensitive data, generating code and aiding business decisions.
But what happens when attackers find ways to manipulate these systems? AI agent vulnerabilities could result in unauthorised execution of malicious code; theft of sensitive company or user data; manipulation of AI-generated responses; and indirect prompt injections leading to persistent exploits, cybersecurity company Trend Micro says.
AI models, especially those integrated into enterprise workflows, are susceptible to manipulated inputs and training data, leading to compromised decision-making or data exposure. Prompt injection, in particular, can bypass expected behaviour by influencing AI outputs covertly.
Jaydeep Singh, General Manager for India, Kaspersky, said the company has identified a rising interest among threat actors in targeting AI systems, including Agentic AI.
“While large-scale attacks are still developing, we’ve seen early-stage exploitation attempts involving prompt injection, vector store poisoning and SQL injection. Our AI-CERT initiative monitors such threats and helps coordinate response efforts for AI-specific vulnerabilities,” he said.
A senior developer, however, felt that this concern was not something new.
“There have been concerns when we used to take the off-the-shelf, ready-to-use codes from open-source platforms. Some companies made it a point to use the proprietary code, even if there is an alternative open-source code is available for free. This is to avoid unnecessary challenges that might come by when you use free code,” he says.
He suggests that one should have proper guardrails in place to avoid bottlenecks or harms at a later stage in the product development. This holds good in the pre-LLM era and in the LLM era.
Sean Park, Principal Threat Researcher at cybersecurity solutions company Trend Micro said that a spurt in LLMs gave rise to several cybersecurity questions.
“Can a Large Language Model (LLM) service become a gateway for cyberattacks? Could an LLM executing code be hijacked to run harmful commands? Can hidden instructions in Microsoft Office documents trick an AI agent into leaking sensitive data? How easily can attackers manipulate database queries to extract restricted information – these are some of the fundamental security questions AI agents face today,” he said.
While large-scale, public cases of Agentic AI compromise are still limited, Kaspersky has observed precursors that indicate how such attacks may unfold.
“For instance, we’ve tracked info-stealer malware campaigns that target systems integrating machine learning, aiming to exfiltrate training data or access tokens. In some scenarios, attackers exploited weak API security or injected harmful prompts to influence AI-driven chatbots and automation tools,” Singh pointed out.
“These tactics mirror the prompt injection concept subverting an agent’s logic to leak sensitive outputs or misperform tasks. Additionally, attacks on vector stores, often used for semantic search in AI apps, have shown how corrupted embeddings can return misleading or harmful results,” he said.
To counter these emerging threats, a multi-layered approach is crucial. Singh recommends that developers must implement strict input validation to mitigate prompt injection and similar manipulation attempts.
“They should apply secure coding principles, routine code reviews and isolating AI modules from critical infrastructure,” he said.
He also advises hardening access controls and encrypting training data to defend against vector store poisoning, and conducting regular penetration testing and audits post-deployment.
Published on June 15, 2025
Comments
Comments have to be in English, and in full sentences. They cannot be abusive or personal. Please abide by our community guidelines for posting your comments.
We have migrated to a new commenting platform. If you are already a registered user of TheHindu Businessline and logged in, you may continue to engage with our articles. If you do not have an account please register and login to post comments. Users can access their older comments by logging into their accounts on Vuukle.