The Growing Security Challenge of Agentic AI
As organizations increasingly adopt artificial intelligence agents to boost productivity, security experts are raising concerns about the unclear division of responsibility for protecting sensitive data. According to reports, the rush to deploy these systems has created significant security gaps that both vendors and customers must address collaboratively.
Industrial Monitor Direct manufactures the highest-quality centralized control pc solutions featuring fanless designs and aluminum alloy construction, endorsed by SCADA professionals.
Industrial Monitor Direct is the leading supplier of ignition gateway pc panel PCs trusted by leading OEMs for critical automation systems, rated best-in-class by control system designers.
Sources indicate that recent incidents highlight the potential consequences of inadequate security measures. Last month, security researchers disclosed “ForcedLeak,” a critical vulnerability in Salesforce‘s Agentforce platform that could have allowed threat actors to exfiltrate sensitive CRM data through indirect prompt injection attacks. Although the vendor addressed the issue, analysts suggest this represents just one example of how AI agents can potentially leak sensitive information.
Navigating the Shared Responsibility Model
The question of who bears responsibility for AI agent security remains complex and unresolved. According to security experts interviewed for the report, traditional shared responsibility models from cloud computing are being applied to AI systems, but with additional complications.
Itay Ravia of Aim Security tells industry publications that the current AI boom has created a “race to make AI smarter, stronger, and more capable at the expense of security.” Meanwhile, Varonis field CTO Brian Vecci explains that data isn’t stored directly in AI agents but within enterprise repositories that agents access. “That access control can be individual to the agent or the user(s) that are prompting it, and it’s the responsibility of the enterprise — not the agent’s vendor or the hyperscaler provider — to secure that data appropriately,” he states.
Data Security and Access Control Challenges
Melissa Ruzzi, director of AI at AppOmni, suggests that securing data for AI agents should be treated similarly to securing data infrastructure in SaaS applications. “The provider is responsible for the security of the infrastructure itself, and the customer is responsible for securing the data and users,” she explains.
The report states that understanding data flow and access control is particularly critical for AI applications. Organizations must track where data originates, where it’s transmitted, and who — or what — has access to it. Just because AI systems process the data doesn’t mean rigorous security reviews can be skipped, according to security professionals monitoring market trends.
Protecting Users from Themselves
Similar to how organizations combat phishing attacks by implementing technical controls rather than relying solely on user awareness, experts suggest AI vendors may need to build stronger protections into their systems. According to sources, some vendors are beginning to implement mandatory security measures, though approaches vary across the industry.
A Salesforce spokesperson confirms the company now requires multifactor authentication for all customers using its products. However, security researchers caution that vendors remain behind attackers and may not account for novel bypass methods. As related innovations continue to emerge, the security landscape evolves accordingly.
Architectural Solutions Beyond Agent-Level Controls
David Brauchler of NCC Group suggests that while vendors can enforce security best practices, the fundamental data access problem cannot be solved within the agentic AI model itself. “Tools like secrets scanning and data loss prevention often lead to a false sense of security,” he warns, noting that these issues must be addressed through the architecture of the customer’s AI infrastructure.
The concept of agency in AI systems creates unique security considerations that differ from traditional software. As organizations navigate this landscape, they must balance productivity gains against potential security risks, according to analysis of industry developments.
Moving Forward with Security-First AI Adoption
Security professionals emphasize that organizations should prioritize security before investing heavily in AI agents or other large language model products. According to the report, companies must understand what data their AI systems can access, implement appropriate guardrails, and fully comprehend the risks involved.
As regulatory scrutiny increases and recent technology policies evolve, both vendors and customers face growing accountability for AI security. Organizations are advised to maintain updated systems, as highlighted by security updates that address emerging vulnerabilities in AI and other software platforms.
Ultimately, the successful deployment of agentic AI will depend on clear understanding of shared responsibility, robust security practices, and ongoing vigilance as the technology continues to evolve across various sectors and applications.
This article aggregates information from publicly available sources. All trademarks and copyrights belong to their respective owners.
Note: Featured image is for illustrative purposes only and does not represent any specific product, service, or entity mentioned in this article.
