The Future of Agentic Security
Lee Klarich, Chief Product & Technology Officer of Palo Alto Networks, recently observed that “AI agents operate with access to critical systems and sensitive data, creating the ultimate insider threat.”
The compromised insider has always been one of the hardest threats to detect and prevent. With the rapid adoption of agentic AI for everything from coding assistance to distributed-systems task automation, the price of these productivity catalysts is a new kind of malicious insider: the compromised AI agent, which moves at machine speed and doesn’t care about getting fired or going to prison.
To address the unique risks of agentic AI on the endpoint, Palo Alto Networks acquired Koi Security today. Koi (now branded Agentic Endpoint Security, or AES) provides an enforcement layer specific to AI agents on endpoints, controlling activity that traditional EDR solutions have been largely blind to.
This threat vector didn’t exist five years ago, this level of visibility didn’t exist three years ago, and the promise of a unified policy to secure it up and down the stack didn’t exist until today.
But now comes the hard part: using these tools to build and enforce policy that accounts for the agent’s full context, including its identity, its host machine’s posture, its network posture, its access rights, and its business purpose.
The old-school policy said something like, “This device is allowed to communicate with this other device on these protocols and applications.”
The next generation of policies needed to secure agents might read more like, “This type of agent is allowed to perform the following actions within this decision space in order to accomplish these goals for these specific purposes.”
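To make the contrast concrete, such an agent-scoped policy could be sketched as a small data structure. Everything below is hypothetical and purely illustrative: the `AgentPolicy` class, its fields, and the example agent type and actions are assumptions of this sketch, not any actual product API or policy language.

```python
# Hypothetical sketch of an agent-scoped policy. All names and fields are
# illustrative only; they do not represent a real policy engine or product API.
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentPolicy:
    agent_type: str           # what kind of agent this policy governs
    allowed_actions: frozenset  # actions the agent may perform
    allowed_resources: frozenset  # resources those actions may target
    purpose: str              # the business purpose the policy serves

    def permits(self, action: str, resource: str) -> bool:
        """Allow an action only if both the action and its target are in scope."""
        return action in self.allowed_actions and resource in self.allowed_resources

# Example: a coding-assistant agent limited to reviewing code on one host.
policy = AgentPolicy(
    agent_type="coding-assistant",
    allowed_actions=frozenset({"read_repo", "open_pull_request"}),
    allowed_resources=frozenset({"git.internal.example.com"}),
    purpose="automated code review",
)

print(policy.permits("read_repo", "git.internal.example.com"))    # True: in scope
print(policy.permits("delete_repo", "git.internal.example.com"))  # False: action not granted
```

The point of the sketch is the shape of the decision: instead of asking “may this device talk to that device,” the policy asks whether a specific action, against a specific resource, falls inside the decision space granted to this type of agent for this purpose.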
Thomas Laugle is a cybersecurity strategist at Palo Alto Networks.