
RSAC 2026 concluded last week, and it was a firestorm of AI and agentic AI announcements, products, services, and marketing. The mood on the show floor was positive, with crowds gathering around interesting demos and informational sessions, and, of course, around the good booth prizes and tchotchkes. Cybersecurity vendors and service providers shelled out for lavish booths, and even the smaller booths were, for the most part, cleverly decorated and marketed.
On the downside, the agentic AI hype (and AI hype in general) was so far over the top that it circled the earth and came back again. There were two main themes. The first: agentic AI is dangerous and needs immediate security protections. The second: the only way to secure AI, including agentic AI, is (wait for it) agentic AI. The bonus postscript theme was that threat actors are using AI and agentic AI right now. The confusing thing is that *none* of this is untrue.
But it is indicative of a fundamental tech industry failure, not on the part of cybersecurity, but of AI and agentic AI vendors and service providers. The rush to AI and agentic AI is being made with a 'damn the consequences' attitude, especially where agentic AI is concerned. Maybe, in a very kind, empathetic world full of cotton candy and kindness, the rush to AI itself could be excused. But the rush to agentic AI has no such excuse. Agentic AI could have been rolled out more gradually, with *actual* cybersecurity protection, including data protection, regulatory compliance, and responsibility tracking. Instead, the rush to market ignored all common sense and any consideration of the dangers to which agentic AI exposes enterprises, institutions, governments, and regular folks alike.
Every business publication tells boards of directors, CEOs, and other C-level execs that they MUST have agentic AI or fail, just as they said about AI itself. Whether that is true is not really the question here. The question is why, beyond money, the AI industry could not have done a better job of establishing baseline standards and practices and, of course, ensuring security, especially for agentic AI.
Can anyone imagine another industry that would survive without significant reputational, regulatory, and financial consequences if it behaved with such unrestrained abandon? AI and agentic AI are here to stay, but it’s up to customers to pump the brakes and ensure they don’t implement a technology that leaves them vulnerable to attack in ways that even AI’s creators can’t fully envision.