Slow Your Roll on AI

S. Schuchart

AI has been all the rage for at least three years now — first generative AI (GenAI), and now agentic AI. AI can be genuinely useful. At GlobalData we have done some very cool things with AI on our site: strategic things that serve a defined purpose and add value. The use of AI at GlobalData hasn’t been indiscriminate; it has been thought through in terms of how it could help our customers and ourselves. Even this skeptical author can appreciate what’s been done.

But a lot of what is happening out there with AI is indiscriminate and doesn’t attack problems in a prescriptive way. Instead, it is sold as a panacea — a cure for all business and IT ills. The claims are always huge but strangely lacking in detail. This is particularly true for agentic AI, which only in the last month managed to get the Model Context Protocol (MCP) into the Linux Foundation as a standard. The security issues of agentic AI are still largely unaddressed, and certainly not addressed in any standardized fashion. It’s not that agentic AI is a bad idea — it isn’t. But the way it’s being sold has a tinge of irrational hysteria.

Sometimes when a vendor proudly introduces a new capability that uses agentic AI, it’s not clear why that capability being ‘agentic’ makes any difference compared with plain AI. New AI features are appearing everywhere, with vendors jamming AI into every nook and cranny, ignoring the privacy issues, and making it next to impossible to avoid or turn off. The worst part is that these AI features are often half-baked ideas implemented too quickly — or, even worse, written by AI itself, with all of the security and code-bloat issues that ensue.

The prevailing wind — no, scratch that, the hurricane-force gale — in the IT industry is that AI is everything, AI must be everywhere, and *any* AI is good AI. Any product, service, solution, or announcement must spend at least half of its content on how this is AI and how AI is good.

AI *can* be a wonderful thing. But serious enterprise IT administrators, coders, and engineers know a few things:

1. In a new market like AI, not every company selling AI will continue to sell AI. There will be consolidation, especially in an overhyped trend. Vendors and products will disappear.
2. Version 1.0 hardly ever lives up to its billing. Even Windows wasn’t really minimally viable until 3.1.
3. Aligning IT/business value received vs. costs to implement/continue is a core component of the job.
4. The bigger the hype, the bigger the backlash.
5. The bigger the hype, the bigger the fear of missing out (FOMO) amongst senior management.
6. The problems are in the details, not in the overall concept.

So let’s all slow our roll when it comes to AI. More focus on what matters: what can *demonstrably* provide value vs. what is merely claimed to provide value. Implementation costs, as well as one-year, three-year, and five-year costs. Risk assessment from a data privacy, cybersecurity, and regulation standpoint. In short, a little more due diligence and a lot less FOMO. AI is going to happen; that’s not the issue. The issue is for enterprises to implement AI where it will help, rather than viewing it as a panacea for all problems.
