Emerging AI governance tools offer a step toward visibility and control of what's happening in a company's AI applications.
They say that half the money spent on advertising is wasted, but the problem is figuring out which half. If that's true, the problem is arguably worse for AI. Talk to people deep in the AI weeds and they'll tell you upwards of 90% of money spent on AI is waste, and mountains of cash are chasing that elusive 10% because the potential payoff is so good. Accenture, for example, has booked $2 billion just this year to help clients make sense of AI. Nvidia and the clouds keep raking in tens of billions more, too.
Clearly there's a lot of money in AI. The question for most companies needs to be: Which investments are working, and which should be dumped?
Although there hasn't been an obvious answer to that question, a new class of software is being designed to provide answers. Just as data science brought us data governance, companies like Holistic AI deliver AI governance. Fledgling efforts have tried to treat AI governance as an extension of data, IT, or cloud governance, when it actually requires its own unique, distinct approach, given the need to move well beyond standard risk assessment to also include factors such as bias, effectiveness, and explainability.
If this doesn't seem to be the sexiest category of software, think of it this way: If it helps companies improve their AI win rate, that's incredibly sexy.
The stakes are high for AI
Yes, our industry has its fair share of overblown hype for technology "trends" that turn out to be vaporous fads (e.g., Web3, whatever that was). But AI is different. Not because I want it to be, or because AI vendors hope it will be, but because however much we poke holes in it (hallucinations, etc.), it's still there. Though generative AI is a relatively new spin on AI, the technology itself is a relatively mature, much larger market that includes things like machine learning. Companies may have been more obvious in posturing around AI in the past year or two, but don't let that confuse you. Just this week I talked with a company that has a large number of AI applications running, each costing close to a million dollars a year.
Clearly that Fortune 500 company sees value in AI. Unfortunately, it's not always clear which of its costly applications are delivering on their promise, and which are introducing more risk than reward.
When a company elects to build an AI application, it's placing a lot of faith in large language models (LLMs) or other tools without much (if any) visibility into how the models yield results. This can be catastrophic for a company if it turns out its algorithms are persistently biased against a protected class (ethnic minorities, etc.), misprice products, or cause other mishaps. Regulators and boardrooms are therefore paying more attention to so-called "algorithm conduct" to ensure AI delivers boom, not bust.
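To make the bias risk concrete, here is a minimal sketch of the kind of check a governance tool might run: the "four-fifths rule" disparate impact ratio used in fairness auditing, which flags any group whose favorable-outcome rate falls below 80% of the best-treated group's rate. The function name and the toy loan-approval data are illustrative, not any vendor's actual API.

```python
# Hypothetical fairness check: disparate impact ratio (four-fifths rule).
# All data below is made up for illustration.

def disparate_impact_ratio(outcomes, groups, favored_label=1):
    """For each group, return the ratio of its favorable-outcome rate to the
    highest group's rate. Ratios below 0.8 are a common red flag."""
    counts = {}
    for outcome, group in zip(outcomes, groups):
        granted, total = counts.get(group, (0, 0))
        counts[group] = (granted + (outcome == favored_label), total + 1)
    selection = {g: granted / total for g, (granted, total) in counts.items()}
    best = max(selection.values())
    return {g: rate / best for g, rate in selection.items()}

# Toy loan-approval outcomes (1 = approved) for two applicant groups
outcomes = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
ratios = disparate_impact_ratio(outcomes, groups)
flagged = [g for g, r in ratios.items() if r < 0.8]  # groups below the threshold
```

Here group A is approved 60% of the time and group B 40%, so B's ratio is about 0.67 and it gets flagged for review. A real audit would use far more data and more than one metric, but the shape of the check is the same.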
From commodity to velocity
It has already become tedious to review the newest LLMs. On an almost daily basis, Meta one-ups OpenAI, which one-ups Google, which one-ups any company with the capacity to invest billions in infrastructure and R&D on model performance. The next day they rotate which one claims to be fastest. Who cares? In aggregate it matters because enterprises are getting better performance at lower cost, but none of it matters if those same enterprises can't build on the models with confidence.
To gain true business velocity through AI, companies need full visibility and control across all AI projects. Holistic AI, for example, seamlessly integrates with all common data and AI systems. Even better, it automatically discovers AI projects across the organization, streamlines inventory management, and offers a unified dashboard so that executives get a broad view of their AI assets and can act accordingly. For example, the Holistic AI software surfaces potential regulatory and technical risks in a particular application, alerting the team so that the company can resolve the issue before it becomes embarrassing or expensive (or both).
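The inventory-plus-risk-surfacing idea can be sketched in a few lines. The names (`AIAsset`, `scan_inventory`) and the risk rules below are hypothetical illustrations of the pattern, not Holistic AI's actual product or API.

```python
# Hypothetical sketch of an AI application inventory with simple risk flags.
from dataclasses import dataclass

@dataclass
class AIAsset:
    name: str
    owner: str
    annual_cost_usd: float
    has_bias_audit: bool = False
    regulated_use: bool = False  # e.g., hiring, credit, healthcare

def scan_inventory(assets):
    """Return (asset, reasons) pairs for assets that need governance review."""
    findings = []
    for a in assets:
        reasons = []
        if a.regulated_use and not a.has_bias_audit:
            reasons.append("regulated use without a bias audit")
        if a.annual_cost_usd > 500_000 and not a.has_bias_audit:
            reasons.append("high spend without governance review")
        if reasons:
            findings.append((a, reasons))
    return findings

inventory = [
    AIAsset("resume-screener", "HR", 900_000, regulated_use=True),
    AIAsset("support-chatbot", "CX", 200_000, has_bias_audit=True),
]
for asset, reasons in scan_inventory(inventory):
    print(f"{asset.name}: {'; '.join(reasons)}")
```

Even this toy version shows the value: the million-dollar resume screener, sitting in a regulated use case with no bias audit, surfaces immediately, while the audited chatbot stays quiet.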
This isn't akin to cloud governance tools, if for no other reason than that the stakes are so much higher. You can think of cloud as an inherently better, more flexible way of managing hardware or software assets, but it doesn't necessarily fundamentally change how we think about these concepts (though serverless, for example, does challenge the thinking around provisioning of infrastructure to support an application). There's a reason we jokingly refer to cloud as "someone else's computer." Not so with AI, which fundamentally changes what's possible with software and data, although often in ways that we can't explain. This is why we need AI governance tools like Holistic AI that help increase the velocity of effective AI experimentation and adoption by minimizing the risk that we're using AI in ways that will hurt more than help.
The faster we want to move on AI, the more we need guardrails through AI governance systems. Again, this isn't about forcing teams to slow down; it's a way to speed up by ensuring less time is wasted on risky, ineffective AI projects.