To deliver reliable enterprise AI, Systems of Agents will rely on Systems of Knowledge (2025)

While there's been some speculation that AI could lead to the death of SaaS, the reality is that AI has a lot to learn from today's enterprise application vendors before it can afford to kill them off.

It's been instructive over the past year or two, watching as vendors have sought to adapt generative AI to deliver reliable results for enterprise use cases. The trouble with generative AI in an enterprise context is that it's remarkably good at providing plausible answers. But the mathematics that powers it is concerned with probabilities rather than precision. Its answers aren't always accurate enough to be relied on in a work environment, especially when precise numbers are needed to validate transactions, or when exact instructions are required to ensure safety and compliance. The big challenge is finding a way to harness the undoubted power of the technology while ensuring that it delivers answers precise and exact enough to be relied on.

Some advocates of generative AI have argued that this is simply a matter of data volume. The more data you feed it, so the argument goes, the more likely it is to provide the right answer. But while this may work for commonplace questions, where the right answer can often be relied on to bubble to the top, it's less certain for more esoteric topics, where the data is necessarily in short supply. And it's very wasteful of processing power to answer such questions without first narrowing the search in a way that reduces the volume of data from which the answer has to be inferred. Even common accounting terms aren't reliably understood by general-purpose LLMs, as accounting software vendor Sage has found; instead, it has built its own specialized language models.

This means that generative AI isn't about to sweep away the current generation of enterprise applications — as some have suggested — to replace them with systems of interconnected autonomous agents that act directly on the underlying data sources. Those agents will still need ways of making sense of that data and putting it into a business context, so that they can achieve the intended goals. While it's conceivable that generative AI will one day be able to figure this out unaided — let's return to that thought later — for the foreseeable future it will need a lot of support, which established enterprise software vendors are gearing up to provide.

SaaS adds crucial context

The core point here is that raw data has little value in itself. Like crude oil, it has to be refined before it becomes useful. Internet search engines faced a similar challenge in their early years. They had no way to evaluate the relative authority of the many sources that used a given keyword or search term. This was solved by Google's PageRank algorithm, which gives each page a weighting based on which other pages link to it. This adds human context — the choices made by those other page authors — that allows the algorithm to decide which search results seem most trustworthy.
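The idea behind PageRank can be made concrete with a toy sketch. This is a simplified illustration of the power-iteration approach, not Google's production algorithm, and the page names are invented — but it shows how each page's authority is derived from the choices of the pages that link to it:

```python
# Minimal PageRank sketch: each page redistributes its score along its
# outbound links, with a damping term, iterated until scores stabilize.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with equal authority
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, targets in links.items():
            if targets:
                share = damping * rank[page] / len(targets)
                for t in targets:
                    new_rank[t] += share
            else:  # dangling page: spread its rank evenly across all pages
                for t in pages:
                    new_rank[t] += damping * rank[page] / n
        rank = new_rank
    return rank

# Two pages both link to 'c', so 'c' accumulates the most authority.
ranks = pagerank({"a": ["c"], "b": ["c"], "c": ["a"]})
```

The human context lives entirely in the `links` structure: the algorithm itself knows nothing about the pages' content, only about who vouches for whom.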

LLMs need a similar way of figuring out which answers to pick, and in the enterprise arena, it's SaaS that provides that context. One of the intriguing facets of watching all this evolve over the past year or so has been the growing realization of how much intelligence vendors already have built into their existing systems — just like the links that already existed when search engines first tried to make sense of the Web. The data models, structures and workflows of enterprise applications already embody the accumulated knowledge and experience of the many architects, developers and enterprises that have helped to create and refine them over time. It's simply a matter of finding ways to expose this embedded intelligence — establishing a system of knowledge — that the AI can work with.

Historically, these systems of knowledge have been implicit within the application architecture rather than explicitly structured for external consumption. They exist in the form of metadata — data about data — that brings context to make the data meaningful. Enterprise applications have traditionally added metadata by storing data within database schemas or manipulating it in software objects. More recently, graph databases have emerged to map the relationships between the various components in enterprise processes, such as people, resources, projects and goals. At the same time, vendors have become more adept at mapping the various steps that make up automated processes, making it easier to rearrange the steps or join processes together. There's also a lot of policy, procedures and know-how stored as unstructured data in various enterprise document stores, often tagged with metadata or categorized into folders. Vendors are now in a race to make these systems of knowledge explicit, so that they can provide the context that LLMs will use to make sense of the raw data.
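The difference that metadata makes can be shown with a toy example. All the field names, identifiers and relationships below are invented for illustration — the point is how the same raw value becomes meaningful once schema-style metadata and graph-style relationships surround it:

```python
# A bare number carries no business meaning on its own.
raw_value = 14250.00

# Schema-style metadata turns it into something an agent can reason about:
# what the figure is, what currency it's in, which business object it
# belongs to, and where that object sits in a workflow.
record = {
    "value": raw_value,
    "field": "invoice_total",
    "currency": "USD",
    "entity": {"type": "invoice", "id": "INV-0042", "customer": "Acme Corp"},
    "workflow_state": "awaiting_approval",
}

# A graph-style view of the same knowledge: edges relate business objects
# to each other, the way a graph database maps people, projects and goals.
edges = [
    ("INV-0042", "billed_to", "Acme Corp"),
    ("INV-0042", "part_of", "Project-Phoenix"),
    ("Project-Phoenix", "owned_by", "Finance"),
]
```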

Building out Systems of Knowledge

This is why, for example, SAP has been building out a foundation model that has a knowledge graph at its heart, and ServiceNow is layering a knowledge graph and a unified Common Service Data Model over its core CMDB. It's why you hear digital teamwork vendor Atlassian talk about the teamwork graph that powers what it calls its System of Work. It's why Salesforce talks up Data Cloud, which connects data from various application-specific silos of information to align with the metadata models its own applications use. As Rahul Auradkar, EVP & GM of Unified Data Services & Einstein at Salesforce, explains:

If you have a finance department that wants to have their own set of data to present to the SEC what the reporting should be, or legal is doing contracts, those exist and we refer to them as silos. That, in itself, is not a bad thing. That is the enterprise estate — systems that are pertinent to the particular business need. The key is, how do you bring those silos to life when you're serving customers? That's where a combination of zero-copy architecture, ingest, transforms, harmonization, unification — all of that matters.

It has to be brought to life in the context of what [is] the business value that we are creating for customers, and that's where the metadata matters... In themselves, those silos are needed for them because they're needed to run their business. But how do you unify harmonized data that exists in those silos so that they can contextualize that for predictive and generative AI?

These systems of knowledge help to direct LLMs and agents so that they can provide more accurate results. Auradkar goes on:

The intelligence, if you may, of LLMs is one small part of the overall AI value that they are delivering. It's all the things surrounding it, the ability for us to feed the right data for it, the ability for us to drive automations and actions using data context...

The thin veneer, if you may, on top of an LLM, which the early co-pilots were, have only that much amount of value for our customers. It's more about all the, what we in Salesforce refer to as a deeply unified platform, that delivers value across multiple different dimensions.

Modeling the world of business

Across any vendor's platform, this context and structure are needed to create the prompts and instructions used in Retrieval Augmented Generation (RAG) when querying LLMs, or to direct the 'reasoning engines' that validate and orchestrate the responses and actions of AI agents. They also help define the data sets required to create specialized language models tailored to specific knowledge domains, ranging from industry verticals to functional expertise.
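The RAG pattern can be sketched in a few lines. This is a deliberately naive illustration — retrieval here is simple keyword overlap, whereas production systems use vector embeddings and semantic search, and the knowledge-base snippets are invented:

```python
# Hypothetical RAG sketch: retrieve the knowledge-base snippets most
# relevant to a question, then assemble a grounded prompt for an LLM.
def retrieve(question, documents, top_k=2):
    """Rank documents by naive keyword overlap with the question."""
    q_terms = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question, documents):
    """Ground the LLM by restricting it to retrieved enterprise context."""
    context = "\n".join(f"- {d}" for d in retrieve(question, documents))
    return (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )

kb = [
    "Purchase orders over $10,000 require CFO approval.",
    "Expense reports are filed monthly.",
    "Invoices are matched to purchase orders before payment.",
]
prompt = build_prompt("Who must approve purchase orders over $10,000?", kb)
```

The system of knowledge plays the role of `kb` here: it narrows the search before the LLM is asked anything, which is exactly the grounding that generic models lack.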

LLMs need grounding with these systems of knowledge for the simple reason that artificial intelligence can only work with what it knows. AI can't create context, structure and rules without having the source material from which to derive them. This is why the first triumphs of AI over human counterparts have been in games such as chess and Go, and why LLMs are proving very successful at writing software code — these are domains where the rules are already well-defined. As Demis Hassabis, co-founder and CEO of Google DeepMind, whose AlphaGo AI taught itself to become a world-class Go champion, explains:

Games obviously are very limited and they're quite easy — the rules that you describe usually have perfect information, so that they're relatively easy setups compared to the real world. So the question is, how fast can we generalize the planning ideas and agentic behaviors into planning and reasoning, and then generalize that over to work in the real world, on top of things like world models — models that are able to understand the world around us?

The systems of knowledge embedded in enterprise SaaS are the best source we have at the moment to model the world of business. But these are proprietary systems, reserved to each enterprise application vendor, rather than openly available on the Internet like the raw material that has fueled GPT and other general-purpose LLMs. Unsurprisingly, vendors are highly protective of these walled context gardens, as you might call them, because they keep customers on their home turf. Each vendor is racing to make its own system of knowledge as comprehensive as possible — and offering to make sense of all its customers' data held in other vendors' systems too. No one at the moment is building a vendor-neutral system of knowledge. Instead, the focus is on connecting these separate proprietary context gardens through standardized integrations such as Anthropic's Model Context Protocol and the Google-led Agent2Agent (A2A) protocol.
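To make the integration layer concrete, here is a sketch of what a Model Context Protocol exchange can look like on the wire. MCP is built on JSON-RPC 2.0; the tool name (`crm.lookup_account`) and its arguments below are hypothetical, standing in for whatever capability a vendor's system of knowledge might expose to an agent:

```python
import json

# Sketch of an MCP-style tool invocation: a JSON-RPC 2.0 request asking a
# server to call one of its exposed tools. The tool name and arguments are
# invented for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "crm.lookup_account",
        "arguments": {"account_id": "ACME-001"},
    },
}
wire_message = json.dumps(request)
```

The significance is that the protocol standardizes the envelope, not the knowledge: each vendor still decides which tools, and which context, it exposes through it.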

Dissolving application boundaries

But what if LLMs didn't need to rely on incumbent application vendors and their systems of knowledge to make sense of enterprise data and processes? I said earlier on that we'd come back to this question. Is there a possibility that future LLMs will be able to independently make use of what DeepMind's CEO calls a "real-world model" of work and business? It seems like it's only a matter of time. For now, SaaS vendors are protecting their proprietary systems of knowledge, but they're going to have to work with LLM developers to hone the performance of those LLMs. That means that the LLMs will need to be able to natively understand more of the context they're working with. At the same time, vendors will need to evolve their systems of knowledge to eliminate some of the redundancy built into their existing application structures. The ultimate goal is an entirely new form of AI-driven, real-time enterprise application of the kind that diginomica contributor Brian Sommer recently described:

[A] Non-Linear Application... ditches the step-by-step transaction process and serves up information and forms when warranted and does the rest behind the scenes. And, it doesn’t necessarily have to do the work in a rigid, one-size manner. Workers don’t have to bounce around from one application to another to complete a business event.

Vendors are already discovering that the traditional boundaries between applications are starting to dissolve as they develop new AI agents — HCM and financials vendor Workday recently launched an agent management application, while teamwork vendor Atlassian just announced a workforce planning application. Meanwhile the likes of ServiceNow and Salesforce are each talking about building agentic workflows that reach across all enterprise applications.

At the moment, this looks like an opportunity for these SaaS vendors to expand their market reach. But what if it also allows for the emergence of a standardized system of knowledge for enterprise work that is independent of any individual enterprise application vendor? In the coming years, maybe AI will kill off SaaS — but only after it has drained the lifeblood out of those systems of knowledge to build its own new model of the enterprise world.
