The knowledge problem
There’s a whiteboard in almost every enterprise sales office that nobody photographs before it gets wiped. The one where someone sketched the real account map, figured out who the blocker actually was, found the story that would work for this customer at this moment. An hour of collective thinking made visible. Then Monday morning, gone. Everyone remembers the theme of it. Not the detail.
This is a knowledge problem more than a technology one, and revenue teams are uniquely bad at solving it.
The intelligence stack
The last decade of commercial intelligence investment ran in two directions. Data intelligence: dashboards, pipeline analytics, win rate reporting, forecasting. It tells you what happened. Customer intelligence: CRM enrichment, intent data, account scoring, buying signals. It tells you who to target and what they care about. Both genuinely useful. Both well-funded.
And together, they miss the layer that actually determines whether a deal closes.
Execution intelligence is the accumulated contextual knowledge of how your team operates. Why that pricing exception got made last quarter. What message worked with the CFO at that account. How the rep who ramped in 60 days thought about their first six months differently from the one who took 180. The negotiation sequence that held during the last competitive displacement. None of it is in your CRM. Some of it is in someone’s head. The rest exists in conversations that have already closed.
This knowledge exists. It’s just structured nowhere.
What this actually costs
I spend time thinking about what this costs in practice. One account planning cycle, one organisation, roughly a billion and a half in revenue: 24,000 hours annually, just to prepare the insight to have the conversation. Not delivering on the accounts. Preparing to discuss strategy. Strategic account planning happened once a year not because nobody valued it, but because gathering the context to do it properly was genuinely prohibitive.
Even after that effort, most of what gets generated doesn’t stick. The synthesis lands in a document. The document gets filed somewhere. The actual knowledge (who’s the real coach in the buying committee, what messaging landed in the last executive briefing, what’s shifted in the account’s internal politics since renewal) all dissipates. The next cycle starts from close to zero.
I’ve been in planning sessions where a team produced genuinely good account intelligence, three hours of collective reasoning that mapped the situation clearly and honestly. Six months later, half the team had turned over. The document existed. The understanding behind it didn’t.
How AI makes this worse
AI has made this worse, not better.
That’s uncomfortable to say when the productivity gains are real. Teams using AI for deal structuring, competitive positioning, and executive communication are moving faster, and the outputs are better. But outputs are only half of what an AI conversation produces. Every time I structure an article or design a model, there are elements that could be valuable in a few months’ or a year’s time, and they get lost.
Every substantive AI session generates two things: outputs you can save, and context you almost certainly don’t. The outputs (the document, the draft, the analysis) go into your existing tools or local drives. The context (the reasoning, the account-specific framing, the judgment calls about what to include and why) disappears when the session closes. Nobody is capturing the knowledge embedded in the work, because the knowledge isn’t the output; teams keep the finished version and lose everything that produced it.
At scale, that’s a structural problem that compounds quietly. An organisation where sellers are actively using AI for commercial work is generating more execution intelligence than ever before. And losing more of it. The productivity gain is real and visible. The knowledge loss is real and invisible.
The architectural failure
The pattern follows from how these tools are designed. AI assistants are built for individual productivity, not organisational knowledge accumulation. Context is session-scoped by default. Sharing it requires deliberate effort. Building on someone else’s session requires them to have exported and stored it somewhere useful — which most people don’t do, because that’s not how the tools work.
The institution gets the outputs, not the intelligence that produced them.
When that seller leaves, the outputs stay. The reasoning walks out with them. The next person starts from the document, not from the understanding. It’s the same with enablement, technical architecture, revenue operations and strategy.
AI conversations are doing the same thing, with one additional problem. When a seller leaves, at least you know the knowledge is gone. With AI, the assumption is that the conversation is still there, that the three sessions where the team worked through the account strategy, refined the messaging, mapped the buying committee are sitting safely in the sidebar waiting to be referenced. Often they aren’t. Every major AI platform has experienced significant, recurring conversation loss. Memory features reset silently. Context windows hit limits and get compacted into a summary that reflects what the model thought mattered, not what you needed preserved. Paid tiers offer essentially no protection when the underlying infrastructure fails.
Teams are building institutional knowledge inside a tool that treats history as ephemeral by design, on platforms that have demonstrated they can lose it entirely. The assumption that you can go back to a conversation is the same assumption people make about the colleague who hasn’t left yet. Sometimes it holds. Sometimes Monday morning it’s just gone.
What solutions miss
There are products in this market that claim to address parts of this. Knowledge bases, AI memory layers, enterprise search platforms that promise to connect your Google Drive, your SharePoint, your email. Some are useful in narrow ways. What they don’t solve is the fundamental problem: getting the right execution context to the right person at the right moment, in a form that compounds rather than resets.
The organisations making progress aren’t treating this as a tool selection problem. They’re treating it as an architectural one. What does it mean to capture execution intelligence deliberately rather than accidentally? How do you structure it so it’s shareable, usable, and evolving, not just stored and forgotten? How do you make the knowledge that flows through AI collaboration part of the organisation’s operating asset base?
These are hard questions. The technology to answer most of them is available. The operating model to use it isn’t.
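As a concrete illustration of the first question, here is a minimal sketch of what capturing execution intelligence deliberately rather than accidentally could look like: a record that stores the context of an AI session alongside its output, instead of keeping only the artifact. Every name here (`SessionRecord`, its fields, the example values) is a hypothetical assumption for illustration, not a product schema or the author’s design.

```python
from dataclasses import dataclass, field, asdict
import json

# Hypothetical sketch: store the reasoning next to the artifact, so the
# context survives the session instead of dissipating when it closes.
@dataclass
class SessionRecord:
    account: str                  # which account the work was for
    output: str                   # the artifact everyone already saves
    reasoning: list[str]          # the judgment calls that produced it
    open_questions: list[str] = field(default_factory=list)

    def to_json(self) -> str:
        # Serialise so the record can live somewhere shared, not in one head
        return json.dumps(asdict(self))

# Example: the output alone vs the output plus the context behind it.
record = SessionRecord(
    account="Acme Corp",                      # illustrative account name
    output="Renewal deck v3",
    reasoning=[
        "CFO cares about cost predictability, not feature breadth",
        "Dropped the ROI slide: it failed in the last executive briefing",
    ],
    open_questions=["Who replaced the champion after the reorg?"],
)

# Round-trip: a later session (or a new team member) can restore the
# context, not just the finished document.
restored = SessionRecord(**json.loads(record.to_json()))
assert restored.reasoning == record.reasoning
```

The design point is only that the reasoning is a first-class field, stored and queryable, rather than an accident of whichever chat tab happens to still exist.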
The compounding advantage
The businesses that figure this out will have an advantage that’s genuinely hard to replicate. A competitor can hire your sellers. They can buy the same data. They cannot buy your institutional knowledge of how deals actually get done in your market, with your accounts, against your specific competitive set, if that knowledge is captured and compounded somewhere they don’t have access to.
Most organisations are not building that. They’re building better dashboards on top of a knowledge base that resets every time someone closes a browser tab or hands in their badge.
The gap isn’t getting smaller. The teams generating the most AI-assisted commercial work are generating the most execution intelligence. They’re also losing the most of it.
Where this is going
I’ve been thinking about this problem seriously for the better part of a year. Not as an observer, but as someone trying to work out what the right architecture actually looks like. I’ve lost conversations myself, with the flow that built the outputs gone for good. The technical pieces exist; developers are solving this for developers, but nobody is solving it for commercial teams. The harder problem is the operating model: what has to change about how teams work for execution intelligence to become an organisational asset rather than an individual byproduct. I don’t have a finished answer. I have a direction that’s starting to feel right.
The teams that figure this out first won’t announce it. They’ll just start compounding.
If you’re thinking about building this layer in your organisation, I’d like to hear about it. It’s a problem worth solving before you’re asked why you didn’t.