Context Graphs and the Product Decision Problem
Why product decisions are harder to capture than operational ones
Foundation Capital just published what might be the most important thesis on enterprise AI this year.
In “AI’s trillion-dollar opportunity: Context graphs,” Jaya Gupta and Ashu Garg argue that the next trillion-dollar platforms won’t be built by adding AI to existing systems of record. They’ll be built by capturing something enterprises have never systematically stored: decision traces.
Not just data. Not just rules. But the trail of how decisions actually happened - the exceptions granted, the conflicts resolved, the precedents invoked, and the cross-system context that today lives in Slack threads, deal desks, and people’s heads.
Their key distinction is sharp:
Rules tell an agent what should happen in general (“use official ARR for reporting”).
Decision traces capture what happened in this specific case (“we used X definition, under policy v3.2, with a VP exception, based on precedent Z”).
The thesis is that startups building “systems of agents” have a structural advantage here. They sit in the execution path. They see the full context at decision time - what inputs were gathered, what policy was evaluated, what exception was invoked, who approved. Persist those traces, stitch them across entities and time, and you get what they call a context graph: a queryable record of how decisions were made, making precedent searchable.
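To make that concrete, here is a minimal sketch of what a decision trace and a context graph over those traces might look like. This is our own illustration, not a schema from the thesis - every field name and method here is an assumption:

```python
# Illustrative sketch only: the field names and query are assumptions,
# not a schema from the Foundation Capital thesis.
from dataclasses import dataclass, field

@dataclass
class DecisionTrace:
    decision_id: str
    entity: str               # what the decision was about, e.g. an account or deal
    inputs: dict              # the context gathered at decision time
    policy_version: str       # which rule set was evaluated, e.g. "policy v3.2"
    exception: str | None     # exception granted, if any ("VP override")
    approved_by: str          # who signed off
    precedents: list[str] = field(default_factory=list)  # earlier decision_ids invoked

class ContextGraph:
    """A queryable store of decision traces, stitched together by entity."""
    def __init__(self) -> None:
        self.traces: dict[str, DecisionTrace] = {}

    def record(self, trace: DecisionTrace) -> None:
        self.traces[trace.decision_id] = trace

    def precedents_for(self, entity: str) -> list[DecisionTrace]:
        # "How have we decided this kind of thing before?"
        return [t for t in self.traces.values() if t.entity == entity]
```

The schema itself isn't the point. The point is that the inputs, policy version, exception, and approver exist as a record at all - and can be joined across decisions and queried later as precedent.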
It’s a compelling vision. And the core question they raise is important: will entirely new systems of record emerge - systems of record for decisions, not just objects - and will those become the next trillion-dollar platforms?
We’ve been building in this space for months. And reading this thesis, we see both its power and its blind spot.
The thesis works well for operational decisions. But it underestimates how hard this gets for product teams. Call it the product decision problem.
Why Product Decisions Are Different
Foundation Capital’s examples are illuminating: deal desk approvals, contract reviews, support escalations, quote-to-cash workflows. These are decisions where:
Clear rules exist (pricing policies, escalation matrices, approval thresholds)
Execution paths are defined (a deal moves through stages, a ticket follows a workflow)
Agents can sit in the path and observe context at decision time
For these operational decisions, the context graph thesis makes sense. An agent in the execution path can capture not just what decision was made, but how rules were applied, which exceptions were granted, and why.
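Continuing the sketch above, an agent sitting in a deal-desk approval path could persist that record as a byproduct of doing the work. The policy threshold, naming, and approval logic below are hypothetical, purely for illustration:

```python
# Hypothetical deal-desk flow: the agent is already in the execution path,
# so persisting the trace is a byproduct of making the decision.
def approve_discount(graph: ContextGraph, deal_id: str,
                     requested_pct: float, approver: str) -> bool:
    policy_limit = 0.15                      # assumed policy: >15% needs an exception
    prior = graph.precedents_for(deal_id)    # context gathered at decision time
    needs_exception = requested_pct > policy_limit
    approved = (not needs_exception) or approver.startswith("VP")

    graph.record(DecisionTrace(
        decision_id=f"deal-{deal_id}-{len(prior) + 1}",
        entity=deal_id,
        inputs={"requested_pct": requested_pct, "prior_decisions": len(prior)},
        policy_version="pricing-policy-v3.2",
        exception="VP override" if (needs_exception and approved) else None,
        approved_by=approver,
        precedents=[t.decision_id for t in prior],
    ))
    return approved
```

The trace isn't extra work here; it falls out of the execution path the agent already owns.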
But product decisions are fundamentally different.
When a product team decides to build Feature A instead of Feature B, there’s no “rule” being applied. There’s no approval matrix. When a PM chooses to target Segment X over Segment Y, what policy is being evaluated? What’s the execution path an agent could observe?
Product decisions happen in Miro boards no one revisits, Slack threads at midnight, side conversations after the real meeting ends - and often, in one person's head, synthesizing inputs that never get written down.
The PRD isn’t where the decision trace gets captured. It’s a reconstruction written after the fact.
This is why product teams struggle more than anyone to answer: “Wait, why did we decide this?”
The Fragmented Workflow Problem
Watch how product teams actually work:
They brainstorm in Miro or FigJam. Research competitors in ChatGPT or Perplexity. Gather customer insights from Slack or Notion. Write specs in Google Docs. Build decks in Slides. Track work in Jira or Linear.
Each tool is a silo. Each switch is a context break. Each transition loses nuance.
When your customer research lives in one place, your competitive analysis in another, and your brainstorming in a third - they never truly compound. The insight from last week’s user interview doesn’t connect to the constraint you discovered in today’s technical discussion.
And then AI came along - and in many ways, made it worse.
Not because AI is bad. Because we’re using it wrong.
We used to argue over the PRD. Now we generate it and move on. We used to debate the strategy. Now we polish decks no one questions.
AI should be helping product teams think more deeply about hard trade-offs. Instead, we’re using it to skip the thinking entirely.
The Missing Layer
Here’s what we keep coming back to:
Foundation Capital is right that decision traces matter. They’re right that enterprises need queryable records of how decisions were made, not just what was decided. They’re right that precedent should be searchable.
But their thesis assumes decisions happen in a place where traces can be captured - an execution path where agents can observe context at decision time.
For product teams, that place doesn’t exist yet.
There’s no execution path for “deciding the product strategy.” There’s no workflow for “figuring out what to build next.” The thinking is inherently unstructured - scattered across tools, conversations, and people’s heads.
Even if you built perfect context graph infrastructure for product teams, what would it capture? Fragments from Miro. Snippets from Slack. The PRD that was written after the real decision was already made. You’d have traces, but they’d be incomplete reconstructions - not the actual reasoning.
Think of it this way:
Context graphs are the memory of how the organization made decisions - the accumulated structure of decision traces stitched across entities and time.
But memory is only as good as what went into it. If the thinking was fragmented across nine tools, the trace will be fragmented. If the reasoning happened in someone’s head and never got externalized, no agent can capture it.
What’s missing is a thinking space - a place where product thinking gets structured before it crystallizes into decisions. A place that generates decision traces as a natural byproduct of how the work happens. A space that becomes the execution path for product decisions.
That’s what we’re building with WhiteboardX - an AI-native canvas where product teams work through decisions visually, with research and synthesis happening in one place, so the reasoning chain is captured naturally rather than reconstructed after the fact.
If context graphs become the memory layer of enterprise AI, WhiteboardX becomes a source that can actually feed them. The traces are already there - structured, persistent, and queryable.
Context Graphs Don’t Solve the Product Decision Problem
Here’s where we see things differently from much of the current conversation:
The context graph thesis is largely agent-centric. The vision is that AI agents will increasingly make decisions autonomously - and they need decision traces to know how rules were applied in the past, where exceptions were granted, what precedents govern reality.
For operational decisions with clear rules and approval matrices, that makes sense. An agent approving a discount should know how similar discounts were handled before.
But product decisions don’t have clear rules. There’s no “pricing policy” for choosing what to build. There’s no “approval matrix” for prioritization. Product decisions are judgment calls - synthesis of customer needs, technical constraints, business goals, and intuition.
We think the goal for product teams isn’t to have AI agents make these calls autonomously. It’s to have AI help humans make better calls - and capture the reasoning so it becomes organizational knowledge.
Dharmesh Shah, HubSpot’s CTO, offered a sharp reality check on context graphs: asking companies to capture decision traces when they haven’t deployed agents at scale “is sort of like asking someone to install a three-car garage when they don’t own a single car.”
We’d extend the metaphor for product teams: before you build the garage, you need to learn to drive.
Most product teams haven't developed the muscle for structured decision-making. The reasoning isn't missing from some future context graph because no one captured it - it's missing because it never happened in a capturable way.
WhiteboardX is how teams learn to drive.
Why We Built WhiteboardX
As product builders, we’ve watched how product decisions get made - and how much context gets lost. The real thinking happens in fragments. The PRD is a reconstruction. Six months later, nobody remembers why we chose Approach A over Approach B, or why we deprioritized that feature customers kept asking for.
We built WhiteboardX to give product teams a place where decisions stay connected - instead of being scattered across Miro, Slack, documents, and one-off AI threads. Not a place where AI decides what to build, but where it helps humans decide better.
We’re in private beta.
Join the waitlist: whiteboardx.co


