
MCP Explained: How AI Agents Get Product Context

Stonewall · 9 min read

Your Agent Is Working Blind

You open Claude Code. You paste the task. "Add a notification system with email and in-app channels, following the existing patterns in the codebase." The agent reads your files, understands your architecture, and starts building.

But it doesn't know what you decided in the spec review last Tuesday. It doesn't know that push notifications were explicitly descoped. It doesn't know that the acceptance criteria require a user preference toggle. It doesn't know about the open question on whether notifications should be batched.

The agent has code context. It doesn't have product context. And the gap between the two is where features go wrong.

Model Context Protocol (MCP) is the bridge.

What MCP Actually Is

MCP is an open protocol — created by Anthropic and adopted across the AI ecosystem — that lets AI agents connect to external data sources and tools. Think of it as a standardized way for agents to ask questions and get structured answers from systems that aren't part of the conversation.

Without MCP, an agent's knowledge is limited to what's in the chat window and what it can read from the filesystem. With MCP, an agent can query databases, APIs, knowledge bases, and specialized tools — all through a consistent interface.

MCP in one sentence: A standard protocol that lets AI agents query external systems for context, so the agent's knowledge isn't limited to what you paste into the chat window.

The protocol is straightforward. An MCP server exposes "tools" — structured operations the agent can call — and "resources" — data the agent can read. The agent discovers what's available, decides what's relevant, and queries for exactly the context it needs.
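Concretely, that discovery-and-call flow is carried as JSON-RPC 2.0 messages. Here is a stdlib-only sketch of what an agent sends over an MCP transport; the method names `tools/list` and `tools/call` come from the MCP specification, while the `get_spec` tool is a hypothetical example:

```python
import json

def make_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request as sent over an MCP transport (stdio or HTTP)."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# 1. The agent asks the server what tools it exposes.
discover = make_request(1, "tools/list")

# 2. Having discovered a (hypothetical) get_spec tool, the agent calls it.
call = make_request(2, "tools/call", {
    "name": "get_spec",
    "arguments": {"feature": "notifications"},
})

print(discover)
print(call)
```

The server replies with matching `result` objects, so the agent can decide at runtime which tools are worth calling for the task at hand.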

For developers, this is already transforming workflows. MCP servers for GitHub, databases, file systems, and documentation are common. Linear has an MCP server. Notion has one. The entire ecosystem is moving toward agents that can reach into your tools and pull context on demand.

Why Product Context Is the Missing MCP Server

Here's what's interesting: there are MCP servers for almost everything EXCEPT the product layer.

Your agent can query GitHub for PR status. It can read your database schema. It can search your codebase. But it can't answer: "What are the acceptance criteria for this feature?" or "What questions were raised during the spec review?" or "What's the priority and scope of this work item?"

That context lives in your head, in a Notion doc the agent can't access, or in a Slack thread that's already buried. The agent does its best without it — and "its best without it" means guessing at requirements, building things that weren't asked for, and missing edge cases that were explicitly discussed.

  - 50+ MCP servers for dev tools
  - ~0 MCP servers for product specs
  - 100% of agents that need both

The product layer is the last mile of agent context. And it's the most impactful mile — because the difference between code context and product context is the difference between "how do I build this" and "what should I build."

What a Product MCP Server Looks Like

A product-focused MCP server exposes your specs, requirements, and decisions as structured context that agents can query. Here's what the interaction looks like in practice:

Agent starts a task. It reads the codebase, understands the architecture, and identifies the relevant modules. Standard code context.

Agent queries the MCP server. "What's the spec for the notification feature?" The server returns the structured spec — problem statement, acceptance criteria, technical constraints, scoping decisions. Not a wall of text — structured data the agent can reason about.

Agent encounters ambiguity. "Should notifications be sent on comment creation?" Instead of guessing, it queries the MCP server for the relevant spec section and open Q&A threads. The answer is there — or if it isn't, the agent can surface the question for the human to resolve.

Agent references dependencies. "Are there other features that modify the notification preferences table?" The MCP server returns dependency context — other specs that touch the same code paths, blocking relationships, shared data models.

[Diagram: the AI agent draws on two sources, the codebase read directly and a product MCP server backed by spec and Q&A files.]

The key difference from pasting a spec into the chat: surgical retrieval. The agent doesn't get the entire spec dumped into its context window. It gets exactly the sections relevant to the current task. This matters for token efficiency — a full spec might be 3,000 tokens, but the agent only needs the 200 tokens relevant to the endpoint it's implementing.
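A toy illustration of that slicing: the server returns only the markdown section the agent asked for, not the whole document. The spec text and section names below are invented for illustration:

```python
SPEC = """\
# Notification Feature

## Problem
Users miss comment activity.

## Acceptance Criteria
- Email and in-app channels
- User preference toggle
- Push notifications are out of scope

## Open Questions
- Should notifications be batched?
"""

def get_section(spec, heading):
    """Return just one '## heading' section of a markdown spec."""
    out, capturing = [], False
    for line in spec.splitlines():
        if line.startswith("## "):
            capturing = (line[3:].strip() == heading)
        if capturing:
            out.append(line)
    return "\n".join(out)

criteria = get_section(SPEC, "Acceptance Criteria")
print(criteria)  # a few dozen tokens instead of the full spec
```

A production server would do the same thing against a spec database rather than a string, but the contract is identical: ask for a section, get that section.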

The Bidirectional Spec Loop

The really powerful pattern isn't just agents reading specs. It's agents writing back to them.

When an AI agent encounters ambiguity during implementation — "the spec says POST /api/orders but the existing convention uses PUT for upserts" — it can raise a question against the specific spec section. The spec owner gets notified. They answer. The spec updates. The agent continues with the resolved context.

This is the bidirectional spec loop:

  1. Human writes spec
  2. Agent reads spec via MCP
  3. Agent hits ambiguity
  4. Agent raises question against spec section
  5. Human resolves question
  6. Spec updates
  7. Agent continues with new context
  8. Repeat until feature ships

The spec isn't just input to the agent. It's a conversation between the human who decides and the agent who builds. MCP makes that conversation structured and persistent.
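The loop above can be sketched as a tiny state model. All class and method names here are invented for illustration; a real system would persist this server-side:

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    section: str
    text: str
    answer: str = ""

@dataclass
class Spec:
    sections: dict
    questions: list = field(default_factory=list)

    def raise_question(self, section, text):
        """Step 4: the agent records an ambiguity against a spec section."""
        q = Question(section, text)
        self.questions.append(q)
        return q

    def resolve(self, q, answer):
        """Steps 5-6: the human answers; the decision is folded into the spec."""
        q.answer = answer
        self.sections[q.section] += f"\n[Resolved] {q.text} -> {answer}"

spec = Spec({"API": "POST /api/orders creates an order."})
q = spec.raise_question("API", "Existing convention uses PUT for upserts. Which verb?")
spec.resolve(q, "Use POST; orders are never upserted.")
print(spec.sections["API"])  # the decision now lives in the spec itself
```

The point of the model: the answer lands in the spec, not in a chat transcript, so the next agent that reads this section inherits it for free.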

Almost no workflow supports this today. When an agent hits ambiguity, it either guesses (and often guesses wrong) or asks in the chat (where the answer is ephemeral and never updates the spec). The bidirectional loop means answers are captured, the spec evolves, and the next agent that touches the feature inherits every decision.

How to Set Up MCP for Your Workflow

If you're using Claude Code or Cursor, you likely already have MCP configured for some tools. Adding product context follows the same pattern.

Option 1: File-based specs

The simplest approach: commit your specs as structured markdown files in the repo. Your agent already reads the filesystem. Add a naming convention (specs/feature-name.md) and reference them in your CLAUDE.md context file.
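A sketch of what that convention might look like; the file names and wording here are just one possibility:

```
<!-- CLAUDE.md -->
## Product specs
Feature specs live in `specs/<feature-name>.md`. Before implementing a
feature, read its spec and treat the "Acceptance Criteria" and
"Out of Scope" sections as binding.
```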

Pros: Zero setup. Works immediately. Cons: No live updates. No Q&A loop. No structured queries. The agent reads the whole file even when it only needs one section.

Option 2: MCP server for specs

Connect an MCP server that exposes your specs as structured resources. The agent can query specific sections, search across specs, and access related context (Q&A threads, acceptance criteria, dependency info).

Pros: Surgical retrieval. Live updates. Structured queries. Bidirectional loop possible. Cons: Requires a spec management system that supports MCP.
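Wiring such a server into Claude Code is typically a one-file change: project-scoped servers can be declared in a `.mcp.json` at the repository root. The server name and launch command below are hypothetical:

```
{
  "mcpServers": {
    "product-specs": {
      "command": "npx",
      "args": ["-y", "example-spec-server"]
    }
  }
}
```

Cursor reads a similar configuration at `.cursor/mcp.json`.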

Option 3: Custom MCP server

Build your own MCP server that wraps whatever system you use for specs. This could be a thin layer over Notion's API, Linear's API, or even a structured YAML file in your repo.

Pros: Works with your existing tools. Cons: You're building and maintaining infrastructure. The "thin layer" inevitably grows.
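The shape of such a server is small: a loop that reads JSON-RPC requests and dispatches `tools/list` and `tools/call`. A stdlib-only sketch of the dispatch core, with the `get_spec` tool and `specs/` file layout as hypothetical examples (a real server would more likely be built on the official MCP SDK):

```python
import json
from pathlib import Path

SPEC_DIR = Path("specs")  # hypothetical layout: one markdown file per feature

TOOLS = [{
    "name": "get_spec",
    "description": "Return the spec for a feature",
    "inputSchema": {"type": "object",
                    "properties": {"feature": {"type": "string"}},
                    "required": ["feature"]},
}]

def handle(request):
    """Dispatch one JSON-RPC request the way an MCP server would.
    (Single-tool server, so the tool-name check is omitted.)"""
    if request["method"] == "tools/list":
        result = {"tools": TOOLS}
    elif request["method"] == "tools/call":
        feature = request["params"]["arguments"]["feature"]
        path = SPEC_DIR / f"{feature}.md"
        text = path.read_text() if path.exists() else f"No spec for {feature}"
        result = {"content": [{"type": "text", "text": text}]}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

# A real server would read and write these messages over stdio;
# here we call the handler directly.
resp = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
print(json.dumps(resp, indent=2))
```

Swap the filesystem lookup for a Notion or Linear API call and you have the "thin layer" described above, along with the maintenance burden that comes with it.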

The Compound Effect

Here's what most people miss about product context in AI workflows: it compounds.

An agent that builds Feature A with full spec context produces better code. When it builds Feature B, which touches the same modules, the spec context from Feature A informs the work. The Q&A threads from Feature A surface edge cases that apply to Feature B. The dependency map shows how A and B interact.

Over time, the spec system accumulates institutional knowledge that every agent benefits from. Not just the code patterns (which the agent can derive from the codebase) — the product decisions, the customer feedback, the reasoning behind design choices, the edge cases that were discussed and resolved.

This is knowledge that currently lives in someone's head, in Slack threads, and in meeting notes that nobody will ever find again. MCP makes it queryable, persistent, and available to every agent and every human who needs it.

The Stack Is Incomplete Without Product Context

Your AI development stack in 2026 looks something like this: Claude Code or Cursor for implementation. GitHub for version control and CI/CD. Linear or Jira for work tracking. Some combination of Notion, Google Docs, and Slack for specs and decisions.

The first three are MCP-enabled. The last one — the product layer — isn't. And it's the layer that determines whether the agent builds the right thing.

The specification is the bottleneck, not the model. And MCP is the protocol that gets specifications out of static documents and into the agent's working context. The missing server is the product server — the one that gives your agent not just code awareness, but product awareness.

The missing MCP server for product.
Stonewall's MCP server exposes living specs as structured context for Claude Code and Cursor. Acceptance criteria, code references, Q&A threads, dependency context — all queryable via MCP. Your agent's product brain.
Join the waitlist at stonewall.dev
