You didn't sign up for this
It's 11pm. You're a senior engineer — or maybe the founding engineer — and someone just asked you to "write up the requirements" for the next feature. You've never written a PRD. You're staring at a blank Notion page. Maybe you Googled "how to write a PRD engineer" and landed here.
Good. This article is for you.
I'm not going to give you a 47-section template lifted from a product management course. I'm going to show you how to write a spec that actually helps you build software — not a political document designed to get sign-off from people who won't read it anyway.
What a PRD actually is (and isn't)
A PRD — product requirements document — sounds like something that belongs in a binder at IBM circa 1997. And honestly, most PRD templates still read that way. "Vision statement." "Strategic alignment." "Stakeholder matrix."
Forget all of that.
A PRD is an answer to five questions:
- What are we building?
- Why are we building it?
- For whom?
- What does "done" look like?
- What are we NOT building?
That's it. Everything else is decoration. If you can answer those five questions clearly enough that another engineer could start writing code without pinging you on Slack every 20 minutes, you've written a good PRD.
The engineer's PRD vs. the PM's PRD
Here's the thing nobody tells you: PMs and engineers write PRDs for different audiences.
A PM writes a PRD to align stakeholders — executives, designers, marketing, legal. It's a communication artifact. It needs to be readable by people who don't know what a foreign key is.
An engineer writes a PRD to align code. Your audience is yourself in two weeks, your teammates, and increasingly, the AI coding agent that's going to help you implement it. Your PRD should be closer to a technical spec than a strategy memo.
This convergence matters. According to Stack Overflow's 2025 survey, 84% of developers now use or plan to use AI coding tools, with 51% using them daily. Andrew Ng observed at YC's AI Startup School that managers are proposing two PMs for every engineer for the first time — because the bottleneck has shifted from writing code to defining what to build.
Your spec isn't just for humans anymore. If you're using Claude Code, Cursor, or any coding agent, the PRD is the prompt. Precision isn't optional.
The five sections that actually matter
Skip the executive summary. Skip the market analysis. Here are the five sections of a product requirements document that will actually save you time at a startup.
1. Problem statement
One paragraph. One specific user. One specific pain.
Not: "Users need better billing management."
Instead: "When a team admin on our Pro plan wants to add a sixth seat, they have to email support and wait 24 hours. Three customers churned last month because of this. We need self-serve seat management."
The problem statement grounds everything that follows. When you're deep in implementation and wondering whether to support prorated billing on mid-cycle upgrades, you come back to this paragraph. Does it solve the problem? Ship it. Does it not? Defer it.
2. Scope (what's in, what's explicitly out)
This is the section most engineers skip and most PMs pad. For you, it's the most important section in the document.
In scope:
- Add/remove seats from billing settings page
- Immediate charge for new seats (prorated to billing cycle)
- Hard seat limit enforcement at plan level
Out of scope (DO NOT BUILD):
- Seat-based permissions (use existing org roles)
- Annual billing toggle (Phase 2)
- Billing history/invoice download (existing Stripe dashboard is sufficient)
That "out of scope" list isn't just for you. It's for the AI agent that's going to read this spec and try to be helpful by implementing billing history because it "seems related." Explicit non-goals prevent scope creep from both humans and machines.
3. Technical context
This is where the engineer's PRD diverges completely from the PM template. A PM writes "integrates with Stripe." You need to write what actually exists in the codebase.
Existing infrastructure:
- StripeModule (apps/api/src/stripe/) handles customer creation and subscription management
- PlanEntity has a `maxSeats` column (currently unused)
- OrganizationEntity has a `subscriptionId` FK to Stripe
- No existing seat-count tracking — need a new column, or derive it from org membership count
Key dependencies:
- Stripe Billing API v2024-12 (already integrated)
- PermissionsGuard requires MANAGE_BILLING permission
- Frontend billing page exists at /settings/billing (currently read-only)
This section is what turns a vague feature request into something you (or an AI agent) can actually execute against. You're mapping the territory before you start building.
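To see why this mapping helps, the inventory above translates almost directly into type sketches. These shapes are assumptions based on the entities named in the spec, not the real codebase:

```typescript
// Hypothetical shapes mirroring the entities listed in the spec.
// Field names beyond those the spec mentions are illustrative.
interface PlanEntity {
  id: string;
  maxSeats: number; // exists today, currently unused
}

interface OrganizationEntity {
  id: string;
  planId: string;
  subscriptionId: string; // FK to the Stripe subscription
}

// No seat-count column exists yet, so for now the count is
// derived from org membership, as the spec notes.
function seatCount(memberIds: string[]): number {
  return memberIds.length;
}

const plan: PlanEntity = { id: "pro", maxSeats: 5 };
const used = seatCount(["u1", "u2", "u3"]);
console.log(used < plan.maxSeats); // seats still available → true
```

Writing even this much down forces the "new column or derived count?" decision before implementation, not during it.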
4. Success criteria
Measurable. Testable. Not "improve the experience."
- Admin can add a seat and see the charge reflected in Stripe within 30 seconds
- Attempting to exceed `maxSeats` returns a 403 with an upgrade prompt
- Removing a seat issues a prorated credit on the next invoice
- Existing seat management via support email drops to zero within 30 days of launch
Every criterion should be something you could write a test for. If you can't write a test, it's not a criterion — it's a wish.
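For example, the `maxSeats` criterion maps directly to a testable check. A minimal sketch — the function name, result shape, and status codes are illustrative assumptions, not the real API:

```typescript
// Hypothetical guard for the "exceed maxSeats returns a 403" criterion.
interface SeatCheckResult {
  status: number;
  message?: string;
}

function checkAddSeat(currentSeats: number, maxSeats: number): SeatCheckResult {
  if (currentSeats >= maxSeats) {
    return { status: 403, message: "Seat limit reached. Upgrade your plan." };
  }
  return { status: 200 };
}

// Each success criterion becomes an assertion like these:
console.log(checkAddSeat(5, 5).status); // at the limit → 403
console.log(checkAddSeat(4, 5).status); // under the limit → 200
```

If a criterion can't be reduced to something like this, that's your signal to rewrite it.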
5. Edge cases
The part everyone skips. The part that causes 80% of bugs.
- What happens when the last admin tries to remove themselves?
- What if a user is removed from a seat but has active sessions?
- What about Stripe webhook failures — do we retry? How many times?
- Race condition: two admins add a seat simultaneously, exceeding the limit
- What happens at the org's seat limit when an invite is pending but not accepted?
I find that writing edge cases is where I discover I don't actually understand the feature yet. That's the point. You want to discover these questions at spec time, not at 2am when you're debugging a production incident.
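The race-condition case is worth sketching, because naive check-then-write logic is exactly where it bites. A minimal in-memory illustration — in production this would be an atomic database update (e.g. `UPDATE ... WHERE seat_count < max_seats`) or a transaction, not application-level state:

```typescript
// Two concurrent admins can both pass a separate "is there room?"
// check before either write lands, exceeding the limit. The fix is
// making check-and-increment a single atomic step.
class SeatCounter {
  constructor(private seats: number, private readonly maxSeats: number) {}

  // Check and increment together: either both happen or neither does.
  tryAddSeat(): boolean {
    if (this.seats >= this.maxSeats) return false;
    this.seats += 1;
    return true;
  }

  get count(): number {
    return this.seats;
  }
}

const org = new SeatCounter(4, 5); // one seat left
console.log(org.tryAddSeat()); // first admin wins → true
console.log(org.tryAddSeat()); // second admin is rejected → false
console.log(org.count);        // never exceeds maxSeats → 5
```

Naming the race in the spec means the implementer reaches for the atomic version on the first pass.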
A concrete example: before and after
Here's what engineers usually start with:
"Add team billing with seat management."
Seven words. No scope. No constraints. No edge cases. This is a JIRA ticket, not a spec. An AI agent will hallucinate the entire feature. A human will interpret it five different ways. Now watch what happens when you ask the scoping questions.
The structured spec makes five scoping decisions explicit:
- Hard limit or soft limit on seats? (Hard — block the action, don't just warn)
- Do seats sync with org membership, or are they a separate concept? (Separate — you can have empty seats)
- What's the proration logic? (Charge immediately for adds, credit on next cycle for removes)
- Does removing a seat remove a user? (No — the admin picks who to remove separately)
- What Stripe objects are we creating? (SubscriptionItem with per-seat pricing, not a new Subscription)
Each question collapses ambiguity. After answering them, you have something an engineer — or an AI agent — can build from without guessing.
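One of those answers — the proration logic — is worth pinning down with actual arithmetic in the spec. A sketch of charge-immediately-for-adds; the formula and rounding are assumptions for spec-level clarity, since Stripe computes its own proration when you update a SubscriptionItem:

```typescript
// Hypothetical proration for a mid-cycle seat add: charge only for
// the remaining fraction of the billing cycle, in cents.
function prorateSeatAdd(
  pricePerSeatCents: number,
  daysRemaining: number,
  daysInCycle: number
): number {
  return Math.round(pricePerSeatCents * (daysRemaining / daysInCycle));
}

// $10/seat/month, added with 15 of 30 days left in the cycle:
console.log(prorateSeatAdd(1000, 15, 30)); // → 500, i.e. $5.00
```

Even if Stripe ends up doing the math, writing the expected number into the spec gives you something to assert against in tests.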
The gap between "add team billing" and a structured spec is where bugs live. Close that gap before you write code, and you'll spend less time debugging after.
Your PRD is now a prompt
In 2026, spec-driven development isn't a methodology — it's a practical necessity. Tools like Intent, GitHub Spec-Kit, BMAD-METHOD, and Kiro all emerged because developers realized the same thing: if your coding agent's input is garbage, the output is garbage.
When you write a PRD for an AI coding agent, three patterns matter:
Explicit non-goals. AI agents are eager. They'll build features you didn't ask for because the training data says they're "related." Your out-of-scope section is a constraint on generation, not just a note for humans.
Phased execution. Break the spec into ordered phases. "Phase 1: API endpoints. Phase 2: Frontend UI. Phase 3: Webhook handling." This gives the agent a dependency graph and prevents it from trying to build everything at once.
Technical anchors. Reference specific files, functions, and patterns from the codebase. Instead of "add a new API endpoint," write "add a POST handler in apps/api/src/stripe/stripe.controller.ts following the pattern in createSubscription()." The more concrete your references, the less the agent hallucinates.
This isn't theoretical. If you've used Claude Code on a real codebase, you've seen the difference between "build me a billing feature" and a structured spec with file references. The delta in output quality is enormous.
Where static PRDs break down
Here's the honest part: everything I just told you has a shelf life.
You write the PRD. You hand it to your team (or your AI agent). Implementation starts. Within 48 hours, the code has diverged from the spec. Someone discovered the Stripe API doesn't support the proration model you assumed. The frontend team pivoted to a different component library. A new edge case surfaced in code review.
The PRD is now a historical artifact. Nobody updates it. The next engineer who reads it gets misled.
This isn't a discipline problem — it's a tooling problem. Static documents in Notion or Google Docs are disconnected from the codebase by design. There's no feedback loop between what you specced and what you built. If you've experienced this, you're not alone — here's why PRDs stop working after 48 hours.
The fix isn't a better template. It's a tool that keeps the spec connected to the code.
Writing a PRD should feel like scoping a PR
If you've read this far, here's the core idea: writing a PRD as an engineer shouldn't feel like writing a term paper. It should feel like scoping a pull request — just earlier in the process.
You already know how to write a good PR description. You state what changed, why, what to watch out for, and what's not included. A PRD is the same thing, shifted left.
The five sections — problem, scope, technical context, success criteria, edge cases — map directly to how engineers already think about code. You don't need a PM certification to write a product requirements document at a startup. You need to answer the right questions before you start coding.