How to Write a PRD
When Birmingham City Council’s new ERP system went live in April 2022, critical financial processes immediately failed.[^1] Eighteen months later, the council declared bankruptcy. The root cause? Poor requirements documentation that led to extensive customisation, misaligned processes, and a system that couldn’t perform basic financial functions. The council spent nearly £100 million before abandoning the implementation entirely.
This isn’t an isolated incident. Research consistently shows that 70% of software project failures trace back to requirements issues.[^2] For product teams, this translates to a stark reality: the difference between shipping features that solve real problems and burning months on rework often comes down to how well you document what you’re building and why.
A Product Requirements Document (PRD) is your team’s shared understanding of what problem you’re solving, for whom, and how you’ll know if you’ve succeeded. Done well, it prevents the chaos of misaligned assumptions. Done poorly, or skipped entirely, it costs far more than the time it would have taken to write.

What Makes PRDs Essential
I’ve watched teams waste entire quarters building the wrong thing because engineering, design, and product each held different assumptions about what ‘the feature’ actually meant. The PRD exists to make those assumptions explicit before you write code.
At its core, a PRD answers four questions:
Why are we building this? The problem space, grounded in user research or business need. This isn’t ‘we need a dashboard’, it’s ‘customer success managers spend 4+ hours weekly creating manual reports, delaying their response to at-risk customers’.
What will success look like? Measurable outcomes tied to business or user value. Not ‘users like it’, but ‘reduce report creation time by 75%, measured by time-to-first-insight in analytics’.
Who is this for? Specific user segments with defined needs. Not ‘power users’, but ‘CSMs managing 50+ accounts who need daily health scores to triage outreach’.
What’s in scope? Clear boundaries on what you’re building now and deliberately deferring. This prevents scope creep and sets expectations.
The PRD format varies wildly across organisations. Google historically favoured lengthy, detailed documents. Intercom enforces a one-page rule they call ‘Intermissions’, arguing that if you can’t fit the problem on an A4 page, you don’t understand it well enough yet.[^3] Figma’s VP of Product structures PRDs into three sections: Problem Alignment, Solution Alignment, and Launch Readiness, with embedded design files that update automatically.[^4]
What these approaches share: they all force product teams to articulate the problem before jumping to solutions, and they all provide a single source of truth that prevents costly miscommunication.
The Foundations: What Every PRD Needs
Context: Why This Matters Now
Every PRD should open with the problem or opportunity in context. Not what you want to build, but why building it matters to users and the business.
When I write this section, I include:
- The current state and its cost (in time, money, user friction, or missed opportunity)
- Evidence from user research, support tickets, or analytics
- Business context (strategic goals, market pressure, competitive threats)
For a B2B analytics dashboard, this might read: ‘Customer Success Managers spend 4+ hours per week manually compiling customer health data across five tools. This delays identification of at-risk accounts and limits CSM capacity to 50 accounts each, whilst competitors’ tools enable 80+ accounts per CSM with real-time dashboards.’
The goal isn’t storytelling for its own sake. It’s ensuring that when engineering proposes a different technical approach, or design suggests an alternative flow, everyone can evaluate it against the same problem definition.
Goals and Success Metrics
This is where vague product work dies. If you can’t measure whether you’ve solved the problem, you haven’t defined the problem clearly enough.
I structure goals as:
- Primary metric: The one thing that must move (e.g. ‘Reduce report creation time by 75%’)
- Secondary metrics: Supporting indicators (e.g. ‘Increase CSM capacity from 50 to 65 accounts’)
- Counter-metrics: What shouldn’t get worse (e.g. ‘Report accuracy remains above 95%’)
Research from Info-Tech Research Group found that 50% of project rework stems from requirements issues.[^2] Much of that rework happens because teams built something that technically worked but didn’t move the needle on outcomes that mattered.
User Stories and Use Cases
Requirements written as abstract specifications (‘system shall allow data export’) miss the context that helps teams make good decisions. User stories ground requirements in actual scenarios.
I use the job story format Intercom popularised: ‘When [situation], I want to [motivation], so I can [expected outcome]’.[^5]
For that analytics dashboard:
- When I start my workday Monday morning, I want to see which accounts showed warning signs over the weekend, so I can prioritise my outreach before accounts churn
- When I’m in a client call and they mention an issue, I want to pull up their usage patterns in under 10 seconds, so I can provide context-specific guidance without ending the call to research
These stories do more than describe features. They reveal priorities (Monday morning urgency matters), constraints (10-second load time), and success criteria (without ending the call).
Functional and Non-Functional Requirements
This is the ‘what’ section, organised around how users will actually interact with the product.
Functional requirements describe behaviour:
- Dashboard loads on login, showing accounts sorted by health score
- Users can filter by account tier, renewal date, and custom tags
- Export to CSV includes full data set with timestamp
Non-functional requirements cover quality attributes:
- Dashboard loads in under 3 seconds on standard corporate network
- Data refreshes every 15 minutes without manual trigger
- System maintains 99.5% uptime during business hours
- Meets SOC 2 compliance for customer data handling
I’ve seen teams skip non-functional requirements because they seem obvious. They’re not. If you don’t specify load time, you might get a beautifully designed dashboard that takes 30 seconds to render. If you don’t specify compliance requirements, you might build something that can’t legally ship.
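Numbers like these are only useful if someone checks them. As a minimal illustration, here is a Python sketch that probes the 3-second load budget from the example above; the URL is hypothetical, and a real check would measure full page render from a representative corporate network rather than timing a single HTTP fetch.

```python
# Minimal sketch: probing the 3-second load-time budget from the
# non-functional requirements above. The URL and threshold are
# illustrative; a real check would measure full render time from a
# representative network, not just one HTTP response.
import time
import urllib.request

DASHBOARD_URL = "https://example.internal/dashboard"  # hypothetical endpoint
LOAD_BUDGET_SECONDS = 3.0


def measure_load_time(url: str) -> float:
    """Return seconds taken to fetch the dashboard response."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as response:
        response.read()
    return time.perf_counter() - start


if __name__ == "__main__":
    elapsed = measure_load_time(DASHBOARD_URL)
    status = "PASS" if elapsed <= LOAD_BUDGET_SECONDS else "FAIL"
    print(f"{status}: dashboard responded in {elapsed:.2f}s (budget {LOAD_BUDGET_SECONDS}s)")
```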
Constraints, Assumptions, and Dependencies
Every project operates within constraints. Making them explicit prevents surprises three weeks before launch.
Constraints might include:
- Must integrate with existing Salesforce instance (no migration)
- Cannot require new database infrastructure (budget approved for application layer only)
- Needs to work on tablets for field CSMs (not just desktop)
Assumptions surface what you’re taking as given:
- CSMs have reliable internet access during client visits
- Salesforce data sync provides sufficiently fresh data for daily decisions
- Current CSM tools remain available during phased rollout
Dependencies identify what must happen before or alongside your work:
- Data pipeline team completes Salesforce sync upgrade by Q2
- InfoSec approves third-party analytics library by sprint 3
- Customer training materials ready two weeks pre-launch
A study by McKinsey found that companies with poor documentation take 18% longer to release features than peers.[^6] Much of that delay comes from discovering constraints or dependencies mid-build that should have been identified during planning.
Open Questions
The best PRDs acknowledge uncertainty. Listing open questions serves two purposes: it prevents teams from treating uncertain things as decided, and it creates a roadmap for what needs to be resolved before launch.
Open questions might include:
- Do CSMs need real-time data or is 15-minute refresh sufficient? (User research in progress)
- Should we build mobile app or mobile-responsive web first? (Blocked on usage analytics)
- What’s the minimum viable feature set for beta? (Requires stakeholder alignment)
I update this section as questions get resolved, turning them into requirements or decisions elsewhere in the doc. This creates a visible record of how the product took shape.
Types of PRDs for Different Contexts
Feature PRDs
These are lightweight documents focused on a single capability or experiment. They’re common in fast-moving consumer products or when testing hypotheses.
A feature PRD for ‘Add dark mode’ might be 3-4 pages: problem (eye strain reports from night users), solution approach (system-level dark theme), success metrics (activation rate among night users, support ticket reduction), and scope (UI only, no backend changes).
The key is constraining scope tightly. Feature PRDs fail when they try to solve every related problem instead of shipping one thing well.

Platform PRDs
These cover APIs, integrations, or infrastructure that other teams will build upon. They need more technical detail and clearer contracts.
When I write platform PRDs, I emphasise:
- API contracts with request/response examples
- Performance characteristics (throughput, latency, error handling)
- Versioning strategy and backwards compatibility
- Integration patterns and common use cases
A platform PRD for a user authentication API might spend half its length on error states, rate limiting, and security requirements that a feature PRD could handle in a paragraph.
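To show what ‘API contracts with request/response examples’ can look like on the page, here is a hypothetical sketch of a token-issue endpoint expressed as Python literals. The path, field names, error codes, and rate limits are illustrative assumptions, not a real API; a real platform PRD would pin these down with the owning engineering team.

```python
# Hypothetical request/response contract for a token-issue endpoint,
# expressed as Python literals. Paths, fields, codes, and limits are
# illustrative only.
TOKEN_REQUEST = {
    "method": "POST",
    "path": "/v1/auth/token",
    "body": {
        "client_id": "svc-dashboard",
        "client_secret": "<redacted>",
        "grant_type": "client_credentials",
    },
}

TOKEN_RESPONSE_OK = {
    "status": 200,
    "body": {
        "access_token": "<jwt>",
        "token_type": "Bearer",
        "expires_in": 3600,  # seconds until the token must be refreshed
    },
}

TOKEN_RESPONSE_RATE_LIMITED = {
    "status": 429,
    "headers": {"Retry-After": "30"},
    "body": {
        "error": "rate_limited",
        "message": "Limit of 100 requests per minute exceeded",
    },
}
```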
Experience PRDs
These centre on user journeys and flows, often spanning multiple features. They’re common in design-led organisations or when redesigning complex workflows.
Figma’s approach exemplifies this: their PRDs embed design files directly, so stakeholders see the experience rather than reading about it.[^4] As designs evolve, the PRD automatically reflects the latest version.
Experience PRDs trade technical detail for user context. They spend more time on the journey (onboarding a new user from signup to first value) and less on individual feature specifications.

A Framework for Writing Effective PRDs
Stage 1: Problem Definition
Before writing anything, ensure you can answer:
- What’s the core problem in one sentence?
- Who experiences this problem and when?
- What’s the cost of not solving it?
- What evidence validates this is worth solving?
If you can’t answer these, you’re not ready to write a PRD. Go back to user research or stakeholder conversations.
Intercom’s one-page rule serves as a useful forcing function here.[^3] If you can’t explain the problem concisely, you probably don’t understand it well enough yet.
Stage 2: Solution Approach
Describe your proposed solution at high level without locking in implementation details. Focus on what the solution enables, not how it works technically.
Good: ‘CSMs access a unified dashboard showing account health scores, recent activity, and recommended actions, all in one view’
Bad: ‘Build a React dashboard component that queries the analytics API every 15 minutes and renders health scores using a traffic light colour scheme’
The second locks in technical decisions that engineering might improve. The first gives them room to suggest better approaches whilst staying aligned on the outcome.
Stage 3: Requirements Documentation
Now get specific. Break down functional requirements by user journey or feature area. Include acceptance criteria for each requirement.
Format requirements as testable statements:
- User can filter dashboard by account tier
- Acceptance: Filter dropdown shows all active tiers, updates dashboard on selection, persists across sessions
This prevents ambiguity during development and QA. Either the filter works as specified or it doesn’t.
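Testable statements translate almost directly into acceptance tests. Below is a minimal pytest-style sketch of the filter requirement above; the in-memory `Dashboard` class is a hypothetical stand-in so the checks run, and the assertions mirror the acceptance criteria rather than any real implementation.

```python
# Sketch: the filter acceptance criteria expressed as automated checks.
# The in-memory Dashboard below is a hypothetical stand-in; the
# assertions are what matter, not the implementation.
ACCOUNTS = [
    {"name": "Acme", "tier": "Enterprise"},
    {"name": "Globex", "tier": "Growth"},
]
SESSION_STORE = {}  # persists filter choices across "sessions"


class Dashboard:
    def __init__(self, user: str):
        self.user = user
        self.tier_filter = SESSION_STORE.get(user)

    def available_tier_filters(self):
        return sorted({a["tier"] for a in ACCOUNTS})

    def apply_tier_filter(self, tier: str):
        self.tier_filter = tier
        SESSION_STORE[self.user] = tier

    def visible_accounts(self):
        if self.tier_filter is None:
            return ACCOUNTS
        return [a for a in ACCOUNTS if a["tier"] == self.tier_filter]


def test_filter_shows_all_active_tiers():
    assert Dashboard("csm@example.com").available_tier_filters() == ["Enterprise", "Growth"]


def test_filter_updates_accounts_on_selection():
    dashboard = Dashboard("csm@example.com")
    dashboard.apply_tier_filter("Enterprise")
    assert all(a["tier"] == "Enterprise" for a in dashboard.visible_accounts())


def test_filter_persists_across_sessions():
    Dashboard("csm@example.com").apply_tier_filter("Enterprise")
    assert Dashboard("csm@example.com").tier_filter == "Enterprise"
```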
Stage 4: Stakeholder Review
Share the draft with engineering (technical feasibility), design (user experience), and relevant business stakeholders (strategic alignment).
Incorporate feedback in rounds:
- First review: Problem definition and goals (does this solve the right problem?)
- Second review: Solution approach and requirements (is this feasible and complete?)
- Final review: Launch criteria and dependencies (are we ready to commit?)
I’ve seen teams treat PRD review as a formality. That’s when you discover three weeks before launch that legal has compliance concerns or operations lacks training capacity.
Stage 5: Living Document Maintenance
The PRD doesn’t freeze at approval. As you learn through build and testing, update it to reflect decisions made and questions resolved.
I mark changes with version history and update dates. This creates an audit trail of how the product evolved and prevents confusion between ‘what we planned’ and ‘what we built’.

Advanced Techniques for Mastery
Write Backwards from Success
Start with the launch announcement or user testimonial you want to achieve. Amazon’s ‘working backwards’ process uses press releases as PRD anchors: write the press release first, then work backwards to define requirements.[^7]
This forces clarity on value proposition. If you can’t write a compelling press release, perhaps the feature isn’t as valuable as you thought.
Use Visual Requirements
Screenshots, mockups, and flow diagrams often communicate more clearly than paragraphs. Figma’s embedded files exemplify this.[^4] When the design changes, the PRD updates automatically.
For complex workflows, I include:
- Before/after workflow diagrams showing how the solution changes user behaviour
- Annotated mockups with callouts for key requirements
- Decision trees for conditional logic
Separate Must-Have from Nice-to-Have
Use MoSCoW prioritisation (Must have, Should have, Could have, Won’t have) to make scope negotiation explicit.
Must-haves are the minimum viable feature set. If time runs short, you cut from ‘Should’ and ‘Could’, not from ‘Must’. This prevents the trap where everything is ‘high priority’ and nothing is actually prioritised.
Include Anti-Requirements
Explicitly state what you’re NOT building or solving. This prevents scope creep and manages expectations.
For that analytics dashboard:
- NOT building: Predictive churn modelling (deferred to Phase 2)
- NOT solving: CSM training on customer success methodology (separate initiative)
- NOT including: Historical trend analysis beyond 90 days (data pipeline limitations)
Create Measurement Plans
Don’t just list metrics. Define how you’ll measure them, what baseline to expect, and what threshold indicates success or failure.
For ‘reduce report creation time by 75%’:
- Measurement: Time from dashboard open to first export/action
- Baseline: Current average 4.2 hours (surveyed from 25 CSMs)
- Success threshold: Under 1 hour for 80% of CSM cohort
- Instrumentation: Dashboard analytics tracking session duration and actions
This makes post-launch evaluation straightforward instead of debating whether you succeeded.
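As a rough sketch of how that instrumentation might be evaluated post-launch, the snippet below computes time-to-first-action from session events and checks the 80%-under-one-hour threshold. The event names and data format are assumptions for illustration; the point is that measurement, baseline, and threshold are all defined before launch.

```python
# Sketch: evaluating the 'report creation time' measurement plan from
# session analytics. Event names and fields are hypothetical.
from datetime import datetime

SUCCESS_THRESHOLD_HOURS = 1.0
SUCCESS_COHORT_SHARE = 0.80  # 80% of CSMs must come in under the threshold


def hours_to_first_action(session: list) -> float:
    """Hours from dashboard open to the first export or action event."""
    opened = min(e["ts"] for e in session if e["event"] == "dashboard_opened")
    acted = min(e["ts"] for e in session if e["event"] in {"export", "action_taken"})
    return (acted - opened).total_seconds() / 3600


def meets_success_threshold(sessions_by_csm: dict) -> bool:
    times = [hours_to_first_action(s) for s in sessions_by_csm.values()]
    within = sum(t <= SUCCESS_THRESHOLD_HOURS for t in times)
    return within / len(times) >= SUCCESS_COHORT_SHARE


# Two hypothetical CSM sessions, both under one hour:
sessions = {
    "csm_a": [
        {"event": "dashboard_opened", "ts": datetime(2025, 1, 6, 9, 0)},
        {"event": "export", "ts": datetime(2025, 1, 6, 9, 40)},
    ],
    "csm_b": [
        {"event": "dashboard_opened", "ts": datetime(2025, 1, 6, 9, 0)},
        {"event": "action_taken", "ts": datetime(2025, 1, 6, 9, 50)},
    ],
}
print(meets_success_threshold(sessions))  # True
```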
When Not to Write a PRD
PRDs aren’t always the right tool. Consider alternatives when:
The problem is genuinely unclear: If you’re in true discovery mode, a lightweight hypothesis document or experiment brief might serve better. Write the PRD after you’ve validated what to build.
The team is tiny and co-located: If three people sit together and talk daily, verbal alignment might suffice. But document decisions somewhere, even if it’s just Slack threads you can reference.
The change is trivial: Fixing a typo doesn’t need a PRD. Use your judgement on scope. I’d write a PRD for anything taking more than two days of engineering time.
You’re prototyping to learn: Prototypes test assumptions. PRDs document requirements after assumptions are validated. Don’t write requirements for something you plan to throw away.
The cost of skipping a PRD when you need one is high. Research shows 70% of digital transformation projects fail, with requirements issues causing the majority of failures.[^2] But forcing PRD process onto every tiny change creates bureaucratic overhead that slows teams down.
The test: if different team members might make different assumptions about what you’re building, you need shared documentation. What you call it doesn’t matter.
The Role of AI in PRD Writing
I use AI to accelerate PRD creation, but I never let it own the thinking. Here’s where AI helps and where it falls short.
Where AI Adds Value
Draft generation: Given a rough outline, AI can expand user stories, suggest edge cases, and structure sections consistently. This beats staring at a blank page.
Consistency checking: AI spots where terminology shifts or requirements conflict. It’s faster at finding ‘the dashboard should load in 3 seconds’ in one section and ‘acceptable load time is 5-8 seconds’ in another; a minimal sketch of this check follows below.
Alternative phrasings: When I’ve written something clunky, AI offers clearer ways to express the same requirement. This is editing assistance, not original thinking.
Template creation: AI can generate starter templates based on PRD type (feature vs. platform vs. experience), saving setup time.
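Here is that consistency-checking sketch. The `call_llm` function is a hypothetical stand-in for whichever model API your team uses; every flagged item still goes to a human for review.

```python
# Sketch: asking a model to flag conflicting or inconsistent requirements.
# call_llm is a hypothetical stand-in for your model API; the human
# reviews and owns every flagged item.
CONSISTENCY_PROMPT = """You are reviewing a Product Requirements Document.
List any requirements that contradict each other, use shifting terminology
for the same concept, or state different values for the same constraint.
For each issue, quote both passages and explain the conflict in one sentence.

PRD:
{prd_text}
"""


def check_consistency(prd_text: str, call_llm) -> str:
    """Return the model's list of suspected conflicts for human review."""
    return call_llm(CONSISTENCY_PROMPT.format(prd_text=prd_text))
```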
Where AI Fails
Prioritisation and trade-offs: AI doesn’t understand your business strategy, technical constraints, or user needs well enough to make calls on what matters most.
Organisational context: It can’t know that ‘simple dashboard’ means something different after your CTO vetoed the last complex analytics project, or that ‘mobile-friendly’ must meet specific accessibility standards your company committed to.
Problem validation: AI might generate plausible-sounding problems, but it can’t validate whether users actually face them or care about solving them.
Accountability: When the PRD turns out to be wrong, the product manager owns that failure. AI is a tool, not a decision maker.
Using AI Effectively
I follow a pattern: AI drafts, I decide.
- I write the problem statement and goals myself (no delegation here)
- I ask AI to expand user stories and suggest edge cases
- I review, modify, and own every suggested requirement
- I verify that requirements align with strategy, constraints, and user reality
Before accepting any AI-generated requirement, I ask:
- Does this align with our strategy?
- Does it reflect our technical constraints?
- Does it match how users actually work?
- Could this cause unintended consequences?
AI serves as a writing assistant, not a product strategist. The human product manager’s judgement remains essential.
Common Mistakes and How to Avoid Them
Writing for Writing’s Sake
The PRD that no one reads wastes more time than having no PRD. Keep it concise. Use visuals. Structure it for skimming with clear sections and headings.
If your PRD exceeds 10 pages, it’s probably too long or trying to cover too much. Split it into multiple feature PRDs or trim the unnecessary detail.
Solution Disguised as Problem
‘We need a dashboard’ isn’t a problem statement. It’s a solution assumption.
The problem is ‘CSMs can’t identify at-risk accounts quickly enough’. A dashboard might solve that. So might automated alerts, or better Salesforce reports, or changing the CSM workflow entirely.
Define problems, not solutions. Let your team propose the best solution.
Requirements Without Rationale
When requirements say ‘system shall support 1000 concurrent users’ without explaining why, engineers can’t make informed trade-offs. If the real driver is ‘we have 800 customers, each typically has 1 user accessing the system, and we expect 40% growth this year’, that context helps.
Every significant requirement should explain its reasoning. This doesn’t mean verbose justification, just enough context for trade-off decisions.
Static Documents
The worst PRDs are ‘write once, file away’ documents that diverge from reality as the project progresses.
Treat PRDs as living documents. Update them when requirements change, questions get resolved, or scope shifts. Version them and note changes. This keeps them useful throughout build and creates an accurate record for future reference.
Starting Points: Your First PRD

If you’re writing your first PRD or improving how you write them, here are concrete actions you can take immediately:
1. Pick a current or upcoming project (This week)
Choose something meaningful but not critical. Your second or third PRD will be better than your first. Don’t learn on the most important project.
2. Interview three users about the problem (1-2 hours each, complete within one week)
Don’t ask them what features they want. Ask them about their current workflow, where it breaks down, and what they do when it fails. This grounds your PRD in reality.
3. Draft the problem statement in one paragraph (30 minutes)
If you can’t explain the problem concisely, you don’t understand it well enough yet. This forces clarity before you spend time on detailed requirements.
4. Define three measurable outcomes (1 hour)
Pick one primary metric that must move, one secondary metric that supports it, and one counter-metric that shouldn’t get worse. If you can’t measure it, you can’t know if you succeeded.
5. Share the draft with an engineer and a designer (Two 30-minute reviews)
Don’t ask ‘is this good?’ Ask specific questions: ‘Is this problem worth solving?’ ‘Is this solution approach technically feasible?’ ‘What am I missing?’ Incorporate their feedback.
6. Write functional requirements as testable statements (2-3 hours)
Each requirement should be verifiable. ‘Fast dashboard’ isn’t testable. ‘Dashboard loads in under 3 seconds on standard corporate network’ is.
These steps take less than two days of focused work and produce a usable PRD. You can refine from there as you learn.
The Leverage of Good Documentation
PRDs aren’t glamorous work. They won’t appear in your portfolio or launch announcement. But they’re among the highest-leverage activities in product management.
When Birmingham City Council’s requirements failed, the cost wasn’t just £100 million spent on unusable software. It was 18 months of delayed projects, manual workarounds, and ultimately bankruptcy.[^1] When Queensland Health’s payroll system failed due to poor requirements, it cost AU$1.25 billion and caused ongoing payment errors affecting thousands of healthcare workers.[^8]
These are extreme examples, but the pattern holds at every scale. Skip the clarity of shared documentation, and you pay in wasted engineering cycles, misaligned features, and rework that could have been prevented.
The best PRDs are concise, collaborative, and outcome-driven. They’re living documents that evolve with the product, not static artifacts filed away at project start. They focus on problems worth solving before jumping to solutions. And in the age of AI, they remain fundamentally a human responsibility.
Write PRDs that people actually use. Your future self, your team, and your users will thank you.
Frequently Asked Questions
What exactly is a Product Requirements Document?
A Product Requirements Document (PRD) is a detailed description of what a product or feature should do, who it’s for, and how success will be measured. It aligns product, design, engineering, and business stakeholders around a shared understanding of requirements before development begins. The PRD includes the problem being solved, goals and success metrics, user stories, functional and non-functional requirements, constraints, and dependencies. Good PRDs prevent miscommunication and wasted effort by making assumptions explicit.
How is a PRD different from a product specification document?
A PRD defines the ‘what’ and ‘why’: what problem you’re solving, why it matters, and what outcomes you’re targeting. A product specification document (PSD) defines the ‘how’: technical implementation details, architecture, design specifications, and development approach. The PRD focuses on requirements from a user and business perspective. The PSD translates those requirements into technical specifications for the engineering team. In practice, smaller teams often combine these into one document, whilst larger organisations keep them separate.
How long should a PRD be?
There’s no universal answer, but most effective PRDs range from 3-10 pages. Intercom enforces a one-page limit for their ‘Intermissions’, arguing that if you can’t fit the problem on one page, you don’t understand it well enough yet. Figma’s PRDs are longer but highly visual, with embedded design files. Google historically favoured detailed PRDs of 20+ pages. The right length depends on product complexity, team size, and organisational culture. If your PRD takes more than 30 minutes to skim, it’s probably too long.
What’s the biggest mistake people make when writing PRDs?
The most common mistake is describing solutions instead of problems. When a PRD starts with ‘we need a dashboard’ rather than ‘CSMs spend 4+ hours weekly compiling reports manually’, it constrains the team to one solution without validating it’s the best one. This leads to building features that technically work but don’t solve the underlying problem effectively. Always start with the problem, provide context about who faces it and when, and let your team propose the best solution approach.
Do startups need PRDs, or are they just for large companies?
Startups absolutely benefit from PRDs, though they should be lightweight. The smaller and more co-located your team, the less formal your PRD needs to be. Three people sitting together might get by with a one-page problem statement and success metrics. But as soon as different team members might make different assumptions about what you’re building, you need shared documentation. The format matters less than having explicit agreement on what problem you’re solving and how you’ll know if you’ve succeeded.
How do you write a PRD for something completely new with no existing data?
For genuinely novel products, your PRD will have more assumptions and open questions than established features. Start with the hypothesis you’re testing and the evidence that makes you believe it’s worth exploring. Include user research insights, competitive analysis, or market trends that validate the problem space. Be explicit about what you don’t know and how you’ll learn it. Consider writing a lightweight hypothesis document first, then a full PRD after you’ve validated the problem through prototyping or beta testing.
Should PRDs include technical implementation details?
Generally no. PRDs should focus on requirements from a user and business perspective, not technical implementation. Specify what the product must do (‘dashboard loads in under 3 seconds’) without dictating how (‘using Redis caching and PostgreSQL queries’). This gives engineering room to find the best technical approach. However, platform PRDs for APIs or infrastructure need more technical detail. And any PRD should note technical constraints (‘must integrate with existing Salesforce’) that limit implementation choices.
How often should you update a PRD?
Treat PRDs as living documents that evolve throughout development. Update them when: requirements change based on new information, open questions get resolved, scope shifts after stakeholder discussions, or you make implementation trade-offs that affect the original requirements. Version your PRD and note what changed when. This keeps it useful throughout the build process and creates an accurate record of how the product evolved. A PRD that diverges from reality becomes useless as a reference document.
What tools are best for writing and maintaining PRDs?
The best tool is whatever your team actually uses and can collaborate on. Popular choices include Notion (flexible structure, good for embedding), Confluence (strong for enterprise teams already using Atlassian tools), Google Docs (universal access, easy commenting), Coda (Figma’s choice, powerful for embedding and automation), and Jira (integration with development workflow). Some teams use Figma or Miro for visual PRDs. The tool matters less than ensuring everyone can access, comment on, and reference the PRD easily throughout development.
How do you get stakeholders to actually read the PRD?
Keep it concise and scannable. Use clear section headers, bullet points for requirements, and visuals where possible. Start with a TL;DR section covering problem, solution approach, and key metrics. Structure the document so different stakeholders can skip to relevant sections (engineering focuses on technical requirements, business stakeholders focus on goals and metrics). Present the PRD in review sessions rather than just emailing it. And update it regularly so people trust it reflects current thinking.
What’s the difference between a PRD and user stories?
User stories are individual requirements written from a user’s perspective (‘As a CSM, I want to filter accounts by renewal date so I can prioritise outreach’). A PRD is the complete document containing multiple user stories plus context, goals, success metrics, constraints, and dependencies. User stories are one component of a PRD, but the PRD provides the broader context those stories exist within. Many teams break PRD requirements into user stories for sprint planning, maintaining traceability between the detailed requirements and the original PRD.
How do you write PRDs for AI features?
AI features require special attention to uncertainty, edge cases, and failure modes. Your PRD should address: how the AI makes decisions (even at high level), what happens when the AI is wrong, how you’ll measure accuracy, and what controls users have. Include evaluation criteria for AI output, not just traditional metrics. Specify how you’ll handle the AI’s probabilistic nature (‘when confidence is below 70%, show alternative suggestions’). And be explicit about ethical considerations, bias prevention, and data privacy requirements for AI systems.
What do you do when requirements conflict?
Document the conflict explicitly and escalate for decision. The PRD should note: what the conflicting requirements are, who owns each position, what the trade-off implications are (time, cost, user impact), and who has authority to decide. Don’t let conflicting requirements sit implicit in the PRD. Make them visible, discuss trade-offs with stakeholders, document the decision and rationale, and update the PRD to reflect the resolution. This prevents teams from building solutions that try to satisfy both requirements and satisfy neither.
How do you balance detailed requirements with giving the team flexibility?
Focus on the ‘what’ and ‘why’, not the ‘how’. Specify what outcomes you need (‘users can filter dashboard by account tier’) and why it matters (‘CSMs need to triage different customer segments differently’), but not how to implement it (‘use a dropdown with checkboxes’). Include enough detail that the requirement is testable and the team understands constraints, but leave room for better solutions to emerge during development. If designers or engineers propose a different approach that achieves the same outcome more effectively, that’s a win.
What’s the ROI of writing PRDs?
Research shows requirements issues cause 70% of project failures and 50% of rework. For a team of 50 developers earning £150K annually, reducing rework by just 5% through better documentation saves roughly £375K a year (5% of a £7.5 million payroll). PRDs prevent the costly pattern of building the wrong thing, discovering misalignment late, and spending months on rework. They also reduce onboarding time for new team members, provide reference material for future work, and create institutional memory. The few hours spent writing a PRD typically save weeks or months of misdirected effort.
References
[^1]: Birmingham City Council ERP implementation failure. Panorama Consulting Group. (2024). Information Technology Project Failure
[^2]: Software project failure statistics on requirements issues. Info-Tech Research Group and Requiment. (2025). Why Do Software Development Projects Fail?
[^5]: Intercom job story template for PRD requirements. Cycle. (2022). How Intercom Writes Product Requirements Documents
[^6]: Documentation impact on feature release timelines. Evizi. (2025). The Hidden Cost of Poor Documentation in Software Development
[^8]: Queensland Health payroll system failure. Dolfing, H. (2024). Project Failure Case Studies
