Quick Answer

Project management estimation techniques fall into two broad families. Predictive (waterfall) techniques — analogous estimating, parametric estimating, bottom-up estimating, three-point estimating (PERT), and expert judgement — produce time and cost estimates in absolute units (days, dollars). Agile techniques — story points, planning poker, T-shirt sizing, affinity estimation and velocity-based forecasting — produce relative effort estimates used to plan sprints and forecast release dates. The right technique depends on how well requirements are understood, how much historical data exists, and how much precision is needed at the current stage of planning. No estimation technique is accurate — all produce approximations. The goal is to produce estimates that are good enough for decision-making at the appropriate level of precision, and to communicate their uncertainty honestly.

8+: estimation techniques covered in this guide
±50%: typical accuracy of early-stage estimates (Rough Order of Magnitude)
PERT: the most tested estimation formula on the PMP exam
Velocity: the Agile equivalent of a parametric forecast

Estimation is one of the most consequential skills in project management — and one of the most consistently underestimated in difficulty. Most project schedules are built on estimates. Most budgets are built on estimates. When those estimates are wrong, the project is wrong from the moment planning begins.

The challenge is not that estimating is technically complex. It is that estimates are predictions about uncertain futures, made by humans with limited information, who are subject to optimism bias, anchoring effects and external pressure to produce numbers that look acceptable rather than numbers that are realistic. The techniques in this guide do not eliminate that uncertainty. They give you structured, defensible ways to quantify it — and communicate it honestly to the people who will make decisions based on it.

This guide covers the full toolkit: waterfall and predictive techniques for projects with defined scope, Agile techniques for iterative delivery, hybrid approaches for projects that blend both, and the decision framework for choosing the right technique in the right situation.

01 — The Estimation Problem

Why Estimation Is Hard — The Estimation Problem

Before learning the techniques, it helps to understand why estimation consistently goes wrong in practice. Four forces conspire against accurate estimates on almost every project.

The optimism bias. Humans systematically underestimate how long tasks will take and how much they will cost. Study after study confirms this — in software development, construction, infrastructure and knowledge work alike. The average software project runs roughly 2× over its original time estimate and 1.5× over budget. These are not outliers caused by poor planning — they are the expected outcome of optimism bias in normal planning.

Anchoring. When someone gives you a number first — even an arbitrary one — it anchors your subsequent estimate toward it. If a sponsor says "I was thinking this would cost about $200,000," your estimate will cluster closer to $200,000 than it would have if you had started from a blank page. Structured estimation techniques (particularly Agile techniques like planning poker) are specifically designed to prevent anchoring by having estimators commit independently before revealing numbers.

The planning fallacy. People plan based on best-case scenarios rather than realistic scenarios, ignoring the base rate of similar projects. Asked how long a project will take, people focus on the specific project's characteristics — not on the historical fact that most similar projects ran 30% over their initial estimates.

External pressure. Estimates do not exist in a vacuum. They are given to sponsors, clients and executives who have expectations, budgets and deadlines. The pressure to give an estimate that fits within an existing budget or timeline is enormous — and it silently distorts estimates toward what people want to hear rather than what the data suggests is realistic.

💡
The honest framing every PM should give with an estimate: "This estimate is based on [technique] with [confidence level]. At this stage of the project, we expect an accuracy range of [±X%]. As planning progresses and requirements become clearer, accuracy will improve." Any estimate presented without its uncertainty range is an incomplete estimate.
02 — The Accuracy Spectrum

The Estimating Accuracy Spectrum — ROM to Definitive

Estimates are not equally precise at all stages of a project. PMBOK identifies a spectrum from Rough Order of Magnitude (ROM) at the start to Definitive estimates as planning matures. Understanding where you are on this spectrum — and communicating it clearly — is as important as the estimate itself.

Concept / Initiation: ROM (−50% to +100%)  ·  Early Planning: Budget (−10% to +25%)  ·  Detailed Planning: Definitive (−5% to +10%)

A Rough Order of Magnitude estimate made at project initiation — when scope is still high level — has an accuracy range of −50% to +100%. That is not a failure of estimation skill. That is the honest reflection of how much is unknown at that stage. A sponsor who takes a ROM estimate and treats it as a commitment has misused the information.

As planning progresses and requirements are defined in more detail, estimates tighten. A budget estimate (made during early planning with a defined WBS but not yet detailed task plans) might be −10% to +25%. A definitive estimate (made during detailed planning with full task breakdowns, resource assignments and schedule network) narrows to −5% to +10%.

The practical implication: Give different types of estimates at different project stages. Clearly label them. A ROM estimate is not the same as a definitive estimate and should never be used as one.
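The labelling discipline above can be made mechanical. Here is a minimal sketch, assuming the PMBOK accuracy ranges quoted in this section; the `ACCURACY_RANGES` table and the helper name are illustrative, not a standard API:

```python
# Hypothetical helper: convert a point estimate plus its estimate class
# into the honest range it should be communicated with. Percentages follow
# the ROM / Budget / Definitive spectrum described above.
ACCURACY_RANGES = {
    "rom": (-0.50, 1.00),         # Rough Order of Magnitude: -50% to +100%
    "budget": (-0.10, 0.25),      # Budget estimate: -10% to +25%
    "definitive": (-0.05, 0.10),  # Definitive estimate: -5% to +10%
}

def estimate_range(point_estimate: float, estimate_class: str) -> tuple[float, float]:
    """Return the (low, high) bounds implied by the estimate class."""
    low_pct, high_pct = ACCURACY_RANGES[estimate_class]
    return (point_estimate * (1 + low_pct), point_estimate * (1 + high_pct))

# A $350,000 ROM estimate should be communicated as $175,000-$700,000:
low, high = estimate_range(350_000, "rom")
```

Presenting the output of a function like this, rather than the bare point estimate, makes it harder for a ROM number to be silently treated as a commitment.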

03 — Predictive Estimation Techniques

Predictive (Waterfall) Estimation Techniques

🏛️
Analogous Estimating (Top-Down)
Uses historical data from similar past projects · Best for early-stage estimates

Analogous estimating uses the actual cost, duration or resource quantities from a previous, similar project as the basis for estimating the current project. It is a top-down technique — you estimate the whole or large sections first, then break down if needed.

For example: "Our last ERP implementation for a company of similar size took 14 months and cost $1.8 million. This implementation has comparable scope, so we estimate 13–15 months and $1.6–2.0 million." The estimate is derived from the historical anchor and adjusted for known differences.

Strengths
  • Fast — can be produced in hours or days
  • Useful at initiation when detail is unavailable
  • Based on real historical performance
  • Calibrated by expert judgement
Limitations
  • Only as good as the historical data — if past projects were poorly executed, estimates inherit those distortions
  • Assumes similarity — differences between projects may be underestimated
  • Low accuracy (ROM range typical)
  • Does not scale well when the new project is significantly different from historical data

Best used when: Requirements are still high level, time for estimation is limited, and a rough magnitude is needed for go/no-go decisions or initial budgeting.

📐
Parametric Estimating
Uses a statistical relationship between variables and cost/duration · Higher accuracy than analogous

Parametric estimating uses a statistical relationship between historical data and measurable project variables to calculate cost or duration estimates. The model is: Estimate = Parameter × Unit Rate. For example: if historical data shows that installing 1 kilometre of network cable takes 8 hours of labour, and the project requires 45 kilometres, the parametric estimate is 45 × 8 = 360 hours.

The technique requires reliable historical unit rate data and a linear or quantifiable relationship between the parameter and the estimate. It is widely used in construction (cost per square metre), software (function points per developer-day), manufacturing and infrastructure projects.

Strengths
  • More accurate than analogous estimating when the unit rate data is reliable
  • Scalable — easy to adjust as scope changes
  • Transparent — the model is explicit and auditable
  • Can achieve Budget to Definitive accuracy with good data
Limitations
  • Requires reliable historical unit rate data — if the data is poor, the model amplifies the error
  • Assumes linearity — real-world relationships are often non-linear (complexity, learning curves, economies of scale)
  • Does not account for project-specific factors not captured in the parameter

Example use cases: Cost per server rack installed, days per module of code developed, hours per inspection, cost per room renovated, days per document reviewed.
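The parametric model is simple enough to express in a couple of lines. This sketch uses the cable-installation figures from the text; the function name is illustrative:

```python
def parametric_estimate(quantity: float, unit_rate: float) -> float:
    """Parametric model: Estimate = Parameter x Unit Rate."""
    return quantity * unit_rate

# The example from the text: 45 km of cable at 8 labour-hours per km
hours = parametric_estimate(45, 8)   # 360 hours
```

The transparency the technique is praised for comes from exactly this explicitness: anyone can audit the quantity, the unit rate, and the multiplication.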

🔼
Bottom-Up Estimating
Estimates individual work packages and aggregates upward · Highest accuracy of the three predictive techniques

Bottom-up estimating starts at the lowest level of the WBS — individual work packages or activities — and estimates each one separately. The individual estimates are then aggregated (summed) up through the WBS hierarchy to produce the total project estimate. It is the most time-intensive estimation approach and requires a detailed WBS to execute properly.

The process: decompose the project into work packages in the WBS, estimate the cost and duration of each work package using the best available information (expert judgement, parametric rates, time-boxing), roll up estimates from work package level to WBS element level to project total.

Strengths
  • Highest accuracy of the three predictive techniques
  • Forces thorough understanding of project scope
  • Each estimate is independently justified
  • Makes scope gaps visible — unestimated work = undiscovered scope
  • Creates strong buy-in — the people doing the work help create the estimate
Limitations
  • Time-intensive — requires detailed WBS and substantial effort to produce
  • Cannot be done until scope is well-defined — not useful at initiation
  • Aggregation obscures uncertainty — summing estimates without their uncertainty ranges can create false precision
  • Can be gamed — teams pad individual estimates, creating a sum that is inflated

Best used when: Scope is fully defined, a definitive estimate is required (budget baseline, contract pricing), and time is available to do it properly. This is the standard technique for creating the cost baseline used in EVM.
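The roll-up step can be sketched as a recursive walk over a nested WBS. The work packages and their numbers below are hypothetical, assumed only for illustration:

```python
# Leaves hold (cost $, duration days); branches aggregate their children.
WBS = {
    "1 Design": {
        "1.1 Requirements": (12_000, 10),
        "1.2 Architecture": (18_000, 15),
    },
    "2 Build": {
        "2.1 Development": (60_000, 40),
        "2.2 Unit testing": (15_000, 12),
    },
}

def roll_up(node):
    """Sum cost and duration from work packages up through the WBS."""
    if isinstance(node, tuple):          # leaf work package
        return node
    total_cost = total_days = 0
    for child in node.values():
        cost, days = roll_up(child)
        total_cost += cost
        total_days += days
    return total_cost, total_days

# Note: summing durations assumes sequential work; calendar time needs a
# schedule network. This is one way aggregation can obscure reality.
project_cost, project_days = roll_up(WBS)   # (105000, 77)
```

Notice what the code does not capture: integration effort, overhead, and uncertainty ranges, which is precisely the "aggregation obscures uncertainty" limitation listed above.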

🧑‍💼
Expert Judgement and the Delphi Technique
Structured use of expertise · Especially powerful for novel or unique projects

Expert judgement means using the knowledge of people with relevant experience to inform or validate estimates. It appears as a tool or technique in almost every PMBOK estimation process because experienced PMs and domain experts are often the most reliable source of estimation data — particularly when historical data is limited or the project is novel.

Used without structure, expert judgement is vulnerable to the biases described earlier — particularly anchoring (if experts hear each other's estimates) and social pressure (if a senior expert dominates). The Delphi technique addresses this by structuring expert input to be anonymous and iterative:

  1. Experts submit their estimates independently and anonymously
  2. A facilitator aggregates the estimates and shares the range with all experts (without attribution)
  3. Experts who gave outlier estimates are invited to explain their reasoning anonymously
  4. All experts revise their estimates in light of the shared reasoning
  5. The process repeats until the estimates converge to an acceptable range

The Delphi technique is particularly valuable for novel technology projects, innovation projects and situations where no reliable historical data exists. It surfaces disagreement and forces it to be resolved through reasoning rather than authority.

04 — Three-Point Estimating and PERT

Three-Point Estimating and the PERT Formula

Three-point estimating addresses one of the core weaknesses of single-point estimates: they present one number as if the future is certain. In reality, every estimate has a range of plausible outcomes. Three-point estimating makes that range explicit by asking for three estimates instead of one.

The three points are:

  • Optimistic (O): The best-case duration or cost — if everything goes better than expected
  • Most Likely (M): The most probable duration or cost — the realistic, most common outcome
  • Pessimistic (P): The worst-case duration or cost — if significant problems occur

These three values feed into two weighting formulas. The PERT (Program Evaluation and Review Technique) formula gives extra weight to the most likely estimate. The triangular distribution treats all three equally. PERT is more commonly tested on the PMP exam and more widely used in practice.

PERT formulas — memorise all three for the PMP exam
PERT Expected Value
E = (O + 4M + P) ÷ 6
Weighted average — Most Likely gets 4× weight. This is the estimate used for planning.
Standard Deviation
SD = (P − O) ÷ 6
Measures estimate spread. Larger SD = more uncertainty. Used to calculate confidence ranges.
Variance
V = ((P − O) ÷ 6)²
SD squared. Variances for independent tasks can be added to get project-level variance.

PERT Worked Example — Fully Calculated

Example — 4-Task Project with PERT Estimates
Task | Optimistic (O) | Most Likely (M) | Pessimistic (P) | PERT Expected (E) | Std Dev (SD)
Requirements analysis | 3 days | 5 days | 10 days | (3 + 20 + 10) ÷ 6 = 5.5 days | (10 − 3) ÷ 6 = 1.17
System design | 5 days | 8 days | 15 days | (5 + 32 + 15) ÷ 6 = 8.67 days | (15 − 5) ÷ 6 = 1.67
Development | 10 days | 15 days | 28 days | (10 + 60 + 28) ÷ 6 = 16.33 days | (28 − 10) ÷ 6 = 3.00
Testing | 3 days | 5 days | 9 days | (3 + 20 + 9) ÷ 6 = 5.33 days | (9 − 3) ÷ 6 = 1.00
Project Total | | | | 35.83 days | SD not simply summed — add variances first

Calculating project-level standard deviation: Individual task SDs cannot be simply added. Instead, sum the variances (SD²) and take the square root. Project SD = √(1.17² + 1.67² + 3.00² + 1.00²) = √(1.37 + 2.79 + 9.00 + 1.00) = √14.16 ≈ 3.76 days.

Confidence ranges: With a PERT expected duration of 35.83 days and a project SD of 3.76 days, statistical confidence ranges are: 68% confidence the project completes in 32.1–39.6 days (E ± 1SD), 95% confidence in 28.3–43.4 days (E ± 2SD), 99.7% confidence in 24.6–47.1 days (E ± 3SD).
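The whole worked example, including the variance-based roll-up, fits in a few lines of Python:

```python
from math import sqrt

def pert(o: float, m: float, p: float):
    """Return (expected value, standard deviation) per the PERT formulas."""
    e = (o + 4 * m + p) / 6
    sd = (p - o) / 6
    return e, sd

# The four tasks from the worked example above: (O, M, P) in days
tasks = [(3, 5, 10), (5, 8, 15), (10, 15, 28), (3, 5, 9)]

estimates = [pert(o, m, p) for o, m, p in tasks]
project_e = sum(e for e, _ in estimates)                 # ~35.83 days
project_sd = sqrt(sum(sd ** 2 for _, sd in estimates))   # ~3.76 days

# 95% confidence range is E +/- 2 SD
low, high = project_e - 2 * project_sd, project_e + 2 * project_sd
```

Working from unrounded task SDs, the code lands on the same ~3.76-day project SD as the hand calculation above, which rounds each SD first.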

🎓
PMP exam tip on PERT: The exam will give you O, M and P values and ask you to calculate E, SD or a confidence range. The formula for E = (O + 4M + P) ÷ 6 is always the starting point. For SD, it is always (P − O) ÷ 6. The "±1 SD" range gives approximately 68% confidence. The "±2 SD" range gives approximately 95% confidence. These are standard assumptions in PERT problems — you do not need to derive them.
05 — Reserve Analysis

Reserve Analysis — Building Estimation Uncertainty Into the Budget

Reserve analysis is the process of adding buffers to estimates to account for identified and unidentified uncertainty. PMBOK distinguishes two types of reserve:

Contingency reserve — added for identified risks. If the risk register contains a specific risk (e.g. "vendor may be 2 weeks late") and a response has been planned that involves additional cost if the risk occurs, the estimated cost of that response is the contingency reserve. Contingency reserve is part of the cost baseline and is included in the project budget. It is the PM's authority to use.

Management reserve — added for unknown unknowns — risks that have not been identified but that experience and project complexity suggest will materialise. Management reserve sits outside the cost baseline and requires management/sponsor approval to access. It is not included in EVM calculations (which are based on the cost baseline).

The practical decision: how much reserve is appropriate? This depends on project risk level, complexity, novelty and stakeholder risk appetite. Common guidance: 5–15% contingency for well-understood projects with a mature risk register; 15–25% for novel, complex or technically uncertain projects. Management reserve of 5–10% of the project budget is common on large capital projects.
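The budget build-up described above can be sketched as follows. The percentages are the mid-range figures from this section, and basing management reserve on the base estimate (rather than the baseline) is an assumption for illustration; organisations vary:

```python
def project_budget(base_estimate: float,
                   contingency_pct: float = 0.10,
                   management_pct: float = 0.05):
    """Return (cost_baseline, total_budget) from a base estimate."""
    contingency = base_estimate * contingency_pct
    cost_baseline = base_estimate + contingency   # EVM measures against this
    management = base_estimate * management_pct   # sponsor-controlled, outside baseline
    return cost_baseline, cost_baseline + management

baseline, budget = project_budget(500_000)   # baseline ~$550k, total budget ~$575k
```

The structural point the code makes explicit: the PM plans and reports against `cost_baseline`; `budget` minus `cost_baseline` is reachable only with sponsor approval.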

06 — Agile Estimation Techniques

Agile Estimation Techniques

Agile estimation works on a fundamentally different philosophy from predictive estimation. Rather than predicting exact durations and costs, Agile techniques estimate relative effort — how much work is required compared to other items. This sidesteps the precision bias of single-point estimates and embraces the reality that early estimates are inherently uncertain.

Story Points
The foundation of Agile estimation

Story points are a unit of measure for the relative effort required to implement a user story. They are abstract — a story worth 5 points takes roughly 2.5× the effort of a 2-point story. They capture effort, complexity and uncertainty together in a single number.

Key principle: Story points are team-relative, not universal. A 5-point story for Team A may be a 3-point story for Team B. They are used to compare stories within a team's context, not across teams.

The standard scale uses Fibonacci numbers — 1, 2, 3, 5, 8, 13, 21 — because the increasing gaps reflect genuine uncertainty: a 13-point story is not just 13× harder than a 1-point story, it is also fundamentally harder to estimate with precision.

T-Shirt Sizing
Quick relative categorisation for large backlogs

T-shirt sizing assigns work items to categories — XS, S, M, L, XL, XXL — based on relative effort. It is faster than story point estimation and works well for large backlogs where rough categorisation is more useful than precise point values.

Common use: At product backlog refinement sessions for new epics or features. Once an item is planned for an upcoming sprint, it is broken down into stories and estimated in story points.

Some teams map T-shirt sizes to story point ranges: XS=1, S=2, M=3, L=5, XL=8, XXL=13+. Others use T-shirt sizes purely qualitatively without numerical mapping.

Affinity Estimation
Rapid group sorting for large backlogs

Affinity estimation (also called affinity mapping or silent sorting) is a technique for quickly estimating large numbers of stories — often 20–100 at once. Each story is written on a card, and team members silently sort the cards into size groups, moving cards they disagree with. The process continues until consensus emerges.

When to use it: At the start of a project or release planning session when a large, unestimated backlog needs to be sized quickly. Affinity estimation can estimate 100 stories in under an hour — far faster than individual story-by-story estimation.

Relative Estimation
Anchoring all estimates to a reference story

Relative estimation means establishing a reference story — a "known" item the team has delivered before — and estimating all new stories relative to it. "Is this story bigger or smaller than the login screen we built in Sprint 3? Roughly how much bigger?"

This anchors estimation to real-world team experience rather than abstract time units. It is more accurate than absolute time estimates because humans are better at comparing things than measuring them in absolute units. The reference story is typically a 3 or 5-point story that serves as a calibration point for the team.

Planning Poker — The Standard Agile Estimation Ceremony

Planning poker (also called Scrum poker) is the most widely used Agile estimation technique. It combines relative estimation with anonymous simultaneous reveal to prevent anchoring and encourage honest assessment.

How it works:

  1. The Product Owner presents a user story and answers questions
  2. Each team member privately selects a card representing their story point estimate
  3. All cards are revealed simultaneously — nobody sees others' estimates before committing their own
  4. If estimates vary significantly, the highest and lowest estimators explain their reasoning
  5. The discussion clarifies hidden complexity or assumptions, and the team votes again
  6. Repeat until consensus (or near-consensus) is reached

The card deck uses Fibonacci-ish values. Most decks also include special cards:

0 · 1 · 2 · 3 · 5 · 8 · 13 · 21 · 40 · ? · ∞ · ☕
☕ = break needed (the session has gone too long)  ·  ? = not enough information to estimate  ·  ∞ = too large, needs to be split into smaller stories  ·  0 = already done or trivial

Why simultaneous reveal matters: If team members reveal estimates one at a time, anchoring occurs — later estimators cluster toward the first number they see. Simultaneous reveal forces every team member to commit independently. The disagreement that surfaces is information — it reveals different understandings of the story's scope or complexity that need to be resolved before work begins.

07 — Velocity-Based Forecasting

Velocity-Based Forecasting — The Agile Parametric Estimate

Velocity is the average number of story points a team completes per sprint, calculated over a rolling window of recent sprints. Once a team's velocity is established, it becomes a forecasting tool — divide the total story points remaining in the backlog by the team's velocity to estimate how many sprints are needed to complete the work.

Sprint | Points Committed | Points Completed | Rolling Average Velocity
Sprint 1 | 32 | 28 | 28
Sprint 2 | 35 | 31 | 29.5
Sprint 3 | 30 | 33 | 30.7
Sprint 4 | 34 | 30 | 30.5
Sprint 5 | 32 | 32 | 30.8
Established velocity | Average of Sprints 3–5: 31.7 points/sprint | Use this for forecasting

Forecasting remaining work: If the product backlog contains 220 story points of remaining work and the team's established velocity is 32 points per sprint, the forecast is 220 ÷ 32 = 6.9 sprints — approximately 7 two-week sprints, or roughly 14 weeks to complete the backlog.

Important caveats: Velocity is a lagging indicator — it tells you what the team has achieved, not what they will achieve. It assumes team stability (same people, same working patterns), consistent story point calibration (the team is estimating stories consistently over time), and no major changes in the nature of the work. A team that consistently over-estimates stories will appear to have a lower velocity than a team that under-estimates — velocity is calibrated to the team's own estimation scale.
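The forecasting arithmetic above, using a rolling 3-sprint average as the table recommends, can be sketched as:

```python
from math import ceil

completed = [28, 31, 33, 30, 32]   # points completed, Sprints 1-5

def rolling_velocity(points: list[int], window: int = 3) -> float:
    """Average of the most recent `window` sprints."""
    return sum(points[-window:]) / window

velocity = rolling_velocity(completed)   # (33 + 30 + 32) / 3 ~ 31.7 points/sprint
sprints_needed = ceil(220 / velocity)    # 220-point backlog -> 7 sprints
weeks = sprints_needed * 2               # two-week sprints -> ~14 weeks
```

Rounding up with `ceil` matters: a forecast of 6.9 sprints means the work finishes partway through sprint 7, so 7 sprints is the honest answer.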

⚠️
Velocity is not a productivity metric. A common mistake is using velocity to compare teams or to pressure teams to increase velocity. Velocity is a forecasting input, not a performance measure. A team that inflates estimates to appear to have high velocity is gaming a meaningless metric. The only thing velocity tells you reliably is how much work that specific team, with that specific estimation calibration, delivers in a sprint.
08 — Hybrid Approaches

Hybrid Estimation Approaches

Rolling Wave Planning

Rolling wave planning is the practice of planning in detail for the near term and at a high level for the future. It acknowledges that detailed estimation is only useful when requirements are sufficiently defined — and that forcing detailed estimates for work 6 months away is a false precision exercise.

In a rolling wave plan: the next 1–2 sprints are planned in detail (bottom-up estimates for each story), the next 1–2 months are planned at feature or epic level (T-shirt sizing or rough story point estimates), and everything beyond that is tracked as themes or capabilities in a high-level roadmap (analogous or ROM estimates).

As the project progresses, the planning horizon rolls forward — yesterday's medium-term plans become near-term plans and receive detailed estimation. Rolling wave planning is the default approach in Agile, but it is also applicable in predictive projects with uncertain scope.

The Cone of Uncertainty

The Cone of Uncertainty is a visual model that shows how estimation accuracy improves as a project progresses and more is known. Originally developed in the software industry, it has been adopted broadly in project management.

The Cone of Uncertainty — Estimate Accuracy Across the Project Lifecycle
Concept: −50% to +100% → Initiation: ±50% → Planning: ±25% → Execution: ±10% → Completion: ±5%

The cone illustrates why early estimates carry large uncertainty ranges and why demanding false precision at initiation is counterproductive. As the project moves through planning and execution, the cone narrows — uncertainty reduces as more is known about requirements, technology and constraints.

The cone also explains why projects that lock their budget to a ROM estimate at concept stage will almost always appear to overrun — not because the project was managed badly, but because the initial budget was based on an estimate with an inherent ±50% range.

Monte Carlo Simulation

Monte Carlo simulation is a computational technique that generates thousands of random scenarios based on probability distributions for each task's duration or cost. Each simulation run picks a random value from each task's distribution, calculates the total, and records the result. After thousands of runs, the results form a probability distribution for the overall project duration or cost.

The output answers questions like: "What is the probability this project completes within 12 months?" or "What budget gives us an 80% chance of not overrunning?" It is more sophisticated than PERT analysis and captures non-linear relationships and correlations between tasks that the PERT formula assumes away.

Monte Carlo requires software (Microsoft Project's risk analysis add-ons, Primavera Risk Analysis, or dedicated tools like @Risk) and is most commonly used on large, complex or high-risk projects where the cost of estimation error is substantial.
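The core of the technique needs no special software to understand. Here is a minimal standard-library sketch, assuming each task's duration follows a triangular distribution defined by its three-point estimates (the dedicated tools add correlation modelling and better distributions); the task values reuse the PERT worked example:

```python
import random

tasks = [(3, 5, 10), (5, 8, 15), (10, 15, 28), (3, 5, 9)]  # (O, M, P) days
RUNS = 10_000
random.seed(42)  # reproducible runs for this illustration

totals = []
for _ in range(RUNS):
    # random.triangular takes (low, high, mode), so pessimistic goes second
    total = sum(random.triangular(o, p, m) for o, m, p in tasks)
    totals.append(total)

totals.sort()
p80 = totals[int(0.80 * RUNS)]                        # duration at 80% confidence
prob_within_40 = sum(t <= 40 for t in totals) / RUNS  # P(finish within 40 days)
```

The sorted `totals` list is the output distribution: read off any percentile to answer "what duration gives us an X% chance of finishing on time?"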

09 — Which Technique to Use

Which Estimation Technique to Use — Decision Guide

Technique | Best When | Accuracy | Effort to Produce
Analogous (top-down) | Early stage, limited info, quick ballpark needed, similar historical projects exist | ROM (±50%) | Low — hours
Parametric | Well-defined scope units, reliable historical unit rate data, linear relationship exists | Budget to Definitive | Medium
Bottom-up | Detailed WBS available, highest accuracy needed, cost baseline required for EVM | Definitive (±5–10%) | High — days/weeks
Three-point (PERT) | Uncertainty must be quantified, confidence ranges needed, risk-aware planning | Medium to Definitive | Medium
Expert judgement / Delphi | Novel technology, no historical data, complex judgement required | Varies with expertise | Medium
Story points + planning poker | Agile delivery, iterative requirements, sprint planning, team-based estimation | Relative accuracy | Low per story
T-shirt sizing | Large backlog, quick rough sort needed, early product planning | Rough order | Very low
Velocity forecasting | Established Agile team with 3+ sprints of data, release date forecasting | Improves over time | Low (once velocity established)
Rolling wave planning | Hybrid projects, evolving requirements, phased delivery | Progressive precision | Low (ongoing)
Monte Carlo simulation | Large complex projects, probabilistic schedule/cost analysis, risk quantification | Probabilistic — most sophisticated | High — requires software and data
10 — Common Estimation Mistakes

The 6 Most Common Estimation Mistakes — and How to Avoid Them

Treating estimates as commitments
An estimate is a probabilistic prediction, not a promise. When sponsors and clients treat estimates as fixed commitments, teams learn to pad heavily or — worse — give optimistic numbers and then struggle to deliver them. This creates a culture where nobody trusts estimates and nobody gives honest ones.
Always communicate the accuracy range alongside the estimate. "Our best estimate is 14 weeks, with a range of 11–18 weeks at 80% confidence." A named range is more honest and more useful than a single false-precision number.
Estimating based on available budget rather than actual work
When a sponsor says "the budget is $500,000," and the estimator then constructs an estimate that totals $485,000 — not because the work costs that, but because the budget dictates the answer — the estimate is worthless. This is Parkinson's Law in estimation: work expands to fill the budget available.
Estimate the work independently of the budget. If the honest estimate exceeds the budget, that is information the sponsor needs — not a problem to hide through estimation manipulation.
Forgetting integration and overhead in bottom-up estimates
Bottom-up estimates that sum individual work package estimates consistently underestimate total project cost and duration because they miss integration, testing, coordination, communication overhead, management time and rework. The parts do not simply add up to the whole.
Add explicit line items for integration activities (system integration, end-to-end testing, deployment), project management overhead (typically 10–15% of project cost), and reserve analysis. Bottom-up is only the start — integration overhead must be added on top.
Ignoring the estimation accuracy range
Presenting a single number — $350,000 — without the range ($280,000–$490,000 at ROM stage) implies false precision and invites the estimate to be treated as a commitment. The range is not a hedge — it is the honest information that decision-makers need.
Always present estimates with their accuracy range and the basis for that range. Label the estimate type: ROM, budget or definitive.
Allowing anchoring in group estimation sessions
When one person in a group gives their estimate first — especially if they are senior — everyone else anchors to that number. Group estimation sessions that do not control for anchoring produce estimates that reflect the first speaker's view, not independent analysis.
Use simultaneous reveal techniques — planning poker in Agile, or written independent estimates before group discussion in predictive settings. In the Delphi technique, all estimates are submitted anonymously before being shared.
Using Agile velocity data from a team's first two sprints
Sprint 1 and 2 velocities are almost never representative of a team's stable performance. Teams are still calibrating their estimates, setting up their environment and forming working relationships. Using early velocity data to forecast release dates produces wildly inaccurate projections.
Wait for at least 3–5 sprints before using velocity for forecasting. Use a rolling average of the most recent 3 sprints rather than a simple all-time average to reflect the team's current, calibrated performance.

Preparing for the PMP Exam?

Estimation techniques — including PERT calculations, reserve analysis and Agile velocity forecasting — appear across all three ECO domains. Test yourself with 200 free scenario-based questions.

11 — FAQ

Project Management Estimation Techniques — 8 Questions Answered

What are the main estimation techniques in project management?
The main estimation techniques in project management fall into two families. Predictive techniques (used in waterfall projects) include: analogous estimating (using historical data from similar projects), parametric estimating (using a statistical unit rate — e.g. cost per square metre), bottom-up estimating (estimating each work package individually and aggregating), three-point estimating using the PERT formula (O + 4M + P) ÷ 6, expert judgement and the Delphi technique. Agile techniques include: story points (relative effort units), planning poker (simultaneous anonymous estimation), T-shirt sizing (rough categorisation), affinity estimation (silent group sorting), and velocity-based forecasting (dividing remaining backlog points by team velocity). Hybrid approaches include rolling wave planning, the Cone of Uncertainty and Monte Carlo simulation.
How is the PERT formula calculated?

PERT (Program Evaluation and Review Technique) uses three time estimates to calculate a weighted expected duration. The formula is: E = (O + 4M + P) ÷ 6, where O is the optimistic (best-case) duration, M is the most likely duration, and P is the pessimistic (worst-case) duration. The most likely estimate receives four times the weight of the optimistic and pessimistic values. The standard deviation is calculated as SD = (P − O) ÷ 6, which measures the uncertainty in the estimate. The variance is SD² = ((P − O) ÷ 6)². For example, if a task has O = 3 days, M = 5 days, P = 10 days: E = (3 + 20 + 10) ÷ 6 = 5.5 days, and SD = (10 − 3) ÷ 6 = 1.17 days.
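The worked example can be reproduced in a few lines (a minimal sketch of the formula, not any official calculator):

```python
def pert(o, m, p):
    """Three-point (PERT) estimate: weighted expected value and standard deviation."""
    expected = (o + 4 * m + p) / 6   # most likely weighted 4x
    sd = (p - o) / 6                 # spread of the three-point range
    return expected, sd

e, sd = pert(3, 5, 10)  # worked example from the answer above
print(f"E = {e:.2f} days, SD = {sd:.2f} days")  # E = 5.50 days, SD = 1.17 days
```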
What are story points in Agile?

Story points are a unit of measure used in Agile project management to express the relative effort required to implement a user story. Unlike time-based estimates (hours or days), story points capture effort, complexity and uncertainty together in a single abstract number. They are team-relative — a 5-point story for one team may be a 3-point story for another team. Story points typically use the Fibonacci sequence (1, 2, 3, 5, 8, 13, 21) because the increasing gaps reflect genuine uncertainty in larger work items. The primary use of story points is sprint planning (what can the team commit to?) and velocity-based forecasting (how many sprints will the remaining backlog take at the team's current velocity?).
What is the difference between analogous and parametric estimating?

Analogous estimating uses the overall cost or duration of a previous similar project as a starting point, then adjusts for known differences. It is a top-down technique that requires expert judgement to apply appropriately. Parametric estimating uses a statistical unit rate — a measured relationship between a project variable and its cost or duration — to calculate an estimate mathematically. For example, $150 per square metre of construction, or 2 days per software module. Parametric estimating is more precise than analogous estimating when reliable unit rate data is available, because it is based on measured relationships rather than holistic judgement. Analogous estimating is faster and can be used earlier in the project lifecycle when unit-level data is not yet available.
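The contrast between the two techniques is simple enough to sketch (all figures illustrative, and the helper names are hypothetical):

```python
def analogous_estimate(historical_cost, adjustment_factor):
    """Top-down: a prior similar project's total, scaled by expert judgement."""
    return historical_cost * adjustment_factor

def parametric_estimate(unit_rate, units):
    """Unit-rate based: a measured rate multiplied by the quantity of units."""
    return unit_rate * units

# Analogous: last year's build cost $50,000; this one is judged ~20% larger.
print(analogous_estimate(50_000, 1.2))   # 60000.0

# Parametric: $150 per square metre, 400 m² of construction.
print(parametric_estimate(150, 400))     # 60000
```

Both arrive at a number, but the parametric figure rests on a measured rate while the analogous figure rests on a judgement call about similarity.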
What is the difference between contingency reserve and management reserve?

Contingency reserve is budget set aside for identified risks — specific events in the risk register that have a defined probability and impact and a planned response. It is part of the cost baseline and can be used by the project manager within defined thresholds without external approval. Management reserve is budget set aside for unknown unknowns — unidentified risks that cannot be planned for but that project complexity and experience suggest will arise. Management reserve sits outside the cost baseline and requires senior management or sponsor approval to access. In EVM reporting, cost performance is measured against the cost baseline (which includes contingency but not management reserve). Spending management reserve appears as a change to the performance measurement baseline.
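The budget build-up implied by these definitions can be sketched in a few lines (all figures are assumed for illustration):

```python
# Bottom-up cost estimates for three work packages (illustrative figures).
work_package_estimates = [40_000, 25_000, 35_000]

contingency_reserve = 12_000  # for identified risks; sits inside the cost baseline
management_reserve = 10_000   # for unknown unknowns; sits outside the baseline

# EVM cost performance is measured against the cost baseline.
cost_baseline = sum(work_package_estimates) + contingency_reserve

# The total project budget adds management reserve on top of the baseline;
# accessing it requires sponsor approval and changes the baseline.
total_budget = cost_baseline + management_reserve

print(cost_baseline, total_budget)  # 112000 122000
```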
How does planning poker prevent anchoring bias?

Planning poker prevents anchoring through simultaneous reveal — all team members commit to their estimate privately and reveal their cards at the same moment. This means no estimator sees another's estimate before committing their own. Anchoring occurs when the first number heard becomes a reference point that subsequent estimates cluster toward, regardless of independent assessment. By forcing independent commitment before revelation, planning poker ensures that each team member's estimate reflects their own analysis of the story. Disagreements that surface after simultaneous reveal are then discussed — the highest and lowest estimators explain their reasoning, which often reveals different understandings of scope or complexity that need to be resolved. The discussion improves the quality of the estimate and the shared understanding of the work.
Which estimation technique is the most accurate?

In predictive projects, bottom-up estimating produces the most accurate absolute estimates because it requires the most detailed understanding of scope — each work package is estimated individually based on specific task knowledge. Monte Carlo simulation produces the most sophisticated probabilistic estimates by modelling uncertainty across thousands of scenarios. In Agile projects, velocity-based forecasting becomes increasingly accurate as a team matures and its velocity stabilises across sprints. However, "most accurate" depends entirely on when the estimate is made — no technique can produce a definitive estimate from early-stage, undefined requirements. The most accurate technique is always the most appropriate one for the current stage of planning: analogous at concept stage, bottom-up at detailed planning stage, velocity-based once the team has established patterns of delivery.
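A Monte Carlo schedule simulation is easy to sketch. The snippet below (an illustrative simplification, assuming the three tasks sit on a single critical path) samples each task from a triangular distribution built from its (O, M, P) estimates and reports P50 and P80 completion dates; the task data and percentile choices are made up for the example:

```python
import random

def monte_carlo_duration(tasks, trials=10_000, seed=42):
    """Simulate total duration for tasks on one critical path.

    tasks: list of (optimistic, most_likely, pessimistic) durations in days.
    Each trial samples every task from a triangular distribution and sums them.
    Returns the 50th and 80th percentile total durations.
    """
    rng = random.Random(seed)
    totals = sorted(
        # random.triangular takes (low, high, mode)
        sum(rng.triangular(o, p, m) for o, m, p in tasks)
        for _ in range(trials)
    )
    p50 = totals[int(trials * 0.50)]
    p80 = totals[int(trials * 0.80)]
    return p50, p80

# Illustrative three-task path with (O, M, P) estimates in days.
tasks = [(3, 5, 10), (2, 4, 8), (5, 7, 12)]
p50, p80 = monte_carlo_duration(tasks)
print(f"P50: {p50:.1f} days, P80: {p80:.1f} days")
```

The gap between P50 and P80 is the useful output: it quantifies how much schedule buffer a given confidence level requires, rather than offering a single deterministic date.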
What is rolling wave planning?

Rolling wave planning is an iterative planning approach that plans in detail for the near term and at a high level for the future. As time passes and more information becomes available, detailed plans are created for the next phase of work while future phases remain at a higher level of abstraction. It acknowledges that detailed estimation is only possible when requirements are sufficiently understood — trying to plan in detail 6 months ahead on a complex project produces false precision rather than useful plans. Rolling wave planning is the standard approach in Agile (where sprint-level planning is detailed and release-level planning is high level) and is also applicable to predictive projects with evolving scope. It is recognised in PMBOK as a valid planning approach and appears in the Schedule Management knowledge area.