Scientific Schedule Estimation: From PERT to Monte Carlo
Source: Dev.to
PERT (Program Evaluation and Review Technique)
PERT was developed by the US Navy in the 1950s for the Polaris missile project and is credited with shortening its development by roughly two years.
```python
def pert_estimation(optimistic, realistic, pessimistic):
    """
    O: Optimistic (when everything goes perfectly)
    R: Realistic (normal case)
    P: Pessimistic (when everything goes wrong)
    """
    # PERT formula
    expected = (optimistic + 4 * realistic + pessimistic) / 6
    # Standard deviation (uncertainty)
    std_dev = (pessimistic - optimistic) / 6
    return {
        "expected": expected,
        "std_dev": std_dev,
        "range_68%": (expected - std_dev, expected + std_dev),
        "range_95%": (expected - 2 * std_dev, expected + 2 * std_dev)
    }

# Real example: login API development
result = pert_estimation(
    optimistic=4,    # Best case: 4 hours
    realistic=8,     # Normal case: 8 hours
    pessimistic=16   # Worst case: 16 hours
)
print(f"Expected: {result['expected']:.1f} hours")  # 8.7 hours
print(f"68% probability: {result['range_68%']}")    # (6.7, 10.7)
print(f"95% probability: {result['range_95%']}")    # (4.7, 12.7)
```
Why multiply by 4?
The formula approximates the mean of a beta distribution: the most likely (modal) estimate gets four times the weight of each extreme, so a single overly optimistic or pessimistic guess cannot dominate the result.
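To see the effect of that weighting, compare a plain three-point average with the PERT average for the login API example above:

```python
# Login API example: O=4, R=8, P=16 (hours)
optimistic, realistic, pessimistic = 4, 8, 16

# Unweighted average: the long pessimistic tail pulls the estimate up
simple_mean = (optimistic + realistic + pessimistic) / 3       # 9.33 hours

# PERT average: the realistic estimate carries 4x weight
pert_mean = (optimistic + 4 * realistic + pessimistic) / 6     # 8.67 hours

print(f"Simple average: {simple_mean:.2f} h")
print(f"PERT average:   {pert_mean:.2f} h")
```

The skewed worst case still raises the expected value above the realistic 8 hours, but far less than a naive average would.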
Planning Poker
A quick, team‑wide consensus technique:
- Everyone prepares cards (1, 2, 3, 5, 8, 13, 21, 34…).
- Cards are revealed simultaneously.
- If estimates differ widely, discuss the reasons.
- Reach a consensus estimate.
```python
fibonacci = [1, 2, 3, 5, 8, 13, 21, 34]
```
Psychological effect: the widening gaps between larger card values discourage false precision — big tasks are uncertain, so fine-grained distinctions between them are meaningless.
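One practical consequence: when the team settles on a raw number during discussion, it should still be snapped back onto a card. A minimal sketch (the `nearest_card` helper is illustrative, not a standard tool):

```python
def nearest_card(estimate, cards=(1, 2, 3, 5, 8, 13, 21, 34)):
    """Snap a raw estimate to the closest Planning Poker card value."""
    return min(cards, key=lambda c: abs(c - estimate))

print(nearest_card(6))   # 5
print(nearest_card(11))  # 13
```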
Monte Carlo Simulation
Monte Carlo uses random sampling to model project completion time.
```python
import random
import numpy as np

def monte_carlo_simulation(tasks, iterations=1000):
    """Simulate project completion time."""
    results = []
    for _ in range(iterations):
        total_time = 0
        for task in tasks:
            # Randomly draw an actual duration for each task
            actual = random.triangular(
                task['min'],    # low
                task['max'],    # high
                task['likely']  # mode
            )
            total_time += actual
        results.append(total_time)
    return {
        "mean": np.mean(results),
        "p50": np.percentile(results, 50),  # median
        "p90": np.percentile(results, 90),  # 90% probability
        "p95": np.percentile(results, 95)   # 95% probability
    }

# Project tasks
tasks = [
    {"name": "Design", "min": 2, "likely": 3, "max": 5},
    {"name": "Development", "min": 5, "likely": 8, "max": 15},
    {"name": "Testing", "min": 2, "likely": 3, "max": 6}
]

result = monte_carlo_simulation(tasks)
print(f"50% probability: complete within {result['p50']:.1f} days")
print(f"90% probability: complete within {result['p90']:.1f} days")
```
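The percentiles answer "how long for X% confidence?"; the same simulation can also answer the inverse question, "what is the chance we hit a given deadline?". A sketch using the same triangular task model (the `completion_probability` helper and the fixed seed are illustrative):

```python
import random

def completion_probability(tasks, deadline, iterations=10_000):
    """Estimate P(project finishes within `deadline`) via Monte Carlo."""
    random.seed(42)  # fixed seed only to make runs reproducible
    hits = 0
    for _ in range(iterations):
        total = sum(
            random.triangular(t["min"], t["max"], t["likely"]) for t in tasks
        )
        if total <= deadline:
            hits += 1
    return hits / iterations

tasks = [
    {"name": "Design", "min": 2, "likely": 3, "max": 5},
    {"name": "Development", "min": 5, "likely": 8, "max": 15},
    {"name": "Testing", "min": 2, "likely": 3, "max": 6}
]
print(f"P(done in 14 days) ≈ {completion_probability(tasks, 14):.0%}")
```

This turns deadline negotiations into probability statements instead of yes/no promises.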
Velocity‑Based Estimation
Leverages historical sprint velocity to forecast future work.
```python
import numpy as np

class VelocityEstimator:
    def __init__(self, past_sprints):
        self.velocities = past_sprints

    def estimate(self, total_points):
        avg_velocity = np.mean(self.velocities)
        std_velocity = np.std(self.velocities)
        sprints_needed = total_points / avg_velocity
        return {
            "expected_sprints": sprints_needed,
            "optimistic": total_points / (avg_velocity + std_velocity),
            "pessimistic": total_points / (avg_velocity - std_velocity)
        }

# Velocities from the past 10 sprints
past_velocities = [23, 28, 25, 30, 22, 27, 26, 24, 29, 26]
estimator = VelocityEstimator(past_velocities)
result = estimator.estimate(total_points=150)
print(f"Expected: {result['expected_sprints']:.1f} sprints")
print(f"Range: {result['optimistic']:.1f} ~ {result['pessimistic']:.1f} sprints")
```
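The ±1σ range assumes velocity is roughly normal. An alternative that makes no distributional assumption is to bootstrap: repeatedly replay the backlog using sprints drawn at random from history. A sketch (the `bootstrap_sprint_forecast` helper and the seed are illustrative):

```python
import random

def bootstrap_sprint_forecast(velocities, total_points, iterations=10_000):
    """Resample past velocities to estimate how many sprints the backlog takes."""
    random.seed(7)  # fixed seed only for reproducibility
    outcomes = []
    for _ in range(iterations):
        remaining, sprints = total_points, 0
        while remaining > 0:
            remaining -= random.choice(velocities)  # draw one plausible sprint
            sprints += 1
        outcomes.append(sprints)
    outcomes.sort()
    return {
        "p50": outcomes[len(outcomes) // 2],
        "p90": outcomes[int(len(outcomes) * 0.9)],
    }

past_velocities = [23, 28, 25, 30, 22, 27, 26, 24, 29, 26]
print(bootstrap_sprint_forecast(past_velocities, total_points=150))
```

Because sprint counts are whole numbers, the output is a discrete forecast ("6 sprints at p50, 7 at p90") rather than a fractional one.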
Expert Consensus (Wideband Delphi)
A structured, multi‑round estimation process:

1. Round 1 – Anonymous submission
   - Dev A: 10 days
   - Dev B: 5 days
   - Dev C: 15 days
2. Round 2 – Share reasoning & re‑estimate
   - A: “Considering DB migration…”
   - B: “Oh, I missed that.”
   - C: “Is test automation included?”
   - New estimates: 8, 9, and 10 days.
3. Round 3 – Consensus
   - Final estimate: 9 days.
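One way to decide whether another round is needed is to check how far the estimates still spread relative to their mean. A sketch with a hypothetical 25% tolerance (the function name and threshold are assumptions, not part of the Delphi method itself):

```python
import statistics

def needs_another_round(estimates, tolerance=0.25):
    """Another Delphi round is needed if spread exceeds `tolerance` of the mean."""
    spread = max(estimates) - min(estimates)
    return spread / statistics.mean(estimates) > tolerance

print(needs_another_round([10, 5, 15]))  # True  — Round 1 disagreement is wide
print(needs_another_round([8, 9, 10]))   # False — Round 2 has converged
```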
Recommendations
| Approach | When to Use |
|---|---|
| Planning Poker | Fast, easy team consensus |
| PERT + Velocity | Balanced accuracy & practicality |
| Monte Carlo + Wideband Delphi | High‑accuracy projects requiring risk analysis |
Adjusting Estimates with Similar Past Projects
```python
import numpy as np

similar_projects = [
    {"name": "Login System A", "estimated": 20, "actual": 35},
    {"name": "Login System B", "estimated": 15, "actual": 28},
    {"name": "Login System C", "estimated": 25, "actual": 40}
]

adjustment_factor = np.mean([p["actual"] / p["estimated"] for p in similar_projects])
# adjustment_factor ≈ 1.74

# Apply to a new raw estimate (e.g. 20 hours)
raw_estimate = 20
new_estimate = raw_estimate * adjustment_factor  # ≈ 34.8 hours
```
Sprint Estimation Retrospective
| Task | Estimated | Actual | Difference | Cause |
|---|---|---|---|---|
| API Dev | 8 h | 12 h | +4 h | Auth complexity |
| UI Impl | 6 h | 5 h | –1 h | Template reuse |
| Testing | 4 h | 8 h | +4 h | Edge cases |
Lesson: Auth and testing phases need roughly a 1.5× buffer.
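That lesson can be encoded directly so the buffer is applied consistently in the next sprint. A minimal sketch, assuming hypothetical per-category multipliers derived from the retrospective table:

```python
# Per-category buffers from the retrospective (1.0 = no buffer); values are illustrative
buffers = {"auth": 1.5, "testing": 1.5, "ui": 1.0}

def buffered_estimate(raw_hours, category):
    """Apply the historical buffer for a task category (defaults to no buffer)."""
    return raw_hours * buffers.get(category, 1.0)

print(buffered_estimate(8, "auth"))     # 12.0 — matches the observed overrun
print(buffered_estimate(6, "ui"))       # 6.0
```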
Closing Thoughts
The era of “gut feeling” estimation is over. Use scientific techniques:
- Calculate uncertainty with PERT.
- Estimate as a range, not a single number.
- Leverage past data (velocity, similar projects).
- Involve the whole team (Planning Poker, Wideband Delphi).
- Continuously refine your estimation process.
Accurate estimation builds trust and keeps projects on track.
Need scientific estimation and project management? Check out Plexo.