Defect leakage is the percentage of software defects that escape QA testing and reach production — where they impact real users, create support costs, and erode customer trust.
A leakage rate of 5% or below is a realistic target for a mature QA process; most teams without structured QA operate at 15–25% leakage. The difference in business cost between those two numbers is significant: support ticket volume, engineering incident response, customer churn and, for fintech and healthtech teams, regulatory exposure.
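The calculation behind that percentage is simple. As a minimal sketch (the `leakage_rate` function name and the sample defect counts are illustrative, not client data):

```python
def leakage_rate(production_defects: int, pre_production_defects: int) -> float:
    """Percentage of all defects found that escaped to production."""
    total = production_defects + pre_production_defects
    if total == 0:
        return 0.0
    return 100 * production_defects / total

# Hypothetical quarter: 12 defects escaped, 55 were caught before release.
print(round(leakage_rate(12, 55), 1))  # → 17.9
```

A team in that position is squarely in the "no structured QA" band, even if every automated test is green.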
Why defect leakage is the QA metric that matters most
Test pass rate tells you how many tests passed. Defect leakage tells you how many real problems reached your users. A team can have a 100% automated test pass rate and still suffer high defect leakage if the tests do not cover the right paths.
Tracking leakage forces the right question: not "did we pass our tests?" but "what reached production that should not have?" That question leads to better test design, better coverage decisions, and better release gates.
Root causes of high defect leakage
- Insufficient regression coverage. The automation suite covers happy paths but not the edge cases where defects actually cluster. Every production incident should be traced back to a missing test case — and that test case should be added immediately.
- No risk-based test prioritisation. Treating all test cases as equally important means high-risk paths get the same attention as low-risk ones. When time is constrained, critical paths should get more coverage, not less.
- Flaky automation masking real failures. A suite with 20% flaky tests trains the team to ignore red builds. Real regressions are missed because "it's probably just the flaky login test again." Flakiness directly causes leakage.
- Insufficient test data diversity. Tests that run with a single data profile miss defects that only surface with specific data combinations — edge-case amounts, unusual characters in user inputs, empty states, maximum field lengths.
- No UAT before production. Automated tests validate that code does what it was written to do. UAT validates that what it does is correct from a business perspective. Skipping UAT means business logic defects that passed all technical tests reach production.
The 5-step framework to reduce leakage
1. Baseline your current leakage rate. Count production defects over the last 90 days and divide by the total defects found in the same period (production + pre-production). That percentage is your current leakage rate; you need a baseline to measure improvement.
2. Map every production defect to a test coverage gap. For each production defect in your baseline period, answer: what test should have caught this? This analysis tells you exactly where to add coverage.
3. Add regression tests for every production defect. Implement a policy: every defect that reached production gets a new automated test before the fix is merged. Over 6 months, this practice closes coverage gaps one defect at a time.
4. Implement risk-based prioritisation. Score test cases by revenue impact, historical defect frequency, and code complexity. Never skip payment, authentication, or data integrity tests, regardless of time pressure.
5. Add a release readiness gate before every deployment. No build reaches production without a documented sign-off checking: regression pass rate, open critical defects, performance baseline, and UAT status. See our release readiness checklist for the full 12-point process.
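The risk-based prioritisation step can be sketched as a simple weighted score. The weights, field names, and 1–5 scales below are illustrative assumptions to be tuned to your product's risk profile, not a prescribed formula:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    revenue_impact: int    # 1–5: business cost if this path breaks
    defect_frequency: int  # 1–5: how often defects have clustered here
    code_complexity: int   # 1–5: churn/complexity of the code under test
    critical_path: bool    # payment, auth, data integrity: never skipped

def risk_score(tc: TestCase) -> int:
    # Illustrative weights: revenue impact dominates, then history, then complexity.
    return 3 * tc.revenue_impact + 2 * tc.defect_frequency + tc.code_complexity

def prioritise(cases: list[TestCase]) -> list[TestCase]:
    # Critical-path tests always run first; the rest are ordered by risk score.
    critical = [tc for tc in cases if tc.critical_path]
    rest = sorted((tc for tc in cases if not tc.critical_path),
                  key=risk_score, reverse=True)
    return critical + rest
```

When the release window shrinks, you cut from the bottom of this ordering, so the critical and high-risk paths keep their coverage.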
What this looks like in practice
One of Assurix's fintech clients came to us with a defect leakage rate of approximately 18%. Over a 3-month embedded QA engagement, we implemented risk-based coverage, added regression tests for every historical production defect, and introduced CI/CD quality gates.
Leakage dropped to below 5% by the end of the engagement. Critical production incidents fell by 70%. Engineering time previously spent on incident response was redirected to feature development (Assurix client data, 2024, anonymised per NDA).
Frequently Asked Questions
What is an acceptable defect leakage rate?
Below 5% is a mature QA benchmark. For fintech and healthtech products where production defects have regulatory implications, below 2% on critical paths is a more appropriate target. The right number depends on your product's risk profile — but "whatever we currently have" is never the right answer.
How do you track defect leakage in Jira?
Create a custom field on bug tickets, "Defect origin", with the values "Pre-production" and "Production". Run a monthly report: production defects divided by total defects logged in the same period. Tag production defects with the sprint in which they were introduced; this lets you correlate leakage with specific releases and identify patterns.
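As a sketch of what that monthly report computes once the tickets are exported — the field values and sprint names below are illustrative sample data, not a Jira API integration:

```python
from collections import Counter

# Hypothetical export of one month's bug tickets:
# (defect_origin, sprint_introduced)
tickets = [
    ("Production", "Sprint 41"), ("Pre-production", "Sprint 41"),
    ("Pre-production", "Sprint 41"), ("Production", "Sprint 42"),
    ("Pre-production", "Sprint 42"), ("Pre-production", "Sprint 42"),
    ("Pre-production", "Sprint 42"),
]

production = [sprint for origin, sprint in tickets if origin == "Production"]
leakage = 100 * len(production) / len(tickets)
print(f"Monthly leakage: {leakage:.1f}%")  # 2 of 7 defects escaped

# Which sprints the escaped defects were introduced in:
print(Counter(production))
```

The sprint breakdown is what turns the metric into a diagnosis: a cluster of escaped defects from one sprint points at a specific release to investigate.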
Does more automation always mean less leakage?
Not automatically. Automation reduces leakage when it covers the right paths with reliable, non-flaky tests. Automation that covers low-risk paths with flaky tests can actually increase leakage by consuming QA capacity and creating noise that masks real failures. Coverage quality matters more than coverage quantity.
Is defect leakage impacting your releases? Talk to an Assurix QA lead about a baseline assessment — we will measure your current leakage rate and propose a structured reduction plan.