Pro Tips
Jan 3, 2026
Industrial safety leaders already know the uncomfortable truth about root cause analysis (RCA): you cannot do it for everything. Full investigations take time, coordination, and expertise, and real operations do not pause so we can run perfect analyses.
Based on our research, organizations report performing full RCAs on roughly 1% to 2% of incidents; publicly available sources report similarly low rates.
High-risk operations generate a constant stream of events: near misses, first aid cases, equipment damage, process upsets, environmental releases, quality escapes, and the occasional high potential precursor that could have been catastrophic. If you try to do a full RCA on everything, the system collapses under its own weight. If you only RCA the top tier, you risk missing the weak signals and repeating the same event pattern until it finally produces a serious injury or fatality.
If you want fewer incidents next quarter and next year, should you push for more RCAs, or better RCAs?
Our review of the evidence and research suggests a clear direction: both matter, but RCA quality is usually the higher-leverage variable once you have baseline coverage.
The good news is that with modern AI you do not have to choose one or the other.
In this two-part article, we will first examine what we learned from research on the topic, then propose a framework for measuring RCA quality and discuss how AI platforms like Haven Safety AI can be a force multiplier in the process.
RCAs only reduce incidents when they convert into stronger controls
An RCA is not valuable because a report exists. It is valuable only when it reliably produces prevention, which depends on four links:
Evidence capture quality (facts, conditions, timeline, context)
Causal reasoning quality (system contributors, not just the last unsafe act)
Corrective action strength (controls that actually reduce exposure)
Implementation and verification (did the fix happen, and did it work)
When any link in that chain is weak, recurrence is predictable.
This is why “more RCAs” can feel like progress but does not always show up in incident trends. Quantity increases opportunities to learn. Quality determines the conversion rate from investigation effort into durable risk reduction.
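The conversion-rate framing above can be made concrete with a back-of-envelope sketch. All numbers below are illustrative assumptions, not measurements: the point is only that expected risk reduction scales with both the number of investigations and the rate at which each one converts into an effective, verified control.

```python
def expected_prevented(n_rcas: int, conversion_rate: float) -> float:
    """Expected number of recurrence patterns actually closed out:
    investigations performed times the fraction that convert into
    durable, verified controls."""
    return n_rcas * conversion_rate

# Scenario A: double the RCA count at the same (weak) conversion rate.
more_rcas = expected_prevented(n_rcas=40, conversion_rate=0.2)

# Scenario B: half the count, but quality work raises the conversion rate.
better_rcas = expected_prevented(n_rcas=20, conversion_rate=0.5)

print(more_rcas, better_rcas)  # 8.0 10.0
```

Under these assumed numbers, the smaller but higher-quality program prevents more recurrences, which is the leverage argument in miniature.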
What the industrial safety evidence actually shows
Direct, quantitative “RCA quality vs incident outcomes” research in industrial settings is not massive, but there are meaningful signals.
1) Investigation quality and maturity correlate with better safety performance in mining
In Ghanaian large-scale gold mines, researchers assessed the quality of incident investigation reports using a structured tool, then examined how those quality elements related to injury incidence rates. They found mines differed significantly in report quality, and injury incidence rates were negatively correlated with some elements of the quality assessment. (Springer)
A related study developed an incident investigation maturity framework (based on literature and investigator interviews across multiple mines) to characterize what “mature investigations” look like in practice, explicitly because linking investigation practice to safety performance had been understudied. (MDPI)
Takeaway: even in a high-variability operational environment, “how well investigations are done” shows a meaningful association with safety performance.
2) Program characteristics tied to investigation content and participation matter more than speed
Wachter and Yorio surveyed 300+ establishments about their accident investigation programs and explored relationships with injury and illness outcomes. Their findings are especially relevant to the “quality vs quantity” debate:
They observed that time-to-initiate an investigation appeared less important than the content focus of the investigation and who conducts it.
Programs with a focus on human error and a team or employee-based approach were often associated with lower injury and illness rates (with the important caveat that investigation characteristics explained only a modest portion of variance in accident rates). (ResearchGate)
Takeaway: the parts of an investigation program that look like “quality inputs” (who participates, what the investigation examines) are strongly implicated, while a pure throughput metric (speed) is not the main story.
3) “Learning from incidents” depends on the full cycle, not just the analysis
Industrial safety organizations often put their best people on causal analysis, then underinvest in the downstream work: decision-making, implementation, and follow-up.
In process industry research, Jacobsson, Ek, and Akselsson describe a learning cycle that includes reporting, analysis, decision, implementation, and follow-up, and propose ways to assess effectiveness across all steps, not just the analysis step. (ScienceDirect)
Drupsteen, Groeneweg, and Zwetsloot similarly argue that organizations frequently fail to learn from past events and present a model to identify bottlenecks. In their work, learning potential was limited, especially in the evaluation stage, and improvement requires attention to the full set of steps in the learning process. (PubMed)
Takeaway: Even excellent analysis will not reduce incident volume if actions are weak, late, unverified, or not institutionalized.
Adjacent evidence reinforces the mechanism: action strength and sustainability are the problem
Even though our focus here is industrial safety, healthcare has a larger body of “RCA output quality” research that helps validate the mechanism we care about: recommendations and follow-through.
A systematic review concluded that RCA can be useful for identifying contributing causes, but that translating RCA into effective recurrence prevention is not consistently achieved. (PMC)
An observational study of RCAs coded recommendation strength and examined perceived effectiveness and sustainability, highlighting that recommendation quality varies and is measurable. (OUP Academic)
You do not need to assume healthcare is “the same” as industrial safety to learn from this. The commonality is structural: investigations generate recommendations, and recommendations only reduce recurrence when they are strong, implemented, and sustained.
What “RCA quality” means in industrial safety (a practical definition)
A “high-quality RCA” is not about producing a longer report. It is about producing a more defensible causal model and stronger controls. Quality is inherently multi-dimensional. High-quality RCAs consistently deliver:
Clear understanding of what happened: The event is well defined, and the analysis boundaries include system contributors (equipment, procedures, supervision, maintenance, contractor interfaces).
Evidence and analysis rigor: Evidence is traceable; timeline is credible; gaps are explicitly flagged; reasoning is structured (not vibes).
Systems and human factors focus: The analysis goes beyond “worker error” to include design, workload, supervision, training systems, and management system weaknesses.
Strong corrective actions: Controls align with the hierarchy of controls (engineering and elimination when feasible, not defaulting to retraining and communication).
Implementation plan and accountability: Owners, due dates, resources, and dependencies are explicit.
Verification and learning: The fix is verified for effectiveness; recurrence is monitored; lessons are embedded into standards and training.
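The six criteria above lend themselves to a simple rubric. Here is a minimal sketch of a multi-dimensional quality score, assuming a 0–2 rating per dimension (0 = absent, 1 = partial, 2 = strong). The dimension names mirror the criteria above; the equal weighting and 0–100 scale are illustrative assumptions, not a validated instrument.

```python
# Six quality dimensions, mirroring the criteria listed above.
DIMENSIONS = [
    "event_definition",
    "evidence_rigor",
    "systems_focus",
    "action_strength",
    "implementation_plan",
    "verification",
]

def rca_quality_score(ratings: dict[str, int]) -> float:
    """Sum the 0-2 ratings across all dimensions and normalize to 0-100."""
    missing = [d for d in DIMENSIONS if d not in ratings]
    if missing:
        raise ValueError(f"missing ratings for: {missing}")
    total = sum(ratings[d] for d in DIMENSIONS)
    return 100 * total / (2 * len(DIMENSIONS))

# Example: a report strong on analysis but with no verification step.
example = {
    "event_definition": 2,
    "evidence_rigor": 2,
    "systems_focus": 1,
    "action_strength": 1,
    "implementation_plan": 2,
    "verification": 0,
}
print(rca_quality_score(example))  # 8 of 12 points
```

A rubric like this makes quality comparable across investigations, which is a prerequisite for the kind of quantitative KPI discussed in Part 2.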
In this first part, we established that RCA quality is usually the higher-leverage variable once baseline coverage is in place, referenced empirical industry research supporting that argument, and offered a practical definition of RCA quality.
In Part 2, we will introduce a quantitative KPI for measuring quality and show how, with AI support, you don’t have to choose between quality and quantity. You can get the best of both worlds.
Experience how AI-powered safety intelligence can transform your workplace. Book a demo to see our platform in action.