Why Most Maintenance Dashboards Fail to Drive Improvement
The maintenance dashboard is the most commonly requested and least effectively used management tool in manufacturing operations.
Every CMMS produces reports.
Every OEE platform produces dashboards.
Every BI tool connected to manufacturing data produces visualizations.
And yet the operations that consistently improve maintenance performance are not the ones with the most sophisticated dashboards.
They are the ones where the right people are looking at the right numbers at the right frequency and making specific decisions based on what those numbers show.
A dashboard fails to drive improvement when any of three conditions is present.
It shows lagging indicators almost exclusively — metrics that confirm what already happened rather than signals that predict what is about to happen.
It shows numbers without targets — so a maintenance manager looking at a 74-minute average MTTR has no reference for whether that is acceptable, concerning, or critical.
It is reviewed at the wrong cadence — a weekly dashboard for a daily operational metric is 5 to 7 days late for the decisions it is supposed to inform.
This guide builds a dashboard that avoids all three failure modes.
The Eight KPIs That Belong on a Manufacturing Maintenance Dashboard
A manufacturing maintenance dashboard should contain exactly as many KPIs as the management team will actively use to make decisions — and no more.
Every additional KPI that nobody acts on reduces the signal-to-noise ratio and trains the team to scan past dashboard elements rather than engage with them.
Eight KPIs is the practical maximum for a functional maintenance dashboard.
Four lagging indicators confirm whether the maintenance program produced the outcomes it was designed to produce.
Four leading indicators predict whether the program will continue producing those outcomes — or whether a course correction is needed before performance degrades.
The Four Lagging Indicators
KPI 1: Overall Equipment Effectiveness (OEE)
What it measures: The percentage of scheduled production time that is truly productive — combining Availability, Performance, and Quality.
Why it belongs on the dashboard: OEE is the ultimate output metric for manufacturing maintenance. Every maintenance decision that prevents an unplanned failure, reduces changeover time, or improves asset condition eventually shows up in OEE. It is the metric that connects maintenance performance to business performance.
Target setting: Use the industry-specific benchmark ranges from the OEE benchmarks guide as a starting reference. Set a 12-month improvement target based on the gap between current performance and the benchmark range for your vertical.
Review cadence: Shift-level for production supervisors, weekly trend for maintenance managers, monthly trend for operations directors.
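As a worked illustration, the three-factor calculation reduces to a few lines. The `oee` helper and the sample factor values below are illustrative, not taken from any specific platform:

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """OEE is the product of the three factors, each expressed as 0..1."""
    return availability * performance * quality

# Example: 90% availability, 95% performance, 99% quality
score = oee(0.90, 0.95, 0.99)  # ~0.846, i.e. roughly 84.6% OEE
```

Note that the multiplication punishes weakness in any single factor: three individually respectable percentages still compound down to a mid-80s score.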
KPI 2: Mean Time Between Failures (MTBF)
What it measures: The average operating time between significant unplanned failure events for each asset class.
Why it belongs: MTBF measures failure prevention — whether the maintenance program is keeping assets running between failures for longer periods over time. A rising MTBF trend on Tier 1 assets confirms that PM improvements and condition-based maintenance are preventing the failures they were designed to prevent.
Target setting: Set MTBF targets per asset class based on manufacturer design life, historical failure frequency, and the improvement trajectory that the PM program redesign is expected to deliver.
Review cadence: Monthly — MTBF requires sufficient time data to be meaningful. Weekly MTBF figures are too noisy to be directionally reliable.
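The calculation behind this KPI can be sketched as follows. This simplified version averages the gaps between failure timestamps and ignores repair downtime; the `mtbf_hours` helper and the sample dates are hypothetical:

```python
from datetime import datetime

def mtbf_hours(failure_times: list[datetime]) -> float:
    """Average operating hours between consecutive significant failures.
    Simplification: treats the full gap as operating time, ignoring
    the repair downtime that follows each failure."""
    if len(failure_times) < 2:
        raise ValueError("need at least two failure events")
    ordered = sorted(failure_times)
    gaps = [(b - a).total_seconds() / 3600
            for a, b in zip(ordered, ordered[1:])]
    return sum(gaps) / len(gaps)

events = [datetime(2024, 1, 1), datetime(2024, 1, 11), datetime(2024, 1, 31)]
avg = mtbf_hours(events)  # gaps of 240 h and 480 h average to 360 h
```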
KPI 3: Mean Time To Repair (MTTR)
What it measures: The average elapsed time between a fault occurring and the asset returning to full operational status.
Why it belongs: MTTR measures maintenance responsiveness — whether the team is closing the gap between fault occurrence and operational recovery efficiently. A declining MTTR trend confirms that information quality, parts staging, and dispatch improvements are producing faster fault-to-fix cycles.
Target setting: Set MTTR targets per asset class based on the world-class benchmarks for comparable equipment types and the improvement trajectory from the MTTR reduction program.
Review cadence: Weekly for the maintenance manager, monthly trend for the operations director.
KPI 4: Planned-to-Reactive Maintenance Ratio
What it measures: The percentage split between planned preventive maintenance hours and reactive corrective maintenance hours.
Why it belongs: This ratio is the single most reliable indicator of maintenance program maturity. A high reactive ratio — above 50% — predicts future OEE underperformance and maintenance cost inflation more reliably than almost any other maintenance metric. A rising planned ratio confirms that the PM program is maturing and the reactive premium is declining.
Target setting: 70% planned and 30% reactive is a realistic 12-month target for operations currently running 40 to 50% planned. World-class is above 80% planned.
Review cadence: Monthly — the planned-to-reactive ratio changes slowly and requires 30-day aggregation to show meaningful trends.
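The ratio itself is a simple split of recorded labor hours; a minimal sketch with illustrative numbers:

```python
def planned_ratio(planned_hours: float, reactive_hours: float) -> float:
    """Fraction of total maintenance hours spent on planned work."""
    total = planned_hours + reactive_hours
    if total == 0:
        raise ValueError("no maintenance hours recorded")
    return planned_hours / total

# 320 planned hours vs 180 reactive hours over a month
ratio = planned_ratio(320, 180)  # 0.64 -> 64% planned, below the 70% target
```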
The Four Leading Indicators
KPI 5: PM Compliance Rate
What it measures: The percentage of scheduled PM work orders completed within the defined window — not just completed at some point, but completed on time.
Why it is a leading indicator: PM compliance today predicts MTBF and OEE performance in 30 to 90 days. A PM compliance decline — particularly on Tier 1 assets — is a leading indicator of increased unplanned failure frequency in the near term.
Target setting: 85% is the minimum acceptable threshold. Below 85% on Tier 1 assets requires immediate investigation and corrective action. Above 95% is achievable and should be the target for mature programs.
Review cadence: Weekly — PM compliance is one of the most actionable leading indicators because the corrective response — understanding why PMs were deferred and resolving the constraint — can be initiated the same week the deficit appears.
KPI 6: Condition-Based Trigger Response Time
What it measures: The average elapsed time between a condition-based PM trigger firing and the corresponding maintenance intervention being completed.
Why it is a leading indicator: Condition-based triggers exist because the asset has entered the P-F interval — the window between detectable degradation and functional failure. Response time within that window determines whether the condition-based program prevents failures or simply provides early warning of failures that happen anyway. A rising response time indicates that the action gap is opening — condition signals are being detected but not acted on quickly enough to stay within the P-F interval.
Target setting: Response time targets should be set per asset class based on the P-F interval for the dominant failure mode. A short P-F interval asset requires a faster response than a long P-F interval asset.
Review cadence: Weekly — condition trigger response time is directly actionable and should be reviewed frequently enough to identify response time drift before it produces failures.
KPI 7: Maintenance Backlog Hours
What it measures: The total hours of identified, approved, but not yet completed maintenance work in the work order queue — expressed as a multiple of available weekly maintenance capacity.
Why it is a leading indicator: A growing backlog predicts future PM compliance decline and MTTR extension — because a team running above capacity will defer planned work and take longer to respond to reactive work. A backlog above four weeks of available capacity is a leading indicator of impending maintenance performance degradation.
Target setting: Two to three weeks of available capacity is a healthy backlog. Below two weeks suggests under-investment in PM identification. Above four weeks requires immediate capacity or scope assessment.
Review cadence: Weekly — backlog growth is one of the earliest detectable signals of a maintenance capacity problem.
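Expressed as code, with hypothetical numbers:

```python
def backlog_weeks(open_work_order_hours: float,
                  weekly_capacity_hours: float) -> float:
    """Backlog expressed as a multiple of one week's available capacity."""
    return open_work_order_hours / weekly_capacity_hours

# 880 open hours against 200 available hours per week
weeks = backlog_weeks(880, 200)  # 4.4 weeks -> above the four-week threshold
```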
KPI 8: First-Time Fix Rate
What it measures: The percentage of significant unplanned repairs that are resolved completely in a single visit — without requiring a return visit within 24 hours to complete the repair or address a related failure.
Why it is a leading indicator: First-time fix rate predicts MTTR trajectory and maintenance labor efficiency. A declining first-time fix rate indicates deteriorating information quality at the work order level — technicians are arriving without the parts or knowledge needed for complete repairs, producing return visits that inflate both MTTR and maintenance labor cost.
Target setting: 75% is a realistic initial target. World-class operations achieve above 85%. Below 60% indicates a significant information architecture problem.
Review cadence: Weekly — first-time fix rate is actionable at the individual work order level and should be reviewed frequently enough to identify work order content gaps as they emerge.
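One possible implementation of the 24-hour return-visit rule, assuming repairs can be matched by asset. The helper name and sample data are illustrative, and a production version would use explicit work order linkage rather than timestamp proximity:

```python
from datetime import datetime, timedelta

def first_time_fix_rate(repairs: list[tuple[str, datetime]]) -> float:
    """repairs: (asset_id, completion_time) for significant unplanned repairs.
    A repair fails first-time-fix if the same asset needs another repair
    within 24 hours of that completion. Simplification: the return visit
    itself is still scored on its own merits."""
    window = timedelta(hours=24)
    ordered = sorted(repairs, key=lambda r: r[1])
    fixed = 0
    for i, (asset, done) in enumerate(ordered):
        revisit = any(a == asset and done < t <= done + window
                      for a, t in ordered[i + 1:])
        if not revisit:
            fixed += 1
    return fixed / len(ordered)

repairs = [
    ("press-1", datetime(2024, 4, 1, 8)),    # revisited 12 h later -> not fixed
    ("press-1", datetime(2024, 4, 1, 20)),
    ("mixer-2", datetime(2024, 4, 2, 9)),
]
rate = first_time_fix_rate(repairs)  # 2 of 3 -> ~67%
```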
How to Structure the Dashboard by Audience
Different audiences need different views of the same underlying data.
Presenting a shift-level OEE breakdown to the board is the same mistake as presenting a monthly trend chart to a production supervisor who needs to manage today's performance.
The Operational Dashboard — for technicians and production supervisors
Cadence: Real-time or shift-level.
Content: Current OEE by line. Open work orders by priority. Overdue PMs. Active condition alerts. MTTR for the current shift.
Format: Visual, at-a-glance, action-oriented. The operational dashboard should tell the supervisor what needs attention right now — not what happened last month.
The Management Dashboard — for maintenance managers and plant managers
Cadence: Weekly.
Content: All eight KPIs against their targets and prior-week comparison. Trend direction for each KPI. Top five Bad Actor assets by downtime contribution this week. PM compliance rate by asset class. Backlog hours against capacity.
Format: Trend lines rather than single data points. Traffic light status against target — green above target, amber approaching threshold, red below threshold. Every amber and red status should link to a specific action.
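The traffic-light logic reduces to a simple threshold function. The 5% amber band and the helper itself are illustrative assumptions, not a standard:

```python
def status(value: float, target: float, amber_band: float = 0.05) -> str:
    """Traffic-light status for a higher-is-better KPI.
    Green at or above target; amber within `amber_band` (as a fraction
    of target) below it; red below that. Lower-is-better KPIs such as
    MTTR need the comparisons mirrored."""
    if value >= target:
        return "green"
    if value >= target * (1 - amber_band):
        return "amber"
    return "red"

# PM compliance against an 0.85 target:
# 0.87 -> green, 0.83 -> amber (within 5% of target), 0.70 -> red
```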
The Executive Dashboard — for operations directors and plant directors
Cadence: Monthly.
Content: OEE trend versus target. Planned-to-reactive ratio trend. Maintenance cost per unit trend. Top three maintenance improvement initiatives and their progress against milestones.
Format: Financial translation — OEE expressed as recovered production value, the planned-to-reactive ratio as reactive premium cost, and MTTR reduction as production value recovered through faster repairs. The executive dashboard should answer one question: is the maintenance program delivering measurable financial improvement?
The Target-Setting Process
A KPI without a target is a number.
A target without a threshold that triggers a specific management response is a target in name only.
The target-setting process for each KPI has three steps.
Step 1: Establish the current baseline.
Pull the last 90 days of data for each KPI. Calculate the average. This is the baseline — the starting point that all improvement is measured against.
Step 2: Set the 12-month improvement target.
Use the benchmark ranges in the OEE benchmarks guide and the world-class benchmarks cited for each KPI above as reference points. Set a target that is ambitious but achievable — typically 15 to 25% improvement from baseline for each KPI, within 12 months.
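The first two steps reduce to simple arithmetic; a sketch assuming a 20% improvement factor, with direction handled explicitly for lower-is-better KPIs such as MTTR:

```python
def twelve_month_target(baseline: float, improvement: float = 0.20,
                        higher_is_better: bool = True) -> float:
    """Apply a 15-25% improvement (20% shown) to a 90-day baseline.
    For lower-is-better KPIs, improvement means a reduction."""
    factor = 1 + improvement if higher_is_better else 1 - improvement
    return baseline * factor

oee_target = twelve_month_target(0.62)                          # 0.62 -> 0.744
mttr_target = twelve_month_target(74, higher_is_better=False)   # 74 min -> 59.2
```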
Step 3: Define the response threshold.
For each KPI, define the specific value below which a management response is required — not just noted, but responded to with a specific action within a defined timeframe.
PM compliance below 85% on Tier 1 assets — maintenance manager investigates root cause within 48 hours.
MTTR exceeding 120% of target for three consecutive weeks — maintenance manager reviews work order content quality and parts availability for affected asset class.
Backlog above four weeks of available capacity — maintenance manager presents capacity assessment and priority triage to plant manager within one week.
Without response thresholds, the dashboard is a reporting tool.
With response thresholds, it is a management tool.
Building the Dashboard: Practical Data Requirements
The dashboard is only as good as the data feeding it.
Before designing the visual presentation, confirm that the underlying data exists and is reliable enough to support each KPI.
OEE: Requires machine-connected production data — cycle times, stop events, quality rejection counts — from PLC or IoT connectivity. Operator-reported OEE is insufficiently accurate for KPI tracking purposes.
MTBF: Requires accurate work order timestamps — specifically, the time at which the asset last failed and the time of the current failure. Requires consistent failure categorization so that only significant unplanned failures are counted.
MTTR: Requires accurate work order open and close timestamps — captured at the machine at the time of the events, not estimated or batch-entered retrospectively.
Planned-to-reactive ratio: Requires consistent work order classification — every work order categorized as planned or reactive at creation, not retrospectively.
PM compliance: Requires PM work orders with defined completion windows — a PM without a scheduled due date cannot be assessed for on-time completion.
Condition trigger response time: Requires automatic work order generation timestamps from condition-based triggers and completion timestamps from the responding work order.
Backlog hours: Requires labor time estimates on every open work order — without estimates, backlog hours cannot be calculated.
First-time fix rate: Requires work order linkage between initial repairs and return visits — a capability most CMMS platforms provide only after explicit configuration.
The most common dashboard building failure is attempting to display KPIs before confirming the data quality required to calculate them accurately.
A dashboard showing inaccurate KPIs is worse than no dashboard — because it trains the management team to distrust the data and disregard the dashboard.
Build the data quality foundation first.
Build the dashboard second.
Frequently Asked Questions
How many KPIs should a maintenance dashboard contain?
Eight is the practical maximum for a maintenance management dashboard.
Beyond eight KPIs, the cognitive load of reviewing the dashboard increases to the point where the management team scans rather than engages — defeating the purpose of the dashboard.
If more than eight KPIs seem important, the right response is to build separate dashboards for different audiences rather than adding more KPIs to a single view.
What is the most important single KPI for manufacturing maintenance?
PM compliance rate is the most actionable single KPI for most manufacturing operations — because it is a leading indicator of future performance, it is directly controllable by the maintenance team, and it changes week to week at a pace that allows rapid course correction.
OEE is the most important single KPI for connecting maintenance performance to business outcomes — but it is a lagging indicator that reflects decisions made weeks or months ago, making it less actionable on a weekly basis.
How do we build a maintenance dashboard if we do not yet have a CMMS?
A spreadsheet-based dashboard is a legitimate starting point for operations that do not yet have a CMMS.
The data collection process — manually recording work order timestamps, PM completion dates, and failure events — is time-consuming but produces the baseline data that confirms the value of a CMMS investment and defines the specific data requirements for the platform evaluation.
The spreadsheet dashboard also reveals, quickly and clearly, which data elements are most difficult to capture manually — and those are the elements that drive the most urgent CMMS requirements.
The dashboard that drives improvement is not the one with the most metrics — it is the one where every number has a target, every target has a response threshold, and every threshold is connected to a specific management action that happens within a defined timeframe. Build that dashboard and the improvement follows from the discipline it creates.