
Predictive Maintenance: From P-F Curve to ROI — The Honest 2026 Guide

By Martin Brandel · Last updated: April 2026

What is predictive maintenance?

Predictive maintenance (PdM) is the discipline of using continuously measured machine data to estimate when a specific component will fail, and then intervening just before that point — not earlier (wasteful), not later (unplanned stop). In a working implementation it combines three layers: sensors that observe the right physical signals, a data pipeline that gets those signals reliably into a system capable of analysing them, and a model — statistical or machine-learning based — that turns the signals into a usable forecast.

I have spent 35 years connecting machines to higher-level systems, starting with Simatic S5 in 1991 and ending up today with OPC UA, digital I/O gateways and cloud-based analytics. In that time I have watched "predictive maintenance" evolve from a research topic into a buzzword, then into a category of expensive consulting engagements, and finally into something that, in certain well-defined situations, genuinely works. The honest summary of where the industry stands in 2026: predictive maintenance delivers real value in perhaps 20–30 % of the places it is tried, wastes money in another 40–50 %, and in the remaining share quietly turns into something simpler — condition-based maintenance — which is usually what the plant actually needed.

This article is the long version of that reality. It covers what predictive maintenance actually is (and is not), how to judge whether it makes sense for a given machine, what the data foundations really look like, what the different ML approaches honestly deliver, and why most PdM programmes fail for reasons that have nothing to do with algorithms. There is no dedicated English blog post on the subject yet, so this glossary entry has to do the heavy lifting.

The maintenance strategy ladder — PdM is rung four of five

Predictive maintenance is not a stand-alone technology. It is the fourth step of a five-step maturity ladder, and skipping steps is the single most common reason PdM programmes underperform. Each rung has a distinct trigger, cost profile and organisational prerequisite. A plant that has not mastered rungs two and three cannot realistically benefit from rung four.

| Rung | Strategy | How it works | Trigger for action | Typical cost (indexed) | Realistic availability |
|------|----------|--------------|--------------------|------------------------|------------------------|
| 1 | Reactive | Run-to-failure | Machine stops | 100 (baseline, hidden) | 65–75 % |
| 2 | Preventive | Time- or cycle-based servicing | Calendar or counter | 75–85 | 80–88 % |
| 3 | Condition-based (CBM) | Threshold on a single measured value | Parameter crosses limit | 60–75 | 85–92 % |
| 4 | Predictive (PdM) | Forecast of remaining useful life | Predicted time-to-failure | 55–70 | 88–95 % |
| 5 | Prescriptive | Forecast + recommended action + resource scheduling | Closed-loop recommendation | 50–65 | 90–96 % |

The honest rule of thumb: moving from rung 1 to rung 2 delivers the largest single jump in availability, at low cost, for almost any plant. Moving from rung 2 to rung 3 delivers a smaller but still clear jump, for a growing set of equipment types. Moving from rung 3 to rung 4 is where diminishing returns set in — the extra availability is real but smaller, and the cost of getting it right is much higher. Rung 5 is largely still an aspiration in 2026 for most plants.

The P-F curve — the physical reason PdM works at all

Predictive maintenance does not work by magic. It works because most mechanical and electrical failures do not happen instantaneously. They follow a curve — the P-F curve — in which a potential failure (P) is detectable long before the functional failure (F) actually occurs. The gap between P and F is the window in which predictive maintenance can act. No P-F interval, no predictive maintenance.

| Detection method | Typical P-F interval (bearing example) | What it tells you |
|------------------|----------------------------------------|-------------------|
| Ultrasonic / acoustic emission | 6–12 months before failure | Earliest indication of micro-defects |
| Vibration analysis (high-frequency) | 2–6 months | Developing imbalance, misalignment, wear |
| Oil analysis (particle count) | 1–3 months | Contamination and wear particle accumulation |
| Temperature (thermography / sensor) | Days to weeks | Late-stage heat build-up |
| Audible noise / smoke | Minutes to hours | Failure is already happening |

The practical implication is straightforward: choose the detection method to match the P-F interval your maintenance organisation can actually use. If your spare parts take three weeks to arrive, a one-week P-F interval on temperature data is operationally useless — you will know the bearing is dying, and then watch it die. The oldest trap in PdM is investing in the most sophisticated sensor technology available when the bottleneck in the chain is actually the logistics response time.
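That filter can be sketched in a few lines. This is an illustrative check, not part of any product: the interval figures are midpoints taken from the table above, and the lead-time value is an assumed input.

```python
# Midpoint P-F intervals in days, taken from the table above.
PF_INTERVAL_DAYS = {
    "ultrasonic / acoustic emission": 270,   # 6-12 months
    "vibration analysis": 120,               # 2-6 months
    "oil analysis": 60,                      # 1-3 months
    "temperature": 10,                       # days to weeks
}

def usable_methods(response_time_days):
    """Keep only detection methods whose P-F interval is longer than the
    time the organisation needs to source parts and schedule the repair."""
    return [method for method, pf in PF_INTERVAL_DAYS.items()
            if pf > response_time_days]

# A plant with a 21-day spare-part lead time cannot act on temperature
# alarms, however accurate they are:
print(usable_methods(21))
```

The same screening works in reverse: if only late-stage indicators are affordable, the fix is shortening the logistics response, not buying earlier detection.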

What signals to actually capture — and what not to bother with

The sensor-and-data question is where most PdM projects make their first expensive mistake. The instinct is to capture everything. The reality is that most signals produce noise, not insight, and a small, well-chosen set of signals on well-understood failure modes outperforms a flood of high-frequency data on everything.

| Signal type | Best for | Typical sampling | Retrofit difficulty on brownfield |
|-------------|----------|------------------|-----------------------------------|
| Vibration (accelerometer) | Rotating equipment: motors, pumps, fans, gearboxes, spindles | 10–25 kHz for FFT analysis | Low — clamp-on sensor, no PLC change |
| Current / power (CT clamp) | Motor load, tool wear, anomaly detection | 1–100 Hz | Very low — clamp around motor lead |
| Temperature (RTD/PT100/thermocouple) | Bearings, electrical cabinets, hydraulic oil | 0.1–1 Hz | Low — existing sensors often already present |
| Pressure / flow | Hydraulics, pneumatics, lubrication systems | 1–100 Hz | Medium — often already in PLC |
| Cycle time per part | Tool wear, assembly degradation, general slowdown | Per cycle | Very low with MES — already measured |
| PLC alarm & event log | Pattern recognition preceding faults | Event-driven | Low with OPC UA subscription |
| Ultrasonic | Early bearing defects, leak detection, valve seating | 20–100 kHz | Medium — specialist sensors, calibration |
| Oil / lubricant condition | Gearboxes, hydraulic systems, large engines | Usually manual / periodic | Medium — inline sensors available but costly |

An underappreciated insight from brownfield work: in many plants the most valuable PdM signal is already present and ignored. Motor current, cycle time per part, and PLC alarm patterns are typically available without adding a single sensor. Before specifying vibration sensors on everything that rotates, it pays to look at what the existing automation is already producing. In retrofit projects I have repeatedly found that combining motor-current data with the PLC alarm history catches 60–70 % of the developing failures that a full sensor roll-out would have caught, at a fraction of the cost.
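As a sketch of that low-cost combination (the data and thresholds below are invented for illustration, not recommendations), even a rule as crude as "current trend up and alarms clustering" expresses the idea:

```python
# Illustrative sketch: cross-checking a motor-current trend against PLC
# alarm frequency, using only signals most brownfield plants already have.
from statistics import mean

def degradation_flags(current_amps, alarms_per_shift, window=5,
                      current_rise=0.10, alarm_rise=2.0):
    """Flag shift indices where the windowed mean motor current has risen
    by more than `current_rise` (as a fraction) over the baseline window
    AND alarm frequency is at least `alarm_rise` times the baseline."""
    base_i = mean(current_amps[:window])
    base_a = mean(alarms_per_shift[:window]) or 1e-9  # avoid divide-by-zero logic
    flags = []
    for k in range(window, len(current_amps)):
        win_i = mean(current_amps[k - window + 1:k + 1])
        win_a = mean(alarms_per_shift[k - window + 1:k + 1])
        if win_i > base_i * (1 + current_rise) and win_a > base_a * alarm_rise:
            flags.append(k)
    return flags

# Healthy baseline, then a developing fault: current creeps up, alarms cluster.
current = [40, 41, 40, 40, 41, 42, 44, 46, 47, 48]
alarms  = [ 1,  0,  1,  1,  0,  2,  3,  4,  5,  6]
print(degradation_flags(current, alarms))
```

Requiring both conditions is what keeps the false-positive rate tolerable: a current rise alone may be a harder material, an alarm cluster alone may be an operator issue.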

The analytics approaches — honest comparison

"Predictive maintenance uses machine learning" is the marketing story. The reality is that four distinct analytical approaches coexist in production PdM systems today, each with different strengths, data requirements and failure modes. Choosing the wrong one for your data situation is the second most common cause of disappointing results.

| Approach | How it works | Data requirement | Best for | Honest limitation |
|----------|--------------|------------------|----------|-------------------|
| Threshold / rule-based | Alarm when a measured value crosses a limit | Weeks of data to set thresholds | Well-understood failure modes with clear single-variable signatures | This is condition-based maintenance, not predictive — but solves 60 % of real use cases |
| Physics-based / digital twin | Mathematical model of the asset predicts expected state; deviation = anomaly | Detailed engineering knowledge; moderate data | High-value assets (turbines, large motors) with known physics | Expensive to build; brittle when operating conditions change |
| Unsupervised ML (anomaly detection) | Algorithm learns "normal" behaviour and flags deviations | Months of multi-variable operating data | Complex assets with no labelled failure data | Tells you something is off, not what or when it will fail |
| Supervised ML (failure prediction) | Model trained on historical failure events to predict RUL | Dozens to hundreds of labelled failure events | Fleets of similar machines with enough failure history | Most plants do not have enough failures of the same type — failures are rare, that is the point |

The uncomfortable truth that most vendors avoid stating: the supervised-ML approach — the one that gets the marketing budget — requires exactly the data most plants do not have. Failures of a given component on a given machine occur once every several years. To train a useful supervised model, you need tens of such events, which means you need either a fleet of identical machines (rare outside automotive and energy) or historical data spanning many years. In small and mid-sized plants, threshold-based rules and unsupervised anomaly detection are almost always the more honest choice. The difference between a working PdM system and a shelfware PdM system is most often not the sophistication of the algorithm, but the match between the algorithm and the data reality.
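A minimal stand-in for the unsupervised approach in the table can be as simple as a z-score against a healthy baseline. Production platforms use far richer models, but the behaviour is the same: it flags that something is off, not what or when. The vibration figures below are invented for illustration.

```python
# Minimal unsupervised anomaly detection: learn "normal" from a baseline
# period assumed healthy, then flag samples that deviate strongly from it.
from statistics import mean, stdev

def zscore_anomalies(series, baseline_n=20, threshold=3.0):
    """Fit mean/stddev on the first `baseline_n` samples and flag later
    samples more than `threshold` standard deviations away.
    Returns (index, z-score) pairs."""
    mu = mean(series[:baseline_n])
    sigma = stdev(series[:baseline_n])
    return [(i, (x - mu) / sigma)
            for i, x in enumerate(series[baseline_n:], start=baseline_n)
            if abs(x - mu) > threshold * sigma]

# Bearing vibration in mm/s: 20 healthy readings, then a developing defect.
vib = [2.0, 2.1, 1.9, 2.0, 2.2, 1.9, 2.1, 2.0, 1.8, 2.0,
       2.1, 1.9, 2.0, 2.1, 2.0, 1.9, 2.2, 2.0, 2.1, 1.9,
       2.1, 2.0, 2.3, 2.2, 4.8]
for i, z in zscore_anomalies(vib):
    print(f"sample {i}: z = {z:.1f}")
```

Note what the output does not contain: a remaining-useful-life estimate. Turning this flag into a forecast is exactly the step that separates rung three from rung four.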

The data pipeline — the unglamorous half that determines success

Every PdM project I have seen that failed had an impressive analytics layer and a fragile data pipeline. Every PdM project that succeeded had an unremarkable analytics layer and a ruthlessly reliable data pipeline. The order matters: data first, analytics second.

| Pipeline stage | What must be right | Typical failure if it is not |
|----------------|--------------------|------------------------------|
| 1. Sensing | Correct sensor, correct mounting point, correct range | Data looks valid but represents the wrong physical phenomenon |
| 2. Acquisition | Sampling rate adequate for the failure mode; timestamps synchronised | Aliasing, missed transients, unusable for FFT |
| 3. Edge / gateway | Buffering, resilience to network loss, clear IT/OT boundary | Data gaps on nights/weekends no one notices |
| 4. Transport | Protocol choice (OPC UA, MQTT), bandwidth, security | Silent data loss; models trained on incomplete data |
| 5. Storage & time alignment | Long-term retention, clock sync, context (order, product, operator) attached | Signals cannot be correlated with process state |
| 6. Labelling & ground truth | Actual failures tagged accurately in the data | Supervised ML impossible; RUL estimates meaningless |
| 7. Feedback loop | Maintenance actions and outcomes written back to the model | Model degrades silently over time |

The part that surprises most first-time PdM practitioners is step 6. Machine learning models need to know when failures occurred. That information lives in CMMS systems, paper logs, operator memories, and rarely in a consistent, timestamped form. A PdM programme that cannot answer the question "when exactly did this bearing fail, and what was the root cause?" for each of the last 50 events cannot train a supervised model that will work. This is why a good MES with proper downtime and defect capture is often the more useful predecessor to a PdM programme than any new sensor investment.
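Stage 3 deserves one concrete illustration, because its failure mode (silent gaps) is the hardest to see after the fact. Below is a sketch of the store-and-forward pattern an edge gateway needs; the `send` callable stands in for a real MQTT or OPC UA publish and is not a specific product API.

```python
# Store-and-forward sketch: readings are queued locally and only discarded
# once the upstream broker acknowledges them, so a network outage produces
# a delayed batch instead of a silent gap in the data.
from collections import deque

class StoreAndForwardBuffer:
    def __init__(self, send, maxlen=100_000):
        self._send = send                    # callable returning True on ack
        self._queue = deque(maxlen=maxlen)   # bounded: oldest dropped if full

    def push(self, reading):
        self._queue.append(reading)
        self.flush()

    def flush(self):
        """Drain the queue in order; stop at the first failed send so
        ordering is preserved and nothing is lost during an outage."""
        while self._queue:
            if not self._send(self._queue[0]):
                return
            self._queue.popleft()

delivered, online = [], False
buf = StoreAndForwardBuffer(lambda r: online and (delivered.append(r) or True))
buf.push({"t": 1, "temp": 71.2})   # network down: buffered, not lost
online = True
buf.push({"t": 2, "temp": 71.4})   # both readings now arrive, in order
print(delivered)
```

The `maxlen` bound is a deliberate design choice: an unbounded buffer on an embedded gateway eventually fails in a worse way than dropping the oldest samples.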

Brownfield vs. greenfield — two completely different problems

Every textbook example of predictive maintenance is greenfield: new sensors, new PLC tags, new data infrastructure. Half of real-world projects are brownfield: a 1990s hydraulic press, no digital interface, a dusty control cabinet, and an expectation that data will somehow appear. These are genuinely different problems, and the strategies differ accordingly.

| Question | Greenfield (new equipment) | Brownfield (legacy equipment) |
|----------|----------------------------|-------------------------------|
| Sensor strategy | Specify required sensors as part of procurement | Add external clamp-on sensors; minimise PLC changes |
| Connectivity | OPC UA, often available out of the box | Digital I/O gateway, MQTT; 2–4 h installation per machine |
| Data history | None — must be built from day one | Often several years of maintenance records, poorly structured |
| Realistic starting approach | Condition monitoring, build to predictive over 12–24 months | Threshold alarms on motor current + MES cycle-time trends — often enough |
| Biggest risk | Over-engineering the platform before knowing what matters | Assuming PdM is impossible because the machine is old |

One claim most plants believe and that is almost never true: "our old machines cannot deliver data." In 25 years of retrofit projects, I have seen perhaps three machines where meaningful data genuinely could not be extracted. Everything else was a matter of finding the right signal — a relay contact, a motor lead, a hydraulic pressure line — and installing a gateway that could read it without touching the PLC logic. Machines from 1985 can support condition-based maintenance; they generally cannot support supervised-ML predictive maintenance, but that is usually not the right approach for them anyway.

The business case — honest math

PdM business cases presented by vendors tend to share two weaknesses: they overstate the baseline downtime they will eliminate, and they understate the total cost of ownership including data engineering, ongoing model maintenance, and the organisational change required to act on predictions. A realistic model looks like this:

| Cost / benefit line | Realistic range per bottleneck asset / year | What drives it |
|---------------------|---------------------------------------------|----------------|
| Sensor & gateway hardware | € 3,000–15,000 one-off | Number of signals, retrofit complexity |
| Installation & commissioning | € 2,000–8,000 one-off | Hours per machine, access restrictions |
| Software subscription (per asset share) | € 500–3,000/year | Platform choice, level of analytics |
| Model development & tuning | € 5,000–30,000 over 12 months | Whether thresholds or supervised ML |
| Ongoing model maintenance | € 2,000–10,000/year | Operating-mode drift, retraining cadence |
| Benefit: avoided unplanned downtime | € 20,000–150,000/year | Hours prevented × loaded cost per hour |
| Benefit: reduced PM labour through run-to-condition | € 5,000–30,000/year | Previously over-scheduled servicing |
| Benefit: extended asset life | Variable, often 5–15 % of capex amortisation | Catching failures before cascading damage |

The critical variable is the cost per hour of downtime on the specific asset. PdM on a bottleneck machine that costs € 5,000 per hour when stopped pays for itself in a handful of avoided events. PdM on a non-bottleneck machine with € 200 per hour downtime cost rarely does, regardless of how well the technology performs. This is why the first rule of PdM prioritisation is to follow the bottleneck, not the machine that is technically easiest to instrument.
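The arithmetic behind that rule fits in a few lines. Every figure below is an assumed mid-range value from the table above, not project data; only the downtime cost per hour changes between the two cases.

```python
# Back-of-envelope payback: identical technology cost, different assets.
def payback_months(one_off_eur, annual_cost_eur, avoided_hours_per_year,
                   downtime_cost_per_hour, pm_labour_saving_eur=10_000):
    """Months until the one-off investment is recovered by the net annual
    benefit; None if the programme never pays back."""
    net = (avoided_hours_per_year * downtime_cost_per_hour
           + pm_labour_saving_eur - annual_cost_eur)
    return None if net <= 0 else 12 * one_off_eur / net

ONE_OFF = 8_000 + 4_000 + 15_000   # hardware + installation + model development
ANNUAL  = 1_500 + 5_000            # subscription + ongoing model maintenance

print(payback_months(ONE_OFF, ANNUAL, 12, 5_000))  # bottleneck: months, not years
print(payback_months(ONE_OFF, ANNUAL, 12, 200))    # non-bottleneck: years
```

With twelve avoided downtime hours a year, the bottleneck machine recovers the investment in roughly five months; the non-bottleneck machine takes well over four years, before accounting for any model drift along the way.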

When predictive maintenance does NOT pay off

Because most marketing material presents PdM as universally beneficial, the honest counter-list matters more than the list of benefits. PdM is the wrong answer when:

  1. The asset is not a bottleneck — downtime on it does not translate into lost production.
  2. Spare parts lead time exceeds the P-F interval. Knowing the bearing will fail in three days when replacement takes two weeks changes nothing.
  3. Failure modes are catastrophic and near-instantaneous (some electronics, some hydraulic ruptures). There is no P-F curve to exploit.
  4. The asset is cheap to replace relative to PdM cost. For many small pumps, motors and gearboxes, run-to-failure with stocked spares is the economically correct strategy.
  5. There is no maintenance organisation capable of acting on predictions. A perfect forecast delivered to a team that cannot change a bearing in time is valueless.
  6. Condition-based maintenance has not been tried yet. In a plant still running reactive or calendar-based maintenance, CBM typically captures 70–80 % of the PdM benefit at 20 % of the cost.

The single most important filter: can the organisation act on the prediction? A PdM signal that cannot be translated into a scheduled repair within the P-F interval is just an expensive way of receiving bad news.

How PdM fits into the broader system — MES, CMMS, ERP

Predictive maintenance does not replace any of the existing manufacturing IT systems. It complements them — and relies on them. A PdM platform that is not integrated with a CMMS cannot turn predictions into work orders; a PdM platform that is not integrated with an MES cannot correlate signals with production context (which order was running, what material, what operator); a PdM platform that is not integrated with ERP cannot trigger spare-part procurement in time.

| System | What it gives the PdM system | What it receives from the PdM system |
|--------|------------------------------|--------------------------------------|
| MES | Production context (order, material, operator), real cycle time, downtime classification, OEE | Alarms, predicted stop windows, maintenance-related availability loss forecasts |
| CMMS | Historical failure events, spare-part history, technician availability | Auto-generated work orders with predicted failure window |
| ERP | Stock levels of critical spares, procurement lead times | Triggered spare-part replenishment based on predicted need |
| APS / Scheduler | Production schedule, planned changeovers | Optimal maintenance slot within P-F interval |

The MES is the most underrated partner in this list. Without production context, a spike in motor current cannot be distinguished from a machining operation on a harder material — the signal looks the same, the meaning is opposite. This is one of the reasons PdM programmes built on isolated condition-monitoring platforms produce so many false positives, while PdM programmes running on the same data backbone as the MES rarely do.
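A toy version of that context dependence, with invented baselines and record shapes: the same motor-current reading is normal under one production context and a fault sign under another. In a real system the per-context baselines would be learned from MES-correlated history.

```python
# Context-aware anomaly check: baselines are keyed by production context
# (product, material) rather than fitted once per machine.
BASELINE_AMPS = {
    ("part-A", "aluminium"): 40.0,
    ("part-A", "steel"): 55.0,   # harder material: higher normal load
}

def is_anomalous(reading_amps, product, material, tolerance=0.15):
    """Flag a reading that deviates from the context-specific baseline
    by more than `tolerance` (as a fraction of the baseline)."""
    base = BASELINE_AMPS[(product, material)]
    return abs(reading_amps - base) > tolerance * base

# 54 A while cutting steel is normal; the same 54 A on aluminium is not.
print(is_anomalous(54.0, "part-A", "steel"))
print(is_anomalous(54.0, "part-A", "aluminium"))
```

Without the MES supplying the (product, material) key, the only honest option is one wide baseline per machine, and a wide baseline is precisely what generates the false positives described above.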

Prescriptive maintenance — the next step, honestly assessed

Prescriptive maintenance (the fifth rung) extends PdM by not just predicting failure but recommending the action, the timing and the resource allocation. In 2026, genuine prescriptive maintenance implementations are still rare and largely confined to high-value fleets (aircraft engines, large power generation, some process industries). For the typical discrete manufacturing plant, "prescriptive" in vendor marketing usually means "threshold-based rules with a nicer UI." That is not a criticism — threshold-based rules delivered well are valuable — but it does not justify prescriptive-tier pricing.

The realistic near-term evolution for most plants is not prescriptive analytics but better integration: PdM signals automatically generating CMMS work orders, spare-part checks against ERP inventory, and scheduling of repair windows into APS-driven production plans. That integration is where 80 % of the unrealised value sits, not in more sophisticated algorithms.

The energy angle

One cost-justification line that has emerged strongly since energy prices destabilised in the early 2020s: PdM as an energy-efficiency lever. Motors running at degraded performance consume more energy for the same output. Bearings with developing defects impose higher load. Misaligned couplings waste 3–8 % of motor input power. Condition-monitoring data can identify these inefficiencies long before the component fails — and in many plants, the cumulative energy savings across a fleet exceed the avoided-downtime benefits that originally justified the programme.

This makes the energy-efficiency case especially compelling for plants where bottleneck downtime costs alone do not justify PdM. A large motor fleet (compressors, pumps, fans) with CT-clamp current monitoring is often the best starting point: low retrofit cost, direct energy visibility, and the PdM benefits emerge as a by-product.
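A rough fleet-level estimate shows why this by-product matters; every number below is an assumption chosen for illustration, not a measurement.

```python
# Assumed fleet: 20 motors at 30 kW average load, 6,000 running hours/year,
# EUR 0.20/kWh, and 5 % average loss from correctable degradation
# (within the 3-8 % misalignment range cited above).
motors = 20
avg_kw = 30.0
hours_per_year = 6_000
eur_per_kwh = 0.20
correctable_loss = 0.05

annual_saving = motors * avg_kw * hours_per_year * eur_per_kwh * correctable_loss
print(f"EUR {annual_saving:,.0f}/year")
```

At these assumptions the fleet wastes € 36,000 per year on correctable degradation alone, which is in the same range as the avoided-downtime line for a single mid-sized bottleneck asset.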

FAQ

What is the difference between predictive and condition-based maintenance?
Condition-based maintenance (CBM) triggers action when a measured parameter crosses a pre-defined threshold — a bearing temperature exceeds 80 °C, a vibration amplitude exceeds 4.5 mm/s. Predictive maintenance goes further: it estimates how much time remains before the threshold will be crossed, and therefore how much time remains to plan the intervention. The practical difference is that CBM gives you an alarm, PdM gives you a forecast. For most industrial applications, CBM is sufficient and gets confused with PdM in marketing material. The honest test: if someone cannot tell you the P-F interval for a given failure mode and the confidence band on that estimate, they are doing CBM — not PdM. Which is fine; CBM solves the majority of real-world problems. It just should not be priced or scoped as PdM.
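The distinction can be stated in code. A least-squares line is the simplest possible RUL estimator; real systems use better trend models, but the contrast between an alarm and a forecast is the point. The vibration readings are invented for illustration.

```python
# CBM vs. PdM on the same vibration limit (4.5 mm/s from the example above).
def cbm_alarm(value, limit=4.5):
    """Condition-based: alarm the moment the limit is crossed."""
    return value > limit

def pdm_forecast_days(history, limit=4.5):
    """Predictive: fit a least-squares line to daily readings and return
    the estimated days until the limit is crossed; None if the trend is
    flat or falling."""
    n = len(history)
    xs = range(n)
    mx, my = (n - 1) / 2, sum(history) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, history))
             / sum((x - mx) ** 2 for x in xs))
    if slope <= 0:
        return None
    return (limit - history[-1]) / slope

vib = [2.0, 2.2, 2.5, 2.7, 3.0]     # mm/s, one reading per day
print(cbm_alarm(vib[-1]))           # CBM sees nothing yet
print(pdm_forecast_days(vib))       # PdM: a planning window in days
```

On this data CBM stays silent (3.0 mm/s is below the limit) while the trend fit estimates roughly six days until the limit is crossed, which is the planning window CBM cannot provide.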

Do we need machine learning for predictive maintenance?
Usually no. Machine learning is one of several tools, and for most failure modes in most plants it is not the best tool. Rule-based thresholds work for the majority of common failure modes where the physics is understood. Unsupervised anomaly detection is useful for complex assets with no labelled failure history. Supervised ML requires what most plants do not have — tens of labelled failure events on comparable assets — and in smaller operations, simply never has enough training data to work. The marketing story that PdM requires AI is mostly a story; the reality is that 70–80 % of practical PdM value in discrete manufacturing comes from well-designed rules, properly measured signals, and disciplined response processes, with machine learning as a useful but usually minority contributor.

How long does it take to see results from a PdM programme?
Longer than vendors suggest, shorter than sceptics fear — but the pattern is highly stage-dependent. The first avoided failure on a bottleneck machine usually occurs within 3–9 months of reliable data capture. Measurable OEE improvement from PdM typically shows up between months 9 and 18, after the system has caught multiple events and the maintenance organisation has built confidence in the predictions. Full ROI on a focused programme (5–20 bottleneck assets) is usually visible within 18–30 months. The danger zone is months 3–9: expensive sensors and dashboards are installed, but the system has not yet caught a real failure. This is where unsupported programmes get cancelled. Every successful PdM rollout I have seen had explicit early quick wins — threshold alarms catching obvious degradation — to carry the programme through the slow build-up of the more sophisticated models.

Should a plant start with a single critical asset or a broader deployment?
A single critical asset, almost without exception. The early failures in a PdM programme are almost always organisational, not technical: the alarm reaches the wrong person, the spare part is not in stock, the production plan cannot accommodate the maintenance window, the model produces false positives and loses trust. All of these problems are easier to solve on one machine than on twenty. Once the pattern is working — alert → diagnosis → work order → scheduled intervention → confirmed success — the same pattern can be replicated across the plant in weeks. Plants that attempt to instrument everything at once usually end up with data they cannot act on, alarm fatigue within the maintenance team, and a loss of credibility that hurts the second attempt years later.

How does an MES like SYMESTIC relate to predictive maintenance?
An MES is the most valuable preparatory system for a future PdM programme, and in many cases it delivers a large share of the practical PdM value before a dedicated platform is even required. The MES captures automatic downtime with root-cause classification, real cycle times, alarm logs from the PLC, and correlates all of this with orders, materials and operators. That dataset is exactly the foundation a serious PdM programme needs. In the Neoperl reference case (PLC-based alarm capture, correlation of alarms with stops and quality defects), this foundation alone delivered 10 % fewer stops, 8 % higher availability and 15 % less scrap — most of what a basic PdM programme would have targeted. The sequence that consistently works is: MES first to establish clean data and response processes; condition-based maintenance second for the obvious threshold cases; targeted predictive maintenance third on the remaining high-value, high-P-F-interval assets. Inverting this sequence — installing PdM sensors before basic downtime capture works — is one of the more expensive ways to waste a maintenance budget.


Related: TPM · Machine Downtime · Downtimes · OEE · Production Costs · Capacity Planning · Alarms · Process Data · MES

About the author
Martin Brandel
MES Consultant at SYMESTIC. 35+ years in industrial automation. Dipl.-Ing. Communications Engineering. Connecting machines to higher-level systems since 1991 — from Simatic S5, material-flow control and COROS visualisation to OPC UA, digital I/O gateways and cloud-MES. Specialised in brownfield machine integration without PLC intervention, retrofit projects S5 → S7/TIA, and MES project management from first contact to go-live. · LinkedIn