Data-driven prognostics
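A minimal sketch of the data-driven pattern: fit a trend to condition-monitoring data and extrapolate it to a failure threshold to estimate remaining useful life (RUL). The health-index signal, its parameters, and the threshold below are all synthetic and purely illustrative:

```python
import numpy as np

# Hypothetical degradation signal: a health index drifting from 1.0
# toward an assumed failure threshold (all values are illustrative).
rng = np.random.default_rng(0)
t = np.arange(100.0)                       # operating hours observed so far
health = 1.0 - 0.004 * t + rng.normal(0.0, 0.01, t.size)

# Fit a linear degradation trend to the observations.
slope, intercept = np.polyfit(t, health, deg=1)

# Extrapolate the trend to the failure threshold to estimate RUL.
FAILURE_THRESHOLD = 0.2
t_fail = (FAILURE_THRESHOLD - intercept) / slope   # time when trend crosses threshold
rul = t_fail - t[-1]
print(f"estimated RUL: {rul:.1f} hours")
```

In practice, flexible models such as neural networks replace the linear fit, but the extrapolate-to-threshold logic is the same.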
Data-driven prognostics usually uses pattern recognition and machine learning techniques to detect changes in system states. Classical data-driven methods for nonlinear system prediction include stochastic models such as the autoregressive (AR) model, the threshold AR model, the bilinear model, projection pursuit, multivariate adaptive regression splines, and the Volterra series expansion. Over the last decade, interest in data-driven system state forecasting has increasingly focused on flexible models such as various types of neural networks (NNs) and neural fuzzy (NF) systems. Data-driven approaches are appropriate when the understanding of the first principles of system operation is not comprehensive, or when the system is sufficiently complex that developing an accurate model is prohibitively expensive. The principal advantages of data-driven approaches are that they can often be deployed more quickly and cheaply than other approaches and that they can provide system-wide coverage (cf. physics-based models, which can be quite narrow in scope). The main disadvantages are that they may have wider confidence intervals than other approaches and that they require a substantial amount of data for training. Data-driven approaches can be further subcategorized into fleet-based statistics and sensor-based conditioning. In addition, data-driven techniques subsume cycle-counting techniques.

Physics-based prognostics
Physics-based prognostics (sometimes called model-based prognostics) attempts to incorporate physical understanding (physical models) of the system into the estimation of remaining useful life (RUL). Physics can be modeled at different levels, for example at the micro and macro levels. At the micro level (also called the material level), physical models are embodied by series of dynamic equations that define relationships, at a given time or load cycle, between the damage (or degradation) of a system/component and the environmental and operational conditions under which the system/component is operated. Micro-level models are often referred to as damage propagation models; examples include Yu and Harris's fatigue-life model for ball bearings, which relates the fatigue life of a bearing to the induced stress, Paris and Erdogan's crack growth model, and stochastic defect-propagation models. Since measurements of critical damage properties (such as the stress or strain of a mechanical component) are rarely available, sensed system parameters have to be used to infer the stress/strain values. Micro-level models need to account, in their uncertainty management, for the underlying assumptions and simplifications, which may pose significant limitations on the approach.

Macro-level models are mathematical models at the system level, which define the relationships among system input variables, system state variables, and system measured variables/outputs; the model is often a somewhat simplified representation of the system, for example a lumped-parameter model. The trade-off is increased coverage at possibly reduced accuracy for a particular degradation mode. Where this trade-off is permissible, faster prototyping may result. However, where systems are complex (e.g., a gas turbine engine), even developing a macro-level model may be a rather time-consuming and labor-intensive process.
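As a concrete micro-level example, Paris and Erdogan's crack growth law, da/dN = C(ΔK)^m with ΔK = Y·Δσ·√(πa), can be integrated numerically to estimate the load cycles remaining until a crack reaches a critical size. The material constants, geometry factor, and stress range below are illustrative, not values for any specific material:

```python
import math

# Paris-law damage propagation sketch: da/dN = C * (dK)^m,
# dK = Y * dS * sqrt(pi * a). Constants are illustrative only.
C, m = 1e-12, 3.0        # Paris-law constants (hypothetical)
Y = 1.12                 # geometry factor
dS = 100.0               # stress range per cycle, MPa
a = 1e-3                 # initial crack length, m
a_crit = 0.02            # critical crack length, m

cycles = 0
step = 1000              # forward-Euler integration in blocks of 1000 cycles
while a < a_crit:
    dK = Y * dS * math.sqrt(math.pi * a)
    a += C * dK ** m * step
    cycles += step

print(f"remaining life: about {cycles} cycles")
```

When only indirect sensor data are available, the stress range dS would itself be inferred from sensed system parameters, as noted above.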
As a result, macro-level models may not be available in detail for all subsystems. The resulting simplifications need to be accounted for by the uncertainty management.

Hybrid approaches
Hybrid approaches attempt to leverage the strengths of both data-driven and model-based approaches. In practice, fielded approaches are rarely purely data-driven or purely model-based: model-based approaches often include some aspects of data-driven approaches, and data-driven approaches glean available information from models. An example of the former is a model whose parameters are tuned using field data. An example of the latter is a data-driven approach whose set-point, bias, or normalization factor is given by models. Hybrid approaches can be categorized broadly into two categories: (1) pre-estimate fusion and (2) post-estimate fusion.

Pre-estimate fusion of models and data
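Pre-estimate fusion often takes a physics-derived degradation form and estimates its parameters on-line from data, for example with a particle filter. The sketch below uses a hypothetical exponential capacitance-loss model (not the model of any specific study) with synthetic measurements:

```python
import numpy as np

# Particle-filter sketch: a physics-derived state equation
# C(k) = C0 * exp(-lam * k) (hypothetical) is fused with noisy
# on-line measurements to estimate the decay rate lam.
rng = np.random.default_rng(1)

C0, true_lam = 100.0, 0.02            # initial capacitance (uF), true decay rate
T = 50
meas = C0 * np.exp(-true_lam * np.arange(T)) + rng.normal(0, 0.5, T)

N = 2000
lam = rng.uniform(0.0, 0.1, N)        # particles: candidate decay rates
w = np.full(N, 1.0 / N)               # uniform initial weights

for k, z in enumerate(meas):
    pred = C0 * np.exp(-lam * k)      # each particle's predicted capacitance
    w *= np.exp(-0.5 * ((z - pred) / 0.5) ** 2)   # Gaussian likelihood
    w /= w.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(w ** 2) < N / 2:
        idx = rng.choice(N, size=N, p=w)
        lam = lam[idx] + rng.normal(0, 1e-4, N)   # jitter to keep diversity
        w = np.full(N, 1.0 / N)

lam_hat = np.sum(w * lam)             # posterior-mean decay rate
print(f"estimated decay rate: {lam_hat:.4f} (true {true_lam})")
```

The estimated model can then be propagated forward to an end-of-life threshold to obtain an individualized RUL prediction.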
The motivation for pre-estimate aggregation may be that no ground-truth data are available. This can occur when diagnostics does a good job of detecting faults that are resolved (through maintenance) before system failure occurs, so that hardly any run-to-failure data exist. However, there is an incentive to know better when a system will fail, in order to better leverage the remaining useful life while avoiding unscheduled maintenance (which is typically more costly than scheduled maintenance and results in system downtime). Garga et al. conceptually describe a pre-estimate aggregation hybrid approach in which domain knowledge is used to change the structure of a neural network, resulting in a more parsimonious representation of the network. Another way to accomplish pre-estimate aggregation is to combine an off-line process with an on-line process: in the off-line mode, one can use a physics-based simulation model to understand the relationships between sensor response and fault state; in the on-line mode, one can use data to identify the current damage state, track the data to characterize damage propagation, and finally apply an individualized data-driven propagation model for remaining-life prediction. For example, Khorasgani et al. modeled the physics of failure in electrolytic capacitors. They then used a particle filter approach to derive the dynamic form of the degradation model and estimate the current state of capacitor health. This model is then used to obtain more accurate estimates of the remaining useful life (RUL) of the capacitors as they are subjected to thermal stress conditions.

Post-estimate fusion of model-based approaches with data-driven approaches
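A minimal form of post-estimate fusion is inverse-variance weighting of independent RUL estimates, for example one model-based and one data-driven. The numbers below are illustrative:

```python
# Post-estimate fusion sketch: combine two independent RUL estimates
# by inverse-variance weighting (all values illustrative).
rul_model, var_model = 120.0, 400.0   # model-based estimate, hours (variance in hours^2)
rul_data,  var_data  = 100.0, 100.0   # data-driven estimate

w_model = 1.0 / var_model
w_data = 1.0 / var_data
rul_fused = (w_model * rul_model + w_data * rul_data) / (w_model + w_data)
var_fused = 1.0 / (w_model + w_data)

print(f"fused RUL: {rul_fused:.1f} h, variance: {var_fused:.1f}")
```

The fused variance is smaller than either input variance, which is the uncertainty-narrowing effect that motivates this style of fusion.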
The motivation for post-estimate fusion is often uncertainty management: post-estimate fusion helps to narrow the uncertainty intervals of data-driven or model-based approaches while also improving accuracy. The underlying notion is that multiple sources of information can help to improve the performance of an estimator. This principle has been applied successfully within the context of classifier fusion, where the output of multiple classifiers is used to arrive at a better result than any classifier alone. Within the context of prognostics, fusion can be accomplished by employing quality assessments that are assigned to the individual estimators based on a variety of inputs, for example heuristics, a priori known performance, prediction horizon, or robustness of the prediction.

Prognostic performance evaluation
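One simple time-indexed prognostic metric is relative accuracy, RA(t) = 1 − |RUL_true(t) − RUL_pred(t)| / RUL_true(t), evaluated at successive prediction times against an end of life known a priori in an offline setting. The end of life and predictions below are illustrative:

```python
# Track relative accuracy over successive prediction times;
# EOL and the predictions are illustrative (offline setting,
# so the true end of life is known a priori).
EOL = 200.0                                           # true end of life, hours
predictions = {50.0: 170.0, 100.0: 110.0, 150.0: 48.0}  # time -> predicted RUL

ra = {}
for t, rul_pred in predictions.items():
    rul_true = EOL - t                                # ground-truth RUL at time t
    ra[t] = 1.0 - abs(rul_true - rul_pred) / rul_true
    print(f"t={t:.0f} h: true RUL={rul_true:.0f}, predicted={rul_pred:.0f}, RA={ra[t]:.2f}")
```

In this example the accuracy improves as the system approaches end of life, which is exactly the behavior a time-indexed metric is designed to expose.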
Prognostic performance evaluation is of key importance for a successful PHM system deployment. The early lack of standardized methods for performance evaluation and of benchmark data sets resulted in reliance on conventional performance metrics borrowed from statistics. Those metrics were primarily accuracy- and precision-based, with performance evaluated against the end of life, typically known a priori in an offline setting. More recently, efforts towards maturing prognostics technology have put a significant focus on standardizing prognostic methods, including those of performance assessment. A key aspect missing from the conventional metrics is the capability to track performance over time. This is important because prognostics is a dynamic process in which predictions are updated with an appropriate frequency as more observation data become available from an operational system. Similarly, the performance of a prediction changes with time and must be tracked and quantified. Another aspect that makes this process different in a PHM context is the time value of an RUL prediction. As a system approaches failure, the time window for taking a corrective action gets shorter, and consequently the accuracy of predictions becomes more critical for decision making. Finally, randomness and noise in the process, measurements, and prediction models are unavoidable, so prognostics inevitably involves uncertainty in its estimates. A robust prognostics performance evaluation must incorporate the effects of this uncertainty.

Uncertainty in prognostics
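A common way to quantify how parameter and threshold uncertainty propagate into an RUL estimate is Monte-Carlo sampling through the degradation model. The sketch below assumes a simple linear degradation model with hypothetical parameter distributions:

```python
import numpy as np

# Monte-Carlo propagation of parameter and threshold uncertainty
# through a linear degradation model (all values illustrative).
rng = np.random.default_rng(2)
N = 10_000

slope = rng.normal(-0.004, 0.0005, N)     # uncertain degradation rate per hour
threshold = rng.normal(0.2, 0.02, N)      # uncertain failure threshold
health_now = 0.7                          # current health index

rul = (threshold - health_now) / slope    # hours until each sample hits threshold
lo, hi = np.percentile(rul, [5, 95])
print(f"median RUL: {np.median(rul):.0f} h, 90% interval: [{lo:.0f}, {hi:.0f}] h")
```

The resulting distribution, rather than a single point estimate, is what a robust performance evaluation or maintenance decision should consume.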
There are many sources of uncertainty that can influence prediction accuracy. These can be categorized as:

*Uncertainty in system parameters: this concerns uncertainty in the values of the physical parameters of the system (resistance, inductance, stiffness, capacitance, etc.). This uncertainty is induced by the environmental and operational conditions in which the system evolves. It can be tackled using adequate methods, such as interval-based ones.
*Uncertainty in the nominal system model: this concerns imprecisions in the mathematical model generated to represent the behavior of the system. These imprecisions (or uncertainties) can result from a set of assumptions used during the modeling process, which lead to models that do not exactly fit the real behavior of the system.
*Uncertainty in the system degradation model: the degradation model can be obtained from accelerated life tests conducted on different data samples of a component. In practice, data obtained from accelerated life tests performed under the same operating conditions may exhibit different degradation trends. This difference in degradation trends can be considered an uncertainty in the degradation models derived from accelerated-life-test data.
*Uncertainty in prediction: uncertainty is inherent in any prediction process. Predictions from any nominal and/or degradation model are inaccurate, being affected by several uncertainties, such as uncertainty in the model parameters, in the environmental conditions, and in the future mission profiles. Prediction uncertainty can be tackled using Bayesian and online estimation and prediction tools (e.g., particle filters and Kalman filters).
*Uncertainty in failure thresholds: the failure threshold is important in any fault detection and prediction method. It determines the time at which the system fails and consequently the remaining useful life.
In practice, the value of the failure threshold is not constant and can change over time. It can also change with the nature of the system, its operating conditions, and the environment in which it evolves. All of these parameters induce uncertainty that should be considered in the definition of the failure threshold. Examples of uncertainty quantification can be found in the literature.

Commercial hardware and software platforms
For most PHM industrial applications, commercial off-the-shelf data acquisition hardware and sensors are normally the most practical and common choice. Example commercial vendors of data acquisition hardware include National Instruments and Advantech Webaccess; however, for certain applications the hardware can be customized or ruggedized as needed. Common sensor types for PHM applications include accelerometers, temperature sensors, pressure sensors, encoders or tachometers for measuring rotational speed, electrical measurements of voltage and current, acoustic-emission sensors, load cells for force measurements, and displacement or position sensors. There are numerous sensor vendors for these measurement types, some with a specific product line more suited to condition monitoring and PHM applications. Data-analysis algorithms and pattern recognition technology are now being offered in some commercial software platforms or as part of packaged software solutions. National Instruments currently has a trial version (with a commercial release in the upcoming year) of the Watchdog Agent prognostic toolkit, a collection of data-driven PHM algorithms developed by the Center for Intelligent Maintenance Systems. This collection of over 20 tools allows one to configure and customize the algorithms for signature extraction, anomaly detection, health assessment, failure diagnosis, and failure prediction for a given application as needed. Customized predictive-monitoring commercial solutions using the Watchdog Agent toolkit are offered by Predictronics Corporation, a recent start-up whose founders were instrumental in the development and application of this PHM technology at the Center for Intelligent Maintenance Systems.

System-level prognostics
While most prognostic approaches focus on accurately computing the degradation rate and remaining useful life (RUL) of individual components, it is the rate at which the performance of subsystems and systems degrades that is of greater interest to the operators and maintenance personnel of these systems.