ENBIS-14 in Linz

21 – 25 September 2014; Johannes Kepler University, Linz, Austria Abstract submission: 23 January – 22 June 2014

My abstracts


The following abstracts have been accepted for this event:

  • Failure Diagnostics and Prognostics based on Hidden Markov Models

    Authors: Tingting Liu (Vrije Universiteit Brussel), Lemeire Jan (Vrije Universiteit Brussel)
    Primary area of focus / application: Modelling
    Secondary area of focus / application: Reliability
    Keywords: Condition-based maintenance (CBM), Hidden Markov models (HMM), Diagnostics, Prognostics
    Submitted at 29-Apr-2014 16:20 by Tingting Liu
    Accepted
    Diagnosing and predicting the degradation process of machines or components plays a vital role in industrial condition-based maintenance (CBM). This paper proposes a hidden Markov model (HMM) with an improved learning framework to achieve this goal. Vibration signals are used to extract effective features, which are then used to learn the HMMs within the proposed framework. The obtained model captures the behaviour of the machine’s health status, including its degradation. The proposed method was validated on simulated data and a benchmark bearing data set. The experimental results verify the effectiveness and efficiency of the proposed HMM learning framework.
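    A core step in HMM-based diagnostics of the kind described above is decoding the hidden health states from an observed feature sequence. The following minimal sketch (all parameters are hypothetical, not from the paper) applies the Viterbi algorithm to a left-to-right HMM in which degradation never reverses:

```python
import numpy as np

# Hypothetical left-to-right HMM: states 0=healthy, 1=degraded, 2=failing,
# observations are discretized vibration-feature levels (0, 1, 2).

def viterbi(obs, log_pi, log_A, log_B):
    """Most likely hidden-state path for an observation sequence `obs`."""
    n_states = log_pi.shape[0]
    T = len(obs)
    delta = np.full((T, n_states), -np.inf)   # best log-prob ending in each state
    psi = np.zeros((T, n_states), dtype=int)  # backpointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        for j in range(n_states):
            scores = delta[t - 1] + log_A[:, j]
            psi[t, j] = np.argmax(scores)
            delta[t, j] = scores[psi[t, j]] + log_B[j, obs[t]]
    path = np.zeros(T, dtype=int)
    path[-1] = np.argmax(delta[-1])
    for t in range(T - 2, -1, -1):            # backtrack
        path[t] = psi[t + 1, path[t + 1]]
    return path

# Left-to-right transition matrix: zeros below the diagonal mean the
# health state can only stay put or worsen.
A = np.array([[0.95, 0.05, 0.00],
              [0.00, 0.90, 0.10],
              [0.00, 0.00, 1.00]])
B = np.array([[0.80, 0.15, 0.05],   # emission probs per vibration level
              [0.20, 0.60, 0.20],
              [0.05, 0.15, 0.80]])
pi = np.array([1.0, 0.0, 0.0])      # machines start healthy
obs = np.array([0, 0, 1, 1, 1, 2, 2])

with np.errstate(divide="ignore"):  # log(0) -> -inf is intended here
    path = viterbi(obs, np.log(pi), np.log(A), np.log(B))
print(path)  # a non-decreasing sequence of health states
```

    In practice the transition and emission parameters would come from the learning framework rather than being fixed by hand, and the decoded state sequence feeds the diagnostic and prognostic steps.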
  • Reducing Cost of Monte Carlo Experiments in Power Production Safety Contexts: Some Benefits of Monotonicity

    Authors: Nicolas Bousquet (EDF R&D), Vincent Moutoussamy (EDF R&D), Thierry Klein (IMT Toulouse), Fabrice Gamboa (IMT Toulouse)
    Primary area of focus / application: Reliability
    Secondary area of focus / application: Reliability
    Keywords: Monte Carlo, Monotonicity, Computer codes, Metamodelling, Deterministic bounds, Design of Experiments
    Submitted at 29-Apr-2014 16:57 by Nicolas Bousquet
    Accepted
    22-Sep-2014 16:20 Reducing Cost of Monte Carlo Experiments in Power Production Safety Contexts: Some Benefits of Monotonicity
    Safety analyses in power industries are often conducted with numerical models (so-called computer codes) that simulate a production process. One reason is that no true failure has ever been observed, because the components are highly reliable. The high computational cost of such tools makes it necessary to develop specific accelerated Monte Carlo methods, based on clever designs of numerical experiments, to compute failure probabilities or quantiles. EDF recently conducted several studies on the benefits of the monotonicity of such functionals for producing fast, consistent and conservative estimators of these quantities. This talk will present the main results of these studies.
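    To illustrate why monotonicity helps (this is a toy sketch, not EDF's actual method): if the code g is monotone increasing, any point that componentwise dominates a known failure point must also fail, and any point dominated by a known safe point must be safe, so most Monte Carlo sample points can be classified without running the expensive code. The function `g` and threshold `s` below are made-up stand-ins for a costly computer code:

```python
import numpy as np

rng = np.random.default_rng(0)
s = 1.2                                   # hypothetical safety threshold
g = lambda x: x[0] + x[1] ** 2            # toy monotone increasing "code"

evaluated = []                            # (input point, failed?) pairs
n_calls = 0
sample = rng.uniform(size=(20000, 2))     # Monte Carlo sample on [0,1]^2
labels = np.empty(len(sample), dtype=bool)

for k, u in enumerate(sample):
    fail = safe = False
    for x, f in evaluated:
        if f and np.all(u >= x):          # dominates a known failure point
            fail = True
            break
        if not f and np.all(u <= x):      # dominated by a known safe point
            safe = True
            break
    if not (fail or safe):                # only now pay for a real code run
        fail = g(u) > s
        n_calls += 1
        evaluated.append((u, fail))
    labels[k] = fail

p_hat = labels.mean()
print(f"estimated failure probability {p_hat:.4f} using {n_calls} code calls")
```

    The estimator is identical to plain Monte Carlo, but the number of code evaluations is far smaller than the sample size; the studies presented in the talk go further, deriving deterministic bounds from the classified regions.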
  • Spatial Outlier Detection in the Air Quality Monitoring Network of Normandy (France)

    Authors: Jean-Michel Poggi (Univ. Paris Descartes & Univ. Paris-Sud Orsay), Michel Bobbia (Air Normand), Michel Misiti (Univ. Paris-Sud Orsay), Yves Misiti (Univ. Paris-Sud Orsay), Bruno Portier (Normandie Université, INSA Rouen)
    Primary area of focus / application: Mining
    Secondary area of focus / application: Metrology & measurement systems analysis
    Keywords: Air quality, Kriging, Nearest neighbors, Particulate matter, Spatial outlier detection
    Submitted at 29-Apr-2014 17:13 by Jean-Michel Poggi
    Accepted (view paper)
    23-Sep-2014 09:00 Spatial Outlier Detection in the Air Quality Monitoring Network of Normandy (France)
    We consider hourly PM10 measurements from 22 monitoring stations located in the Basse-Normandie and Haute-Normandie regions (France) as well as in neighbouring regions. All considered monitoring stations are either urban background stations or rural ones. The paper focuses on the statistical detection of outliers in the hourly PM10 concentrations from a spatial point of view.

    The general strategy uses a jackknife-type approach and is based on the comparison of the actual measurement with a robust prediction. Two ways to handle spatial prediction are considered: the first is based on the nearest-neighbours weighted median, which directly considers the concentrations, while the second is based on kriging increments rather than the more traditional pseudo-innovations.

    The two methods are applied to the PM10 monitoring network in Normandy and are fully implemented by Air Normand (the official association for air quality monitoring in Haute-Normandie) in the Measurements Quality Control process. Some numerical results are provided on recent data from January 1, 2013 to May 31, 2013 to illustrate and compare the two methods.
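    The first (nearest-neighbours) variant of the jackknife idea can be sketched as follows. This is a simplified illustration, not Air Normand's implementation; the station layout, distance weighting and threshold are all invented:

```python
import numpy as np

def weighted_median(values, weights):
    """Smallest value whose cumulative weight reaches half the total."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * w.sum())]

def spatial_outliers(coords, pm10, k=4, threshold=25.0):
    """Flag stations whose measurement deviates from the robust
    prediction of their k nearest neighbours by more than `threshold`."""
    flags = []
    for i in range(len(pm10)):
        d = np.linalg.norm(coords - coords[i], axis=1)
        d[i] = np.inf                       # leave the station out (jackknife)
        nn = np.argsort(d)[:k]              # k nearest neighbours
        pred = weighted_median(pm10[nn], 1.0 / d[nn])
        flags.append(abs(pm10[i] - pred) > threshold)
    return np.array(flags)

# Hypothetical network: 6 stations, one reporting an aberrant hourly value.
coords = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 1], [2, 0]], float)
pm10 = np.array([18.0, 20.0, 19.0, 95.0, 21.0, 22.0])  # station 3 suspect
flags = spatial_outliers(coords, pm10)
print(flags)
```

    The kriging-increment variant replaces the weighted-median prediction with a geostatistical one, but the leave-one-station-out comparison is the same.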
  • Robust Multivariate Process Control of Multi-Way Data in Semiconductor Fabrication

    Authors: Peter Scheibelhofer (Institute of Statistics, Graz University of Technology), Günter Hayderer (ams AG), Ernst Stadlober (Institute of Statistics, Graz University of Technology)
    Primary area of focus / application: Process
    Keywords: Fault detection, Multivariate process control, Multi-way PCA, Robust statistics, Kernel methods
    Submitted at 30-Apr-2014 10:16 by Peter Scheibelhofer
    Accepted
    22-Sep-2014 16:00 Robust Multivariate Process Control of Multi-Way Data in Semiconductor Fabrication
    The evaluation and monitoring of manufacturing processes is a crucial challenge in modern semiconductor fabrication. With growing production complexity, large numbers of variables are recorded during the operation of each process step. As the production processes are of a batch type, every production unit (wafer) records data from multiple process variables at multiple time points during its processing. This results in three-way or multi-way data arrays. To utilize all of the recorded information, methods are needed that can handle multi-way arrays. In this work, we present a generalized methodology for multivariate process control of multi-way arrays using multi-way principal component analysis (MPCA). The use of Hotelling’s T^2 statistic makes outcomes easy to monitor, as it can be summarized in a single control chart. Variable grouping based on process-engineer expertise and multiblock PCA techniques allows fault diagnosis at the variable-group level. The utilization of kernel principal component analysis (KPCA) allows the approach to also capture nonlinear relationships, as frequently observed in semiconductor process data. Attention is also paid to robustness by using robust KPCA and robust T^2 estimation.
    In a case study with data from the Austrian semiconductor manufacturer ams AG, an observed production fault is detected and its root cause is tracked down successfully.
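    The basic MPCA/T^2 pipeline can be sketched as follows (an illustrative linear version on synthetic data, without the kernel or robust refinements the talk describes; all dimensions and the injected fault are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
n_wafers, n_vars, n_time = 50, 4, 30
X = rng.normal(size=(n_wafers, n_vars, n_time))   # wafers x variables x time
X[-1, 0, :] += 4.0                                # inject a faulty wafer

# Wafer-wise unfolding: each wafer becomes one row of length n_vars*n_time.
Xu = X.reshape(n_wafers, n_vars * n_time)
mu, sd = Xu.mean(0), Xu.std(0)
Z = (Xu - mu) / sd                                # autoscaling

# PCA via SVD of the scaled, unfolded matrix; retain a few components.
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
k = 3
scores = Z @ Vt[:k].T
lam = (S[:k] ** 2) / (n_wafers - 1)               # variance of each score
T2 = np.sum(scores ** 2 / lam, axis=1)            # Hotelling's T^2 per wafer

print("most suspicious wafer:", int(np.argmax(T2)))
```

    In a real deployment T2 would be compared with a control limit derived from reference production, and multiblock contributions would point to the variable group driving an alarm.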
  • On the Practical Interest of Some Discrete Lifetime Models in Industrial Reliability Studies in the Context of Power Production

    Authors: Alberto Pasanisi (EDF R&D), Come Roero (INRIA Paris Sud), Nicolas Bousquet (EDF R&D), Emmanuel Remy (EDF R&D)
    Primary area of focus / application: Reliability
    Secondary area of focus / application: Modelling
    Keywords: Discrete survival data, Inverse Polya model, Discrete Weibull model, Ageing
    Submitted at 30-Apr-2014 10:22 by Alberto Pasanisi
    Accepted
    22-Sep-2014 15:40 On the Practical Interest of Some Discrete Lifetime Models in Industrial Reliability Studies in the Context of Power Production
    Engineers often cope with the problem of assessing the lifetime of industrial components on the basis of observed industrial feedback data. Generally, lifetime is modelled as a continuous random variable, for instance exponentially or Weibull distributed. However, in some cases, the features of the piece of equipment under investigation rather suggest the use of discrete probabilistic models. This happens for a component which only operates on cycles or on demand. In these cases, the lifetime is measured in the number of cycles or the number of demands before failure; therefore, in theory, discrete models should be used. This paper aims at shedding some light on the practical interest, for the reliability engineer, of using discrete models. In particular, we focus on the Inverse Polya distribution, based on the Polya urn scheme, and on the so-called Weibull-1 model; we show that, for different reasons, both are of limited interest in an industrial context where components are subject to ageing, data are possibly censored and failures occur after a relatively high number of solicitations.
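    For concreteness, the type-1 discrete Weibull ("Weibull-1") model mentioned above has survival function S(k) = q^(k^beta) over k demands, with beta > 1 encoding ageing (an increasing hazard). A short sketch with arbitrary illustrative parameter values:

```python
import numpy as np

def survival(k, q, beta):
    """P(lifetime > k demands) for the type-1 discrete Weibull model."""
    return q ** (np.asarray(k, float) ** beta)

def pmf(k, q, beta):
    """P(failure exactly at demand k), k = 1, 2, ..."""
    k = np.asarray(k, float)
    return survival(k - 1, q, beta) - survival(k, q, beta)

def hazard(k, q, beta):
    """Conditional failure probability at demand k given survival so far."""
    return pmf(k, q, beta) / survival(np.asarray(k, float) - 1, q, beta)

q, beta = 0.98, 1.5        # hypothetical component, beta > 1 => ageing
k = np.arange(1, 6)
print("pmf   :", np.round(pmf(k, q, beta), 4))
print("hazard:", np.round(hazard(k, q, beta), 4))   # increasing with k
```

    The paper's point is that, despite this clean structure, fitting such models to censored field data with late failures raises practical difficulties that limit their industrial usefulness.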
  • Identifying Patient Communication Needs: An Empirical Test on Rare Cancer in Italy

    Authors: Rosa Falotico (Università degli Studi di Milano Bicocca), Caterina Liberati (Università degli Studi di Milano Bicocca), Paola Zappa (University of Italian Switzerland, Lugano)
    Primary area of focus / application: Quality
    Secondary area of focus / application: Mining
    Keywords: Patient communication needs, Online communication, Qualitative interviews, Text mining
    Submitted at 30-Apr-2014 10:49 by Rosa Falotico
    Accepted
    23-Sep-2014 10:15 Identifying Patient Communication Needs: an Empirical Test on Rare Cancer in Italy
    In the healthcare literature, several papers have demonstrated that providing patients with detailed, continuous, and diversified information on their pathology - in a broad sense - has a significantly positive effect on patient empowerment and on the effectiveness of the care continuum. In this respect, besides healthcare professionals, indirect communication (printed material, websites, etc.) plays an increasingly important role. Web communication, in particular, has proved to be crucial: it is expected to be accessible to everyone and cost-saving for healthcare systems, to facilitate two-way interaction, and to promote collaboration among patients, doctors, and advocacy groups. Web communication, however, seems to suffer from two main limitations: 1. it can be unreliable if unsupervised; 2. it can be used to disseminate only certain kinds of information.

    This work aims to assess the extent of these limitations by examining patient propensity toward using web communication and identifying the patient needs that it could help satisfy.
    For this purpose, we run an exploratory empirical study on a sample of rare-cancer patients treated in a highly specialized research center in Northern Italy. We conduct semi-structured interviews on patient propensity and needs and analyze their transcriptions by means of text mining techniques. We clean and normalize the texts with automatic techniques for lexical-grammatical analysis and, finally, we synthesize the text contents with correspondence analysis (CA). CA allows us to identify the main concepts expressed by patients and to represent both patients and concepts geometrically in a reduced factorial space. This simultaneous graphic representation of concepts and individuals gives a visual account of the association between patients and their information needs.
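    The CA computation itself is a singular value decomposition of the standardized residuals of a contingency table. A minimal sketch on a made-up concept-by-patient-group table (the counts are purely illustrative, not the study's data):

```python
import numpy as np

# Toy contingency table: rows = concepts extracted from the interviews,
# columns = patient groups (counts are invented for illustration).
N = np.array([[30, 5, 2],
              [4, 25, 6],
              [3, 6, 28],
              [10, 9, 11]], float)

P = N / N.sum()                          # correspondence matrix
r, c = P.sum(1), P.sum(0)                # row and column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))   # standardized residuals
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

row_coords = (U * sv) / np.sqrt(r)[:, None]          # principal coordinates
col_coords = (Vt.T * sv) / np.sqrt(c)[:, None]
inertia = sv ** 2                                    # per-axis inertia

print("share of inertia on axis 1:", round(inertia[0] / inertia.sum(), 3))
```

    Plotting `row_coords` and `col_coords` on the first two axes gives the joint display of concepts and patient groups described in the abstract.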