ENBIS-15 in Prague

6–10 September 2015; Prague, Czech Republic
Abstract submission: 1 February – 3 July 2015


The following abstracts have been accepted for this event:

  • A Novel Method to Deal with Latent Ability Testing and Evaluation

    Authors: Emil Bashkansky (ORT Braude College of Engineering), Vladimir Turetsky (ORT Braude College of Engineering)
    Primary area of focus / application: Design and analysis of experiments
    Secondary area of focus / application: Modelling
    Keywords: Maximum likelihood, Testing, Latent ability, Difficulty, Item response
    Submitted at 4-Feb-2015 12:52 by Emil Bashkansky
    Accepted (view paper)
    8-Sep-2015 15:35 A Novel Method to Deal with Latent Ability Testing and Evaluation
    A new approach to the evaluation of binary test results when checking a one-dimensional ability is proposed. We consider the case where a qualitatively homogeneous population of objects is tested by a set of non-destructive test items having different, but a priori unknown, levels of difficulty, and we need to evaluate/compare both the intrinsic abilities of these objects and the levels of difficulty of the test items. We assume that the responses to different test items applied to the same object do not affect one another and that the same scale-invariant item response model applies to all members of the tested population of objects under test (OUTs). In the context of the paper, an OUT can be an electronic component, an examinee, a program unit, a material under test, etc. An algorithm for solving the above-mentioned problem, applicable to engineering testing, is proposed. It combines several already developed methods, such as item response theory, maximum likelihood estimation and the method of flow redistribution; this combination allows an acceptable logical/numerical scheme for the evaluation of testing results to be built. The approach proceeds in stages that converge to a solution determining the levels of difficulty, the abilities, and the distribution of these abilities among the tested population. The method is illustrated by a numerical example.
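    The abstract above combines item response theory with maximum likelihood estimation. As a purely illustrative sketch (not the authors' algorithm; the Rasch-type model, the alternating gradient-ascent scheme and the toy data below are assumptions), joint estimation of abilities and item difficulties from a binary response matrix might look like this:

```python
import numpy as np

def rasch_jmle(X, n_iter=500, lr=0.5):
    """Joint maximum-likelihood estimation of abilities and difficulties for a
    binary response matrix X (objects x items) under a Rasch-type model:
    P(success) = 1 / (1 + exp(-(ability - difficulty))).
    Note: objects answering every item correctly (or incorrectly) have no finite MLE."""
    n_obj, n_items = X.shape
    ability = np.zeros(n_obj)
    difficulty = np.zeros(n_items)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(ability[:, None] - difficulty[None, :])))
        resid = X - p                          # gradient direction of the log-likelihood
        ability += lr * resid.mean(axis=1)     # ascent step for object abilities
        difficulty -= lr * resid.mean(axis=0)  # ascent step for item difficulties
        difficulty -= difficulty.mean()        # fix the scale (model is shift-invariant)
    return ability, difficulty

# toy data: 50 objects, 5 items of increasing difficulty
rng = np.random.default_rng(0)
true_ability = rng.normal(0.0, 1.0, size=50)
true_difficulty = np.linspace(-1.5, 1.5, 5)
p_true = 1.0 / (1.0 + np.exp(-(true_ability[:, None] - true_difficulty[None, :])))
X = (rng.uniform(size=p_true.shape) < p_true).astype(float)

ability_hat, difficulty_hat = rasch_jmle(X)
print("estimated item difficulties:", np.round(difficulty_hat, 2))
```

    Centring the difficulties fixes the scale, since the likelihood is unchanged by a common shift of all abilities and difficulties.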
  • Empiricism and the Root Cause Analysis Helix

    Authors: Matthew Barsalou (BorgWarner Turbo Systems Engineering GmbH)
    Primary area of focus / application: Quality
    Secondary area of focus / application: Quality
    Keywords: Root Cause Analysis, Exploratory Data Analysis, Empiricism, Quality
    Submitted at 20-Feb-2015 16:57 by Matthew Barsalou
    Accepted (view paper)
    8-Sep-2015 17:40 Empiricism and the Root Cause Analysis Helix
    Root Cause Analysis (RCA) is performed in industry to determine the cause of a failure, or the underlying reason for the current level of performance of a process, when improvements need to be implemented. All too often, RCA and improvement teams discuss potential causes and then vote on what they believe to be the most probable cause; unfortunately, they fail to take the failed part or the actual process into consideration. This talk will describe the need for empiricism when performing RCA. Forming tentative hypotheses with Tukey's Exploratory Data Analysis (EDA) will be described, as will the use of the scientific method for RCA in the form of Platt's Strong Inference. Box's iterative inductive-deductive process will also be described. These three concepts will then be combined with Deming's Plan-Do-Check-Act (PDCA) cycle and presented as the RCA helix, an approach to RCA that is compatible with other methods. The scientific method, EDA, PDCA and Box's iterative inductive-deductive process are not new, but packaging them together as the RCA helix presents a simplified approach that makes them easier for engineers, managers and technicians to apply. This approach is empirically driven and can be used in support of other methods.
  • Confirmation - The Final DOE Step

    Authors: Pat Whitcomb (Stat-Ease, Inc.), Martin Bezener (Stat-Ease, Inc.), Wayne Adams (Stat-Ease, Inc.)
    Primary area of focus / application: Design and analysis of experiments
    Keywords: Design of Experiments (DOE), Confirmation, Verification, Predictive model
    Submitted at 4-Mar-2015 19:54 by Pat Whitcomb
    Accepted (view paper)
    7-Sep-2015 10:00 Confirmation - The Final DOE Step
    This talk provides DOE practitioners with practical tools to enhance their skills. The talk lays out a series of case studies that illustrate several techniques to confirm (or verify) the predictions from a DOE model. The techniques include:
     • Simple confirmation,
     • Concurrent confirmation,
     • Verification DOE.
    The presentation begins by spelling out the importance of confirming results from a designed experiment. (Not confirming is not a recommended strategy.) It then provides practical advice and examples of three alternative approaches to implementing a confirmation strategy. Attendees will come away with valuable tools to confirm their models as the final step in their DOE process.
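    As an illustration of the simple-confirmation idea (a sketch only; it is not taken from the talk, and the factorial design, response values and 95% interval choice are invented for illustration), one can fit a regression model to the designed experiment and check whether later confirmation runs fall inside the model's prediction interval:

```python
import pandas as pd
import statsmodels.formula.api as smf

# assumed replicated 2^2 factorial in coded units with a single response y
doe = pd.DataFrame({
    "A": [-1, 1, -1, 1, -1, 1, -1, 1],
    "B": [-1, -1, 1, 1, -1, -1, 1, 1],
    "y": [12.1, 15.3, 13.0, 18.9, 11.8, 15.0, 13.4, 19.2],
})
model = smf.ols("y ~ A * B", data=doe).fit()  # main effects plus interaction

# assumed confirmation runs performed after the experiment
confirm = pd.DataFrame({"A": [1.0, 0.0], "B": [1.0, 0.0], "y_obs": [18.5, 14.9]})
pred = model.get_prediction(confirm).summary_frame(alpha=0.05)

# a confirmation run "passes" if the observed response lies inside the
# 95% prediction interval of the fitted DOE model
inside = (confirm["y_obs"] >= pred["obs_ci_lower"]) & (confirm["y_obs"] <= pred["obs_ci_upper"])
print(pd.concat([confirm, pred[["mean", "obs_ci_lower", "obs_ci_upper"]],
                 inside.rename("confirmed")], axis=1))
```

    A run whose observed response falls outside the prediction interval signals that the model should not yet be relied on for prediction.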
  • Comparing Process Data to PCA-Based Contribution Plots for Model-Based Fault Identification in Batch Processes

    Authors: Sam Wuyts (KU Leuven), Geert Gins (KU Leuven), Pieter Van den Kerkhof (KU Leuven), Jan Van Impe (KU Leuven)
    Primary area of focus / application: Modelling
    Secondary area of focus / application: Mining
    Keywords: Fault detection, Fault diagnosis, Chemical batch processes, Model-based classification, Process data mining
    Submitted at 6-Mar-2015 17:41 by Geert Gins
    Accepted (view paper)
    9-Sep-2015 10:10 Comparing Process Data to PCA-Based Contribution Plots for Model-Based Fault Identification in Batch Processes
    Introduction
    The (bio)chemical industry relies heavily on batch processes, especially for products with high added value such as pharmaceuticals or specialty chemicals. However, their dynamic characteristics and the difficulty of measuring product quality online inherently limit their controllability. This presents major challenges for monitoring, control, fault detection and diagnosis in batch processes.
    Statistical Process Control (SPC) is typically applied to tackle these challenges. The development of SPC is supported by the availability of large historical databases in chemical plants, containing all available online measurements from a large set of sensors. These databases hold a tremendous amount of information regarding process operation, which SPC exploits to establish a fast and reliable monitoring procedure.


    Problem statement
    Once an abnormal situation is detected, the underlying root cause still needs identification. Experts and operators typically use contribution plots [3] to interpret and diagnose process upsets. Contribution plots require no prior knowledge about the underlying disturbances but do not always unequivocally indicate the variable(s) responsible for the fault. Hence, expert interpretation is always required for reliable fault identification.
    Expert interpretation of the contribution pattern can be bypassed when all possible faults are known. An automated classification model, which pinpoints the fault class most likely responsible for the upset, is trained on the contribution patterns of past faults. Automating the diagnosis significantly reduces the time between fault detection and corrective action. However, reliability is impacted by fault smearing, which negatively influences the accuracy of the variables' contributions [4,5]. Therefore, it might be beneficial to consider alternative data patterns (i.e., patterns not subject to fault smearing) as input to the classification model.
    This paper compares automatic classification based on variable contributions with classification based directly on sensor data.


    Results
    Datasets representing the benchmark penicillin fermentation process Pensim [1] are simulated in RAYMOND [2]. Two cases are studied. The first focuses on the intrinsic diagnosability of upsets by considering only basic measurement noise on a limited number of sensor variables; the second extends the conclusions to datasets including more complex measurement noise on multiple sensors as well as biological variability. Different pretreatments of both the contributions and the sensor data are employed to maximize performance. These manipulations are chosen based on the intrinsic nature of the faults and significantly improve performance.
    The two studies yield guidelines on the appropriate pretreatment for faults of a different nature. Normalizing the data around the average batch trajectory prior to classification is advisable for faults with varying starting times. Classification of faults with both positive and negative deviations from the average trajectory is improved by taking absolute values. Taking time windows into account helps to distinguish drift faults from step or drop faults. In both studies, better classification is achieved using pretreated sensor measurements rather than variable contributions, since the latter are subject to the negative influence of fault smearing.


    References
    [1] Comput. Chem. Eng., 26(11):1553–1565, 2002.
    [2] Comput. Chem. Eng., 69:110–118, 2014.
    [3] ISA Trans., 37(1):41–59, 1996.
    [4] Chem. Eng. Sci., 104:285–293, 2013.
    [5] Chemometr. Intell. Lab., 51(1):95–114, 2000.
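    To make the two inputs being compared concrete, the following minimal sketch (not the authors' implementation; the simulated data, PCA settings and random-forest classifier are assumptions) computes per-variable contributions to the squared prediction error (SPE/Q) from a PCA model and feeds either those contributions or mean-centred sensor data to a classifier:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# assumed stand-in for unfolded batch data: rows = batches, columns = sensor features
X_normal = rng.normal(size=(200, 10))
pca = PCA(n_components=3).fit(X_normal)   # monitoring model built on normal operation

def spe_contributions(X, pca):
    """Per-variable contributions to the squared prediction error (SPE/Q):
    squared residuals after reconstruction from the retained components."""
    X_hat = pca.inverse_transform(pca.transform(X))
    return (X - X_hat) ** 2

# assumed faulty batches: two fault classes disturbing different sensors
X_fault = np.vstack([X_normal[:100] + np.eye(10)[0] * 3,   # class 0: step on sensor 0
                     X_normal[100:] + np.eye(10)[4] * 3])   # class 1: step on sensor 4
y = np.repeat([0, 1], 100)

# classify using contributions vs. mean-centred sensor data
contrib = spe_contributions(X_fault, pca)
sensors = X_fault - X_normal.mean(axis=0)   # deviation from the average trajectory
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("contributions:", cross_val_score(clf, contrib, y, cv=5).mean())
print("sensor data:  ", cross_val_score(clf, sensors, y, cv=5).mean())
```

    In the paper's setting, the classes would correspond to known historical fault types, and the pretreatments described above (normalization around the average batch trajectory, absolute values, time windows) would be applied before classification.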
  • Business Strategy Embraces Data Analytics to Meet New Challenges in the Shipping Industry

    Authors: Shirley Coleman (ISRU, Newcastle University), Ibna Zaman (ISRU, Newcastle University), Rose Norman (MAST, Newcastle University), Kayvan Pazouki (MAST, Newcastle University)
    Primary area of focus / application: Business
    Secondary area of focus / application: Mining
    Keywords: Emissions, Monitoring, Visualisation, Consumption, Offshore
    Submitted at 9-Mar-2015 22:00 by Shirley Coleman
    Accepted (view paper)
    9-Sep-2015 10:40 Business Strategy Embraces Data Analytics to Meet New Challenges in the Shipping Industry
    More and more companies are realising the power of data analytics and are including a re-inspection and augmentation of their data in their business strategy. A small to medium-sized enterprise (SME) in the shipping industry initially developed products to monitor fuel consumption for cost-effective travel. Under its new data-aware strategy, the large quantities of data collected for this purpose can be presented in an accessible and readily available form to provide an additional, direct, value-added asset to customers. Having established the data collection mechanism, the company can now collect further measurements on ships' performance relatively easily, and by using statistical and data analytic tools more can be extracted from these data to bring about real operational improvements. Alongside the demands of efficiency and profitability, the marine and offshore sector is subject to legislation implemented by the International Maritime Organisation (IMO); one imminent effect of this is the need for operators to reduce engine emissions. The company's data can be utilised to improve models for emissions monitoring. The paper demonstrates the findings from the project.
  • Behavior Based Price Enabled by Predictive Modelling

    Authors: Andrea Ahlemeyer-Stubbe (ahlemeyer-stubbe)
    Primary area of focus / application: Mining
    Secondary area of focus / application: Business
    Keywords: Behavior based price, Predictive modelling, Big Data, Automation
    Submitted at 11-Mar-2015 19:43 by Andrea Ahlemeyer-Stubbe
    Accepted
    9-Sep-2015 11:10 Behavior Based Price Enabled by Predictive Modelling
    To optimize profit and to offer every consumer a good price-value relation, it is important to predict individual price affinity. With this, you are able to decide which price or money-related offers, such as coupons, vouchers or free gifts, are necessary to sell the goods to an individual now. You may have noticed that on travel websites or other sites different users get different offers for the same product, or that the price increases if you come back after a while. The prediction models behind this depend on the consumer's actual behavior, time, place, products and historical actions.
    To detect fast changes in customer behavior or to react in as focused a manner as possible, predictive modeling must be done in good quality to get effective predictions of customer price affinities and it has to be done fast to be relevant under business aspects. Modeling speed is of great importance in industry as time is a crucial factor. This necessity requires a different technical set up for model development to fulfill both needs: quality and development speed. Today most companies like to develop their models individually with the help of specialists. But for a lot of companies, this way takes too long; even though the models are excellent, the time to develop them sometimes kills the advantages of a better prediction. This article describes the general structure and ideas how to implement industry-focused model production that will help to react quickly to changing behavior with relevant prices.