ENBIS-12 in Ljubljana

9 – 13 September 2012
Abstract submission: 15 January – 10 May 2012

My abstracts


The following abstracts have been accepted for this event:

  • Split-plot Design and Mixed Response Surface Models

    Authors: Rossella Berni (University of Florence)
    Primary area of focus / application: Design and analysis of experiments
    Keywords: Split-plot design, Robust design, Mixed Response surface models, Variance components
    Submitted at 13-Apr-2012 11:50 by Rossella Berni
    Accepted
    12-Sep-2012 10:30 Split-plot Design and Mixed Response Surface Models
    This paper deals with random effects and variance components in the split-plot design, which is recognized as a valid plan for robust design in technological applications. Our aim is to analyze this experimental design from two points of view. First, the theoretical basis of the split-plot is evaluated as a specific and valid form of experimentation for robust design, paying attention to the interactions between control and environmental variables as random effects. Second, we consider the split-plot and optimization in the multiple-response case, using a single objective function that accounts for the different contributions of the fixed and random parts of each estimated surface. The proposal is also illustrated through a case study.
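    As a rough illustration of the variance-component estimation involved (not the authors' method), here is a minimal sketch: a balanced split into whole plots and subplots is simulated, and the whole-plot and subplot variance components are recovered from the two ANOVA mean squares. All numbers are made up.

```python
import numpy as np

# Hypothetical balanced split-plot simulation: 8 whole plots, 4 subplot
# runs each.  y_ij = mu + w_i + e_ij with whole-plot effects
# w_i ~ N(0, s2_w) (the random part) and subplot errors e_ij ~ N(0, s2_e).
rng = np.random.default_rng(0)
a, n = 8, 4                       # whole plots, subplots per whole plot
s2_w, s2_e, mu = 4.0, 1.0, 10.0   # true variance components and mean
w = rng.normal(0.0, np.sqrt(s2_w), size=a)
y = mu + w[:, None] + rng.normal(0.0, np.sqrt(s2_e), size=(a, n))

# Method-of-moments (ANOVA) estimates from the two mean squares:
#   E[MS_within]  = s2_e
#   E[MS_between] = s2_e + n * s2_w
ms_within = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (a * (n - 1))
ms_between = n * ((y.mean(axis=1) - y.mean()) ** 2).sum() / (a - 1)
s2_e_hat = ms_within
s2_w_hat = max((ms_between - ms_within) / n, 0.0)
print(s2_e_hat, s2_w_hat)
```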
  • Challenges in Using Mixture DoE to Understand the Complex Phase Behaviour of a 3-Component Formulation

    Authors: Phil Kay (Fujifilm Imaging Colorants Ltd.)
    Primary area of focus / application: Design and analysis of experiments
    Keywords: Mixture, Formulation, Phase diagram, Visualisation
    Submitted at 13-Apr-2012 14:04 by Phil Kay
    Accepted
    11-Sep-2012 17:30 Challenges in Using Mixture DoE to Understand the Complex Phase Behaviour of a 3-Component Formulation
    The aim was to arrive at a recipe for a 3-component mixture that has the maximum content of the active component and is stable and homogeneous across a range of temperatures. Initial work towards this had proved very resource-intensive and had not yielded much useful information. Mixture DoE was therefore used to determine the experimental formulations that would most efficiently test models that describe the change in the response (homogeneity) across the possible composition/temperature space.

    Various modelling approaches (response transformation, logistic regression, neural network) were tried, but ultimately it proved impossible to build a useful predictive model from an economical set of design points. Nevertheless, I will show that the DoE method was instrumental in achieving our stated aim with much better efficiency than the traditional approach. I will also discuss whether other approaches would be more efficient for obtaining phase-behaviour information.
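    As a sketch of the kind of candidate set involved (the paper's actual design is not reproduced here), the classical {q, m} simplex-lattice for a 3-component mixture can be generated as follows:

```python
from itertools import product

def simplex_lattice(q, m):
    """All q-component mixtures whose proportions are multiples of 1/m
    and sum to 1 -- the classical {q, m} simplex-lattice design."""
    return [tuple(c / m for c in combo)
            for combo in product(range(m + 1), repeat=q)
            if sum(combo) == m]

design = simplex_lattice(3, 3)   # 3 components, proportions in steps of 1/3
print(len(design))               # 10 candidate blends
```

    In practice such a candidate set would be constrained further (e.g. a minimum content of the active component) before choosing the runs that best discriminate between competing models.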
  • Evaluation of Measurement Uncertainty and Regulatory Context: An Application in Fire Engineering

    Authors: Alexandre Allard (Laboratoire National de Métrologie et d'Essais), Nicolas Fischer (Laboratoire National de Métrologie et d'Essais), Franck Didieux (Laboratoire National de Métrologie et d'Essais), Eric Guillaume (Laboratoire National de Métrologie et d'Essais)
    Primary area of focus / application: Reliability
    Keywords: Fire Safety, Computational Code, Sensitivity Analysis, Measurement uncertainty, Probability of exceeding a threshold
    Submitted at 13-Apr-2012 14:39 by Alexandre Allard
    Accepted
    11-Sep-2012 16:50 Evaluation of Measurement Uncertainty and Regulatory Context: An Application in Fire Engineering
    One of the objectives of fire safety regulation is to ensure that evacuation paths remain practicable and safe for people to evacuate in case of fire in a building. The computational code CFAST, together with a pre- and post-processing Excel sheet, computes quantities of interest that become critical for the practicability of the evacuation paths when their values lie beyond a given regulatory threshold. These quantities, such as the upper- and lower-layer temperatures or the smoke layer height, are determined by more than twenty input quantities.

    Consequently, these quantities need to be investigated. First, their uncertainty is considered both in terms of central dispersion and in terms of their behaviour in the extreme values of the probability distribution. Second, the input quantities most influential in explaining their variability are identified through a sensitivity analysis. Different techniques are used for both objectives, such as Monte Carlo simulation to estimate the probability distribution and parameters such as the mean, the standard deviation, or the probability of exceeding a threshold. Different methods for sensitivity analysis are also discussed with respect to computational cost. Local polynomial estimation provided a suitable sensitivity evaluation and highlighted the most influential input quantities.
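    The Monte Carlo workflow described above can be sketched as follows; the fire model here is a made-up toy function standing in for CFAST, and the correlation-based ranking is only a crude stand-in for the local polynomial sensitivity estimation used in the paper:

```python
import numpy as np

# Toy stand-in for the quantity of interest: an upper-layer temperature
# T(x) driven by three uncertain inputs (heat release rate, opening area,
# ceiling height).  All distributions and coefficients are made up.
rng = np.random.default_rng(1)
N = 100_000
hrr = rng.normal(1000.0, 150.0, N)      # heat release rate (kW)
area = rng.uniform(1.0, 3.0, N)         # opening area (m^2)
height = rng.normal(2.5, 0.1, N)        # ceiling height (m)
T = 20.0 + 0.2 * hrr / (area * height)  # toy upper-layer temperature (deg C)

# Central dispersion and probability of exceeding a regulatory threshold.
threshold = 80.0
p_exceed = np.mean(T > threshold)
print(T.mean(), T.std(), p_exceed)

# Crude sensitivity ranking via squared correlation with each input.
inputs = {"hrr": hrr, "area": area, "height": height}
r2 = {k: np.corrcoef(v, T)[0, 1] ** 2 for k, v in inputs.items()}
print(sorted(r2, key=r2.get, reverse=True))
```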
  • A Poisson-Gamma Hierarchical Model for Estimating the Complication Rates of Bladder Cancer

    Authors: Özge Karadağ (Hacettepe University), Gül Ergü (Hacettepe University)
    Primary area of focus / application: Modelling
    Keywords: Poisson-gamma hierarchical model, Hierarchical Bayes, Hyperparameter estimation, Gibbs sampling
    Submitted at 13-Apr-2012 14:45 by Özge Karadağ
    Accepted
    11-Sep-2012 10:00 A Poisson-Gamma Hierarchical Model for Estimating the Complication Rates of Bladder Cancer
    In clinical studies, experiments are usually performed at different times, in different laboratories, and on different individuals. In addition, some individuals can be observed for only a limited time, so the parameter estimates are based on limited data. For this reason it is desirable to combine the individual estimates in some way to obtain more reliable and appropriate results.
    In this study, a hierarchical model structure and a Bayesian procedure are used to obtain more accurate estimates. A Bayesian Poisson-gamma hierarchical model is built to estimate the individual complication rates for bladder cancer.
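    A minimal sketch of a Gibbs sampler for such a Poisson-gamma hierarchical model (with made-up counts and follow-up times, not the study's data):

```python
import numpy as np

# Model:
#   y_i | lam_i ~ Poisson(lam_i * t_i)     complication counts
#   lam_i | b   ~ Gamma(a, rate=b)         individual rates
#   b           ~ Gamma(c, rate=d)         hyperprior
# Both full conditionals are conjugate, so Gibbs sampling is exact.
rng = np.random.default_rng(2)
y = np.array([1, 0, 3, 2, 0, 5])          # complications per patient (made up)
t = np.array([4., 2., 8., 6., 3., 9.])    # follow-up times (made up)
a, c, d = 1.0, 1.0, 1.0                   # fixed hyperparameters
n = len(y)

n_iter, burn = 4000, 1000
b = 1.0
lam_draws = np.empty((n_iter, n))
for it in range(n_iter):
    # lam_i | y, b ~ Gamma(a + y_i, rate = b + t_i)
    lam = rng.gamma(a + y, 1.0 / (b + t))
    # b | lam ~ Gamma(c + n*a, rate = d + sum(lam_i))
    b = rng.gamma(c + n * a, 1.0 / (d + lam.sum()))
    lam_draws[it] = lam
post_mean = lam_draws[burn:].mean(axis=0)
print(post_mean)   # shrunk individual complication-rate estimates
```

    Shrinkage toward the common prior mean is what lets sparsely observed individuals borrow strength from the rest.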
  • How to Choose a Fragility Curve? Bayesian Decision Theory Applied to Uncertainty Analysis in an Industrial Context

    Authors: Guillaume Damblin (AgroParisTech / EDF R&D), Merlin Keller (EDF R&D), Alberto Pasanisi (EDF R&D), Irmela Zentner (EDF R&D), Pierre Barbillon (AgroParis Tech), Eric Parent (AgroParis Tech)
    Primary area of focus / application: Reliability
    Keywords: Reliability, Engineering - Industry, Bayesian decision theory, seismic fragility curve
    Submitted at 13-Apr-2012 15:44 by Merlin Keller
    Accepted
    11-Sep-2012 17:30 How to Choose a Fragility Curve? Bayesian Decision Theory Applied to Uncertainty Analysis in an Industrial Context
    The fragility curve quantifies the risk to a structure subjected to seismic loading, through the probability of failure conditional on the level of the destructive action. Assessing the fragility curve associated with a given structure is typically done either through the EPRI standard approach, which is mainly based on the results of the design study and expert information, or through classical statistical techniques, such as maximum likelihood estimation, based on experimental data.

    We propose a novel approach to estimating the fragility curve, described by a parameterized form, based on data sets from both computer and physical experiments. We adopt a Bayesian framework; that is, we explicitly account for the uncertainty about the fragility curve through a probability distribution. This lets us benefit from both expert information and the available data, thereby reducing the uncertainty about the curve.

    More importantly, Bayesian decision theory then makes it possible to build an estimator of the fragility curve that takes into account the consequences of over- and underestimating the probability of failure. To this end, we propose cost functions measuring the gap between the unknown curve and its estimate, whose qualities are described in terms of the consequences of its use in engineering. The benefits of this approach are illustrated on both simulated and real datasets, where we demonstrate how to choose a fragility curve tailored to the stakes motivating its estimation.
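    A toy illustration of the decision-theoretic idea, assuming a lognormal fragility curve and a made-up posterior for its parameters: under an asymmetric "check" cost that charges c_u per unit of underestimation and c_o per unit of overestimation, the Bayes estimate of the failure probability is its c_u / (c_u + c_o) posterior quantile, which is conservative when underestimation is the costlier error.

```python
import numpy as np
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Made-up posterior samples for a lognormal fragility curve
#   Pf(a) = Phi(log(a / alpha) / beta)
rng = np.random.default_rng(3)
alpha = rng.lognormal(mean=np.log(0.9), sigma=0.1, size=10_000)  # median capacity (g)
beta = rng.lognormal(mean=np.log(0.4), sigma=0.1, size=10_000)   # log-std

a_level = 0.5                                      # seismic level of interest (g)
pf_samples = np.array([phi(np.log(a_level / al) / be)
                       for al, be in zip(alpha, beta)])

c_under, c_over = 4.0, 1.0                         # underestimation 4x costlier
pf_bayes = np.quantile(pf_samples, c_under / (c_under + c_over))
pf_mean = pf_samples.mean()
print(pf_mean, pf_bayes)                           # Bayes estimate sits above the mean
```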
  • A Bayesian Approach for Inference in POD Models

    Authors: Merlin Keller (EDF R&D), Nicolas Bousquet (EDF R&D)
    Primary area of focus / application: Reliability
    Keywords: Reliability, Engineering - Industry, Bayesian inference, probability of detection, non-destructive experiments
    Submitted at 13-Apr-2012 15:49 by Merlin Keller
    Accepted
    11-Sep-2012 12:10 A Bayesian Approach for Inference in POD Models
    Among the stochastic inputs involved in the probabilistic calculation of the reliability of industrial equipment and components of electric power plants, the flaw size is often a key variable whose distribution has to be carefully estimated.

    The assessment of this distribution is usually not trivial. Indeed, the data available for this purpose typically come from non-destructive experiments (NDE), affected by observational noise and progressive censoring due to the detection limits of the testing process. This censoring is characterized by the probability of detection (POD) function, that is, the probability of detecting a flaw conditional on its size, whose exact value is generally uncertain.

    In this paper, we show how combining these data with observations from destructive experiments makes it possible to estimate both the flaw size distribution and the POD function. Although maximum likelihood techniques can be used to this end, they may be inappropriate given the wide uncertainty about the POD function and the difficulty of constructing valid confidence intervals.

    Instead, we propose a Bayesian approach to derive a posterior distribution for the POD function, based on both expert information and the available data. A point estimate can then be derived by minimizing the posterior expectation of a cost function. This makes it possible to penalize under- and over-estimation of the failure probability differently, and yields conservative estimates for both the POD function and the flaw size distribution. We demonstrate the benefits of this approach on both simulated and real flaw size data sets.
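    A minimal sketch of Bayesian hit/miss POD inference, with made-up data and a simpler model than the paper's (which also couples the flaw size distribution and destructive-test data). A log-logistic POD form is assumed and its two parameters are sampled by random-walk Metropolis:

```python
import numpy as np

# Assumed POD model: POD(s) = 1 / (1 + exp(-(b0 + b1 * log s))).
# Made-up hit/miss inspection data from a known ground truth.
rng = np.random.default_rng(4)
size = rng.uniform(0.5, 5.0, 200)            # flaw sizes (mm), made up
ls = np.log(size)
true_b0, true_b1 = -1.0, 2.0
hit = rng.random(200) < 1.0 / (1.0 + np.exp(-(true_b0 + true_b1 * ls)))

def log_post(b0, b1):
    """Bernoulli log-likelihood plus a vague N(0, 10^2) prior on each."""
    p = 1.0 / (1.0 + np.exp(-(b0 + b1 * ls)))
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    return (np.sum(np.where(hit, np.log(p), np.log1p(-p)))
            - (b0 ** 2 + b1 ** 2) / 200.0)

b = np.array([0.0, 1.0])                     # starting point
cur = log_post(*b)
draws = []
for _ in range(3000):
    prop = b + rng.normal(0.0, 0.2, size=2)  # random-walk proposal
    cand = log_post(*prop)
    if np.log(rng.random()) < cand - cur:    # Metropolis accept/reject
        b, cur = prop, cand
    draws.append(b.copy())
post = np.array(draws[1000:])                # drop burn-in
print(post.mean(axis=0))                     # posterior mean of (b0, b1)
```

    A cost-weighted point estimate of the POD at a given flaw size could then be read off the posterior draws, as in the decision-theoretic approach described above.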