ENBIS-12 in Ljubljana

9–13 September 2012
Abstract submission: 15 January – 10 May 2012

The following abstracts have been accepted for this event:

  • Power and Sample Size Calculation for General Full Factorial Design

    Authors: Melike Bahçecitapar (Hacettepe University), Özge Karadağ (Hacettepe University), Serpil Aktaş (Hacettepe University)
    Primary area of focus / application: Design and analysis of experiments
    Keywords: full factorial design, statistical power, sample size, analysis of experiments
    Submitted at 10-Apr-2012 08:06 by Özge Karadağ
    Accepted
    10-Sep-2012 16:55 Power and Sample Size Calculation for General Full Factorial Design
    Full factorial designs measure response variables at all combinations of the experimental factor levels. In a general full factorial design, there are several factors, each with multiple levels. The high cost of a factorial design may prevent researchers from performing complex investigations. This study presents sample size and power calculations for general full factorial designs. Calculating sample size and power before the analysis helps avoid wasting resources on collecting surplus data. The calculations are based on the number of levels of each factor, the number of replicates, the maximum difference between main-effect means, and the standard deviation. Power and sample size determination is illustrated for several varieties of general full factorial models. The results shed light on important aspects of the planned study.
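
    As a hedged illustration of this type of calculation (not the authors' code; the worst-case effect allocation and the 3 x 4 example are assumptions), the following sketch computes the power of the F test for one main effect in a balanced two-factor full factorial design from the number of levels, the number of replicates, the maximum difference between main-effect means, and the standard deviation:

```python
# Illustrative sketch: power of the F test for the factor-A main effect in a
# balanced a x b full factorial design with r replicates per cell, assuming
# the conventional worst case in which two level means differ by delta and
# the remaining level means equal the grand mean.
from scipy.stats import f, ncf

def main_effect_power(a, b, r, delta, sigma, alpha=0.05):
    n_per_level = b * r                              # observations at each level of factor A
    lam = n_per_level * delta**2 / (2 * sigma**2)    # noncentrality parameter
    df1 = a - 1                                      # numerator degrees of freedom
    df2 = a * b * (r - 1)                            # error df for the full model with interaction
    f_crit = f.ppf(1 - alpha, df1, df2)              # critical value under the null hypothesis
    return 1 - ncf.cdf(f_crit, df1, df2, lam)        # P(reject H0) under the alternative

# Example: a 3 x 4 design with 2 replicates, detecting a 1.5-sigma difference
print(round(main_effect_power(a=3, b=4, r=2, delta=1.5, sigma=1.0), 3))
```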
  • Sensitivity Analysis of a Welding Thermomechanical Simulation Model

    Authors: Anne-Laure Popelin (EDF R&D), Leila Guenad (EDF R&D), Bertrand Iooss (EDF R&D)
    Primary area of focus / application: Reliability
    Keywords: residual stresses, global sensitivity analysis, uncertainty management, welding simulation, visualization tools
    Submitted at 10-Apr-2012 15:25 by Anne-Laure Popelin
    Accepted
    11-Sep-2012 17:10 Sensitivity Analysis of a Welding Thermomechanical Simulation Model
    Numerical welding simulation is based on the definition of a thermomechanical model and uses the finite element method. It aims to assess the residual stresses generated by welding, a major issue in estimating the lifetime of welded components. Modeling the welding operation requires the characterization of material and operative parameters. Material input data must be defined from ambient temperature up to the melting temperature. Such data are poorly known at high temperatures and involve many uncertainties, and their impact on the residual stresses is currently difficult to assess. Determining which parameters most influence the residual stresses would therefore allow those stresses to be estimated faster, by concentrating effort on the most influential parameters.
    This study concerns the numerical computation of the residual stresses generated in a test plate by the fusion welding process. The experiment consisted of depositing a weld bead along the longitudinal centre-line of an austenitic 316L plate using an automatic Tungsten Inert Gas (TIG) welding process with 316L filler material.
    A global sensitivity analysis is applied to this numerical welding simulation model in order to rank the influence of the input variables on the outputs of the code, such as the residual stresses. A specific sensitivity analysis method is chosen according to the main characteristics of the problem: the number of input variables can be large, the output variables are spatially distributed, and only a small number of model runs can be performed. Visualization tools help to summarize the useful information in understandable mesh-based graphs, so that decisions can be taken more easily.
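
    As a hedged, generic illustration of screening inputs when model runs are scarce (not the specific method or code used in this study), the sketch below computes elementary effects on a radial one-at-a-time design, with a cheap analytic stand-in for the finite-element code:

```python
# Illustrative screening sketch; model() is a cheap stand-in for the expensive
# thermomechanical finite-element code, and the method (elementary effects on
# a radial one-at-a-time design) is one generic choice, not necessarily the
# one used in the study.
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # placeholder for a call to the welding simulation code
    return x[0]**2 + 0.5 * x[1] + 0.01 * x[2] + x[0] * x[1]

d = 3        # number of uncertain inputs (e.g. material parameters)
r = 10       # number of baseline points -> r * (d + 1) model runs in total
step = 0.1   # one-at-a-time perturbation step in the unit hypercube

effects = np.zeros((r, d))
for i in range(r):
    base = rng.uniform(0.0, 0.9, size=d)   # baseline point, leaving room for the step
    y0 = model(base)
    for j in range(d):
        pert = base.copy()
        pert[j] += step                    # perturb one input at a time
        effects[i, j] = (model(pert) - y0) / step

mu_star = np.abs(effects).mean(axis=0)     # mean absolute elementary effect per input
ranking = np.argsort(mu_star)[::-1]
print("inputs ranked from most to least influential:", ranking)
print("mean absolute elementary effects:", np.round(mu_star[ranking], 3))
```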
  • Managing Information Technologies Capacities through Data Analysis: Data Availability & Exploitation

    Authors: Michel Lutz (Information Technologies (IT)), Lance Mitchell (Information Technologies (IT)), Xavier Boucher (Information Technologies (IT))
    Primary area of focus / application: Modelling
    Keywords: organization efficiency, information technology, capacity management, Metadata, predictive models, monitoring
    Submitted at 10-Apr-2012 21:59 by Michel Lutz
    Accepted
    10-Sep-2012 12:50 Managing Information Technologies Capacities through Data Analysis: Data Availability & Exploitation
    IT architectures are critical to improving organizational efficiency (Weill & Ross, 2004). Consequently, IT managers must anticipate demand so as to provide, without interruption, enough IT capacity to support the business's operational plans and forecasts and to ensure the continuous functioning of manufacturing systems (Kloesterboer, 2011).

    Nowadays, IT architectures are composed of multiple connected and interdependent components. As a consequence, IT systems give rise to emergent phenomena, which often cannot be anticipated by considering individual components alone (Mitleton-Kelly, 2011).

    To cope with such complexity, IT executives increasingly rely on data analysis to manage the capacity of information systems (Gunther, 2007; Allspaw, 2008). IT management best practices, such as the ITIL framework, recommend setting up information systems dedicated to facilitating access to all the historical data needed to manage capacity (Lloyd & Rudd, 2007).

    Operationalizing such a data-driven approach to IT capacity is often difficult, notably when confronted with large-scale, complex information systems. However, these complex systems are typically the ones requiring innovative approaches to capacity management. Our presentation proposes original concepts enabling the constitution and exploitation of an IT capacity information system:

    1) Relevant data are likely to be dispersed across disparate databases (Lloyd & Rudd, 2007). Aggregating and filtering these data is a prerequisite to any exploitation, and fulfilling this requirement can present many problems. A solution based on the effective use of metadata will be introduced to provide IT managers with all the appropriate information.

    2) Once available, the data then need to be properly exploited. IT managers have to handle some challenging questions: how to build predictive IT capacity models using business activity data as inputs (Kloesterboer, 2011)? How to define automatic monitoring tools (Dugmore & Lacy, 2005) to detect abnormal system behavior? Statistical methods that address these challenges will be proposed (a minimal sketch of both ideas follows this abstract).

    Real cases will be used to bring these concepts to life.
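
    As a hedged toy illustration of points 1) and 2) above (hypothetical data and variable names, not material from the presentation), the sketch below regresses a server metric on a business-activity driver and then monitors the residuals with a simple 3-sigma rule:

```python
# Illustrative sketch: a minimal capacity model that regresses CPU utilisation
# on daily business transactions, flags abnormal days from the residuals, and
# forecasts the capacity needed for a planned business volume. All data here
# are simulated for the example.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical history: daily transaction counts and observed CPU utilisation (%)
transactions = rng.integers(5_000, 20_000, size=120).astype(float)
cpu = 5.0 + 0.003 * transactions + rng.normal(0.0, 2.0, size=120)
cpu[100] += 15.0                                   # inject one abnormal day

# 1) Predictive capacity model: CPU ~ business activity (ordinary least squares)
X = np.column_stack([np.ones_like(transactions), transactions])
beta, *_ = np.linalg.lstsq(X, cpu, rcond=None)
predicted = X @ beta

# 2) Automatic monitoring: flag days whose residual exceeds 3 standard deviations
residuals = cpu - predicted
sigma = residuals.std(ddof=2)
alarms = np.flatnonzero(np.abs(residuals) > 3 * sigma)
print("days flagged as abnormal:", alarms)

# 3) Forecast the capacity needed for a planned business volume
planned_volume = 30_000
print("forecast CPU at 30,000 transactions/day: %.1f%%" % (beta[0] + beta[1] * planned_volume))
```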
  • A Critical Assessment of Statistical Process Control (SPC) Implementation Frameworks and Agenda for Future Research

    Authors: Sarina binti Abdul Halim Lim (University of Strathclyde), Jiju Antony (University of Strathclyde)
    Primary area of focus / application: Quality
    Keywords: Statistical Process Control, frameworks, critical success factors, continuous improvement
    Submitted at 11-Apr-2012 13:35 by Sarina binti Abdul Halim Lim
    Accepted
    10-Sep-2012 17:25 A Critical Assessment of Statistical Process Control (SPC) Implementation Frameworks and Agenda for Future Research
    Statistical process control (SPC) is a powerful approach to analyse, reduce and manage process variability in the pursuit of continuous improvement of product and service quality. The prime goals of SPC are the identification and elimination of special causes of variation that could result in process instability and unpredictable process performance. Even though SPC was introduced decades ago, many organisations across various industries still struggle with its introduction, implementation and deployment. Effective SPC implementation may be affected by factors from both manufacturing and non-manufacturing perspectives, such as technical, statistical, organisational, methodological and social factors. The authors found that many academic institutions teaching SPC focus mainly on control charts and their mathematical aspects rather than on the implementation process and the softer issues associated with this powerful technique (e.g. communicating the need for SPC, operators' ownership of the process, and teamwork in taking the necessary actions on the process to manage out-of-control situations). The authors strongly argue that, although the control chart plays a crucial role within an SPC programme, more emphasis must be given to the framework for implementing and deploying SPC and to the critical success factors for its implementation. A framework or roadmap for SPC implementation is therefore essential for applying it in a systematic and disciplined manner within an industrial setting. This paper attempts to critically evaluate and compare the existing SPC implementation frameworks using the current literature. The last part of the paper examines the strengths and weaknesses of these frameworks and sets the agenda for further research.
  • An EGO-type Algorithm based on Kernel Interpolation

    Authors: Momchil Ivanov (TU Dortmund), Sonja Kuhnt (TU Dortmund)
    Primary area of focus / application: Design and analysis of experiments
    Keywords: Efficient Global Optimization (EGO), metamodel, Kriging, kernel interpolation, uncertainty prediction
    Submitted at 11-Apr-2012 15:54 by Momchil Ivanov
    11-Sep-2012 10:20 An EGO-type Algorithm based on Kernel Interpolation
    The Efficient Global Optimization (EGO) algorithm has become a standard tool for optimizing black-box functions in the field of computer experiments. One of the reasons for the popularity of EGO is its symbiosis with the prominent Kriging metamodel. The Expected Improvement (EI) criterion, the key element of EGO, critically relies on the ability of Kriging to assess its own uncertainty at predicted locations. Although numerous alternative metamodels exist, e.g. radial basis functions, spline methods and local polynomial regression (Fang, Li and Sudjanto 2006), none of these easily provides an uncertainty measure. Consequently, no other metamodel has so far been available with which EGO can function. Recently, however, kernel interpolation has been introduced, a metamodel which also provides an uncertainty predictor.
    A key difference from Kriging is that kernel interpolation recognizes that non-stationary functions are expected to behave differently from one region to another, and this is reflected in its uncertainty estimates. Additionally, the kernel interpolation uncertainty estimate diverges as the prediction point moves away from the observed points. With this in mind, kernel interpolation might be preferable to Kriging in situations where the underlying function is non-stationary. We will investigate to what extent this advantage leads to better optimization results.
    The aim of this talk is to introduce the generalization of EGO with kernel interpolation and ultimately to compare it with Kriging-based EGO by means of standard example functions.
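
    As a hedged sketch of the EI criterion referred to above (illustrative only, assuming a Gaussian predictor; not the authors' implementation, and independent of whether Kriging or kernel interpolation supplies the predictions), the following computes the expected improvement from a metamodel's predicted mean mu(x) and uncertainty s(x):

```python
# Expected Improvement for minimisation: EI(x) = E[max(y_min - Y(x), 0)],
# where Y(x) ~ N(mu(x), s(x)^2) is the metamodel's prediction at x.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, s, y_min):
    mu = np.asarray(mu, dtype=float)
    s = np.asarray(s, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (y_min - mu) / s
        ei = (y_min - mu) * norm.cdf(z) + s * norm.pdf(z)
    return np.where(s > 0, ei, 0.0)   # no expected improvement where the predictor is certain

# Three candidate points with predicted means and uncertainties, best observed value 1.2
print(expected_improvement(mu=[1.0, 1.5, 2.0], s=[0.3, 0.5, 0.0], y_min=1.2))
```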
  • Locally D-optimal Designs in Multi-factor Non-linear Models

    Authors: Carmelo Rodriguez Torreblanca (Universidad de Almería), Isabel Ortiz Rodriguez (Universidad de Almería), Ignacio Martinez Lopez (Universidad de Almería)
    Primary area of focus / application: Design and analysis of experiments
    Keywords: D-optimality, multi-factor non-linear models, optimum designs, product designs
    Submitted at 12-Apr-2012 13:30 by Carmelo Rodriguez Torreblanca
    Most experiments are described by multi-factor non-linear models. The construction of optimum designs is more complicated for these models than for models with a single factor. It is therefore of interest to obtain optimum designs for multi-factor models in terms of optimum designs for their one-dimensional components. This paper deals with the construction of locally D-optimal designs for multi-factor non-linear models from locally D-maximin optimal designs for the corresponding marginal models.
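
    As a hedged illustration of the quantities involved (an assumed one-parameter exponential model and made-up design points, not the paper's construction), the sketch below evaluates the local D-criterion for a one-factor design and builds a two-factor product design from two marginal designs:

```python
# Illustrative sketch only: local D-criterion for an assumed one-factor model
# eta(x; theta) = exp(-theta * x), evaluated at a local guess theta0, plus a
# product design formed from two one-dimensional (marginal) designs.
import numpy as np
from itertools import product

def info_matrix(points, weights, theta0):
    """Fisher information matrix for eta(x) = exp(-theta x) at theta = theta0."""
    grads = [np.array([-x * np.exp(-theta0 * x)]) for x in points]  # d eta / d theta
    return sum(w * np.outer(g, g) for w, g in zip(weights, grads))

# Marginal design for factor x: two support points with equal weights
points_x, weights_x = [0.5, 2.0], [0.5, 0.5]
M = info_matrix(points_x, weights_x, theta0=1.0)
print("local D-criterion log det M =", np.log(np.linalg.det(M)))

# Marginal design for a second factor z, then the product design: the support
# is the Cartesian product of the marginal supports and the weights multiply.
points_z, weights_z = [1.0, 3.0], [0.5, 0.5]
support = list(product(points_x, points_z))
weights = [wx * wz for wx, wz in product(weights_x, weights_z)]
print("product design support:", support)
print("product design weights:", weights)
```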