ENBIS-12 in Ljubljana

9–13 September 2012
Abstract submission: 15 January – 10 May 2012

My abstracts


The following abstracts have been accepted for this event:

  • How to Design Experiments when Categoric Mixture Components Go to Zero

    Authors: Pat Whitcomb (Stat-Ease, Inc.)
    Primary area of focus / application: Design and analysis of experiments
    Keywords: design of experiments (DOE), mixtures, mixture experiments, categoric factors
    Submitted at 26-Jan-2012 20:35 by Pat Whitcomb
    Accepted
    11-Sep-2012 17:10 How to Design Experiments when Categoric Mixture Components Go to Zero
    Mixture experiments are used when one wants to vary ingredients and the response depends on their proportions relative to one another. In some mixture experiments the formulator wants one (or more) of the ingredients to be present at alternate, mutually exclusive categoric levels. For example, an ingredient might be available from one of three vendors, or perhaps only one of two different materials can be used in a given formulation. The usual approach is simply to cross the mixture model with the categoric model. This works well so long as the proportion of the categoric ingredient does not go to zero. If any categoric component does go to zero, the crossed model contradicts itself by predicting different response values for the different levels of the categoric factor, even though the ingredient is completely absent from the blend! This presentation proposes a new form of mixture model that corrects this problem when the ingredient is at zero and becomes equivalent to the crossed model when the ingredient's proportion exceeds zero.
    A preservative blend (used to maximize shelf life of a food product) with a categoric factor whose proportion goes to zero is used to illustrate the method.
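    This contradiction is easy to reproduce numerically. Below is a minimal Python sketch (not the presenter's code; all coefficients are made up) of a three-component Scheffé linear model crossed with a two-level categoric factor attached to component x3:

      import numpy as np

      # Hypothetical coefficients for a 3-component Scheffe linear mixture
      # model crossed with a 2-level categoric factor z (coded -1/+1),
      # e.g. vendor A/B of component x3. All values are illustrative.
      beta  = np.array([10.0, 8.0, 6.0])   # pure-component effects
      gamma = np.array([1.5, -0.5, 2.0])   # categoric-by-component terms

      def crossed_model(x, z):
          # Standard crossed model: y = sum_i (beta_i + gamma_i * z) * x_i
          return np.dot(beta + gamma * z, x)

      blend = np.array([0.6, 0.4, 0.0])    # the categoric ingredient is absent
      print(crossed_model(blend, z=+1))    # vendor A: 9.9
      print(crossed_model(blend, z=-1))    # vendor B: 8.5

    The two predictions differ (9.9 vs. 8.5 here) even though the vendor-specific ingredient is entirely absent from the blend, which is precisely the inconsistency the proposed model form removes.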
  • Applications of Bayesian Networks

    Authors: Ron S. Kenett (KPA / University of Turin)
    Primary area of focus / application: Modelling
    Keywords: Cause and Effect, Bayesian Networks, Bayesian Analysis, Conditional Distributions
    Submitted at 15-Feb-2012 08:14 by Ron Kenett
    Accepted
    10-Sep-2012 10:20 Applications of Bayesian Networks
    Modelling cause and effect relationships has been a major challenge for statisticians in a wide range of application areas. Bayesian Networks combine graphical analysis with Bayesian analysis to represent causality maps linking measured and target variables. Such maps can be used for diagnostics and predictive analytics. The talk will present an introduction to Bayesian Networks and their applications to web site usability (Harel et al.; Kenett et al., 2009), operational risks (Kenett and Raanan, 2010), biotechnology (Peterson and Kenett, 2011), customer satisfaction surveys (Kenett and Salini, 2011), healthcare systems (Kenett, 2012) and the testing of web services (Bai and Kenett, 2012). Some references to software programs used to construct BNs will also be provided.
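    To give a flavour of the mechanics, here is a hand-coded two-node network in Python (cause: usability problem; effect: high bounce rate; all probabilities are illustrative, not taken from the cited studies), showing how the same conditional distributions support both prediction and diagnostics:

      # Two-node Bayesian network: Problem -> HighBounce.
      # Illustrative probabilities only.
      p_problem = 0.20                # prior P(usability problem)
      p_bounce_given_problem = 0.70   # P(high bounce | problem)
      p_bounce_given_ok = 0.20        # P(high bounce | no problem)

      # Prediction (cause -> effect): marginal P(high bounce)
      p_bounce = (p_bounce_given_problem * p_problem
                  + p_bounce_given_ok * (1 - p_problem))

      # Diagnostics (effect -> cause): posterior P(problem | high bounce)
      p_problem_given_bounce = p_bounce_given_problem * p_problem / p_bounce
      print(f"P(bounce) = {p_bounce:.3f}")                          # 0.300
      print(f"P(problem | bounce) = {p_problem_given_bounce:.3f}")  # 0.467

    Real applications replace this toy with causality maps over many measured and target variables, built and estimated with dedicated BN software.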
  • The Significance of Measurement Systems Analysis within the Lean Philosophy

    Authors: Phil Lewis (Coventry University), Gillain Cooke (Coventry University)
    Primary area of focus / application: Quality
    Keywords: Measurement Systems Analysis, Lean, Waste/Muda reduction, Cost Down, SME
    Submitted at 5-Mar-2012 13:33 by Phillip Lewis
    Accepted
    10-Sep-2012 10:20 The Significance of Measurement Systems Analysis within the Lean Philosophy
    The application of Measurement Systems Analysis (MSA) is increasingly important in precision industry, and the role of metrology is considered vital in safety-critical sectors such as radiotherapy, nuclear power and aerospace. MSA enables the creation of extensive knowledge and understanding across the entire envelope of these sectors.

    Within these business sectors, statisticians and expert practitioners are freely available. However, the associated supply chains exhibit practices created by the lean operations drive, which has branded all forms of verification and measurement as “non value added” (NVA). The notion that an NVA process is underpinned by a series of “complicated” statistics has arguably provided an excuse for it to be ignored.

    An investigation into currently accepted practice within SME activity would unveil many instances of disagreement between entities of the supply chain where contrasting measurements have been obtained. Resolving which measurement is correct potentially erodes already-reduced profit margins further.

    The ongoing demand for waste/NVA reduction and “cost down” activity could be met through the dissemination of MSA across organisational interfaces: linking process improvement activity to the MSA arena to statistically evaluate confidence levels within an organisation's interacting measurement systems, and thus aligning the process capability of those measurement systems to ultimately enable Muda reductions.

    This paper evaluates the theoretical problems arising from measurement system misalignments and shows the need for a generic MSA framework that supports the relevant lean philosophy.
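    As a concrete illustration of what such a framework would quantify at each organisational interface, the sketch below runs a standard ANOVA-based gauge R&R on simulated data in Python (variance components and study layout are hypothetical; this is not the paper's framework):

      import numpy as np

      rng = np.random.default_rng(0)
      p, o, r = 10, 3, 2                        # parts, operators, replicates
      part_eff = rng.normal(0, 2.0, p)          # part-to-part variation
      oper_eff = rng.normal(0, 0.5, o)          # reproducibility (operators)
      data = (part_eff[:, None, None] + oper_eff[None, :, None]
              + rng.normal(0, 0.3, (p, o, r)))  # repeatability (equipment)

      # Balanced two-way crossed ANOVA with interaction
      grand = data.mean()
      mp, mo = data.mean(axis=(1, 2)), data.mean(axis=(0, 2))
      mpo = data.mean(axis=2)
      ms_p = o * r * ((mp - grand) ** 2).sum() / (p - 1)
      ms_o = p * r * ((mo - grand) ** 2).sum() / (o - 1)
      ms_po = (r * ((mpo - mp[:, None] - mo[None, :] + grand) ** 2).sum()
               / ((p - 1) * (o - 1)))
      ms_e = ((data - mpo[:, :, None]) ** 2).sum() / (p * o * (r - 1))

      # Variance components from the expected mean squares
      var_repeat = ms_e
      var_oper = max((ms_o - ms_po) / (p * r), 0.0)
      var_po = max((ms_po - ms_e) / r, 0.0)
      var_part = max((ms_p - ms_po) / (o * r), 0.0)
      grr = var_repeat + var_oper + var_po
      print(f"%GRR = {100 * np.sqrt(grr / (grr + var_part)):.1f}% of study variation")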
  • Detection of Abrupt Changes in Count Data Time Series: Cumulative Sum Derivations for INARCH(1) Models

    Authors: Christian H. Weiß (Darmstadt University of Technology), Murat Caner Testik (Hacettepe University)
    Primary area of focus / application: Process
    Keywords: Count data time series, CUSUM control chart, INARCH(1) model, INAR(1) model, overdispersion
    Submitted at 8-Mar-2012 09:29 by Christian Weiß
    Accepted
    12-Sep-2012 10:05 Detection of Abrupt Changes in Count Data Time Series: Cumulative Sum Derivations for INARCH(1) Models
    The INARCH(1) model has been proposed in the literature as a simple, but practically relevant, two-parameter model for processes of overdispersed counts with an autoregressive serial dependence structure. In this research, we develop approaches for monitoring INARCH(1) processes for detecting shifts in the process parameters. Several cumulative sum control charts are derived directly from the log-likelihood ratios for various types of shifts in the INARCH(1) model parameters. We define zero-state (worst-state) and steady-state average run length metrics and discuss their computation for the proposed charts. An extensive study indicates that these charts perform well in detecting changes in the process. A real-data example of strike counts is used to illustrate process monitoring.
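    The general construction can be sketched directly. Assuming the conditional Poisson form X_t | X_{t-1} ~ Poisson(beta + alpha * X_{t-1}), the log-likelihood-ratio CUSUM for a shift from (beta0, alpha0) to (beta1, alpha1) accumulates x_t * log(lambda1/lambda0) - (lambda1 - lambda0). A minimal Python sketch (parameters, shift and threshold are illustrative, not the paper's design values):

      import numpy as np

      rng = np.random.default_rng(1)

      def simulate_inarch1(beta, alpha, n, x0=0):
          # INARCH(1): X_t | X_{t-1} ~ Poisson(beta + alpha * X_{t-1})
          x, prev = np.empty(n, dtype=int), x0
          for t in range(n):
              prev = rng.poisson(beta + alpha * prev)
              x[t] = prev
          return x

      def lr_cusum(x, beta0, alpha0, beta1, alpha1, h):
          # Upper CUSUM built from the Poisson log-likelihood ratio
          c, prev = 0.0, 0
          for t, xt in enumerate(x):
              lam0, lam1 = beta0 + alpha0 * prev, beta1 + alpha1 * prev
              c = max(0.0, c + xt * np.log(lam1 / lam0) - (lam1 - lam0))
              if c > h:
                  return t                     # first alarm time
              prev = xt
          return None

      # In control (beta=2, alpha=0.4) for 100 counts, then beta shifts to 3
      x1 = simulate_inarch1(2.0, 0.4, 100)
      x2 = simulate_inarch1(3.0, 0.4, 100, x0=x1[-1])
      print("alarm at t =", lr_cusum(np.concatenate([x1, x2]),
                                     2.0, 0.4, 3.0, 0.4, h=5.0))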
  • Water Quality Function Deployment

    Authors: Shuki Dror (ORT Braude College), Natalia Zaitsev (ORT Braude College)
    Primary area of focus / application: Quality
    Keywords: water, quality, technology, QFD
    Submitted at 16-Mar-2012 06:01 by Shuki Dror
    Accepted
    11-Sep-2012 10:00 Water Quality Function Deployment
    The main goal of this study is to create a framework for technology selection, enabling a water supplier to improve the quality of tap water.
    Quality Function Deployment (QFD) is utilized as an instrument for ranking the relevant technologies. The QFD is designed to reveal where water quality characteristics require improvement and to translate the deficiencies into demands on technical water characteristics and, ultimately, into the relative importance of the relevant technologies.
    First, we rank the wishes and perceived preferences of the final customers (“voice of the customer” – VOC) for tap water quality. Customers’ requirements for tap water quality are obtained by a survey comprising two groups of questions, aimed at assessing the desirability of six separate characteristics of water quality and at estimating the gap between the desired water quality and its perceived present state. The six characteristics are: odor, turbidity, color, calcification (scaling), taste, and absence of biological pollutants. To assess the required improvement level of each characteristic for the customer, we define the importance level of a characteristic as its mean importance multiplied by the mean gap between the desired level and the perceived present situation for the same characteristic.
    In the next stage, we conduct an expert survey, which consists of two groups of questions.
    In the first group, the expert is asked to estimate the influence of technical parameters of the water on the perceived characteristics of the water quality as provided by the customers. The relative importance of the following 10 parameters was calculated using the QFD method: hardness, acidity, chlorine/chloramines, fluoride, nitrate, chloride (salinity), solid colloids, total organic carbon, iron, and dissolved oxygen.
    The second group of questions in the expert survey comprises relevant water treatment technologies and their effect on the technical parameters. Again, the experts estimated the contribution of each of the relevant technologies to the improvement of the technical water parameters, and the relative importance was calculated.
    The list of technologies includes more than a dozen principal technologies, such as: desalination, microfiltration, granular filtration (sand/coal bedding), UV sterilization, reduction of acidity, chlorination, etc.
    To construct the water QFD, two matrices representing the questionnaire results are analyzed. Normalized improvement scores are calculated at each of the three hierarchical levels: customer requirements, technical parameters and technologies. The components to be improved at each of the above levels are selected by means of the MSE (Mean Square Error). We find that desalination is the vital technology for improving water quality.
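    The two-matrix cascade itself reduces to weighted matrix products. A minimal Python sketch with placeholder relationship scores (the real inputs are the survey results, and the MSE selection rule coded here is only a hypothetical reading of the abstract):

      import numpy as np

      rng = np.random.default_rng(2)
      # Sizes from the abstract: 6 customer characteristics, 10 technical
      # parameters, ~12 technologies. Scores below are random placeholders.
      w = rng.uniform(1, 5, 6)            # importance x gap, per characteristic
      R1 = rng.integers(0, 4, (6, 10))    # characteristic -> parameter links
      R2 = rng.integers(0, 4, (10, 12))   # parameter -> technology links

      t = w @ R1                          # technical-parameter importance
      s = t @ R2                          # technology importance
      s_norm = s / s.sum()                # normalized improvement scores

      # Hypothetical MSE cut: keep technologies whose normalized score
      # exceeds the mean by more than the root mean square deviation.
      mse = ((s_norm - s_norm.mean()) ** 2).mean()
      selected = np.where(s_norm > s_norm.mean() + np.sqrt(mse))[0]
      print("ranking:", np.argsort(s_norm)[::-1], "selected:", selected)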
  • Restricted Kernel Canonical Correlation Analysis

    Authors: Nina Otopal (IMFM)
    Primary area of focus / application: Mining
    Keywords: kernel, canonical correlation, nonnegativity, restricted
    Submitted at 20-Mar-2012 10:12 by Nina Otopal
    Accepted
    10-Sep-2012 12:50 Restricted Kernel Canonical Correlation Analysis
    Kernel canonical correlation analysis (KCCA) is a procedure for assessing the relationship between two sets of random variables when the classical method, canonical correlation analysis (CCA), fails because of the nonlinearity of the data. The KCCA method is mostly used in machine learning, especially for information retrieval and text mining. Because the data are often represented by non-negative numbers, we propose to incorporate the nonnegativity restriction directly into the KCCA method. Similar restrictions have been studied for classical CCA under the name restricted canonical correlation analysis (RCCA), so we call the proposed method restricted kernel canonical correlation analysis (RKCCA). We also provide some possible approaches for solving the optimization problem to which our method translates. The motivation for introducing RKCCA is given in Section 2.
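    For orientation, the sketch below implements plain regularized KCCA in Python via the standard dual eigenproblem (a common textbook formulation, not the paper's algorithm); the nonnegativity restriction that defines RKCCA would replace the unconstrained eigensolve with a constrained optimization, e.g. projected gradient, which is omitted here:

      import numpy as np
      from numpy.linalg import solve, eig

      def rbf_kernel(A, gamma=1.0):
          # Gaussian kernel matrix on the rows of A
          sq = ((A[:, None, :] - A[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * sq)

      def center(K):
          n = K.shape[0]
          H = np.eye(n) - np.ones((n, n)) / n
          return H @ K @ H

      def kcca(X, Y, kappa=0.1):
          # Leading kernel canonical correlation from the regularized
          # eigenproblem (Kx+kI)^-1 Ky (Ky+kI)^-1 Kx a = rho^2 a
          Kx, Ky = center(rbf_kernel(X)), center(rbf_kernel(Y))
          I = np.eye(Kx.shape[0])
          M = solve(Kx + kappa * I, Ky) @ solve(Ky + kappa * I, Kx)
          vals, vecs = eig(M)
          i = np.argmax(vals.real)
          return np.sqrt(max(vals.real[i], 0.0)), vecs[:, i].real

      rng = np.random.default_rng(3)
      X = rng.random((50, 3))
      Y = np.sin(3 * X) + 0.1 * rng.random((50, 3))
      rho, alpha = kcca(X, Y)
      print(f"leading kernel canonical correlation ~ {rho:.3f}")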