ENBIS-17 in Naples

9 – 14 September 2017; Naples (Italy)
Abstract submission: 21 November 2016 – 10 May 2017

My abstracts

 

The following abstracts have been accepted for this event:

  • Adversarial Hypothesis Testing

    Authors: Fabrizio Ruggeri (CNR IMATI), David Rios Insua (CSIC ICMAT), Refik Soyer (George Washington University), Jorge González Ortega (CSIC ICMAT)
    Primary area of focus / application: Other: Advances in Adversarial Risk Analysis
    Keywords: Adversarial risk analysis, Hypothesis testing, Game theory, Decision analysis
    Submitted at 25-Jan-2017 17:41 by Fabrizio Ruggeri
    Accepted
    11-Sep-2017 16:00 Adversarial Hypothesis Testing
    In this talk, we draw on recent concepts from adversarial risk analysis to provide a novel approach to the adversarial hypothesis testing (AHT) problem. We assume there is an agent (whom we call the Defender) who needs to ascertain which of several hypotheses holds, based on observations from a source that may be perturbed by another agent, whom we designate the Attacker. The Attacker aims to distort the relevant data process in order to confound the decision maker and attain a certain utility. We provide an adversarial risk analysis approach to this problem and illustrate its use in a batch acceptance context.
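    The abstract does not spell out an algorithm; the following is only a minimal, hypothetical sketch of the ARA logic described above, in a made-up binary batch-acceptance setting (the defect rates, utilities and the Beta model for the Attacker's concealment behaviour are all assumptions, not taken from the talk). The Defender averages the likelihood over its beliefs about the Attacker and then picks the expected-utility-maximizing action:

        import numpy as np
        from scipy.stats import binom

        rng = np.random.default_rng(0)

        # Hypothetical batch-acceptance setting: H0 = good batch, H1 = bad batch.
        defect_rate = {"H0": 0.02, "H1": 0.10}
        prior = {"H0": 0.7, "H1": 0.3}
        utility = {("accept", "H0"): 1.0, ("accept", "H1"): -5.0,
                   ("reject", "H0"): -1.0, ("reject", "H1"): 0.5}

        n, x_obs = 50, 2          # items inspected and defectives observed
        n_mc = 20_000             # Monte Carlo draws over the Attacker's behaviour

        # Defender's beliefs about the Attacker: each true defective is concealed
        # with an uncertain probability, modelled here by a Beta(2, 8) distribution.
        conceal = rng.beta(2.0, 8.0, size=n_mc)

        def likelihood(h):
            """P(x_obs | h), averaged over the Defender's beliefs about concealment."""
            p_seen = defect_rate[h] * (1.0 - conceal)   # defect rate surviving concealment
            return binom.pmf(x_obs, n, p_seen).mean()

        post = {h: prior[h] * likelihood(h) for h in prior}
        z = sum(post.values())
        post = {h: v / z for h, v in post.items()}

        # Defender picks the action that maximizes posterior expected utility.
        best = max(["accept", "reject"],
                   key=lambda a: sum(post[h] * utility[(a, h)] for h in post))
        print(post, best)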
  • Estimating Mixed Logit Models

    Authors: Martina Vandebroek (KU Leuven), Deniz Akinc (KU Leuven)
    Primary area of focus / application:
    Secondary area of focus / application: Other: Design of Experiment for Product Quality and Sustainability in Agri-Food Systems
    Keywords: Discrete choice model, Mixed Logit model, Simulated Maximum Likelihood, Hierarchical Bayesian estimation
    Submitted at 27-Jan-2017 14:48 by Martina Vandebroek
    Accepted
    11-Sep-2017 10:30 Estimating Mixed Logit Models
    The mixed logit model is often used to model the results of a discrete choice experiment. It allows one to estimate the mean preferences in the population as well as the heterogeneity of these preferences. The model can be estimated by maximum simulated likelihood or by hierarchical Bayesian methods, and both approaches have their advantages and disadvantages. Both also require several choices to be made. For maximum simulated likelihood, the number and type of random draws have to be chosen, as well as the starting values and the optimization algorithm. For hierarchical Bayes estimation, priors have to be chosen for the mean and covariance matrix of the parameters. We present the results of a simulation study in which we investigated the impact of these choices on the results. We report the root mean squared error of the estimates of the mean, the covariance matrix and the individual preferences. We focus mainly on the number of quasi-random draws and on the prior for the covariance matrix, as these were found to have a large impact on the results.
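    As a small illustration of where the "number and type of random draws" enters, here is a hypothetical sketch of the simulated choice probability for one respondent and one choice set under a mixed logit with normally distributed coefficients, using scrambled Halton draws; the attribute matrix, mean vector and Cholesky factor are invented for the example and are not from the study:

        import numpy as np
        from scipy.stats import qmc, norm

        rng = np.random.default_rng(1)
        n_alt, n_attr, n_draws = 3, 2, 256

        X = rng.normal(size=(n_alt, n_attr))   # attributes of one choice set
        chosen = 1                             # alternative picked by the respondent

        mu = np.array([0.5, -0.3])             # candidate mean preferences
        L = np.diag([0.8, 0.4])                # Cholesky factor of the preference covariance

        # Quasi-random (scrambled Halton) draws for the random coefficients.
        u = qmc.Halton(d=n_attr, seed=1).random(n_draws)
        beta = mu + norm.ppf(u) @ L.T          # (n_draws, n_attr) coefficient draws

        v = beta @ X.T                         # utilities, one row per draw
        p = np.exp(v - v.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)      # logit choice probabilities per draw

        sim_prob = p[:, chosen].mean()         # simulated choice probability
        print(sim_prob, np.log(sim_prob))      # contribution to the simulated log-likelihood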
  • Modeling Shrimp Growth in Freshwater

    Authors: Susana Vegas (Universidad de Piura), Valeria Quevedo (Universidad de Piura/ Virginia Tech), Geoff Vining (Virginia Tech)
    Primary area of focus / application: Other: DOE and statistical process monitoring in South America
    Keywords: Process control, Non-linear model, Control chart, Two-stage model
    Submitted at 30-Jan-2017 16:55 by Susana Vegas
    Accepted (view paper)
    11-Sep-2017 17:30 Modeling Shrimp Growth in Freshwater
    For a shrimp farm with 250 ponds, we want to analyze which factors help explain the variation in shrimp weight. We notice that shrimp growth shows an asymptotic behavior independent of the pond. We use a two-stage non-linear modeling approach.
    In the first stage, we use a conceptual Gompertz growth model
    log W = θ_1 - θ_2 exp(-θ_3 t)
    where θ_1 is the asymptotic average log-weight of adult shrimp, θ_3 is the growth-rate constant, and θ_1 - θ_2 is the average initial log-weight, to predict the shrimp log-weight as a function of time t. Using data from five harvesting campaigns, our analysis shows that the best estimates for θ_1 and θ_3 can be computed using data from previous campaigns, while the estimate for θ_2 is based on the average log-weight from week 1 of the current campaign.
    To explain the variability left unexplained by the first stage, we fit a multiple linear regression model with the non-linear model residuals as the response and average food per shrimp, aeration, and water parameters as predictors. To assess model performance, we estimate the shrimp weight by transforming back to the original scale. We show that the two-stage non-linear model satisfies the model assumptions better than a one-stage MLR model.
    The growth model from the first stage can be used to monitor the process in new campaigns with a control chart whose control limits are based on data from previous campaigns. The second-stage regression analysis can be used to suggest corrections during the campaigns.
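    A minimal sketch of the two-stage idea described above, with simulated data standing in for the farm's campaigns (the week grid, noise level and the two covariates are invented for illustration, not the farm's variables):

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(2)
        weeks = np.tile(np.arange(1, 21), 5)                    # 5 hypothetical campaigns
        log_w = 3.0 - 2.2 * np.exp(-0.25 * weeks)               # "true" growth curve
        log_w = log_w + rng.normal(scale=0.1, size=weeks.size)  # observed log-weights

        # Stage 1: Gompertz-type growth curve  log W = th1 - th2 * exp(-th3 * t)
        def gompertz(t, th1, th2, th3):
            return th1 - th2 * np.exp(-th3 * t)

        theta, _ = curve_fit(gompertz, weeks, log_w, p0=[3.0, 2.0, 0.2])
        resid = log_w - gompertz(weeks, *theta)

        # Stage 2: regress the stage-1 residuals on management covariates
        # (a fake "food per shrimp" and "aeration" just to show the mechanics).
        food = rng.normal(size=weeks.size)
        aeration = rng.normal(size=weeks.size)
        Z = np.column_stack([np.ones_like(food), food, aeration])
        coef, *_ = np.linalg.lstsq(Z, resid, rcond=None)

        print(theta)   # estimated (th1, th2, th3)
        print(coef)    # intercept and covariate effects on the residual variation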
  • A Tutorial on an Iterative Approach for Generating Shewhart Control Limits

    Authors: Valeria Quevedo (Universidad de Piura), Susana Vegas (Universidad de Piura), Geoff Vining (Virginia Tech)
    Primary area of focus / application: Other: ASQ international journal session
    Keywords: Control charts, Control limits, Statistical process control, Shewhart charts
    Submitted at 30-Jan-2017 20:53 by Valeria Quevedo
    Accepted (view paper)
    11-Sep-2017 12:00 A Tutorial on an Iterative Approach for Generating Shewhart Control Limits
    Standard statistical quality control textbooks (e.g., Montgomery 2015) discuss the differences between Phase I and Phase II control charts. In Phase I, control charts are more of an exploratory data-analysis tool whose purpose is to generate the control limits for Phase II. Phase II is the true control procedure, which may be viewed as a series of hypothesis tests. The null hypothesis is that the process is in control, and the alternative is that it is out of control.
    The typical presentation of Phase I uses a preset number of rational subgroups. Such an approach fails to address the tension between having enough information to generate reliable control limits and being able to start active control of the process. A key point is that control charts allow the engineer to change the process generating the data, unlike post hoc hypothesis tests. Vining (1998; 2009) outlines an iterative approach that he developed in the early 1980s while employed by the Faber-Castell Corp. The purpose of this iterative procedure is to construct the chart so that it generates a better model of the in-control process.
    This article discusses the differences between Phase I and Phase II control charts and the impact of the number of rational subgroups in the Phase I study upon the quality of the resulting control limits. It then presents two brief case studies that illustrate the iterative approach for the basic Shewhart Xbar and R charts.
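    For context, the following is a generic Phase I sketch of Xbar and R limits for subgroups of size 5, with a single exclude-and-recompute pass on simulated data; it illustrates only the textbook limit calculation and is not the iterative procedure of Vining (1998; 2009) presented in the tutorial:

        import numpy as np

        A2, D3, D4 = 0.577, 0.0, 2.114      # standard chart constants for subgroups of size 5

        rng = np.random.default_rng(3)
        subgroups = rng.normal(10.0, 1.0, size=(25, 5))   # 25 hypothetical rational subgroups
        subgroups[7] += 3.0                               # one deliberately shifted subgroup

        def limits(data):
            xbar = data.mean(axis=1)
            r = data.max(axis=1) - data.min(axis=1)
            xbb, rbar = xbar.mean(), r.mean()
            return {"xbar": xbar, "r": r,
                    "x_lcl": xbb - A2 * rbar, "x_ucl": xbb + A2 * rbar,
                    "r_lcl": D3 * rbar, "r_ucl": D4 * rbar}

        lim = limits(subgroups)
        in_ctrl = ((lim["xbar"] >= lim["x_lcl"]) & (lim["xbar"] <= lim["x_ucl"]) &
                   (lim["r"] >= lim["r_lcl"]) & (lim["r"] <= lim["r_ucl"]))

        # Recompute trial limits after setting aside out-of-control subgroups
        # (in practice only after an assignable cause has been found and removed).
        lim2 = limits(subgroups[in_ctrl])
        print((lim["x_lcl"], lim["x_ucl"]), "->", (lim2["x_lcl"], lim2["x_ucl"]))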
  • How to Evaluate Uncertainty of Categorical Measurements?

    Authors: Emil Bashkansky (ORT Braude College of Engineering), Tamar Gadrich (ORT Braude College of Engineering)
    Primary area of focus / application: Other: Sampling
    Keywords: Classification (confusion) matrix, Acceptance sampling, Categorical scale, Bayesian approach
    Submitted at 1-Feb-2017 12:34 by Emil Bashkansky
    Accepted (view paper)
    12-Sep-2017 10:10 How to Evaluate Uncertainty of Categorical Measurements?
    We show how to interpret sampled measurement results when they belong to a categorical scale. The proposed approach takes into account the sampled nature of the observations and the observation errors, and combines both with prior information (if it exists) about the studied population. The appropriate mathematical tools, which account for all of these aspects, are presented; they provide an adequate description of how the studied property is partitioned into categories and of the parameters of that partition. We demonstrate that the most likely or expected estimators may differ significantly from those observed in the sample, and sometimes even conflict with the assumed confusion matrix. A technique for determining the conflict-free region is presented, as well as a two-stage procedure for updating the assessment, based on verifying that the newly observed information accords with the information already available. The main propositions of the paper are supported by numerical examples, including acceptance sampling for quality control.
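    The abstract's notion of a conflict between the observed sample and the assumed confusion matrix can be illustrated with a simple inversion estimator (a stand-in for the authors' procedure, not a reproduction of it); the confusion matrix and counts below are made up:

        import numpy as np

        # C[i, j] = P(observed category j | true category i), assumed known.
        C = np.array([[0.90, 0.08, 0.02],
                      [0.05, 0.85, 0.10],
                      [0.02, 0.08, 0.90]])

        counts = np.array([48, 37, 15])       # observed sample counts per category
        q = counts / counts.sum()             # observed proportions

        # Moment-type estimator: solve p @ C = q for the true-category proportions p.
        p_hat = np.linalg.solve(C.T, q)

        if np.any(p_hat < 0):
            # Negative components signal a conflict between the observed sample and
            # the assumed confusion matrix (outside the "conflict-free region").
            print("observed sample conflicts with the assumed confusion matrix:", p_hat)
        else:
            print("corrected category proportions:", p_hat)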
  • Assessing Assessors (by Means of Binary Test Design and Analysis)

    Authors: Emil Bashkansky (ORT Braude College of Engineering), Vladimir Turetsky (ORT Braude College of Engineering)
    Primary area of focus / application: Other: Reliability of Subjective Measurement Systems
    Keywords: Assessors, Proficiency test (PT), Binary data, Ability and difficulty, Item response model, Maximum likelihood estimation
    Submitted at 1-Feb-2017 12:49 by Emil Bashkansky
    Accepted (view paper)
    12-Sep-2017 15:40 Assessing Assessors (by Means of Binary Test Design and Analysis)
    A method for evaluating the proficiency (scoring) of assessors (people, laboratories, methods, etc.) conducting binary tests is proposed. The method is based on the scale-invariant item response model proposed by the authors in their earlier publications.
    We consider the case where the assessors under proficiency evaluation take the same test, consisting of a set of binary test items with different levels of difficulty that may be known or unknown beforehand. When trying to detect a particular property of the objects under test, we need to evaluate and compare both the intrinsic abilities of the participating assessors and the difficulty levels of the objects/test items (if the latter are unknown beforehand). We assume that the responses to different test items do not affect one another and discuss how to obtain and interpret the most likely estimates/scores. The contribution of a placebo to test effectiveness is also discussed. The method is illustrated with a proficiency-testing example involving 28 medical laboratories. As a criterion for screening out ‘bad’ assessors, we propose using the probability of successful detection of the studied property by test item(s) of a certain, predetermined or a posteriori defined, level of difficulty.
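    Purely as an illustration of jointly estimating assessor abilities and item difficulties from a binary response matrix, here is a sketch that uses a plain Rasch-type logistic model as a stand-in for the authors' scale-invariant item response model; the data are simulated, not the 28-laboratory proficiency test, and the reference difficulty used for screening is an arbitrary choice:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(4)
        n_assessors, n_items = 28, 10
        ability = rng.normal(0, 1, n_assessors)
        difficulty = rng.normal(0, 1, n_items)
        p_true = 1 / (1 + np.exp(-(ability[:, None] - difficulty[None, :])))
        Y = rng.binomial(1, p_true)            # observed detection / no-detection matrix

        def neg_loglik(params):
            a, d = params[:n_assessors], params[n_assessors:]
            eta = a[:, None] - d[None, :]
            return -(Y * eta - np.logaddexp(0.0, eta)).sum()

        x0 = np.zeros(n_assessors + n_items)
        fit = minimize(neg_loglik, x0, method="L-BFGS-B")

        # Fix the location indeterminacy by centring the abilities.
        shift = fit.x[:n_assessors].mean()
        a_hat = fit.x[:n_assessors] - shift
        d_hat = fit.x[n_assessors:] - shift

        # Screening criterion in the spirit of the abstract: probability that each
        # assessor detects the property on an item of a chosen reference difficulty.
        d_ref = 0.5
        p_detect = 1 / (1 + np.exp(-(a_hat - d_ref)))
        print(np.round(p_detect, 2))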