ENBIS-12 in Ljubljana

9 – 13 September 2012
Abstract submission: 15 January – 10 May 2012

My abstracts


The following abstracts have been accepted for this event:

  • Three-stage Industrial Strip-plot Experiments

    Authors: Heidi Arnouts (University of Antwerp), Peter Goos (University of Antwerp)
    Primary area of focus / application: Design and analysis of experiments
    Keywords: strip-plot design, three-stage processes, split-split-plot design, D-optimality
    Submitted at 6-Apr-2012 14:55 by Heidi Arnouts
    Accepted
    12-Sep-2012 10:10 Three-stage Industrial Strip-plot Experiments
    Strip-plot designs are commonly used in situations where the production process consists of two process stages involving hard-to-change factors and where it is possible to apply the second stage to semi-finished products from the first stage. In this presentation we focus on three-stage processes. Unlike the three-stage strip-plot designs in the literature, the designs we consider have a third stage that involves not hard-to-change factors but easy-to-change factors that are reset independently for each run. For this scenario, the split-split-plot design is a well-known design option. However, we prefer the statistically more efficient strip-plot designs and therefore construct D-optimal strip-plot designs for three-stage processes with no randomization restriction in the third stage. The coordinate-exchange algorithm we use to construct our designs can handle any type of factor and any number of factor levels, runs, rows and columns.
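    A minimal sketch of the kind of construction the abstract describes, under illustrative assumptions (a 3 x 3 row-column layout, one factor per stage, a main-effects model and unit variance ratios): the objective is the D-criterion det(X'V^-1 X) under a strip-plot error structure, improved by coordinate exchange. This is not the authors' algorithm, only an indication of the idea.

```python
# Illustrative sketch, not the authors' code: D-criterion for a three-stage
# strip-plot design and a basic coordinate-exchange improvement.
import numpy as np

def d_criterion(X, Zr, Zc, eta_r=1.0, eta_c=1.0):
    # V encodes the strip-plot error structure: run error plus random row
    # (stage 1) and column (stage 2) effects with variance ratios eta_r, eta_c.
    V = np.eye(X.shape[0]) + eta_r * Zr @ Zr.T + eta_c * Zc @ Zc.T
    sign, logdet = np.linalg.slogdet(X.T @ np.linalg.solve(V, X))
    return logdet if sign > 0 else -np.inf

def build_X(a, b, e, rows, cols):
    # Main-effects model: intercept, row factor a (stage 1), column factor b
    # (stage 2), easy-to-change factor e (stage 3, reset for every run).
    return np.column_stack([np.ones(len(e)), a[rows], b[cols], e])

def coordinate_exchange(a, b, e, rows, cols, Zr, Zc,
                        levels=(-1.0, 0.0, 1.0), n_pass=10):
    best = d_criterion(build_X(a, b, e, rows, cols), Zr, Zc)
    for _ in range(n_pass):
        for vec in (a, b, e):                 # row-, column- and run-level coordinates
            for i in range(len(vec)):
                for lev in levels:
                    old, vec[i] = vec[i], lev
                    val = d_criterion(build_X(a, b, e, rows, cols), Zr, Zc)
                    if val > best:
                        best = val
                    else:
                        vec[i] = old          # revert if no improvement
    return best

# Toy 3-row x 3-column layout (9 runs), one factor per stage.
rows, cols = np.repeat(np.arange(3), 3), np.tile(np.arange(3), 3)
Zr, Zc = np.eye(3)[rows], np.eye(3)[cols]     # run-to-row / run-to-column incidence
rng = np.random.default_rng(0)
a = rng.choice([-1.0, 1.0], 3)                # stage-1 factor, constant within a row
b = rng.choice([-1.0, 1.0], 3)                # stage-2 factor, constant within a column
e = rng.choice([-1.0, 1.0], 9)                # stage-3 factor, varies run by run
print("log det of information matrix:",
      round(coordinate_exchange(a, b, e, rows, cols, Zr, Zc), 3))
```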
  • PM10 Forecasting Using Mixture Linear Regression Models

    Authors: Jean-Michel Poggi (University of Paris Descartes), Bruno Portier (INSA Rouen), Michel Misiti (University of Orsay), Yves Misiti (University of Orsay)
    Primary area of focus / application: Modelling
    Keywords: particulate matter, forecasting, clusterwise linear models, air quality
    Submitted at 7-Apr-2012 17:27 by Jean-Michel Poggi
    Accepted
    11-Sep-2012 10:40 PM10 Forecasting Using Mixture Linear Regression Models
    Clusterwise linear regression models are used for the statistical forecasting of the daily mean PM10 concentration. Hourly concentrations of PM10 have been measured in three cities in Haute-Normandie, France: Rouen, Le Havre and Dieppe. The most important one, Rouen, is located northwest of Paris, near the southern coast of the English Channel, and is heavily industrialized. We consider monitoring stations reflecting the diversity of situations: urban background, traffic, rural and industrial stations. We have focused our attention on recent data, from 2007 to 2010.
    We accurately forecast the daily mean concentration by fitting a function of meteorological predictors and the average concentration measured on the previous day. The observed values of the meteorological variables are used for fitting the models, but the corresponding predictions are used for the test data, leading to a realistic evaluation of forecasting performance, which is computed through a leave-one-out scheme over the four years.
    In this talk we discuss several methodological issues, including various estimation schemes, the introduction of deterministic predictions from meteorological or numerical models, and the handling of forecasts at various horizons, from a few hours to one day ahead.
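    A minimal sketch of a clusterwise (mixture-of-linear-regressions) fit via EM, on synthetic data standing in for the PM10 and meteorological inputs. The component count, the shared-covariate model and the data are assumptions; this is not the authors' implementation.

```python
# Illustrative sketch: EM for a mixture of linear regressions (clusterwise
# linear regression) with K components and per-component noise variance.
import numpy as np
from scipy.stats import norm

def fit_mixture_regression(X, y, K=2, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    Xd = np.column_stack([np.ones(n), X])            # add intercept
    beta = rng.normal(size=(K, p + 1))
    sigma = np.full(K, y.std())
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each observation.
        dens = np.stack([pi[k] * norm.pdf(y, Xd @ beta[k], sigma[k])
                         for k in range(K)], axis=1)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted least squares per component.
        for k in range(K):
            w = np.sqrt(r[:, k])
            beta[k] = np.linalg.lstsq(Xd * w[:, None], y * w, rcond=None)[0]
            resid = y - Xd @ beta[k]
            sigma[k] = np.sqrt((r[:, k] * resid**2).sum() / r[:, k].sum())
        pi = r.mean(axis=0)
    return beta, sigma, pi

def predict(X, beta, pi):
    Xd = np.column_stack([np.ones(len(X)), X])
    return (pi * (Xd @ beta.T)).sum(axis=1)          # mixture-weighted forecast

# Synthetic stand-in: two regimes linking "meteorology + yesterday's PM10" to today's PM10.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 3))
regime = rng.integers(0, 2, 400)
y = np.where(regime == 0, 20 + 5 * X[:, 0] + 0.6 * X[:, 2],
                          35 - 3 * X[:, 1] + 0.3 * X[:, 2]) + rng.normal(0, 2, 400)
beta, sigma, pi = fit_mixture_regression(X, y, K=2)
print("component weights:", np.round(pi, 2))
```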
  • People Make Mistakes - Unavoidable....

    Authors: Johan Batsleer (Amelior)
    Primary area of focus / application: Quality
    Keywords: human error, quality management, mistakes, brain
    Submitted at 8-Apr-2012 10:07 by Johan Batsleer
    Accepted (view paper)
    12-Sep-2012 10:45 People Make Mistakes - Unavoidable....
    Within Quality Management, the target today is 'zero defects'. In Safety Management, too, targets such as 'zero accidents' are now the standard.
    But how can we reconcile that with the idea that people will always make mistakes?
    Especially where work is routine, the chance of making a mistake is always present and can be on the order of 3 in 1000.
    Today we can look into the human brain and try to understand the mechanism that creates 'human error'.
  • Outlier Detection for Business Indicators of Healthcare Quality - A Comparison of Four Approaches to Overdispersed Proportions

    Authors: Gaj Vidmar (University of Ljubljana, Faculty of Economics), Rok Blagus (University of Ljubljana, Faculty of Economics)
    Primary area of focus / application: Process
    Keywords: healthcare quality, performance measures, outliers, control charts, cross-sectional data, overdispersion
    Submitted at 8-Apr-2012 22:35 by Gaj Vidmar
    Accepted
    10-Sep-2012 16:55 Outlier Detection for Business Indicators of Healthcare Quality - A Comparison of Four Approaches to Overdispersed Proportions
    Outlier detection among overdispersed proportions is an important issue in healthcare quality control. We had previously introduced control limits for the double-square-root control chart based on prediction intervals from regression through the origin, comparing our approach to common outlier detection tests. In this study, we compared it to three other approaches: Laney's p'-chart for cross-sectional data, Spiegelhalter's random-effects regression-modelling approach (multiplicative and additive) and Carling's median rule. Comparisons were performed on real and simulated same-quantity random-denominator ratios. The real data comprised hospital readmissions data from the UK (used by Spiegelhalter and Laney) and data on business indicators of healthcare quality from Slovenian hospitals. Simulations comprised "small" (<0.2; right-skewed) and "large" (>0.5; more symmetrical) proportions, 1000 under each experimental condition. Samples of size 10-100 were drawn from a 3-parameter log-logistic distribution with no outlier or one outlier added (drawn with the location parameter multiplied by 2 or 6). Spiegelhalter's approach yielded very high false-alarm rates, except for the multiplicative version in tiny samples. Laney's approach produced the fewest false alarms, but could not detect the outlier in samples of size 10 with small proportions, and regardless of sample size with large proportions. The median rule performed similarly. Our approach proved the best overall: slightly less liberal than the median rule with small proportions and the only generally useful one for large proportions. Further research should explore the theoretical relations between Spiegelhalter's, Laney's and our approach, and the applicability of tests/models for overdispersed proportions in statistical quality control.
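    A minimal sketch of one of the four compared approaches, Laney's p'-chart, on toy proportion data with an injected outlier. The moving-range ordering convention and the data are assumptions; the authors' own double-square-root approach is not reproduced here.

```python
# Illustrative sketch: Laney p'-chart limits for overdispersed proportions.
# For cross-sectional data the unit ordering used by the moving-range
# estimate is arbitrary; the given order is used here by convention.
import numpy as np

def laney_pprime_limits(counts, sizes, z=3.0):
    counts, sizes = np.asarray(counts, float), np.asarray(sizes, float)
    p = counts / sizes
    pbar = counts.sum() / sizes.sum()
    sd_i = np.sqrt(pbar * (1 - pbar) / sizes)        # binomial sd per unit
    z_i = (p - pbar) / sd_i                          # standardised proportions
    sigma_z = np.mean(np.abs(np.diff(z_i))) / 1.128  # moving-range overdispersion estimate
    lcl = np.clip(pbar - z * sd_i * sigma_z, 0, 1)
    ucl = np.clip(pbar + z * sd_i * sigma_z, 0, 1)
    return p, lcl, ucl

# Toy example: readmission-style ratios with one unit pushed upwards.
sizes = np.array([120, 340, 95, 410, 150, 220, 180, 90, 300, 260])
counts = np.round(sizes * 0.12).astype(int)
counts[3] = int(sizes[3] * 0.30)                     # injected outlier
p, lcl, ucl = laney_pprime_limits(counts, sizes)
print("flagged units:", np.where((p < lcl) | (p > ucl))[0])
```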
  • Fitting Data using Bspline Functions and GA and PSO Bioinspired Methods

    Authors: Angel Cobo Ortega (University of Cantabria), Alberto Luceño Vázquez (University of Cantabria), Jaime Puig-Pey Echebeste (University of Cantabria)
    Primary area of focus / application: Mining
    Keywords: Bioinspired methods, Genetic Algorithms, Particle Swarm Optimization, Regression, CAD
    Submitted at 9-Apr-2012 13:16 by Jaime Puig-Pey
    Accepted (view paper)
    10-Sep-2012 16:50 Fitting Data using Bspline Functions and GA and PSO Bioinspired Methods
    In this work we use Bspline functions to fit data with a function y=f(x;p) in a regression-model framework. Because of the geometrical adaptability of these piecewise polynomials, they are very useful for fitting profiles of quality variables in statistical process control, or in Computer Aided Geometric Design when performing inverse engineering processes.
    The parameter vector p in f(x;p) contains the coefficients of a linear combination of Bspline basis functions, the vector of knots, and the degree of the polynomials in the Bspline basis. Whereas f(x;p) is linear with respect to the first group of parameters, the coefficients, it is nonlinear with respect to the remaining parameters.
    This nonlinearity, together with the possibly large number of parameters, makes the fitting task considerably more difficult than in the linear case. We compare two bioinspired procedures based on Genetic Algorithms (GA) and Particle Swarm Optimization (PSO). The fitting process evolves populations of knot vectors, so that the linear coefficients are fitted conditional on the selected values of the nonlinear parameters, until the quadratic error is minimized.
    We examine some examples of data points, showing the performance of both the GA and PSO techniques. Twenty runs are performed for each example, fitting a Bspline polynomial with a given degree and number of knots in the definition interval. Fitting-quality measures are reported to aid the comparison of GA versus PSO.
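    A minimal sketch of the conditional-linear scheme described above: for each candidate interior-knot vector, the Bspline coefficients are obtained by linear least squares (here via scipy's make_lsq_spline) and the knot positions are searched with a small particle swarm. The degree, swarm settings and test signal are assumptions, not the authors' code.

```python
# Illustrative sketch: PSO over interior knots, least-squares coefficients
# conditional on each candidate knot vector.
import numpy as np
from scipy.interpolate import make_lsq_spline

def sse_for_knots(x, y, interior, k=3):
    t = np.r_[[x[0]] * (k + 1), np.sort(interior), [x[-1]] * (k + 1)]
    try:
        spl = make_lsq_spline(x, y, t, k)            # linear step: coefficients given knots
        return np.sum((y - spl(x)) ** 2)
    except ValueError:                               # infeasible knot placement
        return np.inf

def pso_knots(x, y, n_knots=4, n_particles=20, n_iter=60, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = x[1], x[-2]
    pos = rng.uniform(lo, hi, (n_particles, n_knots))  # particles = interior-knot vectors
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([sse_for_knots(x, y, p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        val = np.array([sse_for_knots(x, y, p) for p in pos])
        better = val < pbest_val
        pbest[better], pbest_val[better] = pos[better], val[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return np.sort(gbest), pbest_val.min()

# Toy profile data.
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x**2) + np.random.default_rng(1).normal(0, 0.05, x.size)
knots, sse = pso_knots(x, y)
print("best interior knots:", np.round(knots, 3), " SSE:", round(sse, 4))
```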
  • Balancing Interpretation and Prediction Accuracy in Classification and Regression using Local Correlation Information

    Authors: Marco S. Reis (University of Coimbra)
    Primary area of focus / application: Process
    Keywords: partial correlation, clustering, classification, regression, Generalized Topological Overlap Measure, linear discriminant analysis, ordinary/partial least squares
    Submitted at 9-Apr-2012 17:08 by Marco P. Seabra dos Reis
    Accepted
    10-Sep-2012 12:10 Balancing Interpretation and Prediction Accuracy in Classification and Regression using Local Correlation Information
    Current methodologies for conducting classification and regression are strongly centred on optimizing estimation-accuracy metrics, leaving the interpretation of the results as a secondary concern. However, there is an increasing number of applications where interpretation plays a central role in the analysis and constitutes a major outcome (e.g., inferring relevant associations in the analysis of complex systems in order to understand their operation). In this communication, we present an integrated framework for regression and classification with interpretation-oriented features built in. These features result from inducing local associations between variables and using the resulting network structure to form, in a robust way, modules of associated variables. The network features constrain the predictive space in order to introduce interpretable elements into the final model. We have found that introducing these constraints does not usually compromise the methods' performance and, in fact, quite often leads to better predictive results, meaning that it is indeed possible to improve current methods at both the interpretation and accuracy levels.
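    A minimal sketch of the general idea only (local association structure, then variable modules, then an interpretable predictive model). The abstract's use of partial correlations and the Generalized Topological Overlap Measure is not reproduced; the module representation, clustering method and synthetic data below are assumptions.

```python
# Illustrative sketch: correlation network -> variable modules -> regression
# on module representatives.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def module_regression(X, y, n_modules=3):
    # 1. Local association structure: absolute correlations between variables.
    C = np.abs(np.corrcoef(X, rowvar=False))
    # 2. Group variables into modules by clustering the dissimilarity 1 - |corr|.
    D = squareform(1 - C, checks=False)
    labels = fcluster(linkage(D, method="average"), n_modules, criterion="maxclust")
    # 3. One interpretable feature per module: the mean of its standardised variables.
    Xs = (X - X.mean(0)) / X.std(0)
    M = np.column_stack([Xs[:, labels == g].mean(axis=1) for g in np.unique(labels)])
    # 4. Ordinary least squares on the module representatives.
    Md = np.column_stack([np.ones(len(M)), M])
    coef = np.linalg.lstsq(Md, y, rcond=None)[0]
    return labels, coef

# Synthetic data: 9 variables in 3 correlated blocks, y driven by two of the blocks.
rng = np.random.default_rng(0)
base = rng.normal(size=(300, 3))
X = np.repeat(base, 3, axis=1) + 0.3 * rng.normal(size=(300, 9))
y = 2 * base[:, 0] - base[:, 2] + 0.2 * rng.normal(size=300)
labels, coef = module_regression(X, y)
print("module labels:", labels)
print("intercept + module coefficients:", np.round(coef, 2))
```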