Overview of all Abstracts

The following PDF contains the abstract book as it will be handed out at the conference. It is provided here for browsing and for later reference. All abstracts as PDF

My abstracts


The following abstracts have been accepted for this event:

  • Bayesian Network in Customer Satisfaction Survey

    Authors: Silvia Salini (1), Ron Kenett (2)
    Primary area of focus / application:
    Submitted at 2-Sep-2007 14:38 by
    A Bayesian Network is a probabilistic graphical model that represents a set of
    variables and their probabilistic dependencies. Formally, Bayesian Networks are
    directed acyclic graphs whose nodes represent variables and whose arcs encode the
    conditional dependencies between the variables. Nodes can represent any kind of
    variable, be it a measured parameter, a latent variable or a hypothesis; they are
    not restricted to representing random variables, which forms the "Bayesian" aspect
    of a Bayesian Network. Efficient algorithms exist that perform inference and
    learning in Bayesian Networks. Bayesian Networks that model sequences of variables
    are called Dynamic Bayesian Networks. Harel et al. (2007) provide a comparison
    between Markov Chains and Bayesian Networks in the analysis of web usability from
    e-commerce data. A comparison of regression models, SEMs and Bayesian Networks is
    presented in Anderson et al. (2004).

    In this paper we apply Bayesian Networks to the analysis of Customer Satisfaction
    Surveys and demonstrate the potential of the approach. Bayesian Networks offer
    advantages in implementing managerially focused models over other statistical
    techniques designed primarily for evaluating theoretical models. These advantages
    include providing a causal explanation using observable variables within a single
    multivariate model and analysing the nonlinear relationships contained in ordinal
    measurements. Further advantages include the ability to conduct probabilistic
    inference for prediction and diagnostics, with an output metric that can be
    understood by managers and academics alike.


    (1) Department of Economics, Business and Statistics
    University of Milan, Italy
    (2) KPA Ltd., Israel and University of Torino, Torino, Italy
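
    The diagnostic and predictive inference mentioned in the abstract can be sketched in a few lines. The network below (Quality -> Satisfied -> Repurchase) and all its probabilities are purely illustrative assumptions, not the authors' model; inference is done by brute-force enumeration over the joint distribution.

```python
# A minimal Bayesian Network sketch (hypothetical numbers) for a customer
# satisfaction setting: Quality -> Satisfied -> Repurchase.
from itertools import product

# Conditional probability tables (all values are illustrative assumptions).
p_quality = {True: 0.7, False: 0.3}      # P(Quality)
p_satisfied = {True: 0.9, False: 0.2}    # P(Satisfied | Quality)
p_repurchase = {True: 0.8, False: 0.1}   # P(Repurchase | Satisfied)

def joint(q, s, r):
    """P(Quality=q, Satisfied=s, Repurchase=r) via the chain rule of the DAG."""
    pq = p_quality[q]
    ps = p_satisfied[q] if s else 1 - p_satisfied[q]
    pr = p_repurchase[s] if r else 1 - p_repurchase[s]
    return pq * ps * pr

def posterior_satisfied(repurchase):
    """P(Satisfied=True | Repurchase=repurchase) by enumeration."""
    num = sum(joint(q, True, repurchase) for q in (True, False))
    den = sum(joint(q, s, repurchase) for q, s in product((True, False), repeat=2))
    return num / den

# Diagnostic inference: observing a repurchase raises the belief in satisfaction.
print(round(posterior_satisfied(True), 3))
print(round(posterior_satisfied(False), 3))
```

    In a real survey application the tables would be learned from the questionnaire data rather than assumed.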

  • New Adaptive EWMA Control Charts

    Authors: Seiichi YASUI, Yoshikazu OJIMA (Tokyo University of Science, Japan)
    Primary area of focus / application:
    Submitted at 3-Sep-2007 03:03 by
    Exponentially weighted moving average (EWMA) control charts are more powerful
    for detecting small shifts than Shewhart-type control charts. Furthermore, the
    average time to detect a shift can be shortened if the sampling interval and/or
    the sample size is adapted according to the value of the plotted statistic. In an
    EWMA control chart the plotted statistic is the weighted average of the previous
    plotted statistic and the current observation; hence the weight, too, can be
    changed according to the value of the plotted statistic. In this study, an
    adaptive procedure for the weight in an EWMA control chart is proposed. The
    proposed adaptive EWMA control chart has warning limits and control limits; if
    the plotted statistic exceeds a warning limit, the weight is changed. We evaluate
    the performance in detecting several out-of-control situations by Monte Carlo
    simulation. The adaptive EWMA control chart is more powerful for detecting small
    shifts than the traditional EWMA control chart.
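
    The weight-switching idea described above can be sketched as follows. The limits and weight values are illustrative assumptions, not the authors' exact design: the smoothing weight is increased whenever the plotted statistic crosses a warning limit.

```python
# Sketch of an adaptive EWMA chart: the weight lambda is enlarged once the
# plotted statistic exceeds a warning limit, speeding up shift detection.
import random

def adaptive_ewma(data, lam_small=0.1, lam_big=0.4, warn=0.5, control=1.0):
    """Return the index of the first out-of-control signal, or None."""
    z = 0.0  # EWMA statistic, started at the in-control target 0
    for i, x in enumerate(data):
        lam = lam_big if abs(z) > warn else lam_small  # adapt the weight
        z = lam * x + (1 - lam) * z                    # EWMA update
        if abs(z) > control:
            return i
    return None

random.seed(1)
# In-control data (mean 0), followed by a small sustained shift of +0.8 sigma.
data = [random.gauss(0, 1) for _ in range(50)] + [random.gauss(0.8, 1) for _ in range(100)]
print(adaptive_ewma(data))
```

    A Monte Carlo study of the kind the abstract mentions would repeat such runs many times and compare average run lengths against a fixed-weight EWMA chart.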
  • The use of the bootstrap in the analysis of the 12-run Plackett-Burman design.

    Authors: Anthony Cossari
    Primary area of focus / application:
    Submitted at 3-Sep-2007 10:14 by
    In recent years the bootstrap has been successfully applied in the analysis of designed experiments, in particular for replicated factorial designs. In this paper, a bootstrap-based analysis for discovering the active effects in unreplicated fractional designs is proposed, taking advantage of the projection properties of the design considered. Emphasis is on the 12-run Plackett-Burman design. For each of the collapsed (replicated) complete designs, the bootstrap is applied with the aim of selecting the most likely regression model. Some examples and simulations are used to illustrate the ideas.
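
    The key step, bootstrapping a replicated design to decide whether an effect is active, can be sketched as below. The data and the 2^2 design are made up for illustration and are much simpler than the paper's 12-run Plackett-Burman analysis: replicates are resampled within each design point and an effect is declared active if its bootstrap interval excludes zero.

```python
# Hedged sketch of bootstrap effect screening on a small replicated design.
import random

random.seed(0)
# Replicated 2^2 design: (A, B) levels -> list of replicate responses.
runs = {
    (-1, -1): [9.8, 10.2, 10.1],
    ( 1, -1): [14.9, 15.3, 15.1],   # factor A clearly active
    (-1,  1): [10.0, 9.9, 10.3],
    ( 1,  1): [15.2, 14.8, 15.0],
}

def effect_A(samples):
    """Main effect of A: mean response at A=+1 minus mean at A=-1."""
    hi = [y for (a, _), ys in samples.items() if a == 1 for y in ys]
    lo = [y for (a, _), ys in samples.items() if a == -1 for y in ys]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

def bootstrap_interval(n_boot=2000, alpha=0.05):
    """Percentile bootstrap interval for the effect of A."""
    stats = []
    for _ in range(n_boot):
        resampled = {pt: [random.choice(ys) for _ in ys] for pt, ys in runs.items()}
        stats.append(effect_A(resampled))
    stats.sort()
    return stats[int(n_boot * alpha / 2)], stats[int(n_boot * (1 - alpha / 2))]

lo, hi = bootstrap_interval()
print(round(lo, 2), round(hi, 2))  # interval excludes 0, so A is declared active
```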
  • The estimation of the role of system & statistical thinking in decision making.

    Authors: Adler Yu., Hunuzidi E. and Shper V.
    Primary area of focus / application:
    Submitted at 3-Sep-2007 10:21 by Vladimir Shper
    The goal of this paper is to suggest very simple quantitative estimates of the probabilities of wrong decisions made by managers who do not use system and statistical thinking. The decision-making model we discuss follows the well-known Shainin green-yellow-red rule. It is shown that under some conditions (a normal distribution of the critical characteristics) the probability of a wrong decision can reach 50%. The analysis is based on the system archetypes suggested in our previous paper at ENBIS 2006.
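
    A back-of-the-envelope version of the point above: if a normally distributed critical characteristic is judged against green-yellow-red zone limits, the chance that a single observation lands outside the green zone can approach one half. The zone limits and process parameters below are our own illustrative assumptions, not the authors' model.

```python
# Probability that one observation of a normal characteristic falls outside
# an assumed "green" zone, using the standard normal CDF via math.erf.
import math

def norm_cdf(x, mu=0.0, sigma=1.0):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def p_outside_green(mu, sigma, green_lo, green_hi):
    """Probability a single observation falls outside the green zone."""
    return 1 - (norm_cdf(green_hi, mu, sigma) - norm_cdf(green_lo, mu, sigma))

# A process whose mean sits on the green/yellow boundary: roughly half of
# all observations land outside the green zone.
print(round(p_outside_green(mu=1.0, sigma=0.5, green_lo=-1.0, green_hi=1.0), 3))
```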
  • A Control Chart for the Desirability Index

    Authors: Heike Trautmann and Claus Weihs (University of Dortmund, Dortmund, Germany)
    Primary area of focus / application:
    Submitted at 4-Sep-2007 12:21 by
    The Desirability Index (DI) is a multiobjective optimization method in
    industrial quality control which incorporates the decision makers' a priori
    preferences regarding the quality criteria and transforms the multiobjective
    problem into a univariate one. Settings of the factors influencing the
    process are selected that lead to the highest possible DI value and
    therefore to maximum process quality. Until now the DI has been used solely
    for optimization purposes. A straightforward question, however, is whether
    the maximum DI value can be maintained in the ongoing process. For this
    purpose a DI control chart is introduced, which proves to be superior to
    existing charts. Additionally, an innovative procedure for the analysis of
    out-of-control signals is presented.
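
    The DI referred to above is commonly computed as the geometric mean of individual desirability functions d_i in [0, 1]; the sketch below uses simple one-sided Derringer-type desirabilities with made-up targets, as one plausible instance of the index (not necessarily the authors' exact formulation).

```python
# Desirability Index as the geometric mean of individual desirabilities.
import math

def desirability_larger_is_better(y, lo, hi, shape=1.0):
    """d = 0 below lo, 1 above hi, smooth ramp in between."""
    if y <= lo:
        return 0.0
    if y >= hi:
        return 1.0
    return ((y - lo) / (hi - lo)) ** shape

def desirability_index(ds):
    """Geometric mean of the individual desirabilities."""
    if any(d == 0 for d in ds):
        return 0.0
    return math.exp(sum(math.log(d) for d in ds) / len(ds))

# Two quality criteria, both "larger is better" (illustrative targets):
d1 = desirability_larger_is_better(8.0, lo=5.0, hi=10.0)   # 0.6
d2 = desirability_larger_is_better(9.0, lo=5.0, hi=10.0)   # 0.8
print(round(desirability_index([d1, d2]), 3))
```

    A DI control chart would then plot this index for each production period and signal when it drops below a control limit derived from its in-control distribution.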
  • Accuracy of the End-to-End performance estimation in logistic service environments

    Authors: Klaus-Ruediger Knuth (Quotas GmbH, Hamburg, Germany)
    Primary area of focus / application:
    Submitted at 4-Sep-2007 13:12 by
    An important part of the QoS in logistic services is the ability to distribute items from one point to another within a defined timeframe. This is called the service standard. Systems that measure compliance with this standard are often panel based.

    The result of the measurement can take for example the following form:
    "In 2005 95% of all letters sent from sender panellists have been received by receiver panellists on the next day of service".

    All measurement systems are sized according to given accuracy requirements. The basis is an appropriate estimate of the variance of the on-time performance estimator.

    CEN, the European standardisation network, has almost finished the development of recommendations on the calculation of this variance. Special difficulties that had to be overcome were:

    · All items for any sender, any receiver and any sender-receiver relation may be correlated;

    · On-time performance is usually at a level well above 90%, where the simple normal approximation is weak;

    · The sampling design is usually disproportional, leading to weighted results.

    Specifics: Oral presentation
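
    The third difficulty, weighting under a disproportional design, can be illustrated with a deliberately simplified sketch (our own toy calculation, not the CEN formulas): a weighted on-time proportion and a binomial variance approximation using Kish's effective sample size, which shows how unequal weights inflate the variance.

```python
# Weighted on-time performance estimate and an approximate variance under a
# disproportional sampling design (illustrative data and simplification).

def weighted_on_time(weights, on_time_flags):
    """Weighted estimate of the on-time proportion."""
    total = sum(weights)
    return sum(w for w, ok in zip(weights, on_time_flags) if ok) / total

def approx_variance(weights, p_hat):
    """Binomial variance with Kish's effective sample size n_eff."""
    n_eff = sum(weights) ** 2 / sum(w * w for w in weights)
    return p_hat * (1 - p_hat) / n_eff

weights = [1.0] * 80 + [3.0] * 20          # disproportional design weights
flags = [True] * 95 + [False] * 5          # 95 of 100 test items on time
p = weighted_on_time(weights, flags)
print(round(p, 3), round(approx_variance(weights, p), 6))
```

    Note that this sketch ignores the correlation between items from the first bullet, which is precisely what makes the real variance calculation difficult.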
  • Monitoring Nonlinear Profiles using Support Vector Machines

    Authors: Stelios Psarakis, Javier M. Moguerza and Alberto Munoz
    Primary area of focus / application:
    Submitted at 4-Sep-2007 15:30 by
    Support Vector Machines (SVMs) are powerful classification and regression procedures. SVMs arose in the early nineties as optimal margin classifiers in the context of Vapnik’s Statistical Learning Theory. During the last few years SVMs have been successfully applied to real-world data analysis problems, usually providing improved results compared to other techniques. This methodology can be used within the Statistical Process Control (SPC) framework. In this work we focus on the use of SVMs for monitoring techniques applied to nonlinear profiles.
  • On using bootstrap methods for understanding empirical loss data and dynamic financial analysis

    Authors: Grigore ALBEANU (UNESCO Chair in Information Technologies at University of Oradea), Henrik Madsen (IMM, DTU), Manuela Ghica (Spiru Haret University), Poul Thyregod (IMM, DTU) and F. Popentiu-Vladicescu (Univ. of Oradea)
    Primary area of focus / application:
    Submitted at 4-Sep-2007 15:34 by
    Computer-intensive methods for estimation assessment provide valuable information concerning the adequacy of applied probabilistic models. The bootstrap method is a computationally intensive approach to understanding empirical data, based on resampling and statistical estimation. It is a powerful tool, especially when only a small data set is available to predict the behaviour of systems or processes. This paper describes some case studies based on Efron-type bootstrap approaches [1] for modelling loss distributions [2] and for general dynamic financial analysis [3]. The case studies are inspired by the risk management field. The research builds on previous theoretical developments in accuracy assessment [4], reliability estimation [5] and risk exchange modelling [6].


    [1] Efron B., Computer-Intensive Methods in Statistical Regression, Siam Review, 30, 3, 421-449, (1988).

    [2] Hogg, R.V. and Klugman, S.A.: Loss Distributions, John Wiley & Sons, New York, 1984.

    [3] Kaufmann, R., Gadmer, A. and Kett, R.: Introduction to Dynamic Financial Analysis, ETH Zurich, IFOR, 1999, http://www.ifor.math.ethz.ch/publications/1999_dynamicfinancialanalysis.pdf .

    [4] Albeanu G.: Resampling Simultaneous Confidence Bands for Nonlinear Explicit Regression Models, Mathematical Reports, 50(5-6), 289-295, (1998).

    [5] Albeanu G. and Popentiu F.: On the Bootstrap Method: Software Reliability Assessment and Simultaneous Confidence Bands, Annals of Oradea University, Energetics Series, 7(1), 109-113, (2001).

    [6] Ghica, M.: A risk exchange model with a mixture exponential utility function, Annals of Spiru Haret University, Mathematics and Informatics Series, 2006.
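
    The Efron-type resampling referred to above can be sketched for an empirical loss sample. The loss figures are made up for illustration; the procedure estimates a percentile bootstrap interval for the mean loss by resampling with replacement.

```python
# Percentile bootstrap interval for the mean of a small loss sample.
import random

random.seed(42)
losses = [1.2, 0.4, 3.1, 0.9, 7.5, 2.2, 0.3, 1.8, 4.6, 1.1]  # illustrative claims

def bootstrap_mean_ci(data, n_boot=5000, alpha=0.05):
    """Resample with replacement and return a (lo, hi) percentile interval."""
    means = []
    for _ in range(n_boot):
        sample = [random.choice(data) for _ in data]
        means.append(sum(sample) / len(sample))
    means.sort()
    return means[int(n_boot * alpha / 2)], means[int(n_boot * (1 - alpha / 2))]

lo, hi = bootstrap_mean_ci(losses)
print(round(lo, 2), round(hi, 2))
```

    The same resampling loop applies to other loss-distribution functionals (quantiles, tail probabilities) by replacing the mean with the statistic of interest.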

  • Investigating the Impact on Product Quality of Raw Material Variability for a Chemical Process: A DoE Approach

    Authors: Ewan Polwart (Fujifilm Imaging Colorants Ltd)
    Primary area of focus / application:
    Submitted at 5-Sep-2007 07:43 by
    Determining the impact on product quality of batch-to-batch variation in raw
    materials is important for specification setting, for establishing critical
    parameters and for process understanding in chemical and biochemical
    processes. Where historical data on the raw material variability exist, they
    can be used to examine the impact on product quality. Where changes in the
    process and/or product grade occur, data mining may prove infeasible and
    experimental design may be a more suitable alternative.

    This paper will present one possible strategy for carrying out such an
    experimental design that exploits the inherent correlation within the
    characteristics of the raw material to give a usefully small number of
    experiments. Principal component analysis (PCA) was applied to the
    historical chemical analysis data for the raw material. D-optimal
    experimental design was then applied to the principal component scores to
    select batches for inclusion in the DoE.
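
    The batch-selection step can be sketched as below. The principal-component scores for the candidate batches are hypothetical, and the exhaustive search maximising det(X'X) for a first-order model in the two scores is one common D-optimality criterion, not necessarily the authors' exact algorithm.

```python
# D-optimal choice of raw-material batches from (hypothetical) PC scores.
from itertools import combinations

# Hypothetical PC1/PC2 scores for eight historical batches.
scores = [(-2.1, 0.3), (1.8, -1.2), (0.2, 2.4), (-0.5, -2.0),
          (2.3, 1.9), (-1.7, -1.5), (0.1, 0.2), (1.0, -0.4)]

def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def xtx(subset):
    """X'X for the model [1, pc1, pc2] over the chosen batches."""
    rows = [(1.0, s1, s2) for s1, s2 in subset]
    return [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]

def best_subset(candidates, k):
    """Exhaustive D-optimal choice of k batches (fine for small candidate sets)."""
    return max(combinations(candidates, k), key=lambda s: det3(xtx(s)))

chosen = best_subset(scores, 4)
print(chosen)
```

    For realistic candidate sets an exchange algorithm would replace the exhaustive search, but the criterion being maximised is the same.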
  • Why is DoE not widely used among engineers in Europe?

    Authors: Martín Tanco; Elisabeth Viles; María Jesus Alvarez; Laura Ilzarbe
    Primary area of focus / application:
    Submitted at 5-Sep-2007 08:59 by
    Engineers perform experiments and analyse data as an integral part of their job. Whether or not engineers have learned statistics, they will do statistics. However, we still have a wide gap between theoretical development of Design of Experiments (DoE) and its effective application in industries. Despite efforts by specialists in quality and statistics, DoE has yet to be applied as widely as it could and should be.

    An extensive bibliographic study was carried out to detect the barriers that keep DoE from being widely used among engineers in Europe. The barriers detected were first grouped and reduced to sixteen groups. Afterwards, a brief survey was carried out to obtain first-hand information about the importance of each barrier. Four different initiatives were undertaken in April 2007 to obtain responses from ENBIS members, which allowed us to reach not only academics but also practitioners interested in DoE. It was mainly an online survey, which is still available on the web at the following address: http://examinador.tecnun.es/mtanco/encuesta.asp.

    In the following work we present a thorough statistical analysis of this survey. The main goal of our research is to rank and group the barriers in order to suggest ideas and solutions that bring DoE closer to industry. We believe our conclusions will help to identify pitfalls and stimulate discussion of the situation in Europe.