ENBIS-7 in Dortmund

24 – 26 September 2007

My abstracts


The following abstracts have been accepted for this event:

  • Use of Experimental Design to analyze a Neck Forming Process

    Authors: J. Kunert , E. Tekkaya , L. Kwiatkowski , O. Melsheimer and S. Straatmann (University of Dortmund, Dortmund, Germany)
    Primary area of focus / application:
    Submitted at 24-Jun-2007 20:20 by
    Neck Forming is a production process for achieving a diameter reduction in cylindrical bodies. The process depends on many different factors and is, at least at the current state of knowledge, too complex for an analytical description of the basic forming mechanisms. We have to perform physical experiments, because the simulation of incremental forming processes such as Neck Forming is very time-consuming. With these experiments we investigate the main structure of the process.

    The approach we chose starts with the identification of significant influencing variables using a fractional factorial experiment. For process optimization and robustification we apply methods that have already proved their usefulness for another incremental forming process, namely sheet metal spinning.

    In this talk, we present the methodology with the help of an example, in which we neck-form straight bead welded steel pipes in an inner range to achieve a given geometry.
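The screening step described above can be sketched as follows. The factor names and the simulated response are hypothetical illustrations, not the authors' data or design:

```python
# Sketch of a two-level fractional factorial screening step, assuming
# hypothetical factor names and a simulated response (not the authors' data).
import itertools
import numpy as np

# 2^(4-1) half fraction with generator D = ABC (so D is aliased with ABC).
factors = ["feed_rate", "tool_angle", "lubrication", "wall_thickness"]
base = list(itertools.product([-1, 1], repeat=3))
design = np.array([[a, b, c, a * b * c] for a, b, c in base])

rng = np.random.default_rng(0)
# Simulated diameter reduction: only the first factor has a real effect.
y = 5.0 + 1.2 * design[:, 0] + rng.normal(0.0, 0.1, len(design))

# Main effect of each factor: mean response at +1 minus mean response at -1.
effects = {f: y[design[:, i] == 1].mean() - y[design[:, i] == -1].mean()
           for i, f in enumerate(factors)}
for name, eff in sorted(effects.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {eff:+.2f}")
```

With eight runs instead of sixteen, the half fraction still estimates all four main effects, at the cost of aliasing them with higher-order interactions.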
  • Improving alarm systems by classification

    Authors: Wiebke Sieben (University of Dortmund, Dortmund, Germany)
    Primary area of focus / application:
    Submitted at 25-Jun-2007 10:08 by
    False alarms are a problem in many monitoring systems, especially in
    intensive care. In situations where classical process control methods
    cannot be applied, existing alarm systems can be improved by
    classification procedures. This is the case when no "in control" state
    exists or when the process to be monitored is high-dimensional, complex
    and possibly autocorrelated. Annotations to the existing alarm system,
    containing an expert's opinion on whether an alarm is to be considered
    "true" or "false", can be used as input for data-driven alarm rule
    generation. We study the use of ensembles of decision trees as
    classifiers for this problem, while taking into account the unequally
    severe consequences of misclassifying true alarms as false and false
    alarms as true. A procedure based on the analogy between this
    classification problem and statistical testing is presented and applied
    to real data.

    The data come from a standard monitoring system at an intensive care
    unit, where alarms, mostly based on univariate signals, are triggered
    when a physiological variable crosses a preset threshold. Such standard
    monitoring systems are known to produce a high number of false alarms
    that distract and annoy the caregivers. With our new procedure, the
    expected sensitivity of the resulting alarm system can be adjusted to
    the monitoring environment. This is demonstrated for sensitivities of
    95 percent and 98 percent, for which a false alarm reduction of 46%
    and 30%, respectively, is achieved on average.
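A minimal sketch of the threshold-tuning idea: train a tree ensemble on annotated alarms and fix the score threshold so that a target share of true alarms is kept. The features, model choice and simulated annotations are illustrative assumptions, not the study's actual procedure or data:

```python
# Sketch: classify annotated alarms with a random forest and choose the
# score threshold that achieves a target sensitivity for true alarms.
# Data and features are simulated, not from the study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 5))                       # hypothetical alarm features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n) > 1).astype(int)  # 1 = true alarm

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
scores = clf.predict_proba(X)[:, 1]

# Pick the threshold so that roughly 95% of the true alarms are kept.
target_sensitivity = 0.95
thr = np.quantile(scores[y == 1], 1 - target_sensitivity)
kept = scores >= thr

sensitivity = kept[y == 1].mean()                 # share of true alarms kept
false_alarm_reduction = 1 - kept[y == 0].mean()   # share of false alarms suppressed
print(sensitivity, false_alarm_reduction)
```

In practice the threshold would be chosen on held-out annotated data rather than on the training set, so the numbers here are optimistic.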
  • The ENBIS papers database

    Authors: Christopher McCollin (Nottingham Trent University, Nottingham, UK)
    Primary area of focus / application:
    Submitted at 25-Jun-2007 10:09 by
    All the details of ENBIS papers, such as author, organisation, title, main
    content, etc., were collected in a single Excel spreadsheet to derive some
    preliminary results on the final take-up of presented papers, main authors,
    main subject headings, and so on. These results will be presented together
    with the present state of the database and the scope for future work.
  • Six Sigma, the good, the bad and the very bad

    Authors: Jonathan Smyth-Renshaw
    Primary area of focus / application:
    Submitted at 28-Jun-2007 13:13 by
    This presentation will examine my personal view of Six Sigma. On my last visit to an ENBIS conference I heard a very negative presentation on Six Sigma. This was a concern, since, at long last, business is starting to wake up to the power of data, the use of statistics and the practice of data management. I wish to present my art gallery of Six Sigma images: the good, the bad and the very bad.
  • Analysis of Repeated Measures Data that are Autocorrelated at Lag(k)

    Authors: Serpil Aktas, Melike Kaya (Hacettepe University, Ankara, Turkey)
    Primary area of focus / application:
    Submitted at 28-Jun-2007 13:34 by Serpil Aktas
    In repeated measures analysis, several measurements are taken on the same
    experimental unit. The subjects are assumed to be drawn as a random sample
    from a homogeneous population, and observations of a variable are repeated,
    usually over time. When data are taken in sequence, they tend to be serially
    correlated; that is, current measurements are correlated with past
    measurements. In a repeated measures design, within-subject measurements are
    likely to be correlated, whereas between-subject measurements are likely to
    be independent.
    Suppose that Y1, Y2, ..., Yt are random variables taken at t successive time
    points. Serial dependency can occur between Yt and Yt-1, and the
    corresponding correlation coefficients are called autocorrelation
    coefficients. The distance between the observations that are so correlated is
    referred to as the lag. The covariance structure of repeated measures
    involves both between-subject and within-subject components. Usually, the
    between-subject errors are assumed to be independent and the within-subject
    errors to be correlated. When the analysis of variance shows significant
    differences between the factors, multiple comparison tests are used. In these
    procedures the standard error of the mean is estimated by dividing the
    MSwithin from the entire ANOVA by the number of observations in the group and
    then taking the square root of that quantity, but the standard error of the
    mean needs an autocorrelation correction when the data are autocorrelated. In
    this study, a simulation study was performed to illustrate the behaviour of
    the post hoc procedures when the data are lag(k) autocorrelated, and the
    results were compared with those of the usual procedures.
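The correction of the standard error of the mean can be sketched for the lag-1 case. The AR(1) simulation and the particular correction formula are illustrative assumptions, not the authors' simulation design:

```python
# Sketch of the autocorrelation correction for the standard error of the
# mean, assuming an AR(1) structure at lag 1 (illustrative only).
import numpy as np

rng = np.random.default_rng(2)

def simulate_ar1(n, rho, sigma=1.0):
    """Simulate n observations from a stationary AR(1) process."""
    x = np.empty(n)
    x[0] = rng.normal(0.0, sigma / np.sqrt(1.0 - rho**2))
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.normal(0.0, sigma)
    return x

def corrected_se(x):
    """Standard error of the mean with an AR(1) autocorrelation correction."""
    n = len(x)
    rho = np.corrcoef(x[:-1], x[1:])[0, 1]        # sample lag-1 autocorrelation
    k = np.arange(1, n)
    inflation = 1.0 + 2.0 * np.sum((1.0 - k / n) * rho**k)
    return np.sqrt(np.var(x, ddof=1) / n * inflation)

x = simulate_ar1(200, rho=0.6)
naive = np.sqrt(np.var(x, ddof=1) / len(x))
corrected = corrected_se(x)
print(naive, corrected)   # positive autocorrelation inflates the standard error
```

Ignoring the inflation factor understates the standard error for positively autocorrelated data, which makes the usual multiple comparison tests too liberal.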
  • Robust elimination of atypical data points in small samples and high dimensions

    Authors: Florian Sobieczky, Birgit Sponer and Gerhard Rappitsch
    Primary area of focus / application:
    Submitted at 6-Jul-2007 16:46 by Gerhard Rappitsch
    A method of eliminating observations with low statistical depth is proposed, leading to improved affine invariant location estimation. The technique particularly addresses the situation of small samples and high dimensionality of the estimation space, a setting in which the conventional notion of an outlier is not appropriate. Removal of atypical observations is achieved via pruning the longest branches of a spanning tree of the sample. The tree depends on the statistical depth of the observations. If halfspace depth is chosen as the relevant statistical depth function, the algorithm inherits the characteristic robustness and high breakdown properties [see D. Donoho and M. Gasko] while being highly efficient in high dimensions [see P. J. Rousseeuw and A. Struyf]. However, it goes beyond the depth-trimming discussed recently in the literature [see Y. Zuo] and thereby gains the essential feature for successfully processing small samples. The validation of the proposed method is performed by testing a set of multivariate distributions (e.g. multi-normal and t-distribution) and comparing the higher order moments before and after elimination. The impact of the proposed methodology is shown for industrial examples in a production environment where early elimination of atypical observations is important for further statistical post-processing.

    In particular, we demonstrate the improvement in the case of correlation
    estimation for various multivariate distributions. For this application,
    special attention has to be paid to the influence of atypical observations
    on the geometry of the estimated contour lines of the underlying
    distribution. Further applications from the semiconductor industry are
    shown, relating the correlation of electrically measured performance
    parameters after fabrication (e.g. threshold voltage) to inline
    measurements of process parameters (e.g. oxide thickness).

    D. L. Donoho, M. Gasko: 'Breakdown properties of location estimates based on halfspace depth and projected outlyingness', Annals of Statistics, 1992, Vol. 20, No. 4, pp. 1803-1827.

    P. J. Rousseeuw, A. Struyf: 'Computing location depth and regression depth in higher dimensions', Statistics and Computing, 1998, Vol. 8, pp. 193-203.

    Y. Zuo: 'Multidimensional trimming based on projection depth', Annals of Statistics, 2006, Vol. 34, No. 5, pp. 2211-2251.
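The depth-and-spanning-tree idea can be illustrated with a small sketch. The projection-based depth approximation and the single-edge pruning rule below are simplifications of the method described in the abstract, and all data are simulated:

```python
# Sketch: score observations by an approximate (projection-based) depth,
# build a minimum spanning tree of the sample, cut its longest edge and
# keep the component whose points are, on average, deeper.
import numpy as np
from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 4))
X[0] += 8.0                                   # one atypical observation

def approx_depth(X, n_dir=200):
    """Approximate halfspace depth via random one-dimensional projections."""
    n = len(X)
    dirs = rng.normal(size=(n_dir, X.shape[1]))
    depth = np.full(n, 1.0)
    for d in dirs:
        p = X @ d
        below = (p[:, None] <= p[None, :]).mean(axis=0)
        depth = np.minimum(depth, np.minimum(below, 1.0 - below + 1.0 / n))
    return depth

# Minimum spanning tree of the pairwise distances; cut the longest edge.
mst = minimum_spanning_tree(squareform(pdist(X))).toarray()
i, j = np.unravel_index(np.argmax(mst), mst.shape)
mst[i, j] = 0.0
n_comp, labels = connected_components(mst, directed=False)

# Keep the component whose observations are, on average, deeper.
depth = approx_depth(X)
keep_label = max(range(n_comp), key=lambda c: depth[labels == c].mean())
kept = X[labels == keep_label]
print(len(X), "->", len(kept))                # the atypical point is pruned
```

Because the decision is based on depth rather than distance to a mean, the pruning is affine invariant in spirit and does not require the atypical points to be far away in any single coordinate.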