ENBIS-11 in Coimbra

4 – 8 September 2011
Abstract submission: 1 January – 25 June 2011

My abstracts

 

The following abstracts have been accepted for this event:

  • Optical coherence tomography data analysis by support vector machines

    Authors: P. Serranho, P. Rodrigues, R. Bernardes
    Affiliation: IBILI, Faculty of Medicine, University of Coimbra and AIBILI - Association for Innovation and Biomedical Research on Light and Image
    Primary area of focus / application: Mining
    Keywords: Support Vector Machines, Segmentation, Optical Coherence Tomography, Medical Imaging, Classification, Automatic Training
    Submitted at 27-Apr-2011 16:30 by Pedro Serranho
    Accepted
    5-Sep-2011 16:37 Optical coherence tomography data analysis by support vector machines
    We will show some recent developments by our research group concerning the use of classification methods for optical coherence tomography (OCT) image analysis.
    We propose the use of support vector machines (SVM) as the basis for the segmentation of OCT retinal data. Using an appropriate set of features and an automatic labeling method for the training set based on gradient methods, we suggest a fully automatic procedure to classify each voxel of the OCT retinal volume as vitreous humour, upper retina, retinal pigment epithelium or choroid.
    Moreover, we will also illustrate the use of SVM to classify OCT volume data as healthy, diabetic retinopathy or diabetic macular edema eyes. The results show a high rate of correct classification under a leave-one-out procedure.
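
    The abstract gives no code, so the following is only a rough, self-contained sketch of the core idea: a linear SVM, trained here with the Pegasos sub-gradient method on synthetic two-feature "voxels" for a binary vitreous-vs-retina split. The features, class centers and all numbers are invented for illustration; the actual work uses a richer feature set, four tissue classes and automatically labeled training data.

```python
import numpy as np

def pegasos_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Train a linear SVM (no bias term) with the Pegasos sub-gradient method."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (w @ X[i]) < 1.0:        # margin violated: hinge-loss step
                w = (1.0 - eta * lam) * w + eta * y[i] * X[i]
            else:                              # margin satisfied: shrink only
                w = (1.0 - eta * lam) * w
    return w

# Synthetic two-feature "voxels" (say, intensity and axial gradient).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(2.0, 0.5, (100, 2)),    # retina-like voxels
               rng.normal(-2.0, 0.5, (100, 2))])  # vitreous-like voxels
y = np.array([1] * 100 + [-1] * 100)

w = pegasos_svm(X, y)
accuracy = np.mean(np.sign(X @ w) == y)
```

    A multiclass version would train one such classifier per tissue class (one-vs-rest) and label each voxel by the largest decision value.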
  • Application of a Six Variable Mixture Test Design (With a Non-Mixture variable)

    Authors: William Bettis Line
    Affiliation: DOES Institute
    Primary area of focus / application: Design and analysis of experiments
    Keywords: DOE, Mixture Experiments, Consumer responses, Round-Robin Test Method, Product Optimization, Statistical Models
    Submitted at 28-Apr-2011 23:18 by William Line
    Accepted
    5-Sep-2011 16:40 Application of a Six Variable Mixture Test Design (With a Non-Mixture variable)
    In this application a statistically designed experiment (DOE) was used to vary the six recipe variables simultaneously, while the effect of each mixture component was still assessed independently of the other components. Prior to this application, product recipes were determined by changing one factor at a time (the OFAT method). The new test design enabled measuring the interactions between the recipe variables, which could not have been done with the OFAT method of testing.

    The test design consisted of 31 recipes specially selected to be representative of all possible recipes. Test products were made using each recipe, and each recipe was compared directly to a ‘standard’ recipe, the current recipe at that time. Over 70 quality measures, R&D product variables, and consumer response variables were analyzed for each product. Statistical models were used to find the optimum product recipe.

    A taste test using a national probability sample of candy consumers was conducted to find their ‘best recipe’. Using a round-robin test design, each consumer tasted two products, a test-recipe product and the standard product, and indicated a preference. Statistical models were fitted to the data to estimate the variability of the consumer ratings and to find the optimum consumer recipe that also met quality and R&D standards.

    The project results included the discovery of an optimum recipe that was superior to the recipe in use at the time. The conclusions were applied to ingredient recipes in three different worldwide best-selling candy products, and the optimized recipe has been used successfully for many years.

    This project showed that a statistically designed mixture experiment with a non-mixture component can be used to find an optimum product according to consumers.
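
    The abstract does not state the fitted model, so as an illustration only, the sketch below fits a Scheffé quadratic mixture polynomial (a standard choice for mixture experiments, with no intercept because the components sum to one) by least squares on a toy three-component recipe. The study itself used six mixture variables plus a non-mixture variable; all coefficients and design points here are invented.

```python
import numpy as np

def scheffe_quadratic(X):
    """Expand 3-component mixture points into Scheffe quadratic terms:
    x1, x2, x3, x1*x2, x1*x3, x2*x3 (no intercept: the components sum to 1)."""
    x1, x2, x3 = X.T
    return np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])

# Simplex-lattice {3,2} design plus the centroid: 7 candidate "recipes".
X = np.array([
    [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0],   # pure components
    [0.5, 0.5, 0.0], [0.5, 0.0, 0.5], [0.0, 0.5, 0.5],   # binary blends
    [1/3, 1/3, 1/3],                                      # centroid
])

beta_true = np.array([1.0, 2.0, 3.0, 4.0, -2.0, 1.5])  # hypothetical effects
y = scheffe_quadratic(X) @ beta_true                   # noiseless responses

beta_hat, *_ = np.linalg.lstsq(scheffe_quadratic(X), y, rcond=None)
```

    With the model in hand, the optimum recipe would be found by maximizing the fitted polynomial over the simplex, subject to the quality and R&D constraints mentioned above.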

    The author acknowledges the contributions of Dr. George E. P. Box, who suggested the statistical test design and guided the statistical analysis, particularly in the multi-response optimization phase to find the best recipe.

    Bibliography

    Box, G. E. P. and Draper, N. R., Empirical Model-Building and Response Surfaces. Wiley, 1987.
  • Application of a Second and Third Order Test Design in Six Dimensions with an Orthogonality Constraint

    Authors: Norman R. Draper, PhD; Michael J. Morton, PhD; William Bettis Line
    Affiliation: DOES Institute
    Primary area of focus / application: Design and analysis of experiments
    Keywords: DOE, Higher Order Models, Orthogonality Constraint, NASA Application, Aerodynamic Parameters, Wind Tunnel Tests, Force Moment Balance
    Submitted at 28-Apr-2011 23:27 by William Line
    Accepted
    5-Sep-2011 12:30 Application of a Second and Third Order Test Design in Six Dimensions with an Orthogonality Constraint
    Prior to this application there was a modest literature describing third-order designs, and the references for six-variable designs were scarcer still. The approach taken in this application was to cross a three-dimensional design in one set of variables with a two-dimensional design in its orthogonal components.
    The NASA application described in this paper was in the wind tunnel testing of aerodynamic forces. In the wind tunnel testing of aircraft, including the space shuttle, a force moment balance is used to test aerodynamic effects. The Wright brothers developed the first force moment balance in 1903. Many force balances are used today, including several by NASA, Lockheed Martin, and many universities.
    Prior to this test design, the one-factor-at-a-time method of testing was used at NASA to calibrate balances. This paper presents a statistical test design that was developed for NASA in 2001. The project showed that using a designed test reduced the cost of calibration by 85% while also improving data quality. NASA patented the developed system, despite its long-standing policy of placing innovations in the public domain.
    The overall design, displayed in design units, will be presented, together with the orthogonality constraint and the ANOVA table. It is often desirable to run a calibration sequentially: a second-order design is run first and, if lack of fit occurs, it is followed by a set of axial points that provide the third-order segment. The recommended test design, including the second-order and third-order segments, will be presented.
    In aerospace and defense testing, the use of second-order models has been ineffective; ‘many discontinuities in the factor space’ have been reported. As this application suggests, there is a need for higher-order models in the future.
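
    The NASA design itself (six variables with an orthogonality constraint) is not reproduced in the abstract. As a generic sketch of the sequential strategy described above, the code below builds a standard central composite design in coded units and augments it, on lack of fit, with a second ring of axial points intended to support a third-order segment; k = 3 and the axial distances are arbitrary placeholders chosen for display.

```python
import itertools
import numpy as np

def central_composite(k, alpha=None):
    """Standard second-order CCD in coded units:
    2^k factorial points + 2k axial points + a centre point."""
    if alpha is None:
        alpha = np.sqrt(k)
    factorial = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
    axial = np.zeros((2 * k, k))
    for j in range(k):
        axial[2 * j, j] = alpha
        axial[2 * j + 1, j] = -alpha
    centre = np.zeros((1, k))
    return np.vstack([factorial, axial, centre])

def third_order_axial(k, alpha2=2.0):
    """Extra axial points at a second distance, run only if the second-order
    fit shows lack of fit, to support a third-order segment."""
    axial = np.zeros((2 * k, k))
    for j in range(k):
        axial[2 * j, j] = alpha2
        axial[2 * j + 1, j] = -alpha2
    return axial

k = 3
design = central_composite(k)                            # run first
full_design = np.vstack([design, third_order_axial(k)])  # augmented on demand
```

    For k = 3 this gives 8 factorial + 6 axial + 1 centre = 15 runs in the first stage and 21 runs after augmentation.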

    Bibliography

    Draper, N. R. and Smith, H., Applied Regression Analysis, 3rd Edition. Wiley, 1998.

    Box, G. E. P. and Draper, N. R., Empirical Model-Building and Response Surfaces. Wiley, 1987.

    Box, G. E. P., Hunter, W. G. and Hunter, J. S., Statistics for Experimenters: An Introduction to Design, Data Analysis, and Model Building. Wiley, 1978.
  • Bayesian Variance Separation Under Heteroscedasticity – Pressure Measurement as Case Study

    Authors: Katy Klauenberg, Karl Jousten and Clemens Elster
    Affiliation: Physikalisch-Technische Bundesanstalt (PTB), Abbestr. 2-12, 10587 Berlin, Germany
    Primary area of focus / application: Modelling
    Keywords: linear mixed model, multivariate normal distribution, Bayesian Approach, Jeffreys prior
    Submitted at 29-Apr-2011 10:40 by Katy Klauenberg
    Accepted
    5-Sep-2011 12:10 Bayesian Variance Separation Under Heteroscedasticity – Pressure Measurement as Case Study
    We analyse a measurement problem where ni different, unknown measurands are each measured by (the same set of) nj devices or laboratories. The aim is to characterize the device variability uncoupled from the variability of the measurand. The developed method is to be applied to pressure measurement devices tested at the Physikalisch-Technische Bundesanstalt (PTB) [1].
    A standard set-up of this measurement problem corresponds to a (balanced) linear mixed model. We want to estimate the variability of the device measurements as well as the variability of the measurand (captured by random errors and random effects respectively in the linear mixed model).
    This is a standard statistical model which has been extensively treated for known variances as well as for unknown but equal device variances. For an overview on classical as well as Bayesian solutions, see for example [2]. However, different devices (or labs) rarely exhibit the same variability. We are therefore bound to consider a full heteroscedastic model.
    Reformulated, the measurements can simply be viewed as ni independent replications from an nj-variate normal distribution with a full covariance matrix of known structure. Fitting a multivariate normal distribution to data has been discussed frequently. For an (objective) Bayesian point of view generally accounting for the full covariance matrix, see for example [3]. However, for the specific (parametric) covariance structure above, no Bayesian approach appears to be available.
    We employ a Bayesian approach using the non-informative Jeffreys prior to infer the full distribution of the variance parameters in a heteroscedastic linear mixed model. By way of simulation, we show that the Bayesian estimates reproduce the underlying parameters well, better than maximum likelihood estimates do. Our results are insensitive to small changes in the prior assumptions. Moderate violations of the normality assumption for the measurand have no effect on the estimation of the variability of the device measurements. We will additionally demonstrate the effect of investing in resources, i.e. how the accuracy improves with an increasing number of participating devices or labs nj.
    We will present results of fitting the above model to measurements of ni = 16 pressures by nj = 4 appliances each, performed at the PTB. The estimated variability for some devices is substantially smaller than the observed variance of their measurements. This case study demonstrates the full potential of variance separation under heteroscedasticity.
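
    As a purely synthetic illustration of the covariance structure described above (ni replications of an nj-variate normal with covariance sigma_mu^2 * J + diag(sigma_1^2, ..., sigma_nj^2), J the all-ones matrix), the sketch below separates the variances with a simple moment estimator: the off-diagonal sample covariances estimate the measurand variance, and the diagonal minus that estimate gives the heteroscedastic device variances. This is not the paper's Jeffreys-prior analysis, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
ni, nj = 2000, 4                 # many "measurands" for a stable illustration
sigma_mu = 1.0                   # measurand (random-effect) standard deviation
sigma_dev = np.array([0.1, 0.2, 0.3, 0.5])   # heteroscedastic device std devs

mu = rng.normal(0.0, sigma_mu, size=(ni, 1))         # unknown measurands
Y = mu + rng.normal(0.0, sigma_dev, size=(ni, nj))   # device readings

S = np.cov(Y, rowvar=False)                  # nj x nj sample covariance matrix
off_diag = S[~np.eye(nj, dtype=bool)]
sigma_mu2_hat = off_diag.mean()              # off-diagonals estimate sigma_mu^2
sigma_dev2_hat = np.diag(S) - sigma_mu2_hat  # diagonal minus that: sigma_j^2
```

    Note that the observed per-device variances (the diagonal of S) are roughly sigma_mu^2 + sigma_j^2, which is why they can substantially overstate the device variability, mirroring the case-study finding above.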

    References
    [1] K. Jousten and S. Naef, Journal of Vacuum Science & Technology A 29(1), 011011-1 (2011), DOI:10.1116/1.3529023.
    [2] W. J. Browne and D. Draper, Bayesian Analysis 1, 473 (2006), DOI:10.1214/06-BA117.
    [3] D. Sun and J. O. Berger, Objective Bayesian Analysis for the Multivariate Normal Model, in Bayesian Statistics 8, eds. J. M. Bernardo, M. J. Bayarri, J. O. Berger, A. P. Dawid, D. Heckerman, A. F. Smith and M. West (Oxford University Press, 2007).
  • Comparison of On-line Design of Experiments Methods on Physical Models

    Authors: Koen Rutten, Josse De Baerdemaeker, Bart De Ketelaere
    Affiliation: Laboratory of Mechatronics, Biostatistics and Sensors, Department of Biosystems, KU Leuven, Kasteelpark Arenberg 30, 3001 Heverlee
    Primary area of focus / application: Design and analysis of experiments
    Keywords: DOE, optimization, EVOP, RSM, Simplex, sequential optimization
    Submitted at 29-Apr-2011 12:08 by Koen Rutten
    Accepted
    6-Sep-2011 11:05 Comparison of On-line Design of Experiments Methods on Physical Models
    The general goal of the research was to develop Design of Experiments (DOE) methods that can be used during the production process. These differ from classical DOE in that new experiments are defined incrementally, based on the outcomes of previous experiments. A first step consisted of comparing a heuristic online optimization strategy, the simplex, with the most basic form of sequential response surface modeling (sRSM): a factorial design augmented with steepest ascent, called factorial EVOP in this text. The comparison was made using the mathematical model of a real physical process, with the optimization methods programmed in Matlab®. To compare the methods, a set of one hundred randomly chosen starting points was used as input for all methods. Several criteria were compared: the number of steps needed to reach the optimum, the number of times an experiment went out of bounds, the quality of the optimum, and so on. The first results are presented and discussed.
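
    The Matlab® implementations are not shown in the abstract. As a stand-in, the sketch below implements the "factorial EVOP" half of the comparison on an invented quadratic response surface: estimate the local slopes from a 2^2 factorial centred at the current operating point, then step along the steepest-ascent direction. The surface, step size and iteration count are arbitrary.

```python
import itertools
import numpy as np

def factorial_gradient(f, x, h=0.1):
    """Estimate the local gradient of f from a 2^2 factorial centred at x."""
    g = np.zeros(2)
    for s in itertools.product([-1.0, 1.0], repeat=2):
        s = np.array(s)
        g += s * f(x + h * s)          # signed sum of the four corner responses
    return g / (4.0 * h)

def evop_ascent(f, x0, step=0.25, n_iter=25):
    """Repeatedly step along the steepest-ascent direction estimated
    from a small factorial around the current operating point."""
    x = np.array(x0, dtype=float)
    for _ in range(n_iter):
        x = x + step * factorial_gradient(f, x)
    return x

# Invented quadratic response surface with its maximum at (1, 2).
f = lambda x: -(x[0] - 1.0) ** 2 - (x[1] - 2.0) ** 2
x_opt = evop_ascent(f, x0=[4.0, -3.0])
```

    The simplex alternative would instead keep three trial points and repeatedly reflect the worst one through the centroid of the other two; the study compares both strategies over one hundred random starting points.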
  • A Robust Approach for Calibration of Near-Infrared Spectra

    Authors: Walid Gani and Mohamed Limam
    Affiliation: LARODEC, ISGT, University of Tunis
    Primary area of focus / application: Modelling
    Keywords: Calibration, data preprocessing, DOSC, NIR spectroscopy, LVR, SVR
    Submitted at 29-Apr-2011 12:59 by Walid Gani
    Accepted
    5-Sep-2011 16:50 A Robust Approach for Calibration of Near-Infrared Spectra
    In spectroscopic calibration, direct orthogonal signal correction (DOSC) is a successful preprocessing technique for removing variation and drift from high-dimensional data. DOSC is usually combined with a latent variable regression (LVR) method. However, the lack of variable selection in high-dimensional data can degrade LVR. To overcome this issue, we propose the use of the support vector regression (SVR) method, since it was originally designed to handle large databases. Moreover, SVR is a robust regression method with high computational performance thanks to kernel functions. Our approach consists of combining DOSC with SVR, instead of the usual combination of DOSC with LVR, for robust calibration of near-infrared (NIR) spectra. The proposed approach is assessed on two real NIR spectroscopic data sets and compared with DOSC applied with the partial least squares, principal component regression and partial robust M-regression methods. The results show that the DOSC-SVR approach outperforms the DOSC-LVR approach in terms of the root mean squared error and the coefficient of determination.
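
    As a schematic of the preprocessing idea only (a simplified orthogonal-signal-correction step, not the authors' exact DOSC algorithm nor their SVR models), the sketch below removes one principal component of the part of X orthogonal to y and checks that the removed scores carry no information about y. All data are synthetic.

```python
import numpy as np

def osc_correct(X, y, n_remove=1):
    """Remove n_remove principal components of the part of X that is
    orthogonal to y (a simplified orthogonal-signal-correction step)."""
    y = y.reshape(-1, 1)
    proj = y @ y.T / float(y.T @ y)    # projector onto span(y) in sample space
    X_orth = X - proj @ X              # columns of X_orth are orthogonal to y
    U, s, Vt = np.linalg.svd(X_orth, full_matrices=False)
    T = U[:, :n_remove] * s[:n_remove]     # scores of the orthogonal variation
    P = Vt[:n_remove]                      # corresponding loadings
    return X - T @ P, T

# Synthetic "spectra": a y-related signal plus structured interference.
rng = np.random.default_rng(0)
n, p = 60, 50
y = rng.normal(size=n)
X = (np.outer(y, rng.normal(size=p))                     # y-correlated part
     + np.outer(rng.normal(size=n), rng.normal(size=p))  # drift / interference
     + 0.01 * rng.normal(size=(n, p)))                   # measurement noise

X_corr, T = osc_correct(X, y)
```

    In the full approach, the corrected spectra X_corr would then be passed to an SVR model (or, in the baseline, to an LVR method such as partial least squares) for calibration.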