Abstracts for ENBIS' First Annual Conference
Back to conference main page.
Department of Statistics
University of Warwick
Gibbet Hill Road
In this presentation algebraic geometry, and in particular Gröbner bases, is applied to the design and analysis of a fraction of a 3^3 x 4 full factorial design. Each factor-level combination corresponds to one compound used as a catalyst. The aim is to optimize the recycling of the catalyst, and interactions are expected to be relevant. Groups of 5 to 10 similar compounds are made at one time, thus creating an aliasing effect. After the first group of compounds had been measured, it was clear that a regular fraction of the full factorial design would not have been appropriate.
Algebraic geometry helps in the choice of each group by allowing the experimenter to determine, prior to making the compounds, a saturated set of terms to be included in an identifiable linear regression model for a given group. It also provides information on the aliasing and confounding structure. Then, together with standard least squares estimation, it is used to analyze the results.
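As a toy illustration of the idea (a hypothetical 2x2 example, not the authors' catalyst design), the design ideal of the 2x2 full factorial with levels coded 0/1 is generated by x^2 - x and y^2 - y; because the leading terms are coprime, these generators already form a Gröbner basis, and the monomials not divisible by any leading term give a saturated set of identifiable model terms:

```python
from itertools import product

# Leading terms of the Groebner basis of <x^2 - x, y^2 - y>, stored as
# exponent vectors: x^2 -> (2, 0), y^2 -> (0, 2).
leading = [(2, 0), (0, 2)]

def divisible(m, lt):
    """Monomial m is divisible by lt iff every exponent of m dominates lt's."""
    return all(a >= b for a, b in zip(m, lt))

# Standard monomials: candidates x^i * y^j not divisible by any leading term.
standard = [m for m in product(range(3), repeat=2)
            if not any(divisible(m, lt) for lt in leading)]
print(sorted(standard))  # -> [(0, 0), (0, 1), (1, 0), (1, 1)], i.e. {1, y, x, xy}
```

The four standard monomials say that the saturated identifiable model for this toy design is 1, x, y, xy; for fractions of the 3^3 x 4 design the same computation runs on a larger basis.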
This is joint work with Elwin de Wolf, Department of Metal-Mediated Synthesis, Debye Institute, Faculty of Chemistry, Utrecht University.
Peter Goos and
Dept. of Applied Economics
Katholieke Universiteit Leuven
In industry, experiments are often conducted in a split-plot format, especially when some of the experimental factors are hard to change. The split-plot format is obtained by not resetting the factor levels for every run of the experiment and allows substantial time and cost savings. The observations in a split-plot experiment tend to be correlated, which complicates both the design and the analysis of the results. In this presentation, we present an algorithm for the construction of tailor-made split-plot experiments. The algorithm computes the best possible design while taking into account several restrictions imposed by the practical situation, such as the number of observations available, the number of runs that can be performed in one day, or the size of a furnace or a field. The design criterion used is the D-optimality criterion. In the presentation, the algorithm will be used to design an experiment for investigating the yield of a protein extraction process. In addition, the optimality of two-level factorial and fractional factorial designs will be discussed.
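The D-criterion for a correlated split-plot model can be sketched as follows (assumed notation, not the authors' exchange algorithm itself): maximise det(X'V^-1 X), where V is block-diagonal with compound-symmetric whole-plot blocks induced by the whole-plot random effects.

```python
import numpy as np

def d_criterion(X, whole_plots, eta=1.0):
    """det(X' V^-1 X) with V = I + eta * Z Z' from whole-plot random effects."""
    n = X.shape[0]
    Z = np.zeros((n, len(set(whole_plots))))
    for i, w in enumerate(whole_plots):
        Z[i, w] = 1.0                      # indicator of the run's whole plot
    V = np.eye(n) + eta * Z @ Z.T
    M = X.T @ np.linalg.solve(V, X)        # information matrix
    return np.linalg.det(M)

# toy 2-factor design: whole-plot factor a (not reset within a plot), sub-plot factor b
a = np.array([-1, -1, 1, 1, -1, -1, 1, 1], float)
b = np.array([-1, 1, -1, 1, 1, -1, 1, -1], float)
X = np.column_stack([np.ones(8), a, b, a * b])   # intercept, main effects, interaction
wp = [0, 0, 1, 1, 2, 2, 3, 3]                    # four whole plots of two runs each
print(round(d_criterion(X, wp), 3))              # -> 455.111
```

A design algorithm would repeatedly swap runs between whole plots and keep the arrangement with the largest value of this criterion.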
F.Foldvary (3) and
1. Industrial Statistics Research Unit University of Newcastle upon Tyne
2. Advantica Technology, Loughborough
3. UK Transco, 31 Homer Road, Solihull
Large businesses are multi-faceted and require many metrics to represent their performance. A safety measurement framework has been developed for the gas transportation company, Transco, and consists of 32 metrics representing its wide range of activities. The metrics are presented in control charts with comments and interpretation, produced quarterly for the use of higher management. Many of the metrics are intrinsically seasonal, although the onset of the seasons varies with the weather, and have traditionally been reported annually. However, where data are easily obtained quarterly, monthly or weekly, it seems sensible to report the more frequent data. Seasonal control charts are no problem in principle, but this paper is concerned with the practical issues arising from trying to include seasonal data in control charts without overcomplicating the report, while maintaining a clear message for the executive readership. The paper reviews the options and illustrates them with a real set of seasonal data.
Options include letting the control limits change with the season, seasonally adjusting and using Q charts, splitting up the data, putting a year-to-date figure in annual control charts, and fitting moving averages. Year-to-date figures were found to be acceptable, but bent control limits were considered too complicated. Seasonally adjusted data have the problem that the data axis is not in meaningful units, although they do have the advantage of straight-line control limits. The option of presenting multiple control charts is bulky, and moving average charts give a different message. Seasonal control charts also highlight the familiar problem of a short set of start-up data, arising because five to ten years is as far back as is reasonable in a fast-moving business world.
Interesting solutions have to be tempered by what is going to be clear and easy to understand, as the safety measurement framework is a bridge between the technical world which gives rise to the data and the business world inhabited by the readers.
L. Tack and
Department of Applied Economics
Katholieke Universiteit Leuven
When experiments are performed in a time sequence, the results may be affected by a trend over the course of the experiments. Time trend effects are caused by warm-up in laboratories, learning effects, instrument drift, build-up of deposits, aging of material, etc. This presentation is about the identification of systematic run sequences that are optimally balanced for time trend effects. This is done by maximizing the information on the factor effects while the parameters modeling the time dependence are treated as nuisance parameters. In addition, cost considerations are allowed for, and a numerical algorithm is outlined to construct run orders that emphasize not only statistical efficiency but also experimental cost. Both the costs associated with particular factor-level combinations and the costs of altering the factor levels from one observation to the next are dealt with. The proposed methodology is extended to experiments in the presence of block effects. Blocking greatly increases the design efficiency if the experimental runs cannot be performed under homogeneous conditions. Finally, the problem of constructing efficient run orders under an additional restriction on the available budget is also tackled. The proposed design algorithms are intended to provide the experimenter with a general method for solving a wide range of important problems, rather than a catalog of designs or solutions to specific problems with a special structure. Real-life examples taken from the electronics industry and the chemical industry illustrate the practical utility of the approach.
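The basic balance idea can be sketched in a few lines (an assumed simple formulation, not the paper's algorithm): a run order is trend-free for a factor when the factor column is orthogonal to a centred linear time trend. Comparing two orders of a 2^2 design:

```python
import numpy as np

def trend_resistance(order):
    """Inner products of factor columns with a centred linear time trend."""
    X = np.array(order, float)            # rows = runs in time order
    t = np.arange(len(order), dtype=float)
    t -= t.mean()                         # centred trend: -1.5, -0.5, 0.5, 1.5
    return X.T @ t

standard = [(-1, -1), (1, -1), (-1, 1), (1, 1)]    # Yates order
trend_free = [(-1, -1), (1, 1), (1, -1), (-1, 1)]  # same runs, re-ordered

print(trend_resistance(standard))    # -> [2. 4.]: both factors confounded with trend
print(trend_resistance(trend_free))  # -> [0. 2.]: factor A is now trend-free
```

With only four runs, both main effects cannot be made trend-free simultaneously; larger designs leave more freedom, which is where a numerical search algorithm earns its keep.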
Ron S. Kenett, Ph.D.
KPA Ltd., P.O. Box 2525,
David M. Steinberg
Dept. of Statistics and OR
Tel Aviv University
Bootstrapping is a useful methodology in analyzing data sets of intermediate size. Very small data sets and very large data sets are better analyzed with alternative techniques such as Bayesian methods or data mining. In this paper we focus on the application of Bootstrapping to the analysis of data derived from moderately sized designed experiments.
Data gathered in designed experiments can exhibit patterns of missing data. Missing data affect the analysis and need to be handled with care. We will distinguish between missing data that result from random mechanisms and missing data due to structural constraints. When the missing data pattern is random, methods such as the EM algorithm can be applied to perform the analysis. Special problems occur, however, when the missing data pattern is "structural", i.e. due to an inherent constraint in the system under experimentation or in the experimental protocol. The structural constraints can be identified prospectively, when the experiment is designed, or retroactively, after the experiment is completed or while it is actually being carried out.
Bootstrapping offers an approach to analyze data with structural constraints derived from designed experiments. We will provide several examples and conclude with open research questions.
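A generic residual-bootstrap sketch for a small designed experiment (an assumed textbook setup, not the authors' specific procedure): effects are re-estimated on resampled residuals to obtain an empirical distribution for each coefficient.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

# toy 2^3 experiment with a simulated response
levels = np.array(list(product([-1, 1], repeat=3)), float)
X = np.column_stack([np.ones(8), levels])            # intercept + 3 main effects
y = X @ np.array([10.0, 2.0, 0.5, 0.0]) + rng.normal(0, 1, 8)

beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

# resample residuals to build an empirical distribution of each effect
boot = np.empty((1000, 4))
for b in range(1000):
    y_star = X @ beta + rng.choice(resid, size=8, replace=True)
    boot[b] = np.linalg.lstsq(X, y_star, rcond=None)[0]

lo, hi = np.percentile(boot[:, 1], [2.5, 97.5])
print(round(lo, 2), round(hi, 2))   # bootstrap interval for the first factor effect
```

For structurally missing cells, the resampling scheme must respect the constraint, which is exactly where the open questions of the talk arise.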
Statcon ApS - Hejrevang 4 - 3450 Alleroed
Dept. of Mathematical Modelling
Technical University of Denmark
Novo Nordisk A/S
Dept. of Mathematical Modelling
Technical University of Denmark
Injection molding is the most widely used polymeric fabrication process. Critical to the adoption of this high-volume, low-cost process technology is the ability to consistently produce quality parts. Designs for plastic parts are becoming increasingly complex. In addition, economic constraints have led to the use of multi-cavity molds. By using a multi-cavity mold, the machine time is used much more efficiently, because in every cycle of the process several parts (one per cavity) are produced simultaneously. In this presentation the production of a hollow cylindrical part used in a medical device is studied. The part in question is produced in a 16-cavity mold and is part of an assembly consisting of seven molded parts. The quality characteristic investigated is the length of the part. The cavity-to-cavity variation is considered to be the greatest source of variation in the production of this part. Using a fractional factorial design, the factors influencing the mean length and the cavity-to-cavity variation are identified. Two factors are found to contribute significantly to the variation between cavities. Thus, with a proper choice of levels for the machine variables, it is possible to reduce the variation between cavities substantially.
Martin Arvidson and
Chalmers University of Technology,
An important part of quality improvement work is the reduction of variation. A useful tool for this purpose is design of experiments, as it enables the identification of factors that cause variation. As it is often very expensive, and sometimes even impossible, to conduct completely randomized experiments, it is common to conduct experiments with restricted randomization, i.e. split-plot experiments. A frequently used split-plot experiment is the 2^(k-p) x 2^(q-r) experiment, which consists of the Cartesian product of two experimental designs.
In this article a method for the joint estimation of location and dispersion effects in unreplicated 2^(k-p) x 2^(q-r) split-plot experiments is presented. The proposed method separates the dispersion of the response variable into a number of error terms corresponding to features of the experimental layout. Factors that have previously been identified as influencing location and the different error terms serve as input to the estimation procedure. The method uses Generalized Linear Models and Response Surface Methodology to obtain estimators of location and dispersion effects. An industrial example illustrates the use of the proposed method.
Key Words: split-plot experiment; dispersion effects; variance components; Generalized Linear Models; estimation procedure; Response Surface Methodology.
Jeroen de Mast
IBIS University of Amsterdam
Plantage Muidergracht 24
1018 TV Amsterdam
Apart from their use as a monitoring tool, control charts are applied as a diagnostic tool to finite data samples. This type of application is usually referred to as retrospective analysis. In the presentation I shall study the application areas and functionality of control charts for retrospective analysis. From the results I derive a number of requirements with which a control charting procedure should comply if it is intended for retrospective analysis. I shall present a control charting procedure that is consistent with these requirements. Among its characteristics are robustness of estimation and testing methods, and the ability to identify both isolated and persistent disturbances. The proposed method is demonstrated on a real-life data set.
IBIS University of Amsterdam
Plantage Muidergracht 24
1018 TV Amsterdam
Additional run rules in Shewhart control charts are used to increase the power of signalling out-of-control situations in a process. When a signal is given, the Out-of-Control Action Plan (OCAP) is used to determine the special cause and to resolve the process disturbance as quickly as possible. However, different run rules may point at different causes, and therefore different search strategies for operators would be useful. In the presentation I will outline the run rules and their corresponding process situations. In this context I will also discuss the use of an additional Moving Range chart alongside the control chart for individuals, which some statisticians believe to be useful for detecting increased process variation.
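To make the setting concrete, here is a sketch of two standard run rules (Western Electric style is assumed here; the talk's exact rule set may differ) applied to an individuals chart, together with the conventional Moving Range limit:

```python
import numpy as np

def run_rule_signals(x, mu, sigma):
    """Indices signalled by two classic run rules on an individuals chart."""
    z = (np.asarray(x, float) - mu) / sigma
    beyond_3s = np.where(np.abs(z) > 3)[0]           # rule 1: point beyond 3 sigma
    # rule 2: 8 consecutive points on the same side of the centre line
    same_side = [i for i in range(7, len(z))
                 if abs(np.sign(z[i - 7:i + 1]).sum()) == 8]
    return beyond_3s.tolist(), same_side

x = [0.2, -0.1, 0.4, 0.3, 0.5, 0.1, 0.6, 0.2, 0.4, 4.0]
print(run_rule_signals(x, mu=0.0, sigma=1.0))   # -> ([9], [9])

mr = np.abs(np.diff(x))              # moving ranges for the companion MR chart
print(round(mr.mean() * 3.267, 3))   # conventional MR chart UCL (D4 = 3.267) -> 2.251
```

Here both rules fire on the same observation, but in general each rule corresponds to a different out-of-control pattern, and hence to a different entry point in the OCAP.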
How to Consider Correlation and Economic Trade-offs Among Responses
Dept. of Mechanical Engineering
University of Cagliari
Dept. of Mathematics,
Politecnico di Torino
Duca degli Abruzzi, 24
Dept. of Production Systems and Economics
Corso Duca degli Abruzzi, 24
The Robust Design problem has mostly been solved up to now considering one response only, although multiple responses are quite common in applications. The paper presents a general framework for the multivariate problem, capable of handling both parameter design and parameter-plus-tolerance design when data are collected from a combined array. A single-number criterion based on the quadratic loss function is used for optimization. The criterion was chosen so as to incorporate not only the statistical information, such as the correlation structure among the responses and the prediction uncertainty, but also the relevant economic information about the product or process to be designed, namely the relative importance of, and trade-offs among, responses from the user's point of view. An application concerning the design of the key component of a force transducer illustrates the steps of the methodology.
David M. Steinberg and
Dept. of Statistics and OR
Tel Aviv University
In fitting empirical models to experimental data, bias due to model inaccuracy is often a serious problem. A particular case in point is computer experiments, in which there is no random error, only model error. Further, it may be that just a few of the experimental factors account for most of the variation in the response. In such settings, it may be desirable to use a design that is able, simultaneously, to screen out the important factors and to fit a higher-order model in those factors. We derive a class of such designs by rotating standard two-level fractional factorials. Special classes of rotations are developed to achieve some appealing symmetries in the design. A comparison of designs based on their alias matrices shows that the new designs are better than many other alternatives.
Camilla Madsen and
Dept. of Informatics and Mathematical Modelling
Technical University of Denmark
The procedure in the US Pharmacopeia for testing content uniformity in tablets is currently under review. In the review it has been suggested to replace the current procedure with one that better reflects current standards for acceptance sampling procedures. In particular, it has been suggested to substitute for the test on the sample coefficient of variation an adaptation of the Lieberman-Resnikoff approach to acceptance sampling by variables, i.e. to perform a combined test for variability and for deviation from target content. The suggested procedure is a two-stage sampling plan to be applied to each batch during validation. In statistical terms, the procedure is a combination of a test by attributes with acceptance number zero and 'specification limits' of +/- 25% around the target, and a test by variables with 'specification limits' of +/- 16.5% around the target value. In the paper we discuss the statistical properties of this procedure under the assumption of a normal distribution of tablet content. We demonstrate that although the attributes test on the content of individual tablets may serve a psychological purpose, the discriminatory power of the procedure is provided by the test by variables, and that the sample size might be reduced without losing efficiency if principles from variables sampling were also used for rejection in the first stage of the procedure. Although no formal specification limits have been specified, the situation resembles conformity testing performed by regulatory authorities in other situations. Often an 'accept zero' sampling plan plays a prominent role in such procedures. As an example we consider the EC directives for designating the weight of individual packages, which combine a test by attributes with a test on the sample mean.
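The two-stage structure can be sketched schematically (the +/-25% and +/-16.5% limits come from the abstract; the acceptability constant k is an assumed illustrative value, not the one under review):

```python
import statistics

def accept_batch(contents, target, k=2.5):
    """Two-stage check: zero-acceptance attributes screen, then a variables test."""
    # stage 1: attributes test, acceptance number zero at +/- 25% of target
    if any(abs(c - target) > 0.25 * target for c in contents):
        return False
    # stage 2: Lieberman-Resnikoff-style variables test at +/- 16.5% of target
    m = statistics.mean(contents)
    s = statistics.stdev(contents)
    lower, upper = 0.835 * target, 1.165 * target
    return (m - k * s >= lower) and (m + k * s <= upper)

batch = [99.1, 100.4, 101.2, 98.7, 100.9, 99.5]
print(accept_batch(batch, target=100.0))  # -> True
```

The paper's point is visible in this sketch: the stage-2 variables test does almost all the discriminating, so a variables criterion in stage 1 could achieve the same operating characteristic with fewer tablets.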
Dipl.-Ing. Roland Goebel
Dipl.-Stat. Martina Erdbruegge,
Prof. Dr. Joachim Kunert,
Prof. Dr.-Ing. Mathias Kleiner,
Department of Forming Technology (LFU)
University of Dortmund
Baroper Str. 301
Sheet metal spinning is a forming process used to manufacture axially symmetric hollow bodies in a wide variety of shapes in small and medium lot sizes (e.g. wheel rims, lamp reflectors, container bottoms, nozzles, etc.). Due to the manifold interactions between process parameters and quality characteristics of the workpieces, this forming process is very complex. In addition, knowledge of the mutual relations is rather insufficient, making process design difficult and time consuming. In the past, knowledge of the spinning process has mainly been gained by empirical methods such as one-factor-at-a-time experiments. For industrial applications, more efficient methods such as statistical experimental design and analysis are needed. Within the presented investigation such methods have been successfully applied to sheet metal spinning.
The occurrence of wrinkling in particular is a central aspect limiting process efficiency. Measuring the intensity of the wrinkles requires the consideration of many different aspects, such as the amplitude, number, geometry and distribution of the wrinkles. It is therefore very difficult to characterise wrinkling by a single value. Probably the best and fastest way to obtain a reliable assessment of wrinkling is to ask an appropriately instructed person for a categorical classification, while ensuring that this person is as objective as possible. In the statistical analysis, the resulting subjective quality characteristic consequently has to be treated as a categorical response.
The presentation will cover the proportional-odds model as a simple analysis tool for dealing with categorical responses. Experimental design (including fractional factorials and central composite designs) and analysis are presented in order to find optimal parameter settings simultaneously for both the minimisation of wrinkling and the optimisation of other response variables. New parameter settings have been identified and verified in confirmatory runs. These new parameter settings enabled the manufacturing of workpieces that show an intense deformation ratio without the occurrence of wrinkling.
The methods used can easily be transferred to other manufacturing processes, where the handling of categorical quality characteristics is also a well-known problem.
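The cumulative-logit structure of the proportional-odds model can be sketched numerically (illustrative cut-points and slope, not values fitted to the spinning data): all wrinkling grades share one slope, and only the cut-points differ between grades.

```python
import math

def category_probs(x, cuts, beta):
    """P(grade <= j) = logistic(cut_j - beta * x); differences give P(grade = j)."""
    cum = [1 / (1 + math.exp(-(c - beta * x))) for c in cuts] + [1.0]
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

cuts = [-1.0, 0.5, 2.0]        # assumed cut-points separating 4 wrinkling grades
for x in (0.0, 1.0):           # x: a coded process parameter
    p = category_probs(x, cuts, beta=1.2)
    print([round(v, 3) for v in p])   # probabilities shift to higher grades with x
```

Fitting replaces the assumed cuts and beta with maximum-likelihood estimates; optimisation then means choosing parameter settings that push the probability mass toward the wrinkle-free grade.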
Technical University of Catalonia (UPC)
Technical University of Catalonia (UPC)
The purpose of this paper is to explain production improvements carried out in a company that manufactures catheters. The company realized that the number of catheters being rejected was too high. Differences in rejects between machines and between operators were also large. The workers collected data in an organised way, using forms, in order to identify both the worst machines and operators and the critical points of the process.
The process of making catheters involves five different steps, one of them being the welding of two tubes made of different materials. This welding is the most complicated step and it was the one with the highest level of rejects. After the welding process, all catheters were inspected to check whether they passed a five-criteria test. Failing any one of these criteria meant rejecting the catheter.
Operators were welding the two tubes always using the same settings for the various "control factors". They did not "know", in a clear and quantified way, how those factors affected the response. An experiment was carried out to study which factors were significant. We considered five different responses, corresponding to the five criteria. Two criteria implied destructive tests. Some of these criteria were quantitative responses and others were qualitative; specialists created a scale for grading the qualitative responses.
Five control factors were considered in the experiment. We also wanted our welding process to be insensitive to some raw material characteristics that are impossible to control precisely in normal production. To achieve this goal we used six variables as noise factors, resulting in a crossed-array design.
The main conclusions and lessons learned, as well as the difficulties that appeared during the design of the experiment and its execution, will be explained. We can already say that, as is often the case, the most important learning took place while designing and preparing the experiment.
INFORMATION WORKS GmbH
Consulting & System Development
Rolshover Strasse 45
Most industrial companies hold insurance for several lines of business. So far, industrial insurers have calculated risks for each line of business separately, and a separate insurance contract has been signed for each line. A new product in the insurance market is the multi-line multi-year policy. We present a simulation method to quantify the expected loss and overall risk for a customer, summarized over all lines of business.
Consolidating several insurance contracts into one single policy has two main benefits for industrial companies: the company gains a higher level of control over its own financial risks than with traditional industrial insurance programs, and the financial effort for premiums will be lower. These benefits arise from the actuarial principle of compensation. The talk will focus on the simulation algorithm, including parameters set by the customer and different kinds of deductible and aggregate structures. For some components of the simulation process, possible ways of modelling a single risk will be considered. Special attention is given to the efficient coding of the empirical claim distributions and to the appropriate fitting of loss distributions.
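The frequency/severity simulation with deductible and aggregate structures can be sketched as follows (assumed Poisson/lognormal distributions and toy parameters, not the insurer's calibrated model):

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_retained(n_years=10000, deductible=50.0, aggregate_limit=400.0):
    """Customer's retained annual loss over two lines of business."""
    retained = np.empty(n_years)
    for i in range(n_years):
        total = 0.0
        for lam, sigma in [(2.0, 1.0), (0.5, 1.5)]:   # two lines of business
            n_claims = rng.poisson(lam)                # claim frequency
            claims = rng.lognormal(mean=4.0, sigma=sigma, size=n_claims)
            total += np.minimum(claims, deductible).sum()  # customer keeps deductible
        retained[i] = min(total, aggregate_limit)          # annual aggregate stop-loss
    return retained

r = simulate_retained()
print(round(r.mean(), 1), round(np.quantile(r, 0.95), 1))
```

Summing across lines before applying the aggregate limit is what produces the compensation effect the abstract refers to: bad years in one line are partly offset by good years in another.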
The algorithm has been developed by a leading German insurance company in cooperation with INFORMATION WORKS, a consulting company headquartered in Cologne. To help underwriters calculate the rate of premium for complex insurance solutions for large industrial companies, the algorithm has been implemented in an easy-to-use thin-client intranet application. The application has been in use since March 2001.
Today's competitive markets impose demanding challenges on business leaders. Customer-level information must be leveraged to acquire further market share; information-based business is an essential ingredient of successful CRM (customer relationship management). This talk presents an efficient way of implementing analytical CRM systems that fully support key business goals: improved cross-selling, reduced attrition rates, increased customer profitability, fact-based decision making and direct marketing. The technological enablers are:
- a customer database or customer-centric data warehouse,
- a decision support system (DSS) or online analytical processing (OLAP) tool,
- a data mining environment.
Keywords: CRM, Database Marketing, Data Mining, Data Warehouse, OLAP, DSS, Closed-Loop Systems, Marketing Automation.
MATFORSK, Norwegian Food Research Institute
Osloveien 1, N-1430
Quality classes of raw materials and products are often defined by the value of a continuous quality characteristic. Salmon fillets could, for instance, be labeled 'fat', 'medium' or 'meager' according to their fat content. It is often desirable to predict the quality class of a product or a raw material on the basis of other, indirect measurements. It can be expensive or time consuming to measure the quality characteristic directly, or the measurement procedure may be destructive. In such situations, one constructs a classifier relating the quality class to the indirect measurements. An important question about a classifier is how well it performs. The most common measure of goodness for a classifier is the error rate. In this presentation, I will argue that the error rate is not necessarily the best measure in these situations. The main reason is that it does not penalize misclassifications according to their severity. I propose an alternative measure, inspired by the root mean squared error of prediction (RMSEP) from regression. The new measure penalizes misclassification of an object by the distance between the predicted class and the object's quality characteristic. The measure will be illustrated by examples. I will also briefly discuss strategies for constructing good classifiers for the present situation.
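One concrete reading of the proposed idea can be sketched as follows (the class intervals and the exact penalty, distance from the true fat content to the predicted class's interval, are assumptions for illustration, not the talk's definition):

```python
import math

# hypothetical fat-content intervals defining the quality classes (in %)
classes = {'meager': (0.0, 8.0), 'medium': (8.0, 12.0), 'fat': (12.0, 20.0)}

def penalty(fat_content, predicted):
    """0 if the true value lies in the predicted class; else distance to it."""
    lo, hi = classes[predicted]
    if lo <= fat_content < hi:
        return 0.0
    return min(abs(fat_content - lo), abs(fat_content - hi))

def rmsep_like(values, predictions):
    """RMSEP-inspired measure: root mean squared penalty."""
    return math.sqrt(sum(penalty(v, p) ** 2 for v, p in zip(values, predictions))
                     / len(values))

values = [5.0, 11.5, 14.0, 7.9]
predictions = ['meager', 'fat', 'fat', 'medium']   # one bad error, one near-miss
print(round(rmsep_like(values, predictions), 3))   # -> 0.255
```

Unlike the error rate, which would score both misclassifications equally, this measure charges the near-miss at 7.9% almost nothing and the 11.5% object predicted 'fat' five times as much.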
Wessel van Wieringen
IBIS UvA, University of Amsterdam
Plantage Muidergracht 24,
1018 TV Amsterdam,
Many quality programs and systems prescribe a measurement system analysis (henceforth MSA) to be performed on the key quality characteristics. This guarantees the reliability of the acquired data, which serve as the basis for drawing conclusions about the behaviour of the key quality characteristics. These characteristics can be either continuous or categorical. For the former, the (ANOVA-based) Gauge R&R is widely regarded in industry as the most appropriate way to carry out the MSA. However, there is no consensus on a suitable alternative for the latter. In my talk I discuss methods that could serve as a categorical analogue to the Gauge R&R. Furthermore, I argue that latent class analysis, a form of factor analysis designed for nominal data, is the most promising candidate to close the gap.
The talk is based on work done in cooperation with Edwin van den Heuvel.
A Structured Bayesian Modeling Approach
Norwegian Computing Center
P.O.Box 114 Blindern
We present a non-linear Bayesian hierarchical model for the population dynamics of coastal cod. Within this framework, both the variability of the population dynamics and the sampling error are accounted for. The population dynamics are based on three age groups: the 0-group (one year old), the 1-group (one and a half years old) and adult cod. Expert knowledge is incorporated in the model. Based on data from the Norwegian Skagerrak coast, we estimate parameters representing the recruitment process and intra- and inter-cohort effects. The estimates are found by simulation, using Markov chain Monte Carlo methods. The results indicate a significant density dependence in survival. We illustrate how the method can be used to provide estimates of the abundances of the different cohorts.
Department of Total Quality Management
Chalmers University of Technology, Gothenburg
Department of Total Quality Management
Chalmers University of Technology, Gothenburg,
When electronic components are tested at high temperature, the test equipment wears out over time, with the result that it releases false signals: a fault-free component may be classified as defective and, conversely, a defective component may be judged fault-free. The multivariate nature of the data, including varying fault probability distributions, increases the complexity of the problem. An interesting question, then, is how to find the point at which we can detect the wear-out of the test equipment and initiate corrective actions. One solution to this problem is to construct an EWMA control chart, monitoring the process and warning us when the test equipment is worn out.
Exponentially Weighted Moving Average (EWMA) solutions are widely used in numerous industrial contexts. Although EWMA control charts have mostly been deployed with normally distributed continuous data, in certain cases, as we show in the article, they prove to be useful with discrete, non-normally distributed data as well. In this paper we apply the technique to discrete multivariate data.
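A basic univariate EWMA chart can be sketched as follows (assumed smoothing constant and simulated false-signal rates; the paper's multivariate discrete adaptation is more involved):

```python
import numpy as np

def ewma_chart(x, mu0, sigma, lam=0.2, L=3.0):
    """EWMA statistic with exact time-varying control limits; returns alarms."""
    x = np.asarray(x, float)
    z = np.empty(len(x))
    z[0] = lam * x[0] + (1 - lam) * mu0
    for t in range(1, len(x)):
        z[t] = lam * x[t] + (1 - lam) * z[t - 1]
    var = (lam / (2 - lam)) * (1 - (1 - lam) ** (2 * np.arange(1, len(x) + 1)))
    ucl = mu0 + L * sigma * np.sqrt(var)
    lcl = mu0 - L * sigma * np.sqrt(var)
    return z, np.where((z > ucl) | (z < lcl))[0]

# simulated false-signal rate drifting upward after observation 10
x = [0.01] * 10 + [0.03, 0.04, 0.05, 0.06, 0.07]
z, alarms = ewma_chart(x, mu0=0.01, sigma=0.005)
print(alarms)  # -> [11 12 13 14]
```

The smoothing makes the chart sensitive to exactly the kind of gradual wear-out drift described above, which a plain Shewhart chart would catch much later.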
Senior research scientist
Pl 1201, VTT Information Technology,
02044 VTT, Finland
Tekniikantie 4 B,
02150 Espoo, Finland
In forecasting the sales of new consumer products, an important phase occurs in the first few months, when only a few observations are available. Standard time series techniques fail due to the relative lack of data and the transitory nature of sales during the period covered by the first observations. In this paper, new product forecasting is discussed in the context of related decisions in the product lifecycle. Methods based on dynamical systems theory, control engineering and standard statistical time series analysis are proposed. The usefulness of exogenous data, e.g. sales data from similar products, is considered. An empirical study is carried out with real data.
Risk Initiative & Statistical Consultancy Unit
University of Warwick, Coventry,
and EURANDOM, Eindhoven,
ENBIS is the most exciting development in industrial statistics in Europe for many years and perhaps ever. Let us hope that the excitement is maintained while at the same time we seek to increase understanding and fulfil the duties and ambitions stemming from this initial enthusiasm. It is to be hoped that the style of ENBIS as an informal, democratic and innovative network will make it easier for these ambitions to be realised.
Although industrial and business statistics has had key successes in, for example, experimental design, statistical process control, reliability and financial economics, there is a danger of fragmentation into small and isolated disciplines on the one hand, or into ad hoc recipes which may confuse or hide fundamental ideas of modelling, probability and inference on the other. Like a menu which may read better than it tastes, one must reserve one's judgement on hybrids such as "delicious, dynamic, partial least squares on a bed of neural nets" until they have been properly sampled. However, and here ENBIS has a long-term duty, there is great scope for unification. The talk will suggest a unified treatment of time series and state-space modeling, Bayesian networks and, indeed, PCA and PLS, based on a current EU-funded project on Multivariate Process Control. Another area where integration is possible is non-linear regression models defined by differential equations, bringing our subject closer to engineering.
4 Mill Close
Somerset, BA22 9LF
Statistics can be regarded as a branch of mathematics. Certainly, it is widely agreed that a good grounding in mathematics is required for a thorough understanding of statistical techniques and the assumptions that underlie these techniques. Of course, the education and training of most Statisticians goes beyond mathematics, in order to cultivate the unique perspective that can enrich the modelling of real world situations.
However, despite the breadth of the curriculum in the undergraduate or postgraduate education of the statistician, no one would claim that this training produced a fully competent statistical consultant. So, what further training is needed if the statistician is to acquire the full range of skills needed for effective consultancy?
This paper focuses on the people issues that arise during consultancy and discusses how the appropriate inter-personal skills can be developed.
MATFORSK, Norwegian Food Research Institute,
N-1430 Ås,
Multiple responses are common in industrial and scientific experimentation. Often these multiple response variables are related in some way. Digitization of continuous curves and several related measures of the same physical phenomena are examples of such data. The analysis is still founded on the classical experimental design methodology, but additional tools are needed.
Principal component analysis is a very powerful technique for this type of data. One possibility is to perform univariate analyses of individual principal components. Effects or differences can be calculated as usual and the statistical tests are still correct. It is, however, possible to improve this type of test. There are also multivariate alternatives, and a newly developed framework is called Fifty-Fifty MANOVA. A related method, presented in a forthcoming article, considers screening experiments with few or zero error degrees of freedom. When significant effects are found, one should illustrate them for easier interpretation. The sums of squares, summed over all responses, illustrate the relative importance of each factor. Different (adjusted) mean values can be illustrated in a principal component score plot or directly as mean curves. In this talk, the discussion will be based on examples from food research with multiple responses from near-infrared reflectance spectra, sensory profiles and particle size distributions.
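The combination of PCA and classical effect estimation can be sketched compactly (simulated curves standing in for spectra; not the food-research data): PCA via SVD, then the usual two-level effect computed on the first score vector.

```python
import numpy as np

rng = np.random.default_rng(3)
factor = np.repeat([-1.0, 1.0], 4)                 # one two-level factor, 8 runs
t = np.linspace(0, 1, 50)                          # "wavelength" axis
curves = (np.outer(np.ones(8), np.sin(2 * np.pi * t))  # common curve shape
          + np.outer(factor, 0.5 * t)                  # factor shifts the curve
          + rng.normal(0, 0.05, (8, 50)))              # measurement noise

centred = curves - curves.mean(axis=0)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
scores = U * s                                     # principal component scores

# classical two-level effect estimate on the first component
effect = scores[factor > 0, 0].mean() - scores[factor < 0, 0].mean()
print(round(abs(effect), 2))
```

The factor's influence on 50 correlated responses is captured by a single score contrast, which can then be tested and plotted exactly as in a univariate analysis.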
Chemical Development Scientist
The presentation is intended as an overview of how experimental design has been successfully applied within the Chemistry Development Division at GlaxoSmithKline over the past 12 months to gain process understanding and process confidence. There will be particular emphasis on issues which routinely impact on manufacturing, such as reducing the cost of goods and increasing the security of supply. Case studies covering scoping, screening, optimisation and robustness designs will show how experimental design is an essential tool for process delivery. Examples will be drawn from early phase R&D projects, processes transferring from R&D to manufacturing, and well-established manufacturing processes. The critical issues of timing, six sigma and process registration will be addressed. Despite the obvious benefits of experimental design, the uptake of this statistical approach for chemical process investigation was surprisingly limited until fairly recently. A key driver for increased uptake has been the development of automated experimental equipment, which facilitates the rapid collection of quality data. The culture of experimental design is now firmly embedded within Chemistry Development and over the past year was applied to over 90% of the projects undertaken at Stevenage. The obstacles initially faced and strategies to encourage uptake of experimental design will be outlined briefly.
Capilla, C. and
Department of Statistics and Operations Research.
University Polytechnic of Valencia.
Apartado 22012-46071 Valencia
This paper describes the implementation of a process control system to monitor the quality of the water flowing out of a wastewater purification plant. First, the observed quality parameters influencing the output flow of water are analysed. The most important quality variable in the output is biological oxygen demand, whose analytical value is only known five days after sampling. Another important output quality variable is suspended particles, which is a good predictor of biological oxygen demand and has the advantage that its value is known sooner. The input variables influencing the output quality parameters are mainly the water flow rate and the biological oxygen demand of the input flow to the purifying plant.
The quality variables involved in the process exhibit dependence over time, in both the input and the output water flows, as is common in these contexts. The evolution over time is analysed for all variables. In this setting it is inefficient to monitor the univariate quality parameters by classical Statistical Process Control, which moreover makes no use of the relevant information contained in the autocorrelation structure of the quality variables. This information allows the establishment of systems for dynamic prediction of quality variables whose analytical values are delayed in time. From this, it is possible to implement a quality control system integrating Engineering and Statistical Process Control (ESPC) to monitor the quality of the water in the output flow of the purifying plant. The approach taken to this problem is multivariate in the input.
KEYWORDS: Wastewater Purification Plant Processes, Time-Series Analysis, Engineering Process Control, Statistical Process Control, Control Theory, Process Monitoring.
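A minimal illustration of the prediction idea on simulated data (the five-day analytical delay follows the abstract; the AR(1) model, the proportionality between the two variables and all coefficients are assumptions for the sketch):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical daily series: suspended solids (known immediately) and BOD
# (known only five days after sampling).  BOD is assumed proportional to
# suspended solids plus noise; both inherit autocorrelation from an AR(1).
n = 300
phi = 0.8
ss = np.empty(n)
ss[0] = 0.0
for t in range(1, n):
    ss[t] = phi * ss[t - 1] + rng.standard_normal()
bod = 2.0 * ss + 0.5 * rng.standard_normal(n)

# Fit BOD ~ suspended solids on the historical (already analysed) data,
# then use the regression to predict the still-unknown recent BOD values.
hist = slice(0, n - 5)
beta = np.polyfit(ss[hist], bod[hist], 1)
bod_pred = np.polyval(beta, ss[n - 5:])
print("predicted BOD for the last 5 days:", np.round(bod_pred, 2))
```

The predictions can then feed a monitoring scheme without waiting for the laboratory result, which is the essence of combining engineering and statistical process control here.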
Krogshojvej 49, 9Gs,
DK-2880 Bagsvaerd
Counts of particles and bacteria are modelled by Poisson-Normal models, negative binomial models and lognormal models to account for overdispersion relative to a Poisson model. Aspects of the different models are discussed and practical examples of warning limit calculations are shown.
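As a rough sketch of one of the three model families mentioned (the counts are made up, and the method-of-moments fit and 95% warning limits are illustrative choices, not necessarily those of the talk):

```python
import numpy as np
from scipy import stats

# Hypothetical particle counts with overdispersion: variance exceeds the mean.
counts = np.array([12, 18, 9, 25, 14, 31, 11, 22, 16, 28, 8, 20])
m, v = counts.mean(), counts.var(ddof=1)
assert v > m  # a plain Poisson model would understate the spread

# Method-of-moments fit of a negative binomial NB(r, p):
# mean = r(1-p)/p, var = r(1-p)/p^2  =>  p = m/v, r = m*p/(1-p).
p = m / v
r = m * p / (1 - p)

# Warning limits as the 2.5% / 97.5% quantiles of the fitted distribution;
# the matching Poisson limits are narrower, illustrating the risk of
# ignoring the overdispersion.
lo, hi = stats.nbinom.ppf([0.025, 0.975], r, p)
lo_p, hi_p = stats.poisson.ppf([0.025, 0.975], m)
print(f"NB limits: [{lo:.0f}, {hi:.0f}]  Poisson limits: [{lo_p:.0f}, {hi_p:.0f}]")
```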
Norwegian Institute for Food Research
The aim of this MAP (modified atmosphere packaging) experiment was twofold. In addition to the general aspect of gaining more competence on MAP of fish fillets, it was necessary to develop microbiological methods for measuring shelf life and to relate these methods to the sensory quality (odour) of the fish and to instrumental measures of gas production from the bacteria during the storage period.
The experimental design consisted of two production factors, gas type and storage temperature, each having three levels. For the third factor, storage time, the stored sample packages were measured after 2, 5, 8, 14, 21 and 28 days. As the microbiological analysis was destructive, separate packages were made for each data sample. Each package was first measured by electronic nose and then prepared for bacteria counting. In addition, sensory attributes (smell) were assessed by a panel of assessors.
The data from this experiment were censored, due to the biological nature of the experiment. When the fish fillets developed unacceptable odours, they were taken out of the experiment, and no further measurements were made on that combination of gas type and storage temperature. Furthermore, due to lack of resources, it was not possible to test all nine production factor combinations on all days and for all bacteria.
The statistical challenge in this experiment is to estimate the effects of storage time, temperature and gas type on the growth of the most common spoilage bacteria in fish, and to relate these estimates to the multivariate measurements from the electronic nose. In this presentation, the experiment and the resulting data set will be described in further detail as an introduction to a discussion of possible statistical approaches to this problem. The analyses and conclusions so far will also be presented.
Autoliv is the leading supplier of automotive safety products in Europe and worldwide. The business idea is to develop, produce and sell systems world-wide for the mitigation of injuries to automobile occupants and the avoidance of traffic accidents. The company develops and manufactures automotive safety systems for all major European car manufacturers. Autoliv is a tier 1 supplier with overall market coverage. The product range comprises seat belt and airbag systems, steering wheels, seat sub-systems and electronics. Autoliv's major contributions to automobile safety include retractor belts, seat belt pretensioners, load limiters, airbag inflators, frontal airbags, knee airbags, thorax side airbags, head side airbags and anti-whiplash systems.
For the development of restraint systems the number of tests has increased significantly. In addition to that, some of the questions which need to be addressed when striving for optimised system performance can only be answered by simulation. As a result, simulation tools like MADYMO and PAMCRASH are used throughout the whole development process to analyse, steer and support the choice of all relevant system parameters.
To take full benefit from a simulation-based development of restraint systems and to transfer the knowledge into new projects, a detailed understanding of the mechanics and influence of each system parameter is required.
In this paper, currently used "optimisation" methods (DoE, Monte Carlo, Global Optimisation) are reviewed and compared regarding system understanding, knowledge transfer, quality, robustness and duration. The advantages of each method are highlighted using illustrative examples.
Finally, the integration of optimisation methods into Autoliv's development process is described and some results of simulation DoE's are discussed.
Miklos Veber and
Department of Probability Theory and Statistics
Eotvos Lorand University,
We present an acceptance sampling method that also utilizes previous experience, by means of Bayesian statistics. The main point of the method is that we approximate the empirical distribution of the defective rate by a beta prior distribution, a family wide enough to model practically any pattern of rates observed in previous consignments.
The second important feature of our model is that we assign different costs to the sampling, to the acceptance of an unsuitable consignment and to the rejection of a suitable one, and we minimize the expected cost of the method. This is a very flexible approach and our main results show that it is possible to find the optimum by an easily interpretable algorithm.
We illustrate the application of this method by presenting the most widespread acceptance sampling procedures occurring in practice (simple acceptance sampling, sequential tests, sampling at different strictness levels, the Taguchian approach).
In the first part of the presentation we would like to introduce the above mentioned classical methods in the case of homogeneous consignments, i.e. assuming that the rate of defectives is the same everywhere in the consignment.
In practice it is quite common that the consignment is not homogeneous, yet this problem is usually ignored, leading to unfounded decisions and substantial losses. In the second part of the presentation we illustrate how the classical acceptance sampling methods can be extended to the inhomogeneous case by breaking down the inhomogeneous consignment into homogeneous layers. We apply the Bayesian approach by choosing a uniform a priori distribution for the overall defect rate, and we assume that the defect rates in the layers behave as an ordered sample from this a priori distribution (giving rise to the beta distribution for the layers' defect rates). The optimal acceptance sampling method can be calculated in this case, too.
Our method is practical in its origin. It will be demonstrated by the computer program which calculates the optimal sampling plan for the situations above.
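A minimal sketch of the core ingredients, beta prior plus cost-based decision (all the numbers, the acceptable-quality limit and the simple two-cost rule are assumptions for illustration; the talk's algorithm also optimises the sampling plan itself):

```python
from scipy import stats

# Hypothetical setup: the defect rate has a Beta(a, b) prior estimated from
# earlier consignments; we observe x defectives in a sample of size n and
# choose accept/reject by minimising posterior expected cost.
a, b = 2.0, 38.0          # prior with mean defect rate 5%
n, x = 50, 4              # sample result
c_accept_bad = 1000.0     # cost of accepting when the true rate exceeds the limit
c_reject_good = 200.0     # cost of rejecting a suitable consignment
limit = 0.08              # acceptable-quality limit on the defect rate

# The beta prior is conjugate to the binomial likelihood.
post = stats.beta(a + x, b + n - x)
p_bad = post.sf(limit)    # posterior P(defect rate > limit)

cost_accept = c_accept_bad * p_bad
cost_reject = c_reject_good * (1 - p_bad)
decision = "accept" if cost_accept < cost_reject else "reject"
print(f"P(rate > {limit}) = {p_bad:.3f} -> {decision}")
```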
Alessandra Giovagnoli, Dipartimento di Scienze Statistiche, Università di Bologna, and
Daniele Romano, Dipartimento di Meccanica, Università di Cagliari
Software engineering often involves empirical comparisons among different ways of writing computer code. This leads to the need for planned experimentation and has recently opened a new area of application for DoE. In this paper we present an experiment on the production of multimedia services on the web, performed at the Telecom Research Centre in Turin, in which two different ways of developing code, with or without a framework, are compared. As the experiment progresses, the programmer's performance improves as he/she undergoes a learning process; this must be taken into account, as it may affect the outcome of the trial. This case study has motivated us to seek D-optimal plans for such experiments and to indicate some heuristics which allow a much speedier search for the optimum. Solutions differ according to whether or not we assume that the learning process depends on the treatments.
Lourdes Pozueta Fernandez
Department of Statistics and
Industrial Engineering School,
Technical University of Catalonia (UPC)
Several authors have pointed to different ways of designing and analyzing experiments with the aim of finding the conditions at which a group of controllable factors minimizes the variability induced by a group of noise factors on the response. We have studied the mistakes that arise in the conclusions of such experiments under the two most widely used methods.
Method 1. When the experiment is designed through what has been called the "product matrix" and the analysis is conducted using two different variables, one to measure the "level" and another to measure the variability.
Possible problems. When the experiment is a two level factorial design without central points:
* if the design is fractionated and the minimum variance zone is near the design center, the "real" significant factors affecting the variability are two-factor interactions, often confounded with main effects. Frequently the experimenter disregards the "existing" interactions in favor of the "non-existing" main effects
* if the minimum variance zone is near the experimental zone but outside it, the models estimated for the variability overestimate the variance in this zone and mislead the experimenter to believe that the minimum is far from this region
Solutions. Include center points in the design; re-analyze it using the second method
Method 2. When, independently of the design used, the variability is studied through the effects of the noise factors in the "location" model
Problem. The selection of robust conditions is usually done by independently studying each significant noise factor and its interactions with the control factors. This frequently misleads the experimenter.
Solution: study the effects of all significant noise factors together
The presentation will include examples from the literature and a detailed explanation of the rationale behind these errors and how to avoid them.
University of Naples Federico II
Department of Aeronautical Engineering
P.le Tecchio, 80
The logical structure of nearly all service delivery processes shows that the variability of the service outcomes, in terms of quality and added value, depends not only on the service provider, who arranges resources and processes, but also on the customer, who is both the recipient and the co-producer of the service itself. Consequently, an uncommon statistical estimation problem arises. Adopting a specific probabilistic approach, a model of the process of delivering quality has first been developed. The model is based on the definition of the actual service quality level as a random variable. This arises from the randomness of both the effectiveness of the aspects of the service and the choice of those with which the customer really comes into contact. In this work, on the basis of the above approach, an original statistical procedure is proposed in order to evaluate the actual service quality level and provide a diagnostic tool for the improvement of the service delivery process. The procedure consists of a survey phase and a data analysis phase: the survey follows a specific experimental design and involves a few quality experts who simulate many customer-service contacts; the data analysis leads to estimates of three quality indexes. In order to highlight strong and weak points in the service delivery process, the quality indexes are analysed via specific statistical tests, which provide a feasible way to continuously improve the service quality. A real application to the hotel service industry is also provided and fully discussed.
University of Naples Federico II
Department of Aeronautical Engineering
P.le Tecchio, 80 - 80125
It is well known that the service delivery process involves a sequence of many events depending on several random concurrent causes, so a statistical approach is effective for controlling and/or improving the quality of the service. To this end a statistical tool is proposed, aimed at monitoring the service process over time via the observation of dissatisfied customers only. The "return period" of a dissatisfied customer, defined as the interval of time between two successive occurrences of this event, assumes a relevant role, and the related control chart can be built under the hypothesis of a steady demand for the service.
An example of application is discussed for the case of services with a high contact rate, where the provider needs to detect any decrease in the quality level promptly, on pain of suffering severe costs and damage to the company's image.
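As a rough sketch of the return-period idea (the in-control dissatisfaction rate and the false-alarm level are assumed numbers; a geometric model for the gap between complaints is one natural choice under a steady demand, not necessarily the exact chart of the talk):

```python
from scipy import stats

# Under steady demand, suppose customers arrive one per period and each is
# dissatisfied independently with probability p0 when the process is in
# control.  The "return period" T between dissatisfied customers is then
# geometric, and a lower control limit on T signals deterioration: an
# unusually SHORT gap between complaints suggests the rate has increased.
p0 = 0.02                        # assumed in-control dissatisfaction rate
alpha = 0.005                    # false-alarm probability per observed gap
lcl = stats.geom.ppf(alpha, p0)  # smallest t with P(T <= t) >= alpha
print(f"signal if a dissatisfied customer recurs within {lcl:.0f} customer(s)")

def out_of_control(gap, lcl=lcl):
    """Flag a gap between complaints that is suspiciously short."""
    return gap <= lcl

print(out_of_control(1), out_of_control(120))
```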
Norwegian Food Research Institute,
Mixtures are of considerable importance in food science and industry, since most foods are mixtures of a number of different ingredients. Double mixtures appear when two mixtures are involved simultaneously and influence each other mutually. An example from sausage production is considered, where the quantity and the quality of the chemical components are systematically varied, both being mixtures. Mixture models are generally more challenging to analyze than standard regression models, since the mixture restrictions lead to both "exact" and "near" collinearity between the variables. Different methods for handling this problem are considered. Models found by multiple linear regression (MLR) and partial least squares regression (PLSR) are compared. The differences between the predictions were quite small. The PLSR models were, however, found to have more stable regression coefficients, and could at first sight seem more interpretable.
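A minimal sketch of the MLR/PLSR comparison on simulated mixture data (three hypothetical ingredient proportions, made-up coefficients; PLS is implemented here as a one-component NIPALS step rather than the full procedure of the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical mixture data: three ingredient proportions summing to one.
# The sum constraint makes an intercept plus all three proportions exactly
# collinear, so plain least squares needs care (here: pseudo-inverse),
# while PLS regression handles the collinearity by construction.
X = rng.dirichlet([2.0, 2.0, 2.0], size=30)        # rows sum to 1
y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + 0.2 * rng.standard_normal(30)

# MLR via the Moore-Penrose pseudo-inverse (minimum-norm solution).
b_mlr = np.linalg.pinv(X) @ y

# One-component PLS1 (NIPALS), on centred data.
Xc, yc = X - X.mean(axis=0), y - y.mean()
w = Xc.T @ yc
w /= np.linalg.norm(w)            # weight vector
t = Xc @ w                        # scores
q = yc @ t / (t @ t)              # regression of y on the score
b_pls = w * q                     # one-component coefficient vector
print("MLR coefficients:", np.round(b_mlr, 2))
print("PLS coefficients:", np.round(b_pls, 2))
```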
Dr. Yu Adler,
Moscow Institute of Steel and Alloys
Russian Academy for Quality
Dr. V. Shper,
Russian Electrotechnical Institute
Russian Academy for Quality
A simple tool that may be easily applied to the problem of quick calculation of Lower Confidence Limits (LCLs) for Process Capability Indices is presented. The tool is a slide rule that lets one calculate LCLs for Cp and Cpk for many different combinations of parameters such as point estimators, sample sizes (n) and confidence levels.
The same tool simultaneously gives the values of Process Yield (PY) corresponding to the LCLs.
Examples of using the tool are presented, and we conclude with the hope that wide application of this simple tool on the shop floor could support a more reasonable use of capability analysis for process improvement.
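The abstract gives no formulas, but the standard chi-square-based lower confidence limit for Cp, together with the yield implied by a centred normal process, can be sketched as follows (the sample values n = 30, Cp = 1.40 are made up):

```python
import numpy as np
from scipy import stats

# Hypothetical sample: n = 30, estimated Cp = 1.40, 95% confidence.
n, cp_hat, conf = 30, 1.40, 0.95

# Standard chi-square-based lower confidence limit for Cp:
#   LCL = Cp_hat * sqrt(chi2_{1-conf, n-1} / (n-1))
lcl = cp_hat * np.sqrt(stats.chi2.ppf(1 - conf, n - 1) / (n - 1))

# Process yield implied by a centred normal process with capability c:
#   yield = 2 * Phi(3c) - 1
def process_yield(c):
    return 2 * stats.norm.cdf(3 * c) - 1

print(f"LCL for Cp: {lcl:.3f}, worst-case yield: {process_yield(lcl):.5f}")
```

This is exactly the kind of calculation the slide rule mechanises for combinations of point estimate, sample size and confidence level.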
University of Naples Federico II
Department of Aeronautical Engineering
In the early phases of concept design of a new prototype, the designer needs tools to evaluate the goodness of his ideas and, consequently, whether his design choices will satisfy the end user. In car design, this evaluation depends primarily on man-machine interaction. A current challenge is to study this interaction in a virtual environment, using human models to simulate human behaviour.
In this paper the authors present the first results of research aimed at optimising the comfort of a new vehicle using virtual reality tools, involving the application of parameter design methodology. Integrating statistical and industrial design competencies, a classical orthogonal array experiment has been designed and run. Furthermore, a multi-objective loss function has been formulated in order to discriminate among the various configuration settings and eventually find the optimal design in terms of passenger comfort. The results obtained by virtual experimentation concern the improvement of the design in terms of comfort and the increase of designer knowledge about the effects of design parameters on the ergonomics of the vehicle. This case study has been fully developed in a virtual lab available at PRODE (PRoduct DEsign), a recently established consortium between the University of Napoli Federico II and Elasis - Fiat Research Centre in Southern Italy.
In this presentation an approach to applying statistical techniques is proposed to support developers and engineers in improving the reliability of their (new) product designs. This approach focuses on answering essential questions during the development phase, such as:
- What is the actual reliability of the product?
- Do we meet the specifications?
- What are the dominant failure mechanisms?
- What are the critical parameters of the design?
- What are their optimal settings?
- Is an implemented solution an improvement or not?
- What sample sizes are required?
Keywords: 2^(n-k) schemes, Kaplan-Meier, censoring, parametric models, Weibull, testing.
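One of the keyword techniques, Weibull fitting of a censored life test, can be sketched as follows (the test size, true parameters and censoring time are all invented for the illustration; the talk's actual analyses are not reproduced here):

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(3)

# Hypothetical life test: 20 units with Weibull(shape=2, scale=100)
# lifetimes, test stopped at t = 120 so longer lives are right-censored.
t_true = 100 * rng.weibull(2.0, 20)
censor = 120.0
t = np.minimum(t_true, censor)
observed = t_true <= censor          # False = censored at t = 120

def neg_log_lik(params):
    """Weibull log-likelihood with right-censoring: failures contribute
    the density, censored units contribute the survival function."""
    k, lam = params
    if k <= 0 or lam <= 0:
        return np.inf
    z = t / lam
    ll_fail = np.log(k / lam) + (k - 1) * np.log(z) - z ** k
    ll_cens = -z ** k                # log S(t) for censored units
    return -(ll_fail[observed].sum() + ll_cens[~observed].sum())

res = optimize.minimize(neg_log_lik, x0=[1.0, np.mean(t)], method="Nelder-Mead")
k_hat, lam_hat = res.x
print(f"estimated shape ~ {k_hat:.2f}, scale ~ {lam_hat:.1f}")
```

The fitted parameters answer the "do we meet the specifications?" question via predicted reliability at the required mission time.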
Industrial Engineering Department
Iran University of Science & Technology
There are many situations in which the joint control of several related variables must be considered in order to monitor the quality of a process properly. This fact is sometimes overlooked when a set of jointly distributed variables is monitored individually using separate univariate control procedures. Such an approach ignores the correlation that exists between the variables and results in frequent process adjustments, which will eventually lead to an unstable process. This point was first discussed by Hotelling (1947), who introduced the concept of multivariate quality control.
The purpose of this paper is to investigate the average run length properties of the economically designed control chart and to show that this method is not always a proper procedure for process control programs. An alternative control method based on cumulative sum (CUSUM) procedures will be provided, and it will be shown that its properties are superior to those of the traditional control chart.
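The standard tabular CUSUM that such comparisons are built on can be sketched as follows (the reference value k = 0.5σ and decision interval h = 5σ are the textbook defaults for detecting a one-sigma shift; the simulated data are illustrative only):

```python
import numpy as np

# Tabular CUSUM for the mean: accumulate standardized deviations beyond a
# slack value k, and signal when either one-sided sum exceeds the decision
# interval h.  Standard choices k = 0.5, h = 5 (in sigma units) give a long
# in-control run length but detect a one-sigma shift quickly.
def cusum(x, mu0, sigma, k=0.5, h=5.0):
    c_plus = c_minus = 0.0
    signals = []
    for i, xi in enumerate(x):
        z = (xi - mu0) / sigma
        c_plus = max(0.0, c_plus + z - k)
        c_minus = max(0.0, c_minus - z - k)
        if c_plus > h or c_minus > h:
            signals.append(i)
    return signals

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0, 1, 50),     # in control
                    rng.normal(1, 1, 50)])    # 1-sigma upward shift at i = 50
print("first signal at observation:", cusum(x, mu0=0.0, sigma=1.0)[0])
```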
Erika Blanc and Paolo Giudici
University of Pavia
AIM: to combine sequence rules with statistical association methods to obtain valuable information on e-consumer behaviour from logfile data, and to show how this combined analysis can be carried out in SAS and SAS Enterprise Miner.
In the presentation we shall show:
- The available data (source: SAS Italy)
- Use of support and confidence rules
- Use of odds ratios and corresponding confidence intervals
- Symmetric models for sequences (graphical loglinear models)
- Asymmetric models for sequences (probabilistic expert systems)
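The support/confidence and odds-ratio machinery listed above can be sketched outside SAS as well (the clickstream sessions below are invented, and the rule "catalog -> cart" is a hypothetical example, not drawn from the SAS Italy data):

```python
import math

# Hypothetical clickstream: each session is the set of visited page sections.
sessions = [
    {"home", "catalog", "cart"}, {"home", "catalog"}, {"home", "help"},
    {"catalog", "cart"}, {"home", "catalog", "cart"}, {"catalog"},
    {"home"}, {"home", "catalog", "cart"}, {"catalog", "cart"}, {"home", "help"},
]
n = len(sessions)

# Rule "catalog -> cart": support and confidence.
n_cat = sum("catalog" in s for s in sessions)
n_both = sum("catalog" in s and "cart" in s for s in sessions)
support = n_both / n
confidence = n_both / n_cat

# 2x2 table for the odds ratio and its approximate 95% CI (log scale).
a = n_both
b = n_cat - n_both                               # catalog, no cart
c = sum("cart" in s and "catalog" not in s for s in sessions)
d = n - a - b - c                                # neither section visited
c, d = max(c, 0.5), max(d, 0.5)                  # continuity guard for zero cells
odds_ratio = (a * d) / (b * c)
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"support={support:.2f} confidence={confidence:.2f} "
      f"OR={odds_ratio:.1f} CI=({lo:.1f}, {hi:.1f})")
```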
The evolution of total quality has occurred over the past five decades since 1951, the seminal year when Dr. W. Edwards Deming gave his lectures to the Japanese on the application of statistics to business, Dr. Joseph M. Juran published the first edition of his Quality Handbook and Dr. Armand V. Feigenbaum wrote the first edition of his book Total Quality Control. Now, fifty years later, we face the second level of improvement of the past half century: moving from emphasis on control of total quality to management of total quality (or TQM as we abbreviate it), and then from TQM to another level of thought integration that is represented by Six Sigma. This presentation describes the definitions of Total Quality and Six Sigma, distinguishes their unique characteristics and identifies appropriate roles for statisticians in the current era of Six Sigma quality initiatives. The business reasons for the rapid diffusion of Six Sigma are discussed, along with an assessment of the unique lessons learned over the ten years of Six Sigma deployments.
At the conclusion of this presentation, an agenda for further investigation will be presented. Specific areas and topics for both basic and applied research will be identified, along with a 'wish list' of case study topics to stimulate the involvement of the European Network for Business and Industrial Statistics in related areas of investigation.