ENBIS-13 in Ankara

15 – 19 September 2013
Abstract submission: 5 February – 5 June 2013

My abstracts


The following abstracts have been accepted for this event:

  • Non-linear Mixed Model Approach for the Statistical Analysis of Stability Data

    Authors: Heidi Wouters (Ablynx)
    Primary area of focus / application: Modelling
    Keywords: Shelf-life, Stability, Non-linear mixed model, Temperature excursions
    Submitted at 11-Apr-2013 17:52 by Heidi Wouters
    Accepted
    16-Sep-2013 11:00 Non-linear Mixed Model Approach for the Statistical Analysis of Stability Data
    Stability data consist of repeated measurements of stability responses taken at different temperatures (a long-term storage condition and several accelerated conditions) for multiple batches. The aim is twofold: to predict the shelf life at a given long-term storage condition, and to assess the impact of temperature excursions.
    A commonly used approach consists of two steps. First, each response is assumed to decrease over time according to first-order kinetics, and this model is fitted separately at each temperature, yielding a temperature-specific degradation rate. Second, the Arrhenius equation is fitted to describe the degradation rate as a function of temperature. From this equation the degradation rate, and hence the shelf life, can be predicted at any temperature.
    In this paper the two-step approach described above is combined into a single all-in-one non-linear mixed model, which has several advantages. In particular, the standard error of the degradation rate is much smaller, since its estimate is based on all the data, and the standard errors are more accurate, since no linearization of the non-linear model is required. As a result, the all-in-one procedure yields more reliable estimates of the degradation rate at intermediate temperatures. Moreover, because a random batch effect is included in the model, the results can be generalized to any other potential stability batch. An R tool has been developed for computing the shelf life at long-term storage conditions and under potential temperature excursions based on the all-in-one non-linear mixed model.
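
    As an illustration of the all-in-one idea, a minimal R sketch using the nlme package on simulated data; the column names, starting values and the exact Arrhenius parameterisation (EaR = Ea/R, reference temperature 298 K) are assumptions of this sketch, not the authors' actual tool.

        # Sketch: first-order kinetics with an Arrhenius link, fitted as one
        # non-linear mixed model with a random batch effect. Data are simulated.
        library(nlme)

        set.seed(1)
        des <- expand.grid(batch = factor(1:4),
                           tempK = c(278, 298, 313),   # 5, 25, 40 degrees C
                           time  = c(0, 3, 6, 9, 12))  # months
        k.true    <- 0.01 * exp(-9000 * (1 / des$tempK - 1 / 298))  # Arrhenius rate
        des$assay <- 100 * exp(-k.true * des$time) +
                     rep(rnorm(4, 0, 0.5), length.out = nrow(des)) +  # batch effect
                     rnorm(nrow(des), 0, 0.3)                         # residual noise

        # All-in-one fit: log-rate at 25C (lk25) and Ea/R (EaR) are shared across
        # temperatures; the batch enters as a random intercept on the initial content.
        fit <- nlme(assay ~ C0 * exp(-exp(lk25 - EaR * (1/tempK - 1/298)) * time),
                    fixed  = C0 + lk25 + EaR ~ 1,
                    random = C0 ~ 1 | batch,
                    data   = des,
                    start  = c(C0 = 100, lk25 = log(0.01), EaR = 8000))
        summary(fit)

        # Predicted degradation rate at an intermediate temperature (30 degrees C):
        cf <- fixef(fit)
        exp(cf[["lk25"]] - cf[["EaR"]] * (1/303 - 1/298))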
  • Bayesian Assessment of Seismic Hazard Curves

    Authors: Alberto Pasanisi (EDF R&D), Merlin Keller (EDF R&D), Gloria Senfaute (EDF - CEIDRE), Marine Marcilhac (Phiméca Engineering), Thierry Yalamas (Phiméca Engineering), Christophe Martin (Géoter)
    Primary area of focus / application: Other: French special session
    Keywords: Seismic risk, Bayesian analysis, Industry, POT models, Inference, Prediction
    Submitted at 12-Apr-2013 09:38 by Alberto Pasanisi
    Accepted
    17-Sep-2013 16:55 Bayesian Assessment of Seismic Hazard Curves
    Seismic hazard curves are isolines of a chosen quantity characterising the seismic load (e.g. the peak ground acceleration, PGA) at a given geographical location for a given return period (i.e. for a given annual exceedance probability). Hence, evaluating seismic hazard curves requires a predictive model of the PGA, based on data collected during past earthquakes.
    Generally, the data consist of magnitude values, either experimentally observed or, in the case of ancient earthquakes, indirectly inferred from historically recorded damage. The acceleration at a given location caused by an earthquake of known magnitude occurring somewhere in the domain can be evaluated by means of a semi-empirical function, called the attenuation law, which depends on the distance between the location under investigation and the seismic source.
    In this communication we sketch a full Bayesian methodology for assessing the PGA distribution function (and thus the seismic hazard curves) at any point of a geographical domain. First (inferential step), the probabilistic model of the magnitudes of the seismic sources is assessed. This model, which follows the POT (peaks over threshold) formalism, consists of the distribution of the annual number of earthquakes exceeding a given magnitude, coupled with the probability density of the magnitudes given that they exceed the threshold. Then (predictive step), the PGA is evaluated at the points of interest, taking into account the uncertainty affecting the parameters of the magnitude distribution and the attenuation law.
    Finally, as a perspective, we sketch some ways to build point estimators of the acceleration corresponding to given return periods, based on Bayesian model averaging and Bayesian decision theory.
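
    A minimal R sketch of the inferential and predictive steps on synthetic data. The Poisson exceedance rate, the exponential excess-magnitude model, the conjugate gamma priors and the log-linear attenuation law (its coefficients and the 10 km source distance are entirely hypothetical numbers) are illustrative assumptions, not the parametrisation used by the authors.

        # Inferential step: conjugate POT posteriors for the exceedance rate
        # (lambda) and the excess-magnitude rate (beta); predictive step:
        # Monte Carlo simulation of the annual maximum PGA at one site.
        set.seed(2)
        u    <- 4.0                                  # magnitude threshold
        nyrs <- 200
        nexc <- rpois(nyrs, 0.3)                     # exceedances per year
        mags <- u + rexp(sum(nexc), rate = 2.0)      # magnitudes above threshold

        a0 <- 1; b0 <- 1                             # vague gamma priors
        post.lambda <- function(n) rgamma(n, a0 + sum(nexc), b0 + nyrs)
        post.beta   <- function(n) rgamma(n, a0 + length(mags), b0 + sum(mags - u))

        # Toy attenuation law: log PGA linear in magnitude and log-distance.
        pga <- function(m, r) exp(-6 + 1.2 * m - 1.0 * log(r) +
                                  rnorm(length(m), 0, 0.3))
        nsim <- 1e4
        lam <- post.lambda(nsim); bet <- post.beta(nsim)
        amax <- sapply(seq_len(nsim), function(i) {   # annual max PGA, r = 10 km
          k <- rpois(1, lam[i])
          if (k == 0) return(0)
          max(pga(u + rexp(k, bet[i]), r = 10))
        })
        mean(amax > 0.1)              # annual exceedance probability of 0.1 g
        quantile(amax, 1 - 1/1000)    # PGA level for a 1000-year return period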
  • The Effect of Variance on Fuzzy Quality Control Charts

    Authors: Nilüfer Pekin Alakoc (Atilim University), Ayşen Apaydın (Ankara University)
    Primary area of focus / application: Other: Fuzzy
    Keywords: Quality control charts, Fuzzy set theory, Variances of fuzzy numbers, Unnatural patterns, Average run length
    Submitted at 12-Apr-2013 12:01 by Nilüfer Pekin Alakoc
    Accepted
    16-Sep-2013 14:45 The Effect of Variance on Fuzzy Quality Control Charts
    The primary goal of a control chart is to identify and eliminate assignable causes of process variation. When control charts are constructed with fuzzy set theory, the traditional rules are insufficient to define all out-of-control conditions, because fuzzy numbers increase the number of nonrandom patterns. In this study, a new perspective on out-of-control conditions for fuzzy control charts is presented. The variances of the fuzzy numbers, which may indicate assignable causes of variation, are proposed as a complement to the fuzzy control chart approach. To illustrate the approach, an example of an in-control process is presented together with the variances of the fuzzy numbers. Moreover, the effect of the variances on fuzzy control charts is investigated in terms of average run length.
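
    The abstract does not fix a representation, so the R sketch below is only a hypothetical illustration of the general idea: triangular fuzzy observations, a weighted-midrange defuzzification, Shewhart-style limits on the defuzzified means, and the within-sample variances tracked alongside as a second indicator. None of these choices are claimed to be the authors'.

        # Illustrative sketch: samples of triangular fuzzy numbers (l, m, r),
        # monitored through a defuzzified value and its within-sample variance.
        set.seed(3)
        nsamp <- 25; n <- 5
        one_sample <- function() {
          m <- rnorm(n, 10, 1)                # modal values
          s <- runif(n, 0.2, 0.6)             # fuzziness (spread) of each number
          cbind(l = m - s, m = m, r = m + s)
        }
        samples <- replicate(nsamp, one_sample(), simplify = FALSE)

        defuzz <- function(x) (x[, "l"] + 2 * x[, "m"] + x[, "r"]) / 4  # midrange
        xbar <- sapply(samples, function(s) mean(defuzz(s)))
        s2   <- sapply(samples, function(s) var(defuzz(s)))   # within-sample variance

        # Shewhart-style 3-sigma limits for the defuzzified means
        cl <- mean(xbar); sig <- sqrt(mean(s2) / n)
        c(LCL = cl - 3 * sig, CL = cl, UCL = cl + 3 * sig)
        any(xbar < cl - 3 * sig | xbar > cl + 3 * sig)        # any signals?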
  • Implementation of Steady and Second Order Polynomial Models Using R

    Authors: H. Yagmur Gurkan (Hacettepe University Statistics Department), Gul Ergun (Hacettepe University Statistics Department)
    Primary area of focus / application: Economics
    Keywords: Dynamic linear models, Bayesian inference, Kalman filtering, Forward filtering backwards sampling, Gibbs sampling, R
    Submitted at 12-Apr-2013 14:35 by H. Yagmur Gurkan
    Accepted
    In recent decades, dynamic linear models (DLMs) have become considerably popular in time series analysis. The availability of stochastic simulation techniques, together with advanced packages that allow Bayesian time series models to be analysed without computational difficulties, largely explains this popularity.
    This study implements three types of dynamic linear models, namely the steady model (random walk plus noise), the local linear trend model, and a combined second-order polynomial model with a seasonal effect, in the R package dlm. Several methods, such as maximum likelihood, forward filtering backward sampling and Gibbs sampling, are applied to each model. Datasets are generated to implement the steady model and the linear growth model. In addition, the Turkey Cost of Living Index (Wage Earners) is analysed with a second-order polynomial model with a seasonal component. Estimates of the unknown observational and system variances, as well as the Kalman filter results, are obtained.
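
    A short R sketch of this workflow for the steady model, using the dlm package's standard interface (dlmModPoly, dlmMLE, dlmFilter, dlmBSample); the simulated series and starting values are illustrative, not the study's data.

        # Steady model (local level): ML estimation of the two variances,
        # Kalman filtering, and one forward-filtering backward-sampling draw.
        library(dlm)

        set.seed(4)
        y <- cumsum(rnorm(120, 0, 0.5)) + rnorm(120, 0, 1)   # random walk + noise

        build <- function(par) dlmModPoly(order = 1, dV = exp(par[1]), dW = exp(par[2]))
        mle <- dlmMLE(y, parm = c(0, 0), build = build)      # maximum likelihood
        mod <- build(mle$par)
        c(V = V(mod), W = W(mod))                            # variance estimates

        filt  <- dlmFilter(y, mod)                           # Kalman filter
        theta <- dlmBSample(filt)                            # one FFBS draw of the states

        # The local linear trend and the seasonal second-order polynomial model
        # are built analogously, e.g.:
        # mod2 <- dlmModPoly(order = 2) + dlmModSeas(frequency = 12)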
  • Use of the Sequential Nature of Bayes' Theorem in the Construction of Quality Control Charts

    Authors: Haydar Demirhan (Hacettepe University)
    Primary area of focus / application: Other: ISBA/IS Invited Session
    Keywords: Credible interval, CUSUM, Bayesian estimation, EWMA, Quality control chart, Shewhart chart, Tolerance interval control limits
    Submitted at 12-Apr-2013 14:55 by Haydar Demirhan
    Accepted
    18-Sep-2013 10:15 Use of the Sequential Nature of Bayes' Theorem in the Construction of Quality Control Charts
    Quality control charts (QCCs) are used to monitor various characteristics of business or industrial processes such as mean level, variability, number of defects, etc. QCCs are very useful tools for keeping the process of interest under control.

    The basic QCCs are known as Shewhart charts. Shewhart QCCs are constructed under various assumptions about the quality characteristic under consideration; for example, when constructing an X-bar chart, we assume normality and independence of the observations. The stability of a process is evaluated by testing a hypothesis appropriate to the type of chart at Type-I error level α. If the required assumptions hold, the size of the rejection region is α/2 on each side, and for a 3-σ Shewhart chart the probability that a point falls within the control limits while the process is in control is intended to be 0.9973. For cases in which the required assumptions do not hold, various other QCCs have been developed, such as the exponentially weighted moving average (EWMA) and cumulative sum (CUSUM) charts. From another point of view, tolerance interval control limits have been introduced; these are useful for controlling the probability content at a specified level with a predetermined confidence.

    There are also various approaches to the construction of QCCs from the Bayesian perspective. Most Bayesian approaches are based on credible intervals, and there are also Bayesian approaches for the calculation of tolerance control limits. These approaches are especially useful when the standard assumptions of the Shewhart charts do not hold. Another approach is to use the sequential nature of Bayes' theorem in the construction of Bayesian QCCs, a feature that has received little attention in the literature. For each type of QCC, Bayesian or frequentist, there are difficulties or limitations in the construction of control limits. When Bayes' theorem is used sequentially, the posterior distribution of the quality characteristic of interest is progressively updated, so information on the characteristic accumulates after each sample taken from the process. As a result, more accurate estimates of the control limits are obtained, and both small and large shifts are detected effectively. In addition, this approach requires neither the determination of parameters by experts nor the collection of a preliminary sample.

    In this presentation, we discuss the sequential use of Bayes' theorem in the construction of tolerance interval control limits for various quality characteristics. First, we review previously proposed approaches for Bayesian attribute control charts and the Bayesian X-bar chart for exponentially distributed measurements. Then, we introduce Bayesian tolerance interval control limits for gamma-distributed measurements, for which the distributional assumptions of the Shewhart charts do not hold. Lastly, we discuss the performance of the charts constructed through the sequential use of Bayes' theorem.
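
    A minimal R sketch of the sequential-updating idea for gamma-distributed measurements: with a known shape k, a Gamma(a, b) prior on the rate is conjugate, so the posterior, and hence posterior-predictive control limits, can be updated after every subgroup. The known shape, the prior values and the simulation-based limits are assumptions of this sketch, not the exact construction in the talk.

        # Sequential conjugate updating for gamma measurements with known shape.
        set.seed(5)
        k <- 4                          # known gamma shape
        a <- 2; b <- 2                  # Gamma(a, b) prior on the rate
        for (t in 1:10) {
          x <- rgamma(5, shape = k, rate = 1.5)      # new subgroup from the process
          a <- a + length(x) * k; b <- b + sum(x)    # sequential conjugate update
          # Posterior-predictive limits by simulation (0.27% total tail area,
          # matching the 3-sigma convention)
          rate <- rgamma(1e4, a, b)
          pred <- rgamma(1e4, shape = k, rate = rate)
          lim  <- quantile(pred, c(0.00135, 0.99865))
          cat(sprintf("sample %2d: LCL = %.2f, UCL = %.2f\n", t, lim[1], lim[2]))
        }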
  • A Comparison of Methodologies for On-line Batch Process Monitoring

    Authors: Véronique Gomes (University of Coimbra), Pedro Saraiva (University of Coimbra), Marco Reis (University of Coimbra)
    Primary area of focus / application: Process
    Keywords: Statistical process control, On-line batch process monitoring, Two-way methods, Three-way methods
    Submitted at 12-Apr-2013 17:49 by Véronique Gomes
    Accepted
    16-Sep-2013 16:05 A Comparison of Methodologies for On-line Batch Process Monitoring
    Batch processes provide flexible and effective solutions for producing high-value products in different sectors, such as the pharmaceutical, fine chemicals and food industries. The dynamic and non-stationary behaviour of this class of processes, however, raises important challenges for the usual activities of statistical process monitoring, which are crucial to ensure that product quality remains consistent at the end of each batch run and appropriate for the product's end use. Furthermore, monitoring must be done on-line, using data from all process measurements, which carry information about the process, the reactions involved, the variables' intrinsic dynamics and interactions, etc. This large amount of information can be organized in a three-way data array of (variables) x (time) x (batches). In this context, several multivariate statistical methodologies have been developed for monitoring and diagnosing batch processes, including two-way (Nomikos and MacGregor, 1994; Nomikos and MacGregor, 1995) and three-way methods (Louwerse and Smilde, 2000). However, the pros and cons of each class of methodologies are not well established, and in this work we report results of an extensive comparison study aimed at clarifying the contexts in which a given methodology should be recommended over the others. More specifically, in this paper we propose to: i) compare the performance of two different classes of statistical monitoring approaches, namely the two-way unfolded methodology based on PCA and the three-way approaches based on the PARAFAC and Tucker3 methods; and ii) evaluate different methodologies for the calculation of control limits. The comparisons were conducted in terms of both fault detection ability and false alarm rate, after calibrating the in-control false alarm rate.
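
    As an illustration of the two-way route, a minimal R sketch of batch-wise unfolding followed by PCA with T2 and SPE (Q) statistics; the array dimensions, the injected fault and the empirical 99% limits are assumptions of this sketch, not the study's settings.

        # Unfold (variables x time x batches) to (batches) x (variables*time),
        # fit PCA on in-control batches, and flag a faulty batch via T2 and SPE.
        set.seed(6)
        I <- 30; J <- 4; K <- 50                        # batches, variables, times
        traj <- sin(outer(seq(0, pi, length = K), 1:J)) # common trajectories
        X <- array(rep(t(traj), I), dim = c(J, K, I)) + rnorm(J * K * I, 0, 0.1)
        X[, 30:50, I] <- X[, 30:50, I] + 0.8            # fault in the last batch

        U   <- t(apply(X, 3, c))                        # unfolded: I x (J*K)
        ref <- scale(U[1:(I - 1), ])                    # in-control batches
        pca <- prcomp(ref, center = FALSE, scale. = FALSE)
        A   <- 3                                        # retained components

        score <- function(u) {                          # T2 and SPE for one batch
          z  <- (u - attr(ref, "scaled:center")) / attr(ref, "scaled:scale")
          tt <- z %*% pca$rotation[, 1:A]
          e  <- z - tt %*% t(pca$rotation[, 1:A])
          c(T2 = sum(tt^2 / pca$sdev[1:A]^2), SPE = sum(e^2))
        }
        stats <- t(apply(U, 1, score))
        lims  <- apply(stats[1:(I - 1), ], 2, quantile, 0.99)  # empirical limits
        stats[I, ] > lims                                      # faulty batch flagged?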

    REFERENCES
    Louwerse, D.J. and Smilde, A.K. (2000) Multivariate statistical process control of batch processes based on three-way models, Chemical Engineering Science, 55, 1225-1235.
    Nomikos, P. and MacGregor, J.F. (1994) Monitoring batch processes using multiway principal component analysis, AIChE Journal, 40, 1361-1375.
    Nomikos, P. and MacGregor, J.F. (1995) Multivariate SPC charts for monitoring batch processes, Technometrics, 37, 41-59.