ENBIS-13 in Ankara

15 – 19 September 2013
Abstract submission: 5 February – 5 June 2013

My abstracts

 

The following abstracts have been accepted for this event:

  • Managing IT Infrastructure and Service Capacity through Data Analysis

    Authors: Lance Mitchell (Greenfield Research and Allen Systems Group), Michel Lutz (STMicroelectronics and École Nationale Supérieure des Mines)
    Primary area of focus / application: Other: IT Infrastructure and Data Management
    Keywords: IT, Infrastructure, Capacity Planning, Big Data, ITIL
    Submitted at 3-Jun-2013 16:04 by Lance Mitchell
    Accepted (view paper)
    16-Sep-2013 14:45 Managing IT Infrastructure and Service Capacity through Data Analysis
    Information technology (IT) architectures are critical to the efficiency and effectiveness of organizations. Consequently, IT managers must be able to predict the growth and usage of the systems they manage, so that they can ensure the availability and responsiveness of mission-critical systems and access to data.

    Nowadays, IT architectures are very complex, composed of multiple connected and interdependent components. To cope with such complexity, IT executives increasingly rely on data analysis to manage the capacity of information systems (Gunther, 2007; Allspaw, 2008). IT management best practices, such as the ITIL framework, recommend setting up information systems dedicated to facilitating access to all the historical data needed to manage capacity (Lloyd & Rudd, 2007).

    Managing potentially millions of Configuration Items (CIs) and all of their relationships and dependencies is obviously very complex, and requires simplification in order to understand both the relevance and the significance of the collected data.

    The presenters will suggest that the future solution might be a Big Data approach, and will consider the 4 'V's of Big Data: Value, Variety, Velocity and Volume. This may become a topic for post-session discussion.

    Real use cases will be shown to bring these concepts to life.
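    A minimal sketch of the kind of data-driven capacity forecast the abstract alludes to, assuming monthly storage-usage history and a simple linear growth trend; the model, the simulated figures and the capacity threshold below are hypothetical illustrations, not the presenters' method.

    ```python
    # Hypothetical illustration: fit a linear trend to 24 months of storage usage
    # and estimate when the assumed installed capacity would be reached.
    import numpy as np

    months = np.arange(24)                                      # 24 months of history
    usage_tb = 40 + 1.8 * months + np.random.default_rng(0).normal(0, 1.5, 24)

    slope, intercept = np.polyfit(months, usage_tb, 1)          # linear growth trend
    capacity_tb = 120.0                                         # assumed installed capacity
    months_to_full = (capacity_tb - intercept) / slope          # trend hits capacity
    print(f"growth ~ {slope:.2f} TB/month; capacity reached around month {months_to_full:.0f}")
    ```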
  • Does Risk Diversification Always Work? The Answer through Simple Modelling

    Authors: Marie Kratz (ESSEC Business School)
    Primary area of focus / application: Other: French special session
    Keywords: financial risk, insurance risk, risk analysis, risk diversification, quantitative risk management, stochastic risk modelling
    Submitted at 3-Jun-2013 16:07 by Marie Kratz
    Accepted (view paper)
    17-Sep-2013 16:15 Does Risk Diversification Always Work? The Answer through Simple Modelling
    Using a simple model based on throwing a die, we show how to price an insurance risk and that this price decreases when many similar policies are sold. The diversification benefit increases with the number of policies, and the risk loading of the premium required for the risk decreases accordingly, tending to 0. This holds as long as the risks are completely independent. We propose and study analytically three ways of introducing a non-diversifiable risk. For each case, the behavior of the risk loading based on the underlying risk process is examined, and the results are then discussed in terms of the risk loading. A numerical illustration is provided for each case. Such a model could be used to study particular investment choices under uncertainty.
    This work was done in collaboration with Marc Busse and Michel Dacorogna (SCOR).
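    A minimal numerical sketch of the diversification effect described above, assuming a toy contract that pays one unit when a fair die shows a six and a risk loading proportional to the standard deviation of the average loss per policy; the loading multiplier is an assumption for illustration.

    ```python
    # Hypothetical illustration: a policy pays 1 unit when a fair die shows a six;
    # the premium is the expected loss plus a loading proportional to the standard
    # deviation of the average loss over n independent policies.
    import math

    p = 1 / 6          # claim probability (die shows a six)
    eta = 1.0          # assumed risk-loading multiplier

    for n in (1, 10, 100, 10_000):
        sd_mean = math.sqrt(p * (1 - p) / n)     # std. dev. of the mean loss
        loading = eta * sd_mean                  # risk loading per policy
        print(f"n={n:>6}: risk loading = {loading:.4f}, premium = {p + loading:.4f}")
    ```

    As the number of independent policies grows, the standard deviation of the mean loss shrinks like 1/sqrt(n), so the loading tends to 0, which is the diversification benefit the abstract describes.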
  • Regression Models to Identify Extreme Risk Factors

    Authors: Meitner Cadena (ESSEC & UPMC)
    Primary area of focus / application: Modelling
    Keywords: Extreme values, Regression model, Generalized linear model, Regression trees, Health insurance portfolio, Retired people
    Submitted at 4-Jun-2013 16:37 by Meitner Cadena
    Accepted (view paper)
    17-Sep-2013 16:35 Regression Models to Identify Extreme Risk Factors
    We develop a new regression-based method to identify factors related to extreme events. The method draws on generalized linear modelling and on changes of variables: the former rests on a well-established theory, while the latter introduces variables commonly used to represent extreme risks. It also uses regression tree techniques, which ease its development. The method is general and can therefore be applied in different fields. An application will be developed considering risk factors in a health insurance portfolio of retired people, a population of particular interest nowadays because of its rapid growth.
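    A minimal sketch of the general idea under stated assumptions: claims above a high quantile are flagged as extreme, and that indicator is related to covariates with a logistic GLM and a classification tree. The column names and simulated data below are hypothetical, not the portfolio studied in the talk.

    ```python
    # Hypothetical illustration: flag costs above the 95th percentile as "extreme",
    # then relate the indicator to covariates with a logistic GLM and a tree.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(1)
    df = pd.DataFrame({
        "age": rng.integers(60, 95, 2000),       # assumed covariates
        "chronic": rng.integers(0, 2, 2000),
        "cost": rng.gamma(2.0, 500.0, 2000),     # simulated claim costs
    })
    df["extreme"] = (df["cost"] > df["cost"].quantile(0.95)).astype(int)

    glm = smf.logit("extreme ~ age + chronic", data=df).fit(disp=0)    # logistic GLM
    tree = DecisionTreeClassifier(max_depth=3).fit(df[["age", "chronic"]], df["extreme"])
    print(glm.summary())
    ```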
  • Pareto-optimal Designs - Computer Simulation Experiments for Alternatives to G-optimal Designs

    Authors: Helmut Waldl (Johannes Kepler University Linz)
    Primary area of focus / application: Design and analysis of experiments
    Keywords: optimal design, Pareto surface, G-optimality, D-optimality, computer simulation experiment
    Submitted at 5-Jun-2013 11:46 by Helmut Waldl
    Accepted
    16-Sep-2013 11:40 Pareto-optimal Designs - Computer Simulation Experiments for Alternatives to G-optimal Designs
    A popular criterion for minimizing the variance of estimates in experimental design is G-optimality: a G-optimal design minimizes the maximal variance of the predicted values. If kriging methods are used for prediction, it is natural to take the kriging variance as the measure of uncertainty of the estimates. However, computing the corrected kriging variance is very costly, and finding the maximal kriging variance over high-dimensional regions is so computationally demanding that, in practice, the G-optimal design cannot really be found with the computer equipment available today.

    D-optimality is another design criterion: a D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield fundamentally different designs. The Pareto frontier of these two determinant criteria consists of designs that perform well under both criteria.

    Under certain conditions, searching for the G-optimal design on this Pareto frontier yields results almost as good as searching the whole design region, yet the maximal kriging variance has to be computed only a few times.

    The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM).
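    A minimal sketch of filtering candidate designs to a Pareto frontier over two criteria, assuming a linear trend model; the second criterion below is only a stand-in for the covariance-parameter information criterion, which depends on the kriging model actually used in the paper.

    ```python
    # Hypothetical illustration: score random candidate designs on two criteria and
    # keep the Pareto-optimal ones (no other design is at least as good on both
    # criteria and strictly better on one).
    import numpy as np

    rng = np.random.default_rng(0)

    def criteria(design):
        F = np.column_stack([np.ones(len(design)), design])        # linear trend model
        d_trend = np.linalg.det(F.T @ F)                            # D-criterion for the trend
        dists = np.linalg.norm(design[:, None] - design[None, :], axis=-1)
        spread = np.log(dists[np.triu_indices(len(design), 1)]).mean()   # stand-in 2nd criterion
        return np.array([d_trend, spread])

    designs = [rng.uniform(size=(8, 2)) for _ in range(200)]        # candidate 8-point designs
    vals = np.array([criteria(d) for d in designs])

    pareto = [i for i, v in enumerate(vals)
              if not any((w >= v).all() and (w > v).any() for w in vals)]
    print("Pareto-optimal candidate designs:", pareto)
    ```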
  • Different Classification Techniques Used to Determine the Change Causes in Control Charts

    Authors: Esteban Alfaro Cortés (University of Castilla-La Mancha), José-Luis Alfaro Navarro (University of Castilla-La Mancha), Matías Gámez Martínez (University of Castilla-La Mancha), Noelia García Rubio (University of Castilla-La Mancha)
    Primary area of focus / application: Process
    Keywords: Hotelling T2, linear and quadratic discriminant analysis, artificial neural networks, classification trees, random forest, boosting
    Submitted at 5-Jun-2013 20:15 by Noelia García Rubio
    Accepted
    Statistical quality control procedures have become essential for guaranteeing competitiveness in any manufacturing process. Undoubtedly, the lack of a clear diagnosis has limited the use of multivariate control charts in industrial processes.
    Traditional procedures consisted in analysing a univariate chart for each quality characteristic (Alt, 1985; Hayter and Tsui, 1994), but this is not an appropriate approach when the characteristics are correlated. Because of this and other drawbacks, several methods based on multivariate analysis have been developed, the best known being the decomposition of T2 into components that reflect the contribution of each variable (Mason et al., 1995; Mason et al., 1996 and 1997; Doganaksoy et al., 1991; Timm, 1996; Runger et al., 1996).
    A more recent alternative is the application of classification methods (Murphy, 1987; Cheng, 1995 and 1997; Chang, 1996; Zorriassatine and Tannock, 1998; Guh and Hsieh, 1999; Guh and Tannock, 1999a and 1999b; Ho and Chang, 1999; Cook and Chiu, 1998; Cook et al., 2001; Guh, 2003; Noorosana and Vaghefi, 2003; Niaki and Abassi, 2005; Aparisi et al., 2006; Guh, 2007; Gámez et al., 2009; Alfaro et al., 2009).
    In this work we use the Hotelling T2 decomposition (although the procedure can be extended to MEWMA or CUSUM charts) to compare the performance of linear and quadratic discriminant analysis, artificial neural networks, classification trees, random forests and boosted trees, depending on the correlation level and the kind of shift, aiming to offer a guide for the correct use of each classification technique.
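    A minimal sketch of the setting under assumed parameters: correlated quality characteristics are simulated with a mean shift in one variable, monitored with Hotelling's T2 (known in-control parameters), and a classifier is trained to recover which variable caused the shift. The shift size, correlation level and choice of random forest below are illustrative assumptions.

    ```python
    # Hypothetical illustration: correlated characteristics with a mean shift in one
    # variable; Hotelling's T2 flags the signal, a classifier attributes the cause.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    p, rho, n = 3, 0.5, 3000
    sigma = rho * np.ones((p, p)) + (1 - rho) * np.eye(p)      # assumed correlation level

    labels = rng.integers(0, p, n)                             # variable responsible for the shift
    x = rng.multivariate_normal(np.zeros(p), sigma, size=n)
    x[np.arange(n), labels] += 1.5                             # assumed shift size

    t2 = np.einsum("ij,jk,ik->i", x, np.linalg.inv(sigma), x)  # T2 with known in-control parameters
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(x, labels)
    print("mean T2:", round(t2.mean(), 2), "| cause-attribution accuracy:", clf.score(x, labels))
    ```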
  • Sampling for Nonconformities and Other Issues in the Forthcoming Revision of ISO 2859-2

    Authors: Rainer Göb (University of Würzburg)
    Primary area of focus / application: Quality
    Keywords: Attributes sampling, type A sampling, sampling for nonconformities, limiting quality, consumer's risk, producer's risk
    Submitted at 5-Jun-2013 20:15 by Rainer Göb
    Accepted
    18-Sep-2013 09:20 Sampling for Nonconformities and Other Issues in the Forthcoming Revision of ISO 2859-2
    ISO 2859 is a multi-part series of attribute sampling plans maintained by the International Organization for Standardization (ISO). In this series, the relevant standard for type A sampling of finite lots is ISO 2859-2, issued in 1985. Several deficiencies of the standard have been observed over its 25 years of existence. The major problem is its exclusive focus on the conforming/nonconforming-unit quality paradigm and the absence of any reference to nonconformity counts as a relevant quality indicator. The responsible ISO technical committee, TC 69 "Applications of statistical methods", is therefore planning a revision of ISO 2859-2. The present study reviews the history, contents, and weaknesses of ISO 2859-2, and describes the major components of the intended revision, in particular with respect to sampling for nonconformities.
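    A minimal sketch of the consumer's-risk calculation behind sampling for nonconformities, assuming a Poisson model for the number of nonconformities in a sample of n units; the plan (n, c) and the limiting quality level below are illustrative assumptions, not values from ISO 2859-2.

    ```python
    # Hypothetical illustration: acceptance probability of a plan (n, c) under a
    # Poisson model for the number of nonconformities; evaluated at an assumed
    # limiting quality (LQ) to give the consumer's risk.
    from math import exp, factorial

    def accept_prob(n, c, rate):
        """P(accept) = P(total nonconformities in n sampled units <= c)."""
        mu = n * rate
        return sum(exp(-mu) * mu**k / factorial(k) for k in range(c + 1))

    n, c = 50, 3        # assumed sample size and acceptance number
    lq = 0.125          # assumed LQ, in nonconformities per unit
    print(f"consumer's risk at LQ: {accept_prob(n, c, lq):.3f}")
    ```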