ENBIS-17 in Naples

9 – 14 September 2017; Naples (Italy)
Abstract submission: 21 November 2016 – 10 May 2017

My abstracts


The following abstracts have been accepted for this event:

  • Maintenance Policies Countering Degradation of Water Supply Networks: Statistical Analysis with a Semi-Markov Model and Panel Data

    Authors: Vincent Couallier (Institute of Mathematics of Bordeaux), Cyril Leclerc (SUEZ Eau France, le LyRE), Yves Legat (IRSTEA), Karim Claudio (Cetaqua (SUEZ))
    Primary area of focus / application: Other: Statistical analysis of industrial reliability and maintenance data
    Secondary area of focus / application: Modelling
    Keywords: Reliability analysis, Multi-state models, Degradation processes, Maintenance modeling, Statistical estimation, Semi-Markov models
    Submitted at 3-Mar-2017 16:46 by Vincent Couallier
    Accepted (view paper)
    12-Sep-2017 09:20 Maintenance Policies Countering Degradation of Water Supply Networks: Statistical Analysis with a Semi-Markov Model and Panel Data
    This work addresses the statistical modelling of maintenance and failure data for water supply networks. Concerning the pipes of the network, empirical studies have shown that a failure is the end-point of a degradation process that may be described by a continuous-time multi-state process. Four states are relevant from an engineering point of view:
    - State 0: Leakage-free - the pipe shows no leak.
    - State 1: Background leakage - the leak has initiated but is invisible at the surface, with a flow too low to be detected.
    - State 2: Detectable leakage - the leak is still invisible at the surface but can be detected by acoustic inspection.
    - State 3: Visible burst - the leak appears at the surface and must be repaired.
    The maintenance carried out by water operators includes campaigns of acoustic inspection. These operations yield partial observations of a continuous-time degradation process; in fact, only the last state (visible bursts) is exhaustively recorded. In such a case, multi-state models using panel data are adequate for modelling the leakage degradation process. Indeed, the semi-Markov model with panel data offers an alternative to standard survival analysis of interval-censored lifetime data. We show in this work how to fit such a model to data collected since 2010 by the Bordeaux water utility. The network contains more than 30,000 pipes, each with leak-detection and burst records, as well as characteristics such as material, length, diameter, pressure and soil corrosivity, which are used as model covariates for the transition intensities.
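    As a rough illustration of the panel-data idea (a hedged sketch, not the authors' code), the following Python snippet fits a plain time-homogeneous Markov version of the progressive four-state model by maximum likelihood from states observed only at inspection dates; the semi-Markov structure and the covariates mentioned above are omitted, and the inspection times, states and rate names are invented for the example.

        # Maximum-likelihood fit of a progressive 4-state Markov model from panel data.
        # Assumption: transitions 0 -> 1 -> 2 -> 3 only, constant intensities, no covariates.
        import numpy as np
        from scipy.linalg import expm
        from scipy.optimize import minimize

        def intensity_matrix(log_rates):
            """Generator Q of the progressive model (state 3 = visible burst is absorbing)."""
            l01, l12, l23 = np.exp(log_rates)
            return np.array([[-l01,  l01,  0.0,  0.0],
                             [ 0.0, -l12,  l12,  0.0],
                             [ 0.0,  0.0, -l23,  l23],
                             [ 0.0,  0.0,  0.0,  0.0]])

        def neg_log_lik(log_rates, panels):
            """panels: one (inspection_times, observed_states) pair per pipe."""
            Q = intensity_matrix(log_rates)
            nll = 0.0
            for times, states in panels:
                for k in range(len(times) - 1):
                    P = expm(Q * (times[k + 1] - times[k]))   # transition probabilities over the gap
                    nll -= np.log(max(P[states[k], states[k + 1]], 1e-12))
            return nll

        # Two hypothetical pipes inspected at irregular dates (in years).
        panels = [(np.array([0.0, 2.0, 5.0, 9.0]), np.array([0, 0, 1, 3])),
                  (np.array([0.0, 3.0, 7.0]),      np.array([0, 2, 3]))]

        fit = minimize(neg_log_lik, x0=np.zeros(3), args=(panels,), method="Nelder-Mead")
        print("estimated transition intensities:", np.exp(fit.x))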
  • New Applications of Statistics and Data Analysis for Marketing Research, Applications to the Cosmetic Industry

    Authors: Gianvito Dongiovanni (IPSOS Italy)
    Primary area of focus / application: Other: Statistics for cosmetics
    Keywords: Linear regression, Bayesian networks, New analytical approaches, Market segments
    Submitted at 3-Mar-2017 23:57 by Gianvito Dongiovanni
    Accepted (view paper)
    12-Sep-2017 19:00 New Applications of Statistics and Data Analysis for Marketing Research, Applications to the Cosmetic Industry
    Statistics plays a key role in market research, from sampling design to data analysis, in order to generate valuable insights regarding product design, brand communication or customer satisfaction.
    Linear regression is the classical way to rank predictors or to quantify their respective importance for the desired outcome. New analytical approaches are gaining ground in research: by combining the bootstrapping technique with Bayesian networks, it is possible to infer the causal relationships driving a key outcome from survey data while taking multicollinearity between variables into account. Relative impacts and structural mapping are provided to help identify opportunities and develop action plans that best fit the business strategy and offer the greatest potential.
    Marketing decisions are also supported by the vast amount of information generated by customers on social networks and by new data-collection methods such as open text, pictures and icons embedded in the survey process itself. Using a mix of monitoring tools, it is possible to identify which messages and initiatives are driving the right actions and desired outcomes.
    These methods make it possible to provide clients with evidence supporting marketing decisions: identifying market segments, deciding on a pricing level, assessing the relative strength of their brand versus competitors, and understanding the brand's positioning in the market, e.g. as a high-end or a value-for-money brand. They are also used to rank advertising concepts and to decide on marketing-mix optimisation.
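    As a hedged illustration of the resampling idea only (not the actual method of the talk, which combines bootstrapping with Bayesian networks), the following Python sketch bootstraps a driver-importance analysis based on plain linear regression; the driver names and survey data are invented.

        # Bootstrap confidence intervals for driver importance in a linear key-driver model.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        n = 500
        drivers = rng.normal(size=(n, 3))                                    # e.g. packaging, scent, texture ratings
        outcome = drivers @ np.array([0.5, 0.3, 0.1]) + rng.normal(size=n)   # overall liking

        coefs = []
        for _ in range(1000):                                 # resample respondents with replacement
            idx = rng.integers(0, n, size=n)
            coefs.append(LinearRegression().fit(drivers[idx], outcome[idx]).coef_)
        coefs = np.array(coefs)

        for j, name in enumerate(["packaging", "scent", "texture"]):
            lo, hi = np.percentile(coefs[:, j], [2.5, 97.5])
            print(f"{name}: mean effect {coefs[:, j].mean():.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")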
  • Acceptance Sampling Plans for High Quality Processes

    Authors: Stijn Luca (KU Leuven)
    Primary area of focus / application: Process
    Secondary area of focus / application: Quality
    Keywords: Acceptance sampling, Chain sampling, High quality processes, Operating Characteristic curves
    Submitted at 4-Mar-2017 18:11 by Stijn Luca
    13-Sep-2017 09:40 Acceptance Sampling Plans for High Quality Processes
    Lot-by-lot acceptance sampling plans provide the practitioner with decision rules for accepting or rejecting a current delivery. Acceptance sampling plans can be classified into variable plans, when features of the lot are measured on a numerical scale, and attribute plans, when the measured features classify items as defective or non-defective.
    We will treat the case where sampling takes place from lots coming from a supplier's process of high quality, i.e. a process whose fraction of defects is near zero. Traditional sampling plans do not work well in this case, since any sample of reasonable size will most likely contain zero defects.
    We will propose a generalization of the modified chain sampling plans introduced in [1] that is applicable to both attribute and variable inspection. For this purpose, it is assumed that lots are drawn from a continuing stream of lots produced by a process with an unknown but constant fraction of defects. Chain sampling plans are able to accumulate information from samples drawn from historical lots to estimate the supplier's quality. The proposed plans allow one to go further back in history than the standard chain sampling plans of Dodge [2]. In contrast to zero-acceptance-number single sampling plans, this enables the design of steep operating characteristic (OC) curves that possess an inflection point near zero.
    Algorithms will be proposed to design the proposed plans when the OC curve has to pass through two predetermined points that define the producer's and consumer's risks. Experiments will show that for small fractions of defects the required sample size is smaller than for the classical chain sampling plans of Dodge (an illustrative OC-curve comparison is sketched after the references below).

    [1] K. Govindaraju and C. Lai, A modified ChSP-1 chain sampling plan, MChSP-1, with very small sample sizes, American Journal of Mathematical and Management Sciences 18 (1998), pp. 343–358.
    [2] H. Dodge, Chain sampling inspection plan, Industrial Quality Control 11 (1955), pp. 10–13.
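    To illustrate why chaining information across lots helps at very low defect levels (a hedged numerical sketch with arbitrary n and i, not the algorithms of the talk), the following Python snippet compares the OC curve of a zero-acceptance-number single sampling plan with that of Dodge's ChSP-1 plan [2].

        # OC curves: single sampling with c = 0 versus ChSP-1 chain sampling (Dodge, 1955).
        import numpy as np
        from scipy.stats import binom

        def oc_single_c0(p, n):
            """P(accept lot) for a single sampling plan with acceptance number c = 0."""
            return binom.pmf(0, n, p)

        def oc_chsp1(p, n, i):
            """P(accept lot) for ChSP-1: accept on 0 defectives, or on exactly 1 defective
            provided the preceding i samples contained no defectives."""
            p0 = binom.pmf(0, n, p)
            p1 = binom.pmf(1, n, p)
            return p0 + p1 * p0 ** i

        n, i = 20, 3          # arbitrary sample size and chain length for the illustration
        for p in np.linspace(0.0, 0.15, 16):
            print(f"p = {p:.2f}   single c=0: {oc_single_c0(p, n):.3f}   ChSP-1: {oc_chsp1(p, n, i):.3f}")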
  • Discovering Communities in Customer Purchase Behavior by Means of Social Network Analytics

    Authors: Jasmien Lismont (KU Leuven), Bart Baesens (KU Leuven; University of Southampton), Wilfried Lemahieu (KU Leuven), Jan Vanthienen (KU Leuven)
    Primary area of focus / application: Mining
    Secondary area of focus / application: Business
    Keywords: Big Data, Community mining, Customer target groups, Direct marketing, Retail, Social network analytics
    Submitted at 5-Mar-2017 10:45 by Jasmien Lismont
    Accepted (view paper)
    12-Sep-2017 14:30 Discovering Communities in Customer Purchase Behavior by Means of Social Network Analytics
    Direct marketing is gaining more and more attention in business. Segmentation and personalization of marketing communication allow for increased customer value. In order to derive new insights to advance marketing communication, we create a pseudo-social network of the customers of a European retailer. This network is based on customers' purchasing behavior: customers are connected if they have certain product groups in common. This leads to a bipartite graph with two types of nodes, customers and product groups. Any bipartite graph can be projected onto a unipartite graph with only one type of node, i.e. customers directly linked to customers. However, since we are working on a real-life dataset, challenges frequently associated with big data occur, so techniques suited to this scale are required.
    Subsequently, descriptive social network techniques are applied to the customer network. Specifically, we apply community mining in order to support the development of customer target groups. Various algorithms exist, such as modularity-based, spectral and dynamic algorithms, but most algorithms are developed for unipartite graphs. Some methods for bipartite community mining, e.g. BRIM and CoClusLSH, can offer a solution. Furthermore, the complexity of these algorithms needs to be taken into account: many algorithms, e.g. those based on betweenness, are not feasible for a large network. Finally, profit measures such as recency, frequency and monetary value, or customer lifetime value, can be attached to the customer groups or clusters found in the network. This work is currently still ongoing.
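    As a small, hedged illustration of the bipartite projection and community mining steps (a toy example, not the retailer's data or the authors' pipeline), the following Python sketch builds a customer-to-product-group graph, projects it onto a customer-customer network and applies a modularity-based community algorithm.

        # Bipartite customer-product graph, unipartite projection, and community mining.
        import networkx as nx
        from networkx.algorithms import bipartite, community

        purchases = {                      # hypothetical customer -> product-group baskets
            "c1": {"coffee", "tea"},
            "c2": {"coffee", "wine"},
            "c3": {"tea", "biscuits"},
            "c4": {"wine", "cheese"},
            "c5": {"cheese", "coffee"},
        }

        B = nx.Graph()
        B.add_nodes_from(purchases, bipartite=0)                   # customer nodes
        B.add_nodes_from({g for basket in purchases.values() for g in basket}, bipartite=1)
        B.add_edges_from((c, g) for c, basket in purchases.items() for g in basket)

        # Weighted unipartite projection: customers linked by the product groups they share.
        G = bipartite.weighted_projected_graph(B, nodes=list(purchases))

        # Modularity-based community mining on the projected customer network.
        for k, grp in enumerate(community.greedy_modularity_communities(G, weight="weight")):
            print(f"community {k}: {sorted(grp)}")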
  • The Use of Attribute Charts to Monitor the Process Mean

    Authors: Linda Ho (University of São Paulo)
    Primary area of focus / application: Other: DOE and statistical process monitoring in South America
    Keywords: ARL1, ARL0, Gauge device, Classification, Optimization, Genetic algorithm
    Submitted at 6-Mar-2017 01:38 by Linda Ho
    11-Sep-2017 17:50 The Use of Attribute Charts to Monitor the Process Mean
    To monitor a process mean, the traditional Shewhart X-bar chart is usually employed, and in this case the quality characteristic of the sampled items needs to be measured. It is well known that precise measurements of a quality characteristic are expensive and time-consuming and require the calibration of instruments; in destructive experiments, the sampled units are damaged and must be discarded. In these cases, an alternative is to classify each sampled unit into a group using a device such as gauge rings. Operationally, this method is faster, and no measurement is taken on the sampled unit.
    The aim of this research is to provide an overview of recently proposed attribute control charts for monitoring the process mean based on the results of this classification, showing that attribute charts can be designed to perform well both economically and in terms of ARL1, comparably to the traditional Shewhart X-bar chart.
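    To make the ARL idea concrete (a hedged sketch with arbitrary parameters, not one of the chart designs from the talk), the following Python snippet evaluates a simple np-type attribute chart in which each sampled item is classified by a gauge at mu0 +/- k*sigma and the chart signals when a sample contains too many items outside the gauge limits.

        # ARL of a simple attribute chart for the mean, based on gauge classification.
        from scipy.stats import norm, binom

        def arl(delta, n=10, k=2.0, c=3):
            """Average run length when the mean has shifted by delta standard deviations.
            An item is discrepant if it falls outside mu0 +/- k*sigma; the chart signals
            when a sample of n items contains at least c discrepant items."""
            p = norm.sf(k - delta) + norm.cdf(-k - delta)   # P(item outside gauge limits)
            p_signal = binom.sf(c - 1, n, p)                # P(at least c discrepant items)
            return 1.0 / p_signal

        print("ARL0 (in-control):    ", round(arl(0.0), 1))
        print("ARL1 (1-sigma shift): ", round(arl(1.0), 1))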
  • Applicability of Software Reliability Models

    Authors: Nikolaus Haselgruber (CIS Consulting in Industrial Statistics GmbH)
    Primary area of focus / application: Reliability
    Secondary area of focus / application: Modelling
    Keywords: Reliability, Software, Testing, Modelling
    Submitted at 6-Mar-2017 10:52 by Nikolaus Haselgruber
    12-Sep-2017 19:00 Applicability of Software Reliability Models
    In recent decades, the importance of software as an integrated part of technical products has been increasing substantially. Software runs in small consumer products such as office printers and coffee machines as well as in vehicles, industrial equipment, etc. Consequently, it has a considerable impact on the reliability of such applications and raises the demand for quantification. For this purpose, several stochastic modelling approaches have been proposed since the 1970s. The availability of models as a matter of principle, however, does not necessarily solve the problem of measuring software reliability.
    This presentation discusses the adequacy of the term “Software Reliability” in general and gives a short overview of relevant models [1-7]. Furthermore, important aspects to be considered in practical applications and an example from the automotive industry will be presented (an illustrative fit of one of the classical models is sketched after the references below).
    [1] Z. Jelinski and P. Moranda (1972). Software reliability research. In W. Freiberger (Ed.), Statistical Computer Performance Evaluation, Academic Press, pp. 465-497.
    [2] B. Littlewood and J. Verrall (1973). A Bayesian Reliability Growth Model for Computer Software. Journal of the Royal Statistical Society, Series C, Vol. 22.
    [3] N. F. Schneidewind (1975). Analysis of Error Processes in Computer Software. ACM SIGPLAN Notices, Vol. 10.
    [4] J. D. Musa and K. Okumoto (1983). Software Reliability Models: Concepts, Classification, Comparisons, and Practice. In J. K. Skwirzynski (Ed.), Electronic Systems Effectiveness and Life Cycle Costing, NATO ASI Series, pp. 395-424. Heidelberg: Springer-Verlag.
    [5] A. L. Goel (1985). Software Reliability Models: Assumptions, Limitations, and Applicability. IEEE Transactions on Software Engineering.
    [6] C. Wohlin, M. Höst, P. Runeson and A. Wesslén (2001). Software Reliability. In Encyclopedia of Physical Science and Technology (third edition), Vol. 15, Academic Press.
    [7] A. P. Sage and W. B. Rouse (2009). Handbook of Systems Engineering and Management. Wiley, New York.
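    As a hedged, self-contained illustration of one of the classical models listed above (synthetic failure data, not the automotive example from the talk), the following Python sketch fits the Jelinski-Moranda model [1], in which the failure rate after the (i-1)-th fix equals phi * (N - i + 1) and inter-failure times are exponentially distributed.

        # Maximum-likelihood fit of the Jelinski-Moranda software reliability model.
        import numpy as np
        from scipy.optimize import minimize

        # Hypothetical times between successive failures (hours), showing reliability growth.
        t = np.array([12.0, 15.0, 22.0, 25.0, 40.0, 55.0, 70.0, 110.0, 180.0, 250.0])
        i = np.arange(1, len(t) + 1)

        def neg_log_lik(params):
            N, phi = np.exp(params)                  # N = initial number of faults, phi = per-fault rate
            remaining = N - i + 1                    # faults still present before failure i
            if np.any(remaining <= 0):
                return np.inf
            rate = phi * remaining
            return -np.sum(np.log(rate) - rate * t)  # exponential log-likelihood of the gaps

        fit = minimize(neg_log_lik, x0=[np.log(len(t) + 5.0), np.log(0.001)], method="Nelder-Mead")
        N_hat, phi_hat = np.exp(fit.x)
        print(f"estimated initial number of faults: {N_hat:.1f}")
        print(f"estimated per-fault failure rate:   {phi_hat:.5f} per hour")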