ENBIS: European Network for Business and Industrial Statistics
ENBIS17 in Naples
9 – 14 September 2017; Naples (Italy)
Abstract submission: 21 November 2016 – 10 May 2017
The following abstracts have been accepted for this event:

Statistical Methods and Tools as Drivers of Innovation in the Energy Industry: Some Insights and Examples
Authors: Alberto Pasanisi (EDF R&D  EIFER)
Primary area of focus / application: Other: ENBIS Best manager Award
Keywords: Statistics, R&D, Innovation, Energy, Modelling, Simulation
Submitted at 9 Feb 2017 14:01 by Alberto Pasanisi
Accepted
In particular, statistics and data analysis provide valuable methods and tools to quantify and manage uncertainties when forecasting the behaviour of industrial or natural systems, to extract knowledge from data and expertise, and to recommend decisions: in practice, they help solve a substantial share of the problems that today's engineering is concerned with.
Drawing on more than 15 years of experience in advanced engineering and R&D, this talk highlights some examples of the use of statistical methods to implement innovative solutions to industrial problems, with a particular focus on the energy industry.
Without any claim to exhaustiveness, the examples cover a wide variety of energy-related activities (reliability of components and structures, natural hazards, energy efficiency, smart and sustainable cities ...), and highlight the cross-disciplinary and crucial role played by statistical methods.
Strategies for Mixture-Design Space Augmentation
Authors: Pat Whitcomb (Stat-Ease, Inc.), Martin Bezener (Stat-Ease, Inc.)
Primary area of focus / application: Design and analysis of experiments
Secondary area of focus / application: Design and analysis of experiments
Keywords: Design of Experiments, Mixture design, Augmentation, RSM
Augmentation and sequential strategies for response surface methodology (RSM) have typically been studied in the context of model order or sample size. However, strategies for expanding the region of experimentation are sparse, especially for mixture designs. In this talk, we briefly describe the problem at hand for RSM in general. Then we pay particular attention to mixture experiments, where expanding component ranges is more complicated because of the components' interdependence. We propose several strategies, including an optimal DOE space-augmentation algorithm. Several examples will be given.
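As a toy illustration of design augmentation (not the authors' algorithm): one simple criterion-based approach is to greedily append, from a candidate set in the expanded region, the runs that most increase det(X'X) of the model matrix. The candidate points, the Scheffé linear model, and the greedy rule below are all assumptions for the sketch.

```python
import numpy as np

def greedy_d_augment(X, candidates, k):
    """Greedily append the k candidate runs that most increase
    det(X'X) of the model matrix -- a toy D-optimal stand-in for the
    space-augmentation algorithm mentioned in the abstract."""
    X = X.copy()
    cand = list(candidates)
    for _ in range(k):
        dets = [np.linalg.det(np.vstack([X, c]).T @ np.vstack([X, c]))
                for c in cand]
        best = int(np.argmax(dets))       # candidate with largest determinant
        X = np.vstack([X, cand.pop(best)])
    return X

# Scheffé linear model in 3 components: model matrix = the proportions
base = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0],
                 [1/3, 1/3, 1/3]])
# hypothetical candidate runs from an expanded region (edge midpoints)
cand = np.array([[0.5, 0.5, 0.0],
                 [0.5, 0.0, 0.5],
                 [0.0, 0.5, 0.5]])
aug = greedy_d_augment(base, cand, 2)   # design grows from 4 to 6 runs
```

For mixtures, any added candidate must still satisfy the simplex constraint (proportions summing to one), which is why the candidate set, not the criterion, encodes the expanded component ranges.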
Updating Monitoring Networks in Overlay Data Modelling
Authors: Riccardo Borgoni (University of Milano-Bicocca)
Primary area of focus / application: Other: Monitoring and optimization of semiconductor processes
Keywords: Microelectronics, Overlay, Lithography, Optimal design, Spatial network reduction, Metaheuristic optimization
Submitted at 20 Feb 2017 15:36 by Riccardo Borgoni
Accepted
In this paper, we propose a max-ent and a max-min strategy, based on the tree spanning the sampling points, to select an optimal subsample of a given size from the network. The objective function measures how well the network spreads to cover the wafer area. Inspecting all possible configurations of n points out of the N points originally present in the network is practically unfeasible when the number of measurement locations is even moderately high; hence, metaheuristic optimization is employed to tackle this problem.
We compared the results obtained using the reduced network with those obtained using the full sample, in terms of the precision of both the predicted overlay values and the estimates of the regression coefficients of the calibration model. We found that, even when the sample size is halved, the performance of the reduced network remains substantially unchanged.
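A minimal sketch of the max-min idea, using a greedy farthest-point heuristic rather than the authors' spanning-tree-based metaheuristic: repeatedly add the site whose distance to the nearest already-chosen site is largest, so the reduced network stays spread over the wafer. The wafer coordinates below are simulated for illustration.

```python
import numpy as np

def maxmin_subset(points, n):
    """Greedy farthest-point selection: pick n of N sites so that the
    minimum distance between chosen sites is (heuristically) large.
    A simple stand-in for a max-min criterion, not the paper's method."""
    # start from the site farthest from the centroid
    centroid = points.mean(axis=0)
    first = int(np.argmax(np.linalg.norm(points - centroid, axis=1)))
    chosen = [first]
    # distance from every site to its nearest already-chosen site
    dist = np.linalg.norm(points - points[first], axis=1)
    for _ in range(n - 1):
        nxt = int(np.argmax(dist))        # max-min rule: farthest site next
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return np.array(chosen)

# toy wafer map: 100 random measurement sites, keep 20 (sample halving
# or more, as studied in the paper)
pts = np.random.default_rng(1).uniform(-150.0, 150.0, size=(100, 2))
idx = maxmin_subset(pts, 20)
```

The greedy rule is fast but only locally optimal, which is exactly why metaheuristic search (as used by the authors) pays off when all C(N, n) configurations cannot be enumerated.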
From Life Tables to Deep Learning: Where Is the Insurance Business Going? What Are the Opportunities and Challenges for Statisticians?
Authors: Yves Grize (Baloise Insurance & ZHAW)
Primary area of focus / application: Other: ISBIS Session on Statistics in Economics and Business
Keywords: Statistics in insurance, Digitalization, Deep Learning, Big Data, Actuarial sciences, Predictive Analytics, Statistics in industry
Deregulation brought many exciting opportunities to the field of statistics, as it became clear that statistical know-how (later called predictive analytics) was key to profitability in the new competitive environment. The second revolution is now coming from the technological side: digitalization, big data, and cognitive computing are some of the buzzwords here. This time, however, the disruptions are likely to affect in depth how insurers deal with data and risks. Some even predict that this revolution may change the entire business model of insurance companies.
I will briefly review the potential impact of this second revolution on the insurance business in general and, in particular, try to see what challenges and opportunities may result for the actuaries and statisticians of the 21st century.
Four Algorithms to Construct a Sparse Kriging Kernel for Dimensionality Reduction
Authors: Mélina Ribaud (Ecole Centrale de Lyon)
Primary area of focus / application: Modelling
Secondary area of focus / application: Design and analysis of experiments
Keywords: Kriging, High dimension, Covariance kernel, Isotropic, Anisotropic, Algorithms
Submitted at 21 Feb 2017 10:26 by Mélina Ribaud
Accepted
In high dimension, an anisotropic kernel, with one range parameter per variable, becomes costly to estimate. An isotropic kernel parameterized by only one range parameter could be an alternative, but it is too restrictive. The idea is to construct a kernel between these two extremal choices.
We propose a data-driven construction of a covariance kernel adapted to high dimension. This kernel is a tensor product of isotropic kernels built on well-chosen subgroups of variables. Four algorithms are implemented to find the number of groups and their composition. They start with an isotropic kernel and finish with an isotropic-by-group kernel. At each stage, different models are compared and the best is chosen according to a BIC criterion.
The first algorithm explores only in the forward direction, whereas the other three move in both directions. In the third and fourth algorithms, a clustering step is added at different stages to improve flexibility.
The algorithms are compared on a simulated case in dimension eight. The fourth algorithm is the most efficient with respect to set size, prediction quality, and running time. The isotropic-by-group kernel is benchmarked against classical kernels on the Sobol function. The study shows that the isotropic-by-group kernel improves prediction power.
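The isotropic-by-group structure can be sketched as follows: a squared-exponential kernel with one range parameter per subgroup of variables, multiplied across subgroups. The group partition and range values below are hypothetical; in the talk they are selected by the four algorithms under the BIC criterion.

```python
import numpy as np

def iso_by_group_kernel(X, Y, groups, thetas):
    """Tensor product of isotropic squared-exponential kernels, one
    range parameter theta per subgroup of variables -- a sketch of the
    isotropic-by-group idea (groups and thetas assumed given here)."""
    K = np.ones((X.shape[0], Y.shape[0]))
    for g, theta in zip(groups, thetas):
        Xg, Yg = X[:, g], Y[:, g]
        # squared Euclidean distance restricted to the subgroup's variables
        d2 = ((Xg[:, None, :] - Yg[None, :, :]) ** 2).sum(axis=-1)
        K *= np.exp(-0.5 * d2 / theta ** 2)
    return K

# dimension eight split into two hypothetical groups: two range
# parameters instead of eight (anisotropic) or one (isotropic)
groups = [[0, 1, 2], [3, 4, 5, 6, 7]]
thetas = [1.0, 2.0]
X = np.random.default_rng(0).normal(size=(5, 8))
K = iso_by_group_kernel(X, X, groups, thetas)
```

With g groups the kernel has g range parameters, interpolating between the isotropic (g = 1) and fully anisotropic (g = d) extremes described above.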
Simulating High Correlations Using Ensemble Copula Coupling Method
Authors: Jérôme Collet (EDF R&D)
Primary area of focus / application: Other: Modeling, forecasting and risk evaluation of wind energy production
Keywords: Ensemble copula coupling, Density forecast, Risk management, Correlation