ENBIS-18 in Nancy

2 – 6 September 2018; École des Mines, Nancy (France). Abstract submission: 20 December 2017 – 4 June 2018

My abstracts


The following abstracts have been accepted for this event:

  • Use of Standalone Shiny Application for Better Results Reports

    Authors: Thibaut Fabacher (Hôpital Universitaire de Strasbourg), Erik-André Sauleau (Laboratoire iCUBE, Université de Strasbourg), Nicolas Meyer (Laboratoire iCUBE, Université de Strasbourg)
    Primary area of focus / application: Consulting
    Secondary area of focus / application: Education & Thinking
    Keywords: R, Data visualization, Shiny, Communication
    Submitted at 18-May-2018 17:05 by Thibaut Fabacher
    Accepted
    3-Sep-2018 14:20 Use of Standalone Shiny Application for Better Results Reports
    In the context of big data, efficient visualization is essential to easily grasp the results of a statistical analysis. In some cases, extensive and sometimes confusing reports can be replaced, for example, by a Shiny application built with the R software. However, the main drawback of a Shiny application is its lack of portability. To solve this problem, a standalone Shiny application can be used. This application bundles the R-Portable software, with all necessary packages, together with the Shiny script. To launch the application, a “.bat” file can be used to run the R script with the R-Portable software (a sketch of such a folder layout and launcher is given at the end of this abstract). All these files can be combined in a single folder that can be copied to a USB stick or shared in the cloud.

    This work was initially developed to help medical students perform their statistical analyses. The team of the GMRC (Groupe Méthode en Recherche Clinique) of Strasbourg University Hospital created a point-and-click GUI for basic statistical analysis, based on the Shiny package. To distribute this tool, the standalone application was inspired by Lee Pang’s R-bloggers post (https://www.r-bloggers.com/deploying-desktop-apps-with-r/).

    The major difficulties encountered concerned operating-system compatibility. Currently, this solution is available for 32- and 64-bit Windows, but on the Mac operating system the R software must be installed separately.
    This visualization tool received positive feedback from users, thanks to the efficient way it manipulates and represents all sorts of data.

    This tool therefore contributes to interactive, concise, clear and customizable reports.
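
    As a minimal sketch of the deployment described above (file and folder names are hypothetical, and the exact layout of an R-Portable installation may differ):

        MyShinyTool/
            R-Portable/        portable R installation with the required packages
            shiny/app.R        the Shiny application script
            run.bat            Windows launcher

        run.bat might contain a single line such as:

        R-Portable\bin\Rscript.exe -e "shiny::runApp('shiny', launch.browser = TRUE)"

    Double-clicking run.bat then starts the bundled R and opens the application in the default browser, so no local R installation is needed on the target machine.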
  • Functional PCA for Marine Submersion Model

    Authors: Tran Vi-vi Elodie Perrin (Ecole des Mines de Saint-Etienne)
    Primary area of focus / application: Modelling
    Keywords: Marine submersion, Meta-models, Functional principal components analysis, Functional data, Spatial map
    Submitted at 18-May-2018 18:16 by Tran Vi-vi Elodie PERRIN
    Accepted (view paper)
    3-Sep-2018 14:40 Functional PCA for Marine Submersion Model
    This research is motivated by the estimation of the cost caused by marine submersion events. We aim at exploiting two computer codes provided by CCR, the French state reinsurance company, and by the BRGM (Bureau de Recherches Géologiques et Minières). Their main characteristics are the following:

    - Marine submersion model:
      - Inputs: sea forcing (waves, water level, ...), landforms (buildings, drops, ...), roughness (bitumen, soil, ...), flooding propagation (along roads, dams, breaches, ...), ...;
      - Output: spatial map of the maximum water level reached during the event.

    - Damage model:
      - Inputs: output map of the marine submersion model and vulnerability data (geo-tracking of properties, building characteristics, ...);
      - Output: global cost of the geographical area being studied.

    The BRGM model represents the complexity of the marine submersion phenomenon most faithfully, at the expense of computation time (30 minutes for one simulation). On the other hand, the CCR model is faster (5 minutes for one simulation) but overestimates the flooded area. These issues make a direct treatment difficult. Meta-model techniques, such as Gaussian process regression, have been developed in order to address computation-time issues: an inexpensive mathematical function predicts the computer-code output from a small number of simulations. Here we focus on one difficulty: the output of the marine submersion model is a spatial map.

    In this setting, a common technique to reduce dimensionality is principal component analysis (PCA). Prediction at new inputs is then made on the eigenvector basis with meta-models (a notational sketch is given at the end of this abstract). However, the output map of the marine submersion model has over 40,000 pixels, which makes this standard PCA approach intractable. In addition, PCA does not take into account the functional nature of the data and their characteristic features, such as smoothness. Alternatively, functional principal component analysis (FPCA), which is based on a functional basis decomposition, does not suffer from these drawbacks. FPCA seems less commonly used for spatial data than for time series.

    In this talk, we compare PCA and functional PCA for spatial outputs. We illustrate the comparison on toy examples as well as on the marine submersion application.
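
    As a notational sketch of the dimension-reduction idea discussed above (the notation is ours, not the authors'): writing the map output as y(x, s), where x denotes the code inputs and s a pixel location, one uses a truncated expansion

        y(x, s) \approx \mu(s) + \sum_{k=1}^{K} a_k(x) \, \phi_k(s),

    where the \phi_k are either PCA eigenvectors or smooth functional basis elements (FPCA), and each coefficient a_k(x) is predicted by its own meta-model, for example a Gaussian process. With K much smaller than the 40,000 pixels, predicting a full map reduces to predicting K scalar coefficients.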
  • Machine Learning and Computer Vision for Face Recognition in the Wild

    Authors: Julien Trombini (Two-i)
    Primary area of focus / application: Business
    Secondary area of focus / application: Modelling
    Keywords: Resnet, CNN, LBP, Computer vision, Face recognition
    Submitted at 18-May-2018 18:17 by Julien Trombini
    Accepted
    4-Sep-2018 14:50 Machine Learning and Computer Vision for Face Recognition in the Wild
    The focus of our studies is to apply computer vision, and particularly face recognition, in a non-controlled environment. By this we mean a setting where distance, light, clarity, number of analysed people, angle of the faces, blurriness of the image, … are uncertain and unstable.

    We constantly analyse the trade-off between accuracy of the end results, speed and statistical relevance. Choosing the right algorithmic structure to increase the performance of the face recognition and of the other treatments that follow is challenging.

    Before reaching the final stage of emotion recognition, we must process the picture through several algorithms, which will impact speed and accuracy. The question of the number of algorithms / micro-services arises: is it better to use a long procedural method or to divide the entire process into as many individual tasks as possible?
    To extract the facial emotion from a picture, it is necessary to introduce the concept of face detection, which is challenged by the dilemma of processing thousands of faces per second while limiting the number of false positives. Then follow the quality filters, which are costly in terms of time and/or computing power but can validate or reject a picture for the next stage (for example: is the face too small, is the picture blurry, is the lighting good enough, …). Face selection is the last stage of the face recognition part. It can be seen as a way to present the data efficiently to the final stage, by computing the angle of each face and sorting faces on a scale from 0 to 1, 0 being a profile view and 1 a frontal view.

    Finally, the emotion analysis algorithm is highly sensitive to the angle of the face and to the size of the picture. One of the dilemmas is to choose between the number of faces processed and the accuracy of the emotion. To remain time-efficient and accurate, a model using two different kinds of algorithms can be used: an SVM for the simple cases and a ResNet-34 for the more complex ones (a schematic sketch of this staged processing is given at the end of this abstract).

    I will present two models for face recognition, an LBP cascade and a ResNet, highlighting the main differences in terms of picture processing, accuracy and speed. Finally, I will suggest the idea of using a cascade of CNNs to process all the described tasks efficiently.
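
    As a schematic sketch of the staged processing described above (in Python; the detector, quality filter, frontal-score and emotion models are passed in as placeholders, and treating the "simple" cases as the near-frontal ones is our reading of the abstract, not the authors' code):

        from typing import Callable, Iterable, List, Tuple

        def analyse_faces(
            faces: Iterable,                            # face crops from the detection stage
            quality_ok: Callable[[object], bool],       # size / blur / lighting filter
            frontal_score: Callable[[object], float],   # 0 = profile view, 1 = frontal view
            svm_emotion: Callable[[object], str],       # fast model for the simple cases
            resnet34_emotion: Callable[[object], str],  # heavier model for the complex cases
            frontal_cutoff: float = 0.7,                # hypothetical routing threshold
        ) -> List[Tuple[object, str]]:
            # Quality filters: discard faces that would fail the later stages anyway.
            scored = [(frontal_score(f), f) for f in faces if quality_ok(f)]
            # Face selection: process the most frontal faces first.
            scored.sort(key=lambda pair: pair[0], reverse=True)
            results = []
            for score, face in scored:
                # Route each face to the cheap or the heavy emotion model.
                model = svm_emotion if score >= frontal_cutoff else resnet34_emotion
                results.append((face, model(face)))
            return results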
  • Monitoring Correlated Multivariate Processes in Stages

    Authors: Paulo Maranhão (Military Institute of Engineering), Ana Karoline Borges Carneiro (Military Institute of Engineering), Josiane Silva de Jesus (Military Institute of Engineering)
    Primary area of focus / application: Process
    Secondary area of focus / application: Process
    Keywords: Monitoring, Transmission, Variation, Multivariate, Processes, Stages
    Submitted at 18-May-2018 22:48 by Paulo Maranhão
    Accepted
    5-Sep-2018 11:30 Monitoring Correlated Multivariate Processes in Stages
    Currently, a significant share of production processes occurs in stages. Thus, modeling and monitoring the transmission of variation in multi-stage processes is a key strategy to reduce production variation and therefore reduce process costs. Much research has been devoted to the study of models that can capture how variability is transmitted from an initial stage to a subsequent stage, both for univariate and for multivariate processes. However, few studies explore the issue of monitoring the transmission of variability, especially in multivariate processes. The main idea of this work is therefore to analyze which control schemes would be most effective to monitor the propagation of variance in correlated multivariate processes in stages.
  • How to Select the Uncertainty of Reproducibility from Inter-Laboratory Tests for the Uncertainty Declaration of Building Acoustic Measurements

    Authors: Chiara Scrosati (CNR-ITC), Antonio Pievatolo (CNR-IMATI), Massimo Garai (Department of Industrial Engineering, University of Bologna)
    Primary area of focus / application: Metrology & measurement systems analysis
    Keywords: Reproducibility standard deviation, ISO 12999-1, Coverage, Bayesian predictive intervals
    Submitted at 20-May-2018 12:19 by Antonio Pievatolo
    Accepted (view paper)
    4-Sep-2018 14:30 How to Select the Uncertainty of Reproducibility from Inter-Laboratory Tests for the Uncertainty Declaration of Building Acoustic Measurements
    This work addresses the standard deviation of reproducibility obtained from multiple Inter-Laboratory Tests (ILTs), which has been standardized in ISO 12999-1. This standard deviation is intended to be used for the declaration of uncertainty in test reports of building acoustics measurements. Therefore it is necessary that the declared uncertainty be representative of all the specimens, or products, that could be tested. This work demonstrates that the new criterion for specifying the standard deviation, introduced for the first time in ISO 12999-1, leads to overcoverage. Moreover, two other methods are presented, the former based on a Bayesian analysis and the latter on random variance and fixed mean, to demonstrate that the reproducibility standard deviation, computed as the square root of the mean value of the variances of all the ILTs included in ISO 12999-1, is the natural choice for the declaration of uncertainty.
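
    Written as a formula (the notation is ours, not that of ISO 12999-1), the value advocated in the abstract above is: if the i-th of m ILTs yields a reproducibility variance s_{R,i}^2, declare

        s_R = \sqrt{ \frac{1}{m} \sum_{i=1}^{m} s_{R,i}^2 },

    i.e. the square root of the mean of the ILT reproducibility variances.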
  • Online NMF with Minimum Volume Constraint

    Authors: Ludivine Nus (Université de Lorraine, CNRS), Sebastian Miron (Université de Lorraine, CNRS), David Brie (Université de Lorraine, CNRS)
    Primary area of focus / application: Other:
    Keywords: Non negative matrix factorisation, Hyperspectral image unmixing, Online algorithm, Hyperparameter estimation
    Submitted at 23-May-2018 22:41 by David Brie
    Accepted
    This work aims at developing real-time hyperspectral image unmixing methods. These methods are required in industrial applications for controlling and sorting input materials. One of the most employed technical solutions makes use of a pushbroom imager; the hyperspectral data cube is acquired slice by slice, sequentially in time, by moving the objects directly on the conveyor belt. A relevant method dedicated to this type of application is on-line Non-negative Matrix Factorization (NMF), which is an adaptive version of the classical NMF. For a non-negative matrix X, NMF consists in finding two non-negative matrices S and A such that X = SA. The goal of on-line NMF methods is to sequentially update, in real time, the endmembers (S) and the abundances (A) for each newly acquired sample. In general, NMF suffers from non-uniqueness of the solution. In order to reduce the set of admissible solutions, we integrate a minimum volume simplex (MVS) constraint, resulting in the on-line MVS-NMF method (a sketch of the corresponding objective is given at the end of this abstract). However, the effectiveness of on-line MVS-NMF depends on the optimal choice of the strength of the minimum volume simplex term. To address this problem, we formulate it as a bi-objective optimization problem, resulting in a linear plot (response curve) of the data-fitting cost versus the regularization cost. In order to estimate the optimal value of the MVS hyperparameter, we propose to use the Minimum Distance Criterion (MDC). Experiments on a simulated image show that our method is well suited to estimating the optimal value of the MVS hyperparameter.
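
    As a minimal sketch of the regularized criterion discussed above (the notation and the generic volume penalty are ours; the abstract does not specify the exact form used by the authors), each update approximately solves

        \min_{S \ge 0, \, A \ge 0} \; \| X - S A \|_F^2 + \lambda \, \mathrm{vol}(S),

    where vol(S) measures the volume of the simplex spanned by the columns of S and \lambda is the MVS hyperparameter. Sweeping \lambda traces the response curve of data-fitting cost versus regularization cost, and the Minimum Distance Criterion then selects the \lambda whose point on that curve lies closest to a reference (ideal) point.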