ENBIS-18 in Nancy

2–6 September 2018; École des Mines, Nancy (France). Abstract submission: 20 December 2017 – 4 June 2018

My abstracts


The following abstracts have been accepted for this event:

  • R2R Control in Semiconductor Manufacturing Based on Gaussian Bayesian Network

    Authors: Wei-Ting Yang (École des Mines de Saint-Étienne), Jakey Blue (École des Mines de Saint-Étienne), Agnès Roussy (École des Mines de Saint-Étienne), Marco S. Reis (University of Coimbra), Jacques Pinaton (STMicroelectronics)
    Primary area of focus / application: Process
    Secondary area of focus / application: Mining
    Keywords: Run-to-Run (R2R), Fault Detection and Classification (FDC), Wafer metrology, Equipment condition, Gaussian Bayesian Network (GBN)
    Submitted at 23-Apr-2018 11:02 by Wei-Ting Yang
    Accepted
    Run-to-Run (R2R) control has become a common process-regulation approach in the semiconductor industry. Conventionally, key process parameters are regulated with respect to the measured metrology data. However, wafer quality can also be affected by complex factors related to equipment condition. In this study, the equipment condition is explicitly modeled and integrated into the core of an R2R controller, in order to accommodate this critical aspect of the system when deriving the control law and to reduce process variability more effectively.

    For this purpose, we employ a Gaussian Bayesian Network (GBN) to analyze the implicit relationships not only between the control factors and the process parameters, but also between the process parameters and the metrology. With a GBN, the cause-and-effect structure among these variables can be expressed explicitly by means of a connected graph; consequently, the variation of the process parameters caused by the control factors can be estimated, and the corresponding predicted metrology obtained simultaneously. Based on this model, we are able to approach process control from a more global perspective.

    The effectiveness of this approach is demonstrated and validated on a case study carried out in collaboration with our industrial partner.
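
    As a hedged illustration (not the authors' implementation), the sketch below assumes a purely linear-Gaussian network with the chain structure control factor -> process parameter -> metrology, fitted on synthetic data and then inverted to suggest the next control setting; all names and numbers are illustrative assumptions.

        # Minimal linear-Gaussian Bayesian network sketch for R2R control
        # (synthetic data; structure and gains are assumptions, not the paper's model).
        import numpy as np

        rng = np.random.default_rng(0)
        n = 500
        u = rng.normal(0.0, 1.0, n)               # control factor
        p = 0.8 * u + rng.normal(0.0, 0.2, n)     # process parameter (equipment-related)
        y = 1.5 * p + rng.normal(0.0, 0.1, n)     # metrology

        # Fit the two linear-Gaussian conditionals of the chain u -> p -> y.
        b_pu = np.polyfit(u, p, 1)                # p | u  (slope, intercept)
        b_yp = np.polyfit(p, y, 1)                # y | p

        def predict_metrology(u_new):
            """Propagate a candidate control setting through the network."""
            return np.polyval(b_yp, np.polyval(b_pu, u_new))

        # R2R step: pick the control setting whose predicted metrology hits the target.
        target = 0.5
        gain = b_pu[0] * b_yp[0]                  # overall u -> y sensitivity
        u_next = (target - predict_metrology(0.0)) / gain
        print(f"next control setting: {u_next:.3f}, "
              f"predicted metrology: {predict_metrology(u_next):.3f}")
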
  • Direct Policy Search: An Introduction

    Authors: Jérôme Collet (EDF R&D)
    Primary area of focus / application: Process
    Secondary area of focus / application: Modelling
    Keywords: Stochastic optimization, Direct policy search, Linear decision rules, Binary decision trees
    Submitted at 24-Apr-2018 16:49 by Jérôme Collet
    Accepted
    4-Sep-2018 09:20 Direct Policy Search: An Introduction
    Usually, stochastic optimization of a system consists of the following steps: modelling the stochastic process driving the system, estimating its parameters, and searching for the policy. It is also possible to bypass these steps: a policy is a function of past variables that optimizes a given criterion, so if we assume the policy belongs to an adequate set of parametric functions, one "just" has to find the optimal parameters to choose a policy.
    This bypassing can fulfil the following goals: reducing the computational burden (in memory use or processing time), optimizing in a multi-objective setting (since some stochastic optimization methods are inherently single-objective), or avoiding the modelling of a poorly known stochastic process.

    EDF is now facing new energy management problems involving small energy storage systems (home energy storage, electric vehicles) with poorly known demand processes, so our goal is to bypass their modelling.

    Research on "Direct Policy Search" has been increasingly active since the 2000s, so we propose a survey of recent papers. Then, we focus on two policy sets: Linear Decision Rules and Binary Decision Trees. We show their use in detail on small examples, both theoretical and real, and compare the results obtained.
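
    As a hedged illustration of direct policy search with a linear decision rule (a toy example, not the talk's case study), the sketch below tunes the two parameters of a rule mapping the last observed demand to a storage decision by plain random search; all dynamics, bounds, and costs are invented for the example.

        # Toy direct policy search: linear decision rule + random parameter search
        # (the storage model, bounds, and cost are illustrative assumptions).
        import numpy as np

        rng = np.random.default_rng(1)
        T, S = 48, 200                                  # horizon, demand scenarios
        demand = np.clip(1.0 + 0.3 * rng.standard_normal((S, T)), 0.0, None)

        def cost(theta):
            """Average cost of the rule u_t = theta[0] + theta[1] * demand_{t-1}."""
            level = np.full(S, 0.5)                     # storage state, capacity = 1
            total = np.zeros(S)
            for t in range(1, T):
                u = np.clip(theta[0] + theta[1] * demand[:, t - 1], -0.2, 0.2)
                u = np.clip(u, -level, 1.0 - level)     # respect storage bounds
                level = level + u
                total += (demand[:, t] + u) ** 2        # convex proxy for energy cost
            return total.mean()

        best_theta, best_cost = None, np.inf
        for _ in range(2000):                           # direct search over the policy set
            theta = rng.uniform(-0.5, 0.5, size=2)
            c = cost(theta)
            if c < best_cost:
                best_theta, best_cost = theta, c
        print("best linear rule:", best_theta, "average cost:", round(best_cost, 3))
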
  • Stepwise Multiblock Latent Variable Regression

    Authors: Maria Campos (University of Coimbra, Department of Chemical Engineering), Marco Reis (University of Coimbra, Department of Chemical Engineering)
    Primary area of focus / application: Modelling
    Secondary area of focus / application: Process
    Keywords: Regression methods, Multiblock methods, SO-PLS, Stepwise SO-PLS, Big data
    Submitted at 24-Apr-2018 18:41 by Maria Campos
    Accepted
    4-Sep-2018 10:30 Stepwise Multiblock Latent Variable Regression
    Methods able to handle multiple blocks of data are attracting more and more practitioners, as they can model the contribution of the different blocks while retaining their natural structure. Among these multiblock methods, SO-PLS (Sequential and Orthogonalized PLS) stands out for its interesting prediction and interpretation capabilities, associated with desirable modelling features such as independence from the relative scaling of the different data blocks and the flexibility to handle blocks with different dimensionalities and pseudo-ranks. However, this method becomes cumbersome when many blocks are available, as the analysis depends critically on the order in which they are incorporated into the model. When no a priori knowledge is available for establishing the order of the blocks, or when no order is preferred a priori, SO-PLS faces the problem of having to find the most adequate one through an exhaustive search across all permutations. Furthermore, SO-PLS (like any other current multiblock method) does not contemplate the possibility of selecting or excluding non-relevant blocks of variables. In this article, we present an efficient approach for establishing the optimal order of the blocks in SO-PLS, with the additional capability of block selection/exclusion: stepwise SO-PLS. It is computationally much faster and leads to an optimal, or very close to optimal, solution regarding the selected blocks and their order. A comparison between stepwise SO-PLS and current multiblock approaches is presented based on a real case study.
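
    As a hedged sketch of the stepwise idea (ordinary least squares stands in for the PLS inner step, and all data are synthetic), the code below greedily adds the block that most improves hold-out R², fixing the block order and excluding blocks that do not help.

        # Greedy stepwise block selection, an illustrative stand-in for stepwise SO-PLS.
        import numpy as np

        rng = np.random.default_rng(2)
        n = 200
        blocks = {name: rng.standard_normal((n, 5)) for name in "ABCD"}
        y = blocks["B"] @ rng.standard_normal(5) + 0.1 * rng.standard_normal(n)

        train, test = np.arange(150), np.arange(150, 200)

        def r2(selected):
            """Hold-out R^2 of least squares on the concatenated selected blocks."""
            X = np.hstack([blocks[b] for b in selected])
            coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
            resid = y[test] - X[test] @ coef
            return 1.0 - np.mean(resid ** 2) / y[test].var()

        order, remaining, current = [], set(blocks), -np.inf
        while remaining:
            best = max(remaining, key=lambda b: r2(order + [b]))
            score = r2(order + [best])
            if score <= current + 1e-3:     # no real gain: exclude remaining blocks
                break
            order.append(best); remaining.remove(best); current = score
        print("selected block order:", order, "hold-out R2:", round(current, 3))
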
  • Metrology Modelling Based on Process Parameters in Semiconductor Manufacturing

    Authors: Aabir Chouichi (École des Mines de Saint-Étienne, Campus Georges Charpak), Jakey Blue (École des Mines de Saint-Étienne, Campus Georges Charpak), Claude Yugma (École des Mines de Saint-Étienne, Campus Georges Charpak), François Pasqualini (STMicroelectronics)
    Primary area of focus / application: Mining
    Secondary area of focus / application: Metrology & measurement systems analysis
    Keywords: Sensor signal analytics, Fault detection and classification, Advanced process control, Virtual metrology
    Submitted at 25-Apr-2018 09:55 by Aabir Chouichi
    Accepted
    4-Sep-2018 12:00 Metrology Modelling Based on Process Parameters in Semiconductor Manufacturing
    Due to the rapid, continuous evolution of microchip applications, semiconductor companies face ever more competitive market demands. In this regard, improving overall chip performance to meet customer specifications has triggered Integrated Circuit (IC) makers to collect as much data as possible throughout the whole production process. Among the data collected in IC fabrication, two types are of particular interest for optimal process control. First, sensors embedded in the machines send out nearly real-time signals during wafer processing, enabling timely actions on the machines. Second, the metrology data retrieved by measuring quality parameters over the wafers help to characterize product performance as well as to validate process soundness.

    Apart from monitoring equipment/process faults, machine signals and wafer metrology can be used together to quantify the relationship between the recipe set-up and the metrology measurements. This is especially important given that leading-edge manufacturing technology demands that measuring time be reduced as much as possible. Consequently, our research aims at modeling the link between machine parameters and metrology data. The predicted metrology can then stand in for measured values in the daily routines of process control. The obtained results are further validated in the industrial environment for potential implementation in real practice.
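
    As a hedged illustration of the basic shape of such a model (a plain ridge regression on synthetic data; feature names and sizes are assumptions, not the industrial setup), see the sketch below.

        # Virtual-metrology-style regression: machine-signal features -> metrology.
        import numpy as np

        rng = np.random.default_rng(3)
        n_wafers, n_features = 300, 12
        X = rng.standard_normal((n_wafers, n_features))      # per-wafer signal features
        y = X @ rng.standard_normal(n_features) + 0.2 * rng.standard_normal(n_wafers)

        # Ridge fit: solve (X'X + lambda I) w = X'y.
        lam = 1.0
        w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

        # The prediction for a new wafer can stand in for a physical measurement.
        x_new = rng.standard_normal(n_features)
        print("predicted metrology:", float(x_new @ w))
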
  • Investigation of Multi-Run Independent Component Analysis on Simulated Semiconductor Data

    Authors: Anja Zernig (KAI - Kompetenzzentrum Automobil- und Industrieelektronik GmbH)
    Primary area of focus / application: Reliability
    Secondary area of focus / application: Quality
    Keywords: Semiconductor industry, Independent Component Analysis, Multi-run ICA, Reliability
    Submitted at 25-Apr-2018 10:29 by Anja Zernig
    Accepted
    4-Sep-2018 11:40 Investigation of Multi-Run Independent Component Analysis on Simulated Semiconductor Data
    In the semiconductor industry, hundreds of process steps are needed to manufacture safe and reliable devices, commonly known as chips. After the Frontend process, each device on the wafer can be measured. So-called wafermaps are then drawn, where the measurement value of each device is plotted at its position on the wafer, given by x and y coordinates. Some wafermaps show conspicuous features that humans recognize as patterns on the wafer, e.g. a gradient from one side of the wafer to the other. Some of these patterns are typical signatures of process steps, which become visible in certain measurements. Other patterns indicate fluctuations in production, which need to be controlled.

    Ideally, the pattern of one Frontend process step is directly visible in one wafermap of the measurement data. Often, however, patterns are distributed over various measurements and hence only become visible when these measurements are evaluated together. To reveal latent patterns in multivariate semiconductor data, Independent Component Analysis (ICA) is proposed. ICA is applied to a selected set of measurement data; after the transformation, a set of sources is obtained that are maximally independent of each other. To compute the ICA transformation, the fastICA algorithm invented and implemented by A. Hyvärinen [1] is used. As with any stochastic optimization procedure, the outcome of ICA depends on the starting condition. Since the "best" starting condition cannot be determined in advance, a remedy is to repeat the algorithm, an approach called multi-run ICA [2].

    Multi-run ICA consists of the following calculation steps. The fastICA algorithm is run several times with different starting conditions, which leads to different source matrices, from which the most reliable sources must then be identified. To this end, the individual matrices are combined into one large matrix. Then, hierarchical clustering is applied, together with a distance measure and a fixed cluster number equal to the number of sources per run. With this method, the best combination of sources can be identified and used for further analysis (a sketch of these steps follows the references below).

    For the evaluation of multi-run ICA, simulated semiconductor wafermap data have been investigated. Since the simulated wafermaps are constructed independently of each other and are linearly mixed, only minor influences of varying starting conditions can be observed. In general, this does not hold for real semiconductor measurement data, because real data never exactly follow the ICA model. For this reason, applying multi-run ICA can prevent one from being fooled by a suboptimal solution.

    [1] A. Hyvärinen, J. Karhunen and E. Oja, Independent Component Analysis. John Wiley & Sons, 2001.
    [2] J. Himberg and A. Hyvärinen, "Icasso: Software for Investigating the Reliability of ICA Estimates by Clustering and Visualization," Proc. 2003 IEEE XIII Workshop on Neural Networks for Signal Processing, 2003, pp. 259-268.
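
    As a hedged sketch of these calculation steps (loosely following the Icasso recipe [2]; data and sizes are synthetic assumptions), the code below pools sources from several randomly initialised fastICA runs, clusters them by absolute correlation, and keeps one representative per cluster.

        # Multi-run ICA sketch: repeated fastICA + hierarchical clustering of sources.
        import numpy as np
        from sklearn.decomposition import FastICA
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import squareform

        rng = np.random.default_rng(4)
        n_comp, n_runs = 3, 10
        S_true = rng.laplace(size=(1000, n_comp))       # stand-in for latent patterns
        X = S_true @ rng.standard_normal((n_comp, 8))   # mixed "measurements"

        pooled = []                                     # sources from all runs
        for seed in range(n_runs):
            ica = FastICA(n_components=n_comp, random_state=seed, max_iter=500)
            pooled.append(ica.fit_transform(X))
        pooled = np.hstack(pooled)                      # (1000, n_comp * n_runs)

        # Cluster the pooled sources by 1 - |correlation| into n_comp clusters.
        sim = np.abs(np.corrcoef(pooled.T))
        labels = fcluster(linkage(squareform(1.0 - sim, checks=False), method="average"),
                          t=n_comp, criterion="maxclust")

        # Keep the most central source (centrotype) of each cluster.
        for k in range(1, n_comp + 1):
            idx = np.where(labels == k)[0]
            centro = idx[np.argmax(sim[np.ix_(idx, idx)].sum(axis=1))]
            print(f"cluster {k}: {len(idx)} sources, representative column {centro}")
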
  • Parsimonious Batch Data Analysis

    Authors: Marco P. Seabra dos Reis (Department of Chemical Engineering, University of Coimbra)
    Primary area of focus / application: Modelling
    Secondary area of focus / application: Process
    Keywords: Batch processes, Process monitoring, Quality prediction, Feature oriented analysis, Multiresolution modeling
    Submitted at 25-Apr-2018 12:45 by Marco P. Seabra dos Reis
    Accepted
    3-Sep-2018 15:10 Parsimonious Batch Data Analysis
    Batch processes play an important role in modern industrial sectors such as the chemical, pharmaceutical, and semiconductor industries, among others. Batch quality is typically assessed at the end of each batch run, through complex analytical procedures whose outcomes only become available with considerable delay. Both the measurements taken during batch operation and the batch-end parameters are routinely stored in large databases, providing the basis for developing data-driven models for quality prediction.

    The current standard method for dealing with these datasets requires synchronization (e.g. using dynamic time warping) and batch-wise unfolding (BWU) of the 3-way array into a 2-way array of size I×(J×K) [1, 2]. Synchronization is not a trivial task, and the unfolding step usually produces a wide matrix with hundreds or thousands of pseudo-variables, leading to over-parametrized models with a high potential for overfitting.
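
    As a quick illustration of the unfolding step (sizes invented for the example), the sketch below shows how a modest synchronized batch dataset already yields thousands of pseudo-variables.

        # Batch-wise unfolding: (I batches x J variables x K time points) -> I x (J*K).
        import numpy as np

        I, J, K = 50, 10, 200
        batch_array = np.random.default_rng(5).standard_normal((I, J, K))

        X_bwu = batch_array.reshape(I, J * K)   # one row per batch
        print(X_bwu.shape)                      # (50, 2000): 2000 pseudo-variables from 10 variables
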

    The aforementioned approaches to batch data analysis rank high in complexity, in terms of both "estimation complexity" and "implementation complexity". These two dimensions have consequences for the performance of the methods (accuracy, robustness) and for the tangibility of their impact in industry. Recently, alternative parsimonious formulations for batch data analysis have been developed that present lower complexity on both dimensions. Examples include the class of feature-oriented approaches and multiresolution methodologies. This talk summarizes these new developments in batch data analysis and demonstrates their benefits with several illustrative examples.
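
    As a hedged sketch of the feature-oriented idea (an illustration, not the talk's method; trajectories, features, and quality values are synthetic), each batch trajectory is replaced by a few landmark features and a small regression is fitted on them.

        # Feature-oriented batch analysis: a few landmarks per batch instead of K points.
        import numpy as np

        rng = np.random.default_rng(6)
        I, K = 50, 200                                            # batches, time points
        trajs = np.cumsum(rng.standard_normal((I, K)), axis=1)    # one variable per batch
        quality = 0.05 * trajs[:, -1] + rng.normal(0.0, 0.1, I)   # batch-end quality

        features = np.column_stack([
            trajs.mean(axis=1),            # average level
            trajs[:, -1],                  # final value
            trajs.max(axis=1),             # peak
            np.argmax(trajs, axis=1),      # time of peak
        ])

        # Ordinary least squares on the handful of features (plus intercept).
        X = np.column_stack([np.ones(I), features])
        coef, *_ = np.linalg.lstsq(X, quality, rcond=None)
        print("fitted coefficients:", np.round(coef, 3))
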

    References
    1. Nomikos, P. and J.F. MacGregor, Monitoring batch processes using multiway principal component analysis. AIChE Journal, 1994. 40(8): p. 1361-1375.
    2. González-Martínez, J.M., et al., Effect of Synchronization on Bilinear Batch Process Modeling. Industrial & Engineering Chemistry Research, 2014. 53: p. 4339-4351.