Seminars 

This page presents the work-in-progress research seminars that are held weekly. The seminars cover Machine Learning and Bayesian methods applied to civil engineering. All seminars are open to the public.

Previous seminars: 2016-2017 | 2017-2018 | 2018-2019 | 2019-2020 | 2020-2021

June 3 2022 15:00  Polytechnique Montreal 
Presenter: Yakin Hajlaoui, Ph.D. student, Polytechnique Montreal
Title: Recent advances in Gaussian processes for solving geostatistical problems
Abstract: Spatial statistics, also known as geostatistics, is a branch of applied mathematics and statistics developed in the 1950s to estimate ore reserves in the mining industry, thanks to the work of engineer D.G. Krige, whose name is associated with one of the most famous geostatistical techniques: Kriging. Kriging is a spatial interpolation method used for prediction and for describing the spatial variation of natural phenomena. It remains one of the main interpolation algorithms used in geostatistics because of its ability to quantify uncertainty. However, Kriging scales poorly to big data and relies on strict assumptions such as normality and stationarity. In machine learning, Gaussian Processes (GPs) are closely related to Kriging and rest on the same assumptions. Unlike the Kriging literature, the GP community has made substantial progress in scaling GPs to big data by approximating them with Sparse Gaussian Processes (SGPs), and has addressed the stationarity limitation by stacking GPs into a deep structure known as Deep Gaussian Processes (DGPs). In this seminar, we will present the similarities and differences between Kriging and GPs, along with the most influential works that led to the development of SGPs and DGPs.
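As the abstract notes, GP regression and simple Kriging share the same posterior equations. A minimal numpy sketch of that posterior, assuming a squared-exponential kernel and invented 1-D observations (all values illustrative):

```python
import numpy as np

def rbf_kernel(x1, x2, length=1.0, var=1.0):
    # Squared-exponential (RBF) covariance between two sets of 1-D inputs
    d = x1[:, None] - x2[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2):
    # Standard GP-regression / simple-Kriging posterior equations
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_test, x_train)
    Kss = rbf_kernel(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha                                  # posterior mean
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)                          # pointwise variance

# Illustrative data: noisy-free sine observations at three locations
x = np.array([0.0, 1.0, 2.0])
y = np.sin(x)
mu, var = gp_posterior(x, y, np.array([1.0]))
# At a training location the posterior mean reverts to (nearly) the observed
# value with small variance -- the uncertainty quantification that makes
# Kriging attractive.
```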

May 27 2022 15:00  Polytechnique Montreal 
Presenter: Van-Dai Vuong, Ph.D. student, Polytechnique Montreal
Title: On the comparative performance and smoothing for TAGI-LSTM
Abstract: This presentation demonstrates how to perform smoothing in TAGI-LSTM in order to infer the posteriors for the LSTM's hidden states and cell states. In this seminar, we also compare the performance of TAGI-LSTM, the deterministic LSTM, and the variational LSTM on two time-series benchmarks.
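TAGI-LSTM's own smoothing equations are not reproduced here, but the underlying idea, refining filtered Gaussian state estimates using later observations, parallels the classic Rauch-Tung-Striebel (RTS) smoother for linear-Gaussian state-space models. A minimal scalar sketch, assuming a random-walk-style model with invented parameters:

```python
def kalman_filter(y, a=1.0, q=0.1, r=0.5, m0=0.0, p0=1.0):
    # Forward pass: filtered mean/variance of x_t given y_1..t
    m, p = m0, p0
    means, variances, pred_means, pred_vars = [], [], [], []
    for yt in y:
        mp, pp = a * m, a * a * p + q                # predict
        k = pp / (pp + r)                            # Kalman gain
        m, p = mp + k * (yt - mp), (1 - k) * pp      # update
        pred_means.append(mp); pred_vars.append(pp)
        means.append(m); variances.append(p)
    return means, variances, pred_means, pred_vars

def rts_smoother(means, variances, pred_means, pred_vars, a=1.0):
    # Backward pass: smoothed mean/variance of x_t given all of y_1..T
    ms, ps = list(means), list(variances)
    for t in range(len(means) - 2, -1, -1):
        g = a * variances[t] / pred_vars[t + 1]      # smoother gain
        ms[t] = means[t] + g * (ms[t + 1] - pred_means[t + 1])
        ps[t] = variances[t] + g * g * (ps[t + 1] - pred_vars[t + 1])
    return ms, ps

y = [1.0, 1.2, 0.9, 1.1]
means, variances, pm, pv = kalman_filter(y)
ms, ps = rts_smoother(means, variances, pm, pv)
# Smoothing never increases the posterior variance of earlier states.
```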

May 13 2022 15:00  Polytechnique Montreal 
Presenter: Shervin Khazaeli, Ph.D. student, Polytechnique Montreal
Title: Measurement System Design Using Reinforcement Learning and Cosine Similarity
Abstract: Measurement system design (MSD) is at the core of any structural health monitoring (SHM) system. An effective MSD enables a decision maker to: (i) maximize true detections of anomalies, (ii) minimize false alarms, and (iii) distinguish between different types of structural damage. In this seminar, we will discuss how a reinforcement learning (RL) agent, in conjunction with cosine similarity, addresses these three aspects of MSD.
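For the third aspect, distinguishing damage types, cosine similarity compares the direction of an observed response against reference signatures. A minimal sketch with hypothetical sensor-response vectors (all numbers invented for illustration):

```python
import numpy as np

def cosine_similarity(u, v):
    # cos(theta) = <u, v> / (||u|| * ||v||); 1 -> same direction, 0 -> orthogonal
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical reference signatures for two damage types
damage_a = np.array([1.0, 0.8, 0.1])
damage_b = np.array([0.1, 0.2, 1.0])
observed = np.array([0.9, 0.7, 0.2])

# The observation aligns far more closely with damage type A than type B
sim_a = cosine_similarity(observed, damage_a)
sim_b = cosine_similarity(observed, damage_b)
```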

Apr 29 2022 15:00  Polytechnique Montreal 
Presenter: Ali Fakhri, M.Sc. student, Polytechnique Montreal
Title: Inferring the full heteroscedastic noise covariance matrix with TAGI-V
Abstract: This presentation demonstrates how we can couple TAGI-trained neural networks with the multivariate approximate Gaussian variance inference (AGVI) to infer the full heteroscedastic noise covariance matrix. The combined framework is called TAGI-V.

Apr 22 2022 15:00  Polytechnique Montreal 
Presenter: James-A. Goulet, Professor, Polytechnique Montreal
Title: Inducing sparsity in neural networks through activation functions
Abstract: Neural networks (NNs) involve a large number of parameters defining the relationships between hidden layers. Common applications of NNs do not take advantage of the inherent sparsity caused by the use of the ReLU activation function. In this talk, we will show how, in current fully-connected feedforward architectures, the ReLU activation function leads to approximately 50% of the activated units being zero, opening the door to skipping a substantial portion of the calculations. To further leverage the potential of sparsity, we will introduce a modified ReLU activation function that allows more than 99% of the units to be inactive, so that their computations can be omitted. Inducing sparsity in NNs requires understanding weight initialization in order to maintain inference performance; for that purpose, we will cover the fundamentals of weight initialization for TAGI-trained neural networks. Experiments with activation-induced sparsity on a simple toy problem point towards (1) a computational speedup from the calculations saved through sparsity, (2) the possibility of learning from a single epoch, as fewer paths are activated in a sparse NN, and (3) the need for more work before the concept can be applied to more complex problems.
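The 50% figure quoted above follows directly from ReLU zeroing negative pre-activations, which are roughly zero-mean Gaussian under standard weight initialization. A quick sketch measuring that sparsity empirically (the Gaussian pre-activation assumption and all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated zero-mean Gaussian pre-activations for a batch of hidden layers
# (an assumption of this sketch, not output from an actual trained network)
z = rng.standard_normal((1000, 256))
h = np.maximum(z, 0.0)                 # ReLU

# Roughly half the activated units are exactly zero, so their downstream
# multiplications could in principle be skipped.
sparsity = np.mean(h == 0.0)
```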

Apr 8 2022 15:00  Polytechnique Montreal 
Presenter: Zachary Hamida, Postdoc, Polytechnique Montreal
Title: Planning Interventions for Infrastructures Using Reinforcement Learning
Abstract: This presentation covers recent developments and progress in planning interventions for infrastructures. The talk introduces an example toy problem and the results of a few analyses, along with future work.
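The kind of toy planning problem mentioned above can be illustrated with value iteration on a tiny, entirely hypothetical deterioration MDP, where states are condition levels and actions are do-nothing or repair (all transition probabilities and costs are invented):

```python
# States: 0 = good, 1 = fair, 2 = poor. Actions: 0 = do nothing, 1 = repair.
# P[a][s] maps to {next_state: probability}; R[a][s] combines condition
# penalties with a flat repair cost. All numbers are illustrative.
P = {
    0: {0: {0: 0.8, 1: 0.2}, 1: {1: 0.8, 2: 0.2}, 2: {2: 1.0}},  # deteriorate
    1: {0: {0: 1.0}, 1: {0: 1.0}, 2: {0: 1.0}},                  # repair -> good
}
R = {
    0: {0: 0.0, 1: -1.0, 2: -5.0},
    1: {0: -2.0, 1: -2.0, 2: -2.0},
}

def value_iteration(gamma=0.95, tol=1e-8):
    # Iterate the Bellman optimality backup until the values stop changing
    V = [0.0, 0.0, 0.0]
    while True:
        V_new = []
        for s in range(3):
            q = [R[a][s] + gamma * sum(p * V[s2] for s2, p in P[a][s].items())
                 for a in (0, 1)]
            V_new.append(max(q))
        if max(abs(a - b) for a, b in zip(V, V_new)) < tol:
            return V_new
        V = V_new

def greedy_action(V, s, gamma=0.95):
    # Action maximizing the one-step lookahead value
    q = [R[a][s] + gamma * sum(p * V[s2] for s2, p in P[a][s].items())
         for a in (0, 1)]
    return 0 if q[0] >= q[1] else 1

V = value_iteration()
# The optimal policy repairs a structure in poor condition rather than
# accumulating condition penalties.
```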

Mar 18 2022 15:00  Polytechnique Montreal 
Presenter: Bhargob Deka, Ph.D. student, Polytechnique Montreal
Title: Gaussian Multiplicative Approximation in State-Space Models
Abstract: In this seminar, I will present a novel approach, the Gaussian multiplicative approximation, to be used in multiplicative state-space models.
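A key ingredient for treating product terms in a Gaussian framework is that the first two moments of a product of jointly Gaussian variables are available in closed form. A sketch verifying those standard moment formulas by simulation (the specific parameter values are illustrative):

```python
import numpy as np

def product_moments(mx, my, vx, vy, cxy):
    # Exact first two moments of Z = X*Y for jointly Gaussian (X, Y):
    #   E[Z]   = mu_x mu_y + cov_xy
    #   Var[Z] = vx vy + cov_xy^2 + 2 cov_xy mu_x mu_y + vx mu_y^2 + vy mu_x^2
    mean = mx * my + cxy
    var = vx * vy + cxy**2 + 2 * cxy * mx * my + vx * my**2 + vy * mx**2
    return mean, var

# Illustrative parameters for the joint Gaussian
mx, my, vx, vy, cxy = 1.0, -0.5, 0.3, 0.2, 0.1

rng = np.random.default_rng(42)
cov = np.array([[vx, cxy], [cxy, vy]])
samples = rng.multivariate_normal([mx, my], cov, size=200_000)
z = samples[:, 0] * samples[:, 1]

mean, var = product_moments(mx, my, vx, vy, cxy)
# The Monte Carlo estimates agree with the closed-form moments
```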

Mar 4 2022 15:00  Zoom 
Presenter: James-A. Goulet, Professor, Polytechnique Montreal
Title: Introduction to GitHub
Abstract: This seminar provides a general overview of GitHub, why it will be useful for us, and how we will leverage its capabilities.

Feb 25 2022 15:00  Zoom 
Presenter: Ali Fakhri, M.Sc. student, Polytechnique Montreal
Title: Evaluating Reinforcement Learning Algorithms in OpenAI Gym
Abstract: This presentation gives a brief overview of OpenAI Gym, covers the key ideas behind reinforcement learning (RL), and presents an implementation of a Monte Carlo method to solve an RL task in OpenAI Gym.
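The Monte Carlo idea does not require Gym itself: first-visit Monte Carlo averages the discounted returns observed after the first visit to each state. A self-contained sketch on a hypothetical three-state chain (the environment and all numbers are invented for illustration):

```python
import random
from collections import defaultdict

def run_episode(rng):
    # Hypothetical chain: from state 0, step to state 1 or jump straight to
    # the terminal state 2 with probability 0.5; reaching 2 pays reward 1.
    state, trajectory = 0, []
    while state != 2:
        next_state = 2 if (state == 1 or rng.random() < 0.5) else 1
        reward = 1.0 if next_state == 2 else 0.0
        trajectory.append((state, reward))
        state = next_state
    return trajectory

def first_visit_mc(n_episodes=2000, gamma=0.9, seed=0):
    # First-visit Monte Carlo estimate of the state-value function V(s)
    rng = random.Random(seed)
    returns = defaultdict(list)
    for _ in range(n_episodes):
        episode = run_episode(rng)
        first_visit = {}
        for t, (state, _) in enumerate(episode):
            if state not in first_visit:
                first_visit[state] = t
        G = 0.0
        # Walk backwards, accumulating the discounted return; record it
        # only at each state's first visit within the episode.
        for t in range(len(episode) - 1, -1, -1):
            state, reward = episode[t]
            G = gamma * G + reward
            if first_visit[state] == t:
                returns[state].append(G)
    return {s: sum(v) / len(v) for s, v in returns.items()}

V = first_visit_mc()
# True values here: V(1) = 1 exactly; V(0) = 0.5*1 + 0.5*0.9 = 0.95
```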

Feb 11 2022 15:00  Zoom 
Presenters: Zachary Hamida, Postdoc, and Blanche Laurent, M.Sc. student, Polytechnique Montreal
Title: Phase 2: Predicting degradation and understanding the effect of interventions
Abstract: This seminar presents a summary of the research work accomplished over the past year. The topics include a general description of the OpenIPDM platform, advances in estimating inspectors' uncertainty, and progress in planning intervention activities on a network scale.

Feb 04 2022 15:00  Zoom 
Presenter: Blanche Laurent, M.Sc. student, Polytechnique Montreal
Title: Analytical Inference for Inspector Uncertainty Based on Network-Scale Visual Inspections
Abstract: This seminar presents the application of analytical inference for inspector uncertainty based on network-scale visual inspections, and compares its performance to a gradient-based approach.

Jan 21 2022 15:00  Zoom 
Presenter: James-A. Goulet, Professor, Polytechnique Montreal
Title: Analytically Tractable Hidden-States Inference in Bayesian Neural Networks
Abstract: With few exceptions, neural networks have relied on backpropagation and gradient descent as the inference engine for learning model parameters, because closed-form Bayesian inference for neural networks has been considered intractable. In this paper, we show how we can leverage the capabilities of tractable approximate Gaussian inference (TAGI) to infer hidden states, rather than only using it to infer the network's parameters. One novel aspect is that it allows inferring hidden states through the imposition of constraints designed to achieve specific objectives, as illustrated through three examples: (1) the generation of adversarial-attack examples, (2) the use of a neural network as a black-box optimization method, and (3) the application of inference to continuous-action reinforcement learning. In these three examples, the constraints are, in (1), a target label chosen to fool a neural network, and in (2) and (3), the derivative of the network with respect to its input, which is set to zero in order to infer the input values that either maximize or minimize the output. These applications showcase how tasks that were previously reserved for gradient-based optimization can now be approached with analytically tractable inference.
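TAGI's internals are beyond the scope of this listing, but the mechanism of imposing a constraint on an output and propagating the update back to a hidden quantity is, at its core, Gaussian conditioning. A minimal scalar sketch on a joint Gaussian over a hidden state x and an output y (all numbers illustrative):

```python
def condition_gaussian(mu_x, var_x, mu_y, var_y, cov_xy, y_obs):
    # Posterior of x after constraining y = y_obs, for jointly Gaussian
    # (x, y): standard Gaussian conditioning equations.
    gain = cov_xy / var_y
    mu_post = mu_x + gain * (y_obs - mu_y)
    var_post = var_x - gain * cov_xy
    return mu_post, var_post

# Prior beliefs about a hidden state x and an output y (illustrative)
mu_x, var_x = 0.0, 1.0
mu_y, var_y = 0.5, 2.0
cov_xy = 0.8

# Impose the constraint y = 2.0 (e.g., a target label) and update x:
# the belief about x shifts toward values consistent with the constraint,
# and its uncertainty shrinks.
mu_post, var_post = condition_gaussian(mu_x, var_x, mu_y, var_y, cov_xy, 2.0)
```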

Jan 14 2022 15:00  Zoom 
Presenter: James-A. Goulet, Professor, Polytechnique Montreal
Title: Jira: What? Why? How?
Abstract: In this special seminar, I will present the Jira management tool. I will explain what it is, why it will be useful for us, and how we will leverage its capabilities.

Jan 6 2022 09:00  Zoom 
Presenter: Shervin Khazaeli  Ph.D student,
Polytechnique Montreal Title: Can the agent accomplish the 'task' in hand? Abstract: In the context of the Reinforcement Learning (RL), we let the agent to learn the 'task' in hand by interacting with the environment. The goodness of accomplishing the task is determined by a notion of a scalar signal known as 'reward': encourage the agent when it does good and/or penalize it when it does bad. In other words, we express the task in the form of reward values. But, how much expressive are the rewards? Answering this question depends on understanding different accounts of the 'task'. In this seminar we will discuss different tasks and argue that not all of the tasks are expressible by the rewards. 




