The SOLACE seminar is held on Thursdays at 2pm at the
Laboratory of Analysis and Architecture of Systems (LAAS).
During the academic year 2022–2023, the seminar was organized by
Céline Comte.
Claudia Lilian Ramírez – July 3, 2023 – Slides Part 1 – Instructions Part 2 – HTML Part 2 – Notebook Part 2 – Data Part 2
In this seminar I will present some basic concepts on how to structure and write code correctly. This is not merely about "organizing" code: it is about making code simpler, more efficient, more readable, and therefore "better". You won't learn how to do anything "new", but with luck you will learn how to do it faster and more efficiently.
The seminar will be divided into three parts, with a break between them. You can attend only the first part, or only the first and second. However, you cannot attend the second part without having attended the first, and so on: the seminar is made of "building blocks", and each block relies on the previous one.
Inés Armendáriz – June 22, 2023
We prove a fluid limit for the coarsening phase of the condensing zero-range process on a finite number of sites. When time and occupation per site are linearly rescaled by the total number of particles, the evolution of the process is described by a piecewise linear trajectory in the simplex indexed by the sites. The linear coefficients are determined by the trace process of the underlying random walk on the subset of non-empty sites, and the trajectory reaches an absorbing configuration in finite time. A boundary of the simplex is called absorbing for the fluid limit if a trajectory started at a configuration in the boundary remains in it for all times. We identify the set of absorbing configurations and characterize the absorbing boundaries.
This talk is based on a joint work with J. Beltrán, D. Cuesta and M. Jara.
Céline Comte – June 8, 2023 – Slides
We introduce a reinforcement-learning gradient-based method to optimize performance in product-form queueing networks. More specifically, we consider a queueing network whose equilibrium distribution is given, up to a multiplicative constant, as a product of differentiable functions of a vector of control parameters. Our main contribution is a novel expression of the gradient of the undiscounted return in the form of a covariance vector for which we can build an unbiased estimator. We evaluate this method numerically on several examples and compare its performance to that of classical reinforcement-learning methods. Our result can be generalized to other Markov decision processes, provided that we can build an estimator of the gradient of the logarithm of a stationary measure with respect to the vector of control parameters.
This talk is based on an ongoing work with Matthieu Jonckheere, Jaron Sanders, and Albert Senen Cerda.
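As an illustration of such a covariance form (a minimal sketch, not the estimator of the paper): for a toy product-form distribution over two independent finite buffers with unnormalized weights $\theta_i^{x_i}$, the gradient of the stationary expectation of a reward equals the covariance between the reward and the score of the unnormalized weights, which can be estimated from samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy product-form model: two independent finite buffers, each with
# stationary distribution pi_i(x) proportional to theta_i**x, x = 0..C.
C = 10

def sample_state(theta, n):
    """Draw n i.i.d. states from the product-form stationary distribution."""
    states = np.empty((n, len(theta)), dtype=int)
    for i, th in enumerate(theta):
        w = th ** np.arange(C + 1)
        states[:, i] = rng.choice(C + 1, size=n, p=w / w.sum())
    return states

def reward(x):
    # Example reward: negative total backlog.
    return -x.sum(axis=1)

def score(x, theta):
    # Gradient of the log of the unnormalized product weights:
    # d/d theta_i log(theta_i ** x_i) = x_i / theta_i.
    return x / theta

def gradient_estimate(theta, n=100_000):
    x = sample_state(theta, n)
    r, s = reward(x), score(x, theta)
    # Covariance form: grad_theta E[r(X)] = Cov(r(X), score(X)).
    return ((r - r.mean())[:, None] * (s - s.mean(axis=0))).mean(axis=0)

print(gradient_estimate(np.array([0.6, 0.8])))
```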
Siva Theja Maguluri – June 2, 2023 – Slides – Paper 1 – Paper 2 – Paper 3
Motivated by applications in Reinforcement Learning (RL), this talk focuses on the Stochastic Approximation (SA) method to find fixed points of a contractive operator. First proposed by Robbins and Monro, SA is a popular approach for solving fixed-point equations when the information is corrupted by noise. We consider the SA algorithm for operators that are contractive under arbitrary norms (especially the l-infinity norm). We present finite-sample bounds on the mean square error, which are established using a Lyapunov framework based on infimal convolution and the generalized Moreau envelope. We then present our more recent results on concentration of the tail error, even when the iterates are not bounded by a constant. These tail bounds are obtained using exponential supermartingales in conjunction with the Moreau envelope and a novel bootstrapping approach. Our results immediately imply state-of-the-art sample complexity results for a large class of RL algorithms.
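A minimal sketch of the SA iteration itself, under illustrative assumptions (an affine operator that is a contraction in the sup norm, additive zero-mean noise, diminishing step sizes); it shows the algorithm the bounds above are about, not the bounds themselves.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stochastic approximation x_{k+1} = x_k + a_k * (F(x_k) + noise - x_k)
# for an operator F that is a contraction in the sup (l-infinity) norm.
d = 5
gamma = 0.9                       # contraction factor
P = rng.dirichlet(np.ones(d), d)  # row-stochastic matrix, sup norm equal to 1
b = rng.normal(size=d)

def F(x):
    # Bellman-like affine operator: a gamma-contraction in the sup norm.
    return b + gamma * P @ x

x_star = np.linalg.solve(np.eye(d) - gamma * P, b)  # exact fixed point

x = np.zeros(d)
for k in range(1, 20_000):
    noise = rng.normal(scale=0.5, size=d)   # zero-mean observation noise
    a_k = 1.0 / (1.0 + k / 100)             # diminishing step size
    x = x + a_k * (F(x) + noise - x)

print("sup-norm error:", np.abs(x - x_star).max())
```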
Nikki Levering – May 30, 2023 – Paper 1 – Paper 2 – Slides
Despite measures to reduce congestion, both recurrent patterns (e.g., rush hours) and non-recurrent events (e.g., traffic incidents) cause large delays in highway networks with important economic implications. In this talk, we will introduce the Markovian Velocity Model (MVM), a framework capable of incorporating both effects. The model uses a Markov-modulated background process that tracks the random and (semi-)predictable events affecting the vehicle speeds in a highway network. The resulting continuous-time model, which allows for correlation between the velocities on the arcs, is highly flexible. This flexibility can be employed to operationalize the model in real-life traffic networks. The fitted model can then be used to predict travel-time distributions and, as a result, provide departure time and route guidance for travelers in such networks.
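A toy version of this idea, under strong simplifying assumptions (a single arc, a two-state background chain with made-up rates, piecewise-constant speeds): the background chain modulates the speed, and simulation yields a travel-time distribution.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-state background Markov chain ("free flow" vs "incident") modulating
# the speed on a single 10 km arc; we simulate the induced travel time.
length_km = 10.0
speeds = np.array([100.0, 30.0])      # km/h in each background state
rates = np.array([[-0.5, 0.5],        # generator of the background chain
                  [ 2.0, -2.0]])      # (events per hour)

def travel_time(state=0):
    pos, t = 0.0, 0.0
    while True:
        sojourn = rng.exponential(1.0 / -rates[state, state])
        advance = speeds[state] * sojourn
        if pos + advance >= length_km:
            return t + (length_km - pos) / speeds[state]
        pos, t = pos + advance, t + sojourn
        state = 1 - state             # with two states, simply flip

samples = np.array([travel_time() for _ in range(10_000)])
print("mean travel time (h):", samples.mean())
```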
Claudia Lilian Ramírez – May 11, 2023 – Slides
ML allows us to understand the behavior of a system without needing to fully understand the equations that describe it. Because of this, it is being applied to many problems where the equations are unknown, or sometimes simply too expensive to solve for the answers that we want.
In this talk I will present my current work, which applies ML tools to fully explore an ignition process. We are currently using Gaussian processes, and we want to implement some reinforcement-learning techniques.
I won't go into technical details, because I want to present the problem and discuss the next steps that are not yet implemented; explaining the physics and then the different machine-learning tools would take far too long. Instead, I will present the basic notions and the concepts involved*.
* If you want to learn more about the physics or (probably) the math involved, you can find it on the internet, it's all there!
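As a hint of the kind of tool involved, here is a generic Gaussian-process regression sketch on synthetic data; the two design parameters, the kernel, and the data are illustrative placeholders, not the ignition model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)

# Toy data standing in for expensive simulations of some scalar response.
X = rng.uniform(0, 1, size=(30, 2))             # two made-up design parameters
y = np.sin(6 * X[:, 0]) * np.exp(-X[:, 1]) + 0.05 * rng.normal(size=30)

gp = GaussianProcessRegressor(kernel=1.0 * RBF(0.2) + WhiteKernel(1e-3),
                              normalize_y=True)
gp.fit(X, y)

# Posterior mean and uncertainty at a new design point: the uncertainty is
# what an active-learning / RL-style strategy would exploit to choose the
# next expensive simulation.
mean, std = gp.predict(np.array([[0.4, 0.7]]), return_std=True)
print(mean, std)
```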
Matthieu Jonckheere – March 30, 2023
We review some classical methods (in particular spectral clustering and t-SNE) for dimension reduction, low-dimensional representation, and clustering.
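For reference, a minimal example applying both methods to a standard toy data set with scikit-learn (the data set and parameter values are arbitrary illustrative choices).

```python
from sklearn.datasets import make_moons
from sklearn.cluster import SpectralClustering
from sklearn.manifold import TSNE

# Two interleaving half-moons: a classical example where spectral clustering
# recovers the groups and t-SNE provides a 2-D representation of the data.
X, _ = make_moons(n_samples=500, noise=0.05, random_state=0)

labels = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                            n_neighbors=10, random_state=0).fit_predict(X)
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

print(labels[:10], embedding.shape)
```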
Daniel Mastropietro – March 24, 2023
The Fleming-Viot stochastic process was proposed by W. Fleming and M. Viot in 1979 to model population genetics [1]. Driven by an underlying stochastic process with absorption, it has proven useful for estimating the quasi-stationary distribution of such absorbing processes [2]. My recent work with the Fleming-Viot process has been two-fold: to help discover rare events such as queue blocking, and to optimise functions that present large barren areas, i.e. regions where the gradient is nearly zero. In this talk I will motivate how the Fleming-Viot process could help tackle the barren plateau problem, with an application to finding the minimum energy in quantum circuits, where optimisation problems with large barren plateaus are ubiquitous [3]. (An illustrative particle sketch is given after the references below.)
References:
[1] Fleming, W. and Viot, M. "Some Measure-Valued Markov Processes in Population Genetics Theory" (1979). http://www.jstor.org/stable/24892583
[2] Asselah, A., Ferrari, P. A., Groisman, P. and Jonckheere, M. "Fleming–Viot selects the minimal quasi-stationary distribution: The Galton–Watson case". Annales de l'Institut Henri Poincaré, Probabilités et Statistiques, 52:647–668 (2016). https://doi.org/10.1214/14-aihp635
[3] McClean, J. R., Boixo, S., Smelyanskiy, V. N. et al. "Barren plateaus in quantum neural network training landscapes". Nature Communications 9, 4812 (2018). https://doi.org/10.1038/s41467-018-07090-4
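Below is a minimal Fleming-Viot particle sketch for a toy absorbed random walk; the dynamics, rates, and applications discussed in the talk differ, this only illustrates the branching mechanism.

```python
import numpy as np

rng = np.random.default_rng(4)

# N particles follow a random walk on {0, 1, ..., K} absorbed at 0.  Whenever
# a particle is absorbed, it restarts at the current position of another
# particle chosen uniformly at random.  The long-run empirical distribution
# approximates the quasi-stationary distribution of the absorbed walk.
N, K, p_up = 200, 10, 0.45
particles = rng.integers(1, K + 1, size=N)

for _ in range(200_000):
    i = rng.integers(N)                              # move one particle
    step = 1 if rng.random() < p_up else -1
    particles[i] = min(particles[i] + step, K)       # reflect at K
    if particles[i] == 0:                            # absorbed: branch
        j = rng.integers(N - 1)
        particles[i] = particles[j if j < i else j + 1]

hist = np.bincount(particles, minlength=K + 1)[1:] / N
print("QSD estimate on {1,...,K}:", np.round(hist, 3))
```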
Eva Loeser – March 16, 2023
In this talk, we consider a multiclass multiserver queue with reneging operating under the random order of service discipline. Interarrival times, service times, and patience times are assumed to be generally distributed. We establish a fluid limit for a measure-valued process that keeps track of the remaining patience time of each job in the queue. We prove uniqueness of fluid model solutions under mild conditions and study the asymptotic behavior of fluid model solutions as time goes to infinity. This model is also of interest for studying the dynamics of enzymatic processing.
This talk is based on joint work with Ruth Williams.
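A single-class, single-server sketch of these dynamics (an illustrative simplification: the talk treats the multiclass multiserver case, and the distributions below are arbitrary choices): waiting jobs abandon when their patience expires, and a freed server picks a waiting job uniformly at random.

```python
import random

random.seed(10)

lam = 0.9                                        # Poisson arrival rate
service = lambda: random.lognormvariate(-0.2, 0.5)
patience = lambda: random.uniform(1.0, 5.0)

T, t = 50_000.0, 0.0
waiting = {}                                     # job id -> reneging deadline
next_arrival, next_departure = random.expovariate(lam), float("inf")
job_id, served, reneged = 0, 0, 0

while t < T:
    t = min(next_arrival, next_departure)
    for j in [j for j, d in waiting.items() if d <= t]:
        del waiting[j]                           # patience expired: renege
        reneged += 1
    if t == next_arrival:
        if next_departure == float("inf"):       # server idle: start service
            next_departure = t + service()
        else:
            waiting[job_id] = t + patience()
        job_id += 1
        next_arrival = t + random.expovariate(lam)
    else:                                        # service completion
        served += 1
        if waiting:
            j = random.choice(list(waiting))     # random order of service
            del waiting[j]
            next_departure = t + service()
        else:
            next_departure = float("inf")

print("served:", served, "reneged:", reneged)
```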
Gilles Tredan – March 9, 2023 – Slides
In machine learning, adversarial examples are inputs designed by an adversary to fool a target classifier.
More precisely, these examples are crafted by adding imperceptible noise to originally well-classified inputs in order to change the resulting classification.
Introduced almost a decade ago (2014), they have generated a wide spectrum of research, ranging from very practical questions (can I fool an autonomous vehicle?) to more fundamental ones (are there classifiers that resist adversarial inputs, and at what cost?).
This talk is a humble introduction to these various topics by a non-expert.
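To make the construction concrete, here is a minimal FGSM-style sketch on a linear classifier; this is an illustrative stand-in, the talk concerns general classifiers and a broader range of attacks.

```python
import numpy as np

rng = np.random.default_rng(5)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Linear classifier p(y=1|x) = sigmoid(w.x + b) and a correctly classified
# input x of class 1.
w, b = rng.normal(size=20), 0.1
x, y = 0.5 * w / np.linalg.norm(w), 1

# Fast-gradient-sign perturbation: a small step along the sign of the
# gradient of the cross-entropy loss with respect to the input.
grad_x = (sigmoid(w @ x + b) - y) * w
x_adv = x + 0.25 * np.sign(grad_x)

print("clean score:", sigmoid(w @ x + b))
print("adversarial score:", sigmoid(w @ x_adv + b))
```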
Matthieu Jonckheere – February 9, 2023
In statistical learning tasks such as clustering, recommendation, or dimension reduction, a notion of distance or similarity between points is crucial but might not be directly available as an input. In previous work, we proposed a new density-based estimator of weighted geodesic distances that takes into account the underlying density of the data, and that is suitable for nonuniform data lying on a manifold of lower dimension than the ambient space. The consistency of the estimator is proven using tools from first-passage percolation. The macroscopic distance obtained depends on a single parameter, and we discuss the choice of this parameter and the properties of the resulting distance for machine-learning tasks.
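One possible sample-based sketch in this spirit, assuming a k-nearest-neighbour graph with Euclidean edge lengths raised to a power p (p playing the role of the single parameter); this is a simplified stand-in, not necessarily the exact estimator of the work.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(6)

# Noisy circle: nonuniform data lying near a one-dimensional manifold.
t = rng.uniform(0, 2 * np.pi, 400)
X = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.normal(size=(400, 2))

p, k = 3.0, 10
G = kneighbors_graph(X, n_neighbors=k, mode="distance")
G.data = G.data ** p                    # penalise jumps through sparse regions
D = shortest_path(G, directed=False)    # estimated weighted geodesic distances

print(D[0, :5])
```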
Céline Comte – January 26 and February 16, 2023 – Paper 1 – Paper 2 – Slides
In this presentation, we will consider a stochastic dynamic matching model in which items of different classes arrive according to independent Poisson processes and compatibilities between items are described by an undirected graph on their classes. We will first focus on a specific matching policy called first-come-first-matched. Our main contribution is the observation that, under this policy, the matching model is equivalent to an order-independent (loss) queue, a model that has recently gained momentum in the queueing-theory community. Using this equivalence, we will formulate simpler proofs for several existing results and derive closed-form expressions for performance metrics such as the waiting time of a class and the matching rate along an edge. In the second part, we will use results from graph theory and linear algebra to characterize the set of achievable matching rates under any matching policy.
The first part of this presentation is based on the paper "Stochastic non-bipartite matching models and order-independent loss queues" published in the journal Stochastic Models, Taylor & Francis (2022). The second part of this presentation is based on the recent preprint "Stochastic dynamic matching: a mixed graph-theory and linear-algebra approach", written in collaboration with Fabien Mathieu (LINCS) and Ana Bušić (Inria and PSL University).
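A minimal simulation sketch of the first-come-first-matched policy on a small triangle compatibility graph; the graph and arrival probabilities are arbitrary illustrative choices, and class sequences are drawn i.i.d., which gives the same embedded dynamics as independent Poisson arrivals.

```python
import random
from collections import deque

random.seed(7)

classes = ["a", "b", "c"]
compatible = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}  # triangle
arrival_prob = {"a": 0.4, "b": 0.35, "c": 0.25}

waiting = deque()            # unmatched items, oldest first: (class, index)
matches = 0
n_arrivals = 100_000

for n in range(n_arrivals):
    c = random.choices(classes, weights=[arrival_prob[k] for k in classes])[0]
    for item in waiting:
        if item[0] in compatible[c]:   # match with the oldest compatible item
            waiting.remove(item)
            matches += 1
            break
    else:
        waiting.append((c, n))

print("fraction of items matched:", 2 * matches / n_arrivals)
print("items still waiting:", len(waiting))
```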
Seva Shneer – January 12, 2023 – Paper
We consider the following migration process based on a closed network of $N$ queues with $K_N$ customers. Each station is a $\cdot/M/\infty$ queue with service (or migration) rate $\mu$. Upon departure, a customer is routed independently and uniformly at random to another station. In addition to migration, these customers are subject to SIS (Susceptible, Infected, Susceptible) dynamics. That is, customers are in one of two states, I for infected or S for susceptible, and they can only swap their state, either from I to S or from S to I, inside stations. More precisely, at any station, each susceptible customer becomes infected at instantaneous rate $\alpha Y$ if there are $Y$ infected customers in the station, whereas each infected customer recovers and becomes susceptible at rate $\beta$.
We let $N$ tend to infinity and assume that $\lim_{N\to\infty} K_N/N = \eta := \lambda/\mu$, where $\eta$ is a positive constant representing the customer density. The main question of interest is for which parameters of such a system there exists a stationary regime where the epidemic survives in the limiting system. The latter limit will be referred to as the thermodynamic limit. We establish several structural properties (monotonicity and convexity) of the system, which allow us to give the structure of the phase-transition diagram of this thermodynamic limit with respect to $\eta$. The analysis of this SIS model reduces to that of a wave-type PDE for which we found no explicit solution.
This plain SIS model is one among several companion stochastic processes that exhibit both migration and contagion. We discuss two of them, as they provide variants of the plain SIS model as well as some bounds: SIS-DOCS (Departure On Change of State) and SIS-AIR (Averaged Infection Rate), both of which admit closed-form solutions. SIS-AIR is a classical mean-field model where the infection mechanism based on the actual population of infected customers is replaced by one based on an empirical average of the number of infected customers across all stations; it admits a product-form solution. SIS-DOCS features accelerated migration in that each change of SIS state implies an immediate departure; this model leads to another wave-type PDE that admits a closed-form solution. Our main focus is on the closed systems and their limits. The open systems, consisting of a single station with Poisson input, are instrumental in the analysis of the thermodynamic limits and are also of independent interest.
This is a joint work with François Baccelli and Sergey Foss.
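For intuition, a small finite-$N$ Gillespie-style sketch of the plain SIS-with-migration dynamics described above; the parameter values are arbitrary, and the talk is concerned with the thermodynamic limit rather than such a finite simulation.

```python
import numpy as np

rng = np.random.default_rng(12)

N, K = 20, 200                      # stations and customers (density eta = 10)
mu, alpha, beta = 1.0, 0.3, 1.0     # migration, infection, recovery rates

S = rng.multinomial(K - 20, [1 / N] * N)   # susceptible customers per station
I = rng.multinomial(20, [1 / N] * N)       # start with 20 infected customers

for _ in range(50_000):
    # Event rates: S-migration, I-migration, infection, recovery per station.
    rates = np.concatenate([mu * S, mu * I, alpha * S * I, beta * I])
    total = rates.sum()
    if total == 0:
        break
    kind, i = divmod(rng.choice(4 * N, p=rates / total), N)
    if kind == 0:                   # susceptible migrates to another station
        j = (i + rng.integers(1, N)) % N
        S[i] -= 1; S[j] += 1
    elif kind == 1:                 # infected migrates to another station
        j = (i + rng.integers(1, N)) % N
        I[i] -= 1; I[j] += 1
    elif kind == 2:                 # infection at station i
        S[i] -= 1; I[i] += 1
    else:                           # recovery at station i
        I[i] -= 1; S[i] += 1

print("fraction infected at the end:", I.sum() / K)
```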
Manuel Saenz and Jean Barbier – December 15, 2022 – Paper
I will present recent results concerning the Bayesian estimation of low-rank matrices corrupted by structured noise, namely rotationally invariant noise with a generic spectrum. Using the replica method, we derive the optimal performance limit. This is possible by exploiting the low-rank structure of the matrix signal, which implies that we can reduce the model to an effective quadratic model of the Ising type. We then show that the Approximate Message Passing (AMP) algorithm currently proposed in the literature for Bayesian estimation is sub-optimal. Exploiting the theory of adaptive Thouless-Anderson-Palmer equations by Opper et al., we explain the reason for this sub-optimality and, as a consequence, deduce an optimal Bayesian AMP algorithm with a rigorous state evolution matching the replica prediction.
Paola Bermolen – December 1, 2022
The random dot product graph (RDPG) is a tractable yet expressive generative graph model for relational data, which subsumes simple Erdős-Rényi and stochastic block model ensembles as particular cases. RDPGs postulate that there exist latent positions for each node and specify the edge formation probabilities via the inner product of the corresponding latent vectors. In this talk, we first focus on the embedding task of estimating these latent positions from observed graphs. The workhorse adjacency spectral embedding (ASE) offers an approximate solution obtained via the eigendecomposition of the adjacency matrix, which enjoys solid statistical guarantees but can be computationally intensive and is formally solving a surrogate problem. To address these challenges, we bring to bear recent non-convex optimization advances and demonstrate their impact on RDPG inference. We show that the proposed algorithms are scalable, robust to missing network data, and can track the latent positions over time when the graphs are acquired in a streaming fashion, even for dynamic networks subject to node additions and deletions. We also discuss extensions to the vanilla RDPG to accommodate directed and weighted graphs. Unlike previous proposals, our non-parametric RDPG model for weighted networks does not require a priori specification of the weights’ distribution to perform inference and estimation in a provably consistent fashion. Finally, we discuss the problem of online monitoring and detection of changes in the underlying data distribution of a graph sequence. Our idea is to endow sequential change-point detection (CPD) techniques with a graph representation learning substrate based on the versatile RDPG model. We share an open-source implementation of the proposed node embedding and online CPD algorithms, whose effectiveness is demonstrated via synthetic and real network data experiments.
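A minimal adjacency spectral embedding sketch on a synthetic RDPG, illustrating the ASE baseline mentioned above (not the proposed non-convex algorithms).

```python
import numpy as np

rng = np.random.default_rng(8)

# Generate an RDPG with known latent positions, then recover them (up to an
# orthogonal rotation) from the top-d eigenpairs of the adjacency matrix.
n, d = 500, 2
Z = rng.uniform(0.2, 0.8, size=(n, d)) / np.sqrt(d)   # latent positions
P = Z @ Z.T                                           # edge probabilities
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                                           # undirected, no self-loops

eigvals, eigvecs = np.linalg.eigh(A)
idx = np.argsort(eigvals)[-d:]                        # top-d eigenpairs
X_hat = eigvecs[:, idx] * np.sqrt(eigvals[idx])       # ASE estimate

# Rotation-invariant check: compare reconstructed edge probabilities with P.
print("mean absolute error on P:", np.abs(X_hat @ X_hat.T - P).mean())
```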
Alejandro Hernandez Wences – November 24 and December 8, 2022
In this talk I will review neutral population models and describe their genealogies in terms of coalescent processes; in particular, we will focus on the case where these are best described by Kingman’s coalescent. Then I will present the family of exponential models, which incorporate the effect of natural selection and whose genealogies are best described by the Bolthausen-Sznitman coalescent (although, together with Emmanuel Schertzer, we have found that this is not always the case). Finally, I will introduce the framework, often used in population genetics, of the site frequency spectrum for inferring the genealogy of a real population from present-day genetic data. I will present a recent characterization of this statistic for the particular case of the Bolthausen-Sznitman coalescent.
This is a work in collaboration with Arno Siri-Jégousse.
Nahuel Soprano Loto – November 11, 2022
The Online Stochastic Block Model (OSBM) in the dense regime can be informally described as follows. Nodes arrive one by one, one at each time step, and each node has one of $K$ colors, drawn according to a fixed probability distribution. On arrival, a node tries to connect with each of the nodes already present in the system, and each connection actually takes place with a probability that depends only on the colors of the two nodes involved. In this way, at each time step, we obtain a random graph. We propose an online algorithm that, at each time step, gives a matching of this random graph (a matching of a graph is a subset of pairwise disjoint edges). Under certain conditions, we prove that this algorithm is asymptotically optimal, in the sense that the ratio between the cardinality of the matching and the total number of nodes converges to 1. The proof uses a non-trivial connection between the OSBM and a modification of the stochastic matching model studied in [MM2016], and strongly relies on ideas developed in [JMSLR2022].
This is a work in collaboration with Matthieu Jonckheere and Pascal Moyal.
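For intuition about the model, here is a sketch of a naive greedy online matching on a dense OSBM; this is explicitly not the algorithm of the talk (whose asymptotic optimality is precisely the point of the work), and all parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(9)

# Online stochastic block model: each arriving node draws a colour and
# connects to every earlier node with a colour-dependent probability; the
# greedy baseline matches it to the first unmatched neighbour, if any.
T, K = 5_000, 2
colour_probs = np.array([0.5, 0.5])
connect = np.array([[0.1, 0.6],     # connection probabilities by colour pair
                    [0.6, 0.1]])

colours = np.empty(T, dtype=int)
partner = np.full(T, -1)            # partner[i] = matched node, or -1

for t in range(T):
    c = rng.choice(K, p=colour_probs)
    colours[t] = c
    neighbours = np.flatnonzero(rng.random(t) < connect[c, colours[:t]])
    for i in neighbours:
        if partner[i] == -1:        # greedy: take the first free neighbour
            partner[i], partner[t] = t, i
            break

print("fraction of matched nodes:", np.mean(partner != -1))
```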