(IMS Medallion Lecturer) Charles Bordenave, Université de Toulouse, France
Charles Bordenave is a CNRS researcher at the Institut de Mathématiques de Toulouse, Université de Toulouse. In 2017 he was awarded the Marc Yor Prize in probability. He is currently an associate editor of the Annals of Applied Probability, the Annals of Probability, and Bernoulli. His research interests lie in random matrices, random graphs, stochastic networks, combinatorial optimization, and stochastic geometry.
Non-backtracking spectrum of random matrices
A fruitful line of thought in the study of discrete combinatorial structures, such as graphs, is to look for natural matrices or operators whose spectrum contains valuable and accessible information on the underlying combinatorial structure. The non-backtracking matrix of a graph is a matrix acting on pairs of vertices sharing an edge, that is, on the oriented edges of the graph. The entries of the powers of the non-backtracking matrix count non-backtracking paths along the edges of the graph, that is, paths which never traverse the same edge twice in succession. A non-backtracking path may be interpreted as a discrete geodesic. This matrix was introduced by Hashimoto in 1988 in the context of the Ihara zeta function of a graph. In recent years, due to its strong geometric flavor, the non-backtracking matrix has been promoted as a powerful tool to analyze the subtle interplay between the geometry of a graph and its spectrum. It has found a wide range of applications, notably in spectral algorithms. The talk will focus on the study of the extremal eigenvalues of non-backtracking matrices of classical random graph ensembles: uniform regular graphs, Erdős-Rényi random graphs, stochastic block models, and random n-lifts of a base graph. We will notably explain how these results can be used in community detection problems and in the theory of expander graphs, classical and quantum. This is based on joint works with Benoit Collins, Marc Lelarge and Laurent Massoulié.
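As a concrete illustration of the construction described above (not taken from the talk), the following sketch builds the non-backtracking matrix B of a small undirected graph: B is indexed by oriented edges, and B[(u,v),(v,w)] = 1 exactly when the walk can continue from (u,v) to (v,w) without reversing. The toy graph is hypothetical.

```python
from itertools import product
import numpy as np

def non_backtracking_matrix(edges):
    """Non-backtracking matrix B of an undirected graph.

    B is indexed by oriented edges (u, v); B[(u,v),(v,w)] = 1 iff
    w != u, i.e. the step from (u,v) to (v,w) does not backtrack.
    """
    directed = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
    index = {e: i for i, e in enumerate(directed)}
    m = len(directed)
    B = np.zeros((m, m), dtype=int)
    for (u, v), (x, w) in product(directed, directed):
        if v == x and w != u:
            B[index[(u, v)], index[(x, w)]] = 1
    return B, directed

# Hypothetical toy graph: a triangle with one pendant vertex.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
B, directed = non_backtracking_matrix(edges)

# Entries of powers of B count non-backtracking paths: (B^k)[e, f] is
# the number of non-backtracking paths starting with oriented edge e
# and ending, k steps later, with oriented edge f.
paths2 = np.linalg.matrix_power(B, 2)
```

Extracting spectral information then amounts to studying the eigenvalues of B, e.g. via `np.linalg.eigvals(B)`; for the random ensembles in the talk the interest lies in how the extremal eigenvalues behave as the graph grows.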
Ton Dieker, Columbia University, USA
Ton Dieker is an Associate Professor in the Department of Industrial Engineering and Operations Research at Columbia University and is affiliated with the Data Science Institute. He has received the Goldstine Fellowship from IBM Research, a CAREER/PECASE Award from the National Science Foundation, and the Erlang Prize from the Applied Probability Society of INFORMS. He serves or has served on the editorial boards of Operations Research, Mathematics of Operations Research, Journal of Applied Probability / Advances in Applied Probability, and Stochastic Systems. His research interests are in the design and performance of service systems, and the design and analysis of algorithms.
Towards a next-generation methodology for stochastic network analysis
We introduce a modeling methodology for stochastic networks that can be used to generate transient distributions of key performance metrics over time. We present a proof of concept that is lightning fast, remarkably accurate, and widely applicable: for example, it handles multi-server stations, time-varying arrivals, general service distributions, periods of significant overload, and re-entrant flow. Our approach circumvents the state space explosion required by a complete Markov representation by viewing the requisite information from a non-Markov probabilistic perspective. We will describe ideas for future enhancements that go far beyond our proof of concept. Joint work with Steve Hackman (Georgia Tech).
Nelly Litvak, University of Twente and Eindhoven University of Technology, The Netherlands
Nelly Litvak is a professor at the University of Twente and Eindhoven University of Technology in the Netherlands. She received the Stieltjes Prize for the best PhD thesis and a Google Faculty Research Award. She is a managing editor of Internet Mathematics and an associate editor of Stochastic Processes and their Applications. She is a finalist of the 2017 Enlightener award for the best popular science book in Russia. Her research interests are in the study of large networks such as on-line social networks and the World Wide Web, randomized algorithms, and random graphs.
Centrality in large random networks
A centrality is an algorithm that assigns an importance score to each node in a large network, such as an on-line social network or the World Wide Web. The most prominent example is the PageRank algorithm designed by Google to rank web pages. Mathematically, PageRank is the stationary distribution of a simple random walk with restart on a directed graph. A natural yet largely open question is: how does the underlying graph structure affect the distribution of centrality scores? We will address this question by providing an overview of formal analysis for the PageRank distribution. One approach involves formulating the problem in terms of a stochastic fixed-point equation, which enables the analysis of PageRank on trees and on random graphs that can be coupled with a tree. Such random graphs include, for example, the influential configuration model, in which the degrees of the nodes are fixed but connections are made purely at random. Another approach exploits the recent notion of local weak convergence of sparse graph sequences, and builds on the right balance between local and global properties of PageRank. We will discuss how this analysis helps to explain empirically observed phenomena and its potential extension to a large variety of more realistic network models.
The talk is based on joint work with Mariana Olvera-Cravioto, Yana Volkovich, Ningyuan Chen, Remco van der Hofstad, and Alessandro Garavaglia.
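To make the definition in the abstract concrete, here is a minimal sketch (not from the talk) of PageRank as the stationary distribution of a random walk with restart: with probability alpha the walker follows a uniformly chosen out-link, and with probability 1 - alpha it restarts at a uniform node. The damping factor alpha = 0.85, the dangling-node rule, and the toy graph are conventional illustrative assumptions.

```python
def pagerank(adj, alpha=0.85, tol=1e-10, max_iter=1000):
    """Stationary distribution of a random walk with restart.

    adj: dict mapping node -> list of out-neighbours (directed graph).
    alpha: probability of following an out-link; with probability
    1 - alpha the walk restarts at a uniformly chosen node.
    """
    nodes = list(adj)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(max_iter):
        nxt = {v: (1.0 - alpha) / n for v in nodes}
        for v in nodes:
            out = adj[v]
            if out:  # spread this node's rank along its out-links
                share = alpha * rank[v] / len(out)
                for w in out:
                    nxt[w] += share
            else:    # dangling node: its walker restarts uniformly
                for w in nodes:
                    nxt[w] += alpha * rank[v] / n
        done = sum(abs(nxt[v] - rank[v]) for v in nodes) < tol
        rank = nxt
        if done:
            break
    return rank

# Hypothetical toy graph: every node eventually feeds into node 2.
g = {0: [1, 2], 1: [2], 2: [0]}
scores = pagerank(g)
```

In this example node 2, which receives links from both other nodes, ends up with the largest score; the scores form a probability distribution over the nodes.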
(Marcel Neuts Lecturer) Sidney Resnick, Cornell University, USA
Sidney Resnick is the Lee Teng Hui Professor in Engineering in the School of Operations Research and Information Engineering at Cornell University. He is a fellow of the Institute of Mathematical Statistics and of the International Statistical Institute, a founding associate editor of Annals of Applied Probability, and a former associate editor of Stochastic Processes and their Applications, Journal of Applied Probability, Stochastic Models, Extremes and The Mathematical Scientist. His research interests center in applied probability and sometimes cross the boundary into statistics. He has worked on modeling questions for queues, storage facilities, extremes, and data networks, and has worried about estimation problems for tails and non-standard time series models.
Why Model the Growth of Networks?
Social network modeling provides plenty of data, but realistic models for network growth must be simple if mathematical results are expected. We have used preferential attachment (PA) models with a small number of parameters in an attempt to strike a balance between the mathematics and the statistical fitting. The PA models struggle to match the data but provide a context in which to test methods and analyze estimation techniques. Numerical summaries of network characteristics are often estimated using methods imported from classical statistics without real justification. For example, the Hill estimator coupled with a minimum-distance threshold-selection technique is commonly used. We discuss some attempts to justify and understand these estimation methods in the context of PA models. Without a model and its properties, there is no way to understand the limitations of estimation methods.
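For readers unfamiliar with the Hill estimator mentioned above, here is a minimal sketch (an illustration, not the talk's methodology): it estimates the reciprocal tail index 1/alpha of a heavy-tailed sample from its k largest order statistics. The threshold-selection problem the talk discusses is sidestepped here by simply fixing k, and the Pareto test sample is a hypothetical sanity check.

```python
import math
import random

def hill_estimator(data, k):
    """Hill estimator of the reciprocal tail index 1/alpha.

    Averages log(X_(i) / X_(k+1)) over the k largest order statistics
    X_(1) >= ... >= X_(k). Choosing k is the threshold-selection
    problem; here k is fixed by the caller for illustration.
    """
    xs = sorted(data, reverse=True)
    if not 0 < k < len(xs):
        raise ValueError("k must satisfy 0 < k < len(data)")
    return sum(math.log(xs[i] / xs[k]) for i in range(k)) / k

# Sanity check on synthetic Pareto(alpha = 2) data, where the true
# reciprocal tail index is 1/alpha = 0.5.
random.seed(0)
alpha = 2.0
sample = [random.random() ** (-1.0 / alpha) for _ in range(20000)]
est = hill_estimator(sample, k=1000)
```

On exact Pareto data the estimator is unbiased; on real network degree data, the estimate varies strongly with k, which is precisely why a defensible threshold-selection rule, and a model under which to analyze it, matters.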