Chair: Dr Ajay Chandra

15:00–15:05  Introduction

 

15:05–15:20  Prof Tom Cass: TBA

 

15:20–15:35  Prof Dan Crisan: Probabilistic representations of the derivative of the killed diffusion semigroup

I will introduce a probabilistic representation for the derivative of the semigroup corresponding to a diffusion process killed at the boundary of a half interval. In particular, I will show that the derivative of the semigroup can be expressed as the expected value of a functional of a reflected diffusion process. Furthermore, I will introduce a Bismut-Elworthy-Li formula which is also valid at the boundary. This is joint work with Arturo Kohatsu-Higa (Ritsumeikan University) and is based on the papers:
[1] D. Crisan, A. Kohatsu-Higa, A probabilistic representation of the derivative of a one-dimensional killed diffusion semigroup and associated Bismut-Elworthy-Li formula, arXiv preprint arXiv:2312.07084, 2023.

[2] D. Crisan, A. Kohatsu-Higa, Probabilistic representations of the gradient of a killed diffusion semigroup: The half-space case, arXiv preprint arXiv:2312.07096, 2023.

 

15:35–15:50  Dr Cristopher Salvi: Scaling limits of random recurrent-residual neural networks

I will present some scaling limit results for random recurrent and residual neural networks as the width and depth tend to infinity. When the activation function is the identity, I will show that the limiting object is a Gaussian measure on a space of paths whose covariance agrees with the so-called signature kernel.

 

16:00–16:15  Prof Darryl Holm: TBA

 

16:15–16:30  Dr Pierre-François Rodriguez: Random walks and critical phenomena

I will give some flavours of a research area motivated on the one hand by problems concerning critical phenomena arising in statistical physics, and on the other by fragmentation and covering problems for random walks.

 

16:30–16:45  Dr Zhang Yufei: Some recent progress for reinforcement learning in continuous time and space

Recently, reinforcement learning (RL) has attracted substantial research interest. Much of the attention and success, however, has been in the discrete setting. RL in continuous time and space, despite its natural analytical connection to stochastic control, has been largely unexplored, with limited progress. In particular, characterising sample efficiency for RL algorithms remains a challenging open problem. The talk will summarise some recent advances in the sample efficiency of learning algorithms inspired by filtering theory. The approach is probabilistic, involving quantifying the precise performance gap between applying greedy policies derived from estimated and true models, and exploring the concentration properties of Bayesian estimators.

 

16:45–17:00  Dr Eyal Neuman: Equilibrium in Functional Stochastic Games with Mean-Field Interaction

We consider a general class of finite-player stochastic games with mean-field interaction, in which the linear-quadratic
cost functional includes linear operators acting on controls in L^2. We propose a novel approach for deriving the
Nash equilibrium of the game explicitly in terms of operator resolvents, by reducing the associated first-order conditions to a system of stochastic Fredholm equations of the second kind and deriving their closed-form solution.
Furthermore, by proving stability results for the system of stochastic Fredholm equations we derive the convergence
of the equilibrium of the $N$-player game to the corresponding mean-field equilibrium. As a by-product we also
derive an $\varepsilon$-Nash equilibrium for the mean-field game, which is valuable in this setting as we show that the
conditions for existence of an equilibrium in the mean-field limit are less restrictive than in the finite-player game.
Finally, we apply our general framework to solve various examples, such as stochastic Volterra linear-quadratic games,
models of systemic risk and advertising with delay, and optimal liquidation games with transient price impact.

This is a joint work with Eduardo Abi Jaber and Moritz Voss.

 

Refreshments will be provided in room 410 at 3:00 pm.
