Infinite time horizon optimal control of McKean-Vlasov SDEs
In this talk we present a theory of optimal control for McKean-Vlasov equations over an infinite time horizon. Starting from the finite horizon case, we consider a system whose dynamics are described by a stochastic differential equation of McKean-Vlasov type on the interval [0, +∞), meaning that the coefficients of the state equation depend not only on the trajectory of the state process X, but also on its probability distribution. Suitable hypotheses on the coefficients guarantee the existence of a unique solution to the state equation. Given a reward functional, an optimization problem is then defined. The talk focuses in particular on the properties of the associated value function V, which is initially defined on the space of square-integrable random variables on the underlying probability space. In analogy with the finite horizon case, we study whether it can be rewritten as a function on an appropriate space of probability measures. We then introduce a partial differential equation on this space of probability measures, the Hamilton-Jacobi-Bellman equation, and analyse its link with the value function by means of a suitable notion of viscosity solution. We expect the value function to be the unique viscosity solution of this Hamilton-Jacobi-Bellman equation.
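For concreteness, a standard formulation of this kind of problem is sketched below. This is a minimal illustration with symbols not fixed by the abstract itself: a Brownian motion B, a drift b, a diffusion coefficient σ, a running reward f, a discount rate β > 0, and controls α taking values in a set A. The state equation and value function read

\[
dX_t = b\bigl(X_t, \mathbb{P}_{X_t}, \alpha_t\bigr)\,dt + \sigma\bigl(X_t, \mathbb{P}_{X_t}, \alpha_t\bigr)\,dB_t, \qquad X_0 = \xi \in L^2(\Omega;\mathbb{R}^d),
\]
\[
V(\xi) = \sup_{\alpha}\, \mathbb{E}\Big[ \int_0^{+\infty} e^{-\beta t}\, f\bigl(X_t, \mathbb{P}_{X_t}, \alpha_t\bigr)\, dt \Big].
\]

When V(ξ) depends on ξ only through its law, one may set v(μ) := V(ξ) for any ξ with \(\mathbb{P}_{\xi} = \mu\); the associated stationary Hamilton-Jacobi-Bellman equation on the space of probability measures then reads, schematically and in terms of the Lions derivative \(\partial_\mu v\),

\[
\beta\, v(\mu) = \int_{\mathbb{R}^d} \sup_{a \in A} \Big[ b(x,\mu,a)\cdot \partial_\mu v(\mu)(x) + \tfrac{1}{2}\,\mathrm{Tr}\big( \sigma\sigma^{\top}(x,\mu,a)\, \partial_x \partial_\mu v(\mu)(x) \big) + f(x,\mu,a) \Big]\, \mu(dx).
\]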
Area: CS3 - Mean Field Games and Mean Field Control II (Andrea Cosso & Luciano Campi)
Keywords: McKean-Vlasov SDEs, Hamilton-Jacobi-Bellman equations, optimal control, viscosity solutions