Dynamic programming in stochastic control of systems with delay

We consider optimal control problems for systems described by stochastic differential equations with
delay (SDDEs). We prove a version of Bellman’s principle of optimality (the dynamic programming
principle) for a general class of such problems, where both the dynamics and the cost may depend on
the past in a general way. As an application, we study systems in which the value function depends
on the past only through some weighted average. For such systems we obtain a Hamilton–Jacobi–Bellman
partial differential equation that the value function must satisfy if it is smooth enough.
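
To fix ideas, here is a minimal sketch of such a system in LaTeX notation; the one-dimensional setting and the exponential weight are illustrative assumptions, not necessarily the paper's exact formulation:

\begin{align*}
  dX(t) &= b\bigl(X(t), Y(t), u(t)\bigr)\,dt
           + \sigma\bigl(X(t), Y(t), u(t)\bigr)\,dW(t), \\
  Y(t)  &= \int_{-\delta}^{0} e^{\lambda s}\, X(t+s)\,ds,
\end{align*}

where $u$ is the control, $\delta > 0$ the delay, and $\lambda$ the weight parameter. If the value function depends on the path segment $\{X(t+s) : -\delta \le s \le 0\}$ only through the pair $(X(t), Y(t))$, one may look for $V = V(t, x, y)$ and derive a Hamilton–Jacobi–Bellman equation in the finite-dimensional variables $(t, x, y)$ rather than an equation on path space.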
The weak uniqueness of the SDDEs we consider is our main tool in proving the result. Notions of
strong and weak uniqueness for SDDEs are introduced, and we prove that strong uniqueness implies
weak uniqueness, just as for ordinary stochastic differential equations.
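
For orientation, the notions involved can be stated in the standard form familiar from the SDE literature; the paper's precise definitions may differ in detail:

\begin{itemize}
  \item \emph{Strong (pathwise) uniqueness:} if $X$ and $\tilde X$ solve the SDDE on the same probability space, driven by the same Brownian motion $W$ and with the same initial segment on $[-\delta, 0]$, then $X$ and $\tilde X$ are indistinguishable.
  \item \emph{Weak uniqueness (uniqueness in law):} any two weak solutions with the same initial distribution induce the same law on path space.
\end{itemize}

The implication strong $\Rightarrow$ weak is then the delay analogue of the classical Yamada–Watanabe result for ordinary SDEs.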
Keywords: Stochastic delay equations; Optimal stochastic control; Dynamic programming;
Hamilton–Jacobi–Bellman equations



