TY - JOUR
T1 - The averaging principle for perturbations of continuous time control problems with fast controlled jump parameters
AU - El Azouzi, Rachid
AU - Altman, Eitan
AU - Gaitsgory, Vladimir
PY - 2000
Y1 - 2000
N2 - We consider a class of singularly perturbed zero-sum differential games with piecewise deterministic dynamics, where the changes from one structure (for the dynamics) to another are governed by a finite-state Markov process. Player 1 controls the continuous dynamics, whereas Player 2 controls the transition rates of the finite-state Markov process; both have access to the states of both processes. Player 1 wishes to minimize a given quantity. For Player 2, we consider two possible scenarios: one in which it wishes to minimize the same quantity (team framework), and one in which it wishes to maximize it (zero-sum game). The transition rates of the Markov process are fast, of the order of 1/ε. To solve the above problem, we use the dynamic programming approach. In particular, we study the asymptotic properties of the underlying system for sufficiently small ε. The viscosity solution method is employed to establish the convergence of the value function, which allows us to obtain convergence in a general setting and helps us to characterize the structure of the limit system. We apply this to the special case of linear quadratic games with jump parameters, for which we obtain an explicit solution to the limiting problem.
UR - http://www.scopus.com/inward/record.url?scp=0034439894&partnerID=8YFLogxK
U2 - 10.1109/CDC.2000.912855
DO - 10.1109/CDC.2000.912855
M3 - Article
AN - SCOPUS:0034439894
SN - 0191-2216
VL - 1
SP - 730
EP - 735
JO - Proceedings of the IEEE Conference on Decision and Control
JF - Proceedings of the IEEE Conference on Decision and Control
ER -