Control Theory: From Classical to Quantum
Optimal, Stochastic, and Robust Control

Notes for Quantum Control Summer School, Caltech, August 2005

M.R. James∗
Department of Engineering
Australian National University
Matthew.James@anu.edu.au

∗This work was supported by the Australian Research Council.

Contents

1 Introduction  3

2 Deterministic Dynamic Programming and Viscosity Solutions  5
  2.1 Introduction  5
    2.1.1 Preamble  5
    2.1.2 Optimal Control  5
    2.1.3 Distance Function  9
    2.1.4 Viscosity Solutions  10
  2.2 Value Functions are Viscosity Solutions  12
    2.2.1 The Distance Function is a Viscosity Solution  12
    2.2.2 The Optimal Control Value Function is a Viscosity Solution  14
  2.3 Comparison and Uniqueness  17
    2.3.1 Dirichlet Problem  17
    2.3.2 Cauchy Problem  20

3 Stochastic Control  22
  3.1 Some Probability Theory  22
    3.1.1 Basic Definitions  22
    3.1.2 Conditional Expectations  23
    3.1.3 Stochastic Processes  25
    3.1.4 Martingales  27
    3.1.5 Semimartingales  28
    3.1.6 Markov Processes  29
    3.1.7 Observation Processes  31
    3.1.8 Linear Representation of a Markov Chain  32
  3.2 Controlled State Space Models  32
    3.2.1 Feedback Control Laws or Policies  34
    3.2.2 Partial and Full State Information  34
  3.3 Filtering  34
    3.3.1 Introduction  34
    3.3.2 The Kalman Filter  38
    3.3.3 The Kalman Filter for Controlled Linear Systems  39
    3.3.4 The HMM Filter (Markov Chain)  39
    3.3.5 Filter for Controlled HMM  42
  3.4 Dynamic Programming - Case I: Complete State Information  42
    3.4.1 Optimal Control Problem  43
  3.5 Dynamic Programming - Case II: Partial State Information  46
    3.5.1 Optimal Control of HMM's  47
    3.5.2 Optimal Control of Linear Systems (LQG)  48
  3.6 Two Continuous Time Problems  50
    3.6.1 System and Kalman Filter  50
    3.6.2 LQG Control  51
    3.6.3 LEQG Control  51

4 Robust Control  53
  4.1 Introduction and Background  53
  4.2 The Standard Problem of H∞ Control  54
    4.2.1 The Plant (Physical System Being Controlled)  54
    4.2.2 The Class of Controllers  55
    4.2.3 Control Objectives  55
  4.3 The Solution for Linear Systems  56
    4.3.1 Problem Formulation  56
    4.3.2 Background on Riccati Equations  57
    4.3.3 Standard Assumptions  57
    4.3.4 Problem Solution  58
  4.4 Risk-Sensitive Stochastic Control and Robustness  59

5 Optimal Feedback Control of Quantum Systems  61
  5.1 Preliminaries  61
  5.2 The Feedback Control Problem  62
  5.3 Conditional Dynamics  63
    5.3.1 Controlled State Transfer  63
    5.3.2 Feedback Control  66
  5.4 Optimal Control  69
  5.5 Appendix: Formulas for the Two-State System with Feedback Example  73
6 Optimal Risk-Sensitive Feedback Control of Quantum Systems  74
  6.1 System Model  74
  6.2 Risk-Neutral Optimal Control  76
  6.3 Risk-Sensitive Optimal Control  77
  6.4 Control of a Two Level Atom  80
    6.4.1 Setup  80
    6.4.2 Information State  80
    6.4.3 Dynamic Programming  81
    6.4.4 Risk-Neutral Control  82
  6.5 Control of a Trapped Atom  83
    6.5.1 Setup  83
    6.5.2 Information State  84
    6.5.3 Optimal LEQG Control  85
    6.5.4 Robustness  85

1 Introduction

The purpose of these notes is to provide an overview of some aspects of optimal and robust control theory considered relevant to quantum control. The notes begin with classical deterministic optimal control, move through classical stochastic and robust control, and conclude with quantum feedback control. Optimal control theory is a systematic approach to controller design whereby the desired performance objectives are encoded in a cost function, which is subsequently optimized to determine the desired controller. Robust control theory aims to enhance the robustness (the ability to withstand, to some extent, uncertainty, errors, etc.) of controller designs by explicitly including uncertainty models in the design process. Some of the material is in continuous time, while other material is written in discrete time. There are two underlying and universal themes in the notes: dynamic programming and filtering.

Dynamic programming is one of the two fundamental tools of optimal control, the other being Pontryagin's principle [24]. Dynamic programming is a means by which candidate optimal controls can be verified to be optimal. The procedure is to find a suitable solution to a dynamic programming equation (DPE), which encodes the optimal performance, and to use it to compare the performance of a candidate optimal control. Candidate controls may be determined from Pontryagin's principle, or directly from the solution to the DPE.

In general it is difficult to solve DPEs. Explicit solutions exist in cases like the linear quadratic regulator, but in general approximations must usually be sought. In addition, there are some technical complications regarding the DPE. In continuous time, the DPE is a nonlinear PDE, commonly called the Hamilton-Jacobi-Bellman (HJB) equation. The complications concern differentiability, or lack thereof, and occur even in "simple" classical deterministic problems; see Section 2. This is one reason it can be helpful to work in discrete time, where such regularity issues are much simpler (another reason for working in discrete time is to facilitate digital implementation).
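To make the preceding description of dynamic programming concrete, the following is a minimal numerical sketch (not taken from these notes) of backward induction for a discrete-time, finite-horizon problem on a small grid. The dynamics f, running cost L, terminal cost psi, grids, and horizon below are hypothetical placeholders; the only point is the backward recursion V_k(x) = min over u of [ L(x, u) + V_{k+1}(f(x, u)) ] that a discrete-time DPE encodes.

# Backward-induction dynamic programming on a finite grid (illustrative sketch).
# The dynamics, costs, and grids below are hypothetical, chosen only to show the recursion.
import numpy as np

x_grid = np.linspace(-2.0, 2.0, 81)      # discretized state space
u_grid = np.linspace(-1.0, 1.0, 21)      # discretized control set
N = 50                                    # number of time steps
dt = 0.05

def f(x, u):                              # one-step (Euler) dynamics: x_{k+1} = x_k + u_k*dt
    return x + u * dt

def L(x, u):                              # running cost
    return (x**2 + u**2) * dt

def psi(x):                               # terminal cost
    return x**2

V = psi(x_grid)                           # V_N(x) = psi(x)
policy = np.zeros((N, x_grid.size), dtype=int)

for k in range(N - 1, -1, -1):            # step backward in time
    V_new = np.empty_like(V)
    for i, x in enumerate(x_grid):
        # cost-to-go for each candidate control; np.interp evaluates V_{k+1} at the
        # successor state (holding the endpoint values outside the grid)
        q = [L(x, u) + np.interp(f(x, u), x_grid, V) for u in u_grid]
        j = int(np.argmin(q))
        V_new[i] = q[j]
        policy[k, i] = j                  # store the minimizing control index (a feedback law)
    V = V_new

print("approximate optimal cost from x0 = 1.0:", np.interp(1.0, x_grid, V))

The stored minimizing indices define a feedback law u = u_k(x), illustrating how a solution of the DPE can be used both to check candidate controls and to construct a controller; the continuous-time HJB equations discussed below play the analogous role.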
Filtering concerns the processing of measurement information. In optimal control, filters are used to represent information about the system and control problem of interest. In general, this information is incomplete, i.e. the state is typically not fully accessible, and may be corrupted by noise. To solve optimal control problems in these situations, the cost function is expressed in terms of the state of a suitably chosen filter, which is often called an information state. Dynamic programming can then be applied using the information state dynamics. The nature of the measurements and the purpose for which the data is to be used determine the architecture of the filter. In stochastic situations, this is closely linked to the probabilistic concept of conditional expectation. The famous Kalman filter dynamically computes conditional expectations (of states given measurements in linear gaussian models), which are also optimal estimates in the mean square error sense; a minimal one-dimensional sketch is given at the end of this introduction. The quantum Belavkin filter, or stochastic master equation, also computes a quantum version of conditional expectation. In linear gaussian cases, the information states are gaussian, a fact which considerably simplifies matters due to the finite number of parameters. Filters such as these, based on computing conditional expectations of states or system variables, do not include any information about the cost or performance objective. While this is not an issue for many problems such as LQG, where the task of estimation can be completely decoupled from that of control [17], there are important problems where the filter dynamics must be modified to take into account the control objective. These problems include LEQG [48, 49] or risk-sensitive control [8, 37], and H∞ robust control [19, 54].

Figure 1 shows a physical system being controlled in a feedback loop. The so-called separation structure of the controller is shown. The control values are computed in the box marked "control", using a function of the information state determined using dynamic programming. The information state, as has been mentioned, is the state of the filter, whose dynamics are built into the box marked "filter". This structure embodies the two themes of these notes.

[Figure 1: Feedback controller showing the separation structure. The feedback controller consists of a filter driven by the system output y, followed by a control map producing the input u applied to the physical system.]

These notes were assembled from various lecture notes and research papers, and so we apologize for the inevitable inconsistencies that resulted.
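As a concrete instance of the filtering theme, the following is a minimal sketch of a scalar discrete-time Kalman filter, the prototypical conditional-expectation filter mentioned above. The model and its parameters (a, c, Q, R) are hypothetical, not taken from these notes; the sketch only shows the predict/update recursion for the conditional mean and variance, which together form the (gaussian) information state.

# Scalar discrete-time Kalman filter (illustrative sketch; model parameters are hypothetical).
# Model: x_{k+1} = a*x_k + w_k,  y_k = c*x_k + v_k,  with w_k ~ N(0, Q), v_k ~ N(0, R).
import numpy as np

rng = np.random.default_rng(0)
a, c, Q, R = 0.95, 1.0, 0.01, 0.1
N = 100

x = 1.0                                    # true (hidden) state
x_hat, P = 0.0, 1.0                        # conditional mean and variance (the information state)

for k in range(N):
    # simulate the true system and a noisy measurement
    x = a * x + rng.normal(0.0, np.sqrt(Q))
    y = c * x + rng.normal(0.0, np.sqrt(R))

    # predict: propagate the conditional mean and variance through the dynamics
    x_pred = a * x_hat
    P_pred = a * P * a + Q

    # update: incorporate the new measurement via the Kalman gain
    K = P_pred * c / (c * P_pred * c + R)
    x_hat = x_pred + K * (y - c * x_pred)
    P = (1.0 - K * c) * P_pred

print("final estimate:", x_hat, "true state:", x, "conditional variance:", P)

In the LQG problem treated in Section 3, a controller with the separation structure of Figure 1 applies a feedback gain to the conditional mean produced by such a filter.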
2 Deterministic Dynamic Programming and Viscosity Solutions

References for this section include [24], [25], [3], [15].

2.1 Introduction

2.1.1 Preamble

Hamilton-Jacobi (HJ) equations are nonlinear first-order partial differential equations of the form

F(x, V(x), ∇V(x)) = 0    (1)

(one can also consider second-order equations, but we do not do so here). V(x) (x ∈ Ω ⊂ R^n) is the unknown function to be solved for, and ∇V(x) = (∂V(x)/∂x1, ..., ∂V(x)/∂xn) denotes the gradient. F(x, v, λ) is a nonlinear function.

HJ equations have a long history, dating back at least to the calculus of variations of the 19th century, and HJ equations find wide application in science, engineering, etc. Perhaps surprisingly, it was only relatively recently that a satisfactory general notion of solutions for (1) became available, with the introduction of the concept of viscosity solution (Crandall-Lions, c. 1980). The difficulty, of course, is that solutions are not in general globally smooth (e.g. C^1). Solutions are often smooth in certain regions, in which the famous method of characteristics may be used to construct solutions. There are a number of other notions of solution available, such as those encountered in non-smooth analysis (e.g. proximal solutions), though we will not discuss them here.

In engineering, our principal interest in HJ equations lies in their connection with optimal control (and games) via the dynamic programming methodology. The value function is a solution to an HJ equation, and solutions of HJ equations can be used to test a controller for optimality, or perhaps to construct a feedback controller. In these notes we discuss dynamic programming and viscosity solutions in the context of two examples, and make some mention of the general theory.

2.1.2 Optimal Control

As a first and perhaps familiar example (e.g. LQR), let's consider a finite time horizon optimal control problem defined on a time interval [t0, t1]:

J∗(t0, x0) = inf_{u(·)} J(t0, x0, u(·))    (2)

Here, x0 is the initial state at time t0, and u(·) is the control; J(t0, x0, u(·)) represents the associated cost. To be specific, and to prepare us for dynamic programming, suppose one wants to minimize the cost functional

J(t, x; u(·)) = ∫_t^t1 L(x(s), u(s)) ds + ψ(x(t1)),    (3)
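For orientation, and anticipating the dynamic programming discussion of Section 2.2, it may help to record the HJB equation that the value function of (2)-(3) formally satisfies wherever it is differentiable. Writing the controlled dynamics generically as ẋ(s) = f(x(s), u(s)) with u(s) taking values in a control set U (a placeholder for the precise system description used in these notes), the standard finite-horizon HJB equation is

∂V/∂t (t, x) + inf_{u ∈ U} { ∇_x V(t, x) · f(x, u) + L(x, u) } = 0,   t < t1,

with terminal condition V(t1, x) = ψ(x). In the LQR special case (linear dynamics, quadratic L and ψ), V is quadratic in x and this PDE reduces to a Riccati differential equation, one of the explicit solutions alluded to in the Introduction; in general, however, V need not be differentiable, which is precisely why the viscosity solution concept of Section 2.1.4 is needed.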