Control Synthesis for Nonlinear Optimal Control via Convex Relaxations
This work was supported by Ford Motor Company.
Abstract
This paper addresses the problem of control synthesis for nonlinear optimal control problems in the presence of state and input constraints. The presented approach relies upon transforming the given problem into an infinite-dimensional linear program over the space of measures. To generate approximations to this infinite-dimensional program, a sequence of Semi-Definite Programs (SDPs) is formulated in the instance of polynomial cost and dynamics with semi-algebraic state and bounded input constraints. A method to extract a polynomial control function from each SDP is also given. This paper proves that the controller synthesized from each of these SDPs generates a sequence of values that converge from below to the value of the optimal control of the original optimal control problem. In contrast to existing approaches, the presented method does not assume that the optimal control is continuous while still proving that the sequence of approximations is optimal. Moreover, the sequence of controllers synthesized using the presented approach is proven to converge to the true optimal control. The performance of the presented method is demonstrated on three examples.
1 Introduction
A variety of engineering problems require searching for optimal system trajectories while satisfying certain constraints [1]. Despite the numerous applications for these optimal control problems, they remain challenging to solve. Assorted theoretical approaches have been proposed to address these problems, including the maximum principle [2] and the Hamilton-Jacobi-Bellman Equation [3]. Due to the challenges associated with applying these methods, various numerical techniques have been extensively employed, such as direct methods that rely upon gradient-descent-type algorithms [4] and multiple shooting methods that require solving a two-point boundary value problem [5]. Though they render the optimal control problem more amenable to computation, these approaches struggle to find global minimizers.
By recasting the optimal control problem into its weak formulation, one can write the same problem as an infinite-dimensional Linear Program (LP) over the space of measures [6]. Approximations to this infinite-dimensional linear program have recently been developed using Semi-Definite Programming (SDP) hierarchies [7]. This approach has proven an effective method to tractably construct a sequence of lower bounds to the optimal value of the original optimal control problem, even in the presence of nonlinear dynamics and state constraints.
Unfortunately, the construction of the optimal controller that achieves this optimal value has remained challenging. For example, approaches that rely upon iterating on the extracted solution [8] or assuming the differentiability of the optimal control [9] have been proposed. The former method is unable to guarantee the optimality of the generated controller. Though the latter approach is able to prove convergence of the controller, of a value function, and, more recently, even a rate of convergence [9], it assumes differentiability of the optimal control, which is a restrictive assumption [10]. This paper addresses these limitations by proposing a method to perform optimal control synthesis for these nonlinear, state-constrained optimal control problems that are solved using SDP approximations to the weak formulation.
The contributions of this paper are as follows: we formulate the optimal control problem as an infinite-dimensional linear program over the space of nonnegative measures, which is amenable to the extraction of the optimal control. To numerically solve this infinite-dimensional LP, we construct a sequence of relaxations in terms of finite-dimensional SDPs and illustrate how a polynomial control law can be extracted from a solution to any of the SDPs. This extraction method extends a recently developed method to perform control synthesis for control problems formulated in the weak sense [11]. Finally, we prove the convergence of this sequence of control laws to the optimal controller.
This paper is organized as follows: Section 2 defines the problem of interest and key results about occupation measures; Section 3 formulates the optimal control problem as an infinite-dimensional LP and proves its equivalence to the original optimal control problem; Section 4 provides a means to solve the infinite-dimensional problem using a sequence of SDP relaxations and discusses the control extraction method; Section 5 demonstrates the performance of our approach on three examples.
2 Preliminaries
In this section, we introduce the notation used throughout this paper and formalize our problem of interest. We make substantial use of measure theory, and the unfamiliar reader may wish to consult [12] for an introduction.
2.1 Notation
Given an element $x \in \mathbb{R}^n$, let $x_i$ denote the $i$-th component of $x$. We use the same convention for elements belonging to any multidimensional vector space. Let $\mathbb{R}[x]$ denote the ring of real polynomials in the variable $x$. Suppose $K$ is a compact Borel subset of $\mathbb{R}^n$; then we let $C(K)$ be the space of continuous functions on $K$ and $M(K)$ be the space of finite signed Radon measures on $K$, whose positive cone $M_+(K)$ is the space of unsigned Radon measures on $K$. Since any measure $\mu \in M_+(K)$ can be viewed as an element of the dual space to $C(K)$, the duality pairing of $\mu$ on a test function $v \in C(K)$ is:

$$\langle \mu, v \rangle = \int_K v(x) \, d\mu(x). \qquad (1)$$

For any $\mu \in M_+(K)$, we let the support of $\mu$ be denoted as $\operatorname{spt}(\mu)$. A probability measure is a nonnegative measure whose total mass is one.
2.2 Problem Formulation
Consider a control-affine system:
$$\dot{x}(t) = f(t, x(t)) + g(t, x(t)) \, u(t), \qquad (2)$$

where $f : [0, T] \times \mathbb{R}^n \to \mathbb{R}^n$ is a continuous function, $g : [0, T] \times \mathbb{R}^n \to \mathbb{R}^{n \times m}$ is a continuous function, $x(t) \in \mathbb{R}^n$ is the system's state at time $t$, and $u(t) \in \mathbb{R}^m$ is a control action at time $t$. Furthermore, the control $u$ is a Borel measurable function defined on an interval $[0, T]$ which satisfies the input constraint:

$$u(t) \in U \quad \text{for all } t \in [0, T], \qquad (3)$$

where $U \subset \mathbb{R}^m$.
Remark 1.
Without loss of generality, we assume $U$ is the closed unit box, i.e. $U = [-1, 1]^m$ (since any bounded input box can be arbitrarily shifted and scaled).
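As a concrete illustration, the closed-loop behavior of a control-affine system under a box input constraint can be simulated by forward integration. The following sketch uses a toy single-state system; the drift, input gain, and feedback law are illustrative choices, not taken from the paper:

```python
def simulate(f, g, u, x0, T=1.0, dt=1e-3):
    """Forward-Euler rollout of x'(t) = f(t, x) + g(t, x) * u(t, x), scalar state."""
    x, t = x0, 0.0
    traj = [(t, x)]
    while t < T:
        ui = max(-1.0, min(1.0, u(t, x)))  # clip to the box input constraint U = [-1, 1]
        x = x + dt * (f(t, x) + g(t, x) * ui)
        t += dt
        traj.append((t, x))
    return traj

# Toy single-state system (illustrative choices, not from the paper):
f = lambda t, x: -x   # drift
g = lambda t, x: 1.0  # input gain
u = lambda t, x: -x   # a simple state-feedback candidate control
traj = simulate(f, g, u, x0=0.5)
```

The clipping step is where the Borel measurability requirement is trivially met: any pointwise function of $(t, x)$ composed with the projection onto $U$ remains an admissible candidate input signal along the rollout.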
The objective of this paper is to find a control that satisfies this input constraint and an associated finite-time trajectory of the system beginning from a given initial condition $x_0$ that satisfies a set of state constraints while reaching a target set and minimizing a user-specified cost function. To formulate this problem, we first define the state constraint set and the target set as $X \subset \mathbb{R}^n$ and $X_T$, respectively, with $X_T \subset X$. We further assume that:
Assumption 2.
$X$ and $X_T$ are compact sets.
This compactness assumption ensures that our optimization problem is well-posed.
Next, we define an admissible trajectory of this system. Given a point $x_0 \in X$ and a $T > 0$, a control $u : [0, T] \to U$ is said to be admissible if there exists an absolutely continuous function $x : [0, T] \to \mathbb{R}^n$ such that:

- $x(0) = x_0$ and $x(T) \in X_T$,
- $x(t) \in X$ for all $t \in [0, T]$, and
- $\dot{x}(t) = f(t, x(t)) + g(t, x(t)) \, u(t)$ for almost every $t \in [0, T]$.

The function $x$ which satisfies these requirements and corresponds to the admissible control $u$ is called an admissible trajectory, and the pair $(x, u)$ is called an admissible pair. We denote the space of admissible trajectories and the space of admissible controls by $\mathcal{X}$ and $\mathcal{U}$, respectively, and the space of admissible pairs by $\mathcal{P}$. Note that the ODE (2) may not admit a unique solution, so the behavior of the system may not be fully characterized by the control alone, but instead by the admissible pair. Finally, for each admissible pair $(x, u)$ the cost is defined as:
$$J(x, u) = \int_0^T h(t, x(t), u(t)) \, dt + H(x(T)), \qquad (4)$$

where $h$ and $H$ are Borel measurable functions.
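A cost of this form decomposes into a running term integrated along the trajectory and a terminal term. A minimal numerical sketch, writing `h` for the running cost and `H` for the terminal cost (the quadratic choices and the sampled trajectory below are illustrative, not the paper's examples):

```python
def cost(traj, u, h, H):
    """Approximate J = integral of h(t, x, u) dt plus terminal cost H(x(T))."""
    J = 0.0
    for (t0, x0), (t1, x1) in zip(traj, traj[1:]):
        J += (t1 - t0) * h(t0, x0, u(t0, x0))  # left-endpoint quadrature
    return J + H(traj[-1][1])

# Illustrative quadratic costs on a coarsely sampled trajectory x(t) = 1 - t:
h = lambda t, x, u: x**2 + u**2
H = lambda x: 10.0 * x**2
u = lambda t, x: 0.0
traj = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
J = cost(traj, u, h, H)  # 0.5 * 1.0 + 0.5 * 0.25 + 0.0 = 0.625
```

Finer time grids sharpen the quadrature of the running term; the terminal term is exact given the final sample.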
Our goal is to solve the optimal control problem of finding an admissible pair that minimizes the cost in Equation (4). That is, we consider the following optimal control problem:

$$\inf_{(x, u)} \; J(x, u) \quad \text{s.t. } (x, u) \text{ is an admissible pair},$$

where the optimization is over the space of measurable control inputs and absolutely continuous functions on $[0, T]$. The optimal cost is defined as

$$J^* = \inf \{ J(x, u) : (x, u) \text{ is an admissible pair} \}. \qquad (5)$$
2.3 Liouville’s Equation
To address this problem, we begin by defining measures whose supports model the evolution of families of trajectories. An initial condition and its evolution can be understood via Equation (2), but the evolution of a family of trajectories understood via a measure must be formalized in a different manner. Let $\mathcal{L}_f$ be a linear operator which acts on a test function $v \in C^1([0, T] \times X)$ as:

$$\mathcal{L}_f v(t, x) = \frac{\partial v}{\partial t}(t, x) + \frac{\partial v}{\partial x}(t, x) \, f(t, x). \qquad (6)$$

Similarly, let $\mathcal{L}_g$ be a linear operator which acts on a test function $v \in C^1([0, T] \times X)$ as:

$$\mathcal{L}_g v(t, x) = \frac{\partial v}{\partial x}(t, x) \, g(t, x). \qquad (7)$$

Finally, let $\mathcal{L}$ be a linear operator which acts on a test function $v \in C^1([0, T] \times X)$ as:

$$\mathcal{L} v(t, x, u) = \mathcal{L}_f v(t, x) + \mathcal{L}_g v(t, x) \, u. \qquad (8)$$
Remark 3.
Using the dual relationship between measures and functions, let $\mathcal{L}_f'$ be the adjoint operator of $\mathcal{L}_f$:

$$\langle \mathcal{L}_f' \mu, v \rangle = \langle \mu, \mathcal{L}_f v \rangle \qquad (9)$$

for all $\mu \in M([0, T] \times X)$ and $v \in C^1([0, T] \times X)$. Similarly, we can define $\mathcal{L}_g'$ and $\mathcal{L}'$ as the adjoint operators of $\mathcal{L}_g$ and $\mathcal{L}$, respectively.
Each of these adjoint operators can be used to describe the evolution of families of trajectories of the system. To formalize this relationship, consider an admissible trajectory $x$ defined on $[0, T]$; we define its associated occupation measure, denoted $\mu(\cdot \mid x)$, as:

$$\mu(A \times B \mid x) = \int_0^T I_{A \times B}(t, x(t)) \, dt \qquad (10)$$

for all subsets $A \times B$ in the Borel $\sigma$-algebra of $[0, T] \times X$, where $I_S(\cdot)$ denotes the indicator function on a set $S$. The quantity $\mu(A \times B \mid x)$ is equal to the amount of time the graph of the trajectory, $(t, x(t))$, spends in $A \times B$. Similarly, the terminal measure associated with $x$, denoted $\mu_T(\cdot \mid x)$, is defined as:

$$\mu_T(B \mid x) = I_B(x(T)), \qquad (11)$$

where $B$ is in the Borel $\sigma$-algebra of $X_T$.
If the control action $u$ associated with an absolutely continuous function $x$ is also given, the occupation measure associated with the pair $(x, u)$, denoted $\mu(\cdot \mid x, u)$, can be defined as:

$$\mu(A \times B \times C \mid x, u) = \int_0^T I_{A \times B \times C}(t, x(t), u(t)) \, dt \qquad (12)$$

for all subsets $A \times B \times C$ in the Borel $\sigma$-algebra of $[0, T] \times X \times U$.
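Numerically, pairing an occupation measure with a test function reduces, per Equation (10), to a time integral of the test function along the trajectory. A sketch on a sampled trajectory (the curve $x(t) = t^2$ is an arbitrary illustrative choice):

```python
def occ_pairing(traj, v):
    """<mu(.|x), v> = integral over [0, T] of v(t, x(t)) dt on a sampled trajectory."""
    return sum((t1 - t0) * v(t0, x0) for (t0, x0), (t1, x1) in zip(traj, traj[1:]))

# Sampled trajectory x(t) = t^2 on [0, 1] (an arbitrary illustrative curve):
traj = [(k * 0.01, (k * 0.01) ** 2) for k in range(101)]

mass = occ_pairing(traj, lambda t, x: 1.0)        # total mass equals T = 1
first_moment = occ_pairing(traj, lambda t, x: t)  # approximates 1/2
```

The first pairing illustrates that the occupation measure of a trajectory on $[0, T]$ always has total mass $T$, regardless of the trajectory itself.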
Remark 4.
Notice that despite the cost function potentially being a nonlinear function of the admissible pair in the space of functions, the analogous cost over the space of measures is linear:

$$J(x, u) = \langle \mu(\cdot \mid x, u), h \rangle + \langle \mu_T(\cdot \mid x), H \rangle. \qquad (13)$$

In fact, a similar analogue holds true for the dynamics of the system. That is, the occupation measures associated with an admissible pair satisfy a linear equation over measures:
Lemma 5.
Given an admissible pair $(x, u)$, its occupation measure $\mu(\cdot \mid x, u)$ and terminal measure $\mu_T(\cdot \mid x)$ satisfy Liouville's Equation, which is defined as:

$$\langle \mu_T(\cdot \mid x), v(T, \cdot) \rangle = v(0, x_0) + \langle \mu(\cdot \mid x, u), \mathcal{L} v \rangle \qquad (14)$$

for all test functions $v \in C^1([0, T] \times X)$. Since this is true for all test functions, we write it as a linear operator equation:

$$\delta_T \otimes \mu_T(\cdot \mid x) = \delta_0 \otimes \delta_{x_0} + \mathcal{L}' \mu(\cdot \mid x, u), \qquad (15)$$

where $\delta_0$ is a Dirac measure at $t = 0$, $\delta_T$ is a Dirac measure at $t = T$, $\delta_{x_0}$ is a Dirac measure at the point $x_0$, and $\otimes$ denotes the product of measures.
Proof.
This lemma follows directly from the definition of occupation measure and terminal measure. ∎
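Liouville's Equation (14) can be checked numerically on a single trajectory: for any smooth test function $v$, the pairing of the occupation measure with $\mathcal{L} v$ must equal $v(T, x(T)) - v(0, x_0)$. The system $\dot{x} = -x$ with zero input and the test function $v(t, x) = t x$ below are illustrative choices:

```python
import math

# Trajectory of x' = -x with zero input, x(0) = 1, so x(t) = exp(-t):
T, x0 = 1.0, 1.0
x = lambda t: math.exp(-t)

# Test function v(t, x) = t * x, so Lv = dv/dt + dv/dx * (-x) = x - t * x.
N = 10000
dt = T / N
lhs = sum(dt * (x(k * dt) - (k * dt) * x(k * dt)) for k in range(N))  # <mu, Lv>
rhs = T * x(T) - 0.0 * x0                                             # v(T, x(T)) - v(0, x0)
```

Both sides evaluate to $e^{-1}$ here; the left Riemann sum matches the fundamental-theorem-of-calculus identity that the lemma encodes, up to quadrature error of order `dt`.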
Now one can ask whether the converse relationship holds. That is, do measures that satisfy Liouville's Equation correspond to trajectories that satisfy the dynamical system defined in Equation (2)? To answer this question, we first consider a family of trajectories modeled by a probability measure $\rho$ defined on the space of admissible trajectories and define an average occupation measure $\bar{\mu}$ for the family of trajectories as:

$$\bar{\mu}(A \times B \times C) = \int \mu(A \times B \times C \mid x, u) \, d\rho \qquad (16)$$

and an average terminal measure, $\bar{\mu}_T$, by

$$\bar{\mu}_T(B) = \int \mu_T(B \mid x) \, d\rho. \qquad (17)$$
The solutions to Liouville's Equation can be disintegrated into a conditional measure over the input given a state and time and a marginal distribution over just the state and time. This marginal distribution in fact coincides with the average occupation measure, and the corresponding conditional probability distribution corresponds to trajectories of the dynamical system as defined in Equation (2):
Lemma 6.
Let $(\mu, \mu_T)$ be a pair of measures satisfying Liouville's Equation (15) such that $\mu \in M_+([0, T] \times X \times U)$ and $\mu_T \in M_+(X_T)$. Then:

- The measure $\mu$ can be disintegrated as
$$d\mu(t, x, u) = d\nu(u \mid t, x) \, d\xi(x \mid t) \, d\lambda(t), \qquad (18)$$
where $\lambda$ is the marginal of $\mu$ on $[0, T]$, $\xi$ is a stochastic kernel on $X$ given $[0, T]$, $\nu$ is a stochastic kernel on $U$ given $[0, T] \times X$, and $\lambda$ coincides with the Lebesgue measure on $[0, T]$.
- There exists a nonnegative probability measure $\rho$ supported on a family of absolutely continuous admissible trajectories that satisfy the differential equation:
$$\dot{x}(t) = f(t, x(t)) + g(t, x(t)) \int_U u \, d\nu(u \mid t, x(t)) \qquad (19)$$
almost everywhere, such that for all measurable functions $v$ and $w$,
$$\langle \mu, v \rangle = \int \langle \mu(\cdot \mid x, u), v \rangle \, d\rho \quad \text{and} \quad \langle \mu_T, w \rangle = \int \langle \mu_T(\cdot \mid x), w \rangle \, d\rho. \qquad (20)$$
The family of trajectories in the support of $\rho$ starts from $x_0$ at $t = 0$. Moreover, the average occupation measure, $\bar{\mu}$, and average terminal measure, $\bar{\mu}_T$, generated by this family of trajectories coincide with $\mu$ and $\mu_T$, respectively.
Proof.
As a result, the supports of the measures that satisfy Liouville's Equation (15) coincide with trajectories that satisfy the differential equation defined in Equation (19). Moreover, the solutions to Equations (2) and (19), as shown in [14, Corollary 3.2], are identical:
Remark 7.
For any trajectory $x$ in the support of $\rho$, Equation (19) can be rewritten as:

$$\dot{x}(t) = f(t, x(t)) + g(t, x(t)) \, \bar{u}(t), \quad \text{where } \bar{u}(t) = \int_U u \, d\nu(u \mid t, x(t)). \qquad (21)$$

Since $\nu$ is a stochastic kernel and $U$ is convex, we have $\bar{u}(t) \in U$. Therefore $(x, \bar{u})$ is an admissible pair.
3 Infinite Dimensional Linear Program
This section formulates the optimal control problem as an infinite-dimensional linear program over the space of measures, proves that this linear program computes the solution to the original optimal control problem, and illustrates how its solution can be used for control synthesis.
Define an infinite-dimensional linear program as:

$$\inf \; \langle \mu, h \rangle + \langle \mu_T, H \rangle$$
$$\text{s.t.} \quad \delta_T \otimes \mu_T = \delta_0 \otimes \delta_{x_0} + \mathcal{L}_f' \mu + \sum_{i=1}^m \mathcal{L}_{g_i}' \mu_i,$$

together with nonnegativity constraints on the measures, where the infimum is taken over a tuple of measures $(\mu, \mu_1, \ldots, \mu_m, \mu_T)$. The dual to this problem is given as a linear program over functions, in which the supremum is taken over a tuple of continuous test functions subject to pointwise inequality constraints.
Next, we have the following useful result:
Theorem 8.
There is no duality gap between this infinite-dimensional linear program and its dual.
Proof.
The proof follows from [15, Theorem 3.10]. ∎
Next, we show that this infinite-dimensional LP solves the original optimal control problem. We do this by first introducing another optimization problem that solves the original problem, and then showing that the two linear programs are equivalent. Define this intermediate optimization problem as

$$\inf \; \langle \mu, h \rangle + \langle \mu_T, H \rangle \quad \text{s.t.} \quad \delta_T \otimes \mu_T = \delta_0 \otimes \delta_{x_0} + \mathcal{L}' \mu,$$

where the infimum is taken over a pair of nonnegative measures $(\mu, \mu_T)$.
Note that this intermediate problem is identical to the primal LP defined in [7], which was shown to compute the optimal value of the original optimal control problem under the following assumption:
Assumption 9.
The running cost $h(t, x, \cdot)$ is convex on $U$ for any $(t, x) \in [0, T] \times X$.
Unfortunately, that formulation is not amenable to control synthesis. In contrast, our formulation, as we describe next, makes control synthesis feasible under the following additional assumption:
Assumption 10.
If the original optimal control problem is feasible, then the optimal admissible pair $(x^*, u^*)$ is unique almost everywhere.
Next, we prove several important properties of the intermediate linear program:
Lemma 12.
If the original optimal control problem is feasible, then

- the minimum of the intermediate LP is attained,
- this minimum equals $J(x^*, u^*)$, where $(x^*, u^*)$ is the optimal admissible pair,
- if $(\mu^*, \mu_T^*)$ is an optimal solution to the intermediate LP, we can disintegrate $\mu^*$ as
$$d\mu^*(t, x, u) = d\nu(u \mid t, x) \, d\xi(x \mid t) \, d\lambda(t); \qquad (22)$$
then $\mu^*$ coincides with the occupation measure associated with $(x^*, u^*)$ almost everywhere,
- for almost every point in the support of $\mu^*$, we have
$$\int_U u \, d\nu(u \mid t, x) = u^*(t, x), \qquad (23)$$
and if every column of $g$ is nonzero almost everywhere along the optimal trajectory $x^*$, then
$$\nu(\cdot \mid t, x) = \delta_{u^*(t, x)} \qquad (24)$$
almost everywhere.
Proof.

This follows from [7, Theorem 2.3(i)].

This follows from [7, Theorem 2.3(iii)] by noting that Assumption 9 holds, i.e. the running cost is convex in the control.

We first show that $\mu^*$ coincides with the occupation measure of a family of trajectories, and then argue that any trajectory in that family, together with some control, achieves the optimal cost. The result then follows by noting that the optimal admissible pair is unique almost everywhere.
Since $(\mu^*, \mu_T^*)$ is optimal and therefore feasible, it satisfies Liouville's Equation (15). Thus $\mu^*$ can be disintegrated according to Lemma 6. By Lemma 6, there exists a probability measure $\rho$ such that $\mu^*$ coincides with the occupation measures of a family of admissible trajectories in the support of $\rho$. We only need to show that all the trajectories in that family are equal to $x^*$ almost everywhere.
Evaluating the cost against this family of trajectories yields a chain of relations (25)-(28), where (25) is obtained from the convexity of the running cost in the control and the fact that $\rho$ is a probability measure; (26) follows from Lemma 6; (27) follows from Fubini's Theorem; and (28) holds because each member of the family is an admissible pair.
Since $\rho$ is a probability measure, every admissible pair whose trajectory lies in the support of $\rho$ must be optimal. Since the optimal admissible pair is assumed to be unique almost everywhere, we have
$$x = x^* \quad \rho\text{-almost everywhere}. \qquad (29)$$
From Lemma 6 and the proof of the previous item, we know that Equation (19) holds for any trajectory in the support of $\rho$ (30). Using Equation (29) and the fact that the dynamics are affine in the control (31), Equation (23) follows.
Since the conditional measure $\nu(\cdot \mid t, x)$ is a probability measure supported on $U$, we know (32). Since every column of $g$ is nonzero almost everywhere along the optimal trajectory, the coefficient multiplying the control is nonzero almost everywhere, and Equation (24) follows.
∎
The previous result ensures that the intermediate LP can be solved to find a solution to the original optimal control problem in a convex manner; however, that formulation is still not amenable to control synthesis. The next pair of results ensures that our LP can be used to solve the original problem and perform control extraction:
Theorem 13.

Our LP is feasible if and only if the original optimal control problem is feasible. Furthermore, if the problem is feasible, the minimum of our LP is attained and equals $J^*$.

If the problem is feasible, let $(\mu^*, \mu_1^*, \ldots, \mu_m^*, \mu_T^*)$ be a minimizer of our LP; then $\mu^*$ coincides with the occupation measure of the optimal admissible pair almost everywhere.
Proof.

Given any feasible point of our LP, the corresponding pair $(\mu, \mu_T)$ will clearly also be feasible for the intermediate LP with the same cost. Therefore the intermediate LP is feasible and its minimum is a lower bound on ours. It remains to show that there is a feasible point of our LP that achieves the intermediate minimum as its cost.
To simplify notation, we assume the number of inputs $m = 1$; the case $m > 1$ can be proved by a similar argument. Let $(\mu^*, \mu_T^*)$ be the optimal solution of the intermediate LP; then Liouville's Equation (15) is automatically satisfied. Again, we disintegrate $\mu^*$ as in Equation (22). Since the resulting map is assumed to be measurable, according to the Riesz representation theorem, there exist signed measures such that
(33) Furthermore, since the control takes values in $U = [-1, 1]$, the resulting signed measure is dominated by $\mu^*$ on its support. We then use the Hahn-Jordan decomposition to express it as
(34) where both components are nonnegative measures. Thus there exists a measure such that
(35) For any test function, we have
(36) Having shown that the constructed measures satisfy all the constraints of our LP and achieve the cost of the intermediate minimum, it follows that the two optimal values coincide.

This follows from Lemma 12 and the proof of the previous item.
∎
Finally, we describe how to perform control synthesis with the solution to our LP:
Theorem 14.
Suppose our LP is feasible and let $(\mu^*, \mu_1^*, \ldots, \mu_m^*, \mu_T^*)$ be the vector of measures that achieves its infimum. Then there exists a control law, $u^* : [0, T] \times X \to U$, such that

$$\mu_i^*(A) = \int_A u_i^*(t, x) \, d\mu^*(t, x) \qquad (37)$$

for all subsets $A$ in the Borel $\sigma$-algebra of $[0, T] \times X$ and for each $i \in \{1, \ldots, m\}$. If, moreover, these measures are optimal solutions to our LP and the columns of $g$ are nonzero almost everywhere along the optimal trajectory (e.g. it is sufficient for some element in each column of $g$ to be nonzero almost everywhere), then $u^*$ and the optimal control are equal almost everywhere.
Proof.
We will prove the first result using the Radon-Nikodym theorem; the second result can be shown by arguing that the two controls agree against every test function in some dense subset of $C([0, T] \times X)$.
The measures $\mu^*$, $\mu_i^*$, and $\mu_T^*$ are finite for all $i$ since they are Radon measures defined over a compact set. Note that each $\mu_i^*$ is therefore also finite. Since $|\mu_i^*|$ is dominated by $\mu^*$, $\mu_i^*$ is absolutely continuous with respect to $\mu^*$. Therefore, as a result of the Radon-Nikodym theorem, there exists a $u_i^*$, which is unique $\mu^*$-almost everywhere, that satisfies Equation (37).
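On a discrete approximation, the Radon-Nikodym extraction of Theorem 14 is simply a pointwise ratio of weights: if the measures are atomic on a common support, then the control at each atom is the ratio of the two atom weights. The atoms and weights below are hypothetical, chosen so that the auxiliary measure was built as $u_i \, d\mu$ with $u_i(t, x) = -x$:

```python
def extract_control(mu, mu_i):
    """Discrete Radon-Nikodym derivative u_i = d(mu_i)/d(mu) for atomic measures
    given as dicts mapping support points (t, x) to weights, spt(mu_i) in spt(mu)."""
    return {pt: mu_i.get(pt, 0.0) / w for pt, w in mu.items() if w > 0.0}

# Hypothetical atoms: mu_i was built as u_i d(mu) with u_i(t, x) = -x:
mu = {(0.0, 1.0): 0.5, (0.5, 0.6): 0.5}
mu_i = {(0.0, 1.0): -0.5, (0.5, 0.6): -0.3}
u_i = extract_control(mu, mu_i)  # {(0.0, 1.0): -1.0, (0.5, 0.6): -0.6}
```

Recovering $u_i(t, x) = -x$ exactly at both atoms illustrates why absolute continuity of $\mu_i^*$ with respect to $\mu^*$ is the key hypothesis: the ratio is only defined where $\mu^*$ places mass.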
4 Numerical Implementation
We compute a solution to the infinite-dimensional problem via a sequence of finite-dimensional approximations formulated as Semi-Definite Programs (SDPs). These are generated by discretizing the measures using moments and restricting the test functions to polynomials. The solution to any of the SDPs in this sequence can be used to synthesize an approximation to the optimal controller. A comprehensive introduction to such moment relaxations can be found in [16].
To derive this discretization, we begin with a few preliminaries. Let $\mathbb{R}_d[x]$ denote the space of real-valued multivariate polynomials of total degree less than or equal to $d$. Then any polynomial $p \in \mathbb{R}_d[x]$ can be expressed in the monomial basis as:

$$p(x) = \sum_{|\alpha| \le d} p_\alpha x^\alpha, \qquad (42)$$

where $\alpha = (\alpha_1, \ldots, \alpha_n)$ ranges over vectors of nonnegative integers such that $|\alpha| = \sum_i \alpha_i \le d$, and we denote $\operatorname{vec}(p) = (p_\alpha)$ as the vector of coefficients of $p$. Given a vector of real numbers $y = (y_\alpha)$ indexed by $\alpha$, we define the linear functional $L_y$ as:

$$L_y(p) = \sum_\alpha p_\alpha y_\alpha. \qquad (43)$$

Note that, when the entries of $y$ are moments of a measure $\mu$:

$$y_\alpha = \int x^\alpha \, d\mu(x), \qquad (44)$$

then

$$L_y(p) = \int p(x) \, d\mu(x). \qquad (45)$$
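The linear functional $L_y$ of Equation (43) is just a sparse dot product between polynomial coefficients and moments. A sketch using the moments of the Lebesgue measure on $[0, 1]$, so that the identity (45) can be checked against a closed-form integral:

```python
def Ly(coeffs, y):
    """L_y(p) = sum over alpha of p_alpha * y_alpha, with coefficients and moments
    stored as dicts keyed by multi-index tuples."""
    return sum(p_a * y[a] for a, p_a in coeffs.items())

# Moments of the Lebesgue measure on [0, 1] (n = 1): y_k = 1 / (k + 1).
y = {(k,): 1.0 / (k + 1) for k in range(5)}
p = {(0,): 1.0, (2,): 3.0}  # p(x) = 1 + 3 x^2
val = Ly(p, y)              # 1 + 3 * (1/3) = 2.0, the integral of p over [0, 1]
```

Storing coefficients and moments as dicts keyed by multi-index tuples keeps the sketch dimension-agnostic: the same `Ly` works for $n > 1$ once the keys are $n$-tuples.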
Given $r \in \mathbb{N}$, the moment matrix $M_r(y)$ is defined as

$$[M_r(y)]_{\alpha \beta} = L_y(x^{\alpha + \beta}) = y_{\alpha + \beta}, \quad |\alpha|, |\beta| \le r. \qquad (46)$$

Given any polynomial $w \in \mathbb{R}[x]$ with $\operatorname{vec}(w) = (w_\gamma)$, the localizing matrix $M_r(w \, y)$ is defined as

$$[M_r(w \, y)]_{\alpha \beta} = \sum_\gamma w_\gamma \, y_{\gamma + \alpha + \beta}, \quad |\alpha|, |\beta| \le r. \qquad (47)$$

Note that the moment and localizing matrices are symmetric and linear in the moments $y$.
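For $n = 1$ the definitions (46)-(47) reduce to Hankel-type matrices. The sketch below builds $M_1(y)$ and a degree-zero localizing matrix for the constraint polynomial $w(x) = x - x^2$, using the Lebesgue moments on $[0, 1]$; both must be positive semidefinite since $y$ comes from an actual measure supported where $w \ge 0$:

```python
def moment_matrix(y, r):
    """[M_r(y)]_{ab} = y_{a+b} for n = 1, with y a list of moments."""
    return [[y[a + b] for b in range(r + 1)] for a in range(r + 1)]

def localizing_matrix(w, y, r):
    """[M_r(w y)]_{ab} = sum over gamma of w_gamma * y_{gamma+a+b} for n = 1."""
    return [[sum(wg * y[g + a + b] for g, wg in enumerate(w))
             for b in range(r + 1)] for a in range(r + 1)]

# Moments of the Lebesgue measure on [0, 1]: y_k = 1 / (k + 1).
y = [1.0 / (k + 1) for k in range(5)]
M1 = moment_matrix(y, 1)                         # [[1, 1/2], [1/2, 1/3]]
det = M1[0][0] * M1[1][1] - M1[0][1] * M1[1][0]  # 1/12 > 0, so M1 is PSD
w = [0.0, 1.0, -1.0]                             # w(x) = x - x^2 >= 0 on [0, 1]
L0 = localizing_matrix(w, y, 0)                  # [[1/2 - 1/3]] = [[1/6]] >= 0
```

In the relaxations, the converse direction is what matters: positive semidefiniteness of these matrices is imposed as a constraint on candidate moment vectors $y$, screening out vectors that cannot come from a measure supported on the semi-algebraic set.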
4.1 LMI Relaxations and SOS Approximations
To construct approximations to our LP, we make the following additional assumptions:
Assumption 15.
The functions $h$, $H$, and the components of $f$ and $g$ are polynomials.
We further assume that the state constraint set and target set are described by polynomials:
Assumption 16.
$X$ and $X_T$ are defined as semi-algebraic sets:

$$X = \{ x \in \mathbb{R}^n : w_j^X(x) \ge 0, \; j = 1, \ldots, n_X \}, \quad X_T = \{ x \in \mathbb{R}^n : w_j^{X_T}(x) \ge 0, \; j = 1, \ldots, n_{X_T} \}, \qquad (48)$$

where each $w_j^X, w_j^{X_T} \in \mathbb{R}[x]$.
A sequence of SDPs approximating our LP can be obtained by replacing constraints on measures with constraints on moments. Since $h$ and $H$ are polynomials, the objective function of our LP can be written using linear functionals of the moment sequences of $\mu$ and $\mu_T$. The equality constraints in our LP can be approximated by an infinite-dimensional linear system, which is obtained by restricting to polynomial test functions of the form $v(t, x) = t^a x^\beta$ for nonnegative integers $a$ and multi-indices $\beta$. For example, Liouville's Equation (15) is written with respect to moments by evaluating (14) on each such monomial test function (49). The positivity constraints in our LP can be replaced with semidefinite constraints on moment matrices and localizing matrices as discussed above.
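To see what the monomial-tested moment form of Liouville's Equation looks like, consider a toy one-dimensional instance (an illustrative choice, not one of the paper's examples): $\dot{x} = u$ with the constant control $u \equiv -1$ from $x(0) = 1$ on $[0, 1]$, so $x(t) = 1 - t$, $f = 0$, and $g = 1$. The occupation-measure moments are Beta integrals, and the residual of the Liouville constraint vanishes for every test monomial $t^a x^b$:

```python
from math import factorial

# Toy instance (illustrative, not one of the paper's examples): x' = u with the
# constant control u = -1 from x(0) = 1 on [0, 1], so x(t) = 1 - t, f = 0, g = 1.
def y(a, b):
    """Occupation-measure moments: integral over [0,1] of t^a (1-t)^b dt (a Beta integral)."""
    return factorial(a) * factorial(b) / factorial(a + b + 1)

def liouville_residual(a, b):
    """Residual of the moment form of (15) tested on v(t, x) = t^a x^b:
    <mu_T, v(T, .)> - v(0, x0) - <mu, dv/dt> - <mu_1, dv/dx>, with mu_1 = u mu = -mu."""
    lhs = (1.0 if b == 0 else 0.0) - (1.0 if a == 0 else 0.0)  # v(1, 0) - v(0, 1)
    d_t = a * y(a - 1, b) if a > 0 else 0.0                    # <mu, a t^(a-1) x^b>
    d_x = -b * y(a, b - 1) if b > 0 else 0.0                   # <mu_1, b t^a x^(b-1)>
    return lhs - d_t - d_x

residuals = [liouville_residual(a, b) for a in range(4) for b in range(4)]  # all zero
```

In the SDP relaxation these residuals become linear equality constraints on the unknown moment vectors, one per test monomial up to the truncation degree, rather than quantities computed from a known trajectory as here.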
A finitedimensional SDP is then obtained by truncating the degree of moments and polynomial test functions to . Let