Handbook of Mathematics for Engineers and Scientists, part 149


CALCULUS OF VARIATIONS AND OPTIMIZATION

Let $x(t) \in C^2[t_0, t_1]$ be an extremal of problem (19.1.3.6) with $\lambda_0 = 1$; i.e., the Euler equation (19.1.3.4) is satisfied on this extremal for the Lagrangian

$$L(t, x, x'_t) = f_0(t, x, x'_t) + \sum_{i=1}^{m} \lambda_i f_i(t, x, x'_t)$$

with some Lagrange multipliers $\lambda_i$.

Legendre condition: If an extremal provides a minimum (resp., maximum) of the functional, then the following inequality holds:

$$L_{x'_t x'_t} \ge 0 \quad (\text{resp., } L_{x'_t x'_t} \le 0), \qquad t_0 \le t \le t_1. \tag{19.1.3.7}$$

Strengthened Legendre condition: If an extremal provides a minimum (resp., maximum) of the functional, then the following strict inequality holds:

$$L_{x'_t x'_t} > 0 \quad (\text{resp., } L_{x'_t x'_t} < 0), \qquad t_0 \le t \le t_1. \tag{19.1.3.8}$$

The equation

$$x L_{xx} + x'_t L_{x'_t x} - \frac{d}{dt}\bigl[x L_{x'_t x} + x'_t L_{x'_t x'_t}\bigr] + \sum_{i=1}^{m} \mu_i g_i = 0, \qquad g_i = -\frac{d}{dt}(f_i)_{x'_t} + (f_i)_x, \tag{19.1.3.9}$$

is called the (inhomogeneous) Jacobi equation for the isoperimetric problem (19.1.3.6) on the extremal $x(t)$; the $\mu_i$ are Lagrange multipliers ($i = 1, 2, \dots, m$).

Suppose that the strengthened Legendre condition (19.1.3.8) is satisfied on the extremal $x(t)$. A point $\tau$ is said to be conjugate to the point $t_0$ if there exists a nontrivial smooth solution $h(t)$ of the Jacobi equation satisfying $h(t_0) = h(\tau) = 0$ and

$$\int_{t_0}^{\tau} g_i(t) h(t)\, dt = 0 \qquad (i = 1, 2, \dots, m).$$

We say that the Jacobi condition (resp., strengthened Jacobi condition) is satisfied on the extremal $x(t)$ if the interval $(t_0, t_1)$ (resp., the half-interval $(t_0, t_1]$) does not contain points conjugate to $t_0$.

A point $\tau$ is conjugate to $t_0$ if and only if the matrix

$$H(\tau) = \begin{pmatrix} h_0(\tau) & \cdots & h_m(\tau) \\ \int_{t_0}^{\tau} g_1 h_0\, dt & \cdots & \int_{t_0}^{\tau} g_1 h_m\, dt \\ \vdots & \ddots & \vdots \\ \int_{t_0}^{\tau} g_m h_0\, dt & \cdots & \int_{t_0}^{\tau} g_m h_m\, dt \end{pmatrix}$$

is degenerate.
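In numerical practice, the Legendre sign condition $L_{x'_t x'_t} \ge 0$ can be tested directly by approximating the second derivative of $L$ in its $x'_t$ slot with a central difference along a sampled extremal. The sketch below is illustrative only; the Lagrangian $L = (x'_t)^2 - x^2$ and the extremal $x = \sin t$ are hypothetical choices, not data from the text.

```python
import math

def legendre_ok(L, x, xdot, ts, minimize=True, h=1e-4, tol=1e-8):
    """Check the strengthened Legendre condition L_{x'x'} > 0 (< 0 for a
    maximum) along a sampled extremal, approximating the second derivative
    of L in the x' slot by a central second difference."""
    for t in ts:
        v = xdot(t)
        # central second difference approximates d^2 L / d(x')^2
        d2 = (L(t, x(t), v + h) - 2.0 * L(t, x(t), v) + L(t, x(t), v - h)) / h**2
        if minimize and d2 <= tol:
            return False
        if not minimize and d2 >= -tol:
            return False
    return True

# Illustrative Lagrangian L = (x')^2 - x^2, extremal x = sin t:
L = lambda t, x, v: v**2 - x**2
ts = [2 * math.pi * k / 100 for k in range(101)]
print(legendre_ok(L, math.sin, math.cos, ts))  # True, since L_{x'x'} = 2 > 0
```

For a quadratic-in-$x'_t$ Lagrangian the central difference is exact up to rounding, so the tolerance `tol` only guards against floating-point noise.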
Necessary conditions for a weak minimum (maximum): Suppose that the Lagrangians $f_i(t, x, x'_t)$ ($i = 0, 1, \dots, m$) in problem (19.1.3.6) are sufficiently smooth. If $x(t) \in C^2[t_0, t_1]$ provides a weak minimum (resp., maximum) in problem (19.1.3.6) and the regularity condition is satisfied (i.e., the functions $g_i(t)$ are linearly independent on each of the intervals $[t_0, \tau]$ and $[\tau, t_1]$ for any $\tau$), then $x(t)$ is an extremal of problem (19.1.3.6) and the Legendre and Jacobi conditions are satisfied on $x(t)$.

Sufficient conditions for a strong minimum (resp., maximum): Suppose that the Lagrangian $L = f_0 + \sum_{i=1}^{m} \mu_i f_i$ is sufficiently smooth and the strengthened Legendre and Jacobi conditions, as well as the regularity condition, are satisfied on an admissible extremal $x(t)$. Then $x(t)$ provides a strong minimum (resp., maximum).

THEOREM. Suppose that the functional $J_0$ in problem (19.1.3.6) is quadratic, i.e.,

$$J_0[x] = \int_{t_0}^{t_1} \bigl[A_0 (x'_t)^2 + B_0 x^2\bigr]\, dt,$$

and the functionals $J_i$ have the form

$$J_i[x] = \int_{t_0}^{t_1} \bigl[a_i (x'_t)^2 + b_i x^2\bigr]\, dt \qquad (i = 1, 2, \dots, m).$$

Moreover, assume that the functions $A_0, a_1, \dots, a_m$ are continuously differentiable, the functions $B_0, b_1, \dots, b_m$ are continuous, and the strengthened Legendre condition and the regularity condition are satisfied. If the Jacobi condition does not hold, then the lower bound in the problem is $-\infty$ (resp., the upper bound is $+\infty$). If the Jacobi condition holds, then there exists a unique admissible extremal that provides the absolute minimum (resp., maximum).

Example 2. Consider the problem

$$J = \int_0^{2\pi} \bigl[(x'_t)^2 - x^2\bigr]\, dt \to \min; \qquad \int_0^{2\pi} x\, dt = 0, \qquad x(0) = x(2\pi) = 0.$$

A necessary condition is given by the Lagrange multiplier rule (19.1.3.5): $x''_{tt} + x - \lambda = 0$. The general solution of the resulting equation with the condition $x(0) = 0$ taken into account is $x(t) = A \sin t + B(\cos t - 1)$. The set of admissible extremals always contains the admissible extremal $\hat{x}(t) \equiv 0$.
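The necessary condition in Example 2 can be sanity-checked numerically: a central second difference confirms that $x(t) = A \sin t + B(\cos t - 1)$ satisfies $x''_{tt} + x - \lambda = 0$ with $\lambda = -B$, together with the boundary conditions $x(0) = x(2\pi) = 0$. The values of $A$ and $B$ below are arbitrary illustrative choices.

```python
import math

A, B = 1.3, -0.7          # arbitrary constants of integration (illustrative)
lam = -B                  # the multiplier consistent with x'' + x = lambda

def x(t):
    return A * math.sin(t) + B * (math.cos(t) - 1.0)

h = 1e-5
for t in [0.5, 1.0, 2.0, 5.0]:
    xdd = (x(t + h) - 2.0 * x(t) + x(t - h)) / h**2   # second difference
    assert abs(xdd + x(t) - lam) < 1e-4                # Euler equation residual

assert abs(x(0.0)) < 1e-12 and abs(x(2 * math.pi)) < 1e-12
print("Euler equation and boundary conditions verified")
```

With the isoperimetric constraint $\int_0^{2\pi} x\, dt = -2\pi B = 0$ one gets $B = 0$, hence $\lambda = 0$ and the family $\hat{x}(t) = C \sin t$ quoted below.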
The strengthened Legendre condition (19.1.3.8) is satisfied: $L_{x'_t x'_t}(t, \hat{x}, \hat{x}'_t) = 2 > 0$. The Jacobi equation (19.1.3.9) coincides with the Euler equation (19.1.3.5). The solution $h_0(t)$ of the homogeneous equation $x''_{tt} + x = 0$ with the conditions $h_0(0) = 0$ and $(h_0)'_t(0) = 1$ is the function $\sin t$. The solution $h_1(t)$ of the equation $x''_{tt} + x + 1 = 0$ with the conditions $h_1(0) = 0$ and $(h_1)'_t(0) = 0$ is the function $\cos t - 1$. The matrix $H(\tau)$ acquires the form

$$H(\tau) = \begin{pmatrix} h_0(\tau) & h_1(\tau) \\ \int_0^{\tau} g_1 h_0\, dt & \int_0^{\tau} g_1 h_1\, dt \end{pmatrix} = \begin{pmatrix} \sin\tau & \cos\tau - 1 \\ 1 - \cos\tau & \sin\tau - \tau \end{pmatrix}.$$

Thus the conjugate points are the solutions of the equation

$$\det H(\tau) = 2 - 2\cos\tau - \tau\sin\tau = 0 \quad \Longleftrightarrow \quad \sin\frac{\tau}{2} = 0 \quad \text{or} \quad \tan\frac{\tau}{2} = \frac{\tau}{2}.$$

The conjugate point nearest to zero is $\tau = 2\pi$. Thus the admissible extremals have the form $\hat{x}(t) = C \sin t$ and provide the absolute minimum $J[\hat{x}] = 0$.

19.1.4. Problems with Higher Derivatives

19.1.4-1. Statement of the problem. Necessary condition for an extremum.

A problem with higher derivatives (with fixed endpoints) in classical calculus of variations is the following extremal problem in the space $C^n[t_0, t_1]$:

$$J[x] = \int_{t_0}^{t_1} f_0(t, x, x'_t, \dots, x_t^{(n)})\, dt \to \text{extremum}; \tag{19.1.4.1}$$
$$x_t^{(k)}(t_j) = x_{kj} \qquad (k = 0, 1, \dots, n - 1;\ j = 0, 1). \tag{19.1.4.2}$$

Here $L = f_0$ is a function of $n + 2$ variables, which is called the Lagrangian. Functions $x(t) \in C^n[t_0, t_1]$ satisfying conditions (19.1.4.2) at the endpoints are said to be admissible. An admissible function $\hat{x}(t)$ is said to provide a weak local minimum (resp., maximum) in problem (19.1.4.1) if there exists a $\delta > 0$ such that the inequality $J[x] \ge J[\hat{x}]$ (resp., $J[x] \le J[\hat{x}]$) holds for any admissible function $x(t) \in C^n[t_0, t_1]$ satisfying $\|x - \hat{x}\|_n < \delta$.
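Stepping back to Example 2: both the factorization of $\det H(\tau)$ behind the condition "$\sin(\tau/2) = 0$ or $\tan(\tau/2) = \tau/2$" and the claim that the conjugate point nearest to zero is $\tau = 2\pi$ can be confirmed by a direct numerical scan (a sketch; grid sizes are arbitrary).

```python
import math

def detH(tau):
    # det H(tau) = sin(tau)*(sin(tau) - tau) - (cos(tau) - 1)*(1 - cos(tau))
    return 2.0 - 2.0 * math.cos(tau) - tau * math.sin(tau)

# The factorization det H = 2*sin(tau/2) * (2*sin(tau/2) - tau*cos(tau/2))
# holds identically; check it on a grid:
for k in range(1, 140):
    t = 0.05 * k
    f = 2.0 * math.sin(t / 2) * (2.0 * math.sin(t / 2) - t * math.cos(t / 2))
    assert abs(detH(t) - f) < 1e-9

# det H stays positive on (0, 2*pi) and first vanishes at tau = 2*pi:
tau = 1e-6
while detH(tau) > 0.0 and tau < 7.0:
    tau += 1e-3
print(round(tau, 2))  # 6.28, i.e. tau is approximately 2*pi
```

The other family of roots, $\tan(\tau/2) = \tau/2$, starts near $\tau \approx 8.99$, which is why $2\pi$ is indeed the conjugate point nearest to zero.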
An admissible function $\hat{x}(t) \in PC^n[t_0, t_1]$ is said to provide a strong minimum (resp., maximum) in problem (19.1.4.1) if there exists an $\varepsilon > 0$ such that the inequality $J[x] \ge J[\hat{x}]$ (resp., $J[x] \le J[\hat{x}]$) holds for any admissible function $x(t) \in PC^n[t_0, t_1]$ satisfying $\|x - \hat{x}\|_{n-1} < \varepsilon$.

Necessary condition for an extremum: Suppose that the Lagrangian $L$ is continuous together with its derivatives with respect to $x, x'_t, \dots, x_t^{(n)}$ (the smoothness condition) for all $t \in [t_0, t_1]$. If the function $x(t)$ provides a local extremum in problem (19.1.4.1), then $L_{x_t^{(k)}} \in C^k[t_0, t_1]$ ($k = 1, 2, \dots, n$) and the Euler–Poisson equation holds:

$$\sum_{k=0}^{n} (-1)^k \frac{d^k}{dt^k} L_{x_t^{(k)}} = 0 \qquad (t_0 \le t \le t_1). \tag{19.1.4.3}$$

For $n = 1$, the Euler–Poisson equation coincides with the Euler equation (19.1.2.5). For $n = 2$, the Euler–Poisson equation has the form

$$\frac{d^2}{dt^2} L_{x''_{tt}} - \frac{d}{dt} L_{x'_t} + L_x = 0.$$

The general solution of equation (19.1.4.3) contains $2n$ arbitrary constants. These constants can be determined from the boundary conditions (19.1.4.2).

19.1.4-2. Higher-order necessary and sufficient conditions.

Consider the problem with higher derivatives

$$J[x] = \int_{t_0}^{t_1} f_0(t, x, x'_t, \dots, x_t^{(n)})\, dt \to \min\ (\text{or } \max); \qquad x_t^{(k)}(t_j) = x_{kj} \quad (k = 0, 1, \dots, n - 1;\ j = 0, 1), \tag{19.1.4.4}$$

where $L = f_0$ is the Lagrangian. Suppose that $x(t) \in C^{2n}[t_0, t_1]$ is an extremal of problem (19.1.4.4), i.e., the Euler–Poisson equation is satisfied on this extremal.

Legendre condition: If an extremal provides a minimum (resp., maximum) of the functional, then the following inequality holds:

$$L_{x_t^{(n)} x_t^{(n)}} \ge 0 \quad (\text{resp., } L_{x_t^{(n)} x_t^{(n)}} \le 0), \qquad t_0 \le t \le t_1. \tag{19.1.4.5}$$

Strengthened Legendre condition: If an extremal provides a minimum (resp., maximum) of the functional, then the following strict inequality holds:

$$L_{x_t^{(n)} x_t^{(n)}} > 0 \quad (\text{resp., } L_{x_t^{(n)} x_t^{(n)}} < 0), \qquad t_0 \le t \le t_1. \tag{19.1.4.6}$$

The functional $J$ has a second derivative at the point $x(t)$: $J''[x, x] = K[x]$, where

$$K[x] = \int_{t_0}^{t_1} \sum_{i,j=0}^{n} L_{x_t^{(i)} x_t^{(j)}}(t, x, x'_t, \dots, x_t^{(n)})\, x_t^{(i)} x_t^{(j)}\, dt. \tag{19.1.4.7}$$

The Euler–Poisson equation (19.1.4.3) for the functional $K$ is called the Jacobi equation for problem (19.1.4.4) on the extremal $\hat{x}(t)$. For a quadratic functional $K$ of the form

$$K[x] = \int_{t_0}^{t_1} \sum_{i=0}^{n} L_{x_t^{(i)} x_t^{(i)}}(t, x, x'_t, \dots, x_t^{(n)})\, \bigl(x_t^{(i)}\bigr)^2\, dt, \tag{19.1.4.8}$$

the Jacobi equation reads

$$\sum_{i=0}^{n} (-1)^i \frac{d^i}{dt^i} \Bigl[L_{x_t^{(i)} x_t^{(i)}}\, x_t^{(i)}\Bigr] = 0.$$

Suppose that the strengthened Legendre condition (19.1.4.6) is satisfied on an extremal $x(t)$. A point $\tau$ is said to be conjugate to the point $t_0$ if there exists a nontrivial solution $h(t)$ of the Jacobi equation such that $h_t^{(i)}(t_0) = h_t^{(i)}(\tau) = 0$ ($i = 0, 1, \dots, n - 1$). One says that the Jacobi condition (resp., the strengthened Jacobi condition) is satisfied on the extremal $x(t)$ if the interval $(t_0, t_1)$ (resp., the half-interval $(t_0, t_1]$) does not contain points conjugate to $t_0$.

The Jacobi equation is a linear equation of order $2n$ that can be solved for the highest derivative. Suppose that $h_1(t), \dots, h_n(t)$ are solutions of the Jacobi equation such that $H(t_0) = 0$ and $H_t^{(n)}(t_0)$ is a nondegenerate matrix, where

$$H(\tau) = \begin{pmatrix} h_1(\tau) & \cdots & h_n(\tau) \\ \vdots & \ddots & \vdots \\ [h_1(\tau)]_t^{(n-1)} & \cdots & [h_n(\tau)]_t^{(n-1)} \end{pmatrix}, \qquad H_t^{(n)}(\tau) = \begin{pmatrix} [h_1(\tau)]_t^{(n)} & \cdots & [h_n(\tau)]_t^{(n)} \\ \vdots & \ddots & \vdots \\ [h_1(\tau)]_t^{(2n-1)} & \cdots & [h_n(\tau)]_t^{(2n-1)} \end{pmatrix}.$$

A point $\tau$ is conjugate to $t_0$ if and only if the matrix $H(\tau)$ is degenerate.

Necessary conditions for a weak minimum (resp., maximum): Suppose that the Lagrangian $L$ of problem (19.1.4.4) satisfies the smoothness condition. If a function $x(t) \in C^{2n}[t_0, t_1]$ provides a weak minimum (resp., maximum), then $x(t)$ is an extremal and the Legendre and Jacobi conditions hold on $x(t)$.
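For $n = 2$, solutions of the Euler–Poisson equation can be verified numerically with finite differences. The sketch below uses the illustrative integrand $f_0 = (x''_{tt})^2 - (x'_t)^2$ (a hypothetical choice, picked so that the equation reduces to the constant-coefficient form $x''''_{tttt} + x''_{tt} = 0$); both $\sin t$ and $1 - \cos t$ satisfy it.

```python
import math

def euler_poisson_residual(x, t, h=1e-2):
    """Residual of x'''' + x'' = 0 at t, using central differences.
    This is the Euler-Poisson equation for f0 = (x'')^2 - (x')^2."""
    x2 = (x(t + h) - 2 * x(t) + x(t - h)) / h**2
    x4 = (x(t + 2 * h) - 4 * x(t + h) + 6 * x(t) - 4 * x(t - h) + x(t - 2 * h)) / h**4
    return x4 + x2

# Both candidate solutions have a residual at the finite-difference noise level:
for x in (math.sin, lambda t: 1.0 - math.cos(t)):
    assert all(abs(euler_poisson_residual(x, t)) < 1e-3 for t in (0.3, 1.7, 4.0))
print("solutions of x'''' + x'' = 0 verified")
```

A non-solution such as $x = t^2$ gives a residual of about $2$ at every point, so the check genuinely discriminates.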
Sufficient conditions for a strong minimum (resp., maximum): Suppose that the Lagrangian $L$ is sufficiently smooth. If $x(t) \in C^{2n}[t_0, t_1]$ is an admissible extremal and the strengthened Legendre condition and the strengthened Jacobi condition are satisfied on $x(t)$, then $x(t)$ provides a strong minimum (resp., maximum) in problem (19.1.4.4).

For quadratic functionals of the form (19.1.4.8), the problem can be examined completely.

THEOREM. Suppose that the functional has the form (19.1.4.8), $L_{x_t^{(i)} x_t^{(i)}} \in C^i[t_0, t_1]$, and the strengthened Legendre condition is satisfied. If the Jacobi condition does not hold, then the lower bound in the problem is $-\infty$ (the upper bound is $+\infty$). If the Jacobi condition is satisfied, then there exists a unique admissible extremal that provides the absolute minimum (maximum).

Example 3. Consider the problem

$$J[x] = \int_0^{2\pi} \bigl[(x''_{tt})^2 - (x'_t)^2\bigr]\, dt \to \text{extremum}; \qquad x = x(t), \quad x(0) = x(2\pi) = x'_t(0) = x'_t(2\pi) = 0.$$

A necessary condition is given by the Euler–Poisson equation (19.1.4.3): $x''''_{tttt} + x''_{tt} = 0$. The general solution of this equation is $x(t) = C_1 \sin t + C_2 \cos t + C_3 t + C_4$. The set of admissible extremals always contains the admissible extremal $\hat{x}(t) \equiv 0$.

The strengthened Legendre condition is satisfied: $L_{x''_{tt} x''_{tt}}(t, \hat{x}, \hat{x}'_t, \hat{x}''_{tt}) = 2 > 0$. The Jacobi equation coincides with the Euler–Poisson equation. If we set $h_1(t) = 1 - \cos t$ and $h_2(t) = \sin t - t$, then the matrix $H(t)$ acquires the form

$$H(t) = \begin{pmatrix} h_1(t) & h_2(t) \\ [h_1(t)]'_t & [h_2(t)]'_t \end{pmatrix} = \begin{pmatrix} 1 - \cos t & \sin t - t \\ \sin t & \cos t - 1 \end{pmatrix}.$$

Then $H(0) = 0$ and

$$\det H''_{tt}(0) = \det \begin{pmatrix} [h_1]''_{tt} & [h_2]''_{tt} \\ [h_1]'''_{ttt} & [h_2]'''_{ttt} \end{pmatrix}\bigg|_{t=0} = \det \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} = -1 \ne 0.$$

Thus the conjugate points are the solutions of the equation

$$\det H(t) = 2(\cos t - 1) + t \sin t = 0 \quad \Longleftrightarrow \quad \sin\frac{t}{2} = 0 \quad \text{or} \quad \tan\frac{t}{2} = \frac{t}{2}.$$

The conjugate point nearest to zero is $t = 2\pi$. Thus the admissible extremals have the form $\hat{x}(t) = C(1 - \cos t)$ and provide the absolute minimum $J[\hat{x}] = 0$.

19.1.5. Lagrange Problem

19.1.5-1. Lagrange principle.
The Lagrange problem is the following problem:

$$B_0(\gamma) \to \min; \qquad B_i(\gamma) \le 0 \quad (i = 1, 2, \dots, m'), \qquad B_i(\gamma) = 0 \quad (i = m' + 1, m' + 2, \dots, m), \tag{19.1.5.1}$$
$$(x_\alpha)'_t - \varphi(t, x) = 0 \quad \text{for all } t \in T, \tag{19.1.5.2}$$

where $x \equiv x(t) \equiv (x_\alpha, x_\beta) \equiv (x_1(t), \dots, x_n(t)) \in PC^1(\Gamma, \mathbb{R}^n)$, $x_\alpha \equiv (x_1(t), \dots, x_k(t)) \in PC^1(\Gamma, \mathbb{R}^k)$, $x_\beta \equiv (x_{k+1}(t), \dots, x_n(t)) \in PC^1(\Gamma, \mathbb{R}^{n-k})$, $\gamma = (x, t_0, t_1)$, $\varphi \in PC(\Gamma, \mathbb{R}^n)$, $t_0, t_1 \in \Gamma$, $t_0 < t_1$, $\Gamma$ is a given finite interval, and

$$B_i(x, t_0, t_1) = \int_{t_0}^{t_1} f_i(t, x, (x_\beta)'_t)\, dt + \psi_i(t_0, x(t_0), t_1, x(t_1)) \qquad (i = 0, 1, \dots, m).$$

Here $PC(\Gamma, \mathbb{R}^n)$ is the space of piecewise continuous vector functions on the closed interval $\Gamma$, and $PC^1(\Gamma, \mathbb{R}^n)$ is the space of continuous vector functions with piecewise continuous derivative on $\Gamma$.

The constraint (19.1.5.2) is a differential equation called the differential constraint. The differential constraint can be imposed on all coordinates of $x$ (i.e., $k = n$ in (19.1.5.2)) or be lacking altogether ($k = 0$). The element $\gamma$ is called an admissible element. An admissible element $\hat{\gamma} = (\hat{x}, \hat{t}_0, \hat{t}_1)$ provides a weak local minimum in the Lagrange problem if there exists a $\delta > 0$ such that the inequality $B_0(\gamma) \ge B_0(\hat{\gamma})$ holds for any admissible element $\gamma$ satisfying $\|x - \hat{x}\|_{C^1} < \delta$, $|t_0 - \hat{t}_0| < \delta$, and $|t_1 - \hat{t}_1| < \delta$, where $\|x\|_{C^1} = \max_{t \in T} |x| + \max_{t \in T} |x'_t|$.

19.1.5-2. Necessary conditions for an extremum. Euler–Lagrange theorem.

Suppose that $\hat{\gamma}$ provides a weak local minimum in the Lagrange problem (19.1.5.1), and, moreover, the functions $\varphi = (\varphi_1, \dots, \varphi_n)$ and $f_i$ ($i = 0, 1, \dots, m$) and their partial derivatives are continuous in $x$ in a neighborhood of $\{(t, \hat{x}(t)) \mid t \in \Gamma\}$, and the functions $\psi_i$ ($i = 0, 1, \dots, m$) are continuously differentiable in a neighborhood of the point $(\hat{t}_0, \hat{x}(\hat{t}_0), \hat{t}_1, \hat{x}(\hat{t}_1))$ (the smoothness condition).
Then there exist Lagrange multipliers $\lambda_i$ ($i = 0, \dots, m$) and $p_j \equiv p_j(t) \in PC^1(T)$ ($j = 1, \dots, k$), not all zero simultaneously, such that the Lagrange function

$$\Lambda = \int_{t_0}^{t_1} \Bigl[\sum_{i=0}^{m} \lambda_i f_i(t, x, (x_\beta)'_t) + \sum_{i=1}^{k} p_i \bigl((x_i)'_t - \varphi_i(t, x)\bigr)\Bigr]\, dt + \sum_{i=0}^{m} \lambda_i \psi_i(t_0, x(t_0), t_1, x(t_1))$$

satisfies the following conditions:

1. The conditions of stationarity with respect to $x$, i.e., the Euler equations

$$\frac{dp_i}{dt} + \sum_{j=1}^{k} p_j \frac{\partial \varphi_j}{\partial x_i} = \sum_{j=0}^{m} \lambda_j \frac{\partial f_j}{\partial x_i} \qquad (i = 1, 2, \dots, k) \quad \text{for all } t \in T,$$

where all derivatives with respect to $x_i$ are evaluated at $(t, \hat{x})$.

2. The conditions of transversality with respect to $x$,

$$p_i(\hat{t}_j) = (-1)^j \sum_{l=0}^{m} \lambda_l \frac{\partial \psi_l}{\partial x_i(t_j)} \qquad (j = 0, 1;\ i = 1, 2, \dots, k),$$

where all derivatives with respect to $x_i(t_j)$ ($j = 0, 1$) are evaluated at $(\hat{t}_0, \hat{x}(\hat{t}_0), \hat{t}_1, \hat{x}(\hat{t}_1))$.

3. The conditions of stationarity with respect to $t_j$ (only for movable endpoints of the integration interval), $\Lambda_{t_j}(\hat{t}_j) = 0$ ($j = 0, 1$).

4. The complementary slackness conditions $\lambda_i B_i(\hat{\gamma}) = 0$ ($i = 1, 2, \dots, m'$).

5. The nonnegativity conditions $\lambda_i \ge 0$ ($i = 1, 2, \dots, m'$).

19.1.6. Pontryagin Maximum Principle

19.1.6-1. Statement of the problem.

The optimal control problem (in Pontryagin's form) is the problem

$$B_0(\omega) \to \min; \qquad B_i(\omega) \le 0 \quad (i = 1, 2, \dots, m'), \qquad B_i(\omega) = 0 \quad (i = m' + 1, m' + 2, \dots, m), \tag{19.1.6.1}$$
$$x'_t - \varphi(t, x, u) = 0 \quad \text{for all } t \in T, \tag{19.1.6.2}$$
$$u \in U \quad \text{for all } t \in \Gamma, \tag{19.1.6.3}$$

where $x \equiv x(t) \in PC^1(\Gamma, \mathbb{R}^n)$, $u \equiv u(t) \in PC(\Gamma, \mathbb{R}^r)$, $\omega = (x, u, t_0, t_1)$, $\varphi \in PC(\Gamma, \mathbb{R}^n)$, $t_0, t_1 \in \Gamma$, $t_0 < t_1$, $\Gamma$ is a given finite interval, $U \subset \mathbb{R}^r$ is an arbitrary set, $T \subset \Gamma$ is the set of continuity points of $u$, and

$$B_i(x, u, t_0, t_1) = \int_{t_0}^{t_1} f_i(t, x, u)\, dt + \psi_i(t_0, x(t_0), t_1, x(t_1)) \qquad (i = 0, 1, \dots, m).$$

Here $PC(\Gamma, \mathbb{R}^n)$ is the space of piecewise continuous vector functions on the closed interval $\Gamma$, and $PC^1(\Gamma, \mathbb{R}^n)$ is the space of continuous vector functions with piecewise continuous derivative on $\Gamma$.
The vector function $x = (x_1(t), \dots, x_n(t))$ is called the phase variable, and the vector function $u = (u_1(t), \dots, u_r(t))$ is called the control. The constraint (19.1.6.2) is a differential equation that is called a differential constraint. In contrast with the Lagrange problem, this problem contains the inclusion-type constraint (19.1.6.3), which must be satisfied at all points $t \in \Gamma$, and, moreover, the phase variable $x$ can be less smooth.

An element $\omega = (x, u, t_0, t_1)$ for which all conditions and constraints of the problem are satisfied is called an admissible controlled process. An admissible controlled process $\hat{\omega} = (\hat{x}, \hat{u}, \hat{t}_0, \hat{t}_1)$ is called a (locally) optimal process (or a process optimal in the strong sense) if there exists a $\delta > 0$ such that $B_0(\omega) \ge B_0(\hat{\omega})$ for any admissible controlled process $\omega = (x, u, t_0, t_1)$ satisfying $\|x - \hat{x}\|_C < \delta$, $|t_0 - \hat{t}_0| < \delta$, $|t_1 - \hat{t}_1| < \delta$, where $\|x\|_C = \max_{t \in \Gamma} |x|$.
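As a minimal illustration of these definitions, the sketch below encodes a small control problem of the form (19.1.6.1)–(19.1.6.3) with hypothetical data (dynamics $x'_t = u$, cost integrand $f_0 = x^2 + u^2$, control set $U = [-1, 1]$, no terminal terms $\psi_i$) and evaluates $B_0(\omega)$ for a given process by Euler integration of the differential constraint and trapezoidal integration of the cost.

```python
# Hypothetical problem data: phi(t,x,u) = u, f0 = x^2 + u^2, U = [-1, 1].
def phi(t, x, u):
    return u

def f0(t, x, u):
    return x * x + u * u

def B0(u, x0, t0, t1, n=10000):
    """Integrate the state from x' = phi(t, x, u) by the Euler method and
    accumulate the cost B0 = integral of f0 dt by the trapezoidal rule."""
    h = (t1 - t0) / n
    t, x, cost = t0, x0, 0.0
    for _ in range(n):
        uk = u(t)
        assert -1.0 <= uk <= 1.0      # inclusion-type constraint u in U
        x_next = x + h * phi(t, x, uk)
        cost += 0.5 * h * (f0(t, x, uk) + f0(t + h, x_next, u(t + h)))
        t, x = t + h, x_next
    return cost

# The constant control u = 0 from x0 = 0 keeps x = 0, so the cost is zero:
print(B0(lambda t: 0.0, x0=0.0, t0=0.0, t1=1.0))  # 0.0
```

For instance, the control $u \equiv 1$ from $x_0 = 0$ gives the exact state $x(t) = t$ and cost $\int_0^1 (t^2 + 1)\, dt = 4/3$, which the routine reproduces to discretization accuracy.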
