Dual linear programming problem and its applications

MINISTRY OF EDUCATION AND TRAINING
HANOI PEDAGOGICAL UNIVERSITY
——————–o0o———————

NGUYEN THI THUY DUONG

DUAL LINEAR PROGRAMMING PROBLEM AND ITS APPLICATIONS

Major: Applied Mathematics
BACHELOR THESIS
Advisor: Dr. Nguyen Trung Dung

Hanoi - 2019

Thesis Assurance

I assure that the data and the results of this thesis are true and not identical to those of other topics. I also assure that all the help for this thesis has been acknowledged and that the results presented in the thesis have been identified clearly.

Student
Nguyen Thi Thuy Duong

Thesis Acknowledgement

First and foremost, my heartfelt thanks go to my advisor, Dr. Nguyen Trung Dung (Hanoi Pedagogical University No2), for his continuous support when I met obstacles during the journey. The completion of this study would not have been possible without his expert advice, close attention and unswerving guidance. Secondly, I am keen to express my deep gratitude to my family for encouraging me to continue this thesis. I owe my special thanks to my parents for their emotional and material sacrifices as well as their understanding and unconditional support. Finally, I owe my thanks to the many people who helped and encouraged me during my work. I am especially thankful to all my best friends at university for their endless encouragement.

Student
Nguyen Thi Thuy Duong

CONTENTS

Thesis Assurance
Thesis Acknowledgement
Notations
Introduction
1. Dual Linear Programming Problem
   1.1 The Linear Programming Problem
       1.1.1 The general form of LPP
       1.1.2 The standard form of LPP
       1.1.3 The canonical form of LPP
   1.2 The Dual Problem
       1.2.1 Lagrange Function and Saddle Point
       1.2.2 Standard Form of Duality
       1.2.3 Canonical Form of Duality
       1.2.4 General Form of Duality
   1.3 Primal-Dual Relationships
2. Dual Simplex Algorithm
   2.1 The basics of the dual simplex algorithm
   2.2 The dual simplex algorithm
   2.3 Using Matlab to solve LPP
   2.4 Applications of dual linear programming
       2.4.1 Checking whether a feasible solution is an optimal solution
       2.4.2 Finding the optimal solution set of the dual problem
Conclusion
References

NOTATIONS

$\mathbb{R}_+$ : the set of non-negative real numbers
$\mathbb{R}^n$ : the n-dimensional Euclidean vector space
$A^\top$ : the transpose of the matrix $A$
$(x_1, \dots, x_n)^\top$ : the column vector $x$
$A_i$ : the $i$-th row of the matrix $A$
$A^j$ : the $j$-th column of the matrix $A$
LPP : Linear Programming Problem

INTRODUCTION

Motivation

Duality theory is an important part of optimization theory and has many applications in practice. For an optimization problem, mathematicians study a closely related problem called the dual problem. If the original problem is a minimization problem, the dual problem is a maximization problem, and it is often expected that the dual problem is easier to handle than the original one. From that desire, new methods and software have been created to help solve these problems more easily. On the other hand, mathematicians have also studied practical applications of the dual problem through the system of duality theorems, in particular the Complementary Slackness theorem, which is used to test optimality and to relate the optimal solutions of the original problem and the corresponding dual problem. The theory of duality and its applications has been researched by many domestic and foreign authors, with many impressive results.
Thanks to the enthusiastic help of Dr. Nguyen Trung Dung, along with my desire to research more deeply into the dual problem, I chose the topic "Dual linear programming problem and its applications" as my research topic. The thesis focuses on the issues of duality theory and its applications, including: the statement and construction of a dual problem from the original problem; the relationship between the original problem and the dual problem; how to use the dual simplex algorithm to solve the primal linear programming problem; using Matlab to find solutions of linear programming problems; and applications of the dual problem through the weak complementary slackness theorem.

Thesis Objectives

The main purpose of the thesis is to understand the duality problem and its applications. More specifically, the thesis focuses on the following two main topics:
1. The dual linear programming problem and its applications.
2. The dual simplex algorithm.

Research methods

This thesis uses methods of collecting, synthesizing, analyzing and researching documents.

Thesis organization

This thesis is organised as follows:
• Chapter 1. Dual linear programming problem. In this chapter, some basic concepts involving linear programming problems, the construction of a dual linear programming problem from the various forms of an original linear programming problem, and the relationships between a pair of primal-dual linear problems are presented, respectively.
• Chapter 2. Dual simplex algorithm. This chapter addresses how to solve a primal linear programming problem by the dual simplex algorithm and how to use Matlab to solve linear programming problems.

Chapter 1
DUAL LINEAR PROGRAMMING PROBLEM

In this chapter, we recall some concepts and results on dual linear programming problems that will be used in the next chapter.

1.1 The Linear Programming Problem

In this section, we introduce some forms of linear programming problems, such as the general form, the standard form, and the canonical form.

1.1.1 The general form of LPP

Find a vector $x = (x_1, x_2, \dots, x_n)^\top \in \mathbb{R}^n$ such that
\[
f(x) = \sum_{j=1}^{n} c_j x_j \to \min(\max) \qquad (1.1)
\]
subject to
\[
D: \begin{cases}
\sum_{j=1}^{n} a_{ij} x_j \ge b_i, & i = \overline{1, m_1},\\
\sum_{j=1}^{n} a_{ij} x_j \le b_i, & i = \overline{m_1+1, m_2},\\
\sum_{j=1}^{n} a_{ij} x_j = b_i, & i = \overline{m_2+1, m},\\
x_j \ge 0, & j = \overline{1, n_1},\\
x_j \le 0, & j = \overline{n_1+1, n_2},\\
x_j \text{ unrestricted in sign}, & j = \overline{n_2+1, n}.
\end{cases}
\]
Common terminology for the LPP can be summarized as follows:
• The variables $x_1, x_2, \dots, x_n$ are called the decision variables and …

Chapter 2
DUAL SIMPLEX ALGORITHM

2.1 The basics of the dual simplex algorithm

Consider the primal linear programming problem in standard form
\[
f(x) = c^\top x \to \min \quad\text{subject to}\quad D_P: \begin{cases} Ax = b,\\ x \ge 0, \end{cases} \qquad (P)
\]
and its dual problem
\[
g(y) = b^\top y \to \max \quad\text{subject to}\quad D_Q: \begin{cases} A^\top y \le c,\\ y \text{ unrestricted in sign}. \end{cases} \qquad (Q)
\]
Assume that $\operatorname{rank}(A) = m$ and that the vectors $\{A^j,\ j \in J\}$, consisting of $m$ column vectors of the matrix $A$, are linearly independent. Then the columns $\{A^j,\ j \in J\}$ are called a basis of the matrix $A$, denoted by $A_J$. Denote $K = \{1, 2, \dots, n\} \setminus J$. The basic solution $(x_J, x_K)$ of the problem (P) corresponding to the basis $J$ is defined by
\[
x_J = A_J^{-1} b, \qquad x_K = 0.
\]

Definition 2.1.1. A basis $J$ is called a primal feasible basis if the corresponding basic solution is a feasible solution, i.e., $x_J = A_J^{-1} b \ge 0$. The basis $J$ is called an optimal basis if the basic solution corresponding to $J$ is an optimal solution.

Definition 2.1.2. A vector $y$ is called a dual basic solution corresponding to the basis $J$ if $y = (A_J^\top)^{-1} c_J$. A basis $J$ is called a dual feasible basis if the corresponding dual basic solution is a feasible solution of the dual problem (Q), i.e., $y \in D_Q$.
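Both definitions are easy to test numerically. The following Matlab fragment is only an illustrative sketch (it is not part of the thesis; the function name and the tolerance are made up): given $A$, $b$, $c$ and an index set $J$, it forms the basic solutions of Definitions 2.1.1 and 2.1.2 and reports which kind of feasibility holds.

```matlab
% Illustrative sketch (not from the thesis): classify a basis J of
% min c'x s.t. Ax = b, x >= 0 as primal and/or dual feasible.
function checkBasis(A, b, c, J)
    AJ  = A(:, J);                       % basis matrix A_J
    xJ  = AJ \ b;                        % primal basic solution, x_K = 0
    y   = AJ' \ c(J);                    % dual basic solution y = (A_J^T)^{-1} c_J
    tol = 1e-9;
    fprintf('primal feasible: %d\n', all(xJ >= -tol));         % x_J >= 0 ?
    fprintf('dual feasible:   %d\n', all(A' * y <= c + tol));  % A^T y <= c ?
end
```

Example 2.1.1 below works such a check out by hand.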
Example 2.1.1. Consider the following linear programming problem:
\[
f(x) = x_1 + x_2 + x_3 \to \min
\]
subject to
\[
D_P: \begin{cases}
x_1 + x_4 - x_5 = 6,\\
x_2 - x_4 + 3x_6 = -2,\\
x_3 - 2x_4 + x_5 - x_6 = 9,\\
x_j \ge 0, \ j = \overline{1, 6}.
\end{cases}
\]
We have that
\[
A_J = \begin{pmatrix} 1 & 0 & 0\\ 0 & 0 & 3\\ 0 & 1 & -1 \end{pmatrix}, \qquad \det(A_J) = -3 \ne 0,
\]
thus $J$, consisting of the columns $\{A^1, A^3, A^6\}$, is a basis of the matrix $A$. On the other hand, we also have
\[
A_J^{-1} = -\frac{1}{3}\begin{pmatrix} -3 & 0 & 0\\ 0 & -1 & -3\\ 0 & -1 & 0 \end{pmatrix},
\]
and we get $x_J = A_J^{-1} b = \left(6, \frac{25}{3}, -\frac{2}{3}\right) \notin D_P$. This implies that $J$ is not a primal feasible basis. However, the dual problem of this problem is given by
\[
g(y) = 6y_1 - 2y_2 + 9y_3 \to \max
\]
subject to
\[
D_Q: \begin{cases}
y_1 \le 1,\\
y_2 \le 1,\\
y_3 \le 1,\\
y_1 - y_2 - 2y_3 \le 0,\\
-y_1 + y_3 \le 0,\\
3y_2 - y_3 \le 0.
\end{cases}
\]
We have $y = (A_J^\top)^{-1} c_J = \left(1, \frac{1}{3}, 1\right) \in D_Q$, thus $J$ is a dual feasible basis.

Proposition 2.1.1. For the primal-dual pair ((P), (Q)), if $J$ is both a primal feasible and a dual feasible basis, then $J$ is an optimal basis.

Proof. Assume that $J$ is a primal feasible basis and $J$ is also a dual feasible basis. Then $y = (A_J^\top)^{-1} c_J$ is a feasible solution of problem (Q), i.e., $A^\top y \le c$, or
\[
A^\top (A_J^\top)^{-1} c_J \le c.
\]
Denote $z^k = (z_{j_1 k}, z_{j_2 k}, \dots, z_{j_m k})^\top$, where $A^k = \sum_{j \in J} z_{jk} A^j$ for $k = 1, 2, \dots, n$. Then $A^k = A_J z^k$, which implies $z^k = A_J^{-1} A^k$. Thus, the above inequality gives $(z^k)^\top c_J - c_k \le 0$, which is equivalent to
\[
\sum_{j \in J} z_{jk} c_j - c_k \le 0 \quad \text{for all } k = 1, 2, \dots, n,
\]
i.e., all reduced costs are non-positive. By the optimality criterion of the simplex method, this implies that $(x_J, x_K)$ is an optimal solution of problem (P), and $J$ is an optimal basis.

Remark 2.1.1. Suppose that $J$ is a dual feasible basis. Then, by Proposition 2.1.1, $J$ is an optimal basis if $J$ is also a primal feasible basis. This is the optimality condition for the dual simplex algorithm.

Proposition 2.1.2. Consider the primal-dual pair ((P), (Q)) and assume that $J$ is a dual feasible basis. If there exists an index $j \in J$ with $x_j < 0$ such that $z_{jk} \ge 0$ for all $k = 1, 2, \dots, n$, then the problem (P) has no feasible solution.

Proof. By contradiction, assume that the problem (P) has a feasible solution, i.e., there exists $x \in \mathbb{R}^n$ such that $x \ge 0$ and $Ax = b$. This implies $A_J^{-1} A x = A_J^{-1} b$, or
\[
\sum_{k=1}^{n} z_{jk} x_k = x_j < 0.
\]
This contradicts the assumption that $z_{jk} \ge 0$ and $x_k \ge 0$ for all $k = 1, 2, \dots, n$. Thus, the problem (P) has no feasible solution.

Remark 2.1.2. From Proposition 2.1.2, if $J$ is a dual feasible basis, then the problem (P) has no feasible solution if there exists an index $j \in J$ with $x_j < 0$ such that $z_{jk} \ge 0$ for all $k = 1, 2, \dots, n$.

2.2 The dual simplex algorithm

Assume that $J = \{1, 2, \dots, m\}$ is a dual feasible basis. Then the dual simplex tableau corresponding to $J$ is as follows:

Basis | c_J | x_J |  c_1    c_2   ...   c_k   ...   c_n
      |     |     |  A^1    A^2   ...   A^k   ...   A^n
------+-----+-----+------------------------------------
A^1   | c_1 | x_1 |  z_11   z_12  ...   z_1k  ...   z_1n
A^2   | c_2 | x_2 |  z_21   z_22  ...   z_2k  ...   z_2n
 ...  | ... | ... |  ...
A^m   | c_m | x_m |  z_m1   z_m2  ...   z_mk  ...   z_mn
------+-----+-----+------------------------------------
      |     | f(x)|  Δ_1    Δ_2   ...   Δ_k   ...   Δ_n

We note that the column $x_J$ is defined by $x_J = A_J^{-1} b$. The steps of the dual simplex algorithm are as follows.

Step 1. Test the optimality criterion: if $x_j \ge 0$ for all $j \in J$, then $J$ is an optimal basis; stop. Otherwise, i.e., if there exists an index $j \in J$ such that $x_j < 0$, go to Step 2.

Step 2. Test whether the feasible region of the primal problem is empty: if there exists $j \in J$ with $x_j < 0$ such that $z_{jk} \ge 0$ for all $k = 1, 2, \dots, n$, then the feasible region of the primal problem is empty; stop. Otherwise, choose the pivot row $r$ and the pivot column $s$ as follows:
\[
x_r = \min\{x_j : x_j < 0\}, \qquad \frac{\Delta_s}{z_{rs}} = \min\left\{\frac{\Delta_j}{z_{rj}} : z_{rj} < 0\right\}.
\]
The entry $z_{rs}$ in the $r$-th row and $s$-th column is called the pivot element. Update the new tableau by pivoting at $z_{rs}$, write the calculated results in the new tableau, and go back to Step 1.

Remark 2.2.1. We update the new tableau by pivoting at $z_{rs}$ as follows:
• Replace $A^r$, $x_r$, and $c_r$ with $A^s$, $x_s$, and $c_s$, respectively.
• Divide all entries in the pivot row $r$ by $z_{rs}$.
• Every other entry (in a row $j \ne r$) is calculated by the rectangle rule: if $A$ is the current entry in row $j$ and column $k$, $B$ is the entry in row $j$ and the pivot column $s$, $C$ is the entry in the pivot row $r$ and column $k$, and $D = z_{rs}$ is the pivot element, then the new entry is
\[
A_1 = A - \frac{B \cdot C}{D}.
\]
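The two steps above can be put together in a few lines of Matlab. The following is only a minimal sketch (it is not part of the thesis): the function name and the tolerance are illustrative, degeneracy and cycling are ignored, and the entries $z_{jk}$ and $\Delta_k$ are recomputed from the current basis rather than updated by the rectangle rule of Remark 2.2.1.

```matlab
% Minimal sketch (not from the thesis): the dual simplex iteration of Section 2.2
% for min c'x s.t. Ax = b, x >= 0, started from a dual feasible basis J
% (a vector of m column indices of A).
function [x, J] = dualSimplexSketch(A, b, c, J)
    b = b(:);  c = c(:);
    [m, n] = size(A);
    tol = 1e-9;
    while true
        AJ = A(:, J);
        xJ = AJ \ b;                        % basic solution x_J = A_J^{-1} b
        if all(xJ >= -tol)                  % Step 1: optimality criterion
            x = zeros(n, 1);  x(J) = xJ;
            return;
        end
        Z     = AJ \ A;                     % tableau entries z_{jk}
        y     = AJ' \ c(J);                 % dual basic solution
        Delta = (A' * y - c)';              % reduced costs Delta_k (all <= 0)
        [~, r] = min(xJ);                   % pivot row: most negative x_j
        cols = find(Z(r, :) < -tol);        % columns with z_{rk} < 0
        if isempty(cols)                    % Step 2: primal problem infeasible
            error('The feasible region of the primal problem is empty.');
        end
        [~, p] = min(Delta(cols) ./ Z(r, cols));   % ratio test for the pivot column
        J(r) = cols(p);                     % column s enters the basis, row r leaves
    end
end
```

Called with the data of Example 2.2.1 below and the starting basis J = [1 2 4], it should reproduce the solution obtained by hand there.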
Example 2.2.1. Solve the following linear programming problem by the dual simplex algorithm:
\[
f(x) = x_1 - x_2 - 2x_4 + 2x_5 - 3x_6 \to \min
\]
subject to
\[
D_P: \begin{cases}
x_1 + x_4 + x_5 - x_6 = 2,\\
x_2 + x_4 + x_6 = 12,\\
x_3 + 2x_4 + 4x_5 + 3x_6 = 9,\\
x_j \ge 0, \ j = \overline{1, 6}.
\end{cases}
\]
First, we have the dual problem as follows:
\[
g(y) = 2y_1 + 12y_2 + 9y_3 \to \max
\]
subject to
\[
D_Q: \begin{cases}
y_1 \le 1,\\
y_2 \le -1,\\
y_3 \le 0,\\
y_1 + y_2 + 2y_3 \le -2,\\
y_1 + 4y_3 \le 2,\\
-y_1 + y_2 + 3y_3 \le -3.
\end{cases}
\]
Obviously, $J$, consisting of the columns $\{A^1, A^2, A^4\}$, is a basis of the matrix $A$, and $y = (A_J^\top)^{-1} c_J = (1, -1, -1) \in D_Q$. Thus, $J$ is a dual feasible basis. Next, with $K = \{3, 5, 6\}$, we calculate
\[
A^K = \begin{pmatrix} 0 & 1 & -1\\ 0 & 0 & 1\\ 1 & 4 & 3 \end{pmatrix}, \qquad
Z^K = A_J^{-1} A^K = \begin{pmatrix} -\tfrac{1}{2} & -1 & -\tfrac{5}{2}\\[2pt] -\tfrac{1}{2} & -2 & -\tfrac{1}{2}\\[2pt] \tfrac{1}{2} & 2 & \tfrac{3}{2} \end{pmatrix}, \qquad
x_J = A_J^{-1} b = \left(-\tfrac{5}{2}, \tfrac{15}{2}, \tfrac{9}{2}\right).
\]
Hence, we have the dual simplex tableau:

Basis | c_J |  x_J    |   1      -1      0      -2      2      -3
      |     |         |  A^1     A^2    A^3     A^4    A^5     A^6
------+-----+---------+-------------------------------------------
A^1   |  1  |  -5/2   |   1       0    -1/2      0     -1     -5/2
A^2   | -1  |  15/2   |   0       1    -1/2      0     -2     -1/2
A^4   | -2  |   9/2   |   0       0     1/2      1      2      3/2
------+-----+---------+-------------------------------------------
      |     | f = -19 |   0       0     -1       0     -5      -2

We have $x_1 = -\frac{5}{2} < 0$, so the pivot row is $r = 1$. On the other hand,
\[
\min\left\{\frac{\Delta_3}{z_{13}} = 2,\ \frac{\Delta_5}{z_{15}} = 5,\ \frac{\Delta_6}{z_{16}} = \frac{4}{5}\right\} = \frac{4}{5}.
\]
Thus, the pivot column is $s = 6$. Updating the tableau by pivoting at $z_{16} = -\frac{5}{2}$, we get:

Basis | c_J |  x_J    |   1      -1      0      -2      2      -3
      |     |         |  A^1     A^2    A^3     A^4    A^5     A^6
------+-----+---------+-------------------------------------------
A^6   | -3  |    1    |  -2/5     0     1/5      0     2/5      1
A^2   | -1  |    8    |  -1/5     1    -2/5      0    -9/5      0
A^4   | -2  |    3    |   3/5     0     1/5      1     7/5      0
------+-----+---------+-------------------------------------------
      |     | f = -17 |  -4/5     0    -3/5      0   -21/5      0

We see that $x_j \ge 0$ for all $j \in J$. Therefore, $x^* = (0, 8, 0, 3, 0, 1)$ is an optimal solution with the optimal value $f(x^*) = -17$.

2.3 Using Matlab to solve LPP

By using the solvers of MATLAB's Optimization Toolbox, we can solve linear programming problems efficiently. In this section, we introduce the functions of this toolbox for solving LPs through the examples below.

Remark 2.3.1. For generality, in this section we consider the following linear programming problem:
\[
f(x) = c^\top x \to \min \quad\text{subject to}\quad D: \begin{cases} Ax \le b,\\ A_{eq}\, x = b_{eq},\\ x \ge 0. \end{cases}
\]
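All examples below use the solver linprog, called as [X, Z] = linprog(c, A, b, Aeq, beq, lb, ub). Beyond the primal solution, linprog can also return the Lagrange multipliers of the constraints, which form a dual solution up to sign conventions; this ties the toolbox back to the duality theory of Chapter 1. The fragment below is only a toy sketch with made-up data, not an example from the thesis:

```matlab
% Toy sketch (illustrative data): recover a dual solution from linprog.
c  = [1; 2];                  % min  x1 + 2*x2
A  = [-1 -1];  b = -4;        % s.t. x1 + x2 >= 4, written as -x1 - x2 <= -4
lb = zeros(2, 1);             %      x >= 0
[x, fval, exitflag, output, lambda] = linprog(c, A, b, [], [], lb, []);
disp(x')                 % primal optimal solution, here (4, 0)
disp(fval)               % optimal value, here 4
disp(lambda.ineqlin)     % multiplier of the inequality constraint (a dual variable)
disp(lambda.lower)       % multipliers of the lower bounds x >= 0
```

In the thesis examples that follow, only the primal solution and the optimal value are printed.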
Example 2.3.1. Consider the following linear programming problem:
\[
f(x) = -2x_1 + 4x_2 - 2x_3 + 2x_4 \to \min
\]
subject to
\[
D: \begin{cases}
2x_2 + 3x_3 \ge 6,\\
4x_1 - 3x_2 + 8x_3 - x_4 = 20,\\
-3x_1 + 2x_2 - 4x_4 \le -8,\\
4x_2 - 3x_3 + 4x_4 = 18,\\
x_1 \ge 1,\\
x_3 \ge 0,\\
x_3 \le 10,\\
x_j \ge 0, \ j = \overline{1, 4}.
\end{cases}
\]
First, we convert the above linear problem into matrix form (the $\ge$ inequality is multiplied by $-1$ so that all inequalities read $Ax \le b$) and define the vectors and matrices as follows:
\[
c = (-2, 4, -2, 2)^\top, \quad lb = (1, 0, 0, 0), \quad ub = (\infty, \infty, 10, \infty),
\]
\[
A = \begin{pmatrix} 0 & -2 & -3 & 0\\ -3 & 2 & 0 & -4 \end{pmatrix}, \quad b = \begin{pmatrix} -6\\ -8 \end{pmatrix}, \quad
A_{eq} = \begin{pmatrix} 4 & -3 & 8 & -1\\ 0 & 4 & -3 & 4 \end{pmatrix}, \quad b_{eq} = \begin{pmatrix} 20\\ 18 \end{pmatrix}.
\]
Finally, we call the linprog solver and print the output in formatted form. The complete code to solve the aforementioned LP problem is listed below:

    A = [0 -2 -3 0; -3 2 0 -4];    % create the matrix A
    b = [-6; -8];                  % create the vector b
    Aeq = [4 -3 8 -1; 0 4 -3 4];   % create the matrix Aeq
    beq = [20; 18];                % create the vector beq
    lb = [1 0 0 0];                % create the vector lb
    ub = [Inf Inf 10 Inf];         % create the vector ub
    c = [-2; 4; -2; 2];            % create the vector c
    % call the linprog solver
    [X, Z] = linprog(c, A, b, Aeq, beq, lb, ub);
    for i = 1:length(X)
        fprintf('x(%d) = %f\n', i, X(i));
    end
    fprintf('The value of the objective function is %f\n', Z);

After running with Matlab, the results are given by
    x1 = 1.8000, x2 = 0.0000, x3 = 2.0000, x4 = 3.2000.
The value of the objective function is -1.2000.

Example 2.3.2. Consider the following linear programming problem:
\[
f(x) = x_1 + 2x_2 + 3x_3 + 3x_4 \to \max
\]
subject to
\[
D: \begin{cases}
2x_1 + x_2 + x_3 + 2x_4 \le 20,\\
2x_1 + x_2 + 2x_3 + x_4 \ge 16,\\
x_1 + 2x_2 + 3x_3 + 4x_4 = 18,\\
x_j \ge 0, \ j = \overline{1, 4}.
\end{cases}
\]
Similarly, we define the vectors and matrices as follows (the maximization is converted to a minimization by negating the cost vector, and the $\ge$ constraint is multiplied by $-1$):
\[
c = (-1, -2, -3, -3)^\top, \quad lb = (0, 0, 0, 0), \quad ub = [\,],
\]
\[
A = \begin{pmatrix} 2 & 1 & 1 & 2\\ -2 & -1 & -2 & -1 \end{pmatrix}, \quad b = \begin{pmatrix} 20\\ -16 \end{pmatrix}, \quad A_{eq} = \begin{pmatrix} 1 & 2 & 3 & 4 \end{pmatrix}, \quad b_{eq} = 18.
\]
The complete code to solve the aforementioned LP problem is listed below:

    A = [2 1 1 2; -2 -1 -2 -1];    % create the matrix A
    b = [20; -16];                 % create the vector b
    Aeq = [1 2 3 4];               % create the matrix Aeq
    beq = 18;                      % create the vector beq
    lb = [0 0 0 0];                % create the vector lb
    ub = [];                       % create the vector ub
    c = [-1; -2; -3; -3];          % create the vector c
    % call the linprog solver
    [X, Z] = linprog(c, A, b, Aeq, beq, lb, ub);
    for i = 1:length(X)
        fprintf('x(%d) = %f\n', i, X(i));
    end
    fprintf('The value of the objective function is %f\n', Z);

After running with Matlab, the results are given by
    x1 = 5.966964, x2 = 1.608407, x3 = 2.938741, x4 = 0.000000.
The value of the objective function is -18.000000, i.e., the maximum value of the original objective is 18.

2.4 Applications of dual linear programming

2.4.1 Checking whether a feasible solution is an optimal solution

From Theorem 1.3.6, we can check whether any feasible solution of the primal problem or of the dual problem is an optimal solution of the corresponding problem. Indeed, assume that $x^*$ is a feasible solution of the following primal problem:
\[
f(x) = c^\top x \to \min \quad\text{subject to}\quad D_P: \begin{cases} Ax \ge b,\\ x \ge 0. \end{cases} \qquad (P)
\]
In order to check whether $x^*$ is an optimal solution of problem (P), we can proceed as follows.

Step 1. Find the dual problem of (P):
\[
g(y) = b^\top y \to \max \quad\text{subject to}\quad D_Q: \begin{cases} A^\top y \le c,\\ y \ge 0. \end{cases} \qquad (Q)
\]

Step 2. Establish the following system:
\[
(T): \begin{cases}
\left(\sum_{j=1}^{n} a_{ij} x^*_j - b_i\right) y_i = 0, & i = 1, 2, \dots, m,\\[4pt]
\left(c_j - \sum_{i=1}^{m} a_{ij} y_i\right) x^*_j = 0, & j = 1, 2, \dots, n,\\[4pt]
y \in D_Q.
\end{cases}
\]

Step 3. Solve the system (T): if the system (T) has a solution, then we conclude that $x^*$ is an optimal solution of problem (P); otherwise, $x^*$ is not an optimal solution of problem (P).
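Because system (T) is linear in $y$, Steps 2 and 3 can themselves be carried out by a solver: one searches for any $y$ that satisfies the dual constraints together with the complementary slackness equalities generated by $x^*$. The sketch below is not from the thesis; the function name and the tolerance are illustrative, and it assumes the primal is given exactly in the form of problem (P) above (all variables non-negative).

```matlab
% Illustrative sketch (not from the thesis): test the optimality of a feasible
% x* of (P): min c'x s.t. A*x >= b, x >= 0, by searching for a solution y of
% system (T) with linprog.
function [isOptimal, y] = checkOptimalByDuality(A, b, c, xstar)
    xstar = xstar(:);  b = b(:);  c = c(:);
    m     = size(A, 1);
    tol   = 1e-8;
    slack = A * xstar - b;               % primal slacks, >= 0 for a feasible x*
    Aineq = A';  bineq = c;              % dual feasibility  A' * y <= c
    lb    = zeros(m, 1);                 %                   y >= 0
    Aeq = [];  beq = [];                 % complementary slackness equalities
    for i = find(slack > tol)'           % slack_i > 0  =>  y_i = 0
        row = zeros(1, m);  row(i) = 1;
        Aeq = [Aeq; row];  beq = [beq; 0];
    end
    for j = find(xstar > tol)'           % x*_j > 0  =>  (A' * y)_j = c_j
        Aeq = [Aeq; A(:, j)'];  beq = [beq; c(j)];
    end
    % any point satisfying these constraints will do, so minimize the zero objective
    [y, ~, exitflag] = linprog(zeros(m, 1), Aineq, bineq, Aeq, beq, lb, []);
    isOptimal = (exitflag == 1);         % a suitable y exists, so x* is optimal
end
```

For problems in which some variables are unrestricted in sign (such as Example 2.4.1 below, where $x_3$ is free), the corresponding dual constraints must be imposed as equalities instead of inequalities.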
Example 2.4.1. Consider the following linear programming problem (P):
\[
f(x) = 3x_1 + 4x_2 + x_3 \to \min
\]
subject to
\[
D_P: \begin{cases}
-3x_1 + 2x_2 - 4x_3 \ge 15,\\
2x_1 - x_2 - 5x_3 \ge 8,\\
4x_1 + 2x_2 + 2x_3 \ge 10,\\
x_1 \ge 0, \ x_2 \ge 0.
\end{cases}
\]
Prove that $x^* = (7, 0, -9)$ is an optimal solution of (P).

Solution. The dual problem of (P) is
\[
g(y) = 15y_1 + 8y_2 + 10y_3 \to \max
\]
subject to
\[
D_Q: \begin{cases}
-3y_1 + 2y_2 + 4y_3 \le 3,\\
2y_1 - y_2 + 2y_3 \le 4,\\
-4y_1 - 5y_2 + 2y_3 = 1,\\
y_1 \ge 0, \ y_2 \ge 0, \ y_3 \ge 0.
\end{cases}
\]
One checks that $x^* = (7, 0, -9)$ is a feasible solution of the LP (P); the second constraint is not active at $x^*$, while the first and the third hold with equality. We then consider the following system:
\[
(T): \begin{cases}
\left(\sum_{j} a_{ij} x^*_j - b_i\right) y_i = 0, & i = 1, 2, 3,\\[4pt]
\left(c_j - \sum_{i} a_{ij} y_i\right) x^*_j = 0, & j = 1, 2, 3,\\[4pt]
y \in D_Q.
\end{cases}
\]
Substituting $x^* = (7, 0, -9)$ into this system, we obtain
\[
(T) \iff \begin{cases}
-3y_1 + 2y_2 + 4y_3 = 3,\\
y_2 = 0,\\
-4y_1 - 5y_2 + 2y_3 = 1,\\
y_1 \ge 0, \ y_2 \ge 0, \ y_3 \ge 0
\end{cases}
\iff \begin{cases}
y_1 = 1/5,\\
y_2 = 0,\\
y_3 = 9/10.
\end{cases}
\]
The system (T) has a solution, so $x^*$ is an optimal solution of problem (P).

Remark 2.4.1. We see that $y^* = (1/5, 0, 9/10)$ satisfies $f(x^*) = g(y^*) = 12$. By Corollary 1.3.5, $y^*$ is an optimal solution of (Q). This is the second application of the Complementary Slackness theorem.

2.4.2 Finding the optimal solution set of the dual problem

Example 2.4.2. Consider the following linear programming problem (P):
\[
f(x) = 2x_1 + 5x_2 + 4x_3 + x_4 \to \min
\]
subject to
\[
D_P: \begin{cases}
x_1 - 6x_2 - 2x_4 - 9x_5 = 32,\\
2x_2 + x_3 + x_4 + x_5 = 30,\\
3x_2 + x_5 \le 36,\\
x_j \ge 0, \ j = \overline{1, 5}.
\end{cases}
\]
Let $x^* = (32, 0, 30, 0, 0)$ be an optimal solution of the primal problem (P). Find an optimal solution of the dual problem (Q).

Solution. The dual problem of (P) is
\[
g(y) = 32y_1 + 30y_2 + 36y_3 \to \max
\]
subject to
\[
D_Q: \begin{cases}
y_1 \le 2,\\
-6y_1 + 2y_2 + 3y_3 \le 5,\\
y_2 \le 4,\\
-2y_1 + y_2 \le 1,\\
-9y_1 + y_2 + y_3 \le 0,\\
y_1, y_2 \text{ free}, \ y_3 \le 0.
\end{cases}
\]
Since $x^* = (32, 0, 30, 0, 0)$ is an optimal solution of the LP (P), we consider the following system:
\[
(T): \begin{cases}
\left(\sum_{j} a_{ij} x^*_j - b_i\right) y_i = 0, & i = 1, 2, 3,\\[4pt]
\left(c_j - \sum_{i} a_{ij} y_i\right) x^*_j = 0, & j = 1, \dots, 5,\\[4pt]
y \in D_Q.
\end{cases}
\]
Substituting $x^* = (32, 0, 30, 0, 0)$ into this system, we obtain
\[
(T) \iff \begin{cases}
y_1 = 2 & (\text{since } x_1 = 32 > 0),\\
y_2 = 4 & (\text{since } x_3 = 30 > 0),\\
y_3 = 0 & (\text{since } 3x_2 + x_5 = 0 < 36).
\end{cases}
\]
So $y^* = (2, 4, 0)$ is an optimal solution of the dual problem, with $g(y^*) = 184$.

CONCLUSION

This thesis has provided an overview of dual linear programming problems. Some basic concepts and results involving a pair of primal-dual linear programming problems have been recalled. Particular emphasis has been placed on using the dual simplex algorithm to solve LPPs. However, due to limited time and knowledge, shortcomings are inevitable. I hope to receive comments and suggestions from teachers so that the thesis can be made more complete.

REFERENCES

[1] H. A. Eiselt, C. L. Sandblom, Linear Programming and its Applications, Springer, 2007.
[2] H. Karloff, Linear Programming, Springer/Birkhäuser, 2009.
[3] S. K. Mishra, B. Ram, Introduction to Linear Programming with MATLAB, CRC Press, 2018.
[4] Ping-Qi Pan, Linear Programming Computation, Springer, Heidelberg, 2014.
[5] N. Ploskas, N. Samaras, Linear Programming using MATLAB, Springer, 2017.
