A NEW METHOD TO SOLVE STOCHASTIC PROGRAMMING PROBLEMS UNDER
PROBABILISTIC CONSTRAINT WITH DISCRETE RANDOM VARIABLES
BY TONGYIN LIU
A dissertation submitted to the Graduate School—New Brunswick, Rutgers, The State University of New Jersey
in partial fulfillment of the requirements for the degree of
Doctor of Philosophy
Graduate Program in Operations Research
Written under the direction of
Professor András Prékopa
© 2006 Tongyin Liu
ABSTRACT OF THE DISSERTATION
A New Method to Solve Stochastic Programming
Problems Under Probabilistic Constraint with Discrete Random Variables
by Tongyin Liu
Dissertation Director: Professor András Prékopa
In this dissertation, probabilistic constrained stochastic programming problems are considered with discrete random variables on the r.h.s. of the stochastic constraints. In Chapters 2 and 3 it is assumed that the random vector has a multivariate Poisson, binomial or geometric distribution. We prove a general theorem implying that in each of these cases the c.d.f. majorizes the product of the univariate marginal c.d.f.'s, and then use the latter in the probabilistic constraint. The new problem is solved in two steps: (1) first we replace the c.d.f.'s in the probabilistic constraint by smooth logconcave functions and solve the continuous problem; (2) then we search for the optimal solution of the problem with the discrete random variables. In Chapter 4, numerical examples are presented and a comparison is made with the solution of a problem taken from the literature. In Chapter 5, some properties of the p level efficient points of a random variable are studied, and a new algorithm to enumerate all p level efficient points is developed. In Chapter 6, p level efficient points in linear systems are studied.
Acknowledgements
I would like to take this opportunity to express my deepest gratitude and appreciation to my advisor, Professor András Prékopa, for the opportunity to work with him on this topic, and for his advice and guidance.
I have very much enjoyed the courses taught by Professors András Prékopa, Endre Boros, and David Shanno. I appreciate the Hungarian brains. I am lucky to have taken the Case Study course from Professor Michael Rothkopf, which gave me some opportunities to practice my OR skills. I was glad to audit Professor Andrzej Ruszczyński's Operations Research Models in Finance and to benefit from his ideas on financial risk modeling. I am happy and thankful to have attended Dr. Hammer's class and RUTCOR's seminars. I am also grateful to the friendly RUTCOR staff, Clare and Terry.
My colleagues and friends at RUTCOR will be at the heart of my fondest memories: Igor Zverovich, Sandor Szedmak, James Wojtowicz and Lijie Shi. The leisure time with Martin Milanic, Noam Goldberg and Gabor Rudolf gave me so much fun.
I thank my parents for so many years of support; their kindness and hard work are the utmost examples to follow. During their visit, my parents-in-law gave us so much help in taking care of my son. I would also like to take this opportunity to express my thanks to my wife Xiaoling and our son Jialiang for everything.
Finally, I would like to thank DIMACS for their generous financial support, which made life easier.
Dedication
To Xiaoling, Leon, and my family
Table of Contents

Abstract
Acknowledgements
Dedication
List of Tables
List of Figures
List of Abbreviations
1 Introduction
   1.1 Some definitions
   1.2 Literature review
   1.3 Outline of the thesis
2 The case of independent Poisson, binomial and geometric random variables
   2.1 Independent Poisson random variables
   2.2 Independent binomial random variables
   2.3 Independent geometric random variables
   2.4 Relations between the feasible sets of the convex programming problems and the discrete cases
   2.5 Local search method
   2.6 Probability maximization under constraints
3 Inequalities for the joint probability distribution of partial sums of independent random variables
4 Numerical examples
   4.1 A vehicle routing example
   4.2 A stochastic network design problem
5 Methods of enumerating all PLEP's of a random vector
   5.1 An algorithm to enumerate all PLEP's
   5.2 Numerical examples
   5.3 Comparison with PVB algorithm
6 PLEP's of linear combinations of random variables
   6.1 Preliminaries of networks
   6.2 p level efficient points in the system of Gale-Hoffman inequalities
References
Vita
List of Tables

4.1 Expected demands
4.2 Computing results for different (p, q)
5.1 PLEP's of a Poisson random vector with parameter λ = (2 3 2 2 1 3 2 1 4 2)
5.2 All the points in "strips"
5.3 PLEP's for different p
5.4 Computing results
List of Figures

2.1 An illustration of two-dimensional search of Hooke and Jeeves
4.1 The graph of the vehicle routing problem
4.2 A power system with four nodes
5.1 Algorithm of enumerating all PLEP's
5.2 Illustration of algorithm Step 2
5.3 An illustration of PVB algorithm
5.4 An illustration of the new algorithm
6.1 A two-node network
List of Abbreviations

PLEP   p level efficient point
R      set of real numbers
R_+    set of nonnegative real numbers
Z      set of integers
Z_+    set of nonnegative integers
Chapter 1
Introduction
Stochastic programming is the science that provides us with tools to design and control stochastic systems with the aid of mathematical programming techniques [27]. Probabilistic constraints are one of the main challenges of modern stochastic programming. The motivation is as follows: if in the linear program

min c^T x
subject to Tx ≥ ξ
Ax ≥ b
x ≥ 0,

where A is an m × n matrix, T is an s × n matrix, ξ = (ξ_1, ..., ξ_s)^T is a random vector with values in R^s, c, x ∈ R^n and b ∈ R^m, we require that Tx ≥ ξ shall hold at least with some specified probability p ∈ (0, 1), rather than for all possible realizations of the right-hand side, then we arrive at the following problem formulation:

min c^T x
subject to P(Tx ≥ ξ) ≥ p     (1.1)
Ax ≥ b
x ≥ 0,

where P denotes probability. For further reference, we can also write (1.1) in the following equivalent form:

min c^T x
subject to P(ξ ≤ y) ≥ p
Tx = y     (1.2)
Ax ≥ b
x ≥ 0.
Historically, the first paper that used the programming under probabilistic constraint principle was the one by Charnes et al. (1958), where probabilistic constraints are imposed individually on each constraint involving random variables. It was called "chance constrained programming" by the authors, and this approach is correct in some cases, especially when the random variables appearing in the different stochastic constraints are independent. Miller and Wagner (1965) take the probabilistic constraint jointly on the stochastic constraints but handle only independent random variables on the right-hand sides of the stochastic constraints. Problem (1.1) for a general random vector ξ with stochastically dependent components was introduced in [19] and [21], where the probabilistic constraint is taken jointly for the stochastic constraints and the random variables involved are, in general, stochastically dependent. In [19], [21] and other subsequent papers (e.g., see [20] and [22]) convexity theorems have been proved and algorithms have been proposed for the solution of problem (1.1) and the companion problem:
max P(Tx ≥ ξ)
subject to Ax ≥ b     (1.3)
x ≥ 0.
For a detailed presentation of these results, the reader is referred to [27] and [29]. In the next section we recall some primary definitions used in stochastic programming.
1.1 Some definitions
Logconcave measures have been introduced in the stochastic programming framework in [20], [22] and [21], but they have become widely used also in statistics, convex geometry, mathematical analysis, economics, etc.
Definition 1.1 A nonnegative function f defined on a convex subset A of the space R^m is said to be logarithmically concave (logconcave) if for every pair x, y ∈ A and 0 < λ < 1 we have the inequality

f(λx + (1 − λ)y) ≥ [f(x)]^λ [f(y)]^{1−λ}.     (1.4)

If f is positive valued, then log f is a concave function on A. If the inequality is strict for x ≠ y, then f is said to be strictly logconcave.
Definition 1.2 A probability measure P defined on the Borel sets of R^n is said to be logconcave if for any convex subsets A, B of R^n and 0 < λ < 1 we have the inequality

P(λA + (1 − λ)B) ≥ [P(A)]^λ [P(B)]^{1−λ},

where λA + (1 − λ)B = {z = λx + (1 − λ)y | x ∈ A, y ∈ B}.
For the case of a discrete ξ, the concept of the p level efficient point (PLEP) has been introduced in [25]. Below we recall this definition. Let F(z) designate the probability distribution function of ξ, i.e., F(z) = P(ξ ≤ z), z ∈ R^r.

Definition 1.3 Let Z be the set of possible values of ξ. A vector z ∈ Z is said to be a p level efficient point, or PLEP, of the probability distribution of ξ if F(z) = P(ξ ≤ z) ≥ p and there is no y ∈ Z such that F(y) ≥ p, y ≤ z and y ≠ z.
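As an illustration of Definition 1.3 (an addition to the text, not part of the original), the PLEP's of a small discrete distribution can be enumerated by brute force. The Python sketch below does this for a hypothetical two-dimensional vector with independent Poisson components; the dissertation's own computations were carried out in MATLAB.

```python
from scipy.stats import poisson

# Hypothetical data: xi = (xi_1, xi_2) with independent Poisson components.
lam = (2.0, 3.0)
p = 0.9

def F(z):
    # joint c.d.f. of the independent Poisson vector
    return poisson.cdf(z[0], lam[0]) * poisson.cdf(z[1], lam[1])

# Every PLEP z satisfies F_i(z_i) >= p, so z_i >= q_i; the upper bound Q_i below
# holds because a point with a larger i-th coordinate would be dominated.
q = [int(poisson.ppf(p, l)) for l in lam]
Q = [int(poisson.ppf(p / poisson.cdf(q[1 - i], lam[1 - i]), lam[i])) for i in range(2)]

level_set = [(i, j) for i in range(q[0], Q[0] + 1) for j in range(q[1], Q[1] + 1)
             if F((i, j)) >= p]
# A point of the level set is p level efficient if no other point of the level
# set lies (componentwise) below it.
pleps = [z for z in level_set
         if not any(y != z and y[0] <= z[0] and y[1] <= z[1] for y in level_set)]
print(sorted(pleps))
```

Chapter 5 develops a far more efficient enumeration algorithm; the brute force search above is only meant to make the definition concrete.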
Dentcheva, Prékopa and Ruszczyński (2000) remarked that, by a classical theorem of Dickson [2] on partially ordered sets (posets), the number of PLEP's is finite even if Z is not a finite set. Let v^(j), j ∈ J, be the PLEP's. Since

{y | P(ξ ≤ y) ≥ p} = Z_p = ⋃_{j∈J} (v^(j) + R_+^r),
a further equivalent form of problem (1.1) is the following:

min c^T x
subject to Tx ∈ Z_p
Ax ≥ b     (1.5)
x ≥ 0.

1.2 Literature review
The first paper on problem (1.1) with a discrete random vector ξ was published by Prékopa (1990). He presented a general method to solve problem (1.5), assuming that the PLEP's are enumerated. Note that problem (1.5) can be regarded as a disjunctive programming problem. Sen (1992) studied the set of all valid inequalities and the facets of the convex hull of the disjunctive set implied by the probabilistic constraint in (1.2). Prékopa, Vizvári and Badics (1998) relaxed problem (1.5) in the following way:

min c^T x
subject to Tx ≥ Σ_{j∈J} λ_j v^(j)
Σ_{j∈J} λ_j = 1, λ_j ≥ 0, j ∈ J     (1.6)
Ax ≥ b
x ≥ 0,

gave an algorithm to find all the PLEP's, and proposed a cutting plane method to solve problem (1.6). In general, however, the number of p level efficient points of ξ is very large. To avoid the enumeration of all PLEP's, Dentcheva, Prékopa and Ruszczyński (2000) presented a cone generation method to solve problem (1.6). Vizvári (2002) further analyzed the above solution technique with emphasis on the choice of the Lagrange multipliers and on the solution of the knapsack problem that comes up as a PLEP generation technique in the case of independent random variables.
... produce good chips of given types. These authors solve a problem similar to (1.3) rather than problem (1.1), where the objective function in (1.3) is 1 − P(Tx ≥ ξ). Dentcheva et al. [7] present and solve a traffic assignment problem in telecommunication, where the problem is of type (1.1) and the random variables are demands for transmission. In the design of a stochastic transportation network in power systems, Prékopa and Boros [26] present a method to find the probability of the existence of a feasible flow, where the demands for power at the nodes of the network are integer valued random variables.
1.3 Outline of the thesis
Chapter 2
The case of independent Poisson, binomial and geometric
random variables
Assume that ξ_1, ..., ξ_r are independent and nonnegative integer valued. Let F_i(z) be the c.d.f. of ξ_i, i = 1, ..., r. Then problem (1.1) can be written in the following form:

min c^T x
subject to Tx = y
Ax ≥ b, x ≥ 0     (2.1)
∏_{i=1}^r F_i(y_i) ≥ p.

Note that the inequality

P(T_i x ≥ ξ_i) ≥ P(Tx ≥ ξ) ≥ p, (0 < p < 1)

implies T_i x ≥ 0, i = 1, ..., r. Thus, if ξ is a discrete random vector, the above problem is equivalent to the following:

min c^T x
subject to Tx ≥ y, y ∈ Z_+^r
Ax ≥ b, x ≥ 0     (2.2)
∏_{i=1}^r F_i(y_i) ≥ p.

2.1 Independent Poisson random variables

Let ξ be a random variable that has Poisson distribution with parameter λ > 0. The values of its c.d.f. at the nonnegative integers are

P_n = Σ_{k=0}^n (λ^k / k!) e^{−λ},  n = 0, 1, ....

Let

F(n; λ) = ∫_λ^∞ z^n e^{−z} dz / Γ(n + 1),

where

Γ(p) = ∫_0^∞ x^{p−1} e^{−x} dx, for p > 0.

It is well known (see, e.g., Prékopa 1995) that for n ≥ 0,

P_n = Σ_{k=0}^n (λ^k / k!) e^{−λ} = ∫_λ^∞ (1/n!) z^n e^{−z} dz.     (2.3)
For the logconcavity of P_n, we recall the following theorem.

Theorem 2.1 ([2], [27]) For any fixed λ > 0, the function F(n; λ) is logconcave on the entire real line, strictly logconcave on {n | n > −1}, and P_n = F(n; λ) for any nonnegative integer n.
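As a quick numerical illustration of Theorem 2.1 (an addition to the text, using SciPy rather than the MATLAB routines employed in the dissertation), the extension F(n; λ) is the regularized upper incomplete gamma function, its values at the integers reproduce the Poisson c.d.f., and its logarithm is concave:

```python
import numpy as np
from scipy.special import gammaincc
from scipy.stats import poisson

lam = 4.2  # hypothetical parameter
# F(n; lambda) = Q(n + 1, lambda), the regularized upper incomplete gamma function.
for n in range(10):
    assert np.isclose(gammaincc(n + 1, lam), poisson.cdf(n, lam))

# Between the integers the extension is smooth; numerical check of logconcavity in n:
y = np.linspace(0.0, 9.0, 200)
log_F = np.log(gammaincc(y + 1.0, lam))
assert np.all(np.diff(log_F, 2) <= 1e-10)   # second differences of a concave function
```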
Suppose that ξ_1, ..., ξ_r are independent Poisson random variables with parameters λ_1, ..., λ_r, respectively. To solve (2.2), first we consider the following problem:

min c^T x
subject to Tx = y
Ax ≥ b, x ≥ 0     (2.4)
∏_{i=1}^r ( ∫_{λ_i}^∞ z^{y_i} e^{−z} dz / Γ(y_i + 1) ) ≥ p.

By Theorem 2.1, (2.4) is a convex nonlinear programming problem.
2.2 Independent binomial random variables
Suppose ξ has binomial distribution with parameters x and p, 0 < p < 1, where x is a nonnegative integer. It is known (see, e.g., Singh et al., 1980; Prékopa 1995) that for a positive integer a ≤ x,

Σ_{i=a}^x (x choose i) p^i (1 − p)^{x−i} = ∫_0^p y^{a−1} (1 − y)^{x−a} dy / ∫_0^1 y^{a−1} (1 − y)^{x−a} dy.     (2.6)
For fixed a > 0 define G(a, x), as a function of the continuous variable x, by the right-hand side of (2.6) for x ≥ a, and let G(a, x) = 0 for x < a. We have the following theorem.

Theorem 2.2 ([27], [40]) Let a > 0 be a fixed number. Then G(a, x) is strictly increasing and strictly logconcave for x > a.

If x is an integer then G(a, x) = 1 − F(a − 1), where F is the c.d.f. of the binomial distribution with parameters x and p. While Theorem 2.2 provides us with a useful tool in some applications (cf. [40]), we need a smooth logconcave extension of F, and it cannot be derived from G(a, x).
Let X be a binomial random variable with parameters n and p. Then, by (2.6), we have for every x = 0, 1, ..., n − 1:

P(X ≤ x) = ∫_p^1 y^x (1 − y)^{n−x−1} dy / ∫_0^1 y^x (1 − y)^{n−x−1} dy.     (2.7)

The function of the variable x on the right-hand side of (2.7) is defined for every x satisfying −1 < x < n. Its limit is 0 if x → −1 and 1 if x → n. Let

F(x; n, p) = 0, if x ≤ −1;
F(x; n, p) = ∫_p^1 y^x (1 − y)^{n−x−1} dy / ∫_0^1 y^x (1 − y)^{n−x−1} dy, if −1 < x < n;     (2.8)
F(x; n, p) = 1, if x ≥ n.
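The extension (2.8) is a regularized incomplete beta function, so it can be evaluated directly with standard software. The following check (an illustrative addition, with arbitrarily chosen parameters) verifies (2.7) at the integers and evaluates F(x; n, p) at a non-integer point:

```python
import numpy as np
from scipy.special import betainc
from scipy.stats import binom

n, p = 12, 0.35  # hypothetical parameters
# (2.7): for x = 0, 1, ..., n-1,  P(X <= x) = I_{1-p}(n - x, x + 1).
for x in range(n):
    assert np.isclose(binom.cdf(x, n, p), betainc(n - x, x + 1, 1.0 - p))

# The same expression with real x in (-1, n) is the smooth extension F(x; n, p) of (2.8).
x = 3.5
print(betainc(n - x, x + 1.0, 1.0 - p))
```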
We have the following:
Theorem 2.3 ([16]) The function F(x; n, p) satisfies the relations

lim_{x→−1+0} F(x; n, p) = 0,  lim_{x→n−0} F(x; n, p) = 1;     (2.9)

it is strictly increasing in the interval (−1, n), has continuous derivative and is logconcave on R^1.
Proof We skip the proof of the relations because it requires only standard reasoning. To prove the other assertions, first transform the integral in (2.8) by the introduction of the new variable y/(1 − y) = t. We obtain

F(x; n, p) = ∫_λ^∞ t^x / (1 + t)^{n+1} dt / ∫_0^∞ t^x / (1 + t)^{n+1} dt,  −1 < x < n,     (2.10)

where λ = p/(1 − p). To prove strict monotonicity we take the first derivative of this function:

dF(x; n, p)/dx = F(x; n, p) [ ∫_λ^∞ (t^x ln t / (1 + t)^{n+1}) dt / ∫_λ^∞ (t^x / (1 + t)^{n+1}) dt − ∫_0^∞ (t^x ln t / (1 + t)^{n+1}) dt / ∫_0^∞ (t^x / (1 + t)^{n+1}) dt ]     (2.11)

and show that it is positive if −1 < x < n. The derivative, with respect to λ, of the first term in the parenthesis equals

[ −(λ^x ln λ / (1 + λ)^{n+1}) ∫_λ^∞ (t^x / (1 + t)^{n+1}) dt + (λ^x / (1 + λ)^{n+1}) ∫_λ^∞ (t^x ln t / (1 + t)^{n+1}) dt ] / ( ∫_λ^∞ (t^x / (1 + t)^{n+1}) dt )^2.

This is a positive value, since

∫_λ^∞ (t^x ln t / (1 + t)^{n+1}) dt > ln λ ∫_λ^∞ (t^x / (1 + t)^{n+1}) dt.

Thus, the first term in the parenthesis in (2.11) is an increasing function of λ, which proves the positivity of the first derivative of F(x; n, p) in the interval −1 < x < n.

The continuity of the derivative of F(x; n, p) on R^1 follows from (2.11). In fact, the derivative is continuous if −1 < x < n, and by the application of a standard reasoning (similar to the one needed to prove (2.9)) we can show that

lim_{x→−1+0} dF(x; n, p)/dx = lim_{x→n−0} dF(x; n, p)/dx = 0.

To prove the logconcavity we look at the second derivative of ln F(x; n, p), which satisfies the equation

d^2 ln F(x; n, p)/dx^2 = [ ∫_λ^∞ ((ln t)^2 t^x / (1 + t)^{n+1}) dt / ∫_λ^∞ (t^x / (1 + t)^{n+1}) dt − ( ∫_λ^∞ (t^x ln t / (1 + t)^{n+1}) dt / ∫_λ^∞ (t^x / (1 + t)^{n+1}) dt )^2 ] − [ ∫_0^∞ ((ln t)^2 t^x / (1 + t)^{n+1}) dt / ∫_0^∞ (t^x / (1 + t)^{n+1}) dt − ( ∫_0^∞ (t^x ln t / (1 + t)^{n+1}) dt / ∫_0^∞ (t^x / (1 + t)^{n+1}) dt )^2 ].     (2.12)

Let us introduce the following p.d.f.:

g(t) = e^{(x+1)t} / ( (1 + e^t)^{n+1} ∫_{−∞}^∞ e^{(x+1)u} / (1 + e^u)^{n+1} du ),  −∞ < t < ∞,     (2.13)

where x is a fixed number satisfying −1 < x < n. The function g(t) is easily seen to be logconcave on the entire real line. Let X be a random variable that has p.d.f. equal to (2.13). Then (2.12) can be rewritten as

d^2 ln F(x; n, p)/dx^2 = [ ∫_{ln λ}^∞ t^2 g(t) dt / ∫_{ln λ}^∞ g(t) dt − ( ∫_{ln λ}^∞ t g(t) dt / ∫_{ln λ}^∞ g(t) dt )^2 ] − [ ∫_{−∞}^∞ t^2 g(t) dt − ( ∫_{−∞}^∞ t g(t) dt )^2 ]
= E(X^2 | X > ln λ) − E^2(X | X > ln λ) − ( E(X^2) − E^2(X) ).     (2.14)
Burridge (1982) has shown that if a random variable X has a logconcave p.d.f., then

E(X^2 | X > u) − E^2(X | X > u)

is a decreasing function of u (a proof of this fact is given in Prékopa 1995, pp. 118-119). If we apply this in connection with the function (2.13), then we can see that the value in (2.14) is negative. □
Remark 2.1 The proof of Theorem 2.1 is similar to the proof of Theorem 2.3. In that case the g(t) function is the following:

g(t) = e^{(x+1)t − e^t} / ∫_{−∞}^∞ e^{(x+1)u − e^u} du,  −∞ < t < ∞.

Suppose ξ_1, ξ_2, ..., ξ_r are independent binomial random variables with parameters (n_1, p_1), ..., (n_r, p_r), respectively. To solve problem (2.2), we first solve the following problem:

min c^T x
subject to Tx = y
Ax ≥ b     (2.15)
Σ_{i=1}^r [ ln ∫_{p_i}^1 t^{y_i} (1 − t)^{n_i−y_i−1} dt − ln ∫_0^1 t^{y_i} (1 − t)^{n_i−y_i−1} dt ] ≥ ln p
x ≥ 0.

This is again a convex programming problem, by Theorem 2.3.
2.3 Independent geometric random variables
Let ξ be a random variable with geometric distribution: ξ has probability function P(k) = p q^{k−1} if k = 1, 2, ..., and P(k) = 0 otherwise, where q = 1 − p and 0 < p < 1. Its distribution function is

P_n = Σ_{k=1}^n p q^{k−1} = 1 − q^n.     (2.16)
A general theorem ensures, but a simple direct reasoning also shows (see, e.g., Prékopa 1995, p. 110), that P_n is a logconcave sequence. The continuous counterpart of the geometric distribution is the exponential distribution. If λ is the parameter of the latter and λ = ln(1/q), then

1 − e^{−λx} = P_n for x = n.     (2.17)

For the c.d.f. F(x) = 1 − e^{−λx}, we have the following theorem:
Theorem 2.4 Let λ > 0 be a fixed number. Then F(x) = 1 − e^{−λx} is strictly increasing and strictly logconcave for x > 0.
Proof Since F'(x) = λ e^{−λx} > 0, F(x) is strictly increasing for x > 0. Rewrite F(x) as follows:

F(x) = 1 − e^{−λx} = (e^{λx} − 1) / e^{λx}.

Then ln F(x) = ln(e^{λx} − 1) − λx. Taking the second derivative of ln F(x), we get

(ln F(x))'' = − λ^2 e^{λx} / (e^{λx} − 1)^2 < 0.

Hence, the second assertion is true. □
Suppose the components ξ_1, ..., ξ_r of the random vector ξ are independent geometric variables with parameters p_1, ..., p_r, respectively. In this case, to solve problem (2.2), we first solve the following convex programming problem:

min c^T x
subject to Tx = y
Ax ≥ b     (2.18)
Σ_{i=1}^r ln(1 − e^{−λ_i y_i}) ≥ ln p
x ≥ 0.
The above three convex programming problems can be solved by many known methods, for example by the interior trust region approach [6] as implemented in MatLab 6. They may also be solved by using CPLEX if a numerical method to calculate the incomplete gamma and beta functions is available. In this dissertation, we use MatLab 6 to solve the problems in the numerical example chapter.
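For illustration, here is how a tiny instance of the geometric relaxation (2.18) can be solved with SciPy's trust-region constrained optimizer. This is only a sketch with made-up data (T = I, two geometric components), not the MATLAB implementation used in the dissertation.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

c = np.array([1.0, 1.0])
T = np.eye(2)
q = np.array([0.7, 0.5])      # q_i = 1 - p_i for the geometric components
lam = -np.log(q)              # lambda_i = ln(1 / q_i)
p = 0.9

def log_prob(x):
    # left-hand side of the constraint in (2.18): sum_i ln(1 - exp(-lambda_i y_i))
    y = T @ x
    return np.sum(np.log1p(-np.exp(-lam * y)))

res = minimize(lambda x: c @ x, x0=np.full(2, 10.0), method="trust-constr",
               constraints=[NonlinearConstraint(log_prob, np.log(p), np.inf)],
               bounds=[(1e-6, None)] * 2)
print(res.x, c @ res.x)
# Rounding res.x and running the local search of Section 2.5 over the integer
# lattice then yields a solution of the discrete problem (2.2).
```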
2.4 Relations between the feasible sets of the convex programming problems and the discrete cases
First, we have the following theorem:
Theorem 2.5 ([16]) Let ξ be a discrete random vector with values in Z_+^r, F(z) the c.d.f. of ξ, and P the set of all PLEP's of ξ. Let F*(x) be a smooth function, x ∈ R_+^r, such that F*(z) = F(z) whenever z ∈ Z_+^r. Then

P ⊆ Z_p = {x ∈ Z_+^r | F*(x) ≥ p}.     (2.19)

Proof Let Z_F = {y ∈ Z_+^r | P(ξ ≤ y) ≥ p}. Then P ⊆ Z_F. Since the values of F(z) and F*(z) coincide at the lattice points, Z_F = Z_p, and the assertion follows. □
Let ξ be an r-component random vector, and P the set of all PLEP's of ξ. Let

F(y; λ) = ∫_λ^∞ z^y e^{−z} dz / Γ(y + 1),  y > −1,

F(y; n, p') = ∫_{p'}^1 z^y (1 − z)^{n−y−1} dz / ∫_0^1 z^y (1 − z)^{n−y−1} dz,  −1 < y < n,

and

G(y; λ) = 1 − e^{−λy},  y > 0,

where Γ(·) is the gamma function and p' ∈ (0, 1). Let

Z_λ = {y ∈ Z_+^r | ∏_{i=1}^r F(y_i; λ_i) ≥ p, λ_i > 0, i = 1, ..., r},

Z_β = {y ∈ Z_+^r | ∏_{i=1}^r F(y_i; n_i, p_i) ≥ p, p_i ∈ (0, 1), −1 < y_i < n_i, i = 1, ..., r},

and

Z_g = {y ∈ Z_+^r | ∏_{i=1}^r G(y_i; λ_i) ≥ p, y_i > 0, λ_i = ln(1/q_i), i = 1, ..., r}.

Then we have the following corollary.
Corollary 2.1 (a) If all components of ξ have independent Poisson distributions with parameters λ_1, λ_2, ..., λ_r, respectively, then P ⊆ Z_λ.

(b) If all components of ξ have independent binomial distributions with parameters (n_1, p_1), (n_2, p_2), ..., (n_r, p_r), respectively, then P ⊆ Z_β.

(c) If all components of ξ have independent geometric distributions with parameters p_1, p_2, ..., p_r, respectively, then P ⊆ Z_g.
The continuous counterparts of these sets are obtained by letting y run over R_+^r instead of the lattice; for the geometric case, for instance,

Z'_g = {y ∈ R_+^r | ∏_{i=1}^r G(y_i; λ_i) ≥ p, y_i > 0, λ_i = ln(1/(1 − p_i)), i = 1, ..., r}.

By Theorem 2.1, Theorem 2.3 and the logconcavity of the c.d.f. of the exponential distribution, these continuous sets are all convex. From Theorem 2.5, for a multivariate random vector ξ whose components have independent Poisson, binomial or geometric distributions, all the PLEP's of ξ are therefore contained in a convex set obtained from the incomplete gamma function, the incomplete beta function or the exponential distribution function, respectively.
Thus, for problem (2.2), if all components of ξ have independent Poisson, binomial or geometric distributions, we can form the corresponding relaxed convex programming problem as shown in (2.5), (2.15) and (2.18), respectively.
2.5 Local search method

The modified Hooke and Jeeves direct search method is as follows. In each search step it comprises two kinds of moves: exploratory and pattern. Let Δx_i be the step length in each of the directions e_i, i = 1, 2, ..., r.
The Method

Exploratory move

Step 0: Set i = 1 and compute F = f(x*), where x* = ⌊x̄⌋ = (x_1, x_2, ..., x_r).

Step 1: Set x := (x_1, x_2, ..., x_i + Δx_i, ..., x_r).

Step 2: If f(x) < F and x ∈ D, then set F = f(x), i := i + 1, and go to Step 1.

Step 3: If f(x) ≥ F and x ∈ D, then set x := (x_1, x_2, ..., x_i − 2Δx_i, ..., x_r). If f(x) < F and x ∈ D, the new trial point is retained; set F = f(x), i := i + 1, and go to Step 1. If f(x) ≥ F, then the move is rejected and x_i remains unchanged; set i := i + 1 and go to Step 1.

Pattern move

Step 1: Set x = x^2 + (x^2 − x^0), where x^2 is the point arrived at by the exploratory moves and x^0 is the base point from which the exploratory moves producing x^2 were started.

Step 2: Start the exploratory moves from x. If, for the point x obtained by the exploratory moves, f(x) < f(x^2) and x ∈ D, then the pattern move is accepted. Otherwise x^2 becomes the starting point and the process restarts from Step 1 of the exploratory move.
Remark. When we consider discrete random variables which have Poisson, binomial or geometric distributions, we set Δx_i = 1.

The search process is illustrated in Figure 2.1 [15].

Figure 2.1: An illustration of two-dimensional search of Hooke and Jeeves

In Figure 2.1, the points are numbered according to the sequence in which they are selected. x^1 is a starting base. After x^3 fails, while x^2 and x^4 are successes, x^4 becomes the new base.
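A compact Python sketch of the modified Hooke and Jeeves search described above is given below. It is a simplified illustration, not the author's MATLAB implementation: f is the objective, feasible tests membership in D, and the step length is 1 as in the Remark.

```python
import numpy as np

def hooke_jeeves_discrete(f, feasible, x0, max_iter=1000):
    """Minimize f over integer points, accepting only moves that stay in D."""
    def explore(base):
        x, fx = base.copy(), f(base)
        for i in range(len(x)):
            for delta in (1, -1):            # exploratory move in direction e_i
                trial = x.copy()
                trial[i] += delta
                if feasible(trial) and f(trial) < fx:
                    x, fx = trial, f(trial)
                    break                    # keep the improvement, next direction
        return x, fx

    base = np.asarray(x0, dtype=int)
    f_base = f(base)
    for _ in range(max_iter):
        new, f_new = explore(base)
        if f_new >= f_base:                  # no improving exploratory move: stop
            return base, f_base
        pattern = new + (new - base)         # pattern move through the new base
        base, f_base = new, f_new
        if feasible(pattern):
            cand, f_cand = explore(pattern)
            if f_cand < f_base:
                base, f_base = cand, f_cand
    return base, f_base
```

For problem (2.2) one would take f(x) = c^T x and let feasible test both Ax ≥ b, x ≥ 0 and the probabilistic condition ∏_i F_i((Tx)_i) ≥ p; for the probability maximization of Section 2.6 the roles of the objective and of the cost constraint are interchanged.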
2.6 Probability maximization under constraints

Now we consider problem (1.1) and the following problem together:

max P(Tx ≥ ξ)
subject to c^T x ≤ K     (2.21)
Ax ≥ b
x ≥ 0,

where ξ is a random vector and K is a fixed number. In [27], the relations between problem (1.1) and (2.21) are discussed.

Suppose the components of the random vector ξ are independent. Then the objective function of problem (2.21) is h(x) = ∏_{i=1}^r F_i(y_i), where Tx = y. Since F_i(y_i) > 0, we can take the natural logarithm of h(x) and write problem (2.21) in the following form:

max ln h(x)
subject to c^T x ≤ K     (2.22)
Ax ≥ b
x ≥ 0.

If ξ is a Poisson random vector, problem (2.22) can be approximated by solving the following problem:

max Σ_{i=1}^r ln( 1 − (1/Γ(y_i + 1)) ∫_0^{λ_i} t^{y_i} e^{−t} dt )
subject to Tx = y
Ax ≥ b     (2.23)
c^T x ≤ K
x ≥ 0.
From Theorem 2.1, the objective function of problem (2.23) is concave. Let x̄ be the optimal solution of problem (2.22) and x* = ⌊x̄⌋. Then we apply the modified Hooke and Jeeves search method to search for the optimal solution of problem (2.21) around x*, as described above, where D is replaced by

D = {x | c^T x ≤ K, Ax ≥ b, x ≥ 0}

and "<" and "≤" are replaced by ">" and "≥", respectively. A numerical example in Section 4.1 illustrates the details of this procedure.
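A sketch of this procedure for a Poisson vector, using SciPy in place of the MATLAB tools of the dissertation and entirely made-up data (T = I, two components), is:

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint
from scipy.special import gammaincc

c = np.array([2.0, 3.0])
lam = np.array([4.0, 6.0])
K = 40.0                      # budget

def neg_log_prob(x):
    # negative of ln h(x) = sum_i ln F(x_i; lambda_i), for T = I
    return -np.sum(np.log(gammaincc(x + 1.0, lam)))

res = minimize(neg_log_prob, x0=np.array([5.0, 7.0]), method="trust-constr",
               constraints=[LinearConstraint(c[np.newaxis, :], -np.inf, K)],
               bounds=[(0.0, None)] * 2)
x_star = np.floor(res.x)      # round, then search the nearby lattice points in D
print(x_star, np.exp(-res.fun))
```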
Chapter 3
Inequalities for the joint probability distribution of partial sums of independent random variables
In this chapter, we consider the joint probability distribution of partial sums of independent random variables. We assume that the r.h.s. random variables of problem (1.1) are partial sums of independent ones, where all of them are either Poisson or binomial or geometric with arbitrary parameters. The probability that a joint constraint of one of these types is satisfied is shown to be bounded from below by the product of the probabilities of the individual constraints. The probabilistic constraint is then imposed on the lower bound. Smooth logconcave c.d.f.'s are fitted to the univariate discrete c.d.f.'s and the continuous problem can be solved numerically.

For the proof of our main theorems in this chapter, we need the following lemma.
Lemma 3.1 ([16]) Let 0 < p < 1, q = 1 − p, and let a_0 ≥ a_1 ≥ 0, b_0 ≥ b_1 ≥ 0, ..., z_0 ≥ z_1 ≥ 0. Then we have the inequality

p a_0 b_0 ··· z_0 + q a_1 b_1 ··· z_1 ≥ (p a_0 + q a_1)(p b_0 + q b_1) ··· (p z_0 + q z_1).
Proof We prove the assertion by induction. For the case of two factors, a_0 ≥ a_1, b_0 ≥ b_1, the assertion is

p a_0 b_0 + q a_1 b_1 ≥ (p a_0 + q a_1)(p b_0 + q b_1).

This is easily seen to be the same as

p q (a_0 − a_1)(b_0 − b_1) ≥ 0,
which holds true, by assumption. Looking at the general case, we can write

p a_0 (b_0 ··· z_0) + q a_1 (b_1 ··· z_1)
≥ (p a_0 + q a_1)(p b_0 ··· z_0 + q b_1 ··· z_1)
≥ (p a_0 + q a_1)(p b_0 + q b_1)(p c_0 ··· z_0 + q c_1 ··· z_1)
≥ ··· ≥ (p a_0 + q a_1)(p b_0 + q b_1) ··· (p z_0 + q z_1).

Thus the lemma is proved. □
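A quick randomized check of Lemma 3.1 (an illustrative addition; the factors are taken nonnegative, as they are in the probabilistic application below):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    r = rng.integers(2, 6)          # number of factor pairs
    lo = rng.random(r)              # a_1, b_1, ..., z_1
    hi = lo + rng.random(r)         # a_0 >= a_1, b_0 >= b_1, ..., z_0 >= z_1
    p = rng.random()
    q = 1.0 - p
    lhs = p * hi.prod() + q * lo.prod()
    rhs = np.prod(p * hi + q * lo)
    assert lhs >= rhs - 1e-12
```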
Let A = (a_{ik}) ≠ 0 be an m × r matrix with 0-1 entries and let X_1, ..., X_r be independent, 0-1 valued, not necessarily identically distributed random variables. Consider the transformed random variables

Y_i = Σ_{k=1}^r a_{ik} X_k,  i = 1, ..., m.     (3.1)

We prove the following theorem.
Theorem 3.1 ([16]) For any nonnegative integers y_1, ..., y_m we have the inequality

P(Y_1 ≤ y_1, ..., Y_m ≤ y_m) ≥ ∏_{i=1}^m P(Y_i ≤ y_i).     (3.2)

Proof We use induction on r. Let I = {i | a_{i1} = 1} and let p = P(X_1 = 0), q = P(X_1 = 1). If r = 1, then Y_i = a_{i1} X_1 and

P(Y_i ≤ y_i, i = 1, ..., m) = P(X_1 ≤ y_i, i ∈ I) = P(X_1 ≤ min_{i∈I} y_i) ≥ ∏_{i∈I} P(X_1 ≤ y_i) = ∏_{i=1}^m P(Y_i ≤ y_i),

hence the assertion holds for this case. Assume that it holds for r − 1. Conditioning on X_1 we have

P(Y_i ≤ y_i, i = 1, ..., m) = p P(Σ_{j=2}^r a_{ij} X_j ≤ y_i, i = 1, ..., m) + q P(Σ_{j=2}^r a_{ij} X_j ≤ y_i − 1, i ∈ I; Σ_{j=2}^r a_{ij} X_j ≤ y_i, i ∉ I).     (3.3)

Then, using (3.3), the induction hypothesis and Lemma 3.1, we can write

P(Y_i ≤ y_i, i = 1, ..., m)
≥ p ∏_{i∈I} P(Σ_{j=2}^r a_{ij} X_j ≤ y_i) ∏_{i∉I} P(Σ_{j=2}^r a_{ij} X_j ≤ y_i) + q ∏_{i∈I} P(Σ_{j=2}^r a_{ij} X_j ≤ y_i − 1) ∏_{i∉I} P(Σ_{j=2}^r a_{ij} X_j ≤ y_i)
≥ ∏_{i∈I} [ p P(Σ_{j=2}^r a_{ij} X_j ≤ y_i) + q P(Σ_{j=2}^r a_{ij} X_j ≤ y_i − 1) ] ∏_{i∉I} P(Σ_{j=2}^r a_{ij} X_j ≤ y_i)
= ∏_{i∈I} P(Σ_{j=1}^r a_{ij} X_j ≤ y_i) ∏_{i∉I} P(Σ_{j=1}^r a_{ij} X_j ≤ y_i)
= ∏_{i=1}^m P(Y_i ≤ y_i).

This proves the theorem. □
Theorem 3.2 ([16]) Let X_1, ..., X_r be independent, binomially distributed random variables with parameters (n_1, p_1), ..., (n_r, p_r), respectively. Then for the random variables (3.1) the inequality (3.2) holds true.
Note that in the case of Theorem 3.2, the random variables Y_i, i = 1, ..., m, are not necessarily binomially distributed. They are, however, if p_1 = ··· = p_r.
Theorem 3.3 ([16]) Let X_1, ..., X_r be independent, Poisson distributed random variables with parameters λ_1, ..., λ_r, respectively. Then for the random variables (3.1) the inequality (3.2) holds true.

Proof If in Theorem 3.2 we let n_j → ∞, p_j → 0 in such a way that n_j p_j → λ_j, j = 1, ..., r, then the assertion follows from (3.2). □
In the case of Theorem 3.3, the random variables Y_i, i = 1, ..., m, have Poisson distribution with parameters Σ_{k=1}^r a_{ik} λ_k, i = 1, ..., m, respectively.
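The bound (3.2) for Poisson partial sums is easy to observe numerically; the following Monte Carlo check (an illustrative addition with arbitrary data) compares the joint probability with the product of the marginals:

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])            # 0-1 coefficient matrix (hypothetical)
lam = np.array([1.5, 2.0, 0.5])      # Poisson parameters of X_1, X_2, X_3
y = np.array([4, 3, 3])              # thresholds y_1, ..., y_m

X = rng.poisson(lam, size=(200_000, 3))
Y = X @ A.T                          # Y[:, i] = sum_k a_ik X_k

joint = np.mean(np.all(Y <= y, axis=1))
product = np.prod(np.mean(Y <= y, axis=0))
print(joint, product)                # the joint probability dominates the product, as in (3.2)
```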
Chapter 4
Numerical examples
In [7], Dentcheva, Prékopa and Ruszczyński presented an algorithm to solve a stochastic programming problem with independent random variables. In this chapter, we compare the optimal values and solutions obtained by the DPR algorithm and by the approximation method on a vehicle routing problem from [7]. A stochastic network design problem is also solved by using the new method.
The DPR algorithm is as follows.

Method

Step 0: Select a p level efficient point v^(0). Set J_0 = {0}, k = 0.

Step 1: Solve the master problem

min c^T x
subject to Ax ≥ b
Tx ≥ Σ_{j∈J_k} λ_j v^(j)     (4.1)
Σ_{j∈J_k} λ_j = 1
x ≥ 0, λ ≥ 0.

Let u^k be the vector of simplex multipliers associated with the second constraint of (4.1).

Step 2: Calculate an upper bound for the dual functional:

d̄(u^k) = min_{j∈J_k} (u^k)^T v^(j).

Step 3: Find a p-efficient solution v^(k+1) of the subproblem

min (u^k)^T z
subject to z ∈ Z_p,

and calculate

d(u^k) = (u^k)^T v^(k+1).

Step 4: If d̄(u^k) = d(u^k), then stop; otherwise set J_{k+1} = J_k ∪ {k + 1}, increase k by one and go to Step 1.
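For concreteness, one iteration of the master problem (4.1) can be set up as a linear program. The SciPy sketch below is an illustration, not the implementation used in [7] or in this dissertation; it takes the currently generated p-efficient points as the columns of V.

```python
import numpy as np
from scipy.optimize import linprog

def dpr_master(c, A, b, T, V):
    """Solve (4.1) for the current set of p-efficient points (columns of V)."""
    n, k = len(c), V.shape[1]
    obj = np.concatenate([c, np.zeros(k)])
    A_ub = np.block([[-A, np.zeros((A.shape[0], k))],   # Ax >= b
                     [-T, V]])                          # Tx >= V @ lambda
    b_ub = np.concatenate([-b, np.zeros(T.shape[0])])
    A_eq = np.concatenate([np.zeros(n), np.ones(k)])[None, :]  # sum lambda = 1
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + k), method="highs")
    x, lam = res.x[:n], res.x[n:]
    # With the HiGHS solver the dual values of the Tx >= V @ lambda rows are in
    # res.ineqlin.marginals; up to the sign convention these are the multipliers u^k.
    u = -res.ineqlin.marginals[A.shape[0]:]
    return x, lam, u
```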
4.1 A vehicle routing example
Consider the vehicle routing problem in [7], which is a stochastic programming problem with independent Poisson random variables, where the constraints have to hold with the prescribed probability 0.9.
We consider a directed graph with node set N and arc set E, together with a set Π of cyclic routes, i.e., sequences of nodes connected by arcs such that the last node of the sequence is identical with the first one. For each e ∈ E, let R(e) denote the set of routes containing e, and let c(π) denote the unit cost of route π.

A random integer demand ξ(e) is associated with each arc e ∈ E. The objective is to find nonnegative integers x(π), π ∈ Π, such that

P( Σ_{π∈R(e)} x(π) ≥ ξ(e), e ∈ E ) ≥ p,

and the cost Σ_{π∈Π} c(π) x(π) is minimized. So the problem is the following:

min Σ_{π∈Π} c(π) x(π)
subject to P( Σ_{π∈R(e)} x(π) ≥ ξ(e), e ∈ E ) ≥ p     (4.2)
x(π) ≥ 0, integer.
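With independent Poisson arc demands, the probabilistic constraint of (4.2) is a product of univariate Poisson c.d.f.'s, which makes candidate routing plans easy to evaluate. The Python sketch below uses hypothetical routes, demands and vehicle counts, not the data of the example (which appear in Table 4.1):

```python
import numpy as np
from scipy.stats import poisson

routes = {"r1": ["e1", "e2"], "r2": ["e2", "e3"], "r3": ["e1", "e3"]}  # hypothetical
lam = {"e1": 3.0, "e2": 5.0, "e3": 2.0}     # expected (Poisson) demands on the arcs
x = {"r1": 4, "r2": 6, "r3": 2}             # vehicles assigned to each route

# Capacity on arc e: sum of x(pi) over the routes pi in R(e).
y = {e: sum(x[r] for r, arcs in routes.items() if e in arcs) for e in lam}
prob = np.prod([poisson.cdf(y[e], lam[e]) for e in lam])
print(y, prob)    # the plan satisfies (4.2) if prob >= p
```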
Now we consider the graph shown in Figure 4.1. Each arc in this figure represents two arcs of opposite directions.

Figure 4.1: The graph of the vehicle routing problem

By solving problem (4.3), the optimal value 972.5315 is obtained, reached at

x̄ = (1.7869, 3.0314, 5.8495, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 3.9970, 4.1492, 6.7917)^T.

Rounding x̄ gives the integer vector x* = (2 3 6 0 0 0 0 0 0 0 0 0 0 0 0 0 4 4 7)^T. Using the modified Hooke and Jeeves search method to search around x*, the optimal solution remains at x*.

Problem (4.3) was solved by using MatLab 6, and the running time was 5 seconds on a PIII-750 CPU computer.
Now we reconsider the vehicle routing problem: suppose we have a budget of $1,000K and we want to maximize the probability that the routing meets all demands. Then the problem is formulated in the following way:

max P( Σ_{π∈R(e)} x(π) ≥ ξ(e), e ∈ E )
subject to Σ_{π∈Π} c(π) x(π) ≤ 1000     (4.4)
x(π) ≥ 0, integer.

We apply the modified Hooke and Jeeves search method.

Exploratory move

Step 0: x^0 = (2 3 6 0 0 0 0 0 0 0 0 0 0 0 0 0 4 4 7)^T, p = 0.9017 and c^T x^0 = 977.

Step 1: x^1 = (3 3 6 0 0 0 0 0 0 0 0 0 0 0 0 0 4 4 7)^T, p = 0.9049 and c^T x^1 = 987.

Step 2: x^2 = (3 4 6 0 0 0 0 0 0 0 0 0 0 0 0 0 4 4 7)^T, p = 0.9131 and c^T x^2 = 1002, which is greater than 1000, so x^2 is rejected. Then check x^3 = (3 2 6 0 0 0 0 0 0 0 0 0 0 0 0 0 4 4 7)^T, for which p = 0.8821 and c^T x^3 = 972. Since the probability at x^3 is less than the probability at x^1, we do not accept x^3.

Pattern move

Step 1: Let x^4 = 2x^1 − x^0. Then x^4 = (4 3 6 0 0 0 0 0 0 0 0 0 0 0 0 0 4 4 7)^T.

Step 2: Start the exploratory moves from x^4. First check x^5 = (5 3 6 0 0 0 0 0 0 0 0 0 0 0 0 0 4 4 7)^T, which is not in D, the set of feasible solutions, because c^T x^5 = 1007 > 1000. Then try x^6 = x^4 = (4 3 6 0 0 0 0 0 0 0 0 0 0 0 0 0 4 4 7)^T. At x^6, p = 0.9057 and c^T x^6 = 997 ≤ 1000; the probability at x^6 is also the greatest one among all the tested feasible points. Repeating the procedure from x^6, the search finally stops at x^6. So the optimal solution to problem (4.4) is

x = (4 3 6 0 0 0 0 0 0 0 0 0 0 0 0 0 4 4 7)^T,

with optimal value p = 0.9057 and cost $997K.
4.2 A stochastic network design problem
We look at the power system model presented in Prékopa (1995, Section 14.3) and formulate an optimization problem based on the special system of four nodes.
We reproduce here the graph of the system topology as shown in Figure 4.2, where