
FINAL PROJECT IN
Iterative Solution of Nonlinear Equations in Several Variables, 2012

The Minh Tran

1) Let $f(x) = x^3 e^x$.

a) Write down Newton's method for this function. What is the order of convergence?

- Solution:

+ We apply Newton's method with the formula
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$
to the function $f(x) = x^3 e^x$.

We have $f(x) = x^3 e^x \;\Rightarrow\; f'(x) = 3x^2 e^x + x^3 e^x = x^2 e^x (x+3)$. So
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} = x_n - \frac{x_n^3 e^{x_n}}{x_n^2 e^{x_n}(x_n+3)} = x_n - \frac{x_n}{x_n+3} = \frac{x_n^2 + 2x_n}{x_n+3}.$$

+ To continue, we apply the formula
$$\lim_{n\to\infty} \frac{|x_{n+1}-\alpha|}{|x_n-\alpha|^p}$$
to find the order of convergence, where $f(\alpha) = 0$; here $\alpha = 0$. We compute
$$\lim_{n\to\infty} \frac{|x_{n+1}|}{|x_n|^p} = \lim_{n\to\infty} \frac{x_n^2 + 2x_n}{(x_n+3)\,x_n^p} = \lim_{n\to\infty} \frac{x_n + 2}{(x_n+3)\,x_n^{p-1}}.$$
We assume that $x_n \to \alpha = 0$, so
$$\lim_{n\to\infty} \frac{x_n + 2}{(x_n+3)\,x_n^{p-1}} = \frac{2}{3}\,\lim_{n\to\infty} \frac{1}{x_n^{p-1}}.$$
Only if $p = 1$ is this limit finite; it then equals $\tfrac{2}{3}$, which is nonzero and positive. Thus, the order of convergence is 1.
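As a quick numerical illustration of this rate (a Python sketch that is not part of the original derivation; the starting guess $x_0 = 1$ and the iteration count are arbitrary), the simplified update derived above can be iterated directly:

```python
# Sketch: plain Newton for f(x) = x^3 * exp(x), using the simplified update
# x_{n+1} = (x_n^2 + 2 x_n) / (x_n + 3) derived above.  The error ratio
# |x_{n+1}| / |x_n| should approach 2/3, i.e. linear convergence to alpha = 0.
x = 1.0  # arbitrary starting guess
for n in range(20):
    x_new = (x**2 + 2 * x) / (x + 3)
    print(n, x_new, x_new / x)  # third column tends to 2/3
    x = x_new
```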

b) Give a modification of Newton's method so that the order of convergence is 2.

- Solution:

A root $\alpha$ of $f$ has multiplicity $k$ if there is a constant $c \neq 0$ such that
$$\lim_{x\to\alpha} \frac{f(x)}{(x-\alpha)^k} = c.$$
We have $\lim_{x\to 0} \dfrac{x^3 e^x}{x^3} = \lim_{x\to 0} e^x = 1 \neq 0$. Clearly, we see that the root $\alpha = 0$ has multiplicity 3 for $f(x) = x^3 e^x$.

Hence, a modification of Newton's method that becomes quadratically convergent is
$$x_{n+1} = x_n - N\,\frac{f(x_n)}{f'(x_n)} = x_n - 3\,\frac{x_n^3 e^{x_n}}{x_n^2 e^{x_n}(x_n+3)} = x_n - \frac{3x_n}{x_n+3} = \frac{x_n^2}{x_n+3},$$
where $N = 3$ is the multiplicity of the root $\alpha$ of $f(x) = x^3 e^x$.

Finally, we can observe that
$$\lim_{n\to\infty} \frac{|x_{n+1}|}{|x_n|^p} = \lim_{n\to\infty} \frac{1}{x_n+3} = \frac{1}{3}$$
converges to a nonzero constant whenever $p = 2$, so the modified method has order of convergence 2.
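A matching sketch for the modified iteration (same arbitrary starting guess as before) shows the quadratic behaviour, with $|x_{n+1}|/x_n^2 \to 1/3$:

```python
# Sketch: modified Newton x_{n+1} = x_n - 3 f(x_n)/f'(x_n) = x_n^2 / (x_n + 3).
# The ratio |x_{n+1}| / x_n^2 should approach 1/3, i.e. quadratic convergence.
x = 1.0  # arbitrary starting guess
for n in range(6):
    x_new = x**2 / (x + 3)
    print(n, x_new, x_new / x**2)  # third column tends to 1/3
    x = x_new
```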

2) Given a linear system $Ax = b$ where $A$ is strictly diagonally dominant (SDD).

a) Describe the Jacobi method applied to this system and prove a convergence theorem.

+ Describe the Jacobi method applied to this system

- Solution:

The system $AX = b$ written out is
$$\begin{cases}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1\\
\quad\vdots\\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n = b_n.
\end{cases}$$
We can rewrite it (under the assumption that $a_{ii} \neq 0$, $i = 1,\dots,n$) as
$$\begin{cases}
x_1 = \dfrac{1}{a_{11}}\bigl(b_1 - a_{12}x_2 - \cdots - a_{1n}x_n\bigr)\\
\quad\vdots\\
x_n = \dfrac{1}{a_{nn}}\bigl(b_n - a_{n1}x_1 - \cdots - a_{n,n-1}x_{n-1}\bigr),
\end{cases}$$
or, in matrix form,
$$\begin{pmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{pmatrix}
= -\begin{pmatrix}
0 & \frac{a_{12}}{a_{11}} & \cdots & \frac{a_{1n}}{a_{11}}\\
\frac{a_{21}}{a_{22}} & 0 & \cdots & \frac{a_{2n}}{a_{22}}\\
\vdots & & \ddots & \vdots\\
\frac{a_{n1}}{a_{nn}} & \cdots & \frac{a_{n,n-1}}{a_{nn}} & 0
\end{pmatrix}
\begin{pmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{pmatrix}
+ \begin{pmatrix} \frac{b_1}{a_{11}}\\ \frac{b_2}{a_{22}}\\ \vdots\\ \frac{b_n}{a_{nn}} \end{pmatrix},$$
or $X = BX + d$. If we write the matrix $A$ in the form $A = L + D + U$, where
$$L = \begin{pmatrix} 0 & & \\ a_{21} & \ddots & \\ a_{n1} & \cdots & 0 \end{pmatrix},\qquad
U = \begin{pmatrix} 0 & \cdots & a_{1n} \\ & \ddots & \vdots \\ & & 0 \end{pmatrix},\qquad
D = \begin{pmatrix} a_{11} & & \\ & \ddots & \\ & & a_{nn} \end{pmatrix},$$
then from the above it is easy to see that
$$B = -D^{-1}(L+U), \qquad d = D^{-1}b.$$
With the Jacobi matrix $B = -D^{-1}(L+U)$, the Jacobi vector $d = D^{-1}b$, and $X = BX + d$, we have the iteration
$$X^{(k+1)} = -D^{-1}(L+U)X^{(k)} + D^{-1}b,$$
that is,
$$x_i^{(k+1)} = \frac{1}{a_{ii}}\Bigl(b_i - \sum_{j\neq i} a_{ij}x_j^{(k)}\Bigr), \qquad i = 1,\dots,n.$$

+ To prove a convergence theorem

- Theorem: If $A$ is strictly diagonally dominant, then the Jacobi method converges for any initial guess $x^{(0)}$.

Proof:

Because $A$ is strictly diagonally dominant (SDD), we have
$$|a_{ii}| > \sum_{j\neq i} |a_{ij}| \;\Longleftrightarrow\; \sum_{j\neq i} \frac{|a_{ij}|}{|a_{ii}|} < 1, \qquad i = 1,\dots,n.$$
Here $G = D^{-1}(L+U)$, so the Jacobi iteration matrix is $B = -G$ and $\|B\|_\infty = \|G\|_\infty$. We choose the $\infty$-norm. Then
$$\bigl\|D^{-1}(L+U)\bigr\|_\infty = \max_{1\le i\le n} \sum_{j\neq i} \frac{|a_{ij}|}{|a_{ii}|} < 1,$$
so the error $e^{(k)} = x^{(k)} - x$ satisfies $\|e^{(k+1)}\|_\infty \le \|B\|_\infty\|e^{(k)}\|_\infty \to 0$. Thus, the Jacobi method converges for any initial guess $x^{(0)}$. $\square$
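As an illustration of the componentwise Jacobi update (a minimal Python sketch; the $3\times 3$ SDD matrix below is an arbitrary test example, not taken from the project):

```python
import numpy as np

def jacobi(A, b, x0, iters=50):
    """Jacobi iteration x_i^(k+1) = (b_i - sum_{j != i} a_ij x_j^(k)) / a_ii."""
    D = np.diag(A)               # diagonal entries a_ii
    R = A - np.diagflat(D)       # off-diagonal part L + U
    x = x0.astype(float)
    for _ in range(iters):
        x = (b - R @ x) / D
    return x

# Arbitrary strictly diagonally dominant test matrix (assumption, for illustration).
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [0.0, 1.0, 3.0]])
b = np.array([6.0, 8.0, 4.0])
print(jacobi(A, b, np.zeros(3)))   # Jacobi approximation
print(np.linalg.solve(A, b))       # reference solution
```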

b) Describe the Gauss-Seidel method applied to this system and prove a convergence theorem.

+ Describe the Gauss-Seidel method applied to this system

- Solution:

As before, the system $AX = b$ is
$$\begin{cases}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1\\
\quad\vdots\\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n = b_n.
\end{cases}$$
Now write $A = D - L - U$, where $D$ is the diagonal of $A$ and $-L$, $-U$ are its strictly lower and strictly upper triangular parts. We have
$$AX = b \;\Rightarrow\; (D - L - U)X = b \;\Rightarrow\; (D-L)X = UX + b \;\Rightarrow\; X = (D-L)^{-1}(UX + b),$$
which gives the Gauss-Seidel iteration
$$X^{(k+1)} = (D-L)^{-1}\bigl(UX^{(k)} + b\bigr),$$
or, componentwise,
$$x_i^{(k+1)} = \frac{1}{a_{ii}}\Bigl(b_i - \sum_{j<i} a_{ij}x_j^{(k+1)} - \sum_{j>i} a_{ij}x_j^{(k)}\Bigr), \qquad i = 1,\dots,n.$$
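A corresponding sketch of the Gauss-Seidel sweep (same arbitrary SDD test matrix as in the Jacobi sketch), where the already-updated components $x_j^{(k+1)}$, $j < i$, are used as soon as they are available:

```python
import numpy as np

def gauss_seidel(A, b, x0, iters=50):
    """Gauss-Seidel sweep: new values x_j (j < i) are used as soon as available."""
    n = len(b)
    x = x0.astype(float)
    for _ in range(iters):
        for i in range(n):
            s1 = A[i, :i] @ x[:i]        # updated components x_j^(k+1), j < i
            s2 = A[i, i+1:] @ x[i+1:]    # old components x_j^(k), j > i
            x[i] = (b[i] - s1 - s2) / A[i, i]
    return x

A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [0.0, 1.0, 3.0]])
b = np.array([6.0, 8.0, 4.0])
print(gauss_seidel(A, b, np.zeros(3)), np.linalg.solve(A, b))
```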

+ To prove a convergence theorem

- Theorem: The Gauss-Seidel method for $Ax = b$ is convergent if $A$ is strictly diagonally dominant.

Proof:

From $AX = b$ we get $(D - L - U)X = b$, i.e. $(D-L)X = UX + b$. Subtracting this from $(D-L)X^{(m+1)} = UX^{(m)} + b$ gives
$$(D-L)\bigl(X^{(m+1)} - X\bigr) = U\bigl(X^{(m)} - X\bigr). \qquad (*)$$

Applying $(*)$ and writing $e^{(m)} = X^{(m)} - X$, we have
$$(D-L)e^{(m+1)} = Ue^{(m)} \;\Rightarrow\; De^{(m+1)} = Le^{(m+1)} + Ue^{(m)}.$$
With the above splitting,
$$D = \bigl(a_{ii}\bigr), \qquad L = \bigl(-a_{ij}\bigr)_{j \le i-1}, \qquad U = \bigl(-a_{ij}\bigr)_{j \ge i+1},$$
so componentwise
$$e_i^{(m+1)} = \frac{1}{a_{ii}}\Bigl(-\sum_{j=1}^{i-1} a_{ij}e_j^{(m+1)} - \sum_{j=i+1}^{n} a_{ij}e_j^{(m)}\Bigr).$$

For $i = 1$:
$$e_1^{(m+1)} = -\frac{1}{a_{11}}\sum_{j=2}^{n} a_{1j}e_j^{(m)},$$
so
$$\bigl|e_1^{(m+1)}\bigr| \le \sum_{j=2}^{n} \frac{|a_{1j}|}{|a_{11}|}\,\bigl|e_j^{(m)}\bigr| \le r_1\bigl\|e^{(m)}\bigr\|_\infty,$$
where $r_1 = \sum_{j=2}^{n} |a_{1j}|/|a_{11}| < 1$ by SDD. Set $r = \max_{1\le i\le n} r_i$, where $r_i = \sum_{j\neq i} |a_{ij}|/|a_{ii}| < 1$.

Now take $i \ge 2$ and assume that $\bigl|e_j^{(m+1)}\bigr| \le r\bigl\|e^{(m)}\bigr\|_\infty$ for all $j < i$. Then
$$\bigl|e_i^{(m+1)}\bigr|
\le \frac{1}{|a_{ii}|}\Bigl(\sum_{j<i} |a_{ij}|\,\bigl|e_j^{(m+1)}\bigr| + \sum_{j>i} |a_{ij}|\,\bigl|e_j^{(m)}\bigr|\Bigr)
\le \frac{1}{|a_{ii}|}\Bigl(r\sum_{j<i} |a_{ij}| + \sum_{j>i} |a_{ij}|\Bigr)\bigl\|e^{(m)}\bigr\|_\infty
\le \Bigl(\sum_{j\neq i} \frac{|a_{ij}|}{|a_{ii}|}\Bigr)\bigl\|e^{(m)}\bigr\|_\infty
\le r\bigl\|e^{(m)}\bigr\|_\infty,$$
where $r < 1$ was used in the third step.

The result is true for all $i$, so
$$\bigl\|e^{(m+1)}\bigr\|_\infty = \max_{1\le i\le n}\bigl|e_i^{(m+1)}\bigr| \le r\bigl\|e^{(m)}\bigr\|_\infty.$$
Thus $\bigl\|e^{(m)}\bigr\|_\infty \le r^m\bigl\|e^{(0)}\bigr\|_\infty$, and so $e^{(m)} \to 0$ as $m \to \infty$ (since $r < 1$). That is, $e^{(m)} = x^{(m)} - x \to 0$ as $m \to \infty$, as required. The theorem is proved. $\square$

(We can also apply this argument to prove the theorem in part a).)

3) Let $A \in \mathbb{R}^{n\times n}$.

a) State and prove a QR-decomposition theorem.

- Theorem: Suppose that $A$ is an $n\times m$ matrix with linearly independent columns. Then $A$ can be factored as
$$A = QR,$$
where $Q$ is an $n\times m$ matrix with orthonormal columns and $R$ is an invertible $m\times m$ upper triangular matrix.

Proof:

Suppose that the columns of $A$ are given by $c_1, c_2, \dots, c_m$. We apply the Gram-Schmidt process to these vectors and obtain a set of orthonormal vectors $u_1, u_2, \dots, u_m$.

$A$ has the columns $A = [\,c_1 \,|\, c_2 \,|\, \dots \,|\, c_m\,]$, and $Q$ is the matrix with orthonormal columns $Q = [\,u_1 \,|\, u_2 \,|\, \dots \,|\, u_m\,]$.

We can write each $c_i$ as a linear combination of $u_1, u_2, \dots, u_m$ in the following linear system:
$$\begin{aligned}
c_1 &= \langle c_1, u_1\rangle u_1 + \langle c_1, u_2\rangle u_2 + \cdots + \langle c_1, u_m\rangle u_m\\
c_2 &= \langle c_2, u_1\rangle u_1 + \langle c_2, u_2\rangle u_2 + \cdots + \langle c_2, u_m\rangle u_m\\
&\;\;\vdots\\
c_m &= \langle c_m, u_1\rangle u_1 + \langle c_m, u_2\rangle u_2 + \cdots + \langle c_m, u_m\rangle u_m.
\end{aligned}$$

Collecting the coefficients in a matrix gives
$$R = \begin{pmatrix}
\langle c_1, u_1\rangle & \langle c_2, u_1\rangle & \cdots & \langle c_m, u_1\rangle\\
\langle c_1, u_2\rangle & \langle c_2, u_2\rangle & \cdots & \langle c_m, u_2\rangle\\
\vdots & & & \vdots\\
\langle c_1, u_m\rangle & \langle c_2, u_m\rangle & \cdots & \langle c_m, u_m\rangle
\end{pmatrix}.$$
Now we can observe that the product $QR$ equals $A$:
$$QR = [\,u_1 \,|\, u_2 \,|\, \dots \,|\, u_m\,]
\begin{pmatrix}
\langle c_1, u_1\rangle & \langle c_2, u_1\rangle & \cdots & \langle c_m, u_1\rangle\\
\langle c_1, u_2\rangle & \langle c_2, u_2\rangle & \cdots & \langle c_m, u_2\rangle\\
\vdots & & & \vdots\\
\langle c_1, u_m\rangle & \langle c_2, u_m\rangle & \cdots & \langle c_m, u_m\rangle
\end{pmatrix}
= [\,c_1 \,|\, c_2 \,|\, \dots \,|\, c_m\,] = A.$$

We continue by showing that $R$ is an invertible upper triangular matrix.

First, recall the matrix
$$R = \begin{pmatrix}
\langle c_1, u_1\rangle & \langle c_2, u_1\rangle & \cdots & \langle c_m, u_1\rangle\\
\langle c_1, u_2\rangle & \langle c_2, u_2\rangle & \cdots & \langle c_m, u_2\rangle\\
\vdots & & & \vdots\\
\langle c_1, u_m\rangle & \langle c_2, u_m\rangle & \cdots & \langle c_m, u_m\rangle
\end{pmatrix}.$$
From the Gram-Schmidt process we know that $u_k$ is orthogonal to $c_1, c_2, \dots, c_{k-1}$. This means that all the inner products below the main diagonal are zero; they are all of the form $\langle c_i, u_j\rangle = 0$ with $i < j$. We also know, from the properties of special matrices, that a triangular matrix is invertible if its main diagonal entries $\langle c_i, u_i\rangle$ are nonzero. The Gram-Schmidt process gives the general formula for $u_i$:
$$u_i = \frac{u_i'}{\|u_i'\|}, \qquad u_i' = c_i - \langle c_i, u_1\rangle u_1 - \langle c_i, u_2\rangle u_2 - \cdots - \langle c_i, u_{i-1}\rangle u_{i-1}, \qquad u_i' \neq 0.$$

Hence
$$c_i = u_i' + \langle c_i, u_1\rangle u_1 + \langle c_i, u_2\rangle u_2 + \cdots + \langle c_i, u_{i-1}\rangle u_{i-1}.$$
Now we can rewrite $\langle c_i, u_i\rangle$ using the properties of the inner product:
$$\langle c_i, u_i\rangle = \langle u_i', u_i\rangle + \langle c_i, u_1\rangle\langle u_1, u_i\rangle + \cdots + \langle c_i, u_{i-1}\rangle\langle u_{i-1}, u_i\rangle.$$
Because the $u_j$ are orthonormal basis vectors, $\langle u_j, u_i\rangle = 0$ for $j \neq i$ and $\langle u_i', u_i\rangle = \bigl\langle u_i', u_i'/\|u_i'\|\bigr\rangle = \|u_i'\| \neq 0$, so
$$\langle c_i, u_i\rangle = \langle u_i', u_i\rangle \neq 0.$$
And from the above, we also have $\langle c_i, u_j\rangle = 0$ with $i < j$.

Hence, $R$ is an invertible upper triangular matrix, and it has the form
$$R = \begin{pmatrix}
\langle c_1, u_1\rangle & \langle c_2, u_1\rangle & \cdots & \langle c_m, u_1\rangle\\
0 & \langle c_2, u_2\rangle & \cdots & \langle c_m, u_2\rangle\\
\vdots & & \ddots & \vdots\\
0 & 0 & \cdots & \langle c_m, u_m\rangle
\end{pmatrix}. \qquad \square$$
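The constructive proof translates directly into code. The following is a minimal Python sketch of classical Gram-Schmidt QR (assuming the columns of $A$ are linearly independent), with $R_{ij} = \langle c_j, u_i\rangle$; the small test matrix is an arbitrary example:

```python
import numpy as np

def gram_schmidt_qr(A):
    """QR factorization by classical Gram-Schmidt; assumes independent columns."""
    n, m = A.shape
    Q = np.zeros((n, m))
    R = np.zeros((m, m))
    for j in range(m):
        v = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]   # <c_j, u_i>
            v -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(v)       # <c_j, u_j> = ||u_j'|| > 0
        Q[:, j] = v / R[j, j]
    return Q, R

A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
Q, R = gram_schmidt_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(2)))  # True True
```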

b) Prove a uniqueness theorem of the decomposition for a proper $A$, e.g. nonsingular and so on.

- Theorem: Let $A$ be an $m\times n$ matrix with linearly independent columns. Then $A$ admits a QR decomposition, and such a decomposition (with the diagonal entries of $R$ positive, as constructed in part a)) is unique.

Proof:

We proved the existence of the QR decomposition of the matrix $A$ in part a). Now we prove the uniqueness of this decomposition.

Indeed, let $A$, $Q$, $R$ have the properties as in part a), and suppose
$$A = Q_1R_1 = Q_2R_2, \qquad \text{where } Q_1^TQ_1 = Q_2^TQ_2 = I.$$

Then we can perform a reduction on the matrices and see that
$$R_1^TR_1 = R_1^T\bigl(Q_1^TQ_1\bigr)R_1 = A^TA = R_2^T\bigl(Q_2^TQ_2\bigr)R_2 = R_2^TR_2.$$
Hence
$$R_1^TR_1 = R_2^TR_2 \;\Rightarrow\; R_2R_1^{-1} = R_2^{-T}R_1^T.$$
In this equation the left-hand side is an upper triangular matrix and the right-hand side is a lower triangular matrix; hence both of them must be diagonal.

Let $\alpha_i$ and $\beta_i$, $1 \le i \le n$, be the diagonal entries of $R_1$ and $R_2$, respectively. Then $\alpha_i > 0$ and $\beta_i > 0$ for every $i$, and
$$\frac{\beta_i}{\alpha_i} = \frac{\alpha_i}{\beta_i}, \quad 1 \le i \le n \;\Rightarrow\; \alpha_i = \beta_i, \quad 1 \le i \le n.$$
Hence $R_2R_1^{-1}$ is diagonal with entries $\beta_i/\alpha_i = 1$, i.e. $R_2R_1^{-1} = I$, so $R_1 = R_2$. Since $Q_1R_1 = Q_2R_2$, it follows that $Q_1 = Q_2$. Thus, the decomposition is unique. $\square$

4) Let $A = \{1, t, t^2\}$ be three vectors (polynomials) and let $(f,g) = \int_{-1}^{1} f(x)g(x)\,dx$ be the inner product under consideration. Use the Gram-Schmidt process to orthogonalize the set $A$; what is the resulting orthonormal set?

- Solution:

We have the basis $A = \{1, t, t^2\}$. Let $A_1 = 1$, $A_2 = t$, $A_3 = t^2$, and $(f,g) = \int_{-1}^{1} f(x)g(x)\,dx$.

Compute:
$$q_1 = A_1 = 1, \qquad (q_1, q_1) = \int_{-1}^{1} 1\,dx = 2,$$
$$q_2 = A_2 - \frac{(A_2, q_1)}{(q_1, q_1)}\,q_1 = t - \frac{1}{2}\Bigl(\int_{-1}^{1} t\,dt\Bigr)\cdot 1 = t.$$

$$(q_2, q_2) = \int_{-1}^{1} t^2\,dx = \frac{2}{3},$$
$$q_3 = A_3 - \frac{(A_3, q_1)}{(q_1, q_1)}\,q_1 - \frac{(A_3, q_2)}{(q_2, q_2)}\,q_2
= t^2 - \frac{1}{2}\int_{-1}^{1} t^2\,dt - \frac{3}{2}\Bigl(\int_{-1}^{1} t^3\,dt\Bigr)\,t
= t^2 - \frac{1}{3}.$$
The resulting orthogonal set is
$$\Bigl\{\,1,\; t,\; t^2 - \tfrac{1}{3}\,\Bigr\};$$
dividing each vector by its norm ($\sqrt{2}$, $\sqrt{2/3}$, and $\sqrt{8/45}$, respectively) gives the corresponding orthonormal set.

We can check the inner products again:
$$(1, t) = \int_{-1}^{1} t\,dt = 0, \qquad
\Bigl(1,\, t^2 - \tfrac{1}{3}\Bigr) = \int_{-1}^{1}\Bigl(t^2 - \tfrac{1}{3}\Bigr)dt = 0, \qquad
\Bigl(t,\, t^2 - \tfrac{1}{3}\Bigr) = \int_{-1}^{1}\Bigl(t^3 - \tfrac{t}{3}\Bigr)dt = 0,$$
so $1 \perp t \perp \bigl(t^2 - \tfrac{1}{3}\bigr)$.
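The same computation can be checked symbolically; the sketch below uses sympy's integrate for the inner product on $[-1, 1]$ and also prints the normalized vectors:

```python
import sympy as sp

t = sp.symbols('t')

def inner(f, g):
    # inner product (f, g) = integral of f*g over [-1, 1]
    return sp.integrate(f * g, (t, -1, 1))

basis = [sp.Integer(1), t, t**2]
ortho = []
for a in basis:
    q = a
    for p in ortho:
        q -= inner(a, p) / inner(p, p) * p   # Gram-Schmidt projection step
    ortho.append(sp.simplify(q))

print(ortho)                                                  # [1, t, t**2 - 1/3]
print([sp.simplify(q / sp.sqrt(inner(q, q))) for q in ortho]) # normalized set
```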

5) Give three examples of an isometry on $\mathbb{R}^2$. They should be a reflector, a rotation, and a composition of the two. You should specifically write down the matrix for each case.

- Solution:

Example: Rotation

Let $P$ be the point $(x, y)$, where $x = r\cos\varphi$ and $y = r\sin\varphi$. Rotating through the angle $\theta$ about the origin takes $P(x, y)$ to $P'(X, Y)$, where
$$\begin{aligned}
X &= r\cos(\varphi + \theta) = r\cos\varphi\cos\theta - r\sin\varphi\sin\theta = x\cos\theta - y\sin\theta,\\
Y &= r\sin(\varphi + \theta) = r\sin\varphi\cos\theta + r\cos\varphi\sin\theta = x\sin\theta + y\cos\theta.
\end{aligned}$$

We can write this in matrix form:
$$\begin{pmatrix} X\\ Y \end{pmatrix}
= \begin{pmatrix} \cos\theta & -\sin\theta\\ \sin\theta & \cos\theta \end{pmatrix}
\begin{pmatrix} x\\ y \end{pmatrix}
\;\Rightarrow\;
R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta\\ \sin\theta & \cos\theta \end{pmatrix}.$$

Example: Reflector

Let $P$ be the point $(x, y)$, where $x = r\cos\varphi$ and $y = r\sin\varphi$, and consider the reflection in the line $y = x\tan\frac{\theta}{2}$. The two angles between $P$, the line, and the reflected point $P'$ are equal, so $P'$ makes the angle
$$\angle P'Ox = \varphi + 2\Bigl(\frac{\theta}{2} - \varphi\Bigr) = \theta - \varphi$$
with the $x$-axis, where
$$\begin{aligned}
X &= r\cos(\theta - \varphi) = r\cos\varphi\cos\theta + r\sin\varphi\sin\theta = x\cos\theta + y\sin\theta,\\
Y &= r\sin(\theta - \varphi) = r\cos\varphi\sin\theta - r\sin\varphi\cos\theta = x\sin\theta - y\cos\theta.
\end{aligned}$$

We can write this in matrix form:
$$\begin{pmatrix} X\\ Y \end{pmatrix}
= \begin{pmatrix} \cos\theta & \sin\theta\\ \sin\theta & -\cos\theta \end{pmatrix}
\begin{pmatrix} x\\ y \end{pmatrix}
\;\Rightarrow\;
M_\theta = \begin{pmatrix} \cos\theta & \sin\theta\\ \sin\theta & -\cos\theta \end{pmatrix}.$$

Example: Composition of the two

Let $A \in \mathbb{R}^{2\times 2}$ be an isometry and write the matrix as
$$A = \begin{pmatrix} a & b\\ c & d \end{pmatrix}.$$
Since $A$ is an isometry (an orthogonal matrix), its columns are orthonormal:
$$a^2 + c^2 = 1 \quad (1), \qquad b^2 + d^2 = 1 \quad (2), \qquad ab + cd = 0 \quad (3).$$
From equation (1) we can write $a = \cos\theta$, $c = \sin\theta$ for some $\theta$. From equation (2) we have $b = \cos\varphi$, $d = \sin\varphi$ for some $\varphi$. From equation (3) we see that
$$\cos\theta\cos\varphi + \sin\theta\sin\varphi = \cos(\theta - \varphi) = 0.$$
So either
$$\varphi = \theta + \frac{\pi}{2}, \quad\text{in which case}\quad b = \cos\Bigl(\theta + \frac{\pi}{2}\Bigr) = -\sin\theta,\; d = \sin\Bigl(\theta + \frac{\pi}{2}\Bigr) = \cos\theta,$$
or
$$\varphi = \theta + \frac{3\pi}{2}, \quad\text{in which case}\quad b = \cos\Bigl(\theta + \frac{3\pi}{2}\Bigr) = \sin\theta,\; d = \sin\Bigl(\theta + \frac{3\pi}{2}\Bigr) = -\cos\theta.$$
Finally, from $A = \begin{pmatrix} a & b\\ c & d \end{pmatrix}$ and the values of $a, b, c, d$ found above,
$$A = \begin{pmatrix} \cos\theta & -\sin\theta\\ \sin\theta & \cos\theta \end{pmatrix}
\quad\text{or}\quad
A = \begin{pmatrix} \cos\theta & \sin\theta\\ \sin\theta & -\cos\theta \end{pmatrix}.$$
In particular, the composition of the rotation and the reflection above is again an isometry of $\mathbb{R}^2$ and therefore of one of these two forms.
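A small numerical check (a sketch with an arbitrarily chosen angle) confirms that both matrices are orthogonal and that their composition is again an isometry (a reflection, with determinant $-1$):

```python
import numpy as np

theta = 0.7  # arbitrary angle, for illustration only
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation
M = np.array([[np.cos(theta),  np.sin(theta)],
              [np.sin(theta), -np.cos(theta)]])   # reflection in y = x*tan(theta/2)

for T in (R, M, R @ M):   # rotation, reflection, and their composition
    print(np.allclose(T.T @ T, np.eye(2)), round(np.linalg.det(T), 6))
# determinants: +1 (rotation), -1 (reflection), -1 (composition, again a reflection)
```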

6) Find the general solution of the linear difference equation
$$4U_{n+2} - 4U_{n+1} + U_n = 0.$$

- Solution:

The characteristic polynomial is
$$\rho(\xi) = 4\xi^2 - 4\xi + 1 = (2\xi - 1)^2 = 0.$$
The equation has the double root $\xi_1 = \xi_2 = \tfrac{1}{2}$, so the general solution has the form
$$U_n = c_1\Bigl(\frac{1}{2}\Bigr)^n + c_2\,n\Bigl(\frac{1}{2}\Bigr)^n.$$

b) Write the equation in the form
$$\begin{pmatrix} U_{n+2}\\ U_{n+1} \end{pmatrix} = A\begin{pmatrix} U_{n+1}\\ U_n \end{pmatrix},$$
where $A \in \mathbb{R}^{2\times 2}$. Compute $A^n$ using the associated Jordan decomposition and find its limit as $n \to \infty$. Must the spectral radius of $A$ be less than one?

- Solution:

From the equation
$$\begin{pmatrix} U_{n+2}\\ U_{n+1} \end{pmatrix} = A\begin{pmatrix} U_{n+1}\\ U_n \end{pmatrix}$$
and $4U_{n+2} - 4U_{n+1} + U_n = 0 \;\Rightarrow\; U_{n+2} = U_{n+1} - \tfrac{1}{4}U_n$, we get
$$\begin{pmatrix} U_{n+2}\\ U_{n+1} \end{pmatrix}
= \begin{pmatrix} 1 & -\tfrac{1}{4}\\ 1 & 0 \end{pmatrix}
\begin{pmatrix} U_{n+1}\\ U_n \end{pmatrix}
\;\Rightarrow\;
A = \begin{pmatrix} 1 & -\tfrac{1}{4}\\ 1 & 0 \end{pmatrix}.$$
We set
$$\hat{U}_n = \begin{pmatrix} U_{n+1}\\ U_n \end{pmatrix}
\;\Rightarrow\; \hat{U}_{n+1} = A\hat{U}_n
\;\Rightarrow\; \hat{U}_n = A^n\hat{U}_0.$$

Following part a), we use the Jordan decomposition to compute $A^n$. The characteristic equation is
$$\rho(\lambda) = \det(A - \lambda I)
= \det\begin{pmatrix} 1-\lambda & -\tfrac{1}{4}\\ 1 & -\lambda \end{pmatrix}
= \lambda^2 - \lambda + \tfrac{1}{4}
= \Bigl(\lambda - \tfrac{1}{2}\Bigr)^2 = 0
\;\Rightarrow\; \lambda_1 = \lambda_2 = \tfrac{1}{2}.$$

We have the Jordan matrix
$$J = \begin{pmatrix} \tfrac{1}{2} & 1\\ 0 & \tfrac{1}{2} \end{pmatrix},$$
and $AR = RJ$ with $R = [\,r_1 \;\; r_2\,]$, where the eigenvalues are $\lambda_1 = \lambda_2 = \tfrac{1}{2}$.

To apply the Jordan decomposition we solve
$$(A - \lambda I)r_1 = 0 \;\Rightarrow\; r_1 = \begin{pmatrix} \tfrac{1}{2}\\ 1 \end{pmatrix},
\qquad
(A - \lambda I)r_2 = r_1 \;\Rightarrow\; r_2 = \begin{pmatrix} 1\\ 0 \end{pmatrix}.$$
So
$$R = [\,r_1 \;\; r_2\,] = \begin{pmatrix} \tfrac{1}{2} & 1\\ 1 & 0 \end{pmatrix},
\qquad
R^{-1} = \begin{pmatrix} 0 & 1\\ 1 & -\tfrac{1}{2} \end{pmatrix}.$$

We observe that the matrix $A$ can be represented by the matrices $R$, $R^{-1}$, $J$:
$$A = \begin{pmatrix} 1 & -\tfrac{1}{4}\\ 1 & 0 \end{pmatrix}, \qquad
J = \begin{pmatrix} \tfrac{1}{2} & 1\\ 0 & \tfrac{1}{2} \end{pmatrix}.$$
Clearly,
$$A = RJR^{-1}
= \begin{pmatrix} \tfrac{1}{2} & 1\\ 1 & 0 \end{pmatrix}
\begin{pmatrix} \tfrac{1}{2} & 1\\ 0 & \tfrac{1}{2} \end{pmatrix}
\begin{pmatrix} 0 & 1\\ 1 & -\tfrac{1}{2} \end{pmatrix}
= \begin{pmatrix} 1 & -\tfrac{1}{4}\\ 1 & 0 \end{pmatrix}.$$

We have
$$A = RJR^{-1} \;\Rightarrow\; A^n = \bigl(RJR^{-1}\bigr)\bigl(RJR^{-1}\bigr)\cdots\bigl(RJR^{-1}\bigr) = RJ^nR^{-1}.$$

For the Jordan block we compute
$$J = \begin{pmatrix} \lambda & 1\\ 0 & \lambda \end{pmatrix}, \quad
J^2 = \begin{pmatrix} \lambda^2 & 2\lambda\\ 0 & \lambda^2 \end{pmatrix}, \quad
J^3 = \begin{pmatrix} \lambda^3 & 3\lambda^2\\ 0 & \lambda^3 \end{pmatrix}, \quad\dots,\quad
J^n = \begin{pmatrix} \lambda^n & n\lambda^{n-1}\\ 0 & \lambda^n \end{pmatrix}.$$
Thus
$$A^n = R\,J^n\,R^{-1}
= \begin{pmatrix} \tfrac{1}{2} & 1\\ 1 & 0 \end{pmatrix}
\begin{pmatrix} \lambda^n & n\lambda^{n-1}\\ 0 & \lambda^n \end{pmatrix}
\begin{pmatrix} 0 & 1\\ 1 & -\tfrac{1}{2} \end{pmatrix},
\qquad \lambda = \tfrac{1}{2}.$$

When $n \to \infty$, we have $\lambda^n \to 0$ and $n\lambda^{n-1} \to 0$ (since $\lambda = \tfrac{1}{2} < 1$), so
$$\lim_{n\to\infty} A^n
= R\Bigl(\lim_{n\to\infty} J^n\Bigr)R^{-1}
= R\begin{pmatrix} 0 & 0\\ 0 & 0 \end{pmatrix}R^{-1}
= \begin{pmatrix} 0 & 0\\ 0 & 0 \end{pmatrix}.$$

The eigenvalues of the matrix $A$ satisfy
$$\det(A - \lambda I) = \lambda^2 - \lambda + \tfrac{1}{4} = 0 \;\Rightarrow\; \lambda_1 = \lambda_2 = \tfrac{1}{2}.$$
For $A^n$ to converge to the zero matrix, the spectral radius must indeed be less than one, and here it is:
$$\rho(A) = \max_i |\lambda_i| = \tfrac{1}{2} < 1.$$
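A short numerical check of the Jordan computation (a sketch; the powers printed are arbitrary) compares $A^n$ with $RJ^nR^{-1}$ and shows $A^n \to 0$:

```python
import numpy as np

A = np.array([[1.0, -0.25],
              [1.0,  0.0]])
R = np.array([[0.5, 1.0],
              [1.0, 0.0]])
Rinv = np.linalg.inv(R)
lam = 0.5

def Jn(n):
    # J^n for the 2x2 Jordan block with eigenvalue lam
    return np.array([[lam**n, n * lam**(n - 1)],
                     [0.0,    lam**n]])

for n in (1, 5, 20, 60):
    An = np.linalg.matrix_power(A, n)
    print(n, np.allclose(An, R @ Jn(n) @ Rinv), np.max(np.abs(An)))
# last column decays to 0, consistent with rho(A) = 1/2 < 1
```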

7) Recall that the rank of a matrix is equal to the number of linearly independent columns.

Prove that $A \in \mathbb{R}^{n\times n}$ has rank one if and only if there exist nonzero vectors $u, v \in \mathbb{R}^n$ such that $A = uv^T$. To what extent is there flexibility in the choice of $u$ and $v$?

- Proof:

If $A = uv^T$ with $u, v$ nonzero, then every column of $A$ is a scalar multiple $v_ju$ of $u$, so $A$ has rank one.

Conversely, if $A$ has rank one, every row of $A$ is a multiple of some fixed nonzero row; assuming for simplicity that this is the first row, we can write
$$a_{i,1} = \alpha_{i-1}a_{1,1}, \quad a_{i,2} = \alpha_{i-1}a_{1,2}, \quad\dots,\quad a_{i,n} = \alpha_{i-1}a_{1,n}, \qquad i = 2,\dots,n.$$
Consider the vectors $u, v$ defined as
$$u = \bigl[1,\ \alpha_1,\ \dots,\ \alpha_{n-1}\bigr]^T \in \mathbb{R}^n, \qquad
v = \bigl[a_{1,1},\ \dots,\ a_{1,n}\bigr]^T \in \mathbb{R}^n.$$
By definition $u$ and $v$ satisfy the relation $A = uv^T$, and they are both nonzero vectors; otherwise $A$ would be the zero matrix, contradicting the assumption that $A$ has rank one.

+ The representation is unique up to a constant: if $A = u_1v_1^T = u_2v_2^T$, then $u_1 = s\,u_2$ and $v_1 = s^{-1}v_2$ for some nonzero scalar $s$.
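A quick numerical sanity check of the rank-one characterization and of this rescaling freedom (a sketch with arbitrarily chosen vectors):

```python
import numpy as np

u = np.array([1.0, 2.0, -1.0])
v = np.array([3.0, 0.5, 4.0])
A = np.outer(u, v)                         # A = u v^T
print(np.linalg.matrix_rank(A))            # 1

s = 2.5                                    # rescaling freedom: (s*u)(v/s)^T = u v^T
print(np.allclose(np.outer(s * u, v / s), A))   # True
```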

b) Show that if $G = I - uv^T$ is nonsingular, then $G^{-1}$ has the form $I - \beta uv^T$. Give a formula for $\beta$.

- Solution:

We require $GG^{-1} = I$ and can check
$$GG^{-1} = \bigl(I - uv^T\bigr)\bigl(I - \beta uv^T\bigr)
= I - \beta uv^T - uv^T + \beta u\bigl(v^Tu\bigr)v^T
= I - \bigl(\beta + 1 - \beta\,v^Tu\bigr)uv^T.$$
Thus, if $GG^{-1} = I$, then $\beta + 1 - \beta\,v^Tu = 0$, i.e.
$$\beta = \frac{1}{v^Tu - 1}.$$

Suppose $G = I - uv^T$ is nonsingular; we will show that $G^{-1} = I - \beta uv^T$, where
$$\beta = \frac{1}{v^Tu - 1}.$$
First, $v^Tu \neq 1$. Indeed, suppose $v^Tu = 1$. Then
$$G^2 = \bigl(I - uv^T\bigr)\bigl(I - uv^T\bigr) = I - 2uv^T + u\bigl(v^Tu\bigr)v^T = I - uv^T = G.$$
Since $G$ is nonsingular, this gives $G = I$, i.e. $uv^T = 0$, which is a contradiction ($u$ and $v$ are nonzero). Hence $v^Tu \neq 1$, and $\beta$ is well defined. We then have
$$GG^{-1} = \bigl(I - uv^T\bigr)\Bigl(I - \frac{uv^T}{v^Tu - 1}\Bigr)
= I - \frac{uv^T}{v^Tu - 1} - uv^T + \frac{u\bigl(v^Tu\bigr)v^T}{v^Tu - 1}
= I - \Bigl(\frac{1}{v^Tu - 1} + 1 - \frac{v^Tu}{v^Tu - 1}\Bigr)uv^T
= I,$$
so $G^{-1} = I - \beta uv^T$ with $\beta = \dfrac{1}{v^Tu - 1}$, as claimed. $\square$
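This identity is easy to sanity-check numerically (a sketch with random vectors, which with probability one satisfy $v^Tu \neq 1$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
u = rng.standard_normal(n)
v = rng.standard_normal(n)
assert abs(v @ u - 1.0) > 1e-12          # G = I - u v^T is nonsingular iff v^T u != 1

G = np.eye(n) - np.outer(u, v)
beta = 1.0 / (v @ u - 1.0)
Ginv = np.eye(n) - beta * np.outer(u, v)
print(np.allclose(G @ Ginv, np.eye(n)))  # True: (I - u v^T)^{-1} = I - beta u v^T
```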

"The mathematical methods in Applied Mathematics are wonderful!"


