Hermitian Matrix and the Schur-Horn Theorem

HANOI PEDAGOGICAL UNIVERSITY
DEPARTMENT OF MATHEMATICS
————oOo————

PHAM HUY HIEU

HERMITIAN MATRIX AND THE SCHUR-HORN THEOREM

DRAFT GRADUATION THESIS

Supervisor: PhD Nguyen Chu Gia Vuong
Student: Pham Huy Hieu
Class: K41CLC

HANOI, 05/2019


Acknowledgment

Before presenting the main content of this thesis, I would like to express my gratitude to the mathematics teachers of Hanoi Pedagogical University, the teachers of the algebra group, and all the teachers whose dedicated teaching conveyed valuable knowledge to me and created favorable conditions for me to complete the course and this thesis. In particular, I would like to express my deep respect and gratitude to PhD Nguyen Chu Gia Vuong, who directly instructed and helped me so that I could complete this thesis. Due to limited time, capacity and conditions, the thesis cannot avoid errors, so I look forward to receiving valuable comments from teachers and friends.

Student
Pham Huy Hieu


Preface

In mathematics, and especially in linear algebra, we usually pay attention to matrices with real coefficients and only rarely to complex matrices. In fact, complex matrices are very important, and there is in particular one type of matrix with complex coefficients, the Hermitian matrix, for which the eigenvalues and the entries of the main diagonal are of special relevance. The Schur-Horn theorem tells us the relationship between them. It has inspired investigations and substantial generalizations in the setting of symplectic geometry.


Contents

1 PRELIMINARIES
  1.1 Eigenvalues and eigenvectors
  1.2 Permutation matrix
  1.3 Hermitian matrix
  1.4 Unitary matrix
  1.5 Bistochastic matrix and Majorization
  1.6 Convex hull
  1.7 Birkhoff polytope
2 THE SCHUR-HORN THEOREM
  2.1 Schur-Horn theorem
  2.2 Proof of the Schur-Horn theorem
3 Application
  3.1 The Pythagorean Theorem in Finite Dimension
  3.2 The Schur-Horn Theorem in the Finite Dimensional Case
References


Chapter 1
PRELIMINARIES

1.1 Eigenvalues and eigenvectors

Definition 1.1.1. In linear algebra, an eigenvector or characteristic vector of a linear transformation is a non-zero vector that changes by only a scalar factor when that linear transformation is applied to it. More formally, if T is a linear transformation from a vector space V over a field F into itself and v is a vector in V that is not the zero vector, then v is an eigenvector of T if T(v) is a scalar multiple of v. This condition can be written as the equation

    T(v) = λv,

where λ is a scalar in the field F, known as the eigenvalue, characteristic value, or characteristic root associated with the eigenvector v.

If the vector space V is finite-dimensional, then the linear transformation T can be represented as a square matrix A, and the vector v by a column vector, rendering the above mapping as a matrix multiplication on the left-hand side and a scaling of the column vector on the right-hand side in the equation

    Av = λv.

There is a direct correspondence between n × n square matrices and linear transformations from an n-dimensional vector space to itself, given any basis of the vector space. For this reason, it is equivalent to define eigenvalues and eigenvectors using either the language of matrices or the language of linear transformations.

Geometrically, an eigenvector corresponding to a real non-zero eigenvalue points in a direction that is stretched by the transformation, and the eigenvalue is the factor by which it is stretched. If the eigenvalue is negative, the direction is reversed.
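The defining relation Av = λv is easy to check numerically. The following minimal sketch (assuming NumPy is available; it is only an illustration of the definition) computes the eigenpairs of a small matrix and verifies that each eigenvector is only rescaled by A.

```python
import numpy as np

# A small matrix whose eigenvalues are easy to read off (it is upper triangular).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

eigvals, eigvecs = np.linalg.eig(A)   # columns of eigvecs are eigenvectors

for k in range(len(eigvals)):
    v = eigvecs[:, k]
    lam = eigvals[k]
    # A v equals lambda * v: the vector v is only stretched by the factor lambda.
    assert np.allclose(A @ v, lam * v)

print(eigvals)   # the eigenvalues 2 and 3 (order may vary)
```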
1.2 Permutation matrix

Definition 1.2.1. Given a permutation π of m elements, π : {1, ..., m} → {1, ..., m}, represented in two-line form by

    ( 1     2     ···  m    )
    ( π(1)  π(2)  ···  π(m) ),

there are two natural ways to associate the permutation with a permutation matrix; namely, starting with the m × m identity matrix I_m, either permute the columns or permute the rows according to π. Both methods of defining permutation matrices appear in the literature, and the properties expressed in one representation can easily be converted to the other.

The m × m permutation matrix Pπ = (p_ij) obtained by permuting the columns of the identity matrix I_m, that is, with p_ij = 1 if j = π(i) and p_ij = 0 otherwise, will be referred to as the column representation. Since the entries in row i are all 0 except that a 1 appears in column π(i), we may write

    Pπ = ( e_π(1) )
         ( e_π(2) )
         (   ⋮    )
         ( e_π(m) ),

where e_j, a standard basis vector, denotes a row vector of length m with 1 in the j-th position and 0 in every other position.

For example, the permutation matrix Pπ corresponding to the permutation

    π = ( 1 2 3 4 5 )
        ( 1 4 2 5 3 )

is

    Pπ = ( e_π(1) )   ( e_1 )   ( 1 0 0 0 0 )
         ( e_π(2) )   ( e_4 )   ( 0 0 0 1 0 )
         ( e_π(3) ) = ( e_2 ) = ( 0 1 0 0 0 )
         ( e_π(4) )   ( e_5 )   ( 0 0 0 0 1 )
         ( e_π(5) )   ( e_3 )   ( 0 0 1 0 0 ).

Observe that the j-th column of the identity matrix I_5 now appears as the π(j)-th column of Pπ.

The other representation, obtained by permuting the rows of the identity matrix I_m, that is, with p_ij = 1 if i = π(j) and p_ij = 0 otherwise, will be referred to as the row representation.

1.3 Hermitian matrix

Definition 1.3.1. In mathematics, a Hermitian matrix (or self-adjoint matrix) is a complex square matrix that is equal to its own conjugate transpose, that is, the element in the i-th row and j-th column is equal to the complex conjugate of the element in the j-th row and i-th column, for all indices i and j:

    A Hermitian  ⟺  a_ij = conj(a_ji)  for all i, j,

or, in matrix form,

    A Hermitian  ⟺  A = A*,

where A* denotes the conjugate transpose of A. Hermitian matrices can be understood as the complex extension of real symmetric matrices.

Example. The matrix

    A = ( 2      2 + i   4 )
        ( 2 − i  3       i )
        ( 4      −i      1 )

is a Hermitian matrix.

Proposition 1.3.1. The eigenvalues of a Hermitian matrix are real.

Proof. Let v ≠ 0, v ∈ C^n, be an eigenvector of the Hermitian matrix A with eigenvalue λ. Then

    v* A v = λ v* v = λ ‖v‖².

Taking the conjugate transpose of this equation and using A* = A gives v* A v = λ̄ ‖v‖². Hence λ ‖v‖² = λ̄ ‖v‖², and since ‖v‖ ≠ 0 we conclude λ = λ̄, so λ is a real number. ∎

Proposition 1.3.2. Eigenvectors v1, v2 of a Hermitian matrix A corresponding to different eigenvalues λ1, λ2 are orthogonal (i.e. ⟨v2, v1⟩ = 0).

Proof. We have

    v1* A v2 = λ2 v1* v2 = λ2 ⟨v2, v1⟩,

and also, since A = A* and λ1 is real by Proposition 1.3.1,

    v1* A v2 = (A v1)* v2 = λ̄1 v1* v2 = λ1 ⟨v2, v1⟩.

Hence (λ1 − λ2) ⟨v2, v1⟩ = 0. But λ1 ≠ λ2, so ⟨v2, v1⟩ = 0, as claimed. ∎

1.4 Unitary matrix

Definition 1.4.1 (Unitary matrix). In mathematics, a complex square matrix U is unitary if its conjugate transpose U* is also its inverse, that is, if

    U* U = U U* = I_n,

where I_n is the identity matrix.

Example. The matrix

    U1 = ( cos α    i sin α )
         ( i sin α  cos α   )

is a unitary matrix.

Properties 1.4.1. For any unitary matrix U of finite size, the following hold:
1) Given two complex vectors x and y, multiplication by U preserves their inner product; that is, ⟨Ux, Uy⟩ = ⟨x, y⟩.
2) U is normal and diagonalizable; that is, U is unitarily similar to a diagonal matrix, as a consequence of the spectral theorem. Thus U has a decomposition of the form U = V D V*, where V is unitary and D is diagonal and unitary.
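As a quick numerical illustration of Propositions 1.3.1 and 1.3.2 (a minimal sketch assuming NumPy; the particular matrix is just a test case), the snippet below checks that a Hermitian matrix has real eigenvalues and an orthonormal set of eigenvectors, i.e. a unitary eigenvector matrix.

```python
import numpy as np

A = np.array([[2, 2 + 1j, 4],
              [2 - 1j, 3, 1j],
              [4, -1j, 1]])
assert np.allclose(A, A.conj().T)                # A equals its conjugate transpose

w = np.linalg.eigvals(A)
assert np.allclose(w.imag, 0)                    # eigenvalues are real (Proposition 1.3.1)

vals, U = np.linalg.eigh(A)                      # routine specialised to Hermitian matrices
assert np.allclose(U.conj().T @ U, np.eye(3))    # orthonormal eigenvectors, U is unitary
assert np.allclose(U @ np.diag(vals) @ U.conj().T, A)   # spectral decomposition of A
```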
Chapter 2
THE SCHUR-HORN THEOREM

2.1 Schur-Horn theorem

Theorem 2.1.1. Let d1, ..., dn and λ1, ..., λn be real numbers. There is an n × n Hermitian matrix with diagonal entries d1, ..., dn and eigenvalues λ1, ..., λn if and only if the vector d = (d1, ..., dn) lies in the permutation polytope generated by λ = (λ1, ..., λn), that is, if and only if d ≺ λ.

2.2 Proof of the Schur-Horn theorem

Our proof of the Schur-Horn theorem goes as follows. In 2.2.1 we show that the vector of diagonal entries of a Hermitian matrix can be written as the product of a bistochastic matrix and the vector of its eigenvalues.

The key two-by-two step in the construction is the following: if H is a 2 × 2 Hermitian matrix with diagonal entries d1, d2 and off-diagonal entry h12, and t ∈ [0, 1], then there is a unitary matrix U such that UHU* has diagonal (td1 + (1 − t)d2, td2 + (1 − t)d1). Define a complex number ξ by

    ξ := i · conj(h12)/|h12|  if h12 ≠ 0,   and   ξ := 1  otherwise.

Then ξ h12 = −conj(ξ h12) and |ξ| = 1. Now let

    U = ( √t             −√(1 − t) · conj(ξ) )
        ( √(1 − t) · ξ    √t                 ).

It is clear from the definition of the complex number ξ and the entries of U that U is unitary. The matrix A = UHU* has the same eigenvalues as H and, moreover, the diagonal of A is (td1 + (1 − t)d2, td2 + (1 − t)d1), as desired.

Now we continue the proof of the Schur-Horn theorem.

Proposition 2.2.1. Suppose that d = (d1, ..., dn) and λ = (λ1, ..., λn) are vectors in R^n. If d lies in the permutation polytope P_λ, then there exists an n × n Hermitian matrix with diagonal d and eigenvalues λ1, ..., λn.

Proof. Suppose that d lies in the permutation polytope generated by the vector λ. As remarked earlier, if d or λ is not weakly decreasing, we may replace it by a weakly decreasing element of its orbit, so, without loss of generality, assume that both d and λ are weakly decreasing. Then, by the equivalence of (a) and (b) in Lemma 2.2.2.2, there exist vectors v1, ..., vn ∈ P_λ with v1 = λ, vn = d, and, for each integer 1 ≤ m < n,

    v_{m+1} = t_m v_m + (1 − t_m) τ_m v_m        (∗1)

for some t_m ∈ [0, 1] and some transposition matrix τ_m. Let V1 denote the diagonal matrix with diagonal v1 = λ. Since V1 is Hermitian and the vectors v_k satisfy the relation (∗1), repeated application of the two-by-two construction above (applied in the two coordinates interchanged by τ_m) shows that there are Hermitian matrices V2, ..., Vn with diagonals v2, ..., vn, respectively, all with eigenvalues λ1, ..., λn. Since vn = d, the matrix Vn is a Hermitian matrix with diagonal d, which proves the result. ∎

It is worthwhile to note that if the vector d we started with was not weakly decreasing, a simple reordering of the basis, i.e. conjugation by a permutation matrix, gives us a matrix whose diagonal is this (non-weakly decreasing) vector. Moreover, since the property of being Hermitian is invariant under conjugation by a unitary matrix, and permutation matrices are in particular unitary matrices, this new matrix is Hermitian too.
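The two-by-two step is easy to verify numerically. The following minimal sketch (assuming NumPy; the exact arrangement of the entries of U mirrors the reconstruction given above) builds U for a sample 2 × 2 Hermitian H and checks that U is unitary, that the spectrum is preserved, and that the diagonal of UHU* is the claimed convex mixture of d1 and d2.

```python
import numpy as np

def mix_diagonal(H, t):
    """Unitary U with diag(U H U*) = (t*d1 + (1-t)*d2, t*d2 + (1-t)*d1),
    following the two-by-two construction (H is 2x2 Hermitian, t in [0, 1])."""
    h12 = H[0, 1]
    xi = 1j * np.conj(h12) / abs(h12) if h12 != 0 else 1.0
    return np.array([[np.sqrt(t), -np.sqrt(1 - t) * np.conj(xi)],
                     [np.sqrt(1 - t) * xi, np.sqrt(t)]])

d1, d2, t = 1.0, 4.0, 0.3
H = np.array([[d1, 2 - 3j],
              [2 + 3j, d2]])
U = mix_diagonal(H, t)

assert np.allclose(U @ U.conj().T, np.eye(2))            # U is unitary
A = U @ H @ U.conj().T
assert np.allclose(np.diag(A).real,
                   [t * d1 + (1 - t) * d2, t * d2 + (1 - t) * d1])
assert np.allclose(np.linalg.eigvalsh(A), np.linalg.eigvalsh(H))   # same eigenvalues
```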
Appendix: the proof of a technical result

Lemma 2.2.2.2. Suppose that x = (x1, ..., xn) and y = (y1, ..., yn) are weakly decreasing vectors in R^n and that Σ_{i=1}^n x_i = Σ_{i=1}^n y_i. Then the following are equivalent.
(a) The vector y is in the permutation polytope P_x.
(b) There are vectors v1, ..., vn such that v1 = x, vn = y, for each 1 ≤ m < n there are a transposition matrix τ_m ∈ Π_n and a real number t_m ∈ [0, 1] such that v_{m+1} = t_m v_m + (1 − t_m) τ_m v_m, and for m > 1 the first m coordinates of v_{m+1} agree with the first m coordinates of y.

Proof. We first prove that (b) implies (a), so suppose that (b) holds. Notice that, by the definition of P_x as the convex hull of the orbit O_x of x, for any point z ∈ P_x and any permutation σ ∈ Π_n the element σz is in P_x; moreover, a convex combination of elements of P_x is in P_x. Since x ∈ P_x, the obvious induction argument shows that v_i ∈ P_x for every integer 1 ≤ i ≤ n. Since vn = y, this shows that y ∈ P_x.

Now we prove that (a) implies (b). We set v1 = x and construct a vector v2 such that

    v2 = t1 v1 + (1 − t1) τ1 v1        (A.1.1)

for some t1 ∈ [0, 1] and transposition matrix τ1 ∈ Π_n, so that the first coordinate of v2 agrees with the first coordinate of y. Then, given vectors v1, ..., vm such that

(a.1) for all 1 ≤ j < m, v_{j+1} = t_j v_j + (1 − t_j) τ_j v_j for some t_j ∈ [0, 1] and transposition τ_j ∈ Π_n,
(a.2) for all i < j ≤ m, writing v_j = (v_{j,1}, ..., v_{j,n}), we have v_{j,i} = y_i, so that the first j − 1 coordinates of v_j agree with the first j − 1 coordinates of y,
(a.3) for each j ≤ m, Σ_{i=1}^k y_i ≤ Σ_{i=1}^k v_{j,i} for all integers k < n, and Σ_{i=1}^n y_i = Σ_{i=1}^n v_{j,i},

we construct a vector v_{m+1} such that

(b.1) v_{m+1} = t_m v_m + (1 − t_m) τ_m v_m for some t_m ∈ [0, 1] and transposition τ_m ∈ Π_n,
(b.2) for all i < m + 1 we have v_{m+1,i} = y_i, so that the first m coordinates of v_{m+1} agree with the first m coordinates of y,
(b.3) Σ_{i=1}^k y_i ≤ Σ_{i=1}^k v_{m+1,i} for all integers k < n, and Σ_{i=1}^n y_i = Σ_{i=1}^n v_{m+1,i}.

We begin by making the following observation, which is necessary to construct v2 from v1. We want to build the convex combination (A.1.1) so that the first coordinate of v2 is a convex combination of the first and l1-th coordinates of v1 for some integer l1, and then let τ1 be the transposition which interchanges the first and l1-th coordinates. It suffices to find an integer l such that x_l ≤ y_1. Since x is weakly decreasing, for any integer k with 1 ≤ k ≤ n and any permutation σ ∈ Π_n,

    Σ_{i=1}^k (σx)_i ≤ Σ_{i=1}^k x_i.

Therefore, if z = (z1, ..., zn) is a convex combination of elements of the orbit O_x of x, then for any integer k with 1 ≤ k ≤ n,

    Σ_{i=1}^k z_i ≤ Σ_{i=1}^k x_i.

In particular, this is true for y, since y ∈ P_x. Using this observation we can show that there exists an integer l, with 1 ≤ l ≤ n, such that x_l ≤ y_1. To see this, suppose, for the sake of contradiction, that there exists no such integer. Then x_i > y_1 ≥ y_i for every i, so

    Σ_{i=1}^n y_i < Σ_{i=1}^n x_i,

which contradicts the assumption that Σ_{i=1}^n y_i = Σ_{i=1}^n x_i. Let l1 be the least such integer.

Now we are ready to construct v2. Since x_{l1} ≤ y_1 ≤ x_1, there exists some real number t1 ∈ [0, 1] such that y_1 = t1 x_1 + (1 − t1) x_{l1}. Let τ1 ∈ Π_n be the transposition matrix which interchanges the first and l1-th coordinates. Set v1 := x and define

    v2 := t1 v1 + (1 − t1) τ1 v1.

Write the components of v2 as v2 = (v_{2,1}, ..., v_{2,n}). The key feature of v2 is that v_{2,1} = y_1. By the construction of v2 and the minimality of l1, for every integer 1 ≤ k ≤ l1,

    Σ_{i=1}^k y_i ≤ Σ_{i=1}^k v_{2,i};

for l1 < k < n the first k coordinates of v2 and of v1 = x have the same sum, so the same inequality holds. Moreover, by the construction of v2, the fact that v1 = x, and the assumptions of the lemma, we see that

    Σ_{i=1}^n v_{2,i} = Σ_{i=1}^n v_{1,i} = Σ_{i=1}^n x_i = Σ_{i=1}^n y_i.

Now suppose that we have constructed vectors v1, ..., vm ∈ P_x satisfying conditions (a.1)–(a.3). We need to construct a vector v_{m+1} ∈ P_x such that (b.1)–(b.3) hold. From the construction of v1, ..., vm we see that

    Σ_{i=1}^m y_i ≤ Σ_{i=1}^m v_{m,i} = Σ_{i=1}^{m−1} y_i + v_{m,m},

so subtracting Σ_{i=1}^{m−1} y_i shows that y_m ≤ v_{m,m}. Moreover, since Σ_{i=1}^n y_i = Σ_{i=1}^n v_{m,i} and Σ_{i=1}^{n−1} y_i ≤ Σ_{i=1}^{n−1} v_{m,i}, we see that v_{m,n} ≤ y_n ≤ y_m.

Similarly to the construction of v2 from v1, we want to find a coordinate of v_m so that a convex combination of this coordinate with the m-th coordinate of v_m is equal to the m-th coordinate of y. Notice that there exists an integer l > m such that v_{m,l} ≤ y_m; otherwise v_{m,i} > y_m ≥ y_i for all i > m, which together with (a.2) would force Σ_{i=1}^n v_{m,i} > Σ_{i=1}^n y_i, contradicting (a.3). Let l_m be the least such integer. Then

    y_j ≤ y_m ≤ v_{m,j}   for m + 1 ≤ j ≤ l_m − 1.        (A.1.2)

Since v_{m,l_m} ≤ y_m ≤ v_{m,m}, there is a real number t_m ∈ [0, 1] such that

    y_m = t_m v_{m,m} + (1 − t_m) v_{m,l_m}.

Let τ_m ∈ Π_n be the transposition matrix which interchanges the m-th and l_m-th coordinates, and define

    v_{m+1} := t_m v_m + (1 − t_m) τ_m v_m.

Writing v_{m+1} = (v_{m+1,1}, ..., v_{m+1,n}), we see that v_{m+1,j} = y_j for 1 ≤ j ≤ m. Thus v_{m+1} satisfies conditions (b.1) and (b.2) by construction. All that remains to be shown is that v_{m+1} satisfies condition (b.3).

When 1 ≤ k ≤ m this is obvious, because v_{m+1,j} = y_j for 1 ≤ j ≤ m. If m + 1 ≤ k ≤ l_m − 1, then by (A.1.2) we see that

    Σ_{i=1}^k y_i ≤ Σ_{i=1}^m y_i + Σ_{i=m+1}^k v_{m,i} = Σ_{i=1}^k v_{m+1,i},

so the inequality holds in this range. If l_m ≤ k < n, then the m-th and l_m-th coordinates are both among the first k and their sum is unchanged, so

    Σ_{i=1}^k y_i ≤ Σ_{i=1}^k v_{m,i} = Σ_{i=1}^k v_{m+1,i}.

Lastly, by the construction of v_{m+1} and the hypothesis (a.3), we see that

    Σ_{i=1}^n v_{m+1,i} = Σ_{i=1}^n v_{m,i} = Σ_{i=1}^n y_i,

which completes the proof that (b.3) holds.

Finally, notice that when m + 1 = n we have v_{n,j} = y_j for all 1 ≤ j < n, and the fact that Σ_{i=1}^n y_i = Σ_{i=1}^n v_{n,i} shows that v_{n,n} = y_n, so vn = y. ∎
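The inductive construction in the proof of (a) ⇒ (b) is effectively an algorithm. The following minimal sketch carries it out for a concrete pair of weakly decreasing vectors with equal sums (it assumes NumPy, and the helper name mixing_chain is ours, used only for illustration).

```python
import numpy as np

def mixing_chain(x, y, tol=1e-12):
    """Sketch of the construction in the proof of Lemma 2.2.2.2: given weakly
    decreasing x, y with equal sums and y in the permutation polytope of x,
    return v_1, ..., v_n with v_1 = x, v_n = y and each v_{m+1} a convex
    combination of v_m and a transposition of v_m."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    chain = [x.copy()]
    v = x.copy()
    for m in range(n - 1):
        # least index l > m with v[l] <= y[m]
        l = next(j for j in range(m + 1, n) if v[j] <= y[m] + tol)
        # convex weight t with y[m] = t*v[m] + (1 - t)*v[l]
        t = 1.0 if abs(v[m] - v[l]) < tol else (y[m] - v[l]) / (v[m] - v[l])
        swapped = v.copy()
        swapped[[m, l]] = swapped[[l, m]]        # apply the transposition tau_m
        v = t * v + (1 - t) * swapped
        chain.append(v.copy())
    return chain

for v in mixing_chain([5.0, 3.0, 1.0], [4.0, 3.0, 2.0]):
    print(v)   # [5. 3. 1.] -> [4. 4. 1.] -> [4. 3. 2.]
```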
Chapter 3
Application

In this chapter we study some variants of the Pythagorean Theorem. The Pythagorean Theorem plays an important role in describing the relation between the three sides of a right triangle in Euclidean geometry. Among the variations of the Pythagorean Theorem that we will consider, some are trivial while others are not; we will find that the latter can be solved by using the Schur-Horn Theorem.

3.1 The Pythagorean Theorem in Finite Dimension

In the following we present the Pythagorean Theorem in different dimensions, beginning with the classical variant. We also formulate the converse of the Pythagorean Theorem (PT), which we call the Carpenter Theorem (CT).

Theorem 3.1.1 (PT-1). If we have a right triangle whose legs x, y meet at the angle θ = π/2 and whose hypotenuse is z, then x² + y² = z².

Theorem 3.1.2 (CT-1). If we have a triangle with sides x, y, z such that x² + y² = z², then the angle θ between the sides x and y is π/2, i.e. we have a right triangle.

Let {e1, e2} be an orthonormal basis for R². Then every x ∈ R² can be written as a linear combination of {e1, e2}, and in this case we can restate Theorem 3.1.1 as follows.

Theorem 3.1.3 (PT-2). If x = t1 e1 + t2 e2 and ‖x‖ = 1, then |t1|² + |t2|² = 1.

Proof. Since the norm of x is one and e1 ⊥ e2, we have

    1 = ‖x‖² = ‖t1 e1‖² + ‖t2 e2‖² = |t1|² ‖e1‖² + |t2|² ‖e2‖² = |t1|² + |t2|².  ∎

Theorem 3.1.4 (CT-2). If t1, t2 ∈ R⁺ and t1² + t2² = 1, then there exists x ∈ R² such that ‖x‖ = 1 and ‖P_{Re1} x‖ = t1, ‖P_{Re2} x‖ = t2.

Proof. Let x = t1 e1 + t2 e2. Then ‖x‖² = t1² + t2² = 1. As P_{Re1} x = ⟨x, e1⟩ e1,

    ‖P_{Re1} x‖² = |⟨x, e1⟩|² = |⟨t1 e1 + t2 e2, e1⟩|² = t1²,

and in the same way ‖P_{Re2} x‖² = t2². ∎

Since ‖x‖ = ‖e1‖ = 1, we have ‖P_{Re1} x‖² = |⟨x, e1⟩|² = |⟨e1, x⟩|² = ‖P_{Rx} e1‖². From this point of view, we can rephrase (PT-2) and (CT-2) as follows.

Theorem 3.1.5 (PT-3). If K is a one-dimensional subspace of R², then ‖P_K e1‖² + ‖P_K e2‖² = 1.

Theorem 3.1.6 (CT-3). If t1, t2 ∈ R⁺ and t1 + t2 = 1, then there exists a one-dimensional subspace K ⊂ R² such that ‖P_K e1‖² = t1 and ‖P_K e2‖² = t2.

Next we see that the same results hold in R^n.

Theorem 3.1.7 (PT-4). If K is a one-dimensional subspace of R^n and {e_j}_{j=1}^n is an orthonormal basis, then Σ_{j=1}^n ‖P_K e_j‖² = 1.

Proof. Choose x to be a unit vector in K ⊂ R^n; then x spans K and P_K e_j = ⟨e_j, x⟩ x. Hence

    Σ_{j=1}^n ‖P_K e_j‖² = Σ_{j=1}^n |⟨e_j, x⟩|² ‖x‖² = Σ_{j=1}^n |⟨e_j, x⟩|² = ‖x‖² = 1

by Parseval's equality. ∎

Theorem 3.1.8 (CT-4). If t1, ..., tn ∈ [0, 1] and Σ_{j=1}^n t_j = 1, then there exists a one-dimensional subspace K ⊂ R^n such that ‖P_K e_j‖² = t_j for j = 1, ..., n.

Proof. Let x = Σ_{j=1}^n t_j^{1/2} e_j and put K = Rx. Then ‖x‖² = Σ_{j=1}^n t_j = 1, and

    P_K e_i = ⟨e_i, x⟩ x = t_i^{1/2} x,

so ‖P_K e_i‖² = t_i ‖x‖² = t_i. ∎

In the following we generalize the Pythagorean Theorem in R^n by allowing K to have different dimensions.

Theorem 3.1.9 (PT-5). If K is an m-dimensional subspace of R^n, then Σ_{i=1}^n ‖P_K e_i‖² = m.

Proof. Choose f1, ..., fm to be an orthonormal basis for K ⊂ R^n; then the projection of e_i onto K is

    P_K e_i = Σ_{j=1}^m ⟨e_i, f_j⟩ f_j.

So

    Σ_{i=1}^n ‖P_K e_i‖² = Σ_{i=1}^n ‖ Σ_{j=1}^m ⟨e_i, f_j⟩ f_j ‖² = Σ_{i=1}^n Σ_{j=1}^m |⟨e_i, f_j⟩|²
                        = Σ_{j=1}^m Σ_{i=1}^n |⟨e_i, f_j⟩|² = Σ_{j=1}^m ‖f_j‖² = Σ_{j=1}^m 1 = m.  ∎
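Theorem 3.1.9 is easy to test numerically: for any m-dimensional subspace K of R^n, the squared norms of the projections of the standard basis vectors sum to m. A minimal sketch (assuming NumPy; the subspace is drawn at random):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 7, 3

# Orthonormal basis f_1, ..., f_m of a random m-dimensional subspace K of R^n.
F, _ = np.linalg.qr(rng.standard_normal((n, m)))   # columns are f_1, ..., f_m
P = F @ F.T                                        # orthogonal projection onto K

total = sum(np.linalg.norm(P @ e) ** 2 for e in np.eye(n))
assert np.isclose(total, m)          # PT-5: the squared projections sum to dim K
assert np.isclose(np.trace(P), m)    # equivalently, tr(P_K) = m  (PT-6 below)
```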
The converse of (PT-5) would be the following.

Theorem 3.1.10 (CT-5). If {t_i}_{i=1}^n ⊂ [0, 1] and Σ_{i=1}^n t_i = m, then there exists an m-dimensional subspace K of R^n such that ‖P_K e_i‖² = t_i for i = 1, ..., n.

Suddenly it is not so obvious how to construct K, so first we will attempt to reformulate the theorem. If K ⊂ R^n, P_K is the orthogonal projection of R^n onto K, and e1, ..., en is an orthonormal basis with (t_{ij}) the matrix of P_K, then, since P_K = P_K² = P_K*,

    ‖P_K e_j‖² = ⟨P_K e_j, P_K e_j⟩ = ⟨P_K e_j, e_j⟩ = t_{jj}.

Hence we can write Σ_{i=1}^n ‖P_K e_i‖² = Σ_{i=1}^n t_{ii} = tr(P_K). With this in mind, we can rewrite (PT-5) and (CT-5) as follows.

Theorem 3.1.11 (PT-6). If K is an m-dimensional subspace of R^n, then tr(P_K) = m.

Theorem 3.1.12 (CT-6). If t1, ..., tn ∈ [0, 1] and Σ_{j=1}^n t_j = m, then there is a subspace K ⊂ R^n such that the diagonal of P_K is (t1, ..., tn).

This formulation of (CT-6) makes it clear that its proof is not going to be as trivial as the previous (PT)–(CT) results. Given numbers t1, ..., tn ∈ [0, 1] whose sum is m ∈ N, we want a subspace K ⊂ R^n with ‖P_K e_i‖² = t_i for all i; in short, we want to form a matrix P_K with diagonal (t1, ..., tn) such that P_K = P_K* = P_K². It is not obvious that such a thing is even possible. If we try to find a projection directly in this way, we obtain n(n + 1)/2 equations in n(n − 1)/2 variables, as we see in the next example.

Example. Take P_K to be a matrix such that P_K = P_K* = P_K² and such that the diagonal of P_K is (t, 1 − t) for a fixed t ∈ [0, 1]. So

    P_K = ( t  x     )        P_K² = ( t² + x²         tx + x(1 − t) )
          ( x  1 − t ),              ( tx + x(1 − t)   x² + (1 − t)² ).

As these two should be equal, we get the equations t = t² + x² and 1 − t = x² + (1 − t)², with the single unknown x. In this particular case one can check that x = √(t − t²) gives a solution. But for a 10 × 10 matrix we would have 55 equations in 45 unknowns. The bigger the projection, the more equations we have to deal with, and the systems will always be over-determined. This is an issue, because such systems may have no solution. We will soon see, however, that this problem can be solved in general and that the Schur-Horn Theorem is the way to go.

3.2 The Schur-Horn Theorem in the Finite Dimensional Case

The Schur-Horn theorem characterizes the relation between the eigenvalues and the diagonal entries of a self-adjoint matrix by using majorization.

Theorem 3.2.1 (Schur 1923). If A ∈ M_n(C) is self-adjoint, then diag(A) ≺ λ(A), where diag(A) is the diagonal of A and λ(A) is the list of eigenvalues of A.

Proof. Let A = U D U*, where D is the diagonal matrix of eigenvalues and U is unitary. Then the diagonal of A is given by

    a_{kk} = Σ_{h,l} U_{kh} D_{hl} (U*)_{lk} = Σ_l λ_l U_{kl} conj(U_{kl}) = Σ_l λ_l |U_{kl}|².        (3.2)

Define a matrix T by T_{kl} = |U_{kl}|². The fact that U*U = UU* = I implies that T is bistochastic. Equation (3.2) shows that diag(A) = T λ(A), and therefore diag(A) ≺ λ(A). ∎

Lemma 3.2.1. Let A ∈ M_n(C) be self-adjoint with diagonal y, and let T be a T-transform. Then there exists a unitary U ∈ M_n(C) such that U A U* has diagonal T y.

Proof. Recall that a T-transform has the form T = tI + (1 − t)P_σ for some t ∈ [0, 1] and some transposition σ = (i j), so that Ty = ty + (1 − t)y_σ. Write t = sin²θ and let a_{ij} denote the (i, j) entry of A. Define a unitary U to be the identity matrix except that the four entries in rows and columns i and j are replaced by

    ( ξ sin θ   −cos θ )
    ( ξ cos θ    sin θ ),

where ξ ∈ C with |ξ| = 1 is chosen so that ξ a_{ij} = −conj(ξ a_{ij}). Then a straightforward computation shows that

    diag(U A U*) = t y + (1 − t) y_σ = T y.  ∎
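Schur's theorem can be checked numerically as well. The following minimal sketch (assuming NumPy; the matrix is random) forms the bistochastic matrix T_{kl} = |U_{kl}|² from the eigenvector matrix of a random Hermitian A and verifies both the identity diag(A) = T λ(A) from the proof and the majorization diag(A) ≺ λ(A).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (B + B.conj().T) / 2                       # a random Hermitian matrix

lam, U = np.linalg.eigh(A)                     # A = U diag(lam) U*
T = np.abs(U) ** 2                             # T_{kl} = |U_{kl}|^2

assert np.allclose(T.sum(axis=0), 1) and np.allclose(T.sum(axis=1), 1)   # bistochastic
assert np.allclose(np.diag(A).real, T @ lam)   # diag(A) = T * lambda(A), equation (3.2)

# Majorization diag(A) < lam(A): compare partial sums of the decreasingly sorted vectors.
d_dec = np.sort(np.diag(A).real)[::-1]
lam_dec = np.sort(lam)[::-1]
assert np.all(np.cumsum(d_dec) <= np.cumsum(lam_dec) + 1e-10)
assert np.isclose(d_dec.sum(), lam_dec.sum())
```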
Theorem 3.2.2 (Horn 1954). If x, y ∈ R^n and x ≺ y, then there exists a self-adjoint matrix A ∈ M_n(C) such that diag(A) = x and λ(A) = y.

Proof. Let x = (x1, ..., xn) and y = (y1, ..., yn) with x ≺ y. Then x ≺ y if and only if x = (T_r ⋯ T_1) y, where T_1, ..., T_r are T-transforms. Let A_1 ∈ M_n(R) be the diagonal matrix with diagonal y and zeroes elsewhere. By Lemma 3.2.1, there exists a unitary V_1 such that A_2 = V_1 A_1 V_1* has diagonal T_1 y. Similarly, there exists a unitary V_2 such that A_3 = V_2 A_2 V_2* has diagonal T_2(T_1 y) = T_2 T_1 y. Repeating this, after r steps we have unitaries V_1, ..., V_r such that

    A = V_r ⋯ V_1 A_1 V_1* ⋯ V_r*

has diagonal T_r ⋯ T_1 y = x. As unitary conjugation preserves the spectrum, A has spectrum y and diagonal x. ∎

We can rephrase Schur's result by saying that for every y ∈ R^n,

    {M_x : x ≺ y} ⊇ D({U M_y U* : U ∈ U(n)}),

where M_x, M_y are the diagonal matrices with x and y on the diagonal, and D assigns to a matrix the diagonal matrix with the same diagonal. So if we conjugate M_y by a unitary matrix U, the resulting matrix still has spectrum y, while its diagonal is majorized by y. Horn proved the other inclusion, i.e. for every y ∈ R^n,

    {M_x : x ≺ y} ⊆ D({U M_y U* : U ∈ U(n)}).

So we can rephrase both theorems together as follows.

Theorem 3.2.3 (Schur-Horn Theorem). For every y ∈ R^n,

    {M_x : x ≺ y} = D({U M_y U* : U ∈ U(n)}).

Now we can prove (CT-6) as follows.

Proof of (CT-6). If a = (a1, ..., an) ∈ [0, 1]^n and Σ_{j=1}^n a_j = m, then a ≺ (1, ..., 1, 0, ..., 0) (with m ones). By the Schur-Horn Theorem, there exists a self-adjoint matrix P ∈ M_n(C) with diagonal a and eigenvalues (1, ..., 1, 0, ..., 0) (m ones). Since P is self-adjoint with eigenvalues 0 and 1, its minimal polynomial divides f(t) = t(1 − t), so P(I − P) = 0, i.e. P² = P. Thus P is an orthogonal projection of rank m with the prescribed diagonal. ∎

Bibliography

[1] Michael Artin, Algebra, 2nd ed., Pearson, 2010.
[2] Tony F. Heinz, Topological Properties of Orthostochastic Matrices, Linear Algebra Appl. 20 (1978).
[3] Alfred Horn, Doubly stochastic matrices and the diagonal of a rotation matrix, Amer. J. Math. 76 (1954).
[4] Richard V. Kadison, The Pythagorean Theorem: I. The finite case, Proc. Natl. Acad. Sci. USA 99 (2002).
[5] https://en.wikipedia.org
[6] https://ourspace.uregina.ca/bitstream/handle/10294/5791/Albayyadhi_Maram_200282615_MSC_MATH_201320.pdf?sequence=1&isAllowed=y
