A Generalization of the Havrda-Charvat and Tsallis Entropy and Its Axiomatic Characterization

Hindawi Publishing Corporation
Abstract and Applied Analysis, Volume 2014, Article ID 505184
http://dx.doi.org/10.1155/2014/505184

Research Article

A Generalization of the Havrda-Charvat and Tsallis Entropy and Its Axiomatic Characterization

Satish Kumar (Department of Mathematics, College of Natural Sciences, Arba Minch University, Arba Minch, Ethiopia) and Gurdas Ram (Department of Applied Sciences, Maharishi Markandeshwar University, Solan, Himachal Pradesh 173229, India)

Correspondence should be addressed to Satish Kumar; drsatish74@rediffmail.com

Received September 2013; Revised 20 December 2013; Accepted 20 December 2013; Published 19 February 2014

Academic Editor: Chengjian Zhang

Copyright © 2014 S. Kumar and G. Ram. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract. In this communication, we characterize a measure of information of types $\alpha$, $\beta$, and $\gamma$ by taking certain axioms parallel to those considered earlier by Havrda and Charvat, along with the recursive relation

$$H_n(p_1,\dots,p_n;\alpha,\beta,\gamma) - H_{n-1}(p_1+p_2,p_3,\dots,p_n;\alpha,\beta,\gamma) = \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)}-A_{(\beta,\gamma)}}(p_1+p_2)^{\alpha/\gamma}\, H_2\!\left(\frac{p_1}{p_1+p_2},\frac{p_2}{p_1+p_2};\alpha,\gamma\right) + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)}-A_{(\alpha,\gamma)}}(p_1+p_2)^{\beta/\gamma}\, H_2\!\left(\frac{p_1}{p_1+p_2},\frac{p_2}{p_1+p_2};\gamma,\beta\right),$$

with $\alpha \neq \gamma \neq \beta$ and $\alpha,\beta,\gamma > 0$. Some properties of this measure are also studied. This measure includes Shannon's information measure as a special case.

1. Introduction

Shannon's measure of entropy for a discrete probability distribution

$$P = (p_1,\dots,p_n), \quad p_i \ge 0, \quad \sum_{i=1}^{n} p_i = 1, \qquad (1)$$

given by

$$H(P) = -\sum_{i=1}^{n} p_i \log p_i, \qquad (2)$$

has been characterized in several ways (see Aczél and Daróczy [1]). Among the many characterizations, two elegant approaches are found in the work of (i) Faddeev [2], who takes as the basic postulate the branching property

$$H_n(p_1,\dots,p_n) = H_{n-1}(p_1+p_2,p_3,\dots,p_n) + (p_1+p_2)\, H_2\!\left(\frac{p_1}{p_1+p_2},\frac{p_2}{p_1+p_2}\right), \quad n = 3,4,\dots, \qquad (3)$$

for the above distribution $P$, and (ii) Chaundy and McLeod [3], who studied the functional equation

$$\sum_{i=1}^{n}\sum_{j=1}^{m} f(p_i q_j) = \sum_{i=1}^{n} f(p_i) + \sum_{j=1}^{m} f(q_j), \quad p_i \ge 0, \; q_j \ge 0. \qquad (4)$$

Both of the above approaches have been extensively exploited and generalized. The most general form of (4) has been studied by Sharma and Taneja [4], who considered the functional equation

$$\sum_{i=1}^{n}\sum_{j=1}^{m} f(p_i q_j) = \sum_{i=1}^{n}\sum_{j=1}^{m} f(p_i)\, g(q_j) + \sum_{i=1}^{n}\sum_{j=1}^{m} g(p_i)\, f(q_j), \quad \sum_{i=1}^{n} p_i = \sum_{j=1}^{m} q_j = 1, \; p_i \ge 0, \; q_j \ge 0. \qquad (5)$$

We define the information measure as

$$H_n(p_1,\dots,p_n;\alpha,\beta,\gamma) = \left(2^{(\gamma-\alpha)/\gamma} - 2^{(\gamma-\beta)/\gamma}\right)^{-1} \sum_{i=1}^{n}\left(p_i^{\alpha/\gamma} - p_i^{\beta/\gamma}\right), \quad \alpha \neq \gamma \neq \beta, \; \alpha,\beta,\gamma > 0, \qquad (6)$$

for a complete probability distribution $P = (p_1,\dots,p_n)$, $p_i \ge 0$, $\sum_{i=1}^{n} p_i = 1$. Measure (6) reduces to the entropy of type $\beta$ (or $\alpha$) when $\alpha = \gamma = 1$ (or $\beta = \gamma = 1$), given by

$$H_n(p_1,\dots,p_n;\beta) = \left(2^{1-\beta} - 1\right)^{-1}\left[\sum_{i=1}^{n} p_i^{\beta} - 1\right], \quad \beta \neq 1, \; \beta > 0. \qquad (7)$$

When $\beta \to 1$, measure (7) reduces to Shannon's entropy [5], namely,

$$H_n(p_1,\dots,p_n) = -\sum_{i=1}^{n} p_i \log_2 p_i. \qquad (8)$$

The measure (7) has been characterized by many authors through different approaches. Havrda and Charvát [6] characterized (7) by an axiomatic approach, and Daróczy [7] studied (7) by a functional equation. A joint characterization of the measures (7) and (8) has been given by the authors in two different ways: first by a generalized functional equation having four different functions, and second by an axiomatic approach. Later, Tsallis [8] gave applications of (7) in physics.

To characterize strongly interacting statistical systems, complex systems in particular, within a thermodynamical framework, it might be necessary to introduce generalized entropies. A series of such entropies have been proposed in the past, mainly to accommodate important empirical distribution functions under a maximum ignorance principle. The understanding of the fundamental origin of these entropies and of their deeper relation to complex systems is limited. Hanel and Thurner [9] explored this question from first principles: they observed that the fourth Khinchin axiom is violated by strongly interacting systems in general and, assuming only the first three Khinchin axioms, derived a unique entropy and classified the known entropies into equivalence classes.

For statistical systems that violate the four Shannon-Khinchin axioms, entropy takes a more general form than the Boltzmann-Gibbs entropy. The framework of superstatistics allows one to formulate a maximum entropy principle with these generalized entropies, making them useful for understanding the distribution functions of non-Markovian or nonergodic complex systems. For such systems, where the composability axiom is violated, there exist only two ways to implement the maximum entropy principle, one using escort probabilities and the other not, and the two ways are connected through a duality. Hanel, Thurner, and Gell-Mann [10] showed that this duality fixes a unique escort probability, derived a complete theory of the generalized logarithms, and gave the relationship between the functional forms of generalized logarithms and the asymptotic scaling behavior of the entropy.

Suyari [11] proposed a generalization of the Shannon-Khinchin axioms, which determines a class of entropies containing the well-known Tsallis and Havrda-Charvát entropies. Ilić, Stanković, and Mulalić [12] showed that the class of entropy functions determined by Suyari's axioms is wider than the one proposed by Suyari, and they generalized Suyari's axioms to characterize a recently introduced class of entropies obtained by averaging pseudoadditive information content.

In this communication, we characterize the measure (6) by taking certain axioms parallel to those considered earlier by Havrda and Charvát [6], along with the recursive relation (9). Some properties of this measure are also studied.
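To make the definition concrete, the following numerical sketch (Python with NumPy; the function name, parameter values, and test distribution are ours, purely for illustration) evaluates measure (6) and checks the two stated reductions: the type-$\beta$ entropy (7) at $\alpha = \gamma = 1$, and the Shannon limit (8) as $\beta \to 1$.

```python
import numpy as np

def H(p, alpha, beta, gamma):
    """Information measure (6) of types (alpha, beta, gamma)."""
    p = np.asarray(p, dtype=float)
    coeff = 2.0 ** ((gamma - alpha) / gamma) - 2.0 ** ((gamma - beta) / gamma)
    return np.sum(p ** (alpha / gamma) - p ** (beta / gamma)) / coeff

p = np.array([0.5, 0.3, 0.2])

# Reduction (7): alpha = gamma = 1 gives the entropy of type beta.
beta = 2.0
type_beta = (np.sum(p ** beta) - 1.0) / (2.0 ** (1.0 - beta) - 1.0)
print(np.isclose(H(p, 1.0, beta, 1.0), type_beta))      # True

# Shannon limit (8): with alpha = gamma = 1, beta -> 1 recovers -sum p log2 p.
# (6) is undefined at beta = gamma, so we evaluate just off the limit point.
shannon = -np.sum(p * np.log2(p))
print(np.isclose(H(p, 1.0, 1.0 + 1e-9, 1.0), shannon))  # True
```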
The measure (6) satisfies the following recursive relation:

$$H_n(p_1,\dots,p_n;\alpha,\beta,\gamma) - H_{n-1}(p_1+p_2,p_3,\dots,p_n;\alpha,\beta,\gamma) = \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)}-A_{(\beta,\gamma)}}(p_1+p_2)^{\alpha/\gamma}\, H_2\!\left(\frac{p_1}{p_1+p_2},\frac{p_2}{p_1+p_2};\alpha,\gamma\right) + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)}-A_{(\alpha,\gamma)}}(p_1+p_2)^{\beta/\gamma}\, H_2\!\left(\frac{p_1}{p_1+p_2},\frac{p_2}{p_1+p_2};\gamma,\beta\right), \quad \alpha \neq \gamma \neq \beta, \; \alpha,\beta,\gamma > 0, \qquad (9)$$

where $p_1 + p_2 > 0$, $A_{(\alpha,\gamma)} = 2^{(\gamma-\alpha)/\gamma} - 1$, and $A_{(\beta,\gamma)} = 2^{(\gamma-\beta)/\gamma} - 1$. Here the two auxiliary measures are

$$H_n(p_1,p_2,\dots,p_n;\alpha,\gamma) = A_{(\alpha,\gamma)}^{-1}\left[\sum_{i=1}^{n} p_i^{\alpha/\gamma} - 1\right], \quad \alpha \neq \gamma, \; \alpha,\gamma > 0, \qquad H_n(p_1,p_2,\dots,p_n;\gamma,\beta) = A_{(\beta,\gamma)}^{-1}\left[\sum_{i=1}^{n} p_i^{\beta/\gamma} - 1\right], \quad \beta \neq \gamma, \; \beta,\gamma > 0. \qquad (10)$$

Proof. Noting that $2^{(\gamma-\alpha)/\gamma} - 2^{(\gamma-\beta)/\gamma} = A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}$, we have

$$H_n(p_1,\dots,p_n;\alpha,\beta,\gamma) - H_{n-1}(p_1+p_2,p_3,\dots,p_n;\alpha,\beta,\gamma)$$

$$= \left(A_{(\alpha,\gamma)}-A_{(\beta,\gamma)}\right)^{-1}\left\{\left(p_1^{\alpha/\gamma}-p_1^{\beta/\gamma}\right) + \left(p_2^{\alpha/\gamma}-p_2^{\beta/\gamma}\right) + \cdots + \left(p_n^{\alpha/\gamma}-p_n^{\beta/\gamma}\right)\right\} - \left(A_{(\alpha,\gamma)}-A_{(\beta,\gamma)}\right)^{-1}\left\{\left((p_1+p_2)^{\alpha/\gamma}-(p_1+p_2)^{\beta/\gamma}\right) + \left(p_3^{\alpha/\gamma}-p_3^{\beta/\gamma}\right) + \cdots + \left(p_n^{\alpha/\gamma}-p_n^{\beta/\gamma}\right)\right\}$$

$$= \left(A_{(\alpha,\gamma)}-A_{(\beta,\gamma)}\right)^{-1}\left\{p_1^{\alpha/\gamma} + p_2^{\alpha/\gamma} - (p_1+p_2)^{\alpha/\gamma} - p_1^{\beta/\gamma} - p_2^{\beta/\gamma} + (p_1+p_2)^{\beta/\gamma}\right\}$$

$$= \left(A_{(\alpha,\gamma)}-A_{(\beta,\gamma)}\right)^{-1}(p_1+p_2)^{\alpha/\gamma}\left[\frac{p_1^{\alpha/\gamma}+p_2^{\alpha/\gamma}}{(p_1+p_2)^{\alpha/\gamma}} - 1\right] + \left(A_{(\alpha,\gamma)}-A_{(\beta,\gamma)}\right)^{-1}(p_1+p_2)^{\beta/\gamma}\left[1 - \frac{p_1^{\beta/\gamma}+p_2^{\beta/\gamma}}{(p_1+p_2)^{\beta/\gamma}}\right]$$

$$= \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)}-A_{(\beta,\gamma)}}(p_1+p_2)^{\alpha/\gamma}\, H_2\!\left(\frac{p_1}{p_1+p_2},\frac{p_2}{p_1+p_2};\alpha,\gamma\right) + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)}-A_{(\alpha,\gamma)}}(p_1+p_2)^{\beta/\gamma}\, H_2\!\left(\frac{p_1}{p_1+p_2},\frac{p_2}{p_1+p_2};\gamma,\beta\right), \qquad (11)$$

where the last step uses the definitions (10). This proves (9).
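The recursive relation (9) is easy to check numerically. A minimal sketch (arbitrary parameter values with $\alpha \neq \gamma \neq \beta$; the helper names are ours), using the measures defined in (6) and (10):

```python
import numpy as np

A = lambda x, g: 2.0 ** ((g - x) / g) - 1.0        # A_(x,gamma)

def H_abg(p, a, b, g):                              # measure (6)
    p = np.asarray(p, dtype=float)
    return np.sum(p**(a/g) - p**(b/g)) / (A(a, g) - A(b, g))

def H_ag(p, a, g):                                  # first measure in (10)
    return (np.sum(np.asarray(p)**(a/g)) - 1.0) / A(a, g)

def H_gb(p, b, g):                                  # second measure in (10)
    return (np.sum(np.asarray(p)**(b/g)) - 1.0) / A(b, g)

a, b, g = 2.0, 0.5, 1.3
p = np.array([0.1, 0.2, 0.3, 0.4])
s = p[0] + p[1]                                     # merged cell p1 + p2
w = np.array([p[0], p[1]]) / s                      # conditional distribution

lhs = H_abg(p, a, b, g) - H_abg([s, p[2], p[3]], a, b, g)
rhs = (A(a, g) / (A(a, g) - A(b, g)) * s**(a/g) * H_ag(w, a, g)
       + A(b, g) / (A(b, g) - A(a, g)) * s**(b/g) * H_gb(w, b, g))
print(np.isclose(lhs, rhs))                         # True
```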
2. Set of Axioms

For characterizing a measure of information of types $\alpha$, $\beta$, and $\gamma$ associated with a probability distribution $P = (p_1,\dots,p_n)$, $p_i \ge 0$, $\sum_{i=1}^{n} p_i = 1$, we introduce the following axioms:

(1) $H_n(p_1,\dots,p_n;\alpha,\beta,\gamma)$ is continuous in the region

$$p_i \ge 0, \quad \sum_{i=1}^{n} p_i = 1, \quad \alpha,\beta,\gamma > 0; \qquad (12)$$

(2) $H_2(1,0;\alpha,\beta,\gamma) = 0$;

(3) $H_2(1/2,1/2;\alpha,\beta,\gamma) = 1$, $\alpha,\beta,\gamma > 0$;

(4) for every $i = 1,2,\dots,n$,

$$H_n(p_1,\dots,p_{i-1},0,p_{i+1},\dots,p_n;\alpha,\beta,\gamma) = H_{n-1}(p_1,\dots,p_{i-1},p_{i+1},\dots,p_n;\alpha,\beta,\gamma); \qquad (13)$$

(5) for every $v_{i1} + v_{i2} = p_i > 0$, $i = 1,\dots,n$,

$$H_{n+1}(p_1,\dots,p_{i-1},v_{i1},v_{i2},p_{i+1},\dots,p_n;\alpha,\beta,\gamma) - H_n(p_1,\dots,p_{i-1},p_i,p_{i+1},\dots,p_n;\alpha,\beta,\gamma) = \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)}-A_{(\beta,\gamma)}}\, p_i^{\alpha/\gamma}\, H_2\!\left(\frac{v_{i1}}{p_i},\frac{v_{i2}}{p_i};\alpha,\gamma\right) + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)}-A_{(\alpha,\gamma)}}\, p_i^{\beta/\gamma}\, H_2\!\left(\frac{v_{i1}}{p_i},\frac{v_{i2}}{p_i};\gamma,\beta\right), \quad \alpha \neq \gamma \neq \beta, \; \alpha,\beta,\gamma > 0, \qquad (14)$$

where $A_{(\alpha,\gamma)} = 2^{(\gamma-\alpha)/\gamma} - 1$ and $A_{(\beta,\gamma)} = 2^{(\gamma-\beta)/\gamma} - 1$, $\alpha \neq \gamma \neq \beta$.

Theorem 1. If $\alpha \neq \gamma \neq \beta$ and $\alpha,\beta,\gamma > 0$, then the axioms (1)–(5) determine a measure given by

$$H_n(p_1,\dots,p_n;\alpha,\beta,\gamma) = \left(A_{(\alpha,\gamma)}-A_{(\beta,\gamma)}\right)^{-1}\sum_{i=1}^{n}\left(p_i^{\alpha/\gamma}-p_i^{\beta/\gamma}\right), \quad \alpha \neq \gamma \neq \beta, \; \alpha,\beta,\gamma > 0, \qquad (15)$$

where $A_{(\alpha,\gamma)} = 2^{(\gamma-\alpha)/\gamma} - 1$ and $A_{(\beta,\gamma)} = 2^{(\gamma-\beta)/\gamma} - 1$.

Before proving the theorem, we prove some intermediate results based on the above axioms.

Lemma 1. If $v_k \ge 0$, $k = 1,2,\dots,m$, and $\sum_{k=1}^{m} v_k = p_i > 0$, then

$$H_{n+m-1}(p_1,\dots,p_{i-1},v_1,\dots,v_m,p_{i+1},\dots,p_n;\alpha,\beta,\gamma) = H_n(p_1,\dots,p_n;\alpha,\beta,\gamma) + \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)}-A_{(\beta,\gamma)}}\, p_i^{\alpha/\gamma}\, H_m\!\left(\frac{v_1}{p_i},\dots,\frac{v_m}{p_i};\alpha,\gamma\right) + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)}-A_{(\alpha,\gamma)}}\, p_i^{\beta/\gamma}\, H_m\!\left(\frac{v_1}{p_i},\dots,\frac{v_m}{p_i};\gamma,\beta\right). \qquad (16)$$

Proof. We proceed by induction on $m$. For $m = 2$ the desired statement is precisely axiom (5). Suppose the result is true for all numbers of parts less than or equal to $m$; we prove it for $m+1$. Writing $L = v_2 + \cdots + v_{m+1}$, we have

$$H_{n+m}(p_1,\dots,p_{i-1},v_1,\dots,v_{m+1},p_{i+1},\dots,p_n;\alpha,\beta,\gamma)$$

$$= H_{n+1}(p_1,\dots,p_{i-1},v_1,L,p_{i+1},\dots,p_n;\alpha,\beta,\gamma) + \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)}-A_{(\beta,\gamma)}}\, L^{\alpha/\gamma}\, H_m\!\left(\frac{v_2}{L},\dots,\frac{v_{m+1}}{L};\alpha,\gamma\right) + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)}-A_{(\alpha,\gamma)}}\, L^{\beta/\gamma}\, H_m\!\left(\frac{v_2}{L},\dots,\frac{v_{m+1}}{L};\gamma,\beta\right)$$

$$= H_n(p_1,\dots,p_n;\alpha,\beta,\gamma) + \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)}-A_{(\beta,\gamma)}}\left\{p_i^{\alpha/\gamma}\, H_2\!\left(\frac{v_1}{p_i},\frac{L}{p_i};\alpha,\gamma\right) + L^{\alpha/\gamma}\, H_m\!\left(\frac{v_2}{L},\dots,\frac{v_{m+1}}{L};\alpha,\gamma\right)\right\} + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)}-A_{(\alpha,\gamma)}}\left\{p_i^{\beta/\gamma}\, H_2\!\left(\frac{v_1}{p_i},\frac{L}{p_i};\gamma,\beta\right) + L^{\beta/\gamma}\, H_m\!\left(\frac{v_2}{L},\dots,\frac{v_{m+1}}{L};\gamma,\beta\right)\right\}, \qquad (17)$$

where $p_i = v_1 + L > 0$. One more application of the induction premise yields

$$H_{m+1}\!\left(\frac{v_1}{p_i},\dots,\frac{v_{m+1}}{p_i};\alpha,\beta,\gamma\right) = H_2\!\left(\frac{v_1}{p_i},\frac{L}{p_i};\alpha,\beta,\gamma\right) + \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)}-A_{(\beta,\gamma)}}\left(\frac{L}{p_i}\right)^{\alpha/\gamma} H_m\!\left(\frac{v_2}{L},\dots,\frac{v_{m+1}}{L};\alpha,\gamma\right) + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)}-A_{(\alpha,\gamma)}}\left(\frac{L}{p_i}\right)^{\beta/\gamma} H_m\!\left(\frac{v_2}{L},\dots,\frac{v_{m+1}}{L};\gamma,\beta\right). \qquad (18)$$

For $\beta = \gamma$, (18) reduces to

$$H_{m+1}\!\left(\frac{v_1}{p_i},\dots,\frac{v_{m+1}}{p_i};\alpha,\gamma\right) = H_2\!\left(\frac{v_1}{p_i},\frac{L}{p_i};\alpha,\gamma\right) + \left(\frac{L}{p_i}\right)^{\alpha/\gamma} H_m\!\left(\frac{v_2}{L},\dots,\frac{v_{m+1}}{L};\alpha,\gamma\right). \qquad (19)$$

Similarly, for $\alpha = \gamma$, (18) reduces to

$$H_{m+1}\!\left(\frac{v_1}{p_i},\dots,\frac{v_{m+1}}{p_i};\gamma,\beta\right) = H_2\!\left(\frac{v_1}{p_i},\frac{L}{p_i};\gamma,\beta\right) + \left(\frac{L}{p_i}\right)^{\beta/\gamma} H_m\!\left(\frac{v_2}{L},\dots,\frac{v_{m+1}}{L};\gamma,\beta\right). \qquad (20)$$

Expression (17) together with (19) and (20) gives the desired result.
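Lemma 1 can likewise be spot-checked numerically. A short self-contained sketch (our own splitting of one cell into $m = 3$ parts; all values are arbitrary):

```python
import numpy as np

A = lambda x, g: 2.0 ** ((g - x) / g) - 1.0

def H_abg(p, a, b, g):                 # measure (15)
    p = np.asarray(p, dtype=float)
    return np.sum(p**(a/g) - p**(b/g)) / (A(a, g) - A(b, g))

def H_ag(p, a, g):                     # auxiliary measure H(.; alpha, gamma)
    return (np.sum(np.asarray(p)**(a/g)) - 1.0) / A(a, g)

def H_gb(p, b, g):                     # auxiliary measure H(.; gamma, beta)
    return (np.sum(np.asarray(p)**(b/g)) - 1.0) / A(b, g)

a, b, g = 1.8, 0.7, 1.1
p = np.array([0.2, 0.5, 0.3])          # split p_2 = 0.5 into m = 3 parts
v = np.array([0.1, 0.15, 0.25])
refined = np.array([p[0], *v, p[2]])

lhs = H_abg(refined, a, b, g)
rhs = (H_abg(p, a, b, g)
       + A(a, g) / (A(a, g) - A(b, g)) * p[1]**(a/g) * H_ag(v / p[1], a, g)
       + A(b, g) / (A(b, g) - A(a, g)) * p[1]**(b/g) * H_gb(v / p[1], b, g))
print(np.isclose(lhs, rhs))            # True, as asserted by (16)
```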
Lemma 2. If $v_{ij} \ge 0$, $j = 1,2,\dots,m_i$, $\sum_{j=1}^{m_i} v_{ij} = p_i > 0$ for $i = 1,2,\dots,n$, and $\sum_{i=1}^{n} p_i = 1$, then

$$H_{m_1+\cdots+m_n}(v_{11},v_{12},\dots,v_{1m_1};\dots;v_{n1},v_{n2},\dots,v_{nm_n};\alpha,\beta,\gamma) = H_n(p_1,p_2,\dots,p_n;\alpha,\beta,\gamma) + \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)}-A_{(\beta,\gamma)}}\sum_{i=1}^{n} p_i^{\alpha/\gamma}\, H_{m_i}\!\left(\frac{v_{i1}}{p_i},\dots,\frac{v_{im_i}}{p_i};\alpha,\gamma\right) + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)}-A_{(\alpha,\gamma)}}\sum_{i=1}^{n} p_i^{\beta/\gamma}\, H_{m_i}\!\left(\frac{v_{i1}}{p_i},\dots,\frac{v_{im_i}}{p_i};\gamma,\beta\right). \qquad (21)$$

Proof. The lemma follows directly from repeated application of Lemma 1.

Lemma 3. If $F(n;\alpha,\beta,\gamma) = H_n(1/n,\dots,1/n;\alpha,\beta,\gamma)$, then

$$F(n;\alpha,\beta,\gamma) = \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)}-A_{(\beta,\gamma)}}\, F(n;\alpha,\gamma) + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)}-A_{(\alpha,\gamma)}}\, F(n;\gamma,\beta), \qquad (22)$$

where

$$F(n;\alpha,\gamma) = A_{(\alpha,\gamma)}^{-1}\left(n^{(\gamma-\alpha)/\gamma}-1\right), \quad \alpha \neq \gamma, \qquad F(n;\gamma,\beta) = A_{(\beta,\gamma)}^{-1}\left(n^{(\gamma-\beta)/\gamma}-1\right), \quad \beta \neq \gamma. \qquad (23)$$

Proof. In Lemma 2, replace $n$ by $m$, take $m_i = n$ for each $i$, and put $v_{ij} = 1/mn$ ($i = 1,2,\dots,m$, $j = 1,2,\dots,n$), where $m$ and $n$ are positive integers; since each $p_i = 1/m$ and $\sum_{i=1}^{m}(1/m)^{\alpha/\gamma} = (1/m)^{(\alpha-\gamma)/\gamma}$, we have

$$F(mn;\alpha,\beta,\gamma) = F(m;\alpha,\beta,\gamma) + \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)}-A_{(\beta,\gamma)}}\left(\frac{1}{m}\right)^{(\alpha-\gamma)/\gamma} F(n;\alpha,\gamma) + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)}-A_{(\alpha,\gamma)}}\left(\frac{1}{m}\right)^{(\beta-\gamma)/\gamma} F(n;\gamma,\beta). \qquad (24)$$

Similarly, interchanging the roles of $m$ and $n$,

$$F(mn;\alpha,\beta,\gamma) = F(n;\alpha,\beta,\gamma) + \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)}-A_{(\beta,\gamma)}}\left(\frac{1}{n}\right)^{(\alpha-\gamma)/\gamma} F(m;\alpha,\gamma) + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)}-A_{(\alpha,\gamma)}}\left(\frac{1}{n}\right)^{(\beta-\gamma)/\gamma} F(m;\gamma,\beta). \qquad (25)$$

Putting $m = 1$ in (24) and using $F(1;\alpha,\beta,\gamma) = 0$ (which follows from axioms (2) and (4)), we get

$$F(n;\alpha,\beta,\gamma) = \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)}-A_{(\beta,\gamma)}}\, F(n;\alpha,\gamma) + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)}-A_{(\alpha,\gamma)}}\, F(n;\gamma,\beta), \qquad (26)$$

which is (22). Comparing the right-hand sides of (24) and (25), we get

$$F(m;\alpha,\beta,\gamma) + \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)}-A_{(\beta,\gamma)}}\, m^{(\gamma-\alpha)/\gamma} F(n;\alpha,\gamma) + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)}-A_{(\alpha,\gamma)}}\, m^{(\gamma-\beta)/\gamma} F(n;\gamma,\beta) = F(n;\alpha,\beta,\gamma) + \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)}-A_{(\beta,\gamma)}}\, n^{(\gamma-\alpha)/\gamma} F(m;\alpha,\gamma) + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)}-A_{(\alpha,\gamma)}}\, n^{(\gamma-\beta)/\gamma} F(m;\gamma,\beta). \qquad (27)$$

Equation (27) together with (22) gives

$$\frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)}-A_{(\beta,\gamma)}}\left\{\left(1-n^{(\gamma-\alpha)/\gamma}\right)F(m;\alpha,\gamma) - \left(1-m^{(\gamma-\alpha)/\gamma}\right)F(n;\alpha,\gamma)\right\} + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)}-A_{(\alpha,\gamma)}}\left\{\left(1-n^{(\gamma-\beta)/\gamma}\right)F(m;\gamma,\beta) - \left(1-m^{(\gamma-\beta)/\gamma}\right)F(n;\gamma,\beta)\right\} = 0. \qquad (28)$$

Putting $n = 2$ in (28), using $F(2;\alpha,\gamma) = F(2;\gamma,\beta) = 1$ (cf. the normalization of axiom (3)), and multiplying through by $A_{(\alpha,\gamma)} - A_{(\beta,\gamma)}$, we get

$$A_{(\alpha,\gamma)}\left\{\left(1-2^{1-\alpha/\gamma}\right)F(m;\alpha,\gamma) - \left(1-m^{1-\alpha/\gamma}\right)\right\} = A_{(\beta,\gamma)}\left\{\left(1-2^{1-\beta/\gamma}\right)F(m;\gamma,\beta) - \left(1-m^{1-\beta/\gamma}\right)\right\} = C \text{ (say)}, \qquad (29)$$

where the first expression does not involve $\beta$ and the second does not involve $\alpha$, so each equals a quantity $C$ independent of $\alpha$ and $\beta$. For $m = 1$, we get $C = 0$. Thus we have

$$F(m;\alpha,\gamma) = \frac{1-m^{1-\alpha/\gamma}}{1-2^{1-\alpha/\gamma}} = A_{(\alpha,\gamma)}^{-1}\left(m^{1-\alpha/\gamma}-1\right), \quad \alpha \neq \gamma, \qquad (30)$$

and similarly

$$F(m;\gamma,\beta) = \frac{1-m^{1-\beta/\gamma}}{1-2^{1-\beta/\gamma}} = A_{(\beta,\gamma)}^{-1}\left(m^{1-\beta/\gamma}-1\right), \quad \beta \neq \gamma, \qquad (31)$$

which is (23), since $m^{1-\alpha/\gamma} = m^{(\gamma-\alpha)/\gamma}$. Now (22) together with (23) gives

$$F(n;\alpha,\beta,\gamma) = \left(A_{(\alpha,\gamma)}-A_{(\beta,\gamma)}\right)^{-1}\left(n^{1-\alpha/\gamma}-n^{1-\beta/\gamma}\right). \qquad (32)$$

3. Proof of the Theorem

We prove the theorem for rational $p_i$; the continuity axiom (1) then extends the result to all reals. Let $m$ and $r_1,\dots,r_n$ be positive integers such that $\sum_{i=1}^{n} r_i = m$, and put $p_i = r_i/m$, $i = 1,2,\dots,n$. Splitting the $i$th cell of $P = (p_1,\dots,p_n)$ into $r_i$ equal parts $1/m$ and noting that $H_{r_i}(1/r_i,\dots,1/r_i;\alpha,\gamma) = F(r_i;\alpha,\gamma)$, an application of Lemma 2 gives

$$F(m;\alpha,\beta,\gamma) = H_n(p_1,p_2,\dots,p_n;\alpha,\beta,\gamma) + \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)}-A_{(\beta,\gamma)}}\sum_{i=1}^{n} p_i^{\alpha/\gamma}\, F(r_i;\alpha,\gamma) + \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)}-A_{(\alpha,\gamma)}}\sum_{i=1}^{n} p_i^{\beta/\gamma}\, F(r_i;\gamma,\beta). \qquad (33)$$

That is,

$$H_n(p_1,\dots,p_n;\alpha,\beta,\gamma) = F(m;\alpha,\beta,\gamma) - \frac{A_{(\alpha,\gamma)}}{A_{(\alpha,\gamma)}-A_{(\beta,\gamma)}}\sum_{i=1}^{n} p_i^{\alpha/\gamma}\, F(r_i;\alpha,\gamma) - \frac{A_{(\beta,\gamma)}}{A_{(\beta,\gamma)}-A_{(\alpha,\gamma)}}\sum_{i=1}^{n} p_i^{\beta/\gamma}\, F(r_i;\gamma,\beta). \qquad (34)$$

Equation (34) together with (23) and (32) gives

$$H_n(p_1,\dots,p_n;\alpha,\beta,\gamma) = \left(A_{(\alpha,\gamma)}-A_{(\beta,\gamma)}\right)^{-1}\sum_{i=1}^{n}\left(p_i^{\alpha/\gamma}-p_i^{\beta/\gamma}\right), \qquad (35)$$

which is (15). This completes the proof of the theorem.
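The closed form (32) for the uniform distribution is easy to confirm. A short sketch (assumed parameter values with $\alpha \neq \gamma \neq \beta$):

```python
import numpy as np

a, b, g = 1.7, 0.6, 1.2
A = lambda x: 2.0 ** ((g - x) / g) - 1.0

def H(p):                                  # measure (15)
    p = np.asarray(p, dtype=float)
    return np.sum(p**(a/g) - p**(b/g)) / (A(a) - A(b))

for n in (2, 3, 5, 8):
    F_direct = H(np.full(n, 1.0 / n))      # F(n; alpha, beta, gamma)
    F_closed = (n**(1 - a/g) - n**(1 - b/g)) / (A(a) - A(b))   # (32)
    print(n, np.isclose(F_direct, F_closed))                   # all True
```

For $n = 2$ both sides equal 1, consistent with the normalization axiom (3).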
4. Properties of Entropy of Types α, β, and γ

The measure $H_n(P;\alpha,\beta,\gamma)$, where $P = (p_1,\dots,p_n)$, $p_i \ge 0$, $\sum_{i=1}^{n} p_i = 1$, is a probability distribution, characterized in the preceding section, satisfies certain properties, which are given in the following theorems.

Theorem 2. The measure $H_n(P;\alpha,\beta,\gamma)$ is nonnegative for $\alpha \neq \gamma \neq \beta$, $\alpha,\beta,\gamma > 0$.

Proof. Case 1. Let $\alpha > \gamma$ and $\beta < \gamma$, so that $\alpha/\gamma > 1$ and $\beta/\gamma < 1$. Then

$$\sum_{i=1}^{n} p_i^{\alpha/\gamma} \le 1, \qquad \sum_{i=1}^{n} p_i^{\beta/\gamma} \ge 1, \qquad (36)$$

which implies

$$\sum_{i=1}^{n}\left(p_i^{\alpha/\gamma} - p_i^{\beta/\gamma}\right) \le 0. \qquad (37)$$

Since $2^{(\gamma-\alpha)/\gamma} - 2^{(\gamma-\beta)/\gamma} < 0$ as well, we get

$$\left(2^{(\gamma-\alpha)/\gamma} - 2^{(\gamma-\beta)/\gamma}\right)^{-1}\sum_{i=1}^{n}\left(p_i^{\alpha/\gamma} - p_i^{\beta/\gamma}\right) \ge 0. \qquad (38)$$

Case 2. Similarly, for $\alpha < \gamma$ and $\beta > \gamma$, we get

$$\left(2^{(\gamma-\alpha)/\gamma} - 2^{(\gamma-\beta)/\gamma}\right)^{-1}\sum_{i=1}^{n}\left(p_i^{\alpha/\gamma} - p_i^{\beta/\gamma}\right) \ge 0. \qquad (39)$$

Therefore, from Case 1, Case 2, and axiom (2), we get $H_n(P;\alpha,\beta,\gamma) \ge 0$. This completes the proof of the theorem.

Definition 1. We will use the following definition of a convex function. A function $f(\cdot)$ over the points in a convex set $R$ is convex $\cap$ if, for all $r_1, r_2 \in R$ and $\mu \in (0,1)$,

$$\mu f(r_1) + (1-\mu) f(r_2) \le f(\mu r_1 + (1-\mu) r_2). \qquad (40)$$

The function $f(\cdot)$ is convex $\cup$ if (40) holds with $\ge$ in place of $\le$.

Theorem 3. The measure $H_n(P;\alpha,\beta,\gamma)$ is a convex $\cap$ function of the probability distribution $P = (p_1,\dots,p_n)$, $p_i \ge 0$, $\sum_{i=1}^{n} p_i = 1$, when either $\alpha > \gamma$ and $\beta \le \gamma$ or $\beta > \gamma$ and $\alpha \le \gamma$.

Proof. Let there be $r$ distributions

$$P_k(X) = \{p_k(x_1),\dots,p_k(x_n)\}, \quad \sum_{i=1}^{n} p_k(x_i) = 1, \quad k = 1,2,\dots,r, \qquad (41)$$

associated with the random variable $X = (x_1,\dots,x_n)$. Consider $r$ numbers $(a_1,\dots,a_r)$ such that $a_k \ge 0$ and $\sum_{k=1}^{r} a_k = 1$, and define

$$P_0(X) = \{p_0(x_1),\dots,p_0(x_n)\}, \qquad (42)$$

where

$$p_0(x_i) = \sum_{k=1}^{r} a_k\, p_k(x_i), \quad i = 1,2,\dots,n. \qquad (43)$$

Obviously, $\sum_{i=1}^{n} p_0(x_i) = 1$, and thus $P_0(X)$ is a bona fide distribution of $X$. Let $\alpha > \gamma$ and $0 < \beta \le \gamma$; then

$$\sum_{k=1}^{r} a_k H_n(P_k;\alpha,\beta,\gamma) - H_n(P_0;\alpha,\beta,\gamma) = \left(A_{(\alpha,\gamma)}-A_{(\beta,\gamma)}\right)^{-1}\sum_{i=1}^{n}\left\{\left[\sum_{k=1}^{r} a_k\, p_k(x_i)^{\alpha/\gamma} - \left(\sum_{k=1}^{r} a_k\, p_k(x_i)\right)^{\alpha/\gamma}\right] - \left[\sum_{k=1}^{r} a_k\, p_k(x_i)^{\beta/\gamma} - \left(\sum_{k=1}^{r} a_k\, p_k(x_i)\right)^{\beta/\gamma}\right]\right\} \le 0, \qquad (44)$$

by Jensen's inequality: for $\alpha > \gamma$ the function $x \mapsto x^{\alpha/\gamma}$ is convex $\cup$ and for $\beta \le \gamma$ the function $x \mapsto x^{\beta/\gamma}$ is convex $\cap$, so the expression in braces is nonnegative, while $A_{(\alpha,\gamma)} - A_{(\beta,\gamma)} < 0$. That is, $\sum_{k=1}^{r} a_k H_n(P_k;\alpha,\beta,\gamma) \le H_n(P_0;\alpha,\beta,\gamma)$ for $\alpha > \gamma$, $0 < \beta \le \gamma$. By symmetry in $\alpha$, $\beta$, and $\gamma$, the above result is also true for $\beta > \gamma$ and $0 < \alpha \le \gamma$.
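The concavity asserted in Theorem 3 can be probed with random mixtures. A sketch (one admissible parameter choice, $\alpha > \gamma$ and $\beta \le \gamma$; the small tolerance guards against floating-point noise):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, g = 2.0, 0.8, 1.0          # alpha > gamma, beta <= gamma (first case)
A = lambda x: 2.0 ** ((g - x) / g) - 1.0

def H(p):
    return np.sum(p**(a/g) - p**(b/g)) / (A(a) - A(b))

def random_dist(n):
    p = rng.random(n)
    return p / p.sum()

# The entropy of a mixture should dominate the mixture of entropies (convex cap).
ok = True
for _ in range(1000):
    P, Q = random_dist(4), random_dist(4)
    mu = rng.random()
    ok &= mu * H(P) + (1 - mu) * H(Q) <= H(mu * P + (1 - mu) * Q) + 1e-12
print(ok)                         # True
```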
Theorem 4. The measure $H_n(P;\alpha,\beta,\gamma)$ satisfies the following relations.

(i) Generalized additivity:

$$H_{nm}(P*Q;\alpha,\beta,\gamma) = G_n(P;\alpha,\beta,\gamma)\, H_m(Q;\alpha,\beta,\gamma) + G_m(Q;\alpha,\beta,\gamma)\, H_n(P;\alpha,\beta,\gamma), \quad \alpha,\beta,\gamma > 0, \qquad (45)$$

where

$$G_n(P;\alpha,\beta,\gamma) = \frac{1}{2}\sum_{i=1}^{n}\left(p_i^{\alpha/\gamma} + p_i^{\beta/\gamma}\right), \quad \alpha,\beta,\gamma > 0. \qquad (46)$$

(ii) Subadditivity: for $\alpha, \beta > \gamma$, the measure $H_n(P;\alpha,\beta,\gamma)$ is subadditive; that is,

$$H_{nm}(P*Q;\alpha,\beta,\gamma) \le H_n(P;\alpha,\beta,\gamma) + H_m(Q;\alpha,\beta,\gamma), \qquad (47)$$

where $P = (p_1,\dots,p_n)$ and $Q = (q_1,\dots,q_m)$ are complete probability distributions and

$$P*Q = (p_1 q_1,\dots,p_1 q_m,\dots,p_n q_1,\dots,p_n q_m). \qquad (48)$$

Proof of (i). Adding and subtracting $p_i^{\alpha/\gamma} q_j^{\beta/\gamma}$ inside the double sum, we have

$$H_{nm}(P*Q;\alpha,\beta,\gamma) = \left(A_{(\alpha,\gamma)}-A_{(\beta,\gamma)}\right)^{-1}\sum_{i=1}^{n}\sum_{j=1}^{m}\left[(p_i q_j)^{\alpha/\gamma} - (p_i q_j)^{\beta/\gamma}\right] = \left(A_{(\alpha,\gamma)}-A_{(\beta,\gamma)}\right)^{-1}\sum_{i=1}^{n}\sum_{j=1}^{m}\left[p_i^{\alpha/\gamma}\left(q_j^{\alpha/\gamma}+q_j^{\beta/\gamma}\right) - q_j^{\beta/\gamma}\left(p_i^{\alpha/\gamma}+p_i^{\beta/\gamma}\right)\right] = \left(A_{(\alpha,\gamma)}-A_{(\beta,\gamma)}\right)^{-1}\left[\sum_{i=1}^{n} p_i^{\alpha/\gamma}\sum_{j=1}^{m}\left(q_j^{\alpha/\gamma}+q_j^{\beta/\gamma}\right) - \sum_{j=1}^{m} q_j^{\beta/\gamma}\sum_{i=1}^{n}\left(p_i^{\alpha/\gamma}+p_i^{\beta/\gamma}\right)\right]. \qquad (49)$$

Also, adding and subtracting $p_i^{\beta/\gamma} q_j^{\alpha/\gamma}$ instead,

$$H_{nm}(P*Q;\alpha,\beta,\gamma) = \left(A_{(\alpha,\gamma)}-A_{(\beta,\gamma)}\right)^{-1}\sum_{i=1}^{n}\sum_{j=1}^{m}\left[q_j^{\alpha/\gamma}\left(p_i^{\alpha/\gamma}+p_i^{\beta/\gamma}\right) - p_i^{\beta/\gamma}\left(q_j^{\alpha/\gamma}+q_j^{\beta/\gamma}\right)\right] = \left(A_{(\alpha,\gamma)}-A_{(\beta,\gamma)}\right)^{-1}\left[\sum_{j=1}^{m} q_j^{\alpha/\gamma}\sum_{i=1}^{n}\left(p_i^{\alpha/\gamma}+p_i^{\beta/\gamma}\right) - \sum_{i=1}^{n} p_i^{\beta/\gamma}\sum_{j=1}^{m}\left(q_j^{\alpha/\gamma}+q_j^{\beta/\gamma}\right)\right]. \qquad (50)$$

Adding (49) and (50), we get

$$2 H_{nm}(P*Q;\alpha,\beta,\gamma) = \sum_{i=1}^{n}\left(p_i^{\alpha/\gamma}+p_i^{\beta/\gamma}\right)\left(A_{(\alpha,\gamma)}-A_{(\beta,\gamma)}\right)^{-1}\sum_{j=1}^{m}\left(q_j^{\alpha/\gamma}-q_j^{\beta/\gamma}\right) + \sum_{j=1}^{m}\left(q_j^{\alpha/\gamma}+q_j^{\beta/\gamma}\right)\left(A_{(\alpha,\gamma)}-A_{(\beta,\gamma)}\right)^{-1}\sum_{i=1}^{n}\left(p_i^{\alpha/\gamma}-p_i^{\beta/\gamma}\right). \qquad (51)$$

Dividing by 2 and using (46), we obtain

$$H_{nm}(P*Q;\alpha,\beta,\gamma) = G_n(P;\alpha,\beta,\gamma)\, H_m(Q;\alpha,\beta,\gamma) + G_m(Q;\alpha,\beta,\gamma)\, H_n(P;\alpha,\beta,\gamma), \qquad (52)$$

which is (45). This completes the proof of part (i).

Proof of (ii). From part (i), we have

$$H_{nm}(P*Q;\alpha,\beta,\gamma) = G_n(P;\alpha,\beta,\gamma)\, H_m(Q;\alpha,\beta,\gamma) + G_m(Q;\alpha,\beta,\gamma)\, H_n(P;\alpha,\beta,\gamma). \qquad (53)$$

As $G_n(P;\alpha,\beta,\gamma) = \frac{1}{2}\sum_{i=1}^{n}(p_i^{\alpha/\gamma}+p_i^{\beta/\gamma}) \le 1$ for $\alpha,\beta \ge \gamma$ (since $p_i^{\alpha/\gamma} \le p_i$ and $p_i^{\beta/\gamma} \le p_i$), and the measures are nonnegative,

$$H_{nm}(P*Q;\alpha,\beta,\gamma) \le H_m(Q;\alpha,\beta,\gamma) + H_n(P;\alpha,\beta,\gamma). \qquad (54)$$

This proves the subadditivity.
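Both parts of Theorem 4 are easy to confirm numerically. A sketch (assumed distributions and parameters, with $\alpha, \beta > \gamma$ so that the subadditivity hypothesis holds):

```python
import numpy as np

a, b, g = 2.0, 1.5, 1.0          # alpha, beta > gamma
A = lambda x: 2.0 ** ((g - x) / g) - 1.0

def H(p):
    p = np.asarray(p, dtype=float)
    return np.sum(p**(a/g) - p**(b/g)) / (A(a) - A(b))

def G(p):                         # the factor defined in (46)
    p = np.asarray(p, dtype=float)
    return 0.5 * np.sum(p**(a/g) + p**(b/g))

P = np.array([0.5, 0.3, 0.2])
Q = np.array([0.6, 0.4])
PQ = np.outer(P, Q).ravel()       # the product distribution P * Q of (48)

print(np.isclose(H(PQ), G(P) * H(Q) + G(Q) * H(P)))  # True: additivity (45)
print(H(PQ) <= H(P) + H(Q))                          # True: subadditivity (47)
```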
5. Conclusion

In addition to the well-known information measures of Shannon, Rényi, Havrda-Charvát, Vajda [13], and Daróczy, we have characterized a measure which we call the information measure of types α, β, and γ. We have given some basic axioms and properties together with a recursive relation. Shannon's measure [5] is included in this information measure in the limiting cases α = γ = 1, β → 1 and β = γ = 1, α → 1. This measure is a generalization of the Havrda-Charvát entropy.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] J. Aczél and Z. Daróczy, On Measures of Information and Their Characterization, Academic Press, New York, NY, USA, 1975.
[2] D. K. Faddeev, "On the concept of entropy of a finite probabilistic scheme," Uspekhi Matematicheskikh Nauk, vol. 11, no. 1(67), pp. 227–231, 1956.
[3] T. W. Chaundy and J. B. McLeod, "On a functional equation," Proceedings of the Edinburgh Mathematical Society, Series II, vol. 12, no. 43, pp. 6–7, 1960.
[4] B. D. Sharma and I. J. Taneja, "Functional measures in information theory," Funkcialaj Ekvacioj, vol. 17, pp. 181–191, 1974.
[5] C. E. Shannon, "A mathematical theory of communication," The Bell System Technical Journal, vol. 27, pp. 379–423, 623–656, 1948.
[6] J. Havrda and F. Charvát, "Quantification method of classification processes. Concept of structural α-entropy," Kybernetika, vol. 3, pp. 30–35, 1967.
[7] Z. Daróczy, "Generalized information functions," Information and Control, vol. 16, pp. 36–51, 1970.
[8] C. Tsallis, "Possible generalization of Boltzmann-Gibbs statistics," Journal of Statistical Physics, vol. 52, no. 1-2, pp. 479–487, 1988.
[9] R. Hanel and S. Thurner, "A comprehensive classification of complex statistical systems and an ab-initio derivation of their entropy and distribution functions," Europhysics Letters, vol. 93, no. 2, Article ID 20006, 2011.
[10] R. Hanel, S. Thurner, and M. Gell-Mann, "Generalized entropies and logarithms and their duality relations," Proceedings of the National Academy of Sciences of the United States of America, vol. 109, no. 47, pp. 19151–19154, 2012.
[11] H. Suyari, "Generalization of Shannon-Khinchin axioms to nonextensive systems and the uniqueness theorem for the nonextensive entropy," IEEE Transactions on Information Theory, vol. 50, no. 8, pp. 1783–1787, 2004.
[12] V. M. Ilić, M. S. Stanković, and E. H. Mulalić, "Comments on 'Generalization of Shannon-Khinchin axioms to nonextensive systems and the uniqueness theorem for the nonextensive entropy'," IEEE Transactions on Information Theory, vol. 59, no. 10, pp. 6950–6952, 2013.
[13] I. Vajda, "Axioms for α-entropy of a generalized probability scheme," Kybernetika, vol. 2, pp. 105–112, 1968.
