THE UNIVERSITY OF DANANG, JOURNAL OF SCIENCE AND TECHNOLOGY, NO. 6(79).2014

ESTIMATING CES AND CET PARAMETERS FOR VIETNAM ECONOMY USING MAXIMUM ENTROPY APPROACH
(Vietnamese title, translated: Applying the Maximum Entropy method to estimate the CES and CET parameters for the Vietnamese economy)

Nguyen Manh Toan
The University of Danang, University of Economics; Email: nm_toankobe@yahoo.com

Abstract - Faced with limited information, in order to draw the "best" conclusions from the data at hand and without assumptions about what we do not know, Jaynes (1957) proposed making use of the entropy concept. This paper presents the theoretical framework used to estimate the Constant Elasticity of Substitution (CES) and Constant Elasticity of Transformation (CET) parameters. Applying the maximum entropy method, the study estimates the trade parameters, i.e. CES and CET, whose values are simply assumed in many CGE models. We use a static CGE model to obtain cross-entropy estimates of the parameters. This is the first time these parameters have been estimated for Vietnam based on the Vietnamese SAM 2007. Because CGE results are sensitive to the values of the CES and CET parameters, it is believed that when these estimates are used for simulation in a CGE model, the results will reflect the actual Vietnamese situation more precisely.

Tóm tắt (translated from Vietnamese) - When full information is not available for the analysis, in order to draw the "best" conclusions without adding further assumptions, Jaynes (1957) proposed applying the concept of entropy. This paper presents the theoretical basis used to estimate the elasticity of substitution (CES) and the elasticity of transformation (CET) of economic sectors. Applying the Maximum Entropy method, the study estimates the CES and CET parameters for the Vietnamese economy. Usually these parameters are assumed to be given, or parameters from other countries are used. This is the first time the CES and CET parameters have been estimated specifically for the Vietnamese economy, based on the 2007 Social Accounting Matrix. Since the accuracy of a CGE model depends on the values of the CES and CET parameters, these estimates help improve the accuracy of economic simulations with CGE models.

Key words - CES; CET; CGE model; maximum entropy; cross entropy
Từ khóa (translated from Vietnamese) - CES; CET; computable general equilibrium model; maximum entropy; cross entropy

1. Introduction

One of the most debated issues in the CGE literature concerns the validity of the key behavioral parameters used in the calibration process. In fact, CGE modelers seldom estimate these parameters empirically, preferring to borrow from the handful of estimates available in the literature. However, these estimates usually refer to countries other than the one the CGE model is trying to represent. Estimating the key parameters is crucial, since CGE results are sensitive to the parameter values (Nganou, 2004). Following Arndt, Robinson and Tarp (2002) and Nganou (2004), this study uses a robust econometric technique, the Generalized Maximum Entropy (GME), to estimate the CES and CET parameters of the Vietnamese economy.

2. Brief Theoretical Background

2.1. Maximum entropy

As discussed in Golan, Judge, and Miller (1996), the traditional maximum entropy (ME) approach is based on the entropy-information measure of Shannon (1948), which allows one to use the available data, taking into account all relevant constraints and prior information about parameter values, to recover unknown parameters. The available information does not need to be complete or even internally consistent. The ME estimator may be used in several circumstances, such as when the sample is small, when there are many covariates, or when the covariates are highly correlated. Moreover, the ME method is very flexible, since it allows the user to easily impose nonlinear and inequality constraints.

The concept of maximum entropy can be briefly clarified as follows. We observe the outcome of an economic variable y. However, our interest focuses on the unknown and unobservable frequencies p = (p1, p2, ..., pK)' that represent the data-generating process. Since we cannot measure p directly, in order to recover it we must use indirect measurements on the observables. In this context, recovering the unknown parameters is an inverse problem that can be formalized in the following way:

    y = Xp    (1)

where y = (y1, y2, ..., yN)' is an N-dimensional vector of observations, p is an unobservable K-dimensional vector of unknowns, and X is a known (N × K) linear operator. Thus, we know X and observe y, and wish to determine the unknown and unobservable parameter vector p, where ∑k pk = 1 and pk ≥ 0.

Faced with this limited information, in order to give the "best" conclusions based on the data at hand and without assumptions about something we do not know, Jaynes (1957) proposed making use of the entropy concept, which has a rich history dating back to the work of Shannon (1948), in determining the unknown probability distribution in Equation (1).

To explain the philosophy of this approach, Golan et al. (1996) started from combinatorics, that is, from counting the number of possible outcomes. Suppose that nature or society carries out N trials (repetitions) of an experiment that has K possible outcomes. Since there are N trials and each trial has K possible outcomes, there are K^N possible sequences of N trials. Let N1, N2, ..., NK be the number of times each outcome occurs in the experiment of length N, where

    ∑k Nk = N,  Nk ≥ 0,  k = 1, 2, ..., K    (2)

The number of ways a particular set of Nk can arise is:

    W = N! / ∏k Nk!    (3)

The probability that the outcome of the experiment occurs in such a way is thus

    P(W) = W / K^N = (N! / ∏k Nk!) · (1 / K^N)    (4)

or

    P(W) = exp[ln N! − ∑k ln Nk! − N ln K]    (5)

Using Stirling's approximation, ln x!
≈ x ln x − x, as 0 < x → ∞, we obtain

    P(W) ≈ exp[N ln N − N − ∑k Nk ln Nk + ∑k Nk − N ln K]    (6)

Since ∑k Nk = N, we have

    P(W) ≈ exp[N ln N − ∑k Nk ln Nk − N ln K]    (7)

The ratio Nk/N represents the frequency of occurrence of the k-th outcome in a sequence of N trials, and

    Nk/N → pk  as N → ∞    (8)

We thus have

    P(W) ≈ exp[N ln N − ∑k N pk ln(N pk) − N ln K]
         = exp[N ln N − ∑k N pk ln N − ∑k N pk ln pk − N ln K]
         = exp[−N ∑k pk ln pk − N ln K]
         = exp{N[−∑k pk ln pk − ln K]}
         = exp{N[H(p) − ln K]}    (9)

where H(p) is the Shannon measure:

    H(p) = −∑k pk ln pk    (10)

The philosophy of the maximum likelihood approach holds that the reason we observe a particular set of outcomes is that its probability is the greatest in nature. Equation (9) implies that the maximum value of P(W) is obtained when H(p) is maximized. We can thus choose the set of pk that maximizes

    H(p) = −∑k pk ln pk    (11)

subject to the data-consistency and normalization-additivity requirements

    y = Xp    (12)

and

    ∑k pk = 1    (13)

2.2. Cross entropy

In many situations found in practice we have some information about the unknown p = (p1, p2, ..., pK)' in the form of a prior distribution of probabilities q = (q1, q2, ..., qK)'. When such prior knowledge exists, we wish to incorporate it in the ME estimation. As stated by Kullback (1959) and Good (1963), we can then use the principle of minimum cross-entropy (CE) and choose, given the constraints (12) and (13), the estimate of p that minimizes the cross-entropy between p and q:

    I(p, q) = ∑k pk ln(pk/qk) = ∑k pk ln pk − ∑k pk ln qk    (14)

If the prior information q is consistent with the data, then p̃ = q and (14) is zero. This means that the data in (12) carry no information additional to the prior. If q is a uniform distribution, qk = 1/K for all k, then the CE and ME results are the same.

2.3. Generalized maximum entropy (GME) and generalized cross entropy (GCE)

The ME and CE formulations presented in the previous sections seek point estimates of the unknown probabilities pk given the available data, the adding-up constraints and possibly a discrete prior qk. In this section, the ME and CE formulations are generalized to recover unknowns that do not have the properties of probabilities, from observations that include noise. The general linear inverse problem with noise can be written as

    y = Xβ + e    (15)

where y is an N-vector of noisy observations, X an (N × K) design matrix composed of explanatory variables, and β a K-vector of unknown parameters. The unobservable disturbance vector e represents one or more sources of noise in the observed system. Because the elements of the unknown vector β no longer have the properties of probabilities, we use a procedure referred to as reparameterization, in which for each βk we assume there exists a support space zk = [zk1, zk2, ..., zkM]' with corresponding probabilities pk = [pk1, pk2, ..., pkM]' such that

    βk = ∑m zkm pkm    (16)

for k = 1, 2, ..., K; m = 1, 2, ..., M and M ≥ 2. Similarly, each element of the vector of disturbances e is assumed to be

    en = ∑j vnj wnj    (17)

for n = 1, 2, ..., N; j = 1, 2, ..., J, and 2 ≤ J
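To make the ME formulation in (11)-(13) concrete, the following minimal numerical sketch recovers a probability vector from a small set of moment constraints by maximizing Shannon entropy. The toy data, the dimensions, and the use of scipy's SLSQP solver are illustrative assumptions, not part of the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Toy version of the ME problem (11)-(13): recover a probability
# vector p (K unknowns) from N < K moment conditions y = X p by
# maximizing H(p) = -sum_k p_k ln p_k subject to X p = y and
# sum_k p_k = 1.  All data below are made up for illustration.
rng = np.random.default_rng(0)
K, N = 6, 2
p_true = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.25])
X = rng.uniform(size=(N, K))
y = X @ p_true                        # observed moments

def neg_entropy(p):
    p = np.clip(p, 1e-12, None)       # guard against log(0)
    return np.sum(p * np.log(p))      # minimizing this maximizes H(p)

cons = [
    {"type": "eq", "fun": lambda p: X @ p - y},    # data consistency (12)
    {"type": "eq", "fun": lambda p: p.sum() - 1},  # additivity (13)
]
res = minimize(neg_entropy, np.full(K, 1 / K),
               bounds=[(0.0, 1.0)] * K, constraints=cons, method="SLSQP")
p_hat = res.x
```

Replacing the objective with ∑k pk ln(pk/qk) for a given prior q yields the cross-entropy estimator of Section 2.2; with a uniform q the two solutions coincide, as noted in the text.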
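The GME reparameterization in (15)-(17) can be sketched numerically as well. The program below recovers a K-vector β from noisy observations by minimizing the joint negative entropy of the support weights p and w subject to (15) and the adding-up constraints; the supports z and v, the simulated data, and the SLSQP solver are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of GME for y = X beta + e, eqs. (15)-(17): each beta_k is a
# convex combination beta_k = sum_m z_km p_km of support points, and
# each error e_n = sum_j v_nj w_nj.  Data and supports are made up.
rng = np.random.default_rng(1)
N, K, M, J = 8, 2, 5, 3
beta_true = np.array([1.5, -0.5])
X = rng.normal(size=(N, K))
y = X @ beta_true + rng.normal(scale=0.05, size=N)

z = np.linspace(-5.0, 5.0, M)   # common support for every beta_k
v = np.array([-1.0, 0.0, 1.0])  # support for every error term e_n

def unpack(theta):
    p = theta[:K * M].reshape(K, M)   # weights on beta supports
    w = theta[K * M:].reshape(N, J)   # weights on error supports
    return p, w

def neg_entropy(theta):
    t = np.clip(theta, 1e-12, None)
    return np.sum(t * np.log(t))      # joint entropy of p and w

def residual(theta):                  # enforce y = X beta + e, eq. (15)
    p, w = unpack(theta)
    return X @ (p @ z) + w @ v - y

cons = [
    {"type": "eq", "fun": residual},
    {"type": "eq", "fun": lambda t: unpack(t)[0].sum(axis=1) - 1},
    {"type": "eq", "fun": lambda t: unpack(t)[1].sum(axis=1) - 1},
]
theta0 = np.concatenate([np.full(K * M, 1 / M), np.full(N * J, 1 / J)])
res = minimize(neg_entropy, theta0, bounds=[(0.0, 1.0)] * theta0.size,
               constraints=cons, method="SLSQP", options={"maxiter": 500})
p_hat, w_hat = unpack(res.x)
beta_hat = p_hat @ z                  # point estimate via eq. (16)
```

The entropy objective shrinks the estimate toward the centre of its support, so in applied work the supports are chosen wide enough to be uninformative; swapping the objective for the cross-entropy distance to prior weights gives the GCE estimator named in the section heading.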