2.1.2. The method of efficiency measurement

2.1.2.2. Stochastic frontier analysis (SFA)

• Time-invariant models

Meeusen and van den Broeck (1977) and Aigner et al. (1977) pioneered this method. Since then, the stochastic frontier model has become a popular tool for estimating firm efficiency, and a large body of research has extended the original formulation in many directions. Methodologically, the literature started with a general cross-sectional stochastic frontier model and was then extended to panel data models.

a) Pitt and Lee (1981)

Because panel data carry a richer set of information, they are considered to yield more accurate estimates of inefficiency. The first panel generalization was presented by Pitt and Lee (1981), who applied maximum likelihood estimation under a half-normal distribution with time-invariant inefficiency, i.e. u_i is fixed over time and differs across banks. The model may be written as:

y_it = α + x′_it β + ε_it,   i = 1, …, N; t = 1, …, T

ε_it = v_it − u_i

v_it ~ N(0, σ_v²)

u_i ~ N⁺(0, σ_u²)
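As a minimal sketch (not from the thesis; the data and all parameter values below are simulated assumptions), the cross-sectional half-normal log-likelihood of Aigner et al. (1977), which Pitt and Lee (1981) extend to panel data, can be maximized numerically:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Simulate a frontier y = alpha + x*beta + v - u,
# v ~ N(0, sigma_v^2), u ~ |N(0, sigma_u^2)| (half normal).
n = 500
x = rng.normal(size=n)
v = rng.normal(scale=0.3, size=n)
u = np.abs(rng.normal(scale=0.5, size=n))
y = 1.0 + 0.8 * x + v - u

def neg_loglik(theta):
    """Negative log-likelihood of the half-normal SF model (Aigner et al., 1977)."""
    alpha, beta, log_sv, log_su = theta
    sv, su = np.exp(log_sv), np.exp(log_su)
    sigma = np.sqrt(sv**2 + su**2)   # sigma^2 = sigma_v^2 + sigma_u^2
    lam = su / sv                    # lambda = sigma_u / sigma_v
    eps = y - alpha - beta * x       # composed error v - u
    ll = (np.log(2) - np.log(sigma)
          + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

res = minimize(neg_loglik, x0=np.array([0.0, 0.0, -1.0, -1.0]), method="BFGS")
alpha_hat, beta_hat = res.x[0], res.x[1]
sv_hat, su_hat = np.exp(res.x[2:])
print(alpha_hat, beta_hat, sv_hat, su_hat)
```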

b) Schmidt and Sickles (1984)

Schmidt and Sickles (1984) showed that the stochastic frontier model with time-invariant efficiency can be estimated by modifying conventional fixed-effects estimation. This approach allows inefficiency to be correlated with the frontier regressors and relaxes the distributional assumptions about u_i, as the sketch below illustrates.
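A minimal sketch of this fixed-effects idea on simulated data (variable names and values are illustrative, not from the thesis): the within estimator needs no distributional assumption on u_i, and inefficiency is recovered relative to the best unit.

```python
import numpy as np

rng = np.random.default_rng(1)

# Panel with N banks over T periods; inefficiency u_i is fixed over time.
N, T = 50, 10
alpha, beta = 1.0, 0.8
u = np.abs(rng.normal(scale=0.4, size=N))     # time-invariant inefficiency
x = rng.normal(size=(N, T))
v = rng.normal(scale=0.2, size=(N, T))
y = alpha + beta * x + v - u[:, None]

# Within (fixed-effects) estimation: demean within each unit.
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
beta_hat = (xd * yd).sum() / (xd**2).sum()

# Recover unit intercepts and normalize against the best unit:
# u_i_hat = max_j(a_j_hat) - a_i_hat, so the most efficient bank has u = 0.
a_hat = (y - beta_hat * x).mean(axis=1)
u_hat = a_hat.max() - a_hat
print(beta_hat, u_hat[:5])
```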

c) Battese and Coelli (1988)

The model in Battese and Coelli (1988) is similar to that of Pitt and Lee (1981): both are estimated by maximum likelihood. However, their distributional assumptions differ. While Pitt and Lee (1981) assume a half-normal distribution for u_i, Battese and Coelli (1988) assume a truncated normal distribution. The latter includes one additional parameter, μ, the mean of the normal distribution before truncation at zero; the former is the special case μ = 0. A small sketch contrasting the two distributions follows.
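The sketch below (illustrative parameter values only) draws u from a normal distribution truncated below at zero with pre-truncation mean μ; setting μ = 0 recovers the half-normal case.

```python
import numpy as np
from scipy.stats import truncnorm

def draw_u(mu, sigma, size, rng):
    """u ~ N+(mu, sigma^2): a normal with mean mu truncated below at zero."""
    a = (0.0 - mu) / sigma   # standardized lower truncation point
    return truncnorm.rvs(a, np.inf, loc=mu, scale=sigma,
                         size=size, random_state=rng)

rng = np.random.default_rng(2)
u_half = draw_u(mu=0.0, sigma=0.5, size=100_000, rng=rng)   # Pitt and Lee (1981)
u_trunc = draw_u(mu=0.3, sigma=0.5, size=100_000, rng=rng)  # Battese and Coelli (1988)
print(u_half.mean(), u_trunc.mean())  # the extra parameter mu shifts the mass of u
```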

• Time-varying models

Because of the limitations of time-invariant models, researchers such as Cornwell et al. (1990), Kumbhakar (1990), Battese and Coelli (1992), and Greene (2005a) proposed models that allow inefficiency to vary over time.

a) Cornwell et al. (1990) and Kumbhakar (1990)

The next generalization was developed by Kumbhakar (1990), the first researcher to suggest maximum likelihood estimation of a time-varying stochastic frontier model, in which u_it = g(t)·u_i and g(t) is specified as:

g(t) = {1 + exp(γt + δt²)}⁻¹

This model introduces two additional parameters, γ and δ, and the hypothesis of time-invariant efficiency is easily tested by setting γ = δ = 0.
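A tiny sketch of this time path (the γ and δ values are illustrative): with γ = δ = 0, g(t) collapses to the constant 1/2, so u_it = g(t)·u_i no longer varies over time.

```python
import numpy as np

def g(t, gamma, delta):
    """Kumbhakar (1990) time path: u_it = g(t) * u_i."""
    return 1.0 / (1.0 + np.exp(gamma * t + delta * t**2))

t = np.arange(1, 11)
print(g(t, gamma=-0.15, delta=0.01))   # inefficiency evolves over time
print(g(t, gamma=0.0, delta=0.0))      # gamma = delta = 0: g(t) = 0.5 for all t,
                                       # i.e. the time-invariant special case
```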

A common issue with all of these time-varying SFMs is that the intercept α is the same across productive units. If time-invariant unobservable factors exist that are unrelated to the production process but nonetheless affect output, their impact is absorbed into the inefficiency scores, biasing the results.

b) Battese and Coelli (1995)

Battese and Coelli (1995) used a Cobb-Douglas function to explore the efficiency score. The model can be described as:

y_it = exp(x′_it β + v_it − u_it)

where u_it follows a truncated normal distribution whose mean may depend on explanatory variables, and the model is estimated by maximum likelihood.
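A brief simulated sketch of this specification (variable names and coefficient values are assumptions, not from the thesis): the truncation mean of u_it depends on an environmental variable z, and technical efficiency is TE_it = exp(−u_it).

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(3)
n = 1000

# Battese and Coelli (1995)-style setup: the mean of the truncated normal
# inefficiency term depends on an environmental variable z.
z = rng.uniform(size=n)
delta0, delta1, sigma_u = 0.1, 0.5, 0.3
mu = delta0 + delta1 * z                       # observation-specific truncation mean
u = truncnorm.rvs((0 - mu) / sigma_u, np.inf, loc=mu, scale=sigma_u,
                  size=n, random_state=rng)

x = rng.normal(size=n)
v = rng.normal(scale=0.2, size=n)
y = np.exp(1.0 + 0.8 * x + v - u)              # frontier in exponential (Cobb-Douglas) form

te = np.exp(-u)                                # technical efficiency in (0, 1]
print(te.mean())
```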

c) Greene (2005a)

To overcome the limitation above, Greene (2005a) recommended a time-varying stochastic frontier half-normal model with a unit-specific intercept. The model is specified as follows:

y_it = α_i + x′_it β + ε_it

Unlike the previous models, this specification separates time-varying inefficiency from unit-specific, time-invariant unobserved heterogeneity,

where ε_it = v_it − u_it, v_it ~ N(0, σ_v²), and u_it ~ N⁺(0, σ_u²).

Depending on the assumptions about the unobserved unit-specific heterogeneity, Greene (2005a) labels these models the "true" fixed effects (TFE) and "true" random effects (TRE) models. Maximum likelihood estimation of the "true" fixed effects model faces two main issues common to nonlinear panel data models. The first is purely computational, arising from the large dimension of the parameter space; Greene showed that a maximum likelihood dummy variable (MLDV) approach remains computationally feasible even with a large number of parameters α_i (N > 1000). The second is the incidental parameters problem, which appears when the number of units is large relative to the length of the panel: the α_i are not consistently estimated as N → ∞ with fixed T. The "true" random effects specification, meanwhile, may be estimated by simulated maximum likelihood techniques.
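As a rough illustration of the MLDV idea (a sketch on simulated data, not Greene's implementation; all parameter values are assumptions), one simply adds one intercept per unit to the half-normal log-likelihood:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(4)
N, T = 30, 10

# "True" fixed effects data: unit-specific intercepts alpha_i plus
# time-varying half-normal inefficiency u_it.
alpha = rng.normal(loc=1.0, scale=0.5, size=N)
x = rng.normal(size=(N, T))
v = rng.normal(scale=0.2, size=(N, T))
u = np.abs(rng.normal(scale=0.4, size=(N, T)))
y = alpha[:, None] + 0.8 * x + v - u

def neg_loglik(theta):
    """MLDV: one dummy intercept per unit, half-normal composed error."""
    a = theta[:N]                     # N unit intercepts (the incidental parameters)
    beta, log_sv, log_su = theta[N:]
    sv, su = np.exp(log_sv), np.exp(log_su)
    sigma = np.sqrt(sv**2 + su**2)
    lam = su / sv
    eps = y - a[:, None] - beta * x
    ll = (np.log(2) - np.log(sigma)
          + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

theta0 = np.concatenate([y.mean(axis=1), [0.0, -1.0, -1.0]])
res = minimize(neg_loglik, theta0, method="BFGS")
print(res.x[N])   # beta_hat; with small T the alpha_i (and hence u) are noisy,
                  # which is the incidental parameters problem noted above
```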

Table 3: An overview of the development of SFA methods

Reference                            Distribution of u                               Estimation method

Cross-sectional models
Aigner et al. (1977)                 Half normal                                     Maximum likelihood
Meeusen and van den Broeck (1977)    Exponential                                     Maximum likelihood
Greene (2003)                        Gamma                                           Simulated maximum likelihood

Panel models
*Time-invariant
Pitt and Lee (1981)                  Half normal                                     Maximum likelihood
Schmidt and Sickles (1984)           -                                               Generalized least squares
Battese and Coelli (1988)            Truncated normal                                Maximum likelihood

*Time-varying
Kumbhakar (1990)                     Half normal                                     Maximum likelihood
Battese and Coelli (1995)            Truncated normal                                Maximum likelihood
Greene (2005a), TFE                  Half normal / Truncated normal / Exponential    Maximum likelihood dummy variable
Greene (2005a), TRE                  Half normal / Truncated normal / Exponential    Simulated maximum likelihood
