
UEH master's thesis: Stochastic frontier models review with applications to Vietnamese small and medium enterprises in the metal manufacturing industry



Structure

  • CHAPTER I: INTRODUCTION
    • 1. Introduction
    • 2. Research objectives
  • CHAPTER II: LITERATURE REVIEW
    • 1. Efficiency measurement
    • 2. Data Envelopment Analysis (DEA) and Stochastic Frontier Analysis (SFA)
    • 3. The cross-sectional Stochastic Frontier Model
    • 4. Stochastic frontier model with panel data
      • 4.1. Time-invariant models
      • 4.2. Time-varying models
  • CHAPTER III: METHODOLOGY
    • 1. Overview of Vietnamese metal manufacturing industry
    • 2. Analytical framework
    • 3. Research method
      • 3.1. Estimating technical inefficiency
      • 3.2. Variables description
      • 3.3. Data source
  • CHAPTER IV: RESULT AND DISCUSSION
    • 1. Empirical result
      • 1.1. Cobb-Douglas functional form
      • 1.2. Translog functional form
    • 2. Discussion
      • 2.1. Models without distribution assumption
      • 2.2. The distribution of technical inefficiency
      • 2.3. Technical inefficiency and firm-specific effects
      • 2.4. Identification issue
  • CHAPTER V: CONCLUSION
    • Chart 3-1 Firm size and ownership type
    • Chart 3-2 Firm location

Content

INTRODUCTION

Introduction

The rising demand for metal products (especially iron and steel) in daily life, production and, above all, the construction sector makes the metal manufacturing industry important. According to the World Steel Association, at the end of 2011 the Vietnamese steel market was the seventh largest in Asia, with a growth rate in tandem with economic expansion. There is still huge potential in this industry due to growing income and an expanding construction sector.

As reported by the Viet Nam Chamber of Commerce and Industry (VCCI), at the end of 2011, 97% of enterprises in Viet Nam were small and medium sized; they employ more than half of the domestic labor force and contribute more than 40% of GDP. This dynamic group of firms has become an important resource for economic growth in Viet Nam. However, the industry is now facing challenges due to outdated technology and heavy dependence on imported materials. For the reasons above, an analysis of the technical inefficiency level of Vietnamese small and medium enterprises (SMEs) in the metal manufacturing industry is necessary to maintain and develop the benefits from this industry.

Technical efficiency is the effectiveness with which a firm uses a given set of inputs to produce outputs. The set of the highest amounts of output that can be produced from given amounts of inputs is the production frontier. Technical efficiency reflects how close a firm comes to this border: firms producing on the frontier are technically efficient, while those far below it are technically inefficient. A technical efficiency analysis is often conducted by constructing a production-possibility boundary (the frontier) and then estimating the distance (the inefficiency level) of firms from that boundary.

There are two approaches to measuring technical efficiency: deterministic and stochastic. The deterministic approach, Data Envelopment Analysis (DEA), was first introduced by Charnes, Cooper, and Rhodes (1978) and uses linear programming with input and output data to construct the frontier. The advantage of this method is that it does not require the specification of a production function. However, being deterministic, it assumes that there is no statistical noise in the data. The stochastic approach, Stochastic Frontier Analysis (SFA), was first proposed by Aigner, Lovell, and Schmidt (1977) and Meeusen and Broeck (1977). This method, contrary to DEA, requires a specific functional form for the production function and allows the data to contain noise. SFA is used more often in practice because, in many cases, the noiseless assumption is unrealistic.

Since its first appearance in Aigner et al. (1977) and Meeusen and Broeck (1977), the literature on technical efficiency has been widely developed through many studies, such as Pitt and Lee (1981), Schmidt and Sickles (1984), Battese and Coelli (1988, 1992, 1995), Cornwell, Schmidt, and Sickles (1990), Kumbhakar (1990), Lee and Schmidt (1993) and Greene (2005) (see Greene (2008) for an overview). Being able to deal with various production processes, this method has become a popular tool to analyze the performance of production units such as firms, regions and countries. Applications can be found in Battese and Corra (1977), Page Jr (1984), Bravo-Ureta and Rieger (1991), Battese (1992), Dong and Putterman (1997), Anderson, Fish, Xia, and Michello (1999) and Cullinane, Wang, Song, and Ji (2006).

Despite the fact that a rich literature on this topic has developed over a long time, researchers at times find it difficult to choose the most appropriate model to estimate the technical efficiency level or to determine its sources. The earliest versions of these models were built for cross-sectional data (Aigner et al., 1977; Meeusen & Broeck, 1977). These models need assumptions about the distribution of technical inefficiency and its uncorrelatedness with other parts of the model. Pitt and Lee (1981) and Schmidt and Sickles (1984) criticized that technical inefficiency cannot be estimated consistently with cross-sectional data and suggested models for panel data.

The literature on panel data models first came with the assumption of time-invariant technical inefficiency (Battese & Coelli, 1988; Pitt & Lee, 1981; Schmidt & Sickles, 1984). Researchers later argued that it is too strict to assume technical inefficiency to be fixed through time and suggested models that allow time variation, such as Cornwell et al. (1990), Kumbhakar (1990), Lee and Schmidt (1993) and Battese and Coelli (1992). Those models solved the problem by imposing a time pattern on inefficiency. Nevertheless, the assumption of an unchanged time behavior was also criticized as too strict. Battese and Coelli (1995) then created the model with technical inefficiency effects, which allows technical inefficiency to vary with time and other determinants.

Greene (2005) introduced "true" fixed and random effects models, which allow inefficiency to change freely over time and separate it from other firm-specific factors.

This thesis aims to estimate the technical efficiency level of Vietnamese metal manufacturing firms with panel-data stochastic frontier models. In addition, this study reviews those panel data models of technical inefficiency analysis and draws some implications about model choice in this field. The study uses an unbalanced panel dataset of firms in the metal manufacturing industry in the years 2005, 2007 and 2009, drawn from the Vietnamese SME survey. The results show different technical efficiency levels among those stochastic frontier models.

Research objectives

- To give a review of panel-data stochastic frontier models;

- To apply those models to investigate the technical efficiency of SME firms in metal manufacturing industry in Viet Nam.

LITERATURE REVIEW

Efficiency measurement

The main economic function of a business can be expressed as a process that turns inputs into outputs with a specific producing ability. The ratio of outputs to inputs indicates the productivity of a specific firm (Coelli, Rao, O'Donnell, & Battese, 2005). Change in productivity reflects how well a production unit operates, in other words, how efficient it is. From an economic perspective, growth in productivity or efficiency can be considered the most popular proxy for firm performance.

The terms productivity and efficiency need to be distinguished in the context of firm production.

On the one hand, productivity covers all factors that determine how much output can be obtained from given amounts of inputs; it can be considered "total factor productivity" (TFP). On the other hand, efficiency relates to the production frontier. This frontier shows the maximum output that can be produced with a given level of inputs. A firm is technically efficient when it produces on this frontier. Firm production cannot go beyond this frontier, as it is the limit of the firm's performing ability. When the firm performs below this frontier, it is considered inefficient; the farther the distance, the more inefficient the firm. Changes in productivity can be due to changes in efficiency (the firm becomes more or less technically efficient), a change in the amount and proportion of its inputs (changing its scale efficiency), technical progress (change in technology level over time), or a combination of all the above factors (Coelli et al., 2005).

Efficiency measurement can be approached from two sides, inputs and outputs. Input-oriented measures relate to cost reduction (the minimum amount of inputs needed to produce a given amount of output). Output-oriented measures, on the other hand, consider the maximum level of output produced from a given amount of inputs. Figures 2-1 and 2-2 illustrate these two approaches. Figure 2-1 shows a firm with two inputs X1 and X2; YY' is an isoquant which shows every minimum set of inputs that could be used to produce a given output. If a firm operates on this isoquant (the frontier), it is technically efficient in an input-oriented sense, because its amount of inputs is minimized. The iso-cost line CC' (which can be constructed when the input-price ratio is known) determines the optimal proportion of inputs to achieve the lowest cost. Technical efficiency (TE) can be calculated as the ratio OR/OP, and allocative efficiency (AE) equals the ratio OS/OR. The product of AE and TE expresses the overall efficiency of the firm, called economic efficiency (EE), i.e. EE = AE × TE.
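As a small worked example with hypothetical distances (not taken from the thesis data): if OR/OP = 0.80 and OS/OR = 0.90, then TE = 0.80, AE = 0.90 and EE = 0.80 × 0.90 = 0.72, meaning the firm could produce the same output at 72% of its observed cost.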

Figure 2-2 illustrates the case where the firm uses one input to produce one output. The f(X) curve determines the maximum output that can be obtained from each level of input X (the frontier).

The firm is technically efficient when operating on this frontier. In this situation, TE equals BD/DE.

Measurements and analyses of TE have been conducted in a large number of studies with two main approaches – Data Envelopment Analysis (DEA) and Stochastic Frontier Analysis (SFA). The next section briefly discusses these two methods.

Data Envelopment Analysis (DEA) and Stochastic Frontier Analysis (SFA)

a. Data Envelopment Analysis (DEA):

DEA is a non-parametric method for estimating firm efficiency, first introduced by Charnes, Cooper, and Rhodes (1978) with constant returns to scale. Later, it was extended to allow for decreasing and variable returns to scale in Banker, Charnes, and Cooper (1984). Specific instructions can be found in Banker et al. (1984), Charnes et al. (1978), Färe, Grosskopf, and Lovell (1994), Färe, Grosskopf, and Lovell (1985) and Ray (2004).

With n firms (called Decision Making Units – DMUs), each using m types of inputs to produce s types of outputs, the DEA model following an output-oriented measure is given by:

max h₀ = (∑_{r=1}^{s} u_r y_{r0}) / (∑_{i=1}^{m} v_i x_{i0}), subject to (∑_{r=1}^{s} u_r y_{rj}) / (∑_{i=1}^{m} v_i x_{ij}) ≤ 1, u_r, v_i ≥ 0  (2.2.1)

with j = 1, 2, …, n; r = 1, 2, …, s; i = 1, 2, …, m; x_{ij} and y_{rj} are respectively the ith input and rth output of the jth DMU; u_r and v_i are the weights of outputs and inputs, which come from the solution of this maximization problem (Charnes et al., 1978). Using the piece-wise frontier from Farrell (1957) and a linear programming algorithm, this method constructs a production frontier. The ratio between outputs and inputs of each firm is then compared with the frontier to calculate its efficiency level.
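As an illustrative sketch only (not part of the thesis's estimation), the output-oriented, constant-returns-to-scale version of this problem can be solved DMU by DMU with a general-purpose linear programming routine; the data and function name below are invented.

```python
import numpy as np
from scipy.optimize import linprog

def dea_output_te(X, Y):
    """Output-oriented CRS DEA sketch: returns technical efficiency (1/phi) per DMU.

    X: (n, m) inputs, Y: (n, s) outputs, one row per DMU.
    For each DMU0 solve: max phi s.t. sum_j lam_j*x_j <= x_0,
    sum_j lam_j*y_j >= phi*y_0, lam >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    te = np.empty(n)
    for k in range(n):
        # decision variables: [phi, lam_1, ..., lam_n]; linprog minimizes, so use -phi
        c = np.r_[-1.0, np.zeros(n)]
        # input constraints: sum_j lam_j * x_ij <= x_ik
        A_in = np.c_[np.zeros((m, 1)), X.T]
        b_in = X[k]
        # output constraints: phi*y_rk - sum_j lam_j * y_rj <= 0
        A_out = np.c_[Y[k][:, None], -Y.T]
        b_out = np.zeros(s)
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[b_in, b_out],
                      bounds=[(0, None)] * (n + 1), method="highs")
        te[k] = 1.0 / res.x[0]      # TE = 1/phi; equals 1 for firms on the frontier
    return te

# toy data: 4 DMUs, 2 inputs, 1 output (illustrative numbers only)
X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0], [6.0, 6.0]])
Y = np.array([[10.0], [12.0], [9.0], [12.0]])
print(dea_output_te(X, Y))
```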

Although it has received attention only since 1978, DEA has, for many reasons, become a popular branch of efficiency analysis. Wei (2001) described this growth by listing five developments in DEA research. Studies using DEA have been conducted in almost every industry, in both the private and public sectors. Moreover, numerical methods and supporting computer programs have grown in both number and quality. Over time, new DEA models have been discussed and established, such as the additive model, the log-type DEA model and the stochastic DEA model. Besides, the economic and management background of DEA has been analyzed more carefully and deeply, strengthening the base for applications of this model. Mathematical theories related to DEA have also been promoted by many mathematicians. Those factors gave rise to progress in both the theory and the empirical applications of this non-parametric method.

b. Stochastic Frontier Analysis (SFA):

Aigner et al. (1977) and Meeusen and Broeck (1977) suggested the stochastic production frontier method to measure firms' efficiency. The model can be described mathematically as:

Y_i = f(X_i, β) exp(v_i − u_i)  (2.2.2)

where Y_i is the output of firm i, X_i is the vector of inputs and β is the vector of parameters to be estimated. The last two factors, v_i and u_i, are the two error terms: v_i is random statistical noise and is assumed to follow a normal distribution with zero mean, while u_i is a non-negative term indicating inefficiency, which keeps the firm from producing on its frontier. There are different assumptions about the distribution of u_i, such as the half-normal distribution (Aigner et al., 1977), the exponential distribution (Meeusen & Broeck, 1977), the gamma distribution (Greene, 1990) or a non-negative truncation of N(m_it, σ²) (Battese & Coelli, 1988, 1992, 1995).

c. Trade-off between DEA and SFA

Differing in their approaches, DEA and SFA have their own advantages and drawbacks.

This implies that when choosing between DEA and SFA, researchers must make some trade-offs.

Being a non-parametric method, DEA is a deterministic approach without the specification of a production function, while SFA is a stochastic technique using econometric (parametric) tools and requires a model specification (Ray & Mukherjee, 1995). From that key difference, DEA is considered non-statistical, assuming that the data have no noise. Data noise can come from measurement errors or random factors that cannot be controlled by firms, so this restriction seems too rigid in realistic situations. SFA is statistical, so it allows for and takes into account statistical noise; in other words, it is more flexible with real-world data, in which random factors and errors in data collection are unavoidable. But using SFA requires assumptions about the specification of the model, functional forms, and the distributions of parameters and error terms (Wagstaff, 1989). In DEA, every factor that keeps the firm away from its frontier is regarded as inefficiency, while in SFA the residual is decomposed into two components: one part, which is not under the control of the firm itself, is treated as noise and has zero mean; the other part, known as inefficiency, is the weakness of the firm which makes it produce below the frontier. So, generally, the efficiency measured by SFA will be relatively higher (Ferrier & Lovell, 1990).

DEA has the advantage of being applicable in various complicated production conditions. Without requiring a definite production function, it simplifies the linkage from inputs to outputs of a production process. However, without statistical properties, no test can be used to check DEA's goodness of fit or specification. In spite of having troubles with model specification, SFA has econometric tools to test whether the model is suitable or not. The most beneficial advantage of SFA is its capability of dealing with statistical noise. Generally, for industries in which production processes are strictly controlled, DEA seems to be the better choice for measuring efficiency, because random fluctuation in these industries is minimized and the production process is very stable (from a given amount of inputs, the number and quality of outputs can be determined precisely). Meanwhile, SFA tends to be suitable for industries in which noise is inevitable and firms have to bear the impacts of random fluctuations. In the case of this thesis, firms in the metal manufacturing industry are influenced by the markets of both inputs and outputs, both domestic and foreign, and by changes in policy.

Given the nature of the industry this thesis analyzes, SFA is the better method to apply. The next part discusses the SFA method in detail, for both cross-sectional and panel data models.

The cross-sectional Stochastic Frontier Model

The cross-sectional stochastic frontier model in Aigner et al. (1977) can be described as:

y_i = f(x_i, β) + v_i − u_i  (2.3.1)

where v_i is random noise and u_i is technical inefficiency (u_i ≥ 0). To distinguish these two components of the residual, some assumptions are necessary. The first concerns the distribution of u_i, while v_i follows a symmetric normal distribution. Because u_i represents the distance from the frontier which keeps firms below it, its value is non-negative. As mentioned above, the suggested distributions include the half-normal (Aigner et al., 1977), the exponential (Meeusen & Broeck, 1977), the gamma (Greene, 1990) and the non-negative truncation of N(m_it, σ²) (Battese & Coelli, 1988, 1992, 1995).

Both main econometric estimation methods – Ordinary Least Squares (OLS) and Maximum Likelihood (ML) – can be used for the technical inefficiency calculation. However, because the error term ε includes two components, u with an asymmetric distribution and v with a symmetric distribution, ε is not normally distributed. Since ε = v − u, E(ε) = −E(u) < 0, which biases the OLS intercept downward. To clarify, consider a regression with just the intercept α: y = α + ε; the OLS estimator of α is ȳ and plim(ȳ) = α + E(ε), which does not equal α. Winsten (1957) suggested a method called Corrected Ordinary Least Squares (COLS), and Afriat (1972) and Richmond (1974) offered the Modified Ordinary Least Squares (MOLS) method to solve this bias problem. These two methods correct or modify the intercept upward, by adding the maximum of the OLS residuals (COLS) or an estimate of the mean of u (MOLS). COLS and MOLS have some problems, such as estimates without statistical meaning (Mastromarco, 2007). ML, which has desirable asymptotic properties and can deal with an asymmetrically distributed residual, is used more frequently than OLS.
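A minimal numerical sketch of the COLS idea, using simulated data (all values below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(1, 5, n)                      # log input
u = np.abs(rng.normal(0, 0.3, n))             # non-negative inefficiency (half-normal)
v = rng.normal(0, 0.1, n)                     # symmetric noise
y = 1.0 + 0.6 * x + v - u                     # log output below the frontier

# OLS: the intercept is biased downward by E(u)
X = np.column_stack([np.ones(n), x])
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_ols

# COLS: shift the intercept up by the largest residual so the line bounds the data
alpha_cols = beta_ols[0] + resid.max()
u_hat = resid.max() - resid                   # non-negative inefficiency estimates
print(beta_ols[0], alpha_cols, u_hat.mean())
```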

With technical inefficiency (u) following a half-normal distribution, i.e. u ~ N⁺(0, σ_u²) (Aigner et al., 1977), the log-likelihood function is:

ln L = −(N/2) ln(πσ²/2) + ∑_{i=1}^{N} ln Φ(−ε_i λ/σ) − (1/(2σ²)) ∑_{i=1}^{N} ε_i²  (2.3.2)

with σ² = σ_v² + σ_u² and λ = σ_u/σ_v. When λ = 0 there is no inefficiency and deviations from the frontier are pure noise; the larger λ is, the more the deviations are dominated by inefficiency. In the equation above, y is the vector of logarithms of outputs; ε_i = v_i − u_i = ln q_i − x_i′β; and Φ(x) is the cumulative distribution function (cdf) of a standard normal N(0, 1) variable evaluated at x (Coelli et al., 2005). This function can be maximized using an iterative optimization procedure such as that in Judge, Hill, Griffiths, Lütkepohl, and Lee (1982), as cited in Coelli et al. (2005).
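For illustration, the log-likelihood (2.3.2) can be maximized numerically with a general-purpose optimizer; the simulated data, parameter names and starting values below are invented and this is not the estimation routine used in the thesis.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 1000
x = rng.uniform(0, 3, n)
u = np.abs(rng.normal(0, 0.4, n))            # half-normal inefficiency
v = rng.normal(0, 0.2, n)                    # symmetric noise
y = 0.5 + 0.7 * x + v - u                    # log output

def neg_loglik(theta):
    b0, b1, ln_su, ln_sv = theta
    su, sv = np.exp(ln_su), np.exp(ln_sv)    # keep standard deviations positive
    sigma = np.sqrt(su**2 + sv**2)
    lam = su / sv
    eps = y - b0 - b1 * x
    ll = (-n / 2 * np.log(np.pi * sigma**2 / 2)
          + np.sum(norm.logcdf(-eps * lam / sigma))
          - np.sum(eps**2) / (2 * sigma**2))
    return -ll

res = minimize(neg_loglik, x0=np.array([0.0, 0.5, np.log(0.3), np.log(0.3)]),
               method="BFGS")
b0, b1, su, sv = res.x[0], res.x[1], np.exp(res.x[2]), np.exp(res.x[3])
print(b0, b1, su, sv)
```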

The log-likelihood function with the exponential distribution, u ~ Ex(θ) with θ = σ_u⁻¹, can be written as:

ln L = −N ln σ_u + N σ_v²/(2σ_u²) + ∑_{i=1}^{N} [ε_i/σ_u + ln Φ(−ε_i/σ_v − σ_v/σ_u)]  (2.3.3)

The case of the truncated normal distribution, i.e. u ~ N⁺(μ_u, σ_u²), has a log-likelihood of a similar but more involved form, with the additional parameter μ_u. The log-likelihood function for the stochastic frontier model with a gamma distribution of u can be found in Greene (1990) (2.3.4); it involves Θ and P, the two parameters of the gamma distribution.

Figure 2 – 3: Distribution of technical inefficiency

Figure 2-3 illustrates the probability density functions of the four distributions of u.

Obviously, there are restrictions with the gamma, exponential and half-normal distributions. Under these distributions, most observations are located in the region where u (technical inefficiency) is low, so one would conclude that the efficiency level of firms is rather high (the inefficiency level is low); put differently, most firms are highly efficient. This can be untrue for many industries, in which high efficiency is not realistic. The truncated normal distribution is more flexible because it allows inefficiency to be concentrated around almost any positive point; therefore it can describe u better.

Consider a Cobb-Douglas production function: ln Y_i = β ln X_i + v_i − u_i  (2.3.5)

The technical efficiency level of a firm can be calculated as the ratio of observed output (Y_i) to the maximum feasible output Y_i*, which is the output when the firm is fully efficient, i.e. when u_i is zero; in the logarithmic form above this ratio equals exp(−u_i).

With u_i following the non-negative truncation of the N(m_i, σ²) distribution as in Battese and Coelli (1995), technical inefficiency (or, in other words, efficiency) can be analyzed together with its determinants through the following regression equation:

u_i = Z_i δ + ω_i

where Z_i is the vector of determinants of m_i and δ is the vector of parameters to be estimated. The distribution of ω_i is the truncation of the normal distribution N(0, σ²) (Battese & Coelli, 1995). This is called the technical inefficiency effects model and can be estimated simultaneously with the stochastic frontier.

Stochastic frontier model with panel data

Three problems arise when the stochastic frontier model is used with cross-sectional data (Schmidt & Sickles, 1984). The first is inconsistency in estimating technical inefficiency. Most studies in this field use the method of Jondrow, Lovell, Materov, and Schmidt (1982) to predict the technical inefficiency level for each firm in the sample. The formula is:

E(u_i | ε_i) = σ_* [ f(μ_*i/σ_*) / F(μ_*i/σ_*) + μ_*i/σ_* ]  (2.4.1)

where f and F are the standard normal density and cumulative distribution functions respectively, μ_*i = −σ_u² ε_i/σ², σ_*² = σ_u² σ_v²/σ² and σ² = σ_u² + σ_v². Because μ_* and σ_* are unknown, their estimators μ̂_* and σ̂_* are used instead, which leads to some sampling bias. In principle one should account for this bias, but it is very complicated to do so. This kind of bias disappears asymptotically and can be ignored in large samples; however, technical inefficiency is essentially independent of sample size, so the technical inefficiency level is estimated inconsistently (Schmidt & Sickles, 1984). The second problem is the ambiguity in the distribution of u, which is needed to guarantee independence between technical inefficiency (u) and statistical noise (v). Without a strong distributional assumption, it is impossible to decompose the overall error term (ε) into inefficiency (u) and statistical noise (v); yet with cross-sectional data, the robustness of that assumption is hard to test. The third problem is the assumption that u is uncorrelated with the other regressors in the model. This endogeneity problem causes biases. Schmidt and Sickles (1984) suggest that endogeneity is unavoidable because, in the long run, the firm realizes its inefficiency level and adjusts its use of inputs to become more efficient.
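A small sketch of the Jondrow et al. (1982) predictor for the normal/half-normal case (the residuals and variance parameters below are invented):

```python
import numpy as np
from scipy.stats import norm

def jlms_u_hat(eps, sigma_u, sigma_v):
    """Jondrow et al. (1982) point estimator E(u_i | eps_i) for the
    normal/half-normal model; eps are composed residuals v - u."""
    sigma2 = sigma_u**2 + sigma_v**2
    mu_star = -sigma_u**2 * eps / sigma2
    sigma_star = np.sqrt(sigma_u**2 * sigma_v**2 / sigma2)
    z = mu_star / sigma_star
    return sigma_star * (norm.pdf(z) / norm.cdf(z) + z)

# illustrative values only
eps = np.array([-0.30, -0.05, 0.10])
u_hat = jlms_u_hat(eps, sigma_u=0.4, sigma_v=0.2)
print(u_hat, np.exp(-u_hat))    # point estimates of u and the implied TE = exp(-u_hat)
```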

Panel data models (with data from N firms over T periods) can help avoid these three weaknesses (Greene, 2008). First, more observations over time (the ideal case being a long enough time series, T → ∞) help estimate technical inefficiency more consistently. Second, by isolating technical inefficiency and treating it as a fixed effect, the panel data model can be distribution-free (the distributional assumption becomes optional) (Greene, 2008). Finally, the uncorrelatedness assumption is also relaxed, because some panel models can take this correlation into account. The next section describes in detail the panel data stochastic frontier models that have been developed since their first appearance.

4.1 Time-invariant models

a. Within estimation with fixed effects and GLS estimation with random effects, from Schmidt and Sickles (1984)

Following the discussion above, Schmidt and Sickles (1984) suggest the use of panel data to estimate (time-invariant) technical inefficiency with both fixed and random effects. The model is described as:

ln y_it = α + X_it′β + v_it − u_i  (2.4.2)

*Note: Schmidt and Sickles (1984) use a log-linear function, with v_it uncorrelated with X_it and u_i. The within estimator uses dummy variables to estimate a separate intercept for each firm, which stands for its own technical inefficiency. This method has advantages because it needs neither an assumption about the uncorrelatedness between u and other variables nor an assumption about u's distribution. After estimation, each firm's effect is compared with the highest in the sample and inefficiency is estimated as û_i = max(α̂_i) − α̂_i. The authors suggest a large number of firms to obtain an exact estimate of the most efficient firm in the sample (the ideal case being an extensive number of firms over a considerable number of time periods). Because this method is simply a fixed effects estimation using panel data, it includes in technical inefficiency all time-invariant but firm-varying effects (as Schmidt and Sickles (1984) mention, capital stock for example: if the value of capital stock stays unchanged over time, the fixed effects model includes it in the firm's specific intercept), even though these cannot be considered inefficiency.
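An illustrative sketch of the within estimator and the û_i = max(α̂_i) − α̂_i step, with a made-up toy panel (not the thesis dataset):

```python
import numpy as np

def within_sf(y, X, firm_id):
    """Schmidt-Sickles style within (fixed effects) estimator sketch.

    y: log output, X: (n, k) log inputs, firm_id: integer firm labels.
    Returns slopes, firm intercepts and u_hat = max(alpha) - alpha."""
    firms, idx = np.unique(firm_id, return_inverse=True)
    counts = np.bincount(idx)
    # demean y and X within each firm, then run pooled OLS on the deviations
    y_bar = np.bincount(idx, weights=y) / counts
    X_bar = np.vstack([np.bincount(idx, weights=X[:, j]) / counts
                       for j in range(X.shape[1])]).T
    y_dm, X_dm = y - y_bar[idx], X - X_bar[idx]
    beta, *_ = np.linalg.lstsq(X_dm, y_dm, rcond=None)
    # firm-specific intercepts and inefficiency relative to the best firm
    alpha = y_bar - X_bar @ beta
    u_hat = alpha.max() - alpha
    return beta, alpha, u_hat

# toy unbalanced panel (made-up numbers): 3 firms, 7 firm-year observations
firm_id = np.array([0, 0, 1, 1, 1, 2, 2])
X = np.log(np.array([[2.0], [2.5], [3.0], [3.2], [3.1], [1.8], [2.2]]))
effects = np.array([0.0, 0.0, -0.3, -0.3, -0.3, -0.1, -0.1])
y = 0.6 * X[:, 0] + effects + np.random.default_rng(2).normal(0, 0.05, 7)
print(within_sf(y, X, firm_id))
```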

Given the weaknesses of the within estimator mentioned above, the authors suggest an assumption of uncorrelatedness between u and X_it′β, under which GLS estimation can be conducted to estimate u better. The advantage of this method is its ability to separate time-invariant regressors, which the within estimator cannot do; however, the strong assumption needs to be tested. Given the issues with the uncorrelatedness and distributional assumptions, the authors suggest two other methods: the estimator of Hausman and Taylor (1981), which relaxes the uncorrelatedness assumption, and maximum likelihood estimation, which is more advanced given a specific distribution of u.

The two models considered above are among the simplest approaches to the concept of technical efficiency. With their criticism of the inconsistency in estimating the technical inefficiency level, Schmidt and Sickles (1984) suggest fixed and random effects models, which give more consistent estimates of technical inefficiency when T is large for a given N. However, the lack of a distribution makes it hard to separate true inefficiency from other firm-specific factors. Earlier, Pitt and Lee (1981) had combined a half-normal distribution with maximum likelihood estimation; their model is described in the next section.

b. The model with time-invariant efficiency in Pitt and Lee (1981)

In this paper, panel data from the Indonesian weaving industry were used to estimate the technical inefficiency level and its sources. The hypothesis of whether technical inefficiency is time-invariant or time-varying was tested using three different models. Three cases were suggested by the authors. The first case is when u is fixed through time and varies only among individuals, i.e. it is indexed by i only (u_i), as described below:

y_it = α + X_it′β + v_it − u_i

*Note: Pitt and Lee (1981) use a linear function

In the second case, technical inefficiency is independent over time and across individuals, which leads back to the cross-sectional model as in Aigner et al. (1977); that is, u is indexed by both i and t (u_it), with E(u_it u_it′) = 0 and E(u_it u_jt′) = 0 for all i ≠ j and t ≠ t′.

The final case is intermediate between these two, where technical inefficiency is assumed to be correlated over time; that is, E(u_it u_it′) ≠ 0 and E(u_it u_jt′) = 0 for all i ≠ j and t ≠ t′.

The first and second models are estimated using the maximum likelihood method, while the intermediate model is estimated with generalized least squares (because the maximum likelihood procedure for the last case is intractable). The comparison between the first two models and the third is conducted by a χ² test, which can be found in Jöreskog and Goldberger (1972). The test suggests that the last model is appropriate, which implies technical inefficiency is time-varying. The measure of technical inefficiency for each firm is not mentioned in the paper; however, it can be obtained by the method of Jondrow et al. (1982), which infers the value of each u_i from the value of each ε_i.

Although the last model is shown to be more precise, it does not take into account the distribution of technical inefficiency. Moreover, it supplies no measure of inefficiency. Thus, generally, the approach proposed by Pitt and Lee (1981) hinges on a model with time-invariant inefficiency following a half-normal distribution, and it suggests further research into time-varying inefficiency.

However, as mentioned above, a half-normal distribution is sometimes unreasonable. Battese and Coelli (1988) suggest a more general distribution for u – the truncated normal distribution. The model is discussed in detail in the next section.

c. The model with truncated normal distribution in Battese and Coelli (1988)

Battese and Coelli (1988) propose a model in which technical inefficiency follows a truncated normal distribution, developed in Stevenson (1980), for the estimation of a stochastic production frontier. Given the availability of data (3 years), the authors make the assumption of time-invariant inefficiency. The new distribution is N⁺(μ, σ_u²). It is more general than the older ones (the half-normal, introduced in Pitt and Lee (1981) and Schmidt and Sickles (1984)) because when μ = 0 the distribution becomes half-normal. With the development in calculating the likelihood function from Stevenson (1980), the model is estimated with the maximum likelihood method. The model can be described as:

ln y_it = α + β ln x_it + v_it − u_i  (2.4.6)

*Note: Battese and Coelli (1988) use a Cobb-Douglas function

An important contribution of this paper to the stochastic frontier field is its approach to estimating technical efficiency at both the industry level and the firm level for the logarithmic case (the Cobb-Douglas functional form in the study). Instead of using the mean of technical inefficiency, E(u), and calculating the efficiency level as 1 − E(u) as in Jondrow et al. (1982), the authors suggest that the technical efficiency level should be obtained in the form exp(−u) in the logarithmic case. The formula for the technical efficiency level is then derived using the properties of the truncated distribution of u.

A common suggestion from the studies mentioned above is the research direction into the time-varying characteristics of u. Pitt and Lee (1981) base on their empirical evidence to suggest further research on time-varying technical inefficiency. Schmidt and Sickles (1984) also state that firms will recognize their inefficiency level in the long run and change to become more efficient.

The lack of long-period data makes Battese and Coelli (1988) assume u to be fixed through time; however, in their suggestions for future research, they also propose models that allow inefficiency to vary over time. To relax the inflexible assumption of time-invariant inefficiency, models with time-varying inefficiency arise. The next section considers those models.

4.2 The time-varying models

a. The model of Cornwell et al. (1990)

METHODOLOGY

Overview of Vietnamese metal manufacturing industry

Firms in the sample used in this study are divided into two categories according to their main products: basic metal manufacturing firms and fabricated metal manufacturing firms (except machinery and equipment). Details of this classification can be found in the International Standard Industrial Classification of All Economic Activities (ISIC), Revision 4. Firms in the first group are involved in smelting and refining ferrous and non-ferrous metals; they use metallurgical techniques with materials from the mining industry such as metal ore, pig iron or scrap.

Taking part in this sub-industry requires large investments in physical assets; thus, in this sample of small and medium enterprises, this group accounts for only 9% of firms. The second group manufactures structural metal products, metal containers and steam generators.

Producing more popular products, this group accounts for about 91% of firms.

The metal manufacturing industry in Viet Nam has great potential due to the high demand for metal products for daily use, production and construction. In Viet Nam's young developing economy, the metal manufacturing industry is still immature and most products are used for construction; according to the World Steel Association, about 80% of iron and steel materials are used for construction. Besides, the rising domestic demand for metal materials from machinery, motor and automobile and other consumer goods manufacturing can also be considered an important condition for the development of the metal manufacturing industry. However, along with the depressed state of the Vietnamese economy in recent years, the metal manufacturing industry also faces many difficulties. The cut in public construction investment due to the government budget deficit strongly decreases the demand for metal materials for construction; according to the Vietnam Steel Association, steel consumption fell about nine percent in 2012. The rising prices of inputs such as electricity, water and labor also impose many hardships on this industry.

Due to the importance of the metal manufacturing industry, this study is conducted with the objective of analyzing the technical efficiency level of firms in this sector. However, most observations in this sample are micro and household firms (74.5%) and medium-sized firms account for only four percent. Moreover, due to data availability, the dataset used here covers the period 2005 to 2009, during which economic conditions may have differed from the current situation. For those reasons, there is a high probability of sample bias if this study draws conclusions about the industry (the population). Thus, as a reminder to readers about the precision of the conclusions from this study, these results should be considered carefully when used for policy recommendations.

Analytical framework

Panel-data stochastic frontier models presented in Chapter II are applied to estimate the technical inefficiency level of Vietnamese SMEs in the metal manufacturing industry. The production function is estimated with input and output data in two different functional forms – Cobb-Douglas and Translog. For the technical inefficiency effects model of Battese and Coelli (1995), a group of firm-specific variables is added to the model; those variables can be considered sources or determinants of technical inefficiency. The results are then compared among models to find out the impact of each assumption and model specification on the way technical efficiency is determined.

Research method

Firm efficiency will be calculated with the stochastic frontier model. As noted above, an important step in using the stochastic frontier model is choosing a suitable functional form to build up the frontier. Several types of production function can be considered, such as the Linear, Cobb-Douglas, Quadratic, Normalized Quadratic, Translog, Generalized Leontief and Constant Elasticity of Substitution (CES) forms (see Griffin, Montgomery, and Rister (1987) for a review).

Coelli et al. (2005) emphasize that a good functional form should be flexible, linear in parameters, regular and parsimonious. An ith-order flexible functional form is one that has enough parameters for an ith-order differential approximation. Among the functions mentioned above, the Linear and Cobb-Douglas forms are first-order flexible; all the rest are second-order flexible.

Most of the production functions considered above are linear in parameters; the Cobb-Douglas and Translog functions become linear in parameters when we take the logarithm of both sides of the equation.

A regular functional form is one that satisfies the economic regularity properties of a production function by its own nature or with some simple restrictions. Finally, a parsimonious function can be understood as the simplest function that can adequately solve the problem. A flexible function is less likely to impose assumptions or restrictions on the properties of the production function, while a parsimonious function saves degrees of freedom. In choosing a functional form, researchers always face the trade-off between flexibility and parsimony.

Researchers usually choose Cobb-Douglas for its parsimony (and sometimes tractability), and Translog for its flexibility. Being less flexible, the Cobb-Douglas functional form imposes constant production elasticities and constant elasticity of factor substitution, while the Translog functional form does not; the Translog form therefore makes properties of the production function testable and is considered more realistic and less restrictive. Nonetheless it still has some weaknesses: the cross and squared terms in the Translog model increase the number of parameters, correlation among them is highly likely, and if the number of observations is not large enough, this increase in parameters reduces the degrees of freedom. For comparison, the Cobb-Douglas and Translog functions can be specified with four inputs – capital (K), labor (L), materials (M) and indirect costs (I) – as below:

Cobb-Douglas functional form:

ln Y_it = β₀ + β₁ ln K_it + β₂ ln L_it + β₃ ln M_it + β₄ ln I_it + V_it − U_it  (3.1)

Translog functional form:

ln Y_it = β₀ + β₁ ln K_it + β₂ ln L_it + β₃ ln M_it + β₄ ln I_it + ½[β₅ ln K_it ln L_it + β₆ ln K_it ln M_it + β₇ ln K_it ln I_it + β₈ ln L_it ln M_it + β₉ ln L_it ln I_it + β₁₀ ln M_it ln I_it + β₁₁ (ln K_it)² + β₁₂ (ln L_it)² + β₁₃ (ln M_it)² + β₁₄ (ln I_it)²] + V_it − U_it  (3.2)

where the subscript i denotes firms and t denotes time periods; Y_it is output; K_it is capital; L_it is labor; M_it is materials; I_it is indirect costs; V_it stands for statistical noise, which follows N(0, σ_v²); and U_it stands for technical inefficiency, which follows one of the specific non-negative distributions mentioned above.
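As an illustration of how the two specifications relate, the sketch below builds the regressor matrix for (3.2); keeping only the first-order columns gives (3.1). The helper name and toy numbers are invented for this sketch.

```python
import numpy as np

def translog_design(K, L, M, I):
    """Regressor matrix for the Translog form (3.2); the first five columns
    (intercept and first-order log terms) correspond to the Cobb-Douglas form (3.1)."""
    lk, ll, lm, li = np.log(K), np.log(L), np.log(M), np.log(I)
    first_order = [np.ones_like(lk), lk, ll, lm, li]
    # cross terms (beta_5..beta_10) then squares (beta_11..beta_14), each multiplied by 1/2
    second_order = [lk * ll, lk * lm, lk * li, ll * lm, ll * li, lm * li,
                    lk**2, ll**2, lm**2, li**2]
    return np.column_stack(first_order + [0.5 * z for z in second_order])

# toy usage with made-up input levels for three firm-year observations
K = np.array([120.0, 80.0, 200.0])
L = np.array([15.0, 9.0, 30.0])
M = np.array([60.0, 40.0, 110.0])
I = np.array([5.0, 3.0, 9.0])
print(translog_design(K, L, M, I).shape)   # (3, 15): intercept + 4 first-order + 10 second-order terms
```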

Obviously, without the squared and interaction terms, the Translog function becomes the Cobb-Douglas function. The Cobb-Douglas functional form has constant proportionate returns to scale, constant elasticity of factor substitution, and all pairs of inputs are assumed to be complementary. Those assumptions make it more restrictive.

A likelihood ratio (LR) test can be used to test the goodness of fit of these two functional forms, with:

H₀: the second-order coefficients of the Translog form are jointly zero (the Cobb-Douglas form is adequate); H₁: otherwise. Where L(H₀) is the log-likelihood value of the null model (H₀) and L(H₁) is the log-likelihood value of the alternative model (H₁), the test statistic is:

LR = −2[ln L(H₀) − ln L(H₁)]

The test statistic approximately follows a chi-squared distribution with degrees of freedom (df) equal to the difference between the df of the null model and the df of the alternative model.
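A minimal sketch of this test (the log-likelihood values and degrees of freedom below are hypothetical):

```python
from scipy.stats import chi2

def lr_test(loglik_null, loglik_alt, df):
    """LR = -2*[lnL(H0) - lnL(H1)], compared against a chi-squared(df) distribution."""
    lr = -2.0 * (loglik_null - loglik_alt)
    p_value = chi2.sf(lr, df)
    return lr, p_value

# hypothetical log-likelihoods: Cobb-Douglas (restricted) vs Translog (10 extra parameters)
print(lr_test(loglik_null=-412.7, loglik_alt=-398.2, df=10))
```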

In this thesis, two computer programs are used to estimate those stochastic frontier models: STATA and FRONTIER 4.1. Commands for estimating stochastic frontier models in STATA have developed considerably since the model's appearance; "frontier" for cross-sectional data and "xtfrontier" for panel data are the popular STATA commands for estimating technical efficiency. "frontier" can deal with models whose u_i follows the half-normal, truncated normal, exponential or gamma distribution; "xtfrontier" can handle the models in Battese and Coelli (1988) (time-invariant) and Battese and Coelli (1992) (time-varying). With only those two commands, users face many troubles testing other models. Fortunately, thanks to the recent paper of Belotti, Daidone, Ilardi, and Atella (2012) and their contribution in building the "sfcross" and "sfpanel" commands, STATA now gives full capability to use the models mentioned above with simple syntax. The second program is FRONTIER 4.1 from Coelli (1996), which can deal with the models in Battese and Coelli (1992) and Battese and Coelli (1995).

For the logarithmic functional forms (Cobb-Douglas and Translog), technical efficiency is calculated from the value of u following two different methods. The first is the method in Jondrow et al. (1982), which gives the formula TE = exp[−E(u|ε)]. The second is the method in Battese and Coelli (1988), which suggests the formula TE = E[exp(−u)|ε]. TE calculated from these equations is a positive value less than one; it is the ratio of actual output to the maximum level of output when there is no inefficiency.

In other words, TE is a comparison between the output of the actual firm and the output of a fully efficient firm.
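For the normal/half-normal case, the two point predictors can be compared using the standard closed forms for that case; the residuals and variance parameters below are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

def te_two_ways(eps, sigma_u, sigma_v):
    """Compare TE = exp(-E[u|eps]) (Jondrow et al., 1982) with
    TE = E[exp(-u)|eps] (Battese & Coelli, 1988) for the normal/half-normal model."""
    s2 = sigma_u**2 + sigma_v**2
    mu_star = -sigma_u**2 * eps / s2
    s_star = np.sqrt(sigma_u**2 * sigma_v**2 / s2)
    z = mu_star / s_star
    e_u = mu_star + s_star * norm.pdf(z) / norm.cdf(z)            # E[u | eps]
    te_jlms = np.exp(-e_u)
    te_bc88 = np.exp(-mu_star + 0.5 * s_star**2) * norm.cdf(z - s_star) / norm.cdf(z)
    return te_jlms, te_bc88

print(te_two_ways(np.array([-0.25, 0.05]), sigma_u=0.4, sigma_v=0.2))
```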

The estimation of the time-invariant fixed and random effects models as in Schmidt and Sickles (1984) is similar to panel regressions with fixed and random effects, so those models are estimated using least squares methods (the within estimator and GLS). After estimation, each firm's effect is compared with the highest in the sample and inefficiency is estimated as û_i = max(α̂_i) − α̂_i; in other words, these models assume that there is always an efficient firm in the sample with a technical efficiency level of 100% (u = 0). The same applies to other models estimated by least squares methods, such as those in Cornwell et al. (1990) and Lee and Schmidt (1993). Those fixed effects models include a large number of parameters, which gives rise to biases from the incidental parameters problem. In particular, the model in Cornwell et al. (1990) includes N × 3 parameters in the time function of technical inefficiency, so for the dataset used in this study, which has only 3 time periods, this model cannot be applied. Moreover, the command created by Belotti et al. (2012) for the model in Lee and Schmidt (1993) does not calculate the technical efficiency level of firms precisely: the model in Lee and Schmidt (1993) compares that level with the highest level in each year, while the command compares it with the highest level over all years, which leads to some confusion. Thus the technical efficiency level calculated by this model will not be shown in our results.

The models in Pitt and Lee (1981) and Battese and Coelli (1988) are both estimated by the maximum likelihood method. The difference comes from their distributional assumptions: the former has a half-normally distributed u while the latter assumes u to have a truncated normal distribution. The likelihood functions of these two models can be found in their original papers. The latter is more general as it includes one more parameter, μ (the mean of the normal distribution of which u is the truncation); the former is essentially a special case of the latter with μ = 0.

The models in Kumbhakar (1990) and Battese and Coelli (1992) share some properties: both are estimated with the maximum likelihood method and treat technical inefficiency as a function of time. The former uses the time function u_it = γ(t)·u_i with γ(t) = [1 + exp(bt + ct²)]⁻¹, where u_i is fixed through time but differs across firms and follows a half-normal distribution, while the latter uses the function u_it = η_it·u_i with η_it = exp[−η(t − T)] and u_i iid |N(μ, σ_u²)| (a normal distribution truncated at zero). These functional forms let the data decide the time behavior of u. The one in Battese and Coelli (1992) is simpler to compute, but the one in Kumbhakar (1990) is more flexible in capturing the dynamics of technical inefficiency.
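A short sketch of the two time paths (the parameter values are invented for illustration):

```python
import numpy as np

def gamma_kumb90(t, b, c):
    """Kumbhakar (1990) time path: u_it = gamma(t) * u_i."""
    return 1.0 / (1.0 + np.exp(b * t + c * t**2))

def eta_bc92(t, T, eta):
    """Battese and Coelli (1992) time decay: u_it = eta_it * u_i."""
    return np.exp(-eta * (t - T))

t = np.array([1, 2, 3])                   # e.g. the survey years 2005, 2007, 2009 recoded 1..3
print(gamma_kumb90(t, b=-0.2, c=0.01))    # made-up b and c
print(eta_bc92(t, T=3, eta=0.1))          # made-up eta; equals 1 in the final period
```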

The technical inefficiency effects model in Battese and Coelli (1995) includes the variables of age, size, type of ownership and location of the firm. By considering the impact of firm age on technical inefficiency, it also takes into account the impact of time, which is the key purpose of many previous studies. The stochastic frontier model and the technical inefficiency effects model are estimated simultaneously, so the kind of bias from a two-step estimation mentioned in Wang and Schmidt (2002) can be avoided. The "true" random and fixed effects models in Greene (2005) are estimated with an exponential distribution of technical inefficiency. The author suggests the "brute force" method of applying maximum likelihood to estimate all parameters (including the firm-specific constant terms) simultaneously in the "true" fixed effects model. The likelihood functions can be found in the original papers.

RESULT AND DISCUSSION

Empirical result

1.1 Cobb-Douglas functional form

a. Time-invariant models

Table 4 – 1: Time invariant models with Cobb – Douglas function

FE and REGLS: fixed effects model and random effects model in Schmidt and Sickles (1984); PL81: the model in Pitt and Lee (1981); BC88: the model in Battese and Coelli (1988)

*, **, *** correspond to significance levels of 1%, 5% and 10%, respectively

The numbers in the brackets are standard errors

                 FE         REGLS      PL81       BC88
lnK              -0.004     0.011**    0.012**    0.012**
average TE       21.71%     63.25%     98.32%     99.29%
min TE           16.67%     58.26%     98.11%     99.23%
max TE           100.00%    100.00%    99.01%     99.51%

Table 4 – 1 shows the results from estimating time-invariant models with the Cobb-Douglas functional form. The first two columns show the results for the fixed and random effects models in Schmidt and Sickles (1984) respectively; the result from estimating the model in Pitt and Lee (1981) is in the third column; the last column is the result from the model in Battese and Coelli (1988). The technical efficiency level calculated from the fixed and random effects models as in Schmidt and Sickles (1984) has an average of 21.71% with the range 16.67% – 100% and 63.25% with the range 58.26% – 100%, respectively. Because all firm-specific factors are taken into technical inefficiency, the value of u is rather large and varying. The models with half-normal and truncated normal distributions in Pitt and Lee (1981) and Battese and Coelli (1988) give similar results: both show a high level of technical efficiency (an average of 98.32% with the range 98.11% – 99.01% and an average of 99.29% with the range 99.23% – 99.51%, respectively). In other words, the value of u is small: almost all firms are technically efficient, or technical inefficiency is absent in this case.

Even though the results show different levels of technical efficiency, the coefficients of the production function are quite consistent. Except for capital (K) in the fixed effects model of Schmidt and Sickles (1984), all input coefficients are significant and positive, as expected. The consistency in the form of the production function helps to support the validity of the technical inefficiency estimates. The fixed effects model gives a negative coefficient for capital; however, this value is rather small and not significant. Because the fixed effects model includes a large number of variables, the difference in the sign of capital can be due to the presence of a large number of dummy parameters. Using a Hausman test to compare the fixed and random effects models in Schmidt and Sickles (1984), we reject the null hypothesis of no systematic difference in coefficients between the two models, which implies the fixed effects model is preferred even though it is less efficient.

b. Time-varying models

Table 4 – 2 shows the results of time-varying models with the Cobb-Douglas functional form. The first three columns are the results of the models in Kumbhakar (1990), Battese and Coelli (1992) and Lee and Schmidt (1993), respectively. Columns [4] and [5] are the results from the technical inefficiency effects model in Battese and Coelli (1995) without and with determinants. The last four columns are the results of the models in Greene (2005); columns [8] and [9] are the general forms with variables standing for observable heterogeneity. Most results are quite similar, and most time-varying models show a high level of technical efficiency. The average technical efficiency value in the model of Kumbhakar (1990) is 90.71% in 2005 and reaches 100% in 2007 and 2009. The model of Battese and Coelli (1992) gives a technical efficiency level of 92.14% in 2005, which goes up to 95.67% in 2007 and 97.59% in 2009.

Table 4 – 2: Time varying models with Cobb – Douglas function

KUMB90: model in Kumbhakar (1990); BC92: the model in Battese and Coelli (1992); FELS: the model in Lee and Schmidt (1993); BC95: the model in Battese and Coelli (1995); BC95w: BC95 with determinants; TRE: "true" random effects model; TFE: "true" fixed effects model

If no determinant is included in the technical inefficiency effects model in Battese and Coelli (1995) (column [4]), all firms are shown to be efficient and similar to one another (nearly 100% technical efficiency in all three years). However, with those determinants included (age, size, location and ownership type) (column [5]), firms differ in their efficiency levels, although most remain quite high (technical efficiency ranges from 92.61% to 100%). In other words, the coefficients of those determinants pick up part of the value of u. The "true" random effects model in Greene (2005) (column [6]) shows that inefficiency is absent: once separated from the residuals, the firm-specific factor w_i gains value and absorbs technical inefficiency. STATA cannot find a result for the "true" fixed effects model, which could be due to misspecification of the model; in other words, when firm-specific heterogeneity is distinguished, the existence of u is not supported and all firms appear efficient. This result is consistent with the one from the "true" random effects model.

The estimated coefficients of inputs from the general form of the "true" random effects model (column [8]) are quite similar to the results of the technical inefficiency effects model. However, the determinants are more significant when included in the production function. Although those determinants do not have much impact on technical inefficiency, their effects on general productivity are significant; therefore, they should not be ignored when analyzing the firms' production function.

[Table 4 – 2, lnK row: 0.016*** 0.016*** 0.034 0.012*** 0.009* 0.011** 0.004]

Again, STATA cannot compute a result for the general form of the "true" fixed effects model in Greene (2005), which implies that technical inefficiency is absent from this model.

Throughout the results of those time-varying models, despite the differences in technical efficiency levels, the form of the production function is quite consistent. The roles of capital, labor, materials and indirect costs are significant, and all input coefficients have positive signs, as expected. As mentioned above, a consistent form of the production function helps validate the existence of technical inefficiency and the specification of the model.

[Determinants of technical inefficiency, age row — bc95 (CD): 0.000004; bc95 (T): 0.010; General TRE (CD): 0.001*; General TRE (T): 0.001**]

1.2 Translog functional form

a. Time-invariant models

Table 4 – 4: Time invariant models with Translog function

FE and REGLS: fixed effects model and random effects model in Schmidt and Sickles (1984); PL81: the model in Pitt and Lee (1981); BC88: the model in Battese and Coelli (1988)

                 FE         REGLS      PL81       BC88
lnK              0.174***   0.103**    0.103**    0.103**
average TE       51.78%     100.00%    99.98%     99.70%
min TE           41.54%     100.00%    99.98%     99.71%
max TE           100.00%    100.00%    99.98%     99.76%

Table 4 – 4 shows the results from estimating time-invariant models with the Translog functional form. The first two columns show the results for the fixed and random effects models in Schmidt and Sickles (1984) respectively; the result from estimating the model in Pitt and Lee (1981) is in the third column; the last column is the result from the model in Battese and Coelli (1988). The random effects models in Schmidt and Sickles (1984), Pitt and Lee (1981) and Battese and Coelli (1988) (columns [2], [3] and [4]) give similar results, but these are quite different from the fixed effects model (column [1]). The variance of u_i is small relative to the variance of v_i and all their estimated technical efficiency levels are about 100%, implying the absence of technical inefficiency effects. Obviously, when technical inefficiency is absent or its value is fractional, differences among those random effects models' results will be small. The result from the fixed effects model is quite divergent, with an average technical efficiency level of about 51.78% and a range of 41.54% – 100%.

Coefficients estimated from the three random effects models in Schmidt and Sickles (1984), Pitt and Lee (1981) and Battese and Coelli (1988) are similar in value and significance level, and most coefficients are significant. The fixed effects model in Schmidt and Sickles (1984) has just a few differences in the value and significance of coefficients. Generally, the form of the production function is quite consistent among those models.

b. Time-varying models

Table 4 – 5: Time varying models with Translog function

KUMB90: model in Kumbhakar (1990); BC92: the model in Battese and Coelli (1992); FELS: the model in Lee and Schmidt (1993); BC95: the model in Battese and Coelli (1995); BC95w: BC95 with determinants; TRE: "true" random effects model; TFE: "true" fixed effects model

[Table 4 – 5, lnK row: 0.095** 0.092** 0.237 0.087** 0.103*** 0.085**]

Table 4 – 5 shows the results of time-varying models with the Translog functional form. The first three columns are the results of the models in Kumbhakar (1990), Battese and Coelli (1992) and Lee and Schmidt (1993), respectively. Columns [4] and [5] are the results from the technical inefficiency effects model in Battese and Coelli (1995) without and with determinants. The last four columns are the results of the models in Greene (2005); columns [8] and [9] are the general forms with variables standing for observable heterogeneity. The time-varying models in Kumbhakar (1990) and Battese and Coelli (1992) give quite similar results: the average technical efficiency level is 90.54% in 2005 and 100% in 2007 and 2009 with the model of Kumbhakar (1990), while the Battese and Coelli (1992) model gives 90.40% in 2005, 96.95% in 2007 and 99.03% in 2009. The result from FRONTIER 4.1 shows that the model in Battese and Coelli (1995) has no technical inefficiency effects when no determinant is included. When those determinants appear, the range of firms' technical efficiency widens (83.41% to 100%). The "true" random effects model gives the result that all firms are efficient, with technical efficiency levels of about 99.95%. The "true" fixed effects model cannot produce a result, which implies a misspecification of the model; in other words, technical inefficiency does not exist.

Discussion

2.1 Models without distribution assumption

From the estimation results, the first and most obvious inference is that models without a distributional assumption, as in Schmidt and Sickles (1984) and Lee and Schmidt (1993), give wide-gap efficiency ranges and lower average technical efficiency levels. While all models with a distributional assumption suggest technical efficiency levels above 90%, the fixed effects model in Schmidt and Sickles (1984) gives an average technical efficiency of 21.71% with the Cobb-Douglas function and 51.78% with the Translog function. Although the random effects model finds no technical inefficiency effects for the Translog function (this may be because the Translog functional form has more ability to explain the production function than the Cobb-Douglas form), it still gives an average technical efficiency level of 63.25% with the Cobb-Douglas functional form. The differences in technical efficiency levels across firms are also large in those results.

This result comes from the question of what is included in technical inefficiency. Without any assumption on the distribution of u, all firm-specific factors are counted as technical inefficiency. This widens the efficiency gaps among firms and lowers the average level of technical efficiency. Schmidt and Sickles (1984) considered this a weakness of their models: they noted that the technical inefficiency estimated from those models includes all fixed effects, while some of them (in their example, capital stock) may not relate to technical inefficiency. Also, with these models, technical inefficiency can only be calculated by comparing with the best firm in the sample rather than with an absolute standard, so sampling bias arises. With a sample whose observations are similar, most firms will appear efficient; with a sample in which a firm or a group of firms has extremely large specific effects, most firms will show low efficiency, and vice versa. Those biases can make conclusions and implications in efficiency analysis less reliable.

2.2 The distribution of technical inefficiency

As mentioned above, different assumptions about the distribution of technical inefficiency can lead to different estimation results. Under the half-normal and exponential distributions, most observations have an inefficiency level near zero, so most firms appear efficient. Pitt and Lee (1981) support this idea by arguing that, over time, most firms recognize their inefficiency level and adjust to become more efficient. Sometimes, however, this assumption is too strong. A truncated normal distribution with a positive mean helps avoid the problem by allowing u to cluster around a particular positive value, so the level of technical efficiency does not need to be high.

How suitable a distributional assumption is may depend on the nature of the industry. Consider an industry with only a small amount of firm entry and exit: most firms stay over time and, for the reason mentioned above, become more efficient. This situation fits the assumption of a half-normal or exponential distribution. In another industry where a large number of firms enter and exit every year, the average level of efficiency should not be so high, because new firms tend to be less efficient than the old ones, being short of experience or small in scale. However, as noted above, the truncated normal distribution is more general than the others: when μ equals zero it becomes a half-normal distribution, and when μ is not positive its shape does not differ much from the others. Thus, letting the data decide the value of μ, or, in other words, the type of distribution, reflects the nature of technical inefficiency better. The μ values estimated from the truncated models above are all negative, which implies that the distributions of u are in all cases similar to a half-normal one.

Table 4 – 6: Value of μ in models with truncated distribution (columns: BC88, BC92, BC95 without det., BC95 with det.; values not reproduced by the extraction)
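A small numerical check of the claim that a truncated normal N+(μ, σ²) with μ = 0 is exactly the half-normal, and with μ < 0 still piles its mass near zero; the value of σ and the two μ values below are illustrative only:

```python
import numpy as np
from scipy import stats

# Illustrative parameter values only.
sigma_u = 0.3
grid = np.linspace(0.0, 1.5, 6)

def trunc_pdf(mu, sigma, x):
    # Density of N+(mu, sigma^2): a normal truncated from below at zero.
    a = (0.0 - mu) / sigma                  # standardized lower bound
    return stats.truncnorm.pdf(x, a, np.inf, loc=mu, scale=sigma)

half = stats.halfnorm.pdf(grid, scale=sigma_u)      # half-normal benchmark
trunc_zero = trunc_pdf(0.0, sigma_u, grid)          # mu = 0: identical to half-normal
trunc_neg = trunc_pdf(-0.2, sigma_u, grid)          # mu < 0: mass still piled near zero

for x, h, t0, tn in zip(grid, half, trunc_zero, trunc_neg):
    print(f"u={x:4.2f}  half-normal={h:5.3f}  trunc(mu=0)={t0:5.3f}  trunc(mu=-0.2)={tn:5.3f}")
```

With μ = 0 the two densities coincide, and with μ = −0.2 the density is still decreasing from zero onward, which is consistent with the negative μ estimates reported in Table 4 – 6.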

2.3 Technical inefficiency and firm-specific effects

The technical inefficiency effects model in Battese and Coelli (1995) shows different results between the case with determinants and the case without them (columns [5] and [6] in tables 4 – 2 and 4 – 5). The reason is that, once determinants of technical inefficiency appear in the log-likelihood function, their parameters draw value away from other parts of the model and contribute it to the value of technical inefficiency. Thus, the more the firms differ, the larger the estimated value of u. By adding more determinants that make firms’ characteristics more divergent, we can change the result. For this reason, using this model requires careful consideration of what can serve as determinants of technical inefficiency. Previous literature has suggested many of them, such as firm size (Mead and Liedholm, 1998; Van Biesebroeck, 2005), firm age (Mueller, 1972; Jovanovic, 1982), location (Glancey, 1998; Devereux, Griffith, and Simpson, 2007; Vu, 2003), ownership (Bevan et al., 1999; Chaganti & Damanpour, 1991), export (Abdullayeva, 2010; Bigsten et al., 2000; Bigsten et al., 2004), and innovation (Janz, Lööf, & Peters, 2003; Koellinger, 2008). The list is far from exhaustive.
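For reference, the inefficiency effects specification of Battese and Coelli (1995) discussed here can be stated compactly (standard notation, not the thesis's own variable labels):

$$u_{it} = z_{it}\delta + W_{it}, \qquad W_{it} \sim N(0, \sigma_u^2)\ \text{truncated from below at } -z_{it}\delta,$$

so that $u_{it} \geq 0$, or equivalently $u_{it} \sim N^{+}(z_{it}\delta, \sigma_u^2)$. The determinants $z_{it}$ shift the mean of the pre-truncation distribution, which is why adding determinants reallocates explanatory power toward the inefficiency term.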

While firm-specific effects add to the value of technical inefficiency in the model of Battese and Coelli (1995), they take value away from technical inefficiency in the model of Greene (2005). Greene proposes the “true” fixed and random effects models in order to separate technical inefficiency from unobserved heterogeneity between firms. Under this approach, the larger the firm-specific effects are, the smaller the estimated value of u becomes. In our empirical results, the “true” random effects model, after separating out all firm-specific effects, leaves only a fractional value for u, so all firms are nearly 100% efficient. The failure of the “true” fixed effects model to produce estimates can be read as saying that, essentially, there are no technical inefficiency effects with a specific distribution: all factors that explain the differences between firms are the time-invariant firm effects and random noise.

Given this distinction, it is worth comparing how these authors view the nature of technical inefficiency. Battese and Coelli (1995) treat technical inefficiency as the combination of all factors that make production effectiveness differ from one firm to another. From this viewpoint, we can identify the determinants of technical inefficiency. Moreover, some of those determinants, such as size, ownership and export, can be influenced to help improve a firm’s efficiency. Thus, the model in Battese and Coelli (1995) is helpful in studies that aim to give policy implications for firms and for governments in order to improve technical efficiency. The model in Greene (2005), by contrast, treats inefficiency as a factor that varies freely over time and is random and uncorrelated with the other parts of the model. From Greene’s viewpoint, no firm-specific factor can affect inefficiency; the only characteristic that distinguishes this random factor from the random noise is its particular distribution. Therefore, the technical efficiency level estimated from this model can hardly be used to give firms guidance on improving their efficiency; it simply measures the distance from firms to the frontier.

Although the model in Greene (2005) does not explain much about technical inefficiency, it can be used as a powerful tool for analyzing firm productivity and its sources. Recall the general forms of the “true” fixed and random effects models:

“True” fixed effects model: $y_{it} = \alpha_i + \beta X_{it} + \gamma Z_{it} + v_{it} - u_{it}$

“True” random effects model: $y_{it} = (\alpha + w_i) + \beta X_{it} + \gamma Z_{it} + v_{it} - u_{it}$

where $v_{it}$ is the random noise, $u_{it}$ is technical inefficiency (in Greene’s sense), the firm-specific unobserved heterogeneity is stored in $\alpha_i$ or $w_i$, $X_{it}$ is a vector of input values, $Z_{it}$ is a vector of firm-specific observed factors, and $\beta$ and $\gamma$ are coefficients to be estimated. Three separate factors can be considered here. The first is unobservable heterogeneity ($\alpha_i$ or $w_i$), which affects firm production but cannot be measured by specific variables. The second is observable heterogeneity ($\gamma Z_{it}$), which influences firm production and can be measured. The last is technical inefficiency ($u_{it}$), which can change freely through time. The “true” fixed effects model separates them without assuming uncorrelatedness, whereas the “true” random effects model additionally requires that $w_i$ be uncorrelated with the other parts of the model. The observable part helps us draw implications for improving efficiency, so the larger this part is, the better we understand firms. The unobservable part remains in the form of an “effect”; however, as more of it becomes observable over time, it too can provide information about firm performance.
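A minimal simulation sketch of this composed structure, with all parameter values and variable names hypothetical, makes the three components explicit and shows how their contributions to the non-input variation could be compared:

```python
import numpy as np

# Hypothetical simulation of the "true" random effects structure
#   y_it = (alpha + w_i) + beta * x_it + gamma * z_it + v_it - u_it
# All parameter values below are illustrative only.
rng = np.random.default_rng(1)
N, T = 200, 3
alpha, beta, gamma = 1.0, 0.6, 0.3

w_i = rng.normal(0.0, 0.25, size=N)              # unobserved heterogeneity, time invariant
x = rng.normal(3.0, 0.5, size=(N, T))            # input (in logs)
z = rng.binomial(1, 0.4, size=(N, T))            # an observed firm characteristic (dummy)
v = rng.normal(0.0, 0.10, size=(N, T))           # random noise
u = np.abs(rng.normal(0.0, 0.20, size=(N, T)))   # half-normal inefficiency, varies freely over time

y = alpha + w_i[:, None] + beta * x + gamma * z + v - u
print(f"mean log output: {y.mean():.2f}")

# Share of the composed (non-input) variation coming from each component.
parts = {
    "unobserved heterogeneity w_i": np.var(np.repeat(w_i, T)),
    "observed heterogeneity gamma*z": np.var(gamma * z),
    "random noise v": np.var(v),
    "inefficiency u": np.var(u),
}
total = sum(parts.values())
for name, var in parts.items():
    print(f"{name:32s}: {var / total:.2f}")
```

In an actual estimation these shares are not observed directly; the point of the “true” models is to recover them under the assumptions listed above.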

2.4 Identification issue

With the objective of reviewing methods of estimating technical inefficiency with the stochastic frontier model using panel data, this thesis has tried to describe the divergence among different aspects of previous studies on this matter. One general comment that can be made is that, over a long development of both theoretical and empirical research, this field has been analyzed deeply and broadly in many aspects to solve different problems. However, the viewpoints about technical inefficiency hardly come to a consensus. Each model has its own strengths and weaknesses that make it suitable only for specific situations. Thus, the choice of model depends on what definition of technical inefficiency is adopted and what assumptions one is willing to make about its nature. In other words, it depends on the identification of technical inefficiency.

Several factors influence the choice of models and the assumptions that can be made: the theory one relies on, the reality of the industry or sample being analyzed, and the availability of data. The models mentioned above differ in the way they perceive technical inefficiency. The fixed effects approach in Schmidt and Sickles (1984), Cornwell et al. (1990) and Lee and Schmidt (1993) treats all firm effects as part of technical inefficiency; with the within estimator, it requires no assumption about either the distribution of u or the correlation between u and other parts of the model. The “true” fixed effects model in Greene (2005) separates technical inefficiency from observable and unobservable heterogeneity; estimated with firm-specific intercepts, it requires only the distributional assumption. The random effects model in Schmidt and Sickles (1984) also treats all firm-specific effects as part of technical inefficiency but uses a random-effects approach, so it requires only the assumption of uncorrelatedness. The models in Pitt and Lee (1981), Battese and Coelli (1988, 1992, 1995), Kumbhakar (1990) and the “true” random effects model in Greene (2005) require both assumptions; however, the “true” random effects model separates technical inefficiency from those heterogeneities while the others cannot. In general, the perception of what factors are included in technical inefficiency and whether they are correlated with other parts of the model is a key factor in model choice.

The reality, or nature, of the chosen sample also affects the way assumptions are imposed. As mentioned above, each industry should be considered carefully when the distribution of u (half-normal, exponential or truncated normal) is chosen. Consider an industry in which all firms stay close to the frontier (an old industry whose firms are almost technically efficient) and no new technology appears during the survey period (no shift in the frontier). Normally, year by year, firms move closer to the frontier, but because the distance is already small, the changes are not worth modelling. In such cases, one can ignore time variation in the level of technical inefficiency and use the time-invariant models. The evidence from the empirical results of this study shows little difference between the technical efficiency levels estimated with time-invariant models (Pitt and Lee (1981) and Battese and Coelli (1988)) and time-varying models (Kumbhakar (1990), Battese and Coelli (1992, 1995)). Besides, the assumption on model specification also needs careful consideration when using the model in Battese and Coelli (1995): as mentioned above, the way we choose the form of this model influences the estimated result. For example, to analyze the technical inefficiency of an industry in which 100% of products are sold domestically, a variable related to exporting is unnecessary. Thus, taking the specific characteristics of the targeted sample into account helps researchers choose the most parsimonious and suitable model.

CONCLUSION
