CFA is employed to establish how well the measurement items represent the measurement dimensions and constructs. Although Cronbach's alpha provides a measure of internal consistency and is useful for assessing unidimensionality (Hair et al. 2010), CFA is considered a valid and reliable test of unidimensionality. CFA is therefore employed to confirm the validity and reliability of all measurement items (observed variables) measuring the constructs. The items extracted through EFA were used as the basis for the subsequent CFA.
7.1.1 Reliability assessment
Reliability of scales (i.e. measurement items) and construct reliability assessments are employed to test the internal consistency of the set of measurement items and to assess whether they measure what they are intended to measure with respect to the measurement dimensions and constructs. The aim is to reduce measurement error and prevent further errors from arising in the data analysis. As stated in the methodology chapter (Chapter 5), all measurement items must be reliable and consistent in order to produce accurate results (Hair et al. 2010).
To test the reliability of measurement items, Cronbach's alpha coefficient is commonly used to assess the items under each measurement dimension and construct (Cronbach 1951). Its values range between 0 and 1, with higher values indicating better measurement item reliability (Hair et al. 2010). According to Pallant (2010), the Cronbach's alpha value should be above 0.7 for a scale to be considered reliable; however, Nunnally (1978) stated that a Cronbach's alpha of 0.6 is sufficient.
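To illustrate how Cronbach's alpha is computed, a minimal sketch with hypothetical Likert-scale responses (not the data analysed in this study) is shown below:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                               # number of items
    item_variances = scores.var(axis=0, ddof=1)       # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-point Likert responses from six respondents to three items
responses = [[4, 4, 5],
             [3, 3, 3],
             [5, 4, 5],
             [2, 2, 3],
             [4, 5, 4],
             [3, 3, 4]]
alpha = cronbach_alpha(responses)
# alpha exceeds the 0.7 threshold, as the items move together across respondents
```

Because the three hypothetical items rise and fall together across respondents, the resulting alpha comfortably clears the 0.7 criterion.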
On the other hand, composite reliability (CR) is also an indicator of convergent validity. Its value ranges between 0 and 1; a value greater than 0.7 indicates that internal consistency exists, meaning that the measurement items represent the same measurement construct.
Composite reliability is calculated from the squared sum of the factor loadings ($L_i$) for each construct and the sum of the error variance terms for that construct ($e_i$), as shown in Equation 7.1 (Hair et al. 2010).

$$\mathrm{CR} = \frac{\left(\sum_{i=1}^{n} L_i\right)^{2}}{\left(\sum_{i=1}^{n} L_i\right)^{2} + \sum_{i=1}^{n} e_i}$$

Equation 7.1: Composite reliability equation
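Equation 7.1 can be sketched in code as follows. The loadings are hypothetical, and each item's error variance is taken as $1 - L_i^2$, which holds when the loadings are standardised:

```python
import numpy as np

def composite_reliability(loadings):
    """Composite reliability (Equation 7.1) from standardised factor loadings."""
    L = np.asarray(loadings, dtype=float)
    sum_sq = L.sum() ** 2                  # squared sum of the loadings
    error_var = (1.0 - L ** 2).sum()       # sum of error variances, e_i = 1 - L_i^2
    return sum_sq / (sum_sq + error_var)

# Hypothetical standardised loadings for a four-item construct
cr = composite_reliability([0.82, 0.76, 0.71, 0.68])
# cr > 0.7 would indicate internal consistency among the four items
```

With these loadings the CR is about 0.83, above the 0.7 cut-off noted earlier.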
Another technique that can be used to measure construct reliability is the squared multiple correlation (SMC) of the measurement items. The SMC is an item reliability coefficient: it assesses the reliability of each measurement item under each measurement dimension, and is calculated as the square of the item's standardised loading value. For instance, a standardised loading of 0.8 yields an SMC of 0.64. An SMC greater than 0.5 is deemed acceptable, although an SMC of 0.3 is used by some authors as an indicator of acceptable measurement items (Cunningham, Holmes-Smith & Coote 2006).
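The SMC computation is simply the square of each standardised loading; a small illustration with hypothetical items and both cut-offs:

```python
# Hypothetical standardised loadings for three measurement items
loadings = {"item1": 0.80, "item2": 0.72, "item3": 0.55}

# Squared multiple correlation (item reliability) for each item
smc = {item: loading ** 2 for item, loading in loadings.items()}

# item1 and item2 meet the strict 0.5 cut-off;
# item3 (SMC about 0.30) passes only the lenient 0.3 rule
acceptable_strict = [item for item, value in smc.items() if value > 0.5]
acceptable_lenient = [item for item, value in smc.items() if value > 0.3]
```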
7.1.2 Validity Assessment
Validity testing is critical for assessing the accuracy of a measure and ensuring that the measurement items represent what they are intended to measure (Cunningham, Holmes-Smith & Coote 2006). CFA and structural equation modeling can be used for testing convergent validity and discriminant validity (Anderson & Gerbing 1988; Hair et al. 2010).
7.1.2.1 Convergent validity
Convergent validity aims to assess the consistency of the measurement items under each measurement construct. It confirms that those measurement items actually reflect the latent constructs they are designed to measure. Factor loading is a critical consideration, as high factor loadings on a latent factor indicate that the measurement items involved converge on a common latent factor. The standardised loading estimates are used to evaluate the factor loadings: each standardised loading should be at least 0.5, and the loadings should be statistically significant. Construct validity can also be assessed through the average variance extracted (AVE) value.
The dimensions or constructs have construct validity when the value of composite reliability (CR) is greater than the value of AVE (Cunningham, Holmes-Smith & Coote 2006; Kripanont 2007). AVE is calculated from the standardised loading values and the error variances, using the expression presented in Equation 7.2 (Fornell & Larcker 1981). According to Nunnally and Bernstein (1994), the value of AVE should be greater than 0.4.
$$\rho_{vc(\eta)} = \frac{\sum_{i=1}^{p} \lambda_i^{2}}{\sum_{i=1}^{p} \lambda_i^{2} + \sum_{i=1}^{p} \mathrm{Var}(\varepsilon_i)}$$

Equation 7.2: Average variance extracted equation

Where: $\rho_{vc(\eta)}$ denotes the average variance extracted; $\lambda_i$ denotes the factor loading of item $i$; $\varepsilon_i$ represents the measurement error of item $i$; and $\eta$ denotes the construct.
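Equation 7.2 can likewise be sketched in code. The loadings are hypothetical, and $\mathrm{Var}(\varepsilon_i)$ is taken as $1 - \lambda_i^2$, which holds for standardised items (so the AVE reduces to the mean of the squared loadings):

```python
import numpy as np

def average_variance_extracted(loadings):
    """AVE (Equation 7.2) from standardised factor loadings of one construct."""
    lam_sq = np.asarray(loadings, dtype=float) ** 2
    error_var = 1.0 - lam_sq               # Var(eps_i) for standardised items
    return lam_sq.sum() / (lam_sq.sum() + error_var.sum())

# Hypothetical standardised loadings for a three-item construct
ave = average_variance_extracted([0.80, 0.70, 0.60])
# ave is close to 0.5, above the 0.4 threshold of Nunnally and Bernstein (1994)
```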
7.1.2.2 Discriminant validity
Discriminant validity aims to confirm the uniqueness of the measurement items, dimensions, or constructs in the model, i.e. that each is truly distinct from the others (Hair et al. 2010).
Four distinct methods can be used for testing discriminant validity. The first method examines Pearson's correlations between measurement items (or measurement dimensions), computed in AMOS. Measurement items under the same measurement dimension should be highly correlated with one another while having lower correlations with measurement items in other measurement dimensions. Similarly, measurement dimensions under the same construct should be highly correlated while having lower correlations with measurement dimensions in other constructs. In other words, the measurement items (or measurement dimensions) must cluster into their respective dimension (or construct) (Cunningham, Holmes-Smith & Coote 2006; Kripanont 2007).
The second method uses the covariances, this time to inspect the correlations between measurement dimensions or constructs rather than between measurement items (observed variables). If the correlation between measurement dimensions or constructs in the CFA is less than 0.9, those constructs are unidimensional and are unlikely to have a problem with discriminant validity (Bagozzi, Yi & Phillips 1991; Cunningham, Holmes-Smith & Coote 2006; Kline 2011).
The third method is the squared correlation (R²) and average variance extracted (AVE) assessment. As suggested by Fornell and Larcker (1981), measurement dimensions or constructs meet the discriminant validity criterion when the AVE is greater than R². The fourth method, which Anderson and Gerbing (1988) argued is superior for assessing discriminant validity, is the examination of the chi-square difference test between constructs through CFA.
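The third method amounts to a direct comparison of each construct's AVE against the shared variance; a sketch with hypothetical values for two constructs:

```python
# Hypothetical values for two constructs A and B
ave_a, ave_b = 0.55, 0.48            # average variance extracted for each construct
correlation_ab = 0.60                # estimated correlation between A and B

r_squared = correlation_ab ** 2      # shared variance between the constructs, 0.36

# Fornell and Larcker (1981): each AVE must exceed the shared variance
discriminant_valid = ave_a > r_squared and ave_b > r_squared
```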
The test compares the chi-square of the first model (model 1), in which the correlation between the constructs is freely estimated, with that of the second model (model 2), in which the correlation is constrained to 1. If the chi-square difference test yields a significant result, the pair of constructs is said to meet the discriminant validity criterion.
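The chi-square difference test can be sketched as follows. The fit statistics are hypothetical; for a one-degree-of-freedom difference, the p-value can be obtained from the complementary error function without any statistics library:

```python
import math

# Hypothetical fit statistics for one pair of constructs from two CFA runs
chisq_free, df_free = 210.4, 103     # model 1: correlation freely estimated
chisq_fixed, df_fixed = 218.9, 104   # model 2: correlation constrained to 1

delta_chisq = chisq_fixed - chisq_free   # constraining the correlation worsens fit
delta_df = df_fixed - df_free            # one extra degree of freedom

# For df = 1, P(chi2 > x) = erfc(sqrt(x / 2))
p_value = math.erfc(math.sqrt(delta_chisq / 2.0))

# A significant worsening of fit means the two constructs are distinct
discriminant_valid = p_value < 0.05
```

Here the constrained model fits significantly worse, so this hypothetical pair of constructs would meet the discriminant validity criterion.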