Data Processing and Data Analysis

Part of the document *Corporate social responsibility, employee commitment and organizational performance in banking industry in Thai Nguyen province* (pp. 78-83)

Doordan (1998) pointed out that several types of analysis are used in qualitative research, the most widely used consisting of narrative description and classification according to pre-established categories. Llanes (2004, cited in Hoang Trong and Chu Nguyen Mong Ngoc, 2008) noted that in qualitative research, data analysis can simply involve the careful organization of interviews. In this research, quantitative data were processed, analyzed, organized, and presented in categories.

After collecting the data, the author used the Statistical Package for the Social Sciences (SPSS) for quantitative data analysis, category formation, and tabulation.

Descriptive and inferential statistics, including Cronbach’s alpha, exploratory factor analysis (EFA), and correlation and regression analysis, were used. The analysis process was implemented as follows:

3.5.1. Descriptive Statistics

Descriptive statistical analysis was applied to the survey data according to demographic variables, including firm size and industry. The questionnaire responses were also summarized to obtain basic information about the survey sample.
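The kind of demographic summary described above can be sketched in Python with pandas. The column names (`firm_size`, `industry`, `commitment_score`) and the data are hypothetical, used only to illustrate frequency counts and group-wise descriptive statistics:

```python
import pandas as pd

# Hypothetical survey responses; these variables and values are
# illustrative, not the study's actual data.
df = pd.DataFrame({
    "firm_size": ["small", "large", "small", "large", "small"],
    "industry": ["banking", "banking", "retail", "banking", "retail"],
    "commitment_score": [3.2, 4.1, 2.8, 4.5, 3.0],
})

# Frequency counts for a demographic variable
size_counts = df["firm_size"].value_counts()

# Descriptive statistics of a Likert-scale item, broken down by group
by_size = df.groupby("firm_size")["commitment_score"].agg(["mean", "std", "count"])
```

The same pattern extends to any demographic variable by changing the grouping column.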

3.5.2. Purification Process

After descriptive analysis, the data were purified using the following methods:

Exploratory Factor Analysis (EFA)

Construct validity is the ability of a measure to confirm a network of related hypotheses generated from a theory based on constructs. Internal construct validity was assessed using factor analysis. Because factor analysis provides evidence of the dimensionality of a measure, factor analysis with Varimax rotation was used to determine the number of factors contained in each dimension. An eigenvalue greater than 1 is considered to indicate the presence of an interpretable factor (Kaiser, 1958); therefore, factors with eigenvalues greater than 1 were retained for further analysis. This rule is the default used by SPSS unless another is specified (Stevens, 2002, cited in Hoang Trong and Chu Nguyen Mong Ngoc, 2008).

Construct validity was further evaluated through convergent validity, which refers to the extent to which (i) different scales of a construct indicate the same dimension, and (ii) multiple measures of the same construct agree (Kerlinger, 1986, cited in Hoang Trong and Chu Nguyen Mong Ngoc, 2008). Convergent validity exists ‘when measures of the same concept have similar patterns of correlations with other variables’ (Weisberg et al., 1996, cited in Hoang Trong and Chu Nguyen Mong Ngoc, 2008). Convergence was measured following the guidelines of Bagozzi (1981, cited in Hoang Trong and Chu Nguyen Mong Ngoc, 2008), who states that correlations among items within a dimension should be high. Accordingly, convergent validity was assessed by measuring the correlations among the corresponding constructs under each of the four dimensions: (i) strategy; (ii) processes; (iii) technology; and (iv) people. High correlations among constructs under each dimension are considered to indicate convergent validity, and convergent validity is established if all correlations between constructs are higher than 0.5 (Liu, 2001).
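The correlation-based convergence check can be sketched with numpy. The score matrix below is hypothetical (three constructs under one dimension, five respondents); the check applies the rule from the text that all between-construct correlations within a dimension should exceed 0.5:

```python
import numpy as np

# Illustrative construct scores (rows = respondents, columns = constructs);
# these numbers are invented for demonstration.
scores = np.array([
    [3, 3, 4],
    [4, 4, 5],
    [2, 3, 3],
    [5, 4, 5],
    [1, 2, 2],
], dtype=float)

# Pairwise correlations among the constructs
corr = np.corrcoef(scores, rowvar=False)

# Convergent validity rule from the text: every off-diagonal correlation
# within the dimension should exceed 0.5 (Liu, 2001).
off_diag = corr[~np.eye(corr.shape[0], dtype=bool)]
convergent = bool((off_diag > 0.5).all())
```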

The number of factors is determined based on the eigenvalue index, which represents the variance explained by each factor. According to the Kaiser criterion, factors whose eigenvalues are smaller than 1 are excluded from the model (Garson, 2003).

Variance explained criterion: the total variance explained should be greater than 50% (Hair et al., 1998, cited in Hoang Trong and Chu Nguyen Mong Ngoc, 2008).

In order for the measurement scale to achieve convergent validity, the correlation coefficient between the variables and the factor loading must be greater than or equal to 0.5 within a factor. Principal component analysis with Varimax rotation is performed to ensure that the number of factors is kept to a minimum.

EFA is considered to fit the data set if it satisfies the following criteria:

First, the fit between EFA and the sample data is verified by the Kaiser-Meyer-Olkin (KMO) statistic: if the KMO value is greater than 0.5, EFA is appropriate (Garson, 2003, cited in Hoang Trong and Chu Nguyen Mong Ngoc, 2008); if it is less than 0.5, EFA is not suitable for the collected data. Second, the number of factors is determined based on the eigenvalue index, which represents the portion of variance explained by each factor; according to the Kaiser criterion, factors whose eigenvalues are less than 1 are removed from the research model (Garson, 2003). Third, the total variance explained must be greater than 50% (Hair et al., 1998, cited in Hoang Trong and Chu Nguyen Mong Ngoc, 2008). Fourth, to ensure the convergence of the scales, the single correlation coefficients between variables and factor loadings must be greater than or equal to 0.5 in one factor (Gerbing and Anderson, 1988, cited in Hoang Trong and Chu Nguyen Mong Ngoc, 2008). Finally, principal component analysis with Varimax rotation is used to minimize the number of factors (Hoang Trong and Chu Nguyen Mong Ngoc, 2008).
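The first two criteria can be sketched in numpy: the overall KMO statistic (computed from the correlation matrix and its implied partial correlations) and the Kaiser eigenvalue-greater-than-1 rule. This is a minimal illustration of the computations, not the SPSS implementation; the simulated data are assumptions:

```python
import numpy as np

def kmo_and_kaiser(x):
    """Overall KMO statistic and Kaiser rule for a data matrix x
    (rows = respondents, columns = items)."""
    r = np.corrcoef(x, rowvar=False)            # item correlation matrix
    r_inv = np.linalg.inv(r)
    d = np.sqrt(np.outer(np.diag(r_inv), np.diag(r_inv)))
    partial = -r_inv / d                         # anti-image (partial) correlations
    off = ~np.eye(r.shape[0], dtype=bool)
    # KMO = sum of squared correlations over (that sum + squared partials)
    kmo = (r[off] ** 2).sum() / ((r[off] ** 2).sum() + (partial[off] ** 2).sum())
    eigenvalues = np.sort(np.linalg.eigvalsh(r))[::-1]  # descending
    n_factors = int((eigenvalues > 1).sum())     # Kaiser criterion
    return kmo, eigenvalues, n_factors

# Simulated responses: 6 items driven by 2 latent factors plus noise
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 2))
loadings = rng.normal(size=(2, 6))
x = base @ loadings + 0.5 * rng.normal(size=(200, 6))

kmo, eig, k = kmo_and_kaiser(x)
```

Because the items share two latent sources, the KMO value should comfortably exceed 0.5, and the Kaiser rule will typically retain the dominant eigenvalues. Note that the eigenvalues of a correlation matrix always sum to the number of items.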

Evaluating Scale Reliability

Since the proposed framework was derived from the literature, and the aim of the empirical research was to test this framework, it was important to verify the reliability and validity of the measures used in order to draw valid inferences leading to theory building. Reliability concerns how consistently similar measures produce similar results, whereas the validity of a measurement instrument refers to how well it captures what it is designed to measure (Rosenthal and Rosnow, 1984, cited in Hoang Trong and Chu Nguyen Mong Ngoc, 2008).

Reliability has two dimensions, referred to as repeatability and internal consistency (Zigmund, 1995). Internal consistency refers to the ability of a scale item to correlate with other items in the scale that are intended to measure the same construct. Items measuring the same construct are expected to be positively correlated with each other. A common measure of the internal consistency of a measurement instrument in social sciences research is Cronbach’s alpha (Zmud and Boynton, 1991, cited in Hoang Trong and Chu Nguyen Mong Ngoc, 2008).

Cronbach’s alpha is widely used because it offers several advantages over other reliability measures: it is easy to compute, it places no restrictions on the types of variables used, and it eliminates the possibility of memory effects when measuring reliability (Bollen, 1989, cited in Hoang Trong and Chu Nguyen Mong Ngoc, 2008). If reliability is not acceptably high, the scale can be revised by altering or deleting items whose scores fall below a pre-determined cut-off point. A scale with an alpha coefficient greater than 0.70 is considered reliable in measuring its construct (Nunnally, 1978; Leedy, 1997, cited in Hoang Trong and Chu Nguyen Mong Ngoc, 2008).

A high alpha indicates a high level of internal consistency, or homogeneity, among the constructs under each dimension (Straub, 1989, cited in Hoang Trong and Chu Nguyen Mong Ngoc, 2008). Schuessler (1971, cited in Hoang Trong and Chu Nguyen Mong Ngoc, 2008) suggested that a scale has good reliability if its alpha value is greater than 0.60, while Hair et al. (1998, cited in Hoang Trong and Chu Nguyen Mong Ngoc, 2008) regarded reliability estimates between 0.6 and 0.7 as the lower limit of acceptability.

In this research, the multi-item scales measuring all corresponding constructs under each of the four dimensions, (i) management; (ii) processes; (iii) technology; and (iv) people, were checked for reliability by determining Cronbach’s alpha, and an alpha value of 0.60 or greater was considered acceptable.
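Cronbach’s alpha is straightforward to compute from an item-score matrix; a minimal numpy sketch follows, using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the sum score). The example data are hypothetical:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses to a 3-item scale from 5 respondents
scale = [[4, 4, 5],
         [3, 3, 3],
         [5, 4, 5],
         [2, 2, 3],
         [4, 3, 4]]

alpha = cronbach_alpha(scale)
acceptable = alpha >= 0.60   # acceptance threshold used in this research
```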

3.5.3. Testing the Hypotheses

To test the hypotheses, confirmatory factor analysis (CFA) was conducted, using Lisrel 8.7 to test the measurement model.

Then, discriminant analysis was conducted. The measurement model is considered satisfactory when GFI ≥ 0.9, AGFI ≥ 0.8, CFI ≥ 0.9, and RMSEA ≤ 0.08. After the measurement model was checked, the data were used to test the hypotheses.
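The fit-index screening can be expressed as a simple helper. The thresholds below are the conventional cutoffs for these indices (GFI ≥ 0.9, AGFI ≥ 0.8, CFI ≥ 0.9, RMSEA ≤ 0.08); the function and the example values are illustrative, not output from the study’s Lisrel run:

```python
def measurement_model_fit(gfi, agfi, cfi, rmsea):
    """Check CFA fit indices against conventional cutoffs.

    Returns a dict of per-criterion results plus an overall verdict.
    """
    checks = {
        "GFI >= 0.9":    gfi >= 0.9,
        "AGFI >= 0.8":   agfi >= 0.8,
        "CFI >= 0.9":    cfi >= 0.9,
        "RMSEA <= 0.08": rmsea <= 0.08,
    }
    checks["satisfactory"] = all(checks.values())
    return checks

# Hypothetical fit indices for a fitted measurement model
result = measurement_model_fit(gfi=0.92, agfi=0.85, cfi=0.93, rmsea=0.05)
```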
