CHAPTER 2 EXAMINING PUPIL ATTAINMENT AND PUPIL PROGRESS WITHIN THE
2.5 Limits or Flaws in Educational Effectiveness Research?
No area of research is devoid of criticism, and educational effectiveness research is no exception. The debate that educational effectiveness research attracts probably stems from the considerable political support that school and educational effectiveness research enjoys in many westernized countries (Luyten, Visscher & Witziers, 2005), as well as from its connections with economic and social theory (Scheerens, 1997). There have been a number of important reviews of the knowledge base of school effectiveness research (Reynolds et al., 1994; Reynolds et al., 2000; Sammons, 1999; Scheerens & Bosker, 1997) and of the methodological advances in educational effectiveness research (Creemers, Kyriakides & Sammons, 2010).
Criticism of school and educational effectiveness research comes in two forms. There are proponents from within the field who are cognisant of the limitations of educational effectiveness research but view such criticism positively, as an opportunity to advance the field. Then there are critics from outside the field who detect flaws in the political, atheoretical and methodological positions expounded by school and educational effectiveness researchers but who choose to view these negatively in order to limit the field.
Critics doubt the existence of the school effect (Gorard, 2010a; Slee & Weiner, 2001; Thrupp, 1999, 2001, 2010). Critics also argue that school and educational effectiveness research is overly reliant on quantitative methods, positivist and hegemonic (Dahlberg & Moss, 2005), reductionist (Wrigley, 2004), serves political agendas, minimizes the importance of social composition in schools (Gorard, 2004; Slee, Weiner & Tomlinson, 1998; Thrupp, 1999, 2001; Wrigley, 2004), provides governments with a scientific justification for the political interpretation of policy/practice (Slee & Weiner, 2001), does not differentiate between factors that are school-based but not necessarily school-caused (Thrupp, 1999), produces an alternative research account (Gewirtz, 1998; Thrupp, 1999), holds flawed notions about teaching and learning (Rea & Weiner, 1998) that result from the coercive processes of social induction (Elliot, 1996), and rests on a notion of objectivity that cannot hold (Ball, 1998). The focus on what schooling should do for pupil outcomes, rather than what schooling should achieve for pupil learning, has led to a culture of blame (Rea & Weiner, 1998). Similarly, Elliot (1996:209) rejects the view that school-based processes should be judged on the basis of pupil outcomes, in view of:
"pupils' capacities for constructing personal meanings, for critical and imaginative thinking and, self-directing and self-evaluating their learning". Elliot considers it the responsibility of the teacher to establish outcomes for pupils. Effectiveness studies are also criticized because they remain under-theorised. Apparently, such studies do not tap into the knowledge provided by sociological inquiry because they employ narrow indicators (Thrupp, 2001) and are dominated by the accountability agenda (Lingard et al., 1998).
On the other hand, proponents of effectiveness research such as Reynolds et al.
(2012:15) believe that educational effectiveness research:
has had some success in improving the prospects of the world's children over the last three decades – in combating the pessimistic belief that "schools make no difference", in generating a reliable knowledge base about "what works" for practitioners to use and develop, and in influencing educational practices and policies positively in many countries.
Reynolds et al. (2012) acknowledge that the success of educational effectiveness research is partly attributable to valid criticism that led educational effectiveness researchers to seek ways to advance the field. Reynolds et al. (2012) highlight four key themes central to criticism about educational effectiveness research. These themes are:
a lack of methodological rigour, particularly in the early studies of effective schools; an over-emphasis on schooling rather than on social class influences; a failure to link the theory of educational effectiveness research with analyses and findings; and a one-size-fits-all approach to research.
Not all forms of knowledge are equally valuable and integral. Amongst the critics who argue against the methodological, atheoretical and political stances in educational effectiveness research, Gorard (2010a:745) has been especially vociferous in his rejection of the "dominance of the school effectiveness model". In response to this antagonistic position, Reynolds et al. (2012) argue that Gorard's (2010a & b, 2011) criticism concerning relative error, random sampling and the use of multilevel modelling techniques is flawed. Reynolds et al. (2012) also argue that Gorard's (2010a) broader criticism of educational effectiveness research, such as doubting the existence of the school effect, conflating educational effectiveness researchers with governments and rejecting educational effectiveness research outright, is unjust and invalid. Proponents of educational effectiveness research, on the other hand, consider criticism important in that it provides a springboard for methodological and theoretical advances in the field. This is possibly the greatest point of divergence between hardened critics, who consider educational effectiveness research flawed, and proponents, who acknowledge its limitations but choose instead to work towards advancing this field of study.
Very early studies of school effectiveness, such as those by Mayeske et al. (1972), Bidwell and Kasarda (1980) and Ralph and Fennessy (1983), were unable to accurately separate the effects of the school from the effects associated with pupil intake. Such criticism was answered by methodological developments that led to the fourth-generation input-context/process-product models (Teddlie & Reynolds, 2000). Early studies of this more methodologically sophisticated type, such as those conducted by Hallinger and Murphy (1986) and Teddlie et al. (1990), paved the way for the "normal science" of school effectiveness (Teddlie & Reynolds, 2000:11). Particularly since 2000, the modelling of educational effectiveness has been consolidated by an increased focus on complexity that examines changes in pupil attainment over time. Increasingly, the longer-term effects of factors at the school and at the classroom level are also being examined, alongside the operators of educational effectiveness such as "consistency, stability, differential effectiveness and departmental effects" (Creemers, Kyriakides & Sammons, 2010:6).
Educational effectiveness research has been repeatedly criticized because it neglects the determining effects of social class and instead chooses to focus on the influences of schooling (Gorard, 2004; Slee, Weiner & Tomlinson, 1998; Thrupp, 1999, 2001; Wrigley, 2004). Does this automatically imply that the effects of social class are ignored by school or by educational effectiveness research? Based on what the research usually elicits, 12% to 15% of the variance is explained by the effects of the school. This suggests that whilst educational effectiveness research does not ignore the effects of social class, its findings might be interpreted in a way that shows it to downplay those effects. The verb "downplay", rather than "neglect", has been chosen in view of the statement by Reynolds et al. (2012) that educational effectiveness research does consider the influence of social class and that more recent findings show the school level to explain between 30% and 50% of the variance, a figure considerably greater than the 12% to 15% reported by the critics. Given these sharp differences in interpretation, it is essential to understand what the school effect is and how it is measured.
At times, the terminology used to describe the school effect can be misleading (Coe & Fitz-Gibbon, 1998). The school effect is a measure of the between-school variance that cannot be explained by the intake characteristics of pupils after controlling for such effects (Coe & Fitz-Gibbon, 1998). Its estimation relies heavily on multilevel quantitative methods of analysis, which usually offer a snapshot of the educational reality within schools (Luyten, Visscher & Witziers, 2005). The school effect is relative because the value-added scores pupils achieve in one school are compared against the value-added scores of pupils in other schools (Goldstein, 1997). Relativity implies that effects are likely to vary in quantity and in quality across and within schools. School effects need not be strong to be influential: weak school effects were elicited by Scheerens and Bosker (1997) for effectiveness factors such as cooperation, school climate, monitoring, opportunity to learn, parental involvement, pressure to achieve and school leadership. For those who still choose to doubt the existence of the school effect, Luyten, Visscher and Witziers (2005:253) argue that in view of "the enormous amount of resources (taxpayers' money) invested in education each year, it would be unethical not to consider its effects."
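To make the notion of "school-level variance" concrete, the following sketch simulates pupils nested in schools and recovers the share of variance attributable to schools (the intraclass correlation). All numbers here are hypothetical and chosen only to mimic the commonly reported 12% to 15% figure; this is an illustration of the statistic, not a reproduction of any cited study.

```python
import numpy as np

# Hypothetical simulation: pupils nested in schools, with ~13% of the
# total variance sitting at the school level (an assumed value).
rng = np.random.default_rng(42)

n_schools, pupils_per_school = 200, 50
school_sd, pupil_sd = np.sqrt(0.13), np.sqrt(0.87)

school_effects = rng.normal(0.0, school_sd, n_schools)
scores = school_effects[:, None] + rng.normal(
    0.0, pupil_sd, (n_schools, pupils_per_school)
)

# One-way ANOVA estimator of the two variance components
school_means = scores.mean(axis=1)
within_var = scores.var(axis=1, ddof=1).mean()  # pooled within-school variance
between_var = school_means.var(ddof=1) - within_var / pupils_per_school

icc = between_var / (between_var + within_var)  # intraclass correlation
print(f"estimated school-level share of variance: {icc:.2%}")
```

In practice such components are estimated with multilevel models that also control for pupil intake characteristics, so the between-school term here stands in for the adjusted, value-added variance the literature reports.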
An example of how school effects can lead to significant differences in pupils' progress outcomes over time is discussed by Luyten, Tymms and Jones (2009). Using more sophisticated methods that account for the effects of assigning pupils to higher or lower grades on the basis of their birth-date, and using both cross-sectional and longitudinal data, Luyten, Tymms and Jones (2009:146) show that the absolute effects of schooling "indicate that more than 50% of the progress pupils make over [a] one-year period is accounted for by schooling." This percentage differs considerably from the figure of 12% to 15% typically reported by studies, as well as by the critics of school and educational effectiveness research. However, it is similar to the figures reported by studies that examine the variation at both the school and the classroom level (Hill & Rowe, 1996; Opdenakker & Van Damme, 2000b).
What does the figure of 50% of pupil progress over one year accounted for by the school (Luyten, Tymms and Jones, 2009) refer to? On page 146, "the figure of 50% refers to the impact of receiving education in the upper grade as opposed to the lower grade and is calculated as a percentage change in test score." On the same page, the authors indicate that "the figure of 10% refers to the variation in the impact of schools." On page 157 they discuss how these percentage figures refer to two aspects of the same phenomenon:
these percentages relate to an aspect of the effect of schooling that is different from what is expressed by the usually reported percentages of school level variance. When these percentages are converted to effect sizes that have been defined in relation to interventions in which there is a control and an experimental group, it is found that 10% to 15% school level variance corresponds to an effect size of .67 to .70.
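One common way of performing such a conversion treats schools one standard deviation above and below the mean as the "experimental" and "control" groups, giving d = 2·√(ρ/(1−ρ)) for a school-level variance share ρ. The quoted passage does not state which formula was used, so the function below is an assumption offered only to show how a ~10% variance share can translate into an effect size near .67.

```python
import math

def icc_to_effect_size(icc):
    """Convert a school-level variance share (ICC) to a Cohen's-d-style
    effect size, comparing schools one SD above and below the mean.
    The formula d = 2*sqrt(icc / (1 - icc)) is one common choice; the
    source does not confirm it, so treat this as an assumption."""
    return 2 * math.sqrt(icc / (1 - icc))

print(round(icc_to_effect_size(0.10), 2))  # → 0.67
```

This illustrates the general point of the quotation: a seemingly modest share of variance at the school level corresponds to a substantial standardized difference between schools.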
The above discussion does not automatically resolve the debate as to whether educational effectiveness research appropriately examines the influence of social class. It does, however, highlight the need for a more balanced view of what the school effect represents. The ongoing discussion about the improved measurement of the absolute effect of the school over time shows that, contrary to what the critics argue, educational effectiveness research does not neglect the influence of social class but instead prefers to focus on the more malleable influences of schooling. Findings by Hill and Rowe (1996), Opdenakker and Van Damme (2000), Luyten, Tymms and Jones (2009) and Guldemond and Bosker (1999) strongly suggest that the incremental, year-on-year effects of the variation accounted for by the school and classroom levels are greater than the school effect considered as a measure of between-school variance.
Earlier defenses of school and educational effectiveness research have also argued for the importance of conducting such research. Teddlie and Reynolds (2000) argue that the contribution of school effectiveness research is broader than that of its critics because it is not restricted to examining the influence of social class. Townsend (2001) argues that even though critics allege a direct relationship between school effectiveness research and the management of schools, they choose to ignore that at the root of much social injustice lie funding cutbacks for education. Luyten, Visscher and Witziers (2005:252) argue that discarding the ideal of objectivity would reduce educational research to an intellectually anarchic exercise devoid of the potential for "generating of information and knowledge that is valid regardless of ideological preferences." Educational effectiveness research does not seek to eradicate ideological preferences, nor does it seek to establish the supremacy of one ideology over another.
However, it does seek to safeguard objectivity via scientific and rigorous methods (Coe & Fitz-Gibbon, 1998). Increasingly, the amalgamation of quantitative and qualitative methods has led to the development of dialectical approaches that highlight the reality of a "much more complex iterative approach" (Siraj-Blatchford et al., 2006:76) and to the pragmatic use of mixed methods, useful in refuting an either/or stance (Teddlie & Sammons, 2010).
Proponents of school and educational effectiveness research are aware that the analysis of data usually stops after the estimation of direct effects, that the research questions are often addressed through quantitative methodologies (Coe & Fitz-Gibbon, 1998; Goldstein & Woodhouse, 2000; Scheerens & Bosker, 1997) and that research focuses on the basic skills (Bosker & Visscher, 1999). However, rather than considering this to seriously limit educational effectiveness research, proponents call for a more sophisticated choice of variables not necessarily limited to the examination of direct effects (Coe & Fitz-Gibbon, 1998; Goldstein, 1997). They also call for variables that are broader, aimed at avoiding narrower approaches (Campbell et al., 2003; Luyten, Visscher & Witziers, 2005) and supportive of both qualitative and quantitative methods (Reynolds et al., 2002). For example, these methodological and theoretical advances may be achieved through studies that measure and illustrate the influence of school and classroom processes (Coe & Fitz-Gibbon, 1998; Scheerens & Bosker, 1997), consider teachers as sources of teaching variance (Luyten, 2003) and test the generalisability of findings, which may eventually contribute towards the formulation of a valid pan-European (2012) and international version (Reynolds, 2006) of The Dynamic Model of Educational Effectiveness (Creemers, Kyriakides & Antoniou, 2009). What distinguishes the proponents from the critics is that issues critical to educational effectiveness research are viewed as limitations that need to be considered further if educational effectiveness research is to continue advancing.