Workshop 1: Infrastructure around systematic reviews

DAY ONE: Session 3: Parallel workshop sessions (14 May, 14.00 - 15.30)

Workshop 1: Infrastructure around systematic reviews (Chair: Steve Higgins, Durham University, UK)

A major challenge facing public services is to adopt the most effective, best-value interventions and avoid wasting finite resources. With the amount of new published research evidence increasing rapidly, it is not possible for policy-makers and practitioners to identify and keep up to date with developments in evidence in their fields. Investment in systematic reviews is increasingly used to meet both these challenges, and to ensure public services have access to the best possible evidence to inform decisions and choices. As part of this, there is a growing infrastructure - at local, national and international levels - supporting the preparation and updating of systematic reviews, through the commissioning of reviews, the building of capacity for their conduct, and support for the development of methods.

The aim of this workshop is to critically examine this infrastructure, and its potential to facilitate better use of research in education in Europe. Participants will contribute to this theme, whilst drawing on examples from their own work.

Presentations:

Eamonn Noonan: From evidence gap to knowledge translation gap: the debate on the effectiveness of anti-bullying programmes

This presentation addresses the recent debate in Scandinavia on the effectiveness of anti-bullying programmes. Following the publication of a systematic review (Farrington & Ttofi 2010) which found that such programmes do reduce bullying, officially commissioned reports nevertheless concluded that there was no evidence that anti-bullying programmes worked. This viewpoint strongly influenced subsequent policy decisions in this area.

This presentation sets out the scientific critique of Farrington and Ttofi's review, and then traces the process by which the evidence it presented came to be sidelined in the policy debate. It notes serious misunderstandings of the systematic review process, and argues that overreliance by official bodies on trusted interlocutors can lead to a distorted view of the available research. The significance of vested interests is also addressed.

It then considers the implications for the promotion of evidence-based policy across the board. A concerted effort to enhance the infrastructure around systematic reviews is certainly desirable. In the meantime, the case for genuinely systematic reviews as a path to an impartial and accurate summary of existing research needs to be communicated vigorously. The specifications for officially commissioned reports should be revised to ensure greater methodological rigour, and new routines should be developed to identify and redress potential conflicts of interest. These goals would be advanced by a public debate on the importance of independent and accurate research synthesis, with a focus on programme impact. This remains key to better outcomes for those in education, including victims of bullying.

Monica Melby-Lervåg (presenting), Charles Hulme and Arne Lervåg: Evidence for the effectiveness of educational interventions: methodological challenges in meta-analyses

During the last 15 years the number of meta-analyses in education has expanded greatly. Many overlapping meta-analyses have been published, often with conflicting findings. Perhaps the best-known attempt to systematize the results of meta-analyses in education is Hattie's (2009) book "Visible Learning". However, the methodological quality of meta-analyses, and how that quality affects the conclusions reached, has received little attention, both in Hattie's analysis and in the field of education in general. This proposal consists of two parts. First, we present a reanalysis of Hattie's summary of meta-analyses in the area of reading and language. In his analysis, Hattie merges correlational studies (which involve no intervention) with intervention studies. Our reanalysis, using methodologically sounder inclusion criteria, shows that Hattie's estimates of intervention effects can be biased and should be used with caution.

In the second part of the proposal, we present a summary of meta-analyses and systematic reviews of intervention effects in education. Seventy-one different meta-analyses were included in the study. The results show that methodological quality and design features have a large impact on the mean effect size a meta-analysis reports: the single most important predictor of mean effect size was the design of the studies included. Policy makers should therefore thoroughly assess the quality of a meta-analysis before using such evidence as the basis for policy recommendations.

Thomas Engsig: The use and misuse of systematic reviews in Danish educational research and practice

Evidence-informed policy and practice is an increasingly significant discourse in the Danish educational system. The ongoing major reforms of both the Danish public school system and the colleges of teacher training are strongly informed by an emphasis on evidence-informed approaches to educational practice and policy.

Over the last five years, three systematic reviews have had a substantial impact on educational policy-making, teacher training and the development of the Danish public schools: John Hattie's (2009) Visible Learning, and two reviews from the Danish Clearinghouse for Educational Research, on teacher competences (Nordenbo, Soegaard Larsen, Tiftikçi, Wendt & Østergaard, 2008) and Evidence on Inclusion (Dyssegaard & Soegaard Larsen, 2013). All three have had a considerable impact.

However, in Denmark as in the international context, there is a continuing debate on the misuse of systematic reviews and other evidence-based approaches in educational practice. Critics argue that the so-called evidence movement poses a considerable threat to teachers' professional, experience-based judgment, and that systematic reviews only provide evidence on whether an intervention works, not on for whom it works or under what contextual circumstances (Clegg, 2005).

Through a discussion of a critical realist systematic review on inclusive and supportive practices in schools' general education, I argue that traditional systematic reviews offer little or no knowledge on how and why interventions are effective, and that a critical realist review potentially offers professionals in educational practice and policy-making significant knowledge on why, how and for whom interventions work.