Everything Works Somewhere and Nothing Works Everywhere; Context Matters
The Programme for International Student Assessment (PISA) is a survey of the educational performance of 15-year-old students organised by the Organisation for Economic Co-operation and Development (OECD). The results of the latest round of PISA will be released in December this year, and we will see a lot about them in the news and other media. Most of the stories are likely to be about either the international rankings or 'what works' in education. But these are not the stories we should be focusing on.
The PISA rankings
We will be told, for example, that country X performed slightly above the OECD average in science and reading and around the OECD average in mathematics. We will be told that this performance does or does not represent a significant change from the scores obtained in 2015. We will also be told that country X's performance is comparable to that of countries like Australia, Germany and Ireland. All these rankings are, however, just comparisons of raw scores. They do not take into account the context of the societies in which education systems operate. To give an example, in 2015 the performance of the UK in PISA was significantly higher than that of the Dominican Republic (in all subjects), but we also know that income per capita in the Dominican Republic is almost seven times lower than in the UK. So it does not seem fair to make raw comparisons of education systems when they operate under such different socioeconomic conditions (a good example of comparisons between education systems that take into account socioeconomic differences can be found here).
So, what works in education?
Regarding 'what works' in education, the story is not so different. Raw comparisons of education systems are also not very informative, as PISA can be used to provide evidence for contradictory claims. Supporters of school autonomy (i.e. giving principals and teachers greater autonomy in managing their schools' resources) will point to the case of Estonia (ranked 3rd in science in PISA 2015) to claim this is good practice. On the other hand, detractors of the same kind of policies will point to Singapore (ranked 1st in science, reading and maths in PISA 2015) to say the opposite (see Figure II.4.3 of the PISA 2015 Results). The results of PISA 2015 can likewise provide evidence that students tend to perform better in schools with smaller classes (Finland has classes with fewer than 20 students on average and ranked 5th in science), but they can also provide evidence for the opposite claim (Japan has classes with more than 35 students on average and ranked 2nd in science) (see Figure II.6.16 of the PISA 2015 Results).
So the stories we should be focusing on must go beyond simplistic comparisons of education systems. 'How did we rank?' and 'what works?' are not the right questions to answer by analysing PISA, because, as shown above, almost every policy works somewhere and no policy works everywhere. This is not a limitation of the PISA data but of the analyses and interpretations we make of it (besides student achievement, PISA collects a wealth of contextual information about students, teachers, schools and countries). Our analyses must not only ask which policies or practices work but also under which conditions they are likely to work. In educational research, everything depends on context, and education systems therefore cannot be understood without considering the context of the societies in which they operate.