Christian Bokhove

In recent decades, more and more educational data have become available, including data from International Large-Scale Assessments (ILSAs), with the Organisation for Economic Co-operation and Development (OECD) and the International Association for the Evaluation of Educational Achievement (IEA) playing key roles in disseminating results from such ILSAs and their international databases. This proliferation of ILSAs has afforded us unique opportunities to compare jurisdictions internationally on a range of educationally relevant themes. Most ILSAs publish their data on public websites, often accompanied by detailed technical manuals to aid analysts in analysing these data. Such transparency is welcome, but it is somewhat marred by the specialist knowledge needed for more advanced analyses. It is important, though, that independent analyses also take place, not just those by the OECD and the IEA, so that we can benefit fully from these rich data.

One challenge in data analysis in general, and certainly in analysing secondary ILSA data, lies in transparency about how data are processed and analysed. There is an enormous number of so-called ‘researcher degrees of freedom’ in analysing secondary data. To start with, there is simply the choice of ILSA dataset. For my field, secondary mathematics education, I can look at 15-year-olds in PISA or at grade 8 students in TIMSS. Each has its own achievement variable, and depending on what you are interested in, you could choose one over the other. Then there are the context questionnaires and associated scales. ILSA providers sometimes supply these scales ready-made, but sometimes we would like to create our own: another degree of freedom. It might seem logical that all researchers follow ‘best practices’ in catering for ILSAs’ complex sampling designs, for example by using weights, plausible values and resampling techniques, but we know this isn’t always the case. And even when these are used, there are variations in, for example, how the weights are applied.
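To make that ‘best practice’ concrete, here is a minimal sketch in Python, using entirely synthetic data (all names and numbers are hypothetical, not drawn from any real ILSA): each plausible value is analysed separately with the student weights applied, and the per-value estimates are then pooled (Rubin’s rules). The sampling variance, which would normally come from replicate weights, is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Hypothetical student-level data: five plausible values (PVs) for
# achievement, as published in e.g. PISA or TIMSS, plus a final
# student weight.
weights = rng.uniform(0.5, 2.0, n)
pvs = [500 + 100 * rng.standard_normal(n) for _ in range(5)]

def weighted_mean(values, w):
    """Weighted mean of one plausible value."""
    return np.sum(values * w) / np.sum(w)

# Pool across plausible values (Rubin's rules): estimate the statistic
# once per PV, then average the estimates. The between-PV (imputation)
# variance feeds into the standard error; the within-PV sampling
# variance from replicate weights is omitted here for brevity.
estimates = np.array([weighted_mean(pv, weights) for pv in pvs])
point_estimate = estimates.mean()
imputation_variance = (1 + 1 / 5) * estimates.var(ddof=1)

print(f"pooled weighted mean: {point_estimate:.1f}")
```

The key point is that the statistic is never computed on a single achievement score per student, nor on an average of the plausible values; both shortcuts are common in published analyses but bias the variance estimates.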

Then researchers choose an analytical approach, ranging from relatively simple descriptive statistics to advanced multilevel structural equation modelling. More choices are made here, for example on whether variables are centred, or on missing data treatment: it can make a big difference whether cases are excluded or values are imputed, with either single or multiple imputation. Finally, there are also numerous software packages that can be used, some more easily or cheaply available than others, and they all might use slightly different estimation techniques. I’m sure there are even more ‘degrees of freedom’, but all in all it is an extensive list of things about which analysts might make different decisions.
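To illustrate why the missing data choice alone can matter, here is a small synthetic sketch (again, not real ILSA data) in which lower scorers are more likely to have missing values. Under that mechanism, listwise deletion biases the mean upwards, and single mean imputation additionally understates the variance:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300

# Synthetic achievement scores where the probability of being missing
# rises as the score falls (missingness depends on the score itself).
scores = 500 + 100 * rng.standard_normal(n)
p_missing = 1 / (1 + np.exp((scores - 450) / 50))
observed = scores.copy()
observed[rng.random(n) < p_missing] = np.nan

# Choice 1: listwise deletion, i.e. simply drop missing cases.
mean_deleted = np.nanmean(observed)

# Choice 2: single mean imputation, i.e. fill gaps with the observed mean.
imputed = np.where(np.isnan(observed), mean_deleted, observed)

# The estimate after deletion sits above the complete-data mean, and
# mean imputation shrinks the variance on top of that; multiple
# imputation is the usual remedy for the understated variance.
print(f"complete: {scores.mean():.1f}, after deletion: {mean_deleted:.1f}")
print(f"variance, observed only: {np.nanvar(observed, ddof=1):.0f}, "
      f"after mean imputation: {imputed.var(ddof=1):.0f}")
```

Two analysts making different, individually defensible choices here would report noticeably different results from the same dataset, which is exactly the transparency problem at stake.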

This is why, in my opinion, it is so important to record these choices transparently. A good first step would simply be to report on them in analysis reports and articles. Even better would be to make analysis scripts and code available so that analyses can be replicated. This would also make it easier to apply the same analyses to new data when they are released.

Banner photo by Markus Spiske on Unsplash

About the author(s)

Christian Bokhove

Dr. Christian Bokhove is Associate Professor in Mathematics Education at the Southampton Education School, University of Southampton. Christian’s research focuses on mathematics education in secondary schools, innovative research methods and international comparative research.