
Response: Allowing More Countries to Participate in the PISA Exam Enables Innovation and Fosters Diversity. That Is Something to Embrace

In a recent essay in The 74, Mark Schneider argues that participation in the Programme for International Student Assessment (PISA) by countries that are not members of the Organisation for Economic Co-operation and Development (OECD) has increased the burden on a program originally designed for member countries, undermined the meaningfulness of comparative statistics based on the assessment, and limited the potential of innovative digital assessment content. As this groundbreaking example of international collaboration in education approaches its 20th year, it's worth taking a closer look at those claims.

One of the first revelations emerging from the rich data enabled by PISA was that the world is not divided between rich and well-educated nations and poor and badly educated ones: National income explains only around a third of the performance variation among countries on the exam, and neither the highest-performing nor the most rapidly improving education systems in PISA are members of the OECD.

Without nonmember countries, PISA would be much poorer in the opportunities it provides for peer learning. In 2014, England’s schools minister, Liz Truss, visited top performer Shanghai, a non-OECD system, and was impressed by the mathematics teaching she observed, as well as the teacher-to-teacher and school-to-school programs in the province. She worked with the Chinese to create an exchange program for teachers in China and England. Today, educators from China and the United Kingdom are working together in more than 30 mathematics hubs in England to innovate teaching.

In the 1960s, Singapore was one of the poorest nations, with a dismal education system. Today, Singapore outperforms every member country in every PISA subject area. Rapid improvers like Brazil, Peru, and Vietnam, too, have become popular destinations for peer learning and research. These and many other examples show how much educators and education systems can learn from each other — and with each other — when comparisons open up the opportunity to do so. This is one of the great strengths of PISA.

OECD countries are also not as “widely different” from other countries as Schneider suggests. When taking all countries together — member and partner countries alike — only 22 percent of the student performance variation lies between countries. In other words, the performance variation we observe within countries dwarfs the differences we observe between countries.

It is true that not all 15-year-olds are in school, and thus not all are covered by PISA. But PISA has never claimed to compare all 15-year-olds; its comparisons relate to 15-year-old students. PISA's reports carefully acknowledge the coverage rate, which describes the share of 15-year-olds included in the exam populations. The data reveal that only two non-OECD countries show lower coverage rates than the member country with the lowest coverage rate; 48 countries show a higher coverage rate than the United States, including 19 nonmember school systems.

Has the growth in membership weakened the quality of the PISA tests? The opposite is the case. The PISA technical standards have been strengthened with each successive assessment and provide the most rigorous international comparisons of student learning outcomes available anywhere. All countries whose data do not meet all PISA technical standards are carefully annotated in OECD reports. In the last PISA round in 2015, only seven nonmember countries, but 15 member countries, did not meet the PISA technical standards, among them the United States. In short, that countries are poor does not mean they are implementing PISA poorly. Instead, some of the poorest nations offer PISA data of the highest technical quality.

It is true that growing country participation has meant PISA needed to measure a wider range of student abilities. This has led to an increase, not a decrease, in the precision of measurement for every country. For example, it has led PISA to introduce adaptive assessment methods that provide a more accurate and nuanced assessment of students in all countries by adjusting the difficulty level to the individual students. Moreover, the PISA for Development group of countries created and contributed a new set of test items that more accurately measure low-performing students, and these are now available for OECD and non-OECD countries alike. This is another example where countries benefit collectively from wider participation and a broader range of expertise.

Does inclusion of nonmember countries hold back the development of innovative digital assessment tasks that exploit the opportunities of computer delivery to assess competencies that cannot be measured by a paper-and-pencil test? Again, the use of technology in PISA has not decelerated, but accelerated. PISA now includes computer-based tasks that involve scientific setups where students conduct their own experiments, and it looks at reading of digital texts with hyperlinks, computational thinking, and the use of spreadsheets in mathematics. True, countries can opt out and use paper-based versions, and eight nonmember countries did so in the latest PISA assessment. But that has neither operational nor financial implications for countries that want to, and do, move ahead with new assessment methodologies.

Do nonmember countries incur costs for member nations? Increased participation in PISA has not led to an increase in spending on PISA by the United States or any other OECD country. In real terms, the U.S. contribution to PISA has remained unchanged since the assessment began in 2000, and the total share of PISA funds contributed by the United States has shrunk from 18 percent to 8 percent, thanks to the participation of non-OECD countries. The rising number of participants has increased the resources available and made a greater number of innovative developments possible and affordable.

Finally, how does PISA decide who can join? The participation of nonmember countries is carefully regulated and governed by a Global Relations Strategy that OECD countries agreed to in 2013. The essential criterion is whether the countries can deliver a PISA assessment that meets the demanding internationally agreed-upon standards to ensure relevant, reliable, and valid comparisons — not whether they are rich or poor.

Ultimately, greater participation in PISA enables innovative practice and fosters valuable international peer learning among participating countries. Yes, changes to PISA can require greater sophistication in what is assessed and how; but around the table, we have the skills, knowledge, and disposition to make it work for everyone involved, regardless of their starting point. During my time as chair of the PISA Governing Board, I have been impressed by how every participant has added expertise, insights, and strength to the PISA instruments and discourse, and I have never seen any participant detracting from PISA. As in most things, diversity is a strength of PISA, and it is something to embrace.

Dr. Michele Bruniges chairs the Programme for International Student Assessment (PISA) Governing Board and is secretary of the Australian Government Department of Education and Training. As a teacher, she had a distinguished 16-year career in the Australian education public service and has a Ph.D. in educational measurement.
