Are basic requirements for learning in place?

BOX 3.1 How does language of instruction affect test scores? (Continued)

languages, bilinguals will eventually outperform monolinguals even in the monolinguals’ L1. But instruction almost invariably falls short of optimal, and so knowing by how much a linguistic minority is lagging, and why, is critically important.

The problem is becoming more important as testing coverage expands globally. International large-scale assessments were initially designed for and first given in OECD member countries, which tend to be more linguistically homogeneous than non-OECD countries.a In 2000, the Programme for International Student Assessment (PISA) had 41 national test versions in 25 languages for 30 participating (OECD member) countries; by 2006, 77 versions in 42 languages were given, with all of the increase coming from non-OECD member countries. The expansion “added considerably to the challenge of ensuring equivalence and fairness of instruments across all participating countries” (Grisay et al. 2007).

The challenge is formidable, but progress is possible by testing students in their L1 and appropriately analyzing differences between language groups. In fact, some initiatives are already moving in the right direction. For instance, the International Association for the Evaluation of Educational Achievement (IEA) has created guidelines for countries participating in PIRLS and other international large-scale assessments: countries are responsible for translating the assessment into their own languages and adapting it to their own contexts. In the same spirit, IEA and Boston College conduct studies to detect test and item bias following standards in the field of psychometrics (American Educational Research Association, American Psychological Association, and National Council on Measurement in Education 2014; Educational Testing Service 2014). Where measurement bias is identified (due to language at home, gender, or other factors), these organizations are transparent in communicating the results.

The growth in participation in international large-scale assessments provides an opportunity for many countries and for international development organizations. Organizations that conduct international large-scale assessments support participating countries with capacity-building initiatives so that they can conduct better national large-scale assessments and follow best assessment practices.

Source: Contributed by Michael Crawford. a. Ethnologue data, 22nd ed. (https://www.ethnologue.com).
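The item-bias checks mentioned in box 3.1 can be illustrated with one standard statistic from the psychometrics literature: the Mantel-Haenszel common odds ratio, which compares the odds of answering an item correctly across two language (or other) groups after matching examinees on overall ability. The sketch below is illustrative only; the function name and the simulated data are assumptions, not part of the SDI or IEA toolkits.

```python
import numpy as np

def mantel_haenszel_or(correct, group, strata):
    """Mantel-Haenszel common odds ratio for one test item.

    correct: 0/1 responses to the studied item
    group:   0 = reference group, 1 = focal group (e.g., a language minority)
    strata:  ability stratum for each examinee (e.g., total test score)

    A ratio near 1 suggests no differential item functioning (DIF);
    values far from 1 flag the item for expert review.
    """
    num = den = 0.0
    for s in np.unique(strata):
        in_s = strata == s
        ref = in_s & (group == 0)
        foc = in_s & (group == 1)
        a = correct[ref].sum()        # reference group, correct
        b = (1 - correct[ref]).sum()  # reference group, incorrect
        c = correct[foc].sum()        # focal group, correct
        d = (1 - correct[foc]).sum()  # focal group, incorrect
        n = in_s.sum()
        num += a * d / n
        den += b * c / n
    return num / den
```

A ratio well above 1 means the item favors the reference group at matched ability, which is the kind of evidence testing organizations report transparently when bias is found.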

Among the determinants of student learning, SDI surveys primarily collect information on school inputs and teacher characteristics. For that reason, the remainder of this chapter focuses principally on variations in student learning that can be explained by differences in these characteristics.11

Many factors—both internal and external to education systems—contribute to a student’s ability to learn those basic skills that will stay with her or him throughout life.12 While individual schools are affected by the broad characteristics of the country’s education system and its stakeholders, factors at the school level decisively influence the learning experience of students. Describing some of these factors is the comparative advantage of surveys such as the SDI.

Are teachers present and teaching?

Teachers need to be present in class to teach. Not only has teacher absence been found to correlate with lower learning, but causal studies also have shown that reducing absence can improve learning (Duflo, Hanna, and Ryan 2012). However, even when they are in school, teachers often spend too much time on activities other than teaching. As mentioned earlier, teacher absence in SDI countries is well documented (Bold et al. 2017). The analysis conducted for this chapter, albeit in an updated sample of SDI countries and using a slightly different definition due to the careful harmonization of the surveys,13 yields a similar story. On average, 22 percent of teachers are absent from school during a surprise visit. If teachers who are not in the classroom during this visit are also counted, the teacher absence rate rises to 38 percent.14 Overall, teacher absence remains a substantial challenge for SDI countries included in this book. There are many possible reasons for teacher absence, including systemwide shortfalls in personnel policies (Liu, Loeb, and Shi 2020), lack of monitoring and accountability, and insufficient incentives (Mbiti 2016; Muralidharan et al. 2016).

Do teachers have the knowledge and skills they need?

The importance of teacher quality and, in particular, of effective pedagogy has been amply documented in the education literature (see Araujo et al. 2016; Evans and Popova 2016; Hanushek and Rivkin 2006). Teachers’ abilities are often assumed to be associated with academic credentials. However, a growing body of evidence shows that the skills that matter most for learning—content knowledge and pedagogy—are not necessarily linked with teachers’ formal qualifications (Cruz-Aguayo, Ibarrarán, and Schady 2017; Hanushek and Rivkin 2012; Rivkin, Hanushek, and Kain 2005). By providing direct measures of knowledge and pedagogy, SDI surveys make it possible to measure the importance of teachers’ abilities in explaining children’s learning outcomes. In fact, a recent study uses SDI surveys and other data to show that the associations between student test scores and teacher attributes might differ for teachers who have high and low scores on content knowledge and pedagogy (Filmer, Molina, and Wane 2020).

The SDI teacher assessment includes two sections. The teacher knowledge section resembles grading a math and literacy exam (marking arithmetic exercises solved by students, correcting a letter with grammatical errors, and similar tasks), whereas the pedagogical section asks teachers to perform tasks they face on a daily basis (preparing to teach a lesson, assessing differences in children’s abilities, and evaluating students’ learning achievements and progress). The extracts from teachers’ tests shown in figure 3.4 are examples of the types of questions on which teachers are assessed.
