
Quality Education - It's all in the mind? How to measure and predict quality education improvement
Dr Brian Metters, Mrs Babita Shrestha, Mrs Sangita Bhandari - 7 March 2016

Summary

This paper describes the development of a tool to measure progress in situations and programmes seeking to improve the quality of education in primary schools. It is based on the results of 5 years' work in Nepal primary schools, in which 200 schools, 2,000 teachers and 300,000 children were helped to attain higher standards of teaching and learning in the classroom. The work was conducted by the staff of Nepal Schools Aid (UK), a British charity, and their partner NGO in Kathmandu, the Nepal Education Leadership Foundation. The specific tool draws on earlier research we conducted into defining quality education and can be used in any country by any organisation without licence or copyright issues.

www.qualityeducationglobal.org



Introduction

Imagine you were the manager of a bank, an organisation where your "product" is intangible. In fact your product is a service, a service to your customers comprising a number of different features: ease of taking money out, security and freedom from risk, regularity of statements, accuracy, staff politeness, staff responsiveness, waiting time before being served... all of these things adding up to what we might call "Quality of Service". But who should be the judge of this quality of service: the manager, the staff, the shareholders? None of these. It is the customers who are the ultimate judges, the ordinary people who put their faith in you and your service at the bank.

So, as the manager of this bank, HOW do you get a measure of the quality of service at your bank from your customers? Of course, you ASK them, in a systematic, structured, measurable way, so that different aspects of your service can be compared monthly, quarterly, annually, or over any timescale you like. You do it this way because you cannot afford to wait until the end of the year to measure profit, investment, the number of new customers, or the retention of existing customers. You need to know immediately and regularly how your bank is performing in the eyes of your customers, so that you can take action in the event of poor feedback or specific warning signs.

So what has this got to do with education or its quality, you might ask? Well, as a simple analogy, why do schools or even Ministries of Education wait until the end of a year for overall feedback based on exam results, or worse still wait 5 years to measure enrolments and survival rates to judge whether quality is improving? Shouldn't there be a simpler way to measure quality changes, so that children and parents don't have to wait for poor exam results to find out that teaching methods, teacher competence, the curriculum, school discipline... were all inadequate? The answer of course is "yes, there MUST be a way to do this", and there must be many lessons about HOW to do it from other organisations that measure quality, and from psychology, because the field of education clearly doesn't have all of the answers.



Quality Education & Perception Tests

Perception tests, or questionnaires filled in by customers, have long been used as tools to measure service quality, as described briefly in the bank example given above. But perception tests have also been used in education for more than 20 years, and we will now describe two of them, used all around the world but in differing situations and for different reasons.

1. SEEQ: Student Evaluation of Education Quality

In 1982 an article was published in the British Journal of Educational Psychology describing the work of Herbert Marsh into how students could give their opinions on the teaching they received as a measure of education quality. This was revolutionary: students' opinions were sought and valued as feedback about what they were receiving. Marsh developed the Student Evaluation of Education Quality (SEEQ) as a simple but elegant tool based on a set of relevant dimensions to be rated against a scale of 1-5.

First, he began with the question "What is effective teaching?", and proceeded to research, question, discuss and explore a wide range of views and opinions. Eventually he settled on a total of 9 dimensions to be measured: Learning/Value, Enthusiasm, Organisation, Group Interaction, Individual Rapport, Breadth, Exams, Assignments, and Workload. So in summary, Herbert Marsh decided that the opinion of students was important as a measure of quality. Revolutionary, madness, or common sense at last?

Second, and in more detail, each dimension was then expanded into 3-4 indicators which were easily observable or experienced. As an example, here is one dimension and its four indicators:

Item 4. GROUP INTERACTION (Dimension)
4.1 Students were encouraged to participate in class discussions.
4.2 Students were invited to share their ideas and knowledge.
4.3 Students were encouraged to ask questions and were given meaningful answers.
4.4 Students were encouraged to express their own ideas or question the instructor.

This means that Education Quality comprised 9 dimensions and about 36 indicators, and all that Marsh needed now was a measurement scale; for this he chose the classic Likert scale approach. Here is the actual example used by Marsh:



1. Strongly disagree
2. Disagree
3. Neutral
4. Agree
5. Strongly agree

A Likert scale is used by raters, such as students in this case, to give their opinion on a number of items to be measured. So if one item was "Teachers are experts in their subject", then each student can give that statement a rating/score based on their personal opinion. A combination of all students' scores then gives a total view of a teacher's expertise. Adding together dimensions, indicators and Likert scales leads to a standard questionnaire with approximately 36 question/statements on it in the case of SEEQ, each one to be given a score of 1-5 by every student.

The SEEQ was developed in a higher education setting, and thousands of students were asked to complete the questionnaire on their professors in different faculties. By scoring each student's responses, a measure could be made of education quality by faculty, year, or professor. Then feedback discussions could take place to try to improve the quality of teaching. You can read a good review of Marsh's work, based on further studies at the University of Manitoba, here: {http://intranet.umanitoba.ca/academic_support/catl/media/seeq_booklet.pdf}

So, do NOT underestimate the work of Marsh, now a professor at the University of Oxford, UK, and reckoned to be the MOST prolific of researchers in the field of educational psychology. This is one of the few people that EVERY developing country should be reading about, listening to, and asking for help. But are they?
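The dimension-and-indicator scoring just described can be sketched in a few lines of code. The ratings below are hypothetical, and the indicator wording paraphrases Marsh's Group Interaction example; a real SEEQ administration would cover all 9 dimensions and about 36 indicators.

```python
# Sketch of SEEQ-style scoring: each indicator is rated 1-5 on a Likert
# scale, indicators are grouped under dimensions, and a dimension score
# is the mean rating over all raters and indicators.
from statistics import mean

# One dimension with its indicators (paraphrased; the full SEEQ has 9).
seeq = {
    "Group Interaction": [
        "Students were encouraged to participate in class discussions",
        "Students were invited to share their ideas and knowledge",
        "Students were encouraged to ask questions",
        "Students were encouraged to express their own ideas",
    ],
}

# Hypothetical Likert ratings (1-5): one list per student, one rating
# per indicator, in the same order as the indicator list above.
ratings = [
    [4, 5, 4, 3],   # student 1
    [3, 4, 4, 4],   # student 2
    [5, 5, 4, 4],   # student 3
]

def dimension_score(ratings):
    """Mean rating over all students and all indicators in a dimension."""
    return mean(r for student in ratings for r in student)

print(f"Group Interaction: {dimension_score(ratings):.2f} / 5")
```

Aggregating in this way per faculty, year, or professor is what made the SEEQ results comparable across a whole institution.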

2. MALS: Myself As A Learner Score

MALS was developed by the late Professor Bob Burden, University of Exeter, UK. It is a questionnaire with dimensions, indicators and a Likert scale, just the same as SEEQ, but the questions ALL ask the child to answer about THEMSELVES, not about their teachers or their school. It is marked and scored in the same way as SEEQ, but ultimately it focuses on how the child sees themselves educationally, in school, as a learner. Professor Burden introduced the MALS (Myself As A Learner Score) in the article "Assessing Children's Perception of Themselves ...", published in School Psychology International, (4), 291, in 1998, followed by a long report of some of his



applications of MALS. In a subsequent article, "Ability Alone Is Not Enough: How we think about ourselves matters too", Bob made the following significant points:

▪ It is a myth that success in school depends on one's IQ.
▪ Many studies show that measured IQ contributes only 40% to academic success.
▪ Sociologists assert that socio-economic factors play a significant role.
▪ But psychologists now assert that children's motivation is the key.
▪ Despite this, it is still unrecognised in many schools that successful learning is as much about the child's motivation as it is about their innate ability.

Messy, isn't it! So let's take this a step further; we have probably all got examples of people who were "useless" at school but who somehow did well in final examinations, or were extremely successful in later life, because they had a strong belief in themselves or had clear and focused goals. This leads us to the concept of self-perception, and if you want to read more on this, search the web for the research of the American psychologist Carol Dweck. She showed that whether a student sees their ability to perform a task as predetermined or as open to change strongly influences how they cope "when the going gets tough". Dweck's research also relates to Martin Seligman's work on Learned Helplessness and Learned Optimism. In other words, we LEARN to be helpless ("I can't do this") or to be optimistic ("I will do this"). If you want to explore this topic further, try Googling Attribution Theory.

(In 2012 we discussed the use of MALS in Nepal with Professor Burden, about which he was very enthusiastic. He freely gave us his questionnaire and instructions in return for our translating everything into Nepali language/script for him, which we gladly did. This was then added to his growing library of places around the world using MALS, which has widened even further since then. Sadly, Bob passed away in 2014, and Nepal's Ministry of Education is a lesser place for not engaging with us to pilot some of Bob's work to help Nepali primary children. But we at Nepal Schools Aid acknowledge all of Professor Burden's contribution to our own knowledge and programmes.)

We used MALS in a number of Kathmandu primary schools during 2012 and noted a number of examples of low and high self-perceptions among primary children as young as 7 years old which couldn't be explained by any features of the school or the teachers.
There seemed to be a simple correlation between higher achievers and a higher MALS score, but at that time we didn't have the resources to explore the results with individual children. This would have been a job for class teachers, but they were undertrained, and neither motivated nor committed to caring for individual children with definite



low perceptions of themselves. What we DID prove to our own satisfaction, however, was that perception tests using Likert scales could easily be understood by children as young as 7 years of age, and that the development of a SEEQ type of questionnaire was feasible. If you would like to read Professor Burden's original paper, "Ability Alone Is Not Enough: How we think about ourselves matters too", please click here: {https://qualityeducationglobal.files.wordpress.com/2016/01/1-mals-concept.pdf}

Comparison of MALS & SEEQ

Before we move on to detail the quality perception test we developed for Nepal, let's conclude this section with a summary of the main comparison points between SEEQ and MALS. As you read them, start to ask yourself whether it would be possible to develop a measurement tool/perception test to assess the quality of education in primary schools, with children aged 7-11 years completing the questionnaires.

1. SEEQ is measuring a student's perception of their teachers and their learning environment.
2. MALS is measuring a student's perception of themselves within their learning environment.
3. BOTH can be used with primary children above 7 years of age provided careful administration instructions are given.
4. BOTH have been shown to have high validity and reliability in a range of studies and applications around the world.
5. Perception tests could be a viable tool to measure issues related to Quality Education, PROVIDED the dimensions and indicators chosen for the questionnaire relate EXACTLY to the framework of Quality Education used within the school or system.
6. BOTH tools will provide leading indicators for Quality Education, available far in advance of the final outcomes of exam results.
7. The outputs from BOTH tools can be used to improve the quality of education in any school or education system, provided the tools are specifically designed for the country system and Quality Education Framework being used.

In the above list we were driven by a strong belief in item 5, and the following sections will outline the development of the Quality Education Perception Test (QEPT) in Nepal primary schools.



The Development of QEPT, the Quality Education Perception Test

We have already stated that perception tests could be a viable tool to measure quality education, provided the dimensions and indicators chosen for the test relate EXACTLY to the framework for quality education used within the system or the school. We have also written extensively about this, and we refer briefly to our earlier paper, "A Strategic Framework of Quality Education for Developing Countries", which you can view and download here: {https://issuu.com/brianmetters/docs/v2_a_framework_for_quality_educatio} The framework for quality we developed for Nepal is shown here with its 13 components:



These 13 components can be used as dimensions for a quality perception test, just as with the SEEQ developed by Professor Marsh described earlier. The five we chose were Child Needs, Curriculum, School Learning Environment, School Physical Resources, and Pedagogy & Assessment. From these we created a set of indicators as statements grouped under the 5 dimensions, and the questionnaire is shown here:



Using the QEPT in Nepal Primary Schools

In developing the QEI/QEPT we worked through 3 phases, as follows:

1. All children aged 8-11 years completed the QEPT in a batch of 21 Kathmandu government schools, and a QEI (Quality Education Index) score was calculated by adding up all of the scores from each child in each class. (At the same time, a set of pedagogy observations was made using another objective instrument to assess the level of Child Centredness in the classroom, which we will discuss later.)
2. Each school then participated in Nepal Schools Aid's School Development Programme (SDP) for about 4-6 months, in which they received training courses, in-school coaching, community workshops, and leadership training for the school principal.
3. The QEPT and pedagogy measurements were repeated on the same children/teachers in each school, and scores were calculated at the end of the development period.

Scores from a batch of schools are shown below, with the first column indicating the pre-SDP QEI scores (from the QEPT) and the second column showing QEI scores after 6 months.



This questionnaire, the QEPT, can give an individual score for a single student, a combined score for a class or year group, and a total score for a whole school. The whole-school score we called the Quality Education Index (QEI), on a scale of 0-100, and it is whole-school scores you see in the above table. It is also helpful to classify the schools according to school grades for development purposes, which we determined as follows:

1. Low Quality: QEI < 40
2. Medium Quality: QEI 40-69
3. High Quality: QEI 70-84
4. Outstanding Quality: QEI 85+
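The scoring and banding described above can be sketched as follows. Note that the text does not give the exact formula for scaling raw Likert responses to the 0-100 QEI, so the linear mapping below (mean item score 1 maps to 0, 5 maps to 100) is an assumption, and the responses are hypothetical.

```python
# Sketch of turning QEPT Likert responses into a 0-100 Quality Education
# Index (QEI) and classifying a school into the four quality bands.

def qei(responses):
    """responses: list of per-child rating lists (each rating 1-5).
    Returns a whole-school QEI on a 0-100 scale (assumed linear mapping)."""
    all_ratings = [r for child in responses for r in child]
    mean_rating = sum(all_ratings) / len(all_ratings)
    return (mean_rating - 1) / 4 * 100   # 1 -> 0, 5 -> 100

def quality_band(score):
    """School grade bands as defined in the text."""
    if score < 40:
        return "Low Quality"
    if score < 70:
        return "Medium Quality"
    if score < 85:
        return "High Quality"
    return "Outstanding Quality"

# Hypothetical school: three children answering a five-item questionnaire.
school = [
    [4, 3, 4, 5, 3],
    [3, 3, 4, 4, 4],
    [5, 4, 4, 3, 4],
]
score = qei(school)
print(f"QEI = {score:.0f} ({quality_band(score)})")
```

The same `qei` function applied to one child's responses gives the individual score, and applied to one class gives the class score, which is what makes the instrument usable at every level of aggregation.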

From our results it is clear that "something is happening" in these schools to change children's perceptions of their education, but what could it be?

Analysis & Discussion

There are a number of questions we would raise and try to answer in this section, related to improving whole education systems in developing countries as well as individual schools or groups of schools. We believe that an instrument like the QEPT can be a useful tool in system/school development work, provided it is used flexibly from one country to another, and systematically within a country or group of schools. The main questions we consider are as follows:

• How does the QEPT fit within the new Sustainable Development Goals (SDGs)?
• How does the QEPT play a part in whole system development?
• How does the QEPT play a part in school development?
• How can the QEPT be validated?
• What is the reliability of the QEPT?



1. How does the QEPT fit within the new Sustainable Development Goals?

The SDGs, which were developed in 2015, are a follow-on from the previous Millennium Development Goals (MDGs), the key MDG for education being stated as Universal Primary Education.

There are a number of targets for each goal, which are then broken down into Indicators for measurement and a good description of the Indicators for measuring progress of the education goal can be found here: {http://www.uis.unesco.org/Education/Pages/post-2015-education-indicators.aspx}



Here are two of the proposed UNESCO Indicators for "measuring quality of education" at primary school level:

• Percentage of children who achieve minimum proficiency standards in reading/mathematics at the end of (i) primary and (ii) lower secondary school
• Completion rate (primary, lower secondary, upper secondary)

Our view is that these are "lagging" indicators: things that can only be measured as outputs from the system, requiring a period of waiting before the data is available. And this has always been a major issue for us, that what is needed are some "leading" indicators: things which can measure the important INPUTS to education from a very early stage. This is vitally important. Would you, as a parent, want to wait 3-5 years to find out whether the proficiency standards or completion rates are being met for YOUR child? No, you would want to know early and regularly whether standards are being met, and this is the point of finding a tool/instrument that can be a proxy for quality, something that indicates whether all is well or action is needed. We believe the QEPT fills this gap, just as Professor Marsh's SEEQ did at tertiary level.

2. How does the QEPT play a part in whole system development?

Although all of our work has been in Nepal, it is clear that many developing countries are trying to develop their whole education system. For 15 years these countries have been "guided" by the MDG of Universal Primary Education, with two key measures related to enrolments and completion rates of children in primary schools. Unfortunately, most of the national efforts have been focused on getting more children to BEGIN school, with considerably less effort focused on what happens when children are actually in school, or on retaining them. Strategies were produced, plans created, massive aid funding obtained, but very little was achieved related to quality improvement within the primary system. In Nepal, for example, the School Sector Reform Plan (SSRP) mentioned Quality Education no fewer than 84 times, but then virtually ignored it in implementation: nowhere in the document was there any definition of quality, any indicators of quality, or any tool to conduct regular and ongoing measurement of quality.
It should be no surprise that retention and exam pass rates are still disastrously low after 7 years of implementation. Our assertion, therefore, is that a combination of the Quality Education Framework giving direction to the strategy, added to by the QEPT giving measurements at regular intervals, can provide a good measure of the progress of the strategy being implemented. Remember, the QEPT is a follow-on from SEEQ, developed by Professor Marsh and now used in many countries around the world.

3. How does the QEPT play a part in school development?

In the majority of attempts to improve the quality of a whole education system, a great deal of money is spent on retraining teachers. This is fine as long as the sole or main focus is NOT on individual teachers. The main unit for development should be whole schools, with



GROUPS of teachers receiving training as part of a wider plan for school improvement. Other factors should also come into play, such as better resources, values, and assessment strategy, as well as school leadership, governance, the curriculum, and much more. Teachers also need help and guidance inside the school to implement what has been trained, so observations, coaching, mentoring, learning groups and learning reviews all have a part to play. A whole school can therefore be shown the Quality Education Framework with all of the elements to be improved; then the QEPT tool can be used pre-implementation of any school improvement plan and at regular intervals, maybe every 6 months. Results from the school's QEPT can then be a formative assessment measure, so that the Principal and teachers can review progress and determine the changes needed for improvement. Unfortunately, in Nepal, nothing like this is happening. Teachers are trained, return to school, then are abandoned! No coaching, no measurement, no change!

4. Validity and Reliability of the QEPT

These two concepts are extremely important to establish the "usefulness" of any psychologically based questionnaire.

"Reliability deals with the extent to which a measure is repeatable or stable. Essentially reliability refers to the consistency of a measure. A reliable measure would return the same results from time to time, assuming the underlying phenomenon which we were measuring has not changed."

During the development of the QEPT we conducted a Test-Retest exercise in 3 schools with approximately 300 children. The QEPT was run in the morning for Classes 3-6 in each school, then repeated about 4 hours later in mid afternoon. Results were as follows:

School | QEI 1 | QEI 2
1      | 68    | 72
2      | 67    | 73
3      | 65    | 69

The results show approximately a 6% variation in a school's QEI score between the morning test and the afternoon test. We have not extended the reliability tests, nor conducted any detailed statistical analysis. We are not trying to assess the personality or ability of individuals with this test; it is a simple measure of children's perception of their education quality, and we believe this result is quite acceptable within the context of school development.
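The test-retest comparison above is a simple calculation. The text does not specify exactly how the "approximately 6% variation" was computed, so the sketch below shows both the absolute QEI difference and the relative change for each of the three schools in the table.

```python
# Test-retest check using the three schools from the table above:
# morning (QEI 1) versus afternoon (QEI 2) administrations of the QEPT.
results = {1: (68, 72), 2: (67, 73), 3: (65, 69)}  # school: (QEI 1, QEI 2)

for school, (morning, afternoon) in results.items():
    diff = afternoon - morning
    pct = diff / morning * 100
    print(f"School {school}: +{diff} points ({pct:+.1f}%)")
```

A more formal treatment would report a test-retest correlation or intraclass correlation coefficient, but as the text notes, no detailed statistical analysis was attempted.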



"Validity refers to the extent to which a study actually captures or measures what it purports to examine. Two broad categories of validity are external and internal validity. External validity refers to the generalisability of the research findings. For example, would our findings from a particular study also apply to other groups, other geographical areas, etc.? Internal validity essentially refers to how confident the researcher is in the study results. This type of validity encompasses a range of issues from how rigorous the study was in terms of research design, sample, measures, etc. to whether the measures are measuring what they are supposed to measure. In this description of validity we will focus on this second area, the extent to which measures capture what they are purported to capture."

So, is the QEPT measuring what it is meant to be measuring? From a design viewpoint, 5 dimensions/items were chosen from the Nepal Quality Education Framework, which were then converted into a number of indicator statements. For example, #5 Pedagogy & Assessment contains four statements classically describing the required inputs for Child Centred Development. Therefore we can assert that, internally, the questions themselves relate directly to the requirements of quality education, both generally and as specified by the Nepal Quality Education Framework. In addition, we conducted separate assessments of changes in pedagogy style and measured movements from a teacher-centred style to a child-centred style. Some results comparing child centredness with QEI scores are shown below:

Once again there appears to be some correlation between changes in pedagogy style and an increasing Quality Education Index, showing that the QEPT is measuring the effect of pedagogy on quality as measured by children's perceptions. Finally, we have begun to monitor exam results and completion rates, but this is obviously a longer study and will take some time to gather the data. For the moment, however, we believe we can state that the QEPT has an acceptable level of validity.
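The figure comparing child centredness with QEI scores is not reproduced here, so the data below is entirely hypothetical; it only sketches how the apparent correlation could be quantified with a Pearson coefficient before drawing conclusions from it.

```python
# Quantifying the relationship between a pedagogy (child-centredness)
# score and the QEI with a Pearson correlation coefficient.
# NOTE: both data series are hypothetical illustrations.
import math

child_centredness = [35, 42, 50, 58, 63, 71]  # hypothetical observation scores
qei_scores        = [48, 55, 61, 66, 70, 78]  # hypothetical matching QEI scores

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(f"Pearson r = {pearson(child_centredness, qei_scores):.2f}")
```

A coefficient near +1 would support the claim that schools becoming more child-centred also score higher on the QEI, though correlation alone would not establish that the pedagogy change caused the perception change.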



Conclusions & Future Use of QEPT

We make the following key points about our research and the creation of the QEPT:

1. The Quality Education Perception Test is based on previous, well-validated research into using perception tests in quality education development, notably the SEEQ and MALS.
2. The dimensions and indicators in the QEPT were selected from a previously researched framework defining quality education in Nepal primary schools.
3. Increases in QEPT scores for a whole school occurred after specific training for teachers and in-school coaching, and also correlate with increases in the occurrence of child centred pedagogy.
4. Development of education systems globally would benefit from a tool such as the QEPT as part of their development strategy, provided it relates to and/or is based on a clear framework of Quality Education.

We offer the QEPT to anyone who works in education development anywhere in the world, whether you are in a Ministry or Department of Education or an INGO/NGO. If you would like a copy of the QEPT and score sheet, then please write to us at brianmetters1@icloud.com, telling us something about yourself, your work, and the uses to which you will put the QEPT. You can learn even more about developing quality education from our online learning website at {www.qualityeducationglobal.org}

The Authors

Brian Metters, BA (Psychol.), MSc, PhD, is the Chairman of Nepal Schools Aid, a UK registered charity working to develop the quality of education in Kathmandu primary schools. Now retired from business, Brian is an organisational psychologist who specialised in change management, mostly within the financial services industry of the UK. He has been involved in charitable fundraising for Cancer Research, The Big Issue (for the homeless) and disadvantaged children in Nepal. {www.nepalschoolsaid.org}

Babita Shrestha, BBS, BEd, MA (Sociology), is the co-founder and chairperson of the Nepal Education Leadership Foundation.
She has 6 years' teaching experience in Nepal primary schools and has specialised in English and phonics teacher training, as well as providing child centred teacher training to several hundred primary teachers in Nepal.

Sangita Bhandari, BEd, MEd, is the co-founder of the Nepal Education Leadership Foundation in Kathmandu and the manager of their School Development Division. She has 5 years' teaching experience specialising in mathematics and has received specialist training in several UK primary schools.

