Compiling the Final Ranking

As described in Chapter 6.1.4, an additional jury session had to be organized for Saturday morning to verify our marking. As a result, it was impossible to hold a session to discuss the final ranking with the jury. We, the organizers, therefore asked the jury on Friday to trust us with compiling the final ranking, which they granted with tremendous support. The compilation of the final ranking is described in the rules as follows:

“The final ranking of the students is based upon their equally weighted scores for theory and practical tasks according to the t-score method. This is achieved by taking the average of the four t-scores of the practical tasks and taking the t-score of the total result of the students on both theory parts. The final score is the sum of these two. Applying an unequal balance between theory and practical tasks requires the approval of the International Jury.”

The rules clearly state that the t-score procedure should be used to enforce an equal weighting between the two parts. The t-score method consists of two steps: first, the scores of the two exams are standardized to equal variance (note that unequal means do not affect the relative weight of the exams); the standardized values are then combined into the final score.

However, the rules erroneously assume that taking the average of the standardized practical scores (the practical t-scores) yields a score that is itself standardized. Since the scores obtained in the practical exams are not perfectly correlated, the average of the practical t-scores always has a variance below 1, even though the variance has been standardized to 1 for each practical individually. In the case of the IBO 2013, the actual variance of the averaged t-scores was only 0.7. Applying the formula given in the rules would hence have led to a much larger relative weight of the theoretical exam. A proper application of the t-score method requires re-standardizing the averaged practical score to restore equal variance, and hence equal weight, of the theoretical and the practical exam.

The rules are therefore contradictory: they vigorously demand an equal weighting of the theoretical and practical exams, while the described procedure always leads to an uneven weighting unless the practical scores are perfectly correlated. As a result, we were unable to satisfy the rules in their entirety and faced the choice of either breaking the requirement for equal weighting or applying a different formula than the one stated in the rules. Convinced that the spirit of equal weighting mattered more to the majority of the jury than a by-the-book application of the formula, we chose the second option and added an additional standardization step for the average of the practical scores.
After intense discussion, this decision finally received the full support of the AB meeting in Bangkok on November 7/8, 2013.
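The variance-shrinkage problem described above, and the extra re-standardization step, can be illustrated with a short numerical sketch. The data below are simulated (a hypothetical cohort with four correlated practical exams); the cohort size, correlation structure, and score scales are illustrative assumptions, not the actual IBO 2013 results.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 240  # hypothetical cohort size (an assumption, not the real IBO figure)

# Simulate four practical exams that share a common ability factor,
# so they are positively but not perfectly correlated.
ability = rng.normal(size=n)
practical_raw = np.column_stack(
    [0.7 * ability + 0.7 * rng.normal(size=n) for _ in range(4)]
)
theory_raw = 2.0 * ability + rng.normal(size=n)

def t_score(x):
    # Standardize to mean 0 and variance 1 (the equal-variance step).
    return (x - x.mean()) / x.std()

# Step 1: t-score each exam separately.
practical_t = np.column_stack([t_score(col) for col in practical_raw.T])
theory_t = t_score(theory_raw)

# The formula as written in the rules: average the four practical t-scores.
# Because the practicals are not perfectly correlated, this average has
# variance below 1, so the theory score would dominate the final score.
practical_avg = practical_t.mean(axis=1)
print(f"variance of averaged practical t-scores: {practical_avg.var():.2f}")

# The extra step applied at IBO 2013: re-standardize the average so both
# components enter the final score with equal variance, i.e. equal weight.
final_score = t_score(practical_avg) + theory_t
print(f"variance after re-standardization: {t_score(practical_avg).var():.2f}")
```

Running this shows a variance well below 1 for the plain average and exactly 1 after re-standardization, mirroring the situation described above (where the observed variance was 0.7).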


IBO 2013 Final Report