
Staff experience of

Student Evaluation of Teaching and Subjects/Units

June 2018


Contents

1. Introduction
2. Demographic and Employment Characteristics of Respondents
3. Conduct of Evaluations
4. Use of evaluation results
5. Staff response to evaluations
6. Disrespectful and/or abusive comments
7. Institutional responses to disrespectful/abusive comments
8. Conclusions

Staff experience of Student Evaluation of Teaching and Subjects/Units
ISBN 978-0-6482106-3-4
© NTEU June 2018
This report was prepared by the NTEU Policy and Research Unit, 120 Clarendon Street, South Melbourne VIC 3205. Phone 03 9254 1910. Coordinator: Paul Kniest (pkniest@nteu.org.au)
NTEU would like to thank everyone who responded to the survey, as well as those who provided input on drafts. Queries and feedback welcome.
Jeannie Rea, National President



Executive Summary

Student evaluations of teaching and units of study are predominantly standardised, compulsory online exercises undertaken every time a unit is offered, over which teaching staff have little input or control in terms of content or timing.

In April 2018 NTEU conducted a survey of members about their experience of student evaluations of their teaching and of subjects/units. The survey consisted of 31 questions covering a range of topics, including issues related to the structure and conduct of student evaluations, as well as how the results are used. The survey also asked a series of questions about the extent to which survey responses contained disrespectful or abusive comments, and how staff and institutions responded when this occurred. The survey was developed at this time in response to mounting anecdotal commentary on the latter point. Concerns about the use of a student feedback mechanism in measuring staff performance, the effectiveness of the largely generic online surveys as a tool of continuous improvement in teaching and learning, and other related issues have been ongoing concerns of NTEU and are articulated in NTEU policy positions, which are revised by the annual NTEU National Council Meeting.

NTEU members engaged to teach at least one higher education unit were invited to participate in the survey, which opened on 17 April 2018 and closed on 4 May 2018. There were 3,065 responses, of which around 2,500 contained enough information to be included in the analysis.

The key findings of the survey are:

• Poor response rates: Almost six-in-ten respondents reported a response rate of 30% or less on student evaluation surveys.
• High rate of use in appraising staff performance, contrasting with low staff confidence in them as a tool for this purpose: Almost nine-in-ten reported that the surveys are used to appraise and manage staff performance, but well under one-in-five respondents reported that evaluations gave an accurate measure of their performance.
• Informing improved teaching and delivery: Six-in-ten survey respondents felt that student evaluations could be used to inform and improve teaching or the delivery of subjects or units, while only four-in-ten agreed that they were used for this purpose.

The purpose of student evaluation surveys is to give students an opportunity to provide feedback. They were originally conceived to assist staff in making changes to improve student experience and success. They were not designed as a metric in staff performance management, and there are problems with their use as such a tool. However, the largely online and anonymous (or de-identified) surveys have also opened the door to significant levels of abuse by student respondents, particularly towards women staff and/or those from minority groups. The survey found that:

• Disrespectful and abusive comments: Six-out-of-ten respondents said that some students had used the evaluations to make disrespectful and abusive comments. Based on open-ended responses, by far the most common category of these comments related to an individual's competency to teach a subject. Other common themes were comments about gender, cultural background and spoken English, age (too old and too young), personality/attitude, favouritism and political views.
• Response to disrespectful and abusive comments: Seven-in-ten respondents who reported such comments felt distressed, angered, fearful, self-conscious or embarrassed after reading them. Although lower, still significant numbers also experienced real physical symptoms including loss of sleep or appetite, or loss of motivation.
• Making a complaint: Just over one-in-four respondents who were the subject of disrespectful or abusive comments made an official complaint.
• University responses: The survey results indicate that universities undertook an investigation in just over one-in-ten cases after a formal complaint was raised. Very little effort seems to have been made to identify or discipline students. A common response to a formal complaint was to offer the staff member support or counselling, or to agree to modify the evaluations process by removing or redacting offensive or abusive comments.

Given these results, it is little wonder that only just over one-in-ten respondents were satisfied with the student evaluation of teaching survey at their institution. Not only is the value of these evaluations all but meaningless in terms of appraising or managing staff performance, they also potentially raise workplace health and safety issues, as well as potential cases of unlawful discrimination.

Unfortunately, the standardised online one-size-fits-all evaluation, which is relatively efficient to administer and analyse, is not fit for purpose as a tool to inform or improve subjects/units or their delivery. Low student response rates indicate that students also see little point, and biased and abusive responses could even have legal implications. These surveys seem to serve little purpose other than providing a (dubious) metric for quality assurance.

When issues about the limited validity and misuse of student evaluations have been raised with university management, the response has been that university funding is dependent on such evaluations being undertaken. This is not the case. No public funding is tied directly to a requirement to undertake student evaluations of teaching or units. The use of student evaluations of teaching or units forms part of the Higher Education Standards enforced by the Tertiary Education Quality and Standards Agency (TEQSA). The TEQSA Guidance Note on Academic Quality Assurance says that the provisions of Section 5.3 of the 2015 Standards (Monitoring, Review, Improvement) "require a fundamental, comprehensive review of courses and course delivery at least every seven years, and speak to the scope of such reviews. These periodic overall reviews of courses of study are expected to be informed and supported by more frequent monitoring of course performance at unit level, and a provider's review activities are expected to encompass external referencing against comparable courses (including student performance data) and to be informed by student feedback." The Standards do not specify the form of student feedback.

As a consequence, the NTEU recommends that universities abandon student evaluations of teaching as currently used at our universities.

Jeannie Rea
National President
June 2018



Summary of results

Respondents

The demographic breakdown of the usable responses was:

• Male 1,059 (42.7%)
• Female 1,369 (55.1%)
• Non-binary/third gender 10 (0.4%)
• Prefer not to say 45 (1.8%)

Of whom:

• 96% were NTEU members;
• 83% were full-time;
• 78% had ongoing jobs (9% each casual and limited term);
• 1% identified as being Aboriginal and/or Torres Strait Islander;
• 8% identified as Lesbian, Gay, Bisexual, Transgender, Intersex and/or Queer (LGBTIQ);
• 3% indicated that they had an observable permanent disability;
• 61% were born in Australia; and
• 83% had English as their first language.

Conduct of evaluations

The results show that individual staff have very little say or control over the conduct of student evaluations, which are highly standardised, compulsory, university-wide online exercises. The results showed that:

• only 6.4% of respondents indicated that they could opt out of the evaluations (another 11.8% said they were unsure);
• 94% said their student evaluations are conducted online, with only about 3% saying they were done on paper and just over 2% indicating that they were conducted both online and on paper;
• in the overwhelming majority of cases (87.6%), universities require that a student evaluation is conducted every time a subject is offered;
• almost all (90.9%) evaluations include elements that are standardised across the institution;
• only half of respondents (50.4%) indicated that they were allowed to add further questions (20% unsure);
• only one-in-five (22%) of respondents said they could add questions that were not from a predetermined set or list (26% unsure);
• in almost nine-out-of-ten cases (84.6%) respondents indicated the student evaluation contained open-ended response questions, with 9% saying no and 6.4% saying they were unsure; and
• for 94.1% of respondents, the university also determined when the evaluations were to be conducted, with about 10% being conducted in the first half of any teaching period.

Response rates

The results show (see Figure A below) that:

• well over half (57.1%) of the evaluations had a response rate of less than 30%; and
• almost three-quarters (74.5%) had a response rate of less than 40%.


Figure A

The most common response to open-ended questions about the conduct of the survey related to low response rates.

Use of evaluation results

Performance / contract renewal

The data show that:

• in eight out of ten cases (78%) student evaluations of teaching and/or courses are used to assess individual staff performance and in promotion and/or renewal of contracts; and
• the one outstanding exception to this related to people employed as casuals, where the use of student evaluations for performance or contract renewal, at 51.3%, is significantly lower than for other staff. This was also reflected in the results for people identifying as Level A, of whom over one third (35%) were engaged as casuals.

Informing and improving subjects

The data show that:

• about six-in-ten respondents (60.5%) said that student evaluations could be used to inform and improve teaching practice; but
• only four-in-ten (40.3%) stated they believed evaluations were used for this purpose.



Staff response to evaluations

The results indicate that:

• only just over one-in-ten (12%) of all respondents strongly agreed or agreed with the statement that they were satisfied with the overall use of student evaluations at their university;
• there were considerable variations based on demographic and employment characteristics (males more satisfied than females) and level of appointment, with higher levels being more satisfied;
• 15.4% of respondents agreed or strongly agreed that the student evaluations gave an accurate measure of student attitudes to their teaching or subject/unit, with males more likely to agree than females; and
• those for whom English is their first language had a higher level of agreement at 16%, compared to 13.1% for those where this is not the case.

Disrespectful/abusive comments

The data reveal that:

• overall, about six-out-of-ten (61.9%) respondents indicated that they had received disrespectful or abusive comments in open-ended responses;
• 30.2% received such comments in relation to appearance;
• 14.4% received such comments in relation to spoken English; and
• 27.3% received such comments in relation to religion, culture, sexuality or disability.

Table A below shows how respondents reacted to these comments.



Table A



1. Introduction

In April 2018 NTEU conducted a survey of members about their experience of student evaluations of their teaching and of subjects/units. The survey consisted of 31 questions covering a range of topics, including issues related to the structure and conduct of student evaluations, as well as how the results are used. The survey also asked a series of questions about the extent to which survey responses contained disrespectful or abusive comments, and how staff and institutions responded when this occurred. The survey was developed at this time in response to mounting anecdotal commentary on the latter point. Concerns about the use of a student feedback mechanism in measuring staff performance, the effectiveness of the largely generic online surveys as a tool of continuous improvement in teaching and learning, and other related issues have been ongoing concerns of NTEU and are articulated in NTEU policy positions, which are revised by the annual NTEU National Council Meeting.

NTEU members engaged to teach at least one higher education unit were invited to participate in the survey, which opened on 17 April 2018 and closed on 4 May 2018. There were 3,065 responses, of which around 2,500 contained enough information to be included in the analysis below.

The following provides a summary of the responses divided into the following sections:

Section 2 provides a summary of the important demographic and employment characteristics of respondents.
Section 3 presents the data on how student evaluations are conducted.
Section 4 considers how the results of student evaluations are used.
Section 5 looks at staff responses to evaluations.
Section 6 examines the extent and nature of disrespectful and/or abusive comments.
Section 7 scrutinises institutional responses to disrespectful and abusive comments.
Section 8 draws together the results of the survey and offers a number of suggestions as to how institutions and individual staff can better utilise student evaluations and minimise the incidence and potentially very harmful impact of disrespectful and abusive comments.

The results of the NTEU survey presented in this report do not purport to be a representative, unbiased sample of Australian university staff. What the results represent is an analysis of the views of about 2,500 staff involved in teaching subjects or units at Australian universities this year, whose efforts are subject to student evaluation surveys. It would be ironic if universities were to dismiss the findings of this survey on the basis that it is not an unbiased representative sample, which is exactly the same criticism made of student evaluations of staff performance. Yet universities continue to use student evaluations for important managerial decisions regarding staff selection, probation, promotion and contract renewal.



2. Demographic and Employment Characteristics of Respondents

Demographics

Table 1 provides the gender breakdown of respondents for which information was available.

Table 1

Gender                     Number       %
Male                        1,059    42.7%
Female                      1,369    55.1%
Non-binary/third gender        10     0.4%
Prefer not to say              45     1.8%
Total                       2,483   100.0%

Of the respondents:

• 96% were NTEU members;
• 1% identified as Aboriginal and/or Torres Strait Islander;
• 8% identified as Lesbian, Gay, Bisexual, Transgender, Intersex and/or Queer (LGBTIQ);
• 3% indicated that they had an observable permanent disability;
• 61% were born in Australia; and
• 83% had English as their first language.

Figure 1 shows that the age distribution of respondents is skewed toward older age brackets, with over half of respondents being 50 or older. While the proportion of females, at 55.1%, is roughly in line with the sector-wide average for academics of about 53%, the age profile of respondents was older, with only about 41% of academic staff in the sector being fifty or older. (Department of Education and Training Staffing Statistics 2017, Table 2.9)

Ancestry

In addition to being asked whether they were born in Australia, respondents were also asked about their ancestry, or what part of the world their family came from, using ABS categories. Respondents could provide more than one response, with a significant number indicating both Australian and some other ancestry. Where a respondent indicated dual ancestry, one of which was Australian, their response has been included with the non-Australian cohort. For example, someone who indicated Australian and North African/Middle Eastern ancestry was classified as the latter. The small number of people indicating more than two ancestries has not been included in the analysis. It should also be noted that, due to the small number of responses, it was necessary to aggregate Africa (and the Middle East), Europe and Asia as whole regions rather than into sub-regions of those continents. The number and proportion of responses by region is shown in Table 2, which indicates that almost 80% of respondents were of European or Australian ancestry.



Table 2

Ancestry              Number       %
Australia                855    35.6%
Asia                     237     9.9%
Europe                 1,089    45.4%
North America             83     3.5%
South America             17     0.7%
Africa/Middle East        45     1.9%
Oceania                   75     3.1%
Total                  2,401   100.0%

Figure 1

Nature of Employment

Eight-in-ten (83%) respondents said that they worked full-time, with the remaining 17% employed on a part-time basis. In terms of their contract of employment, eight-in-ten (81%) respondents indicated that they had an ongoing appointment, 9% were on fixed-term contracts and 10% were employed on a casual basis. This is not representative of the sector-wide composition, where the latest WGEA data show that on a headcount basis only one in three university staff (35%) enjoy ongoing or secure employment. (NTEU, The Prevalence of Insecure Employment at Australia's Universities: An Examination of Workplace Gender Equality Agency (WGEA) University Staffing Profiles)

Figure 2 shows that the level of appointment of respondents was very concentrated at Lecturer (Level B) and Senior Lecturer (Level C), which together accounted for more than 60% of respondents. Please note that for the purposes of the following analysis, the handful of respondents who said they were tutors have been included amongst Level A academics. Compared to sector-wide averages, respondents to the NTEU survey were more concentrated at Levels B and C (62.3% compared to 53.6%) and underrepresented at lower (Level A) and higher (Levels D and E) levels. (Department of Education and Training Staffing Statistics 2017, Table 2.9)

Figure 2



3. Conduct of Evaluations

Respondents indicated that the vast majority (94%) of student evaluations are conducted online, with only about 3% saying they were done on paper and just over 2% indicating that they were conducted both online and on paper. Based on the responses to the NTEU survey, it seems that student evaluations of teaching or courses were effectively compulsory, with only 6.4% of respondents indicating that they could opt out of the survey. While another 11.8% said they were unsure as to whether they could opt out, eight-in-ten respondents (81.8%) clearly indicated that this was not an option.

Based on responses to the NTEU survey it seems that student evaluations of teaching and/or courses are becoming a standardised university-wide exercise, with the results showing that:

• in the overwhelming majority of cases (87.6%), universities require that a student evaluation is conducted every time a subject is offered;
• almost all (90.9%) evaluations include elements that are standardised across the institution;
• only half of respondents (50.4%) indicated that additional questions could be added (20% unsure);
• only one-in-five (22%) of respondents said they could add questions that were not from a predetermined set or list (26% unsure); and
• in almost nine-out-of-ten cases (84.6%) respondents indicated the student evaluation contained open-ended response questions, with 9% saying no and 6.4% saying they were unsure.

Not only are universities making predominantly standardised online student evaluations of teaching or courses compulsory, in 94.1% of cases they also determine when these surveys are to be conducted. As the data in Figure 3 show, about 10% of evaluations were conducted in the first half of the teaching period, with the bulk conducted from about three-quarters of the way through the period, but before final assessment was concluded. For the fewer than 20 respondents who indicated that they conducted more than one evaluation during the teaching period, their responses have been included in both periods (double counted).



Figure 3

Response rate

The NTEU survey also asked respondents what the response rate was for their subject/unit or teaching in the most recent evaluation. The results are shown in Figure 4, which shows that:

• well over half (57.1%) of the evaluations had a response rate of less than 30%; and
• almost three-quarters (74.5%) had a response rate of less than 40%.

In other words, only about one-in-four student evaluations had a response rate of more than 40%.



Figure 4

Respondents were also asked whether there is a defined minimum sample size (by either proportion or number) for each individual evaluation, to which 37.2% of respondents answered yes. While 43.8% said no, it should be noted that a further 19% indicated that they were unsure. It is worth noting that while a university may have set a minimum sample size, it is unclear on what basis that was determined. One respondent noted that:

"(O)ver 5 responses is considered a representative sample. So from a class of 30 students, only 5 may fill out the survey and all could respond negatively. This has happened a number of times and these negative results call the class into review in front of a university education committee."

Setting a minimum sample size which is not based on ensuring statistically significant results is meaningless. For example, for a class of 30 students, if you wanted to achieve a result with a 90% confidence level and 10% margin of error you would need at least 20 responses (see Sample Size Calculator); a worked sketch of this arithmetic follows the comments below.

Low response rates were a major issue for a large number of survey respondents, with the following being but a small sample of specific comments about response rates:

• Pseudo statistics given low response rates.

• .. almost never sample a significant proportion of students enrolled in a unit - in my experience, between as little as 10% and 45%; they fail the basic test of validity used in any statistical analysis.

• A 15% response rate is meaningless. To me, I consider that non responding 85% of the class are satisfied, as it is usually the complainers that respond.

• Arguably, the major problem is bias in the sampling. Often only a small proportion of students complete the survey and an even smaller proportion reply to the open ended questions so there is a risk that the results are not representative of the class.

• …they are worthless because we get less than 10% of student body filling out these evaluation forms.

• For most of the courses evaluated in my school the response rates are too low to be statistically significant and so do not provide accurate feedback.

• I find the results from the student surveys are either so small in terms of response rates that they are next to useless.

• The university accepts a 5% response rate as adequate (which is a joke) and then the rating is used for performance and promotion.

• I have very strong concerns in regards to response rates. When these are low the survey is extremely vulnerable to selection bias and has very large uncertainties.

• The surveys are fine, the problem is the response rates. Currently response rates are so low the quantitative data is effectively useless.
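To make the sample-size point above concrete, the following is a minimal sketch of the standard arithmetic: the normal-approximation sample size formula with a finite population correction. The function name and defaults are illustrative, not taken from the report or from any university's methodology.

```python
import math

def required_responses(class_size: int, confidence: float = 0.90,
                       margin_of_error: float = 0.10) -> int:
    """Minimum survey responses for a proportion estimate from a finite class.

    Uses the normal-approximation sample size formula with the most
    conservative assumption (p = 0.5), then applies the finite population
    correction. Illustrative only.
    """
    # z-scores for common confidence levels (hard-coded to avoid a scipy dependency)
    z = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}[confidence]
    # Infinite-population sample size at worst-case variance (p = 0.5)
    n0 = (z ** 2) * 0.25 / margin_of_error ** 2
    # Finite population correction for a small class
    return math.ceil(n0 / (1 + (n0 - 1) / class_size))

print(required_responses(30))   # ~21 responses, i.e. a ~70% response rate
print(required_responses(150))  # ~47 responses, still roughly a third of the class
```

On this arithmetic, a class of 30 needs roughly 20 responses (about a 70% response rate) for even modest precision, which puts the 57.1% of evaluations with response rates below 30% well short of statistical usefulness.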

As the results of the NTEU survey show, predominantly low response rates, in the absence of any systematic statistical validation of results across the sector, must raise serious questions about the use of these evaluations, especially where the results are used in relation to staff selection, performance and/or promotion.

In relation to information provided to students, the NTEU survey showed that when asked whether students are referred to the Student Code of Conduct (or similar code) when undertaking the evaluation, only 6.4% of respondents said they knew students were definitely referred. While the majority - almost six-out-of-ten (59.3%) - of respondents said they were unsure, a significant percentage, 34.4%, said they knew students were not referred to the relevant student code of conduct. In 90.3% of cases students are told that their responses are anonymous (with 8.9% unsure). However, when asked whether students undertaking the survey can still be identified, only 54.5% of respondents said definitely not, with another 37.5% unsure.



4. Use of evaluation results

Performance, promotion and contract renewal

Respondents were asked whether student evaluations were used to assess their performance and whether they were used in determining promotions or renewal of contracts. The proportions of respondents that answered frequently or always to these questions, broken down by selected demographic and employment characteristics, are shown in Figures 5 to 8 below. The data show that in the vast majority of cases (80% or more) student evaluations of teaching and/or courses are used to assess individual staff performance and in promotion and/or renewal of contracts. The outstanding exception to this finding for both questions relates to people employed as casuals, where the use of student evaluations for performance or contract renewal is significantly lower than is the case for other staff. This is also mirrored in the results for Level A respondents, perhaps largely due to the fact that over one third (35%) of Level A respondents were casuals, compared with only 2.2% of staff classified at Levels B to E. While there appears to be variation in results for respondents with different demographic characteristics, the results for people who identify as Aboriginal and/or Torres Strait Islander (A&TSI), as non-binary or third gender, or as having South American or African/Middle Eastern ancestry need to be treated with a degree of caution, as the number of usable results, less than 50, is relatively small.

Figure 5



Figure 6

Figure 7



Figure 8

Given the importance of these evaluations for assessing individual staff performance as well as in promotion and renewal of contracts, it is perhaps not surprising that when respondents were asked whether they had considered amending assessment grades or feedback so that students rate their teaching more favourably, about one in five (20.9%) responded that they had considered this course of action frequently or always. Staff on fixed-term contracts had a slightly higher propensity to say frequently or always (23.8%) compared to the average. If some staff are giving students more favourable grades than would otherwise be the case in the belief that this would improve students' evaluations of their teaching, then student evaluations have the potential to significantly undermine academic integrity.

Informing and improving units/subjects

In addition to the summative managerial uses of student evaluations in performance appraisal and in promotion or contract renewal discussed above, student evaluations can also be an essential component of the formative process used to help inform and improve the delivery of teaching and other learning resources. This is the stated purpose of student evaluation instruments; their (mis)use in staff performance management only became attached subsequent to the widespread use and encouragement of student evaluation tools.

Figure 9 shows the results for two questions included in the survey which asked respondents to indicate their level of agreement/disagreement with the following statements:

• Student evaluation surveys can inform and improve teaching practice.
• Student evaluation surveys are used to inform and improve teaching practice in my experience.

Respondents were also asked whether student evaluations can inform and improve teaching practice when complemented by other evaluation methods (such as peer review). The results show that about six-out-of-ten (60.5%) of all respondents, and a similar proportion across most demographic and employment categories, believe that student evaluations can be used to inform and improve teaching practice. This belief is strengthened when evaluations are used in combination with other methods, with the proportion rising to about seven-out-of-ten (70.3% - data not shown). Therefore, a significant majority of those surveyed clearly believe student evaluations have the potential to improve teaching practice. By contrast, however, only about four-in-ten (40.3% - Figure 9) believed that student evaluations were actually being used to inform and improve teaching practice. The results by employment characteristics do not show any discernible difference between modes of employment or levels of appointment.

The lack of follow-up in using the results of student evaluations to inform and improve teaching practice is also reflected in the responses to a question which asked how frequently all teachers (including casuals) involved in a unit engage in discussions about using student evaluations to implement improvements to a unit/subject. Overall, only 37.1% (data not shown) of respondents said this occurred frequently or always. Interestingly, the response amongst staff who identified as Aboriginal and/or Torres Strait Islander (A&TSI) was significantly higher than the overall average, at 56.5%. However, as noted earlier, some caution needs to be exercised with this result due to the relatively small number of responses.



Figure 9



5. Staff response to evaluations

As shown in Figure 10, only just over one-in-ten (12%) of all respondents strongly agreed or agreed with the statement that they were satisfied with the overall use of student evaluations at their university. Unlike for most of the questions analysed so far, there appear to be some significant differences based on both the demographic characteristics of respondents (Figure 10) and their employment characteristics (Figure 11). While the proportion of respondents agreeing or strongly agreeing is very low across all characteristics, the data suggest that males (14.9%) are more likely to be satisfied than females (9.7%). Again, while still only representing one-in-five (20.5%), respondents who identified as having North American ancestry were also more likely to agree than other demographic groupings.

Figure 10

Figure 11 clearly shows that the higher the level of appointment, the higher the level of satisfaction with the use of student evaluations.



Figure 11

The proportion of staff that either agreed or strongly agreed that student evaluations gave an accurate measure of student attitudes to their teaching or subject/unit, at 15.4% (Figure 12), was slightly higher than the overall level of satisfaction but still well below one-in-five respondents. The data again show that males, and those with North American ancestry, were more likely to agree. The data in Figure 12 also reveal a noticeable difference between those for whom English is their first language and those for whom it is not, with the former having a higher level of agreement at 16% compared to 13.1%. There was no clear pattern in relation to employment characteristics (data not shown).



Figure 12

As noted above, one of the major issues raised about the conduct and use of student evaluations was the inherent bias associated with low response rates and whether the results come from a truly representative sample of students (a small simulation following these quotes illustrates the effect). The following responses are but a handful of the numerous responses that amplified these issues:

• Response rates are usually so low they are of little use and results tend to be skewed as it is usually only students with an axe to grind that respond. It is preposterous to think that these evaluations are even considered as part of a promotion application as they are in no way indicative of teaching quality.

• We lost a lot of value when we went from paper to electronic evaluation. Now we just get aggregate feedback - so we can't distinguish between a student who gave low scores to every criteria (or high scores to every criteria), nor can we match positive comments with negative comments, nor can we match comments with scores - so it makes the feedback much harder to interpret and therefore less useful.

• I think student evaluation can possibly be a good thing that contributes to academics' ongoing topic maintenance/revision etc. However, there's plenty of research showing they are gendered, and often filled out by students who either love/hate you. None of these 'subjective' and/or 'qualitative' contexts are taken into account by management when SETs are used in our own performance management.

• Student surveys are useful to gain general ideas on preferences for certain teaching modules, e.g. I have asked students to indicate their preference for group work vs full class discussion, and whether in future more or less emphasis should be given to different aspects of the course. With a good response rate it's possible to get some qualitative idea of preferences. But as quantitative data it's almost useless - only the most enthusiastic and most unhappy students reply when it's optional, creating an unrepresentative sample, and when it's done in class and everybody has to fill it out, most of the students are fairly apathetic and have nothing much to add.

• The main problem with the surveys is non-response bias, so only students who love or hate the unit bother to evaluate it. … The problem has got worse now they are done online, with typical response rates of about 20%-40%. The written comments from the evaluations are the most useful.

• Poor response rates are main issue - can't rely on results as it is usually either the most happy or most disgruntled students who bother to complete the questionnaire.
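The following is a minimal simulation of the non-response bias these respondents describe. All numbers are hypothetical, chosen only to illustrate the mechanism; they are not drawn from the survey data.

```python
import random

random.seed(42)

# A hypothetical class of 150 students with a genuinely positive spread
# of opinions: mean rating about 3.9 out of 5.
ratings = [5] * 60 + [4] * 45 + [3] * 25 + [2] * 12 + [1] * 8

# Assume dissatisfied students (rating <= 2) are four times more likely to
# fill in an optional survey than everyone else -- the "axe to grind" effect.
def responds(rating: int) -> bool:
    return random.random() < (0.40 if rating <= 2 else 0.10)

observed = [r for r in ratings if responds(r)]

print(f"True mean rating:     {sum(ratings) / len(ratings):.2f}")
print(f"Observed mean rating: {sum(observed) / len(observed):.2f} "
      f"({len(observed)} responses, "
      f"{100 * len(observed) / len(ratings):.0f}% response rate)")
```

With these assumed response propensities the survey returns a response rate in the 10-20% range reported above and an observed mean well below the true class mean, even though nothing about the teaching has changed.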



6. Disrespectful and/or abusive comments

The data in Figure 13 show that six-out-of-ten respondents to the survey indicated they had experienced disrespectful or abusive comments in the open-ended responses in student evaluations. Figure 13 also shows that a slightly higher percentage of women (63.7%) than men (59.8%) reported being subject to disrespectful or abusive comments. However, the data also show that a much higher proportion of respondents who identified as non-binary or A&TSI (with the caveat of relatively small sample sizes) experienced these intolerable and totally unacceptable responses. In terms of employment characteristics (Figure 14), the data show that staff in ongoing positions (63.3%) were subjected to more of these comments than either fixed-term (58.7%) or casual employees (53.1%). There is no clear pattern with respect to level of employment, other than to observe that Level A was well below average (56.1%) whereas Level B was well above average (67.3%).

Figure 13



Figure 14

Figures 15 to 17 show the breakdown by demographic characteristics of disrespectful/abusive comments in relation to appearance, spoken English, and sexuality, gender, religious or cultural background.

The data in Figure 15 show that 30.2% of all respondents received unwanted comments in relation to their appearance. The proportions for females (35.1%), people with an observable permanent disability (37.5%) and people with South American (40%) or African/Middle Eastern (46.4%) ancestry were well above average.

Figure 16 shows that while only 14.4% of all respondents reported disrespectful/abusive comments in respect of their spoken English, this was noticeably higher for people not born in Australia (29.9%), those for whom English is not their first language (49.8%) and people with Asian (41.1%), South American (60.0%) or African/Middle Eastern (39.3%) ancestry.

Figure 17 shows the proportion of respondents that received disrespectful/abusive comments in relation to their gender, sexuality, religious or cultural backgrounds. Given that this covers a wide range of sources for the abuse, it is difficult to unpack the results with any great precision, except to say that the data clearly show that females, members of the LGBTIQ community, people with observable disabilities, people not born in Australia or who do not speak English as a first language, and people from what might broadly be described as non-Anglo ancestries all suffer incidents of this type of abuse at higher rates than the average for all respondents.



Figure 15

Figure 16



Figure 17

In addition to collecting and analysing quantitative answers, the NTEU survey also afforded respondents several opportunities to provide additional information or share their experiences through open-ended questions. In response to a question about the nature of disrespectful/abusive comments, we received over 800 open-ended responses which, beyond the issues covered above, can be broadly categorised as relating to:

• competency
• personality/style
• the course
• marking/assessment
• gender
• age
• appearance
• English language skills
• voice
• bias/favouritism
• racism
• political views
• lack of feedback/not responding to emails.

Competency



By far the most common group or category of disrespectful or abusive comments, accounting for almost one-in-four, related either solely or in part to the competency of the person to coordinate or teach the course. Comments such as: “doesn’t know what they are talking about”, “shouldn't be teaching at the University”, “questioned my knowledge of subject”, “ability to do job”, “lack of expertise”, and “whether I’m qualified” were all too common. While some of the comments at the nastier end of the spectrum were nothing more than outright abuse, including “stupid”, “hopeless” and “a joke”, other comments like “should not be allowed to lecture” or "should never be allowed in a classroom" or that the university should “get rid of …” or that “….. should be fired” were also not uncommon. As one respondent neatly summarised: “(I)t is highly abusive for a student to demand the sacking of a member of staff.”

Personality/style

A substantial number of respondents said that the open-ended questions included comments that went to issues about what we have categorised as personality or style, or which one respondent described as “character assassination”, and which included phrases like:

• arrogant
• uncaring
• lacking empathy
• oppositional
• not approachable
• boring
• threatening
• bully
• laughs at own jokes

Course

There were numerous comments which were critical of the course, including its content, structure and organisation. The flavour of these can best be summarised by the following comments: “why do I have to do this subject”; “this course is a waste of time”; “this unit is complete waste of time and shouldn’t be compulsory”.

Marking/assessment

There were a large number of disrespectful/abusive comments about the nature and outcomes of the assessment process being used. These included:

• Claims that my marking was unfair and negative comments about decisions I made that were in accordance with university policy and procedures.
• Appalling abuse from students whose essays I have failed and from a student about whose conduct I needed to seek advice from my Head of School.
• Disrespectful comments about formative assignment tasks which encourage and evaluate subject coverage.
• Abusive comments related to my conduct - in particular for reporting students for plagiarism. One student requested a refund as they did not like their grade.


It was not uncommon for respondents to speculate that the negative comments came from students who were doing badly. This included comments such as:

• Negative comments from disgruntled students who did poorly or failed.
• If the students do not like their grades they give poor feedback. If you did a correlation between grades and feedback I am almost 100% certain it would be an inverse correlation.

Gender

Comments related to gender were raised frequently and ranged from purely gratuitous observations in relation to appearance, hair, smell and so on, to the sexist (“should be at home with my child”), to outright misogynistic abuse with remarks such as “bitch” or “femonazi”. A number of respondents suggested that the comments and responses they received would have been very different if they had been male rather than female, of which the following is an example:

“I have answered yes to the last question because I used to teach with a male colleague and frequently received comments comparing me unfavourably to him. They didn't necessarily mention our gender but I am convinced they would not have made those comparisons if we had both been women, or men. I have also received negative comments to the effect that I am 'intimidating' which I'm also convinced would never be made about a man. Then there are the comments to the effect that 'she thinks she's so smart' and of course that I am 'too feminist'.”

Age

The issue of age was also the subject of much negative comment in student evaluations of teaching. While this in itself was not a surprise, the fact that perhaps one-in-three age-related comments questioned whether the person was too young or inexperienced was interesting. Negative comments about being too old, like "passed his sell-by date", were perhaps more predictable.

Voice

There were also many negative comments about the nature or sound of a person’s voice and/or the speed of their speech, ranging from “monotonous”, “annoying”, “irritating”, “speaks too fast” and “too slow” to more creative, but equally offensive, comments such as sounds like “nails down a chalkboard”.

Favouritism/bias

A number of respondents reported receiving unwarranted comments about being biased against or “favouring one cohort of students over another”.

Racism

A not insubstantial number of respondents said student responses accused them of being bigoted or racist. One of our Aboriginal and Torres Strait Islander respondents said:



“I have been labelled racist against non-Aboriginal people, a "coconut" and someone who has no right to teach Aboriginal history or culture even though I am Aboriginal and I believe that it has to do with the lightness of my skin colour.”

Political views

A significant number of respondents reported receiving disrespectful/abusive comments about their political views, broadly defined.

Lack of feedback/not responding to emails

Many of the respondents who said they received unfavourable comments in relation to feedback and/or failure to respond to emails suggested that these comments were either unwarranted or unrealistic. For example, respondents' comments included:

• “Complaints for not answering online questions over the weekend or after hours during the week”
• “Lack of feedback given despite multiple feedback opportunities - too soft on marking”

Other

In addition to these, other responses covered an array of issues, including staff being blamed for things out of their control such as failures of technology and changed timetables and, in a limited number of cases, staff being blamed for being absent due to illness, injury or fulfilling their carer responsibilities.

A more worrying aspect, albeit in only a very small number of cases, relates to what might be described as comments containing false or misleading statements made with a view to deliberately discrediting the respondent's reputation or career. In a number of instances it was suggested that some students had engaged in a coordinated effort to achieve these ends. Such responses were usually interpreted as threats, and included the following:

• Some students (initially anonymous) threatened that if I did not give them a sufficiently high enough mark for their assignments they would lodge a complaint (which they did!);
• I was alerted to widespread student online cheating through a social media channel by alumni. I reported this with evidence. Students then stated in the same social media forum that they would 'give me a flogging' in the student feedback on course survey. The student feedback on course survey was the worst one I had ever received.
• I have had students make threats to my physical safety.
• False accusations, such as that I was “using drugs on campus”, which I formally requested to be removed from the record.
• The worst incident was when the students pre-emptively agreed a set comment or phrase to make about a member of staff; these resulted in her being suspended.

The use of student evaluations in this way is totally unacceptable.



Reaction to Comments

The NTEU survey also asked respondents how they reacted to disrespectful and/or abusive comments. The results of these questions are shown in Table 3. The data show that the most common reactions were being distressed after reading the comments (77.2% of respondents) and feelings of anger, fear, self-consciousness or embarrassment (68.7%), followed by wanting to avoid teaching (45.1%). The data also show that one-in-four (26.0%) respondents who received disrespectful/abusive comments have stopped reading them, which somewhat negates the purpose of including open-ended questions in the first place.

The data in Table 3 are also broken down by demographic characteristics, and perhaps the most noticeable difference relates to the reactions of men compared to women. The data show that in all cases the response of female respondents was above the average for all respondents, while the male response was below average in all cases except "I don't read them anymore". The results also show that in the majority of cases a higher proportion of people who were born in Australia and spoke English as a first language reacted than those not born in Australia and those for whom English is not their first language. In other words, those who one might expect to be least likely to receive disrespectful or abusive comments had the strongest reaction when such comments appeared.

Table 3: Reaction of people who received disrespectful or abusive comments (% of respondents who had received such comments)

Reaction                                                        All     Male    Female  Non-binary#  A&TSI#  LGBTIQ  Disability  Born Oz: Yes  Born Oz: No  English first: Yes  English first: No
Feelings of anger, fear, self-consciousness or embarrassment    68.7%   63.6%   75.2%   75.0%        64.3%   79.5%   62.5%       69.6%         70.8%        69.1%               73.8%
I have been distressed after reading them                       77.2%   67.9%   86.4%   75.0%        64.3%   83.0%   75.0%       78.5%         78.1%        78.9%               76.2%
Difficulty sleeping                                             38.5%   32.7%   44.2%   62.5%        28.6%   41.1%   45.8%       40.4%         37.0%        39.6%               37.6%
Loss of appetite                                                10.8%   10.0%   11.7%   37.5%        14.3%   17.0%   16.7%       9.9%          12.7%        10.1%               15.2%
Trouble paying attention and staying focused on work            33.2%   29.1%   37.9%   25.0%        35.7%   42.0%   43.8%       34.7%         32.0%        33.9%               34.3%
Lower participation in group meetings                           14.7%   14.7%   15.1%   25.0%        21.4%   20.5%   18.8%       16.0%         12.7%        15.2%               13.8%
I have wanted to avoid teaching/tutoring classes                45.1%   36.0%   53.6%   62.5%        42.9%   57.1%   39.6%       47.8%         43.2%        47.7%               38.1%
Desire to use sick leave to avoid going to work                 12.9%   10.9%   15.0%   12.5%        21.4%   16.1%   18.8%       13.4%         12.5%        13.4%               11.9%
Anxiousness or depression                                       34.0%   30.2%   38.1%   25.0%        28.6%   37.5%   45.8%       34.9%         34.4%        34.5%               34.8%
I have thought about quitting my job                            32.6%   29.7%   36.0%   50.0%        42.9%   31.3%   43.8%       35.0%         30.5%        35.0%               24.8%
I don't read them anymore                                       26.0%   26.3%   26.5%   12.5%        35.7%   24.1%   31.3%       25.6%         27.5%        26.1%               26.7%

# Small number of respondents; results should be treated with caution.


7. Institutional responses to disrespectful/abusive comments

Finally, the survey asked respondents whether they had formally raised concerns about disrespectful comments and, if they had, how their institution had responded to those complaints. Figure 18 shows that only about one-in-four (26.9%) respondents who had received disrespectful/abusive comments made a formal complaint. Figure 18 also shows that those more likely to raise a formal complaint included:

• females (30.0%);
• non-binary/third gender respondents (37.5%) (noting caution due to the low number of respondents);
• A&TSI respondents (50.0%) (noting caution due to the low number of respondents); and
• people with an observable disability (35.4%).

Figure 19 shows that those on fixed-term (16.2%) or casual (17.5%) contracts of employment are far less likely to raise formal complaints compared to ongoing employees (28.9%). It also shows that the propensity to raise formal complaints increases with level of appointment.

Figure 18



Figure 19

Figure 20 reports the results of how universities responded to complaints when they were made. According to survey responses, universities were very reluctant to follow up on formal complaints, with the data showing that in only 13.4% of cases where a formal complaint was raised did the university investigate. It should be noted that another 10% of respondents said they were unsure as to whether there was an investigation, but that still leaves a solid three-out-of-four (76.8%) who said there definitely was not an investigation. Where there was follow-up, the most likely response was to offer support such as counselling (11.5%). In another 7.8% of cases the university removed the responses of students making the disrespectful/abusive comments from the overall report. The results also show that there is very little evidence that universities attempted to identify students (6.3%) or provide staff an opportunity to resolve issues with disgruntled students (2.3%). In only 3.1% of cases where an official complaint was raised did the university agree to modify the way it conducts student evaluations to try to minimise the incidence of these types of unwarranted comments. Is this good enough?

Some examples of where universities have acted were to be found in the open-ended responses to the questions about the nature of disrespectful or abusive comments, and included "unsuitable comments (swear words, etc.) removed before they reach me. (And a message inserted to say that has happened)". However, this was not always effective, as the following comment makes clear: "have been called offensive names, often these are blanked out but the message is clear. This has changed recently where a sentence is inserted to state a complete comment has been deleted due to its offensive nature".



Figure 20

One respondent raised some very pertinent issues about how universities respond (or don't) to the inclusion of disrespectful/abusive comments in student evaluations of teaching, when they asked:

"Where is the duty of care that the university provides to me? I read the most hurtful comments which are based on nothing more than them getting low marks. I am held to ransom every year, four times a year - its taking a psychological toll on me. I have had comments that I am misogynist, sexist -- you name it. I have very little to do with students as a result as they are quick to judge and even quicker to record comments because they just think it's FUN. This is my career and I am evaluated by 17 year old no nothings (sorry) but that's the fact. The harder the course the more they complain. They expect to get a degree but do no study. But when their part time job gets in the way of study they are very quick to damn me and blame me."



8. Conclusions

Student evaluations of teaching are predominantly standardised, compulsory online exercises undertaken every time a subject or unit is offered, over which individual staff have little input or control in terms of content or timing. The student evaluations generally have very poor response rates, with almost six-in-ten having a response rate of 30% or less, which brings into question their validity, especially when in almost nine-in-ten cases the results are used to appraise and manage staff performance. To emphasise this point, well under one-in-five respondents felt evaluations gave an accurate measure of their performance. While six-in-ten survey respondents felt that student evaluations could be used to inform and improve teaching or the delivery of subjects or units, only four-in-ten agreed that they were used for this purpose.

The NTEU survey also showed that six-out-of-ten respondents said that some students had used the evaluations to make disrespectful and abusive comments. Based on open-ended responses, by far the most common category of these comments related to an individual's competency to teach a subject. Beyond the all too predictable comments about gender, cultural background and spoken English, other common themes included age (too old and too young), personality/attitude, favouritism and political views. The results also showed that these disrespectful and abusive comments had serious consequences for those to whom they were directed, with seven-in-ten or more feeling distressed, angered, fearful, self-conscious or embarrassed after reading the comments. Lower, but still significant, numbers also experienced real physical symptoms including loss of sleep or appetite, or loss of motivation.

Of real concern is the finding that one in five (20.9%) respondents either 'frequently' or 'always' considered amending assessment grades or feedback so that students rate their teaching more favourably. Not surprisingly, the fact that this was higher for staff on fixed-term contracts (23.8%) compared to the average says a great deal about the use of student evaluations in job security, as well as in promotions and career advancement. If some staff are giving students more favourable grades than would otherwise be the case in the belief that this would improve students' evaluations of their teaching, then student evaluations have the potential to significantly undermine academic integrity.

It is perhaps not surprising that only just over one-in-four respondents who were the subject of disrespectful or abusive comments made an official complaint, especially when you consider the way in which institutions responded (or didn't respond) to these complaints. The survey results indicate that universities undertook an investigation in just over one-in-ten cases after a formal complaint was raised. Very little attempt seems to have been made to identify or discipline students. A common response to a formal complaint was to offer the staff member support or counselling, or to agree to modify the evaluations process by removing or redacting offensive or abusive comments.

Given these results, it is little wonder that only just over one-in-ten respondents were satisfied with the student evaluation of teaching at their institution. Not only is the value of these evaluations all but meaningless in terms of appraising or managing staff performance, they also potentially raise serious occupational health and safety issues.

Unfortunately, the standardised online one-size-fits-all evaluation, which is relatively efficient to administer and analyse, is not fit for purpose as a tool to inform or improve subjects/units or their delivery. It seems to serve little purpose other than providing a metric for quality assurance.

