AHE Conference Programme 28 June 2018

International Assessment in Higher Education Conference


28 June 2018, Manchester, UK

Conference Programme


AHE: Leading Assessment for Learning in Higher Education



Welcome, Colleagues

Welcome to Manchester for the one-day Assessment in Higher Education conference of 2018. The focus of the conference is ‘feedback’ within the context of assessment. We know that receiving and engaging with effective feedback, particularly in the context of formative or low-stakes assessment, has learning power. We know that it is possible to develop strategies within a programme so that feedback on what may be considered the summative assessment of a module is experienced as useful formative feedback by students. We know that giving feedback to others, especially against criteria, is a useful way for learners to gain insight into academic standards. So feedback matters.

Based on a review of a growing body of research on effective ‘feedback’ as part of assessment in higher education, more than ten years ago David Nicol set out widely adopted principles that focus on the role of feedback in developing students as self-regulated learners (2006a; 2006b). More recent reports informed by research review have positioned this increasing body of work firmly within the wider principles of assessment for learning (Sambell, 2011; HE Academy, 2012). However, despite this body of research-informed guidance, ‘feedback’ remains a challenging area for higher education policy and practice. Considerable energy continues to be applied to practitioner research attempting to make sense of the principles for effective feedback through practical strategies within the varied and dynamic contexts of higher education programmes. A simple consideration of the key terms used in the titles of this conference’s presentations demonstrates the wide range of contemporary perspectives on feedback.

On behalf of the AHE committee, welcome to the conference. We hope you will enjoy the day, learn something new, contribute to new thinking and make connections for future collaboration.

Pete Boyd, Conference Chair

pete.boyd@cumbria.ac.uk

Linda Shore, AHE Events Manager

linda.shore@cumbria.ac.uk



Delegate Information

AHE Executive Committee
Pete Boyd (Conference Chair), University of Cumbria
Amanda Chapman, University of Cumbria
Jess Evans, The Open University
Linda Graham, Northumbria University
Peter Holgate, Northumbria University
Mark Huxham, Edinburgh Napier University
Rita Headington, Educational Consultant
Geraldine O’Neil, University College Dublin
Natasha Jankowski, University of Illinois
Sally Jordan, The Open University
Nicola Reimann, Durham University
Kay Sambell (President), Edinburgh Napier University
Linda Shore (Events Manager), University of Cumbria
Rebecca Westrup, University of East Anglia

Networking
We hope the conference will provide you with an excellent opportunity to make connections and discover shared interests in higher education assessment with colleagues from across the UK and beyond.

Evaluation and commentary
We actively encourage you to make use of the conference hashtag #AssessmentHEconf on Twitter in order to share ideas, respond to sessions, ask questions and make connections. There will be an online evaluation after the conference, but please feel free to share any comments or suggestions with members of the AHE Executive Committee whilst you are here.

Interesting places to visit
Manchester is a vibrant city with many interesting places to visit. Go to http://www.visitmanchester.com/ for plenty of ideas.

Wi-Fi Access
For Wi-Fi access at The MacDonald Manchester Hotel, go to ‘Wi-Fi settings’ on your laptop or mobile device and select ‘MacDonald WiFi’; this will enable you to access the service.

AHE Registration Desk
For assistance during the conference, please visit the AHE Registration Desk, open from 08.30 onwards and located to the right of the entrance by the MacDonald Manchester Hotel Main Reception.



Keynote Address
Professor David Carless, University of Hong Kong

Feedback for the longer term: Developing student feedback literacy

We need new ways of thinking about feedback, in that doing more of the same is insufficient. In particular, it is unrealistic to place the main burden on teachers to provide more and more comments to large numbers of students. More efficient ways to encourage student use of the available feedback are much needed. For feedback to be effective, students need to engage actively and productively with the information they receive and use it to improve their work or learning strategies. To facilitate progress towards this goal, students need feedback literacy: the understandings, capacities and dispositions needed to make productive use of feedback information. A key theme is that a longer-term perspective is required: it is important that students experience multiple opportunities to develop their feedback literacy across the curriculum and throughout their programme. Four interrelated features of students’ feedback literacy are proposed: appreciating feedback; making judgments; managing affect; and taking action. I draw on a longitudinal inquiry into four case study students’ experiences of feedback to illustrate the development of some of these features. Some affordances and challenges for the coordinated development of staff and student feedback literacy are sketched, and I conclude by outlining an agenda for researching student feedback literacy.

Professor David Carless works in the Faculty of Education, University of Hong Kong. He specializes in approaches to assessment which serve to support productive student learning processes. His signature publication is the book Excellence in University Assessment: Learning from Award-winning Practice (2015, Routledge). His current research focuses on students’ experiences of feedback and the development of student feedback literacy. He is a Principal Fellow of the Higher Education Academy and is currently co-authoring, with Naomi Winstone, a book for the SRHE Routledge series on feedback practice in higher education. He tweets about feedback research @CarlessDavid.



Conference Programme Summary

Time   Session                                          Room
09.30  Registration                                     Hotel Foyer
09.30  Refreshments                                     Hotel Foyer
09.55  Welcome: Dr Nicola Reimann, Durham University    Piccadilly Suite
10.00  Keynote & Discussion: Professor David Carless    Piccadilly Suite
10.40  Refreshments
11.00  Choice of parallel sessions

Session 1 (Piccadilly Suite): Research Evaluation Presentations. Chair: Professor Kay Sambell
• An institutional case study: promoting student engagement in assessment. Carmen Thomas, University of Nottingham, UK
• Can student-staff interviews about how staff use feedback when preparing their work for publication help develop student feedback literacy? Jenny Marie and Nick Grindle, University College London, UK
• Feedback Footprints: Using Learning Analytics to support student engagement with, and learning from, feedback. Naomi Winstone and Dr Emma Medland, University of Surrey, UK

Session 2 (Meeting Room 3): Research Evaluation Presentations. Chair: Jess Evans
• Dialogic feedback and the development of professional competence among further education pre-service teachers. Justin Rami and Francesca Lorenzi, Dublin City University, Republic of Ireland
• Systematic responses to grand challenges in assessment: REVIEW at UNSW. Daniel Carroll, University of New South Wales, Australia
• An evaluation of peer-to-peer feedback using adaptive comparative judgement. Jill Barber, The University of Manchester, UK



Session 3 (Meeting Room 5): Research Evaluation Presentations. Chair: Linda Thompson
• The Pedagogy of Interteaching: engaging students in the learning and feedback process. Nick Curtis and Keston Fulcher, James Madison University, Virginia, USA
• Study Player One: Gamification, Student Engagement, and Formative Feedback. Errol Rivera, Edinburgh Napier University, UK
• Providing and receiving feedback: implications for students’ learning. Georgeta Ion, Cristina Mercader Juan, Aleix Barrera Corominas and Anna Diaz Vicario, Universitat Autònoma de Barcelona, Spain

Session 4 (Meeting Room 7): Research Evaluation Presentations. Chair: Dr Rita Headington
• Contextual Variables in Written Assessment Feedback in a University-level Spanish Program. Ana Maria Ducasse, RMIT University, Melbourne, Australia; Kathryn Hill, La Trobe University, Melbourne, Australia
• VACS: Video Assessment of Clinical Skills. Mark Glynn, Evelyn Kelleher, Adele Keough, Anna Kimmins and Patrick Doyle, Dublin City University; Colette Lyng, Beaumont Hospital, Dublin, Republic of Ireland
• Foregrounding feedback in assessment as learning: the case of the processfolio. Jayne Pearson, University of Westminster, UK

Session 5 (Meeting Room 11): Round Table Presentations. Chair: Dr Peter Holgate
• Exploring student perceptions of a feedback trajectory for first-year students at the University of Leuven. Anneleen Cosemans, Leuven Engineering and Science Education Center; Carolien Van Soom, Greet Langie and Tinne De Laet, University of Leuven, Belgium
• Feedback: What lies within? Jane Rand, York St John University, UK
• Enabling a culture of self-reflection amongst pre-service teachers through online journaling. Margaret O’Keeffe, Mary Immaculate College, Ireland
• Hearing Voices: First Year Undergraduate Experience of Audio Feedback. Stephen Dixon, Newman University, UK

12.00  Break



Session 6 (Piccadilly Suite): Micro Presentations. Chair: Professor Pete Boyd
• Crisis, what crisis? Why we should stop worrying about NSS scores. Alex Buckley, University of Strathclyde, UK
• Transforming feedback practice in the practical environment using digital technologies. Olya Antropova, Ronan Bree and Moira Maguire, Dundalk Institute of Technology, Republic of Ireland; Akinlolu Akande, Institute of Technology, Sligo; Dian Brazil, Institute of Technology, Carlow; Anne Mulvihill, Athlone Institute of Technology, Athlone, Westmeath, Ireland
• Stressful feedback post paramedic simulation training. Enrico Dippenaar, Anglia Ruskin University, UK
• Developing and implementing a model of intensive spoken teacher and peer feedback with learner control. Gordon Joughin, Deakin University, Australia; Helena Gaunt, Guildhall School of Music and Drama, UK
• Mapping the Assessment Journey: Student evaluations of a visual representation of their assessments. Anke Buttner, University of Birmingham, UK

13.10  Lunch
14.00  Choice of parallel sessions

Session 7 (Piccadilly Suite): Research Evaluation Presentations. Chair: Assistant Professor Natasha Jankowski
• Developing student feedback literacy using educational technology and the reflective feedback conversation. Kathryn Hill, La Trobe University, Melbourne, Australia; Ana Maria Ducasse, RMIT University, Melbourne, Australia
• Diverse orientations towards assessment and feedback internationally. Sally Brown, Independent Consultant and Emerita Professor at Leeds Beckett University, UK; Kay Sambell, Edinburgh Napier University, UK
• Frequent rapid feedback, feed-forward and peer learning for enhancing student engagement in an online portfolio assessment. Theresa Nicholson, Manchester Metropolitan University, UK



Session 8 (Meeting Room 3): Research Evaluation Presentations. Chair: Associate Professor Geraldine O’Neil
• Student learning through feedback: the value of peer review? Charlie Smith, Liverpool John Moores University, UK
• In the shoes of the academics: Inviting undergraduate students to apply their assessment literacy to assessment design. Anke Buttner, Andrew Quinn, Ben Kotzee and Joulie Axelithioti, University of Birmingham, UK
• Feedback to the Future. Erin Morehead and Andrew Sprake, University of Central Lancashire, UK

Session 9 (Meeting Room 5): Research Evaluation Presentations. Chair: Jess Evans
• ‘Feedback interpreters’: The role of Learning Development professionals in overcoming barriers to university students’ feedback recipience. Karen Gravett and Naomi Winstone, University of Surrey, UK
• Investigating Chinese Students’ Perceptions of and Responses to Teacher Feedback: Multiple Case Studies in a UK University. Fangfei Li, University of Bath, UK
• Action on Feedback. Teresa McConlogue, Jenny Marie and Clare Goudy, University College London, UK

Session 10 (Meeting Room 7): Research Evaluation Presentations. Chair: Dr Peter Holgate
• The Cinderella of UK assessment? Feedback to students on re-assessments. Marie Stowell, University of Worcester; Wayne Turnbull, Liverpool John Moores University; Harvey Woolf, formerly University of Wolverhampton, UK
• A critique of an innovative student-centred approach to feedback: evaluating alternatives in a high-risk policy environment. Judy Cohen and Catherine Robinson, University of Kent, UK
• Build a bridge and get over it: exploring the potential impact of feedback practices on Irish students’ engagement. Angela Short, Dundalk Institute of Technology; Gerry Gallagher, Institute of Technology, Republic of Ireland



Session 11 (Meeting Room 11): Round Table Presentations. Chair: Dr Rita Headington
• Feedback, a shared responsibility. Annelies Gilis, Karen Van Eylen and Elke Vanderstappen, KU Leuven; Joke Vanhoudt, Study Advice Service, KU Leuven; Jan Herpelinck, General Process Coordination, KU Leuven, Belgium
• Dialogue and Dynamics: A time-efficient, high-impact model to integrate new ideas with current practice. Lindsey Thompson, University of Reading, UK
• Large student cohorts: the feedback challenge. Jane Collings and Rebecca Turner, University of Plymouth, UK
• A comparison of response to feedback given to undergraduates in two collaborative formative assessments: wiki and oral presentations. Iain MacDonald, University of Cumbria, UK

15.00  Break
15.10  Choice of parallel sessions

Session 12 (Piccadilly Suite): Research Evaluation Presentations. Chair: Professor Pete Boyd
• Feedback, Social Justice, Dialogue and Continuous Assessment: how they can reinforce one another. Jan McArthur, Lancaster University, UK
• Building a national resource for feedback improvement. David Boud, Deakin University / University of Technology Sydney, Australia
• Dynamic feedback: a response to a changing teaching environment. Martin Barker, University of Aberdeen, UK

Session 13 (Meeting Room 3): Research Evaluation Presentations. Chair: Dr Nicola Reimann
• ‘Who Am I?’ Exploration of the healthcare learner ‘self’ within feedback situations using an interpretive phenomenological approach. Sara Eastburn, University of Huddersfield, UK
• Students’ feedback has two faces. Serafina Pastore, Giuseppe Crescenzo, Michele Chiusano and Vincenzo Campobasso, University of Bari, Italy
• Feedback and Feedforward in capability-based assessment. Panos Vlachopoulos, Macquarie University, Australia



Session 14 (Meeting Room 5): Research Evaluation Presentations. Chair: Professor Sally Jordan
• Feedback: what students want. Susanne Voelkel, University of Liverpool, UK
• Developing as a peer reviewer: Enhancing students’ graduate attributes. Rachel Simpson and Catherine Reading, Durham University, UK
• Does screencast feedback improve student engagement in their learning? Subhi Ashour, The University of Buckingham, UK

Session 15 (Meeting Room 7): Research Evaluation Presentations. Chair: Dr Amanda Chapman
• Feedback literacy in online learning environments: Engaging students with feedback. Teresa Guasch, Anna Espasa and Rosa M. Mayordomo, Open University of Catalonia, Catalonia
• Teaching staff’s views about e-assessment with e-authentication. Alexandra Okada, Denise Whitelock, Ingrid Noguera, Jose Janssen, Tarja Ladonlahti, Anna Rozeva, Lyubka Alexieva, Serpil Kocdar and Ana-Elena Guerrero-Roldán, The Open University, UK
• Students’ responses to learning-oriented exemplars: towards sustainable feedback in the first year experience? Kay Sambell, Edinburgh Napier University; Linda Graham and Peter Beven, Northumbria University, UK

Session 16 (Meeting Room 11): Round Table Presentations. Chair: Professor Mark Huxham
• Feedback for Postgraduate Taught Students. Fay Julal, University of Birmingham, UK
• Increasing assessment literacy on MA translation modules to improve students’ understanding of and confidence in assessment processes. Juliet Vine, University of Westminster, UK
• Developing an identity as a knowing person: examining the role of feedback in the Recognition of Prior Learning (RPL). Helen Pokorny, University of Westminster, UK
• Students as partners in co-creating a new module: Focusing on assessment criteria. Maria Kambouri-Danos, University of Reading, UK
• Assessment: Evaluating the Workload on Staff and Students. Mark Glynn, Clare Gormley and Laura Costelloe, Dublin City University, Republic of Ireland

16.10  Plenary & ask the audience: Professor Kay Sambell
16.30  Close



Author Abstracts

Session 1 (Piccadilly Suite): Research Evaluation Presentations. Chair: Professor Kay Sambell

An institutional case study: promoting student engagement in assessment
Speaker: Carmen Thomas, University of Nottingham, UK

Engaging students in assessment in meaningful ways to develop their evaluative judgement (Tai et al., 2017) is an area of practice that still requires much development. This presentation offers a case of institution-wide promotion. Practices that engage students actively in assessment (e.g. guided marking, assessment of exemplars or other formative work) have been promoted, and a programme of dissemination, trials and follow-up evaluations has been conducted to encourage adoption from the ground up. Fourteen trials in eleven Schools/Departments have been run. The institutional case will show:

• The context: background evaluations reveal the most typical practices deployed to support students in advance of assessments. Two-way and interactive activities such as peer assessment are rare across the institution.
• Promotion strategy: the institutional strategy has combined support for individuals, dissemination events and a consistent programme evaluating the effectiveness of the tasks/sessions.
• Evaluation of the impact of peer-assessment-type activities on students: responses from more than 1,000 students have been collected using the same core questionnaire. All trials have been evaluated considering the effectiveness of peer assessment in promoting self-efficacy and student perception of the value of the activities. Additional meta-analyses have considered which task factors affect student perceptions; this exploratory analysis will be discussed for its implications for practice.
• From trial to embedded practice: all initial trials have led to adoption of the practice.
• Persisting challenges and next steps: the initial bottom-up promotion phase will be followed by a strategy using quality assurance mechanisms (School Reviews) to lend further institutional support.

The case illustrates the need for a multipronged institutional approach consisting of both bottom-up and top-down elements. The institutional evaluation also shows a positive impact on students’ perceptions and self-efficacy, lending further support to the value of continuing to increase the base and reach of such practices. The ongoing challenges and slow progress will also be discussed.

References
Tai, J., Ajjawi, R., Boud, D., Dawson, P. and Panadero, E. (2017) ‘Developing evaluative judgement: enabling students to make decisions about the quality of work’, Higher Education, December, pp. 1-15. doi: 10.1007/s10734-017-0220-3.

Can student-staff interviews about how staff use feedback when preparing their work for publication help develop student feedback literacy?
Speakers: Jenny Marie, University College London; Nick Grindle, University College London

In recent years, scholars have argued that feedback needs to be a dialogic process if students are to engage with it in a way that aids their learning (Nicol and Macfarlane-Dick, 2006). Xu and Carless



(2017) question whether students are equipped to do this effectively, suggesting that students first need to develop ‘feedback literacy’. But how can students become feedback literate? The learning that needs to occur is part of the process of acculturating to academic life and norms, but as Sutton (2012) points out, there are numerous cultures within a single university and even within a discipline. Our response was to adapt a tried-and-tested activity called ‘Meet the Researcher’, which is used to help students acculturate to the research-based culture of higher education. ‘Meet the Researcher’ consists of student-led interviews of academic staff about their research. The aim, first described by Cosgrove (1981), is ‘to draw upon the experiences of the staff as students, graduates, researchers and teachers in such a way that the staff would mediate between [the discipline] as it had been presented to the students in their other courses and the larger debates … which are presented in the literature’. Following Cosgrove’s pattern, most ‘Meet the Researcher’ activities require students to work in groups and read an academic paper written by the researcher, which then forms the basis for their interview (Downie, 2010). We adapted this activity so that, in addition to reading a published paper, students would also be shown an earlier draft of the paper and the feedback it had received from the journal’s editor(s) and reviewers. The aim was to expose students to the ways that researchers seek feedback, the different forms in which it is given, and the impact it has on a researcher’s work. The interviews would then focus on the researcher’s feelings about the feedback, their response to it, and what impact it had on the work as finally published. We conducted small-scale pilots of this adaptation with three different groups of students in two departments, sending staff notes about the aims of the activity together with a briefing they could adapt for their students.

The activity ran slightly differently in each of the pilots, resulting in three case studies of how this concept can be realised in practice. Staff partners were asked to complete a short questionnaire about how they had implemented the activity, and students were asked to answer a single question to enable us to understand what they had learnt from the experience. At the conference we will present an analysis of the responses. We will also compare the findings with data drawn from an earlier study of more generic ‘Meet the Researcher’ activities in the same university (Grindle, 2017). Analysis of the current findings, and comparison with the earlier study, will help us illustrate how far we have succeeded in developing student feedback literacy. We will also identify areas that require further thought and development.

References
Cosgrove, D. (1981) ‘Teaching geographical thought through student interviews’, Journal of Geography in Higher Education, 5(1), pp. 19-22.
Downie, R. (2010) ‘A Postgraduate Researcher-Undergraduate Interview Scheme: Enhancing Research-Teaching Linkages to Mutual Benefit’, Bioscience Education, 16(1), pp. 1-6. doi: 10.3108/beej.16.c2.
Grindle, N. (2017) ‘Meet the Researcher: What can staff and students learn from engaging in dialogue about research?’, HEA Annual Conference, 6 July 2017, Manchester: University of Manchester.
Nicol, D. and Macfarlane-Dick, D. (2006) ‘Formative assessment and self-regulated learning: a model and seven principles of good feedback practice’, Studies in Higher Education, 31(2), pp. 199-218.
Sutton, P. (2012) ‘Conceptualizing feedback literacy: knowing, being, and acting’, Innovations in Education and Teaching International, 49(1), pp. 31-40. doi: 10.1080/14703297.2012.647781.
Xu, Y. and Carless, D. (2017) ‘“Only true friends could be cruelly honest”: cognitive scaffolding and social-affective support in teacher feedback literacy’, Assessment & Evaluation in Higher Education, 42(7), pp. 1082-1094. doi: 10.1080/02602938.2016.1226759.

Feedback Footprints: Using Learning Analytics to support student engagement with, and learning from, feedback
Speakers: Naomi Winstone, University of Surrey; Emma Medland, University of Surrey

Student satisfaction with assessment and feedback has been described as the sector’s “Achilles’ Heel” (Knight, 2002, p. 107). Many students express dissatisfaction with the utility of feedback (Medland, 2016), citing difficulties understanding comments, knowing how to take action, and connecting feedback from different modules and assignments (Jonsson, 2013; Winstone, Nash, Rowntree and Parker, 2017). In the ‘new paradigm’ of feedback practice (Carless, 2015), emphasis is placed not on feedback as comments (i.e. the ‘old paradigm’) but on feedback as dialogue, where the impact of feedback on students’ learning is of key concern. This approach should lead educators to reflect on



where students are enabled to enact feedback in a dialogic cycle, and to find ways of supporting students’ self-regulatory development by providing opportunities for them to experience the impact of implementing feedback. As researchers and practitioners, a key difficulty in understanding students’ engagement with feedback is that once marked work is returned, we know very little about what students actually do with feedback information. There is a real, but largely untapped, potential for learning analytics to illuminate this ‘hidden recipience’. Where feedback is given to students via Virtual Learning Environments (VLEs), learning analytics can provide insight into when and how students engage with feedback (e.g. Zimbardi et al., 2017). Furthermore, student-facing analytics can inform students about their own engagement, thus supporting the development of self-regulated learning.

Here, we report on an evaluation of a HEFCE-funded project through which we worked in partnership with students to develop a VLE-embedded feedback portfolio to support students to synthesise and act upon feedback (https://www.youtube.com/playlist?list=PL1eqx09MUhok3OPDHvKPfVB7cMl9gA8Y). The portfolio first includes a ‘feedback review’ tool through which students can extract key messages from feedback, which they can then categorise into a comprehensive set of academic skills. Also housed within the portfolio is a large resource bank, aligned with each of these identified academic skills. The tool synthesises feedback from multiple assignments so that students can see a visual representation of the areas in which they are performing well and the areas that may need some development. Finally, students can use an action-planning tool to set goals for their use of the resources and engage in dialogue with their personal tutor. The portfolio incorporates a student-facing analytics dashboard to enable students to track their engagement with feedback, and the impact of this engagement.

In this project, we employed a co-design method through which students, researchers and learning technologists worked in partnership to develop the feedback portfolio. We will present the key messages emerging from our evaluation, which employed self-report measures (e.g. orientation to feedback, assessment literacy, academic self-efficacy), quantitative analysis of learning gain and learning analytics, and qualitative analysis of user experience. Crucially, our data demonstrate disciplinary differences in engagement with the portfolio and also provide further insight into the challenges students face when using feedback to develop as autonomous, self-regulated learners. The evaluation also demonstrates that students see digital tools as possessing great potential for bringing meaningful dialogue into the feedback process.

References
Carless, D. (2015) Excellence in University Assessment: Learning from Award-winning Practice. London: Routledge.
Jonsson, A. (2013) ‘Facilitating productive use of feedback in higher education’, Active Learning in Higher Education, 14(1), pp. 63-76.
Medland, E. (2016) ‘Assessment in higher education: drivers, barriers and directions for change in the UK’, Assessment & Evaluation in Higher Education, 41(1), pp. 81-96.
Winstone, N. E., Nash, R. A., Rowntree, J. and Parker, M. (2017) ‘“It’d be useful, but I wouldn’t use it”: barriers to university students’ feedback seeking and recipience’, Studies in Higher Education, 42(11), pp. 2026-2041.
Winstone, N. E., Nash, R. A., Parker, M. and Rowntree, J. (2017) ‘Supporting Learners’ Agentic Engagement With Feedback: A Systematic Review and a Taxonomy of Recipience Processes’, Educational Psychologist, 52(1), pp. 17-37.
Zimbardi, K., Colthorpe, K., Dekker, A., Engstrom, C., Bugarcic, A., Worthy, P. and Long, P. (2017) ‘Are they using my feedback? The extent of students’ feedback use has a large impact on subsequent academic performance’, Assessment & Evaluation in Higher Education, 42(4), pp. 625-644.



Session 2 (Meeting Room 3): Research Evaluation Presentations. Chair: Jess Evans

Dialogic feedback and the development of professional competence among further education pre-service teachers
Speakers: Justin Rami, Dublin City University; Francesca Lorenzi, Dublin City University, Republic of Ireland

Improving students’ learning experience is closely connected with the promotion and implementation of an assessment strategy whose effectiveness relies on the quality of its formative aspect. Assessment can promote or hinder learning and is, therefore, a powerful force to be reckoned with in education. The literature on assessment makes it quite clear that assessment shapes and drives learning in powerful, though not always helpful, ways (Ramsden, 1997). A number of authors (Steen-Utheim and Wittek, 2017; Merry et al., 2013; Carless, 2013, 2016; Hyatt, 2005; Juwah et al., 2004; Bryan and Clegg, 2006; Swinthenby, Brown, Glover, Mills, Stevens and Hughes, 2005; Nicol, 2010; Torrance and Prior, 2001) have advocated the encouragement of dialogue around learning and assessment as a means to enhance the formative aspect of assessment. Pedagogical dialogue and formative assessment share common principles: the emphasis on the process (MacDonald, 1991); the need for negotiation of meaning and a shared understanding of assessment criteria (Boud, 1992; Chanock, 2000; Harrington and Elander, 2003; Harrington et al., 2005; Sambell and McDowell, 1998; Higgins, Hartley and Skelton, 2001; Norton, 2004; Price and Rust, 1999; O’Donovan, Price and Rust, 2000; Rust, Price and O’Donovan, 2003); and the development of reciprocal commitment between assessors and assessees (Hyland, 1998; Taras, 2001) based on trust (Carless, 2016). We argue with Koponen et al. (2016) that a strong dialogic feedback culture, together with the developmental role of feedback, forms part of future working-life skills, and that their importance warrants greater integration into higher education curricula as part of the development of expertise.

This paper presents the outcomes of the introduction of an assessment portfolio for the module/learning unit ‘Curriculum Assessment’, informed by dialogical principles and aimed at developing professional competence among pre-service further education teachers. The enquiry used a qualitative research approach with several groups of pre-service FE teachers in Ireland. The paper outlines the results of the small-scale research, which sought to evaluate the impact of the module/learning unit. The findings led to the identification of three key outcomes: firstly, the development of a shared understanding of assessment criteria; secondly, the establishment of a mutual relationship between assessors and assessees based on commitment and trust; and thirdly, a heightened self-awareness in both personal (efficacy) and professional (competence) terms. The study demonstrates that a dialogical assessment model enables students to make sense of knowledge through reflection, professional decision-making and engagement. Furthermore, it demonstrates how a dialogical approach to assessment and feedback can initiate reflective processes which may equip student teachers with knowledge transferable to professional practice.

References
Merry, S., Price, M., Carless, D. and Taras, M. (2013) Reconceptualising Feedback in Higher Education: Developing Dialogue with Students. London: Routledge.
Carless, D. (2013) ‘Trust and its role in facilitating dialogic feedback’, in Boud, D. and Molloy, E. (eds.) Effective Feedback in Higher and Professional Education. London: Routledge.
Carless, D. (2016) ‘Feedback as Dialogue’, in Peters, M. (ed.) Encyclopedia of Educational Philosophy and Theory. Singapore: Springer.
Koponen, J., Kooko, T., Perkiö, A., Savander-Ranner, A., Toivanen, T. and Utrio, M. (2016) ‘Dialogic feedback culture as a base for enhancing working life skills in higher education’, Journal of Finnish Universities of Applied Sciences. Available at: https://uasjournal.fi/in-english/dialogic-feedbackculture-as-a-base-for-enhancing-working-life-skills-in-higher-education
Nicol, D. (2010) ‘From monologue to dialogue: improving written feedback processes in mass higher education’, Assessment & Evaluation in Higher Education, 35(5), pp. 501-517. doi: 10.1080/02602931003786559.



Steen-Utheim, A. and Wittek, A. (2017) 'Dialogic feedback and potentialities for student learning', Learning, Culture and Social Interaction, 15, pp. 18-30. Systematic responses to grand challenges in assessment: REVIEW at UNSW Speaker: Daniel Carroll, University of New South Wales, Australia The grand challenges, and some of our persistent failings, in delivering on the assessment for learning (AfL) agenda are well understood: many admit that systemic improvement of the student experience of assessment still challenges the higher education sector. While the AfL movement and research (Stiggins, 2002) have informed and improved assessment policy and staff development in many institutions, the assessment systems and tools currently available still under-deliver in support of the assessment for learning agenda and a better student experience of assessment. As part of a large faculty Assurance of Learning project in 2011, the UNSW Business School adopted REVIEW, software developed by academics for academics at the University of Technology Sydney. REVIEW provides an online platform for criteria-based assessment and feedback and visually connects short-term assessment tasks with the development of longer-term learning goals, skills or competencies. Our REVIEW experience has seen a systematic and widespread improvement in assessment experiences for both staff and students. This is based on the increased clarity in assessment through mandated use of criteria, observed reductions in staff marking times, the focus on feedback related to the judgement criteria and the increased use of student self- and peer assessment. In six years, usage has grown from 4 pilot courses to over 100 courses with 10,000+ student enrolments per term (platform analytics). 
The presentation and discussion will address how an online system provides a platform to operationalise good practice en masse and helps to systematically meet some of the big challenges facing assessment. Presentation overview: A three-minute introduction provides a brief visual illustration of core platform elements of REVIEW (e.g., criteria-based marking screen, task-to-degree goal mapping, student self- and peer assessment interfaces, staff assessment reports and the analytics interface). The talk then outlines how planning, staff support and the platform contribute meaningfully to a systematic institutional response to some of the grand challenges of assessment. Elements of the presentation are supported by ethics-approved research, and the presentation will be followed by a question and discussion section.

Challenge 1: Assessment is often poorly described and recorded.
Challenge 2: Individual assessments are often atomistic and poorly connected to learning and learning improvement.
Challenge 3: Development of student judgement is not supported as an inherent element of assessment processes (students are passive in assessment).
Challenge 4: The effect of feedback given to students is unknown (receipt of feedback is not trackable, 'lost' or not acted on).
Challenge 5: We haven't systematically designed a learner-centric (personalised) approach to assessment for students.

References Carroll, D. (2014) 'Benefits for students from achieving accuracy in criteria-based self-assessment', 40th Annual IAEA Conference, 25-30 May, Singapore. https://www.researchgate.net/publication/264041914_Benefits_for_students_from_achieving_accuracy_in_criteria-based_self-_assessment Carroll, D. (2015) 'Win, Win, Win: Future assessment systems', EDEN Conference 2015, 9-12 June, Barcelona. https://www.researchgate.net/publication/281008509_WIN_WIN_WIN_Future_assessment_systems Stiggins, R. J. (2002) 'Assessment crisis: The absence of assessment for learning', Phi Delta Kappan, 83(10), pp. 758-765.



An evaluation of peer-to-peer feedback using adaptive comparative judgement Speaker: Jill Barber, The University of Manchester, UK Adaptive Comparative Judgement (ACJ) is an alternative to conventional marking in which the assessor (or judge) simply compares two (anonymous) answers and chooses a winner (Pollitt, 2012). The use of a suitable sorting algorithm means that repeated comparisons lead to scripts sorted in order of merit. Boundaries are determined by a separate review of scripts. The method does much to remove subjective bias and to allow "hawks" and "doves" to mark successfully in a team (Daly et al., 2017). Our preliminary findings indicate, however, that it is, at best, time-neutral when assignments are marked by staff. We have now marked nearly 20 assessments using adaptive comparative judgement, and have found its use in peer assessment to be especially powerful. Students find it difficult to assign marks to one another's work, but are usually skilled at simple comparisons. Further, they can learn a great deal from seeing examples of other students' work. As part of an ongoing evaluation of adaptive comparative judgement, we asked students to leave detailed structured feedback for one another, using a 2,000-word third-year Pharmacy assignment (150 students) as an example. In a typical assessment a student will evaluate 10-15 of their peers' scripts, so leaving feedback on three or four aspects of the assignment is manageable. In this example, students were given detailed instructions about the preparation of feedback for their peers, and their feedback was found to be of high quality and to correspond well to staff feedback for the same assignment. Feedback was readily disseminated to students using Smallvoice (Ellis and Barber, 2016), but other mail merge options may be used. 
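The core idea behind comparative judgement, that an ordering by merit can emerge from simple pairwise decisions without anyone ever assigning a mark, can be sketched as follows. This is a minimal, hypothetical illustration, not the adaptive algorithm used by ACJ tools (which select informative pairings dynamically and fit a statistical model); the function and script names are invented for the example.

```python
from itertools import combinations

def rank_by_comparative_judgement(scripts, better_than):
    """Order scripts from strongest to weakest using only pairwise judgements.

    better_than(a, b) represents a single judgement: True if script `a`
    wins the comparison. Here every pair is compared once (a round robin)
    and scripts are ranked by win count; real ACJ systems instead choose
    pairings adaptively and estimate a merit parameter per script.
    """
    wins = {s: 0 for s in scripts}
    for a, b in combinations(scripts, 2):
        winner = a if better_than(a, b) else b
        wins[winner] += 1
    return sorted(scripts, key=lambda s: wins[s], reverse=True)
```

With a judge who consistently prefers the stronger script, the win counts recover the full order of merit, which is why judges who never agree on absolute marks ("hawks" and "doves") can still agree on a ranking.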
Interim conclusions are that peer-to-peer assessment and feedback using adaptive comparative judgement allows students to benefit from both preparing and receiving much more detailed feedback than staff could reasonably provide. Staff oversee the process by providing detailed instructions for the preparation of feedback and by moderating the assessment process. It is encouraging that the group of students who took part in this exercise have requested the opportunity to take part in adaptive comparative judgement practice exercises to help them prepare for other assessments. References Pollitt, A. (2012) 'The method of Adaptive Comparative Judgement', Assessment in Education: Principles, Policy & Practice, 19, pp. 281-300. Daly, M., Salamonson, Y., Glew, P.J. and Everett, B. (2017) 'Hawks and doves: The influence of nurse assessor stringency and leniency on pass grades in clinical skills assessments', Collegian, 24, pp. 449-454. Ellis, S. and Barber, J. (2016) 'Expanding and personalising feedback in online assessment: A case study in a school of pharmacy', Practitioner Research in Higher Education Journal, Special Assessment Issue, 10(1), pp. 121-129.

Research Evaluation Presentations Session Chair: Linda Thompson

Meeting Room 5

The Pedagogy of Interteaching: engaging students in the learning and feedback process Speakers: Nick Curtis, James Madison University; Keston Fulcher, James Madison University, Virginia, USA This presentation will introduce participants to the pedagogy of interteaching. Interteaching is an active learning paradigm grounded in behaviour-analytic methods (Goto & Schneider, 2010; Saville, Zinn, Neef, & Ferreri, 2006). There is a great deal of research suggesting that active learning methods, interteaching in particular, are more effective learning tools than traditional lecture methods for many topics (Barkley, 2009; Davis, 2009; Fink, 2013; Miller, Groccia, & Miller, 2001). The instructor of a class is not the 'holder' of all knowledge. Students can quite easily obtain all of the information contained in most classes from a research article, a book, or online. The job of an instructor is to facilitate the process of constructing that information and to help students apply it in ways that make sense. Thus, it is crucial that students' perspectives, fields of study, and prior knowledge are considered in the processing of the information. Interteaching allows an instructor to partner with students in the



process of teaching and learning to do exactly that. This presentation will highlight and describe the key components of interteaching: preparatory guides, in-class peer discussions, student feedback and reflection, targeted lectures, and learning probes. Explicit examples of each component, based on a course at our university, will be provided to participants. Special focus will be given to interteaching's innovative and sustainable process of incorporating student feedback, the redistribution of power between students and teacher in the classroom, and how interteaching works to increase learner agency, involvement, and volition. Participants will interact with each other and the presenters by participating in a mock interteaching session. Participants will leave with thought-provoking ideas for their own teaching practice. References Barkley, E. F. (2009) Student engagement techniques: A handbook for college faculty. New Jersey: John Wiley & Sons. Davis, B. G. (2009) Tools for teaching. New Jersey: John Wiley & Sons. Fink, L. D. (2013) Creating significant learning experiences: An integrated approach to designing college courses. New Jersey: John Wiley & Sons. Goto, K., & Schneider, J. (2010) 'Learning through teaching: Challenges and opportunities in facilitating student learning in food science and nutrition by using the interteaching approach', Journal of Food Science Education, 9(1), pp. 31-35. Miller, J. E., Groccia, J. E., & Miller, M. S. (2001) Student-Assisted Teaching: A Guide to Faculty-Student Teamwork. Bolton, MA: Anker Publishing Company. Saville, B. K., Zinn, T. E., Neef, N. A., Norman, R. V., & Ferreri, S. J. (2006) 'A comparison of interteaching and lecture in the college classroom', Journal of Applied Behavior Analysis, 39(1), pp. 49-61. Study Player One: Gamification, Student Engagement, and Formative Feedback Speaker: Errol Rivera, Edinburgh Napier University, UK Gamification is the use of game attributes in a non-game context. 
It is also not a new concept, nor is it a panacea for bored students (Deterding, 2012). Gamification does not turn learning into a game; rather, it generates meaning by shifting the context of an experience from one that is not game-like to one that is. Being a process that operates on how we experience a situation, gamification relies on the premise that experiences can be constructed (Werbach, 2014), and so lends itself to the constructivist foundations of common pedagogical practices in potentially varied and useful ways. For example: what if gamification could provide a toolkit for the rigorous design and effective delivery of formative assessment? Formative assessment offers a number of challenges to the educational practitioner, who can often be torn between policy and practice (Jessop, El Hakim, & Gibbs, 2017), with students caught in the middle. Formative assessment has its own unique strengths, such as the role of feedback and the sense of safety that comes from the absence of marks or grades (Dweck, 1999), and these strengths can inform a lecturer's design methodology. However, there might be just as much to be gained from understanding the potential weaknesses of formative assessment. When formative assessment is taken as a complex proposition, one where a teacher and a learner mutually undertake an involved activity to produce transformations that make learning objectives achievable (Black & Wiliam, 1998), it may arguably be vulnerable to a student's level of engagement, and dependent on an inextricable relationship between student engagement and formative feedback – both of which could be supported through gamification. Unfortunately, there is no standardised methodology for identifying and embedding specific game attributes in alignment with theory-based teaching or assessment. However, as gamification research moves forward and its concepts become more sophisticated, testable theoretical frameworks have begun to develop. 
This PhD study attempts to reconcile a theory of gamified learning with theoretically based formative assessment practice, bridged by a multi-dimensional framework for understanding student engagement (Kahu, 2013). This consolidation is used as the basis for the design of a formal technique for gamifying formative assessment, which will be used in several biological science modules and monitored for its impact on students' engagement with formative assessment. Drawing on the initial findings of PhD research into gamification, this presentation first demonstrates the common ways that gamification occurs in the 'wilds' of teaching and learning. Participants will then follow the challenges of consolidating existing theoretical constructs of formative assessment and student engagement with that of



gamification, and see the formulation of a potential model for concertedly applying gamification in a targeted, technical manner to impact student engagement with formative assessment. This presentation aims to provide grounds for a discussion that confronts our expectations of gamification, highlights the vital role that engagement plays in feedback, and challenges lecturers to better understand the relationship between what students think and what students feel, and ultimately how we learn. References Black, P., & Wiliam, D. (1998) 'Assessment and Classroom Learning', Assessment in Education: Principles, Policy & Practice, 5(1), pp. 1-65. https://doi.org/10.1080/0969595980050102 Deterding, S. (2012) 'Gamification: Designing for Motivation', Interactions, 19(4), p. 14. https://doi.org/10.1145/2212877.2212883 Dweck, C. S. (1999) Self-theories: Their role in motivation, personality, and development. Essays in Social Psychology, 214. https://doi.org/10.1007/BF01544611 Jessop, T., El Hakim, Y., & Gibbs, G. (2017) 'The whole is greater than the sum of its parts: a large-scale study of students' learning in response to different programme assessment patterns', Assessment & Evaluation in Higher Education, 42(6), pp. 990-999. https://doi.org/10.1080/02602938.2013.792108 Kahu, E. R. (2013) 'Framing student engagement in higher education', Studies in Higher Education, 38(5), pp. 758-773. https://doi.org/10.1080/03075079.2011.598505 Werbach, K. (2014) '(Re)Defining Gamification: A Process Approach', Persuasive Technology, 8462, pp. 266-272. https://doi.org/10.1007/978-3-319-07127-5_23 
Providing and receiving feedback: implications for students' learning Speakers: Georgeta Ion, Universitat Autònoma de Barcelona; Cristina Mercader Juan, Universitat Autònoma de Barcelona; Aleix Barrera Corominas, Universitat Autònoma de Barcelona; Anna Diaz Vicario, Universitat Autònoma de Barcelona, Spain Our study addresses the following issue: in the context of peer feedback, which role, i.e., assessor or assessee, is perceived as more beneficial to learners? To answer this research question, this study examines the relationship between students' perception of learning while 'providing' and 'receiving' feedback, with an emphasis on the following areas: 1. cognitive and metacognitive learning; 2. the development of discipline-related and professional academic skills; and 3. academic emotions or other affective aspects.

Considering the Vygotskian concept of scaffolded learning (Vygotsky, 1978), we designed an online questionnaire using the SurveyMonkey platform entitled 'Peer evaluation strategies and feedback'. 188 students enrolled in the teacher education bachelor degree at Universitat Autònoma de Barcelona (Spain) answered the survey. The questionnaire was administered to the students who were in class at that time, which allowed for the attainment of a representative sample. Once the data were gathered, univariate and multivariate statistical analyses were performed using SPSS v.20 and SPAD_N v.5.6. Results indicate that students perceived that other students benefitted more from the feedback they provided than they themselves benefitted from receiving feedback. According to the univariate analysis, which was performed to describe the application of peer feedback, the experience was (1) a useful learning strategy (M=4.68; SD=1.497) and (2) significantly improved their assignments (M=4.61; SD=1.446). Students believed that despite its significance, the feedback was more useful in improving the tasks of others (M=5.11; SD=1.245) than in improving the tasks performed by their own group (M=4.89; SD=1.460). In addition, the difference between providing and receiving feedback was analysed according to the overall assessment of each action, i.e. providing and/or receiving. We then compared both indices by performing a paired-samples t-test. The t-value was positive, indicating that the first condition (providing feedback) had a higher mean (M=4.75; SD=.090) than the second condition (receiving; M=4.63; SD=1.145); thus, we may conclude that providing feedback produced significantly more reported benefits than receiving feedback (t(183)=2.504; p=.013). The present study clearly supported the role of students in their own learning. As most participants recognised, more learning occurred when providing feedback, which is a clear indicator that students want to



assume an active role in their own learning and consider their involvement critical in the design of teaching and learning experiences. However, to enhance this benefit, classroom experiences should facilitate deep involvement of students in all learning and assessment processes, so as to enhance students' future professional competencies as assessors. References Ion, G., Cano, E., & Fernández, M. (2017) 'Enhancing Self-Regulated Learning through Using Written Feedback in Higher Education', International Journal of Educational Research, 85, pp. 1-10. Panadero, E., Jonsson, A., & Botella, J. (2017) 'Effects of self-assessment on self-regulated learning and self-efficacy: Four meta-analyses', Educational Research Review, 22, pp. 74-98. Vygotsky, L. S. (1978) Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.
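The paired-samples t-test reported in this abstract can be illustrated with a short, self-contained sketch. The ratings below are invented for illustration only (they are not the study's data), and only the Python standard library is assumed.

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(provide_scores, receive_scores):
    """Paired-samples t statistic for two ratings from the same students.

    Computed on the per-student differences: t = mean(d) / (sd(d) / sqrt(n)),
    where sd is the sample standard deviation. A positive t means the first
    condition (providing feedback) was rated higher on average than the
    second (receiving feedback).
    """
    diffs = [p - r for p, r in zip(provide_scores, receive_scores)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# Hypothetical ratings from six students on the same 1-6 scale as the survey.
providing = [5, 4, 5, 6, 5, 4]
receiving = [4, 4, 5, 5, 4, 4]
t = paired_t(providing, receiving)  # positive: providing rated higher
```

Because the two ratings come from the same students, the test operates on within-student differences rather than comparing two independent groups, which is what makes the paired design appropriate here.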

Research Evaluation Presentations Session Chair: Dr Rita Headington

Meeting Room 7

Contextual Variables in Written Assessment Feedback in a University-level Spanish Program Speakers: Ana Maria Ducasse, RMIT University, Melbourne, Australia; Kathryn Hill, La Trobe University, Melbourne, Australia A number of researchers have highlighted the 'situated' nature of assessment and feedback, making context, along with teacher and students, a key factor to be taken into account in any consideration of assessment for learning. This paper reports on a collaborative dialogue (Scarino, 2016) between a teacher and a researcher regarding the impact of context on written feedback in a Spanish as a foreign language program at an Australian university. The study investigated two research questions: 1. How does the teaching context influence the nature of feedback provided to students? and 2. How does the teaching context influence learner responses to feedback?

Following Turner and Purpura (2015), context is understood as comprising both micro- and macro-level factors. Macro-level factors include the political, social and cultural environment for assessment and feedback. Micro-level factors include institutional factors (e.g., assessment policy), the teaching and learning 'infrastructure' (e.g. curriculum and teaching environments), individual teacher and learner attributes (Andon, Dewey & Leung, 2017) and, following Norris (2014), the assessment task itself. Participants comprised a language assessment researcher, an 'expert' Spanish as a foreign language lecturer and 15 students from beginner (CEFR A1), intermediate (CEFR B1), and advanced (CEFR C) levels in a university-level Spanish program. Data comprised written feedback on final writing tasks for each of the three levels collected over a 12-week semester, as well as recordings and transcripts of discussions between teacher and researcher regarding feedback decisions. Data were analysed using thematic content analysis. Analysis of teacher feedback focused on the effect of context on the type and focus of feedback. Analysis of learner responses focused on contextual influences on affective responses, dispositions towards feedback, and uptake. While some general themes could be identified, the results serve to highlight the complexity and variability of the interaction between teacher intentions, learner orientations, and various dimensions of context. References Andon, N. J., Dewey, M., & Leung, C. (2017) 'Tasks in the Pedagogic Space: using online discussion forum tasks and formative feedback to develop academic discourse skills at Master's level', in TBLT as a Researched Pedagogy (Task-Based Language Teaching). Amsterdam: John Benjamins Publishing Company. Norris, J. M. (2014, October) Some reflections on Learning Oriented Assessment. 
Roundtable on Learning-Oriented Assessment in Language Classrooms and Large-Scaled Contexts, 12 October, Teachers College, Columbia University, New York.



VACS - Video Assessment of Clinical Skills Speakers: Mark Glynn, Dublin City University; Evelyn Kelleher, Dublin City University; Colette Lyng, Beaumont Hospital, Dublin; Adele Keough, Dublin City University; Anna Kimmins, Dublin City University; Patrick Doyle, Dublin City University, Republic of Ireland This paper describes an innovative assessment that was introduced in response to the educational and logistical challenges of identifying appropriate assessment strategies for assessing practical skills with large cohorts of students. The aim of the innovation was to replace a face-to-face practical exam with online submission of a video recording of the student performing the practical skill. Every year we have in excess of 200 first-year undergraduate students. Each student must demonstrate competency in a variety of practical clinical skills. Each skill can take up to 10 minutes. It is necessary for students to demonstrate each skill individually, so a lecturer must sit through over 2,000 minutes (33+ hours) of individual student assessment for each skill to be assessed. With five different skills required for first-year students, the logistical challenge of managing the assessment is considerable. This normally involved up to ten different staff members supporting the assessment over a week-long period. Each lecturer would use a paper-based rubric to assess the students and then hand their evaluations back to the module coordinator so that feedback could be centrally issued to the students. From a student's perspective, they are given a time and date and have a one-off opportunity to perform. We therefore needed to determine a more effective and efficient way of assessing these key clinical skills. Face-to-face practical exams present many challenges; they are inflexible and resource- and time-intensive. 
Furthermore, they cause pressure, nerves, anxiety and stress for students; fatigue and loss of concentration for examiners; and inconsistency and errors in marking, leading to disputed results. Online video submission is a simple approach that can overcome many of these challenges. Video has been used successfully in education for many years; however, this approach of using online video submission to replace face-to-face practical exams appears to be a new innovation. This new assessment method has transformed not only the way we assess but also the way students learn their clinical skills. We have moved from assessment of learning alone to assessment of and for learning. Instead of a one-off performance in front of a lecturer, students pair up with a colleague and use their own phone to record themselves performing the skill, thereby also introducing peer learning and feedback into the process. We first implemented this assessment method for first-year students in 2013/14 and have used it every year since. Every year, based on feedback, we make slight changes to optimise the process. Preliminary evaluation of the first cohort of students to participate in this innovation suggests that the majority of them preferred the online submission format and that it did enhance their learning. This paper outlines current, more in-depth research to assess the full benefits and potential of this assessment method. References Jones, D.J., Anton, M., Gonzalez, M., Honeycutt, A., Khavjou, O., Forehand, R., Parent, J. (2015) 'Incorporating Mobile Phone Technologies to Expand Evidence-Based Care', Cognitive and Behavioral Practice, 22(3), pp. 281-290. Zick, A., Granieri, M., Makoul, G. (2007) 'First-year medical students' assessment of their own communication skills: A video-based, open-ended approach', Patient Education and Counselling, 68(2), pp. 161-166. Patri, M. (2002) 'The influence of peer feedback on self- and peer-assessment of oral skills', Language Testing, 19(2), pp. 
109-131. Nicol, M., Freeth, D. (1998) 'Assessment of clinical skills: a new approach to an old problem', Nurse Education Today, 18(8), pp. 601-609. Foregrounding feedback in assessment as learning: the case of the processfolio Speaker: Jayne Pearson, University of Westminster, UK Research has shown that teacher feedback, even when high-value (Hounsell, 2007), can be disempowering for students if it is one-directional 'telling' (Sadler, 2010) or interpreted as directives despite teachers' efforts to be facilitative (Richardson, 2000). It is also de-motivating for staff to have time-consuming feedback disregarded by students when revising their work, or to receive low scores for assessment and feedback in internal and external evaluation. This presentation will report on an alternative assessment of academic writing on a high-stakes preparatory course for international



students at a UK university, which was designed and implemented over three iterative cycles as part of an action research project. The processfolio assessment was a reaction to performance- and product-orientated assessments and an exam culture which were preventing students from conceptualising themselves as developing writers transitioning to academic discourse communities (Gourlay, 2009) and constraining students' individual and social agency. The project was an empirical impact study on the washback and social impact of assessment in the context of tensions between the UK's drive for internationalisation and attempts to maintain linguistic and academic standards through immigration policies. The processfolio, as an adaptation of the traditional portfolio concept, allows the student to present their journey of creating one research essay. The project contained many elements, but this presentation will focus on the incorporation of three levels of feedback: their teachers' advice through written comments and recorded tutorials; their peers' response through an asynchronous Moodle wiki forum; and their own evaluation through criteria application and a short reflection. As part of the project, students chose for themselves which of the feedback activities were useful, and why, by including them as artefacts within their folios. By foregrounding feedback as integral to the assessment, rather than a post-hoc event as in most formative-assessment practices, the processfolio incorporates different voices in the assessment process (Boud and Falchikov, 2006), emphasising a shared responsibility for feedback. Benefits derived from the folio were an increased sense of self-efficacy and control over writing-assessment events and an increased ability in self-regulated or procedural autonomy, leading to a growing critical agency (Sambell, 2006) evidenced by questioning of criteria and of accepted norms of academic writing and assessment practices. 
However, the self-evaluative aspect of the folio was the least successful and the most resisted by students. Therefore, if the processfolio is to be applied in different contexts, it must not be assumed that students will automatically take agency. The curriculum supporting the processfolio should therefore incorporate activities which facilitate an understanding of the constraining and enabling potential of assessment practices in twenty-first century higher education. References Boud, D. & Falchikov, N. (2006) 'Aligning Assessment with long-term learning', Assessment & Evaluation in Higher Education, 31(4), pp. 399-413. Gourlay, L. (2009) 'Threshold practices: becoming a student through academic literacies', London Review of Education, 7(2), pp. 181-192. Hounsell, D. (2007) 'Towards more sustainable feedback to students', in Falchikov, N. & Boud, D. (eds.) Rethinking Assessment for Higher Education. London: Routledge, pp. 101-113. Richardson, S. (2000) 'Students' conditioned response to teachers' response: portfolio proponents take note!', Assessing Writing, 7(2), pp. 117-141. Sadler, D.R. (2010) 'Beyond feedback: developing students' ability in complex appraisal', Assessment and Evaluation in Higher Education, 35(5), pp. 535-550. Sambell, K., McDowell, L. & Sambell, A. (2006) 'Supporting diverse students: developing learner autonomy via assessment', in Bryan, C. & Clegg, S. (eds.) Innovative Assessment in Higher Education. Abingdon, Oxon: Routledge, pp. 158-167.

Round Table Presentations Session Chair: Dr Peter Holgate

Meeting Room 11

Exploring student perceptions of a feedback trajectory for first-year students at the University of Leuven Speakers: Anneleen Cosemans, Leuven Engineering and Science Education Center; Carolien Van Soom, Greet Langie, Tinne De Laet, University of Leuven, Belgium The University of Leuven is a Flemish university with an open-admission system. Against this background, the University of Leuven invests strongly in study orientation, guidance, and counselling. During the round table, we will zoom in on an innovative feedback trajectory in the Science and Engineering faculties, aimed at first-year students. This trajectory consists of a series of feedback moments throughout the academic year: individualised learning dashboards (Broos, Peeters, et al., 2017; Broos, Verbert, & De Laet, 2018; Broos, Verbert, Van Soom, Langie, & De Laet, 2017), on-campus group sessions, one-on-one counselling, an optional positioning test before enrolment (Vanderoost et



al., 2015) followed by individualised feedback, etc. Furthermore, we will propose our research questions for investigating student perceptions of the feedback trajectory. We want to find out how the learning dashboards, with situated information on individual study skills and results, are perceived by our first-year students, following one of Evans' suggestions (2013) for moving research-informed practice in assessment feedback in HE forward. Do students feel these tools support their individual learning process? How experienced are they in seeking and receiving feedback? How does feedback coming from the learning dashboards fit in with other types of feedback, e.g. within their coursework, or from peers? Our ultimate goal is to optimise the feedback trajectory to make it more effective, i.e. to have an impact on learning behaviour. References Evans, C. (2013) 'Making Sense of Assessment Feedback in Higher Education', Review of Educational Research, 83(1), pp. 70-120. doi:10.3102/0034654312474350 Broos, T., Peeters, L., Verbert, K., Van Soom, C., Langie, G., & De Laet, T. (2017) 'Small Data as a Conversation Starter for Learning Analytics: Exam Results Dashboard for First-year Students in Higher Education', Journal of Research in Innovative Teaching & Learning. Broos, T., Verbert, K., & De Laet, T. (2018) 'Multi-institutional Positioning Test Feedback Dashboard for Aspiring Students', LAK 2018 Conference, 5-9 March, Sydney, Australia. Broos, T., Verbert, K., Van Soom, C., Langie, G., & De Laet, T. (2017) 'Dashboard for Actionable Feedback on Learning Skills: How Learner Profile Affects Use', ECTEL 2017 Conference (ARTEL workshop), Tallinn University, 12-15 September, Tallinn, Estonia. Vanderoost, J., Van Soom, C., Langie, G., Van den Bossche, J., Callens, R., Vandewalle, J., & De Laet, T. (2015) 'Engineering and science positioning tests in Flanders: powerful predictors for study success?', 43rd Annual SEFI Conference, 12-15 September, Orléans, France. 
https://www.researchgate.net/publication/281590094_Engineering_and_science_positioning_tests_in_Flanders_powerful_predictors_for_study_success Feedback: What lies within? Speaker: Jane Rand, York St John University, UK Assessment feedback is an area of disillusionment – there is a mismatch between what students want to receive and what lecturers want to give (Tapp, 2015). Dominant assessment dualisms like 'peer/tutor', and more particularly 'formative/summative', contribute to disillusionment because they disguise what lies within the binary limits. This round table presentation introduces a model, Dimensions of knowing©, as a technique to initiate a dialogue of educational criticism (Leonardo, 2016) and open up conversations between students and lecturers about what kind(s) of work the representation of feedback does. As one of the most powerful single influences on student learning, and as an integral element of a social epistemology, feedback must both foster purpose within students and be useful to them. Using the Dimensions of knowing model as both a physical and conceptual focus, I invite colleagues to reflect, interact, and begin to discuss the ways in which we might re-see, reframe, and transform feedback. References Leonardo, Z. (2016) 'Educational criticism as a new specialization', Research in Education, 96(1), pp. 87–92. Tapp, J. (2015) 'Framing the curriculum for participation: A Bernsteinian perspective on academic literacies', Teaching in Higher Education, 20(7), pp. 711–722. Enabling a culture of self-reflection amongst pre-service teachers through online journaling Speaker: Margaret O'Keeffe, Mary Immaculate College, Ireland Fostering graduates who are lifelong reflective educators is a core element of the teaching and learning processes of higher education – graduates who can evaluate their own learning through self and peer reflection (Boud, 1999; Nicol, 2014). 
In an effort to enable pre-service teachers to become reflective practitioners and engage critically with themselves and others, this study seeks to explore the potential of technology to augment current feedback processes, through reflective online



journals. Feedback in this context is informed by principles of assessment for learning, where students become active and critical participants in the reflective process through engagement in self-inquiry. With this in mind, this pilot study aims to examine the impact of innovative technologies and pedagogies to support teaching, learning and assessment feedback with third- and fourth-year B.Ed. students. It seeks to establish to what extent engaging with online feedback processes supports pre-service teachers' personal and professional identity, and what pre-service teachers' views are of using technology to support feedback and reflective practice. Qualitative data from online reflective journals and focus groups will be examined to establish common themes emerging from the data. It is expected that findings will demonstrate the extent to which online feedback helps students to be reflective and critical thinkers. Findings from this research will support the professional development of teacher educators, enabling them to develop feedback pedagogies to support the delivery and integration of feedback within education modules. Hearing Voices: First Year Undergraduate Experience of Audio Feedback Speaker: Stephen Dixon, Newman University, UK Recent changes to the UK higher education sector, including the growth and diversification of the student body, greater modularisation with fewer coursework assignments, and less staff-student contact time, have presented numerous challenges. The parallel rise in the use of digital technologies in professional practice can often be seen to exacerbate the perceived dehumanising effect of this massification. The focus of this short presentation centres on the use of one such technology – digital audio feedback with first year undergraduates at Newman University, Birmingham. 
Drawing on the findings of a longitudinal phenomenological study conducted for a doctorate, the presentation will stress the importance of moving beyond any technologically deterministic view, and the need to set any understanding in the wider context of students' own interpretation of the feedback process. Whilst the use of audio feedback is seen to alleviate the failures of communication often identified in feedback, the findings are also significant in terms of means of access, use of learning technologies, dialogic perception, studentship and engagement. In particular, the use of audio feedback is seen as facilitating a shift from statement to discourse and the possibility of establishing more meaningful learning relationships with students.

Micro Presentations Session Chair: Professor Pete Boyd

Piccadilly Suite

Crisis, what crisis? Why we should stop worrying about NSS scores Speaker: Alex Buckley, University of Strathclyde, UK Assessment and feedback are often discussed in tones of crisis, and the National Student Survey frequently seems to be the cause. Students' ratings of assessment and feedback are, on average, the weakest of all the areas covered by the NSS, and it has become commonplace to justify a focus on assessment and feedback by citing the national NSS scores. Nevertheless, this is a mistake: the NSS does not provide any evidence that assessment and feedback is a particular problem for the UK higher education sector. It is generally unwise to draw conclusions from comparisons between survey items, as different levels of positivity can be acceptable for different aspects of students' experiences. Given the element of criticism inherent in assessment and feedback processes, there are specific reasons to expect that those aspects of students' experiences would be less positive than other areas of the questionnaire. Having your work assessed, and receiving feedback about your performance, can be emotive and challenging in a way that other elements covered by the questionnaire – teaching, organisation, learning resources – are not. There may be a crisis in assessment and feedback, and there may be good evidence for it, but it doesn't come from the NSS.



Transforming feedback practice in the practical environment using digital technologies Speakers: Olya Antropova, Ronan Bree, Moira Maguire, Dundalk Institute of Technology, Republic of Ireland; Akinlolu Akande, Institute of Technology, Sligo, Ireland; Dian Brazil, Institute of Technology, Carlow, Ireland; Anne Mulvihill, Athlone Institute of Technology, Athlone, Westmeath, Ireland. In Science and Health disciplines, undergraduate students spend significant time in practical sessions. Here, they work in groups to learn technical competencies in combination with group work, data analysis and interpretation, communication skills and peer/self-assessment. The TEAM project (Technology enhanced assessment methods in science and health practical settings) aims to improve assessment in practical sessions via the use of digital technologies. To date, four thematic areas have been identified, namely: (i) pre-practical preparation (videos, online/app-based quizzes), (ii) electronic laboratory notebooks and ePortfolios, (iii) digital feedback and (iv) rubrics. Fifty pilots involving these technologies have been implemented with 1,481 students and are currently being evaluated. While improvements to the assessment of practicals have been central to the project, enhancements in feedback delivery systems within practical sessions have also become a focus. For example, in the sciences, feedback traditionally comprises hand-written comments on submitted laboratory reports. With the innovative TEAM project (http://www.teamshp.ie), students are receiving their feedback via audio recordings, online quiz-generated responses, electronic lab notebook feedback widgets, graphics tablets and rubrics, amongst others. Here, we present the student evaluation of these innovative and transformative feedback-providing technologies that can positively engage and empower students in the science practical environment. 
Stressful feedback post paramedic simulation training Speaker: Enrico Dippenaar, Anglia Ruskin University, UK Simulation-based education has been around for some time in clinical education. These sessions have historically been very didactic, with little to no feedback or debriefing post simulation. In recent years, the significance of feedback format and its positive impact on learning has influenced practice across all forms of clinical training. Paramedic training, however, is distinctive in that the 'job' is unpredictable and focused on high-stress, high-intensity situations – often life or death. Simulation-based education is ideal for trying to create the same stressors on students as they will face in 'real life'. Feedback and debrief following such simulations can be extremely difficult: adrenaline is 'pumping', emotions are high, physiologically you exhibit signs of stress, mentally you are drained. How do we use feedback in a constructive manner in such an unconstructed environment? This micro presentation will highlight lessons learned from this unusual situation that may be of value in other assessment contexts. Developing and implementing a model of intensive spoken teacher and peer feedback with learner control Speakers: Gordon Joughin, Deakin University, Australia; Helena Gaunt, Guildhall School of Music and Drama, UK In music and drama education, feedback on students' performance and presentations in the classroom, on stage and in formal assessments is so central to teaching and learning that it has been characterised as a signature pedagogy. Traditional forms of feedback can easily diminish learner control and student growth. At the Guildhall School of Music and Drama an innovative approach to feedback has been developed, implemented and evaluated through an HEA-funded project on transformational feedback for empowering students in their courses and future careers. 
The Critical Response Process, pioneered in dance education, has been modified and enhanced through coaching-mentoring principles to develop a genuinely dialogic approach to feedback, co-constructed between students, their peers and their teachers. The four-step process, closely aligned with Boud and Molloy's Feedback Mark II, focuses on the meaning of the work for its audience; the student's request for specific feedback; questions peers and teachers pose to prompt the student's reflection; and the student's control over hearing evaluative opinions. The systematic embedding of the process across the Guildhall School has resulted in a significant community of informed feedback practitioners. This process may resonate with colleagues from other disciplines, who may also offer insightful challenges to the approach.



References Guildhall School of Music and Drama (2017) Empowering artists of the future through a transformational feedback model: a strategic initiative funded by the HEA, 2015-16. Available at: https://www.gsmd.ac.uk/about_the_school/research/research_areas/transformational_feedback/ (Accessed: 12 March 2018). Lerman, L. and Borstel, J. (2003) Liz Lerman's Critical Response Process. Takoma Park: Liz Lerman Dance Exchange. Mapping the Assessment Journey: Student evaluations of a visual representation of their assessments Speaker: Anke Buttner, University of Birmingham, UK Assessment is central to students' experience, and is associated with a large volume of information about different aspects of the assessment process. This ranges from basic procedural details (e.g. deadlines, formats) to sophisticated assessment literacy-related principles, which students need to understand to succeed at their assessments and make the most of their feedback (Price, Rust, O'Donovan and Handley, 2012). To maximise the utility of feedback, students also need to understand the interconnections between different assessments on their programmes, but transfer of feedback from one context to another is often difficult (Schwartz, Bransford and Sears, 2005). To assist with this, we developed a calendar-based assessment map; while this was fairly well received by the students, what they considered important did not match the priorities of academics, and the evidence suggested that assessments were still viewed in isolation (Buttner & Pymont, 2015). The aim of this paper is to present students' evaluations of a more visually intuitive representation of the interconnections between assessments. A flow chart assessment map outlining the connections between first, second, and third year assessments was made available to students, and evaluated using a questionnaire and focus groups. 
The findings are presented here and implications for future developments in assessment mapping are discussed. References Buttner, A. C. and Pymont, C. (2015) 'Charting the assessment landscape: preliminary evaluations of an assessment map', 5th Assessment in Higher Education Conference, Birmingham, 24-25 June. Price, M., Rust, C., O'Donovan, B. and Handley, K. (2012) Assessment Literacy: The Foundation for Improving Student Learning. Oxford: The Oxford Centre for Staff and Learning Development. Schwartz, D. L., Bransford, J. D. and Sears, D. (2005) 'Efficiency and innovation in transfer', in Transfer of Learning from a Modern Multidisciplinary Perspective. North Carolina: Information Age Publishing, pp. 1-51.

Research Evaluation Presentations Session Chair: Assistant Professor Natasha Jankowski

Piccadilly Suite

Developing student feedback literacy using educational technology and the reflective feedback conversation Speakers: Kathryn Hill, La Trobe University, Melbourne, Australia; Ana Maria Ducasse, RMIT University, Melbourne, Australia The potential for feedback to promote learning has been well-documented. However, as any teacher will attest, feedback does not automatically lead to improvement (Sadler, 2010), and this has led to a large volume of advice to teachers about how to make their feedback more effective. However, quality notwithstanding, feedback can only promote learning to the extent that it is accessed, understood and acted on by learners (Gibbs & Simpson, 2004-5). Research has found that students do not always share the same views about the role and importance of feedback as their teachers (Price, Handley & Millar, 2011). They often experience difficulty with understanding feedback (Weaver, 2006) and with knowing how to act on it (Gibbs, 2006; Poulos and Mahony, 2008). Moreover, students are not necessarily receptive to the feedback provided (Andon, Dewey & Leung,



2017), particularly if it fails to align with their personal learning goals and beliefs about language learning (Ducasse & Hill, 2016, 2018a). There is an increasing recognition of the centrality of the learner in assessment for learning (Andrade, 2010), and these findings underscore the importance of including the learner perspective in any examination of feedback practices. This paper describes an intervention which used the 'reflective feedback conversation' (Cantillon and Sargeant, 2008) and educational technology (PebblePad) to encourage learners to take a more active role in the feedback process. The 'reflective feedback conversation', first proposed for use in clinical education, involves asking students to identify two to three specific areas they would like feedback to focus on and to evaluate their own performance in these areas. The teacher then provides her own evaluation and asks the student to reflect on specific ways to improve. The teacher then elaborates on the student's response and checks understanding. PebblePad was used to enable the documentation and tracking of these 'feedback conversations' over time. The aim of the intervention was to promote:
• shared goals and expectations
• a shared understanding of expected quality and standard, and
• shared responsibility for developing the knowledge and skills necessary for bridging the gap between current and desired performance.
Participants were 50 students enrolled in Level 3 (pre-intermediate) of a university-level Spanish program. Data included student questionnaires, teacher and student interviews (n=5) and documentation (via PebblePad) of feedback 'conversations' over time. Questionnaire data were analysed using descriptive and inferential statistics, while interview and documentation data were analysed using thematic content analysis. The findings have implications for improving the uptake of feedback, increasing student agency and self-regulation, and for promoting sustainable feedback practices. 
References Andon, N. J., Dewey, M. and Leung, C. (2017) 'Tasks in the Pedagogic Space: using online discussion forum tasks and formative feedback to develop academic discourse skills at Master's level', in TBLT as a Researched Pedagogy (Task-Based Language Teaching). John Benjamins Publishing Company. Cantillon, P. and Sargeant, J. (2008) 'Giving feedback in clinical settings', BMJ, 337, pp. 1292-4. doi: https://doi.org/10.1136/bmj.a1961. Ducasse, A. M. and Hill, K. (in press) 'Advancing written feedback practice through a teacher-researcher collaboration in a University Spanish program', in Poehner, M. E. and Inbar-Lourie, O. (eds.) Toward a Reconceptualization of L2 Classroom Assessment: Praxis and Researcher-Teacher Partnership, Springer Educational Linguistics series. Diverse orientations towards assessment and feedback internationally Speakers: Sally Brown, Independent consultant, Emerita Professor at Leeds Beckett University, UK; Kay Sambell, Edinburgh Napier University, UK International practice in assessment, and particularly feedback, is diverse globally, and expectations of how feedback is presented, its scope and extent, are highly variable. Additionally, Killick (2018) argues that in an intercultural context, feedback on assignments can be 'subject to misinterpretation across cultural boundaries' (p. 149). There is also substantial variation in the extent to which students feel their teachers are responsible for their academic success through the guidance and feedback they offer (Palfreyman and McBride, 2007), with potential ensuing issues around student and staff expectations. In this short account using emergent data from an ongoing project, Kay Sambell and Sally Brown will showcase examples of different approaches and practices around feedback across at least five nations in which they have worked/presented on assessment and feedback. Our findings to date from the literature in the field and from working internationally ourselves suggest that:

27


1. Students from some Confucian-heritage nations have higher expectations of deep and extended support on assignments prior to submission, in the form of feedback on drafts (Brown and Joughin, 2007);
2. In some Southern European nations (including Italy and Spain), extensive feedback pre- or post-submission is neither offered to students nor expected by them;
3. In the Netherlands, oral assessment (such as viva voce examinations) is used far more extensively in undergraduate studies than elsewhere, and feedback is frequently provided on unseen time-constrained exams, unlike normal practice in, for example, the UK and Australia;
4. Multiple-choice questions and other forms of computer-based assessment are often more widely used in nations including the US and Singapore than elsewhere, and frequently these do not include integral feedback;
5. In the UK, Australia and Ireland, greater use is made of inter- and intra-peer assessment in the assessment of group work than in nations including Spain, the US and Italy, with concomitant requirements for training and support to be provided for students on giving feedback to one another.

We nevertheless recognise the importance of the findings of a comparison between UK and Australian assessment practices (Winstone and Boud, 2017, p. 3) that suggest that 'major differences in feedback culture may not be located in national approaches, but in local disciplinary, institutional and pedagogic contexts'. Our work in the area is ongoing, and within the discussion element of this presentation we would seek participants' collegial contributions to our research. References Brown, S. and Joughin, G. (2007) 'Assessing international students: Helping clarify puzzling processes', in Brown, S. and Jones, E. (eds.) Internationalising Higher Education. London: Routledge. Carless, D., Joughin, G. and Ngar-Fun, L. (2006) How Assessment Supports Learning: Learning-oriented Assessment in Action. Hong Kong: Hong Kong University Press. Carroll, J. and Ryan, J. (2005) Teaching International Students: Improving Learning for All. Abingdon: Routledge SEDA series. Killick, D. (2018) Developing Intercultural Practice: Academic Development in a Multi-cultural and Globalizing World. London and New York: Routledge. Palfreyman, D. and McBride, D. L. (2007) Learning and Teaching Across Cultures in Higher Education. Basingstoke: Palgrave Macmillan. Winstone, N. and Boud, D. (2017) 'Supporting students' engagement with feedback: The adoption of student-focused feedback practices in the UK and Australia', Society for Research into Higher Education Annual Research Conference, South Wales, UK, 6-8 December. Frequent rapid feedback, feed-forward, and peer learning, for enhancing student engagement in an online portfolio assessment Speaker: Theresa Nicholson, Manchester Metropolitan University, UK This paper presents outcomes from a 3-year initiative to enhance student engagement through weekly summative and formative feedback. The research concerns a first-year undergraduate, tutorial-supported, academic skills module with an assessed online portfolio. 
In phases 1 and 2, half the student cohort submitted a printed portfolio, while half completed an online portfolio. In the final phase, all students completed an online portfolio (c. 190 students). The research design has enabled a robust comparison of the influence of assignment mode on the efficacy of marking and feedback, attainment, student engagement, and tutorial management. The aims were to enhance student engagement, improve digital literacy among staff and students (Clarke and Boud, 2016), and to raise employability awareness by developing a showcase of skills for prospective employers (Simatele, 2015). The online mode challenges the orthodox approach, in which the classroom is the focus of learning, shifting to a student-regulated mode in which learning is facilitated by tutors' regular, individual, written feedback. Thus frequent and interactive feedback, including elements of feed-forward and peer learning, helps develop students' thinking and learning skills (Clark, 2012). Qualitative analysis reveals that the provision of rapid and regular feedback on work is the aspect most valued by students. Tutors valued the ability to track students' progress by accessing online portfolios and providing rapid feedback on completed work. Feedback and progress tracking is easy to give and to



receive online (Heinrich et al., 2007), but also creates accountability that is often absent in relatively remote institutional monitoring systems (Stork and Walker, 2015). Some tutors found the marking and feedback process easier online and that there was a positive impact on tutorials, while others found the process more challenging. The online approach had an adverse effect on face-to-face meetings for some, highlighting the need for guidance on tutorial management. Quantitative analysis of student grades tentatively indicates higher attainment levels in the online mode, where progress tracking and regular feedback occur. There are some tensions between the desire to provide very regular, rapid feedback and the associated practical constraints. Barriers sometimes arose through non-engagement of learners, likely influenced by an array of external as well as internal pressures. Nevertheless, engagement on the whole was much improved. There were also constraints due to limited digital literacy and tutors' workload pressures. The findings suggest that personalised progress tracking, prompt, regular feedback on tasks, and multiple opportunities for group-based discussion of feedback can promote student engagement in both self-regulated and face-to-face learning activities. References Clarke, J. L. and Boud, D. (2016) 'Refocusing portfolio assessment: Curating for feedback and portrayal', Innovations in Education and Teaching International, pp. 1-8. http://dx.doi.org/10.1080/14703297.2016.1250664 Clark, I. (2012) 'Formative assessment: assessment is for self-regulated learning', Educational Psychology Review, 24(2), pp. 205–249. Heinrich, E., Bhattacharya, M. and Rayudu, R. (2007) 'Preparation for lifelong learning using ePortfolios', European Journal of Engineering Education, 32(6), pp. 653–663. Simatele, M. (2015) 'Enhancing the portability of employability skills using e-portfolios', Journal of Further and Higher Education, 39(6), pp. 862–874. Stork, A. and Walker, B. (2015) Becoming an Outstanding Personal Tutor. Northwich: Critical Publishing.

Research Evaluation Presentations Session Chair: Associate Professor Geraldine O’Neil

Meeting Room 3

Student learning through feedback: the value of peer review? Speaker: Charlie Smith, Liverpool John Moores University, UK The design review – also known as a crit or jury – is a long-standing cornerstone of design education. It is a forum through which students present their coursework, and then receive formative feedback from their teachers and guest reviewers – often external practitioners or teachers from other institutions. However, its perceived value and status within the pedagogic process mean that its methods and effectiveness often go unquestioned. (1) In his seminal 1852 book 'The Idea of a University', Newman argues that, “A university training … is the education which gives a man [sic] a clear conscious view of his own opinions and judgements, a truth in developing them, an eloquence in expressing them, and a force in urging them.” (2) Given the format and power-dynamic of the conventional design review – a nervous student standing in front of their work, facing a panel of critics seated before them, beyond whom other students passively observe the proceedings – it is questionable as to what extent it meets any of Newman’s objectives. What, if anything, does the review offer as a means through which to foreground the student, and toward developing their critical and reflective thinking? This paper interweaves two methodologies to critique student peer review – where students become critics in place of their teachers. Firstly, and theoretically, by interrogating it in the context of contemporary pedagogic research, to debate its value, strengths and weaknesses in terms of learning gain and skills development, and its suitability to the contemporary higher education environment. 
Secondly, and empirically, through a primary qualitative research project that evaluates the experiences of Architecture students involved in a series of peer reviews, as both reviewers and reviewees; this provides insights and understanding of how students feel about being placed in the position of critics to their peers, and what they think they take from the process – both in terms of feedback on their work and in developing their critical thinking. This is significant, because whilst it might have pedagogic benefits, research has found instances where students



resented evaluating their peers' work. (3) The paper will discuss to what extent peer reviews are an effective means through which to deepen students' creative and critical thinking, in establishing a meaningful dialogue between learners in which they are contributors as opposed to mere passive recipients, and in augmenting their participation and agency in the learning enterprise. For as Collini argues, preparing students for autonomy is a key objective of higher education. References Collini, S. (2012) What Are Universities For? London: Penguin Books. Newman, J. (1886) The Idea of a University. 6th edn. London: Longmans, Green. Salama, A. M. (2015) Spatial Design Education: New Directions for Pedagogy in Architecture and Beyond. Farnham, Surrey: Ashgate. Wilson, M. J., Diao, M. and Huang, L. (2014) '"I'm Not Here to Learn How to Mark Someone Else's Stuff": An Investigation of an Online Peer-to-Peer Review Workshop Tool', Assessment and Evaluation in Higher Education, 40(1), pp. 15-32. In the shoes of the academics: Inviting undergraduate students to apply their assessment literacy to assessment design Speakers: Anke Buttner, Andrew Quinn, Ben Kotzee and Joulie Axelithioti, University of Birmingham, UK In higher education, students' feelings about and responses to assessment are a perennial concern. 
Much of the discourse in this area relates to students' satisfaction and engagement with feedback; however, an awareness has been growing that universities need to understand students' 'assessment literacy' – their 'understanding of the rules surrounding assessment in their course context, their use of assessment tasks to monitor or further their learning, and their ability to work with the guidelines on standards in their context to produce work of a predictable standard' (Smith et al., 2013: 46). Recently, both the HEA (2012) and QAA (2013) have encouraged universities to improve students' assessment literacy to boost student satisfaction, protect and improve standards, and increase confidence in the system of assessment. Yet, to date, few empirical studies have addressed students' assessment literacy in the university sector. We combined questionnaire and workshop data to study students' feelings about assessment as well as their understanding of the assessment design process. We asked students to design the assessment they felt most suited the domain to be assessed. We recorded students' thinking, responses and solutions to see whether students can 'place themselves in the position of academics who design assessment'. In this paper, we present results from our study. Feedback to the Future Speakers: Erin Morehead and Andrew Sprake, University of Central Lancashire, UK Providing students with feedback is a key part of the academic's role (Angelo & Cross, 1993; Richardson, 2005). Facilitating students to access and engage with feedback can be challenging. Students also report a lack of understanding around feedback, in particular on written pieces of work (Richardson, 2005). In addition, research has shown that feedforward plays an important role in the development of learners and that self-reflection is an integral part of the learning process (Ferrell & Gray, 2016; HEA, 2012). 
A need was therefore identified to improve students' understanding of feedback and to encourage them to make use of this feedback in future assignments. Two small independent cohorts of students from the Faculty of Health & Wellbeing were selected for this pilot study, in order to evaluate the outcomes of two similar feedforward strategies. After receiving feedback on academic work, students were encouraged to access the comments on their assignment. Once the student had reviewed the feedback received, they were asked to complete either Action Plan A (Health Sciences students) or Action Plan B (Sports & Wellbeing students). Action Plan A asked the students to document significant feedback points from one specific assignment. The students were asked to identify both areas for improvement and areas of good practice. Following on from this, students were asked to identify how they improved or maintained these areas in the next piece of work being submitted. The students were required to submit the action plan appended to their next written assignment. Action Plan B encouraged students to document significant feedback points from multiple assignments and to identify how they could improve on these in future pieces of work. Action Plan B also asked students to perform a self-assessment of the mark they would award their own piece of work. Students were encouraged to submit this alongside their next assignment, but it was not a formal requirement. Overall marks improved on the next piece of work submitted for both cohorts studied, and in a number of cases by an entire grade band. As Action Plan A was a requirement, engagement with the exercise was higher than with Action Plan B. Unsurprisingly, students who reported not engaging with their feedback did not see an improvement in their mark. Overall, the students evaluated the exercise as a positive experience and reported a greater understanding of how feedback can be used in other assessments. The pilot study revealed that although different approaches to feedforward were utilised, the overall improvement in marks was observed in both cohorts. References Angelo, T. and Cross, P. (1993) Classroom Assessment Techniques: A handbook for college teachers. San Francisco: Jossey Bass. Ferrell, G. and Gray, L. (2016) Feedback and feed forward: Using technology to support students' progression over time. Available at: https://www.jisc.ac.uk/guides/feedback-and-feed-forward (Accessed: 12 March 2018). Higher Education Academy (2012) Feedback Toolkit: 10 feedback resources for your students. Available at: https://www.heacademy.ac.uk/system/files/resources/10_feedback_resources_for_your_students2.pdf (Accessed: 12 March 2018). Richardson, J. (2005) 'Instruments for obtaining student feedback: a review of the literature', Assessment & Evaluation in Higher Education, 30(4), pp. 387-415.

Research Evaluation Presentations Session Chair: Jess Evans

Meeting Room 5

‘Feedback interpreters’: The role of Learning Development professionals in overcoming barriers to university students’ feedback recipience
Speakers: Karen Gravett, University of Surrey; Naomi Winstone, University of Surrey, UK
Understanding how students engage with feedback is a critical area for higher education professionals. One of the difficulties faced by those wishing to better understand student engagement with feedback is the ‘hidden recipience’ of feedback: once students access the feedback that they have been given, we know very little about what they subsequently do with it. This is problematic because we then have no insight into the difficulties and challenges students face when trying to put feedback into practice. To date, the literature has not represented the views of a core group of professionals who have strong insight into the difficulties students face when attempting to implement feedback: student learning development professionals. Until now, the role of learning development staff within this feedback landscape has not been considered, despite these individuals providing valuable support to students in interpreting and implementing feedback. Through semi-structured interviews with nine learning developers working in a UK university, we explored their insights into the barriers students confront when engaging with feedback. Our research reveals that, while many challenges do exist for staff and students in the context of assessment feedback, learning development professionals are able to provide a meaningful source of guidance, and are able to promote students’ development through constructive, dialogic, student-teacher interactions. By working with students to interpret, discuss and implement feedback, learning development professionals can be viewed as occupying a multiplicity of identities: interpreter, coach, intermediary, listener, and source of feedback dialogue.
This presentation will examine these identities and explore how this critical area of support enables students to achieve at university. Hitherto these interactions have not been fully explored, yet they provide a powerful depiction of the hidden process of feedback recipience, as well as of the complex learning environment experienced by today’s higher education students.



References Carless, D. (2006) ‘Differing perceptions in the feedback process’, Studies in Higher Education, 31(2), pp. 219-233. Evans, C. (2013) ‘Making Sense of Assessment Feedback in Higher Education’, Review of Educational Research, 83(1), pp. 70-120. Nash, R. and Winstone, N. (2017) ‘Responsibility-Sharing in the Giving and Receiving of Assessment Feedback’, Frontiers in Psychology, 8:1519. doi: 10.3389/fpsyg.2017.01519. Nicol, D. (2010) ‘From monologue to dialogue: improving written feedback processes in mass higher education’, Assessment and Evaluation in Higher Education, 35(5), pp. 501–517. doi: 10.1080/02602931003786559. Shield, S. (2015) ‘“My Work is Bleeding”: Exploring Students’ Emotional Responses to First-year Assignment Feedback’, Teaching in Higher Education, 20(6), pp. 1-11. doi: 10.1080/13562517.2015.1052786. Winstone, N.E., Nash, R.A., Rowntree, J. and Parker, M. (2017) ‘“It’d be useful, but I wouldn’t use it”: Barriers to university students’ feedback seeking and recipience’, Studies in Higher Education, 42(11), pp. 2026-2041. doi: 10.1080/03075079.2015.1130032.
Investigating Chinese Students’ Perceptions of and Responses to Teacher Feedback: Multiple Case Studies in a UK University
Speaker: Fangfei Li, University of Bath, UK
With increasing attention being drawn in recent studies to the agency of students in the feedback process, researchers have reconceptualised the notion of feedback, in relation to its efficacy in scaffolding the dialogical learning process, from ‘unilateral’ (i.e. information is transmitted from the teacher to the student) to ‘multilateral’ (i.e. students subjectively construct feedback information and seek to inform their own judgements by drawing on information from various others) (Nicol, 2010; Boud and Molloy, 2013).
Perspectives from students on their engagement with the feedback process, in terms of how they perceive and respond to feedback, are important to the development of teacher feedback but are comparatively under-researched (Carless, 2006). More particularly, there is as yet very little research on how Chinese overseas students, who come from a feedback-sparse undergraduate educational background, engage with teacher feedback in the context of UK higher education. The study reported in this presentation therefore aimed to investigate and shed new light on how Chinese postgraduate students perceive and engage with teacher feedback in a UK university. The research questions which guided this study were:
1. How do Chinese students perceive teacher feedback in UK higher education?
2. How do they respond to teacher feedback in UK higher education?

A qualitative case study, employing semi-structured, stimulated-recall and retrospective interviews, was conducted with five Chinese postgraduate students. Data collection covered two phases – the pre-sessional language programme and the first term of their Master’s degree programme. The data were analysed thematically, and findings revealed that participants perceived teacher feedback as having socio-affective, cognitive, interactive and experiential dimensions. Findings indicated that participants’ varied perceptions of teacher feedback reflected their different understandings of the purpose of feedback. On coming to the UK university, some participants viewed feedback as a kind of explicit instruction, whereas others tried to relate feedback to the long-term development of their learning. What is more, their responses to teacher feedback were manifested in a range of ways, including accepting and using the feedback with critical consideration, rejecting the feedback for their own reasons, and interpreting the feedback to align it with their own judgements, beliefs or preferences. Findings showed that students constructed feedback information in various ways: by discussing the feedback with others (e.g. peers and teachers), by drawing on learning materials, and by connecting it with previous understanding shaped by their prior learning experience in China. The significance of the study is that it will enable university teachers in the UK to develop a comprehensive understanding of how students may interpret and respond to the feedback they provide, as well as to gain insights into how to prepare Chinese overseas students to engage effectively with feedback in UK higher education.



References Boud, D. and Molloy, E. (2013) ‘Rethinking Models of Feedback for Learning: the Challenge of Design’, Assessment & Evaluation in Higher Education, 38(6), pp. 698-712. Carless, D. R. (2006) ‘Differing Perceptions in the Feedback Process’, Studies in Higher Education, 31(2), pp. 219-233. Nicol, D. (2010) ‘From Monologue to Dialogue: Improving Written Feedback Processes in Mass Higher Education’, Assessment & Evaluation in Higher Education, 35(5), pp. 501–517.
Action on Feedback
Speakers: Teresa McConlogue, University College London; Jenny Marie, University College London; Clare Goudy, University College London, UK
This presentation reports on an evaluation of an initiative to improve teacher and student understanding of good quality feedback. The Action on Feedback initiative is part of a five-year project to review assessment and feedback at UCL, a large research-intensive university. We identified feedback as a priority issue from student satisfaction surveys, which indicated dissatisfaction with the timeliness, quantity and quality of feedback. Drawing on recent research and reconceptualisations of feedback (Boud and Molloy, 2013; Nicol et al., 2014; Xu and Carless, 2017), we aimed to shift teachers from thinking about feedback as essentially transmissive (Sadler, 2010) to understanding how feedback can develop students’ evaluative judgment (Tai et al., 2018; Boud and Molloy, 2013) and support students to become self-regulated learners (Nicol et al., 2014). We also wanted to address the lack of consistency in the quantity and quality of teacher feedback by providing professional development for teachers across the institution. This is particularly important for large programmes with very large teaching teams, and also for inter-disciplinary and cross-faculty programmes.
We have worked to improve feedback by developing resources (especially the Quick Guides to Assessment and Feedback, https://www.ucl.ac.uk/teaching-learning/teaching-resources/teaching-toolkits/assessment-feedback) and by designing a two-hour interactive workshop for academics, linking current theoretical work on feedback with ideas for practical implementation. The workshop was offered across the institution with a target of 500 participants in the first year, and a further 500 in the following year. We anticipated that this ‘saturation Continuing Professional Development’ would ensure that teaching teams developed a common understanding of good quality feedback, and that this understanding would be shared across faculties. The response to the workshop this year has been excellent and we are on course to meet or exceed our target of 500 participants. The workshop is evaluated through an immediate short questionnaire; one question asks participants what changes they intend to make. Two actions are suggested. The first is that teachers carry out peer review of feedback within their programme team: a case study of how to carry out peer review of feedback is presented in the workshop, and participants are encouraged to go back to their teams and collaborate with colleagues to review feedback on their programmes. The second is that teachers develop students’ understanding of feedback and hone their evaluative judgment through the use of exemplars and dialogic feedback (Carless and Chan, 2017). Appropriate activities are presented in the workshop (e.g. Guided Marking, https://www.ucl.ac.uk/teaching-learning/teaching-resources/teaching-toolkits/assessment-feedback), and participants are encouraged to embed similar activities in their programmes. In this presentation, the evaluation of the impact of the Action on Feedback initiative will be reported through analysis of initial evaluations of the workshop and a follow-up impact questionnaire.
We will report on whether participants implemented the two suggested actions, on lessons learned, and on our plans for the second year of the initiative.
References Boud, D. and Molloy, E. (2013) ‘Rethinking models of feedback for learning: the challenge of design’, Assessment & Evaluation in Higher Education, 38(6), pp. 698-712. Carless, D. and Chan, K.K.H. (2017) ‘Managing dialogic use of exemplars’, Assessment & Evaluation in Higher Education, 42(6), pp. 930-941. Nicol, D., Thomson, A. and Breslin, C. (2014) ‘Rethinking feedback practices in higher education: a peer review perspective’, Assessment & Evaluation in Higher Education, 39(1), pp. 102-122. Sadler, D. R. (2010) ‘Beyond Feedback: Developing Student Capability in Complex Appraisal’, Assessment & Evaluation in Higher Education, 35(5), pp. 535–550.



10

Evaluation or Research Presentations Session Chair: Dr Peter Holgate

Meeting Room 7

The Cinderella of UK assessment? Feedback to students on re-assessments
Speakers: Marie Stowell, University of Worcester, Liverpool John Moores University; Harvey Woolf, ex University of Wolverhampton, UK
The recent publication of Stefan Sellbjer’s (2018) ‘Have you read my comments?’ was a stark reminder of how little has been written about re-assessment in general, and about feedback practices relating to re-assessment in particular. Published material on re-assessment concentrates on processes and procedures, purposes and/or outcomes. In large part this is because re-assessment takes place outside the mainstream of institutions’ assessment schedules, without formalised processes for managing student preparation and feedback. A further reason for the Cinderella status of re-assessment is that only a minority of students have to be re-assessed. It is, however, a sizeable minority: nearly 25% of all the students who progressed to Level 5 in the 2016 NUCCAT-SACWG research on progression from year one had to be re-assessed in order to progress from Level 4 to Level 5. This under-represents the total number who were re-assessed, as there would have been students who took Level 4 re-assessments but did not qualify for progression. We do not have hard data on the volume of re-assessment at Levels 5 and 6; however, anecdotal evidence suggests that re-assessment remains at a significant level even when students are studying at these levels. Given the overall number of students involved in re-assessment and the institutional resource invested in the process, we might consider how learning through effective feedback can be given greater importance. Over the past three years NUCCAT and SACWG have been engaged in a project to explore re-assessment practices in UK higher education.
The latest phase of the project is designed to understand the extent to which the management of initial assessments varies from that of re-assessments in terms of institutional requirements and practice. One element of this is how feedback is provided to students, not least in relation to the use of capped grades for re-assessments. The presentation will examine the outcomes of the research, highlighting the context in which feedback on re-assessments operates, detailing the variations in the ways in which re-assessment feedback is given compared with feedback on initial assessments, and considering the implications for how feedback on re-assessments can be enhanced.
References Sellbjer, S. (2018) ‘“Have you read my comments? It is not noticeable. Change!” An analysis of feedback given to students who have failed examinations’, Assessment & Evaluation in Higher Education, 43(2), pp. 163-174. doi: 10.1080/02602938.2017.1310801. Pell, G., Boursicot, K. and Roberts, T. (2009) ‘The Trouble with Resits …’, Assessment & Evaluation in Higher Education, 34(2), pp. 243-251. Ricketts, C. (2010) ‘A New Look at Resits: Are They Simply a Second Chance?’, Assessment & Evaluation in Higher Education, 35(4), pp. 351-356. Proud, S. (2015) ‘Resits in Higher Education: Merely a Bar to Jump over, or Do They Give a Pedagogical “Leg Up”?’, Assessment & Evaluation in Higher Education, 40(5), pp. 681-697. The Northern Universities Consortium (NUCCAT) (2016) To what extent do re-assessment, compensation and trailing support student success? The first report of the NUCCAT-SACWG Project on the honours degree outcomes of students progressing after initial failure at Level 4. Available at: http://www.nuc.ac.uk/wp-content/uploads/2017/05/Final-NUCCATSACWGproject-report.pdf (Accessed: 14 March 2018).
A critique of an innovative student-centred approach to feedback: evaluating alternatives in a high-risk policy environment
Speakers: Judy Cohen, University of Kent; Catherine Robinson, University of Kent, UK
This paper evaluates an approach to developing dialogic feedback through team-based learning (TBL). The aim was to address student underperformance via a collaborative and student-centred learning space (Cohen & Robinson, 2017). While this initial study showed improvements in student



performance, satisfaction with TBL was mixed, and early indications are that these results will be confirmed by the second study. Additionally, staff reported concerns over teaching spaces and workload, while observing that overall student satisfaction with the module dropped slightly in response to the flipped mode of content delivery and the high levels of peer interaction. While student-centred approaches may be equated with ‘teaching excellence’ in the minds of teaching staff, our findings suggest that student-centredness and the push towards self-regulated learning are complex and not necessarily favoured by students, particularly those novice learners holding a didactic/reproductive approach to learning (Lee & Branch, 2017). In an era of market-driven provision of education, a focus on choice and cost may be expected to spotlight student ratings and value for money. However, this focus introduces a conflict for staff aiming to deliver a quality higher education experience while facing high expectations of accountability in the current HE environment. Staff may weigh up teaching innovations not only by their potential to enhance transformative learning, but also by student satisfaction with the innovation. This phenomenon of pedagogic frailty (Kinchin et al., 2016) is included in our evaluation, and we examine ways to provide higher education that is characterised by the acquisition of troublesome knowledge and that requires student effort over time (Land et al., 2010), while also addressing student satisfaction. Using a conceptual framework of teaching excellence (Cohen & Robinson, 2017), we provide an evaluation of the implementation of TBL for enhancing student engagement with and use of feedback. In particular, we evaluate student use of feedback in their transition from novice to expert learner, and we explore alternative ways of facilitating this developmental journey.
We reflect on the notion of pedagogic frailty (Kinchin et al., 2016) in relation to our own practice and the implications for trialling innovative teaching practices within a competitive Business School environment. Findings from our research project are discussed in relation to the methodologies used and the appropriate methods of implementing practices supporting student engagement and peer feedback. Participants will be able to formulate an action plan and evaluate their own innovative practice with a critical eye to pedagogic frailty and student voice.
References Cohen, J. and Robinson, C. (2017) ‘Enhancing Teaching Excellence through Team Based Learning’, Innovations in Education and Teaching International, pp. 1-10. doi: 10.1080/14703297.2017.1389290. Kinchin, I.M., Alpay, E., Curtis, K., Franklin, J., Rivers, C. and Winstone, N.E. (2016) ‘Charting the elements of pedagogic frailty’, Educational Research, 58(1), pp. 1–23. doi: 10.1080/00131881.2015.1129115. Land, R., Meyer, J.H.F. and Baillie, C. (2010) ‘Editors’ Preface’, in Threshold Concepts and Transformational Learning. Rotterdam: Sense Publishers. Lee, S.J. and Branch, R.M. (2017) ‘Students’ beliefs about teaching and learning and their perceptions of student-centred learning environments’, Innovations in Education and Teaching International. doi: 10.1080/14703297.2017.1285716.
Build a bridge and get over it – exploring the potential impact of feedback practices on Irish students’ engagement
Speakers: Angela Short, Dundalk Institute of Technology; Gerry Gallagher, Institute of Technology, Republic of Ireland
Student engagement (SE) has been defined as participation in educationally effective practices, both inside and outside the classroom, which lead to a range of measurable outcomes.
The SE concept “implies a series of conceptual commitments, teaching strategies and behavioural orientations” (Macfarlane & Tomlinson, 2017, p.7) that is key to learning gain and success in higher education. The salience of social relationships in building student engagement is no longer questioned; the depth and quality of the teacher-student relationship acts as a buffer for students in times of crisis. However, despite the accepted importance of interpersonal relationships in providing a gateway to learning, there is a paucity of research on the nature and impact of the teacher-student relationship in higher education (Hagenauer and Volet, 2014). Studies have confirmed that undergraduate students perceive their teachers as caring when they encourage and respond to questions and give good feedback (Teven, 2001). Good quality feedback, formative in nature and dialogic in delivery, which is intentionally integrated throughout the teaching process and focused on improving student learning, can bolster student confidence and motivation, provide guidance and



direction and, importantly, foster a stronger student-teacher relationship (Winstone and Nash, 2016). This, in turn, can promote learning and enhance the effectiveness of the feedback by increasing student engagement with the process (Yang and Carless, 2013). The Irish Survey of Student Engagement (ISSE) is a compulsory national survey of higher education students. Introduced in 2013 and the first system-wide survey of its type in Europe, the ISSE seeks to document Irish higher education students’ experience. Consisting primarily of fixed-choice closed questions, the survey ends by posing two open-ended questions:
1. What does your institution do best to engage students in learning?
2. What could your institution do to improve students’ engagement in learning?

Systematic analysis of the free-text response data collected from students in the first five years of the ISSE underlines the importance for students of the teacher-student relationship and of the perception that teachers care about them and, more importantly, about their learning. In addition to examining what students have to say about feedback in the ISSE, this paper will explore the potential of teacher feedback to foster the teacher-student relationship when underpinned by principles of good practice. Dialogic teacher-student feedback interactions focus primarily on the individual student and their learning and, thus, can reinforce the perceived caring and approachability of the teacher. The paper will further argue that, although relational feedback practices are challenged by the prevalence of large-group teaching in higher education today, these practices may be supported through the thoughtful use of appropriate digital technologies (Y1Feedback, 2016).
References Hagenauer, G. and Volet, S.E. (2014) ‘Teacher–student relationship at university: an important yet under-researched field’, Oxford Review of Education, 40(3), pp. 370-388. Macfarlane, B. and Tomlinson, M. (2017) ‘Critiques of Student Engagement’, Higher Education Policy, 30, pp. 5-21. Teven, J.J. (2001) ‘The relationships among teacher characteristics and perceived caring’, Communication Education, 50(2), pp. 159-169. Winstone, N. and Nash, R. (2016) The Developing Engagement with Feedback Toolkit (DEFT). Available at: https://www.heacademy.ac.uk/knowledge-hub/developing-engagement-feedback-toolkit-deft (Accessed: 14 March 2018). Yang, M. and Carless, D. (2013) ‘The Feedback Triangle and the Enhancement of Dialogic Feedback Processes’, Teaching in Higher Education, 18(3), pp. 285-297. Y1Feedback (2016) Technology-Enabled Feedback in the First Year: A Synthesis of the Literature. Available at: http://y1feedback.ie/wp-content/uploads/2016/04/SynthesisoftheLiterature2016.pdf (Accessed: 04 March 2018).

11

Round Table Presentations Session Chair: Dr Rita Headington

Meeting Room 11

Feedback, a shared responsibility
Speakers: Annelies Gilis, Karen Van Eylen, Elke Vanderstappen, KU Leuven, Belgium; Joke Vanhoudt, Study Advice Service, KU Leuven, Belgium; Jan Herpelinck, General Process Coordination, KU Leuven, Belgium
‘Feedback is one of the most powerful influences on learning and achievement’ (Hattie & Timperley, 2007). The core concept of KU Leuven’s education philosophy is students’ disciplinary future self (DFS): that part of students’ future identity they want to create by choosing a programme of study. Research indicates that programmes that address students’ DFS stimulate motivation and deeper learning. Good implementation is therefore essential: how can we stimulate students to reflect on who they are, where they stand and where they want to go? Until now, feedback has been rather fragmented: professors give feedback on students’ assignments; study counsellors give feedback on students’ study progress. To facilitate students’ learning and DFS, feedback should be implemented in



a more aligned way to empower students. To realise this change at institutional level, we launched a project with two foci:
- What does feedback look like when approached from different angles (curricula, students, career counselling)? What is the continuum of actors to involve?
- What does transformed feedback look like in practice? A pilot project will be launched in one faculty.

We will provide you with an insight into KU Leuven’s transformation of feedback and share our experiences.
References Hattie, J. & Timperley, H. (2007) ‘The Power of Feedback’, Review of Educational Research, 77(1), pp. 81–112. https://www.kuleuven.be/onderwijs/english/education/policy/vision/consideration
Dialogue and Dynamics: A time-efficient, high-impact model to integrate new ideas with current practice
Speaker: Lindsey Thompson, University of Reading, UK
There has been a proliferation of discussion related to student progress in higher education, reflected throughout the HEA, HESA and the TEF. Progress can be attributed to a variety of factors (Hattie and Timperley, 2007), one of the most significant being feedback, which is consistently the lowest-scoring question on the NSS (IPSOS Mori, 2006). The literature has identified a range of barriers to effective feedback in terms of dissemination and reception. A key concern of students is the consistency of feedback, their ability to interpret it and how to use it (Ajjawi and Boud, 2017; Winstone et al., 2017). Lecturers, by contrast, highlight student engagement as the most important barrier. In addition, they are universally concerned by the time devoted to detailed marking as student numbers continue to rise. Recent research has suggested that effective feedback takes the form of a shared dialogue between the giver and the receiver; this model has also been shown to be preferred by students (Mulliner and Tucker, 2017). The model proposed in this paper integrates current expertise with the newer demands for dialogue and the idea that feedback should be a dynamic process. Students receive individual reports and targeted ‘next steps’. Reports can be stored and accessed easily to support revision and further work. Feedback summaries enable self-evaluation, and cohort comparisons at the lecturer level. Finally, from a range of dynamic ‘next steps’, online or face-to-face dialogue is booked and recorded.
The model is illustrated using a simple Excel spreadsheet separated into various worksheets. Initial discussions with students and lecturers have been very positive, with 100% of Foundation year students saying they prefer the system over any of their current models. They report that they are more likely to read the feedback, understand what to do and use it in the future. Staff feel it will ease their workload while providing high-impact feedback that will drive student progress.
References Ajjawi, R. and Boud, D. (2017) ‘Research feedback dialogue: an interactional analysis approach’, Assessment and Evaluation in Higher Education, 22, pp. 252-265. Hattie, J. and Timperley, H. (2007) ‘The Power of Feedback’, Review of Educational Research, 77, pp. 81-112. Mulliner, E. and Tucker, M. (2017) ‘Feedback on feedback practice: perceptions of students and academics’, Assessment and Evaluation in Higher Education, 42, pp. 266-288. Winstone, N.E., Nash, R.A., Rowntree, J. and Parker, M. (2017) ‘“It’d be useful, but I wouldn’t use it”: barriers to university students’ feedback seeking and recipience’, Studies in Higher Education, 42(11), pp. 2026-2041.
Large student cohorts: the feedback challenge
Speakers: Jane Collings, University of Plymouth; Rebecca Turner, University of Plymouth, UK
At the heart of assessment is the feed-in, feedforward, feedback model (Brown, 2007). This is constantly emphasised in TLS staff sessions, including the workshop ‘Smarter Faster Feedback’. The



aim of the workshop is to improve student learning and reduce staff workload when delivering feedback. At the University there are many examples of good feedback practice, including the use of generic feedback, the Dental School delivering same-day oral feedback on all assessments, and the Law School conducting post-examination individual feedback sessions. Student evaluative feedback in such programmes is much improved. Web pages with feedback resources have been developed for staff, and a funded project on examination feedback produced a toolkit to assist feedback design (Sutton & Sellick, 2016). However, an outstanding challenge remains: to improve feedforward and feedback in programmes with large student cohorts. Fast turnaround times for generic feedback are usually cited as the solution. However, generic feedback is often met with dissatisfaction from students, who request specific feedback with points for improvement. We know we are not alone in facing this challenge. In this round table discussion, we will invite participants to share good practice and work collaboratively to develop solutions.
References Brown, S. (2007) ‘Feedback and Feed-Forward’, Centre for Biosciences Bulletin, 22, Autumn 2007. Available at: https://www.reading.ac.uk/web/files/EngageinFeedback/bulletin22_feed_forwards.pdf (Accessed: 14 March 2018). Sutton, C. and Sellick, J. (2016) Examination Toolkit. Plymouth: University of Plymouth.
A comparison of response to feedback given to undergraduates in two collaborative formative assessments: wiki and oral presentations
Speaker: Iain MacDonald, University of Cumbria, UK
This study is prompted by increasing opportunities for group formative assessment afforded by virtual learning environments. Two methods of computer-supported collaborative assessment were used by second year undergraduates – a ‘familiar’ MS PowerPoint presentation and a ‘novel’ wiki, a web communication and collaboration tool. Both were used in the formative assessment context.
Using grounded theory, student outcome measures were explored, including responses to the feedback given during the two assessments. An online survey and six in-depth student interviews provided data for the study. Findings demonstrated that all 32 students had previous experience of MS PowerPoint; the wiki, however, was new to them. Feedback on the MS PowerPoint presentation was provided verbally by the tutor; for the wikis, written feedback was provided together with peer review. Verbal feedback after presentations was seen as less useful, and was frequently not comprehended by students due to anxiety. For the wiki, peer review was valued by the majority of the students, and written feedback was useful because it allowed subsequent review. This study demonstrates that feedback can be delivered in alternative forms, taking into account the assessment chosen, and that this should be an important factor in deciding the overall approach to delivery of assessment.
References Parker, K. and Chao, J. (2007) ‘Wiki as a Teaching Tool’, Interdisciplinary Journal of Knowledge and Learning Objects, 3, pp. 57-72. Tanaka, A., Takehara, T. and Yamauchi, H. (2006) ‘Achievement goals in a presentation task: Performance expectancy, achievement goals, state anxiety, and task performance’, Learning and Individual Differences, 16(2), pp. 93-99.

Session 12: Research Evaluation Presentations
Session Chair: Professor Pete Boyd
Piccadilly Suite

Feedback, Social Justice, Dialogue and Continuous Assessment: how they can reinforce one another Speaker: Jan McArthur, Lancaster University, UK This paper builds on previous work on assessment for social justice (McArthur, 2016, 2018) to consider the particular qualities of feedback if one is committed to greater social justice within and through higher education. Drawing on the critical theory of Axel Honneth (2014) and the importance of mutual recognition to social justice, the paper explores the ways in which feedback should engender in students their own sense of control over their abilities and achievements. Such a position is developed from a radical interpretation of feedback as dialogue, which compels an understanding of the importance of students being in dialogue with their own learning in its social context (McArthur & Huxham, 2013). A second feature of this paper is the focus on continuous assessment. Data will be drawn from a multi-partner study being funded by the ESRC/HEFCE as part of the Centre for Global Higher Education. The ‘Understanding Student Knowledge and Agency’ project is a longitudinal study of Chemistry and Chemical Engineering students in the UK and South Africa. This paper will present data from the completed first year of the UK side of the study. Initial analysis suggests that these students make a strong connection between assessment and learning, and this in turn impacts on their feedback attitudes and experiences. Furthermore, a particular feature is that these students undertake a large amount of continuous assessment, in the form of class tests, along with lab reports and larger pieces of work. The challenge of how to give feedback in such cases will be explored. Typical approaches to feedback in these assessment-intensive contexts include the use of devices such as tick box sheets or generic feedback. But responses from the students interviewed for this project suggest that such approaches are unsatisfactory. 
Many of these students have already identified the link between assessment and learning that is at the heart of good feedback, but are let down by formulaic or partial offerings. However, the time demands on their lecturers to provide more detailed feedback in a system of continuous assessment cannot be overlooked. This paper will explore how the practical solution to this dilemma lies in the philosophical commitment to social justice, as already outlined. Strategies for embedding feedback as dialogue and for nurturing in students the capacity to evaluate their own work will be discussed. References Honneth, A. (2014) The I in We: Studies in the Theory of Recognition. Cambridge: Polity Press. McArthur, J. (2016) ‘Assessment for Social Justice: the role of assessment in achieving social justice’, Assessment and Evaluation in Higher Education, 41(7), pp.967-981. McArthur, J. (2018) Assessment for Social Justice. London: Bloomsbury. McArthur, J., & Huxham, M. (2013) ‘Feedback Unbound: From Master to Usher’, in Merry, S., Carless, D., Price, M. & Taras M. (eds.) Reconceptualising Feedback in Higher Education. London: Routledge. Chapter 10. Building a national resource for feedback improvement Speaker: David Boud, Deakin University/University of Technology Sydney, Australia If we are to improve feedback practices we need to move beyond general impressions of students and staff to understand what specifically is working well and what is not. This was the premise of a national teaching development project (funded by the Office of Learning and Teaching) designed to improve feedback practice across Australian universities, Feedback for Learning: Closing the Assessment Loop. The study recognised that feedback is not transmission of information by teachers to students, but a process involving many players that can occur prior to or following the submission of formal assessment tasks. The aim of a feedback process is to close the loop to improve students’ performance. 
The extensive literature on feedback in higher education has resulted in a surfeit of models, frameworks, principles and strategies with little guidance or research on what works in diverse contexts and how to choose amongst them. The project addressed this need through a large scale mixed method study (4514 students, 406 staff) to identify what strategies are reportedly

working, how they operate to be effective, and what conditions support those practices over time. In the project, feedback was defined as: “a process in which learners make sense of information about their performance and use it to enhance the quality of their work or learning strategies.” A novel feature was that students were asked to identify a situation in which they received particularly effective feedback during their current program of study. They were asked to elaborate on what the activity was and explain what they thought was effective about it and what enabled it to occur. The research team collated information about which were the most frequently mentioned units and undertook detailed case studies on a selection of them. The case studies drew on interviews with the key staff involved in each, a selection of student interviews and examination of the formal documentation setting out the feedback processes. The set of seven cases finally published on the project website was chosen to reflect a diversity of feedback practices and disciplines. Most importantly, the set emphasised practices that could be scaled up and used in large classes and with multiple tutors. The empirical data were also deployed to illuminate ideas from the literature and to generate a framework for effective feedback which identifies conditions for feedback success and discusses them in terms of three main categories: building capacity for feedback, designing for feedback and establishing a feedback culture. The presentation provides an overview of the major activities and findings of the project, introduces the main resources developed and discusses some of the challenges to be faced in feedback improvement. Full details of the project can be found at: www.feedbackforlearning.org References Boud, D., & Molloy, E. (2013) ‘Rethinking models of feedback for learning: the challenge of design’, Assessment & Evaluation in Higher Education, 38(6), pp. 698-712. Hattie, J., & Timperley, H. 
(2007) ‘The Power of Feedback’, Review of Educational Research, 77(1), pp. 81-112. Dynamic feedback – a response to a changing teaching environment Speaker: Martin Barker, University of Aberdeen, UK The teaching environment is evolving rapidly due to many factors, including increasing student diversity, escalating expectations from students, and a changing student-centred pedagogy. At the same time, emerging technologies have opened up new forms of dialogue between academics and their students. Part of the response to these challenges and opportunities is the use of dynamic feedback. This can include an increased emphasis on feedback that is formative and interactive, rather than purely summative and didactic. Various models of dynamic feedback are available, including closing the feedback loop (Pitts, 2003; Quality Assurance Agency for Higher Education, 2006) and assessment dialogues (Carless, 2006). The purpose of this presentation is to strengthen the case for dynamic feedback. I will use examples involving iterative feedback (Barker and Pinard, 2014) and experience of an online formative assessment tool (BluePulse2, e.g. Abertay TLE, 2016). My observations are that students appreciate a chance to clarify the feedback that they receive, and an opportunity to demonstrate that they have engaged with it. At the same time, their teachers are motivated by the knowledge that their feedback has been understood, or that they can clarify feedback that has been misunderstood. This communication can be instantaneous, so that course adjustments can be made in real time. So, students can become immediate beneficiaries of dynamic feedback, rather than simply providing feedback as a legacy for future students (who may in any case have different needs). The emphasis of the talk is on practical techniques that can be readily deployed in a changing teaching environment. 
References Abertay TLE (2016) ‘Reading your module’s pulse with Bluepulse2 – the tool to capture anonymous student feedback’, Abertay University Teaching and Learning Enhancement blog, 14 December. Available at: https://abertaytle.wordpress.com/2016/12/14/reading-your-modules-pulsewith-bluepulse-2-the-tool-to-capture-anonymous-student-feedback/ (Accessed: 14 March 2018). Barker, M. & Pinard, M. A. (2014) ‘Iterative feedback between tutor and student in coursework assessments’, Assessment & Evaluation in Higher Education, 39(8), pp. 899-915.

Carless, D. (2006) ‘Differing Perceptions in the Feedback Process’, Studies in Higher Education, 31(2), pp. 219–233. Pitts, S. E. (2003) ‘“Testing, Testing…”: How Do Students Use Written Feedback?’, Active Learning in Higher Education, 6(3), pp. 218–229. Quality Assurance Agency for Higher Education (2006) Code of Practice for the Assurance of Academic Quality and Standards in Higher Education, Section 6: Assessment of Students. Gloucester: The Quality Assurance Agency for Higher Education.

Session 13: Research Evaluation Presentations
Session Chair: Dr Nicola Reimann
Meeting Room 3

‘Who Am I?’ Exploration of the healthcare learner ‘self’ within feedback situations using an interpretive phenomenological approach Speaker: Sara Eastburn, University of Huddersfield, UK This paper presents the findings of doctoral-level research. This empirical research explored learning through the lens of situated learning theory and utilised this theoretical position to investigate the concept of feedback within healthcare education. The research is framed within the work of Bourdieu (1977), who advocates the socially and culturally constructed nature of learning. Furthermore, it draws on the work of Bandura (1986), who situates learning within the processes of social observation, making a case that the environment, the individual and the social behaviours of those involved in a learning situation affect outcomes. From these theoretical foundations, this research explores learning from feedback experiences for healthcare students through the particular lens of communities of practice. Adopting an interpretive phenomenological approach, this paper explores the lived experience of feedback through the “voice” of the students engaged in feedback situations. Belonging to a community of practice with a common purpose appears to be a challenge for healthcare learners. In particular, though not exclusively, this paper explores the experiences of Dawn, a pre-registration healthcare student, who highlights the tensions and ill fit between her own learning and the purpose of healthcare delivery. This paper presents insight into the emotional burden of learning from feedback and it highlights the “unengaged alignment” (Wenger, 1998) of Dawn within a community of practice and the distance that she perceives between herself and other members of the community. 
Dawn’s use of language will be examined to understand her perception of “fit” and “worthiness” within a healthcare learning community of practice and the impact that this appears to have on how she uses feedback to support her learning. To complement the learner’s perception of these challenges, data gathered from university and practice-based educators will also be drawn on. The learner “self” will be considered in relation to learner identity and feedback literacy (Sutton, 2012; Winstone et al., 2016, 2017). This paper will present suggestions of how the learner “self” might be better enabled to learn from feedback experiences, such as the role that newly qualified educators might adopt in supporting learner engagement within a community of practice and how a questioning approach to feedback might support a learner in developing and taking ownership of their learning from feedback. References Bandura, A. (1986) Social Foundations of Thought and Action: A Social Cognitive Theory. New Jersey: Prentice-Hall. Bourdieu, P. (1977) Outline of a Theory of Practice. Cambridge: Cambridge University Press. Sutton, P. (2012) ‘Conceptualizing feedback literacy: knowing, being and acting’, Innovations in Education and Teaching International, 49(1), pp. 31-40. Wenger, E. (1998) Communities of Practice: Learning, Meaning and Identity. New York: Cambridge University Press. Winstone, N. E., Nash, R. A., Parker, M., & Rowntree, J. (2017) ‘Supporting Learners' Agentic Engagement With Feedback: A Systematic Review and a Taxonomy of Recipience Processes’, Educational Psychologist, 52(1), pp. 17-37.

Winstone, N. E., Nash, R. A., Rowntree, J., & Parker, M. (2016) ‘“It'd be useful, but I wouldn't use it”: barriers to university students’ feedback seeking and recipience’, Studies in Higher Education, 42(11), pp. 2026-2041. Students’ feedback has two faces Speakers: Serafina Pastore, Giuseppe Crescenzo and Michele Chiusano, University of Bari, Italy; Vincenzo Campobasso, student at the University of Bari, Italy Over the years, there have been remarkable efforts to outline a different kind of assessment that is more sustainable and useful, in order to guarantee the active participation of students as ‘full members of the academic community’ (Jungblut, Vukasovic and Stensaker, 2015). Therefore, reform packages, studies and projects are moving towards the revision of traditional assessment practice, the identification of alternative forms of assessment, and the analysis of the conceptions that teachers and students have of assessment. In this framework, student evaluative surveys across Europe have become the largest and most frequently used data sources for quality assurance in higher education (Klemencic and Chirikov, 2015). However, in Italy, students’ compliant behaviour in completing end-of-module questionnaires, together with a strong sense of disaffection, poses significant challenges to the quality assurance system. In order to understand what hindrances, conceptions and representations students have of the quality assurance system, a round of informal hearings was conducted during the 2017 autumn semester with a sample of student representatives drawn from the 15 departments of the University of Bari. Despite the methodological limitations of this study, data analysis confirms the deep misconceptions students have about assessment and quality assurance. References Jungblut, J., Vukasovic, M. and Stensaker, B. (2015) ‘Student perspective on quality in higher education’, European Journal of Higher Education, 5(2), pp. 157-180. 
Klemenčič, M., & Chirikov, I. (2015) ‘On the use of student surveys’, in Pricopie, R., Scott, P., Salmi, J. and Curaj, A. (eds.) Future of Higher Education in Europe. Volume I and Volume II. Dordrecht: Springer. Feedback and Feedforward in capability-based assessment Speaker: Panos Vlachopoulos, Macquarie University, Australia Medical Schools across Australia, but also globally, are looking into designing, developing and implementing a programmatic assessment approach to their medical curricula. The use of an electronic portfolio (e-portfolio) tool, the provision of longitudinal tutorial support and credible feedback are often the key design challenges for programmatic assessment. At Macquarie University the Faculty of Medicine and Health Sciences designed, developed and implemented a pedagogically sound and technologically workable and sustainable solution to programmatic assessment using an e-portfolio tool for its newly established Bachelor of Clinical Science. The Bachelor of Clinical Science is a 2-year accelerated undergraduate program. The curriculum includes a professional practice stream integrated across the entire program. It was in this stream that the students were introduced to the concept and pedagogy of an e-portfolio as a way to help them join the dots between learning and professional capabilities across all the modules of study in their program, but also from other aspects of their lives (work or volunteering experiences). The presentation will discuss the results of the first two years of implementation. Data were drawn from focus groups with a total of 62 students. The focus of the presentation will be on the qualitative differences in the feedback students received from the e-portfolio mentors and the way it was enacted to support professional development skills. References Van der Vleuten, C.P.M., Schuwirth, L.W.T., Driessen, E.W., Dijkstra, J., Tigelaar, D., Baartman, L.K.J., and Van Tartwijk, J. 
(2012) ‘A model for programmatic assessment fit for purpose’, Medical Teacher, 34, pp. 205–214. Burke, D. and Pieterick, J. (2010) Giving Students Effective Written Feedback. Berkshire: Open University Press McGraw-Hill Education. Carless, D. (2006) ‘Differing perceptions in the feedback process’, Studies in Higher Education, 31(2), pp. 219–233.

Session 14: Evaluation or Research Presentations
Session Chair: Professor Sally Jordan
Meeting Room 5

Feedback – what students want Speaker: Susanne Voelkel, University of Liverpool, UK Numerous publications discuss the best way to provide feedback to students (e.g. Nicol and Macfarlane-Dick, 2006). We know that feedback should be timely, that it should clarify what is expected of students, and that it should tell them what they need to do to improve their work. However, we still seem to be unsure how to put this advice into practice in the context of our own disciplines. Although many staff working hours are spent writing feedback on assignments, every year the NSS results and other student evaluations give an indication that students are not happy with the feedback we provide. This study, therefore, aimed to improve the quality and consistency of written feedback to students in the School of Life Sciences (University of Liverpool). The study was funded by a University of Liverpool Learning and Teaching Fellowship (awarded to SV) and ethics approval was granted by the University’s ethics committee. As part of the project, we wanted to identify which (if any) attributes are common to ‘good’ feedback, i.e. feedback that students find useful. We focused on written assignments from two large year 2 and year 3 UG modules and asked students to send us their feedback if they thought it was helpful. In addition, we asked them to complete a short questionnaire about the feedback. We received 26 and 27 responses from year 2 and year 3 students, respectively. The type and depth of the feedback comments were then analysed according to Glover and Brown (2006). Surprisingly, we found that ‘good’ feedback had relatively few common characteristics. For example, the quantity of the ‘good’ feedback varied, as did the percentage of motivational comments (praise). The questionnaires revealed two strong themes, though: 87% of respondents said that good feedback is ‘specific’ and 96% said it is ‘useful for future assignments’. 
We then performed semi-structured interviews with 7 students, focusing on students’ perception of feedback and allowing us to explore the themes uncovered from the questionnaires. Finally, we conducted a nominal focus group with 6 students (Varga-Atkins et al., 2015). The focus group covered ‘what does your ideal feedback look like?’ and ‘your top wish-list for written feedback’. Interviews and focus group were recorded and transcribed. Transcripts were analysed using thematic analysis (Braun and Clarke, 2006). The results clearly show students’ top three priorities for ‘good feedback’: 1. justification of the marks; 2. how to improve; 3. positive reinforcement. Overall, feedback should be detailed and specific as well as honest and constructive. The results of the study allowed us to design a staff guidance booklet with concrete feedback examples and straightforward instructions that are all the more powerful because they come directly from ‘the students’ mouth’.

References Boud, D. & Falchikov, N. (2006) ‘Aligning Assessment with long-term learning’, Assessment & Evaluation in Higher Education, 31(4), pp. 399–413. Gourlay, L. (2009) ‘Threshold practices: becoming a student through academic literacies’, London Review of Education, 7(2), pp. 181–192. Hounsell, D. (2007) ‘Towards more sustainable feedback to students’, in Falchikov, N. & Boud, D. (eds.) Rethinking Assessment for Higher Education. London: Routledge. pp. 101-113. Richardson, S. (2000) ‘Students’ conditioned response to teachers’ response: portfolio proponents, take note!’, Assessing Writing, 7(2), pp. 117-141. Sadler, D.R. (2010) ‘Beyond feedback: developing students’ ability in complex appraisal’, Assessment and Evaluation in Higher Education, 35(5), pp. 535-550. Sambell, K., McDowell, L. & Sambell, A. (2006) ‘Supporting diverse students: developing learner autonomy via assessment’, in Bryan, C. & Clegg, S. (eds.) Innovative Assessment in Higher Education. New York: Routledge. pp. 158-167.

Developing as a peer reviewer: Enhancing students’ graduate attributes Speaker: Rachel Simpson, Durham University; Catherine Reading, Durham University, UK This presentation will explore students’ use of peer review as a formative assessment strategy in Higher Education. It aims to align with research in this area (Cho & MacArthur, 2011; Nicol, Thomson & Breslin, 2014), by moving beyond the benefits of receiving feedback and towards exploring the value of producing and giving feedback to peers. With a focus on sustainable learning, a key question for consideration is: Can developing as a peer reviewer enhance students’ graduate attributes? Against the backdrop of students’ feedback dissatisfaction in Higher Education, an action research project was undertaken in 2016-17, involving university tutors and seventy-four first-year students on a BA Education university course in North East England. The project aimed for the students to develop the skill of producing evaluative judgements about both their peers’ academic work and aspects of teaching practice during a professional work placement. Students’ written records and reflections were analysed, alongside outcomes of semi-structured interviews conducted with a focus group of six students. Findings and analyses of individuals’ responses to a series of scaffolded peer review tasks indicated that reviewing peers’ work may contribute towards the development of some graduate or professional attributes. Primarily, when producing feedback, students formed evaluative judgements based on their understanding of quality, as discussed by Sadler (2010), and then used these judgements to improve their own academic work and professional teaching skills. Secondly, the professional attribute of being able to communicate feedback that was timely, relevant, accurate and understood by the recipient, was also evident. There were also strong indications of increases in students’ resilience when receiving feedback. 
Such attributes could be considered paramount to these students’ future careers as active members of professional learning communities (Stoll & Louis, 2007). However, in alignment with increasing student diversity in Higher Education (Bryson, 2014), students demonstrated different responses to becoming peer reviewers. Individuals faced a range of challenges as they became peer reviewers, primarily a lack of subject knowledge, skills and confidence to produce and communicate accurate reviews. There were also indications that proof of its success was needed before further investment would be made. Although not conclusive, this study provides insight into the introduction of a student-centred feedback system, its potential to achieve some of the graduate aims of universities, and the diversity of developments and challenges experienced by students as peer reviewers. Implications for future research and practice will be considered, including the balance between student autonomy and tutor input. Examples of the peer review tasks will be given during the presentation, to demonstrate some of the eight principles for peer review design identified by Nicol (2014), and the scaffolded approach which gradually built up students’ confidence and skills to become peer reviewers. Alongside this, the significance of involving the students in understanding the process of peer review from the beginning of their course will be discussed, aiming for active and informed participants in this formative feedback process from the outset. References Bryson, C. (2014) Understanding and Developing Student Engagement. Oxon: Routledge. Cho, K. & MacArthur, C. (2011) ‘Learning by reviewing’, Journal of Educational Psychology, 103, pp. 73-84. Nicol, D. (2014) ‘Guiding principles for peer review: Unlocking learners’ evaluative skills’, in Kreber, C., Anderson, C., Entwistle, N. & McArthur, J. (eds.) Advances and Innovations in University Assessment and Feedback. Edinburgh: Edinburgh University Press. 
pp. 197-224. Nicol, D., Thomson, A., & Breslin, C. (2014) ‘Rethinking feedback in higher education: A peer review perspective’, Assessment and Evaluation in Higher Education, 39, pp.102-122. Sadler, R. (2010) ‘Beyond feedback: Developing student capability in complex appraisal’, Assessment & Evaluation in Higher Education, 35, pp.535–550. Stoll, L., & Louis, K. (2007) Professional Learning Communities: Divergence, depth and dilemmas. England: McGraw-Hill. Does screencast feedback improve student engagement in their learning? Speaker: Subhi Ashour, The University of Buckingham, UK Good quality feedback, delivered in a timely manner, is considered an essential component of learning and teaching in higher education (Chickering & Gamson, 1989; Reitbauer et al., 2013). Such

feedback is expected to enhance and promote learning. However, concerns have been raised about the value of feedback following noticeably low levels of students’ engagement with feedback reported in diverse contexts (Ali et al., 2017; Bailey & Garner, 2010; Crisp, 2007). These concerns are shared in my teaching context. To address these concerns, an action research project was designed to explore whether a multimedia screencast-based approach to feedback is likely to improve student engagement with feedback in particular and their learning in general. The project was focused on foundation level students from two main streams: Business and Law. An intervention was introduced in which a randomly selected group of students received screencast feedback instead of the traditional written feedback for a whole academic term. Semi-structured interviews were conducted with the students who received the new type of feedback in order to understand their views and perceptions of screencast feedback and to determine whether, from their own perspectives, their engagement in their learning had improved. The findings from the interviews reveal a predominantly positive attitude towards screencast feedback compared to previous types of feedback. Participants also identified positive aspects of screencast feedback, such as reassurance that their assessed work does matter to their tutor and being able to understand feedback better. Unlike other types of feedback the participants experienced, screencast feedback made it possible for the participants to act upon the suggestions and recommendations made in the feedback, such as avoiding a mistake in a future assignment or improving an aspect of their work as suggested in the feedback. Despite the advantages, considerations need to be made about the sustainability of this type of feedback. 
Tutors involved in this project felt that when under pressure, they were more likely to revert to traditional types of feedback in order to avoid wasting time because of technical issues associated with screencast feedback. Some student participants also made the point that they were worried about losing the face-to-face opportunity of feedback if screencast feedback was to be rolled out. References Ali, N., Ahmed, L. and Rose, S. (2017) ‘Identifying predictors of students' perception of and engagement with assessment feedback’, Active Learning in Higher Education. doi: 10.1177/1469787417735609. Bailey, R. and Garner, M. (2010) ‘Is the feedback in higher education assessment worth the paper it is written on? Teachers' reflections on their practices’, Teaching in Higher Education, 15(2), pp. 187-198. Chickering, A.W. and Gamson, Z.F. (1989) ‘Seven principles for good practice in undergraduate education’, Biochemical Education, 17(3), pp. 140-141. Crisp, B.R. (2007) ‘Is it worth the effort? How feedback influences students’ subsequent submission of assessable work’, Assessment & Evaluation in Higher Education, 32(5), pp. 571–581. Reitbauer, M., Schumm Fauster, J., Mercer, S., & Campbell, N. (2013) Feedback Matters: Current Feedback Practices in the EFL Classroom. Frankfurt am Main: Peter Lang.

Session 15: Research Evaluation Presentations
Session Chair: Dr Amanda Chapman
Meeting Room 7

Feedback literacy in online learning environments: Engaging students with feedback Speakers: Teresa Guasch, Anna Espasa and Rosa M. Mayordomo, Open University of Catalonia, Catalonia We share the view that feedback is only feedback if students take some action on it (Boud and Molloy, 2013; Carless, 2015). However, research shows that teachers invest a lot of time and effort in providing feedback, yet students do not know how to implement it or do not know what to do with it (O’Donovan, Rust and Price, 2016; Sutton, 2012; Winstone et al., 2017). In previous studies we have provided evidence on the characteristics of feedback in online learning environments, where communication is mainly written and asynchronous, as well as on the effect of feedback type on learning. More recently, we have reported on the factors that influence the use of feedback medium (audio, video and written) and its relation to teachers’ workload. In this sense, we have evidence on the feedback process necessary for the design of feedback in this specific environment. But we

have identified that, in general, students do not know what they should do with the feedback they receive. The literature shows that there are different reasons for this, i.e. unclear feedback, different student expectations, students untrained in self-regulation of learning, the moment feedback is provided, misconceptions of feedback, or the level of students’ feedback literacy. This 2-year project seeks to answer: Why do students not use, implement or make decisions about the feedback received in online learning environments? What strategies would engage students to use, implement and make decisions about the feedback received, in short, to be more active with it? What strategies in massive contexts are sustainable to promote feedback literacy? To what extent do different strategies of feedback literacy influence student engagement? And to what degree? References Boud, D. and Molloy, E. (2013) ‘Rethinking models of feedback for learning: the challenge of design’, Assessment & Evaluation in Higher Education, 38(6), pp. 698-712. doi: 10.1080/02602938.2012.691462. Carless, D. (2015) Excellence in University Assessment: Learning from Award-Winning Practice. London: Routledge. O’Donovan, B., Rust, C. and Price, M. (2016) ‘A scholarly approach to solving the feedback dilemma in practice’, Assessment & Evaluation in Higher Education, 41(6), pp. 938–949. doi: 10.1080/02602938.2015.1052774. Sutton, P. (2012) ‘Conceptualising feedback literacy: knowing, being, and acting’, Innovations in Education and Teaching International, 49(1), pp. 31-40. doi: 10.1080/14703297.2012.647781. Winstone, N.E., Nash, R.A., Parker, M. and Rowntree, J. (2017) ‘Supporting Learners’ Agentic Engagement With Feedback: A Systematic Review and a Taxonomy of Recipience Processes’, Educational Psychologist, 52(1), pp. 17–37. doi: 10.1080/00461520.2016.1207538. 
Teaching staff’s views about e-assessment with e-authentication Speakers: Alexandra Okada, Denise Whitelock, Ingrid Noguera, Jose Janssen, Tarja Ladonlahti, Anna Rozeva, Lyubka Alexieva, Serpil Kocdar, Ana-Elena Guerrero-Roldán, The Open University UK Checking the identity of students and the authorship of their online submissions is a major concern in Higher Education due to the increasing amount of plagiarism and cheating using the Internet. Currently, various approaches to identity and authorship verification in e-assessment are being investigated. However, the literature on the effects of e-authentication systems on teaching staff is very limited, because their use is not yet widespread practice. A considerable gap remains in understanding teaching staff’s views, concerns and practices regarding the use of e-authentication instruments and how these affect their trust in e-assessment. To address this gap, this study presents cross-national data on distance education teaching staff’s views on e-assessment using authentication and authorship verification technology. This investigation focuses on the implementation and use of TeSLA, an Adaptive Trust-based e-authentication System for e-Assessment, currently being developed by an EU-funded project involving 18 partners across 13 countries. TeSLA combines three types of instruments to enable reliable e-assessment: biometric, textual analysis and security. The biometric instruments comprise facial recognition, voice recognition and keystroke dynamics. The textual analysis instruments include anti-plagiarism and forensic analysis (writing style) to verify the authorship of written documents. The security instruments comprise digital signature and time-stamp. This qualitative study examines the opinions and concerns of teaching staff who used the TeSLA instruments. 
Data include pre- and post-questionnaires and focus group sessions in six countries: the UK, Catalonia, the Netherlands, Bulgaria, Finland and Turkey. The findings revealed some technological and pedagogical issues related to accessibility, security, privacy, e-assessment design and feedback. References Apampa, K., Wills, G. and Argles, D. (2010) ‘User Security Issues in Summative e-Assessment’, International Journal of Digital Society (IJDS). Ardid, M., Gomez-Tejedor, J., Meseguer-Duenas, J., Riera, J. and Vidaurre, A. (2015) ‘Online exams for blended assessment: Study of different application methodologies’, Computers & Education, 81, pp. 296-303.



Baneres, D., Baró, X., Guerrero-Roldán, A. and Rodríguez, M. (2016) ‘Adaptive e-assessment system: A general approach’, International Journal of Emerging Technologies in Learning, 11(7), pp. 16-23. IPPHEAE (2013) Impact of Policies for Plagiarism in Higher Education. Available at: http://plagiarism.cz/ippheae/ (Accessed: 14 March 2018). Okada, A., Mendonca, M. and Scott, P. (2015) ‘Effective web videoconferencing for proctoring online oral exams: a case study at scale in Brazil’, Open Praxis, 7(3), pp. 227-242. Whitelock, D. (2010) ‘Activating Assessment for Learning: are we on the way with Web 2.0?’, in Lee, M.J.W. and McLoughlin, C. (eds.) Web 2.0-Based E-Learning: Applying Social Informatics for Tertiary Teaching. PA, USA: IGI Global, pp. 319-342.

Students’ responses to learning-oriented exemplars: towards sustainable feedback in the first year experience? Speakers: Kay Sambell, Edinburgh Napier University; Linda Graham, Northumbria University; Peter Beven, Northumbria University, UK This paper presents findings from a three-year action-research project that focused on enhancing first-year students’ engagement with exemplars. The exemplars were explicitly designed as pedagogic tools located within the taught curriculum (Boud and Molloy, 2013) to engage students proactively in feedback processes through formative activities around their developing subject knowledge. Despite the large numbers of students (n=100+ per iteration), activities were carefully designed to offer dialogic exemplars-based opportunities (To and Carless, 2016) for students to make sense of information from varied sources, which they could then use to enhance the quality of their work and learning strategies, as opposed to being provided with conventional teacher-directed feedback/feed-forward comments. 
Importantly, the selected samples (representing a quality range) took the form of formative writing-to-learn exercises, in marked contrast with the more conventional use of samples drawn from summative tasks. This had a major bearing on the teachers’ confidence in focusing the samples and associated dialogic activities on feedback processes directly linked to evaluating relevant subject knowledge in advance of summative testing. In preparation for the workshop, students were required to undertake the same task, and activities based on comparing their work with the samples formed the basis of a two-hour teaching session. In effect, then, this positioned the exemplars as pre-emptive formative assessment (Carless, 2007), contrasting strongly with the approach reported in many exemplars-based studies, where samples are often selected for student analysis and discussion to represent the genre of the imminent summative assessment but on different topic areas, for fear of imitation or of stifling students’ creativity (Hawe et al., 2016). In sympathy with sustainable feedback (Carless et al., 2011), the teachers’ focus throughout was on trying to develop first-year students’ evaluative judgment within the discipline (Tai et al., 2017). Findings, based on surveys, interviews and reflective diaries, showed, however, that in the early iterations of the three-year action-research cycle over half of the students became confused and distracted by what Hawe et al. (2017) refer to as the ‘mechanics’ of assessment. Thus, in the final cycle the same activities were reframed to promote the development of evaluative judgment even more explicitly, by involving students in the prior co-construction of rubrics. Results revealed dramatic changes in the ways in which students interpreted the same activities, given this further shift towards pedagogic discourse (Tai et al., 2017). 
These findings will be reported and linked to the literature, offering critical insights into the growing body of work on students’ responses to exemplars and sustainable feedback processes. References Boud, D. and Molloy, E. (2013) ‘Rethinking models of feedback for learning: the challenge of design’, Assessment & Evaluation in Higher Education, 38(6), pp. 698–712. Carless, D. (2007) ‘Conceptualizing Pre-emptive Formative Assessment’, Assessment in Education: Principles, Policy & Practice, 14(2), pp. 171–184. Carless, D., Salter, D., Yang, M. and Lam, J. (2011) ‘Developing sustainable feedback practices’, Studies in Higher Education, 36(4), pp. 395-407. Hawe, E., Lightfoot, U. and Dixon, H. (2017) ‘First-year students working with exemplars: promoting self-efficacy, self-monitoring and self-regulation’, Journal of Further and Higher Education, 40(6), pp. 1–15.



Tai, J., Ajjawi, R., Boud, D., Dawson, P. and Panadero, E. (2017) ‘Developing evaluative judgement: Enabling students to make decisions about the quality of work’, Higher Education, pp. 1-15. To, J. and Carless, D. (2016) ‘Making Productive Use of Exemplars: Peer Discussion and Teacher Guidance for Positive Transfer of Strategies’, Journal of Further and Higher Education, 40(6), pp. 746–764.

Round Table Presentations Session Chair: Linda Thompson

Meeting Room 11

Feedback for Postgraduate Taught Students Speaker: Fay Julal, University of Birmingham, UK Relative to undergraduate and postgraduate research students, little is known about the feedback experiences and expectations of postgraduate taught (PGT) students. PGT students are expected to adjust quickly to new assessment and feedback practices, which is particularly challenging for international students (Tian and Lowe, 2013). PGT students are expected to show greater independence and initiative, which may imply fewer instances of feedback and a greater expectation that they will be able to use it effectively. PGT students are presumed to be “HE experts” and the transition is expected to be smooth; however, this view has been challenged (see McPherson et al., 2017). The PGT population is diverse, and students bring different expectations about assessment and feedback, likely shaped in part by their undergraduate experiences. Institutions have limited time to acculturate new students to their practices, and students have relatively little experience of the new practices before they evaluate their assessment and feedback experience in the PTES. With the future inclusion of taught postgraduate provision in the TEF, it is important that the needs of PGT students are better understood. This presentation describes a research project that explores specifically the feedback experiences and expectations of postgraduate taught students.

Increasing assessment literacy on MA translation modules to improve students’ understanding of and confidence in assessment processes Speaker: Juliet Vine, University of Westminster, UK Assessment is of importance to translators, not only as students but also throughout their careers. Translators need to be able to self-assess their work, as well as to assess the work of peers in professional contexts. 
Therefore, students are often asked, as part of their MA studies, to assess their own and their peers’ work, but they are rarely offered the opportunity to think explicitly about the assessment criteria or the level descriptors being applied. In recent research into students’ attitudes to assessment at our University, we discovered that students expressed concerns about the levels of subjectivity involved in the assessment process (Huertas Barros and Vine, 2017). This lack of confidence in the assessment process has been linked to low levels of assessment literacy (Elkington, 2016). In order to increase assessment literacy, and therefore to equip students with the assessment skills they will need as students and professionals and to improve their confidence in the assessment process, I devised and critically evaluated a learning activity which focuses on helping students understand and use the assessment criteria. References Elkington, S. (2016) HEA Transforming Assessment in Higher Education Summit 2016 Final Report. Higher Education Academy. Available at: https://www.heacademy.ac.uk/system/files/downloads/hea_assessment_summit_2016_0.pdf (Accessed: 14 March 2018). Huertas Barros, E. and Vine, J. (2017) ‘Current trends on MA translation courses in the UK: changing assessment practices on core translation modules’, The Interpreter and Translator Trainer (ITT), Special Issue ‘New Perspectives in Assessment in Translator Training’, 12(1), pp. 5-24. doi:10.1080/1750399X.2017.1400365.



Developing an identity as a knowing person: examining the role of feedback in the Recognition of Prior Learning (RPL) Speaker: Helen Pokorny, University of Westminster, UK This feedback case study is located within a joint venture between a University and a College of Further Education with an explicit mission to promote part-time education. It provides a review and evaluation of the successful development of an undergraduate programme in leadership and professional development, two-thirds of which is awarded through the Recognition of Prior Learning (RPL), thus reducing the cost to students. This cost reduction is important. Figures from the Higher Education Statistics Agency show that part-time student numbers in England have fallen by 56% since 2010, with the most rapid decline taking place after the government raised the cap on part-time fees to £6,750 per year in 2012, doubling or tripling the cost of many courses (Fazackerley, 2017). RPL has its roots in the widening participation and social justice initiatives of the early 1990s but has not achieved mainstream status in the UK, despite most universities having RPL processes enshrined within their regulatory frameworks. Harris’s (2000) observation that the onus was on the RPL student to take the initiative and to negotiate what she described as a lone process would still hold true for many students attempting to access RPL in universities today. This view of the isolated learner is at odds with the thrust in mainstream higher education to develop learning communities and to promote feedback environments rich in peer dialogue (Boud and Molloy, 2013). In this case study, feedback was key to making the RPL process both welcoming and transparent. The programme provides a rich feedback environment, which in turn provides a strong sense of identity for learners. 
This supports Whittaker et al.’s (2006) argument that RPL has the potential to alter social identities in a transformative sense through the recognition participants can gain from others as well as from assessors. Evaluation feedback from the students is consistently positive, and highlights the key roles played by the tutors and the peer group in navigating the demands of the process and maintaining motivation when the process felt overwhelming. This sense of belonging and transformation was achieved through the design of the feedback tasks and activities, which allowed the tutors to address the images of “otherness” in learners’ minds of what being a student means (me/not like me); most importantly for RPL, the feedback-rich dialogic environment made it possible to create a community of learners developing together, for assessment, images of what knowledge is and where it comes from. References Boud, D. and Molloy, E. (2013) ‘Rethinking models of feedback for learning: the challenge of design’, Assessment and Evaluation in Higher Education, 38(6), pp. 698-712. Fazackerley, A. (2017) ‘Part-time student numbers collapse by 56% in five years’, Guardian online, 2 May. Available at: https://www.theguardian.com/education/2017/may/02/part-time-student-numbers-collapse-universities. Harris, J. (2000) RPL: Power, Pedagogy and Possibility. Conceptual and Implementation Guides. Pretoria: HSRC Press. Whittaker, S., Whittaker, R. and Cleary, P. (2006) ‘Understanding the transformative dimensions of RPL’, in Andersson, P. and Harris, J. (eds.) Re-theorising the Recognition of Prior Learning. Leicester: NIACE, pp. 301-321.

Students as partners in co-creating a new module: Focusing on assessment criteria Speaker: Maria Kambouri-Danos, University of Reading, UK The project described here focused on developing staff-student partnerships with the objective of engaging students as partners in co-designing assessment and criteria for the new module. 
In more detail, the project aimed to a) go beyond student feedback by listening to the ‘student voice’ while developing the curriculum, leading to a more inclusive experience, and b) co-develop effective and student-friendly assessment criteria by engaging a diverse group of students, leading to a more inclusive pedagogy. The new module is part of a work-based programme, as part of which students are required to work in a relevant workplace for at least two and a half days per week. Most students are mature students with family responsibilities, working and studying full-time. Due to the cohort’s particular characteristics, engaging these students with the University has been challenging in the past. Through this project it was essential to engage students in four partnership



workshops during which staff and students 1) discussed the aims of the project and reviewed existing modules, 2) explored available literature in order to stimulate discussions about designing assessment, 3) finalised details of the assessment design and discussed assessment-related vocabulary, and 4) reflected on the whole process. Students were asked to complete a short survey at the beginning and at the end of the project, aiming to identify students’ perspectives on and attitudes to student engagement in curriculum design before and after taking part in the partnership sessions. The results indicate that actively engaging students in such activities helps to promote a sense of belonging and to create and sustain positive staff-student partnerships. References Black, P. and Wiliam, D. (1998) ‘Assessment and classroom learning’, Assessment in Education: Principles, Policy and Practice, 5(1), pp. 7-74. Available at: http://search.proquest.com.idpproxy.reading.ac.uk/docview/204052267?accountid=13460. Bloom, B.S., Hastings, J.T. and Madaus, G.F. (1971) Handbook on formative and summative evaluation of student learning. New York: McGraw-Hill. Bloxham, S. and Boyd, P. (2007) Developing effective assessment in higher education: a practical guide. Maidenhead: Open University Press. Coates, H. (2005) ‘The value of student engagement for higher education quality assurance’, Quality in Higher Education, 11(1), pp. 25-36. Fullan, M. and Scott, G. (2009) Turnaround leadership for higher education. San Francisco: Jossey-Bass. Harper, S.R. and Quaye, S.J. (2009) Student engagement in higher education: Theoretical perspectives and practical approaches for diverse populations. New York and London: Routledge. Race, P., Brown, S. and Smith, B.D. (2005) 500 tips on assessment. 2nd edn. London: Routledge. 
Assessment - Evaluating the Workload on Staff and Students Speakers: Mark Glynn, Dublin City University; Clare Gormley, Dublin City University; Laura Costelloe, Dublin City University, Republic of Ireland The session will discuss the analysis of assessment workload for staff and students, and our journey beyond the surface of obvious metrics to reveal the real impact of assessment on the student experience. It has been pointed out that ‘assessment is probably the most important determinant of the character of a course and of student learning within the course’ (Scott, 2015: 700). Indeed, as Biggs and Tang (2011: 196) point out, ‘assessment is the senior partner in learning and teaching. Get it wrong and the rest collapses’. Thus, mindful of assessment as a key ‘driver’ of student learning (Race, 2007) and of the importance of better understanding student and staff experiences of assessment, this session will report on the outcomes of a research project at Dublin City University which sought to profile summative assessment workloads within programmes, examining the assessment experience of students and staff within programmes and across disciplines. The study aims to investigate the extent of the assessment overload issue for both staff and students, and is being undertaken to encourage programme teams to adopt a programmatic approach to assessment, including taking data-informed decisions to enhance the student experience.



Dates for Your Calendar 11 July 2018: Transforming Feedback (AHE 2018 Panel Review) Panel Chair: Prof Sally Jordan (The Open University, UK) This is a joint webinar organised with the Assessment in Higher Education conference secretariat. This one-hour session will feature selected speakers from the Assessment in Higher Education conference (Thursday 28 June 2018, Manchester, UK). The webinar will take the form of a panel-style review of some of the key messages from selected papers presented at the AHE 2018 conference. Each panel member will contribute to the discussion with a short overview of their presentation from the conference, followed by questions and discussion between the panel and the webinar participants. Panel Presenters: 1. Theresa Nicholson, Manchester Metropolitan University: “Frequent rapid feedback, feed-forward, and peer learning, for enhancing student engagement in online portfolio assessment” 2. Stephen Dixon, Newman University: “Hearing voices: First year undergraduate experience of audio feedback” 3. Naomi Winstone and Emma Medland, University of Surrey: “Feedback footprints: Using learning analytics to support student engagement with, and learning from, feedback”. To register for this session please log in (or create an account) online here: http://transformingassessment.com/user/login then click the ‘register now’ button. Sessions are hosted by Professor Geoffrey Crisp, PVC Education, University of New South Wales, and Dr Mathew Hillier, Office of Learning and Teaching, Monash University, Australia. Please note all sessions are recorded and made public after the event. When: 11 July 2018, 07:00 AM to 08:00 AM Coordinated Universal Time (UTC/GMT)



International AHE Conference 26 & 27 June 2019 Manchester UK On 26 & 27 June 2019 we will hold the seventh international Assessment in Higher Education conference. This research and academic development conference is a forum for critical debate of research and innovation focused on assessment and feedback practice and policy. The themes for our 2019 conference will invite a wide range of papers, practice exchanges and posters. Themed poster presentations, accompanied by a short pitch from the authors, have been a particular strength of the conference and have encouraged networking by delegates. Keynote Speakers Phil Dawson: Associate Professor at Deakin University Bruce Macfarlane: Professor of Higher Education at the University of Bristol Who is the conference aimed at? The conference will be of interest to higher education lecturers, academic developers, professional learning support staff and those with responsibility for the management and quality assurance of assessment. This conference is for researchers and practitioners, and is an opportunity to share and learn about both research and innovative practice in assessment in higher education. A thought-provoking and supportive conference The 2019 conference will build on the success of our previous events, at which we have made particular efforts to combine critical debate with a supportive and friendly environment. A combination of new thinking from keynote speakers, presentations on current research and sharing of innovative practice will provide a high-quality forum for debate. ‘An excellent conference, I think it is the best I have attended in terms of quality presenters, organisation and fellow delegates. I will be back’ (Delegate comment, June 2017). In addition to two provocative keynotes we will be inviting proposals from delegates to share their research and practice. The call for proposals will be sent out across the network in October 2018. 
Further information For further information go to: http://aheconference.com/ Conference manager queries: Linda Shore linda.shore@cumbria.ac.uk

The AHE conference is leading the development of assessment for learning in higher education



PRHE Journal Practitioner Research in Higher Education publishes research and evaluation papers that contribute to the understanding of theory, policy and practice in teaching and supporting learning. The journal aims to disseminate evaluations and research of professional practice which give voice to all of the participants in higher education and which are based on ethical and collaborative approaches to practitioner enquiry. The online, open-access journal has recently published a special issue comprising papers from the International AHE Conference. This issue is available online here: http://ojs.cumbria.ac.uk/index.php/prhe/issue/view/72 We would welcome papers for our next standard issue of the journal. Please email your article to linda.shore@cumbria.ac.uk. The deadline for submission is Friday 28 September 2018. If you would like further information or to discuss a proposal, please contact the journal editor: pete.boyd@cumbria.ac.uk



Delegate List First Name Sian Olya Katy Fabio Riccardo Subhi Farzana John Rob Jill Martin Charles David Pete Dina Sally Angela Alex Anke David Danny Amanda Alison Judy Ellie Jayne Jane Maria Hilary Sue Aleix Barrera

Anneleen Cheryl Nick Trevor John Anna Enrico Stephen Lauren Hannelie Ana Maria Sara Ana Jessica

Last Name Alsop Antropova Appleton Arico Ashour Aslam Atherton Baker Barber Barker Blaich Boud Boyd Brazil Brown Brzeski Buckley Buttner Carless Carroll Chapman Clapp Cohen Cole Coleman Collings Conota-Marco Constable Coredell Corominas Cosemans Cottrell Curtis Day Dermo Díaz-Vicario Dippenaar Dixon Dommett du Plessis Walker Ducasse Eastburn Espasa Evans

Institution Coventry University Research Assistant University of East Anglia University of East Anglia The University of Buckingham Coventry University Explorance Inc. Sheffield Hallam University University of Manchester University of Aberdeen Center of Inquiry/HEDS Deakin University University of Cumbria Institute of Technology Carlow Ireland Leeds Beckett University University of Central Lancashire University of Strathclyde University of Birmingham University of Hong Kong University NSW University of Cumbria Newcastle University University of Kent myknowledgemap University of Cumbria University of Plymouth University of Cumbria University of Cumbria Bishop Grosseteste University Universitat Autònoma de Barcelona KULeuven The University of Manchester James Madison University Royal Literary Fund University of Salford Universitat Autònoma de Barcelona Anglia Ruskin University Newman University University Centre South Devon Coventry University RMIT University Melbourne University of Huddersfield Universitat Oberta de Catalunya The Open University


Delegate List (contd.) First Name Louise Natalie Rachel Keston Gerry Zhengdong Helena Rebecca Mark Clare Stephen Linda Karen Paul Nick Nuria Teresa Hilary Rita Jane Kathryn Peter Tom Jonathan Georgeta Natasha Sally Gordon Fay Maria Vikki David Joanna Fangfei Andy Cecilia Louise Terry Jenny Ross E. Linda Rosa M. Jan Teresa

Last Name Fisher Forde-Leaves Forsyth Fulcher Gallagher Gan Gaunt Gill Glynn Goudy Gow Graham Gravett Greening Grindle Guasch Guasch Pascual Harris Headington Headley Hill Holgate Holland Howard Ion Jankowski Jordan Joughin Julal Kambouri-Danos Kenny Laughton Leach Li Lloyd Lowe Lynch Maguire Marie Markle Matthews Mayordomo McArthur McConlogue

Institution University of Bristol Cardiff University Manchester Metropolitan University James Madison University Dundalk Institute of Technology University of Macau Guildhall School of Music & Drama Newcastle University Dublin City University University College London The University of York Northumbria University University of Surrey Coventry University University College London Regent's University London Universitat Oberta de Catalunya (UOC) University of Reading University of Cumbria Harper Adams University La Trobe University Northumbria University myknowledgemap University of Cumbria Universitat Autònoma de Barcelona National Institute for Learning Outcomes Assessment The Open University Deakin University University of Birmingham University of Reading Pebble Learning Ltd Sheffield Hallam University The University of Buckingham University of Bath Cardiff University The University of York Dublin Institute of Technology University of Limerick University College London Educational Testing Service Manchester Metropolitan University University of Cumbria Lancaster University University College London


Delegate List (contd.) First Name Eileen David Mary Kristen Ruth Emma Cristina Phillip Erin Lauren Erica Peter Claire Theresa Margaret Geraldine Dimitrios Serafina Jayne John Judith Edd Helen Helen Maria Phil Justin Jane Catherine Nicola Errol Kay Mark Susan Linda Angela Rachel Charlie George Andrew Marie Usha Elaheh Lindsey 56

Last Name McEvoy McGarvey McGrath McKinney McQuater Medland Mercader Miller Morehead Moriarty Morris Morritt Moscrop Nicholson O'Keeffe O'Neill Paparas Pastore Pearson Pearson Pickering Pitt Pittson Pokorny Puro Race Rami Rand Reading Reimann Rivera Sambell Schofield Scoffield Shore Short Simpson Smith Spencer Sprake Stowell Sundaram Tavakoli Thompson

Institution Trinity College Dublin Keele University Galway Mayo Institute of Technology University of California, Los Angeles Manchester Metropolitan University University of Surrey Universitat Autònoma de Barcelona New College Durham University of Central Lancashire Leeds Beckett University Independent Consultant The Open University Edge Hill University Manchester Metropolitan University Mary Immaculate College University College Dublin Harper Adams University University of Bari, Italy King's College London University of Cumbria The Open University University of Kent Harper Adams University University of Westminster University of Sheffield Independent Dublin City University York St John University Durham University Durham University Edinburgh Napier University Edinburgh Napier University Edge Hill University Manchester Metropolitan University University of Cumbria Dundalk Institute of Technology Durham University Liverpool John Moores University Keele University University of Central Lancashire University of Worcester University of East Anglia University of South East Norway-Hakim Sabzevari Un University of Reading


Delegate List (contd.) First Name Carmen James Wayne Gemma Juliet Panos Susanne Thushari Denise Matt Naomi Kathleen Neil Harvey

Last Name Tomas Trueman Turnbull van Vuuren Vine Vlachopoulos Voelkel Welikala Whitelock Wingfield Winstone Wise Withnell Woolf

Institution University of Nottingham Anglia Ruskin University Liverpool John Moores University Canterbury Christ Church University University of Westminster Macquarie University University of Liverpool University of London The Open University eAssessment Association University of Surrey Center of Inquiry/HEDS University of Salford University of Worcester



Venue Floor Plan: 1st Floor





© University of Cumbria 2018 (UOC 1181)

http://aheconference.com

