

Contents

Delegate Information ............................ 3
Welcome ........................................ 4
Organising Committee ........................... 6
List of Reviewers .............................. 7
Programme Summary .............................. 9
Programme Overview ............................. 10
Conference Themes .............................. 21
Day 1 – Keynote: Professor Margaret Price ...... 29
Day 1 – Abstracts .............................. 30
Day 2 – Keynote: Associate Professor Gordon Joughin ... 52
Day 2 – Abstracts .............................. 53
Presenter Index ................................ 80
Conference Exhibitors & Main Sponsors .......... 90
Delegate List .................................. 91
Floor Plan ..................................... 95



Delegate Information

Conference dialogue and evaluation
The conference is using Textwall to encourage debate and evaluation of the event as it takes place. Please join in and add your comments, questions and ideas to an on-going slideshow by sending texts to 07537402400. Texts will be displayed throughout the two days, adding interactive feedback and evaluation to the conference. Numbers will not be displayed, and texts are charged at standard rates (for most people, this will be included in your monthly allowance). You can start right now: send a quick text about what you want to get out of the conference, pose a question, or give a quick summary to generate interest in your presentation!

Wi-Fi Access
For Wi-Fi connection at Maple House, please enter the password: etcv4063

AHE Conference Stand
For assistance during the conference, please visit the AHE conference stand located at the entrance to the conference suite on the first floor of Maple House.



Welcome

It is with great pleasure that I welcome you to Birmingham for the fourth Assessment in Higher Education Conference. The conference is a unique collaboration between academics working in the field of assessment and the UK Higher Education Academy, who recognise the importance of debating concerns and increasing understanding of assessment practice within and across academic disciplines. We aim to build a focused, friendly and collaborative atmosphere with an emphasis on the voice of the practitioner, working to improve assessment in their programmes and institutions.

The programme includes an important range of themes in assessment research and development, including ‘engaging students in assessment and feedback’, ‘assessment for learning’ and ‘assessment technologies’. Challenging and controversial ideas for transforming assessment will sit alongside debates on grading and maintaining standards. We hope that the exciting range of papers, practice exchanges, early career sessions and posters will provide the chance to share innovations, evidence-based change, reflections on practice, research and evaluative studies. We have a strong international representation at the conference, drawing on research and practice in many higher education systems, with contributions from, amongst others, the UK, Australia, China, Norway, Thailand and the USA.

Conference Highlights
Conference highlights include our two excellent keynote speakers, Professor Margaret Price from Oxford Brookes University in the UK and Associate Professor Gordon Joughin from the University of Queensland in Australia. There will be an opportunity to meet some of the key authors in the field of assessment at the end of the first day of the conference, at the launch of two new books on feedback published by Routledge.

We have introduced an innovation into our poster session this year, providing the opportunity for greater engagement with the posters by giving their authors the chance to make a 3-minute ‘pitch’ to the delegates. This process will take place in three different rooms, depending on the theme of the posters, and will enable each author to provide a brief overview of their topic so that delegates can decide which they are interested in following up. The Higher Education Academy has kindly sponsored the prize of an iPad for the best poster, and the judging will take place during that session.

Networking
Above all, we hope the conference will provide you with an excellent opportunity to make connections and discover shared interests in higher education assessment with colleagues from across the world. Our conference evening meal, on the 26th of June, is an informal social event to support networking. This meal needs to be booked in advance, so please visit the conference desk by 3pm if you would like to purchase a ticket. We are also holding a free networking breakfast in the conference centre on the second day so that, whatever hotel you are


staying in, you can get together to discuss the day’s events and enjoy breakfast with like-minded colleagues.

Evaluation
There will be an online evaluation after the conference, but please feel free to share any comments or suggestions on Textwall or with members of the Organising Group whilst you are here.

Birmingham is a revitalised city with many interesting places to visit. Go to http://visitbirmingham.com/ for plenty of ideas.

As the conference chair, I wish you an excellent, stimulating and friendly conference and a good time in Birmingham. Whatever you are looking for in a conference, I hope you find it. There is no doubt that the sector is in clear need of transformed assessment practice, and this conference community has an important role to play in building the critical mass of research and innovation needed to support that transformation.

Sue Bloxham
Conference Chair, on behalf of the organising group



AHE Conference Organising Committee The conference is managed by an organising group of researchers and practitioners in higher education assessment and is sponsored by the University of Cumbria and the Higher Education Academy.

Conference Theme Leads

Engaging students in assessment & feedback: Amanda Chapman
Marking and academic standards: Erica Morris
Assessment for learning: Liz McDowell
Assessment technologies: Sally Jordan
Transitions in assessment: Kay Sambell & Linda Graham
Academic writing development: Kay Sambell & Linda Graham
Practice Exchange/Early Career PhD Seminar: Rebecca Westrup



List of Reviewers

David Bell (D.Bell@qub.ac.uk)
Alisdair Blair (ablair@dmu.ac.uk)
Michaela Borg (michaela.borg@ntu.ac.uk)
Veronica Brock (V.M.Brock@wlv.ac.uk)
Francisca Cabezas (francisca_sofia@hotmail.com)
Sheelagh Collier (Colliers@itcarlow.ie)
Krista Court (Krista.court@cumbria.ac.uk)
John Cowan (J.Cowan@napier.ac.uk)
Meng Fan (meng.fan@newcastle.ac.uk)
Martin Foo (martin.foo@northumbria.ac.uk)
Vidar Gynnild (Vidar.Gynnild@plu.ntnu.no)
Marie Hay (MHay@dmu.ac.uk)
Clair Hughes (clair.hughes@uq.edu.au)
Donna Hurford (donna.hurford@cumbria.ac.uk)
Linda Martin (linda.martin@coventry.ac.uk)
Helen Martin (h.martin@abdn.ac.uk)
Colin Mason (colinmason52@gmail.com)
Jan McArthur (jan.mcarthur@ed.ac.uk)
Jane McDonnell (jmcdonnell@dmu.ac.uk)
John McGarrigle (MCGARRIJ@itcarlow.ie)
Margo McKeever (margo.mckeever@northumbria.ac.uk)
Karen McKenzie (Karen.McKenzie@ed.ac.uk)
Emma Medland (emma.medland@kcl.ac.uk)
Carole Murphy (carole.murphy@smuc.ac.uk)
Trish Murray (p.b.murray@sheffield.ac.uk)
Lynn Norton (nortonl@hope.ac.uk)
Edd Pitt (pitte@hope.ac.uk)
Venda Pollock (venda.pollock@newcastle.ac.uk)
Nicola Reimann (nicola.reimann@northumbria.ac.uk)
Marina Sawdon (marina.sawdon@durham.ac.uk)
Pritpal Sembi (P.Sembi@wlv.ac.uk)
Paul Sutton (psutton@marjon.ac.uk)
Maarten Taas (mprt1@leicester.ac.uk)
Wayne Turnbull (W.Turnbull@ljmu.ac.uk)
Diana Vinke (A.A.Vinke@tue.nl)
Zabin Visram (venda.pollock@newcastle.ac.uk)
Margaret Weaver (margaret.weaver@cumbria.ac.uk)




Programme Summary

Day 1 – 26th June 2013
09.00  Registration
09.30  Pre-conference Master Classes
11.20  Parallel Session 1
12.00  Parallel Session 2
12.30  Lunch (Restaurant)
13.30  Welcome & Keynote (Accelerate Suite)
14.30  Refreshments
15.00  Parallel Session 3
15.40  Parallel Session 4
16.20  Parallel Session 5
16.50  Parallel Session 6
17.30  Book Launch (Restaurant)
20.00  Evening Meal (Annexe Restaurant)

Day 2 – 27th June 2013
08.30  Networking Breakfast (Restaurant); Registration
09.15  Parallel Session 7
09.50  Parallel Session 8
10.35  Parallel Session 9
11.15  Refreshments (Restaurant)
11.30  Poster Session 1 - Marking & Academic Standards (Proceed 1)
       Poster Session 2 - Assessment Technologies (Proceed 1)
       Poster Session 3 - Transitions in Assessment (Proceed 1)
       Poster Session 4 - Engaging Students in Assessment & Feedback (Proceed 2)
       Poster Session 5 - Assessment for Learning (Propel 1)
12.30  Lunch (Restaurant)
13.30  Parallel Session 10
14.10  Keynote & Poster Award (Accelerate Suite)
15.15  Refreshments & Close (Restaurant)


Programme Overview

Day 1

Registration – 26th June 2013, 09.00

Pre-conference Master Classes – 26th June 2013, 09.30
Delegates are invited to attend a series of master classes prior to the start of the conference on Wednesday 26 June 2013. Experts in the field of assessment will be leading these sessions on Assessment for Learning, writing good questions for online assessment, and oral assessment.

1. Assessment for Learning
Liz McDowell1, Kay Sambell2, Catherine Montgomery3
1 Independent Higher Education Consultant, Newcastle upon Tyne, UK; 2 Northumbria University, Newcastle upon Tyne, UK; 3 University of Hull, Hull, UK
Location: Proceed 1

2. Using oral assessment
Gordon Joughin
The University of Queensland, Brisbane, Queensland, Australia
Location: Proceed 2

3. Producing high quality computer-marked assessment
Sally Jordan, Tim Hunt
Open University, Milton Keynes, UK
Location: Propel 1

Parallel Session 1 – 26th June 2013, 11.20

4. Generating credible evidence of academic outcomes and standards: perspectives of academics in Australian universities
Clair Hughes1, Simon Barrie2, Geoffrey Crisp3, Anne Bennison1
1 The University of Queensland, Brisbane, Queensland, Australia; 2 The University of Sydney, Sydney, New South Wales, Australia; 3 RMIT University, Melbourne, Victoria, Australia
Location: Proceed 1. Chair: Phillip Long

5. Learning from practice - developing an overview of lecturers' learning needs in terms of assessment
Marion Palmer1, Jen Harvey2
1 IADT, Dun Laoghaire, Co. Dublin, Ireland; 2 DIT, Dublin, Ireland
Location: Proceed 2. Chair: Anna Steen

6. Offering carrots or wielding sticks: the voluntary or mandatory use of eAssessment and eFeedback to improve learning and teaching in times of change in HE
Nick Allsopp, Harish Ravat
De Montfort University, Leicester, UK
Location: Forward. Chair: Sally Jordan

7. Fearful Asymmetries of Assessment
Paul Sutton
University College Plymouth: St Mark & St John, Plymouth, UK
Location: Propel 1. Chair: Carol Bailey


8. Learning-oriented assessment task design
David Carless
University of Hong Kong, Hong Kong
Location: Propel 2. Chair: Anita Peleg

9. Making the most of Masters Level assessment
Sally Brown1, Phil Race2, Liz McDowell2
1 Leeds Metropolitan University, Leeds, UK; 2 Independent HE Consultant, Newcastle, UK
Location: Accelerate Suite. Chair: Rita Headington

Parallel Session 2 – 26th June 2013, 12.00

10. Decision-making for assessment: explorations of everyday practice
Margaret Bearman1, David Boud2, Phillip Dawson1, Gordon Joughin3
1 Monash University, Melbourne, Australia; 2 University of Technology, Sydney, NSW, Australia; 3 University of Queensland, Brisbane, Australia
Location: Accelerate Suite. Chair: Phillip Long

11. Developing Institutional Assessment Strategies in a Performative Environment
Sue Mathieson
Northumbria University, Newcastle upon Tyne, UK
Location: Proceed 1. Chair: Anna Steen

12. Using technologies to engage students in assessment and feedback: a case study from Biomedical Sciences
John Dermo, James Boyne
University of Bradford, Bradford, UK
Location: Proceed 2. Chair: Sally Jordan

13. 'I can't hand it in yet, it's not perfect': Mature students' experience of assessment and contested identities
Amanda Chapman
University of Cumbria, Lancaster, UK
Location: Propel 1. Chair: Carol Bailey

14. Introducing Assessment & Feedback: A Framework of Engagement, Empowerment and Inclusion
Louise O'Boyle
University of Ulster, Belfast, UK
Location: Forward. Chair: Rita Headington

15. Frameworks and disciplines: mapping and exploring assessment principles
Erica Morris*1, Patrick Baughan2
1 Higher Education Academy, UK; 2 City University London, UK
Location: Propel 2. Chair: Rita Headington

Lunch – 26th June 2013, 12.30 (Restaurant)



Welcome & Keynote – 26th June 2013, 13.30 (Accelerate Suite)

16. ‘Assessment literacy: making the link between satisfaction and learning’
Margaret Price
Oxford Brookes University, Oxford, UK

Refreshments – 26th June 2013, 14.30

Parallel Session 3 – 26th June 2013, 15.00

17. Examining the examiners: investigating the understanding and use of academic standards in assessment
Sue Bloxham1, Margaret Price2, Jane Hudson2, Birgit den Outer2
1 University of Cumbria, Lancaster, UK; 2 Oxford Brookes University, Oxford, UK
Location: Proceed 1. Chair: Calum Delaney

18. Developing assessment literacy: students' experiences of working with exemplars to improve their approaches to assignment writing
Kay Sambell1, Catherine Montgomery2, Linda Graham1
1 Northumbria University, Newcastle upon Tyne, UK; 2 University of Hull, Hull, UK
Location: Accelerate Suite. Chair: Patrick Baughan

19. Designing a Student Progress Dashboard to promote student self-regulation and support
Julie Vuolo
University of Hertfordshire, Harpenden, UK
Location: Forward. Chair: Marion Palmer

20. Exploring formative assessment with international students
Caroline Burns, Martin Foo
Northumbria University, Newcastle upon Tyne, UK
Location: Proceed 2. Chair: Louise O’Boyle

21. Demystifying marking and assessment processes: where's the mystery?
Fiona Meddings
University of Bradford, Bradford, West Yorkshire, UK
Location: Propel 1. Chair: Pete Boyd

22. Releasing creativity in assessment
Alison Bettley, Ben Oakley, Freda Wolfenden
The Open University, Milton Keynes, UK
Location: Propel 2. Chair: Donna Hurford

Parallel Session 4 – 26th June 2013, 15.40

23. How do assessors mark? What do they see in written work, and what do they do with it?
Calum Delaney
Cardiff Metropolitan University, Cardiff, UK
Location: Forward. Chair: David Bell



24. "I've never done one of these before." A comparison of the assessment 'diet' at A level and the first year at university
Simon Child, Frances Wilson, Irenka Suto
Cambridge Assessment, Cambridge, Cambridgeshire, UK
Location: Proceed 1. Chair: Mike McCormack

25. The value of technology enhanced assessment – sharing the findings of a JISC funded project in evaluating assessment diaries and GradeMark
Alice Lau, Karen Fitzgibbon
University of Glamorgan, Pontypridd, UK
Location: Accelerate Suite. Chair: Marion Palmer

26. Lost in translation: what students want from feedback and what lecturers mean by "good"
Phil Long
Anglia Ruskin University, Chelmsford, UK
Location: Proceed 2. Chair: Donna Hurford

27. Assessing the change or changing the assessed?
Bridget Hanna1,2
1 Edinburgh Napier University, Edinburgh, UK; 2 The Open University, Milton Keynes, UK
Location: Propel 1. Chair: Pete Boyd

28. Using Assessment Evidence to Improve Student Learning: Can It Be Done?
Natasha Jankowski1,2
1 University of Illinois Urbana-Champaign, Champaign, IL, USA; 2 National Institute for Learning Outcomes Assessment, Champaign, IL, USA
Location: Propel 2. Chair: Liz McDowell

Parallel Session 5 – 26th June 2013, 16.20

29. You show me yours..... comparative evaluations of assessment practice and standards
Claire Gray, Julie Swain, Rebecca Turner
Plymouth University, Plymouth, UK
Location: Proceed 1. Chair: Jan Watson

30. Learning outcomes - balancing pre-defined standards and the benefits of unexpected learning?
Anton Havnes1, Tine Sophie Prøitz2
1 Oslo and Akershus University College of Applied Sciences, Oslo, Norway; 2 Nordic Institute for Studies in Innovation, Research and Education, Oslo, Norway
Location: Proceed 2. Chair: Natasha Jankowski

31. Student Assessment Using Digital Story Telling
Anita Peleg, Peter Maple
London South Bank University, London, UK
Location: Propel 1. Chair: Caroline Marcangelo



32. Student Assessment in Neoliberal Higher Education Context: Rationalities, Practices, Subjects (Cross-cultural Study of the University of Glasgow and Tallinn University)
Rille Raaper
The University of Glasgow, Glasgow, UK
Location: Propel 2. Chair: Kay Sambell

33. Should we stop calling it assessment? What influences students' perceptions and conceptions of assessment and AfL?
Donna Hurford
University of Cumbria, Lancaster, UK
Location: Accelerate Suite. Chair: Sheena Bevitt

Parallel Session 6 – 26th June 2013, 16.50

34. What do academic staff think about assessment and how can it help inform policy making and approaches to professional development?
Sarah Maguire1, Lin Norton2, Bill Norton2
1 University of Ulster, Coleraine, Derry, UK; 2 Liverpool Hope University, Liverpool, Merseyside, UK
Location: Accelerate Suite. Chair: Elizabeth Ruiz

35. Monitoring the attainment of graduate attributes at individual and cohort levels
Kerry Shephard, Brent Lovelock, John Harraway, Liz Slooten, Sheila Skeaff, Mick Strack, Tim Jowett, Mary Furnari
University of Otago, Dunedin, New Zealand
Location: Proceed 1. Chair: Natasha Jankowski

36. How Educational Practices Change the Evaluation and Assessment: Making Learning Meaningful through ePortfolios
Ida Asner
LiveText, La Grange, IL, USA
Location: Proceed 2. Chair: Caroline Marcangelo

37. Creative Development or High Performance? An Alternative Approach to Assessing Creative Process and Product in Higher Education
Jan Watson
University of East Anglia, Norwich, Norfolk, UK
Location: Propel 1. Chair: Ainol Madziah Zubairi

38. Assessing learning gains in project management tuition/training
Steven Nijhuis1,2
1 Utrecht University of Applied Science, Utrecht, The Netherlands; 2 Twente University, Twente, The Netherlands
Location: Forward. Chair: Kay Sambell

39. International students’ perceptions of formative peer assessment in the British university: a case study
Meng Fan, David Leat, Sue Robson
Newcastle University, Newcastle upon Tyne, UK
Location: Propel 2. Chair: Sheena Bevitt



Book Launch – 26th June 2013, 17.30 (Restaurant)

We are pleased to announce the launch of two books published by Routledge.

Feedback in Higher and Professional Education: Understanding it and doing it well
Edited by David Boud and Elizabeth Molloy
Feedback in Higher and Professional Education explores what needs to be done to make feedback more effective. It examines the problem of feedback and suggests that there is a lack of clarity and shared meaning about what it is and what constitutes doing it well. It argues that new ways of thinking about feedback are needed.

Reconceptualising Feedback in Higher Education: Developing dialogue with students
Edited by Stephen Merry, Margaret Price, David Carless & Maddalena Taras
Reconceptualising Feedback in Higher Education, coming from a think-tank composed of specialist expertise in assessment feedback, is a direct and fundamental response to the impetus for change. Its purpose is to challenge established beliefs and practices through critical evaluation of evidence and discussion of the renewal of current feedback practices.

Evening Meal – 26th June 2013, 20.00 (Annexe Restaurant)

The evening meal is to be held at the Annexe restaurant. The restaurant is located just down the road from the conference venue and will offer you the opportunity to immerse yourself in the vibrancy and nostalgia of vintage café society and enjoy contemporary Italian and French inspired cuisine.

Day 2

Networking Breakfast – 27th June 2013, 08.30 (Restaurant)
Registration – 27th June 2013, 08.30

Parallel Session 7 – 27th June 2013, 09.15

40. University Standards Descriptors: can you please all of the people even some of the time?
Rachel Forsyth
Manchester Metropolitan University, Manchester, UK
Location: Accelerate 1. Chair: Clair Hughes

41. Using Patchwork Assessment to enhance student engagement, skills and understanding
Caroline Marcangelo
University of Cumbria, Cumbria, UK
Location: Proceed 1. Chair: Amanda Chapman

42. Upskilling students in lectures using student polling software
Sam Clarke, Trish Murray
University of Sheffield, Sheffield, South Yorkshire, UK
Location: Proceed 2. Chair: Andrew Chambers



43. Understanding student learning from feedback
Stuart Hepplestone1, Gladson Chikwa2,1
1 Sheffield Hallam University, Sheffield, UK; 2 Nottingham Trent University, Nottingham, UK
Location: Propel 1. Chair: Ian Jones

44. Analysis of student comments following participation in an assessed online collaborative activity based on contributions to a series of wiki pages
Janet Haresnape
Open University, Birmingham, UK
Location: Propel 2. Chair: Paul Sutton

45. Assessment for learning at institutions of higher education: a study of practices among academicians
Mohamad Sahari Nordin1, Ainol Zubairi1, Mohd Burhan Ibrahim1, Nik Suryani Nik Abd Rahman1, Zainurin Abdul Rahman1, Joharry Othman1, Tunku Badariah Tunku Ahmad1, Zainab Mohd Nor2
1 International Islamic University Malaysia, Kuala Lumpur, Malaysia; 2 University Teknologi Mara, Kuala Lumpur, Malaysia
Location: Forward. Chair: Fiona Handley

Parallel Session 8 – 27th June 2013, 09.50

46. Students' Perceptions about Assessment and Assessment Methods: A Case Study
Elizabeth Ruiz Esparza
University of Sonora, Hermosillo, Sonora, Mexico
Location: Forward. Chair: Bridget Hanna

47. Working together: can collaboration between academic writing/subject specialists improve students' performance in written assessments?
Carol Bailey
University of Wolverhampton, Wolverhampton, UK
Location: Proceed 1. Chair: Claire Gray

48. Student response system used for motivation and learning outcome
Cecilie Asting, Anna Steen-Utheim, Inger Carin Grøndal
BI Norwegian Business School, Oslo, Norway
Location: Proceed 2. Chair: Tim Hunt

49. Emotional Responses: Feedback Without Tears
Mike McCormack
Liverpool John Moores University, Liverpool, Merseyside, UK
Location: Accelerate Suite. Chair: Anton Havnes

50. Perceptions of fairness and consistency regarding assessment and feedback in trainees experiencing a difficult or failed professional placement
Mark Carver
University of Cumbria, Lancaster, UK
Location: Propel 1. Chair: Sarah Maguire

51. An Authentic Assessment Proposal: Current Events as Reported in the News Media
Rick Glofcheski
University of Hong Kong, Hong Kong
Location: Propel 2. Chair: Alice Lau


Parallel Session 9 – 27th June 2013, 10.35

52. The Introduction of Grade Marking at Southampton Solent University
Fiona Handley, Ann Read
Southampton Solent University, Southampton, UK
Location: Accelerate Suite. Chair: Fiona Meddings

53. Assessment Feedback - What Can We Learn from Psychology Research
John Kleeman
Questionmark, UK
Location: Forward. Chair: Rachel Sales

54. Immediate feedback by smart phones
Nina Helene Ronæs, Anna Steen-Utheim, Inger Carin Grøndal
BI Norwegian Business School, Oslo, Norway
Location: Proceed 1. Chair: Stuart Hepplestone

55. Reflections on feedback - feedback on reflection
Rita Headington
University of Greenwich, London, UK
Location: Proceed 2. Chair: Susan Mathieson

56. An evaluation of the effectiveness of audio feedback, and of the language used, in comparison with written feedback
Charlotte Chalmers, Janis MacCallum
Edinburgh Napier University, Edinburgh, UK
Location: Propel 1. Chair: John Dermo

57. Collaborations and Celebrations: the highs and lows of a decade of working with collaborative assessment
Veronica Brock1,2
1 University of Wolverhampton, West Midlands, UK; 2 Halmstad University, Halland, Sweden
Location: Propel 2. Chair: Janet Haresnape

Refreshments – 27th June 2013, 11.15 (Restaurant)

Poster Session 1 - Marking & Academic Standards – 27th June 2013, 11.30 (Proceed 1)
Chair: Erica Morris

58. Getting the Devil off our back: Reflections on Table Marking in Psychology
Meesha Warmington, Cecilia Lowe
University of York, York, UK

59. Marking as a way of becoming an academic
Rachel Sales
University of the West of England, Bristol, UK

60. Investigating moderation practices in a Faculty of Education
Lenore Adie, Margaret Lloyd, Denise Beutel
Queensland University of Technology, Brisbane, Queensland, Australia



61. How university tutors learn to grade student coursework: professional learning as interplay between public knowledge and practical wisdom
Pete Boyd, Sue Bloxham
University of Cumbria, Carlisle, UK

62. Creating and Marking Portfolios Effectively
Fiona Handley, Adam Kelly
Southampton Solent University, Southampton, UK

Poster Session 2 - Assessment Technologies – 27th June 2013, 11.30 (Proceed 1)
Chair: Sally Jordan

63. The influence of research and practice on a set of open source eAssessment tools
Tim Hunt
The Open University, Milton Keynes, UK

64. Video Based Assessment within Higher Education: Developing Tactical Knowledge on a Sports Coaching Module
Martin Dixon, Chris Lee
Staffordshire University, Stoke-on-Trent, UK

Poster Session 3 - Transitions in Assessment – 27th June 2013, 11.30 (Proceed 1)
Chair: Kay Sambell

65. Researching assessment and exam practices in an online postgraduate program
Andrew Chambers, Ruth Laxton
University of New South Wales, Sydney, Australia

Poster Session 4 - Engaging Students in Assessment & Feedback – 27th June 2013, 11.30 (Proceed 2)
Chair: Amanda Chapman

66. A strategy to help students to actively engage with feedback provided: sharing experience from the undergraduate medical student selected component (SSC) programme
David Bell, Vivienne Crawford
Centre for Medical Education, The Queen's University of Belfast, Belfast, Northern Ireland, UK

67. Horizontal and Vertical Models: The Instant Feedback to Help Students Excel
Wajee Chookittikul1, Peter Maher2, Sukit Vangtan1
1 Phetchaburi Rajabhat University, Phetchaburi, Thailand; 2 Webster University, St. Louis, MO, USA

68. Strategies for Formative Oral Assessment of EFL through Storytelling
Ana Fernandez-Caparros Turina, Rafael Alejo Gonzalez, Ana Maria Piquer Piriz
University of Extremadura, Extremadura, Spain

69. Facilitating Transitional Learning: An Empirical Investigation into Undergraduate Students’ Perceptions of the Impact of Formative Assessment at Level 4 before entering Level 5
Nikki Woods
University of Northampton, Northampton, UK



70. A Multifaceted Bioinstrumentation Assessment Approach in the Rehabilitation Sciences
Ricardo Simeoni
Griffith University, Gold Coast, Australia

71. Student engagement in formative assessment
Lynda Cook, Diane Butler, Sally Jordan
Open University, Milton Keynes, UK

72. Alternative approaches to the assessment of statistical reasoning
Helen Harth, Ian Jones
Loughborough University, Loughborough, UK

73. Peer assessment without assessment criteria
Ian Jones, Lara Alcock, David Sirl
Loughborough University, Loughborough, UK

74. Cohort diversity - A case for cafeteria assessment mechanisms?
Sheena Bevitt
University of Derby, Derby, UK

Poster Session 5 - Assessment for Learning – 27th June 2013, 11.30 (Propel 1)
Chair: Liz McDowell

75. A strategy to familiarise undergraduate students with assessment methods and standards and to provide informal feedback on progress: experience from a final year pharmacology course
David Bell, Malcolm Campbell
Centre for Medical Education, The Queen's University of Belfast, Belfast, Northern Ireland, UK

76. The policy paradox: the risk of unwelcome consequences of well-intended assessment policy
Clair Hughes1, Simon Barrie2, Geoffrey Crisp3, Anne Bennison1
1 The University of Queensland, Brisbane, Queensland, Australia; 2 The University of Sydney, Sydney, New South Wales, Australia; 3 RMIT University, Melbourne, Victoria, Australia

77. IT Office Ergonomics: A Strategy to Assess Doctoral Students in the ASEAN Environment
Wajee Chookittikul1, Siripong Teopipithporn1, Peter Maher2, Pachara Payao1
1 Phetchaburi Rajabhat University, Phetchaburi, Thailand; 2 Webster University, St. Louis, MO, USA

78. Using the Assessment Lifecycle to enhance assessment practice
Rachel Forsyth, Rod Cullen, Neil Ringan, Mark Stubbs
Manchester Metropolitan University, Manchester, UK

79. The impacts of assessment on professional development in educational postgraduate programmes - a tale of the two universities
Nhan Tran
Newcastle University, Newcastle upon Tyne, UK

80. Promoting competency and confidence through assessment for learning
John Stephens, Jill Gilthorpe
Northumbria University, Newcastle upon Tyne, UK


81. Group Work Assessment: Improving the student experience through the Belbin Team-Role Self-Perception Inventory
Joanna MacDonnell
University of Brighton, Brighton, UK

82. Viva! the Viva
Julie Peirce-Jones
University of Central Lancashire, Preston, UK

83. A Cross-Institutional Approach to Assessment Redesign: Embedding Work-Integrated Assessments within the Curriculum
Elisabeth Dunne, Richard Osborne, Charlotte Anderson
University of Exeter, Exeter, UK

Lunch – 27th June 2013, 12.30 (Restaurant)

Parallel Session 10 – 27th June 2013, 13.30

84. Assessing the Assessment Process: Dealing with Grading Inconsistency in the Light of Institutional Academic Performance
Dawid Wosik
Higher Colleges of Technology, Fujairah, United Arab Emirates
Location: Proceed 1. Chair: Rachel Forsyth

85. Student perceptions of the value of Turnitin as a learning tool
Carol Bailey, Rachel Challen
University of Wolverhampton, Wolverhampton, UK
Location: Proceed 2. Chair: Lenore Adie

86. Formative thresholded assessment: Evaluation of a faculty-wide change in assessment practice
Sally Jordan, Janet Haresnape
The Open University, Milton Keynes, UK
Location: Propel 1. Chair: Rick Glofcheski

87. Dialogic feedback and potentialities for students' sense making
Anna Therese Steen-Utheim
BI Norwegian Business School, Oslo, Norway
Location: Propel 2. Chair: Harish Ravat

Keynote & Poster Award – 27th June 2013, 14.10 (Accelerate Suite)

88. KEYNOTE: Plato versus AHELO: The nature and role of the spoken word in assessing and promoting learning
Gordon Joughin
The University of Queensland, Brisbane, Queensland, Australia

Refreshments & Close – 27th June 2013, 15.15 (Restaurant)



Conference Themes

Academic writing development

Assessment Feedback - What Can We Learn from Psychology Research
John Kleeman*
Day 2: Parallel Session 9 - 27th June 2013 10.35

Developing assessment literacy: students' experiences of working with exemplars to improve their approaches to assignment writing
Kay Sambell*, Catherine Montgomery, Linda Graham
Day 1: Parallel Session 3 - 26th June 2013 15.00

Working together: can collaboration between academic writing/subject specialists improve students' performance in written assessments?
Carol Bailey*
Day 2: Parallel Session 8 - 27th June 2013 09.50

Assessment for learning

A Cross-Institutional Approach to Assessment Redesign: Embedding Work-Integrated Assessments within the Curriculum
Elisabeth Dunne, Richard Osborne*, Charlotte Anderson
Day 2: Poster Session 5 - Assessment for Learning - 27th June 2013 11.30

A strategy to familiarise undergraduate students with assessment methods and standards and to provide informal feedback on progress: experience from a final year pharmacology course
David Bell*, Malcolm Campbell
Day 2: Poster Session 5 - Assessment for Learning - 27th June 2013 11.30

An Authentic Assessment Proposal: Current Events as Reported in the News Media
Rick Glofcheski*
Day 2: Parallel Session 8 - 27th June 2013 09.50

An evaluation of the effectiveness of audio feedback, and of the language used, in comparison with written feedback
Charlotte Chalmers, Janis MacCallum*
Day 2: Parallel Session 9 - 27th June 2013 10.35

Assessment for Learning
Liz McDowell*, Kay Sambell, Catherine Montgomery
Day 1: Pre-conference Master Classes - 26th June 2013 09.30

Assessment for learning at institutions of higher education: a study of practices among academicians
Mohamad Sahari Nordin, Ainol Zubairi*, Mohd Burhan Ibrahim, Nik Suryani Nik Abd Rahman, Zainurin Abdul Rahman, Joharry Othman, Tunku Badariah Tunku Ahmad, Zainab Mohd Nor
Day 2: Parallel Session 7 - 27th June 2013 09.15



Collaborations and Celebrations: the highs and lows of a decade of working with collaborative assessment. Veronica Brock* Day 2: Parallel Session 9 - 27th June 2013 10.35
Dialogic feedback and potentialities for students' sense making Anna Therese Steen-Utheim* Day 2: Parallel Session 10 - 27th June 2013 13.30
Frameworks and disciplines: mapping and exploring assessment principles Erica Morris*, Patrick Baughan Day 1: Parallel Session 2 - 26th June 2013 12.00
Group Work Assessment: Improving the student experience through the Belbin Team-Role Self-Perception Inventory Joanna MacDonnell* Day 2: Poster Session 5 - Assessment for Learning - 27th June 2013 11.30
International students’ perceptions of formative peer assessment in the British university: a case study Meng Fan*, David Leat, Sue Robson Day 1: Parallel Session 6 - 26th June 2013 16.50
IT Office Ergonomics: A Strategy to Assess Doctoral Students in the ASEAN Environment Wajee Chookittikul*, Siripong Teopipithporn, Peter Maher, Pachara Payao Day 2: Poster Session 5 - Assessment for Learning - 27th June 2013 11.30
KEYNOTE Plato versus AHELO: The nature and role of the spoken word in assessing and promoting learning. Gordon Joughin* Day 2: Keynote & Poster Award - 27th June 2013 14.10
Making the most of Masters Level assessment Sally Brown*, Phil Race, Liz McDowell Day 1: Parallel Session 1 - 26th June 2013 11.20
MASTERCLASS: Using oral assessment Gordon Joughin* Day 1: Pre-conference Master Classes - 26th June 2013 09.30
Promoting competency and confidence through assessment for learning John Stephens, Jill Gilthorpe* Day 2: Poster Session 5 - Assessment for Learning - 27th June 2013 11.30
Releasing creativity in assessment Alison Bettley*, Ben Oakley, Freda Wolfenden Day 1: Parallel Session 3 - 26th June 2013 15.00
Student Assessment in Neoliberal Higher Education Context: Rationalities, Practices, Subjects (Cross-cultural Study of the University of Glasgow and Tallinn University) Rille Raaper* Day 1: Parallel Session 5 - 26th June 2013 16.20
Should we stop calling it assessment? What influences students' perceptions and conceptions of assessment and AfL? Donna Hurford* Day 1: Parallel Session 5 - 26th June 2013 16.20


The policy paradox: the risk of unwelcome consequences of well-intended assessment policy Clair Hughes*, Simon Barrie, Geoffrey Crisp, Anne Bennison Day 2: Poster Session 5 - Assessment for Learning - 27th June 2013 11.30
Using Assessment Evidence to Improve Student Learning: Can It Be Done? Natasha Jankowski* Day 1: Parallel Session 4 - 26th June 2013 15.40
Using the Assessment Lifecycle to enhance assessment practice Rachel Forsyth*, Rod Cullen, Neil Ringan, Mark Stubbs Day 2: Poster Session 5 - Assessment for Learning - 27th June 2013 11.30
Using Patchwork Assessment to enhance student engagement, skills and understanding Caroline Marcangelo* Day 2: Parallel Session 7 - 27th June 2013 09.15
Viva! the Viva Julie Peirce-Jones* Day 2: Poster Session 5 - Assessment for Learning - 27th June 2013 11.30

Assessment technologies
Assessing learning gains in project management tuition/training. Steven Nijhuis* Day 1: Parallel Session 6 - 26th June 2013 16.50
‘Designing a Student Progress Dashboard to promote student self-regulation and support' Julie Vuolo* Day 1: Parallel Session 3 - 26th June 2013 15.00
How Educational Practices Change the Evaluation and Assessment: Making Learning Meaningful through ePortfolios Ida Asner* Day 1: Parallel Session 6 - 26th June 2013 16.50
Immediate feedback by smart phones Nina Helene Ronæs*, Anna Steen-Utheim, Inger Carin Grøndal Day 2: Parallel Session 9 - 27th June 2013 10.35
Master Class: Producing high quality computer-marked assessment Sally Jordan*, Tim Hunt Day 1: Pre-conference Master Classes - 26th June 2013 09.30
Offering carrots or wielding sticks: the voluntary or mandatory use of eAssessment and eFeedback to improve learning and teaching in times of change in HE Nick Allsopp*, Harish Ravat Day 1: Parallel Session 1 - 26th June 2013 11.20
Student Assessment Using Digital Story Telling Anita Peleg*, Peter Maple Day 1: Parallel Session 5 - 26th June 2013 16.20



Student perceptions of the value of Turnitin as a learning tool. Carol Bailey*, Rachel Challen Day 2: Parallel Session 10 - 27th June 2013 13.30
Student response system used for motivation and learning outcome Cecilie Asting*, Anna Steen-Utheim, Inger Carin Grøndal Day 2: Parallel Session 8 - 27th June 2013 09.50
The influence of research and practice on a set of open source eAssessment tools Tim Hunt* Day 2: Poster Session 2 - Assessment Technologies - 27th June 2013 11.30
The value of technology enhanced assessment – sharing the findings of a JISC funded project in evaluating assessment diaries and GradeMark Alice Lau*, Karen Fitzgibbon Day 1: Parallel Session 4 - 26th June 2013 15.40
Upskilling students in lectures using student polling software Sam Clarke*, Trish Murray Day 2: Parallel Session 7 - 27th June 2013 09.15
Using technologies to engage students in assessment and feedback: a case study from Biomedical Sciences John Dermo*, James Boyne Day 1: Parallel Session 2 - 26th June 2013 12.00
Video Based Assessment within Higher Education: Developing Tactical Knowledge on a Sports Coaching Module Martin Dixon*, Chris Lee Day 2: Poster Session 2 - Assessment Technologies - 27th June 2013 11.30

Engaging students in assessment & feedback
A Multifaceted Bioinstrumentation Assessment Approach in the Rehabilitation Sciences Ricardo Simeoni* Day 2: Poster Session 4 - Engaging Students in Assessment & Feedback - 27th June 2013 11.30
A strategy to help students to actively engage with feedback provided: sharing experience from the undergraduate medical student selected component (SSC) programme. David Bell*, Vivienne Crawford Day 2: Poster Session 4 - Engaging Students in Assessment & Feedback - 27th June 2013 11.30
Alternative approaches to the assessment of statistical reasoning Helen Harth*, Ian Jones Day 2: Poster Session 4 - Engaging Students in Assessment & Feedback - 27th June 2013 11.30



Analysis of student comments following participation in an assessed online collaborative activity based on contributions to a series of wiki pages. Janet Haresnape* Day 2: Parallel Session 7 - 27th June 2013 09.15
‘Assessment literacy: making the link between satisfaction and learning’ Margaret Price* Day 1: Welcome & Keynote - 26th June 2013 13.30
Cohort diversity - A case for cafeteria assessment mechanisms? Sheena Bevitt* Day 2: Poster Session 4 - Engaging Students in Assessment & Feedback - 27th June 2013 11.30
Creative Development or High Performance? An Alternative Approach to Assessing Creative Process and Product in Higher Education Jan Watson* Day 1: Parallel Session 6 - 26th June 2013 16.50
Emotional Responses: Feedback Without Tears Mike McCormack* Day 2: Parallel Session 8 - 27th June 2013 09.50
Exploring formative assessment with international students Caroline Burns, Martin Foo* Day 1: Parallel Session 3 - 26th June 2013 15.00
Facilitating Transitional Learning: An Empirical Investigation into Undergraduate Students’ Perceptions of the Impact of Formative Assessment at Level 4 before entering Level 5 Nikki Woods* Day 2: Poster Session 4 - Engaging Students in Assessment & Feedback - 27th June 2013 11.30
Fearful Asymmetries of Assessment. Paul Sutton* Day 1: Parallel Session 1 - 26th June 2013 11.20
Formative thresholded assessment: Evaluation of a faculty-wide change in assessment practice. Sally Jordan*, Janet Haresnape Day 2: Parallel Session 10 - 27th June 2013 13.30
Horizontal and Vertical Models: The Instant Feedback to Help Students Excel Wajee Chookittikul*, Peter Maher, Sukit Vangtan Day 2: Poster Session 4 - Engaging Students in Assessment & Feedback - 27th June 2013 11.30
'I can't hand it in yet, it's not perfect': Mature students' experience of assessment and contested identities. Amanda Chapman* Day 1: Parallel Session 2 - 26th June 2013 12.00
Introducing Assessment & Feedback: A Framework of Engagement, Empowerment and Inclusion Louise O'Boyle* Day 1: Parallel Session 2 - 26th June 2013 12.00


Learning-oriented assessment task design David Carless* Day 1: Parallel Session 1 - 26th June 2013 11.20
Lost in translation: what students want from feedback and what lecturers mean by "good" Phil Long* Day 1: Parallel Session 4 - 26th June 2013 15.40
Peer assessment without assessment criteria Ian Jones*, Lara Alcock, David Sirl Day 2: Poster Session 4 - Engaging Students in Assessment & Feedback - 27th June 2013 11.30
Perceptions of fairness and consistency regarding assessment and feedback in trainees experiencing a difficult or failed professional placement. Mark Carver* Day 2: Parallel Session 8 - 27th June 2013 09.50
Reflections on feedback - feedback on reflection Rita Headington* Day 2: Parallel Session 9 - 27th June 2013 10.35
Student engagement in formative assessment Lynda Cook*, Diane Butler, Sally Jordan Day 2: Poster Session 4 - Engaging Students in Assessment & Feedback - 27th June 2013 11.30
Strategies for Formative Oral Assessment of EFL through Storytelling. Ana Fernandez-Caparros Turina*, Rafael Alejo Gonzalez, Ana Maria Piquer Piriz Day 2: Poster Session 4 - Engaging Students in Assessment & Feedback - 27th June 2013 11.30
Understanding student learning from feedback Stuart Hepplestone*, Gladson Chikwa Day 2: Parallel Session 7 - 27th June 2013 09.15

Marking and academic standards
Assessing the Assessment Process: Dealing with Grading Inconsistency in the Light of Institutional Academic Performance Dawid Wosik* Day 2: Parallel Session 10 - 27th June 2013 13.30
Creating and Marking Portfolios Effectively Fiona Handley*, Adam Kelly Day 2: Poster Session 1 - Marking & Academic Standards - 27th June 2013 11.30
Decision-making for assessment: explorations of everyday practice Margaret Bearman, David Boud, Phillip Dawson, Gordon Joughin* Day 1: Parallel Session 2 - 26th June 2013 12.00
Demystifying marking and assessment processes: where's the mystery? Fiona Meddings* Day 1: Parallel Session 3 - 26th June 2013 15.00


Examining the examiners: investigating the understanding and use of academic standards in assessment Sue Bloxham*, Margaret Price, Jane Hudson, Birgit den Outer Day 1: Parallel Session 3 - 26th June 2013 15.00
Getting the Devil off our back: Reflections on Table Marking in Psychology Meesha Warmington, Cecilia Lowe* Day 2: Poster Session 1 - Marking & Academic Standards - 27th June 2013 11.30
Generating credible evidence of academic outcomes and standards: perspectives of academics in Australian universities Clair Hughes*, Simon Barrie, Geoffrey Crisp, Anne Bennison Day 1: Parallel Session 1 - 26th June 2013 11.20
How university tutors learn to grade student coursework: professional learning as interplay between public knowledge and practical wisdom Pete Boyd*, Sue Bloxham Day 2: Poster Session 1 - Marking & Academic Standards - 27th June 2013 11.30
Investigating moderation practices in a Faculty of Education Lenore Adie*, Margaret Lloyd, Denise Beutel Day 2: Poster Session 1 - Marking & Academic Standards - 27th June 2013 11.30
Marking as a way of becoming an academic Rachel Sales* Day 2: Poster Session 1 - Marking & Academic Standards - 27th June 2013 11.30
How do assessors mark? What do they see in written work, and what do they do with it? Calum Delaney* Day 1: Parallel Session 4 - 26th June 2013 15.40
Students' Perceptions about Assessment and Assessment Methods: A Case Study Elizabeth Ruiz Esparza* Day 2: Parallel Session 8 - 27th June 2013 09.50
The Introduction of Grade Marking at Southampton Solent University Fiona Handley*, Ann Read Day 2: Parallel Session 9 - 27th June 2013 10.35
University Standards Descriptors: can you please all of the people even some of the time? Rachel Forsyth* Day 2: Parallel Session 7 - 27th June 2013 09.15
What do academic staff think about assessment and how can it help inform policy making and approaches to professional development? Sarah Maguire*, Lin Norton, Bill Norton Day 1: Parallel Session 6 - 26th June 2013 16.50
You show me yours... comparative evaluations of assessment practice and standards. Claire Gray*, Julie Swain, Rebecca Turner Day 1: Parallel Session 5 - 26th June 2013 16.20


Transitions in assessment
Assessing the change or changing the assessed? Bridget Hanna* Day 1: Parallel Session 4 - 26th June 2013 15.40
Developing Institutional Assessment Strategies in a Performative Environment Sue Mathieson* Day 1: Parallel Session 2 - 26th June 2013 12.00
"I've never done one of these before." A comparison of the assessment 'diet' at A level and the first year at university. Simon Child*, Frances Wilson, Irenka Suto Day 1: Parallel Session 4 - 26th June 2013 15.40
Learning outcomes - balancing pre-defined standards and the benefits of unexpected learning? Anton Havnes*, Tine Sophie Prøitz Day 1: Parallel Session 5 - 26th June 2013 16.20
Learning from practice - developing an overview of lecturers' learning needs in terms of assessment Marion Palmer*, Jen Harvey Day 1: Parallel Session 1 - 26th June 2013 11.20
Monitoring the attainment of graduate attributes at individual and cohort levels Kerry Shephard*, Brent Lovelock, John Harraway, Liz Slooten, Sheila Skeaff, Mick Strack, Tim Jowett, Mary Furnari Day 1: Parallel Session 6 - 26th June 2013 16.50
Researching assessment and exam practices in an online postgraduate program Andrew Chambers*, Ruth Laxton Day 2: Poster Session 3 - Transitions in Assessment - 27th June 2013 11.30
The impacts of assessment on professional development in educational postgraduate programmes - a tale of two universities Nhan Tran* Day 2: Poster Session 5 - Assessment for Learning - 27th June 2013 11.30



Day 1: Keynote
Assessment literacy: making the link between satisfaction and learning
Professor Margaret Price, Oxford Brookes University, Oxford, United Kingdom
Assessment and feedback remain a source of dissatisfaction for students as well as being resource-hungry for staff. Initiatives focused on rules, standardisation and ever-increasing provision of information seem to have made little difference in alleviating the problems. Consequently, we need to examine more deeply how assessment and learning work in order to find an effective, sustainable and satisfying solution. The power of assessment and feedback within the learning process has been recognised for many years, and yet the paradigms that currently frame assessment leave students in a passive role and still largely focus on accreditation. This situation needs to be challenged through the development of assessment literacy in both staff and students, which, in turn, will make new attitudes and approaches to assessment and feedback possible. This presentation will discuss the nature of assessment literacy, how it might be developed (particularly among students), and how it has the potential to reshape our thinking about assessment and feedback as well as enhancing student learning.
Biography
Margaret was awarded a National Teaching Fellowship in June 2002 in recognition of her excellence in teaching and contribution to the development of learning, teaching and assessment in Higher Education, especially through curriculum development in interdisciplinary learning and interpersonal skills development. She is Professor in Learning and Assessment and leads the development of learning and teaching in the Business School through the development of strategy, sharing and enhancement of good practice and innovation in learning, teaching and assessment methods.
As Director of the Centre for Excellence in Teaching and Learning, ASKe (Assessment Standards Knowledge Exchange), she is working with a team of colleagues to build a learning community centred on assessment, to encourage innovation and foster evidence-based assessment practice within the HE sector. Her research interests focus on peer support for learning; criterion-referenced assessment; social constructivist approaches to sharing knowledge of assessment standards with students; and the effectiveness of sharing knowledge of standards within marking teams.



Day 1: Abstracts
1. Master Class: Assessment for Learning Liz McDowell1, Kay Sambell2, Catherine Montgomery3 1 Independent Higher Education Consultant, Newcastle upon Tyne, UK, 2 Northumbria University, Newcastle upon Tyne, UK, 3University of Hull, Hull, UK
Assessment for Learning (AfL) is widely discussed and promoted as a positive and productive approach to assessment, but it can be difficult to put interesting ideas, stemming from AfL, into practice. One of the Centres for Excellence (CETL) funded by the English HE Funding Council focussed on AfL and developed a range of practical approaches to putting AfL principles into practice. These form the basis for the Master Class. In this Master Class, participants will identify problems in assessment and learning in their own practice, and then match them up with AfL principles. We will then move on to developing practical ways in which AfL could be used in the specific context, drawing on the resources of the CETL AfL, including the recent book, ‘Assessment for Learning in Higher Education', authored by Kay Sambell, Liz McDowell and Catherine Montgomery, who are the leaders of this Master Class.
2. Master Class: Using oral assessment Gordon Joughin The University of Queensland, Brisbane, Queensland, Australia
‘Oral assessment' simply means assessment in which students' understanding and capabilities are expressed or conveyed by speech instead of writing. The assessment may be purely oral or include oral and other components. Oral assessment takes many forms: undergraduate and doctoral vivas, class presentations, OSCEs, moots, design juries, debates ... the list is almost unlimited. This Master Class will explore the nature of oral assessment in its many forms, what is best assessed orally, how to use oral processes to support student learning, and how to ensure that it is used in ways that are valid, reliable, fair, affordable, and acceptable.
In a highly interactive session we will be generating practical ideas for using oral assessment to its greatest effect, learning with and from each other within a framework informed by the facilitator's research on students' experience of different forms of oral assessment.
Who is the masterclass aimed at? This masterclass will be of most benefit to colleagues who are currently using any form of oral assessment, who are finding it particularly satisfying and/or challenging, who are wishing to explore its dynamics in order to improve its effectiveness, and who are seeking to learn with and from colleagues with a similar commitment to oral assessment and student learning.
What should you get out of attending? An enriched understanding of how oral assessment works to improve learning and a collection of practical ideas for designing/reviewing/improving your current practice with respect to its validity, impact on learning, reliability, fairness, affordability, and acceptability.
3. Master Class: Producing high quality computer-marked assessment Sally Jordan, Tim Hunt Open University, Milton Keynes, UK
Online computer-marked assessment presents an opportunity to assess and provide personalised and immediate feedback to large classes, with cost savings and greater consistency than human marking. It can motivate and engage
students. However, concerns are frequently expressed as to the validity and authenticity of assessment of this type, and as to the cost and time required to author high quality e-assessment items. This master class will illustrate simple ways in which the quality of computer-marked assessment can be improved, using a range of strategies from innovative question types (including computer-marked short-answer questions with tailored feedback) to careful embedding of the questions within a module's assessment strategy and the monitoring of student responses so as to improve the feedback provided.
Who should attend: Anyone (from any academic discipline) who is interested in introducing online assessment, or in improving the quality of their practice. Note: although we will demonstrate question authoring in Moodle, the principles discussed will be platform-independent. No technical expertise is required but please bring an open mind! We hope that this Master Class will provide attendees with experience of a range of question types and an opportunity to discuss the benefits and limitations of assessment of this type.
4. Generating credible evidence of academic outcomes and standards: perspectives of academics in Australian universities Clair Hughes1, Simon Barrie2, Geoffrey Crisp3, Anne Bennison1 1 The University of Queensland, Brisbane, Queensland, Australia, 2The University of Sydney, Sydney, New South Wales, Australia, 3RMIT University, Melbourne, Victoria, Australia
The effectiveness of initiatives such as peer review or external examining in confirming, challenging or benchmarking academic standards is largely dependent on the capacity of the assessment tasks used to generate evidence of specified outcomes. This paper reports an investigation into Australian academics' perspectives of assessment tasks which, in their experience, provided the most credible evidence of standards.
The investigation was a major component of a national project, Assessing and Assuring Graduate Learning Outcomes (AAGLO), which involved interviews with 48 academics who were active in seven disciplines in a comprehensive range of Australian universities. Key findings were that the types of tasks nominated generally (but not always) reflected the ‘signature' tasks of a discipline (Bond 2007) and that ‘traditional' outcomes - knowledge, critical thinking, written communication - were more likely to be addressed than the more problematic or ‘wicked' outcomes (Knight and Page 2007) increasingly common to recent statements of outcomes (LTAS 2009) - e.g. those related to ethical development or self and task management. The paper draws on specific examples (and 15 references) to elaborate on major findings and also briefly discusses unanticipated practices concerning examinations, the incorporation of technology, group tasks and the importance many academics attributed to the relationships established among the individual tasks that comprised a course assessment plan. The project identified many individual examples of effective practice but recommends greater attention to whole-of-program approaches to assessment
planning and collaborative processes for calibration and moderation of standards as pressing priorities for the credible assurance of standards.
5. Learning from practice - developing an overview of lecturers' learning needs in terms of assessment Marion Palmer1, Jen Harvey2 1 IADT, Dun Laoghaire, Co. Dublin, Ireland, 2DIT, Dublin, Ireland
Assessment is a key function of lecturers in Irish Institutes of Technology. Procedures for the assessment of learners are required to be fair and consistent (Qualifications (Education and Training) Act, 1999, p. 26). Over a period of two years, seminars were held to support staff in the practice of assessment in various Institutes of Technology. Seminar themes included assessment strategy, assessment and technology, critical incidents in assessment, feedback, key skills, exam preparation and streamlining assessment. This series of seminars was captured in a professional practice wiki for sharing with colleagues. Reflecting on the seminars provides an insight into practice that is unique and valuable. Each seminar developed or refined a new tool/activity to support lecturer practice in assessment. Activities ranged from alignment of learning outcomes to teaching, learning and assessment strategies through an exemplar module, to identification and management of lecturer and student workload in a sample assessment. Of particular interest are the test that challenges preconceptions of assessment, the sample module that requires both alignment and diverse assessment methods, and the variety of feedback approaches that lecturers can adopt in classes or online. This practice exchange reports on these seminars and the practical activities/tools developed. These seminar activities/tools enable lecturers to examine their assessment practices. The practice exchange session will be a chance to experience activities and for delegates to provide feedback on the value of these activities.
Key issues of assessment in practice in Ireland are presented.
6. Offering carrots or wielding sticks: the voluntary or mandatory use of eAssessment and eFeedback to improve learning and teaching in times of change in HE Nick Allsopp, Harish Ravat De Montfort University, Leicester, UK
The current changes in higher education are well documented and include changes in funding, governmental policy changes to increase the marketisation of the sector and potential changes in student behaviour based upon a consumerist model. These bring with them increased pressure to recruit and retain students at a time of economic austerity, which in turn affects pedagogy. The HEA strategic plan 2012-16 foregrounds inspirational and effective practice, and both HEFCE and JISC see technology-enhanced learning as a key driver of change. Nicol and Milligan (2006) and Broadfoot (2007) support this view, and Dermo (2009) reports positively on student views of eAssessment and eFeedback, whilst McNeil et al. (2011) call for integrated systems to support them. De Montfort University has over 10 years of experience of using eAssessment, particularly for summative assessment; this includes challenging and changing ‘traditional' modes of assessment to take account of new ways of thinking that
have emerged alongside the technology. Despite the success of those in the vanguard, there is still considerable lethargy towards the more widespread adoption of these practices, both locally and nationally. This may be due to IT systems that are unable to deliver appropriate solutions, or agreement that eAssessment and eFeedback may not be directly applicable to a number of disciplines. However, with the continual growth of IT systems available and an increasing range of applications coming on the market, this session will challenge delegates to make use of eAssessment in all its forms as a way to improve student learning.
7. Fearful Asymmetries of Assessment. Paul Sutton University College Plymouth: St Mark & St John, Plymouth, UK
For some learners fear is a barrier to engagement with assessment. Building on earlier work (Sutton 2011, 2012), in this research paper I use an Academic Literacies approach to analyse how and why fear emanates from asymmetries of power within the complex social relations and practices of assessment. My analysis has three interlinked dimensions. Firstly, at the micro level of individual lived experience, fear is engendered by a perceived mismatch between the difficulty of the assessment task and learner ability. This often manifests as feelings of self-doubt, insecurity and uncertainty. Such fear, I contend, is a patterned response to both the meso and macro dimensions of assessment. Secondly, meso-level fear emerges from the difficulties learners experience with the peculiarities of assessment culture in higher education with its particular "disciplinary practices" (Foucault 1979), conventions and rules. Thirdly, at the macro level, fear is a product of the subordinate position occupied by many learners in an educational structure oriented to reproducing the interests of the dominant cosmopolitan elites of the networked society (Castells 2000).
How can we begin to address the barriers to engagement with assessment constituted by these fearful asymmetries? One strategy is to help learners to educate their fear through cultivating intellectual discipline (Freire 2005), and in addition, to dialectically conjoin fear with curiosity (Bachelard 1994). Then at least we may be able to offer learners less dehumanized and alienated subject positions from which to begin a critical and hopeful engagement with the challenges of assessment.
8. Learning-oriented assessment task design David Carless University of Hong Kong, Hong Kong, Hong Kong
The aim of this presentation is to outline a framework for learning-oriented assessment task design and illustrate some of its implications through reporting research data from case studies of award-winning teachers. A number of principles for effective task design have been noted in the literature: constructive alignment between aims, teaching methods, content and assessment (Biggs, 1999); engaging students in assignments which require them to use higher order cognitive processes (Meyers & Nulty, 2009); and spreading student effort evenly across the module (Gibbs, 2006).



To develop the proposed framework further, selected ‘innovative' assessment tasks will be described and analysed drawing on on-going case study research. For example, a teacher of History uses a weekly ‘One Sentence Response' activity (counting for 15% assessment weighting) to stimulate students to provide a regular personal written response to issues arising in the course. A teacher of Tort Law uses a reflective media diary where students identify and analyse potential tort law issues found in local press items. Teacher, student and classroom observational data will be reported to draw out participant responses to the assessment tasks. The paper concludes by relating findings from the cases on task design to key ideas in the relevant literature. I propose that assessment tasks might encourage effective learning practices when they:

- develop participation in the disciplinary community;
- promote dialogue with peers, self and tutor; and
- stimulate student engagement for sustained periods of a module.

9. Making the most of Masters Level assessment Sally Brown1, Phil Race2, Liz McDowell2 1 Leeds Metropolitan University, Leeds, UK, 2Independent HE Consultant, Newcastle, UK
The National Teaching Fellowship Assimilate project explored innovations and good practice in the under-researched area of Masters level assessment. A compendium of good practice was developed, demonstrating how assessment can integrate formative feedback, use technologies effectively, foster employability and promote skills development. In this session, co-presenters Sally Brown, Phil Race and Liz McDowell will provide opportunities to discuss:

- what kinds of alternatives exist to traditional Masters dissertations;
- how authentic assessment at Masters level can help students develop skills that will be useful to them for employment and professional development.

10. Decision-making for assessment: explorations of everyday practice Margaret Bearman1, David Boud2, Phillip Dawson1, Gordon Joughin3 1 Monash University, Melbourne, Australia, 2University of Technology, Sydney, NSW, Australia, 3University of Queensland, Brisbane, Australia
Most discussion about assessment is normative. Advice for teachers takes the form of what should be done for assessment to be effective. Courses and publications reinforce this advice. There is no shortage of exhortations about good practice. But what do university teachers actually do when faced with designing assessments for their own course? Do they follow principles articulated for good assessment design? Do they even know what they are? The study on which this paper is based examines how academics really make decisions about assessment. It is based on interviews with academics from different disciplines as they go about deciding what and how to assess in their own courses. This paper presents a preliminary analysis of data from an OLT-funded project. It involved in-depth interviews of academics across four Australian universities. It is framed in terms of the three major occasions of assessment decision-making: at
the overall level of new course design, at the planning and replanning of course modules, and at the level of the micro-decisions of assessment tactics and provision of information to students. It addresses the question: what do university teachers take into account when creating assessment, and what influences actually impinge on them? It is hoped that a better understanding of de facto assessment decision-making will allow for a more fruitful confluence between everyday impetuses and what the research literature might lead us to do.
11. Developing Institutional Assessment Strategies in a Performative Environment Sue Mathieson Northumbria University, Newcastle upon Tyne, UK
This session focuses on changes in a university’s approach to improving assessment in the context of shifts nationally to a more competitive environment shaped by league tables and the NSS, where previously funds supported Centres for Excellence and grants for small scale innovations in LTA. It explores the possibilities of a developmental approach to enhancing assessment, supporting scholarship and innovation, in an environment where institutional strategies focus on the NSS and other KPIs. It looks at how an institutional Assessment and Feedback task group, comprising academics and middle level managers, approached their task in this context. It highlights the impact of the NSS in refocusing attention on improving underperforming programmes, rather than supporting excellence and innovation at the forefront, asking: will this change in the policy environment lead to a more uniform student experience of assessment standards, with less differentiation between the best and worst performing programmes? Through exploring the impact of these changes on assessment strategies in one university, this session will open space to discuss the impact of these changes on other universities' strategies for improving assessment.
It will explore similarities and differences in the ways universities are mediating this broader policy context, and the scope for agency in shaping alternative approaches to improving assessment, in particular by negotiating the terms of engagement with KPIs and league tables. It will explore the scope for a scholarly and developmental approach to improving assessment, and whether such an approach can also achieve the objectives of improving performance against KPIs and league tables. 12. Using technologies to engage students in assessment and feedback: a case study from Biomedical Sciences John Dermo, James Boyne University of Bradford, Bradford, UK This paper reports on three years' work into innovative practice in assessment, looking at the design, implementation and evaluation of a variety of technology-enhanced assessment tasks for students on a core level three Biomedical Science module. An authentic e-assessment coursework task was developed, integrating an online DNA sequence analysis tool (BLAST), routinely used by NHS and research professionals, with online multiple choice questions. This task combines
understanding of complex module learning outcomes with real-world authentic skills and challenges the oft-heard accusation that MCQs can lack validity and authenticity for higher-level students. In addition, online formative micro-assessments have been designed to encourage student engagement with learning activities. These micro-assessments, initially developed for desktops, have now been developed to generate two mobile formative assessment resources accessible to students via a range of mobile devices. The first mobile assessment tool has been designed to promote active learning of key themes and is aimed at students who have a short amount of free time, for example between lectures or when using public transport. The second resource utilises more advanced question types to promote a deeper understanding: feedback links to applied elements of the module, with a view to promoting active learning. In this case study, questionnaire survey results, student module feedback and input from student focus groups are used to evaluate these different kinds of technology-enhanced assessment. The paper also outlines the process of creating and administering such assessments, offering recommendations on how best to design and deliver these innovative assessment methods. 13. 'I can't hand it in yet, it's not perfect': Mature students' experience of assessment and contested identities. Amanda Chapman University of Cumbria, Lancaster, UK This research draws on the experience of a group of mature students studied during their first year at university. Research (Baxter & Britton, 2001; Cavallaro Johnson & Watson, 2004; Jones, 2010) informs us that the process of 'becoming' a mature student is fraught with risk and change. The assimilation into the culture of university life can be contentious and complex.
The first assessment for all students can be seen as a rite of passage (Krause, 2001), so for mature students who may have had a substantial gap in their education, this can be a critical moment (Morgan & Nutt, 2006) in their progression through the transition year. Negotiation through the culture and language of academia can lead to misunderstanding and self-doubt (Williams, 2005), and the process of assessment can be an emotional journey for some students (Crossman, 2007). In this paper the students describe their experiences of essay writing, working in groups with younger students and the challenges they felt to their identity. Facing the judgement of their peer group and the academic staff was a particular fear for most of the students, as was the reluctance of some students to 'let go' of their assignment. The paper will conclude with a discussion of the importance of the assessment process in the (re)negotiation of identity. 14. Introducing Assessment & Feedback: A Framework of Engagement, Empowerment and Inclusion Louise O'Boyle University of Ulster, Belfast, UK Chickering and Gamson (1987) outlined the importance of the participation and interaction of both the teacher and student within the learning experience.
Neither role is independent of the other. In practice, identifying students' needs and ways of learning, initially and throughout the learning experience, informs the best ways of communicating and relating to students and encourages their participation. The role played by assessment and feedback strategies and practices is particularly crucial to the success of first-year students' learning experience. Often there is a need to change student perspectives on and behaviour towards assessment formed in previous educational experiences, dispel misunderstandings and clarify the learning environment in higher education. This paper will discuss the impact of such a structured framework of engagement with first-year tertiary students on their attitudes towards assessment and feedback and its role within their learning. It has evolved as a result of sector and institutional drivers, theoretical research and personal practices, and aims to involve students both academically and socially in their learning. The framework emphasises the role of assessment and feedback, encouraging students to recognise it not as an end point but rather a 'pit stop' on their learning journey. Most importantly, it builds student motivation and empowerment so that they may become active participants in their learning. 15. Frameworks and disciplines: mapping and exploring assessment principles Erica Morris1, Patrick Baughan2 1 Higher Education Academy, York, UK, 2City University London, London, UK Over the last decade, there has been a growing focus on the pivotal role of assessment in enhancing teaching and learning in higher education (e.g. Bryan and Clegg, 2006; Bloxham and Boyd, 2007; The Higher Education Academy, 2012). Accordingly, a number of frameworks, models and principles for enhancing assessment practice have been proposed and considered, informed by research in the fields of assessment and educational development (e.g.
Gibbs and Simpson, 2004-5; Carless, 2009; Kearney, 2012; Sambell, McDowell and Montgomery, 2013). Concurrently, there have been a range of educational initiatives, developments and projects on enhancing assessment in UK higher education, through which have emerged resources, such as case studies designed to help 'bring to life' guidelines and inform changes in educators' assessment practice. This paper will map key models and principles for enhancing assessment practice in higher education, and critically explore how these might be translated or applied to discipline areas. Drawing on a conceptual framework for describing educational development practice (Amundsen and Wilson, 2012), the paper will also look at the interplay between generic issues in assessment practice and discipline- or subject-specific concerns or challenges. In essence, this paper will focus on the implications of assessment models and principles for changing academic practice in higher education. 16. 'Assessment literacy: making the link between satisfaction and learning' Margaret Price Oxford Brookes University, Oxford, UK See page 29.
17. Examining the examiners: investigating the understanding and use of academic standards in assessment Sue Bloxham1, Margaret Price2, Jane Hudson2, Birgit den Outer2 1 University of Cumbria, Lancaster, UK, 2Oxford Brookes University, Oxford, UK External examining in the UK is a key accountability process for assessment standards, but there are anxieties regarding its effectiveness. In addition, official investigations of the process have avoided the issue of what standards mean and how they are established, influenced and used by examiners. Research in the field of standards and grading indicates that significant calibration of individuals' assessment standards is lacking, and research on tutors' academic standards consistently emphasises the individualised, tacit, interpretive nature of standards, learnt informally through active participation in relevant communities and practices, where there is potential for difficulty in assuring a consensus on standards in practice. Within the context of this existing work, the study reported here aims to provide better information regarding examiners' use of standards and is relevant for exploring both external examining and the judgement of students' performance in general. The research has explored how aware examiners are of the provenance of the standards they use and the relative strength of different influences on their standards, including that of student work. Twenty-four experienced examiners from diverse universities, working in four contrasting disciplines, were recruited. A repertory grid exercise was used to reflect on borderline B/C examples of student work, followed by a semi-structured interview to explore what examiners perceive to have shaped their understanding of standards. The research has value for both its innovative focus and its methods. The results, and their implications for both external examining and grading in general, will be presented. 18.
Developing assessment literacy: students' experiences of working with exemplars to improve their approaches to assignment writing. Kay Sambell1, Catherine Montgomery2, Linda Graham1 1 Northumbria University, Newcastle upon Tyne, UK, 2University of Hull, Hull, UK Exemplars are often promoted as a means of developing students' assessment literacy (Price et al., 2012; Hendry, 2013). Although empirical research on students' perspectives on exemplars is growing (e.g. Hendry et al., 2009 & 2012; Handley & Williams, 2010), little is known about the ways in which learners actually engage with them. This paper focuses on research involving large groups (n = 127) of first-year students undertaking the same exemplars workshop while studying similar disciplinary material in two universities. Building on a previous study, which used participant observation to illuminate a small sample of students' reactions (Sambell, 2011), data were collected anonymously from all students at varying points during the workshop. A mixed methods approach to data collection was employed, in which students ranked the exemplars against marking criteria, generated feedback and maintained written reflective logs. Voting response systems were used to record each student's responses throughout and to establish whether students' understandings of feedback and assessment had altered as a result of the workshop.
Broad themes to emerge from the analysis will be reported, with statistical data being supplemented by illustrative excerpts from the student logs. Findings show evidence of students moving conceptually from focusing on surface features of essay-writing (Wingate, 2006) to an appreciation of the requirement to engage more deeply with subject-related ideas and concepts (Harrington, 2006). However, the capacity to drill down and analyse individual responses also allows us to illuminate the complex nature of the challenges that students experienced when developing assessment literacy, providing insights which can help to inform future practice. 19. 'Designing a Student Progress Dashboard to promote student self-regulation and support' Julie Vuolo University of Hertfordshire, Harpenden, UK The aim of this session will be to articulate the design principles behind the development of a student progress dashboard (SPD) and to seek the views of colleagues in the sector on the perceived benefits of its use in engaging students with the assessment and feedback process. The SPD has been developed to sit within our bespoke managed learning platform at the University of Hertfordshire. It pulls in student performance information from the electronic submission system and student engagement information from the relevant module sites. The data is then presented in a 'dashboard' which allows students to view their own performance and engagement data benchmarked against that of their (anonymised) peers. The SPD also has a teacher 'view' which allows academics to look at the performance of both individuals and groups of students, helping them identify where students may require additional support or guidance. The design of the SPD was informed by user focus groups and interviews. The teacher-facing version went live in October 2012, with the student view launching the following semester.
User feedback on the SPD will inform future developments, as will comments and ideas from sector colleagues, which will be very welcome in this participative session. Bibliography Gibbs, G. & Simpson, C. (2004) Conditions under which assessment supports students' learning. Learning and Teaching in Higher Education, 1, 8. Nicol, D. J. (2007) Assessment design for learner responsibility. http://www.reap.ac.uk 20. Exploring formative assessment with international students Caroline Burns, Martin Foo Northumbria University, Newcastle upon Tyne, UK This study is a continuation of "Formative Feedback First" (FFF), a collaborative action research intervention (Burns & Foo, 2012) used in a final-year module taken predominantly by international one-year "top up" students, new to the university and to the country. Formative feedback is provided on an 800-1000 word proposal, via an FFF matrix which has been specifically developed for this purpose. The matrix consists of indicative degree classifications across four criteria: knowledge and understanding; theory and practice recognition;
use of resources and references; and presentation, structure and language. Written feedback is returned via tutorial, as suggested by Nicol & Macfarlane-Dick (2006), thus facilitating dialogue in a low-risk setting prior to any summative feedback. FFF provides international students with the opportunity to develop the necessary skills and culturally adjust, i.e. to close any gap (Sadler, 1989) between their actual performance and a desired standard, before assessments in a high-risk setting. Students' initial and subsequent reflections are generated and captured via their use of Gibbs' (1988) reflective cycle. The extent to which this potential (feed-forward feedback) is being realised via students' actions "moving forward" and across other modules is explored via follow-up interviews held with students at the start of semester two (while they await their semester one results). Further reflection and analysis of themes established from the students' perspective provides the basis for improving further our support of academic writing, and realising some of the benefits suggested by Wingate, Andon & Cogo (2011). 21. Demystifying marking and assessment processes: where's the mystery? Fiona Meddings University of Bradford, Bradford, West Yorkshire, UK At a time when higher education, and the skills and transferable knowledge that it engenders, are becoming increasingly commodified, expectations are high for an education experience that will prepare graduates for the workplace. Prior to entering the world of work, students pass through the higher education system submitting coursework for assessment. This assessment informs decisions on the classification of degree to be awarded. Students, as education customers, pay hefty fees, but their introduction and subsequent increases may not herald a corresponding rise in standards, despite Browne (2010, p.4). Instead, degrees seem to be so commonplace that this may in fact dilute the validity of such an award (Füredi, 2009).
Where does the lecturer fit in all of this? A major part of their role is marking and assessing. Lecturers conduct a solitary exercise in reviewing student assessment submissions, making a judgement as to the value of the work submitted. It is the process of coming to this judgement that is of interest to this PhD study. Previous work exploring the processes of marking is mainly confined to school-level education (Crisp, 2008). My investigation focuses on how lecturers apply set standards and criteria when marking student assessment submissions. There are some checks in place to give a sense of parity, though the validity and reliability of the whole process have been questioned (Bloxham, 2009; Bloxham et al., 2011). Exploration of the marking process and of making a judgement on student assessment could illuminate marking practice in HE. 22. Releasing creativity in assessment Alison Bettley, Ben Oakley, Freda Wolfenden The Open University, Milton Keynes, UK The Open University has a well-established model of tuition and assessment to support distance learning, based on tutor-marked assignments that combine summative assessment with individual tuition. Key changes, common across higher education, have prompted a review of assessment practices. These
include: new research into student learning, the opportunities offered by digital technologies, changes in funding, and shifts in the characteristics of our student population. This paper reports on a major university-wide project which is re-evaluating the purpose of assessment and its relationship with tuition. It draws on both a review of relevant literature and research undertaken within the University, with findings that are relevant to all distance learning contexts. Key areas identified for action include the need to strengthen the relationship between assessment and tuition in order to foster dialogue with and between students, harnessing digital communication where appropriate, and embedding formative and informal assessment activity more consistently to promote students' forward learning. Such change necessitates holistic changes in practices, and this paper describes and analyses some of the strategies and challenges involved in bringing about such a cultural, strategic and attitudinal shift in a large institution with multiple stakeholders. We explore the balance between an enabling policy framework and granting autonomy to module or qualification teams so as to release creativity in the design and implementation of more effective assessment. Finally, we offer thoughts on the implications for assessment reform in higher education more broadly. 23. How do assessors mark? What do they see in written work, and what do they do with it? Calum Delaney Cardiff Metropolitan University, Cardiff, UK In spite of research into the more objective aspects of assessment, and attempts to improve the consistency of marking, how assessors arrive at their judgement of writing remains obscure. With some exceptions, there has been a focus on factors affecting the assessment product rather than the subjectively experienced process of assessment.
The aim of this paper will be to present research findings relating to the assessment of undergraduate writing in a higher education context. In particular, it will examine what assessors pay attention to when reading, and two possible processes they may employ to arrive at an evaluation of the work. Importantly, the choice of process may be partly determined by the characteristics of the writing. The data supporting the findings were obtained from semi-structured interviews conducted with university lecturers in healthcare subjects. The interviews and the data analysis were approached from within a hermeneutic phenomenological tradition. Tentative conclusions drawn from the study suggest that the process of assessment may show considerable variability depending on the nature of the work being assessed. Much of what assessors do might be interpreted as involving attempts at simplification, or attempts to follow the simplest course of action unless forced to do otherwise. This may be one means by which they manage the complexity of the assessment task. The findings support aspects of previous work, and contribute additional insights to an understanding of the assessment process.
24. "I've never done one of these before." A comparison of the assessment 'diet' at A level and in the first year at university. Simon Child, Frances Wilson, Irenka Suto Cambridge Assessment, Cambridge, Cambridgeshire, UK The UK Government has emphasised the need for A levels to meet the requirements of universities (Department for Education, 2010). However, lecturers have expressed concern that A level assessments encourage too much 'teaching to the test', and that their modular structure contributes to students' under-preparedness (Suto, 2012). Potential explanations for students' transitional difficulties are that university entails a wider range of assessment types (Crook & Park, 2004), less guidance (Ellis, 2008), fewer resit opportunities (Gill & Suto, 2011) and more diverse timings of assessment (Yorke, 2007). This study investigated these assessment-related transitional issues by comparing assessments from 16 universities with A level assessments for three subjects (biology, English literature and mathematics). A greater variety of assessment types was found at university compared to A level, although this varied by subject. However, the written guidance provided to students for university assessments was more detailed than at A level, perhaps in response to undergraduate students' unfamiliarity with a more varied assessment 'diet'. Unlike at A level, students were given only one resit opportunity in the majority of cases, with a cap on the potential mark that could be achieved. Generally, university students also had to cope with earlier summative assessment. These findings are discussed in relation to the potential future alignment of A level assessment structure with that of universities. While A level students may benefit from exposure to more varied assessment types, it would be a significant challenge to cater for the requirements of all university courses. 25.
The value of technology enhanced assessment – sharing the findings of a JISC-funded project evaluating assessment diaries and GradeMark Alice Lau, Karen Fitzgibbon University of Glamorgan, Pontypridd, UK Assessment and feedback practice in higher education has long been a major source of student dissatisfaction. While technologies are increasingly being used as tools to improve the assessment experience for students and staff, their use remains patchy. In particular, there often seems to be a temptation to justify the adoption of technology by its potential efficiency gains. This presentation will share findings from a JISC-funded project that evaluated the use of Assessment Diaries and GradeMark at one higher education institution from staff and student perspectives. It aims to highlight that the value of technology-enhanced assessment lies in a better learning and teaching experience for staff and students, of which efficiency gains are only a very small part. The Assessment Diary is essentially a simple list of module codes and titles, with dates for assessment submission and return of feedback. The diary uses an in-house web-based front end which is provided within Blackboard and is personalised,
giving students and tutors clear, easily accessible information about when assignments are to be submitted and returned. GradeMark is an online marking tool that is part of the Turnitin plagiarism software. It allows tutors to mark online and to create comment banks of reusable comments and audio feedback. By sharing the findings of the project, the presentation will highlight the need to focus on the principles behind any technology-enhanced learning rather than simply promoting efficiency gains. 26. Lost in translation: what students want from feedback and what lecturers mean by "good" Phil Long Anglia Ruskin University, Chelmsford, UK The assessment of students' work has long been recognised as a key part of the learning process (Scouller, 2000; Gibbs and Simpson, 2004; Gibbs and Dunbar-Goddet, 2007), and yet some research (Hounsell, 2008; Ferguson, 2011) suggests that, despite considerable advances in recent years in developing a greater variety of assessment forms, the contribution made by feedback on assessment to student learning remains a problematic area. In this paper I explore the concept of good feedback as defined by undergraduates and academic staff in a post-1992 British university, and how they construct their understanding of what good feedback looks like. My analysis of the data, collected via semi-structured interviews with a range of academic staff and a combination of first- and third-year undergraduate students, draws upon the themes of discourse, power, identity and emotions as a way of highlighting the contrasting influences on staff and students' construction of the role and form of effective feedback. The data presented suggest that staff and students construct the idea of feedback in fundamentally different ways, with staff tending to emphasise the dynamics of power, which is closely linked to their sense of identity, whilst students tended to place far greater emphasis on the emotional and personal aspects of feedback.
The paper concludes by proposing the adoption of a more dialogic approach to feedback as a way of overcoming the differences in approach and emphasis identified in the research. 27. Assessing the change or changing the assessed? Bridget Hanna1,2 1 Edinburgh Napier University, Edinburgh, UK, 2The Open University, Milton Keynes, UK Changes in the way we assess and are assessed as professionals are often assumed to be impact- and value-free outwith the intended outcomes of learning. This session will suggest that what is assessed can impact on professional practice and that changing professional practice can produce mediating effects on professional identity. The session will seek to support participants in reflecting on what they assess from the point of view of professional identity, and will be particularly suitable for those who assess our future professionals (nursing, accountancy, etc.) through the transition to professional regulation. I will then seek to turn around participants' understanding of the potential of assessment for identity construction in higher education and develop their understanding of the unintended side effects that changing assessments might engender within their own
profession. Participants will be asked to reflect on their own experience of being assessed as professionals in HE, and alternative educational paradigms will be presented to develop an experiential sense of being changed by assessment. This session is based on initial research findings into the change of regulatory body from the British Psychological Society (BPS) to the Health and Care Professions Council (HCPC) and the subsequent change of assessment for professional regulation. The findings will then lead into a wider discussion on how this research might transfer to other professions, and participants will begin to think about the wider implications for all regulated professions and for all of us assessed as professionals. 28. Using Assessment Evidence to Improve Student Learning: Can It Be Done? Natasha Jankowski1,2 1 University of Illinois Urbana-Champaign, Champaign, IL, USA, 2National Institute for Learning Outcomes Assessment, Champaign, IL, USA The idea of using evidence of student learning, collected or gathered through assessment processes and activities, to subsequently improve student learning and institutional effectiveness or performance is pervasive in postsecondary/higher education. Assessment for learning is presented as a counter-argument, or counter-approach, to undertaking assessment for purposes of external accountability or mandatory reporting. When done well, assessment for learning positions assessment and related activities within the purview of the faculty and creates space for reflection on the processes of teaching and learning. This paper examines the discourse around the use of assessment evidence and provides examples of using assessment evidence of learning at the individual course, program, and institution level. It explores how evidence of student learning may be used for learning, as opposed to accountability or mandatory reporting requirements.
The paper unpacks what is meant by 'assessment for learning', or using assessment evidence to improve student learning, through the presentation of three different assessment for learning frameworks. The author presents the argument that, under certain conceptions of how learning or change occurs in relation to assessment, it may be difficult or even impossible to undertake assessment for learning. The paper concludes that how assessment for learning is conceptualized, the institutional structures which support or hinder such an undertaking, and the discourse through which using evidence of student learning to improve is framed within an institution are of crucial importance to furthering assessment for learning. 29. You show me yours… comparative evaluations of assessment practice and standards. Claire Gray, Julie Swain, Rebecca Turner Plymouth University, Plymouth, UK The assessment practices of universities have been the focus of policymakers and researchers, with concerns regarding transparency and consistency across diverse HE settings (Browne, 2010; O'Donovan et al., 2004). They are recognised as complex, representing the interplay between institutional processes and the knowledge and experience of individual academics (Woolf & Cooper, 1999). There is additional complexity where the environment is a partnership between colleges and universities (Garrod & Macfarlane, 2007). These
arrangements often yield high student progression where students wish to complete an honours degree. The paucity of research on the comparative assessment performance of native and direct-entry students is a motivating factor in this research. Additionally, most research in the field of student transition focuses on social and cultural aspects rather than the pedagogy of transition. This transition has been recognised as challenging (Winter & Dismore, 2010), and Jacobs (2008) highlights differing assessment practices as having a considerable effect on ultimate student success when moving from college to university. This paper will present findings from an investigation into assessment in differing cognate areas between college and university counterparts using a think-aloud methodology (Ericsson & Simon, 1998). Think-aloud data were collected via a marking exercise which required participants to 'swap' assessment and marking guidance for similar assignments in equivalent level 5 modules, followed by discussion of the assessment diet within each programme. It examines the complex interplay between assessment practice and standards across the contrasting institutional and pedagogic environments navigated by students in the final stages of their HE learning journey. 30. Learning outcomes - balancing pre-defined standards and the benefits of unexpected learning? Anton Havnes1, Tine Sophie Prøitz2 1 Oslo and Akershus University College of Applied Sciences, Oslo, Norway, 2Nordic Institute for Studies in Innovation, Research and Education, Oslo, Norway Today's emphasis on understanding learning outcomes signals a change in focus, from defining education by what teachers should be teaching (the intent of the activity) to what students should end up knowing (the result of the activity). Such an outcomes-focused approach embraces the perspective of students and student learning as a priority.
However, the approach has been critiqued as promoting two conflicting aims: (1) attempts to clarify social demands for set, accountable learning targets as a kind of standard, and (2) efforts to encourage individual learning. These complexities in outcomes-based approaches are not always made clear, and the introduction of LOs is often surrounded by positive rhetoric about what can be achieved. The use of learning outcomes has been criticised as reductionist, technical and limited to behavioural terms, describing what learners should (or even will) be able to do at the end of the educational process. Its advocates claim that the outcomes model affords "access, flexibility and relevance" (Jessup 1995:36). Our argument starts with the basic assumption that what students should learn may not be assessable in a stringent manner. Furthermore, making what is assessable constitute what students should learn might not be in line with fundamental meanings of learning. Outcomes-oriented approaches in education often call for an alignment between learning goals and student learning. On the one hand, this may be a reasonable tool when working with assessment for learning; but how reasonable is it when working with assessment of learning? The multiple uses of indicators of learning outcomes raise several issues concerning the conflicting aims of outcomes-based education (Allan 1996).


In the paper we will elaborate on the history and background of LOs and their implementation in European higher education. Further, we will discuss the focus on pre-defined learning outcomes in terms of learning theories, for example Wenger's (1998) notion of identity formation, Wertsch's (1998) notions of mastery versus appropriation, Eisner's (2005) notion of a "trichotomy of outcomes" and Bateson's (1972) focus on levels of learning, and "blueprint" versus unexpected goals. How do these approaches to learning align with an outcomes-focused education? This research is part of an interdisciplinary Norwegian research project on higher education learning outcomes. 31. Student Assessment Using Digital Story Telling Anita Peleg, Peter Maple London South Bank University, London, UK Marketing for the 21st Century is an undergraduate module covering contemporary issues such as future forecasting, ethics and sustainability, postmodernism, consumer centricity and service-dominant logic. Three years ago we transformed the assessment to enable future marketeers to use multi-media effectively and think beyond the written word. Students are challenged to employ a digitally based medium using words, pictures and sound to present their ideas, academic arguments and analysis, without requiring video authoring and production skills. Photo Story 3 is a free piece of software available as a download. It allows users to create slides (words, diagrams and pictures) with a voice-over and a musical background. Little technical expertise is required and seminars facilitate the development of these assignments. While students are encouraged to develop and demonstrate a range of practical presentation skills, we emphasise the academic content, encouraging research, academic discussion and analysis. This ensures that students do not focus on the production of the presentation at the expense of demonstrating knowledge and academic rigour.
Positive feedback shows students find the process refreshing and fun, but equally challenging. In particular, some less motivated students have shown a greater willingness to explore and have risen to the challenge, producing excellent work. Consequently, we have extended this assessment tool to three additional modules. We will demonstrate how this digital story telling tool can encourage reflective and academic development, discussing its successes and challenges and sharing ideas for further and alternative implementation.



32. Student Assessment in Neoliberal Higher Education Context: Rationalities, Practices, Subjects (A Cross-cultural Study of the University of Glasgow and Tallinn University) Rille Raaper The University of Glasgow, Glasgow, UK The understanding of education as simply another market category has become widely dominant in policy and public discourses (Lynch, 2006). Furthermore, the traditional culture of academic work has been replaced with institutional stress on performativity, with an emphasis on strategic planning, performance indicators, quality assurance and audits (Olssen & Peters, 2005). Student assessment in this neoliberal context can be seen as a field of contradictions, surrounded by competing meanings, subjects and functions. For example, assessment directly involves teachers and students (Black & Wiliam, 2009), but assessment policies are often developed by institutional policy-makers and primarily from their viewpoint (Geven & Attard, 2012). Furthermore, employers are increasingly interested in assessment results in order to make decisions about students' employability (Boud & Falchikov, 2006). Therefore, student assessment has to fulfil several functions, such as supporting learning, certifying, providing data for quality assurance (Yorke, 2008), developing teaching and curricula (Suskie, 2009), but also disciplining students (Harman & McDowell, 2011). In this contradictory situation, assessment involves a clear element of power and control (Barnett, 2007). My study is based on a Foucauldian framework and applies discourse analysis to data originating from documents, interviews and focus groups. The research will benefit the field of assessment by addressing the questions: What are the dominant rationalities of student assessment in HE? How do power relationships relate to dominant rationalities? How are assessment practices constructed?
How do lecturers and students experience assessment, and themselves as assessors or assessed, in two university contexts? 33. Should we stop calling it assessment? What influences students' perceptions and conceptions of assessment and AfL? Donna Hurford University of Cumbria, Lancaster, UK The focus of the paper is on the initial analysis of interview data with newly qualified primary teachers which indicates changes in their perceptions and conceptions of assessment, and in particular AfL, whilst they were on a teacher education course in higher education. The data provides an insight into the student teachers' perceptions and conceptions of assessment and AfL at different stages: pre-course, during the course and at the end of the course. At the pre-course stage their comments tend to focus on assessment as summative, with little understanding of AfL. However, at the end of the course assessment is generally seen as potentially formative and conceptions of AfL are more comparable with what is regarded as ‘best practice'. Arguably this is a hoped-for, if not anticipated, outcome for student teachers. There is a professional expectation that student teachers will develop their understanding and application of AfL in the classroom. However, expectations are not always realised. The student teachers' descriptions of their learning journeys and their


emerging conceptions of AfL indicate a variety of influences and, in turn, these influences seem to vary in their impact. This suggests there would be value in seeking a clearer understanding of both the variety of possible influences and how they might affect students' conceptions of AfL. It is argued that a clearer insight into these influences may be applicable to pre-course induction and pedagogy in teacher education and higher education. 34. What do academic staff think about assessment and how can it help inform policy making and approaches to professional development? Sarah Maguire1, Lin Norton2, Bill Norton2 1 University of Ulster, Coleraine, Derry, UK, 2Liverpool Hope University, Liverpool, Merseyside, UK Institutional assessment policies risk being perceived by those who must work with them as managerialist, an ideology that Vincent (2011) argues has had a destructive effect in universities. One of the problems is that institutional policies tend to draw on generic pedagogical principles promulgated in the literature (e.g. assessment for learning; authentic assessment; dialogic feedback). They do not take account of the discipline-specific or practitioner/professional nature of assessment. Research shows, however, that differences do exist (Neumann et al., 2002), so in order for institutional initiatives to succeed it would seem sensible to explore Shay’s (2008) suggestion that they explicitly feature disciplinary forms of knowledge. The research reported here is a large-scale assessment survey carried out at two institutions, designed to find out what academics from different disciplines with different learning and teaching orientations think about assessment design, marking and feedback. The anonymous survey consists of two closed-item questionnaires: one exploring assessment design beliefs and practices and the other measuring attitudes to marking and feedback.
In one institution, a further section has been added asking specifically about two recent assessment initiatives particular to that institution. At the end of the survey, participants have been given the opportunity to add anything further that they think is relevant in an open text box. The initial analysis from this survey will be presented, with a reflection on how such data can help inform and strategically target professional development, operationalise assessment policies and inform institutional teaching and learning strategy. 35. Monitoring the attainment of graduate attributes at individual and cohort levels Kerry Shephard, Brent Lovelock, John Harraway, Liz Slooten, Sheila Skeaff, Mick Strack, Tim Jowett, Mary Furnari University of Otago, Dunedin, New Zealand Some student outcomes are more difficult to assess than others. The University of Otago expects its graduates to appreciate global perspectives, to develop environmental literacy and cultural understanding and to be committed to intellectual openness. These graduate attributes (sometimes described as wicked competencies) have significant affective elements that make assessment by essay or assignment challenging. In most cases students leave the university with no clear validation of their achievements in these domains. To help academic departments to monitor the extent to which they foster these qualities in their


programmes, a multidisciplinary team of researchers at the University of Otago has developed and piloted cohort-based approaches focussed on undergraduates' environmental worldview and environmental literacy. These approaches enable departments to determine how widespread and embedded a given attribute is within the student group, as well as helping the department to monitor whether there is general improvement in the attribute over time. The approaches emphasise the anonymity of students, so they operate without putting individual students in a position where they may feel obliged to show a ‘correct' value, attitude, commitment or appreciation. Anonymity is vital if we are to reasonably interpret the data that students provide. This research paper will focus on data gathered over four years in five different university departments. The paper will encourage discussion on the desirability of monitoring the attainment of graduate attributes at individual and cohort levels. 36. How Educational Practices Change the Evaluation and Assessment: Making Learning Meaningful through ePortfolios Katie Kalmus, Director of Client Development, LiveText, La Grange, IL, United States Employing ePortfolios as a high-impact, open educational and assessment practice allows students to gather in one place a range of digital artifacts that can be used to demonstrate numerous skill sets and competencies, including communication, presentation, inquiry and analysis, and information literacy. The representations of learning in an ePortfolio reflect the individual student's view of the breadth of his or her education - what was learned both inside and outside the classroom. Research finds that there are patterns of practice at institutions associated with enhanced student learning, where high-impact open educational practices occur.
When students engage in two or more high-impact assessment practices, there is a significant positive impact on their grades and retention (Kuh, 2008), and there is emerging evidence that ePortfolios are associated with these practices. The current interest in ePortfolios flows from increasing calls from accrediting organizations to demonstrate that students are really learning. Moreover, academics are expressing a need for demonstrations of a much broader array of learning outcomes than the existing closed practices support - e.g., personal and social responsibility, teamwork, intercultural knowledge and competence, and integrative learning (Rhodes, 2011). ePortfolios, with their wide range of capabilities, can help address these needs. During this presentation, attendees will learn good-practice processes for implementing an effective ePortfolio requirement. Firsthand examples of faculty using ePortfolios will be shared in order to give insight into how students assemble an ongoing collection of best work and how learning assessment data can be collected from such work. 37. Creative Development or High Performance? An Alternative Approach to Assessing Creative Process and Product in Higher Education Jan Watson University of East Anglia, Norwich, Norfolk, UK The importance of developing creative pedagogies and practices in Higher Education (HE) has been highlighted by many educationalists and researchers (Craft, 2006; Jackson, 2006; Kleiman, 2005; McWilliam, 2007). However, it could be argued that the current performance-driven HE assessment processes are in


direct conflict with creative development; it has been recognised that the assessment of student creativity is both complicated and controversial (Beghetto, 2005; Elton, 2006; Loveless, 2002). Drawing on a social constructivist conceptual framework, this article examines how Year 3 Education undergraduates responded to the challenge of exploring their creative processes through the planning and presentation of an artwork. In particular, it addresses how they negotiated the demands of a different type of assessment which focused on both the reflective process and the finished product. This investigation is part of a more extensive action research study of how visual art may be used to develop students' creative thinking skills. It is underpinned by the idea that all students have the potential to be creative if they are provided with innovative learning experiences and open-ended assessment tasks. The empirical data was obtained from semi-structured interviews (n=20), students' reflective sketchbooks and observation notes. The findings support the view that students are more motivated and engaged with their learning when they have access to alternative, creative assessment opportunities which involve self-examination, collaboration and risk-taking in a positive learning environment. The data shows that they value formative and summative assessment methods which enable them to demonstrate their knowledge, understanding and skills in different ways. However, the evidence indicated that concern about the perceived subjectivity of the assessment process, and the emphasis placed on meeting the learning outcomes, initially presented a barrier to creative thinking and experimentation. The implication is that the conflict between creativity and assessment could be partially resolved if students played a more active part in both the formulation of summative assessment criteria and the on-going formative assessment process. 38.
Assessing learning gains in project management tuition/training. Steven Nijhuis1,2 1 Utrecht University of Applied Science, Utrecht, The Netherlands, 2Twente University, Twente, The Netherlands Project management is offered as a significant component in a range of undergraduate and postgraduate academic qualifications (Crawford et al., 2006), but project management is a challenging subject to deliver, not least because of the wide variety of skills and knowledge it embraces (Ellis et al., 2003). A top ten of project management competences, based on recent research, shows that the most important competences are skills such as communication, leadership and negotiation rather than knowledge (Nijhuis, 2012), which provides a challenge for assessment. For project managers with one or more years of experience, several assessment methods are available, such as IPMA C and GAPPS. For higher education institutions, where inexperienced students are trained in project management, such assessment methods are not yet available: the existing methods are too complex (in terms of time and/or cost) to be used in higher education. It can be concluded that higher education has few or no tools to assess the outcomes of its project management teaching. In this paper a first design of such a tool is presented, based on Students Assessment of Learning Gains (Seymour, 2000) combined with Maslov's learning


cycle. The long-term objective of the study is to be able to compare the effectiveness of different educational approaches. 39. International students’ perceptions of formative peer assessment in the British university: a case study Meng Fan, David Leat, Sue Robson Newcastle University, Newcastle upon Tyne, UK Since there is an increasing emphasis on the development of skills such as communication and critical analysis in UK Higher Education (HE) (Department for Education and Skills, 2003), fostering such skills requires an innovative approach to teaching, learning and assessment. Formative peer assessment, involving questioning and feedback, has now been applied in some modules as an innovative assessment approach. As the growing number of international students across the globe has resulted in a considerable amount of research into their learning experiences in the host country (e.g. de Vita and Case, 2003), investigating international students’ experiences of innovative assessment in UK HE could be a potentially valuable research area. This study aims to develop an understanding of MEd (Master of Education) international students’ perceptions of formative peer assessment and its implications for them in terms of their academic transition to UK HE. It employed a case study approach and collected both quantitative and qualitative data over two academic years (2010-2012) in a cosmopolitan British university. The results are arguably of considerable importance for more innovative approaches to assessment in HE. It is hoped that the study helps UK HE institutions and academic staff focus their efforts on issues of international student learning; it also makes suggestions to assessment policy makers for the further development and spread of successful practices, to encourage a new method of assessment for intercultural learning.
Key words: Peer assessment; formative assessment; assessment for learning; feedback; international students.



Day 2: Keynote Plato versus AHELO: The nature and role of the spoken word in assessing and promoting learning. Associate Professor Gordon Joughin, The University of Queensland, Australia Plato argued for “the inferiority of the written to the spoken word” in evaluating students’ knowledge: the spoken word was one “written on the soul of the hearer with understanding”, while the written word would allow students to develop a reputation for wisdom without its reality. 2,300 years later we are still confronted with communicative issues in assessment: Are some modes of communication better than others in allowing us to determine what our students know, value and are able to do? What happens to learning and assessment when students have to speak their understanding? If the spoken word can in fact reveal ‘real wisdom’, what does this tell us about traditional exams, essays and standardised, computerised testing on the one hand, and the emerging use of blogs, wikis, podcasts and social media on the other? This keynote address will delve into the nature of the spoken versus the written word in assessment and learning, unpacking ideas about orality and literacy based on studies from the students’ perspective, and inviting participants to consider implications for current and emerging assessment practices. Biography Gordon Joughin is an Associate Professor in Higher Education at The University of Queensland, Australia and currently Director of its Teaching and Educational Development Institute. He has written extensively on assessment matters, with a focus on oral assessment and the influence this form of assessment can have on student learning. 
He is the editor of Assessment, Learning and Judgement in Higher Education (Springer), co-author (with David Carless and Ngar-Fun Liu) of How Assessment Supports Learning: Learning-oriented Assessment in Action (Hong Kong University Press), and co-author of the David Boud-led Assessment 2020: Seven Propositions for Assessment Reform in Higher Education. Gordon is a member of the Editorial Board of Assessment and Evaluation in Higher Education. His current projects include a study of veterinary students’ experience of final year vivas and an exploration of the contextual influences on academic decision making in assessment matters.



Day 2: Abstracts 40. University Standards Descriptors: can you please all of the people even some of the time? Rachel Forsyth Manchester Metropolitan University, Manchester, UK The most recent QAA institutional audit for Manchester Metropolitan University recommended that the university should “establish a set of comprehensive university-wide assessment criteria to help maintain consistent standards across all provision, both on and off-campus” (QAA, 2010). Agreement about standards has never been problematic within disciplines, and the university had hitherto relied on this fact, because it has been difficult to find an acceptable structure which covers awards from Fine Art to Nursing. Colleagues have been well aware of the difficulties in achieving certainty in marking (Ecclestone, 2001; Woolf, 2004; Beattie, Gill et al., 2008; Bloxham, Boyd et al., 2011) and have put considerable effort into producing discipline-specific criteria and a well-established internal and external moderation process (Manchester Metropolitan University, 2007). As part of a JISC-supported Assessment and Feedback project, it was resolved that the university would re-address the QAA recommendation. A more detailed description of standards could give us the opportunity to provide electronic marking rubrics and grids as part of the specification for an electronic assessment management system. A set of University Standards Descriptors was produced which has had the additional benefit of linking the University Graduate Outcomes more closely into assessment, as well as providing a common language for describing different levels of achievement which has been agreed across disciplinary areas. This Practice Exchange session will give an opportunity to discuss the Standards Descriptors which were produced as part of this process and to work with the staff development materials which have been developed to support their implementation. 41.
Using Patchwork Assessment to enhance student engagement, skills and understanding Caroline Marcangelo University of Cumbria, Cumbria, UK Patchwork text assessment is a method that has been used within the HE sector for over a decade since its inception by Winter et al. (2003), and more recently the JISC-funded UK DePTA project (2011) explored how digital enhancement of the patchwork model can increase its effectiveness as a form of assessment that promotes learning development. A patchwork text consists of a variety of small assessment activities, each complete in itself; the overall unity of these component sections, whilst planned in advance, is finalised retrospectively when they are ‘stitched together'. Each of the short pieces is shared within a small group of learners as part of the teaching-learning process. At the end of the module learners add a reflexive commentary to the short pieces they have already written to summarise and synthesise their learning (Winter, 2003).


It is often the case that high-stakes assessment consists of summative examinations and essays; however, these traditional forms of assessment do not always expose and capture the broader student understanding, subject skills and transferable graduate capabilities valued by employers. Additionally, these traditional methods do not easily facilitate opportunities for formative feedback, peer interaction and self-assessment leading to meta-cognition. Evidence from programmes using patchwork assessment and the DePTA project shows this method to be an effective approach for demonstrating these attributes and affording the opportunity for feedback. Additionally, there are early reports of increased student engagement with continuous reflective learning, and also of reduced plagiarism. The practice exchange session will be used to explore how you can adopt this approach within your own programmes. 42. Upskilling students in lectures using student polling software Sam Clarke, Trish Murray University of Sheffield, Sheffield, South Yorkshire, UK Peer assessment is a powerful tool which all students should aim to harness over the course of their degree. Teaching students how to assess work where there is minimal room for interpretation of the assessment criteria is relatively straightforward. However, if the assessment criteria are not specific enough, or perhaps more importantly the students cannot understand the assessment criteria, we are left with a poor level of assessment. The authors present a case where the students are asked to mark a series of drawings to a relatively specific (yet still subjective) set of assessment criteria. To train the students to a) understand the assessment criteria and b) understand the performance required, an in-lecture poll was conducted using the TurningPoint system. During the lecture the students were asked to grade exemplars, with the outcome of the polls being used to guide the course of the lecture.
If the poll showed little consensus, or a good consensus around an inappropriate mark, further discussion would be instigated to attempt to sway the polls. The quality of learning was judged by the degree of consensus in the marks (both between the students and between the students and the lecturer) at the end of the session. 43. Understanding student learning from feedback Stuart Hepplestone1, Gladson Chikwa2,1 1 Sheffield Hallam University, Sheffield, UK, 2Nottingham Trent University, Nottingham, UK This paper presents the findings of a longitudinal study at Sheffield Hallam University exploring the subconscious processes that students use to engage with, act upon, store and recall feedback. The project aimed to inform and evaluate how technology can support deliberate actions as a result of receiving feedback. The session will describe how the methodology used both micro-blogging, which enabled the participants to capture every instance of student interaction with feedback, and weekly diaries, which encouraged a detailed and reflective account of the


nature and use of feedback to be recorded. An end-of-study interview enabled participants to articulate their understanding of their own experiences, and to analyse the differences in how they interact with different forms of feedback. The findings will be shared with the audience, including how participants understood and recognised feedback, as well as their expectations of feedback. Students valued the feedback that they received and made efforts to internalise and store feedback for future use. The main argument will centre on the strategies that students use to apply feedback, especially around the links that the participants (often struggle to) make between the feedback received and future learning. There will also be a discussion on how the participants valued the use of technology in the feedback process, including how they used mobile technologies to enable prompt dialogue about their feedback with their tutors, and social media to gain early feedback on ideas from peers to feed forward into their final submissions. 44. Analysis of student comments following participation in an assessed online collaborative activity based on contributions to a series of wiki pages. Janet Haresnape Open University, Birmingham, UK In this online collaborative activity, students each provided data, and suggestions about its interpretation, by contributing to a wiki, and then undertook an assessment based on the interpretation and implications of their findings. The activity was designed for students of evolution, and involved each student ‘claiming' one ‘island', counting different genetic variants on their island, and adding their data to the wiki. There followed probing questions inviting students to add comments about the long-term evolutionary implications of their results.
This enabled weaker, less confident students, who reported finding the questions challenging, to build on comments made by others and add their own valuable contributions, demonstrating the interacting roles of learner, peer and tutor (Pachler et al., 2009). Thirty-six participating students who completed the assessment were interviewed by telephone, and their comments were categorised. Categories included the practical, authentic (Bloxham et al., 2007; Sambell et al., 2013) and collaborative (Crisp, 2007) nature of the activity and its assessment, and deeper understanding of the topic. Many reported that participation in such an authentic group activity felt like doing 'a real experiment', gave them a sense of involvement and of being an essential contributor to the group data and its interpretation, and led to deeper understanding and hence increased confidence in tackling the assessment. The activity is now being adapted for use in other contexts in which a complex question, which could be approached from various different angles, is posed on a wiki, enabling students to build on each other's responses to gain a deeper understanding.



45. Assessment for learning at institutions of higher education: A study of practices among academicians Mohamad Sahari Nordin1, Ainol Zubairi1, Mohd Burhan Ibrahim1, Nik Suryani Nik Abd Rahman1, Zainurin Abdul Rahman1, Joharry Othman1, Tunku Badariah Tunku Ahmad1, Zainab Mohd Nor2 1 International Islamic University Malaysia, Kuala Lumpur, Malaysia, 2Universiti Teknologi MARA, Kuala Lumpur, Malaysia Assessment for learning is a catalyst for reformation in instructional practices (Sahari, 1999), bridges theory and practice (Riley & Stern, 1998), and creates "a shared academic culture dedicated to assuring and improving the quality of higher education" (Ellyn, 2000, p. 2). This paper presents the findings of a national study conducted at higher learning institutions in Malaysia to find out instructors' self-reported practices, and their perceived competence, in assessment for learning. Guided by frameworks on standards in assessment practices and previous studies assessing teachers' assessment practices, this study initially adapted a questionnaire based on five standards in assessment practices drawn from the literature. The 23-item questionnaire was distributed to over one thousand instructors from thirty-three public and private higher learning institutions in the country. The 1,064 responses were subjected to both principal component analysis and structural equation modelling. The principal component analysis indicated that there were four underlying dimensions of assessment practice measured by the data, with 15 meaningful items. Further analysis reports the rank order of the four dimensions of assessment for learning. Finally, the structural equation modelling indicated that there was a profound influence of perceived competency on assessment practices.
The findings of this study are related to instructional interventions and training at the national level to strengthen and support competency in assessment amongst instructors at higher learning institutions in the country. 46. Students' Perceptions about Assessment and Assessment Methods: A Case Study Elizabeth Ruiz Esparza University of Sonora, Hermosillo, Sonora, Mexico This paper describes research carried out in a Bachelor of Arts in English Language Teaching in Mexico. The qualitative study aimed at investigating the students' perceptions about assessment and assessment methods. Related research has studied this topic in regards to preferences and cognitive process levels (Van de Watering, Gijbels, Dochy & van der Rijt, 2008). In addition, Struyven, Dochy and Janssens (2005) found that students' perceptions about assessment and assessment formats influence students' learning approaches and, conversely, these approaches influence their perceptions. These authors also state the importance of gaining insights into students' perceptions about assessment to understand student learning and thus improve teacher educational practices. The participants in the present study were ten students from the last semester of the four-year program. The method of data collection followed Kitzinger's (1994) focus group methodology, which involves participants in carrying out collective tasks. The participants were asked to discuss the grading schemes of four teachers from different but related disciplines who taught different subject areas


of the program (linguistics, pedagogy, culture and English language). Using Nvivo, an external researcher validated the theme-driven analyses. Results provided rich insights about the students' perceptions of assessment methods, weighting of assessment components and grading criteria as well as relationships between assessment and subject and assessment and learning. This study aims to contribute to the existing research literature by offering an educational context that has been largely unexplored. Second, the study raises the problems with the formative assessment and assessment for learning teacher practices. 47. Working together: can collaboration between academic writing/subject specialists improve students' performance in written assessments? Carol Bailey University of Wolverhampton, Wolverhampton, UK In 2011 University X introduced a new model of academic English language support for postgraduates, aimed primarily at those from outside the EU but in many cases accessible to EU and home students too. The EAP (English for Academic Purposes) provision was contextualised as far as possible and in some cases embedded within core postgraduate modules. One aspect of our provision is formative feedback on assignment drafts. This takes place through face-to-face tutorials and/or online: via email, our VLE, or Grademark (feedback tool within Turnitin). In many cases students receive formative feedback from both their subject lecturer (on content, source use and argument) and their EAP tutor (on language and referencing). This collaborative approach has proved popular with subject lecturers but it has been more difficult to ascertain its usefulness to students. The aims of this practitioner exchange are to 1. Share experiences of collaboration between academic writing/subject 2. specialists that seek to improve students' performance in assessments; Explore ways of evaluating the impact of such collaboration. Indicative references Gimenez, J. 
(2008) Beyond the academic essay: Discipline-specific writing in nursing and midwifery. Journal of English for Academic Purposes, 7(3), pp. 151-164. Gray, L. and Judd, P.L. (2008) Effective English Support for Overseas Students in Engineering Departments. Innovation, Good Practice and Research in Engineering Education [online]. Available at http://www.engsc.ac.uk/downloads/scholarart/ee2008/p055-gray.pdf Sloan, D. E. & Porter, E. (2008) The management of English language support in postgraduate business education: the CEM Model (contextualisation, embedding and mapping). International Journal of Management Education, 7(2), pp. 51-58.



48. Student response system used for motivation and learning outcome Cecilie Asting, Anna Steen-Utheim, Inger Carin Grøndal BI Norwegian Business School, Oslo, Norway The aim of this practice exchange session is to get feedback on the following project. The project's purpose is to enhance student activity, interest and motivation in a mandatory course for first-year bachelor students. The course in Organizational Behaviour and Management is viewed by students as chat about topics that appear obvious to them. Hence one pitfall is the assumption that the course can easily be passed by writing an essay-like exam. This is far from the course requirements. Our goal in using the mobile application "Student Response System" (SRS) is to increase students' understanding by making them reflect on subject-specific questions during class. The system provides a graphical illustration of student answers, giving the lecturer the chance to give instant feedback. This enables the lecturer to address current misunderstandings and challenge students to reflect further on these "obvious topics", hopefully increasing students' learning outcomes. The SRS application was introduced in the second of a series of 14 lectures. Our intention is to keep using SRS throughout the semester. As an example, SRS was used to pose an identical question, about money being the most important motivator of job performance, at the beginning and at the end of a lecture. Comparing the two graphs, we could see how students changed their opinion after being confronted with motivational theory. To gain more understanding of students' learning outcomes, our plans include a survey, focus groups and comparing course grades with those of students in classes where SRS is not offered. 49. Emotional Responses: Feedback Without Tears Mike McCormack Liverpool John Moores University, Liverpool, Merseyside, UK This research project is funded by the HEA.
Much of the research into the efficacy of feedback on student work has been conducted over the last decade. The central aim of the project is to investigate the constituent elements of 'successful' feedback and thus identify less effective approaches. In this context, 'successful' could be taken to mean feedback which does not damage, but fairly and justifiably enhances, aspects of self-identity and self-esteem, clearly informs the student of strengths and weaknesses, and provides clear guidance on future improvement (feed-forward). In particular, given that tutors are sometimes surprised by students' distress on receiving feedback, the project aims to identify commonly occurring emotional 'clouding' of communication, usually as a response to written rather than spoken feedback, which obscures meaning and tone and robs the process of any felicitous outcome. Practical work in Drama presents a fruitful area of investigation due to the heightened emotional stakes already implicit in the activity. Ill-judged or misread/misunderstood feedback comments can, understandably, elicit strong emotional responses due to the perceived power structures in operation between student and tutor and the vulnerability felt by 'the performer'. This sense of the heightened nature of the activity lends itself to an investigation of the underlying processes at work in all forms of feedback on student work. This paper will discuss some of the issues and initial conclusions arising from the existing literature, and from the student focus groups and questionnaire-interviews undertaken by the project.


50. Perceptions of fairness and consistency regarding assessment and feedback in trainees experiencing a difficult or failed professional placement Mark Carver University of Cumbria, Lancaster, UK The accurate assessment of trainees on placements is a crucial measure for providers of professional training, particularly at the pass/fail borderline. Professional judgements of trainees are qualitative judgements (Hawe, 2002), and consistency is primarily concerned with validity and fitness-for-purpose rather than objectivity. However, students may view inconsistencies as errors or subjectivity rather than as expert judgements necessarily involving tacit assumptions and legitimate differences of perspective (Copland, 2010). This perception potentially impacts on trainees' ability to engage with feedback to improve their future practice, lowers satisfaction ratings, and raises trainees' concerns about the fairness and consistency of judgements. This research is premised on the view that trainees receiving critical high-stakes assessments may find that this results from misaligned views regarding professional standards, the purpose of professional feedback and the role of the mentor. Without suitable reflection, critical feedback may exacerbate this misalignment, so that trainees find it increasingly difficult to enter professional communities. In recognition of the sensitive nature of the research, a small group of volunteers whose grades were near the borderline was contacted without the knowledge of their mentor or link tutor. Interviews based on prompts derived from the literature were conducted and coded thematically. This study has implications for exploring the backwash effect of placement feedback, thereby helping students understand the tacit "rules of the game" (Roberts & Sarangi, 2001) as related to their own constructs of fairness and consistency.
Findings related to restructuring feedback into feedforward (Carless, 2007) and to bolstering existing consistency safeguards are also discussed. 51. An Authentic Assessment Proposal: Current Events as Reported in the News Media Rick Glofcheski University of Hong Kong, Hong Kong, Hong Kong This presentation seeks to offer teachers a new approach to their assessment practices, one that can have a profound impact on how students learn. This approach, in which recent, media-sourced news items are used as assessment questions and learning activities, can help students to move away from the habit of short-term reproductive learning and develop more effective skills that go beyond the mere acquisition of knowledge. Events reported in the local media are real and have relevance in the community. Every week there is an abundance of news items relevant to any given subject area. Using such reports as assessment and learning questions offers students the opportunity to apply their learning in a realistic and relevant scenario. Moreover, because the narratives in such news reports are not flagged or advertised as relevant to the subject, the student must learn the skill of identifying relevance when it is not expressly indicated. This form of assessment question, if carefully selected, promotes the development of independent and life-long learning skills in significant ways. This approach abides by the principle of 'assessment for learning'. It accepts as axiomatic that assessment is a major driver of student learning behaviour, and is premised on the beliefs that assessment is an integral part of student learning and that more effective student learning requires a reconsideration of how we assess students. 52. The Introduction of Grade Marking at Southampton Solent University Fiona Handley, Ann Read Southampton Solent University, Southampton, UK This practice exchange session explores the challenges faced in the introduction of Southampton Solent University's marking scheme, known as Grade Marking, which was launched in September 2011. Grade Marking replaced a percentage-based system of marking with one which uses 17 spot marks from A1 to F4. The key aims of the scheme were to encourage staff to use the full range of marks available to them, to demonstrate a clearer link between the marks given and the University's generic grading criteria, and to make marking more efficient by speeding up agreement between first and second markers. Southampton Solent University is one of a number of universities moving away from a percentage-based approach to marking, in line with research which questions the validity of relying on marking with numbers to assure standards (e.g. Rust, C. 2011, 'The Unscholarly Use of Numbers in Our Assessment Practices', International Journal for the Scholarship of Teaching and Learning, Vol. 5, No. 1). The scheme was developed in consultation with a range of stakeholders, and involved a major staff and student dissemination programme.
The scheme has been in place for almost two academic years, and this session offers the opportunity to share the results of the recent evaluation of the initiative with a wider audience, including an analysis of the effectiveness of the scheme in broadening the range of marks given to students, and an assessment of the perceived and actual impact on academic standards. 53. Assessment Feedback - What Can We Learn from Psychology Research? Ivan Forward Questionmark, UK
 Do you need learners to retain and apply what they learn?
 Would you like to improve the effectiveness of your assessment feedback?
 Are you worried your learners will forget what you teach them?
If you answered yes to these questions, this session is for you! It will share some actionable evidence from learning and psychological research to improve the learning benefit of your Questionmark assessments. Recent research in cognitive psychology has deepened understanding of the positive effect of retrieval practice on retention of learning. Studies show that practising retrieval, including taking formative quizzes with feedback, is an efficient way of retaining learning for the long term - more efficient than spending the same time restudying. The experiments of Roediger & Karpicke (2006) and Karpicke & Blunt (2011) will be discussed, explaining how the act of taking quizzes, tests and exams strengthens retention. Research has also evaluated the effects of feedback on learning and retention, and identified pointers for effective feedback. The presentation will also touch on Nicol's seven principles of feedback, show how assessments and feedback can influence learning, and offer good-practice recommendations to help improve your assessments.


Questions to be addressed include:
 What is the evidence from cognitive psychology and how reliable is it?
 What question types work best for retention?
 Is feedback best given immediately or after a delay?
 Should feedback tell students the right answer?
 Where does topic feedback fit in?
 How do you get students to pay attention to feedback?

54. Immediate feedback by smart phones Nina Helene Ronæs, Anna Steen-Utheim, Inger Carin Grøndal BI Norwegian Business School, Oslo, Norway The aim of this practice exchange session is to present digital tools, used on smart phones, for engaging and motivating students both during lectures and between lecture sessions. The tools are an "Instagram-like" photo tool, a word cloud and polling. These educational tools are used in a first-year bachelor degree program in the discipline "Consumer Behavior", in large classes of 200-300 students. A second aim is to get feedback on the further use of these tools for enhancing students' learning outcomes, and on possible pedagogical implications. The rationale for using these tools is to motivate and activate students in large classes. Students get immediate feedback from both lecturer and peers. Both the lecturer and peers comment on the results, which are immediately displayed in the lecture hall. The lecturer uses the results to facilitate a plenary discussion related to the learning objectives defined in the curriculum. The dialogue creates an opportunity for the students to voice their ideas and get immediate feedback on their thoughts. In addition, it helps the students gain a deeper understanding of the subject. Through discussion and experience they see how the theory can be applied. We have chosen these three digital tools to match the ways students communicate and engage. Today's students live in a digital world, and the use of smart phones has become an integral part of their daily communication. We therefore believe that students' commitment to the subject will increase through using these tools. 55.
Reflections on feedback - feedback on reflection Rita Headington University of Greenwich, London, UK In this paper I argue that reflective practice, common to many graduate-status professions (Schön, 1991), is a complex threshold concept: transformative and irreversible, fundamental to success, yet liminal, troublesome and based on tacit knowledge (Meyer et al., 2010). Higher education tutors have, through assignments and professional placements, sought to contextualise and convey such tacit knowledge through feedback. However, where students' interpretations fall short of the feedback's intended meaning, students appear drawn towards simplicity over criticality, identifying feedback as a product of learning rather than as integral to the reflective process (Sadler, 2010; Tummons, 2011; Headington et al., 2012). Conversely, social constructivist approaches suggest these difficulties can be transcended by enabling students' interpretation of feedback to be developed through interaction and dialogue (Black and Wiliam, 1998; Hattie and Timperley, 2007; Nicol, 2010).


This study examines ten students' lived experiences of feedback and reflective practice across the second year of an initial teacher training programme. Following initial semi-structured interviews, which explored perceptions, expectations and personal histories of feedback and reflective practice, students maintained 'feedback diaries' using audio, video or written logs, according to personal preference. The diaries, along with documentary evidence of feedback, provided starting points for two further semi-structured interviews. Initial findings indicated that students seek feedback at different levels, forming strategic support networks as appropriate (Headington, 2012). Diaries and interviews revealed that trust in and respect for providers determine students' willingness to accept and use the feedback received, whilst timeliness, and the opportunity to take action, impact upon students' approaches to reflective practice. 56. An evaluation of the effectiveness of audio feedback, and of the language used, in comparison with written feedback Charlotte Chalmers, Janis MacCallum Edinburgh Napier University, Edinburgh, UK Audio feedback has been shown to be popular and well received by students, as well as by staff [1,2]. However, there is little published work to indicate how effective audio feedback is in improving student performance [3]. To address this question, first-year students on an undergraduate science degree were given either written or audio feedback on coursework, and their performance was assessed by an online test. Sixty students agreed to take part; thirty were randomly assigned to receive written feedback, and thirty received their feedback via emailed audio files. Mean marks awarded for the coursework for each group were not significantly different. The end-of-module online test included three questions which specifically assessed topics from the coursework.
Overall test results were not significantly different for the two groups, and the mean mark achieved in the three specific test questions was 74.7% for the "audio" group and 72.6% for the "written" group (Student's t-test, p = 0.76). Evidence suggests that the language used in giving audio feedback differs from that used in written feedback [3]. Samples of feedback given in this study were analysed. The mean word count for audio feedback was 482.2, versus 97 for written feedback (Student's t-test, p < 0.05). Significantly more feedback (p < 0.05) was given via audio than in writing in the following categories [4]: "explaining misunderstandings", "demonstration of good practice" and "justifying marks". Whilst marks may not be improved for students receiving audio rather than written feedback, the feedback given is much richer.

[1] Lunt T and Curran J (2010). "Are you listening please?" The advantages of electronic feedback compared to written feedback. Assessment and Evaluation in Higher Education, 35(7): 759-769.
[2] Gould J and Day P (2012). Hearing you loud and clear: student perspectives of audio feedback in higher education. Assessment and Evaluation in Higher Education. http://dx.doi.org/10.1080/02602938.2012.660131
[3] Merry S and Orsmond P (2008). Students' attitudes to and usage of academic feedback provided via audio files. Bioscience Education e-journal, 11. http://www.bioscience.heacademy.ac.uk/journal/vol11/beej-11-3.aspx


[4] Brown E, Gibbs G and Glover C (2003). Evaluation tools for investigating the impact of assessment regimes on student learning. Bioscience Education e-journal, 2. http://www.bioscience.heacademy.ac.uk/journal/vol2/beej-2-5.aspx
57. Collaborations and Celebrations: the highs and lows of a decade of working with collaborative assessment Veronica Brock1,2 1University of Wolverhampton, West Midlands, UK, 2Halmstad University, Halland, Sweden In order to explore a link between the lament "assessment has proved surprisingly difficult to change, bound up as it is with traditions and concerns about standards and fairness" (noted in the initial call for papers to this conference) and the nature of the typical conference paper, the intention is to deliver this paper as an evocative and emotionally engaging subjective autoethnographic account (Ellis et al 2006, 2010; Denzin 2006) of the highs and lows of a decade's use of collaborative assignments, from group reading journals to class tests, in HE. The account will reflect on the design of the assignments and on their reception by undergraduates studying English and linguistics in both an English and a Swedish institution of HE. The paper further seeks to evaluate evocative autoethnography as a potential assessment form in undergraduate modules by recording the reception of this particular paper in a public arena. 58. Getting the Devil off our back: Reflections on Table Marking in Psychology Meesha Warmington, Cecilia Lowe University of York, York, UK This poster will outline and evaluate a model for Table Marking piloted in the Department of Psychology, University of York. The timing and quality of feedback is a common concern that has arisen from the National Student Survey across all departments and institutions. One way to ensure that feedback is timely and of good quality is through the use of Table Marking.
Table Marking is conducted across 3-4 sessions (5 hours each) in which Postgraduates Who Teach (PGWT) mark 200 scripts. These sessions involve clarifying understanding and standards through marking a sample of scripts, clarifying marking guidelines, moderation and marking, all done under the supervision of faculty. Results illustrate that Table Marking not only improves the efficiency with which assignments are marked, but also improves the internal consistency of feedback and marking. This is reflected in the open comments of PGWT: 1. "It was good and definitely helped because everyone could raise questions; you could also tell by how others were marking what the general standard of the essay was and if you had a good or bad batch." 2. "I find pre-marking is really helpful. After discussing with other markers, I understood the marking guidelines better and felt more confident about the marks that I had given to students."



59. Marking as a way of becoming an academic POSTER Rachel Sales University of the West of England, Bristol, UK Background: Academics supporting students in health and social care are often described as gatekeepers for their professional disciplines, as they are seen as experts able to make reliable judgements when assessing students. Newly appointed academics within these disciplines are often experienced practitioners, who are seen as moving from being experts in their previous roles to novices in an academic setting. For new academics, the first experience of marking is an event as memorable as the experience of preparing for and giving the first teaching session. Aim: The aim of the study was to explore the experience of six newly appointed academics from health and social care backgrounds as they began to mark and give feedback on student coursework. Methods: Three in-depth interviews with each participant. Findings and conclusions: Each participant experienced different levels of support and guidance in relation to assessing, marking and giving feedback on students' coursework. Participants reported a growing confidence in their academic judgements and discussed challenges relating to emotional effects and ethical considerations that they had not anticipated. As a result of the study, staff development sessions focusing on the needs of newly appointed academics have been introduced. 60. Investigating moderation practices in a Faculty of Education Lenore Adie, Margaret Lloyd, Denise Beutel Queensland University of Technology, Brisbane, Queensland, Australia Moderation of student assessment is a critical component of teaching and learning in contemporary universities. In Australia, moderation is mandated through university policies and through the new national university accreditation authority.
The purpose of this project was to investigate and analyse current moderation practices operating within a faculty of education at a large urban university in Queensland, Australia. Moderation was understood in this project as a process that engages teaching team members in developing a shared understanding of assessment requirements, standards, and the evidence that demonstrates differing qualities of performance. The specific aim of the project was to determine efficient and effective moderation practices within the faculty. This qualitative study involved interviews with the unit coordinators (n=21) and tutors (n=8) of all core undergraduate education units within the faculty, and of other units selected on the basis of their profile, for example their enrolment mode (internal, external, or combination) and the courses in which they were offered, particularly in the Graduate Diploma Program. Four distinct discourses of moderation that academics drew on to discuss their practices were identified in the study: equity, justification, community building, and accountability. These discourses reveal the nuances in understandings of moderation processes and practices. From these findings we identified a number of areas which require attention in informing the practice of moderation, and we make recommendations for changes to moderation practices within the Faculty.


61. How university tutors learn to grade student coursework: professional learning as interplay between public knowledge and practical wisdom Pete Boyd, Sue Bloxham University of Cumbria, Carlisle, UK A study of how university lecturers grade student coursework used 'think aloud' protocols while tutors assessed student assignments. This was followed by semi-structured interviews with the lecturers to explore their perspectives on grading, academic standards, their use of related documents, and how they learned to grade (Bloxham & Boyd 2012). Through qualitative analysis of the interview data, a new metaphorical framework for professional learning was developed and tested (Boyd & Bloxham 2013, in press). Metaphors, as linguistic representations, are helpful in attempts to capture the human experience of learning, but dominant metaphors may be misleading (Lakoff & Johnson 1980; Martinez et al. 2001; Hager 2008). The metaphor of 'theory and practice', and of a 'gap' between them, is widely held, often subconsciously, in professional education. Within this traditional way of thinking about knowledge, student teachers will typically be expected to 'apply theory to practice', a framing we would argue is misleading. The proposed new metaphor was developed from a socio-cultural perspective (Blackler 1995; Wenger 1998) and considers teacher knowledge as 'knowing'. It is also informed by Bernstein's thinking on vertical and horizontal discourses (1999) and by Bolt's work on formal and informal workplace learning (2008). The new metaphor considers professional learning as 'interplay' between the vertical domain of public knowledge and the horizontal domain of practical wisdom. It is proposed as a useful tool for all those interested in understanding how teachers learn to grade student work and build their personal frameworks for academic standards. 62.
Creating and Marking Portfolios Effectively Fiona Handley, Adam Kelly Southampton Solent University, Southampton, UK The use of portfolios as an assessment type is becoming increasingly common as staff look for ways for students to, for example, spread out the time spent on assessment tasks, demonstrate how they have met learning outcomes, and develop reflective skills. This poster was designed as a response to the diversification of portfolio types, to remind staff to think of a portfolio as a coherent assessment rather than a series of individual materials brought together under one submission. Drawing on the work of David Baume (Baume 2001), and inspired by the models presented by Webb et al (2002), this poster encourages staff to design the structure of portfolios so as to encourage students to connect their pieces of work (the Spinal Column and Cake Mix models) rather than present them as separate (the Toast Rack and Shopping Trolley models [ibid.]). This helps reduce over-assessment by ensuring that everything submitted in the portfolio contributes to the assessment, and emphasises that assessment criteria must be set against the portfolio as a whole, meaning that the portfolio is marked as one piece of work. The case study of portfolios highlights an important point relevant to all assessments: that the learning rationale, assessment structure and marking scheme are mutually dependent and should be designed as a whole. Baume, D. 2001 A Briefing on the Assessment of Portfolios. York: Learning and Teaching Support Network. Available at http://www.heacademy.ac.uk/resources/detail/evidencenet/Briefing_on_assessment_of_portfolios. Webb, C. et al 2002 Models of Portfolios. Medical Education 36: 897-898. 63. The influence of research and practice on a set of open source eAssessment tools Tim Hunt The Open University, Milton Keynes, UK This poster examines the extent to which the functionality of a set of open source eAssessment tools used by a distance learning university has been informed by the research literature and by the practice of the university's teaching. This university has been doing online computer-marked assessment since 2002 (Butcher 2006). There have been a number of strands of innovation, including:
 immediate feedback with multiple tries (Ross et al. 2006);
 automatic marking of sentence-length answers (Jordan, 2012);
 automatic marking of mathematics (Sangwin, 2013).
These innovations have been fed into several open source software projects including Moodle, OpenMark and STACK. As well as the research inputs, the poster will also summarise the research outputs, for example evaluations of the effectiveness of the tools produced. 64. Video Based Assessment within Higher Education: Developing Tactical Knowledge on a Sports Coaching Module Martin Dixon, Chris Lee Staffordshire University, Stoke-on-Trent, UK Video simulation has been effectively utilised as a pedagogical tool to enhance professional practice (Miyata, 2002), and real learning abilities may be enhanced by student exposure to video simulation (Siegel, Omer & Agrawal, 1997). However, while approaches to teaching and learning are advancing in parallel with the utilisation of modern technologies, approaches to assessment have failed to keep pace (Herrington & Herrington, 2006).
The aim of this study was to investigate an innovative form of assessment using multimedia which meets both academic learning outcomes and industry needs. The video-based assessment was a summative component of the module 'Coaching and Teaching in Sport' at Staffordshire University, which aims to develop students' tactical knowledge across a range of invasion games. The video assessment was designed to engage students and promote assessment for learning through a blend of video, still images and text, enabling students with various learning styles to scrutinise material presented in their preferred mode of communication (Dede, 1996). Within this innovative assessment students observed a multimedia resource and analysed a range of tactical situations pertinent to sports coaching and physical education. Students were formally assessed on their knowledge and analytical skills relating to these simulations of practical scenarios.


Following an analysis of student focus groups, the poster aims to communicate the key findings of the study related to student learning and engagement, the validity of video-based assessments and any inherent practical issues. Implications for the use of video-based assessment within higher education and directions for future research will also be presented. 65. Researching assessment and exam practices in an online postgraduate program Andrew Chambers, Ruth Laxton University of New South Wales, Sydney, Australia This poster presents the path the University of New South Wales Masters of Business and Technology (MBT) program took to review its assessment practices, and in particular examinations, in its online program. The research resulted in a range of new assessment approaches in multiple forms and using multiple methods of delivery. The path consisted of: 1. A contracted research report; 2. A summary discussion with teaching and learning staff during a biannual workshop; 3. Outcomes of the discussion informing an internal Learning and Teaching Committee, which made program-level decisions; 4. Ongoing discussions between the Educational Development Manager (poster presenter) and course coordinators and course authors to implement new forms of assessment within their courses. While the initial proposal was to look at how to replace paper-based exams with electronic exams, the discussions that developed from the review of the research report led to a much richer conversation with many and varied assessment outcomes. The original 36-page research report will be available as an electronic download to poster viewers: http://www.student.mbt.unsw.edu.au/StudyGuides/Research%20report%20on%20exams%20in%20MBT%20assessment.pdf Details of the proposed changes to courses to accommodate newer modes of assessment will also be discussed with viewers during this poster session.
Background: The MBT Program is an applied Masters by coursework program aimed at managers and professionals in technology-driven environments. The MBT provides participants with the intellectual tools to manage and take up leadership roles where business and technology intersect. http://www.asb.unsw.edu.au/futurestudents/postgraduate/mbt/Pages/default.asp

66. A strategy to help students to actively engage with feedback provided: sharing experience from the undergraduate medical student selected component (SSC) programme
David Bell, Vivienne Crawford
Centre for Medical Education, The Queen's University of Belfast, Belfast, Northern Ireland, UK

Within the undergraduate medical curriculum, SSCs develop students' critical appraisal, presentation and communication skills while providing opportunities to study topics of personal interest in depth and experience potential career paths. ~160 SSCs are offered at QUB: each student chooses 5, spanning basic sciences, clinical specialties, ethics, humanities and community projects. SSCs are marked entirely via coursework. A standardised marking scheme assesses performance against generic outcomes and mandatory competencies, with inbuilt flexibility for use across the range of SSCs and assessment forms employed, including dissertations, posters, presentations, portfolios and case reports. Written feedback is appended. In common with other subject disciplines, our students often indicated that they wanted more detailed feedback on specific assignments undertaken; conversely, some did not access summative feedback, possibly because they viewed this as relating to specific tasks now completed, not recognising its broader transferability across and beyond the SSC programme. We piloted an approach to feedback in two different SSCs (Hypertension, End-of-life ethics), each of which required submission of a dissertation. Students submitted formative drafts mid-semester along with specific queries/feedback requests. Assessors provided written feedback on these drafts. Students were required to submit a short reflection (10% of final mark) with their revised dissertation indicating which recommendations they had/had not addressed, how and why. Comments from students have been positive, evidenced by module evaluations. In this paper, our strategy is described and evidence presented from student work that it improves active engagement with feedback and promotes development of key skills. Our approach is readily applicable to other disciplines.

67. Horizontal and Vertical Models: The Instant Feedback to Help Students Excel
Wajee Chookittikul1, Peter Maher2, Sukit Vangtan1
1 Phetchaburi Rajabhat University, Phetchaburi, Thailand, 2Webster University, St.
Louis, MO, USA

This research is designed to provide our students, through a computer application, with the instant feedback necessary, based on their performance, to help them learn better in their computer science and computer security subjects. One of the obstacles our students face is that not everyone can get feedback on their exercise questions in a timely manner. This is highly impractical even if the class size is small, because students may choose to study the materials at any time, and their instructor may not be available to give feedback at a convenient time and place. We have designed the software to give students feedback based on their answers to the exercise questions - very much like an instructor would do. The research proposes two models: the vertical and the horizontal. The former is designed to work with faster learners; the latter is designed for the more typical learner. We want to make sure that the vertical model helps strong students to advance their study efficiently, while the other group has a tool to help them understand the materials better at their own pace. A software prototype was created based on the two proposed models, which are consistent with B. F. Skinner's learning theory. During the design stage, we tested the prototype with both students and teachers to ensure that the software was user-friendly and met their expectations. The software is intended as a tool to complement each of their classes.
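The kind of answer-driven feedback described above can be sketched, in outline, as a lookup of per-question feedback rules. The question, answers and messages below are invented for illustration only; the abstract does not describe the authors' actual software at this level of detail:

```python
# Minimal sketch of rule-based instant feedback on exercise answers.
# All question content here is hypothetical, for illustration only.

FEEDBACK_RULES = {
    "q1": {
        "question": "Which port does HTTPS use by default?",
        "correct": "443",
        # Targeted hints for recognised wrong answers, as an instructor might give.
        "hints": {
            "80": "80 is the default port for plain HTTP, not HTTPS.",
            "8080": "8080 is a common alternative HTTP port, not the HTTPS default.",
        },
        "fallback": "Not quite - look up the well-known port assigned to HTTPS.",
    },
}

def instant_feedback(question_id: str, answer: str) -> str:
    """Return immediate feedback for a student's answer to one question."""
    rule = FEEDBACK_RULES[question_id]
    answer = answer.strip()
    if answer == rule["correct"]:
        return "Correct!"
    # A recognised misconception gets a targeted hint; anything else a generic nudge.
    return rule["hints"].get(answer, rule["fallback"])
```

On this skeleton, a horizontal-style model might attach longer chains of hints per wrong answer for slower pacing, while a vertical-style model might let a run of correct answers skip ahead to harder questions.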


68. Strategies for Formative Oral Assessment of EFL through Storytelling
Ana Fernandez-Caparros Turina, Rafael Alejo Gonzalez, Ana Maria Piquer Piriz
University of Extremadura, Extremadura, Spain

Testing speaking is an integral part of the assessment process of any foreign language course. Our poster will show the innovations and findings in this area from an action research study conducted with second-year teacher trainees at the University of Extremadura, Spain. The implementation of university courses adapted to the EHEA has created a new scenario in which students need to reach the international standards of achievement established by the CEFR (level B1) and, simultaneously, develop a series of general, specific and cross-curricular competences incorporated in the course syllabus. The consideration of this framework is crucial in the needs analysis for test development, as it demands that a major emphasis be placed on the testing of contextualised language ability (Douglas, 2010). The major innovation carried out in our course, English for Primary Education, as a means to respond to the students' context as teacher trainees and reinforce the educational goals of the curriculum, was to have them engage in an authentic task: performing a reading of a children's picture book in English in front of an audience. If task authenticity is a distinguishing feature of LSP testing (Douglas, 1997), it is also vitally important for creating an Assessment for Learning environment (Sambell, McDowell & Montgomery, 2013). Additional strategies introduced to foster AfL, whose results will be shown and discussed, were: rehearsals, public use of an assessment rubric created ad hoc, and peer assessment.

69.
Facilitating Transitional Learning: An Empirical Investigation into Undergraduate Students' Perceptions of the Impact of Formative Assessment at Level 4 before entering Level 5
Nikki Woods
University of Northampton, Northampton, UK

Formative assessment is assessment for learning, whereas summative assessment is assessment of learning (Sadler, 1998: 77; Biggs and Tang, 2007: 163-164). Formative assessment allows deep-learning, higher-zoned students (Bloom's Taxonomy, 1956) the opportunity to self-assess because they understand the level of assessment they are striving for and can therefore judge their own performance (Sadler, 1998; Juwah et al., 2004). This encourages reflective and independent learning: the student is motivated to do better and to employ deep learning. Additionally, taxonomic reviews of research (Black and Wiliam, 1998; Sadler, 1998; Falchikov, 2005) advocate strong benefits to students being active and involved in their own assessment. The research consisted of 168 Level 4 social science student respondents. Part 1: assignments 1 and 2 - two 750-word assignments were returned to the students with formative assessment from their tutors. The students completed a questionnaire regarding their perceptions of their tutors' feedback - in particular, how this had impacted, helped, enhanced, facilitated or supported their studies before completing Part 2. Part 2: assignments 3 and 4 followed the same process as Part 1, but the students completed these using the feedback received from assignments 1 and 2. Thereafter, the students completed a second questionnaire regarding their tutors' feedback for assignments 3 and 4. This research will investigate the


students' assessment of our feedback from both Parts 1 and 2 to extract the impact, if any, this two-tier assessment has had on their learning and experiences before entering Level 5.

70. A Multifaceted Bioinstrumentation Assessment Approach in the Rehabilitation Sciences
Ricardo Simeoni
Griffith University, Gold Coast, Australia

This study focuses on a second-year Bioinstrumentation course with learning objectives including the development of student skills, within a relevant health science context, in areas such as: computer (LabVIEW) programming; data acquisition and computer interfacing; signal processing and conditioning; sensor operation and application; and electronic circuit design, construction and analysis. The course (n ≈ 150) is a prerequisite for a Bioinstrumentation-in-Physiotherapy course involving the clinical application of electrotherapy modalities. A teaching philosophy[1] similar to that utilised within a foundation Biophysics course is applied, together with authentic, learning-orientated assessment. The evidence-based assessment practices cater for a wide range of student learning styles, place high emphasis on practical skill development, and embed student engagement techniques to facilitate student ownership of the curriculum and inherent academic mentoring. Examples include: vocation-related projects such as development of an EMG system; a journal article-based written assignment investigating the selection rationale of a utilised sensor; and a group instrumentation project which developed a skills staircase (with the best project selected for first-year curriculum application). Course evaluations show 83% of respondents agree/strongly agree that assessment and its feedback are fair, clear and helpful (high for a physics-based health science course).
Average grades are strong (≈77%) and passing thresholds are set within examination, summative quiz, computer laboratory, and electronic laboratory components to encourage across-the-board engagement. The assessment-as-pedagogy practice and student commentaries on it are presented.

[1] R. J. Simeoni, Positive student outcomes achieved within a large-cohort health foundation year course in the face of a changing and challenging educational environment, Practice and Evidence of the Scholarship of Teaching and Learning in HE, 6(2), 2011, 249-267.

71. Student engagement in formative assessment
Lynda Cook, Diane Butler, Sally Jordan
Open University, Milton Keynes, UK

We are interested in evaluating the move to using formative, rather than summative, assessment for Level 1, 30-credit modules. The main driver for the change to formative assessment is to reduce costs, but also for student feedback to focus on providing support for progression rather than on attainment. In this poster, we compare two modules, both of which employ formative assessment, with a view to gaining further insight into students' understanding of the formative threshold and the impact on student engagement. This project forms part of a larger project within the Science Faculty to evaluate the impact of formative assessment.


Two 30-credit Level 1 modules were selected for comparison in this study. Both modules were newly developed for the academic year in which higher student fees were introduced (October 2012). Both incorporated a 40% threshold for the formative assessment, the purpose of which was to encourage students to engage with the formative component of the modules. Once the threshold was passed, final results for both modules were determined solely by an end-of-module examination. One key difference between the modules was that an extra 70% 'threshold' was included for one of them: this higher threshold had to be achieved for students to have the opportunity to attain a Distinction grade, with a view to encouraging student engagement. This poster will use student submission data for the formative components of each module as a means to compare student engagement.

72. Alternative approaches to the assessment of statistical reasoning
Helen Harth, Ian Jones
Loughborough University, Loughborough, UK

Students from a range of disciplines need to develop and enhance their statistical knowledge and skills during their undergraduate studies. The learning, teaching and, crucially, the assessment of statistics at this level are challenging. Current assessment strategies for statistics modules can include innovative methods such as performance-based, more open-ended, 'authentic' tasks that require the student to simultaneously apply their knowledge and skill to create a product (e.g. a report) or solve a problem. However, there is still a mismatch between the portrayed intentions of these assessments and the student outcomes. Validity of the assessment results is crucially important if institutions and users of these qualifications want to defend the worth of the assessment results, regardless of their intended purpose. This poster first summarises issues in the assessment of statistics modules in higher education.
Using data from twenty modules and published research, it then presents an approach to the validation (evaluation) of statistics module results using the construct model as a unified model for validity. It concludes with an example of how such a module could be evaluated. This work is part of a project investigating effective strategies for the assessment of undergraduate-level introductory statistics modules, to ensure they promote desired ways of thinking with statistical material after learning has taken place, e.g. in a more advanced course, personal life or in employment.

73. Peer assessment without assessment criteria
Ian Jones, Lara Alcock, David Sirl
Loughborough University, Loughborough, UK

Peer assessment typically involves students making judgements of their peers' work against criteria. We tested an alternative approach in which students judged pairs of scripts against one another in the absence of assessment criteria. First-year mathematics undergraduates (N = 194) sat a written test on conceptual understanding of multivariable calculus, then assessed their peers' responses using a comparative judgement approach. Comparative judgement involves presenting assessors with pairs of student work and asking them to decide which is "better" in terms of a global construct such as "conceptual understanding". The outcomes of many such decisions are then used to construct a scaled rank order of student work from "worst" to "best".
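Pairwise decisions of this kind are commonly scaled with a Bradley-Terry model. The abstract does not state which scaling method the authors used, so the following is only a minimal, self-contained sketch (hypothetical script ids, simple iterative updates):

```python
from collections import defaultdict

def bradley_terry(comparisons, iters=200):
    """Fit Bradley-Terry strengths from pairwise judgements.

    comparisons: list of (winner, loser) script ids.
    Returns a dict mapping script id -> strength (higher = judged better),
    using a simple minorisation-maximisation update (Hunter, 2004).
    """
    wins = defaultdict(int)        # total wins per script
    pair_count = defaultdict(int)  # times each unordered pair was compared
    items = set()
    for winner, loser in comparisons:
        wins[winner] += 1
        pair_count[frozenset((winner, loser))] += 1
        items.update((winner, loser))

    strength = {i: 1.0 for i in items}  # uniform initial strengths
    for _ in range(iters):
        new = {}
        for i in items:
            # Denominator: expected comparison "load" against each opponent.
            denom = 0.0
            for pair, n in pair_count.items():
                if i in pair:
                    (j,) = pair - {i}
                    denom += n / (strength[i] + strength[j])
            new[i] = wins[i] / denom if denom > 0 else strength[i]
        total = sum(new.values())
        # Normalise so strengths average to 1 (the scale is arbitrary).
        strength = {i: len(items) * v / total for i, v in new.items()}
    return strength
```

Sorting scripts by fitted strength yields the scaled rank order described in the abstract; split-half reliability can then be checked by fitting the model separately on two random halves of the judges and correlating the two rankings.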


Judgement reliability was investigated by randomly assigning the students to two groups and correlating the two groups' assessments. Validity was investigated by correlating the peers' assessments with (i) expert assessments, (ii) novice assessments, and (iii) marks from other module tests. We found high validity and inter-group reliability, suggesting that the students performed well as peer assessors. On this basis we repeated the exercise the following year and used the peer assessment outcomes summatively to assign grades to students. In the poster we will present an overview of the rationale and study design, and consider the implications of the findings for future research and practice in HE assessment.

74. Cohort diversity - A case for cafeteria assessment mechanisms?
Sheena Bevitt
University of Derby, Derby, UK

With a continuing focus on internationalisation in Higher Education (HEA, 2012), tutors face the challenge of supporting students with a wide range of assessment experiences and preferences. This poster presents primary qualitative data from a research project exploring the impact of assessment familiarity and preferences. The results support the contention that "students' perceptions of assessment and their accompanying approaches to learning are very personal and individual constructions of the learner" (Struyven et al., 2005: 343). Understanding and managing the expectations these may create can therefore be problematic for tutors, where "national or regional categorisations will not always help teachers to understand the requirements of a diverse student cohort" (Welikala, 2013: 6). Drawing on reward practices in organisations, this poster presents a possible solution to managing the problem: the development of cafeteria assessment mechanisms, which offer students the choice to tailor their own assessment pathway through a programme of study.
A call for practice-based research to explore the potential benefits and problems of this approach is presented, along with the key research questions which need to be addressed.

HEA (2012). Internationalisation [Internet]. Available at http://www.heacademy.ac.uk/internationalisation (accessed May 2013).
Struyven, K., Dochy, F. and Janssens, S. (2005). 'Students' perceptions about evaluation and assessment in higher education: a review', Assessment and Evaluation in Higher Education, 30(4), 331-347.
Welikala, T. (2013). Student experience in an interconnected and diverse world (SEID) [Internet]. Available at http://www.heacademy.ac.uk/resources/detail/internationalisation/Connections-report-liverpool-hope-Welikala (accessed May 2013).

75. A strategy to familiarise undergraduate students with assessment methods and standards and to provide informal feedback on progress: experience from a final year pharmacology course
David Bell, Malcolm Campbell
Centre for Medical Education, The Queen's University of Belfast, Belfast, Northern Ireland, UK

We describe a strategy to increase student awareness of assessment methods and standards and provide formative feedback to improve examination performance. This strategy was piloted in a Level 3 pharmacology module in


which the written examination comprised both essay-type and structured questions. We devised several tutorial classes to complement the lecture programme and enable students to progress through the experiential taxonomy of learning by applying material covered in the lectures and clarifying any concepts they were uncertain about. In advance of each tutorial, each student drafted an answer to a previous examination question on a given topic. At the start of the tutorial, working in small groups, students then formulated their group's best answer. A model answer and marking scheme were then distributed in class, with groups swapping their answers and marking each other's work. Students were encouraged to identify any specific strengths, errors or omissions. Students were then given anonymised examples of actual answers written under examination conditions by previous students, representative of standards spanning excellent to poor, together with the marks awarded and examiners' comments. Students were asked to identify specific reasons for the varying marks awarded by the examiners. Plenary discussion centred on the observations of the class. This strategy has proven very effective. Students report greater awareness of the examination format, reducing anxiety. They are also more familiar with the process of constructing a well-written answer, and with the amount of factual detail, originality and critical reflection expected. This approach is readily applicable to other disciplines.

76.
The policy paradox: the risk of unwelcome consequences of well-intended assessment policy
Clair Hughes1, Simon Barrie2, Geoffrey Crisp3, Anne Bennison1
1 The University of Queensland, Brisbane, Queensland, Australia, 2The University of Sydney, Sydney, New South Wales, Australia, 3RMIT University, Melbourne, Victoria, Australia

In promoting learning as a key driver of assessment policy, institutional decision-makers can also run the risk of inhibiting effective practice through deliberate trade-offs or failure to anticipate undesirable outcomes. A recent national assessment project in Australia - Assessing and Assuring Graduate Learning Outcomes (AAGLO) - identified examples of assessment policy and guidelines with a positive impact on student learning. However, it also identified instances where well-intentioned efforts to prevent poor practice had resulted in the development of policy with negative, though unintended, consequences, or where laudable assessment policy was not carried through because of inadequate infrastructure or inconsistency with policy in related areas. The eight key policy issues emerging from interviews were:
1. Program fragmentation
2. Policy gaps and inconsistencies
3. Standard grade cut-offs
4. Norm-referenced approaches to moderation
5. Mandatory course task variety
6. Upper and lower limits on the number of tasks
7. Inclusion of non-achievement factors in grade calculations
8. Mandatory provision of detailed criteria and standards for assessment judgements

The poster will provide an overview of the likely background or origin of each issue and an explanation of its significance. It is acknowledged that all policy


development involves the consideration, evaluation and selection of options appropriate to the educational context in which implementation is to occur. The aim of the poster is to promote informed decision-making by raising awareness of potential risk factors, rather than to advocate standardised approaches to policy development in support of assessment for learning.

77. IT Office Ergonomics: A Strategy to Assess Doctoral Students in the ASEAN Environment
Wajee Chookittikul1, Siripong Teopipithporn1, Peter Maher2, Pachara Payao1
1 Phetchaburi Rajabhat University, Phetchaburi, Thailand, 2Webster University, St. Louis, MO, USA

The School of Information Technology, PBRU, Thailand, has offered a doctoral degree in Quality Information Technology (QIT) for Thai students since 2003. The degree focuses on how to transform any business into a successful IT-driven business with quality in terms of, for example, employee happiness. Because of the 'moving forward to the ASEAN community' initiative, the degree is now being revised to welcome students from ASEAN countries. The QIT program has to be well designed, with appropriate courses for all ASEAN member countries and an effective assessment scheme. Each student's educational and work background should be considered when assessing their class projects. Therefore, an assessment scheme based on the concept of IT Office Ergonomics (ITOE), the key feature of the QIT degree, is proposed. The first version of the scheme was created based on data collected from Thailand, Vietnam, and Singapore. The ITOE principles state that there are eight perspectives related to the well-being of humans at work, known as efficiency, safety, health, comfort, people, tool, process, and place. Under each perspective, we have identified indicators appropriate for assessing a student's project. For example, the "efficient" working hours differ from one ASEAN country to another.
An instructor of the course can then use the appropriate indicators to assess the student's work based on the working environment with which the student is familiar. The indicators suggested in the ITOE for assessing doctoral students' projects and results are included in the poster.

78. Using the Assessment Lifecycle to enhance assessment practice
Rachel Forsyth, Rod Cullen, Neil Ringan, Mark Stubbs
Manchester Metropolitan University, Manchester, UK

As part of a JISC-funded Assessment and Feedback project, the Transforming Assessment and Feedback For Institutional Change (TRAFFIC) team have introduced the Assessment Lifecycle as an organising principle. This lifecycle has been used in a variety of ways: to specify an electronic assessment management system, to identify administrative and support needs at different stages of assessment, and to help with curriculum planning. The poster will provide examples of these uses, and there will be a chance to share and discuss supporting resources which have been produced for the various applications of the lifecycle, all of which are available for reuse under a Creative Commons licence.



79. The impacts of assessment on professional development in educational postgraduate programmes - a tale of two universities
Nhan Tran
Newcastle University, Newcastle upon Tyne, UK

This exploratory research is to be conducted using a phenomenological approach, with the participation of Vietnamese student teachers pursuing a Master's in English programme at home and abroad. The central question anchoring the entire research project is how different schemes of assessment impact the professional development of Master's students, particularly the formation of their teacher/researcher identities, or so-called professional identities. Discussion and analysis will centre on the social, affective and cognitive aspects of this formation/transformation. It is hoped that this contrastive analysis of the two assessment systems of two different cultures will yield further insights and understanding of assessment in an international education setting.

80. Promoting competency and confidence through assessment for learning
John Stephens, Jill Gilthorpe
Northumbria University, Newcastle upon Tyne, UK

Students entering undergraduate, pre-registration physiotherapy education do so from a diverse range of educational backgrounds. The challenges facing students and staff are multiple and complex, not just from the perspective of an academic award but also in meeting Professional Statutory Regulatory Body (PSRB) requirements (CSP, 2010; HCPC, 2012). This poster reports on the development and evaluation of what we have termed Collaborative Structured Practical Self Evaluation (CSPSE) workshops, which aim to provide low-stakes, feedback-rich learning opportunities for students as they work towards summative assessment (McDowell, 2007; Wake and Watson, 2007).
Within a semester-long Level 4 module providing an introduction to cardiopulmonary physiotherapy practice, CSPSE is employed at three points to consolidate knowledge and skills developed within the module and act as clear 'stepping points' towards summative assessment (a practical viva). Students work in pairs to promote peer feedback and record their level of confidence on a simple 0-10 scale related to a series of activities based around relevant clinical knowledge and skills. A supportive environment for individualised student learning is promoted within a peer-coaching reflective mechanism. Evaluation suggests emergent themes that are interdependent and include recognition of key skills and knowledge, learning and continuing professional development, the value of feedback, developing confidence as a learner, and enjoyment. Although it provides opportunities for students to explore ideas, rigorous structure and timing are required for success. Planning and preparation do take time, but are a valuable investment to ensure an organised and relaxed learning environment.



81. Group Work Assessment: Improving the student experience through the Belbin Team-Role Self-Perception Inventory
Joanna MacDonnell
University of Brighton, Brighton, UK

Group work is an essential element in many Higher Education courses and an established method for teaching and assessing project-based work. However, group work assessment can be problematic and become a negative experience, with groups breaking down and failing to function as teams. Many problems stem from the construction of the group and students' lack of understanding of how individuals contribute to teams. Therefore the manner in which groups are formed, and how students understand the purpose of a group, becomes a crucial consideration for tutors, particularly when dividing a cohort of new and unknown students into groups. This poster examines data collected from an empirical study which investigated the use of Belbin's Team-Role Self-Perception Inventory (BTRSPI) to form groups. The study revealed that there are benefits in using the BTRSPI for both the student and the tutor: it provides students with a reflective tool which gives them more understanding of their own personality, of the kind of role they perform in a group, and of how other students can contribute differently to groups. It also permits tutors to form balanced groups, constructed to contain a mix of personalities who have, through the BTRSPI, shown a propensity to work well together. A mixed methodology (interviews, action research and surveys) was used to gather the data which supports the findings of this study of Level 4 Media Production students.

82. Viva! the Viva
Julie Peirce-Jones
University of Central Lancashire, Preston, UK

In Higher Education, 'assessment' describes any process that appraises an individual's knowledge, understanding, abilities or skills.
Within the BSc Operating Department Practice course, students complete a module entitled Developing Professional Practice. The aim of this module is to establish the principles of professional practice and facilitate lifelong learning. This focus enables students to develop an understanding of the role of professionals, of the scope and limitations of professional practice, and of how this is supported by personal development planning. The viva is utilised as a summative assessment that requires the student, during an oral examination, to relate theory to practice, focusing on professional aspects and incorporating module learning outcomes.

83. A Cross-Institutional Approach to Assessment Redesign: Embedding Work-Integrated Assessments within the Curriculum
Elisabeth Dunne, Richard Osborne, Charlotte Anderson
University of Exeter, Exeter, UK

The poster describes a research and development project focusing on embedding work-integrated assessments within the curriculum at the University of Exeter. In order to close the gap between traditional means of assessment and the skills deemed desirable by employers, work-integrated assessments require the


embedding of workplace skills into the University academic curriculum, with tasks and conditions that are more closely aligned to an employment context. The poster highlights a model for the design of such assessments, focusing on six Dimensions with the potential to enable students to develop employability skills, attributes and awareness. These dimensions have been generated through research and discussion with key stakeholders - employers, staff and students - as well as through reviewing the associated literature on assessment, in particular 'authentic assessment'. The use of the model to date has been highly successful, with staff from any discipline finding it easy to describe their current activities and future ambitions for any module. A further aspect of the project has been a detailed exploration of how technology can help nurture skills associated with employment by supporting the six dimensions, and how staff and students can use these technologies flexibly and effectively. A bank of over 60 technologies with the potential to be supportive in an educational context has been aligned with the Dimensions model to create a series of 'top trumps' cards, suitable for staff development, which summarise the potential affordances of each technology. All project resources will be available online and the poster will have an added interactive dimension.

84. Assessing the Assessment Process: Dealing with Grading Inconsistency in the Light of Institutional Academic Performance
Dawid Wosik
Higher Colleges of Technology, Fujairah, United Arab Emirates

The paper places the assessment process within the context of quality, accountability and institutional performance in academia. Assessing students plays a significant role in providing a holistic picture of the quality of an academic institution. Quality in higher education can be defined as a multidimensional and multi-layer concept which depends on requirements set by different groups of interest (Vlăsceanu et al., 2007).
Likewise, the quality of the assessment process will be described differently depending on different stakeholders' requirements and hence the purpose served (Bloxham, 2008). "Students should be assessed using published criteria, regulations and procedures which are applied consistently" (European Association for Quality Assurance in Higher Education, 2009). These criteria, regulations and procedures need to be communicated effectively among all interested parties. The lack of requirements and regulations in this matter, as well as the lack of specific performance measures, results in a false image of students', and consequently graduates', academic ability (Wosik, 2013). The paper outlines a practical approach to measuring the quality of the assessment process. It argues for the importance of performance measures that address the grading inconsistency issue while managing institutional effectiveness in higher education. Grade distribution, common examinations, differences between coursework and final exam results, as well as grade lift metrics, are discussed in the paper as practical examples of performance measures of the assessment process (Bond, 2009; Millet, 2010).

78


85. Student perceptions of the value of Turnitin as a learning tool.
Carol Bailey, Rachel Challen
University of Wolverhampton, Wolverhampton, UK

University X has been using Turnitin as a teaching aid with small groups of students since 2007, but in 2011 changed its policy to encourage student access on a formative basis across the institution. In one School, all students undertaking final year undergraduate projects were invited to submit multiple drafts up to one week before the final deadline. For 280 students, access was facilitated in the context of their Independent Study module. The remaining 468 students were granted access through free-standing optional workshops and subsequent online guidance. Student use of the software was monitored, and those who attended the workshops were invited to express their views on its value as a learning tool. The proportion of students who accessed Turnitin was substantially higher where it was introduced within a module (76% and 78%) than through the extracurricular workshops (17%). This supports the findings of Emerson et al. (2005) and Davis and Carroll (2009) that tutor involvement can optimise the educational benefits of the software. The number of draft resubmissions was higher than we had expected from the study by Wright et al. (2008). The majority of workshop participants thought that, despite certain limitations, Turnitin was helpful in learning about appropriate source use, and wished it had been introduced earlier in their degree course. Given that these students were in their sixth undergraduate semester, a surprisingly high number expressed anxiety regarding the risk of unintentional plagiarism.

86. Formative thresholded assessment: Evaluation of a faculty-wide change in assessment practice.
Sally Jordan, Janet Haresnape
The Open University, Milton Keynes, UK

This practice exchange aims to engage participants in discussion of a faculty-wide move towards ‘formative thresholded' continuous assessment, and the evaluation of its impact on student engagement. Concepts and findings of wider relevance will be explored and appropriate research methodologies will be discussed. Previous practice in the Faculty has been for all modules to be assessed by a combination of summative continuous assessment, with extensive feedback comments, and an end-of-module task (an examination or an extended assignment). This practice, although well established and apparently well received, has led to concerns, as reported elsewhere, that staff and students have a different understanding of the purpose of continuous assessment: staff see its purpose as primarily formative whilst students are primarily concerned with obtaining high marks. The revised practice still requires students to meet a threshold for their overall continuous assessment score, but the final grade is determined by the end-of-module assessment alone. The decision to alter assessment practice was made largely to save staff resource. However, early indications are that student engagement has been maintained, whilst allowing tutors and students to focus on the feedback provided on assignments rather than on the minutiae of grading. The evaluation of the change in practice has been split into small practitioner-led sub-projects, comparing impact across different modules and levels, with the aim

79


of identifying factors that lead to improved engagement. Sub-projects are both quantitative (e.g. comparing assignment completion rates before and after the change) and qualitative (e.g. investigating student and tutor perceptions and opinions).

87. Dialogic feedback and potentialities for students' sense making
Anna Therese Steen-Utheim
BI, Norwegian Business School, Oslo, Norway

Research topic/aim: This paper reports on a case study of a bachelor course in international communication at a Norwegian university college. The aim is to explore in what ways interaction in oral dialogic feedback sessions provides learning potentialities for students. Research has demonstrated the potential positive effects feedback can have on learning (Black and Wiliam, 1998; Hattie and Timperley, 2007) because it helps students clarify misconceptions and faults (Sadler, 1989). Feedback can also support students' self-regulation, enabling them to monitor the quality of their work (Sadler, 1989). However, to foster self-regulated learners, students need to be involved in dialogic interaction and communication about feedback (Carless et al., 2010; Higgins et al., 2001), because dialogue creates potentialities for joint sense making.

Theoretical framework: A sociocultural perspective on learning and assessment, in combination with a dialogical view on human action, communication and cognition, frames this study (Linell, 1998; 2009; Vygotsky, 1978). Assessment and learning are considered interdependent processes, and feedback is seen as interaction and a complex form of communication.

Methodology/data material: The study is framed within a qualitative approach using interaction analysis and the following concepts: participation structure, and potentialities and capacities for change (Linell, 2009). The material consists of audio recordings of 84 students' oral dialogic feedback sessions, all transcribed and analysed.
Preliminary findings: The preliminary findings suggest that participation in oral dialogic feedback engages students in meaning making about the feedback. Moreover, the dynamic and prolonged character of oral dialogic feedback creates potentialities for actualizing students' capacities for change.

88. KEYNOTE Plato versus AHELO: The nature and role of the spoken word in assessing and promoting learning.
Gordon Joughin
The University of Queensland, Brisbane, Queensland, Australia

See page 52.

80


Presenter Index

Adie, Lenore
Queensland University of Technology, Australia
Investigating moderation practices in a Faculty of Education
Day 2: Poster Session 1 - Marking & Academic Standards - 27th June 2013 11.30

Allsopp, Nick
De Montfort University, UK
Offering carrots or wielding sticks: the voluntary or mandatory use of eAssessment and eFeedback to improve learning and teaching in times of change in HE
Day 1: Parallel Session 1 - 26th June 2013 11.20

Asner, Ida
How Educational Practices Change the Evaluation and Assessment: Making Learning Meaningful through ePortfolios
Day 1: Parallel Session 6 - 26th June 2013 16.50

Asting, Cecilie
BI Norwegian Business School, Norway
Student response system used for motivation and learning outcome
Day 2: Parallel Session 8 - 27th June 2013 09.50

Bailey, Carol
University of Wolverhampton, UK
Student perceptions of the value of Turnitin as a learning tool.
Day 2: Parallel Session 10 - 27th June 2013 13.30
Working together: can collaboration between academic writing/subject specialists improve students' performance in written assessments?
Day 2: Parallel Session 8 - 27th June 2013 09.50

Bell, David
Centre for Medical Education, The Queen's University of Belfast, UK
A strategy to familiarise undergraduate students with assessment methods and standards and to provide informal feedback on progress; experience from a final year pharmacology course.
Day 2: Poster Session 5 - Assessment for Learning - 27th June 2013 11.30
A strategy to help students to actively engage with feedback provided: sharing experience from the undergraduate medical student selected component (SSC) programme.
Day 2: Poster Session 4 - Engaging Students in Assessment & Feedback - 27th June 2013 11.30

Bettley, Alison
The Open University, UK
Releasing creativity in assessment
Day 1: Parallel Session 3 - 26th June 2013 15.00

81


Bevitt, Sheena
University of Derby, UK
Cohort diversity - A case for cafeteria assessment mechanisms?
Day 2: Poster Session 4 - Engaging Students in Assessment & Feedback - 27th June 2013 11.30

Bloxham, Sue
University of Cumbria, UK
Examining the examiners: investigating the understanding and use of academic standards in assessment
Day 1: Parallel Session 3 - 26th June 2013 15.00

Boyd, Pete
University of Cumbria, UK
How university tutors learn to grade student coursework: professional learning as interplay between public knowledge and practical wisdom
Day 2: Poster Session 1 - Marking & Academic Standards - 27th June 2013 11.30

Brock, Veronica
University of Wolverhampton, UK
Collaborations and Celebrations: the highs and lows of a decade of working with collaborative assessment.
Day 2: Parallel Session 9 - 27th June 2013 10.35

Brown, Sally
Leeds Metropolitan University, UK
Making the most of Masters Level assessment
Day 1: Parallel Session 1 - 26th June 2013 11.20

Carless, David
University of Hong Kong, Hong Kong
Learning-oriented assessment task design
Day 1: Parallel Session 1 - 26th June 2013 11.20

Carver, Mark
University of Cumbria, UK
Perceptions of fairness and consistency regarding assessment and feedback in trainees experiencing a difficult or failed professional placement.
Day 2: Parallel Session 8 - 27th June 2013 09.50

Chambers, Andrew
University of New South Wales, Australia
Researching assessment and exam practices in an online postgraduate program
Day 2: Poster Session 3 - Transitions in Assessment - 27th June 2013 11.30

Chapman, Amanda
University of Cumbria, UK
'I can't hand it in yet, it's not perfect': Mature students' experience of assessment and contested identities.
Day 1: Parallel Session 2 - 26th June 2013 12.00

82


Child, Simon
Cambridge Assessment, UK
"I've never done one of these before." A comparison of the assessment 'diet' at A level and the first year at university.
Day 1: Parallel Session 4 - 26th June 2013 15.40

Chookittikul, Wajee
Phetchaburi Rajabhat University, Thailand
Horizontal and Vertical Models: The Instant Feedback to Help Students Excel
Day 2: Poster Session 4 - Engaging Students in Assessment & Feedback - 27th June 2013 11.30
IT Office Ergonomics: A Strategy to Assess Doctoral Students in the ASEAN Environment
Day 2: Poster Session 5 - Assessment for Learning - 27th June 2013 11.30

Clarke, Sam
University of Sheffield, UK
Upskilling students in lectures using student polling software
Day 2: Parallel Session 7 - 27th June 2013 09.15

Cook, Lynda
Open University, UK
Student engagement in formative assessment
Day 2: Poster Session 4 - Engaging Students in Assessment & Feedback - 27th June 2013 11.30

Delaney, Calum
Cardiff Metropolitan University, UK
How do assessors mark? What do they see in written work, and what do they do with it?
Day 1: Parallel Session 4 - 26th June 2013 15.40

Dermo, John
University of Bradford, UK
Using technologies to engage students in assessment and feedback: a case study from Biomedical Sciences
Day 1: Parallel Session 2 - 26th June 2013 12.00

Dixon, Martin
Staffordshire University, UK
Video Based Assessment within Higher Education: Developing Tactical Knowledge on a Sports Coaching Module
Day 2: Poster Session 2 - Assessment Technologies - 27th June 2013 11.30

Fan, Meng
Newcastle University, UK
International students' perceptions of formative peer assessment in the British university: a case study
Day 1: Parallel Session 6 - 26th June 2013 16.50

83


Fernandez-Caparros Turina, Ana
University of Extremadura, Spain
Strategies for Formative Oral Assessment of EFL through Storytelling.
Day 2: Poster Session 4 - Engaging Students in Assessment & Feedback - 27th June 2013 11.30

Foo, Martin
Northumbria University, UK
Exploring formative assessment with international students
Day 1: Parallel Session 3 - 26th June 2013 15.00

Forsyth, Rachel
Manchester Metropolitan University, UK
University Standards Descriptors: can you please all of the people even some of the time?
Day 2: Parallel Session 7 - 27th June 2013 09.15
Using the Assessment Lifecycle to enhance assessment practice
Day 2: Poster Session 5 - Assessment for Learning - 27th June 2013 11.30

Gilthorpe, Jill
Northumbria University, UK
Promoting competency and confidence through assessment for learning
Day 2: Poster Session 5 - Assessment for Learning - 27th June 2013 11.30

Glofcheski, Rick
University of Hong Kong, Hong Kong
An Authentic Assessment Proposal: Current Events as Reported in the News Media
Day 2: Parallel Session 8 - 27th June 2013 09.50

Gray, Claire
Plymouth University, UK
You show me yours... comparative evaluations of assessment practice and standards.
Day 1: Parallel Session 5 - 26th June 2013 16.20

Handley, Fiona
Southampton Solent University, UK
Creating and Marking Portfolios Effectively
Day 2: Poster Session 1 - Marking & Academic Standards - 27th June 2013 11.30
The Introduction of Grade Marking at Southampton Solent University
Day 2: Parallel Session 9 - 27th June 2013 10.35

Hanna, Bridget
Edinburgh Napier University, UK
Assessing the change or changing the assessed?
Day 1: Parallel Session 4 - 26th June 2013 15.40

84


Haresnape, Janet
Open University, UK
Analysis of student comments following participation in an assessed online collaborative activity based on contributions to a series of wiki pages.
Day 2: Parallel Session 7 - 27th June 2013 09.15

Harth, Helen
Loughborough University, UK
Alternative approaches to the assessment of statistical reasoning
Day 2: Poster Session 4 - Engaging Students in Assessment & Feedback - 27th June 2013 11.30

Havnes, Anton
Oslo and Akershus University College of Applied Sciences, Norway
Learning outcomes - balancing pre-defined standards and the benefits of unexpected learning?
Day 1: Parallel Session 5 - 26th June 2013 16.20

Headington, Rita
University of Greenwich, UK
Reflections on feedback - feedback on reflection
Day 2: Parallel Session 9 - 27th June 2013 10.35

Hepplestone, Stuart
Sheffield Hallam University, UK
Understanding student learning from feedback
Day 2: Parallel Session 7 - 27th June 2013 09.15

Hughes, Clair
The University of Queensland, Australia
Generating credible evidence of academic outcomes and standards: perspectives of academics in Australian universities
Day 1: Parallel Session 1 - 26th June 2013 11.20
The policy paradox: the risk of unwelcome consequences of well-intended assessment policy
Day 2: Poster Session 5 - Assessment for Learning - 27th June 2013 11.30

Hunt, Tim
The Open University, UK
The influence of research and practice on a set of open source eAssessment tools
Day 2: Poster Session 2 - Assessment Technologies - 27th June 2013 11.30

Hurford, Donna
University of Cumbria, UK
Should we stop calling it assessment? What influences students' perceptions and conceptions of assessment and AfL?
Day 1: Parallel Session 5 - 26th June 2013 16.20

Jankowski, Natasha
University of Illinois Urbana-Champaign, USA
Using Assessment Evidence to Improve Student Learning: Can It Be Done?
Day 1: Parallel Session 4 - 26th June 2013 15.40

85


Jones, Ian
Loughborough University, UK
Peer assessment without assessment criteria
Day 2: Poster Session 4 - Engaging Students in Assessment & Feedback - 27th June 2013 11.30

Jordan, Sally
The Open University, UK
Formative thresholded assessment: Evaluation of a faculty-wide change in assessment practice.
Day 2: Parallel Session 10 - 27th June 2013 13.30
Master Class: Producing high quality computer-marked assessment
Day 1: Pre-conference Master Classes - 26th June 2013 09.30

Joughin, Gordon
University of Queensland, Australia
Decision-making for assessment: explorations of everyday practice
Day 1: Parallel Session 2 - 26th June 2013 12.00
KEYNOTE Plato versus AHELO: The nature and role of the spoken word in assessing and promoting learning.
Day 2: Keynote & Poster Award - 27th June 2013 14.10
MASTERCLASS: Using oral assessment
Day 1: Pre-conference Master Classes - 26th June 2013 09.30

Kleeman, John
Questionmark, UK
Assessment Feedback - What Can We Learn from Psychology Research
Day 2: Parallel Session 9 - 27th June 2013 10.35

Lau, Alice
University of Glamorgan, UK
The value of technology enhanced assessment - sharing the findings of a JISC funded project in evaluating assessment diaries and GradeMark
Day 1: Parallel Session 4 - 26th June 2013 15.40

Long, Phil
Anglia Ruskin University, UK
Lost in translation: what students want from feedback and what lecturers mean by "good"
Day 1: Parallel Session 4 - 26th June 2013 15.40

Lowe, Cecilia
University of York, UK
Getting the Devil off our back: Reflections on Table Marking in Psychology
Day 2: Poster Session 1 - Marking & Academic Standards - 27th June 2013 11.30

MacCallum, Janis
Edinburgh Napier University, UK
An evaluation of the effectiveness of audio feedback, and of the language used, in comparison with written feedback.
Day 2: Parallel Session 9 - 27th June 2013 10.35

86


MacDonnell, Joanna
University of Brighton, UK
Group Work Assessment: Improving the student experience through the Belbin Team-Role Self-Perception Inventory
Day 2: Poster Session 5 - Assessment for Learning - 27th June 2013 11.30

Maguire, Sarah
University of Ulster, UK
What do academic staff think about assessment and how can it help inform policy making and approaches to professional development?
Day 1: Parallel Session 6 - 26th June 2013 16.50

Marcangelo, Caroline
University of Cumbria, UK
Using Patchwork Assessment to enhance student engagement, skills and understanding
Day 2: Parallel Session 7 - 27th June 2013 09.15

Mathieson, Sue
Northumbria University, UK
Developing Institutional Assessment Strategies in a Performative Environment
Day 1: Parallel Session 2 - 26th June 2013 12.00

McCormack, Mike
Liverpool John Moores University, UK
Emotional Responses: Feedback Without Tears
Day 2: Parallel Session 8 - 27th June 2013 09.50

McDowell, Liz
Independent Higher Education Consultant, UK
Assessment for Learning
Day 1: Pre-conference Master Classes - 26th June 2013 09.30

Meddings, Fiona
University of Bradford, UK
Demystifying marking and assessment processes: where's the mystery?
Day 1: Parallel Session 3 - 26th June 2013 15.00

Nijhuis, Steven
Utrecht University of Applied Science, The Netherlands
Assessing learning gains in project management tuition/training.
Day 1: Parallel Session 6 - 26th June 2013 16.50

O'Boyle, Louise
University of Ulster, UK
Introducing Assessment & Feedback: A Framework of Engagement, Empowerment and Inclusion
Day 1: Parallel Session 2 - 26th June 2013 12.00

87


Osborne, Richard
University of Exeter, UK
A Cross-Institutional Approach to Assessment Redesign: Embedding Work-Integrated Assessments within the Curriculum
Day 2: Poster Session 5 - Assessment for Learning - 27th June 2013 11.30

Palmer, Marion
IADT, Ireland
Learning from practice - developing an overview of lecturers' learning needs in terms of assessment
Day 1: Parallel Session 1 - 26th June 2013 11.20

Peirce-Jones, Julie
University of Central Lancashire, UK
Viva! the Viva
Day 2: Poster Session 5 - Assessment for Learning - 27th June 2013 11.30

Peleg, Anita
London South Bank University, UK
Student Assessment Using Digital Story Telling
Day 1: Parallel Session 5 - 26th June 2013 16.20

Price, Margaret
Oxford Brookes University, UK
‘Assessment literacy: making the link between satisfaction and learning’
Day 1: Welcome & Keynote - 26th June 2013 13.30

Raaper, Rille
The University of Glasgow, UK
Student Assessment in Neoliberal Higher Education Context: Rationalities, Practices, Subjects. (Cross-cultural Study of the University of Glasgow and Tallinn University)
Day 1: Parallel Session 5 - 26th June 2013 16.20

Ronæs, Nina Helene
BI Norwegian Business School, Norway
Immediate feedback by smart phones
Day 2: Parallel Session 9 - 27th June 2013 10.35

Ruiz Esparza, Elizabeth
University of Sonora, Mexico
Students' Perceptions about Assessment and Assessment Methods: A Case Study
Day 2: Parallel Session 8 - 27th June 2013 09.50

Sales, Rachel
University of the West of England, UK
Marking as a way of becoming an academic POSTER
Day 2: Poster Session 1 - Marking & Academic Standards - 27th June 2013 11.30

88


Sambell, Kay
Northumbria University, UK
Developing assessment literacy: students' experiences of working with exemplars to improve their approaches to assignment writing.
Day 1: Parallel Session 3 - 26th June 2013 15.00

Shephard, Kerry
University of Otago, New Zealand
Monitoring the attainment of graduate attributes at individual and cohort levels
Day 1: Parallel Session 6 - 26th June 2013 16.50

Simeoni, Ricardo
Griffith University, Australia
A Multifaceted Bioinstrumentation Assessment Approach in the Rehabilitation Sciences
Day 2: Poster Session 4 - Engaging Students in Assessment & Feedback - 27th June 2013 11.30

Steen-Utheim, Anna Therese
BI, Norwegian Business School, Norway
Dialogic feedback and potentialities for students' sense making
Day 2: Parallel Session 10 - 27th June 2013 13.30

Sutton, Paul
University College Plymouth: St Mark & St John, UK
Fearful Asymmetries of Assessment.
Day 1: Parallel Session 1 - 26th June 2013 11.20

Tran, Nhan
Newcastle University, UK
The impacts of assessment on professional development in educational postgraduate programmes - a tale of the two universities
Day 2: Poster Session 5 - Assessment for Learning - 27th June 2013 11.30

Vuolo, Julie
University of Hertfordshire, UK
‘Designing a Student Progress Dashboard to promote student self-regulation and support'
Day 1: Parallel Session 3 - 26th June 2013 15.00

Watson, Jan
University of East Anglia, UK
Creative Development or High Performance? An Alternative Approach to Assessing Creative Process and Product in Higher Education
Day 1: Parallel Session 6 - 26th June 2013 16.50

Woods, Nikki
University of Northampton, UK
Facilitating Transitional Learning: An Empirical Investigation into Undergraduate Students' Perceptions of the Impact of Formative Assessment at Level 4 before entering Level 5
Day 2: Poster Session 4 - Engaging Students in Assessment & Feedback - 27th June 2013 11.30

89


Wosik, Dawid
Higher Colleges of Technology, United Arab Emirates
Assessing the Assessment Process: Dealing with Grading Inconsistency in the Light of Institutional Academic Performance
Day 2: Parallel Session 10 - 27th June 2013 13.30

Zubairi, Ainol
International Islamic University Malaysia, Malaysia
Assessment for Learning at Institutions of Higher Education: A Study of Practices among Academicians
Day 2: Parallel Session 7 - 27th June 2013 09.15

90


Conference Exhibitors & Main Sponsors

www.calibrand.com

www.brookes.ac.uk

www.pebblelearning.co.uk www.eurospanbookstore.com

www.sagepub.com www.heacademy.ac.uk

www.textwall.co.uk www.livetext.com

www.myknowledgemap.com

www.cumbria.ac.uk

91


Floor Plan

92


Notes

93


Notes

94


95
