AI as an opportunity to rethink assessment strategy

In this opinion piece, Monika Rossiter reflects on the development of artificial intelligence and its impact on student assessment.

The emergence of AI has disrupted the education sector. Its impact has been particularly pronounced because the arrival of ChatGPT has affected the area of the student experience that carries the highest stakes: assessment and feedback.

The robustness of assessment in light of Generative AI is important to consider. It has been investigated in the Business School through its assessment stress-testing initiatives, which have also been taken up in other pockets of the College. What is even more valuable is taking a step back to reflect on our overall assessment design alongside the challenges and opportunities presented by AI. The advent of Generative AI gives us a compelling reason to zoom out to programme-level assessment strategy and consider how well it actually tests and supports student learning.

How much assessment is too much?

Assessment loading and bunching have always been an issue across the HE sector. This is especially marked in modular designs, where it is tempting to treat modules in isolation and ensure that everything is assessed within every single module. The result is a higher than anticipated assessment load at both module level and programme/year level.

At module level this can mean a disproportionate number of assessments per module, and consequent discrepancies between the indicative hours of effort set out in the ECTS and the actual hours students spend studying a given module. A high module-level assessment load also drives assessment bunching across modules over the year. With heavy assessment loads, deadlines tend to come together, which pressurises students to be strategic and can push them towards inappropriate AI use as a coping mechanism. Reducing and rationalising assessment not only lightens the assessment load for staff and students; it also mitigates the pressures that provoke inappropriate and pedagogically unhelpful use of AI.

Another important aspect to consider when 'decluttering' assessment is feedback. Feedback strategy is often overlooked in assessment design, and a high module-level assessment load affects feedback, not least whether it arrives in time for students to act on it.

Stress-testing assessments

Perhaps, therefore, an important question to ask ourselves when reflecting on our assessment design through the lens of AI is: 'Is this assessment needed?' If an assessment can be easily replicated by Generative AI, does it still have a place as a standalone assessment? Would removing it benefit student learning by creating space for other, more authentic assessments? Would it be better reframed as formative? Would removing it create space for better uptake of feedback?

This kind of reflection is facilitated at the EDU workshops on the Use of Generative AI in Teaching, Learning and Assessment, and through our newly designed decision tree that sets out the questions to consider when making AI-related assessment decisions. Given that a broader change in assessment and feedback practices is an important point on the Learning and Teaching agenda, perhaps we can use the challenges of Generative AI as a catalyst for wider assessment change.

Reporter

Murray MacKay
Communications Division
