Designing AI-Resilient Assessment: Reclaiming Human Learning in an Age of Automation
Published on Apr 29, 2025 by Stephen Wheeler.
The arrival of generative AI tools like ChatGPT, Claude, and Gemini has thrown a spotlight on a long-standing problem in higher education: our over-reliance on predictable, replicable, and ultimately shallow forms of assessment. If a language model can produce a passable answer to your assignment prompt in under a minute, then the problem is not just with the AI. The problem is with the design of the assessment.
Much of the recent wave of institutional responses has turned to detection, policing, and punishment. But to treat AI as an external threat to be managed is to miss the deeper opportunity: to redesign assessment around what it means to be human. Instead of trying to make students behave like machines that do not cheat, we should be designing tasks that machines cannot meaningfully complete without genuine human learning, insight, or judgment. In short, we need AI-resilient assessments.
What Makes an Assessment AI-Resilient?
Assessments that are easily answered by generative AI tend to share key characteristics: they are formulaic, decontextualised, and focused on surface-level knowledge reproduction. These are the very kinds of tasks that AI, trained on vast corpora of text, is effectively optimised to perform. But the point is not to set tasks that AI "can't" do, because that bar will keep shifting. Rather, it is to foreground forms of learning that require uniquely human capabilities: lived experience, ethical reasoning, contextual nuance, and creativity.
AI-resilient assessments are:
- Situated in specific, often local or embodied contexts
- Designed to engage students in meaningful process, not just final product
- Dependent on lived, tacit, or experiential knowledge
- Grounded in dialogue, collaboration, and reflection
- Focused on values, ethics, and critical thinking rather than “correct” answers
As Biesta (2013) argues, the purpose of education is not the efficient transfer of information but the cultivation of judgment, responsibility, and subjectivity. This insight is more urgent than ever in an age of increasingly competent machines.
Five Principles for Designing AI-Resilient Assessments
1. Contextualisation
AI thrives on generalisation. It struggles with tasks rooted in a student’s specific context — their local environment, discipline-specific practices, or personal experience. Asking students to situate their learning in a real-world context reintroduces complexity and specificity that AI cannot easily replicate.
Example: Instead of “Discuss the impact of globalisation,” ask students to analyse a local business or cultural institution and how it navigates global pressures.
2. Process Over Product
Most AI interactions are about producing a final output. But human learning is a process — iterative, messy, developmental. Assessing the process rather than just the product invites reflection, authenticity, and learning that cannot be outsourced.
Example: Require research journals, development logs, peer feedback, or annotated drafts that track the evolution of a student’s thinking.
3. Dialogic and Collaborative Tasks
AI can simulate dialogue, but it cannot engage in genuine, responsive human conversation. Designing tasks that involve interpersonal engagement — with peers, teachers, communities, or external stakeholders — makes assessment more relational and less susceptible to automation.
Example: Students co-develop a group position paper based on structured debates and collaborative negotiation.
4. Critical and Ethical Judgment
Freire (1970) famously wrote that education is not about depositing information but about helping learners become critically aware of their world. Assessments that require students to take a position, justify decisions, or navigate ethical dilemmas move beyond factual recall to reflective judgment.
Example: Present students with a complex, real-world scenario (e.g., an ethical breach in professional practice) and ask them to argue a course of action grounded in principles and consequences.
5. Embodied and Experiential Learning
AI does not have a body. It cannot engage in fieldwork, conduct a performance, complete a lab experiment, or reflect on personal observation. Assessments that require students to draw on lived, sensory, or emotional experience are resilient to automation by design.
Example: In a design course, students document their work in progress through sketches, voice memos, and photos, culminating in a reflective essay on their creative process.
Three Practical Examples
1. From Essay to Ethnography
Instead of writing a standard essay on cultural diversity, students conduct a short ethnographic observation of a public space (e.g., a market, café, religious service) and reflect on patterns of behaviour, inclusion, and interaction.
2. Critical Incident Report in Professional Practice
In professional programmes, students write a critical incident report based on a real workplace experience. They analyse the situation using theoretical frameworks and reflect on their own role and response.
3. Dialogic Literature Review
Students interview a professional or practitioner in the field, then integrate insights from the conversation into a literature review, discussing how theory and practice align or diverge.
Conclusion: Toward a Human-Centred Pedagogy
Generative AI will continue to evolve, and its capacity to mimic human work will increase. But education is not about mimicry — it is about becoming. It is about developing capacities for thought, judgment, creativity, and ethical engagement. These are not outputs to be efficiently produced but ways of being in the world.
Designing AI-resilient assessments is not just a defensive move — it is a chance to reassert the human core of education. In doing so, we move from reacting to technological disruption to reclaiming assessment as a deeply pedagogical act.
References
Biesta, G., 2013. The beautiful risk of education. Boulder: Paradigm Publishers.
Freire, P., 1970. Pedagogy of the oppressed. New York: Herder and Herder.