Faculty conversations about AI and academic integrity typically start in the wrong place. They start with detection — how do we catch students who use AI? — rather than design. Detection is a reactive strategy. It is also a losing one. Detection tools have meaningful false-positive rates, produce adversarial classroom cultures, and have no answer for the underlying problem: assessments that were designed for a pre-AI world. The more productive question is not "How do we catch AI use?" It is "How do we design assessments where AI use doesn't help?"

The Problem With Traditional Assessments in the Age of AI

Most traditional academic assessments — the essay prompt, the research paper, the short-answer exam, the discussion post — share a structural vulnerability in the age of large language models. They ask students to produce text about a topic. And producing text about a topic is precisely what AI does well.

When a student is asked to write a 1,500-word essay arguing a position on a policy question, the task is, at its core, a text-generation task. It can be completed, not well but passably, by an AI tool. When a student is asked to summarize three scholarly articles and explain their relevance to a course theme, the task is a summarization and synthesis task. AI is increasingly competent at both. The assignment types that constitute the majority of undergraduate and graduate coursework are, architecturally, the types of tasks that AI was designed to perform.

This is not a moral failure on the part of students or faculty. It is a design problem. We built assessments for a cognitive environment that no longer exists. And the answer to a design problem is redesign — not surveillance.

Over the past year, I have worked with faculty across a dozen disciplines to redesign assessments from the ground up. The resulting framework is not about making AI use impossible. It is about making AI use unhelpful — and in some cases, about making thoughtful AI use a genuine part of the learning process rather than a workaround. That is the concept of AI resilience, and it is different from AI-proofing in ways that matter.

What Makes an Assessment AI-Resilient?

An AI-resilient assessment is one that, even if a student uses AI assistance, still requires them to do the cognitive work the course is designed to develop. It is resilient in the same way a well-designed ecosystem is resilient: not impervious to disruption, but structured so that disruption does not destroy function.

The key insight is that AI is a powerful generalist. It can produce plausible, well-structured text about almost any subject at almost any level of abstraction. What it cannot do — yet, and not reliably — is engage with the specific. It cannot speak to a real conversation that happened in last Tuesday's seminar. It cannot analyze data that a student collected themselves. It cannot reflect on the gap between what a student thought they understood before the unit and what they understand now, because it has no access to that cognitive history. It cannot demonstrate a skill live, in real time, in front of an observer. Specificity, reflection, process documentation, live demonstration, and personal application are the five territories where AI-resilient assessment lives.

Five Principles for Designing AI-Resilient Assessments

Principle 1: Specificity

Generic prompts produce AI-completable work. Specific prompts require specific knowledge. The difference between "Discuss the ethical implications of data privacy policies" and "Using the data collection practices disclosed in the privacy policy we analyzed in Week 6 as your primary case, identify two specific provisions that conflict with the ethical framework you developed in your Week 4 reflection — and explain the conflict in concrete terms" is enormous. The first prompt can be answered by AI using general knowledge. The second requires engagement with specific course materials and a prior student product. AI has neither.

Build specificity into prompts by requiring reference to specific course texts, class discussions, data sets, or the student's own previous work. The more your prompt is anchored in the particular context of your course, the less useful AI's general knowledge becomes.

Principle 2: Reflection on Process

Metacognitive reflection — thinking about one's own thinking — is both educationally valuable and genuinely difficult for AI to fake, because it requires access to a cognitive history the AI does not possess. Assessments that ask students to document and reflect on their process rather than just submit a product have built-in resilience.

This can take many forms: a brief process log submitted alongside a final paper, a recorded voice memo explaining a key decision made during a project, or a structured reflection comparing the student's initial understanding of a concept (documented at the unit's start) with their current understanding. These are not burdensome additions to existing assignments. They are lightweight, high-value elements that shift the assessment from product evaluation to process documentation.

Principle 3: Process Over Product

Related to the above but distinct: some assessments can be redesigned so that the process itself is the deliverable. A research project that requires weekly documentation of sources considered and rejected, decisions made, and dead ends encountered is an assessment of research process — not just research output. AI cannot convincingly fake that process log after the fact, because a genuine log shows evidence of real-time engagement with real constraints.

Principle 4: Live Demonstration

If a student must demonstrate a skill or explain their reasoning in real time — in a one-on-one conversation with an instructor, a brief recorded explanation, a small-group discussion, or a formal oral examination — AI cannot substitute for them. Live demonstration is the most reliable AI-resilient assessment format available, and it is significantly underused at the university level.

This does not mean returning to a purely oral examination model. It means building in strategic live moments at key points in the course: a brief five-minute check-in where a student explains their project direction, a recorded two-minute explanation of how they interpreted a complex reading, a small-group presentation where follow-up questions require real-time engagement with the material.

Principle 5: Personal Application

Assessments that require students to apply course concepts to their own life, career, or direct experience produce work that AI cannot replicate — because the experiences themselves are not available to AI. A social work student who is asked to analyze an ethical dilemma they actually encountered in their field placement is drawing on information AI does not have. A business student who is asked to evaluate a real negotiation they participated in this semester is similarly drawing on inaccessible data. Personal application grounds academic analysis in lived experience and produces work that is both more meaningful and more resistant to AI substitution.

Real Examples Across Disciplines

In a graduate nursing program, we redesigned a clinical reasoning assessment from a written patient scenario analysis to a recorded verbal case presentation in which the student narrated their diagnostic reasoning process as if presenting to a supervising physician. Students were asked to explain not just their conclusions but their differential reasoning — what they considered and why they ruled it out. The verbal format, the real-time nature, and the process orientation made AI assistance essentially irrelevant.

In an undergraduate sociology course, we added a process log requirement to the existing research paper: students submitted weekly 150-word check-ins documenting what sources they had found, what they had discarded and why, and what questions had emerged from that week's work. The final paper was evaluated alongside the process log, and the rubric weighted coherence between the two. AI could complete the paper, but it could not authentically generate nine weeks of process documentation.

In a graduate public policy course, we replaced the final policy brief with a policy memo analysis in which students analyzed a real, publicly available policy memo from a current government agency, connected it to three specific course readings, and then recorded a five-minute spoken critique. The combination of source specificity, course-text grounding, and live demonstration made the assessment effectively AI-resilient without being prohibitively burdensome.

How Policy and Assessment Work Together

Assessment redesign and AI policy work best when they develop in conversation with each other. A policy that prohibits AI use but offers no redesigned assessments is an enforcement policy, not a learning policy. An assessment redesign that does not clarify what AI use is and is not permitted invites confusion and inconsistency.

My recommendation is to develop assessment and policy in parallel, starting from the same question: what cognitive work do we want students to develop in this course? Once that question is answered, both the assessment design and the AI policy follow logically. If the cognitive work is disciplinary reasoning under uncertainty, for example, the assessment tests that reasoning in specific, process-oriented, and potentially live formats. The AI policy, in turn, clarifies that AI tools may be used for initial research or drafting but not for the reasoning documentation that constitutes the core deliverable.

Transparent policies that explain the "why" — that connect AI guidelines to learning objectives rather than purely to academic integrity concerns — build more trust and more genuine compliance than prohibition-first frameworks. Students who understand that an assessment is designed to develop a skill they need — not to catch them cheating — are more likely to engage with it seriously.

Getting Started: A One-Week Action Plan for Faculty

If you are ready to begin redesigning your assessments for AI resilience, here is a concrete plan for your first week.

Day 1: Choose one major assessment from your current course — ideally the one you have worried most about in the context of AI. Write down exactly what cognitive work you intend it to develop. Be specific. Not "critical thinking" — but "the ability to identify methodological weaknesses in empirical studies and explain their implications for a stated policy conclusion."

Day 2: Test your current prompt against the five principles. Which of the five does it engage: specificity, reflection, process, live demonstration, or personal application? Most traditional assessments engage zero or one. Identify which principles offer the most natural fit with your course content.

Day 3: Draft a revised version of the prompt that incorporates at least two of the five principles. Do not try to redesign the entire assessment at once. Add a process log requirement, or anchor the prompt to a specific course text, or add a brief recorded explanation component.

Days 4-5: Share the revised assessment with one trusted colleague — ideally someone in your discipline — and ask them to test it against the AI tool of their choice. Note where the AI response is plausible and where it falls short. Refine the prompt based on what you learn.

Assessment redesign is not a one-time event. It is an ongoing practice that will deepen as you understand your students' engagement with AI tools more fully. The faculty who are navigating this challenge most successfully are not the ones who have found a permanent solution. They are the ones who have committed to treating assessment as a design practice — iterative, evidence-informed, and grounded in honest learning goals.

If you would like a structured workshop to help your department work through this process together, Train and Inspire offers AI literacy training and assessment design workshops built specifically for higher education faculty. We bring the design framework. You bring the disciplinary expertise. The combination is where the real work gets done.

Ready to Redesign Assessments With Your Department?

Train and Inspire offers AI literacy workshops and assessment redesign facilitation for higher education faculty. Let's build an approach that serves your students and fits your disciplinary context.

Start the Conversation
