Every faculty member I work with in AI literacy workshops eventually arrives at the same question: what should my policy actually say? And they almost always frame it as a binary — ban AI or allow AI. That framing is the problem. It treats AI as a single, undifferentiated thing when it is actually a toolkit with dozens of distinct functions, each of which has a different relationship to learning. A scaffolded AI-use policy does not answer "yes or no." It answers "which capabilities, for which tasks, at which point in student development."
Why Blanket AI Policies Fail
A blanket ban on AI use has a surface logic to it: students cannot develop skills they outsource, and AI makes outsourcing seductively easy. The problem is that the ban does not align with reality. Students have AI on their phones. They will use it outside your class to complete work for your class, and the gap between your policy and their actual behavior creates an environment of low-trust surveillance rather than genuine academic formation. You spend your energy policing rather than teaching, and students learn to be furtive rather than thoughtful.
A blanket permission policy has its own failure mode. "AI is allowed as a tool" without further specification gives students no guidance about how to use it in ways that support rather than replace their own thinking. The result is often work that is technically compliant but educationally hollow — AI-generated text the student neither deeply engaged with nor critically evaluated, submitted because the policy offered no reason to do otherwise.
Both extremes share the same root problem: they treat AI use as a single behavior when it is actually a continuum of behaviors. Asking an AI to generate a complete essay is categorically different from asking it to explain a concept you do not understand. Using AI to brainstorm counterarguments is categorically different from using it to write your counterargument section. A policy that does not make these distinctions cannot govern them meaningfully.
What "Scaffolded" Actually Means
In instructional design, scaffolding refers to the structured support a learner receives as they build toward independent competence. Training wheels are scaffolding. Sentence starters are scaffolding. A teacher modeling a think-aloud before asking students to work independently is scaffolding. The purpose of scaffolding is not to make things easier permanently; it is to support capability-building until the support is no longer needed.
Applied to AI use policy, scaffolding means thinking about AI not as a shortcut to be blocked or a tool to be freely accessed, but as a form of support whose appropriate level depends on the student's current developmental stage, the specific skill being practiced, and the learning outcome the assignment is designed to achieve. A first-year student learning to construct an argument for the first time needs different AI constraints than a senior completing a capstone project. An assignment designed to build foundational writing mechanics has different AI implications than an assignment designed to test disciplinary judgment.
The scaffolded framework asks faculty to answer three questions before writing their AI policy: What specific cognitive skill does this assignment require the student to practice? What is the student's current developmental level with respect to that skill? And what role, if any, could AI play in supporting skill development versus replacing it? The answers to those questions determine which of four permission levels is appropriate.
The Four Levels of AI Permission
Level 1: No AI Use. This level is appropriate for assignments specifically designed to assess a student's unassisted cognitive performance. Proctored exams, timed writing samples, and assignments whose explicit purpose is to establish a baseline of the student's own thinking fall here. The rationale must be explained to students, not merely asserted. "This is a no-AI assignment because I need to understand what you can do on your own, and that understanding will shape how I support you for the rest of the semester" is a policy that makes pedagogical sense. "AI is banned because I said so" is not.
Level 2: AI for Research and Brainstorming Only. This level permits students to use AI as a generative thinking partner during the pre-writing or pre-project phase, but not as a producer of submitted content. Students may ask an AI to explain a concept, generate a list of possible thesis angles, identify counterarguments, or surface relevant terms for further research. All submitted work must be the student's own writing and synthesis. This level supports skill development by keeping the cognitive load of production on the student while lowering the barrier to getting started.
Level 3: AI as Drafting Tool with Disclosure. This level permits students to use AI to produce draft content that they then substantially revise, critically evaluate, and personally own through a documented editing and reflection process. Disclosure is required: students submit an AI use log noting which tools they used, what prompts they submitted, and how they modified the AI's output. The reflection component — in which students describe what the AI got wrong, what they changed and why, and what they added from their own thinking — is the pedagogical core of this level. The cognitive work is in the evaluation and revision, not the initial generation.
Level 4: AI as Full Partner with Attribution. This level is appropriate for advanced students working on complex, professional-context deliverables where AI collaboration is a likely reality of their future work. Students use AI freely but attribute its contributions with the same rigor they would apply to a human collaborator. The assignment design at this level must shift accordingly: if AI can do the basic task, the assessment must measure what the student brings that the AI cannot — disciplinary judgment, ethical reasoning, stakeholder communication, or iterative refinement toward a nuanced goal.
Applying the Framework by Discipline
The four levels do not map uniformly onto assignment types. Disciplinary context matters, and faculty should apply the framework in light of what their field's professional norms and knowledge-building practices actually look like.
In English and Writing, introductory composition courses appropriately operate at Level 1 or Level 2 for most assignments, because the core skill being developed is the student's own capacity for argument construction and prose control. Advanced creative writing or professional writing courses might move toward Level 3 or Level 4 as students are preparing for environments where AI-assisted drafting is standard.
In STEM fields, the calculation and derivation skills that build foundational understanding are often best protected at Level 1. But literature reviews, technical reports, and professional communication assignments are reasonable candidates for Level 3 or Level 4, because AI is already standard in those professional workflows and students should learn to use it with critical discernment rather than encounter it for the first time after graduation.
In the Social Sciences and Business, case analysis and applied judgment assignments are where the most interesting policy decisions arise. A business strategy case study where students must apply a framework to a novel situation calls for careful scaffolding — probably Level 2 during the analytical phase and Level 3 for the recommendation memo, with the assessment focused on the quality of the judgment rather than the prose.
How to Communicate Your Policy to Students
A policy that students do not understand will not produce the behavior you intend, regardless of how well-designed it is. When writing your AI-use policy statement for a syllabus or assignment sheet, apply three communication principles: specificity over generality, rationale over assertion, and behavior over category.
Specificity means naming what is permitted and what is not, in terms of concrete actions. "You may use AI to generate an initial list of arguments, but all body paragraphs must be your own writing" is a specific policy. "AI may be used as a tool" is not.
Rationale means explaining the pedagogical reason for the constraint. Students comply more consistently and more thoughtfully with rules they understand the purpose of.
Behavior over category means describing what students should do, not just what they should not. "If you use AI for any part of this assignment, submit a one-paragraph reflection describing how you used it and what you changed" gives students an action to take. "Do not misrepresent AI use" gives them only a prohibition.
Building in a Review Cycle
Any AI policy written today will need to be revised within a semester or two. The tools are evolving too quickly for any static policy to remain well-calibrated. Build a review moment into your course design: at the end of each semester, revisit each level-assignment pairing and ask whether the AI capability landscape has changed in ways that shift the appropriate level. What was Level 3 eighteen months ago may now be Level 4 because AI capability in that domain has advanced.
This review posture is also connected to what we call AI Bravery — the organizational and individual disposition to engage with AI's implications rather than defer them. A faculty member who reviews and updates their AI policy each semester is not just administering a course. They are modeling for students what it looks like to engage with technological change thoughtfully, iteratively, and with professional seriousness. In an era when students will spend their careers navigating that kind of change, that modeling may be as valuable as any specific content the course delivers.
Want help building an AI policy your faculty will actually use?
Train & Inspire runs AI literacy workshops that walk faculty through the scaffolded policy framework and help departments build consistent, discipline-appropriate AI governance. Let's design one for your institution.
Book a Free Discovery Call