A student in an introductory Sociology course at Indiana Wesleyan University once told me, after completing our poverty simulation game, that she had always thought poverty was primarily a motivation problem. Forty-five minutes later, having watched her in-game family lose their apartment because a medical copay wiped out the buffer she needed for rent, she said she was not sure she had ever thought about anything that hard before. That is what a well-designed academic game can do. That is why we built it.
Where the Idea Came From
The poverty simulation concept was not new. In-person poverty simulations have been used in social work education for decades — participants are assigned family roles, given limited budgets, and asked to navigate a series of stations representing banks, employers, schools, and social services within a compressed time window. The experiences are powerful. They are also logistically intensive, requiring physical space, trained facilitators, and significant coordination. Online courses could not replicate them.
When IWU began expanding its online Sociology offerings, the question came to me: could we recreate the core learning of that in-person simulation in a fully asynchronous, AI-supported digital environment? The answer, after about six months of design and iteration, was yes — but only if we stopped thinking about it as a video or a quiz and started thinking about it as a game.
The project also had a secondary motivation. Higher education institutions spend enormous amounts of money licensing experiential learning tools from vendors. Our internal estimate on this project alone suggested we could replace what would have been a recurring six-figure vendor contract with a custom-built solution — part of a broader curriculum development strategy that has helped IWU avoid over one million dollars in vendor licensing costs across multiple projects.
Designing the Game: Starting with Learning Outcomes
The single most important decision we made was refusing to design the game before we had locked the learning outcomes. This sounds obvious, but in practice, game design is seductive — the mechanics, the aesthetics, the branching logic all pull at your attention before you have answered the foundational question: what should a student be able to do, think, or feel differently after this experience?
We mapped our outcomes directly to Bloom's Taxonomy at the application and analysis levels. We did not want students to merely recall facts about poverty rates or identify sociological terms. We wanted them to apply systems-thinking to resource scarcity decisions, analyze how structural constraints interact with individual choices, and evaluate the assumptions behind common cultural narratives about poverty. That framing changed everything. It told us that the game needed to simulate genuine decision-making under constraint — not a trivia game about poverty statistics, but an experience of the decision architecture that low-income families navigate every day.
We designed the simulation around a fictional family of four: two adults, one school-age child, one infant. Players are assigned the role of the primary financial decision-maker. The family's baseline income, housing situation, employment status, and health circumstances are established in the opening sequence, and the game unfolds over a simulated month — twelve decision points, each reflecting a real category of resource allocation challenge documented in poverty research.
What the AI Actually Does in the Game
AI serves two distinct functions in the simulation, and it is worth being specific about each, because conflating them leads to confused design.
The first function is adaptive scenario generation. The game does not run the same twelve scenarios in the same order for every player. Based on choices made in the first three decision points, the AI adjusts which subsequent scenarios appear, what resources are available within them, and what the compounding consequences of earlier decisions are. A player who chose to delay a medical visit in scenario two because they could not afford the copay will encounter that decision's downstream effects — in the form of a more serious and more expensive health crisis — in scenario seven. This is not scripted branching. The AI is dynamically weighting the probability of different events based on the player's accumulated decision history, modeling the way compounding disadvantage actually operates.
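To make the idea of dynamic weighting concrete, here is a minimal sketch of how accumulated decisions could raise the probability of downstream-consequence events. The scenario names, history flags, and multipliers are illustrative assumptions, not the game's actual schema or tuning.

```python
import random

# Baseline weights for a few scenario types (illustrative names only).
SCENARIOS = {
    "housing": 1.0,
    "medical_crisis": 1.0,
    "utility_shutoff": 1.0,
    "payday_loan": 1.0,
}

def weight_scenarios(base_weights, decision_history):
    """Adjust scenario weights from the player's decision history,
    modeling how earlier trade-offs compound into later crises."""
    weights = dict(base_weights)
    if "deferred_medical_visit" in decision_history:
        # A skipped copay makes a later, costlier health crisis more likely.
        weights["medical_crisis"] *= 3.0
    if "skipped_utility_bill" in decision_history:
        weights["utility_shutoff"] *= 2.5
    return weights

def pick_next_scenario(base_weights, decision_history, rng=random):
    """Sample the next scenario in proportion to the adjusted weights."""
    weights = weight_scenarios(base_weights, decision_history)
    names = list(weights)
    return rng.choices(names, weights=[weights[n] for n in names], k=1)[0]
```

The key design point is that nothing is hard-scripted: the same history can still produce different playthroughs, but the odds shift the way compounding disadvantage does in the research literature.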
The second function is post-decision feedback. After each choice, the AI generates a brief, non-judgmental contextual note explaining what research says about how families in this situation typically navigate that decision, what structural factors constrain the available options, and what the most common outcomes are. This is not a "you chose wrong" message — there are no wrong choices in the simulation, only choices with different consequences. The feedback is designed to connect the player's visceral in-game experience to the sociological literature the course has assigned, closing the loop between experience and theory.
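The three-part structure of that feedback (typical navigation patterns, structural constraints, connection to assigned readings) can be sketched as a prompt-assembly step. This is a hypothetical illustration of the structure only; the actual prompt text and model interface are not published here.

```python
def feedback_prompt(choice, scenario, readings):
    """Assemble a non-judgmental feedback request for the language model.
    All wording is illustrative; only the three-part structure is the point."""
    return (
        f"The player chose '{choice}' in the scenario '{scenario}'. "
        "Write a brief, non-judgmental note that: "
        "(1) summarizes what research says about how families typically "
        "navigate this decision, "
        "(2) names the structural factors that constrain the available "
        "options, and "
        f"(3) connects the experience to these course readings: {readings}. "
        "Do not frame any choice as wrong; describe consequences, not verdicts."
    )
```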
The 12 Branching Scenarios
The twelve scenario categories were drawn from poverty research literature, with particular attention to the work of Kathryn Edin, Matthew Desmond, and the Federal Reserve's annual report on the economic well-being of U.S. households. Each scenario category represents a class of decision that low-income families report navigating with high frequency and high stakes.
The categories are: housing cost versus transportation access trade-off; medical care deferral versus acute health risk; childcare cost versus employment hours; food purchasing decisions under CalFresh-equivalent budget constraints; school supply and activity fees; utility shutoff and reconnection fee cycles; payday lending and alternative financial services; employment scheduling instability; public benefits enrollment friction; emergency expense absorption; social relationship costs (birthdays, funerals, community events); and long-term savings versus immediate need trade-offs.
Each category contains multiple specific scenarios that are dynamically selected based on the family's current state. A player whose family has already experienced a medical deferral will not receive the medical deferral scenario again — they will receive the downstream consequence scenario. This means that no two playthroughs are identical and repeat plays remain meaningfully different, which matters for courses that want to use the game as a discussion prompt rather than a one-time event.
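The no-repeat rule above amounts to a small piece of state-dependent selection logic. A minimal sketch, assuming a simple family-state record and a lookup of downstream consequences (both invented for illustration):

```python
# Maps a base scenario category to its downstream-consequence scenario.
# Names are hypothetical, not the game's actual category identifiers.
CONSEQUENCES = {
    "medical_deferral": "acute_health_crisis",
    "utility_deferral": "shutoff_and_reconnection_fee",
}

def select_from_category(category, family_state):
    """Serve the category's base scenario on first encounter; on a
    repeat encounter, serve its downstream consequence instead."""
    if category in family_state["completed"]:
        return CONSEQUENCES.get(category, category)
    family_state["completed"].add(category)
    return category

family = {"completed": set()}
first = select_from_category("medical_deferral", family)   # base scenario
second = select_from_category("medical_deferral", family)  # consequence
```

Tracking completed categories in the family state is what keeps repeat plays meaningfully different rather than merely reshuffled.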
What Happened When Students Played It
We piloted the simulation with two sections of Introduction to Sociology in the spring semester following development. The results exceeded what we had hoped for in two specific areas and surprised us in a third.
Module completion rates for the week containing the simulation were 23 percentage points higher than comparable weeks in prior semesters. Students were finishing the activity — and the surrounding readings — at a rate we had not seen in any purely lecture-and-quiz format. The game created a motivational pull that carried over into the adjacent content.
Pre- and post-simulation surveys measuring students' attribution of poverty to individual versus structural causes showed statistically significant movement toward structural explanation after gameplay — even among students who entered with strong individual-attribution beliefs. The movement was not coerced; students were not told what to think. The simulation gave them an experience that their prior mental model could not adequately explain, and that cognitive dissonance opened space for the theoretical framework the course then provided.
The surprise was in the discussion boards. Post-simulation discussions were qualitatively different from anything we had seen in the same course without the game. Students were referencing specific moments from their playthrough — "when I had to choose between the copay and the car repair, I kept thinking about..." — and connecting those moments to course readings with a precision and emotional investment that asynchronous discussions rarely produce. The game had given them a shared experiential vocabulary, and they used it.
Could Your Course Use a Game Like This?
Not every course needs a full simulation. But the design principles behind this project are applicable at many scales, and institutions should not assume that AI-powered academic game development requires either a massive budget or a dedicated game-design team.
The key question is whether your course has learning outcomes that require application or analysis — outcomes where a student needs to practice making decisions, not just recognize correct answers. If your outcomes live at the lower levels of Bloom's Taxonomy, a well-designed quiz may be sufficient. If they require students to navigate ambiguity, apply systems thinking, or develop professional judgment, a branched simulation — even a modest one — is worth serious consideration.
The practical starting point is outcome-mapping, not technology selection. Define precisely what you want students to be able to do after the experience. Then identify the decision architecture that experience would require. Then and only then ask what technology supports that architecture most efficiently. In some cases that is a full AI-driven simulation. In others, it is a well-crafted branched video scenario or a structured role-play activity. The technology serves the outcome. It does not define it.
Thinking about gamifying part of your curriculum?
Train & Inspire helps higher education institutions design AI-powered academic games and simulations that achieve real learning outcomes — without six-figure vendor contracts. Let's talk about what's possible for your course.
Book a Free Discovery Call