I have a confession: I enjoy prepping lectures. I like researching, analyzing, and organizing material. I like synthesizing ideas and crafting a class session that feels clear and coherent. I even enjoy delivering it and watching discussion unfold, seeing light bulbs go off. That joy is part of why I became a professor in the first place.
The problem is that I like it too much.
Preparing the perfect lecture can expand to fill every available hour. And like most faculty, I don’t only teach. There are committees and emails, as well as advising, writing, grading, and my personal life outside of the institution. Add to that a nagging uncertainty: Did it land? Did they actually understand? Or did they sit politely while I performed?
Now we’re teaching in an AI-saturated world. Students can generate summaries, explanations, and even outlines in seconds. Information is no longer scarce. Presence is. Attention is. Real engagement is.
In that environment, it’s tempting to believe the solution requires even more preparation—more polish, more content, more effort. But that path isn’t sustainable.
What changed my teaching more than anything recently wasn't better content or smarter tools. It was better live feedback.
Table of contents
- Why increased lesson prep still feels insufficient
- My exit ticket experiment: a small—but significant—change
- 2 ways this improved student learning
- 2 ways this reduced my prep time, while improving my lectures
- Human engagement in an AI-saturated classroom
- A powerful but sustainable teaching workflow
- Care measured by attentiveness, not volume
Why increased lesson prep still feels insufficient
One of the quiet frustrations of teaching is that the goal is rarely as clear as we pretend. When students begin an assignment without knowing how it will be evaluated, they hesitate. They second-guess. They try to cover everything, just in case. In many ways, lecturing operates under the same conditions. There is no rubric for a class session. No immediate grading guide. No precise measure that tells us, “You hit the target.”
So we try to compensate.
When there is no clear feedback loop, preparation expands to fill the uncertainty. We add more examples. More illustrations. More clarification. More slides. If teaching is a basketball game, it can begin to feel like playing without a hoop. We just keep shooting, hoping something scores.
In the absence of real feedback, we rely on substitutes. A few raised hands. A lively comment or two. A vague intuition about how the room felt. End-of-term evaluations that arrive months too late to help. Often, we mistake the confidence of a few vocal students for broad comprehension.
When we don't know what students understood, many of us respond by preparing more. And over time, that strategy becomes exhausting.
My exit ticket experiment: a small—but significant—change
I have to admit that the solution I’m about to describe is not revolutionary. I had heard about exit tickets before. I had even sat through faculty workshops where someone enthusiastically recommended them. At the time, I nodded politely and went back to doing what I had always done. It took a combination of fatigue and curiosity to finally try it for myself.
The original motivation was modest. Like many instructors, I needed a fair and consistent way to assign participation points. I had tried student sign-in, discussion tracking, and informal impressions. None felt quite right. So, I decided to experiment with something simple: At the end of every class, students would submit a brief exit ticket to earn participation credit.
The format was straightforward. Two questions:
- What was the most interesting idea or fact today?
- What was least clear or still confusing?
That was it.
Each student received full credit for a good-faith response. There were no “right” answers. It wasn’t an assessment. It wasn’t surveillance. It wasn’t a gimmick. It was a structured moment of reflection—and a structured opportunity for me to listen.
I expected it to solve a participation problem. I did not expect it to reshape my preparation, my teaching, and my understanding of what my students were actually experiencing in class.
2 ways this improved student learning
The change was small. The effects were not.
The first benefit was cognitive. When students pause at the end of class to articulate what they learned, something changes. A lecture that might otherwise remain a monologue becomes a kind of dialogue. Students must find their own words. They must connect new material to what they already knew before walking into the room. They begin to notice what was added, clarified, or unsettled.
That brief act of reflection consolidates learning. Research on formative assessment consistently shows that frequent, low-stakes feedback strengthens engagement and improves retention because it surfaces understanding in real time rather than weeks later.1 Exit tickets function as a simple but powerful version of that principle: They reveal what connected and what did not, whether because I moved too quickly, assumed background knowledge, or simply missed a moment.
The second benefit was equity. For the first time, I wasn't just hearing from the fastest or loudest students: I was hearing from everyone. Every professor knows that some of the brightest students are quiet. (Thomas Aquinas himself was nicknamed the "Dumb Ox" by classmates who mistook his silence for ignorance, a story recounted in early Dominican accounts of his life.)2 Many students need time to process before speaking. Others are hesitant to risk public confusion. A brief written reflection lowers those barriers. It dignifies reflective thinkers and surfaces confusion that would otherwise remain hidden.
In two short questions, the classroom became more honest and more inclusive.
2 ways this reduced my prep time, while improving my lectures
I wasn’t only searching for ways to improve student learning. In all honesty, I also needed to reduce my prep time. The exit ticket did both because it replaced guesswork with evidence.
First, I stopped preparing for imaginary problems. Like many instructors, some of my lecture preparation wandered into rabbit trails—interesting, but not essential. Without clear feedback, it is easy to over-explain, over-illustrate, or anticipate objections students never actually have.
The unfiltered honesty of the exit tickets exposed this. Students would write, "I think I understand the definition, but I'm not sure why it matters" or "We spent too much time on this part." Their comments weren't hostile; they were clarifying. They revealed what truly helped them understand the material and what did not.
Second, the structure forced focus. Because I begin each class by addressing responses from the previous exit tickets, I effectively surrender five to ten minutes of lecture time. That constraint sharpened my preparation. When I revisited old lecture notes, I saw sections I could cut without sacrificing learning. Some material was informative but unnecessary. Some explanations were redundant.
Over a fifteen-week semester, meeting twice a week, those five to ten minutes per session accumulate into roughly two and a half to five hours of reclaimed core lecture time. And because preparation expands to match lecture length, prep time shrinks accordingly.
Formative assessment research consistently shows that timely insight into student understanding allows instruction to become more targeted and efficient.3 I experienced that reality firsthand. Better feedback didn’t make me care less—it made me care more precisely.
Human engagement in an AI-saturated classroom
We are teaching in a moment when students can generate explanations, summaries, and outlines in seconds. Information scarcity is gone. A student can generate a polished paragraph without ever grappling with the underlying idea. A student can sound informed without ever wrestling with the material. In that environment, the instructor’s value shifts. Our task is no longer primarily to produce explanations, but to discern what students actually experienced and understood.
This is precisely where the exit ticket proves its worth. AI is asynchronous; it operates after the fact. Exit tickets are synchronous. They capture understanding before students leave the room. AI can summarize content, but exit tickets surface meaning, confusion, and curiosity. They reveal what connected, what did not, and what requires attention now, not next week.
Research on AI in education increasingly notes the limits of automated feedback: It often lacks contextual nuance, relational awareness, and instructional judgment. The exit ticket does something simple but powerful in this environment: It asks students to think before AI thinks for them. In doing so, it preserves what is most human about teaching: shared presence, honest reflection, and responsive instruction.
A powerful but sustainable teaching workflow
Over time, this practice settled into a rhythm. At the end of each class, students submit their exit ticket, formerly on paper and now through a short online form. The platform is secondary. What matters is consistency.
Before the next class session, I skim the responses, highlight a handful of representative comments, and look for patterns.
- Where was there clarity?
- Where was there confusion?
- What surprised them?
I begin the next class by reading several responses aloud, always anonymously, always verbatim. It is one of my favorite moments. Students walk in with quiet anticipation: "I wonder if my words will be read." The class becomes highly personalized. Their insights, questions, and even frustrations become the opening conversation. What might have been a routine recap instead becomes a targeted refresher and a bridge into new material.
The structure is the same every time. There is no reinvention, no elaborate grading, no additional cognitive load. The feedback shapes emphasis, not just content. And once the feedback loop is in place, other preparation tools, whether digital libraries, research software, or lecture notes, become more efficient because they are guided by evidence rather than guesswork.
Care measured by attentiveness, not volume
I did not set out to redesign my pedagogy. I set out to solve a practical problem: participation points and limited time. What I discovered was not a trick, but a shift.
A small practice created a feedback loop. That feedback loop sharpened my preparation, strengthened student learning, and restored a measure of sustainability to my work.
In a profession where preparation can quietly consume every available hour, it is easy to believe that caring more means preparing more. But care is not measured in volume. It is measured in attentiveness. The exit ticket did not reduce my standards; it clarified them. It did not lower my expectations; it focused them.
In an AI-saturated classroom, where information is abundant but attention is fragile, this kind of live human feedback matters even more. It asks students to reflect before outsourcing their thinking. It shifts teaching from performance to discernment.
For me, two simple questions at the end of class created a win-win: Students learned more and I prepared less. Not because I cared less, but because I finally knew where my care belonged.
