Privacy: We paraphrase learner messages, use randomized IDs, and never train on student work.
What you'll learn: Why students give up, and how to stop it. The science of small wins, self-efficacy, and scaffolding—plus 4 classroom-ready strategies.
The Scene
It's 12:37 PM on a Monday when a student types these words into their writing activity: "i dont get the prompt".
They're supposed to finish a spooky story—Coach Bones is telling the legend of the Headless Horseman, and the sentence ends mid-action: "He hurled that flaming pumpkin right through that window, and..."
What happens next isn't what you might expect. The student doesn't close the tab. They don't write a single rushed sentence and call it done. Instead, over the next two sessions, they send 88 messages—44 back-and-forth turns with Tronnie, our AI writing coach—asking questions, trying ideas, revising, and ultimately crafting an ending they're proud of.
This conversation became a masterclass in what research calls "small wins," and it taught us something powerful about how students build the confidence to keep going when learning feels hard.
The Research: Why Small Wins Matter More Than Big Promises
Decades of research on self-efficacy reveal a fundamental truth: students build belief in their capabilities primarily through mastery experiences---small, successful steps that provide concrete evidence of their ability (Bandura, 1997). When students face repeated failure, they often conclude that trying is pointless. Motivational posters or phrases like "You can do it!" lack the evidentiary power to change that belief.
Findings are context-dependent; we cite primary sources and report classroom observations, not universal effects.
What does work? Breaking complex problems into visible, achievable steps---a strategy Karl Weick (1984) called "small wins." Weick demonstrated that redefining the scale of challenges builds momentum: each small success makes the next step feel more possible.
This isn't just theory. Research demonstrates several mechanisms at play:
Self-Efficacy Through Mastery Experiences
Albert Bandura's decades of work show that the primary source of self-efficacy---belief that your actions can lead to success---comes from actual experiences of mastery, not persuasion or vicarious learning (Bandura, 1997). Students need evidence, not encouragement.
Goal-Gradient Effects
The closer we perceive ourselves to a goal, the stronger our motivation becomes. Kivetz, Urminsky, and Zheng (2006) demonstrated this "goal-gradient hypothesis" in field and lab studies: visible progress accelerates effort. Even illusory progress---what Nunes and Drèze (2006) call the "endowed progress effect"---increases persistence.
Rapid Feedback and Memory
Memory decays rapidly without reinforcement (Murre & Dros, 2015). When students receive timely, specific feedback, they can consolidate learning before forgetting sets in. Meta-analyses confirm that elaborated, immediate feedback significantly improves learning outcomes, especially in digital environments (Hattie & Timperley, 2007; van der Kleij, Feskens, & Eggen, 2015).
But here's the challenge: How do we create those small wins consistently, especially when we have 30 students and only 55 minutes?
What We Observed: The Anatomy of 88 Messages
We analyzed this two-session conversation to understand what kept a confused student engaged long enough to create something they were proud of. Here's what we found.
Turns 1-5: From Confusion to Clarity
The student's opening---"i dont get the prompt"---wasn't resistance. It was a genuine request for help. Instead of receiving a model answer or a simplified version of the task, they got a question back:
"What do you think happens after the pumpkin goes through the window?"
By turn 5, they were asking: "do i right he hurled the flaming pumkin right through that window and... but do i continue on the sentence?"
Micro-win #1: They understood the format.
Turns 6-15: Generating Ideas Without Being Given Answers
The pivotal moment came at turn 7: "what are some ideas i can use but dont give me the full answer"
This student knew what they needed---options, not solutions. They received three possibilities: maybe the pumpkin scares someone, starts a fire, or reveals something hidden.
They chose one and drafted: "maybe the headless horseman appears again and scares everyone".
When asked to be more specific about what "scares everyone" looks like, they added: "and then they scream".
Micro-win #2: They could generate ideas and make them concrete.
Turns 16-25: Building Specificity Through Iteration
The scaffolding intensified around detail and description. "What does the scene look or sound like?" "What do the people do after they scream?"
The student added: "but when they looked back he disappeared in smoke". Then: "maybe they can see a pumkin left behind on the floor".
Each turn brought another question that pushed thinking forward without giving away the next sentence. The student suggested someone might "try to pick the pumkin up but then get scared"—showing they were now thinking causally about character actions.
Micro-win #3: They were visualizing the scene and making intentional choices.
Turns 26-35: Structure and Sentence Boundaries
Around turn 26, the questions shifted to structure: "Does this have to be a parragh" "how much sentences"
The coach provided a concrete target: "Aim for about 3 to 5 sentences to finish the story".
This transformed an overwhelming open-ended task into a manageable goal. Suddenly, the finish line was visible.
The student also asked: "Where should i add comas?" and "what are run on" and "what did i do wrong in the disappered"
Micro-win #4: They were noticing and self-correcting mechanics.
Turns 36-44: Revision as Thinking
By turn 36, the student was asking what to change, not whether they should change anything. They were experimenting with dialogue, character reactions, and emotional beats:
"maybe another one says that they want to see more"
"curios" (when asked if the mood shifted to curious or scared)
The final exchanges focused on separating dialogue from action, and the student asked: "can u give me an example"
The coach held firm: "I can't write it for you, but here's how you can do it: Put the character's words on one line, then start a new sentence for the action".
The student kept trying different approaches until they found one that worked.
Micro-win #5: They persisted through revision and saw improvement.
Why This Matters: The Science Behind the Scaffolding
This conversation embodies four research principles that directly impact student engagement and learning.
1. Psychological Safety Enables Risk-Taking
Before any learning could happen, this student needed to feel safe saying "i dont get the prompt." Research on psychological safety shows that people take intellectual risks---asking questions, admitting confusion, trying new approaches---when it's safe to err (Edmondson, 1999; organizational research widely applied to learning contexts).
Error-friendly classroom climates, where mistakes are treated as learning opportunities rather than failures, correlate with higher motivation, stronger teacher-student relationships, and better achievement (Steuer, Rosentritt-Brunn, & Dresel, 2013). Longitudinal evidence confirms that positive error climates predict less alienation from teachers over time (Steuer, Grecu, & Mori, 2024; two-country study, Grades 5-9).
This matters for practice platforms: When students know the stakes are low---it's practice, not judgment---they engage differently. They ask for help. They try ideas. They persist through 88 messages instead of closing the tab after one.
2. Mastery Experiences Build Self-Efficacy
Bandura's self-efficacy theory emphasizes that beliefs about capability stem primarily from performance accomplishments---not persuasion (Bandura, 1997). In this conversation, each turn provided micro-evidence: "That's a good start!" "Great idea!" "You're on the right track!"
But this wasn't empty praise. Each affirmation was anchored to what the student actually did: chose an idea, added detail, identified a grammar issue. By turn 44, this student had accumulated 44 performance accomplishments---concrete evidence that they could write, revise, and improve.
This aligns with Weick's (1984) small wins framework: breaking large problems into smaller, manageable ones builds momentum because each success provides proof that progress is possible.
3. Visible Progress Accelerates Effort
The goal-gradient hypothesis predicts that motivation increases as we approach completion (Kivetz et al., 2006). When the coach said "Aim for 3 to 5 sentences," the student went from overwhelmed to focused. They could now count: "This is mine so far... what else should i add"
This visible checkpoint transformed an ambiguous task into a measurable goal. The student could see exactly how close they were to finishing, which strengthened their motivation to persist.
Breaking the task into stages---generate ideas, add specifics, fix mechanics, polish dialogue---created what Nunes and Drèze (2006) call "endowed progress": each completed stage felt like forward momentum, even when the endpoint was still ahead.
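The endowed progress effect is easy to see with a little arithmetic. Here is a minimal sketch; the stamp-card numbers follow Nunes and Drèze's car-wash framing, and the helper function is our own illustration:

```python
# Endowed progress, sketched numerically: the same 10 remaining steps
# feel closer to done when the task is framed as partially complete.
# Numbers follow Nunes and Dreze's car-wash study framing: a 10-stamp
# card vs. a 12-stamp card that arrives with 2 "free" stamps.

def completion(done: int, total: int) -> float:
    """Fraction of the task already behind you."""
    return done / total

plain = completion(0, 10)     # 0.0  -- "you haven't started"
endowed = completion(2, 12)   # ~0.17 -- "you're already on your way"

# Both cards need exactly 10 more stamps, yet only one reports progress.
```

Each completed stage in the writing task works the same way: it changes the fraction the student perceives as done, even when the remaining work is unchanged.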
4. Rapid, Specific Feedback Counteracts Forgetting
Memory decays rapidly without reinforcement. Murre and Dros's (2015) replication of Ebbinghaus's forgetting curve confirms that without active rehearsal, newly learned information fades quickly---often within hours.
This student didn't wait a week for feedback. They didn't even wait an hour. Each question received an immediate response that built on their last attempt. This rapid feedback loop allowed them to consolidate learning before forgetting could set in (Roediger & Butler, 2011).
Meta-analyses confirm that elaborated, immediate feedback significantly improves learning outcomes, particularly in computer-based environments where responses can be tailored to specific student needs (van der Kleij et al., 2015). Hattie and Timperley's (2007) comprehensive review found that feedback's effectiveness depends critically on its timing, specificity, and connection to clear goals---all present in this 88-message exchange.
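The timing argument can be made concrete with the classic exponential form of the forgetting curve, R = e^(-t/S). A minimal sketch follows; the 24-hour stability value is an illustrative assumption, not a fitted parameter:

```python
import math

# Ebbinghaus-style forgetting curve: retention R = exp(-t / S), where
# t is time elapsed and S is memory stability. S = 24 hours here is an
# assumption chosen for illustration only.

def retention(hours_elapsed: float, stability_hours: float = 24.0) -> float:
    """Fraction of newly learned material still retained."""
    return math.exp(-hours_elapsed / stability_hours)

immediate = retention(5 / 60)   # feedback within 5 minutes: ~0.997 retained
delayed = retention(7 * 24)     # feedback after a week: ~0.001 retained
```

The exact numbers depend on the stability assumption, but the shape of the curve is the point: feedback that arrives within the same session lands while the memory trace is still nearly intact.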
Try This Tomorrow
You don't need AI to create small wins for your students. Here are strategies you can implement during workshop, independent practice, or small groups:
1. Make the Finish Line Visible
Instead of "Write a paragraph," try: "Write 3-5 sentences that show what the character does next."
Instead of "Revise your draft," try: "Find one sentence you can split into two" or "Add one detail that shows how the character feels."
Specific, countable goals transform overwhelming tasks into achievable steps.
2. Ask Questions That Scaffold, Don't Solve
When a student says "I don't get it," resist the urge to re-explain. Instead:
- "What part makes sense so far?"
- "What do you think might happen next?"
- "Can you show me where you got stuck?"
This approach embodies Nicol and Macfarlane-Dick's (2006) principles of good feedback practice: facilitate self-assessment and encourage positive motivational beliefs. Questions honor the student's thinking while helping them find the next foothold.
3. Celebrate Micro-Progress Out Loud
When a student adds a detail, fixes a run-on, or asks a clarifying question, name it:
- "You just made that sentence way more specific."
- "Nice---you caught that comma splice yourself."
- "That question tells me you're thinking about your reader."
This makes invisible thinking visible and gives students evidence that they're growing.
4. Use "What Else?" to Extend Thinking
After a student shares an idea, ask: "What else could you add?" or "What might happen next?"
This signals that their first draft isn't their final thinking---and it keeps them in the driver's seat.
What to Look For:
- Students asking clarifying questions (not giving up)
- Students revising on their own initiative
- Students using specific vocabulary from your feedback
- Students saying "I fixed it" or "I added more"
Start Small This Week
- One routine: Break writing assignments into 3-5 visible checkpoints
- One tool: Index cards with 3 options (or Instructron writing activities)
- One measure: Students ask clarifying questions before drafting (vs. rushing to first sentence)
How Meaningful Practice Supports This
Built for Safe Practice, Not Test-and-Sort
Many assessment platforms are designed around high-stakes testing---one chance, timed pressure, right-or-wrong scoring. Research shows this approach can heighten test anxiety, particularly for students who've experienced repeated failure, and can widen achievement gaps (Ballen, Salehi, & Cotner, 2017).
Instructron takes a different approach: programmatic assessment---many low-stakes, feedback-rich datapoints instead of single high-pressure moments (van der Vleuten et al., 2012). Low-stakes tests used as tools for learning reduce anxiety while supporting retention (Educational Psychology Review, 2023).
When students know it's safe to err—when the stakes are practice, not judgment—they take intellectual risks: they ask "i dont get the prompt" instead of giving up. They send 88 messages instead of one rushed attempt. This is psychological safety in action (Edmondson, 1999).
Suggestion-Level Scaffolding
Instructron's Writing Coach embodies these research principles through suggestion-level scaffolding that keeps students in control of their writing. The Writing Coach provides step-by-step prompts with affirmations at each stage, so students see their draft evolve through visible revision rounds.
When a student says "i dont get the prompt," the coach clarifies the task without providing a model answer. When they draft an idea, the coach asks what happens next---not what should happen. When they ask about mechanics, the coach points to patterns without correcting every error.
This approach aligns with evidence-based feedback principles: effective formative feedback is timely, specific, and focused on the task rather than the person (Shute, 2008). It also promotes self-regulation by helping students develop feedback literacy---the capacity to make sense of feedback and use it to enhance learning (Carless & Boud, 2018).
Over 88 messages, this student received 44 scaffolded prompts---and wrote every word themselves. We scaffold decisions; students write the words. This mirrors Bandura's finding that mastery experiences are most powerful when learners attribute success to their own efforts, not external help.
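The pattern is simple enough to caricature in a few lines. This is a hypothetical sketch: the intent labels and coach responses below are our illustration, not Instructron's actual implementation:

```python
# Suggestion-level scaffolding, caricatured: every coach "move" is a
# question or a pattern to try, never finished prose the student could
# paste into their draft. Labels and wording are illustrative assumptions.

MOVES = {
    "confused": "What part makes sense so far?",
    "ideas": "Here are three directions. Which one feels right to you?",
    "mechanics": "Read that sentence aloud: where does one idea end and the next begin?",
    "revision": "What would you change first, and why?",
}

def coach_move(intent: str) -> str:
    # The default keeps the student generating, not receiving.
    return MOVES.get(intent, "What else could you add?")
```

Whatever the real implementation looks like, the design constraint is the one the transcript shows: the coach supplies the next question, and the student supplies every word of the draft.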
Product Patterns We Focus On
Teachers see every conversation thread—no hidden AI responses. Student IDs are randomized; we never send names to AI systems. Instructron never trains on student work. This visibility turns every writing session into actionable insight without adding grading time.
The result? Students like this one, who go from "i dont get the prompt" to asking about run-on sentences, dialogue formatting, and character development. Small wins that stack into real growth.
Key Takeaways
- For Teachers: Small wins build self-efficacy more effectively than big praise. Break tasks into 3-5 visible steps and celebrate each milestone.
- For Students: Confusion isn't failure—it's the beginning of learning. Asking "what are some ideas but dont give me the full answer" shows you're ready to think, not just complete.
- For Learning: Immediate, specific feedback defeats the forgetting curve and keeps momentum alive. The closer students are to seeing progress, the harder they'll work.
- For Practice: Scaffolding means asking the next question, not providing the next answer. 44 turns of "What do you think?" beats one turn of "Here's what to write."
Try This Week: Pick one assignment and break it into 3-5 checkpoints. After students complete each checkpoint, give them 30 seconds of specific feedback on what they did well. Watch how many more students persist to the finish line.
Privacy Note: We paraphrase learner messages and remove identifying details. Instructron never sends student names to AI systems; interactions use randomized IDs. Teachers have full conversation visibility. We never train on student work.
References
Ballen, C. J., Salehi, S., & Cotner, S. (2017). Examining the impact of exam length on performance and retention in undergraduate biology. PLOS ONE, 12(10), e0185396. https://pmc.ncbi.nlm.nih.gov/articles/PMC5648180/
Bandura, A. (1997). Self-efficacy: The exercise of control. New York: Freeman. https://books.google.com/books/about/Self_Efficacy.html?id=eJ-PN9g_o-EC
Carless, D., & Boud, D. (2018). The development of student feedback literacy: Enabling uptake of feedback. Assessment & Evaluation in Higher Education, 43(8), 1315--1325. https://doi.org/10.1080/02602938.2018.1463354
Edmondson, A. (1999). Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44(2), 350--383. https://dash.harvard.edu/entities/publication/13a7b031-0fdd-45ec-a7e0-2b80e2bc679f
Educational Psychology Review (2023). Tests as tools for learning: A meta-analytic review of the testing effect. Educational Psychology Review, 35(3). https://link.springer.com/article/10.1007/s10648-023-09808-3
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81--112. http://www.columbia.edu/~mvp19/ETF/Feedback.pdf
Kivetz, R., Urminsky, O., & Zheng, Y. (2006). The goal-gradient hypothesis resurrected: Purchase acceleration, illusionary goal progress, and customer retention. Journal of Marketing Research, 43(1), 39--58. https://business.columbia.edu/faculty/research/goal-gradient-hypothesis-resurrected-purchase-acceleration-illusionary-goal
Murre, J. M. J., & Dros, J. (2015). Replication and analysis of Ebbinghaus' forgetting curve. PLoS ONE, 10(7), e0120644. https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0120644
Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199--218. https://strathprints.strath.ac.uk/3235/
Nunes, J. C., & Drèze, X. (2006). The endowed progress effect: How artificial advancement increases effort. Journal of Consumer Research, 32(4), 504--512. https://academic.oup.com/jcr/article/32/4/504/1787425
Roediger, H. L., & Butler, A. C. (2011). The critical role of retrieval practice in long-term retention. Trends in Cognitive Sciences, 15(1), 20--27. https://profiles.wustl.edu/en/publications/the-critical-role-of-retrieval-practice-in-long-term-retention
Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153--189. https://www.ets.org/research/policy_research_reports/publications/report/2007/hslv.html
Steuer, G., Grecu, A. L., & Mori, J. (2024). Positive error climate predicts less alienation from teachers: A longitudinal study. British Journal of Educational Psychology, 94(1), 1--17. https://pmc.ncbi.nlm.nih.gov/articles/PMC11802967/
Steuer, G., Rosentritt-Brunn, G., & Dresel, M. (2013). Dealing with errors in mathematics classrooms: Structure and relevance of perceived error climate. Contemporary Educational Psychology, 38(3), 284--293. https://doi.org/10.1016/j.cedpsych.2013.03.002
van der Kleij, F. M., Feskens, R. C. W., & Eggen, T. J. H. M. (2015). Effects of feedback in a computer-based learning environment on students' learning outcomes: A meta-analysis. Review of Educational Research, 85(4), 475--511. https://doi.org/10.3102/0034654314564881
van der Vleuten, C. P. M., Schuwirth, L. W. T., Driessen, E. W., et al. (2012). A model for programmatic assessment fit for purpose. Medical Teacher, 34(3), 205--214. https://www.researchgate.net/publication/221860673_A_model_for_programmatic_assessment_fit_for_purpose
Weick, K. E. (1984). Small wins: Redefining the scale of social problems. American Psychologist, 39(1), 40--49. https://doi.org/10.1037/0003-066X.39.1.40

