Here is how to use ChatGPT for student feedback: paste the student's submission along with your rubric, ask for draft feedback organized by rubric criteria, then review the output and add your own observations before sending it. The AI handles the structural analysis — identifying where work meets criteria and where it falls short — while you add the personal layer that makes a student feel recognized. The whole process takes about two to three minutes per submission instead of ten.
What you’ll walk away with:
- AI-drafted feedback you can personalize in a fraction of the time
- Consistent feedback quality across all students
- A pattern library of common feedback points
Why ChatGPT for feedback
Student feedback has two distinct layers: structural and personal. Structural feedback addresses whether the work meets the criteria — did the student cover the required points, organize their thinking clearly, support claims with evidence? Personal feedback addresses the student as a person — noticing growth from their previous submission, connecting their work to their specific goals, encouraging a direction that seems promising. ChatGPT is good at the first layer and reliably bad at the second.
That division of labor is what makes it useful. The structural layer is where most of your time goes — reading carefully against criteria, noting what's present and what's missing, articulating gaps clearly. It's important work, but it follows a pattern. A rubric with five criteria generates feedback that maps to those five criteria, and ChatGPT can produce that mapping quickly and consistently. Research on feedback effectiveness (including Wisniewski, Zierer, and Hattie's meta-analysis) confirms that specific, criteria-referenced feedback yields larger learning gains than general praise or vague suggestions — exactly the kind of output ChatGPT produces when given a rubric to work from.
From over a decade running Ruzuku, I've watched the same pattern play out repeatedly: instructors start their course with detailed, thoughtful feedback on every submission. By week four, they're writing "Great job!" and nothing else. The issue is never caring — it's capacity. If AI can keep the structural layer consistent while you focus on what only you can see, your students get better feedback, not worse.
Step by step: Creating personalized feedback
Build your rubric first
Before you involve ChatGPT at all, you need a rubric that describes what good work looks like for this assignment. Three to five criteria are enough for most course submissions. A coaching course assignment might use: clarity of coaching question, evidence of active listening, appropriate use of framework, and actionable next steps. Write one to two sentences for each criterion describing what "meets expectations" looks like. This rubric becomes the core of your prompt, and without it, ChatGPT will generate vague feedback that helps nobody.
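For instance, a "meets expectations" descriptor for the active-listening criterion might read something like this (the wording is illustrative — adapt it to what you actually teach):

```
Evidence of active listening (meets expectations): The transcript shows at
least two moments where the coach reflects or paraphrases the client's
words before asking the next question, and the follow-up questions build
on what the client actually said.
```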
Paste the student’s work and your rubric into ChatGPT
Combine both elements in a single prompt. Start with the rubric so ChatGPT has the evaluation framework before it reads the work. Something like: "Here is my rubric for this assignment: [rubric]. Here is a student submission: [submission]. Evaluate this work against each rubric criterion." Providing both together is important — if you give only the submission, ChatGPT invents its own criteria, and they rarely match what you actually taught.
Prompt for specific, criteria-referenced feedback
Tell ChatGPT exactly what format you want. Ask it to address each rubric criterion separately, note what the student did well in that area, and identify one specific way to improve. Request that it use examples from the student's actual submission rather than generic advice. "Point to a specific sentence or section" is a prompt instruction that dramatically improves output quality — it forces the model to engage with the actual text rather than producing boilerplate.
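Putting steps two and three together, here is a sketch of what the full prompt might look like, with bracketed placeholders for your specifics (the structure matters more than the exact wording):

```
Here is my rubric for a [assignment type] in my [topic] course:

1. [Criterion name]: [one to two sentences describing "meets expectations"]
2. [Criterion name]: [description]
3. [Criterion name]: [description]

Here is a student submission:

[paste submission]

Evaluate this work against each rubric criterion. For each criterion, note
what the student did well and identify one specific way to improve. Point
to a specific sentence or section from the submission rather than giving
generic advice. Keep the tone encouraging and professional.
```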
Review and edit the draft
Read ChatGPT's feedback with your subject matter expertise active. Check that its assessment is accurate — does the student's work actually demonstrate what ChatGPT says it does? Look for places where the AI praised something mediocre or missed something good. Tighten language that sounds robotic or overly formal. This review typically takes two to three minutes, far less than writing from scratch, but it's not optional. Sending unreviewed AI feedback is worse than sending no feedback at all, because students will notice when the comments don't match their work.
Add your own observations
This is where you earn the word "personalized." Add one or two sentences that only you could write. Maybe you noticed this student's coaching question was sharper than their previous attempt. Maybe their case study connects to a concept you're covering next week. Maybe their approach reminds you of a real client scenario you've encountered. These personal touches take thirty seconds, and they're the part students remember months later. The structural feedback tells them what to fix; your personal note tells them you actually read their work.
Send it
Combine the edited structural feedback with your personal observations into a single response. On Ruzuku, you can reply directly to student submissions within the course — the feedback lives in context alongside the assignment, not in a separate email that gets lost. Timeliness matters: feedback delivered within 48 hours of submission is significantly more useful than feedback delivered a week later, because the student still remembers their thinking process.
Prompts to try
Copy these into ChatGPT, replacing bracketed text with your specifics.
- Rubric-based feedback: "Here is my rubric for a [assignment type] in my [topic] course: [paste rubric]. Here is a student submission: [paste submission]. For each rubric criterion, note what the student did well with a specific example from their work, then identify one concrete improvement. Keep the tone encouraging and professional."
- Encouragement plus improvement: "Review this student submission for a [topic] course: [paste submission]. Write 3-4 sentences of feedback. Start with one real strength you observe in their work, citing a specific passage. Then suggest one area for improvement with a concrete example of what a stronger version would look like. Do not use generic praise — reference their actual words."
- Peer review guide: "I teach a course on [topic]. Students will review each other's [assignment type]. Create a peer review guide with 4-5 specific questions students should answer when reading a classmate's work. Each question should focus on one aspect of quality and include a brief explanation of what to look for. The tone should be supportive — this is feedback between peers, not grading."
The human layer
Students need to feel seen. That word — "seen" — keeps coming up in research on what makes online learning work, and it is not something you can automate. A student who submits a case study about working with a client struggling with chronic pain doesn't just need feedback on their coaching framework. They need to know that their instructor recognized the difficulty of that situation and respected how they handled it. They need a sentence that says "I noticed you adapted the framework for a client in active pain — that's a more advanced application than most students attempt at this stage."
AI can handle the structural feedback — whether the response addressed all rubric criteria, whether the logic held, whether the evidence was sufficient. You add the personal recognition: acknowledging effort, noting growth, connecting the student's work to their stated goals or professional context. This division doesn't diminish the feedback; it elevates it. The student gets both thorough structural analysis and real human recognition, delivered faster than either alone.
Course creator tips
Create a reusable prompt template per assignment
Once you've refined the prompt for a specific assignment, save it. Each time a new submission comes in, you paste the student's work into the same template. This means your first submission takes the longest as you calibrate the prompt, but submissions two through twenty take a fraction of the time. If you teach cohort-based courses on Ruzuku, this adds up quickly — fifteen students submitting five assignments each is seventy-five feedback sessions that each benefit from the template.
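Everything in this article works inside the ChatGPT interface, but if you're comfortable with a little code, the same template idea can be scripted against OpenAI's API. A minimal sketch, assuming the official openai Python package, an OPENAI_API_KEY environment variable, and a folder of submissions saved as text files — the file names and model name are placeholders, not part of the workflow above:

```python
# Minimal sketch, not a production tool: reuse one calibrated prompt
# template across every submission for an assignment. Assumes the official
# OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set in the
# environment; file layout and model name are placeholders.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

# The calibrated template for one assignment: written once, reused per student.
TEMPLATE = """Here is my rubric for this assignment:
{rubric}

Here is a student submission:
{submission}

For each rubric criterion, note what the student did well with a specific
example from their work, then identify one concrete improvement. Point to
a specific sentence or section. Keep the tone encouraging and professional."""

rubric = Path("rubric.txt").read_text()  # hypothetical file layout

for path in sorted(Path("submissions").glob("*.txt")):
    submission = path.read_text()
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder: any current chat model works
        messages=[
            {"role": "user", "content": TEMPLATE.format(rubric=rubric, submission=submission)},
        ],
    )
    # Save the draft next to the submission for human review; every draft
    # still gets edited and personalized before it goes to the student.
    draft = response.choices[0].message.content
    path.with_suffix(".feedback.txt").write_text(draft)
```

The loop also makes the next tip nearly automatic: point it at the folder for one assignment and you have all the drafts ready to review in a single sitting.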
Batch similar submissions together
Review all submissions for the same assignment in one sitting rather than responding to them as they arrive. When you read multiple attempts at the same assignment back to back, you develop a stronger sense of what "typical" looks like — which makes it easier to spot work that's unusually strong or that's struggling in a specific way. You also catch patterns: if four out of ten students misunderstood the same instruction, the issue is your assignment description, not their effort.
Keep a feedback phrase bank
As you personalize AI-drafted feedback over time, you'll develop sentences and observations that recur. Save the good ones. "Your analysis of X shows real growth from your earlier work on Y" is a structure that works for many contexts with different specifics plugged in. A phrase bank is not the same as canned feedback — it's a collection of proven sentence structures that you fill with specific, real observations about each student's work.
What it gets wrong
ChatGPT's most persistent failure in feedback is generic praise. "Great job on this assignment!" and "You clearly put a lot of effort into this" appear in almost every draft regardless of what the student actually submitted. Generic praise erodes trust because students can tell when a comment could apply to anyone's work. Delete every sentence that doesn't reference something specific to this student's submission.
It also misses context about the student's journey. When a student who struggled with the first three assignments suddenly produces strong work, that's worth acknowledging. When a student who was confident takes a risk on a new approach and it doesn't quite land, the feedback should honor the risk-taking. ChatGPT sees only the single submission in front of it — it has no memory of previous interactions. That context is something only you carry, and it's often the most valuable part of the feedback.
The third problem is volume. ChatGPT tends to produce too much feedback at once — a paragraph per rubric criterion, plus an introduction, plus a closing encouragement, plus suggested resources. A student who receives eight hundred words of feedback on a five-hundred-word submission will feel overwhelmed, not supported. Edit aggressively. The best feedback identifies one primary strength and one primary area for improvement. If there are three things to fix, address the most impactful one and save the others for next time.
Frequently asked questions
How long should feedback be for each student submission?
Three to five sentences of specific, actionable feedback is more useful than a full page of general commentary. Students skim long feedback and miss the important points. Lead with what they did well (one sentence), identify the most important area for improvement (one to two sentences), and close with a concrete next step (one sentence). If you find yourself writing more than a short paragraph, you're probably trying to reteach the lesson — which means the lesson itself needs work, not the feedback.
Should I tell students that AI helped draft their feedback?
Transparency builds trust, so yes — a brief mention in your course materials is appropriate. Something like "I use AI tools to help me provide faster, more detailed feedback, and I personally review and add to every response." Most students care about whether the feedback is accurate and helpful, not whether it was drafted by hand. What would erode trust is sending unreviewed AI output that misses the point of their work or feels generic.
Can I use this process for group assignments or peer review?
For group assignments, the process works with one adjustment: ask ChatGPT to evaluate both the group output and individual contributions if you can identify them. For peer review, a better approach is to use ChatGPT to generate a peer review guide — a structured set of questions students use when reviewing each other's work. This teaches evaluation skills and distributes the feedback workload. You still review a sample of peer feedback to ensure quality and step in when students are too generous or too harsh.
Feedback belongs inside the course, not in a separate inbox
Personalized feedback is what separates a great course from a forgettable one. But feedback that arrives in a disconnected email thread loses context — students have to remember what they submitted and match it to your response. On Ruzuku, you reply directly to student submissions within the lesson. The work, the feedback, and the next step all live in the same place.
That in-context feedback loop is what makes Ruzuku's coaching tools powerful for courses that depend on personal attention. Students submit their work, you respond with the structural analysis ChatGPT helped you draft plus your own observations, and the student sees it all right alongside the lesson that prompted it.
Related guides
- How to Write Course Assessment Questions Using ChatGPT — build the rubrics and assessments that generate the submissions you're now giving feedback on
- How to Write Discussion Prompts Using ChatGPT — use discussion prompts to create peer learning that complements your individual feedback
- How to Build an AI Study Companion for Your Students — give students on-demand support between your feedback sessions
- How to Create Course Surveys Using Google Forms — gather structured student feedback that complements your individual responses
- Ruzuku Course Builder — build courses where exercise submissions and feedback live inside each lesson