
How to Talk to Students About AI Detection Results

GradeOrbit Team · Education Technology
7 min read

AI detection is now part of the classroom reality for many UK secondary teachers. Whether you are marking GCSE coursework, A-Level NEAs, or homework essays, tools like GradeOrbit can surface a likelihood score that gives you pause. The harder part — the part no software can do for you — is deciding what to do next, and how to have that conversation with a student.

This guide is not about catching students out. It is about helping you respond to AI detection results in a way that is fair, evidenced, and consistent with your professional responsibilities.

What the Likelihood Score Actually Means

GradeOrbit's AI detection tool gives every piece of submitted work a likelihood score between 0% and 100%. A higher score indicates the writing shares more characteristics with AI-generated text — things like unusual consistency of register, atypical sentence variety, or structural patterns common to tools like ChatGPT or Claude.

It is important to understand what this score is not. It is not proof. It is not a verdict. It is a probabilistic signal — the same kind of signal a spell checker gives when it underlines a word. You still decide what to do with it.

A score of 85% does not mean a student definitely used AI. It means the writing has a high statistical resemblance to AI-generated content. Students who write unusually formally, who have received heavy parental editing, or who have EAL backgrounds can sometimes return elevated scores. Your professional judgment remains the most important factor in any decision you make.

The 1-Credit vs 3-Credit Scan: When to Go Deeper

GradeOrbit offers two levels of AI detection scan. The standard 1-credit scan gives you a quick likelihood score suitable for routine checks across a class set. The deeper 3-credit scan runs a more thorough analysis and is better suited to situations where you are considering escalating a concern or where the initial result is ambiguous.

Before you approach a student or line manager about a potential AI integrity issue, it is worth running the 3-credit scan first. A borderline score on a quick scan may look quite different after a deeper analysis — and having that more thorough result gives you a stronger evidential basis if the conversation becomes formal.

Think of the 1-credit scan as a triage tool and the 3-credit scan as the detailed review you run before acting.

Before You Say Anything: Gather Your Evidence

Jumping straight from a high likelihood score to a conversation with a student, or worse, their parents, is a mistake that can damage trust and expose you professionally. Take time to build a rounded picture first.

Look at the student's prior work. Is the writing style in this piece consistent with what they normally produce? Check for any significant shift in vocabulary, argument structure, or fluency. If a student who typically writes in short, straightforward sentences has suddenly produced three pages of polished analytical prose, that context matters.

Consider any external factors. Was this piece produced under timed conditions or at home? Did the student have access to devices? Are there known circumstances — a learning difficulty, a period of absence, significant pastoral issues — that might explain an unusual submission?

You are not conducting a criminal investigation. You are a professional trying to understand what happened so you can respond appropriately.

How to Have the Conversation

When you are ready to speak to a student, keep the tone curious rather than accusatory. Most students who have used AI tools — whether fully or partially — have done so because they were overwhelmed, confused about the rules, or genuinely did not understand that it was problematic. Starting from a place of genuine enquiry tends to be more productive than confrontation.

A straightforward opening works well: something like, "I wanted to talk to you about this piece of work. Some of the writing felt different from what I have seen from you before — can you tell me a bit about how you put it together?" Listen carefully. A student who wrote the work themselves will usually be able to talk you through their thinking, recall their research sources, and explain specific word choices. A student who has submitted AI-generated content may struggle to do any of these things.

You might also consider asking them to reproduce a short section of the argument verbally or in writing under observation. This is not punitive — it is a legitimate way of assessing whether the student understands the content they submitted.

When to Escalate — and When to Let It Go

Not every high likelihood score needs to become a formal incident. Your school's academic integrity policy should be your first reference point. Many schools are still developing their approach to AI, and you may find that guidance is limited. In that case, talking to your Head of Department or SENCO before escalating is sensible.

Escalation is most appropriate when a high likelihood score is supported by additional evidence — a visible change in writing quality, an inability to discuss the work, or a pattern of similar submissions. Escalation through your school's misconduct procedure is particularly important for formal assessments or NEA components, where Ofqual regulations on malpractice apply.

If the score is high but the student can clearly articulate their work, or if context suggests the result may be a false positive, it is a reasonable professional decision to take no formal action beyond noting the result in your records. You are not required to act on every detection result, and exercising judgment is not the same as ignoring the issue.

Protecting Yourself and the Student

Whatever decision you reach, document it. Keep a record of the likelihood score, the date, the scan type used, any conversation you had with the student, and your reasoning. If the matter is later disputed, having a clear contemporaneous record protects both you and the student.

Be careful about sharing detection results informally — with colleagues in the staffroom, for example, or in group emails. Students have a right to privacy, and disclosure of suspected misconduct before it has been formally investigated can create significant pastoral and legal complications.

You can read more about interpreting individual detection scores in GradeOrbit's guide on how to handle AI detection scores.

Try GradeOrbit's AI Detection Tool

GradeOrbit's built-in AI detection tool is designed to support your professional judgment — not replace it. Every scan gives you a clear likelihood score, and you choose whether to run a standard check or a deeper analysis depending on what the situation calls for. It works alongside your existing marking workflow, so there is no need to use a separate platform.

If you are not already using GradeOrbit, you can sign up and run your first detection scan today. The first credits are on us.
