How to Use AI Detection in Your School Fairly
AI detection is now part of the landscape in UK secondary schools. ChatGPT, Claude, Gemini, and a growing list of similar tools are accessible to every student with a smartphone, and the writing they produce has become sophisticated enough that experienced teachers routinely struggle to identify it by eye alone. The tools that detect AI-generated content have improved in parallel — but a detector without a clear, fair process behind it is not just unhelpful. It can be actively harmful.
This guide is for secondary school teachers and leaders who want to use AI detection fairly — not as a gotcha mechanism, but as one evidence-based input into professional judgement. The goal is not to catch students. The goal is to uphold the integrity of assessment in a way that is defensible, consistent, and kind.
Why Fairness Has to Come First
AI detection tools produce a likelihood score — a probabilistic estimate, not a verdict. No tool currently available, including the most sophisticated commercial platforms, can tell you with certainty that a specific piece of writing was generated by AI. What they can tell you is that a piece of writing shares statistical features with AI-generated text to a greater or lesser degree.
That distinction matters enormously in a school context. A student who writes in a formal, well-structured register — perhaps because they have been explicitly taught to — may score higher on an AI detector than a student who writes more colloquially, even if neither used AI at all. Students who use AI as a drafting aid and then substantially rewrite the output may score lower than students who used no AI but happened to produce writing that resembles AI outputs. Treating a likelihood score as proof of wrongdoing, without further investigation, is both professionally indefensible and potentially very unfair to individual students.
A responsible approach to AI detection scores uses the result as the beginning of an investigation, not the end of one.
What a Good AI Detection Result Actually Tells You
The most useful AI detection tools do more than return a single percentage. GradeOrbit's detection tool, for example, returns a likelihood score from 0 to 100%, a confidence rating (Low, Medium, or High), a list of the specific linguistic and structural signals that contributed to the score, and a plain-English summary explaining the overall assessment.
Understanding what each of these elements means in practice helps you make better use of the result.
The likelihood score indicates how strongly the text resembles AI-generated content based on statistical patterns. A score of 30% suggests the writing is broadly consistent with human-authored text. A score of 85% suggests strong AI-associated patterns — but it is still not proof.
The confidence rating tells you how reliable the score is likely to be given the length and nature of the text. A high-confidence result on a 600-word essay is more actionable than a low-confidence result on a 150-word structured response. Short texts inherently produce less reliable detection signals, and a responsible tool tells you when that is the case rather than projecting false certainty.
The signal list describes the specific linguistic features that contributed to the score — things like formulaic paragraph structure, even distribution of specification content, unusually low variation in sentence length, or absence of a personal voice. These are the details that allow you to build a professional case if you need one, and to have a grounded conversation with the student rather than simply citing a number.
GradeOrbit also allows you to choose between a standard 1-credit scan for quick screening and a deep 3-credit analysis when you need greater certainty before taking any further action. For routine screening of a class set, the standard scan is usually appropriate. For a piece of work where you are seriously considering raising a concern formally, the deep analysis is worth the additional credit.
Building a Consistent School Policy
Individual teachers making individual decisions about when and how to use AI detection, based on their own threshold for suspicion, is a recipe for inconsistency — and inconsistency is unfair. A student in one teacher's class may face a formal conversation about a piece of work that would have passed without comment in another teacher's class, not because the work is different, but because the processes are different.
A fair school policy on AI detection addresses several questions in advance. Which assessments will be routinely screened? What score threshold triggers further investigation rather than automatic escalation? Who reviews flagged work, and who makes the final judgement? What happens during the investigation — is the student interviewed, and who is present? Is there a formal record kept regardless of outcome?
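To make the threshold question concrete, the escalation rule a policy agrees in advance can be sketched as a simple decision function. Everything below is illustrative: the thresholds, confidence labels, and action names are invented for this example, not taken from GradeOrbit or any official guidance, and a school would substitute its own agreed values.

```python
# Hypothetical triage rule for a school policy. The numeric thresholds
# and the action strings are example values only, agreed in advance so
# every teacher applies the same process to the same kind of result.

def triage(score: int, confidence: str) -> str:
    """Map a detection result (likelihood 0-100 plus a Low/Medium/High
    confidence rating) to an agreed next step."""
    if confidence == "Low":
        # Short or ambiguous texts produce unreliable signals:
        # note the result, never escalate on this alone.
        return "record only"
    if score >= 80 and confidence == "High":
        return "refer to HOD for review and student conversation"
    if score >= 60:
        return "teacher reviews signals before deciding"
    return "no further action"

print(triage(85, "High"))    # refer to HOD for review and student conversation
print(triage(85, "Low"))     # record only
print(triage(30, "High"))    # no further action
```

The point of writing the rule down is not the code itself but the consistency it encodes: two students with identical results get identical treatment regardless of whose class they are in.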
Involving Heads of Department in drafting this policy matters for two reasons. First, the norms around AI assistance vary considerably by subject. In Computer Science, using GitHub Copilot to autocomplete code may or may not constitute academic misconduct depending on the specific task. In History coursework, generating a draft argument and then editing it is categorically different from submitting an unmodified AI output. Second, HODs are the people who will need to support their teams in applying the policy consistently — and they will do that better if they have had input into designing it.
Once a policy exists, it should be shared explicitly with students and parents. Transparency about what is being scanned, on what basis, and what the process is if something is flagged reduces the likelihood of disputes and signals that the school's approach is principled rather than arbitrary.
How GradeOrbit's Detection Tool Works
GradeOrbit's AI detection tool is built into the same platform as its marking workflow, which means teachers who already use GradeOrbit for AI-assisted marking can screen submitted work without managing a separate subscription or workflow.
The tool accepts pasted text, uploaded documents, and uploaded images — including photographs of handwritten student work taken on a phone. This is important because not all suspicious work arrives as a typed document. A student who generates AI text on their phone and then copies it onto paper by hand, or who uses AI to produce a draft and handwrites a lightly edited version, will still produce work that carries detectable statistical signatures. GradeOrbit can process an image of a handwritten script and detect those signatures just as it can with a typed submission.
Student work is never stored after processing. No submission is retained, indexed, or used for any purpose beyond the immediate detection result. This is a deliberate design choice that keeps GradeOrbit's handling of student work consistent with UK GDPR obligations and the professional ethics of working with data about minors.
Having the Conversation With Students
If a detection result warrants a conversation with a student, the way that conversation is handled matters enormously — both for the student's experience and for the integrity of the outcome.
Start from the result, not from a conclusion. "I've noticed some features of this work that I'd like to ask you about" is very different from "I know you used AI for this." Show the student the specific signals the tool identified. Ask them to talk through their process — how they planned the piece, what sources they used, how they drafted it. A student who genuinely wrote the work will usually be able to provide a plausible and detailed account. A student who copied an AI output will often struggle to explain specific choices in the text.
Take notes during the conversation, and consider having a second adult present if you anticipate a formal outcome. The detection result and the conversation together constitute the evidence base for any decision — not the score alone.
For more detailed guidance on this, see our post on how to talk to students about AI detection results.
Try GradeOrbit's AI Detection Tool
GradeOrbit's detection tool is designed for exactly this context — explained results, confidence-rated scores, image support for handwritten work, and no student data retained after processing. It works alongside GradeOrbit's AI marking workflow, so you can screen and mark from a single platform.
Your first scans are free. Create your free GradeOrbit account and run your first AI detection scan today.