
How to Write a School AI Academic Integrity Policy

GradeOrbit Team · Education Technology
7 min read

By the end of the 2025–26 academic year, every school in England should have a ratified AI policy in place. That deadline is now pressing for many leadership teams who have been watching the situation develop — hoping for clearer central guidance before committing to paper. That guidance has arrived, and the expectation is clear: schools need a written position on how AI may and may not be used by students, and that position needs to be communicated to staff, students, and parents.

One of the most important components of any school AI policy is the section covering academic integrity — the rules that govern whether and how students are permitted to use AI tools for assessed work. This is the part with the most direct consequences for students, the greatest weight under JCQ regulations, and the most bearing on how teachers should use detection tools like GradeOrbit when they suspect academic misconduct.

This guide is written for school leaders drafting or updating that policy. It covers what the policy must include, how to communicate it effectively, and how AI detection fits into a fair and legally defensible framework.

Why Your School Needs This Policy Now

The DfE's generative AI guidance makes clear that schools should set out their expectations for both staff and student use of AI tools. While the guidance does not prescribe a single template, it identifies academic integrity and data privacy as the two areas of highest risk — and both require explicit school-level decisions, not just a reference to JCQ rules in the student handbook.

JCQ (the Joint Council for Qualifications) has updated its malpractice regulations to treat unacknowledged use of AI in externally assessed work as equivalent to plagiarism. A student who submits GCSE coursework or a non-exam assessment (NEA) that was substantially generated by ChatGPT or Claude without declaration is committing malpractice and may be disqualified. This is not a grey area for public examinations. But for internal assessments, classwork, and homework, the rules are set at school level — which means your policy needs to draw that line clearly.

Without a written policy, your school is exposed. If a teacher refers a student for AI misconduct and the investigation reaches governors or parents, the first question will be: what were students told? What were they permitted to do? If the answer is "we hadn't written it down," the school's position becomes very difficult to defend.

What the Policy Must Cover

A credible AI academic integrity policy should address four distinct areas: permitted uses, prohibited uses, consequences, and process.

Permitted Uses

Be specific about what students are allowed to do. Many schools find it helpful to distinguish between using AI as a research aid — asking it questions, getting explanations of concepts, exploring ideas — and using AI to produce text that is then submitted as the student's own work. The former is broadly permissible in most contexts; the latter is the category that requires clear restriction.

Some teachers are also beginning to set tasks where AI use is explicitly part of the assignment — students might be asked to use ChatGPT to draft an argument and then critique or improve it. These uses should be acknowledged in the policy so students understand the distinction between sanctioned and unsanctioned AI involvement.

Prohibited Uses

State plainly that submitting AI-generated text as the student's own work — without disclosure — constitutes academic misconduct. This applies to homework, internal assessments, controlled assessment preparation, and all externally assessed work. The policy should name the types of tools covered: not just ChatGPT and Claude, but translation tools used to convert AI-generated output, paraphrasing tools, and AI-assisted essay services.

The wording matters. Avoid phrases like "students may not use AI." That is too vague to be enforced and too broad to be fair. Instead, write something like: "Students may not submit work that was substantially written, generated, or restructured by an AI tool unless the task explicitly requires it and the teacher has confirmed AI use is permitted for that assessment."

Consequences

Set out a range of consequences proportionate to the severity and context of the misconduct. A Year 8 student who used ChatGPT to write a homework essay because they didn't understand the rules warrants a different response from the one appropriate for a Year 13 student submitting AI-generated NEA coursework. The policy should allow for graduated responses — a recorded conversation, a redo, a formal investigation — and should specify that for externally assessed work, the school's obligations under JCQ regulations take precedence.

Process

Teachers need to know what to do when they suspect AI use. This is where many schools leave a gap. The policy should describe the process clearly: a teacher who has concerns should gather evidence, including any AI detection results, before initiating a conversation with the student. Detection scores alone are not evidence of misconduct — they are a starting point for investigation. The policy should make this explicit so that teachers do not make premature accusations based on a single score.

Where AI Detection Fits In

AI detection tools like GradeOrbit are most valuable when they are positioned correctly in your policy: as an investigative aid, not an enforcement mechanism. A likelihood score of 85% does not mean a student used AI. It means their work shares many statistical features with AI-generated text. A student who writes exceptionally well, who is an EAL learner, or who has drafted in a very formal register may produce work that scores highly without any AI involvement.
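To see why a high score cannot stand alone, consider a deliberately simplified, hypothetical illustration. The class size, prevalence, detection rate, and false-positive rate below are invented for the example and are not the published performance of GradeOrbit or any other tool; the point is only that even an accurate detector will, across a whole cohort, flag some students who did nothing wrong.

```python
# Hypothetical back-of-the-envelope illustration: all figures are invented
# for this example, not measured performance of any specific detection tool.

class_size = 200            # submissions for one assessment
prevalence = 0.20           # assume 20% involve undisclosed AI use
detection_rate = 0.90       # assume 90% of genuine AI use gets flagged
false_positive_rate = 0.05  # assume 5% of honest work also gets flagged

ai_assisted = class_size * prevalence              # 40 submissions
honest = class_size - ai_assisted                  # 160 submissions

true_flags = ai_assisted * detection_rate          # 36 correct flags
false_flags = honest * false_positive_rate         # 8 honest pieces flagged

flagged = true_flags + false_flags                 # 44 flagged in total
print(f"Flagged submissions: {flagged:.0f}")
print(f"Share of flags that are genuine AI use: {true_flags / flagged:.0%}")
print(f"Share of flags that are honest work:    {false_flags / flagged:.0%}")

# In this scenario roughly 1 in 5 flagged pieces is honest work, which is
# why a score should start an investigation rather than decide it.
```

The exact numbers will differ from school to school, but the shape of the result is the reason the policy should treat a score as context for a conversation, not a verdict.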

Your policy should specify that AI detection results will be used as one piece of contextual evidence alongside the teacher's professional judgement, the student's prior work, and any opportunity for the student to respond. GradeOrbit provides detection at two levels — a faster single-credit analysis and a more detailed three-credit deep analysis — giving teachers the flexibility to conduct proportionate investigation before escalating.

It is also worth recording in the policy that no punitive action will be taken based on a detection score alone. This protects students from unfair outcomes and protects the school from challenge. The score opens a conversation; it does not end one.

Communicating the Policy to Students and Parents

A policy that has not been clearly communicated is a policy that cannot be enforced. Students need to understand the rules before they can be held to them — and that understanding needs to be documented.

The most effective approach is a brief, plain-language student-facing summary that covers the key points: what they can and cannot do, what happens if they are found to have misused AI, and how detection works at the school. This should be delivered as part of an assembly or tutor session at the start of the academic year, and again before any major internal assessment period.

For parents, a short letter or addition to the school newsletter explaining the policy and the school's approach to detection is usually sufficient. Many parents are not aware that AI detection tools exist or that schools are using them, and transparency here builds trust rather than concern.

Keeping the Policy Current

AI tools are evolving rapidly. A policy written in September 2025 may not accurately reflect the landscape of available tools by January 2026. Build in a review date — the end of each academic year is a sensible minimum — and assign responsibility for that review to a named member of the leadership team or a designated AI lead.

The review should check whether the tools named in the policy are still accurate, whether JCQ or DfE guidance has been updated, and whether any incidents during the year have revealed gaps in the policy's coverage. Schools that treat the policy as a living document, rather than a one-off compliance exercise, will be better placed to respond to the next generation of AI tools — whatever form they take.

Start Building Your Detection Capability With GradeOrbit

A strong policy needs strong tools to support it. GradeOrbit's AI detection feature gives UK teachers a structured, evidence-based way to investigate suspected AI use — with likelihood scores, detailed analysis, and the flexibility to run a quick initial check or a thorough deep scan depending on the situation.

Detection is only one part of a fair academic integrity process, but it is an important one. Having a consistent, documented tool that your staff all use in the same way strengthens your school's position if a case is ever challenged. GradeOrbit is built specifically for UK secondary schools — understanding the context of GCSE and A-Level assessment, and designed to support teachers rather than replace their judgement.

To see how GradeOrbit fits into your school's academic integrity framework, visit GradeOrbit and try it for free. No commitment required — just a clearer picture of what detection looks like in practice.
