Do Parents Need to Know You're Using AI to Mark?
As AI marking tools become part of more teachers' workflows, most of the conversation focuses on the practical questions: does it mark accurately, does it align to the right exam board, does it save enough time to be worth the effort? The question that receives far less attention — but carries real legal and professional weight — is whether the school is being transparent with students and parents about the fact that AI is being used at all.
For many schools, the honest answer is that transparency around AI marking has not been thought through carefully. Teachers adopt tools individually, departments integrate them informally, and the school's privacy notices — if updated at all — do not reflect the new processing activities taking place. Under UK GDPR, this is not a minor administrative gap. It is a compliance issue that becomes increasingly difficult to defend as AI use in assessment becomes more widespread and more visible.
What UK GDPR Says About Transparency
Transparency is one of the core data protection principles set out in UK GDPR Article 5, where it sits alongside lawfulness and fairness in the first principle. It requires that personal data is processed in a way that is transparent to the data subjects: in this context, the students whose work is being processed, and by extension their parents or guardians where students are minors.
The transparency obligation is given practical form by Articles 13 and 14, which specify the information that must be provided to individuals when their data is collected. For school-age students, this information is typically delivered through the school's privacy notice — a document that should explain what data the school collects, why, who it is shared with, and for how long it is retained.
When a school begins using an AI marking tool, that tool becomes a data processor operating on the school's behalf. The school, as data controller, is required to update its privacy notice to reflect this new processing activity — including the purpose of processing (marking and feedback), the categories of data involved (student work, which may contain personal information), any third parties involved (the AI marking tool and its sub-processors), and the basis on which any international data transfers are made.
This is not a theoretical requirement. A parent who asks how their child's work is being processed is entitled to a clear answer. A school that cannot provide one — because its privacy notice is out of date and its AI tool use has not been formally documented — is in breach of its transparency obligations regardless of whether the tool itself is handling data responsibly.
What Schools Are Currently Disclosing
In practice, most schools' privacy notices have not kept pace with the adoption of AI tools. The notices that were written when the school last reviewed its GDPR documentation — often in the period following the introduction of the GDPR in 2018 — describe a data landscape that no longer exists. They list the systems the school used at the time: MIS platforms, parent communication apps, assessment databases. They do not mention AI marking tools, because AI marking tools were not part of the picture when the notice was written.
This creates a gap between what the school's privacy notice says and what the school is actually doing. That gap matters because transparency is not satisfied by a document that technically exists — it requires that the document accurately reflects current processing activities. A privacy notice that describes how student work is handled in terms that were accurate three years ago but do not account for the school's current use of AI tools is not providing meaningful transparency to parents and students.
The gap is compounded when AI tools are adopted by individual teachers or departments without a school-level decision or disclosure process. A teacher who begins using an AI marking tool as a personal productivity measure, without the knowledge of the school's data protection lead, may be introducing a new processing activity that the school has no record of. Under UK GDPR Article 30, schools are required to maintain Records of Processing Activities (ROPA) covering all data processing carried out on their behalf. An AI marking tool that is not on the school's approved list and is not in its ROPA represents a compliance failure at the institutional level, even if the individual teacher's intentions are entirely reasonable.
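To make the record-keeping concrete, here is a minimal sketch of what a ROPA entry for an AI marking tool might capture, written as a TypeScript type with an illustrative entry. The field names and example values are hypothetical (schools should work from the ICO's ROPA templates and their own documentation conventions), but the shape mirrors the Article 30 requirements and the privacy notice disclosure items described earlier.

```typescript
// Illustrative sketch only: field names and values are hypothetical, not an
// official ROPA format. Article 30 UK GDPR lists the required content; the
// ICO publishes templates that schools can adapt.

interface RopaEntry {
  processingActivity: string;      // what the processing is
  purpose: string;                 // why the data is processed
  dataCategories: string[];        // what kinds of personal data are involved
  dataSubjects: string[];          // whose data it is
  processor: string;               // the third-party tool acting for the school
  subProcessors: string[];         // parties the processor itself relies on
  internationalTransfers: string;  // where data goes and on what legal basis
  retention: string;               // how long the data is kept
  securityMeasures: string[];      // safeguards applied to the processing
}

// A hypothetical entry for an AI marking tool:
const aiMarkingEntry: RopaEntry = {
  processingActivity: "AI-assisted marking of student work",
  purpose: "Generating marks and feedback to assist teacher assessment",
  dataCategories: ["Assessed student work (may contain personal information)"],
  dataSubjects: ["Students in the year groups using the tool"],
  processor: "AI marking tool (named, with DPA reference)",
  subProcessors: ["Model and hosting providers as listed in the DPA"],
  internationalTransfers: "None, or covered by an adequacy decision / IDTA",
  retention: "Submissions not retained after processing (per vendor DPA)",
  securityMeasures: ["Encryption in transit", "Anonymised submissions", "Access controls"],
};
```

A data protection lead who can produce an entry like this for every AI tool in use has, in one step, both the Article 30 record and the raw material for an accurate privacy notice update.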
How AI Marking Differs From Other EdTech
Schools use technology to support assessment in many ways — online quiz platforms, plagiarism checkers, reading assessment tools, MIS-integrated grade books. The argument that "we already use lots of EdTech without telling parents about each one" has some surface plausibility. But AI marking tools are different in ways that make disclosure more important, not less.
Most EdTech processes metadata: who logged in, what score they achieved, how long they spent on an activity. AI marking tools process content — the actual substance of what a student has written. A student's essay contains their ideas, their voice, their analysis, and in many cases their personal reflections and experiences. Processing this content through a third-party AI system is a qualitatively different act from recording that a student completed a quiz.
The content processed by AI marking tools may also be sensitive in ways that are not obvious from the assignment brief. A GCSE English Language creative writing task may produce responses that are deeply personal. A sociology assignment on inequality may draw on a student's own experience of disadvantage. A religious studies essay may reveal a student's beliefs or their family's religious practices. The fact that a student submitted this content as assessed work does not remove its sensitivity, or the expectation that it will be handled with appropriate care.
AI marking also involves automated processing of student work that has consequences for the student — a grade, a feedback comment, a level descriptor. UK GDPR Article 22 gives individuals the right not to be subject to decisions based solely on automated processing where those decisions produce legal or similarly significant effects. Whether AI marking constitutes solely automated decision-making depends on the extent of human review before outcomes are applied — but schools that apply AI-generated marks without meaningful teacher verification need to consider this carefully.
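One way to see what meaningful human review looks like in practice is as a workflow gate: AI output enters a pending state, and nothing can reach the student until a teacher has examined and approved it. The sketch below is a hypothetical illustration of that pattern, not any particular tool's implementation; every type and function name in it is invented for the example.

```typescript
// Hypothetical illustration of a human-review gate for AI-generated marks.
// All names are invented for this example.

type MarkStatus = "pending_review" | "approved" | "amended";

interface AiMarkResult {
  studentRef: string;     // anonymised reference, not a name
  suggestedMark: number;
  feedback: string;
  status: MarkStatus;
}

// The AI produces a suggestion; it starts life as pending, never final.
function receiveAiResult(studentRef: string, suggestedMark: number, feedback: string): AiMarkResult {
  return { studentRef, suggestedMark, feedback, status: "pending_review" };
}

// Only a teacher action moves a result past the gate. The teacher can accept
// the suggestion as-is or amend it before anything is released.
function teacherReview(result: AiMarkResult, finalMark: number, finalFeedback: string): AiMarkResult {
  const amended = finalMark !== result.suggestedMark || finalFeedback !== result.feedback;
  return {
    ...result,
    suggestedMark: finalMark,
    feedback: finalFeedback,
    status: amended ? "amended" : "approved",
  };
}

// Release refuses anything that has not passed teacher review, so no solely
// automated decision can reach a student.
function releaseToStudent(result: AiMarkResult): void {
  if (result.status === "pending_review") {
    throw new Error("Cannot release: teacher review has not taken place");
  }
  // ... deliver the reviewed mark and feedback
}
```

The point of the pattern is that the release step is structurally incapable of acting on unreviewed output, which is the property a school needs to be able to point to when assessing its Article 22 exposure.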
What Good Communication Looks Like
Transparency does not mean sending a letter home every time a teacher uses a new tool. It means ensuring that the school's privacy notice is accurate, that it is accessible, and that parents and students understand in general terms how their data is handled — including the use of AI in assessment support.
For schools that are already using AI marking tools, the first step is updating the privacy notice to reflect current practice. The relevant section should describe the purpose of AI processing (to assist teachers with marking and feedback), the categories of work that may be processed (assessed student work submitted for marking), the tool used and its role as a data processor, and the fact that student work is not retained by the tool after processing (where this is true). The notice does not need to describe every technical detail — it needs to give parents and students a clear and accurate picture of what happens to their data.
For schools introducing AI marking for the first time, communication ahead of implementation is preferable to disclosure after the fact. A brief statement in the school newsletter, a note in the relevant section of the school website, or a letter home to parents of students in the relevant year groups is sufficient. The communication does not need to be defensive or apologetic — AI marking support is a legitimate and increasingly common use of technology in education. The goal is simply to ensure that parents are informed rather than surprised.
Student communication matters too. Children are data subjects in their own right at any age, and students with sufficient maturity to understand their rights can exercise them without parental involvement; Year 12 and Year 13 students are in any case likely to turn eighteen before the end of their course. Ensuring that students understand how their work is being processed — what the AI does, what it does not do, and how the teacher reviews and approves the result before it reaches them — supports both compliance and the professional relationship between teacher and student.
How GradeOrbit Supports Compliant Use
GradeOrbit was built specifically for UK secondary schools, and its design reflects the transparency and data protection requirements that apply in that context.
Student work submitted for AI marking is never stored by GradeOrbit. It is processed to generate marks and feedback, and then discarded. There is no archive of student submissions, no retention period, and no use of submitted content for model training. This makes the data handling statement schools need to include in their privacy notices straightforward: student work is processed to generate AI-assisted marking feedback and is not retained after processing.
GradeOrbit does not identify students by name. Students are anonymous in the marking workflow — identified as Student 1, Student 2, and so on — so the AI model never processes content alongside a student's personal identity. For work that contains identifying information within the text, GradeOrbit's client-side redaction tool allows teachers to black out names, personal details, and teacher annotations before submission, directly in the browser, before anything leaves the device.
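As a general technique, client-side redaction means the black-out happens in the browser, on the user's device, before any upload. The sketch below shows the basic canvas-based approach in TypeScript; it is a conceptual illustration of the technique rather than GradeOrbit's actual implementation, and the function names are invented for the example.

```typescript
// Conceptual illustration of client-side redaction: black rectangles are
// drawn over an image of the work in the browser, and only the flattened
// result is ever uploaded. Not GradeOrbit's actual code; names are invented.

interface RedactionBox {
  x: number;
  y: number;
  width: number;
  height: number;
}

// Draw the page image onto a canvas, paint opaque rectangles over the
// regions the teacher marked, and export the flattened result.
function redactImage(page: HTMLImageElement, boxes: RedactionBox[]): Promise<Blob> {
  const canvas = document.createElement("canvas");
  canvas.width = page.naturalWidth;
  canvas.height = page.naturalHeight;

  const ctx = canvas.getContext("2d");
  if (!ctx) throw new Error("Canvas 2D context unavailable");

  ctx.drawImage(page, 0, 0);
  ctx.fillStyle = "#000";
  for (const box of boxes) {
    ctx.fillRect(box.x, box.y, box.width, box.height);
  }

  // toBlob produces a flattened bitmap: the pixels under the rectangles are
  // gone from the exported file, not merely hidden behind a layer.
  return new Promise((resolve, reject) => {
    canvas.toBlob(
      (blob) => (blob ? resolve(blob) : reject(new Error("Export failed"))),
      "image/png"
    );
  });
}
```

The design point is that the redaction is destructive at export: the underlying pixels never leave the device, which is what makes a "before anything leaves the browser" claim meaningful rather than cosmetic.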
Marking outputs include marks at each criterion level, categorised feedback (what went well, what could be improved, subject-specific development points), and an overall grade — all presented for teacher review before any feedback reaches students. This means there is always a stage of human oversight before AI-assisted marks or comments are applied. The teacher approves the result; the AI does not act unilaterally. This is the model that makes AI marking both professionally defensible and compatible with the UK GDPR considerations around automated decision-making.
For teachers who want to reduce their marking workload while ensuring that student data is handled in a way the school can stand behind, GradeOrbit provides marking support aligned to AQA, Edexcel, and OCR mark schemes across GCSE and A-Level — without creating a compliance problem in the process. For guidance on choosing a marking tool that meets the school's data protection standards, see our post on whether AI marking tools are safe for student work.
Try GradeOrbit's Transparent AI Marking
GradeOrbit gives UK teachers AI-assisted marking with a data handling model that supports rather than complicates the school's GDPR compliance. Student work is never stored. Students are anonymous in the marking workflow. Outputs are reviewed by the teacher before they reach students. And a Data Processing Agreement is available for schools that need one for their records.
If your school has begun using AI marking tools without updating its privacy notice or considering parental disclosure, now is a good time to address that. The legal obligation is real, but the communication itself is not difficult. A clear, honest statement about how AI supports marking — and the safeguards in place to protect student data — is something most parents will receive positively rather than with concern.
Create your free GradeOrbit account and mark your first class set with confidence today.