
How to Handle AI Detection in Functional Skills Work

GradeOrbit Team·Education Technology
7 min read

When conversations about AI-generated student work first emerged in schools and colleges, most of the concern focused on GCSE coursework and A-Level essays. Functional Skills quietly fell off the radar. But if you teach Functional Skills English or Maths — particularly in further education, post-16 settings, or adult learning — you have probably already encountered submissions that feel a little too polished, a little too structured, or a little too unlike the student who handed them in.

AI writing tools such as ChatGPT and Claude are just as accessible to Functional Skills learners as they are to Year 11 students. In some cases, the short, structured nature of Functional Skills tasks makes them easier to generate with AI, not harder. This guide explains how AI detection works, what to do with a high score, and how to build a consistent, fair approach across your centre.

Why Functional Skills Work Is Particularly Susceptible

Functional Skills assessments are designed to test practical literacy and numeracy in real-world contexts. At Level 1 and Level 2, English tasks typically involve writing a formal letter, a report, or a short article — all formats that AI tools handle extremely confidently. The word counts are modest, the structures are predictable, and the prompts are often publicly available in past papers.

For a learner who struggles with writing but has access to a phone, the temptation is obvious. Unlike a personal essay or an extended GCSE response, a 300-word formal letter has very little in it that is uniquely theirs. That is precisely what makes it easier to delegate to an AI — and harder to spot through reading alone.

This does not mean your learners are dishonest. Many Functional Skills cohorts include adults returning to education, learners with significant gaps in their schooling, or students who are anxious about formal writing. AI can feel like a lifeline when you are not confident. Understanding that context matters when you decide how to respond to a high detection score.

How AI Detection Actually Works

AI detection tools analyse patterns in text — sentence rhythm, vocabulary predictability, structural uniformity — and compare them against characteristics commonly found in AI-generated writing. The output is a likelihood score, not a verdict. A score of 85% does not mean the student definitely used AI. It means the text shares significant statistical characteristics with AI-generated content.

This distinction is crucial. Detection tools can produce false positives, particularly for learners who write in a very formal, rehearsed register — which is often exactly what Functional Skills teaching encourages. A student who has drilled letter-writing formats repeatedly may produce text that looks, to an algorithm, like it came from a language model.
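To make the idea concrete, here is a toy sketch of two of the surface signals described above — sentence-rhythm uniformity and vocabulary repetitiveness. This is purely illustrative: real detectors rely on language-model statistics such as token probabilities and perplexity, and this sketch is not any vendor's actual algorithm.

```python
import re

def uniformity_signals(text):
    """Toy illustration of two surface signals detectors weigh.

    Low sentence-length variance (very even rhythm) and a low
    type-token ratio (repetitive vocabulary) are the *kind* of
    statistical pattern detection tools look for. This is not
    a real detection algorithm.
    """
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    lengths = [len(s.split()) for s in sentences]
    mean = sum(lengths) / len(lengths)
    # Variance of sentence length: lower = more uniform rhythm.
    variance = sum((n - mean) ** 2 for n in lengths) / len(lengths)
    words = re.findall(r"[a-zA-Z']+", text.lower())
    # Type-token ratio: unique words / total words.
    ttr = len(set(words)) / len(words)
    return {"sentence_length_variance": variance, "type_token_ratio": ttr}
```

Note that a heavily drilled letter format would score "uniform" on measures like these for entirely legitimate reasons — which is exactly why a score can only ever be a prompt for professional judgment.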

Reputable tools are transparent about this uncertainty. GradeOrbit's AI Detection feature returns a 0–100% likelihood score alongside a confidence label and a breakdown of the specific linguistic signals that contributed to it. That breakdown is what turns a number into something you can actually use in a professional conversation.

What to Do With a High Likelihood Score

A high score is a prompt to investigate further, not a conclusion. Before you take any action, consider the following questions:

  • Is this consistent with the student's previous work and classroom performance?
  • Could a very structured teaching style explain the AI-like patterns?
  • Does the student have the digital access and technical confidence to use an AI tool?
  • Is there any contextual evidence — a draft, notes, a classroom observation — that supports or contradicts the score?

If the score is high and several of those questions raise flags, the appropriate response is a conversation, not an accusation. Ask the learner to talk you through their work. Ask them to complete a short related task in a supervised setting. In most cases, this will quickly clarify whether they produced the work themselves.

Your centre's academic integrity policy should guide what happens next. If you do not yet have clear guidance on AI use in assessments, this is a good moment to raise it with your head of department or quality lead. The Joint Council for Qualifications (JCQ) has published guidance on AI in assessments that applies across awarding bodies including City & Guilds, Pearson, and NOCN — the most common providers for Functional Skills qualifications.

Using GradeOrbit's Built-In Detection Tool

GradeOrbit includes a dedicated AI Detection tool that works alongside the marking workflow. You can submit text directly by pasting it in, uploading an image of handwritten work, or uploading a document file. The tool returns a likelihood score, a confidence label, and the specific signals the AI identified.

There are two model options. The Faster model costs 1 credit and is suitable for a quick initial check. The Smarter model costs 3 credits and applies a more thorough analysis — useful when you want a more detailed breakdown before raising a concern formally.

For Functional Skills cohorts, where the written tasks are typically short, the Faster model is often sufficient for a first pass. Reserve the Smarter model for cases where you intend to document the result or involve a line manager.
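If you are budgeting credits for a cohort, the two-tier pricing above reduces to simple arithmetic. The helper below is illustrative only — the credit costs come from the figures above, but the function itself is not part of any GradeOrbit API:

```python
def detection_cost(num_checks: int, model: str = "faster") -> int:
    """Estimate credit spend for a batch of detection checks.

    Costs (1 credit for Faster, 3 for Smarter) are taken from
    the pricing described above; this helper is illustrative.
    """
    cost_per_check = {"faster": 1, "smarter": 3}
    return num_checks * cost_per_check[model]

# A Faster-model first pass on 30 short scripts:
# detection_cost(30)            -> 30 credits
# A Smarter-model follow-up on 5 flagged scripts:
# detection_cost(5, "smarter")  -> 15 credits
```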

If you are new to AI detection and want a broader overview of how these tools work in practice, the guide on AI detection for teachers covers the fundamentals in more detail.

Staying Consistent Across Your Centre

One of the risks with any new tool is inconsistency. If one teacher flags AI use and another ignores similar evidence, learners quickly learn that the response depends on who marks their work. That is unfair — and in an awarding body context, it can create compliance risk.

Agree on a threshold for escalation with your team. Decide in advance how detection scores will be recorded, whether in a student file, a shared spreadsheet, or your MIS. Make sure all staff understand that the score is advisory and that no action should be taken on the basis of a score alone.

It is also worth communicating clearly with learners. Many Functional Skills students do not fully understand what constitutes academic misconduct when it comes to AI. A short, plain-English explanation of what is and is not acceptable — given at the start of a unit — reduces the likelihood of dishonest submissions and protects learners who might otherwise stumble into a policy breach unintentionally.

Start Detecting AI in Functional Skills Work

AI-generated submissions in Functional Skills are not going away, and the tools available to detect them are improving rapidly. The key is to use them as one piece of evidence among several, maintain professional judgment at every step, and build consistent practices that are fair to all your learners.

GradeOrbit's AI Detection tool is designed to support exactly that approach — giving you a clear, evidence-based score you can act on responsibly, rather than an opaque verdict that leaves you guessing.

Try GradeOrbit free and run your first AI detection check today. No commitment required.
