
How I Cheated My Compliance Course - Sheila from Accounts
Lewis Carr unveils his plan to combat learner misuse of agentic AI in online training.
Meet Sheila from Accounts.
She’s loyal and she works hard, and every year she logs into Moodle to complete her mandatory training. GDPR, anti-money laundering, nut allergy awareness. You name it, she aces it.
She clicks Next through 140 PowerPoint slides, answers a ten-question quiz, watches a video and prints her certificate. Box ticked. Compliance achieved. Wham, bam, thank you ma’am.
Now she can get back to her real job. Or, more realistically, she gets her AI agent to answer her emails and transcribe her meeting notes, all while she’s out walking the dog (sorry… I mean “working from home”).
Because yes, Sheila has adopted AI in her workflow. More importantly, Sheila has learnt to use agentic AI.
Using an AI-powered browser such as Comet, she can instruct it to log in to Moodle, open the course, read the slides, pass the quiz, and even mark the course as complete.
Agentic AI is a godsend for busy workers like Sheila, but a nightmare for learning developers like me.
There’s very little I can do to stop her because the old model is broken. But I have a plan (of sorts).
For 20 years, corporate e-learning has been built around content delivery and compliance tracking.
“Here are some slides. Click Next. Answer a quiz. Download a certificate.”
It worked (sort of) when the main threat was someone skipping the video. But now, that entire model has collapsed.
“We Can’t Stop Them” isn’t a strategy; it’s a surrender. I’m a badass Moodle developer, and there are some nerdy tricks I could build into the LMS:
I could randomise DOM elements so every “Next” button has a different selector and the AI can’t find the same thing twice. I could add micro-delays and scroll tracking so the course only unlocks after real human-style interactions. I could build behavioural analytics that flag robotic click patterns or impossible completion speeds. I could even hook into Moodle’s logs to create an “AI suspicion index” that alerts trainers to suspiciously efficient learners.
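To make the behavioural-analytics idea concrete, here is a minimal sketch of what an “AI suspicion index” might look like. This is illustrative Python, not real Moodle plugin code (Moodle plugins are PHP), and the function name, thresholds, and scoring are all my own assumptions rather than anything that ships with an LMS:

```python
from statistics import pstdev

# Hypothetical sketch: score the Next-button click timestamps from one
# course attempt for "robotic" patterns. Thresholds are illustrative.

def suspicion_score(click_times, min_course_seconds=300):
    """Return a 0-3 'AI suspicion index' for one course attempt.

    click_times: ordered timestamps (in seconds) of Next-button clicks.
    """
    score = 0
    gaps = [b - a for a, b in zip(click_times, click_times[1:])]
    if not gaps:
        return score
    # 1. Impossible completion speed: whole course in under 5 minutes.
    if click_times[-1] - click_times[0] < min_course_seconds:
        score += 1
    # 2. Robotically regular clicking: humans vary, bots barely do.
    if pstdev(gaps) < 0.2:
        score += 1
    # 3. No slide ever viewed for longer than 2 seconds.
    if max(gaps) < 2.0:
        score += 1
    return score

# A bot clicking every half-second trips all three checks:
bot = [i * 0.5 for i in range(140)]
print(suspicion_score(bot))  # 3
```

In a real deployment the timestamps would come from Moodle’s event log, and the thresholds would need tuning against genuine learner data before anyone alerted a trainer.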
If I wanted to get really creative, I could write a custom availability plugin that randomises learning paths so the content flow changes every session, the kind of madness that AI hates and that would freak poor Sheila out.
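The path-randomising idea can be sketched in a few lines too. Again, this is illustrative Python rather than an actual Moodle availability plugin; the session-seeded shuffle is my assumed mechanism for making the content flow differ between sessions while staying repeatable within one:

```python
import random

# Hypothetical sketch of per-session path randomisation: derive a
# repeatable but session-specific ordering of course modules, the way
# a custom availability plugin might vary the content flow.

def session_path(modules, session_id):
    """Shuffle module order deterministically per session."""
    rng = random.Random(session_id)  # same session id -> same path
    path = list(modules)
    rng.shuffle(path)
    return path

modules = ["intro", "gdpr", "aml", "allergies", "quiz"]
print(session_path(modules, "sheila-session-1"))
# A new session id yields a different walk through the same content.
```

Every module still gets delivered; only the route changes, which is exactly the kind of moving target a scripted agent struggles with.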
But would that slow Sheila’s AI down? Maybe. For a week.
But the moment I build a wall, Sheila will find a taller AI ladder to climb over it. So yes, I can make it trickier for Sheila, but I’ll never beat the bots at the code level alone. So I need to look at this differently.
The real fix is to outdesign and outcode. The future of learning platforms isn’t just about locking things down; it’s about making learning unoutsourcable (if that’s even a word).
That means designing experiences an AI would struggle to fake. (And I’m not saying AI couldn’t eventually work around it, but let’s at least make it harder for Sheila.)
Ask Sheila to reflect on a real incident in her workplace. Have her record a 60-second video explaining what she’d do if she noticed a suspicious invoice. I can support this by building new tools and helping trainers create better content with them, and that’s how we create a cultural shift.
Sure, she could read a ChatGPT-generated script off camera, but only if she had time to write the prompt and prep the response. Not if we fire random questions at her in quick succession. Or grill Sheila on the spot, in the office, catching her off guard in the corridor on the way to the loo. I’m on to you, Sheila.
AI can’t walk around the office. It can’t reflect on your company’s culture. And it can’t prove Sheila understands the why behind the policy.
Some people suggest we just proctor online training, i.e. force Sheila to take her anti-money-laundering module under webcam surveillance.
Aside from being wildly impractical (and creepy), it doesn’t solve the problem.
Are you really going to monitor every employee’s SCORM habits? And what if Sheila’s AI agent just runs on a second machine while she stares blankly at the camera?
The goal of our training was never to verify presence; it was to ensure engagement. So maybe I build more engaging tools in Moodle that make learning… fun?
The fix isn’t just technical; it’s pedagogical too. Even with all my mad Moodle tricks, the real answer still lies in course design. But it’s not enough to tell teachers and instructors to change the way they assess: we need to give them the tools to build courses that assess differently, or deliver learning in new, creative ways.
Instead of pushing content at learners, we should be creating experiences that connect to real-world situations and capture personal context or examples. Courses should conclude with meaningful reflection, not just a quiz score.
That’s how we make it easier to learn than to cheat.
I am actively looking at technical code solutions, but I’m also looking at how to blend in learning experiences that are human, contextual, and impossible to outsource. If I am to change a broken model, I need to write a brand new one.
Lewis Carr is an Innovation and Product Director, and all-round Moodle Wizard. Connect with Lewis here: https://www.linkedin.com/in/lewiscarrlearning/
