Insight No.18



Dukes is a family of schools, teachers, learners, parents and partners connected by our pursuit of an extraordinary life for every member of our community.

We believe that education is a journey to be enjoyed and shared at every stage of life. Insight is testament to this ongoing commitment to learning: a termly publication of articles written by some of the extraordinary educationalists in our schools and organisations.

dukeseducation.com


30 years of EdTech

David Goodhew, Managing Director, Dukes Education

Reviewing technology’s impact on students.

Confessions of a Luddite

Tom Arrand, Principal, Cardiff Sixth Form College

The benefits of a balanced approach to Artificial Intelligence (AI).

Artificial anxiety

Hitesh Chowdhry, Co-Founder of InvestIN Education

The excitement of new opportunities.

Act your reading age

Simon Pedley, Head of Academia, The Medic Portal

The risks of the simplification of language.

Teaching metacognition in the age of AI

Lella Kirkland, Head of Digital Learning, and Pete Whitmell, Deputy Head Pastoral, Notting Hill Prep

The importance of thinking skills.

The role of teachers in an AI-driven future

Matthew Tompkins, Managing Director, Dukes Education, Portugal

Koen Claeys, Education Director, Cavendish Education

Chris Hennessy, Director of Digital, Dukes

Wit and wisdom from the world’s greatest thinkers, shared by our colleagues at Dukes Education.

Tim Fish

Editor’s letter

Tim Fish, Editor-in-chief of Insight, is CEO, Dukes Education, UK, and founder of Earlscliffe, a co-ed, international boarding school for students aged 15-19, in Folkestone, Kent.

Welcome to the 18th edition of our ‘Insight’ journal, and its (revisited) theme of AI, as viewed by our panel of informed contributors from across the Dukes family who raise both green and red flags in equal measure.

I attended an education conference in London recently where a snappily-dressed EdTech entrepreneur from Boca Raton called Johnny Garcia, a Joe Pesci lookalike, gave a rapid-fire 15-minute overview of AI for the uninitiated, fuelled by his own excitement and a Sizewell C’s worth of energy. ‘The cat meows and the dog barks’ was Johnny’s distillation of how AI works.

In other words, human knowledge is the basis for all AI responses. Our knowledge underpins its form, reach and generative capability.

In about 1980, I was — in a very minor way — partly responsible for my secondary school of 1,500 pupils receiving its very first computer. On a mid-week evening at a Lancaster FE college I found myself, aged 12, on a mixed-year General Knowledge team in the area final. It was a formal affair with those maths and science lab stop-clocks on tables and a question master who modelled his appearance and manner on the renowned broadcaster, Sir Ludovic Kennedy. It came down to the final question in the final round. Who would win the prize, a coveted BBC micro-computer? Ludo started the question ‘Which famous jazz trumpeter…’ and like a flash one of the other team jumped in and exclaimed ‘Louis Armstrong!’. ‘No...’ said Ludo, ‘I’ll state the whole question and pass it over to the other side. Which famous jazz trumpeter attended Eton College?’ And with my answer of Humphrey Lyttelton we took the school’s first piece of rather cumbersome ‘tech’ back to Morecambe.

Bringing Insight to a coffee table near you...

As an aside, and given the debate in the following pages around the benefits of AI in schools, Humphrey Lyttelton’s father was himself an English and Classics teacher, of whom Philip Ziegler noted ‘George Lyttelton was one of the greatest of English schoolmasters. He was wise and tolerant; his massive presence ensured a dignity which his fine sense of the ridiculous alleviated without diminishing; he cared passionately about good writing and communicated that passion to his pupils’.

30 years of EdTech

David Goodhew, Managing Director, Dukes Education, reviews the impact of EdTech on students’ learning

One of the less-read chapters in the otherwise excellent The Teaching of Classics (Cambridge University Press, 2003) was entitled ‘Using ICT in Classics, or How I learned to stop worrying and love the computer’. The author was enthusiastic about the potential for computer-assisted learning and described the benefits of everything from language-drilling software to virtual reality archaeology. He even dared to imagine a time when laptops might replace exercise books and textbooks.

My own experience of EdTech began in my first year of teaching in 1994. Classrooms typically had one or two machines that pupils had to take turns using; there was also a computer suite/lab that could be booked for whole-class use. Internet speed was teeth-grindingly slow, and so most of the work was done using software (much of it very useful, and designed by teachers for teachers) installed on the machines. There was no Wi-Fi to speak of, and battery life was too short for portable devices to be practical. Nevertheless, it was not uncommon for pupils to word-process long essays/coursework, often using desktop machines at home.

By the 2000s, schools had seen a surge in the purchase of interactive whiteboards, fuelled in many cases by the encouragement of inspectorates (OFSTED 2005), market forces (e.g. parental tours) and neuromyths about ‘learning styles’. With hindsight, it is questionable what the benefits of interactive whiteboards were, either for learning (how do 30 children interact with one board simultaneously?) or for pedagogy — the teacher remained at the front of the classroom delivering instruction, just in front of a more brightly coloured and infinitely more expensive board. This highlights two important issues for EdTech: firstly, there is always money to be made selling schools tech they don’t need. Secondly, using screens to hold children’s attention can have unintended consequences. The internet was now in full swing, and schools worried about children using Wikipedia to do their coursework (which was, uncannily, exactly the same first response to ChatGPT). Teaching departments were beginning to collate and curate teaching notes, resources and courses on intranet sites. There was even talk of schools being replaced by online platforms or MOOCs.

In the 2010s, battery life and Wi-Fi had reached the stage where iPads and Chromebooks were a viable option, either as a classroom ‘set’ or as a one-to-one device for pupils. Colleagues were enthusiastic about the ‘Martini’ model of learning — ‘any time, any place, anywhere’ — and were experimenting with ‘flipped classrooms’, where pupils did most of the routine work outside the lesson. Again, pundits opined that YouTube or Khan Academy would disrupt traditional models of schooling. However, in schools, we were starting to see the negative impacts of cyberbullying, social media and pornography on mental health. Nevertheless, during the pandemic, many of us breathed a sigh of relief that we had already migrated most of our teaching to Google Classroom, while also being painfully aware that the ‘digital divide’ meant that pupils from the most disadvantaged families lacked both the Wi-Fi and the devices to keep up.

‘Many educationalists and parents worry about the negative impacts of an increasingly screen-based childhood’

‘The internet was now in full swing and schools worried about children using Wikipedia to do their coursework’

Post-pandemic, concerns have rightly been raised about a number of issues. Firstly, many educationalists and parents worry about the negative impacts of an increasingly screen-based childhood, not least thanks to the important work of researchers such as Haidt and Twenge. Secondly, questions have been raised about whether the benefits of EdTech have been overstated (or indeed manipulated): Horvath, in The EdTech Revolution Has Failed (2024), notes that the effect size is below the 0.4 that Hattie recommends as the ‘hinge-point’ for whether an intervention should be considered for mass adoption. OECD PISA scores show a global decline in Maths, Reading and Science over the last decade in ways that cannot simply be attributed to the pandemic. Thirdly, there is an unintended consequence of using technology to engage and motivate children, which is that they become restless and easily distracted by anything that does not deliver the same dopamine stimulation. (Many adults report a similar inability to focus, by the way!) For these reasons, in May 2023, Sweden (which had previously been an active user of EdTech) decided to reduce student-facing digital technology and embrace more traditional practices.

So, what have we learned that is relevant to our current debate around AI in education?

The example of interactive whiteboards should teach us to be cautious and sceptical: global companies offering transformational technology to schools do not always have the best interests of children at heart. The EU Artificial Intelligence Act 2024 has set an excellent example for the rest of the world in this respect. Furthermore, education is about so much more than pedagogy: classrooms (like homes) do not exist in isolation from a digital hinterland that poses risks not just to online safety but also of distraction (at best) and addiction (at worst).

‘Classrooms (like homes) do not exist in isolation from a digital hinterland’

Just as language-drilling software delivers the retrieval practice recommended by Dunlosky, so ChatGPT can improve scores if it’s used in the same way. However, if used as a quick route to the answer and a substitute for thinking, then ChatGPT makes scores worse. Given human nature — not to mention the neurochemistry of the adolescent brain — it is naïve to imagine that (left to their own devices) pupils will not succumb to temptation. As a parent, my heart sinks watching my son allegedly doing his Atom homework online but actually using Canva to create hilarious images of cats eating doughnuts. Therefore, pupil use of EdTech/AI should either be supervised or teacher-controlled by other means (e.g. device/software restrictions). My preference is for supervision by teachers, as teenagers are surprisingly adept at circumventing most attempts to block/control their use of devices and the internet. In the same way that many families are opting for dumbphones over smartphones, perhaps schools should return to desktops with no Wi-Fi to ensure that computers are used only for the intended purpose!

Returning to the author of that chapter in The Teaching of Classics, whatever the merits of the allusion to Dr. Strangelove, I feel confident he is no longer unworried by computers. His name? Well, it’s not J. R. Hartley…

Confessions of a Luddite

Tom Arrand, Principal, Cardiff Sixth Form College, discusses the impact of education technology on students’ ability to learn

If you ask an AI (as I did), it will tell you that a Luddite is ‘a person opposed to new technology or ways of working’. And why not? That is, after all, the most common definition of the term. But if you know your history, or are partial to a bit of deep learning and understanding, you will know that the followers of the apocryphal Ned Ludd were humanists who were opposed not to ‘technology’ per se, nor new ways of working, but to the misuse of people — or, rather, the use of people as cogs in a machine. They were opposed to an economy which prioritised profit over people. They took their frustration out on machines, but what they started was a movement that resulted in so much that we take for granted.

So, if you value sick pay, weekends, the abolition of child labour, and regulations that prevent you from being worked into an early grave, you are part of the Luddite legacy.

For decades, anyone airing concern or suspicion over the onset of new technologies (the phone, the television, the internet) has been dismissed as a Luddite. But those same people may have been ignored when raising similar concerns over asbestos, lead paint or Sunny Delight. Indeed, those same people might be the reason why we ensure that careful regulation, rigorous safety testing and peer-reviewed analysis are pivotal to ensuring that the next shiny thing which goes to market actually adds value to our lives and causes no harm. Those same people recognise that ultra-processed foods are bad and that nutritious, whole foods are good.

How does a Luddite approach the Big Question of EdTech or, as some recognise it, Big Tech with elbow patches? Exactly as we all should — with requisite suspicion, because they tend to regard the needs of humanity (and specifically, children) as greater than the needs of tech billionaires; with caution, because they are reluctant to allow children to be guinea pigs in an experiment where the evidence is at best inconclusive and, at worst, very negative indeed.

Learning, after all, is the acquisition of knowledge, understanding, skills, behaviours, beliefs and attitudes, over time. It is the embedding of these into deep, long-term memory and habituation. The ability to master the piano or to instinctively defend an inswinging yorker cannot be fast-tracked, and nor can any form of academic learning. It takes time, expert tuition and practice to achieve mastery.

‘How does a Luddite approach the Big Question of EdTech or, as some recognise it, Big Tech with elbow patches?’

The first question any educator should ask when presented with a new idea, initiative or piece of tech is whether or not this adds value to learning. If so, where is the evidence? Or, when it comes to the mechanics of operating a school, does this shiny new thing add value to workflow and, if so, show me the evidence so we can harness its potential. Yet where EdTech is concerned, the evidence is patchy, at best. Very few tech companies have conducted any meaningful, randomised, controlled trials on their products’ effectiveness, but where there is growing evidence, the conclusions are very worrying.

In a recent US study, it was discovered that, for every 200 hours spent learning at a computer screen, students would spend 2,000 hours media multitasking; that students would average six minutes on task in class before scrolling off task, and up to 38 minutes off task for every hour in class. These are terrifying results, and yet we can all, as adults, relate to the temptation we put before our children, given that research suggests the average ‘Generation X’ adult (b.1965-1981) spends up to four hours a day on their phone and even the average ‘Baby Boomer’ (b.1946-1964) spends three and a half hours a day scrolling. Concentration and engagement have always been a challenge for learners and teachers, so it seems wrong-headed to present an opportunity to make engagement even more difficult.

It is fair to say that we know that technology reduces learners’ ability to focus; that it reduces engagement and the likelihood that students will engage in the calm and enriching activity of reading. It increases the likelihood that they will engage with harmful content and most apps use ‘persuasive design’ to grasp the attention, with the drive being not to enhance lives but to harvest data. If something is free, you are the product not the consumer and profit is driving all of this. Profit, not people.

‘And we must teach its safe and responsible usage; we must teach digital skills and digital literacy’

Even if the Luddite’s interpretation of the evidence is unduly negative, surely the burden of proof is on the tech companies and their advocates to demonstrate that their products add value to education and have no detrimental impact. Show me the evidence. UNESCO suggests that any good, impartial evidence on the impact of technology on education is in short supply, and anything that is supplied is produced by the people who are trying to sell it. The evidence that in-person learning is more effective, however, is out there for us all to see. The renowned education researcher John Hattie has demonstrated that the student/teacher relationship is 2.5 times more effective than any computerised ‘individualised instruction’ programme. Why? Because empathy, nuance, reading of body language and deep understanding of the learner’s needs in context are human, not digital traits. They cannot be replicated. Oxytocin is only released when people engage with each other in person. We have not evolved for this to be replicated through written words or even moving images on a screen.

In its 2023 publication An EdTech Tragedy, UNESCO states that the global evidence shows that our new-found dependence on technology exposes ‘undetected exclusion, staggering inequality, inadvertent harm and the elevation of learning models that put machines and profit before people’. The pandemic further exacerbated these problems, depriving children of the social and emotional aspects of learning, which are just as important as knowledge acquisition; their education was narrow and impoverished and, while technology allowed aspects of it to continue, even these were no substitute for in-person learning. Talk of a lost generation is overblown, but we must acknowledge the learning deficit and, in learning from this, we must accept that we already know what learning is, how it happens and how it does not happen.

The Karolinska Institute in Sweden concluded that EdTech impairs, rather than enhances, learning. A stark conclusion, which has led Sweden to scrap technology altogether in favour of traditional learning. Children play and socialise; they read and engage in the wonderful struggle that is learning. Utopian idealism, maybe? But if you are going to engage in a social experiment, it is safer to let children play with building bricks than matches. Children need nutrition, not junk food. Of course, they will prefer the bag of sweets to the fresh fruit, but that’s tough, right? That’s parenting. That’s education. And so it is with technology. But like all good Luddites, I am not suggesting we smash it all to pieces (OK, the analogy fails a bit, there). I wrote this piece on a word processor; received notifications on my phone and computer screen whilst typing; had spreadsheets open on separate screens; and engaged in a video call at some point between paragraphs four and five. Technology is here to stay and much of it is great.

Where it adds value, it really adds value. Where it enhances workflow, we embrace it. And we must teach its safe and responsible usage; we must teach digital skills and digital literacy. We should use it in the classroom when, and only when, it adds value to the learning journey. But technology must be our servant, not our master. If needing to be shown evidence that technology definitely enhances learning, that it causes no harm, and that it profits the learner more than the shareholder makes me a Luddite, then that is what I am.

But I would rather be that than have to add this to the list of things my generation will be apologising to my children’s generation for getting so disastrously wrong.

Artificial anxiety

Hitesh Chowdhry, Co-Founder of InvestIN Education, strikes a balance in the debate about the influence of AI in education

There’s a recent entry to the long list of uninspiring topics covered by self-proclaimed experts on LinkedIn: a defeatist opinion about what AI means for the future of our children. Even in everyday conversations, the ‘AI will do that anyway’ jibe has become so facile that it doesn’t even prompt a discussion, turning this interesting technological development into somewhat of a melodrama. If you are under 25, the story goes, your future is going to be one long act of career obsolescence; you will either serve the machines or be replaced by them. Here’s a counter-narrative that aims to bring some balance and perspective.

Artificial insights

First, we should treat predictions about the future impact of any technology with grave scepticism. History is littered with examples of technologies that were supposedly ‘game-changing.’ Remember 3D printers? They were supposed to create a factory in every home. How many people have actually used one? Or the 3D TVs that prompted those sleek, expensive spectacles of a few Christmases past, which we wore to watch Avatar — and then no other movie? Just four years ago, Facebook changed its name to Meta, in a very public nod to the coming age in which we’d all be living in the metaverse. Meta is now prioritising investment in AI instead; Apple has stopped development of its Vision Pro headset.

Of course, technology can, and will, bring about huge disruption. It’s just that predicting how, when, and what will cause that disruption is fraught with too many variables to be given much credence. Too often, the most sensational predictions are made by technologists themselves, who overindex a technology’s potential and ignore the more obvious sociological and economic constraints. People don’t want to wear 3D glasses at home. VR headsets are an acquired taste—and they’re expensive. Nobody wants to turn their home into a factory. Even very basic assumptions we made ten years ago — like the idea that physical books would be made completely obsolete by e-readers — have been confounded. Cars ended horse travel, but bicycles are still growing in popularity; this wouldn’t have been obvious at the beginning of the last century. To believe that anyone can accurately map the trajectory and adoption of new technologies, and their impact on the career landscape of the future, is a fantasy. Even genuine experts struggle: John Maynard Keynes predicted in 1930 that by now we’d only be working 15 hours per week because of technological advancement.

Unnatural selection

Not only are the predictions sensationalised, the overriding tone for the prospects of young people feels utterly joyless. The relentless doom-mongering is irresponsible, narrow, and lacks nuance. Imagine writing a letter today to a child born in 1900, listing all of the most important developments in science and technology over the next 125 years. On one side, you write a list of those likely to cause anxiety — including the development and usage of nuclear weapons, the power of surveillance technology, and the march towards human-extinction levels of climate change. On the other side, you write a list of those most likely to induce feelings of wonder and excitement, including that, through scientific advancements, the world’s population has increased by a factor of almost five, life expectancy has doubled, air travel is an everyday occurrence, we can communicate with the world via technology that orbits the Earth, and, in general, billions of people today enjoy a level of comfort and convenience that even a king would not have experienced in 1900. Now, imagine you only showed them the first side.

‘Developments in AI have seemingly caused people to question the entire fabric of education’

Think different?

Worse still, developments in AI have seemingly caused people to question the entire fabric of education. Without acknowledging the obvious contradiction, people seem to be claiming both that ‘the jobs of tomorrow haven’t been invented yet’ and that ‘schools are not preparing our children for the jobs of tomorrow’. Did school ever do the latter? I enjoyed reading Hamlet at A-level 25 years ago; I was never under any illusion that it would make me more employable. I went to school to get an education, not a job. That education came from teachers, peers, and the experiences we shared in and out of the classroom, which served to some extent as a test-drive of the real world I would come to experience in adult life.

There is a growing lament that school curricula are becoming redundant in the face of AI’s capabilities. Why bother teaching essay writing or even mental arithmetic, the argument runs, when a computer can do both for you? But when I learnt arithmetic in the 1980s, a calculator could already do it more quickly and accurately. Microsoft Word had spellcheck before most millennials were out of primary school. And still we taught these things — not because they were the most efficient way of arriving at answers, but because they trained the mind in things like abstraction, logic, and pattern recognition, whilst instilling the values of hard work and perseverance. Today, AI may write an essay, but it cannot teach a child what it means to wrestle with an idea. That remains a human exercise — and a profoundly valuable one as an end in itself.

Embracing uncertainty

To be clear, I am as excited as others about AI’s potential to change the world for the better. I would trust an abacus to parallel park better than me, let alone a computer, and that would bring about much public benefit.

But the endless scriptwriting of what AI means for our children misses the most marvellous aspect of childhood, which is to embrace the uncertainty of the future. Yes, technological changes abound, but the fact that new technology comes along is, in itself, nothing new. The future has never been certain; change itself has always been the only certainty. If we make that a basis for instilling fear in young people, then the so-called ‘anxious generation’ will itself become a fait accompli.

Instead of scrambling around to retrofit childhood into a proto-career fair, we should aim for something more durable: to foster a sense of wonder at what is possible, and excitement about the opportunities that we can’t yet see; to embrace change, rather than be insulated from it. That would be an exercise in instilling wisdom rather than knowledge — something that would serve children and adults alike.

Act your reading age

Simon Pedley, Head of Academia, The Medic Portal, explores the risks of the simplification of language by Large Language Models

I, like many people, use AI a lot. One thing that I find it very useful for is providing additional context to other things I am reading. I’m listening to Adam Tooze’s Wages of Destruction on Audible — a terrible way to ingest a dense piece of economics, which a layperson could barely understand. Names for German pre-war industrial and labour organisations are introduced, then referred to by acronyms for chapters and chapters after that. Obviously, I can’t remember what they did or why they were important. But ChatGPT does — and can — remind me.

It was in that vein that, after reading a comment piece on immigration numbers, I wondered about the general state of the UK labour market. So, I asked ChatGPT for the ‘UK unemployment rate’, and it duly gave me the correct 4.4% unemployment rate, along with other statistics around economic inactivity, vacancy rates and so on, and finished with a neat summary:

‘These figures suggest a labour market that is relatively stable but facing challenges, such as declining job vacancies and a slight uptick in unemployment compared to the previous year’.

This was all very helpful! But reading the above, I wondered about the accessibility of the language. I have a postgraduate degree in economics, and that is a fairly unusual thing to have. If you copy and paste the entire response from ChatGPT into a reading-age estimator, you are told the reading age (essentially the average age of a child that could understand the text) is 19, or undergraduate level. I think that is probably an overestimate (it isn’t that advanced), but it is nonetheless a dense and complex text.
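Out of interest, reading-age estimators of this kind are usually built on simple readability formulas. Here is a minimal sketch using the Flesch-Kincaid grade-level formula — one common choice, though not necessarily the estimator used above — with a deliberately naive vowel-group syllable counter (a rough assumption, not how production tools count syllables):

```python
import re

def count_syllables(word: str) -> int:
    # Very rough heuristic: count runs of vowels; assume at least one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59.
    # A UK 'reading age' is roughly the US grade level plus five.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

simple = "The cat sat on the mat. The dog ran to the park."
dense = ("These figures suggest a labour market that is relatively stable but "
         "facing challenges, such as declining job vacancies and a slight "
         "uptick in unemployment compared to the previous year.")

print(round(flesch_kincaid_grade(simple), 1))  # low: primary-school level
print(round(flesch_kincaid_grade(dense), 1))   # many grades higher
```

Even on this crude formula, ChatGPT’s one-sentence labour-market summary scores many grades above the simple sentences, because the formula rewards short sentences and short words — which is exactly the gap between the response and the average reader that this piece is concerned with.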

I asked ChatGPT whether it made an assessment of the cognitive capacity of the user and moderated the language, or if not, what is the average reading age that responses tend to cluster around. I was told it made no assessment of cognitive ability and aimed for responses between ‘13 and 16’ years old.

According to the National Literacy Trust, the average reading age of a UK citizen is between 9 and 11 years.

This does invite the question of what the advancement of AI, specifically Large Language Models (LLMs), looks like in a world where 75% of the population cannot understand the responses that they are being given. Will there be a stark divide, with a ‘knowledge class’ that can utilise and, crucially, understand the responses of these new tools? And if so, what would that look like? At best, a further divergence of productivity? But maybe, at worst, AI eats the jobs of the ‘knowledge class’ and provides nothing of value to those less linguistically adept at parsing responses.

However, it gets worse.

ChatGPT offered to present the labour market information at the level a 9-year-old could understand. It rephrased the ‘rate of economic inactivity’ as ‘About 1 in 5 people (aged 16-64) are not working or looking for work’.

This is technically true. But the ‘economically inactive’ include students, stay-at-home parents, early retirees and those physically incapable of work, and ‘not working or looking for work’ doesn’t capture that on an intuitive level. And it isn’t reasonable to assume people would necessarily know better. It is reasonable to assume, from just reading ChatGPT’s summary, that there are a load of people sitting around on the sofa playing Mario Kart and not working. A view that many other sections of the media would support!

‘75% of the population cannot understand the responses that they are being given’

This is not necessarily to blame ChatGPT, but simplification in areas like this inherently involves political choices. The framing of ‘economic inactivity’ provided by ChatGPT is very friendly to a conservative viewpoint, in a way a bland ‘economically inactive’ is not (or less so). Simplification involves leaving out context, and context often matters. Which context is provided and which withheld shapes a layperson’s view of the world around them. It is concerning that, at a first test, AI is providing at best politicised explanations of concepts to the general public, and at worst misleading them.

I don’t know what the solution is here. If LLMs are going to become more widely embedded across society, then the language they use to communicate with us will need to change, to become simpler. But that simplification requires choices to be made, and who is in control of those choices?

Realistically, AI will make those choices. And maybe that’s better than it being a theatre of political contestation? Of course, the obvious answer is to improve the general literacy of the population. But things aren’t looking so hot on that front either.

In any case, I’m going back to my audiobook. Because who has the time to read anything?

Teaching metacognition in the age of AI

Lella Kirkland, Head of Digital Learning, and Pete Whitmell, Deputy Head Pastoral, Notting Hill Prep, reflect on the thinking skills needed to master artificial intelligence

At Notting Hill Prep School (NHP), we find ourselves reflecting (on an almost hourly basis) on how lucky we are to be a Thinking School. NHP, the first and only independent prep school in England to be awarded Thinking School status, has been a Thinking School since 2014; in 2021 it achieved accreditation as an Advanced Thinking School by the Cognitive Education Development Unit of the University of Exeter, and it was reaccredited in March 2024. Metacognition, the practice of thinking about thinking, is woven into the fabric of our lessons across the curriculum, and NHP children are finely attuned to reflecting on what they think and challenging why they think it.

This has never been more helpful than over the past few years as we have begun to address the rapid impact of AI on the world for which we are preparing our young people.

One of the most well-publicised and frequently discussed uses of AI is using LLMs to outsource our cognition — to summarise documents and to generate responses to prompts. Effective metacognition, therefore, is vital for writing an effective LLM prompt. Children who challenge, compartmentalise and clearly articulate their own thinking are much more able to identify specifically what thinking they need AI to achieve for them.

Explicitly teaching metacognition equips children not just with knowledge but also with the ability to think deeply and independently about their own learning. At NHP, we introduce students to the language of thinking from an early age through our Rabbit Habits, based on Art Costa’s Habits of Mind, such as ‘thinking flexibly’ and ‘communicating with clarity’. Regular structured activities, reflective conversations, and classroom practices help students internalise these habits, empowering them to effectively plan, monitor, and evaluate their own learning.

‘Metacognition, the practice of thinking about thinking, is woven into the fabric of our lessons across the curriculum’

This understanding of effective thinking and effective questioning begins in Key Stage 1 (5 to 7 years of age), as the children explore the ideas behind Dr Edward de Bono’s Six Thinking Hats. Although first published in 1985, the concept of breaking down questioning for different purposes is every bit as relevant in 2025, and equally accessible to our youngest students.

As they progress through our digital literacy curriculum, the impact of technology more widely is explored from a metacognitive perspective. How has this targeted advert been designed to attract our clicks? How has this fake news article been written to reinforce our preconceptions? These questions lead us seamlessly into the topic of AI. Is this image AI-generated, and how can we tell?

At this stage, we also reflect on how lucky we are to have P4C (Philosophy for Children) in our curriculum, as we wade into the murky waters of some of the big questions around the morality and ethics of AI capabilities.

For our Upper KS2 and KS3 pupils there are real-world quandaries to be addressed, and they need to be addressed head-on. Can I use AI to answer my homework in a fraction of the time? And if not, why not? As teachers, we need to arrive at these discussions with a clear objective for the task we are asking the children to complete and, critically, an understanding of how we want the children to be using their brains.

Sharing this metacognitive process both openly and transparently with the children, in all curriculum areas, is at the core of being a thinking school and is at the heart of how we can best address the growth of AI with our pupils.

This confidence is something which we have worked hard to build for staff, through a targeted AI and Metacognition programme of teacher training involving a range of expert speakers, including Guy Claxton, Emeritus Professor at Winchester University and Visiting Professor of Education at King’s College London, and Charles Fadel, Founder and Chairman of the Centre for Curriculum Redesign. The rate of change in the technology can be disorientating even for people working in the industry, and our focus with the staff, as with the children, is more about the relevant thinking than specific practical skills. To structure this learning effectively, we used de Bono’s Thinking Hats, a tool that proves equally valuable in staff training as it does in the classroom, guiding discussions through clearly defined perspectives: processes, facts, caution, benefits, creativity, and feelings.

‘With a focus on metacognition, we might be able to teach them to ask the right questions’

We’re also careful to ensure that our focus is framed positively, highlighting the creative potential of AI and challenging pupils to use their own creativity in conjunction with technology. While using an LLM to précis a document is undoubtedly very helpful, the creative potential of AI is more exciting. Our school competition, run alongside the Sway.ly app that filters social media content, challenged Year 7 and 8 pupils to find and write about positive use cases of AI, and many chose much wider examples than simply LLMs, finding examples where data analysis is furthering medical research or improving sustainability, such as optimising energy usage in smart cities.

Dukes Education is ahead of the game in this regard, with the AI Task & Finish group currently developing a practical guide to designing an AI curriculum. This guide includes real examples from across the Dukes family and places particular emphasis on nurturing ‘digital wisdom’ — a term we adopted to describe the blend of digital literacy, ethical understanding, and critical-thinking skills essential for thriving in an AI-driven world.

In an ever-developing AI landscape, we educators have to accept that we are not going to be able to teach the children the answers to all of their AI challenges; with a focus on metacognition, we might be able to teach them to ask the right questions. n

The role of teachers in an AI-driven future

Matthew Tompkins, Managing Director, Dukes Education, Portugal, traces the history of education to examine the role of teachers and how it may be influenced by AI

As speculation continues to grow around how AI will reshape our lives, attention is turning to how we must evolve our educational structures to adequately prepare today’s students for the uncertainties of tomorrow. In this process, it is essential not only to reimagine the function of schools, but also to reaffirm the high professional standards that educators uphold as society’s demands of their role expand, particularly to include the pastoral health and wellbeing of children.

‘Crucially, the guild model reminds us that learning is not only about content but also about the cultivation of discipline and professionalism’

The roots of formal education trace back over four millennia to Egypt’s Middle Kingdom era (c. 2060-2010 BC), where schooling was reserved for the sons of nobility. These early institutions were designed not merely for knowledge transmission, but to ensure the continuity of governance through literacy, numeracy, and administrative competence. In that sense, schools functioned as strategic instruments for national stability and development. This original purpose remains relevant as we consider how future education systems should be constructed — not simply to meet academic benchmarks, but to reinforce societal resilience.

The Greek and Roman systems each introduced their own evolutions: the Greeks emphasised philosophical inquiry, while the Romans prioritised practical knowledge and civic preparation. Roman schooling, still largely reserved for boys, combined academic disciplines with skills-based training for public speaking, legal reasoning, and administrative service. These developments laid the foundation for education’s dual purpose: intellectual development and societal functionality.

After the fall of the Roman Empire, education in Medieval Europe found a stronghold within monastic communities. Monks preserved classical knowledge through painstaking transcription and maintained religious instruction as the core of learning. Charlemagne’s reforms in the late 8th century marked a significant moment in standardising curricula and expanding access — early steps toward the kind of national education frameworks we now see globally.

Meanwhile, vocational training flourished under the guild systems. Master craftsmen passed their skills to apprentices, creating a stable mechanism for economic continuity. These guilds exemplified the enduring value of structured mentorship — something mirrored in today’s classrooms, where the teacher’s role is pivotal in maintaining academic and behavioural expectations. Crucially, the guild model reminds us that learning is not only about content but also about the cultivation of discipline and professionalism — qualities best conveyed by educators who model them consistently.

The Renaissance and Enlightenment periods reintroduced classical texts and emphasised independent thought, history, and philosophy. The invention of the printing press in 1440 expanded access to education and led to a more secular and inclusive approach. During this era, the educator’s tone evolved from one of purely religious authority to that of intellectual stewardship. The rise of the public school system made it necessary for teachers to adopt the formal and structured mode of communication deemed critical for ensuring respect, equity, and order in increasingly diverse classrooms.

As the Industrial Revolution unfolded, nations turned to mass education to meet the demands for a literate, numerate, and disciplined workforce. Compulsory education emerged not only as a tool of opportunity but also as a societal equaliser. Standardised systems were implemented to provide consistent learning experiences, and central to that standardisation was the formal tone and authority of the teaching profession that reinforced the gravity of learning, and modelled for students the decorum required in professional and civic life.

Today, educational systems vary globally yet share a common goal: to prepare young people for a rapidly changing future. The subjects and skills taught have evolved, but the underlying premise of schooling remains. Whether training elite bureaucrats in ancient Egypt, apprentices in medieval guilds, or digital-native learners in today’s classrooms, education has always aimed to equip individuals for meaningful contribution to society.

What, then, can we learn from the past to inform the future of education? And how do educators — especially in an AI-enabled era — maintain their irreplaceable influence?

Certain foundational elements remain non-negotiable. Literacy and numeracy are essential for communication, collaboration, and critical thinking. Scientific inquiry satisfies human curiosity and supports innovation. Today, emerging priorities such as sustainability, digital literacy, emotional wellbeing, and global citizenship demand our attention.

Yet amidst technological progress, one truth remains: teachers are the cornerstone of the educational experience. While AI can support content delivery and data analysis, it cannot replicate the human presence that instils rigour, empathy, and ethical reasoning in students. Here, the role of the teacher is critical. In a world awash with digital communication, the classroom must remain a space where formality, human interaction and connection foster focus, respect, and aspiration.

Personalised learning may grow in importance, but not at the expense of social cohesion and shared experience. The teacher-student relationship grounds students in a learning environment that mirrors the expectations of the adult world they are being prepared for.

Ultimately, the greatest strength of human civilisation has been its ability to transfer knowledge and values across generations. This has never been the role of technology alone — it has always been the responsibility of educators. AI may assist in delivering lessons, but it is the teacher’s experience, knowledge and humanity that give those lessons meaning. As we navigate the next evolution of education, we must not lose sight of the fact that, while the tools may change, the principles endure. Teachers must continue to lead, not just in academic content but in human connectedness. And in doing so, they will ensure that schooling remains relevant not just for societal progress but for individual self-actualisation too. n

United Kingdom

Ireland

Bruce College

Institute of Education

Wales

Cardiff Sixth Form College

KEY:

Educational Setting:

Nursery School

Preparatory School

Senior School

Sixth Form College

Dukes Plus:

Ultimate Activity Camps

Summer Boarding Courses

InvestIN Education

Reflections Day Nursery & Forest School

Sancton Wood Nursery, Cambridge

Sancton Wood School, Cambridge

St. Andrew’s College, Cambridge

Cardiff Sixth Form College, Cambridge Campus

Hove Village Nurseries

Rochester Independent College

Broomfield House School

Riverside Nursery Schools KEW MONTESSORI

Riverside Nursery Schools RICHMOND

Riverside Nursery Schools ST MARGARETS MONTESSORI

Riverside Nursery Schools TWICKENHAM PARK

Kneller Hall School*

* Opening September 2027

Radnor House School

Radnor House Prep

Riverside Nursery Schools GROSVENOR HOUSE

Hampton Court House School

LONDON

Earlscliffe

Heathside School Hampstead

Devonshire House School

Bassett House School

The Acorn Nursery

Notting Hill Preparatory School

Orchard House School

Pippa Pop-ins

Kindergartens POOH CORNER

Miss Daisy’s Nursery BROOK GREEN

Miss Daisy’s Nursery CHELSEA

Pippa Pop-ins

Prospect House School

Hampstead Fine Arts College

Hopes and Dreams Montessori Nursery School OLD STREET

Hopes and Dreams Montessori Nursery School ANGEL

The Lyceum School

Miss Daisy’s Nursery HYDE PARK

London Park School Mayfair

Knightsbridge School

Miss Daisy’s Nursery KNIGHTSBRIDGE

Kindergartens MOUSE HOUSE

Kindergartens MOUSE HOLE

Broomwood Prep BOYS

Kindergartens CRESCENT II

Dukes Education DUKES HOUSE, BUCKINGHAM GATE

Eaton House Belgravia School

Eaton Square Prep School

Miss Daisy’s Nursery BELGRAVIA

London Park School Sixth

Kindergartens THE PARK

Eaton House THE MANOR GIRLS SCHOOL

Eaton House THE MANOR BOYS SCHOOL

Broomwood Prep GIRLS

London Park School Clapham

Broomwood Pre-prep

The Pointer School

A partnership for neurodiversity

Koen Claeys, Education Director, Cavendish Education, examines the role of AI in supporting neurodiverse students

As Education Director at Cavendish Education, a family of specialist schools across the UK that focuses on educating neurodiverse children and young people, I have witnessed first-hand the rapid transformation taking place in the educational landscape. At the heart of this change is AI, which is fast emerging as a powerful tool for enhancing learning experiences. However, while AI offers immense promise, it’s crucial to recognise that it cannot replace the irreplaceable: the human connection that sits at the core of meaningful education.

At Cavendish, we work with students who have often found the mainstream education system unable to meet their unique needs. Many of them join us having had their confidence diminished or their potential overlooked. Our mission is to reignite that spark, to help them rediscover their curiosity, creativity, and confidence. And to do this, we embrace innovation, tailoring education to suit how our students think, learn, and grow.

AI, when used thoughtfully and ethically, brings a wealth of potential benefits to our classrooms. However, its role must always be to enhance, not replace, the human elements of teaching and learning. Our belief is simple: the future of education lies not in choosing between AI and humans, but in forming a partnership between the two.

Personalised learning experiences

One of the most promising aspects of AI in education is its ability to personalise learning. AI-driven platforms can adapt lessons and materials to match the pace, learning style, and interests of individual students. This level of personalisation is particularly valuable in our specialist settings, where students often require learning to be delivered in highly tailored ways.

For instance, if a student struggles with reading comprehension but excels in visual learning, AI can adapt the format and difficulty of content, presenting it in ways that feel more accessible. It can identify patterns, gaps, and strengths in real time, and offer immediate feedback, helping students to grasp concepts more quickly and retain information more effectively.

This instant responsiveness allows learners to feel seen, supported, and understood. And for those who may have experienced failure or frustration in traditional settings, this can be transformational.

Accessibility and inclusivity

Another significant benefit of AI is its potential to make education more inclusive. AI-powered assistive technologies, such as text-to-speech, speech-to-text, predictive typing, and visual aids, can break down barriers for students with a wide range of needs.

For a dyslexic learner, for example, AI tools can help with decoding and comprehension, allowing them to engage with content without being held back by their reading challenges. These technologies provide a level of scaffolding that can build confidence and foster independence.

Furthermore, AI allows for continuous assessment. Progress can be tracked over time and learning pathways adjusted accordingly. This enables educators to intervene early and tailor support where it's most needed. Yet we also know that progress is not always linear, and academic achievement is only one part of the picture — especially for our neurodiverse students.

Many of our learners benefit from targeted support in areas that extend beyond traditional academics. Executive functioning skills such as planning, sequencing, and working memory, as well as emotional regulation, communication, and social interaction, are crucial areas of development. While AI can assist here, it is the human educators who make the difference.

‘One of the most promising aspects of AI in education is its ability to personalise learning’

Communication and social skills

AI can simulate social environments in which students practise important social behaviours. These virtual scenarios can help learners build skills like turn-taking, active listening, and interpreting non-verbal cues, all within a safe, controlled space. However, these simulations can only go so far.

Social and emotional growth requires real human connection. Students thrive when they feel understood, encouraged, and supported. They need educators who can read their emotions, respond with empathy, and create a sense of belonging. AI cannot offer a reassuring smile, a spontaneous moment of humour, or the kind of nuanced understanding that comes from human experience.

Soft skills such as effective communication, collaboration, emotional intelligence, and resilience are best developed through ongoing, consistent human interaction. These are the very skills that our students will need, not just in school, but in life and in the modern workplace.

‘Our belief is simple: the future of education lies not in choosing between AI and humans, but in forming a partnership between the two’

Human educators can model these behaviours and create opportunities for students to practise and refine them. They can also provide crucial feedback, offering encouragement when it’s needed most, and guiding students to see challenges not as setbacks but as opportunities to grow.

The human-AI partnership: looking ahead

As we look to the future, it’s clear that the most effective educational models will be those that embrace the possibilities of AI while honouring the foundational role of human relationships in education.

This isn’t a question of either/or. It’s about how we combine the analytical power and efficiency of AI with the creativity, compassion, and contextual understanding of educators. It’s about recognising that, while AI can process data, deliver content, and even simulate dialogue, it cannot replicate the complexity and warmth of human interaction.

At Cavendish Education, we see technology not as a replacement for teachers, but as a partner in helping us meet each learner where they are. By integrating AI into a thoughtful, human-led educational approach, we can better support our students to develop not just academically, but holistically.

In an increasingly complex and interconnected world, this kind of holistic, responsive, and inclusive education will be more important than ever. The future of education lies in a partnership — between human and machine, heart and logic, tradition and innovation. When we get that balance right, we unlock the true potential of every learner. n

IRELAND

Institute of Education

Bruce College

UNITED KINGDOM

Colegio Inglés English School of Asturias

Colegio Internacional de Valladolid

Colégio Luso Internacional de Braga

Colégio Júlio Dinis

PORTUGAL

United Lisbon International School

Colegio Inglés Zaragoza

SPAIN

International Sharing School

Engage International School Colegio Joyfe

International English School of Castellón

Elian’s British School of La Nucía

Prague Humanities Grammar School š.po

Beehive Square Primary School

Meet the Family

Founded in 2015, Dukes brings together a carefully curated group of nurseries, schools, colleges, education consultancies and student experience organisations.

Bambíno

American Academy in Prague

JK Education

Copperfield International School SWITZERLAND

Our central team is based in London. From here, we serve our settings in the UK and Europe, providing administrative support and training, whilst promoting high-performance, leadership and wellbeing.

American Academy in Brno

American Academy in Zagreb

Verita International School

International School of Athens

CROATIA
ROMANIA
GREECE
CZECHIA

Speaking Shakespeare in a room full of cyborgs

Chris Hennessy, Director of Digital, Dukes Education, provides a brief history of AI and its applications in marketing

Human progress could be considered the pursuit of efficiency via abstraction. We take complex things, processes, and observations and abstract them down into simpler, more manageable ideas and concepts.

Communication provides a good example of abstraction in the name of progress. It has evolved from delivering messages on foot or horseback, to writing letters sent via postal services, to making phone calls through wired networks, and most recently, sending instant messages and joining video calls. Each stage removes manual effort and complexity for the user while preserving the core function of delivering information.

Behind these apparent simplifications and efficiencies, however, are increasingly vast, intricate and complex systems which, conversely, become harder to understand. There are other sacrifices: some forms of communication, such as face-to-face conversations, convey richness and nuance that abstraction often strips away.

This business of abstraction in the name of progress is not always a straight line, nor does it always lead to a universally better outcome. However, this is a necessary discussion for the topic of generative AI. Why? Because by utilising it we are becoming more efficient, but at the cost of (in most cases) stripping away richness and thoughtfulness. To use our communications analogy above, sending a text is easy, but there are times when it is wildly inappropriate. As with any newly acquired power, then, it should be wielded with responsibility.

1945

First programmable computer built.

1950

Alan Turing proposes a test of a machine’s intelligence.

So, what exactly is AI?

In 1945 the first programmable computer was built. In 1950 Alan Turing proposed a test of a machine’s intelligence, and shortly after, in 1956, we were introduced to the term artificial intelligence by John McCarthy, an American computer scientist and one of the founders of the discipline of AI. In the years to follow, progress was slow, but in the 1990s things started to pick up again, and by 1997, IBM’s Deep Blue computer was able to beat Garry Kasparov, the world chess champion.

In the 2000s, AI began integrating into everyday applications. Search engines, recommendation systems, and even early voice assistants became more sophisticated. The deep learning revolution took off in the 2010s. AlexNet’s success in image recognition demonstrated the power of neural networks, and Google DeepMind’s AlphaGo defeated a world champion in Go, a game previously thought to be too complex for AI.

1956

John McCarthy introduces the term ‘artificial intelligence’.

Then came the Transformers. No, thankfully not the robots in the sky — though the 80s TV series that was once considered ridiculous is starting to look more plausible with every day that passes. Generative Pre-trained Transformers are the ‘G’, ‘P’ and ‘T’ in ChatGPT. These are deep neural networks with a twist — they process text inputs, converted into numerical representations, in parallel instead of sequentially. A transformer can capture context and relationships between many words at once, enabling it to predict the next word in a sentence more accurately, and therefore generate more coherent text than, say, a recurrent neural network.
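To make the ‘many words at once’ idea concrete, here is a toy sketch of the attention step in Python. It is deliberately minimal, and the array sizes are invented for illustration: real transformers use learned query, key and value projections and many attention heads, none of which appear here.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the chosen axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    """Toy single-head self-attention (no learned weights):
    every word attends to every other word in parallel."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)       # all pairwise similarities at once
    weights = softmax(scores, axis=-1)  # how much each word attends to each other
    return weights @ X                  # a context-mixed vector per word

# Four 'words' represented as random 8-dimensional vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
out = self_attention(X)
print(out.shape)  # (4, 8)
```

Every row of `out` is a blend of all four word vectors at once, which is exactly the parallelism that lets transformers capture context across a whole sentence in a single step.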

1997

IBM’s Deep Blue computer vs Garry Kasparov, world chess champion.

2010s

Google DeepMind’s AlphaGo defeats a Go world champion.

Words with a similar meaning, when visualised using PCA (which projects, say, 1,000 dimensions down to two so we can view them), sit close together. This enables us to do mathematical operations on words: vector(‘Madrid’) − vector(‘Spain’) isolates a ‘capital-of’ direction, and adding vector(‘Russia’) to it lands near vector(‘Moscow’). Once converted into numerical data, we can feed words through a neural net.
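As a toy illustration of that arithmetic (with hand-built four-dimensional vectors, rather than the high-dimensional ones real models learn from text), the classic analogy vector(‘Madrid’) − vector(‘Spain’) + vector(‘Russia’) ≈ vector(‘Moscow’) looks like this:

```python
import numpy as np

# Toy 'embeddings', hand-built so the analogy works exactly.
# Real models learn vectors with hundreds of dimensions from huge corpora.
vecs = {
    "Spain":  np.array([1.0, 0.0, 0.0, 0.0]),
    "Russia": np.array([0.0, 1.0, 0.0, 0.0]),
    "France": np.array([0.0, 0.0, 0.0, 1.0]),
    "Madrid": np.array([1.0, 0.0, 1.0, 0.0]),  # Spain + a 'capital-of' direction
    "Moscow": np.array([0.0, 1.0, 1.0, 0.0]),  # Russia + the same direction
    "Paris":  np.array([0.0, 0.0, 1.0, 1.0]),  # France + the same direction
}

def nearest(v, exclude):
    # Closest word by cosine similarity, skipping the words in the query itself.
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in vecs if w not in exclude), key=lambda w: cos(vecs[w], v))

# Madrid - Spain isolates the 'capital-of' direction; adding Russia lands on Moscow.
v = vecs["Madrid"] - vecs["Spain"] + vecs["Russia"]
result = nearest(v, exclude={"Madrid", "Spain", "Russia"})
print(result)  # Moscow
```

With learned embeddings the match is approximate rather than exact, but the same nearest-neighbour search recovers the analogy.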

The first transformer to make waves with the general public was GPT-3.5 from OpenAI, which powered ChatGPT at its launch in November 2022. ChatGPT exploded in popularity, reaching 1 million users in just five days, making it one of the fastest-growing apps ever and marking this new age of Generative AI.

OpenAI managed to get to this point (and beyond) by improving models based on the scaling hypothesis. The scaling hypothesis suggests that simply making models larger, and training them on more data, leads to continuous performance improvements; so far, this has held. The resulting scaling laws indicate that there is no immediate upper limit to how much models can improve: bigger models perform better given more data and compute, despite diminishing returns. This hypothesis, combined with Moore’s Law (the principle that the number of transistors in a computer chip doubles every two years), leads some to conclude that we will soon have AGI (artificial general intelligence), a machine that can think like a human and solve general tasks (see The Age of Spiritual Machines by Ray Kurzweil, 1999).

Then came the Transformers. Self-created image by ChatGPT, using the prompt, ‘create an image of what you look like’.

So, what does this all mean in the context of Marketing and Admissions, I hear you ask? Finally, he’s getting to the point…

‘At the point of writing, AI is an incredible time saver, idea generator, image generator, and much more’

AI in marketing & school admissions

We are not yet at the singularity of an all-powerful AI that can run all of our marketing for us; far from it (Kurzweil gives us another 20 years or so!). At the point of writing, AI is an incredible time saver, idea generator, image generator, and much more. It is rapidly transforming how independent schools attract, enrol, and engage students. Once a novelty, AI-driven tools are becoming mainstream across education. In higher education, eight in ten colleges planned to use AI in admissions for the 2024 cycle, indicating a broad shift that is now reaching K-12 independent schools.

Recent surveys of independent schools echo this trend: 59% of private K-12 institutions are exploring AI in some form, and a striking 78% of school marketers are already using AI in their work. This surge in adoption is driven by the promise of greater efficiency, personalisation, and data-driven decision-making.

But to harness it effectively, we need to know what it is really good at, and what it is not so good at.

Marketing and Admissions are therefore prime candidates for AI utilisation, especially Generative AI. But to get the best out of these tools, and not churn out the exact same content as everyone else overusing them, you must feed them the right information and provide context. ‘Garbage in, garbage out’, as they say. You do this in your prompt.

Good at:

• Audio processing and generation
• Image & video processing and generation
• Sentiment & text analysis and generation
• Translation
• Coding
• Annotation, labelling, transcription
• Summarising and scheduling
• Computer vision
• Completion of tasks
• Story writing
• Analysing large data sets
• General prediction and probability

Not good at:

• Knowing what you want. You need to be explicit. Sometimes you need to do some research or digging yourself to figure this out before considering using AI.
• Facts (although better with web search or OpenAI’s Deep Research).
• Advanced maths (although again, the newer models are much better).
• Seeing through traps/honeypots. As with most bots, traps can be laid to detect or deceive them.
• …though less and less of the above, as the technology rapidly progresses.

It’s all in the prompt

This prompting framework helps users get the most out of AI tools.

1. Task: Clearly describe what you want the generative AI tool to help you with, and specify a persona and format. For example: ‘Help me draft a landing page for our next open day on [date] to feature on the website [website url]. I am a Marketing Manager at [school name].’

2. Context: Provide all the necessary details to help the generative AI tool understand what you want it to do. For example: ‘The landing page should be targeted to [define audience] in the [location] area. The goal of the landing page is to drive sign-ups via a registration form. The key metrics for this landing page: age, users, views, average engagement time, engagement, and open day registrations. For further context, read our prospectus and brand guidelines before writing: [insert links or upload documents].’

3. Reference: If available, provide examples for the generative AI tool to use in creating output. For example: ‘Here are our previous Open Day landing pages [urls].’

4. Evaluate: Assess the output to determine if it’s helpful.

5. Iterate: If the output isn’t helpful, continue to refine by clarifying what you need until it’s just right.
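For anyone stringing these steps together programmatically, the Task, Context and Reference parts can be assembled into a single prompt with a few lines of code. A minimal Python sketch; the function name and example values are illustrative, not from any particular tool:

```python
def build_prompt(task, context, references=None):
    """Assemble the Task, Context and Reference steps into one prompt.
    Evaluate and Iterate happen after you see the model's output."""
    parts = [f"Task: {task}", f"Context: {context}"]
    if references:
        parts.append("Reference examples:\n" + "\n".join(f"- {r}" for r in references))
    return "\n\n".join(parts)

prompt = build_prompt(
    task="Draft an open day landing page, written as a school Marketing Manager.",
    context="Audience: local families. Goal: drive sign-ups via a registration form.",
    references=["https://example-school.invalid/open-day-2024"],  # placeholder URL
)
print(prompt)
```

Keeping the three parts as separate fields makes the Iterate step easier: you refine one field at a time rather than rewriting the whole prompt.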

Prompts are not the only thing to consider when using these tools. Personal data and intellectual property are two major causes for concern. The models that power these tools eat up data and use it for fine-tuning, potentially exposing it or making it accessible to others, so be wary of what you put in your prompts. If you want to analyse financial data, pupil data, or anything of that nature, you should really run a model on your own machine using a tool like Ollama (for advanced users), or use a service that explicitly states that it will not train its models on your data.

Combining tools

Combining these tools into workflows is where they become really powerful. For example, using image generation tools and Zapier or Postman to string together a ChatGPT Marketing Assistant (trained on your tone of voice, brochures, website, brand guidelines, etc.), one can quickly create a draft blog or guide.

Consider then feeding that blog or document into a video or voice generation tool to create a video summary for users in a hurry, or an audio recording for those on the move to listen to. If you tailor a workflow to output content that is good enough, you can even POST to an API endpoint (your website, YouTube, a social media channel) to auto-publish your content. However, it is important always to have a human in the process to edit and approve content, to avoid AI errors. In theory, you can therefore create and publish content in multiple formats and on multiple channels in a few clicks, or just by briefing in a task over a WhatsApp voice note.

You, in essence, become the architect and engineer of your own software systems with these low-code to no-code solutions.
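As an illustration of that shape of pipeline, here is a minimal Python sketch. The endpoint URL and function names are invented for the example; in practice a tool like Zapier or your CMS’s API would sit in their place, and the approval step is where the human edits and signs off.

```python
import json
from urllib import request

def publish(draft, endpoint):
    # POST the approved content to a publishing endpoint
    # (a website CMS, YouTube, a social media API...).
    req = request.Request(
        endpoint,
        data=json.dumps({"content": draft}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return resp.status

def run_workflow(generate_draft, approve, endpoint):
    """Generate -> human review -> publish. The approve callable is the
    human-in-the-loop gate: nothing is published without sign-off."""
    draft = generate_draft()   # e.g. a call to your LLM tool of choice
    if approve(draft):         # a person edits and approves here
        return publish(draft, endpoint)
    return None                # rejected drafts never reach the endpoint

# A rejected draft is never published:
result = run_workflow(lambda: "Open day blog draft",
                      lambda d: False,
                      "https://example.invalid/api")
print(result)  # None
```

The design point is that the approval gate is a first-class step in the workflow, not an afterthought bolted on at the end.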

We have seen these tools saturate marketplaces already, with AI-written books being published on Amazon, and podcasts being generated and published on Spotify. We’re now seeing AI UGC (user-generated content) videos and other generated marketing content being published on already crowded and noisy social media channels.

So, should you start using these tools? I would always advise to use them with scepticism and caution over complete abandon, but the important thing is to start using them. If you don’t, others will, and you will be speaking Shakespeare in a room full of cyborgs. n

‘It is important to always have a human in the process to edit and approve content to avoid AI errors’

New tools, old challenges

James Howarth, Head of Digital Learning, Dukes Education, Spain, explores the practical implementation of AI in the classroom

As AI usage becomes ubiquitous across the digital world, regulatory bodies scramble to offer guidance. In schools, educational thought leaders are reimagining education and the world we are preparing our students for. That future is already here, and the world is being reshaped before us. At Dukes Education, we are focusing on preparing students and our workforce for the opportunities and challenges that an AI-enhanced world brings.

Generative AI (GenAI) usage has increased rapidly since its launch; ChatGPT took only two months to reach 100 million users. This was down to GenAI's accessibility, cost and scale: coding was no longer needed to interact with it, as natural language could be used; it was free; and everyone with an internet connection could access it. Two years on, the 2024 Generative AI in Independent Schools Survey reported that 94% of schools are using ChatGPT, an LLM (large language model). Yet there is a lot more to AI than the most famous LLM, whose name has almost become synonymous with generative AI, in the same way that Google has for searching for information. We are now moving into the era of agentic AI and reasoning models. How are our communities adapting to these rapid societal and technological changes?

We want our communities to be confident, skilled and knowledgeable when using AI safely and ethically. So how are we bridging a new potential digital divide? And how are we preparing our communities for an AI-enhanced rather than an AI-replaced world?

New tools. Old challenges. As E. O. Wilson suggested, 'We have god-like technology, medieval institutions and palaeolithic emotions'. Humans evolve very slowly, especially compared to technology. Whilst technology and the external stimuli our brains receive may be evolving at light speed, the human brain remains much the same as it was during the industrial revolution and the birth of education systems as we know them. Cognitive science has made huge strides in understanding how we learn, but the underlying mechanism, connecting neurons through synapses, remains the same. So at least one variable is relatively constant, and the science of learning is a domain we are experts in. The future may be uncertain and infused with technology, yet humans and how they learn (until Mr Musk is allowed to insert chips into our brains) will remain very similar.

As we project our thinking forward, we know that it will be essential to adapt and continue 'learning to learn', a term coined by Professor Guy Claxton, a key academic influence on Dukes Education's Teaching and Learning Framework. Claxton's learning dispositions, which have been part of common educational discourse for over 20 years, are already foundational for our teachers and students. The way we develop these dispositions will undoubtedly evolve with the technology.

Our students can already develop their public speaking skills by being transported into a room with a VR headset that gives them feedback. Yet communication and presentation skills are the focus of that immersive experience. This, therefore, is not a new challenge although the tools at our disposal are new for teachers and learners. So, what does this look like in a learning context for classroom practitioners and learners?

GenAI's outputs are improving, but expertise is still essential: the greater the user's expertise in the domain in which GenAI is being used, the greater the impact (The Economist). Teachers and traditional sources are therefore just as important for cross-referencing and triangulating information.


GenAI will improve, especially as it moves towards specialised AIs rather than Artificial General Intelligence, as previously forecast (The Economist). The teacher no longer holds all the answers. In this context, the sage-on-the-stage pedagogical approach does not produce the desired outcome. Teachers transition to being facilitators, also termed guides-on-the-side. According to Andreas Lund, a teacher-education academic at the University of Oslo, teachers must be 'designer(s) of technology-rich learning environments, but without necessarily being an expert in digital literacies'. Teachers provide the context and guide learning when appropriate, rather than leading students through preordained knowledge milestones. This creates a flattened classroom hierarchy which challenges the more traditional classroom set-up, but the teacher is no less important. In fact, the teacher has to be far more agile, reflective and skilled, constantly seeking feedback through questioning and observation. I subscribe to the belief shared by Rose Luckin, Professor of Learner Centred Design at UCL and an authority on AI, that the technology alone will only get learners so far; it is the teacher that adds true value to the technology.

[Table 1: Perceptions of generative AI in the classroom. Survey responses from teachers and students; full figures not reproduced here.]

The second area we will explore is how we are bridging a potential AI digital divide while providing personalised learning opportunities and ensuring student safety and ethical GenAI usage.

Perception of risk

A recent poll offered some reassurance: teachers are more wary, perceiving the risks as outweighing the benefits, while students take the opposite view, and this is reflected in usage. The figures in Table 1 show a clear digital divide forming.

A recent poll of 1,000 university students published in The Guardian claimed that 92% of students were using AI in high-stakes assessments, leaving the remaining 8% at a potential disadvantage for not leveraging this technology. This highlights another digital divide forming, this time among peers rather than between teachers and students. There are other digital divides we must consider. Henry Jenkins, a leading media literacy scholar, believes that schools are responsible for narrowing the divide through participation. Tiffany Apps, an Australian academic, saw the digital divide as a lack of positive role models when using technology. To stop these divides propagating in our schools, we need to change the perception of AI through reflection and dialogue. We need to give people opportunities to use GenAI in safe spaces, often called sandboxes, so that teachers and students can play, innovate and potentially fail. We need to provide access to the safest, best-in-class AI tools, supported by iterative policy development. And we need to develop AI literacy through competency frameworks and curriculum.

A cornerstone of this journey has been identifying what we want to achieve with AI: personalisation, productivity and safety. This has led us to align our thinking with UNESCO's position that AI must not usurp human intelligence but must enhance our human capabilities. For this reason, Dukes has adopted the UNESCO Teacher and Student Frameworks to ensure progressive competence development grounded in academic research, with learners at the centre. By learning about AI, learning to use AI and learning through AI, we want to ensure that the aforementioned digital divides, of differing usage, lack of participation and too few positive AI role models, are bridged.

This is not a new challenge for teachers: we need to continue being pedagogical experts while learning to leverage new technology and deploy it innovatively to benefit the learner. Yet we want to ensure privacy and ethics are always a primary consideration. Students may be leading the race in usage, but using and applying this technology to learn still requires a discerning teacher's guidance. We are a team that loves learning, and this is an exciting learning opportunity for everyone involved.

The last word…

Every week at Dukes, we share a ‘Quote of the Week’ offered up by one of the team. We’ve collected some of our recent favourites.

“She wasn’t looking for a knight, she was looking for a sword.”

Atticus, Love Her Wild

Thanks to Hannah Boddy, Legal Counsel, Dukes Central Support

“The standard you ignore is the standard you accept.”

Jan Shadick, CEO, Haberdashers Academy Trust

Thanks to Leighton Bright, Vice Principal, Rochester Independent College

“Surround yourself with only people who are going to lift you higher.”

Oprah Winfrey

Thanks to Thea Philips, Director of Marketing and Admissions, Schools West

"Yesterday is history, tomorrow is a mystery, today is a gift; that is why we call it the present."

Attributed to Eleanor Roosevelt or to the American historian and writer Alice Morse Earle

Thanks to Jane Zinopoulos, Head of Admissions, The Pointer School

“The time for seeking global solutions is running out. We can find suitable solutions only if we act together and in agreement.”

Pope Francis, 266th pope of the Catholic Church

Thanks to Sally Cornelius, Sustainability Manager, Dukes Central Support

“An organisation’s ability to learn, and translate that learning into action rapidly, is the ultimate competitive advantage.”

Jack Welch (1935-2020), Chairman and CEO, GE

Thanks to David Fitzgerald, CEO Dukes Education, Europe

“Imagination is more important than knowledge. For knowledge is limited to all we know and understand, while imagination embraces the entire world, and all there ever will be to know and understand.”

Albert Einstein

Thanks to Fouzia Elanqaoui, Nursery Teacher, Heathside School Hampstead

“When I look back, I am so impressed again with the life-giving power of literature. If I were a young person today, trying to gain a sense of myself in the world, I would do that again by reading, just as I did when I was young.”

Maya Angelou

Thanks to Alicia Smith, Programme Management Assistant Director, InvestIN

“You do not rise to the level of your goals — you fall to the level of your systems.”

James Clear, Atomic Habits

Thanks to Bruce Campbell, House Parent, Earlscliffe

“The best investment you can make is in yourself.”

Warren Buffett

Thanks to Hitesh Chowdhry, Co-Founder InvestIN Education

Dukes Education is a family of nurseries, schools, and colleges in England, Wales, Ireland, Spain, Portugal, Greece, Czechia, Croatia, Romania and Switzerland. Our schools cater for children from 0-19, serving them from their earliest years at nursery until they leave school to go on to university.

Surrounding our schools, we also have a collection of complementary education offerings — day camps, international summer schools, and university application consultancy services. This way, we create a wraparound experience for every family that joins us.

Dukes Education

58 Buckingham Gate London SW1E 6AJ +44 (0)20 3696 5300 info@dukeseducation.com dukeseducation.com

Founder and Chairman

Aatif Hassan

Dukes Group Board of Directors

Aatif Hassan, Mike Giffin, Tim Fish, David Fitzgerald

UK Board of Directors Tim Fish, Mark Bailey, David Goodhew, Libby Nicholas, Scott Giles, Damian Quinn, Jonathan Cuff

Europe Board of Directors

David Fitzgerald, Juan Casteres, Chris Eversden, Philippe Grosskost, Liza Humphrey, Matthew Tompkins, Claire Little

Dukes Education Advisory Board

Jenny Aviss, Pam Mundy, Neil Roskilly

Insight Editor-in-Chief

Tim Fish

Insight Managing Editor

Anna Aston
