AI Policy Development

In this Inside Story, Oliver Wharton (Assistant Headteacher) considers changing technology and the need to create an AI Policy.
Schools often find themselves in the position of having to respond to changing landscapes. These can be social, educational or, as this Inside Story is going to discuss, technological.
Artificial intelligence (AI) as a concept has been around for some time, but it is only relatively recently that computing power has developed to a stage where AI-driven software has become freely available for wider society to use. As with any emergent technology, there are positive and negative aspects to the way in which the technology is used, many of which only become apparent when the technology is implemented ‘in the real world’. This is certainly the case with AI. As a society we have to question how we avoid repeating the mistakes that were made with the implementation of widespread social media access for adults and young people.
For education, AI-driven software offers the promise of a stronger research tool, the ability for students and teachers to find increasingly personalised and appropriate learning activities and feedback, and the automation of some administrative tasks that take time away from supporting students. This must be balanced against the risks of such tools being used to plagiarise or misrepresent work, produce harmful content, or provide inaccurate or misleading information. So, the question for PHGS is: how do we get the benefits and avoid the pitfalls of AI? Addressing this question whilst the technology is emerging and evolving (with more integration into everyday software, such as PowerPoint) is tricky. This is further complicated when we consider that students are trialling and testing the technology at the same time, in many cases with more confidence than the adults!
For a school, one of the first steps is to decide what the approach will be. For Prince Henry’s, our school values can guide us here; we want our students to be respectful, flourish in what they do and achieve as highly as they are able. As an early adopter of a school-wide iPad scheme, access to innovative digital technology has been part of our school’s approach for many years. These values and experiences have guided our approach and helped with the development of our AI policy. We want our students and our staff to be able to use AI confidently and competently, ensuring that they are aware of when it is (and is not) appropriate to use, what the risks are in using it and how it can be used with honesty and integrity.
In creating an AI Policy, consultation has taken place with staff and students in school, ensuring that we are left with a policy that feels relevant and workable in our school community. Alongside AI policy creation, staff have had training on how to use AI approaches in their teaching, not just in terms of lesson resource creation, but also on activities and approaches which encourage students to engage with AI in a controlled and monitored manner. Staff and students have been trained on what appropriate use of AI looks like through staff meetings and assemblies. Further to this, exam year groups have had extra training sessions to ensure that they are clear on the exam board requirements to declare AI use where necessary, helping to avoid them falling foul of the rules.
Returning to our school values and how these apply to this policy development, two values in particular feel relevant:
Honesty, which falls under our Respect value – we would like to think we are creating an environment where our students and staff can use AI tools with honesty and in an appropriate manner. As new applications and tools come out, they can be assessed against this value as to whether their use is appropriate. One conversation with a parent this year focussed on whether the integrated AI content creator in PowerPoint was appropriate to use for a homework assignment – as the parent correctly raised, it didn’t feel like an honest approach to use this tool to complete the homework, and the child was advised not to use it. As part of our student training we have shared a two-tier system where AI use will be ‘green-lighted’ or ‘red-lighted’ by staff to give students guidance on whether they can use AI for learning tasks.
Independence, under our Flourish value – AI tools are already being used by students to generate revision activities and tools, such as flashcards, and we would not want an approach that would drive access to this valuable resource underground. Being open about the opportunities AI tools can offer, and making sure students are aware that appropriate and relevant use of AI is not only permitted but welcomed, allows our students to work independently and keep pushing themselves forward in terms of their academic development. One student in my Year 11 class was showing me a personalised set of quiz questions that they were using for revision, created with a prompt that used the examination specification and information from their last topic test scores. This felt like a positive use of the technology with a clear benefit to the student.
As with any changing and evolving area in society, we will need to keep our approach to AI under review to ensure that we are able to capitalise on the benefits offered and try to head off any potential risks to our school community. This is partly achieved through annual reviews of our policy to ensure it remains fit for purpose and in line with our school values. Being open to feedback from our wider school community, including reviews from members of the school’s Governing Body, and utilising our student Digital Leaders are other ways we are able to get a feeling for how the policy is being enacted in the active school environment.
Over the next few years it is clear that AI technology is set to develop with the aim of further supporting teachers and students. Exam boards are already trialling the use of AI to help mark GCSE and A-Level exam papers; these trials have shown striking accuracy when compared with human markers. Other AI trials are seeking to support students and staff lower down the school with concise summaries of performance, incorporating clear targets for improvement, or helping to develop differentiated lesson resources more swiftly. An example you can try at home is to take a detailed passage of text from a news article, or an A-Level textbook, and ask AI to explain it more simply to a 12-year-old – you might be surprised by the level of the response. Tasks like this could be the difference between a child ‘getting it’ or not. As such, the possibilities for learning appear endless, but they must always be considered against the ethical backdrop.
I hope this edition of Inside Story has provided an interesting insight into the policy development process, along with some of the background and context of our AI policy. At Prince Henry’s we are taking an open-minded approach, aiming both to educate our students and staff in how to use this emergent technology responsibly and to open up its potential advantages to all who are using it, helping to ensure that our students flourish and achieve in the age of AI.