Article written by:
Tony Gilbert, Sales Director NZ
There is a lot (and I mean a lot!) of noise out there about the potential impacts of ChatGPT in schools and the wider community, ranging from the terrified to the completely and utterly uninterested. There are visions of AI-driven robots and programs doing your work faster and better… (and possibly also taking over the world?).
But what is the reality?
The real answer is… We don’t know. Do a simple online search and (perhaps unsurprisingly) you will see a massive variety of ideas and predictions. But differences of opinion (and scaremongering) aside, there is no doubt that AI and the ability to synthesise original content does exist, and it will probably only become more prevalent and more effective in its outcomes.
But what does this mean in an educational environment?
If you are old (like me) and remember the days of kids copying each other’s work on pen and paper, you will also remember how the rise of the computer and then the internet took this to the next level. ‘Copy and Paste’ seemed like the worst enemy of the teacher. But if you do remember this, you will also remember how easy it was to tell that a student had not written it. Some rookies wouldn’t even bother changing the font, and even if they did, many would use words that, when challenged, they clearly didn’t know the meaning of.
This evolution of plagiarism, drawing not only on published content but on the work of other students, was curtailed somewhat by programs like Turnitin and Duplichecker, helping to ensure that the work was original-ish. But what do you do now when, after a few keyboard strokes and some random word association, you can get a haiku written about a bicycle and a fish?!
Here are a few thoughts on addressing the problems of AI-generated content in a school:
Ask Students to Explain Their Work
Even if you consider the multimodal capabilities of GPT-4 (it can generate content from both image and text prompts), it takes very little to ask a student to explain what they have written and/or presented. Then you can start to work out whether the result came from their own brain or one with distinctly more AI in it.
Relevant (and when necessary, pointed) questions are part of the arsenal at a skilled teacher’s disposal and have always been used to pull apart a deeper understanding of what has been submitted. While this process can be time-consuming, it provides a clear understanding of what the student knows, what they may need to work on, and what has been generated somewhere other than inside their own head.
It is worth considering whether students should be encouraged to find alternative ways to show their understanding of a topic, away from the all-too-familiar safety of the internet. Where is the harm in closing the doors, getting a pen and paper (or a connectivity-less computer), posing a formative assessment question, and seeing what comes out of their individual thought process? This simple exercise would help ensure that students are not overly reliant on AI-generated content. It would also ensure that different learning (and thinking) styles are accommodated as part of the assessment. While this perhaps should not be the default approach, it can be used as a benchmark against the summative result.
Assessment Type – Find a way of assessing the same things but in a different way
Great educators have been doing this for a long time. From group responses that are less threatening in a formative assessment context, to staggering which kinds of assessment are undertaken at different times, changing the actual ways we assess provides a context that sits outside the AI system’s realm of generated, synthesised content. A professional inter-departmental conversation may produce some interesting alternatives here.
Signpost the Assessment
Is there any good reason you cannot stage some assessments and signpost tasks along the way? It is all too easy to ask a single question and make that the whole basis for a summative assessment.
Staging and signposting an assessment gives you the ability to change tack and ensure that the thinking behind responses is more original. The specific formulation of the questions themselves, as well as the context of the assessment, can help with this.
For example, some parts of the assessment may be done at home (and are therefore at risk of being generated by ChatGPT), but then in class, students are asked to take that work and write an opposing view to what they initially wrote at home. Trust me, you will be able to tell pretty quickly who understood the initial work because it was their own… and who did not.
Relate things to the individual – the types of questions you ask
I am pretty sure that ChatGPT and AI haven’t yet got to the stage of delving into individuals’ personal lives, beyond the parts that have been digitally published (Facebook, Instagram, blogs etc.).
By relating a question to the individual, such as “use an example from your childhood that relates to the theme you have described”, you can require your students to draw a parallel to something that calls on their own unique, and often subjective, experience. Sure, they could type that experience in and ask ChatGPT to draw the parallel for them, but that feels like a lot of work, and chances are they would have needed some idea of the relevance in the first place.
Another approach could be to make part of the assessment process another assessment. This may be as simple as creating a closed environment (away from internet land) and then getting students to bullet point key parts of what they had previously submitted. A well-formed submission that has come from the student’s own thinking should have key parts that are easy to recall. Of course, one must be careful not to rush to assumptions here and be wary of the outliers. As we know from experience, some students may not find it easy to recall key parts of previous work, and their work may be genuine despite not appearing to be so.
Vernacular and Local Nuance
I found, after playing around with an early version of ChatGPT, that if I asked it to write a rhyming poem with our CEO as the subject, it inevitably came up with the same version every time, even when prompted to emphasise certain parts or omit others.
Not only did the structure become quite familiar, but it also lacked the nuance that comes with original work, born of an original context and an original mind.
For example, some of the words and context that would have made sense to a New Zealand audience (think ‘legend’ or ‘mate’) would have seemed unusual to someone reading it from the United States and vice versa. The presence (or lack of presence) of this unique vernacular can be a big giveaway.
Try it out Yourself
Put yourself in the shoes of a student who is using AI to do the work for them – actually use ChatGPT to come up with the answer. Read it through a few times (and maybe even store it somewhere for reference) and it will likely ring a bell when the assessments come back in.
As an IT integrator, I can tell you there are several tools and options available to restrict the use of AI engines in a school environment. Filters, blacklisting, and monitoring tools are all at a school’s disposal should it be concerned about the use of AI. It is also worth considering the physical context of your assessments. For example, some schools have moved away from the lab to a pure BYOD environment, which makes sense for a lot of reasons. However, in a lab environment, or even with school-issued, domain-bound and controlled computers, the ability to restrict access and monitor is greatly enhanced.
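To give a feel for what domain blacklisting actually does, here is a deliberately simplified sketch of the hostname-matching rule at the heart of most web filters. The domain names are examples only, and real filtering products (proxies, DNS filters) layer categories, logging, and HTTPS handling on top of this:

```python
# Minimal sketch of blocklist matching: a hostname is blocked if it equals,
# or is a subdomain of, any entry on the deny list. Example entries only.

BLOCKED_DOMAINS = {"chat.openai.com", "openai.com"}

def is_blocked(hostname: str, blocklist=BLOCKED_DOMAINS) -> bool:
    """Return True if hostname equals, or is a subdomain of, a blocked domain."""
    host = hostname.lower().rstrip(".")  # normalise case and trailing dot
    return any(host == d or host.endswith("." + d) for d in blocklist)

print(is_blocked("chat.openai.com"))    # True: exact match
print(is_blocked("api.openai.com"))     # True: subdomain of a blocked domain
print(is_blocked("example.school.nz"))  # False: not on the list
```

The subdomain check matters in practice: blocking only the exact hostname is trivially bypassed by an alternative subdomain, which is why commercial filters match whole domains (and usually whole categories) rather than single addresses.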
The good stuff
There are also many good things about AI: from generating questions and units of work that draw on resources you might not be aware of, to simply reframing concepts (try typing in ‘explain the concept of “x” to an eleven-year-old’). Ultimately this needs to be combined with your pedagogical approach to what learning and assessment mean. Does it matter if AI is used to generate a response, if this helps the student learn? What emphasis do you place on the synthesis of information, and how feasible is this given the age and stage of your students?
From an educator’s perspective, if you pare it all back, very little has changed from the fundamental concept of knowing and engaging with your students well.
It is actually very easy to notice when grades, quality, and even the timeliness of submissions suddenly take a dramatic shift.
Teachers have been doing this for years, particularly with our pastoral hats on, where certain signals or changes in behaviour can indicate that something other than what is initially presented is going on.
Thankfully, AI is not a silver bullet for students to circumvent assessments. With the right approach, I believe it is an opportunity for us all to examine how we assess, and provide a sharp reminder that assessment is a tool that should progress learning, not just a test of what has been done.