I’m fascinated by the recent explosion of discussion about AI text generation in education. In November 2022, OpenAI launched a tool called ChatGPT. You enter a prompt into the tool and it returns a response written in what academia considers proper English grammar. There are now flurries of debate about banning these tools, moving all writing assignments into class time, and the “death” of academic writing skills. At the same time, I’ve seen a number of resources for instructors, a surprising number of which focus on using the tool to generate exam questions, lesson plans, and activity ideas. Aside from the hypocrisy of “let’s ban this for students” alongside “let’s use this to help teachers create materials”, I’m far more interested in how we can all learn to leverage these tools to support thinking and learning.
Already, many word processors will suggest the next few words I might want to type. Sometimes the suggestion is exactly what I was going to type, and I accept it. Sometimes I was going to say it differently, but the suggestion is as good as or better than what I had in mind, so I accept it. And sometimes it is completely wrong, and I just keep typing. I think this is not a bad starting place for this discussion. Rather than making me uncritical of what is being written, these suggestions actually help me refine my thinking. I have to ask myself questions about my intentions and goals, and then make decisions.
This is similar to how Fyfe (2022) used an early version of ChatGPT with students on a final essay. Fyfe had students get content from the text generator and then use it within their final essay. The AI did not write the entire essay, but sections of AI-generated text were incorporated into the final version. Students then also submitted a “revealed” version showing which parts were AI-generated, along with their reflections on the experience. Using the AI was fundamental to shaping their perspectives on the ethics and practice of using AI in writing. And by considering how to integrate the AI text, students had to make critical decisions about their own writing.
I recently created a video for my course, discussing and demoing how a student could use these tools to help them explore a topic that is new to them. The goal is not to replicate the AI’s output in their assignment, but to generate ideas they can explore further, think critically about, and then decide whether those ideas fit their assignment or not. I’m really excited about this idea. As I mention in the video, our own biases can prevent us from seeing different ideas and perspectives. Collaborating with diverse groups can help alleviate this, and that is also part of the course in question, but we can’t bring a diverse group together if blind spots from our own biases prevent us from even seeing why someone else might be interested. AI isn’t going to solve our biases. It is subject to the same societal biases, because of the sources it draws on to generate its responses to our prompts. But if we can use AI to begin thinking critically about a topic and to expand our view just a little, then we might be better able to recognize what other topics we need to research or who else we need to engage.
Note: There are already lots of discussions about the equity implications of a tool like ChatGPT. Even the fact that it outputs “proper” English grammar has equity implications, and these need to be part of the discussion. But my hope is that we will have that discussion instead of simply banning the tools from all use and preventing students from developing skills they will need in the coming future.