There has been quite a bit of press recently about content generated by artificial intelligence, first with graphics through programs such as DALL-E and Midjourney, and now with written content through ChatGPT.
ChatGPT is an artificial intelligence chatbot that lets users enter questions and provides answers that read as if a human created them. Those of us in education are concerned that students will use these tools to plagiarize and cheat their way through classes. For example, students can easily feed a discussion prompt into ChatGPT and receive an acceptable answer in seconds, ask it to write poetry, or even ask it to generate computer code.
While these examples seem like smart shortcuts to avoid busywork, what we should be more concerned about is that this technology will allow students to stop making sense of the world in the context of their personal, lived experiences.
For example, if I ask ChatGPT to describe my extroverted personality, it will tell me the types of social activities and events I might enjoy and general expectations regarding extroverts. But if I want to explore my extroverted personality in the context of my life, the relationships I have with others, my job requirements and my lived experiences, ChatGPT’s answer would not capture the introspection and meaning required for developing a unique path forward to integrate my personality into my daily activities.
The sensemaking process is how we take in information, process it and use it to understand and generate a worldview. It works in four stages: we notice things in our environment through observation; we create interpretations of what we notice; we author our own unique response by integrating it with previously held information; and we enact what we feel are appropriate behaviors.
This is a key way that humans learn, even though we may not realize it. If we short-circuit this process by feeding the AI our questions, we shortchange the learning process. While we may notice something (or are forced to notice it because of an assignment), ChatGPT produces the answer. The AI does not compare alternative interpretations or explanations, and it certainly is not authoring a unique response grounded in the user's experience. We are no longer authors of our own behavior; instead, we are merely reacting to an AI's response to the world around us.
One approach to address this challenge is for educators to stay one step ahead of AI: creating ever more challenging questions that require analysis, limiting the use of computers in class and during tests, and incorporating software that can detect AI-generated answers. But most educators know this is like playing whack-a-mole. The software we use to detect cheating (like Turnitin) is already outmatched by the techniques students employ, university IT departments are woefully underfunded, and teachers are overextended with larger and larger class sizes.
Some advocates echo arguments from a generation ago, when programmable graphing calculators freed their human counterparts from manual number-crunching, claiming that this AI is merely another tool to liberate the masses from busywork. But opponents argue that this is not just another tool; it is a disruptive force that will ripple through every level of our educational system and corrode the learning process. Educators must find a middle ground, incorporating the technology while underscoring the value and process of knowledge acquisition. This approach embodies the fundamental belief about education espoused by Plato: that education is not just memorizing dates and facts, but a lifelong practice of learning and a requirement for social justice.
Unless we want to become drones that merely recite what AI creates for us, educators must stress the importance of the sensemaking process and embrace the technology as a tool we can use to sharpen the learning process.
Instead of embarking on a futile fight to ban technologies like ChatGPT, professors can help their students better understand why it is so important to make sense of what they are learning. They can do this by integrating the technology into their teaching. For example, educators can challenge students to craft accurate questions for the AI (noticing), then encourage the students to find alternative explanations for the AI-generated answers (interpreting). To craft a relevant response, students could interact with the AI to include personal experiences (authoring) and then adopt or discard recommended behaviors (enactment).
Too many individuals see learning as just another task and a degree as just something they must attain, forgetting that learning creates connections between our neurons and allows us to interpret what we observe in multiple ways, driving new connections and authoring innovation for the world around us. If learners are no longer interested in seeking knowledge and merely employ ChatGPT to provide answers, we will literally stop making sense. A better way forward is to use the technology to galvanize and personalize the sensemaking process.
Gretchen Vogelgesang Lester, Ph.D., is an associate professor at San Jose State University and public voices fellow of The OpEd Project.
The opinions in this commentary are those of the author.