I have a confession. In the second grade, I cheated on a spelling test. Our desks had built-in cubbies under the tabletop. So, when the test began, I pulled out my notebook with the words written down from our earlier review. I kept this secret stashed between my lap and the cubby, then copied away. This wasn’t premeditated; it sort of evolved organically in the moment. Perhaps I thought I was being resourceful? “Why isn’t everyone else doing this? It makes it so much easier!” I recall thinking…

Not until a few hours later did I process that what I had done was dishonest. I was too scared to ever admit it to anyone back then. This is actually my first time admitting to this academic violation… So, now that I have that off my chest, let me level with you: despite my cheating, I don’t believe it truly inhibited my ability to learn what I needed to know.

Yes, my action that day in Mrs. Bareford’s class was a violation of academic integrity. But was the result a long-term detriment to my intellectual development? What happens nowadays when any of us misspells a word while writing on a computer? The software autocorrects it or suggests an alternative! In other words, technology has made one’s spelling ability far less of a barrier to expressing one’s ideas.

This brings me to a point about Large Language Models (LLMs), a form of generative AI. Tools like ChatGPT, Bing Chat, and Google’s Bard now occupy a large swath of the conversational landscape around academic integrity and whether they belong in the classroom. The basic question, or some variation of it, “Do these tools count as cheating?” is far too reductive. Sure, “yes” or “no” could be the answer depending on context, objectives, and policy implementation. But this is not as simple as my contraband notes for a spelling quiz. And just as autocorrect makes personal spelling accuracy less vital for expressing ideas (and relying on it is not considered a professional faux pas), LLMs will soon occupy a similar, yet more significant, role.

With this in mind, this technology should be leveraged right now, in your classroom, to improve your students’ abilities in your subject matter.

We won’t stop this wave. So grab a surfboard!

I’m certain there are a variety of perspectives on LLM usage among those reading this. Whatever your current stance, there is no holding back the tidal wave of this new technology’s use. The concerns over using LLMs are real. However, I propose that it is the proponents of LLMs who are most likely to champion policies and rhetoric that encourage healthy, intellectually stimulating use of these tools in the classroom.

Since students are already using them in creative ways beyond copy-paste-rinse-repeat, let’s get creative alongside them! The more faculty and instructors encourage techniques like prompt engineering and problem formulation with LLMs within their disciplines, the more students will see the genuine value of these tools and be excited to explore their depths. Ignoring the technology will not make it go away, and waiting on the administration to implement broader policies that may or may not have practical value for your class is reactionary.

So instead, be proactive!

You know your content best and can determine how AI and LLMs can play an advantageous role in your classroom. To understand how they fit your discipline, start using them! This lets you set the boundaries for each lesson, activity, or assessment. For instance, if you are focused on creative, original writing from students, there are ways to design and track assignments so that the work remains their own. However, rather than simply barring the technology or relying on detection software, I suggest leading students through engaging activities with it. Some examples include:

  • Demonstrate how they can analyze multiple angles of a prompt by discussing it with an AI persona you create (a short sketch of this activity follows the list).
  • Workshop how the LLM can dial in a thesis after multiple variations have been ideated. (Before this technology was available, would you have encouraged them to do this with a peer, or with you, in person or over Zoom? If so, why not encourage them to do it with a tool that is always available and can ideate so rapidly?)
  • Practice together how AI can make their posts in online environments friendlier to classmates or make their key points pack more of a punch! LLMs allow students to edit more precisely for tone and word choice, a lifelong skill to refine and develop.
  • You can also emphasize that while LLMs are useful, they are not always accurate. As their faculty member, you have their trust, so teach them to properly research, cite, and verify what AI creates for them.
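
For the curious (or for colleagues in programming-adjacent courses), here is a minimal sketch of what that first persona activity might look like outside the chat window. It assumes the OpenAI Python SDK and uses a made-up “skeptical historian” persona and sample thesis purely for illustration; the same idea works just as well typed directly into ChatGPT or a similar tool.

```python
# A minimal sketch of the "AI persona" activity, assuming the OpenAI Python SDK
# (pip install openai) with an API key set in the OPENAI_API_KEY environment variable.
# The persona, model name, and student thesis below are illustrative placeholders only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "persona" is simply a system message that constrains how the model responds.
persona = (
    "You are a skeptical historian. Challenge the student's claim from several "
    "angles, ask one probing question at a time, and never write the essay for them."
)

# A sample thesis a student might bring to the conversation.
student_thesis = "My thesis: the printing press was the main cause of the Reformation."

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any chat-capable model could be substituted here
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": student_thesis},
    ],
)

print(response.choices[0].message.content)
```

The design point is the system message: a well-defined persona plus a clear constraint (“ask, don’t answer”) turns the tool into a discussion partner rather than a ghostwriter.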

If you can model how to engage with these tools, you are teaching them something that is proving to be a necessary lifelong skill in their future disciplines. In academia, you often wear many hats: instructor, researcher, advisor, administrator, supervisor, just to name a few. And many students reach out to you organically for advice and assistance in their academic journeys. Though I’ve stated it can be helpful, I won’t ever encourage you to fully substitute an LLM for the value of personal interaction with your students. Still, how nice would it be to have a built-in, 24-hour tutor available to your students? Or to give them access to a competent editor at 2:00 a.m.? Perhaps a partner to play devil’s advocate against their thesis while you’re on vacation? LLMs can do all of these things and more. So rather than discouraging their use or ignoring the 13,000-pound elephant in the classroom, embrace them in your lessons and activities.

If you’re intrigued but unsure how to begin, reach out to the TIPS and Instructional Design team at Global Campus. We are working with these technologies daily and constantly learning about their functions and features. Heck, we’ve even podcasted about it already! We’d be happy to work with you as you lead your students into this new, unknown, and exciting frontier.

Ironically, no LLMs were used in the creation of this article. 🙂