ChatGPT, Academic Integrity, and McGraw Hill

We live in an age of rapid technological change, challenging us to stay ahead of the curve. The introduction of ChatGPT in November 2022 accelerated that change dramatically.

ChatGPT, or Chat Generative Pre-Trained Transformer, is an AI chatbot created by OpenAI. It’s a large language model (LLM), trained on massive amounts of text to predict the next word of a sentence accurately. It’s a short step from that next word to potential student misuse of this writing tool.

Instructor Anna Johnson of Oregon’s Mt. Hood Community College explains it nicely.

“A student can feed a test question into ChatGPT and get back a well-written, reasonably accurate paragraph. If one student wrote their own answer to this test question, and another student submitted a ChatGPT-generated answer, their instructor would not consistently be able to tell the difference.”

Working with administrators and instructors to help them maintain academic integrity has always been a priority at McGraw Hill. And so, we took steps to better understand ChatGPT’s impact on academic integrity and gauge how educators are responding to it.

Our survey and its key findings

We launched a blind survey of 1,081 higher education professionals in February 2023 that included:

  • 656 instructors
  • 132 adjuncts
  • 260 administrators who also teach
  • 33 non-teaching administrators

We also reviewed existing research and articles in the press and conducted in-depth interviews with administrators of various school types. The top-line findings are eye-opening.

Currently, there are two camps of higher education professionals—those who report more positive emotions about ChatGPT, and those who report more negative emotions. The camps are nearly evenly split: 54% in the positive group; 46% in the negative.

Most instructors (83%) don’t yet have a policy on ChatGPT. Of the 17% who do, 11% ban its use and 6% allow it. Similarly, most universities are still determining the best approach to prevent cheating (64%) and leverage ChatGPT as a learning tool (66%). Meanwhile, 32% of administrators report allowing instructors to determine the best way to leverage ChatGPT in their courses.

What’s next for institutions?

Nearly 80% of instructors are either taking or planning to take action to prevent student cheating because of ChatGPT. Actions include:

  • Changing types of questions and assessments to test student knowledge
  • Creating more group assignments
  • Using or planning to use AI-detection tools

While only 6% of instructors currently use ChatGPT as an instructional tool or resource, more than 50% plan to incorporate AI chatbots into students’ writing process and use them to create practice questions for exams or quizzes.

How is McGraw Hill supporting institutions to maintain academic integrity?

Historically, we have taken the following steps to help institutions maintain academic integrity:

  • Provide question pools and algorithmically generated variables for the assessment utilities in our learning platforms
  • Partner with companies like Proctorio and Respondus to help provide secure assessment environments
  • Ensure that our platforms also provide a variety of assessment types that are incompatible with tools like ChatGPT

Currently, in addition to those steps, we are also:

  • Dedicating a customer success team that will work individually with instructors, helping them decide how best to address the challenges in their specific courses using our platforms
  • Continuing to investigate other solutions that will address instructor needs related to ChatGPT
  • Developing an ethical AI governance policy with the help of both internal and external resources

How will our products and services evolve to respond to tools like ChatGPT?

Everything we do is guided by this principle: use new technology responsibly while protecting against dishonesty.

Our current learning tools employ varying degrees of AI, so we have extensive experience with this type of technology. Our ALEKS platform, for example, has used AI effectively for over 25 years and has helped millions of students learn math and chemistry.

We are actively learning more about how AI tools can both be incorporated into courses to enhance the teaching and learning process and be detected and limited where they can cause harm.

What’s more, we are in active conversations with our customers, authors, contributors, and industry thought leaders to shape our go-forward strategy. A key aspect of our approach is creating an ethical AI governance policy that will provide guidance for future developments.