McGraw Hill Higher Education, Academic Integrity, and Large Language Models (LLM)

We live in an age of rapid technological change, challenging us to stay ahead of the curve. The release of generative artificial intelligence (AI) tools, free to the public in 2022, accelerated that curve dramatically.

Large language models (LLMs) such as OpenAI's ChatGPT are trained on massive amounts of language data and provide human-like responses to natural language questions. It's a short step from asking a tool a question to a student misusing these writing tools.

Instructor Anna Johnson of Oregon’s Mt. Hood Community College explains it nicely.

“A student can feed a test question into ChatGPT and get back a well-written, reasonably accurate paragraph. If one student wrote their own answer to this test question, and another student submitted a ChatGPT-generated answer, their instructor would not consistently be able to tell the difference.”

Working with administrators and instructors to understand the challenges they face in the classroom, and helping them maintain academic integrity, has always been a priority at McGraw Hill. We took steps to better understand the impact of these natural language AI chatbots on academic integrity and to gauge how educators are responding.

Our survey and its key findings

We launched a blind survey of 1,081 higher education professionals in February 2023 that included:

  • 656 instructors
  • 132 adjuncts
  • 260 professionals who serve as both administrators and instructors
  • 33 non-teaching administrators

We also reviewed existing research and conducted in-depth interviews with administrators of various school types. The top-line findings are eye-opening.

Currently, there are two camps of higher education professionals: those who report more positive emotions about AI chatbots and those who report more negative emotions. The camps are roughly evenly split: 54% in the positive group; 46% in the negative.

Most instructors (83%) don’t yet have a policy on AI chatbots. Of the 17% who do, 11% ban their use and 6% allow it. Similarly, most universities are still determining the best approach to prevent cheating (64%) and to leverage AI chatbots as a learning tool (66%). Meanwhile, 32% of administrators report leaving it to instructors to determine how best to leverage AI chatbots in their courses.

What’s next for institutions?

Nearly 80% of instructors are either taking action or planning to take action to prevent student cheating with AI chatbots. Actions include:

  • Changing types of questions and assessments to test student knowledge
  • Creating more group assignments
  • Using or planning to use AI-detection tools

While only 6% of instructors currently leverage AI chatbots as an instructional tool or resource, more than 50% plan to incorporate AI chatbots into students’ writing process or use them to create practice questions for exams and quizzes.

How is McGraw Hill supporting institutions to maintain academic integrity?

We will continue to follow our existing practices to help institutions maintain academic integrity:

  • Provide question pools and algorithmically generated variables for the assessment utilities in our learning platforms
  • Partner with companies like Proctorio and Respondus to help provide secure assessment environments
  • Ensure that our platforms also provide a variety of assessment types that are incompatible with tools like AI chatbots

Currently, in addition to those steps, we are also:

  • Dedicating a team of customer success professionals who will work individually with instructors to decide how best to address these challenges in their specific courses using our platforms
  • Continuing to investigate other solutions that will address instructor needs related to AI chatbots

How will McGraw Hill Higher Education products and services evolve to respond to tools like AI chatbots?

Everything we do follows our guiding principles: use new technology responsibly while protecting against academic dishonesty.

The learning tools we currently offer employ varying degrees of AI, so we have extensive experience with this type of technology. Our ALEKS platform, for example, has employed AI effectively for over 25 years and has helped millions of students learn math and chemistry.

We are actively learning more about how AI tools can both be incorporated into courses to enhance the teaching and learning process and be detected and limited where they can cause harm.

What’s more, we are in active conversations with our customers, authors, contributors, and industry thought leaders to shape our go-forward strategy. A key aspect of our approach is creating an ethical AI governance policy that will provide guidance for future developments.