Responsible AI in Education: How we ensure safety and effective use, while adding value to our products

McGraw Hill’s Chief Product Officer, Lori Anderson, explains the company’s approach to building AI solutions that students and educators can trust


Tags: Personalized Learning, Article, Artificial Intelligence (AI), Blog, Corporate

Ask educators what they think about artificial intelligence (AI) and its use in education and you’ll get a wide range of reactions. We’ve asked that question two years in a row for our Global Education Insights Report.

While they believe this rapidly evolving technology can have a positive impact on student learning, educators want assurance that the AI tools they use are trustworthy and will help them achieve their most important goals.

We believe AI can turbocharge personalized learning and save educators significant time. But when we apply it to something as critically important as learning, or embed it into technology used by young children, we must ensure it works as intended and that students aren’t put at risk.

How do we do that? In this interview, Lori Anderson, McGraw Hill’s Chief Product Officer, explains how we build AI responsibly and how we have ensured the safety and trustworthiness of our latest GenAI tools.

Q: What are the principles that guide our responsible approach to AI at McGraw Hill? 

“The principle that guides us is student learning outcomes. We want to provide a personalized experience in an environment without surprises that we wouldn’t want introduced in the classroom.” Watch to learn more:

Q: Can you explain the concept of guardrails for AI and how we apply them to the learning experiences we create? 

“Guardrails ensure the safety of our customers, students and instructors as they engage with AI. For example, in one of our GenAI tools called Writing Assistant, students will enter a prompt. If a student enters something that might be alarming in an education environment, we’ll send a message to the instructor.” Watch to learn more:

Q: Tell us about one of McGraw Hill’s new AI applications and some of the decisions we made to test its capabilities and ensure we built it responsibly.

“In our GenAI solution called Writing Assistant, we started with user research to ensure we understood what educators and students would be willing to use.” Watch to learn more:

Q: Why is responsible AI important for our business? 

“Approaching AI responsibly, ethically and with safety in mind for our customers is not a choice. It’s what we will always do.” Watch to learn more:

Want to read more about how we’re applying AI in ways that improve student learning? Learn about how our new AI Reader tool is increasing student engagement here: How McGraw Hill built, piloted and scaled its first GenAI solution—then showed that it works