
AI Nutrition Facts


Clinical Reasoning AI Cases

Description

Clinical Reasoning’s Patient AI and Feedback Reporting lets learners practice history taking with realistic-sounding patients through either spoken or text conversation. A running transcript is provided in both modes. While taking the patient history, the learner may order physical exams, labs, imaging, and procedures, with results returned as text. These actions and inputs, including the learner’s selected Working Differential and Final Diagnosis along with their rationale and confidence level, are collected to generate a Feedback Report that provides a comprehensive clinical skills assessment. At present, soft skills are not evaluated. A copy of the Feedback Report may be downloaded, and past cases are saved for later review if the learner desires.

Privacy Ladder Level

2

Feature is Optional

Yes

Model Type

Generative

Base Model

OpenAI – GPT-4o & o1 (subject to change)

Trust Ingredients

Base Model Trained with Customer Data

The app is not trained on any user data or PII.

No

Customer Data is Shared with Model Vendor

The app runs on a private instance of the base model that does not send data back to the model vendor.

No

Training Data Anonymized

N/A

Data Deletion

McGraw Hill’s engineering team will keep a secure record of all user interactions to monitor performance. Input and output data is discarded after four years.

Yes

Human in the Loop

While there are a number of safety and accuracy guardrails in place, learners receive instant output from the AI model.

No

Data Retention

4 Years

Compliance


Logging & Auditing

McGraw Hill will systematically review records of model input/output to audit performance. Model upgrades will be made as needed.

Yes

Guardrails

The app employs input and output guardrails and does not engage in dialogue when users submit biased or harmful language.

Yes

Input/Output Consistency

Yes

Other Resources

Ask your Account Manager for more information about Clinical Reasoning AI Cases.

Clinical Reasoning AI Cases Model Overview

Clinical Reasoning’s AI Cases use a large language model (LLM) for interactions with the AI patient and for feedback reporting. The cases themselves are human-authored and reviewed. The LLM is a private instance of OpenAI’s GPT-4o and o1 models provided via Microsoft Azure AI. The model is given context for the specified product (and only that product) using a Retrieval-Augmented Generation (RAG) pattern, which indexes McGraw Hill’s content.
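For readers who want a concrete picture of the RAG pattern, the sketch below shows its general shape in Python: retrieve relevant product content, prepend it to the prompt, and call a privately deployed Azure OpenAI model. This is an illustration rather than McGraw Hill’s implementation; the case snippets, the keyword-based retrieve function, the environment variable names, and the gpt-4o deployment name are all assumptions made for the example (a production system would query a real content index).

    import os
    from openai import AzureOpenAI  # pip install openai

    # Hypothetical product content; a real deployment would query an actual
    # search or vector index built over McGraw Hill's licensed material.
    CASE_SNIPPETS = [
        "Case 12: 54-year-old with acute chest pain radiating to the left arm.",
        "Case 12 labs: troponin I 0.9 ng/mL (elevated); ECG shows ST elevation.",
        "Case 07: 8-year-old with fever, sore throat, and tonsillar exudate.",
    ]

    def retrieve(query: str, k: int = 2) -> list[str]:
        """Toy keyword scoring standing in for a real RAG index lookup."""
        words = query.lower().split()
        return sorted(CASE_SNIPPETS,
                      key=lambda s: -sum(w in s.lower() for w in words))[:k]

    # A private Azure OpenAI deployment keeps traffic inside the customer's
    # tenant; user data is not sent back to the model vendor for training.
    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-06-01",
    )

    question = "What do the labs show for the chest pain patient?"
    context = "\n".join(retrieve(question))

    response = client.chat.completions.create(
        model="gpt-4o",  # the Azure *deployment* name, assumed here
        messages=[
            {"role": "system",
             "content": "Answer only from the provided case content.\n" + context},
            {"role": "user", "content": question},
        ],
    )
    print(response.choices[0].message.content)

Grounding each request in retrieved product content is what keeps responses scoped to the specified product rather than the model’s general knowledge.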


Subject Matter Expert Driven Approach

The purpose of Clinical Reasoning is to help develop critical thinking skills in patient evaluation and in applying medical knowledge within a pragmatic framework. The AI Cases feature is intended to work in conjunction with the diagnostic process within Clinical Reasoning. To support our goal of giving students opportunities for deliberate practice in building clinical skills, AI Cases leverages the long-standing expertise of clinical educators. The AI patient’s performance is driven by exemplars provided by our leading subject matter experts (SMEs) in medical education, with ongoing case refinement, updates, and additions. The feedback report includes an evaluation comparing the learner’s findings with those of a seasoned clinician, facilitated by a carefully designed matrix developed in collaboration with SMEs, and is designed to support the student’s self-improvement.


Data Privacy and Security

McGraw Hill takes matters of security and bias very seriously. We have monitoring in place to ensure Clinical Reasoning’s AI Cases meets our company standards for educational use. AI Cases is secure by design, with guardrails in place to minimize bias, inaccuracies, and inappropriate responses, and we are committed to the ongoing enhancement of these safety measures.

  • Secure Treatment of Limited PII: Clinical Reasoning’s AI Cases access the user’s conversations only to the extent needed for the AI patient’s performance, and a transcript is included in the user’s feedback report for convenience. Conversations are not used for model training.
  • No Data Sharing: AI Cases do not send data back to the model vendor for model training purposes.
  • Secure Data Handling: Our secure system records all model inputs (e.g., highlighted text, actions taken) and outputs for product improvement and model evaluations. Data will be retained for up to 4 years to ensure students are able to review their own past casework.
  • Bias, Accuracy, and Appropriateness Guardrails: Underlying each “action” (e.g., generating realistic patient responses) is a proprietary, lengthy prompt designed to minimize potential biases, inaccuracies, and inappropriate responses; a simplified illustration follows this list. While McGraw Hill is dedicated to offering safest-in-class AI solutions for education, AI may occasionally produce biased or inaccurate information, and users should apply critical thinking when evaluating model output.
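To make the input/output guardrail idea concrete, here is a deliberately simplified sketch in Python. Everything in it is hypothetical: the term list, refusal message, and function names are invented for illustration, and the production guardrails in AI Cases are proprietary, prompt-driven, and far more extensive.

    # Hypothetical guardrail layer; the actual AI Cases guardrails are
    # proprietary, prompt-driven, and far more extensive than this sketch.
    BLOCKED_TERMS = {"example slur", "harmful request"}  # placeholder list

    REFUSAL = "Let's keep the conversation focused on the patient encounter."

    def guard_input(user_text: str) -> str | None:
        """Return a refusal if the learner's input trips a guardrail."""
        lowered = user_text.lower()
        if any(term in lowered for term in BLOCKED_TERMS):
            return REFUSAL
        return None

    def guard_output(model_text: str) -> str:
        """Screen model output before it is shown to the learner."""
        lowered = model_text.lower()
        if any(term in lowered for term in BLOCKED_TERMS):
            return REFUSAL
        return model_text

    def respond(user_text: str, call_model) -> str:
        """Wrap any model call (a function from prompt string to
        completion string) in both guardrails."""
        refusal = guard_input(user_text)
        if refusal is not None:
            return refusal  # do not engage; the model is never called
        return guard_output(call_model(user_text))

The key design point is that both directions are screened: a flagged input never reaches the model, and a flagged output never reaches the learner.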


Continuous Improvement

McGraw Hill’s team is dedicated to the continuous improvement of our products. To better serve learners, our team systematically reviews deidentified model data and reserves the right to change the underlying model as needed. We work closely with SMEs to identify opportunities to enhance the experience, including offering more cases and greater patient variety. This ongoing process helps ensure that AI Cases remains a reliable and effective tool for education.


Institutional Choice

While AI technologies present many exciting opportunities within education, we want to ensure that the choice to use these technologies remains firmly in the hands of institutions. Institutions can request to disable the AI Cases functionality for Clinical Reasoning at any time, for any length of time. To make such a request before a trial or subscription begins, please inform your Account Manager that disabling AI features is an institutional policy requirement and that you wish to disable the AI Cases functionality. If your trial or subscription has already begun, please reach out to your Account Manager or visit our Clinical Reasoning Support Page to contact us. Along with your request, please indicate any provisions your institution requires if information has already been collected for the AI patient and feedback features.