The Art of Measuring Classroom ROI
What is the ROI of a college education?
This question has become more common as the stakeholders in higher education (students, educators, administrators, legislators, and others) analyze the cost, value, and long-term payoff of a college degree. As educators, we look at our individual contribution to this question, since the coursework itself is a cornerstone of any program or degree. Most educators hold that education is valuable in its own right, but in an age of data and analytics, how can we prove its value to others? How should we measure the ROI of a course, and how can we show that our coursework is both accurate and predictive of student success beyond the institutions where we teach?
One way to answer this question is to assess how instructors measure outcomes.
What gets measured is typically both what gets done and what gets rewarded. In 2012, I secured undergraduate Marketing Principles syllabi from 15 well-regarded institutions. The chart below summarizes them by grading category and shows the weight each instructor assigned.
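To make the weighting concrete, here is a minimal sketch of how category weights from a single syllabus translate into a final course grade. The category names, weights, and scores below are hypothetical illustrations, not values drawn from the 2012 sample:

```python
# Hypothetical grading weights from one Marketing Principles syllabus
# (illustrative values only, not from the actual 2012 sample).
weights = {
    "quizzes_exams": 0.50,   # traditional assessment, typically the largest share
    "projects": 0.20,
    "simulation": 0.15,
    "participation": 0.15,
}

# A hypothetical student's scores (percent) in each category.
scores = {
    "quizzes_exams": 84.0,
    "projects": 91.0,
    "simulation": 88.0,
    "participation": 95.0,
}

# Weights should sum to 100% of the grade.
assert abs(sum(weights.values()) - 1.0) < 1e-9

# Final grade is the weight-averaged score across categories.
final_grade = sum(weights[c] * scores[c] for c in weights)
print(final_grade)  # 87.65
```

The point of the sketch is that the weights themselves are the instructor's statement of what matters: here, half the grade rides on quizzes and exams, mirroring the pattern in the sampled syllabi.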
The categories are consistent across the sample, with some variation in weighting and inclusion. The most heavily weighted assessment vehicle is the traditional one: quizzes and exams. For educators just dipping their toes into the ROI world, this is the place to start.
However, do these types of measurement really demonstrate command of the material and graduates' ability to apply that command for the benefit of future employers? Some programs have developed end-of-program assessments, essentially capstone and competency exams, to determine what students have really retained. The results have been mixed, prompting administrators, instructors, and employers alike to reflect on what must be done to improve outcomes.
I included the simulation category because of recent findings on the efficacy of these tools for improving learning outcomes. Modern programs are shifting toward courses with more student engagement and more applied learning methods: more projects, more simulations, and more alternative learning tools. Accordingly, I have reached out to all the schools that supplied syllabi back in 2012 to see how their views on the best metrics have changed. (I will post updated results in the near future.)
One set of tools that has helped in this area of applied methods is McGraw-Hill Education's Connect® and SmartBook® programs. SmartBook, an adaptive eBook, provides an individualized approach: it gauges a student's ability and then delivers a tailored lesson and reading experience for improvement, something no traditional exam can do. Connect also gives instructors automated grading and detailed reporting, freeing them to focus on what these outcomes reveal about where students are struggling in real time and to adapt the content to deliver higher course ROI.
Early results from recent end-of-program assessments indicate progress in student retention and competency at program completion as a result of more applied and alternative learning methods, but further testing is needed to confirm statistically validated results.