
Assess Thyself or Be Assessed: 5 Golden Rules of Conducting Assessment

No matter your discipline, it’s pretty much impossible to be part of higher education without having bumped into some sort of assessment project. Of course, as teachers, in both formal and informal ways, we have been using formative and summative assessment to better understand how to help students make progress for ages. For example, pausing in a lecture to ask students if they have any questions about the topic just covered is informal formative assessment used to determine if it’s time to move on or if we have to go back and review some important ideas. But as educators today we also increasingly find ourselves subject to programmatic, institutional, or national assessments--testing--that at times feels like political posturing or a competition for funding.

In writing studies, where I’ve spent my academic life, assessment has been a hot topic for the last few decades, mainly because required first-year writing courses are often seen as a key gate-keeping point for incoming students and a prerequisite for registration in other courses. Ed White, a professor at the University of Arizona who has researched and written extensively about all forms of college assessment, warns that “if we do not meet the academic and political demand for . . . assessment at various levels, others will happily take on that task, whether they know anything about the matter or not.” This maxim has been playfully called “White’s first law of assesso-dynamics: Assess thyself or assessment will be done unto thee” (“The Misuse of Writing Assessment for Political Purpose”).

We do need to assess ourselves as teachers (how else to reflect upon and then improve our teaching practice?) and our programs (how else to know if students are meeting our pedagogical goals or which objectives cause the most challenge?). Whether you’re an administrator who needs to dive into assessment in order to fend off institutional and/or political pressure or a faculty member who is in a department undergoing some type of programmatic assessment, here are some key guidelines to keep in mind.

  1. Keep it Local

When it comes to assessing student progress, one size does not fit all. The assessment will mean more to everyone involved if the goals are important to the community--the teachers and students in your program. You will get more buy-in if the assessment is planned and devised with participation from several members of the faculty (a good cross-section--and that means adjuncts or part-timers if you have them, as well as full-time and/or ladder faculty), instead of by just the administrator(s). When all the different faculty stakeholders participate, they bring knowledge of your student population and pedagogical goals. It won’t be easy to create an assessment project with several faculty members involved--but it’s worth it.

  2. Make Sure Your Goals Are Precise

Once you’ve got an assessment community, discuss what you want to get out of the project in order to devise a clear set of goals. You may need to assess because of institutional pressure (political whims, accreditation reviews, funding needs, etc.)--if so, your goals should be shaped with this in mind, but they should also find a way to reflect what the community values.

If you already have a clear set of programmatic (or course) goals, use these as a starting point, but don’t try to create a project that will assess all of them; discuss which are most important and which can be measured in some way that makes sense within the pedagogy of the class. If you create a project around just a few main objectives, you should be able to collect data that gives you much richer detail about what students are and aren’t learning. Sometimes simple is better.

  3. Use Technology When It Can Help

If your program is using a platform like McGraw-Hill Education’s Connect or ALEKS, take advantage of what technology can do to help make the assessment process easier. In Connect, for example, you can create an assignment that is shared with all the classes participating in the project; once students complete the assignment, you can easily retrieve the results from all the classes.

  4. Make Sure Your Assessment Will Return Valid Data

Not all of us are researchers, and many of us didn’t get our PhD in statistics. Don’t be afraid to admit that you don’t know everything or that you’re unsure how to obtain statistically valid data in an easy-to-implement manner. Look for help from other departments on campus--such as social science research or statistics--and ask the advice of friends or colleagues at other institutions about their experiences with assessment. You may also need to gain approval from the Institutional Review Board or Office of Research on your campus.

When you’re starting out, consult with someone who can advise you about sample size. You don’t have to create a project that encompasses every single student and every section of the course being offered in order to produce valid and statistically significant data. These calculators from Creative Research Systems and SurveyMonkey can give you a starting point for understanding sample size.
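If you want a rough sense of what those calculators are doing, a common approach is Cochran’s sample-size formula with a finite-population correction. The sketch below is a minimal illustration, assuming a 95% confidence level and a ±5% margin of error; the function name and defaults are my own, not from any particular calculator.

```python
import math

def sample_size(population, confidence_z=1.96, margin=0.05, proportion=0.5):
    """Estimate a sample size via Cochran's formula.

    confidence_z: z-score for the confidence level (1.96 is roughly 95%).
    margin: acceptable margin of error (0.05 means +/-5%).
    proportion: expected proportion; 0.5 is the most conservative choice.
    """
    # Cochran's formula for an (effectively) infinite population
    n0 = (confidence_z ** 2) * proportion * (1 - proportion) / margin ** 2
    # Finite-population correction for a program of known size
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# For a program with 1,000 enrolled students at 95% confidence, +/-5% margin:
print(sample_size(1000))  # 278
```

In other words, even a fairly large program may only need a few hundred student samples to draw statistically meaningful conclusions.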

  5. Involve the Community: Responding to and Making Conclusions about the Results

After the data has been collected and organized, allow the community to review it and then talk about their reactions. Consider making the results the topic of a faculty meeting or a start-of-the-term workshop. With more perspectives involved, you will gain a broader sense of what the data is revealing about what students are learning and what they’re challenged by.

About the Author

Lynda Haas teaches rhetoric and composition at the University of California, Irvine, where for 10 years she was also a Writing Program Administrator. Her areas of research include digital pedagogy and digital literacies, visual rhetoric, and the intersections between writing theory, feminism, and cultural studies. She co-edited a book of essays entitled From Mouse to Mermaid: The Politics of Film, Gender, and Culture, and recently presented a paper on accelerating the learning of international students at the Symposium on Second Language Writing. She has been using McGraw-Hill’s Connect for over a decade, and helped pilot several tools as they were released, including the facilitation of an inter-institutional assessment study using Connect’s “Outcomes Based Assessment” tool in 2010. Since then, she has served as a “Digital Faculty Consultant” for McGraw-Hill, talking with other instructors about Connect and digital pedagogy.
