
How AI is Reshaping Speech Grading Podcast and Transcript

Moderator Jennifer Foster and fellow University of Central Oklahoma instructors Dr. Megan Cox and Jennifer White share how AI is transforming the way they grade speeches and engage students.


Higher Education Blog | Podcast Transcript | Communication Corner

Jennifer Foster:

Welcome to the Communication Corner, a McGraw Hill podcast for the communication discipline. I'm Jennifer Foster, your moderator for today's session on how AI is reshaping speech grading. I'm going to have both of our guests introduce themselves and then we'll delve into our discussion. Megan, why don't you start us off?

Dr. Megan Cox:

Hi, yes. My name is Dr. Megan Cox. I am part of the strategic communication major at the University of Central Oklahoma.

Jennifer Foster:

And Jennifer?

Jennifer White:

Hi, my name is Jennifer White and I am part of the interpersonal communication section of Mass Comm.

Jennifer Foster:

And I'm Jennifer Foster. I am the core curriculum coordinator and oversee our communication courses in the department as well. And I think that all three of us have been researching, looking into, and personally using AI in our classrooms, because we recognize the importance of preparing our students as they go out into the workforce. And as we look at what we do as graders in our communication courses, I think really the place that we should start this discussion is talking about the traditional approach to grading, specifically student speeches. Megan, what's really been your traditional approach to grading speeches?

Dr. Megan Cox:

I remember when I started that I was actually a little panicked about it, about being able to really judge a speech, knowing that I was just going to watch it the one time through, and being worried that I would miss something. But after just one or two times, I realized that I was really going to grade it as this listener, this first-time listener, and that if I kept my notes real short, the program provided great rubrics so I could make those notes as I went. And then if I wrote the topic at the top, I could always go back in my mind and expound a little bit more, as long as I did it the same day, to make sure that they got as much feedback from me as possible.

Jennifer Foster:

And Jennifer, do you find that to be the same approach you've taken, and if so, are there any challenges that instructors face when trying to grade and use that approach?

Jennifer White:

Yeah, I know it might be different now. It's been a couple years since I taught public speaking. I always wanted to have the prep outline next to me, and then I would want to check sources. So I'm constantly checking all the marks, and I would make notes to myself. So like Megan, I didn't necessarily go back and look at tape, but I would make notes to myself like, "Check the source," if I missed something, because I didn't want to miss it, but I would always give them the benefit of the doubt on that. So that's how I approached it, because I started teaching public speaking in 2002, so you're just constantly trying to improve and trying to hit those spots and make sure that you're consistent with each student, and then marking their growth as a public speaker throughout the 16 weeks.

Jennifer Foster:

Yeah, I think that when you talk about needing to be consistent, that's a huge challenge that we face. If you're grading 100 speeches, are you grading them all in one sitting? Are you changing based off of your mood or your interest or the time of the day? And making sure that how we mark off or take points away on particular topics stays consistent is a challenge. The thing about grading speeches is that it can be a very time-consuming process, because we want to give that accurate feedback to the student for their growth, as you talked about. So Megan and Jennifer, I know that you guys are coming off of a research project really diving into AI, and maybe you could speak to what sparked your interest in wanting to explore AI tools, and then maybe talk about how you're currently using some of those.

Dr. Megan Cox:

Okay. And I think our interest in AI grading came from a circumstance that I'd had with my daughter, where some of her work was being graded by AI, and just AI. So we were really curious how students would feel about AI grading. And we wanted to make sure that we knew where the line is. Are they upset because it feels like a double standard, because we're not letting them use it to generate content, so we shouldn't be able to use it either? Is it that idea? Or is it more that they just don't trust AI? We found that to a large extent that wasn't the case, although there were some examples of people saying, "I don't mind if it's used as long as rubrics are really clear and I know how it's being graded, and that I could appeal that grade if I didn't feel that it was fair." So those were some of the things. The pilot study has been done, and we're still in some of the initial phases, but so far that's what we've been finding.

Jennifer White:

Yeah, and I think that, and Megan back me up here, I think one of the main things that I found when interviewing students is that it wasn't really even on their radar that faculty would use AI to grade. They were like, "What? No one's talking, no one has said this about me." From one of the interviews, it was like, "Your job is to teach, so this is your job. You should be grading my stuff. AI shouldn't be..." Now, that wasn't true for all of them, but it was just really interesting that for a lot of them, they were just like, "What are you talking about? No, that's for us. That's not for faculty."

Dr. Megan Cox:

Yeah, they were really surprised by that idea. I think it's still so new to so many people.

Jennifer Foster:

Absolutely. And so what AI tools or platforms have you been using, whether personally or as a part of your grading and assessment process?

Dr. Megan Cox:

I hate to say that a lot of my grading has come down to AI checkers, and there's a lot of them, and some have a more reliable reputation than others. What I really enjoyed using for writing, and I think that I would use for outlines in the future, is Google. When the students create a file in a Google document or in a Google folder, it shows me their version history. So I can go and look at how they put the information in. And while that's not 100% going to say they didn't use any AI, it definitely helps make sure that most people are doing it the right way. And that helps me not have to rely so heavily on those checkers.

Personally, I use ChatGPT. I don't like it to write any content for me, it's my pride as a writing professor, but I do feel that it helps me sometimes sort through information that's maybe poorly written. And I could see using generative AI to help me look at papers or outlines in the future to maybe get a little bit better idea of what the student's trying to say.

Jennifer White:

Yeah. I have GPTZero right here on my monitor. I have a subscription to it, and it also gives me some educational tools; there are articles in it that talk about AI in education. I like to have subscriptions to everything, so I have a subscription to ChatGPT as well. I use it for help with rubrics. I'll plug my assignment in and ask it to help create some criteria for me. I tell my students that. I tell my students, "Here's how I'm using it and how I expect you to use it."

And so Megan and I were talking about this on our trip. I think Jennifer, you and I talked about it too in the interview. I'm completely open to AI. I think it's silly not to be because it is... We are in an AI world, but it's about working together. So this is what I'm telling my students like, "We're figuring this out together, but that doesn't mean you get to copy paste an assignment in. So we're going to need to work together. You show me."

So in comm theory, I'm having them do AI integrity statements, like how are you using it? I need a statement from you. That's not foolproof. But I know as a student, I would've felt... I don't know if I could honestly tell you how I'm using it, if I'm using it inappropriately. So I'm doing that. I'll just be like, "Hey, I ran this through Turnitin." And I've set it up where they can look too. They can check their own work. So I'm like, "You've got to be accountable. You've got to check your own work. Here's the screenshot saying it's 87%. That's excessive. That's not okay." I just feel like we just don't know. We don't have anything to ground us. We're all on our own island. That's where I'm at right now.

Jennifer Foster:

Yeah. And I think that some of that does depend on the tools that you're using. Like you just talked about, Turnitin, I personally am using that. That does have an AI checker built into it that I felt pretty confident in, because it's embedded inside our LMS and students are able to, as you said, preview that information. And so I like it from a privacy standpoint that we're not going outside of what's already built in, but I do wonder whether it's truly checking things accurately and catching it, because ones that I would've sworn were probably AI-produced were coming back as 0% likely to be AI. And I was very surprised by some of that.

And then I also have been using some of the AI tools that are built into GoReact inside the McGraw Hill Connect platform. And I think that one of the interesting things about that is that it can give students some instant feedback. We talked about how there is such a time commitment on the speeches that it does take me a week to get that feedback to the students, when students want some instant, "How did I do? What's going on?" And so I'm able to select some of the individual criteria inside there and allow it to give some feedback to the students, and I've been able to play with some of that.

But when we are examining each of these different platforms, the real question is how do we think it's handling it? What do we think that an AI tool would be helpful for, or what might it not be good for? And Jennifer, I'll let you start on this one. In handling elements of grading specifically, can you see any ways that it would be good or where you might see that there would be an issue?

Jennifer White:

From a public speaking standpoint, I think it could catch verbal fillers. It could help measure time. So I think it could cover some of the things that you can just quantify. It might be able to assess strength of argument, maybe making sure the main ideas are properly presented, maybe even checking sources, maybe some of those things. And this will be reflective of our interviews as well: I think one of the main criticisms would be that AI doesn't know the student. It doesn't necessarily know the student has a high degree of communication apprehension, and seeing how that student has mastered that or worked through that and grown. And so we're not there yet. That's where I think we come in, just being able to see that growth and give that true qualitative feedback of where they are in their journey.

Dr. Megan Cox:

Yeah, I would completely agree with the growth, I think. And this is even the difference between sometimes an in-person and an online class, just being able to know the student and see where they're coming from. Because, to me, it's not so much about... You want them to master public speaking as much as they're able to, but some of them start in very different places. So just giving those grades to encourage that movement forward. And sometimes I worry about looking at a speech too intently. I'm not sure I would want someone to use AI to really grade my speech and count how many ums. There may be a speech that's actually a little bit better put together and a little more compelling that may have a couple more ums than another speech.

So being careful with some of those easy-to-count things not impacting maybe the overall, that human approach to it and saying, "Was this an impactful and compelling speech? Did they use the story in a way that really got to me?" I still wonder about some AI tools' ability to examine that.

Jennifer White:

Yeah, I want to say one more thing too, that I hate reading an AI-generated assignment. I would not want that kind of feedback, that just vague, abstract, depersonalized. It's just generic. I would want a teacher or an educator to show me, "Oh, I really like what you did there. That was great." I want that human aspect.

Jennifer Foster:

And so do you think that there are... So we mentioned fillers, and pros and cons to how we then evaluate it, but that would be something that an AI tool could easily grab. I think an AI tool probably could grab transitions, if we really wanted to focus in on that particular skill. Are there any other particular things that you could see an AI tool really being able to highlight, whether that's emotional impact, tone, pacing, logical fallacies? I don't know, do you see any of that happening, Megan, where an AI tool might be able to be helpful in that?

Dr. Megan Cox:

I do. I think, as we mentioned before, sources, it probably could grab those pretty well. Pacing, the way you're talking, maybe even to some degree, because of AI's ability to look at the way someone's shifting or the way they're using their hands. As far as compelling human stories, I would still feel uncomfortable with AI telling me what those are. I feel that that's a uniquely human element. And I realize that that can be different from human to human, but that's the challenge of a speech, is making sure that that story you're using is going to affect the most people.

Jennifer Foster:

Jennifer, any other thoughts on places that you see AI really being a useful tool in speech grading?

Jennifer White:

I know that the hardest time, the hardest part for a student is at the beginning of a speech, so I remember begging them to have a strong attention getter. And then having that statement at the end that provides psychological closure to the speech, like a present that you're unwrapping. So maybe having that strong attention getter and that strong closing statement to bring that psychological closure to the speech, maybe just to have that strong beginning and strong end.

Jennifer Foster:

Right. Do you think then that AI grading might change the way that we structure our assignments or the way that we teach? If we could incorporate some of these things and wave a magic wand that it all worked out well, what ways might it change the way that you approach your assignments or teaching, Megan, do you think?

Dr. Megan Cox:

I think that one of the students mentioned that if we started to grade with AI, then they would just all learn what the AI was looking for, and that's all that they would pay attention to. So I would want to be careful and say, "I am going to use some AI technology to look at these things, the easy things, the things that you should just be working on, the technical skills. And then I'm going to go back, I'm going to review your speech and I'm going to review its feedback, and I'm going to give you an overall sense." And I do think that that would maybe cut down on grading time and lead to maybe more quality-over-quantity remarks from me overall. So I think it could be very helpful in the future.

Jennifer Foster:

Because that is true. I think in speech grading, sometimes we feel very repetitive that on 10 speech assignments out of 20, I'm writing the same sorts of feedback. And so maybe if we could train the system to really be looking for those things, then as you say that quality, I might be able to go deeper then in some of the feedback that I give as opposed to just having to hit some of those surface things. Jennifer, do you see it changing any of your assignments or being helpful as far as the tool goes?

Jennifer White:

Are we talking about grading-wise or just creation-

Jennifer Foster:

Because if AI could help in grading, would that then change the way you potentially structure an assignment or think or approach it?

Jennifer White:

Yeah. I think that I'm going to be on the hesitant-adopter side of using AI to grade. I need to be in it. Now, that might change if I teach a public speaking class again, but in all my other assignments, I need to give feedback so I know where they're going on it, especially if it's a, what, scaffold? What is it called? Megan, this is the word I couldn't think of. Scaffolded assignment, where they have these stages? I need to know where they are. I think it came out in an interview, though, that I would use it for them to practice things or to run things through, so it can give them feedback and then get it to where it's ready for me, which might cut down on my grading time if they're able to learn from what AI is telling them. But I would be hesitant, I think, at this stage.

Jennifer Foster:

So you all have talked to some students about AI grading, and I'd like for you just to share how you think that students are going to respond to receiving feedback from an AI system. Maybe just what they've given some insights on as far as how they feel about that.

Dr. Megan Cox:

I think one of our main takeaways is transparency. It's what we're asking from our students as we grow a little more comfortable, especially in a public speaking class where I'm not trying to make sure that they are... In my writing classes, I'm making sure they understand those elements; I want them to be good writers. In public speaking, I'm okay with them using some AI to help them structure it. I want them to just be transparent with me. And that's what I felt from our interviews: what they wanted from us is transparency. If we are going to use it, how are we using it? What are we using? And what does the rubric look like? If they have a problem with what comes back, can they approach us and get our feedback on what it gave them? I think that's really what stood out to me. What do you think, Jennifer?

Jennifer White:

Yeah, and just having that relationship with their instructor, just the feeling of safety in that space. So I think just being able to have that trust was a big one. But I know motivation was one of your areas, right, Megan? I know that several of them, I think there was one outlier, but several of them were like, "I wouldn't be motivated if AI was grading my stuff because I'm trying to work to impress," I guess, "My instructor. So if AI is just the one grading it, who am I working for?" I think was some of that feedback I got.

Dr. Megan Cox:

Yeah.

Jennifer Foster:

Do we have any obligations, though, to consider using it as a part of the grading system so that students do learn how AI is reading things? I know that potentially they're going to be screened when they turn in a resume through an AI tool. And so, I don't know, has anything come up about our preparation and the way that we need to use it as educators to prepare students? Megan, any thoughts on that potentially?

Dr. Megan Cox:

I think you make a really good point, and I do think what we talk about today is probably going to be different. Unfortunately, at the pace everything is going, I think what we talk about today and what students are comfortable with is going to be very different 24 months from now. I think they're going to see it. Unfortunately, AI is impacting their entry-level job prospects, so they're going to want to know not only how to use it, but also what feedback it's going to give them. And you're absolutely right, applicant tracking systems are using AI to either kick a resume out or not. And I think that they could use it in an interview process. I think that AI could help them become better interviewers. So if we use some of that feedback in a public speaking class, then they might be more used to it when they actually go out and that job is on the line.

Jennifer Foster:

I can see it being helpful too for audience adaptation. Because in a public speaking class, we are asking students to give those speeches in front of that same audience throughout the semester, and so they're not having to adapt very much. But potentially, if we ask the AI to take the role of a particular audience member or a certain standpoint and give feedback from that perspective, then it really could maybe enhance the students' ability to recognize how different audience members might respond. So it sounds like if transparency is important, then maybe the best solution is some sort of hybrid where we're not just handing over all of the grading and assigning the grade based off of what the AI says, but instead allowing it to give some feedback. Is there a time, then, when that would be more valuable as a supplement in formative assessments, as opposed to summative assignments? Do you see that there's a difference in when we use this tool, Jennifer?

Jennifer White:

Yeah. So again, I think from a public speaking standpoint, when you're looking at the chapter on writing a strong introduction, that would be great as an activity: they create an intro and then have AI go through and give feedback on that before they're putting it all together to actually give the speech. So those checkpoints along the way.

Jennifer Foster:

So I think that one of the nice things about the tool that's inside GoReact is it does label itself as an AI assistant and gives those feedback statements. And then I also like that it pulls language from the speech. Because as you said, Jennifer, it can come across as being very impersonal. And so if it's pulling out quotes that the AI thinks landed, I think that that's nice for a student sometimes to be able to see, "This language choice, this diction, this way that I presented, it might have really made a big impact for the listener." So do you think, Megan, when you were talking to students, that students trusted the AI feedback as much as they would've trusted feedback from a human or from their instructor?

Dr. Megan Cox:

I think that they trusted it in the sense that they feel it'll be pretty unbiased across the board, that it'll give everybody the same feedback. That was also the reason for some of the distrust: if they know me and I know them and I know what they're struggling with, they want that human element. It's almost a sense of being able to negotiate, I think, a little bit of like, "I'm doing okay, right?" Whereas AI is going to be like it's either good or it ain't. So I think it's both sides, pros and cons.

Jennifer Foster:

And Jennifer, have you ever encountered or heard of where the AI evaluation would be different than the instructor's feedback that they were going to give? Maybe this human component of knowing the student, but do you think that it could just give wrong feedback to the student?

Jennifer White:

I don't know, I haven't run into that. I think we're just trying to keep our head above water in terms of evaluating work that could have been done by AI, that the idea of even being on the other side of it and using it to... It can be very overwhelming. So I have not even been on that side of it yet. Megan, you?

Dr. Megan Cox:

Not yet. We're starting to grow more comfortable with using it in certain areas, and as we do start to play with it a little bit in grading to see where it can help us fill in the gaps, I think we'll start to understand how it looks at things. I'll say I'm presenting the study next week, and I had AI look at the comments on a video, and then I did my own analysis of the main themes before I even looked at its rundown. Its rundown and my rundown were very similar. We didn't name everything the same thing, and we used some different comments as supporting statements, but I will say that the similarity between the two separate analyses was really, really quite fascinating.

Jennifer Foster:

I think that that's interesting, because that would be, I think, one of the concerns that instructors might have with using an AI tool for grading: if I'm telling a student to use hand gestures in a certain way or this type of organizational pattern, I wouldn't want the AI tool to be giving them different feedback and wrong information, or counting off. So that's interesting that your experience has been that it was similar. But Megan, I know that you have used AI checkers for things like plagiarism or whether or not the student is using that AI tool. What ethical or privacy concerns have you been thinking about when using AI as a grader?

Dr. Megan Cox:

Yeah, and I think you've made a good point that Turnitin is used within our learning platform, and that's a great way to do that. But because I wanted to be able to use multiple checkers, the students are made aware of that in my syllabus. It's an agreement for taking the class that I will check it. It's not something I love to do, and I hate policing it. And that's why the Google situation has been really lovely this semester, and we've been doing a lot more writing in class, and I get a lot of questions while they're writing. So really, it's been a fun process of learning what they really need from me. You do have to let them know that you are putting that work in a checker, because it is made public at that point. But if they're putting the prompts in checkers, that's what they're doing too. They're making your work and your class public. So there's this understanding that this is part of the deal at this point.

Jennifer Foster:

And Jennifer, you can speak to this too, but are you all doing anything to keep the students' work anonymous when you put it through an outside checker? Or are you just able to use the whole document and all the information in it?

Jennifer White:

I don't know if it [inaudible 00:26:58].

Dr. Megan Cox:

Yeah, I just [inaudible 00:27:00].

Jennifer White:

I'll take a section of it without their name attached to it, but they know I'm doing it. I have made a decision that this is something that's important to me. I'm trying to create scholars. I want people to come out better than they came in and they can't do that if they're copying and pasting. So this is going to be a fight. I'm willing to fight, but they know, I tell them and I use... And I'm just doing a section. But before if I thought I saw something familiar, I would copy and paste a statement in Google to see if it pulled up the exact same statement. So it's that same thing, I think.

Dr. Megan Cox:

And you can turn off learning on some of those systems so it can... It doesn't take what you're putting into it and use it to learn. So there's a couple options for that, but I think if you just copy and paste parts of the document and you don't upload the document, the data is not attached.

Jennifer White:

Yeah, I don't upload. Yeah, I just copy and paste. Yeah.

Jennifer Foster:

So we've talked about how AI is here, probably, and there's really nothing that we can do about it. And it might be hard for us to answer this next question, but I wonder how you see AI evolving in the communication classroom over the next, let's call it five years. And that may be even too far out to go, but how is it going to evolve both for our students and for us as educators, and how can we leverage this to maximize our time, maybe to avoid burnout, to get accurate feedback? How do you see this changing and evolving, Megan, over the next few years?

Dr. Megan Cox:

And I think this is something that Jennifer and I talked about: our expectations of what a document looks like or what a speech looks like are actually going to be higher. If we are allowing AI tools, and we want to help students learn them so that there's not any disparity among students, we want to make sure that they do have that opportunity. At the same time, then, I'm going to focus more of my instructional efforts less on the skill-based and more on the why-based: why are you learning this? Why are we doing this? So that if, you know, ChatGPT spits something out at you as an outline, you can look at that outline and know if it's hitting the points, if it's doing what it needs to do.

And I think that is our biggest concern in the next five years with AI. If an eighth grader is just putting a prompt in and getting something out, are they critically thinking about that material that they're getting back before they submit it? Maybe not, if we're not spending more time talking about the why and less time focused on that's where a comma goes or that's where a transition goes.

Jennifer Foster:

Jennifer, has anything surprised you since you've been diving into using AI in the classroom, whether from the student's perspective or from the instructor's? And especially in regards to grading. Has anything surprised you or what stands out maybe?

Jennifer White:

I think it helps to level the playing field for students. So I do think it can really help students and faculty do some of the things that took a lot of time before. It's all about learning how to use it appropriately. And this would be on both sides, from the educator perspective and the student perspective. We're just learning and it's evolving. And I don't even think we can anticipate what five years down the road looks like. I just know I'm revising my courses every semester, intense revision, and I've never had to do that before. We're just going to have to keep doing it, but we have to keep talking to each other. That's the thing. We have to learn from each other and stay informed on it, because it's just happening so fast.

Jennifer Foster:

So Megan, what would be your final piece of advice for teachers who are considering using AI for grading, in particular in grading speeches? Maybe what mistakes that you've made that you would caution people against, or just what advice could you tell someone who's thinking about really trying to implement this into what they're doing?

Dr. Megan Cox:

Yeah, I think there are two things. I think always give your students some grace. It's really easy to fall into the police role and forget that they're busy, or that sometimes they didn't use AI. And just always start with a conversation. And then be open-minded. It does feel sometimes that there's a lot asked of us and that we're constantly needing to change, but look at AI as an opportunity to enhance our role even as graders, to maybe help us with grading fatigue, and see it as a positive thing. Because the only way that we are going to manage how AI is going to affect every part of our life is to look at what are the positive aspects and how can I leverage those?

Jennifer Foster:

Yeah. And I think too, maybe starting small. Anytime we see a new technology, it can just become so overwhelming. And so finding two places, two pieces of feedback, something that we would want the AI to help us out with, and using it as a tool for our own benefit, because it's not going to overtake that human component and the teacher role that each of us have. And so really honing in on what it is that we're wanting it to look for. I know personally, when it comes to the AI assistant in GoReact, for each speech I'm just asking it to look for two things. Honestly, I just tell it to look for two positives. I don't even ask it to give two places for improvement. I just bolster that student up, give them some extra positive feedback, but just honing in on two skills that might be for that particular speech. And so maybe starting small might be my piece of advice. Jennifer, what would be your last piece of advice you might give instructors?

Jennifer White:

Yeah, I think I would just reiterate what Megan said: be open-minded about it, and see the positives and work with students to figure out what works best for them. This is a relationship we're co-constructing here, so just try to be in the know with students and work with them to figure out what do they need, what feedback would work best for them. Based on that, play around with it. Have a good relationship with your students where you're like, "Okay, so I'm going to take this assignment and I'm going to run it through. What do you think? Is this helpful?" Just be really fluid and stay open-minded, yeah.

Jennifer Foster:

Thank you, Megan and Jennifer, for joining me today. As we continue our conversations, and I think continue is really the key word there, on AI and how it is reshaping our grading and our ability to be effective communication professors, we hope that you will join us next time for another episode of the Communication Corner.

Related Content:

  • Engaging the Quiet Voices: Bringing Every Student Into the Classroom Conversation Podcast and Transcript

    Join moderator Dr. Jeffrey Child, along with guests Dr. Scott Myers (West Virginia University), Dr. David Kahl Jr. (Penn State Behrend), and C. Kyle Rudick (University of Northern Iowa), as they dive into a lively conversation on Engaging the Quiet Voices: Bringing Every Student Into the Classroom Conversation.

  • Teaching Listening: The Underrated Communication Skill Podcast and Transcript

    Join moderator Dr. Kory Floyd with guests Dr. Narissra M. Punyanunt-Carter (Texas Tech University) and Dr. Graham Bodie (The University of Mississippi) as they explore why listening is an essential skill and why it deserves more attention in the classroom.

  • Career Readiness in Communication Podcast and Transcript

    Join moderator Dr. Jeffrey Child and guests Dr. Tiffany Wang (University of Montevallo), Dr. Joy Daggs (Northwest Missouri State University), Dr. Michael Burns (University of Colorado Boulder), and Dr. Scott Myers (West Virginia University) as they discuss career readiness in communication courses.