The inconsistent policies we have had to deal with, the student complaints of "No, I didn't use AI…," and the overall pain of having to grade papers we know are not student-created but AI-generated.

But some of those commonalities are becoming harder to spot due to the increased personalization of AI itself. I will relate what I have seen and how I deal with it, having encountered it at both the undergraduate and graduate levels in my career.

AI-specific symbols (or symbols rarely used by humans)

Certain formatting patterns have become increasingly common in AI-generated writing. One example is the frequent use of decorative or symbolic formatting, such as icons or visual markers used to emphasize points. While humans can include these elements, they are uncommon in traditional academic writing and appear far more often in AI-assisted text, which makes that text easier to identify. Another indicator is the consistent use of extended punctuation styles, such as the em dash in place of the standard hyphen, which most students do not intentionally insert or apply uniformly when typing.

There are other indicators as well. These include the use of stylized headings created with decorative emphasis, or the replacement of standard transitional phrases with visual or directional cues to illustrate the flow of ideas. In most cases, students do not expend extra effort to insert specialized visual formatting into a formal paper, particularly when it does not align with academic conventions.

Because I grade strictly according to APA guidelines, these formatting choices are not acceptable. Academic writing should rely on clear language and proper structure rather than visual embellishments to communicate meaning.

Additional formatting tells

Along with the symbol formatting described above, we often see the overall structure of a response laid out in a very consistent, even format. The sentences are mostly the same length, and the paragraphs are evenly sized. The issue is that we know students do not write this way on any given day.

Add to this the complete lack of personalization in the response, and the fact that the wording is nothing we would ever hear in a student's own speech patterns, and it becomes very easy to tell when these are AI-generated.

The typical AI format, an introduction, evenly sized paragraphs, bullet points that are never explained, and a "Conclusion" or "in summary" line, allows us to see clearly that these were not student-made. You will also see this in the amount of extra information included in the answer, much of it irrelevant to the question that was asked.

Students using AI for their work typically do not proofread it, make it their own, or build on it. They generate it, assume it is good, and copy and paste it. This typically leads to very generic answers to questions that you should be making very specific. In grading, that lack of specificity should be marked as incorrect, with an explanation of why. In most cases you do not have to fall back on "This was AI-generated, so it was wrong." You simply have to state the truth: "This answer did not address the question as given; you were to relate it to this…. That has not been achieved."

An additional example of how I deal with these issues follows. In one class, a student blatantly used AI to answer: the response used exceptionally advanced vocabulary the student had never used, and it was incredibly long and vague. I used his own written words to prove my point. In class, I drew on his answer to ask him a follow-up question. He informed me that he did not know what those words meant. At that point I asked him how he had used them in his homework response if he did not know their meaning. My point was made to all.

As instructors, we must be the ones holding the cards so that we can teach our students. If we do not know these signs and how to address them in class, we will lose our edge, and our students will gain nothing.