Introduction
As artificial intelligence (AI) tools such as large language models (LLMs) become increasingly sophisticated, the ability to distinguish between human-written text and AI-generated text is more important than ever for protecting academic integrity.
With the rising popularity of ChatGPT, Bard, and other AI chatbots, it can be hard to tell whether a piece of writing was created by a human or by AI. Many AI detection tools are available, but in practice they can produce both false-positive and false-negative results.
Fortunately, there are still some reliable ways to help determine whether a piece of writing was generated by AI or written by a human. This guidance provides practical tips for spotting AI-written content by sight.
Generative AI Detection Tips
- Implausible or inaccurate statements: ChatGPT, Google Bard, and other Generative AI tools are known to make up facts. While students can also include inaccuracies in their writing, AI bots make false information seem extremely believable. ChatGPT also has limited knowledge of events that occurred after 2021, which means it usually can't produce factual information about current events. If the writing seems very polished but contains false information, it could be AI-generated. If you're evaluating a piece of writing for potential AI use, try searching the web for a few facts from the text. Focus on facts that are easy to verify, such as dates and specific events.
- Some sentences look right, but don't actually make sense: AI bots can create grammatically perfect sentences that don't make sense, even though they look fine on the surface. This is because AI bots do not know whether something is true or false; they only know how to use the right kind of word in the right place. If you find yourself rereading something that looks like it should make sense, but you can't grasp what the sentence is trying to say, there's a good chance you're looking at AI-generated writing.
- Fake or inaccessible sources: While AI bots built into search engines such as Google may cite sources automatically, the standard versions of these bots tend to make up sources that don't exist. Checking whether cited sources can actually be found and accessed is a quick test.
- A lack of descriptive and "rare" words: AI bots like ChatGPT work by predicting the most likely next word in a sentence, which results in lots of non-specific words like "it," "is," and "they." Because AI bots are less likely to use rarer words to describe things, an overall lack of descriptive language could mean that the content was generated by AI (the sketch after this list shows one rough way to quantify this and related signals).
- No grammatical or spelling errors: While students do their best to catch all grammatical and spelling errors before submitting their assessment, it's hard to catch everything. AI tools, on the other hand, produce grammatically impeccable work, even if the writing isn't factual.
- Short sentences: AI-generated content often includes very short sentences. This is because the AI is trying to mimic human writing but has not yet mastered complex sentence structure. This can be more obvious if you're reading a technical blog about something that requires code or step-by-step instructions.
- Repetition of words and phrases: Another way to spot AI-generated content is to look for repetition of words and phrases. This is the result of the AI trying to fill space with relevant keywords (in other words, it doesn't really know what it's talking about). So, if you're reading a piece and it feels like the same word is being used over and over again, there's a higher chance it was written by an AI.
- Lack of analysis: Another sign that a piece was written by an AI is a lack of complex analysis. Machines are good at collecting data, but they're not so good at turning it into something meaningful. If you're reading a piece and it feels like a list of facts with no real insight or analysis, there's an even higher chance it was written with AI. AI-generated writing handles static, factual writing much better than creative or analytical writing; the more information that exists on a topic, the better AI can write about and manipulate it.
- Lack of contextual awareness: Look at the depth of contextual knowledge shown within the text. AI models often struggle with context-dependent questions and may provide generic responses that lack specific detail or relevant information. If the content produced lacks a deep understanding of the subject matter, or fails to address specific aspects required for the assignment, this could be an indication of AI-generated text.
- Consistency versus randomness: AI-generated content tends to be overly consistent and uniform in sentence length and structure. It can seem monotonous compared to the complexity and variability of a human's writing style.
- Incorrect data: When combining data from multiple sources, AI models have to reconcile them. If a model does not know the correct value but must produce a result, it will predict numbers based on patterns rather than facts. As a result, if you read a piece and notice several inconsistencies between facts and numbers, there is a good chance it was written by AI.
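Several of the signals above (limited vocabulary, repeated words, uniform sentence length) can be roughly quantified. The following is a minimal, illustrative Python sketch of how one might do so; the function name, the word lists it produces, and the measures chosen are our own assumptions for illustration, and the numbers it outputs are indicative only, never evidence of AI use.

```python
import re
from collections import Counter
from statistics import mean, stdev

def stylometric_signals(text: str) -> dict:
    """Compute rough stylometric signals for a passage of text.

    These figures are indicative only; they are NOT proof of AI use.
    """
    # Split into sentences on ., ! or ? (a crude heuristic).
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    # Lowercased word tokens.
    words = re.findall(r"[a-z']+", text.lower())

    sentence_lengths = [len(s.split()) for s in sentences]
    counts = Counter(words)

    return {
        # Lexical diversity: unique words / total words. Heavy reuse of
        # common, non-specific words lowers this ratio.
        "lexical_diversity": len(counts) / len(words) if words else 0.0,
        # Most repeated longer words (short function words are skipped).
        "top_repeated": [w for w, _ in counts.most_common(20) if len(w) > 4][:5],
        # Sentence-length variation: human writing tends to mix short
        # and long sentences more than uniformly consistent AI text.
        "mean_sentence_len": mean(sentence_lengths) if sentence_lengths else 0.0,
        "sentence_len_stdev": stdev(sentence_lengths) if len(sentence_lengths) > 1 else 0.0,
    }

if __name__ == "__main__":
    sample = (
        "The report covers three topics. The report explains each topic. "
        "The report concludes with a summary. The findings are clear."
    )
    print(stylometric_signals(sample))
```

Human and AI writing overlap heavily on all of these measures, so any such output should only ever prompt further conversation with the student, never serve as proof.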
Although the tips above can help staff to investigate the suspected misuse of Generative AI further, they are not foolproof, and they are not definitive proof of academic misconduct.
If you suspect the unauthorised use of Generative AI to produce content for an assessment that a student has submitted as their own work, our Academic Integrity Procedure (PDF) must be followed at all times.
Where there is no hard evidence that content is AI-generated, but there are good reasons to believe that the work is not the student's own, it may be deemed appropriate for the Academic Integrity Officer (AIO) or the Academic Misconduct Panel to expect the student to demonstrate that the work is their own original work. This may involve, but is not limited to, an Academic Integrity formal discussion. Such a discussion can involve the AIO or Academic Misconduct Panel asking for draft copies of the work, additional samples of the student's other work, and/or asking a series of questions about the content of the submitted work.
Examples of some questions that can be used during the investigation include, but are not limited to:
- How did you plan your assignment?
- How did you come up with the main ideas/concepts/arguments for your assignment?
- What were your key sources for this assignment and why?
- What was your main platform for finding your sources?
- What led you to reach the conclusion in your assignment?
- Were there any parts of the assignment that you found difficult to complete?
- What section/part of the assignment are you most proud of and why?
- If you had to redo this assignment what would you do differently?
- Which class/seminar/lecture/workshop had the biggest impact on your assignment?
- Did you use any techniques, tools or software to improve your writing?
- Did you use Generative AI as a resource and, if yes, how?
In cases where the Academic Integrity formal discussion is conducted at the Academic Misconduct Panel meeting, the expectation is that this discussion will be led by the Panel Chair. The Panel Chair should prepare by drafting a series of questions (such as those above) and, if appropriate, requesting any additional evidence (such as draft or sample copies of the student's work). This should be done in the preparation time ahead of the Academic Misconduct Panel meeting.
Additionally, more information on academic integrity and misconduct for staff, including links to all relevant policies, forms, FAQs and additional resources, is now available via the Student Conduct and Compliance webpages.