
Artificial Intelligence

This guide will help you learn the fundamentals of using generative Artificial Intelligence for your academic work.
   

Evaluating Your Output

ChatGPT and other text-generating AI tools can produce incorrect information. Although the information may look accurate, there may be errors, or the output may be entirely fabricated.

Ask your instructor first before using content from generative AI (e.g., ChatGPT) in academic assignments. 

If you are allowed to use AI tools in your course, take time to critically analyze the output.
These tools are great synthesizers, but the critical thinker is you!

Use an evaluation method like the CRAAP Test to guide your review.

Image Credit: humanoid robot characters set in different poses by bigpa. Adobe Stock. (Education License)

Apply the CRAAP Test

Apply the CRAAP Test to evaluate output from generative AI. The test reviews five criteria: Currency, Relevance, Authority, Accuracy, and Purpose.


Currency

Ask yourself:

  • When was the information published, posted, or last updated?
  • Is the information current for your topic and field of study?
  • Does the AI tool state its knowledge cutoff (when its training data ends)?

What to know about GenAI:

  • GenAI tools are trained on limited datasets and may not include the most recent information.
  • They usually cannot access subscription or password‑protected sources (like library databases).
  • AI output often includes facts without dates, making currency hard to judge.

What you should do:

  • If no dates are provided, verify the information using other sources (library databases, reputable websites).
  • If your assignment requires up‑to‑date information, do not rely on AI alone.

Red Flags

  • Vague time references like “recently” or “currently” with no dates
  • Claims about breaking news or new research with no external verification

Relevance

Ask yourself:

  • Is the output suitable for your purpose or assignment/project?
  • Does it provide too little, or too much, detail for your purpose?

What to know about GenAI:

  • AI responses are generic by default and may oversimplify complex topics.
  • The output depends heavily on your prompt; poor prompts can lead to irrelevant or unfocused results!

What you should do:

  • Treat AI output as a starting point, not a finished product.
  • Adjust or refine your prompt, but still confirm the information elsewhere.

Red Flags

  • Content that sounds polished but doesn’t actually answer your question
  • Long explanations that avoid specific details you need for your assignment

Authority

Ask yourself:

  • Are human sources named and traceable? Are they qualified to write on the topic?

What to know about GenAI:

  • AI tools do not have expertise, credentials, or personal experience.
  • They can sound very confident, even when they are wrong.
  • Statements like “experts say” may be generated without real sources.

What you should do:

  • Look for real, credible sources (authors, organizations, studies) that support the claims.
  • Be especially cautious of claims of authority or professional expertise.

Red Flags

  • “According to experts” with no names or sources
  • Claims like “as a doctor/lawyer/historian…”
  • No clear accountability for errors

Accuracy

Ask yourself:

  • Have sources been given so that you can easily find or verify them?
  • Are citations real, and do they actually support the claims?
  • Can the facts, data, or statistics be verified using independent, credible sources?
  • Does the information contradict itself or have logical gaps?

What to know about GenAI:

  • AI may “hallucinate” or make up facts, statistics, or citations.
  • Citations generated by AI may look real but not exist.
  • AI reflects patterns in its training data, including biases and stereotypes.

What you should do:

  • Always fact‑check AI output before using it.
  • Look up every citation AI provides to confirm it’s real and relevant.
  • Compare information across multiple trusted sources.

Red Flags

  • Citations that don’t exist or don’t match the claim
  • Confident statements with no sources
  • Overgeneralizations or stereotypical language

Purpose

Ask yourself:

  • Why am I using this GenAI tool? (brainstorming, outlining, background understanding)
  • Is this use allowed for my course or assignment?
  • Am I expected to use and cite scholarly or library sources instead?
  • Is this content a starting point, or am I treating it like a final source?

What to know about GenAI:

  • AI doesn’t have a purpose. But you do!

What you should do:

  • Follow your instructor’s policies on AI use and disclosure.
  • Use AI to support your work, not replace research or critical thinking.
  • Ask yourself whether you would feel comfortable explaining how you used AI.

Red Flags

  • Submitting AI‑generated content as your own original work
  • Using AI instead of reading required sources
  • Failing to disclose AI use when required

Adapted from Evaluating Information: The CRAAP Test (2009) by University of the Fraser Valley.

Important:
  • Do not just trust that the information given is correct. GenAI may “hallucinate” (fabricate) information when it does not find a clear answer.
  • GenAI tools may perpetuate bias, as they will reflect the narratives dominant in their training data.
  • GenAI tools are trained on a limited set of data and are unlikely to have access to current or password-protected information (e.g., paid content in library databases).
  • Consider whether generative AI is the right tool for your learning and research.
  • For help with effective research strategies and finding credible sources, talk to a Librarian!