Generative AI can be an invaluable tool; however, it's important to be aware of its limitations and the considerations involved when utilising generative AI in your studies.
Gen AI can make misleading assumptions and include bias in its outputs.
Gen AI can also fabricate information, a phenomenon sometimes called 'hallucination'. This means all outputs need to be fact-checked.
The data used to train gen AI contains the biases of dominant groups within society and history. This means that outputs often lean towards certain viewpoints, a problem known as 'algorithmic bias'. An example would be reproducing stereotypes such as:
Doctor = male, Nurse = female
The use of gen AI raises the issue of inaccurate information (including intentional disinformation, deepfake images, and fake news) propagating through the news media and the web. You should always critically assess any information you engage with.
You have a responsibility to critically evaluate any gen AI output you plan to use. View gen AI as a starting point and a summary of a topic, then research more widely.
When critically evaluating any source (generative or not) consider running the material through a checklist like the CRAAP Test:
How Current is the work?
The Relevance of the material
What is the Authority behind the text?
How Accurate is it? and
What is the Purpose of the work?
See more in Evaluating Sources of Information in Study Essentials.
Evaluating Sources of Information also has material on identifying Fake News, and Fact-Checking Tips.
Finally, the Search Engines and Library Databases page on Filter Bubbles & Fake News has sources you can use for fact-checking.
You must always check the credibility of your information and its source, as using material that turns out to be inaccurate or false may lead to findings of academic misconduct.