Generative AI tools, such as ChatGPT, are increasingly used to support the research and learning process. When used effectively, they can streamline tasks and generate ideas. However, these tools are not infallible - they can produce convincing but inaccurate, misleading, or biased information. Without careful evaluation, AI-generated content may hinder rather than enhance academic work.
To ensure quality and reliability, it is essential to critically assess AI outputs, as you would any other source. Consider accuracy, credibility, relevance, and potential biases before incorporating AI-generated material into your research. Remember, AI cannot replace independent thought, rigorous analysis, or ethical academic practices.
For assessed work, you must always follow the University’s guidelines on AI use.
This guide will help you develop a critical approach to using generative AI, ensuring it serves as a tool to enhance - not compromise - your research and learning.
Framework for Evaluating Generative AI Output
It's always worth pausing to consider whether a Generative AI tool is appropriate for the task at hand. Generating images, audio, and text with GenAI tools carries a significant environmental cost in power and water use, so it is good practice to ask whether an existing resource can already do the job you need.
Image generation is a good example. Do you need a Generative AI tool to create a new image, or does a high-quality image that meets your needs already exist on a royalty-free, Creative Commons platform such as Unsplash, Pexels, or nappy?
The University's Blended Learning Service has some guidance on the general principles of the use of Generative AI.
Photo by Johannes Plenio on Unsplash
This artifact was created with the help of Claude.
The content on this page has been co-created with a range of Generative AI tools. All content generated by Generative AI has been reviewed, revised, and edited by the I&LS team before publication.