
Ethical use of AI in Learning, Teaching & Research
THE USE – AND LIMITATIONS – OF AI TOOLS SUCH AS ChatGPT IN LEARNING AND RESEARCH
Large Language Models (LLMs) are a type of artificial intelligence trained on vast datasets, including books, academic articles, and other publicly available materials. LLMs such as ChatGPT can perform tasks such as summarising content, drafting text and translating languages. However, the use of AI tools becomes problematic when students and researchers outsource core academic tasks (such as essay writing, legal analysis and legal argumentation) to them. Submitting AI-generated content as if it were your own work constitutes academic dishonesty.
It is important to be aware of the limitations of AI tools. LLMs generate text from patterns and probabilities derived from their training data, and often produce inaccurate or even fabricated content (hallucinations), including false references to non-existent articles, cases, or data. Relying on such outputs without verification can result in disciplinary action. AI tools may be useful for clarifying complex concepts and brainstorming ideas, but they cannot replace critical thinking and scholarly engagement. Using an AI tool to brainstorm or rephrase a sentence is very different from submitting an AI-generated response as one's own work. The inappropriate use of AI tools undermines learning and the development of academic skills, violates academic integrity policies, and often results in low-quality work containing errors or fabricated sources. It is unacceptable to present AI-generated references or quotes as if they come from real authors or peer-reviewed literature without proper verification.
FOR AN INTRODUCTION TO AI TOOLS FOR RESEARCHERS AND STUDENTS AND THE ETHICAL CONSIDERATIONS OF ARTIFICIAL INTELLIGENCE SEE:
Generative AI Resources for Learning, Teaching and Research
The SA Medical Journal's Policy on the Use of Artificial Intelligence (AI) provides useful guidance in the context of academic research and publication.