1. Introduction
Generative AI (GenAI) is disrupting education. GenAI refers to a broad
set of technologies that can generate new and unique content in various
formats such as text, images, audio, code, simulations, 3D objects, and
videos (Moorhouse et al., 2023). The concept of GenAI is closely tied to
Large Language Models (LLMs), which are specifically designed to
facilitate the creation of text-based materials. The advent of free,
relatively easy-to-use, online LLM-based conversational interfaces such
as ChatGPT has quickly transformed students’ use of AI
in education. Specifically, tools such as ChatGPT can automatically
generate text in response to a human prompt and have raised implications
for assessments (Sharples, 2022). As a result, many educators worry
about new forms of academic dishonesty: students can simply copy and
paste content generated by these technologies, potentially engaging in
uncredited use of AI-generated text, misrepresenting their own
abilities, and neglecting essential learning processes (Padillah, 2023;
Mohammadkarimi, 2023; Habib et al., 2024). Hence, the use of GenAI by
students can undermine principles of fairness in education and diminish
the value of personal achievements (Padillah, 2023).
Ethical concerns related to the use of GenAI such as bias,
accessibility, and privacy have also been raised (Sabzalieva &
Valentini, 2023). The issue of bias in generated content is a
significant ethical challenge (Ferrer et al., 2021; Zhou et al., 2023):
the models learn from data, and if that data reflects societal biases,
the content they produce can introduce or reinforce those biases
(Akter et al., 2021). Another concern is the potential for inequality
arising from varying resources and prior knowledge required to access
and utilise advanced functionalities of AI language models effectively
(Ng et al., 2021). The digital divide, already a societal challenge,
could worsen as those with greater resources and technical know-how
disproportionately benefit from these technologies. There are also
concerns related to individual privacy and intellectual property rights.
Using AI language models involves processing vast amounts of user data
(Gupta et al., 2023) and copyrighted materials – but often without
adequate notification or consent (Lucchi, 2023). The need for data input
to train and fine-tune these models presents a risk to individual
privacy, with questions arising about the security and responsible
handling of sensitive information (Wu et al., 2023) by the private
enterprises offering public access to GenAI technologies.
In response, several universities attempted to temporarily restrict or
ban access to GenAI tools (Moorhouse et al., 2023). However, concerns
have been raised about the impact of restricting access in academic
contexts, considering the impact on students’ AI literacy and their
readiness for a society increasingly powered by artificial intelligence
(Chiu, 2024; Chiu, 2023). Further, the disruptiveness of GenAI lies not
only in its potential to interfere with student learning but also in
its promise for enhancing student learning and the effectiveness
of educators. A comprehensive guide on using ChatGPT and Artificial
Intelligence in higher education published by UNESCO explains how AI
tools used both by students and educators can enhance learning
experiences. The authors of the guide present diverse roles AI can
fulfil such as those of a personal tutor, co-designer, and motivator
(Sabzalieva & Valentini, 2023). Furthermore, Miao (2021) highlights the
importance of collaborative intelligence between humans and machines and
describes four main areas in which there are emerging and potential
applications of AI in education: (1) Education management and delivery,
(2) Learning and assessment, (3) Empowering teachers and improving
teaching, and (4) Promoting lifelong learning. Recognizing opportunities
for disruption becomes essential for not only improving the
effectiveness of education but also for preparing students with the
skills required for employment in a world increasingly influenced by AI
integration (Alekseeva et al., 2021).
The rapidly evolving landscape of GenAI tools, with frequent releases of
new tools and improvements to existing ones, necessitates continuous
vigilance and adaptation (Gill et al., 2022). This dynamism requires
stakeholders to stay up to date with developments to address emerging
issues effectively, creating additional complexity in managing the
responsible and ethical use of GenAI. As society confronts these
concerns, there is a need to strike a balance that allows the benefits
of advanced AI language models to be realised while addressing the
associated ethical and societal challenges. Considering the opportunities,
concerns, and the immediacy of the disruption to university education, a
process is needed that can gather insights into how AI is being used,
assess whether such uses are effective or not, and bring stakeholders
together to define acceptable uses of AI in education (Beardsley et al.,
2024). Student learning agreements may offer a flexible, student-centred
approach that gathers informative data while providing opportunities for
students to become more thoughtful in their decision making related to
the use of innovative technologies such as GenAI.