- The use of ChatGPT in higher education raises concerns about authorship verification and intellectual integrity.
- Some higher education institutions are responding by implementing measures to address ChatGPT use.
- University of Johannesburg associate professor Prof. Lisa Otto argues that institutions need to embrace AI language tools and begin shaping best practices to prepare students for the future workforce.
The media has lately been awash with news about ChatGPT, with headlines ranging from whether the AI language model could pass the bar exam to what happens if you ask it to pick your clothes or plan your holiday.
One area in which there has been particular concern is the education sector, where it raises questions around ethics, how students are taught, how they learn, and how they will be tested.
Educators in the higher education sector began sounding the alarm late last year, noting students' use of the tool to produce essays and answer exam questions, often against the backdrop of take-home exams, a format that has persisted since its widespread adoption during the pandemic.
Academics, like the University of Johannesburg's Bhaso Ndzendze, have noted that many educators "are worried about what they see as the diminished ability to ensure the authorship of the submissions made in essays, the cornerstone of a humanities and social sciences education and the primary tool by which students' understanding, application and synthesis of complex concepts is put to the test".
Indeed, many school boards and universities have begun considering their responses to the use of ChatGPT and similar tools by students.
In Australia, for example, some states, like New South Wales, have blocked ChatGPT in schools, while several colleges in the United States of America have responded to the tool by restructuring modules and putting preventative measures in place.
Turnitin, a tool commonly used in education to detect plagiarism, has developed an AI score that indicates whether an AI language model was used to produce a text, and a developer has released an application with a similar purpose, called GPTZero. Both tools, however, have been criticised over their accuracy.
What's more, while ChatGPT tends to write very well, it has thus far done a poor job of referencing, which adds another layer to the ethical challenges of its use in the academic space.
AI plagiarism is itself an as-yet-undefined space, with academics rapidly producing think pieces (like this one) and preliminary approaches to the use of AI in academia, by both students and researchers.
Associations which produce style guides for referencing are also starting to update their guidance.
One challenge is that traditional referencing mechanisms do not suffice for text produced by AI language models: the output is generated in real time, differs each time it is produced, varies with the exact prompt, and is not collected or stored anywhere a reader could locate a permanent record.
Given that producing such text is an iterative and ultimately collaborative process, the results depend directly on the nature and quality of the prompts provided, raising the question of who owns the work produced.
Would it be ethical for whole sections of text to be copied verbatim if it is acknowledged somewhere that an AI language model was used? Or should students and researchers use the tool as part of the thinking process around a topic, allowing it to help find and analyse texts more efficiently, and then, once they have formed an understanding and an opinion, put this to paper using the texts the tool referred to?
These specifics are yet to be determined.
After all, it is our responsibility to prepare students for work and, whether we like it or not, AI language tools like ChatGPT are not going away.
In fact, Microsoft announced in early 2023 that it would invest a reported $10 billion in OpenAI, the company behind ChatGPT, with a view to using the tool's capabilities within its own products.
It has been reported more recently that this functionality is to be incorporated into Microsoft Office, the productivity suite used around the world for word processing and spreadsheets.
Many other technology companies are working on, or have produced, their own variants of such AI language tools. Google and Microsoft have both been developing AI features for their search engines, although both have faced serious challenges, including the generation of misinformation, prompting the companies to rework the tools.
I tend to agree with the assertion that educators need to embrace these new technologies and become active participants in how we integrate these into our pedagogy.
It is therefore incumbent upon us to delve more deeply into the questions of ethics, the practicalities of how we use such tools, and how we teach students to get the most out of them without forgoing learning. We have the opportunity now to start developing this best practice.
If we aren't proactive in shaping the use of these tools now, we'll risk being swept along with the tide, wherever it may take us.
- Prof. Lisa Otto is an associate professor and the SARChI Chair for African Diplomacy and Foreign Policy at the University of Johannesburg.