
ChatGPT: What are the ethical challenges of AI language models in higher education?

If higher education institutions aren't proactive in shaping the use of these AI tools now, they'll risk being swept along with the tide. (Image: Pexels)
  • The use of ChatGPT in higher education raises concerns about authorship verification and intellectual integrity. 
  • Some higher education institutions are responding by implementing measures to address ChatGPT use.
  • University of Johannesburg associate professor Lisa Otto argues that institutions need to embrace AI language tools and begin shaping best practices to prepare students for the future workforce.

The media has lately been awash with news about ChatGPT, with headlines ranging from whether the AI language model could pass the bar to what happens if you ask it to pick your clothes or plan your holiday.

One area of particular concern is the education sector, where the tool raises questions about ethics and about how students are taught, how they learn, and how they will be tested.

Educators in higher education began sounding the alarm late last year, noting that students were using the tool to produce essays and answer exam questions, often against the backdrop of the increasingly common take-home exam format, which has persisted since its initial adoption during the pandemic.

Academics, like the University of Johannesburg's Bhaso Ndzendze, have noted that many educators "are worried about what they see as the diminished ability to ensure the authorship of the submissions made in essays, the cornerstone of a humanities and social sciences education and the primary tool by which students' understanding, application and synthesis of complex concepts is put to the test".

Indeed, many school boards and universities have begun considering their responses to the use of ChatGPT and similar tools by students.

In Australia, for example, some states, like New South Wales, have blocked ChatGPT in schools, while several colleges in the United States of America have responded to the tool by restructuring modules and putting preventative measures in place.

Turnitin, a tool commonly used in education to test for plagiarism, has developed an AI score that indicates whether an AI language model was used to produce a piece of text, while a developer has released an application called GPTZero with a similar purpose. Both tools, however, have been criticised over questions of accuracy.

What's more, while ChatGPT tends to write very well, it has thus far done a poor job of referencing, which adds another layer to the ethical challenges of its use in the academic space.

AI plagiarism is itself still a largely undefined area, with academics rapidly producing think pieces (like this one) and preliminary approaches to how the use of AI in academia should be addressed, both for students and for researchers.

Associations which produce style guides for referencing are also starting to update their guidance.

One challenge is that traditional referencing mechanisms do not suffice for text produced by AI language models: the words are generated in real time, differ each time they are produced, vary with the exact prompt used, and are not collected or stored in a way that would allow a reader to locate a permanent record.


Given that producing such text is an iterative and ultimately collaborative process, the results correlate directly with the nature and quality of the prompts provided, which raises questions about who owns the work produced.

Would it be ethical for whole sections of text to be copied verbatim, provided it is acknowledged somewhere that an AI language model was used? Or should students and researchers use the tool as part of the thinking process around a topic, allowing it to help them find and analyse texts more efficiently, and then, once they have formed an understanding and an opinion, put this to paper using the texts the tool referred them to?

These specifics are yet to be determined.

Some observers argue that we, as educators, would be missing an opportunity to equip our students with critical tools that they will need for the jobs and workplaces of the future.

After all, it is our responsibility to prepare them for work and, whether we like it or not, AI language tools, like ChatGPT, are not going away.

In fact, Microsoft announced in early 2023 that it would invest a reported $10 billion in OpenAI, the developer of ChatGPT, with a view to using the technology's capabilities within its own products.

It has more recently been reported that this functionality is to be incorporated into Microsoft Office, the application suite used across the world for word processing and spreadsheets.

Many other technology companies are working on, or have already produced, their own variants of AI language tools. Google and Microsoft's Bing, for example, have been building AI features into their search engines, although both have faced serious challenges, including misinformation, prompting the companies to make changes to the tools.

I tend to agree with the assertion that educators need to embrace these new technologies and become active participants in how we integrate them into our pedagogy.

These tools are, evidently, still quite imperfect, but this will change rapidly as developers tweak and hone the tech and as the AI itself learns.

It is, therefore, incumbent upon us to delve more deeply into the questions of ethics and the practical elements of how we use these tools, and of how we teach students to get the most out of them without forgoing learning. We have the opportunity now to start developing this best practice.

If we aren't proactive in shaping the use of these tools now, we'll risk being swept along with the tide, wherever it may take us.

  • Professor Lisa Otto is an associate professor and the SARChI Chair for African Diplomacy and Foreign Policy at the University of Johannesburg.


