
Conversations about and with ChatGPT: Two case studies

The recent advent of the generative artificial intelligence (AI) chatbot ChatGPT has led to increased interest in the ethical, practical and technological impacts of AI on the way we live and work. ALIA's seminar on AI and libraries in February this year attracted over 500 registered attendees, all keen to learn more about the possible implications of this technology for academic integrity and information literacy. In this feature article, we present two case studies of what it's like to chat with the chatbot – one from Dr Nicole Johnston (on behalf of the ALIA Research Advisory Committee) and one from Anand Kochunny, a career reference librarian. The two case studies explore how this technology may affect the library, higher education and business sectors, and the potential dangers and benefits of its integration into the community.

Implications of generative AI for information literacy and referencing

Dr Nicole Johnston

In November 2022, OpenAI released a free version of its generative artificial intelligence chatbot, ChatGPT, which has been trained on internet sources to provide conversational, automatically generated responses. In his 2023 article on the implications of generative AI for journalism, John Pavlik explains that GPT stands for 'generative pre-trained transformer' and that it is a form of natural language processing that can generate new content.

Andrew Cox, Stephen Pinfield and Sophie Rutter, in their 2019 article on the impact of AI on academic libraries, note that one of the biggest anticipated impacts of AI on libraries is on the search and discovery of resources, which also raises questions of privacy, copyright and bias. They argue that librarians are well placed to help users develop the critical information literacy skills needed to understand the privacy and legal implications of AI.

Brady Lund and Ting Wang, in a 2023 article on the impacts of ChatGPT, note that ChatGPT itself acknowledges the implications AI has for natural language processing in libraries, including improving the accuracy and effectiveness of search systems and assisting with automated reference queries. At the same time, it acknowledges concerns about the privacy of the information it has been trained on and the possibility of bias.

In higher education discussions about the implications of AI, some of the limitations researchers have already identified in ChatGPT relate to information literacy, ethical use, misinformation and referencing. In a recent article in The Guardian, 'AI makes plagiarism harder to detect, argue academics – in paper written by chatbot', Anna Fazackerley points out that ChatGPT does not currently understand academic referencing. The authors of a recently published article on academic integrity and AI, itself produced using ChatGPT, reveal in the piece that one way they tricked the reviewers was to add references to the content ChatGPT had produced.

There are also concerns about the currency of information, as ChatGPT's training data currently extends only to 2021. Jürgen Rudolph, Samson Tan and Shannon Tan point out in their 2023 article on ChatGPT and assessments that, unlike ChatGPT, search engines such as Google retrieve current information. Another 2023 article, by Michael Liebrenz, Roman Schleifer, Anna Buadze, Dinesh Bhugra and Alexander Smith, on ethical challenges for medical publishing, points out that ChatGPT could be harmful if it produces inaccurate information; ethical considerations around copyright, attribution, plagiarism and authorship are needed, they say.

To test the limitations of ChatGPT as a research tool, I asked it to write an essay on the opportunities and challenges of open access for a Master of Library Science student. ChatGPT responded to my follow-up prompts by adding further information, such as references and citations.
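For readers who would like to run this kind of experiment programmatically rather than through the web interface, here is a minimal sketch using the pre-1.0 'openai' Python package that was current when this article was written. The API key placeholder and the exact prompt wording are illustrative assumptions, not a record of the session described above.

# A minimal sketch of sending an essay prompt to ChatGPT through the pre-1.0
# 'openai' Python package (pip install "openai<1").
import openai

openai.api_key = "YOUR_API_KEY"  # illustrative placeholder; use your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": (
                "Write an essay on the opportunities and challenges of "
                "open access for a Master of Library Science student, "
                "including references and citations."
            ),
        }
    ],
)

# The generated essay is returned as the first choice's message content.
print(response["choices"][0]["message"]["content"])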

ChatGPT wrote an essay outlining several opportunities and challenges of open access, and it produced 10 references, all dated before 2017. ChatGPT drew on open access articles, books and institutional repositories to write the essay, and all of the real articles in the reference list were open access, available either directly from the journal or through an institutional repository. Of the 10 references provided, seven were found to be real. The citations provide a short, one-sentence overview of each article and appear to be sourced mainly from the papers' abstracts. Even among the genuine references, there were some inaccuracies in year, issue, publisher and location.

ChatGPT appears aware of its own limitations and, if advised that a reference or a piece of information is fake or inaccurate, offers advice on verifying information. Below is the response it provided when asked why some of its references were fake.

‘As an AI language model, I do not intentionally provide fake references. However, I generate my responses using complex algorithms that scan a vast amount of text data available on the internet. This means that there may be instances where I inadvertently produce fake or inaccurate references.

Additionally, it’s worth noting that not all information on the internet is accurate or reliable. Some sources may contain intentionally misleading information or inaccuracies, while others may be outdated or no longer valid.

It’s important to verify information from multiple sources and use critical thinking skills to evaluate the credibility and accuracy of sources before relying on them.’ (ChatGPT)

ChatGPT also asked me to let it know when a reference was incorrect. The following is an example of a fake reference that I told ChatGPT was fake:

Suber, P. (2015). Open access and academic freedom. Educause Review, 50(6), 10-11.

Suber (2015) argues that open access promotes academic freedom by enabling greater access to research outputs.

‘You are correct that the reference I provided, Suber, P. (2015). Open access and academic freedom. Educause Review, 50(6), 10-11, is fake. Upon further investigation, I could not find any evidence that this article was published in Educause Review or any other academic journal’.
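Checks like this can be partly automated. As a sketch only, assuming access to the public Crossref REST API (api.crossref.org), a short script along the following lines could flag references that do not match any known publication; the lookup and the way the result is interpreted are illustrative assumptions, and any match still needs human judgement.

# A minimal sketch of spot-checking a chatbot-generated reference against the
# public Crossref REST API. The fabricated Suber reference discussed above is
# used as the test case.
import requests

def closest_crossref_match(reference):
    """Return the closest-matching Crossref record for a reference string."""
    response = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": reference, "rows": 1},
        timeout=30,
    )
    response.raise_for_status()
    items = response.json()["message"]["items"]
    return items[0] if items else None

reference = ("Suber, P. (2015). Open access and academic freedom. "
             "Educause Review, 50(6), 10-11.")
match = closest_crossref_match(reference)

if match is None:
    print("No Crossref record found - treat this reference as suspect.")
else:
    title = (match.get("title") or ["<no title>"])[0]
    print("Closest match:", title, "DOI:", match.get("DOI", "n/a"))
    print("Compare title, year and venue by hand before trusting it.")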

With new and emerging AI tools in the works, and with Microsoft soon to include an AI-powered system called Copilot in Microsoft 365, it is important that library staff are AI literate and well positioned to provide their users with the information literacy skills needed to critically evaluate and verify information produced through AI, and to educate users on the ethical and legal implications of using these tools.

Dr Nicole Johnston

Member of the ALIA Research Advisory Committee, ALIA Director and Associate University Librarian, ECU
n.johnston@ecu.edu.au

Hello, tomorrow! AI brings you the future today

Anand Kochunny

As a recent migrant to Australia from New Zealand, I wanted to know whether Kiwis in Australia were eligible for the free COVID-19 bivalent vaccine (the fifth dose). So, I visited the SA Health website, managed by the Government of South Australia, and encountered Zoe, its chatbot. I asked Zoe, 'Can a Kiwi get a free vaccine?'

Zoe rephrased my question as ‘How can I book a COVID-19 vaccine?’ and replied: ‘Vaccination is available at participating GPs and pharmacies across South Australia. You can find a clinic near you by checking the Vaccine Clinic Finder.’

When I asked ChatGPT-3.5 the same question, it provided a 107-word reply beginning with: ‘Yes, a Kiwi (citizen or permanent resident of New Zealand) who is currently living in Australia can get a free COVID-19 vaccine under Australia’s national vaccination program.’

Comparing Zoe with ChatGPT is possibly like comparing a small car to a luxury sports model, for GPT-3.5 operates on 175 billion parameters. To be fair, Zoe was built by Clevertar and SA Health in just six days in early 2020. And, like most chatbots, Zoe can only respond on the basis of the information it has been fed.

For the benefit of the uninitiated – as I was, until recently – ChatGPT is an artificial intelligence chatbot developed by the San Francisco-based firm OpenAI and launched on 30 November 2022. GPT stands for Generative Pre-trained Transformer. According to an article in The Guardian, ChatGPT is the fastest-growing consumer app, having reached 100 million users within two months of launch.

There's a reason for this unprecedented popularity. According to the Wikipedia page for ChatGPT, it can 'write and debug computer programs; compose music, teleplays, fairy tales, and student essays; answer test questions (sometimes, depending on the test, at a level above the average human test-taker); write poetry and song lyrics; emulate a Linux system; simulate an entire chat room; play games like tic-tac-toe; and simulate an ATM' simply via a text prompt from the user. OpenAI released GPT-4 on 14 March 2023, and its capabilities are mind-boggling, as this columnist has described. GPT-4 can respond to prompts made up of both text and images.

Encouraged by ChatGPT's response, I asked it to write an article for a library journal on chatbots and AI. It took ChatGPT-3.5 eight attempts (producing a different version each time) to complete a 658-word article. The article was dry, impersonal and objective, and it lacked insight, but it was coherent, structured, contextual and reasoned. On that basis, I'm sure that GPT-4's attempt would be an improvement.

Microsoft, which has backed OpenAI to the tune of billions of dollars, has already incorporated GPT technology into Bing, its search engine, and Edge, its browser, as well as into Windows 11 and Office 365. The future is here, and this breed of AI is powered by large language models (LLMs).

That said, LLMs aren't perfect yet. Bard, Google's answer to ChatGPT, flubbed its debut spectacularly, wiping $100 billion off the market value of Google's shares in February 2023. While this story made headlines, the media have been less excited about errors made by Microsoft's AI-powered Bing. Such errors stem from the tendency of LLMs to 'hallucinate', which can lead to a loss of confidence in the technology. Both OpenAI and Google are aware of the limitations of their LLMs and are working to improve their products.

What has the impact of ChatGPT been in the workplace? Business is adopting it cautiously. Some professional services firms in Australia appear to be making use of it, while companies like JPMorgan Chase have barred their employees from using it. Microsoft has deliberately used the word 'Copilot' in branding its AI technology to emphasise that AI will assist rather than replace workers. However, it is inevitable that, as this technology improves, some existing categories of jobs will yield to new ones.

And what of the education sector? The response from the Group of Eight Australian universities has been to ban the use of ChatGPT for fear that students will use it to cheat. Their consternation is understandable, for even a cursory look at GPT-4's test scores reveals its impressive capabilities. Meanwhile, a student at Princeton University has created a tool to detect AI-generated text, and Turnitin intends to release an AI-detection tool with a claimed 97 percent accuracy rate in April 2023. There have been voices within academia calling for university authorities to allow students to use ChatGPT (see 'ChatGPT as a teaching tool, not a cheating tool' and 'ChatGPT and the future of university'); indeed, some departments at the University of Sydney have already done so this year.

To return to my story: I had originally posed my question to Zoe in February 2023, before the bivalent vaccines had arrived in Australia. When Zoe couldn't help, I sought human assistance and spoke to a member of staff at Australia's Easy Vaccines Access (EVA) service. She wasn't absolutely certain whether I qualified for a free vaccine and wanted to check the rules. Then she had a flash of inspiration and asked me if I had a Medicare card. When I told her that I did, she said the Medicare card was all I needed to get my free vaccine. And so, I got my fifth dose in March 2023.

This experience tells me that we human beings still have an edge over AI, thanks to our intuition. As long as human beings are able to control the risks of using generative AI, we should be able to use it effectively.

Anand Kochunny, AALIA
anand.kochunny@gmail.com
