Understanding Davidson Academics in a World with ChatGPT

The skills Davidson students pursue throughout their liberal arts education may soon be replaced by artificial intelligence. With the release of GPT-4, the conversation about artificial intelligence’s growing role in education has intensified. The newest model behind the highly sophisticated chatbot has enhanced features that expand its capabilities significantly; among other changes, the system can now generate responses from both written and visual prompts. ChatGPT can write poetry, pass the Medical Licensing Exam, and might have even written this article. Chair and Professor of Digital Studies Dr. Mark Sample provided context about how artificial intelligence and tools like ChatGPT are most directly impacting education right now.

“There are AI systems that do text generation, like ChatGPT,” Dr. Sample said. “They do image generation, like Midjourney or DALL-E. There’s predictive algorithms. So AI is just this huge field. When it comes specifically to education, I think the text generation is probably the most relevant right now.”

Dr. Sample also discussed the limitations of reliance on AI.

“AI is always going to replicate what is already in the system,” Dr. Sample explained. “That’s how AI is trained – by giving it huge chunks of existing text or existing images. So what the con is, is that anytime you want something unique or breaking out of the existing paradigms, AI can’t do that. It just replicates whatever’s in the original system and if there’s biases in the data, it’s going to come up in whatever AI generates [...] it really isn’t a source of originality and creativity.”

Although ChatGPT and similar programs will continue to evolve and advance, Associate Professor of Art John Corso-Esquivel, who utilizes AI as a teaching tool, believes that their success will dwindle because these programs lack humanity. He said that technology will continue to advance to “unimaginable measures,” yet it will never be able to have “these innate human senses.”

Dr. Sample also warned against the false attribution of human intuition, creativity, and motivations to AI.

“There is no human in there, it is just a statistical model. I think humans, evolutionarily speaking, tend to anthropomorphize things [...] the number one thing that I would say is, it’s important to dispel that it is not sentient, and that there are no human factors at work there. The AI has no experience of the world. It just has this huge data set that you gave it.”

Corso-Esquivel also highlighted that AI lacks morality.

As the world becomes increasingly digitized, so do the tools that we use to learn and educate ourselves. In recent years, artificial intelligence has played a crucial role in this shift, with many institutions incorporating it into their curricula to enhance student learning. One example of this is Davidson College, where Chat GPT, a large language model trained by OpenAI, has been making waves in the liberal arts experience.

The question of morality and human biases within artificial intelligence is an important and complex issue that continues to challenge the field of AI. As AI systems become increasingly sophisticated and integrated into various aspects of our lives, it is essential to consider how ethical principles and human biases may impact their development and use.

In Dr. Sample’s view, one of the biggest challenges facing AI developers is the risk of perpetuating existing human biases in their systems. He notes that AI algorithms are only as unbiased as the data they are trained on, and if that data contains biases, those biases will be reflected in the AI’s output.

Furthermore, Dr. Sample highlights the importance of designing AI systems that prioritize ethical principles such as transparency, accountability, and fairness. He emphasizes the need for ongoing dialogue and collaboration between AI developers, ethicists, and other stakeholders to ensure that these principles are upheld throughout the development and use of AI.

“It can tell us what’s moral based on what it has been fed to believe is immoral, but it can’t really produce morality,” Corso-Esquivel said. “I just don’t see it being able to answer these core unsolvable questions of what makes a human human. [For example] What is post-humanity? How do we treat each other better?”

The world still needs readers to engage with information. Many in academia suggest there will still be value and merit in discussing and working to answer these fundamental questions, perhaps even more so as chatbots continue to develop.

Another way humans remain deeply important to this process is through their own recognition of biases.

“AI relies on human beings to learn,” Wren Marks ‘23 said.

According to Mark Sample, a professor of Digital Studies at Davidson, Chat GPT has become an integral part of the college’s curriculum, allowing students to engage with a wide range of subjects in a unique and innovative way. “The possibilities for using Chat GPT in the liberal arts are almost limitless,” says Sample. “It allows us to explore complex topics and concepts that might have been difficult to access in the past.”

One area where Chat GPT has made a significant impact is in the field of art. John Corso-Esquivel, an Associate Professor of Art at Davidson, explains that Chat GPT has allowed students to interact with art in a more immersive and engaging way. “We use Chat GPT to create interactive art experiences that allow students to engage with artworks on a deeper level,” says Esquivel. “It’s a game-changer in terms of how we teach art and how students learn.”

In conclusion, the question of morality and human biases within artificial intelligence is a complex issue that requires careful consideration and ongoing attention. As Dr. Sample notes, it is essential to prioritize ethical principles and work to mitigate the impact of human biases in AI systems to ensure that they serve the common good and benefit society as a whole.

Annabelle Ross ‘24, the Head of Davidson’s Honor Council, notes that Chat GPT has also had an impact on academic integrity. “One of the biggest concerns with AI in education is the potential for cheating,” says Ross. “But we’ve found that Chat GPT has actually helped to promote academic integrity by encouraging students to think critically and engage with the material more deeply.”

Wren Marks ‘23, an Art History major at Davidson, has experienced this firsthand. “When I first heard about Chat GPT, I was skeptical,” says Healy. “But once I started
