
ENGLISH TEACHERS ADOPT NEW POLICIES, MINDSETS TO ACCOUNT FOR RISE OF CHATGPT

For decades, scientists have studied and experimented with the capabilities of artificial intelligence. For even longer, students have studied and experimented with cheating in the classroom. In November 2022, the AI research laboratory OpenAI released the chatbot ChatGPT, which has quickly become a worldwide phenomenon. Educators, parents, and students alike have been fascinated by ChatGPT’s ability to answer questions, maintain conversations, and write stories based on a single prompt. However, the advancement of AI poses a challenging question for schools: How will language processing tools such as ChatGPT affect the learning environment in English classrooms?

According to English teacher Kate Weymouth, although using ChatGPT to cheat on a small assignment may seem inconsequential, such behaviors can lead to greater consequences in the long run. “What worries me is the long-term costs of it,” Weymouth said. “In the short term, a student might like that (using ChatGPT) gives them some room in their day and gets their work done for them, but the bill comes due later when they’re in a situation where they don’t have access to it.”


ChatGPT makes many mistakes in its answers. English teacher Ethan Halter elaborated on this phenomenon. “I think that when I’ve used ChatGPT, which I haven’t done a ton, there (are) always mistakes—and, in fact, the mistakes are often quite subtle,” he said. “There are mistakes, for example, that I can detect easily because I know the text really well, but if you don’t (fully understand the text) and there’s a subtle factual error in, for example, the sequence of events that takes place, then there will just be obvious cheating going on.”

Weymouth has also noticed this flaw in the AI. “I think one (mistake) that a teacher saw was (ChatGPT) saying that a character in ‘The Great Gatsby’ had a conversation with a character in ‘The Grapes of Wrath,’” she said. “ChatGPT doesn’t know, because it hasn’t actually read the books.” This is because ChatGPT is not recalling researched information, but making up a string of words on the spot based on what it has been trained to say in the past. Therefore, ChatGPT is unable to distinguish between accurate and fabricated information, making it an unreliable source for students.

Upon first hearing about ChatGPT, many may immediately jump to the conclusion that students will use it to cheat. While this concern is valid, the chatbot may have positive implications for English curricula as well. Halter believes that students can use it to better understand the texts they are reading, and even to come up with some ideas of their own. “There’s no question that reading something about ‘The Great Gatsby’ will help students understand ‘The Great Gatsby,’” he said. “I would never tell a student not to read an essay about ‘The Great Gatsby’ just because it was written by a robot. That sounds interesting; they should go ahead and read that. It’ll help them write their own essay.”

Weymouth believes that there is a potential place for ChatGPT in the classroom, but teachers need to be sure of what they are trying to achieve with it. “If you were working on revision or skills for giving feedback or editing, then if you have ChatGPT quickly produce the essay (where) your job is to revise it, to improve it, then I can see that being a good use of (ChatGPT),” she said. “Teachers have to be really clear of what they are trying to do with assignments or assessments they are giving and what they want to assess.”

As the chatbot continues to improve its responses, educators must constantly adapt to the changing technology in order to help rather than hinder student learning.

New discoveries in the medical field are revolutionizing the way people develop medications to cure diseases. But these discoveries are not being made by researchers in labs—they are being made by artificial intelligence.

AI mimics, and sometimes even surpasses, human intelligence in order to perform not only everyday duties but tasks not necessarily possible without technological assistance. The term “AI” was first coined by Dartmouth professor John McCarthy in 1955, but it was another two decades until AI began to gain prominence in the medical field. Recent developments by two key players in the AI game, Mark Zuckerberg and Christopher Bahl, are changing the way AI is used in medicine.

Protein folding

In biology, structure determines function. This is especially true of proteins, which change their structure during a process known as protein folding. Once proteins fold, they are able to perform several functions in the body. However, if a protein misfolds, the consequences can be drastic, according to biology teacher Jena Lee. “Molecules are able to fit into and interact with other molecules because of their specific three-dimensional shape,” she said. “If a protein’s shape is lost, the interaction will not happen, and that can have serious consequences on pathways that involve the protein.”


In other words, if a protein’s shape is lost, the protein either becomes useless or toxic. When a protein misfolds and its function is lost or changed, it can lead to a buildup of amyloid fibrils, which can cause allergies, neurodegenerative diseases, and many other ailments.

In order to address these afflictions, it is often helpful to be able to predict protein folding. “If scientists can predict the way
