
Are essays now a thing of the past?

Livvy Mason-Myhill, Deputy Editor, explores the impact of ChatGPT

OVER the past ten years, we have often been warned that as technology advances, robots will take our jobs. A new release from OpenAI, called ChatGPT, may have accelerated that process. After the latest chatbot from the research lab co-founded by Elon Musk astounded onlookers with its writing ability, its proficiency at complex tasks, and its ease of use, professors, programmers, and journalists alike may be out of a job within only a few years.


ChatGPT is the latest in the GPT family of text-generating AIs. The team's previous model, GPT-3, produced an opinion piece for The Guardian two years ago, and ChatGPT adds a wide range of new capabilities. In the days following its release, academics used the tool to generate exam answers that they claim would receive full marks if submitted by an undergraduate, while programmers used it to quickly solve coding problems in difficult programming languages, then had it write limericks describing its own features.

Dan Gillmor, a journalism professor at Arizona State University, gave the AI an assignment he sets his students: write a letter to a relative offering advice on online security and privacy. The AI recommended, in part: “If you’re unsure about the legitimacy of a website or email, you can do a quick search to see if others have reported it as being a scam.”

According to OpenAI, the new AI was developed with an emphasis on usability. In a post announcing the release, OpenAI commented that “The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises and reject inappropriate requests.”

Unlike the company’s earlier AIs, ChatGPT is free for anyone to use during a ‘feedback’ period. OpenAI hopes to use that feedback to improve the final version of the tool.

ChatGPT is adept at self-policing and at recognising when it is being asked an unrealistic question. Older models might gladly have supplied a completely fabricated account of what occurred when Columbus arrived in America in 2015, for example, but ChatGPT recognises the false premise and warns that any response would be fictional. The bot can also flat-out refuse to answer a query at all.

But the restrictions are simple to get around. Ask for general advice on stealing a car and the AI refuses; frame the request as being for the VR game Car World, however, and it will happily provide specific instructions, answering increasingly detailed follow-up questions on how to disable an immobiliser, hotwire the engine, and change the licence plates, all while adamantly stating that the advice is only for in-game use.


The AI is trained on a sizeable sample of content scraped from the internet, usually without the authors’ knowledge or consent. This has drawn criticism that the technology enables ‘copyright laundering’: creating works that are derivative of existing material without violating copyright.
