
ChatGPT from an IBP perspective

The widespread use of ChatGPT can subtly suggest that its operation is as straightforward as its interface implies. For the most part this holds true, yet its ease of use can also induce complacency. When ChatGPT was initially introduced around six months ago, an unexpected consequence was a decline in exam scores within Danish high schools. This pointed to a disheartening reality: ChatGPT, used without complementary critical thinking, is likely to fall short. Although it struggles with generating content from scratch, ChatGPT excels as an assistant when used effectively. In this article, we'll set aside the pernicious implications of these AI models for a government bent on justifying further education in the name of optimizing the workforce. Instead, we'll focus on some key tips for effective ChatGPT use from an IBP perspective.

ChatGPT 3.5 vs. ChatGPT 4.0


When ChatGPT 4.0 was released, the progress it represented was astounding. ChatGPT now excels, rather than barely passes, on exams testing complicated subject matter such as the LSAT or AP tests. A similar experiment can be run on the kind of exams we ourselves sit. Fed a Financial Accounting midterm, ChatGPT went from around half correct with version 3.5, barely passing, to 26/30 correct with 4.0, just edging into a 12 with the curve we were provided. As I hope these examples illustrate, the 4.0 version is far better for all purposes. Its only two drawbacks are that it sits behind a paywall and that it is slower. The slowness is especially noticeable if you throw complicated reasoning problems at it, say some advanced linear algebra. More relevant for IBP, however: in subjects like International Economics and Macroeconomics, which often require multi-step reasoning that jumps between models, the likelihood of ChatGPT arriving at the right answer is much higher with 4.0. It is with questions of a more conceptual kind that one ought to be vigilant about how one uses ChatGPT.

ChatGPT Needs Sufficient Context

On its own, without any aid, ChatGPT is terrible at generating text from the ground up. Asking it for explanations of difficult concepts is unlikely to yield any impressive results. The reason is simple: LLMs don't have any weighting function for "truth"; they don't possess the same reasoning capacities as humans. A priori, a model treats any two pieces of data alike when it updates. We, by contrast, have some elementary capacity to reason that lets us judge the information we're parsing. This is where the user and the prompts they write become important: we have to provide the model with some kind of baseline it can work from. Concretely, that could mean pasting in large sections of dense academic papers that define the concept at a level way past anyone's understanding. That is fine, because ChatGPT is incredible at dumbing it down for us. As long as we've given it "good" information and ask it to base its responses on it, the likelihood of a fruitful answer is much greater and the risk of hallucination goes down. Obviously, it's not always worth the hassle, but it's a principle worth keeping in mind when possible. With this in mind, there are some things ChatGPT becomes an obvious candidate for:

Exam Revision

Ask it questions and see how it would go about answering a problem. Here ChatGPT 4.0 has a significant edge, so if possible, try to use the newer version. It can clarify concepts; GPT 4.0 especially can function like a hyper-powered search engine and provide succinct answers you would otherwise search far longer for. You can also paste in parts of the syllabus and have it generate mini-quizzes for you.

A lot of threads are left dangling throughout a course. Maybe some topic wasn't delved into as deeply as you would have liked; here, again, ChatGPT can be pretty good. You can paste in sections of difficult papers to check your understanding, or even ask it which papers are important and what they contributed. And if you have any passages you've written yourself, it's sometimes truly incredible how good it is at reformulating them and breaking down their complexity. (A minimal sketch of this "paste in the source, then ask" pattern follows below.)
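To make the grounding principle concrete, here is a minimal sketch of the "paste in the excerpt, then ask" pattern, written against OpenAI's chat-completions endpoint rather than the chat window; the structure is the same either way. The excerpt, the question and the model name are placeholders of my own choosing, not a prescription.

```python
# A minimal illustration of the "give it a baseline" principle: paste the dense
# source text in first, then ask the question against it. The excerpt, question
# and model are placeholders; the same structure works typed into the chat UI.
import os

import requests

EXCERPT = """<paste a dense passage from the paper or textbook here>"""
QUESTION = "Explain the concept above as if to a first-year IBP student."

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [
            # Telling the model to stay inside the supplied material is what
            # keeps the hallucination risk down.
            {"role": "system",
             "content": "Answer only on the basis of the excerpt the user "
                        "provides. If the excerpt does not cover something, "
                        "say so instead of guessing."},
            {"role": "user",
             "content": f"Excerpt:\n{EXCERPT}\n\nQuestion: {QUESTION}"},
        ],
    },
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

The important part is the instruction to answer only from the supplied excerpt; that is what provides the baseline discussed above.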

More Advanced Uses

OpenAI also provides an API service, though it's limited to the 3.5 version and costs a few fractions of a cent per request. It can usually serve as a decent stand-in for having someone look over your work in person. (Using the API is edifying in its own right, just for the deeper insight into the parameters it accepts when responding to a request.) For example, if you use Notion for your notes, you can combine the Notion API with OpenAI's to have it work through your notes and e-mail you daily questions on specific topics (or however you wish to categorise them) so they truly sink in; a rough sketch of this follows below. When working in library-heavy languages like R and Python, pasting in the relevant documentation can be incredibly helpful if the snippets of code ChatGPT provides keep failing even after you feed the error messages back to it. This is especially the case when the required syntax keeps changing, as is common for API requests or newer, smaller libraries. By providing the relevant context, we're effectively circumventing its training-cutoff constraint.
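As a rough sketch of the Notion idea above: the snippet below assumes a Notion integration token, a notes database and an SMTP account, all of which are placeholders here. The exact property names in your own database will differ, and error handling is left out for brevity.

```python
# A sketch of the Notion + OpenAI + e-mail idea described above, not a finished
# tool. The database ID, addresses and SMTP host are placeholders; adjust the
# property handling to however your own notes database is structured.
import os
import smtplib
from email.message import EmailMessage

import requests

NOTION_TOKEN = os.environ["NOTION_TOKEN"]
OPENAI_KEY = os.environ["OPENAI_API_KEY"]
NOTES_DATABASE_ID = "replace-with-your-database-id"  # placeholder


def fetch_note_titles() -> str:
    """Pull the page titles out of a Notion database (simplified)."""
    resp = requests.post(
        f"https://api.notion.com/v1/databases/{NOTES_DATABASE_ID}/query",
        headers={
            "Authorization": f"Bearer {NOTION_TOKEN}",
            "Notion-Version": "2022-06-28",
        },
        json={"page_size": 50},
    )
    resp.raise_for_status()
    titles = []
    for page in resp.json()["results"]:
        for prop in page["properties"].values():
            if prop["type"] == "title" and prop["title"]:
                titles.append(prop["title"][0]["plain_text"])
    return "\n".join(titles)


def generate_question(topics: str) -> str:
    """Ask GPT-3.5 (the model exposed through the API) for one quiz question."""
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {OPENAI_KEY}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [
                {"role": "system",
                 "content": "Write one short exam-style question based only on "
                            "the course topics the user lists."},
                {"role": "user", "content": topics},
            ],
        },
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


def email_question(question: str) -> None:
    """Send the question to yourself; all SMTP details are placeholders."""
    msg = EmailMessage()
    msg["Subject"] = "Daily revision question"
    msg["From"] = "notes-bot@example.com"
    msg["To"] = "you@example.com"
    msg.set_content(question)
    with smtplib.SMTP("smtp.example.com", 587) as server:
        server.starttls()
        server.login("notes-bot@example.com", os.environ["SMTP_PASSWORD"])
        server.send_message(msg)


if __name__ == "__main__":
    email_question(generate_question(fetch_note_titles()))
```

Scheduled once a day with cron or a similar tool, a script along these lines gives you the daily revision question described above.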

Conclusion

The overall conclusion I want to highlight is that, for all of its hype, one should be very wary of using ChatGPT to generate information from scratch. ChatGPT 3.5, especially, seems borderline unusable for anything even slightly advanced. What ChatGPT 4.0 is good at is processing, treating and modifying good, pre-existing data. For all of its supposed impressiveness, when one asks it to write entire passages in the style of, say, Ernest Hemingway, the result is tawdry: it carries the glamour of his style, but upon closer inspection the writing is shallow and the metaphors soulless. ChatGPT is best used as an assistant, and a particularly effective one at that. It can synthesise, break down and expand what you give it; you are effectively deciding what is important for it. In its moments of creation, though, it remains underwhelming. A competent user, however, can create something better in conjunction with it.

BY OSCAR JULIUS ADSERBALLE
