
A tool like any other
The biggest impact of ChatGPT in the classroom has been on tedious, ineffectual writing exercises, such as: “What are five good things and five bad things about biotech?” The fact that chatbots have gotten good at this is great news. Fortunately, that’s not how most of us teach at Princeton, so the impact so far has been relatively mild.
In general, though, how should we respond when a skill that we teach students becomes automatable? This happens all the time. The calculator is a good example. In some learning contexts, the calculator is prohibited, because learning arithmetic is the point. In other contexts, say a physics course, computing tools are embraced, because the point of the lesson is something else. We should use the same approach for AI. In addition to using AI as a tool in the classroom when appropriate, we should also incorporate it as an object of study and critique. Large language models (LLMs) are accompanied by heaps of hype and myth while so much about them is shrouded from view, such as the labor exploitation that makes them possible. Class discussions are an opportunity to peel back this curtain.
Students should also keep in mind that ChatGPT is a bullshit generator. I mean the term bullshit in the sense defined by the philosopher Harry Frankfurt: speech intended to persuade without regard for the truth. LLMs are trained to produce plausible text, not true statements. They are still interesting and useful, but it's important to know their limits. ChatGPT is shockingly good at sounding convincing on any conceivable topic. If you use it as a source for learning, the danger is that you can't tell when it's wrong unless you already know the answer.
Arvind Narayanan is a professor of computer science, affiliated with the Center for Information Technology Policy. He can be reached at arvindn@cs.princeton.edu.