
AI: Tool or Terror?


By Jeremy Betz

From a dynamic workforce to artistic image generation to college-level writing, it can feel like artificial intelligence is advancing into every aspect of life. Boston Dynamics’ robots, Stability AI’s Stable Diffusion image generator, and OpenAI’s ChatGPT are improving at alarming rates, and there is a greater and greater push to understand how they will affect our world. As it stands, there are three broad realms of AI innovation up for debate: labor, images, and text. Each presents its own unique challenges, but the common thread remains: artificial intelligence encroaches on human activity in ways we have yet to imagine, and whether that is good or bad remains to be seen.


Of the three, labor presents probably the most quantifiable problem. In a recent video, Boston Dynamics showed how its humanoid robot “Atlas” can assist in construction work. Throughout the video, Atlas displays a number of skills: it manipulates its environment to achieve goals, it carries a toolbag while in motion, and it successfully throws the toolbag to the worker who requested it. To top it all off, it jumps back down with a 180-degree spin and fist-pumps in celebration.

Boston Dynamics has been developing what it calls “athletic intelligence,” the ability of robots to react the way humans and other living beings do. Videos showing how its quadrupeds respond when kicked, for instance, demonstrate fluid responses and movements beyond a standard, autonomously running program. And movement is the only purpose these androids are built for.

Another of Boston Dynamics’ droids, “Stretch,” best exemplifies the concerns of blue-collar workers. These machines are designed for loading and warehouse jobs, and they are extraordinarily effective at them. These heavy lifters have all but cemented blue-collar fears of automation replacing jobs, an economic reality many will soon have to grapple with. And it’s not just simple labor, either. With Atlas showing continuous improvement, warehouse employees aren’t the only ones in trouble. Housekeepers, beware: there’s a new and improved Roomba.

Artificial intelligence, it seems, does pose a threat to economic stability. In a survey by McKinsey & Co., a consulting firm, 62% of executives said they would need to retrain at least a quarter of their workforce; in the United States, that figure rises to 64%. McKinsey estimates that as many as 375 million workers worldwide could face this kind of hardship. So, frankly, it seems that being a Luddite makes sense here.

And speaking of Luddites, if anyone currently represents the caricature of raging anti-tech sentiment, it is those who challenge AI-generated images. In recent months, many image-sharing platforms have faced a spate of anti-AI sentiment. ArtStation, one of the most popular such sites, decided to remove images that protested AI generation, claiming that they violated its Terms of Service. Which terms were violated, however, was left unclear. The protest stems from a feeling that image generators are unethical in their training methods, in which the model “scrapes” the internet for images to build a dataset. In other words, critics argue, the material they create is completely unoriginal.

The images collected come from across the internet, at a scale that can be hard to comprehend. Consider, for a moment, just typing “art” into your search bar. Google returns 12.89 billion results; that is the store of material programs such as Stable Diffusion and other image generators have at their disposal. Some groups, however, have taken action to protect their content against such use.

Getty Images, the self-proclaimed “world’s best photo library,” has sued Stability AI, the company behind Stable Diffusion. In January, the stock image website claimed that Stability AI bypassed its licensing process, instead obtaining the images illegally for the commercial benefit of its product. The suit’s outcome is yet to be seen, but it is guaranteed to have serious ramifications.

In addition to the ethics of training, other criticisms target the generated images themselves. In an anonymous interview, a Pingry artist said they felt AI images start out as a finished product, whereas human art adds originality to the piece along the way. When asked about the likelihood of AI putting human artists out of business, the student said they felt human art would still be desired over AI images. The student does both photography and drawing, through Pingry and outside of school.

Finally, on to the third realm of AI innovation: text generators like ChatGPT. These chatbots, which create responses based on a given prompt, have given rise to concerns over authenticity and cheating. Some concerned individuals have even created programs dedicated to detecting AI-generated text.

Princeton student Edward Tian created a model he calls GPTZero, which detects AI writing based on two properties: “perplexity” and “burstiness.” Supposedly, the more GPTZero is perplexed by a text (that is, the harder the text is for a language model to predict), the more likely it is that a human wrote it. “Burstiness,” meanwhile, is used to identify human text through sentence length: humans are more likely to write sentences of varying lengths, while AI chatbots tend to write sentences of uniform length.
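To make the “burstiness” idea concrete, here is a minimal sketch in Python. It is not Tian’s actual code, and the scoring and sample passages are made up for illustration; it simply rates a passage by how much its sentence lengths vary.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Score a passage by how much its sentence lengths (in words) vary."""
    # Split on end-of-sentence punctuation -- a rough heuristic, good enough for a demo.
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    # Standard deviation of sentence length: perfectly uniform sentences score zero.
    return statistics.stdev(lengths)

human_like = ("Short. Then a much longer, winding sentence that rambles on "
              "for quite a while before it finally stops. Brief again.")
uniform = ("Every sentence in this passage is the same length. "
           "Every sentence in this passage is the same length. "
           "Every sentence in this passage is the same length.")

print(burstiness(human_like))  # high variation -> "human-like" under this heuristic
print(burstiness(uniform))     # near zero -> flagged as machine-like
```

Under this heuristic, uniform sentence lengths score near zero, which is the tell-tale sign of machine-generated text; real detectors combine this with other signals.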

However, not everyone is so concerned. Archie McKenzie, a fellow Princeton student and a teaching assistant for the Computer Science 109 course, said of ChatGPT, “If you use it to do your homework, you’ll get a D.” In an interview, Mr. McKenzie also emphasized the risk of false positives with programs such as GPTZero: if a student’s honest writing sets off Zero’s alarm bells, the resulting accusation can easily ruin that student’s academic career. In short, the risks of overestimating ChatGPT and other chatbots are great, whereas the risks the chatbots themselves pose are minimal.

But this may be confusing. How can programs capable of imitating human language pose so little risk?

Well, it’s complicated. ChatGPT and similar programs, known as “large language models,” are trained on as much of the internet as is publicly available. Think of Wikipedia as a benchmark: according to Mr. McKenzie, it made up only 3% of ChatGPT’s training data. During training, ChatGPT scoured the web to learn which words appear alongside which other words. Once training is complete, the software creates responses based on probability: how likely words are to appear around other words. This is how ChatGPT can produce a list of citations of studies and people that appear real, yet have in fact been fabricated. Mr. McKenzie described this inaccuracy as the AI “hallucinating,” making things up that seem real.
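As a rough illustration of that “words around other words” idea, here is a deliberately tiny sketch in Python. It is not how ChatGPT is actually built (real large language models use neural networks trained over vastly more context), and the toy corpus is invented for the example; it simply shows how fluent-sounding text can fall out of nothing more than co-occurrence statistics.

```python
import random
from collections import Counter, defaultdict

# A toy corpus standing in for "as much of the internet as is publicly available."
corpus = ("the robot lifts the box and the robot carries "
          "the toolbag and the robot throws the toolbag").split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def next_word(word: str) -> str:
    """Pick a next word in proportion to how often it followed `word` in the corpus."""
    counts = following[word]
    if not counts:  # dead end: nothing ever followed this word during training
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a continuation one word at a time, purely from probability.
sentence = ["the"]
for _ in range(8):
    sentence.append(next_word(sentence[-1]))
print(" ".join(sentence))
```

Every choice is just “what usually comes next,” so the output can read smoothly while asserting nothing the model has actually verified, which is exactly how a convincing-looking but fabricated citation comes about.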

In one experiment, Mr. McKenzie tested ChatGPT on the 2021 exam for his course. On questions that asked for factual information, the AI did very well, regurgitating information verbatim. But when questions required logical reasoning, ChatGPT failed across the board. In all, the chatbot beat out only one student from the course. It may sound impressive that a program did better than a student at one of the top colleges in America, but as Mr. McKenzie said, “you can’t control for people being stupid.” In all, ChatGPT and similar programs don’t seem to pose much of a threat as of now. The molehill has been made into a mountain, but a proper understanding of how these models truly work cuts that looming shadow back down to size.

Similarly, AI in general seems to pose less of a threat than people think. In practical application, only Boston Dynamics’ advancements in working robots are of serious concern. But in the realms of logic, reason, and creativity, humans reign supreme.
