
PARSONS’ EXPLORATIONS OF ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING REFLECT THE SCHOOL’S COMMITMENT TO HUMAN-CENTERED RESEARCH AND PRACTICE



by Sarah Fensom
Rarely do technologies spark passionate debate and interest the way artificial intelligence (AI) and machine learning (ML) have in recent months. Although these tools have been developed over the course of decades, their growing prominence in public discussion makes the field seem new and unexplored. The Parsons community has been experimenting with AI and ML for years, though, applying our school’s characteristic blend of curiosity and criticality. Read on to learn about students, alumni, and faculty who are coding creativity and wellness into the new technologies shaping daily life.
At Parsons, the conversation on digital tools has always been open and framed by the creative possibilities and complications that technology brings. Today’s engagement with AI and ML is no different. As designers, artists, and entrepreneurs in our community consider introducing new applications into their practices, they’re asking important questions. Their line of inquiry is not so much about what buzzy programs like ChatGPT or Midjourney can or can’t do, but about how they can be used to develop—or advance—a creative concept or address a design challenge. They’re asking how tech can help us envision a more sustainable and socially just future for New York City, its creative industries, and the world as a whole. Or how it can foster innovation to connect individuals more closely. And they’re asking how the many biases and limitations of algorithmically driven tools can be circumvented, freeing creatives to pursue what is important to them, including progress toward a more inclusive world.
An essential part of Parsons’ examination of these emergent technologies is intensive research, which is being carried out in part by a new three-year partnership—the LG AI Research & TNS Parsons Collaboration—with the electronics corporation LG Group. The research focuses on investigation rather than product development, but the initiative is not all theoretical, as many of the exciting AI- and ML-related projects unfolding on campus demonstrate. Students working on the partnership are harnessing the generative possibilities of play to discover ways these technologies can promote collaboration. From free-form explorations, unexpected results emerge. In a course on the future of fashion, for instance, design and photography students exploring AI developed a bubble that can help long-distance couples communicate. Dubbed the LoveSphere, the project is described below, as are projects involving tech-assisted music, anti-anthropocentric speculative fiction, and others that raise a critical question: What do we want our future to be?




Introducing creatives to AI—and acknowledging concerns surrounding the technology—is the goal of Creativitywith.ai, a website developed by MFA Transdisciplinary Design ’22 alums and part-time Parsons faculty members Julienne DeVita and Henry Lee. Their homepage prompt is designed to put site visitors at ease. “Are you curious about AI? Uncertain? Intimidated? Excited? Unsure? Jazzed?” it asks. “Regardless,” the site assures the visitor, “you’re in the right spot.” The interface guides users to a library of AI programs. GIFs hinting at the programs’ functionalities form a grid. Included are applications that turn speech into animated drawings (Scribbling Speech), mimic the sounds of multiple instruments playing at once (Nsynth: Sound Maker), allow users to create multi-user apps without knowing anything about code (Bubble.io), and more. With its inviting interface and growing archive of simple descriptions, Creativitywith.ai seems like an artist’s tool kit aimed at bringing ideas to life.
DeVita and Lee, along with their collaborator, Jeongki Lim, an assistant professor of strategic design and management, began the project in spring 2021 in a course called Creativity, AI, and Social Justice. “The project was based on the desire to understand the applications of AI and creativity for ourselves and for students,” Lee says. Accessibility was paramount. “The aim is to show that no matter who you are, you can engage with AI—you don’t need to know how to code or have a computer science degree,” says DeVita. With novices in mind, the designers asked themselves how they could in effect say, “Here’s the front door, here’s the ‘Welcome’ rug,” says Lee. Relying on what Lee calls design basics—“simple colors, lots of white space”—the team got the site running in a matter of weeks. Since then, DeVita notes, “over 90,000 users have found Creativitywith.ai just through Google search.”
The site’s main goal is to amplify users’ own creativity. “Creativity isn’t coded,” DeVita says. “There will always be space for human creation, because creativity is not linear. That gives the site its optimistic feel; we’re trying to encourage a sense of excitement about AI and promote education instead of fear.” For DeVita, Lee, and Lim, what is essential for creatives working with AI is the act of reframing the technology as a collaborator instead of a tool. Collaboration is made possible by AI technology’s responsiveness to a designer’s input and often yields surprising and unforeseen results. The artist or designer plays a critical role in deciphering and ultimately creating meaning from AI’s outputs. “That switch inspires the human collaborator to think critically. It shows that AI can’t exist without us, and that’s an important perspective to have right now, given the anxiety about it replacing us,” DeVita says. She nonetheless acknowledges an understandable concern. “AI will replace some jobs,” she concedes, “but humans will continue to adapt, and we should focus on how.”
Exploring the ways we can continue to adjust to the expanding technology landscape is central to Parsons’ LG partnership, which Lim is overseeing. Through academic research, scholarly exchanges, and exhibitions, the LG collaboration supports exploration of AI technologies and their application to creative industries, many of which are based in New York. “This is a new type of industry partnership,” Lim says. “We’re not here to give deliverables or to launch classes—we’re working with LG to develop new ways of looking at and using technologies, and through that investigation, together we’re shaping how technology will be developed.”
Adam Brown, The New School’s vice provost for Research, notes that Parsons is uniquely well equipped to examine the development of evolving digital tools while addressing the “critical ethical and cultural questions that must be asked in relation to technical advancement.” The advantages of this partnership are multifarious, he says: “LG benefits from the university’s intellectual input, and our students gain numerous educational opportunities. Such hands-on learning can be transformative as our students pursue jobs in this field and ultimately become innovators themselves.” Cynthia Lawson Jaramillo, dean of Parsons’ School of Design Strategies, adds, “This partnership demonstrates the value of exchanges between technologists and strategists operating in a values-driven learning ecosystem at creativity’s cutting edge.”
The LG partnership began in September 2022 with a symposium that brought together leadership from LG AI Research and Parsons faculty and students to delve into the most pressing questions relating to AI and the creative community. Lim also staged a two-day hackathon with students, many of whom were new to AI. Together they tested a prototype of a text-to-image generating tool that LG has been refining. Lim notes that the tool is not unlike Midjourney or DALL-E, which generate digital images from language descriptions or prompts, but rather than yielding just one or two images, LG’s tool produces a number of examples from one prompt. At the hackathon, students of various design disciplines developed original projects. Central to them all were Lim’s guiding thoughts on when and how designers should bring AI into their practices. “If we create a tool where students are focusing on using AI to make their final product, that will prevent them from making their own work,” Lim says. “But if AI tools are applied during the ideation stage to help students crystallize concepts, it can enhance the students’ creative learning and endeavors.” The projects initiated during the hackathon established a foundation for the partnership’s academic research; many of them are ongoing.

Last year, Lim co-taught the course AI, Metaverse, and the Future of Fashion with Abrima Erwiah, director of the Joseph and Gail Gromek Institute for Fashion Business. Erwiah says that AI accelerated students’ process of envisioning their projects—imaginative, design-focused, socially beneficial proposals. “Because AI can quickly organize information, the technology allowed students to take all these data points, question them, and identify where to begin,” says Erwiah. “In just an hour or two of class, students were able to iterate ideas and translate them into visuals with ChatGPT or other tools, which helped them broadly imagine many possible ideas.”
Erwiah notes that projects concerning mental health were a priority for students because of the pandemic. In the class, third-year Parsons students—Chenxiao (Nini) Li, BFA Communication Design; Liwen Sang, BFA Fashion Design; and Siran (Eva) Wang, BFA Photography—developed LoveSphere with couples in long-distance relationships in mind. Their project addresses both the mental strains of separation anxiety and the realities of a global society in which virtual connections can supplant real-world relationships. The trio describe LoveSpheres as life-sized transparent balls used by distance-separated couples, with each member occupying their own bubble. Inside, the LoveSphere evokes sensations of touch and allows for communication with the remote partner. Inspired by “puffer jackets and air-filled clothing,” Li used Midjourney to render striking images of hypothetical users occupying the large diaphanous spheres envisioned by the team.
To produce feelings of comfort, LoveSphere would rely on multiple AI technologies, including Clo3D, a 3D modeling and simulation software taught at Parsons. An inner layer of the sphere would shrink and expand to embrace the person sitting within, simulating touch—“like a hug,” Sang explains. Users would wear VR headsets to experience a fully immersive session with a remote partner. In virtual reality, they could construct and explore a world together, talking, playing games, or even “building their own love house with furniture or NFT artwork,” Sang says. Though Wang says real-time communication—like a FaceTime exchange—is their “end angle,” a ChatGPT function in the LoveSphere could help couples revisit earlier conversations when a partner is unavailable. “We’re thinking about couples who live in different time zones,” says Wang.

The goal of providing emotional support anchors recent work by other members of the Parsons community, including 2023 MS Strategic Design and Management graduates Andrés Galicia Hernández, Ifah Pantitanonta, and Akanksha Shrivastava. Ai.ly, their joint thesis, is designed to help young people find jobs, one of life’s most stressful pursuits. Begun in the Design Research class taught in fall 2022 by Jack Wilkinson and developed this past spring in assistant professor of strategic design and management Rhea Alexander’s Capstone Studio 2 course, Ai.ly is a voice-enabled AI chatbot that focuses on job interview preparation. Through research, the Ai.ly team found that new graduates often feel unsupported during this phase of the job-finding process. A business opportunity emerged.
The strategic designers determined that AI could be employed in motivational interviewing, a micro-counseling technique known as OARS (Open questioning, Affirming, Reflection, and Summarizing). The OARS method helps individuals discuss and manage activities leading to positive behavioral change and constructive habit formation. Encouragement is at the heart of Ai.ly. “Everyone needs a little push sometimes,” Shrivastava says. “Ai.ly is designed to be your trusted interview companion and boost your confidence.” The AI-powered platform, which functions as a Chrome plug-in with a personalized dashboard, scans job postings, generates relevant interview questions, and drills applicants on answers. Recruitment sites today often employ AI tools to assess candidates and cull résumés; Ai.ly offers the same capacity to jobseekers, allowing them to quickly find good leads. “We want to empower candidates in their job search with AI,” says Galicia Hernández.

Ai.ly is now in its implementation phase, as the designers develop the product, gather input, and design the pilot. “In the next few months,” Shrivastava says, “we’ll begin refining Ai.ly, incorporating feedback, and seeking investment to bring the product to market.”

New technology is also being explored on campus on a systems level. The internal workings of New York City were central to ML and the City, a class co-taught last spring by Sven Travis and Christopher Kirwan. Travis, co-director of the MFA Design and Technology program at Parsons’ School of Art, Media, and Technology (AMT), and Kirwan, an AMT part-time faculty member whose work focuses on urban planning, guided students in applying easily accessible ML tools in projects using city data on urban media and the urban environment. “We gave students the flexibility to use data and the flow of information to create new models,” Kirwan says. Kirwan and Travis encouraged students to use their imaginations in applying ML to urban challenges. “We have a lot of problems in New York City that machine learning should help us solve, but we didn’t want students to be limited by that,” says Kirwan.


For Travis and Kirwan, exploration is key. “The advent of computer networks has made NYC more sophisticated in its ‘self-connectedness,’ and has enabled us to see the city as a living organism,” Travis says. Kirwan challenges students to use the living city metaphor as a way to think about data, with a “blood flow” of information driving the systems at work within the urban organism, and then model different realities and scenarios. Travis notes that a major benefit of the class is students’ access to urban data sets, which can inform their projects.


To illustrate the point, Travis cites a class project by Gabriel Lee and Guan-Hao Zhu, both first-year MFA Design and Technology students, and Sayidmurad Sayfullaev, a senior in the BFA Architectural Design program. Inspired by the conservation-focused Billion Oyster Project, the group developed an interactive map that features speculative narratives representing possible futures of oyster beds in a post-human New York City. Lee explains that the group fed GPT-3 a variety of creative nonfiction sources that explore “what might happen to the built environment and nature in a post-human world—specifically data corresponding to location descriptions.” Among the sources were Big Oyster by Mark Kurlansky, The World Without Us by Alan Weisman, and Netflix’s TV show Life After People. “Then we put in the names of oyster research stations and reefs around the city,” Lee says. “We gave specific locations, and then GPT-3 gave us back descriptions of what the locations will be like in the future without humans, when nature is healing on its own.” The designers then edited the descriptions, put them in the future tense, and added them to the interactive map.

The group is using fiction as a vehicle to examine AI’s limitations, such as its tendency to generate outputs that are not based on fact or existing sources. Zhu hopes that the map will lead users to question perspective, authorship, and context in relation to the descriptions in light of the fact that they were produced by AI. Given the random nature of what GPT-3 creates and the human impulse to derive meaning from output, Zhu likens the AI tool to a fortune teller. The human collaborator’s role is to interpret output and scrutinize its sources. Lee says that one must “heavily curate” results in order to create something interesting. “Some of the outputs are hallucinations or things that sound convincing but aren’t necessarily true.”
Still, Lee notes, AI’s ability to generate output that is not fact based makes it particularly well suited to developing speculative futures that could prove useful in environmental preservation discussions. Using AI, projects like this one harness data to virtually envision a not-so-bright future that we could avoid by eliminating destructive practices.


Agarwal and Nagabhushan created posters titled “Are You Sure?” asking listeners to try to distinguish original music classics from (mostly unsuccessful) imitations produced by AI software.




Nidhi Nagabhushan and Isha Agarwal
Another group of first-year MFA Design and Technology students in Travis and Kirwan’s course orchestrated AI software tools in a different kind of artistic feat, one resulting in a newfound appreciation for play in creative pursuits. Isha Agarwal and Nidhi Nagabhushan set out to explore artistic authorship through CYAN (Create Your Album Now), a music-generating immersion in technology. Their idea was to choose famous musicians associated with different genres and New York neighborhoods—like Bob Dylan with Greenwich Village and Duke Ellington with Harlem—and use AI programs to generate sound-alike songs. They would employ Riffusion to create real-time audio, Midjourney to generate album covers, ChatGPT to write lyrics, and Speechify to voice them, prompting all the programs with keywords relevant to the original artists and their musical form and associated locale. The designers then devised a plan to link to playlists of both the original and the AI songs on QR posters that they would hang throughout the city, encouraging bystanders to listen and compare.
In practice, though, Agarwal and Nagabhushan found that the AI-generated songs were generally incoherent; they rarely captured the nuances of musical genre and derived too literally from the keywords used as prompts. Somewhat unexpectedly, CYAN became a poignant articulation of AI’s limitations—especially in comparison to the results of human expression applied to an art form like music. “Speechify and other text-to-speech applications can read lyrics aloud, but they can’t emulate singing,” Agarwal says. In the end, the project’s creative output was humor, not music. “Sven and Chris named us ‘the laughing group’ because we were laughing at all of the humorously bad outputs the AI gave us,” says Nagabhushan. Nonetheless, AI had, in a way, led her to incorporate play more deeply into her creative process.
A new conversation on AI and ML will open this November, when Lim stages Art, Design, and AI, a campus exhibition mounted in collaboration with LG AI Research and featuring work resulting from an open call to creators throughout The New School. According to Lim, the exhibition demonstrates “how students are striving to be transformative and creative by engaging with and reflecting upon AI tools.” The show will coincide with a second symposium mounted by the Parsons–New School–LG partnership to advance research and practice with new technology. The conference will bring together computer scientists, social activists, artists, and other critical voices, effectively “setting The New School and Parsons up to be an alternative intellectual site to discuss AI,” says Lim. “The value of being in a place like Parsons and involving the New York City community is that when individuals here engage with new tools, they interrogate them and begin thinking about how to make them more relevant to the world today.”