
NONE OF THIS IS REAL

The integration and sophistication of artificial intelligence (AI) in the products and services we use every day is expanding at an exponential rate. At this point, it might be easier to list the industries not using it than those that are. Broadly speaking, this technology has largely been developed and implemented outside the view of day-to-day consumers. We all benefit from it, but most of us have no idea just how pervasive it is or how it actually works. A few notable customer-facing exceptions are things like Apple’s Siri digital assistant, social media algorithms, self-driving features found in some newer automobiles, and Content-Aware Fill in Adobe Photoshop. Yet for industrial designers, we’re just now catching a glimpse of a wild new horizon ahead of us, one where concept generation workflows are dramatically sped up and the lines between machine and designer become ever blurrier.

Recently, public versions of image-generation AI software have begun to gain popularity and accessibility. In simple terms, many of these tools take a string of words as the input prompt and use it to create a completely original digital image as the output. The resulting images can then be refined further by modifying the words used to describe the image or by selecting portions of the image to retain and re-creating the areas around them until the desired outcome is achieved. This allows pretty much anyone with a computer and an imagination to express themselves in previously unthinkable ways. Artists can put down their brushes but still create strokes, illustrators can envision expressive new characters without the need for a pen, and photographers can give up their cameras and yet still capture a moment (that never happened) in lifelike detail.

What does this mean for industrial designers? We are, after all, heavily reliant on visual output as a primary delivery mechanism for our work. We create sketches, renderings, and other visualizations using a wide variety of physical and digital tools. This is nothing new; computers have been central to our process since their introduction. But programs like DALL-E and Midjourney are making us rethink our creative relationship with software and, at the same time, pose myriad new questions about ethics and the authorship of intellectual property. Even the idea of one’s personal talent or creativity could be up for debate when software is responsible for producing a finished image.

In an October 2022 interview on The Daily Show with Trevor Noah, OpenAI chief technology officer Mira Murati discussed the creative capabilities of DALL-E 2, the ethical and moral questions that using AI raises, and how artificial intelligence can enhance and shape the imagination of society. When asked how AI will affect jobs, Murati responded, “We see them as tools. An extension of our creativity or our writing abilities. These concepts of extending human abilities and being aware of our vulnerabilities are timeless … it’s really just an extension of your imagination.”

As exciting as it currently is, we must also acknowledge that this technology is still in a nascent stage. AI has a long way to go before it can completely replicate what humans do naturally, particularly in areas that demand any sort of emotional, cultural, or contextual consideration. For the industrial design process, these image-generating tools also don’t currently incorporate ergonomics, materials, or manufacturing data into their calculations. The only output is 2D images that occasionally have unresolved areas, as if the software couldn’t quite figure out what to put there. That said, it’s not difficult to envision a future where ergonomics, materials, and manufacturing data sets (along with countless others) are incorporated into the AI engine, which would in turn create output that is both practically feasible and perhaps even emotionally cognizant of who the end user(s) could be.

We have a fascinating new tool at our disposal, and there is a lot to unpack about how we can leverage it to our creative advantage. In this issue we hope to explore how image-generating AI software, and other emergent AI technologies, will impact the work we do as industrial designers. And yes, while updates to some of the AI platforms have been released since these articles were written in February 2023, the opportunities, concerns, and possibilities the contributors address will remain relevant for months and years to come. Get ready; it’s going to be a wild ride!

In the above cartoon, which ran in the New York Tribune in 1923, Harold Tucker Webster envisioned a future in which a newspaper cartoonist would use electronic contraptions to generate new ideas and draw them onto paper. These innovations would give the cartoonist more free time for other pursuits, like planning a fishing trip with his buddies. Exactly 100 years later, you could say that ChatGPT is the real-life Idea Dynamo and DALL-E and Midjourney are the Cartoon Dynamo.
