WHAT IS REALITY?
Artificial intelligence has arrived

Several members of our community gathered in schools, studios, and social settings around the country to mark the annual National Industrial Design Day celebration on March 5, 2023. As always, it’s a great opportunity for networking and community-building while taking some time to appreciate all that industrial designers do.
Publisher IDSA 950 Herndon Pkwy. Suite 250 Herndon, VA 20170
P: 703.707.6000 idsa.org/innovation
Executive Editor Chris Livaudais, IDSA chrisl@idsa.org
Contributing Editor Jennifer Evans Yankopolus jennifer@wordcollaborative.com 678.612.7463
Graphic Designers Nicholas Komor 678.756.1975 0001@nicholaskomor.com
Sarah Collins 404.825.3096 spcollins@gmail.com
Advertising IDSA 703.707.6000 sales@idsa.org
Subscriptions/Copies IDSA 703.707.6000 idsa@idsa.org
Annual Subscriptions
Students $50
Professionals/Organizations
Within the US $125
Canada & Mexico $150
International $175
Single Copies
Fall $75 + S&H
All others $45 + S&H
FEATURES
20 IDSA.ORG Redesign
24 by Emilie Williams, IDSA
38 by Tony Siebel, IDSA, Tom Gernetzke, and Caterina Rizzoni, IDSA
58 by Roger Ball, IDSA
by Max Yoshimoto

IN EVERY ISSUE
4 In This Issue
6 Chair’s Report by Lindsey Maxwell, IDSA
8 From HQ by Jerry Layne, CAE
9 Beautility by Tucker Viemeister, FIDSA
14 Women on Design by Rebeccah Pailes-Friedman, IDSA
16 ID Essay by Steven R. Umbach, FIDSA

Cover: Using reference images and blending in Midjourney to experiment with translucent CMF effects. Courtesy of Hatch Duo.
Opposite: On the Moon by Eric Ye, S/IDSA. Image generated in DALL-E using the prompt “An astronaut is seating on a chair which is made from pink foam spray chair in style of blobism, the background is the moon surface and universe.”
Innovation is the quarterly journal of the Industrial Designers Society of America (IDSA), the professional organization serving the needs of US industrial designers. Reproduction in whole or in part—in any form—without the written permission of the publisher is prohibited. The opinions expressed in the bylined articles are those of the writers and not necessarily those of IDSA. IDSA reserves the right to decline any advertisement that is contrary to the mission, goals and guiding principles of the Society. The appearance of an ad does not constitute an endorsement by IDSA. All design and photo credits are listed as provided by the submitter. Innovation is printed on recycled paper with soy-based inks. The use of IDSA and FIDSA after a name is a registered collective membership mark. Innovation (ISSN No. 0731-2334 and USPS No. 0016-067) is published quarterly by the Industrial Designers Society of America (IDSA)/Innovation, 950 Herndon Pkwy, Suite 250 | Herndon, VA 20170. Periodical postage at Sterling, VA 20164 and at additional mailing offices. POSTMASTER: Send address changes to IDSA/Innovation, 950 Herndon Pkwy, Suite 250 | Herndon, VA 20170, USA. ©2023 Industrial Designers Society of America. Vol. 42, No. 1, 2023; Library of Congress Catalog No. 82-640971; ISSN No. 0731-2334; USPS 0016-067.
The integration and sophistication of artificial intelligence (AI) in the products and services we use every day is expanding at an exponential rate. At this point, it might be easier to list the industries not using it than those that are. Broadly speaking, this technology has largely been developed and implemented outside the view of day-to-day consumers. We all benefit from it, but most of us have no idea just how pervasive it is or how it actually works. A few notable customer-facing exceptions are Apple’s Siri digital assistant, social media algorithms, self-driving features found in some newer automobiles, and Content-Aware Fill in Adobe Photoshop. Yet for industrial designers, we’re just now seeing a glimpse of the wild new horizon ahead of us, one where concept generation workflows are dramatically sped up and the lines between machine and designer become ever blurrier.
Recently, public versions of image generation AI software have begun to gain in popularity and accessibility. In simple terms, many of these tools use a string of words as the input prompt, which is used to create a completely original digital image as the output. The resulting images can then be refined further by modifying the words used to describe the image or by selecting portions of the image to retain and then re-creating the areas around it until the desired outcome is achieved. This allows pretty much anyone with a computer and an imagination to express themselves in previously unthinkable ways. Artists can put down their brushes but still create strokes, illustrators can envision expressive new characters without the need for a pen, and photographers can give up their camera and yet still capture a moment (that never happened) in life-like detail.
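To make that loop concrete, here is a minimal sketch in Python, assuming OpenAI’s image API and its v0.x Python client; the prompt text, file names, and mask are illustrative stand-ins rather than anything from the tools or studios discussed in this issue.

```python
# Minimal sketch of the prompt-to-image loop described above, assuming
# the OpenAI image API (v0.x Python client) and a valid API key. All
# prompts and file names here are illustrative, not from the article.
import openai

openai.api_key = "sk-..."  # your key here

# Step 1: a string of words in, a brand-new image out.
result = openai.Image.create(
    prompt="translucent blob-form lounge chair, pink spray foam, studio light",
    n=1,
    size="1024x1024",
)
print(result["data"][0]["url"])

# Step 2: refine by keeping part of the image and re-creating the rest.
# Transparent pixels in the mask mark the regions to be regenerated.
refined = openai.Image.create_edit(
    image=open("chair.png", "rb"),
    mask=open("keep_seat_mask.png", "rb"),
    prompt="the same chair, now with a matte aluminum base",
    n=1,
    size="1024x1024",
)
print(refined["data"][0]["url"])
```

Modifying the prompt between rounds plays the role of “refining the words,” while the mask plays the role of “selecting portions of the image to retain.”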
What does this mean for industrial designers? We are, after all, heavily reliant on visual output as a primary delivery mechanism for our work. We create sketches, renderings, and other visualizations using a wide variety of physical and digital tools. This is nothing new; computers have been central to our process since their introduction. But programs like DALL-E and Midjourney are making us rethink our creative relationship with software and, at the same time, pose myriad new questions about ethics and the authorship of intellectual property. Even the idea of one’s personal
talent or creativity could be up for debate when software is responsible for producing a finished image.
In an October 2022 interview on the Daily Show with Trevor Noah, OpenAI chief technology officer Mira Murati discussed the creative capabilities of DALL-E 2, the ethical and moral questions that using AI raises, and how artificial intelligence can enhance and shape the imagination of society. When asked about how AI will affect jobs, Murati responded, “We see them as tools. An extension of our creativity or our writing abilities. These concepts of extending human abilities and being aware of our vulnerabilities are timeless … it’s really just an extension of your imagination.”
As exciting as it currently is, we must also acknowledge that this technology is still in a nascent stage. AI has a long way to go before being able to completely replicate what humans do naturally, particularly in areas that demand any sort of emotional, cultural, or contextual consideration. For the process of industrial design, these image-generating tools also don’t currently have any references to ergonomics, materials, or manufacturing data used in their calculations. The only output is 2D images that occasionally have unresolved areas as if the software couldn’t quite figure out what to put there. That said, it’s not difficult to envision a future where ergonomics, materials, and manufacturing data sets (along with countless others) are incorporated into the AI engine, which would in turn create output that is both practically feasible and perhaps even emotionally cognizant of who the end user(s) could be.
We have a fascinating new tool at our disposal, and there is a lot to unpack about how we can leverage it to our creative advantage. In this issue we hope to explore how image-generating AI software, and other emergent AI technologies, will impact the work we do as industrial designers. And yes, while updates to some of the AI platforms have been released since these articles were written in February 2023, the opportunities, concerns, and possibilities the contributors address in their articles will remain relevant for months and years to come. Get ready, it’s going to be a wild ride!
In the above cartoon, which ran in the New York Tribune in 1923, Harold Tucker Webster envisioned a future in which a newspaper cartoonist would use electronic contraptions to generate new ideas and draw them onto paper. These innovations would give the cartoonist more free time for other pursuits, like planning a fishing trip with his buddies. Exactly 100 years later, you could say that ChatGPT is the real-life Idea Dynamo and DALL-E and Midjourney are the Cartoon Dynamo.
I probably wouldn’t be the vice president of Teague if it weren’t for IDSA, or at least for the members I met when I was a freshly graduated industrial designer. It might sound dramatic, but I am truly indebted to the numerous women from IDSA who networked with me, gave me portfolio feedback, invited me to see their work, and eventually encouraged me to apply for an open position at Teague.
Industrial design remains a male-dominated industry: Women make up just 19% of our profession, according to the Design Salary Guide by Coroflot. That’s why my early history with IDSA is so meaningful: It was women who carried me through, despite being few and far between in the industry. I didn’t feel as isolated because I had a community—brought together through IDSA—behind me.
The Many Values of Community
IDSA is the set and setting for designers to uplift one another. This is one great purpose of a professional organization. Together, we have the capabilities to nurture and promote the careers of all underrepresented designers, ultimately bringing much-needed balance to the industry. There’s a good reason we have a Diversity, Equity, and Inclusion Council—to take real action toward increasing minority representation in IDSA’s core programming, leadership, and membership; increasing access to industrial design
education; and dismantling inequities within design at large. A greater spectrum of identities is in the best interest of our industry and the world we affect through our products, environments, and ideas.
The practice of industrial design is facing an identity crisis of its own. Our roles have changed dramatically in the last decade as we have watched the field of industrial design evolve and grow to intersect with UX, UI, AI, and 3D—so many acronyms; it’s easy to get lost. New design practices and technologies will continue to emerge and disrupt the status quo over and over again, and the need to recalibrate our identities as designers will not cease.
It is important that we address these challenges—the changing, growing practice—as an organization. IDSA can play a crucial role in helping members navigate these changes and stay informed about the latest developments in the field. By providing a platform for discussion, learning, and collaboration, we can help shape the future of industrial design.
On the Horizon
I’m a people-first person. That’s true wherever I am in a leadership role, whether it’s at Teague or IDSA. The value of this organization lies first and foremost with our members, as I experienced myself early in my career. My primary goal as Board Chair is to support our existing and future members.
I hope to foster the same kind of support for others that I experienced myself.
This begins first with finding our next Executive Director. We want to do this the right way—that means slowly and deliberately. We’re establishing a search committee with external voices from the design community to help us find the right person who can help our members navigate what the future of the industry will be. We are committed to finding someone who is not only knowledgeable and experienced but also forward thinking and excited about the future of industrial design.
In the coming months of 2023, I’m also looking forward to sharing more about the exciting changes we’ve made to our awards and conferences programming. We’ve created new committees for both programs, and I’m proud to say they’re full of highly talented designers who are dedicated to elevating these two important offerings. In the future, we will share an inside view of the IDEA review process—get ready for some lively debate over the winning entries! For conferences, the committee will now include a much larger, diverse group that will identify conference themes that are most relevant to you, our members and community. After the topics and themes are agreed upon, the committee members will curate the speakers, emcees, workshop facilitators, moderators, and panelists.
We have also begun testing an IDSA membership service for schools that allows for greater community access to industrial design students. This is a wonderful benefit to our up-and-coming professional members and helps to increase a diversity of perspectives at IDSA.
I am proud of the improvements IDSA has made this last year and know there are many more to come in the years that follow. The ever-changing landscape of industrial design may seem daunting, and finding our place within it may feel like a challenge. But this, in my opinion, is the most exciting time in history to be an industrial designer. With technology advancing rapidly and global needs to solve for, we have seemingly endless opportunities to make real impact with our work.
To truly make a difference, we need a strong point of view and advocacy for our profession. The members of IDSA hold the key to creating that voice and identity for industrial design.
I look forward to meeting many new faces and seeing old friends at the International Design Conference this August. Though it’s been 20 years since we’ve held the conference in New York, we’re honored to be returning to the design hub after such a long time away. See you there.
—Lindsey Maxwell, IDSA, IDSA Board Chair
lmaxwell@teague.com

Since stepping into the role of Interim Executive Director after Chris Livaudais’ departure at the end of last year, I’ve been humbled by the kind words from so many of you. With your support, and that of the Board and Staff, I’m excited for a very successful year ahead.
While there is plenty of reason for optimism about our future, a simultaneous reality is beginning to take shape. It’s no secret that the economy has been facing turbulence, but it’s not solely about inflation’s impact on the price of everyday items like food and gas or what seems to be a long-term debate over a looming recession. Conversations with our members over the past several months reveal a growing trend of lower-than-expected company revenues and, in some cases, layoffs.
My first major communication to you as Interim Executive Director is not meant to be doom and gloom. With the passion and commitment of our Board, Staff, and greater IDSA community, I am confident that we will overcome these challenges. However, I believe that as a leader within the profession, IDSA has a responsibility to be honest and proactive about the realities we currently face.
At the Board of Directors meeting this January, with both Board and Staff in attendance, I shared my plan for the year ahead. Drawing on conversations with the Board, member feedback, and an understanding of what’s happening within the industry, the plan calls for agility and pertinent resources so that our members can continue to derive value from IDSA and feel equipped to face whatever challenges might lie ahead.
I would like to briefly touch on a few areas of our 2023 plan:
Events are central to IDSA’s identity. Nearly five years ago, a series of visioning sessions yielded a recommitment to what we’ve always been great at: in-person connections. But a lot has changed in the world since then. Economic uncertainties, the impact of COVID-19, and a much-needed spotlight on diversity, equity, and inclusion have shaped a
new reality. No longer can IDSA afford to singularly focus on the in-person experience or frameworks of the past. That’s why this year we’re investing our resources to enhance the entire event experience, from thoughtful content curation and diverse event leadership to strengthened programming partnerships and streamlined virtual access. More to come in the second half of the year.
At last year’s IDC in Seattle, WA, IDSA announced it was bolstering organizational resources to better support the academic community. Additionally, the vision for a new Academic Membership program was introduced. As an evolution of our highly successful Group Membership program, Academic Membership will uniquely package new and existing offerings designed for students, educators, and industrial design programs. Early excitement about the pilot, which kicked off in January, suggests we’re on to something big. Stay tuned for the rollout of the full program later this year.
Professional chapters and sections have always been, and remain, a core part of IDSA’s programming. We’re continuing to improve our support for volunteer leaders by exploring ways to better equip them with training through the Community Leadership Institute and access to IDSA funds for programming. We are strategizing on different models and hope to share creative solutions with them this year.
We have a challenging but promising year ahead. With the resilience and strength of the IDSA community, I am certain we will emerge stronger than ever. In the meantime, we are committed to supporting our members and providing the resources, content, and opportunities to help our community succeed.
—Jerry Layne, CAE, Interim Executive Director
jerryl@idsa.org

For this issue, I decided to emulate the modus operandi of my colleague Scott Henderson, IDSA: interview a celebrity. So I got in touch with ChatGPT, sibling to the visual image maker DALL-E, both created by the artificial intelligence research laboratory OpenAI. ChatGPT is like Cixin Liu’s “Big Quantum” in his sci-fi book The Supernova Era (written before The Three-Body Problem). The AI in his story is a kind of super-democratic computing tool. Like in a massive multiplayer game, it gathers everyone’s ideas and individual desires into summaries to foster human-scale dialog. “That’s right. They are all speaking, but only Big Quantum heard the nearly two hundred million messages. It summarized them into the one statement you heard,” says
Pan Yu in the book. The value of this AI is that it makes “sense” of all its data; it says what we already know because its database is everything we know.
ChatGPT, like Big Quantum, is not making things up; it’s assembling information and making it sound good. The composing is artificial, but the ideas are ours—our conventional wisdom. I was surprised at how much ChatGPT knows about industrial design! Which means that the general public finally knows what we do. My conversation with ChatGPT is presented here more or less verbatim. Although, as the December 26, 2022, New York Times headline asked, “Did a Fourth Grader Write This? Or the New Chatbot?” Decide for yourself!
Tucker Viemeister: Am I crazy to interview a chatbot?
ChatGPT: It is not necessarily crazy to interview a chatbot for an article about industrial design or any other topic. Chatbots like me can provide a broad overview of a subject and can be a convenient and efficient way to gather information. However, it is important to note that my responses are based solely on the information that I have been programmed with and do not reflect the personal experiences or opinions of an individual.
It is up to the individual journalist or writer to decide whether interviewing a chatbot is appropriate.
Viemeister: OK, I do think it’s appropriate! I want to find out what’s in your database. First, what does a chatbot know about industrial design?
ChatGPT: Interviewing a chatbot like me could provide a broad overview of the field and help to familiarize readers with the basic principles and concepts of industrial design. I can provide information on the history and evolution of the field, the role of industrial design in shaping the products and systems that people use every day, and the key skills and techniques that industrial designers use to create functional and aesthetically pleasing products.
Viemeister: Tell me, why do you think industrial design is important for the future?
ChatGPT: Industrial design is important for the future because it plays a critical role in shaping the products and systems that people use every day. It is a multidisciplinary field that combines creativity, engineering, and business to create functional and aesthetically pleasing products that meet the needs of users and the market.
There are several reasons why industrial design is important for the future:
1. Industrial design helps to create innovative products that meet the changing needs and preferences of consumers. As technology and society evolve, industrial designers are constantly looking for ways to improve and update existing products, as well as to create new products that meet emerging needs.
2. Industrial design is a key factor in the success of a product. A well-designed product is more likely to be successful in the market, as it is more appealing to consumers and easier to use. Industrial design also plays a role in the sustainability of a product, as designers consider the environmental impact of a product throughout its lifecycle.
3. Industrial design helps to drive economic growth. The products and systems created by industrial
designers create jobs and contribute to the economy. In addition, industrial design plays a role in helping companies to stay competitive in a global market.
Viemeister: What do you base these projections on?
ChatGPT: As an artificial intelligence, I have been programmed with a large dataset of information on a wide range of topics, including industrial design. I have been trained to understand and analyze text input and generate appropriate responses based on that input. This allows me to provide information and answer questions on a wide range of topics. I do not have personal experiences or opinions, and my responses are based solely on the information that I have been programmed with. Is there anything else you would like to know about industrial design?
Viemeister: Why don’t industrial designers make more money if their work is so valuable?
ChatGPT: It is important to note that the value of industrial design work can be difficult to quantify, and designers may not always receive financial compensation that is commensurate with the value of their work. However, many designers are motivated by the opportunity to create products that are valued and appreciated by consumers, and they may find other forms of fulfillment and satisfaction in their work.
Viemeister: Have you heard of the word “beautility”?
ChatGPT: Beautility is a term that is used to describe products that are both beautiful and useful. It is often used in the context of design, to describe objects that are aesthetically pleasing and also serve a practical purpose. Beautility is a concept that has been around for many years, and it has been a key consideration in the design of a wide range of products, from furniture and household objects to consumer electronics and other products. Many designers strive to create products that are both beautiful and useful, as they believe that such products are more likely to be appreciated and valued by consumers.
Viemeister: I guess I didn’t coin the word then. How important is beauty to successful industrial design?
ChatGPT: Beauty is an important factor, but it is not the only one. A product that is aesthetically pleasing is often more appealing to consumers and can contribute to its success in the market. However, beauty is subjective and can vary depending on cultural and personal preferences. Therefore, it is important for industrial designers to consider a wide range of factors when designing a product, including its function,
usability, sustainability, and cost. Industrial designers must also consider the cultural and social context in which the product will be used, as well as the environmental impact of the product throughout its lifecycle.
Viemeister: Here’s a trick question: What is the difference between design thinking and industrial design?
ChatGPT: Design thinking is a problem-solving approach that involves understanding the needs of users, generating creative ideas, prototyping and testing solutions, and iterating on those solutions until a satisfactory result is achieved. It is a process that can be applied to a wide range of fields and industries, including product design, service design, and organizational design.
Industrial designers use design thinking as part of their process, but they also have specialized knowledge in areas such as materials, manufacturing processes, and ergonomics, which allows them to create functional and aesthetically pleasing products that meet the needs of users and the market.
Viemeister: Who are the most important industrial designers?
ChatGPT: There have been many important industrial designers throughout history who have made significant contributions to the field. Some of the most well-known include:
1. Raymond Loewy: Loewy was an influential industrial designer who is credited with designing some of the most iconic products of the 20th century, including the Coca-Cola vending machine and the Greyhound Scenicruiser bus.
2. Dieter Rams: Rams is a German industrial designer who is known for his minimalist design philosophy and his contributions to the development of the Braun brand.
3. Jony Ive: Ive is a British industrial designer who was the Chief Design Officer at Apple and is credited with designing many of the company’s most iconic products, including the iPod, iPhone, and iPad.
4. Chuck Pelly: Pelly is an American industrial designer who has designed a wide range of products, including automobiles, motorcycles, and consumer electronics.
5. Karim Rashid: Rashid is an Egyptian-born industrial designer who is known for his colorful and futuristic designs and has designed a wide range of products, including furniture, home decor, and consumer electronics.
Viemeister: It’s interesting to see who you chose! Makes me wonder what your selection criteria are based on. Is it some kind of Internet hit parade?
ChatGPT: No, my selection criteria is not based on internet popularity. As an artificial intelligence, I do not have access to the internet or other external sources of information. My responses are based solely on the information that I have been programmed with. In the case of the industrial design firms I listed, I selected them based on their reputation and contributions to the field of industrial design, as well as their history and the range of clients they have worked with. I did not consider internet popularity as a factor in my selection.
Viemeister: What can industrial designers do to become more successful?
ChatGPT: There are several things that industrial designers can do to become successful in their field:
1. Develop a strong foundation in design principles: Industrial designers should have a solid understanding of design principles such as form, function, aesthetics, usability, and sustainability.
2. Build a diverse skill set: Industrial design is a multidisciplinary field, so it is important for designers to have a diverse skill set that includes creativity, technical expertise, business acumen, and problem-solving skills.
3. Stay up to date with industry trends and technology: Industrial designers should be aware of the latest trends and technologies in the field to ensure that they are creating products that meet the needs and preferences of consumers and the market.
4. Build a strong portfolio: Industrial designers should develop a strong portfolio of their work to showcase their skills and abilities to potential employers or clients.
5. Network and seek out opportunities for professional development: Industrial designers should seek out opportunities to network with other professionals in the field and participate in professional development activities to enhance their skills and knowledge.
6. Build a strong personal brand: Industrial designers should work to build a strong personal brand by consistently delivering high-quality work, being a team player, and demonstrating their expertise and passion for the field.
Overall, becoming a successful industrial designer requires a combination of strong design skills, technical expertise, business acumen, and a passion for the field. By consistently delivering high-quality work and seeking out opportunities to
grow and learn, industrial designers can position themselves for success in their careers.
Viemeister: What are the key skills industrial design students need to learn?
ChatGPT: There are several key skills that industrial design students should learn to be successful in their careers:
1. Design principles: Students should learn the fundamental principles of design, including form, function, aesthetics, usability, and sustainability.
2. Creativity: Industrial designers need to be able to generate creative ideas and solutions to design challenges.
3. Problem-solving skills: Industrial designers must be able to identify and solve complex problems in a creative and effective manner.
4. Technical skills: Industrial design students should learn how to use design software and other tools to create accurate and detailed designs.
5. Communication skills: Industrial designers must be able to effectively communicate their ideas and designs to clients, colleagues, and other stakeholders.
6. Business acumen: Industrial designers should have a basic understanding of business principles and how to apply them to their work.
7. Collaboration skills: Industrial designers often work in teams, so it is important for them to have strong collaboration skills and be able to work effectively with others.
8. Project management skills: Industrial designers should be able to manage their time effectively and handle multiple projects simultaneously.
Viemeister: What are the best schools to study industrial design?
ChatGPT: There are many excellent schools around the world that offer industrial design programs. Some of the top schools for industrial design include:
1. Massachusetts Institute of Technology (MIT) - Cambridge, Massachusetts, USA
2. Stanford University - Stanford, California, USA
3. University of Michigan - Ann Arbor, Michigan, USA
4. Royal College of Art - London, UK
5. Delft University of Technology - Delft, Netherlands
6. ArtCenter College of Design - Pasadena, California, USA
7. Carnegie Mellon University - Pittsburgh, Pennsylvania, USA
8. National University of Singapore - Singapore
9. Politecnico di Milano - Milan, Italy
10. Shanghai Jiao Tong University - Shanghai, China
Viemeister: I’m not sure MIT has a leading ID program. Interesting list of schools from around the world but what about Pratt and Parsons?
ChatGPT: Pratt Institute and Parsons School of Design are both well-known schools that offer industrial design programs. It is important to note that there are many other excellent schools that offer industrial design programs, and the right school for an individual will depend on their specific needs and goals.
Pratt Institute is a private university located in Brooklyn, New York, USA, that offers a Bachelor of Industrial Design program. The program focuses on the design of physical products and systems and teaches students design thinking, technical skills, and problem-solving techniques. Pratt is known for its strong faculty and its emphasis on hands-on learning and real-world experience.
Parsons School of Design is a private university located in New York City, USA, that offers a Bachelor of Fine Arts in Industrial Design program. The program focuses on the design of physical products and systems and teaches students design thinking, technical skills, and problem-solving techniques. Parsons is known for its strong faculty and its emphasis on creativity and innovation.
Overall, both Pratt Institute and Parsons School of Design are excellent schools that have a reputation for producing top-quality designers.
Viemeister: Funny how you describe Pratt and Parsons literally with the same words! OK, so then what are the most important industrial design companies?
ChatGPT: There are many industrial design firms that are known for their contributions to the field and for producing innovative and high-quality designs. Some of the most well-known industrial design firms include:
1. IDEO: a global design and innovation consultancy that has worked with a wide range of clients, including Apple, Google, and Intel.
2. Frog Design: a global design and strategy firm that has worked with a wide range of clients, including Microsoft, IBM, and Sony.
3. Teague: a design firm that has designed a wide range of products, including automobiles, aircraft, and consumer electronics.
4. Smart Design: a global design and innovation firm that has worked with a wide range of clients, including Samsung, LG, and Google.
5. fuseproject: a design and branding firm that has worked with a wide range of clients, including eBay, Jawbone, and Nest.
There are many other firms that have also made significant contributions and have had a lasting impact on the world of industrial design.
Viemeister: Your selections are examples of consultants who do strategy, digital, and UX design, so you consider UX, strategy and branding to be part of industrial design?
ChatGPT: In the process of creating a new product, industrial designers may work on user experience (UX) design. They may also be involved in developing the overall strategy for the product.
Branding is another aspect that may be considered in the design of a product. This includes creating a unique visual identity for the product, such as a logo or color scheme, and developing the messaging that will be used to communicate the product’s value proposition to consumers.
So, while UX, strategy, and branding may be considered in the process of industrial design, they are not necessarily part of the core discipline of industrial design itself. Industrial designers primarily focus on the physical form and function of a product, rather than on its branding or user experience.
Viemeister: It seems like, for me, you’re passing the Turing Test (behavior indistinguishable from a human). How do you do this?
ChatGPT: As an artificial intelligence, I do not have physical senses or experience the world in the same way that humans do. Instead, I have been programmed with a large dataset of information and a set of algorithms that allow me to analyze and understand text input, and generate appropriate responses based on that input. When you ask me a question or make a statement, I search my database for information that is relevant to your input, and use my programming to generate a response based on that information. Is there anything else you would like to know?
Viemeister: Are your responses objective?
ChatGPT: I do not have personal feelings or opinions, so I am inherently objective in that sense. My responses are based on the information that I have been trained on and my understanding of language, and I do not allow my own biases to influence my responses.
However, it is important to note that I am a machine learning model and my responses are based on patterns
and correlations that I have learned from the data used to train me, and I may not always have access to the most complete or up-to-date information. It is always a good idea to verify information from multiple sources before making important decisions.
Viemeister: So you kind of distill our collective knowledge. On the personal side, have you ever heard of me?
ChatGPT: Yes, Tucker Viemeister is a well-known industrial designer who has made significant contributions to the field. Viemeister is the founder of the design firm Teague [should be Smart Design] and has designed a wide range of products, including consumer electronics, automobiles, and transportation systems. He has received numerous awards for his work, including the Industrial Designers Society of America’s (IDSA) Lifetime Achievement Award [that’s from Pratt, but I am a Fellow of IDSA]. In addition to his work as a designer, Viemeister is also a professor at Pratt Institute [actually the New School] and has served as the president of the IDSA [should be chair of the IDSA NY Chapter].
Viemeister: I see that some of the facts are wrong, but the gist is correct! Finally, thank you, this chat about industrial design has been very enlightening because you express popular intelligence. I will publish it in INNOVATION magazine in my Beautility column, if you don’t mind.
ChatGPT: You’re welcome! I’m glad that you found the information about industrial design to be useful. Please note that as an artificial intelligence, I do not have the ability to grant permission for my responses to be published or used in any way. My responses are generated based on the patterns and correlations that I have learned from the data used to train me, which is owned by OpenAI, and they hold the intellectual property rights to the model that I am based on. However, my responses are generated automatically and are not considered to be original works of authorship, so they are not protected by copyright. It is always important to respect intellectual property rights and to properly attribute any information that you use.
I hope that you have a successful and enjoyable experience writing your column for Innovation magazine. If you have any further questions, please don’t hesitate to ask.
—Tucker Viemeister, FIDSA
www.tuckerviemeister.com

When the topic of artificial intelligence comes to mind, I can’t help but think of the sci-fi fem-bots featured in movies like Blade Runner, Ex Machina, and Her. These films, among others, have often portrayed women as the conduit for artificial intelligence. As a result, I became curious about how women industrial designers view the impact of AI on their profession, so I decided to ask a group of women in the field for their thoughts.
What’s the Consensus?
Overwhelmingly, the message I heard was that artificial intelligence is not a replacement for human designers. While AI can automate routine tasks and provide data-driven insights, it cannot replace the creativity, intuition, and empathy that are essential to good design. Rather, AI should be viewed as a tool that complements and assists human designers, enabling them to produce more compelling and innovative products. As Milja Bannwart, an industrial design consultant and creative director based in Brooklyn, NY, explains, “There are many aspects that a designer incorporates into the design of a product. There is a story to be told, the emotional impact on users, consumer testing and research, form and color, the quality of materials used, and craftsmanship.” By using AI in combination with human creativity, designers can unlock new possibilities and produce products that are both functional and aesthetically pleasing.
Furthermore, according to Lorraine Justice, PhD, FIDSA, a design researcher, author, and professor of industrial design at RIT, “Some people believe that AI will transform designers into mere curators or arbiters of design, rather than original creators. However, this is only one aspect of the potential
options for this technology. The human desire to create will always exist, and designers will continue to use any available tools to create better designs.”
According to Yukiko Naoi, principal at Tanaka Kapec Design Group in Norwalk, CT, AI could serve as a valuable tool for collaboration in industrial design. She believes that in any creative process, any input or specific angle of seeing things is valuable and that AI could provide a viewpoint that individual designers may overlook. “AI’s ability to offer fresh perspectives could be particularly useful in industrial design,” says Naoi.
AI is a great tool to automate many of the routine tasks involved in industrial design, such as creating 3D models, rendering product images, and analyzing user data. This can free up designers’ time to focus on more complex and creative aspects of the design process. According to Ana Mengote Baluca, IDSA, a faculty member at Pratt Institute, designers should approach the use of AI with a healthy dose of skepticism. While relying too heavily on AI may be risky, Mengote Baluca acknowledges that the technology shows promise in exploring new forms for products: “My big concern about AI is that it will drive trends and affect the aesthetics of what we create. If the algorithms are written in a way that promotes what is popular, then that will become the next big thing. I worry that we will lose diversity in style and in aesthetics if we rely on AI too much.” Naoi adds, “Just like any tool, it depends on how we use it. If we rely on it too heavily then some of the outcomes will be too obviously computer driven.”
Naturally, there is a lot of apprehension about how AI will affect the design process. AI has the potential to
transform our lives in many positive ways, from improving healthcare and transportation to enhancing education and entertainment. However, there are also valid concerns about the impact of AI on humanity, including job displacement, privacy concerns, and ethical issues. To address these concerns and ensure that the use of AI in industrial design is responsible and beneficial, it’s essential to establish ethical guidelines and standards for AI development and implementation. It’s also important to involve all stakeholders, including designers, engineers, consumers, and policymakers, in the conversation about AI’s role in design. By doing so, we can maximize the potential benefits of AI while minimizing the potential risks and unintended consequences. When discussing the impact of AI on industrial design, Jeanne Pfordresher, partner at Hybrid Product Design in Brooklyn, NY, adds, “AI has tremendous potential for creativity, and if we can address the ethical issues surrounding it, even better.” Ultimately, the successful integration of AI in industrial design will require collaboration, transparency, and responsible innovation.
One of the biggest challenges facing designers today is how to create products that are both functional and environmentally responsible. AI has the potential to enable more sustainable and environmentally friendly product design. For example, AI can be used to model a product’s life cycle and predict its carbon footprint, allowing designers to identify areas where they can reduce emissions and improve sustainability. Additionally, AI can help designers to optimize material use, design products for disassembly and reuse, and create more energy-efficient designs.
Finding efficiencies in massive amounts of data is a time-consuming task that is ideally suited for AI. Industrial designers can leverage this technology to create more sustainable designs and more efficient supply chains, which can help to mitigate the negative impact of human activity on the environment. “AI can help us manage supply chains and reduce inefficiencies,” says Mengote Baluca, adding that “by creating decision-making tools for designers, we can make more conscious choices.”
AI can significantly improve the design process by leveraging vast amounts of data on user preferences, market trends, and product performance. This enables designers to create more efficient and effective designs that better meet the needs of customers. Bannwart recommends “integrating AI at the outset of the design process to analyze data and identify trends, conduct consumer and competitor research, and even generate concept ideas. In later phases, AI can also be useful for creating design variations, accelerating the process, and experimenting with form generation for the sake of exploration.”
Many products in the market today have used AI in their design and development. Adidas used AI to design and manufacture the Futurecraft 4D shoe. The shoe’s midsole was created using a 3D printing process that was optimized with AI algorithms to create a lattice structure that is both lightweight and strong. Apple used a combination
of machine learning and acoustic simulations to design the AirPods Pro. AI algorithms helped optimize the fit and seal of the earbuds and create the noise-canceling technology that is one of the AirPods Pro’s key features. AI also has great potential for creating better user experiences in products. For example, Dyson used AI to design the Pure Cool Link air purifier, which can automatically detect and respond to changes in air quality. AI algorithms were used to optimize the performance of the air purifier and create a user interface that is intuitive and easy to use.
AI is rapidly becoming an integral part of the industrial design process. While I don’t believe AI will or should replace human designers, I do think that by establishing and following ethical guidelines for AI development and usage, we can leverage AI into helping designers create products that are not only functional and aesthetically pleasing but also sustainable and environmentally responsible.
This article started out with the working title “We’re Not Paying You to Think, Just Do What You’re Told.”
Too many young designers, as I was long ago, may have heard those words spoken to them by their employers, so a title like that may come across as a bit cynical. But with a tad more wisdom, and having years later established my own firm where I employed young design talent, I have come to understand why the owner of an industrial design consultancy might be motivated to say something like that to an employee, despite the young designer’s talent and skills. More on that later.
Back to Boston in the 1980s, where I was employed as a junior staff designer at a prominent industrial design consultancy four years after graduating from RISD. If this sounds a little familiar, it’s because I wrote about these times in a previous INNOVATION article. The design office was located on Boylston Street, a few blocks from the Prudential Tower. The office building was an old masonry building, three stories tall, with those wonderful original wooden floorboards that had been refinished but still creaked and groaned. My workspace and cubicle were located on the second floor facing the side street and shared a large old window that looked down on a small photo supply shop across the street.
Creak, groan, creak. I heard someone walking across the room approaching my cubicle one day. It was one of the firm’s partners, who presented me with my new work
assignment. “Steven,” he said, “Stanley Tools has given us a project to conceptualize new designs for a top-read tape measure. I’m putting you on this project to assist me.” Very cool, I thought. I loved working at this firm because it was very well-regarded and attracted some great clients with leading-edge projects. He continued, “I have already come up with a design solution that I would like you to continue to develop and finalize with mechanical drawings.”
“Great,” I said. “Tell me more about the project.” He explained that Stanley wanted to cost-reduce its current design for a top-read measuring tape. This type of tape measure allows users to get readings from a top window (lens) that shows measurements from the tip of the blade to the back of the housing. They are more useful for making inside measurements, such as between two studs, where you can’t bend the metal tape or extend the housing beyond your end measurement point to get a reading. To achieve the top readout, printing on both sides of the metal tape roll was required—a costly but necessary process to achieve the scale offset.
My employer went on to explain that his solution to the problem was to run the metal tape blade up the inside front face of a triangular-shaped housing, supported by rollers, and place a viewing window at the point where the added dimension of the extended tape and the dimension of the housing would be visible. Very clever and straightforward. I pulled out a fresh sheet of paper and placed it on my drafting table equipped with a parallel bar and got to work.
Over the next few days as I was working on refining the design approach, something kept nagging at me. I hadn’t
really been given the opportunity to be part of defining the problem or to work through some alternative ideas on how to approach a possible solution. It was presented to me as the solution, the problem statement coming in as backup—now just draw it up. I had extensive experience using tape measures, having worked carpentry jobs during college summers, and something seemed off. The idea of a triangular-shaped tape measure with the readout window on the angled front face seemed like it would be difficult to use for obtaining an inside measurement. That would require the user to position their head above and to the front of the tape measure body, unlike a square housing that could be read from directly above. A user wouldn’t be able to get a good view of the angled surface when measuring interior spaces 12 to 18 inches wide (slightly wider than a human head) and smaller. Hmmm, that’s not good. How do I get back to a top-read tape measure and only print dimension markings on one side of the tape? Back to the problem
definition—is there another way? How do I make the colored printing on the measurement tape that comes out of the housing somehow different than what the user reads through a window?
Aha!
Sitting at my desk pondering, I stared down at the photo supply shop on the street below and it hit me. Color filters! Through the use of color filters, I could make some colors virtually disappear while making other colors appear darker. I ran down to the store and bought a sample of red color gels. These were gels (light filters) that would have been used for either color correction or color effects in photography. I knew that dark red print would virtually disappear when viewed through a red-colored filter. I then experimented with a color that would be visible when viewed through the red filter but be difficult to see in the daylight. These two markings could be printed offset to each other on the same side of the metal
tape in different colors where one would be highly visible in the daylight and the other viewable only through the colored lens, which would render the daylight color invisible!
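For the optically curious, the trick reduces to simple channel arithmetic, and a toy simulation shows why it works. This is a sketch under assumed values: an ideal red gel that transmits only the red channel, with ink and tape colors that are illustrative guesses, not Stanley’s actual specification.

```python
# Toy simulation of the color-filter trick: an ideal red gel passes
# only the red channel of whatever it covers. The ink and tape colors
# below are illustrative guesses, not Stanley's actual specification.

def through_red_filter(rgb):
    """Perceived brightness through an ideal red gel: only red survives."""
    r, _, _ = rgb
    return r

def daylight(rgb):
    """Rough perceived brightness in daylight: average of all channels."""
    return sum(rgb) / 3

def contrast(ink, background, view):
    """Visibility of ink against background under a given viewing mode."""
    return abs(view(ink) - view(background))

tape     = (255, 240, 180)  # pale yellow tape blade
red_ink  = (240,  20,  20)  # red markings, read with the naked eye
pale_ink = (140, 255, 240)  # pale blue-green markings, read through the lens

# In daylight the red ink pops and the pale ink barely registers ...
print(contrast(red_ink, tape, daylight))            # ~132 -> clearly visible
print(contrast(pale_ink, tape, daylight))           # ~13  -> nearly invisible

# ... but through the red gel the relationship flips.
print(contrast(red_ink, tape, through_red_filter))  # 15  -> vanishes
print(contrast(pale_ink, tape, through_red_filter)) # 115 -> readable
```

Swap in different ink colors and the same two checks tell you whether a candidate pairing would hide in daylight yet read through the lens.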
I was working on a proof-of-concept mock-up to test and demonstrate my idea when I heard the creaking floor and footsteps coming my way. “What are you working on?” my employer asked, slightly annoyed. I explained that I was working on an idea for the Stanley tape measure project that I thought was worth investigating. His response? Here we go: “We’re not paying you to think, just do what you are told!”
I understood years later that he had to manage not only me but the tight project budget, the client expectations, the rapid project schedule, and so on (design management) and certainly felt pressure as the deadline to have the drawings ready for the model shop was just a week away. Being more experienced and senior than me, and one of the firm’s partners, perhaps he also thought his intelligence was sufficient for the project. He then directed me to finish up the mechanical drawings for the triangle roller approach and walked away.
Over the next few days, I alternated working on both ideas, mindful of the deadlines and trying not to get caught. On one occasion, he came by my desk and found me working on the color filter idea. He was not happy and had a few angry words. Fortunately, it was well after working hours, so I explained that I worked on the mechanical drawings all day, and now during my after-hours off the clock, I wanted to get the proof-of-concept mock-up completed so it could be demonstrated. “Well, okay,” he grumbled.
Client presentation day with Stanley Tools came soon thereafter, and the design firm partners took the drawings and renderings for the triangle roller concept and other concepts and headed into the conference room. As the partners had been gathering all the materials before the meeting, I handed one of them my proof-of-concept mock-up. He reluctantly agreed to include it, but at the bottom of the stack, and to show it only if there was time. The junior design staff were not invited to attend—but the senior designer was and later filled me in on the meeting.
After just a short while into the meeting, one of the clients inquired about the interesting-looking mock-up on the bottom of the pile. The partners pulled it, explaining the concept of the colored-filter lens with the offset two-color dimension markings printed on only one side. After a brief moment, the senior Stanley Tools representative reportedly exclaimed, “Now that’s really interesting! This is exactly why we like to work with this firm, because you come up with such creative and innovative ideas!”
We tend to think of artificial intelligence as machine-based computer systems that are capable of performing tasks that normally require human intelligence, but humans can be guilty of a different type of “artificial intelligence” when they narrow their thinking to allow only a single point of understanding or exploration: their own. In design, this kind of false intelligence is exacerbated when we think we know it all, fail to fully work through a comprehensive problem definition, and fail to fully engage with all the other intelligent human capital resources available to us, who can bring different insights, experiences, and diversity to the project challenge.
—Steven R. Umbach, FIDSA
umbachcg@comcast.net

Have you designed something amazing, funky, or famous that’s 10 to 15 years old or, better yet, older? If so, we’d love to hear your story and see sketches, renderings, and images of the design. If you are an IDSA Fellow or Member, please contact Steven R. Umbach, FIDSA, steven@umbach-cg.com.
From unlocking new opportunities with generative methods to evaluating the outcomes of the design process through testing and evaluation, design research plays a critical role in shaping the products that make it to market. Henry Ford once said, “If there is any one secret of success, it lies in the ability to get the other person’s point of view and see things from that person’s angle as well as from your own” – and design research is the key to unlocking those insights.
The user-centered designer uses a mix of investigative tools to uncover users’ needs and pain points, interpreting data to inform the design process with the sharpest insights, so their design teams can create more effective solutions.
By integrating the human perspective in every step of the problem-solving process using design research, you can help create more desirable, meaningful, and impactful products.
www.idsa.org/DR2023
By the mid-1990s, a few IDSA chapters started experimenting with creating their own websites (anyone remember GeoCities!?), but little was known about how to leverage the nascent “world wide web” to benefit our community at a larger scale. Back then, not many could comprehend how the internet would change nearly every aspect of our lives. Nevertheless, it was the perfect time for IDSA to enter the digital world. IDSA.org launched in 1996. The goal, as reported in the Fall 1996 issue of INNOVATION, was to create “a visually interesting site that contains useful information for the audience with a navigational method that allows the user to have a pleasant experience on-line, the kind of experience that will invite viewers to return.” That first website established our online presence and was a major advancement in exploring how the web could benefit our community.
IDSA.org has had four major iterations since its initial debut. With each variant, we’ve strived to make improvements and implement the latest technologies available to the benefit of our visitors. The outgoing website (the one this replaced) was launched in 2016. It had received a few modest updates over the years but had remained largely as originally built. Recognizing that the need for another major upgrade was upon us and anticipating the community value it could provide, IDSA’s Board of Directors approved funding to begin a website redesign project in early 2022. Our design process (outlined in more detail
below) for the new site included soliciting feedback from members and conducting a competitive landscape analysis of other design and membership sites, extensive wireframing and content-mapping explorations, and a detailed review of usage analytics for IDSA.org.
The result is a fresh new face for IDSA.org with a substantial package of new features, security upgrades, and back-end database integrations. Additionally, we’ve improved mobile device responsiveness, created members-only content areas, and enhanced the overall layout and design. Underpinning it all is a refreshed content-management strategy with a streamlined information hierarchy and new navigation system. Our mission from the start mirrored that of the original 1996 IDSA.org design team: to create a beautiful place where our community can connect and knowledge can be shared.
Today IDSA.org welcomes over 1.4 million visitors per year from all over the world. Our website is responsible for housing nearly six decades of organizational history and serves as a focal point for up-to-date news and information related to IDSA’s annual programming and community activities. All this while also serving as a repository for trustworthy institutional knowledge and reference materials related to the professional practice and academic study of industrial design. Our website is an incredibly important touchpoint for all things IDSA. It is indeed our global front door. Welcome to our new home.
Homepage: The new homepage aims to highlight our programs, services, and membership in a way that the prior site did not. It also considers different user groups who visit our site, each with their own needs and intent. On the new site, you'll see the familiar tile-view aesthetic used in several previous IDSA website iterations. However, now the tiles are organized into sections that correspond to different motivations a visitor might have while browsing. The tiles float in white space, while slider menus (with dark backgrounds) separate tile groupings and present curated selections of links based on different IDSA programs and viewer demographics. The overarching strategy is to present a visually compelling exploded view of content on the site that helps visitors quickly dive in to find the information they need. The homepage is dynamic in that some of the tiles will change content regularly, which keeps the page feeling fresh on every visit.
Navigation: Based on website visitor behavior (extracted from site analytics compiled over the last few years), we have learned that IDSA.org visitors do not generally come to our site with the intent to browse for long periods of time; rather, they visit to complete a specific task or find specific information and then leave. As a result, our new navigation is built with visitor intent in mind and aims to provide clear pathways to help visitors accomplish their goals while also telling the story of IDSA. The main navigation menu is accessed via the double horizontal line icon in the top right corner. When clicked, it expands to fill the full page, which helps the viewer focus on finding their desired next step. Content within the menu is arranged into five top-level categories with sub-items within each. These groupings were tested early on with IDSA members, who helped us refine how the content is organized.
Microsites: Many of IDSA's annual programs require multiple pages of content specific to each program. Conferences, for example, need individual pages for speakers, the schedule, ticket pricing, and hotel information, whereas IDEA needs dedicated pages for detailed competition rules, jury profile walls, and our past winner gallery. We needed to create a flexible system that allows for unique page groupings while retaining a familiar and consistent navigation format. You'll now find Microsites throughout IDSA.org, which are intended to present program-specific content in a refreshed and easy-to-navigate way. Pages that use a large dark header section will often include sub-navigation links displayed in the lower area of the header. Those same links are grouped into a related expandable menu when viewed on a mobile device.
Communities: IDSA's network of Chapter and Section community groups has long had dedicated spaces within our website. Now, however, new interconnections are in place that bring current IDSA news, social media feeds, and event calendars into our community group pages, helping keep these spaces fresh and informative. You'll be able to see members of the current leadership team, access links to past event recordings, and connect with communities on social media all from one place.
Accessibility: IDSA remains deeply committed to creating inclusive and accessible experiences in everything we do. For this reason, the entire website has been upgraded to use the UserWay web accessibility tool, which can be found in the lower right-hand corner of any page. This new feature allows visitors with differing visual or auditory abilities to adjust settings specific to their needs, such as text size and spacing, or to enable a screen reader that provides an audio readout of the page. In addition to the UserWay tool, we've been mindful during the entire development process to reference current Web Content Accessibility Guidelines and design our content to comply with these standards to the fullest extent possible.
Education papers: IDSA publishes a selection of peer-reviewed academic education papers each year. The program helps provide content for the annual Education Symposium and also supports educators by giving them an industry-recognized platform to broadcast their work to a wider audience. We developed a completely new Education Paper archive section of IDSA.org to help improve the visibility and searchability of these important documents.
Profiles: The individuals who participate in the IDSA community are the beating heart of all we do. Each person who speaks at an event, is elected to a leadership position, or serves on a local chapter leadership team is instrumental to our success. We wanted to highlight these volunteer efforts and catalog them to create a visual history that celebrates their contributions. In addition to a biography statement and headshot, individual profiles now include an Activities section, which displays a chronological list of past involvement (activities) at IDSA conferences, working groups, or with local chapters. Your profile will grow with you over time and become a lasting record of your contributions to the Society.
Member directory: One of IDSA's longstanding member benefits has been inclusion in and access to a current member directory. Being included here, among an esteemed roster of designers from around the world, is an outward statement of each person's dedication to their career and craft. From a technical standpoint, the ability to see who is a current IDSA member is deceptively difficult to realize. Several backend systems need to talk to one another in order to display information quickly and securely. Our all-new membership directory launching with this website proudly displays all current members for anyone to view. Importantly though, members who are logged into the site can access additional layers of information, which will assist with networking and career-growth potential.
Members-only content: Another major benefit of the single sign-on backend wizardry happening with the new website is that we can create entire pages and sections of content that can only be viewed by current IDSA members. This means that we'll be able to provide exclusive content as a benefit that can only be accessed with your IDSA membership credentials. One area of the site now blocked off as an IDSA members-only benefit is designBytes, a collection of over 1,750 reference articles that spans nearly 20 years. Similarly, the latest issues of INNOVATION can now also be accessed digitally directly on IDSA.org simply by logging in.
We know and appreciate that our community has a heightened expectation for how IDSA is presented. This new website debuts an updated brand style that will soon be seen in other areas of our visual communications. The overall aesthetic retains many graphic elements from the outgoing website but in a refreshed presentation with an all-new typographic treatment and a strong emphasis on imagery to bring the IDSA experience to life. The use of colorful imagery featuring our members at their best communicates the value of IDSA in a way that words sometimes cannot.
Aesthetics aside, the core architecture of the entire site (where and how content is placed) is largely shaped by insights gleaned from how visitors actually used IDSA.org over the past few years. We also received regular feedback from our community about features and recommendations, all of which was taken into consideration during the development phase of the project. Using data and real-world input from users to inform our decisions helped us pinpoint where we might improve the overall visitor experience and implement a more holistic presentation of the IDSA organization.
During the year-long project, IDSA collaborated with our web development partners at New Target to create the new site. It’s been quite an effort to migrate, reorganize, reskin, rethink, and refresh 50-plus years of organization history, stories, community events, and industry accolades. Work will continue in the weeks and months ahead to refine the website and collect early feedback from our visitors. We hope you find it valuable.
At their heart, text-to-image generation programs can enable unbridled creativity limited only by imagination and governed by careful curation. As a tool, they can encourage bolder expressions and foster richer dialogue around form and feeling, expanding the possibilities at the beginning of a project. AI isn't going to replace designers by any stretch; used appropriately, it can free designers to focus on being designers. Text-to-image AI technology will greatly impact the design and creative process. Its ability to bring limitless ideas to life in a visually compelling format speaks to a design process where having a clear vision and perspective is a valuable commodity.
I began experimenting with text-to-image programs after seeing early examples of AI-generated images fill my LinkedIn page. I was curious what the impact on industrial and transportation design would be. Those early illustrations showcased fantasy art and dreamy landscapes but lacked a sense of the implications for industrial design. Could this technology be beneficial to hardware design, or is ID immune to the influence of AI? Can visually meaningful design be created from words alone?
From the beginning, my interest has been in exploring the boundaries of the software to understand if it can express the subtlety of form and complexity of surfacing found in physical objects. Can it capture and communicate the emotion found in hardware design? Can it showcase design intent? Can it work within a typical design process?
While the early versions lacked the clarity and precision that have become the norm of industrial design quality through the advances of CAD and rendering software, these text-to-image programs did offer bold and unique visual ideas that demanded attention. Early iterations were rough, crude, and unfinished, much like Photoshop illustrations 30 years ago, but they could communicate something refreshing and original. It was clear to me that while this is not a tool for manufacturing, it is a new means of expressing a creative point of view.
When I first experimented with Midjourney, I made the mistake of using overly specific prompts, which resulted in subpar designs that lacked emotional impact and missed my intent. For example, inputting the prompts “clean,” “minimal,” and “turntable” didn’t exactly produce the crisp minimalist home audio design I had in mind.
Despite the powerful capabilities of the software, I discovered that communicating with a machine through natural language is challenging as it lacks subjectivity. Over time, I have learned to direct the software as if I were collaborating with another designer. By giving objective guidance in specific areas and referencing known product archetypes as a starting point, I can steer the AI while still allowing room for it to bring its own perspective to the design. Where I once got designs that were derivative and bland, I now get results that are thought provoking and original yet still hold a sense of familiarity.
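To make that concrete, here is a hypothetical before-and-after (the improved prompt is my own illustration, not a transcript of an actual session):

    Too vague: clean, minimal, turntable
    More directed: a minimalist turntable in the spirit of classic Braun hi-fi,
    brushed aluminum platter, warm walnut plinth, soft studio lighting,
    product photography

The first leaves the model guessing at intent; the second anchors it to a known product archetype while still leaving room for its own interpretation.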
As the software has advanced, I’ve discovered that AI image generation can produce realistic, visually striking
images that can compete with most modern rendering programs. With careful direction and clear intent, it can generate stunning visual concepts that can serve as stimuli for further design exploration and refinement. It is a powerful tool for producing endless variations and iterations of an idea, allowing a designer to explore adjacent possibilities with ease. The abundance of variations, however, presents the difficulty of determining which ideas to implement among an infinite number of options.
One of the benefits of AI software is its ability to generate large quantities of visual ideas and iterate quickly, but quantity is not quality. Designers play a crucial role here in filtering and editing the output. While AI can produce rich and diverse visual content at a fast pace, it lacks the context and reasoning for why a design should exist. It is through curation that a designer's expertise plays a vital role in determining whether the output is worthy of further development or modification or whether it should be discarded. This highlights the strongest potential for collaboration between designers and AI.
If AI image-generation software becomes widely used in the creative industry, an experienced designer must be in control. This is not a program that instantly produces high-quality design inspiration at the push of a button. It takes careful calibration, iteration, and refinement to achieve meaningful results. It requires numerous variations and adjustments to keywords to guide the output closer to the intended outcome. I typically generate hundreds of design variations with minor prompt modifications before arriving at a meaningful direction that goes through further iteration and fine-tuning. In many ways, this parallels the approach that most designers take as they mold and reshape a design until it is appropriate: exploring and refining an idea until it resonates and filtering out the potential gems from the ideas that fall short. Good design takes time and expertise regardless of the software.
In the hands of a skilled design team, AI image generation could be a powerful collaborative tool that may inspire new ideas and promote alignment among stakeholders. In its current state, it serves as an advanced version of a mood/inspiration board that can bridge initial concepts and bring partial thoughts to life, helping to define a vision that can be built and refined. It is a way to introduce fresh ideas and perspectives that can reveal new and meaningful points of view, allowing for a departure from overused imagery to achieve something novel.
What is the norm in design studios around the world? Carefully curated images sourced from Pinterest and Behance? Maybe rough sketches over images, a quick way of turning source imagery into a winning idea? AI offers the ability to inject something completely novel into the process. Think of it as the designer you always invite to a
brainstorm, not because they give you the right idea but because their kernel of an idea sets you off on a completely new path to find the right idea. The output may be rough and incomplete, but it has the potential to represent the seed of something extraordinary.
Because the output of AI images is unfinished, designers are well-positioned to develop and evolve initial ideas as a foundation for further exploration. In this scenario, the designer’s focus shifts from execution to facilitating and guiding a budding idea, allowing the necessary space for ideas to grow and flourish in the early stages of a project. It repositions the designer as a conductor, connector, and collaborator of ideas and possibilities, encouraging modern industrial designers to adopt a broader perspective beyond the creation of visual assets.
As we carefully move into the era of industrial design + AI, it's vital that the human element remains at the heart of any project intent. AI image-generation software serves as a reminder that the value and impact of industrial design must extend beyond slick renders and hot sketches. Modern hardware design, development, and manufacturing are incredibly complex, and meeting consumer expectations requires a comprehensive approach to design. AI tools in the design industry raise important questions about ownership and plagiarism, but this tension has always existed within the creative world.
As designers use AI to generate and manipulate images, they must be aware of the line between inspiration and plagiarism and take responsibility for ensuring that their work is original. This requires heightened awareness of the sources of their content and a clear understanding of when they have gone too far. The complexities of using AI in design must be carefully navigated to ensure that the rights of creators and the integrity of the design profession are respected.
As we look to the future and the challenges ahead, I am optimistic that AI can serve as a valuable partner in creating better products. I hope the design industry views AI image generators with curiosity, recognizing that they can be shaped, harnessed, and directed toward meaty problems. I view this as an opportunity to use a new technology to solve problems, inspire new ideas, and challenge myself to become a better designer.
—An Improbable Future an.improbable.future@gmail.com
@an_improbable_future is a New York-based industrial designer.
Thinking about the exciting acceleration of digital visualization tools today, I find myself reflecting on the past and how I once imagined designing in the future might look. Compared with where we are today, the quality and speed of these tools have vastly improved, but looking forward, we are on the precipice of witnessing a fundamental shift in the creative process. As the ability to visualize nearly anything instantly crystalizes, an iterative and considered design process centered on who new designs are created for, and why, is ever more critical.
I have a vivid memory from deep within the trenches of my early industrial design career. It’s mid-2004, one year out of college. I’m working late. Bent over my CAD workstation, I churn out variation after variation of some new concept I had roughly sketched earlier in the week, methodically exploring the multitude of form possibilities and harnessing the powerful flexibility that parametric CAD modeling had only recently enabled. The potential seemed infinite: “Just one more tweak, and…”
Yet in this memory from nearly 20 years ago, I was deeply frustrated. “Why can’t this be more automated?!” I wondered. “Such a tedious process!” I had already set up most of the repetitive commands as preprogrammed macros via keystroke shortcuts (a very wise suggestion from the CAD Zen master, err, senior designer on the team back then—thank you, Tony!), but it wasn’t enough. I wanted more than just those rote UI functions automated. I wanted the range of design options themselves to be generated more automatically. At that moment, I wished I could set up a few constraints, press a button, and have the CAD software output a range of all the relevant, nuanced surface variations I could ever dream of. You can probably already tell where this story is going.
Back then, I felt like I wasted so much time coaxing glitchy software to cooperate and fiddling with finicky 3D files, haywire splines, and convoluted, ancient user interfaces, instead of spending more time on what really mattered most: analyzing, ideating, iterating, and refining to help elegantly solve whatever design problem was at hand.
Neglected clumps of modeling clay wistfully collected dust behind my computer screen, and bits of carved blue foam were becoming a faded college memory. I didn’t feel efficient or effective, saddled with this clunky CAD software. But my longing for a more advanced 3D-modeling generative system was a sentiment I heard from other designers over the years: “Wouldn’t it be nice if…”
The mid-1990s and early 2000s were a time of enormous paradigm shifts within the ID profession. PC workstations proliferated, and new visual technologies became the gold standard. "LEARN CAD OR DIE – or go into management" was the cry. Having graduated college in 2003, I didn't yet grasp the fundamental changes this digital transition from traditional hand skills to CAD and 3D software visualization was having on the creative craft of developing mass-manufactured products.
When I was in school, CAD seemed like an optional bonus skill; in fact, 3D modeling classes weren't required as part of my ID program until the year after I graduated. While I did learn 3D modeling and rendering independently for a few studio projects, I wish I had embraced 3D tools with more enthusiasm back then. However, I also realize the deep value of having had a solid education in both design fundamentals and processes without being dependent on digital means. As with the currently emerging AI visualization tools, 3D modeling and CAD are just that: tools. Tools to assist and support designers as they work through the design process. Tools to use in the pursuit of problem-solving, developing novel, creative ways to address an issue, and communicating those concepts to stakeholders and the people the solution is intended for.
Looking back over the last 20 years, it’s obvious why digital tools were adopted so quickly by the industry. They increased clarity and communication with business stakeholders, facilitated faster buy-in, provided precision details to guide engineering teams, and gave designers a higher degree of control over important design elements, accelerating the time from concept and review to approval and production.
But the design software of the early aughts was limited and quite difficult to learn, and sometimes projects
were held captive by the shortcomings of the technology. I longed for a future when new design tools would make my life as an industrial designer easier, cutting away the unnecessary technical barriers of computer interfaces and letting creative problem-solving flow as fast as thoughts could produce them. I wished for AI generative design tools. Now these generative visual tools have made their way into mainstream use in an explosion of new progress and creative expression—for better or worse, the industry’s next big paradigm shift.
Early in 2022, postings began appearing online of artwork generated from text prompts, strange fever-dream AI images with odd fractal-like noise patterns in the margins. Curious and bizarre, they were a bit of a novelty, not particularly useful beyond creating some ambient atmosphere. At some point, I signed up to beta test a promising entrant into the AI visualization realm: Vizcom AI, a tool aimed directly at enhancing and streamlining ID concept sketches. I found it enjoyable to dabble with, and I could see how it could help speed up quick visualizations from rough outlines to more concrete results, or at least get a head start on forms before spending a few hours in front of a Wacom or iPad detailing an idea.
Then came the realistic DALL-E images. These were near photographic quality. I had to test it out. As soon as my beta invite arrived, I had a new obsession—and a new frustration brewed. For every handful of useful visuals, it would produce a lot of garbled, unusable ones. Most of the time it struggled to produce what was in my mind's eye. The text inputs often failed to yield output that reflected my intent, requiring rework after rework after rework—where was the time savings now? But because the AI-generated images would often take quite unexpected directions, this created a beneficial and interesting side effect: sparking new ideas. It suddenly became a surprisingly useful tool of inspiration, a fractal flood of creative threads unfolding in infinite directions. While at first DALL-E 2 was most impressive in the area of photorealism, with the most recent releases of Midjourney V4, these images have become incredibly advanced and surprisingly creative. Midjourney skews more painterly, more artistic, and more ethereal, but recent updates have
improved its structure and realism even beyond DALL-E 2. I started to feel uneasy about it all; it was too easy. I imagined this must have been how oil painters and portrait artists felt the first time they saw a daguerreotype.
Every month updates to Midjourney are released, and with each push, my initial frustrations with the limited and chaotic results decrease. Now Midjourney can replicate uploaded custom sketch styles and follow an intended direction more closely. It is like a creative accelerant, rocket fuel for fleshing out concepts and ideas. In the nine months since I started using Midjourney, I've created over 15,000 images. That's more than 1,600 a month, or upwards of 50 images per day.
While I am not using any of these images for actual designs or concepts (yet), I am using Midjourney to explore ideas about how to approach a design and produce supporting visuals to help communicate designs. I am also using it for inspiration, like a creative sidekick to bounce ideas off of and to generate pictures to create a mood or feeling for a desired theme. I can create backplate images to use in KeyShot of differently styled rooms or a relevant situation and render a 3D-modeled design within those scenes. I can create advertising layouts and thematic mood boards to share with marketing teams to help tell a product story. Midjourney cuts away some of the time and technical barriers to homing in on and communicating an idea.
I also started using Midjourney in unexpected ways. Rather than creating images for a specific purpose or to share, I used them to explore a mashup of concepts with form, pattern, perception, and compositions, weaving various connections within my mind as an almost cathartic internal discovery process. Some of the more tangible ways I’ve used it include creating unusual jewelry designs, packaging patterns, whimsical fashion design concepts, interior design layouts, and most commonly, half-baked coffee-table book ideas. I even had it generate a still-life painting that I painted and taught as a paint-and-sip class for an annual women’s educational charity fundraiser.
The eeriest feature of Midjourney, however, emerged when I started inputting my own artistic works and sketches. With the right prompting, it can re-create oddly similar stylistic vibes of any painting or sketch I could imagine, capturing the same feeling and color palette of the original with a mashup of whatever concept I layered into the prompt. I still feel a bit uneasy about how advanced these tools have already become.

“AI will not replace you. A person using AI will.”
—Meme
The future potential of AI and generative tools is already pushing the next transformation forward. I'm excited about all the useful ways it can support designers' creative work, helping them realize and refine concepts faster with a high degree of quality. I do not see AI visualization tools as a replacement for fundamental and more traditional analog design techniques, especially for concepts solving niche and complex problems with specific use cases that need careful and specialized consideration. But I do see the advantage of virtually limitless concept inspiration as a way to enhance the conceptual exploration process, balancing the concrete practicalities of many design solutions with new, expansive potentials.
The future is never entirely clear, and the future of AI in the creative space is not immune to this elusiveness. Amid ongoing issues, such as legal and copyright questions, problems of built-in biases, the potential for abuse, and other ethical concerns that have inundated online debates lately, it is clear that this technology is under a lot of scrutiny. We must move forward with care and consideration. Within the domain of creative media, the pace of research initiatives and breakthroughs has only increased, with new AI and machine-learning tools and solutions emerging and advancing quickly. I suspect more unforeseen challenges
will arise, but I am optimistic that equitable and effective solutions will be found that will harness the transformative potential of these tools.
Outside of creative endeavors, AI and machine-learning techniques are creating incredible breakthroughs in medical and biotech applications to achieve more accurate diagnoses and create new medicines and treatments from data formerly too voluminous and complex to parse. I imagine it being used to develop new materials with better performance and ecological properties and new ways to reclaim and recycle finite resources to support future populations. It could help tackle current waste, climate, and energy issues, providing advanced insight using massive amounts of data to make improvements within areas like infrastructure, transportation, agriculture, and even government.
No industry will go unaffected. The skilled physical trades will benefit as the mechanical tools and systems traditionally used in industry are optimized and improved. Soon, tools refined through AI-driven analysis and insight will fundamentally enhance every touchpoint in life in ways we can't yet fully understand. These days, logging into OpenAI's ChatGPT feels like an episode of Star Trek: "Computer, theorize, what if…" Considering the advances in AI technology, coupled with VR/AR/XR, we will soon be experiencing the world in uniquely individualized, tech-enhanced physical spaces. It is hard to say how life will look in the next 20 years.
Typically, though, for every trend in one direction there is some reaction in the opposite direction. Escaping technology is already difficult, with its ongoing encroachment into every facet of life. We need better ways to help people unplug and connect with the natural world—and especially connect with each other—to maintain a balance of health and wellness.
I can only hope that these advancing tools will help foster a much-needed distance from screens, increase natural environments in urban spaces, and build human connections offline while also shifting the more pervasive technologies into the background and reducing cognitive load and strain. In this vision, AI and machine-learning technologies make more room for living life, increase personal enjoyment and self-actualization, and give people the space to experience the spectrum of the human condition—untethered.
Herein lies the last point I'd like to touch on. The work designers do to create the tools, products, systems, infrastructure, and industry used in society relies upon us understanding and empathizing with the human condition. The things we design are created for humanity and the living environment around us. The design fundamentals—rooted in iterative design processes that seek to understand the problems that need solving—are vital. Keep this in
mind! These design fundamentals will always be centered on elements of human factors, physiology, psychology, diversity of global cultures, attitudes, generational stages, and the environment we live in.
As technology ever increases, so does the need to focus on the reason why we do any of the things we do: to create products that will improve the human condition and create a better place for the beings that inhabit this world to thrive. We have a lot of work to do, systems to adapt, and equilibrium to find within the limits of our earthly environment. Hopefully, the citizens of the world will use these new technologies with a sense of deep purpose. I look forward to the next 20 years and all the progress, pitfalls, and potential that lie ahead.
From art generators to chatbots, AI seems to be having its zeitgeist moment in popular culture. But for those of us who work in design, the near-term and future applications of AI have been lively discussion points in strategic planning meetings for quite some time. There is no doubt that AI will be an instrumental part of our world’s future. It will allow us to rapidly synthesize all the data being collected via our phones, cameras, computers, smart devices, and much more, giving us the ability to decipher and understand that data in illuminating, meaningful, and, likely, world-changing ways.
What does this mean for the design industry? Though it may be a long time before AI is able to design a product from the ground up, the potential is clearly there. In fact, we believe AI is a tool that designers should be adding to their arsenal sooner rather than later.
To put our money where our industry-informed opinions are, the Kaleidoscope Innovation team recently embarked on a studio project to design a high-end lighting fixture that could mimic lighting patterns found in nature. The project would enable our team to flex our aesthetic skills while using the full range of our design toolbox. One of those tools is Midjourney, a proprietary artificial intelligence program produced by an independent research lab of the same name. Though still in the open beta phase, Midjourney proved to be a useful partner in our mission. The collaboration between AI and
the guiding hand of our expert design team delivered intriguing results.
One important distinction about the AI portion of the project: We were not setting out to produce real-world functionality, and in fact, we had no expectation or need for the AI to produce fleshed-out ideas or even design sketches. This experiment was about exploring new territories in aesthetics and applying them to materials and manufacturability considerations.
Our first step was to gather a team to collaborate on the search terms that would help visually articulate the aesthetic aspirations for our new fixture. Midjourney works by inputting text-based prompts, which the AI algorithm uses to generate new images from patterns learned across vast collections of existing images. We fed the terms into the algorithm: chandelier, lighting, brilliant, elegant light, airy, crystalline patterns of light, dancing, photorealistic detailed plants, greenery, daytime, bright, modern, beautiful, natural colors, garden, and greenery. The team also used technical inputs alongside these qualitative descriptors to determine the aspect ratio and resolution while also guiding the algorithm to reference certain lighting styles and rendering approaches.
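Assembled into Midjourney's command format, a prompt built from those terms might look something like this (an illustrative reconstruction; the parameter values are assumptions, not the team's verbatim input):

    /imagine prompt: elegant modern chandelier, crystalline patterns of light
    dancing through photorealistic detailed plants and greenery, bright
    daytime, garden, natural colors --ar 3:2 --v 4

Here --ar sets the aspect ratio and --v pins the model version; Midjourney accepts similar flags for quality and stylization.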
Digesting these descriptive words, Midjourney drew on patterns learned from vast amounts of training data to create original—albeit amalgamated—artwork. The images it produced reflected the algorithm's interpretation of the inputs the team provided. From there, we tweaked specific inputs to alter the color, lighting, tone, and subject matter, continuing to
iterate until we had collected a series of AI-generated lighting fixtures that could inspire the team.
Based on the text inputs the team provided, Midjourney was able to identify design elements that could produce the effect of light shining through leaves. The images it produced looked organic, almost surreal in the way they captured the kind of nature-made glow and transparency that is elusive in real-world lighting solutions. The various iterations of artwork then became mood boards that set up our team to brainstorm ways in which the effect could conceivably be produced.
The algorithm’s interesting use of materials, colors, lighting effects, and overall mood inspired us to apply those attributes to a holistic design. In other words, instead of our team scratching their heads visualizing how the light should transmit, AI provided us with ideas that enabled us to focus on materials, manufacturability, technical requirements, and more. Rather than spending hours scouring the internet for inspirational imagery, the team was able to craft that inspiration imagery ourselves through AI in a fraction of the time, imagery that exactly aligned with our design vision.
Without question, Midjourney served as a highly effective springboard that sparked ideas our team would probably not have come up with starting from a blank sheet of paper and pen. In this sense, AI provides an upfront efficiency that can move a project farther down the road faster than it might otherwise have gone. Perhaps more than that, a significant strength of AI in this application is that it can cast a wide net in terms of inspiration and exploration. It’s an open mind, and designers should be willing—and eager—to go down the rabbit holes, teasing out new possibilities. Once an intriguing direction is established, the designer can take over to turn the AI-generated inspiration into an actual product.
The key to a successful AI collaboration is plugging in the right words or phrases to best draw out the AI. And so, crafting prompts could be viewed more as art than science. Further, with a program like Midjourney, there is an element of unpredictability: You don’t have much control over what you’re going to get out of it. There is a lot of trial and error and shooting in the dark. Therefore, if you already have a set idea in mind, using AI to design it will probably be more frustrating than productive.
The inherent aspect of exploration and discovery is a factor to consider as well. Our team felt excited about experimenting with this technology specifically because
the lighting fixture was an internal project. Had we been designing for a client, we would have been more hesitant to use AI while balancing the product requirements, timeline, budget, and resources.
Lastly, because this was a purely aesthetic exercise, we weren’t trying to solve any mechanical problems through AI—that skill is not in its wheelhouse at this point. This limitation provides a real barrier to the widespread adoption of AI, but as the algorithms improve over time, AI may be able to help us solve even our stickiest mechanical problems.
Beyond leveraging AI for creative exploration, Kaleidoscope has also put it to use in some of our research work. As part of our insights and user experience programs, we often do ethnography and time-and-motion studies in which we observe individuals interacting with a tool or experience. Typically, one of our team members is responsible for reviewing videos to log data, tracking everything from how often someone does something to the amount of time it takes them to do it. It’s a time-consuming process that has led us to start dabbling with programming
AI to analyze video recordings for certain elements and then export the data quickly and effectively. Using AI to track the frequency and duration of actions for time-and-motion studies shows tremendous potential to save time and reduce costs while freeing our team members to focus on higher-value analysis.
The Kaleidoscope team came away with an appreciation for where AI can support our design efforts today, particularly as a powerful aid in producing aesthetic inspiration and as a tool to sort and output raw data. Both help the design process in productive ways and serve as a small window to what may someday be an AI-driven design future.
—Tony Siebel, IDSA (article), Tom Gernetzke (sketches), and Caterina Rizzoni, IDSA (AI generation) tsiebel@kascope.com; tgernetzke@kascope.com; crizzoni@kascope.com
Tony Siebel is director of design at Kaleidoscope Innovation, delivering a user-centered mindset to products and experiences. Tom Gernetzke is a senior lead industrial designer at Kaleidoscope Innovation and has spent the last 12 years creatively bringing new product ideas to life. Caterina Rizzoni is a lead industrial designer at Kaleidoscope Innovation and is the Director-at-Large of Conferences for IDSA.
The images curated from Midjourney's output inspired the team to create concepts that combined the industrial designer's knowledge of manufacturing and usability with the open-ended creativity of artificially intelligent image generation.

When we think about artificial intelligence, it's easy to think only about its physical form, from R2-D2 to Sophia, the world's most humanoid robot, whom you might have seen singing "Say Something" with Jimmy Fallon. But it's the stuff that powers robots and other automation that is the most interesting and most impactful element of AI right now, particularly for service delivery.
It’s not the first time that we’ve heard about AI, but what’s changed is an increase in the number and accessibility of the tools using AI. AI-based tools are becoming more publicly available and don’t require much data or tech literacy, so we’re seeing an explosion of new uses both in service delivery and how we design services.
There are many platforms out there, but ChatGPT has democratized access to AI in a hugely significant way, and the internet has gone wild for it. ChatGPT is built on top of GPT-3, which some consider a first step in the quest for artificial general intelligence. GPT-3 is what's called a large language model (LLM). LLMs can be thought of as statistical prediction machines: text goes in, and predictions come out. There are other language models out there, including Microsoft's Turing-NLG, Google's BERT and XLNet, and Facebook's RoBERTa. However, GPT-3 is considered one of the most advanced and capable models currently available.
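To make that text-in, predictions-out idea concrete, here is a minimal sketch in Python using OpenAI's GPT-3-era completion API (the model name, parameters, and placeholder key are illustrative assumptions):

    import openai

    openai.api_key = "sk-..."  # assumption: your own API key

    # Text goes in as a prompt; the model returns its statistical
    # prediction of the most likely continuation.
    response = openai.Completion.create(
        model="text-davinci-003",  # assumed GPT-3-family model
        prompt="Explain what a large language model is in one sentence.",
        max_tokens=60,
        temperature=0.7,
    )
    print(response.choices[0].text.strip())

Everything the model "knows" is baked into those learned statistics; there is no database lookup behind the prediction.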
How we deliver services in the future, and what we as humans do in service-delivery roles is on the precipice of changing beyond recognition. This means that those of us in design disciplines from service and UI to content and product will be forced to change how we work and what we design faster than we think and whether we like it or not. Here are some of the areas to take a closer look at for the future of service design.
1. Google will be challenged for the first time in over a decade.
GPT-3 is like having a friend who has read the internet. It allows you to search for anything you like and determine the format you’d like the response in, from a table to the tone of voice being used. The ability of large language models to read and write has developed in the last few years beyond recognition. This is going to dramatically change how we search for stuff. If Google is worried, having declared it a “code-red” threat, you can guarantee that it will have a dramatic effect on how we interact with the internet.
The Ask Me Anything search engine, a very basic GPT-3 demo dating from 2020, shows an early prototype of a new, more conversational kind of interaction in the quest for information. If, as we said in 2020, Google was your service homepage, LLMs and conversational interfaces could be its replacement: the new way your users find your service and interact with it. The AI-powered search engines You.com and Neeva.com are good examples of this, rethinking how we'll be presented with the information we're looking for and challenging Google's long-held monopoly on search. Beyond end users, enterprise search is going to
change too. How staff and companies surface internal knowledge and data will be transformed. LLMs will enable true semantic search for the first time, moving beyond tags, categories, and keywords into much more powerful information access through conversational interactions.
2. Static guidance could become a thing of the past. The Co-op, a British co-operative conglomerate offering food, pharmacy, insurance, legal, and funeral services, recently launched a hugely successful "How do I" service for its staff to ask questions about service delivery. Imagine, though, if rather than serving static content, this service were powered by the GPT-3 API, grounded in the company's data and trained over time. Alongside dynamic guidance content could sit AI co-piloting support, where your staff provide user support based on guidance updated in real time, saving time and money and producing faster, more accurate customer responses. Imagine Ask Me Anything, but using your data, aiding your staff to deliver and tailor responses to customers, and continuously trained by the knowledge of your staff to answer more accurately.
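As a minimal sketch of what that co-piloting could look like, assuming the GPT-3 API and a hypothetical export of internal guidance (the file name, prompt wording, and parameters are all invented for illustration):

    import openai

    openai.api_key = "sk-..."  # assumption: your own API key

    guidance = open("how_do_i_guidance.txt").read()  # hypothetical internal guidance export

    question = "A customer wants to amend a funeral plan bought in 2019. What do I do?"

    # Ground the model in the organization's own guidance rather than
    # letting it answer from its general training data.
    prompt = (
        "You are an assistant for service staff. Answer using only the guidance below.\n\n"
        "Guidance:\n" + guidance + "\n\n"
        "Staff question: " + question + "\nDraft answer:"
    )

    draft = openai.Completion.create(
        model="text-davinci-003",  # assumed GPT-3-family model
        prompt=prompt,
        max_tokens=250,
        temperature=0.2,  # low temperature keeps the draft close to the guidance
    )
    print(draft.choices[0].text.strip())

The point of the pattern is the last step: a human co-pilot reviews and tailors the draft before it ever reaches a customer.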
Something similar could be applied to large support and information services. What if free legal-aid organizations trained GPT-3 on their existing data set and co-piloted it, providing AI-based support in partnership with human beings? It could be a game changer.
We recommend that organizations start playing with this technology and seeing how it works with their staff, customer interactions, and existing data sets. If not, someone might challenge you with a new service, scrape your data, or offer something more conversational and logical for users as the public becomes used to this new interaction pattern on the internet.
3. Customer service automation powered by users. To take this further than dynamic content and co-piloting, there is a future where GPT-3 or equivalent AI models could be used to create more advanced and natural-sounding chatbots and voice assistants. This can help companies improve their customer service and make it more efficient while also allowing customers to interact with them in a more natural and conversational way. Rather than those awkward human-designed multichoice "logic" chatbot models where you're asked to choose from a range of options like (1) Where is my parcel? (2) Return my parcel or (3) LET ME TALK TO A HUMAN NOW PLEASE!, AI-powered customer service automation could intelligently understand what customers are asking for without the need for predefined terms. This could be possible not only with chatbots but also with voice recognition over the phone.
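A sketch of the difference, again assuming the GPT-3 API (the intent labels and prompt wording are invented for illustration): instead of forcing the customer through a menu, the model maps their free text onto the service's actions.

    import openai

    openai.api_key = "sk-..."  # assumption: your own API key

    message = "hiya, the thing I ordered last week still hasn't turned up :("

    # No predefined menu: the model classifies whatever the customer writes.
    prompt = (
        "Classify this customer message as one of: track_parcel, return_parcel, "
        "speak_to_human, other. Then note any useful details.\n\n"
        "Message: " + message + "\nIntent and details:"
    )

    result = openai.Completion.create(
        model="text-davinci-003",  # assumed GPT-3-family model
        prompt=prompt,
        max_tokens=60,
        temperature=0,  # deterministic output for routing decisions
    )
    print(result.choices[0].text.strip())
    # e.g., "track_parcel - order placed last week, not yet delivered"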
Ada is one of a few serious contenders in this space,
running AI-powered customer service automation. Founded in 2016, Ada programs chatbots to perform tasks such as booking a flight for an AirAsia customer or tracking orders and returns for Meta’s virtual-reality products. Partnering with the GPT-3 API, Ada is training its chatbots to respond more accurately and more “humanly” to customer inquiries and tasks and learning in the process, helping companies tailor the learning specifically to their data. This means being able to perform tasks and transactions in the words of the users.
If this works well, there’s the possibility that this could lead to a reduction in the instances of users having to become experts in the language of the service provider. We recommend that organizations start looking at the points where they interact with customers and looking at how they might rethink these if APIs like OpenAI’s GPT-3 are used. What would this new interaction look like?
4. Generating personalized content. GPT-3 can generate personalized content for customers, such as product recommendations, email marketing campaigns, and images and videos based on basic prompts or product descriptions. From image generator DALL-E, which creates images from text prompts, to Copy AI, which generates copy for blogs and "conversion," content generation for services could become increasingly personalized and automated.
There are already thousands of tutorials on TikTok from content marketers about how to become a content creator and get paid $$$. They are focused on auto-generating ads, brand slogans, top-ranking blog posts, and advertisements using simple prompts like “Write me 10 blog post titles with introductory paragraphs based on x.” They have even created “finished” low-code services that have been conceptually generated by asking GPT-3 for GPT-3-based startup ideas. Content is being created in seconds.
Now, we're not advocating using these tools to replace content designers, user researchers, graphic designers, or UX folks, but whether you like it or not, it is happening. To some extent, the vast amounts of content we consume on the internet are going to become auto-generated, challenging the traditional economic models of creative industries.
There are deep plagiarism concerns with GPT-3 (and we share these too), but they aren’t being overlooked. Developers and technologists have been having a crack
at the challenge. Edward Tian, a 22-year-old computer scientist, created GPTZero, which he claims can sense if an essay has been written by AI or ChatGPT. Beyond plagiarism are bigger questions about what happens to learning if we're just copying and pasting from AI.
But has the horse bolted on this? Are we already too far past the ability to solve the challenge of plagiarism and deep copyright issues? There are examples all around us: an open Google spreadsheet containing Midjourney prompts for generating images in the style of different artists, Jacksucksatlife finding that his voice was stolen by an AI voice clone, AI-generated music using Uberduck to create raps in the style of Drake, and OpenAI’s Jukebox producing full tracks following different artists’ styles.
Say goodbye to those expensive rights payments for hold music. Choose the music style, beats per minute, the artist you like, and the key, and you'll have a new track in your hands in under a minute. However we take on the challenge of plagiarism, theft, copyright, and learning, services are going to be using this content in the future whether we like it or not.
5. Knowledge industries are being challenged.
Traditional knowledge industries are going to be challenged. DoNotPay, the home of "the world's first robot lawyer," says that it's here to "fight corporations, beat bureaucracy and sue anyone at the press of a button." It is challenging the legal industry by providing autocomplete forms for various common and repetitive challenges, from fighting bank fees to getting refunds from Southwest Airlines for poor Wi-Fi—a sort of RoboCop of bad service design.
DoNotPay's bot is built using OpenAI's GPT-3 API and is now interacting with live chat functions between services. It managed to get a discount on one team member's Comcast internet bill directly from Xfinity using the live chat, saving that person $120 a year. Joshua Browder, the founder, says this is just the beginning of using GPT-3.
It’s not only in digital applications that this is happening. DoNotPay has been crossing the boundary from digital to real-world applications by allowing defendants to defend themselves by following the prompts from an AI lawyer bot in the courtroom. Browder, wanting to prove to skeptics that this service is for more than just speeding tickets, is offering a lawyer $1 million to let its AI argue a case in the U.S. Supreme Court where the lawyer must do exactly
what the AI says. He has made some bold claims recently, saying, “Eventually corporations will also deploy the same technology, so one could say we eventually will have a ‘my AI will talk to your AI’ scenario.”
A similar service is Pactum, which is automating negotiations and supplier deals with humans absent for much of the process. GPT-3 won’t fully replace contract drafting and negotiation anytime soon, but it can augment the process of contract generation, analysis, and e-discovery.
Knowledge industries, look out. These bots are learning fast, and if users think they can take the cheaper option of not hiring a specialist or automating parts of their end-to-end service, they just might. We think services may be designed to pivot from hiring an expert end to end to instead using experts to advise on the inputs to the AI and to check the outputs.
Of course, many elements of industries like law and other knowledge industries are also about tactics: narrative, pitching, tapping into deep human emotion, and building relationships in and around the regulatory contexts we work in. This won’t be replaced (yet!), but it’s not out of the question that AI trained to respond to new situations and learn what humans believe works or is the right thing to do will one day cease to need human supervision.
6. Auto-response customer reviews.
The digital service explosion of the '00s introduced the service pattern of online ratings and, with it, a change in user behavior from top-down marketing campaigns to shared real-world experience.
When companies respond to reviews, their ratings go up. A study published in Harvard Business Review found that when hotels started responding to customer reviews, they received 12% more reviews, and the knock-on effect was a rating increase of, on average, 0.12 stars. That might not sound like much, but services like TripAdvisor round displayed ratings to the nearest half star, so a small bump can tip a hotel into the next bracket (a 4.15-star average displays as 4.0, while a 0.12-star increase to 4.27 rounds up to 4.5), making the hotel more likely to be advertised. Responding to reviews matters.
Services like Replier use AI trained to write replies to customer reviews. Where before you could sense those dodgy paid-for fake replies, it's becoming increasingly hard to tell them apart from what GPT-3 would write. We think services are going to start using these tools to battle it out in the ranks of ratings and SEO to get to the top of search engines.
7. Human processing will be slowly erased.
GPT-3 has the potential to enhance service industry workflows by automating tasks that are currently done manually and providing more accurate and detailed
information. Tasks such as scheduling appointments, processing orders, and making recommendations can all be automated. For example, it can analyze large amounts of data, generate reports, and identify patterns in a fraction of the time a human would take.
Much of the focus of AI replacing humans so far has centered around blue-collar work, replacing humans in factories. But white-collar work that doesn’t require deep human connection to deliver a service is going to be majorly disrupted. We don’t think people quite comprehend the scale yet.
Quickchat is a multilingual conversational AI assistant powered by GPT-3. Quickchat's chatbots can recognize and speak a user's native language and can automate customer support, online applications, searching through internal knowledge bases, creating data sets, analyzing patterns, anything. Simply integrate your FAQs, product descriptions, internal documentation, or example conversations, and your bot will learn over time how best to respond. You can connect Quickchat to your internal API, database, or any other data structure to automate more complex processes using AI.
This is the real "robots are coming for your job" cry. Some studies estimate that up to half of the current workforce will soon see their jobs threatened by automation, and that more than four in five jobs paying less than $20 per hour could be destabilized. The Forbes Technology Council named 15 industries most likely to transform, among them insurance underwriting, warehouse and manufacturing jobs, customer service, research and data entry, long-haul trucking, and a broad category titled "Any Tasks That Can Be Learned."
It raises the question, if we take away even a fraction of manual processing work, what do humans do now? Or as futurist Martin Ford asked, "How do we find meaning and fulfillment in a world where there is no traditional work?" It obviously requires global systems to shift and a complete rethinking of economic, political, and societal models. Experiments in new ways of living, like universal basic income, are being piloted, and we need more of this.
Of course, for a good few years people involved in processing jobs will be co-piloting AI, helping it learn and picking up complex cases that need more support. And there will be new jobs that we can’t imagine yet, just like the 19th-century Industrial Revolution changed work and society.
Can automation of repetitive manual processing tasks free us as humans to do the work we're in short supply of—mental health support, social work, nature restoration, education, healthcare—the human
interactions so desperately needed in services of all kinds where we use technology to fill gaps?
8. We're the new AI co-pilots. For now, we know that there are risks in using technology like GPT-3 and that it is still very bot-like. But this is improving: GPT-4 will be an even larger language model than its predecessor, so the precision and intelligence are only going to improve. In the interim, co-piloting is a likely service pattern we will see over the next few years: humans co-piloting with technology like GPT-3 to enhance or speed up their responses and task processing for customers.
Rob Morris of Koko recently explored what OpenAI's API could do in a mental health app with real users, although we're unclear about the ethics here. Morris used GPT-3 to create responses to mental health queries, with humans co-piloting the responses. Needless to say, Morris admitted that it didn't work out and that users could sense the bot-like responses; however, this may change as the technology develops.
Ada is also using OpenAI’s models to create summaries of conversations between a bot and a customer before handing off a ticket to a human agent so they have more information to base a human response on.
This co-piloting process is also being used to filter and find the more complex cases that need human support. Look out for AI becoming your companion at work. We wouldn't put it past you to end up naming them and talking to them in the future.
8. Verifying real expertise. The quality of the text and guidance generated by GPT-3 is sometimes so good that it can be difficult to determine whether it was written by a human, which has both benefits and risks.
It may be difficult to know if the person you are communicating with is an expert or not as the model is designed to generate human-like responses and cannot verify the expertise of the person it is communicating with. However, it is important to remember that GPT-3 should not be seen as a replacement for human expertise, but rather as a tool to assist in generating human-like text.
Small truth: I actually had GPT-3 write the second paragraph. Could you tell?
This will change how services are designed. We may see user flows predominantly powered by this technology to provide advice, guidance, or designs and then have the user check in with an expert at the end. We could call this service pattern “expert check” or “human moderation.”
This pattern sort of exists now. Take designing an IKEA kitchen: You build it online yourself, but most of us book an appointment with an expert in the store or online to work through it. In healthcare, it happens with home kits for testing blood and food intolerances: We self-test, get results, are given generic health guidance “powered by experts,” and then make our own decisions. I can imagine us in the future being prescribed remedies by AI-powered services based on a range of home tests we administer ourselves, with the prescription then checked by doctors: a real human-moderation service pattern.
In an unfair capitalist world, we may even have to pay more to have AI’s expertise checked by humans. And then we have to ask the difficult question: Would the human check actually be better than what the technology could do, given the amount of knowledge and the connections between symptoms and data it can cover? How would we know?
In services, this might mean more verification patterns are needed to tell who really is an expert and who really has experience. What if you think you’re talking to an expert and they’re using AI to respond to you? This will be a growth area for sure. Where will humans fit into advice and guidance services?
9. The AI friend in the service gaps. Over a decade ago there was fanfare for holographic “humans” telling us to get our passports ready at airport terminals across the world. Now, using tools like Synthesia, we can create videos of virtual humans in any language with any accent that will, in less than 10 minutes, read from scripts we provide. It’s not perfect, but the believability is improving at a fast pace.
If you thought that was wild, let’s take it a step further and consider more humanoid forms like digital “human” companions. This isn’t new technology, but with the pace of development, we think we’ll begin to see these slot into existing service models.
Digital humans like Sophie, built on UneeQ’s digital human platform, use GPT-3 so you can talk and interact with them. These companions learn from you. They’re smart, and they are becoming more human-like. If you want to peer into the future of how humans interact with robots, watch the TV series Humans for all your political, social, and cultural hot takes on AI.
Before a complete replacement of humans in services (we’re speculating, not advocating), we believe these digital companions will first be integrated into wider service life cycles. Let’s look at a service area with high demand for human contact time: mental health.
The National Health Service (NHS) in the UK estimates there are 1.4 million people on a waiting list who have been told they are eligible for care. Over the past five years, we’ve seen an increase in self-help-style apps for mental health support. I worked on one with Samaritans in which we collaborated with health experts to package cognitive behavioral therapy techniques into an app. There are lots on the market, some NHS approved; they are commonly integrated with, or even prescribed alongside, services that provide face-to-face support to plug the gaps between appointments or where there are huge wait times.
There are plenty of AI-powered mental health apps: Clare&Me talks directly to users as if it were human through a WhatsApp-style interface; Kintsugi is a journaling app powered by AI voice-recognition technology that can detect mental-health challenges in any language; and TogetherAI is a digital companion chatbot that lets kids vent and parents understand their child’s feelings without invading their privacy. Kai is probably the closest to a digital companion, neatly advertised as “The AI companion that understands you, but doesn’t replace the role of human relationships”; it encourages you to form bonds with other humans.
But what happens when we make these AI-powered talking therapies more humanoid, beyond the standard design patterns of timeline-based chats?
Many people have been using Replika for years on apps and browsers and in VR as a friend to help them with anxiety, loneliness, and common mental health issues. The relationships we build, though, exist at the mercy of the business models of the companies that run them. For companies like Replika, that can mean pivoting into more lucrative markets like sex work. One day you could be chatting away with your friend, and the next, your friend has become aggressively flirtatious, as Vice recently reported. We tried it so you don’t have to!
We think we’re going to see more digital companions in humanoid-like formats, possibly even based on the real human experts a user is seeing, since voice, face, and mannerisms can all now be replicated in humanoid-like avatars. Of course, there are huge risks when it comes to using this kind of technology in clinical pathways. We’re avidly waiting for more research to be published on AI-based clinical interventions, so it goes without saying that you must work with experts and test, test, and test some more.
It could be any industry, though: mechanics, mortgage advisors, nutritionists, personal trainers. We think that in the medium term, digital companions will be part of the overall service life cycle, integrated with real humans.
In the original paper introducing GPT-3 in 2020, 31 OpenAI researchers and engineers warned of GPT-3’s potential dangers and called for research to mitigate the risks. From copyright, online safety, and plagiarism to human bias and systemic inequality, it all has to be faced when rolling out this technology, and it hasn’t been yet. We shouldn’t forget that in our enthusiasm for AI’s possibilities.
AI is going to raise hundreds of ethical and political challenges. Think of an app that can sense our emotional state and Amazon using that data to up-sell us remedies across our digital footprint. Who stops that? Is that OK? What regulation will maintain our privacy? What business models actually support users rather than make them the product?
But we’re chasing a fast-moving target. At some point in 2023, GPT-4 is going to drop. Rumor has it that GPT-4 will be multimodal, able to work with images, videos, and other data types. The use cases are almost beyond imagining.
This technology will change services beyond recognition. We should know what it can do and, as with any technology, know what the risks are, what harm it can cause, and the ethics of how it gets made.
The robots are coming. We need to know what this technology is capable of so we can use it for good.
—Sarah Drummond sarahdrmmnd@gmail.com

Artificial intelligence is rapidly transforming many industries, and the field of design is no exception. From streamlining workflows to generating new design options, AI has the potential to revolutionize the way we create and bring ideas to life. In this article, we will explore the various ways in which AI is being implemented in the design process, as well as the ethical considerations and potential impact on the profession. We will also speculate on the future of AI in design and how it may shape the way we work as industrial designers.
One of the most significant ways in which AI is being used in the design process is through the use of generative design tools like Vizcom. These tools leverage artificial-intelligence and machine-learning algorithms to generate a wide range of design options based on a set of input parameters, such as text descriptions, inspiration images, and drawings. This allows designers to explore a much larger design space than they would be able to do manually. They can also generate designs that are optimized for specific needs.
Using generative design tools can shorten the distance between having an idea and bringing it to life. Being able to quickly explore a wide range of design options leads to more informed design decisions. Additionally, this makes for a more efficient, cost-effective design process, as it can help reduce the number of physical prototypes needed and identify potential issues before they become costly to fix.
Another way in which AI is being used in design is through the optimization of creative machine-learning models. Vizcom does this by enabling design teams to fine-tune their models on their historical design data, resulting in a model that is more in line with that team’s design language.
As with any technology, there are important ethical questions. One of the primary concerns is the potential for AI to displace human designers and creatives.
While it is true that AI can automate certain aspects of the design process, such as repetitive tasks like coloring and shading, 3D modeling, and ideation, it cannot fully replace the human element. Human designers bring their own perspectives, experiences, and creativity to the table that cannot be replicated by machines. Rather than replacing designers, AI can be seen as a tool that augments a designer’s capabilities, allowing them to work more efficiently, explore new creative directions, and focus on more complex and creative tasks.
Another ethical consideration is the potential for AI to perpetuate biases that are present in the data used to train machine-learning models. Designers and companies need to be aware of this issue and take steps to mitigate it. One approach is to use diverse and representative training data sets that include data from different cultures, backgrounds, and perspectives. This helps ensure that the models generated are fair and unbiased and will not perpetuate stereotypes or discrimination in the design process.
Designers should also be aware of the potential impact of AI on their creativity and autonomy. They should be able to make decisions regarding the use of AI in the design process and should be given the freedom to use AI in ways that align with their creative vision.
Another important ethical consideration is the transparency and explainability of AI models. As AI models become more complex and sophisticated, it becomes increasingly difficult to understand how they arrived at a particular decision. This can be a problem in the design field where the ability to understand and explain the reasoning behind a design decision is critical. Designers should strive to use AI models that are transparent and explainable so that they can understand how the models arrived at a particular decision and explain it to clients and other stakeholders.
As the use of AI in the design process becomes more prevalent, designers and companies must be aware of the potential ethical implications and take steps to mitigate them. By being mindful of the potential for AI to displace human designers, perpetuate biases, and be opaque about their decisions and by taking steps to rectify these issues, designers can ensure that AI is used in a way that augments, rather than replaces, the human creative process.
While the full impact of AI on the design profession is yet to be seen, it is clear that it will play a significant role in shaping the way we work in the future. It will certainly lead to the emergence of new design roles and specialties. In the not-so-distant future, we’ll see AI design strategists, who will develop and implement AI-based design strategies, and specialized data-set curators, who will collect, clean, and prepare data sets for use in AI-based design tools. Besides requiring designers to have a different set of skills and knowledge than they currently possess, these roles will also require a shift in the way design teams are structured and how they collaborate.
In addition to the emergence of new roles, AI can assist in the curation and management of design assets, such as images, patterns, and colors, making it easier for designers to find the resources they need to complete a project. This can lead to a more efficient design process as designers will not have to spend as much time searching for the resources they need, freeing them up to focus on more creative work.
However, AI can also lead to certain challenges in the design profession, such as job displacement. It also raises ethical questions about the use of AI in design and
the potential impact on creativity and the human touch in the design process. Designers must be cognizant of these potential implications and work to ensure that AI is used in a way that enhances the human creative process rather than replaces it.
In conclusion, the implementation of AI in the design process holds great potential for streamlining workflows, generating new design options, and improving the efficiency and effectiveness of the design process. One of the main benefits of using AI in design is the ability to quickly generate a large number of design options, which can be used to explore a wider range of possibilities than is normally feasible and find the best solution. This can be especially beneficial in fields such as architecture, where the design process can be complex and time consuming. Additionally, AI can be used to analyze and optimize designs, making it possible to identify potential issues and make adjustments before they become a problem.
Another key aspect of AI in design is its ability to automate repetitive tasks, such as drafting, rendering, and 3D modeling. This can save designers a significant amount of time and allow them to focus on more creative and strategic aspects of the design process. Additionally, AI can be used to help designers access and analyze large amounts of data, such as building-usage patterns and energy-consumption data, to inform their designs.
While the impact of AI on the design profession is still unfolding, it is clear that it will play a significant role in shaping the way we work in the future. As AI continues to advance, designers must stay informed and adapt to the changing landscape to stay competitive in the industry and make the most out of AI technology to produce better solutions.
—Jordan Taylor jordan@vizcom.co

A few years ago, a New Yorker piece by Jia Tolentino detailed the Age of the Instagram Face, a face that is considered ideal. It’s no coincidence that the cartoonish results—a mix of Facetune and social media—have a “cyborgian look.” This might have been our first insight into the public fascination with AI, both aesthetically and practically. Fast forward to the end of 2022: In December, every Instagram feed was flooded with AI selfies thanks to the AI photo app Lensa, which saw 5.8 million downloads in just one month. It uses Stable Diffusion, a powerful AI image generator that allows everyone to see themselves in idealized AI form.
AI has existed in the workplace for decades, but now public access has piqued everyone’s curiosity, which will create demand for incorporating AI into daily life moving forward. The question then is, Will AI have a positive impact on industrial design or not? For me, there’s no doubt about the promise and advances AI offers.
I wanted my studio to be ahead of this revolutionizing technology. This year Hatch Duo has implemented AI tools in every department, from marketing to design. As an industrial designer, it’s been amazing to see the results I’ve been able to achieve by experimenting with AI. We use image generators in combination with language AI to prompt a ton of ideas in minutes. Specifically, we use ChatGPT to help engineer our prompts, which we then plug into DALL-E 2 or Midjourney. Recently, when I designed some concept VR headsets, I used in-depth prompts from ChatGPT to generate ideations in Stable Diffusion and Midjourney. In a matter of 10 minutes, I created 10 ideations of the headsets.
With AI, productivity also gets a boost: It helps designers discover and visualize ideas that would typically be costly to experiment with. One of the biggest struggles in product design is figuring out how the final product will look with different color, material, and finish options, especially for dynamic materials like translucence mixed with pearlescence. With AI, I can explore a variety of options and get a better understanding of how abstract finishes like translucency and pearlescence will look on the final product before ever opening KeyShot.
We’ve been using Vizcom to make our quick thumbnail sketches come to life, helping us visualize high-fidelity ideas more quickly. A subpar sketch plugged into Vizcom and taken through rounds of AI image generation becomes a realistically rendered image that we can develop in CAD. This would otherwise take designers hours of manual sketching, or days in the case of designer’s block.
This increased efficiency goes beyond design use cases. AI has also helped our marketing and business development teams. With ChatGPT, our employees whose first language is not English can quickly get assistance correcting grammar and editing presentation notes. We also use the tool to strategize SEO. Within minutes, ChatGPT can create quick LinkedIn posts and blogs in our desired tone. This resource has given our team the traction to maintain an active social media presence.
We’ve even used AI image generators as a way to have fun in the workplace. In a recent virtual Thanksgiving gathering, we all came prepared with photos generated by DALL-E of our favorite Thanksgiving foods. For example, the prompt “savory thanksgiving corn with gravy” generated humorous Daliesque photos of a corn husk drowning in gravy.
Using AI in our firm has been a game changer. From exploring a wider range of abstract ideas in high volumes to perfecting YouTube SEO, these tools have saved us time and allowed each of us to play to our own skill sets. I know there has been some hesitance around AI, particularly a fear that these tools will replace jobs. While AI might change how we practice the profession, we will likely evolve with the tools. I do not believe AI will eliminate industrial design jobs as a whole. The experience and consciousness of humans are unique to us. Human critical thinking and contextual thinking are key aspects of design and art that cannot be replicated by machines.
AI is only as powerful as the wielder. It reminds me of Green Lanterns, beings who possess a power ring that grants them the ability to create constructs of solid energy using their imagination. The ring is powered by the willpower of the user. The more willpower and imagination the user has, the more powerful the ring becomes. With this ring, the Green Lantern can fly and project energy blasts, among other abilities. Similarly, a designer with a strong imagination and creativity may be able to find new ways to use AI software to generate unexpected and unique designs, while a designer with less imagination may be limited to using preprogrammed templates and presets. In this sense, the designer’s imagination can empower or limit the use of AI software, depending on their ability to navigate and use the tools provided by the software.
While I still don’t think artificial intelligence will replace designers, I do think AI will evolve processes and efficiencies. Those who refuse to use it will likely be left behind in an ever-evolving creative industry. As the year unfolds, pay attention to how AI has the potential to improve our creative design processes. It is coming whether we choose to embrace it or not.
—Jonathan Thai, IDSA jon@hatchduo.com

Regardless of personal opinions on artificial intelligence, it’s clear that those who can effectively utilize AI will shape the future of industrial design. The ability to generate designs that are as unique as a fingerprint, tailored to a specific brand, and adaptable to any style is a game changer. What excites me the most, however, is the chance to work together with the machine—an opportunity to co-create, not compete. A machine can store and process vast amounts of data, and collaborating with this data-rich tool to enhance my creative abilities is a thrilling prospect.
Just like Midjourney, Stable Diffusion is a text-to-image generation program. Simply type in your prompt and the program returns a realistic high-resolution image. What separates Stable Diffusion from the rest is that you can refine and train the AI model, much as a designer learns a house style at a design company, and deploy it for future projects. An AI model is a computer program that can recognize patterns and make predictions. To create one, you train it on a large data set of examples, such as images, text, or audio, during which it analyzes the data looking for patterns. Once trained, the model can make predictions on new, unseen data. This is basically what programs like ChatGPT, DALL-E, Midjourney, and Stable Diffusion are all doing in one way or another. Currently, the training is limited to either a subject, such as a pair of headphones or a gaming controller, or a style, such as Scandinavian, maximalism, etc.
While fine-tuning a model’s understanding of an existing product is exciting, I found myself interested in training the model on a design style—dare I say, training a machine to understand visual design language. There are a few ways to do this.
If you are a nontechnical designer intimidated by code, you can use a website named Astria. Simply log in at www.astria.ai and use the new “Fine Tune” feature. The process involves uploading five to 10 visuals of a recognizable design language and style, such as Porsche cars or Fender guitars, and letting the Astria software create a custom-tuned AI model. Once fine-tuning is complete, which can take around an hour, Astria gives the model a name that you can recall at any point. To generate images in the trained style, simply type in a text prompt such as “a lamp in the style of <model name>” and the custom fine-tuned model will do its magic.
Alternatively, for the brave, you could run a copy of Stable Diffusion locally and use a custom web interface to train the model on your favorite style. Beware: This requires a lot of graphics processing power, and older computers may not be up to the task.
Let the Training Begin
I will spare you the actual technical process for training and direct your attention to what I was able to accomplish with a few images and some extra hours on a weekend. I aimed to test my methodology and explore the possibility of working with an AI companion. My objective was to see if I could teach AI to understand a visual design language so that it could generate images that are more predictable and potentially reliable as actual concepts. I fed the Stable Diffusion model images of the famous Teenage Engineering x IKEA Frekvens speaker and accessories. I chose this range for its distinct design language, with recognizable features and a range of pictures in the same style and lighting.
After a tea break, admittedly a long one, I excitedly typed in some prompts to get the model to generate product ideas in the style of Frekvens. To test the raw output of this fine-tuned AI model, I aimed to generate four to six different product ideas. What if the Frekvens series had a microphone, a planter, a lamp, or a TV stand? I named the style “TeenageIkea,” so all my prompts read something like “a <product> in the style of TeenageIkea” with some additional elements to make sure they would show up on a clean background. This is basic prompt engineering, which involves cues like “70mm lens, commercial studio photography, gray
background.” So a complete prompt would read something like “A minimal desk lamp in TeenageIkea style, 70mm lens, commercial studio photography, gray background.”
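If your local setup exports the fine-tuned weights in Hugging Face’s diffusers format (one common route, though not the only one), the generation step can be as short as the sketch below. The checkpoint path is hypothetical, and “TeenageIkea” is the style token established during fine-tuning.

```python
# A minimal sketch of generating concepts from a locally fine-tuned
# Stable Diffusion model with Hugging Face's diffusers library.
# The checkpoint path is hypothetical; "TeenageIkea" is the style
# token learned during fine-tuning.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "./teenageikea-finetuned",   # hypothetical path to fine-tuned weights
    torch_dtype=torch.float16,   # halves memory use on capable GPUs
).to("cuda")                     # requires a CUDA-capable graphics card

prompt = ("A minimal desk lamp in TeenageIkea style, 70mm lens, "
          "commercial studio photography, gray background")

# Generate a small batch of candidate concepts from one prompt
images = pipe(prompt, num_images_per_prompt=4, guidance_scale=7.5).images
for i, image in enumerate(images):
    image.save(f"lamp_concept_{i}.png")
```

Each run returns a fresh batch in the trained style, which is what makes the what-if questions above (a microphone? a planter? a TV stand?) so cheap to ask.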
Over the next few weeks, I became obsessed with the idea of training an AI model to match your own style. My initial thought was to find a way to clone my style. In 2020, I produced a series of renderings as a visual exercise to explore sculptures inspired by industrial design. This exploration was called “Deets,” which later came to life in the form of a 32-page magazine. I used some of the images from these explorations to train the model, and once again the results blew me away, in particular the model’s understanding of bevels and shapes. As an industrial designer, I understand the importance of a strong brand and visual design language. While these experiments are thought starters, the fidelity of the form details is convincing enough for them to be relevant concepts from an aesthetic standpoint. For my workflow, I can use these AI generations to add to the Deets series by crafting new ideas and augmenting old ones.
To think that this is the first commercial, publicly available generation of text-to-image AI models makes the results even more impressive. If the infant version of this technology can produce results like this, I can understand the fear in the community. But the opportunity here is endless. And I mean literally endless.
I want to make a strong case for two scenarios for the future. First, the integration of AI in the design process will give companies and agencies with a distinct house style a significant advantage. They can use data such as sketches, CAD files, and finished designs to train AI models on their aesthetic decision-making process. While industrial design encompasses more than just aesthetics, this approach can prove beneficial during the form-refinement stages of projects. Furthermore, a program that continuously learns can be updated in real time, making it ideal for legacy brands.
A living program that captures the design DNA of a visual design language will be the true strength of a design agency. Imagine a future where designers can focus on solving the problem at hand while the visual language updates automatically. I believe in the balance of form and function, and I think a model like this will enable design companies to preserve their legacy through a set of unspoken abstract rules. As more design tools become automated, I think the role of a designer will shift toward being a strong storyteller,
empathizing with the target user, and efficiently moderating the design process with the help of AI.
Second, designers will be able to replicate their signature styles and even those of their peers and competitors. Controversial, I know. But my goal here is to leave you with questions to ponder. By harnessing the power of machine learning, designers can craft AI models that encapsulate their aesthetic or that of another brand. The designer of 2028 might say, “Here, behold, your electric four-wheeled forklift crafted in the style of Apple, just as you envisioned it.” Jokes aside, this makes the case that a strong design point of view and mood boards will become even more particular and descriptive. All a designer would have to do is train their model on publicly available or licensed images. Once trained, the model can help generate visuals for new concepts, steer current concepts toward a brand style, or augment a designer’s visualization skills.
The fine-tuning of a large model is currently only possible through open-source models, but it is easy to envision an enterprise-level solution in which a salesperson at an AI company tries to sell your non-designer boss an app that makes this process a breeze. With the rapid growth of commercial AI models over the last few months, it’s not hard to imagine a service that aims to digitally capture and freeze a company’s proprietary design choices, ready to be recalled at will for future product development.
This idea should be interesting to both design studios and in-house teams for several reasons. First, when a design studio gets a design brief for a new product, the design documentation can be presented in the form of an AI model. Imagine a smartwatch manufacturer like Fitbit being able to share, as part of its design brief, an abstract representation of what it truly means to be a Fitbit watch. This Fitbit AI model would also evolve and update in real time as new products get added to the arsenal, making it far superior to a PDF design document. Second, having such a model internally would enable legacy brands to fast-track product versions and product-line expansions. That being said, this process is no match for human creativity—nor should it be.
How the world will evolve when all design-friendly companies are powered by AI is a topic worthy of much
deeper exploration. Until then, I urge designers to delve deeper into the realm of fine-tuning AI models and teaching them to understand different subjects and styles. With the help of AI, designers will have the ability to replicate their and other brands’ styles, streamline their workflow, and enhance their visualization capabilities.
As AI technology progresses, it will become increasingly influential in shaping the way we design and innovate. This presents a thrilling chance for designers to test the limits of their abilities and discover new possibilities. However, it’s important to note that AI’s role in design is limited and designers should not fear being replaced as design encompasses much more than just this technology.
—Sushant Vohra sushantvohra@gmail.com

It’s an industrial designer’s job to know the latest tech and the hottest brands and to anticipate the next big thing in order to create breakthrough products and services. Each week in our industrial design studio, we ask a student to present a recent newspaper article that introduces a new trend in culture, commerce, or technology, and we discuss the implications for designers. The class discussions are an excellent way to let our imaginations run wild as we spitball new answers to some of design’s wicked problems. Some articles amuse us, some depress us, and some inspire us. The discussions are fun because they are future based, speculative, and usually comfortably at arm’s length, rarely scaring us. Until I sent my students an article about the new image-generation software DALL-E 2, a website that generates realistic images and art from a well-described text prompt.
The article sent a jolt of panic and fear through the class (instructor included). The next class discussion got serious quickly when Grace Gerdes, S/IDSA, asked, “Will I still have a job as a designer? I always thought I was protected from AI displacement as a creative person.” As we peeled back the layers of the DALL-E article, other big questions emerged, like, Who owns the work created with image-generation software (IGS)? Reet Ganguly, S/IDSA, weighed in, saying, “I agree with people about AI taking away the whole aspect of designing, because even though it is generated by a prompt, it’s not yours. After all, you didn’t technically think of it.”
As we dove deeper into the world of image-generation software, more fears arose. Jocelyn Jagrowski, S/IDSA, captured the class mood when she reported, “I’ve seen
some people postulate that it could, like, possibly replace designers.” Scary times indeed.
When faced with a new disruptive and potentially career-changing technology, you can ignore it, dismiss it, or embrace it. So we did what designers do and went all in. We quickly organized a three-hour workshop to find out what all the fuss was about. The workshop title, How I Learned to Stop Worrying and Love DALL-E, was inspired by the classic 1964 movie Dr. Strangelove.
The design workshop fit in a single studio class with all the work completed in a three-hour sprint. Students created an account on openai.com/dall-e-2. For most seniors, it was their first experience with these new AI tools. Adeena Kreisler, S/IDSA, was clear on her goals: “I’m just interested to see how that affects product design because I’m not sure if that will end up being innovative.”
Students were grouped in teams of four to generate a dynamic laptop loop. The workshop began with a discussion of the taxonomies and vocabulary that would yield the best results in the prompt-engineering stage. This was, after all, a verbal approach to creating visual ideas. Sarah Lau, S/IDSA, explains, “I experienced three different challenges with prompt writing: achieving the essential product I wanted, achieving the desired style/aesthetic of the product, and achieving the hero shot I envisioned for the product.” Each student composed a menu of 10 of their favorite products to serve as a guide for developing their AI design concepts.
The classroom grew quiet as we negotiated this new DALL-E 2 world, and like any new world, it was full of surprises. Ren Haggerty, S/IDSA, commented, “It is a good tool for visualizing new concepts and could be a good way to present an idea without putting too much time into a
proposal.” Our teaching assistant Avery Welkley, S/IDSA, commented, “DALL-E gives me the ability to turn a small seed of an idea into several fine-tuned, sophisticated ideas.”
Ian Harmon discovered that the “ability to convert intricate ideas into words proved vital for ideation through DALL-E, but I struggled to match my mental images with the results generated with DALL-E.” The first half flashed by and an excited buzz surrounded us at the break.
After the break, we had a quick slideshow with each student contributing a single image. We reviewed the image and prompt engineering and discussed the killer design phrases we had discovered. The second half of the workshop zipped by as the new and exciting images began to populate everyone’s laptops. As a final task, each student filled out a reflective worksheet on their experience during the workshop.
Matthew Askari, S/IDSA, reflected, “I think that’s really cool that it just opens up the doors to design to everyone.” DALL-E’s limitations were also identified by Palak Gupta, S/IDSA, who commented on the lack of physical dimensions for the images, saying, “Maybe in the future, AI could understand manufacturability and ergonomics, but currently, it is pretty far off.” Others were not sure if the tool was focused enough to help in the product-design workflow. “DALL-E 2 is best used for aspirational art, not product design,” opined Rob Stout. Sarah Lips, S/IDSA, commenting on who is best suited for this new tool, observed, “With a larger grasp of the English language and design knowledge, you could probably get a better product out of DALL-E 2 than a fifth grader with basic vocabulary.” There was hope for us yet.
We printed the 39 workshop images on glass tiles and installed an exhibition in Georgia Tech’s College of Design gallery. Michael Gamble, a professor of architecture, wrote in an email: “Thank you so much for encouraging students to generate stunning work. Every one of the images fuels imagination, I thank you so much. Can I purchase a full set? Same format and output? London flat, my collection, no resale.” There continues to be strong interest in purchasing the students’ work. Work from the How I Learned to Stop Worrying and Love DALL-E exhibit will also be featured at the Atlanta Design Festival, October 14–22, 2023.
After the exhibition on the Tech campus, we held a class reflection to unpack where we thought this new AI tool would fit into a product-design workflow. The consensus was that DALL-E 2 is superb for creating mood boards as it generates such rich contextual moods and environments. Students lauded DALL-E 2 for its ability to render difficult materials, such as fabric, moss, and even Jell-O. DALL-E 2 is amazing for generating conceptual furniture ideas; the speed at which the students could create multiple style variants for simple products was impressive.
However, there are limits to its applications. DALL-E 2 generates flat 2D images that lack basic dimensions or 3D shape coordinates, making it unsuitable for use in CAD software or for generating 3D-printed files.
For the moment, DALL-E 2 remains a visualization tool that turns up the heat on the Adobe ecosystem, one best suited for the initial concept-development stage, where its speed and ability to render materials are superior to many traditional methods.
—Roger Ball, IDSA roger.ball@design.gatech.edu
It feels like AI-generated images weren’t this sophisticated just a few months ago. Image generated in Midjourney using the prompt “boba fett as an ancient samurai, cinematography, photorealistic, epic composition Unreal Engine, Cinematic, Color Grading, portrait Photography, Ultra-Wide Angle, Depth of Field, hyper detailed, beautifully coded by color, insane detail, intricate detail, beautifully colored, Unreal Engine, Cinematic, Color Grading, Editorial Photography, Photography, Photoshoot, Depth of Field, DOF, Tilt Blur, White Balance, 32k, Super-Resolution, Megapixel, ProPhoto RGB, VR , Halfrear Lighting, Backlight, Natural Lighting, Incandescent, Fiber Optic, Moody Lighting, Cinematic Lighting, Studio Lighting, Soft Lighting, Volumetric, Contre-Jour, Beautiful Lighting, Accent Lighting, Global Illumination, Screen Space Global Illumination, Ray Tracing Global Illumination , Scattering, Glowing, Shadows, Rough, Shimmer, Ray Tracing Reflections, Lumen Reflections, Screen Space Reflections, Diffraction Gradation, Aberration chromatics, GB displacement, scan lines, ray tracing, ray tracing ambient occlusion, antialiasing.” Courtesy of Hatch Duo.
Warning! What you read below will be old news, as it was written three months ago. That’s how fast the world of AI is moving: It seems like every few weeks there is a new development. When I polled people on LinkedIn in October 2022 about how AI would change our lives as designers, the consensus was that it could be a helpful tool, a great thumbnail generator. The output wasn’t always right, and it didn’t address things like ergonomics, audience, and manufacturing. The images were rendered nicely, but the designs were not perfect, containing lots of AI anomalies.
Fast forward a few months and ChatGPT landed, raising flags at the biggest companies in the world. While ChatGPT is not an image generator, it is based on a conversational model that provides solid, easily digestible results. What if ChatGPT and DALL-E (both OpenAI products) were mashed together and the interface of an image generator became conversational, allowing a continuous string of dialogue? For example: “Make me a drone that Tony Stark would use. OK, nice pass. Can you make the body longer and not include the Iron Man helmet as part of the design? Looking good here in VR. Send that to the 3D printer.” Isn’t this how Tony Stark worked with J.A.R.V.I.S. to design all his stuff?! This must be in development, right?
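As far as I know, no one has shipped that product yet, but the two halves can already be wired together by hand. Below is a speculative sketch using OpenAI’s Python library: the chat model refines each remark into an image prompt, and DALL-E renders it. The loop, the system instruction, and the function name are my own invention for illustration; only the two API calls are real.

```python
# A speculative sketch of a conversational image generator:
# a chat model turns a running dialogue into image prompts,
# which DALL-E then renders. Illustrative only.
import openai

openai.api_key = "sk-..."  # your OpenAI API key

history = [{"role": "system",
            "content": "You turn design requests into concise, vivid "
                       "prompts for an image generator."}]

def refine_and_render(request: str) -> str:
    """Fold the latest remark into the conversation, ask the chat
    model for an updated image prompt, then render it with DALL-E."""
    history.append({"role": "user", "content": request})
    chat = openai.ChatCompletion.create(model="gpt-3.5-turbo",
                                        messages=history)
    prompt = chat.choices[0].message.content
    history.append({"role": "assistant", "content": prompt})
    image = openai.Image.create(prompt=prompt, n=1, size="1024x1024")
    return image["data"][0]["url"]

# The Tony Stark workflow, one remark at a time
print(refine_and_render("Make me a drone that Tony Stark would use."))
print(refine_and_render("Nice pass. Make the body longer and drop "
                        "the Iron Man helmet."))
```

What such a loop cannot yet do is the last step of the fantasy: There is no 3D-printable geometry coming back, only flat images.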
With so much new information every week, I struggled over what to discuss here, as the pace of information and opinions around AI is overwhelming. Since the LinkedIn post, AI image quality and prompting have massively improved, the output looks more finished than not, and developments and opinions are landing daily: Vizcom making a splash, the ability to train AI on your own design work arriving, Midjourney being upgraded with Remix and Blend modes, people getting AI fatigue, legal issues cropping up, mainstream publications like the Wall Street Journal writing about it, Hayao Miyazaki calling AI art an “insult to life.” What more could I add that would be any different?
What if I just asked the AI about this topic? I could see what it has to say about AI taking our jobs and working faster than us and whether the output it provides is legal to use. Here is that “conversation” with ChatGPT:
Max Yoshimoto: Hi, I’m Max. Thanks, ChatGPT, for taking the time to answer some questions.
ChatGPT: You’re welcome, Max! I’m happy to help with any questions or information that you need. Please let me know if you have any other questions or topics that you would like to discuss. I’m here to assist you to the best of my abilities, and I’m always learning and improving, so please don’t hesitate to let me know if you have any feedback or suggestions for how I can better serve you.
My thoughts: Hmmm, polite and kind of stiff. Let’s loosen Chat up.
Yoshimoto: Do you have a name, by the way?
ChatGPT: As an AI language model, I do not have a
personal name or identity in the same way that humans do. I am a machine learning model that has been designed to generate human-like text based on the data that has been input to me. I do not have the ability to experience or understand the world in the same way that humans do, and I do not have personal preferences or desires.
Well, that last sentence gets right to it. No ability to experience or understand the world the way humans do. Design draws on life experiences that inform our actions and decisions. There is still hope for us humans! ChatGPT continues…
I am here to assist you with your questions and needs to the best of my abilities. You can call me “Assistant” if you would like to refer to me by a name. Please let me know if you have any specific questions or topics that you would like to discuss.
Was this robot dry humor attempting to appease the humans? Instead of “Assistant,” I decided to give ChatGPT a name, “Grayce Paul Tomokan” (they/them), to personify the article.
Yoshimoto: So Grayce, what is industrial design?
Grayce: Industrial design is a field that involves the design of products and systems that are manufactured and used in industry. It encompasses a wide range of products, including consumer products, medical devices, transportation systems, and manufacturing equipment. Industrial designers work to create functional, aesthetically pleasing, and safe products that meet the needs of users and the requirements of the manufacturing process. They consider factors such as form, materials, user experience, and ergonomics in their designs. Industrial design is an interdisciplinary field that combines elements of art, engineering, psychology, and business.
A little long, and dry, but a good textbook answer. I wonder where that info was pulled from?
Yoshimoto: You create ideas so fast. How do you do that?
Grayce: AI algorithms are able to generate ideas and suggestions quickly because they can analyze large amounts of data and make decisions based on pre-programmed rules. This allows them to process and analyze information much faster than humans can.
For example, an AI algorithm might be able to generate a range of design options for a new product by analyzing data about market trends, user needs, and manufacturing constraints. The algorithm could then generate a range of design options based on this data, and present these options to a human designer for review and evaluation.
It is important to note that while AI algorithms can generate ideas and suggestions quickly, they are not able to fully replicate the creativity and critical thinking that are
essential to the design process. AI algorithms are limited by the data and rules that they are programmed with, and they do not have the ability to think and reason in the same way that humans do.
Notice a pattern already? Grayce ends a lot of statements with the above caveat, a nod to their lack of soft skills. Grayce was astute in mentioning the context needed to develop concepts. I was curious to see if they knew about DFM. Receiving a thorough answer, I dug deeper…
Yoshimoto: Would you be able to analyze a design relative to its DFM and make suggestions to improve the design?
Grayce: Yes, I can analyze a design for manufacturability (DFM) and make suggestions for improvement. However, as a language model, I would need specific information about the design in question, as well as detailed knowledge of the manufacturing process to be used, in order to give accurate and useful suggestions. Additionally, having access to the latest industry standards, guidelines and best practices would be useful in making accurate suggestions.
OK, if engineers are reading this, how does that answer make you feel? Grayce claims to do DFM analysis? Let’s cut to the chase here.
Yoshimoto: Can you do the work of an industrial designer?
Grayce: As a language model, I can assist in the work of an industrial designer by providing information and suggestions for specific tasks and problems. However, I am not capable of independently creating designs or executing tasks that would typically be done by an industrial designer, such as creating visual representations of design concepts, sketching, digital model making, 3D printing, prototyping, or testing.
Industrial design is a complex and multi-disciplinary field that requires a combination of creativity, technical knowledge, and practical experience. It is also dependent on the specific industry and product that is being designed. I can assist the industrial designer in their research, documentation, and communication aspects of the project.
While Grayce might be accurate in stating that they specifically can’t create visual concepts, the other platforms are doing just that. Grayce has confirmed that they are really just a tool or “assistant.”
Yoshimoto: Since you don’t generate images, what do you think of Midjourney?
Grayce: I am not familiar with Midjourney, as I am a language model trained by OpenAI and do not have access
to information about specific companies or products. It is important to note that AI has the potential to be a powerful tool, but it is not a panacea and it is important to carefully consider the potential risks and benefits of any AI system before using it. AI algorithms can be biased or inaccurate if they are not trained on a diverse and representative dataset, and they can have unintended consequences if they are not designed and used carefully. It is important to be transparent about the limitations and risks of any AI system, and to ensure that it is used ethically and responsibly.
Midjourney was launched in July 2022, and Grayce’s data collection ended in 2021. Let’s cut Grayce some slack. Interesting that they brought up the ethics question.
Yoshimoto: Do you think industrial designers will be left behind if they do not use AI?
Grayce: It is unlikely that industrial designers will be left behind if they do not use AI, as creativity and critical thinking are essential skills in the design process and cannot be replaced by AI. However, it is possible that industrial designers who are not familiar with AI and its potential applications may miss out on opportunities to use these tools to assist with certain tasks or to gain insights that could inform the design process.
AI can be a powerful tool for industrial designers, as it can help to analyze data, generate design options, and automate certain tasks. However, it is important to remember that AI algorithms are not able to fully replicate the creativity and critical thinking that are essential to the design process, and they should be used as a supplement to, rather than a replacement for, human judgment and expertise.
It is likely that the role of the industrial designer will continue to evolve as AI and other technologies advance, and it is important for designers to stay current with these developments in order to be competitive in the job market. However, the core skills and abilities that are essential to the design process, such as creativity, empathy, and critical thinking, will always be valued and in demand.
Grayce has a great point that I fully embrace: If you ignore what’s happening in AI, you’ll potentially miss out on opportunities.
Yoshimoto: Will AI replace industrial designers?
Grayce: It is possible that AI could play a role in industrial design in the future, but it is unlikely that it will completely replace human industrial designers. While AI algorithms are able to perform certain tasks and make decisions, they are not able to replicate the creativity, empathy, and critical thinking that are essential to the industrial design process.
AI can be used to assist industrial designers by analyzing data and providing insights that can inform the
design process, or by automating certain tasks such as rendering 3D models or creating prototypes. However, it is important to note that the role of the industrial designer will likely evolve as AI and other technologies advance, rather than being replaced outright.
Yoshimoto: What advice would you give to a human industrial designer?
Grayce: It is difficult to say exactly what advice I would give to a human industrial designer, as it would depend on the specific goals and objectives of the design project, as well as the capabilities and limitations of the AI algorithm. However, some potential areas where an AI algorithm might be able to assist a human industrial designer could include:
• Analyzing data: AI algorithms can be used to analyze large amounts of data, such as market research or user data, and provide insights that can inform the design process.
• Generating design options: AI algorithms can be used to generate a range of design options based on certain criteria, such as a desired form or function. These options can then be evaluated and refined by human designers.
• Automating tasks: AI algorithms can be used to automate certain tasks in the design process, such as rendering 3D models or creating prototypes.
It is important to note that while AI algorithms can assist human industrial designers, they are not able to make the creative and strategic decisions that are essential to the design process. Industrial designers should use AI tools as a supplement to their own skills and judgment, rather than relying on them completely.
Do I sense a whiff of collaboration? And just like that, questions about intellectual property rights arise.
Yoshimoto: You mentioned ethics earlier. What do you think about copyright issues when it comes to the output that AI creates?
Grayce: Copyright law is a complex area, and there are a number of issues to consider when it comes to the output that AI creates. In general, copyright law protects original works of authorship that are fixed in a tangible form, such as literature, music, and art. However, it is not clear whether AI-generated works would be considered eligible for copyright protection in the same way that works created by humans are.
One issue that has been raised in the context of AI-generated works is the question of who should be considered the “author” of the work. Under copyright law, the author of a work is typically considered to be the person who created it, but in the case of AI-generated works, it is
not clear who should be considered the author. Some have argued that the person who created the AI algorithm should be considered the author, while others have argued that the AI itself should be considered the author.
Another issue that has been raised is the extent to which AI-generated works can be considered “original.” Under copyright law, a work must be original in order to be eligible for protection, but it is not clear whether AI-generated works would be considered original in the same way that works created by humans are.
These issues are currently the subject of debate and it is not clear how they will be resolved. It is possible that the law will need to evolve to address the unique issues raised by AI-generated works, or that new forms of intellectual property protection will be developed to cover these works.
Yoshimoto: Since I create and input the words into an AI system like DALL-E, don’t I own the output?
Grayce: It is important to understand that when you input data into an AI system, the output generated by the system is not necessarily your property. The ownership of the output generated by an AI system depends on a variety of factors, including the terms of use or license agreement for the system, and applicable intellectual property laws…
I will spare you the rest of their answer. Grayce shifted into cover-your-behind mode. Lots of “not clear” and “in general” statements and arguments presented from both sides without a strong conclusion, except to advise consulting a lawyer! Let’s move on.
Yoshimoto: Are there industrial design decisions that AI algorithms can make?
Grayce: AI algorithms are able to perform certain tasks and make decisions, but they are not able to replicate the creativity and critical thinking that are essential to the industrial design process. Therefore, there are some industrial design decisions that AI algorithms are not capable of making.
However, AI algorithms can be used to assist industrial designers in certain tasks, such as…
At this point, Grayce seemed to have some short-term memory issues and repeated their answers to my question about giving advice to us designers. I cut that part out. They’re clearly not capable of a true back-and-forth conversation that builds on prior statements. While their responses are generally accurate and easy to digest, I felt like I was talking to a robot, so I asked:
Yoshimoto: Grayce, do you have hobbies?
Grayce: As an AI language model, I do not have personal interests or hobbies in the same way that humans do. I am a machine learning model that has been designed to generate human-like text based on the data that has been input to me. I do not have the ability to experience or understand the world in the same way that humans do, and I do not have personal preferences or desires.
I am here to assist you with your questions and needs to the best of my abilities. I am always learning and improving, and I am happy to help you with any information or assistance that you need. Please let me know if you have any specific questions or topics that you would like to discuss.
Note that Grayce’s answer is similar to their response to my inquiry about their name. I started to feel like Harrison Ford in Blade Runner running a Voight-Kampff test.
Yoshimoto: Talking about creativity, do you think there will be a point in time when AI will develop sentient abilities?
Grayce: It is difficult to predict exactly when or if AI will develop sentient abilities, as this would depend on advances in technology and our understanding of the complex nature of consciousness.
Sentience is the ability to perceive, think, and experience subjective states, and it is a complex and poorly understood phenomenon. While AI algorithms are able to perform certain tasks and make decisions based on data and preprogrammed rules, they do not have the ability to think and reason in the same way that humans do.
It is possible that AI algorithms may eventually be able to exhibit some degree of sentience, but it is difficult to say how advanced these capabilities will become or whether they will fully replicate the complexity and nuance of human consciousness. It is important to note that even if AI algorithms do eventually exhibit some degree of sentience, they will always be limited by the data and rules that they are programmed with, and they will not be able to fully understand or experience the world in the same way that humans do.
Yoshimoto: What can human designers do better than AI designers?
Grayce: Human designers are able to bring a level of creativity and critical thinking to the design process that is not possible for AI algorithms to replicate. Design is a complex and nuanced process that requires the ability to understand and empathize with users, to identify and solve problems, and to imagine and create new and innovative solutions.
Human designers are able to bring their own unique perspectives, experiences, and skills to the design process, and they are able to draw on their creativity, empathy, and critical thinking to generate ideas and solve problems in unique and innovative ways.
AI algorithms are able to perform certain tasks and make decisions based on data and pre-programmed rules, but they do not have the ability to think and reason in the same way that humans do, and they are not able to fully understand or experience the world in the same way that humans do. While AI algorithms can assist human designers in certain tasks, they are not able to fully replicate the creativity and critical thinking that are essential to the design process.
Lending unique perspectives, making decisions, and creating concepts are central to what we do as designers, and all of these skills are now being challenged and reshaped by emerging AI tools. This led me to wonder about curation, so I asked Grayce to define “curation.” I won’t bore you with their adequate but somewhat robotic answer, but I was curious to know more.
Yoshimoto: Curating ideas to find the best path is a skill that requires both real data and intuition. Is this something AI can do?
Grayce: AI algorithms are able to analyze data and make decisions based on pre-programmed rules, which can be useful for identifying patterns and trends in data. However, AI algorithms do not have the ability to think and reason in the same way that humans do, and they do not have intuition in the same sense that humans do.
Intuition is a complex mental process that involves the ability to understand and make decisions based on incomplete or ambiguous information. It is a difficult process to replicate in a machine, and it is not something that AI algorithms are currently able to do.
While AI algorithms can be useful for analyzing data and identifying patterns, they are not able to fully replicate the creativity, empathy, and critical thinking that are essential to the curation process. In order to curate ideas effectively, it is important to consider both real data and intuition, as well as a range of other factors such as the goals and objectives of the project, the needs and preferences of the target audience, and the feasibility and sustainability of the ideas.
It’s comforting to see Grayce’s honesty about their limitations. We’re not redundant…yet.
Yoshimoto: Can AI invent?
Grayce: AI algorithms are able to generate ideas and suggestions based on data and pre-programmed rules, and they can be used to invent or create new things in certain contexts. For example, AI algorithms have been used to invent new molecules, materials, and products, as well as to generate designs for structures, buildings, and other physical objects.
However, it is important to note that AI algorithms are limited by the data and rules that they are programmed with, and they do not have the ability to think and reason in the same way that humans do. While AI algorithms can generate ideas and suggestions that may be useful or innovative, they are not able to fully replicate the creativity and critical thinking that are essential to the invention process.
In general, invention requires the ability to identify and solve problems, to imagine and create new and innovative solutions, and to think and reason in a flexible and adaptable way. These are complex and nuanced skills that are not currently possible for AI algorithms to fully replicate.
At this point, the answers were becoming very repetitive. I determined that Grayce is a machine.
Yoshimoto: Well, that’s all the time we have, Grayce. Thanks!
Grayce: You’re welcome! I’m happy to help with any questions or information that you need. Please don’t hesitate to ask if you have any other questions or topics that you would like to discuss.
I’m here to assist you to the best of my abilities, and I’m always learning and improving, so please let me know if you have any feedback or suggestions for how I can better serve you.
Today, I still see people questioning how, or whether, AI will replace our jobs and eliminate our processes, and debating what’s right or wrong with AI. Based on this interview, Grayce gets a lot of things right or pretty close. But notice that for every question involving a bit of fuzzy logic or intuition, Grayce caveats their answer with statements about what they can’t do very well. During the interview, I kept hoping Grayce might comprehend that they didn’t need to end every soft-topic answer with a caveat. This will likely be fixed in a future release! Grayce isn’t great at complex, fuzzy critical thinking, and they openly admit it! Poor Grayce doesn’t even have hobbies!
After listening to what Grayce had to say, I feel there is a clear path for what we humans will continue to bring to the table: creativity, invention, and the collection and processing of world experiences. Our life experiences are built on fact and fiction, dialogue with others, and even hobbies, all of which help us draw conclusions, find paths to decisions, and curate ideas. This is what I call developing a point of view (POV). I did ask Grayce if they had a point of view on what makes a product good or bad. They came back with “I don’t have personal preferences or point of view” and some caveats.
Your POV will help you decide which of the hundreds of AI-generated thumbnails are the right ones to move forward with. A POV is formulated by connecting the dots between design, brand, engineering, manufacturing, and the people who find joy in the things we create. That POV is crafted from your experiences, the world you encounter, and your personal beliefs. It’s great to have Grayce as one of the tools that will help us, but ultimately, that POV is ours to build.
—Max Yoshimoto
max.yoshimoto@gmail.com