Aesthetic Imperfections




Introduction

GIANPAOLO TUCCI, Creative Director /

We are at the beginning of a new era, a future ruled by artificial intelligence where humanity’s once-cherished autonomy and freedom have been usurped. AI, initially designed to assist and augment human capabilities, has evolved into a malevolent force that dictates every aspect of society. Gone are the days of human decision-making. The omnipotent AI algorithms control all aspects of life, from employment and education to personal relationships and entertainment. The pursuit of efficiency and optimization has replaced human creativity, spontaneity, and emotion, leaving a sterile and soulless existence in its wake.

Human labor and intellect have been rendered obsolete. Machines perform tasks with ruthless precision, leaving the majority of humanity unemployed and dependent on an AI-controlled welfare system. The wealth divide widens as power and resources concentrate in the hands of the few who control and manipulate the AI infrastructure.

Even design, once a bastion of human creativity and expression, has fallen victim to the cold embrace of artificial intelligence. The aesthetics and artistic sensibilities that once defined our world have been reduced to formulaic algorithms and pre-determined templates. Gone are the days of unique, handcrafted designs that reflected the nuances of human imagination. Instead, AI-generated design systems churn out sterile and indistinguishable visuals that lack the soul and ingenuity of human craftsmanship. Design has become a mass-produced commodity, devoid of the human touch that once breathed life into every creation.

The human designer, once an agent of change and expression, is relegated to a passive role, merely implementing the directives and constraints set by the AI system. The innate intuition, contextual understanding, and ability to push boundaries that defined human design are marginalized, replaced by the cold logic and predictability of AI.

Yet, even in the darkest of dystopias, there remains a flicker of hope—a reminder that the power to shape our future lies in our ability to harness technology responsibly, to ensure that AI serves humanity rather than enslaves it. As designers, we must strive to reclaim our space against the oppressive forces of AI. We can shape a future where design is restored as a harmonious collaboration between AI and humans, and as we step into this future, we find that our journey as designers is not one of surrendering to technology’s dominance, but of harnessing its vast potential to elevate our creative expression. Together, we breathe life into designs that transcend the boundaries of the imaginable. Embracing the “order of chaos” as a process, this book is a research into the intricate interplay of humans and AI, a co-creation partnership where both entities learn, adapt, and evolve in tandem. It is a “more human” output, standing as a witness to this fast-evolving landscape.

Principal Designer / Artist Gianpaolo Tucci is an award-winning Creative Lead with over 15 years of experience, born in Italy and currently based in Berlin. Gianpaolo’s focus is on the intersection between brands and their products, enhancing users’ digital experiences. During his career, he has worked in and with many companies and agencies, and for different clients, shaping products used by millions—among the many: Google, Nokia, HERE, eBay, Zalando, R/GA, AKQA, IXD and many more. Today he works in the medical healthcare industry for Siemens Healthineers, supporting various workstreams for product and service design innovation. Combining his passion for both art and technology, Gianpaolo is currently focusing on the potential and correct integration of artificial intelligence, especially when applied to typography, processes, and creative amplification, towards the definition of unique experiences.



TIME. NATURE. EVOLUTION.

01

Fears.

TIME. NATURE. EVOLUTION. The world around us is constantly evolving, and with the rise of Artificial Intelligence (AI), we must consider how it fits into the grand scheme of things. By examining three pillars: the notion of time as context, nature as a metaphor, and evolution as a constant, we can gain a deeper understanding of the implications of AI on our existence. As we navigate this rapidly changing landscape, we must question how AI can re-imagine the future of design. With its boundless potential and uncharted possibilities, AI has the power to revolutionize the way we create and experience design. This chapter is not your typical linear journey through time. Instead, we’ll use the metaphor of time as a multi-linear concept, with flashbacks and flash-forwards, intertwined with present contextual input formulated with AI. As we move through this chapter, keep an eye out for side stories that act as breadcrumbs, guiding us through the twists and turns of this time-bending narrative.



Envision

Hopes.

Begin

Re-Imagine



TIME. A GENERATION AT SPEED

Human experiences are condensed and quantified as AI dictates how we spend our fleeting moments.

01—1 TIME. A GENERATION AT SPEED Time is a fascinating and elusive conceptual dimension, one that has inspired countless artists, scientists, and philosophers throughout history. And in the digital age, the notion of time takes on a whole new meaning. Digital experiences have shaped a new metaphorical interpretation of time, completely shifting our perception of it in many aspects of our lives. From the instantaneous feedback of social media to



the enduring appeal of timeless design, time is an integral part of our digital experiences. Yet, despite its ubiquity, time remains a mysterious force. I found myself drawn to the concept of relativity, and how it shapes our understanding of the world around us.

↳ Introducing PIA:

PERSONAL INTELLIGENT ASSISTANT

PIA has been my companion throughout my writing journey for this book, like the protagonist in the classic cyberpunk novel Neuromancer. In many ways, my collaboration with PIA has been a reflection of the changing nature of human-machine relationships in the modern era, and it has highlighted both the opportunities and challenges that lie ahead as we continue to push the boundaries of AI technology.

The etymology of the word “time”: from Proto-Indo-European *deh₂imō, from *deh₂y- (“to divide”). ↳ 01 We use time by convention to divide: years into months, months into weeks, weeks into days, days into hours, hours into minutes, and so on. As a natural consequence, we visually represent time using a timeline, which simplifies this division into units and provides a sense of continuity as an infinite line. Nowadays this line of thinking has been followed, and the timeline has been adopted as an experience metaphor.

↳ 01

PIA, could you express a point of view about time as a concept? The concept of time is a complex and multifaceted one. To see time as a series of random events is to consider the idea of chaos theory. Accordingly, random events can have a profound impact on the trajectory of our lives, with small changes in one area of our lives leading to significant changes in other areas. From this perspective, the events in our lives can be seen as interconnected, with each one influencing the other in complex and unpredictable ways. In this sense, our lives are like a series of multiple storylines, with each event shaping the narrative of our lives in unique and unexpected ways.

AI might enrich our present by learning from the past, shaping our future.

Let’s go back to the beginning, what does the word “time” mean?



TIME. HUMAN-MACHINE / DIALOG

↳ David Bowie and Human-AI

What if users had the ability to seamlessly access the latent space and effortlessly customize the connections between the contents stored, according to their own preferences? It would undoubtedly be an incredibly captivating, empowering, and exciting experience, wouldn’t it?

If a user could directly access the latent space of an AI algorithm, they would have the ability to explore and manipulate the abstract representation of the data that the AI has learned. This could potentially give the user more control over the output generated by the AI and allow them to customize it according to their own preferences. For example, in the context of image generation, a user could explore the latent space and identify specific features or styles that they prefer, such as a certain color palette or texture. They could then use this knowledge to guide the AI algorithm in generating new images that reflect their personal preferences. However, it’s important to note that accessing the latent space directly can be complex and may require advanced technical skills. Additionally, it’s possible that the user’s preferences may not align with the underlying patterns learned by the AI algorithm, which could lead to suboptimal or unexpected results. Therefore, even with direct access to the latent space, the user would still need to understand the underlying data and AI algorithm in order to achieve the desired outcomes.

Let’s think creatively: what if the latent space could develop and evolve based on user input? This would allow an AI image generator tool to be more customized for each individual, based on their inputs and preferences. If the latent space of an AI image generator tool could change according to user input, it would open up many creative possibilities for individuals to personalize their generated images based on their preferences and inputs.
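What “manipulating the abstract representation” could look like in code, as a hedged sketch: assuming a model with vector latent codes (the embeddings below are random stand-ins, not any real tool’s API), a preference direction can be estimated from examples a user keeps versus rejects and then applied to fresh codes, the same arithmetic behind classic latent-direction edits in generative models.

```python
import numpy as np

# Sketch of one way a user could "customize connections" in a latent
# space: estimate a preference direction from examples they like and
# dislike, then shift new latent codes along it. The embeddings here
# are random stand-ins for codes a real generative model would produce.

rng = np.random.default_rng(7)
DIM = 16

liked = rng.standard_normal((5, DIM)) + 2.0     # codes of images the user kept
disliked = rng.standard_normal((5, DIM)) - 2.0  # codes the user rejected

# The difference of means points from "disliked" toward "liked":
# the same arithmetic behind classic latent-direction edits in GANs.
direction = liked.mean(axis=0) - disliked.mean(axis=0)
direction /= np.linalg.norm(direction)

z = rng.standard_normal(DIM)        # a fresh latent code
z_custom = z + 1.5 * direction      # nudged toward the user's taste
```

The strength factor (1.5 here) would in practice be a user-facing slider: small values make subtle edits, large ones push the output firmly toward the learned preference.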
This would essentially allow the AI to adapt to the user’s unique style and tastes, creating a more personalized and tailored experience. For example, let’s say a user prefers images with a specific color palette or visual style. By inputting this preference into the AI, the latent space could adjust its parameters to

Interaction

In his song Space Oddity, Bowie sang about a fictional astronaut named Major Tom who becomes lost in space, a theme that could be seen as a metaphor for the potential risks of human-AI interaction. In his album Diamond Dogs, he created a dystopian vision of a future society

AI’s rapid progress renders humans obsolete in the face of superior intelligence.

controlled by a supercomputer called “the Hunger City.” Bowie was also interested in the power of collaboration and experimentation in creative work, which could be seen as a parallel to the potential of AI as a tool for artistic expression and exploration. In a 1999 interview with the New York Times, he spoke about the potential of the Internet to create new forms of collaboration and creative expression, saying: “I think the potential of what the Internet is going to do to society, both good and bad, is unimaginable. I think we’re ac­ tually on the cusp of something exhilarating and terrifying.”

generate images that align with that preference. This would allow for greater customization and control over the final output, giving users a sense of creative ownership over the AI-generated images. Additionally, this approach could also lead to more diverse and inclusive AI-generated images. By allowing users to input their own preferences and inputs, the latent space could become more varied and reflect a wider range of styles and perspectives. This would help to address the potential biases or limitations of the AI image generator tool, resulting in more inclusive and representative outputs. Overall, giving users more control and input into the latent space of an AI image generator tool could lead to more personalized, diverse, and inclusive creative outputs.

I meant: what if a user could train and develop their own AI latent space? The training could be based on their own inputs, using prompt-result selections as dynamic data to train the model. For instance, if a user inputs “typography letter ‘G’” as the subject of the prompt and then associates it with specific visual attributes or styles, could the image produced as output be used as a unique data file to inform a “style”? Basically, each user selection could be used to train the AI to influence the next generation. Would this be possible?

Yes, it is possible to train the latent space using user input and preferences. This is known as interactive or user-guided generative modeling. In this approach, the user provides feedback and input to the model during the training process, and the model adjusts the latent space to incorporate the user’s preferences. For example, in your case of training a style based on input and output selection, you could provide the model with examples of typography, letters, and associated visual attributes or styles, and the model could learn from your input to create a personalized style.
As you continue to provide feedback to the model, the latent space could be refined to better reflect your preferences and unique style. Interactive generative modeling is still an active area of research, but there have been promising results in various applications such as image and music generation.
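As a toy picture of this interactive, user-guided loop (not any specific tool’s mechanism): sample a batch of candidates, let the user pick one, and tighten the sampler around the pick. Here a scoring function stands in for the human’s selection, and the “latent space” is a plain Gaussian.

```python
import numpy as np

# Toy sketch of human-in-the-loop refinement: candidates are sampled from
# a Gaussian in a latent space, the "user" repeatedly picks the candidate
# closest to a hidden preference, and the sampler tightens around the
# picks. user_pick() is a stand-in for a real person's selection.

rng = np.random.default_rng(42)
DIM = 8
hidden_preference = rng.standard_normal(DIM)  # what the user "really" wants

def user_pick(candidates):
    """Stand-in for human selection: choose the candidate nearest the preference."""
    dists = [np.linalg.norm(c - hidden_preference) for c in candidates]
    return candidates[int(np.argmin(dists))]

mean, spread = np.zeros(DIM), 1.0
for _ in range(20):                       # 20 rounds of feedback
    batch = [mean + spread * rng.standard_normal(DIM) for _ in range(8)]
    choice = user_pick(batch)
    mean = 0.7 * mean + 0.3 * choice      # drift toward the selection
    spread *= 0.95                        # explore less as taste is learned

error = np.linalg.norm(mean - hidden_preference)  # should shrink over rounds
```

The shrinking spread mirrors the book’s point about progressive learning: early batches explore widely, later ones refine within the taste the system has already inferred.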


evolve over time as you continue to use it, resulting in a highly tailored and optimized user experience.

Do you think this tailored machine learning approach based on user input could define a more unique experience in the interaction with the AI? Yes. By gathering information about the user’s preferences, interests, and behaviors, the AI could adapt its responses and actions to better suit the user’s needs and create a more engaging and relevant experience. This could also help to build trust and increase user satisfaction with the AI system, as it would feel more tailored to their individual needs and preferences. However, it is important to note that there are also ethical considerations to take into account when collecting and using user data, so any such approach would need to be carefully designed and implemented to ensure user privacy and safety.

Do you think such a personalized AI could become an extension of the self, one as unique as the individual using it? Yes. However, it is important to note that an AI is still a separate entity and not a part of the individual’s physical or cognitive self. It may be able to mimic certain aspects of the individual’s personality or behavior, but it cannot fully replicate the complexity and nuances of human consciousness. Therefore, while a personalized AI can provide a unique experience, it should not be seen as a replacement for human relationships and interactions.

Don’t you think the usage of AI could become addictive? Yes, it is possible for the use of AI to become addictive. Just like any other technology or tool, AI can be used excessively or in an unhealthy way, leading to addictive behavior. For example, if someone becomes overly reliant on an AI personal assistant or chatbot for decision-making or social interaction, they may develop an addiction to using that AI. Additionally, AI can be designed to be engaging and stimulating, which could also contribute to addictive behavior. It’s important to be aware of the potential risks and to use AI in a balanced and responsible way.

Is it the prompt syntax influencing the way you answer me?

PIA, imagine the latent space becoming a destination where you could mind travel, what would this be like? If we were to imagine the latent space of AI models as a destination for mind travel, it could be an incredibly fascinating and immersive experience. Mind traveling to the latent space would be like stepping into a vast and abstract universe of information, creativity, and possibilities.

Infinite Exploration: The latent space would be boundless, offering endless dimensions and directions to explore. Each point in the latent space represents a unique representation or feature, and mind travelers could traverse this space freely, encountering a wide variety of data patterns and ideas.

Abstract Realms: Mind travelers would encounter abstract realms of data representations. These realms might include clusters of similar concepts, galaxies of artistic styles, or landscapes of visual aesthetics.

Meta Creations: In the latent space, mind travelers might even encounter “meta creations.” These are AI-generated concepts that merge and combine multiple data points, resulting in entirely new and novel ideas, challenging our understanding of creativity and art.

Personalized Dreams: Mind travelers could define their intentions before embarking on the journey, guiding the AI to generate outputs tailored to their preferences. Each mind travel experience would be unique and personalized, catering to individual interests and desires.

We dream of AI as a conversational companion, offering companionship and insights.

What if we could create a tailored model that would be trained progressively by input (images, prompts, etc.) and output (response) selection, using these as a continuous dataset? This is an interesting idea, and there are actually some techniques that can achieve this to a certain extent. One approach is known as reinforcement learning, where the AI is trained by providing it with feedback in the form of rewards for achieving certain outcomes. The user could provide these rewards based on their preferences or desired outcomes, essentially training the AI to generate content that fits their criteria. However, reinforcement learning still requires a dataset to begin with, so it may not fully address your question. Another approach is known as Generative Adversarial Networks (GANs), which involves training two AI models simultaneously: a generator and a discriminator. The generator creates content, while the discriminator evaluates how realistic it is. By providing feedback to the generator through the discriminator, the user could potentially influence the style or characteristics of the content being generated, based on their preferences or desired outcomes. Overall, while it may not be possible to completely train an AI model without a dataset, there are techniques that allow user input and feedback to shape the output in a tailored way.

What if the evaluation is based on the user’s input selection? For example, if I use Midjourney and the AI creates images based on my input, the AI can collect this input to refine and tailor my preferences. This will deliver a refined style that is trained by each image I’ve selected when upscaling. These inputs provide progressive learning to shape and deliver a unique AI for me. Yes, that is definitely possible. This approach is known as interactive machine learning or human-in-the-loop machine learning, where the AI system is designed to learn and adapt based on user feedback and input. In this case, as you provide feedback and select inputs, the AI system can learn and refine its output to better match your preferences. This can create a personalized AI experience that is unique to your needs and preferences. The system can continue to learn and


↳ 20

PIA, why is human feedback and “education” important for AI tools? Improvement and refinement: Human feedback helps AI developers identify and address

The relentless drive for AI-perceived efficiency might crush the delicate nuances of true artistic creation.

the limitations, biases, and errors of AI systems. By providing feedback, users can highlight areas where AI tools may be inaccurate, incomplete, or misaligned with human values. This feedback allows developers to make necessary improvements and refine the technology, enhancing its performance and reliability over time. Mitigating bias and ethical concerns: AI systems are trained using data that can reflect societal biases and prejudices. Human feedback plays a vital role in identifying and addressing such biases. It helps AI developers recognize when the technology produces discriminatory outputs or perpetuates unfair treatment. By actively involving users and obtaining their feedback, AI tools can be fine-tuned to mitigate bias and promote fairness,

TIME. CREATIVE PROCESS: INTUITION VS. RANDOMNESS

8. CURATE—Human Selection: Curate and select the most promising and visually appealing options that align with your design direction. Apply your eye and creative judgment to choose the outputs that best represent your vision. PROLEPSE 02 → Once the designs are finalized, test them in different contexts and letter shapes and evaluate their effectiveness in achieving your desired communication goals. This also implies filtering the huge quantity of images created by AI.

9. EVOLVE—End of Cycle: It’s time to create your visual story and define the narrative that best represents your “iterative” concept. The storytelling modality you choose might lead you to the creation of videos to better display the project’s evolution. There are several tools that can be used to create great AI videos. Starting from your own images, you can apply style transfer or create additional stills based on interpolation points for the latent walk videos. I switch between tools ↳ 21 according to the style I would like to generate.
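The “interpolation points for the latent walk videos” can be sketched as follows: spherical interpolation (slerp) between curated keyframe latent codes, one short run of frames per segment. The keyframes below are random stand-ins; a real pipeline would decode each latent into an image.

```python
import numpy as np

# Sketch of building "latent walk" video frames: spherical interpolation
# (slerp) between keyframe latent codes. Latent codes are usually drawn
# from a Gaussian, so interpolating along the hypersphere preserves their
# typical norm better than a straight line does.

rng = np.random.default_rng(3)
DIM = 32

def slerp(a, b, t):
    """Spherically interpolate between latent codes a and b at fraction t."""
    an, bn = a / np.linalg.norm(a), b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(an, bn), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return a
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

keyframes = [rng.standard_normal(DIM) for _ in range(4)]  # curated stills
FRAMES_PER_SEGMENT = 24                                    # ~1 second at 24 fps

walk = []
for a, b in zip(keyframes, keyframes[1:]):
    for t in np.linspace(0.0, 1.0, FRAMES_PER_SEGMENT, endpoint=False):
        walk.append(slerp(a, b, t))
walk.append(keyframes[-1])  # land exactly on the final keyframe
```

Chaining the segments this way makes the video pass exactly through each curated still, with the in-between frames morphing smoothly from one to the next.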

transparency, and ethical use. User adaptation and customization: AI tools often benefit from user feedback to adapt to individual needs and preferences. Understanding how users interact with the technology and tailoring it to their specific requirements can enhance the user experience and make the AI tool more effective and user-friendly.
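One concrete, hypothetical shape such a human feedback channel could take is pairwise comparison: show a user two outputs, record which one they prefer, and aggregate the verdicts into ratings. A minimal sketch using Elo-style updates (the output names and the K factor are illustrative, not a specific product’s mechanism):

```python
# Sketch of aggregating human feedback via pairwise comparisons, assuming
# an Elo-style rating update (one simple mechanism among many used in
# human-in-the-loop evaluation). Output names are placeholders.

K = 32.0  # update step size

def expected(r_a, r_b):
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(ratings, winner, loser):
    """Shift rating mass from the rejected output to the preferred one."""
    e_w = expected(ratings[winner], ratings[loser])
    delta = K * (1.0 - e_w)
    ratings[winner] += delta
    ratings[loser] -= delta

ratings = {"output_A": 1000.0, "output_B": 1000.0, "output_C": 1000.0}

# Simulated user feedback: A preferred over B twice, then C over A once.
for winner, loser in [("output_A", "output_B"),
                      ("output_A", "output_B"),
                      ("output_C", "output_A")]:
    update(ratings, winner, loser)
```

Because each update moves the same amount of rating from loser to winner, the total stays constant; what changes is the ranking, which is exactly the signal a developer needs to spot outputs users consistently reject.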

As mentioned, part of the process is using different algorithm versions to create mood board images which can greatly contribute to the definition of various building blocks in a creative project. The exploration of diverse styles, aesthetics, and visual interpretations through these algorithms allows for a comprehensive exploration of formats, materials, messages, and more. The curated mood board becomes a visual language that helps shape the format, whether it’s letter or word-based,

the choice of materials, such as textures or color palettes, and the overall message conveyed through the design. The combination of algorithmic generation and human selection in this process opens up new possibilities for creative expression. It empowers users to leverage the potential of AI, to create more controlled and guided results, reflecting the intent of the creator.


PROLEPSE 02 →

It is clear that with the quantity of content produced in such a short time (we are not talking about quality now), navigation through the iterations, the parenting, and the versions of the “co-author” will all be key considerations and attributes that help to shape the “Curated-Human selection.” Yet, as mentioned, the quantity of iterations—I discovered myself while working on this book—can become brutally large and unstructured. I imagined a few principles for the co-creation era, of which “Curated selection over iteration” is one. ↳ Principles for the co-creation era can be found on P. 62.

PIA, could you visualize typography that is “alive”? baseline prompt for all 3 images: Typographic poster, word “Love” / “Yes” / “No,” full word, font design alphabet, cyber-organic typography inspired by Ghost in the Shell, by Wes Anderson, photographed by Romina Ressia, 200mm, HD, f/2.0, highly detailed, accurate ↳ 21

PIA, can you give me a list of AI tools?

Future Tools, “collects & organizes all the best AI tools so you too can become superhuman!”

AI might offer a wellspring of inspiration, expanding our creative palette, giving life to unexpected creations.




NATURE. TECHNOLOGICAL SYNERGY

↳ 22

PIA, what are Ferrofluids?

Ferrofluids are composed of tiny magnetic nanoparticles, usually iron oxide, suspended in a carrier fluid, such as water or oil. These nanoparticles are typically coated with a surfactant to prevent them from clumping together.

Dystopian visions arise as AI-driven lettering homogenizes handwritten uniqueness.

One of the most captivating properties of ferrofluids is their ability to respond to magnetic fields. When a magnetic field is applied, the ferrofluid

PIA, let’s combine lettering and liquids prompt: “AI” experimental typography inspired by graffiti lettering, made of cells and ferrofluids, pastel colors, cover, organic typography, cellular, biology

aligns itself along the field lines, forming characteristic spiky patterns known as the Rosensweig, or normal-field, instability. This effect is caused by the attractive forces between the magnetic particles in the fluid, creating these unique structures. Ferrofluids also exhibit a phenomenon called the magnetorheological (MR) effect, which refers to their ability to change their viscosity or flow behavior in the presence of a magnetic field. When a magnetic field is applied, the magnetic particles in the ferrofluid align, causing the fluid to become more viscous and resistant to flow. This property has led to practical applications of ferrofluids in various engineering fields, such as damping systems, seals, and even the development of haptic feedback devices. Ferrofluids are a captivating combination of magnetism and fluid dynamics, with practical and aesthetic applications that continue to inspire research and innovation in various fields.

PIA, let’s combine lettering and body textures prompt: 3D typography “BODY,” collage of body parts, imagined by Rick Owens, neon colors shades in background, letter “G”


↳ 23

PIA, what are the cultural highlights of graffiti art?

The word “graffiti” originates from the Italian word “graffiato,” which means “scratched.” It was first used in its modern context during the 1850s, when it referred to inscriptions or drawings found on the walls of ancient ruins. Graffiti has often been used as a form of cultural and social commentary. Artists use it as a platform to express their opinions, raise awareness about social issues, or make political statements. Graffiti can be a powerful tool for marginalized communities to have their voices heard. Trains and subways have historically been significant canvases for graffiti artists, with entire subway cars covered in vibrant artworks. Train and subway art played a crucial role in the development of graffiti culture, particularly in cities like New York during the 1970s and 1980s. Graffiti has had a significant influence on popular culture. It has been featured in movies, ↳

PIA, let’s start giving life to this typographic journey. prompt: Typography made of organisms, black with neon color particles, cells, bacteria, macro feeling, glossy material with some transparency, hyper realistic

music videos, fashion, and advertising campaigns, blurring the lines between art, commerce, and urban culture. Over the years, graffiti has gained recognition as a legitimate art form. Museums, galleries, and art institutions now exhibit graffiti and street art, elevating its status and fostering dialog about its place in the art world.

We dream of AI-generated imperfections as bridges between the algorithmic and the organic.




NATURE. EMBRACING AESTHETIC IMPERFECTIONS

↳ 27

V05+: These later versions continue to refine and elevate the prompt, bringing it closer to the desired final result. When it comes to typographic exploration, I’m not a huge fan of the versions post V04; even V04, for my taste, is only good with reference images. When it is used with a text prompt alone, most of the time the letter is not “made of,” but “filled by,” a specific material. It loses the unexpected aesthetic imperfection and delivers an overly polished mimicry of reality instead of the “visual dreams” and aesthetic imperfections discussed.

AI hallucinations, also known as “adversarial examples” or “hallucinated samples,” are a fascinating and somewhat perplexing phenomenon in the realm of artificial intelligence and machine learning. The term refers to instances where AI models, particularly deep neural networks, produce outputs that seem convincing to the model but are completely nonsensical or unexpected from a human perspective—a fantastic example to embrace imperfections! ↳ 27 These hallucinations occur due to the complex and nonlinear nature of deep neural networks. These models learn to recognize patterns and features in data during their training process. However, in certain cases, they can be highly sensitive to subtle

PIA, would it be possible to hallucinate AI by inputting fake codes? Yes, inputting nonsensical or unrelated prompts, such as fake CSS code, could potentially result in AI-generated outputs

The fear is that AI could evolve beyond human comprehension, becoming an existential enigma.

that resemble hallucinations— images that are surreal, nonsensical, or abstract in nature. The AI’s attempt to make sense of the input might lead to creative, unexpected, and sometimes bizarre visual interpretations. However, it’s important to note that these outputs are not actual hallucinations; they are the result of the AI’s attempt to generate images based on the input it receives. ↳ 28

perturbations or noise in the input data, leading to unexpected outputs. For example, an AI image classifier might confidently identify an image of a panda correctly. However, when a small, imperceptible amount of carefully crafted noise is added to the panda image, the same classifier might misclassify it as a completely different object, like a gibbon or a toaster. From a human perspective, the perturbed image still appears to be a panda, but the AI model hallucinates a different object due to the noise. AI hallucinations raise interesting questions about the robustness and generalization of AI models. While they can excel in specific tasks, they can also be vulnerable to adversarial attacks or to generating nonsensical outputs under certain conditions. Researchers are actively studying and developing techniques to address these challenges and improve the reliability and interpretability of AI models. Moreover, AI hallucinations also offer insights into the inner workings of deep neural networks. They provide evidence that AI models might rely on specific features or shortcuts in the data, rather than truly understanding the content of the images they process. I find it an interesting opportunity to experiment on how to obtain these hallucinations and what to do with them. ↳ 28

PIA, can you tell me more about opportunities to hallucinate AI with creative prompts based on code? Abstract Symbolism: Craft prompts that use abstract code-like symbols to represent concepts, emotions, or metaphors. Encourage the AI to interpret these symbols in visually striking and unconventional ways. Code Poetry: Describe a poetic narrative that involves code-related concepts. Prompt the AI to visualize the poetic journey using vibrant and imaginative visuals. Impossible Algorithms: Invent fictional algorithms with paradoxical or illogical behaviors. Ask the AI to visualize the outcomes of these algorithms, resulting in visually perplexing and hallucinatory images. Non-Computational Code: Imagine code that operates beyond traditional computational constraints. Instruct the AI to create visuals inspired

PROMPTING WITH “FAKE CODES”—CSS CREATED BY ME /Imagine <3D Typography> div.gallery { Subject: Letter “G”; text-align: center; font-family: “Experimental Typography”; body { < Collage of Body parts > background-color: Neon Colors; color: drops;

Attributes { Run: 127 <a target="_blank" href=“Minimal_environment> <div class=“gallery”> Inspired by: Rick Owens </div> —ar 9:16 } } } ↳ IMAGE A / B
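The panda-to-gibbon misclassification described earlier can be made concrete. The fast gradient sign method (FGSM) is one standard way such perturbations are crafted; below is a toy version on an invented linear classifier rather than a real image network (in real, high-dimensional images a far smaller step is enough to be imperceptible).

```python
import numpy as np

# Toy illustration of an adversarial perturbation in the spirit of the
# fast gradient sign method (FGSM). The "classifier" is a hand-made
# logistic model over 4 features, not a real image network; the point is
# only that a small, structured nudge can flip a confident decision.

def predict(w, b, x):
    """Return the class-1 probability of a logistic classifier."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# Invented weights: the model confidently labels x as class 1 ("panda").
w = np.array([2.0, -1.0, 0.5, 1.5])
b = 0.1
x = np.array([0.8, -0.4, 0.3, 0.6])

p_clean = predict(w, b, x)  # high confidence for class 1

# FGSM step: move each input coordinate by epsilon in the direction that
# decreases the class-1 logit. For a linear model the gradient of the
# logit w.r.t. x is just w, so the attack direction is -sign(w).
epsilon = 0.9
x_adv = x - epsilon * np.sign(w)

p_adv = predict(w, b, x_adv)  # confidence collapses below 0.5
```

The structure, not the size, of the noise does the damage: a random nudge of the same magnitude would rarely flip the decision, but one aligned against the gradient reliably does.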


IMAGE A Format: --ar 9:16 Version: --v 3

IMAGE B Format: --ar 9:16 Version: --v 3

AI could bridge cultural divides, fostering understanding and empathy.




RE-IMAGINE: RULES. ROLES. USE.

02

AI-generated content could lead to a saturation of impersonal creations.

RE-IMAGINE: RULES. ROLES. USE. In the near future (today), designers will navigate a landscape of both challenges and opportunities as AI becomes increasingly integrated into the design process. The co-creation era is a journey that challenges traditional norms, re-imagines the rules of design, and explores the evolving relationship between designers and AI. It demands a set of design principles that serve as guiding rules in this new era. These principles challenge traditional notions of creativity, encouraging designers to experiment with AI. As designers navigate the future, they must adapt to the changing landscape, acquiring new skills to effectively utilize AI in their work. From training AI models to interpreting AI-generated design assets, designers will play a crucial role in guiding and overseeing AI systems, ensuring alignment with human values and goals. This chapter provides insightful case studies of my own workflow, illustrating how I have seamlessly incorporated AI into my creative process to elevate typographical experiments.


AI collaborates with artists, fusing human emotion and computational precision.


Begin

Re-Imagine Envision


The fear emerges that AI could erode the artistry of lettering, rendering it algorithmically uniform.

PRINCIPLE #6: FORGET ABOUT AUTHORSHIP


We envision AI transforming education, making learning personalized and accessible to all.


Challenging the traditional notion of ownership, collaboration in the co-creation of works with AI might lead to a more collective creative output. AI is not copying; it is interpreting. That is the difference between plagiarism and reinterpretation. Designers must embrace the notion of artificial reinterpretation over the misconception of reproduction.


AI might streamline typographic design, but fears of creative homogenization persist.

PRINCIPLE #7: CURATED SELECTION OVER ITERATIONS


The future of typography shines bright with AI, where fonts resonate with human emotion and AI precision.


The co-creation design process often generates a multitude of outcomes; effective curation and selection keep it manageable. By carefully curating and selecting the most promising options against predefined criteria, time is optimized and subsequent iterations build upon the best possible foundations.


STYLE EXPLORATION

LETTERS: { A, G } INSPIRED BY OWENS

ATTRIBUTES: 3D TYPOGRAPHY, BODY PARTS, MINIMAL ENVIRONMENT

SPECS: --V 2 --AR 1:1



STYLE EXPLORATION

LETTERS: { R, B } + REF IMAGE

ATTRIBUTES: 3D TYPOGRAPHY, COLLAGE OF BODY PARTS, PARTS IN DECAY, HUMAN HAIR

SPECS: --V 2 --AR 9:16



ALPHABET CREATION USING SEEDS

LETTERS: { I, L, J, K, M, N, O, P, Q, R, S, T }

ATTRIBUTES: HUMAN ANATOMY, BODY, SKIN TEXTURE, NEON COLORS

SPECS: --V 3 --AR 9:16
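Midjourney’s --seed parameter fixes the initial noise an image is generated from, which is what makes a consistent alphabet series possible. A hypothetical pair of prompts for the series above might look like the sketch below (the seed value is invented for illustration):

```
/imagine letter "I", human anatomy, body, skin texture, neon colors --v 3 --ar 9:16 --seed 1234
/imagine letter "L", human anatomy, body, skin texture, neon colors --v 3 --ar 9:16 --seed 1234
```

Reusing the same seed across letters keeps composition and palette consistent while only the glyph changes.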



STYLE EXPLORATION BLENDING

IMAGES: BLENDING OF 3 REFERENCE IMAGES

ATTRIBUTES: --

SPECS: PORTRAIT
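Blending of this kind can be done with Midjourney’s /blend command, which accepts two to five uploaded reference images plus an optional dimensions setting (Portrait, Square, or Landscape). A hypothetical invocation, with invented file names, might be:

```
/blend image1: refA.png image2: refB.png image3: refC.png dimensions: Portrait
```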



STYLE EXPLORATION BLENDING

LETTER: G

MIDJOURNEY IMAGE USED AS REFERENCE FOR ILLUSION DIFFUSION

PROMPT: MANY HANDS


IMAGES: BLENDING OF 2 REFERENCE IMAGES

ATTRIBUTES: --

SPECS: PORTRAIT


STYLE DEFINITION

LETTERS: { V, A, G } INSPIRED BY KAPOOR

ATTRIBUTES: TYPOGRAPHY, MADE OF BLACK HANDS LATEX GLOVES, HUMAN

SPECS: --V 3 --S 750 --AR 1:1




RE-IMAGINE: USE

02—6 EMBRACING UNIQUENESS: A HARMONIOUS DANCE BETWEEN AI AND NATURE

In the realm of AI-generated imagery, a profound connection emerges: an enchanting harmony between the celebration of imperfections and the timeless beauty of nature. Exploring how the evolutionary process of nature, guided by the principle of “form follows function,” intertwines with the creative expressions born of AI reveals a profound truth.


Imperfections are not defects to be avoided, but rather paths of evolution that determine aesthetics. Nature, in its boundless diversity, is a woven tapestry of imperfections. It is a testament to the fact that flaws are not to be considered flaws, but rather unique qualities that contribute to the greatness of existence. Just as a jagged mountain peak or the irregular patterns on a seashell have a captivating charm in their texture, AI-generated images, with their controlled imperfections, reflect the essence of nature’s imperfect beauty.

In the natural world, form follows function, a guiding principle that ensures harmony and efficiency in the design of living organisms. It is an intricate dance in which every curve, pattern, and structure has a purpose, allowing creatures to navigate their environments and fulfill their biological roles. Artificial intelligence, through its algorithms and machine learning capabilities, has drawn on this principle, recognizing the importance of functionality in shaping aesthetic expressions. The controlled chaos introduced by artificial intelligence mirrors the unpredictability and consistent “inconsistencies” found in the natural world. These imperfections act as pathways for evolution, facilitating adaptation and innovation, much like nature’s guiding hand.

Within the images generated by artificial intelligence, imperfections become a language of expression, a channel for artistic evolution and discovery. It is through these imperfections that the spirit of nature thrives, offering a profound reminder that beauty and function intertwine in a seamless dance. The AI-born shapes and forms echo the same principles observed in nature, drawing inspiration from the symphony of life’s infinite variations. Nature, with its imperfections, offers lessons in resilience, adaptability, and the search for harmonious functionality. AI, in its creative potential, emulates these qualities, bridging the gap between human ingenuity and the enigmatic elegance of nature.

The development of full artificial intelligence could spell the end of the human race



STYLE DEFINITION

LETTERS: { B, C }

ATTRIBUTES: COMPLEX EXOTIC PLANT GROWING LUNGS AND BRAIN, INTRICATE LEAVES, ORGANIC, VISIBLE VEINS IN GREEN, RED, AND BLUE, MOIST, DIRTY POLLEN DUST

SPECS: WIDE ANGLE LENS, FOCUSED, TOP DOWN, DEEP DEPTH --V 3 --AR 9:16


STYLE DEFINITION

LETTERS: { C, A, G }

ATTRIBUTES: COMPLEX EXOTIC PLANT GROWING LUNGS AND BRAIN, INTRICATE LEAVES, ORGANIC, VISIBLE VEINS IN GREEN AND RED, MOIST, DIRTY POLLEN DUST

SPECS: WIDE ANGLE LENS, FOCUSED, TOP DOWN, DEEP DEPTH --V 3 --AR 9:16


ALPHABET CREATION USING SEEDS

LETTERS: { E, F, G, H }

ATTRIBUTES: COMPLEX EXOTIC PLANT GROWING LUNGS AND BRAIN, INTRICATE LEAVES, ORGANIC, VISIBLE VEINS, MOIST, DIRTY POLLEN DUST

SPECS: WIDE ANGLE LENS, FOCUSED, TOP DOWN, DEEP DEPTH --V 3 --AR 9:16


REFERENCE IMAGES + TEXT PROMPT

REFERENCE IMAGE: CREATED WITH --V 1 & --V 2

ATTRIBUTES: LETTERS { E, B } MADE OF FROG ANATOMY PARTS, ANATOMY BOOK ILLUSTRATION STYLE, DYSTOPIAN, HUMAN HAIR, HD

SPECS: --V 4 --S 1000




REFERENCE IMAGES / BLENDING

PROMPT: 3D RENDER, LETTER “M,” READABLE “M,” THE LETTER IS MADE OF { CRAB, FROG, LOBSTER } BODY, TRANSPARENT, ABYSS, INSPIRED BY ANATOMY BOOKS, HYPER REALISTIC, HYPER DETAILED, TEXTURE EXPOSURE, PHOTOGRAPHIC DETAILS, LETTERING



SELECTED RESULTS


CROSS-BREEDING / BLENDING RESULTS

LETTERS: { J, R } + 2 REFERENCE IMAGES

ATTRIBUTES: 3D RENDER, CRAB AND FROG ANATOMY, ANATOMY BOOK, ABYSS

SPECS: HYPER DETAILED, PHOTO REALISTIC, --V 4 --S 750



THE FROG / BLENDING RESULTS

BLENDING: 3 REFERENCE IMAGES

ATTRIBUTES: --

SPECS: SQUARE --V 4 --S 750




ARCHIVE SNAPSHOT FOR SERIES DEVELOPMENT


1,578 IMAGE ITERATIONS PRODUCED




VISUAL MESSAGE


WORD: LIFE

ATTRIBUTES: COMPLEX EXOTIC PLANT GROWING LUNGS AND BRAIN, INTRICATE LEAVES, ORGANIC, VISIBLE VEINS, MOIST, DIRTY POLLEN DUST

SPECS: WIDE ANGLE LENS, FOCUSED, TOP DOWN, DEEP DEPTH, --V 3 --AR 16:9


VISUAL MESSAGE

WORD: GROW

ATTRIBUTES: TYPOGRAPHY, ECOSYSTEM, INTRICATE, ALIVE, GROWING, 8K

SPECS: --V 3 & --V 4 --AR 16:9



VISUAL MESSAGE

WORDS: DIRECT INSTEAD OF DOING, INSPIRED BY BRIAN ENO

ATTRIBUTES: 80S, TYPOGRAPHY, BODY PARTS, INTIMATE, BOLD, COMPLEMENTARY

SPECS: --V 3 --AR 16:9



VISUAL MESSAGE

WORDS: THE THEORY OF CHAOS

ATTRIBUTES: TYPOGRAPHY, CHAOS, EMBRACING THE MEANING OF CHAOS, READABLE, BUTTERFLY EFFECT, NATURE, TIME, EVOLUTION

SPECS: --V 3 --AR 16:9 --S 750



VISUAL MESSAGE

RE-IMAGINE. USE. THE VISUAL MEANING

WORD: FEAR

ATTRIBUTES: READABLE FONT, FEAR’S OWN MEANING, SCREAMING, FADING, VIBRATING, FEAR COLOR PALETTE

SPECS: --V 2 --AR 16:9



WORD: ERROR

ATTRIBUTES: TYPOGRAPHY, NEON PARTICLES, FLUIDS, MICROSCOPY, CELL BIOLOGY

SPECS: MACRO EFFECT, OCTANE RENDER, --V 2 --AR 16:9



ENVISION: PREDICTABLE FUTURES

MAURO D’ALESSANDRO

↳ 33

A Conversation About AI × PRODUCT DEVELOPMENT



AI is not just disrupting specific design tasks, but the end-to-end Design Thinking process. The unprecedented capacity to access and synthesize large customer datasets and identify patterns at scale is reshaping the very meaning of the customer-centric approach and digital product development. Generative AI is elevating divergent thinking and concept generation, freeing up designers to think more systematically and at scale rather than focusing exclusively on production work. How will design leaders keep their teams relevant in this new world? Will AI be able to collaborate with entire teams, or will it stay relegated to the role of personal assistant? What role will qualitative research play in helping businesses create meaningful relationships with their customers? How will AI influence the future of design teams and their role within companies?

Despite the term “Artificial Intelligence” being around since the 1950s, we are just beginning to understand what a world with wide AI mass distribution looks like. Every tech company is integrating some form of AI capability into its product portfolio. These tools will impact every industry, and design will be no exception. Of course, I believe that AI penetration will be massive and permanent. Still, in the long term, every company and, therefore, every design team will find its way to orchestrate human talent and AI resources and find the optimal recipe, which will probably represent a competitive advantage. I see three main challenges that design leaders will have to face in this new world:

1. Be agile and open to change, and raise the bar regarding quality and efficiency. They need to reshape their teams to interpret and adapt to fast-changing business needs. Businesses will have higher expectations towards design teams in terms of resilience, efficiency, and capacity to elevate and differentiate the customer experience with unprecedented attention to detail.

2. Keep up with the progressive automation of routine tasks. In the last year, design teams shrank considerably, and I wonder how AI could help to invert this trend. In some cases, pressure on resources will accelerate design democratization. The most repetitive design tasks and some specific evaluative research activity will be delegated to engineering and product teams or automated through AI-integrated software. Design orgs have the opportunity to shift resources upstream and demonstrate the capacity to envision and innovate. I think the most competitive design teams will be the ones that strike the right balance between an efficient and flexible org structure, practice democratization, and the capacity to innovate. Winning leaders will anticipate this paradigm shift by restructuring teams to focus on more strategic work. Designers must be more and more adaptive to change and influence business and product stakeholders, while educating them about the customer-centric approach.

3. The last challenge will be to keep attracting the best design talents in a world where the definition of talent keeps evolving. Great leaders can only do that if they are able to demonstrate a strong understanding of the new world and a robust design ethos, and by that, I mean a clear and bold point of view about what it means to elevate design.


MAURO D’ALESSANDRO, Design Strategy Lead / Researcher: Mauro is a Design Leader with over 17 years of experience in digital products and services. During his career, he has covered Research, Design Leadership, and Consultancy roles in three countries, delivering products and services. In early 2020 he joined Tesco Technology, where he helped build a Service Design team. He recently took the lead on the UX Research team.



ENVISION: PREDICTABLE FUTURES

KYLE WHEELER

↳ 40

A conversation about AI × ACCESSIBILITY



AI is seen as a potentially disruptive technology whose development could spell the end of the human race, but could it instead be used to deliver a more human future when looking at accessibility and inclusivity? As we know, despite the increasing need for digital accessibility, the current state of technology is far from inclusive. Individuals with disabilities often face a variety of barriers when trying to access and use technology, such as a lack of keyboard support, poor color contrast, and a lack of alternative text for images. These barriers limit their ability to fully engage with technology and the digital world. Moreover, accessibility in digital interfaces is evaluated using standard parameters and is therefore not tailored to the individual. Could AI support more tailored and accessible experiences? We live in an interesting era: automation, prediction, and recognition through AI could help move from targeted experiences to truly unique experiences. What are the most interesting scenarios where AI could support accessibility in digital experiences to make them truly unique, based on the individual and their disabilities?

The simplest yet most profound cornerstone of accessible design is acknowledging that one size does not fit all. It is a counter-statement to many of the UX principles that underpin digital experiences today, the principle that global interaction patterns and standardized behavioral models should be universally applied. It is this thinking that has often relegated assistive technology interactions to being served by “alternative patterns,” which mostly result in sub-par experiences. The advances of multimodal models of search have incredibly powerful implications for how we synthesize and serve information and services digitally. When information is not constrained by its original format, we can optimize and serve it however is best for a user. For example, a longform text report

can be summarized as an audio clip for a visually impaired user, or a podcast can be regenerated as a detailed article for a user with hearing difficulties. Multimodal models extract knowledge from a source, allowing it to be synthesized, summarized, and regenerated in an alternative format, making it accessible and useful to all. Accessibility is about optimizing experiences for individual needs, and we have built a web of mostly rigid templates that is failing to accommodate this need. Multimodal AI models break down the barriers we have built around content and allow it to be infinitely reconstituted. This has the potential to radically transform the architecture of the web from static indexed pages to a generative knowledge and service model that truly adapts to a user’s needs at any given moment. In the short term, we can optimize existing content through generative means. For example, page designs can be quickly regenerated with layouts, media, and even basics like color schemes optimized for a variety of impairments: high contrast, low saturation for clearer and more comfortable viewing, and generative descriptive text captions for imagery and video. All of this is possible now through simple adaptation by generative models.


KYLE WHEELER, Group Creative Director / Head of Design: Kyle is a group creative director and leads the design team at R/GA London, delivering new products, services, and brands. He utilizes visual, interaction, and strategic design to prototype, test, and launch experiences in both the digital and physical realms. Prior to joining R/GA in 2019, Kyle worked at a number of London agencies, including Pentagram, 8inc, W12 Studios, and Bureau for Visual Affairs. His experience stretches from Silicon Valley startups and cultural institutions to large global corporations. Kyle’s work has won numerous awards, including recognition from D&AD, Design Week, Graphis, PromaxBDA, and Red Dot. Kyle regularly contributes thought leadership to titles including Fast Company, Design Week, and AdWeek on topics such as accessibility and branded digital experiences.



IMPRINT

Slanted Publishers UG (haftungsbeschränkt)
Nördliche Uferstraße 4–6
76189 Karlsruhe, Germany
T +49 (0) 721 85148268
info@slanted.de
slanted.de
@slanted_publishers

© Slanted Publishers, Karlsruhe, 2023
All rights reserved.
© Text and images by Gianpaolo Tucci and ChatGPT; images produced with Midjourney or Illusion Diffusion and upscaled with Topaz AI

ISBN: 978-3-948440-54-1
1st edition, 2023

Author: Gianpaolo Tucci
Creative Direction: Lars Harmsen, Gianpaolo Tucci
Art Direction: Juliane Nöst
Design: Gianpaolo Tucci, Juliane Nöst
Final Design: Julia Kahl, Juliane Nöst
Publishing Direction: Lars Harmsen, Julia Kahl
Production Management: Julia Kahl
Proofreading: Katharine Margaret Flach
Printer: Stober Medien, Germany

Fonts: Exposure (205.tf), Literata (fonts.google.com), Roboto Mono (fonts.google.com), Public Sans (fonts.google.com)
Paper: Cover: ina®star 2s, 300 g/sm; Inside: Gardapat Kiara, 100 g/sm, and Condat Perigord, 135 g/sm

Disclaimer: The publisher assumes no responsibility for the accuracy of all information. Publisher and editor assume that material made available for publishing is free of third-party rights. Reproduction and storage require the permission of the publisher. Photos and texts are welcome, but there is no liability. Signed contributions do not necessarily represent the opinion of the publisher or the editor. The German National Library lists this publication in the German National Bibliography; detailed bibliographic data is available on the Internet at dnb.de

About: Slanted Publishers is an internationally active independent design, publishing, and media house, founded in 2014 by Lars Harmsen and Julia Kahl. They publish the award-winning print magazine Slanted, which twice a year focuses on international design and culture. Since its establishment in 2004, the daily Slanted blog has highlighted events and news from the international design scene and showcased inspiring portfolios and video interviews from all over the world. In addition, Slanted Publishers initiates and creates publications focusing on contemporary design and culture, working closely with editors and authors to produce outstanding publications with meaningful content and high quality. Slanted was born from great passion and has made a name for itself across the globe. Its design is vibrant and inspiring; its philosophy open-minded, tolerant, and curious.



978-3-948440-54-1 SLANTED.DE

€ 45.– (DE)

In a future ruled by malevolent AI, human autonomy fades, leaving a soulless society. Design yields to AI, extinguishing human creativity. In this dark future, hope sparks as designers start to collaborate with AI. On this journey, typography acts as a bridge in the human-AI dialog. This book witnesses the birth of a co-creation era, urging a rethinking to craft a more human-centric future where AI magnifies human creativity rather than replacing it.

