USC Annenberg 2024 Relevance Report




© 2023 University of Southern California


EXPLORING UNCERTAINTY WELCOME TO THE 8TH EDITION OF THE RELEVANCE REPORT, which I suspect won’t be relevant for long. But today, it definitely is. Everyone in communications is talking about artificial intelligence. This was not the case in 2019, when the USC Center for Public Relations surveyed PR professionals and found that only 18% felt that AI would be an important part of their future business. In an early 2023 survey, developed in partnership with WE Communications, that percentage soared to 80. Six months later, AI is a topic at every conference, a debate at every agency and the focus of this report. Previously, the Relevance Report has included a variety of issues that affect the PR profession — from activism to ethics. But this year, we zoomed in on just one — AI — in an attempt to understand how this tidal wave of technology will impact the future of the communications industry. To tackle this challenge, we teamed up with the experts at Microsoft, who are on the front lines of the AI revolution. In addition to contributions from their Chief Communications Officer Frank X. Shaw and computer scientist Jaron Lanier, we also heard from Dean Willow Bay, USC faculty, CPR board members and a few Annenberg students. Overall, this report contains almost 40 insightful essays expressing different points of view about where AI is going to take us. To sum them up, I borrowed a metaphor from one of our brilliant student editors: the Hero’s Journey.

Fred Cook is the director of the USC Center for Public Relations, a professor of professional practice at USC Annenberg, and the chairman emeritus of the global PR firm Golin. During his 35-plus years at Golin, he has had the privilege to work with a variety of high-profile CEOs, including Herb Kelleher, Jeff Bezos and Steve Jobs, and managed a wide variety of clients, including Nintendo, Toyota and Disney. His book, “Improvise: Unconventional Career Advice from an Unlikely CEO,” is the foundation for his popular USC Annenberg honors class on Improvisational Leadership.



On a Hero’s Journey, the main characters are swept up in a noble quest that will bring chaos into their ordinary lives (think Bilbo Baggins). Along the way, they will be guided by wizards and tested by demons. They will face many trials. Sometimes they will prevail and sometimes they will fail. Eventually, they will learn the secrets of this new world and return home with the knowledge to enlighten others. For communicators, our AI journey begins with a sense of curiosity that sparks the exploration of basic programs like ChatGPT. Our experimentation will lead to the invention of innovative products and the adoption of unfamiliar processes. As we transition to new tools, we will face tricky business issues around accuracy, security and transparency. We will also encounter thorny ethical and legal concerns related to ownership, privacy and protection. These will be formidable obstacles, but they won’t deter us from reaching our destination. In the end, the experiences we gain on our quest will profoundly impact our efficiency, our creativity and our effectiveness back home. This year’s Relevance Report is not designed to be a map for this journey. Right now, no one can accurately predict which direction our future will take. Think of this collection of essays as an invitation — welcoming you to enter uncharted territory, explore its uncertainties and, hopefully, emerge as a hero.















REINVENTING COMMUNICATIONS WITH AI
CREATORS, COLLABORATORS AND COMMUNICATORS IN THE AGE OF AI
CREATIVITY, COLLABORATION AND OUR NEW PARTNER: GENERATIVE AI
AI WILL NAVIGATE A GRAY FIELD, ANALYZING RISK FOR CRISIS COMMUNICATIONS
A COMMUNICATIONS REFORMATION MOMENT
AI ACROSS THE GENERATIONS: TIME FOR ACUMEN AND ACTIONS
FROM SEO TO AIO: ARTIFICIAL INTELLIGENCE AS AUDIENCE

THE ETHICAL DILEMMAS OF AI
AI PLAYBOOK: UNLOCKING ITS POTENTIAL FOR SOCIETY
AI REQUIRES A SPIDER-MAN PRINCIPLE TO BALANCE POWER AND RESPONSIBILITY
UPLEVELLING
IT’S NOT AI VS. HUMANS, IT’S AI WITH HUMANS
PERSUADED TO BE BIASED: THE UNDER-SKIN OF AI’S SPIN
NAVIGATING THE ETHICAL MINEFIELD OF AI IN COMMUNICATION
AI’S EFFECT ON WRITERS AND ENTERTAINMENT
TO AI OR NOT TO AI?: IMPORTANT QUESTIONS TO ASK BEFORE DEPLOYMENT
GENERATIVE AI'S IMPACT ON STUDENTS OF COLOR AND DIVERSE STUDENTS

ARTIFICIAL INTELLIGENCE GETS BETTER WHEN YOU TURN IT UPSIDE DOWN
WE ALL NEED A PERSONAL AI LEARNING PLAN
AI’S TRANSFORMATIVE IMPACT ON LEADERSHIP
GENERATIVE AI INTRODUCES NEW CONSEQUENCES FOR AN OLD CHALLENGE
TURNING AI-NXIETY INTO CREATIVE ACTION
AI BRIDGES THE GAP BETWEEN DATA AND DIALOGUE FOR CLIMATE JOURNALISM
THE NEXUS OF AI AND CYBERSECURITY RISK IN MARKETING AND COMMUNICATIONS
ANALYSIS: STUDY SHOWS AI MORE CREATIVE THAN WHARTON STUDENTS
AI AND REPUTATION: THE PROMISE OF TRANSFORMATION; THE PERILS OF DISINFORMATION
AI MEETS THE WORLD OF FANDOM

FIRST CONTACT
THE SYMBIOSIS OF AI AND COMMUNICATIONS: AUGMENTING CREATIVITY AND STRATEGY FOR THE FUTURE
BEYOND THE HYPE: UNCOVERING POSSIBILITIES WITH ENTERPRISE AI
AI IS DISRUPTING HEALTHCARE... AND THAT'S A GOOD THING
AI’S POWER AND POTENTIAL PITFALLS IN REVOLUTIONIZING HEALTHCARE COMMUNICATIONS AND MARKETING
SHAPING THE FUTURE OF ESPORTS WITH AI
AI FOR PRODUCTIVITY AND CREATIVITY
SYNTHETIC AI VIDEO AND THE MAGIC OF DIGITAL TWINS FOR BUSINESS STORYTELLING
BUILDING AI BRANDS IN A HUMAN WAY
GENERATIVE AI: FROM EXPERIMENTATION TO INTEGRATION
AI MEETS PR: THE EMERGENCE OF THE COMMUNICATIONS ENGINEER
AI AND WHAT IT MEANS FOR HUMAN CREATIVITY



REINVENTING COMMUNICATIONS WITH AI BY FRANK X. SHAW IT WAS 1992. I was working at a small advertising and PR firm in Portland, Oregon, focused on B2B and crisis communications. I’d fallen in love with technology early and was smitten with the pre-browser internet — with Usenet, CompuServe and Prodigy. When I got my hands on TCP/IP software for my PC and grabbed a beta copy of the Mosaic browser, my mind was blown. I knew this was going to change the way companies and individuals communicated. I tried to convince wood-products clients to purchase domains. I talked to The Oregonian about the opportunity of online journalism. I was not successful. ▶ It was 2002. Blogs were starting to take off. I was in high-tech communications then, and suddenly could see a new nexus of influence emerging, one related to but independent from the media. News was being made on blogs; crises started with blog posts. I knew then that the work I was doing needed to change, and rapidly — and that while we used to think we had most of a day to come up with a plan for an issue or challenge, we’d need to move a ton faster and increase our understanding of influence if we were going to be successful. ▶ It was February 2008. I was standing in the back of a conference presentation in San Francisco and decided that today was the day I’d join Twitter. I followed a few people, a few more, and then boom — the world opened to me. There was more news, more perspective, more connection and communication with influencers, reporters, editors. As a senior communications professional, the ability to make news in real time, respond in real time, connect in real time — it was incredible. This explosion of social media and connection with a news focus reinvented the way we worked, and how we thought about speed — we no longer had days, we had hours to make decisions. I miss that Twitter.

Frank X. Shaw is the chief communications officer at Microsoft. He is responsible for defining and managing communications strategies worldwide, company-wide storytelling, product PR, media and analyst relations, executive communications, employee communications, global agency management, and military affairs.



▶ It was early 2021. OpenAI had just released GPT-3. A smart engineer at Microsoft developed a hack that gave me access to the API using a copy-and-paste function. I was able to take massive text documents and summarize them, issue requests to create simple letters, and translate into different languages. When one of my direct reports retired, I combined the retirement email with the LinkedIn profile and had GPT-3 create an epic poem, which I read at the retirement party. There were challenges, but the utility was off the charts, even in hack mode. I started thinking. ▶ It was September 2022. A small group of Microsoft executives met with OpenAI engineers in Redmond, Washington, where we saw a demo of GPT-4. It passed exams, translated documents, engaged in philosophical discussion, wrote papers, answered medical questions. It was the most mind-blowing demo I’d seen in my entire career. I drove home consumed with the thought of how my job would change in the year ahead. ▶ It is December 2023. AI is reinventing how we work as communicators. We’re getting access to an incredible new set of tools to help us create, brainstorm, connect, and improve our process, speed and capability. Here at Microsoft, we’re focused on putting a copilot or two in the hands of every person on the planet, to help them with whatever they want to get done. ▶ For communicators, the best way to get there is to focus on three things: ▪ Experiment and pioneer: Be courageous. Play with the tools. Be a pioneer. ▪ Operate with a sense of urgency: Own your destiny. Reinvent ahead of demand. ▪ Drive culture change: Champion this next phase of communications with AI. ▶ The thread is clear — technology has always had a massive impact on the art and science of communications, from the web to blogs to social and now to AI. We thrive best when we focus on the opportunity ahead and embrace what we do best. Too often, I see people thinking about AI as a black box, a single entity that is going to “do” something, either good or bad. Instead, it is a set of tools — new tools — ready for us to use. To use them well, we must reject pessimistic and Panglossian thinking equally. ■




CREATORS, COLLABORATORS AND COMMUNICATORS IN THE AGE OF AI BY WILLOW BAY “We need talent who can reach across disciplinary silos, and think expansively, not limited by current expertise.” “We need graduates trained and fluent in digital media technologies. But, we also need people with a hunger to acquire the skills to evolve and adapt as quickly as these industries.” “The interdisciplinary nature of today's digital media requires cross-functional teams and cross-cultural competency.” “We need more bridge builders and storytellers.” WHEN WE SPEAK with our industry partners and alumni — producers, creators, sellers of content, commerce and advertising, agents brokering global production and distribution deals, strategic communications pros or

brand strategists — about their future talent needs, this is just a small sampling of what they have shared with us. ▶ In short, the industry is really telling us: We need creators, we need collaborators, and we need communicators. ▶ But, what does it mean to be a creator, a collaborator and a communicator in the age of artificial intelligence? How do we train students to use, challenge and critically trace the power of AI storytelling with skills and rigor that help them shape and lead media futures? And how do we nurture and support our talent pipeline as the contours of media and communication change before our eyes? ▶ The answer: continuing to do what we do so well at USC Annenberg, developing nimble and responsive academic programs and scholarship. ▶ We launched our first course this fall focusing exclusively on the subject “Artificial Intelligence and the Future of Creative Work,”

Willow Bay is the first female dean of the USC Annenberg School for Communication and Journalism. A broadcast journalist, media pioneer and digital communication leader, Bay oversees more than 200 faculty and staff, and more than 2,500 undergraduate and graduate students across the fields of communication, journalism, public relations and public diplomacy. Since her installation in 2017, Bay has increased Annenberg’s public engagement around critical issues such as the role of communication technology in advancing equity and access, digital media literacy, gender equity in media and communication, and sports and social change.


led by professor Gabriel Kahn. Eager to understand this phenomenon that quite literally everyone is talking about, students are diving into real-time case studies that examine how AI tools are opening up new frontiers in newsrooms, writers’ rooms and boardrooms. They are examining the decision-making and calculus that goes into how companies integrate AI and what they expect it will achieve. ▶ Our students, native to the technological wave of social media and mobile devices, are grappling with reservations about this emerging technology that is more powerful than any they have seen before. A generation that cares deeply about social justice and equity is now examining very real and immediate questions, including changing job descriptions, the nature of creative production, and worker displacement as this technology spreads. Perhaps most critically, they are also exploring these questions by interacting directly with the professionals who are facing these same problems. Roughly half the course involves guest lectures from executives charged with taking the next step in the AI revolution. ▶ Meanwhile, this fall, the faculties of USC Annenberg and the USC School of Cinematic Arts have joined forces under the auspices of USC’s new Center for Generative AI and Society to consider GenAI’s potential to impact the nature of creativity, originality, intellectual property, truth, and the role and definition of the artist, journalist and communicator.

▶ Center co-directors Mike Ananny and Holly Willis are convening USC’s interdisciplinary scholars to understand the social, cultural and political significance of global lives shaped by GenAI stories. Together, they are identifying the challenges and the opportunities as new storytelling crafts, communities and ethics begin to emerge alongside a new suite of easy-to-use generative AI tools. Their ultimate goal is to help practitioners and media industries navigate and shape the new terrain of GenAI storytelling while preparing our students to do the same. ▶ The first wave of change, journalist and entrepreneur Jim VandeHei at Axios notes, is headed for “... any job that involves writing or coding; creativity; information synthesis; or sifting through large sets of data or info.” Recent studies suggest that massive disruption is coming for highly skilled jobs and highly educated workers rather than lower-wage workers in highly automated fields. Rather than be a bystander, VandeHei urges us to be curious and critical. ▶ In other words, those who will succeed in the age of AI will know how to work alongside AI and will know how to bring their human dimension as creators, collaborators and communicators to bear. ▶ We’re at the front lines of the revolution, helping our students reach across this technological silo and realize AI’s potential as a positive force. ■



CREATIVITY, COLLABORATION AND OUR NEW PARTNER: GENERATIVE AI BY MELISSA WAGGENER ZORKIN LIKE A CANNONBALL splash, generative AI’s bold public arrival in 2023 has made one thing very clear for the communications industry: The future, once again, is right now. ▶ The smartest PR professionals, communications leaders and brands are already jumping in, immersing themselves in artificial intelligence technology, which — like other innovations that have transformed our lives — is on its way to touching every part of our world. And those who hesitate risk being left behind by clients, consumers and employees. ▶ That’s what my colleagues at WE Communications and I have learned through two key surveys — one in collaboration with the USC Annenberg Center for Public Relations. The surveys were conducted after the release in late 2022 of OpenAI’s ChatGPT and a rush of other similar applications that can produce text and images within seconds of a user’s prompt. AI is on everyone’s mind, from communications leaders figuring out

how to use it responsibly to a general public anxious about how it will affect their jobs. Why is AI important for communicators? It’s important to remember that artificial intelligence has already been working alongside us for some time. From helping us plot driving routes and unlocking our phones through facial recognition to offering spelling and grammar help, AI has been a smart assistant in our daily lives for years. ▶ With generative AI, though, we’re in a new, exciting stage. Our future with AI depends on how we, as communicators, talk about AI and embrace it as a partner in our own industry. In fact, when WE surveyed 15,000 people across seven global markets for our Brands in Motion study, “It’s Personal: The New Rules of Corporate Reputation,” 64% said the responsible use of technology — including AI and customer data — will become a more crucial factor in corporate reputation

Melissa Waggener Zorkin is global CEO and founder of WE Communications, one of the largest independent communications and PR agencies in the world. She is an inductee of the PRWeek Hall of Fame, the PRWeek Hall of Femme, and the ICCO Hall of Fame and is a member of the USC Annenberg Center for PR Board of Advisors.



over the next two to three years. Our approach will play a defining role in how AI fits into our lives and influences society. ▶ I like to stay focused on how technology fits into the human experience. Think back to that sense of wonder and creativity we had as children. Paradoxically, as we grow up and gain life and career experience, we box in our creativity and stop taking big chances. By the time we’re adults, most of us fall into a creative status quo — often focusing on getting it done over being creative. And the data backs that up. ▶ In our survey of communications leaders with USC Annenberg, “Fascinated and Frightened: How Are Communications Professionals Viewing the AI Opportunity Ahead?,” 88% said AI will have a positive impact on the speed and efficiency of certain work tasks, and 72% said it will help reduce workloads. Much further down the list was creativity, with only 55% of comms professionals saying AI will positively impact PR and comms creativity. ▶ This reveals the opportunity I don't want us to miss. If we spend all our time thinking about how AI can make us more efficient, we will miss out on this once-in-a-generation chance to unlock new forms of creativity with AI at our side. So how do we usher in AI as a collaborative partner? I see generative AI reconnecting us with the freedom to play and let our imaginations run wild. In our study with USC, we advised communications professionals to “bring AI into your next brainstorm.”

▶ Ask it to storyboard a new project. Prompt it to write a company's mission statement in the style of different movie genres. Have it rewrite your latest blog post as a Taylor Swift song. See how it opens your own mind. The goal isn’t to generate perfect, ready-to-use results but to unleash and spark our own creative thinking — and then apply that thinking to new ideas and business strategies. ▶ Our research found that while many of the nearly 400 communications leaders in the U.S. we surveyed had played around with generative AI, much of it was one-off experimentation. And although 80% of the respondents agreed that AI will be “extremely or very important” to the future of our industry, only 16% said they felt extremely knowledgeable about the applications of AI in our work. ▶ The technology around AI is advancing rapidly, and we can only begin to predict the thousands of applications it will have in our lives and our businesses. But as communicators, we can do something more powerful than make predictions — we can take action. We can start now developing ethical and responsible AI policies that keep humans at the center of our partnership with technology. We can encourage our employees and colleagues to keep experimenting. And we can open our minds to all the possibilities AI offers. ▶ By embracing generative AI as a new partner, we can reignite a new imaginative spirit that, together with our human touch, will define the future of communications. ■



AI WILL NAVIGATE A GRAY FIELD, ANALYZING RISK FOR CRISIS COMMUNICATIONS BY BURGHARDT TENDERICH, PhD & MICHAEL KITTILSON WHILE ALGORITHMS ARE BUSY predicting the next bull market and diagnosing your heart in real time, crisis communication still fumbles in the dark — reactive, not proactive and increasingly outdated. Why be content to remain bystanders in our own narrative, letting crisis define us, rather than seizing innovative technologies that could redefine the future of strategic messaging? ▶ Picture a stockbroker on Wall Street, epitomizing urgency in action. For them, algorithms sift through chaotic market ebbs and flows to precisely dictate the next best move. Then there’s Cardisio — an advanced machine learning algorithm distilling over 3.2 million data points in mere minutes, rendering

a nuanced heart risk assessment that healthcare practitioners can use to effectively help their patients. Yet, the sphere of crisis communications, enmeshed in public sentiment, real-time news and corporate imperatives, has yet to harness the full capabilities of this groundbreaking technology. ▶ Call it inertia or lack of imagination, the result is a field entrenched in reactivity. Teams scramble post-crisis, crafting messages, framing responses, and mitigating damage always after the storm, never before. We need an algorithm that doesn’t just sense tremors but understands the seismic patterns well enough to give us a fighting chance to stabilize the ground beneath us before it splits open.

Burghardt Tenderich, PhD, is a professor of practice at USC Annenberg in Los Angeles, where he teaches and researches about strategic communication, emerging media, technologies and brand purpose. Tenderich is associate director of the USC Center for Public Relations.

Michael Kittilson is a first-year graduate student at USC Annenberg studying public relations and advertising who aspires to help solve the world’s toughest messaging and communication problems. His background spans 5 years in various roles that intersect strategic communications, tech, and policy, including work with the U.S. Department of State and national media organizations.



▶ The real potential lies in predictive analytics, a facet of machine learning that can forecast potential crises by analyzing previous trends, media cycles and real-time data. By doing so, predictive analytics can offer an accurate risk assessment of what logically might occur and devise a proactive strategy, arming communication teams with the data necessary to formulate countermeasures and even prevent a crisis before it happens. ▶ What if, before the ink dries on a salacious headline, an algorithm could have already alerted you, drafted an optimal response, and even sketched out a media strategy? The proposition is no longer science fiction but an ethical and operational imperative. No longer confined to the theoretical, the future stares us in the face, questioning our readiness and challenging our adaptability as PR professionals. ▶ The blueprint for this new reality will start at the intersection of crisis management professionals in dialogue with data scientists and machine learning experts, collaborating to fuel an evolving algorithm with a rich diet of past crises, real-time human variables and human intuition. This algorithm anticipates, evolves and advises — making crisis communication equal parts art and science. ▶ Consider the Cardisio example — a health-tech startup out of Europe starting to bridge the diagnostic gap in cardiovascular health. One moment, the human heart can function perfectly; the next, a long-built-up condition, previously undiagnosed, strikes, severely damaging the organ. At the earliest stages, even if patients display no symptoms, these changes in electrical current are measurable, and the condition can be preemptively treated before it strikes. To screen for early warning signs, Cardisio’s AI algorithm calculates 290 parameters per heartbeat, evaluating metrics like length, width, distance, height and even angles. Utilizing vectorcardiography — the spatial representation of electromotive forces generated during cardiac activity, measured in three planes — the AI can show your heart health in three dimensions, providing physicians with nuanced risk assessments. ▶ Likewise, imagine a machine learning system dedicated to crisis communications: a robust algorithm trained on decades of public relations case studies, media cycles and real-time public sentiment analysis. The algorithm wouldn’t merely react but would anticipate emerging crises by sifting through social media chatter, netnography data, news reports and even internal corporate data to identify early tremors before a potential crisis, then rank the most dangerous threats on a risk-factor scale, providing PR professionals with actionable insights. ▶ Now, imagine the landscape with this decision-making tool in hand: reputations preserved, public trust maintained, and a monumental shift from being eternally reactive to strategically proactive. We live in an era where markets can crash with a single tweet and public opinion can swing like a pendulum at the smallest thread of information. The perils have become existential, and the ethical implications profound. ▶ While anticipatory crisis management promises to revolutionize public relations, it also brings ethical dilemmas to the fore: we work in a gray field. The power to preemptively shape narratives comes with the responsibility to be discerning, avoiding manipulative tactics that veer into propaganda. ▶ So here we are, equipped with technology that will usher in a new era, personalizing our own narratives with more freedom to make our own choices, but bound by ethical implications as complex as they are compelling. The world will never be the same. But we cannot be spectators to the risks or opportunities that lie ahead. It’s our time to embrace this change, navigate these ethical challenges and become strategic visionaries. It’s no longer just a choice between reactivity and proactivity; it’s a choice about the kind of professionals — and even people — we want to be. ■
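To make the essay's "risk-factor scale" idea concrete, here is a deliberately toy sketch of how such a score might be computed. Everything in it — the three signals, their weights, and the triage thresholds — is an illustrative assumption, not a description of any real system the authors propose:

```python
# Hypothetical sketch of a crisis risk score. Signal names, weights and
# thresholds are invented for illustration only.

def crisis_risk_score(mention_spike, negative_sentiment, past_crisis_similarity,
                      weights=(0.4, 0.4, 0.2)):
    """Combine three normalized signals (each in [0, 1]) into a 0-100 score.

    mention_spike          -- how far media/social volume exceeds its baseline
    negative_sentiment     -- share of recent coverage scored as negative
    past_crisis_similarity -- resemblance to patterns that preceded past crises
    """
    signals = (mention_spike, negative_sentiment, past_crisis_similarity)
    if not all(0.0 <= s <= 1.0 for s in signals):
        raise ValueError("signals must be normalized to [0, 1]")
    # Weighted sum of the signals, scaled onto a 0-100 risk scale.
    return round(100 * sum(w * s for w, s in zip(weights, signals)), 1)

def triage(score, watch=40, act=70):
    """Map a score onto the kind of risk-factor scale the essay imagines."""
    if score >= act:
        return "activate crisis team"
    if score >= watch:
        return "monitor closely"
    return "routine monitoring"
```

A real system would derive both signals and weights from models trained on the case studies, media cycles and sentiment data the essay describes; the point of the sketch is only that "anticipate, then rank by risk" is a small, buildable loop rather than science fiction.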



A COMMUNICATIONS REFORMATION MOMENT BY CHRIS PERRY & CHRIS DERI Our industry must reflect the realities of the market or risk obsolescence. AROUND 1440, Johannes Gutenberg invented the movable-type printing press in Mainz, Germany. ▶ The printing press created the first era of mass communication, primarily devoted to printing religious texts, but it also led to the rapid democratization of literacy across Europe. By the early 16th century, with literacy on the rise, a schism within the Catholic Church led to the Protestant Reformation in 1517. This movement was empowered by disseminating pamphlets and documents questioning the status quo of religion, politics, government and other

institutions. This Reformation signaled the end of the Middle Ages and, ultimately, the dawning of the Renaissance, an era of enlightenment and invention. ▶ Communication innovations have occurred repeatedly since the advent of the printing press, accelerating throughout the 20th century: the newspaper, telegraph, telephone, recorded media, radio, TV, Web 1.0, the smartphone, Web 2.0 and now Web 3.0. ▶ Twenty years ago, we did not have Facebook, Twitter, YouTube, social media managers, or TikTok stars. We made incremental changes to our operations to manage new channels and platforms, which were the beneficiaries of these changes as

Chris Deri is chief corporate affairs officer and president, C-Suite Advisory, at Weber Shandwick. He leads its global corporate affairs group, comprised of hundreds of professionals across corporate positioning, financial communications, and crisis and issues management. Deri is a member of the USC Center for PR board of advisers.

Chris Perry is chief digital officer and chairman, Futures, at Weber Shandwick. He helps clients decode media change as well as develop new learning platforms and programs. He also translates trends into tangible commercial opportunities, drawing on more than 20 years of digital and media experience.



they assembled enormous market share and power. Now AI is changing everything — again. ▶ Today's media landscape has been upended by significant developments, including fragmented social platforms, influencers, misinformation, and now generative AI. But while the nature of communications has radically changed over recent years, the structure of communications organizations has not.

A New Agenda As early as 2018, Weber Shandwick began studying what we felt was a looming crisis: the fundamental dismantling of our capacity to make sense of events in society. As our subsequent research concluded, all is not normal. And that abnormality is not solely due to the global effects of the coronavirus, culture wars driving societal polarization and the decline of trusted institutions. This loss of coherence is also the result of technology accelerating faster than humans’ ability to keep up. ▶ As generative AI is fully deployed across media, education, health, and political ecosystems, we must set a new agenda to

stay in sync with an empowered, autonomous, AI-informed public. This new agenda should incorporate the latest concepts in data science, open-source intelligence, technology adoption, content distribution, and organizational readiness. ▶ Our industry remains built for a reality that no longer exists. Communicators still operate in silos, while communications in “the wild” have never been more networked. We work hierarchically, while platforms proliferate into forums, apps, communities, games, and intelligent networks. We address our stakeholders as distinct groups, while they operate as individuals who defy categorization. ▶ The disconnect is evident when looking at standard PR operating procedures and supporting technology. Media monitoring, press releases and social media management tools that may have been sufficient in the past no longer match the task at hand or achieve the desired results. ▶ If companies want to deliver effective messaging to build their reputations, they must adopt a more fluid organizational structure capable of responding to the competing demands of their stakeholders in real time. If companies want to defend against attacks on their narrative and avoid landing in the crossfire of culture wars, they need to invest in more proactive, predictive analytics to prepare them for the future. ▶ Attempting to overcome these challenges by training existing staff to use new tools to do their old jobs won’t be productive. Real change requires a radical rethink of organizational structure, necessary data, community tech, conversational computing, and immersive experiences. A complete reconceptualization of the communications function is the only strategy for real success in today's shifting environment. ▶ Doing so will be a significant strategic move for those who invest in it, a competitive weapon for those who operate within it, and an endless source of value for those who tap into it. ■


AI ACROSS THE GENERATIONS: TIME FOR ACUMEN AND ACTIONS BY BARBY K. SIEGEL EVERYONE HAS THEIR OPINION about generative artificial intelligence. And those opinions range from doom and gloom to nirvana, and everything in between. And here comes another. ▶ It is a new chapter for communications that cannot be overlooked or underestimated. All signs point to the fact that AI will have an impact on how we work, hopefully leaning more toward the positive outcomes we will deliver. Just as the internet changed many aspects of how we go about our business, making our craft more precise, more measurable and more indispensable, so too will AI leave its indelible mark. ▶ I am likely not alone in having a newsfeed filled with news stories about AI, plenty of them clamoring to offer up the next shiny AI object. ▶ For years now — predating the rise of generative AI — the conversation has focused on what AI will replace, particularly in our industry: the mundane, everyday

tasks delegated to the most junior members of our team. Some are suggesting that AI will wipe out a level or two of staff, making them obsolete and unnecessary. Deleting a row of the Scope of Work document. I sure hope not. ▶ Rather, shouldn’t this be the opportunity to train and cultivate the next generation of communicators in ways that will further broaden, deepen and democratize strategic communications — critical thinkers across the generations at the table, shoulder to shoulder with colleagues and clients? ▶ A lot has been written about the artificial side of things that will replace humans. Loss of jobs to machines that can do it faster and cheaper. We cannot overlook this, but I believe we can shift the conversation to how business can be re-imagined for less loss and more gain. In our busy lives, the ‘to-do’ list often gets the best of us. Time to think — consider a new path, explore the unknown, challenge the status quo — falls to the

Barby Siegel is chief executive officer of Zeno Group, overseeing a global organization of 750 staffers with operations across North America, Europe and Asia Pacific. Under Siegel’s leadership for the last 12 years, Zeno has experienced annual double-digit growth while staying true to the firm’s core values of being inclusive, ambitious, kind, entrepreneurial, collaborative and fearless. She is a member of the USC Center for PR board of advisers.



bottom of the list way more often than any of us would like. ▶ Yet the very thing we need more of, now more than ever, is critical thinking and business acumen. And not just from the most experienced among us. Given time and space, our next generation of communicators should be right there with us collaborating and challenging, perhaps earlier than they currently are. ▶ Imagine having the time to train those just starting out to be critical business thinkers. To not expect they will pick up on it just by being in our presence, but to actively teach the art of asking questions, how to become subject matter experts, how to articulate and advance an outside point-of-view. ▶ To be clear, those entry-level tasks are important in understanding and appreciating the building blocks of communications. We can expose our teams to that, and pivot to

put them on a path to more intellectually interesting work. This is not about cutting out assistant account executives and account executives (outdated titles, but a topic for another day) but committing to nurture them early on into strategic thinkers, planners, storytellers and more. ▶ Therefore, could the rise of AI be the dawn of a new era that makes room for more of what we want to do to nourish our minds and spirits? If AI is the great disruption of this era, so too can we disrupt our daily lives with what we are gaining — time, space, and the opportunity to focus on what matters to each of us. ▶ In the meantime, for our industry, let’s not reduce this to eliminating jobs, or even worse, lowering our fees (!), but rather advancing all that we communicators can address with our acumen and our actions. ■



FROM SEO TO AIO: ARTIFICIAL INTELLIGENCE AS AUDIENCE BY ROBERT KOZINETS, PhD & ULRIKE GRETZEL, PhD Dancing the New Two-Step In the golden age of radio and newspapers, the Two-Step Flow Model held sway. Developed by Paul Lazarsfeld and colleagues in the 1940s to help explain voter decision-making, and later elaborated with Elihu Katz, this classic communication model proposed that media messages flowed in two distinct stages. First, messages tended to flow to “opinion leaders,” such as political pundits, influential journalists, or community leaders. The message was then transmitted from these influential figures to the wider populace, often being reinterpreted in the process. Consider the popular mayor of a small town, who listens to the news and then sets the tone

of the conversations happening that evening at the local diner. ▶ With the emergence of the internet, search engines such as Yahoo, Bing, and Google took on the role of information curators. More recently, social media platforms such as Twitter and Instagram provided a vast public space where content creators and influencers included not just local dignitaries, but also beauty vloggers, tech aficionados, and TikTok video artists. These digital trendsetters, armed with hashtags and followers, filtered and communicated information on a worldwide scale, replicating the two-step flow.

Robert Kozinets, PhD, & Ulrike Gretzel, PhD, are the co-authors of Influencers and Creators: Business, Culture and Practice, published in May 2023. Kozinets is a professor of journalism at USC Annenberg whose methods and theories are widely used by researchers and organizations around the world. The founder of netnography and a social media influencer and brand research pioneer, Kozinets develops theory and method to apply to marketing, communication, and other fields that seek a contextualized understanding of digital culture. Gretzel is the senior research fellow at the USC Annenberg Center for Public Relations. She is currently the Director of Research at Netnografica, an innovative market research company that provides actionable insights by extracting meaning from online conversations.



▶ Now, the dance has a new participant: artificial intelligence. Today's influencers aren't merely human; they're also algorithms and machine learning systems, like recommendation engines that curate our online experiences. The age-old function of the two-step flow remains, but it has evolved for the AI age. For PR professionals, there is a new game to learn. Public relations and marketing are now performing for both human and algorithmic audiences. The new challenge is how to navigate the nuances of these powerful and intelligent, but often opaque and sometimes unpredictable, tech-driven 'opinion leaders.'

PUBLIC RELATIONS AND MARKETING ARE NOW PERFORMING FOR BOTH HUMAN AND ALGORITHMIC AUDIENCES. AI: The New Audience Public relations practitioners have traditionally considered their “audience” to be the collective of individual persons with whom a message resonates. An audience is composed of recipients, reactors, and responders. Historically, these audiences were human, but the contemporary digital landscape, with its search engines and automatically compiled social media feeds, is already compelling us to stretch that definition. ▶ And that was before artificial intelligence entered the conversation. For the modern PR professional, AI is no longer just a sophisticated tool that can help formulate captivating

messages, provide grammar checking, automate messaging, or analyze data. Today’s AI has evolved into an “audience” in its own right. Consider that algorithms determine which content gains visibility on social media, which news articles rise to the top of search engine results, and even which products get recommended on e-commerce platforms. If PR professionals don't “communicate” effectively with these AI entities, their messages could be lost in the digital void. ▶ As we have been saying and writing for years, crafting a successful campaign now demands not just an understanding of human psychology but also of the intricacies of machine learning. Tailoring your message and content to resonate with AI helps to increase its chances of reaching and impacting the right members of human audiences. Public relations and marketing increasingly must blend science with art, calculations with creativity, in this ongoing demand for algorithmic appeal mixed with human relevance. In this brave new world, PR maestros must become bilingual, speaking a novel patois that combines human desire with computer algorithms, to engage their newest audience: AI. AIO: Artificial Intelligence Optimization AI is already becoming integrated with search engines in a number of ways. For example, Google Search can now answer complex questions in a comprehensive and informative way, even if they are open-ended, challenging, or strange. Machine learning algorithms are used to train search engines to identify patterns and trends in search data. For example, Bing Search recently launched a new AI-powered feature called “Prometheus” which is designed to provide users with more relevant and timely search results. And generative AI applications, such as Bard and ChatGPT, can be used to generate new content, such as text summaries, product descriptions, and code. Importantly, AI is becoming increasingly conversational. For example, Alexa can now be used to search for information on the internet,


and it uses AI to understand the meaning and intent of user queries. ▶ Much like the need to keep up with the ever-shifting algorithms and unique features of search engines, professionals now face mounting demands to stay abreast of the burgeoning field of AI products. From giants like ChatGPT and Bard, to the myriad specialized offerings entering the marketplace, the AI landscape is as diverse as it is dynamic. ▶ In the early days of search engines, search engine optimization (SEO) emerged as the frontier for digital professionals. SEO was not simply about understanding the digital medium, either; true SEO masters could tweak, refine, and sometimes completely reinterpret the message for the spiders, bots, and algorithms in charge of search engine rankings. PR professionals had to learn the steps to this new SEO dance, which involved anticipating algorithmic changes and continually adapting content to align with the criteria set by search engines. ▶ Now, as AI transcends being a mere tool and rapidly becomes an integral part of the communication ecosystem, a parallel shift is occurring. Recognizing this shift means

realizing that AI is not just a passive receptor of content. AI evaluates, sorts, and often determines the visibility of information. It may even change a message's form, interpreting it before passing it on to a human audience, just as the opinion leaders of the past have done. This transition necessitates a new mode of optimization. ▶ Enter AIO: Artificial Intelligence Optimization. Much like its SEO counterpart, AIO isn't just about disseminating messages, but about ensuring that these messages effectively “communicate” with AI systems. ▶ For PR professionals, AIO implies a dual role: ensuring that their narratives resonate with both human and machine audiences. They need to grasp the technical intricacies of AI applications while retaining the emotive essence that appeals to human sensibilities. AIO is not just the future; it's the present frontier. As algorithms and large language models grow ever more sophisticated and AI's role in content curation and recommendation becomes increasingly important to the conduct of daily life and business, we predict that mastering AIO will become an essential skill for all PR professionals. ■




BY KIRK STEWART CO-AUTHORED BY CHATGPT ETHICAL ISSUES related to artificial intelligence are a complex and evolving field of concern. As AI technology continues to advance, it raises various ethical dilemmas and challenges. Here are some of the key ethical issues associated with AI: ▪ Bias and Fairness: AI systems can inherit and even amplify biases present in their training data. This can result in unfair or discriminatory outcomes, particularly in hiring, lending, and law enforcement applications. Addressing bias and ensuring fairness in AI algorithms is a critical ethical concern. ▪ Privacy: AI systems often require access to large amounts of data, including sensitive personal information. The ethical challenge lies in collecting, using, and protecting this data to prevent privacy violations. ▪ Transparency and Accountability: Many AI algorithms, particularly deep learning models, are often considered “black boxes”

because they are difficult to understand or interpret. Ensuring transparency and accountability in AI decision-making is crucial for user trust and ethical use of AI. ▪ Autonomy and Control: As AI systems become more autonomous, concerns about the potential loss of human control exist. This is especially relevant in applications like autonomous vehicles and military drones, where AI systems make critical decisions. ▪ Job Displacement: Automation through AI can lead to job displacement and economic inequality. Ensuring a just transition for workers and addressing the societal impact of automation is an ethical issue. ▪ Security and Misuse: AI can be used for malicious purposes, such as cyberattacks, deepfake creation, and surveillance. Ensuring the security of AI systems and preventing their misuse is an ongoing challenge.

Kirk Stewart is the CEO of KTStewart, which offers clients a full range of communications services including corporate reputation programs, crisis and issues management, corporate citizenship, change management and content creation. Kirk has more than 40 years of experience in both corporate and agency public relations, having served as global chief communications officer at Nike, chairman and CEO of Manning, Selvage & Lee, and executive director at APCO Worldwide. He is a member of the USC Center for PR board of advisers.


▪ Accountability and Liability: Determining who is responsible when an AI system makes a mistake or causes harm can be difficult. Establishing clear lines of accountability and liability is essential for addressing AI-related issues. ▪ Ethical AI in Healthcare: The use of AI in healthcare, such as diagnostic tools and treatment recommendations, raises ethical concerns related to patient privacy, data security, and the potential for AI to replace human expertise. ▪ AI in Criminal Justice: The use of AI for predictive policing, risk assessment, and sentencing decisions can perpetuate biases and raise questions about due process and fairness. ▪ Environmental Impact: The computational resources required to train and run AI models can have a significant environmental impact. Ethical considerations include minimizing AI’s carbon footprint and promoting sustainable AI development. ▪ AI in Warfare: The development and use of autonomous weapons raise ethical concerns about the potential for AI to make life-and-death decisions in armed conflicts. ▪ Bias in Content Recommendation: AI-driven content recommendation systems can reinforce existing biases and filter bubbles, influencing people’s views and opinions. ▪ AI in Education: The use of AI in education, such as automated grading and personalized learning, raises concerns about data privacy, the quality of education, and the role of human educators. ▶ Addressing these ethical issues requires a multidisciplinary approach involving technologists, ethicists, policymakers, and society at large. It involves developing ethical guidelines, regulations, and best practices to ensure that AI technologies are developed and deployed in ways that benefit humanity while minimizing harm and ensuring fairness and accountability.

AS AI SYSTEMS BECOME MORE AUTONOMOUS, CONCERNS ABOUT THE POTENTIAL LOSS OF HUMAN CONTROL EXIST. ▶ Pretty well-written essay. Right? Well, in the interest of full disclosure, I didn’t write one word of it. ChatGPT did. This raises several ethical and legal questions: Is this considered plagiarism? Do I or my firm own the content? Did I infringe on the copyright of pre-existing written work? What if I included a piece of AI-generated artwork, a link to an accompanying video, or background music? Who owns that content, and did it infringe on another creator’s work? Is the information accurate and unbiased? Should I have disclosed upfront that this was AI-generated content? ▶ These are perplexing questions, with the answers being debated in universities, corporations, and courts of law. As an adjunct faculty member at USC Annenberg, I believe I have a responsibility to help students think, write, and sharpen their creative skills. I worry about how much of that is lost with the increasing reliance on generative AI, as well as about its ethical and legal use. Only time will tell. ▶ Like ChatGPT, I encourage regulators, educators, developers, and users to continue to create and refine some guardrails around the use of this powerful tool to ensure, in the words of Jason Furman, a professor of the practice of economic policy at the Kennedy School, “...that technology serves human purposes rather than undermines a decent civic life.” ■



BY TERESA HUTSON OVER THE LAST YEAR we’ve seen breakthroughs in AI innovation driven by developments in systems that can generate new information — including text, images and video, code, and audio content — based on simple prompts or existing data. Natural language interfaces can now connect to a reasoning engine that will usher in a new category of computing. These generative AI capabilities have the potential to drive breakthroughs in areas like healthcare, scientific research, and sustainability. The growth of these conversational interfaces will help unlock the benefits of AI across society, allowing anyone to use cutting-edge AI regardless of their background or level of technical skill. ▶ While the potential is significant, there is also concern that it may undermine information integrity, exacerbate bias and inequality, and harm jobs, education, and the environment. That is why we must collectively

commit to meeting the opportunity responsibly and work to bring the benefits of AI to all in society, including protecting and advancing fundamental rights. ▶ For instance, we’ve seen an increase in the prevalence of misleading information online, which has raised questions about the development of technical standards to certify the source and history of media content. Innovative solutions like Truepic’s Project Providence leverage technology to maintain the provenance, or origin, of images from capture through storage to display. This enables users to verify images as authentic and transparently display their time, date, location, and source to viewers. With this technology, modifications to the images can be detected and the authentic source of images can be proven. In Ukraine, Project Providence is currently being used to film and describe war-related damage to cultural heritage sites. Having transparent and authentic documentation

Teresa Hutson is corporate vice president of Technology for Fundamental Rights at Microsoft. Her team works to support people’s fundamental rights and address the challenges created by technology by promoting responsible business practices, using data and technology to expand accessibility and meaningful connectivity, and advancing fair and inclusive societies.



of the damage will be critical to pursuing reparations and restoring the damage in the future. ▶ As a starting point, we must think about the fundamental rights impacts of how we build and deploy AI technology. This requires conducting human rights impact assessments and advancing responsible practices in our technology supply chains — which no longer run just from raw materials to finished goods, but now span everything from innovation in product design to the end of the product lifecycle. ▶ As organizations continue to focus on traditional supply chains, they also need to consider the “digital supply chain” which

includes the people involved in the evaluation and training of AI models as well as the data — how it is captured, organized, and consumed. To mitigate potential harm to people, there must be accountability in tech development and deployment, so that AI can be leveraged broadly to help people in their day-to-day lives and make progress on our greatest societal challenges. ▶ We also must be critically aware of how people access and connect to AI technology. As with the internet and smartphones, AI will reshape personal and professional life, helping people advance critical thinking, stimulate creative expression, and be more productive. AI REQUIRES CONNECTIVITY, YET ROUGHLY ONE-THIRD OF THE WORLD’S POPULATION IS WITHOUT INTERNET ACCESS.

But AI requires connectivity, and approximately 2.7 billion people, roughly one-third of the world's population, are without internet access. As advancements in AI continue, these communities are at risk of being left behind. Making sure all people are included in the future created by AI requires us to think of internet connectivity as a prerequisite for inclusive access to AI. ▶ Once connected, there is an enormous opportunity for AI innovation to help close gaps and bridge divides. To deploy AI responsibly, we need to build technology that is inclusive and accessible by design, work to incorporate disability data and protect against ableism, and push for paradigm shifts that empower communities. For instance, a digital virtual assistant enabled with AI capabilities can generate the same level of context and understanding as a human volunteer. By simply submitting a question and a picture of their product, a blind or low-vision customer can access effective tech support in an accessible way. This is accessibility by design, and if we continue to do this right, AI could help close the disability divide and empower people with disabilities. ▶ As we think about communicating in this AI moment, it’s important to find the right balance between embracing nuance and keeping it simple. Embracing nuance requires being transparent about what we’re learning and sharing successes while also innovating responsibly to avoid potential harm and being accountable when we get it wrong. Keeping it simple means being clear and concise about the capabilities of these tools and honest about the limitations. ▶ We all have a role to play to help anticipate and guard against potential harm. We need government, academia, civil society, and industry to come together to make sure that, as AI becomes a bigger part of our lives, we put in place norms and standards to guide responsible use.
By working together to address the challenges, we can embrace the opportunities and bring the benefits of AI to all in society. ■


AI REQUIRES A SPIDER-MAN PRINCIPLE TO BALANCE POWER AND RESPONSIBILITY BY GERRY TSCHOPP AS I THINK OF AI TODAY, I think of the wise words of Uncle Ben in Spider-Man: “With great power comes great responsibility.” ▶ Artificial Intelligence (AI) has revolutionized the way businesses operate, and as communication professionals we have unique opportunities to leverage the potential of AI to create compelling content and successful brand strategies. However, along with these powerful AI tools come ethical considerations, security risks, and data privacy concerns that we must navigate judiciously. ▶ From data-driven messaging to reshaping entire brand narratives, we’re standing at the forefront of a new technological revolution. ▶ The very power that enables us to innovate also mandates us to act responsibly, because our actions echo in unimaginable ways — when things go awry, the responsibility is ours to bear. ▶ What follows are key principles that we

should consider indispensable as we embrace AI in our communication strategies. Guided by Ethics When deploying AI, ethics isn’t a checkbox, but a commitment. This goes far beyond reputation management or mitigating risk. Ensuring that an algorithm's purpose aligns with the organizational ethos leads to brand unity and legacy rooted in trust. Choose responsible AI operations as you would a business partner, and you’ll foster consumer confidence and brand allegiance. Data-Driven Wisdom AI’s brilliance lies in its ability to take raw, complex data and turn it into actionable strategic insights in a short amount of time. It offers a sort of “editorial oversight” that presents macro trends and micro patterns in consumer sentiment and media cycles akin to narrative blueprints. Data, under an AI lens,

Gerry Tschopp is senior vice president and head of global external communications for Experian, leading a team of communications professionals from all major regions. In addition, he also serves as chief communications officer for North America, with direct oversight of external and internal communications in North America. He is a member of the USC Center for PR board of advisers.



becomes more than a feedback mechanism — it becomes an architect providing a blueprint for communication strategies. Brand Authenticity AI perpetually evolves and improves. So does your brand voice. But let’s take it a step further: as AI perpetually evolves and improves, so should its alignment with your brand voice. Take ChatGPT for instance: it might churn out values that are universally appealing, but lack the nuance specific to your brand identity. When we asked generative AI to explain Experian’s brand values, the robot referenced integrity, innovation, collaboration, and customer-first focus. Sounds good, right? It’s vanilla. It missed a key value proposition: a strong commitment to improving the financial lives of consumers worldwide. Never let the machine drown out your brand’s unique symphony; orchestrate it to amplify your message instead.

NEVER LET THE MACHINE DROWN OUT YOUR BRAND’S UNIQUE SYMPHONY; ORCHESTRATE IT TO AMPLIFY YOUR MESSAGE INSTEAD. Humans in the Loop Automation might be AI’s promise, but it requires human supervision. While AI can help accelerate content creation, human oversight is essential to ensure that quality and congruence remain gold standards. Careful curation of outputs ensures that narratives are both unbiased and suitable for widespread visibility. Privacy First Data privacy isn’t just an ethical imperative; it’s a legal requirement. It is crucial to take measures to protect company and customer data and validate its confidentiality throughout the data lifecycle as it relates to AI usage. In short, secure the vaults of consumer data as meticulously as you would safeguard personal secrets. A solid fortress around data privacy mitigates risk and builds a bridge of trust between your brand and the audience. ▶ As we work within our industry to stay ahead of this fast-growing AI trend, it’s becoming increasingly clear that AI has immense potential to revolutionize the way we operate. However, it cannot replace the human element of our profession, which involves building relationships, understanding the nuances of human communication and adapting to changing circumstances. ▶ Instead, we can leverage AI to enhance our work by automating repetitive tasks, generating personalized and engaging content, and providing data-driven insights that can help inform strategy and decision-making. This enables us to focus

more on high-level strategic planning, building relationships with key stakeholders and delivering value to our organizations and clients, while also improving efficiency and driving better results. ▶ Let’s be clear: AI is not a replacement for human intelligence, but rather a powerful tool that can help augment it. And it is continually evolving, so you must operate in a state of constant learning — study new tools, experiment with them and refine your approach. ▶ And as we embrace this powerful tool, heed Uncle Ben’s words: use it responsibly to enact great change. ■



BY GABRIEL KAHN IN THE LEGAL DRAMA “Suits,” recently resurgent on Netflix, the character Louis Litt seethes with resentment as he grinds away at his cutthroat corporate law firm. Though he puts in more hours and brings in bigger clients, his slicker rival gets promoted ahead of him. ▶ For the Louis Litts of the world, artificial intelligence is going to be one bitter pill. The poison? Uplevelling. That’s the term to describe the great equalizing force that AI adoption brings to the workplace. It goes like this: When AI tools are introduced into the workflow, the worst performers get the biggest boost in productivity. Those who were already at the top of the pyramid also get a boost, but much less of one. ▶ In a recently published paper, Ethan Mollick, an associate professor at the Wharton School of the University of Pennsylvania, along with several colleagues, monitored how employees at the Boston Consulting Group performed when using AI

to complete their work. Those who had previously performed the worst at their jobs saw their productivity jump 43% with AI. The top performers, meanwhile, increased output by only 17%. In the end, everyone is performing within a much tighter band. ▶ At first glance, this means a workplace suddenly operating at Red Bull-level productivity. The C-suite is thrilled. Down the hall at HR, however, it’s a nightmare. The entire structure of the office — from seniority to compensation and recruiting — is about to be upended. It’s not clear how it will ever be neatly put back together again. ▶ “I do not think enough people are considering what it means when a technology raises all workers to the top tiers of performance,” writes Mollick. ▶ One thing it means: The top performers who stayed up late to prepare and showed up early to perform now see a diminishing return on their effort. The lazy, disheveled employees

Gabriel Kahn is a professor of professional practice at the USC Annenberg School of Journalism, where he is co-director of the Media, Economics and Entrepreneurship program. In 2018, he launched Crosstown, a project that uses data to generate local news. Before joining USC, Kahn was an editor and foreign correspondent for The Wall Street Journal.



who never seemed to catch on are now performing at almost the same level. WHEN AI TOOLS ARE INTRODUCED INTO THE WORKFLOW, THE WORST PERFORMERS GET THE BIGGEST BOOST IN PRODUCTIVITY. ▶ The premium placed on effort will begin to disappear. Your worst performers are feeling energized. Your best performers are demoralized. Louis Litt is drowning in his own bile. ▶ The disruption goes beyond office politics. It infects everything from training to pay. Imagine the certified paralegal with years of experience under her belt peering over her cubicle at the newbie with no qualifications doing the same work. ▶ Thomson Reuters’ “Future of Professionals Report” examines how this will overturn the system of credentials that both workers and employers currently depend upon. In this scenario, a Juris Doctor or Certified Public Accountant, two degrees which require years of preparation and investment, no longer command the same premium. ▶ “As automation and AI solutions make completing traditional legal tasks easier, it could become more appropriate for such tasks to be completed by a paralegal or more

junior professional,” the report concluded. ▶ One person surveyed by Thomson Reuters put it more bluntly: “The average tax firm has little-to-no use for a CPA compared to an EA,” or “enrolled agent,” a significantly more junior credential. ▶ The downstream effects flow straight to universities and community colleges. In recent years these places have added numerous master’s degrees and certificate programs designed to help workers meet all sorts of qualifications which, in a few years, may be irrelevant. ▶ AI is not the first innovation to turn skill into a commodity. We’ve seen the assembly line, the steam shovel, the CPU. Each of those brought with it brutal dislocations of labor followed, in due course, by greater prosperity. ▶ But AI adoption is happening more quickly, and affecting more sectors, than any of those previous shifts in labor. To mitigate the inevitable pain before the eventual reward, all these structures — from compensation to education and beyond — must learn how to bend so they don’t break. ■



BY CHRISTINA BELLANTONI & MICHAEL KITTILSON IT’S AS IF GUTENBERG’S printing press got a serious upgrade — an algorithmic boost. Imagine walking into a modern newsroom and feeling the electricity in the air, a charge fueled not just by a relentless pursuit of authentic stories, but also by the whir of algorithms processing important data at speeds once deemed unimaginable. We stand at an intersection of history and innovation, where artificial intelligence goes beyond support and collaborates with journalists and broadcasters. Yes, the game is changing. But make no mistake: The playbook, the essence of journalism and effective storytelling, is still very much in the hands of those who pen the first and final draft. So, we must ask: What happens when what most professors at Annenberg consider traditional journalism meets technology that was once considered futuristic but is now widely used by news consumers?

▶ We train our aspiring journalists to find and verify information. AI does one of these things well, but not both. As we dip into this newly emerging form of mass media, we find ourselves asking how we can harness the robots for good. ▶ Cut to a newsroom where AI takes dispatch logs and spits out headlines: “Weekend Crimes in X Location at Y Time.” Impressive? Sure. But journalism is more than just the nuts and bolts. The numbers that can be crafted into a formulaic sentence tell just part of the story. A journalist can and should contextualize them with additional data, by talking with people and by — gasp! — leaving their desk to see X location with their own eyes. That’s what we teach our students, and it’s a practice that should not stop just because AI can do a lot of the work for us. ▶ The LA Times has a pioneering earthquake bot that can quickly publish basic details about a temblor. The story only gains

Christina Bellantoni is a professor of professional practice, the director of USC Annenberg’s Media Center and the Annenberg Center on Communication Leadership and Policy faculty fellow. She also is a contributing editor with the independent nonprofit newsroom The 19th News, which focuses on gender, politics and policy. With over two decades of journalistic experience, from the LA Times to Roll Call, she has shaped critical political discourse and championed investigative journalism.


its soul, depth and texture when a journalist steps in and personalizes the narrative. For example, “It was felt as far away as ____,” or “fire departments reported minor injuries.” So, while we’re spellbound by the efficiency of AI, let’s not forget it’s just half of the recipe — it’s a utility, but to label it as journalism would be to ignore the finer shades of storytelling.
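The bot’s basic mechanism — structured feed data in, a formulaic first-draft sentence out — can be sketched in a few lines. This is a hypothetical illustration; the field names and template are invented, not the LA Times’ actual code:

```python
# Hypothetical sketch of an earthquake-bot template fill: one structured
# quake record in, one formulaic first-draft sentence out. Field names
# and wording are invented for illustration.
QUAKE_TEMPLATE = (
    "A magnitude {magnitude} earthquake struck {distance} miles "
    "from {place} at {time}, according to the U.S. Geological Survey."
)

def draft_quake_story(event: dict) -> str:
    """Turn one structured quake record into a publishable first draft."""
    return QUAKE_TEMPLATE.format(**event)

draft = draft_quake_story({
    "magnitude": 4.2,
    "distance": 3,
    "place": "Ridgecrest, Calif.",
    "time": "7:02 a.m.",
})
```

Everything after that first draft — the “felt as far away as” color, the fire-department calls — is the journalist’s job.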

AI CAN GIVE JOURNALISTS THE FREEDOM TO DIVE INTO THE MORE NUANCED MOMENTS THAT MAKE STORIES NOT JUST READABLE, BUT RELATABLE.

▶ And it’s those finer shades — those nuances — that machines can miss. At Annenberg, student journalists learn to trust their senses and instincts not just to grasp the visible or audible, but to understand the intangible mood of a moment. Can a robot feel the pulse of a public meeting, discerning who cried at the podium and why? Can an algorithm capture the excitement when a hometown hero catches the game-winning touchdown against a long-time rival? While AI will enable journalists to effectively simplify data and turn it into stories that reach audiences, it can’t impart the vibe, the emotional richness, that makes a story resonate. ▶ Yet, when integrated wisely, AI can give journalists the freedom to dive into the more nuanced moments that make stories not just readable, but relatable. ▶ Artificial intelligence’s fusion with design and visuals offers another vista of opportunity. While concerns about AI’s ability to generate art — and even encroach on the creative process itself — are legitimate, think of the limitless potential in collaborative projects. A machine will sketch the initial outlines given some direction, then we can fill in the colors. In a world flooded with information, the design elements of stories have become crucial for capturing audience attention. The dialectics of design, the serendipity of a brainstorming session, the “Aha!” moment of human collaboration — these are aspects of creativity that a machine can’t replicate, but that can be enhanced with algorithmic help. ▶ Let’s also acknowledge the Stylebot at Annenberg Media, which functions as an interactive guide to AP Stylebook rules, optimizing language while serving as the first line of defense against syntactical faux pas. This ingenuity improves writing quality and amplifies the potential for impactful storytelling. Students love it. ▶ We should neither underestimate AI nor overestimate its prowess. Because at the end of the day, it’s not AI vs. humans, it’s AI with humans. It points to a future where society benefits from technology without compromising the unique qualities that make us human. We are not relinquishing authorship of our own narratives; we are expanding the possibilities to be visionaries of our collective futures instead of becoming mere footnotes in the stories spun by our own creations. ■
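The Stylebot idea — copy scanned against a handful of AP Stylebook rules before an editor ever sees it — can be imagined as simple pattern matching. A hypothetical sketch, not Annenberg Media’s actual implementation; the rules and messages here are illustrative:

```python
import re

# Hypothetical sketch of a Stylebot-like checker: scan copy for a few
# illustrative AP-style slips. The real Stylebot is an interactive guide;
# these rules and messages are invented for the example.
STYLE_RULES = [
    (re.compile(r"\b[1-9]\b"), "AP style spells out whole numbers below 10."),
    (re.compile(r"\btowards\b", re.IGNORECASE), "AP style prefers 'toward', with no 's'."),
    (re.compile(r"\bokay\b", re.IGNORECASE), "AP style uses 'OK', not 'okay'."),
]

def check_copy(text: str) -> list:
    """Return the style warnings triggered by a piece of copy."""
    return [message for pattern, message in STYLE_RULES if pattern.search(text)]

warnings = check_copy("The committee moved towards a vote with 3 members absent.")
```

The value is the same as the Stylebot’s: the machine handles the mechanical first pass, and the human editor handles judgment.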


PERSUADED TO BE BIASED: THE UNDER-SKIN OF AI’S SPIN BY CLARISSA BEYAH IN A WORLD OF mixed hues and moods where PR practitioners scale corporate stairs with care — basking in our collective potential, aspiring to be consequential — we have yet to view the complexity of AI through mirrors in our rear view. ▶ I was in my sister’s living room, listening to my nephew spew an interesting rap, reading it from the screen of his iPad. As a Black woman and former slam poet, I fought back the self-conscious feeling I get when something stereotypically “Black” is awkwardly appropriated — in this instance, the rap he was reciting. When he completed his recitation, he joyously revealed the rap was created by the multibillion-dollar tech known as artificial intelligence. I wondered how AI generated the stereotypes I heard and felt as the rap played. Did the AI prompt indicate what race or gender the rap should mimic? ▶ For PR practitioners responsible for influencing publics and elevating brands, using

AI to generate content or imitate art requires that we proceed with caution. An AI-created rap intended to elevate an entire group or illuminate a critical issue might unintentionally perpetuate stereotypes — and so too can well-meaning practitioners using AI. ▶ As AI goes platinum, biased algorithms struggle to see non-white faces, fail to categorize photographs of Black people as human, and erase major parts of history in attempts to fix the bias. Zachary Small’s New York Times article, “Black Artists Say A.I. Shows Bias, With Algorithms Erasing Their History,” describes many of these issues. ▶ In our eagerness to let AI do the easy work — writing press releases, generating media lists, speeches, tweets — hurtful stereotypes may be scaled. When the unconscious bias we all share results in overlooking nuances in AI-generated content, we might cause unintended and costly harm to individuals and brands.

Clarissa Beyah is chief communications officer for Union Pacific and a professor of professional practice at USC Annenberg. Her expertise spans the professional services, healthcare, technology, transportation and utilities sectors, and she has served as a chief communication advisor for numerous Fortune 50 companies, including Pfizer. She is a member of the USC Center for PR board of advisers.



▶ AI is drenched in stereotype-laden data created by algorithm writers who are disproportionately white and male. Attempts to make AI cleaner, less offensive, and more inclusive are complicated and, in some instances, overtly hypocritical. In Billy Perrigo’s Time magazine exclusive, “OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic,” he describes the horrors experienced by Kenyans hired to screen out abhorrent content from AI algorithms. While scrubbing offensive data for little pay over long periods of time, many were traumatized by what they saw and had to do. ▶ Throughout history, we’ve witnessed protests of major brands over human rights abuses. Will PR professionals need to tackle similar protests of AI-generated products and experiences? ▶ I asked Rafiq Taylor, an Annenberg graduate student who is also my son, to share his thoughts. Rafiq said, “AI’s accuracy is limited because it cannot replicate a lived experience. It can replicate shared experiences, but not experiences that are present but unacknowledged. AI can only be as advanced as people are… and the inequality present in the world impacts whether racial bias is even a consideration on the design end.” ▶ I ask my students and teams considering AI for use in our field to keep some fundamental concepts in mind. These practices apply familiar content creation approaches to what is still an unfamiliar tool, requiring caution and deliberation: ▪ Test messaging for intended audiences. Have diverse individuals read, react, and advise prior to publishing. ▪ Create diverse teams. Diverse PR teams are better equipped to create AI prompts that take nuance into account and are more able to identify stereotypes before content is finalized. ▪ Remember the ‘A’ in AI stands for artificial. Keep content hot by ensuring the H — for humanity — is present in what you create and amplify.

▶ Following these key principles, basic in nature but critical to consider, may help us get a stronger handle on AI as a tool in our field. Thus, as we examine the under-skin of AI’s spin, PR practitioners must continue to flip the script, humanizing the beat and elevating the shared harmony of our humanity. ■




NAVIGATING THE ETHICAL MINEFIELD OF AI IN COMMUNICATION BY PENELOPE SOSA THE RAPID PROLIFERATION of artificial intelligence (AI) in our global communities and daily lives has been nothing short of transformative, exceeding the expectations of many just a few years ago. AI has revolutionized online connectivity, enhanced transportation safety, and ushered in new economic opportunities, profoundly altering how we interact, commute, and work. ▶ However, beneath these remarkable achievements lies a pressing concern: the absence of essential safeguards and guidelines to regulate AI development and prevent the emergence of material harm rooted in human bias and inequitable power structures. ▶ Despite the immense potential for growth, connection, and innovation that AI brings, the lack of proper oversight has allowed technologies and systems to emerge, designed and shaped by humans, that perpetuate a multitude of biases. Of particular concern is the prevalence of racial bias,

which disproportionately affects historically marginalized groups. ▶ This bias is pervasive across various industries, from AI-driven job application filters to risk-assessment software used in parole decisions. The root cause lies in the biased data used to train these algorithms, leading to discriminatory outcomes that threaten to exacerbate existing inequalities. ▶ Consider the algorithms dictating our social media feeds, expertly engineered to maximize user engagement. Unfortunately, this can lead to the promotion of false information, disinformation, deepfake videos, and inflammatory content designed to exacerbate social and political tensions, posing a significant challenge to the fabric of our society. ▶ In the workplace, the rapid advancement of AI has the potential to enhance the prosperity of human workers, but the lack of clear metrics and industry commitments raises

Penelope Sosa is the digital communications manager at the Partnership on AI, a multi-stakeholder organization that brings together diverse voices to shape the future of artificial intelligence. She previously worked at the Illinois Supreme Court Commission on Professionalism. She is a USC Annenberg alumna, now living in Chicago.




concerns about lower wages and diminished job quality. This concern could potentially bring the “robots taking our jobs” scenario closer to reality, highlighting the urgent need for responsible AI adoption and management. ▶ As AI's influence continues to permeate various industries and society as a whole, it becomes imperative that the algorithms driving these advancements adhere to principles of fairness, transparency, and inclusivity. Those responsible for developing AI algorithms must be held accountable for identifying and mitigating algorithmic biases that may arise. ▶ However, the responsibility for creating ethical AI does not rest solely on the shoulders of tech companies. It demands a collaborative effort that brings together diverse industries, academic researchers, and civil society organizations. This collaboration fosters knowledge sharing, problem-solving, and the co-creation of solutions to address the multifaceted challenges posed by AI. As an independent convener, this is precisely the role that my organization, Partnership on AI (PAI), plays. We also identify gaps and development opportunities as they arise, striving to pave the way for a better and more equitable future for everyone. ▶ The challenges associated with AI transcend borders and disciplines, and so must their solutions. As exemplified by Secretary-General António Guterres at a recent UN Security Council meeting on AI, a global approach, a sense of urgency, and a commitment to continuous learning are essential. ▶ Diverse organizations must come together to set the necessary guardrails for safe and responsible AI development, ensuring that ethical principles guide its evolution. This collaborative effort expands the community that guides the future of AI, reinforcing our collective commitment to shaping its trajectory responsibly and for the benefit of all. Delay is not an option; the time for action is now. ■



BY CHANNING SPARKS THE ENTERTAINMENT INDUSTRY is one of the most forward-looking, constantly seeking new ways to capture an audience and tell a story. With the rise of streaming media and artificial intelligence (AI), how Hollywood tells its stories has been thrown into flux. The Writers Guild of America (WGA) called this moment an “existential crisis.” ▶ The WGA, the labor union to which most working writers in Hollywood belong, went on strike over a labor dispute with the Alliance of Motion Picture and Television Producers (AMPTP) on May 2, 2023. The Writers vs. Producers dispute centered on residuals from streaming media and the growing utilization of AI within the entertainment industry. Writers began picketing in front of major studio headquarters including Netflix, Warner Bros. and Universal Studios. ▶ The 2022 release of ChatGPT, a natural language processing tool driven by AI technology, was one of the leading forces

that led to the strike. The World Economic Forum predicted that AI will disrupt a quarter of all jobs over the next five years. The problem lies in the fact that AI could potentially produce a first draft from simple prompts. Writers could consequently be hired at a lower pay rate because first concepts would already be completed for them. ▶ Other issues revolve around reduced work or pay for writers: streaming media’s shorter TV seasons and lower renewal rates leading to fewer steady jobs; smaller writers’ rooms leading to fewer hires and lower pay; and shrinking residuals for past shows that were streamed or syndicated. According to a recent WGA report, the median weekly writer-producer pay declined by 23% over the last decade when adjusted for inflation. ▶ Award-winning television showrunner, executive producer, and writer Anthony

Channing Sparks is a second-year graduate student at USC Annenberg, studying public relations and advertising as an Annenberg Deans Scholar, and is a graduate associate at the Center for Public Relations. Her internship experience includes work at the boutique agency AM PR Group. Channing is a dedicated, hardworking, passionate, and open-minded young professional pursuing career opportunities and experiences in the entertainment, media and dance industries.


Sparks, Ph.D., expressed his concern about the matter. ▶ “The embrace of AI in the film and television industries is the biggest threat to the viability of the profession of film writing that I've seen in my career and lifetime. Not only is AI-produced film and television destructive to a unionized workforce by aiming to reduce our workforce drastically, but it is also theft. Plain and simple. It is theft. AI would consume the hard work of current and previous generations of writers, then splice it and dice it into a regurgitated stew of nonsense that presumes that human creativity has reached its limits. ▶ “AI is not only dangerous because of its job-killing capability in the creative industry; it is dangerous because it forever freezes our popular culture and limits it to the current moment in time. This embrace of AI in the entertainment industry must be stopped.” ▶ Writers feared that, as the technology advanced, AI-written content would render their profession obsolete. The writers had three demands of the producers going into the strike: ▪ “AI can't write or rewrite literary material.” ▪ “AI can't be used as source material.” ▪ Scripts written under the WGA contract “can't be used to train AI.”

▶ This means that producers (AMPTP members) cannot release media in which any part was created using AI, and AI-generated content cannot be used as source material. Initially these demands were rejected, as the AMPTP was unwilling to accept any restrictions on the future use of AI; however, the AMPTP agreed to annual meetings with the writers to discuss future advancements. ▶ Fortunately, the WGA and the AMPTP reached a preliminary deal on a tentative contract on September 27, ending the strike pending a final vote by the 11,500 WGA members. The deal, effective January 1, 2024, will last until May 2026 and includes the following: ▪ AI cannot write or rewrite literary material. ▪ AI-generated material is not to be considered source material. ▪ Streaming content viewed by 20% or more of the service’s domestic subscribers in the first 90 days of release will earn bonuses for the writers. (This is equivalent to the viewership of a popular network series.) ▶ Though at the time of publication we don’t know the outcome of the SAG-AFTRA strike over similar issues, the future of entertainment seems promising as writers and producers have moved toward finding common, creative ground. ■




TO AI OR NOT TO AI?: IMPORTANT QUESTIONS TO ASK BEFORE DEPLOYMENT BY LUCIE BUISSON MCKINSEY PREDICTS THAT generative AI will add “trillions of dollars in value to the global economy” thanks to productivity gains. Given the pace of adoption and projected value, it’s very likely most organizations will start integrating generative AI solutions into their workflows and products in some capacity in the near future. And while the recent global pandemic tested many organizations' ability to be agile and innovate on the fly, this year’s economic headwinds have propelled efficiency to the top of the growth driver pyramid. With efficiency gains on the menu, it’s no wonder 39% of business leaders will use generative AI every day. ▶ That said, it’s important for companies to start by investigating whether adding generative AI to their services adds value to their customers or not. A lot of organizations are too quick to jump on the hype and execute without thinking about the real value to the user. In our engineering and

product teams, we are encouraged to ask “why” five times. The funny thing with generative AI is that it’s a really different conundrum, and now we are asking “why” 50 times. Common questions include: Why will this add value? How will they use it? What will they ask of it? What outcomes will they get? Will it make a difference to their daily work? Is it trusted? Should we charge for it? Will it cost us a lot of money? Is it hard to implement? Will it be difficult to support? With any new technology, it takes experimentation to fully understand it. And where generative AI is concerned, we’re very much still in the phase of discovering new use cases. So whilst I'd say nobody is required to implement it, I would say everyone should be experimenting with it. ▶ Generative AI is simply scaling data and using natural language to extract it, so it's simpler for companies to implement as a plugin. But it can be costly. And tech leaders

Lucie Buisson is chief product officer at Contentsquare, where she leads the product vision, strategy, and co-leads go-to-market. Her team’s mission is to develop innovative products that empower businesses to make the digital world more human.



need to understand these costs before deeming it a solution to any one problem. ▶ We’ve heard of use cases where AI chatbots are replacing human customer service departments — a bold move, to say the least. Time will tell if this is the right move, but an important question to ask is whether the cost of AI replacing a human worker will yield better results, and then, on top of that, how you can utilize those human workers in a way that still builds your business while creating meaningful and valuable work for them. AI should be viewed as a co-pilot, with the express role of improving productivity to allow humans to focus on higher-value tasks. What we know for certain is that people using AI the right way will be much more efficient than those who do not. If organizations do not embrace this fact, their reluctance will eventually become a disadvantage to the business in the long run.

IT’S IMPORTANT FOR COMPANIES TO START BY INVESTIGATING WHETHER ADDING GENERATIVE AI TO THEIR SERVICES ADDS VALUE TO THEIR CUSTOMERS OR NOT.

▶ What we are seeing in the market is only a beta of a much more sophisticated intelligence engine. These are the early days, and we have to continue to learn how to adopt it and put guardrails around it. ▶ For instance, we don't yet know the privacy and security concerns around using AI, and many organizations are scrambling to work out what it means when sensitive data is being put into an external AI engine. When Google Translate first came out, everyone loved it. We all used it to translate simple text, but some companies were using it to translate sensitive documents. Legal teams would eventually advise that this was not a secure application and shouldn’t be used. The same could be said for generative AI! ▶ A critical early step toward org-wide deployment is to establish a working group of early adopters to learn and test the new technology. Generative AI is a perfect use case for this model, and makes the most sense when this test group — with transparency and consistent communication — can help identify where the technology creates the most value for the business. Only then, when there’s a clear purpose for the new technology, should it be deployed at scale. ▶ At the end of the day, companies and teams need to be flexible, they need to experiment with consideration, and they still need to distill the value of the work they are doing in order to separate hype from using generative AI to develop real solutions for real challenges. ■


GENERATIVE AI'S IMPACT ON STUDENTS OF COLOR AND DIVERSE STUDENTS BY BILL IMADA CO-AUTHORED BY CHATGPT LET ME BE TRANSPARENT HERE. Much of this Relevance Report entry uses generative AI — specifically ChatGPT. ▶ Generative AI is here to stay, and its impact on our daily lives is everywhere. As the use of generative AI grows, so will debates — pro and con — in institutions of higher learning. As many of us already know, AI is all about change, which is not always easy for highly structured institutions where even the slightest modification to curricula is viewed with angst. ▶ During a discussion on generative AI hosted by VOICES for AAPIs, a national organization dedicated to supporting Asian Americans and Pacific Islanders in the fields of communications and marketing, panelist Dr. Gain Park, an associate professor in the Department of Journalism and Media Studies, talked about how students and faculty are feeling about AI. According to Dr. Park, students and faculty are both excited by and scared of AI. Why? Because it is growing so rapidly.

▶ This rapid change highlights the importance of keeping up with AI and staying ahead of it for higher education. However, generative AI may pose even greater challenges for historically underserved students, especially for college students of color. As we delve further into AI, it is essential to discuss and evaluate the impact AI will have on underrepresented communities. ▶ Here are some of the challenges and opportunities that generative AI presents to students of color and the broader education ecosystem. While this list is not comprehensive, it will broaden conversations about AI relating to students of color and other diverse communities. ▪ Bias and Fairness: Generative AI models aggregate information from large datasets that contain biases. This can generate content that may perpetuate negative stereotypes and tropes or fail to adequately represent the genuine voices and experiences of students of color and

Bill Imada is founder, chairman and chief connectivity officer of IW Group, a minority owned and operated advertising, marketing and communications agency focusing on the growing multicultural markets. Bill is also a trainer and mentor, and serves on the advisory councils for Cal State Northridge, University of Florida, and Western Connecticut State University. He is a member of the USC Center for PR board of advisers.



other diverse population segments. These individuals may encounter AI-generated materials that minimize their perspectives and experiences. Or worse, AI-sourced information could be historically inaccurate and rely on past racist beliefs and views. If students of color and other diverse individuals cannot engage in building and shaping AI models, they may remain marginalized and unable to use generative AI tools to excel in their academic development. ▪ Access Disparities: Students need equal access to technology and the internet. Students of color, particularly those from low-income backgrounds, may face barriers to accessing generative AI tools and the educational opportunities they offer. Furthermore, the digital divide only exacerbates existing inequalities in education. Chris Cathcart, a public relations lecturer at California State University, Northridge, and a VOICES panelist, shared these same concerns using one word: diversity. Cathcart said that generative AI tools must be available to everyone, not just people of means. If access to AI tools and the internet is limited, it will impede the ability of marginalized communities to benefit from these opportunities. ▪ Privacy Concerns: Generative AI can collect and process vast amounts of data, including personal information such as race, gender, net worth, and family history. Students of color and diverse students may be at greater risk of privacy breaches, especially if their personal information is mishandled or misused. Concerns and fears over data security can hinder their engagement with AI-driven educational platforms. ▶ Undocumented students are also at risk from AI-enhanced tools. AI could be used to screen college applicants, revealing their legal status and exposing details about a potential student that are private to that individual and their family. While companies

are now using AI tools to address and reduce employment bias, these tools could also inadvertently create barriers for diverse applicants entering the workforce. ▪ Personalized Learning: Generative AI can create bespoke educational content catering to the unique disposition of each student. Customized learning is particularly beneficial for students who may require lessons and assignments that take into consideration their diverse learning needs and preferences. Immigrants and refugees with limited experience attending U.S. colleges could have AI-enhanced lesson plans that incorporate more visual aids, bilingual text, and a timeline to facilitate learning at a comfortable pace for the student and professor. AI tools can also be deployed to detect learning challenges for students who have difficulties with written and oral presentations. When these difficulties are identified, AI can customize a process to help students overcome them. Yet despite these opportunities, Ms. Christie Ly, an adjunct professor at USC with more than 20 years of strategic communications experience, says students cannot rely on generative AI to get by in class; instead, they need to fuel what they are learning with their human touch. ▪ Language Support: For students of color and non-native English speakers, AI-powered language translation and learning tools can provide invaluable support, making educational content more accessible and understandable. For example, an English-as-a-second-language student may have generative AI bilingual lesson plans created to match their skill level and offer an optimal pace for that individual to learn. AI can also bridge the gap in personalized education. Professors with many non-English-speaking immigrants and refugees in their classes can create culturally relevant curricula that speak to students in a language and tone that will accelerate their comprehension and learning.


▪ Diversity in Curriculum: AI can assist educators in diversifying classroom materials, incorporating various voices and perspectives. This helps students of color and other diverse students feel more represented and engaged in their studies. Furthermore, AI can be used to identify terms that are demeaning, hurtful, and inappropriate in curriculum, creating a safer and more inclusive environment for learning. ▪ Accessibility Features: Generative AI can create accessible content, benefiting students

with disabilities, including those from diverse communities. This content promotes inclusivity and belonging and ensures that students living with disabilities have new opportunities to attain knowledge on par with peers without apparent and nonapparent disabilities. For example, several AI-enhanced text-to-speech programs can help students on the autism spectrum hear text from books, articles, and periodicals, allowing them to visualize content to facilitate learning and understanding. These same tools can be enhanced with different voices, sounds, delivery speeds, and tones to customize the learning experience.

AI CAN ASSIST EDUCATORS IN DIVERSIFYING CLASSROOM MATERIALS, INCORPORATING VARIOUS VOICES AND PERSPECTIVES.

▪ Reducing Bias: As AI technology evolves, efforts are being made to reduce bias in AI-generated content. Students of color and diverse students stand to benefit from a more inclusive and unbiased educational experience. However, without diverse human interaction, generative AI models will not learn about the experiences of those marginalized by racism, classism, and other biases. Finding the right AI prompts and inputs is essential to reducing and eventually eliminating unconscious bias in generative AI models and tools.

Conclusion

Generative AI presents challenges and opportunities for students of color and other diverse students. To responsibly utilize its potential to advance education, it is crucial to address ongoing issues of bias, unequal accessibility, and justifiable fears over privacy. By doing so, we can create a more equitable educational landscape where all students, regardless of their background and circumstances, can learn, grow, and thrive. ▶ As generative AI continues to evolve, it is our responsibility to stay active in its development and use, and to ensure that it remains accessible to everyone who wishes to use its tools to advance their knowledge. Furthermore, it is our responsibility to ensure that access to generative AI is managed wisely and responsibly, so that it is a tool for empowerment and not a source that widens disparities. As Mr. Cathcart shared in the final few minutes of the VOICES panel discussion, “AI is a tool, not a toy.” ▶ The future of education, with generative AI as an active partner, holds promise, but it is up to us to ensure that promise is fulfilled inclusively and equitably. ■



ARTIFICIAL INTELLIGENCE GETS BETTER WHEN YOU TURN IT UPSIDE DOWN
BY JARON LANIER
AI HAS HELPED PEOPLE. That’s the most important starting point. For instance, it has made programming less tedious and programmers more productive. ▶ And yet AI has also inspired remarkable fears, with some of the central creators and sellers of AI systems warning that it could cause human extinction. AI COULD do great harm, through campaigns of deepfakes that undermine society, but only if we insist on mystifying AI in a way that paralyzes our ability to work with the technology responsibly. ▶ There is a better way of thinking about AI that turns it on its head, making it both safer and more valuable. This alternative is sometimes called “Data Dignity.” The idea is that you can think of AI as a new way for people to collaborate, instead of as a new kind of personage, or entity on the scene. ▶ The two ways of thinking are equivalent from a strict technical point of view, but Data Dignity makes it easier to think about how the

technology can best fit into our lives. ▶ Let’s demystify AI and summarize how it works. (We’re talking here about the GPT-style AI that has become so prevalent.) You can understand it in three steps. ▶ First, consider how computers can recognize what kind of data is present based on statistics. For instance, a bunch of statistical measurements applied to a stretch of text might determine whether it was really written by Shakespeare or an imposter. A similar tangle of statistical values might be able to tell if an image is of a dog or a cat. ▶ The bundles of statistical measurements are called neural networks, although they differ from biological neural networks, and they are created by a process called training, where they get tweaked repeatedly until they function. It’s a messy process, and can seem mysterious, but it would be weird if it didn’t eventually work. After all, math is real, and enough statistics, once trained, will inevitably do the job.
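The first step, classification by statistics alone, can be made concrete with a deliberately tiny example: attributing a text by comparing letter-frequency fingerprints. This is a toy stand-in, not how real neural networks are built, but it shows a bundle of simple statistical measurements doing the classifying:

```python
from collections import Counter

def profile(text):
    # Normalized letter-frequency "fingerprint" of a text.
    letters = [c for c in text.lower() if c.isalpha()]
    counts = Counter(letters)
    return {c: n / len(letters) for c, n in counts.items()}

def distance(p, q):
    # L1 distance between two frequency fingerprints.
    keys = set(p) | set(q)
    return sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

def attribute(sample, candidates):
    # Attribute the sample to whichever candidate author's reference
    # text has the closest fingerprint.
    ps = profile(sample)
    return min(candidates, key=lambda name: distance(ps, profile(candidates[name])))
```

Real systems measure far more than letter frequencies, but the shape is the same: many simple measurements, compared statistically, with no understanding required.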

Jaron Lanier is a computer scientist, author, and musician. He is presently Prime Unifying Scientist at Microsoft.



▶ The second step is to take in an extremely large amount of data and train a stupendously gigantic conglomerate neural network, called a large language model, to recognize — we say “classify” — all the stuff identified in the data. For instance, you could take in the whole internet and assume that text found adjacent to an image usually has something to do with the image, and then train the model to classify which image matches which text.
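Once such a model has learned to embed text and images into a shared space, “which image matches which text” becomes a nearest-neighbor lookup. A minimal sketch, assuming the embedding vectors already exist (the numbers below are made up for illustration):

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

def best_match(text_vec, image_vecs):
    # Return the index of the image embedding most similar to the text's.
    return max(range(len(image_vecs)), key=lambda i: cosine(text_vec, image_vecs[i]))
```

The hard part, of course, is the training that produces embeddings where matching text and images land near each other; the lookup itself is this simple.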

▶ Now you can classify a vast multitude of things, and it’s time for the third step. This is to run the process in reverse. For instance, let’s say you want a picture of a cat. You can start with an image of random snow and ask the model to rank it for similarity to an image of a cat. Then add some more randomness. If it starts to look a little more like a cat, then keep the modification; otherwise discard it. Do this enough times and you get a cat emerging out of the snow. ▶ The magic, the new trick, the thing that has never been possible before, is that you can combine qualities you want in the output of the model. That is why this kind of AI is often called “generative.” You can prompt for a cat using a parachute while playing a mandolin, rendered in watercolor. And there it is. In order for a batch of classifiers to be satisfied at once, the process often solves new problems, like how a parachute would fit on a cat, or how a cat’s paws would fit on a mandolin. While this type of problem solving can seem magical, it’s important to remember that it is merely using random stabs, constrained indirectly by the large amount of training data, through the classifiers, to find answers. ▶ In other words, instead of thinking about this process as a new kind of creature, we can think of it as a new kind of collaboration. People made the data that the model is trained on, and the model simply finds hidden correspondences in what people did. This is not a disparagement of AI at all; it is high praise. Helping people work together better is precisely what civilization, and computer science, are for. ▶ Once you think of AI as a form of collaboration, it becomes less scary. It is not here to replace people. It also becomes clearer how to use it best. For instance, the increase in productivity for programmers arises precisely because a programmer can now rely on what others have done (over and over) to avoid the most repetitive and tedious aspects of the job. ▶ A lot of people who work on AI like to think of it as a new creature on the scene, maybe because that has been a part of so many science fiction movies, like The Matrix or Terminator. But if you think of it that way you make it more mysterious than it has to be. ▶ Instead of trying to convince it not to harm us, we have the option of paying attention to what data we train it on. We can develop ways to turn off the influence of data from malicious, incompetent, or useless sources. ▶ A lot of people who think of AI as a creature want to think of just the right way to convince it not to harm people. But this is like the oldest stories of genies or devils. Whatever you ask of a creature might be twisted by the creature. ▶ When we think of AI as being made of people, then it becomes clear that we are the sources of all AI does, and we can take responsibility. ▶ We don’t need to be mystified by our own activity and then terrified of ourselves. ■
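The reverse process Lanier sketches (start with “snow,” nudge it randomly, keep the nudges the classifier likes) is, in miniature, classifier-guided hill climbing. A toy sketch, in which `cat_score` is a made-up stand-in for a trained classifier:

```python
import random

TARGET = [0.8, 0.2, 0.9, 0.4]  # the pattern our toy classifier calls "cat"

def cat_score(image):
    # Stand-in for a trained classifier: higher means "more cat-like."
    # A real model would return a learned similarity score instead.
    return -sum((p - t) ** 2 for p, t in zip(image, TARGET))

def generate(steps=5000, noise=0.05, seed=0):
    rng = random.Random(seed)
    image = [rng.random() for _ in range(len(TARGET))]  # start from "snow"
    best = cat_score(image)
    for _ in range(steps):
        # Add a little randomness...
        candidate = [p + rng.uniform(-noise, noise) for p in image]
        score = cat_score(candidate)
        if score > best:
            # ...and keep the modification only if it scores "more cat-like."
            image, best = candidate, score
    return image
```

Real generative models are vastly more efficient than this blind search, but the logic is the same: random variation, filtered by classifiers trained on human-made data.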





BY DOMINIC CARR
WHEN I STARTED my communications career, my first manager told me that our work was doubly hard. Not only did I need to master the discipline of communications, but I also needed to master the work of the broader organization if I wanted to be an effective counselor and really drive impact. Understanding comms while deeply understanding the business is a principle I’ve tried to live by ever since. ▶ AI is unleashing dramatic change across every part of every organization. If we as communicators want to live by the principle of knowing the business, we’ll need to double down on a commitment to being perpetual students or risk becoming irrelevant. The good news is that while this wave of innovation challenges us in new ways, AI also offers us powerful new tools we can use to transform the discipline, transform ourselves and rise to the challenge. Communicators who deeply understand the impact of AI on their business

and know how to harness it in their function will continue to elevate both themselves and the discipline of communications. To ride this wave of change and prosper, we all need a personal AI-learning plan that covers at least four dimensions:

1. How is AI affecting the organization where I work and what is our overall AI strategy?
Organizations big and small are rapidly experimenting with and embracing AI. In areas like customer service, marketing, product development and more, teams are just beginning to understand the power and potential. As communicators we have a real opportunity to help shape how our organizations use and deploy AI. We can be an important voice for ensuring our organizations use AI in a transparent and responsible way. And we need to help tell the AI story for our organization. Investors, policy makers, the media, customers, and

Dominic Carr is a communications leader with more than 25 years of experience at major multinational companies, including at Microsoft and Lyft, where he’s built global teams and strategic communications programs that change perceptions of businesses, brands, and issues. He is a member of the USC Center for PR board of advisers.



employees are all hungry to learn more about how organizations are using AI today, where they see opportunities, and how they plan to manage the risks and ensure the technology is used responsibly. Some might say this is the only thing some audiences are interested in. Your “AI story” needs to be front and center. Do you know it? Can you tell it? Is your team working every day to refine and improve it? ▪ Part one of our AI learning plan is: Learn your organization’s AI plan and story.

2. How can I personally use AI to help me better understand my organization?
Knowing the business takes work and time. It always has. And as the pace of change accelerates, it becomes even more demanding. But the good news is AI can help. How can you use the same AI tools that are driving change to understand the change? For example, how are you using AI tools to summarize longer or more technical documents and presentations? How are you using AI to generate insights from data, identify interesting trends, and monitor competition in real time? ▪ Part two of our AI learning plan is: Learn the AI tools you can use to help you better research, track and understand your organization.

3. How is AI impacting the discipline of communications and what’s my plan to harness it?
This one is closer to home. It’s still early, but already you can see some of the potential for AI tools to radically transform our field. Potential uses include generating first drafts or suggested outlines of speeches, press releases, blog posts, and FAQs, and tracking external sentiment and spotting potential crises or issues earlier. AI can even identify new reporters or influencers who might be interested in your story, or predict the success of a pitch. These are just some of the ways AI is impacting the discipline of communications.

Just as your business partners are experimenting with AI for the business, you need a clear plan to experiment with AI for communications. Understand what AI is good at and what it is not good at. And you’ll need training and clear operating principles for the Comms team and others to ensure they are using these new technologies responsibly and transparently. ▪ Part three of our AI learning plan is: Learn how you can use AI responsibly in communications.

4. How can I personally use AI to make myself more productive and have greater impact?
AI has the potential to transform how organizations work, and it can be harnessed by the individual too, driving productivity and efficiency and freeing us up from routine tasks to focus on the things that have the biggest impact. New AI tools can summarize meetings you missed and identify whether there are any actions for you, transcribe phone calls or interviews, generate first drafts of internal reports or emails more quickly, speed up production of that all-important PowerPoint deck, and automate coverage reporting. We’ll all need to experiment with these tools, understanding their strengths and limitations and how to get the most out of them. New capabilities are being added all the time, so our plan needs to ensure we’re staying up to date on the latest features and services. ▪ Part four of our AI learning plan: Learn the personal productivity tools that help you get ahead. ▶ AI is a wave of change heading our way. We all need to adapt to succeed. And AI can help us. But we’ll all need to commit to lifetime learning and a personal AI learning plan if we want to ride the wave and prosper. ■




BY BETH FOLEY
UNLESS YOU’VE BEEN on an extended digital detox, you know that communications is no exception to the integration of generative artificial intelligence. I wondered how the machines and the humans thought about these changes. When I asked ChatGPT how AI would affect the professional communications function, it revealed a plethora (ChatGPT’s word; I would have said “a ton”) of possibilities. When I turned to fellow human CCOs and asked the same question, their responses consistently revolved around how AI would change their teams and, by association, their roles as leaders. This contrast offers a unique perspective on the convergence of AI and leadership in the communications profession. ▶ Most of us have a sense of how AI may help our work. We’ve likely heard about AI’s potential to revolutionize communications but very little about how it will impact leaders.

AI empowers communications professionals in at least three (and a half) distinct areas:

1. Data-Driven Insights and Personalization
AI unlocks the power of data by processing vast amounts of information from various sources, providing valuable insights into public sentiment, emerging trends and data-driven decision-making. It enables personalized communication strategies tailored to specific audience segments, increasing engagement and effectiveness. AI’s knack for crunching data and personalization is like having a crystal ball that reveals what the audience is thinking and feeling. But leaders need to know how to apply these deeper insights.

2. Efficiency Through Automation
AI-driven tools automate routine tasks, freeing up CCOs and their teams for more strategic endeavors. Chatbots, virtual assistants and content generators are the workhorses that

Beth Foley is the chief communications officer and vice president of corporate communications and philanthropy for Edison International and Southern California Edison. She directs Edison's employee and external communications, community engagement, and brand and advertising strategy ― all focused on sharing the message that Edison is leading the transition to a clean energy future. She is a member of the USC Center for PR board of advisers.



ensure that repetitive tasks are handled efficiently, allowing communications professionals to focus on the creative and strategic aspects of their work. This places an even heavier emphasis on the ability to think critically and lead strategically.

AS THE WORKPLACE TRANSFORMS, LEADERS MUST FOSTER A CULTURE THAT EMBRACES AI AS A TOOL FOR LEARNING, ENGAGING, CONNECTING AND EMPOWERING COMMUNICATORS TO ACT.

3. Ethical Leadership and Continuous Learning
While AI enhances efficiency and personalization, CCOs must exercise ethical leadership in deploying AI applications. They need to be vigilant about AI accidentally reinforcing biases (this is the one that keeps me from falling asleep at night) and about ethical concerns in communications strategies, to ensure that AI aligns with organizational values and avoids unintended consequences. Additionally, AI can be a powerful tool for continuous learning and skill development for communications professionals, helping them stay updated on industry trends and equipping them for a very quickly evolving landscape.

3½. Hallucinations
Communicators are spending more time than ever fighting off misinformation and creating reliable sources of truth. AI’s penchant for lying like a teenager skipping out of school creates a whole new layer of work and worry for leaders. ▶ While AI is undoubtedly revolutionizing the communications field, it’s not a substitute for human leadership.

▶ As leaders, we must prepare our teams for this AI-driven future, ensuring the workforce is adaptable and equipped with the necessary skills to harness AI’s positive potential. AI may automate tasks, but it shouldn’t replace the nuanced understanding, empathy and strategic thinking that human leaders bring to the table. ▶ Maintaining employee engagement in an era of AI is a new leadership challenge. As the workplace transforms, leaders must foster a culture that embraces AI as a tool for learning, engaging, connecting and empowering communicators to act. Employee training and involvement in AI integration are key strategies to ensure that the human touch is still at the core of communications efforts. AI’s impact on communications is complex, offering new tools and capabilities to enhance what we deliver. Yet, it’s the leadership aspect that truly distinguishes professional communications in this AI-addicted landscape. The ability to guide teams through change, support employee engagement and uphold ethical standards will decide success in the age of AI. As we embrace the transformative power of AI while staying true to leadership values, we can be positioned to navigate this pivotal transformation successfully, ensuring that the future of communications is still both tech-savvy and human-centered. ■


GENERATIVE AI INTRODUCES NEW CONSEQUENCES FOR AN OLD CHALLENGE
BY DALE LEGASPI
AS THE COMMUNICATIONS FIELD continues to evolve rapidly with the introduction of new AI tools, practitioners must be wary of adopting them hastily, lest they give new life to old negative influences. One prominent example is AI bias. While it may be counterintuitive to think of bias when discussing a technology driven by an algorithm (which is, theoretically, objective), our opinions, viewpoints and, thus, our biases are always present — even if we are not conscious of them at all times. Consequently, the data that feeds AI algorithms may be biased if not properly screened and vetted. ▶ Today, communication professionals have a whole host of handy new generative AI (GenAI) tools at our fingertips, but without taking proper precautions to ensure the data we put into them is free of bias, these tools can have consequences much more severe than we are accustomed to. The technology is

designed to drive results and spread information at machine speed, meaning a biased input would drive and perpetuate a skewed output further and faster than ever before. Considering GenAI produces outputs from a simple query on the front end, the power of the tool is immense. But so are the potential drawbacks without proper accountability and oversight from communicators. ▶ To maintain relevance and keep the emergence of AI from crippling the integrity of the profession, communications practitioners must:
▪ Embrace the technology by becoming familiar with GenAI tools
▪ Understand the importance of clean input data while safeguarding private or sensitive data
▪ Beware of potential biases that occur in setting the prompts or the data set

Dale Legaspi is a USC Annenberg adjunct instructor and public relations professional with more than a decade of experience in both agency and in-house positions across B2B tech. At Zeno Group he leads the day-to-day client programs for multiple accounts across corporate technology and healthcare.



Embrace the technology
Whether we like it or not — and it seems clear that most people do not — AI is here to stay. Pew Research recently revealed that the majority of Americans (52%) are more concerned than excited about the emergence of AI, while a Gallup survey found that four out of five Americans have little or no trust in businesses to use AI responsibly. Communications professionals will fall all along this spectrum, but as generative AI drives more and more communications, practitioners must recognize the power of the technology and have at least a basic understanding of how to use it. It can simultaneously make our jobs easier and more difficult, managing ever-increasing content demands but potentially perpetuating divisions within an increasingly polarized public. And that is to say nothing of the way it impacts the day-to-day role of a communications practitioner.

Understand the importance of clean data input
While analytics and results tracking in communications have come a long way, data has not historically been viewed as a cornerstone of the communications field. That is going to continue to change rapidly with the proliferation of AI. Despite the fact that it makes for great sci-fi movie plots, the technology is not sentient. Still, generative AI is producing everything from thesis papers to works of art based on simple queries. Its emergence is making machines smarter, faster and more adept at completing tasks that previously required much more human intervention. The technologies underlying AI (namely automation and machine learning) have existed for decades, but the role of new generative AI large language models across a wide range of applications is still emerging. Concurrently, machine learning is cumulative, so the early data inputs will not only perpetuate, but they will also have a

disproportionate impact on outputs. It is absolutely vital that data inputs be clean and free of sensitive data.

THE BURDEN OF ACCOUNTABILITY FALLS ON US AS COMMUNICATIONS PROFESSIONALS.

Beware of potential biases
While values and integrity must always be our North Star as communications professionals, the nature of the field often requires us to step beyond our own viewpoints and into those of the organizations we represent. If we do not remain diligent in avoiding bias, it can creep into everything from our word choices to the information sources we consult. Now, with AI, we are feeding into algorithms that have the potential to not only perpetuate biased inputs but spread skewed results at machine speed. Furthermore, in today’s polarized media environment, which capitalizes on volume often at the expense of accuracy, unconscious bias can have outsized ramifications. Right, wrong or indifferent, the burden of accountability falls on us as communications professionals, as we must be the ones to ensure bias stays out of the data our organizations feed into AI. ▶ Communications practitioners are not going to be required to become data scientists overnight, but the emergence of AI is a forcing function that is making data literacy a prerequisite for being effective in the field. The only way to ensure AI’s influence on communications doesn’t become a debacle is for practitioners to lead the charge in ensuring that communications from our organizations remain transparent and authentic. Our professional relevance is at stake. ■



BY JOSH ROSENBERG
AT THE BEGINNING of this year, we coined the term “AI-nxiety” in our annual trend report, The 2023 Predictionary, to reflect “the unease about the overarching ramifications of AI on human creativity and ingenuity.” By the time you read this, AI will have already become far more tangible, scary and exciting. ▶ The growth rate of AI’s adoption and capability is completely unprecedented. ChatGPT reached 100M monthly active users in just six weeks, all organically. A mere glance at Midjourney’s improvements since its original version reveals what looks like decades of innovation in less than a year. It appears increasingly as though Moore’s Law doesn’t apply to AI, and predictions of the future are a fool's errand. ▶ Our place along the AI timeline is murky amidst these technological leaps. But it feels like we are somewhere in the awkward teenage years: a period of rapid growth and experimentation, hits and misses.

▶ The truth is no one really knows what will happen next with AI. So then how do we use it in our work? When the future is unclear, the best crystal ball is looking to the past for guidance. Mark Twain’s words remain as true as ever: “History doesn’t repeat itself, but it often rhymes.” ▶ So in AI’s experimentation phase, it’s critical to learn from what we already know to manage what we don’t. As communicators, let’s focus on what we know is real: brand storytelling, ideas that real people love and solving real business and consumer problems. ▶ What follows is a playbook of three principles, as told through real-life examples — as true for AI as for any other form of creative marketing — that can guide us through the adolescence of AI to create work that makes a lasting impact, not a flash in the pan.

Josh Rosenberg is co-founder and CEO of Day One Agency. Josh is a communications strategist and digital media authority with extensive experience shaping marketing communications programs for some of the world’s leading brands, including American Express, Chipotle Mexican Grill, Facebook, Nike, Comcast, Abercrombie & Fitch, Motorola and Ferrara. He is a member of the USC Center for PR board of advisers.



Real Brand Storytelling At The Heart
The AI creative process should begin with a clear answer to a critical question — what do we want to say about our brand? — to ensure we tell a story about our brand that is going to resonate with our audience and move the needle for our clients. ▶ For example, Heinz put its creative platform “It has to be Heinz” to work, tossing it to the all-knowing mind of generative AI and asking DALL-E 2 to “draw ketchup,” with the model inevitably returning images that resembled the iconic Heinz bottle. Heinz used AI to paint the picture, but its creative platform was the core of the story, and the audience outtake was about the brand, not AI itself.

Make Things For Real People
AI is complicated beyond comprehension, but that doesn’t mean our ideas also have to be. For ideas to spread through culture, anyone should be able to understand and want to engage with them. ▶ Virgin Voyages worked with Jennifer Lopez to create a generative AI tool that allows people to create a custom cruise invitation video for their friends and family, as if it were read off a script on the spot by J-Lo herself. While this technology is mind-bogglingly sophisticated, the output is simple: “I can make a fun video where J-Lo invites my buddies on a cruise.”

▶ So again, we should be asking the right questions with candor: Would the average person on the street care about this?

Solve Real Problems
The possibilities that AI opens up are new and exciting, to a distracting extent. ▶ We should be asking ourselves, “Does this idea solve a real consumer or business problem, or is it a solution in search of a problem?” ▶ The NFL saw that it faced a huge problem in declining youth participation in, and therefore interest in, football. It recently recruited the help of Disney to create an AI-powered, kid-friendly alternative broadcast of Sunday Night Football, recreating an NFL game in real time as if it were live from Andy’s bedroom in Toy Story. The league used AI to address a huge business problem by making its product relevant to a new audience. ▶ Despite the feeling of unprecedented change, this disruption to the industry is also another arrow in our creative quiver. Just as marketing has extended from print to radio to television to digital to social media, AI is presenting us once again with the same lesson: people are still people and brands are still brands. ▶ So let’s navigate this uncertain future with our wisdom of experience in culture, people and brands. And let’s always, always check in with our legal teams before doing so. ■



AI BRIDGES THE GAP BETWEEN DATA AND DIALOGUE FOR CLIMATE JOURNALISM
BY ALLISON AGSTEN WITH MICHAEL KITTILSON
WE STAND AT THE NEXUS of an information glut and an action deficit — particularly when it comes to defining the crisis of our time: climate change. Despite an excessive amount of data, a disconnect persists. According to the Yale Program on Climate Change Communication, 72% of Americans claim climate change stokes their anxieties, yet only 33% actually bring it up in conversations, even with those closest to them. Why this incongruity between apprehension and dialogue? It’s not just the message that counts, but how it’s delivered. ▶ Just this year, the Intergovernmental Panel on Climate Change (IPCC) published its latest report — a document describing the state of scientific, technical, and socioeconomic knowledge on climate change, its impacts and future risks. Crafted by hundreds of experts and ratified by over 200 nations, the full report is sobering, unflinching, and, at 115 pages, overwhelming.

Nonetheless, journalists may have as little as 24 hours under embargo to distill this information into a comprehensive yet digestible read. ▶ To directly tackle this communication challenge, we initiated a specialized test case utilizing ChatGPT, which offers journalists facing tight deadlines the capability to rapidly analyze expansive data sets. For this exercise, we input a large volume of data extracted from the latest IPCC report. Then, we prompted the robot to “review and analyze the following data, then concisely give us the main points.” We repeated this process to ensure we gave the AI a broader contextual view of the entire report. To assess the effectiveness of the AI tool in data interpretation, we compared its output with the initial stories on the IPCC report published by national news sources including The Washington Post, New York Times, Los Angeles Times, and Wall Street Journal. Our objective was to gauge the analytical accuracy and narrative coherence

Allison Agsten leads USC Annenberg’s Center for Climate Journalism and Communication, leveraging her diverse experience from CNN and LACMA to shape the future of climate communication. She pioneers art-focused climate discussions as the first curator of the USC Wrigley Institute for Environmental Studies.



that AI tools could potentially bring to journalistic coverage. ▶ While journalistic renditions did an incredible job prioritizing urgency and accessibility — backing these sentiments with accurate data — they sometimes glossed over nuances, particularly the impacts of climate change on human health and social equity emphasized in the report, typically for the sake of simplicity and brevity. However, we found that AI could quickly surface these technical details without omitting subtleties — like how climate change disproportionately impacts different communities and other specific health impacts. ▶ For example, in their coverage of the recent IPCC report, prominent national publications like The New York Times, The Wall Street Journal, and The Washington Post were unequivocal in asserting that human activities have become the main driver of climate change, impacting everything from the ocean to the atmosphere. However, there was a divergence in depth. While some publications like The New York Times connected climate data to food and water security, others either briefly touched on this or omitted it altogether. In our parallel experiment, the AI tool demonstrated an ability to connect the dots on another level, effectively articulating how food and water security are not standalone issues; they’re components of a complicated web that has pervasive repercussions on our daily lives. It not only extended this analysis to encompass issues like urban infrastructure but also included both physical and mental health impacts, as emphasized in the full IPCC report and summary. ▶ To effectively tell the story of climate change, we need to tell the comprehensive story of climate change — with all of its possible implications for our futures. AI can help journalists cut through a dense fog of information and distill insights, enabling broader contextualization for storytellers to explore, ultimately magnifying the potential for storytelling.
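The workflow the authors describe, splitting a long report into prompt-sized pieces, running each through the same instruction, then summarizing the combined summaries for broader context, can be sketched generically. Here `ask_llm` is a placeholder for whatever chat-completion call is actually in use; the function names and character limit are illustrative, not taken from any specific API:

```python
def chunk(text, max_chars=8000):
    # Split a long report into pieces small enough for one prompt,
    # breaking on paragraph boundaries where possible.
    paragraphs = text.split("\n\n")
    pieces, current = [], ""
    for p in paragraphs:
        if len(current) + len(p) + 2 > max_chars and current:
            pieces.append(current)
            current = p
        else:
            current = f"{current}\n\n{p}" if current else p
    if current:
        pieces.append(current)
    return pieces

def summarize(text, ask_llm,
              prompt="Review and analyze the following data, then concisely give us the main points."):
    # Map step: summarize each chunk independently with the same prompt.
    partials = [ask_llm(f"{prompt}\n\n{piece}") for piece in chunk(text)]
    # Reduce step: summarize the combined partial summaries, giving the
    # model the report's broader context, as the authors describe.
    return ask_llm(f"{prompt}\n\n" + "\n\n".join(partials))
```

The same map-reduce shape works whether the source is an IPCC report or any other document too long for a single prompt; the journalist's judgment still decides what the final summary is worth.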

▶ It is critical to acknowledge that AI does not possess the ethical discernment seasoned journalists have honed over years of practice. Moreover, even though AI can sift and decipher enormous datasets, leading to helpful analysis, the model cannot inherently understand the social, political, or cultural nuances that are often critical to great journalism. Consequently, misinformation or unverified AI-generated content could sow confusion, dilute the urgency of the issue, or even reassert harmful ideologies that have inhibited action on climate change to begin with. While AI should be used cautiously, as a tool to enhance human storytelling and journalistic endeavors, we believe it holds great potential in supporting journalists navigating large data sets related to climate change and beyond. Ultimately, the true measure of its impact on climate journalism will not be counted in clicks or shares, but in the depth of the conversations it ignites and the meaningful impacts those conversations inspire. ■




THE NEXUS OF AI AND CYBERSECURITY RISK IN MARKETING AND COMMUNICATIONS
BY HEATHER RIM
MUCH LIKE THE PAIRING of cybersecurity and technology, the integration of artificial intelligence (AI) into marketing and communications has been revolutionary. As a powerful and evolving tool, AI holds unprecedented potential to advance our profession — from enhancing the ability to connect with target audiences and uncover data insights, to fast-tracking the creation of persuasive messaging and streamlining routine work. But with this transformative technology comes immense security risk, requiring vigilance and a balanced approach. ▶ On the marketing front, AI has been nothing short of a game-changer. This year has seen impressive advancements in generative AI in the form of chatbots and virtual assistants that have taken customer service to the next level, providing rapid, personalized interactions at scale. Sophisticated predictive AI systems continue to evolve, enabling granular insights into

consumer behavior through big data analysis and allowing marketers to deliver hypertargeted and customized experiences like never before. Additionally, AI has automated tedious tasks like content distribution, social media management and email marketing, allowing marketing professionals to focus their efforts on more strategic, creative work. ▶ Surveys consistently show that customers are more satisfied and loyal thanks to the personalized service and engagement enabled by AI tools. Marketing teams can devote their human talents to high-value work that machines can’t match — like strategy, ideation and innovation. The same can be said on the communications side. ▶ For communicators, AI content tools can generate first drafts of press releases, website copy and social media posts tailored to resonate with targeted audiences. In fact, AI helped me quickly research the latest headlines on this very topic. By synthesizing

Heather Rim is chief marketing officer for Optiv, where she leads all aspects of marketing and communications to accelerate brand visibility, drive demand generation and inspire stakeholder engagement. She previously held senior corporate communications, marketing and investor relations roles at AECOM, Avery Dennison, The Walt Disney Company and WellPoint. She is a member of the USC Center for PR board of advisers.



vast troves of data, AI gives communications teams powerful insights to inform strategic messaging and outreach. Through AI, communicators have a plethora of possibilities to thrive in the digital age. ▶ As we usher in this new era of AI, the need for effective and efficient security guardrails is paramount. As AI systems ingest enormous datasets on customer and public activity, they inherently create tempting targets for data breaches and cyberattacks. There are also risks of biases in algorithmic systems, privacy violations due to opaque data practices and AI-generated misinformation — a growing trend producing concerning results. Sufficient oversight and precaution must be in place to prevent the erosion of public trust.

AI HAS CREATED A TREASURE TROVE FOR THREAT ACTORS, AS THEY THRIVE ON EXPLOITING HOLES IN DATA SECURITY TO ACCESS ENORMOUS AMOUNTS OF CUSTOMER DATA. ▶ AI has created a treasure trove for threat actors, who thrive on exploiting holes in data security to access enormous amounts of customer data, which could then be used to craft hyper-targeted phishing emails and social engineering scams that credibly

impersonate brands. Attackers may also leverage AI to generate incredibly realistic deepfakes — manipulated audio, video or images — designed to deceive the public and cause reputational damage. ▶ Even more insidiously, adversaries and nation-states could “poison” certain AI models by introducing carefully crafted but biased training data. This data alters the model's learning and outputs in a way that benefits the attacker's aims. For example, a compromised content recommendation algorithm may start promoting misinformation or biased political messaging. ▶ So, how can marketing and communications professionals harness AI's immense potential while steering clear of its pitfalls? A multi-layered approach is essential. First and foremost, rigorous data security protections like encryption, access controls and routine audits are crucial to safeguarding customer data against breaches. Ethical AI practices must also be baked into processes from the start, such as bias testing and ensuring transparency in data usage and algorithmic decision-making. ▶ Just like practicing good cyber hygiene, ongoing training and education help security teams spot phishing attempts, identify manipulated content and recognize other emerging AI threats. Investing in cutting-edge AI cybersecurity tools provides additional monitoring and defense. It’s vital to cultivate a vigilant, proactive culture focused on accountability and transparency. Cyber risks and ethical concerns must be openly acknowledged and addressed through collective diligence. ▶ Balancing innovation with security will be the cornerstone of success in the AI-driven marketing and communications landscape. This is an opportunity like no other to bridge ethical innovation with the age of artificial intelligence. While we can’t let AI determine our future, we can set the precedent for the role it will play, and work to safeguard our organizations along the way. ■


ANALYSIS: STUDY SHOWS AI MORE CREATIVE THAN WHARTON STUDENTS BY ANDREA HUBBARD AI CRITICS ARGUE that computers can never be as creative as human beings. Even at its smartest, AI is a predictive instrument that rearranges the information it receives. Surely, if AI can come up with an idea for a viral new product or an engaging marketing campaign, then a human could come up with it as well. ▶ However, Karl Ulrich and Christian Terwiesch have evidence that says otherwise. The two professors at the Wharton School of the University of Pennsylvania tested this theory by giving ChatGPT-4 the same assignment they give their students: Come up with a brand-new product that costs less than $50 that college-aged people would want to buy. ▶ They compared 200 AI-generated ideas with 200 randomly selected ideas from their 2021 class (submitted before widespread access to AI). A hundred of the AI-generated ideas were also given examples

of what a good idea looks like, to boost the model’s chances of getting it right. To determine which ideas were best, the researchers surveyed an additional 400 college students and asked them to choose which product they were most interested in purchasing. ▶ The results? There was a significantly higher preference for the ideas created by AI than for those created by the Wharton students: 47% of ChatGPT-generated ideas drew purchase interest, compared with 40% of the ideas shared by Wharton MBA students. The AI ideas generated after receiving examples fared even better, with a 49% purchase-interest rate. Of the 400 ideas in the study, only five human-created ideas were among the 40 most desirable products. The most popular idea overall was a compact printer with a 70% buyer-intent rate. It was created by AI. ▶ Could AI be better at creating products people actually want to own? The most inter-

Andrea Hubbard is a second-year PR and advertising graduate student at USC Annenberg and a Center for PR graduate research assistant. She is an eager strategist who is excited to bring her passion for critical thinking and celebrity culture to the communications field. A myriad of work experiences — from canvassing for an environmental public interest group to improving off-site SEO portfolios for Fortune 500 brands to working at a beauty-focused PR agency — has trained her to become an expert in any industry her clients reside in.


esting finding is the revelation that humans aren’t as novel as we believe. When asked to identify the most original ideas, respondents rated the human-created ideas, on average, only slightly better than the computer-generated ideas. In this experiment, the one skill that we hold closest to humanness was replicated by AI. Although the most innovative of the 400 ideas did come from a student, his product did not make it into the top-40 preferred products. Originality is clearly not a factor for consumers when they’re deciding what to buy — and it was not able to give the students the upper hand in this competition.

▶ What human characteristics are truly irreplicable? For now, the answer to this question is discernment. AI is incapable of telling the difference between a golden egg and a rotten one. It cannot tell if the words it produced have racist or misogynistic undertones, or if there are cultural nuances that may lead a seemingly innocent phrase to suddenly offend half the country. It can’t train the next generation of innovators how to think critically and guide them through their mistakes. For this reason, humans are still needed to weed out the bad ideas and strengthen the good ones. It’s not time for companies to dismiss their workforce in favor of AI tools just yet. ▶ This novel technology is a great tool for novice specialists. Future brainstorming sessions may begin to look different; they’ll likely start with AI-produced proposals for junior-level professionals to weed through. However, as these professionals develop the skill of identifying viable ideas worth bringing to their managers, they likely won’t need to rely on AI as often as someone less experienced. They’ll be able to fully conceptualize an idea from start to finish without needing to consult a computer assistant. Career advancement will be reserved for those who are quick on their feet and capable of using their minds, not because their ideas are more profitable or original than what AI can suggest, but because their ideas will likely need less fine-tuning on the back end. ■

FUTURE BRAINSTORMING SESSIONS MAY BEGIN TO LOOK DIFFERENT; THEY’LL LIKELY START WITH AI-PRODUCED PROPOSALS FOR JUNIOR-LEVEL PROFESSIONALS TO WEED THROUGH. ▶ This study suggests that human ideas are not more valuable to consumers. AI is a faster, cheaper idea generator that may be on par creatively with a person. There’s little reason for companies or individuals to avoid using these systems as a starting point if the playing field is equal and we all have access to the same tool.



AI AND REPUTATION: THE PROMISE OF TRANSFORMATION; THE PERILS OF DISINFORMATION BY GRANT TOUPS AS A CTO, it will surprise exactly no one that I’m all-in on the power of artificial intelligence to transform the communications profession (and society, for that matter). In fact, I’ve become known around H+K and with our clients for repeating the phrase, ‘AI won’t take your job, but someone who knows how to use AI might.’ And the environment is only getting more complex as AI gets better, faster and cheaper. ▶ In many ways, part of what we’re seeing is a new manifestation of known threats, first seen by governments in the form of propaganda, then as an ugly feature of political discourse where partial truths and controversy have been actively leveraged to engage and activate voters of every stripe. Whether intentionally and maliciously spread as disinformation or unintentionally spread as misinformation, there’s been a dramatic and critical shift in the last few years.

There’s a new target … you and the brands you protect. ▶ Consider the case of Target, the victim of AI-generated content on TikTok suggesting the company was selling satanic clothing. The images were traced back to a Facebook post that disclosed the content had been generated by AI, but by the time that was discovered, the TikTok post had over a million views. ▶ Most of us aren’t anywhere near ready for this new world. It requires a new set of specialized skills to evaluate the accuracy, precision and truth of content, all at a scale and pace that far eclipses what we’re built to manage. The magnitude of risks associated with these challenges becomes material for individuals, businesses and society as a whole — faster than you can type a prompt into ChatGPT.

Grant Toups is first global chief technology officer at Hill+Knowlton Strategies. He is charged with working across H+K and WPP to build the firm’s ecosystem of technology-based offerings and improve the use of data science and analytics to drive client success and employee growth and experience. He is a member of the USC Center for PR board of advisers.



▶ A 2019 study from the University of Baltimore estimated the annual financial cost of misinformation on reputation management to be $9.54 billion. Billion (with a capital B). And that was in 2019. Anyone want to wager that it’s a higher number today? ▶ Last spring, in May 2023, stock markets around the world briefly plummeted as images of an explosion at the Pentagon went viral. Thanks to open-source-style investigators, the hoax was quickly identified, not from a technical analysis of the images themselves but from the lack of corroborating content from other “boots on the ground.” ▶ As tools improve, AI-generated disinformation will continue to become increasingly compelling and easier to create at such a scale that it will become exponentially harder to detect and combat. Unlike what we might historically have thought of as disinformation, which is often spread by bots or bad actors on social media or Reddit (or the like), AI-generated disinformation is much more subtle and sophisticated. It may be designed to mimic real news stories or even to create entirely new narratives to manipulate public opinion. It can even be used to create an overnight chart-topper from Drake and The Weeknd. ▶ So, what do we do about it? ▶ First, we need to find these harmful narratives and the deepfakes that support them. Luckily, technology is pushing the envelope every day to help us leverage equally sophisticated AI to find bubbling harmful narratives and predict where future ones will emerge. But the tech alone isn’t going to be enough. Much of this AI-generated content is built to evade automated detection, designed to look authentic while avoiding traditional fact-checking and verification methods. ▶ In the Target and Pentagon examples, a bit of clever exploration helped uncover the truth. But what if the content had been about a lawsuit? Or a complex public policy issue? Or cybersecurity? Or all of the above at once? Specialized expertise with an understanding

of AI is going to become even more important. As is the ability to convene and mobilize teams that bring together diverse skills to mitigate reputational risk and assist in reputation management. ▶ The AI age doesn’t mark the death of trust or authenticity. But it does require a new paradigm that combines human and artificial intelligence. It is about purposefully leveraging data and AI tools in concert with human intuition and experience. Think broadly: the challenges we face demand full and integrated bench strength from lawyers, policy experts, strategic communicators, engineers, developers, regulators … you name it.

TRUST ISN’T DEAD. IT’S JUST HARDER TO SECURE AND MORE DIFFICULT TO MAINTAIN. ▶ Ultimately, while so many voices are focused on how AI can help us do more, faster, better, cheaper (and as one of those voices, I’d challenge us all to engage in those discussions and innovations), let’s not lose sight of the other outcomes of these innovations. Trust isn’t dead. And it isn’t less important. It’s just harder to secure and more difficult to maintain. ▶ Are you ready for an influx of disinformation? Are you confident you can detect and respond? If not, you’d do well to prepare because if your company hasn’t been targeted yet, someone is probably plotting against it now. And if you have, I’d bet it’ll happen again. ■



BY HENRY JENKINS CALEB WARD, a media industry professional, gained visibility in Spring 2023 for his production of two mock trailers — one for Star Wars, one for The Lord of the Rings — which mimicked the recognizable style of Wes Anderson (The Grand Budapest Hotel, Asteroid City) using the generative visual AI program Midjourney. ▶ As a Polygon critic noted of Anderson, “Most cinephiles immediately recognize his distinctive look….He has plenty of familiar touchstones for humorists who are trying to mimic him: The use of chapter titles and other on-screen text, a love of elaborate fixed tableaus, characters with minimal emotional affect and a clipped, precise way of speaking.” Moreover, Anderson has a recurring stock company from project to project, so these AI parodies can cast, for example, Bill Murray as Gandalf in The Lord of the Rings and Obi-Wan Kenobi in Star Wars. And what could be more fun than imagining Christopher Walken as Gollum?

▶ Ward’s trailers were widely circulated as the most considered and most polished of the Wes Anderson parodies, which became the focus of critical anxiety. The same Polygon critic argued: ▶ “No artbot is going to actually replace Wes Anderson: His work comes from a distinctive voice and artistic mindset. Individual still images aren’t going to replace entire movies, and Anderson’s films are much more than just the visual imagery….but even so, it’s easy to look at the images … and see how readily AI art generators can devalue an individual artist’s style and voice….All the signature stylings Anderson has been refining for more than 25 years can be reduced to a single repetitive joke, to the point where his own actual movie stills may not stand out much in the mix.” ▶ Midjourney art, these critics argued, was “soulless” because it was produced by machines; these artists, with no mastery

Henry Jenkins is the Provost Professor of Communication, Journalism, Cinematic Arts and Education at USC, and primary investigator, Civic Paths Research Group. Jenkins is the author or editor of 20 books on various aspects of media and popular culture. He writes extensively about cinema, television, comics, computer games, online communities, popular theater, and other forms of popular media, primarily in the American context.



over the medium, might replace and devalue “accomplished” producers. ▶ Ward said he wanted to test the capacities of these new tools, acquire new skills, and demonstrate his professional capabilities. For many fan artists, AI’s ability to mimic the style of particular artists, to effectively construct images using their favorite performers, and to apply them to beloved fan objects has enormous appeal. Such images display fan literacy — the capacity to identify codes from popular culture and apply them to new contexts. So we might compare the ways Star Wars might have been visualized by German expressionist filmmaker Fritz Lang or Japanese samurai film director Akira Kurosawa, or what a Chinese version of Game of Thrones might have looked like with Michelle Yeoh as Cersei. Such fan art depends on the machine’s recognition and application of each artist’s techniques to different contexts. So, we come to see the fascist undercurrents of Star Wars in the Lang interpretation or can retrace what George Lucas borrowed from Kurosawa in the first place. ▶ As a long-time student of grassroots creativity, I see fandom as a group of early adopters and lead users of generative AI. The logic of fandom has historically involved appropriating and remixing resources drawn from mass media for other critical purposes. But fans have also been important early advocates for ethical practices surrounding machine learning. Some see Midjourney as a tool that democratizes the capacity for visualization, allowing many who have not received art school training to express themselves in visual terms for the first time, while others see the repetition of Midjourney’s recycled images as a threat to individuality of expression. ▶ Like other artists, many fan artists feel threatened by the ways that the skills they have acquired through hard work and practice may be less valuable in a world where others can use AI to replicate many similar effects.

They argue for important distinctions between human and “machine” art, suggesting that AI can only replicate but not really transform the works it scrapes. That argument, though, ignores the active role humans play in designing prompts and in curating and refining those images. The effective use of Midjourney requires the development of skills and knowledge just as any other artform does. Not all artists can make the sometimes balky software achieve what they envision. ▶ Some of the fascination right now has to do with the comical and sometimes infuriating ways AI fails to understand people’s prompts, such as when the term “Silence of the Lambs” produces images of actual lambs. These breakdowns in communication can also provide insights into how AI understands the human world. For example, at one point my son was unable to make Midjourney produce a picture of a female doctor and a male nurse no matter how clearly he explained what he wanted. This blind spot occurred because the AI looked at a vast number of images online and observed that typically men wore the white coats and women wore scrubs. Asked to depict Los Angeles without cars, Midjourney spat out images of Los Angeles without humans. Already, a recent update has made it possible for users to highlight the section of an image that is wrong and explain how it needs to change, making as many small modifications as they want until they express their vision accurately. But there is something valuable, if sometimes painful, to learn from the computer's blunt honesty about its initial impressions. ▶ Midjourney also depends on a community of digital artists sharing work online with one another, providing feedback on how to produce more effective prompts and how to exploit the full capacities of the program. It is no accident that the researchers chose to embed Midjourney in Discord, already a social network of gamers and fans. 
This platform ensures that the overwhelming majority of art is created in public forums


FOR MANY FAN ARTISTS, AI’S ABILITY TO MIMIC THE STYLE OF PARTICULAR ARTISTS, TO EFFECTIVELY CONSTRUCT IMAGES USING THEIR FAVORITE PERFORMERS, AND TO APPLY THEM TO BELOVED FAN OBJECTS HAS ENORMOUS APPEAL. where novices can observe precisely what experienced users are doing to achieve the best results. ▶ Fan artists have also expressed concern about how artworks — including fan works — are scraped from the web as a basis for machine learning without authorization, recognition or payment. They fear that work produced as part of the fan gift economy may well be used for profit by major corporations even if the fans themselves have chosen not to make money from their work. These fan artists often use an inflamed rhetoric of “stolen art.” We need to be clear on the parallels and differences between how machine learning takes “inspiration” from existing artworks and the ways any genre artist learns, by following — and deviating — from a formula acquired by studying pre-existing works. ▶ Yet, guidelines need to be established about how these works get commercialized, how AI companies are marketing using images

produced without permission that are the style of living artists, and whether there could be mechanisms — similar to the way radio stations pay recording artists — that might provide compensation for the appropriation and remix of artist works. Such ethical standards for generative AI would provide a greater balance between artist rights and fair use. Corporations are not going to walk away from the profits to be made from AI, so the question is whether fans walk away, leaving these developments entirely in the hands of commercial interests … or whether they collectively fight for a say in how AI might be used for social benefit. ▶ As we work through the status of artists in a world being reshaped by AI, this fan community represents an important site of debate, as well as a set of stakeholders that may be too easily ignored amidst the struggles between tech and entertainment companies. ■




BY STEVE CLAYTON NOVEMBER 30, 2022, will be remembered as the date of “first contact” for many of us in the field of communication. That was the day ChatGPT launched, and as many of us played with the service, I suspect we had similar reactions. What have we encountered here? A species of sorts that can communicate and produce words with alarming speed and alacrity. Isn’t that our profession, we asked each other? In those first few days, we both marveled at this new technology and quietly questioned our own value and worth. ▶ I certainly did — and when I checked the postbox outside my house in the days that followed and it contained an invitation to become a mail carrier for the US Postal Service, I thought perhaps this was a sign from above that my career in communications was taking an unplanned turn. ▶ Fast-forward nine months, and the future is far less bleak and, personally,

I think far more exciting. My role leading communications strategy for the Microsoft communications team has us exploring the many ways generative AI can improve our profession rather than replace it, and make our jobs more enjoyable rather than nonexistent. I am genuinely excited about the prospect of an AI-infused communications era. Let me explain why. ▶ Over the last few months, we have embarked on looking at the processes within our discipline of communications and “atomizing” them — breaking them down into subprocesses and then considering where we can apply AI, and how much. Let’s take the journey to an “earned media story” that lies at the heart of our discipline. At Microsoft, we broke this down into the twenty-step process shown below. Although our process may have a few more or a few fewer steps than others’, this likely looks similar for any communications team.

Steve Clayton has more than 25 years of experience at Microsoft across technical, strategy and storytelling roles. He leads the newly formed Microsoft Communications Strategy Team, whose primary focus is to reinvent how Microsoft operates its communications business using AI.








STEP 1 A story idea is formed

STEP 2 Ideate on the narrative

STEP 3 Determine the audience

STEP 4 Brainstorm on media targets

STEP 5 Media targets are selected and story is pitched

STEP 6 Confirm spokespeople

STEP 7 Media train spokespeople, if needed

STEP 8 Interview dates set and spokespeople are scheduled

STEP 9 Prepare briefing documents with FAQ and recent coverage

STEP 10 Interviews conducted, recorded and transcribed

STEP 11 Fact checking and story timeline is confirmed

STEP 12 News is entered into media planning tools

STEP 13 PR advisory created

STEP 14 Precap email sent to spokespeople and stakeholders

STEP 15 PR advisory issued internally

STEP 16 Story publishes — any corrections identified and fixed

STEP 17 Social media amplification and internal amplification

STEP 18 First look reporting sent to spokespeople and stakeholders

STEP 19 End of day report sent to spokespeople and stakeholders

STEP 20 End of moment report sent to spokespeople and stakeholders



▶ We then considered where we could apply AI throughout this process, and to what extent. For example, at Step 1, the formation of a story idea, we think a light sprinkling of AI can help with ideation. Meanwhile, at Step 4, we think there is a lot of room for AI assistance as we consider media targets. There will always be an artform to this process that considers relationships and reach, but there is also room for some science in the selection — especially as the media landscape continues to fragment. ▶ As we considered this atomization, three things stood out that we think are worth sharing: 1. AI can be applied in many places — though it requires careful consideration of just how much. Our profession is an artform that can benefit from some science with AI — but we never want to replace art with science. AI should act as our Copilot, not an autopilot. 2. Perhaps our biggest realization is the potential for automation of processes to

connect disparate steps into a cohesive system that replaces repetitive drudgery with smooth flow. Steps 12, 13 and 15 are where we’re applying this at Microsoft with our own Power Automate tools. 3. There are numerous steps where AI will never replace humans. We don’t plan to use AI to train spokespeople or to manage our engagement and relationships with the media. ▶ So, what to make of this “first contact”? I am confident that mail carriers will continue to bring emotion to our doorsteps — delivering handwritten birthday cards and letters. Nothing can replace human connection and stories that move people — it’s at the core of our profession as communicators. And just as we adapted to every technology that came before, we now have a new tool that creates new opportunities. ▶ It’s time to experiment and to operate on the frontiers. That is what we’re doing at Microsoft, and we’re committed to sharing our journey and our learnings as we go. ■



THE SYMBIOSIS OF AI AND COMMUNICATIONS: AUGMENTING CREATIVITY AND STRATEGY FOR THE FUTURE BY MATTHEW HARRINGTON & GARY GROSSMAN ARTIFICIAL INTELLIGENCE (AI) has swiftly moved from the realm of science fiction to become an invaluable tool in many industries, including public relations. As we enter an era of “symbiotic intelligence,” the key is to view AI not as a replacement but as a strategic partner. Here's how this relationship is shaping what lies ahead for the communications field broadly: Augmenting Human Skills for High-Value Work While the fear of job loss due to automation exists across various sectors, it is essential to view this transitional phase as an opportunity for role evolution. Those who adapt will find new avenues to develop expertise. While

generative AI tools like ChatGPT and Claude can produce content at scale, communications remains rooted in human skills such as strategic planning, ethical reasoning and relationship building. ▶ Generative AI is already incredibly good at certain tasks and can handle portions of drafting press releases, performing audience segmentation, and automating key message creation. By doing so, this frees communications professionals to focus on higher-value work — building connections, providing strategic counsel, and understanding audience motivations. This marks the beginning of a collaborative future, one where humans and machines work in tandem to achieve goals.

Matthew Harrington is global president & chief operating officer of Edelman, and is a specialist in corporate positioning and reputation management. His expertise includes crisis communications, merger and acquisition activity and IPOs. Harrington is a member of the USC Center for PR board of advisers.

Gary Grossman is senior vice president at Edelman, and is the global lead of the Edelman AI Center of Excellence.



Data-Driven Insights and Crisis Management AI algorithms are increasingly sophisticated at analyzing large datasets, offering agencies the power of data-driven decision-making. Moreover, AI can proactively monitor media and social platforms, alerting teams to emerging crises and shifts in public sentiment. But while AI offers invaluable data and early warnings, strategic crisis management still requires people: AI lacks the emotional intelligence, ethical reasoning and strategic adaptability essential for navigating complex situations, making the human touch irreplaceable. Unleashing Creativity Through Collaboration One of the most exciting implications of AI is its role in enhancing creativity. AI can churn out ideas and preliminary drafts at an unprecedented scale, allowing humans to refine these into



inspired campaigns. Agencies embracing this collaborative, human-plus-AI creative process will set new benchmarks in the industry. The Orchestrator: A New Role in PR As AI tools continue to mature, they will be more seamlessly integrated. The role of many communicators will evolve into that of an “orchestrator” — where professionals will remain critical for understanding context, making ethical choices, and building stakeholder relationships that a machine cannot fully grasp. As the orchestrator, they will guide various AI tools — be it text generators, image creators, or video tools — to integrate outputs for the highest quality work product. Each tool serves as a member of an orchestra, and it's the human orchestrator who ensures that the symphony is both harmonious and impactful. Ethical and Future Considerations With the advent of AI, a new layer of ethical considerations has emerged. It is critical for individuals, agencies and communications teams to maintain a human in the loop to ensure quality and consider any ethical implications of AI-generated content. Effective PR is not just about reaching an audience; it’s about reaching them responsibly and being mindful of considerations about authenticity and confidentiality, with eyes wide open to the potential risks and benefits. In this we agree with Microsoft CEO Satya Nadella when he said that AI is the “defining technology of our times” and stressed the importance for his teams to follow a set of human values and principles that guide the choices they make with AI. Stay Ahead of the Curve For forward-leaning organizations and individuals, the time to experiment with existing AI tools is now. Learning to wield these tools with strategic intent will be crucial for developing AI literacy and for maximizing the vast possibilities that AI offers. Industry analyst firm Gartner said: “Generative AI will change the world faster than any innovation in history.


THIS MARKS THE BEGINNING OF A COLLABORATIVE FUTURE, ONE WHERE HUMANS AND MACHINES WORK IN TANDEM TO ACHIEVE GOALS. The ramifications in the near-, mid- and long term will be startling, fundamentally altering the way businesses operate.” ▶ Accordingly, agencies need to prioritize ongoing AI education and reskilling programs for their staff to stay competitive. At Edelman, for example, we have developed a set of operating principles for AI alongside four AI instructional modules, which are required training for our 6,000 employees. This is only a start. Staying abreast of the latest advancements and regularly updating AI toolkits will not only optimize current workflows but also prepare agencies for emerging technologies that could redefine the landscape.

What Comes Next

As technology advances, our industry will need to keep pace. Ravi Kumar, CEO of Cognizant, commented that “with generative AI, we are humanizing the technology, and handing over agency to the end user.” The future of communications lies in a balanced collaboration between human skills and AI capabilities. Agencies and professionals who recognize and adapt to this symbiotic relationship will find themselves well-positioned to evolve their craft and deliver groundbreaking work in an increasingly complex landscape. ■



BEYOND THE HYPE: UNCOVERING POSSIBILITIES WITH ENTERPRISE AI

BY JONATHAN ADASHEK

IN THE 90S, a company called Netscape launched a web browser that acted as the public’s first portal to the internet. Like Netscape did for the internet, the generative AI hype has made AI “real” for consumers, at least at a base level. They understand AI is behind the voice assistant playing their favorite song or a chatbot giving restaurant recommendations. These applications are relatively easy to grasp: the user asks a question, and the device produces a response using vast amounts of data scraped from across the web. ▶ As a marketer and communicator, I am thankful for the increased awareness of AI. But some of the most important AI use cases are unseen by consumers, and those require special care to execute and explain. ▶ While consumer AI can help plan your vacation, modify your headshot or write you a poem, enterprise AI is the technology that

powers your bank’s chatbot, helps avoid downtime in food supply chains, and keeps customer data secure. It harnesses the power of AI to transform entire businesses, organizations and sometimes even governments. Because the stakes are so high, enterprise AI requires expertise, close partnership, trusted data and the highest levels of data security to protect privacy. ▶ After years of intense focus on developing AI built for businesses, IBM introduced watsonx in May — an enterprise-ready AI and data platform that enables enterprises to scale and accelerate their operations. As I write this, watsonx is automating repetitive tasks, advancing customer service and modernizing apps. ▶ Like I said, some of the best technology is never seen by the consumer — and that is okay. Not everyone needs to know why their banking app works, just that it does. We do not need everyone to understand on a technical level what we do, but we

Jonathan Adashek is chief communications officer for IBM, responsible for overseeing its Global Communications and Corporate Citizenship organization, including internal and external communications as well as content creation, strategic events, strategic positioning, social media and citizenship activities. He brings over 25 years of communications, marketing and corporate affairs experience across domestic and global teams serving different sectors, including technology, automotive and manufacturing, government, retail, finance, energy, B2B and B2C. He is a member of the USC Center for PR board of advisers.


need the right people to know. So, how does IBM ensure the right audiences are educated on the technology that makes the world work better? ▶ We move past storytelling and start showing by tapping into our audiences’ personal passions, like tennis, golf, fantasy football and more. We gather thousands of data points from iconic events like the US Open and the Masters and use AI to make predictions about player performance. The purpose is to get our audience thinking about the possibilities for their industry, for example, “What if we used watsonx to connect our bank’s data?” or “What if we used AI to make our buildings more efficient?” Ultimately, we want our audiences to understand the potential for AI to impact every industry. ▶ Sports is one of our ways in, but the broader potential for AI is much more inspiring. Coming from a career in politics, I am especially excited about the possibilities to transform government operations. ▶ AI is one of the most important changes for the U.S. government in the past decade. With technologies like watsonx, IBM has helped many federal agencies become more efficient, more productive, and faster at making decisions. We help veterans get their benefits by processing claims faster for the Department of Veterans Affairs (VA), translate NASA’s satellite data to help gather insights about climate change, and help the Navy’s Fleet Forces Command plan and balance food supplies, making them more resilient to supply chain interruptions. ▶ In the days of Netscape, we could never have imagined the speed and scale at which AI would change the world. Advancements like those mentioned above have the potential to make massive, positive change, and as we live through this defining moment for AI, I encourage everyone to consider the possibilities. I am looking forward to seeing what’s next for watsonx as IBM continues to tackle the world’s toughest challenges. ■




AI IS DISRUPTING HEALTHCARE... AND THAT'S A GOOD THING

BY TOROD NEPTUNE

I SAW THIS HEADLINE the other day: “A boy saw 17 doctors over 3 years for chronic pain. ChatGPT found the diagnosis” (Source: Today.com). After a frustrating three-year journey trying unsuccessfully to diagnose a series of mysterious symptoms in her son, the boy’s mother turned to AI. ▶ “I went line by line of everything that was in his (MRI notes) and plugged it into ChatGPT,” she says. The story goes on to say, “She eventually found tethered cord syndrome and joined a Facebook group for families of children with it. Their stories sounded like Alex's. She scheduled an appointment with a new neurosurgeon and told her she suspected Alex had tethered cord syndrome,” which the doctor confirmed. ▶ This example points to the potential of AI in the digital healthcare era. To be clear, AI can’t replace the expertise, training, or clinical judgment of physicians. The biggest advancements in modern healthcare happen

when new technologies are placed in the hands of skilled physicians to improve the speed or accuracy of decision making, and AI is no exception. ▶ One example from Medtronic is the GI Genius™ Intelligent Endoscopy Module, our computer-aided polyp detection system powered by AI. Colorectal lesions, including polyps and adenomas, are precursors to colorectal cancer — the #2 deadliest cancer worldwide. Almost 1 in 20 adults will be diagnosed with colon cancer in their lifetime; however, with early detection, 90% will beat it (Source: data on file). ▶ GI Genius works with a clinic’s endoscopy/colonoscopy system. Harnessing deep learning algorithms and real-time data, it assists physicians in detecting polyps during the procedure through enhanced visualization. It has been shown to achieve a 99.7% sensitivity rate with less than 1% false positives, and it performs real-time analysis 82% faster

Torod Neptune is senior vice president and chief communications officer of Medtronic, the world’s largest healthcare technology company. He is a member of the company’s Executive Committee, directs the company’s corporate marketing, communications, and ‘business in society’ initiatives, and oversees the Medtronic Foundation and Medtronic’s social business enterprise, Medtronic Labs. Torod is a member of the USC Center for PR board of advisers.



than the endoscopist (Source: data on file). The first U.S. trial showed a 50% reduction in missed colorectal polyps when GI Genius was used, as compared to standard colonoscopy methods (Source: data on file).

THE BIGGEST ADVANCEMENTS IN MODERN HEALTHCARE HAPPEN WHEN NEW TECHNOLOGIES ARE PLACED IN THE HANDS OF SKILLED PHYSICIANS.

▶ The potential for AI in healthcare goes beyond diagnostics. Our company is advancing research in areas like personalization of care with implanted technologies that can tailor therapy delivery in real time for complex conditions like diabetes, chronic pain, and Parkinson’s disease. We’re also exploring AI to improve telemedicine and remote monitoring as well as predictive analytics to reduce hospital readmissions. ▶ Our mission to improve lives obligates us to harness AI to advance these areas, which includes training physicians to support safe and responsible adoption. I said AI won’t replace physicians, but some believe physicians who embrace AI will eventually replace those who don’t. That said, if patients aren’t ready to accept AI in their care, these advancements won’t reach their potential. ▶ Consumer perceptions tend to be a blend of optimism and caution. A larger share of U.S. adults think the use of AI would reduce, rather than increase, the number of medical mistakes (40% vs. 27%). An even larger share says the problem of racial and ethnic bias and unfair treatment in healthcare would get better (51%) than worse (15%) if AI were used more. That said, six in ten say they would feel uncomfortable if their doctor relied on AI to do things like diagnose disease and recommend treatments (Source: Pew Research Center, 2023). ▶ As Chief Communications Officer, I’ve described my role as helping companies build trust, which is especially important in these early stages of major, systemic innovations like AI. ▶ To accelerate adoption, my first responsibility is to make sure our organization has current skills (via hiring, training, data/analytics capabilities, agency relationships) and modern brand experiences (developing/delivering targeted content to make it easy for people to find and understand what they’re looking for). ▶ But it also requires transparency and balance. Yes, we need to make the general public aware of the benefits of these breakthroughs while clearing up misinformation and misconceptions. But in our enthusiasm, it’s important to avoid the risk of overselling — balancing exciting claims with an honest assessment of potential risks and treatment alternatives. ▶ This is an exciting time to be in healthcare technology, and AI has brought an equal dose of opportunity and challenge. While I can’t sit here today and predict where this will go, I know we have a responsibility to get it right for Alex’s mom and the millions of patients and caregivers like her seeking more control and better outcomes. ■



AI’S POWER AND POTENTIAL PITFALLS IN REVOLUTIONIZING HEALTHCARE COMMUNICATIONS AND MARKETING

BY JENNIFER GOTTLIEB

UNIMAGINABLE. Groundbreaking. Fast-moving. This is how experts describe the use of artificial intelligence (AI) and machine learning (ML) across industries. These technologies are poised to dramatically change the way we live and do business. ▶ Specific to the healthcare industry, AI and ML have become a driving force in improving how healthcare companies connect their life-changing treatments and interventions with patients and healthcare professionals. We will continue to see the impact of this innovation across the entire healthcare ecosystem, including drug development, clinical trials, health literacy, and commercialization. And for healthcare communications professionals — who translate complex information into understandable, relevant content for patients — it will be transformative. ▶ As a leader in data and AI-driven communications and marketing, our company,

Real Chemistry, is committed to realizing the potential of data connectivity through AI and ML in healthcare. We are applying it to the diagnosis, management and treatment of many conditions, particularly rare diseases, to improve patient outcomes. By analyzing massive amounts of health data, we are uncovering new information daily that can help patients and physicians identify a disease the patient might not even know they have, despite years of troubling symptoms unresponsive to treatment. ▶ That said, AI’s adoption must unfold carefully in the highly regulated healthcare industry to ensure we protect patients’ privacy and provide them with accurate information. We must move thoughtfully in partnership with legal, regulatory, and medical experts to make AI a help, not a hindrance. ▶ Here are four major ways AI is markedly improving healthcare communications and marketing:

Jennifer Gottlieb is the global president and chief client officer of Real Chemistry, a leading global health innovation company that uses real-world evidence, proprietary technologies, and analytical insights to address the demands of our ever-evolving healthcare ecosystem. She is a Trojan Parent and a member of the USC Center for PR board of advisers.



AI’S ADOPTION MUST UNFOLD CAREFULLY IN THE HIGHLY REGULATED HEALTHCARE INDUSTRY.

1. Speaking to Patients Like You Know Them: Every patient is unique. Words that matter to a 35-year-old single Latina mother living in a large Southern city will likely not resonate with a 50-year-old married farm worker in the Midwest. To inspire both of these patients to care about their health, we must reach them where they already seek information, with words and messages that matter to them. By analyzing billions of data points — all de-identified so individual patient information is never surfaced — we can create highly personalized content and target it to different patient segments. We can achieve better health equity by tailoring content for diverse patient populations. That said, we must work hard to remove biases from the data we collect to ensure the data we use to make decisions creates an accurate picture and a more equitable healthcare experience for all.

2. Getting Immediate Answers at Crucial Times: Imagine bringing AI-powered virtual agents into the highly complex and regulated healthcare setting, where a patient or health professional needs real-time answers to complicated medical

questions. Advanced automation, AI, and Natural Language Processing (NLP) can create human-like conversational AI agents that address healthcare professionals’ questions any time they need it, with medically accurate information that is compliant with regulatory and legal guidelines. We believe conversational AI will be industry-changing, especially as companies continue to reduce the number of physical sales representatives calling on physicians’ offices post-COVID.

3. Creating the Best Content, Quickly: Generative AI platforms like ChatGPT and DALL-E enhance our ability to work smarter by allowing us to focus more on creative and storytelling skills and less on simplistic tasks. In healthcare, we are using tools purpose-built for the complexity of our industry. As much as everyone is excited to experiment with different platforms, communicators and marketers need to move forward with proper legal and regulatory counsel to navigate how these new technologies and their outputs fit into the current regulatory frameworks set by groups like the Food & Drug Administration.

4. Democratizing AI for Everyone: AI platforms have become increasingly user-friendly, which means everyone will be using them in the very near future. So what does that mean for the future of our profession? While there is fear that AI will make jobs obsolete, I believe AI will become the best time-saving tool we have ever had. It can reduce repetitive tasks so you can focus more on interesting and complex tasks like analysis, strategy, and creativity. My advice is to lean into AI and learn how it can help you become more efficient.

AI is here to stay. We will surely look back in two, five or 10 years and not remember what it was like not to have AI and ML at our fingertips, just as we can’t remember what it was like before we had calculators, the computer, the internet, the cell phone and social media. ■



BY JENNIFER ACREE

THE ESPORTS INDUSTRY is undergoing a seismic shift, and artificial intelligence stands to play a pivotal role in its transformation. Companies are already exploring practical applications of AI and how it can not only streamline processes but also improve player experiences and performance. As the industry continues to evolve, this emerging technology will be a critical force in shaping the future of esports. ▶ In just two decades, esports has grown from a niche pastime to a $1.39 billion global market, influencing every corner of pop culture from movies to music to sports. Esports now commands an audience of over half a billion fans — 60% of whom are between the ages of 18 and 34. ▶ AI has demonstrated an impressive understanding of video games that — in some cases — even rivals the capabilities of the world’s best players. GT Sophy, an AI bot for Gran Turismo, used machine learning

to develop skills that surpassed 95% of players in just 1-2 days, and achieved “superhuman” status after 10 days. While esports AI models are often trained on simplified versions of games, their rich understanding of these titles is based on tens of thousands of hours of gameplay, unlocking insights humans have yet to see. Pro players have studied the behavior of AI bots like GT Sophy to learn which lines they need to take on the track and which strategies to employ to continually improve their game. ▶ Competitive gamers are using AI to study their opponents. For instance, the professional organization Evil Geniuses partnered with Hewlett Packard Enterprise to support its coaching staff with AI tools trained on archival footage and data of League of Legends gameplay. The goal was to arm coaches with predictive analytics to help navigate the pre-game character selection process, analyze player voice communications, and more. The

Jennifer Acree is founder and CEO of JSA Strategies, where she works with select Fortune 500 and start-up clients to develop communications programs tailored based on clear business objectives in order to get results. Jennifer is a member of the USC Center for PR board of advisers.



results materialized during the team’s League Championship Series 2023 Spring Playoffs, when the AI correctly predicted the majority of opponents’ pick priorities, meaning EG had been anticipating and preparing for the correct strategies. ▶ This technology extends to amateur players looking to improve their gameplay. Esports data and analytics firm Omnic has an AI coaching platform, Omnic Forge, that has already seen AI-generated insights level up players’ skills. Fortnite players using the platform reduced damage by an average of 32% and improved healing efficiency by 104%. Improving these core mechanics significantly impacts player performance, and may be the difference between winning and losing when it matters most.
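The Omnic Forge figures above are aggregate before-and-after comparisons. As a purely hypothetical illustration (this is not Omnic's actual methodology, and the per-match averages below are invented numbers), the percent change in a player's stats could be computed like this:

```python
def percent_change(before: float, after: float) -> float:
    """Signed percent change relative to a baseline value."""
    return (after - before) / before * 100

# Invented per-match averages for one player, before and after AI coaching.
damage_change = percent_change(before=1200, after=816)
healing_change = percent_change(before=0.25, after=0.51)

print(round(damage_change))   # → -32 (a 32% reduction)
print(round(healing_change))  # → 104 (a 104% improvement)
```

A negative value indicates a reduction from the baseline; a platform would average such per-player changes across its whole user base to report cohort-level improvements.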


▶ AI’s capabilities extend far beyond improving competitors’ skills; it’s also helping to combat toxicity. Over 80% of multiplayer gamers have experienced harassment while playing video games, and developers have historically lacked the tools to moderate this behavior. To combat this, competitive gaming platform FACEIT developed Minerva in 2019, an AI solution that uses deep learning to track toxicity in real time and police bad actors. Minerva has scaled to analyze 3 million matches a month — comprising 110 million messages and 200,000 hours of voice communications. The success of these tools has pushed AAA developers like Riot Games and Activision to invest more heavily in AI to improve safe player experiences.

What's Next for AI in Esports

While the esports industry has made tremendous leaps in the adoption of AI, the technology still has a way to go to improve its value for top players. For example, future AI models may be trained on specific competitors’ gameplay, giving teams bots to practice against that could more accurately mimic their opponents. The technology also has untapped potential in making esports content more digestible to fans, with opportunities to train AI broadcasters to more accurately capture in-game action for viewers tuning in at home. ▶ While still in the early days of exploration, AI is proving to be an essential tool to usher in a new wave of professionalism, performance, and community. Applications across all levels of competition offer players and their teams a new level of support that not only continues to improve the quality of their gameplay, but brings a level of technology-enabled professionalism that rivals established industries, like traditional sports. ■




BY MARY CZERWINSKI

ARTIFICIAL INTELLIGENCE (AI) is a powerful technology that can augment human capabilities and help automate tedious and repetitive tasks, allowing individuals to focus their time and energy on the more creative and complex aspects of their work. For communications professionals, we’ve seen how AI can also help workers overcome writer’s block, generate content, and spark new ideas. These capabilities present opportunities, especially for novice workers lacking experience or confidence. For example, generative AI tools such as the GPT-4 large language model (LLM) can generate text, images, music, and code using natural language prompts, providing individuals with inspiration and guidance without requiring deep technical skills. ▶ AI can also help individuals learn new skills and improve performance by providing personalized feedback, recommendations, and coaching. For instance, in our work on Focus Assist for Windows and our research

examining the use of Focus Time in Outlook, we’ve been able to demonstrate that people appreciate an agent that blocks off time so they can get their highest-priority tasks done. In the future, chatbots can leverage typical user working patterns to recommend good times for focus as well as the best times to take breaks, help with triaging email, or catch up on missed meetings, all resulting in less stress and healthier work-life balance for the user. In addition, system builders may be able to personalize and adapt LLMs to different users, domains, and situations while respecting their preferences and goals. ▶ AI is not only a tool for enhancing productivity but also a medium for expressing creativity. AI can generate novel and diverse outputs that challenge human expectations and conventions. AI can also enable new forms of collaboration and co-creation between humans and machines. However, AI can also disrupt the practice of design

Mary Czerwinski is a partner research manager of Microsoft Research's Human Understanding and Empathy group, where she focuses primarily on using technology to help with workplace productivity and wellbeing. Mary holds a PhD in Cognitive Psychology from Indiana University in Bloomington, is a Fellow of the ACM and a member of the National Academy of Engineering.



and change our relationship with creative materials, tools, and mediums.

AI CAN ALSO HELP INDIVIDUALS LEARN NEW SKILLS AND IMPROVE PERFORMANCE BY PROVIDING PERSONALIZED FEEDBACK, RECOMMENDATIONS, AND COACHING.

As AI becomes more broadly understood and adopted, communications professionals will set new norms and expectations for when and how to use AI, whether and how to disclose its use, and how they perceive the value of work generated by AI. ▶ In these scenarios, AI can raise ethical, social, and cultural issues, such as those around ownership, authorship, responsibility, and trust. AI can also affect the role and identity of designers, as they may need to adapt to new workflows, skills, and mindsets. AI can also influence the perception and appreciation of creative works, as they may be seen as less authentic, original, or valuable. ▶ These are all important questions that the industry can explore together, as this emerging era of generative AI shifts knowledge work from material production to critical integration. In practice, this means that communicators will need to use and develop new skills. They may spend less time generating and collecting content, and more time analyzing, synthesizing, and evaluating content. This will require more expertise and critical judgment, as professionals will need to assess the quality, relevance and validity of the content

produced by AI, and integrate it with their own knowledge and insights. Communicators may also need to bring more creativity and innovation, as they will need to generate new and original content that AI cannot produce and leverage AI's potential to enhance their work. New skills around coordinating multiple AI agents to help with various aspects of creative work will also emerge, with human users working as conductors to orchestrate the work done by the LLMs. And, of course, soft skills will become more valuable during this kind of orchestration to ensure that the more mundane work performed by the AI is packaged in an empathic, trustworthy, and privacy-preserving manner. ▶ To leverage the power of AI and even multiple AI systems, communicators will need to practice this new craft of partnering with AI to create. Skilling up on effective prompt engineering practices, working across multiple agents, acquiring quick summaries of areas of expertise, and packaging AI-delivered solutions in an effective manner will all take iteration over time. However, the field is moving rapidly and the user experience around these steps is improving very quickly, ensuring all of us can harness the promise of AI and human collaboration for increased focus, creativity, and productivity. ■


SYNTHETIC AI VIDEO AND THE MAGIC OF DIGITAL TWINS FOR BUSINESS STORYTELLING

BY STEPHEN LIND

PICTURE THIS: Your business's communication team is buzzing with the idea of creating riveting video deliverables featuring your CFO and CEO to bolster your brand’s narrative. There's just one snag — the CFO is globetrotting on business, and your CEO would rather juggle flaming torches than face a camera. Enter the magic realm of synthetic video, where your leaders’ digital twins take center stage, effortlessly performing scripts fed to them, without demanding a retake or a coffee break. For a modest monthly fee, platforms like Synthesia roll out the red carpet, inviting every business to the enthralling theatre of AI video. For communication professionals, this isn’t just a new tool; it’s a showstopping magic act. The problem is that the trick could go horribly wrong. And it requires a lot more study before we embrace it wholesale. ▶ AI technology has now brought a capability that was once the domain of niche

Hollywood effects studios to the average corporate communications team. Programs like Synthesia allow users to produce slick, quality videos with a lifelike, human, albeit synthetic, avatar via a simple text-to-speech and text-to-video interface. Just copy and paste your text in, choose your human-modeled avatar, and the program will create a dynamic vocal performance while also animating the mouth of the model so that it looks like they are speaking the words you typed. ▶ This is something I am studying in my current business communication research. The potential for businesses is enormous. Barriers to professional-quality video products, like speaking skill, camera and lighting setup, and editing know-how, evaporate with the click of a subscription. Dozens of diverse avatars, with dozens of voices replete with various global accents, are then available for your use in whatever language you choose.

Stephen Lind is an associate professor of clinical business communication at USC's Marshall School of Business. His teaching encompasses strategic messaging, technology in communication, consulting, and refining speaking and writing skills for business contexts. He received his PhD, with distinction, from Clemson's transdisciplinary Rhetorics, Communication, and Information Design program.



▶ Even further, for a premium upgrade, you can submit your own video and have your own digital twin. Or, in the case of the communication team, perhaps a whole C-suite set of digital twins. ▶ Obviously, this technology raises many questions. Can AI video prove as effective as its traditional human counterpart? And what happens to brand identity and brand reputation when a viewer knows that it’s an AI spokesperson and not a real human speaking to them? Are the vital elements of authenticity and trust diminished by employing AI to be the synthetic face of the company instead of the actual organic face of your leader? These are questions I am studying through a series of experiments right now, and the early results are fascinating. ▶ It is only responsible to also acknowledge the potential this technology

offers to scam artists. The ease of crafting realistic videos can be a double-edged sword, serving both bona fide business agendas and nefarious schemes. The nightmare scenario? A fraudulent video, cloaked with your company’s logo, going viral, only to be debunked later, leaving a stain on your brand’s reputation. The deepfake threat isn’t a distant thunderstorm; it’s knocking at our corporate doors, urging the necessity for robust verification frameworks in digital communications. ▶ The research I am conducting now, along with the many case studies that will unfold as more and more businesses adopt this technology, will produce fascinating content for our new chapter in AI-augmented communications. The prudent professionals will proceed with a healthy balance of both boldness and care. ■




BY VIJAY CHATTHA & DANIELA RODRIGUEZ

THERE ARE THREE PATHS for AI’s future, each symbolized by a Hollywood film: The first is Superman, a superhuman who leverages AI to become a superior being. The second is Robocop, a world where AI takes over from humans. The third is Iron Man, a world where humans and machines work together in harmony to build a better future. ▶ We believe the future will look something like Superman and Iron Man, but regardless of which future you see, AI will define our lives, and we, as communicators, have an opportunity to define AI. ▶ Generative AI is the new kid on the block, but building AI brands has been a labor of the past 10 years at VSC. Our agency has leveraged communication to drive fundraising into impactful AI startups and create compelling narratives about why and how they do what they do. ▶ While today’s AI market is rapidly growing, with thousands of new startups and businesses created every day, a consistent brand presence is key for establishing category creation, market leadership, and a persuasive story that connects with audiences over time. ▶ PR and communication professionals play an essential role in the way companies present their brand stories to their stakeholders, especially when the popular narratives around artificial intelligence lack humanity, empathy, and accuracy. The following recommendations address some of the key challenges in building AI brands: navigating hype cycles, job-displacement fears, and diving deep into the real scope of this technology.

Vijay Chattha is founder and CEO of VSC, a strategic content and communications agency with a 20+ year track record of establishing dominant technology brands across AI, automation, fintech, enterprise software, health tech, mobile, and venture capital.

Daniela Rodriguez Martinez is a Fulbright Scholar at USC Annenberg focused on boosting STEM initiatives through communications. She is a two-year Graduate Associate at the USC Center for PR, completing her master’s program in public relations and advertising. An experienced writer and awarded broadcaster, Dany has worked for PR agencies, space projects and NGOs with a focus on brand strategy, data storytelling and digital content creation.

▶ Remain human-centric

The fear of job displacement is a significant challenge for AI brands. The companies that face the most backlash will be those that focus on how AI can replace human jobs. This is a mistake in our view. ▶ Messaging shouldn’t address what AI can do better than humans but rather demonstrate how it will empower them to become “superhumans” with more tools and time to be creative, productive, and safe. Switching the narrative from “instead of humans” to “superhuman” is one key example of why verbiage matters. ▶ For example, VSC worked with Zume, a robot pizza-making company in the Bay Area, to highlight how automation enhances job roles rather than replacing them. Zume uses AI to predict how many and what kind of pizzas people will order each day, and to power robots that bake and deliver the pizzas. The real value is that automation keeps the crew safe from injuries and frees them to focus more on the menu, ingredient selection, and customer service. ▶ When marketing AI software or services, AI brands should focus less on benefits like “saving costs,” which often means cutting headcount, and more on showcasing how their solutions increase productivity and efficiency. By highlighting the value AI brings to business growth, brands can overcome the fear of job displacement and gain customer trust.

▶ Deep tech vs. shallow tech

‘Deep tech’ refers to companies whose business model is based on high-technology innovation and who own the AI core of their business. In contrast, ‘shallow tech’ simply adapts a current offering by enhancing its products or services with artificial intelligence. ▶ AI brands face the challenge of differentiating themselves in a crowded market where consumers are becoming more tech-savvy and curious.
Whether you are working with a deep-tech or shallow-tech AI brand, it is important to communicate the brand's unique value proposition beyond just "having AI." This distinction helps establish credibility and expertise.

▶ According to Crunchbase, AI is a technology that applies to many fields rather than a standalone sector. Just as being "an Internet company" lost its distinctiveness over time, labeling a startup as an "AI company" may become redundant because artificial intelligence is becoming fundamental to every business. Be sure to know the real value and purpose of your brand. ▶ Be skeptical and strategic Whether your brand is the developer or the user, AI is not infallible and can make mistakes. That's why brands should have a well-defined crisis communication plan in place to address situations where AI delivers incorrect or potentially harmful results. This is particularly important in applications such as self-driving technology, image recognition, automated selection processes, and conversational/generative AI, among others. ▶ In an example that made news earlier this year, lawyer Steven A. Schwartz used OpenAI's ChatGPT for legal research in a case against Avianca Airlines. ChatGPT gave him fabricated court case references, and Schwartz used them in his arguments without checking whether they were real. This mistake led to a $5,000 sanction and the dismissal of his client's case. ▶ This incident underscores the unreliability of AI in certain applications. It emphasizes the need for brands to be cognizant of the limitations of AI and to have strategies in place to rectify errors. ▶ Go beyond the buzzword Building AI brands requires a thoughtful approach that combines effective communication, differentiation, accuracy, and honesty. ▶ Honesty is a disruptive communications strategy: Amid the noise, honesty is so rare that brands that remain accurate and simple become more visible. Build a team of PR pros who can actually dig deep into the technology behind the AI brand and create relatable, concise messages. ▶ By being honest with their stakeholders about what they do and accurate about why and how they do it, brands can successfully navigate the evolving landscape of AI technology. 
■


GENERATIVE AI: FROM EXPERIMENTATION TO INTEGRATION BY TREVOR JONAS THERE WAS A PALPABLE feeling of excitement, energy, and opportunity within PR and communications back in 2004-2005. Technology was changing at breakneck speed, making it possible for individuals and companies to communicate in brand-new ways. Blogs, podcasts, RSS, and wikis were like a 'Wild West' for communications professionals. They offered huge advantages for early adopters, but came with some risk as well. ▶ Today, we find ourselves in a similar period of dramatic technological change, and the implications for PR and communications are profound. I'm talking, of course, about the rise of large language models (LLMs), multimodal AI, and generative AI. ▶ When ChatGPT hit the scene in late 2022, it looked like an interesting experiment, but with the arrival of GPT-4 in March 2023, it quickly became clear that the game had changed. Suddenly, an entire sea of possibilities was right at our

fingertips, seemingly only limited by our imagination or the ability to craft a solid prompt. ▶ Today, I do marketing in the healthcare industry, which is notorious for being slow to adopt new technology and generally risk-averse. As part of my job, I produce a podcast where we speak to bold thinkers and executives in the field of medicine. One C-level executive at an East Coast academic health system recently said, “Generative AI is the future. It is probably the biggest technology advancement since the birth of the Internet.” ▶ Another health system vice president, who is not one for hyperbole, told us, “Generative AI, LLMs, all of that, is truly going to be one of the biggest inventions of our lifetimes.” ▶ The reasons why are clear. The latest models have been trained and fine-tuned on a massive corpus of text and media. We’re talking billions of parameters, making them (in theory) more knowledgeable than any

Trevor Jonas started his career in technology public relations in San Francisco during the original dot-com boom. Over the past 20+ years, he has worked with dozens of brands across multiple industries, providing PR, communications, editorial, content strategy, and social and digital marketing counsel. Currently, he leads content marketing for a Silicon Valley-based healthcare technology start-up and lives with his wife and two teenage children in Napa Valley.



human could ever hope to be. On top of that, tools like ChatGPT, Google Bard, Midjourney, and others can generate responses to queries and produce content at insane speeds. Things that would take a human several minutes or even several hours can now be done in a matter of seconds. ▶ When huge technology shifts like this take place, there are at least three ways to approach them. You can ignore them; after all, they say ignorance is bliss. You can fear them and dismiss them as something that will never measure up to the status quo. Or you can embrace them, lean in, and learn everything you can about them and how they might help you do your job better.

▶ In late March 2023, I identified nine different marketing-related use cases for generative AI that I wanted to test. My team and I set out to understand how generative AI could help with things like research and insights into our ideal customer profile (ICP), keyword research and analysis, content gap analysis, creating outlines, frameworks, and drafts of new content, generating thumb-stopping headlines, writing compelling captions for visuals and infographics, conducting on-page SEO reviews, doing key message analysis, and more. ▶ The results speak for themselves. We've generated more content, in more formats, with fewer human resources doing the work, than ever before. But it goes beyond volume. Much of that content continues to perform better by a variety of measures (leads generated, organic search ranking, engagement) than most of our previous content. ▶ Among our biggest learnings? You have to have humans with creativity, knowledge of the business, a keen understanding of the challenge at hand, and the know-how to use the tools in order to achieve results. This will not change. ▶ To be clear, it's not all sunshine and roses. With generative AI there are obvious issues related to bias, ethics, plagiarism, and more that simply cannot be ignored. But what also can't be ignored is the change that is coming. ▶ Over the next few years, businesses of all shapes and sizes will be able to do more work, of certain types, with far fewer people. What's more, that work will be completed faster. The notion of how long it should take to draft a press release, create organic social media copy, conduct an SEO audit, or build an end-to-end marketing campaign is going to change dramatically, and the ripple effects will be significant. Entire business models will have to change; staffing models will, too. ▶ By all accounts, 2023 has been the year of experimentation with generative AI. 2024 will be the year of integration. We will see the tools that are already deeply embedded in business, PR and communications integrate generative AI capabilities that will take human productivity to never-before-seen levels. ▶ Those who have experimented, tested, and learned will be far better positioned for the future than those who haven't. At the same time, creativity and original thinking will rise in importance as the ability to produce a baseline of content continues to democratize. Personally, I'm having a blast exploring the 'Wild West' for the second time in my career. ■



AI MEETS PR: THE EMERGENCE OF THE COMMUNICATIONS ENGINEER BY AARON KWITTKEN ONE OF THE HOTTEST topics in recent memory, AI has commanded our attention with its exponential growth, shifting from science fiction to reality in real time. ▶ Professionals across industries are pondering the possibilities of AI tools in the workplace, with knowledge workers, including PR pros, increasingly uncertain about the future. Given the impressive language capabilities of ChatGPT and other generative AI tools, it makes sense that workers who spend much of their time writing may be a bit worried. ▶ Though the fearful take feels like the easy one, the reality is more complex — and more positive. ▶ AI tools aren't here to replace us. But as we move forward, the most successful communicators will need to recognize a culture shift across the industry, meeting at the intersection of art and science by assuming the role of the "communications engineer."

▶ The term "engineer" may not automatically land with those who take more pride in the art of comms than in the science, but bringing the two halves together ultimately creates a prettier picture. Communications engineers use actionable analytics to drive strategy and backstop their gut instinct. They utilize generative AI and other advanced tech to become more performative, predictive and productive, and maybe one day more prescriptive. ▶ For tasks like drafting pitches, blogs, bylines, media briefing books, and even crisis statements, first drafts that once took hours can now be produced in under two minutes. Let's take a look at some of the most successful use cases to date. Productivity By leveraging AI tools tactically, workers can begin to free themselves from repetitive and tedious tasks, allowing them to focus on more engaging work. Junior PR pros, often

Aaron Kwittken is the founder and CEO of PRophet, the first-ever generative and predictive AI SaaS platform designed by and for the PR community. Aaron is also the CEO of Stagwell Marketing Cloud’s Comms Tech Unit. Prior to PRophet, Aaron founded KWT Global, a PR and brand strategy agency where he currently serves as Chairman. He is a guest lecturer at USC Annenberg.



assigned admin and research tasks, can use AI to help build media lists, monitor and compile coverage reports, and conduct research. This not only enhances overall efficiency, but also gives PR pros more energy to tap into their ingenuity. Assistance One of AI’s most underrated benefits is its keen ability to serve as an assistant. No, it can’t pick up your dry cleaning or go on coffee runs (yet), but it can improve your work in many ways. ▶ For example, you can approach AI chatbots and fit-for-purpose platforms like PRophet as coaches. While many tend to take AI-generated copy and touch it up on the back end, humans can also input their own copy to find areas for improvement; in this way, generative AI can serve as a convenient writing tutor — an affordable resource for upskilling staff. AI can also assist writers with ideation, research and search engine optimization. Tools such as Midjourney or Adobe’s Firefly also show us AI’s ability to serve as a creative assistant. Communicators can present their challenges to AI tools, unlocking fresh ideas and novel perspectives; AI’s ability to sift through information and identify patterns can act as a catalyst for creative breakthroughs. Resonance With AI tools entering the mainstream, the days of gut instinct and “spray and pray” are officially coming to a close. Thanks to smart software, crafting resonant content will become easier than ever before. ▶ This is a massive game-changer for PR tasks such as pitching. Tools like PRophet are leveraging generative and predictive AI to identify the reporters most likely to cover a story positively — and generate personalized pitches for each one. With this bespoke approach, media placements are not only more likely, but also more impactful. Other industry software players are following suit. The same theory applies to creating content such as blogs, bylines, press releases, social

posts and marketing emails. Generative AI can streamline the drafting process while predictive AI takes the proper KPIs into account, ensuring a final product that readers can't get enough of. Clairvoyance In an industry as unpredictable as ours, any way to stay ahead of the curve is a welcome advantage. ▶ Predictive AI makes this possible, allowing us to see around corners. We can now make predictions with greater accuracy, such as detecting threats to brand safety and integrity before they materialize. We can also identify shifting trends and consumer/customer preferences, finding ways to capitalize on opportunities while mitigating risks. And AI tools are inherently interdisciplinary, allowing for inspiration and ideas to come from multiple sources. Connection By leveraging the efficiencies of AI, agencies may be able to make the shift to value-based billing. As opposed to time-based billing, this model can give clients more confidence that they're receiving quality work in a timely manner without paying agencies to take the slowest and longest path possible. And, as many of us can attest, a confident client is often a happy one, which can strengthen relationships long term. ▶ Additionally, smart comms tech can help PR pros foster stronger relationships with the media. By targeting only the most relevant reporters, communicators can create mutual benefit and build lasting relationships. What's next? As we all know, the explosion of AI tools is unlikely to slow down. But amid all the fearmongering, hype and speculation, it's promising to see communications professionals successfully implementing AI at this early stage. These pioneers are paving the way for AI as a supplement — not a replacement — in PR and beyond. ■



BY DAINIUS KRASAUSKAS IN RECENT DECADES, the mighty buzzword AI has spread from one industry to another like wildfire, sparking widespread concerns about people losing jobs to machines. ▶ While everything on the mainland slowly caught fire, one group always seemed safely situated on its own private island: people with creativity. Remember? It was only a few years ago that creativity felt like the one thing robots could never challenge us humans on. Cashiers, manufacturers, warehouse workers — most will agree, AI is a threat to them. But for writers? Musicians? Entrepreneurs? Never! ▶ Well, we have recently entered the era of generative AI. Is it finally time to panic? The Feeling Of Generative AI AI has long since overtaken us in countless disciplines: We are worse at math, worse at chess, worse at detecting cancer from scans. Is it creativity that makes us think we are different?

The ominous C-word … our ability to think up and create new, surprising and useful things or ideas … the art of imagination. ▶ In November 2022, I had one of those eureka moments you occasionally get in life when I tried OpenAI's freshly launched image generator DALL-E 2 for the very first time. ▶ Back then, I was pretty curious about AI and followed news on deepfakes, early image generators and GPT-2. The difference: all of those things were either clunky or not accessible for the wider public to try out. But when I started playing around with DALL-E, I knew I had touched something unearthly. ▶ I could write a prompt like "Painting of the Monarch, Orange cat Otto von Garfield, depicted wearing Prussian Wear and eating his favorite meal — Lasagna" and voilà, it created precisely that. Try it yourself. ▶ It was eerie and yet captivating, like witnessing magic unfold before my eyes. Since then, countless generative tools, such as

Dainius Krasauskas is an overly curious, long-haired European creative who loves to strategically break rules to create impact. Before joining USC to pursue a Master's in PR and advertising, he brought the Bavarian Prime Minister to a former employer to play with robots, art-directed a rap video with over 20 million views for a soccer superstar, and composed music for one of Europe's largest modern art museums. He also loves early seasons of "The Simpsons," tech innovation and bossa nova.


ChatGPT, have emerged, making it easier than ever to rapidly generate high-quality content. ▶ With all of this, I could be terrified of AI invading my creative island, but surprisingly, I am excited. I know that AI has made me more creative than ever before. Why? The Extension Of Human Creativity Since touching generative AI, I know there is no way back. (That goes for all of us, by the way.) ▶ An example: Recently, I created a short video about myself to attach to my resume. At one point I wanted to emphasize my love for innovation, but instead of saying it myself, I asked ChatGPT to come up with an In-N-Out pun to describe my innovative skill set, and I combined the best suggestions into a one-liner. Then I downloaded a speech by Barack Obama and synthesized his voice with ElevenLabs to create a fake voiceover. ▶ Afterward, I downloaded another video of Obama and made sure the gestures and facial expressions in the video matched his fake voiceover.

Lastly, I used a tool called Wav2Lip to generate new lip movements in sync with the audio, and assembled everything together. ▶ I didn't need to leave my room or spend a single cent, and still I somehow created a highly convincing version of the former president saying that my "knack for innovation is like a Double-Double … a juicy combination of quality and value." ▶ Without AI? Not imaginable. And certainly not doable. Is Generative AI Genuinely Creative? Creativity, to me, is like a journey from point A to point B. ▶ Point A is where it all begins, in our brains. Point B is the output: It can be a song, an essay, a business idea. But time constantly changes how we reach point B. ▶ Long ago, ideas left point A, the brain, and were never captured, because all our ancestors could do was communicate verbally. Then we learned to capture ideas on walls, then on paper, then in print, then in type, then in digital form, and now through generation. No next step was ever doable before a breakthrough advancement. ▶ But innovation doesn't just change the journey from A to B: it also changes what point B can actually be. It stretches the limits of our imagination. It makes things imaginable that were unimaginable before. ▶ Without AI, my output, point B, would never have been Obama. It was not technologically possible; therefore it was unimaginable. But my idea, point A, was always deeply human. AI didn't dictate my actions; instead, it helped me navigate the journey and bring the idea in my mind to life. And it shaped what that idea could actually be. Implications Of Generative AI So, will generative AI replace human creativity? Yes and no. ▶ It will certainly replace many hours of what is currently routine work, because AI is just brilliantly good at cutting through crap.


Following my earlier analogy, you can see how AI will immensely accelerate our journey to point B. That frees up countless hours that we can spend on other things. ▶ If we reach point B far faster, that means we can… 1. Create content of the same quality as before in a fraction of the time 2. Use the same amount of time to create content of much higher quality than now ▶ Today, when I compose music, I use LANDR to master my songs within seconds, sparing me hours of manual work. I use ChatGPT to help me brainstorm. I use DALL-E to add objects to photos. AI has not only made me a faster creative; it has made me a far better one. Long Story Short Very soon, we will undoubtedly feel a big shift, and we will need to adapt. But while AI's impact can at times be intimidating, history

proves that innovation never fully replaces previous forms of creative expression. Many still sculpt by hand, calligraphy remains an art form, and countless musicians continue to record with analog equipment, regardless of the digital revolution. Not everything in this world is about efficiency or financial maximization: Some things are truly about heart and soul. ▶ The impact of AI is a very serious topic; I believe it will have greater implications for our lives than anything we have yet experienced. But so did a handful of technologies for the generations before us. ▶ Maybe I'm an optimist. Maybe I simply have no idea what I'm talking about. But what can we do? I guess really embrace it — and start learning how to better curate the coming storm of generated content. ▶ Let's become more creative! That's what mankind has always done. Because no matter what we do, generative AI is here to stay. ▶ Just like human creativity. ■



THANK YOU AS THE CONTENT CURATORS, we hope that our readers will learn as much about artificial intelligence as we did while reviewing and organizing this year's book. Our contributors share a wide range of perspectives on a trend that is clearly impacting our profession. Many of them conclude that despite the advancement of AI, the human ear and voice will remain essential. Even during the six months of production, we have seen numerous advances in AI, and we are certain the pace of change will continue to accelerate. Our role at the Center for PR is to continue tracking and reporting these changes and how they impact the future of public relations and those who will shape it. The Center for PR team thanks all of our board members for their time spent researching, writing and editing content for this report. Thanks to USC professors Kozinets, Gretzel, Kahn and Jenkins for contributing thoughtful pieces. And to Dean Willow Bay and the Annenberg team for supporting and promoting our work. Finally, thank you to the team at Microsoft for supporting and underwriting this year's edition. Their expert essays in the Relevance Report cover a wide range of viewpoints about AI, and are certainly worth discussing in your classrooms, boardrooms and organizational meetings. Ron Antonette Chief Program Officer USC Center for Public Relations Adjunct Instructor, USC Annenberg



EDITORS Ron Antonette ‘90 Daniela Rodriguez Martinez MA ‘24 WITH

Andrea Hubbard MA ‘24 Grayson Wolff MA ‘24 Michael Kittilson MA ‘25

LEADERSHIP Fred Cook, Director

Janine Hurty, Director of Development

Burghardt Tenderich, PhD, Associate Director

Tina Vennegaard, Senior Strategic Advisor

Ron Antonette, Chief Program Officer

Ulrike Gretzel, PhD, Senior Research Fellow

BOARD OF ADVISERS *Jennifer Acree JSA+Partners

Doug Dawson Microsoft

Seema Kathuria Russell Reynolds Assoc.

Kristina Schake The Walt Disney Company

Jonathan Adashek IBM

Chris Deri Weber Shandwick

*Megan Klein Warner Bros. Discovery

Barby Siegel Zeno Group

Jessica Adelman Mars Wrigley

*Dani Dudeck Instacart

Chris Kuechenmeister PepsiCo

Charlie Sipkins FGS Global

*Christine Alabastro Prenuvo

†Bob Feldman Feldman + Partners

*Maryanne Lataif AEG Worldwide

Hilary Smith NBCUniversal

Vanessa Anderson AMPR Group

Beth Foley Edison International

*Elizabeth Luke Pinterest

†Don Spetner Weber Shandwick

Clarissa Beyah Union Pacific

Matt Furman ExxonMobil

Gulden Mesara City of Hope

*†Kirk Stewart KTStewart

Dan Berger Amblin Partners

Robert Gibbs Bully Pulpit Interactive

Josh Morton Nestlé North America

*Michael Stewart Hyundai

*Tala Booker Via Group

*Brenda Gonzalez USCIS

†Torod B. Neptune Medtronic

*Grant Toups Hill & Knowlton

Judy Gawlik Brown

†Cynthia Gordon Nintendo of America

*†Glenn Osaki USC

David Tovar Grubhub

Jennifer Gottlieb Real Chemistry

Erica Rodriguez Pompen Micron

Gerry Tschopp Experian

*Simon Halls Slate PR

†Ron Reese Las Vegas Sands

Mya Walters Athleta

Matthew Harrington Edelman

*Heather Rim Optiv

KeJuan Wilkins Nike

Jon Harris Conagra Brands

Melissa Robinson Boingo Wireless

*Julia Wilson Wilson Global Comms

†Bill Imada IW Group

Josh Rosenberg Day One Agency

*†Deanne Yamamoto Golin

*†Adrienne Cadena Havas Street

Dominic Carr

*Janet Clayton Vectis Strategies

*Stephanie Corzett Nordstrom

Carrie Davis CD Consulting

*Megan Jordan Claremont McKenna


Melissa Waggener Zorkin WE Communications

* USC alumnus

† CPR founding member

