
UTNY: Students gain the tools to succeed in careers in New York.

If you want to learn more about supporting programs you read about in this issue, contact Wendy Anderson, Development and External Relations, at wendy.anderson@mccombs.utexas.edu.

WHEN NYC IS THE CLASSROOM

UTNY PROGRAM PREPARES McCOMBS STUDENTS TO COMPETE FOR JOBS IN THE BIG APPLE

CEINNA LITTLE KNEW THAT AN INTERNSHIP at The Fortune Society, a New York City nonprofit that helps formerly incarcerated people rebuild their lives, would give her valuable hands-on experience researching housing policy. But when her supervisor connected her with community activists organizing rallies at City Hall, she gained an even deeper understanding of the challenges facing economically disadvantaged people in the city. Little, a senior studying management at Texas McCombs, says she is now more committed than ever to pursuing a master’s degree in urban planning in New York City in the fall, and she plans to focus on affordable housing.

“It’s definitely perception-changing,” she says of her time in New York. Little is one of 19 UT Austin students who participated in the school’s UTNY program during the fall semester, juggling a jam-packed schedule that included her internship, coursework, and visits to famous landmarks such as the Statue of Liberty.

The experiential learning program, meant to help students launch their careers and discover New York through the lens of a local, began in the fall of 2019 but was interrupted by the coronavirus pandemic. After restarting in 2021, it picked up momentum and has already enrolled 82 students for the upcoming summer semester.

The main objective of the program is to help UT Austin students gain a foothold in the competitive New York job market, explains Michael Wilson, executive director of UTNY and of UTLA, the university’s Los Angeles-based experiential learning program established in 2005. He helped design the program along with former McCombs Dean and now UT President Jay Hartzell; Dean Jay Bernhardt of the Moody College of Communication; and former College of Fine Arts Dean Doug Dempster, now a professor at the college.

Through its internship program, UTNY, which receives financial support from McCombs, Moody, Fine Arts, and the College of Liberal Arts, along with philanthropic support from alumni, parents, and friends of UT, grooms students to compete for jobs with candidates who attended college in New York City. But it also helps them discover their strengths. “The students learn a lot about their own ability to live in a market like New York,” Wilson says. “It’s self-evaluation and assessment.”

For Kaelan Replogle, a junior studying finance at McCombs, this meant working a 9-to-5 job as a forensic accounting intern at the firm Weaver, networking through the Texas Exes alumni group, and learning to navigate the city’s public transportation system, even if it meant getting lost sometimes. “It is kind of true: if you can make it in New York, you can make it anywhere,” Replogle says, referencing the famous song lyric.

Decorated in the familiar burnt-orange-and-white colors, the UTNY Center in Times Square serves as a headquarters for students. It is a classroom, a study lounge, and sometimes a remote office.

Laura Brown, UTNY program director, says watching students conquer challenges, gain confidence, and build friendships is inspiring. Although the students in the fall cohort came from various schools across the UT Austin campus and did not know one another previously, they were practically family by semester’s end, Brown says. “Those will be friendships and professional networks they’ll have for decades to come,” she says.

Planning ahead is key, whether it means preparing for the internship or finding a place to live, says Sophie Jacquet, a sophomore studying business administration at McCombs. Looking back on her experience last summer as a marketing strategy and sales intern at the real estate startup Rent Ready, Jacquet adds, “I grew a lot in terms of how I prioritize my goals.” —Alice Popovici

Students with UT alumna and “Today” co-host Jenna Bush Hager (right), who recently served as a lecturer during a panel discussion, and Hilary Smith, SVP of corporate communications at NBCUniversal (pictured next to Hager).

NEWS: THOUGHT LEADERSHIP

EXPLAINABLE AI

GLOBAL ANALYTICS SUMMIT EXAMINES EFFORTS TO UNDERSTAND HOW ARTIFICIAL INTELLIGENCE SYSTEMS WORK AND WHY THAT’S ESSENTIAL

MORE THAN TWO DOZEN ARTIFICIAL intelligence experts from business and academia, including Texas McCombs, explored the importance of understanding how machine learning systems arrive at their conclusions so humans can trust those results.

This relatively new frontier of explainable AI, or XAI, was scrutinized for two half-day sessions in November during the online Global Analytics Summit held by the McCombs Center for Analytics and Transformative Technologies (CATT). More than 3,500 people registered for the conference, another step in the university’s drive to be a thought leader in the field.

Experts say that as AI systems become more sophisticated, they become increasingly difficult for humans to interpret. The calculation process turns into what is called a “black box” that even data scientists who create the algorithms can’t understand, according to IBM Corp. XAI methods, however, provide transparency so humans can understand the results and assess their accuracy and fairness.
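To make that contrast concrete, here is a minimal sketch of one widely used XAI technique, permutation feature importance, applied to an otherwise opaque model. It is a generic illustration rather than a method presented at the summit, and the dataset and model choices are assumptions made for the example.

# A minimal sketch: score each input feature of a "black box" model by
# how much held-out accuracy drops when that feature's values are shuffled.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": accurate, but hard to read from its internals alone.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffling an important feature should noticeably hurt test accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])
for name, drop in ranked[:5]:
    print(f"{name}: accuracy drop {drop:.3f}")

A ranking like this does not open the box, but it gives a human a testable account of which inputs the model actually leans on.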

“XAI is an important tool for business organizations and has a wide-ranging impact on major activities including risk management, compliance, ethics, reliability, and customer relationship management,” said Michael Sury, managing director of CATT.

Although AI is more than 50 years old, “deep learning has been a mini-scientific revolution” since the 2010s, said one keynote speaker, Charles Elkan, a professor of computer science at the University of California, San Diego. It has “enabled tasks that really look like they require remarkable intelligence because they require the combination of language and vision.” Elkan is

The Global Analytics Summit included (top row, from left) Charles Elkan of the University of California, San Diego; Hima Lakkaraju of Harvard University; Michael Sury of UT Austin; (bottom row, from left) Nazneen Rajani of Salesforce Research; Kumar Muthuraman of UT Austin; and Cynthia Rudin of Duke University. also a former managing director for Goldman Sachs Group.

BLACK BOXES

A panel discussion called “Explainable vs. Ethical AI: Just Semantics?” followed Elkan’s presentation. The panelists sought to define some of the terms, themes, and paradigms of XAI, as well as examine the role of black boxes. Alex London, a professor of ethics and philosophy at Carnegie Mellon University, said that explainability and interpretability “express a relationship between the humans and the model that’s used in an AI system.”

Alice Xiang, a lawyer and a senior research scientist for Sony Group, said, “I see explainability as an important part of providing transparency and, in turn, enabling accountability.” She noted the challenge of black boxes, citing as examples drug-sniffing dogs, whose abilities are mysterious but highly accurate, and the horse Clever Hans, who appeared to understand math but was really following cues from its owner.

Polo Chau, an associate professor of computer science at Georgia Tech, pointed to counterfactual tests, in which parts of a model’s input are switched on and off to probe its behavior. “It can be quite usable to a lot of users, including consumers,” he said.
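The counterfactual idea Chau describes fits in a few lines of code. The sketch below uses a hypothetical loan-approval function as a stand-in for a real model; the feature names and threshold are invented for illustration, not drawn from any system discussed at the summit.

# A minimal counterfactual probe: flip one input, hold the rest fixed,
# and see whether the decision changes. The "model" here is a hypothetical
# stand-in chosen only to keep the example self-contained.
def approve_loan(income: int, has_prior_default: bool) -> bool:
    return income > 50_000 and not has_prior_default

applicant = {"income": 62_000, "has_prior_default": True}
baseline = approve_loan(**applicant)

# Counterfactual: the same applicant, but without the prior default.
flipped = approve_loan(**{**applicant, "has_prior_default": False})

if baseline != flipped:
    print("The decision hinges on the prior-default flag.")

Because the probe needs only inputs and outputs, it works even when a model’s internals are off-limits, which is what makes it usable for consumers as well as developers.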

BIAS, PSYCHOLOGY, AND ETHICS

After that discussion, Krishnaram Kenthapadi, a principal scientist for Amazon Web Services AI, spoke on a panel called “Responsible AI in Industry” about how human bias can be reflected and amplified in the data. “There’s a risk we may be living in our own bubble and implementing some approaches where we may not be even aware of what may be the issues with these approaches,” he said.


In a panel discussion called “Adopting AI,” James Guszcza, a behavioral research affiliate at Stanford University and chief data scientist on leave from Deloitte LLP, said: “I think one of the previous speakers said we need to be interdisciplinary; I take it a little bit further and say we need to be transdisciplinary.” The work needs to involve machine learning as well as human psychology and ethics, he said. Without taking those into account, “You’re going to get artificial stupidity.”

Fellow panelist Anand Rao, the global AI lead for PwC (PricewaterhouseCoopers), noted that the human-computer interface is evolving and that society needs to be given time to adapt to new technology. “You need to be patient,” he said, referring especially to businesses.

And during his talk, Mark Johnson, chief AI scientist with Oracle Corp., said that humans are “at the very beginning” of learning how to use XAI to build better machine learning models. He also said, “Data is the new oil,” noting that it is a valuable resource but comes in different grades, as do coal and crude, and needs to be processed and refined.

DIFFERENT APPROACHES TO XAI

McCombs assistant professor Maria De-Arteaga, who researches societal biases and machine learning, moderated a panel focused on XAI solutions. Jette Henderson, a senior machine learning scientist with CognitiveScale Inc., said she works to help other companies understand their models. “So, I very much approach explainability from helping out customers.”

Zachary Lipton, an assistant professor of machine learning at Carnegie Mellon University, noted the disconnect between problems and solutions. Explainability is like chronic fatigue syndrome, he said. It’s an “all-encompassing bag … for a whole set of problems that don’t have a common solution.”

In talking about the mismatch between problems and solutions, panelist Scott Lundberg, a senior researcher at Microsoft Corp., said that an explanation can actually hide something about a model and its behavior.

One of the key risks of AI is a lack of transparency for clients, said Daniele Magazzeni, the AI research director with JPMorgan. Magazzeni said that his company has 80 research projects involving economic systems, data, ethics, crime, and more.

The final panel of the conference, moderated by Raymond Mooney, director of the UT AI Lab, took on the subject of “Explanations, But for Whom?”
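One common check on Lundberg’s concern is a fidelity score: measure how often a simple, readable stand-in agrees with the black box it claims to explain. The sketch below is an illustration of that general idea, not a method endorsed at the summit, and its dataset and model choices are assumptions.

# A minimal sketch of explanation fidelity: train a small, readable
# surrogate on the black box's own predictions, then measure how often
# the two models agree. All modeling choices here are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# The surrogate learns to mimic the black box, not the ground truth.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"Surrogate agrees with the black box on {fidelity:.1%} of test cases.")

# A low score is Lundberg's warning in numbers: the tidy explanation
# is hiding part of what the model actually does.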

Panelist Nazneen Rajani, a Salesforce research scientist, discussed how to evaluate explanations by assigning them a score. Fellow panelist Christoforos Anagnostopoulos, senior data scientist at McKinsey & Co., pointed to the five principles of artificial intelligence in society articulated by Luciano Floridi and Josh Cowls, which draw on the four principles of bioethics. The fifth AI principle is explainability, he said, and it is “probably the one thing that we should add … to have a complete framework of ethics.” —Mark Barron