
/THE REAL WORLD OF ARTIFICIAL INTELLIGENCE

WRITTEN BY ROB CSERNYIK

PHOTOGRAPHY BY CARLOS OSORIO

Imagine working in an industry advancing so quickly that even some of its biggest boosters want it to slow down. That’s the reality for Remi Ojo ’08, who chose a career in artificial intelligence (AI), in which machines perform tasks that would normally require human intelligence.

While working in field operations at Bell Canada, Ojo wanted to incorporate more advanced analytics into his work. After developing a taste for the subject in night school courses, he later earned an MBA, a Master of Management Analytics and a Master of Management in Artificial Intelligence from Queen’s University.

He believes many “archaic industries” can use AI to revolutionize their efficiency and productivity. As VP of Data Operations at AVO Inc., Ojo does innovative work within the mining industry, helping firms save millions of dollars on construction projects.

Ojo helps develop software that uses camera-recorded data to break down the steps involved in building mine shafts, showing firms where costs can be cut. “We can see how long that task actually should take, and where there is some room for efficiencies,” he says. “I think that we’ve only scratched the surface in terms of harnessing AI.”

But earlier this year, an open letter called for AI labs to pause training AI systems more powerful than the GPT-4 language model for at least six months. Tech leaders like Elon Musk and Apple co-founder Steve Wozniak signed it, though some have since walked back their support and critics dispute the letter’s premise.

“I don’t believe that there needs to be a delay,” Ojo says. “But I think with the rate that these large language models and AI tools have permeated everyday life, there should be greater urgency on transparency and explainability.”

While there is much discussion about how AI might affect the job market (whether a chatbot like ChatGPT might one day write articles like this, for instance), expectations are high. A global survey of CEOs by professional services firm PwC found that 63 percent believe AI will have a larger impact on the world than the internet did. As new AI roles and technologies proliferate, this is playing out in real time, and these careers are taking on a variety of forms.

For some, like Ojo, it means taking an active role in building AI technology as data scientists, machine learning engineers or researchers. But there are also non-technical roles in areas like consulting or sales. “I think it’s doable to get into the AI space or work in an AI company, but not necessarily get in the weeds of doing AI work,” he says. According to Crescent alumni working in the field, there’s room for talents of all kinds as the industry scales up. Yet for all the recent developments designed to make life easier (facial recognition and content generation, for example), new challenges are being created and fresh questions need to be answered.

One overlooked issue, says Ojo, is the potential for bias, which requires examining how AI models are applied and how they may negatively impact certain groups. He points, for example, to Amazon’s AI recruitment tool, which showed a bias against female applicants.

To many, AI feels like a far-off future; for Crescent alumni like Ojo, it’s all in a day’s work.

/THE FUTURE OF WORK

As an undergraduate at the University of Waterloo, David Ferris ’14 created an online bail financing platform called Better Bail for America, where he noticed firsthand how ZIP codes can introduce bias. “Even if you don’t actually have race as a signal in your dataset, just by having some location signal, you’ve actually snuck race into your dataset.”
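To make the proxy effect Ferris describes concrete, here is a minimal sketch using entirely synthetic data (the variables, numbers and lending scenario are illustrative assumptions, not drawn from Better Bail for America): a model trained without any race feature still produces decisions that split along racial lines, because location stands in for race.

```python
# Illustrative sketch of proxy bias, on synthetic data only.
# Race is never given to the model, but ZIP code correlates
# with race, so the model's decisions still track race.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic population: two groups, unevenly spread across two ZIP codes.
race = rng.integers(0, 2, n)                               # protected attribute (held out)
zip_code = np.where(rng.random(n) < 0.8, race, 1 - race)   # 80% aligned with race

# Outcome depends only on income, but income differs by group
# (a stand-in for historical inequity baked into the data).
income = rng.normal(50 + 10 * race, 5, n)
approved = (income + rng.normal(0, 5, n) > 55).astype(int)

# Train WITHOUT race: only ZIP code and an uninformative credit score.
X = np.column_stack([zip_code, rng.normal(600, 50, n)])
model = LogisticRegression(max_iter=1000).fit(X, approved)
pred = model.predict(X)

# Approval rates still differ by the attribute the model never saw.
for g in (0, 1):
    print(f"group {g}: approval rate {pred[race == g].mean():.2f}")
```

On this toy data, the two groups see very different approval rates even though race never appears in the training features: the location signal carries it in.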

As the founder of an AI software company, Ferris has dealt with the ethical challenges of AI more directly than most.

In 2020, Ferris founded Phonic, a market research company that used AI to draw research data from voice and video surveys and from analyzing conversations. For example, when conducting a taste survey on a new product for a food brand, Phonic picked up that users had mentioned having trouble with the packaging, a detail that might otherwise have gone unnoticed. While market research can be expensive and time-consuming, the company allowed even smaller firms or individuals to mine conversations with AI for data that might be useful to their business. (Advertising technology company Infillion acquired Phonic in 2022.)

But though the company attracted big clients, including academic institutions and Fortune 500 firms, it also inspired critics.

“We got a lot of criticism when we were building Phonic that we were putting market researchers out of business,” he says. In an industry with a traditional reputation, some believed Phonic was taking jobs away. Ferris instead sees it as offering smaller firms and academics access to tools they might not otherwise have.

Ferris sees this as part of the trade-off that comes with the innovation and productivity growth AI is ushering in: the potential loss of some jobs and ways of doing business.

“I think AI is such a force multiplier for humanity,” he says, though he admits his experience as a start-up founder might make him “a bit more fearless” about applying AI and “sprinting as fast as we can” than someone who may find their long-term career disrupted. The number of such people could be staggering: investment bank Goldman Sachs has suggested AI might replace the equivalent of 300 million full-time jobs.

But Ferris, who presently works with the AI start-up Playground, feels companies looking to use these tools simply to cut employees might be on a fool’s errand. “I would probably say that’s very short-sighted, cost-cutting thinking,” he says, rather than long-term investment. He also notes that being serious about changing the way people work is uncomfortable.

“Everyone says that they want disruption, but nobody actually does,” he adds. “People are very comfortable with the status quo, and they really don’t want that to change. I think a lot of the discomfort originates from that.”

Getting a Head Start on AI

Former Head Boys Cooper Midroni ’16 and Max Bennett ’18 share a keen interest in artificial intelligence, along with the leadership skills that led Midroni to found the AI club QMIND at Queen’s University. Bennett joined in 2018.

QMIND is Canada’s largest undergraduate data science and machine learning group. “It came from this understanding that machine learning was an emerging trend in the technology space,” says Midroni, “but big institutions, like major companies and universities, hadn’t yet picked up on it.”

Connected through the Crescent Alumni University Mentorship Program, the two young men organized QMIND projects ranging from simple (identifying a person’s accent) to complex (autonomous driving).

“The mentorship was a great way to formalize a regular point of contact between us,” says Bennett. “That’s what helped me get interested in QMIND.”

In his third year, Bennett became co-chair of the Canadian Undergraduate Conference on Artificial Intelligence (CUCAI), which draws delegates from universities across Canada and industry representatives from various technology and business firms. CUCAI is organized and funded by QMIND and has experienced remarkable growth since it was founded in 2018.

Though Midroni and Bennett have long careers ahead of them, they have already created rich legacies: places for curious people to learn about AI and connect with like-minded individuals.

Jeremy Gilchrist ’11 says society has yet to confront the question of whether customers will remain willing to support companies that use AI to replace large numbers of human employees.

“There are loads of ethical challenges, and even some that aren’t really challenges yet, but they’re questions we’re starting to ask ourselves,” says Gilchrist, the Canadian Intelligent Automation lead for the consulting firm Avanade.

After majoring in environmental studies as an undergraduate, he expected to head to the Alberta oil patch, but the mid-2010s were a turbulent time for the industry. He pivoted, first to project management and later to technology consulting in robotic process automation, an AI-adjacent approach in which software bots mimic human actions.

At Avanade he works on a variety of automation projects, including building a “self-healing robot.” (Think of a software tool able to fix or update itself without human intervention.) He likens it to a car discovering it has a punctured tire, dropping its owner off at work, driving off to get the tire fixed, and coming back without the owner ever noticing. He feels this kind of self-updating tool represents a tremendous growth opportunity, and one that may face less resistance than other areas of AI.

“When it comes to things like journalism, books, and content generation, people will accept [AI] to a point and then they will get uncomfortable and there will be resistance.” He adds this may lead users to demand that content be genuine or verified, and may create new risk exposures for companies using AI.

Gilchrist also sees how AI uses data as one of the major looming ethical challenges. When AI models draw on data published for free online, for example, it remains unsettled whether the original creators should profit when it is used commercially by AI companies. He likens it to the debate over whether Facebook should have to pay news companies for the stories shared on its site.

“It is something that is going to need to be answered very quickly before these tools are cleared for public use in a more broad sense.”

/PROMISE AND PERPLEXITY

While Steven Curtis ’95 was working for the consulting company Slalom, he was wowed by a project that used machine learning to analyze cancer cell tests for anomalies. “It was the type of work that would normally take a human years, and it was actually a pro bono, special project we did in under three months.”

He’s bullish on the potential of new AI tools and feels we’re in the early stages, akin to the internet at the dawn of the millennium. “It’s no longer in a box somewhere,” Curtis says of AI. “I think all of us can experience it firsthand.”

Denver-based Curtis currently works for Google Cloud, helping companies such as financial services and telecommunications providers improve their telephone menus using interactive voice response technology. Curtis says the goal is to provide the type of customer experience “you would expect or hope for from a human,” while freeing workers to focus on more complex matters that virtual experiences can’t yet replicate.

But one challenge he’s noticed is a misunderstanding of the industry—for instance, the distinction between AI (a machine simulating human intelligence) and machine learning (teaching machines tasks by identifying patterns). While “most people know what the acronyms are, and even what they stand for,” he says, they don’t know what they mean in a technical context.

Farhad Shariff ’01 experiences this, too. When talking to different executives at a single organization, he can get varying answers on what AI is, “because they’ll all think about it a different way.” This makes some level-setting necessary before moving forward on projects, so that everyone is on the same page.

Shariff helps companies use the data they have to better anticipate customer needs and make company offerings more valuable to them—and, of course, more valuable to the firm.

But he notes that data security (who owns and uses data) is becoming a critical issue companies face within the AI field. “How do they use the data effectively, and ethically, in order to be able to drive the business but not cross the line?”

It’s a question that currently remains unanswered, and one that, given AI’s relentless expansion, is unlikely to remain that way for long. There’s a steep learning curve not only for the machines and programs evolving to become more humanlike, but also for the humans in charge of deciding how best to wield the power of these new technologies.

“There’s still a lot to be learned,” Shariff says. “It’s still very much a wild west out there.”