
FROM MICROSOFT TO AI ADVISORY: A JOURNEY IN BUILDING SCALABLE & RESPONSIBLE AI STRATEGIES

DIDEM UN ATES is a renowned AI and Responsible AI executive with over 25 years of experience in management consulting (Capgemini, EY, Accenture) and industry roles (Motorola, Microsoft, Schneider Electric). As the founder of LotusAI Ltd, she advises private equity firms, hedge funds and financial institutions, including Goldman Sachs Asset Management and its portfolio companies, on AI/GenAI strategies, responsible AI implementation, and talent transformation.

Didem serves as Advisor and member of the AI/Generative AI Council for Goldman Sachs Value Accelerator, sits on the Boards of the Edge AI Foundation and Wharton AI Studio, and is a certified AI Algorithm Auditor and Executive Coach. Her industry contributions have earned her recognition such as the Tech Women 100 Champion and Trailblazer 50 awards. In her tenure as Vice President of AI Strategy & Innovation at Schneider Electric, she led the development of the company’s AI/GenAI strategy and forged strategic partnerships with startups, VCs/PEs, and academia. At Microsoft, she was pivotal in launching Business AI Solutions at Microsoft Research and spearheaded diversity-focused programs, including Alice Envisions the Future / Girls in AI hackathons. In her role as head of Applied Strategy, Data & AI in Microsoft’s Chief Data Office, she oversaw Microsoft’s largest AI engagements, contributing to the OpenAI partnership.

Didem has led the operationalisation of Responsible AI at Microsoft, Accenture, and Schneider Electric, advising C-suite leaders on talent strategy adaptations and promoting responsible AI practices.

Can you tell us about your early experiences in the AI sector and how they shaped your career path, particularly during your time at Microsoft?

I’ve spent over 26 years in the technology sector, focusing mainly on disruptive technologies. For the past 13 years, I’ve been deeply involved in AI.

I witnessed the rise of traditional AI while I was working with Microsoft Ventures and Accelerators. Across accelerators worldwide, 80–90% of the highest-growth startups were AI-focused. That’s when I realised AI was where I wanted to be. Steven Guggenheimer, a VP at Microsoft at that time, sent an email about forming a Business AI team. I offered to volunteer for competitive benchmarking and startup ecosystem projects. That volunteer role turned into my first official position in AI.

I was part of Microsoft’s first Business AI team, which originated within Microsoft Research. Our team had about 100 members, and I was the only person based outside the U.S. – in fact, outside of Redmond, the headquarters. I was also the only Turkish woman in a predominantly male team, consisting mostly of Chinese, French, and Canadian men. I highlight this because, although I was accustomed to limited diversity in tech, the lack of diversity in AI felt even more pronounced. It motivated me to take intentional action, and that’s how the initiative Alice Envisions the Future / Girls in AI started.

At Microsoft Research, we developed some of the first enterprise AI algorithms for customer service, sales, and marketing. When these proved successful, our team transitioned to the AI Engineering division to launch them as official products. This effort led to the creation of Microsoft’s first AI SaaS platform: Power Platform. I call it the ‘grandparent’ of the Copilot suite, because it shares a similar architecture and logic, later enhanced with generative AI.

Fast-forward 12 years at Microsoft, and my final role involved working with the Chief Data Officer on large-scale AI engagements with Fortune 500 clients. One such collaboration was with Schneider Electric, who later invited me to implement their AI strategy.

Now, through my own company, LotusAI, I help organisations like Goldman Sachs and private equity firms, hedge funds, and other financial institutions with AI transformation, responsible AI strategies, and talent development. It’s been an incredible journey, combining proactive volunteering, seizing opportunities, and pursuing cutting-edge technology – not because it’s trendy, but because it has a massive power to drive positive societal change at scale.

Can you share pivotal moments from your career that shaped your approach to AI strategy and leadership?

One pivotal moment was when I brought Microsoft its fourth paying AI customer. Today, we talk about AI generating trillions in market value, but back then, AI was still in its early days. With that customer, we made a bold decision: instead of selling the AI product as a managed service – where we could only serve a few large customers per year – we decided to package it as a SaaS AI solution.

This required us to pivot our entire business model just six months before the product’s general availability, and ultimately led to the creation of Microsoft’s Power Platform as a SaaS offering. It wasn’t just pivotal for me personally but also for Microsoft and the broader tech industry. Today, millions of developers build on that platform, which is incredible to see.

Another defining moment was attending my first offsite meeting with our team of approximately 100 members. I was the only woman and the only person based outside the U.S.

I hadn’t set out to be a leader in diversity and inclusion, but I knew someone had to address the issue. Diversity isn’t just a social responsibility; it’s essential for business success, especially when developing complex products like AI. Alice Envisions the Future / Girls in AI, which eventually became a global program, has connected over 8,000 girls worldwide through a Facebook group. High school girls in cities like Seattle, Warsaw, London, Athens, and Istanbul began hosting hackathons, sometimes even independently at their schools, and Microsoft partners such as Accenture and KPMG supported these events.

It’s one of the most meaningful parts of my career – doing something positive for society while scaling AI in my day job. Many of these young women stay in touch with me, and I continue supporting them with recommendation letters for their university applications. It might be a small step, but it means a lot to me.

A third pivotal moment happened recently, after I left Schneider Electric in mid-November. Goldman Sachs’ asset management team had invited me to join their new Generative AI Council of seven or eight experts from around the world. It was intended as a board-like role, requiring just a few hours a month. But two days later they reached out again. They’d seen my LinkedIn post announcing my departure from Schneider Electric and asked if I would consider a broader role. They had invested in hundreds of portfolio companies, many of which needed AI support, and they wanted help at the value accelerator level.

So, while waiting for a friend who was late to lunch, I officially launched my AI advisory business, LotusAI, on December 13, 2023. It was a spontaneous start, but it’s turned into the most fulfilling phase of my professional life, and I’m now helping many clients with their AI journeys. As much as I enjoyed my time at Microsoft and other companies, this is the best time of my career. I always wanted to be an entrepreneur, and everything just aligned perfectly: clients were ready, the product (AI) was in demand, and the timing was right.

What are some common misconceptions enterprise organisations have about implementing generative AI?

The first, and perhaps the most important, relates to culture and mindset. From an enterprise transformation perspective – especially with generative AI – things move so fast that the real starting point is leadership. There needs to be a leader at the top with vision, courage, and an investment mindset.

This investment isn’t just financial or resource-based; it’s also about people and time. Leaders must commit to the journey, knowing it won’t be easy. They must accept that there will be failures, but these are part of the learning process. Ultimately, there’s no turning back – generative AI isn’t just a tool for survival but for thriving in the future.

Another major misconception is that many organisations start with a small proof of concept (POC) – a use case that’s tangential to their core business. They do this because they either lack the leadership sponsorship to take a holistic approach, or fear the complexity of full-scale adoption.

While a POC might work, the biggest pitfall is that these isolated experiments rarely scale. If AI initiatives aren’t integrated into the core business strategy and workflows from the start, they remain stuck in the lab, failing to generate real impact. Later, organisations conclude that AI doesn’t work or doesn’t scale, but the reality is that the approach wasn’t designed for success in the first place.

Generative AI requires a holistic approach – an end-to-end transformation that includes talent development, responsible AI practices, and integration across every function, from HR to finance. It doesn’t all have to happen simultaneously, but the vision must be comprehensive.

Another misconception revolves around data. With advancements like synthetic data, automation, and new methodologies, many data challenges can now be overcome. Organisations shouldn’t let data concerns hold them back from exploring generative AI solutions.

Lastly, and most importantly, there’s the issue of talent and execution. Many companies hire consultants for time-boxed projects – eight, twelve, or sixteen weeks – to develop a strategy or implement a few use cases. These projects often end with hundreds of slides or complex Excel models. But once the consultants leave, internal teams are left catching up on the day jobs they had to put on hold for weeks, and the AI strategy sits idle because there’s no internal capability to execute it.

To avoid this, organisations need a hybrid approach: a balance of in-house expertise, strategic partnerships, and external advisors. Most importantly, there should be a ‘chaperone’ figure – someone who can walk the journey with the organisation at its own pace, ensuring the transformation is sustainable and scalable. Such a figure is also helpful in terms of having an objective outsider who is not part of internal politics and can candidly point to the North Star – sometimes playing the ‘good cop’ and other times the ‘bad cop.’

In your opinion, is a chief AI officer needed to ensure AI initiatives will scale and have the desired impact?

Many AI initiatives sit within the CIO or CTO organisation. The core mission of these departments is often to ‘keep the lights on.’

As a result, AI projects become low priority. This is a key reason why these projects don’t scale or deliver impact.

On top of that, when AI initiatives are placed in these departments, employees often don’t even have access to generative AI tools because relevant websites are blocked. I’ve seen employees transfer company data to their personal PCs so they can work with AI tools like ChatGPT at home. This is a nightmare scenario for CIOs and CTOs.

I do believe the chief AI officer role is essential, but it must operate closely with other C-level leaders. If the role ends up merely coordinating AI efforts in a hub-and-spoke model, it becomes ineffective.

If you already have a chief digital officer, a CTO, and a CIO, adding a chief AI officer can cause overlaps and conflicts. Therefore, this role needs to be clearly defined and strategically positioned. The visionary leader driving AI doesn’t need to understand all the technical details, but they must believe in AI’s transformative potential and communicate that belief effectively.

Another critical misconception lies in language, communication, and change management. How leaders frame AI adoption matters immensely. For example, in one of the largest German organisations I worked with, a new CEO wanted to make an impact. At major global conferences, he repeatedly said, ‘I’m going to save X billion dollars with AI,’ framing AI purely as a cost-cutting tool.

I gave feedback that this messaging was counterproductive. If employees hear that AI is primarily about cost-cutting, they will resist it with everything they have. Instead, leaders should present AI as an opportunity for growth and empowerment.

The conversation should be: ‘This is the future. There’s no turning back. But we’ll work with you. We’ll co-create talent transformation maps, personal development plans, and team growth strategies. How can you imagine your job being augmented by AI? How can AI help you add more value? We’ll upskill and reskill you. And in the process, you might save enough time to finally take your evenings or weekends off.’

In industries like financial services, this is already becoming a reality. In R&D organisations, tools like GitHub Copilot accelerate time to value for research projects. Ultimately, how we articulate the opportunity AI presents – how we inspire and address employees’ concerns – will determine whether AI strategies succeed or fail.

What are the most critical factors that determine success or failure when implementing an enterprise-wide AI strategy?

The most critical factor, without a doubt, is talent transformation.

When developing an end-to-end AI transformation roadmap, the projected business impact is often quoted between 20–40%. This could affect top-line revenue, bottom-line profit, EBITDA, or a combination of these.

Naturally, this excites CFOs and board members. However, I always caution them: this doesn’t mean we can simply cut costs by 30%. Even if we wanted to, there isn’t enough skilled talent available to do the transformative work AI requires. The only viable path forward is upskilling and reskilling the workforce, which accounts for 80–90% of the solution.

Talent transformation involves both technical and cultural training. It’s about shifting mindsets, fostering an AI-ready culture, and ensuring employees have foundational AI knowledge.

Alongside talent transformation, responsible AI is another crucial success factor. Consider a midsize bank I’m advising; they currently operate with 500 AI systems. This situation is like having 100 different ERP systems. Many organisations have implemented AI solutions without conducting proper impact and risk assessments or establishing guardrails.

This is where responsible AI becomes essential. Without embedding responsible AI principles from the design stage throughout the product or solution lifecycle, problems are inevitable. Irresponsible AI systems are like cybersecurity vulnerabilities – ticking time bombs. Eventually, they will lead to scandals or catastrophic failures.

Having worked closely with the Office of Responsible AI at Microsoft, I’ve seen firsthand how essential it is to prioritise responsibility from the outset.

Whether or not governments enforce AI regulations, these issues will resurface. Why wait for a crisis when it can be addressed proactively?

Finally, I must emphasise the role of visionary leadership. Leaders at the top must be committed to AI transformation. They need to provide the necessary time, resources, and investment. And, just as importantly, they must understand that AI adoption won’t be perfect from day one. AI algorithms start ‘dumb’ and become smarter over time. Leaders must grasp at least this much about the technology to champion it effectively.

Ultimately, success comes down to talent transformation, responsible AI, and visionary leadership – all working in harmony.

What other aspects of responsible AI are important for organisations to consider?

Responsible AI starts with core principles, and globally, these principles are remarkably consistent across industries. I recently conducted a global review of AI regulations and responsible AI guidelines for a client, and the same key principles came up repeatedly: transparency, accountability, fairness, privacy, reliability, and security.

However, knowing these principles isn’t enough. The critical factor is executive sponsorship.

Leadership at the highest level must understand that AI technologies carry significant risks. To protect the business – its finances, legal standing, and reputation – responsible AI practices must be adopted proactively, not just because regulations demand it.

For example, I’m working with a client in a country without any AI regulations. Yet, they’re still performing gap analyses and impact assessments because they recognise the risks. The potential consequences are enormous. There are countless cases of AI-related scandals resulting in billions of dollars in lawsuits. While many of these incidents are quietly handled behind the scenes to honour the data privacy of the individuals involved and to avoid reputational damage, the risks are real and significant.

What advice would you give to organisations that are just beginning to develop a generative AI strategy?

My key recommendation for organisations starting their generative AI journey is to find a guide or ‘chaperone.’

This could be an advisor, an external organisation, or someone who can support the company at the right pace. AI transformation isn’t something you can inject overnight or complete in an eight-week sprint. It needs to be infused at a pace the organisation can absorb, adopt, and grow with.

Sometimes, an organisation might be ready to sprint for a few weeks or months, but then may need to slow down to focus on training and absorbing the benefits of the technology. Every industry and organisation has its own rhythm, and it’s crucial to respect that.

Having external guidance is especially valuable. AI is an exciting, ‘shiny’ object, and internally, this can create friction and power struggles. An external advisor can serve as a neutral mediator – someone who isn’t caught up in internal politics. Sometimes, I play the ‘bad cop,’ telling clients they need to invest more or that progress will take longer. Other times, I’m the ‘good cop,’ encouraging them by reminding them about the North Star of their AI transformation and showing achievable milestones.

After 26 years in corporate life, I understand how challenging internal dynamics can be. AI transformation is deeply human-centric and culture-centric. If people don’t want to share their data, they won’t. You can have the best tools and automation, but resistance to change will derail any effort. That’s why having an inspiring, external catalyst can make all the difference.
