What's Next for AI, What's Next For Us?



COLLABORATION

MOMENTUM

DISRUPTION IMPACT

REGULATION


MASTHEAD

CEO & PUBLISHER

ANA C. ROLD

EDITOR-IN-CHIEF

SHANE SZARKOWSKI

ART DIRECTOR

MARC GARFIELD

MULTIMEDIA MANAGER

WHITNEY DEVRIES

EDITORS

JEREMY FUGLEBERG

MELISSA METOS

PHOTOGRAPHER

MARCELLUS MCINTOSH

RESEARCHER

EILEEN ACKLEY

EDITORIAL ADVISORY BOARD

ANDREW M. BEATO

FUMBI CHIMA

KERSTIN COATES

DANTE A. DISPARTE

AUTHORS

NIKOS ACUÑA

ANDREA BONIME-BLANC

LEONOR DIAZ ALCANTARA

ASHA CASTLEBERRY-HERNANDEZ

LISA CHRISTEN

RUI DUARTE

SIR IAN FORBES

LISA GABLE

GREG LEBEDEV

ANITA MCBRIDE

JOYSY JOHN

EUGINA JORDAN

BERNHARD KOWATSCH

MARISSA QUIE

AIDA RIDANOVIC

STACEY ROLLAND

DANIEL SHIN

TARJA STEPHENS

LOUISA TOMAR

Copyright © by Diplomatic Courier™ and Medauras Global Publishing 2025. All rights reserved under International and Pan-American Copyright Conventions. Published in the United States by Medauras Global and Diplomatic Courier.

LEGAL NOTICE. No part of this publication may be reproduced in any form—except brief excerpts for the purpose of review—without written consent from the publisher and authors. Every effort has been made to ensure the accuracy of information in this publication; however, the authors, Diplomatic Courier, and Medauras Global make no warranties, express or implied, in regard to the information and disclaim all liability for any loss, damages, errors, or omissions.

EDITORIAL. The content represents the views of its authors and does not reflect those of the editors or the publishers. Every effort has been made to ensure the accuracy of information in this publication; however, Medauras Global and the Diplomatic Courier make no warranties, express or implied, in regard to the information, and disclaim all liability for any loss, damages, errors, or omissions.

CITATIONS & SOURCES. All articles in this special edition have already been or will be published on the Diplomatic Courier website, including relevant hyperlink citations.

PERMISSIONS. This publication cannot be reproduced without the permission of the authors and the publisher. For permissions please email: info@medauras.com with your written request.

ARTWORK. Cover design via Adobe Stock. Artwork and design by Marc Garfield for Diplomatic Courier.

The future of AI we deserve, for better or worse

The future of AI is here, and while it feels surprising it isn’t exactly unexpected. From AI’s emerging ability to understand missions and act independently to accomplish them, to its active creation of new AI agents to carry out those missions, the rapidity of innovation is surprising—even if the developments themselves seem logical in retrospect. Anticipation of such a leap forward has been rife for years, which is why over that time experts (including many of ours) have spoken about standard setting and best-practice norms, if not outright regulatory guardrails for AI.

That is a task which looks more complex than ever, because we have at least three unique and increasingly divergent approaches to AI. China is leaning into state control of AI as a strategic asset. The EU is bolstering regulatory guidelines that are meant to be flexible and resilient but which critics say will stifle innovation. Meanwhile, the U.S. is taking a very light touch on regulation to encourage rapid innovation to stay ahead of its competitors in the AI race. Taken together, many are concerned about what this divergent approach means for the digital future of society.

Truly global standards are likely beyond us for the foreseeable future. As a society—or as a set of various global societies—we’ve understood for some time that exponential technologies have put us at an inflection point. Having not managed to agree on standards for AI development, we are now entering a period of AI uncertainty we foresaw but could not prevent.

What will this mean for our shared digital future? Regardless of what you may have read or heard, that’s unclear. To gain some insights into what scenarios we might see play out, or what we can do to steer toward scenarios we may prefer, Diplomatic Courier asked its expert community for their analysis. The response was exactly what we expected—overwhelming in volume and representative of a wide sampling of perspectives.

We hope this digital compilation of commentaries gives you some insight into what the future of AI could be, what you would like it to be, and what you might be able to do to help that future come into being.


Navigating the Many-Headed Hydra of Global Tech Regulation By: Andrea Bonime-Blanc

How Personal Data Sovereignty Could Save Us from AI’s Darkest Risks By: Nikos Acuña

How Education Can Make AI Policies More Practical By: Joysy John

How We Can Win the Future of AI By: Lisa Gable

Human–Centric AI: Consensus Building Towards AI Innovation By: Daniel Shin

Build Momentum from the Middle to Find Common Ground on AI By: Stacey Rolland

Collaborating With AI in a Fractured World By: Leonor Diaz Alcantara

Power, Peril, and AI’s Governance Challenge to Democracy By: Aida Ridanovic

Speak Softly and Carry a Bag of Carrots: Soft Power in the Age of AI By: Louisa Tomar

How personal data sovereignty could save us from AI’s darkest risks

Image by Towfiqu barbhuiya from Unsplash.

In our rush to embrace artificial intelligence, we face an uncomfortable truth: the digital economy that enabled this revolution now threatens our democratic foundation. AI–powered misinformation tops global risk assessments. Job displacement looms over millions. Privacy erodes daily. And a fragmented regulatory landscape—with Europe favoring precaution, America prioritizing innovation, and China asserting state control—creates dangerous gaps in governance.

Yet amid these challenges emerges a radical solution hiding in plain sight: harnessing the power of your own data to unlock a new economic model for society.

The digital economy stands at a crossroads. The advertising–based model that built internet giants is faltering as privacy concerns mount and targeting becomes harder. Apple’s App Tracking Transparency feature alone cost Meta $10 billion in 2022. This economic reality is forcing a rethinking of the fundamental market principle underpinning the internet: your data for our services.

The promise of personal data sovereignty

Personal data sovereignty tools enable individuals, not corporations, to control their digital footprint. Instead of your health app, bank, and email provider each storing separate information about you, imagine a secure vault you control. Services would request specific access to relevant data, under your terms, for limited periods.

This isn’t mere theory. Tim Berners–Lee, who invented the web, leads the Solid project creating personal data “pods” that keep information independent from applications. Governments from Singapore to Finland are piloting similar initiatives. The EU’s forthcoming Data Act will establish frameworks for user–controlled data sharing.

But ownership alone isn’t enough. We need a “semantic layer”—structured context that helps AI systems understand data’s meaning, relationships, and permitted uses. When your birthday is tagged with standardized metadata, any AI knows exactly what this represents and how it can be used. This combination—personal ownership with semantic context—creates a foundation for safer AI grounded in transparent, user–provided information.
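To make the combination concrete, here is a minimal sketch in Python of a personal data pod whose entries carry standardized semantic tags and whose data is released only under user–defined, purpose–bound, time–limited grants. All names and methods are invented for illustration; the Solid project’s actual APIs differ.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical sketch: invented names, not the Solid project's real API.

@dataclass
class PodEntry:
    value: str
    semantic_type: str      # standardized tag, e.g. "schema.org/birthDate"

@dataclass
class AccessGrant:
    service: str            # who may read
    semantic_type: str      # what kind of data they may read
    purpose: str            # the permitted use
    expires: datetime       # for how long

class PersonalDataPod:
    def __init__(self) -> None:
        self.entries: dict[str, PodEntry] = {}
        self.grants: list[AccessGrant] = []

    def grant(self, service: str, semantic_type: str, purpose: str, days: int) -> None:
        """The user, not the service, decides scope, purpose, and duration."""
        self.grants.append(AccessGrant(
            service, semantic_type, purpose,
            datetime.now() + timedelta(days=days)))

    def read(self, service: str, key: str, purpose: str) -> str:
        """A service gets data only under a matching, unexpired grant."""
        entry = self.entries[key]
        if any(g.service == service and g.semantic_type == entry.semantic_type
               and g.purpose == purpose and g.expires > datetime.now()
               for g in self.grants):
            return entry.value
        raise PermissionError(f"{service} has no valid grant for {key}")

pod = PersonalDataPod()
pod.entries["birthday"] = PodEntry("1990-04-12", "schema.org/birthDate")
pod.grant("health-app", "schema.org/birthDate", "age-verification", days=30)
print(pod.read("health-app", "birthday", "age-verification"))  # "1990-04-12"
```

The design point is that the service never holds the data; it asks each time, under a grant the user can scope, time–limit, or revoke.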

The implications extend beyond privacy. Such a model could combat misinformation by enabling AI to trace claims to verified sources. It could reduce algorithmic bias by providing clearer context. And it could create new economic opportunities as people monetize their own data instead of surrendering value to platforms. It will also create jobs, generating new demand for data integrity and accuracy workers to ensure provenance—one of the first emergent roles of the era of exponential automation.

For businesses, this shift presents both challenge and opportunity. Companies that partner with users rather than exploit them will build sustainable advantages. For governments, it necessitates rethinking regulatory frameworks to recognize data as a personal asset.

Most crucially, this approach returns agency to individuals in the AI age. Rather than being passive data sources or unwitting subjects of algorithmic decisions, people become active participants in a more equitable digital economy.

The technology exists. The policy momentum grows. What’s needed now is collective will to reimagine our relationship with data. By empowering individuals as data stewards rather than data subjects, we can harness AI’s benefits while mitigating its gravest risks.

The promise of this revolution needs a more human touch than ever.

About the author: Nikos Acuña is the Founder and CEO of Aion Labs. He is an interdisciplinary AI researcher, technologist, entrepreneur, author, and artist.

Great AI Divide requires diplomacy balancing acceleration and ethics

Photo by Niklas Ohlrogge (niamoh.de) on Unsplash.

In 2025, AI is no longer an emerging force; it is a defining one. From economic structures to geopolitical alliances, its impact is profound and immediate. The question we face is not whether AI will reshape the world, but how we ensure it does so with wisdom, fairness, and strategic foresight. This is where diplomacy, governance, and cross–sector collaboration must take center stage.

The accelerating trajectory of AI has placed humanity at a crossroads. Futurist Ray Kurzweil envisions AI reaching technological singularity very soon, unlocking unprecedented capabilities. Meanwhile, Melanie Mitchell urges caution, emphasizing the complexity of intelligence and the risks of overestimating AI’s abilities. But the future is not about choosing between acceleration and restraint; it is about building a bridge between them.

AI as a diplomatic imperative

Diplomacy has long been the art of balancing power, managing uncertainty, and fostering cooperation. The governance of AI demands the same approach. Current deregulation winds introduce superposition into AI diplomacy, where both rapid innovation and potential risks coexist. Unchecked AI can accelerate economic growth, yet without oversight, it risks reinforcing bias, deepening inequality, and enabling mass surveillance. Just as climate diplomacy has evolved into a complex global network, AI requires an AI Diplomacy Network, a structured, multilateral effort to ensure alignment on ethical AI development, security standards, and economic impacts.

Policymakers must move beyond reactive regulation and embrace AI as public infrastructure. The European Union’s AI Act exemplifies an agile approach, balancing risk management with innovation incentives. Similar frameworks should be scaled internationally, ensuring AI does not become a tool of unilateral dominance but rather a shared resource for global progress.

AI REQUIRES AN AI DIPLOMACY NETWORK, A STRUCTURED, MULTILATERAL EFFORT TO ENSURE ALIGNMENT ON ETHICAL AI DEVELOPMENT, SECURITY STANDARDS, AND ECONOMIC IMPACTS.

Human–centered AI: a shared responsibility

An AI–first mindset starts with human–first ethics. The rapid deployment of generative AI, predictive analytics, and autonomous systems raises profound ethical questions. Businesses have a responsibility to embed ethical oversight into AI development, as seen in Microsoft’s Responsible AI Standard. Governments must facilitate cross–sector dialogue, ensuring that AI serves the public interest rather than corporate monopolies or state control.

By 2026, AI could either entrench digital divides or become a force for global equity. The outcome hinges on quantum diplomacy, where nations, businesses, and civil society must navigate AI’s uncertainty with both speed and strategic foresight. In an era of deregulation, diplomacy remains the only force capable of ensuring that intelligence, artificial or human, remains a tool for collective progress rather than unchecked power.

About the author: Rui Duarte is an expert in political economy (LSE) with over a decade of leadership in public policy, global communications, and science PR.

Avoid real AI crisis with smart regulation for business

Photo by kate.sade from Unsplash.

What if we’ve been looking at regulating AI all wrong? Global agreements on ethics, bias, and safety are critical for humanity—but they’re notoriously slow and bureaucratic. By the time world leaders reach a consensus—if they ever do—the AI landscape will have already changed dramatically.

How can we take immediate action? It requires us to flip the script and rethink: What if AI isn’t stealing our jobs but companies are? This is not because companies are malicious, but because they’re doing exactly what they’re designed to do: maximizing efficiency and serving their shareholders.

This is where companies face a big dilemma. When AI makes automation cheaper and faster, companies would be irresponsible not to use it. Ignoring AI means falling behind. But widespread automation without a plan for workers leads to mass unemployment—a crisis governments are unprepared to handle. Instead of relying on governments to fund universal basic income—a financial impossibility—we need policies that encourage businesses to reinvest in workers.

This could start with Human–AI Integration Policies. For example, if a company is boosting productivity with AI or AI “super agents,” it should simultaneously be investing in its people. That might include retraining employees for new roles or creating new roles for humans. Alternatively, companies would still have the option to automate without offering reskilling by contributing to a workforce transition fund through an Automation Tax. An additional opportunity could be requiring companies above a certain size and revenue to maintain a minimum number of human workers per unit of revenue, ensuring they’re not maximizing profit at the expense of mass layoffs.
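As a rough illustration of how these options could fit together, the sketch below works through the arithmetic; the tax rate and headcount floor are hypothetical numbers chosen only to show the mechanics, not proposed figures.

```python
# Illustrative arithmetic for the proposals above. The rate and the
# headcount floor are invented for the example, not proposed figures.

AUTOMATION_TAX_RATE = 0.05       # share of AI savings owed to a transition fund
MIN_WORKERS_PER_MILLION = 2.0    # minimum human workers per $1M of revenue

def automation_tax(ai_savings: float, reskilling_spend: float) -> float:
    """Reskilling spend offsets the tax; any remainder funds worker transitions."""
    return max(0.0, AUTOMATION_TAX_RATE * ai_savings - reskilling_spend)

def meets_headcount_floor(workers: int, revenue: float) -> bool:
    """Check the minimum-human-workers-per-unit-of-revenue rule."""
    return workers >= MIN_WORKERS_PER_MILLION * revenue / 1_000_000

# A firm saving $10M through automation while spending $200k on reskilling,
# with 150 workers on $60M revenue:
print(automation_tax(10_000_000, 200_000))     # 300000.0 owed to the fund
print(meets_headcount_floor(150, 60_000_000))  # True: 150 >= 120
```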

WHAT IF AI ISN’T STEALING OUR JOBS BUT COMPANIES ARE? THIS IS NOT BECAUSE COMPANIES ARE MALICIOUS, BUT BECAUSE THEY’RE DOING EXACTLY WHAT THEY’RE DESIGNED TO DO: MAXIMIZING EFFICIENCY AND SERVING THEIR SHAREHOLDERS.

We should absolutely continue working on solutions for regulating AI technology itself. But we can’t forget: the most effective way to act now is by shaping how businesses integrate AI into their workforces. Economic growth and job stability are universal priorities, and smart policies can ensure AI–driven innovation doesn’t come at the cost of human livelihoods. By balancing efficiency with workforce sustainability, we can build an AI–powered future where all players thrive together. The choice isn’t between progress and protection—it’s about designing a system where we all succeed.

About the author: Lisa Christen is the CEO of Christen Coaching and Consulting GmbH.

Preparing for the impact of AI on cryptocurrency

Photo by Sajad Nori from Unsplash.

Artificial intelligence (AI) presents both opportunities and new dangers in the growing cryptocurrency industry. With the industry’s growth, AI is increasingly becoming germane to its existence, and the relationship is rapidly expanding. AI is driving innovation in the cryptocurrency industry through various sub–domains, including international transactional behavior, global exchange rates, and blockchain quality.

Despite its growing integrated role in international financial markets, the global community has yet to present a comprehensive framework on how to deal with artificial intelligence’s role in the cryptocurrency industry. No country on the world stage has taken on the leadership to create such a framework. However, the U.S. has expressed interest in taking on this leading role. The current administration signaled this strategic goal by signing a recent executive order intended to make the U.S. “the crypto capital of the planet.” The executive order addressed the governance of digital assets, central bank digital currencies, and the appointment of a new AI & Cryptocurrency Czar. Moreover, the intent of the executive order is to generate billions of dollars for American investors by curtailing regulatory overreach on digital assets.

The intersection of artificial intelligence and cryptocurrency also presents significant economic security challenges. There is no international framework or comprehensive guidance that provides standards on how to deal with the risks associated with this disruptive technological relationship. Uncontrollable AI–crypto tools will likely exacerbate wealth inequality if they continue operating without regulation—driving false predictions, manipulation, and unexpected market shifts. As a result, investors, especially from the American working class, are more at risk of losing significant money; many American working class investors already experienced losses from unregulated AI–crypto tools during and after the cryptocurrency bubble of 2017–18.

UNCONTROLLABLE AI–CRYPTO TOOLS WILL LIKELY EXACERBATE WEALTH INEQUALITY IF THEY CONTINUE OPERATING WITHOUT REGULATION—DRIVING FALSE PREDICTIONS, MANIPULATION, AND UNEXPECTED MARKET SHIFTS.

The U.S. will gain more credibility in this leading role by facilitating international standards on AI and cryptocurrency integration. The UN AI Advisory Body and Bretton Woods institutions are great starting points for promoting international standards on data privacy, transparency, predictive analytics, innovation, and network operations. Washington should present a balanced approach to the global community as part of these international standards. While Washington attempts to curtail overreaching regulatory behavior, the current administration should continue implementing protective mechanisms for American consumers since the market remains volatile. American consumers must be protected from risks associated with AI–crypto relationships, including biased advertising and money laundering, and must also be allowed to take legal action against proven, abusive behavior.

About the author: Asha Castleberry-Hernandez is a national security and foreign policy expert, U.S. Army veteran, author of “Why National Security Matters,” and former U.S. Congressional candidate.

Building a new foundation for the intelligent age

Photo by Tom Chen on Unsplash

Today we stand at the crossroads of a new era where AI is not just any other technology, but a global force changing how we work and live. China’s DeepSeek and a generation of AI “super agents” are driving a transformation that can feel overwhelming in its pace and scope. Yet technology isn’t all that will direct this transformation. Leaders will have a profound impact on the direction of AI transformation through how they guide their organizations and sectors.

Competition around AI is heating up around the globe. While the U.S. is dialing back on regulation, the startling pace of China’s AI growth illustrates the need for shared governance norms. The World Economic Forum has also surfaced mis– and disinformation as one of the top critical risks, further amplifying the need for trust and resiliency in a highly contested digital ecosystem.

Leaders everywhere will need to work to build new frameworks and foundations toward the effective, fair adoption of AI. Good leadership here means prioritizing trust, transparency, and addressing the socio–economic divide. No less important is increasing AI literacy and constant reskilling so that workers can respond to new demands while ensuring that the enabling power of AI works for all.

More than just another emerging technology, AI signifies a crucial evolution in the way we work and create value. To ensure that its benefits are harnessed to the fullest, leaders must invest in their employees, help them evolve, and enable new career paths while improving organizational performance. By concentrating on human–centered leadership transformation grounded in ever–evolving organizational learning systems, leaders can go beyond mere technological change in the workforce, creating robust competitive advantage.

TO ENSURE THAT ITS BENEFITS ARE HARNESSED TO THE FULLEST, LEADERS MUST INVEST IN THEIR EMPLOYEES, HELP THEM EVOLVE, AND ENABLE NEW CAREER PATHS WHILE IMPROVING ORGANIZATIONAL PERFORMANCE.

This new era also needs new, innovative partnerships. Businesses can collaborate with governments, civil society, and academia. These partnerships should involve sharing information, developing and co–creating new foundations that make effective and efficient systems possible, and implementing AI changes that provide sustainable work.

In the end, the intelligence age will thrive on leaders who elevate people, promote innovation, and build resilient organizations. The leaders who can act as catalysts for a future where human creativity and technology thrive together to spur growth and provide customer value will be those who embrace strong AI literacy and undertake enterprising reskilling initiatives that embed ethics in every AI effort. In doing so, they empower everyone to take an active part in building the future together.

About the author: Tarja Stephens is an entrepreneur, advisor, and leading voice in AI readiness, the future of work, and talent development.

Can AI solve hunger? The promise of technology in food security

Up to 757 million people face chronic hunger daily. Without new technologies and AI, we risk leaving millions behind.

As head of the World Food Programme (WFP)’s global innovation accelerator, I’ve seen first–hand AI’s potential—potential that is realized only if we approach it responsibly to solve real problems.

Real solutions for real challenges

Image by seungwoo yon from Pixabay.

A lot of people only know hunger as an abstract issue, but it’s a real human crisis exacerbated by conflict, natural disasters, or economic shocks. To be effective, AI must address the real, immediate challenges people face—not just those of the affluent, but also those of people in remote areas who don’t have the same economic opportunities. For example, AI–powered precision agriculture can help smallholder farmers even in remote areas of the Global South optimize resources such as water, fertilizers, and seeds, increasing productivity while minimizing waste. AI–driven supply chain systems can reduce transport costs and food waste and ensure food supply chains are more efficient and effective. These innovations succeed only when guided by local knowledge and on–the–ground realities.

AI for its own sake is not the answer. For humanitarian response, its value lies in addressing both the immediate and long–term needs of those most affected by hunger. Smallholder farmers, for example, often lack access to even basic technologies, let alone advanced AI tools.

Public–private partnerships are crucial to bridge this gap, combining cutting–edge AI with deep local understanding. This is the mission of the WFP Innovation Accelerator, which connects high–impact innovations with WFP’s global reach in over 120 countries.

Timely and accurate data for the Global South is essential

At the heart of AI lies data, and its quality determines the effectiveness of AI–driven solutions. But hunger hotspots are often data deserts. For AI to make a meaningful impact, the humanitarian sector, governments and private companies must prioritize building robust, accurate datasets that reflect the realities of users or regions that are often overlooked in global data models. What is sometimes considered an “edge case” can be the core reality of people living in remote areas in the Global South. Beyond offering insights, AI can play a crucial role in forecasting. By predicting food shortages, extreme weather–related risks or resource gaps, AI–powered early warning systems can enable proactive interventions before crises escalate.

However, these systems are only as effective as the data driving them.
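In spirit, such early warning reduces to forecasts compared against intervention thresholds. The sketch below shows that trigger logic with hypothetical indicators and thresholds; real systems fuse many more data sources and models.

```python
# Minimal sketch of an early-warning trigger. Indicator names and
# thresholds are hypothetical, chosen only to show the idea.

PRICE_ALERT = 1.30      # flag if staple prices are forecast to rise >30%
RAINFALL_ALERT = 0.60   # flag if forecast rainfall falls below 60% of normal

def early_warning(price_forecast_ratio: float, rainfall_ratio: float) -> bool:
    """Return True when forecasts warrant proactive intervention."""
    return price_forecast_ratio > PRICE_ALERT or rainfall_ratio < RAINFALL_ALERT

# Prices forecast at 1.4x the baseline, rainfall at 85% of the seasonal norm:
if early_warning(1.4, 0.85):
    print("Trigger: pre-position food stocks before the crisis escalates")
```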

Scaling through proven impact

To scale AI solutions effectively, we need to focus on high–impact areas that will enable real breakthrough innovations. Instead of another chatbot, consider precision agriculture or personalized nutrition for everybody, or AI–optimized, collaborative systems that ensure farm–to–fork digitization on blockchain to enable more food, higher nutrition, and lower costs all at the same time. This is a new investment opportunity that can make a big impact a reality. But scaling solutions isn’t just about funding; it’s about proving they work. Demonstrating the impact of AI through rigorous testing and real–world results is essential for humanitarian organizations to build trust and unlock further investment, laying the foundation for wider adoption.

Trust: The cornerstone of AI’s success

Perhaps the biggest takeaway is the need for actors to come together. Humanitarian organizations, private companies, startups, and local stakeholders together can build public trust in AI. Public–private partnerships can democratize AI’s benefits, ensuring access and improving people’s lives, including for vulnerable communities.

AI can transform the fight against hunger, but only if used responsibly. By combining innovation with partnership, investment and a commitment to expand AI to the Global South, we can work toward a future without hunger. The opportunity is immense, and the time to act is now.

About the author: Bernhard Kowatsch is Head of Innovation Accelerator at the United Nations World Food Programme, and is Co–Founder of ShareTheMeal.

Balance AI innovation, democratic safeguards to protect human agency

Photo by Robs on Unsplash.

In the aftermath of World War II, Hannah Arendt warned that technological progress could alienate humans from their political existence, diminishing human judgment. She was similarly disturbed by its resistance to human control. Today, artificial intelligence (AI) presents a similar challenge, raising urgent questions about democratic accountability and human agency. AI’s potential to generate misinformation, bias, and manipulated content threatens to distort public understanding, undermine trust, and erode informed participation in democracy. Addressing these risks requires robust governance frameworks that balance innovation with ethical safeguards.

The rapid development of AI is driven by tech giants like Microsoft, Google, and OpenAI, whose influence often prioritizes profit over democratic accountability. The convergence of AI with national security, exemplified by the U.S.–China technological rivalry and the rise of DeepSeek, further complicates regulation. Cybersecurity threats, such as Russia’s 2017 NotPetya attack, which caused $10 billion in damage, highlight the inadequacy of existing frameworks and the growing reliance on private–sector expertise. While companies like Microsoft and Google have played critical roles in countering cyberattacks, their involvement raises concerns about transparency and impartiality. Corporate overreach is exacerbated by opaque government–tech relations, as the Musk–Trump alliance demonstrates.

To address these challenges, a multi–stakeholder governance model is essential. This approach should integrate technical expertise from big tech with the legitimacy of international regulatory bodies and civil society groups.

• Drawing lessons from UN peacekeeping, a coalition of private cybersecurity firms, government agencies, and international organizations could mitigate AI–driven threats under multilateral agreements.

• Governments and multilateral institutions should establish regulatory sandboxes to rigorously test AI technologies for ethical compliance prior to deployment.

• A UN–backed AI ethics and security framework, akin to nuclear arms control treaties, could prevent monopolization by corporations or states.

• Cross–border partnerships involving governments, private firms, academia, and civil society could enhance transparency and establish best practices for AI safety and cybersecurity.

WHILE COMPANIES LIKE MICROSOFT AND GOOGLE HAVE PLAYED CRITICAL ROLES IN COUNTERING CYBERATTACKS, THEIR INVOLVEMENT RAISES CONCERNS ABOUT TRANSPARENCY AND IMPARTIALITY.

Arendt cautioned that thoughtlessness enables totalitarianism. Without democratic oversight, AI’s fusion with state and corporate power risks creating new forms of authoritarianism. Human agency requires active political engagement.

By embedding ethical considerations into AI policy and fostering public discourse, societies can harness AI’s benefits while protecting democracy. AI must not replace human judgment; instead, it should act as a mechanism to enhance democratic participation and accountability.

About the author: Dr. Marissa Quie is a Fellow and Director of Studies in HSPS at Lucy Cavendish College (University of Cambridge).

Navigating the many-headed hydra of global tech regulation

Photo by Bruno Martins on Unsplash.

At a time of exploding global AI and other frontier technology invention, the Trump administration has elevated the mostly anti–guardrail Silicon Valley tech accelerationists to positions of great influence. Witness the well–covered work of Elon Musk with DOGE—whether he’s the official head or not—and the less discussed but still very anti–regulation proponent David Sacks, who now chairs the President’s Council of Advisors on Science and Technology. Early Trump executive orders reflect this hostility toward guardrails.

While regulatory upheaval in the U.S. is much–discussed, the AI and exponential landscape globally is evolving rapidly.

We are heading into a multipolar world generally (and more specifically regarding technology and technology regulation) where the most significant “poles” will be the U.S., China, and the EU. Witness the unveiling of DeepSeek and increased state support for exponential tech development in China on one hand, and the doubling down on robust regulatory systems for exponential tech in the EU on the other.

What this translates into for business and other decision makers is the prospect of an increasingly complicated navigation landscape requiring a lot of reliable data, systemic planning, and brave action—not necessarily in sequence but simultaneously and continuously.

Below are a few considerations.

For business:

Governance still matters. Don’t relax your internal governance and guardrails just because the government is.

Stakeholders still matter. The wellbeing of your stakeholders is paramount—if you treat them badly, they will not stay.

Reputation risk has a long tail. Your corporate reputation is on the line—short–term gains are not worth the long tail of a lost reputation.

WE ARE HEADING INTO A MULTIPOLAR WORLD GENERALLY (AND MORE SPECIFICALLY REGARDING TECHNOLOGY AND TECHNOLOGY REGULATION) WHERE THE MOST SIGNIFICANT “POLES” WILL BE THE U.S., CHINA, AND THE EU.

For individual stakeholders:

Be an educated consumer/user. Educate yourself from reliable sources about the tech products and services you use or purchase.

Use your boycott power. Don’t join—or use or buy a product from—a company or organization that doesn’t care about your and other stakeholders’ safety or wellbeing.

For other key actors:

The work of responsible governments, intergovernmental organizations, and civil society could not be more critical to the future wellbeing of the planet. Why? Because these exponential technologies—including GenAI—hold unbelievable promise in addition to the perils for which we need good governance. So please continue and build on your essential work.

About the author: Dr. Andrea Bonime–Blanc is the Founder and CEO of GEC Risk Advisory, a board advisor and director, and author of multiple books.

How education can make AI policies more practical

Image by Alexandra_Koch from Pixabay.

The global race to shape AI’s future is defined by diverging national strategies. In the United States, AI innovation is led by the private sector in a largely deregulated environment. China’s state–driven approach prioritizes AI for economic and military applications, while Europe’s regulation–first model focuses on ethics, transparency, and compliance. These differing paths not only shape AI’s development but also influence global power dynamics, making it increasingly difficult to find common ground for responsible AI governance.

If governments cannot align, where else can we turn to establish best practices? One promising area is education, which features several forward–thinking initiatives that present a rare opportunity for global alignment—even while national AI policies remain fragmented. Moving forward, the focus must be on:

• Establishing international collaboration through forums like Salzburg Global to develop best practices and ethical guidelines for AI use in education.

• Bridging the policy–education gap by investing in national AI literacy initiatives.

• Encouraging teacher–led AI experimentation to ensure AI tools meet real classroom needs.

As a Salzburg Global Fellow, I am involved in the Future of Teaching initiative, which brings together global thought leaders to explore how we can maximise the benefits while reducing the risks of AI. These discussions stress the need for teachers to be equipped with the right knowledge, skills, and tools to integrate AI effectively to benefit students.

At the national level, the UK is taking proactive steps to prepare educators for AI. As an advisor to the EdTech Evidence Board, I work with diverse stakeholders to assess AI’s impact on education. The UK government is also investing in AI training for teachers, ensuring AI enhances, rather than replaces, human–led instruction.

IF GOVERNMENTS CANNOT ALIGN, WHERE ELSE CAN WE TURN TO ESTABLISH BEST PRACTICES? ONE PROMISING AREA IS EDUCATION, WHICH FEATURES SEVERAL FORWARD–THINKING INITIATIVES THAT PRESENT A RARE OPPORTUNITY FOR GLOBAL ALIGNMENT.

Through grassroots initiatives at schools, colleges, and universities, we must equip individuals with AI literacy—not just technical skills but also the critical thinking needed to navigate an era of AI–generated misinformation. Teacher–led AI initiatives can create a workforce that understands and engages with AI responsibly, sharing best practices and developing real–world examples of practical, responsible use.

At a time when AI’s impact on jobs, governance, and society remains uncertain, education offers a rare space for consensus building. AI is already transforming learning—the real challenge is ensuring that teachers, students, and societies are prepared for what comes next.

About the author: Joysy John is an entrepreneur, edtech advisor, and innovation consultant. Joysy is the ex–Director of Education at Nesta and ex–CIO of Ada National College for Digital Skills.

How we can win the future of AI

Photo by Matt Palmer on Unsplash.

We are at an inflection point: free nations can lead, or we can lose the advantages of the tech revolution, which was developed under free markets and free people in the United States and allied nations.

AI provides the most significant chance to drive massive economic growth, achieve groundbreaking scientific discoveries, and change how we work and learn. But with opportunities come risks—not the kind that demands government restraints, but the kind that requires the flexibility and investment to create a world that adapts more responsively to our needs and challenges.

The question isn’t whether AI should move forward—it’s how we make sure it works to support freedom and democratic ideals. The internet, computers, and even the printing press were once seen as threats. Yet every time, those who embraced innovation won. AI is no different.

To get the most out of AI, we need to accelerate investment, not drive ourselves into endless bureaucratic conversations. AI can revolutionize industries—improving healthcare, speeding up research, boosting productivity, and strengthening national security. The companies, countries, and people who integrate AI early will outpace those who hesitate.

At the same time, we must stay resilient against real dangers. But the risk isn’t AI itself—it’s who controls it. If the U.S. doesn’t lead, bad actors will. China, for example, is already using AI to track its citizens and control speech.

First, free societies must move fast and innovate rapidly by driving competition across pioneering companies of all sizes, ensuring the process will be secure and fair.

WE SHOULD AVOID HEAVY–HANDED REGULATIONS THAT WOULD PUT DOMESTIC AI DEVELOPMENT AT A DISADVANTAGE AGAINST COMPETITORS IN COUNTRIES WHERE BAD ACTORS DON’T PLAY BY THE RULES.

Second, we must use AI defensively— for cybersecurity, fraud prevention, and national security—before our rivals do. Third, we should avoid heavy–handed regulations that would put domestic AI development at a disadvantage against competitors in countries where bad actors don’t play by the rules.

We don’t need policies driven by fear. We need incentives for responsible AI use, strong intellectual property protections, and a mindset that values groundbreaking discoveries over red tape.

The future of AI isn’t something to fear—it’s something to build. The people who move fearlessly, embrace change, and out–innovate the competition will define the next chapter of progress. That’s the choice in front of us. Move fast—or fall behind.

About the author: Lisa Gable is a Diplomatic Courier Advisory Board member, Chairperson of World in 2050, and WSJ and USA Today best–selling author of “Turnaround: How to Change Course When Things Are Going South.”

Human–centric AI: Consensus building towards AI innovation

Image courtesy of École polytechnique, Paris, CC BY-SA 2.0, via Wikimedia Commons.

As governments around the world pursue different approaches to foster AI innovation while also regulating AI development, deployment, and use, public distrust of AI has been growing due to the technology’s unmanaged, disruptive effects on various sectors. If such distrust of AI continues to go unchecked, the market demand for AI adoption may decline, potentially frustrating the progress of AI innovation and its implementation across communities.

What is required in this increasingly divisive environment is a shared vision of AI that respects the public’s genuine aspiration for the technology but combines it with norms that can properly actualize such a vision. Government and key stakeholders should look beyond AI as a mere utilitarian tool and instead consider how this technology could enhance society’s cultural, educational, and other standards.

For instance, the early internet spurred public excitement due to its seemingly endless potential to enhance the information society with democratized benefits for all. If there is a shared vision of AI that can inject similar levels of widespread enthusiasm, then strong foundational AI norms can lead the way toward realizing the public’s expectations.

Furthermore, any pursuit of AI regulation, guidance, and best practices should always principally support society at large. Much of the public’s distrust of AI appears to stem from the growing perception that it is chiefly being used to serve everyone but the people. This perception can harden into the cynical belief that AI is merely a disruptive force serving only a few. If the public’s distrust of AI is left unaddressed, widespread adoption of this technology will stall, leading society to lose its opportunity to harness the benefits of AI for all.

IF THE PUBLIC’S DISTRUST OF AI IS LEFT UNADDRESSED, WIDESPREAD ADOPTION OF THIS TECHNOLOGY WILL STALL, LEADING SOCIETY TO LOSE ITS OPPORTUNITY TO HARNESS THE BENEFITS OF AI FOR ALL.

Fortunately, the OECD AI Principles and NIST’s AI trustworthiness characteristics provide a roadmap for policymakers and industry personnel to pursue a human–centric approach to AI development, deployment, and use. Even absent a mandatory regulatory regime, there is a ripe opportunity for government, industry, academia, and other organizations to collaborate and pursue a shared AI standard that supports the people. Intersectoral collaboration is key to building a consensus on AI policy.

About the author: Daniel Shin is the Center for Legal and Court Technology’s (CLCT) Cybersecurity Researcher at William & Mary Law School and the Coastal Node Commonwealth Cyber Initiative Research Scientist.

Build momentum from the middle to find common ground on AI

Photo by Matt Seymour on Unsplash.

In an era defined by polarization, algorithms often amplify extreme views on our screens and politicians appear locked in opposing camps. Yet, as with many complex challenges, progress in artificial intelligence (AI) policy depends on finding middle ground—cutting through the noise that divides us to focus on our shared goals.

Diplomacy teaches us that words matter, but successful negotiations often require looking beyond rhetoric to uncover real areas of agreement. When it comes to AI, building future–proofed policies and best practices based on shared goals is essential. We must resist the temptation to let the pursuit of perfection on either end of the political spectrum stall meaningful progress in AI standard–setting.

U.S. Vice President J.D. Vance’s recent remarks at the Paris AI Summit offer insights. While his “America–first” rhetoric and call for deregulation sparked controversy, his opposition to AI–facilitated censorship and emphasis on job creation through AI could serve as unifying themes. Vance’s advocacy for a level playing field—where innovators of all sizes can thrive—further underscores opportunities for bipartisan collaboration on issues like preventing AI misuse and ensuring equitable access to its benefits.

The debate between safety and deregulation need not be a zero–sum game. A thriving AI ecosystem depends on healthy competition, which requires clear rules, consumer trust, and an environment that fosters innovation across the board. National security is another critical area where stakeholders can align. Policymakers across the spectrum recognize the transformative impact of AI on security and defense, making it a natural focal point for bipartisan action.

History reminds us that progress often comes from unexpected alliances.

WHEN IT COMES TO AI, BUILDING FUTURE–PROOFED POLICIES AND BEST PRACTICES BASED ON SHARED GOALS IS ESSENTIAL. WE MUST RESIST THE TEMPTATION TO LET THE PURSUIT OF PERFECTION ON EITHER END OF THE POLITICAL SPECTRUM STALL MEANINGFUL PROGRESS IN AI STANDARD–SETTING.

Policy logjams have been broken before when leaders prioritized shared goals over reductive partisan divides. While today’s political climate may highlight our differences—and those differences are real—it is more urgent than ever to create momentum in AI policymaking and standard–setting by focusing on the common ground that exists in between.

Polarization may be a challenge, but it is not insurmountable. The future of AI policy depends on our ability and willingness to work together—to create momentum through points of consensus. By centering on shared goals, we can ensure that AI policies and standards do not fall further behind the pace of innovation.

About the author: Stacey Rolland is a leading expert in emerging technology policy and strategy in Washington, DC.

Collaborating with AI in a fractured world

Photo by Eugene Golovesov on Unsplash.

AI is rapidly reshaping education, sparking both excitement and concern among educators. Educators’ mix of enthusiasm and fear toward AI mirrors the broader societal response to the rapid advancement of technology.

Popular culture has both reflected and shaped public perception of AI. Films like 2001: A Space Odyssey (1968), The Matrix (1999), and Avengers: Age of Ultron (2015) have explored AI’s potential as both a force for good and a threat to humanity. While these portrayals may have once seemed fantastical, today’s AI developments make their warnings increasingly relevant.

Despite these concerns, AI’s creators overwhelmingly aspire to advance human progress. This vision was central to discussions at the launch of my think tank, Saviesa, which is dedicated to transforming education in the age of AI.

AI’s potential in education is immense— it can personalize learning, expand access to knowledge, and support students with diverse learning needs. AI–driven tools are already transforming classrooms by helping educators streamline tasks and focus on student engagement. However, these advancements come with challenges. AI–generated misinformation, deepfake technology, and algorithmic biases raise concerns about academic integrity and the reliability of educational content. Additionally, the growing reliance on AI sparks fears of job displacement and increased educational inequality.

The challenge lies in harnessing AI thoughtfully—leveraging its strengths while safeguarding the human qualities that drive innovation and ethical decision making. By fostering responsible AI policies and prioritizing education, society can ensure that AI serves as a collaborator rather than a substitute for human ingenuity. The goal is not to replace human creativity but to amplify it, building a future where technology and humanity coexist harmoniously.

AI–GENERATED MISINFORMATION, DEEPFAKE TECHNOLOGY, AND ALGORITHMIC BIASES RAISE CONCERNS ABOUT ACADEMIC INTEGRITY AND THE RELIABILITY OF EDUCATIONAL CONTENT. ADDITIONALLY, THE GROWING RELIANCE ON AI SPARKS FEARS OF JOB DISPLACEMENT AND INCREASED EDUCATIONAL INEQUALITY.

About the author: Leonor Diaz Alcantara is an award–winning leader with nearly 25 years of experience as a CEO, specializing in transformation, change management, and organizational growth. In January 2025, she launched the Saviesa think tank.

Power, peril, and AI’s governance challenge to democracy

Illustration via Adobe Stock.

Artificial intelligence (AI) is reshaping economies, security, and governance at a pace governments struggle to keep up with. At this point, the question is not whether society will be transformed by AI, but how it will be transformed—and whether democracy remains intact as governance institutions fall behind.

The global market landscape is undergoing a seismic shift, with technology firms and state–backed initiatives investing heavily in AI research and infrastructure. A 2025 World Economic Forum report estimates that automation could displace 92 million jobs by 2030 while creating 170 million new roles, primarily in healthcare, education, and technology. However, job losses in administrative and manufacturing sectors may exacerbate economic inequality, necessitating policies that prioritize workforce adaptation.

Within organizations, AI is changing how teams function, making some roles redundant while demanding new skill sets. Companies must balance automation with human capital investment, ensuring employees are reskilled rather than displaced. A McKinsey report suggests that by 2030, nearly half of all workers will require retraining in digital skills, yet corporate re–skilling efforts remain underfunded and inconsistent. The challenge lies in integrating AI without undermining workforce stability.

Geopolitical power structures are also being redrawn, particularly in cybersecurity, defense, and intelligence. Nations that achieve AI supremacy gain significant leverage in military and intelligence operations, while others risk dependence on foreign technologies. The decisions made today regarding regulation, investment, and international cooperation will shape whether AI contributes to global stability or deepens strategic divides.

Algorithmic decision making is increasingly embedded in hiring, financial approvals, and public services, yet concerns persist over bias and fairness. A University of Cambridge study found that AI recruitment tools often reinforce biases rather than eliminate them. Without clear governance, AI could erode trust in public institutions by making decision making less transparent and more difficult to challenge.

Deepfakes and AI–driven disinformation threaten elections, polarize societies, and weaken public trust in institutions. AI–generated content can manipulate public opinion at scale, making it harder for voters to distinguish fact from fiction. Governments and technology firms must act decisively to balance AI–driven innovation with protections against mass disinformation.

AI’s rapid expansion also raises sustainability concerns. According to a recent Goldman Sachs report, global data center power demand is projected to increase by up to 165% by 2030, primarily driven by the expansion of artificial intelligence applications. AI hardware production depends on lithium and rare earth elements, heightening geopolitical tensions and environmental risks. Managing these challenges requires forward-thinking policies on energy efficiency and sustainable resource use.

The future of democratic institutions depends on the governance decisions made today. The U.S. prioritizes market–driven expansion, the EU emphasizes regulatory safeguards, and China integrates AI into centralized state control. These approaches will shape whether AI governance reinforces democratic values or concentrates power in ways that weaken public accountability.

The real risk is not that AI will become too powerful, but that institutions will remain too weak to shape it. Technology does not erode democracy. Failure to govern it does.

About the author: Aida Ridanovic is an international strategic communications expert with over 20 years of experience in stakeholder engagement and diplomacy.

Speak softly and carry a bag of carrots: Soft power in the Age of AI

Image via Adobe Stock.

It’s easy to overlook the significance of soft power in today’s geopolitical landscape, as governments increasingly turn to hard power to sway nations or move markets through force and coercion. But in the race for AI dominance, soft power is a tool we cannot afford to discard. Digital public infrastructure and similar capital–intensive modernization investments are more than just technological decisions; they are geopolitical commitments and societal ambitions. Technology choices are not just about efficiency and cost but increasingly about advancing a positive vision of the future.

When countries opt for technology through China’s Belt and Road Initiative (BRI), they may be signing up for long–term dependency on a regime that prioritizes control over innovation, creativity, or expression. The Chinese Communist Party (CCP) is increasingly shaping global markets by offering cheap, state–backed technology that comes with far fewer restrictions on privacy, security, and human rights. Without a commitment to democratic values, governments risk sacrificing national security, personal freedoms, and political autonomy. Bringing only sticks to combat this agenda would be a strategic error and a losing hand.

A Competing Vision

Western companies already face a daunting challenge: competing with Chinese products that benefit from massive state investments and unrestricted data access, giving them an inherent advantage. The price of opaque agreements with CCP–backed firms is often long–term influence and acquiescence. Governments that choose information and communications technology (ICT) infrastructure from authoritarian states aren’t just buying technology and importing authoritarian values; they are also gambling on future economic coercion and political pressure tied to their most sensitive ICT systems.

While some governments view these tradeoffs and partnerships as no–brainers, others work aggressively towards AI action plans that envision a thriving tech ecosystem that builds a prosperous future through local innovation. The future shouldn’t be determined by a top–down government agenda, but by an open and competitive marketplace where diverse and audacious ideas can thrive and be tested. Government control, surveillance, and censorship are inherently antithetical to human flourishing.

WHEN COUNTRIES OPT FOR TECHNOLOGY THROUGH CHINA’S BELT AND ROAD INITIATIVE (BRI), THEY MAY BE SIGNING UP FOR LONG–TERM DEPENDENCY ON A REGIME THAT PRIORITIZES CONTROL OVER INNOVATION, CREATIVITY, OR EXPRESSION.

The Stakes of the AI Race

The global competition for AI will be resource intensive, requiring massive capital investments, sustainable management of energy, minerals, and water, robust policy frameworks, and complex tradeoffs impacting labor, business, education, and society. Every country must decide which partners, approaches, and financing models to embrace.

The United States must continue to leverage its historical strengths—values like the rule of law, transparency, good governance, and cooperative engagement with government and local businesses in foreign markets. This engagement, alongside its enviable innovation and a powerful humanitarian and development agenda, builds vital trust and the goodwill necessary to position the U.S. as the preferred technology and digital trade partner. U.S. private sector engagement remains a cornerstone of its influence abroad, fostering relationships built on possibility, shared values, market competitiveness, and long–term stability.

U.S. COMPANIES, CONSTRAINED BY ETHICAL AND REGULATORY STANDARDS, WILL FACE AN UNEVEN PLAYING FIELD AGAINST AUTHORITARIAN–BACKED AND SUBSIDIZED FIRMS.

The growing tension between a democratic and an authoritarian vision for AI is playing out across the globe in multilateral standard–setting and legislative bodies. Without a strong commitment to privacy, security, data protection, and good governance, there is a real risk of a race to the bottom. U.S. companies, constrained by ethical and regulatory standards (or simply the expectation of such from constituencies at home), will face an uneven playing field against authoritarian–backed and subsidized firms. They will also continue to encounter overregulation from rivals and allies alike, in response to a growing lack of trust as U.S. soft power erodes.

The Future of Innovation: A Call to Action

America is at a crossroads. The outcome of the AI race will not only determine which country dominates the next generation of technology—it will influence the global balance of power in economic, political, and military terms. The choices we make today will shape the future of innovation, democracy, global security, and economic stability.

To maintain its leadership, the U.S. must double down on its investment in the democratic fundamentals (and its bag of carrots) that undergird its hard power:

• Increase investment in global digital infrastructure through the Development Finance Corporation, ensuring secure solutions are available globally.

• Strengthen AI governance frameworks that promote ethical AI development and deployment at home and through multilateral bodies.

• Expand investments in democracy and governance assistance that foster trust and offer meaningful alternatives to authoritarian capture.

• Expand technology partnerships, strategic cooperation, and common policy frameworks with like–minded partners such as Singapore.

The AI race is not just about who has the best technology; it’s about who can offer a more compelling vision for our shared digital future—one that is safer, stronger, and more prosperous for everyone. If the U.S. gives up its soft power, it may surrender the future to those who would use technology to undermine freedom, innovation, and progress everywhere. The time to invest in a positive, global vision for a democratic and prosperous future enhanced by AI and American leadership is now.

About the author: Louisa Tomar is Director of the Center for Digital Economy and Governance at the Center for International Private Enterprise (CIPE).
