
Innovator’s Saga - An Interview with Lord Tim Clement-Jones
Topic: AI Governance, Ethics, and the Future of Innovation
Column Editor: Darrell W. Gunter (President & CEO, Gunter Media Group) d.gunter@guntermediagroup.com
"I'm on team human. That's my team. That's why I got involved in technology- because we have to work out where humans fit into all of this.” — Lord Tim Clement-Jones
Introduction
We’re honored to welcome Lord Timothy Clement-Jones, a leading voice in AI governance and policy. As the former chair of the House of Lords Select Committee on AI and co-chair of the All-Party Parliamentary Group on AI, Tim has played a critical role in shaping the conversation around artificial intelligence in the UK and beyond.
With a distinguished background in law and as a spokesperson for the creative industries, he brings a unique perspective on the intersection of technology, policy, and innovation. AI’s evolution offers remarkable opportunities alongside significant challenges, and today we’ll explore governance, ethics, and the future of industries worldwide.
Setting the Stage
DARRELL GUNTER: Tim, welcome to the “Innovator’s Saga.” Is there anything you’d like to add to your bio that I might have missed?
LORD TIM CLEMENT-JONES: Probably the only thing I’d add is — I’m on Team Human. That’s why I got involved in technology. We have to work out where humans fit into all of this.
International AI Governance
DG: Let’s start with the big picture. What kind of international governance is needed for AI?
TJ: The way we regulate is diverging. The EU has a lot of regulation, which many people aren’t happy about. The UK has taken a slightly different approach, but still plans to regulate the largest language models. The U.S.? We don’t quite know yet — Donald Trump tore up the executive order, so it may be some time before a new system emerges, if at all. I suspect most action will be at the state level.
For developers and adopters — especially multinational ones — that’s a problem. The answer is interoperability, through adopting international global standards. ISO, the OECD, and NIST in the U.S. are working on risk assessment frameworks, audit standards, testing, and continuous monitoring. These are developed by industry experts, not politicians, making them practical.
There’s a barrier for small and medium-sized companies — sometimes you must pay to use these standards. That needs fixing. But they are emerging quickly, and I hope we see developers follow these global standards rather than three separate regimes.
“Interoperability comes from adopting international global standards … If we can align on those, it won’t matter so much whether you’re following EU, UK, or U.S. law — your systems will meet a shared benchmark.”
DG: On a scale from one to five — five meaning strong international collaboration and one meaning it’s still in its infancy — where are we?
TJ: Four out of five. The OECD is a strong convening organization for the West. China prefers the UN, but all these major bodies are converging. Since the G20 principles in 2019, new standards have been designed to reflect those principles.
Finding and Following Standards
DG: For our readers, is there a website you’d recommend that lists these standards?
TJ: Not one single site. The NIST website is a good starting point, as is the OECD AI Policy Observatory. The UN website also has relevant material.
DG: And in terms of the UK, would you say we’re at a four or maybe a five in establishing standards?
TJ: We have an excellent standards-setting body — the British Standards Institution. They work closely with NIST and with the European standards body. But our government hasn’t pushed hard enough on regulation. I’d like to see certain standards made mandatory.
Open Source and Guardrails
TJ: I’m in favor of open source — it allows smaller developers to compete — but we need guardrails. Even if the large commercial developers are 100% ethical, open-source models can be misused. Standards should be mandatory for those models.
DG: DeepSeek has been making headlines in the U.S., and Senators have raised concerns about its user policy and data usage.
TJ: DeepSeek should be adhering to a set of standards. They’ve innovated in interesting ways using fewer resources due to export bans on high-end chips, but standards still apply. The same goes for Meta’s open-source LLaMA model. Just because something is open source — or from China — doesn’t make it inherently bad. The key is whether it’s ethical and safe.
The Risk-Based Approach
DG: Could you define a risk-based approach and why it’s essential?
TJ: It’s about outcomes. You assess the possible harm — misinformation, deepfakes, pornography, reputational damage. In high-stakes contexts — social security, immigration — you’re impacting lives in profound ways. Those require stronger oversight.
The EU model is correct in identifying high-risk uses, though I find it overly complex. They’re also moving toward standards, so I believe there will be convergence.
“It’s about outcomes. You assess the possible harm … and in high-stakes contexts, those require stronger oversight.”
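For readers who want to picture how such a tiered, risk-based assessment might be operationalized, here is a minimal Python sketch, loosely modeled on the EU AI Act’s tiers. The tier names, example use cases, and obligations are illustrative assumptions, not a restatement of any statute.

# A minimal sketch of a risk-based triage table, loosely modeled on the
# EU AI Act's tiers. Tier names and example use cases are illustrative
# assumptions, not a statement of the law.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, human oversight, continuous monitoring"
    LIMITED = "transparency duties (e.g., label AI-generated content)"
    MINIMAL = "no extra obligations beyond existing law"

# Hypothetical mapping from use case to tier; a real assessment would
# weigh context, affected persons, and severity of potential harm.
USE_CASE_TIERS = {
    "social-security eligibility scoring": RiskTier.HIGH,
    "immigration case triage": RiskTier.HIGH,
    "deepfake generation tools": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def required_oversight(use_case: str) -> str:
    """Return the oversight obligations for a use case (default: minimal)."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} risk -> {tier.value}"

if __name__ == "__main__":
    for case in USE_CASE_TIERS:
        print(required_oversight(case))

The point the sketch makes is the one Lord Clement-Jones makes in prose: the obligation follows from the context of use and the potential for harm, not from the technology itself.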
Ethics as the Foundation
DG: What ethical principles should guide AI?
TJ: The OECD principles from 2019, later adopted by Beijing, still hold:
• Transparency — People should know when AI affects them.
• Accountability — There must be a responsible party.
• Fairness — Systems should be tested to avoid bias.
• Explainability — Where possible, decisions should be explainable.
• Right to Redress — People must be able to challenge decisions.
These aren’t complicated, but they’re essential.
Copyright and Creators’ Rights
DG: What’s the current state of copyright law regarding AI training on copyrighted material?
TJ: In the U.S., “fair use” is under legal challenge. In the UK, the government is considering a text and data mining exception that favors big tech. Creators are pushing back.
Artists aren’t against AI — many use it creatively — but they want to be compensated if their work is used for training. It’s fine to copy Van Gogh, but not David Hockney — he’s alive, and his work is protected. Intellectual property exists to encourage creativity. Remove that incentive, and you risk stifling artistic work.
Tracking Use and Transparency
DG: Could blockchain be used to protect content?
TJ: Yes — for proving provenance. Watermarking, if indelible and linked to blockchain metadata, could also help. But above all, we need transparency: developers must disclose what data they’ve used for training.
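To make the provenance idea concrete, here is a minimal Python sketch: it fingerprints a work with SHA-256 and wraps the digest in a metadata record that could be anchored on a blockchain. The record fields and the anchoring step are illustrative assumptions, not a description of any particular registry or watermarking system.

# A minimal sketch of content provenance: hash a work's bytes, wrap the
# digest in a metadata record, and (hypothetically) anchor that record
# on a blockchain. Field names are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone

def fingerprint(path: str) -> str:
    """SHA-256 digest of the file's bytes; identical bytes give an identical hash."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def provenance_record(path: str, creator: str) -> str:
    """Build a JSON provenance record that could be anchored on-chain."""
    record = {
        "content_sha256": fingerprint(path),
        "creator": creator,
        "registered_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, indent=2)

# A registry (or smart contract) storing this record would give a
# tamper-evident, timestamped claim of authorship; later copies of the
# work can be matched back to the claim by recomputing the hash.

This also illustrates why transparency about training data matters: a hash-based registry can only prove that a work existed and who registered it; it cannot reveal whether a developer used the work for training unless the developer discloses its sources.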
The Next 6–18 Months
DG: What do you see in the immediate future for AI?
TJ: AI will become embedded in our daily lives — on phones, in the workplace, in tools we don’t label as “AI.” Agentic AI — systems that act proactively — will take on personal and administrative tasks.
Robotics integration will grow, especially in healthcare. We’ll also see more “walled garden” models — smaller systems with curated, high-quality data for reliable results.
DG: In scholarly publishing, I think AI could improve peer review and cut down on poor-quality research.
TJ: Absolutely. With curated data and robust auditing, AI can help ensure higher quality. But “garbage in, garbage out” still applies. Cleaning and controlling inputs will be essential.
Closing Thoughts
DG: Tim, thank you for sharing your insights.
TJ: Thank you, Darrell. It’s been a real pleasure.
DG: And that’s it for this edition of the “Innovator’s Saga.” Remember — leadership begins with you.
Note: You can watch the video of the interview on Darrell W. Gunter’s YouTube Channel, Leadership with Darrell W. Gunter. https://youtu.be/3qeTTjgn5R0?si=foJgqr-F4SiPidEH