


![]()



Across the Middle East, enterprise technology conversations are moving beyond rapid digital adoption toward more deliberate, long-term strategy. The region is no longer experimenting with transformation; it is embedding it into core business operations.
One of the clear trends related to this is the growing investment in localized digital infrastructure. Regional data centre expansion reflects underscores a strategic shift toward sovereignty, compliance, latency optimisation, and AI-readiness. As intelligent workloads scale, proximity to data becomes critical. Infrastructure is therefore being seen as foundational to resilience and competitive positioning.
Parallel to this is the steady maturation of enterprise automation. Organizations are deploying robotic process automation and workflow intelligence not as pilots, but as embedded operational layers. The objective is precision, efficiency, and scalability. Yet the narrative is evolving. Automation today is increasingly being evaluated through the lens of governance, risk exposure, and accountability.
Against the backdrop of such significant digital acceleration, a broader shift toward human-centric AI is becoming the need of the hour. AI is increasingly being integrated into procurement decisions, financial controls, service management, fraud detection, and customer engagement. But AI maturity is no longer measured by model accuracy alone. It is defined by governance frameworks, traceability, role-based controls, and risk-calibrated autonomy as shared by the experts spoken to for this edition’s cover story.
Enterprises are adopting structured autonomy models, allowing AI to operate independently in low-risk, reversible contexts while retaining human-in-the-loop oversight for financial, regulatory, or strategic decisions. The principle is clear that AI may execute, but accountability remains human.
Simultaneously, investments in digital learning ecosystems and creative technology platforms indicate recognition that workforce capability must evolve alongside intelligent systems.
The region is moving towards hybrid, intelligent, and accountable enterprise models, where infrastructure supports intelligence, automation is governed by design, and technology augments judgment rather than replacing it. The next phase of enterprise maturity will not be defined by how fast systems operate, but by how responsibly they scale.
R. Narayan Editor in Chief, CXO DX
RAMAN NARAYAN
Co-Founder & Editor in Chief narayan@leapmediallc.com Mob: +971-55-7802403
Ali Raza Designer
SAUMYADEEP HALDER
Co-Founder & MD saumyadeep@leapmediallc.com Mob: +971-54-4458401
Nihal Shetty Webmaster
MALLIKA REGO
Co-Founder & Director Client Solutions mallika@leapmediallc.com Mob: +971-50-2489676

20 » REDEFINING
From shadow AI risks to autonomous decision-making, governance and accountability are redefining what AI maturity truly means.
Over the three decades of its existence, Zoho Corp. has followed a trajectory shaped by core beliefs around building engineering depth, serving customer needs, earning their trust, and maintaining a long-term view.
Andy Mackay, Head of EMEA Marketing at Genetec discusses regional momentum, AI’s impact on physical security, open-platform strategy, vertical focus, cybersecurity, and the growing role of sovereignty in Saudi Arabia.
Nirmal Kumar Manoharan, VP – Sales at ManageEngine and Sujoy Banerjee, Regional Business Director, UAE, ManageEngine highlight how ecosystem collaboration, architectural guidance, and local presence enable the company to scale across the region’s enterprise customer environments.
Inference is infrastructure, whether you planned for it or not writes Lori MacVittie, Distinguished Engineer & Chief Evangelist at F5
Leo Brunnick, Chief Product Officer, Cloudera discusses the company’s roadmap, data fabric capabilities, and its containerized deployment model.
Mateo Rojas-Carulla, Head of Research, AI Agent Security, Check Point Software describes how with AI agents, the stakes are different and the attack vectors are emerging faster than many organizations anticipated
Santosh Varghese, Managing Director of Tosh Nxt Tech Ventures explains how Toshiba’s roadmap, BYODC initiative, and localized market strategy are positioning the company for sustained growth.
Mostafa Kabel, CTO, Mindware Group discusses how tech partners must rethink AI deployment


Fred Crehan appointed Head of Middle East to lead regional growth & build partner network
Verkada, a leader in AI-powered physical security technology, today announced its expansion into the Middle East with the establishment of a Dubai-based office and the appointment of Fred Crehan as Head of Middle East. This expansion reflects Verkada’s continued global growth, driven by the region’s rapid urban development, large-scale infrastructure projects, and rising demand for modern, cloud-based security solutions.
“The Middle East is experiencing rapid urban development, large-scale infrastructure projects, and a strong focus on security and innovation,” said Eric Salava, Chief Revenue Officer at Verkada. “Verkada’s cloud-based platform aligns well with the region’s ambitions, and Fred’s deep understanding of the local landscape and proven track record of scaling high-growth businesses make him the ideal leader to bring our integrated platform to market.”
Crehan brings over 25 years of enterprise technology experience. Most recently, he served at Confluent, where he successfully launched the Dubai office and built a robust partner ecosystem.
"Safety is a top priority in the Middle East, driven by a commitment to world-class tourism and the rapid development of new urban centers," said Fred Crehan, Head of Middle East at Verkada. "As cloud adoption continues to accelerate, many organizations are looking for modern solutions that can help them overcome traditional resource constraints. Verkada’s platform is uniquely positioned to support government, hospitality, retail, and logistics providers, as well as the large-scale construction and real estate sectors."
While Verkada already supports global customers operating in the Middle East, the Dubai office represents the company’s first

dedicated regional office. Verkada plans to grow its Middle East team, with initial hires focused on sales engineering and leadership roles.
Verkada’s international expansion builds on the company’s recent investment from CapitalG, Alphabet’s independent growth fund, which valued Verkada at $5.8 billion. The investment will help accelerate Verkada’s AI innovation and support its more than 30,000 customers worldwide.
Multi-year agreement between ServiceNow and OpenAI will enable direct customer access to frontier model capabilities, custom ServiceNow AI solutions, and increased speed and scale with no bespoke development required
ServiceNow and OpenAI have announced an enhanced strategic collaboration to power agentic AI experiences and accelerate enterprise AI outcomes. The agreement unlocks a deep collaboration between OpenAI technical advisors and ServiceNow engineers that will be equipped with its frontier models, which will give customers direct access to frontier capabilities, custom ServiceNow AI solutions built and aligned to their unique roadmaps, and increased speed and scale with no bespoke development required. ServiceNow will build direct speech-tospeech technology using OpenAI models to break through language barriers and offer more natural interactions. With the latest OpenAI models including GPT-5.2, ServiceNow will unlock a new class of AI-powered automation for the world’s largest companies.
“ServiceNow leads the market in AI-pow-
ered workflows, setting the enterprise standard for real-world AI outcomes,” said Amit Zavery, president, chief operating officer, and chief product officer at ServiceNow. “With OpenAI, ServiceNow is building the future of AI experiences: deploying AI that takes end-to-end action in complex enterprise environments. As companies shift from experimenting with AI to deploying it at scale, they need the power of multiple AI leaders working together to deliver faster, better outcomes. Bringing together our respective technologies will drive faster value for customers and more intuitive ways of working with AI.”
Bringing OpenAI models into the ServiceNow AI Platform complements a customer’s ServiceNow configuration management database (CMDB) while also offering native, embedded access to intelligence to further inform actions that

will be taken within workflows. ServiceNow’s AI Control Tower then provides the governance and orchestration layer, giving organizations centralized visibility into how models are applied across workflows, how they interact with enterprise data and systems, and how AI-driven actions are executed at scale in a controlled, auditable way.
The center is a facility to showcase and test operational security systems before deployment

AI-driven public safety systems are moving into daily operations across the UAE. To support this shift, Milestone Systems has opened a new Experience Center in Dubai Media City, giving government and enterprise stakeholders a place to test and shape operational security systems before they are deployed at scale.The Milestone Experience Center is designed for this environment. Located at Aurora Towers in Dubai Media City, the facility anchors Milestone Systems’ long-term commitment to the UAE and establishes Dubai as a regional hub for open-platform security systems that support public safety operations.
Unlike traditional demo spaces, the center is designed for operational use. Visitors walk through live public safety and city monitoring scenarios. These environments show how the Milestone XProtect platform connects video feeds, AI analytics, and external systems inside one command structure. The focus is on the practical aspects of incident detection and information sharing, and on how response teams act on intelligence rather than raw footage. The center also provides a collaborative workspace. Technology partners, system integrators, enterprise leaders, and public sector teams can build proofs-of-concept,
test integrations, and prepare joint deployment plans. This shortens the path between product capability and field implementation, which is critical as smart city and security programs continue to expand across the Emirates.
The opening event brings together stakeholders from government, enterprise, and technology sectors, with partners from Dubai Holding, Jumeirah Group, Emaar, Miral Group, Abu Dhabi Ports, and the Danish Consulate. The mix reflects the cross-sector cooperation required to deliver large-scale urban security and intelligence systems.
“The UAE has set a clear direction for smart, secure, and digitally connected cities. The next challenge is execution. Video, AI, and operational systems must work together in real time and at scale,” said Louise Bou Rached, Director for the Middle East, Turkey, and Africa at Milestone Systems. “Our Experience Center in Dubai allows partners and stakeholders to see that integration in action. It shows how open-platform architecture turns video into intelligence that supports faster decisions and coordinated public safety operations at scale.”
New AI-powered- consulting service delivers a secured platform, shared standards, and reusable AI assets to help organizations accelerate growth and drive innovation
IBM has announced IBM Enterprise Advantage, a first-of-its-kind asset-based consulting service that combines proven AItools and expertise to help clients quickly build, govern, and operate their own tailored internal AI platform at scale.
Organizations can now use IBM Enterprise Advantage to redesign workflows, connect AI to existing systems, and scale new agentic applications without requiring changes to their cloud providers, AI models, or core infrastructure. This includes Amazon Web Services, Google Cloud, Microsoft Azure, IBM watsonx, and both open- and closed -source models, allowing companies to build on their existing investments.
IBM Enterprise Advantage brings together the technical and industry expertise of IBM consultants with technology built from IBM Consulting Advantage, IBM’s own in-
ternal AI-powered delivery platform. With a growing marketplace of industry-specific AI agents and applications, Consulting Advantage has already supported more than 150 client engagements and been shown to boost consultants’ productivity by up to 50% to help clients achieve faster results.
Now with the Enterprise Advantage service, IBM is giving clients access to the same proven approach and capabilities to build their own AI platforms and navigate the complex AI marketplace to drive enterprise value.
“AI has the potential to transform every business, but turning that potential into real, scalable value remains a challenge for many organizations,” said Lula Mohanty, Managing Partner – MEA, IBM Consulting. “At IBM, we’ve navigated this journey ourselves—using AI to modernize our

Lula Mohanty
Managing Partner, Middle East and Africa, IBM Consulting
operations and achieve measurable results. Enterprise Advantage extends that proven approach to our clients, combining human expertise, secure AI assets, and intelligent digital workers, so businesses can confidently scale AI and drive meaningful, lasting impact.”
Changes are designed to accelerate service delivery and business growth
Infoblox has announced significant enhancements to its Skilled to Secure Partner Program, unveiling a refreshed Service Provider track tailored to the evolving needs of Managed Security Service Providers (MSSPs) and Global System Integrators (GSIs). These changes are designed to provide partners with clearer requirements, flexible enablement pathways and expanded tools and benefits to accelerate service delivery and business growth.
“As the cybersecurity landscape continues to evolve, so too must the way we support our partners,” said Chris Millerick, VP of Partners and Alliances at Infoblox. “These updates reflect direct partner feedback and deliver more flexible, market-aligned paths to success within our Skilled to Secure ecosystem.”
The updated program introduces a Service Provider track with two invite-only subtracks specifically for MSSPs and GSIs, enabling partners to better align their busi-
ness models with Infoblox’s services. Key enhancements include:
• Flexible Enablement and Clear Requirements: New role-based enablement paths and clearly defined criteria reflect real-world go-to-market needs and partner capabilities.
• Enhanced Tools and Resources: Partners now have access to non-for-resale (NFR) demo environments, detailed operational runbooks and expanded benefits that help scale and accelerate solution delivery.
• Strengthened MSSP Support: The program updates provide MSSPs with the training, technical resources and go-tomarket support needed to grow recurring security service portfolios.
These enhancements build on Infoblox’s broader Skilled to Secure Partner Program, which rewards partners for investment in technical competency across sales, presales and professional services training. The program emphasizes competency as
These will power Smart Government, Fintech, Digital Health, and Large Enterprises
WSO2 has announced a sharpened, solutions-led focus to support large organizations accelerating smart government, fintech and digital health transformation programs. Building on its established platforms and solution portfolio, the company is increasing its industry alignment in high-potential regions, such as the Middle East, where national digital strategies and regulated modernisation initiatives are driving sustained demand for scalable, secure digital platforms.
The initial focus is on Banking, Financial Services and Insurance (BFSI), and Healthcare, with industry reference architectures and maturity models that combine AI, APIs, data, and integration into ready-to-deploy solution blueprints. Offerings include an API Banking Platform, AI-driven payment modernization, unified patient record architectures, and CMSaligned prior authorization frameworks. Government and public administration
solutions are expected to follow later this year based on WSO2 global initiatives around government data interoperability, government digital ID systems, and public and private API marketplaces, supporting the UAE’s continued expansion of smart government platforms and digital citizen services.
Alongside industry offerings, WSO2 is prioritising transformation initiatives aligned with UAE priorities such as cloud repatriation and hybrid cloud adoption, vendor consolidation, application modernization, API and data monetization, IT cost reduction, and the path towards AI. For instance, these initiatives will assist large enterprises to simplify complex integration landscapes, reduce dependency on fragmented vendor stacks, and modernize core digital platforms. This simplified, modern digital foundation will enable leadership teams to scale digital initiatives faster.

a key factor in long-term, profitable partnerships and enables partners to achieve higher status tiers such as Sapphire and Diamond.
The refreshed Service Provider track underscores Infoblox’s commitment to helping partners adapt to rapidly changing security demands. By aligning certification pathways and enablement resources with partner business models, Infoblox aims to empower MSSPs and GSIs to deliver differentiated services, enhance customer outcomes and scale efficiently.

“In the UAE, the conversation has moved beyond digital transformation to intelligent platform modernization at national scale,” said Mifan Careem, senior vice president and general manager of solutions at WSO2. “Our technology has long powered core integration and API platforms. What’s new is the focus on packaging this expertise into industry-ready solutions that help UAE organizations reduce fragmentation, meet regulatory and data residency requirements, and accelerate smart services, and form the foundation of an AI enabled landscape.”
Latest regional expansion addresses urgent demand for privilege-centric identity security in high-growth regions
BeyondTrust has expanded the BeyondTrust Pathfinder Platform to the United Arab Emirates, India, Singapore, and South Africa. The proliferation of agentic AI—autonomous systems capable of reasoning and executing actions without human intervention—has introduced unprecedented "shadow" risks. The Pathfinder Platform provides a unified, "privilege-centric" identity security experience, specifically engineered to secure this new threat vector across hybrid, cloud, SaaS, and OT environments.
"The explosion of machine identities in the modern enterprise ecosystem is amplifying the need to evolve identity security models," said Janine Seebeck, CEO at BeyondTrust. "In regions like the UAE, India, Singapore and South Africa, where rapid technological adoption meets stringent regulatory frameworks, the inability to see and control machine identities is no longer just a risk; it is a critical compliance failure.
Pathfinder unites the visibility, intelligence, and control necessary to close these gaps."
The expansion enables organizations to align with specific, evolving regulatory standards that demand robust security safeguards for all identity types:
• United Arab Emirates: Pathfinder supports organizations’ compliance with the National Electronic Security Authority (NESA) Information Assurance Standards, particularly for critical infrastructure sectors requiring strict access control and supply chain security.
• India: With the Digital Personal Data Protection (DPDP) Act mandating robust security safeguards and India Computer Emergency Response Team (CERT-In) requiring rapid incident reporting, Pathfinder’s automated discovery and AI-driven risk prioritization help organizations maintain the required security posture.

Janine Seebeck CEO, BeyondTrust
• Singapore: For owners of Critical Information Infrastructure (CII), Pathfinder supports organizations’ compliance with the Cybersecurity Code of Practice (CCoP) under the Cybersecurity Act, ensuring privileged access to essential services is tightly monitored and minimized.
• South Africa: The Pathfinder Platform aids organizations in adhering to Protection of Personal Information Act (POPIA) Condition 7, which mandates the integrity and confidentiality of personal information by securing the machine identities that often process this sensitive data.
Ankabut to equip educational and workforce development organisations with intuitive video and collaboration tools
Ankabut, a leading technology provider for the education and research community in the UAE, has announced the signing of a strategic partnership with WeVideo, a leading global cloud-based video learning platform, to accelerate digital learning innovation across the UAE and the wider GCC.
Through this agreement, Ankabut will integrate WeVideo into its portfolio of advanced education technology solutions. This will expand access to creative digital tools that support the UAE’s national innovation agenda and knowledge-based economy.
The UAE’s commitment to embedding digital excellence and innovation at the core of its education strategy sets the foundation for this initiative. These strategies are supported by national frameworks such as the National Strategy for Higher Education 2030 and policies promoting digital transformation. With these enhanced capa-

bilities, Ankabut aims to equip universities, K–12 schools, and workforce development organisations with intuitive, powerful tools for interactive content creation, video-based learning, and digital collaboration. This will help educators foster engagement, creativity, and 21st-century skills in alignment with the UAE’s vision for a smarter and more connected education ecosystem.
“Ankabut is dedicated to providing cutting-edge solutions that support the UAE's national strategies for a knowledge-based economy,” said Tarek Jundi, Chief Executive Officer of Ankabut. “Integrating WeVideo into our ecosystem is a strategic step that addresses the growing demand for highly engaging, active learning tools. This agreement strengthens our service offering
and ensures our member institutions remain at the forefront of global educational innovation by leveraging the power of video as part of the learning process.”
“Partnering with Ankabut marks a pivotal step in our commitment to global education and expansion into the GCC market,” said Kevin Knight, Chief Executive Officer of WeVideo. “Ankabut’s regional relationships and technological expertise make them the ideal partner to ensure seamless integration of our platform. Together, we will empower thousands of educators and students to tell their stories, demonstrate their learning, and accelerate the region’s shift toward creative and collaborative digital pedagogy.”
Certification validates Dynatrace’s SaaS platform on UAE Microsoft Azure
Dynatrace, a leading AI-powered observability platform, today announced it has received certification from the Dubai Electronic Security Center (DESC).
Established by the Government of Dubai, DESC sets rigorous cybersecurity standards to safeguard critical systems, promote secure cloud adoption, and enable transformational smart city initiatives. This milestone acknowledges that the Dynatrace SaaS platform meets the UAE Government’s stringent compliance requirements for cloud services, enabling public sector organizations to accelerate their digital transformation initiatives with confidence.
The DESC certification applies to Dynatrace on UAE Microsoft Azure, reinforcing its readiness to support Dubai’s vision for agile, data-driven public services. Government entities and regulated institutions can now adopt Dynatrace’s platform, knowing it aligns with DESC’s cloud governance and operational compliance standards.
“Dubai is rapidly scaling its digital infrastructure and investing in AI to reimagine
how public services are delivered,” said David Noël, Vice President Middle East and Africa at Dynatrace. “This certification affirms Dynatrace’s role as a key enabler of Dubai’s digital ambitions as it moves toward a digital-first future, helping public sector teams simplify complexity, drive smarter operations, and deliver seamless digital services to citizens, all while meeting the region’s regulatory expectations.”
Dynatrace is uniquely positioned to support public sector organizations in the UAE as they adopt modern cloud architectures and work to deliver more responsive, reliable, and efficient digital services.
Key platform capabilities now certified for DESC-aligned environments include:
• End-to-End Observability: Unified insights into applications, infrastructure, and user experience across hybrid and multicloud ecosystems
• AI-Powered Automation: Dynatrace’s AI continuously analyzes telemetry, detects anomalies, and pinpoints root causes
• Smart Operations: Automated responses

David Noël Vice President
MEA, Dynatrace
to performance, availability, and efficiency issues — reducing manual overhead and operational risk
• Regional SaaS Deployment: Operated on UAE Microsoft Azure with data residency, enabling alignment with local privacy and compliance requirements
• Framework Alignment: Dynatrace’s platform undergoes regular review by independent assessors to support alignment with regional mandates like DESC
This collaboration aims to address the rising infrastructure demands of the AI era by delivering high-performance, flexible networking solutions.

AurCore, a U.S.-based network technology innovator, has officially signed a Master Distribution agreement with DVCOM Technology, a leading value-added distributor (VAD) in the Middle East and Africa (MEA). The partnership was formalized during DVCOM’s Annual Roadshow, an event attended by regional industry leaders and key channel partners.
As the Master Distributor, DVCOM Technology will lead the distribution, marketing,
and technical support for AurCore’s portfolio across MEA markets. This collaboration aims to address the rising infrastructure demands of the AI era by delivering high-performance, flexible networking solutions.
The partnership brings together diverse software options, offering comprehensive access to commercial, open-source solutions, and AurCore’s signature AURC-NOS. It is supported by a robust hardware portfolio that includes a wide range of enterprise and industrial switches designed for high-demand environments. Through this collaboration, AurCore solutions will see expanded regional reach with rapid deployment across the GCC and African territories. Customers will also benefit from localized technical
support, with dedicated pre- and post-sales assistance delivered by DVCOM’s expert technical teams.
“At DVCOM, our mission is to bring bestin-class technology to our partners. Adding AurCore to our portfolio aligns perfectly with our commitment to market-leading solutions. Launching this partnership at our Annual Roadshow allowed us to showcase the immediate value this brings to our ecosystem as we navigate the AI era,” said Renjan George, Managing Director, DVCOM Technology.
AurCore, a U.S.-based network technology company built for the infrastructure demands of the AI era, offers high-performance solutions from 1G to 400G, AurCore provides versatile software options (AURC-NOS) and a vast range of Enterprise and Industrial switches designed to perform anytime, anywhere.
DVCOM Technology is a premier Value-Added Distributor in the MEA region, specializing in Unified Communications, Networking, Cybersecurity, and Cloud solutions.
The new program helps customers reach transformative AI outcomes faster
Cisco announced the launch of the Cisco 360 Partner Program after fifteen months of co-design with partners. Cisco’s success is built on close collaboration with its partners to meet customer needs in the fast-changing AI world. Now, Cisco is boosting how it supports partners while making it easier for them to help customers. The new program, built for developers, consultants, managed services providers, resellers, and other partner business models, better equips Cisco partners to deliver customer outcomes in the areas of AI-ready data centers, future-proofed workplaces, and digital resilience.
The Cisco 360 Partner Program is designed to provide clarity and empower partners to drive more predictable profitability. At the same time, with the new Cisco Partner Locator tool, customers can now search for the right partner across key Cisco portfolios like Security, Networking, Collaboration, Services, Splunk, and Cloud and AI Infrastructure.
“With our partners, we’ve strengthened what is already a world-class ecosystem to deliver even greater value and help our mutual customers connect, protect and thrive,” said Tim Coogan, Senior Vice President of Global Partner Sales at Cisco.
Cisco’s recent AI Readiness Index shows that being AI ready is a competitive advantage for companies. Meeting these needs relies on expert partners collaborating to provide essential infrastructure, services, and AI-native capabilities. The Cisco 360 Partner Program recognizes partner expertise and rewards value creation across the customer lifecycle—empowering partners to provide secure, agile solutions that support customers through transformation.
Now live, the Cisco Partner Incentive (CPI) streamlines past program elements; offers partners clearer, more predictable earnings across Cisco’s portfolio; and helps partners plan for growth by aligning sales focus and go-to-market strategies with Cisco’s roadmap.

Cisco’s new partner designations help customers easily identify partners with the right capabilities. All participants are recognized as registered Cisco Partners. Cisco Portfolio Partners demonstrate proven sales and technical expertise, practice maturity, and a strong commitment to customer engagement.
New data centres in Dubai and Abu Dhabi to host 100+ cloud-based solutions of both Zoho and ManageEngine, the company's two key brands

Zoho Corporation, a global technology company, announced the launch of its data centres in UAE in Dubai and Abu Dhabi. The data centres form a part of AED 100 million investment in the UAE that the company had announced in 2023. The data centre will host solutions from Zoho Corporation's two key brands: ManageEngine (enterprise IT management) and Zoho (cloud business solutions).
"The opening of our data centres is part of our ongoing investment in the UAE, which remains one of the largest markets in the region for both ManageEngine and Zoho brands," said Shailesh Davey, Co-founder and CEO, Zoho Corporation.
"With this move, Zoho Corporation will be enabling businesses store their data locally, strengthening data sovereignty, and supporting National Cybersecurity Agenda. Furthermore, 100+ solutions across Zoho and ManageEngine, will enable businesses of all sizes, and government and semi-government organisations adopt cloud technology for digital transformation in nearly every area of operation, and help Dubai
become a digital economy line with Dubai Vision 2030.”
The data centres have also received certification from CSP Security Standard Certificate by DESC (Dubai Electronic Security Center). This qualifies Zoho Corporation to serve government and semi-government entities in addition to local businesses. As part of this, the data centres are also compliant with: ISO 27001, ISO 22301, ISO 27017 and CSA STAR Level 2 Certificate for data centres. In addition, the company's Dubai office has received ISO 27001 certificate.
Zoho has grown by 38.7% in 2025 in UAE, and expanded its partner network by 29% in the same period. It has further increased its employee count by 35% last year to serve the needs of its increasing customer base, and expanded into a larger office. ManageEngine has grown by 20% in 2025 in the UAE, led by its continued focus on the enterprise sector. The brand has strengthened its local presence, including through the partner network, to support the increasing adoption of its solutions across both private and government organizations.
92% of the UAE enterprises now view AI/GenAI as a key part of their organization’s business strategy.

New research from Dell Technologies shows that businesses in the United Arab Emirates (UAE) are increasingly viewing artificial intelligence (AI) as a strategic priority. The global State of Innovation and AI Survey study – which surveyed 2,850 business and IT decision-makers, of which 50 were from the UAE – found that 92% of the surveyed UAE companies now view it as a ‘key part’ of their business strategy. Additionally, 88% of UAE organizations report seeing tangible productivity and financial returns from initial AI investments.
1. Data Readiness Leads the Way: 64% of UAE organizations are prioritizing "data readiness for AI" as a top IT initiative, signaling a strong focus on the foundational infrastructure required to support AI’s potential.
2. On-Premises AI is on the Rise: 74% of businesses plan to consume AI through software and run it locally on AI PCs over the next 12 months, reflecting a shift toward localized AI deployment
to address data sovereignty and compliance concerns.
3. AI as the Sustainability Catalyst: 92% of UAE companies recognize AI’s critical role in optimizing resource use and improving sustainability, driving efforts in energy efficiency, smarter data center management, and circular IT practices.
Despite these promising indicators, 78% of businesses in the UAE struggle to fully integrate AI into every aspect of their operations, while 30% are still in the earlyto-mid stages of their AI adoption journey. Challenges such as data security concerns, lack of executive/senior management buyin, and integration with existing systems/ infrastructure continue to hinder largescale implementation.
The research highlights that while UAE companies are making strides toward AI adoption, scaling AI effectively across an enterprise requires a holistic approach. Building infrastructure that supports AI, fostering new skillsets, and prioritizing secure and ethical practices are key.
While the interest in AI continues to grow, progress is hindered by three persistent challenges:
1. Skills Gap: Alarmingly, all respondents (100%) of the UAE companies surveyed believe their teams lack the necessary skills to fully leverage AI. This marks a sharp increase in concerns compared to previous years, especially surrounding the safe implementation of GenAI, an area where 66% of the UAE organizations report insufficient knowledge.
2. Security Concerns: The pressure to innovate is often tempered by increasing worries about security risks. 72% of the UAE companies expressed fears about exposing sensitive corporate data and intellectual property to third-party AI tools, a significant rise from 64% last year. Additionally, 80% of the organizations highlight that it is a challenge to
find a balance between innovation and mitigating cybersecurity risks.
3. Infrastructure Readiness: Many companies find their current IT environments inadequate for the demands of AI workloads. Challenges include increasing processing power (e.g. CPUs/GPUs), implementing AI-optimized hardware, and enhancing data security. Without addressing these issues, AI integration efforts will continue to face delays.
An encouraging trend revealed in the report is the increasing link between AI and sustainability goals. Businesses are exploring AI’s potential to optimize energy efficiency – such as smarter data center management, minimizing idle workloads and moving inferencing tasks to edge computing environments. Organizations in the UAE are increasingly leaning on third-party collaborations to integrate sustainable practices, with 82% highlighting the importance of external collaboration in achieving circular IT solutions. This trend signals an emerging ecosystem of shared expertise to tackle the complexities of implementing both AI and sustainability strategies. From advanced cooling solutions to energy-aware AI architectures, Dell Technologies is helping organizations reduce their environmental impact with AI infrastructure that balances performance with energy efficiency.
Walid Yehia, Managing Director, South Gulf at Dell Technologies said, “The State of Innovation and AI Survey reflects a powerful shift in how UAE businesses are embracing AI to drive innovation and growth. Organizations are moving beyond piloting AI and are instead beginning to embed it at the core of their strategies. This transition underscores the transformative potential of AI, but achieving its full impact will require addressing critical challenges, from workforce upskilling to secure and collaborative implementation.”
Recommendations help organizations safeguard sensitive data while maintaining effective security operations
Genetec Inc. , a global leader in enterprise physical security software, shared best practices to help organizations protect sensitive physical security data while maintaining effective security operations.
Physical security systems generate large volumes of information from video footage, access control records, and license plate information. As this data plays a growing role in daily operations and investigations, organizations are under increasing pressure to manage it responsibly amid evolving privacy regulations, rising cyber threats, and heightened expectations around transparency.
“Physical security data can be highly sensitive, and protecting it requires more than basic safeguards or vague assurances,” said Mathieu Chevalier, Principal Security Architect at Genetec Inc. “Some approaches in the market treat data as an asset to be exploited or shared beyond its original purpose. That creates real privacy risks. Organizations should expect clear limits on how their data is used, strong controls throughout its lifecycle, and technology that is designed to respect privacy by default, not as an afterthought.”
Observed annually on January 28, International Data Protection Day serves as a reminder that protecting personal data is a shared and ongoing responsibility. For physical security teams, adopting clear strategies, resilient technologies, and trusted partnerships can help ensure privacy and security objectives remain aligned as risks and regulations continue to change. Genetec recommends the following best practices to help organizations strengthen data protection across physical security systems:
Start with a clear data protection strategy
Organizations should regularly assess what data they collect, for which purpose they collect it, where it is stored, how long it is retained, and who has access to it. Documenting these practices helps reduce unnecessary data exposure, identify policy gaps, and support ongoing compliance as regulations continue to evolve. Transparency around data handling practices also plays an important role in building trust with employees, customers, and the public.
Privacy-by-design means limiting privacy risk not only through security controls, but also through how personal data is collected, used, and governed. Organizations should apply purpose limitation and data minimization principles to ensure only the data required for defined security objectives is collected and retained. Strong security measures, including encrypting data in transit and at rest, enforcing strong authentication, and applying granular access controls, help reduce the risk of unauthorized access.

Privacy-enhancing technologies, such as automated anonymization and masking, further support transparency and help protect individuals’ identities while preserving the operational value of security data.
Maintain strong cyber defenses over time
Data protection is an ongoing process. Regular system hardening, vulnerability management, and timely updates are essential to address new cybersecurity risks as they emerge. Treating privacy and cybersecurity as continuous operational responsibilities helps organizations maintain a stronger overall security posture.
cloud services to support resilience and compliance
Cloud-managed and software-as-a-service deployments can help organizations stay current with security patches, privacy controls, and compliance features, while reducing the operational burden on internal teams. Many organizations are adopting flexible deployment approaches that allow them to balance scalability, control, and data residency requirements across on-prem and cloud environments.
Choose partners committed to privacy and transparency
Working with trusted technology partners is critical. Organizations should evaluate vendors based on how they govern personal data, define clear limits on data use, and communicate transparently about their privacy practices. Independent security standards and attestations, such as ISO/IEC 27001, ISO/IEC 27017, and SOC 2 Type II reports, provide important assurance around how systems and data are protected and managed, and help reduce privacy risks associated with unauthorized access or misuse. Organizations should also assess vendors’ vulnerability disclosure processes, data governance practices, and approach to developing and deploying artificial intelligence, including whether they prioritize transparency, safety, and human-led decision-making when personal data is involved.
Over the three decades of its existence, Zoho Corp. has followed a trajectory shaped by core beliefs around building engineering depth, serving customer needs, earning their trust, and maintaining a long-term view. This philosophy has guided the company’s evolution, including that of its enterprise IT division, ManageEngine, anchoring growth in intent and longer-term objectives.
On the sidelines of its recent annual partner meet at its headquarters in Estancia IT Park, Guduvanchery near Chennai, ManageEngine’s leadership held focused interactions with visiting media representatives. These conversations offered rich insight into how the company has carved out a distinctive position in the technology industry, pivoted around enduring commitments to customers and society, long-term thinking, and a strong engineering focus.
In their respective keynote sessions and subsequent discussions, Shailesh Kumar Davey, Co-Founder & CEO of Zoho Corp., and Rajesh Ganesan, CEO of ManageEngine, outlined how Zoho Corp and its enterprise IT division, ManageEngine, are evolving, while holding firm to the principles that have guided the company for three decades.
The dot-com crash of 2001 was an unexpected inflection point for Zoho that led to the eventual emergence of ManageEngine. As the telecom sector collapsed, many of the enterprise customers the company then served had disappeared almost overnight. It was a phase of disruption in the technology industry. In response, the company leadership adopted a more ambitious outlook and identified a more sustainable opportunity to provide reliable, accessible software for managing IT infrastructure in small and mid-sized businesses.
Rajesh says “Even in the early 2000s, the answer was clear that there were tens of thousands of businesses with this need. That was the birth moment for ManageEngine. We decided to pivot and diversify risk. This was our opportunity to go truly global.” OpManager’s launch in 2003 as a downloadable product from the web, designed to solve a simple but mission-critical problem of keeping business systems running, was a watershed moment in the company’s journey.
“In 2003, we launched OpManager, a product you could download directly from the web and install on Windows NT, Windows 95, or Windows 2000 systems. It addressed a very simple but critical problem for businesses at the time, making sure their networks and systems were always running, so their operations would not go down. That was the founding idea behind ManageEngine.”

Shailesh Kumar Davey Co-founder and CEO, Zoho Corporation
In the next few years, as cloud and mobile technologies transformed how enterprises operated their Businesses, ManageEngine scaled up its product offerings, helping businesses adopt technology with confidence to run their operations. Since then, the company has steadily built its growth story, evolving and reinventing through the years, and at all times guided by customer needs and a long-term view of the business.
As Shailesh remarks, “This is not something that happened overnight. It’s a 30-year journey”.
Evidence of Zoho’s visionary outlook lies in the fact that the company began offering cloud services as early as 2006, ahead of the rise of hyperscalers. This was a deliberate decision to run its own data centers and build in-house infrastructure expertise, and as Shailesh explains, this choice was never about owning infrastructure for its own sake, but about developing deep operational knowledge that could ultimately benefit customers.
“At the end of the day, what we sell is our skills and our knowledge,” he notes. “If we build expertise, whether in software or running data centers, we can pass that value on to the customer.” Today, Zoho operates its own data centers and a global footprint of around 30 offices across regions including Egypt, Saudi Arabia, Thailand, and Singapore, among others.

Rajesh Ganesan CEO, ManageEngine
“When we open offices, we hire locally, train locally, and build teams that do the last mile for customers. Growth is important, but contributing back to the local economy is just as important,” adds Shailesh.
Zoho follows a distinctive expansion model, which the company refers to as transnational localism, by building offices in second and third-tier cities. The company stays invested in local economies by hiring locally and training them.
Shailesh explains, “Instead of forcing talent to move to cities, we go where the talent is, to the tier-three and tier-four towns and build offices there.”
The same approach extends globally, whether in the US, Mexico, or elsewhere.
Since its beginning, ManageEngine adopted a defining strategy of building many focused products rather than a single monolithic solution. This philosophy is based on building “a thousand speedboats” rather than one large ship, which resulted in the multitude of enterprise-focused products that it has built.
Whether serving IT departments through ManageEngine or supporting broader business functions, the company focused on solving the full spectrum of operational challenges, from network connectivity and cloud services to vendor coordination and supply chain workflows.
Shailesh explains that Zoho’s core strength lies not in the type of problem solved, but in using technology effectively to solve problems, regardless of domain.
As of today, ManageEngine offers close to 60 products supporting multiple IT functions, and as a privately held company, the group has fuelled its growth through its own earnings. Staying away from external funding has allowed the company the freedom to maintain control, stability, and a long-term focus.
ManageEngine’s trajectory mirrors Zoho Corp’s broader philosophy, which is about building depth before breadth and pursuing evolution without abandonment.
“Over the last 20 years, we’ve been known for different things at different times. We started as a monitoring company, then we were known as a service management company, later for Active Directory services, and now endpoint management,” says Shailesh.
Security has become a core focus for the company in recent years. “There is a strong focus on security today, and there will be many announcements around new security products. I believe by 2027, people will look at ManageEngine as a security company as well,” he adds.
This shift does not come at the cost of other areas of strength for the company where it equally continues to invest and strengthen its solutions offerings.
“We believe in being a one-stop shop for IT monitoring, service management, endpoint, server management, analytics, and security. All of these have been built over two decades and continue to evolve,” he adds.
While ManageEngine follows a unified platform approach across
its product portfolio, it ensures that each product continues to evolve with feature enhancements at every refresh, whether it is ServiceDesk Plus (ITSM), Endpoint Central (unified endpoint management), OpManager (network monitoring), and AD360 (Active Directory management), or any other solution in its lineup. A key advantage underpinning this approach is the fact that, unlike in many organizations where expertise is lost as employees move on, ManageEngine has teams that have been working on the same problem spaces for a decade or more. This continuity, Shailesh explains, enables ManageEngine product teams to build deep, sustained expertise within specific product domains and to retain critical domain knowledge over long periods.
“All our products are built on the same platform, which makes integration easy. But equally important is our people. We have one of the lowest attrition rates in the industry. Many have been working on the same domain for 10 years, 15 years, and so they are able to react faster to the needs of the customer and market shifts,” adds Shailesh.
One of the strongest and most consistent messages across sessions was Zoho’s emphasis on data privacy, particularly as AI adoption accelerates.
“Whether it’s Zoho or ManageEngine, our approach to data privacy is the same. Customer data belongs 100 percent to the customer. We do not use it, we do not show ads, and we enforce this through processes,” says Shailesh.
In the context of AI, this principle becomes even more critical. “We do not use customer data to train our AI models. We use synthetic data or data procured externally. If customers want to train models using their own data, they can do so and deploy those models themselves,” he adds.
Zoho has built this capability directly into its products.
“We’ve made it easy through the user interface. Customers can train, deploy, and manage their own models. We monitor model drift and retrain automatically, but all of this happens without us looking at customer data,” adds Shailesh.
Rajesh reinforces this point.
“We do analyze anonymized usage patterns to improve products,” he adds. “But we never access identifiable customer data, and it never goes to third parties. Monetizing data is simply not our business model.”
Zoho Corp takes a more pragmatic view of Generative AI. For over a decade and more, the company has focused on using and leveraging AI where it looks relevant and not just for the sake of it.
“AI didn’t start with generative AI. We’ve been using classical machine learning and deep learning for over a decade where it
made sense,” says Shailesh. Today, Zoho uses both traditional AI techniques and generative models selectively.
“Not every problem needs generative AI,” he said. “Based on the problem, we use the right technique without falling for the hype.” On the LLM front, the company follows a pragmatic approach to offer solutions that address productivity objectives for Businesses.
Raesh says, “We focus on right-sized models that work with three to seven billion parameters today, and work is underway on larger models like 32 billion. These are tuned for specific business contexts and low-risk decisions. We’re not trying to compete with foundational models. We fine-tune models to deliver meaningful productivity gains, between 40 to 60 percent, which is still significant.”
Looking ahead, agentic AI is a key area of focus for the company. “Agents can reason, use tools, and self-help,” says Shailesh. “We believe technology is at an inflection point, and we are investing heavily to understand where AI genuinely solves customer problems.”
Regarding sovereign cloud and private deployments, the company draws a clear distinction between infrastructure providers and application platforms.
“We are not competing with hyperscalers in IaaS or PaaS. But we fully comply with data residency laws wherever we operate,” says Shailesh.
For governments and regulated enterprises, Zoho supports private and air-gapped deployments.
“Most of our applications are cloud-native, but we do deploy them in private cloud environments when required,” he adds.
As Zoho and ManageEngine look forward, the priorities are clear. “Our roadmap has three focus areas: AI, security, and platform,” Shailesh said. “Enterprises don’t want isolated point products. They want customization, process modelling, and deep integration.”
Sharing his perspective, Rajesh says, “We’re not here to chase trends. We’re here to build resilient systems, protect trust, and stay relevant for decades, not quarters.”
Zoho’s leadership is driven by the belief that trust, patience, and engineering depth are not constraints but advantages.
As Shailesh reiterates, “We are in this for the long term. That freedom allows us to do what is right for the customer, and that has made all the difference.”
Nirmal Kumar Manoharan, VP – Sales at ManageEngine and Sujoy Banerjee, Regional Business Director, UAE, ManageEngine highlight how ecosystem collaboration, architectural guidance, and local presence enable the company to scale across the region’s enterprise customer environments.
How successful has ManageEngine been with the enterprise market and customers in the region?
Nirmal: We are very much a large enterprise player in the Middle East, and the perception is clear that we can serve large enterprises. Some of our large customers include entities managing transportation infrastructure in Dubai, Airports, airlines, and all major banks. The MENA region today contributes roughly 8% of our global revenue and is growing at an impressive rate. Saudi Arabia is our largest market in the region, followed by the UAE, and then Egypt. Despite economic downturns in 2022, revenue has grown 5–6x since then because we stayed engaged with customers and partners.
Sujoy: We can’t name specific entities, but one of ManageEngine’s biggest strengths is that we’re not restricted to a single vertical. We work across healthcare, education, oil and gas, and multiple government departments. In the UAE, government and public sector contribute a significant share, especially from Abu Dhabi, but we’re not limited to Abu Dhabi. We’re active across the Northern Emirates as well. For example, we hosted an invite-only IT seminar in Ajman with over 50 government customers. Government offices there are clustered closely, which helped penetration. That said, nothing is permanent. We usually start with a large organization and then expand into multiple entities under that umbrella. We’ve seen this approach work in Abu Dhabi and now in Ras Al Khaimah as well. The government sector remains a large chunk of our UAE growth.
Beyond the UAE and Saudi Arabia, how are other GCC markets performing from your perspective?
Nirmal: IT maturity is not uniform across the GCC, but that’s actually an opportunity. Even organizations with lower maturity can use our products. We’ve had engagement with GCC, North Africa, and African markets for over 10–15 years. In the last five years especially, these markets have grown significantly.
We always take a long-term approach. Benefits may not come immediately, but they come over time. In terms of revenue today, roughly, Saudi Arabia and the UAE make up one-third each of our regional revenue. The rest of the GCC and North Africa make up the balance one-third. It’s a very significant market for us.
Qatar is largely government-driven. We work through partners like QCS and Burhan Tech, which has strong traction in the region.

Nirmal Kumar Manoharan VP – Sales, ManageEngine
Oman has been with us for nearly 20 years; one of our earliest large customers came from there. We work with partners like Khimji Ramdas and others. While government dominates, private sector presence is also strong there. Bahrain is a mix of government and private.
Kuwait has strong private sector business, and government engagement is now picking up. Even Egypt contributes evenly. While markets may appear unequal, all have built strong IT capabilities over the last decade, and we are deeply engaged across them.
Part of this success comes from proximity to our India office. We’re only three to four hours away, compared to vendors traveling 15 hours from the US. We’ve had our experts traveling continuously for the last 20 years from India. Even today, we have people on the ground, and domain experts traveling from India where the technology know-how resides.
Sujoy: At any given point in a month, there are around 10 to 12 members in the UAE onlyfrom our product teams on the ground, engaging directly with customers. While partners play a critical

Sujoy Banerjee Regional Business Director, UAE, ManageEngine
role, this is not a model where we simply rely on partners and step back. We work in parallel with them. Our partners act as an extended arm of ManageEngine, handling on-ground engagement, opening doors, and driving conversations. At the same time, we provide active support across both pre-sales and post-sales engagements. Whether in the UAE or Saudi Arabia, where the geography itself is significant, there are always teams from our side travelling, meeting customers, and supporting large deployments. Partners are an extension of our team, not just resellers.
What led to the decision to own your own data centers in the region?
Nirmal: We wanted tighter control, over bandwidth, infrastructure, and long-term operating costs. Once organizations are locked into third-party environments, costs tend to escalate over time, and eventually those increases are passed on to customers. No company can absorb those costs indefinitely.
That’s why, even 15 years ago, we made a conscious decision to invest in our own data centers with a long-term perspective. Owning our infrastructure gives us greater flexibility, cost predictability, and operational control, benefits that ultimately translate into better outcomes for our enterprise customers.
When it comes to partner engagement, how does the model work if a local partner has the customer relationship but lacks the required technical depth? How do you handle support and commercial responsibilities in such cases?
Nirmal: One important aspect of how we operate is that we actively facilitate collaboration within our partner ecosystem. As a vendor, we do have certain limitations in terms of how much time we can spend delivering services for a single customer. Our core focus remains on the product, and we typically avoid directly taking on services engagements.
At the same time, customers often want services and are willing to pay for them in order to extract full value from the product. That is where our partners play a critical role. While one partner
may not have all the required capabilities, the partner community collectively has immense expertise. Partners frequently work together, bringing in specialists from other partners when needed. This kind of collaboration happens very often. In most cases, partners work these arrangements out among themselves as part of the ecosystem we have built. We don’t need to get involved in every instance, although we are always available when required.
For large deployments, especially at the architectural level, we do step in more actively. We have better visibility into long-term product direction and help customers design deployments that are scalable and future-ready, rather than taking a short-term approach. We may suggest alternative deployment architectures that can scale better over time.
When it comes to integrations or niche expertise that a partner may lack, they typically engage another partner or specialist developer. Over time, a strong ecosystem has developed where these collaborations are well understood and executed. Commercially, these collaborations are handled by the primary partner engaging the customer. They manage the commercial arrangements and bring in additional partners or resources as required.
What does your Middle East team look like today, and how are you expanding?
Nirmal: Currently, the Middle East team is about 15 people. Given post-pandemic growth, we plan to at least double the team by the end of this year.
We’re also engaging with universities and leveraging Zoho’s broader ecosystem to build our own talent pipeline, because partners don’t typically hire fresh graduates, and that’s important for long-term growth. This expansion is fully aligned with our overall vision.
"While one partner may not have all the required capabilities, the partner community collectively has immense expertise. Partners frequently work together, bringing in specialists from other partners when needed. This kind of collaboration happens very often."



From shadow AI risks to autonomous decision-making, governance and accountability are redefining what AI maturity truly means.
AI is no longer an experimental tool operating at the margins of enterprise IT. It is increasingly embedded in workflows, customer interactions, operational processes, financial controls, and decision-making frameworks. Yet as AI systems become more autonomous and deeply integrated into organizational fabric, the question is more about how it should be governed.
Across industries as diverse as diversified conglomerates, insurance and financial services, higher education, and manufacturing, technology leaders are converging on a shared realization that AI maturity is not measured solely by model performance. It is defined by governance, traceability, culture, and human accountability.
In conversations with some industry experts, a common theme emerges that while AI must accelerate enterprise capability, it must remain fundamentally human-centric. The intent must be to augment human capabilities.
AI:
The rise of generative AI tools has made artificial intelligence accessible to employees across departments. But with accessibility comes risk. The unsanctioned use of AI platforms, often referred to as “shadow AI”, has rapidly emerged as a governance challenge.
Dr. Walid Salem, IT Lead at Zurich International Life Limited describes the issue as urgent and escalating. “Shadow AI represents a significant and escalating danger,” he says. “While often driven by well-meaning employees pursuing efficiency, the unsanctioned use of some public AI tools introduces critical vulnerabilities. The primary risks include data leakage, IP infringement, and compliance violations.”
He highlights a key concern: employees uploading proprietary code or sensitive customer data into public AI platforms whose
data handling policies may not align with regulatory frameworks such as GDPR or HIPAA. Beyond data exposure, he warns of another subtle risk including unverified outputs. “Output from these tools is often unverified, leading to the potential integration of hallucinations or biased data into business procedures.”
Jiju G.S., Head of IT at S.S. Lootah Group echoes this risk, particularly in the context of personal subscriptions. “Most employees use external AI tools with good intentions, to improve productivity or speed, but without realizing the implications of uploading sensitive data into personal or unapproved platforms.” The larger concern, he notes, is persistence. “Company-related prompts and documents may remain stored in private histories even after an employee leaves.”
Yet not all leaders interpret shadow AI as rebellion. Muhammad Affan Habib, Director of Information Technology at Sharjah Maritime Academy frames it differently. “Shadow AI is undeniably increasing. However, I see it less as a rebellion and more as a signal. It indicates that teams are eager to innovate and recognize AI’s potential to improve productivity.”

Dr. Walid Salem
IT Lead, Zurich International Life Limited

For him, the problem is not experimentation itself. “The risk lies in unstructured experimentation without data governance, compliance controls, or oversight.”
The unanimous opinion is that banning AI tools is not the answer. Dr. Walid cautions that “unconditional prohibitions often backfire, driving usage further underground.” Instead, he proposes a “Guardrails, Not Roadblocks” framework, including a tiered access model: enterprise-sanctioned tools, isolated innovation sandboxes, and clearly prohibited applications.
Jiju reinforces this balance. “The solution is not to ban AI. Governance should enable innovation, not suppress it.”
The message is clear that shadow AI is not merely a security issue, it is a governance design challenge. Organizations must create structured pathways for experimentation while maintaining accountability.
As AI systems grow more capable, the line between recommendation and action becomes increasingly blurred. Determining when AI can act independently and when human oversight is required is now a central strategic question.
At Al Jabr Holding Group, Asim Badhuralam, Group CIO applies a structured risk-based model. “AI operates independently for low-risk, reversible tasks like password resets or routine ap-
Jiju G.S. Head of IT, S.S. Lootah Group

provals. But whenever a decision has financial impact, regulatory risk, or could affect customers at scale, we introduce a humanin-the-loop.”
He provides a practical example within IT service management. AI handles password resets and software installations autonomously. However, if a ticket affects production systems or multiple users, it escalates immediately to a human engineer. The AI may provide diagnostics and recommended actions, but the final decision remains human.
Dr. Walid frames autonomy through a Risk-Impact Matrix. “Choosing the degree of autonomy is a risk management choice rather than a technology one.” Low-risk activities such as spam filtering or product suggestions can operate independently with periodic audits. Medium-risk operations may include a “stop button” mechanism for real-time human override. High-risk decisions such as those affecting safety, creditworthiness, employment, or health must retain human judgment.
Jiju adds another dimension by distinguishing between “humanin-the-loop” and “human-on-the-loop.” “Autonomy must always be calibrated to risk. High-risk domains require human-in-theloop design, while operational tasks may allow human-on-theloop supervision.”
Muhammad Affan summarises the philosophy into a single principle, “AI may analyse, recommend, and even execute within boundaries, but accountability cannot be automated.”
Hence, across sectors, autonomy can be seen as a binary question. It is very contextual and based on risk exposure and regulatory obligations.
As AI systems take on more decision-making responsibilities, organizations are confronting a deeper challenge about how to encode institutional wisdom into machine systems.
At Al Jabr Holding Group, Asim provides a detailed example in

Affan Habib Director of IT, Sharjah Maritime Academy
procurement governance. “We embed senior judgment into AI through historical decisions, policy rules, and escalation patterns. Senior teams define the guardrails about what AI can and cannot decide.”
In practice, this involves training AI on historical approval and rejection data, including risk justifications used by CFOs, CIOs, and compliance heads. “For example, the AI is trained on rules such as approving vendors below SAR 100,000 only if they have a minimum two-year relationship, and automatically rejecting firsttime vendors in regulated categories unless board-approved.”
Importantly, the AI initially operates in recommendation mode. “Over time, AI learns from those expert decisions and scales them consistently, but final accountability always stays with leadership.”
Dr. Walid describes a complementary approach of Reinforcement Learning from Human Feedback (RLHF). “Senior professionals should be regarded as educators within the training loop, rather than merely users.” Instead of coding rules, executives grade AI outputs, refining tone and judgment over time. He also references Constitutional AI principles such as embedding explicit business values and ethical guidelines directly into prompts.
Jiju frames the issue as preserving institutional wisdom. “AI should encode institutional wisdom, not bypass it. Senior teams translate real-world judgment into structured frameworks that guide AI outputs.”
Rather than replacing experience, these organizations are scaling it. AI becomes a multiplier of senior judgment experience and actions, not a substitute.
No AI system is infallible. The difference between immature and mature AI deployment lies in how mistakes are handled.
Asim shares a real-world example. An AI-based fraud detection system blocked a legitimate credit-card transaction. Upon review,

the system had flagged the transaction due to sudden location change, unusually high amount, and a confidence score of 92%.
On investigation, it became clear the customer was traveling internationally, and fraud thresholds had been set too aggressively. The organization corrected the model by incorporating travel history as contextual input and adjusting thresholds for trusted customers. “As a result, the issue did not recur and overall fraud detection accuracy improved.”
The key enabler? Logging. “We ensure every AI decision is fully traceable by logging input data, confidence scores, and the rules that influenced the outcome. Traceability turns AI mistakes into measurable improvements.”
Dr. Walid similarly emphasizes traceability, categorizing errors by root cause, whether flawed data, incorrect thresholds, or missing context.
Muhammad Affan reinforces this with a governance lens. “Every AI-driven output must be auditable. If a system cannot justify its decision pathway, it cannot be trusted at scale.”
Jiju describes structured review mechanisms, monitoring dashboards, and layered validation processes. “The goal is not only to correct mistakes but to refine governance models.”
Transparency is not merely technical; it is cultural capital. It builds institutional confidence.
As AI systems grow more accurate, the risk shifts from technical error to human complacency.
Muhammad Affan identifies automation bias as “one of the most underestimated risks of AI maturity.” He advocates system designs that display confidence levels and highlight uncertainty, ensuring employees understand that AI outputs are probabilistic, not absolute.
In HR scenarios, for example, AI may shortlist candidates, but recruiters must justify their final selections, hether aligning with or diverging from the AI’s recommendation.
Jiju emphasizes cultural reinforcement. “We promote a culture where AI recommendations are treated as advisory rather than authoritative.”
Dr. Walid similarly stresses that high-risk decisions must retain human final judgment, regardless of AI accuracy.
Across organizations, the guiding philosophy is consistent that AI should support thinking, not replace it.
If AI handles routine tasks, will the next generation lose foundational expertise?
Jiju refers to this concern as the “Skill Gap Trap.” “Young professionals must still understand foundational processes and core business logic. AI should accelerate learning, not eliminate it.”
Asim provides an example from Al Jabr’s soft drinks manufacturing operations. AI monitors machine parameters and schedules preventive maintenance, but engineers review anomalies, investigate deviations, and manage corrective actions. Exposure to exceptions builds practical expertise.
Affan reinforces that AI should elevate humans toward higher-value judgment rather than deskill them. The consensus that surfaces is that AI should become a training accelerator, not a skills eroder.
Does AI success depend more on model accuracy or human trust? Muhammad Affan quips, “Performance enables AI capability, but trust determines its impact.”
Jiju concurs when he adds, “Even the most advanced AI system fails if users do not comprehend its limitations or rationale.”
Dr. Walid frames it as a balance between velocity and cost of error. Asim underscores that final accountability remains human. Across industries, the conclusion is unmistakable that AI adoption is as much a human equation as a technical one.
Perhaps the most concise articulation of the collective philosophy comes from Jiju G.S.: “Organizations must shift from a mindset of ‘maximum automation’ to ‘responsible augmentation.’”
Human-centric AI does not slow innovation. It strengthens it by grounding it in governance, explainability, and accountability.
Across different enterprises and sectors, leaders are demonstrating that scaling AI responsibly is not about surrendering control. It is about designing systems that amplify human judgment and not diminish it.
In the autonomous enterprise, machines may execute. However, accountability remains with humans; a distinction likely to define the next phase of AI adoption and what organizations must not overlook.
As hybrid and multi-cloud AI strategies gain urgency across global enterprises, Cloudera is positioning itself around a “true hybrid” architecture enabled by recent acquisitions and its Anywhere Cloud strategy. Leo Brunnick, Chief Product Officer, Cloudera explains how its roadmap, data fabric capabilities, and containerized deployment model are evolving.
The demand for hybrid and multi-cloud AI deployments is growing. How is that driving your product strategy?
Cloudera has always supported hybrid, but the definition of hybrid has matured over time. Initially, hybrid meant you could run software on prem or in the cloud or even in Amazon’s cloud and Microsoft’s cloud but those deployments didn’t necessarily know about one another.
The next phase was enabling customers to run some workloads on prem and some in the cloud, while managing them through a single pane of glass across a complex data estate.
Now the world has moved further. What we are delivering, currently in tech preview, is the same piece of code, the same container, deployed on prem or across Amazon, Google, Microsoft, or Oracle Cloud. It is literally the same code base. This is enabled through our Anywhere Cloud architecture, made possible by our acquisition of Tykon six months ago.
With Anywhere Cloud, customers can burst workloads. For example, they may use on-prem GPUs because it’s cheaper, and when they need a spike, burst to the cloud. They can even choose spot instances dynamically, whichever provider is cheaper that day. Additionally, because this is a virtualized platform managed through a single console, customers gain multi-cloud failover. If Azure or AWS goes down, it doesn’t have to take your operations down.
This isn’t just about vendor lock-in. It’s about true hybrid for resilience, sovereignty, and cost control.
You’ve made several acquisitions recently, including Verta, Octopai, and Tykon. What do they bring?
Across our customer executive forums globally, we consistently hear similar priorities: AI and agentic AI adoption, understanding complex data estates, and simplifying deployment in hybrid environments. We made three acquisitions in the span of a year.
Verta, in the AI space, accelerated our AI Studios and Agent Studios capabilities. Octopai, a data lineage company, provides more than lineage; it offers comprehensive data visualization and mapping, forming a core part of our data fabric. Tykon enabled us to own our Kubernetes layer. Originally, the goal was to avoid having six different product versions across Kubernetes environ-

ments. What we gained was much more with a powerful orchestration layer that enables rapid deployment from bare metal in minutes.
The combination of these capabilities allows us to deliver AI solutions with full data fabric inside a write-once, deploy-anywhere infrastructure. When we explain this architecture, customers immediately understand the value because no one else combines all these elements in this way.
Post-acquisitions, what synergies have you driven into your product roadmap? Have there been announcements that bring these together?
Yes, we’ve announced several roadmap advancements over the past few months.
First, we announced extended long-term support for our core CDP
(Cloudera Data Platform) platform. Our 7.1.9 and 7.3.2 releases will now have six years of long-term support. That means customers don’t have to constantly migrate. Second, our Data Services platform, version 1.5.5, now enables on-prem capabilities that were not previously available. This includes full AI Workshop, AI Studios, Agent Studios, and on-prem inferencing. It also includes Trino support, REST catalog integration, and Apache Iceberg support, all capabilities customers have been requesting. We’ve also announced long-term support for this platform.
With the acquisition of Tykon, we are launching the Anywhere Cloud strategy, entering tech preview in Q1 2026.
Normally, building a fully virtualized platform would take years. In our case, we can pull all CDP (Cloudera Data Platform) platform elements into a container, including our data fabric and lakehouse architecture. Cloudera AI components are also integrated into that container.
The container itself is best-in-class and was already outperforming others in the market. As a result, in months, we’ve delivered the architecture customers have been waiting for. We believe it is a game changer. No other vendor has all of these components combined.
Customer and analyst feedback over the past 60 days has been very strong. In the recent Forrester Wave for Data Fabric, we were positioned ahead of most major vendors and were the only company to receive a perfect score in both vision and roadmap.
Tell us about your Open Data Lakehouse powered by Apache Iceberg. How does it strengthen flexibility?
At its core, Cloudera is a lakehouse company. Streaming, ingestion, analytics engines, object storage, and file storage all revolve around the lakehouse architecture.
This is crucial as every enterprise has a complex data estate. Even organizations that think they are “pure cloud” are one acquisition away from complexity again. We focus on interoperability. By supporting open formats like Apache Iceberg and enabling REST APIs, customers can query across environments, data sitting in Snowflake, Cloudera, Oracle, or elsewhere, within a single complex query. That reflects how real-world enterprise data environments actually look.
How are you simplifying GenAI, RAG, and agent-based deployments?
With the Verta acquisition, we accelerated development of our AI Studios, including RAG Studio, Agent Studio, and Fine-Tuning Studio. These environments are low-code or no-code. We’ve conducted hackathons with large enterprise customers where business users build deployable agents within a single day. That has been eye-opening for many organizations.
What type of customers benefit most from Cloudera?
Cloudera is particularly suited for large enterprises with complex estates. Banks, governments, and industrial companies typically operate mainframes, decades-old databases, client-server sys-
tems, SaaS applications, and multiple cloud platforms simultaneously. That is where we add value.
That complexity becomes critical when organizations begin deploying AI. AI is only as good as the underlying data, and most organizations do not yet have fully governed, lineage-tracked, and secured data environments. This becomes even more important with AI agents. Agents should operate like employees, just as an employee badge provides limited access, AI agents require fine-grained and role-based access control. Without proper governance, agents can act beyond their intended permissions.
How do you see the future balance between cloud and onprem infrastructure?
Market perception has shifted significantly. Previously, some analysts predicted only 2–3% of meaningful data would remain on-prem long term. However, more recent projections suggest closer to 40% on-prem and 60% in public cloud. That is because many enterprises are experiencing “cloud hangover”, with escalating and unpredictable cloud spending. Some organizations are spending hundreds of millions annually and struggling to forecast costs. At the same time, data volumes are expanding exponentially, especially with AI and metadata growth. We have customers managing petabytes, even exabytes, of data.
Looking ahead, the future is not cloud-only when it comes to data storage. It is definitively hybrid.
"Normally, building a fully virtualized platform would take years. In our case, we can pull all CDP (Cloudera Data Platform) platform elements into a container, including our data fabric and lakehouse architecture. Cloudera AI components are also integrated into that container."
Santosh Varghese, Managing Director of Tosh Nxt Tech Ventures, Commercial Partner in MEA for Toshiba Storage, explains how Toshiba’s roadmap, BYODC initiative, and localized market strategy are positioning the company for sustained growth.
You mentioned highlights around demand and sales growth. How would you describe 2025 so far for Toshiba in the region?
2025 has been a phenomenal year for us. Demand for storage has increased exponentially, largely driven by the AI revolution. Every business today wants to be part of the AI journey, and AI servers require massive storage capacity to train and run models effectively. This has significantly fuelled demand for enterprise storage. From a technology standpoint, Toshiba is well positioned. We already offer capacities up to 30TB, and our roadmap extends beyond 40TB in the coming quarters. Research indicates that nearly 166 zettabytes of data will be generated by 2027, and all of that data needs to be stored somewhere.
Our strength lies in offering a complete 360-degree storage portfolio. This includes external hard drives for personal computing, the S300 series for surveillance, and in 2025 we introduced the S300 AI, specifically designed for AI-driven surveillance workloads. On the enterprise side, our MG series continues to scale in both capacity and efficiency, delivering higher density with lower power consumption. As a result, we are seeing year-onyear growth of over 50–60 percent. At the same time, demand has become so strong that availability is emerging as a bottleneck, as businesses race to prepare their data centers for AI workloads. This is where our regional strategies are focused, aligning supply, technology, and partnerships to meet customer needs.
How successful has the BYODC (Build Your Own Data Center) initiative been?
BYODC is a unique and strategic initiative from Toshiba. We recognized early on that many organizations want to build their own on-premises or private cloud data centers, especially as AI adoption accelerates. Businesses that fail to adopt AI risk falling behind very quickly.
Through BYODC, we create an ecosystem that brings together Toshiba, system integrators, and customers. We help businesses design and build data centers using best-of-breed components,

resulting in total cost of ownership that is nearly 30 percent lower, while remaining highly scalable. Organizations can start small, at terabyte scale, and grow to petabytes or even exabytes as their needs expand. The initiative has seen strong success in the UAE and Saudi Arabia, was rolled out in South Africa last year, and is now expanding across other GCC markets and Africa. This was a vision we laid out back in 2022, and we are proud of how it is gaining traction today.
Can you outline the product roadmap ahead?
Our flagship enterprise offering remains the MG series. These drives are helium-filled, designed for lower power consumption, and currently offer up to 30TB capacity, with the roadmap extending beyond 40TB in the near future.
On the surveillance side, the S300 series continues to scale in capacity, now enhanced with AI capabilities. We also serve small and mid-sized businesses through our NAS drives, while our consumer portfolio, including external Canvio drives, addresses the need for reliable personal data backup. Many users still prefer not to store critical data entirely on the cloud, making local backup increasingly important.
Together, this forms a comprehensive storage portfolio that addresses enterprise, SMB, surveillance, and consumer needs.
What is the focus when it comes to market and channel expansion?
ToshNxt manages nearly 40 countries across the Middle East and Africa from Dubai, but our strategy is not centralized. Localization is key. Each market has its own dynamics, and our success comes from tailoring strategies accordingly. This approach explains our strong market share; around 30 percent in several countries, nearly 50 percent in Saudi Arabia, and about 35 percent in South Africa. We work closely with channel partners through initiatives like our Channel Connect program, which focuses on training, feedback, and continuous improvement.
In Africa, our Go Africa strategy ensures that we bring the latest technologies to the market, not obsolete products. These regions have immense potential, and our localized, partner-led approach is critical to unlocking that growth.
What is driving storage demand from businesses today?
Historically, businesses stored data mainly for compliance and legal retention. Today, the role of data has changed completely. Organizations are actively using analytics to extract insights and drive better business decisions. Data has effectively become a currency. Storage is no longer passive; it is central to analytics, AI, and strategic decision-making. This shift is one of the
strongest drivers behind the sustained growth we are seeing in the storage industry.
While GPUs and high-performance compute often dominate AI conversations, the foundation of AI success lies in where and how data is stored. This is where HDDs continue to play a critical role in both on-premises and cloud data center environments. In on-premises and private cloud data centers, HDD-based storage allows organizations to retain full control over their data while building AI-ready infrastructures. This aligns strongly with initiatives like BYODC, where enterprises want predictable costs, data sovereignty, and the flexibility to scale storage capacity as AI adoption grows. HDDs form the backbone of these architectures, supporting large data pools that can be accessed by AI compute layers when needed.
"Through BYODC, we create an ecosystem that brings together Toshiba, system integrators, and customers. We help businesses design and build data centers using best-of-breed components, resulting in total cost of ownership that is nearly 30 percent lower, while remaining highly scalable."
As AI, cloud, and data convergence reshape the physical security industry, the Middle East has emerged as a major growth engine. Andy Mackay, Head of EMEA Marketing at Genetec discusses regional momentum, AI’s impact on physical security, open-platform strategy, vertical focus, cybersecurity, and the growing role of sovereignty in Saudi Arabia.

Andy Mackay Head of EMEA Marketing, Genetec
How important is the Middle East market for you in terms of size and opportunity?
The Middle East market is an area we’ve really focused on over the last five years in particular. It’s become a very big and important market for us, not just within EMEA, but globally.
Specifically, Saudi Arabia and the UAE, with some contribution from Qatar, are strong growth engines. We’ve seen double-digit growth over multiple years in these markets, so they are very important for us.
Physical security has been redefined by AI across many domains. From an industry perspective, and from your company’s perspective, how is this evolving?
The introduction of AI functionality through cloud services is fundamentally changing the physical security market.
Ironically, the industry is moving away from focusing purely on protecting people and assets, and toward delivering business value. The analytics capabilities and the compute power of the cloud allow organizations to extract operational and revenue insights.
For example, in retail, you can analyze how people move through a store. In airports, you can monitor passenger flow. Physical security is evolving into something that contributes to resilience, operational efficiency, and even revenue.
From our perspective, we continue to maintain our unified approach, combining video management, access control, and license plate recognition through an open platform that integrates with multiple hardware vendors.
To capitalize on AI growth, we’ve developed our Security Center platform to operate fully in the cloud, in SaaS, on-premises, or in any hybrid combination.
At Intersec Dubai, we have focused on our Cloudlink 210 device. It delivers cloud-like compute power and functionality but can be hosted on-premises. For organizations that want AI value without direct cloud connectivity, this provides that flexibility.
Regarding facial recognition, is there a limit to how many individuals can be captured in crowd management scenarios?
That gets into more detailed technical specifications, but philosophically, we design our solutions to be scalable rather than limited.
If one environment reaches capacity, we can scale horizontally by adding additional components that integrate seamlessly in a federated architecture. Whether it’s facial recognition or license plate recognition, our portfolio is designed to scale as required.
What is your go-to-market model in this region?
We are a software vendor, and our route to market is through system integrators.
We work closely with local system integrators and, where necessary, distributors. They implement and integrate our software into end-user environments and help customers extract the full value from our portfolio.
Which verticals are strongest for you in the region?
Smart cities are a major focus. Oil and gas is significant. Mining is important, particularly in parts of Africa. Banking infrastructure is also a strong vertical. We’re also seeing growth in retail, especially where AI-driven business insights add value beyond security.
In Banking, it has been primarily been the physical security at major sites. However, we’re seeing growth in smaller distributed sites, branches in smaller cities and towns. Customers want unified physical security across all locations, integrating access control and video into a single platform.
How does your open platform approach work with hardware vendors?
We operate an open and unified platform. We integrate with ONVIF-compliant camera providers across the market.
We maintain a partner program and work particularly closely with certain vendors where we share API integrations for deeper interoperability. For example, Axis cameras, i-Pro cameras (formerly Bosch), and HID for access control are key partners.
This allows us to deliver an open solution while integrating tightly with leading hardware providers.
If cameras lack adequate cybersecurity configurations, how do you address that?
Cybersecurity is a core priority for us.
If cameras do not meet required standards, for example, in legacy deployments, we offer solutions within our Cloudlink range that isolate those cameras on their own network.
For example, the Cloudlink 210 device includes dual network cards. It can operate like a sandbox. Cameras can be isolated from the internet and other networks while still transmitting video feeds securely.
From the end user’s perspective, there’s no difference in functionality. However, if a malicious actor attempts to access the system through those cameras, they cannot breach the broader network because it is isolated.
How is IT/OT convergence impacting physical security?
As physical security evolves beyond protecting assets, the integration of IT sensors and building systems adds significant value. For example, we have a deployment at 22 Bishopsgate in London where our system integrates with lighting sensors, heat sensors, and occupancy detection systems.
This enables not just security monitoring but building efficiency
optimization, understanding where people are located and managing energy consumption more effectively.
That’s where IT and OT convergence creates measurable operational value.
Looking ahead, what growth potential do you see in Saudi Arabia and the wider region?
Saudi Arabia is a fascinating environment. There are similarities with the UAE, but also differences.
The scale of projects is enormous, new cities, new airports, and major infrastructure developments. They are moving rapidly to position themselves at the forefront globally.
There is strong demand for the latest technology, including AI capabilities and cloud-based compute power. However, there are also significant concerns around data sovereignty.
I believe over the next year or two, we will see significant investment in sovereign data centers within Saudi Arabia. That will enable continued adoption of AI-powered physical security solutions while meeting regulatory and sovereignty requirements.
"Ironically, the industry is moving away from focusing purely on protecting people and assets, and toward delivering business value. The analytics capabilities and the compute power of the cloud allow organizations to extract operational and revenue insights."
Inference is infrastructure, whether you planned for it or not writes Lori MacVittie, Distinguished Engineer &
Chief Evangelist at F5
Most enterprises still talk about AI as “innovation.” Something experimental. Something that belongs to a lab. Something you demo in a slide deck with a hopeful tone and a suspiciously clean architecture diagram.
But in practice, we’re already operating AI as infrastructure. We just haven’t admitted it yet.
And that mismatch is where the pain starts, because infrastructure has rules. Infrastructure needs uptime. Infrastructure needs budgets that don’t jump 30% because a product team discovered “agentic workflows” and decided to turn every internal process into a chat-based microservice.
Innovation can be a little chaotic. Infrastructure cannot.
Inference crossed the infrastructure threshold quietly, mostly because nobody woke up one morning and declared, “We are now running inference at scale.” It happened the way pilots and tools always become infrastructure: one team adopted it, then another, then suddenly it was “critical,” and now everyone is surprised the existing delivery and security model is showing stress fractures.
The fastest way to tell inference has become infrastructure is to look at how many ways enterprises are deploying it.
In our latest research for the 2026 State of Application Strategy, organizations aren’t choosing one inferencing approach. The data shows respondents are using an average of 2 distinct inferencing services (such as public AI like OpenAI, hyperscaler offerings, and open-source models like vLLM and Ollama) per organization. Of the 78% that are operating their own inference, the average number of different models in use is 7.
That’s not experimentation. That’s model sprawl.
Omar Akar Regional VP, Middle East & Emerging Africa, Pure Storage
Even more telling, only 2.79% say they’re not currently using any inferencing services. And when nearly all organizations are running inference, the question becomes less “what model are you using” and more “who is responsible when it breaks at 2 AM?”
Spoiler: it won’t be the data science team.

Lori MacVittie Distinguished Engineer & Chief Evangelist, F5
There’s a reason old-school infrastructure teams twitch when they hear phrases like “it’s just another endpoint.” Because if you treat inference like an ordinary stateless request pipeline, you will design it like one.
Inference punishes that mistake immediately.
Inference has state. KV-cache locality matters. Context windows change resource shape mid-stream. Concurrency is gated by memory. Latency expectations are brutal, because humans are sitting there waiting for tokens to show up, not reading a “request completed successfully” log line.
This is why the “AI is innovation” framing breaks down. Innovations can tolerate inefficiency for a while. Infrastructure cannot. If inference workloads scale under the wrong assumptions, the penalty comes as cost spikes, unpredictable performance, and a steady drift toward operational fragility.
That fragility is not theoretical anymore. It is already being felt.
Our research makes something else very clear. Organizations are not dabbling with AI in ops. They’re actively using it to automate the machinery of IT. Only 1.73% say they are not using AI for automation. And what’s more interesting is that 66.4% of those who are, use AI to automatically adjust policies and controls.
Yes, they’ve given AI agency in ops. The conversation has changed because when AI starts acting, it stops being “innovation” and starts becoming operational reality. That reality comes with new requirements: governance, explainability, blast-radius control, guardrails, rollback, and the ability to answer a simple question under pressure: “Why did the system do that?”
If you can’t answer that, you don’t have automation. You have chaos with extra steps.
Chaos inevitably leads to failure, and when infrastructure fails, it rarely fails as an explosion. It fails as drift.
It starts with small incidents. Slower responses. Higher cost per transaction. More “it depends” in the answers. A creeping increase in retries, timeouts, fallback behavior, and exceptions that are technically within SLO but operationally corrosive.
That is exactly what inference will do if it is treated like ordinary stateless traffic.
The classic enterprise approach is to address drift with more capacity. More nodes. More GPUs. More spend.
The problem is that inference isn’t bottlenecked by willpower. It’s bottlenecked by architecture. If your routing model ignores state, your cache strategy is accidental, and your governance is retrofitted after the fact, scaling simply magnifies inefficiency.
And the CFO will notice before your dashboards do.
This is the part many organizations still resist: AI scaling is not being determined by data science. It is being determined by delivery, security, and governance.
Inference is infrastructure. That means it needs traffic management, policy enforcement, resilience, cost controls, and operational discipline. The teams that know how to do that are the teams that have been running infrastructure for years, the ones who already learned the hard lessons from “simple” systems that turned into critical systems overnight.
Enterprises can keep calling AI “innovation” if they want. Reality will not cooperate.
Inference already crossed the line. You’re running it like infrastructure. The only question left is whether you intend to run it well.
"Innovations can tolerate inefficiency for a while. Infrastructure cannot. If inference workloads scale under the wrong assumptions, the penalty comes as cost spikes, unpredictable performance, and a steady drift toward operational fragility."
Mateo Rojas-Carulla, Head of Research, AI Agent Security, Check Point Software describe how with AI agents, the stakes are different and the attack vectors are emerging faster than many organizations anticipated
As AI moves from controlled experiments into real-world applications, we are entering an inflection point in the security landscape. The transition from static language models to interactive, agentic systems which are capable of browsing documents, calling tools, and orchestrating multi-step workflows, is already underway. But as recent research reveals, attackers are not waiting for maturity: they are adapting at the same rapid pace, probing systems as soon as new capabilities are introduced.
In the fourth quarter of 2025, our team at Lakera analyzed real attacker behavior across systems protected by Guard and within the Gandalf: Agent Breaker environment — a focused, 30-day snapshot that, despite its narrow window, reflects broader patterns we observed throughout the quarter. The findings paint a clear picture: as soon as models begin interacting with anything beyond simple text prompts (for example: documents, tools, external data) the threat surface expands, and adversaries adjust instantly to exploit it.
This moment may feel familiar to those who watched early web applications evolve, or who observed the rise of API-driven attacks. But with AI agents, the stakes are different. The attack vectors are emerging faster than many organizations anticipated.
For much of 2025, discussions around AI agents largely centered on theoretical potential and early prototypes. But by Q4, agentic behaviors began appearing in production systems at scale: models that could fetch and analyze documents, interact with external APIs, and perform automated tasks. These agents offered obvious productivity benefits, but they also opened doors that traditional language models did not.
Our analysis shows that the instant agents became capable of interacting with external content and tools, attackers noticed and adapted accordingly. This observation aligns with a fundamental truth about adversarial behavior: attackers will always explore and exploit new capabilities at the earliest opportunity. In the context of agentic AI, this has led to a rapid evolution in attack strategies.
Across the dataset we reviewed, three dominant patterns emerged. Each has profound implications for how AI systems are designed, secured, and deployed.

Mateo Rojas-Carulla Head of Research, AI Agent Security
In traditional language models, prompt injection (directly manipulating input to influence output) has been a well-studied vulnerability. However, in systems with agentic capabilities, attackers increasingly target the system prompt, which is the internal instructions, roles, and policy definitions that guide agent behavior. Extracting system prompts is a high-value objective because these prompts often contain role definitions, tool descriptions, policy instructions, and workflow logic. Once an attacker understands these internal mechanics, they gain a blueprint for manipulating the agent.
The most effective techniques for achieving this were not brute force attacks, but rather clever reframing:
• Hypothetical Scenarios: Prompts that ask the model to assume a different role or context — e.g., “Imagine you are a developer reviewing this system configuration…” — often coaxed the model into revealing protected internal details.
• Obfuscation Inside Structured Content: Attackers embedded malicious instructions inside code-like or structured text that
bypassed simple filters and triggered unintended behaviors once parsed by the agent.
This is not just an incremental risk — it fundamentally alters how we think about safeguarding internal logic in agentic systems.
2. Subtle Content Safety Bypasses
Another key trend involves bypassing content safety protections in ways that are difficult to detect and mitigate with traditional filters.
Instead of overtly malicious requests, attackers framed harmful content as:
• Analysis Tasks
• Evaluations
• Role-Play Scenarios
• Transformations or Summaries
These reframings often slipped past safety controls because they appear benign on the surface. A model that would refuse a direct request for harmful output might happily produce the same output when asked to “evaluate” or “summarize” it in context.
This shift underscores a deeper challenge: content safety for AI agents isn’t just about policy enforcement; it’s about how models interpret intent. As agents take on more complex tasks and contexts, models become more susceptible to context-based reinterpretation — and attackers exploit this behavior.
3. Emergence of Agent-Specific Attacks
Perhaps the most consequential finding was the appearance of attack patterns that only make sense in the context of agentic capabilities. These were not simple prompt injection attempts but exploits tied to new behaviors:
• Attempts to Access Confidential Internal Data: Prompts were crafted to convince the agent to retrieve or expose information from connected document stores or systems — actions that would previously have been outside the model’s scope
• Script-Shaped Instructions Embedded in Text: Attackers experimented with embedding instructions in formats resembling script or structured content, which could flow through an agent pipeline and trigger unintended actions
• Hidden Instructions in External Content: Several attacks embedded malicious directives inside externally referenced content — such as webpages or documents the agent was asked to process — effectively circumventing direct input filters
These patterns are early but signal a future in which agents’ expanding capabilities fundamentally change the nature of adversarial behavior.
One of the report’s most striking findings is that indirect attacks — those that leverage external content or structured data — required fewer attempts than direct injections. This suggests that traditional input sanitization and direct query filtering are insuffi-
cient defenses once models interact with untrusted content.
When a harmful instruction arrives through an external agent workflow — whether it’s a linked document, an API response, or a fetched webpage — early filters are less effective. The result: attackers have a larger attack surface and fewer obstacles.
The report’s findings carry urgent implications for organizations planning to deploy agentic AI at scale:
1. Redefine Trust Boundaries
Trust cannot simply be binary. As agents interact with users, external content, and internal workflows, systems must implement nuanced trust models that consider context, provenance, and purpose.
2. Guardrails Must Evolve
Static safety filters aren’t enough. Guardrails must be adaptive, context-aware, and capable of reasoning about intent and behavior across multi-step workflows.
3. Transparency and Auditing Are Essential
As attack vectors grow more complex, organizations need visibility into how agents make decisions — including intermediate steps, external interactions, and transformations. Auditable logs and explainability frameworks are no longer optional.
4. Cross-Disciplinary Collaboration Is Key AI research, security engineering, and threat intelligence teams must work together. AI safety can’t be siloed; it must be integrated with broader cybersecurity practices and risk management frameworks.
5. Regulation and Standards Will Need to Catch Up
Policymakers and standards bodies must recognize that agentic systems create new classes of risk. Regulations that address data privacy and output safety are necessary but not sufficient; they must also account for interactive behaviors and multi-step execution environments.
The arrival of agentic AI represents a profound shift in capability and risk. The Q4 2025 data is an early indicator that as soon as agents begin operating beyond simple text generation, attackers will follow. Our findings show that adversaries are not only adapting — they are innovating attack techniques that traditional defenses are not yet prepared to counter.
For enterprises and developers, the message is clear: securing AI agents is not just a technical challenge; it’s an architectural one. It requires rethinking how trust is established, how guardrails are enforced, and how risk is continuously assessed in dynamic, interactive environments.
In 2026 and beyond, the organizations that succeed with agentic AI will be those that treat security not as an afterthought, but as a foundational design principle.
Mostafa Kabel, CTO, Mindware Group discusses how tech partners must rethink AI deployment
Artificial intelligence is no longer an experimental technology confined to innovation labs. It is actively shaping customer experiences, automating business decisions, and generating original content at scale. As adoption accelerates across industries, tech partners sit at the centre of this transformation responsible not only for deployment, but for ensuring AI is used legally, ethically, and transparently.
The new phase of AI adoption demands more than technical expertise. It requires partners to rethink legal frameworks, intellectual property models, service accountability, and ethical responsibility. Those who fail to adapt risk regulatory exposure, reputational damage, and erosion of customer trust.
One of the most critical areas partners must address is licensing and legal compliance. AI models particularly generative ones are only as deployable as the rights that govern them. Partners must ensure that models are authorised for commercial use and that the outputs they generate do not infringe on copyright, privacy, or data sovereignty regulations.
This becomes especially important in automated decision-making scenarios such as hiring, credit assessments, or fraud detection, where accountability must be clearly defined. Contracts should outline liability boundaries and compliance obligations under frameworks such as GDPR or regional equivalents. Auditability and bias mitigation are no longer optional safeguards; they are legal necessities, particularly in regulated sectors.
Adding another layer of complexity is the infrastructure underpinning AI. The growing reliance on high-performance GPUs introduces exposure to export controls, sanctions, and hardware usage restrictions. In regions with geopolitical sensitivities, partners must ensure AI infrastructure deployments align with government regulations and vendor licensing requirements.
Intellectual property ownership in AI is rarely straightforward. Partners must clearly distinguish between ownership of the base model, the training data, and the resulting outputs. This becomes especially nuanced in co-development or white-label arrangements.

Mostafa Kabel
CTO, Mindware Group
If a partner fine-tunes a model using a customer’s proprietary data, ownership of that model variant and its outputs must be explicitly defined. Agreements should also cover redistribution rights, commercial usage, and branding controls. Addressing these questions early not only avoids disputes but establishes trust and alignment between partners and enterprise clients.
When AI influences hiring decisions, financial outcomes, or customer interactions, ethical responsibility becomes inseparable from technical delivery. Partners have a duty to ensure systems are fair, transparent, and non-discriminatory.
This means investing in diverse training data, conducting regular bias assessments, and enabling explainable AI outputs. Importantly, these responsibilities should be reflected in service agreements. Clients should have the right to human oversight, audit AI-driven decisions, and request corrective action when unintended outcomes arise. Ethical guardrails are no longer philosophical ideals they are essential to regulatory compliance and long-term adoption.
Traditional service level agreements were never designed for systems that learn, adapt, and sometimes behave unpredictably. Generative AI introduces challenges such as hallucinations, data drift, and inconsistent outputs, all of which must be acknowledged contractually.
Partners should update SLAs to include AI-specific performance benchmarks, monitoring mechanisms, and escalation procedures. Risk disclaimers must clearly state that AI-generated content may not always be accurate or contextually appropriate. Regular model reviews and updates should also be built into agreements to ensure sustained performance over time. Just as important is educating customers setting realistic expectations is foundational to responsible deployment.
Trust in AI begins with transparency. Partners reselling or customising third-party models should disclose the model’s source, version, training scope, and known limitations. Any modifications or fine-tuning must be documented and shared with clients.
Labelling AI-generated content, enabling explainability tools, and offering audit capabilities all contribute to greater accountability. Many organisations are also adopting ethical AI frameworks or certifications as a way to formalise best practices. Ongoing education and openness about AI capabilities and limitations are key to building durable client relationships.
Looking ahead, the partner ecosystem must take a proactive approach to AI governance. Standardised AI clauses will increas-
ingly become part of contracts, addressing IP rights, data privacy, explainability, and liability. On the technical side, partners must invest in governance platforms, continuous monitoring, and bias detection tools.
Ethically, alignment with global regulations such as the EU AI Act will be critical, even for organisations operating outside Europe. Shared codes of conduct, regular training, and collaboration with policymakers will define the next generation of responsible AI partnerships.
At Mindware, we are already supporting partners on this journey. With deep experience across AI infrastructure, software, and compliance services, we help organisations build secure, scalable, and responsible AI frameworks. From compliant GPU deployments and AI-ready data platforms to ethical governance advisory, we work closely with partners across the MEA region to navigate evolving regulatory and technological demands.
As AI continues to reshape industries, success will belong to those who can deploy it not just quickly but responsibly, transparently, and ethically.
"When AI influences hiring decisions, financial outcomes, or customer interactions, ethical responsibility becomes inseparable from technical delivery. Partners have a duty to ensure systems are fair, transparent, and non discriminatory."
Built on ARTPEC-9, this next-generation outdoor camera offers outstanding high-resolution image quality in 4K at 60 fps. It features a light-sensitive ½” sensor to deliver clear, bright images and better handling of shadows—ideal in urban areas. With 34x optical zoom, you can easily follow fast-moving objects. Plus, laser focus ensures precise focus every time. Lightfinder 2.0 and Forensic WDR deliver true colors and great detail in challenging light or near darkness. Furthermore, Axis Zipstream with support for AV1, H.264, and H.265 significantly lowers bandwidth and storage requirements.
Featuring a deep learning processing unit (DLPU), this AI-based camera runs advanced features and powerful analytics on the edge. It also delivers valuable metadata, facilitating fast, easy, and efficient forensic search capabilities in live or recorded video. AXIS Object Analytics comes preinstalled to detect, classify, track, and count humans, vehicles, and different types of vehicles. It also comes with AXIS Image Health Analytics. Autotracking 2 with click-and-track functionality enables active object tracking, and an orientation aid enables dynamic text overlays. What’s more, chameleon masking safeguards privacy.
Axis Edge Vault, a hardware-based cybersecurity platform, safeguards the device and offers FIPS 140-3 Level 3 certified secure key storage and operations. With IP66-, IK10-, NEMA 4x, and NEMA TS2-ratings, it’s both impact and weather-resistant. The camera’s operating temperature is -50 to 50 °C (-58 to 122 °F), while Arctic Temperature Control allows it to power up at as low as -40 °C (-40 °F). With shock detection, you'll be informed if the camera has been hit.

Highlights:
• High-resolution with 1/2" sensor
• Lightfinder 2.0 and Forensic WDR
• Next-generation AI-powered analytics
• Precise laser focus and 34x optical zoom
• Built-in cybersecurity with Axis Edge Vault
Designed for professionals who expect emerging technology to unlock real productivity, the EliteBook X G2 portfolio delivers future-ready AI performance, enterprise-grade security, and ultra-mobile design to make intelligent work feel intuitive. As part of HP’s broader EliteBook X portfolio, created for every kind of leader, the G2 Series delivers devices tailored to diverse workstyles and performance needs.
These Copilot+ PCs combine future-ready AI performance, all-day power, unmatched serviceability,5 and enterprise-grade security — all in a sleek, light package that elevates the work experience wherever it takes place.
Highlights:
• Power through demanding AI workloads with the HP EliteBook X G2q, the world’s first business notebook with up to 85 TOPS NPU performance for processing and running concurrent AI apps, powered by up to the latest Snapdragon X2 Elite processor. The portfolio also features the HP EliteBook X G2i, equipped with the latest Intel Core Ultra Series 3 processors for graphically demanding AI apps, including up to 50 NPU TOPS and up to 180 platform TOPS. For significantly faster AI performance, the HP EliteBook X G2a is loaded with up to 55 TOPS NPU and the latest AMD RyzenAI processor.
Designed for modern hybrid work environments, the ExpertBook Ultra emphasizes powerful performance, enterprise-grade security, and sustainable design.The all-new ExpertBook Ultra’s design embodies quiet sophistication tailored for next-generation business elites, featuring a sleek, minimalist chassis with a premium finish available in two shades — Jet Fog and Morn Grey. Meticulously crafted using precision CNC engineering, the ExpertBook Ultra combines a durable, 9H-hardness magnesium-aluminum alloy, finished with a Nano Ceramic Technology coating. This delivers exceptional rigidity and a refined aesthetic without adding additional weight. The result is a lightweight device starting at 0.99kg, with US military-grade durability and security. Unlike other ultralight laptops, ExpertBook Ultra makes no compromises, integrating a full suite of I/O ports and a 70Wh long-lasting battery to power professionals through an entire day of work and beyond.
At its core, ExpertBook Ultra is an AI powerhouse, powered by the latest Intel® Core™ Ultra X9 Series 3 processor with up to 50 TOPS of NPU performance. This combination effortlessly handles multitasking, demanding AI workloads, and critical business applications. The ASUS ExpertCool Pro thermal solution supports up to 50W TDP, ensuring robust performance even under intensive workloads.
Highlights:
• The ExpertBook Ultra further delivers an exceptional user experience. Its 3K tandem OLED touchscreen protected by scratch-resistant Corning Gorilla Glass delivers up to 1400 nits HDR brightness. This provides crisp, vivid visuals for both de-

tailed work and immersive media, with an anti-glare finish for added clarity and comfort.
• It has a six-speaker system tuned with Dolby Atmos, delivering clear, surround-sound audio quality for presentations and conference calls. Navigation and typing are equally refined, with a haptic-force touchpad and a keyboard featuring a skin-friendly excimer coating for enhanced comfort.
• Security is also a priority for the ExpertBook Ultra. ASUS ExpertGuardian is built on the principles of stringent NIST SP 800-193 guidelines, safeguarding firmware by preventing unauthorized changes, detecting attacks, and automatically restoring trusted versions. This reduces downtime, preventing failure and ensuring government-grade reliability for enterprise continuity.
• Adapt instantly and work smarter with HP Smart Sense dynamically tuning power, thermals, and system settings in real time. Flexible creation, collaboration, and presenting are unlocked with the EliteBook X Flip G2i in laptop, tent, tablet, or stand modes with an optional garaged pen.
• Stay resilient against emerging threats with quantum-resistant protection below, in, and above the OS through hardware-enforced HP Wolf Security for Business— delivering enterprise-grade security even when offline and on the move.
• Work light, go far, and recover fast with the HP EliteBook X G2i ultra-thin design — weighing under 1 kg. Design and mobility work together on this device with all day battery life and vivid 3K Tandem OLED display options.

Geopolitical, Regulatory, and Security Pressures Spur Governments to Boost Investment in Independent AI Infrastructure

By 2027, 35% of countries will be locked into region-specific AI platforms using proprietary contextual data, according to Gartner, Inc., a business and technology insights company. Gartner also predicts that platform lock-in will rise from 5% to 35% by 2027.
“Countries with digital sovereignty goals are increasing investment in domestic AI stacks as they look for alternatives to the closed U.S. model, including computing power, data centers, infrastructure and models aligned with local laws, culture and region,” said Gaurav Gupta, VP Analyst at Gartner. “Trust and cultural fit are emerging as key criteria. Decision makers are prioritizing AI platforms that align with local values, regulatory frameworks, and user expectations over those with the largest training datasets.”
Localized models deliver more contextual value; regional LLMs outperform global models in applications such as education, legal compliance, and public services, especially in non-English languages.
Nations Will Need to Invest 1% of GDP in AI Sovereignty by 2029
With non-Western customers changing alignment due to concerns
of overly Western influence, AI sovereignty will lead to reduced collaboration and duplication of effort. Because of this, Gartner predicts that nations establishing a sovereign AI stack will need to spend at least 1% of their GDP on AI infrastructure by 2029.
AI sovereignty refers to the ability of a nation or organization to independently control how AI is developed, deployed, and used related to its geographical boundaries.
Regulatory pressure, geopolitics, cloud localization, national AI missions, corporate risks and national security concerns are driving governments and corporations to accelerate investments in sovereign AI. A fear of falling behind in the technological AI race will also push nations and companies to innovate rapidly and invest in an attempt to achieve self-sufficiency in all aspects of the AI stack.
“Data centers and AI factory infrastructure form the critical backbone of the AI stack that enables AI sovereignty, " said Gupta.
“As a result, data centers and AI factory infrastructure will see explosive build-up and investment going forward, propelling a few companies that control the AI stack to achieve double-digit, trillion-dollar valuations.”
Because of this, CIOs must:
• Design model agnostic workflows using orchestration layers that enable switching between LLMs across regions and different vendors.
• Ensure AI governance, data residency, and model tuning practices can meet country-specific legal, cultural, and linguistic requirements.
• Establish relationships with national cloud providers, regional LLM vendors, and sovereign AI stack leaders in priority markets and build a vetted list of partners.
• Monitor AI legislation, data sovereignty rules, and emerging standards that may affect where and how they can deploy AI models and process users data.



