CXODX_Magazine_Apr_2025



BACKUP IN THE ERA OF AGENTIC AI

In an era likely to be redefined by the pervasive impact of Agentic AI across business processes and industries, data and backup strategies are also undergoing a significant transformation. Businesses are increasingly relying on AI to drive processes and innovation and to provide insights critical for decision-making.

Securing data through robust backup systems has therefore never been more critical.

Agentic AI relies on massive datasets to deliver better outcomes, such as analysing patterns and making recommendations, which in turn influence decision-making across industries. This dependence on data is also fraught with risks, ranging from data corruption to sophisticated attacks. In such a context, traditional methods of backup no longer suffice to meet the demands of modern businesses. Instead, businesses must adopt intelligent backup strategies that leverage AI itself for enhanced protection and reliability.

AI-powered backup solutions provide predictive analytics to anticipate potential disruptions, detect anomalies, and ensure seamless recovery. These systems can autonomously prioritize critical data, optimize storage capacity, and adapt to dynamic workloads without human intervention. While traditional backup solutions typically follow fixed schedules, AI-powered platforms adjust in real time as data changes, ensuring the most important information is backed up exactly when it needs to be. They can also identify duplicate or outdated files, saving storage space and keeping backup costs under control. Finally, AI-powered backup solutions add an extra layer of protection by detecting unusual activity early, such as signs of ransomware or data corruption, before real damage is done.
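
To make the anomaly-detection idea concrete, here is a minimal illustrative sketch (not any vendor's actual implementation; the field names and threshold are hypothetical) that flags a backup job whose change rate spikes far above its recent baseline, a pattern often associated with ransomware-style mass encryption:

```python
# Illustrative sketch only: flag a backup job whose change rate jumps far above
# its recent baseline, an early warning sign of ransomware-style mass encryption.
# Field names and the threshold are hypothetical.
from statistics import mean, stdev

def change_rate(job):
    """Fraction of protected data that changed since the previous backup."""
    return job["changed_bytes"] / job["total_bytes"]

def is_anomalous(history, latest, z_threshold=3.0):
    """True if the latest job's change rate sits far outside the historical baseline."""
    rates = [change_rate(j) for j in history]
    if len(rates) < 7:                      # need a minimal baseline first
        return False
    mu, sigma = mean(rates), stdev(rates)
    return sigma > 0 and (change_rate(latest) - mu) / sigma > z_threshold

history = [{"changed_bytes": 35e9 + i * 1e9, "total_bytes": 2e12} for i in range(14)]
latest = {"changed_bytes": 900e9, "total_bytes": 2e12}   # sudden 45% churn
if is_anomalous(history, latest):
    print("Unusual change rate detected: quarantine this restore point and alert the team.")
```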

To sum up, by embracing advanced, AI-driven backup solutions, organizations can mitigate risks, maintain data integrity, and harness the full potential of AI to drive sustainable growth in an ever-changing digital landscape.

RAMAN NARAYAN

Co-Founder & Editor in Chief narayan@leapmediallc.com Mob: +971-55-7802403

Sunil Kumar Designer

SAUMYADEEP HALDER

Co-Founder & MD

saumyadeep@leapmediallc.com Mob: +971-54-4458401

Nihal Shetty Webmaster

MALLIKA REGO

Co-Founder & Director Client Solutions mallika@leapmediallc.com Mob: +971-50-2489676

BACKUP STRATEGIES FOR ENABLING BUSINESS RESILIENCE

Businesses should redefine their approach to business continuity planning, viewing it not as a mere operational necessity but as a strategic driver of business resilience and growth.

By embracing advanced technologies and best practices, organizations can safeguard their critical data, minimize downtime, and remain resilient in the face of unexpected disruptions.

Prasanna Rajendran, Vice President – EMEA, Kissflow discusses how their low-code/no-code platform is built for the realities of enterprise digital transformation.

Firas Jadalla, Regional Director, META, Genetec Inc discusses how collaboration is made easy using a work management platform for security teams

Francesco Colavita, VP Presales Consulting, JAGGAER discusses how Agentic AI is transforming industries

Data is the core of Industry 5.0, says Christian Pedersen, Chief Product Officer, IFS

Vibhu Kapoor, RVP, MEA & India, explains how industry-specific cloud ERP, embedded AI, and strong partner collaboration are driving a new wave of transformation in manufacturing

The age of corporate accountability is here as far as cybersecurity is concerned, says Andre Troskie, EMEA Field CISO at Veeam

Vidura Gamini Abhaya, Vice President, Solutions Architecture at WSO2, discusses how API governance can empower today’s decentralized teams

Bashar Bashaireh, VP Middle East, Türkiye & North Africa at Cloudflare says we must rethink our approaches to infrastructure, security, and the role of technology in our lives.

MIDDLE EAST AND AFRICA’S LARGEST CYBERSECURITY EVENT

NETAPP FUELS FUTURE OF AGENTIC AI REASONING SOLUTIONS WITH NVIDIA

NVIDIA-powered data agents coming to NetApp ONTAP will speed up inference

NetApp announced it is advancing the state of the art in agentic AI with intelligent data infrastructure that taps the NVIDIA AI Data Platform reference design. By collaborating with NVIDIA, NetApp is enabling businesses to better leverage their data to fuel AI reasoning inference.

In the era of intelligence, businesses need to rethink their data strategies to turn rapidly growing data estates into competitive assets that empower them to agilely navigate their business environment. By adopting an Intelligent Data Infrastructure framework, businesses will be able to operate under a unified vision that integrates metadata cataloging, automation, and hybrid cloud capabilities to eliminate siloes and deliver actionable insights at every stage of the AI pipeline.

Together, NetApp ONTAP and the NVIDIA AI Data Platform enable businesses to navigate the era of intelligence by building distributed systems that unlock the full value of business data to fuel data-driven actions. Built on NVIDIA’s expertise in AI workflow optimization, the platform is a customizable reference design for integrating NVIDIA’s accelerated computing, networking, and AI software with enterprise storage, transforming data into actionable intelligence.

With the NVIDIA AI Data Platform, NetApp customers will be able to connect their data to fuel AI reasoning workloads with agents using NVIDIA AI Enterprise software, including the NVIDIA AI-Q Blueprints and NVIDIA NIM microservices for NVIDIA Llama Nemotron Reason and other models.

Suhail Hasanain, Regional Director for the Middle East and Africa region at NetApp, said, “NVIDIA and NetApp are collaborating to give customers the tools they need to strategically unlock their data to drive the next wave of AI applications. By combining the NVIDIA AI Data Platform with the rich and mature data and AI management capabilities and services of NetApp ONTAP, enterprises can more easily bring AI to their data while maintaining security and compliance to achieve their goals of becoming data-driven businesses.”

VAST DATA PLATFORM ADDS NEW CAPABILITIES TO SUPPORT REAL-TIME AGENTIC APPLICATIONS

VAST InsightEngine powers AI-driven decision-making with real-time data ingestion, processing, and retrieval

VAST Data, an AI data platform company, announced new enhancements to the industry-leading VAST Data Platform, making it the first and only system in the market to unify structured and unstructured data into a single DataSpace that scales linearly to hyperscale, with unified enterprise-grade security. These new capabilities are redefining enterprise AI and analytics by combining real-time vector search, fine-grained security, and event-driven processing into a seamless, high-performance data ecosystem that powers the VAST InsightEngine, which transforms raw data into AI-ready insights through intelligent automation, enabling enterprises to build advanced AI applications, agentic workflows, and high-speed inferencing pipelines.

Organizations today face significant challenges in scaling enterprise AI deployments. AI models call for ultra-fast vectorized search and retrieval to access the most up-to-date information, with AI-driven workloads requiring massive computational power and well-integrated data pipelines.

As organizations embrace AI retrieval, and as embedding models continue to make exponential improvements in their understanding of enterprise data, only the VAST Data Platform can provide a unified, AI-ready solution that can meet the needs of extreme-scale agentic enterprises. The parallel transactional nature of VAST’s unique DASE architecture makes it possible to update vector spaces in real time for the first time, and this shared-everything approach allows all servers to search the entire vector space in milliseconds, enabling VAST InsightEngine to transform raw data into AI-ready insights instantly and empowering organizations to make decisions with maximum accuracy.

“Only two kinds of companies exist today: those becoming AI-driven organizations, and those approaching irrelevance,” said Jeff Denworth, Co-Founder at VAST Data. “In order to thrive in the AI era, enterprises need instant AI insights, enterprise-grade security, and limitless scalability – without worrying about managing fragmented tools or data infrastructure. The VAST InsightEngine is the market’s first and only solution able to securely ingest, process, and retrieve all enterprise data – files, objects, tables, and streams – in real time to make enterprise data instantly usable for accurate AI-driven decision making.”

ENTERPRISE AI READINESS LAGS AMBITIONS

Survey highlights key gaps threatening Generative AI Success

Qlik announced findings from an IDC survey exploring the challenges and opportunities in adopting advanced AI technologies. The study highlights a significant gap between ambition and execution: while 89% of organizations have revamped data strategies to embrace Generative AI, only 26% have deployed solutions at scale. These results underscore the urgent need for improved data governance, scalable infrastructure, and analytics readiness to fully unlock AI’s transformative potential.

The findings, published in an IDC InfoBrief sponsored by Qlik, arrive as businesses worldwide race to embed AI into workflows, with AI projected to contribute $19.9 trillion to the global economy by 2030. Yet, readiness gaps threaten to derail progress.

"AI’s potential hinges on how effectively organizations manage and integrate their AI value chain," said James Fisher, Chief Strategy Officer at Qlik. "This research highlights a sharp divide between ambi-

tion and execution. Businesses that fail to build systems for delivering trusted, actionable insights will quickly fall behind competitors moving to scalable AI-driven innovation."

The IDC survey uncovered several critical statistics illustrating the promise and challenges of AI adoption:

• Agentic AI Adoption vs. Readiness: 80% of organizations are investing in Agentic AI workflows, yet only 12% feel confident their infrastructure can support autonomous decision-making.

• "Data as a Product" Momentum: Organizations proficient in treating data as a product are 7x more likely to deploy Generative AI solutions at scale, emphasizing the transformative potential of curated and accountable data ecosystems.

• Embedded Analytics on the Rise: 94% of organizations are embedding or planning to embed analytics into enterprise applications, yet only 23% have achieved integration into most of their enterprise applications.

• Generative AI’s Strategic Influence: 89% of organizations have revamped their data strategies in response to Generative AI, demonstrating its transformative impact.

• AI Readiness Bottleneck: Despite 73% of organizations integrating Generative AI into analytics solutions, only 29% have fully deployed these capabilities.

NUTANIX LAUNCHES NUTANIX CLOUD CLUSTERS (NC2) ON MICROSOFT AZURE IN QATAR

NC2 enables businesses to comply with Qatar’s data sovereignty regulations

Nutanix, a leader in hybrid multicloud computing, announced the launch of Nutanix Cloud Clusters (NC2) on Microsoft Azure in Qatar. This new offering enables private and public enterprises in Qatar to leverage a seamless hybrid cloud solution that promotes data localization, compliance, and enhanced operational performance, helping businesses navigate their digital transformation journeys.

The arrival of NC2 on Azure comes after strong demand from both Nutanix and Microsoft customers to provide an optimal joint hybrid cloud offering. With NC2 on Azure, organizations in Qatar can extend their existing Nutanix environments to the Microsoft Azure cloud, allowing for the smooth operation of applications across both on-premises and cloud infrastructures. This integration provides a flexible, scalable solution, ensuring that enterprises can meet local data governance regulations while optimizing performance and minimizing latency.

In Qatar, where data sovereignty and compliance with local laws are critical for businesses, the launch of NC2 ensures that enterprises can localize sensitive data while remaining agile. This solution not only helps organizations comply with the country's regulatory framework but also builds trust among customers and stakeholders by safeguarding their information.

"After strong demand from our customers, we are excited about the launch of Nutanix Cloud Clusters on Microsoft Azure in Qatar. Now, businesses have the tools to drive digital transformation and cloud first policy with greater efficiency and compliance," said Hani Salameh, Sales Manager at Nutanix Qatar. "NC2 empowers organizations to easily integrate their on-premises and cloud environments, supporting operational continuity and innovation while adhering to the country’s data protection standards."

The availability of Nutanix Cloud Clusters on Microsoft Azure in Qatar marks an important milestone for businesses seeking to modernize their IT infrastructure and embark on a secure and efficient digital transformation journey. By combining the strengths of Nutanix’s hyper-converged infrastructure with Azure’s cloud services, organizations can optimize their performance, reduce costs, and remain compliant with local data regulations.

Hani Salameh Sales Manager, Nutanix Qatar
James Fisher Chief Strategy Officer, Qlik

PURE STORAGE INTEGRATES NVIDIA AI DATA PLATFORM INTO FLASHBLADE

FlashBlade now certified for NVIDIA Cloud Partner and Enterprise deployments

Pure Storage has announced it is integrating the NVIDIA AI Data Platform reference design into its FlashBlade platform, expanding its commitment to deliver validated, enterprise-grade, scalable, AI-ready solutions for customers that meet NVIDIA’s rigorous standards.

Pure Storage is advancing enterprise data storage and management by enhancing enterprise customers’ ability to manage the high-performance data and accelerated compute requirements necessary for successful AI deployments, including:

Pure Storage is advancing AI data infrastructure by delivering NVIDIA AI Data Platform capabilities with FlashBlade. Supporting NVIDIA’s new reference design for intelligent AI platforms, FlashBlade acts as a high-performance, distributed storage system, enabling faster business insights by harnessing NVIDIA’s accelerated computing, networking, and AI Enterprise software. This builds on the recent launch of FlashBlade//EXA, a platform designed for the most demanding AI and high-performance computing workloads.

Additionally, Pure Storage has achieved High-Performance Storage (HPS) certification for NVIDIA Cloud Partner Reference Architectures. This certification validates Pure’s ability to support NVIDIA Cloud Partners using NVIDIA HGX systems with B200 or H200 GPUs, confirming Pure as a trusted partner for building advanced GPU cloud environments.

Pure Storage also earned NVIDIA-Certified Storage Partner approval at both Foundation and Enterprise levels. Along with Pure Storage’s recent certification of FlashBlade//S500 with NVIDIA DGX SuperPOD, these new certifications provide NVIDIA Cloud Partners and enterprises with high-performance storage that meets all requirements for building state-of-the-art AI infrastructure.

“GPUs have fast become a driving force behind the next wave of AI innovation. The incorporation of the NVIDIA AI Data Platform into FlashBlade provides the AI-ready storage necessary for peak performance. Additionally, our recent NVIDIA certifications affirm that Pure Storage is supporting the pace and scale that AI models need to create change. Through our purpose-built storage solutions, enterprise customers can easily and efficiently harness the power of AI to accelerate success,” said Rob Lee, Chief Technology Officer, Pure Storage.

Rob Lee
CTO, Pure Storage

AMIVIZ PARTNERS WITH KITEWORKS TO EMPOWER ENTERPRISES TO PROTECT SENSITIVE DATA

This partnership extends AmiViz's portfolio, introducing Kiteworks' approach to zero-trust data exchange

AmiViz, a leading cybersecurity-focused value-added distributor headquartered in the Middle East, has forged a strategic partnership with Kiteworks, which empowers organizations to effectively manage risk in every send, share, receive, and use of private data. This partnership extends AmiViz's portfolio, introducing Kiteworks' approach to zero-trust data exchange enabled with a hardened virtual appliance and next-generation digital rights management (DRM) to enterprises across the Middle East and Africa.

Kiteworks, recently recognized by G2 for its product excellence, empowers organizations to control, monitor, and protect every interaction between people, machines, and systems across user collaboration, automated workflows, and enterprise AI—all from one platform. The Kiteworks Private Data Network delivers unified compliance controls through centralized audit logs, automated compliance reporting, and preconfigured templates for multiple regulations (GDPR, HIPAA, CCPA). Real-time compliance monitoring and automated policy enforcement ensure consistent regulatory adherence across all data sharing activities. This strategic partnership aims to enhance operational resilience and data protection for enterprises navigating complex regulatory landscapes.

Ilyas Mohammed, Chief Operating Officer at AmiViz, commented, "We are very excited to forge this strategic alliance with Kiteworks, whose groundbreaking approach to secure content communications addresses a critical vulnerability in today’s increasingly complex cybersecurity landscape. Their innovative platform doesn’t merely protect data—it fundamentally transforms how organizations safeguard their most sensitive information. Together, AmiViz and Kiteworks are empowering enterprises across the Middle East and Africa with solutions that protect private data from emerging threats while also ensuring seamless regulatory compliance in an era where data privacy has never been more crucial."

Ilyas Mohammed
COO, AmiViz

CISCO UNVEILS AI FACTORY ARCHITECTURE WITH NVIDIA WITH SECURITY AT THE CORE

The Cisco Secure AI Factory with NVIDIA will embed security within all layers, from the application to the workload to the infrastructure, using solutions like Cisco AI Defense and Hybrid Mesh Firewall

Cisco unveiled an AI factory architecture with NVIDIA that puts security at its core. This collaboration with NVIDIA builds on the expanded partnership that was announced last month, and the companies have moved swiftly to provide validated reference architectures today. Together, the companies are developing the Cisco Secure AI Factory with NVIDIA to dramatically simplify how enterprises deploy, manage, and secure AI infrastructure at any scale.

“AI can unlock groundbreaking opportunities for the enterprise,” said Chuck Robbins, Chair and CEO, Cisco. “To achieve this, the integration of networking and security is essential. Cisco and NVIDIA's trusted, innovative solutions empower our customers to harness AI's full potential simply and securely.”

“AI factories are transforming every industry, and security must be built into every layer to protect data, applications and infrastructure,” said Jensen Huang, founder and CEO, NVIDIA. “Together, NVIDIA and Cisco are creating the blueprint for secure AI—giving enterprises the foundation they need to confidently scale AI while safeguarding their most valuable assets.”

Developing and delivering AI applications requires high-performing, scalable infrastructure and an AI software tool chain. Securing this infrastructure and AI software requires a new architecture – one that embeds security at all layers of the AI stack and automatically expands and adapts as the underlying infrastructure changes. Cisco and NVIDIA’s partnership on the NVIDIA Spectrum-X Ethernet networking platform provides the foundation for the Cisco Secure AI Factory with NVIDIA. Cisco is integrating security solutions like Cisco Hypershield, to help protect AI workloads, and Cisco AI Defense, to help protect the development, deployment, and use of AI models and applications. Together, Cisco and NVIDIA will provide customers with the flexibility to design infrastructure for their specific AI needs without sacrificing operational simplicity or security.

CLOUDFLARE INTRODUCES CLOUDFLARE FOR AI

Advanced suite of AI security capabilities provides an easy, safe, and reliable way for companies of all sizes to protect AI model deployment, data, and integrity

Cloudflare, a leading connectivity cloud company, unveiled Cloudflare for AI, a suite of tools to provide comprehensive visibility, security and control for AI applications–from model deployment to usage and defense against abuse. Now, Cloudflare customers will be able to protect themselves against the most pressing threats facing today’s AI models, including employee misuse of tools, toxic prompts, personally identifiable information (PII) leakage, and other emerging vulnerabilities.

AI is rapidly reshaping business operations, driving organizations to aggressively develop and integrate new models into critical areas that touch everything from how they price products and restock grocery store shelves to how they analyze medical data. At the same time, AI models have become more accessible than ever—organizations of all sizes can leverage AI technology without the need for massive investments. As AI experimentation becomes increasingly widespread, new risks emerge–cybercriminals are targeting AI applications, while security teams struggle to keep pace with the rapid speed of innovation. Organizations that fail to securely use and deploy AI models run the risk of exposing themselves to emerging cyber threats that could compromise their core operations and company data.

“Over the next decade, an organization’s AI strategy will determine its fate–innovators will thrive, and those who resist will disappear. The adage ‘move fast, break things’ has become the mantra as organizations race to implement new models and experiment with AI to drive innovation,” said Matthew Prince, co-founder and CEO at Cloudflare. “But there is often a missing link between experimentation and safety. Cloudflare for AI allows customers to move as fast as they want in whatever direction, with the necessary safeguards in place to enable rapid deployment and usage of AI. It solves customers’ most pressing concerns without putting the brakes on innovation.”



Matthew Prince co-founder and CEO, Cloudflare

MANAGEENGINE EVOLVES LOG360 AS A UNIFIED SECURITY PLATFORM

This is expected to simplify security operations and future-proof security investments

ManageEngine, a division of Zoho Corporation, announced the evolution of Log360—its unified security information and event management (SIEM) and IT compliance management solution—into a security analytics platform. The platformization of Log360, encompassing open APIs and a developer ecosystem, enables ManageEngine to address the critical need for adaptable, future-proof security. ManageEngine's leadership believes this shift empowers enterprises, system integrators (SIs), and managed security service providers (MSSPs) to combat evolving threats on their own terms, turning SIEM from a cost center into a strategic asset.

Log360's evolution into a robust security platform began last year with key enhancements, laying the foundation for future innovation.

"A platform isn’t defined by just what it does today, but by what it enables tomorrow. With Log360 evolving as a platform,

we’re empowering customers and partners to innovate on top of our foundation, whether integrating cutting-edge AI models or niche compliance frameworks. This ecosystem-driven approach turns security from a cost center into a strategic enabler," says Manikandan Thangaraj, vice president at ManageEngine.

ManageEngine will expand Log360's platform capabilities by growing its partner and developer ecosystem with industry-specific extensions, integrating advanced AI and ML tools for predictive security and fostering community-driven security innovation. As an initial step towards this direction, ManageEngine has entered into a partnership with Sacumen, a firm specializing in the development of cybersecurity product engineering and services.

"Our partnership with ManageEngine reflects our shared vision: empowering enterprises with comprehensive and integrated security solutions. Sacumen's con-

GBM CELEBRATES 35TH ANNIVERSARY

The leading digital solutions provider continues to help enterprises future-proof operations, enhance security, and drive business agility.

Gulf Business Machines (GBM), a leading end-to-end digital solutions provider, is marking its 35th anniversary this year, celebrating a legacy of technological innovation, strategic partnerships, and business transformation that has shaped the Middle East’s digital economy.

Since its inception in 1990, GBM has been a trusted partner in digital transformation in the region, delivering cutting-edge IT solutions across diverse industries, including government, banking and financial services, healthcare, and retail. With a strong foundation rooted in the region and a commitment to global standards, GBM has consistently empowered organizations with secure, scalable, and intelligent technology solutions, playing an instrumental role in the ongoing evolution of the region’s digital economy.

Strengthened by long-standing partnerships with global technology leaders like IBM and Cisco, GBM remains at the forefront of delivering advanced solutions that drive business growth. With the acquisition of Coordinates Middle East in 2022 and the launch of GBM Shield, the company continues to expand its capabilities in advanced security services, reinforcing its role as a key enabler of the region’s digital transformation. With a team of over 1,500 employees and offices across the GCC, GBM is driving innovation and helping businesses stay resilient, secure and future-ready.



Looking ahead, GBM is accelerating its growth by expanding its presence in key Middle Eastern markets, advancing managed services with AI and automation, increasing focus on cloud technologies, and strengthening technical expertise to support clients in an increasingly digital-first world.

Mike Weston, CEO at GBM, said, “Over the past three and a half decades, GBM has established itself as a trusted partner for regional organizations navigating the complexities of IT infrastructure and business operations. As we celebrate 35 years of success, we remain committed to driving innovation and delivering exceptional value to our clients. The future is about smarter, more agile IT services, and we are poised to lead the way with our customer-first approach.”

74% OF CEOS FEAR JOB LOSS WITHOUT MEASURABLE AI SUCCESS IN 2 YEARS

Dataiku/Harris Poll survey of 500 international CEOs reveals AI’s impact on leadership, competitive survival, and the future of the C-suite

A staggering 74% of CEOs internationally admit they are at risk of losing their job within two years if they fail to deliver measurable AI-driven business gains, according to the newly released “Global AI Confessions Report: CEO Edition” by Dataiku, the Universal AI Platform. The study, conducted by The Harris Poll for Dataiku, exposes the candid admissions and revelations of global chief executives as they face a new reality: AI strategy has become the defining factor in corporate survival.

The findings underscore an unprecedented shift in executive accountability, as 70% of CEOs predict that by the end of the year, at least one of their peers will be ousted due to a failed AI strategy or AI-induced crisis. Meanwhile, more than half of CEOs (54%) admit that a competitor has already deployed a superior AI strategy, highlighting the urgency for organizations to move beyond AI ambition into tangible execution.

AI vs. BoD and Executive Leadership: A Growing Power Struggle?

The report also signals a radical redefinition of corporate leadership, as AI increasingly challenges the role of decision-making. Key findings include:

• 94% of CEOs admit that an AI agent could provide equal or better counsel on business decisions than a human board member.

• 89% of CEOs believe AI can develop an equal or better strategic plan than one or more of their executive leaders, a cohort defined as VP to C-suite.

As AI’s influence expands, it’s not just reshaping strategy — it’s challenging the very foundation of corporate leadership, forcing CEOs to reconsider who, or what, will make the most critical decisions in the future.

The “AI Commodity Trap” and AI Washing: CEO Blind Spots

Despite their growing reliance on AI, many CEOs remain dangerously unaware of the pitfalls of poorly executed AI strategies.

• 87% of CEOs fall into the “AI commodity trap,” expressing confidence that off-the-shelf AI agents can be just as effective as custom-built solutions for highly nuanced vertical or domain-specific business applications.

• 35% of AI initiatives are suspected to be "AI washing" — designed more for optics than real business impact.

• 94% of CEOs suspect employees are using GenAI tools — such as ChatGPT, Claude, and Midjourney — without company approval (known as “shadow AI”), exposing a massive governance failure within organizations.

AI Governance and Regulatory Uncertainty: Delays and Cancellations on the Rise

While AI adoption accelerates, poor governance and regulatory uncertainty are creating significant roadblocks:

• Eight-in-ten CEOs expressed concern that AI deployments could inadvertently harm their employees (80%) or their customers (83%), underscoring a lack of confidence in execution and control.

• One-in-three (37%) CEOs admit their AI projects have been delayed due to regulatory uncertainty.

• 32% of CEOs admit their AI projects have been canceled or abandoned due to regulatory uncertainty.

“For CEOs today, every AI decision feels like a high-stakes gamble that can drive competitive dominance or lead to costly consequences,” explained Florian Douetteau, co-founder and CEO of Dataiku. “The only way to turn AI into an enduring advantage is to assert greater control and governance — future-proofing not just the companies these CEOs run, but their own roles as leaders in an increasingly AI-powered economy.”

AI: The Defining Factor for CEO and Company Survival

With 78% of CEOs prioritizing AI strategy as a core business goal for 2025 and 83% acknowledging AI’s impact on investor confidence, the message is clear — CEOs must turn AI intent into measurable impact, or risk becoming a cautionary tale in the next inevitable wave of executive turnover.

Florian Douetteau CEO, Dataiku

AI AGENTS SET TO TRANSFORM SOFTWARE DEVELOPMENT

New State of IT research from Salesforce reveals software development leaders in the UAE and globally are bullish on agentic AI and its impact

AI agents are set to revolutionize software development processes in the UAE, where 100% of teams use, or expect to use, AI for code generation, according to Salesforce’s new State of IT survey of software development leaders.

The large global study of more than 2,000 software development leaders, including 100 IT leaders in the UAE and a supplementary survey of 250 frontline developers in the United States, found that 86% of teams in the UAE will use AI agents within two years and 70% believe AI agents will be as essential as traditional development tools.

The UAE data aligns with global trends, with more than 9 in 10 developers around the world excited about AI’s impact on their careers, while an overwhelming 96% expect it to change the developer experience for the better. The survey also revealed that more than four in five globally believe AI agents will become as essential to app development as traditional software tools.

Developers are not only looking to agents to unlock greater efficiency and productivity, but 92% globally believe agentic AI will help them advance in their careers. Some developers, however, believe that they, as well as their organizations, need more training and resources to build and deploy a digital workforce of AI agents.

The arrival of agentic AI provides developers with the opportunity to focus less on tasks like writing code and debugging and grow into more strategic, high-impact work. And with developers increasingly using agents powered by low-code/no-code tools, development is becoming faster, easier, and more efficient than ever — regardless of developers’ coding abilities.

Developers are enthusiastic about agentic AI and its impact on their careers

Respondents are excited for agents to take on simple, repetitive tasks, freeing them up to focus on high-impact projects that contribute to larger business goals.

• 96% of developers globally are enthusiastic about AI agents’ impact on the developer experience.

• Developers are most eager to use AI agents for debugging and error resolution, then for generating test cases and building repetitive code.

• The arrival of agentic AI comes at a time when 92% of developers are looking to measure their productivity based on impact over output.

• With the help of AI agents, developers believe they'll focus more on high-impact projects like AI oversight and architecting complex systems.

Low-code/no-code tools help developers unlock greater productivity, regardless of coding skills

With agents powered by low-code/no-code tools, developers of all levels can now build and deploy agents. Respondents believe these tools will help democratize and scale AI development for the better.

• 85% of developers globally, and 68% in the UAE, who are using agentic AI currently use low-code/no-code tools.

• 77% of developers globally, and 78% in the UAE, say that low-code/no-code tools can help democratize AI development.

• 78% of developers globally say that the use of low-code/no-code app development tools can help scale AI development.

Developers are eager for more resources to build AI agents

Developers say updated infrastructure and more testing capabilities and skilling opportunities are critical as they transition to building and deploying AI agents.

• Infrastructure Needs: Many developers (82% globally, and 81% in the UAE) believe their organization needs to update its infrastructure to build and deploy AI agents. Over half (56%) of developers say their data quality and accuracy aren't sufficient for the successful development and implementation of agentic AI.

• Testing Capabilities: Nearly half (48% globally and 46% in the UAE) of developers say their testing processes aren’t fully prepared to build and deploy AI agents.

• Skills and Knowledge: More than 80% of developers globally, and 74% in the UAE, believe AI knowledge will soon be a baseline skill for their profession, but over half don't feel their skillsets are fully prepared for the agentic era. Meanwhile, 56% of software development leaders in the UAE say they’ve introduced employee training on AI.

Survey respondents globally identified training on technical AI skills and redefining current roles as the most important areas for employers to provide support.

UIPATH LAUNCHES TEST CLOUD TO BRING AI AGENTS TO SOFTWARE TESTING

UiPath agentic testing can significantly reduce the 25% of IT budget typically dedicated to traditional software testing methods

UiPath, a leading enterprise automation and AI software company, announced the launch of UiPath Test Cloud, a revolutionary new approach to software testing that uses advanced AI to amplify tester productivity across the entire testing lifecycle for exceptional efficiency and cost savings.

Through Test Cloud, agentic testing for quality assurance teams becomes reality, equipping professionals with AI agents such as UiPath Autopilot and testing agents built with Agent Builder to act as collaborative partners throughout the testing lifecycle. In augmenting testers with AI, businesses can enable faster time to market, improve production stability, and deliver higher-quality software to customers.

Manual testing and test automation with legacy tools are costly, slow, and resource-intensive. In fact, one study found up to 25% of IT spend is dedicated to quality assurance and testing. Test Cloud represents a shift toward AI-augmented testing, focusing on collaboration between people and AI agents to enhance the complete testing process. It is designed to address both technical and personal challenges faced by testers in an increasingly complex software development landscape.

UiPath Test Cloud introduces agentic testing to quality assurance, engineering, and testing teams at any organization via:

• Autopilot for Testers: an out-of-the-box agent that harnesses a broad collection of built-in and customizable AI to accelerate the testing lifecycle, including agentic test design, agentic test automation, and agentic test management.

• Agent Builder: a toolkit for building custom AI agents tailored to unique testing needs, giving teams flexibility to create exactly what they need, when they need it, according to their own specifications.

“Agentic testing marks an exciting new era for companies to advance an area of their business that is still stubbornly manual and time intensive. With Test Cloud, testing teams engage interactively with AI agents that act like partners in collaborating, supporting, and working in tandem with testing professionals around the clock across the entire testing lifecycle,” said Gerd Weishaar, General Manager and Senior Vice President of Testing Products at UiPath. “Traditional testing is recognized by CIOs and CTOs as the biggest bottleneck to delivering new innovations to customers rapidly. Implementing agentic testing with Test Cloud enables faster time to market and improves production stability, which increases customer satisfaction and helps companies grow revenue.”

Capabilities of Test Cloud

UiPath Test Cloud is a full-featured testing offering that equips software testing teams with enterprise-ready, production-grade, resilient end-to-end automation for modern and enterprise applications as well as a deployment environment catered to the needs of testers.

Agentic testing extends, accelerates, and simplifies testers' work, increasing their productivity and job satisfaction. Together, Autopilot and Agent Builder form a powerful duo: built-in, customizable AI capabilities with Autopilot to get started fast, and the freedom and flexibility with Agent Builder to create the exact agents needed to accelerate testing. These agents are more than just conversational partners—they can perform tasks using tools teams equip them with, such as UI and API automations or even other agents.

With Test Cloud, organizations can unlock the benefits of a full-featured agentic testing offering:

• Resilient end-to-end automation and production-grade architecture: automate testing for any UI or API of modern web, mobile, and enterprise applications, such as SAP and Oracle, and leverage production-grade architecture with industry-certified secure application testing, auditing and role management, and centralized credentials

• Open, flexible, and responsible AI: enterprise-ready agentic testing capabilities are open, flexible, and responsible within CI/CD integrations, ALM integrations, version control, and webhooks. In addition, the UiPath AI Trust Layer ensures that agentic testing capabilities meet the highest standards of security, safety, and governance

• Powered by the UiPath Platform: with enterprise-wide automation, users can share and reuse components across teams, and utilize marketplaces, snippets and libraries, object repositories, and asset management

DRIVING BUSINESS CONTINUITY AND GROWTH

By embracing advanced technologies and best practices, organizations can safeguard their critical data, minimize downtime, and remain resilient in the face of unexpected disruptions.

In today’s digital-first world, where cyber threats, ransomware attacks, and simple human errors threaten business continuity, backup has evolved from an item in an IT checklist to a boardroom imperative. Robust backup strategies are now integral to operational resilience, regulatory compliance, and reputation management.

Industry leaders opine that modern backup must be secure, smart, resilient, and ready for the unexpected and unpredictable.

Backup, the Cornerstone of Resilience

Having a reliable backup used to be enough. Today, backup is just one element in a broader data resilience strategy. Modern data resilience strategies integrate backup with disaster recovery, cyber defense, continuous data replication, and immutability, ensuring businesses can maintain operations and meet compliance requirements even under attack or failure.

Rick Vanover, VP of Product Strategy at Veeam Software, highlights the critical role backups play in modern resilience planning.

He says, "Data backup is needed as part of an overall data resilience strategy. Every day, I hear stories about threats to good data—from user mistakes to natural disasters and cyberattacks."

Vanover stresses that organizations must go beyond a single solution and prepare for multiple failure scenarios.

"Always have a plan A, plan B, and a plan C. We can’t predict what will go wrong, but we can prepare ahead to prevent disaster and ensure rapid recovery."

Fred Lherault, Field CTO, EMEA / Emerging Markets at Pure Storage, makes an important observation that backups alone may not be enough to guarantee survival after an incident: "Backing up data remains critical, but implementing advanced protection capabilities—like immutability and rapid recovery—is what truly enables companies to withstand ransomware and cyberattacks."

In fact, a resilient strategy requires layered defenses—including backup, replication, disaster recovery, endpoint security, and continuous monitoring.

Recovery Speed and Backup Integrity Go Hand-in-Hand

Organizations often overlook an important fact: recovery time can be just as critical as backup integrity. Slow recovery can mean prolonged downtime, financial losses, reputational damage, and even regulatory penalties, making speed and efficiency just as vital as data protection itself.

Fred points out, "Reliable backups have limited effectiveness if operations can't be restored quickly. The most advanced flash-based storage solutions can recover hundreds of terabytes per hour, allowing organizations to bounce back in hours, not weeks."

In sync with this perspective, Ziad Nasr, General Manager – Acronis Middle East, says that backup is a living process, not a one-time exercise.

He adds, "Protecting data is not a one-time task—it’s a continuous responsibility. In an age of hardware failures, cyber threats, and accidental deletions, tested backups are essential."

Testing recovery speed regularly—and ensuring backups are verified and secure—is crucial. This is because, in the event of a security incident or failure, untested or compromised backups could fail when they are needed most, endangering business continuity and delaying critical operations.

Nasr adds, "At Acronis, we emphasize not only creating backups but also verifying them and ensuring they are kept secure. Backups are the cornerstone of a solid cybersecurity strategy."
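
As a simple illustration of what automated backup verification can look like in practice, the sketch below restores a sample to a scratch location and compares file hashes against a manifest captured at backup time. The paths and manifest format are hypothetical, not a specific product's mechanism.

```python
# Illustrative sketch only: verify a test restore by comparing SHA-256 hashes of
# restored files against a manifest captured at backup time. The paths and the
# manifest format are hypothetical, not a specific product's mechanism.
import hashlib
import json
from pathlib import Path

def sha256(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(restore_dir: Path, manifest_file: Path) -> bool:
    """Return True only if every file in the manifest was restored with a matching hash."""
    manifest = json.loads(manifest_file.read_text())   # {"relative/path": "sha256", ...}
    ok = True
    for rel_path, expected in manifest.items():
        restored = restore_dir / rel_path
        if not restored.exists() or sha256(restored) != expected:
            print(f"FAILED: {rel_path}")
            ok = False
    return ok

if verify_restore(Path("/tmp/restore-test"), Path("/tmp/backup-manifest.json")):
    print("Restore test passed: backup verified.")
```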

The Hidden Risk

As businesses expand their digital footprint, "forgotten" data stored on inactive devices and cloud services becomes an invisible liability.

Rob T. Lee, Chief of Research at the SANS Institute, warns that “Data doesn’t just disappear. Today, it lives across cloud services, phones, laptops, USB drives—places we barely think about. Forgotten data poses a serious risk."

Organizations must adopt proactive strategies to manage this data sprawl.

"Organizations must create full data inventories, build data disposal into offboarding and system upgrades, and automate retention and destruction policies to avoid vulnerabilities, " adds Lee.

Without proper oversight, dormant data can become a soft target for attackers, enhancing the risk during breaches or insider incidents.

Fred Lherault
Field CTO, EMEA / Emerging Markets, Pure Storage
Ziad Nasr
General Manager, Acronis Middle East
Rob T. Lee
Chief of Research, SANS Institute

Backup Strategies Must Assume Breach

Today’s cyber threat environment is so hostile that backup planning must assume that breaches are inevitable.

Ezzeldin Hussein, Regional Senior Director, Solution Engineering, META, SentinelOne, says, "Data loss isn’t a question of 'if'— it’s 'when.' Businesses must ensure their critical data is secure, recoverable, and resilient against relentless cyber threats."

He points out that layering defenses, such as cloud storage, offline archives, and immutable backups, is no longer optional but necessary for true resilience.

Harish Chib, Vice President Emerging Markets, Middle East & Africa at Sophos, highlights a chilling trend: attackers now explicitly target backups during ransomware attacks.

"According to our State of Ransomware report, 94% of ransomware attacks attempt to compromise backups, and 57% are successful."

He says organizations have no choice but to bolster their backup protection:

"Organizations must protect backups with strong security measures like encryption, endpoint protection, 24/7 monitoring, and a practiced incident response plan."

Backups must be as well protected as the production environment they are meant to safeguard. If backups are compromised, the entire recovery process can be rendered useless, leaving the organization vulnerable to prolonged downtime or even complete data loss.

RPO and RTO

Without clear recovery objectives, even the best backup systems can fail to deliver when they are needed most.

Ram Vaidyanathan, Chief IT Security Evangelist at ManageEngine, explains:

"Organizations need to define their Recovery Point Objectives (RPO) and Recovery Time Objectives (RTO) to accurately assess risk tolerance and ensure business continuity."

He stresses that a backup strategy must be more than just copying data—it must align with broader resilience and compliance goals. "Regular, automated backups, offsite copies, encryption, and compliance readiness must all work together to minimize operational disruptions and financial risks."

Without these objectives clearly defined, organizations risk losing critical data—or valuable time—when disaster strikes.
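
As a back-of-the-envelope illustration (all numbers are hypothetical), the sketch below checks whether a backup interval satisfies an RPO target and estimates RTO from data volume and restore throughput, tying back to the point that restore speed matters as much as the backup itself.

```python
# Illustrative sketch only: sanity-check a backup schedule and restore throughput
# against stated RPO/RTO targets. All numbers are hypothetical.

def meets_rpo(backup_interval_hours: float, rpo_hours: float) -> bool:
    # Worst-case data loss equals the time elapsed since the last completed backup.
    return backup_interval_hours <= rpo_hours

def estimated_rto_hours(data_tb: float, restore_tb_per_hour: float) -> float:
    # Ignores failover and validation time; restore throughput dominates at scale.
    return data_tb / restore_tb_per_hour

print(meets_rpo(backup_interval_hours=4, rpo_hours=1))            # False: back up more often
print(estimated_rto_hours(data_tb=300, restore_tb_per_hour=150))  # 2.0 hours of restore time
```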

Backup is Now a Boardroom Priority

To sum up, backup is no longer just a routine technical task for IT departments but a strategic, business-critical priority that underpins an organization's resilience in the face of cyberattacks. As cyber threats grow more aggressive and data environments become more fragmented, organizations need to rethink traditional backup strategies and treat backup resilience as a top leadership priority.

Ezzeldin Hussein
Regional Senior Director, Solution Engineering, META, SentinelOne
Harish Chib
Vice President Emerging Markets, Middle East & Africa, Sophos
Ram Vaidyanathan Chief IT Security Evangelist, ManageEngine


BACKUP STRATEGIES FOR ENABLING BUSINESS RESILIENCE

Businesses should redefine their approach to business continuity planning, viewing it not as a mere operational necessity but as a strategic driver of business resilience and growth.

Business continuity today isn’t just an IT checklist item but rather a boardroom priority, integral to an organization’s survival strategy. As technology evolves and risks escalate, backup and disaster recovery (DR) have had to rapidly evolve too. Backup strategies are being redefined by cloud innovation, AI-powered resilience, and the relentless rise of cyber threats.

In this context, modern enterprises need to look at architecting their business continuity plans not as routine backend operations but as critical business enablers.

Mustansir Aziz, Head of IT, Automech Group, says, “Backup and disaster recovery (DR) have transformed from a routine IT task to a core component of business resilience. The pandemic underscored the need for reliable data access and system uptime, even in disrupted environments. The surge in ransomware attacks has forced organizations to focus on immutability, faster recovery, and zero-trust security. Cloud-native DR solutions and automation have made DR more agile, cost-effective, and easier to test.”

Reflecting on how backup strategies have evolved, Dr. Shijin Prasad, Group IT Head, Elyzee Healthcare Group (UAE & Saudi Arabia) offers his perspective on how it could further change.

“Over the past few years, backup and DR solutions have evolved significantly with cloud-based platforms, hybrid strategies, and AI/ML integration leading the way. Predictive analytics and self-healing systems have enhanced data protection. Looking ahead to 2025, DRaaS (Disaster Recovery as a Service) will dominate, offering end-to-end cloud-based instant recovery capabilities. Emerging technologies like blockchain and quantum computing will further strengthen data integrity and accelerate recovery times.”

Backup and DR are indeed no longer just operational concerns but strategic pillars to future-proof enterprises.

Business Continuity at Board Level

The question is no longer whether business continuity is a priority, but how high it ranks on the leadership agenda.

Mustansir elaborates, “Business continuity is now a top concern for C-level executives and board members. It’s about maintaining customer trust, meeting compliance requirements, and protecting the company’s reputation. Prolonged downtime can have severe financial and reputational consequences.”

Particularly in sensitive sectors like healthcare, this priority is even more pronounced.

According to Shijin, “In the healthcare sector, business continuity is increasingly seen as a top priority at the board and C-suite level. Boards view it as a key component of risk management, directly impacting patient safety, compliance, and financial stability.”

Business leaders and boards are realizing that IT resilience is critical for the business itself — protecting reputation, revenue, customer trust, and compliance. This close tie between technology and core business strategy marks a significant cultural shift across many industries.

On-Prem, Cloud, or Hybrid?

One recurring theme is that no single storage model can address all business needs, and against this backdrop, hybrid models are gaining dominance. While sensitive and mission-critical data often remains on-premises for greater control and security, organizations are leveraging cloud environments for scalability, rapid recovery, and redundancy.

Saravana Kumar, Senior IT Manager at BITS Pilani Dubai campus, says, "The best storage infrastructure to support modern business continuity is a hybrid storage infrastructure, which combines on-premises and cloud solutions. This approach ensures data redundancy, resilience, disaster recovery, scalability, and cost optimization. Considering the latest threat trends, it is highly recommended to adopt a hybrid backup infrastructure to achieve optimal data resilience and business continuity."

The hybrid model offers a flexible approach that allows organizations to tailor their continuity strategies to evolving risks without compromising performance or regulatory requirements.

Mustansir agrees that a hybrid model offers the best of both worlds. He says, “Hybrid storage solutions strike a balance between on-premises control and cloud scalability. Sensitive data may stay on-prem for compliance reasons, while backups and DR leverage the cloud for redundancy and flexibility.”

Speaking from a healthcare perspective, Shijin elaborates, “Hybrid storage is increasingly favoured in healthcare. It allows storage of sensitive patient records on-premises for compliance while utilizing cloud storage for disaster recovery and non-sensitive data. It provides the best balance of compliance, control, and disaster recovery capabilities.”

Emerging Technologies: SDS and STaaS

Software-defined storage (SDS) and Storage as a Service (STaaS) are redefining how organizations approach scalability, resilience, and agility.

Mustansir explains, “Software-defined storage and Storage as a Service bring unmatched agility and scalability. By decoupling hardware from software, they enable seamless expansion, automated tiering, and policy-driven management. STaaS, in particular, reduces upfront costs with its OPEX model and supports rapid provisioning and geo-distribution.”

Again from a healthcare perspective, Shijin reinforces this standpoint. He says, “SDS allows healthcare providers to adapt rapidly to changing data storage needs without being locked into specific hardware. STaaS ensures on-demand scalability, high availability, and robust disaster recovery capabilities, making it crucial for operational efficiency and uninterrupted care delivery.”

These technologies make it easier for enterprises to scale, diversify risk, and respond rapidly to emerging threats without the traditional constraints of hardware dependencies.

Dr. Shijin Prasad
Group IT Head, Elyzee Healthcare Group (UAE & Saudi Arabia)
Mustansir Aziz Head of IT, Automech Group

Cloud DR Versus Traditional DR

Disaster recovery itself is evolving, thanks to cloud technologies.

According to Mustansir, “Traditional DR often requires duplicate hardware, high capital costs, and manual testing. Cloud DR, by contrast, offers pay-as-you-go pricing, faster recovery, and simplified testing. It eliminates the need to maintain physical DR sites and enables automated, application-aware recovery across multiple regions.”

Cloud DR not only reduces costs but significantly enhances the speed and flexibility of recovery, which is critical when minutes of downtime can translate into millions in losses.

Meeting Stringent RPO and RTO Targets

Recovery Point Objective (RPO) and Recovery Time Objective (RTO) are vital for business continuity success.

Mustansir stresses, “RPO and RTO are critical for setting realistic expectations across IT and business teams. Mission-critical systems like ERP platforms, customer transaction databases, and healthcare records demand near-zero RPOs and RTOs. Defining these metrics helps prioritize investments in infrastructure and automation to meet SLAs.”

Organizations must classify workloads based on their criticality and assign RPO/RTO targets accordingly to ensure that vital services recover first during a disruption.
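
One simple way to express such a classification (tier names and targets below are purely illustrative, not a prescribed standard) is a mapping from recovery tiers to RPO/RTO targets that drives the order of restoration.

```python
# Illustrative sketch only: map recovery tiers to RPO/RTO targets so the most
# critical workloads are restored first. Tier names and targets are hypothetical.
RECOVERY_TIERS = {
    "tier-0": {"examples": ["payments", "EHR", "ERP"],  "rpo_minutes": 5,    "rto_minutes": 15},
    "tier-1": {"examples": ["CRM", "email"],            "rpo_minutes": 60,   "rto_minutes": 240},
    "tier-2": {"examples": ["archives", "analytics"],   "rpo_minutes": 1440, "rto_minutes": 2880},
}

def restore_priority():
    """Tiers sorted by RTO, tightest first, so vital services come back before the rest."""
    return sorted(RECOVERY_TIERS, key=lambda tier: RECOVERY_TIERS[tier]["rto_minutes"])

print(restore_priority())   # ['tier-0', 'tier-1', 'tier-2']
```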

Ransomware Protection in Storage Strategies

Given the surge in ransomware attacks, organizations must treat backups as a sacred last line of defense.

Mustansir elaborates, “It’s absolutely critical to incorporate ransomware protection. Strategies like immutable storage, air-gapping, and anomaly detection are now standard. Regular backup validation and offline retention policies ensure recoverability even if production systems are compromised.”

Similarly, Saravana Kumar advises maintaining traditional offline backup methods, such as tape drives, to counter ransomware encryption attacks. He highlights, “Offline backups are essential to overcome ransomware attacks. Adopting a Zero Trust security model can effectively prevent breaches.”

Immutable backups, air-gapped copies, and Zero Trust frameworks are now non-negotiable in any serious DR plan.
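
For example, on an S3-compatible object store with Object Lock enabled, an immutable retention window can be applied when a backup object is written. The sketch below is illustrative only, with hypothetical bucket, key, and file names; it is one way to achieve immutability, not any specific vendor's method.

```python
# Illustrative sketch only: write a backup object with a WORM retention lock to an
# S3-compatible store whose bucket was created with Object Lock enabled.
# Bucket, key, and file names are hypothetical.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")

with open("db-backup-2025-04-01.tar.gz", "rb") as body:
    s3.put_object(
        Bucket="backups-immutable",                 # bucket must have Object Lock enabled
        Key="daily/db-backup-2025-04-01.tar.gz",
        Body=body,
        ObjectLockMode="COMPLIANCE",                # retention cannot be shortened or removed
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
    )
```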

Preparing for the Unpredictable

Organizations today must plan for the unknown, not just the likely.

Saravana Kumar recommends, "Organizations must embrace a multilayered backup strategy and implement multi-level disaster recovery sites with diversified cloud DR providers. They should integrate AI/ML tools to detect anomalies, automate responses, conduct regular resilience testing, and establish clear incident response and communication protocols."

Adding to this, Mustansir points out, “Preparing in advance against unpredictable threats requires diversification and proactive planning. Organizations should spread data across locations and providers, conduct Business Impact Analyses for non-technical risks, and run scenario-based exercises. AI-driven threat modeling can provide early warnings, but the foundation is a mindset of continuous improvement, designing for failure, and learning from incidents.”

Both experts converge on the need for proactive, diversified, and continuous resilience planning, which they see as the only defense against unpredictable threat scenarios.

Need for Orchestration and Automation in DR Management

Especially in critical sectors like healthcare, automated DR orchestration is now vital.

Shijin elaborates, “Deploying orchestration and automation tools for DR drills and failovers is essential in healthcare. These tools simplify the DR process, minimize downtime, and ensure uninterrupted care. Automated DR drills allow IT teams to test recovery strategies regularly without manual effort, while orchestration ensures vital systems like electronic health records (EHR) are restored in the correct sequence.”

Automation eliminates human errors, speeds up recovery, and ensures compliance, making it a critical investment for industries where every second matters.

Resilience, Now an Enterprise-wide Priority

Experts seem to have a shared message that business continuity has become a strategic enterprise-wide priority. Hybrid infrastructures, cloud DR, ransomware-proof storage strategies, AI-driven resilience planning, and automation will be key to defining the next era of organizational survival and growth.

Organizations that embrace these principles can not only survive future disruptions but emerge stronger, more agile, and more trusted by their customers, partners, and stakeholders.

Saravana Kumar
Senior IT Manager, BITS Pilani Dubai campus

MAKING AI ACTIONABLE AND ACCESSIBLE

Prasanna Rajendran, Vice President – EMEA, Kissflow discusses how their low-code/ no-code platform is battle-tested, future-ready, and built for the realities of enterprise digital transformation.

How do you make a case for technology leaders to approach AI from the lens of low-code and no-code?

It’s hard to have a conversation in tech today without AI coming up—and rightly so. But what often gets overlooked is that AI on its own isn’t a silver bullet. It’s how we make AI accessible and actionable that really matters. That’s where low-code and no-code come in.

At Kissflow, we’ve always believed that software should be accessible to everyone. And we see AI as not just a cool add-on but as a core enabler of smarter and faster development. Think of it this way: traditionally, using AI required deep technical knowledge, specialist teams, and long development cycles. But with low-code and no-code, we’re reducing that complexity. We’re putting AI capabilities directly into the hands of business users, empowering them to automate decisions, analyse data, and optimise workflows without needing to write a single line of code.

The beauty of this approach is that it democratizes software development. Instead of being something reserved for tech-savvy professionals, the low-code no-code AI combination becomes a tool for HR managers, finance teams, and operations leaders to solve real-world problems in real time. That’s incredibly powerful—especially in the Middle East, where digital transformation is happening at speed and scale. Low-code and no-code allow organisations to embed AI in their day-to-day processes without being held back by IT bottlenecks or talent shortages.

Once you’ve got business and IT leaders to understand the low-code value proposition, how do you then position Kissflow given that this is a crowded space with large enterprise players?

That’s a great question, and one we welcome—because this is exactly where Kissflow shines. Yes, the space is crowded, but not all platforms are created equal. Many solutions either focus purely on simplicity, making them too lightweight for enterprise use, or they swing too far the other way and end up being so complex that they require significant IT involvement.

Kissflow is different. We’ve built a platform that’s intuitive enough for business users to embrace from day one, while still being robust enough to meet the demands of complex enterprise environments. What sets us apart is this ability to walk the line between ease-of-use and deep functionality. We also offer seamless integration with core enterprise systems like SAP, Oracle, and Microsoft—something that’s critical for businesses in this region where legacy infrastructure still plays a central role.

And then there’s trust. Kissflow has been in the low-code/no-code game from the beginning. We’re not jumping on the bandwagon—we helped build it. Our platform has evolved with our customers' needs, and that consistency counts for a lot in a market where longevity and reliability really matter.

So while the market may be crowded, we’re not just another face in the crowd. We’re leading with a product that’s battle-tested, future-ready, and built for the realities of enterprise digital transformation.

With GISEC taking place in the region this month, security is front and centre in IT conversations. Can you talk us through the security considerations of giving non-IT staff the freedom to create applications? Is this a secure approach to development?

It’s a fair concern—and a common one. Giving non-IT staff the power to build applications does raise important questions around governance and security. But when done properly, with the right platform and guardrails in place, it can actually improve an organization’s overall risk management.

At Kissflow, we’ve always designed our platform with enterprise-grade security baked in. That means role-based access controls, audit trails, encryption, and compliance with international standards like ISO 27001 and SOC 2. What this allows us to do is create a secure sandbox where business users can innovate without putting sensitive data or systems at risk.

The key is that IT doesn’t step away—it steps into a new role as an enabler. IT teams still define the rules, set permissions, and oversee integrations. But they’re no longer the bottleneck. Instead, they become the architects of a secure, scalable development environment that empowers the broader organisation.
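As a simplified illustration of that division of labour, the sketch below (not Kissflow's actual implementation) shows IT-defined role permissions enforced on every action, with each attempt, allowed or not, recorded to an audit trail. Role names and actions are hypothetical.

```python
# Illustrative sketch: centrally defined role permissions plus an audit trail.
from datetime import datetime, timezone

ROLE_PERMISSIONS = {            # defined and maintained by IT, not by app builders
    "hr_manager": {"create_app", "edit_workflow"},
    "finance_user": {"submit_request"},
}
AUDIT_LOG: list[dict] = []

def perform(user: str, role: str, action: str) -> bool:
    """Allow the action only if the user's role permits it; log every attempt."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    AUDIT_LOG.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user, "role": role, "action": action, "allowed": allowed,
    })
    return allowed

print(perform("amira", "hr_manager", "create_app"))      # True: within role
print(perform("omar", "finance_user", "edit_workflow"))  # False: denied, but still audited
```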

This approach is especially relevant in the Middle East, where data sovereignty, compliance, and uptime are non-negotiables. By giving non-technical teams secure tools to build what they need without bypassing IT, we’re helping organisations accelerate digital transformation in a responsible and resilient way.

How has your message of democratising app development within a secure framework been received by the Middle East market? What are your plans for growth and regional expansion?

The response has been overwhelmingly positive. There’s a real appetite in the Middle East for platforms that can keep up with the pace of change, and Kissflow hits that sweet spot. Organisations here are facing mounting pressure to innovate quickly, but they also operate in environments where compliance, scale, and reliability are absolutely critical. When they see that Kissflow can deliver on both fronts—ease of use and enterprise-grade security—it resonates strongly.

We’ve found a natural home in the UAE, particularly in Dubai, where the drive for digital transformation aligns perfectly with our mission to democratise development. We set up our office in Dubai Internet City and have already built a solid customer base across sectors like banking, logistics, government, and retail. Looking ahead, we’re planning to double our team in Dubai to meet growing demand and deepen our local engagement.

Saudi Arabia is another exciting growth frontier for us. Vision 2030 is accelerating digital transformation at every level—from government services to private enterprise—and we’re seeing more and more organisations turn to low-code as a way to scale innovation across departments without overburdening their IT teams.

Ultimately, our focus is on being where the need is greatest. The Middle East is emerging as one of the fastest adopters of low-code/no-code, and we’re committed to growing with the region—supporting its digital ambitions with a platform built for what comes next.

"We’ve built a platform that’s intuitive enough for business users to embrace from day one, while still being robust enough to meet the demands of complex enterprise environments. What sets us apart is this ability to walk the line between ease-of-use and deep functionality."

AGENTIC AI: THE NEXT LEAP IN AUTONOMOUS DECISION-MAKING

Francesco Colavita, VP Presales Consulting, JAGGAER discusses how Agentic AI is transforming industries by enabling smarter, faster, and more cost-effective decision-making

How is agentic AI different from other types of AI, like GenAI?

Agentic AI is a type of artificial intelligence designed to operate independently, making decisions and completing tasks without constant human oversight. Unlike traditional AI systems, which rely heavily on pre-programmed rules, agentic AI adapts dynamically to new data and evolving environments. It’s also distinct from generative AI, which primarily creates content or predictions, as agentic AI focuses on achieving specific goals through autonomous action and collaboration with human users. For example, our integration of agentic AI into the JAGGAER One platform enables businesses to optimise procurement across the Source-to-Pay process by leveraging several domain-specific AI agents, each streamlining a specific part of that process.

In what ways does agentic AI make autonomous decisions, and where do the limits of its "agency" lie?

Agentic AI achieves autonomy by combining data analysis, machine learning, and feedback loops. It processes data from multiple sources, identifies patterns, and uses this insight to execute tasks like supplier negotiations or contract management. Machine learning enables it to refine its decision-making over time, while feedback mechanisms ensure continuous improvement.

However, its autonomy is bounded by predefined rules set by human operators, ensuring it operates within ethical, legal, and strategic constraints. For example, it must comply with procurement regulations and respect organisational risk tolerance.

The effectiveness of agentic AI may also be limited by how well it integrates with existing systems and processes. Just as a human employee couldn’t effectively complete a task without access to the necessary resources, so too an AI agent would be limited by such constraints.
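As a rough illustration of how that bounded autonomy can be enforced in practice, the Python sketch below pairs a stand-in "agent decision" with a guardrail layer that checks human-defined constraints before anything executes. The thresholds, supplier lists, and decision logic are entirely hypothetical and are not drawn from JAGGAER's implementation.

```python
# Minimal sketch of bounded agency: the agent proposes an action, but a
# guardrail layer enforces human-defined constraints before anything runs.
# All rules, suppliers, and thresholds here are hypothetical.
APPROVED_SUPPLIERS = {"supplier_a", "supplier_b"}
AUTO_APPROVE_LIMIT = 10_000  # purchases above this need human sign-off

def guardrail_check(action: dict) -> tuple[bool, str]:
    if action["supplier"] not in APPROVED_SUPPLIERS:
        return False, "supplier not on approved list"
    if action["amount"] > AUTO_APPROVE_LIMIT:
        return False, "amount exceeds autonomous approval limit"
    return True, "within policy"

def agent_step(observation: dict) -> dict:
    # Stand-in for the model's reasoning: reorder when stock runs low.
    if observation["stock"] < observation["reorder_point"]:
        return {"type": "purchase_order",
                "supplier": observation["preferred_supplier"],
                "amount": observation["reorder_cost"]}
    return {"type": "no_action"}

def run_agent(observation: dict) -> str:
    action = agent_step(observation)
    if action["type"] == "no_action":
        return "nothing to do"
    allowed, reason = guardrail_check(action)
    return f"executed: {action}" if allowed else f"escalated to human ({reason})"

print(run_agent({"stock": 5, "reorder_point": 20,
                 "preferred_supplier": "supplier_a", "reorder_cost": 2_500}))
```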

Can you share examples of how agentic AI can increase efficiency and decision-making in various industries?

Across sectors, Agentic AI is transforming procurement by enabling smarter, faster, and more cost-effective decision-making. Take logistics, for example, where AI can evaluate supplier relationships to ensure partnerships that are both reliable and cost-effective. It also predicts inventory needs to avoid overstocking or shortages and monitors external factors like weather to provide early alerts about potential supply chain disruptions, allowing teams to stay ahead of issues.

In healthcare, AI supports supplier selection by analysing performance and cost metrics, predicts the demand for medical supplies to optimise inventory management, and identifies cost-saving opportunities by reviewing spending patterns. It also enhances contract management by automatically detecting risks and summarising terms, enabling faster and more informed decisions.

In finance, we’ve seen JAGGAER clients benefit significantly from guided buying features that ensure compliance with pre-agreed frameworks, improving both accuracy and speed. Beyond this, AI strengthens supplier risk assessments by evaluating financial stability, automates the review of contracts to ensure regulatory compliance, and streamlines routine tasks like creating and approving purchase orders, accelerating the procurement process.

Are there any ethical concerns around the technology?

A key ethical concern could be accountability, as determining responsibility for AI-driven decisions can be complex, especially when outcomes deviate from expectations.

Job displacement is another challenge. Automating entire processes has greater potential to cause job displacement. Organisations must manage these transitions responsibly, providing retraining and support to affected employees. Transparency in decision-making and adherence to legal and ethical standards are also critical to ensure agentic AI benefits society without compromising trust, fairness, or morale.

What are some of the roadblocks organizations face when it comes to deploying agentic AI?

A fundamental challenge is aggregating high-quality, real-time data from diverse sources, without which it’s not possible to draw reliable insights. Once the data challenge is solved, organisations have to set about creating models capable of handling complex decision-making criteria, like balancing cost with quality. Such operations require sophisticated algorithms. However, all this complexity needs to be abstracted from users via intuitive interfaces. Additionally, the AI systems must adapt to evolving market trends and organisational strategies, which demands continuous learning capabilities.

Francesco Colavita VP Presales Consulting, JAGGAER

EPICOR’S INDUSTRY CLOUD VISION

Vibhu Kapoor, Regional Vice President - Middle East, Africa & India, explains how industry-specific cloud ERP, embedded AI, and strong partner collaboration are driving a new wave of digital transformation across the region’s manufacturing sector.

While most ERP vendors on the market have a cloud offering, Epicor stresses the importance of “industry cloud.” Can you elaborate on the distinction and what this means for Middle East manufacturers?

Great question. “Industry cloud” isn’t just marketing speak. It means our ERP solutions are not general-purpose—they’re built from the ground up with manufacturers in mind. That’s why our platform understands concepts like make-to-order workflows, complex bills of materials, or inventory forecasting in real-world production environments. That’s critical when you’re working with, say, a precision parts manufacturer in Jeddah or a food processor in Sharjah.

In the Middle East, where many manufacturers are family-run, deeply rooted in tradition but looking to modernise, the ability to deploy ERP that fits like a glove—not a box they need to force themselves into—is a game-changer. With Epicor, they’re not adapting to the software; the software adapts to them.

Are manufacturers in the Middle East mature enough to embrace full-scale ERP transformation?

The appetite is definitely there—but it’s pragmatic. Many businesses here are still running older systems, often customised beyond recognition. So we’re not just pitching cloud—we’re guiding a journey. That might start with hybrid deployment or focusing on a single area like financials or supply chain visibility. But the long-term vision is always full digital transformation.

Interestingly, we’re seeing generational shifts in leadership across industrial firms. The younger decision-makers are tech-savvy and cloud-native in mindset, but they still want to see ROI. They’re asking: “Can this reduce downtime? Can it improve forecast accuracy?” That’s where our cloud ERP platform—with embedded AI and real-time data insights—gives them answers.

AI is obviously top of mind for all vendors and businesses. How does Epicor leverage AI?

That’s where Epicor Prism—which we launched in the US market earlier this year, and intend to introduce soon in the Middle East—comes in. With this solution, which is already deployed for some real-world customers, we’ve introduced AI not as a layer on top of ERP but within the core of how people work. It’s like having a subject-matter expert built into your system. Want to know which product lines have the highest margin trends? Ask Prism. Need a recommendation on inventory levels during Ramadan demand fluctuations? Prism can advise.

Vibhu Kapoor
Regional Vice President - Middle East, Africa & India, Epicor

In the Middle East, where many firms don’t have in-house data science teams, that kind of accessibility is key. We’ve made it conversational, intuitive, and practical. It’s AI you don’t need a PhD to use—which, frankly, is how it should be.

What advice would you give to your partners to help them drive better customer outcomes?

Simple: bring your industry knowledge and your customer relationships—we’ll bring the platform, the innovation, and the commitment to help you scale. Epicor is not a vendor that treats partners like middlemen. We see them as co-creators of value.

Our channel partners would attest to the fact that we see them as a seamless extension of our own team. As we continue to scale the Epicor economy in the region, we’re actively looking to grow with like-minded partners who share our long-term vision. This isn’t just about signing new deals—it’s about building enduring partnerships that drive results for our mutual customers.

And, importantly, we offer choice. Some customers in the Middle East are ready for public cloud. Others need private, hybrid, or even on-prem options. With Epicor, our partners don’t have to walk away from those deals—they can offer the flexibility clients are looking for, without compromising on innovation.

RECIPE FOR RELEVANCE

Vidura Gamini Abhaya - Vice President, Solutions Architecture at WSO2 discusses how API governance can empower today’s decentralized teams

If you work in software development, you will definitely have noticed that your role has changed and that the way you design, build, and test has become more and more frenetic. The United Arab Emirates is a national microcosm of this worldwide trend in which consumers expect digital experiences to launch and update almost daily. Digital experience builders in UAE therefore find themselves under increasing pressure to get projects out the door quickly, which means they need to accelerate the development lifecycle. When they set out to create the next best user experience, developers do not create every component themselves. This, at least, is nothing new. The modern development team rarely has the inclination to code complex functionality like recommended purchases or a payments gateway from scratch. Instead, they opt for an API.

The application programming interface (API) has been around for decades and has been through its own evolution from being an in-memory interface between components to an interface across the network for applications. But what is changing is the way API usage is managed. Before decentralized software development itself was even a concept, most software development team leaders could count on one hand the number of APIs being used in their organization. Today, multiple teams could be publishing and/or consuming multiple APIs. Following 2020’s mass cloud migration, IT architecture became more decentralized. API management has had to become decentralized to account for this, leading to a reimagination of API governance to accommodate changing industry standards and the need for security and compliance.

The Evolution of API Management

In the early years, governance was built into APIs through standard formats like Web Service Description Language (WSDL) and XML Schema Definition (XSD). The pressures of the modern market to scale, scale, scale meant the old ways were too restrictive. Centralized architectures were replaced by layered architectures, and now we must take the next step and decentralize our software-development ecosystems. Leaving stricter standards behind, we must move to more flexible approaches supported by more straightforward architecture. Representational State Transfer (REST) is a lightweight system built for the microservices age – agile, decentralized, and ideal for collaboration. But it brings up questions about governance that we must address.

Modern enterprise software development often comprises several teams – voracious consumers and prolific builders of APIs. But, without proper oversight, inconsistencies in design, data formats, and documentation could lead to integration problems. The UAE has a robust regulatory framework aimed at the protection of businesses and consumers. Any digital product, including an API, must comply with laws of both land and industry, so every published tool must be built around strict encryption standards and authentication policies. But decentralized teams, plagued by all the challenges associated with siloed operations – duplicated labor, missed opportunities for reuse, and so on – are hard-pressed to deliver consistently on compliance.

We all know the adage that compares innovation with lemonade. The lemons of decentralization can be squeezed and sweetened to produce a collaborative, flexible environment where teams are empowered and their outputs are secure and compliant while also well-received by users. To get there, decentralized teams must work together on the shared API governance framework. They must think together about design, documentation, testing, versioning, and lifecycle management. Special attention should be paid to security-by-design principles, from encryption and tokenization to authentication and auditing.
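By way of illustration, design-time governance can be as simple as automated checks that every published specification follows the shared rules. The sketch below assumes an OpenAPI-style specification already loaded as a Python dictionary; the rules themselves are examples, not an industry checklist.

```python
# Minimal design-time governance sketch: lint an OpenAPI-style spec (as a dict)
# against a few shared rules. The rules shown are illustrative only.
def lint_spec(spec: dict) -> list[str]:
    findings = []
    if not spec.get("components", {}).get("securitySchemes"):
        findings.append("no security scheme declared (authentication policy)")
    for path, methods in spec.get("paths", {}).items():
        for method, operation in methods.items():
            if not operation.get("description"):
                findings.append(f"{method.upper()} {path}: missing description")
            if not operation.get("security") and not spec.get("security"):
                findings.append(f"{method.upper()} {path}: no security requirement")
    return findings

example_spec = {
    "paths": {"/payments": {"post": {"description": "Create a payment"}}},
    "components": {},
}
for issue in lint_spec(example_spec):
    print("governance finding:", issue)
```

Run in a shared pipeline, checks like these give every team the same guardrails without a central committee reviewing each release.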

Autonomy vs. accountability

Decentralized development teams must acknowledge the need for autonomy while at the same time embracing the need for accountability. They should also look for ways to integrate automation and collaboration tools (automated testing frameworks and management platforms, for example) to guarantee consistent enforcement. Similar automation can be applied to deployment activities such as version control and documentation.

Much thought has been devoted over the years to the optimum size of a business team. We appear to have settled on something between four and eight people – the so-called two-pizza team. The good news is that teams of this size are easier to govern and are more likely to have a sense of ownership in governance programs. API governance can step in at design-time and runtime. Design-time governance is concerned with the application of standards and best practices and covers planning, design, and development stages. Runtime governance focuses on monitoring, controlling, and enforcing policies as the API receives live calls. Runtime policies can help with, among other things, preventing distributed denial of service (DDoS) attacks. The two-pizza team is more likely to understand the guidelines, as they will have had greater influence on their authorship.
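Runtime governance can likewise be illustrated with a small sketch: a token-bucket limiter that throttles calls per client is one of the simplest policies used to blunt the kind of traffic floods mentioned above. The rates and client names below are arbitrary.

```python
# Runtime-governance sketch: a token-bucket limiter that throttles API calls
# per client, one of the basic controls behind DDoS and abuse protection.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate, self.capacity = rate_per_sec, burst
        self.tokens, self.updated = float(burst), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

buckets: dict[str, TokenBucket] = {}

def handle_request(client_id: str) -> str:
    bucket = buckets.setdefault(client_id, TokenBucket(rate_per_sec=5, burst=10))
    return "200 OK" if bucket.allow() else "429 Too Many Requests"

results = [handle_request("client-a") for _ in range(12)]
print(results.count("200 OK"), "allowed,", results.count("429 Too Many Requests"), "throttled")
```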

In an API management environment optimized for decentralized teams, AI can play a significant (shall we say “central”?) role in creation and deployment. Generative AI can even be set to work composing specifications and automated deployment scripts, taking a lot of labor burden away from human developers and accelerating all-important GTM metrics. AI can take responsibility for the gamut of tedium traditionally associated with coding (including documentation and adherence to best practices), leaving software professionals to dedicate more brain bandwidth to innovation. And at runtime, AI can recommend relevant APIs to a developer in the same way ecommerce sites recommend purchases to consumers – based on past behavior. By analyzing API traffic, AI can even help identify unusual behavior that could be the preamble to a breach.

A recipe for relevance

Governance, while a dry topic, stands as a bodyguard between the organization and its market, protecting it from legal slings and brand-busting arrows. API governance has evolved from a dusty rulebook to a juiced-up enabler of decentralized API management. It is nothing less than the key to distinguishing oneself in a crowded market – a recipe for relevance.

"The UAE has a robust regulatory framework aimed at the protection of businesses and consumers.
Any digital product, including an API, must comply with laws of both land and industry, so every published tool must be built around strict encryption standards and authentication policies."

WORK MANAGEMENT SOLUTION FOR SECURITY TEAMS

Firas Jadalla, Regional Director – Middle East, Turkey & Africa, Genetec Inc discusses how collaboration is made easy using a work management platform for security teams

Effective collaboration between security operators, teams, and other departments is essential for the smooth functioning of any organization. However, as organizations grow in complexity, it becomes increasingly challenging for teams to coordinate. Factors such as staffing shortages, high turnover rates, and outdated collaboration tools exacerbate these challenges.

When staff rely on multiple disconnected tools for dispatch, reporting, and task tracking, operations often become fragmented, leading to delays and gaps in communication. In critical areas like safety and security, these inefficiencies can have serious consequences.

Work management solutions bridge these gaps by managing, tracking, and documenting activities, streamlining processes, and fostering real-time collaboration. Built specifically for security teams, these solutions enhance communication, boost productivity, and improve overall operational efficiency through workflow automation.

Organizations in the Middle East operate in high-security environments where seamless collaboration is essential. A robust work management platform enables swift response and coordination across complex operational landscapes. This growing need for integration is driving more organizations to align their security and IT departments. According to a recent Genetec report, 78% of end users in the META region indicate that these departments now work collaboratively, reflecting a shift toward a more unified security approach.

Overcoming barriers to effective collaboration

Over time, many organizations accumulate a patchwork of databases, spreadsheets, and standalone systems to communicate, create reports, and track activities. Some still rely on outdated paper-and-pen processes, which aren’t only time-consuming but also prone to errors. These disjointed methods hinder information sharing and coordination.

A digital work management platform consolidates these fragmented systems, offering teams a unified view of activities accessible on both desktop and mobile devices. To take full advantage of their security system data, security teams need to consider more than a generic work management solution.

Firas Jadalla
Regional Director
META, Genetec Inc

An ideal work management solution for security teams should accommodate security activities such as guard tours, patrols, and maintenance inspections. It should also seamlessly integrate with existing security systems. For instance, a video operator should be able to create a work request with an attached camera snapshot and route it to the appropriate team in just a few clicks.

To ensure trustworthy audits and reporting, the work management system should be built with strong cybersecurity measures and ensure that data can’t be manipulated after the fact by applying blockchain principles.
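A minimal sketch of that "blockchain principle", assuming nothing about any vendor's implementation: each audit record stores the hash of the one before it, so editing an earlier entry after the fact breaks the chain and is immediately detectable.

```python
# Hash-chained audit log sketch: tampering with any earlier record breaks the chain.
import hashlib, json

def append_record(chain: list[dict], event: dict) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"event": event, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain: list[dict]) -> bool:
    prev = "0" * 64
    for record in chain:
        expected = hashlib.sha256(json.dumps(
            {"event": record["event"], "prev_hash": record["prev_hash"]},
            sort_keys=True).encode()).hexdigest()
        if record["prev_hash"] != prev or record["hash"] != expected:
            return False
        prev = record["hash"]
    return True

log: list[dict] = []
append_record(log, {"type": "work_request", "camera": "lobby-02", "assigned_to": "maintenance"})
append_record(log, {"type": "status", "value": "closed"})
print(verify(log))                      # True: chain intact
log[0]["event"]["camera"] = "edited"
print(verify(log))                      # False: tampering detected
```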

Benefits of work management systems

Implementing a work management system can transform security operations in several ways:

• Improved Communication: Teams gain real-time visibility into task progress, responsibilities, and pending assignments. Updates and alerts can be shared seamlessly to request assistance or provide situational awareness.

• Enhanced Collaboration: Every team member contributes to shared goals rather than isolated tasks. Custom API integrations can connect with other systems, such as employee apps, further fostering teamwork.

• Time Savings: Built-in reporting tools automate activity logs and compliance audits, freeing up time for other critical tasks.

• Operational Efficiency: Routine tasks, incident management, and resource tracking are streamlined. Tasks are assigned to personnel with the appropriate skills, tools, and knowledge, ensuring readiness and precision.

• Workflow Automation: Automations simplify recurring tasks, such as setting reminders, generating reports, or notifying team leads when new requests are added.

• Resource Optimization: Features like work ticketing and asset management enable efficient resource allocation and management of internal and external requests.

• Mobile Support: Field officers benefit from mobile apps that enhance situational awareness, communication, and access to standard operating procedures on the go.

Today, governments across the region, including the UAE and Saudi Arabia, are heavily investing in smart security solutions as part of their national digital transformation strategies. A centralized work management platform not only supports these efforts but also helps businesses align with evolving security regulations, ensuring compliance and streamlining reporting processes.

Tips for successful implementation

Every organization has unique workflows, so it’s crucial to select a work management system that is customizable and intuitive, minimizing the need for extensive training.

Integration is another key factor. A platform that deeply integrates with your existing security ecosystem provides a cohesive view of operations and eliminates the need for manual data transfers or redundant processes.

A well-designed work management system can break down silos, empower teams, and boost efficiency. To ensure a successful deployment, adopt a lean and agile approach: start small and gradually incorporate more features as your team becomes comfortable with the platform.

With initiatives like Saudi Vision 2030 and UAE’s Smart City strategy, organizations are increasingly integrating AI-driven security and IoT-enabled monitoring into their operations. A work management platform with automation capabilities supports these advanced security frameworks.

"An ideal work management solution for security teams should accommodate security activities such as guard tours, patrols, and maintenance inspections. It should also seamlessly integrate with existing security systems."

THE ROAD TO INDUSTRY 5.0 IS YOUR DATA AND AI

Data is the core of Industry 5.0 says Christian Pedersen, Chief Product Officer, IFS

Many industries that rely on physical assets, from construction to manufacturing, to energy & utilities, are still working to fully realize the benefits of Industry 4.0. But the data they are gathering now will help them build that next step, cementing the foundations of Industry 5.0.

Industry 4.0, or the Fourth Industrial Revolution (FIR), is a concept that has been around since the mid-2010s when Klaus Schwab at WEF coined the term. This revolution was rooted in digitalization and the implementation of digital transformation. This was marked by using connected devices, data analytics, and automation for process-driven shifts. Combined, these digital elements play a huge role in our everyday lives, whether we’re at work or at home. Industry 5.0 is the next rung on that digital ladder and the natural progression to what we've learned so far.

When Industry 5.0 emerges, we can expect to see the convergence of all that work and collected data. The next industrial revolution will be steeped in bridging the physical and the digital realms. Effectively, this goes back to the human-versus-machine argument, but with both human and machine optimized to enhance their capabilities. AI and cloud computing will reach a harmony where workers can produce their best results, which can be replicated in processes throughout the supply chain. Industrial AI already powers our lives in the back end. Its capabilities will power decision-making and, despite speculation, won’t be a force for contention. While AI is set to join the disparate data and physical elements to create Industry 5.0, remember that AI is only as good as the data it’s trained on.

Data Begets Innovation

Using AI to integrate the data points from physical assets will unlock new avenues for innovation and variation. Real-time insights from production machinery and equipment will help drive operational excellence and provide an edge over competitors.

As well as operational excellence, Industry 5.0 sees the intersection of AI and Environmental, Social, and Governance (ESG) frameworks. AI presents a serious opportunity for businesses to drive sustainability throughout their workflows, physical operations and, to that extent, the larger supply chain. By harnessing AI-driven insights companies can optimize their processes from a manufacturing-based level, with AI proposing opportunities to reduce waste of all kinds to achieve greater profitability, use of time, and sustainability.

For businesses that manage physical assets, this integration is a reality, not an overstatement. The decision-making processes for businesses with significant capital assets will be transformed. Through advanced decision analytics, asset-rich enterprises can optimize capital allocation, manage risk, and drive more precise, data-driven business decisions. In an increasingly data-centric environment, industrial AI can provide a competitive edge by helping companies prioritize high-impact investments, adapt to changing regulatory and market conditions, and align with sustainability goals.

We live in a world of servitization where companies increasingly rent or lease industrial equipment instead of buying it outright. Think of robotics, aircraft engines, heavy construction machinery, or even delivery vehicles. As a result, manufacturers will be designing and building higher-quality machinery with in-built smart technologies to meet the demands of the servitization era.

AI can detect anomalies and maintenance issues in this equipment before determining the proper course of action: monitoring workflows for redundancy in the network, calling out a field service engineer to remedy the machine and, at the same time, rerouting the planners of other field service engineers to recoup any time losses. Again, this streamlines operations and minimizes industry downtime until the machinery is up and running again. Industrial AI will change the entire value proposition of production in the circular economy, identifying parts and components that need servicing before they show any physical signs of wear and tear.
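The underlying mechanics can be as plain as watching a sensor drift away from its recent baseline. The Python sketch below is a toy rolling z-score detector with made-up vibration readings, not any vendor's product; it simply shows how an early anomaly could trigger a field-service work order.

```python
# Toy predictive-maintenance sketch: flag a reading that drifts far from its
# recent baseline (rolling z-score). Thresholds and readings are invented.
from collections import deque
from statistics import mean, stdev

def make_detector(window: int = 20, threshold: float = 3.0):
    history: deque = deque(maxlen=window)

    def check(reading: float) -> bool:
        anomalous = False
        if len(history) >= 5:
            mu, sigma = mean(history), stdev(history)
            anomalous = sigma > 0 and abs(reading - mu) / sigma > threshold
        history.append(reading)
        return anomalous

    return check

check_vibration = make_detector()
readings = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.02, 4.8]   # last value simulates early wear
for value in readings:
    if check_vibration(value):
        print(f"anomaly at {value}: raise a field-service work order")
```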

Building Digital Twins Into Strategy

Taking something from the physical world and replicating it virtually is a technical concept. A concept that becomes crucial in an environment with integrated Manufacturing Execution Systems. Historically, these worlds were separate. Data would generate manufacturing orders and necessitate translation. Digital twins bridge this gap by processing information in real-time, breaking down the silos between the virtual and physical environments quicker than humans can. Again, achieving true optimization in milliseconds rather than seconds or even minutes.

There’s a cycle that occurs: simulation informs business practices which change the parameters for simulation, and so on. Simulations help identify the areas needed for improvement, allowing for iterative adjustments until the desired outcome is tested, proven and achieved. An easy way to visualize this process is to think of a farm. With the farmer’s current systems in place, it takes him three days to harvest. The farmer is sure, however, that there is a more efficient way of doing things but is hesitant to experiment with his live environment. With Digital Twin technology, the farmer can optimize tool usage, harvesting routes, and crop storage before muddy boots even hit the soil. Taking a three-day harvest and optimizing it so it’s achievable in just a single day.
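A toy version of that virtual experiment might look like the following sketch, where harvest plans are compared in simulation before anything changes on the real farm; every number here is invented for illustration.

```python
# Toy "digital twin" of the farm example: compare harvest plans virtually
# before touching the live environment. All parameters are made up.
import random

def simulate_harvest(hours_per_field: float, fields: int, crews: int, trials: int = 200) -> float:
    """Average days to finish, with random field-to-field variation, assuming 10-hour days."""
    days = []
    for _ in range(trials):
        total_hours = sum(hours_per_field * random.uniform(0.8, 1.2) for _ in range(fields))
        days.append(total_hours / (crews * 10))
    return sum(days) / trials

current_plan = simulate_harvest(hours_per_field=6.0, fields=10, crews=2)
optimised_plan = simulate_harvest(hours_per_field=4.0, fields=10, crews=4)  # better routes, tools, crews
print(f"current plan: ~{current_plan:.1f} days, optimised plan: ~{optimised_plan:.1f} days")
```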

Considering the time-to-value aspect, whether implementing universal AI across the business or focusing on specific edge cases, is crucial. In the above example, that value is realized before the August harvest even begins. Lifecycle production time can be reduced, production waste minimized, and service efficiency increased. These, in turn, lead to environmental gains with minimal energy consumption, optimized delivery routes and reduced charging times for electric vehicles.

Innovation in Industry 5.0 is using technological advancement to attain industry foresight and adaptation. Digital twins can empower customers to drive operational efficiencies and unlock productivity in a way never seen before. Integrating Industrial AI gives businesses a holistic view of their resources to find other areas of opportunity with informed decision-making at a granular and investment-based level.

Eliminating the Complexity

From the regulatory complexities of data collection and storage to varying levels of AI adoption within businesses, a successful transition into Industry 5.0 requires expert support. Costs of AI investments can snowball, so you must be strategic, targeting specific areas of your business for improvement. Generic, off-the-shelf AI tools trained on irrelevant data won’t help here. To remain competitive at a global scale, companies need to invest in this technology and work with proven partners.

Businesses that can effectively harness the data they collect and employ AI to create actionable insights will be ready for Industry 5.0, delivering more value to their customers and improving their employees’ working lives with better everyday processes to become true industry leaders. Whether in manufacturing, construction, or any other physical asset-focused industry, the businesses that can’t see the wood for the trees in their collected data will miss out on Industry 5.0.

"The next industrial revolution will be steeped in bridging the physical and the digital realms. Effectively this goes back to that human versus machine argument but optimizing both human and machine to enhance their capabilities."

GOODBYE CISO SCAPEGOATING

The age of corporate accountability is here as far as cybersecurity is concerned says Andre Troskie, EMEA Field CISO at Veeam

Responsibility for cybersecurity and data resilience can no longer be placed on the shoulders of CISOs alone. New EU regulations like NIS2 and DORA bring corporate accountability to the foreground, holding the wider corporate leadership team responsible. Collectively, boards need to be educated on cyber threats as they face being held accountable for any cybersecurity incidents that occur under their watch - and can now be fined individually alongside the wider organization in the case of non-compliance.

Despite this, awareness of corporate accountability is still too low. That’s not to say there’s been no buy-in, but C-levels aren’t moving fast enough. And it’s no use being aware of a concept if you don’t take action. In EMEA alone, 95% of organizations have siphoned budgets from other resource pots to reach compliance. So, the urgency is there, but C-suite action is yet to catch up. What do they need to change to get up to speed?

Shifting priorities

NIS2 and DORA have ushered in a new era of corporate accountability, enshrining it in regulation on a level never seen before in cybersecurity. And for good reason. Over the last couple of decades, practically every business function has become digital, creating an exponentially growing source of data for organizations to manage and, more importantly, protect. Cybersecurity has become a vital business outcome, making it just as important as any commercial aspect, so naturally, it should sit under the purview of the C-suite.

These regulations simply formalize what should already have been occurring within organizations. For many, however, cybersecurity and resilience were still being sidelined. Understandably, the C-suite has historically left cybersecurity in the hands of the security teams; admittedly, its business value can be hard to see at times. But being more resilient and able to recover faster will minimize the damage organizations face, across share prices, revenue, and customer trust. As C-suites are educated further on the topic thanks to these regulations, these long-term benefits should help adjust priorities - alongside the added pressure of non-compliance!

Andre Troskie
EMEA Field CISO, Veeam

While these pressures have improved the rates of C-suite buy-in to corporate accountability, hands-on involvement is still not at the necessary levels. The vast majority of EMEA organizations siphoned budgets from other sources to meet NIS2 compliance, so while they understand the need for compliance, C-suites still lack a joined-up strategy to reach it. Sure, part of this can be chalked up to the immense learning curve that many C-suite executives are facing. Cybersecurity is no small task. To understand it properly, they’ll need to get stuck in at the deep end.

Taking the leap

Getting first-person experience of your organization’s incident response plans is essential for executives seeking to truly understand their responsibilities in this new age of corporate accountability. The same regulations that demand this also call for consistent compliance, not a one-and-done tick box. C-levels will need to be able to demonstrate that their organization’s incident response plans work in the real world, with consistent and rigorous scenario testing. It’s not something that executives can memorize and recite when the occasion arises; they need to live and breathe it.

These regulations don’t call for executives to become experts on cybersecurity by any means. The core thing that C-suites need to know inside and out is their incident response plans. Take physical safety as an example. As a C-suite, you wouldn’t need to know the ins and outs of your fire alarm systems; you just need to know they’re there, that they function, and who is in charge of maintaining them. It’s not their responsibility to be the fire safety expert, simply to know who is, who the backups are, and to ensure that the necessary drills are taking place to adequately prepare.

Cybersecurity incident response plans follow a similar philosophy. Both NIS2 and DORA compliance hinge on their robustness, and that’s where C-suites need to focus their efforts. With a practical understanding of these plans, executives can identify and address their weak spots, whether that be with new processes or by bringing new, external skills into their workforce.

Forward-thinking

Just as these regulations call for consistent compliance and frequent scenario-based stress testing of plans - so does the cybersecurity landscape. Vulnerabilities and attack surfaces change every day, and plans need to be able to keep up. Using the demands of these regulations as an opportunity not just to tick a box, but to develop a truly security-aware and data-resilient culture is an opportunity that executives can’t afford to miss.

You can be as compliant as possible but it’s impossible to become 100% secure. Without data resilience and safeguards like backups in place, C-suites won’t be able to recover following a breach - no matter how compliant they are.

THE DIGITAL HORIZON TRACKING SECURITY TRENDS

Bashar Bashaireh, VP Middle East, Türkiye & North Africa at Cloudflare says we must rethink our approaches to infrastructure, security, and the role of technology in our lives.

The digital landscape is undergoing a rapid transformation, driven by several converging factors: the swift advancements in artificial intelligence (AI), growing infrastructure needs, the escalating challenges of cybersecurity, the shifting regulatory landscape, and fundamental changes in how we connect and collaborate. Together, these elements are shaping the future of the internet, creating a complex, interconnected ecosystem where each trend influences and amplifies the others. As a result, we must rethink our approaches to infrastructure, security, and the role of technology in our lives.

The AI Revolution: The Future Is Now (But Unevenly Distributed)

AI, once merely a buzzword, is now at the core of many technological advancements. A recent McKinsey survey on AI reveals that 65% of organizations use generative AI regularly, and 72% have integrated AI into at least one business function. AI's impact is becoming as transformative as the advent of electricity in the early 20th century, which reshaped entire industries and economies. Similarly, AI is now embedded in various workflows, enhancing productivity and fostering new forms of creativity.

AI-powered coding assistants are streamlining software development, generative AI tools are enhancing content creation, and advanced AI models are assisting healthcare providers with early disease detection. These real-time breakthroughs are revolutionizing industries, making AI an invisible but indispensable part of our daily lives.

However, with such advancements come significant responsibilities. Concerns around algorithmic bias, data privacy, and intellectual property have moved from hypothetical to urgent. As AI systems become increasingly integrated into everyday life, society faces the challenge of balancing innovation with accountability. This balance is crucial for both engineers and policymakers and is vital for all who rely on digital services.

Infrastructure Evolution: The Edge Gets Sharper

While AI might dominate the headlines, profound changes are also taking place in the foundational layers of our digital world. Edge computing, once a nascent concept, is now rapidly evolving into a more sophisticated model that fundamentally alters how we conceive of infrastructure.

To understand this, imagine the internet as a sprawling city. In the past, most computing tasks were handled in large, centralized data centres. Now, edge computing is like setting up satellite offices across the city's suburbs, bringing processing power closer to the areas that need it. This localized model reduces latency, enabling real-time analytics, autonomous vehicles capable of split-second decision-making, and gaming without lag. Beyond speed, when AI is integrated into this distributed framework, it opens up entirely new classes of applications.

However, these advantages come with their own challenges. The demand for GPU capacity to support AI workloads has skyrocketed, often outstripping supply. As a result, infrastructure providers must rethink chip designs, explore new architectures, and invest in sustainable energy solutions. The future data centre will likely be a global network of micro-facilities, carefully coordinated to balance performance, sustainability, and security.

The growth of edge computing highlights the need for neutrality, flexibility, and a distributed approach to computing and storage. By directing workloads to regions abundant in resources and clean energy, we can create an economically viable and environmentally responsible digital ecosystem. The edge is not just becoming more powerful but smarter, more efficient, and more adaptive to the demands of an increasingly connected world.

Cybersecurity: New Challenges Amid a Changing Landscape

As we look ahead to 2025, cybersecurity remains a paramount concern for businesses and IT leaders. According to Cloudflare's Shielding the Future: Middle East & Türkiye Cyber Threat Landscape Report 2024, 42% of regional business and IT leaders expect cybersecurity to make up at least 20% of their organizations’ IT spend over the year ahead. Of those expecting a budgetary increase, 91% anticipate a rise of more than 10%.

While this increased investment is encouraging, cybersecurity now faces a range of transformative forces, including the democratization of AI, the adoption of zero-trust security models, and the rise of quantum computing.

AI is a double-edged sword in cybersecurity. On the one hand, AI enhances threat detection and automates defense systems. IBM’s 2024 Cost of a Data Breach Report highlights that AI-driven tools can reduce breach costs by nearly half. On the other hand, cyber attackers are leveraging AI to create more adaptive and sophisticated threats. This has led to the shift away from static defenses towards more agile, continuously updated security models.

AI’s role in cyberattacks is also a growing concern. Hackers are using AI to launch automated, adaptive malware attacks that exploit vulnerabilities on an unprecedented scale. There is an urgent need to leverage AI for defense and bolster cybersecurity measures.

The rise of quantum computing adds an additional layer of urgency. Quantum computing's emerging capabilities could eventually compromise current encryption methods, necessitating a move towards quantum-safe cryptography. Recent breakthroughs in quantum chip technology, like Google’s advances, make it clear that quantum-scale challenges are imminent. Preparing for this shift by adopting crypto-resilience is no longer a matter of choice but a pressing priority.

Connectivity: The Next Frontier Is Above Us

For all the innovations in AI, edge computing, and cybersecurity, one fundamental element underpins them all: connectivity. As the digital world evolves, ensuring robust, universal connectivity is crucial. Over the next few years, new approaches like satellite-based networks will significantly expand global internet access. Projects such as SpaceX's Starlink aim to connect even the most remote regions, while the rollout of 5G and the future development of 6G will dramatically enhance network performance and alter the way we architect communication systems.

However, connectivity isn’t just about increasing speed. As the Internet of Things (IoT) and machine-to-machine interactions become more prevalent, networks must be capable of handling massive volumes of data, from autonomous drones delivering medical supplies to sensors monitoring agricultural fields. The challenge will be ensuring that these networks are secure, reliable, and scalable, meeting the demands of a connected world.

The Human Element: A Workforce in Transition

At the heart of these technological transformations lies the human element. As the digital landscape evolves, so too does the demand for new skills. The digital skills gap is rapidly widening, and as AI, cybersecurity, and other technologies become more integrated into daily life, coding literacy, cybersecurity awareness, and AI fluency are becoming essential competencies for the 21st century workforce. The World Economic Forum forecasts that 23% of global jobs will change due to technological advancements like AI and automation.

The rise of remote collaboration platforms is another significant shift. Initially a response to the pandemic, remote work is now a permanent fixture. Today's platforms go beyond basic tools like email and video calls, integrating AI-driven features such as real-time language translation and meeting transcription. These innovations create opportunities for more inclusive workplaces and communities, but they also present challenges in terms of ensuring technology meets diverse human needs.

Conclusion: Navigating the Digital Horizon

The technological shifts we are witnessing are interdependent, with AI, edge computing, cybersecurity, connectivity, and human resources all influencing one another. Companies will need to take a holistic approach to navigate this complex landscape, choosing partners who can scale operations, maintain security, and adapt to evolving regulations.

As we stand at this critical juncture, the choices made in the coming years will determine whether we harness these technologies to solve global challenges or become overwhelmed by their complexity. The digital horizon is rich with opportunities. By fostering responsible stewardship, thoughtful regulation, and collaboration between enterprises, governments, and citizens, we can ensure that the digital ecosystem remains resilient and trustworthy. The true measure of success will not be the number of new technologies we adopt but how effectively we integrate them into the fabric of society to meet the needs of a rapidly changing world.

CISCO N9300 SERIES SMART SWITCHES

Cisco announced a family of data center Smart Switches, disrupting traditional data center network design by enabling networking and security services in a compact all-in-one solution. Utilizing programmable AMD Pensando data processing units (DPUs), the switch functions as a high-capacity, multifunctional service-hosting device, architecturally transforming data centers to simplify their design and make them more efficient. Cisco’s first integrated offering, the Smart Switch with Cisco Hypershield, introduces a new approach to securing AI data centers by fusing security directly into the network fabric.

As AI drives rapid growth, organizations must manage significantly increased power, compute and networking demands. In traditional data center architectures, where each new service required a specific device, growth led to complexity. It also required adding, changing, or upgrading the enforcement of security policies with each new service or workload. Cisco Smart Switches offer a simpler, more efficient and extensible architecture by integrating services directly with the data center fabric, rather than bolting them on top.

By combining Cisco data center networking, Silicon One, and AMD DPUs, customers can scale services and adapt quickly to evolving business needs, all without the need for any additional hardware. The switches feature two processing engines: a high-performance network processor for stable data transfer and a network services sidecar for agile security processing. Traffic is intelligently steered between the two engines for optimal performance. This architectural shift drives cost savings through hardware consolidation, reduced power consumption, and operational simplicity. Cisco Smart Switches embrace all the capabilities of an NX-OS switch and management through Nexus Dashboard, and will unlock a diverse set of use cases like stateful segmentation, IPSec encryption, enhanced telemetry, DDoS protection and more.

Highlights:

• Simplified, scalable networking and security with Cisco Nexus Dashboard and Cisco Hypershield for hyper-distributed data centers.

• Get 800G services throughput, enabling high-speed data transfer and superior network performance.

• Self-qualifying updates with dual data plane.

• Consistent enforcement across workloads and networks in public and private clouds.

• Consolidates networking, security, and other services into a single physical form factor.

Dell Pro Max 16 laptop

Dell has expanded the Dell Pro Max high-performance AI PC portfolio to meet the needs of today’s AI developers, power users and specialty users. The portfolio offers a versatile range of powerful AI PCs designed for demanding tasks – from light AI development, data analysis and design simulation to training, inferencing and fine-tuning the most complex LLMs, before deploying at scale.

Dell Pro Max 16 laptop merges exceptional performance with a large display and is designed for power users and light design applications in the office or on the go. Designed for maximum performance, Dell Pro Max provides the power and top-of-the-line specs for engineering, AI development, data analysis and other intensive workloads.

Highlights:

• Built-in AI boosts your workflows, effortlessly tackles large data sets, and helps you seamlessly run applications enhanced with AI capabilities by top ISVs. With Intel Core Ultra processors, dedicated NPU and NVIDIA professional graphics, enjoy powerful on-device AI experiences like inclusive collaboration, seamless content creation, energy efficiency, and faster threat detection.

• Complex workloads become effortless with Intel Core Ultra processors. With dedicated engines for specific tasks, exceptional performance and speed become the new standard.

HPE PROLIANT SERVERS

Hewlett Packard Enterprise announced eight new HPE ProLiant Compute Gen12 servers, the latest additions to a new generation of enterprise servers that introduce industry-first security capabilities, optimize performance for complex workloads and boost productivity with management features enhanced by artificial intelligence (AI). The new servers will feature upcoming Intel Xeon 6 processors for data center and edge environments.

The HPE ProLiant Compute Gen12 portfolio sets a new standard for enterprise security with built-in safeguards at every layer – from the chip to the cloud – and every phase of the server lifecycle. HPE Integrated Lights Out (iLO) 7 introduces an enhanced and dedicated security processor called secure enclave that is engineered from the ground up as HPE intellectual property. HPE ProLiant Compute servers with HPE iLO 7 will help organizations safeguard against future threats as the first server with quantum computing-resistant readiness and to meet the requirements for a high-level cryptographic security standard, the FIPS 140-3 Level 3 certification.

Highlights:

• HPE Compute Ops Management is a cloud-based software platform that helps customers secure and automate server environments. Proactive and predictive automation, now enhanced with AI-driven insights, helps organizations improve energy efficiency by forecasting power usage and enabling enterprises to set thresholds to control costs and carbon emissions on a worldwide level.

• Automated on-boarding simplifies server set-up and ongoing management, particularly in remote or branch-office deployments where local IT resources are not available.

• All new HPE Compute Ops Management features, including AI-informed insights, new map-based visibility and third-party tool integration, will be available to HPE ProLiant Compute Gen10 servers and newer.

• A new global map view simplifies management so customers can instantly identify server health issues across distributed IT environments, while multi-vendor toolset integration reduces downtime by up to 4.8 hours per server every year.

• To aid customers evaluating future purchases, a standalone tool called HPE Power Advisor estimates environment performance metrics such as energy costs and greenhouse gas emissions.

• New additions to the HPE ProLiant Compute Gen12 portfolio are right-sized to address demanding workloads that include AI, data analytics, edge computing, hybrid cloud and virtual desktop infrastructure (VDI) solutions.
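As a rough illustration of the forecast-and-threshold idea mentioned in the first highlight above – and not HPE's actual API or forecasting model – the short Python sketch below projects near-term power draw from recent readings and flags servers that would exceed an operator-set budget. All names and numbers are illustrative assumptions.

# Hypothetical forecast-and-threshold sketch; not HPE Compute Ops Management's
# actual API or forecasting model. It projects near-term power draw from a
# simple moving average and flags servers that would exceed a set budget.

from statistics import mean
from typing import Dict, List

POWER_BUDGET_WATTS = 450  # illustrative per-server threshold set by the operator

def forecast_next_hour(readings_watts: List[float]) -> float:
    """Naive forecast: average of the most recent readings."""
    recent = readings_watts[-6:]  # e.g. the last six 10-minute samples
    return mean(recent)

def servers_over_budget(fleet: Dict[str, List[float]]) -> Dict[str, float]:
    """Return servers whose forecast power draw exceeds the budget."""
    flagged = {}
    for name, readings in fleet.items():
        forecast = forecast_next_hour(readings)
        if forecast > POWER_BUDGET_WATTS:
            flagged[name] = forecast
    return flagged

fleet = {
    "edge-01": [300, 310, 305, 320, 330, 340],
    "edge-02": [420, 440, 455, 470, 480, 495],
}
print(servers_over_budget(fleet))  # {'edge-02': 460.0}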

AGENTIC AI TO AUTONOMOUSLY RESOLVE 80% OF CUSTOMER SERVICE ISSUES BY 2029

Agentic AI set to reshape customer service and drive automation in support teams, predicts Gartner.

By 2029, agentic AI will autonomously resolve 80% of common customer service issues without human intervention, leading to a 30% reduction in operational costs, according to Gartner, Inc.

Agentic AI is poised to revolutionize the way service interactions are conducted. While previous AI models were limited to generating text or summarizing interactions, agentic AI introduces a new paradigm where AI systems possess the capability to act autonomously to complete tasks. Both customers and organizations will leverage this technology to automate interactions through the use of AI agents and bots, fundamentally reshaping the relationship between service teams and their customers.
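To make the distinction concrete, the sketch below shows in simplified Python how an agentic system differs from a text-only model: instead of returning an answer in one pass, it loops over model-proposed actions and tool calls until the request is resolved. The function and tool names (plan_next_step, get_order_status, cancel_membership) are illustrative assumptions, not any vendor's actual API.

# Minimal illustrative agent loop (hypothetical tool names, not a vendor API).
# A text-only model would stop after a single generation step; an agentic
# system keeps planning and acting until the customer's request is resolved.

from typing import Callable, Dict

# Hypothetical tools the agent may invoke on the customer's behalf.
TOOLS: Dict[str, Callable[[dict], dict]] = {
    "get_order_status": lambda args: {"order_id": args["order_id"], "status": "delayed"},
    "cancel_membership": lambda args: {"account_id": args["account_id"], "cancelled": True},
}

def plan_next_step(request: str, history: list) -> dict:
    """Stand-in for an LLM planning call: choose the next action or finish."""
    if not history:
        return {"action": "get_order_status", "args": {"order_id": "A-1001"}}
    return {"action": "finish", "summary": "Order A-1001 is delayed; customer notified."}

def run_agent(request: str) -> str:
    """Plan, call a tool, record the observation, and repeat until done."""
    history = []
    while True:
        step = plan_next_step(request, history)
        if step["action"] == "finish":
            return step["summary"]
        observation = TOOLS[step["action"]](step["args"])
        history.append((step["action"], observation))

print(run_agent("Where is my order A-1001?"))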

“Agentic AI has emerged as a game-changer for customer service, paving the way for autonomous and low-effort customer experiences,” said Daniel O’Sullivan, Senior Director Analyst in the Gartner Customer Service & Support Practice. “Unlike traditional GenAI tools that simply assist users with information, agentic AI will proactively resolve service requests on behalf of customers, marking a new era in customer engagement.”

Redefining the Customer Experience Through Automated Service Requests and Enhanced Interactions

This shift requires service teams to adapt to supporting both human customers and an increasing number of machine customers powered by these advanced AI tools. For customer service teams accustomed to handling reactive demand from human customers, this transition presents a potential challenge.

“Organizations will need to rethink their approach to managing inbound service interactions, preparing for a future where AI-driven requests become the norm. In this future, automation will need to become the dominant strategy for all service teams,” O’Sullivan said.

For customers leveraging agentic AI, the service experience will undergo a significant transformation. AI agents will not only provide information but will also take action, such as navigating websites to cancel memberships or negotiating optimal shipping rates on behalf of business customers. Beyond these delegated tasks, agentic AI holds the potential for proactive issue identification and resolution.

Service organizations must brace for changes in the nature and volume of interactions, which will redefine the relationship between service teams and their customers, open new avenues for value delivery, and alter the landscape of customer data collection. In order to prepare, customer service and support leaders should:

Prepare for Automation: Anticipate more automated interactions from AI agents, invest in scalable infrastructure, and optimize self-service channels to manage bot traffic.

Revise Service Models: Update models to handle AI-driven service volume and implement dynamic routing to differentiate between human and AI interactions (a simple routing sketch follows this list).

Set AI Interaction Policies: Develop guidelines for AI-led interactions, addressing data privacy, security, and escalation.

Collaborate with Product Teams: Partner to integrate agentic AI into products for proactive issue detection and reduced reliance on external AI.
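As a rough sketch of the dynamic-routing recommendation above (see Revise Service Models), the Python below separates human and AI-agent traffic using two simple signals: whether the caller declares itself as an agent and its observed request rate. The signals, thresholds and queue names are assumptions for illustration, not a standard or any vendor's implementation.

# Hypothetical routing sketch: send AI-agent traffic to a machine-optimized
# self-service queue and keep human interactions on the human-support queue.
# Signals and thresholds below are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class ServiceRequest:
    channel: str              # e.g. "web", "api", "chat"
    declared_agent: bool      # caller self-identifies as an AI agent
    requests_per_minute: int  # observed rate from this caller

def route(req: ServiceRequest) -> str:
    """Return the queue this request should be sent to."""
    if req.declared_agent:
        return "agent-self-service"
    # High-rate API traffic is likely automated even when undeclared.
    if req.channel == "api" and req.requests_per_minute > 30:
        return "agent-self-service"
    return "human-support"

print(route(ServiceRequest("api", True, 5)))    # agent-self-service
print(route(ServiceRequest("chat", False, 1)))  # human-support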

“As customers increasingly leverage agentic AI-powered agents to initiate, manage, and negotiate service requests on their behalf, service teams must adapt to this transformative shift, embracing new roles and skills to effectively collaborate with these intelligent systems,” said O’Sullivan.
