CXODX_Magazine_June_2025



DISCOVER

Combat Advanced Attacks with Speed, Precision, and Efficacy

Maximize Efficiency and Reduce Alert Fatigue

Navigate the Complex Regulatory and Compliance Landscape

Securonix is pushing forward in its mission to secure the world by staying ahead of cyber threats. Securonix EON provides organizations with the first and only AI-Reinforced threat detection, investigation and response (TDIR) solution built with a cybersecurity mesh architecture on a highly scalable data cloud. The innovative cloud-native solution delivers a frictionless CyberOps experience and enables organizations to scale up their security operations and keep up with evolving threats. For more information, visit www.securonix.com or follow us on LinkedIn and Twitter.

CXO DX / JUNE 2025

FROM PILOTS TO PRODUCTION

Enterprises are rapidly adopting new technologies such as GenAI, data streaming, and digital twins. These are becoming central to how businesses operate, compete, and scale.

This month’s cover story on Omnix highlights some of the significant trends of AI adoption and how the company is addressing the challenges that enterprises face when it comes to ROI from AI deployments. With a three-pronged approach that includes AI monetization, conversational AI, and engineering-led digital transformation, Omnix is helping enterprises move from fragmented pilots to AI-first architectures. Among other highlights, in the feature on enterprise GenAI adoption, CIO perspectives reveal that GenAI strategies are increasingly aligned with business KPIs—whether through copilots for internal automation or fine-tuned models tailored to domain-specific tasks.

The Confluent interviews featured in this issue highlight that real-time data infrastructure is the backbone of any scalable AI strategy. Data streaming is not just replacing legacy data lakes and warehouses, it’s unifying them. From Flink-powered stream processing to Tableflow’s seamless data lake integration, Confluent is enabling organizations to act on data at the speed of the business.

Equally critical is the question of cyber readiness in all enterprise approaches to technology adoption. Positive Technologies is taking an ecosystem view of security by building cyber talent pipelines, embedding practical training into university curricula, and advancing cybersecurity sovereignty through strategic regional partnerships.

Across these conversations, one thing stands out: technology adoption in the enterprise is moving fast but with purpose. It’s not just about scaling tools but about enhancing outcomes.

It is also evident that there is a growing emphasis on interoperability and flexibility. Enterprises aren’t looking for siloed solutions; they want platforms that integrate across cloud, edge, and on-prem environments, that can meet evolving compliance mandates while staying agile enough to support new AI workloads.

RAMAN NARAYAN

Co-Founder & Editor in Chief
narayan@leapmediallc.com
Mob: +971-55-7802403

Sunil Kumar Designer

SAUMYADEEP HALDER

Co-Founder & MD
saumyadeep@leapmediallc.com
Mob: +971-54-4458401

Nihal Shetty Webmaster

MALLIKA REGO

Co-Founder & Director Client Solutions
mallika@leapmediallc.com
Mob: +971-50-2489676

Fred Crehan, Area Vice President Growth Markets at Confluent discusses regional momentum, cloud flexibility, and why streaming data is essential for AI success.

Waqas Butt, Group Head of ICT & AI, Alpha Dhabi Holding PJSC advocates moving beyond fragmented AI experiments to strategic AI-first integration in the enterprise

Omnix is helping enterprises unlock value from AI, data, and digital twin ecosystems across verticals.

Yuliya Danchina, Positive Technologies Customer and Partner Training Director, Head of Positive Education, discusses how the company is helping build a global cybersecurity talent ecosystem.

Jose Thomas Menacherry, Managing Director of Bulwark Technologies provides valuable insights into Bulwark Technologies’ strategies and initiatives in the ever-evolving cybersecurity landscape.

Yazen Rahmeh, Cybersecurity Expert at SearchInform highlights the most important competencies for Data Loss Prevention (DLP) systems

As generative AI gains momentum across industries, enterprise leaders are shifting focus from experimentation to scalable, compliant deployment

Yevhen Zuhrer, Head of Sales and Business Development, Syteca shares how the company is redefining privileged access and user activity monitoring for modern enterprises.

Antoinette Hodes, Evangelist & Global Solution Architect, Office of the CTO at Check Point Software, discusses the rising threat of exploited edge devices in cybersecurity.

Shaun Clowes, Chief Product Officer of Confluent explains how the company is redefining data architecture for the AI era with innovations like Flink, Tableflow, and a governance-first approach.

Moty Cohen, Director of EMEA at Vicarius, discusses the company’s innovative approach to vulnerability and patch management

Biju Unni, Vice President at Cloud Box Technologies says that the evolving cyberthreat complexities demand a new kind of leadership

The international cybersecurity festival Positive Hack Days, hosted by Positive Technologies, emphasized the need for cooperation as the basis.

Bart Lenaerts, Senior Product Marketing Manager, Infoblox, writes about how adversaries innovate with GenAI and the case for predictive intelligence.

CISCO JOINS STARGATE UAE INITIATIVE

Cisco to collaborate with G42, OpenAI, Oracle, NVIDIA and SoftBank Group to power AI innovation and infrastructure development in recently announced UAE-US AI Campus in Abu Dhabi

Cisco has announced the signing of a Memorandum of Understanding (MoU) to join the Stargate UAE consortium as a preferred technology partner. The strategic MoU, signed by Cisco’s Chair and Chief Executive Officer Chuck Robbins together with other consortium partners, G42, OpenAI, Oracle, NVIDIA and SoftBank Group, envisions the construction of an AI data center in Abu Dhabi with a target capacity of 1 GW, with an initial 200 MW capacity to be delivered in 2026.

As a partner in this initiative, Cisco will provide advanced networking, security and observability solutions to accelerate the deployment of next-generation AI compute clusters.

“With the right infrastructure in place, AI can transform data into insights that empower every organization to innovate faster, tackle complex challenges, and deliver tangible outcomes,” said Chuck Robbins, Cisco Chair and CEO. “Cisco is proud to join this consortium to harness the power of AI and deliver the infrastructure that will enable tomorrow’s breakthroughs.”

This announcement follows Robbins’ recent visit to Bahrain, Saudi Arabia, Qatar, and the UAE, where Cisco announced a series of strategic initiatives across all phases of the AI transformation in the region. These initiatives employ Cisco’s trusted technology across the region’s AI infrastructure buildouts, leveraging the company’s deep expertise in networking and security together with longstanding regional partnerships. By fostering the development of secure, AI-powered digital infrastructure and collaborating with key Cisco partners, the company is delivering world-class, trusted technology to the region.

IBM UNVEILS HYBRID CAPABILITIES TO SCALE ENTERPRISE AI

Agent Catalog in watsonx Orchestrate to simplify access to 150+ agents and pre-built tools

At its annual THINK event, IBM unveiled new hybrid technologies that break down the longstanding barriers to scaling enterprise AI – enabling businesses to build and deploy AI agents with their own enterprise data.

IBM estimates that over one billion apps will emerge by 2028, putting pressure on businesses to scale across increasingly fragmented environments. While AI investments are accelerating, only 25% of initiatives meet ROI expectations, as per the new IBM CEO study, prompting IBM to offer hybrid technologies, agent capabilities, and deep industry expertise from IBM Consulting to help businesses operationalize AI.

"The era of AI experimentation is over. Today's competitive advantage comes from purpose-built AI integration that drives measurable business outcomes," said Arvind Krishna, Chairman and CEO, IBM. "IBM is equipping enterprises with hybrid technologies that cut through complexity and accelerate production-ready AI implementations."

IBM is providing a comprehensive suite of enterprise-ready agent capabilities in watsonx Orchestrate to help businesses put them into action, including pre-built domain-specific agents for areas such as human resources, sales, and procurement, as well as utility agents. It also facilitates integration with over 80 business applications, including Salesforce, Workday, and SAP, while providing comprehensive agent lifecycle management. IBM is also introducing the new Agent Catalog in watsonx Orchestrate to simplify access to 150+ agents and pre-built tools from both IBM and its wide ecosystem of partners.

The enhanced watsonx.data now supports unstructured data, offering up to 40% more accurate AI agent performance than traditional methods. New tools include watsonx.data integration (cross-format orchestration), watsonx.data intelligence (insight extraction) and content-aware storage (CAS) on IBM Fusion for real-time data processing.

IBM is also expanding its capabilities via the DataStax acquisition and integration with Meta’s Llama Stack, reinforcing its leadership in generative AI openness and scalability.

IBM's new content-aware storage (CAS) capability is now available as a service on IBM Fusion.

IBM is launching IBM LinuxONE 5, its most secure and performant Linux platform for data, applications, and trusted AI, capable of processing up to 450 billion AI inference operations per day.

ZOHO LAUNCHES AI-POWERED ULAA ENTERPRISE BROWSER

AI-powered enterprise browser offers proactive phishing protection, security, and centralised control for organisations

Zoho announced the launch of Ulaa Enterprise, a new secure enterprise browser designed to help businesses in the Middle East and North Africa (MENA) region enhance their cybersecurity posture amid escalating threats—particularly phishing attacks, which have emerged as the region’s most pervasive cybersecurity concern.

As MENA organisations embrace cloud solutions, the browser has become the primary workspace—and the largest attack surface. Ulaa Enterprise secures this critical access point by embedding protection directly into the browser, eliminating the need for complex third-party tools or virtual environments. The browser’s AI capabilities, powered by Zoho’s proprietary AI engine Zia, provide an additional layer of intelligent protection. Equipped with an integrated ZeroPhish system, it analyses URLs and web page behaviour in real-time to detect and block phishing attempts before users even interact with malicious links. Zia also categorises and filters unsafe web content automatically, creating a safer and more compliant browsing experience without disrupting employee productivity.

“It’s uncommon for businesses to consider investing in paid browsers as part of their security strategy. However, with the sharp rise in cyberattacks across the MENA— particularly those stemming from unsafe and unsecured browsing—this mindset is shifting. Ulaa Enterprise was built specifically for organisations that want to strengthen their first line of defence, enhance cybersecurity hygiene, and safeguard both their data and their customers’ trust,” said Saran B Paramasivam, Regional Director MEA, Zoho.

Ulaa Enterprise offers IT and security teams complete visibility and granular control over browser activity. Administrators can centrally define security policies, restrict downloads and extensions, monitor user behaviour, and enforce rules across different departments or user groups—all from a single console. Built-in data loss prevention measures ensure that sensitive information cannot be shared, copied, or downloaded without authorisation, while detailed audit logs and real-time monitoring allow teams to act quickly and decisively against potential threats.

SOPHOS TO LAUNCH NEW DATA CENTER IN THE UAE

Strategic investment strengthens local data sovereignty, performance, and partner enablement in support of the UAE’s digital transformation goals

Sophos, a global leader in innovative security solutions for defeating cyberattacks, has announced plans to launch a new data center in the UAE by year end. The expansion is part of Sophos’ broader regional investment strategy and reinforces its commitment to supporting the UAE’s vision of becoming a global digital hub, while enabling local organizations to benefit from enhanced performance, data sovereignty, and regulatory compliance.

Hosted on Amazon Web Services (AWS) infrastructure within the UAE, the data center will power Sophos’ advanced, cloud-native security solutions, bringing improved performance, regulatory compliance, and data sovereignty to organizations across the region.

“This launch reflects our mission to defend organizations of all sizes against inevitable cyberattacks with unmatched expertise and adaptive defenses,” said Gerard Allison, Senior Vice President of EMEA Sales at Sophos. “By bringing local infrastructure to the UAE, we’re delivering on our vision of enabling every organization to achieve superior cybersecurity outcomes. This expansion supports our strategy of democratizing, leveraging AI and automation, and empowering our partners to scale securely.”

Key Benefits for Local Customers:

• Enhanced Data Sovereignty: Local hosting ensures adherence to national and sector-specific regulations – critical for industries like government, healthcare, and finance.

• Improved Performance: Reduces latency for faster responsiveness for cloud-based services, including Sophos Central.

• Enterprise and Public Sector Readiness: Built to meet the high standards of mission-critical environments with advanced security and operational resilience.

The new data center enables Sophos’ regional partners to deliver in-region data hosting to their customers, enabling them to meet local compliance demands while improving service delivery. It also reflects Sophos’ ongoing commitment to empowering partners with the tools, performance, and innovation needed to succeed in an evolving threat landscape.


FINESSE PARTNERS WITH SECURITI

Partnership will seek to bolster data security and AI governance

Finesse, a leading Digital Transformation and AI Revolution company, has announced a strategic partnership with Securiti, a pioneer of the Data+AI Command Center.

The partnership brings together Finesse’s proven expertise in deploying transformative digital solutions with Securiti’s groundbreaking technologies in managing sensitive data, offering a sophisticated blend of capabilities designed to address the multifaceted demands of today’s cybersecurity landscape. By integrating Securiti’s innovative AI-driven data privacy and security solutions into Cyberhub’s comprehensive portfolio, Finesse is set to equip organizations with advanced tools to protect their critical information across both structured and unstructured data environments.

Finesse’s Cyberhub already boasts an impressive suite of solutions, including its industry-leading Cognitive Security Operations Center (CSOC), automated incident response frameworks, AI-driven Zero Trust architecture, and governance frameworks for generative AI. With the addition of Securiti’s advanced platforms, such as Data Vault, Data Leakage Protection, and privacy-compliance automation, the enhanced Cyberhub offering will empower businesses to deploy proactive, scalable, and highly intelligent cybersecurity strategies.

Megha Shastri, Vice President – Enterprise Accounts, Finesse, said, “Together with Securiti we will address a critical set of challenges that organizations face in securing sensitive data across modern, distributed environments—especially in the cloud. This partnership will empower our customers with continuous visibility and control over sensitive data, reducing security risks and ensuring compliance across environments.”

“Organizations today face a dual imperative: to innovate rapidly with AI and cloud technologies, while simultaneously maintaining stringent data security and privacy controls,” said Tahir Latif, Chief Privacy Officer (META) at Securiti. “Through this partnership, we are enabling enterprises to automate the discovery, classification, and protection of sensitive data at scale, providing the foundational intelligence that downstream security and AI governance solutions rely on.”

Securiti’s unique ability to combine robust data visibility and controls with AI intelligence aligns seamlessly with Cyberhub’s mission to ensure businesses maintain visibility and control over their digital assets while staying ahead of emerging threats. This integration also enables enterprises to achieve and sustain compliance with international privacy regulations, such as GDPR, CCPA, and similar standards, through automation and data-driven insights.

GOOGLE CLOUD SUMMIT DOHA CELEBRATES THE SECOND ANNIVERSARY OF ITS LOCAL CLOUD REGION

Second annual Summit showcases AI advancements and customer success

Google Cloud hosted its second annual Google Cloud Summit in Doha, held under the patronage of His Excellency Mohammed bin Ali bin Mohammed Al Mannai, Minister of Communications and Information Technology, bringing together over 1,500 industry leaders, developers, and IT professionals. The Summit marked two years of the Google Cloud Doha region empowering local innovation and explored the latest advancements in AI, data analytics, and cloud technologies. The event featured significant discussions and announcements on strategic collaborations.

The Google Cloud Summit Doha showcased the vibrant tech ecosystem in Qatar and Google Cloud’s commitment to it. Attendees experienced keynotes from Google Cloud executives and Qatari leaders, gained insights into transformative AI technologies like Gemini, AI Agents, and NotebookLM, heard compelling customer success stories, and participated in deep-dive sessions on data management and cybersecurity. This ongoing partnership with Qatar is poised to play a vital role in building a resilient, secure, and digitally advanced ecosystem in the nation.

Sami Al Shammari, Assistant Undersecretary for Infrastructure and Operations Affairs at MCIT, stated: "Our collaboration with Google Cloud has served as a key enabler in Qatar’s journey toward a knowledge-based, innovation-driven economy. Since the launch of the Doha cloud region two years ago, this collaboration has yielded tangible outcomes that directly support the objectives of Qatar National Vision 2030, particularly in enhancing digital infrastructure, delivering scalable and secure government services, and building a future-ready digital workforce."

The Summit also highlighted how a diverse range of leading Qatari organizations are leveraging Google Cloud for their transformation journeys. These include Al Jazeera Media Network (AJMN), Aspire, beIN MEDIA GROUP, Media City Qatar, Ministry of Endowment & Islamic Affairs (Awqaf), Ministry of Labour, Ooredoo Group, Ooredoo Qatar, Qatar Airways, Qatar Foundation Pre-University Education, Qatar Free Zones Authority (QFZA), Qatar Insurance Company (QIC), Snoonu, and University of Doha for Science and Technology (UDST), among many others who are driving innovation across various sectors.

TITAN DATA SOLUTIONS AND NEXSAN SIGN DISTRIBUTION AGREEMENT FOR MENA

This agreement will expand access to Nexsan’s high-performance, secure, and scalable storage infrastructure across the MENA region

Titan Data Solutions, a specialist distributor for server and storage solutions, has signed a distribution agreement with Nexsan, a global leader in enterprise-class storage solutions. The partnership will expand access to Nexsan’s high-performance, secure, and scalable storage infrastructure across the Middle East and North Africa (MENA), supporting the region’s rapid investment in AI and advanced computing technologies.

As governments and enterprises across MENA accelerate their digital transformation agendas, investment in artificial intelligence, machine learning, and high-performance computing is surging. According to a recent Deloitte report, businesses in the region face mounting pressure to adopt AI and ramp up investments while a critical disconnect exists between their ambitious goals and the availability of the necessary infrastructure.

“This agreement with Nexsan is a key milestone in our MENA strategy,” said Anand Chakravarthi, Regional Director, Titan Data Solutions. “Businesses across the region are eager to harness the power of AI and advanced computing, but success depends on a strong foundation, particularly when it comes to secure, reliable, and high-performance storage. That’s exactly where Nexsan delivers.”

Nexsan offers a broad portfolio of storage solutions designed to meet modern data challenges, including:

• High-capacity, high-throughput storage for large-scale, critical workloads

• Secure archiving for compliance, data sovereignty, and long-term retention

• Cost-effective, high-performance storage solutions for data centers and edge deployments

• Hybrid and unified storage that integrates seamlessly with existing infrastructure, enhanced by S3 edge caching for optimized data delivery and reduced cloud costs.

• Advanced storage for Digital Video Surveillance (DVS) and CCTV, delivering scalable, secure storage for critical video data

“We are thrilled to expand our partnership with Titan Data Solutions into the MENA region,” said Adrian Hedges, Regional Sales Director, Nexsan. “Nexsan’s technologies are designed for environments where rapid data growth, high-performance demands, and reliability are non-negotiable. By joining forces with Titan, we’re committed to bringing our technologies to more partners across the region, helping them bridge the infrastructure gap and unlock the full potential of their data.”

MANAGEENGINE ENHANCES UNIFIED PAM WITH NATIVE INTELLIGENCE AND ADVANCED AUTOMATION

The Company's Unified PAM Platform, PAM360, Now Offers AI-Governed Cloud Access Policies and Qntrl-Powered Task Automation for Identity-Centric Routines

ManageEngine, a division of Zoho Corporation and a leading provider of enterprise IT management solutions, announced that it has added AI-powered enhancements— featuring intelligent least privilege access and risk remediation policy recommendations—to its privileged access management platform, PAM360. A new privileged task automation module enabled by Qntrl, Zoho’s unified workflow orchestration platform, has also been introduced. Together, these newly added capabilities help enterprises automate enterprise-wide administrative routines, enforce least privilege at scale with intelligent, context-aware controls and reduce security risks through automated remediation.

AI-Governed Least Privilege Access

Traditional PAM models, which rely on static policies and manual processes, often operate without sufficient context. This can result in excessive permissions, entitlement drift, and configuration errors. To address these challenges, organizations should adopt an adaptive, context-driven approach to privileged access management—one that leverages AI to enable dynamic, risk-based access control. In fact, according to ManageEngine's 2024 Identity Security Insights, 68% of the respondents are looking for AI-driven improvements in risk-based access control.

"Today’s hybrid, multi-cloud environments have led to an explosion of human and non-human identities, creating complex access workflows and rampant privilege sprawl. To tackle this, organizations require dynamic policies that can intelligently enforce the principle of least privilege across their identity stack. With the AI-driven CIEM module in PAM360, IT security teams can now generate intelligent least privilege policies, proactively flag risky entitlements and automate remediation, helping enterprises close critical identity security gaps before they’re exploited," said Ramanathan Kannabiran, director of

product management at ManageEngine.

PAM360’s CIEM module now features AI-generated least privilege policies, automated remediation of shadow admin risks and real-time access and session summaries. These AI-driven capabilities help organizations proactively tackle access sprawl and misconfigurations in hybrid environments with minimal manual effort.

IFS LAUNCHES NEXUS BLACK

IFS Nexus Black to co-create solutions at the forefront of Industrial AI, rapidly delivering tangible business outcomes at scale

IFS, the leading provider of enterprise cloud and Industrial AI software, announced the launch of IFS Nexus Black, a strategic innovation program to expedite high-impact AI adoption for industrial organizations. Nexus Black provides a credible alternative to legacy software vendors by delivering bespoke solutions at pace with guaranteed industrial scalability and security.

Nexus Black combines advanced AI technologies, deep industrial context and a dedicated delivery team, partnering with customers to tackle bespoke, complex challenges in asset-intensive industries. Built on the foundation of IFS.ai, Nexus Black enables rapid development and deployment of AI capabilities to turn bold ideas into tangible outcomes in a matter of weeks.

The Nexus Black offering to customers comprises:

• Agile, sprint-based co-creation and prototyping. A proven co-development model that is safe, scalable and fast

• Structured four-phase model: Problem Definition; Proof of Value; Accelerated Development; Digital Continuity

• Access to dedicated AI engineers, domain experts, and solution architects, with deep expertise in industrial contexts and enterprise architecture

• Collaboration on agentic AI and contextual intelligence with industrial scalability

Nexus Black turns intelligence into applied impact thanks to IFS’s deep industry footprint and proximity to rich industrial asset data, combining context and AI to bring trusted contextual AI into live operations, quickly and securely.

“Too many businesses are stuck choosing between inflexible enterprise tools or niche AI vendors with no roadmap to scale. Nexus Black changes that,” said Mark Moffat, CEO of IFS. “Nexus Black is IFS’s commitment to rapid, high-impact AI innovation for leading industrial organizations. It combines the agility of a start-up with the industrial context, security and delivery strength IFS is known for. It’s how we help our customers leap ahead, not just catch up.”

Nexus Black enables customers to access capabilities ahead of general launch and directly engage in the creation process. Through a co-investment model, customers gain a fast-mover advantage in their industries and influence solutions that enhance their agility. Initial use cases include predictive maintenance, manufacturing scheduling optimization, AI copilots for service and sales, and intelligent automation for finance and supply chain.

REDINGTON OFFERS ILLUMIO SEGMENTATION TO STOP BREACHES

Illumio segmentation proactively protects critical assets, contains attacks, and enhances cyber resilience

Redington, a leading technology aggregator and innovation powerhouse across emerging markets, announced a new distribution partnership with Illumio, the breach containment company. The partnership will see Redington work with Illumio to evolve its channel strategy, drive partner enablement, and accelerate go-to-market momentum for Illumio Segmentation, helping organizations across the region reduce risk, contain attacks, and stop cyberattacks from turning into cyber disasters.

Despite record spending on cybersecurity, the volume, cost, and impact of cyberattacks continue to rise. Ransomware and other threats bypass perimeter defenses, with attackers exploiting vulnerabilities in hybrid and multi-cloud environments to move across networks and reach critical data, assets, and infrastructure.

Illumio Segmentation proactively protects critical assets, contains attacks, and enhances cyber resilience. By applying the principles of Zero Trust to stop lateral movement across multi-cloud and hybrid infrastructure, it enables organizations to protect critical resources and prevent the spread of cyberattacks.

“Our partnership with Illumio reflects Redington’s continued commitment to bringing the most advanced and relevant cybersecurity solutions to our partners and customers,” said Dharshana Kosgalage, Executive Vice President, Technology Solutions Group, Redington. “In today’s threat landscape, Zero Trust Segmentation is no longer optional—it’s essential. Through our extensive channel ecosystem, we will accelerate access to this critical technology, enabling partners to drive real cyber resilience for their customers.”

Recognized as a leader in The Forrester Wave: Microsegmentation Solutions, Q3 2024 report, Illumio Segmentation is proven to strengthen cyber resilience and reduce the impact of attacks. A Forrester Total Economic Impact™ report shows Illumio reduces the blast radius of attacks by 66%, saving $1.8 million in decreased risk exposure.

“Breaches today are inevitable, but disasters don’t have to be,” said Sam Tayan, Director of Sales for Middle East, Turkey and Africa (META) at Illumio. “Illumio Segmentation provides a simple and effective way to contain threats, minimize risk, and build resilience, so that organizations can thrive without fear of cyber disasters. We’re thrilled to partner with Redington to jointly deliver value to customers and empower them to stay agile in the face of today’s cyberthreats.”

AI FALLING SHORT IN CUSTOMER SERVICE, FINDS SERVICENOW’S SURVEY

54% of UAE consumers say failure to understand emotional cues is more of an AI than human trait, pushing AI to evolve beyond automation

ServiceNow has released the ServiceNow Consumer Voice Report 2025. Now in its third year, the report, which surveyed 17,000 adults across 13 countries in EMEA — including 1000 in the UAE — explores consumer expectations when it comes to AI’s role in customer experience (CX).

Need to put the EQ in AI

Despite rapid advancements in AI and its widespread use in customer service, UAE consumers overwhelmingly (at least 68%) prefer to interact with people for customer support. Based on the findings of the research, this can be attributed to AI’s perceived lack of emotional intelligence (EQ). More than half (54%) of UAE consumers say that failing to understand emotional cues is more of an AI trait than a human one; 51% feel that a limited understanding of context is more likely to be AI; and an equal number (51%) say misunderstanding slang, idioms and informal language is more likely AI. Meanwhile, nearly two thirds (64%) of UAE consumers feel repetitive or scripted responses are more of an AI trait.

“The key takeaway for business leaders is that AI can no longer be just another customer service tool – it has to be an essential partner to the human agent. The future of customer relationships now lies at the intersection of AI and emotional intelligence (EQ). Consumers no longer want AI that just gets the job done; they want AI that understands them,” commented William O’Neill, Area VP, UAE at ServiceNow.

High Stakes, Low Trust

The report also highlights a clear AI trust gap, particularly for urgent or complex requests. UAE consumers embrace AI for speed and convenience in low-risk/routine tasks — 23% of UAE consumers trust an AI chatbot for scheduling a car service appointment and 24% say they are happy to use an AI chatbot for tracking a lost or delayed package. However, when it comes to more sensitive or urgent tasks, consumer confidence in AI drops. Only 13% would trust AI to dispute a suspicious transaction on their bank account with 43% instead preferring to handle this in-person. Similarly, when it comes to troubleshooting a home internet issue, only 20% of consumers across the Emirates are happy to rely on an AI chatbot, with 50% preferring to troubleshoot the issue with someone on the phone.

Humans and AI

For all the frustrations with AI — almost half (47%) of UAE consumers say their customer service interactions with AI chatbots have not met their expectations — the research does suggest that consumers consider AI as crucial for organizations looking to deliver exceptional customer experiences.

For one, in addition to seamless service (90%), quick response times (89%) and accurate information (88%), more than three-quarters (76%) of UAE consumers expect the organizations they deal with to provide a good chatbot service. But perhaps more interestingly, 85% of consumers across the Emirates expect the option of self-service problem solving, which indicates the need for organizations to integrate AI insights and data analysis into service channels to anticipate customer needs before they arise.

“While AI in customer service is currently falling short of consumer expectations, it is not failing. Rather, it is evolving. There is an opportunity for businesses to refine AI by empowering it with the right information, making it more adaptive, emotionally aware, and seamlessly integrated with human agents to take/recommend the next best action and deliver unparalleled customer relationships,” added O’Neill. “Consumers do not want less AI – they want AI that works smarter. By understanding the biggest pain points, companies can make AI a trusted ally rather than a frustrating barrier.”

GBM ENHANCES CYBERSECURITY AT PRIME HEALTH

The collaboration involves 24x7 threat monitoring, detection and response, technology enablement, advanced incident response, healthcare-specific threat intelligence, brand protection, and security posture improvement

Gulf Business Machines (GBM), a leading end-to-end digital solutions provider, has entered a strategic partnership with PRIME Health to deliver a comprehensive managed detection and response (MDR) service designed to enhance cybersecurity across the healthcare provider’s ecosystem.

Through this collaboration, GBM will deliver around-the-clock threat monitoring, rapid detection, and swift incident response to protect PRIME Health’s digital environment. At the core of the collaboration are fully managed Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) platforms, empowering security teams with automated workflows, real-time insights, and a unified dashboard for enhanced visibility and faster decision-making.

In addition, PRIME Health will leverage GBM’s expert Digital Forensics and Incident Response (DFIR) team to rapidly investigate and contain threats, while gaining access to contextual, healthcare-specific threat intelligence. Furthermore, the partnership will help safeguard the healthcare provider’s digital reputation by proactively monitoring for misuse, impersonation, and threats targeting the brand.

Beyond day-to-day operations, GBM will also provide ongoing support to strengthen PRIME Health’s cybersecurity posture, ensuring continuous alignment with best practices, regulatory compliance, and evolving threat landscapes.

The UAE’s digital health market, driven by a strong commitment to technological advancement in healthcare, is projected to reach US$2.65 billion by 2030. The collaboration taps into this opportunity by proactively identifying and mitigating cyber threats targeting patient data and medical devices, thereby contributing to safeguarding the UAE’s critical healthcare ecosystem.

Jaleel Rahiman, IT Director at PRIME Health, said, “At PRIME Health, we recognize that cybersecurity is essential for ensuring operational resilience and safeguarding patient data, and preserving the trust of every individual who walks through our doors. As part of our continued pursuit of excellence, we were looking for a globally recognized provider that could deliver top-grade cybersecurity while securing our complex environment end to end so that our internal teams are free to focus on innovation and care delivery. This initiative reflects our promise of ‘Personalised Care, Personally’—where every patient interaction is built on a foundation of trust and data confidentiality. With its proven reputation, strong partnerships, and local presence in the UAE, GBM is our trusted partner in supporting our security goals and compliance roadmap.”

Ossama El Samadoni, General Manager of GBM Dubai, said, “Our MDR service will enable PRIME Health to ensure uninterrupted care delivery while staying ahead of cyber threats. Our multi-layered collaboration reflects our shared commitment to building a secure and resilient healthcare environment. We are delighted to help PRIME Health demonstrate leadership in secure digital transformation, build patient trust by proactively defending sensitive health data, and strengthen their reputation as a security-conscious, future-ready healthcare provider.”

PRIME Health is one of the UAE’s most trusted healthcare providers, known for its commitment to medical excellence and patient-centric care. With a portfolio that includes Prime Hospital, Prime Medical Centers, Premier Diagnostic Center, Medi Prime Pharmacies, and specialized services such as Home Care and Corporate Medical Services, the company plays a vital role in the UAE’s healthcare landscape. The group’s leadership continues to drive innovation across clinical, operational, and digital domains.

The increased use of technology in the Middle East and North Africa has created a rich playground for cybercrimes. With sophisticated and powerful cyberattacks compromising businesses at an unprecedented rate, redesigning security for the digital-first world has become a key priority for organizations, with revenue in the region’s cybersecurity market projected to reach US$4.63 billion this year.


NUTANIX AND PURE STORAGE PARTNER TO DELIVER NEW INTEGRATED SOLUTION FOR MISSION-CRITICAL WORKLOADS

Combined benefits of Nutanix Cloud Platform with Pure Storage FlashArray to provide flexibility, security, and scale with the high-performance needed for the most demanding environments

Nutanix, a leader in hybrid multicloud computing, and Pure Storage, an IT pioneer that delivers the world’s most advanced data storage platform and services, announced a partnership aimed at providing a deeply integrated solution that will allow customers to seamlessly deploy and manage virtual workloads on a scalable modern infrastructure.

This integrated solution comes at a pivotal time for customers as the virtualization market evolution is top of mind. IT leaders are focused on helping their organizations maintain pace with the rapidly changing technology landscape while simultaneously implementing greater operational effectiveness. Gartner predicts that “by 2028, cost concerns will drive 70% of enterprise-scale VMware customers to migrate 50% of their virtual workloads.”

With this collaboration, the Nutanix Cloud Infrastructure solution, powered by the Nutanix AHV hypervisor along with Nutanix Flow virtual networking and security, will integrate with Pure Storage FlashArray over NVMe/TCP to deliver a customer experience uniquely designed for high-demand data workloads, including AI.

Key Benefits:

• Scalable, Modern Infrastructure - This partnership will provide customers with access to high-performance, flexible, and efficient full-stack infrastructure to power their most business-critical workloads through the simplicity and agility of Nutanix Cloud Infrastructure for virtual compute, and the consistency, scalability, and performance density of Pure Storage all-flash systems.

• Built-in Cyber Resilience - Customers will be able to strengthen their end-to-end cyber-resilience posture by leveraging native Nutanix capabilities, such as Flow micro-segmentation and disaster recovery orchestration, alongside Pure Storage FlashArray capabilities, such as data-at-rest encryption and SafeMode.

• Freedom of Choice - Customers want agility and control of their mission-critical environments. The combination of Nutanix and Pure Storage will offer a resilient and easy-to-use alternative to existing market options.

“We’re thrilled to see Nutanix and Pure Storage joining forces. Their collective expertise, innovative technologies, and shared commitment to reliability and performance will deliver a compelling solution that directly addresses critical needs in the market,” said Anthony Jackman, Chief Innovation Officer at Expedient. “Expedient is proud to be an early design partner, collaborating closely with both companies to ensure this solution elevates the quality of service we deliver, ultimately enhancing the value and experience for our clients nationwide.”

“This new solution will help Nutanix and Pure Storage reach more customers together and help them better manage and modernize their mission-critical applications,” said Tarkan Maner, Chief Commercial Officer at Nutanix. “Our integrated solution will be ideally suited for companies with storage-rich environments looking for choices in modernization.”

“With more than 13,500 global customers, I’m hearing more than ever that organizations of all shapes and sizes have a growing need for efficient, flexible, and high-performance solutions that can also scale to support their most critical, data-intensive applications,” said Maciej Kranz, General Manager, Enterprise at Pure Storage. “Nutanix and Pure Storage are both known for pushing the boundaries of traditional infrastructure, driving innovation, and enabling unmatched agility. With this easy-to-manage solution, our joint customers will have the power of a virtual infrastructure that’s truly built for change.”

This solution will be supported on servers from major hardware partners that currently support Pure Storage FlashArray, including Cisco, Dell, HPE, Lenovo and Supermicro, for both existing and new deployments.

Additionally, Cisco and Pure Storage are expanding their partnership of more than 60 FlashStack validated designs to include Nutanix in the portfolio - further simplifying full-stack delivery.

The solution is currently under development and is expected to be in early access by the summer of 2025 and generally available at the end of this calendar year through both Nutanix and Pure Storage channel partners.


ENGINEERING THE INTELLIGENT ENTERPRISE

Omnix is helping enterprises unlock value from AI, data, and digital twin ecosystems across verticals.

With a strong legacy of over 30 years of operations, Omnix offers proven capabilities across both engineering and ICT, positioning itself as a strategic enabler of transformation and bringing a pragmatic, grounded approach to the complex landscape of enterprise digitalization. This has earned the company the trust of a growing client base that includes several of the region’s leading enterprise players. And now Omnix is sharpening its focus on how enterprises deploy AI, implement digital twins, and manage data across their digital transformation journeys.

Walid Gomaa, CEO of Omnix International, says, “What we're really focusing on right now is staying in tune with the evolving market landscape—keeping up with what’s happening out there, particularly in the realm of AI. While AI has been a dominant theme of discussion for some time now, if we look at the ground reality in terms of adoption and implementation, it’s still not where we would like it to be.”

He mentions there are several reasons for this. One is that many organizations still approach AI with a trial-and-error mindset, which, while not wrong, often lacks a clear path or outcome. The business use cases are frequently unclear, and ROI is rarely well-calculated or articulated. These gaps end up delaying or derailing the actual execution of AI systems and solutions.

“To address these challenges, we’ve been working on a few key areas. The first is from a structural perspective—how we approach AI in a way that’s practical and outcome-focused. We’ve realized that customers need help in the early stages of the AI cycle. So we work closely with them—right from the beginning—to define use cases. This involves engaging both IT teams and business users to bridge the common gap that exists between them. We sit with them, collaborate on real, practical use cases, and ensure the business side is involved from the outset.”

Often in traditional IT-led implementations, the IT team builds a solution and hands it over to the business, expecting adoption. But because the business users weren’t part of the process, the feedback is often negative.

“We are actively breaking that cycle. We’ve created a specific service offering to do just that, and we call it ‘AI Monetization Service’. It’s focused on clearly defining the use case, establishing a strong business case, and aligning the entire effort across IT and business functions,” says Walid.

An example would be implementing AI-led Computer Vision. Omnix has implemented several use cases utilizing computer vision, which is rapidly becoming one of the most widely used technologies in the AI space.

“We have worked on face recognition, people counting, occupancy management, and predictive analytics in specific industry scenarios. These use cases are real, tangible, and are already making an impact,” Walid adds.

Another major area of focus for Omnix is conversational AI, which is becoming increasingly important for enhancing interactions between employees and their organizations, citizens and government, and customers and enterprises.

“Improving the user experience is a key goal here. We’re not just talking about basic chatbots. We’re looking at full conversational ecosystems that incorporate natural language processing, chatbots, and LLM integration to ensure responses are accurate, contextual, and aligned with enterprise data. The conversational AI push is about more than just convenience. It’s about meaningful engagement, and we see significant potential both internally for employee support systems and externally for customer interactions,” says Walid.

Alongside AI, data is a priority focus at Omnix, since structured, clean, and accurate data is a prerequisite for successful AI systems.

“We have taken a firm position on helping customers across the entire data lifecycle. This includes everything from data definition and cataloging, all the way to ETL (Extract, Transform, Load) processes, building data lakes, and progressing into the analytics phase. Without proper data management at the foundation, any AI initiative is likely to fail or underperform. That’s why our efforts in this area are comprehensive. We don’t just clean up data but help build the infrastructure to manage it going forward. It’s a very intentional effort to strengthen the data backbone of every organization we work with,” elaborates Walid.

The third key area and one that sets Omnix apart is the company’s strong engineering heritage.

“At Omnix, we carry two types of DNA. One is digital, and the other is engineering. And we still maintain a strong hold in engineering services. That includes everything from training to building information management (BIM) services to product lifecycle management, and project delivery solutions.”

Now, Omnix has extended this even further with 3D scanning and asset digitalization capabilities, which allow it to scan and digitize physical environments, from university campuses to industrial facilities, and convert them into interactive, data-rich models. This is the foundation for the company’s digital twin offerings.

“With 3D scanning, we can not only visualize the asset but also tag and map individual components including furniture, equipment, infrastructure. Then we attach IoT devices to these assets. That’s when the digital twin really comes to life. We can track real-time data, control environments remotely, perform predictive maintenance, and simulate future states. It’s a powerful way to deliver value, especially in sectors like oil and gas, education, and manufacturing.”

Digital twins - use cases and delivery processes

Omnix has been taking a phased approach to digital twin implementations as the company sees it as a process with multiple stages. For instance, Omnix executed a project for a university in Abu Dhabi. The goal was to digitally map the entire campus. Omnix started by offering a complete scan with 3D laser scanners and drones and captured the full physical environment, including buildings, interiors, and assets.

“Once the scanning was complete, we took all that input, what we call point cloud data and began transforming it into a 3D digital model. We use platforms like Unity for this stage, which allows us to build interactive visualizations. The final output is a walkable 3D representation of the entire site. You can do virtual walkthroughs, examine specific areas, and interact with objects,” says Walid.

But Omnix didn’t stop at visual modelling; it extended the process to asset tagging.

“Because we were scanning everything in detail, we also conducted asset tagging. That means every chair, table, screen, or device we captured is logged as an individual asset within the system. That’s what we refer to as digitization—taking the physical and converting it into digital with intelligence attached.”

From there, the next logical step is IoT integration. Once assets are tagged, IoT sensors and devices are attached to them, enabling live streaming of information such as energy consumption, equipment status, or environmental metrics (temperature, humidity, occupancy). This makes it possible to control and monitor systems remotely; operations like switching lights on or off, tracking environmental conditions, or scheduling maintenance can be executed from a centralized system.
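The tag-then-instrument flow described here can be illustrated with a minimal sketch. Everything below is hypothetical: the asset tag, metric names, and registry methods are stand-ins for whatever digital twin platform a real deployment would use.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    tag: str                                   # asset ID from the twin catalog
    readings: dict = field(default_factory=dict)
    powered_on: bool = True

class TwinRegistry:
    """Central point that receives telemetry and issues remote commands."""

    def __init__(self):
        self.assets = {}

    def register(self, asset):
        self.assets[asset.tag] = asset

    def ingest(self, tag, metric, value):
        """Record a live sensor reading against a tagged asset."""
        self.assets[tag].readings[metric] = value

    def switch(self, tag, on):
        """Remote control, e.g. lights on or off from the central dashboard."""
        self.assets[tag].powered_on = on

registry = TwinRegistry()
registry.register(Asset("light-07"))
registry.ingest("light-07", "temperature_c", 31.5)
registry.switch("light-07", on=False)
```

The point of the sketch is the shape of the system: every physical object is an addressable record, telemetry flows in against it, and control flows back out through the same registry.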

“This is what brings the digital twin to life. It’s not just about having a visual replica of your environment; it’s about having a functional, responsive digital interface tied to your physical operations. We’ve applied this model in oil and gas, in petrochemical facilities, and other industrial settings, where the stakes are high and the environments complex,” says Walid.


In such deployments, Omnix uses hardware like physical scanners, drones, and laser devices to capture spatial and environmental data. That raw data is then processed into point clouds, which are further modeled into 3D environments using software like Unity. From there, the system becomes interactive, often accessed through dashboard views.

“These dashboards are customized depending on the use case. For a manufacturing plant, it might show machine temperature or failure warnings. For a building, it could reflect energy usage or air quality. And once you’ve got enough historical data coming in, you can go a step further into predictive analytics. For example, if three specific sensors trigger together consistently before a fault, we can pre-emptively alert the team that a breakdown may occur within 24 hours or 3 days, depending on the pattern,” says Walid.
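The sensor-pattern alerting Walid describes can be sketched as a simple co-occurrence check. This is an illustration only; the sensor names, the three-sensor pattern, and the consecutive-batch threshold are made-up stand-ins for what a real system would learn from historical telemetry.

```python
# Raise an early warning when a known set of sensors trips together in
# several consecutive readings, the situation described as preceding a fault.
FAULT_PATTERN = {"vibration_high", "temp_high", "pressure_drop"}

def early_warning(trigger_batches, required_hits=3):
    """Return True when the full fault pattern co-occurs in
    `required_hits` consecutive batches of triggered sensors."""
    streak = 0
    for batch in trigger_batches:
        if FAULT_PATTERN <= set(batch):   # all three sensors fired together
            streak += 1
            if streak >= required_hits:
                return True
        else:
            streak = 0                    # pattern broken, reset the count
    return False
```

A production deployment would derive both the pattern and the lead time ("within 24 hours or 3 days") from historical data rather than hard-coding them, but the rule structure is the same.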

Access to this information is managed securely. Omnix works with clients to define user roles and privileges, from data scientists to general staff, and implement access controls. This ensures the right people have the right access, aligned with enterprise policies. Omnix is thus offering the infrastructure layer, both hardware and software, to support real-time, intelligent, and secure data-driven operations.

“From defining the use case, to building the system, to implementing it in the live environment, it’s a collaborative journey. And this is a core part of our digital transformation offering. Some clients define transformation only as process digitization. But we go beyond that. For us, it includes data readiness, digital twins, AI use cases, and conversational interfaces, all integrated based on the organization’s priorities,” says Walid.

Defining AI use cases via a consultative approach

AI is increasingly seen as the entry point for many conversations around digital transformation. Everyone wants AI, but the challenge is to find relevant use cases aligned with the business processes of the organization.

“When we engage with clients, we emphasize: AI by itself is not the solution. It's a tool. The real value comes from how well it is aligned with the organization's business objectives. We handle these conversations by breaking AI down into practical use cases. For some clients, that might mean starting with conversational AI to improve customer service. For others, it could be computer vision to manage footfall or safety compliance. In some cases, it's predictive analytics to anticipate operational failures. The idea is to identify the use case that will have the highest immediate impact on the organization. Interestingly, many clients, especially in the government, are now under pressure to implement AI. Some come to us with a mandate that they need to implement five use cases by year-end. But when we ask if they have defined those use cases, often the answer is no.”

Omnix, therefore, guides these customers through a gap analysis, helping them define and prioritize the most relevant use cases, and then proceed from there.

“It's not about implementing everything at once—that's not realistic. What we recommend is: start with two or three use cases, build the data models, execute, learn, and then scale. Because every implementation will require adjustments—whether it's internal processes, team reskilling, or new hires like data scientists and engineers.”

Sometimes, what the customer thought was an AI problem turns out to be a workflow or automation issue. In such instances, Omnix solves the problem with a low-code automation platform; that is still transformation, but it doesn’t have to involve an AI model. In some cases, Omnix enhances the use case with NLP capabilities, so there’s still an AI element involved.

“But the point is: it's about solving the real problem, not forcing AI where it’s not needed,” Walid adds.

He also discusses the approach to Agentic AI.

“People often ask how it ties into conversational AI, and the answer is—it’s the next layer. Conversational AI is essentially the front-end interface, where users interact via chat, voice, or app. Behind the scenes, agentic AI kicks in, orchestrating multiple tasks simultaneously. Think of it this way: when a user makes a request, an agentic system deploys multiple specialized agents. One might fetch data from a database. Another could trigger a backend workflow. A third might perform reasoning or offer recommendations based on patterns. These agents collaborate in real time, but the user only experiences a seamless front-end interaction. That’s where real intelligence and value generation happens. It’s not just about responding to a question. It’s about reasoning, synthesizing, and even anticipating— pulling from multiple systems, understanding context, and delivering outcomes.”
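The orchestration pattern Walid outlines can be sketched in a few lines. All the agent names and their canned behaviours below are hypothetical; a production agentic system would route to real databases, workflow engines, and LLM-backed reasoners.

```python
# One user request fans out to specialised agents; the user only sees
# the synthesised answer, never the agents themselves.

def data_agent(request):
    """Stand-in for an agent that queries a database."""
    return {"records": ["invoice-123"]}

def workflow_agent(request):
    """Stand-in for an agent that triggers a backend workflow."""
    return {"triggered": "refund-approval"}

def reasoning_agent(request, context):
    """Stand-in for an agent that reasons over what the others found."""
    return f"Recommend expediting {context['records'][0]}"

def orchestrate(request):
    context = {}
    context.update(data_agent(request))       # fetch
    context.update(workflow_agent(request))   # act
    context["advice"] = reasoning_agent(request, context)  # reason
    return context["advice"]                  # single seamless reply
```

The design point is the shared context: each agent contributes a piece, and the orchestrator synthesises them into one front-end response.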

H.O.T systems and AI at the edge

One of the most impactful innovations Omnix has developed over the years is H.O.T. systems, short for Hardware Optimized Technology.

“These were born out of a very real pain point we observed among engineers and designers. When using high-performance visualization or design software, they were spending an excessive amount of time simply waiting, waiting for models to open, waiting for renders to complete, waiting for data to save. So we asked ourselves: How can we optimize the hardware path to boost performance? It’s not just about throwing in a powerful GPU or the latest CPU. It’s about understanding how the software interacts with memory, disk, bus speeds, and graphics rendering and streamlining that entire workflow. That’s what our HOT systems do,” says Walid.

Omnix has created its own intellectual property around performance optimization, enabling users to experience up to 30–35% better performance compared to standard machines with similar specs. The solution has already been deployed at more than 500 client sites across the UAE and other Gulf countries, where, Walid says, these systems consistently outperform traditional configurations.

The emphasis is on ensuring better productivity.

Walid elaborates, “If you’re wasting 20% of your time waiting for your machine to respond, then you’re losing 20% of your salary—or rather, the organization is. When we present this to CFOs or operations heads, the value becomes very clear: investing in better systems equals better ROI on human capital.”
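The CFO argument here is simple arithmetic, sketched below with illustrative figures; the salary, wait fraction, and hardware premium are examples, not Omnix data.

```python
# Back-of-envelope version of the productivity argument: idle wait time
# translates directly into wasted salary cost. All figures are illustrative.

def wasted_cost(annual_salary, wait_fraction):
    """Salary spent on time lost waiting for the machine to respond."""
    return annual_salary * wait_fraction

def payback_ok(annual_salary, wait_fraction, waste_reduction, machine_premium):
    """Does one year of recovered wait time cover the pricier workstation?"""
    recovered = wasted_cost(annual_salary, wait_fraction) * waste_reduction
    return recovered >= machine_premium

# An engineer on 200,000/yr losing 20% to waits wastes 40,000/yr; cutting
# that waste by 30% recovers 12,000, comfortably covering a 10,000
# hardware premium within the year.
```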

Omnix is now adapting these HOT systems to support edge AI workloads. Walid says this is where the market is headed: more and more AI processing will happen at the edge, on laptops, mobile devices, even field-deployed sensors. To do this effectively, those endpoints need to be powerful and optimized for local AI execution. That is the next focus for Omnix.

Omnix is also exploring how these systems can support LLMs and agentic models locally, without relying on the cloud for every transaction.

“This makes AI more accessible, responsive, and secure—especially for sectors like defense, healthcare, and manufacturing where data sovereignty is critical,” Walid adds.

Omnix offers both on-premises and cloud deployments. It has partnered with HyperFusion, which allows it to offer GPU-based development and test environments in the cloud.

He elaborates, “Let’s say a customer wants to experiment with a use case—they’re not ready to commit to buying infrastructure yet. We create a GPU-powered sandbox environment for development and testing. Once the use case is validated, the client has two choices: either they move to on-prem hardware for full deployment, or they continue using the hosted environment in the cloud. It’s flexible and scalable. And this isn’t limited to AI. These environments can support simulation workloads, 3D rendering, digital twin environments, and even high-performance computing (HPC).”

What really makes Omnix unique is the company’s dual DNA. On one hand, it has deep roots in engineering and AEC (architecture, engineering, construction): it has been a long-time partner of Autodesk and continues to support the full ecosystem around BIM, digital twins, and wearables.

On the other hand, Omnix also has an equally strong digital technology capability.

“We’re known for our work in low-code/no-code platforms, application modernization, data engineering, and smart infrastructure. Our teams execute on both fronts, and more importantly, they collaborate across disciplines,” adds Walid.

In an era of widespread technological convergence, this twin legacy gives Omnix ample advantages. For instance, a customer working on a construction project might need a custom application, or a client deploying a digital twin might need data integration and dashboarding. Omnix, with its engineering and digital expertise, is well placed to address such requirements through a unified approach.

What’s next for Omnix

Omnix is looking at further growth not just within the UAE, but across the broader GCC region.

“We’re placing a significant strategic focus on Saudi Arabia right now. The momentum there is incredible, and we’re expanding our salesforce and technical teams on the ground to meet growing demand. But it doesn’t stop there. We’re also actively pursuing expansion in Oman, Kuwait, and Qatar,” says Walid.

While the UAE and Saudi Arabia have been at the forefront of adopting new technologies, every country in the region is launching its own vision plans with clear digital strategies backed by mandates, funding, and national-level objectives.

“We’ve started to see Kuwait, Qatar, and Oman take bold steps forward, setting their own transformation agendas. And that creates a massive opportunity for organizations like us who are equipped to execute. The dynamics have shifted. It’s no longer about trying to convince clients to digitize. In many cases, the governments themselves are mandating transformation. This changes the nature of the conversation. Clients now come to us with clear directives: ‘We have to implement this by year-end.’ There’s less resistance. The budget is there. The timelines are defined. What they need is a partner who can deliver across use cases and domains,” says Walid.

Omnix today has a core workforce of about 250 employees, and including their outsourced and managed services, the company is managing over 1,300 professionals across the region. Omnix has physical offices in five countries including the UAE, Saudi Arabia, Qatar, Oman, and Kuwait.

However, the goal isn’t just to grow headcount or geographic footprint at Omnix. The focus is to build a strong, cross-functional team, people who understand engineering, digital transformation, AI, and smart infrastructure, and who can come together to deliver holistic outcomes for their customers. And this is driving Omnix ahead.

RIGHT-SIZING ENTERPRISE GENAI

As generative AI gains momentum across industries, enterprise leaders are shifting focus from experimentation to scalable, compliant deployment.

As generative AI (GenAI) matures, the enterprise world is rapidly transitioning from cautious exploration to confident deployment. Across industries and regions, business leaders are recognizing that GenAI is not just a technological experiment—it is an enabler of strategic value, competitive differentiation, and operational efficiency. In the Middle East in particular, a wave of national AI strategies and private-sector investments is accelerating this shift.

Aligning Capability with Context

For enterprises to realize value from GenAI, the starting point is right-sizing—adapting AI capabilities to meet organizational needs without overburdening resources or compromising compliance.

“Right-sizing GenAI is the process of aligning AI capabilities with the unique strategic, operational, and risk context of the enterprise,” says Dr. Ali Katkahda, Group CIO of Depa United Group. “It depends largely on the organization's digital maturity, regulatory environment, leadership priorities, and industry dynamics.” He emphasizes that in sectors where IT has traditionally played a support role, GenAI offers a rare opportunity to reposition it as a driver of value, especially when quick-win use cases such as copilots for employee inquiries or inventory lookups can create fast, tangible impact.

Sudhir Kumaran, Director of IT at Dubai Aviation City Corporation, offers a complementary perspective. He says, “The objective is to strike a balance between performance, cost-efficiency, risk management, and regulatory compliance. From a strategic standpoint, it involves matching the model’s capabilities to specific use cases—avoiding the inefficiency of using large-scale models for simple tasks, or the inaccuracy that can come from underpowered models for complex problems.”

The consensus seems to be that right-sizing is an ongoing process, requiring continuous optimization of latency, cost, and operational overhead as business needs evolve.

Foundation Model or Fine-Tuned LLM?

When it comes to choosing between general-purpose foundation models and fine-tuned large language models (LLMs), the decision has to be made in the context of the company’s requirements. “The decision hinges on business goals, risk tolerance, and regulatory obligations,” says Dr. Katkahda. “General-purpose models like ChatGPT or Gemini offer speed and flexibility, but they are not always suitable for regulated industries or data-sensitive use cases.”

Kumaran adds, “Foundation models are great for broad, multimodal tasks and fast deployment scenarios, especially where time-to-market matters. But fine-tuned models excel in use cases that require domain-specific accuracy, data governance, and compliance, such as legal analysis, healthcare diagnostics, or financial operations.”

While foundation models offer convenience through API access, they become cost-intensive at scale. Fine-tuned models, although resource-intensive initially, often offer better long-term value for repetitive, high-precision tasks. The decision ultimately depends on the complexity of the use case, the sensitivity of the data, and the need for internal control.
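The cost crossover between pay-per-use APIs and self-hosted fine-tuned models can be illustrated with a back-of-the-envelope sketch. All figures below are hypothetical assumptions for illustration, not vendor pricing:

```python
def monthly_cost_api(requests_per_month, tokens_per_request, price_per_1k_tokens):
    """Pay-per-use API: cost scales linearly with volume."""
    return requests_per_month * tokens_per_request / 1000 * price_per_1k_tokens

def monthly_cost_self_hosted(fixed_infra_cost, requests_per_month, marginal_cost_per_request):
    """Fine-tuned, self-hosted model: high fixed cost, low marginal cost."""
    return fixed_infra_cost + requests_per_month * marginal_cost_per_request

# Hypothetical figures: $0.01 per 1K tokens, 500 tokens per request,
# $4,000/month for GPU hosting, $0.0002 marginal cost per request.
for volume in (100_000, 1_000_000, 5_000_000):
    api = monthly_cost_api(volume, 500, 0.01)
    hosted = monthly_cost_self_hosted(4000, volume, 0.0002)
    print(f"{volume:>9} req/mo  API: ${api:>8,.0f}  Self-hosted: ${hosted:>8,.0f}")
```

Under these assumed numbers the API is cheaper at low volume, but the self-hosted model wins well before one million requests per month, which is the intuition behind "better long-term value for repetitive, high-precision tasks."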

Choosing the Right Model

Dr. Katkahda outlines four primary criteria for choosing between pre-trained and fine-tuned models: data sensitivity, domain specificity, latency and performance requirements, and cost-to-ROI balance. For highly sensitive applications or those involving proprietary data, internal fine-tuning on open-source models is the preferred route. It not only improves performance but also enhances control over data residency and auditability.

Kumaran concurs, emphasizing that enterprises dealing with niche domains, such as aviation compliance or financial risk, require custom-trained models to avoid generic and potentially misleading outputs. “Pre-trained models are effective for non-specialized, public-facing tasks. But when it comes to regulated or high-stakes domains, customization is not optional but essential.”

Deployment Challenges

Despite the excitement surrounding GenAI, its enterprise deployment comes with real challenges.

“Data readiness is a major hurdle,” says Dr. Katkahda. “Legacy systems, unstructured datasets, and fragmented sources impede model training and degrade output quality.” He also notes the shortage of GenAI-skilled professionals in the region, the ambiguity around AI liability and explainability, and the infrastructure costs associated with GPUs and orchestration platforms.

Kumaran points out the difficulties of integrating GenAI into legacy enterprise systems and workflows. “It’s not just a technical deployment; it’s a cultural and procedural shift. Internal resistance, lack of governance frameworks, and inconsistent data policies can all slow down GenAI adoption.”

For both leaders, one solution lies in strong change management and governance. Enterprises need clear internal policies around AI ethics, data classification, model monitoring, and responsible use, especially as regulatory pressure builds.

Data Sovereignty and Model Strategy

Regulatory requirements are not just constraints; they’re now shaping AI strategies from the ground up.

“In both the UAE and KSA, data sovereignty laws require critical data to remain within national borders,” says Dr. Katkahda. “This mandates localized model hosting, strict access controls, and often excludes reliance on global public APIs.” His organization’s GenAI strategy incorporates role-based access, encryption, audit trails, and jurisdiction-aware governance mechanisms.

Kumaran agrees, noting that for public entities or sectors governed by civil aviation or financial regulations, using external APIs introduces unacceptable risks. “In regulated environments, explainability is not a luxury; rather, it’s a necessity. We must be able to trace how a GenAI model arrived at a decision, especially if it influences operational outcomes.”

Regional Outlook

The Middle East is entering a high-growth phase for GenAI, and the pace of deployments is set to accelerate.

Dr. Katkahda says, “The UAE’s AI market is projected to grow from USD 3.47 billion in 2024 to USD 46.33 billion by 2030. We’ve seen a clear shift from experimentation to deployment across sectors.”

Kumaran concurs, and he adds, “Countries like the UAE and Saudi Arabia are at the forefront of AI adoption. With strong government backing, public-private partnerships, and focused upskilling programs, the pace of GenAI implementation is accelerating sharply.”

This optimism is supported by significant infrastructure developments, including a forecasted 6+ GW of datacenter capacity in the region by 2030, and a 344% year-over-year increase in GenAI course enrollments. Industries such as media, healthcare, manufacturing, and banking are leading adoption, with use cases ranging from cybersecurity to predictive maintenance and customer service automation.

Enterprise GenAI is thus no longer experimental—it’s becoming operational.

CIOs are at the forefront of this shift, navigating a complex landscape of performance optimization, regulatory compliance, and organizational change. Organizations looking to kickstart their GenAI journeys need to start with value-aligned use cases, invest in governance and skills, and build AI strategies that are both scalable and sovereign. In summary, GenAI is about building smarter, more adaptive enterprises for the future.

Dr. Ali Katkahda Group CIO, Depa United Group

DATA STREAMING IN MODERN ENTERPRISES

As enterprises shift toward real-time intelligence, data streaming is emerging as a core enabler of agility, AI, and unified data management. Shaun Clowes, Chief Product Officer of Confluent explains how the company is redefining data architecture for the AI era with innovations like Flink, Tableflow, and a governance-first approach.

How do you define the total addressable market for data streaming, and how widespread is its adoption?

We estimate the overall addressable market for streaming platforms at around $100 billion. Over 150,000 organizations are using Apache Kafka in some form, and Confluent serves more than 6,000 enterprise customers globally. These organizations use data streaming not just to transport data but to unify and simplify their data architecture, replacing a range of fragmented tools—from data lakes and warehouses to event brokers and integration software.

Can you elaborate on the origin story of Confluent and Kafka?

Kafka was created at LinkedIn about 13 years ago to solve the challenge of real-time data synchronization across multiple systems. The company needed a way to build AI and ML capabilities on top of a seamless data pipeline. Confluent was founded shortly after to commercialize and evolve Kafka, starting with on-prem deployments, then a cloud-native version, and now an elastic, scalable platform with built-in stream processing, governance, and integration features.

How does Confluent position itself in the competitive landscape?

Kafka was the first true streaming architecture at internet scale. While other streaming tools exist, including some cloud-native and open-source alternatives, Confluent remains the only truly integrated platform that works across on-prem, cloud, and multi-cloud environments. Most enterprise customers operate in hybrid setups, and Confluent uniquely supports that complexity with a unified, vendor-agnostic solution.

Can your solution integrate with existing, diverse enterprise architectures?

Absolutely. Our set of connectors, called "Connect," enables customers to plug Confluent into a wide range of applications, databases, and infrastructure. Streaming doesn’t replace all legacy systems at once; it’s additive. Organizations can start small and gradually expand their use of streaming to unlock more value from their data.
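Kafka Connect connectors are configured declaratively rather than in code. As a sketch, a source connector that streams rows from a relational database into Kafka might be defined like this (the connection details and names here are hypothetical, and the exact properties depend on the connector used):

```json
{
  "name": "orders-db-source",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:postgresql://db.example.com:5432/orders",
    "mode": "incrementing",
    "incrementing.column.name": "order_id",
    "topic.prefix": "pg-",
    "tasks.max": "1"
  }
}
```

Because integration is configuration-driven, teams can attach one system at a time and expand incrementally, which is what makes streaming additive rather than a rip-and-replace exercise.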

How does data streaming relate to broader data management efforts?

Confluent approaches data from the perspective of motion and meaning—not just storage. Unlike traditional storage-led data management solutions, we manage the shape, movement, and governance of data in real time. Our approach ensures that the quality and compliance of data is enforced from the moment it is created, minimizing downstream errors and regulatory risks.

Can you elaborate on governance as a priority?

Yes, governance is critical to making data reusable. We’ve layered governance tools on top of our streaming foundation to provide lineage, quality, metadata, and access controls. This ensures that data is not only real-time but also trusted and compliant—particularly important for regulated sectors.

Why is unifying stream and batch processing significant?

All data is created in real time, but legacy systems forced us into batch processing because real-time movement was too hard. This created inefficiencies and data inconsistencies. Our Tableflow feature bridges this gap by allowing real-time data to be queried like a table in a data lake, while still being used in real-time applications. It enables consistent, reliable data for both real-time and analytical use cases.
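The underlying idea, often called stream-table duality, can be illustrated in plain Python. This is a conceptual sketch, not Confluent's API: the same append-only event log serves real-time consumers and can also be materialized into a table for batch-style queries.

```python
# An append-only event log: each event applies a delta to an account balance.
events = [
    ("acct-1", +100),
    ("acct-2", +50),
    ("acct-1", -30),
]

def materialize(log):
    """Batch view: fold the full log into a table (latest state per key)."""
    table = {}
    for key, delta in log:
        table[key] = table.get(key, 0) + delta
    return table

def stream_process(log):
    """Streaming view: emit the updated state after every single event."""
    state = {}
    for key, delta in log:
        state[key] = state.get(key, 0) + delta
        yield key, state[key]

print(materialize(events))           # {'acct-1': 70, 'acct-2': 50}
print(list(stream_process(events)))  # [('acct-1', 100), ('acct-2', 50), ('acct-1', 70)]
```

Both views are derived from the same log, so real-time applications and analytical queries stay consistent by construction, which is the gap a feature like Tableflow is meant to close.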

What are some public sector or government use cases for your platform?

Use cases in government range from real-time customs data and import/export tracking to social security claim verification and citizen authorization. The common theme is the need for data to move instantly, reliably, and across siloed systems—something that streaming platforms are uniquely capable of addressing.

Is the platform customizable for specific verticals or processes?

We provide a generic data streaming platform that is used by developers and technical teams across industries. While we offer patterns and examples, our strength lies in flexibility. Our platform supports many of the world’s largest banks, airlines, and telecom companies, which build highly customized real-time experiences on top of our architecture.

What’s the significance of recent announcements related to Flink and Tableflow?

Flink allows users to run SQL-like queries on streaming data as if it were static. With snapshot queries, you can experiment with historical data and then deploy those queries into real-time production environments—enabling faster development cycles and better data-driven experiences. Tableflow, meanwhile, lets streaming data appear as tables in data lakes, ensuring that real-time and batch data are aligned.

Are these features immediately available to customers?

Yes. Our business model is consumption-based. All customers have access to the unified platform and can choose which services to use. Some services, like Flink jobs, are charged per use, but no separate licenses are required. It’s flexible and on-demand.

What kind of customers does Confluent serve—SMBs or large enterprises?

We serve all customer types, from small and medium businesses to Fortune 500 firms. The same platform powers both ends of the spectrum. Customers can choose between self-managed deployments or our managed cloud service, depending on their needs.

What can customers expect from Confluent over the next year?

We’re working to further unify batch and stream processing. This means customers will be able to run real-time and historical queries on a single platform, integrate their data with all major SaaS and enterprise tools, and benefit from deeper analytics integrations. Our goal is to make data usable, consistent, and available wherever it’s needed, in real time or in retrospect.

"While other streaming tools exist, including some cloud-native and open-source alternatives, Confluent remains the only truly integrated platform that works across on-prem, cloud, and multi-cloud environments. Most enterprise customers operate in hybrid setups, and Confluent uniquely supports that complexity with a unified, vendor-agnostic solution"

DATA STREAMING OUTLOOK IN THE MIDDLE EAST

With AI adoption surging across the Middle East, Confluent is positioning its data streaming platform as a foundation for real-time, intelligent enterprises. In this interview, Fred Crehan, Area Vice President Growth Markets at Confluent discusses regional momentum, cloud flexibility, and why streaming data is essential for AI success.

Can you discuss Confluent’s growth and presence in the Middle East?

We officially launched our Middle East operations shortly after the pandemic, around 2021, and have since set up our regional base in Dubai Media City. The region has been fantastic for us — we’re growing steadily, acquiring new customers across verticals, and building strong partner relationships. Given the recent strategic announcements in Saudi Arabia and the UAE, especially around AI and data infrastructure, we see this region as a priority for sustained investment.

Which verticals are seeing the most traction for Confluent’s platform in the region?

Confluent, built around Apache Kafka, has wide appeal across industries. Kafka is one of the most adopted open-source technologies globally, and we’re seeing strong interest from banking, finance, retail, travel, telecom, and even government sectors. Any enterprise that wants to build a strategic edge with real-time data is a potential fit for us, and that’s practically every modern organization today.

What do your latest product announcements mean for customers in the Middle East?

Our recent product enhancements are focused on simplifying how organizations build and manage data in motion. We introduced Tableflow, which enables seamless integration between Kafka and leading data warehouses like Databricks and Snowflake, essentially allowing bi-directional data movement with a single click. We’ve also expanded our partnership with Databricks and improved Flink capabilities, letting customers process and transform data in real time. This is especially critical for AI projects, where data freshness, quality, and transformation are essential.

How do you see Confluent’s role in the current AI wave, especially in the context of the Middle East?

We believe the battleground for AI leadership will be shaped significantly in the Middle East. The UAE and Saudi Arabia are making aggressive investments and are very ambitious in their AI goals. But no AI project can succeed without a robust data strategy. Confluent plays a foundational role here; our platform ensures that data is fresh, varied, high-quality, and available in real time, which are non-negotiables for any serious AI initiative. We see ourselves as an essential enabler for AI in the region.

How do you address data residency and regulatory requirements, particularly in highly regulated sectors?

We support both self-managed (on-prem) and Confluent Cloud deployments. Our on-premise solution allows customers to fully control their environment and comply with strict residency requirements. On the cloud side, we’ve aligned with all major hyperscalers: AWS, Azure, and GCP. We are available in local cloud regions across the Middle East, including Dammam (KSA), Dubai (UAE), Doha (Qatar), and Bahrain. This allows us to meet national data sovereignty regulations and sector-specific mandates, such as those in government and finance.

What deployment trends are you observing — on-prem, cloud, or hybrid?

The region shows strong interest in hybrid architectures. Many organizations are keeping critical data on-premise for security and compliance, while leveraging cloud for scalability and performance. This hybrid approach is a key strength of Confluent. We offer unmatched flexibility in moving data between on-prem and cloud — and even across multiple cloud environments — without vendor lock-in. Multi-cloud is increasingly common here, and Confluent supports it fully.

How are you enabling and collaborating with partners in the region?

Partners are central to our go-to-market strategy in the Middle East. We work closely with both global and regional systems integrators and distributors. Our “Build with Confluent” initiative encourages partners to develop solutions on top of our platform. We support them through training, certifications, co-delivery of projects, and marketing enablement. The creativity we’re seeing, from fraud detection to real-time supply chain analytics, is tremendous, and partners play a pivotal role in bringing these innovations to life.

Are there examples where customers tried open-source Kafka and later moved to Confluent?

Absolutely. We embrace the open-source Kafka community. In fact, Confluent was founded by the original creators of Kafka. Many customers start with open-source Kafka, which is great for experimentation. But as their use cases scale, they face challenges: managing multiple integrations, ensuring security, meeting SLAs, and handling deployment complexity. That’s where we come in. Confluent simplifies and secures large-scale data streaming while reducing hidden costs like hiring Kafka specialists, maintaining infrastructure, and building custom governance. The build vs. buy decision often tilts in our favor once customers experience the demands of running Kafka in production.

How does cloud adoption look in terms of provider preference across your regional customer base?

There’s no single dominant provider — we see customers using AWS, Azure, GCP, and often more than one in parallel. Multicloud strategies are common, as enterprises seek flexibility and avoid lock-in. Since Confluent is cloud-agnostic, we allow customers to operate seamlessly across clouds. That flexibility is one of our strongest differentiators, especially in a region where digital infrastructure is evolving so rapidly.

Are you participating in regional tech events like GITEX?

Yes, we’ve participated in GITEX previously in partnership with AWS, Azure, and GCP. It's a flagship event in the region and a key platform for visibility and collaboration. We’re excited to continue engaging with the GITEX community in future editions — it’s an unmissable forum for showcasing innovation and forging meaningful relationships.

"Confluent, built around Apache Kafka, has wide appeal across industries. Kafka is one of the most adopted open-source technologies globally, and we’re seeing strong interest from banking, finance, retail, travel, telecom, and even government sectors."

PHDAYS FEST HIGHLIGHTS

CYBERSECURITY COLLABORATION

The international cybersecurity festival Positive Hack Days, hosted by Positive Technologies emphasized the need for technological cooperation as the basis of digital sovereignty

The international cybersecurity festival Positive Hack Days, hosted by Positive Technologies, a leader in result-driven cybersecurity, took place in Moscow's Luzhniki sports complex on May 22–24. The event was supported by the Ministry of Digital Development of Russia. The Moscow Government acted as a strategic partner: this year's cyberfestival received support from the Social Development Complex, the Department of Information Technology, and the Department of Entrepreneurship and Innovative Development.

How can we ensure a secure digital future for everyone and build sovereign cybersecurity in every country? How can we achieve technological independence and professional readiness to autonomously maintain cyber resilience under any circumstances, using only our own cybersecurity specialists? Is it possible for a country to train skilled personnel independently, without interacting with experts from other countries and global industry leaders?

These were just a few of the questions that delegations from over 40 countries across Latin America, Africa, Asia, and the Middle East discussed at the largest cybersecurity forum. Among the participants were representatives of government bodies and cybersecurity agencies, business leaders, renowned tech experts, software developers, and ethical hackers. This was the largest PHDays Fest since the event began in 2011. Over 150,000 people visited the three-day event, with more than 180,000 viewers tuning in online. The technical sessions were grouped into 26 tracks and featured 270 talks covering key cybersecurity issues. More than 500 speakers participated in the festival, ranging from budding tech enthusiasts to top specialists, CIOs, and CISOs of major IT companies.

Geopolitical confrontations have exposed the problem of the world's digital architecture: dominant control by large vendors and the infringement of individual countries' interests through restricted access to technology and equipment. Global tech giants wield enormous influence; dependence on them can paralyze a national economy and undermine technological sovereignty. It was previously believed that there were hardly any worthy alternatives to their solutions in the global market.

An innovative idea on how to achieve digital sovereignty was proposed at PHDays Fest by Positive Technologies. The concept is based on the idea that strategic cooperation and mutual exchange of expertise can become a source of strength. For example, when developing technologies or products, partners depend on each other only in terms of sharing knowledge and practical experience. Such synergy can be achieved by moving away from the traditional schemes of importing IT and cybersecurity solutions, which plunge countries into complete dependence on suppliers. Instead, cooperative partners, whether countries or commercial enterprises, can progress together by complementing each other.

The concept was positively received by Russian experts and supported by delegates from other countries. Positive Technologies declared its willingness not only to launch this initiative, but also to take a leading role in its implementation. The company's experts are ready to openly share their unique experience in the field of cybersecurity, accumulated over more than two decades, with friendly countries. They offer practical expertise in protecting individual facilities and entire economic sectors, as well as methods for developing effective cybersecurity systems.

To foster the growth of local professional talent, the company will act as a mentor and allocate resources to train and upskill cybersecurity specialists and ethical hackers. Positive Technologies also intends to develop local expert communities. This way, the vendor will help partners establish the necessary technological foundation and properly build a sovereign cybersecurity industry, which the partners will be able to maintain and improve independently, relying on their own well-trained and highly qualified specialists. The company took its first steps in this direction in 2024, specifically by launching an international training program for cybersecurity professionals. In addition, Positive Technologies helps to enhance the competencies of cybersecurity leaders in financial institutions of the Gulf countries to better protect the local financial sector.

Two ideas were consistent themes throughout the forum sessions: co-developing new digital architectures based on security principles that prevent a few large vendors from holding unlimited power, and working together to fill the gaps in cybersecurity education through expertise transfer. The business program included over a dozen discussions and plenary sessions. International participants particularly noted the relevance and global significance of the issues raised.

“As the official distributor of Positive Technologies in the Middle East and Africa, we are thrilled to be attending Positive Hack Days Fest. This is our second time participating in this event, and it provides a fantastic opportunity to connect with cybersecurity experts from around the world,” said Ali Azzam, Vice President of Mideast Communication Systems. “Positive Technologies offers a range of unique cybersecurity solutions and has significant strengths and key advantages in various sectors. We believe it is essential for us to encourage Egyptians to attend this festival, as they can gain valuable insights that can be applied to the Middle Eastern industries.”

The forum opened with a plenary session on digital sovereignty, emphasizing the importance of technological independence, service continuity, and national security. Maksut Shadayev, the Russian Minister of Digital Development and one of the speakers, said this primarily means protecting the interests of users and national security, regardless of external pressure. The panel highlighted the global nature of digital dependency and the need for international collaboration to counter it.

A panel on humanless technology discussed the balance between innovation and control. Experts stressed the need for ethical standards, especially in generative AI, and highlighted the risks of overexposure to digital products, particularly among children. Positive Technologies' CEO Denis Baranov underscored the role of cybersecurity professionals in enabling safe tech adoption.

The growing threat of cyberfraud was also discussed, with officials warning of increasingly sophisticated, remote, AI-powered attacks. Kazakhstan’s proactive fraud prevention models and the importance of multi-stakeholder cooperation were presented as key strategies.

Lastly, the evolving role of CISOs was explored. Security leaders must now act as business enablers, translating cyber risks into financial impact and embedding cybersecurity into corporate culture and decision-making across sectors and geographies. A prime example is the experience of Arab Islamic Bank (Jordan), where the CISO plays one of the key management roles. According to Ahmad Mohammad Sabri Aljamal, Information Security Risk Manager at the Arab Islamic Bank, this was made possible by translating technical threats into financial and reputational consequences, as well as fostering a culture of cybersecurity within the organization. Tushar Dinesh Vartak, CISO of RAKBANK (UAE), also emphasized the importance of mastering the language of top management and understanding business processes for proactive security.

New partnerships and initiatives

During the forum, the DOM.RF Group and Positive Technologies signed a partnership agreement that aims to cultivate the Russian software market and develop domestic alternatives to cybersecurity products for the banking industry. In addition, the FinTech Association has joined forces with Positive Technologies to ensure the cyber resilience of the financial sector.

Positive Technologies also signed memorandums of understanding and cooperation with four educational institutions in Indonesia: Universitas Muhammadiyah Jakarta, Universitas NU NTB, Business Center Alumni UI (KBA UI), and Sakuranesia Foundation. The collaboration will focus on improving the skills of cybersecurity professionals in Southeast Asia.

Additionally, the second season of Positive Hack Camp, an international educational project on practical cybersecurity organized by Positive Technologies, was announced. In July, specialists from multiple countries will once again come to Moscow to learn about cybersecurity from leading Russian experts.

Other PHDays Fest highlights

As part of the cyberfestival, the legendary cyberbattle Standoff 15 took place. In this battle, defenders faced off against ethical hackers in conditions as close to real life as possible. More than 40 teams of attackers and defenders from 18 countries, including CIS countries, Southeast Asia, and the Middle East, participated in the large-scale competition. The total prize fund for the attackers was 5 million rubles. The Russian team DreamTeam secured the top spot, claiming the title of seven-time Standoff champion.

SHAPING THE NEXT GENERATION OF CYBERSECURITY EXPERTS

Yuliya Danchina, Positive Technologies Customer and Partner Training Director and Head of Positive Education, discusses how the company is helping build global cybersecurity talent pipelines through practical education and strategic partnerships.

What is the focus of the Cybersecurity Academy at Positive Technologies?

The Academy’s mission is to equip professionals and students with real-world cybersecurity skills, and to help build digital sovereignty in our partner countries. We operate as an internal education division within Positive Technologies, running upskilling programs for professionals, university faculty, and even school students. Everything we do is grounded in practical, hands-on training, not theory.

What types of training programs do you offer?

We run specialized programs across several domains, including Ethical Hacking/White Hacking, Penetration Testing, Vulnerability Assessment, SOC Analytics, Application Security, and Machine Learning for Cyber Defense. Each program is tailored to a specific skill set, and participants receive certifications upon completion.

Do you collaborate with academic institutions?

We work with over 600 universities across Russia and are expanding globally. Our partnerships include co-developing cybersecurity and IT curricula, offering credits for white hacking and “methodology by design,” equipping institutions with advanced cyber ranges and simulation labs, and training university faculty through our School for Faculties program.

We also engage with schools to build early awareness, encouraging the teaching of ethical hacking to students as young as ten years old.

How does the “School for Faculties” program work?

It’s a six-month training initiative for university professors teaching cybersecurity or IT. Participants learn from our top white-hat hackers about real-world threats, cyberattack methods, and how to secure infrastructure. The goal is to modernize university curriculums based on industry needs.

What is the Positive Hack Camp, and who is it for?

Positive Hack Camp is a two-week, fully practical cybersecurity bootcamp for early-stage IT learners. It attracts global participants — last year we had 70 students from 20 countries, including the Middle East (Saudi Arabia, UAE, Bahrain, Egypt). This year, we expect 100 participants, including 40 from the Gulf.

What opportunities are available after the camp?

We identify top talent — often the top 10–15 participants — and offer them internships or junior training roles. These roles can lead to full-time positions within Positive Technologies. We also welcome international interns, as part of our broader goal to develop cyber talent across regions.

How do you ensure ethical values are instilled in participants, especially those learning offensive techniques?

We teach defensive security. Even when training red-team tactics, we emphasize responsible disclosure and ethics. We screen participants carefully, conduct video interviews, and do basic background checks. We do not share offensive tactics usable by cybercriminals.

Do you have active collaborations in the Middle East?

Yes. We’ve partnered with SBU University in Indonesia and signed partnerships in Dubai. We’re also in ongoing discussions with universities and government-affiliated bodies in Saudi Arabia, including those in cybersecurity federations. Our goal is to expand the reach of practical cybersecurity education across the region.

How do you address the global cybersecurity skills gap?

In Russia, about 10,000–15,000 cybersecurity professionals graduate annually — far short of the 40,000 needed. Positive Technologies has spent two decades building talent pipelines through consistent investment in education. Our model involves working with the entire learning pipeline: from school kids to faculty to working professionals.

Are there any upcoming competitions or events in the Middle East?

We're exploring annual competitions such as hackfests or regional hackathons to strengthen engagement. These would complement the Positive Hack Camp and provide additional platforms for talent scouting and regional collaboration.

NAVIGATING NEW THREATS

In this interview, Bulwark Technologies provides valuable insights into its strategies and initiatives in the ever-evolving cybersecurity landscape.

Tell us about some of the new vendors you’ve signed up and introduced in the market this year

This year, we’ve onboarded over 14 new vendors, including Vicarius, Syteca, CryptoBind, and others. Vicarius focuses on vulnerability management; Syteca (formerly Ekran System) focuses on PAM and user activity monitoring; and CryptoBind focuses on PKI and cryptography solutions. These additions reflect our commitment to staying ahead of emerging cybersecurity threats and providing our partners with cutting-edge solutions.

AI is a significant focus in the industry. How is Bulwark integrating AI into its operations and offerings?

AI is integral to our daily operations and the solutions we offer. Many of our vendors now utilize AI tools to enhance their products. Internally, we leverage AI for various tasks, and we’re actively upskilling our team to better support AI-driven solutions for our partners.

With the rise of IoT and industrial systems, how is Bulwark addressing security in these areas?

While our focus has traditionally been on enterprise IT security, we’re increasingly moving into IoT and industrial security. Recognizing the critical importance of energy systems and infrastructure security, we’re expanding our portfolio to include solutions that address these emerging threats.

Bulwark has been in the industry for 25 years. How do you assess the company’s growth and journey?

It’s been a challenging yet rewarding journey. We’ve established a strong presence across the GCC and India. This year, we’re focusing more on market expansion and integrating new vendors to meet the evolving needs of our customers.

Many vendors are now offering unified platforms. How does Bulwark approach this trend?

We recognize the value of unified platforms. Many of our vendors provide integrated solutions, and we work closely with them to ensure these offerings are accessible and beneficial to our partners.

How does Bulwark support its partners in managing the diverse solutions in your portfolio?

We offer comprehensive support, including training, pre-sales consulting, and post-sales assistance. Our goal is to empower our partners with the knowledge and resources they need to effectively deliver our solutions to customers.

Email security remains a top priority for many organizations. How is Bulwark addressing this?

Email security is indeed critical. We offer solutions that provide robust protection against threats, ensuring secure communication for our clients.

Given the dynamic nature of cybersecurity, how does Bulwark stay ahead of emerging threats?

We continuously monitor the cybersecurity landscape and adapt our offerings accordingly. Our participation in events like GISEC allows us to stay informed about the latest trends and technologies, ensuring we can provide our partners with the most effective solutions.

SECURING THE DIGITAL WORKPLACE

Syteca shares how the company is redefining privileged access and user activity monitoring for modern enterprises.

What have been the key developments over the past year?

Over the past year, we’ve significantly evolved our core offering—user activity monitoring—by splitting it into two focused product lines: Productivity Monitoring and Privileged Access Management (PAM). The idea was to deliver deeper functionality in each area. Productivity Monitoring gives organizations insight into how users interact with applications and websites on endpoints—tracking engagement and time spent. On the other side, PAM has now matured into a standalone line, offering full Privileged Account Session Management (PASM) capabilities.

What differentiates us is that these modules, while distinct, are all deployed using a single agent, or even agentlessly in some cases, making implementation fast and scalable, especially compared to traditional PAM vendors.

What prompted the rebranding or renaming of the product suite?

The name change reflects the broader scope and modular architecture we now offer. What began as a user activity monitoring tool has grown into a unified solution stack covering productivity, privileged access, and session management. This evolution needed to be better reflected in the branding.

What’s your core market focus right now?

We’re strongly focused on SMBs and SMEs—particularly because they often lack robust privileged access security. Our deployment model is ideally suited for them. While other PAM vendors typically start with hundreds of users, we can start small—as few as five to ten users—and scale to tens of thousands. This flexibility is one of our strongest differentiators.

Our slogan even reflects “PAM for S, M, L, and XXL.” That means our solution works just as well for small businesses as it does for large enterprises.

How has your regional presence grown, especially in the Middle East?

Our entry point into the region was through user activity monitoring, and that gained traction especially in the government sector, where strong compliance and security requirements prevail. This laid a foundation for trust. We’re now leveraging that to expand within the SME space, which serves as a bridge to eventually reach large enterprises that may already have PAM tools—but may consider switching over time.

Can you elaborate on how AI is being leveraged in your platform?

AI plays a role primarily in threat detection and automated response. For example, if a privileged user begins performing suspicious activity—say, launching unauthorized applications—we can use AI to flag the behavior, terminate the session, or notify administrators in real time. The intelligence layer enables faster and more adaptive incident response.

While it’s still early days for broader AI adoption in PAM, we see this evolving rapidly, especially for use cases like anomaly detection and behavioral analytics.

Which verticals are you seeing the most traction in?

Our strongest verticals are:

• BFSI (Banking, Financial Services & Insurance) – accounting for 40–50% of our customer base.

• Critical Infrastructure – especially in Europe, where the category has expanded to include 18 verticals such as energy, water management, waste management, and utilities.

These sectors benefit not only from PAM but also from productivity and session monitoring, especially when dealing with third-party contractors and remote access environments.

What role does your solution play in securing multi-cloud environments?

While we don’t claim to address every vulnerability in multicloud environments, we secure access—which is often the most critical layer. We help manage and protect access to applications and infrastructure across clouds through features like password rotation, session monitoring, and privileged credential protection. That’s where our value lies: making sure access is controlled, monitored, and secure—no matter where the data or workloads reside.

PROACTIVE CYBERSECURITY WITH VICARIUS VRX

Moty Cohen, Director of EMEA at Vicarius, discusses the company’s innovative approach to vulnerability and patch management, highlighting the capabilities of their vRx platform

Can you elaborate on how Vicarius positions itself in the cybersecurity landscape?

Vicarius aims to resolve one of the most pressing challenges in IT: the vulnerability and patch management cycles. Traditional organizations often rely on multiple tools to cover vulnerability assessment, remediation, and patching, leading to inefficiencies and increased exposure. Our vRx platform consolidates these functions into a single solution, offering real-time visibility, advanced prioritization, and automated remediation across various operating systems and applications.

vRx simplifies vulnerability discovery with a cloud-based catalog that continuously detects active servers, workstations, applications, and operating systems across both on-premise and cloud environments. It enables real-time visibility and continuous monitoring of all assets, giving security teams instant insights into the organization’s overall security posture.

How does vRx differ from traditional patch management solutions?

Unlike conventional patch management tools that only address vulnerabilities when patches are available, vRx provides a comprehensive remediation approach. It includes automated patching, a scripting engine for complex vulnerabilities, and virtual patching technology that secures applications even in the absence of a patch. This multi-faceted approach ensures continuous protection against emerging threats.

Could you explain the concept of virtual patching and its significance?

Virtual patching acts as a protective layer when a traditional patch isn’t immediately available. It monitors specific memory spaces of applications, detecting and blocking potential exploit attempts in real-time. This proactive measure is crucial for mitigating risks associated with zero-day vulnerabilities or when applying patches is not feasible.

How does AI integrate into vRx’s functionality?

AI plays a pivotal role in vRx by enhancing risk prioritization and remediation processes. Our AI-powered risk-scoring engine evaluates vulnerabilities by considering asset criticality and the likelihood of exploitation, thus helping focus on the most impactful threats that could disrupt operations.
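As an illustration only — the actual vRx scoring model is proprietary and not described in detail here — a risk-prioritization engine of this kind can be thought of as weighting a base severity score by asset criticality and estimated exploit likelihood. The weights and formula below are hypothetical:

```python
# Hypothetical sketch of contextual risk prioritization, NOT the
# actual vRx algorithm: weight a CVSS-style base score by asset
# criticality and the estimated likelihood of exploitation.
from dataclasses import dataclass


@dataclass
class Vulnerability:
    cve_id: str
    base_score: float          # CVSS-style base score, 0.0-10.0
    asset_criticality: float   # 0.0 (lab machine) to 1.0 (crown jewel)
    exploit_likelihood: float  # 0.0-1.0, e.g. from an EPSS-style model


def contextual_risk(v: Vulnerability) -> float:
    """Blend raw severity with business context and threat intelligence."""
    return (v.base_score
            * (0.5 + 0.5 * v.asset_criticality)
            * (0.3 + 0.7 * v.exploit_likelihood))


def prioritize(vulns: list[Vulnerability]) -> list[Vulnerability]:
    """Order vulnerabilities by contextual risk, highest first."""
    return sorted(vulns, key=contextual_risk, reverse=True)
```

The point of such a scheme is that a medium-severity flaw on a critical, actively exploited asset can outrank a critical flaw on an isolated test box.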

We leverage AI to analyze exploitability patterns and threat intelligence, allowing the platform to identify and address high-risk vulnerabilities promptly. While our current AI integration is targeted, we are continuously expanding its capabilities to stay ahead of evolving threats.

What sets vRx apart from competitors?

vRx distinguishes itself by focusing on the entire remediation lifecycle. Our platform not only identifies vulnerabilities but also provides actionable solutions, including patching, scripting, and virtual patching. This holistic approach ensures that organizations can mitigate risks effectively and efficiently.

How has Vicarius expanded its presence and growth in the Middle East market?

Vicarius has been active in the Middle East for approximately two years, establishing a solid customer base across various sectors. We currently serve around 15 clients in the region, ranging from medium-sized enterprises to large organizations. Our commitment to the region is underscored by plans to enhance our local infrastructure, including the potential establishment of a UAE-based cloud hosting facility to meet the specific requirements of governmental clients.

THE FUTURE OF AI

Waqas Butt, Group Head of ICT & AI, Alpha Dhabi Holding PJSC advocates moving beyond fragmented AI experiments to strategic AI-first integration in the enterprise

From Fragmented Experiments to Strategic AI-First Integration | The Belief:

AI is no longer an option; it’s a mandate. It is important to understand why organizations must shift from fragmented experiments to strategic AI-first integration.

In the digital era, one truth is becoming crystal clear: Artificial Intelligence (AI) is no longer a futuristic concept or a tech enthusiast’s dream — it’s a business imperative. Organizations across every sector now recognize AI as a catalyst for growth, efficiency, and innovation. But simply acknowledging AI’s importance isn’t enough. What truly matters is how AI is adopted, integrated, and scaled.

As the competitive landscape evolves, we’re seeing a widening gap between organizations that treat AI as a checkbox and those that embed it as a core strategy. The difference lies not in the presence of AI — but in its placement and purpose.

Beyond the AI Hype | Integration Over Experimentation:

Many businesses fall into the trap of superficial AI adoption — adding chatbots here, automating workflows there, or piloting AI tools in isolated departments. While these efforts may offer short-term benefits, they often lead to fragmented architectures, inconsistent experiences, and data silos.

Winners in the AI race aren’t the ones with the most tools. They are the ones with the most strategic, integrated intelligence.

These are organizations that:

• Choose platforms natively built with AI at their core,

• Operate with end-to-end intelligent systems rather than patchwork solutions,

• Focus on secure, scalable, and aligned adoption that transforms their entire operating model.

The Risks of Fragmented AI Adoption | Foundational Component, Not a Bolt-On Capability:

Treating AI as a bolt-on capability instead of a foundational component can create several challenges:

• Cost and Complexity: Managing multiple, disjointed AI tools increases operational overhead and requires specialized skillsets to maintain.

• Security Risks: Siloed implementations introduce vulnerabilities and make governance more difficult.

• Inefficiency: Without a centralized AI strategy, organizations struggle to achieve synergy between departments, hindering innovation and responsiveness.

This kind of sporadic AI usage might seem progressive on the surface — but underneath, it’s like building a high-tech structure on shaky foundations.

The Case for a Strategic AI | AI-First Approach:

To truly unlock the transformative potential of AI, organizations must shift from experimental to intentional adoption — where AI is built into the DNA of platforms, products, and processes.

A strategic, AI-first model enables:

• Scalability: AI becomes a growth enabler, not a bottleneck.

• Resilience: Integrated AI can adapt dynamically to changing conditions and customer needs.

• Transformation: AI isn’t just enhancing what exists — it’s reshaping how businesses operate.

This isn’t about “adding” AI. It’s about reimagining your business through the lens of AI.

From AI “Sprinkles” to Full-Stack Intelligence | Tomorrow’s Enterprise:

The time for half-measures is over. To stay competitive and future-ready, businesses must move beyond surface-level AI experiments and adopt full-stack intelligence — systems that are smart by design, not by addition.

We’re entering an era where AI doesn’t sit on the sidelines; it drives the playbook. Full-stack intelligence puts “AI Agents” into action, but that is only possible when AI is native to your solutions and data. Tools working in silos or independently leave you with gaps and risks that will take another round of work to fix and trust.

Waqas Butt
Group Head of ICT & AI, Alpha Dhabi Holding PJSC

IS YOUR DLP SLACKING?

Yazen Rahmeh, Cybersecurity Expert at SearchInform highlights the most important competencies for Data Loss Prevention (DLP) systems

The development of information technologies brings new challenges. It is especially relevant for the field of cybersecurity. Malicious actors adopt AI-powered solutions, enhance phishing techniques, and evolve their tactics. The issue of new data transfer channels became acute with the widespread use of remote work and hybrid workplaces. Therefore, protective measures need to adjust to the new circumstances. Let’s discuss the most important competencies for Data Loss Prevention (DLP) systems.

Control Data Channels

The number of monitored data transfer channels is an essential parameter for Data Loss Prevention systems. Generally speaking, the longer the list, the more effective the solution will be. It is crucial to manage all data channels within the company to ensure data security. Data security can be compared to a boat: if there is a leak, the boat will sink.

The majority of DLP systems are capable of monitoring well-known channels like emails, FTP servers, NAS devices, and USB drives. Many companies have business accounts on WhatsApp to contact customers. A lot of remote workers use Google Drive to store business documents. Are you sure that your DLP system can reliably control cloud storage services? Can it secure modern instant messengers like Telegram?

For example, let’s examine one of the recent data leak incidents, Apple v. Rivos, Inc. In 2022, several Apple employees terminated their contracts and took new positions at Rivos. According to the lawsuit, some of them synced their workstations with cloud storage, while others transferred corporate files to external HDDs or wireless NAS devices. As a result, Apple’s trade secrets were under threat of disclosure.

This breach is relevant because it involved multiple data transfer methods. It highlights the fact that effective DLP systems need to provide comprehensive data monitoring. Nobody wants to hear that a leak occurred through instant messaging or corporate cloud storage. One hole is enough to compromise the entire security architecture. It is important to ensure that your DLP system monitors all data channels in use.

Analyze Content

The second critical parameter for a DLP system is the capability to prevent threatening data transfers. It’s a well-known trick to advertise that “we can block this and that” in marketing materials, while the reality is a little bit more complicated. It is important for a DLP to not only block file transfer on the basis of file attributes but also on the results of content analysis. Sometimes even loyal employees can make a mistake and share sensitive records by accident.

In 2023, a member of Samsung’s R&D department encountered an issue with some source code. To find a solution quickly, the specialist pasted the code into ChatGPT and asked it to find the mistake. The consequences were potentially disastrous: AI-powered tools can incorporate entered prompts into their training data, meaning the code could be exposed to other users. The incident put Samsung’s intellectual property at risk and prompted the company to restrict employee use of such tools.

This incident emphasizes the importance of content analysis for DLP systems. Employees can share sensitive records in plain-text form, and a data leak can happen by mistake or through the deliberate actions of a malicious insider. Information security specialists have to double-check the technical aspects and details of implemented solutions. It is nice if your DLP system can prevent 100% of suspicious file transfers. However, are you sure it can achieve the same outcome for plain text and screenshots? The devil is in the details.
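The difference between attribute-based and content-based blocking can be illustrated with a minimal sketch. The patterns below are hypothetical examples, not any vendor’s detection logic: instead of checking only a file’s name or extension, a content-aware engine scans the outbound payload itself for sensitive markers.

```python
# Minimal illustration of attribute-based vs content-based DLP
# inspection; the patterns and rules here are hypothetical examples.
import re

SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
    "internal_marker": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}


def attribute_check(filename: str) -> bool:
    """Attribute-based rule: block only known-sensitive extensions."""
    return filename.lower().endswith((".key", ".pem"))


def content_check(payload: str) -> list[str]:
    """Content-based rule: flag sensitive markers inside the payload itself."""
    return [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(payload)]
```

A key pasted into an innocuous-looking `notes.txt` sails past the attribute rule, but the content rule still flags the key material inside it — exactly the gap the Samsung incident exposed for plain-text sharing.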

Optimize Workflow

According to a survey by Gurucul, 76% of respondents cite growing business and IT complexity as the main drivers of increased insider risk. One might ask why both factors are considered together. From my perspective, the answer is obvious: both come down to time. Growing infrastructure and increasingly complex solutions increase the time security specialists need to perform their duties.

Advanced analytical tools are a must for a DLP system. With investigation assistance, an information security specialist can address an incident in a couple of minutes; without such instruments, the same incident can take 10-15 minutes. Now imagine you have not a couple of dozen alerts per day, but several hundred. The cumulative effect will impact the quality of your security measures.

For example, detailed information about data transfers between users can greatly enhance the investigation process. An IS specialist will not need to spend time manually reconstructing the incident. An advanced DLP solution will provide them with a complete picture of user connections and file operation logs.
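As a simplified sketch of what such investigation tooling does under the hood, one can reconstruct a file’s hop-by-hop path from transfer logs. The event schema below is hypothetical, not any specific product’s format:

```python
# Illustrative incident reconstruction from data-transfer logs; the
# event fields (ts, file, src, dst, channel) are a hypothetical schema.
from collections import defaultdict


def build_transfer_graph(events):
    """Map each file to its time-ordered list of (sender, receiver, channel) hops."""
    graph = defaultdict(list)
    for e in sorted(events, key=lambda e: e["ts"]):
        graph[e["file"]].append((e["src"], e["dst"], e["channel"]))
    return graph


def trace_file(graph, filename):
    """Render the hop-by-hop path a file took through the organization."""
    return " -> ".join(f"{s}--[{c}]-->{d}" for s, d, c in graph[filename])
```

With a view like this, the specialist reads off the chain of custody in one glance instead of grepping raw logs to reconstruct who passed the file to whom.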

Complex GUIs, inefficient analytical tools, and a lack of proper investigation instruments all add to the time spent on security. Information security specialists work under stressful conditions: nearly a quarter of employees in this field are considering leaving their positions, and an overwhelming majority of them, 93%, cite stress as a main reason. Choosing a smart DLP can save you time, reduce stress levels, and make you more effective.

Checklist for DLP

There are a lot of DLP solutions on the market. You can easily compare them and find pros and cons for each system. But my key point remains the same: pay attention to the essential parameters.

Broad channel coverage: the number of monitored data transfer channels is important for a solid DLP (Data Loss Prevention) system. As IT continues to develop, so do the potential sources of data leakage; instant messengers and cloud storage services are just two examples.

Advanced prevention capabilities: DLP systems need to work with both file attributes and content. Sometimes, malicious actors and trusted employees can bypass attribute-based access rights management. Safeguard your company with additional content-based analytical capabilities.

Powerful analytical and investigative tools will greatly enhance an IS specialist's performance. They will save time and prevent burnout, as well as strengthen the security posture of the organization.

"The number of monitored data transfer channels is an essential parameter for Data Loss Prevention systems. Generally speaking, the longer the list, the more effective the solution will be. It is crucial to manage all data channels within the company to ensure data security."

2025 Threat Trend Spotlight

EDGE DEVICES

Antoinette Hodes, Evangelist & Global Solution Architect, Office of The CTO at Check Point Software discusses the rising threat of exploited edge devices in cyber security

The strategy of exploiting edge devices, originally used by state-sponsored actors for covert infiltration, has now been adopted by financially motivated cyber criminals at a quickening pace. Routers, firewalls, and VPN appliances have become especially attractive to attackers because of their minimal security compared to other parts of the network. These devices are commonly repurposed to create Operational Relay Boxes (ORBs), a type of infrastructure used by cyber criminals to anonymize and relay communications. The rise of ORBs adds a new layer of complexity and opportunity: these intelligent gateways act as both control points and communication bridges between operational technology (OT) and IT networks. While ORBs enhance edge intelligence and real-time decision-making, they also become critical choke points. A compromised ORB could act as a launchpad for lateral movement, data exfiltration, or even operational sabotage.

By compromising these devices, attackers can establish covert communication channels that evade detection, enabling them to infiltrate further into networks. And over the past year, both cyber criminals and state-sponsored actors have dramatically increased their focus on exploiting edge devices as an initial access vector. The issue has become so severe that Check Point Research pointed to the security risks that arise from edge devices as one of five significant cyber security trends to monitor for this year.

Why Edge Devices are Now Being Targeted

Edge devices have become a more attractive target for cyber attacks because they play a critical role in a network's flow, making them difficult to patch without causing very noticeable operational disruptions. Vulnerabilities found in devices like Ivanti Connect Secure and Palo Alto Networks' PAN-OS GlobalProtect in early 2024 allowed attackers to exploit remote code execution flaws and bypass multi-factor authentication. Both state-sponsored actors and ransomware groups took advantage of these vulnerabilities to compromise corporate networks and gain access to sensitive environments. And because patching these devices often leads to service downtime, potentially impeding business operations, organizations must balance the need to secure their systems with the risk of disrupting vital services.

The exploitation of these edge devices isn't limited to just zero-day vulnerabilities. Magnet Goblin, which emerged in 2024, focuses on exploiting newly disclosed vulnerabilities in popular edge devices like Ivanti Connect Secure VPNs. They leverage tools like NerbianRAT—a cross-platform remote access Trojan (RAT)—to gain access to networks and deploy custom malware. Magnet Goblin’s swift exploitation of vulnerabilities in widely used devices highlights a concerning trend where cyber criminals are increasingly targeting critical infrastructure components to access sensitive data.

There’s also the risk of the “smart” edge, which features ORBs that not only aggregate and preprocess telemetry but also enforce policy, orchestrate workflows, and bridge the gap between OT and IT. Yet this very intelligence makes ORBs irresistible targets; a single compromised relay box could allow adversaries to silently manipulate sensor readings, disrupt critical processes, or pivot into core networks, all under the guise of routine edge communications. As we hurry to tap into IoT’s data and automation, we need to face one clear fact: our smart edge devices are only as safe as the relay points we set up, and the next wave of cyber threats is already hiding around the edges of our connected world.

The Continued Role of State-Sponsored Attacks

While financially motivated actors are rapidly exploiting edge devices, state-sponsored threat groups are also targeting these vulnerabilities – and doing so with a high level of sophistication. Cisco’s Adaptive Security Appliances (ASA) were targeted in a campaign known as ArcaneDoor. This operation, executed by nation-state actors, exploited weaknesses in ASA devices, allowing the attackers to infiltrate government and industrial networks. Once inside, they could exfiltrate sensitive data and establish long-term espionage capabilities, all while maintaining a covert presence.

Another notable campaign, codenamed Pacific Rim, points to China-based threat actors’ ongoing efforts to target perimeter devices, including Sophos firewalls and VPN gateways. The operation, which leveraged vulnerabilities in internet-facing services like CVE-2020-12271 and CVE-2022-1040, granted attackers access to critical network points. Once compromised, these devices were integrated into a covert ORB network, supporting command-and-control (C2) channels that could evade detection. The attackers employed advanced tactics such as rootkits and obfuscated hotfixes to maintain persistence and conceal their activities, enabling them to pivot from edge devices to other internal network assets.

Pacific Rim’s multi-year effort underlines the security risks posed by edge devices, especially in sectors where timely patching and comprehensive monitoring can be challenging. Underestimating the risks associated with unsecured perimeter devices can come with steep consequences.

The Threat of Botnets and DDoS Attacks

While sophisticated backdoors and custom implants dominate discussions around edge device exploitation, more traditional threats remain prevalent. In September 2024, Cloudflare mitigated what was described as the largest DDoS attack in history. The attack, originating from compromised edge devices like MikroTik routers, DVRs, and web servers, involved an extraordinarily high packet rate. Many of these compromised devices were likely exploited via critical vulnerabilities, with ASUS home routers accounting for a large portion of the attack. The campaign, which has not been attributed to any specific state-sponsored actor or cybercriminal group, demonstrates the scale and impact that compromised edge devices can have.

In 2024, botnets created from unsecured and vulnerable edge devices became indispensable tools for advanced threat actors. These botnets, like Raptor Train and Faceless, use decentralized C2 infrastructures that dynamically rotate between compromised devices. This ability to switch nodes and evade detection allows attackers to remain undetected for extended periods while maintaining persistent access to critical systems. Some malware, such as TheMoon, employs advanced evasion tactics like in-memory-only execution and frequent IP switching, making it even more difficult for defenders to track and mitigate.

Protect Your Edge (Devices)

Edge devices are no longer a minor part of the network. As attacks become more frequent and disruptive, we’re seeing edge device vulnerabilities as a key focal point for attackers seeking entry into corporate environments. As threat actors evolve their tactics and tools, the need for robust security practices around edge devices has become more critical than ever. Businesses must act quickly to secure their networks by closing the gaps in edge device security, ensuring these devices are properly secured through strong authentication methods, routine vulnerability scanning, and timely patch management.

"By compromising these devices, attackers can establish covert communication channels that evade detection, enabling them to infiltrate further into networks. And over the past year, both cyber criminals and state-sponsored actors have dramatically increased their focus on exploiting edge devices as an initial access vector."

CYBERTHREAT COMPLEXITIES DEMAND NEW LEADERSHIP

Biju Unni, Vice President at Cloud Box Technologies says that the evolving cyberthreat complexities demand a new kind of leadership

The promise that modern cybersecurity technologies would ease the burden on security teams has not materialized. The role of CISOs and security decision-makers remains as demanding as ever, weighed down by new risks, heightened complexities, and relentless pressure to adapt in real-time.

Geopolitical uncertainties and volatility are impacting the stability of enterprise supply chains more than ever. Existing risk and compliance regulations are continuously enhanced and improved, and new ones are announced; these may be directed at specific industries, countries, or regions.

While cybersecurity vendors are constantly innovating their products and introducing layered-defence concepts for enterprises, a lack of skilled resources and weak, immature cybersecurity programmes inside enterprises delay the benefits of these new technology opportunities.

The development of cybersecurity technologies and platforms is continuous, and the landscape is wide and complex. Cybersecurity decision makers and administrators need to make the right selections based on many factors, such as suitable integration and interoperability and reducing complexity and technology debt. However, their decision timelines are usually too slow for the business, which impacts ROI.

Cybersecurity risks are enhanced when there is a shared infrastructure and services with suppliers. Risks can also be exacerbated during transfer of acquired IT resources such as hardware and software.

The use of cybersecurity products and tools, such as Secure Access Service Edge, Identity and Access Management, and Extended Detection and Response, also brings supply chain risks to the enterprise and increases dependency on vendors; specialized partners can help reduce such dependencies.

While the benefits of Generative AI are positively incremental for processes and worker productivity, implementing Generative AI brings with it the threats of data leakage, compromises of data privacy, and loss of intellectual property, if cybersecurity administrators are not proactive from the start of the adoption cycle.

Regional enterprises are at differing levels of cybersecurity maturity based on multiple factors that are internal to each enterprise. Each enterprise needs to rework its cybersecurity programme with initiatives that can range from basic security awareness and hygiene to advanced security capabilities.

Digital transformation, cloud implementations, application delivery modernization, and network modernization, for example, each have their benefits and implementation timelines, but bring with them additional risk exposure that needs to be planned for before going live. Such changes to the enterprise cybersecurity programme need teamwork, not just between business and IT, but also among IT administrators.

Here is a recommended list of practices that should be on the top of the implementation list for cybersecurity decision makers and administrators:

• Manage geopolitical risk by increasing the visibility and governance of business locations and supplier relationships.

• Manage geopolitical challenges by reducing supply chain exposure and introducing enhanced policy and technology controls.

• Involve specialized partners in the audit of the enterprise’s cyber resilience plans and across its supply chain.

• Integrate risk management tools to monitor diligence and identify potential threat behaviours across all incoming services for the enterprise. Specialized partners can help in this area.

• Build a Generative AI policy that defines acceptable user practices as well as the levels of risk and risk management across the enterprise.

• Build a schedule of end-user training to educate users on the risks of sensitive data leakage while accessing Generative AI edge and cloud applications.

• Test, validate, and introduce the usage of Generative AI-enhanced cybersecurity tools and products.

• Prioritize implementing cybersecurity tools that align with the enterprise’s cybersecurity mesh architecture.

• Prioritize cybersecurity products that deliver intelligence-layer capabilities, for example behaviour data analysis and risk index scores.

• Adopt modern and established cybersecurity practices such as Zero Trust policies, and consolidate and simplify the cybersecurity stack.

• Leverage security automation and security orchestration to reduce human error and resources in well-defined cybersecurity practices.

• Avoid introducing automation into security operations without a well-defined plan and assessment of the consequences.

• To anticipate the inevitability of ransomware and advanced threats, repeatedly simulate and practice enterprise-wide recovery and continuity operations, processes, and responsibilities.

In summary, such inevitabilities do happen and need to be a key focus for cybersecurity leaders and administrators. Data is now stored and processed everywhere as enterprises adopt Generative AI and continue to move towards hybrid and multi-cloud application and data models. Zero Trust is being introduced as a key design element in IT systems.

Effective communication between IT and cybersecurity administrators will be essential for successful security policy implementation. API deployments are essential to increase the value that IT offers to business.

Resilient systems and data need to be part of the overall enterprise IT design to combat the high risk of ransomware incidents.

Finally, data security is the key to a successful and functional data-everywhere world. By partnering with specialized solution providers, enterprises can avoid costly and painful errors associated with reinventing the wheel.

AS ADVERSARIAL GENAI TAKES OFF, THREAT INTEL MUST MODERNIZE

Bart Lenaerts, Senior Product Marketing Manager, Infoblox, writes about how adversaries innovate with GenAI and the case for predictive intelligence

Generative AI, particularly Large Language Models (LLMs), is driving a transformation in cybersecurity. Adversaries are attracted to GenAI because it lowers the entry barriers to creating deceptive content. Actors use it to enhance the efficacy of intrusion techniques like social engineering and detection evasion.

This article provides common examples of malicious GenAI usage like deepfakes, chatbot automation and code obfuscation. More importantly, it also makes a case for early warnings of threat activity and usage of predictive threat intelligence capable of disrupting actors before they execute their attacks.

Example 1: Deepfake scams using voice cloning

At the end of 2024, the FBI warned that criminals were using generative AI to commit fraud on a larger scale, making their schemes more believable. GenAI tools like voice cloning reduce the time and effort needed to deceive targets with trustworthy audio messages. Voice cloning tools can even correct human errors like foreign accents or vocabulary that might otherwise signal fraud. While creating synthetic content isn’t illegal, it can facilitate crimes like fraud and extortion. Criminals use AI-generated text, images, audio, and videos to enhance social engineering, phishing, and financial fraud schemes.

Especially worrying is the easy access cybercriminals have to these tools and the lack of security safeguards. A recent Consumer Reports investigation on six leading publicly available AI voice cloning tools discovered that five have bypassable safeguards, making it easy to clone a person’s voice even without their consent.

Voice cloning technology works by taking an audio sample of a person speaking and then extrapolating that person’s voice into a synthetic audio file. However, without safeguards in place, anyone who registers an account can simply upload audio of an individual speaking, such as from a TikTok or YouTube video, and have the service imitate them.

Voice cloning has been utilized by actors in various scenarios, including large-scale deepfake videos for cryptocurrency scams and the imitation of voices during individual phone calls. A recent example that garnered media attention is the so-called “grandparent” scam, where a family emergency scheme is used to persuade the victim to transfer funds.

Example 2: AI-powered chatbots

Actors often pick their victims carefully by gathering insights on their interests and setting them up for scams. Initial research is used to craft the smishing message and draw the victim into a conversation. Personal notes like “I read your last social post and wanted to become friends” or “Can we talk for a moment?” are some examples our intel team discovered (step 1 in picture 2). While some of these messages may be embellished with AI-modified pictures, what matters is that actors invite their victims to the next step: a conversation on Telegram or another actor-controlled medium, far away from security controls (step 2 in picture 2).

Once the victim is on the new medium, the actor uses several tactics to continue the conversation, such as invites to local golf tournaments, Instagram following or AI-generated images. These AI bot-driven conversations go on for weeks and include additional steps, like asking for a thumbs-up on YouTube or even a social media repost. At this moment, the actor is trying to assess their victims and see how they respond. Sooner or later, the actor will show some goodwill and create a fake account. Each time the victim reacts positively to the actor’s request, the amount of currency in the fake account will increase. Later, the actor may even request small amounts of investment money, with an ROI of more than 25 percent. When the victim asks to collect their gains (step 3 in picture 2), the actor requests access to the victim’s crypto account and exploits all established trust. At this moment, the scamming comes to an end and the actor steals the crypto money in the account.

While these conversations are time-intensive, they are rewarding for the scammer and can yield tens of thousands of dollars in ill-gotten gains. By using AI-driven chatbots, actors have found a productive way to automate the interactions and increase the efficiency of their efforts.

Infoblox Threat Intel tracks these scams to optimize threat intelligence production. Common characteristics found in malicious chatbots include:

• AI-typical grammar errors, such as an extra space after a period, or traces of foreign languages

• Using vocabulary that includes fraud-related terms

• Forgetting details from past conversations

• Repeating messages mechanically due to poorly trained AI chatbots (also known as parroting)

• Making illogical requests, like asking if you want to withdraw your funds at irrational moments in the conversation

• Using false press releases posted on malicious sites

• Opening conversations with commonly used phrases to lure the victim

• Using cryptocurrency types that are often favoured in criminal communities

Combinations of these fingerprints allow threat intel researchers to observe emerging campaigns and trace actors and their malicious infrastructure.
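
These fingerprints lend themselves to simple automated screening. The sketch below is illustrative only: the term lists, weights, and function name are hypothetical assumptions for demonstration, not an Infoblox tool. It scores an incoming chat message against a few of the characteristics above, weighting mechanical repetition (parroting) highest.

```python
import re

# Hypothetical term lists for illustration -- not from any real product.
FRAUD_TERMS = {"withdraw", "deposit", "guaranteed", "roi", "crypto", "funds"}
OPENERS = ("can we talk", "wanted to become friends")

def fingerprint_score(message: str, seen_messages: set[str]) -> int:
    """Score one chat message against a few scam fingerprints."""
    text = message.lower().strip()
    score = 0
    if re.search(r"\.\s{2,}\S", message):          # extra space after a period
        score += 1
    if FRAUD_TERMS & set(re.findall(r"[a-z]+", text)):  # fraud-related vocabulary
        score += 1
    if text in seen_messages:                      # mechanical repetition (parroting)
        score += 2
    if any(text.startswith(o) for o in OPENERS):   # commonly used lure openers
        score += 1
    seen_messages.add(text)
    return score
```

A conversation whose messages accumulate a high score could be queued for human review; a real system would combine many more signals than this sketch shows.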

Example 3: Code obfuscation and evasion

Threat actors are using GenAI for more than creating human-readable content. Several news outlets have explored how GenAI assists actors in obfuscating their malicious code. Earlier this year, Infosecurity Magazine published details of how threat researchers at HP Wolf discovered social engineering campaigns spreading VIP Keylogger and 0bj3ctivityStealer malware, both of which involved malicious code being embedded in image files. To improve the efficiency of their campaigns, actors are repurposing and stitching together existing malware via GenAI to evade detection. This approach also helps them gain velocity in setting up threat campaigns and reduces the skills needed to construct infection chains. HP Wolf's threat research estimates an 11% increase in evasion for email threats, while other security vendors like Palo Alto Networks estimate that GenAI flipped their own malware classifier model's verdicts into false negatives 88% of the time. Threat actors are clearly making progress in their AI-driven evasion efforts.

Making the case for modernizing threat research

As AI-driven attacks pose plenty of detection evasion challenges, defenders need to look beyond traditional tools like sandboxing or indicators derived from incident forensics to produce effective threat intelligence. One opportunity lies in tracking pre-attack activities instead of sending the last suspicious payload to a slow sandbox.

Just like a standard software development lifecycle, threat actors go through multiple stages before launching attacks. First, they develop or generate new variants of malicious code using GenAI. Next, they set up infrastructure such as email delivery networks or hard-to-trace traffic distribution systems. Often this happens in combination with domain registrations or, worse, the hijacking of existing domains.

Finally, the attacks go into “production”, meaning the domains become weaponized and ready to deliver malicious payloads. This is the stage where traditional security tools attempt to detect and stop threats, because it involves easily accessible endpoints or network egress points within the customer's environment. Because of evasion and deception by GenAI tools, this point of detection may not be effective, as actors continuously alter their payloads or mimic trustworthy sources.

The Value of Predictive Intelligence Based on DNS Telemetry

To stay ahead of these evolving threats, organizations should consider leveraging predictive intelligence derived from DNS telemetry. DNS data plays a crucial role in identifying malicious actors and their infrastructure before attacks even occur. Unlike payloads that can be altered or disguised using GenAI, DNS data is inherently transparent across multiple stakeholders—such as domain owners, registrars, domain servers, clients, and destinations—and must be 100% accurate to ensure proper connectivity. This makes DNS an ideal source for threat research, as its integrity makes it less susceptible to manipulation.

DNS analytics also provides another significant advantage: domains and malicious DNS infrastructures are often configured well in advance of an attack or campaign. By monitoring new domain registrations and DNS records, organizations can track the development of malicious infrastructure and gain insights into the early stages of attack planning. This approach enables the identification of threats before they’re activated.
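
As a minimal sketch of this idea (the record fields, 30-day threshold, and domain names below are assumptions for illustration, not an Infoblox API), one can flag domains whose first observed DNS query falls within a short window after registration, a common trait of infrastructure staged ahead of a campaign:

```python
from datetime import date

# Hypothetical threshold: treat domains queried within 30 days of
# registration as "newly staged" infrastructure worth scrutiny.
NEWLY_REGISTERED_DAYS = 30

def flag_suspicious(domain: str, registered: date, first_seen: date) -> bool:
    """Flag a domain whose first observed DNS query falls shortly
    after its registration date."""
    age_at_first_query = (first_seen - registered).days
    return 0 <= age_at_first_query <= NEWLY_REGISTERED_DAYS

# Example records (all invented): (domain, registration date, first query seen)
records = [
    ("login-verify-example.top", date(2025, 5, 1), date(2025, 5, 10)),
    ("example.org", date(1995, 8, 31), date(2025, 5, 10)),
]
flagged = [d for d, reg, seen in records if flag_suspicious(d, reg, seen)]
```

In practice such a rule would be one signal among many, combined with registrar reputation, DNS record churn, and lexical features of the domain name.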

Conclusion

The evolving landscape of AI and the impact on security is significant. With the right approaches and strategies, such as predictive intelligence derived from DNS, organizations can truly get ahead of GenAI risks and ensure that they don’t become patient zero.

SYNOLOGY PAS7700

Synology has unveiled PAS7700, an active-active NVMe all-flash storage solution engineered to deliver uninterrupted, high-performance services for enterprise mission-critical workloads. It features a dual-controller, active-active architecture to ensure non-disruptive service continuity. With security at its core and built-in 3-2-1-1 data replication capabilities, it safeguards data integrity at every level.

Engineered for exceptional performance and unmatched cost efficiency, the PAS7700 leverages an end-to-end NVMe design to deliver up to 2 million IOPS and sub-millisecond latency, offering up to 3x the performance of existing Synology models.

Combining dual controllers with 48 NVMe SSD bays in a space-efficient 4U chassis, PAS7700 scales seamlessly to 1.65 PB of raw capacity with the addition of seven expansion units. PAS7700 features comprehensive support for a range of file and block protocols, including NVMe-oF. With redundant memory upgradable to 2,048 GB* across both controllers and support for high-speed 100GbE networking, PAS7700 delivers exceptional performance, availability, and scalability to meet enterprise storage demands.

Highlights:

• Optimized for demanding workloads, PAS7700 leverages an all-NVMe array to deliver sub-millisecond latency, up to 2 million IOPS, and 30GB/s sequential throughput.

• PAS7700 is built on an active-active dual-controller architecture to deliver uninterrupted operations for mission-critical workloads. Designed with security at its core, PAS7700 features robust data protection tools including immutable snapshots, advanced replication, and flexible offsite tiering, ensuring end-to-end data integrity and resilience.

• Engineered to provide exceptional performance with cost efficiency in mind, the PAS7700 enables enterprises to achieve primary storage-grade performance and reliability at the cost of mainstream storage. With both inline and offline deduplication, it helps organizations strike the optimal balance between efficiency and performance.

RUCKUS R370 dual-band Wi-Fi 7 AP

CommScope, a global leader in network connectivity, announced new AI-driven Wi-Fi 7 and Gen AI-based solutions from RUCKUS Networks, which round out its portfolio for the hospitality industry. The portfolio of solutions provides full-service, mid-scale, and limited-service hotels a range of connectivity options for delivering customized guest Wi-Fi experiences that meet the unique demands of each market segment—from luxury to budget hotels.

The RUCKUS R370 dual-band Wi-Fi 7 indoor access point rises to the occasion, delivering 802.11be performance in a compact, IoT-ready form factor. Packed with the patented RUCKUS technologies found in our high-end APs, it optimizes performance and reduces interference—without the bulk.

The RUCKUS R370 is a compact, enterprise-grade entry-level, dual-band Wi-Fi 7 AI-driven AP offering economical multi-gigabit performance and future-ready connectivity for low to medium-density environments with patented RUCKUS innovations.


ASUS EXPERTCENTER

P400 AIO

ASUS announced the ExpertCenter P400 AiO (P440VA), an all-in-one solution that combines performance and style in a compact design. It features a 24-inch FHD touchscreen display with 100% sRGB color gamut and a 93% screen-to-body ratio. Combined with immersive Dolby® Atmos audio, it provides a premium experience for multimedia tasks.

ExpertCenter P400 AiO also integrates ASUS AI ExpertMeet to enhance videoconferencing experiences, adding value for businesses that rely on virtual collaboration. Its VESA-mount compatibility and sleek form factor offer flexibility in any modern workspace, while business-grade security with ASUS ExpertGuardian and connectivity features ensure reliability in SMB environments.

Designed with a sleeker profile, the ExpertCenter P400 AiO is 25% thinner than previous generations, making this ASUS’ most compact all-in-one PC. It also includes a comprehensive lineup of ports, including an optional HDMI®-in port that allows users to connect their laptop for a bigger screen experience. Moreover, the height-adjustable stand (HAS) on the ASUS ExpertCenter P400 AiO offers tilt, swivel, and height adjustments, providing perfect viewing angles whether sitting or standing.

ASUS ExpertCenter P400 AiO is built to power through any task and offers streamlined management to reduce admin workload. Powered by up to an Intel Core i7 processor, along with up to 64 GB DDR5 memory and up to 2 TB SSD, ExpertCenter P400 AiO delivers exceptional processing power.

Businesses can experience lightning-fast connectivity on the ExpertCenter P400 AiO with Wi-Fi 7, which delivers speeds up to 3x faster than Wi-Fi 6.

ASUS ExpertCenter P400 AiO has been engineered to take advantage of the latest AI tools. This includes ASUS AI ExpertMeet, which helps elevate the meeting experience, optimizing calls with AI noise-canceling audio and AI camera. AI Meeting Minutes automatically transcribes meetings by turning audio into text, making it easy to review later. It can summarize key points of the meeting, as well as identify multiple speakers.

Highlights:

• Peak Performance: Powered by up to Intel® Core™ i7 processor and ultra-fast Wi-Fi 7.

• Display of Brilliance: A 24-inch FHD touchscreen with edge-to-edge slim-bezel design and an impressive 93% screen-to-body ratio.

• Retractable Camera: An intuitive push-and-pull design that hides the camera when not in use for an added layer of security.

• Business-Grade Security: ASUS ExpertGuardian safeguards critical data across software, firmware and hardware.


GARTNER IDENTIFIES THE TOP TRENDS SHAPING THE FUTURE OF CLOUD

Gartner Predicts 50% Of Cloud Compute Resources Will Be Devoted To AI Workloads By 2029

Gartner, Inc. has announced the top trends shaping the future of cloud adoption over the next four years. These include cloud dissatisfaction, AI/machine learning (ML), multicloud, sustainability, digital sovereignty and industry solutions.

Joe Rogus, Director, Advisory at Gartner, said, “These trends are accelerating the shift in how cloud is transforming from a technology enabler to a business disruptor and necessity for most organizations. Over the next few years, cloud will continue to unlock new business models, competitive advantages and ways of achieving business missions.”

According to Gartner, the following six trends will shape the future of cloud, ultimately resulting in new ways of working that are digital in nature and transformative in impact (see Figure 1):

Figure 1: Trends Shaping the Future of Cloud. Source: Gartner (May 2025)

Trend 1: Cloud Dissatisfaction

Cloud adoption continues to grow, but not all implementations succeed. Gartner predicts 25% of organizations will have experienced significant dissatisfaction with their cloud adoption by 2028, due to unrealistic expectations, suboptimal implementation and/or uncontrolled costs.

To remain competitive, enterprises need a clear cloud strategy and effective execution. Gartner research indicates that organizations that successfully establish an upfront strategic focus will see their cloud dissatisfaction decrease by 2029.

Trend 2: AI/ML Demand Increases

Demand for AI/ML is set to surge, with hyperscalers positioned at the core of this growth. They will drive a shift in how compute resources are allocated by embedding foundational capabilities into their IT infrastructure, facilitating partnerships with vendors and users, and leveraging real and synthetic data to train AI models. Gartner predicts 50% of cloud compute resources will be devoted to AI workloads by 2029, up from less than 10% today.

“This all points to a fivefold increase in AI-related cloud workloads by 2029,” said Rogus. “Now is the time for organizations to assess whether their data centers and cloud strategies are ready to handle this surge in AI/ML demand. In many cases, they might need to bring AI to where the data is to support this growth.”

Trend 3: Multicloud and Cross Cloud

Many organizations that have adopted multicloud architecture find connecting to and between providers a challenge. This lack of interoperability between environments can slow cloud adoption, with Gartner predicting more than 50% of organizations will not get the expected results from their multicloud implementations by 2029.

Gartner recommends identifying specific use cases and planning for distributed apps and data in the organization that could benefit from a cross-cloud deployment model. This enables workloads to operate collaboratively across different cloud platforms, as well as different on-premises and colocation facilities.

Trend 4: Industry Solutions

There is an upward trend toward industry-specific cloud platforms, with more vendors offering solutions that address vertical business outcomes and help scale digital initiatives. Over 50% of organizations will use industry cloud platforms to accelerate their business initiatives by 2029, according to Gartner.

Gartner recommends organizations approach industry cloud platforms as a strategic way to add new capabilities to their broader IT portfolio, rather than as a total replacement. This allows organizations to avoid technical debt and to drive innovation and business value.

Trend 5: Digital Sovereignty

AI adoption, tightening privacy regulations and geopolitical tensions are driving demand for sovereign cloud services. Organizations will be increasingly required to protect data, infrastructure and critical workloads from control by external jurisdictions and foreign government access. Gartner predicts over 50% of multinational organizations will have digital sovereign strategies by 2029, up from less than 10% today.

“As organizations proactively align their cloud strategies to address digital sovereignty requirements, there are already a wide range of offerings that will support them,” said Rogus. “However, it’s important they understand exactly what their requirements are, so they can select the right mix of solutions to safeguard their data and operational integrity.”

Trend 6: Sustainability

Cloud providers and users are increasingly sharing responsibility for sustainable IT infrastructure. This is being driven by regulators, investors and public demand for greater alignment between technology investments and environmental goals. As AI workloads demand more energy, organizations are also under pressure to better understand, measure and manage the sustainability implications of emerging cloud technologies.

Gartner research shows the percentage of global organizations prioritizing sustainability as part of procurement will rise to over 50% by 2029. To deliver greater value from cloud investments, organizations must look beyond environmental impact alone and align their sustainability strategies with key business outcomes.
