As 2025 draws to a close, one truth is unmistakable for chief operating officers across Asia: artificial intelligence is no longer a pilot project or boardroom buzzword—it is the operational engine of the future.
But with great opportunity comes great complexity. AI workloads demand unprecedented compute density, ultra-low latency, stringent compliance, and sustainable energy—all while navigating Asia’s uniquely fragmented regulatory and infrastructural landscape.
So how can COOs ensure their AI infrastructure investments deliver long-term operational agility and business value amid rapidly shifting AI models, regulations, and market conditions?
Rethinking infrastructure for the AI era
According to Tejaswini Tilak, VP of marketing for APAC at Digital Realty, the answer begins with “AI-ready infrastructure”: high-density, low-latency, compliant data centres where proximity is non-negotiable. “Legacy infrastructure simply wasn’t built for AI,” she notes.
AI doesn’t just consume data—it thrives on real-time interaction between datasets, applications, and users. This is where the concept of data gravity becomes critical: as data accumulates, it attracts everything else—applications, services, even talent—making it harder to move or interact fluidly.
For COOs, this means infrastructure must move closer to where data and users reside, not the other way around. Gartner (2025) reinforces this, predicting that by 2026, over 60% of AI inference workloads in APAC will run at the edge or in regional hubs, not in distant, centralised cloud regions. The result? Faster decisions, lower latency, and a stronger compliance posture.
The operating model for hybrid AI workloads
As business units demand responsiveness—from factory floors to customer touchpoints—COOs must design operating models that balance compute-intensive AI training with real-time inference.

Tilak advocates a “rewired” IT infrastructure centred on hybrid, distributed architectures. “Data centres remain the engine rooms of the digital economy,” she explains, “but they must now be interconnected, local, and agile.”
Digital Realty’s Pervasive Data Centre Architecture offers a blueprint: repeatable, modular designs that incorporate best practices from over 5,000 global clients across industries.
For COOs, this translates into accelerated deployment, reduced technical debt, and infrastructure that can evolve alongside AI advancements—without costly overhauls.
Building resilience at the edge across Asia
With AI inference migrating to the edge, COOs must rethink their colocation and edge strategies to ensure uptime, performance, and disaster resilience. Tilak points to a distributed digital infrastructure strategy as essential.
“72% of APAC enterprises are already adopting data localisation strategies,” she reveals—a figure consistent with IDC’s 2025 Asia/Pacific Cloud and AI Infrastructure Survey, which found that regulatory pressure and latency concerns are the top drivers of edge adoption.
In markets like India, Indonesia, and Vietnam, where digital transformation is accelerating but power and connectivity remain uneven, resilience hinges on partnerships with infrastructure providers that offer both local presence and global interconnection. The ability to fail over seamlessly, scale on demand, and comply with national data laws is no longer optional—it’s existential.
Standardising governance without sacrificing agility
Asia’s “patchwork” of data sovereignty laws—from China’s PIPL to Singapore’s PDPA and India’s evolving DPDPA—poses a formidable challenge.
Yet Tilak insists COOs can strike a balance: “Plan to keep sensitive data local but design your infrastructure to participate in global ecosystems.”
This duality is enabled by platforms that offer in-market private infrastructure coupled with secure, compliant multi-cloud interconnectivity.
McKinsey (2025) echoes this approach, advising COOs to embed “compliance-by-design” into infrastructure planning. Standardised data governance frameworks—supported by certified providers—can reduce time-to-market by up to 40% while mitigating regulatory risk.
Measuring what matters: Beyond raw compute
Finally, COOs must shift their lens from raw processing power to outcome-driven metrics. Tilak highlights three critical dimensions:
- Energy efficiency (e.g., BCA Green Mark Platinum certification in Singapore),
- Latency and interconnection quality, and
- Infrastructure agility—the ability to adapt without lock-in.
As governments across Asia intensify sustainability mandates—Japan’s GX League, Singapore’s Green Plan 2030, and Thailand’s Bio-Circular-Green model—energy per inference and carbon-aware computing will become key ROI indicators. A recent BCG report notes that sustainable AI infrastructure can reduce the total cost of ownership by up to 18% over five years through efficiency gains and regulatory incentives.
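To make these outcome-driven metrics concrete, the following is a minimal sketch, using entirely hypothetical telemetry and cost figures (it is not Digital Realty’s, Gartner’s, or BCG’s methodology), of how an operations team might track energy per inference, carbon per inference, and cost per AI outcome for a reporting window alongside traditional utilisation dashboards.

```python
# Minimal sketch: deriving outcome-driven AI infrastructure metrics from
# hypothetical telemetry. All figures below are illustrative assumptions,
# not benchmarks or vendor data.

from dataclasses import dataclass


@dataclass
class InferenceWindow:
    """Telemetry for one reporting window of an AI inference service."""
    inferences: int            # total inference requests served
    energy_kwh: float          # metered energy drawn by the serving cluster
    infra_cost_usd: float      # colocation, power, and network cost for the window
    grid_kgco2_per_kwh: float  # grid carbon intensity (varies by market)


def energy_per_inference_wh(w: InferenceWindow) -> float:
    """Watt-hours consumed per inference: a sustainability ROI indicator."""
    return (w.energy_kwh * 1000.0) / w.inferences


def carbon_per_inference_g(w: InferenceWindow) -> float:
    """Grams of CO2 per inference, using the local grid's carbon intensity."""
    return (w.energy_kwh * w.grid_kgco2_per_kwh * 1000.0) / w.inferences


def cost_per_ai_outcome_usd(w: InferenceWindow, outcomes: int) -> float:
    """Infrastructure cost per business outcome (e.g. per resolved ticket)."""
    return w.infra_cost_usd / outcomes


if __name__ == "__main__":
    # Hypothetical week of traffic at an edge inference site.
    window = InferenceWindow(
        inferences=12_000_000,
        energy_kwh=8_500.0,
        infra_cost_usd=14_200.0,
        grid_kgco2_per_kwh=0.41,
    )
    print(f"Energy per inference : {energy_per_inference_wh(window):.3f} Wh")
    print(f"Carbon per inference : {carbon_per_inference_g(window):.3f} g CO2")
    print(f"Cost per AI outcome  : ${cost_per_ai_outcome_usd(window, 450_000):.4f}")
```

Tracked per market and per quarter, metrics like these let COOs compare sites on efficiency and business yield rather than raw compute capacity.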
Collaboration is key
No single organisation can navigate this alone. Tilak underscores the importance of strategic partnerships—with infrastructure providers, utilities, and policymakers. Digital Realty’s collaborations, from biomass-powered data centres in Singapore to its role in founding the Asia-Pacific Data Centre Association, exemplify this ethos.
For COOs in 2026, success won’t be defined by who has the biggest GPU clusters, but by who orchestrates the most innovative, resilient, and sustainable infrastructure ecosystem. The time to act is now: pilot, partner, and position your organisation not just to adopt AI, but to orchestrate it.
Click on the PodChats player to hear the details of Tilak’s perspective on orchestrating the organisation’s AI infrastructure as viewed by the COO in 2026. Questions discussed include:
- How can we ensure our AI infrastructure investments deliver long-term operational agility and business value amid rapidly shifting AI models, regulations, and market conditions across Asia?
- What operating model enables us to efficiently manage both compute-intensive AI training and real-time, low-latency inference—especially as business units demand responsiveness from factory floors to customer touchpoints?
- As AI inference moves closer to end users and industrial operations, how must we evolve our edge and colocation footprint to guarantee uptime, performance, and disaster resilience across diverse Asian markets?
- Given Asia’s patchwork of data sovereignty laws, how can we standardise data governance across markets while enabling seamless AI operations and avoiding regulatory penalties or business delays?
- Beyond raw compute, which operational metrics—such as time-to-decision, energy-per-inference, or cost-per-AI-outcome—should drive our AI infrastructure strategy and investment reviews?
- What strategic partnerships—with digital infrastructure providers, energy utilities, and technology vendors—are essential to de-risk and accelerate our AI operational roadmap across Asia in 2026–2027?
- Drawing from your observations in the market in 2025, what advice can you offer COOs and other members of the C-Suite when it comes to their investment strategies for 2026?
- With 2026 just around the corner, what are your expectations of things to come?


