In Indonesia's Financial Services, Transaction-Speed AI Is Exposing the Data Layer

Closed-door discussion, Jakarta, co-hosted by AIBP, MongoDB and Contentsquare, with enterprise participation across BFSI and fintech.

TLDR

  • Indonesia’s BFSI leaders are under pressure to deliver faster digital services while meeting strict regulatory expectations.

  • The bottleneck is rarely “more AI ideas”. It is data sprawl, inconsistent definitions, and production systems that cannot tolerate latency or instability.

  • Teams are prioritising three foundations for scale: trusted data layers, governance that works in day-to-day operations, and an operational data layer that supports real-time decisioning.

  • Alternative and unstructured data are expanding credit and risk models, but only when organisations can integrate them safely and explain outcomes.

In Indonesia’s financial services sector, the cost of moving slowly is increasing. Customers expect faster service, regulators expect stronger controls, and new digital players continue to raise the bar.

That combination forces a hard question for CIOs, CTOs, CDOs and risk leaders: how do you build AI that can operate at transaction speed without weakening trust, auditability, or cost discipline?

AIBP convened a closed-door working group in Jakarta to compare how BFSI and fintech teams are tackling the practical blockers to scaling AI. The discussion moved quickly away from tools and towards the structural issues that make real-time AI difficult in regulated environments.

Pilots succeed; scaling fails on the data layer

Across the room, leaders described the same pattern. Pilots work. Scaling does not. The break point is rarely model performance. It is that the same logic cannot be reproduced reliably across channels, products, and business lines when underlying data sits in fragmented systems with inconsistent definitions and unclear ownership.

This showed up in day-to-day decision making. The operational symptom is familiar: management asks the same question of two teams and receives two different answers, because the underlying data layers are not aligned.

Miftahurrizqi Adhi Nugroho, Data Engineering Manager at DANA Indonesia, described how this has pushed his team toward a single source of truth and a clearer separation between business and technical data layers, anchored in the recognition that multiple groups generate the same data with subtly different meanings. The implication for AI is direct: if the organisation cannot trust its own data, it will not trust any AI system built on top of it.

This framing is changing how teams sequence AI investment. The order of operations being adopted is data layer first, governance second, model deployment third.

Transaction speed is an architecture commitment, not an engineering preference

When AI moves into fraud detection, onboarding, credit decisioning, or next-best-action, latency stops being an internal preference. It becomes part of the customer experience. Yanuar Budi, Senior IT Business Partner at Bank Mandiri, observed that adding a machine-learning layer into existing infrastructure becomes a speed problem before it is anything else, because the response window for live customer journeys is measured in milliseconds.

This exposes the limits of analytics-first architectures. Wei You Pan, Field CTO, Financial Services Industry APAC at MongoDB, put the distinction plainly: "A warehouse is designed for batch reporting, while transaction speed decisioning requires an operational layer that can respond in a short time."

Several leaders described the practical consequences. An operational data layer that supports real-time decisions is now a separate architectural commitment, not an extension of the analytics estate. The two serve different jobs, and conflating them tends to produce systems that are too slow for production AI and too narrow for analytical work.
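The two jobs can be sketched in a toy example. This is an illustrative sketch only (the names `batch_risk_report` and `OperationalLayer` are hypothetical, not from the discussion): the analytical path recomputes an answer by scanning history each time, while the operational path refreshes features out of band so the in-journey decision is a single keyed lookup that fits a millisecond-scale budget.

```python
# Illustrative sketch (hypothetical names) of why one data estate
# struggles to serve both batch reporting and live decisioning.

# Analytical path: scan the full transaction history per question.
# Fine for batch reporting, too slow inside a live customer journey.
def batch_risk_report(transactions):
    totals = {}
    for txn in transactions:
        totals[txn["customer"]] = totals.get(txn["customer"], 0) + txn["amount"]
    return {c: ("high" if t > 1000 else "low") for c, t in totals.items()}

# Operational path: features are refreshed out of band (streaming or
# micro-batch), so the in-journey decision is one keyed lookup.
class OperationalLayer:
    def __init__(self):
        self.features = {}

    def refresh(self, transactions):
        # Runs on its own schedule, never per customer request.
        self.features = batch_risk_report(transactions)

    def decide(self, customer):
        # O(1) lookup: the only work done inside the latency budget.
        return self.features.get(customer, "unknown")
```

The design point is the separation itself: the expensive scan moves to a refresh cycle the customer never waits on, which is what distinguishes an operational layer from an extension of the warehouse.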

Data governance is moving from “policy” to “source of truth”

A second pattern was the quiet redefinition of governance. The framing is shifting from governance as a compliance checklist to governance as the discipline that prevents conflicting answers, duplicated work, and slow decisions.

This is a meaningful change. Treated as compliance, governance slows the business. Treated as source-of-truth discipline, governance becomes a precondition for speed. The shift is being driven by lived experience. Inconsistent definitions and unclear ownership generate rework that compounds across model lifecycles, vendor integrations, and regulator queries.

The scope of governance is also widening. As Wei You Pan noted, the data is everywhere and consolidating it is already a substantial undertaking. That now includes documents, images, and other unstructured sources increasingly feeding AI systems.

Alternative data widens the surface, but not the accountability rules

In a market with a large credit-thin segment, alternative data is changing what underwriting can do. Zuhrianda, AVP of Data at Amartha Financial, described an approach that draws on field-collected demographic, business, and visual data, including photographs of borrower premises, to estimate proxy income and improve credit scoring for borrowers without traditional financial histories. It is one of the more concrete examples of alternative data delivering measurable inclusion outcomes in the region.

The governance counterweight remained explicit throughout the session. For sensitive use cases such as loan approvals, model validation and human accountability are non-negotiable. Sigit Prihatmoko, Vice President, Corporate Innovation Centre at Bank Negara Indonesia, emphasised the operational requirement: “For sensitive use cases like loan proposals… the model must be validated, and the final decision still requires human approval.”

This widening of inputs places further pressure on the data layer. Every new signal type, whether image, text, or behavioural, adds integration, lineage, and explainability requirements that the underlying architecture must absorb.

Consolidation is now a cost-and-pace decision, not a technology preference

As data sprawl grows, so does operational overhead. Multiple tools, duplicated stores, and parallel pipelines raise both maintenance load and the training overhead required to keep teams productive.

Céline Zuberek, Regional Vice President ASEAN at MongoDB, framed this as the practical brake on enterprise AI: "The more tools you have, the more maintenance you have, and that becomes the biggest slowdown preventing companies from moving forward."

Consolidation, in this context, is not a technology preference. It is a cost-and-pace decision. It is also rarely a rip-and-replace exercise. For most institutions, the practical first step is integration, consolidating and harmonising data across existing systems before any consideration of core replacement.

What This Means for ASEAN Enterprises

The Indonesia working group surfaced a set of observations that resonate beyond Jakarta and beyond BFSI.

The binding constraint on AI scale is increasingly data sprawl, and the senior conversation is shifting away from model choice toward data layer coherence. When teams debate what to build first, the answer is landing more often on foundations rather than new use cases.

Governance is being re-scoped as a speed enabler. Where it was once positioned as the brake, leaders are recognising it as the mechanism that keeps decisions consistent, auditable, and defensible. The teams making fastest progress are treating governance as part of the operating model, not a compliance overlay.

At the same time, architecture choices are becoming visible to the business. The distinction between batch warehouses and operational data layers, once an internal engineering discussion, is being raised in board-level conversations because it directly affects customer experience, fraud loss rates, and regulatory exposure.

Alternative data is widening the inclusion frontier, but it is not loosening the accountability constraint. Image-based income proxies, behavioural signals, and field-collected data are entering production. The expectation that a human owns the consequential decision has not moved.

Cost is re-entering the AI conversation. After two years of capability-led investment, leaders are openly discussing the maintenance and run cost of fragmented tooling. The question of consolidation is now on the table in ways it was not twelve months ago: how, in what sequence, and at what cost.

The shared signal across these patterns is straightforward. AI capability in BFSI is being built on data foundations and operating discipline. Where those foundations are coherent, teams scale. Where they are not, AI progress remains expensive, fragile, and difficult to defend in front of regulators.

Join the dialogue

AIBP is continuing this conversation at our upcoming 55th Conference and Exhibition 2026, bringing together leaders across industries to compare how they are scaling AI while maintaining control over risk, cost, and governance. View the programme and register here.
