AI Is Scaling Faster Than It Can Be Secured: What Indonesian Enterprises Are Fixing First
70% of Indonesian enterprises plan to invest in AI. 41% name cybersecurity and privacy as the constraint. 34% name legacy infrastructure. The arithmetic of these three figures, drawn from AIBP's Indonesia Innovation Survey, describes the gap that closed-door discussions in Jakarta keep returning to: the ambition to deploy AI is now outpacing the architecture meant to support it.
This was the pattern technology, infrastructure, and security leaders described at an AIBP Executive Discussion on 23 April. Pilots succeed. Production stalls. The friction points are the conditions under which models meet real data, real users, and real systems.
The session extended themes raised at the February discussion (covered in Securing AI at Machine Speed: Inside Indonesia's Enterprise Cyber Response). What has shifted is where the conversation now sits. In February, cyber response was about machine-speed reaction. In April, it had moved upstream, into the design of the data and infrastructure layers AI must operate through. Indonesia's Ministry of Communication and Digital Affairs has flagged the same direction of travel, noting that AI adoption is moving faster than regulatory readiness.
The boundary is no longer a network
A consistent observation across the room was that the boundary within which AI must be secured is no longer a network. Users, applications, and workloads sit across on-premise systems, cloud platforms, and SaaS tools, and AI accelerates the volume and frequency of interactions between all of them. Security designed for a centralised environment does not transfer cleanly. The risk is not coming from AI itself; it is emerging from how AI is being deployed across environments that were never designed to share visibility or control.
Pilots succeed under conditions production cannot reproduce
Across sectors, leaders described the same arc. AI proofs-of-concept generate results in controlled environments and produce internal momentum. The slowdown begins when those pilots meet the requirements of secure data movement, integration with legacy infrastructure, and consistent enforcement of access controls. Dana and Ciputra Life both pointed to this transition as the point at which pilot-stage assumptions about data availability and system access stop holding.
This is one reason 31% of Indonesian enterprises continue to report unclear ROI. The question is not whether AI generates value in a pilot. It is whether that value can be sustained once the system around it is brought into compliance with the controls enterprises actually operate under.
Legacy systems are now a security constraint, not only an innovation drag
Legacy systems were raised less as a barrier to innovation and more as a barrier to consistent control. Participants highlighted that fragmented data and long-established systems make uniform enforcement across AI use cases difficult. Participants also described systems not yet fully integrated with modern platforms, which limits both scalability and the ability to apply a consistent security posture. Connecting these environments to cloud and AI platforms is not a bounded integration project. It expands the surface that must remain visible and controllable.
Visibility before enforcement
Several organisations described converging on the same starting point: route AI activity through centralised gateways, monitor across systems, and control how data is accessed and shared. The approaches were developed independently, but the principle is common. Enterprises are recognising that they cannot secure what they cannot see, and that the first investment is in visibility rather than enforcement.
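The gateway pattern described above can be sketched in a few lines. This is a minimal illustration, not any participant's actual implementation: all names (AIGateway, the stub model, the user and system labels) are hypothetical, and a production gateway would write to a central log store rather than an in-memory list.

```python
from datetime import datetime, timezone
from typing import Callable

class AIGateway:
    """Single entry point for AI calls: record every interaction before forwarding."""

    def __init__(self, model_fn: Callable[[str], str]):
        self.model_fn = model_fn   # the model backend being wrapped
        self.audit_log = []        # in practice, a central log store

    def query(self, user: str, system: str, prompt: str) -> str:
        # Visibility first: log who asked, from which system, and when,
        # before any enforcement decision is made.
        self.audit_log.append({
            "user": user,
            "system": system,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prompt_chars": len(prompt),  # log metadata, not the raw data itself
        })
        return self.model_fn(prompt)

# Every call, from any environment, passes through one observable point.
gateway = AIGateway(model_fn=lambda p: "stub response")
gateway.query(user="analyst-01", system="crm-cloud", prompt="Summarise churn drivers")
gateway.query(user="batch-job", system="on-prem-erp", prompt="Classify tickets")
```

The point of the sketch is the ordering: the audit record exists for every interaction regardless of whether any policy later blocks it, which is what "visibility before enforcement" means in practice.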
From perimeter to identity
The second shift is from perimeter-based control to access-based control. As users and systems operate across multiple environments, security increasingly hinges on who can access data, how that access is granted, and how it is monitored. Identity and access management is becoming the operational layer where AI risk is contained.
Across financial institutions, access is increasingly controlled at the point of data interaction rather than at the network level. Bank Jakarta reflects this through tighter constraints on how data is shared across AI tools and external applications, governing AI use at the point of contact with data rather than at the edge of the network.
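Controlling access at the point of data interaction, rather than at the network edge, can be illustrated with a small sketch. The policy table, identities, and data classes below are invented for illustration; a real deployment would back this with an identity provider and a policy engine.

```python
# Hypothetical grants: which identities may touch which classes of data,
# evaluated per interaction rather than per network segment.
GRANTS = {
    "underwriter-7": {"policy_data"},
    "ai-summariser": {"policy_data", "claims_data"},
}

def check_access(identity: str, data_class: str) -> bool:
    """Decide at the moment of data interaction, independent of where the caller sits."""
    return data_class in GRANTS.get(identity, set())

def fetch_records(identity: str, data_class: str) -> list:
    # The check runs every time data is touched, not once at the perimeter.
    if not check_access(identity, data_class):
        raise PermissionError(f"{identity} may not read {data_class}")
    return ["record-1", "record-2"]  # placeholder for the actual data fetch
```

The design choice this illustrates is that an AI tool and a human user are governed by the same mechanism: both are identities whose grants are evaluated at the data boundary.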
The same logic is reshaping what data functions are responsible for. At Astra International, the focus has moved away from building machine learning pipelines towards overseeing the observability of AI outputs, ensuring that what AI produces can be monitored, understood, and trusted before it informs consequential decisions. Access control and output accountability are becoming two sides of the same governance question.
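Output observability of the kind described above can be reduced to a simple gate: run checks on what a model produces before it informs a decision. The checks below are deliberately trivial stand-ins (length and emptiness); this is a sketch of the pattern, not of any organisation's actual review criteria.

```python
def review_output(output: str, max_len: int = 500) -> dict:
    """Run lightweight checks on a model output before it reaches a decision point."""
    flags = []
    if not output.strip():
        flags.append("empty")           # nothing usable was produced
    if len(output) > max_len:
        flags.append("too_long")        # stand-in for a real quality check
    # An output is only released downstream when no check has flagged it.
    return {"output": output, "flags": flags, "approved": not flags}

result = review_output("Churn is concentrated in month-to-month contracts.")
```

In this framing, access control decides what a model may read, and output review decides what its results may influence: the two sides of the governance question the paragraph above describes.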
The principle is not new; what AI changes is its weight, because the frequency and complexity of cross-system interactions rise sharply once models are in production.
Where regulation aligns with the direction of travel
Indonesia's Personal Data Protection Law sits in the background of these decisions. The controls being discussed (visibility, monitoring, access management) align closely with what enforceable data protection now requires. With the PDP Law operative, accountability over how data is processed and secured across systems is no longer optional, and the security direction enterprises are taking is increasingly consistent with regulatory expectation rather than separate from it.
More budget, the same fragmentation
Across ASEAN, 52% of organisations plan to increase cybersecurity budgets. The Jakarta discussion suggested this will not, on its own, simplify the picture. Most enterprises operate environments assembled incrementally over years, and adding tools to fragmented stacks has historically expanded complexity rather than reduced it. The shift now visible is towards better integration and unified visibility across what is already deployed, rather than further accumulation.
What this means for ASEAN enterprises
The conversations in Jakarta point to several signals worth tracking as AI moves further into production across Southeast Asia.
The constraint on AI scaling is moving from the model layer to the integration layer. Enterprises that were measuring AI maturity in pilot success are increasingly measuring it in production resilience.
Visibility is being treated as a precondition for control, not as a parallel investment. Where centralised gateways and cross-system monitoring are being introduced, they are landing earlier in the AI rollout sequence than enforcement controls.
The locus of security is moving from network perimeter to identity and access. The frequency of cross-environment interaction makes this less of an option and more of a default architectural assumption.
Regulatory and security trajectories are converging. PDP-aligned controls and AI-driven security investments are increasingly the same conversation rather than two parallel ones.
Fragmentation, not under-investment, is emerging as the binding constraint. The question enterprises are now asking is less about how much to spend on security tools and more about how to make existing tools work coherently in environments AI continues to expand.
Are you seeing similar shifts in how Indonesian enterprises are approaching cybersecurity? We are currently mapping how organisations across the region are navigating the evolving threat landscape, from legacy infrastructure challenges to AI-driven risks, and would love to hear your perspective. Share your thoughts in our short Innovation Survey here.