Securing AI at Machine Speed: Inside Indonesia’s Enterprise Cyber Response

TLDR:

  • AI adoption across ASEAN enterprises is accelerating faster than governance frameworks can adapt.

  • Indonesia’s Personal Data Protection Law (PDP) sharpens accountability even as AI-specific regulation remains limited.

  • The structural risk is fragmentation of tools, oversight and responsibility.

  • Identity and machine-level monitoring are becoming central control surfaces.

  • Digital trust must scale at the same speed as AI deployment.

Cyber risk in Indonesia is no longer peripheral. It is operational and persistent. AIBP’s 2025 Cyber Resilience in ASEAN report shows that 52 percent of organisations plan to increase cybersecurity budgets in the next 12 months, reflecting heightened executive attention to resilience. At the same time, persistent exposure across cloud environments, identity management and monitoring discipline continues to shape enterprise risk profiles.

These dynamics were examined during a luncheon in Jakarta, supported by Zscaler and held as part of the AIBP Executive Briefing 2025 to 2026 (Insights, AI and Data, and Cyber Resilience), where it was clear that AI is already embedded in daily workflows and that governance models must adapt at pace.

The issue is whether AI can scale without increasing operational or compliance risk. Security design and identity governance will determine the outcome.

Machine Speed Risk

AI alters operating velocity. It increases productivity, but it also compresses detection windows and expands attack surfaces. Defensive models designed for slower environments are under pressure as automation becomes embedded across enterprise systems.

Brad Lee, Managing Director ASEAN Region, Zscaler, captured this shift directly: “We are not fighting with a human being. We're fighting with the machine.”

Enterprises are observing similar pressure internally. Isa Falaq Albashar, Department Head of SOC, Bank Negara Indonesia, reflected on how defensive assumptions must evolve: “We should also consider the possibility of a breach. Once we have established our parameters, configured the firewalls, and updated our signatures, we must ensure that attackers can be detected if they manage to penetrate our system. So we need to create rules to detect any anomalies in our system.”

Acceleration reshapes exposure. Security controls must therefore operate continuously rather than reactively.
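The detection rules Albashar describes can be illustrated with a minimal sketch: compare each account’s current activity against its own recent baseline and flag sharp deviations. The class, account names and thresholds below are hypothetical, not BNI’s actual tooling; a production SOC would combine many signals inside a SIEM.

```python
from collections import defaultdict, deque


class AnomalyRule:
    """Flag accounts whose event rate jumps well above their own baseline.

    Deliberately small: real rules would also weigh geography, device
    and time-of-day, and would feed a SIEM rather than print alerts.
    """

    def __init__(self, window: int = 10, threshold: float = 3.0):
        self.window = window          # samples kept per account as the baseline
        self.threshold = threshold    # multiple of the baseline that triggers an alert
        self.history = defaultdict(lambda: deque(maxlen=window))

    def observe(self, account: str, events_this_minute: int) -> bool:
        """Record a per-minute event count; return True if it looks anomalous."""
        past = self.history[account]
        anomalous = False
        if len(past) == self.window:  # only judge once a baseline exists
            baseline = sum(past) / len(past)
            anomalous = events_this_minute > self.threshold * max(baseline, 1.0)
        past.append(events_this_minute)
        return anomalous


rule = AnomalyRule(window=5, threshold=3.0)
for minute, count in enumerate([4, 5, 3, 4, 5, 4, 60]):
    if rule.observe("svc-backup", count):
        print(f"minute {minute}: anomaly, {count} events")
```

The point of the design is the quote’s assumption of breach: the rule does not try to keep attackers out, only to make post-compromise behaviour visible.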

From Policy to Architecture

As AI moves from pilot environments into production systems, governance is shifting from advisory policy to enforced architecture. Visibility is foundational, but structure determines consistency.

Felix Lam, Head of Solution Engineering (ASEAN), Zscaler, emphasised the starting point: “Everyone agrees that the first step is to have visibility, because you cannot control what you cannot see.”

Enterprises are pairing visibility with structural containment. Danny Natalies, Head of Corporate Information Technology and System, Kalbe Farma, described governance as a business risk discipline rather than a standalone compliance function. Operating across e-commerce, telemedicine and laboratory services, the organisation treats employee data with the same safeguards as customer data and consolidates data governance within a single entity to contain exposure across business lines.

At the implementation layer, Reza Setiadi, Head of SRE, DANA, explained how this translates into system control: “We ensure every AI tool, or every productivity AI tool that we are using, is enterprise-ready. What we mean by enterprise is that we need to ensure that every prompt, every response or result generated by the AI, is never stored by the provider… we govern all the models… So we call it the AI gateway.”
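The gateway pattern Setiadi describes can be sketched as a single policy-enforcing choke point between users and model providers. Everything named below is hypothetical (the allowlist, the provider call, the audit fields); DANA’s actual implementation is not public. The sketch shows the two policies from the quote: only governed models are reachable, and prompt and response content is never persisted.

```python
import hashlib
import time

# Hypothetical allowlist of enterprise-approved models.
APPROVED_MODELS = {"internal-llm-v1", "vendor-llm-enterprise"}


class AIGateway:
    """Single choke point between users and model providers.

    The audit trail keeps metadata and content hashes only, so the
    gateway itself never stores prompts or responses.
    """

    def __init__(self, provider_call):
        self.provider_call = provider_call   # injected function: (model, prompt) -> str
        self.audit_log = []                  # metadata only, no content

    def complete(self, user: str, model: str, prompt: str) -> str:
        if model not in APPROVED_MODELS:
            raise PermissionError(f"model {model!r} is not enterprise-approved")
        response = self.provider_call(model, prompt)
        self.audit_log.append({
            "user": user,
            "model": model,
            "ts": time.time(),
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        })
        return response


# Stubbed provider call, for illustration only.
gateway = AIGateway(lambda model, prompt: f"[{model}] ok")
print(gateway.complete("alice", "internal-llm-v1", "summarise Q3 risks"))
```

Routing every model call through one gateway is what makes the later audit step tractable: there is one log to inspect, and it contains no personal data to protect.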

Embedding governance into architecture reduces reliance on post-deployment oversight and strengthens auditability under PDP obligations.

Regulatory Ambiguity, Enterprise Responsibility

Indonesia’s PDP establishes accountability for personal data management. However, AI-specific governance standards remain less defined. Enterprises are therefore building internal frameworks while regulation evolves.

Mohamad Diaz Permana, AI Governance Team Leader, Bank Rakyat Indonesia, described how institutions look beyond domestic regulation when shaping standards: “The common issue in banking is that we don’t have any regulation specific to AI, so we tend to look externally for guidance. We get references from other countries … basically how we can develop AI in a way that is safe, fair and responsible, and how we can use AI tools like Copilot and GPT, but in a safe and secure way.”

Governance, in this context, becomes an internal operating discipline rather than a compliance afterthought.

Identity as the New Control Surface

As AI integrates deeper into enterprise environments, identity becomes central to control, both human and machine.

Srinivas Kannan, Business Value Consulting, Zscaler, articulated the architectural principle: “We want to move towards what we call as zero trust, which means any interaction, anything and everything to do with users and the data, you do not have any trust towards it, and you have to verify every aspect of it.”

Enterprises are observing how threat focus extends beyond user credentials. Raditio Ghifiardi, Vice President Head of IT Security Strategy and Architecture, Indosat Ooredoo Hutchison, noted the growing risk surface: “The threat actors try to focus on identity. They try to go to the machines that we are less concerned about. We are concerned about user access. But they also try to attack machine identities. Then the service accounts, because service accounts have less attention. So first, we must monitor, and then we can control it.”

Monitoring therefore extends to service accounts, machine credentials and automated processes that historically received less scrutiny.
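Ghifiardi’s point, monitor first, then control, can be sketched as a baseline policy per machine identity, checked on every authentication event. The accounts, hosts and hours below are invented for illustration; in practice these baselines come from an identity inventory, not hand-written dictionaries.

```python
from dataclasses import dataclass, field


@dataclass
class ServiceAccountPolicy:
    """Expected behaviour for one machine identity (illustrative only)."""
    allowed_hosts: set
    allowed_hours: range = field(default_factory=lambda: range(0, 24))


def check_event(policies: dict, account: str, host: str, hour: int) -> list:
    """Return a list of policy violations for one authentication event."""
    violations = []
    policy = policies.get(account)
    if policy is None:
        # An inventoried identity can be monitored; an unknown one cannot.
        violations.append("unknown machine identity")
        return violations
    if host not in policy.allowed_hosts:
        violations.append(f"unexpected host {host}")
    if hour not in policy.allowed_hours:
        violations.append(f"activity at hour {hour} outside baseline")
    return violations


policies = {
    "svc-etl": ServiceAccountPolicy(allowed_hosts={"db01", "etl01"},
                                    allowed_hours=range(1, 5)),  # nightly batch window
}
print(check_event(policies, "svc-etl", "laptop-77", 14))
```

Because service accounts normally behave very predictably, even a crude baseline like this surfaces the lateral movement the quote warns about, a batch credential appearing on a laptop in the afternoon.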

Governance as Operating Discipline

Embedding governance at the start of development reduces downstream risk and strengthens operational stability.

Mohammad Ramadlan, AVP Security Architecture Manager at Superbank, described how this discipline is institutionalised: “We align with governance first. We set everything up from a governance perspective, including defining red lines. We also refer to the circular issued by OJK, and then we build our standards for AI accordingly.”

When AI deployment scales across fragmented systems, operational complexity increases. Where governance is architectural, enforced and identity driven, resilience improves.

Digital trust, in this environment, is the outcome of design decisions.

What This Means for ASEAN Enterprises

Senior leaders should prioritise five structural decisions:

  • Centralise AI model access rather than allowing distributed experimentation.

  • Embed PDP accountability directly into AI workflows and data governance.

  • Integrate AI risk review into development pipelines before deployment.

  • Shift from perimeter-based security to identity- and anomaly-based monitoring.

  • Consolidate governance across subsidiaries to reduce structural fragmentation.

These are architectural decisions, not incremental controls.

Do you have an interesting cybersecurity project that utilizes innovative digital tools or technologies? We want to hear about it! Nominate your project for our upcoming awards here.

About ASEAN Innovation Business Platform (AIBP)

Since its inception in 2012, the ASEAN Innovation Business Platform (AIBP) has been an initiative focused on enabling innovation and strategic partnerships across public and private organisations in Southeast Asia. Through curated engagement activities, AIBP supports the growth of regional government agencies, enterprises and solution providers in navigating key themes such as innovation, digital transformation, and sustainability.


Learn more at www.aibp.sg 
