Securing Enterprise Data in the Age of AI for ASEAN Leaders

The rapid proliferation of Generative AI (GenAI) presents a dual challenge for enterprises across Southeast Asia. While promising unprecedented gains in productivity and innovation, it simultaneously introduces significant cost pressures, operational complexities, and amplified risk exposure. The dilemma for CIOs, CTOs, COOs, and CDOs is clear: how can they harness the transformative power of AI without compromising data integrity, security, and regulatory compliance?

The Evolving Data Landscape and Inadequate Traditional Defences

In ASEAN, data privacy and security consistently rank among the top concerns for organisations grappling with AI disruption. Traditional data protection strategies, often reliant on identifying Personally Identifiable Information (PII) through keywords and regular expressions, are proving insufficient. The sheer volume and complexity of unstructured data, coupled with widespread data sprawl across cloud and on-premise environments, mean that critical sensitive information often remains undiscovered and unprotected.
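To make the limitation concrete, here is a minimal sketch of the kind of keyword-and-regex scanning the paragraph describes. The patterns and documents are illustrative only, not a production rule set: a pattern-based scanner catches well-formed identifiers but returns nothing for a strategy memo whose sensitivity lies entirely in its context.

```python
import re

# Illustrative patterns of the kind traditional DLP rules rely on
# (sample patterns for this sketch, not a production rule set)
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def regex_scan(text: str) -> list[str]:
    """Return the PII categories whose patterns match the text."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

# A document with an obvious identifier is caught...
print(regex_scan("Contact jane.tan@example.com for the invoice."))  # ['email']

# ...but a highly sensitive strategy memo contains no regex-matchable
# tokens at all, so a pattern-based scanner classifies it as clean.
memo = "Board-only: we plan to acquire our main regional competitor in Q3."
print(regex_scan(memo))  # []
```

The second result is the visibility gap the article describes: intellectual property, deal documents, and strategy papers carry no predictable tokens, which is why contextual approaches are needed alongside pattern matching.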

GenAI as a Catalyst for Both Productivity and Peril

The drive for productivity is pushing organisations to adopt GenAI tools at an accelerating pace. However, this adoption significantly expands the data attack surface. Employees are increasingly using public AI tools, often without corporate oversight, and internal AI copilots are training on vast internal datasets. This creates "shadow AI," leading to critical visibility gaps regarding what sensitive data is being ingested and processed.

The risk is particularly acute with tools like Microsoft Copilot, which, by default, has access to an organisation's Microsoft 365 data. If sharing permissions are overly permissive (e.g., "everyone in the organisation" links), Copilot can inadvertently surface highly confidential information in response to user prompts. This non-malicious data leakage, historically a significant concern, is now amplified tenfold by GenAI, making robust guardrails imperative.
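A minimal sketch of the pre-rollout audit this paragraph implies: flag confidential items whose sharing links are scoped to the whole organisation (or wider), since any employee's Copilot prompt can reach them. The `SharedItem` record and the inventory below are hypothetical; a real audit would pull link data from the tenant's sharing reports rather than hard-coded values.

```python
from dataclasses import dataclass

# Hypothetical inventory record for this sketch; a real audit would pull
# these fields from the tenant's sharing-link reports.
@dataclass
class SharedItem:
    path: str
    sensitivity: str   # e.g. "public", "internal", "confidential"
    link_scope: str    # e.g. "specific-people", "organization", "anyone"

def copilot_exposure(items: list[SharedItem]) -> list[str]:
    """Flag confidential items that broad sharing links make reachable
    by any employee's AI assistant prompt."""
    broad = {"organization", "anyone"}
    return [i.path for i in items
            if i.sensitivity == "confidential" and i.link_scope in broad]

inventory = [
    SharedItem("/hr/salary-bands.xlsx", "confidential", "organization"),
    SharedItem("/marketing/brand-kit.pptx", "public", "anyone"),
    SharedItem("/legal/m-and-a-draft.docx", "confidential", "specific-people"),
]
print(copilot_exposure(inventory))  # ['/hr/salary-bands.xlsx']
```

The point of the sketch is the guardrail logic, not the data source: tightening link scope on flagged items before an AI assistant is enabled closes the non-malicious leakage path described above.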

The Intricate Challenge of Operationalising Granular AI Governance

The ambition to operationalise AI-driven data security is frequently met with a complex array of challenges that undermine traditional governance frameworks. Four recurring challenges stood out in the workshop:

  1. Inaccurate and insufficient data classification remains a pervasive problem. Traditional methods, often relying on rigid regular expressions or manual user input, lead to widespread mislabeling or missing labels. As Arthur Yeung, Senior Pre-Sales Engineer - APAC at Concentric AI, highlighted, "if you rely on users, humans, there's always a behavior like this", where individuals tend to choose either the lowest (public) or highest (secret) classification to avoid friction or accountability, compromising the accuracy essential for effective protection. As Chris Farrelly, Vice President for APJ at Concentric AI, succinctly puts it, "Classification without accuracy is garbage in, garbage out". Without accurate, granular labels, downstream security tools such as DLP solutions cannot function optimally.

  2. Pervasive data sprawl and visibility gaps obscure the true landscape of sensitive information. Organisations often struggle to identify where critical data resides, who has access, and how many duplicates exist across diverse repositories. Instances of single documents being duplicated up to fourteen times across multiple systems are not uncommon. This issue is further compounded by "shadow AI," where employees utilise public GenAI tools, creating invisible data ingestion points and expanding the attack surface without corporate oversight.

  3. Overly permissive access controls in widely used platforms present significant exposure risks. Default sharing settings, such as "everyone in the organisation" links in Microsoft 365, can inadvertently expose sensitive data when accessed by AI agents like Copilot. These agents, by design, can retrieve information from any data they are permitted to access, turning seemingly innocuous sharing into a major vulnerability.

  4. Limitations of legacy security tools and operational burdens hinder agile response. Traditional DLP and data governance solutions are often labour-intensive, generate numerous false positives, and struggle to adapt to the dynamic nature of modern data. Their reliance on sampling methodologies means that training periods can be extensive—up to 18 months for large datasets in some cases—making them slow to operationalise and costly to maintain. This creates a significant challenge in balancing the imperative for AI-driven productivity gains with the need for robust data security.
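The duplication problem in point 2 above, where a single document can be copied up to fourteen times across systems, is commonly surfaced at scale by content hashing. The sketch below assumes byte-identical copies (near-duplicates would need fuzzier techniques such as shingling or embeddings); the toy corpus stands in for a crawl across repositories.

```python
import hashlib
from collections import defaultdict

def find_duplicates(files: dict[str, bytes]) -> dict[str, list[str]]:
    """Group file paths by the SHA-256 digest of their contents;
    any group with more than one path is a set of exact duplicates."""
    by_digest: defaultdict[str, list[str]] = defaultdict(list)
    for path, content in files.items():
        by_digest[hashlib.sha256(content).hexdigest()].append(path)
    return {digest: paths for digest, paths in by_digest.items() if len(paths) > 1}

# Toy corpus standing in for a crawl across repositories
corpus = {
    "sharepoint/contract-final.docx": b"contract v3",
    "onedrive/contract-final (1).docx": b"contract v3",
    "teams/brief.docx": b"briefing notes",
}
dupes = find_duplicates(corpus)
print(dupes)  # one duplicate group containing both contract copies
```

Deduplication of this kind both shrinks the attack surface and reduces storage and scanning costs, which is why data hygiene appears again in the recommendations below.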

What This Means for ASEAN Enterprises

Organisations may find success by moving beyond traditional, PII-centric classification toward a more nuanced, AI-driven contextual understanding of their data. By expanding the scope of protection to include intellectual property, financial records, and strategic documents, leaders can create a more comprehensive security blanket. This evolution is often supported by investing in contextual data intelligence, leveraging tools like Large Language Models and Natural Language Processing to accurately discover and categorise data at scale, which serves as a vital foundation for modern security operations.

Rather than implementing blanket bans on emerging technology, a more balanced approach involves developing granular guardrails for Generative AI. Leaders may focus on establishing clear policies regarding how data is ingested, processed, and shared across both public and internal tools. This focus on usage rather than restriction allows for innovation while maintaining safety. Additionally, a renewed focus on data hygiene and lifecycle management can help organisations reduce costs and minimise their attack surface by actively addressing data sprawl and duplication.

Finally, a holistic approach to risk management can be achieved by fostering cross-functional AI governance councils. By bringing together perspectives from security, IT, legal, compliance, and various business units, leaders can ensure that AI policies are well-rounded and aligned with the broader goals of the enterprise. This collaborative structure helps ensure that risk management is not just a technical hurdle, but a shared strategic priority across the entire organisation.

Conclusion

The journey towards AI-driven transformation in Southeast Asia is fraught with both immense opportunity and significant peril. The ability to innovate at speed while maintaining robust data governance is not merely a technical challenge but a strategic imperative. By embracing contextual data intelligence and implementing granular, automated guardrails, ASEAN enterprises can navigate the AI paradox, securing their digital assets and building resilience in an increasingly complex landscape.

Read more in the 2025/26 AIBP Enterprise Innovation Market Overview here.

To explore how these benchmarks apply to your specific industry or to join our upcoming peer-learning sessions, please reach out to us to find out more.


