The Identity Nexus: Securing Thailand’s AI Enterprise

As Thai enterprises move from piloting AI to deploying it in production, the stakes for the C-suite have shifted. The question is no longer how capable the models are, but how much structural risk they introduce. With rising costs and the strict requirements of the Personal Data Protection Act (PDPA), the adoption of AI agents has opened a new gap in the security perimeter.

Balancing Rapid Deployment with Structural Control

The central tension for Thai leaders is the trade-off between speed and control. Business units want AI agents deployed quickly to stay competitive. But this haste creates a "governance gap" in which non-human accounts, such as bots and service tools, hold privileged access without the human oversight needed to prevent data leaks or compliance breaches.

Thai realities and the deepfake threat

In Thailand, phishing and fraud remain the top cyber threats, and they are becoming harder to stop as deepfakes mature, with AI convincingly mimicking an executive’s voice or face. Recent evidence shows that AI "hallucination" is now a concrete business risk:

  • Legal Risk: AI producing incorrect information can drive flawed medical or financial decisions, creating significant liability under Thai law.

  • Security Risk: Rogue agents can bypass controls by impersonating humans, rendering password-only authentication insufficient.

"Identity is no longer just about the login; it is about watching the behavior. In a world of deepfakes, the front door is only the start. You need a system that checks the intent of the action, not just the key." — Kenneth Devan, Country Manager at Okta

Structural risks and "ghost" accounts

The identity crisis in Thailand is compounded by legacy IT systems. Many firms run environments that do not track Non-Human Identities (NHIs). A major risk is the "orphan" account: a bot created for a project that remains active long after the project has ended or the staff member who built it has left.

In one case, a bot created by an employee who had left two years earlier was still being used to access a firm’s "crown jewel" data. Without Identity Threat Detection and Response (ITDR), these hidden doors stay open, illustrating the danger of a "set and forget" approach.
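The kind of orphan-account sweep described above can be run against an identity inventory. The sketch below is a minimal illustration, assuming a hypothetical list of service accounts with owner and last-used fields; in practice this data would come from your identity provider or cloud IAM system.

```python
from datetime import datetime, timedelta

# Hypothetical NHI inventory; field names and accounts are illustrative only.
service_accounts = [
    {"name": "etl-bot",      "owner": "somchai", "last_used": datetime(2026, 3, 1)},
    {"name": "report-agent", "owner": "narin",   "last_used": datetime(2024, 1, 15)},
]
active_staff = {"somchai"}        # "narin" left the company two years ago
STALE = timedelta(days=90)        # assumed inactivity threshold
now = datetime(2026, 3, 18)

def find_orphans(accounts, staff, now):
    """Flag NHIs whose human owner has left, or that have gone unused too long."""
    return [a["name"] for a in accounts
            if a["owner"] not in staff or now - a["last_used"] > STALE]

print(find_orphans(service_accounts, active_staff, now))  # ['report-agent']
```

A sweep like this, scheduled regularly, turns "set and forget" into a routine offboarding check for bots as well as people.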

Redefining the perimeter

To bridge the gap between innovation and safety, Thai leaders are moving toward fine-grained authorization. Instead of granting broad, standing access, every individual action is checked at the moment it is attempted.

Modern setups must now include:

  • Continuous Monitoring: Moving from a one-time login check to real-time behavioral tracking.

  • Human-in-the-loop: In Thai hospitals and banks, AI can triage and sort data, but the final approval for a high-impact action must remain a logged human event.
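The two requirements above can be combined in a per-action authorization gate. The sketch below is an assumption-laden illustration, not any vendor’s API: action names and the split between auto-approved and human-gated actions are invented for the example.

```python
# Hypothetical policy: low-risk actions are auto-approved; high-impact
# actions require a named human approver, and every decision is logged.
AUTO_APPROVE = {"read_summary", "classify_document"}
HUMAN_GATE   = {"transfer_funds", "release_medical_record"}

audit_log = []

def authorize(agent_id, action, human_approver=None):
    """Check each action at the moment it is attempted, and log the decision."""
    if action in AUTO_APPROVE:
        decision = "allowed"
    elif action in HUMAN_GATE and human_approver:
        decision = f"allowed (approved by {human_approver})"
    else:
        decision = "denied"
    audit_log.append((agent_id, action, decision))
    return decision.startswith("allowed")

authorize("triage-bot", "classify_document")           # True: low-risk, automatic
authorize("triage-bot", "transfer_funds")              # False: no human approver
authorize("triage-bot", "transfer_funds", "dr_anong")  # True: logged human event
```

The point of the design is that the human approval is itself a recorded event, so accountability survives in the audit trail.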

What this means for Thai leaders

To manage AI and identity, senior leaders should take these practical steps:

  • Audit all bots: Inventory every service account and AI agent, and create an offboarding process for bots just as you have for departing human staff.

  • Invest in ITDR: Shift budget toward tools that detect when a bot or user begins behaving anomalously.

  • Log all AI actions: With 2026 regulations approaching, ensure every AI decision is traceable to an accountable human owner in a digital audit log.

  • Follow PDPA closely: Make sure your data-governance rules cover how AI agents handle customer information, not just how humans access it.
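The logging step above amounts to writing a structured record that ties each AI decision to an accountable person. The sketch below shows one possible shape; the field names are illustrative assumptions, not a regulatory schema.

```python
import json
from datetime import datetime, timezone

def log_ai_decision(agent, action, human_owner, outcome):
    """Serialize one AI decision as a structured record tied to a human owner.
    Field names are hypothetical, chosen for illustration only."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "action": action,
        "human_owner": human_owner,  # the accountable person behind the agent
        "outcome": outcome,
    }
    return json.dumps(record)

# Example: a hypothetical loan-scoring agent with a named accountable owner.
entry = log_ai_decision("loan-agent", "score_application", "khun_prasert", "approved")
print(entry)
```

Writing such records append-only, to storage the agent itself cannot modify, is what makes the log usable as evidence when accountability questions arise.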

Conclusion

The message for Thailand’s digital leaders is clear: failing to secure the identities of both humans and bots turns AI into a liability rather than an advantage. By closing the gap between speed and control, CIOs can ensure that growth rests on solid foundations rather than a house of cards.

For more content on cybersecurity in ASEAN, you may check out AIBP’s latest report here

To explore how these benchmarks apply to your specific industry or to join our upcoming peer-learning sessions, please reach out to us to find out more.

This writeup is based on discussions from the AIBP closed-door session "Who takes accountability for AI ‘hallucinations’?" held on 18 March 2026.

To join upcoming peer-learning sessions, visit:
https://www.aibp.sg/upcoming
