Key Takeaways
AI strengthens state power in fragile authoritarian contexts by expanding capacity without sufficient accountability, increasing risks of surveillance and misuse.
AI amplifies existing systems of propaganda by accelerating and scaling disinformation and narrative control.
AI-driven development can deepen inequality due to unequal data representation, digital access gaps, and low levels of AI literacy.
Introduction
Artificial intelligence (AI) is increasingly embedded in governance systems worldwide, reshaping public administration, security infrastructures, and service delivery mechanisms. Governments are adopting AI-driven tools not only to enhance efficiency but also to advance development objectives aligned with the Sustainable Development Goals (SDGs), particularly in healthcare, education, and economic growth.
In response, a growing number of jurisdictions have developed AI governance frameworks emphasising risk-based regulation, data protection, and ethical standards. Prominent examples include the frameworks advanced by the EU AI Act, the OECD AI Principles, and the UNESCO Recommendation on the Ethics of Artificial Intelligence. These models promote transparency, accountability, and human-centred AI.
However, these frameworks largely presuppose the existence of stable democratic institutions, regulatory capacity, and accountability mechanisms. Using Myanmar as a case study, this article argues that AI does not simply enhance governance in fragile and authoritarian contexts; it also reshapes state power, amplifies information control, and risks reinforcing structural inequality.
Artificial Intelligence and Governance under Polycrisis
Artificial intelligence has the potential to transform development outcomes across sectors. In healthcare, AI supports disease detection and treatment optimisation; in agriculture, it enables precision farming and climate adaptation; and in education, it expands access through digital learning platforms. These applications highlight AI’s potential to support inclusive development. However, the effectiveness of AI depends heavily on governance conditions. Its impact is increasingly shaped by what scholars describe as a polycrisis: the intersection of multiple, overlapping crises such as political instability, economic disruption, technological change, and information disorder. In contexts like Myanmar, these crises do not occur in isolation. Political instability following the 2021 coup, combined with digital fragmentation, economic constraints, and social inequalities, creates a complex environment in which AI systems are introduced and deployed.
AI is often framed as a tool for enhancing state capacity. Yet in contexts where governance structures are weak, it may instead produce unaccountable capacity. In Myanmar:
Regulatory institutions are limited
Legal safeguards are weak
Oversight mechanisms are minimal
According to Access Now (2023), the rapid and unchecked deployment of AI systems, particularly in surveillance and public-sector decision-making, creates conditions in which these technologies operate without the necessary transparency, accountability, or safeguards. Rather than strengthening governance, AI risks reinforcing centralised control without checks and balances.
AI and Information Control in a Fragile Authoritarian Country
In authoritarian contexts, artificial intelligence is increasingly used to enhance information manipulation and political control. The governments of several countries have drawn on disinformation techniques originally pioneered in Russia to implement sophisticated information operations that spread messages via fake Facebook accounts and false news stories, in some cases intentionally contributing to communal violence against religious minorities. Across the region, governments, activists, and non-state armed groups actively use social media to gain domestic and international support for their causes. Similarly, in Myanmar, propaganda and disinformation are not new; they have long been embedded in state and military communication strategies.
Recent reporting by The Irrawaddy shows how the military regime has institutionalised its information operations. For example, the junta has established a multilingual propaganda body to counter international criticism, demonstrating a coordinated effort to shape narratives beyond domestic audiences. In another case, the regime dismissed reports of civilian casualties as “fake news”, reflecting a broader strategy of denying accountability and controlling information flows.
These examples illustrate that information manipulation in Myanmar is already systematic and strategic. Social media platforms such as Facebook have further amplified this dynamic, enabling the rapid dissemination of misleading content and contributing to public confusion and polarisation.
The emergence of AI technologies significantly intensifies these existing practices. AI-generated content, including synthetic images, videos, and text, can:
scale propaganda production rapidly
produce highly realistic but fabricated content
enable targeted and adaptive disinformation campaigns
Scholarly research on synthetic media highlights how these technologies can erode trust in information ecosystems and undermine accountability, particularly in environments with weak media literacy and restricted access to independent information (see Pawelec, 2022). Myanmar has long experienced state-led information control, but digital technologies have significantly expanded its reach. Social media platforms, particularly Meta’s Facebook, play a central role in shaping public discourse, often acting as the primary source of information.
A notable case in Myanmar illustrates how emerging technologies intersect with existing propaganda practices. The junta was accused of using deepfake technology to support corruption allegations against Aung San Suu Kyi, after releasing a video confession by a detained official. The footage triggered widespread public scepticism, with observers noting inconsistencies such as unsynchronised lip movements and unusual audio patterns. While experts debated whether the video was definitively a deepfake or a coerced statement, the incident highlights a critical issue: in low-trust and restricted information environments, the mere possibility of AI manipulation is enough to shape public perception and generate uncertainty.
At the same time, misinformation does not always require advanced AI. A recent report by Agence France-Presse found that social media posts falsely presented an old image of Aung San Suu Kyi as a recent photo, demonstrating how easily misleading content can circulate in Myanmar’s information environment. Public discourse has pointed to suspected manipulated or synthetic content, which has fuelled speculation and public anxiety, particularly regarding high-profile political figures. In this context, AI acts as a force multiplier, scaling both the speed and reach of information manipulation. Such cases show that low-tech misinformation and high-tech AI-generated content exist on the same spectrum, reinforcing each other.
In Myanmar, where verification is already difficult due to censorship and limited media freedom, AI-generated or AI-suspected misinformation presents even greater risks. These risks are further amplified by the growing use of encrypted platforms such as Telegram, where military-linked channels and affiliated networks have been reported to circulate large volumes of unverified and, in some cases, AI-generated videos related to the ongoing conflict. In the wake of the 2021 coup, the military regime, or State Administration Council (SAC), quickly shifted its primary information operations from Facebook to alternative platforms, including Telegram, highlighting the adaptability of disinformation networks. The closed and decentralised nature of such platforms makes monitoring and verification significantly more difficult, allowing misleading or fabricated content to spread rapidly with limited accountability.
At the same time, there are rising concerns about the use of AI tools to generate non-consensual, sexually explicit content, particularly targeting women. Reports and public discussions indicate that manipulated or synthetic explicit videos have been used to harass, intimidate, and discredit female activists, journalists, and public figures. These practices reflect a broader pattern in which digital technologies are weaponised along gendered lines, exacerbating existing vulnerabilities and reinforcing social harms.
AI, Inequality, and Digital Literacy Gaps
AI is often presented as a tool for improving access to information and expanding opportunities. However, its impact is deeply shaped by existing social and digital inequalities. In contexts such as Myanmar, these inequalities are not only economic or geographic; they are also reflected in uneven access to digital knowledge, skills, and information systems, and embedded in digital infrastructures themselves. Because AI systems are fundamentally dependent on data, they are highly sensitive to these structural disparities, particularly in fragile and linguistically diverse contexts.
Key challenges include:
unequal access to digital infrastructure between urban and rural communities
limited inclusion of minority groups and languages in digital systems
significant gaps in digital and AI literacy across different segments of society
These factors influence who benefits from AI and who is left behind. Communities with limited connectivity or lower levels of digital literacy are less able to access AI-driven services, while also being more vulnerable to misinformation and manipulation.
A critical dimension of this challenge is the generational divide in digital and AI literacy. Younger populations, particularly in urban areas, are generally more active users of social media and digital platforms. They are more likely to engage with emerging AI tools and integrate them into everyday activities such as learning, communication, and content creation. However, this familiarity does not always translate into critical understanding: without the digital literacy needed to verify information, even frequent users can contribute to the rapid spread of misinformation.
In contrast, older generations often face barriers in accessing and navigating digital technologies. Limited exposure, lower confidence in using AI tools, and difficulties in evaluating online information can lead to exclusion from digital services or increased susceptibility to misleading content. In some cases, this results in dependence on informal information networks, which may further amplify misinformation.
Platforms such as Facebook and Telegram play a central role in shaping these dynamics. Information spreads rapidly across these platforms, often without effective verification mechanisms. In such environments, both overconfidence among younger users and limited digital literacy among older users strain an already fragile information ecosystem.
These structural gaps shape how AI systems interpret and represent reality. When datasets exclude or simplify certain populations, AI outputs risk becoming systematically biased, incomplete, or misleading. A particularly illustrative example is Myanmar’s long-standing use of Zawgyi encoding, a non-standard font system historically used across digital platforms. Unlike Unicode, which is internationally standardised, Zawgyi is not fully compatible with modern computational systems. During Myanmar’s transition from Zawgyi to Unicode, for example, Facebook reported that identical posts written in the two encodings were treated differently by its algorithms, affecting content visibility, moderation, and information retrieval. Google similarly noted measurable improvements in search and input tools following Unicode standardisation. This demonstrates a critical point: when linguistic systems are fragmented, AI systems inherit and reproduce that fragmentation.
Therefore, AI does not create inequality, but it encodes, scales, and amplifies existing disparities across data, language, and generational access.
Fragmented Governance and Emerging Actors
Myanmar’s governance landscape is increasingly fragmented, with alternative actors such as the National Unity Government (NUG) playing a growing role. While the NUG actively counters military narratives through official statements, social media engagement, and diaspora-supported networks, it does not yet operate a formalised AI governance or verification system. As a result, its responses to misinformation remain largely reactive and decentralised rather than systematic.
At the same time, the absence of trusted, institutionalised verification mechanisms combined with the proliferation of AI-generated and manipulated content contributes to a broader environment of information uncertainty. In such conditions, competing narratives circulate simultaneously, making it increasingly difficult for the public to distinguish credible information from misinformation.
This raises important questions about the future of AI governance:
Can alternative actors develop more accountable digital systems?
How can AI support service delivery in contested areas?
What role can decentralised governance play in shaping technology use?
At present, these dynamics remain underdeveloped, highlighting a critical area for further research and policy engagement.
Implications and Recommendations
1. Context-Sensitive AI Governance Frameworks
Global AI governance models must be adapted to fragile contexts. This includes prioritising minimum safeguards, such as transparency standards and basic oversight mechanisms, even in low-capacity environments. Efforts to strengthen AI governance in Myanmar should also consider broader regional developments in Southeast Asia. Several countries, including Indonesia, Malaysia, Singapore, and Thailand, have already adopted national AI strategies or ethical frameworks to guide investment, innovation, and risk management. These initiatives highlight a growing regional trend toward integrating AI into economic development and public service delivery.
However, Myanmar remains significantly behind in AI readiness, governance capacity, and digital infrastructure. At the same time, past cases, such as the role of social media in amplifying violence against the Rohingya population, underscore the risks of unregulated digital ecosystems.
2. Investment in AI Literacy and Public Capacity
Improving AI literacy is essential to reduce misuse and vulnerability to misinformation. Targeted education initiatives should focus on:
digital literacy
critical media skills
responsible use of AI tools
3. Addressing Data Inequality
Efforts should be made to ensure more inclusive data systems by:
incorporating minority languages
improving rural data representation
supporting equitable digital infrastructure
4. Safeguarding Information Ecosystems
International organisations and civil society actors should prioritise:
monitoring disinformation
supporting independent media
strengthening fact-checking systems
5. Supporting Alternative Governance Innovation
Emerging governance actors should be supported in developing:
transparent digital systems
accountable AI applications
inclusive service delivery models
Ultimately, strengthening AI governance in Myanmar requires more than technical solutions; it demands coordinated efforts across institutions, society, and regional partnerships. Without such efforts, AI risks reinforcing existing patterns of inequality, misinformation, and political control. Ensuring that AI contributes to inclusive and accountable development will depend on sustained investment in capacity, trust, and governance.
Htay Su Wai is a Junior Research Fellow at the Sustainability Lab of the Shwetaungthagathu Reform Initiative Centre (SRIc) and holds a Master of Public Policy (MPP) from the Hertie School of Governance in Berlin, Germany.
“Advocating Sustainability, Shaping Our Future”



