This article examines the ethical challenges posed by artificial intelligence and argues that a multi-stakeholder, value-based approach, integrating legal regulation, technical expertise, and international ethical frameworks, is essential for governing AI in a way that protects human rights, privacy, and social values.
AI and the Role of Algorithms
As AI has advanced, algorithms have become increasingly important, particularly in machine learning. Algorithms written directly by computer scientists and engineers can be managed through specific control systems for particular tasks; however, the behaviour that machine-learning algorithms derive from the data we feed into AI systems is not fully controllable by computer scientists, especially when it causes harm by violating data privacy. Because people hope to benefit from these algorithms for business and other economic or social purposes, violations of personal data privacy and social values caused by them must be brought to public attention. Where algorithms harm people and their lives, they should be regulated through proper regulatory frameworks within each country or society.
However, regulation is not the only, or necessarily the best, way to control AI. An alternative is to design better algorithms, or to give technicians greater freedom to explore technical solutions independently. Yet technicians do not understand every aspect of protecting human rights in the way that regulators, police officers, or judges do, and they are not the actors primarily responsible for addressing violations of privacy or fairness. Legal scholars have therefore focused on traditional solutions that attempt to regulate algorithms, data, and machine learning. Although many debates and unanswered questions surround ethical AI principles, there is broad agreement that regulating AI, or establishing ethical AI guidelines, is an essential step toward benefiting society.
Ethical Challenges Posed by AI and EU Efforts to Regulate AI
AI technology raises several challenges, the most common of which include discrimination and bias, privacy concerns, transparency, accountability, justice, and fairness. These challenges demonstrate the need to scrutinise AI systems and to design effective AI ethics guidelines. The EU AI Act 2024 adopts a risk-based regulatory approach, classifying AI risks into categories: unacceptable risk, high risk, transparency risk, and minimal risk. Firstly, practices posing unacceptable risk are strictly prohibited under Article 5 of the Act, as they violate fundamental EU rights and values.
Secondly, high-risk AI systems, identified under explicit criteria in Article 6 of the Act, are those that affect health, safety, or fundamental rights, and they are subject to obligations such as conformity assessments and post-market monitoring. Thirdly, risks of impersonation, manipulation, or deception, arising from systems such as chatbots, deepfakes, or AI-generated content, are categorised as transparency risks, and providers and deployers of such systems must comply with information and transparency requirements. Finally, minimal-risk AI systems, such as spam filters or recommender systems, are not subject to specific regulatory obligations. However, the EU AI Act focuses mainly on a regulatory framework, and its ethical guidance for AI remains unclear. Although the 2019 EU Ethics Guidelines for Trustworthy AI call for AI systems to be lawful, ethical, and robust, they do not set out clear or explicit ethical principles. To regulate AI ethics effectively, core ethical principles should be clearly articulated within the EU legal framework.
Defining AI Ethics by Key Stakeholders
Regulating technology is one of the most important duties of government, and it is not an easy task to perform perfectly in practice. Regardless of the outcome, governments must strive to protect public interests by providing legal frameworks to regulate AI. For example, the EU enacted the AI Act in 2024, introducing strict regulations and requirements for the benefit of the EU community. However, a value-oriented approach to defining AI ethics may be more effective than a purely regulatory legal framework, and governments should collaborate with technical experts to develop effective AI guidelines. While legal experts may lack sufficient technical knowledge of AI, technical experts and industry are at the forefront of AI advancement, and their involvement is valuable because they possess the resources, expertise, and practical experience that the technology demands. For instance, Google published its AI Principles in 2018 to address AI-related challenges, and Microsoft implemented its Responsible AI Standard, built on six principles: fairness; reliability and safety; privacy and security; inclusiveness; transparency; and accountability.
In addition, academic researchers and legal scholars working in both the theoretical and practical domains of AI development should contribute to defining AI ethics. Involving civil society organisations (CSOs), non-governmental organisations (NGOs), and international non-governmental organisations (INGOs) can further encourage ethical frameworks that respect human rights, equality, and equity through an inclusive approach. For example, the United Nations Special Rapporteur on contemporary forms of racism has proposed due diligence measures and impact assessments for AI technologies used by the private and corporate sectors.
Furthermore, human rights assessments conducted under the United Nations Guiding Principles on Business and Human Rights found that Meta’s algorithms directly contributed to harm by amplifying anti-Rohingya content, including advocacy of hatred against the Rohingya, during the conflict between the Myanmar military and the Rohingya people. Additionally, the Organisation for Economic Co-operation and Development (OECD) adopted the first intergovernmental AI principles in 2019 and updated them in 2024; they comprise five value-based principles and five practical recommendations aimed at promoting trustworthy AI. The United Nations Educational, Scientific and Cultural Organization (UNESCO) likewise adopted the first global instrument on AI norms and ethics for its member states, developed through consultations with international experts and organisations. A multi-stakeholder approach to defining AI ethics, bringing together governments, companies, legal scholars, technical experts, and international bodies, is therefore necessary to promote shared responsibility and transparency in AI ethical principles.
Accordingly, this article recommends implementing a multi-stakeholder approach to defining AI ethics in each country, using existing guidelines as references to develop well-designed, ethics-based frameworks.
REFERENCES
1. Stuart J. Russell and Peter Norvig, “Artificial Intelligence: A Modern Approach”, 4th Edition, Pearson, 2021.
2. Ben Goertzel and Cassio Pennachin (eds.), “Artificial General Intelligence”, Springer, 2007.
3. Michael Kearns and Aaron Roth, “The Ethical Algorithm: The Science of Socially Aware Algorithm Design”, Oxford University Press, 2020.
4. Nathalie A. Smuha (ed.), “The Cambridge Handbook of the Law, Ethics and Policy of Artificial Intelligence”, Cambridge University Press, 2025.
5. EU AI Act, 2024.
6. Google AI Principles, https://ai.google/principles/
7. Microsoft Responsible AI Standard, v2, June 2022.
8. Brent Daniel Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter and Luciano Floridi, “The Ethics of Algorithms: Mapping the Debate”, Big Data & Society, 2016.
9. UN Special Rapporteur on contemporary forms of racism, 31 March 2024.
10. BBC, “Rohingya sue Facebook for $150bn over Myanmar hate speech”, 8 December 2021.
11. UNESCO, “Recommendation on the Ethics of Artificial Intelligence”, 26 September 2024.
Lwin Nyein Chan Thu holds an LL.M. in Business Law from Thammasat University, Thailand, and is currently working as a researcher and lawyer providing free legal services to youth detainees during the military coup in Myanmar.


