
Launch Of Artificial Intelligence Governance And Ethics Guidelines (AIGE) In Malaysia






Recently, the Ministry of Science, Technology and Innovation (MOSTI) launched the National Guidelines on Artificial Intelligence Governance and Ethics (AIGE). These Guidelines provide a comprehensive framework for the ethical and responsible development, deployment and use of artificial intelligence (AI) within the nation, reinforcing Malaysia’s commitment to advancing its AI capabilities in line with the National Artificial Intelligence Roadmap 2021-2025 (AI-RMAP). The roadmap aims to create a sustainable AI ecosystem that drives innovation, enhances economic growth and generates new employment opportunities by 2025. 

 

The AIGE Guidelines provide a structured set of voluntary principles for key stakeholders involved in AI. These stakeholders include: 

 

  1. Society, which encompasses users, workers, and consumers at large. 

 

  2. Government agencies and policymakers, responsible for setting regulatory standards. 

 

  3. Industry players, including developers, designers, and businesses involved in the creation and deployment of AI technologies. 

While the Guidelines complement existing laws and regulations, they focus specifically on mitigating the potential risks associated with AI, ensuring that its social, economic and ethical benefits are maximised across all sectors of society. 

 

AIGE Guidelines  

 

The Guidelines are designed to achieve the following key objectives: 

 

  1. Support the implementation of the Malaysia National AI Roadmap 2021-2025. 

 

  2. Encourage responsible AI practices aligned with the seven foundational principles. 

 

  3. Build public trust by promoting trustworthy AI systems. 

 

  4. Manage risks associated with AI development and deployment. 

 

  5. Maximise AI's benefits to enhance national productivity, economic growth, and competitiveness. 

 

7 Core AI Principles  

 

To foster the responsible development and deployment of AI, the AIGE Guidelines emphasise seven key principles that reflect global best practices in AI governance. These principles serve as foundational pillars for developing AI systems that are ethical, inclusive and focused on enhancing human well-being. 

 

  1. Fairness:  

 

AI systems should be designed to ensure equitable treatment for all users. This includes eliminating biases that might perpetuate discrimination in AI-driven decisions. Developers are encouraged to design systems that promote equal access and ensure that the benefits of AI are distributed fairly across all segments of society.   

 

  2. Reliability, Safety and Control:  

 

AI systems must be reliable and safe in their operation, ensuring they perform as intended and are secure from unauthorised access or manipulation. Developers are urged to conduct testing and verification to confirm the functionality and safety of AI systems, including obtaining explicit consent from individuals before using their personal data for AI development. 

 

  3. Privacy and Security:  

 

Privacy protection is essential to building trust in AI systems. The Guidelines emphasise that AI systems must meet stringent data protection standards, ensuring that personal information is used securely and ethically. Developers and end-users are encouraged to carry out relevant testing and certification as well as risk and impact assessments to safeguard data.  

 

  4. Inclusiveness:  

 

AI technologies must be inclusive of all stakeholders to ensure equal access and avoid disparities. The Guidelines emphasise the importance of fostering diversity within the AI workforce and call for AI solutions that accommodate a wide range of languages, abilities and socio-economic backgrounds, so that the benefits of AI are accessible to everyone. 

 

  5. Transparency:  

 

AI systems should operate in a transparent manner, with algorithms and decision-making processes being explainable and understandable to stakeholders. This transparency enables users to assess how decisions are made and identify potential risks or biases. Developers are encouraged to provide clear explanations of AI’s technical processes and decisions, especially in areas such as automated decision-making that may impact individuals. 

 

  6. Accountability:  

 

Developers, organisations and owners of AI systems must be accountable for the outcomes of their AI technologies. This principle ensures that AI operators take responsibility for the performance of their systems and adhere to legal and ethical standards. Clear accountability frameworks should be established, enabling users to hold AI stakeholders responsible for both the successes and failures resulting from AI deployment. 

 

  7. Pursuit of Human Benefit and Happiness:  

 

At the heart of the AIGE Guidelines is a commitment to human-centred values. AI systems should prioritise the well-being of humans, supporting societal progress, enhancing quality of life and increasing happiness. This principle is supported by the concept of human oversight, which is implemented through three key frameworks: 

 

  • Human-in-the-Loop (HITL):  

 

Human intervention is possible at every decision point in the AI system’s operation. 

 

  • Human-on-the-Loop (HOTL):  

 

Human oversight is maintained during both the design and ongoing monitoring of the system. 

 

  • Human-in-Command (HIC):  

 

Humans retain the power to oversee and control AI systems at all levels, ensuring that AI operates within ethical boundaries and respects human autonomy.  

 

Continuous Improvement And Forward Momentum 

 

In recognition of the rapid advancement of AI technologies, the AIGE Guidelines emphasise the need for a Continuous Improvement Process (CIP). This approach ensures AI systems are regularly evaluated, updated and refined to align with evolving ethical standards, societal needs and technological progress. The CIP is crucial for managing AI’s long-term implications, enabling proactive adaptation to emerging ethical challenges and risks. By incorporating CIP into its AI governance framework, Malaysia ensures its AI ecosystem remains agile, resilient and capable of addressing both current and future challenges. This ongoing process supports risk mitigation, fosters innovation and ensures AI technologies evolve responsibly, delivering societal benefits aligned with national values and global best practices. 

 

Conclusion 

 

The introduction of the AIGE Guidelines marks a step forward in Malaysia's commitment to fostering ethical, responsible and inclusive AI innovation. Grounded in seven core principles, the Guidelines aim to balance AI's vast potential with the protection of societal values. As AI evolves, they provide a dynamic, future-ready framework for responsible development, positioning Malaysia's AI ecosystem as both innovative and ethical, and Malaysia itself as a leader in responsible AI governance. This initiative not only drives economic growth and global competitiveness but also promotes national well-being, builds public trust and reinforces a commitment to ethical practices that benefit both society and the global community. 


6 December 2024
