Digital Regulation Platform

A guide towards collaborative AI frameworks

02.09.2025

Introduction

Artificial intelligence (AI) is a transformational technology rapidly reshaping governance, economies, and societies. From predictive analytics that inform public health interventions to the smart automation of service delivery, AI provides governments with unprecedented tools to enhance efficiency, inclusion, and innovation. However, alongside these transformative benefits come significant risks. Without proper safeguards, AI systems can reinforce bias, compromise rights, and exceed the capacity of institutions to govern them effectively. As governments and regulators are faced with the dual challenges of harnessing AI’s potential and mitigating its harms, this Guide serves as a practical, globally informed resource for policymakers, regulators, and public sector leaders aiming to establish responsible, inclusive, and agile national AI governance frameworks.

This Guide is structured to walk readers through the essential components that should be in place when designing and implementing an AI collaborative governance framework. It takes a systematic approach: beginning with foundational concepts, moving through key risks and ethical principles, and advancing toward practical governance tools and institutional mechanisms. Each section explores a core dimension of AI frameworks, and together, they provide a comprehensive and interconnected resource to support informed decision-making across every stage of the AI lifecycle.

The Guide begins by clarifying what AI is and what it is not. Section 1 provides a foundational understanding of AI systems, demystifying their technical characteristics and categorizing their many forms, including narrow AI, generative AI, and agentic systems. It introduces the AI value chain and lifecycle, showing how AI evolves from research and design to deployment and decommissioning. By explaining machine learning techniques, natural language processing, and deep learning, this section equips decision-makers with the conceptual literacy needed to participate in governance discussions meaningfully. Importantly, it also highlights global disparities in AI investment and infrastructure, making the case for inclusive policy interventions that bridge the digital divide.

As AI systems proliferate, their social impacts become increasingly complex. Section 2 examines how AI can perpetuate and even amplify bias, whether in datasets, algorithmic design, or systemic patterns embedded in society. Drawing on real-world cases from healthcare, criminal justice, and finance, the section explores how discriminatory outcomes arise and what mitigation strategies, such as bias audits, explainable AI (XAI), and inclusive development teams, can address them. It also introduces the growing field of explainable AI, which seeks to make complex systems more transparent and accountable.

Building on this, Section 3 turns to the security and data governance challenges specific to AI systems. AI is uniquely vulnerable to cyber threats, data manipulation, and misuse of personal information. This section outlines how AI systems collect and process data, the hidden privacy costs of large-scale model training, and the evolving regulatory responses. It introduces global privacy frameworks and privacy-enhancing technologies, emphasizing the need for robust governance across the AI lifecycle.

Recognizing AI’s inherently transnational nature, Section 4 explores the evolving global and regional landscape of AI governance, underscoring the importance of harmonizing national efforts with international norms while safeguarding local priorities. It examines multilateral and regional initiatives—from UNESCO’s Recommendation on the Ethics of AI (2021), to binding instruments like the Council of Europe’s Framework Convention on AI and Human Rights, Democracy, and the Rule of Law. The section also surveys major governance models emerging across jurisdictions, including the EU AI Act’s structured risk-based regulation, the United States’ sectoral and innovation-led approach, Singapore’s practical toolkits and voluntary frameworks, and Japan, China and the Republic of Korea’s hybrid strategies. Regional charters and frameworks in Latin America, Africa, and Southeast Asia reflect diverse paths to AI governance, each shaped by local development goals and institutional capacities. Section 4 then compares risk-based and rights-based regulatory models, highlighting their strengths and limitations. Risk-based approaches, such as those embraced by the EU and Singapore, aim to calibrate oversight based on the potential harms posed by AI systems, offering regulatory flexibility and proportionality. Meanwhile, rights-based frameworks, grounded in international human rights law, prioritize fundamental freedoms, non-discrimination, and democratic safeguards. It also introduces the need for innovative and agile regulatory approaches to keep pace with technology.

Ethics serves as a normative compass for AI governance, and Section 5 provides a comprehensive guide to embedding ethical principles into AI design, deployment, and oversight. It addresses key values such as fairness, transparency, accountability, inclusivity, sustainability, and human oversight. The section offers practical tools for assessing ethical trade-offs, particularly in high-risk sectors such as law enforcement, employment, and health.

Effective AI governance cannot be top-down alone. Section 6 emphasizes the importance of multi-stakeholder ecosystems, showcasing collaborative models that bring together government, industry, academia, and civil society. It discusses regulatory sandboxes, co-regulation mechanisms, open-source governance tools, and participatory policy development. This section offers tools to build legitimacy and public trust through inclusive governance processes.

Finally, Section 7 translates these concepts into actionable steps. It presents a detailed, operational roadmap for developing and implementing national AI governance strategies, guiding policymakers through key stages such as awareness-raising, vision-setting, stakeholder coordination, risk mapping, institutional capacity-building, and regulatory implementation. This section reinforces the message that governance is not a one-time act but a dynamic, iterative process that must evolve alongside AI technologies.

Throughout the Guide, real-world examples, case studies, and practical tools illustrate how countries at different stages of AI adoption can build responsive, context-aware governance systems. Whether applied sequentially or referenced thematically, each section offers insight into a critical dimension of public sector AI governance, reinforcing the policymaker’s role as a steward of responsible, ethical, and inclusive innovation.

Section 1: Understanding AI

AI is not a singular tool or technology; it is a rapidly evolving ecosystem encompassing diverse techniques, system types, and applications. From AI-powered chatbots and diagnostic tools to predictive policing and precision agriculture, AI systems increasingly influence economic, social, and governmental domains. Yet, despite their widespread deployment, governance discussions often fail to reflect these systems’ technical diversity and sector-specific implications. This gap can undermine the effectiveness of governance strategies and policymaking.

This section provides a foundational understanding of AI, equipping policymakers, regulators, and relevant stakeholders with the conceptual clarity needed to assess and design effective governance frameworks. It demystifies key AI technologies and traces their development across the AI value chain and lifecycle. It also examines the transformative impact of AI across sectors, using engaging case studies and emphasizing why governance should be collaborative, principled, and context-specific.

1.1 What is AI?

There is no widely accepted definition of AI; however, in this Guide, the term “AI” refers to a “Computerized system that uses cognition to understand information and solve problems”.[1] An AI system is fundamentally a computer system that employs AI algorithms to achieve specific goals with a certain degree of autonomy. While there are various types of AI systems, all consist of a combination of software and hardware designed to produce outputs based on the inputs they receive. More specifically, an AI system comprises an AI algorithm, the software in which that algorithm is implemented, and the hardware on which it is processed. There are numerous types of AI algorithms, but machine learning algorithms are the most prominent and commonly used in the AI field; therefore, they will be discussed in detail. In very simple terms, these algorithms learn from data. This implies that most AI systems today comprise software, hardware, and data. The Figure below illustrates these three components:

Source: https://www.ai-lawenforcement.org/guidance/intro
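To make the idea that “these algorithms learn from data” concrete, the brief Python sketch below fits a simple straight-line model to a handful of hypothetical observations. The “learned” model is nothing more than two numbers estimated from the data, the same principle that far more complex AI systems scale up.

```python
# A minimal sketch of the "learn from data" idea: fit a straight line to
# observed (input, output) pairs and use it to predict a new case.
# The data points are hypothetical; real AI systems learn far more complex patterns.

xs = [1.0, 2.0, 3.0, 4.0, 5.0]          # inputs (e.g., years of equipment use)
ys = [2.1, 3.9, 6.2, 8.1, 9.8]          # observed outputs (e.g., maintenance calls)

n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

def predict(x):
    return slope * x + intercept

print(f"Learned rule: y = {slope:.2f} * x + {intercept:.2f}")
print(f"Prediction for x = 6: {predict(6):.2f}")
```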


AI opportunities and challenges

AI holds vast promise in addressing the world’s most pressing challenges, from combating climate change and managing resource scarcity to mitigating the effects of aging populations and enhancing public health systems. AI-powered tools are already being used to detect wildfires through satellite imagery,[2] predict the spread of diseases like COVID-19 using mobility and case data[3] and optimize energy use in smart grids to accelerate the shift to low-carbon economies.[4]

These benefits are not just technical; they are also economic. AI is expected to impact nearly 40% of jobs worldwide, according to the International Monetary Fund.[5] Goldman Sachs projects that AI could boost global GDP by USD 7 trillion, or 7%, over the next decade.[6] McKinsey offers an even more expansive outlook, estimating that AI could generate between USD 17.1 trillion and USD 25.6 trillion in annual economic value.[7] And even these figures are considered moderate compared to some other forecasts.

AI also plays an increasingly important role in helping telecommunications companies improve efficiency, reduce costs, and deliver better services. Operators use AI to optimize how networks allocate resources, automate tasks, and predict when maintenance is needed to prevent outages. For example, AI-powered systems can monitor network activity in real time, detect faults before they escalate, and trigger corrective actions automatically. This is especially valuable as telecom networks evolve by deploying gigabit broadband and 5G mobile technologies. AI also supports advanced capabilities like network slicing, which allows telecom providers to offer different levels of service quality depending on customers’ needs, such as low-latency connectivity for remote surgery or industrial robotics. According to the Body of European Regulators for Electronic Communications (BEREC), AI-driven operations will likely become standard practice in the next six to ten years.

AI is also helping telecom companies explore new sources of revenue. One area of opportunity is the growing demand for high-performance network services to support AI-heavy applications, such as real-time video analytics, autonomous vehicles, and virtual reality. These services often require reliable, high-speed connections that AI can help manage and deliver efficiently. Additionally, AI is used to personalize customer experiences by analyzing behavior and preferences, which can improve marketing strategies and increase average revenue per user.

While most telecom companies lack the infrastructure or expertise to build large language models or advanced generative AI tools like those developed by big tech firms, some, such as SoftBank in Japan, are partnering with AI companies to deliver these services. SoftBank partnered with Perplexity AI to provide a generative AI assistant that answers customer questions with sourced responses. These partnerships, along with supportive policies and network investment, could position telecom operators as essential infrastructure providers for the expanding AI ecosystem.[8]
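As a purely illustrative sketch of the real-time fault detection described above, the example below uses an off-the-shelf anomaly detector (scikit-learn’s IsolationForest) on simulated latency and packet-loss measurements; the metric names and values are hypothetical.

```python
# Illustrative sketch: flagging unusual network measurements with an
# unsupervised anomaly detector. Metric names and thresholds are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Simulated per-cell measurements: [latency_ms, packet_loss_pct]
normal_traffic = rng.normal(loc=[20.0, 0.5], scale=[3.0, 0.2], size=(500, 2))
faulty_cells = rng.normal(loc=[80.0, 4.0], scale=[10.0, 1.0], size=(5, 2))
measurements = np.vstack([normal_traffic, faulty_cells])

detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(measurements)   # -1 = anomaly, 1 = normal

print(f"Flagged {np.sum(labels == -1)} of {len(measurements)} samples for engineer review")
```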

However, AI comes with a plethora of challenges.

First, there is a widening “compute divide”: only a few countries possess the domestic capability to train large-scale AI models. For example, as of 2023, all of the world’s supercomputers are located in just 30 countries, meaning that 85% of countries worldwide do not have the necessary AI infrastructure to compete in the emerging AI-driven economy.[9] For many developing countries, this isn’t just a matter of falling behind; they are currently reliant on external sources for AI development, a dependency that could significantly shape the future distribution of global economic wealth.[10] For instance, countries like Rwanda and Bangladesh are exploring how to utilize AI in agriculture to predict weather-related crop failures, but face challenges due to limited access to annotated data, computing infrastructure, and skilled AI talent.[11] Meanwhile, Brazil’s pioneering use of AI in its judicial systems, such as the Victor system at the Supreme Federal Court, demonstrates how public sector innovation can be domestically driven, but only where data governance and institutional capacity are in place.[12]

One of the key factors shaping the landscape of AI governance is the concentration and uneven distribution of capital flowing into AI companies. The Figure below illustrates the relative distribution of venture investment in AI companies in 2024, categorized by the country or region where these companies are based. The data highlight that the United States (represented by the red block) is the largest recipient of venture capital, followed by Asia (yellow) and Europe (orange).

Source: https://www.brookings.edu/articles/network-architecture-for-global-ai-policy/

This trend extends beyond venture investment. According to the Center for Security and Emerging Technology’s Country Activity Tracker on AI, in 2024, total AI investment in the U.S. reached USD 763 billion, significantly outpacing China’s USD 97 billion and the combined investment of France and Germany at USD 46 billion.[13]

These investment patterns reinforce the United States’ early dominance in the generative AI space, backed by significant private capital, world-class compute capacity, and access to vast, high-quality datasets. Companies like OpenAI, Anthropic, and Google DeepMind have benefited from substantial venture investment and privileged access to large-scale public and proprietary data, enabling them to train increasingly powerful and sophisticated foundation models.[14]

By contrast, Europe presents a paradox. Despite being a global leader in AI research output and home to some of the most robust internet infrastructure and privacy-preserving governance frameworks (such as the EU General Data Protection Regulation), the region continues to lag in scaling AI startups and attracting comparable levels of investment. Structural barriers, including fragmented markets, regulatory uncertainty, and limited access to high-volume, domain-specific data, have hampered Europe’s competitiveness in foundation model development and generative AI applications.[15]

The global digital divide becomes even more pronounced when looking at Africa. The continent holds tremendous potential for AI deployment in agriculture, health, climate resilience, and public service delivery, but continues to face serious structural limitations. Africa’s share of global AI venture capital remains below 1%, while its network infrastructure is still underdeveloped in many regions, considering its vast geography and rapidly growing population.[16]

Access to high-quality data and metadata necessary for training machine learning models is often limited by factors such as insufficient digital infrastructure, under-resourced statistical systems, and a lack of open data frameworks. For example, satellite image data for agriculture or urban planning is often unavailable or prohibitively expensive for local innovators to access and label.[17]

These disparities in data access, compute power, and investment reflect a broader challenge in the global diffusion of AI capabilities. Without targeted support, including through public-private partnerships, regional data commons, and investments in data stewardship, many countries risk becoming mere consumers of AI technologies rather than co-creators of solutions tailored to their unique contexts and needs.

The infrastructure and financial backing that drive AI development also influence geopolitical dynamics. As these factors shift, such as China’s increasing ability to build on open models like DeepSeek R1, the AI governance landscape will inevitably evolve. Given the difficulty of predicting how AI advancements will reshape global AI leadership, governance frameworks should remain adaptive, capable of responding to new entrants, evolving capabilities, and emerging risks. As AI development continues to be highly dynamic, AI governance should be equally collaborative and agile.

1.2 AI system types

Understanding AI governance demands a nuanced taxonomy of AI systems, each type akin to a distinct species within a complex technological ecosystem. These systems differ not only in functionality but also in their risk profiles, societal impacts, and regulatory needs, requiring tailored governance approaches that reflect their capabilities, degrees of autonomy, and potential for harm or misuse.

Source: Author’s representation

Artificial Narrow Intelligence (ANI) represents the current state of most AI technologies—specialized systems designed for specific tasks. Chess-playing computers and recommendation algorithms are prime examples. These systems excel within their defined parameters but lack broader cognitive flexibility. Governance for ANI focuses on ensuring reliability, preventing unintended consequences, and maintaining transparency in decision-making processes.

Example:
A credit card fraud detection system analyzes transaction patterns to flag unusual behavior. It is excellent at spotting discrepancies, but it cannot apply its “knowledge” to unrelated tasks, such as diagnosing diseases.

Governance Considerations for ANI:

  • Reliability: Ensure systems operate as intended without errors.
  • Transparency: Regulators should consider mandating that developers clearly explain how ANI systems reach decisions (e.g., why a loan application was denied).
  • Preventing Unintended Consequences: Regularly test systems to identify risks such as bias or discriminatory behavior in areas like hiring or lending.


Artificial General Intelligence (AGI)

Science fiction has created the impression that an AI system should be capable of understanding or learning any intellectual task that a human being can and even attain self-awareness and consciousness. This form of AI is known as Artificial General Intelligence (AGI), a concept that most experts consider to be decades, if not hundreds of years, away from realization. Currently, AI algorithms fall short of “true intelligence” and can only perform a limited set of tasks. For one, AI algorithms cannot distinguish between cause and effect or draw inferences outside of the data on which they are trained. While currently more theoretical than practical, AGI presents profound governance challenges, such as ensuring that a system with broad cognitive capabilities remains aligned with human values and ethical principles.

Example:
Consider the fictitious example of an AI that could simultaneously manage city traffic systems, provide personalized healthcare advice, and write academic papers. Its broad capabilities could revolutionize multiple sectors but also raise ethical concerns.

Governance Challenges for AGI:

  • Value Alignment: How do we ensure AGI reflects human values and remains under ethical control?
  • Accountability: Who is responsible if AGI makes decisions with unanticipated consequences?
  • Ethical Boundaries: AGI might challenge traditional notions of human agency—e.g., could AGI develop policies without human oversight?

Generative AI introduces another layer of complexity: systems that can create novel content across various domains. While Generative AI is technically a subset of ANI, it represents a significant evolution in capability and impact. Traditional ANI systems are built to perform specific tasks with high accuracy, often relying on classical machine learning algorithms such as decision trees, support vector machines, or small-scale neural networks. These systems excel in domains like medical diagnosis, loan approval, or spam detection, but they lack the flexibility to operate outside their predefined functions. For instance, a resume screening tool powered by ANI can efficiently filter out applications that do not meet certain criteria, but it cannot write a job description, explain a poem, or compose an email.

Generative AI systems, such as OpenAI’s GPT series, Anthropic’s Claude, Google DeepMind’s Gemini, China’s DeepSeek, and Meta’s Llama, are built on large-scale deep learning architectures, typically transformer-based models like GPT, BERT, or LLaMA. These models are trained on massive datasets and can generate novel outputs in various forms, including text, images, audio, video, and code. Investment in and adoption of these generative AI models have been rapid. For instance, OpenAI’s ChatGPT application was released in November 2022 and reached 100 million monthly active users in just two months, making it the fastest-growing consumer application in history.[18]

Gen AI excels at content generation, not just classification or prediction. For example, it can not only review a resume but also generate a cover letter tailored to a job posting, write a poem about job searching, or summarize a complex HR policy in plain language. This shift from performing to creating marks a profound difference in how AI systems interact with users and content. These models can generate articles, synthesize text, write poetry, and even create code. They also respond to questions, engage in discussions, explain complex scientific or social concepts, and provide comprehensive answers to specific inquiries.

Smaller organizations, such as startups, civil society groups, and public sector entities, are increasingly able to leverage the capabilities of advanced artificial intelligence through access to pre-trained models available via application programming interfaces (APIs) or open-source model repositories. This approach lowers technical and financial barriers by allowing users to fine-tune and deploy powerful AI tools without the need to develop and train models from scratch, a process that typically requires vast datasets, specialized expertise, and high-performance computing infrastructure. For example, a local government in a low-resource setting might integrate OpenAI’s GPT model via an API to develop a multilingual chatbot that provides residents with real-time information on public services. Similarly, a small education-focused nonprofit might use Meta’s Llama 2 or Google’s PaLM 2 through a hosted platform to generate culturally relevant learning materials or translate them into minority languages.[19] These use cases demonstrate how access to pre-trained AI can democratize innovation by enabling smaller actors to build context-specific applications tailored to their communities, sectors, or regions.[20]
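A minimal sketch of this API-based approach is shown below, assuming the OpenAI Python client; the model name, prompt, and helper function are illustrative only, and any comparable hosted model could be substituted.

```python
# Hedged sketch of the API-based approach described above: a small public-services
# chatbot that sends a resident's question to a hosted, pre-trained model.
# Assumes the OpenAI Python client; model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def answer_resident(question: str, language: str = "en") -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any hosted chat model would do
        messages=[
            {"role": "system",
             "content": f"You answer questions about municipal services in {language}."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer_resident("When is household waste collected in District 3?", "fr"))
```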

Because of its expansive potential, Gen AI introduces new governance challenges beyond those associated with traditional ANI. Issues of authorship, content authenticity, intellectual property rights, and the societal impacts of machine-generated misinformation all demand urgent regulatory and ethical consideration.

Exploring the Most Popular Applications of Generative AI

  • Language & Text: Generative AI has revolutionized text-based applications, making it one of the most mature domains. Large Language Models (LLMs) are at the forefront, powering tools that can draft reports, generate legal and policy documents, write code, translate languages, and even assist in scientific research, such as analyzing genetic sequences. These capabilities raise important questions about accuracy, bias, and intellectual property—critical considerations for regulators.
  • Audio & Speech: AI models can compose original music, generate voiceovers, enhance accessibility through speech synthesis, and even create realistic sound effects for multimedia content. Regulators should consider issues such as deepfake audio, copyright implications, and ethical use of AI-generated voices.
  • Visual Content & Imagery: From creating hyper-realistic images to generating 3D models for gaming and virtual reality, generative AI is a game-changer in the visual domain. It can assist in drug discovery by visualizing molecular structures, enhance design processes by generating marketing materials, and even aid in digital forensics. With this power, however, come concerns about deepfakes, misinformation, intellectual property rights (copyright, in particular), and authenticity—challenges regulators will need to address.
  • Synthetic Data: When real-world data is scarce, restricted, or costly to label, generative AI can create high-quality synthetic data to train machine learning models more efficiently. This is particularly useful in sectors like finance, healthcare, and cybersecurity. Synthetic data can enhance privacy protections by reducing dependency on sensitive personal data, yet it also raises questions about data reliability and regulatory oversight.

Source: https://www.nvidia.com/en-us/glossary/generative-ai/

Agentic AI

While Agentic AI is often grouped under the broader umbrella of ANI, it stands apart in design and operational capacity. Unlike traditional ANI systems, which perform specific, predefined tasks (such as credit scoring or image classification), Agentic AI systems operate independently, making decisions and taking actions to achieve set objectives, often using reinforcement learning, real-time environmental feedback, and long-term planning strategies. This allows them to function effectively in dynamic, real-world settings, such as autonomous vehicles navigating traffic, drones conducting disaster assessments, or AI agents optimizing power grid operations without human intervention.

Generative AI, on the other hand, differs fundamentally in purpose and architecture. It is designed not to act, but to create, producing novel content such as text, images, code, or music based on large-scale data patterns. Gen AI systems, such as ChatGPT, Gemini, DeepSeek, or DALL·E, excel at generating human-like content and interacting with users, but do not act autonomously toward their own goals. They are typically reactive: responding to prompts without the capacity to independently pursue objectives or plan sequences of actions over time. While Gen AI may appear agentic, especially when integrated into chatbots or workflow automation tools, it lacks the persistent, environment-aware decision-making loop that defines true Agentic AI.

Agentic AI focuses on autonomy and action, whereas Generative AI concentrates on creativity and content synthesis. The two can be combined (e.g., an AI assistant that generates responses and also decides when and how to act), but they are conceptually distinct. From a governance perspective, Agentic AI raises deeper concerns around safety, control, and goal alignment, especially in high-stakes applications. Gen AI, in contrast, prompts regulatory focus on intellectual property, misinformation, bias in content generation, and the ethics of machine creativity.[21]

Below is a comparative table highlighting the key distinctions between Agentic AI and Generative AI:

Comparison of Agentic AI and Generative AI

Source: Author’s representation

Machine learning techniques

Machine Learning (ML) is a subset of AI that enables computers to learn from data and improve over time without being explicitly programmed. Instead of following fixed rules, ML systems analyze historical data to recognize patterns and make predictions or decisions.

The complexity of computational scale and the multiple layers of machine learning make it challenging to fully grasp the capabilities and potential applications of AI models. Despite extensive testing and alignment efforts to refine generative AI models and implement safeguards, the United Kingdom’s expert panel on AI safety noted in its final report that “Developers still understand little about how their general-purpose AI models operate. This lack of understanding makes it more difficult both to predict behavioral issues and to explain and resolve known issues once they are observed.” One striking example is that developers were surprised by the proficiency of large language models in computer coding. Given these complexities, identifying both the intended and unintended uses of generative AI models will require ongoing, rigorous monitoring by developers, deployers, policymakers, and experts from civil society and academia. Ensuring responsible AI deployment will necessitate governance that is well-informed, flexible, and responsive to rapid technological shifts.[22]

The rapid evolution of AI further amplifies existing uncertainties around governance, safety, and equitable access. In just over two years since the public release of generative AI tools, foundational models have not only grown in size and complexity but also adopted new learning architectures and training techniques. As shown in the figure below, the computational power required to train leading generative models, such as OpenAI’s GPT-4[23], Google’s Gemini[24], and Meta’s LLaMA[25] series, has increased dramatically, enabling these systems to process vast datasets with unprecedented efficiency. However, this arms race in model scale and compute resources risks further entrenching power in a few technology firms and advanced economies with the capacity to train and deploy such models.

At the same time, new entrants like China’s DeepSeek[26] represent a potential turning point. DeepSeek’s architecture is notable for achieving competitive performance while using significantly less computational power than other state-of-the-art models, suggesting that innovation in efficiency, not just scale, could help democratize access to advanced AI. These developments signal a possible shift toward more sustainable and inclusive AI ecosystems, particularly important for governments and developers in the Global South who lack the infrastructure to compete in high-compute environments. Still, they also introduce new governance questions about how to evaluate and regulate performance, safety, and ethical boundaries across models that are increasingly diverse in architecture and intent.

The Figure below illustrates the increasing computational power required for training major generative models in recent years, enabling them to process larger datasets more efficiently.

Source: https://www.brookings.edu/articles/network-architecture-for-global-ai-policy/

Below are the main types of ML, each of which comes with its own set of governance challenges:

Types of Machine Learning

Source: https://www.ibm.com/think/topics/machine-learning-types
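The short sketch below illustrates, on synthetic toy data, the difference between two of these types: supervised learning (which learns from labelled examples) and unsupervised learning (which finds structure without labels). It uses scikit-learn purely for illustration.

```python
# Illustrative sketch of two common machine learning types on toy data.
# Supervised learning uses labelled examples; unsupervised learning finds
# structure without labels. The data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)          # labels, known only in the supervised case

# Supervised: learn a mapping from features to known labels
clf = LogisticRegression().fit(X, y)
print("Supervised prediction for [3.5, 3.5]:", clf.predict([[3.5, 3.5]])[0])

# Unsupervised: group similar points without any labels
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("Unsupervised cluster sizes:", np.bincount(clusters))
```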

Advanced AI techniques

Natural Language Processing (NLP)

NLP represents one of the most fascinating and transformative AI techniques, teaching machines to understand, interpret, and generate human language. It is about bridging the communication gap between humans and machines in ways previously considered science fiction.

NLP is not just about translation or transcription. It is about comprehension. Modern NLP systems can analyze sentiment, detect nuanced context, and even generate human-like text that captures contextual subtleties. One illustrative example would be a customer service chatbot that can understand not just the words a customer writes, but the underlying emotion and intent behind those words.

The governance implications of NLP are profound: ensuring that these systems respect linguistic diversity, prevent potential misuse for manipulation, and uphold ethical communication standards. A system that can generate convincing text could potentially spread misinformation or impersonate human communication with unprecedented sophistication.
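As a concrete illustration of one common NLP task, the sketch below performs sentiment analysis on two hypothetical customer messages, assuming the Hugging Face transformers library and its default sentiment-analysis pipeline.

```python
# A minimal sketch of the sentiment-analysis capability described above,
# assuming the Hugging Face "transformers" library and its default
# sentiment-analysis pipeline (the specific model downloaded may vary).
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")

messages = [
    "Thank you, the issue was resolved very quickly!",
    "I have been waiting three weeks and no one has replied to my complaint.",
]
for text in messages:
    result = sentiment(text)[0]
    print(f"{result['label']:>8}  ({result['score']:.2f})  {text}")
```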

Examples of NLP

  • In healthcare, NLP extracts insights from vast amounts of unstructured clinical data, such as doctors’ notes, medical research papers, and electronic health records. By analyzing patient records, NLP systems can detect early signs of diseases, predict potential health risks, and even automate the summarization of patient histories for better decision-making by medical professionals. Chatbots powered by NLP assist in patient engagement by answering questions, scheduling appointments, and providing preliminary diagnoses, reducing the burden on healthcare workers while improving accessibility for patients.
  • In the education sector, AI-driven tutoring systems use NLP to personalize learning experiences by assessing students’ comprehension levels and adapting lesson plans. Automated grading systems can analyze written assignments for grammar, coherence, and even logical argumentation, providing immediate feedback to students and allowing teachers to focus more on instruction rather than administrative tasks. NLP also enables real-time translation and transcription, making education more accessible to students with diverse linguistic backgrounds or disabilities.
  • In customer service and e-commerce, NLP powers intelligent chatbots and virtual assistants that enhance user experience. Companies use NLP to analyze customer inquiries and provide accurate responses, reducing the need for human intervention. Sentiment analysis enables businesses to comprehend customer emotions and refine their services based on real-time feedback. In e-commerce, NLP-driven recommendation engines analyze customer reviews, past purchases, and browsing history to suggest products that match user preferences. This level of personalization not only improves customer satisfaction but also drives higher sales and brand loyalty.
  • Entertainment and media industries also leverage NLP to create and curate content. Streaming platforms use NLP algorithms to analyze viewer preferences and generate personalized content recommendations. Social media platforms employ NLP for content moderation, detecting harmful speech, misinformation, and spam. AI-powered tools also assist journalists by summarizing news articles, transcribing interviews, and generating draft reports based on real-time data. By automating repetitive tasks, NLP allows media professionals to focus on creative and investigative aspects of journalism.

Source: https://callminer.com/blog/25-examples-of-nlp-and-machine-learning-in-everyday-life

Deep Learning

Deep learning represents the closest technological approximation to human cognitive processing. Using neural networks inspired by the human brain’s structure, these systems can analyze incredibly complex data, identifying intricate patterns that might escape human perception.

To illustrate, consider a medical imaging system that uses deep learning to detect early-stage cancer where a human radiologist might see only an ambiguous shadow. The deep learning system can analyze thousands of previous images, identifying microscopic variations that suggest potential malignancy. The system doesn’t replace human expertise; instead, it augments and enhances it.
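A hedged, highly simplified sketch of such a deep learning model is shown below, written with PyTorch; the layer sizes and the synthetic “scan” are illustrative only, and a real medical-imaging model would be far larger and trained on curated clinical data.

```python
# Hedged sketch of a small deep neural network of the kind described above,
# written with PyTorch. Shapes and layer sizes are illustrative; a real
# medical-imaging model would be far larger and trained on curated scans.
import torch
import torch.nn as nn

model = nn.Sequential(                        # stacked ("deep") layers
    nn.Flatten(),                             # turn a 32x32 grayscale image into a vector
    nn.Linear(32 * 32, 128), nn.ReLU(),       # hidden layer 1
    nn.Linear(128, 64), nn.ReLU(),            # hidden layer 2
    nn.Linear(64, 2),                         # output: benign vs. suspicious
)

fake_scan = torch.randn(1, 1, 32, 32)         # one synthetic "image"
logits = model(fake_scan)
probabilities = torch.softmax(logits, dim=1)
print("Predicted class probabilities:", probabilities.detach().numpy())
```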

Algorithmic transparency is one of the most significant governance challenges for deep learning. Neural networks often operate as complex “black boxes,” making it difficult to understand how they reach specific conclusions. This opacity raises critical questions about accountability and trust (see Sections 2 and 5).

1.3 The AI value chain

The AI value chain represents a complex, interconnected ecosystem where multiple technological components collaborate to transform computational potential into real-world applications.

1.4 The AI lifecycle

The AI lifecycle is not a linear progression but a complex, iterative journey of continuous learning and adaptation.

1.5 Common Applications of AI

Image Analysis: AI systems can analyze large datasets of photos, videos, and other visual information to automatically recognize, classify, and contextualize images or their elements. The AI systems that detect and recognize elements in an image employ a technique known as object recognition. Object recognition involves ML algorithms designed to process pictures, identify geometric shapes, and ultimately recognize objects, faces, and other entities. Facial recognition technology (FRT) is a prevalent form of object recognition that identifies specific individuals in photos, videos, and other visual inputs. While FRT provides significant opportunities for socially beneficial applications, particularly through enabling biometric automated verification and identification, it also introduces unique governance challenges, which will be discussed in the “Data governance for AI” section below.[27]
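The sketch below illustrates generic object recognition with a pre-trained image classifier, assuming the torchvision library; the image file is hypothetical, and the model recognizes broad object categories rather than specific individuals.

```python
# Hedged sketch of object recognition with a pre-trained image classifier,
# assuming the torchvision library. The image path is hypothetical and the
# model recognizes generic object categories, not specific individuals.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()             # resizing + normalization expected by the model

image = Image.open("street_scene.jpg").convert("RGB")   # hypothetical input file
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    scores = model(batch).softmax(dim=1)
top = scores.argmax(dim=1).item()
print("Most likely category:", weights.meta["categories"][top])
```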

Section 2: Addressing bias, opacity, and risk in AI

Bias and opacity in AI systems threaten to entrench discrimination and erode public trust. This section explains how data, algorithmic, and systemic bias arise and how they can be mitigated. It explores fairness-aware design, representative datasets, and tools like bias audits and algorithmic impact assessments.

It addresses the “black box” problem and showcases explainable AI (XAI) techniques. Real-world sectoral examples, such as facial recognition misidentification, biased loan approvals, and opaque welfare systems, demonstrate the importance of transparency and accountability as governance priorities.

2.1 Bias in AI systems

Bias in AI systems manifests across multiple levels, each with unique causes and implications. Whether stemming from data, design, or societal structures, bias in AI reinforces inequities, erodes trust, and perpetuates harmful outcomes if left unchecked. This section examines two critical forms of bias, data bias and algorithmic bias, and explores practical solutions for mitigating their impact.

Data bias

Data bias occurs when the datasets used to train AI models are unrepresentative, incomplete, or reflect historical inequities. The fairness, transparency, and accountability of AI systems depend heavily on the quality of the data they are trained on. If training datasets are unrepresentative, i.e., skewed toward specific demographics (e.g., white, male, etc.), missing key variables, or reflecting historical inequalities, AI will reproduce and amplify those biases.

For example, if an AI-driven credit scoring model is trained predominantly on data from urban male borrowers, it may systematically underestimate the creditworthiness of rural applicants or women entrepreneurs. Facial recognition systems trained predominantly on lighter-skinned faces tend to misidentify individuals with darker skin tones at significantly higher rates. This misidentification can lead to potential discrimination in law enforcement and hiring practices. Virtual assistants and speech recognition tools often struggle to understand non-standard accents, regional dialects, or underrepresented languages because they are primarily trained on standardized English or dominant dialects. This results in AI systems that are less accessible to speakers of diverse linguistic backgrounds, reinforcing the dominance of certain languages and dialects in digital spaces.

Pathways to a biased dataset

Source: https://haas.berkeley.edu/equity/resources/playbooks/mitigating-bias-in-ai/

Bias in datasets can emerge at various stages of data collection, selection, and labeling, resulting in significant consequences for AI and decision-making systems. This process begins when certain data points do not exist or are not disaggregated, resulting in gaps in representation. Even when data points are available, they may still be biased due to poor quality, prejudiced perspectives, or systemic inequalities embedded in society. Additionally, the selection process for determining which data points are included in a dataset can introduce further bias, as can subjective and discriminatory labeling practices.

The root causes of these biases stem from several factors. Data generated by users often incorporates individual biases, whether intentional or not. Historical and structural inequities, such as the digital divide, poverty, and geographical disparities, can further skew data representation. Additionally, safety concerns may restrict data collection from marginalized or vulnerable groups, reinforcing existing gaps. Human discretion is critical, as unconscious and conscious biases, personal experiences, and organizational structures shape how data is curated. Moreover, technical constraints, such as the binary nature of some AI models, can further limit how data is classified and interpreted.

As a result, biased datasets can lead to two primary outcomes. First, they may be unrepresentative of society, meaning that certain groups are either overrepresented or underrepresented in ways that do not accurately reflect real-world diversity. Second, datasets can be accurate but reflective of an unjust society, perpetuating existing biases and discrimination rather than mitigating them. In both cases, these biases can reinforce harmful stereotypes and lead to unfair outcomes, particularly in AI-driven decision-making.

Mitigation Strategies

Diverse and Representative Datasets: One of the most effective ways to mitigate bias is to ensure that datasets accurately represent the populations they serve. This means collecting data across different demographics, geographic locations, and socioeconomic backgrounds. For example, in healthcare AI, algorithms used for disease diagnosis have historically performed poorly for underrepresented racial and ethnic groups due to a lack of diverse training data. To address this, institutions such as the US National Institutes of Health (NIH) have launched initiatives to collect diverse medical imaging datasets, ensuring that AI models perform equitably across different patient populations.[28] Likewise, facial recognition systems have been criticized for racial and gender biases due to underrepresentation in training datasets. Efforts like the Gender Shades Project by Joy Buolamwini advocate for more inclusive datasets to improve model fairness.[29]

In developing countries, where data collection systems may be fragmented or under-resourced, initiatives have emerged to fill these gaps. For instance, in Kenya, the Ministry of Health collaborated with the World Health Organization and other partners to strengthen the Civil Registration and Vital Statistics (CRVS) system. This initiative aimed to improve the registration of births and deaths, enhancing the quality and completeness of vital statistics. By bolstering the CRVS system, Kenya sought to ensure more accurate health data collection across rural and urban areas, providing a robust foundation for future AI-driven health interventions.[30]

Facial recognition technologies trained primarily on Western datasets have failed when deployed in African or Southeast Asian contexts. In response, African AI researchers have advocated for locally sourced image datasets, such as Labeled Faces in the Wild–Africa (LFW-Africa), to counteract racial bias in machine vision applications.[31]

Bias Audits: Conducting bias audits is another crucial step in identifying and addressing dataset biases before deploying AI systems. Bias audits involve systematically analyzing datasets and models to detect disparities and implement corrective measures. In the financial sector, banks have implemented audits on AI-driven loan approval systems to ensure that algorithms do not disproportionately deny loans to marginalized communities. By performing these audits, organizations can adjust their models, retrain AI systems, and introduce fairness constraints to minimize biased decision-making.

Bias audits are essential but often underutilized in countries with limited regulatory capacity. Nonetheless, efforts are growing. In Brazil, for example, civil society organizations like Coding Rights[32] and Data Privacy Brasil[33] have conducted independent algorithmic audits of public sector AI tools used in social welfare and predictive policing. These grassroots efforts demonstrate that even without robust legal frameworks, watchdog organizations can play a pivotal role in promoting fairness and accountability. In Africa, Nigeria’s National AI Strategy[34] proposes transparency requirements and ethical reviews that could evolve into formal audit mechanisms, particularly if supported by regional institutions such as the African Union or ECOWAS.
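The brief sketch below illustrates one basic building block of such an audit: comparing approval rates across demographic groups for a hypothetical loan-approval model and flagging large disparities. The data and the “four-fifths” threshold are illustrative only; real audits use richer fairness metrics.

```python
# Illustrative bias-audit sketch: compare approval rates across demographic
# groups for a hypothetical loan-approval model. Data and the "four-fifths"
# threshold are for illustration; real audits use richer metrics.
import pandas as pd

audit = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

rates = audit.groupby("group")["approved"].mean()
print(rates)

disparate_impact = rates.min() / rates.max()
print(f"Disparate impact ratio: {disparate_impact:.2f}")
if disparate_impact < 0.8:
    print("Warning: approval rates differ markedly across groups; review the model.")
```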

Synthetic Data Generation: In cases where real-world data is insufficient, incomplete, or skewed, synthetic data generation can be a powerful tool. AI techniques can generate synthetic data that accurately represents missing or underrepresented groups, helping to create a more balanced dataset. In education technology, AI-powered tutoring systems often struggle to provide equitable feedback due to the limited data availability from students in low-resource settings. Companies like Duolingo have experimented with synthetic data to create more inclusive learning experiences. Also, in finance, synthetic datasets are used to model risk assessments for underserved communities, ensuring that AI-driven credit scoring systems do not disproportionately disadvantage minority groups.[35]
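As a minimal, hypothetical sketch of this idea, the example below generates additional synthetic records for an underrepresented group by resampling real records with small random perturbations (a simplified, SMOTE-like approach); the feature names and values are illustrative.

```python
# Minimal sketch of synthetic data generation by resampling an underrepresented
# group with small random perturbations (a simplified, SMOTE-like idea).
# Feature names and values are hypothetical.
import numpy as np

rng = np.random.default_rng(7)
minority = np.array([[0.62, 31], [0.58, 29], [0.71, 34]])   # e.g., [repayment_score, age]

def synthesize(samples: np.ndarray, n_new: int) -> np.ndarray:
    idx = rng.integers(0, len(samples), size=n_new)           # pick real records at random
    noise = rng.normal(0, 0.02, size=(n_new, samples.shape[1]))
    return samples[idx] + noise * samples[idx]                # jitter around real records

synthetic = synthesize(minority, n_new=5)
print(np.round(synthetic, 2))
```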

National-Level Application and Data Limitations: While these strategies are broadly applicable, their successful implementation depends on national context and capacities. In countries with limited data infrastructure, particularly in parts of the Global South, adapting these strategies requires innovation, capacity building, and often international collaboration. For example, countries with underdeveloped health information systems may be unable to generate large-scale, representative datasets without targeted support or regional data-sharing agreements. Nonetheless, these strategies serve as a foundational roadmap that can be tailored to different national realities.

Algorithmic bias

Algorithmic bias arises from design decisions during the AI development process. These include the choice of optimization metrics, feature selection, or insufficient testing protocols. Even with unbiased data, flawed algorithms can embed inequities and perpetuate unintended consequences.

Bias can creep in at multiple stages of development, from selecting which features the model prioritizes to deciding the metrics used for optimization. For instance, if an AI hiring tool is optimized for ‘cultural fit’ but trained on past hiring decisions from a company with a history of gender imbalance, it may automatically deprioritize women or minority candidates, even if the dataset itself appears neutral. Likewise, predictive policing algorithms that emphasize crime hotspots may reinforce over-policing in marginalized communities, not because crime rates are higher but due to historical biases in law enforcement data.

Some universities worldwide have begun using AI in their admissions processes, relying on algorithms to evaluate factors such as family income, academic test scores, and extracurricular activities. Students from wealthier backgrounds often have greater access to high-quality test preparation and exclusive enrichment opportunities. As a result, AI systems may unintentionally give disproportionate weight to these advantages, disadvantaging applicants from lower-income or marginalized communities. This approach can reinforce existing educational inequalities instead of recognizing students with strong potential from diverse socioeconomic backgrounds.

For policymakers, this highlights the need for AI governance frameworks that mandate fairness checks, impact assessments, and diverse stakeholder input during the development process. Simply having ‘good data’ isn’t enough; transparent decision-making, rigorous testing, and accountability mechanisms are essential to prevent AI from embedding inequities into critical public and private sector decisions.

Mitigation Strategies

Transparency in Design: A crucial step in mitigating algorithmic bias is ensuring transparency in how AI models make decisions. This involves clearly documenting and reviewing the criteria used in feature selection and optimization metrics. Transparency enables stakeholders, including regulators, researchers, and affected communities, to evaluate whether an AI system is fair, ethical, and compliant with legal standards. As an example, the European Union’s AI Act includes provisions requiring high-risk AI applications, such as facial recognition and loan approval algorithms, to undergo transparency assessments and provide explanations for their decisions. These initiatives demonstrate the importance of making AI systems explainable and accountable to avoid reinforcing existing societal biases.

Regular Testing: Continuous testing of AI systems is crucial for detecting and mitigating biases before deployment. Scenario-based testing should evaluate how AI models perform under diverse conditions, including different demographic groups, socioeconomic backgrounds, and geographic locations. Testing helps identify patterns of discrimination and allows developers to adjust algorithms accordingly.[36]

Inclusion of Diverse Teams: AI development teams should reflect a diverse range of backgrounds, encompassing gender, ethnicity, and socioeconomic diversity. Engaging individuals from affected communities and different disciplines helps identify potential blind spots in AI models, leading to fairer and more inclusive outcomes.[37]

The table below breaks down each type of bias with specific causes, AI impact, real-world examples, and targeted policy interventions:

Data Bias (Bias in the Training Data)

Root causes:
– Lack of diverse, representative, and high-quality datasets
– Historical exclusions or omissions in data collection
– Overrepresentation of certain groups, underrepresentation of others
– Poor data labeling and annotation
– Sampling errors in AI model training

How it affects AI decisions:
– AI models produce skewed or unfair predictions
– Systemic discrimination gets embedded into automated decisions
– Groups excluded from datasets receive inaccurate or biased outcomes

Real-world examples:
– Facial recognition fails to correctly identify darker-skinned individuals, leading to wrongful arrests
– AI credit scoring undervalues applicants from lower-income neighborhoods due to historical lending biases
– Medical AI tools perform poorly for women because training data was based on male subjects

Policy and governance interventions:
– Context-sensitive dataset diversity requirements for AI models, adapted to reflect the demographic, linguistic, and socioeconomic realities of the countries or regions where the systems will be deployed[38]
– Bias-resistant data collection protocols
– Third-party bias audits before AI deployment
– Publicly available fairness benchmarks for AI datasets

Algorithmic Bias (Bias in Model Design & Optimization)

Root causes:
– Choice of optimization metrics prioritizing efficiency over fairness
– Feature selection that inadvertently reinforces disparities
– Lack of fairness constraints during model training
– Inadequate testing across different demographic groups
– Opaque AI decision-making (black-box models)

How it affects AI decisions:
– AI produces biased outcomes even when data is neutral
– Automated decisions create disparities in hiring, credit scoring, law enforcement, and healthcare
– Some groups face higher error rates or misclassifications

Real-world examples:
– AI resume-screening tools reject non-traditional applicants due to biased keyword matching
– Predictive crime algorithms assign higher risk scores to minority communities based on previous law enforcement patterns
– AI-based insurance pricing assigns higher premiums to women without justification

Policy and governance interventions:
– Fairness-aware AI design principles integrated into development
– Transparent AI decision-making with explainability standards
– Inclusive testing and evaluation to measure impact on different demographics
– Regulation on algorithmic transparency and accountability


2.2 The Black box of AI: Transparency and explainability challenges

AI techniques, such as deep learning, are becoming increasingly complex as they learn from terabytes of data. Moreover, the number of parameters[39] has grown exponentially over the years: in July 2023, Meta released its Llama 2 model[40] with a parameter count of 70 billion, while Google’s PaLM[41] parameter count is reported to be as large as 540 billion.

Deep learning algorithms use multilayered neural networks, which differ significantly from traditional ML models. While conventional ML may rely on a network with just a few layers, deep learning models can have hundreds or even thousands. Each layer is composed of multiple neurons, bundles of code designed to mimic the way the human brain processes information.

These deep neural networks can analyze massive volumes of raw, unstructured data with minimal human intervention. They detect patterns, learn from them, and use that knowledge to generate new outputs, such as text, images, and videos. This ability to process information at scale enables AI systems to perform tasks like advanced language comprehension, content generation, and other functions that closely resemble human intelligence.
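The sketch below, written with PyTorch, shows how even a few modest layers of neurons translate into well over a million trainable parameters; the layer sizes are illustrative, whereas the frontier models mentioned above contain tens to hundreds of billions.

```python
# Hedged sketch, using PyTorch, of why parameter counts grow so quickly:
# every extra layer and neuron adds trainable weights. Layer sizes here are
# tiny and illustrative compared with frontier models.
import torch.nn as nn

network = nn.Sequential(
    nn.Linear(512, 1024), nn.ReLU(),   # hidden layer 1
    nn.Linear(1024, 1024), nn.ReLU(),  # hidden layer 2
    nn.Linear(1024, 10),               # output layer
)

total = 0
for name, params in network.named_parameters():
    print(f"{name:<12} {tuple(params.shape)}  ->  {params.numel():,} parameters")
    total += params.numel()
print(f"Total trainable parameters: {total:,}")
```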

Despite their impressive capabilities, deep neural networks can be opaque. Users, including AI developers, can observe what goes into the system and what comes out, known as the “visible layers.” However, they cannot fully understand what happens within the intermediary layers, known as the “hidden layers.” While developers have a broad understanding of how data moves through the network and how the model processes information, they lack complete insight into the specific interactions occurring at each layer. For example, they may not fully grasp why certain neuron activations occur or how the model retrieves and combines vector embeddings to generate a response. This is called the “black box of AI” – i.e., the inability of users, and often designers, of AI systems to have a clear understanding of how AI works.[42]

Even open-source AI models, which provide access to their underlying code, might remain black boxes to some extent. Although users can examine the model’s structure, they may still have difficulty interpreting exactly how data transformations unfold within each layer while the model is in operation.[43]

The “black box” nature of many AI systems, particularly deep learning models, creates barriers to trust and accountability. In some cases, AI developers intentionally obscure the inner workings of their systems before making them publicly available. This is often done to protect intellectual property, ensuring that competitors cannot replicate their proprietary technology. In these instances, the creators fully understand how the AI functions but keep the source code and decision-making processes hidden. Many traditional, rule-based AI algorithms fall into this category due to their proprietary nature. However, some of the most advanced AI technologies, including generative AI models, are what might be called “organic black boxes.” Unlike intentionally hidden models, their complexity arises from deep learning processes that even their own developers struggle to comprehend fully. These systems are not deliberately made opaque; rather, the sheer intricacy of their internal mechanisms makes them challenging to interpret.

Transparency in AI encompasses not only making AI explainable but also involves providing clear technical and non-technical documentation throughout the AI life cycle. This documentation is crucial for regulators, users, and businesses to understand how an AI system works, its risks, and its limitations. AI developers and deployers should ensure that information about the inputs and functioning of AI systems is shared in a way appropriate to the context and meaningful to different audiences, while safeguarding privacy, data protection, security, safety, and proprietary information. This tailored transparency should be accessible not only to regulators and auditors, but also to business users and the wider public. In the case of high-risk AI systems, clear documentation should be provided outlining the system’s intended purpose, potential misuse scenarios, and associated risks, along with guidance for deployers on how to mitigate those risks.[44]

To illustrate, a bank considering an AI-powered credit risk assessment tool will want transparency on matters such as the system’s intended purpose, the data it was trained on, its accuracy and limitations, and the risks it may pose to customers.

If the AI provider has robust documentation, the bank can quickly evaluate compliance and risks, streamlining the sales process.
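
As an illustration of the kind of structured documentation that makes such an evaluation quick, the sketch below encodes a hypothetical, machine-readable “model card” for a credit-scoring tool. Every field name and value is invented for illustration and is not drawn from any particular standard.

```python
# Illustrative only: a minimal, machine-readable "model card" a provider might
# supply to a deployer. Field names and values are hypothetical, not a standard.
credit_model_documentation = {
    "intended_purpose": "Credit risk scoring for retail loan applications",
    "out_of_scope_uses": ["Employment screening", "Insurance pricing"],
    "training_data": {
        "sources": ["Historical loan outcomes 2015-2023 (internal)"],
        "known_gaps": ["Thin-file applicants under-represented"],
    },
    "performance": {"auc": 0.81, "evaluated_on": "Hold-out set, 2024"},
    "known_risks": ["Possible disparate impact on young applicants"],
    "mitigations": ["Annual bias audit", "Human review of declines"],
    "contact": "model-governance@provider.example",
}

# A deployer (such as the bank) can then check the basics programmatically
# before procurement, instead of relying on unverifiable marketing claims.
assert "Credit risk" in credit_model_documentation["intended_purpose"]
print("Documented risks:", credit_model_documentation["known_risks"])
```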

AI explainability[45] refers to methods for understanding how a black-box model, i.e., an incomprehensible or proprietary AI model, arrives at its outputs. While useful, the difficulty with explaining black-box models is that the explanation may not be entirely accurate or faithful to the underlying model, precisely because that model is incomprehensible. When full explainability is not possible due to these factors, an alternative is interpretability.[46]

Ensuring explainability is not just about maintaining customer trust; it also helps companies avoid reputational damage and regulatory scrutiny. Recognizing the importance of transparency, Article 86 of the EU AI Act grants individuals the right to request an explanation for decisions made by high-risk AI systems.[47] This provision marks a significant step toward greater accountability, ensuring that algorithmic decisions can be understood and challenged when necessary.

Interpretability, on the other hand, refers to designing models whose reasoning process is inherently understandable, i.e., models that are not black boxes and whose decision or prediction processes are comprehensible to domain experts. In other words, interpretability is applied ante hoc.[48] While it avoids the fidelity problems of post-hoc explanations, interpretable models are often domain specific and require significant domain expertise and effort to develop.
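
To make the contrast concrete, the short sketch below trains an inherently interpretable model, a shallow decision tree, on synthetic data and prints its decision rules. The data and feature labels are illustrative, and scikit-learn is assumed to be available.

```python
# Sketch of an ante-hoc interpretable model: a shallow decision tree whose
# rules a domain expert can read directly. Data is synthetic; feature labels
# are illustrative placeholders.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
feature_names = ["age", "income", "debt_ratio", "prior_defaults"]  # illustrative

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The whole reasoning process is visible as nested if/else rules.
print(export_text(tree, feature_names=feature_names))
```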

For years, researchers seeking to understand the capabilities and risks of new AI systems have primarily relied on interacting with them directly. This method, often referred to as “red-teaming,” involves testing AI models to identify harmful or dangerous behavior, helping developers implement safeguards before public release. However, this approach does not address a specific concern that some AI experts find particularly troubling—the possibility that an AI system could become intelligent enough to deceive its creators. In such a scenario, the AI might intentionally conceal its true capabilities until it gains the ability to operate beyond human control, potentially leading to serious consequences.[49]

Source: https://www.interpretable.ai/interpretability/what/

The Figure above illustrates the contrast between Black Box and Interpretable Models in AI-driven decision-making, particularly in healthcare risk assessment. The left side presents patient information, including age, gender, albumin levels, sepsis status, INR, and diabetic status. This data is input for both models to predict the patient’s mortality risk.

This comparison illustrates the importance of explainable AI (XAI) in critical applications such as healthcare. While black box models may be powerful, they can obscure decision-making processes, raising concerns about accountability and trust. In contrast, interpretable models provide insights into why a particular outcome was reached, ensuring that AI-driven decisions are transparent and justifiable.

Explainable AI (XAI) refers to methods and techniques that make AI models more transparent, enabling stakeholders to interpret and trust their outputs. Without explainability, AI operates as a “black box,” producing results without revealing the reasoning behind them, which can lead to mistrust, biased outcomes, and ethical concerns.


Source: https://medium.com/deepviz/what-is-xai-explainable-ai-and-visualization-part-10-da41c981c5fa

To address black-box challenges, a range of techniques can enhance AI transparency and interpretability, from post-hoc explanation methods to inherently interpretable model designs.
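
As a concrete illustration of one widely used post-hoc technique, the sketch below computes permutation feature importance for a stand-in black-box model: each feature is shuffled in turn and the drop in accuracy indicates how much the model relies on it. The data, model, and features are synthetic placeholders, not any system discussed in this Guide.

```python
# Illustration of a common post-hoc XAI technique (permutation feature
# importance): shuffle one feature at a time and observe how much the
# black-box model's accuracy drops. Model and data are synthetic placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

black_box = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_tr, y_tr)
baseline = black_box.score(X_te, y_te)

rng = np.random.default_rng(1)
for j in range(X_te.shape[1]):
    X_shuffled = X_te.copy()
    rng.shuffle(X_shuffled[:, j])            # break the feature's link to the outcome
    drop = baseline - black_box.score(X_shuffled, y_te)
    print(f"feature {j}: importance ~ {drop:.3f}")
```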

2.3 Macro risks and AI governance challenges

AI systems offer immense potential but pose significant macro-level risks requiring robust governance frameworks. These risks span global, societal, and systemic challenges that, if not addressed, could undermine trust, fairness, and security.

On a global scale, governance challenges arise from the fragmented nature of AI regulatory approaches across nations, creating barriers to international collaboration and interoperability.

As AI systems become increasingly autonomous, ensuring human oversight and accountability will become more challenging in the long term. Advanced AI systems with poorly aligned goals could have unpredictable and potentially catastrophic consequences, raising existential risks.

Addressing these macro risks requires a proactive, collaborative approach involving regulatory oversight, ethical frameworks, and coordinated international efforts. By anticipating and mitigating these challenges, stakeholders can ensure that AI systems are deployed responsibly, maximizing their societal benefits while minimizing potential harms.

Source: United Nations System White Paper on AI Governance by UN (2024)

For regulators, understanding these risks is essential to guiding responsible AI deployment across sectors. The accompanying visual outlines twelve interlinked risk areas, which can be grouped into four broad categories: ethical, economic, security, and environmental.

Ethical risks relate to issues of bias, trust, inclusivity, and fairness. AI systems often operate as “black boxes,” making decisions without clear justification. This lack of transparency erodes public trust, especially when decisions affect health, justice, or financial inclusion. When trained on biased data, AI models can reinforce discrimination, and a generative AI tool spreading false medical advice could lead to misdiagnoses and public harm. These outcomes highlight the need for explainability, fairness, and ethical standards in AI systems. Regulators should explore the adoption of explainability standards in high-stakes sectors such as healthcare or criminal justice and mandate independent algorithmic audits to detect and mitigate bias.
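
To illustrate what such an audit might check in practice, the following minimal sketch computes a disparate impact ratio on hypothetical approval decisions. The figures, groups, and the 0.8 threshold are illustrative only; actual audit methodologies and legal tests vary by jurisdiction.

```python
# Minimal sketch of one check an algorithmic bias audit might run: the
# disparate impact ratio (selection rate of a protected group divided by that
# of the reference group). All numbers below are hypothetical.
def selection_rate(decisions):
    return sum(decisions) / len(decisions)

approved_group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # reference group outcomes (1 = approved)
approved_group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # protected group outcomes

ratio = selection_rate(approved_group_b) / selection_rate(approved_group_a)
print(f"Disparate impact ratio: {ratio:.2f}")

# A common (jurisdiction-dependent) rule of thumb flags ratios below 0.8 for
# further investigation; it is a screening signal, not proof of discrimination.
if ratio < 0.8:
    print("Flag for review: possible adverse impact")
```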

Economic risks are closely tied to inequality and displacement. While boosting productivity, AI technologies can displace low-skilled workers, particularly in sectors such as manufacturing, logistics, and retail. Automated checkout systems and autonomous vehicles are already transforming traditional labor markets. Additionally, the growing concentration of AI expertise and infrastructure in a handful of global tech firms threatens to widen the global digital divide. Developing countries risk being left behind, unable to compete with well-funded innovation hubs. This disparity exacerbates geopolitical tensions and deepens inequality. National AI strategies should therefore promote inclusive innovation, support the reskilling of displaced workers, and ensure equitable access to AI infrastructure and capabilities.

Security risks arise from the misuse and manipulation of AI systems. Poorly secured AI technologies can be exploited for cyberattacks, fraud, or disinformation. For example, deepfakes have already been used to impersonate executives in financial scams, and voice-cloning tools can be leveraged to deceive employees into transferring funds. Without strong enforcement mechanisms and updated legal frameworks, such vulnerabilities remain unchecked. Moreover, AI-generated content can infringe on intellectual property rights, raising questions about accountability and authorship. It is critical for regulators to work closely with cybersecurity authorities to define regulatory safeguards, such as standards for data integrity, model robustness, and liability for AI-generated misinformation.

Environmental risks concern the sustainability of AI development. The energy required to train and operate large-scale models is significant. Training a single model, such as a large language model, can result in up to 550 tons of carbon emissions, equivalent to the lifetime emissions of several automobiles. Beyond electricity, the AI lifecycle consumes vast amounts of water. It relies on extracting rare earth minerals like lithium and cobalt, which are associated with land degradation, biodiversity loss, and geopolitical conflict. Despite being touted as tools to combat climate change, AI systems carry heavy environmental costs. Tech companies have struggled to meet sustainability goals, and data center cooling systems are placing additional stress on water-scarce regions. To mitigate this impact, regulators should encourage transparency in reporting AI’s environmental footprint, support research into energy-efficient model architectures, and incentivize the adoption of renewable energy in data centers.

In addition to these four categories, cross-cutting challenges further complicate regulatory responses. Monitoring and enforcing compliance in decentralized and rapidly evolving AI ecosystems are increasingly complex. Traditional regulatory frameworks may not be equipped to address the unique risks posed by AI, especially regarding real-time decision-making or autonomous behavior. This creates an urgent need for adaptive, risk-based approaches to regulation, as well as stronger coordination across sectors and jurisdictions. Regulatory design must evolve in parallel with technological development, promoting interoperability, legal clarity, and cross-border cooperation.

Ultimately, AI risks are not isolated. They are complex, interconnected, and dynamic. For regulators, this demands a systems-thinking approach that embeds ethics, equity, security, and sustainability into policy frameworks. Coordination across ministries, sectoral agencies, and international bodies is essential to ensuring that AI technologies are deployed responsibly, inclusively, and with resilience at their core.

Section 3: Securing AI – cybersecurity and data governance

The convergence of artificial intelligence, cybersecurity, and data governance is emerging as one of the most urgent and complex challenges in the digital era. As AI systems become embedded across all layers of critical infrastructure, from national security and financial systems to healthcare, transport, and public administration, their exposure to cyber threats increases exponentially. Unlike traditional software, AI systems are dynamic, data-dependent, and often opaque in their decision-making processes. These characteristics introduce new forms of risk that cannot be addressed by conventional cybersecurity or data protection approaches alone.

This section begins by outlining the evolving threat landscape specific to AI systems, examining how adversarial attacks, data poisoning, model inversion, and automated social engineering tactics such as deepfake-enabled phishing campaigns are being used to compromise AI models, manipulate outputs, and exfiltrate sensitive data. It highlights how the growing automation and sophistication of these threats can undermine the reliability, security, and ethical performance of AI technologies, particularly in mission-critical applications.

The section then turns to the concept of AI system security across its lifecycle, from development and training to deployment and operation, exploring the risks associated with supporting infrastructure, including data storage systems, cloud environments, and hardware-software interfaces.

In parallel, the section addresses the central role of data governance in ensuring AI accountability and trust, also highlighting emerging risks from generative AI, such as the inference of sensitive attributes from seemingly non-sensitive data, and the challenges of tracing data provenance within large language models. Existing and emerging policy responses at the national and international levels are reviewed, providing comparative insights into cybersecurity frameworks and data governance policies worldwide and discussing key initiatives. The section further explores the use of privacy-enhancing technologies to improve data security while enabling innovation. These technologies are presented not as silver bullets, but as complementary instruments that should be supported by robust legal, procedural, and institutional safeguards.

3.1 Cybersecurity threats in AI systems

Cybersecurity threats in AI systems pose significant risks as AI becomes more integrated into critical infrastructure, finance, healthcare, and digital services. These threats include adversarial attacks, where malicious actors manipulate input data to deceive AI models; data poisoning, in which training datasets are corrupted to skew outcomes; and model inversion, where attackers extract sensitive information from AI models. Additionally, AI-driven automation can be exploited for cyberattacks, such as deepfake-based social engineering or autonomous malware. Ensuring robust AI security requires implementing adversarial training, data integrity checks, access controls, and continuous monitoring to mitigate risks and enhance resilience against cyber threats.
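
To give a sense of how an adversarial (evasion) attack works in principle, the sketch below applies a fast-gradient-sign-style perturbation to a toy logistic model: the input is nudged in the direction that most increases the model's error. The weights, input, and perturbation budget are invented for illustration and do not represent any real system.

```python
# Sketch of an adversarial (evasion) attack in the spirit of the fast gradient
# sign method: nudge the input in the direction that most increases the
# model's error. The logistic "model" and its weights are made-up placeholders.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([1.5, -2.0, 0.5])      # hypothetical trained weights
b = -0.1
x = np.array([0.4, 0.3, 0.8])       # a legitimate input, classified as positive
y_true = 1.0

p = sigmoid(w @ x + b)
# For logistic regression, the gradient of the cross-entropy loss with
# respect to the input is (p - y) * w.
grad_x = (p - y_true) * w

epsilon = 0.25                       # attacker's perturbation budget
x_adv = x + epsilon * np.sign(grad_x)

print(f"original score: {p:.2f} -> adversarial score: {sigmoid(w @ x_adv + b):.2f}")
# A small, targeted change to the input can flip the model's decision, which
# is why input validation and adversarial training matter for deployed systems.
```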

Adapted from: WEF (2025), Artificial Intelligence and Cybersecurity: Balancing Risks and Rewards, https://reports.weforum.org/docs/WEF_Artificial_Intelligence_and_Cybersecurity_Balancing_Risks_and_Rewards_2025.pdf

While AI’s role in both cyber offense and defense is well understood, the recognition of AI itself as a critical asset requiring dedicated protection is a relatively new but rapidly growing focus area. As AI systems become more embedded in national security, finance, healthcare, and public services, safeguarding them from cyber threats is paramount.

AI System Attack Surface and Security Properties

AI systems face security vulnerabilities across various stages of their lifecycle. These risks can affect several domains:

Business applications rely on AI-generated outputs for decision-making and client-facing services, such as recommendation engines and chatbots. Any compromise at the input, model, or output stage can significantly impact trust, compliance, and operational integrity. Ensuring AI security requires a multi-layered approach, incorporating robust monitoring, secure training environments, and continuous validation of data integrity. As AI technologies evolve, so must cybersecurity strategies to counter emerging threats and maintain the reliability of AI-driven systems. For a detailed graphical representation, refer to the Figure below.

Source: WEF (2025), Artificial Intelligence and Cybersecurity: Balancing Risks and Rewards, https://reports.weforum.org/docs/WEF_Artificial_Intelligence_and_Cybersecurity_Balancing_Risks_and_Rewards_2025.pdf

Governments and institutions worldwide are now actively identifying the cybersecurity risks associated with AI, with key initiatives mapping and categorizing emerging threats. For example, US MITRE and the UK’s National Cyber Security Centre (NCSC) are leading efforts to define attack vectors specific to AI systems. Similarly, various national policies and frameworks are setting the foundation for AI cybersecurity best practices.

At the same time, real-world AI cybersecurity incidents are being closely documented, with dedicated repositories and databases tracking vulnerabilities, attacks, and best practices. This growing body of evidence underscores the pressing need for AI-specific security frameworks, proactive risk assessments, and robust regulatory measures. For AI regulators, the challenge is clear: ensuring AI systems are not only powerful and efficient but also resilient against emerging cyber threats.

3.2 Data governance for AI

Data governance is foundational to the responsible and effective deployment of AI. As AI systems depend on large quantities of data for training, decision-making, and automation, the integrity, security, and ethical use of this data are essential for ensuring reliable AI outcomes. Effective data governance frameworks establish clear policies for data quality, privacy, security, and access control, reducing risks associated with bias, misinformation, and regulatory non-compliance.

AI models are not developed in isolation; they rely on extensive datasets sourced from publicly accessible materials, including social media posts, blogs, and online forums. However, the availability of such data does not inherently grant permission for its collection and use in AI training. Privacy laws and frameworks, such as the OECD Privacy Guidelines[59], establish that personal data should be collected lawfully, fairly, and with clear justification.[60]

The Risks of AI “Hallucinations” and Data Protection

Generative AI models are known to produce “hallucinations”: fabricated or misleading content that appears credible but lacks factual accuracy. This phenomenon poses significant data privacy risks.

A medical AI assistant, for instance, might generate incorrect treatment recommendations based on flawed training data, potentially putting patients at risk. Similarly, an AI-powered background check system could mistakenly flag an individual due to errors in data processing. Such inaccuracies raise fundamental legal and ethical questions: If an AI model generates false personal information, should individuals have the right to correct or delete it? And if identifying and removing specific data points proves technically unfeasible, should the entire model be reconsidered?

Beyond privacy risks, these inaccuracies can also create security vulnerabilities. AI-generated misinformation or disinformation can be exploited in sensitive domains, leading to real-world consequences. Without adequate safeguards, AI systems may contribute to disseminating false narratives, exacerbate existing biases, or compromise decision-making processes in high-stakes environments.

Source: https://iapp.org/news/a/ghosts-in-the-algorithm-busting-ai-hallucinations-under-the-gdpr

Privacy regulations worldwide emphasize the right of individuals to access and understand how their personal data is used and processed. Data subjects also have the right to have their personal data rectified, and under certain circumstances, erased. Yet, AI systems, many of which are black boxes, frequently obscure these processes. Generative AI systems go even further and can infer sensitive personal details, even when such information is not explicitly included in their training data. Research has demonstrated that AI models can accurately predict attributes such as gender, age, or ethnicity by analyzing linguistic patterns and social media activity. For example, a chatbot can generate responses containing personal details retrieved from outdated or obscure online sources. If the individual affected wishes to have their information removed, identifying the exact origin of the data within the AI model may be nearly impossible. This problem is further compounded when AI-generated content falsely attributes statements or actions to individuals, potentially leading to reputational harm.[61]

To counter these trends, robust data governance enhances AI transparency and accountability by enforcing standards on data provenance, auditability, and algorithmic explainability. This is particularly vital for high-stakes applications in finance, healthcare, and public services, where errors or biases can have profound social and economic consequences. Organizations and governments can foster innovation by aligning AI systems with well-defined governance principles while safeguarding human rights, data protection, and fair decision-making processes.[62]

How does AI collect personal data?

Social Media Tracking:

Social media platforms worldwide rely on AI algorithms to monitor user interactions, analyzing likes, comments, shares, browsing history, and even the time spent on specific posts. This data fuels highly personalized content recommendations, ensuring users remain engaged for longer periods. Whether it’s the next viral video on TikTok (ByteDance, China), a breaking news alert on X (formerly Twitter), or a curated Instagram feed, AI works behind the scenes to shape digital experiences. For instance, TikTok’s “For You” page rapidly learns from user behavior. If someone watches multiple travel videos in full but skips food-related content, the algorithm quickly prioritizes travel-related posts, offering an endless stream of similar videos. [63] Similarly, VK (Russia’s leading social network)[64] and WeChat (Tencent, China) use AI to recommend content, groups, and advertisements. India’s ShareChat[65] tailors vernacular content across regional languages using behavioral cues, while Brazil’s Kwai (a localized version of Kuaishou, China)[66] uses AI to curate short-form video content popular in Latin America. In Africa, platforms like Ayoba (South Africa) combine messaging, music, and micro-apps, using AI to personalize user engagement based on local content preferences.[67]

In Europe, streaming platforms such as Deezer (France)[68] apply AI to curate personalized music playlists, while fashion e-commerce site Zalando (Germany)[69] uses AI to recommend styles based on browsing history, body type, and return patterns. These tools illustrate how AI personalization extends well beyond traditional social networks, influencing what users see, buy, and listen to.

AI also plays a key role in content moderation, flagging harmful posts and removing misinformation. However, AI systems can also be exploited by malicious actors to generate biased recommendations and misinformation, and their extensive data collection creates privacy risks. As already mentioned, AI-powered tools can generate deepfake content, amplifying concerns about digital manipulation and online safety. Recommendation systems powered by AI can also create “echo chambers,” reinforcing biases by continuously showing users content that aligns with their existing views, thereby limiting exposure to diverse perspectives.[70]

As AI advances, the line between user benefit and potential privacy intrusion becomes increasingly blurred, underscoring the need for stronger regulations and digital literacy to ensure the responsible use of AI in social media.

Facial Recognition Technology (FRT)

AI-powered facial recognition technology has transformed how individuals are identified, authenticated, and tracked. These systems utilize sophisticated algorithms to analyze facial features, converting images into digital representations that enable accurate and rapid identification. This technology is widely employed in security and law enforcement, unlocking smartphones, enabling digital payments, streamlining airport check-ins, and even personalizing customer experiences in retail.
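
For illustration, the sketch below shows the matching step that typically sits at the core of such systems: faces are reduced to embedding vectors and compared by similarity against a stored template. The vectors and threshold here are random stand-ins, not outputs of any real face-encoding model or biometric data.

```python
# Sketch of the matching step behind facial recognition: faces are reduced to
# embedding vectors and compared by similarity. The vectors below are random
# stand-ins, not outputs of any real face-encoding model.
import numpy as np

rng = np.random.default_rng(42)

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

enrolled_template = rng.normal(size=128)                       # stored at enrolment
same_person_probe = enrolled_template + rng.normal(scale=0.1, size=128)
different_person = rng.normal(size=128)

THRESHOLD = 0.8   # operating point; where it sits drives false match/non-match rates
for name, probe in [("same person", same_person_probe), ("stranger", different_person)]:
    score = cosine_similarity(enrolled_template, probe)
    print(f"{name}: similarity={score:.2f}, match={score >= THRESHOLD}")
```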

Despite its benefits, facial recognition raises serious privacy and ethical concerns. Governments and private entities can use this technology for mass surveillance, often without individuals’ explicit consent. In some cases, it has been deployed to track protesters or monitor citizens in ways threatening civil liberties. The controversy surrounding Clearview AI, a company that scraped billions of online images to develop a powerful facial recognition database, exemplifies the risks of unauthorized data collection. Moreover, studies have shown that facial recognition algorithms can exhibit biases, with higher error rates for people of color, women, and marginalized communities, leading to potential discrimination and wrongful identifications.[71]

While some countries and organizations have banned or restricted its use (e.g., the EU), others continue to expand its applications, making it imperative to balance technological advancement and the protection of individual rights. As AI-driven facial recognition technology evolves, the debate over its ethical implications will intensify, necessitating greater transparency, accountability, and responsible governance.

Case Study: Use of FRT in social media

Facebook was a pioneer in using facial recognition at a consumer scale. In the early 2010s, it introduced a feature that automatically suggested tagging your friends in photos you uploaded by scanning the faces in the image. Behind the scenes, Facebook’s algorithm (known as “DeepFace”) would compare faces in new photos to a database of face templates it had created from users’ tagged photos. If it found a likely match, it would prompt the uploader with “Tag suggestion: [Friend’s Name]?” This made tagging easier for users and helped drive engagement. However, it also ignited privacy complaints and regulatory scrutiny, especially under Illinois’s Biometric Information Privacy Act (BIPA). Facebook was sued for collecting and storing facial data without explicit consent. In 2020, the company agreed to a landmark settlement, paying USD 650 million to Illinois users to resolve a class-action lawsuit over these tag suggestions. By 2021, amid growing public concern, Facebook announced it would shut down its facial recognition system and delete over a billion face templates, essentially ending automatic face-tag suggestions.

​Source: https://www.americanbar.org/groups/business_law/resources/business-law-today/2021-february/historic-biometric-privacy-settlement/

Location Tracking

AI-driven location tracking has become an integral part of modern digital services, using data from GPS, Wi-Fi, and cellular networks to enable seamless navigation, personalized recommendations, and targeted advertising. From helping users find the fastest route home with Google Maps to allowing food delivery apps to estimate arrival times, this technology has revolutionized convenience. Companies have even integrated AI-powered location awareness into smartphones, enabling features such as automatic traffic updates, geofencing for smart home devices, and real-time safety alerts.

However, location tracking also raises significant privacy concerns, as it can reveal highly sensitive details about an individual’s daily routines, personal habits, and even medical or religious affiliations. For example, smartphone data can indicate whether a person visits a specific place of worship, attends political protests, or frequents a healthcare facility.[72]

Law enforcement agencies and advertisers also leverage location data, sometimes without user consent. For instance, businesses use geofencing to deliver targeted ads, such as sending a discount notification when a user passes by a store. While this can enhance the customer experience, it also highlights the extent to which personal data is being tracked and monetized.
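
As a concrete illustration of the geofencing logic described above, the following sketch checks whether a reported device location falls within a notification radius of a point of interest. The coordinates, radius, and triggered action are hypothetical.

```python
# Sketch of the logic behind geofencing: estimate the distance between a
# device and a point of interest and trigger an action inside a radius.
# Coordinates and the radius are hypothetical.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in metres."""
    r = 6_371_000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

store = (46.2044, 6.1432)          # hypothetical point of interest
device = (46.2050, 6.1440)         # reported device location

distance = haversine_m(*store, *device)
if distance <= 200:                # 200 m geofence radius
    print(f"Within geofence ({distance:.0f} m): push promotional notification")
# The same primitive that powers convenience features can also reveal visits
# to clinics, places of worship, or protests, which is the privacy concern above.
```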

To balance the benefits and risks of location tracking, stronger privacy protections, clearer consent mechanisms, and more transparent data policies are necessary. As AI continues to refine location tracking capabilities, ensuring ethical and responsible use will be critical in maintaining trust and protecting individual rights.

Voice Assistants and Audio Data

Voice assistants like Amazon Alexa, Google Assistant, and Apple’s Siri have become indispensable in everyday life, helping users set reminders, control smart home devices, and access information hands-free. These AI-driven assistants rely on continuous listening to detect wake words (e.g., “Hey Siri” or “Alexa”) before processing and responding to commands. While this functionality enhances user convenience, it also means that vast amounts of personal and potentially sensitive audio data are collected, stored, and analyzed. Privacy concerns have grown worldwide as reports surfaced about how voice recordings are handled.

Baidu’s DuerOS (China)[73] and Samsung’s Bixby (Republic of Korea)[74] also rely on voice data to refine their AI capabilities, especially in mobile devices and smart TVs. In Europe, Snips (France)[75], a privacy-centric voice assistant company later acquired by Sonos, pioneered on-device processing to reduce privacy risks. Meanwhile, Africa’s Viamo platform[76] is exploring local language voice interactions, adding a layer of linguistic inclusivity, and raising questions about data localization and consent in low-regulation environments.

Accidental activations are another concern. For example, a voice assistant device mistakenly recorded a private conversation and sent it to a random contact, highlighting the risks of unintended surveillance.[77] Similarly, voice AI built into smart TVs and mobile apps has prompted debate over the extent of passive listening in connected environments.

Beyond home use, law enforcement agencies in multiple jurisdictions have sought access to voice assistant data for criminal investigations, further complicating the privacy landscape.

Meanwhile, advertisers across global markets are beginning to explore how voice data can be leveraged for hyper-targeted marketing, raising red flags about whether ambient conversations could be analyzed for commercial profiling. With companies in India, Brazil, and Asia deploying voice assistants in localized e-commerce and health apps, the scale of data collection continues to expand.

As voice AI continues to evolve and expand across borders, balancing convenience with user privacy remains a pressing global challenge, underscoring the urgent need for stronger regulations, cross-border standards, greater transparency, and user-centric design to ensure responsible and ethical use of personal audio data.

Web Activity Monitoring

AI algorithms have become increasingly sophisticated in tracking users’ browsing history, search queries, and page visits, often operating without explicit user consent. While this data collection fuels targeted advertising and personalized user experiences, it raises significant concerns about privacy and digital surveillance. Companies leverage these insights to build detailed consumer profiles, which can include preferences, behavioral patterns, and even personal struggles. What many users may not realize is that such tracking can reveal deeply sensitive information, such as medical conditions, financial difficulties, or political leanings, without their knowledge.

For instance, if a user frequently searches for terms related to a specific illness, AI-powered advertising engines may start displaying ads for prescription medications, specialist doctors, or health insurance options. While this may seem helpful, it also means that third parties, including advertisers and data brokers, have access to private medical information without consent.

Beyond health data, financial privacy is also at risk. AI-driven tracking can reveal when someone frequently visits debt relief websites, browses payday loan services, or searches for budget-friendly alternatives for daily expenses. This kind of behavioral profiling allows advertisers to target financially vulnerable individuals with high-interest loans or predatory financial products.

Despite these concerns, many internet users remain unaware of how extensively their data is harvested. AI-driven tracking extends beyond websites to cross-device monitoring, where a user’s activity on their smartphone, tablet, and desktop is stitched together to create a seamless digital profile. Even when privacy settings are adjusted, AI-powered tracking techniques like browser fingerprinting can still identify and monitor users across multiple platforms. A 2023 study revealed that many websites could still reconstruct user identities using machine learning techniques even after clearing cookies or browsing in “incognito” mode.[78]
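
To show why fingerprinting survives cookie deletion, the simplified sketch below derives a stable identifier from browser attributes. The attribute values and hashing scheme are illustrative; real-world fingerprinting combines far more signals.

```python
# Simplified sketch of browser fingerprinting: combine attributes that rarely
# change into a stable identifier that survives cookie deletion. The attribute
# values are hypothetical, and real fingerprinting uses many more signals.
import hashlib

browser_attributes = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ExampleBrowser/1.0",
    "screen": "1920x1080x24",
    "timezone": "Europe/Geneva",
    "language": "en-GB",
    "installed_fonts": "Arial;DejaVu Sans;Noto Serif",
    "canvas_hash": "ab12cd34",       # stand-in for a canvas-rendering fingerprint
}

stable_string = "|".join(f"{k}={v}" for k, v in sorted(browser_attributes.items()))
fingerprint = hashlib.sha256(stable_string.encode()).hexdigest()[:16]

print("device fingerprint:", fingerprint)
# Clearing cookies does not change these attributes, so the identifier
# persists, which is why fingerprinting undermines consent-based controls.
```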

While some governments have introduced regulations, such as the European Union’s GDPR and California’s Consumer Privacy Act (CCPA), to limit invasive tracking, enforcement remains challenging. Many companies find loopholes or rely on dark patterns, which are manipulative design tactics that nudge users into sharing more data than they intended. Users attempting to opt out of data collection often face deliberately confusing privacy settings, making it difficult to fully escape AI-powered tracking.

The growing tension between AI-driven personalization and digital privacy underscores the need for stronger consumer protection. Privacy-focused alternatives, such as search engines like DuckDuckGo or encrypted messaging apps like Signal, offer users more control over their data. However, without widespread awareness and stricter enforcement of data protection laws, AI-powered tracking will continue to push the boundaries of surveillance, shaping not just the ads users see but potentially their opportunities, decisions, and rights in the digital age.

Smart Devices and IoT Data Collection

The rise of smart devices and the Internet of Things (IoT) has transformed daily life, offering convenience, automation, and personalized experiences. Smart home appliances, such as thermostats, voice assistants, security cameras, and connected refrigerators, continuously collect data to optimize their functions and enhance user experiences. Wearable devices, including fitness trackers and smartwatches, gather extensive health data, such as heart rate, sleep patterns, activity levels, and even stress indicators. While these devices improve efficiency and well-being, they also introduce significant concerns regarding privacy and security. A major risk stems from the sheer volume and sensitivity of the information these devices collect and transmit, often without users fully appreciating its scope.

Beyond individual privacy, IoT devices are also vulnerable to cyber threats. Hackers have exploited security flaws in connected home gadgets, taking control of baby monitors, smart locks, and even thermostats. One alarming case involved a hacker gaining access to a family’s Nest security camera, broadcasting disturbing messages through its speaker. Additionally, some smart TVs and home assistants have been found to collect voice and behavioral data for advertising purposes without obtaining explicit user permission. The nature of IoT devices can make it challenging for organizations to maintain complete control over every aspect of their operations. For example, organizations often have little to no control over security and privacy risks associated with communication technologies as these are typically managed by third-party telecommunications providers. The same applies to cloud services, where users may have varying degrees of control, from none to extensive, over security and privacy settings.

Governments and regulatory bodies are also working to establish stricter IoT security standards, ensuring manufacturers implement robust encryption and user privacy safeguards. As IoT adoption continues to grow, striking the right balance between innovation and protecting personal data will be crucial in fostering trust and responsible technology use.[79]

Distinguishing Data Privacy and Data Protection

The terms “data privacy” and “data protection” are often used interchangeably in everyday conversations, but they have distinct meanings and implications.

Data Privacy focuses on determining who has authorized access to personal data, how that data is collected and processed, and the extent to which individuals can control its use. It encompasses not just personal data but any information that, if accessed by others, could infringe upon an individual’s right to a private life and personal autonomy. The concept of privacy is primarily centered on an individual’s control over their own information. However, this notion has become more complex with the increasing difficulty of managing personal data in the digital age. What is considered private depends on context—sharing real-time location with a friend is different from a third party collecting and using that data for targeted advertising without explicit consent. Furthermore, the relational nature of data, such as social media activity or genetic information that discloses data beyond one individual, complicates the traditional idea of privacy as individual control.

Data Protection refers to the legal and procedural measures designed to safeguard personal data. This includes ensuring that data processing is fair, purpose-driven, and conducted under a legally accepted basis, such as consent or legitimate interest. While consent grants individuals the right to withdraw their data at any time, legitimate interest allows data processors broader discretion in justifying data collection for business purposes. Data controllers and processors must uphold core data protection rights, including notifying individuals about data collection, enabling access and correction, allowing data portability, and providing the right to object to automated processing. However, there is often an implicit assumption that certain types of personal data can be collected by default.

For instance, the European Union explicitly differentiates between privacy, which pertains to an individual’s private life, and data protection, which is enshrined as a distinct fundamental right in the Charter of Fundamental Rights of the European Union. Despite their differences, the two concepts overlap. For example, even when data protection laws do not apply (e.g., when body scanner data is anonymized), privacy rights still ensure protection against potential violations of individual autonomy. Conversely, data protection regulations can restrict the processing of personal data, even in cases where privacy concerns are minimal.

Source: https://hai.stanford.edu/white-paper-rethinking-privacy-ai-era-policy-provocations-data-centric-world

 

The regulatory dilemma: What should be done?

Since AI models cannot easily forget or correct mistakes, regulators and policymakers face a tough decision about how far rights such as rectification and erasure can realistically be enforced against already trained models.

AI Governance and Data Protection for Policymakers: Lessons from Italy’s DeepSeek Investigation

As AI technologies continue to advance, policymakers face growing challenges in enforcing responsible data governance and regulatory compliance. The 2025 investigation by Italy’s Data Protection Authority (Garante) into DeepSeek AI highlighted key concerns around data transparency, legal justifications for data use, and cross-border data transfers. Central to the inquiry was whether DeepSeek had clearly disclosed its data sources, informed users, and lawfully processed personal information, issues critical for aligning with global privacy laws like the GDPR.

The investigation also examined the legality of transferring personal data to servers located in China, raising broader concerns about international data security and regulatory alignment. The case illustrates the urgent need for AI-specific data protection rules, stronger transparency requirements, and regular compliance audits to ensure ethical AI development. Garante’s scrutiny of DeepSeek serves as a timely lesson: effective AI governance requires immediate and coordinated regulatory action.

Source: https://www.hunton.com/privacy-and-information-security-law/italian-garante-investigates-deepseeks-data-practices

Global data protection regulations and initiatives and their impact on AI

Compliance with data protection frameworks requires adherence to key rights and guidelines that ensure transparency, accountability, and individual control over personal data when processed by AI systems. Some of these frameworks have been outlined below:

The importance of Data Infrastructure in AI Governance: The Republic of Korea’s Data Dam Initiative

The Republic of Korea has embarked on an ambitious initiative called the Data Dam to strengthen its data infrastructure and promote innovation in AI and big data. This initiative is a key component of the Korean New Deal, prioritizing digital transformation and green growth. The Data Dam project is focused on collecting and leveraging vast amounts of data across multiple sectors, including healthcare, transportation, and finance. By integrating data from various sources and making it accessible through a centralized platform, the Republic of Korea aims to establish a robust data ecosystem that fosters AI development. Similar to how a water-storage dam gathers, processes, and distributes water for agricultural use, the Data Dam aggregates information from public and private sectors, converts it into valuable datasets, and makes it available for widespread use.

Key Features of the Data Dam Initiative:

  • Centralized Data Integration: Merging data from public and private sectors into a unified platform to break down silos and enable more efficient data usage.
  • AI Hub Development: Establishing an AI hub to provide businesses and researchers with access to AI training data from the Data Dam and cloud-based high-performance computing resources.
  • Sector-Specific Data Utilization: Applying data insights to key sectors like healthcare, transportation, and finance to drive innovation and enhance AI-powered services.
  • Data Privacy and Security: Implementing robust data protection policies to safeguard personal information and ensure compliance with regulations, fostering trust in the public sector.

Source: World Bank (2024) Global Trends in AI Governance: Evolving Country Approaches, https://documents1.worldbank.org/curated/en/099120224205026271/pdf/P1786161ad76ca0ae1ba3b1558ca4ff88ba.pdf.

 

Section 4: Approaches to AI Governance

AI governance has become a critical global issue, with several international organizations working to establish frameworks that promote responsible AI development while addressing ethical, legal, and security concerns. The United Nations has taken a leading role in this effort, particularly through the UN AI Advisory Body’s “Governing AI for Humanity: Final Report” (2024), which proposes a Global AI Governance Framework emphasizing inclusivity and public interest. The report outlines guiding principles, including inclusivity, public interest, the alignment of AI governance with data governance, and the promotion of a data commons. It recommends the establishment of an AI Office within the UN Secretariat to coordinate international efforts and advocates for a Global AI Data Framework to ensure interoperability and establish secure data-sharing standards.[80] It also identifies essential institutional functions, including evaluating AI’s future trajectory and impact, developing and harmonizing safety and risk management standards, and fostering international multistakeholder collaboration to support AI’s responsible development and application for economic and social progress.[81] The Global Digital Compact, adopted at the Summit of the Future (2024), recognizes as its fifth objective the need for a balanced, inclusive, and risk-based approach to the governance of AI, which requires an agile, multidisciplinary, and adaptable multi-stakeholder approach.[82]

The United Nations System White Paper on AI Governance (2024), prepared by the Inter-Agency Working Group on Artificial Intelligence under the leadership of UNESCO and ITU, analyses the UN system’s institutional models, functions, and existing international normative frameworks applicable to global AI governance.[83]

UNESCO has adopted the Recommendation on the Ethics of AI (2021), providing guidelines on AI literacy, regulatory capacity building, and integrating AI ethics into national policies. This international framework prioritizes human rights, fairness, and transparency while offering practical policy guidance for implementing ethical principles. However, like many similar efforts, it remains nonbinding.[84]

The Council of Europe’s Framework Convention on AI and Human Rights, Democracy, and the Rule of Law[85] is the first legally binding international agreement on AI governance, setting transparency, accountability, and risk assessment requirements for AI systems across member states. The Convention focuses on human rights protections, ensuring that AI applications align with democratic values and do not exacerbate societal inequalities. Yet, the Convention can be seen as establishing only a basic framework with limited obligations and rights. The Convention is primarily open to member states of the Council of Europe. However, non-European countries can also sign and accede to it — if the Committee of Ministers of the Council of Europe formally invites them. This aligns with the Council’s approach to other conventions, such as the Budapest Convention on Cybercrime, which has several non-European parties.

The OECD AI Principles provide a globally recognized framework for fair, transparent, and accountable AI systems. The OECD AI Policy Observatory is crucial in tracking AI policy developments and facilitating international cooperation among member states.

Beyond intergovernmental organizations, the Global Partnership on AI (GPAI) brings together governments, private sector leaders, academia, and civil society to foster the responsible development and deployment of AI. GPAI supports AI ethics research, capacity development, and best practices in AI regulation, ensuring that AI systems align with human rights and democratic values. The initiative also promotes international collaboration, recognizing that AI governance requires a multi-stakeholder approach to address the complexities of emerging AI technologies.

At the regional level, multiple governance models have emerged to regulate AI in different geopolitical contexts. The Ibero-American AI Charter harmonizes AI policies in Latin America and Spain, promoting regulatory coherence across the region. In Europe, the EU Digital Services Act introduces regulations for AI-powered online platforms, focusing on content moderation and algorithmic transparency. Meanwhile, the ASEAN Guide on AI Governance and Ethics establishes governance principles for Southeast Asia, emphasizing risk-based AI regulation and cross-border data governance.

The African Union’s comprehensive continental strategy, adopted in July 2024[86], seeks to harness AI as a driving force for achieving the Sustainable Development Goals. It highlights “limited infrastructure and skills” as significant barriers to AI adoption. It emphasizes the need to strengthen infrastructure in key areas such as energy, broadband connectivity, data centers, cloud computing, high-performance computing, IoT, and high-quality data.

Moreover, initiatives such as the G7 Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems promote cross-border cooperation on AI governance, reinforcing the need for internationally aligned AI safety and security standards.[87]

These global and regional initiatives reflect the growing recognition that AI governance is not just a technical issue but a fundamental policy challenge requiring international coordination, ethical safeguards, and regulatory adaptability. As AI continues to evolve, global governance frameworks must balance innovation, security, and human rights protection, ensuring that AI technologies benefit all of humanity. The UN can play a crucial role in this process, serving as a unifying institution that transcends these divisions and can help catalyze the formation of the necessary networks and hubs.[88]

Apart from the global AI governance initiatives, at the national level, countries adopt AI governance models tailored to their unique economic, social, and political priorities, resulting in inconsistencies across jurisdictions. The regulation of AI systems often falls into two broad frameworks: risk-based and rights-based approaches. Both aim to govern AI responsibly but differ in focus, methodology, and outcomes. Understanding their distinctions, complementarities, and limitations is critical to designing effective AI governance strategies.

4.1 Risk-based approaches

A risk-based approach to AI governance categorizes AI systems based on their level of potential harm, with stricter requirements for high-risk applications. This approach seeks to balance innovation with the need for accountability by tailoring regulations according to the level of risk an AI system poses. It draws on established regulatory principles in finance, medicine, and cybersecurity, where risk stratification determines the levels of oversight.

Risk-based approaches offer regulators significant flexibility, allowing oversight mechanisms to be adjusted based on the specific risks associated with different AI applications. This alignment with sectoral realities ensures that low-risk innovations can advance with minimal regulatory barriers, thereby fostering efficiency and technological growth. At the same time, AI systems that raise serious ethical, safety, or human rights concerns are subjected to more rigorous requirements, helping to safeguard the public interest. This approach also enables scalable compliance, allowing smaller firms and startups to engage with AI development without facing excessive regulatory burdens. By proportionally aligning obligations with risk, such frameworks help maintain robust oversight where needed while encouraging broad-based participation in the AI economy.

Different jurisdictions, however, interpret and apply risk-based regulation in distinct ways. Countries like Japan, the United States, and Singapore have embraced risk-based models, prioritizing flexibility and innovation. These frameworks aim to create enabling environments for experimentation and rapid deployment, often relying on soft-law instruments such as guidelines and voluntary codes. In contrast, the European Union’s AI Act also follows a risk-based structure but places a strong emphasis on protecting fundamental rights and preventing harm. The EU framework imposes strict legal obligations on high-risk systems, focusing on transparency, accountability, and redress.

Despite their benefits, risk-based approaches face considerable challenges, particularly concerning the subjectivity of risk classification. There is often limited consensus among governments, industry actors, and civil society on how to define high, medium, or low risk. For example, while the European Union designates emotion recognition technologies as high-risk and heavily regulates them, other jurisdictions permit their use with relatively little oversight. Moreover, risk-based frameworks may under-regulate AI systems that pose less obvious but nonetheless significant long-term societal harms. Tools like automated hiring systems may appear low-risk initially but can exacerbate discrimination and inequality if not properly assessed. These challenges highlight the importance of ongoing evaluation, inclusive stakeholder engagement, and adaptive governance mechanisms that can evolve alongside technological developments.

EU AI Act

The European Union AI Act[89] is one of the most comprehensive risk-based AI regulatory frameworks. The AI Act establishes a well-defined, risk-based framework for AI developers and deployers, outlining regulations for specific AI applications. As part of a broader policy initiative to promote trustworthy AI, the Act is complemented by the AI Innovation Package[90], the introduction of AI Factories[91], and the Coordinated Plan on AI.[92] Collectively, these measures ensure safety, uphold fundamental rights, and promote human-centric AI while fostering adoption, investment, and innovation across the EU.

The Act classifies AI applications into four risk categories: unacceptable risk (practices deemed incompatible with fundamental rights and banned outright), high risk (permitted subject to strict obligations, including conformity assessments, documentation, and human oversight), limited risk (subject mainly to transparency obligations, such as informing users that they are interacting with an AI system), and minimal risk (largely left unregulated).

Source: https://www.pwc.lu/en/newsletter/2024/eu-ai-act.html

The AI Act’s risk-based framework already influences regulatory efforts beyond the EU. In Argentina, Bill 3003 D 2024[93] aims to ban AI systems deemed “unacceptable risk”, those that violate human rights, while imposing mandatory safeguards on high-risk systems. These include compulsory impact assessments and adherence to outcome-oriented principles like fairness, equity, and non-discrimination. Similarly, Brazil’s draft AI law proposes a tiered model: AI systems with excessive risk are prohibited, while high-risk systems are subject to enhanced obligations, such as human oversight, algorithmic impact evaluations, and transparency requirements, alongside overarching principles around accountability, safety, fairness, and non-discrimination.[94]
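
To illustrate how a tiered, risk-based rule set can be operationalized in practice, the schematic sketch below maps hypothetical use cases to simplified risk tiers and associated obligations. It paraphrases the general structure of such frameworks rather than the text of any specific law, and every tier, use case, and obligation listed is an illustrative placeholder.

```python
# Schematic sketch of a tiered, risk-based rule set. The categories and
# obligations are simplified illustrations, not the text of any law.
RISK_TIERS = {
    "unacceptable": {"action": "prohibit"},
    "high": {"action": "allow with obligations",
             "obligations": ["impact assessment", "human oversight",
                             "logging", "transparency"]},
    "limited": {"action": "allow with transparency",
                "obligations": ["disclose AI interaction"]},
    "minimal": {"action": "allow", "obligations": []},
}

# Hypothetical mapping from use cases to tiers, for illustration only.
USE_CASE_TIER = {
    "spam filter": "minimal",
    "customer service chatbot": "limited",
    "credit scoring": "high",
    "social scoring": "unacceptable",
}

def obligations_for(use_case: str) -> dict:
    tier = USE_CASE_TIER.get(use_case, "high")   # err on the side of caution if unknown
    return {"use_case": use_case, "tier": tier, **RISK_TIERS[tier]}

print(obligations_for("credit scoring"))
```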

Singapore’s approach to AI governance

AI Verify: Singapore’s AI Verify is a voluntary[95] AI governance testing framework and software toolkit, developed by the Infocomm Media Development Authority (IMDA), that helps companies assess the transparency and fairness of their AI models against internationally recognized principles. The framework is aligned with global AI governance standards, such as those from the EU and OECD, and is organized around 11 key principles.

While AI Verify supports standard AI models like supervised learning algorithms, it does not currently evaluate generative AI. The tool was developed in consultation with leading global tech firms, including AWS, Google, Meta, Microsoft, DBS, and Singapore Airlines. In 2022, it launched an international pilot with participation from over 50 multinational companies like Dell, Hitachi, and IBM. Financial institutions in Singapore use AI Verify to test algorithmic biases in credit-scoring models to prevent discriminatory outcomes.

Model AI Governance Framework: First introduced in 2019 by Singapore’s Personal Data Protection Commission (PDPC), the Model AI Governance Framework provides practical guidelines for the ethical deployment of AI in the private sector, covering areas such as internal governance structures, the appropriate level of human involvement in AI-augmented decision-making, operations management, and stakeholder interaction and communication.

The framework was updated in 2020 to incorporate new considerations, including system robustness and reproducibility, and remains adaptable across various sectors and technologies.[96]

Singapore’s Generative AI Governance Framework: IMDA introduced the Model AI Governance Framework for Generative AI in 2024. Developed in collaboration with the AI Verify Foundation, the framework aims to balance innovation and accountability, ensuring that AI development aligns with ethical principles while fostering public trust.

One of the central aspects of the framework is accountability, which emphasizes clear role distribution among AI developers, application deployers, and cloud service providers. It advocates for shared responsibility models, similar to those in cloud computing, ensuring that liability aligns with control over AI systems. It also suggests insurance mechanisms to cover unforeseen risks. Regarding data governance, the framework highlights the importance of high-quality training datasets and urges policymakers to clarify the use of personal data and copyrighted content in AI model training. It recommends the adoption of privacy-enhancing technologies (PETs), such as anonymization techniques, to protect sensitive data while enabling AI development.

The framework calls for baseline safety practices to ensure trust in AI deployment, including Reinforcement Learning from Human Feedback (RLHF) to align AI behavior with human values and Retrieval-Augmented Generation (RAG) to minimize hallucinations. Greater transparency is encouraged, particularly for high-risk AI models, requiring providers to disclose training data sources, risks, and limitations. In addition, the framework emphasizes the importance of incident reporting mechanisms, urging AI developers to establish real-time reporting and remediation processes similar to bug-bounty programs in cybersecurity.
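
As an illustration of the retrieval-augmented generation pattern referred to above, the following minimal sketch retrieves the most relevant vetted passage and attaches it to a prompt so that the model answers from grounded text. Toy bag-of-words vectors stand in for a real embedding model, the example documents are placeholders, and no language model is actually called.

```python
# Highly simplified sketch of the retrieval-augmented generation (RAG) pattern:
# retrieve the most relevant vetted passage and attach it to the prompt so the
# model answers from grounded text. Toy bag-of-words vectors stand in for a
# real embedding model, and no actual LLM is called.
from collections import Counter
import math

knowledge_base = [
    "AI Verify is a voluntary testing framework and toolkit from IMDA.",
    "The Model AI Governance Framework was first issued by the PDPC in 2019.",
    "RLHF aligns model behaviour with human feedback during fine-tuning.",
]

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    common = set(a) & set(b)
    num = sum(a[t] * b[t] for t in common)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query, k=1):
    q = vectorize(query)
    return sorted(knowledge_base, key=lambda d: cosine(q, vectorize(d)), reverse=True)[:k]

question = "Who issued the Model AI Governance Framework?"
context = retrieve(question)[0]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)   # this grounded prompt would then be sent to the generative model
```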

The framework also highlights the importance of third-party testing and validation, proposing independent audits to ensure compliance with ethical and security standards. On security, it recognizes new AI-driven threats, including deepfake fraud and embedded malicious code, and calls for enhanced cybersecurity tools, such as input moderation systems to block harmful AI prompts and digital forensics tools to detect and mitigate AI-generated threats. It emphasizes the need for content provenance and transparency, recommending digital watermarking and cryptographic labeling to help distinguish AI-generated content from human-created material.
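
The cryptographic-labelling idea can be sketched as follows: the generating service signs a small manifest describing a piece of content so that platforms can later verify its origin and detect tampering. The key, field names, and scheme below are simplified placeholders; production provenance standards are considerably more elaborate.

```python
# Minimal sketch of cryptographic content labelling: the generating service
# signs metadata about a piece of content so others can verify its origin and
# that the label has not been altered. Key and fields are placeholders.
import hashlib
import hmac
import json

SIGNING_KEY = b"placeholder-key-held-by-the-ai-provider"

def label(content: bytes, generator: str) -> dict:
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
        "ai_generated": True,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(content: bytes, manifest: dict) -> bool:
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())

image_bytes = b"...synthetic image bytes..."
manifest = label(image_bytes, generator="example-image-model")
print("label verifies:", verify(image_bytes, manifest))           # True
print("tampered content verifies:", verify(b"edited", manifest))  # False
```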

The framework advocates for global cooperation in AI safety research and stresses the role of public sector adoption of AI for social good. It encourages AI-driven initiatives in education, workforce development, and sustainability, ensuring equitable access to AI technologies.

Even though it is not legally binding, the Framework can still be effective and serves as a model for AI governance in Asia and beyond. Its strength lies in providing a clear, structured, and internationally aligned approach to responsible AI, which encourages companies to adopt good practices voluntarily. Aligning with global frameworks, such as the EU AI Act and NIST AI RMF (see below), helps build trust, especially in cross-border contexts where regulatory interoperability is highly valued. However, the absence of legal enforceability can limit its reach, particularly when holding companies accountable or ensuring industry-wide adoption. Challenges such as ambiguous liability rules and high compliance costs for smaller firms may also impact uptake. Still, the framework serves as a valuable guide and reputational benchmark for many organizations, especially those seeking to demonstrate responsible AI use to partners or consumers.[97]

Japan’s approach to AI governance

Japan has passed the Act on the Promotion of Research and Development and the Utilization of AI-Related Technologies. The Act, while not strictly a risk-based framework in the way the EU AI Act is, incorporates a risk-based approach through its focus on mitigating potential negative consequences of AI development and use.[98] Passed by the House of Councilors on 28 May 2025 after earlier approval by the House of Representatives, the Act positions AI as a foundational technology for Japan’s economy and society. Rather than introducing new regulatory structures, it relies on existing laws and voluntary business cooperation, with no immediate penalties for non-compliance. Instead, the government can issue guidance and disclose harmful uses under current legal frameworks, such as data protection and copyright law.[99]

The law reflects Japan’s preference for “soft law” approaches, aligning with similar trends in Asia-Pacific jurisdictions like the Republic of Korea and Singapore. It emphasizes transparency and responsible development of AI, warning of possible misuse such as privacy breaches or copyright violations. Although it has not yet established binding rules, it authorizes the government to develop future enforcement mechanisms and creates space for an AI strategy task force, which will likely be led by the prime minister.

US approach to AI governance

Unlike the EU, which has a horizontal approach to AI regulation, the United States (US) AI governance framework follows a sectoral and market/innovation-driven approach, with different federal agencies setting their own standards. For example, the Food and Drug Administration (FDA) regulates AI in healthcare (e.g., AI-powered diagnostic tools), the National Highway Traffic Safety Administration (NHTSA) oversees AI in autonomous vehicles, and the Federal Trade Commission (FTC) enforces AI-related consumer protection laws, while AI-related executive orders emphasize innovation, security, and economic competitiveness rather than strict regulation.

The AI Risk Management Framework (AI RMF), developed by the National Institute of Standards and Technology (NIST), is a voluntary resource designed to help organizations incorporate trustworthiness considerations into designing, developing, using, and evaluating AI products, services, and systems. Developed over 18 months with input from over 240 organizations across private industry, academia, civil society, and government, the AI RMF emphasizes a collaborative approach to managing AI risks.

The framework is structured around four core functions:

  1. Govern: Establishes a risk management culture within organizations, outlining processes and structures to anticipate and manage AI system risks.
  2. Map: Involves understanding and documenting the context, capabilities, and potential impacts of AI systems.
  3. Measure: Focuses on analyzing and assessing AI systems to manage risks and ensure they meet trustworthiness requirements.
  4. Manage: Prioritizes and responds to risks, ensuring that AI systems achieve their intended purposes while minimizing negative impacts.

The NIST AI RMF identifies seven characteristics of trustworthy AI systems: (i) Valid and Reliable; (ii) Safe; (iii) Secure and Resilient; (iv) Accountable and Transparent; (v) Explainable and Interpretable; (vi) Privacy-Enhanced; and (vii) Fair – with Harmful Bias Managed.

To assist organizations in implementing the framework, NIST has developed several companion resources, such as the AI RMF Playbook.

Additionally, NIST has released the Generative Artificial Intelligence Profile (NIST-AI-600-1), which helps organizations identify unique risks posed by generative AI and proposes actions for managing these risks in alignment with their goals and priorities. These resources collectively aim to guide organizations in effectively managing AI risks, ensuring the development and deployment of trustworthy and responsible AI systems.[100]

China’s approach to AI governance

China has been among the first to enact domestic AI legislation, contributing to its broader digital regulatory framework, much like the EU. Its AI-specific regulations target recommendation algorithms and deepfakes, mandating content labelling and prohibiting activities that could disrupt social stability. Another regulation imposes liability for illegal content generated by public-facing AI models and requires high-quality training data that adhere to intellectual property protections.[101]

China’s approach to AI governance is fragmented but targeted, relying on a combination of administrative regulations (e.g., the Generative AI Measures), technical standards, and pilot programs. These rules are sector- and service-specific, with no overarching law or unified definition of AI. Compliance obligations are tailored to services like algorithmic recommendations, deep synthesis, and generative AI, focusing on content control, user protection, transparency, and national security.

China applies risk controls flexibly, focusing on services with “public opinion attributes” or “social mobilization capabilities.” The regulations emphasize security assessments and algorithm filings, particularly for services influencing public discourse.

For regulatory oversight, China employs a multi-agency model, with central and local offices of ministries like the Cyberspace Administration of China (CAC) and Ministry of Industry and Information Technology (MIIT) involved based on sectoral mandates.[102]

The Republic of Korea’s approach to AI governance

The Republic of Korea has become the second jurisdiction in the world, after the EU, to enact a comprehensive AI regulation with the Framework Act on the Development of Artificial Intelligence and Establishment of Trust Foundation (AI Framework Act). Passed in December 2024 and effective from January 2026, the Act integrates 19 bills and combines promotional and regulatory provisions. Its objectives include protecting citizens’ rights and dignity, promoting AI development, and establishing a foundation of trust.

The Act defines AI and AI systems in a manner consistent with international standards and applies to both AI developers and AI service providers, including foreign entities meeting specified thresholds. The Ministry of Science and ICT is the lead authority, supported by the National AI Committee and other institutions such as the AI Safety Institute.

Regulatory focus is placed on “high-impact AI” (systems affecting life, safety, or rights in 11 defined sectors) and generative AI, which are subject to transparency, labelling, and risk management obligations. For high-performance AI systems that exceed a cumulative computational power threshold, risk assessments must be conducted and incident response systems established. There is also a mandatory fundamental rights impact assessment (FRIA) for high-impact AI products or services, especially in public procurement.

The Act also promotes AI development through support for R&D, SMEs, international cooperation, data policies, and ethics initiatives, including optional private AI ethics committees.

In contrast with the EU’s tiered and risk-based approach and the U.S.’s light-touch, innovation-led model, the Republic of Korea’s framework offers a hybrid path, balancing support for innovation with moderate regulation. While similar in many aspects to the EU AI Act (e.g., use of FRIAs), it applies obligations more uniformly across actors and currently imposes lighter penalties. Future subordinate legislation may increase regulatory rigor.

Ultimately, the Republic of Korea’s AI Framework Act represents a distinct regulatory model, potentially serving as a template for other countries seeking to balance competitiveness with AI safety and trust, while contributing to global discussions on regulatory interoperability.[103]

4.2 Rights-based approaches

Rights-based approaches to AI governance prioritize protecting fundamental rights such as privacy, equality, freedom of expression, and non-discrimination in AI design, development, and deployment. These approaches are grounded in human rights frameworks like the Universal Declaration of Human Rights (UDHR), the European Convention on Human Rights (ECHR), the American Convention on Human Rights and the Protocol of San Salvador[104], and the African Charter on Human and Peoples’ Rights[105]. Rights-based regulation emphasizes proactive safeguards and accountability mechanisms to uphold these rights throughout the AI lifecycle. While this approach provides an ethical foundation and comprehensively addresses societal impacts, it can be complex to translate abstract rights into actionable requirements. Stricter limitations on technology may also slow innovation, and enforcement gaps often challenge implementation.

One of the most significant recent developments in AI governance is the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law[106], the first legally binding international agreement focused on AI. For a more detailed analysis of the Framework Convention, please refer to Section 4.

Another foundational framework guiding AI governance is the UDHR.[107] As the cornerstone of modern human rights law, the UDHR establishes principles such as equality, non-discrimination, privacy, and freedom of expression, all of which are critical in shaping AI policies. AI regulations grounded in UDHR principles would promote transparency in decision-making, fairness in AI-driven processes, and safeguards against discriminatory or unethical AI applications. The UDHR’s enduring relevance in AI governance lies in its ability to set a global benchmark for the rights-based development and deployment of AI systems, reinforcing the necessity of embedding human rights protections within AI policy frameworks.

Beyond these core instruments, several other international frameworks play a crucial role in guiding AI governance. For instance, the UN Guiding Principles on Business and Human Rights[108] provide a framework for corporate accountability in AI governance, outlining how businesses must conduct human rights due diligence when developing and deploying AI technologies. These principles highlight the importance of assessing and mitigating AI-related risks, ensuring that private-sector actors take responsibility for preventing human rights abuses in AI applications.

A rights-based approach to AI governance offers several compelling advantages. First and foremost, it embeds human rights safeguards into the design, development, and deployment of AI systems. This approach requires AI developers to implement mechanisms such as bias detection, fairness audits, and impact assessments to avoid discriminatory or unethical outcomes. Establishing clear lines of accountability ensures that both developers and deployers are held responsible for any adverse consequences stemming from AI use.

A rights-based regulatory model promotes equity, social justice, and inclusion by requiring transparent algorithms, participatory design, and inclusive policy development. This helps ensure that the benefits of AI are distributed fairly across all segments of society and that no group is disproportionately harmed or excluded. In doing so, this model aligns AI innovation with long-term development goals and ethical public policy.

Moreover, integrating human rights norms into AI governance strengthens public trust and facilitates cross-border cooperation. Widespread concerns about opaque and unaccountable AI systems have eroded public confidence and raised alarms about the impact of AI on democratic governance. A rights-based framework counters these concerns by promoting greater transparency, participatory policymaking, and effective oversight. It also provides a common normative foundation that can help harmonize regulatory approaches across jurisdictions. As AI systems are increasingly used in transnational contexts, aligning regulations with international human rights standards fosters interoperability and supports global cooperation on AI safety and ethics.

However, the implementation of rights-based AI regulation is not without significant challenges. One of the most pressing issues is the difficulty of translating broad human rights principles into concrete, enforceable regulatory mechanisms. Instruments such as the UDHR, the OECD AI Principles, and the Council of Europe’s Framework Convention on Artificial Intelligence offer normative guidance but often lack the technical specificity, operational tools, or enforcement mechanisms needed to guide real-world AI governance effectively.

Another challenge lies in balancing the need for regulation with the imperative to support innovation. While stringent regulatory safeguards are essential to protect against harm and uphold human rights, overly rigid rules may stifle technological advancement, deter investment, and hamper economic growth. Achieving the right balance requires a differentiated approach that imposes stricter obligations on high-risk AI systems while allowing more flexible oversight of low-risk applications. Constructive dialogue and collaboration between government, industry, and civil society are essential to designing governance frameworks that are both protective and enabling.

Finally, the fast pace of AI development complicates efforts to maintain regulatory relevance. As new applications and risks continue to emerge, existing legal and institutional frameworks may struggle to keep up. To address this, policymakers could consider adopting adaptive and responsive regulatory models. Mechanisms such as regulatory sandboxes, flexible legal instruments, and periodic policy reviews can help ensure that governance structures remain fit for purpose, capable of addressing novel risks while fostering responsible innovation.

How AI impacts human rights

AI has far-reaching consequences for human rights, both positive and negative. It can enhance rights such as life, health, and education by improving disaster prediction, advancing medical diagnostics, and personalizing learning. However, it also threatens rights to privacy, equality, and due process when deployed without adequate safeguards. The indivisibility and interdependence of human rights mean that AI’s impact on one right can have cascading effects on others. A privacy breach in biometric surveillance, for example, might violate data protection rights and limit access to employment or healthcare, exacerbating inequalities.

For practical recommendations on what a rights-respecting AI governance framework could look like, please refer to Annex 4.

4.3 Voluntary self-regulation and standards

Recent trends in AI governance indicate a growing reliance by governments on soft law mechanisms. Unlike formal regulation, which consists of legally binding rules supported by sanctions or incentives, soft law encompasses non-binding tools such as private standards, general policies, guidelines, principles, codes of conduct, and international dialogue platforms. Although these instruments lack legal enforceability, they often carry considerable moral or political authority.

Technical standards development organizations (SDOs) play a crucial role in advancing interoperability, making their work among the most extensive, tangible, and sophisticated efforts in this area. The involvement of governments in AI standards development has been a consistent focus, as standards serve as a technical and managerial framework for translating principles and policies into practice. They also facilitate harmonization across different legal systems. This is why AI standardization has been a key topic in EU-U.S. Trade and Technology Council discussions[113] and why the G7 Hiroshima Summit in 2023[114] reaffirmed support for “standards development organizations through multi-stakeholder processes.” The EU AI Act relies on standards to define conformity assessment criteria, emphasizing the adoption of international standards.[115] Moreover, the pan-European SDO, CEN-CENELEC, is developing additional standards at the request of the European Commission.[116] The Global Digital Compact (GDC) emphasizes the importance of enhanced international governance of AI and diverse AI-related standards. It encourages SDOs to collaborate in promoting the development and adoption of interoperable AI standards that prioritize safety, reliability, sustainability, and human rights.[117] The UN’s High-Level Advisory Body on AI has proposed a new UN-led entity for standards coordination.

ITU AI/ML standards

ITU has over 470 AI standards that are either published or in development.[118] For instance, ITU has created standards for applying AI in areas such as managing 5G and next-generation networks, advancing multimedia technologies, evaluating and enhancing the quality of digital services, and boosting energy efficiency.[119]

New ITU standards provide: an architectural framework to integrate ML into 5G and future networks (ITU-T Y.3172)[120]; an evaluation framework for intelligence levels across different parts of the network (ITU-T Y.3173)[121]; and a framework for data handling in support of ML (ITU-T Y.3174).[122] These standards originated in discussions by the ITU-T Focus Group on ‘Machine Learning for Future Networks including 5G’.[123]

ISO and ISO/IEC standards

To ensure responsible AI governance, international standards such as ISO/IEC 42001, ISO 23894, ISO/IEC TS 12791, ISO 5339, and ISO 5338 provide critical frameworks that guide policymakers in establishing regulatory best practices. This section aims to help policymakers understand these ISO standards and their implications for AI governance, focusing on risk management, bias mitigation, ethical oversight, and continuous monitoring. For a detailed graphic overview of these standards, please refer to Annex 1.

Ethical Oversight and Stakeholder Engagement in AI Governance: Ensuring AI alignment with societal values and ethical principles is crucial for its sustainable development. ISO 42001 highlights the importance of ethical alignment and stakeholder engagement in AI governance, while ISO 5339 provides guidance on societal impact assessments and ethical stakeholder considerations.[127]

Other relevant international guidelines include:

4.4 Innovative approaches to AI governance

Regulating AI requires a shift from rigid rulebooks to dynamic, adaptive governance models. Unlike traditional regulations, which can quickly become obsolete, AI governance must be iterative, risk-based, and collaborative. Several innovative approaches are emerging to help regulators strike the right balance between fostering innovation and mitigating risks.

One such approach is regulatory sandboxes, which function as controlled test environments where AI innovations can be piloted under regulatory supervision before full-scale deployment. Regulatory sandboxes in AI allow regulators to gain insights into the operations of AI firms, refine evidence-based policies, and prevent regulatory bottlenecks that stifle innovation by offering a safe space for AI experimentation.

Co-regulation is another innovative model of AI governance. It blends government oversight with industry-led compliance frameworks to ensure that regulations evolve alongside technological advancements. In Germany, the AI Quality & Testing Hub brings together public agencies and industry leaders to develop standard methodologies for testing AI in healthcare and autonomous driving, ensuring AI products meet innovation and safety standards. Australia has adopted a similar approach[132] with its AI Ethics Framework, which was co-created with tech firms and regulators to establish ethical guidelines for AI deployment while allowing companies to conduct self-assessments. This model encourages industry buy-in, accelerates AI compliance adoption, and ensures governance frameworks remain practical and enforceable.

A forward-looking concept is that of “dynamic regulation” or “living legislation,” which proposes building mechanisms for adaptation directly into laws. This would allow regulatory frameworks to be updated based on data-driven triggers, predefined thresholds (e.g., a model’s computational power), or mandatory periodic reviews, without requiring a full, lengthy legislative amendment process each time. Such an approach aims to create a regulatory system that can adapt more fluidly to the rapid pace of AI development.[133]

No single regulatory model is a silver bullet. AI governance requires a flexible, multi-layered approach that adapts to the level of risk, sector-specific needs, and jurisdictional differences. Regions can foster responsible AI innovation by leveraging regulatory sandboxes, co-regulation models, multi-stakeholder platforms, and open-source governance tools while safeguarding public trust and fundamental rights.

Regulatory sandboxes: controlled AI testing environments

Regulatory sandboxes offer controlled environments where AI developers and companies can test new technologies under regulatory oversight without immediately being subject to all legal constraints. This approach allows policymakers to assess potential risks and refine regulations while supporting innovation.

Benefits of Regulatory Sandboxes

Source: https://www.oecd.org/en/publications/regulatory-sandboxes-in-artificial-intelligence_8f80a0e6-en.html

While sandboxes promote innovation, challenges remain, such as regulatory fragmentation (where different jurisdictions have varying rules) and forum shopping, where companies seek the most lenient regulatory environments. Global regulatory interoperability efforts, such as the Global Financial Innovation Network (GFIN) AI Sandbox Initiative, are emerging to mitigate these issues.[138]

An AI regulatory framework can benefit from including a clear legal foundation that enables regulators to establish sandboxes, including joint efforts with relevant authorities such as those overseeing data protection, competition, media, consumer protection, health and pharmaceuticals, telecommunications, and finance. It is also important to consider how existing legal mandates or enforcement practices might influence the willingness of companies to participate. At the same time, to maintain public confidence, sandbox initiatives can incorporate safeguards to ensure that individuals remain protected from potential harms during periods of regulatory experimentation.

Co-regulation models: public-private collaboration in AI governance

AI governance is increasingly moving towards co-regulation, where governments and private sector entities jointly develop standards and regulatory frameworks. This approach leverages industry expertise while meeting regulatory objectives such as fairness, safety, and accountability. To address AI risks before deployment, public-private partnerships (PPPs) emphasize proactive risk mitigation strategies that ensure AI systems are safe, reliable, and ethically aligned before they reach real-world applications. A key approach is using AI sandbox environments, discussed above. Additionally, red-teaming exercises play a crucial role in AI risk assessment by subjecting models to adversarial testing, where simulated cyberattacks, misinformation campaigns, or harmful use cases help identify weaknesses and prevent exploitation by malicious actors.[139] Complementing these efforts, algorithmic impact assessments (AIAs) are increasingly mandated by regulatory frameworks such as the EU AI Act, ensuring that AI models do not generate biased, harmful, or unsafe outcomes. By embedding rigorous testing, continuous monitoring, and proactive risk assessments into AI governance frameworks, PPPs not only mitigate AI-related risks but also foster responsible AI development, ensuring that AI technologies are innovative and aligned with global safety standards.

Section 5: AI Ethics

5.1 Core ethical principles in AI governance

In 2021, UNESCO introduced its Recommendation on the Ethics of Artificial Intelligence,[145] emphasizing the integration of ethical considerations throughout the entire AI life cycle. This framework is particularly valuable for addressing bias in AI systems at various stages of development. The recommendation underscores the importance of fairness and non-discrimination in AI products and urges organizations to “ensure inclusive access to and participation in the development of AI”. Adhering to this principle fosters ethical AI development by promoting stakeholder engagement and participatory design, which incorporates diverse perspectives from the outset.

Similarly, the OECD Principles on Artificial Intelligence (2019, updated in 2024[146]) provide a widely accepted ethical framework for AI governance. These principles align with best practices in ethical AI, focusing on: human rights (respect for human rights, including freedom, dignity, and privacy); transparency (openness and responsible disclosure about AI systems); safety and security (robust mechanisms to prevent harm from AI systems); accountability (taking responsibility for the outcomes of AI systems); inclusive growth (promoting equitable access to AI benefits); and sustainable development (considering the environmental impact of AI).

The Table below highlights the similarities and distinctions between the two frameworks.

A study documented 200 different AI ethics principles and frameworks established by various organizations by 2023. A common theme across these frameworks was that AI should be “trustworthy,” meaning it should be reliable, safe, and centered around human needs. Rather than replacing or harming humans, AI should enhance their experiences.[147]

Fairness

AI fairness refers to the principle that automated systems should produce decisions that are impartial and equitable, avoiding discrimination based on characteristics such as gender, race, or socioeconomic status. To ensure fairness, it is important to use diverse and representative datasets, apply fairness-aware design methods, and implement continuous monitoring throughout the AI lifecycle to identify and mitigate potential biases. Individuals in comparable circumstances should be treated equally, except where there is a legitimate and objective justification for differential treatment.

Fairness primarily addresses the risk of AI systems discriminating against certain groups, whether based on factors such as ethnicity, gender, age, religion, or other sensitive characteristics that require protection against discrimination. A key aspect of fairness is assessing the likely impact of discrimination from the perspective of the affected. This is especially critical for AI systems that influence individual outcomes, such as AI-driven lending decisions, job applicant selection, or medical treatment recommendations. Discriminatory AI decisions can lead to violations of personal rights, financial harm, or reputational damage.

The challenge lies in defining fairness in an objective, measurable, and scalable way. Below, concrete methods for quantifying fairness are described.[148] These measures help make AI systems fairer and prevent discrimination, ensuring that all people are treated equally based on their qualifications rather than their background.

The Figure below shows one way of tracking and measuring AI fairness:

Source: https://bias-and-fairness-in-ai-systems.de/en/basics/
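As one illustration of how fairness can be quantified, the short sketch below computes two common group-fairness measures, the demographic parity difference and the disparate impact ratio, over a hypothetical set of decisions. The data, group labels, and the four-fifths review threshold are illustrative assumptions, not regulatory requirements.

```python
# Illustrative computation of two group-fairness metrics on hypothetical
# hiring decisions. Each record is (protected_group, positive_outcome).

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rate(group: str) -> float:
    outcomes = [positive for g, positive in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = selection_rate("group_a")   # 0.75
rate_b = selection_rate("group_b")   # 0.25

# Demographic parity difference: 0 means both groups are selected at the same rate.
parity_difference = rate_a - rate_b

# Disparate impact ratio: the "four-fifths rule" flags ratios below 0.8 for review.
disparate_impact = rate_b / rate_a

print(f"Selection rates: A={rate_a:.2f}, B={rate_b:.2f}")
print(f"Demographic parity difference: {parity_difference:.2f}")
print(f"Disparate impact ratio: {disparate_impact:.2f} "
      f"({'review needed' if disparate_impact < 0.8 else 'within threshold'})")
```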

Regulators can play a crucial role in ensuring fairness by requiring AI audits and bias impact assessments before deploying AI in sensitive areas such as hiring, lending, law enforcement, and healthcare.

Accountability

AI accountability ensures that developers, operators, and deployers take responsibility for the outcomes and implications of AI systems. It necessitates clear governance structures, regulatory compliance, and oversight mechanisms to prevent harm and promote transparency. One of the biggest challenges in AI accountability is the opacity of AI systems, often referred to as “black boxes” (see Section 2).

Lack of accountability can lead to human rights violations. Determining liability when AI systems cause harm is another thorny issue for policymakers. Unlike traditional decision-making processes, AI-driven decisions may involve self-learning algorithms (where human control may be several steps removed), biased data or biased algorithms, and multiple stakeholders (including developers, data providers, and end users – see Figure below). This makes assigning responsibility a particularly difficult legal issue.

Actors in AI Accountability

Source: OECD (2023) Governing and managing risks throughout the lifecycle for trustworthy AI, https://www.oecd.org/en/publications/advancing-accountability-in-ai_2448f04b-en.html

To address these concerns, organizations and regulators might consider requiring AI system developers and deployers to implement robust procedures that promote transparency and accountability. Key mechanisms include:

Algorithmic & Human Rights Impact Assessments: Frameworks like HUDERIA (Human Rights, Democracy, and Rule of Law Impact Assessment for AI), developed by the Council of Europe, evaluate AI’s societal impact, ensuring that systems align with fundamental rights. Please refer to Annex 3 for a detailed description of these frameworks. Regulators play a crucial role by mandating accountability measures such as bias audits, transparency reports, and impact assessments. For example, before deploying AI in credit scoring or hiring, organizations should be required to demonstrate that their systems are free from discrimination. Similarly, AI used in law enforcement must undergo rigorous scrutiny to prevent biased policing.[150]

Promoting algorithmic accountability is essential to mitigating AI-related risks and ensuring that AI aligns with human rights and the rule of law. By enforcing transparency, liability frameworks, and ethical oversight, regulators can help build trust in AI and prevent automated systems from exacerbating existing inequalities.

Transparency

Transparency is crucial for ensuring AI decisions are understandable and can be scrutinized. It involves documentation, explainability, and open communication regarding AI models’ limitations and decision-making processes.

Key Aspects:

Examples abound of AI algorithms that have lacked transparency or produced biased outcomes, as discussed in Section 2.

AI developers should implement explainability-by-design frameworks that provide users with clear justifications for AI-driven decisions. This enables users to understand and contest the outcomes of the AI system. AI governance mechanisms should entail rigorous bias audits, diverse training datasets, and regulatory oversight to ensure fairness and accountability in AI systems. Organizations should also adopt human-in-the-loop mechanisms to review AI-driven decisions, especially in high-stakes areas such as finance, healthcare, hiring, and criminal justice.
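A minimal sketch of explainability by design, assuming an interpretable linear scoring model is acceptable for the use case, is shown below: every automated decision is returned together with the per-feature contributions that produced it and a channel for contesting it. The feature names, weights, and appeal contact are hypothetical.

```python
# Sketch of an "explanation by design" pattern: every automated decision is
# returned together with the per-feature contributions that produced it, so a
# user or human reviewer can inspect and contest the outcome. The model and
# weights here are illustrative, not a real credit-scoring system.

WEIGHTS = {"income": 0.4, "existing_debt": -0.6, "years_employed": 0.3}
BIAS = -0.2
THRESHOLD = 0.0

def decide_with_explanation(applicant: dict) -> dict:
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    return {
        "approved": score >= THRESHOLD,
        "score": round(score, 3),
        # Ranked reasons, most influential first, for the human-readable notice.
        "reasons": sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True),
        "appeal_contact": "review-board@example.org",  # hypothetical redress channel
    }

print(decide_with_explanation({"income": 1.2, "existing_debt": 0.9, "years_employed": 0.5}))
```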

Developing legal frameworks and standards incorporating algorithmic transparency can enhance government accountability and promote transparency and fairness. Such measures would safeguard citizens’ rights to oversee and challenge algorithmic decisions. The OECD AI Principles[151] serve as a strong example, particularly the principle of “Transparency and Explainability,”[152] which emphasizes that AI actors should enable individuals affected by AI systems to understand their outcomes and challenge adverse decisions based on clear, accessible explanations of the underlying factors and logic.

Algorithmic Transparency Initiatives: Global Efforts to Regulate AI and Automated Decision-Making

Algorithmic transparency has become a critical aspect of AI governance as governments worldwide grapple with the increasing integration of AI and automated decision-making systems in public administration. Many countries and cities have taken significant steps to enhance public oversight and accountability in algorithmic decision-making, particularly in high-stakes areas like law enforcement, public services, and digital governance. Below are some key milestones in algorithmic transparency initiatives across different regions.

Several governments have begun implementing ethical guidelines for algorithmic use. European cities such as Amsterdam,[153] Helsinki,[154] and Nantes have gone a step further by introducing public algorithm registers. These registers, launched from 2020 onwards, provide detailed documentation of the AI systems and algorithms used by city administrations. By making this information publicly accessible, these cities aim to build trust with residents, allowing them to understand how automated decisions impact their daily lives and ensuring that government use of AI remains transparent and accountable.

In 2021, the United Kingdom made a landmark move by publishing national algorithmic transparency guidelines for public sector organizations. These guidelines set out best practices for documenting and explaining how algorithms are used in decision-making processes within government bodies. The UK’s approach has been hailed as a leading model for institutionalizing transparency in AI governance, providing a structured framework that encourages accountability while promoting innovation in the public sector.[155]

During the period 2021-2022, France, the Netherlands, and New Zealand also joined the movement by developing guidance on the responsible use of algorithms in public sector decision-making. These guidelines focus on ethical AI deployment, ensuring that machine learning models and automated systems uphold fundamental human rights, minimize bias, and maintain clear lines of accountability. The initiatives in these countries underscore a growing global consensus on the need for formal oversight mechanisms to prevent potential harms associated with algorithmic decision-making.[156]

In Latin America, Chile emerged as a pioneer in 2022 by drafting and proposing a regulation aimed at making it the first country in the region to adopt algorithmic transparency standards. This proposed law seeks to establish clear rules for AI deployment in government services, ensuring that automated decision-making is explainable, fair, and accountable to citizens. By taking this step, Chile is positioning itself at the forefront of AI governance in Latin America, signaling a broader push toward greater digital rights protections in the region.[157]

To effectively scrutinize these systems, citizens should have access to clear, easily digestible information. Public education efforts, such as informational videos and community engagement campaigns, can encourage broader participation. Strengthening civil servants’ understanding of digital rights, data ethics, and algorithmic accountability is also crucial, helping establish internal oversight mechanisms within governments.[158]

Explanation, Appeal, and Redress Mechanisms in Automated Decision-Making: To operationalize accountability and transparency in AI systems, governments and deployers should consider establishing mechanisms that allow individuals to understand, contest, and seek redress for decisions made by AI. These mechanisms are especially critical in high-risk domains, such as social welfare, criminal justice, healthcare, and credit scoring, where algorithmic decisions can have a significant impact on lives. Explanation mechanisms should enable users to access clear, context-appropriate rationales behind decisions. Appeal processes must provide accessible channels for individuals to challenge outcomes, particularly when decisions are fully or semi-automated. Redress mechanisms should ensure timely and fair remedies when harm has occurred, including opportunities for human review, correction, or reversal of decisions. For instance, under Article 22 of the EU GDPR, individuals have the right not to be subject to decisions based solely on automated processing that significantly affects them, and may demand human intervention.[159] Brazil’s General Data Protection Law (LGPD) similarly guarantees the right to an explanation and review of automated decisions.[160] To be effective, such mechanisms must be embedded early in the design phase and supported by legal and institutional structures that empower both individuals and regulators.
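The sketch below illustrates one way such mechanisms could be embedded at the design stage: each automated decision is logged with its explanation, an appeal routes the case to a human reviewer, and the human determination supersedes the automated outcome. The record fields and workflow are illustrative assumptions, not requirements drawn from the GDPR or the LGPD.

```python
# Illustrative design pattern for explanation, appeal, and redress in an
# automated decision system: decisions are logged, appeals trigger human
# review, and the human determination supersedes the automated outcome.

from dataclasses import dataclass
from typing import Optional

@dataclass
class DecisionRecord:
    case_id: str
    automated_outcome: str
    explanation: str                      # context-appropriate rationale shown to the individual
    appealed: bool = False
    human_outcome: Optional[str] = None   # filled in after human review
    reviewer: Optional[str] = None

    def final_outcome(self) -> str:
        return self.human_outcome if self.human_outcome is not None else self.automated_outcome

registry: dict[str, DecisionRecord] = {}

def record_decision(case_id: str, outcome: str, explanation: str) -> DecisionRecord:
    registry[case_id] = DecisionRecord(case_id, outcome, explanation)
    return registry[case_id]

def appeal(case_id: str) -> None:
    registry[case_id].appealed = True     # routes the case into a human-review queue

def resolve_appeal(case_id: str, reviewer: str, outcome: str) -> None:
    rec = registry[case_id]
    rec.reviewer, rec.human_outcome = reviewer, outcome

record_decision("2025-0042", "benefit denied", "Income above eligibility threshold.")
appeal("2025-0042")
resolve_appeal("2025-0042", reviewer="caseworker-17", outcome="benefit granted")
print(registry["2025-0042"].final_outcome())   # "benefit granted"
```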

As governments increasingly integrate technology and automation into decision-making, civil society needs to maintain oversight of their design and deployment. While algorithmic decision-making in the public sector offers benefits, such as expediting processes, analyzing vast datasets, and supporting evidence-based decisions, it also presents risks. Ensuring transparency and accountability in these systems is key to mitigating potential harms and maintaining public trust.

Due to the opacity of the models themselves, AI developers must be transparent about how their systems or models are trained, developed, and tested to ensure accountability and address the outputs and impacts of these systems effectively. This disclosure should include, at a minimum:

Transparency requirements can impose costs on AI developers and implementers. However, these costs should be viewed as an integral part of the AI system’s development and deployment. To help alleviate this burden, regulations should provide guidance on how companies of all sizes, including small businesses, can proportionately implement transparency requirements.

Inclusivity

Inclusivity in AI development ensures that systems are designed with diverse stakeholders in mind, particularly underrepresented and marginalized groups. Achieving this requires proactive engagement, equitable representation in AI development, and inclusive design practices that address the needs of all users.

Access to AI Infrastructure

At its core, fostering an inclusive AI ecosystem requires equitable access to the fundamental technical infrastructure (computing power, data storage, and networking) necessary for AI system development. This access must be available not only for training new AI practitioners but also for enabling citizens who cannot afford cloud computing services to engage in AI innovation. Ensuring broader access can drive inclusivity, with government investments in national and regional computing and data management capabilities—whether through public technology facilities or subsidized access to commercial cloud resources—empowering more individuals to participate in AI-related fields, both technical and non-technical, and ultimately contributing to AI-driven job creation.

At a foundational level, promoting access and equity involves prioritizing technology as a public or shared resource. Public sector investments should focus on expanding access to basic computing resources for the general population rather than established AI developers, such as large ICT corporations or academic institutions with extensive AI expertise. Governments and civil society organizations can be crucial in convening diverse stakeholders through global networks to exchange best practices on regulation, redress mechanisms, and resource-sharing models. These efforts should include establishing funding initiatives and regional collaborations to enhance data collection infrastructure.[161]

Africa’s Deep Learning Indaba Compute Program provides free cloud computing access to AI researchers across the continent, reducing barriers for universities and independent AI developers.

The EU’s GAIA-X Cloud Project is a European cloud infrastructure project that provides a federated data-sharing ecosystem, enabling businesses and researchers across the EU to access AI cloud resources without dependency on U.S.-based cloud providers like AWS or Google Cloud.

Source: https://deeplearningindaba.com/about/our-mission; https://gaia-x.eu/

Additionally, resource allocation strategies should incentivize the development of AI applications that contribute positively to society and the environment. By adopting a comprehensive governance approach, as outlined in the following sections, the public sector can enforce legal safeguards, establish mechanisms for redress, and ensure that AI deployment aligns with ethical and sustainable principles.

Stakeholder Engagement

Stakeholder engagement is one key aspect of AI inclusivity. Policymakers and AI developers should actively involve diverse communities, such as civil society groups and regulatory bodies, in decision-making processes. This participation ensures that AI technologies reflect a broad spectrum of perspectives and needs, reducing the risk of exclusionary biases. Mozilla[162], for instance, has been a strong advocate for ethical AI development, promoting open-source initiatives and community-driven governance to ensure AI serves the interests of diverse populations.

Accessible Design

Another critical strategy is accessible design. AI interfaces should adhere to Web Content Accessibility Guidelines (WCAG) to support users with disabilities, ensuring usability for individuals with visual, auditory, and motor impairments. Developers must also prioritize compatibility with assistive technologies such as screen readers and speech recognition software. For example, Microsoft has integrated accessibility features into its AI-driven tools, such as Seeing AI[163], an application designed to assist visually impaired individuals by narrating their surroundings in real time.

Linguistic representation plays a vital role in making AI systems more inclusive. By incorporating regional languages, dialects, and cultural norms, developers can improve user accessibility and engagement across different linguistic and cultural backgrounds. OpenAI, for instance, has been expanding the linguistic capabilities of its language models to support a broader range of languages and dialects. Similarly, Mozilla’s Common Voice project is building an open-source dataset of diverse speech samples to train AI models in underrepresented languages, ensuring that voice recognition technologies are accessible to non-English speakers.

Real-world examples illustrate the importance of inclusivity in AI. Google’s speech recognition system initially struggled to recognize African-American Vernacular English (AAVE), leading to lower accuracy for Black users. By diversifying its training data, Google significantly improved the system’s inclusivity and reduced bias. Similarly, Meta has worked on improving translation models for low-resource languages, ensuring that AI-driven communication tools serve speakers of indigenous and less widely spoken languages.[164]

Data invisibility and data justice as complementary to inclusivity in AI development and deployment

Initiatives aimed at achieving data justice should complement any algorithmic inclusivity efforts. A key challenge related to AI ethics and inclusivity initiatives is the exclusion of “data invisible groups.” Those without digital access or who lack digital skills may not be included in assessments of populations and their needs.

“Data invisibility” results from the digital divide across many countries of the Global South and is likely to impact traditionally marginalized communities such as women, tribal communities, castes, religious and linguistic minorities, and migrant workers. In an increasingly digital world, data invisibility also means limited voice and reinforces restrictions on effective participation in social, economic, and political spheres. An overreliance on “automatic” data collection methods can exclude highly vulnerable groups and undermine trust in digital tools. Such exclusions may exacerbate biases that limit the effectiveness and validity of AI algorithms trained on easily accessible data, further reinforcing the need for greater transparency in data usage.

Governments and international development partners should promote inclusiveness and reduce digital inequalities by ensuring that data do not overrepresent those who are connected and give voice to data-invisible populations.

Source: Stankovich et al (2023) https://www.dai.com/uploads/ai-in-public-service.pdf

Human oversight and determination

Human oversight ensures that AI systems align with ethical principles, legal standards, and societal values. While AI can enhance decision-making by processing vast amounts of data efficiently, human judgment remains essential in interpreting AI outputs, addressing biases, and making final determinations in critical areas such as law enforcement, healthcare, and financial services. Because many AI systems are black boxes and prone to bias, stakeholders are increasingly confronting questions about the extent to which humans can or should depend on AI. Should humans supervise or approve certain AI-recommended outputs and decisions before implementation? Who is accountable when AI-based technologies fail or are hacked? Disputes will arise over the inability of parties to fully comprehend or manage certain AI-powered operations, as well as over what is fair in automated decision-making.

One key principle of human oversight is the human-in-the-loop (HITL) approach, where AI-generated recommendations or decisions require human validation before implementation. This approach is crucial in high-stakes scenarios, such as judicial sentencing algorithms, autonomous weapons, and medical diagnostics, where unchecked AI decisions could lead to severe consequences.

Another model, the human-on-the-loop (HOTL) approach, allows AI to operate autonomously under human supervision. This is commonly used in automated fraud detection and predictive maintenance systems, where human oversight is necessary to intervene when anomalies are detected. In cases where AI systems function without continuous human intervention, ensuring that override mechanisms exist is critical to maintaining accountability and ethical integrity. A practical example is GPS navigation: the system calculates the optimal route based on preset conditions, but the driver retains the ability to override or modify the suggested route in response to real-world factors like traffic congestion.
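The sketch below contrasts the two oversight modes in code: in human-in-the-loop mode a recommendation takes effect only after explicit human approval, while in human-on-the-loop mode it executes automatically but alerts a supervisor who can intervene. The risk scores, threshold, and callback names are illustrative assumptions.

```python
# Illustrative oversight gate contrasting human-in-the-loop (HITL), where an
# AI recommendation needs explicit approval before it takes effect, with
# human-on-the-loop (HOTL), where it executes automatically but a supervisor
# is alerted and can intervene. Threshold and approval logic are placeholders.

from typing import Callable

def hitl_execute(recommendation: str, risk: float,
                 ask_human: Callable[[str], bool]) -> str:
    """High-stakes mode: nothing happens without human sign-off."""
    if ask_human(f"Approve '{recommendation}' (risk={risk:.2f})?"):
        return f"executed: {recommendation}"
    return "rejected by human reviewer"

def hotl_execute(recommendation: str, risk: float,
                 alert_threshold: float = 0.7,
                 notify_supervisor: Callable[[str], None] = print) -> str:
    """Supervised-autonomy mode: executes automatically, alerts a human
    when the estimated risk crosses the threshold so they can override."""
    if risk >= alert_threshold:
        notify_supervisor(f"ALERT: '{recommendation}' executed at risk {risk:.2f}; "
                          "review and override if needed")
    return f"executed: {recommendation}"

# Usage with a stand-in approver that always rejects the recommendation.
approver = lambda prompt: False
print(hitl_execute("deny loan application", risk=0.8, ask_human=approver))
print(hotl_execute("flag transaction as fraud", risk=0.9))
```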

Human-out-of-the-loop (HOOTL) describes AI systems that operate autonomously, with no human intervention in decision-making. An example is an AI-powered cybersecurity system that can detect and fix vulnerabilities without human involvement. Mayhem, the winning system in the Defense Advanced Research Projects Agency (DARPA) Cyber Grand Challenge, constantly scans for new vulnerabilities that hackers could exploit. When Mayhem detects a new bug, it automatically generates code to protect the software from that vulnerability. The system applies prescriptive analytics, detecting and responding to threats without any human intervention, in contrast to traditional intrusion detection systems that rely on human input to anticipate cyber-attacks.[165]

Human determination plays a fundamental role in AI governance, particularly in safeguarding rights and ensuring due process. The EU’s GDPR enshrines the “right to explanation” and the “right to human intervention,” ensuring that individuals affected by automated decisions can contest them and seek human review.

To strengthen human oversight, organizations and regulators should consider implementing:

Maintaining a “human in the loop” ensures efficiency and safety in AI-driven applications. The quality of AI depends on the underlying data, human expertise, and the interdisciplinary teams involved in its development. A well-defined AI and data governance framework should establish clear liability among stakeholders, providing necessary safeguards to protect human rights while advancing collective interests. One approach is public certification of AI systems to guarantee data and algorithm quality, preventing the reinforcement of existing inequalities. Certification can also foster public trust and enable users to provide informed consent.

Despite the importance of HITL, its effectiveness may be limited by automation bias, where humans tend to accept AI-generated decisions uncritically, particularly when dealing with complex “black box” models whose rationale is difficult to interpret. It is therefore important to measure the level of risk and impact of different AI systems and to calibrate the requirement for human oversight accordingly, based on the use case, its sensitivity, the algorithm’s complexity and opacity, and the impact on human rights.

Sustainability in AI governance

Sustainability is a critical yet often overlooked aspect of AI governance, encompassing environmental, social, and economic dimensions. The UNESCO Recommendation on the Ethics of Artificial Intelligence (2021) explicitly highlights sustainability as a guiding principle, emphasizing the need for AI to minimize environmental harm and contribute to global sustainability efforts. Similarly, the OECD AI Principles (2019) call for AI to drive inclusive and sustainable growth, ensuring its deployment aligns with long-term ecological and economic stability. However, as AI systems become increasingly integrated into industries, energy consumption, resource depletion, and socioeconomic inequalities present growing challenges that should be addressed through responsible governance.

Environmental Sustainability and AI’s Carbon Footprint

One of the most pressing sustainability challenges in AI governance is energy consumption and carbon emissions. Training advanced AI models, particularly deep learning systems, demands immense computational power. Global data center electricity consumption reached 415 TWh in 2024, representing 1.5% of total global electricity use. The United States led with 45% of this consumption, followed by China (25%) and Europe (15%). AI-focused data centers, known for their high energy intensity, are expanding rapidly and are as energy-hungry as aluminum smelters, though they are more geographically concentrated. Since 2017, data center electricity use has grown at 12% annually, four times faster than overall electricity demand, raising significant sustainability concerns.[166]

The surge in AI deployment is a major driver of this trend. By 2030, data center electricity consumption is projected to more than double to 945 TWh, exceeding Japan’s current electricity usage. By 2035, demand could reach between 700 and 1,700 TWh, depending on growth and efficiency improvements.[167]

AI chip manufacturing has also seen a dramatic rise in electricity use. This spike, primarily powered by fossil fuels, has led to a corresponding rise in carbon emissions. Projections suggest that by 2030, electricity demand from AI chip production could be 170 times higher than in 2023, surpassing the electricity consumption of Ireland.[168]

Key solutions to mitigate AI’s environmental impact include developing and adopting low-energy AI models; building green data centers (though regulatory incentives are needed to ensure industry-wide adoption); and using federated learning and edge computing, which reduce energy waste and network congestion by processing data locally instead of transmitting it to energy-intensive cloud servers. The industry must also adopt transparent, standardized reporting for emissions, including both direct and supply chain impacts. Importantly, AI itself may play a role in reducing energy demand through optimization and smarter energy systems.[169]
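As a conceptual illustration of why federated learning reduces data movement, the toy sketch below performs a single round of federated averaging: each site computes an update on its own data and shares only that update and its sample count with the coordinator, so raw records never leave the site. It is a one-parameter example, not a production federated learning stack.

```python
# Toy illustration of one round of federated averaging: each site fits a
# single-parameter model (a mean) locally and shares only its update and
# sample count, so raw data never leaves the device or site.

local_datasets = {
    "clinic_a": [4.0, 5.0, 6.0],
    "clinic_b": [10.0, 12.0],
    "clinic_c": [7.0, 7.0, 8.0, 8.0],
}

def local_update(data: list[float]) -> tuple[float, int]:
    """Computed on-site: returns the local parameter estimate and sample count."""
    return sum(data) / len(data), len(data)

def federated_average(updates: list[tuple[float, int]]) -> float:
    """Coordinator aggregates updates weighted by local sample counts."""
    total = sum(n for _, n in updates)
    return sum(value * n for value, n in updates) / total

updates = [local_update(d) for d in local_datasets.values()]
global_model = federated_average(updates)
print(f"Global estimate after one round: {global_model:.2f}")
```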

Despite these efforts, AI-driven environmental risks extend beyond energy consumption. For example, the surge in demand for AI hardware is accelerating the extraction of critical minerals (e.g., lithium, cobalt, and nickel) used in semiconductors and GPUs, with consequences for some local communities. AI governance frameworks should integrate sustainable supply chain standards to mitigate these unintended consequences.[170]

AI and Economic Sustainability: The Automation Paradox

AI-driven automation is reshaping industries by boosting productivity and reducing costs, but it also poses risks of job displacement and widening economic inequality. The International Labour Organization warns that up to 14% of global jobs could be fully automated in the next two decades, disproportionately affecting low-wage and blue-collar workers.[171]

Governments and policymakers will need to address these challenges through:

AI’s impact on economic sustainability also extends to financial markets. Algorithmic trading, driven by AI, increases market volatility, raising concerns about automated financial crashes. Regulatory bodies such as the U.S. Securities and Exchange Commission (SEC) and the European Central Bank (ECB) are beginning to examine AI’s role in financial instability. However, comprehensive AI risk governance is still lacking.

AI and Social Sustainability: The Digital Divide

The global digital divide is another sustainability challenge in AI governance. AI benefits are disproportionately concentrated in high-income nations, while developing economies struggle with limited AI infrastructure, digital literacy, and regulatory capacity. The ITU estimates that 2.6 billion people worldwide still lack internet access, effectively excluding them from AI-driven economies and opportunities.[172]

To promote social sustainability, AI governance frameworks should focus on:

Integrating Sustainability into AI Governance Frameworks

AI governance frameworks should explicitly incorporate sustainability measures. Key recommendations include:

  1. Mandating Environmental Impact Assessments (EIAs) for AI systems: Similar to industrial sectors, AI should be subject to carbon footprint assessments, particularly for large-scale model training and data center operations.
  2. Regulating AI-driven resource extraction: Governments should enforce supply chain transparency laws for AI hardware manufacturing, ensuring ethical and sustainable sourcing of minerals.
  3. Embedding sustainability metrics into AI policy frameworks: For instance, the European Green Digital Coalition (EGDC) is working on integrating AI sustainability indicators into EU digital policies—a model that other regions could adopt.
  4. Promoting sustainable AI business models: Incentives such as green tax credits should be offered to AI companies that prioritize energy-efficient and socially responsible AI applications.

See also: ITU (2025) Greening Digital Companies 2025: Monitoring emissions and climate commitments, https://www.itu.int/en/ITU-D/Environment/Documents/Publications/2025/ITU-WBA-Greening-Digital-Companies-2025.pdf

Section 6: Multi-stakeholder ecosystems and collaborative AI governance

AI governance cannot function in isolation. It requires deliberate coordination across diverse institutions and communities, each with different responsibilities, incentives, and knowledge. This section brings together an overview of key stakeholder roles with approaches to collaborative policymaking, oversight, and system design. From formal rule-setting to co-creation with digital players and citizens, effective AI governance is shaped through inclusive, iterative processes.

Policymakers and regulators play a crucial role in establishing rules, fostering innovation, and enforcing protections. The judiciary and ombudsman institutions are essential for safeguarding rights, resolving conflicts, and providing redress. The private sector should ensure ethical design and compliance of the AI technologies it designs and deploys. At the same time, civil society and academia serve as watchdogs and knowledge partners in the AI lifecycle. Citizens and affected communities lend legitimacy and context through direct engagement, particularly when systems affect public services, welfare, and everyday life.

Collaborative AI governance can take many forms, from regulatory sandboxes and co-regulation frameworks to citizen assemblies and open consultations. This section examines how participatory AI governance platforms strengthen trust, transparency, and system resilience.

6.1 Stakeholder priorities

AI governance necessitates striking a balance between the interests of governments, private sector entities, civil society, and academia. Government agencies primarily focus on risk mitigation, public safety, and accountability, aiming to prevent AI-driven discrimination. In contrast, tech companies prioritize innovation, profitability, and efficiency, often resisting stringent regulations that could stifle AI advancements. A clear example of this tension is the EU AI Act, which imposes transparency requirements that tech companies argue may hinder AI product development and innovation.

Beyond the public-private divide, AI governance challenges differ between developed and developing countries. One such challenge is enforcement. Developed nations have the resources to enforce AI regulations and establish oversight institutions. Many developing nations, on the other hand, lack comprehensive enforcement frameworks, and their regulators often lack the technical capacity to monitor AI-related risks effectively. For instance, AI governance in sub-Saharan Africa remains in its early stages, and many countries in Latin America and Africa lack dedicated AI safety institutions or face enforcement challenges due to resource constraints. These disparities highlight the need for tailored AI governance approaches that account for regional capacities and policy priorities.[173]

6.2 Sectoral governance considerations: the need for collaboration

Effective AI governance recognizes that a one-size-fits-all approach is unlikely to address the complexity and diversity of different sectors. Each area, for instance healthcare, finance, education, agriculture, telecommunications, and public administration, operates within its own regulatory, ethical, and operational environment. To develop responsible and adaptive frameworks, regulators might consider tailoring oversight to each sector's context while also establishing collaborative mechanisms that support coordination among regulatory bodies, industry actors, and affected communities.

Across all sectors, multi-stakeholder engagement is key to building AI governance frameworks that are effective, equitable, and sustainable. By fostering inclusive dialogue and shared responsibility, regulators can help ensure that AI technologies benefit society while upholding core public interest principles.

Multi-stakeholder platforms: inclusive AI policy development

AI governance cannot be shaped by governments alone. Multi-stakeholder platforms bring together governments, industry leaders, academia, and civil society to co-develop AI policies, ethical guidelines, and regulatory strategies. Several countries have actively incorporated stakeholder engagement into developing their AI regulatory frameworks, using consultations and participatory processes to ensure inclusive and effective policymaking. Australia initiated public discussions through consultation papers on AI ethics, while Brazil established an internal AI commission (CTIA) and held expert hearings to inform legislative proposals. In Canada, stakeholder input followed the introduction of Bill C-27, with the government engaging industry, academia, and civil society through broader consultations and voluntary guideline development. The EU institutionalized this engagement through its Advisory Forum, which brings together voices from industry, academia, and standardization bodies. Israel adopted a similarly inclusive model by consulting various governmental and industry actors. Meanwhile, the United Kingdom pursued an iterative feedback approach for its AI white paper, adjusting its proposed regulatory framework in response to public and expert input.[174] Multi-stakeholder platforms also operate at the global level.

Open-source AI governance tools

To increase trust in AI systems, open-source governance tools enable transparent, community-driven evaluation of AI applications. These tools allow independent audits, testing, and assessment of AI models, ensuring compliance with ethical and legal standards.

By leveraging open-source methodologies, these tools promote collaborative AI governance, increase regulatory transparency, and enable independent oversight.[183]
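As a concrete illustration of the kind of community-driven evaluation such tools enable, the sketch below uses the open-source Fairlearn library (one of the fairness toolkits referenced in this guide's notes) to disaggregate a model's decisions by a protected attribute and compute a headline fairness indicator. The dataset, column names, and groups are hypothetical placeholders, not a prescribed audit methodology.

```python
# A minimal, illustrative bias-audit sketch using the open-source Fairlearn toolkit.
# All data, column names, and groups below are hypothetical placeholders.
import pandas as pd
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate, demographic_parity_difference

# Hypothetical audit inputs: ground truth, model decisions, and a protected attribute.
df = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 0, 1, 0],
    "y_pred": [1, 0, 0, 1, 0, 1, 1, 0],
    "group":  ["A", "A", "B", "B", "A", "B", "A", "B"],
})

# Disaggregate accuracy and selection rates by group.
audit = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=df["y_true"],
    y_pred=df["y_pred"],
    sensitive_features=df["group"],
)
print(audit.by_group)

# A single headline fairness indicator an independent auditor could track over time.
dpd = demographic_parity_difference(
    df["y_true"], df["y_pred"], sensitive_features=df["group"]
)
print(f"Demographic parity difference: {dpd:.2f}")  # 0 means equal selection rates across groups
```

In practice, an independent auditor would run checks of this kind against documented test data and publish the disaggregated results alongside the system's documentation.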

Section 7: Operationalizing AI governance at the national level

Effective AI governance requires more than high-level vision statements or aspirational plans. It should be grounded in operational realities, including clear institutional mandates, appropriate resource allocation, and long-term policy coordination across sectors. National strategies should be developed through inclusive, multi-stakeholder consultations and be firmly anchored in existing digital transformation, data governance, and human rights frameworks. Legal reforms may be necessary to establish adequate safeguards around data use, algorithmic transparency, liability, and oversight.

Institutional arrangements are essential for promoting coherence, accountability, and coordination. Some countries may opt to establish a dedicated AI Commission, while others might distribute AI-related functions among existing regulatory bodies, supported by a cross-sectoral interministerial committee or a national AI advisory council. These models should be tailored to the national context and administrative traditions, with clearly defined mandates and responsibilities to prevent overlap and regulatory fragmentation.

To ensure effective implementation and oversight, regulatory institutions should also build technical expertise, digital infrastructure, and forward-looking capabilities. This includes training staff to conduct AI system audits, evaluate algorithmic impacts, and apply international standards. Partnerships with global institutions such as the ITU, UNESCO, and the OECD can provide technical guidance, promote alignment with emerging global norms, and support local capacity-building efforts.

Operationalizing AI governance is not a one-time activity. It is a dynamic and iterative process that requires adaptive learning, inclusive evaluation, and sustained political commitment. Countries should start with foundational capacities, pilot new approaches, and scale up based on local learning and global best practices. The roadmap below provides a step-by-step guide for policymakers to help move from strategic planning to institutionalized implementation.

This section consolidates the principles, processes, and institutional elements explored throughout the guide into a practical roadmap. As countries seek to operationalize AI governance strategies, it is essential to reflect on the foundational considerations outlined in earlier sections, ranging from establishing a national vision, engaging stakeholders, and promoting inclusive governance, to managing risks, embedding rights-based values, and creating effective institutional frameworks.

The path toward national AI governance is not linear. It involves iterative decision-making, negotiation across diverse interest groups, and careful consideration of trade-offs. The structured roadmap below mirrors the stages covered in this guide, from setting a vision to creating sustainable, adaptive governance ecosystems. Each item in the checklist serves as a reminder of the interconnectedness of the elements discussed: political leadership, regulatory capacity, innovation incentives, human rights safeguards, risk-based approaches, and cross-sectoral alignment.

The roadmap provides a holistic overview, closing the loop on the governance journey by offering a diagnostic tool that countries can use to plan, sequence, and assess the design and implementation of their national AI governance frameworks.

From Vision to Implementation: A Practical Roadmap for National AI Governance

1. Foundational Readiness: Awareness and Visioning
Objective: Establish a national understanding of AI’s relevance and define a shared vision.
Activities:
✓ Organize high-level awareness-raising workshops for cabinet members, parliamentarians, and senior ministry officials to foster a common understanding of AI’s opportunities and risks.
✓ Launch inclusive public education campaigns in partnership with civil society, youth networks, and local governments to demystify AI and build public trust.

✓ Conduct a national SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis to identify the country’s unique position in the global AI landscape, paying special attention to data availability, infrastructure, and human capital.

✓ Define a national AI vision grounded in sustainable development, inclusive innovation, and digital sovereignty.
✓ Convene inter-ministerial dialogues to align AI vision with sectoral plans (e.g., health, finance, justice, education, public services).
✓ Identify and integrate cross-cutting priorities (e.g., gender equality, green economy, digital trade, digital public services).
✓ Commission foresight studies on AI’s long-term implications for domestic labor markets, social cohesion, and governance.

Outcomes: National AI vision adopted; foundational awareness among political and institutional leaders; clear understanding of national strengths and weaknesses.
Sections:

  • Section 1 (Understanding AI)
  • Section 4 (Approaches to AI Governance)
  • Section 6 (Multi-stakeholder Ecosystems and Collaborative AI Governance)
2. Building the Architecture: Stakeholder Mapping and Coordination
Objective: Identify institutional actors and establish mechanisms for collaboration and coordination.
Activities:
✓ Map relevant AI stakeholders across the public sector (e.g., digital ministries, telecom regulators, national statistical offices, data protection offices), private sector, academia, and civil society.
✓ Conduct mandate mapping to clarify overlaps and gaps among government entities (e.g., Ministry of Science vs. Ministry of Justice).
✓ Establish or reinforce a national AI coordination body (e.g., AI Task Force, National AI Steering Committee, Inter-Ministerial Task Force) with a clear mandate for strategy oversight and implementation.
✓ Create a multistakeholder advisory platform (e.g., AI Ethics Forum, Civil Society AI Roundtable) to ensure diverse perspectives inform policy.
✓ Institutionalize regular cross-sectoral coordination platforms to enable joint strategy and coherent implementation.
✓ Engage with international partners (e.g., the International Telecommunication Union (ITU), UNESCO, and UNDP) for technical support and alignment with global standards.
Outcomes: Stakeholder ecosystem clarified; inclusive and coordinated governance architecture in place.
Sections:

  • Section 4 (Approaches to AI Governance)
  • Section 5 (AI Ethics)
  • Section 6 (Multi-Stakeholder Ecosystems and Collaborative AI Governance)
3. Assessing the Landscape: Institutional Readiness and Legal Gap Assessment
Objective: Evaluate legal, institutional, and technical environments to identify governance gaps.
  • Activities:
    ✓ Conduct AI readiness assessments using international tools (e.g., Stanford AI Index Report, The Global Index on Responsible AI (GIRAI), AI Readiness Index – Oxford Insights) to benchmark national capabilities.
    ✓ Review existing national laws and frameworks that impact AI (e.g., data protection, e-governance, cybersecurity, consumer rights, competition, intellectual property (IP), privacy, and non-discrimination), using a systematic mapping approach such as the OECD’s framework mapping tool.
    ✓ Identify gaps in legal coverage of AI-specific risks (e.g., algorithmic bias, explainability, liability).
    ✓ Assess inter-agency capacities for AI oversight (staffing, skills, budget, ICT infrastructure).
    ✓ In collaboration with sectoral regulators, conduct high-risk sectoral case studies (e.g., facial recognition in law enforcement, predictive tools in welfare, AI in credit scoring) to understand context-specific challenges.
Outcomes: Baseline legal-institutional landscape mapped; regulatory and institutional priorities in AI identified.
Sections:

  • Section 2 (Addressing Bias, Opacity, and Risk in AI)
  • Section 3 (Securing AI – Cybersecurity and Data Governance)
  • Section 4 (Approaches to AI Governance)
4. Designing the Framework: Strategy and Legislation
Objective: Draft a national AI governance strategy and legal framework aligned with international norms and standards.
Activities:
✓ Co-develop a national strategy grounded in principles such as ethics, safety, accountability, and human rights. This strategy should include distinct pillars, such as Governance, Technology & R&D, Data, Infrastructure, Ecosystem, and Talent.
✓ Align policy frameworks with guidance from international organizations (e.g., ITU, UNESCO’s Recommendation on AI Ethics, OECD Principles on AI, and GPAI).
✓ Include dedicated sections on algorithmic transparency and accountability, cybersecurity, data governance, funding/innovation incentives, and provision of public services and access to justice.
✓ Embed innovative, agile, and co-regulatory instruments (e.g., impact assessments, sandboxes, certification schemes).
✓ Draft enabling or reinforcing legislation in collaboration with legal and technical experts.
✓ Ensure meaningful participation of ministries, agencies, parliamentarians, and civil society.

✓ Draft complementary legislation, if necessary, to enable or enforce the AI governance framework.

Outcomes: AI governance strategy drafted and validated; legal tools prepared for operationalization.
Sections:

  • Sections 3, 4 and 5 (for sectoral and normative integration)
  • Section 6 (Multi-stakeholder Ecosystems and Collaborative AI Governance)
5. Empowering Institutions: Institutional Design and Capacity Building
Objective: Strengthen or establish dedicated institutions and build public sector capacities for oversight.
Activities:
✓ Establish a lead AI regulator or assign oversight roles to existing bodies (e.g., telecom/ICT regulator, data protection authority, judiciary).
✓ Clearly define mandates, powers, and coordination roles across agencies.
✓ Set up sectoral oversight mechanisms (e.g., AI ethics review boards in health, education, finance).
✓ Deliver AI literacy and redress training for judges, ombudspersons, and frontline officials.
✓ Partner with ITU Academy, innovation labs, and universities for national capacity development in AI governance.
Outcomes: Oversight architecture defined; capacity-building mechanisms institutionalized.
Sections:

  • Sections 5 and 6 (Ethics and Oversight)
  • Section 7 (Operationalizing Governance)
6. Activating Governance: Policy Instruments and Enforcement Tools
Objective: Equip government agencies with tools to enforce safe, ethical, and rights-respecting AI use.
Activities:
✓ Develop AI procurement and deployment guidelines for line ministries.
✓ Require AI system deployers to conduct algorithmic and data protection impact assessments, making such assessments mandatory for high-risk systems.
✓ Institutionalize AI model documentation (e.g., datasheets, model cards, audit trails); a minimal sketch follows this checklist.
✓ Pilot regulatory sandboxes and co-regulation labs to safely test high-risk AI systems in a controlled environment with regulatory supervision.

✓ Explore “living legislation” mechanisms, where specific technical standards or risk classifications can be updated via secondary legislation based on monitoring data, without requiring a full parliamentary process.
✓ Introduce mechanisms for explanation, appeal, and redress in automated decision-making.
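To make the model documentation item above more tangible, the sketch below shows one possible machine-readable model card entry that could feed a public register of government AI deployments. All field names, values, and the URL are illustrative assumptions rather than a prescribed national schema.

```python
# A minimal sketch of machine-readable model documentation that could feed a
# public AI register. Every field name and value here is a hypothetical example.
import json
from datetime import date

model_card = {
    "system_name": "benefit-eligibility-screener",        # hypothetical system
    "deploying_agency": "Ministry of Social Affairs",      # hypothetical deployer
    "purpose": "Prioritise manual review of benefit applications",
    "risk_classification": "high",                          # per the national framework
    "training_data_summary": "2019-2023 anonymised application records",
    "known_limitations": ["Lower accuracy for self-employed applicants"],
    "impact_assessment": {"completed": True, "date": str(date(2025, 3, 1))},
    "human_oversight": "Caseworker reviews every automated flag",
    "redress_channel": "https://example.gov/appeals",       # placeholder URL
    "last_audit": {"auditor": "Independent oversight board", "result": "passed"},
}

# Serialise for publication in an open register or audit trail.
print(json.dumps(model_card, indent=2))
```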

Outcomes: AI governance toolkit in place; accountability and safeguards embedded.
Sections:

  • Section 2 (Addressing Bias, Opacity, and Risk in AI)
  • Section 3 (Securing AI – Cybersecurity and Data Governance)
  • Sections 4 and 6 (Tools for Implementation)
  • Section 5 (Core ethical principles in AI governance)
7. Ensuring Adaptability: Monitoring, Evaluation, and Iteration
Objective: Establish adaptive governance to track performance, learn from implementation, and adjust the strategy over time.
Activities:
✓ Define sector-specific KPIs for fairness, accuracy, accessibility, and legal compliance (a minimal sketch follows this checklist).
✓ Institutionalize periodic inter-agency reviews of the AI strategy and legal framework every 2–3 years, involving industry, civil society, and academia, to ensure alignment with technological advancements.

✓ Publish audit results and public registers of government AI deployments.
✓ Use real-world feedback and ombudsperson reports to revise AI policy instruments.
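As a simple illustration of how such KPIs could be tracked between reviews, the sketch below checks hypothetical indicator values against agreed thresholds and flags those needing attention. The KPI names, observed values, and thresholds are placeholder assumptions, not recommended benchmarks.

```python
# An illustrative sketch of checking sector-specific KPIs against agreed thresholds
# during a periodic review. All names, values, and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Kpi:
    name: str
    observed: float
    threshold: float
    higher_is_better: bool = True

    def compliant(self) -> bool:
        # A KPI passes if it meets its threshold in the agreed direction.
        if self.higher_is_better:
            return self.observed >= self.threshold
        return self.observed <= self.threshold

kpis = [
    Kpi("accuracy", observed=0.91, threshold=0.90),
    Kpi("demographic_parity_difference", observed=0.06, threshold=0.10, higher_is_better=False),
    Kpi("accessibility_score", observed=0.83, threshold=0.85),
]

# A simple review report a coordination body could publish alongside audit results.
for kpi in kpis:
    status = "OK" if kpi.compliant() else "REVIEW NEEDED"
    print(f"{kpi.name}: {kpi.observed} (threshold {kpi.threshold}) -> {status}")
```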

Outcomes: Evidence-based AI governance cycle operationalized; public trust reinforced.
Sections:

  • Section 2 (Addressing Bias, Opacity, and Risk in AI)
  • Sections 6 and 7 (Monitoring, Foresight and Collaboration)
8. Looking Ahead: Future-Proofing and International Collaboration
Objective: Prepare for emerging AI risks and opportunities, and actively shape global AI governance.
Activities:
✓ Establish a foresight unit or task force to monitor emerging models (e.g., AGI, Agentic AI).
✓ Join or establish international and regional AI alliances (e.g., ITU’s AI for Good, UNESCO AI coalitions, the Global Partnership on AI (GPAI)).
✓ Forge memoranda of understanding (MoUs) for knowledge exchange with peer regulators and research centers.
✓ Collaborate with international standards bodies (e.g., ITU, ISO, IEEE) on AI testing, certification, and harmonization.
✓ Co-develop interoperable safety protocols, especially for high-risk and transboundary AI applications.
Outcomes: National AI governance framework aligned with global norms; international leadership role strengthened.
Sections:

  • Section 3 (Securing AI – Cybersecurity and Data Governance)
  • Section 6 (Multi-stakeholder Ecosystems and Collaborative AI Governance)

 

Annexes

Annex 1

ISO standards relevant to AI governance

| Aspect | ISO Standard | Primary Focus | Contribution to Value Creation |
| --- | --- | --- | --- |
| Comprehensive Risk Management | ISO 42001 | Structured Risk Management Framework | Protects value by building resilience, reducing disruptions, and aligning AI performance with strategic goals. |
| Comprehensive Risk Management | ISO 23894 | Lifecycle Risk Management | Ensures proactive risk identification and mitigation, helping prevent costly operational issues. |
| Bias Management at Every Stage | ISO 42001 | Continuous Bias Mitigation | Achieves fair, reliable AI outcomes that meet ethical and organizational standards, preserving brand trust. |
| Bias Management at Every Stage | ISO/IEC TS 12791 | Practical Guidance on Bias Reduction | Reduces reputational risk by minimizing bias in data and models, enhancing public trust and long-term value. |
| Ethical Oversight & Stakeholder Engagement | ISO 42001 | Ethical Alignment and Stakeholder Engagement | Aligns AI with societal expectations and organizational values, fostering customer loyalty and reputation. |
| Ethical Oversight & Stakeholder Engagement | ISO 5339 | Societal Impact and Stakeholder Considerations | Builds transparency and accountability by addressing ethical concerns, strengthening trust across stakeholders. |
| Continuous Monitoring & Adaptability | ISO 42001 | Ongoing Performance Monitoring | Sustains AI alignment with business goals, optimizing AI for relevance and minimizing costs of outdated models. |
| Continuous Monitoring & Adaptability | ISO 5338 | Lifecycle-Based Improvement Processes | Facilitates iterative refinement, ensuring AI systems are adaptable and maintain efficiency over time. |
| Continuous Monitoring & Adaptability | ISO 23894 | Continuous Risk Monitoring | Enhances resilience by adapting risk management to evolving threats and operational changes. |

Annex 2

Seven potential phases for the AI algorithm audit (figure). Source: https://www.researchgate.net/figure/Seven-potential-phases-for-the-AI-Algorithm-Audit-In-each-phase-an-auditor-has-various_fig2_353568078

Annex 3

Council of Europe HUDERIA – AI Systems Risk and Impact Assessment

The HUDERIA (Human Rights, Democracy, and Rule of Law Impact Assessment for AI Systems) is a guidance framework designed to provide a structured approach to evaluating the risks and impacts of AI systems, with a specific focus on safeguarding and promoting human rights, democracy, and the rule of law. This guidance is intended for both public and private sector stakeholders, serving as a crucial bridge between international human rights standards and existing technical risk management frameworks in the AI domain. As a non-legally binding, standalone document, the HUDERIA offers flexibility for Parties to the Framework Convention, allowing them to adopt or adapt it, fully or partially, to develop new risk assessment methodologies or enhance existing ones. However, Parties remain obligated to uphold the Framework Convention’s requirements, particularly the minimum standards for risk and impact management outlined in Section V.

Source: https://www.coe.int/en/web/artificial-intelligence/huderia-risk-and-impact-assessment-of-ai-systems

Netherlands – Fundamental Rights and Algorithms Impact Assessment (FRAIA)
The Dutch House of Representatives has mandated the implementation of the Fundamental Rights and Algorithms Impact Assessment (FRAIA) for the use of algorithms by public authorities. FRAIA serves as a human rights assessment tool, allowing teams to systematically and proactively evaluate key considerations related to algorithm deployment. Its primary objective is to ensure that algorithms are not implemented without a clear understanding of their potential impacts.

Source: https://www.government.nl/documents/reports/2022/03/31/impact-assessment-fundamental-rights-and-algorithms

Annex 4

What does rights-respecting AI governance look like?

Practical remedies and accountability mechanisms are crucial in addressing AI-driven rights violations. Without transparency and oversight, AI systems can make consequential decisions that are difficult to contest. Legal frameworks must ensure that individuals affected by AI-driven systems have access to redress when harm is caused, and a responsible AI governance framework should embed clear principles of transparency, accountability, and redress.

Despite their differences, these approaches are not mutually exclusive. Combining them can create comprehensive AI governance frameworks. Risk-based methods can provide operational guidance, while rights-based approaches ensure ethical oversight.

For example, the EU AI Act integrates elements of both by embedding non-discrimination and privacy protections within a risk-based structure. A hybrid model that leverages the strengths of both approaches can balance the need for innovation with the imperative to safeguard fundamental rights, ensuring that AI technologies are responsibly developed and deployed.

  1. Recommendation ITU-T M.3080,https://www.itu.int/rec/T-REC-M.3080
  2. Wadhwa, R. et al. (2023). Machine Learning-based Weather Prediction: A Comparative Study of Regression and Classification Algorithms, https://ieeexplore.ieee.org/abstract/document/10201679
  3. Whitelaw, S. et al. (2021). Applications of digital technology in COVID-19 pandemic planning and response, https://www.thelancet.com/journals/landig/article/PIIS2589-7500(20)30142-4/fulltext.
  4. International Energy Agency (IEA). (2022). Digitalisation and Energy, https://www.iea.org/reports/digitalisation-and-energy.
  5. https://www.imf.org/en/Blogs/Articles/2024/01/14/ai-will-transform-the-global-economy-lets-make-sure-it-benefits-humanity
  6. https://www.goldmansachs.com/insights/articles/generative-ai-could-raise-global-gdp-by-7-percent
  7. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-AI-the-next-productivity-frontier#business-value
  8. https://www.frontier-economics.com/de/de/nachrichten-einblicke/veroeffentlichungen/article-i20841-the-impact-of-artificial-intelligence-on-the-telecoms-sector/
  9. A blueprint for building national compute capacity for artificial intelligence. OECD Digital Economy Papers No. 350 (2023).
  10. https://www.nature.com/articles/s41598-023-45723-x
  11. GSMA (2024) https://www.gsma.com/solutions-and-impact/connectivity-for-good/mobile-for-development/wp-content/uploads/2024/07/AI_for_Africa.pdf.
  12. UNESCO (2023). Global Toolkit on AI and the Rule of Law.
  13.  “Country Activity Tracker (CAT): Artificial Intelligence.” Emerging Technology Observatory, December 18, 2024. https://cat.eto.tech/?dataset=Investment&expanded=Summary-metrics
  14. Stanford HAI (2024). AI Index Report 2024. https://aiindex.stanford.edu
  15. OECD (2024) The impact of Artificial Intelligence on productivity, distribution and growth, Key mechanisms, initial evidence and policy challenges, https://doi.org/10.1787/8d900037-en; See also: https://ai-watch.ec.europa.eu/topics/ai-landscape_en
  16. https://www.brookings.edu/articles/network-architecture-for-global-ai-policy/
  17. GSMA (2024) https://www.gsma.com/solutions-and-impact/connectivity-for-good/mobile-for-development/wp-content/uploads/2024/07/AI_for_Africa.pdf.
  18. Source: World Bank (2024) Global Trends in AI Governance, Evolving Country Approaches, https://documents.worldbank.org/en/publication/documents-reports/documentdetail/099120224205026271/P1786161ad76ca0ae1ba3b1558ca4ff88ba
  19. UNESCO (2023) Guidance for generative AI in education and research, https://www.unesco.org/en/articles/guidance-generative-ai-education-and-research
  20. World Bank (2024) AI, the new wingman of development

    https://thedocs.worldbank.org/en/doc/20ca38de6ebb3fc55a9c6a2883bffda8-0050022024/original/AI-the-new-wingman-of-development-Siddharth-Dixit-and-Indermit-Gill.pdf

  21. https://blogs.nvidia.com/blog/what-is-agentic-ai/
  22. https://www.brookings.edu/articles/network-architecture-for-global-ai-policy/
  23. https://chatgpt.com/
  24. https://gemini.google.com/
  25. https://www.llama.com/
  26. https://www.deepseek.com/en
  27. https://www.aclu.org/news/privacy-technology/wrongfully-arrested-because-face-recognition-cant-tell-black-people-apart
  28. https://www.nlm.nih.gov/research/visible/visible_human.html
  29. https://www.who.int/news-room/feature-stories/detail/strengthening-health-data-kenya
  30. https://www.kaggle.com/datasets/jessicali9530/lfw-dataset. The Labeled Ancestral Origin Faces in the Wild dataset contains 14,000 images divided into four classes, including sub-Saharan Africa, to study biases in face recognition, https://www.robots.ox.ac.uk/~vgg/data/laofiw/
  31. http://privacyinternational.org/partners/coding-rights
  32. https://www.dataprivacybr.org/en/
  33. https://ncair.nitda.gov.ng/wp-content/uploads/2024/08/National-AI-Strategy_01082024-copy.pdf
  34. https://www.nvidia.com/en-us/glossary/synthetic-data-generation/
  35. https://www.sciencedirect.com/science/article/pii/S2667010024001951
  36. Achieving representative datasets across countries is technically and politically complex, especially in the Global South where data infrastructure may be limited. However, regional partnerships, open data initiatives, and support for local data governance capacity can help bridge these gaps.
  37. https://iapp.org/media/pdf/resource_center/key_terms_for_ai_governance.pdf
  38. https://www.llama.com/faq/
  39. https://www.youtube.com/watch?v=zizonToFXDs
  40. https://www.nature.com/articles/s42256-019-0048-x.epdf
  41. https://www.ibm.com/think/topics/black-box-ai
  42. https://digital-strategy.ec.europa.eu/en/library/commission-publishes-guidelines-ai-system-definition-facilitate-first-ai-acts-rules-application
  43. https://hyperight.com/role-of-explainability-in-ai-regulatory-frameworks/
  44. https://insights.sei.cmu.edu/blog/what-is-explainable-ai/
  45. https://artificialintelligenceact.eu/article/86/
  46. Ante-hoc explanation means to rely, by design, on a transparent model, instead of providing explanations of an AI model.
  47. https://time.com/6980210/anthropic-interpretability-ai-safety-research/
  48. https://www.ibm.com/think/topics/explainable-ai
  49. https://reports.weforum.org/docs/WEF_Artificial_Intelligence_and_Cybersecurity_Balancing_Risks_and_Rewards_2025.pdf
  50. Ibid.
  51. An API, or Application Programming Interface, is a set of rules and protocols that defines how software applications can communicate and exchange data. It acts as an intermediary layer, allowing one application to request services or data from another. In essence, an API is a contract that specifies how different software components can interact with each other. See: https://www.ituonline.com/tech-definitions/what-is-an-api-application-programming-interface/
  52. https://www.desc.gov.ae/dubai-electronic-security-center-launches-the-dubai-ai-security-policy/
  53. https://www.csa.gov.sg/resources/publications/guidelines-and-companion-guide-on-securing-ai-systems
  54. https://www.gov.uk/government/calls-for-evidence/cyber-security-of-ai-a-call-for-views
  55. https://www.nist.gov/news-events/news/2024/01/nist-identifies-types-cyberattacks-manipulate-behavior-ai-systems
  56. https://owaspai.org/
  57. https://www.oecd.org/en/topics/policy-issues/privacy-and-data-protection.html
  58. https://www.oecd.org/en/publications/ai-data-governance-and-privacy_2476b1a4-en.html
  59. https://www.emilabs.ai/blog/data-privacy-netflix
  60. https://www.oecd.org/en/publications/ai-data-governance-and-privacy_2476b1a4-en.html
  61. https://www.tiktok.com/foryou?lang=en
  62. https://vk.com/
  63. https://sharechat.com/
  64. https://musically.com/2022/03/14/kuaishous-kwai-short-video-app-has-45m-users-in-brazil/
  65. https://www.ayoba.me/web/home?lang=en
  66. https://www.deezer.com/fr/
  67. https://en.zalando.de/?_rfl=de
  68. https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html
  69. https://www.forbes.com/sites/roberthart/2024/09/03/clearview-ai-controversial-facial-recognition-firm-fined-33-million-for-illegal-database/
  70. https://www.aclum.org/en/press-releases/cellphone-location-data-used-target-abortion-misinformation-visitors; https://www.wired.com/story/strava-heat-map-military-bases-fitness-trackers-privacy/; https://www.nytimes.com/interactive/2018/12/10/business/location-data-privacy-apps.html
  71. https://dueros.baidu.com/en/index.html
  72. https://www.samsung.com/us/apps/bixby/
  73. https://snips.ai/
  74. https://viamo.io/regions/sub-saharan-africa/
  75. https://www.theguardian.com/technology/2018/may/24/amazon-alexa-recorded-conversation
  76. https://www.mdpi.com/1424-8220/23/6/3087
  77. https://ovic.vic.gov.au/privacy/resources-for-organisations/internet-of-things-and-privacy-issues-and-challenges/
  78. https://www.un.org/sites/un2.un.org/files/governing_ai_for_humanity_final_report_en.pdf
  79. https://www.un.org/sites/un2.un.org/files/governing_ai_for_humanity_final_report_en.pdf
  80. https://www.un.org/global-digital-compact/sites/default/files/2024-09/Global%20Digital%20Compact%20-%20English_0.pdf
  81. https://unsceb.org/united-nations-system-white-paper-ai-governance
  82. https://www.unesco.org/en/articles/recommendation-ethics-artificial-intelligence.
  83. https://www.coe.int/en/web/portal/-/council-of-europe-opens-first-ever-global-treaty-on-ai-for-signature.
  84. “Continental Artificial Intelligence Strategy.” African Union, July 2024. https://au.int/sites/default/files/documents/44004-doc-EN-_Continental_AI_Strategy_July_2024.pdf, p. 1.
  85. https://www.soumu.go.jp/hiroshimaaiprocess/pdf/document05_en.pdf
  86. https://carnegieendowment.org/research/2024/10/the-ai-governance-arms-race-from-summit-pageantry-to-progress?lang=en&center=europe
  87. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689
  88. https://ec.europa.eu/commission/presscorner/detail/en/ip_24_383
  89. https://digital-strategy.ec.europa.eu/en/policies/ai-factories
  90. https://digital-strategy.ec.europa.eu/en/policies/plan-ai
  91. https://digitalpolicyalert.org/event/21335-introduced-bill-3003-d-2024-establishing-a-legal-framework-for-regulation-of-the-responsible-use-of-ai-including-ban-on-systems-that-pose-unacceptable-risks
  92. https://www.whitecase.com/insight-our-thinking/latin-america-focus-2024-ai-regulation
  93. “Voluntary” means that using Singapore’s AI Verify framework is not legally required or mandated by the Government. Companies can choose whether or not to adopt it. The framework is offered as a tool to encourage responsible AI practices by providing guidance and assessment methods, but participation is entirely optional. It is designed to promote transparency and fairness in AI systems without imposing regulatory obligations.
  94. https://www.pdpc.gov.sg/help-and-resources/2020/01/model-ai-governance-framework
  95. https://www.imda.gov.sg/resources/press-releases-factsheets-and-speeches/press-releases/2024/public-consult-model-ai-governance-framework-genai
  96. https://ediscoverytoday.com/2025/05/30/japans-ai-promotion-bill-and-how-it-differs-from-the-eu-ai-act-artificial-intelligence-trends/
  97. https://iapp.org/news/a/japan-passes-innovation-focused-ai-governance-bill
  98. https://airc.nist.gov/airmf-resources/airmf/
  99. https://www.brookings.edu/articles/network-architecture-for-global-ai-policy/
  100. https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-china
  101. https://iapp.org/news/a/analyzing-south-korea-s-framework-act-on-the-development-of-ai
  102. https://www.eff.org/files/2024/10/11/human_rights_implications_and_operational_framework.pdf
  103. https://www.coe.int/en/web/artificial-intelligence/the-framework-convention-on-artificial-intelligence
  104. https://www.un.org/en/about-us/universal-declaration-of-human-rights
  105. https://www.ohchr.org/documents/publications/guidingprinciplesbusinesshr_en.pdf
  106. https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm
  107. https://digitalfreedomfund.org/the-syri-welfare-fraud-risk-scoring-algorithm/
  108. https://www.nature.com/articles/s41591-021-01595-0
  109. “EU-US Trade and Technology Council.” EU Commission. https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/stronger-europe-world/eu-us-trade-and-technology-council_en.
  110. Sharma, Keah, and Malhaar Mohrarir, “2023 G7 Hiroshima Goals Set and Met.” G7 Research Group, May 26, 2023. https://g7.utoronto.ca/evaluations/2023hiroshima/goals-met.html.
  111. O’Brien, Claire, Mark Rasdale, and Daisy Wong, “The role of harmonised standards as tools for AI act compliance.” DLA Piper, January 10, 2024. https://www.dlapiper.com/es-pr/insights/publications/2024/01/the-role-of-harmonised-standards-as-tools-for-ai-act-compliance.
  112. “Artificial Intelligence.” CEN CENELEC. https://www.cencenelec.eu/areas-of-work/cen-cenelec-topics/artificial-intelligence/.
  113. “Global Digital Compact.” United Nations, September 22, 2024. https://www.un.org/global-digital-compact/sites/default/files/2024-09/Global%20Digital%20Compact%20-%20English_0.pdf, p. 14.
  114. Report on PP Resolution 214 (Bucharest, 2022) – Artificial intelligence technologies and telecommunications/information and communication technologies. https://www.itu.int/md/S25-CWGWSIS43-C-0009/en
  115. https://www.itu.int/hub/2025/04/standards-help-unlock-trustworthy-ai-opportunities-for-all/
  116. https://www.itu.int/ITU-T/recommendations/rec.aspx?rec=13894&lang=en
  117. https://www.itu.int/ITU-T/recommendations/rec.aspx?rec=14133
  118. https://www.itu.int/ITU-T/recommendations/rec.aspx?rec=14134
  119. https://www.itu.int/en/ITU-T/focusgroups/ml5g/Pages/default.aspx
  120. https://www.iso.org/standard/81230.html
  121. https://www.iso.org/standard/77304.html
  122. https://www.iso.org/standard/84110.html
  123. https://www.iso.org/standard/81120.html
  124. https://www.nist.gov/itl/ai-risk-management-framework
  125. International AI Safety Report (2025), https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf
  126. https://ieeexplore.ieee.org/document/10851955.
  127. https://ieeexplore.ieee.org/browse/standards/get-program/page/series?id=93.
  128. https://aiqualityhub.com/en/
  129. https://arxiv.org/abs/2407.19439. See also: https://www.lawfaremedia.org/article/a-dynamic-governance-model-for-ai.
  130. https://aiverifyfoundation.sg/what-is-ai-verify/
  131. https://digital-strategy.ec.europa.eu/en/news/first-regulatory-sandbox-artificial-intelligence-presented; also see: https://artificialintelligenceact.eu/ai-regulatory-sandbox-approaches-eu-member-state-overview/
  132. https://www.mlex.com/mlex/artificial-intelligence/articles/2297586/south-korea-designates-two-ai-projects-for-regulatory-sandbox-program
  133. https://www.fca.org.uk/news/news-stories/launch-permanent-digital-sandbox
  134. https://www.thegfin.com/annual-work-plan
  135. https://www.techtarget.com/searchenterpriseai/definition/AI-red-teaming
  136. https://partnershiponai.org/workstream/ai-incidents-database/
  137. https://www.nist.gov/caisi
  138. https://aistandardshub.org/
  139. https://www.mas.gov.sg/~/media/MAS/News%20and%20Publications/Monographs%20and%20Information%20Papers/FEAT%20Principles%20Final.pdf
  140. https://www.csiro.au/en/research/technology-space/ai/ai-ethics-framework
  141. https://www.unesco.org/en/articles/recommendation-ethics-artificial-intelligence
  142. https://www.oecd.org/en/topics/ai-principles.html
  143. https://www.brookings.edu/articles/network-architecture-for-global-ai-policy/
  144. https://bias-and-fairness-in-ai-systems.de/en/basics/
  145. To prevent such issues, organizations and regulators can use fairness-focused tools that detect and mitigate bias in AI systems. Some widely used tools include:

    IBM AI Fairness 360: An open-source toolkit that provides algorithms to measure and mitigate bias in machine learning models. It includes fairness metrics and bias mitigation techniques to improve decision-making transparency (https://research.ibm.com/publications/ai-fairness-360-an-extensible-toolkit-for-detecting-and-mitigating-algorithmic-bias)


    Google’s What-If Tool: A tool integrated with TensorFlow that allows users to visually inspect AI models, test different scenarios, and analyze biases in model predictions.(https://pair-code.github.io/what-if-tool/)


    Microsoft Fairlearn: A Python library that helps assess and improve fairness in AI models by identifying disparities in how different groups are treated.


    Aequitas: A bias audit toolkit developed by the University of Chicago to help policymakers and data scientists assess the fairness of AI systems, particularly in public policy applications (http://aequitas.dssg.io/)

  146. Several tools support AI accountability efforts:

    AI Explainability 360 (IBM): Helps organizations understand how AI models make decisions, improving transparency (https://aix360.res.ibm.com/)


    Google Model Cards: Provide standardized reports on AI models, detailing their intended use, limitations, and ethical considerations (https://iapp.org/news/a/5-things-to-know-about-ai-model-cards)


    OECD AI Policy Observatory: A platform tracking global AI policies and best practices (https://oecd.ai/)

  147. https://oecd.ai/en/ai-principles
  148. https://oecd.ai/en/dashboards/ai-principles/P7
  149. https://algoritmeregister.amsterdam.nl/en/ai-register/
  150. https://ai.hel.fi/
  151. https://www.gov.uk/government/news/uk-government-publishes-pioneering-standard-for-algorithmic-transparency.
  152. https://www.opengovpartnership.org/open-gov-guide/digital-governance-automated-decision-making/
  153. https://oecd-opsi.org/blog/chile-algorithmic-transparency/
  154. https://www.dailymotion.com/video/x7uv803
  155. https://gdpr-info.eu/art-22-gdpr/
  156. https://www.gov.br/anpd/pt-br/centrais-de-conteudo/outros-documentos-e-publicacoes-institucionais/lgpd-en-lei-no-13-709-capa.pdf
  157. https://www3.weforum.org/docs/WEF_A_Blueprint_for_Equity_and_Inclusion_in_Artificial_Intelligence_2022.pdf
  158. https://blog.mozilla.org/en/mozilla/ai/open-source-ai-definition/
  159. https://www.microsoft.com/en-us/garage/wall-of-fame/seeing-ai/
  160. https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2021.725911/full
  161. https://www.darpa.mil/research/programs/cyber-grand-challenge
  162. ITU (2025) Greening Digital Companies 2025: Monitoring emissions and climate commitments, https://www.itu.int/en/ITU-D/Environment/Documents/Publications/2025/ITU-WBA-Greening-Digital-Companies-2025.pdf
  163. Ibid.
  164. Ibid.
  165. Ibid.
  166. https://www.wri.org/insights/how-mining-impacts-forests
  167. https://www.itu.int/en/ITU-D/Statistics/Pages/stat/default.aspx
  168. https://www.brookings.edu/articles/smart-ai-regulation-strategies-for-latin-american-policymakers/
  169. OECD.AI Policy Observatory, AI Policy Toolkit: Stakeholder Engagement in AI Governance, https://oecd.ai/en/tools-evaluations/policy-toolkit
  170. https://www.itu.int/en/ITU-D/ICT-Applications/Pages/Initiatives/ITU_OSPO/Open-Source_AI_for_Public_Services/About_the_Initiative.aspx
  171. https://aiforgood.itu.int/summit24/
  172. https://ai-commons.org/initiatives/
  173. https://interoperable-europe.ec.europa.eu/collection/open-source-observatory-osor/news/inspect-uk-government-backed-open-source-ai-testing-platform.
  174. https://aiverifyfoundation.sg/what-is-ai-verify/
  175. https://www.mozilla.ai/
  176. https://bigscience.huggingface.co/
  177. Bridging global intelligence divide: China’s solutions to making AI more inclusive | english.scio.gov.cn
  178. Useful resources on regulatory sandboxes in AI and transformative tech: ITU, Digital Regulation Platform, A case for ICT Regulatory Sandbox, https://digitalregulation.org/a-case-for-ict-regulatory-sandbox/; CST, Emerging Technologies Sandbox, https://www.cst.gov.sa/en/services/Pages/Emerging_Technologies_sandbox.aspx

    OECD, https://www.oecd.org/publications/regulatory-sandboxes-in-artificial-intelligence-8f80a0e6-en.htm


    IAPP, Saudi Data and AI Authority launches regulatory sandbox, https://iapp.org/news/a/saudi-data-ai-authority-launches-regulatory-sandbox/; World Bank,  Key Data from Regulatory Sandboxes across the Globe, https://www.worldbank.org/en/topic/fintech/brief/key-data-from-regulatory-sandboxes-across-the-globe; IADB, Regulatory Sandboxes, Innovation Hubs, and Other Regulatory Innovation Tools in Latin America and the Caribbean, https://publications.iadb.org/en/regulatory-sandboxes-innovation-hubs-and-other-regulatory-innovation-tools-latin-america-and; Germany, Regulatory Sandboxes – Testing Environments for Innovation and Regulation, https://www.bmwk.de/Redaktion/EN/Dossier/regulatory-sandboxes.html


    Singapore IMDA, Data Regulatory Sandbox, https://www.imda.gov.sg/how-we-can-help/data-innovation/data-regulatory-sandbox. Case study, https://www.imda.gov.sg/-/media/imda/files/programme/data-collaborative-programme/datathon-case-study.pdf; UN DESA Policy Brief No. 123: Sandboxing and experimenting digital technologies for sustainable development | Department of Economic and Social Affairs, https://www.un.org/development/desa/dpad/publication/un-desa-policy-brief-123-sandboxing-and-experimenting-digital-technologies-for-sustainable-development/; OECD, Regulatory sandboxes can facilitate experimentation in artificial intelligence, https://oecd.ai/en/wonk/sandboxes

Last updated on: 10.09.2025