Digital Regulation Platform

Transformative technologies (AI): challenges and principles of regulation



The world is experiencing a technological and social revolution moving with exponential velocity. Innovative technologies such as Artificial Intelligence (AI), the Internet of Things (IoT), Blockchain, robotics, 3D printing, nanotechnology, and augmented and virtual reality are emerging and converging, bringing about a new digital era.

This new digital era is different due to the breadth of its scope and the depth of its impact on human interaction and identity, and on systems of production, distribution, and consumption around the globe. It is pervasive and non-linear; often, its consequences cannot be anticipated with certainty. It is an era where machines learn on their own; self-driving cars communicate with smart transportation infrastructure; and smart devices and algorithms respond to, and predict, human needs and wants.

AI-powered products and services have the potential to lead to new medicines, speed the transition to a low-carbon economy, and help people enjoy dignity in retirement and old age. The economic gains alone could be enormous. AI could contribute up to USD 15.7 trillion to the global economy by 2030, more than the current output of China and India combined. Of this, USD 6.6 trillion would come from increased productivity and USD 9.1 trillion from consumption-side effects. The total projected impact for Africa, Oceania, and other Asian markets would be USD 1.2 trillion. For comparison, the combined 2019 GDP of all the countries in Sub-Saharan Africa was USD 1.8 trillion. Thus, the successful deployment of AI and big data presents a world of opportunities.[2]
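The headline estimate is simply the sum of the two cited components. A quick sanity check of the figures above (a minimal sketch, with the numbers taken from the report cited in the text):

```python
# Sanity check of the cited AI impact estimates (all figures in USD trillions).
productivity_gains = 6.6    # supply-side gains from increased productivity
consumption_effects = 9.1   # demand-side, consumption-side effects

total = productivity_gains + consumption_effects
print(round(total, 1))      # 15.7 – the headline 2030 estimate

# Regional comparison cited in the text: projected AI impact for Africa,
# Oceania and other Asian markets vs. combined 2019 Sub-Saharan African GDP.
print(round(1.2 / 1.8, 2))  # 0.67 – about two-thirds of that combined GDP
```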

Figure 1. Expected economic gains from AI in different regions of the world

Source: ITU Emerging technology trends: Artificial intelligence and big data for development 4.0 report, 2021.

New governance frameworks, protocols, and policy systems are needed for the new digital era to ensure all-inclusive and equitable benefits. Societies need regulatory approaches that are not only human-led and human-centered, but also nature-led and nature-centered. Government policies need to balance public interests, such as human dignity and identity, trust, nature preservation and climate change, and private sector interests, such as business disruptiveness and profits. As novel business models emerge, such as fintech[3] and the sharing economy[4], regulators are faced with a host of challenges: rethinking traditional regulatory models, coordination problems, regulatory silos, and the robustness of outdated rules.

This reinforces the need to build flexible and dynamic regulatory models that respond to change and optimize its impact. A complex web of regulations would impose prohibitive costs on new entrants into transformative technologies’ markets; cumbersome compliance burdens would leave only large firms able to afford compliance.

This article highlights the unique regulatory issues posed by transformative technologies: the unpredictable nature of business models that rely on transformative technologies; the importance of data ownership, control, privacy, consumer protection, and security; and the AI conundrum. The article further defines and provides a set of principles to guide the future of regulation of transformative technologies: innovative and adaptive regulation, outcome-focused regulation, evidence-based regulation, and collaborative regulation.

Transformative technologies regulation: the challenges

Traditional regulatory structures are complex, fragmented, risk-averse, and adjust slowly to shifting social circumstances, with various public agencies having overlapping authority. On the other hand, a unicorn startup can develop into a company with a global reach in a couple of years, if not months. For instance, Airbnb went from startup in 2008 to a Silicon Valley unicorn in 2011 valued at a billion dollars, based on USD 112 million invested by venture capitalists.[5]

Transformative technologies are multifaceted and transcend national boundaries. Since there are no global regulatory standards, coordinating with regulators across borders is a challenge.

There are three key challenges in regulating transformative technologies: (i) the unpredictable nature of business models that rely on transformative technologies; (ii) data privacy, security, ownership, and control; and (iii) the AI conundrum.

The unpredictable nature of business models that rely on transformative technologies

Products and services embedded in transformative technologies’ solutions evolve quickly and shift from one regulatory category to another. For example, if a ride-hailing company, such as Uber, begins delivering food, it can fall under the jurisdiction of health regulators. If it expands into delivering drone services, it will fall under the purview of aviation regulators. If it uses self-driving cars for passengers, it may come under the jurisdiction of the transport regulators. Maintaining consistency in regulations is difficult in the sharing economy, where the lines between categories and classification of services and products are often blurred.

Airbnb recently won a court battle in the European Union (EU) that affects how the company will be regulated in the future. The EU Court of Justice ruled that Airbnb should not be considered an estate agent but an “information society service,” meaning it can avoid certain responsibilities and continue operating as an e-commerce platform.[6]

In the case of Uber, however, the EU Court of Justice ruled that the company is a transportation service, not a platform. The Court held that the difference between Uber and Airbnb lies in the level of control each exercises over the services hosted on its platform. Unlike Uber, which controlled pricing and automatically paired up drivers and passengers, Airbnb allowed property owners to set their own prices and rent out their homes through other channels.[7]

Transformative technologies and liability

The fast-evolving, interconnected nature of disruptive business models can also make it difficult to assign liability for the harm done. For example, if a self-driving car crashes and kills someone, who will be held liable — the system’s programmers, the driver behind the wheel, the car’s manufacturer, or the manufacturer of the vehicle’s onboard sensory equipment? The general inclination across different jurisdictions has been towards assigning strict liability for the damage caused by transformative technologies, under certain circumstances, such as the use of these technologies in public spaces (e.g., drones, self-driving cars).[8]

The legal concept of liability is challenged even more by reinforcement learning, a training method that allows AI to learn from past experiences. Imagine a scenario where an AI-controlled traffic light learns that it is more efficient to change the light one second earlier, resulting in more drivers running the light and causing more accidents. In this example, human control is removed at several points, making it difficult for regulators to assign liability.[9]
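The dynamic described above can be illustrated with a toy reinforcement-learning sketch. The scenario, throughput numbers, and accident rates below are all invented for illustration; the point is only that a learner optimizing a reward signal that omits safety will converge on the riskier timing:

```python
import random

random.seed(0)

# Toy bandit-style learner for the traffic-light example. The reward is
# vehicle throughput per cycle; accidents are tracked but NOT penalized,
# so the learner "prefers" the riskier timing. All numbers are invented.
ACTIONS = ["normal_timing", "one_second_earlier"]
THROUGHPUT = {"normal_timing": 20.0, "one_second_earlier": 21.5}
ACCIDENT_RATE = {"normal_timing": 0.01, "one_second_earlier": 0.05}  # unused by the reward

values = {a: 0.0 for a in ACTIONS}  # running estimate of reward per action
counts = {a: 0 for a in ACTIONS}

for step in range(5000):
    # Epsilon-greedy action selection: mostly exploit, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=values.get)
    # Reward = noisy throughput; nothing in it reflects the accident rate.
    reward = random.gauss(THROUGHPUT[action], 1.0)
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]

# The learner settles on the earlier change despite the higher accident
# rate, because the reward signal never saw the accidents.
print(max(ACTIONS, key=values.get))  # one_second_earlier
```

This is the regulatory difficulty in miniature: the harmful policy was never written by any programmer; it emerged from an objective that looked reasonable in isolation.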

3D printing is another transformative technology that challenges the traditional legal concept of liability. If a 3D-printed house collapses, who is to blame: the supplier of the design, the manufacturer that 3D-printed the house parts, or the manufacturer of the 3D printer?

Blockchain and its decentralized nature present a different type of concern to regulators. Even though blockchain applications have been praised for their security and immutability, their anonymous and decentralized nature is a novel challenge for regulators around the globe. An illustrative example is the cyberattack on the Decentralized Autonomous Organization (DAO), a decentralized investment fund running on Ethereum, a blockchain platform. DAO’s creators intended to build a democratic financial institution whose code would eliminate the need for human control and oversight. However, in 2016 a hacker took advantage of a flaw in DAO’s code and stole USD 50 million of virtual currency. The hacker has not been identified, and due to the decentralized nature of the system, liability cannot be assigned to anyone or anything.[10]

The importance of data: ownership, control, privacy, consumer protection and security

The rising use of smartphones, security cameras, connected devices, and sensors has created a massive digital footprint and data overload. An illustration of data overload can be seen in the case of self-driving cars that are expected to churn out around 4,000 gigabytes of data per day.[11] Other machines generating data overload include satellites, environmental sensors, security cameras, and mobile phones.

People’s lives can benefit greatly when decisions are informed by pertinent data that reveal hidden and unexpected connections and market trends. For instance, identifying and tracking genes associated with certain types of cancer can help inform and improve treatments. However, often unaware, ordinary people bear many of the costs and risks of participating in data markets. In many jurisdictions, the so-called data brokers are amassing and selling personal data, and this is a legal practice.[12]

The data economy brings disruptive changes propelled by AI and machine learning. For instance, human bankers are already being replaced by AI and big data. Many fintech lending startups have started using alternative data sources, and traditional insurance companies are following suit. Regulators are struggling to provide guidelines in this area that would enable the financial industry to innovate while protecting consumers from bias and discrimination. For example, New York’s Department of Financial Services has released new guidelines that allow life insurance companies to use customers’ social media data to determine their premiums (as long as they do not discriminate).[13]

Data ownership

From a regulatory point of view, the crux of the question is who has access and control over all this data. Is it the government, the users, or the service providers who store the data? From a legal perspective, data per se cannot be owned, and no legal system offers ownership of raw data.[14] If the service provider has access to personal information, what obligation does it have to store and protect it? Can the service provider share our personal data with third parties, i.e., data brokers? Can a car manufacturer charge a higher price to car buyers who refuse to share their personal data?[15]

Usage of data

Data is often used in ways far beyond consumers’ understanding. Consumers may sign up for a clever app without realizing that it uses account data for purposes far broader than necessary for its immediate function. Or they may apply for a loan, assuming account access is used only for granting the loan, without realizing that the company retains ongoing access to their account. These issues compound as data flows multiply.

Data sharing and sale

Privacy policies can be opaque. Consumers may not realize that their data has been shared or sold, potentially to unrelated third parties. This is further complicated when the whole process is automated.[16]

No global agreement on data protection [17]

There is no global agreement on data protection, and regulators around the globe take very different, oftentimes conflicting, stances in regulating data within their national borders. For instance, the EU’s General Data Protection Regulation (GDPR) provides for the principle of privacy, strict controls over cross-border data transmissions, and the right “to be forgotten”. The GDPR will likely influence other countries in revising their data protection legislation, and it already has extraterritorial reach over the private sector’s cross-border data transactions. Global companies are revising privacy policies to comply with the GDPR, and content websites outside Europe have started denying access to European consumers because they cannot ensure compliance with it.[18]

Unlike the EU approach, the US approach has been more segmented, focused on sector-specific rules (e.g., health care, financial, and retail) and state laws. In the US, it is not unusual for credit card companies to know what their customers consume. Uber, for instance, knows where its customers go and how they behave during the ride. Social media platforms know whether their users like to read CNN or Breitbart News.

In the EU, the right to privacy, and the right to have personal data protected, are fundamental rights guaranteed by the EU Charter of Fundamental Rights.[19] The EU has an umbrella data protection framework that does not differentiate between data held by private or public actors, with only a few exceptions (e.g. national security). By contrast, in the US for example, the right to privacy is not considered a fundamental right. The right to privacy is counter-balanced by strong rights to free speech and freedom of information. Nevertheless, some cities and states have started regulating privacy following the EU’s GDPR model.

Anonymization does not equal privacy [20]

The privacy of public data is usually protected through anonymization. Identifiable attributes such as names, phone numbers, and email addresses are stripped out; data sets are altered to be less precise; and “noise” is introduced into the data. However, a recent study published in Nature Communications suggests that anonymization does not always guarantee privacy. Researchers have developed a machine-learning model that estimates how easily individuals can be re-identified from an anonymized data set using their zip code, gender, and date of birth.[21]
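A minimal sketch of the study’s core idea, with invented records: count how many rows in an “anonymized” data set are uniquely singled out by the zip + gender + date-of-birth triple.

```python
from collections import Counter

# Even with names removed, the combination of quasi-identifiers
# (ZIP code, gender, date of birth) can single out individuals.
# The records below are invented for illustration.
records = [
    {"zip": "10001", "gender": "F", "dob": "1984-03-12"},
    {"zip": "10001", "gender": "F", "dob": "1984-03-12"},  # shares its triple
    {"zip": "10001", "gender": "M", "dob": "1979-07-01"},
    {"zip": "94103", "gender": "F", "dob": "1990-11-30"},
    {"zip": "94103", "gender": "M", "dob": "1962-05-17"},
]

key = lambda r: (r["zip"], r["gender"], r["dob"])
freq = Counter(key(r) for r in records)

# A record is re-identifiable if its quasi-identifier triple is unique.
unique = sum(1 for r in records if freq[key(r)] == 1)
print(f"{unique}/{len(records)} records singled out by zip+gender+dob")
# → 3/5 records singled out by zip+gender+dob
```

On real data sets the same uniqueness calculation, scaled up, is what allows re-identification despite the removal of direct identifiers.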

Cybersecurity is a key regulatory challenge in the era of transformative technologies

Cybersecurity is particularly important in areas such as fintech, digital health, digital infrastructure, and intelligent transportation systems, where private, sensitive data can be compromised. Take, for instance, self-driving cars, which need to communicate with each other and with the transport infrastructure. Designers and manufacturers of self-driving cars should take the necessary precautions to ensure that the system cannot be taken over by hackers who might try to steer the vehicle into causing accidents. Hackers might also try to manipulate traffic lights to disrupt traffic.[22]

Another example is data aggregators that access a host of sensitive personal and financial information and provide much of that information to third parties. It is very difficult for consumers to know whether the data aggregator or the end user fintech has robust security controls. Data breaches are common even at the largest companies with extensive compliance programs. Small fintech startups may be especially vulnerable.

Often, data aggregators and fintechs require consumers to turn over their bank account login credentials to enable “screen scraping” of account records. This practice increases security risks. Though data aggregators have struck agreements with many banks to use more secure application programming interfaces (APIs), screen scraping is still used to access accounts at smaller institutions.[23]

IoT, data protection and cybersecurity

The IoT is omnipresent nowadays. One study estimated that there would be 50 billion active IoT devices worldwide by 2022.[24] And that counts only consumer devices, not “smart” offices, buildings, and factories. For example, it was estimated that by 2022 an average EU household would have 14.8 appliances and devices connected to the Internet – light switches, lights, heating controls, security cameras, blinds, doorbells, loudspeakers, etc.[25]

The example of smart wearables. Smart wearables provide new solutions to healthcare through medical monitoring, emergency management and safety at work. These electronic devices can monitor, collect, and record biometric, location and movement data in real-time and communicate this data via wireless or cellular communications.

In 2018, 2 million employees with dangerous or physically demanding roles (e.g., paramedics and firefighters) were required to wear health and fitness tracking devices as a condition of their employment.[26] More than 75 million wearables were used in the workplace by 2020. Employers recognize that supporting the health of their staff translates into reduced healthcare costs, less sick leave taken, and higher productivity.[27]

Smart wearables raise interesting data privacy questions: are the companies that monitor health-related data under an obligation to disclose that data to the subjects they belong to if, for example, the device reveals certain health conditions? To what extent can companies use this data for secondary purposes?[28]

The example of smart home devices. Ubiquitous smart home devices present another challenge to regulators. Challenging questions in this regard include: to what extent may the manufacturer of one smart device be to blame for the failure of another? If, for example, a smart fridge can be hacked and bypassed to unlock a connected smart lock, how should liability for the economic loss of items stolen from the home be distributed between the manufacturers of each product? Depending on how these issues are tackled, the risk may be significant, as a single weakness in the code could propagate to thousands of products built on the same code.[29]

Many of the data processing activities involved in IoT operation will fall within personal data protection regulation, given that IoT devices tend to process personal data. Concepts of transparency, fairness, purpose limitation, data minimization, data accuracy and the ability to deliver on data subject rights should be built into the design of the IoT product, to ensure compliance with stringent data protection regulations. 

It can also be challenging to determine whether certain stakeholders act as data controllers or data processors in a particular processing activity in the IoT data protection context. For example, device manufacturers qualify as controllers for the personal data generated by the device, as they design the operating system or determine the overall functionality of the installed software. Third-party app developers that build interfaces allowing individuals to access their data stored by the device manufacturer can be considered controllers. Other third parties (e.g., an insurance company offering lower premiums based on data collected by a step counter) can be considered controllers when using IoT devices to collect and process information about individuals. These third parties usually use the data collected through the device for purposes different from the device manufacturer’s.[30]

IoT stakeholders need to conduct an assessment over the processing activities to identify the respective data protection roles (e.g., controller, joint controllers or processor) and correctly allocate responsibilities (particularly about transparency and data breach obligations and data subject rights).

AI and machine learning might lead to power imbalances and information asymmetries for consumers

AI-based applications raise new, so far unresolved legal questions, and consumer law is no exception.

Targeted advertising

The use of self-learning algorithms in big data analytics gives private companies an opportunity to gain a detailed insight into one’s personal circumstances, behavior patterns and personality (purchases, sites visited, likes on social networks, health data). AI is used in online tracking and profiling of individuals whose browsing habits are collected by “cookies” and digital fingerprinting and then combined with queries through search engines or virtual assistants. Companies can tailor their advertising, but also their prices and contract terms, to the respective customer profile and – drawing on the findings of behavioral economics – exploit the consumer’s biases and/or her willingness to pay. AI-based insights can also be used for scoring systems to decide whether a specific consumer can purchase a product or take up a service.

This creates growing issues for privacy and data protection. Targeted advertising uses internet tracking and profiling based on the person’s expected interests. Because the whole process is automated, users are effectively unable to give meaningful consent. Intensive data processing using AI may exacerbate other rights violations when personal data is used to target individuals, such as in the context of insurance or employment applications, or when algorithms threaten both the right to privacy and the freedom of expression.[1] For instance, social media algorithms decide the content of a user’s newsfeed and influence the number of people who see and share information. Search engine algorithms index content and determine what appears at the top of search results, raising concerns about the diversity of views.

Price discrimination

AI supports digital businesses in presenting consumers with individualised prices, and offering to each consumer an approximation of the highest price point that consumer may be able or willing to pay. Certain markets, such as credit or insurance, operate on cost structures based on risk profiles correlated with features distinctive to individual consumers, suggesting that it may be reasonable to offer different prices (e.g., interest rates) to different consumers. Should regulators allow price discrimination in other cases, too, based on the ability of different consumers to pay?[31]
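The revenue logic behind individualized pricing can be sketched with invented willingness-to-pay (WTP) estimates: charging each consumer just below their predicted WTP extracts more revenue than any single uniform price.

```python
# Toy sketch of individualized pricing. A seller with a per-consumer
# willingness-to-pay (WTP) prediction charges each consumer just under
# it, versus charging one uniform price. All numbers are invented.
wtp_estimates = [12.0, 25.0, 40.0, 55.0, 90.0]  # predicted WTP per consumer

def revenue_uniform(price, wtps):
    # Only consumers whose WTP meets or exceeds the price will buy.
    return sum(price for w in wtps if w >= price)

def revenue_personalized(wtps, margin=0.95):
    # Each consumer is charged 95% of their predicted WTP, so all buy.
    return sum(w * margin for w in wtps)

# Best achievable revenue with a single price vs. personalized prices.
best_uniform = max(revenue_uniform(p, wtp_estimates) for p in wtp_estimates)
print(best_uniform)                                  # 120.0
print(round(revenue_personalized(wtp_estimates), 2))  # 210.9
```

The gap between the two figures is the surplus the seller captures from consumers, which is precisely what makes personalized pricing attractive to firms and contentious for regulators.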

Consumers are not usually aware that advertising, information, prices, or contract terms have been personalized according to their profile. If a contract is refused, or offered only at unfavorable conditions, because of a score calculated by an algorithm, consumers are often unable to understand how that score was reached. The complexity, unpredictability, and semi-autonomous behavior of AI systems can also make effective enforcement of consumer legislation difficult, as a decision cannot be traced to a single actor and therefore cannot be checked for legal compliance.

Deep Dive on Transformative Technologies’ Regulation: Regulating AI

In the past two years, numerous countries have begun to establish robust governance frameworks for AI. The European Union has been a frontrunner in this area with its draft AI Act, and countries such as the United States, Canada, China, and the United Kingdom have followed suit. This trend stems from the adoption of risk-based approaches to AI regulation, where authorities aim to manage AI applications based on their potential risks to societal and human well-being. The Stanford AI Index reveals a significant increase in the number of bills containing the term “artificial intelligence” passed into law: in 2016, only one such bill was passed, but by 2022 the number had grown to 37. Moreover, a comprehensive analysis of parliamentary records on AI in 81 countries indicates that mentions of AI in global legislative proceedings have increased nearly 6.5 times since 2016.[2]

Spotlight on Generative AI systems

We are currently experiencing an unprecedented era of progress in generative AI, which refers to machine learning algorithms that can create new content like audio, code, images, text, simulations, and videos. These algorithms have recently gained attention for their ability to power chatbots such as ChatGPT, Bard, and Copilot, which use large language models (LLMs) to perform various functions, including research gathering, legal case file compilation, repetitive clerical task automation, and online search. The technology has the potential to dramatically increase efficiency and productivity by simplifying specific processes and decisions, such as streamlining physician medical note processing[3] or helping educators teach critical thinking skills.[4]

Natural language generation is a popular application of generative AI, and much of the recent attention has centered on GPT, short for Generative Pre-trained Transformer. Pre-training a language model and then fine-tuning it on a specific dataset is not a new concept. GPT is notable, however, for its use of the transformer architecture at large scale, which allows it to generate human-like text and has made it a popular choice in natural language processing. ChatGPT, for instance, is a chatbot that leverages advanced NLP and reinforcement learning to hold realistic conversations with people. It can generate articles, stories, poetry, and even computer code; it can also answer questions, engage in discussions, and, in certain instances, provide extensive replies to very specific queries. ChatGPT was released in November 2022 and acquired over one million users within a week.

However, the ongoing advancements and applications of generative AI have sparked important questions regarding its impact on the job market, its use of training data that can be protected by privacy and copyright rules, and the necessary government regulations.[5]

Many governments around the globe have started curtailing the use of generative AI. Due to data protection and privacy concerns, the Italian data protection regulator, the Garante, issued a temporary ban on ChatGPT, which was lifted after OpenAI started cooperating with the Garante, including by publishing a new information processing notice, expanding its privacy policy, and offering users an opt-out from data processing.[6] The Canadian Government has released a draft code of practice for generative AI, which is open for public comment before being enacted into law as part of the country’s Artificial Intelligence and Data Act.[7] The G7 launched the Hiroshima AI Process to coordinate discussions on generative AI risks.[8] In July 2023, US President Joe Biden announced voluntary commitments from large AI companies in support of safety, security, and trust.[9] On July 13, 2023, China implemented temporary measures to regulate the generative AI industry; service providers are now required to undergo security assessments and file their algorithms with the authorities.[10] The Beijing Municipal Health Authority proposed 41 new rules that would strictly prohibit the use of AI in various online healthcare activities, including automatically generating medical prescriptions.[11] The United States Federal Trade Commission (FTC) has launched a wide-ranging investigation into OpenAI, focused on allegations that OpenAI has violated consumer protection laws by putting personal data and reputations at risk.[12] The FTC’s Civil Investigative Demand has raised concerns that ChatGPT may produce false or disparaging statements about real individuals.[13] The agency has also requested information following a data privacy breach in which private user data was exposed in ChatGPT’s results.[14]

European Union (EU): The draft AI Act

In 2019, following the publication of the Ethics Guidelines for Trustworthy AI[15], the European Commission adopted a three-pronged approach to regulating AI and addressing AI-related risks. In addition to the draft AI Act, the approach includes new and amended civil liability rules[16], which act in conjunction with other current and planned data-related policies, such as the General Data Protection Regulation (GDPR)[17], the Digital Services Act[18], the proposed Data Act[19], and the proposed Cyber Resilience Act.

The draft EU AI Act sets horizontal standards for developing, commercializing, and using AI-powered products, services, and systems within the EU. It provides fundamental AI risk-based guidelines applicable across all industries and includes a “product safety framework” with four risk categories, specifying market entry rules and certification for High-Risk AI Systems through a mandatory CE-marking process. This compliance regime also covers datasets used for machine learning training, testing, and validation to ensure fair outcomes.

The draft EU AI Act employs a risk-based strategy with multiple enforcement mechanisms. Low-risk AI applications would be subject to a more lenient regulatory framework, while those with unacceptable risks would be banned. As risk increases, more stringent regulations apply. These vary from lighter external certification requirements throughout the application’s life cycle to non-binding self-regulatory soft law impact evaluations combined with codes of conduct.

The regulatory framework defines four levels of risk in AI:

  1. Unacceptable risk. AI systems harmful to people’s rights, safety, and livelihoods shall be outlawed, including social scoring systems used by governments and voice-activated toys that promote risky behavior.[20]
  2. High risk. The initial proposal (2021) included: (i) critical infrastructure (e.g., transport) that could put the life and health of citizens at risk; (ii) educational or vocational training that may determine access to education and the professional course of someone’s life (e.g., scoring of exams); (iii) safety components of products (e.g., AI applications in robot-assisted surgery); (iv) employment, management of workers, and access to self-employment (e.g., resume-sorting services for recruitment purposes); (v) essential private and public services (e.g., credit scoring denying citizens the opportunity to obtain a loan); (vi) law enforcement activities interfering with human rights (e.g., evaluation of the admissibility of evidence); (vii) migration, asylum, and border control management (e.g., verification of the authenticity of travel documents); (viii) administration of justice and democratic processes (e.g., applying the law to a concrete set of facts).[21]

The December 2022 proposal removed deepfake detection by law enforcement, crime analytics, and verification of the authenticity of travel documents from the list of high-risk AI systems. The latest changes clarify that the scope of the draft Act does not encompass AI for national security, defense, and military purposes.

High-risk AI systems are subject to strict obligations before they can be put on the market (Figure 2).

Figure 2. Obligations on high-risk AI systems as defined in draft EU AI Act.

Source: Based on European Commission (2021).

All remote biometric identification technologies are subject to tight regulations and are regarded as high-risk. In general, it is forbidden to employ remote biometric identification for law enforcement in areas open to the public. Only a few situations can be allowed as exceptions, such as when it is imperative to find a missing child, stop a specific and imminent terrorist threat, or detect, identify, or prosecute a perpetrator or suspect of a serious crime. Such use is subject to appropriate time, location, and database search limitations, as well as approval by a judicial or other impartial body.[22]

  3. Limited risk. AI systems with limited risk must adhere to specific disclosure requirements. Users should be aware that they are engaging with a machine when using AI systems like chatbots, so they can decide for themselves whether to proceed.[23]
  4. Minimal or no risk. Applications like spam filters or AI-enabled video games fall into this category.[24]

Once a high-risk AI system is put on the market, its users would ensure human control and monitoring, while providers maintain a post-market monitoring structure. Authorities oversee market monitoring, and serious incidents and malfunctions must be reported by both providers and users.[25]

Other countries, such as the United Kingdom, Canada, the United States and China, have developed distinct frameworks for regulating and governing AI.

United Kingdom. The government introduced its cross-sector plan for AI regulation on July 18, 2022, which features a “pro-innovation” framework supported by six main principles addressing AI’s key risks. These non-statutory principles apply to all UK sectors and are supplemented by ‘context-specific’ regulatory guidance and voluntary standards developed by UK regulators. This approach differs from the EU AI Act, which offers a more prescriptive, horizontal approach to AI regulation across industries. Instead, the United Kingdom is moving towards a light-touch, risk-based, context-specific approach focused on proportionality, with practical requirements determined by the industry and dependent on the AI system’s deployment context.[26] In March 2023, the United Kingdom Department for Science, Innovation and Technology published a White Paper, “A pro-innovation approach to AI regulation,” for consultation.[27]

Canada. The Directive on Automated Decision-Making requires most federal agencies to complete an Algorithmic Impact Assessment for any automated decision system (ADS) used to recommend or make administrative decisions about clients. As a result, through public-private collaboration, the Canadian government has initiated the development of a model Algorithmic Impact Assessment tool for agencies to reference or use in compliance with the Directive on Automated Decision-Making.[28] Moreover, the Directive is reviewed every six months to keep up to date with new technological developments. This is an excellent example of an innovative and adaptive approach to AI governance.[29]

United States. Congress passed the National AI Initiative Act in 2021, establishing the National AI Initiative as a framework for coordinating and strengthening AI research, development, demonstration, and education initiatives across all US departments and agencies. Several administrative agencies, including the Federal Trade Commission (FTC), Department of Agriculture, Department of Defense, Department of Education, and Department of Health and Human Services, are involved in implementing a national AI strategy through new offices and task forces established by the Act.[30]

The Algorithmic Accountability Act of 2022, introduced in the US Congress in February 2022, is pending national legislation. The proposed act would direct the FTC to develop regulations requiring certain “covered entities” to conduct impact assessments before implementing automated decision-making processes, specifically covering AI and machine learning-based technologies.[31] The FTC has started a regulatory process to address AI discrimination, fraud, and related data misuse, while other agencies have also initiated actions targeting AI practices.[32] This list of policy actions is beginning to resemble the EU’s position on “high-risk” AI.[33]

The Office of Science and Technology Policy (OSTP) of the White House produced a Blueprint for an AI Bill of Rights (Blueprint) in October 2022. This document offers a nonbinding roadmap for the appropriate use of AI. The Blueprint outlines five fundamental principles to guide and govern the effective development and implementation of AI systems, paying particular attention to the unintended consequences of civil and human rights violations: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback.

China. AI regulation has advanced beyond the proposal stage, with the Cyberspace Administration of China (CAC) leading the way in establishing rules controlling specific AI applications. The CAC’s approach is the most developed, rule-based, and focused on AI’s role in information dissemination.[35] China’s AI legislation, implemented in March 2022, restricts corporations’ use of algorithms in online recommendation systems, requiring services to be ethical, accountable, transparent, and “disseminate positive energy.” This legislation mandates that companies warn consumers when an AI algorithm is used to select displayed information and provide an opt-out option. Moreover, the regulation prohibits algorithms from using personal data to offer different rates to consumers.[36]

The AI conundrum: Key challenges in regulating AI

AI presents one of the most difficult challenges to traditional regulation. Three decades ago, one could think of software as being programmed; in an AI environment, software is no longer programmed but trained. Today, we are dealing with networks of information that often have surprising capacities. AI itself is not one technology or even one singular development. It is a bundle of technologies whose decision-making mode is often not fully understood even by AI developers.[32]

It is very difficult to relate something as technical as AI to robust regulation. On the one hand, most regulatory systems require transparency and predictability; on the other, most laypeople do not understand how AI works. The more advanced certain types of AI become, the more they become “black boxes”, where the creator of the AI system does not really know the basis on which the AI is making its decisions. Accountability, foreseeability, compliance, and security are all called into question in this regard.[33]

The “black box” problem

AI algorithms make strategic decisions, from approving loans to determining diabetes risk. Often these algorithms are closely held by the organizations that created them or are so complex that even their creators cannot explain how they work. This is AI’s “black box” — the inability to see what is inside an algorithm.

A study conducted by the AI Now Institute at NYU states that many automated decision-making systems are opaque to citizens.[34] Regulators have already started enacting algorithmic accountability laws that try to curtail the use of automated decision systems by public agencies. For instance, in 2018 New York City enacted a local law in relation to automated decision systems used by agencies.[35] The law created a task force to recommend criteria for identifying automated decisions used by city agencies and a procedure for determining whether those decisions disproportionately impact protected groups. However, the law only permits making technical information about a system publicly available “where appropriate” and states there is no requirement to disclose any “proprietary information”.[36]

Many algorithms are also not made public because of nondisclosure agreements with the companies that developed them. The EU GDPR requires companies to be able to explain how algorithms that use customers’ personal data work and make decisions — the right to explanation. However, since this right appears only in Recital 71 of the GDPR, many scholars point out that it is not legally binding.[37] Article 22 of the GDPR states that EU citizens can request that decisions based on automated processing that concern or significantly affect them and are based on their personal data be made by natural persons, not only by computers. Data subjects also have the right to express their point of view and to contest the decision.[38]

Another illustrative example of AI’s black box in decision making is the case of using automated systems in recruitment and selection. Some companies have used a hiring technology that analyzes job candidates’ facial expressions and voice to advise hiring managers.[39] It has been feared that using AI in hiring will re-create societal biases. Regulators have already started tackling these legal conundrums.[40] For instance, the new Illinois Artificial Intelligence Video Interview Act[41] aims to help job candidates understand how these hiring tools operate.

Algorithmic bias

Many algorithms have been found to have inherent biases. AI systems can reinforce what they have been taught from data and can amplify risks, such as racial or gender bias. Even a well-designed algorithm must make decisions based on inputs from a flawed and inconsistent reality. Algorithms can also make judgmental errors when faced with unfamiliar scenarios, a phenomenon known as artificial stupidity. Many such systems are “black boxes”: the reasons for their decisions are not easily accessed or understood by humans, and therefore difficult to question or probe. Private commercial developers generally refuse to make their code available for scrutiny because the software is considered proprietary intellectual property, which is another form of non-transparency.
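As a minimal illustration of how such bias can be measured after the fact, the sketch below computes a disparate impact ratio on toy screening outcomes. The data, group labels, and the four-fifths rule-of-thumb threshold are illustrative assumptions, not drawn from any system discussed here.

```python
# Minimal sketch: measuring disparate impact in an automated decision system.
# All data below are invented; 0.8 echoes the US "four-fifths" rule of thumb.

def selection_rate(decisions, groups, label):
    """Fraction of positive decisions (1) received by members of one group."""
    outcomes = [d for d, g in zip(decisions, groups) if g == label]
    return sum(outcomes) / len(outcomes)

def disparate_impact(decisions, groups, protected, reference):
    """Ratio of the protected group's selection rate to the reference group's."""
    return (selection_rate(decisions, groups, protected)
            / selection_rate(decisions, groups, reference))

# Toy outcomes of a screening algorithm: 1 = shortlisted, 0 = rejected
decisions = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
groups    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

ratio = disparate_impact(decisions, groups, protected="b", reference="a")
print(round(ratio, 2))  # 0.25 -- far below the 0.8 rule-of-thumb threshold
```

Even this crude metric shows why auditability matters: the disparity is invisible from any single decision and only appears when outcomes are aggregated by group.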

Facial recognition algorithms have been shown to be biased when detecting people’s gender. In the US, several cities and communities, including San Francisco, have banned their police departments from using facial recognition.[42]

Legitimate news and information are sometimes blocked, illustrating the weaknesses of AI in determining what is appropriate. Such examples have fueled a growing argument that IT firms posting news stories should be subject to regulations similar to those that media firms face.

Deepfakes[43], computer-generated or highly manipulated videos and presentations, present another significant problem. Some governments have started regulating them. For instance, China has made it a criminal offense to publish deepfake videos created with AI or virtual reality. From January 2020, any deepfake video or audio recording must be clearly labeled as such; otherwise, content providers, which are expected to police the system, will be prosecuted together with offending users.[44]

Principles of regulating transformative technologies

Innovative and Adaptive

Traditional regulatory models are time-consuming and rigid. It takes months and sometimes years to draft new regulations in response to market developments and technology push. This needs to change. Modern regulatory models are innovative and adaptive: they rely on trial and error and the co-design of regulation and standards, and they have shorter feedback loops. Regulators can seek feedback using a number of “soft-law” instruments such as policy labs, regulatory sandboxes, crowdsourcing, codes of conduct, best-practice guidance, and self-regulation. Soft-law instruments accommodate changes in technology and business models and allow regulators to address issues without stifling innovation.[47]

For instance, Singapore has adopted progressive regulations for testing self-driving vehicles due to its high population density and limited space to expand. In 2017, Singapore modified its road traffic law to accommodate automated vehicle technologies and their disruptive character.[48] To ensure that regulations remain agile, the rules will remain in effect for five years, and the government has the option to revise them sooner. Autonomous vehicle testing falls under the purview of a single agency, the Land Transport Authority, which actively partners with research institutions and the private sector to facilitate pilots of autonomous vehicles.

Regulatory Sandboxes. A regulatory sandbox is a safe space for testing innovative products and services without having to comply with the applicable set of regulations. The main aim of regulators that establish sandboxes is to foster innovation by lowering regulatory barriers and costs for testing disruptive innovative technologies while ensuring that consumers will not be negatively affected. The concept of regulatory sandboxes, and any other form of collaborative prototyping environment, builds on the tradition of open-source software development, the use of open standards and open innovation.[49]

Regulatory sandboxes are created by regulators around the globe. In 2018, Japan introduced a regulatory sandbox where foreign and domestic firms and organizations can demonstrate and experiment with new technologies such as blockchain, AI, and IoT in financial services, healthcare, and transportation. These sandbox experiments can also occur in virtual spaces, rather than in limited geographical regions like Japan’s National Strategic Special Zones. Sandboxes are a means through which new businesses are assessed, after which the government can introduce deregulation measures.[50] Singapore announced the launch of two sandboxes on July 24, 2023, which will provide a platform for government agencies and businesses to develop and test generative AI applications using Google Cloud’s toolsets.[37]

An illustrative example of an innovative regulatory sandbox is Michigan’s MCity, an autonomous transportation regulatory testbed where large-scale deployment would be dangerous but controlled experiments can provide useful insights for companies and regulators.[51]

Photo by Jared Brashier on Unsplash

Public agencies also take innovative approaches to regulating drones. For instance, the US is piloting a sandbox approach for drones. Beginning in 2017, the Unmanned Aircraft System (UAS) Integration Pilot Program has brought state, local, and tribal governments together with private sector entities, such as UAS operators or manufacturers, to accelerate safe drone integration. The Federal Aviation Administration has chosen 10 public-private partnerships to test drones. The pilot programs test the safe operation of drones in various conditions that are currently forbidden, such as flying at night or beyond the line of sight of operators, allowing companies to test applications including medical equipment delivery, monitoring oil pipelines, and scanning the perimeter of an airport.[52]

Singapore established a 5-year regulatory sandbox for self-driving cars in 2017, effectively turning the whole city-state into a test zone for the technology.[53]

The concept of regulatory sandboxes has also raised questions about potential market distortions and unfair competition, should regulators become too close to, and protective of, sandbox participants.

Policy Labs. A policy lab is a group of actors that have various competencies in developing a regulatory framework. They deploy a set of user-centric methods and competencies to test, experiment, and learn to develop new policy solutions.[54]

In the US, some states and local governments have already established policy labs to partner with academia and use their administrative data to evaluate and improve programs and policies while safeguarding personal privacy. By providing the technical infrastructure and governance mechanisms that help governments gain access to analytical talent, these labs are helping to convert data into insights and drive more evidence-based policymaking and service delivery.[55]


Outcome-focused

Outcome-focused regulation is a set of rules that prescribe achieving specific, desirable, and measurable results, unlike traditional prescriptive and input-based regulatory models. This offers the private sector greater flexibility in choosing its way of complying with the law.[56]

Outcome-focused regulations stipulate positive outcomes that regulators want to encourage. For instance, drone regulation can be prescriptive and focus on inputs: “One must have a license to fly a drone with more than xx kilowatts of power (not very helpful)”, or it can be outcome-based and focus on effects: “One cannot fly a drone higher than 400 feet, or anywhere in a controlled airspace (better)”.[57]
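The outcome-based drone rule quoted above can be encoded directly as a check on effects rather than inputs. The sketch below is hypothetical; the field and function names are my own, not taken from any real regulation’s schema.

```python
# Hypothetical sketch: encoding the outcome-based drone rule quoted above
# ("no higher than 400 feet, not in controlled airspace"). An input-based
# rule would instead inspect attributes like motor power or license class.

from dataclasses import dataclass

MAX_ALTITUDE_FT = 400  # the measurable outcome the regulator cares about

@dataclass
class Flight:
    altitude_ft: float
    in_controlled_airspace: bool

def compliant(flight: Flight) -> bool:
    """Outcome-based: judged purely on the flight's effects."""
    return (flight.altitude_ft <= MAX_ALTITUDE_FT
            and not flight.in_controlled_airspace)

print(compliant(Flight(altitude_ft=350, in_controlled_airspace=False)))  # True
print(compliant(Flight(altitude_ft=350, in_controlled_airspace=True)))   # False
```

Note that nothing in the check depends on the drone’s power or design: any future drone technology is automatically covered as long as its flights stay within the prescribed outcomes.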

The real benefits of transformative technologies lie in their ability to interconnect and converge. For instance, blockchains can be used to secure data generated through IoT-enabled devices, or machine learning models can amplify human bankers’ abilities. Innovators need enough space to innovate for such interconnections to happen, and outcome-focused regulation can provide this.[58]


Evidence-based

Evidence-based regulation[59] is a modern regulatory model that is data-driven and risk-based. It is dynamic and based on real-time data flows between the private sector and regulators. The data can be compared with regulations to decide whether a firm is in compliance. Firms in compliance would be listed as safe; if not, the data systems could produce a set of action items to meet the standard.

The first capital city in the world to regulate ridesharing was Canberra in Australia. Before the service had even begun, Uber’s signaling of its intention to enter the local market prompted the local government to take a systematic and evidence-based approach to reforming the ridesharing sector. The ridesharing business model differs from the traditional taxi industry in terms of risk. The additional information available to both drivers and passengers through the booking service, compared with anonymous rank and hail work by taxis, significantly reduces the risk involved in such transactions. Additionally, a reputation rating system provides an incentive for drivers and customers alike to behave respectfully. By integrating a booking system and payments, payment risks such as cash handling and non-payment have been minimized.

The City of Canberra designed a new regulatory framework adaptable to new technologies by considering the approach to risk from different business models. It further anticipated the emergence of novel business models, such as fleets of automated vehicles providing on-demand transport. The system does not regulate individual businesses; it provides a regulatory framework that promotes fair treatment of different business models, making the framework more flexible. The government formally monitors the outcomes of the new regulatory framework through the collection of qualitative and quantitative data on industry changes, including customer outcomes and impacts on various stakeholders.[60] This evaluation is intended to show whether the industry is changing in line with the modeled forecasts and to determine whether further actions are required.[61]

Open data has also been used by regulators to complement their own data. In the case of digital health software, a regulator could monitor products through publicly available data on software bugs and error reports, customer feedback, software updates, app store information, and social media.

Once the data flows are integrated, this part of the regulatory process can be automated: enforcement can become dynamic, and reviewing and monitoring can be built into the system. For example, the City of Boston inspects every restaurant to monitor and improve food safety and public health. These health inspections are usually random, which can mean time is spent at restaurants that follow the rules carefully while opportunities are missed to improve health and hygiene at restaurants with food safety issues. In Boston, the search for health code violations is narrowed down using a winning algorithm that draws on data generated from social media. The algorithm detects words, phrases, ratings, and patterns that allow it to predict violations, helping public health inspectors perform their duties more efficiently. It could allow the City of Boston to catch the same number of health violations with 40 percent fewer inspections, simply by better targeting city resources at dirty-kitchen hotspots. As of 2017, these winning algorithms have been employed by the City of Boston, finding 25 percent more health violations and surfacing around 60 percent of critical violations earlier than before. The city has been able to catch public health risks sooner and gain a smarter view of how to utilize scarce public resources by taking advantage of past data and combining it with new sources of information.[62]
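A toy version of such social-media-driven targeting might look like the following. The keyword weights and review data are invented for illustration and bear no relation to Boston’s actual winning algorithm, which was a trained statistical model rather than a keyword list.

```python
# Illustrative sketch (not Boston's actual model): ranking restaurants for
# inspection by scoring social-media text for hygiene-related terms.
# Both the term weights and the reviews are invented.

RISK_TERMS = {"dirty": 3, "sick": 3, "roach": 5, "smell": 2, "undercooked": 4}

def risk_score(reviews):
    """Sum the weights of every risk term appearing in a restaurant's reviews."""
    score = 0
    for text in reviews:
        for term, weight in RISK_TERMS.items():
            if term in text.lower():
                score += weight
    return score

restaurants = {
    "A": ["Great pasta, friendly staff."],
    "B": ["Felt sick afterwards.", "Tables were dirty and it had a weird smell."],
}

# Inspect the highest-scoring restaurants first instead of choosing at random.
ranked = sorted(restaurants, key=lambda r: risk_score(restaurants[r]), reverse=True)
print(ranked)  # ['B', 'A'] -- B is prioritized for inspection
```

The point is the workflow, not the scoring: replacing random inspection order with a data-derived priority queue is what lets the same inspection budget surface more violations.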

A Pre-Cert pilot program for digital health developers that demonstrate a culture of quality and organizational excellence based on objective criteria (e.g., software design, development, and testing) has been created by the US Food and Drug Administration (FDA). This program[63] has been envisioned as a voluntary pathway embodying a tailor-made regulatory model that assesses the safety and effectiveness of software technologies without inhibiting patient access to them, in stark contrast to the current regulatory paradigm. Because software as a medical device allows a product to adapt and respond to glitches, adverse events, and other safety concerns quickly, the FDA has been working to establish a regulatory framework that is equally responsive when issues arise, helping consumers maintain access to safe and effective products. The idea is to allow the FDA to accelerate time to market for lower-risk health products and focus its resources on those posing greater potential risks to patients. Pre-certified developers could market lower-risk devices without an additional FDA review, or with a simpler review, as the FDA continuously monitors the performance of these companies with real-world data.


Collaborative

This ecosystem approach, in which multiple regulators from different nations collaborate with one another and with those being regulated[64], can encourage innovation while protecting consumers from potential fraud or safety concerns. Private standard-setting bodies and self-regulatory organizations also play key roles in facilitating collaboration between innovators and regulators.

In recent years, managing AI through a more cohesive global response has become increasingly important, as AI governance concerns are inherently international. AI governance systems are gradually becoming more collaborative, relying on public-private partnerships. There are many stakeholders involved in the governance of AI systems: government bodies (for instance, telecom regulators, data protection authorities, and cybersecurity agencies), private sector stakeholders such as the IEEE, civil society (examples are Algorithm Watch[38] or Derechos Digitales[39]) and international organizations such as the ITU, the World Bank Group, UNESCO, GPAI, Globalpolicy.AI[40] and OECD.

Given AI’s versatility, governance approaches can no longer be designed in silos that focus solely on individual sectors like health, education, or agriculture. Successful AI governance depends on multi-stakeholder collaboration, ensuring the appropriate integration of AI solutions in the context of developing countries. Public-private partnerships, localization, and cross-disciplinary cooperation among key stakeholders are essential for the successful development and deployment of AI.

One way forward is being developed by the Asia-Pacific Economic Cooperation (APEC) forum through the Cross-Border Privacy Rules (CBPR) system, which fosters trust and facilitates data flows amongst participants. A key benefit of the APEC regime is that it enables personal data to flow freely even in the absence of two governments having formally agreed to recognize each other’s privacy laws as equivalent. Instead, APEC relies on businesses to ensure that data collected and then sent to third parties, either domestically or overseas, continues to be protected consistent with APEC privacy principles. The APEC CBPR regime also requires independent entities that can monitor and hold businesses accountable for privacy breaches.[65]

For those countries party to the Comprehensive and Progressive Trans-Pacific Partnership (CPTPP), the commitments on privacy in the e-commerce chapter provide another framework for integrating privacy, trade, and cross-border data flows.[66]

The example of the fAIr LAC Initiative

The Inter-American Development Bank (IDB), in collaboration with partners and strategic allies, leads the fAIr LAC initiative, through which it seeks to promote the responsible adoption of AI and decision-support systems. Initiatives like fAIr LAC are important for turning challenges into opportunities. fAIr LAC works with the private and public sectors, civil society, and academia to promote the responsible use of AI to improve the delivery of social services and create development opportunities that reduce growing social inequalities. Its pilot projects and experiments create models for ethical evaluation, as well as other tools for governments, entrepreneurs, and civil society to deepen their knowledge of the subject, and provide guidelines and frameworks for the responsible use of AI. These resources also consider how to influence policy and entrepreneurial ecosystems in Latin America and the Caribbean (LAC) countries.


Key considerations for regulators

By working with key stakeholders from the private sector, the not-for-profit sector, and academia, regulators can ensure that they co-create an environment where transformative technologies are built with consumer safety, privacy, and security in mind and where digital products and services are as inclusive and affordable as they are innovative. Transformative technologies are therefore a challenge that regulators can embrace, if they are ready to adapt. This requires knowledge sharing and cross-sector collaboration between key stakeholders.

Regulators should create an enabling environment (governance institutions, policies, and laws) for an effective roll-out of transformative technologies. Appropriate policies and regulatory measures include the establishment of data protection frameworks and sectoral regulatory frameworks, and the promotion and adoption of international standards and international cooperation. Regulators should also ensure that adequate levels of privacy, security, and data handling are in place, for example by regulating against the use of data without consent and by reducing the risks of identification of individuals through data, of data selection bias and the resulting discrimination by AI models, and of asymmetry in data aggregation. This also includes addressing safety and security challenges for complex AI systems, which is critical to fostering trust in AI and big data for development.

In particular, regulators should consider the following[67]:

Level of risk

It is also crucial to determine the requirement for human oversight based on the use case, its sensitivity, the complexity and opacity of the algorithm, and the potential impact on human rights – whether this implies the human is “in the loop” (HITL), “on the loop” (HOTL), or “in command” (HIC).[43] The framework developed by the Government of Singapore can be helpful in this regard (Figure 3).

Figure 3. Level of human involvement in AI deployment



Source: IMDA & PDPC (2020).
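The kind of decision logic behind Figure 3 can be sketched as a simple severity/probability matrix, loosely following the framing in Singapore’s model AI governance framework. The two-level inputs and the exact mapping below are illustrative assumptions, not the framework’s official thresholds.

```python
# Hedged sketch: choosing a level of human involvement in AI deployment,
# loosely following the severity/probability matrix in Singapore's model
# AI governance framework. The mapping is an illustrative assumption.

def human_involvement(severity_of_harm: str, probability_of_harm: str) -> str:
    """Both inputs are "low" or "high"; returns a suggested oversight level."""
    if severity_of_harm == "high" and probability_of_harm == "high":
        return "human-in-the-loop"      # a human approves every decision
    if severity_of_harm == "low" and probability_of_harm == "low":
        return "human-out-of-the-loop"  # the system may act autonomously
    return "human-over-the-loop"        # a human monitors and can intervene

print(human_involvement("high", "high"))  # human-in-the-loop
print(human_involvement("low", "low"))    # human-out-of-the-loop
print(human_involvement("high", "low"))   # human-over-the-loop
```

In practice the assessment would weigh many more factors (reversibility of harm, affected rights, opacity of the model), but the matrix conveys the core idea: oversight intensity should scale with the risk of the deployment.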


  1. This article is adapted from:  ITU (2021) Emerging technology trends: Artificial intelligence and big data for development 4.0, available here
  2. ITU (2021) Emerging technology trends: Artificial intelligence and big data for development 4.0. 
  29. Ibid. 
  67. ITU (2021) Emerging technology trends: Artificial intelligence and big data for development 4.0, available here 


  1. Council of Europe (2017), Study on the human rights dimensions of automated data processing techniques (in particular algorithms) and possible regulatory implications, available here
  2. Stanford University, Artificial Intelligence Index Report 2023, available here
  3. HealthcareDive “Hurtling into the future’: The potential and thorny ethics of generative AI in healthcare”, available here
  4. Rudra (2023) ChatGPT in Education: The Pros, Cons and Unknowns of Generative AI, available here
  5. WEF (2023) What is generative AI? An AI explains, available here.
  6. Italian Garante (2023) ChatGPT: OpenAI riapre la piattaforma in Italia garantendo più trasparenza e più diritti a utenti e non utenti europei, available here
  7. Canadian Guardrails for Generative AI – Code of Practice (2023), available here
  9. Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI, available here
  10. Reuters (2023) China says generative AI rules to apply only to products for the public, available here
  11. Beijing to limit use of generative AI in online healthcare activities, including medical diagnosis, amid growing interest in ChatGPT-like services, available here
  12. Reuters (2023) US FTC opens investigation into OpenAI over misleading statements, available here
  13. FTC file 232-3044, available here
  14. The New York Times, When AI chatbots hallucinate, available here
  15. Ethics Guidelines for Trustworthy AI, available here
  16. European Commission (2022), New liability rules on products and AI to protect consumers and foster innovation.
  17. European Commission (2018), General Data Protection Regulation (GDPR). available here
  18. Digital Services Act (2022).
  19. Data Act (2022).
  20. European Commission (2021), available here
  21. European Commission (2021), available here
  22. Ibid.
  23. Ibid.
  24. Ibid.
  25. Ibid.
  26. UK Government (2022), UK sets out proposals for new AI rulebook to unleash innovation and boost public trust in the technology, available here
  27. UK Government (2023) White Paper “A pro-innovation approach to AI regulation”, available here
  28. Treasury Board of Canada Secretariat (2019) Directive on automated decision-making, available here
  29. CIOs and technical experts deliberate on government of Canada’s directive on automated decision making (2022), available here
  30. Foley & Lardner LLP (2022), AI regulation: Where China, EU, and US stand today, available here
  31. Ibid.
  32. National Institute of Standards and Technology (2021) Artificial Intelligence Risk Management Framework. Federal Register, available here
  33. West, D. M. (2022) The EU and US are starting to align on AI regulation. Brookings Institution, available here
  34. AI Bill of Rights, available here
  35. Carnegie Endowment for International Peace (2022) China’s new AI governance initiatives shouldn’t be ignored, available here
  36. Foley & Lardner LLP (2022), AI regulation: Where China, EU, and US stand today, available here
  37. Yu (2023) Singapore looks for generative AI use cases with sandbox options, available here
  40. Globalpolicy.AI is an online platform developed through ongoing cooperation between intergovernmental organizations with complementary mandates on AI.
  41. Stankovich, M. (2021), Regulating AI and big data deployment in healthcare, available here
  43. According to the European Commission High-Level Expert Group on AI: “HITL refers to the capability for human intervention in every decision cycle of the system, which in many cases is neither possible nor desirable. HOTL refers to the capability for human intervention during the design cycle of the system and monitoring the system’s operation. HIC refers to the capability to oversee the overall activity of the AI system (including its broader economic, societal, legal and ethical impacts) and the ability to decide when and how to use the system in any particular situation.” (European Commission 2021). See also Personal Data Protection Commission, Singapore (2020).
Last updated on: 11.09.2023