Digital Regulation Platform

Data protection and trust

16.12.2020

Introduction

Data are sometimes described as the “oil of the digital economy”[1] while their use in the digital economy is sometimes referred to as “surveillance capitalism.”[2] While the former has relatively benign connotations, the latter directly provokes concerns about the use of personal data.[3] This chapter focuses on regulatory aspects of data with an emphasis on personal data.

The digital transformation process has necessarily focused attention on the adequacy of, and need for, legal and regulatory frameworks that govern information products, services, and platforms. Intellectual property laws, especially copyright, have had to be revised to reflect the increasing value of intangible assets. Criminal laws and procedures have needed to be amended to address cybercrime, deterring new forms of harmful conduct and enabling effective investigations. Likewise, as personal data has become an increasingly valuable strategic commercial asset, there has been a demand for rules that protect such data and give individuals the ability to control the collection, processing, use, and abuse of their data. Having a data protection regime in place is widely recognized as a key factor in facilitating digital transformation (African Union 2020). Control is seen as critical in engendering trust among data subjects about operating in an online environment, whether as citizens, consumers, or friends, which itself encourages take-up, participation, and consumption (ITU 2018).

This chapter examines the nature of data protection regimes, focusing particularly on their regulatory aspects – a feature that results in interesting similarities with the telecommunication sector. It examines the extent to which emerging technologies and services should, and could, be subject to such regimes, as well as the controls over the cross-border flow of personal data and the resultant trade implications. Data protection and privacy concerns particularly overlap when considering the need for special rules to govern our communication activities. The complex intersection between data protection and information security is also examined. Finally, some key considerations for regulators are offered.

Data protection regimes

While data protection is clearly related to and overlaps with the right to privacy, it also contains some unique characteristics that distinguish it from traditional notions of privacy law. First, data protection is firmly embedded in the digital economy – the processing of data in its many forms by information and communications technologies (ICTs) – hence its relevance to this handbook. Privacy laws, meanwhile, extend into areas of our lives that may be far removed from technology. Second, data protection regimes are generally applicable to all data relating to a person, referred to as “personal data,” whether that data can be said to be of a private or public nature. So, what a person posts on their Facebook page is as deserving of protection under the regime as that which is held in a password-protected file, even if not to the same level. Third, personal data can generally only be processed on some legitimate ground, such as consent, which places the onus on the person possessing the personal data to justify having control over it; traditional privacy law, by contrast, has focused on controlling instances where a person’s private life is interfered with, resulting in a harm, whether pecuniary or non-pecuniary. Fourth, there is generally a need for a supervisory authority to “control” compliance with data protection rules. It is this feature that establishes data protection as a regulatory regime, one that goes well beyond our traditional notion of a privacy right.

Data protection as a regulatory concept first appeared in the Council of Europe’s 1981 Convention on data protection (commonly known as Convention 108).[4] While grounded in the right to privacy under Article 8 of the European Convention on Human Rights (ECHR), it was exclusively concerned with the “automatic processing of personal data.” The focus shifted to the European Union (EU) in 1995 with the adoption of the Data Protection Directive.[5] This quickly became the leading instrument against which most other laws and initiatives were measured. In May 2018, the Directive was replaced by the General Data Protection Regulation (GDPR), which is generally recognized as the leading measure in the field.

While data protection emerged in Europe, data protection regimes have since been adopted widely around the world, with nearly 140 countries having some form of legal regime (Greenleaf and Cottier 2020), as well as numerous other regional instruments, including the Asia-Pacific Economic Cooperation (APEC) Privacy Framework[6] and the African Union Convention on Cyber Security and Personal Data Protection (2014).[7] However, despite such a multitude of instruments and laws, as noted by the International Telecommunication Union (ITU), there is “a trend towards adoption of laws along the ‘European’ lines.”[8] China is perhaps the most recent country to introduce draft legislation on data protection.[9]

While these various legal instruments differ considerably in scope and detail, the majority of data protection regimes are based around a common set of “data protection principles” with which regulatees are expected to comply when processing personal data. Broadly, these principles can be subdivided into two categories:

• Principles focusing on the quality of personal information and information systems, such as the need to ensure accurate data and secure systems; and

• Principles applicable to those whose data is being processed, such as fairness and transparency (Bygrave 2002).

Similar to telecommunications, the regulatory nature of data protection arises both from the nature of the obligations imposed upon regulatees and from the role of the supervisory authority in exercising powers of oversight and enforcement against non-compliant regulatees. Each national authority (and some regional bodies) issues various forms of opinion, guidance, and recommendation that, although not generally mandatory, comprise part of the regulatory framework for regulatees. In addition, there are numerous industry self-regulatory initiatives, such as codes of conduct, often at a sectoral level, which elaborate and supplement the legal rules.

Finally, data protection issues have been incorporated into standards, such as those of the International Organization for Standardization (ISO),[10] which can also come to form part of the regulatory regime, whether through express incorporation or becoming de facto best practice. Related to data protection codes of conduct and standards has been the emergence of certification regimes that enable organizations to obtain external review and verification that their practices and procedures meet the requisite standards, and publicize such compliance through the use of seals, marks, or labels.[11]

Together this spectrum of measures, from “hard” to “soft” law, comprises a data protection regulatory regime. In terms of regulatees, attention is focused on the “controller,” either acting alone or jointly with others, as the person that determines the purpose and the means of the processing. In most laws, there is a second category of regulatee, the “processor,” who processes the personal data on behalf of the controller. The direct statutory obligations placed upon a processor are generally less onerous than those imposed on the controller, such as compliance with the principles, although there is often a requirement for a contract governing the relationship between the controller and processor, which may redistribute the responsibilities of the respective parties. Determining the role of a person, as controller and/or processor, and therefore their respective regulatory obligations, can itself be a difficult and contentious issue. This is especially pertinent given the increasing complexity of digital supply chains and markets, such as in the Internet of Things (IoT), which blends products, services, and software.

Regulatory authorities

As noted already, one pillar that distinguishes data protection as a regulatory regime rather than simply a statutory framework is the establishment of an authority, or appointment of a commissioner, to oversee compliance and enforce against infringers. Indeed, under the Charter of Fundamental Rights of the European Union, the need for an authority is enshrined within the right itself: “Compliance with these rules shall be subject to control by an independent authority” (Art. 8(3)). While some international instruments did not initially acknowledge the need for an authority, such a requirement has been inserted into subsequent revisions.[12]

Data protection regimes can therefore be divided into substantive and procedural components. The former comprises the obligations imposed on regulatees and the rights granted to individual data subjects, while the latter concerns the nature and powers of the authority. When comparing national regimes, both components should be considered of equal importance and, indeed, may have regulatory consequences. Under EU law, for example, when the European Commission assesses whether a third country ensures “an adequate level of protection” to enable the transfer of personal data, the suitability of the procedural elements is a critical part of the analysis.[13]

In terms of nature and powers, there are clear parallels between national telecommunication and data protection authorities. The authority should be independent from regulatees, which will generally include the government and public bodies. While an incumbent operator may or may not be owned, in whole or in part, by the government, the government will certainly be a major collector and processor of personal data. However, in many countries, data protection laws are either limited in application to the private sector or, where public bodies are included, the public sector is subject to exceptions not available to private sector controllers. Achieving effective independence for regulatory authorities can be problematic, with the need for adequate financial resources, capacity, and expertise to be ensured. In terms of powers, data protection authorities will generally be granted both ex-ante powers to intervene, such as authorizing certain activities, and ex-post investigative powers, such as a right to request the disclosure of information. As a public body itself, the authority’s exercise of these powers enters the realm of administrative law, with the expectation that decision-making and other types of conduct are carried out properly.

The role and importance of a data protection authority can create issues for countries, especially where the notion and experience of independent regulation is less well established. However, the lack of an effective and independent regulator may be viewed by data protection regulators in other countries as a reason to prohibit or restrict the transfer of personal data into that country, to protect data subjects. To address this issue, there may be an argument for expanding the mandate of an existing national regulator, such as the telecommunications or ICT authority or the consumer authority, to cover data protection issues, rather than establishing a completely new body with the attendant issues of resourcing and building a culture of independence.

Box 5.1. The Global Privacy Assembly

The Global Privacy Assembly (GPA) is an international entity that brings together more than 130 data protection and privacy authorities, as well as observers from various international organizations and NGOs. It first met as an international conference in 1979. The GPA adopts ad hoc resolutions and communiqués, including a recent call for increased co-operation between data protection, consumer protection, and competition authorities. The GPA also operates a number of working groups, including one on International Enforcement Cooperation, which includes a consideration of legal solutions. As a forum for collaboration, co-operation, and the sharing of experiences and know-how, the GPA can provide support to national data protection authorities in less developed countries.

Source: https://globalprivacyassembly.org

Technologies and services

As noted earlier, one distinction between data protection and privacy is that the former is solely concerned with personal data processed by IT systems. To that extent, it cannot be said that data protection law is technology neutral, since it was computerization that gave rise to the concern for personal data in the 1960s and 1970s, and current regulatory frameworks are primarily focused on such processing activities. Processing is generally given a broad legislative definition to encompass the whole processing lifecycle, from data collection to eventual deletion or, in the alternative, anonymization, which prevents the data from identifying individual data subjects. Such a broad scope does extend the application of data protection regimes into the physical sphere; for example, governing the manual collection of data intended for subsequent processing.

At the same time, however, the data protection principles that form the core of national and international data protection regimes have been developed precisely to avoid being too prescriptive with respect to particular types of technologies and services, which are evolving rapidly. Principles-based regulation is designed to help avoid rules and regulations becoming technologically redundant, or worse, inhibiting innovation.

Although “technology neutral,” compliance with the data protection principles can directly impact the development of particular technologies and services by controllers, as well as indirectly those upstream in the supply chain, producing components and applications that form part of the deployed technologies and services. The data minimization principle, in particular, should be reflected in the design of applications and systems.

Box 5.2. Case study: COVID-19 tracing apps

Since the start of the COVID-19 pandemic, one of the biggest challenges has been to track people who develop symptoms, and anyone with whom they have been in close contact during the virus’s incubation period, to prevent them from infecting other people. To help achieve this, various “contact tracing” applications have been developed for people to use on their mobile phones. Such applications can differ in whether they collect data on a decentralized or centralized basis, as well as in the types of data collected, such as location or proximity data. Such design decisions result in trade-offs between the rights users may have over their data and countervailing public health considerations:

• A system developed by Apple and Google is based on the DP-3T Bluetooth decentralized protocol, meaning that all processing and storing of data takes place on a user’s device. The data is stored on the device for only 14 days and the system does not allow location data to be collected (a minimal sketch of this model follows the list).

• Some governments have developed systems using a centralized model, which gives medical professionals access to more data, helping them better trace people and providing data for ongoing research into the virus and its public health implications.
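
To make the decentralized model concrete, the following sketch illustrates, in simplified Python, how observed proximity tokens might be held on-device and purged after 14 days. The class, function names, and data structures are illustrative assumptions, loosely inspired by the DP-3T approach rather than drawn from any actual implementation.

```python
import os
import time

RETENTION_SECONDS = 14 * 24 * 60 * 60  # illustrative 14-day retention window


class OnDeviceTokenStore:
    """Hypothetical on-device store for ephemeral proximity tokens.

    In a decentralized design, observed tokens never leave the device;
    only anonymous tokens of confirmed cases are later downloaded and
    matched locally.
    """

    def __init__(self):
        self._observed = []  # list of (timestamp, token) pairs

    def record_contact(self, token: bytes) -> None:
        """Store a token broadcast by a nearby device; no location data."""
        self._observed.append((time.time(), token))

    def purge_expired(self) -> None:
        """Delete tokens older than the retention window (data minimization)."""
        cutoff = time.time() - RETENTION_SECONDS
        self._observed = [(ts, t) for ts, t in self._observed if ts >= cutoff]

    def match(self, infected_tokens: set) -> bool:
        """Check locally whether any observed token belongs to a confirmed case."""
        self.purge_expired()
        return any(t in infected_tokens for _, t in self._observed)


if __name__ == "__main__":
    store = OnDeviceTokenStore()
    nearby = os.urandom(16)       # token received over Bluetooth (simulated)
    store.record_contact(nearby)
    print(store.match({nearby}))  # True: user was near a confirmed case
```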

International and national laws recognize that in extraordinary circumstances, certain fundamental rights, including the right to data protection, may be restricted, on the condition that basic democratic principles and safeguards are ensured, and the restriction is legitimate, time limited, and not arbitrary.[14]

While the data protection principles are applicable to all technologies and services, that does not preclude jurisdictions from determining that particular technological or market developments deserve additional rules to address manifest public concerns and public interest objectives.[15] Two key examples are the emergence of artificial intelligence (AI) and social media.

AI enables machines to identify patterns in big data sets and to build representative models that can be used in automated decision-making – from purchasing recommendations for consumers and diagnosing medical conditions, to sentencing decisions in respect of criminals. Such automated decision-making can obviously have real-world impacts on individuals that may not always be welcome or deserved. Data protection laws have addressed two aspects of such automated decision-making: fair and transparent processing, and the ability to demand a review of the decision. In terms of the former, a data subject is seen as needing to be informed of three things: (a) the existence of the automated decision-making process; (b) how the decision was reached, that is, meaningful information about the logic embodied in the algorithm that determines the result; and (c) what consequences flow from the processing for the data subject. Such transparency is designed to enable a data subject to exercise effective control over the use of their data. With regard to review, data subjects may be given the right, in certain circumstances, to have a person intervene in the decision-making process, often referred to as a “human in the loop” (Wang 2019). This reflects a widely held concern that we should be able to appeal against a decision made by a machine to a human, although whether this makes for better decision-making is debatable.
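
These transparency elements lend themselves to being captured in a structured record. The following hypothetical Python sketch shows one way a controller might log the three items of information, together with a “human in the loop” review hook; all field and function names are assumptions for illustration, not derived from any statute or actual system.

```python
from dataclasses import dataclass, field


@dataclass
class AutomatedDecisionRecord:
    """Hypothetical record of a single automated decision, capturing the
    three transparency elements described above plus a review hook."""
    subject_id: str
    decision: str                        # e.g. "credit_limit_reduced"
    logic_summary: str                   # (b) meaningful information about the logic
    consequences: str                    # (c) what follows from the processing
    disclosed_to_subject: bool = False   # (a) subject informed of the processing
    human_review_requested: bool = False
    reviewer_notes: list = field(default_factory=list)

    def request_human_review(self, note: str) -> None:
        """Bring a 'human in the loop': flag the decision for manual review."""
        self.human_review_requested = True
        self.reviewer_notes.append(note)


if __name__ == "__main__":
    record = AutomatedDecisionRecord(
        subject_id="ds-001",
        decision="credit_limit_reduced",
        logic_summary="Score below threshold: payment history weighted 60%.",
        consequences="Lower spending limit applied from next billing cycle.",
        disclosed_to_subject=True,
    )
    record.request_human_review("Data subject disputes the payment history data.")
    print(record.human_review_requested)  # True
```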

Social media companies, such as Facebook and Weibo, are prime examples of the phenomenon that has been termed “surveillance capitalism.” Data subjects obtain a service for free in exchange for enabling the providers to exploit their personal data by selling it to advertisers – a two-sided market. One concern with social media is the possibility that consumers may get “locked in” to the provider they use, because of difficulties with moving the content they have posted to an alternative provider. To address this concern, some data protection regimes have given individuals a right of data portability, which can include the right to demand that their data be transmitted directly from their incumbent provider to their chosen alternative.[16] This can be seen as analogous to number portability obligations in the telecommunications sector, which enable customers to change providers rapidly, cheaply, and easily without having to change their numbers.[17] By reducing the switching costs involved in moving providers, dependency becomes less likely. Although achieved through data protection law, such user empowerment can also be seen as a demand-side competition measure, as well as a component of consumer protection regimes.[18]
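
Data portability presupposes that data can be exported in a structured, commonly used, machine-readable format. The short Python sketch below illustrates the idea with a hypothetical JSON export; the field names and bundle structure are assumptions for illustration, not any provider’s actual export format.

```python
import json


def export_user_data(profile: dict, posts: list) -> str:
    """Hypothetical portability export: bundle a user's profile and
    user-provided content into a structured, machine-readable format
    that a receiving provider could ingest."""
    bundle = {
        "format_version": "1.0",
        "profile": profile,
        "posts": posts,
    }
    return json.dumps(bundle, indent=2)


if __name__ == "__main__":
    exported = export_user_data(
        profile={"name": "A. Subject", "email": "a.subject@example.com"},
        posts=[{"date": "2020-11-30", "text": "Hello, world"}],
    )
    print(exported)  # handed to the user, or transmitted directly to the new provider
```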

Other innovative and disruptive technologies can generate data protection concerns that, while not addressed through specific regulatory measures, may require regulators to give specific attention to how they can be used in a data protection compliant manner. Blockchain, for example, when deployed using a distributed architecture, creates concerns about the roles of the respective parties and the ability of data subjects to exercise their rights.[19]

It is often noted that law tends to lag behind technologies, which is an inevitable result of the different environments under which each operates. Data protection authorities can try to reduce this gap through soft law guidance and recommendations, based on applying the data protection principles to novel processing situations. Such regulatory intervention can ensure that the rights and interests of data subjects are given appropriate consideration, while innovation is not unduly constrained by inflexible rules.

Transfers and trade implications

The digital economy is inherently transnational in nature, with the possibility of data being transmitted across borders in accordance with efficient network design and resource allocation parameters that are often opaque to users. The global nature of networks also provides economic opportunities for those capable of processing data in one country to get access to markets in other countries. From a data protection perspective, however, such transborder data flows represent an avenue for potential erosion of the protections granted to data subjects under national law. As such, data protection regimes will generally include rules governing the flow of personal data out of the jurisdiction.

Cross-border data transfer controls can take a variety of forms, but are generally based on establishing or achieving some threshold standard of protection between the two jurisdictions, which provides a gateway through which the data transfers are governed. The standard to be met to facilitate cross-border data flows may be expressed in differing terms, such as “equivalent,” “adequate,” “appropriate,” “comparable,” or “sufficient,” and may be achieved through different mechanisms (illustrated in the sketch following this list):

• International agreement: Countries may enter into bilateral or multilateral agreements to govern data flows in general,[20] specific categories of personal data, or data transferred for specific purposes (e.g. law enforcement).

• Jurisdictional determination: A country may reach a determination that a foreign country’s legal framework is adequate, whether in general or at a sectoral level, and that therefore no further restrictions are required.[21]

• National licensing/authorization regime: The national regulatory authority may licence or authorize transfers on either an individual or class basis.

• Private law mechanisms: Private entities may be able to enter into binding and enforceable legal agreements, such as contracts, that govern the handling of data when transferred between jurisdictions.

Most regimes also provide for derogations or exceptions from these control mechanisms under certain circumstances, such as when the transfer is infrequent or involves only a limited number of data subjects.
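
To illustrate how these mechanisms might operate in practice as a compliance “gateway,” the following hypothetical Python sketch checks the available legal bases in sequence before permitting a transfer. The jurisdiction codes, ordering, and categories are illustrative assumptions only, not actual adequacy determinations.

```python
from typing import Tuple

# Placeholder adequacy findings, purely for illustration.
ADEQUATE_JURISDICTIONS = {"JP", "NZ"}


def transfer_permitted(destination: str,
                       has_contractual_safeguards: bool,
                       derogation_applies: bool) -> Tuple[bool, str]:
    """Return whether a cross-border transfer may proceed and on what basis."""
    if destination in ADEQUATE_JURISDICTIONS:
        return True, "jurisdictional adequacy determination"
    if has_contractual_safeguards:
        return True, "private law mechanism (e.g. approved contract clauses)"
    if derogation_applies:
        return True, "derogation (e.g. infrequent, limited transfer)"
    return False, "no lawful transfer gateway available"


if __name__ == "__main__":
    print(transfer_permitted("JP", False, False))  # adequacy route
    print(transfer_permitted("XX", True, False))   # contractual route
    print(transfer_permitted("XX", False, False))  # blocked
```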

Achieving interoperability between differing data protection laws remains an ongoing challenge in our global economy, with different cultures and regimes assigning differing priorities to competing public interests, of which data protection is but one. Indeed, the ability to send and receive information across borders engages the rights of individuals as much as data protection and privacy do.

The nature of such controls over the cross-border flow of personal data can obviously have trade implications. Under the World Trade Organization’s (WTO) General Agreement on Trade in Services (GATS), member states that commit to liberalizing a service sector, such as telecommunications or “information supply services,”[22] may continue to rely on an exception where it concerns the “[p]rotection of privacy of individuals in relation to the processing and dissemination of personal data and the protection of confidentiality of individuals.”[23] This provision can legitimize countries imposing data localization rules, under the auspices of data protection, on the processing of all or certain categories of personal data, such as health or financial data. Such controls may either prohibit the processing of personal data outside the territory, or restrict such flows and impose conditions, such as data-residency requirements that copies of the personal data be maintained within the originating jurisdiction. However, in terms of scope and application, data protection laws should operate in a manner that restricts international trade only to the extent necessary and proportionate to safeguard the interests of individuals, rather than be used as a tool for disguising non-tariff trade barriers.

Data protection laws have become an increasingly important part of international trade policy and negotiations. Given the multistakeholder and societal interests involved, greater compatibility and interoperability between national data protection regimes will not only serve to protect the interests of data subjects, but reduce the compliance costs for business, especially SMEs, and facilitate international trade and investment (UNCTAD 2016).

Communications privacy

Telecommunications law comprises sectoral rules, while data protection rules tend to be horizontally applicable across industry sectors. As such, these rules may overlap in certain areas, such as security. Indeed, in some countries, telecommunications law may be the only source of data protection rules or may contain additional specific data protection rules for industry players.

In the EU, when the Data Protection Directive was first proposed, it was supplemented by a sectoral measure applicable to privacy and electronic communications.[24] The measure refers to privacy rather than data protection in recognition that our communication activities are traditionally viewed as part of the fundamental right to privacy.[25] The provisions can be further divided into four distinct privacy relationships that exist within a communications environment.

First, there is the relationship between the service provider and the subscriber or user. In providing communication services, an operator is obviously in a potentially privileged position concerning the handling of user data, both in terms of communications content and associated communications data – the “who,” “when,” “where,” and “how” of a communication. Telecommunications law often expressly makes it unlawful for employees within operators to exploit this position, whether for commercial purposes or otherwise.[26] With digital transformation, communications-generated data has grown exponentially in value and volume. As a consequence, some jurisdictions have adopted sectoral rules restricting the ability of operators to process users’ personal data except for limited purposes, such as end-user billing and interconnection payments, or subject to restrictive conditions, such as obtaining consent. One area of ongoing controversy in the telecommunication sector is that the current regulatory playing field is not level: in some jurisdictions traditional telecommunication operators are subject to strict controls over their use of personal data, while providers of over-the-top (OTT) communication services, such as Skype and Gmail, are not subject to such controls and are therefore free to monetize their customers’ data. There have been calls to harmonize the approach and remove this regulatory asymmetry, with the EU proposing to move in the direction of imposing restrictions on all providers of communications services.[27]

A second privacy relationship is that between a subscriber to a communication service and the user of that service. An example where this scenario may give rise to concerns is the relationship between an employer and employees, since the former may want to monitor and record the communications of employees or other system users, such as customers contacting a call centre, while those users may have a legitimate expectation that they can make private calls or be notified when such monitoring occurs. To protect the privacy of users, therefore, the law may require that itemized bills sent to subscribers do not record calls that do not incur a charge, such as calls to toll-free numbers.

A third privacy relationship is that between the two parties to a communication, traditionally referred to as the calling party and the called party. Rules governing the use of calling line identification (CLI) and prohibiting unsolicited messaging are examples of measures designed to protect the privacy of the recipient from the communications of the sender, although the latter may also be designed to safeguard the network from harmful practices.[28] It is the desire to regulate this privacy relationship that has resulted in the proliferation of so-called “cookie” banners, so prevalent in our online environment, which are designed to ensure that users are informed (and give their consent) when a website with which the user interacts tries to place a “cookie” on the user’s device, whether for its own or third-party purposes.
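
The underlying logic of such consent requirements can be sketched simply: no cookie is set for a given purpose until the user’s consent for that purpose has been recorded. The following hypothetical Python fragment illustrates the idea server-side; the purpose names and data structures are assumptions for illustration, not any real framework’s API.

```python
# Hypothetical server-side consent gate: a cookie is only set once the
# user has recorded consent for that purpose.

CONSENT_LEDGER = {}  # user_id -> set of purposes consented to


def record_consent(user_id: str, purpose: str) -> None:
    """Store the user's consent choice, e.g. from a banner interaction."""
    CONSENT_LEDGER.setdefault(user_id, set()).add(purpose)


def cookie_headers(user_id: str, purpose: str, cookie: str) -> list:
    """Return Set-Cookie headers only for purposes the user consented to."""
    if purpose in CONSENT_LEDGER.get(user_id, set()):
        return [f"Set-Cookie: {cookie}"]
    return []  # no consent recorded: no cookie for this purpose


if __name__ == "__main__":
    print(cookie_headers("u1", "analytics", "tid=123"))  # [] until consent
    record_consent("u1", "analytics")
    print(cookie_headers("u1", "analytics", "tid=123"))  # cookie now allowed
```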

The final privacy relationship is that between the user and the state. Controls are imposed over the circumstances and conditions under which the state can engage in communications surveillance for law enforcement purposes; primarily the interception of communications and the acquisition of communications data, although also extending to data retention requirements imposed on operators.[29] The user-state relationship is a fundamental privacy concern and the reason that “correspondence” has always been an element of the standard constitutional right to privacy.[30]

Whether and how each of these privacy relationships are governed will vary between states, sometimes being addressed as part of the general data protection regime, sectorally within the telecommunications framework, under employment law, as an element of consumer protection law[31] and/or criminal procedure.

Data protection and information security

If viewed as a Venn diagram, data protection, privacy and information security would be drawn as three distinct but overlapping sets. One widely recognized fundamental data protection principle is the need to implement “appropriate” technical and organizational security measures when processing personal data, to protect against both accidental and deliberate conduct resulting in the loss, alteration, disclosure, or destruction of the data. What constitutes “appropriate” security will vary depending on the nature of the personal data being processed, with “sensitive” personal data requiring greater protection. What constitutes “sensitive” personal data may be specified in the legislation, e.g. health and financial data, but should also reflect the nature of the processing activity, as the harmful consequences of, for example, an unintended loss or disclosure of personal data will vary according to the specific context. The nature of the security measures taken will also need to be reviewed and evolve over time to reflect both technological developments and the attendant risks and threats posed. It should also be noted that security obligations should not be viewed as binary in the sense that any breach results in infringement and potential liability, since a regulator may find that all the “appropriate” measures were taken by the controller as regulatee, but a breach still occurred.

The relationship between data protection and information security is primarily viewed as complementary in nature. Information security laws and requirements tend to impose both safeguarding and transparency obligations on systems operators:

• Safeguarding obligations, which require the implementation of appropriate security measures, especially by those entities operating or supplying services to critical national infrastructure; and,

• Transparency obligations, which generally take the form of security breach notification requirements, whether to the relevant authority and/or the individual data subjects, where the latter are likely to suffer harm and can take steps to mitigate it.

Data protection laws will often impose similar obligations on controllers and processors. While such complementarity is positive, it is important that the standards the respective regimes impose, such as time limits for breach reporting, are harmonized in a manner that does not generate legal uncertainties and additional compliance burdens for regulatees.
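
The compliance burden created by unharmonized reporting windows can be illustrated with a short, hypothetical Python sketch that computes the notification deadline under each applicable regime once a breach is detected. The 72-hour figure reflects the GDPR (Art. 33); the second regime and its 24-hour window are assumed purely for illustration.

```python
from datetime import datetime, timedelta

# Illustrative notification deadlines: 72 hours per the GDPR, and a
# hypothetical sectoral security law with a stricter 24-hour window,
# included to show how unharmonized time limits multiply compliance work.
REPORTING_WINDOWS = {
    "data_protection_law": timedelta(hours=72),
    "sector_security_law": timedelta(hours=24),  # assumed, for illustration
}


def notification_deadlines(breach_detected: datetime) -> dict:
    """Compute the deadline for notifying each authority after detection."""
    return {regime: breach_detected + window
            for regime, window in REPORTING_WINDOWS.items()}


if __name__ == "__main__":
    detected = datetime(2020, 12, 16, 9, 0)
    for regime, deadline in notification_deadlines(detected).items():
        print(f"{regime}: notify by {deadline}")
```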

Box 5.3. The cost of data breaches

In January 2019, the Marriott hotel group reported that hackers had accessed records relating to some 339 million guests held in the reservation database system of its Starwood division, which it had acquired in 2016. The stolen data included names, addresses, phone numbers, email addresses, credit card details, passport numbers, and travel details. However, of the 25 million passport details, 20 million were encrypted, so Marriott expected them to remain protected. In terms of costs, Marriott is expected to have to pay out some $500 million to its affected customers, especially those who subsequently became victims of fraud. The breach was investigated by the United Kingdom’s data protection authority, the Information Commissioner’s Office, which announced an intention to fine Marriott £99 million in relation to the seven million guests who were U.K. residents.

Security and safety concerns may also come into conflict with data protection laws. Access to, or the sharing of, information may be seen as a necessary measure to protect the security of a community from harm, which may run counter to data protection practices that focus on the rights of the individual. Two key areas that are generating policy debates in many countries are online safety and the current COVID-19 pandemic.

Online safety: The dark side of the Internet is that it can facilitate illegal and harmful conduct, from hacking systems to exposing children to obscene material and “fake” news. Enforcement against illegal conduct in cyberspace is fraught with difficulties, because of the ephemeral nature of online activities, the technological complexities, and its cross-border nature (see, generally, Walden 2016). Acting against harmful (but legal) conduct generates even greater problems for policy-makers, legislators, and regulators. While it is broadly recognized that effective action requires input from both public- and private-sector actors, including service providers, determining what the respective roles and responsibilities should be is fiercely debated in many countries. One element of that debate concerns the extent to which personal data should be used to investigate, prevent, and detect illegal and harmful conduct. Online anonymity, provided by virtual private networks (VPNs) and end-to-end encryption, protects the political dissident or public interest whistle-blower just as readily as it shields the child sexual abuse predator or terrorist.

COVID-19: As noted in Box 5.2, the pandemic has generated tensions between the need to safeguard the health of the community and the need to limit the use and abuse of personal data. The data collected from individuals may not only stem the current spread of the disease; its aggregation and analysis over time may also further understanding of the virus, enabling public authorities to better handle future public health emergencies. Countries are having to make determinations on a range of issues that have direct implications for data subjects and their personal data: the types of data that can be collected from the individual (e.g. location data); whether disclosure is mandatory or voluntary; whether the collected data can be aggregated with other data held on the individual (e.g. national ID); the range of purposes for which the data can be used (e.g. care and management, or research); and the length of time the data is retained.

A key area of debate in the data protection field is the use of techniques, such as encryption, to pseudonymize or anonymize personal data. While pseudonymization can safeguard personal data, the data remains subject to data protection regulation, since the process can be reversed and the data re-identified. Conversely, effective anonymization should take the data outside of the data protection regime, since it is no longer personal data. The debate revolves around whether certain anonymization techniques are truly effective in preventing a person from re-identifying individuals, given sufficient motivation, technical capability, and the ability to correlate the data against other data sets (Ohm 2010). Both pseudonymization and anonymization techniques are tools of effective information security, but may also be seen as potential “weapons,” having both civilian and military applications, as well as rendering the Internet “dark” and inhibiting legitimate law enforcement investigations. Balancing these multiple and conflicting interests presents challenging policy choices for governments and legislators.
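
The distinction can be made concrete in a short, hypothetical Python sketch: a keyed hash yields a stable pseudonym that the key-holder can re-link to individuals, whereas aggregation moves the data towards anonymity, subject to the re-identification caveats discussed above. All names and values below are illustrative assumptions.

```python
import hashlib
import hmac
from collections import Counter

SECRET_KEY = b"illustrative-key"  # whoever holds this can re-link the data


def pseudonymize(identifier: str) -> str:
    """Keyed hash: a stable pseudonym, reversible in practice by anyone who
    holds the key and a list of candidate identifiers, so the output
    remains personal data under most regimes."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()


def aggregate(ages: list) -> Counter:
    """Aggregation as a step towards anonymization: individual records are
    replaced by counts per age band. Whether this is truly anonymous still
    depends on band sizes and what other data sets an attacker holds."""
    return Counter(age // 10 * 10 for age in ages)


if __name__ == "__main__":
    print(pseudonymize("alice@example.com"))    # same input -> same pseudonym
    print(aggregate([23, 27, 31, 35, 35, 62]))  # e.g. {30: 3, 20: 2, 60: 1}
```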

References

African Union. 2020. The Digital Transformation Strategy for Africa (2020-2030). Addis Ababa: African Union. https://au.int/en/documents/20200518/digital-transformation-strategy-africa-2020-2030.

Bygrave, Lee A. 2002. Data Protection Law: Approaching its Rationale, Logic and Limits. The Hague: Kluwer Law International.

CNIL (Commission Nationale Informatique & Libertés). 2018. Blockchain: Solutions for a Responsible Use of the Blockchain in the Context of Personal Data. Paris: CNIL. https://www.cnil.fr/sites/default/files/atoms/files/blockchain_en.pdf.

Greenleaf, G., and B. Cottier. 2020. “2020 Ends a Decade of 62 New Data Privacy Laws.” Privacy Laws and Business International Report 163 (February): 24-26.

ITU (International Telecommunication Union). 2018. Regulatory Challenges and Opportunities in the New ICT Ecosystem. Geneva: ITU. https://www.itu.int/dms_pub/itu-d/opb/pref/D-PREF-BB.REG_OUT03-2018-PDF-E.pdf.

Ohm, P. 2010. “Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization.” UCLA Law Review (57) 6: 1701-1777. https://www.uclalawreview.org/pdf/57-6-3.pdf.

UNCTAD (United Nations Conference on Trade and Development). 2016. Data Protection Regulations and International Data Flows: Implications for Trade and Development. New York and Geneva: UNCTAD. https://unctad.org/en/PublicationsLibrary/dtlstict2016d1_en.pdf.

Walden, I. 2016. Computer Crimes and Digital Investigations. Oxford: Oxford University Press.

Wang, G.E. 2019. “Humans in the Loop: The Design of Interactive AI Systems.” Human-centred Artificial Intelligence Blog, October 20, 2019. https://hai.stanford.edu/blog/humans-loop-design-interactive-ai-systems.

World Bank. 2021 (forthcoming). World Development Report 2021: Data for Better Lives. Washington, DC: World Bank.

Zuboff, S. 2019. The Age of Surveillance Capitalism. New York: Public Affairs.

Endnotes

  1. The comparison of data (an infinite and non-rivalrous thing) to oil (a finite and rivalrous resource) is probably erroneous.
  2. “Surveillance capitalism” was coined by Shoshana Zuboff (Zuboff 2019).
  3. This chapter refers to data protection, not privacy. While these concepts are linked, they are not synonymous. See, for example, World Bank 2021 (forthcoming). Sometimes used interchangeably, and sometimes referred to in mixed fashion (“privacy and data protection,” or “data privacy”) a distinction is drawn here to firmly place data protection in regulatory context.
  4. The Council of Europe is an intergovernmental body of 47 member States, responsible for the European Convention on Human Rights (ECHR) and the Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data, Strasbourg, January 28, 1981 (Convention 108). Convention 108 has recently been updated to align it with the GDPR.
  5. Directive 95/46/EC “on the protection of individuals with regard to the processing of personal data and on the free movement of such data,” OJ L 281/31, 23.11.1995.
  6. First adopted in 2005, it was revised in 2015.
  7. African Union Convention on Cyber Security and Personal Data Protection, adopted in Malabo, June 27, 2014.
  8. ITU 2018, at p. 35.
  9. China published for consultation a draft Personal Information Protection Law that in many respects mirrors provisions of the GDPR.
  10. ISO/IEC 27018:2014, “Code of practice for protection of personally identifiable information (PII) in public clouds acting as PII processors.”
  11. For example, the TRUSTe privacy program standards.
  12. For example, Council of Europe Additional Protocol to the Convention (2001), OECD (2013), and APEC (2015).
  13. Article 29 Working Party, Adequacy Referential (updated), WP 254, November 2017.
  14. See, World Bank 2021 (forthcoming), Spotlight 6.1.
  15. For example, Mexico, Article 52 “Processing of Personal Data in Cloud Computing,” in Regulations to the Federal Law on the Protection of Personal Data held by Private Parties (2011).
  16. GDPR, Art. 20.
  17. In the EU, see Directive 2002/22/EC “on universal service and users’ rights relating to electronic communications networks and services” (OJ L 108/51, 24.4.2002), at Art. 30. From December 2020, Art. 106 of the European Electronic Communications Code (Directive (EU) 2018/1972, OJ L 321, 17.12.2018) will be the applicable provision.
  18. For more details, see Chapter 4 on “Consumer affairs.”
  19. For example, see CNIL 2018.
  20. For example, the APEC Cross-Border Privacy Rules (CBPR) system, whereby companies can be certified.
  21. For example, the EU’s GDPR, Art. 45; e.g. Commission Implementing Decision (EU) 2019/419, January 2019 with respect to Japan.
  22. CPC Ver.2.1 (2015), Sec. 8, Div. 84: “information supply services.”
  23. GATS, Art. XIV(c)(ii).
  24. The initial measure was adopted in 1997, but was subsequently amended and replaced by Directive 2002/58/EC of the European Parliament and of the Council concerning “the processing of personal data and the protection of privacy in the electronic communications sector,” OJ L 201/37, 31.7.2002 (ePrivacy Directive).
  25. For example, the Universal Declaration of Human Rights (1948), Article 12, states: “No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.”
  26. For example, The Kenya Information and Communications Act, Chapter 411A, s. 30, “Modification etc., of messages,” and s. 31, “Interception and disclosure.”
  27. Proposal for a Regulation concerning the respect for private life and the protection of personal data in electronic communications, COM(2017) 10 final, 10.1.2017.
  28. See ITU, International Telecommunication Regulations (Dubai, 2012), at Article 7, “Unsolicited bulk electronic communications.”
  29. For example, Russia, Federal Law 374-FZ, 2016, requires providers to store content and communications data for six months.
  30. United Nations Resolution 68/167: “The right to privacy in the digital age,” December 18, 2013.
  31. For more details, see Chapter 4 on “Consumer affairs.”