Digital Regulation Platform

Crowdsourcing techniques in quality of service assessment


Crowdsourcing devices for fixed and mobile networks

Crowdsourcing for quality of service monitoring involves collecting data as measurements from “data collection devices” on user connections for processing into measurement results by the “data collector.” The data collector provides measurement results to the general public and to other organizations, such as operators, regulators, and consumer groups.

The data collection device might be the user terminal or a special device associated with the user terminal. In fact:

  • For mobile networks, the data collection device is typically the user terminal itself, running a measurement application.
  • For fixed networks, the data collection device may instead be a separate unit associated with the user terminal, such as a probe attached to the router.

In practice, crowdsourcing in quality of service assessment often involves a mixture of these techniques. Moreover, operators or regulators might combine the use of crowdsourcing with drive and walk tests. Various data collectors, such as P3 and Rootmetrics, support such combinations by performing drive and walk tests as well as collecting and processing data from data collection devices. Overall, for mobile networks, crowdsourcing is a relatively cheap alternative or complement to such tests, and it can reveal much information about market trends and network requirements, as illustrated in Singapore (IMDA 2019).

Data collection

As the discussion above illustrates, crowdsourcing techniques can differ in two dimensions:

  • whether tests are initiated manually by the user or automatically by the application;
  • whether data is collected actively, by generating dedicated test traffic, or passively, by observing the traffic from normal user activities.

The application from 5Gmark adapted for the regulator in Burkina Faso, for example, provides manual initiation of tests and active collection of data (ARCEP 2020). It also collects simple information about phone calls, by asking users to state whether calls succeed and to judge the voice quality, so it provides subjective information as well as objective information.

The distinction between “active” and “passive” is much the same as that between “intrusive” and “non-intrusive” in ITU-T Recommendation E.802 (ITU-T 2007). The terms “active” and “passive” are preferred in other standards, such as ITU-T Recommendation E.812 (ITU-T 2020). That recommendation includes a good discussion of the advantages and disadvantages of both manual and automated initiation and of both active and passive collection. The tables below, derived from one in that recommendation, sum up the advantages (treating the disadvantages as advantages for the alternative).
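As an illustration of active collection with manual initiation, the sketch below generates its own test traffic (repeated TCP handshakes) and times it, independently of any user activity. The local stand-in server, the function name, and the parameters are illustrative assumptions, not part of any standard.

```python
import socket
import socketserver
import threading
import time

def active_latency_test(host: str, port: int, samples: int = 5) -> float:
    """An *active* test: generate dedicated test traffic (TCP handshakes
    to a known test server) and time it. A passive collector would instead
    observe the traffic that the user's own applications produce."""
    times_ms = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # connection established; close immediately
        times_ms.append((time.perf_counter() - start) * 1000.0)
    return sum(times_ms) / len(times_ms)  # mean connect time in milliseconds

if __name__ == "__main__":
    # Local stand-in for a real test server, so the sketch is self-contained.
    server = socketserver.TCPServer(("127.0.0.1", 0),
                                    socketserver.BaseRequestHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    host, port = server.server_address
    print(f"mean connect time: {active_latency_test(host, port):.2f} ms")
    server.shutdown()
    server.server_close()
```

A real data collection application would test against dedicated measurement servers and would also measure throughput, but the pattern is the same: the test, not the user, produces the traffic being measured.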

Initiation Advantages (available for both active and passive collection)

Manual initiation:
  • Raises user awareness of relevant network conditions
  • Allows the insertion of simple questions about user motivations and locations
  • Is supported on all user terminals without needing covert background processes
  • Is perceived to be more transparent by being under user control

Automated initiation:
  • Provides data for the times and conditions under inspection by the operator
  • Offers large sample sizes by removing user control of test frequencies
  • Eliminates the bias of users from the choice of circumstances for initiations

Source: derived from ITU-T Recommendation E.812 (ITU-T 2020).

Collection Advantages (available for both manual and automated initiation)

Active collection:
  • Allows measurements of parameters dissociated from immediate user activities
  • Opens possibilities for test standardization
  • Offers the same measurements irrespective of user terminal capabilities
  • Precludes assembling information about users from application-specific data

Passive collection:
  • Indicates performance based on actual user activities over periods
  • Returns results unaffected by the conditions of the data collection devices
  • Operates without needing explicit choices of test servers
  • Avoids congesting the network without needing tests to be devised specially
  • Avoids consuming user data allowances without needing test traffic to be free

Source: derived from ITU-T Recommendation E.812 (ITU-T 2020).

Data processing

Data collected by crowdsourcing should be filtered and categorized before being aggregated into measurement results. In general:

  • filtering discards invalid, incomplete, or implausible measurements;
  • categorization groups the remaining measurements, for example by operator, location, time, and user terminal;
  • aggregation combines each category of measurements into the statistics that form the measurement results.

There is more detail about these steps in ITU-T Recommendation E.812 (ITU-T 2020).
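The filter–categorize–aggregate pipeline can be sketched as follows; the record fields and the choice of the median as the aggregate are illustrative assumptions, not a prescribed schema.

```python
from statistics import median

# Raw crowdsourced records; the fields are illustrative, not a standard schema.
records = [
    {"operator": "A", "dl_mbps": 24.1, "valid": True},
    {"operator": "A", "dl_mbps": 0.0,  "valid": False},  # failed test
    {"operator": "B", "dl_mbps": 31.7, "valid": True},
    {"operator": "A", "dl_mbps": 18.9, "valid": True},
    {"operator": "B", "dl_mbps": 28.2, "valid": True},
]

# 1. Filter: discard invalid or implausible measurements.
clean = [r for r in records if r["valid"] and r["dl_mbps"] > 0]

# 2. Categorize: group by operator (could equally be by region, time, terminal).
by_operator = {}
for r in clean:
    by_operator.setdefault(r["operator"], []).append(r["dl_mbps"])

# 3. Aggregate: a robust statistic per category forms the measurement result.
results = {op: median(values) for op, values in by_operator.items()}
print(results)  # → {'A': 21.5, 'B': 29.95}
```

The median is used here because it is less sensitive than the mean to the outliers that crowdsourced data typically contains.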

Limitations to representativeness

Crowdsourcing is not necessarily free for regulators: collecting and processing the data has a cost. However, a large amount of information is already available free of charge, in country reports from data collectors such as Opensignal and Tutela. This information can certainly provide insights into telecommunications in those countries, but it cannot provide the complete picture. Specifically, filtering and categorizing the data will not overcome the following problems:

  • Measurements come only from users who have broadband connections or smartphones and who choose to run the data collection applications.
  • Measurements cover only the parameters and the kinds of services that the applications assess.

This limits the extent to which the samples tested by the crowdsourcing applications represent the community of users as a whole. However, the samples are becoming more representative, both because broadband connections and smartphones are becoming more widespread and because the applications are broadening the ranges of parameters and the kinds of services that they assess. For instance, Opensignal now determines the likely voice quality and video quality of over-the-top (OTT) services from characteristics of user terminals and quality of service parameters (Opensignal 2019; Opensignal 2020). For this purpose, it uses ITU-T Recommendation G.107 and ITU-T Recommendation P.1203 (ITU-T 2015; ITU-T 2017).
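As a rough illustration of how such voice quality estimates work, the sketch below applies the E-model rating-factor-to-MOS conversion from ITU-T Recommendation G.107, together with a commonly used simplification (the Cole–Rosenbluth approximation) for the delay and loss impairments. The default impairment parameters here are illustrative only; the models that data collectors actually deploy, including ITU-T P.1203 for video, are considerably more elaborate.

```python
def r_to_mos(r: float) -> float:
    """Convert the E-model rating factor R to an estimated MOS (ITU-T G.107)."""
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

def e_model_r(one_way_delay_ms: float, packet_loss_pct: float,
              ie: float = 0.0, bpl: float = 25.1) -> float:
    """Simplified R computation (Cole–Rosenbluth approximation): start from
    the default rating R0 ≈ 93.2 and subtract delay and loss impairments.
    `ie` and `bpl` are codec-specific; the defaults here are illustrative."""
    d = one_way_delay_ms
    i_delay = 0.024 * d + (0.11 * (d - 177.3) if d > 177.3 else 0.0)
    i_loss = ie + (95 - ie) * packet_loss_pct / (packet_loss_pct + bpl)
    return 93.2 - i_delay - i_loss

print(round(r_to_mos(93.2), 2))                  # → 4.41 (near-toll quality)
print(round(r_to_mos(e_model_r(150, 2.0)), 2))   # degraded path: 150 ms, 2% loss
```

The point of the E-model for crowdsourcing is that MOS-like quality estimates can be derived from parameters (delay, loss) that the applications already measure, without recording any actual speech.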

Implications for regulation

Crowdsourcing approaches are promising technically but need care, because:

  • the samples tested might not represent the community of users as a whole;
  • the measurement results depend on how the data is collected and processed, which therefore needs to be documented transparently.

Similar points are made in ITU-T Recommendation E.812 (ITU-T 2020).

The documentation required about the collection and processing of the data could resemble that required by the code of conduct for data collectors issued by the regulator in France (Arcep 2018a; Arcep 2018b). So that measurement results can relate to specific networks and user terminals, the regulator requires that large fixed network operators include a particular application programming interface (API) in their network access points; data collectors that comply with the code of conduct can then obtain information on the network connections through the API and use it in data collection and processing (Arcep 2019a; Arcep 2019b). For mobile networks, analogous information is already available on smartphones.
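Arcep's decision specifies what information such an API must expose, not the format sketched below. The field names and values here are hypothetical, illustrating how a data collector might tag a measurement with connection information obtained through an API of this kind so that results can be categorized by network and access type.

```python
import json

# Hypothetical payload: these field names are illustrative,
# not Arcep's actual API schema.
access_info = {
    "access_technology": "FTTH",        # e.g. xDSL, cable, FTTH
    "subscribed_downlink_mbps": 1000,
    "subscribed_uplink_mbps": 500,
}

# A throughput measurement made by the data collection device.
measurement = {"dl_mbps": 842.3, "ul_mbps": 431.0}

# Tagging each result with the access characteristics lets the data
# collector categorize it correctly, e.g. not comparing an FTTH line
# with a 20 Mbit/s xDSL line on raw throughput alone.
tagged = {**measurement, "access": access_info}
print(json.dumps(tagged, indent=2))
```

Without such tagging, a slow measurement on a fast line and a fast measurement on a slow line are indistinguishable, which is precisely the accuracy problem the Arcep decision addresses.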




Arcep. 2018a. Code de conduite 2018 de la qualité de service à destination des acteurs de la mesure. Arcep. December 18, 2018.

Arcep. 2018b. 2018 Code of Conduct on Internet Quality of Service for Measurement and Testing Stakeholders. Arcep. December 20, 2018.

Arcep. 2019a. Régulation par la donnée : L’Arcep adopte la décision visant à fiabiliser la mesure de qualité de service de l’internet fixe à travers la mise en place d’une API dans les box. Arcep. October 25, 2019.

Arcep. 2019b. Data-Driven Regulation: Arcep Adopts a Decision Aimed at Increasing the Accuracy of Fixed Internet QoS Measurements by Installing an API on Boxes. Arcep. October 25, 2019.

ARCEP. 2020. «Résomètre»: une nouvelle application pour évaluer la performance des réseaux de téléphonie mobile.

IMDA. 2019. “IMDA’s Use Cases on the Use of Crowdsourced Data Collection.” Presentation at ITU Workshop on Telecommunication Service Quality as an Enabler of the Digital Economy, August 20, 2019.

ITU-T. 2007. Framework and Methodologies for the Determination and Application of QoS Parameters. ITU-T Recommendation E.802.

ITU-T. 2015. The E-Model: A Computational Model for Use in Transmission Planning. ITU-T Recommendation G.107.

ITU-T. 2017. Parametric Bitstream-Based Quality Assessment of Progressive Download and Adaptive Audiovisual Streaming Services over Reliable Transport. ITU-T Recommendation P.1203.

ITU-T. 2020. Crowdsourcing Approach for the Assessment of End-to-End QoS in Fixed and Mobile Broadband Networks. ITU-T Recommendation E.812.

Opensignal. 2019. The State of Mobile Voice App Experience.

Opensignal. 2020. Methodology Overview: How Opensignal Measures Mobile Network Experience.

Last updated on: 19.01.2022