Crowdsourcing techniques in quality of service assessment
25.08.2020
Crowdsourcing devices for fixed and mobile networks
Crowdsourcing for quality of service monitoring involves collecting data as measurements from “data collection devices” on user connections for processing into measurement results by the “data collector.” The data collector provides measurement results to the general public and to other organizations, such as operators, regulators, and consumer groups.
The data collection device might be the user terminal itself or a separate device associated with it. In particular:
- For fixed network crowdsourcing, the data collection device is often the user terminal. Many regulators suggest or even provide online applications that users can initiate from their personal computers. The speed tests from M-Lab and Ookla are widespread examples that, when initiated by users, generate traffic to test file transfer speeds and packet loss ratios. However, personal computers can suffer from slow operating systems and poor home networking, so the measurement results might be indicative of user experience but are not ideal for comparing network performance between operators. Probably more satisfactory, but certainly more expensive, is the use of a data collection device that is separate from the terminal. A test is then usually initiated not by the user but automatically, according to a rule about the times and conditions under which tests are run, and the results of the test do not depend on the characteristics of the user terminal. In several countries such data collection devices are given to users who join a programme funded by the operators or the regulator.
- For mobile network crowdsourcing, the data collection device is usually a user terminal, typically a smartphone, a tablet, or a computer; however, other possibilities have been developed. A test might be initiated manually (by the user) or automatically (according to a rule), and it collects data relating to traffic generated either by the test itself or by the activities of the user.
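Active collection of the kind used by the speed tests mentioned above can be sketched in a few lines: the device generates traffic itself and times the transfer. This is a minimal illustration under assumed conditions, not the methodology of any particular data collector; the chunk size and the test URL are assumptions.

```python
import time
import urllib.request

def measure_download_speed(url, chunk_size=65536):
    """Actively generate traffic by downloading a test file and timing it.

    Returns throughput in megabits per second. A simplified sketch: real
    crowdsourcing applications use dedicated test servers, warm-up
    transfers, parallel connections, and packet loss measurements.
    """
    start = time.monotonic()
    total_bytes = 0
    with urllib.request.urlopen(url) as response:
        while True:
            chunk = response.read(chunk_size)
            if not chunk:
                break
            total_bytes += len(chunk)
    # Guard against a timer resolution of zero on very small transfers.
    elapsed = max(time.monotonic() - start, 1e-6)
    return (total_bytes * 8) / (elapsed * 1_000_000)

# Hypothetical test-server URL; a real deployment would use its own server.
# speed_mbps = measure_download_speed("https://example.com/testfile.bin")
```

A passive collector would instead observe traffic that the user's own activities generate, without creating any of its own.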
In practice, crowdsourcing in quality of service assessment often involves a mixture of these techniques. Moreover, operators or regulators might combine the use of crowdsourcing with drive and walk tests. Various data collectors, such as P3 and Rootmetrics, support such combinations by performing drive and walk tests as well as collecting and processing data from data collection devices. Overall, for mobile networks crowdsourcing is a relatively cheap alternative or complement to such tests that can reveal much information about market trends and network requirements, as illustrated in Singapore (IMDA 2019).
As the discussion above illustrates, crowdsourcing techniques can differ in two dimensions:
- The initiation of a test can be manual (done by the user) or automated (done according to a rule).
- The collection of data can be active (relating to traffic generated for the tests) or passive (relating to traffic arising from the activities of the users).
The application from 5Gmark adapted for the regulator in Burkina Faso, for example, provides manual initiation of tests and active collection of data (ARCEP 2020). It also collects simple information about phone calls by asking users to state whether calls succeed and to judge the voice quality, so it provides subjective as well as objective information.
The distinction between “active” and “passive” is much the same as that between “intrusive” and “non-intrusive” in ITU-T Recommendation E.802 (ITU-T 2007). The terms “active” and “passive” are preferred in other standards, such as ITU-T Recommendation E.812 (ITU-T 2020). That recommendation includes a good discussion of the advantages and disadvantages of both manual and automated initiation and of both active and passive collection. The tables below, derived from one in that recommendation, sum up the advantages (treating the disadvantages as advantages for the alternative).
Table: Advantages of manual and automated test initiation (available for both active and passive collection).
Source: ITU 2020, derived from ITU-T 2020.
Table: Advantages of active and passive data collection (available for both manual and automated initiation).
Source: ITU 2020, derived from ITU-T 2020.
Data collected by crowdsourcing should be filtered and categorized before being aggregated into measurement results. In general:
- The measurements are filtered to remove duplicates and defects. Defects might originate outside the times and places being monitored, from deliberate attempts to bias the results, or from apparent but irrelevant changes in performance (when, for example, content becomes popular enough to migrate to local servers).
- The measurements are put into categories intended to provide separate measurement results. Doing this can entail taking account of information about the capabilities and locations of data collection devices, including the answers from users to any questions inserted in tests. The categories might be separated according to the circumstances of measurements (such as indoors, outdoors driving and outdoors walking), the times and places, the operators and even the user terminals.
- The measurements are aggregated into results by applying standard statistical techniques to those categories of data for which the results can have statistical and practical significance.
There is more detail about these steps in ITU-T Recommendation E.812 (ITU-T 2020).
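The three steps above can be sketched as follows. The field names, the defect check, and the minimum-sample threshold are illustrative assumptions for this sketch, not prescriptions from E.812.

```python
from collections import defaultdict
from statistics import mean, median

# Illustrative raw measurements; field names are assumptions.
measurements = [
    {"id": 1, "operator": "A", "setting": "indoors", "download_mbps": 24.1},
    {"id": 1, "operator": "A", "setting": "indoors", "download_mbps": 24.1},  # duplicate
    {"id": 2, "operator": "A", "setting": "outdoors walking", "download_mbps": 31.7},
    {"id": 3, "operator": "B", "setting": "indoors", "download_mbps": -5.0},  # defect
    {"id": 4, "operator": "B", "setting": "indoors", "download_mbps": 18.9},
]

# Step 1: filter out duplicates and obvious defects.
seen, filtered = set(), []
for m in measurements:
    if m["id"] in seen or m["download_mbps"] <= 0:
        continue
    seen.add(m["id"])
    filtered.append(m)

# Step 2: categorize by operator and the circumstances of measurement.
categories = defaultdict(list)
for m in filtered:
    categories[(m["operator"], m["setting"])].append(m["download_mbps"])

# Step 3: aggregate with standard statistics, keeping only categories
# with enough samples to be meaningful (threshold chosen for illustration).
MIN_SAMPLES = 1
results = {
    key: {"n": len(v), "mean": mean(v), "median": median(v)}
    for key, v in categories.items()
    if len(v) >= MIN_SAMPLES
}
```

In practice the filtering rules (for example, for detecting deliberate bias) and the minimum sample sizes are considerably more elaborate than this.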
Limitations to representativeness
Crowdsourcing is not necessarily free for regulators: collecting and processing the data has a cost. However, a large amount of information is already available free of charge, in country reports from data collectors such as Opensignal and Tutela. Such information can certainly provide insights into telecommunications in those countries, but it cannot provide a complete picture. Specifically, filtering and categorizing the data will not overcome the following problems:
- Crowdsourcing provides information only about the QoS available to certain people: they must have enough money, education, and interest to have suitable broadband connections, enthusiasm for knowing the results of tests, and willingness to let data be collected from their terminals.
- Some user terminals do not provide all the data that would be useful; for instance, they might not give access to signal strength measurements or permit background monitoring in some situations.
- Several crowdsourcing applications are designed on the assumption that the user is interested in data services but not in voice services: the parameters assessed are just some of those relevant to data services.
This limits the extent to which the samples tested by the crowdsourcing applications represent the community of users as a whole. However, the samples are becoming more representative, both because broadband connections and smartphones are becoming more widespread and because the applications are broadening the ranges of parameters and the kinds of services that they assess. For instance, one application now estimates the likely voice quality and video quality of over-the-top (OTT) services from characteristics of user terminals and quality of service parameters (Opensignal 2019; Opensignal 2020). For this purpose, it uses ITU-T Recommendation G.107 and ITU-T Recommendation P.1203 (ITU-T 2015; ITU-T 2017).
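The final step of such an estimate, mapping the E-model transmission rating factor R of ITU-T Recommendation G.107 to an estimated mean opinion score (MOS), can be sketched as below. Computing R itself from delay, loss, and codec impairments is the larger part of the E-model and is not shown here.

```python
def r_to_mos(r):
    """Map the E-model transmission rating factor R to an estimated MOS,
    using the R-to-MOS conversion given in ITU-T Recommendation G.107.
    """
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

# A default, unimpaired narrowband connection has R close to 93.2,
# which maps to a MOS of roughly 4.4; impairments reduce R and the MOS.
```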
Implications for regulation
Crowdsourcing approaches are technically promising but need care, because:
- Crowdsourcing applications need to have enough users at similar times and in similar places to justify inferences with high levels of statistical confidence.
- The confidentiality of the data has to be confirmed. Extra inputs might provide more refined results but might also generate correlations that undo supposed anonymity.
- Crowdsourcing highlights the importance of universal service that is available, accessible, and affordable. Without universal service, crowdsourcing biases the data toward those with access and widens the gap between people with and without access.
- The ways in which the measurements are made for individual users and then filtered, categorized, and aggregated to form measurement results need to be properly documented and understood.
- The data collectors need to be scrutinized, just as would be agents performing drive and walk tests on behalf of the regulator. In fact, as noted above, in some cases they might also be performing drive and walk tests.
Similar points are made in ITU-T Recommendation E.812 (ITU-T 2020).
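The first point above can be made concrete with the standard sample-size formula for estimating a proportion (such as a call success ratio) to a given margin of error; this is textbook statistics, offered here as an illustration rather than a prescription from E.812.

```python
from math import ceil

def required_samples(margin, z=1.96, p=0.5):
    """Minimum number of independent measurements needed so that an
    estimated proportion p has the given margin of error at the
    confidence level implied by z (1.96 corresponds to roughly 95%).
    p = 0.5 is the worst case and therefore a safe default.
    """
    return ceil(z * z * p * (1 - p) / (margin * margin))

# Estimating a call success ratio to within +/-5 percentage points at
# 95% confidence requires at least this many measurements per category:
n = required_samples(0.05)
```

Because measurements must also be split into categories by time, place, and operator, the total number of users needed can be much larger than such per-category figures suggest.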
The documentation required about the collection and processing of the data could resemble that required by the code of conduct for data collectors issued by the regulator in France (Arcep 2018a; Arcep 2018b). So that measurement results can relate to specific networks and user terminals, the regulator requires that large fixed network operators include in their network access points a particular application programming interface (API); data collectors that comply with the code of conduct can then obtain through the API, and use for data collection and processing, information on the network connections (Arcep 2019a; Arcep 2019b). For mobile network operators, analogous information is already available on smartphones.
Arcep. 2018a. Code de conduite 2018 de la qualité de service à destination des acteurs de la mesure. Arcep. December 18, 2018. https://www.arcep.fr/uploads/tx_gspublication/code-de-conduite-qs-internet-2018_FR.pdf.
Arcep. 2018b. 2018 Code of Conduct on Internet Quality of Service for Measurement and Testing Stakeholders. Arcep. December 20, 2018. https://www.arcep.fr/uploads/tx_gspublication/code-of-conduct-internet-qs-2018_EN.pdf.
Arcep. 2019a. Régulation par la donnée: L’Arcep adopte la décision visant à fiabiliser la mesure de qualité de service de l’internet fixe à travers la mise en place d’une API dans les box. Arcep. October 25, 2019. https://www.arcep.fr/actualites/les-communiques-de-presse/detail/n/regulation-par-la-donnee-1.html.
Arcep. 2019b. Data-Driven Regulation: Arcep Adopts a Decision Aimed at Increasing the Accuracy of Fixed Internet QoS Measurements by Installing an API on Boxes. Arcep. October 25, 2019. https://en.arcep.fr/news/press-releases/p/n/data-driven-regulation-4.html.
ARCEP. 2020. «Résomètre»: une nouvelle application pour évaluer la performance des réseaux de téléphonie mobile. http://www.arcep.bf/resometre-nouvelle-application-evaluer-performance-reseaux-de-telephonie-mobile-27223/.
IMDA. 2019. “IMDA’s Use Cases on the Use of Crowdsourced Data Collection.” Presentation at ITU Workshop on Telecommunication Service Quality as an Enabler of the Digital Economy, August 20, 2019. https://www.itu.int/en/ITU-T/Workshops-and-Seminars/qos/201908/Documents/Henry_Foo_Presentation.pdf.
ITU-T. 2007. Framework and Methodologies for the Determination and Application of QoS Parameters. ITU-T Recommendation E.802. https://www.itu.int/ITU-T/recommendations/rec.aspx?id=8848&lang=en.
ITU-T. 2015. The E-Model: A Computational Model for Use in Transmission Planning. ITU-T Recommendation G.107. https://www.itu.int/ITU-T/recommendations/rec.aspx?id=12505&lang=en.
ITU-T. 2017. Parametric Bitstream-Based Quality Assessment of Progressive Download and Adaptive Audiovisual Streaming Services over Reliable Transport. ITU-T Recommendation P.1203. https://www.itu.int/ITU-T/recommendations/rec.aspx?id=13399&lang=en.
ITU-T. 2020. Crowdsourcing Approach for the Assessment of End-to-End QoS in Fixed and Mobile Broadband Networks. ITU-T Recommendation E.812. https://www.itu.int/ITU-T/recommendations/rec.aspx?id=14272&lang=en.
Opensignal. 2019. The State of Mobile Voice App Experience. https://www.opensignal.com/sites/opensignal-com/files/data/reports/pdf-only/data-2019-10/voice_app_experience_october_2019_opensignal.pdf.
Opensignal. 2020. Methodology Overview: How Opensignal Measures Mobile Network Experience. https://www.opensignal.com/sites/opensignal-com/files/opensignal_methodology_overview_march_2020.pdf.
Last updated on: 28.08.2020