
5.1 Risks Posed by Converging Technologies

and contacts on mobile phones. For example, India recently upgraded its technological surveillance capacities significantly, deploying individual facial recognition at railway stations and airports, algorithmic crowd analysis during street protests, and mobile, contactless biometric identification with temperature detection (BiometricUpdate.com 2020). Beyond state surveillance lies a whole range of risks related to the manipulation of information and behavior modification for commercial surveillance purposes.

The risks posed by converging technologies as applied to human capital are presented in table 5.1, grouped by existing sources of risk and those likely to emerge in the next few years. Vulnerable populations, in particular, are exposed to risks of cybercrime, cyberbullying, and social engineering, which are enabled through data collection efforts across human development sectors.

TABLE 5.1 Risks Posed by Converging Technologies

Timeline: Current

Data commodification
• Commodification of behavioral, emotional, and biometric data of children and other vulnerable populations for education scoring, future commercial targeting, and exclusion/discrimination schemes

Failure of technological design and predictive value
• Biases in datasets and algorithmic design, as well as poor performance in predictive value, may lead to system (access, delivery, optimization) failures, with corrosive implications for underserved groups

Manipulation for state surveillance
• Commodification of behavioral, transactional, socioeconomic, and consumption data for social credit systems and exclusion/discrimination schemes
• Use of personal data to silence civil society resistance, repress traditional media structures, and harm the reputation of knowledge institutions, leading to the closure of virtual civic spaces and affecting people’s resilience and society’s social fabric

Information disorders, disinformation, and hate speech
• Use of personal, demographic, ethnic, behavioral, and emotional data collected on children and adults for targeted disinformation and polarization, for emotional manipulation and hate speech, and for radicalization
• Mobilization of large population subgroups around violent narratives, including around elections

Timeline: Near term to five-year time frame

Cyberoperations, cyberbullying, and social engineering
• Use of personal and emotional data for social engineering, leading to more efficient and more powerful acts of cybercrime
• Use of biometric data for precision biometric attacks (cyberattacks in which autonomous malware uses soft facial, voice, or biometric features for impersonation)
• Exfiltration of sensitive datasets about populations to direct attacks at vulnerable subgroups (such as targeting groups facing food insecurity and retaliating against specific minorities, based on biometric data)
• Automated data poisoning (poisoning of data in critical information infrastructure, such as medical or hospital databases or biometric, civic, and electoral registries)
• Cyberattacks targeting automated supply chains, thereby affecting food security and the delivery of essential human capital services
• Cyberattacks in which autonomous malware weaponizes other dual-use technologies (such as biotech, 3-D printing, and robotics, including drone technologies)

Source: Adapted from Pauwels 2020.


The discussion around appropriate governance structures for converging technologies is still in its early stages, as societies and governments struggle to understand the full ramifications of the ongoing technological revolution. However, a few priority areas can be identified for urgent work to promote the inclusion and empowerment agenda for human capital in South Asia. Developing regulatory standards and legal mechanisms for transparency, accountability, and local empowerment can help to ensure that these principles are integrated into the design and deployment of converging technologies from the outset, so that their beneficial potential can be harnessed for human capital development.

TRANSPARENCY: PROTECTING FAIRNESS AND SAFEGUARDING AGAINST BIAS

Algorithms used for decision-making, whether for personalized learning, job selection, or medical diagnostics, can harbor biases, conscious as well as unconscious, stemming from the training datasets used and the design of the algorithms themselves. Two sources of bias are of special concern.

The first is datasets that accurately reflect society’s existing biases, thereby pointing to unresolved societal problems that need to be addressed. For example, a society’s discrimination against a particular group cannot be corrected by “fixing” the underlying dataset or algorithm; it requires a societal solution. The second is datasets that misrepresent reality. In such datasets, because of a low participation rate or inaccuracies in synthetic data, the data and the algorithmic application layer fail to reflect reality accurately. It has been shown, for example, that many current facial recognition algorithms fail to discern the features of African Americans, particularly women, because the underlying data did not include a representative sample of faces. Another example comes from the new field of emotion analysis, in which facial recognition analysis struggles to detect the “smiles” of Asian women. Such algorithms now play a critical role in programs such as those measuring stunting. Their use can perpetuate or worsen existing inequalities in a society where data on marginalized and vulnerable groups are insufficient or where the biases of developers go unexamined.
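To make this second source of bias concrete, the minimal sketch below shows one common audit step: comparing a model’s error rate across demographic subgroups. The groups, labels, and predictions are hypothetical placeholders, not data from any program discussed here; a disparity like the one in the toy output is the kind of signal that should prompt a review of dataset representation before deployment.

```python
# Minimal sketch: auditing a model's error rate per demographic subgroup.
# All data here are hypothetical placeholders; a real audit would use the
# actual evaluation data and group labels.

from collections import defaultdict

def error_rate_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        if y_true != y_pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy illustration: a model that performs worse on an underrepresented group.
sample = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0),  # fewer samples, higher error
]
print(error_rate_by_group(sample))
# Expected output: {'group_a': 0.0, 'group_b': 0.5}. A gap of this kind
# flags the need to re-examine dataset representation before deployment.
```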

To correct for missing data and economize on the substantial cost of creating and labeling new datasets, a frequently employed second-best solution is to rely on synthetic datasets.15 This work-around can introduce new problems, however. For example, in the case study on the diagnosis of stunting noted in chapter 3, images and measurements of a few real human beings are used to create images of artificial individuals through feature-based editing, yielding people with different heights and weights and features such as double chins and bony structures (Pauwels 2020). Without further scrutiny, it is uncertain how well a model trained on such synthetic data will perform when applied to real individuals.
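The sketch below illustrates the general shape of such feature-based augmentation: a handful of real records are perturbed with noise to produce artificial individuals. The field names and noise parameters are illustrative assumptions for this sketch, not the actual pipeline used in the stunting case study, and the closing comment states the validation caveat raised above.

```python
# Minimal sketch of feature-based synthetic data augmentation, loosely
# analogous to the stunting example: a few real anthropometric records are
# perturbed to generate artificial individuals. Field names and noise
# parameters are illustrative assumptions, not the actual method.

import random

REAL_RECORDS = [
    {"height_cm": 84.0, "weight_kg": 11.2},
    {"height_cm": 92.5, "weight_kg": 13.8},
]

def synthesize(record, n, height_sd=2.0, weight_sd=0.5, seed=None):
    """Create n synthetic variants by adding Gaussian noise to each feature."""
    rng = random.Random(seed)
    return [
        {
            "height_cm": rng.gauss(record["height_cm"], height_sd),
            "weight_kg": rng.gauss(record["weight_kg"], weight_sd),
            "synthetic": True,  # flag so downstream checks can separate sources
        }
        for _ in range(n)
    ]

synthetic_data = [s for r in REAL_RECORDS for s in synthesize(r, n=50, seed=42)]
# Any model trained on synthetic_data should still be validated against a
# held-out set of real individuals: noise-based variants of a few records
# may not capture the true population distribution.
```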