Revisiting Targeting in Social Assistance
Using big data for eligibility determination may raise social or ethical concerns that bear consideration. The use of data on personal actions, such as phone calls or social media posts, for eligibility determination takes on a larger cultural dimension of “Big Brother watching,” especially if people do not fully understand that it is not the content of conversations that is monitored but the somewhat less intimate details of their frequency, length, and origin, whether they are incoming or outgoing, and whether they are text or voice (although the data do include whom a person has been talking to). Will people feel comfortable if how they chat with friends by phone, Facebook, or Twitter; search on Google; or buy on Amazon influences their eligibility for social protection programs? And how will the outcomes be explained? Although this book is not the place for it, we believe a deeper look should be taken not just at what legal protections are required to govern data ownership and protection, but more broadly at what sociocultural considerations should be explicitly brought to bear on these issues. Big data and machine learning may raise significant human rights issues (see box 6.10), although work is underway to provide privacy guarantees while still allowing public good applications.81 In the case of Togo, the academic team carefully tested for demographic parity and
BOX 6.10
Machine Learning, Big Data, and Human Rights

Machine learning, private big data, and biometric technology may generate gains in the delivery systems and the selection of beneficiaries of social protection programs, but they also pose risks and challenges that must be considered and minimized or mitigated. Among the advantages, biometric identification systems, such as fingerprint, iris, and face recognition, help in uniquely identifying people for social protection, allowing not only identification but also deduplication and interoperability of systems. Machine learning and big data can help in understanding poverty and patterns of need, matching people to programs, and changing the interaction between people and the state (Gelb and Metz 2018). However, there are multiple risks with implications for human rights. Ohlenburg (2020a) and Sepulveda (2018) are thoughtful sources. Among the risks they cite are (1) inaccuracy of data or exclusions from data or algorithms, (2) identity theft and weak data protection, and (3) security risks and the misuse of data.