

Transparency thus requires public scrutiny of the training dataset and of whether the predictive analyses produced by algorithms are accurate, precise, and reproducible. Is the training dataset large enough, of good quality, and representative of the target population? Were data labeling and data curation handled with competence? Was algorithmic bias systematically assessed? How often will the algorithm and dataset be updated? Has documentation of the model learning process been prepared according to accepted standards? Proposals for the verification and validation of datasets and algorithmic decisions may find growing resonance in the sphere of public policy (such as that related to court decisions and social welfare programs) in a bid to build and restore the public’s trust in government. For private organizations, this approach will likely meet resistance over the disclosure of confidential information and trade secrets. Although some private organizations, motivated by self-interest and reputational concerns, may allow measures such as third-party auditing, the majority will be guided by the enactment of principle-based legislation and the upholding of robust ethical principles and standards.
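To make the audit questions above concrete, the following minimal sketch (in Python, using pandas) illustrates how two of them, the representativeness of the training data and group-level bias in model outputs, could be checked programmatically. The column names, reference population shares, and tolerance threshold are illustrative assumptions for this sketch, not values drawn from this report.

```python
import pandas as pd

# Illustrative reference shares for the target population
# (hypothetical numbers, assumed for this sketch).
POPULATION_SHARES = {"group_a": 0.55, "group_b": 0.45}


def audit_representativeness(df: pd.DataFrame, group_col: str,
                             tolerance: float = 0.05) -> dict:
    """Compare each group's share in the training data with its
    assumed share in the target population; flag large gaps."""
    shares = df[group_col].value_counts(normalize=True)
    report = {}
    for group, pop_share in POPULATION_SHARES.items():
        data_share = float(shares.get(group, 0.0))
        report[group] = {
            "data_share": round(data_share, 3),
            "population_share": pop_share,
            "flagged": abs(data_share - pop_share) > tolerance,
        }
    return report


def demographic_parity_gap(df: pd.DataFrame, group_col: str,
                           prediction_col: str) -> float:
    """Difference in positive-prediction rates across groups:
    one simple, coarse indicator of algorithmic bias."""
    rates = df.groupby(group_col)[prediction_col].mean()
    return float(rates.max() - rates.min())


# Toy example: group_a is overrepresented in the data, and its
# positive-prediction rate is much higher, so both checks raise flags.
df = pd.DataFrame({
    "group": ["group_a"] * 70 + ["group_b"] * 30,
    "prediction": [1] * 40 + [0] * 30 + [1] * 10 + [0] * 20,
})
print(audit_representativeness(df, "group"))
print(demographic_parity_gap(df, "group", "prediction"))  # ~0.24
```

A real audit would go much further, covering label quality, curation practices, update cadence, and documentation, but even coarse checks like these can be standardized, repeated at each model update, and disclosed to external reviewers.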

Several countries in South Asia have developed national AI strategies and intend to use AI in the health and education sectors, but this approach should be carefully assessed (box 5.1). It is unlikely that these countries can elucidate the type of safeguards just discussed.

BOX 5.1 National Artificial Intelligence Strategies in the South Asia Region

India (2018). India’s National Strategy for Artificial Intelligence, a work in progress that is not yet fully funded or implemented, focuses on the use of technologies to ensure social growth and inclusion and to elevate the country to leadership in artificial intelligence (AI) on the global stage. Strategically, the government also seeks to establish India as an “AI garage,” incubating AI that could be applicable to the rest of the developing world. NITI Aayog, the government think tank that wrote the “in-progress” national AI strategy report, calls this approach #AIforAll. The strategy aims to (1) equip and empower Indians with the skills to find quality jobs, (2) invest in research and sectors that can maximize economic growth and social impact, and (3) scale Indian-made AI solutions to the rest of the developing world. Areas for AI interventions include health care, agriculture, education, smart cities and infrastructure, and smart mobility and transportation. The section “Ethics, Security, Privacy and Artificial Intelligence” highlights the need to be conscious of factors in the AI ecosystem that may undermine ethical conduct, impinge on privacy, and compromise security protocols. The budget allocation for Digital India, the government’s umbrella initiative to promote AI, machine learning, 3-D printing, and other technologies, was almost doubled to Rs 30.73 billion (US$477 million) in 2018 (Bhattacharya 2018).

Pakistan (2018). Pakistan announced an AI initiative in April 2018, to be funded at US$3.3 million over three years. The project will be supervised by the Higher Education Commission (HEC), and six public sector universities were selected to develop nine AI research labs (Khan 2018). In addition, the government announced the Presidential Initiative for Artificial Intelligence and Computing to develop human capacity.

Sri Lanka (2018). Sri Lanka, through its National Export Strategy Advisory Committee, announced the launch of AI Nation, an initiative to promote the education of 5,000 data scientists from 2018 to 2025 (Daily FT 2018). This measure will serve as a step toward drafting a national AI plan.

Source: Pauwels 2020.

Such safeguards are especially urgent in relation to children and education. The United Nations Children’s Fund (UNICEF), in partnership with many institutions, has developed a new memorandum on AI and children’s rights that recommends developing a framework for AI based on child rights and delineates rights and corresponding duties for developers, corporations, parents, and children around the world (UNICEF 2019). Collaboration with international partners and civil society organizations is needed to develop appropriate transparency standards for AI and converging technologies.

ACCOUNTABILITY FOR MISUSES OF TECHNOLOGY

As converging technologies become more automated and more decentralized, there is an accompanying lack of clarity about who will be held accountable for their potential and actual misuses. Furthermore, the technological supply chain is long and complex, involving training data, data centers, cloud-based computing services, fiber-optic networks, and highly specialized technical expertise, often distributed across the world. These weakly regulated AI supply chains create pervasive cybersecurity and data security threats and a growing accountability gap, leaving vulnerable populations exposed to considerable harm without recourse to legal remedies. Some of the damage could be irreversible and could affect human rights.

Because reliance on corporate ethics or self-regulation will not be sufficient, rules-based accountability mechanisms and institutions are required, but these will take time to build. Thus there is an urgent need to develop and operationalize a “theory of no harm,” based on a normative framework, and to undertake sociotechnical system analysis, which would contribute to the development of a strong governance framework that empowers human capital (Pauwels 2020). Such approaches could include, for example, the right to object to automated decision-making and a moratorium on the use of facial recognition technology and other methods for which legal protections are currently weak or cannot be enforced.
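As one illustration of what operationalizing such accountability could look like in practice, the sketch below (a hypothetical Python design, not a mechanism described in this report) records each automated decision with enough context, model version, inputs, and outcome, for an affected person to file an objection and for a human reviewer to resolve it.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional
import uuid


@dataclass
class DecisionRecord:
    """Audit-trail entry for a single automated decision."""
    subject_id: str
    model_version: str
    inputs: dict
    outcome: str
    decision_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
    objection: Optional[str] = None        # filed by the affected person
    review_outcome: Optional[str] = None   # recorded by a human reviewer


class DecisionLog:
    """In-memory log; a production system would need durable,
    tamper-evident storage and access controls."""

    def __init__(self) -> None:
        self._records: dict[str, DecisionRecord] = {}

    def record(self, rec: DecisionRecord) -> str:
        self._records[rec.decision_id] = rec
        return rec.decision_id

    def object(self, decision_id: str, reason: str) -> None:
        # The "right to object": attach the objection so the decision
        # is queued for human review rather than standing automatically.
        self._records[decision_id].objection = reason

    def resolve(self, decision_id: str, outcome: str) -> None:
        self._records[decision_id].review_outcome = outcome


# Usage: log a decision, let the subject contest it, record the review.
log = DecisionLog()
dec_id = log.record(DecisionRecord(
    subject_id="applicant-42", model_version="welfare-model-1.3",
    inputs={"income": 18_000, "household_size": 4}, outcome="denied"))
log.object(dec_id, "Household income was recorded incorrectly.")
log.resolve(dec_id, "overturned after human review")
```

The design choice that matters here is not the storage mechanism but the contract: every automated decision is traceable to a model version and its inputs, and an objection triggers a human step that is itself recorded.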