(Un)fair AI. Toward an Unbiased Algorithmic Future for People with Disabilities by Helena Jańczuk


Natolin Policy Papers Series

(Un)fair AI. Toward an Unbiased Algorithmic Future for People with Disabilities

Mentored by: Olaf Osica, Natolin Digital Transformation Nest

Vol. (7) 4/2025

ISSN 3071-7183

Natolin Policy Papers Series Vol. (7) 4/2025
(Un)fair AI. Toward an Unbiased Algorithmic Future for People with Disabilities

Author: Helena Jańczuk

Volume Editor: Barbara Bobrowicz

Mentored by: Olaf Osica

Publication Coordinator: Jan Bogusławski

Design and Typesetting: Maja Grodzicka

Proofread by: Mateusz Byrski

Published in Poland by:

College of Europe in Natolin www.coleurope.eu www.natolin.eu

84, Nowoursynowska Str., 02-797 Warsaw, Poland

ISBN 978-83-63128-26-5

ISSN 3071-7183

ebook Natolin Policy Papers Series

© College of Europe in Natolin, 2025

Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Education and Culture Executive Agency (EACEA). Neither the European Union nor EACEA can be held responsible for them.

Executive Summary

Algorithmic bias seems to be an innate feature of systems based on artificial intelligence (AI). Existing research and development approaches to technological advancement often do too little to prevent it effectively because they focus on perfecting algorithmic outputs and, in most cases, disregard the quality of the information “fed” to AI models. As a result, the human-algorithm interaction of certain social groups, for example people with disabilities (PWD), may not be optimal and can be marked by bias and discrimination. This issue calls for immediate action aimed at mitigating the negative outcomes of AI-based systems.

This policy paper proposes a set of recommendations for policymakers and developers of algorithm-based systems to reduce the risk of algorithmic bias against people with disabilities. Building upon the concepts of responsible AI and AI fairness1, the recommendations include:

1. Training AI models with atypical bodily and cognitive features in mind to develop algorithms that are fair to everyone, regardless of their physical or mental capabilities;

2. Organizing consultations with PWD and disability specialists to accommodate the real needs of the disabled who use AI-based systems;

3. Continuing to invest in research on responsible AI to promote an interdisciplinary scientific dialogue that will create a set of standards for policymakers and developers in the area of technology and digitalization.

Although this policy paper focuses on the question of algorithmic bias against people with disabilities, these recommendations can be applied to any group that is being discriminated against by AI-based systems. Overall, this paper aims to promote discussion on AI responsibility and fairness.

1 Cole Stryker, ‘What Is Responsible AI? | IBM’ (IBM, 2024) <https://www.ibm.com/think/topics/responsible-ai> accessed 24 October 2025; Stanford Institute for Human-Centered Artificial Intelligence, ‘Artificial Intelligence Index Report 2024’ (Stanford, CA: Stanford Institute for Human-Centered Artificial Intelligence 2024); Emilio Ferrara, ‘Fairness and Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, and Mitigation Strategies’ (2023) 6 Sci 3, 3 <https://www.mdpi.com/2413-4155/6/1/3> accessed 24 October 2025.

(Un)fair AI. Toward an Unbiased Algorithmic Future for People with Disabilities

Natolin Policy Papers Series

Table of Contents

Executive Summary
Introduction
Responsible AI, AI Fairness, and the Value Lock-In Hazard
Social and Algorithmic Bias
Aggregation and Disability Bias
Implications for AI Fairness and Recommendations
Concluding Remarks
Bibliography

(Un)fair AI. Toward an Unbiased Algorithmic Future for People with Disabilities

Introduction

When using systems based on artificial intelligence, such as chatbots and face or voice recognition programs, we rarely realize that the outcomes we get might not be the same for other people. Oftentimes, these differences stem from our interaction with an algorithm and are based on our previous choices and preferences. However, some decisions made by AI are not rooted in individual actions. Rather, they result from a systemic issue, the so-called algorithmic bias, and might lead to harmful outcomes, e.g., technological exclusion, inaccessibility, and the spread of stereotypes.

This policy paper aims to discuss the question of bias and discrimination against people with disabilities (PWD) in artificial intelligence algorithms. First, the qualities of responsible AI are presented, with emphasis on AI fairness, the emergence of AI unfairness, and the value lock-in hazard. Then, the key definitions and concepts are explained, followed by a discussion of the differences between allocative and representational harm as well as different kinds of bias. This theoretical framework lays the necessary foundation for the subsequent discussion of the often-overlooked topic of AI-powered disability bias. The final part of this policy paper considers the issues of safety, accessibility, functionality, and standards of normalcy for PWD in light of AI bias research. The discussion of both the theoretical and practical dimensions of this matter provides a background against which a set of recommendations to mitigate the problem can be proposed.

Responsible AI, AI Fairness, and the Value Lock-In Hazard

The topic of algorithmic bias against people with disabilities falls within a broader discussion on AI responsibility and fairness. Responsible AI is a field of research within the domain of AI alignment and machine ethics. It aims to build trust in AI solutions based on guidelines concerning the design, development, and deployment of intelligent systems2. The concept of responsible AI focuses on the broader societal impact of artificial intelligence and the ways in which human values, legal standards, and ethical principles can be embedded in advanced computer programs. Research on responsible AI distinguishes four main dimensions of work on intelligent systems: (1) privacy and data governance, (2) transparency and explainability, (3) security and safety, and (4) fairness3. Fairness, the central notion for this analysis, focuses on creating unbiased, equitable algorithms and avoiding discrimination through AI misuse4.

Importantly, understanding AI fairness simply as the “absence of bias or discrimination in AI systems”5 is correct, but does not fully capture its range. The complexity of this concept allows for differentiating five types of AI fairness based on scope. The first type, group fairness, is based on principles of demographic parity and equal opportunity to ensure that “different groups are treated equally or proportionally”6. Conversely, individual fairness focuses on guaranteeing equitable treatment by AI systems, where individuals with similar characteristics are treated similarly, irrespective of their group affiliations7. Furthermore, AI fairness can also be understood as avoiding bias in hypothetical scenarios (counterfactual fairness), transparency and lack of discrimination in decision-making processes (procedural fairness), and “ensuring that an AI system does not perpetuate historical biases and inequalities” (causal fairness)8. The disruption of any of the above types can result in the emergence of a phenomenon referred to as AI unfairness. There are two sources of unfairness in intelligent systems. On the one hand, it can result from data collection and processing errors, such as under-sampling of certain groups, subjective “feature engineering” (i.e. subjective augmentation, aggregation, and summarization of variable

2 Stryker (n 1).

3 Stanford Institute for Human-Centered Artificial Intelligence (n 1) 163.

4 ibid.

5 Ferrara (n 1).

6 ibid.

7 ibid.

8 ibid.

characteristics), or even the fact that given data reflect discrimination in society9. On the other hand, AI unfairness may be a consequence of the design and deployment of intelligent systems based on subjective human labor10.
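To make the group-level fairness notions above more concrete, the short sketch below (in Python, with invented labels, predictions, and group memberships) computes two commonly used gaps: a demographic-parity gap, i.e., the difference in positive-prediction rates between groups, and an equal-opportunity gap, i.e., the difference in true-positive rates. It is an illustrative sketch of these definitions under assumed toy data, not a measurement of any real system.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    rates = [y_pred[group == g].mean() for g in (0, 1)]
    return abs(rates[0] - rates[1])

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rates between two groups."""
    tprs = []
    for g in (0, 1):
        mask = (group == g) & (y_true == 1)  # qualified members of group g
        tprs.append(y_pred[mask].mean())
    return abs(tprs[0] - tprs[1])

# Invented data: true outcomes, model decisions, and group membership (0 or 1).
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 0, 0, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_gap(y_pred, group))        # gap in selection rates
print(equal_opportunity_gap(y_true, y_pred, group)) # gap in true-positive rates
```

A gap close to zero on either measure would indicate that, at least on these two criteria, the two groups are treated comparably; large gaps signal the kind of group-level unfairness described above.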

The concepts of algorithmic bias, responsible AI, and AI fairness are closely linked to the value lock-in hazard – one of the eight AI speculative hazards and failure modes described within the framework of AI existential risk (X-risk) analysis11. Value lock-in may manifest itself when our technology reinforces the values of a dominant group, or when groups become entrenched in a disadvantageous state resistant to efforts aimed at change12. This problem is two-fold. On the one hand, highly developed artificial intelligence itself might be able to decide which values are to be promoted in the future13. On the other hand, the rapidly growing requirements of computing power and data act as barriers to entry, centralizing the influence of AI. Over time, this could result in a scenario where only a limited number of stakeholders have access to and control over the most advanced AI systems14. “This may enable, for instance, regimes to enforce narrow values through pervasive surveillance and oppressive censorship”15. No matter which scenario one finds most convincing, the main concern is locking in a small group’s value system, i.e. spreading biased attitudes, which in turn can be harmful to humanity’s long-term potential16.

Social and Algorithmic Bias

The term bias as such does not refer solely to algorithms, computers, or other machines. Rather, it can be applied to any manifestation of social prejudice, stereotype, and discrimination toward an individual or a group17. However, since machines are said to be able to discriminate, one can observe the phenomenon of algorithmic bias, sometimes referred to as computer bias. One can speak of algorithmic bias in a

9 Michael Veale and Reuben Binns, ‘Fairer Machine Learning in the Real World: Mitigating Discrimination without Collecting Sensitive Data’ (2017) 4 Big Data & Society <https://journals.sagepub.com/doi/epub/10.1177/2053951717743530> accessed 24 October 2025.

10 ibid.

11 Dan Hendrycks and Mantas Mazeika, ‘X-Risk Analysis for AI Research’ (arXiv, 20 September 2022) <http://arxiv.org/abs/2206.05862> accessed 24 October 2025.

12 ibid 5.

13 ibid 13.

14 ibid.

15 ibid.

16 ibid 14.

17 Tony Busker, Sunil Choenni and Mortaza Shoae Bargh, ‘Stereotypes in ChatGPT: An Empirical Study’, Proceedings of the 16th International Conference on Theory and Practice of Electronic Governance (ACM 2023) <https://dl.acm.org/doi/10.1145/3614321.3614325> accessed 24 October 2025.

situation when “[digital] systems (…) systematically and unfairly discriminate against certain individuals or groups of individuals in favor of others”18 and there are “systematic errors that occur in decision-making processes, leading to unfair outcomes”19. This means that an algorithm discriminates when it denies an opportunity or a good and/or assigns an undesirable outcome20. There are three types of algorithmic bias:

1. Preexisting bias is rooted in social institutions, practices, and attitudes. It may reflect personal (individual) and group (societal) bias, e.g., that of subcultures and private or public institutions. Preexisting bias “takes on the prejudices either of its creators or the data it is fed”21.

2. Technical bias results from technical constraints and “originates from attempts to make human constructs such as discourse, judgments, or intuitions amenable to computers”22.

3. Emergent bias appears when end users interact with an algorithm and stems from changes in societal knowledge, population, or cultural values23.

Any of the kinds of bias enumerated above can be dangerous once they impact a broad group of users. As Friedman and Nissenbaum claim, “[c]omputer systems (…) are comparatively inexpensive to disseminate, and thus, once developed, a biased system has the potential for widespread impact”24. The emergence of large-scale biased algorithms might bear harmful consequences that are either allocative, i.e., “when opportunities or resources are withheld from certain people or groups”25, or representational, i.e., “when certain people or groups are stigmatized or stereotyped”26. Although as many as seven sources of algorithmic harm can be identified27, one of them, namely aggregation bias, is particularly important for this analysis. Aggregation bias occurs when a universal model is used where there are groups or types that should be considered differently. Inadequate functions of an AI-based

18 Batya Friedman and Helen Nissenbaum, ‘Bias in Computer Systems’ (1996) 14 ACM Transactions on Information Systems 332, 332 <https://nissenbaum.tech.cornell.edu/papers/biasincomputers.pdf>.

19 Ferrara (n 1) 2.

20 Friedman and Nissenbaum (n 18) 332.

21 Megan Garcia, ‘Racist in the Machine: The Disturbing Implications of Algorithmic Bias’ (2016) 33 World Policy Journal 111 <https://doi.org/10.1215/07402775-3813015> accessed 24 October 2025.

22 Friedman and Nissenbaum (n 18) 334.

23 ibid 336.

24 ibid 331.

25 Harini Suresh and John Guttag, ‘A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle’, Equity and Access in Algorithms, Mechanisms, and Optimization (ACM 2021) 2 <https://dl.acm.org/doi/10.1145/3465416.3483305> accessed 27 October 2025.

26 ibid.

27 ibid 4–6.

system may, in turn, harm or exclude these groups or types28. This analysis will show that people with disabilities are particularly vulnerable to this type of algorithmic bias.

Aggregation and Disability Bias

To reiterate, aggregation bias in AI arises when a single, generalized model is applied across all groups, even though certain groups or categories require distinct consideration29. This means that a given variable can represent something different across groups, since they may consist of people of various backgrounds, cultures, or norms. “Aggregation bias can lead to a model that is not optimal for any group or a model that is fit to the dominant population”30. It can be argued that AI technologies are unfit for people with disabilities (PWD) and that this unfitness is itself an example of aggregation bias. The topic of disability and AI is relatively unexplored, and when it is mentioned, the discussion focuses mainly on the questions of safety and accessibility. There are, however, more dimensions of disability bias in AI. Intelligent systems that are biased against PWD may lead to safety issues, limited access to key technologies, inadequate functionality, and unfair standards of normalcy.
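The mechanics of aggregation bias can be illustrated with a deliberately simplified simulation: two synthetic subpopulations whose input-output relationship differs, with one group ten times larger than the other. A single model fitted to the pooled data tracks the dominant group and serves the minority group far worse than a group-specific model would. All data below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic subpopulations with different input-output relationships.
# The "dominant" group (a) is ten times larger than the minority group (b).
x_a = rng.normal(size=1000); y_a = 2.0 * x_a + rng.normal(scale=0.1, size=1000)
x_b = rng.normal(size=100);  y_b = -1.0 * x_b + rng.normal(scale=0.1, size=100)

def fit_and_mse(x_train, y_train, x_eval, y_eval):
    """Fit a least-squares line and return its mean squared error on the evaluation set."""
    slope, intercept = np.polyfit(x_train, y_train, 1)
    return np.mean((slope * x_eval + intercept - y_eval) ** 2)

# One aggregated model for everyone vs. a model fitted to the minority group only.
x_all, y_all = np.concatenate([x_a, x_b]), np.concatenate([y_a, y_b])
print("minority error, pooled model:   ", fit_and_mse(x_all, y_all, x_b, y_b))
print("minority error, dedicated model:", fit_and_mse(x_b, y_b, x_b, y_b))
```

Reporting errors per group, rather than as a single aggregate, is what makes this kind of bias visible in the first place; an overall average would be dominated by the larger group.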

To begin with, aggregation bias in AI may pose a direct danger to people with disabilities. Since AI body and gesture recognition systems are trained mainly on healthy, physically typical people, they may be unable to identify PWD correctly. For instance, gesture recognition systems might not work well for amputees or people with tremor or spastic motion31. Also, body recognition systems, such as pedestrian-detection algorithms applied in self-driving cars, might not recognize people with posture differences caused by, e.g., cerebral palsy, Parkinson’s disease, advanced age, or wheelchair use32. There is considerable fear that self-driving cars may misrecognize people in wheelchairs due to insufficient data fed to AI algorithms and cause accidents involving them. Algorithms deployed in such vehicles “may not correctly identify [people with disabilities] as objects to avoid or may incorrectly estimate the speed and trajectory of those who move differently than expected”33. These fears are based on similar incidents that have happened before. For instance, in 2018 Elaine Herzberg, a pedestrian from Arizona who was pushing a bicycle,

28 ibid 6.

29 ibid 5.

30 ibid.

31 Anhong Guo and others, ‘Toward Fairness in AI for People with Disabilities: A Research Roadmap’ [2020] SIGACCESS Access. Comput. 2 <https://doi.org/10.1145/3386296.3386298> accessed 27 October 2025.

32 ibid.

33 ibid.

was misrecognized by a pedestrian-detection algorithm and killed by a self-driving Uber car34. Given the limited capability of body and gesture recognition AI systems as well as real-life examples of accidents involving self-driving cars, it is safe to assume that people with disabilities, including wheelchair users, can be classified as particularly vulnerable road users35. Thus, as long as the imperfection of gesture and body recognition systems prevails, there will be aggregation bias toward PWD.

Furthermore, AI aggregation bias can be observed in the inaccessibility of advanced technologies to people with disabilities. Algorithms responsible for face, object, scene, and text recognition as well as speech recognition and generation prove to be unfit for the needs of PWD. Algorithms used in face detection, identification, verification, and analysis are prone to misidentify and misclassify the facial expressions and emotions of PWD. Such algorithms “may not work well for people with conditions such as Down syndrome, achondroplasia, cleft lip or palate, or other conditions that result in characteristic facial differences”36. Moreover, many systems adopted by people with visual impairments (e.g., Microsoft Seeing AI, Google Lookout, KNFB Reader) are trained using photos taken by the sighted. As a result, these algorithms can misidentify objects, scenes, and text due to the poor framing, blur, unusual angles, and poor lighting characteristic of pictures taken by the blind or visually impaired37. Additionally, AI-supported speech systems are believed to be maladjusted to the needs of PWD. Automatic speech recognition (ASR) systems used to auto-generate subtitles do not work well with older adults’ speech and accents, including the so-called “deaf accent”. Also, text-to-speech technologies, such as voice fonts and voice assistants (e.g., Alexa, Siri, Google Assistant), do not accommodate “diverse user needs [because] people with cognitive or intellectual disabilities may require slower speech rates, whereas people with visual impairments may find rates too slow”38. Creating an AI system that is accessible to people with disabilities poses a great challenge to engineers and user experience designers. Disability is an umbrella term that covers a vast number of physical and mental health conditions and is thus difficult to classify and include in training datasets for AI systems39. Therefore, it might not be possible to fully prevent aggregation bias in AI.
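One practical way to surface the disparities described above is to evaluate recognition quality per user group rather than only in aggregate. The sketch below, using invented transcripts and a hypothetical “deaf accent” group label, computes a word error rate (WER) for each group so that degraded ASR performance for one group does not disappear into an overall average; it is a simplified illustration, not an evaluation of any real system.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Invented examples: (group label, reference transcript, ASR output).
samples = [
    ("typical speech", "turn on the kitchen light", "turn on the kitchen light"),
    ("deaf accent",    "turn on the kitchen light", "turn on a kitten like"),
]

for group, ref, hyp in samples:
    print(f"{group}: WER = {wer(ref, hyp):.2f}")
```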

34 Daisuke Wakabayashi, ‘Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam’ The New York Times (19 March 2018) <https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html> accessed 27 October 2025.

35 Juan Guerrero-Ibañez and others, ‘Assistive Self-Driving Car Networks to Provide Safe Road Ecosystems for Disabled Road Users’ (2023) 11 Machines 967 <https://www.mdpi.com/2075-1702/11/10/967> accessed 27 October 2025.

36 Guo and others (n 31) 2.

37 ibid 2–3.

38 ibid 3.

39 Meredith Whittaker and others, ‘Disability, Bias, and AI’ (AI Now Institute at NYU 2019) <https://ainowinstitute.org/wp-content/uploads/2023/04/disabilitybiasai-2019.pdf>.

Additionally, the inadequate functionality of AI-supported conversational agents and information retrieval can make it difficult for people with disabilities to use the Internet and contribute to the spread of harmful stereotypes. Conversational agents are deployed in, among other areas, customer service, education, and health support. They are based on various models, such as automatic speech recognition, text analysis, text-to-speech, and speaker analysis40. Given varying communication needs, “conversational agents may not work well for people with cognitive and/or intellectual disabilities, resulting in poor user experience”41. In this case, a one-size-fits-all approach to technology does not work. In other words, disability encompasses a huge variety of physical and mental health conditions that should be taken into consideration.

For example, conversational agents may need to correctly interpret atypical spelling or phrasing from users with dyslexia or may need to adjust their vocabulary level to be understood by someone with dementia. Further, conversational agents may need to support conversation in a user’s preferred expressive medium, which may not be written language for some disability segments – i.e., it may be important to support communication via sign languages (for people who are deaf) or via pictures and/or icons (for people with aphasia or autism)42

The lack of such accommodations may exclude PWD from using the Internet freely. Moreover, information retrieval (IR), i.e., web search engines and other systems used to rewrite, autocomplete, correct spelling, rank search results, summarize content, and answer questions, may spread stereotypes and misinformation about people with disabilities43. “It is likely that many IR systems may inadvertently amplify existing biases against PWD, such as through returning stereotypical and/or over- and under-represented content in search results”44. For instance:

Advertising algorithms and other types of recommender systems may hold particular risk for PWD by actively propagating discriminatory behavior such as through differential pricing for products and services and/or differential exposure to employment or other opportunities45.

40 Guo and others (n 31) 4.

41 ibid.

42 ibid.

43 ibid.

44 ibid.

45 ibid.

In 2019, the US Department of Housing and Urban Development charged Facebook with violating the Fair Housing Act. The company was accused of allowing advertisers to exclude certain groups of users, including those interested in accessibility for the disabled, from seeing housing advertisements46. This and similar situations may influence the overall experience of PWD using information retrieval systems.

The discussion above shows that AI systems may have a strong tendency to display aggregation bias toward PWD. Safety concerns, accessibility issues, and inadequate functionality, which hinder the daily use of technology by PWD, are important to the discussion. However, there exists yet another issue, arguably the most important one, that should be mentioned. It can be argued that AI bias may also have social consequences for PWD. The maladjustment of AI technologies to the needs of people with disabilities enforces notions of “normal” (i.e., without disabilities) and “abnormal” (i.e., with disabilities)47. Ultimately, artificial intelligence can be treated as unfair to PWD because “versions of normalcy reflected in the cultures and logics of corporate and academic tech environments are encoded in data and design and amplified through AI systems”48. Thus, in line with the social model of disability, disability becomes the product of disabling environments and attitudes49. In a way, due to aggregation bias, PWD experience a phenomenon that can be labeled a Reverse Turing Test, in which humans must prove their humanity to AI systems and adjust to fit normative, predetermined categories to be recognized as human50. Additionally, biased algorithms may limit individual freedoms, human agency, and autonomy, as well as reinforce societal power dynamics51. Biased intelligent systems can violate the right to privacy, fail to capture the nuances of complex situations, lead to unfair outcomes, and remove the human element from important decisions52. In essence, the existence of biased AI “may undermine public trust in technology”53 as well as promote skepticism toward the application of algorithms and “decreased adoption or even rejection of new technologies”54.

46 Adam Gabbatt, ‘Facebook Charged with Housing Discrimination in Targeted Ads’ The Guardian (28 March 2019) <https://www.theguardian.com/technology/2019/mar/28/facebook-ads-housing-discrimination-charges-us-government-hud> accessed 27 October 2025.

47 Whittaker and others (n 39) 12.

48 ibid.

49 ibid 4.

50 ibid 14.

51 Ferrara (n 1) 5.

52 Aaron Smith, ‘Attitudes toward Algorithmic Decision-Making’ (Pew Research Center, 16 November 2018) <https://www.pewresearch.org/internet/2018/11/16/attitudes-toward-algorithmic-decision-making/> accessed 30 October 2025.

53 Ferrara (n 1) 5.

54 ibid.

Implications for AI Fairness and Recommendations

As can be seen, aggregative algorithmic bias against people with disabilities is a complex issue that goes well beyond the questions of safety and accessibility. The analysis of this problem points to the fact that algorithmic bias may be an unavoidable feature of intelligent systems. This tendency toward bias is signaled not only by researchers but also by public opinion. For example, as many as 58% of Americans think that computer programs will always reflect some level of human bias55. What is more, it is believed that impartial, neutral algorithms can never be created because they are intended to identify, sort, and classify as well as “seduce, coerce, discipline, regulate, and control”56.

While programmers might seek to maintain a high degree of mechanical objectivity – being distant, detached, and impartial in how they work and thus acting independent of local customs, culture, knowledge, and context – in the process of translating a task or a process or calculation into an algorithm they can never escape these57.

It is not only the inherent design of algorithms but also available resources, the quality of training data, standards, protocols, laws, hardware, platforms, bandwidth, and languages that constrain algorithmic objectivity58. While some of the causes of AI-powered bias are an innate part of algorithmic systems, others depend on external factors and thus can be altered. In other words, algorithms cannot be expected to entirely stop amplifying existing inequalities or discriminating against marginalized groups based on gender, skin color, ethnicity, or disability. However, some changes to current decision-making processes, policies, and research approaches can be made to minimize the risk.

As shown above, the issues of biased algorithms fall within the scope of research on responsible AI. They should not be overlooked, as they may seriously impact people’s relationships with advanced technologies and harm entire social groups. An example of such a negative influence, concerning people with disabilities, was analyzed in this policy paper. Even though AI bias cannot be entirely averted, it is important to remember that there are some ways

55 Smith (n 52).

56 Rob Kitchin, ‘Thinking Critically about and Researching Algorithms’ (2017) 20 Information, Communication & Society 14 <https://doi.org/10.1080/1369118X.2016.1154087> accessed 27 October 2025.

57 ibid 17–18.

58 ibid 18.

in which algorithmic bias against PWD can be mitigated. The following steps are recommended to reduce the problem:

1. Train AI models with atypical bodily and cognitive features in mind

As mentioned before, algorithmic bias against people with disabilities may result from the fact that the majority of training materials for AI systems come from people without disabilities. Algorithms responsible for body and gesture recognition as well as face, object, scene, text, and speech recognition rely heavily on the typical bodily images, gestures, and behaviors of people who are not disabled. Consequently, the current approach to designing and training AI-based systems needs to be reformed to avoid the misidentification and misclassification of PWD and to minimize the risk of discrimination against, and the promotion of biased views on, disability.

In AI systems, imperfect or defective input produces output of similar quality. Thus, it is crucial to provide more diversified training materials for algorithms, i.e., include images of PWD, to reduce the problem of misclassification and misinterpretation of atypical bodily features. Also, developers of algorithms should incentivize interactions between AI-based systems and PWD to ensure that various cognitive needs are recognized and met. In order to decide on the kind of input that should be used for training, AI developers should take into consideration the differences between the needs of mentally and physically disabled users. This differentiation could be facilitated by close collaboration between technicians and disability experts, for instance psychologists and medical doctors. What is more, AI developers should bear in mind that, given the complexity of the issue, it is virtually impossible to create a “one-size-fits-all” system. However, this does not mean that there should be no procedures for the development process at all. The crucial step in designing AI systems compatible with the needs of PWD should take place before the actual training and programming happens. An extensive conceptualization phase could help define objectives, outline project-specific procedures, and decide on the training materials (e.g., images of PWD and voice recordings) needed for a particular AI system to function well.
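As a minimal, hypothetical illustration of the “diversified training materials” step described above, the snippet below first audits how a labeled dataset is distributed across invented disability-related subgroups and then naively oversamples the underrepresented ones. The subgroup labels and counts are placeholders; in practice, such categories would be defined together with disability experts, as argued above, and oversampling is only one of several possible mitigations.

```python
import random
from collections import Counter

# Invented training records: (sample_id, subgroup label).
records = [(i, "no reported disability") for i in range(950)]
records += [(950 + i, "wheelchair user") for i in range(30)]
records += [(980 + i, "limb difference") for i in range(20)]

# 1. Audit: how is the training set distributed across subgroups?
counts = Counter(label for _, label in records)
for label, n in counts.items():
    print(f"{label}: {n} samples ({n / len(records):.1%})")

# 2. Naive mitigation: oversample minority subgroups up to the largest one.
target = max(counts.values())
balanced = []
for label in counts:
    subset = [r for r in records if r[1] == label]
    balanced += subset + random.choices(subset, k=target - len(subset))

print("balanced size per subgroup:", Counter(l for _, l in balanced))
```

Collecting genuinely new, representative data from PWD would be preferable to duplicating existing samples; the audit step, however, is what makes the under-representation visible at all.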

In sum, AI fairness can be reached by considering the needs of PWD on various levels of designing and developing algorithmic systems. In other words, a PWD-friendly AI system should account for not only the technical issues of safety and accessibility but also consider social and philosophical perspectives, such as the question of “normal” vs

“abnormal” or human agency, autonomy, and societal power dynamics. Therefore, creating an AI system that is fair toward people with disabilities requires interdisciplinary cooperation between technicians and other experts.

2. Organize consultations with PWD and disability specialists

The inability to provide universal solutions is the overarching motif of the analysis of threats posed by biased algorithms to PWD. Disability is an umbrella term that encompasses a wide array of physical and cognitive capabilities. As indicated above, it remains resistant to “one-size-fits-all” measures to prevent algorithmic bias. For instance, a wheelchair user benefits from a radically different type of AI-based system than a person with a sight impairment. True, there exist comprehensive frameworks that describe and classify various types of disability59 and vast research on the social dimensions of disability60. However, being mostly addressed to experts in the field, these documents might be difficult to comprehend for people who lack the specialist knowledge and vocabulary, in this case programmers and designers of AI-based systems. Hence, other means should be used to ensure proper recognition of the needs of particular groups of people with disabilities. The technical process of creating a system based on AI solutions should be supported by consultations with PWD and disability specialists at all stages of work.

3. Continue investing in research on responsible AI

As indicated by the previous recommendation, during the initial stages of designing an algorithm-based system, programmers and developers should consult disability specialists, i.e., medical doctors, psychologists, psychiatrists, and caretakers, as well as the PWD who will potentially use the system, to recognize their unique needs and implement solutions that accommodate them. This way, the functionality of an AI system will be a better fit for those with special needs. Once the system is developed and introduced to the broader public, feedback sessions should be organized to identify and fix any flaws and inconveniences for disabled users. Although some organizations advocate for the inclusion of disability in AI ethics debates (e.g., Disability Ethical AI? Alliance (DEAI), Equitable AI Alliance, American Foundation for the Blind (AFB), and the NYU Center for Disability Studies), they are limited by their grassroots character and operate mostly by means of recommendations or company-user

59 World Health Organization, ‘International Classification of Functioning, Disability and Health’ <https://icd.who.int/browse/2025-01/icf/en#1248248764> accessed 26 August 2025.

60 Jerome E Bickenbach, ‘The International Classification of Functioning, Disability and Health and Its Relationship to Disability Studies’, Routledge Handbook of Disability Studies (Routledge 2012) <https://www.taylorfrancis.com/chapters/edit/10.4324/9780203144114-11/international-classi%EF%AC%81cation-functioning-disability-health-relationship-disability-studies-jerome-bickenbach>.

dialogues. Therefore, there is a need for more decisive legislative action that will hold AI companies responsible for their social impact.

The final recommendation is concerned with mitigating algorithmic bias systemically. In order to ensure equal and unbiased access to advanced technologies, it is necessary to develop a solid scientific body of research on responsible AI, especially AI fairness. Universities and national governments alike should invest in research grants, promote scientific collaboration between research facilities and Big Tech companies, and organize academic exchanges for students and scientists who wish to contribute to the idea of AI responsibility, especially in the area of AI bias toward PWD, which is still excluded from mainstream AI fairness discussions. As the authors of the 2019 report of the AI Now Institute claim, there are areas where more research and intervention are needed61.

In recent years, the academic world has experienced a period of “AI summer”, with new interdisciplinary research perspectives and innovations that have impacted AI policies worldwide. This increased interest in the topic has highlighted the need for global action and resulted in regulations such as the 2024 AI Act of the EU, the 2024 Global Digital Compact of the UN, and the 2025 AI Action Plan of the US, which foster digital cooperation and the governance of digital technologies and artificial intelligence. These examples show that comprehensive research on AI safety and responsibility has a real impact on governmental actions. Therefore, such research should be continued in order to support global actors with up-to-date knowledge.

Nonetheless, it is important to note that the majority of currently developed AI regulations are concerned with a wide scope of responsibility and fairness questions, leaving the needs of particular groups somewhat behind. In order to secure full AI safety, more attention should be given to research perspectives that are marginalized or simply overlooked. The needs of people with disabilities should be the subject of well-funded, mainstream, interdisciplinary scientific collaboration that includes fields other than computer science, e.g., biology, psychiatry, philosophy, social sciences, and law. Only then will it be possible to establish a universal code of conduct that will help develop technology that is beneficial, safe, and fair for people with special needs. A well-developed body of research and a code of conduct for responsible AI could constitute a reference point and serve as a backbone for present-day and future legislation regarding AI and other advanced technologies, similar to the previously mentioned AI Act of the European Union or the UN Global Digital Compact.

61 Whittaker and others (n 39) 2.

Additionally, the idea of responsible AI should be promoted among the public. Close collaboration between researchers, think tanks, and NGOs that specialize in the human-oriented aspects of digital transformation should be advocated. Raising societal awareness of AI bias against people with disabilities could put more political pressure on governments to regulate it and, as a result, push companies to develop products designed with the real needs, equality, and safety of PWD in mind.

Concluding Remarks

In conclusion, given the complexity of algorithmic bias against people with disabilities, a wide range of remedial measures must be taken. To mitigate the problem, decisive action on two levels is crucial. On the one hand, the needs of PWD should be recognized by designers and developers at the early stages of the development of algorithm-based systems. Providing sufficient training materials for algorithms and organizing feedback sessions with PWD and disability experts for the engineers developing an AI-based product are ways of accommodating the real needs of disabled users. On the other hand, systemic changes based on thorough research and interdisciplinary scientific dialogue should be introduced. A universal code of conduct within the scope of AI fairness will be a useful tool for decision-makers responsible for legislation in the field of advanced technologies and digitalization.

This policy paper demonstrates that responsible AI is a key area of research in an era characterized by rapid technological development. The recommended solutions to the problem of algorithmic bias against people with disabilities should be taken as a guideline for the overall direction of AI bias reduction efforts. To ensure the fairness of AI-based systems, future developments in artificial intelligence and other advanced technologies should go hand in hand with consideration of the assumptions of research on responsible AI.


Bibliography

Bickenbach JE, ‘The International Classification of Functioning, Disability and Health and Its Relationship to Disability Studies’, Routledge Handbook of Disability Studies (Routledge 2012) <https://www.taylorfrancis.com/chapters/edit/10.4324/9780203144114-11/international-classi%EF%AC%81cation-functioning-disability-health-relationship-disability-studies-jerome-bickenbach>

Busker T, Choenni S and Shoae Bargh M, ‘Stereotypes in ChatGPT: An Empirical Study’, Proceedings of the 16th International Conference on Theory and Practice of Electronic Governance (ACM 2023) <https://dl.acm.org/doi/10.1145/3614321.3614325> accessed 24 October 2025

Ferrara E, ‘Fairness and Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, and Mitigation Strategies’ (2023) 6 Sci 3 <https://www.mdpi.com/2413-4155/6/1/3> accessed 24 October 2025

Friedman B and Nissenbaum H, ‘Bias in Computer Systems’ (1996) 14 ACM Transactions on Information Systems 332 <https://nissenbaum.tech.cornell.edu/papers/biasincomputers.pdf>

Gabbatt A, ‘Facebook Charged with Housing Discrimination in Targeted Ads’ The Guardian (28 March 2019) <https://www.theguardian.com/technology/2019/mar/28/facebook-ads-housing-discrimination-charges-us-government-hud> accessed 27 October 2025

Garcia M, ‘Racist in the Machine: The Disturbing Implications of Algorithmic Bias’ (2016) 33 World Policy Journal 111 <https://doi.org/10.1215/07402775-3813015> accessed 24 October 2025

Guerrero-Ibañez J and others, ‘Assistive Self-Driving Car Networks to Provide Safe Road Ecosystems for Disabled Road Users’ (2023) 11 Machines 967 <https://www.mdpi.com/2075-1702/11/10/967> accessed 27 October 2025

Guo A and others, ‘Toward Fairness in AI for People with Disabilities: A Research Roadmap’ [2020] SIGACCESS Access. Comput. <https://doi.org/10.1145/3386296.3386298> accessed 27 October 2025

Hendrycks D and Mazeika M, ‘X-Risk Analysis for AI Research’ (arXiv, 20 September 2022) <http://arxiv.org/abs/2206.05862> accessed 24 October 2025

Kitchin R, ‘Thinking Critically about and Researching Algorithms’ (2017) 20 Information, Communication & Society 14 <https://doi.org/10.1080/1369118X.2016.1154087> accessed 27 October 2025

Smith A, ‘Attitudes toward Algorithmic Decision-Making’ (Pew Research Center, 16 November 2018) <https://www.pewresearch.org/internet/2018/11/16/attitudes-toward-algorithmic-decision-making/> accessed 30 October 2025

Stanford Institute for Human-Centered Artificial Intelligence, ‘Artificial Intelligence Index Report 2024’ (Stanford, CA: Stanford Institute for Human-Centered Artificial Intelligence 2024)

Stryker C, ‘What Is Responsible AI? | IBM’ (IBM, 2024) <https://www.ibm.com/think/topics/responsible-ai> accessed 24 October 2025

Suresh H and Guttag J, ‘A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle’, Equity and Access in Algorithms, Mechanisms, and Optimization (ACM 2021) <https://dl.acm.org/doi/10.1145/3465416.3483305> accessed 27 October 2025

Veale M and Binns R, ‘Fairer Machine Learning in the Real World: Mitigating Discrimination without Collecting Sensitive Data’ (2017) 4 Big Data & Society <https://journals.sagepub.com/doi/epub/10.1177/2053951717743530> accessed 24 October 2025

Wakabayashi D, ‘Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam’ The New York Times (19 March 2018) <https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html> accessed 27 October 2025

Whittaker M and others, ‘Disability, Bias, and AI’ (AI Now Institute at NYU 2019) <https://ainowinstitute.org/wp-content/uploads/2023/04/disabilitybiasai-2019.pdf>

World Health Organization, ‘International Classification of Functioning, Disability and Health’ <https://icd.who.int/browse/2025-01/icf/en#1248248764> accessed 26 August 2025

College of Europe in Natolin

Nowoursynowska 84, 02-797 Warsaw, Poland

coleuropenatolin.eu

Natolin Policy Papers Series
