Discipline, Punish, Design: Policing the Police


AI VISUALIZATION FOR POLICE BRUTALITY REPORTING
Strategic Design and Management in New Economies ● Fall 2021
Aditi ● Arianna ● Nina ● Sebastian ● Rashad


FINAL PROJECT PROPOSAL

We plan to further iterate on and expand our AI Visualization for Police Brutality Reporting for the final project. Specifically, we intend to elaborate on how best to optimize the AI system to eliminate bias, since we understand that bias is a major concern for AI systems of this nature.

We also plan to elaborate on how we can quantify the scoring system so that it can suggest actions for police officers and departments to act on, for the betterment of their units and the overall safety of their workers and communities.



PREVIOUS VISUAL ITERATION

AI VISUALIZATION FOR POLICE BRUTALITY REPORTING


CONTENTS

1. Optimizing the AI system in order to eliminate bias
2. Quantifying the scoring system to suggest actions
3. System adoption & end cases
4. Final visualization of the AI system


PROBLEM

Police brutality occurs within targeted communities in the US with limited accountability, and this reality is not made visible to all the actors in our society.

SOLUTION

Use artificial intelligence to generate a scoring system that lessens the likelihood of recurring police brutality within targeted communities.


How do we know a problem exists?




In 2021, there have been only 12 days on which the police did not kill someone.

According to the mappingpoliceviolence.org database, updated as of 12/10/2021


Police have killed 966 people in 2021.

According to the mappingpoliceviolence.org database, updated as of 12/10/2021


Black people have been 27% of those killed by police in 2021 despite being only 13% of the population.

According to the mappingpoliceviolence.org database, updated as of 12/10/2021


Over the course of their lives, about 1 in every 1,000 Black men can expect to be killed by police, with people of color significantly more likely than white women and men to be killed by police. For young men of color, police use of force is among the leading causes of death.*

*According to the Proceedings of the National Academy of Sciences


There is no accountability: 98% of killings by police from 2013 to 2020 have not resulted in officers being charged with a crime.

According to the mappingpoliceviolence.org database, updated as of 12/10/2021


Why does this matter to us?


Our Motivation

As we’ve seen in recent years, racism & oppression are systemic issues embedded in our society. Unfortunately, the justice & policing systems are not designed for the benefit of all citizens in our community. Utilizing AI systems thinking to design a reformed system is an exciting application of this emerging technology for the betterment of oppressed groups within our society. It excites me to see a new technology used to add such visible value to our society.

- Nina Vujasinovic


Our Motivation

Growing up a Black man in America, I am a direct target of the inhumane treatment and brutality that U.S. law enforcement has practiced on communities of color for nearly three decades. While there are human organizations that lead the way for protecting, informing and advocating for Black lives, I recognize it is a systemic design issue. One method to combat human-created design f***ups is for humans to redesign altogether. Only in this iteration, we may be able to create non-human designs that can mitigate the problem better than humans can.

- Rashad Williams


Our Motivation

As our world transitions towards automation and AI, we must ensure the incoming technologies minimize misery and corruption instead of adding to it. History has clearly shown that the police cannot police themselves. Therefore, I am passionate about utilizing these emerging technologies to initiate true police reform.

- Arianna Tamaddon


Our Motivation

Social inequality is a topic that needs to be addressed from its root causes. Current economic models rely on several institutions that are part of their ecosystem and that must also be redesigned. Designing a solution with AI motivates me to set new frontiers and rules beyond what we have seen until now.

- Sebastian Munoz Awad


Our Motivation

The Internet has proven to be a great power for change, and AI further increases that power. I believe technology can positively impact our response to global challenges, and therefore I’m excited to utilize it to protect true democratic values.

- Aditi Patel


What biases currently exist?


BIASES IN AI

AI can help reduce bias, but it can also incorporate and scale biases.

AI models may be trained on data that reflects human decisions or the effects of societal and historical inequities.

Bias can also be introduced into the data through how the data is collected or selected for its intended use.

In criminal justice models, oversampling certain neighborhoods because they are over-policed can result in recording more crime, which in turn results in more policing (see the toy simulation below).

Data generated by users can also create a feedback loop that leads to bias.

*According to McKinsey Global Institute, 2019
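To make the oversampling feedback loop concrete, here is a toy Python simulation. It is a minimal sketch with made-up neighborhoods, rates, and patrol shares (our own illustration, not figures from the McKinsey report):

# Recorded incidents track where patrols already are, and the next round of
# patrols is allocated from those records, so neighborhood A's initial
# over-policing never corrects itself even though both neighborhoods have
# identical underlying incident rates.
true_incident_rate = {"A": 0.05, "B": 0.05}   # identical true rates (assumed)
patrol_share = {"A": 0.8, "B": 0.2}           # A starts out over-policed (assumed)

for step in range(5):
    # Recorded incidents are proportional to patrol presence, not the true rate.
    recorded = {n: true_incident_rate[n] * patrol_share[n] * 1000 for n in patrol_share}
    total = sum(recorded.values())
    # Next round's patrols are allocated from the recorded data: the loop closes.
    patrol_share = {n: recorded[n] / total for n in recorded}
    print(step, {n: round(share, 2) for n, share in patrol_share.items()})
# Neighborhood A keeps 80% of patrols every round; the system's own records
# never reveal that the two true incident rates are the same.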


ELIMINATING BIASES IN AI

The underlying data, rather than the algorithm itself, is most often the main source of bias

Minimizing bias in AI will be critical to increasing people’s trust in the system & ultimately allowing AI to reach its potential to drive benefit for society

Human judgment is still needed to ensure AI supported decision making is fair

*According to McKinsey Global Institute, 2019


What is “fair”?

1. We often accept outcomes that derive from a process considered “fair,” but procedural fairness ≠ outcome fairness
2. If the group making a decision contains a diversity of viewpoints, what it decides is deemed fair, but this is not always the case
3. We need to hold humans more accountable & hold human decision making to a higher standard

*According to McKinsey Global Institute, 2019


ELIMINATING BIASES IN AI

➢ Establish best practices to test for & mitigate bias, including: improving data collection through more cognizant sampling; using internal teams or third parties to audit data & models; and being transparent about processes & metrics so the steps taken to promote fairness can be understood.

➢ Acknowledge & engage in fact-based conversations about potential biases in human decision making, and consider how to improve human-driven decision making & processes in the future.

➢ Consider the situations & end use cases in which automated decision making is acceptable versus those in which humans should be involved. When AI algorithms provide recommendations, humans may need to double-check them, especially in sensitive fields. Transparency about the algorithm’s confidence in its recommendation can also help humans understand how much weight to give it (see the sketch after this list).

*According to McKinsey Global Institute, 2019
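One lightweight way to act on that last point is to attach the model’s confidence to every recommendation and route low-confidence cases to a person. The sketch below is a minimal Python illustration; the names and the review threshold are our own assumptions, not a defined project API:

from typing import NamedTuple

class Recommendation(NamedTuple):
    action: str
    confidence: float          # 0.0-1.0, as reported by the model
    needs_human_review: bool

REVIEW_THRESHOLD = 0.85        # assumed policy value, not set by the project

def wrap_recommendation(action: str, confidence: float) -> Recommendation:
    # Low-confidence recommendations are flagged for a human reviewer.
    return Recommendation(action, confidence, confidence < REVIEW_THRESHOLD)

print(wrap_recommendation("Flag incident report for supervisor follow-up", 0.62))
# needs_human_review=True, so a person, not the system, makes the final call.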


ELIMINATING BIASES IN AI: PRE-SYSTEM CREATION

1. The importance of broader representation
➔ Within AI design, model iterations, deployment, and governance
2. Diversity within AI teams reflects diversity within training data sets
➔ Removing descriptor information from past police reports
➔ AI word recognition to seek and replace descriptions of race, eye/hair color, names, locations, neighborhoods (see the redaction sketch below)
3. Giving extra attention to historical data
➔ Removing data from training sets that comes from historically racially biased geographical regions
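The descriptor-removal step in item 2 could start as simple keyword redaction. The sketch below is a minimal Python illustration; the term lists, function name, and example sentence are hypothetical, and a production system would rely on maintained lexicons and named-entity recognition rather than fixed keyword lists:

import re

# Illustrative (hypothetical) descriptor lists.
RACE_TERMS = ["black", "white", "hispanic", "latino", "asian"]
APPEARANCE_TERMS = ["blue-eyed", "brown-eyed", "blond", "blonde", "dark-haired"]

def redact_descriptors(report_text, neighborhoods):
    """Replace race, appearance, and neighborhood mentions with neutral tokens."""
    redacted = report_text
    for term in RACE_TERMS:
        redacted = re.sub(rf"\b{re.escape(term)}\b", "[RACE]", redacted, flags=re.IGNORECASE)
    for term in APPEARANCE_TERMS:
        redacted = re.sub(rf"\b{re.escape(term)}\b", "[APPEARANCE]", redacted, flags=re.IGNORECASE)
    for place in neighborhoods:
        redacted = re.sub(rf"\b{re.escape(place)}\b", "[LOCATION]", redacted, flags=re.IGNORECASE)
    return redacted

print(redact_descriptors("Officer observed a Black, blue-eyed male near Crown Heights.",
                         neighborhoods=["Crown Heights"]))
# -> "Officer observed a [RACE], [APPEARANCE] male near [LOCATION]."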


ELIMINATING BIASES IN AI: POST-SYSTEM CREATION

◆ Datasheets for data sets*
➔ Document a dataset’s motivation, composition, collection process, recommended uses, etc.
➔ Encourage the machine learning community to prioritize transparency and accountability

◆ Model cards for model reporting*
➔ Documentation detailing the context in which models are intended to be used, the performance evaluation procedures, and other relevant information
➔ Clarify the intended use cases of AI models and minimize their usage in contexts for which they are not well suited (a minimal model-card sketch follows below)

◆ Ongoing impact assessments and audits to check for fairness in systems before & after deployment

*Based on Cornell University arXivLabs, 2019
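In the spirit of the model-card proposal, a card can be as simple as a structured record that travels with the model. The sketch below is a minimal Python illustration; the field names and example values are our own assumptions, not a standard schema or a decision this project has made:

from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    out_of_scope_uses: list
    training_data: str            # pointer to the dataset's datasheet
    evaluation_procedure: str
    fairness_metrics: dict = field(default_factory=dict)

# Hypothetical example values, for illustration only.
card = ModelCard(
    model_name="incident-severity-scorer-v0",
    intended_use="Department-level scoring of reported use-of-force incidents",
    out_of_scope_uses=["Individual sentencing decisions", "Predictive patrol routing"],
    training_data="redacted_incident_reports_2013_2020 (see accompanying datasheet)",
    evaluation_procedure="Held-out yearly splits audited by a third-party reviewer",
    fairness_metrics={"demographic_parity_gap": 0.03},
)
print(card.intended_use)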


What values will our outputs be based on?


VALUES FOR SCORING

OUTPUTS FOR SCORING
➔ Principles of Democracy
➔ Preservation of human life
➔ Citizens relationship management (CRM)
➔ Accountability to the serving community
➔ Commitment to professionalism
➔ Standards of integrity
➔ Policing methods


Understanding Our Scoring Principles

Principles of Democracy
It is incumbent upon the police to enforce the law and deliver a variety of other services in a manner that not only preserves, but also extends, precious democratic values. It is in this context that the police become the living expression of the meaning and potential of a democratic form of government.

Citizens Relationship Management (CRM)
Improve the quality of life for citizens and reduce the fear generated by both the reality and the perception of crime & police. Crime is not solely a police problem and should not be treated as such; it is therefore important for the police department to involve the community in its operations.

Preservation of human life
The police department must believe that human life is the most precious resource. Therefore, the department, in all aspects of its operations, must place its highest priority on the protection of life, regardless of race, origin, language, or religious values.

Accountability to the serving community
An important element of accountability is openness. Secrecy in police work is not only undesirable but unwarranted. The police become more transparent by presenting reports & findings to the serving community upon the community’s request.


Understanding Our Scoring Principles (cont.)

Commitment to professionalism
Rely less on interpersonal biases and relationships as a barrier to receiving protection from law enforcement. Professionalism in this context means trusting police to do what they are commissioned to do, without personal prejudice.

Standards of integrity
The transparency inherent in our solution helps to rebuild integrity and trust between law enforcement and the communities most impacted by police brutality.

Policing methods
We see our scoring values as an offering to government systems that may need support deploying mechanisms of accountability, risk management, and internal systems management.


SCORING VALUES AND OUTPUTS

Score scale markers: 0, 60, 75, 90, 100 (a sketch of how principle-level scores could map onto this scale follows below)
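One way the seven principles could roll up into this scale is a weighted aggregate with named bands at the markers above. The sketch below is a minimal Python illustration; the weights, thresholds, and band labels are our own assumptions, not values the project has defined:

# Illustrative weights (sum to 1.0); the project does not prescribe these.
PRINCIPLE_WEIGHTS = {
    "principles_of_democracy": 0.15,
    "preservation_of_human_life": 0.25,
    "citizens_relationship_management": 0.15,
    "accountability_to_community": 0.15,
    "commitment_to_professionalism": 0.10,
    "standards_of_integrity": 0.10,
    "policing_methods": 0.10,
}

# Bands keyed to the scale markers 90 / 75 / 60 / 0; labels are assumptions.
BANDS = [(90, "exemplary"), (75, "meets standard"),
         (60, "needs improvement"), (0, "intervention required")]

def department_score(principle_scores):
    """principle_scores maps each principle name to a 0-100 value."""
    total = sum(weight * principle_scores.get(name, 0.0)
                for name, weight in PRINCIPLE_WEIGHTS.items())
    label = next(band for threshold, band in BANDS if total >= threshold)
    return round(total, 1), label

print(department_score({name: 80.0 for name in PRINCIPLE_WEIGHTS}))
# -> (80.0, 'meets standard')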


What change will come from how we use AI?


ADOPTION & END CASES

Adoption is based on a cultural change
❏ Using new technology with communities requires that new relationships be built with those communities
❏ New “machines” need to be matched with the operational knowledge that police forces use, or that knowledge needs to be redesigned
❏ Tech simplifies some human jobs; the time that is “saved” should be used to interact with communities
❏ Talent must be redefined: the new police force will need other skills
❏ Culture plays a central role throughout this process

*According to Deloitte, 2019


Framing Change Management as a Data-Driven Approach:
➔ To generate changes in the police institution and secure the adoption of this AI solution, there needs to be a focus on how data is managed
➔ Transformations must be linked to data
➔ Changes in the police academy, leadership, and recruitment processes need to be grounded in data
➔ No data in the transformation process = no bad reputation of the change process
➔ To address the biases that drive negative perceptions of targeted communities, we must enact sustainable change in human psychology through our scoring outputs


Managing Change in the Relationship Between Scoring Outputs and Institutions

DATA is used to measure quantitative performance (e.g., crimes solved, arrests made, tickets written). It must be connected to our solution to generate cultural change.


Managing Change in the Relationship Between Scoring Outputs and Human Psychology

Principles of Democracy
End Use: Improve the perception of good performance in police forces so that they are more just and transparent to communities

Standards of integrity
End Use: Empower communities through the power of awareness, transparency, and checks/balances that will hold true in the court of law

Commitment to professionalism
End Use: Create a standard for professionalism (verbal and non-verbal) that safeguards citizens from microaggressions that may escalate a situation

Preservation of human life
End Use: Fewer instances of death by police force by provoking empathy and care for citizens’ livelihood versus a ‘kill or be killed’ mindset

Citizens relationship management
End Use: Assess qualitative criteria in a way that depicts emotional intelligence and tightens the margin for culturally driven prejudice and other soft hazards

Accountability to the serving community
End Use: A standard of responsibility that allows communities to interrogate an officer’s score and the rationale behind neighborhood-level assignments

Policing methods
End Use: Improved policing systems and a decline in cases involving inappropriate use of force in historically targeted communities


FINAL ITERATION

AI VISUALIZATION FOR POLICE BRUTALITY REPORTING


DATA STACKS DEFINED: [visualization graphic]

SCORING INPUTS DEFINED: [visualization graphic]

SCORING OUTPUTS DEFINED: [visualization graphic]


With the use of our system, we hope to restore trust & transparency between communities & the police forces that are meant to serve & protect them.


Thank You


Appendix


Team Reflection
→ Held weekly team meetings to discuss deliverables & delegate roles/tasks among team members
→ Structured the project around: the problem, current AI applications within this field, research, insight synthesis, and iterative visualizations
→ Utilized each team member’s strengths while also allowing opportunities for growth & personal development within this project


Arianna’s Contributions

Visual design lead on original graphics for AI design project including our iterations

Final iteration graphics for final project

Research for the final
○ Eliminating bias in AI


Nina’s Contributions

Research lead & insight synthesis throughout the design project & final project
○ Researched eliminating bias for the final

Final presentation outline & flow of information

Introducing and showcasing the gravity of the problem - telling this story in a compelling way with strong images & powerful stats


Sebastian’s Contributions

Facilitator for creative discussions

Structure for our scope of work

Research and ideation of cultural changes that must be addressed under a change management perspective


Rashad’s Contributions

Strategy and utilization development

Visualization development

Content and copywriting support


Aditi’s Contributions

Defining scoring principles

Creation of scoring values

Development of scoring matrix

Visualization developments


APPENDIX: SOURCES
https://mappingpoliceviolence.org/
https://www.pnas.org/content/116/34/16793
https://www.mckinsey.com/featured-insights/artificial-intelligence/tackling-bias-in-artificial-intelligence-and-in-humans
https://hbr.org/2017/07/data-can-do-for-change-management-what-it-did-for-marketing
https://www2.deloitte.com/us/en/insights/focus/defense-national-security/future-of-law-enforcement-ecosystem-of-policing.html
https://www.justice.gov/archive/crs/pubs/principlesofgoodpolicingfinal092003.htm#63
https://www2.deloitte.com/content/dam/Deloitte/uk/Documents/public-sector/deloitte-uk-future-of-policing.pdf
https://www.wonderslist.com/10-countries-best-police-forces/
https://www.datarobot.com/wiki/scoring/
https://www.publicsafety.gc.ca/cnt/rsrcs/pblctns/2015-r034/index-en.aspx#meth

