Design Principles for Beneficial and Responsible AI

Preamble
We stand at the threshold of another technical leap with the rise of generative artificial intelligence (AI), one that mirrors the transformative impacts of the internet and smartphones.
Generative AI's mass adoption, widespread availability, and unprecedented speed and scale of change drive us to create the ethical guidance needed to lead the development and implementation of the technology. A variety of frameworks and principles already exist that address the ethics, risk, and governance of generative AI. We draw on this voluminous literature to extract concise and actionable principles, including guidance for operationalizing them and concrete examples of their application. To be effective and remain relevant, these principles must continue to evolve in tandem with the rapid pace of change and with our deepening, evolving understanding of the technology.
Given ASU's distributed and empowered design, it is imperative that we have shared principles aligning our collective efforts in the creation and use of generative AI. As a public enterprise committed to advancing society through excellence and knowledge, unleashing the untapped potential within all people, we bear an important responsibility to lead by example, demonstrating to the world how generative AI can expand the scope of human endeavor while upholding high standards of integrity and respect for the inherent worth of all individuals. The following Design Principles for Beneficial and Responsible AI have been created by the transdisciplinary ASU Faculty Ethics Committee for AI Technology. These principles are intended to guide daily decision-making about the creation and implementation of generative AI experiences at the enterprise level and to serve as a resource and accountability framework for the ASU community. In applying these principles, trade-offs will be encountered; they should be thoughtfully considered and transparently acknowledged, supporting a culture of responsible decision-making.
Design Principles for Beneficial and Responsible AI
1. Amplify Possibilities
We have a responsibility to create AI experiences that open and amplify possibilities (as opposed to limiting or closing down pathways or options) in service of respecting human autonomy and empowering individuals and communities. We must always put humans first. We recognize that all data and models are incomplete and flawed, tending to create bias as they replicate formal legacy systems and ways of thinking. By amplifying possibilities, we can mitigate the harm that can come from limiting options or biased pathways, which can reinforce inequities, enable coercion, undermine human dignity, and restrict autonomy and choice. The goal is to create AI that respects the diversity of human experiences and values and reflects the ASU charter and design aspirations, striving to ensure that AI serves to enhance the human experience rather than diminish it.
2. Be Agile
We have a responsibility to bring the best of what technology has to offer to the ASU community while remaining aware of potential risks, and to keep pace with the rapid progression of generative AI. This requires us to embrace experimentation and agility in the learning process: determining what works, adopting a mindset of learning fast and learning forward, and sharing knowledge.
3. Evaluate Vigilantly and Continually Improve
Before release, and on an ongoing basis, we must rigorously evaluate AI tools, platforms, models, and experiences for possible impacts and potential harm. We must continually seek to improve transparency and increase observability. Our commitment extends to continuous improvement, actively working to mitigate harm, and decisively removing technologies or procedures that fall short of our ethical standards.
4. Elevate Fairness and Access
We have a responsibility to prioritize access and outcomes across the ASU community and the communities we serve, centering fairness in AI development, deployment, and use. We must assess impact to ensure that AI experiences are not widening existing disparities or other gaps based on demographics. We thus have a responsibility to collect and act on use and impact data, and to do so in a way that protects individual privacy (for example, by aggregating and analyzing de-identified data).
5. Protect Privacy
We have a responsibility to develop and deploy AI models and applications with attention to individuals' rights to privacy and to agency in the use of their data, individually and in aggregate. We should prioritize transparency about the scope, purpose, and risks inherent in disclosing data to ASU, and leverage the disclosure of privacy terms to educate our stakeholders as informed and empowered data citizens.
6. Shared Responsibility
Developing and using generative AI responsibly and beneficially is a shared responsibility between the enterprise and individuals. This responsibility should be iterative and reciprocal in nature.
• The enterprise has a responsibility to provide clear, current, concise and visible:
▪ Feedback mechanisms
▪ Training and education
▪ Expectations for engagement with AI tools for their stated purpose
▪ Disclosures including whether and when AI is in use, potential risks, responsibility and accountability
• The individual has a responsibility to:
▪ Provide timely feedback
▪ Engage meaningfully with training and education
▪ Read and comply with expectations for engagement
▪ Read and account for disclosures, including potential risks, in active decision-making using the technology
Meet the Committee

Ron Beghetto
Professor, Pinnacle West Presidential Chair, Mary Lou Fulton Teachers College

Gary Marchant
Regents and Foundation Professor of Law; Faculty Director, Center for Law, Science and Innovation, Sandra Day O'Connor College of Law

Diana Bowman
Associate Dean for Applied Research and Partnerships, Professor of Law, Sandra Day O'Connor College of Law

Olivia Sheng
W. P. Carey Distinguished Chair & Professor, W. P. Carey Information Systems

Andrew Maynard
Senior Global Futures Scholar, Global Futures Scientists and Scholars

Horacio Velasquez Melo
Clinical Assistant Professor, The Sidney Poitier New American Film School
