
Libraries and GenAI: Ethics, Advocacy and a Call for a Humanistic Approach

By Gwendolyn Reece, PhD (Director of Research, Teaching and Learning, American University Library)

While there is some debate about the empirical validity of the Gartner Hype Cycle, the role of hype in the technological lifecycle and the general stages are at least heuristically useful.1, 2 Of the five stages outlined by Fenn and Raskino (Innovation Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment, and Plateau of Productivity), Generative AI probably remains somewhere between inflated expectations and disillusionment.3 Generative AI has rapidly become a ubiquitous part of the information landscape, but the development of ethical, legal, and social frameworks that allow for the responsible use of AI is lagging behind. Combined with the opacity of how many of the tools operate, privacy concerns, some spectacular instances of toxic and biased content, and an explosion of AI-related academic integrity violations, this situation is deepening disillusionment.4, 5 Simultaneously, there are growing demands that libraries address AI in information literacy instruction, take on the mission of AI literacy, and change our workflows to incorporate AI. A strong sense of FOMO (Fear of Missing Out) is prevalent.6

Perhaps the most powerful thing librarians can do is to constantly re-center our mission and resist uncritically giving in to FOMO. Librarians have always been early adopters of new technologies that aid us in sharing and preserving information and in helping the public transform information into knowledge. I have yet to see evidence that librarianship is a change-resistant profession. However, librarians are not motivated by profit. The good we provide to the community is different from the good provided by the market, and librarians are, therefore, resistant to being driven by the market. Our mission is significantly educational: it is about supporting knowledge creation, including the creation of knowledge that will never be profitable; it is about preserving the knowledge of the past so that it can be available to generations in the future; and, in a democracy, it is about ensuring that citizens have access to the information they need to be sufficiently informed for effective self-governance, that they know how to use it wisely, and that they can use it safely and privately. Ethics are not tangential to librarianship; they are central. Therefore, I believe librarians are an essential counterweight to the powers that prize profit over other values. Librarians have a long history of thinking critically and carefully about the information landscape, and I encourage as many librarians as possible to be brazen in getting seats at the table in every discussion they can find about how we should be developing our sense of responsible use of AI. Our voices are needed.

A Humanistic Approach to Assessing Uses of AI

As we consider how we may best take advantage of the opportunities that GenAI presents, but in ways that are responsible and ethical, I believe it is beneficial to flip the script. Rather than immediately considering what AI can do and letting technology set the agenda, we should first deeply consider the question, "What is human work?" This is a question I encourage libraries to debate internally, but it is also a question that can lead students to deepen their reflections. Understanding which parts of our work are inherently "human" is a necessary precondition for identifying aspects of work for which AI is an appropriate tool. For the use of AI to be responsible, the tool must be effective for its chosen purpose, and its use must be ethical. The user of a tool always remains responsible for the outcome of their work.

What are some examples of human work? Making judgments, especially value judgments and ethical judgments, is human work. Nurturing human thriving through activities such as mentoring, coaching, and (I would argue) education, is human work. Exercising critical thinking is human work. Aesthetic appreciation is human work. Creativity — including creating meaning — is human work. These are a few examples. Additionally, humans are social animals, so there may be instances in which the social aspect of interacting with another human is important to the work, even if the content of that interaction does not necessarily require a human. Ideally, outsourcing work that is not inherently human work to AI could free up time for people to be more deeply engaged in work that is truly human work.

This type of analysis is not always straightforward. For example, coding research interviews requires critical judgment, which is human work, but transcribing the interview may not be human work and could possibly be outsourced to AI. However, if it is human subjects research and the participant has been promised confidentiality, or if identifying information is being stripped in the process of transcription for the protection of the subject, then that interview transcription requires judgment. It would be human work in that instance, and perhaps inappropriate for AI.

Once we see what, out of the work we are trying to accomplish, is human work, we may then see aspects of the work that are not inherently human work, where using AI as a tool could be desirable. However, knowing that AI might be useful is not the same thing as determining whether its use is ethical. I have written elsewhere to present an ethical framework for assessing potential uses of AI.7 I argue that rather than making summary judgments about the ethics of AI, it is best to treat the use of AI as an intervention and assess the ethics of a specific use of a particular AI tool. I draw on the three ethical principles that guide all human subjects research. "Respect for Persons" includes autonomy, informed consent, and privacy. "Beneficence" demands that the benefits outweigh the risks. "Justice" requires equity and fair compensation/attribution. When considering whether to use AI in a particular instance, careful analysis of the risks and thinking through any strategies to mitigate risks are necessary for making an ethical determination. Risk assessment should consider both individual risks and systemic risks.

To conduct this type of analysis, we need to have a good understanding of how the tools we are considering are working. This requires us to continuously learn about the tools but also about the IT environments in which the tools operate. Opacity renders this type of analysis impossible. As a profession, we should strongly advocate for transparency and for privacy protections. We also need to promote evolving norms of transparency in the use of AI in the production of knowledge — both as a matter of policy and cultural norms.

Institutional Policy Creation and AI

One year further into addressing AI within libraries, most institutions are still in the process of developing policy frameworks for AI. For libraries that are embedded within larger institutions, such as universities, it will be necessary to align library policies with those of the parent body. I believe librarians should strive to become active participants in policy formation for their institutions, as well as in any university-wide discussions of AI.

At American University I was welcomed onto our university’s Responsible Use of AI Working Group and chaired the Ethics and Principles Subcommittee. This group is drafting university-wide guidance and creating structures for our collective approach to AI. The library’s head of acquisitions and eResources also served as a resource member to the subgroup working on procurement guidelines. The library has been central in leading numerous discussions in the university about AI and pedagogy. We are known for our thoughtfulness and have a collective reputation for being valued partners across the institution. Therefore, when we volunteer for committees, we are typically received with gratitude.

My review of publicly available information leads me to conclude that institutional policy frameworks addressing AI within higher education are still emergent. If you are just starting now, you are not far behind, but it is important to begin. There is significant danger of being driven by technological agendas rather than by our values and our mission if we delay.

Practical Recommended Steps

1. Consider creating "in-service" days to allow for deeper reflection and learning about AI. Include opportunities to have deep discussions about the mission of libraries and to consider, within your local contexts, the question "What is human work?"

2. Look for opportunities to participate in discussions about AI in your larger context (within your university, within professional networks) and bring the values and mission of the library into those contexts.

3. Advocate for a seat at the table in any policy/guideline/best-practice conversation addressing AI.

4. Advocate with vendors for transparency and real choice in how their AI tools work and whether they will be embedded in library tools.

5. Incorporate a humanistic and ethical approach to AI into instruction for students and into library decision-making.

Endnotes

1. Ozgur Dedehayir and Martin Steinert, “The Hype Cycle Model: A Review and Future Directions,” Technological Forecasting & Social Change 108 (2016): 28–41.

2. Evviva Weinraub Lajoie and Laurie Bridges, "Innovation Decisions: Using the Gartner Hype Cycle," Library Leadership & Management 28, no. 4 (2014): 107.

3. Jackie Fenn and Mark Raskino, Mastering the Hype Cycle: How to Choose the Right Innovation at the Right Time (Boston: Harvard Business Press, 2008).

4. Beth McMurtrie, "Cheating Has Become Normal," The Chronicle of Higher Education 71, no. 7 (2024), https://link.gale.com/apps/doc/A828291408/GRNR?u=wash11212&sid=summon&xid=34f02a91.

5. Zeynep Tufekci, "Musk's Chatbot Started Spouting Nazi Propaganda. That's Not the Scariest Part," New York Times (New York), July 11, 2025, https://www.proquest.com/blogs-podcasts-websites/musk-s-chatbot-started-spouting-nazi-propaganda/docview/3229016866/se-2.

6. Adam Eric Berkowitz, “‘Slow-MO or FOMO’: AI Conversations at Library Conferences,” Public Services Quarterly 21, no. 1 (2025): 51–70.

7. Gwendolyn Reece, "We Already Have an Ethics Framework for AI," Inside Higher Ed, April 25, 2025, https://www.insidehighered.com/opinion/views/2025/04/25/we-already-have-ethics-framework-ai-opinion.
