Can people have healthy relationships with algorithms?

March 2023

SHS/MOST/DA/2023/PI/3 Rev.
Author(s): Maria Cury, Millie P. Arora

Call for Evidence: Methodological Summary

What motivated you to write this article or thought piece?

Our belief that if tech companies and policymakers continue to misunderstand the human-to-algorithm relationship, they run the risk of creating irrelevant products or services and perpetuating practices that run counter to people leading healthy lives with tech.

What can someone learn by reading this paper?

How applied digital anthropology provides insights that help people create productive, two-way relationships with online algorithms, and that help businesses build algorithms that are healthier and forward-looking.

What is the name of the new methodology or idea you proposed?

For wider accessibility, please use clear language when possible. This facilitates translation and adaptation to different contexts.

Studying human-algorithm relationships

Please provide a concise explanation of how your methodology or idea works in practice or theory.

Use simple language that can be translated easily to other languages and applied to different contexts (2-4 sentences)

Anthropology is a discipline focused on relationships. The relationship that the discipline should now be studying is the one between people and algorithms, using the approaches of (digital) ethnography.

Steps

"If possible, and to promote wider adoption of new methods, could you kindly outline a step-by-step guide for others to replicate in their own projects or context? A clear detailed explanations will ensure your approach or idea is easily understandable for a diverse audience.

Step 1

Take note of the language and terminology that research participants are using to describe their understanding of content recommendation engines and machine learning algorithms.

Step 2

Study the types of relationships being formed, and focus on the relationship – between human and algorithm – as the unit of analysis and the center of observation. Consider the algorithm as an active agent.

Step 3

Note the type of relationship the human expects of the algorithm, and vice versa (to the extent an algorithm can “expect” anything – there is a degree of anthropomorphizing here, and the algorithm is of course a reflection of the priorities of the organization that built it).

Step 4

Observe people across platforms and in the context of their physical lives – behaviors and norms may differ across platforms in ways that reveal the logics and dynamics of the different relationships with different algorithms that are emerging.

Participants

What profiles or skillsets are needed to perform this methodology?

Social scientists who are applying their work in industry or policymaking.

Time

If applicable, please estimate how much time this method generally takes to complete

Variable – observation over time is ideal.

Materials or tools needed

What will people need physical / virtual access to for this method (computer, social platform etc)?

Social scientists will need access to the physical and digital contexts of people’s everyday lives, to observe the human-algorithm relationship.

Contact Information

If you or your team would be open to receiving questions about this methodology, please include your email or social media handles here.

Maria Cury – mcu@redassociates.com

Millie P. Arora – mpa@redassociates.com

Can people have healthy relationships with algorithms?

Abstract

People are building relationships with content-generating algorithms – relationships often described as toxic, fraught, or one-sided. These relationships with algorithms are changing the expectations and experiences people seek from tech and social media, in ways that run counter to the assumptions and orthodoxies of many tech companies and policymakers. If tech companies and policymakers continue to misunderstand the human-algorithm relationship, they run the risk of creating irrelevant products or services, as well as perpetuating corporate practices that run counter to people leading healthy, happy lives with technology. This is where digital anthropology comes in: anthropology is a discipline focused on relationships – between people, between groups of people, between people and their communities, objects, or environments. The relationship that the discipline should now be studying is the one between people and algorithms, using the tools of digital ethnography. Can applied digital anthropology provide insights that help people create productive, two-way relationships with online algorithms, and that help businesses build algorithms that are healthier and forward-looking?

Introduction

We know about toxic relationships with romantic partners, with colleagues, with work, within families, but the latest toxic relationship people might be in is with their algorithms. These invisible agents – particularly the ones behind content recommendation engines [1] – are showing people new videos to watch on their social media feeds, suggesting shoes or furniture or travel destinations they might like based on what they’ve liked before, and letting them know they are already late for the next appointment in their calendar based on traffic conditions.

For better and worse, algorithms influence the worlds we live in [2,3,4]. And increasingly, people are considering the ways in which algorithms are shaping their own preferences, opinions, and experiences – do they like the things they like because of who they are and their life experiences, or because an invisible agent is influencing them every time they’re online [5]?

Algorithms are often described as opaque [6]. We can’t really see into them or how they are made, which draws scrutiny from many directions in society. Data scientists, social scientists, ethicists, theorists, and policymakers all describe the key issues with how algorithms are built and deployed [7,8]. To name a few of the issues: the lack of transparency makes accountability and assessment difficult [9,10] – we can’t easily examine or replicate what an algorithm is doing. Auditing or external review is infrequent, highly specialized, and hard to do [11] – we can’t easily hold algorithms or their makers accountable. Hidden biases perpetuate inequalities and injustices when a machine learning algorithm is trained on data or rules built on blind spots, sustaining or worsening already unfair and inequitable structures in society [12,13,14,15]. And there are unhealthy effects of algorithms [16,17]: they can be purposely addictive, they can expose us to content that’s inflammatory, false, or offensive more often than we’d like, and they can remove our feeling of control and reduce how productive we feel.

At the solutions level, a lot of important discussions are happening about the transparency and regulation needed around algorithms [18,19,20,21,22]. But in the meantime, it is perhaps just as important to pay attention to what’s happening at the individual level, day to day, with algorithms.

Through digital anthropology, we are starting to observe perceptions and behaviors among people interacting with algorithms in everyday life that could have big implications for how tech and social media companies design products and experiences involving content-generating or content-recommending algorithms, and for how policymakers develop effective regulation of those algorithms.

What we are observing: the social life of algorithms

In our work at ReD Associates, we apply the theories and methods of the social sciences to advise businesses and organizations across a range of industries and help them better understand the social worlds and cultural contexts that shape how customers use their products or services. We do this through a deep understanding of the people who use or interact with those products and services – which requires rigorous and extensive fieldwork. In our work in the technology sector over the last couple of years [23], we have helped companies understand ‘users’ through digital ethnography, particularly when the COVID-19 pandemic made in-person fieldwork impossible.

When we do digital ethnography, we observe both the digital and non-digital lives of people through a range of methods from the anthropological and sociological toolkit – mirroring the approaches and consent guidelines we follow with in-person fieldwork. For example, we often ask participants to meet with us for multiple sessions over a secure video-conferencing platform and screenshare with us across multiple devices, showing us how they use apps or spend time online. We often combine this with video or photo-based diaries asking for their reflections on daily life over a period of time [24]. We sometimes combine the methods of participant-observation with large-scale surveys or social listening of publicly available conversations happening online (posts, likes, reshares of online content), and we triangulate across these data sources to uncover meaningful insights.
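As one hypothetical illustration of the social-listening strand, the short sketch below tallies algorithm-related vocabulary across public posts. The term list, the input format, and the reduction to simple counts are assumptions made for this example; they are not a description of ReD’s actual tooling.

```python
# Hypothetical sketch: counting algorithm-related vocabulary in public posts
# or interview notes. The term list and input format are assumptions made
# for this illustration only.
import re
from collections import Counter

ALGORITHM_TERMS = [
    "algorithm", "the feed", "recommended", "it knows me", "for you page",
]

def count_terms(documents):
    """Tally how often each term of interest appears across the documents."""
    counts = Counter()
    for doc in documents:
        text = doc.lower()
        for term in ALGORITHM_TERMS:
            counts[term] += len(re.findall(re.escape(term), text))
    return counts

posts = [
    "The algorithm keeps resurfacing old videos I don't want anymore.",
    "My For You Page must know I'm a dancer.",
    "Honestly, it knows me better than my friends do.",
]

for term, n in count_terms(posts).most_common():
    if n:
        print(f"{term}: {n}")
```

A tally like this is only ever a starting point; its value comes from triangulating the counts against what people actually show and tell us in fieldwork.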

Across our studies with everyday users of tech and social media over the last couple of years, we’ve observed that people are increasingly aware of content-generating algorithms. When we spend time with people and ask them about their experiences online, we hear reflections like, “It’s recommending based on what I like…”, “It’s choosing to show me…”, “It knows who I’m connected to…”. People seem to be building a vocabulary around algorithms, with metaphors and ways of describing how the apps they use function. We see this qualitatively, not with everyone we spend time with in our ethnographic research, but with more and more people, and across a diversity of age groups, tech literacy levels, and regions.

The people we meet are often aware that content is algorithmically generated when it surfaces on the feeds of the apps they use. For example, in a study with former QAnon followers, we met research participants like one we’ll call Tiffany, who was frustrated that a lot of old QAnon content she no longer wanted to see kept re-appearing on her YouTube feed. She showed us how she had to keep right-clicking on content to signal she wasn’t interested, and how only when content was really annoying was it worth going into her channels and unsubscribing: “It probably took a good few months for YouTube to get the message and stop suggesting me crazy stuff.”

Many people we meet these days are aware of the negative consequences of algorithms, yet at the same time they feel conflicted, still wanting to be surprised by unexpectedly relevant content they might actually enjoy. If the early discussions around algorithms centered first on how they were creating filter bubbles, then on how they would run our world, we see signs now that the conversation might be shifting to: how do we live with algorithms?

Algorithms are a social, not just technological, phenomenon: as algorithms shape people’s experience, people shape algorithms, too. Machine learning algorithms respond to what we say and do – for instance, the content we like online shapes the content we will see more of on social media feeds. Beyond just being aware, we observe in our fieldwork that people are curious about how algorithms work and how their own actions influence them. We encounter reflections like, ‘My Explore page on Instagram must know I’m a dancer because of all the choreographers I follow. I don’t get any dance content on Twitter, probably because I don’t follow anyone on there who dances. I haven’t used it in years.’ [25] People are noticing patterns (much as algorithms do) and coming up with their own logics for how content-generating algorithms work.
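To make the feedback loop people are intuiting concrete, here is a deliberately simplified sketch of a like-driven recommender. The catalog, tags, and tag-counting scoring rule are assumptions invented for illustration; no real platform’s recommendation system works this simply.

```python
# A minimal, illustrative sketch of a like-driven feedback loop.
# The catalog, tags, and scoring rule are invented for this example;
# real recommendation engines are vastly more complex.
from collections import Counter

CATALOG = [
    {"id": 1, "tags": ["dance", "choreography"]},
    {"id": 2, "tags": ["news", "politics"]},
    {"id": 3, "tags": ["dance", "music"]},
    {"id": 4, "tags": ["cooking"]},
]

def recommend(liked_tags: Counter, n: int = 2):
    """Rank catalog items by how often the user has liked their tags."""
    return sorted(
        CATALOG,
        key=lambda item: sum(liked_tags[t] for t in item["tags"]),
        reverse=True,
    )[:n]

likes = Counter()

# Each "like" updates the profile, which reshapes what gets surfaced next:
# the dynamic people describe when they say a feed "knows" they dance.
for item in (CATALOG[0], CATALOG[2]):       # the user likes two dance videos
    likes.update(item["tags"])
print([i["id"] for i in recommend(likes)])  # -> [1, 3]: dance rises to the top

# Deliberately over-liking something else, one of the small "hacks" people
# describe, shifts the profile and with it the recommendations.
likes.update(["cooking"] * 5)
print([i["id"] for i in recommend(likes)])  # -> [4, 1]: cooking now leads
```

Even in this toy version, withholding or redirecting likes visibly changes what comes back, which is the lever people reach for when the platform offers no better one.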

But all too often, we hear about how people’s relationship with their algorithms isn’t working, in ways similar to how one might describe a human relationship breaking down. There’s a lack of trust, and users are unsure what information is being used to inform the algorithms. Some examples we’ve come across in fieldwork: people expressing a feeling that the algorithm doesn’t really know them or is stuck in the past; they perceive that the algorithm is tied to ‘an old version of me,’ or isn’t aware they are trying to change. Or the algorithm is a bad influence [26], making them spend too much money, or making them waste time that could have been spent more productively. And as with other toxic relationships, our research suggests people are finding ways to fix or shed their relationship with algorithms.

Why companies and policymakers should care

A common assumption in the tech industry is that good experiences with technology should be frictionless and seamless – the best technology intuits what the user needs or wants and provides it in a delightful way. Adding time or effort on the part of the user is bad. Of course, there are exceptions, but this has long been the popular discourse and the gold standard. The problem is that when it comes to relationships, people tend to want other things – like transparency, a balance of power, understanding, feedback, growth over time... and the same goes for their relationship with algorithms.

Companies that deploy content recommendation engines should care in the most basic sense because a peculiar phenomenon arises when it’s unclear how an algorithm works: we notice people often fill the gap with unfavorable perceptions of the content-generating mechanisms. They describe the algorithm (whether or not they call it that) as scary, creepy, or evil, or perceive it as smarter than it actually is, which then becomes a disappointment when the algorithm does something dumb – like recommend the shoes you already bought. Or, perhaps the biggest disappointment for any UX designer, people might find the platform as a whole confusing, disorienting, or difficult to use, and leave.

Indeed, people’s perceptions of how an algorithm works may be shaping their perception of the value they find in a particular technology or platform. Take the difference in how people often perceive TikTok versus Instagram. Many of the people we’ve met in the field over the years perceive TikTok to be kind of random but then surprisingly accurate in covering the breadth of content they like – and they are OK with TikTok when the platform gets it wrong, because ‘that’s how TikTok works.’ Instagram, meanwhile, ‘knows them’ extremely well (sometimes perceived as eerily too well), so when the platform then ‘gets them wrong,’ it’s disappointing. Changing how people perceive the value of a platform may require changing (or helping to build) their understanding of how the platform works and how the algorithm ‘makes’ decisions. This runs counter to the tech industry’s default assumption that people want frictionless, automated, magical experiences that delight.

What’s even more interesting, and perhaps troublesome for companies, is how people are trying to build a more productive relationship with algorithms [27,28]. People ask or wish things of the algorithm – ‘Help me build this habit around…’ ‘Help me learn….’ ‘Keep me away from…’ ‘Help me remember…’ ‘Help me understand opinions different from…’ They want to exercise more agency and ability to shape a system that is today largely designed to automate experiences and make them feel seamless.

Of course, there are features at users’ disposal, like muting or unfollowing people or types of content, to get more of what they want from their time online. But these tools for working with the algorithm are often hard to find, or too limited and inflexible. So instead, we see people take matters into their own hands. We’ve observed people trying to hack algorithms, even in small ways. For example, withholding ‘likes’ or ‘follows’ of content they actually do enjoy, to keep their interests separate across platforms. Or doing the opposite: deliberately over-liking or over-viewing to try to train the algorithm to show more of something. Some people offer advice to others through public posts on how to understand and tailor their newsfeeds, and others use generic or unrelated hashtags to boost how people discover the content they’ve posted. When people can’t hack the algorithm, they consider breaking from it – going on digital detoxes or swapping content-generating feeds for human-curated feeds. When there isn’t a way for people to proactively relate to the algorithms providing them with content or experiences, there’s an emerging risk that their engagement on the platform goes down. While algorithms are in the employ of Big Tech, eventually people may want algorithms that work for them. And new products or services may become available that do just that.

Meanwhile, the policy world is calling for more transparency and regulation [29,30] of online platforms, when in reality our relationships with content-generating algorithms are more complex. Policymakers must consider what kind of controls are helpful for people, and in what contexts. That means giving people and communities a firmer sense of control over the kind of content they do or do not see, as opposed to only providing blanket regulations that dictate what people are shown.

In addition, transparency in rules should correspond to a change in people’s in-the-moment experiences that they can observe and notice. Without that, people will feel less empowered and even more confused. Digital literacy is a topic that often comes up in policy and tech circles, but in our experience people don't read the fine print on tools meant to help them curate their content feeds. Sometimes they don't even notice all the functionalities that already exist – recommendations, subscriptions, trackers of time spent online – to influence how an algorithm works. Rather, in a vacuum of literacy, people look for features with concrete immediate feedback that help them build a better intuition for how the algorithm works. For instance, if someone chooses to filter out a certain kind of content, they want to be able to see and understand the immediate change.
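To make the point about concrete, immediate feedback tangible, here is a hypothetical sketch of a filter control that reports exactly what it just changed. The feed data and function are invented for illustration and are not drawn from any real platform.

```python
# Hypothetical sketch: a content filter that reports its effect the moment
# it is applied, instead of silently changing a setting. Data is invented.
FEED = [
    {"id": 101, "topic": "dance"},
    {"id": 102, "topic": "politics"},
    {"id": 103, "topic": "politics"},
    {"id": 104, "topic": "cooking"},
]

def apply_topic_filter(feed, blocked_topic):
    """Remove posts on a blocked topic and report the change immediately."""
    kept = [item for item in feed if item["topic"] != blocked_topic]
    removed = len(feed) - len(kept)
    # Immediate, observable feedback helps people build an intuition for
    # what the control actually did to their feed.
    print(f"Hiding '{blocked_topic}': removed {removed} of {len(feed)} posts.")
    return kept

feed = apply_topic_filter(FEED, "politics")
print([item["id"] for item in feed])   # -> [101, 104]
```

The specifics matter less than the principle: a control whose effect can be seen right away gives people something concrete to learn from.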

How digital anthropology can play a role

Anthropology is a discipline focused on relationships. The relationship that the discipline should now be studying is the one between people and algorithms, using the approaches of (digital) ethnography.

At the most basic level, researchers should take note of the language and terminology that research participants are using to describe their understanding of content recommendation engines and machine learning algorithms. As with most ethnographic interviewing, let the participant lead, and pay attention to the tone and descriptors used in talking about algorithms (whether or not they use the word ‘algorithm’). This requires that the researcher understands how the algorithms work, at least in a general sense and specifically on the platforms they are studying, in order to compare and contrast technical capabilities with people’s imagination and perception.

Beyond noting the language used, the perhaps more difficult-to-pull-off but fruitful approach is to study the types of relationships being formed, and to make the relationship – between human and algorithm – the unit of analysis and the center of observation. When does someone interact with the algorithm, and how do they feel afterwards? When does the algorithm in some way proactively reach out to the human – in the form of a notification, alert, or suggested content? Consider the algorithm as an active agent, in much the same way that material anthropology considers objects as active forces in people’s lives (as do philosophers – Heidegger and hammers come to mind). What are the norms emerging between the algorithm and the human, and how did those come about? When do people feel more in control or less in control? When do they feel ‘seen’ and when do they feel wronged, and what does that say about their expectations, hopes, and challenges with the algorithm? The longitudinal perspective is critical here: understanding how the relationship with the algorithm evolves over time.

Within this approach of making the relationship with the algorithm the unit of analysis, it is important to note the type of relationship the human expects of the algorithm, and vice versa (to the extent an algorithm can “expect” anything – there is a degree of anthropomorphizing here). We know from comparison across projects that people’s expectations may differ depending on what type of relationship and help they understand the technology is trying to offer – is it about daily assistance? Connecting to others? Finding entertainment content? For recommendation engines related to entertainment content, is the relationship defined by comfort and familiarity, or expansion of the self?

Understanding the type of relationship people want, against the type of relationship the algorithm has been built for, may be critical. For daily assistance, we know from our work on personal assistant technologies that people want the algorithm to understand the environment they are in, to demonstrate discretion, and to remove distractions, not just surface content.

Another helpful approach is not to be overly digital in the ethnography. Observe people across platforms and in the context of their physical lives – behaviors and norms may differ across platforms in ways that reveal the logics and dynamics of the different relationships with different algorithms that are emerging. And certain relationships might be anchored to different physical contexts in people’s lives, and the needs that emerge in those contexts – relaxation at night, distraction in line at the grocery store, enlightenment while driving to work. Study the other relationships people have in their lives – do they get better recommendations from their barista than from Yelp? Do they get more out of giving help to others than from receiving it?

Digital anthropologists can use the above approaches to gain a rigorous understanding of the human-algorithm relationship, but then what, and to what end? First, it is vital that digital anthropologists determine their stance, whether that’s through codes of practice, ethical guidelines, or setting forth a clear human-driven goal at the outset of a particular project. How do we want organizations and decision-makers to act on insights about relationships people are building with algorithms? The aim may be to ultimately improve rather than worsen the human condition through technology, but the applied anthropologist then needs to show that there is a business case [31] for more agency and control – that both platforms and policies will fail otherwise.

Conclusion

What we see emerging is a relationship between machine-learning algorithms and end users, but what’s still unclear is how much agency or control people will really have in those relationships. Is it possible to have productive, two-way, non-exploitative relationships with algorithms if people are increasingly aware of how algorithms work? While it’s easy to imagine that these content-generating algorithms will always ultimately be in the service of Big Tech, what’s perhaps also possible is a world in which people want or demand algorithms that also work for them, and new products or services become available that do just that in order to differentiate. What will it mean if or when some services or platforms offer this control but not others, and what inequalities might emerge in access to algorithms that work in the service of the user? What product developers and policymakers should strive for – instead of frictionless delight, or only regulated transparency – is building for agency, more controls, and more dynamism in the interactions people have with algorithms. Algorithms are social. If tech companies and policymakers don’t embrace this, people’s relationships with their algorithms will be exploitative rather than productive. Digital anthropologists can help to uncover, understand, and build towards better human-algorithm relationships.

FOOTNOTES:

[1] https://sloanreview.mit.edu/article/the-hidden-side-effects-of-recommendation-systems/

[2] https://www.nytimes.com/2007/01/03/technology/03google.html

[3] https://www.nytimes.com/2014/12/28/technology/the-scoreboards-where-you-cant-see-your-score.html

[4] https://www.nytimes.com/2015/08/04/science/chilly-at-work-a-decades-old-formula-may-be-to-blame.html

[5] https://www.newyorker.com/culture/infinite-scroll/the-age-of-algorithmic-anxiety

[6] https://journals.sagepub.com/doi/full/10.1177/2053951716665128

[7] https://www.itbusinessedge.com/applications/whats-next-for-ethical-ai/

[8] https://www.researchgate.net/profile/JamesArvanitakis/publication/319163734_If_Google_and_Facebook_rely_on_opaque_algorithms_what_does_that_mean_for_democracy/links/59964af70f7e9b91cb096250/If-Google-and-Facebook-rely-on-opaque-algorithms-what-does-that-mean-for-democracy.pdf

[9] https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00041/43452/Data-Statements-for-Natural-Language-Processing

[10] https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00041/43452/Data-Statements-for-Natural-Language-Processing

[11] Burrell, J (2016) How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data and Society 3(1): 1–12.

[12] https://www.techrepublic.com/article/guardrail-failure-companies-are-losing-revenue-and-customers-due-to-ai-bias/

[13] https://arxiv.org/abs/2105.07554

[14] https://www.politico.eu/article/uk-consumer-group-tinders-pricing-algorithm-discriminates-against-gay-users-and-over-30s/

[15] Rosenblat A and Stark L (2016) Uber's drivers: Information asymmetries and control in dynamic work. International Journal of Communication 10: 3758–3784.

[16] https://www.itbusinessedge.com/applications/whats-next-for-ethical-ai/

[17] https://www.theatlantic.com/newsletters/archive/2022/04/american-teens-sadness-depression-anxiety/629524/

[18] https://www.documentcloud.org/documents/21100363-buck_030_xml-filter-bubble

[19] https://www.axios.com/2021/11/09/algorithm-bill-house-bipartisan

[20] https://www.itbusinessedge.com/business-intelligence/ai-bias-struggles-solutions/

[21] https://www.wired.com/story/china-regulate-ai-world-watching/

[22] https://www.technologyreview.com/2022/05/13/1052223/guide-ai-act-europe/

[23] https://jigsaw.google.com/the-current/conspiracy-theories/

[24] Participants are given a paid incentive for their participation in the studies, and have the right to withdraw from the study at any point.

[25] This and all quotes are exemplary composites reflective of statements from across fieldwork projects.

[26] https://www.newyorker.com/culture/infinite-scroll/the-age-of-algorithmic-anxiety

[27] https://www.washingtonpost.com/technology/2022/04/08/algospeak-tiktok-le-dollar-bean/

[28] https://www.technologyreview.com/2023/02/06/1067794/escape-grief-content-unsubscribe-facebook-instagram-amazon-recommendation-algorithms/

[29] https://www.wired.com/story/china-regulate-ai-world-watching/

[30] https://www.technologyreview.com/2022/05/13/1052223/guide-ai-act-europe/

[31] https://insights.som.yale.edu/insights/building-trust-with-the-algorithms-in-lives
