
Case 2: Towards a More Ethical Development of Digital Platforms
Abstract
Social media platforms have become an important meeting place for social interaction and the main source of information for many different demographics. These platforms have also become increasingly problematic, as exemplified in The Social Dilemma, a documentary drama that explores the damage extensive use of social media platforms does to users and society. It became the highest-ranked documentary on Netflix and engaged a large, multinational audience. The documentary shows how technology insiders have a great impact on how these platforms are built. This case investigates further what conditions should be present for technology in social media platforms to be developed in a way that facilitates healthy digital behavior. The report analyzes the work conducted by The Center for Humane Technology, a non-profit organization working for technology built to serve human needs and values. The findings show that extensive changes in politics and business models are required to make a change. I conclude that raising awareness among technologists and users of social media platforms about the damaging implications is the first and most important step towards changing priorities among business leaders and towards political influence.
Introduction
In 2006, Time magazine named “You”, meaning everyone, its “Person of the Year” for transforming the information age by consuming and generating content on the Internet (Nichols, 2007). Since then, people around the world have been interacting with digital tools for purposes such as communicating with friends, sharing dance videos, and participating in political discussions. There is a wide range of platforms to join, and companies keep improving usability and applicability to capture their users’ attention, but these features do not always favor the users. Technologists who contributed to designing some of these digital platforms have started to speak out about how influential tech companies actually work in order to maximize the time users spend interacting with their platforms. In the Netflix documentary “The Social Dilemma” (Orlowski, 2020) we are introduced to former employees of social media companies such as Facebook, Google, and Apple. These companies use algorithms, design features, and data to encourage us to keep interacting with their platforms for as long as possible in order to sell advertisement space by exploiting our human weaknesses (Orlowski, 2020). I will take a closer look at why these design features in social media are considered to disfavor the user’s inner values, and what it will take to change the growth of these techniques. The case will focus on organizations and individuals working for systemic change in order to make technology more humane and ethical. The need for change arises from research showing that problems such as an increased incidence of depression, political polarization, and impaired attention span are highly influenced by the digital environment facilitated by technological development (Center for Humane Technology, 2020a). Expanded use of social media is fueled by addiction, because technical design features are developed to support a market strategy that makes money on users’ attention rather than on their actual benefit and well-being (Center for Humane Technology, n.d.g).

I will map the movement towards an ethical change in digital technology by analyzing the content of conversations among people who agree that the responsibility for the negative aspects of social media lies in the hands of the technology industry rather than the users themselves. The research question is: What conditions should be present in order for technology in digital platforms to be developed in a way that facilitates healthy digital behavior? First I will define key concepts and look closer at the background for this movement. Then I will explain and discuss the methodology used to research this question, before analyzing the data collected from the conversations previously mentioned to consider different solutions.
Background and Theory
In this chapter I will define the concepts of persuasive technology and machine learning, as they are essential to understanding the technological functioning of social media platforms. I will explain how the CDA 230, an existing law intended to facilitate freedom of expression, has made interactive communication between users of social media possible. I will then look closer at some of the key people in the ethical system design movement. Finally, the business model many tech companies rely on will be described, as it is one of the key reasons the problems can get out of control.
Persuasive Technology and Machine Learning
Digital technology has transformed the way we communicate, collaborate, share information, and socialize. Social media can be defined as websites and computer programs that allow people to communicate and share information on the Internet using a computer or mobile phone (“Social media”, 2020). In this study I will focus on social media as persuasive technology and emphasize that people use social media because they provide value and connect people to friends, ideas, and information. Persuasive technology is a common term for technology designed to reinforce, alter, or improve attitudes, behavior, or both, without coercion or deception (Widyasari, Nugroho & Permanasari, 2019). Fogg (1998, pp. 225-226) defines persuasion as “an attempt to shape, reinforce, or change behaviors, feelings, or thoughts about an issue, object or action”. He emphasizes that technology is persuasive when it is created, distributed, or adopted with an intention to affect human attitudes or behaviors (Fogg, 1998).
One technology implemented in social media for improved efficiency and optimization is machine learning. Machine learning is the science of getting computers to act without being explicitly programmed (Expert System Team, 2020). Systems based on machine learning identify patterns in large amounts of data in order to give an accurate output (Expert System Team, 2020). They are able to learn and self-correct when presented with new data. These mechanisms are implemented in Google’s search engine, YouTube recommendations, and friend recommendations on Facebook, and are therefore relevant to the discussion of ethical digital platforms (Artificial Intelligence Team, n.d.). A minimal illustrative sketch of this pattern-recognition and self-correction idea follows after the next paragraph.

The Communications Decency Act of 1996, section 230, is a federal law in the United States considered to be among the most influential laws in allowing the Internet to grow since its enactment in 1996. The law is often referred to as “the CDA 230”. It says that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”. The law has enabled freedom of expression and innovation on the Internet (Electronic Frontier Foundation, n.d.). It allows providers of websites, blogs, and forums not to be held responsible for content their users upload to their platforms. This implies that platforms such as Twitter, YouTube, and Facebook are not forced to censor content their users upload. Without the law we would most likely not have free, interactive services like social media, because it would be costly and time consuming to review all uploads (Electronic Frontier Foundation, n.d.).
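As a concrete illustration of the pattern-recognition and self-correction idea mentioned above, here is a minimal, hypothetical sketch written for this report (it is not drawn from any of the platforms discussed): a system that learns from interaction data and adjusts its predictions as new data arrives.

# Minimal, hypothetical illustration of machine learning as pattern
# recognition with self-correction: the system estimates how likely a user
# is to watch videos on a given topic, and nudges that estimate every time
# a new interaction is observed.

from collections import defaultdict

class WatchPredictor:
    def __init__(self, learning_rate=0.1):
        self.learning_rate = learning_rate
        # Estimated probability that the user watches a video, per topic.
        self.scores = defaultdict(lambda: 0.5)

    def predict(self, topic):
        """Return the current estimate for a topic."""
        return self.scores[topic]

    def update(self, topic, watched):
        """Self-correct: move the estimate towards the observed outcome."""
        error = (1.0 if watched else 0.0) - self.scores[topic]
        self.scores[topic] += self.learning_rate * error

model = WatchPredictor()
# Observed interactions: (topic, did the user watch the recommendation?)
for topic, watched in [("music", True), ("news", False), ("music", True)]:
    model.update(topic, watched)

print(round(model.predict("music"), 2))  # above the 0.5 starting point
print(round(model.predict("news"), 2))   # below the 0.5 starting point

Real recommendation systems are of course far larger and use many more signals, but the principle of adjusting predictions based on observed behavior is the same.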
The Movement of Ethical System Design
When designing interactive technology, it is a challenge to balance good user experience against interfaces that actively encourage the user into addictive habits. Major tech platforms have developed and implemented several features in an attempt to make their platforms facilitate healthier digital behavior. YouTube implemented a “take a break” notification at self-reported time limits in 2018 (Center for Humane Technology, n.d.a). Google made it easier for users to track and understand their digital behavior through an initiative called Digital Wellbeing (Center for Humane Technology, n.d.a).
In 1998, the Stanford behavior scientist B. J. Fogg formulated and published the first paper to address the ethics of persuasive technology, which shows that this problem has been on the agenda for quite a while (Stanford University, n.d.). This paper is still required reading for students and members of the Stanford Persuasive Technology Lab. Fogg (1998, p. 230) emphasizes that values vary and that this makes it important for designers to base their design on what he calls a “defensible ethical standard”. This involves the ideas that computing design should avoid deception, respect individual privacy, and enhance personal freedom (Fogg, 1998, p. 230). Fogg (1998) noted that researchers have a responsibility to evaluate both the intended and unintended implications of persuasive technology. He also encouraged taking social action, or advocating for others, if a computing artifact can be considered harmful or questionable (Fogg, 1998, p. 230). He emphasized that educating people about persuasive technology helps them use technologies in a way that enhances their lives, and to be aware of which tactics are used to persuade them (Fogg, 1998).

Tristan Harris, one of the people we meet in The Social Dilemma (Orlowski, 2020), is a former engineer at Google. He is an influential voice in the movement to make technology more humane. Through his degree in computer science focused on human-computer interaction, as well as his work for Google and Apple, Harris became aware of the negative impacts increased use of social media had on individuals and social interactions (Tristan Harris, n.d.). He started to communicate his concerns to other employees and technologists through conversations and a presentation (Tristan Harris, n.d.). He later started to inform the public through TED Talks and interviews (Tristan Harris, n.d.). In 2018, Tristan Harris (n.d.) founded the nonprofit organization The Center for Humane Technology. The organization focuses on the ethics of consumer technology, as they consider some digital technology to be downgrading humanity (Thompson, 2019). The phrase “human downgrading” was coined by Tristan Harris and the Center for Humane Technology to describe the combined negative effects of digital technology on humans and human life (Rouse, n.d.). The Center for Humane Technology is concerned with digital addiction, increasing polarization, social comparison, cyberbullying, disinformation, political manipulation, and superficiality (CNBC, 2019). They consider technology a force that pushes how we think, feel, and behave in a more divisive direction, and consider the extraction economy a hurdle to human interests (Center for Humane Technology, n.d.b). Through conversations with executives, designers, and programmers from the tech industry, it became clear to them that most people want to find solutions to these issues. They therefore work to support technologists in taking responsibility for developing technology more humanely, instead of holding the consumers accountable (Center for Humane Technology, n.d.b).
Attention Economy and the Business Model: To Pay With Attention
In 1997, Michael H. Goldhaber (1997) pointed out that the ability to consume material things was no longer keeping up with the possible rate of production. As people become busier, attention becomes a scarce resource. Goldhaber predicted that the global economy had started to change from a money-industrial economy towards an attention economy. He also suggested that this new economy was based on endless originality and diversity, as attention cannot be standardized. Goldhaber (1997) emphasized that being able to extract attention would lead to success. Google, Facebook, and Apple, among others, are now fighting for consumers’ attention through optimized technology developed with the intention of keeping the users’ attention and need for interaction (White, 2014). The attention economy can be described as a market war between tech companies (Memory, n.d.). By offering a free but useful platform for users to interact with, the activity on the platform can be tracked for the purpose of tailoring content that keeps the user interacting (Memory, n.d.). The platforms also collect data about the users in order to create targeted ads (Matsakis, 2018). Big Data is a term used to describe the enormous amounts of data created in recent years (SAS Institute Inc., n.d.). A McKinsey Global Institute (MGI) report defines it as “large pools of data that can be captured, communicated, aggregated, stored, and analyzed” (Bradshaw, n.d.). Processing and analyzing data correctly can improve decision making, help risk management, and produce insights about social behavior more accurately and effectively than other statistics or surveys (Bradshaw, n.d.).
Methodology
The methodology chapter will explain the choice of document analysis as the research method, as well as the process of collecting and analyzing the data. The quality of the data will also be discussed.
Research Design
Document Analysis

As previously mentioned, the study will focus on the factors needed to make technology more ethical. Many people are working on this at the moment. The contributors use several approaches to reach decision makers and the public. Through articles, podcasts, debates, and interviews, these activists communicate possible solutions to complex problems related to technology in people’s digital lives (Center for Humane Technology, n.d.b). Because of this, it is important to examine the content across these platforms in order to find the main motivation behind the movement, its most valuable arguments, and whom these organizations and individuals hold responsible for actual change. Document analysis is suitable for examining nuances of different perspectives, organizational behavior, and change over time (Bowen, 2009). A document analysis is a research method that consists of reviewing documents that already exist (Bowen, 2009). The method is chosen in order to capture the variety of factors through the work being conducted by different actors in these debates.
Data Collection and Analysis

I have chosen to look at The Center for Humane Technology’s contribution to the movement of making technology more ethical because their work is based on the assumption that the problems related to social media are the industry’s responsibility (Center for Humane Technology, n.d.b). The organization describes its team as consisting of deeply concerned tech industry and social impact leaders who intimately understand how the tech industry’s culture, techniques, and business models control 21st-century digital infrastructure. Among the executives and contributors are professors, researchers, scientists, and authors who together represent a broad understanding of the problem. The organization is non-profit, but funded by several foundations and funds (Center for Humane Technology, n.d.c).
The Center for Humane Technology presents information about the organization and its work through its website at humanetech.com. The website clearly shows the call to engage technologists, students, parents, educators, and policymakers (Center for Humane Technology, n.d.c). Different types of information and guidance are offered depending on which role the participant can take in the movement. The section for technologists includes several tools and resources for the purpose of inspiring and educating technologists to align technology with human needs (Center for Humane Technology, n.d.c). The Principles of Humane Technology is one of the documents examined in more detail in the analysis. Technologists are also offered other resources, such as a humane design worksheet, newsletters, and an upcoming online course (Center for Humane Technology, n.d.d). Students, parents, and educators are presented with information about how technology shapes our lives and values. Parents and educators are referred to third-party organizations which are not a part of The Center for Humane Technology, but are considered by them to be doing good work. Through the website, The Center for Humane Technology proves to be open to suggestions and feedback from the general public (Center for Humane Technology, n.d.e). These qualities indicate a genuine interest in getting to the bottom of the problems and providing durable solutions.
Your Undivided Attention is a podcast from The Center for Humane Technology, created to provide technologists and others with knowledge about different aspects of the contextual relationship between technology and society (Center for Humane Technology, n.d.f). The podcast is hosted by the founder Tristan Harris and co-founder Aza Raskin (Center for Humane Technology, n.d.f). The hosts are both former tech insiders and have seen and designed technology that seizes the user’s attention (Center for Humane Technology, n.d.f). I have chosen to analyze this podcast because of the relevant experience Tristan Harris and Aza Raskin have from the industry as technologists. They are both deeply involved in the movement for humane technology, as they are important figures in The Center for Humane Technology. Through their work with the movement of ethical design they explore a diversity of approaches by interviewing different experts on themes related to the problem. The hosts bring broad and extensive knowledge and experience, and explain key points in the interviews. This makes it easier to understand the complexity of the problem. I have chosen to analyze two episodes of the podcast in which we meet two people who are engaged in finding solutions relevant to how technology influences our lives.
In episode 3 of Your Undivided Attention, “With Great Power Comes… No Responsibility?”, we meet the former CIA officer and White House advisor Yaël Eisenstat (Harris & Raskin, 2019a). More recently, she has advised technology companies in the U.S. because she wanted to contribute to changing the way technology was contributing to polarization and election hacking (Harris & Raskin, 2019a). Eisenstat shares her perspective on the government’s role in regulating tech (Harris & Raskin, 2019a). I have chosen this episode because of the conversation regarding the CDA 230 and the alternative solutions Eisenstat suggests.
In episode 4 of Your Undivided Attention, “Down the Rabbit Hole by Design”, we meet the artificial intelligence expert Guillaume Chaslot (Harris & Raskin, 2019b). Harris, Raskin, and Chaslot reflect on the implications of systems designed to catch and keep the user’s attention, and discuss some points of view on the CDA 230 (Harris & Raskin, 2019b). Chaslot contributed to the making of YouTube’s recommendation engine and explains how its priorities favor outrage, conspiracy theories, and extremism (Harris & Raskin, 2019b). Chaslot also talks about his project AlgoTransparency, which tracks and publicizes YouTube recommendations for controversial content channels (Harris & Raskin, 2019b). This episode is chosen because it exemplifies how hard it is to evoke change in the software industry because of the priorities set by the companies. Even with well-intentioned alternative solutions, policies remain necessary.
Another part of the work The Center for Humane Technology is conducting is raising awareness and driving change through high-profile presentations to global leaders. Tristan Harris participated in a congressional hearing entitled “Americans at Risk: Manipulation and Deception in the Digital Age” (Center for Humane Technology, 2020b). His speech represents some of the key information considered important in order to make people aware of the impacts and implications of the problem. It also shows what kind of involvement from policymakers is needed to effect systemic change, which is why it is chosen for the analysis.
Through the work of finding relevant documents to analyze, I have looked at a variety of approaches to the relationship between digital development and societal impacts. Tristan Harris and The Center for Humane Technology emerged as important contributors to the movement, as they were often referred to in articles and interviews. I have gone through large parts of the information produced or shared by them, through their websites and podcast episodes. The Center for Humane Technology clearly shows that many groups need to get informed and involved in the movement. I surveyed the range of involved institutions and fields of expertise before deciding to focus specifically on technologists, because that is the career starting point of many committed contributors to the movement. I have analyzed the documents by looking for themes and aspects that are relevant to the research question. It was important to identify the central issues by searching for elements that were repeatedly discussed across documents.
Quality of Data
To assess the quality of the conclusions drawn from this research, we rely on the terms validity, reliability, and generalizability. Validity refers to the logical connection between the questions asked, the method chosen, and the conclusions we reach (Tjora, 2017, p. 231). In simple terms, high validity means that we are measuring the right concepts to answer the questions we ask. Low validity can be a result of problems with research design. Reliability, on the other hand, refers to the more random errors that can occur during a research process (Ringdal, 2014, p. 355). Both validity and reliability affect the generalizability of the research presented. Generalizability refers to the relevance the presented results can have to a broader perspective (Tjora, 2017, p. 231). Do these results apply to cases other than the one investigated? In the following, these dimensions of quality will be addressed.
Validity

The validity of this study is strengthened by the fact that the choice of research method is accounted for, and that the choice of documents is also explained. With the research question being: What conditions should be present for technology in social media platforms to be developed in a way that facilitates healthy digital behavior?, the chosen documents give us good insight into skilled professionals’ thoughts on this topic. This has made it possible to organize essential points of improvement for technologists, which are presented in the analysis to answer the research question. The research is also based on previous research conducted on this topic, for example by Fogg and Harris, which strengthens its validity, as the starting point for the research question is rooted in valid research.
Reliability

Before starting the research on this theme I was aware of the relationship between use of social media and its impact on mental abilities. I have been using several social media platforms for over a decade and know how they have developed through the years. As a student in the field of computer science I have taken undergraduate courses such as programming and system development. Through these courses I have learned fundamental skills, but most importantly in this context I have experienced what programmers and system developers learn in their first classes at a Norwegian university. It has been important to set my experiences and assumptions aside and present the information in the documents in a neutral way. I have done this by ensuring that the elements highlighted appear across several platforms. How the documents have been chosen and analyzed is accounted for above, which strengthens the reliability of the study.
Generalizability

The role of social media in our society in terms of information distribution and socialization is the main focus of this study. In terms of generalizability, this study already has a wide basis, as it is meant to answer what technologists in general can do to create more ethical digital platforms. The term “technologists” covers many professions. Algorithms, data, and machine learning are technologies that automate functions in society beyond social media platforms. Therefore, developing technology in an ethical manner could be generalized to cover more than social media platforms.
Analysis and Discussion
In this chapter, extracts from the documents presented earlier will be analyzed to point out different methods for moving towards a more ethical technological development. First I will look at why the technologists see structural change as necessary, and then at how cultural change from both companies and users is also needed to put pressure on the system while waiting for regulations to be put in place.
Structural Change: Regulations and Laws
A Quest for Allocating Responsibility

Tristan Harris testified at the congressional hearing on consumer protection and commerce on January 8th, 2020 (Center for Humane Technology, 2020b). The hearing was titled “Americans at Risk: Manipulation and Deception in the Digital Age” (Center for Humane Technology, 2020b). Harris argues that technological deception and manipulation is an infrastructural problem and that the responsibility for it should be placed on the people building the infrastructure, not on the consumers (Center for Humane Technology, 2020b). He focuses on two problems related to the business model of social media: mental health among youths and polarized information distribution (Center for Humane Technology, 2020b). He points out that solving the problems related to social media is not aligned with the business model of the tech companies that offer these services (Center for Humane Technology, 2020b). In the background chapter I described attention as a scarce resource and the attention economy as a market war between tech companies. The race for users’ attention on social media platforms is powered by attention-grabbing design, for example visually through the use of colors and graphics and audibly through notifications (Harris & Raskin, 2019b). The need for acceptance and attention from others is exploited in a system based on numerically visible reactions, such as likes, shares, and comments (Harris & Raskin, 2019b). The way technology is being built is based on human behaviors, but does not necessarily favor humans’ own interests (Harris & Raskin, 2019b). The technical design features are still profitable because they keep the user interacting with the digital platform (Harris & Raskin, 2019b). Continuous development of these features has been necessary in order to compete with other sources the user’s attention may be drawn towards (Harris & Raskin, 2019b).
Harris addresses these issues as a national security threat and calls on the government to contribute external guidelines for the companies, as it is not sufficient to rely on the moral compasses of employees in this profitable industry (Center for Humane Technology, 2020b). He suggests initiating a massive awareness campaign to show the public how companies are manipulating their attention for profit (Center for Humane Technology, 2020b). According to Harris, laws and regulations in the physical society should be equally applicable in the digital society (Center for Humane Technology, 2020b).
The business model profits from the amount of time users interact with the platform, and from the use of data created through patterns of behavior (Harris & Raskin, 2019a). The negative effects this has on users and society are well documented, but do not affect the companies’ financial gain (Harris & Raskin, 2019a). In episode 3 of Your Undivided Attention, “With Great Power Comes… No Responsibility?”, Eisenstat suggests a potential method for curbing the continued growth of attention-extracting business models (Harris & Raskin, 2019a). She suggests quantifying the effects attention extraction has on public health, productivity, and polarization, and using this as a basis for taxing profiteering companies accordingly (Harris & Raskin, 2019a). This would make it less attractive for companies to earn money on people’s attention, and thus make the business model unsustainable (Harris & Raskin, 2019a). It would, however, be a significant challenge to accurately estimate the economic costs of decreased attention span, increased polarization, disinformation, and social media addictiveness. I will later discuss raising awareness among users about the business model and the impact of social media.
Freedom of Speech or Freedom of Reach

The CDA 230 is important for freedom of expression because it does not force the platforms to censor uploads. In episode 4 of Your Undivided Attention, “Down the Rabbit Hole by Design”, Guillaume Chaslot, Aza Raskin, and Tristan Harris discuss the CDA 230 (Harris & Raskin, 2019b). Chaslot acknowledges the CDA 230 as a way of regulating responsibility for user uploads, but he points out that the CDA 230 was voted into effect before artificial intelligence was implemented in recommendation systems in social media (Harris & Raskin, 2019b). The CDA 230 was not originally formulated to justify Google’s freedom to create algorithms intended to recommend content with the aim of increasing watch time (Harris & Raskin, 2019b). Seventy percent of views on YouTube are generated by the recommendation engine (AlgoTransparency, n.d.). These numbers show that the technology implemented on YouTube is to a large extent responsible for what we see there. The content we are presented with on YouTube is not a reflection of users’ uploads, but of what content keeps attention on the platform. In the conversation, Raskin refers to a distinction between the responsibility for users’ uploads and the responsibility for which content to promote or amplify: “The freedom of speech is not the same thing as the freedom of reach” (Harris & Raskin, 2019b, [34.30-34.36]). When YouTube chooses which content to amplify, it is choosing what information millions of viewers are presented with every day. It does not matter that machines make these choices, because humans develop the systems and choose which criteria are important. One can question whether there is an important distinction between recommendations from artificial intelligence and recommendations from acquaintances. Chaslot considers it important to shed light on the algorithms that amplify content, to understand how they work, and to demonstrate their outcomes (Harris & Raskin, 2019b). I will talk more about this when we look more closely at Chaslot’s examination of the amplification patterns on YouTube.
In another episode of Your Undivided Attention, episode 3, “With Great Power Comes… No Responsibility?”, Harris, Raskin, and former CIA officer Yaël Eisenstat discuss the lack of nuance in the conversation about the CDA 230 (Harris & Raskin, 2019a). The conversation has turned into a polarized debate about whether you are for or against freedom of expression (Harris & Raskin, 2019a). The focus on content moderation overshadows the need for systemic change (Harris & Raskin, 2019a). Eisenstat calls for a determination of what role the platforms should have, and for finding a way to regulate them accordingly (Harris & Raskin, 2019a). As long as the CDA 230 is not adjusted, the companies are not responsible for what content you see on their platforms (Harris & Raskin, 2019a).
Cultural Change: Companies and Users
Making a Change from the Inside

The Center for Humane Technology (n.d.b) supports technologists in changing the way technology is built by suggesting other criteria for success. Improving technologists’ knowledge about the relationship between the human mind and technology is set as a foundation for creating technology that honors human nature and helps consumers live lives aligned with their deepest values (Center for Humane Technology, n.d.d). The Center for Humane Technology (n.d.d) has developed what they call “Principles of Humane Technology” in order to guide technologists towards taking responsibility for the technology they develop. The principles are based on the idea that technology is not neutral, but rather shapes our social life (Center for Humane Technology, n.d.d). They base this statement on three reasons. The first is that the technologist’s values and assumptions shape the way technology is built (Center for Humane Technology, n.d.d). This appears through selecting which features should be set by default, which content should be presented and in what order, and which options for interaction the user should have; a simple illustration follows at the end of this subsection. The second reason why technology is not neutral is that the intentions of the developer can fail to comply with the actual values and assumptions of the world (Center for Humane Technology, n.d.d). Economic pressure to grow sales for shareholders, as well as social dynamics, changes the effects of the new technology. The third reason is that interactions with digital technologies affect how we feel and shape our lives, just as interactions with other humans and real-life experiences do (Center for Humane Technology, n.d.d). The Center for Humane Technology (n.d.d) exemplifies the last reason by referring to the social media environment of likes and comments, which shapes what we share, while the numeric reactions affect how we feel about it.
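To make the first reason concrete, here is a small, hypothetical sketch (the setting names are invented for illustration and are not taken from any real platform) of how two different sets of default settings encode the designer’s values before the user has made any active choice:

# Hypothetical default settings for a social app, illustrating how the
# designer's values are baked in before the user decides anything.
# Autoplay, push notifications, and visible like counts favor time spent
# on the platform; a break reminder favors the user's own limits.

ENGAGEMENT_FIRST_DEFAULTS = {
    "autoplay_next_video": True,
    "push_notifications": True,
    "show_like_counts": True,
    "break_reminder_minutes": None,  # no reminder unless the user opts in
}

HUMANE_DEFAULTS = {
    "autoplay_next_video": False,
    "push_notifications": False,     # user opts in per contact or channel
    "show_like_counts": False,
    "break_reminder_minutes": 30,    # reminder shown by default
}

Neither configuration limits what the user can choose, but each nudges behavior in a different direction, which is exactly the sense in which the technology is not neutral.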
In episode 4 of Your Undivided Attention, “Down the Rabbit Hole by Design”, Guillaume Chaslot talks about his experience working at Google (Harris & Raskin, 2019b). His experience is an example that shows the importance of supporting technologists in taking responsibility for the systems they develop. Chaslot worked on improving the algorithms in the recommendation feature on YouTube in 2010 and 2011 (Harris & Raskin, 2019b). After successfully optimizing the algorithms, which multiplied the number of streams, he observed how the recommendations systematically seemed to favor extreme content (Harris & Raskin, 2019b). Chaslot, together with motivated co-engineers at Google, developed and suggested other algorithmic options (Harris & Raskin, 2019b). But because management’s top priority was to increase watch time by 30 percent every year, the options were rejected (Harris & Raskin, 2019b). An attempt to get people out of their filter bubbles was seen as a distraction from the main goal of capturing users’ attention to increase the time they spend online (Harris & Raskin, 2019b). Similar recommendation systems are implemented in Instagram, Snapchat, and other social media platforms (Harris & Raskin, 2019b).
As a result of Chaslot’s discoveries about the functioning of the algorithms, he founded a project in 2018 called AlgoTransparency (n.d.). In the project he examines the incentive structure of the recommendation engine in an attempt to visualize which videos are recommended the most (AlgoTransparency, n.d.). In episode 4 of Your Undivided Attention, Chaslot, Harris, and Raskin (2019b) explain a few factors that make the algorithms effective at generating increased views by grabbing the user’s attention. One of the topics that is effective at grabbing our attention is conspiracy theories. Chaslot explains that there are multiple reasons why conspiracy theories are effective. Conspiracy theories are easy to create. The people who believe in conspiracy theories often do not watch classical media and therefore spend more time on YouTube (Harris & Raskin, 2019b). YouTube’s algorithm weighs the activity of people who spend more time on YouTube more heavily than that of people who spend less time there (Harris & Raskin, 2019b). The algorithm does not consider whether content reflects morality or truth, because these qualities do not necessarily generate more views (Harris & Raskin, 2019b). For the recommendations to be relevant for generating views, it is necessary to recommend videos that people actually watch (Harris & Raskin, 2019b). Harris explains that getting the viewer to watch more videos is not the only factor that increases the tilt towards extreme content. The rush of feedback creators on YouTube get when the number of views on their content increases stimulates addiction to getting attention from other people (Harris & Raskin, 2019b). This pushes the creators to make content that gets a lot of attention by being creative and extreme in the race for the viewers’ attention. In episode 4 of Your Undivided Attention, Raskin and Harris point out that society is structurally dependent on unpaid, nonprofit civil society researchers like Guillaume Chaslot to see the inner problems and communicate them to the rest of the world’s population (Harris & Raskin, 2019b).
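The weighting logic described above can be illustrated with a small, hypothetical sketch (this is not YouTube’s actual code; the data and names are invented): when candidate videos are ranked purely by the watch time they generate, the heaviest users’ behavior dominates what everyone is recommended next.

# Hypothetical sketch of engagement-weighted ranking: each video's score is
# the total watch time it has generated, so the users who spend the most
# time on the platform contribute the most to what gets recommended next.

watch_logs = [
    # (user, video, minutes watched)
    ("heavy_user", "conspiracy_deep_dive", 55),
    ("heavy_user", "conspiracy_deep_dive", 48),
    ("heavy_user", "conspiracy_deep_dive", 62),
    ("casual_user", "news_summary", 8),
    ("casual_user", "music_video", 4),
]

def rank_by_watch_time(logs):
    scores = {}
    for _user, video, minutes in logs:
        scores[video] = scores.get(video, 0) + minutes
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

print(rank_by_watch_time(watch_logs))
# The heavy user's preferences dominate the ranking, even though the casual
# user may represent a much larger share of the audience.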
Harris and Raskin (2019b) discuss why amplification transparency is a good idea: it shows why the algorithms behave one way or the other. These systems are applied not only on YouTube, but also on other platforms such as Facebook, Google Autocomplete, and Twitter (Harris & Raskin, 2019b). Raskin argues that there is no significant difference between choosing the content to amplify and choosing an algorithm that chooses the content to amplify (Harris & Raskin, 2019b). Because the choice of amplification affects what information many people are presented with every day, the power of the choice being made is not matched by the platforms’ limited responsibility. For example, 70 percent of the total number of views on YouTube are determined by the recommendation algorithm (Harris & Raskin, 2019b). By visualizing the functioning of the algorithm through amplification transparency, civil society can determine whether the patterns of amplified content are aligned with our values or not. In this way it is possible to hold the platforms, rather than the users, accountable. In such a system, it will be visible that the recommendations we get on social media platforms are not random, but motivated by a purpose of engaging the user.
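As a minimal sketch of what such amplification transparency could look like in practice, in the spirit of AlgoTransparency but not taken from its actual code, one can record which videos a platform is observed to recommend from a set of starting channels and publish how often each video is amplified (the channel and video names below are invented):

# Hypothetical sketch of amplification transparency: count how often each
# video appears in the recommendation lists observed from a set of seed
# channels, and publish the totals so the public can see which content the
# algorithm pushes most.

from collections import Counter

observed_recommendations = {
    # seed channel -> videos the platform was observed to recommend from it
    "seed_channel_a": ["flat_earth_expose", "cute_cats", "flat_earth_expose"],
    "seed_channel_b": ["flat_earth_expose", "election_fraud_claims"],
    "seed_channel_c": ["cute_cats", "flat_earth_expose"],
}

def amplification_report(recommendations_by_channel):
    counts = Counter()
    for recommended_videos in recommendations_by_channel.values():
        counts.update(recommended_videos)
    return counts.most_common()

for video, times_recommended in amplification_report(observed_recommendations):
    print(f"{video}: recommended {times_recommended} times")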
Chaslot’s failed attempt to convince the management to change the algorithms clearly shows why technology companies need pressure from several sources in order to commit to a change of strategy. Harris points out that it is not true that the companies are unable to understand the problem, but rather consider the questions surrounding it less important than other plans for the company. For instance, there are new phones to ship and new versions of Android to launch. (Harris & Raskin, 2019b). To change the priorities in the companies it can be necessary to involve the users’.
Encourage Users to Express Their Needs

Raising awareness about how digital technology affects human life is a necessary supplement in the development of more humane technology. The problems related to how social media constructs the user’s social life are ongoing. Changing the culture among technologists, creating new market conditions for humane technology, and agreeing on how technology companies should be regulated are necessary for sustained change. However, systemic change is not done overnight.
The call for users to take action might seem like an attempt to hold the user accountable, but it is rather meant to encourage users to show what is necessary in order to align technology with their actual values. In this way, users can demand that technology companies facilitate valuable interactions with social media. The Center for Humane Technology (n.d.e) provides recommendations for what users of social media can do to improve their relationships with digital technology. Among the suggestions are delaying access and further restricting use among children (Center for Humane Technology, n.d.e). They also suggest deleting applications such as Snapchat, TikTok, and Instagram, or at least reconsidering whom to follow (Center for Humane Technology, n.d.e). The recommendations emphasize discussions in families and schools on themes like polarization, cyberbullying, and gaming addiction (Center for Humane Technology, n.d.e).
It is important to remember that social media is a valuable tool for freedom of expression, information, and social interaction, and is not used exclusively because of the design and features that grab the user’s attention. There are actual benefits to using social media, but the heavy volume of distractions from what is important in life requires the user to be aware and conscious when interacting with the platforms. Knowledge about the market forces that dominate social media platforms today, as well as the impact the digital environment has on the human mind and social life, can help users make informed choices about how they want to involve social media in their lives.
Conclusion
There are many ways technologists can contribute to a movement towards a more ethical development of technology. It is practically impossible to make this change without structural changes and involvement from the users to put pressure on companies. Regulations and laws are necessary in order to make a change across companies and country borders, and this especially concerns the CDA 230. This law is still essential to ensure that users of digital platforms are able to express themselves freely without the platforms being held responsible. However, as automated technologies like machine learning and algorithms intentionally amplify extreme content in order to keep users connected, it is necessary to decide which role the platforms play. They profit from designing an environment where humans interact and spread knowledge, but choose design features that filter content, making knowledge something that is individually altered instead of general.
In order for these structural changes to occur, both tech companies and users need to participate. Users need to express their wishes for digital platforms, what they expect of them, and what they would like to see changed. Tech companies need to take these users’ experiences into account, which will demonstrate the community’s willingness to accept regulations and laws when they are applied by governments and/or international organizations.
A fundamental starting point for achieving these movements for change is to know the implications of existing systems and the potential alternative solutions. Technologists play a crucial role in mapping the technical functioning of existing systems and explaining their inner workings so that non-technologists can understand how they work. Developing technologists’ knowledge of human nature is an important element in helping them understand the impact technology has on individuals and society. This can stimulate future technology to be developed in alignment with actual human values.
There are many other interesting aspects of this topic to research, such as the consequences of the algorithms that are only briefly mentioned in this study. How have digital platforms changed the way we create and spread knowledge? This study has looked at what conditions are important for social media to be designed and used in a way that is valuable for users and society. It would also be interesting to dive into the way knowledge is actually being spread through the platforms today.