
INFORMATION NETWORKS: IMPROVING INFORMATION CONSUMPTION THROUGH HUMAN CURATION

BY PARANKUSH BHARDWAJ

A Thesis

Submitted to the Division of Natural Sciences, New College of Florida, in partial fulfillment of the requirements for the degree of Bachelor of Arts under the sponsorship of Dr. Tania Roy

Sarasota, Florida
May 2020


Acknowledgements

I would like to thank my thesis sponsor, Dr. Tania Roy. Dr. Roy's first course at New College of Florida, Foundations of Human-Centered Computing, was where I got the idea for this research project. Not only was she supportive of the project, but her active involvement, including the constructive criticism I often received on the designs of the app, proved to be invaluable.

I would also like to thank Dr. John Doucette. I consider myself extremely lucky to have been his first and longest-running advisee. His support has helped me transition from a high school debate kid who knew nothing about computers into an excited software engineer. Dr. Doucette is more than just my advisor; he is my friend. I thank him for proofreading this thesis multiple times, for providing crucial feedback, and for making my four years at New College unforgettable.

Next, I want to thank all the students in the Computer Science department at New College. Thank you for all the fun conversations we had, not only in classes but also during random hours in the Computer Science Reading Room. Thank you for all the advice that has made me a better student and engineer. I am lucky to have been part of a department that has been so warm, welcoming, and supportive.

Thank you to all the participants who were part of the user studies in this thesis. The feedback you gave directly made the software and research significantly more robust. This thesis would not be where it is today if it were not for your generosity.

I would also like to thank all my friends for supporting me. In particular, thank you, Myles Rodriguez, Raymond Blaha, Sydney Clingo, and Kevin Howlett, for the fun and unforgettable four years we all had together. I especially want to thank my partner, Gina Vazquez, for being my foundation and helping me thrive during my final year at New College.

Finally, and most importantly, I want to thank my parents, Princy and Suraj Bhardwaj. Words cannot describe the amount of gratitude that I have for them. I owe my success to their constant support, and this thesis is dedicated to them.



Information Networks: Improving Information Consumption Through Human Curation

Parankush Bhardwaj

New College of Florida, 2020

Abstract

This thesis describes the development of a software application aimed at delivering news content to readers free of misinformation, clickbait, and filter bubbles. Rather than relying on an algorithmically curated timeline, like those of Facebook and Google, to deliver content to consumers, this application instead relies on human curation. Two user studies were conducted to examine the effectiveness of the app's user interface in providing a great user experience to consumers. The results show that human curation creates a high-quality browsing experience and should be used more often in information networks when delivering news content to consumers.

Dr. Tania Roy, Assistant Professor of Computer Science April 20, 2020



Contents

Acknowledgements
Abstract
1 Introduction
2 Background
3 Research Statement
4 Methods
    4.1 Initial Prototype - The Home Library
        4.1.1 Development Process
        4.1.2 Interface Overview
    4.2 Second Prototype - The Global Library
        4.2.1 Development Process
        4.2.2 Interface Overview
    4.3 User Studies
        4.3.1 User Study A - Home Library
        4.3.2 User Study B - Global Library
5 Results
    5.1 First User Study
    5.2 Second User Study
6 Conclusion & Limitations
A Appendix
    A.1 Evaluation Form A
    A.2 Evaluation Form B



Chapter 1

Introduction

The story below is a fictional work, designed to illuminate a pressing issue facing society today.

One winter morning, a new public library was introduced to the world, dubbed the 'infinite library.' Little did anyone know at the time that the world around them would never be the same again. The library was a radical departure from its peers because of its ability to hold an infinite amount of information. The infinite library, with its immense size, provided users access to any book in the world, any newspaper or magazine, and even any type of media content. Further, the library allowed any person to add their own works to its shelves. On top of publishers, many individuals, organizations, and corporations added content to the library.

All of this was labeled as revolutionary. Valuable information once guarded by the oldest and most exclusive institutions was now publicly available for anyone to read. Other infinite libraries were built as well, ensuring that everyone across the world had instant access to all of the world's information.

As the shelves of content in these libraries infinitely expanded, browsing through an infinite library became increasingly challenging. Thus, individuals began taking advantage of 'library agents,' robotic agents that help readers find what they are looking for. There are hundreds of these types of agents in any given infinite library. Agent G, for example, asks what you're looking for and then instantaneously provides you with the most relevant books, using a system that ranks books by the number of times other popular books reference them. Another, Agent F, recommends books based on your interests and what your friends are reading. Most readers navigate the infinite library exclusively using these agents, most of which are free to use.

This infinite library, however, started to have problems that seemed infinitely more complex. Four issues in particular seemed most alarming. The first was the introduction of poorly written content. Recall that many agents have the goal of giving you the books you will most likely enjoy reading.



The most popular agents were designed to observe whether their recommendations were picked up by users and to tailor future recommendations based on users' responses. These agents often found that readers were picking up books with provocative titles. As a result, they recommended more and more books with that style of writing. These agents also observed that readers were more likely to engage with texts that invoke strong negative feelings, like anger and anxiety. Thus, they also recommended books containing more negative and controversial content. Because these agents also act as gatekeepers of knowledge, authors started to adopt that style of writing so that the agents were more likely to recommend their books. This situation resulted in the library becoming filled with increasingly misleading and controversial content.

The second issue that started to emerge was the lack of information diversity. Popular agents tended to use personal interests to determine the type of books a given person is likely to enjoy reading. Thus, if you like reading science fiction, agents would recommend more books of that genre. However, this created a new issue: opinions were no longer challenged and were instead reinforced by what people read. For example, a person who read a book about Communism started to see recommendations for books discussing more extreme and radical ideologies. People who read books affirming one conspiracy theory were recommended books advancing other conspiracy theories. The result was a library that caused readers to form more extremist views on many topics. Readers with different opinions about the world were pushed to more extreme versions of their initial views, leading to increased polarization.

Third, the library faced a dangerous rise of misinformation. The infinite library held infinite books, including books with false information. Books were often created to misinform the public for personal, political, or business motives. Because agents could not tell the difference between the truth and a lie, they inadvertently spread misinformation by sharing these books with millions of readers. This led to a growing proportion of the public being misinformed about the world around them.

Finally, lots of great content became lost in this infinite library simply because it was old. The library accepted many new books every minute, which meant agents were always reanalyzing their collections to account for those new books. The most popular agents found that new books were more likely to be read than old ones. As a result, agents mostly recommended new books and rarely recommended older books that may have contained higher quality content.

The infinite library had an interesting dynamic. On the one hand, there was a liberating library that gave an endless amount of information to everyone. On the other hand, the content that people were reading became increasingly negative, misleading, sensationalist, and false.

As you may be able to tell, the infinite library is a metaphor for the internet, and library agents represent the popular tools we use to browse the internet, like Google and Facebook. The infinite library, and by extension, the internet, holds a phenomenal contradiction: its best feature, that it can show you all of the world's information, is also its worst trait.



The surplus of information in the infinite library has made us dependent on algorithmic curation, creating unintended negative consequences. These consequences will be detailed in the next chapter.

This thesis looks at a new way we can explore the internet. Imagine that inside the infinite library exists a much smaller library, closed off from everything else. This smaller library has shelves of books stocked entirely by human curators. These curators, who are domain experts in various fields, aim to create an experience where readers can browse through the collection without ever worrying about whether they can trust the content they're reading. This thesis seeks to determine if such a library can spark interest among readers.



Chapter 2

Background

Before we dive into the details of this application, let's briefly discuss the current state of the internet. Like the infinite library, the internet is a place for the masses to get extensive knowledge from a single source. The world wide web currently holds well over six billion active webpages [1]. This number is so large that if you were to travel back in time by six billion seconds, you would go back roughly 200 years, which, at the time of this writing, places you in the early 1800s. And indeed, this large size has attracted a large number of visitors. As of 2019, there are 4.4 billion internet users [18]. More than one in two people on the entire planet are connected to the internet. Further, this number represents a 9 percent increase in internet users from just the previous year [18]. This fast growth rate indicates that we are not far from a time when every human on the planet has an internet connection.

Recall the rise of poorly written content being recommended to readers in the infinite library parable. This occurred because books with catchy titles and salacious content were more likely to be picked up, causing library agents to recommend more of those types of books. For authors to have their work recommended, they increasingly followed this style of writing, resulting in lower quality works being published. This trend also exists on the internet. In particular, for new internet content to be widely read, it must be recommended by platforms like Google and Facebook. These platforms can be referred to as information networks.

Let's look at Facebook's News Feed as an example. Of the over 1,500 stories a person might see whenever they open Facebook, the News Feed displays only about 300 posts [5]. Because most users only see the top few posts, the ranking of these 300 posts by Facebook also determines what content users will read. Like library agents, these networks effectively decide which news content is and isn't visible. Further, like library agents, these networks aim to give users content that they're likely to read and enjoy. They accomplish this through an array of metrics, such as the number of reactions and comments a post has received from other users [10]. However, this has some downfalls when it comes to the quality of content.



Notably, it incentivizes authors to write more misleading and negative content. A particular type of misleading content is clickbait. Clickbait is defined as "a form of web content that employs writing formulas and linguistic techniques in headlines to trick readers into clicking links, but does not deliver on promises" [20]. A recent study analyzed 1.67 million Facebook posts created by 153 media organizations and found that over 33 percent of the content posted by mainstream media organizations (the 25 most circulated print media and the 43 most-watched broadcast media) was clickbait [20]. The study also found that the proportion of clickbait content at unreliable media organizations (which include media organizations based on conspiracy, clickbait, satire, and junk science) was, surprisingly, at a similar rate: 39 percent.

The high levels of clickbait can be attributed to news publications vying to gain traction on online platforms. Pew Research found that four in ten Americans get their news online, and 44 percent of Americans get their news from Facebook alone [16]. Further, due to the falling demand for printed newspapers, advertising revenues for newspapers have fallen from "a high of $49.4 billion in 2005 to just $18.3 billion in 2016" [3]. The falling revenues have made news publications more financially dependent on advertising revenue from online sources. As a result, news publications write more clickbait content with the hope that "they'll be widely shared and generate advertising revenue" [6]. Indeed, a study conducted by the Joint Research Centre, the European Commission's in-house science service, found that online news distribution "separated the roles of editor and curator of news and to a large extent transferred the latter role to advertising-driven algorithms" [15].

Information diversity is also an issue that profoundly affects the internet today. The lack of information diversity has caused the rise of filter bubbles; a filter bubble is defined as a "personalized universe of information" [7]. Many internet sites, like Facebook and Google, tailor results to your preferences. Indeed, during the Deepwater Horizon oil spill, an author asked two similar friends (white, left-leaning women who live in the northeast) to search for the term "BP" and was surprised at the results. He stated that "One of my friends saw investment information about BP. The other saw news. For one, the first page of results contained links about the oil spill; for the other, there was nothing about it except for a promotional ad from BP" [7]. Thus, two users can search the same term on Google for a trending news event and be greeted with opposing results. Similarly, two users on Facebook may be viewing news in their timelines that showcases different perspectives on the same set of current events [17].

The issue with this is that most users are unaware of the type of biases these networks impose. In other words, these agents do not communicate to readers the nature of the biased information that they're receiving. This distorts people's perception of reality, which can have drastic consequences, especially in politics. Take the example of custom search results for different people.



One study found that "biased search rankings can shift the voting preferences of undecided voters by 20% or more" and that "search ranking bias can be masked so that people show no awareness of the manipulation" [9]. The effects of these echo chambers go beyond influencing elections. YouTube's recommendation system has been faulted for radicalizing individuals by feeding them a constant stream of information that displays increasingly extremist views [19]. These software agents are satisfying their goal of giving people the information they are likely to consume, but doing so in a fashion that is polarizing society.

Another well-discussed issue on the internet is the problem of misinformation. Misinformation can be defined as "false or inaccurate information that is deliberately created and is intentionally or unintentionally propagated" [22]. Misinformation is a common problem that platforms like Facebook and Google face. One paper found that during the month before the 2016 US Presidential election, one in four Americans visited a fake news website [11]. Further, a Pew Research study found that nearly one in four Americans said they had shared a made-up news story (knowingly or not) [4]. A recent Stanford study found that during the 2018 election, fake news sites received nearly the same level of engagement on Facebook as most major news sites [2]. Further, a different study found that "false news reports that attack US politicians have been viewed more than 150 million times on Facebook since the beginning of 2019" [14]. Although the internet will always contain misinformation, the issue is that certain agents are unintentionally helping to spread it. The top recommended video on YouTube News after a mass shooting at a public high school in Florida was a conspiracy video claiming that one of the outspoken survivors of the shooting was not a student but instead a paid actor [12].

The internet today is a largely democratizing force, but it comes at some costs. Our reliance on a select few internet companies to surf the internet has resulted in the proliferation of clickbait, filter bubbles, and misinformation. Consuming information from an infinite source is challenging. Today, we cede power to anonymous algorithms behind Facebook and Google to give us the information we need for our daily lives. We propose a system that instead allows for human curation of news in real time.



Chapter 3

Research Statement

With the popularity of the internet, nearly everyone can have a voice online. The central issue people face is no longer who deserves to have a voice, but rather who among the masses they get to hear from. The most popular software applications, like Facebook and Google, rely on algorithmic curation to answer this question. This thesis seeks to answer whether human curation can deliver a great user experience to consumers. Providing a great user experience implies two key things: users enjoy using the software, and users give high ratings when asked to rate the software. Human-curated software can be defined as a digital system where all the content presented to users is approved by a set of expert individuals. These individuals have a role similar to editors; they filter out content that contains misinformation or clickbait.

To answer the research question, we create a prototype software application that allows people to consume and share curated content. This application then goes through two user studies, in which individuals test and rate the application's user interface. The results from the user studies are used to determine whether human-curated content can deliver a high-quality user experience.



Chapter 4

Methods

A prototype software application was developed that allows people to consume human-curated information. This application went through an iterative design process, which can be defined as "a design methodology based on a cyclical process of idea generation, evaluation, and design improvement until the design requirement is met" [21]. An initial software application prototype was first developed based on idea generation. Then, user studies were conducted to evaluate the design of the application. After that, the prototype was improved based on feedback provided by the user studies. A second usability test was then used to evaluate the second version of the prototype, which also included new features that went through similar usability tests. Results from the second round of user studies were used to determine if the prototype successfully provided users with an improved experience when surfing the internet. A high-level overview of this process can be seen in Figure 4.1.

Figure 4.1: This diagram details the four key steps of the research. Step 1 involves the design and development of one core component of the application. Then, in step 2, the application goes through an extensive usability test. Upon completion of this test, we start step 3, which involves the design and development of the second core component of the software. Once that is completed, step 4 involves doing a second usability test, but this time focused on the second feature that was built.



4.1 Initial Prototype - The Home Library

This section provides details on the initial prototype design of the application. It first covers the development process of the initial prototype, explaining why the major features and design choices were made. It also showcases the progression from the first paper prototype to the final high-fidelity prototype and notes the tools used to develop them. Later, this section explains how the user interface of the application works. In particular, it provides multiple images of the initial prototype's interface to show the design of the application and details how users interact with the app to use all the main features.

4.1.1 Development Process

The standard convention for the way people consume information digitally is by viewing a list of links curated by an agent. One can visit Google, Facebook, Twitter, or any other feed aggregator and see that most interfaces are designed around this concept of a curated list. However, lists have a structural weakness when it comes to fighting filter bubbles: any individual link may present some subtopic from a biased or narrow perspective. Recall that filter bubbles create a "personalized universe of information" by showcasing pieces of content that all display the same set of biases [7]. We cannot limit filter bubbles by simply preventing individual links that contain any bias from appearing in the feed. Biased information is not in and of itself a bad thing; most authors have some implicit bias. For instance, a publication from a national news organization may contain a biased perspective on a hotly debated topic, but that alone does not make the webpage unworthy of being read. The core issue arises when individuals read only a single viewpoint of a story.

To combat this, it was decided that each link must have neighboring links that expose alternative views. Thus, the interface should not be structured as a list of individual links, but rather as a list of groups of links. Each group represents some subtopic of news or information. For example, if there was a news story about European elections, rather than allowing individual links about it to be posted on a linear feed, the application instead shows a group of links related to the European election news. The group consists of webpages curated by domain experts to ensure there is no dominant bias. These curators prevent webpages that contain clickbait and misinformation about the subject from appearing in the group. Unlike editors at news publications, these curators focus more on filtering out bad content than on shaping public opinion.

We term each group of webpages a 'ReadList,' named after playlists because of their similar functionality. Just as a playlist represents a collection of songs paired together by specific attributes, a ReadList represents a collection of webpages joined by the shared subtopic they discuss.
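To make the ReadList concept concrete, the sketch below models it as a simple data type. This is a hypothetical illustration rather than the thesis's actual implementation; the type and property names are assumptions.

```swift
import Foundation

// Hypothetical sketch of the core data model: a ReadList is a curated group
// of webpages joined by a shared subtopic. Names here are illustrative only.
struct Webpage {
    let title: String
    let url: URL
}

struct ReadList {
    let topic: String          // the shared subtopic, e.g. "European Elections"
    var webpages: [Webpage]    // curated links exposing multiple viewpoints
    var isCurated: Bool        // true once a domain expert has approved it

    // The Home Library renders each ReadList as a stack of its first three pages.
    var coverPages: ArraySlice<Webpage> { webpages.prefix(3) }
}

let example = ReadList(
    topic: "European Elections",
    webpages: [Webpage(title: "Election results overview",
                       url: URL(string: "https://example.com/eu-results")!)],
    isCurated: false)
```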



The application consists of two key features: creating ReadLists and browsing curated ReadLists. It was decided that one round of user studies would be conducted for each of those features. A user study allows us to gain crucial feedback from participants on the software's user interface, and dedicating a user study to each feature allowed us to focus on each feature individually. For this reason, the initial prototype only included the functionality for creating new ReadLists. The second prototype addresses feedback from the first prototype and adds the ability to browse curated ReadLists.

The first prototype went through a three-phase development. First, multiple paper prototypes were created to establish the schematics of the user interface. These prototypes focused on the visual attributes of the interface, like how to visualize a ReadList. They were developed using large sheets of paper and a pen. These rudimentary tools allowed us to create multiple designs at almost no cost. The final iteration of the paper prototype can be seen in Figure 4.2.

Figure 4.2: This is the final version of the paper prototype. The focus was on developing an interface that would support key features, like keeping misinformation out of the feed.

Upon the completion of the paper prototype, the development of a digital prototype began. Instead of drawing out the interface by pen, digital renderings were made to make the prototype feel more realistic. The renderings allowed us to refine essential details, including the colors and shapes of the elements in the app. The software application 'Sketch' was used to create these realistic prototypes. The final digital prototype can be seen in Figure 4.3.

The software development of the app began immediately after the completion of the digital prototype. Software empowered the prototype with dynamic interactions, including actions responding to user input. Developing the software application allowed us to focus on gestures and animations, which enhanced the user experience of the prototype.



Figure 4.3: This is the final version of the digital prototype. In this version, we developed colors for the interface and navigation functionality.

The software application was developed using an Integrated Development Environment called 'Xcode' [8]. The software was made as an iPad app so that it could easily adapt to work well on phones and computers. Its touch-driven interface translates naturally to a smartphone application, and its layout fits large screen sizes, making a computer application possible as well. Having the prototype be easily adaptable to other platforms was essential in case this application is used for future work on non-iPad devices. The software application can be seen in Figure 4.4.

4.1.2 Interface Overview

When users first open the application, they are greeted with the initial screen, called the Home Library, as seen in Figure 4.4. This screen shows all their saved ReadLists. All webpages must be kept within some ReadList; thus, the library showcases a collection of these folders. Notice from Figure 4.4 that each ReadList looks like a stack of three webpages, which represent the first three webpages saved within that ReadList. This visual folder makes it clear to users what type of content they can expect from any given ReadList.

Whenever a user taps on an existing ReadList, they enter a new screen that displays all the saved webpages under that ReadList. For example, if a user taps on the ReadList with the title '1920s US Political History', they will be greeted with all their saved webpages related to that topic. This screen is called the ReadList screen and can be seen in Figure 4.5.



Figure 4.4: This is the final version of the software prototype, showing the Home Library screen. This is the first screen users see when opening the app. Each group of tiles shown is called a 'ReadList'.

The last webpage in the grid is always empty and acts as a button for adding a new webpage to the ReadList. Users can also tap on any webpage to view it in full screen. When inside a webpage, users can interact with it as they would with any web browser. You can see this in Figure 4.6.

Animations are also used across the application to make the interaction feel more fluid. Animations can be defined as moments when software objects move from one place to another. They make it consistently clear to users where they are when navigating the interface. For instance, when a user taps on a ReadList, two key animations happen simultaneously. First, the neighboring ReadLists fade away into the background and disappear. Second, the selected ReadList grows in size and spreads its cards to form the grid that you see in Figure 4.5. This animation process can be seen in Figure 4.7.

This type of animation also occurs in the ReadList screen. When users are inside any given ReadList, once they select a webpage, that webpage expands in size to fill the entire screen. Then, users can instantly start interacting with the website. This animation process can be seen in Figure 4.8.

These animations are also all reversible. If a user exits a webpage, the webpage shrinks in size and simultaneously moves to its correct placement in the grid of webpages on the ReadList screen. Additionally, when users exit any given ReadList, the grid of webpages clusters together to form a stack of webpages and moves to its correct place in the collection of ReadLists, as seen in the Home Library screen.
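The open-ReadList transition described above can be approximated with a single UIKit animation block. The following is a minimal sketch, assuming hypothetical view names; it is illustrative, not the thesis's actual code.

```swift
import UIKit

/// Minimal sketch of the open-ReadList transition described above, assuming
/// hypothetical views for the selected ReadList stack and its neighbors.
func openReadList(selected: UIView, neighbors: [UIView], in container: UIView) {
    UIView.animate(withDuration: 0.4, delay: 0, options: [.curveEaseInOut]) {
        // 1. Neighboring ReadLists fade into the background and disappear.
        neighbors.forEach { $0.alpha = 0 }
        // 2. The selected ReadList grows to fill the container, where its
        //    cards would then spread out to form the ReadList screen's grid.
        selected.frame = container.bounds
    }
}
```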



Figure 4.5: This is the ReadList screen. It's seen after a user selects a particular ReadList.

The application also takes advantage of the large multi-touch screen to let users navigate the interface using interactive gestures. Gestures are hand movements a user makes on the screen, like dragging their finger from the left side of the screen to the right side. The goal of these gestures is to make the interface feel faster to navigate. There are three critical gestures that people can use.

First, notice from Figure 4.4 of the Home Library that the second and third webpages belonging to any particular ReadList are mostly hidden behind the first webpage. To view the other webpages, or to 'peek' into the ReadList, users can pinch into the given ReadList. To pinch in, users put two fingers on the ReadList and then expand the distance between the fingers. Pinching out, or decreasing the gap between the two fingers, will cluster the webpages together again. This interactive gesture can be seen in Figure 4.9. Users can enter the ReadList screen by pinching in to fully reveal all webpages and then removing both fingers from the screen. This gesture allows the webpages to animate to their correct places on the ReadList screen.

Second, when on the ReadList screen, users can pinch into any webpage to make that webpage full screen. When a user pinches into a webpage, the selected webpage grows in size as the distance between both fingers increases. Once the user removes their fingers from the screen, the webpage continues to grow until it fills the screen.

Third, when surfing a webpage in full screen, users can pinch out to go back to the ReadList screen. As the user pinches out, the webpage shrinks in size as the distance between both fingers gets smaller. Once the user removes their fingers from the screen, the webpage continues to shrink until it's in its correct placement on the ReadList screen. This interactive gesture looks like the reverse of the one seen in Figure 4.8.



Figure 4.6: This is the Web Page screen. It's seen after a user taps on a particular webpage within a ReadList. The website instantly loads after selection so that users can immediately interact with it.

An essential attribute of this interface is the relationship between the gestures and the animations. For instance, when users pinch into a webpage from the ReadList screen, they can remove their fingers from the screen at any time. Once they remove their fingers, the animation continues on its own until complete. The total distance from when the user starts the pinch to when they end it is called the gesture distance. The gesture distance can vary drastically, from just a few millimeters to the entire screen length of the iPad. Some users will move their fingers only slightly apart before removing them from the screen, whereas others may drag their fingers as far apart as possible before removing them. But wherever the gesture stops, there is an expectation that the animation continues even after the user stops gesturing. For instance, if a user expands a webpage tile by 100 percent in size by pinching into the webpage and then removes their fingers, they expect the webpage to continue growing automatically until it fills the screen. Thus, the animations and gestures work in tandem, with the animations starting when the gestures end.

The app determines what the gesture intends to do and then runs the correct animation. For instance, suppose a user pinches into a webpage tile, expanding it by 200 percent, but then starts pinching out, shrinking it by 10 percent, and then ends the gesture. The application assumes that the user intended to dismiss the webpage tile back to its original size; thus, the animation continues to shrink the tile until it returns to its original size.



Figure 4.7: This figure shows how the app transitions between the Home Library and the ReadList screen. Notice that the stack of webpages expands until it forms a grid.

Further, the speed of the animation depends on the speed of the gesture. This relationship exists because users will naturally pinch into a webpage from the ReadList screen at various speeds (some may pinch quickly, whereas others may do so very slowly). It would feel inconsistent if the user pinched in suddenly and, once they let go, the animation finished the transition at a noticeably slower rate. Thus, when a user pinches into a webpage, the application computes the velocity of the pinch. The moment they remove their fingers from the screen, the animation immediately starts from that point with a starting velocity equal to the final velocity of the pinch gesture. This makes the software objects feel like real physical objects because they operate under the same laws of physics. The animations and gestures work alongside each other to deliver an interactive experience that feels realistic, fluid, and fun. These characteristics are essential because they make the software more enjoyable to use.
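In UIKit, this gesture-to-animation handoff maps naturally onto UIViewPropertyAnimator, which can be scrubbed by a gesture and then continued with spring timing seeded by the gesture's final velocity. The sketch below is a hypothetical reconstruction of the described behavior, not the thesis's actual code; the view names, progress mapping, and thresholds are assumptions.

```swift
import UIKit

/// Hypothetical sketch: a pinch gesture scrubs a tile-expansion animation,
/// then hands its final velocity to the animator so the transition finishes
/// at a matching speed.
final class TileExpansionController: UIViewController {
    private var animator: UIViewPropertyAnimator?
    private let tile = UIView()  // stands in for a webpage tile

    override func viewDidLoad() {
        super.viewDidLoad()
        tile.frame = CGRect(x: 140, y: 280, width: 240, height: 320)
        tile.backgroundColor = .systemBlue
        view.addSubview(tile)
        let pinch = UIPinchGestureRecognizer(target: self,
                                             action: #selector(handlePinch(_:)))
        tile.addGestureRecognizer(pinch)
    }

    @objc private func handlePinch(_ gesture: UIPinchGestureRecognizer) {
        switch gesture.state {
        case .began:
            // Build a paused animator whose end state is the full-screen frame.
            let animator = UIViewPropertyAnimator(duration: 0.5, curve: .easeOut) {
                self.tile.frame = self.view.bounds
            }
            animator.pauseAnimation()
            self.animator = animator
        case .changed:
            // Map the pinch scale onto animation progress (the gesture distance).
            let progress = min(max((gesture.scale - 1.0) / 1.5, 0), 1)
            animator?.fractionComplete = progress
        case .ended, .cancelled:
            // Infer intent: finish expanding past a threshold, else reverse.
            let shouldExpand = (animator?.fractionComplete ?? 0) > 0.3
            animator?.isReversed = !shouldExpand
            // Seed the remaining animation with the gesture's velocity so the
            // handoff feels physically continuous.
            let spring = UISpringTimingParameters(
                dampingRatio: 1.0,
                initialVelocity: CGVector(dx: gesture.velocity, dy: gesture.velocity))
            animator?.continueAnimation(withTimingParameters: spring, durationFactor: 1)
        default:
            break
        }
    }
}
```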



Figure 4.8: This figure shows how the app transitions from the ReadList screen to a webpage. In this example, a user selected the middle tile, resulting in that tile expanding until it fills the entire screen. Then, users can start interacting with that webpage. The tile can also be expanded by pinching into it with two fingers.

Figure 4.9: This figure shows the effect of a user ‘peeking’ into a ReadList. This action is accomplished by pinching into a particular ReadList to view more information about it.

4.2 Second Prototype - The Global Library

The first prototype focused on the development of the Home Library. This focus allowed us to refine the interface for people to consume and create ReadLists. The second prototype centered on the development of the Global Library, a place for people to consume curated ReadLists that other people have uploaded. The second prototype also adds a quick process for people to upload their curated ReadLists to the Global Library. Finally, this prototype includes updates to the Home Library interface based on the feedback from the first user study. This section details the development process of the prototype and then describes the application's user interface.




4.2.1 Development Process

Recall that the Home Library from the first prototype revolved around an interface for people to create their own ReadLists. This second prototype builds on top of the first one by adding two key features: the ability to upload ReadLists so that anyone can read them, and the ability to browse through ReadLists other people have uploaded. The interface for browsing other ReadLists is called the 'Global Library.'

The goal with this prototype was to create a place where people can share and consume information without worrying about clickbait, filter bubbles, or misinformation. To accomplish this, two key components were added to the interface. First, a ReadList can only be accepted into the Global Library if a curator approves it. Curators are domain experts who read uploaded ReadLists and decide, based on the quality of the content, whether or not they should be accepted into the Global Library. Their main job is to keep low-quality ReadLists out of the network, ensuring that ReadLists containing clickbait or misinformation are denied entry. Second, the Global Library organizes ReadLists by genre. The interface adopts the Dewey Decimal Classification system, which divides the ReadLists into ten genres: computer science, philosophy and psychology, religion, social sciences, languages, pure science, technology, arts and recreation, literature, and history and geography. This classification system is commonly found in public libraries, and it ensures the interface showcases a broad set of content to readers. On top of organizing ReadLists by genre, the interface deliberately has no dedicated feed recommending ReadLists based on a user's reading history. We suspect that omitting a recommendation feature makes entering a filter bubble even more difficult.

The Global Library interface largely resembles its sibling, the Home Library. This similarity is deliberate, for the sake of consistency: by establishing a standard interface between both libraries, users can switch between them with ease.

We first created a digital prototype of the interface using Sketch, a Mac application [13]. Working with the digital prototype allowed for more flexibility, as significant design changes were possible without having to spend any time writing code. Upon the completion of the digital prototypes, software production began. The iPad application built in the first prototype was modified to include the additional interfaces. Using software allowed for more realistic interactions with the application, ensuring more reliable results in the user studies.

For users to be able to upload and browse ReadLists, we needed to create a cloud database. When a user uploads a ReadList, the ReadList travels to this cloud database for storage and retrieval.



Curators then receive the new ReadList, pending approval. If approved, the ReadList is transferred to a different database, called the global database. The Global Library retrieves ReadLists from this global database. Thus, every time a new ReadList is approved and added to this database, the Global Library immediately includes the new ReadList for everyone to see. This flow is visualized in Figure 4.10.

Figure 4.10: This diagram showcases the process for how ReadLists enter the Global Library. (1) A user uploads a ReadList they have created to the database. (2) A curator reviews this ReadList; if it contains webpages with misinformation or poor quality content, it's rejected and doesn't move any further. (3) If the ReadList is approved, it's sent to a different database. (4) The Global Library fetches ReadLists for people to see from that secondary database. (5) Users read these ReadLists by browsing the Global Library.
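The thesis does not specify the backend technology behind this pipeline, so the sketch below models the flow of Figure 4.10 against a hypothetical REST endpoint; the URL, payload shape, and status values are all assumptions for illustration. The genre type mirrors the ten Dewey Decimal classes described above.

```swift
import Foundation

// Hypothetical model of a ReadList submission. The ten genres mirror the
// Dewey Decimal classes the Global Library uses.
enum Genre: String, Codable {
    case computerScience, philosophyAndPsychology, religion, socialSciences,
         languages, pureScience, technology, artsAndRecreation, literature,
         historyAndGeography
}

struct ReadListSubmission: Codable {
    let title: String
    let description: String
    let genre: Genre
    let webpageURLs: [URL]
}

enum CurationStatus: String, Codable {
    case pending    // waiting in the submissions database for a curator
    case approved   // moved to the global database; visible in the Global Library
    case rejected   // contained misinformation, clickbait, or poor content
}

// Step 1 of Figure 4.10: send the ReadList to the submissions database.
// "example.com" is a placeholder; the real endpoint is not described.
func upload(_ submission: ReadListSubmission) async throws -> CurationStatus {
    var request = URLRequest(url: URL(string: "https://example.com/api/readlists")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONEncoder().encode(submission)
    let (data, _) = try await URLSession.shared.data(for: request)
    // Steps 2-3 happen server-side: a curator approves or rejects the
    // submission, moving approved ReadLists into the global database.
    return try JSONDecoder().decode(CurationStatus.self, from: data)
}
```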

4.2.2 Interface Overview

The Home Library now includes a new button in the header for navigating to the Global Library, as seen in Figure 4.11. After tapping the button, the entire interface flips horizontally to reveal the Global Library, as seen in Figure 4.12. The transition between both libraries is visualized in Figure 4.16. Once inside the Global Library, users can scroll to view all the genres as well as the most recently added ReadLists in each genre.

Tapping on a ReadList produces a familiar animation, where the webpages of that ReadList spread to form a grid. The ReadList screen includes new summary text so that users can quickly grasp what content the ReadList provides. This ReadList screen can be seen in Figure 4.13. Users can also add the ReadList to their Home Library, allowing for faster retrieval. To add the ReadList, users tap the 'Add' button on the top left of the screen in Figure 4.13. Doing so creates a pop-up alerting the user that the ReadList has been added to their library. This can be seen in Figure 4.14.

The Home Library now consists of a mix of private and public ReadLists. Private ReadLists are ReadLists that users create themselves, and public ReadLists are ReadLists downloaded directly from the Global Library.



Figure 4.11: This is the updated Home Library. It showcases both public and private ReadLists. Private ReadLists are made by the user; public ReadLists are downloaded from the Global Library. On the top right of the screen is a new button that, when tapped, takes users to the Global Library.

Figure 4.12: This is the Global Library. It showcases curated ReadLists, organized into ten different genres. These are ReadLists that users have created and that curators have verified to contain high quality content. Users can scroll down to view more ReadLists in each genre. This screen is always updated to show the most recently added ReadLists in each genre.
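UIKit provides a built-in flip transition that matches the horizontal flip between the two libraries described above. A minimal sketch, assuming hypothetical view names for the two libraries:

```swift
import UIKit

/// Sketch of the horizontal flip between libraries (hypothetical view names).
func showGlobalLibrary(from homeLibraryView: UIView, to globalLibraryView: UIView) {
    // UIKit's built-in flip transition swaps the two views in their superview;
    // flipping from the left would reverse the effect for the back button.
    UIView.transition(from: homeLibraryView,
                      to: globalLibraryView,
                      duration: 0.6,
                      options: [.transitionFlipFromRight, .showHideTransitionViews])
}
```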



Figure 4.13: This is a public ReadList. It's displayed when a user taps on a ReadList in the Global Library. It showcases all the webpages belonging to that ReadList, along with a description summarizing what the ReadList is about.

In the Home Library, when users tap on a public ReadList, they enter a screen that looks identical to the one in Figure 4.13. When they tap on a private ReadList, they reach a screen that resembles the one in Figure 4.15. Notice that the header of this ReadList screen contains a new button for uploading the ReadList to the Global Library. By tapping it, users are greeted with a quick form sheet to fill out, as seen in Figure 4.17. The submission form animates from the bottom of the screen to the center, on top of the ReadList. This animation process can be seen in Figure 4.18. Users can modify the ReadList's title, description, and genre before submitting it for approval. Users are notified once their ReadList is approved or rejected. The submission form sinks back down after a ReadList is submitted. Users can also dismiss the screen by simply dragging the view downwards with one finger.

All gestures and animations present in the Home Library also work in the Global Library, ensuring a clear consistency of functionality between both libraries.



Figure 4.14: This alert gently animates in and then out to communicate to users that the ReadList has been successfully added. It’s activated after a user taps on the top right button with the ‘+’ symbol.

4.3 User Studies

Two user studies were conducted to evaluate each prototype. The user studies served two essential purposes. First, we gained feedback that improved the usability and design of the application. Second, we were able to infer from the results of the user studies whether human curation could improve the user experience of content consumption.

In each user study, eleven participants were interviewed individually. Each interview lasted half an hour and consisted of two portions. The first portion was a discussion-based session where participants were asked to complete a specific number of tasks using the software prototype. After each task, a discussion followed on how they felt about the app's interface and functionality. In the second portion, participants were asked to fill out an evaluation form. This form asked them to rate all the features of the app and write suggestions for improvement.

Participants were volunteers recruited from the student body at New College of Florida. All participants were above the age of 18 and were undergraduate students at New College of Florida. No demographic criteria were used in selecting volunteers. Participants were recruited through New College of Florida's public email forum, flyers, and email blasts. The results and feedback from the user studies are presented in Chapter 5.



Figure 4.15: This is the private ReadList screen. It’s similar to the ReadList screen from the first prototype. Private ReadLists represent ReadLists that users have created themselves. ReadLists that have been downloaded from the Global Library are called Public ReadLists.

4.3.1 User Study A - Home Library

This user study consisted of four tasks for participants to complete. The interviews were designed to be discussion-focused and interactive. After asking participants to complete a task, we wrote notes observing the users' behaviors. For instance, we would note if the participant tapped on an incorrect button to complete an action or looked confused while completing a task. After a task was completed, participants were asked about their actions. We probed what their intentions were, what they thought individual buttons would do, and why they used the interface the way they did. This information helped us improve the design of the application to make the interface more usable.

Participants were first given an iPad with the application open. The app had five default ReadLists that users could read. Participants were told that a ReadList represents "a group of webpages organized around a shared topic". The first task was limited to five minutes and asked them to open and interact with any of the ReadLists in the Home Library. Users were purposely given no instructions on how to use the application beyond the explanation of what a ReadList represents. The lack of instructions allowed us to see whether users could figure out how to navigate the application naturally. As users explored the app, we observed whether they ever seemed confused or lost. Once the time allocated for the first task had elapsed, we asked the users questions based on our observations. For instance, after noticing a participant struggled when attempting to exit a ReadList, we asked them why they had a hard time with that.



Figure 4.16: The Global Library appears after tapping on the explore button in the top left of the Home Library. The Home Library 'flips' to reveal the Global Library. Users can go back by tapping the back arrow button in the top left of the Global Library screen, which similarly flips the screen back to show the Home Library.

After the experimenter was satisfied with the responses from the discussion, participants were told the instructions for the second task. This task asked participants to find a specific webpage tile (e.g., Six Colors, a tech blog). The webpage was randomly selected before each user study and was a tile in one of the ReadLists in the Home Library. This task allowed us to test the navigation features of the application. If people were struggling to find content in the library, that would indicate that more work needed to be done to improve navigation. Upon completion of the task, participants were asked how they felt navigating the application and how they thought it could be improved, if at all.

Before the third task began, participants were informed of the gestures available to navigate the application. In particular, they were told that they could peek into a ReadList by pinching into it, enter a webpage by pinching into the tile, exit a webpage by pinching out, and exit a ReadList by pinching out. These gestures were also demoed in front of the participants to ensure they knew how the gestures worked. For the third task, users were asked to complete the first task again, but this time using only the gestures when navigating the application. By having participants use these gestures exclusively, we hoped to gain insight into how useful participants thought the gestures were. After participants finished interacting with the app using the gestures, we asked how they felt about the interaction. The question was purposely vague so that we could hear a broad range of opinions about the utility of the gestures.



Figure 4.17: This form is shown when users tap on the upload button for a ReadList they created. Users must fill out a description and identify the genre for that ReadList. This ReadList then seeks approval from a curator before being shown in the Global Library.

Figure 4.18: This figure shows how the submit form animates on top of the ReadList screen. Users can dismiss this screen by dragging it down using a finger. The form also animates back down after a user submits the ReadList.

The final task asked participants to create their own ReadList. This task tested how easy or difficult it was for users to navigate the interface to create a ReadList. Based on the ReadList created, it also informed us whether the participant understood how ReadLists worked. For instance, if a participant created a ReadList consisting of random webpages, it would indicate to us that the application does not communicate its function well enough. The full set of discussion questions is summarized in Table 4.1.



Upon completion of the final task, participants were asked if they had any final comments, questions, or concerns. This allowed us to gain feedback from users that our previous questions could not capture. After their responses, we gave participants an evaluation form to complete. This form asked participants fourteen questions about their experience using the application and allowed us to more easily compare the results of each interview when synthesizing the results. To view the evaluation form, see Appendix A.1.

User Study A - Discussion Questions

1. How did you feel about navigating to the Home Library? Was it simple or too complex? Why?
2. How do you feel about the process for finding that specific webpage tile?
3. How did you feel about the gestures? Did they make the navigation easier/more fluid or not? Why?
4. How do you feel about the process for creating a ReadList?

Table 4.1: This table displays the standard questions asked in the first user study.

4.3.2 User Study B - Global Library

This user study focused on testing the interface for the Global Library and followed a format similar to the previous one. Users were asked to complete three different tasks, with their behaviors observed by the tester. After each task was completed, a discussion followed about their opinions on the interface. After all tasks were finished, users were given an evaluation form to complete.

Participants were given an iPad with the application open, with the Home Library as the starting screen. For this user study, the Home Library had two dozen ReadLists preinstalled. This large amount allowed us to test how well the prototype worked with a more substantial amount of content to browse through.

The first task consisted of three small sub-tasks. First, we asked users to navigate to the Global Library. This task allowed us to observe whether users were able to transition between both libraries quickly. Once they reached the Global Library, participants were asked how they felt trying to navigate to the new Global Library. This discussion allowed us to hear how the navigation could be improved. Second, participants were instructed to spend the next five minutes exploring the Global Library.


Participants were free to browse through all the ReadLists in any genre and read any webpage within any ReadList. This allowed us to see how they interacted with the library and whether those interactions matched our expectations. Third, we asked participants to repeat the second sub-task, but this time exclusively using gestures. Participants were informed of all the gestures available to navigate the application and were then observed using these gestures. Afterwards, we had a discussion with each participant about how they felt using these gestures.

The second task asked participants to add any ReadList from the Global Library to their Home Library. Minimal instructions were given in order to see if the feature was easily discoverable by participants. After participants added a ReadList, they were asked to find that ReadList in their Home Library. This task helped measure whether users could easily find recently added ReadLists in a Home Library filled with dozens of other ReadLists. Participants were asked if they were able to find the ReadList quickly and if they had any trouble doing so. Users' responses helped evaluate whether more work needed to be done to improve the search and discoverability of ReadLists.

The third and final task asked participants to submit a ReadList from their Home Library to the Global Library. This task tested the submission process for ReadLists. Participants needed to find a private ReadList, figure out how to submit it, and go through the submission process. Upon completion of this task, participants were asked how easy or difficult they felt this process was. Once again, the responses were used to improve the process so that users could have an easier time submitting ReadLists. To view all discussion questions, see Table 4.2.

Like the first user study, participants were asked if they had any final comments about the interface. The responses captured observations from participants that the previous questions had not revealed. After all the tasks and discussions were finished, participants completed an evaluation form. This evaluation form, similar to the previous one, asked sixteen questions that helped us quantify the results of each interview and thus compare the eleven different interviews more easily. To view the evaluation form, see Appendix A.2.



User Study B - Discussion Questions

1A. How did you feel about navigating to the Global Library? Was it simple or too complex? Why?
1B. Why did you choose this ReadList? Did you enjoy reading the contents of it?
1C. How did you feel about the gestures? Did they make the navigation easier/more fluid or not? Why?
2A. How did you feel about the interface for subscribing to a ReadList? Was it simple or too complex? Why?
2B. Were you able to easily find this new ReadList in your Home Library? Why or why not?
3A. How did you feel about the interface for uploading a ReadList? Was it simple or too complex? Why?

Table 4.2: This table displays the standard questions asked in the second user study.



Chapter 5

Results

This chapter presents the results from both user studies and evaluation forms. The first section examines answers from the first user study, highlighting key responses made in the discussion portion and the evaluation form. The second section follows the same template, first listing key responses made in the discussion and then highlighting key responses from the second evaluation form.

5.1 First User Study

The first task asked participants to use the software for up to five minutes and then describe their experience using the interface. Common words used to describe the interface included "clean", "smooth", and "natural". In terms of criticism, seven out of eleven participants signaled initial confusion when interacting with the webpage header while browsing a webpage. In particular, they found the button text 'save and exit' unclear, and the shape of the webpage navigation buttons (used for navigating to a previously visited webpage) confusing.

The second task required participants to navigate to a particular webpage within a specific ReadList. Afterward, participants were asked how they felt about the navigation of the interface. Six of the eleven participants mentioned that navigating was "easy". However, three people recommended adding a search bar to make it faster. Further, five participants suggested that each ReadList should include a webpage title on each card so it's clearer to users which webpage the card is visually referencing.

The third task asked participants to use gestures to navigate the interface. Then, participants were asked how they felt about the gestures. Ten out of eleven participants mentioned how gestures made the interface feel faster. Five people said the gestures felt "intuitive" or "natural". Four people said the gestures were not easily discoverable; without being informed of these gestures, they never would have discovered that they existed.


The final task asked participants to create their own ReadList. When evaluating this task, eight participants complimented how fast the process was, and three mentioned that the gestures made the task feel quick to complete. Two participants said they were concerned about trying to find a particular ReadList in an extensive library.

After the completion of the user study, participants were asked to fill out the evaluation form. First, participants provided a rating between 1 and 5 on how excited the app made them feel, where 1 represents 'not excited' and 5 represents 'very excited.' Seven participants gave a score of 5, and four gave a score of 4, as seen in Figure 5.1.

Figure 5.1: This graph shows the ratings for how excited the app makes users feel. A rating of 1 represents 'not excited' and 5 represents 'very excited'. Seven participants gave a rating of 5 and four participants gave a rating of 4.

The next question asked participants to explain why they gave their respective ratings. Seven participants complimented the app's 'idea,' and three explicitly mentioned the interface design. The form then asked participants to list things they liked and disliked about the app. Six participants explicitly said they liked using the 'gestures' to navigate the interface. Four participants, when discussing dislikes, mentioned features they wished the app had, and three said they disliked some of the bugs they experienced while using the prototype.

The evaluation form then asked participants to rate how often they would use the app, selecting one of four choices: never, once a week, a few times a week, or daily. The majority of participants, 64 percent, said they would use the app every day; the full results can be seen in Figure 5.2. Next, participants were asked to pick one of four choices that best describes the app's interface: very difficult to use, somewhat difficult, mostly simple, or very simple. Seventy-three percent of participants found the interface very simple, as visualized in Figure 5.3. Then, participants were asked which of the five ReadLists in the Home Library they liked the most. For this question there was no dominant winner; three ReadLists tied with two votes each, just behind 'Current Events' with three. The results can be seen in Figure 5.4.
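Each of these percentages is a simple tally over the eleven responses (for example, 7/11 ≈ 64 percent and 8/11 ≈ 73 percent). As an illustration of that bookkeeping, the short Swift sketch below tallies the excitement ratings from Figure 5.1; the array literal encodes the reported seven 5s and four 4s, and the variable names are ours, not the study's analysis code.

// Tally Likert-style ratings and convert counts to percentages.
// The data below mirrors Figure 5.1 (seven 5s, four 4s).
let excitementRatings = [5, 5, 5, 5, 5, 5, 5, 4, 4, 4, 4]

var counts: [Int: Int] = [:]
for rating in excitementRatings {
    counts[rating, default: 0] += 1
}

for rating in stride(from: 5, through: 1, by: -1) {
    let n = counts[rating] ?? 0
    let percent = (100.0 * Double(n) / Double(excitementRatings.count)).rounded()
    print("Rating \(rating): \(n) of \(excitementRatings.count) participants (\(Int(percent))%)")
}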

Figure 5.2: This chart shows how often participants would use this application. Seven participants said ‘everyday’, three participants said ‘a few times a week’, and one participant said ‘once a week’.

Figure 5.3: This chart reveals how participants would best describe the application's user interface. Eight participants said 'very simple' and another three said 'mostly simple'.

Figure 5.4: This chart shows which ReadList participants liked the most. 'Tech Blogs', 'Indian Food Recipes', and 'Writing Comedy' each had two participants vote for it. Three participants voted for 'Current Events' and one voted for 'Antitrust'.

Question ten asked participants to rate their experience from 1 to 5 for browsing between different webpages within a ReadList, where 1 represents 'confusing and difficult' and 5 represents 'simple and straightforward.' Most participants, seven of eleven, rated this a 4; the results can be seen in Figure 5.5. Then, participants were asked to rate their experience between 1 and 5 for creating a new ReadList, on the same scale. In this case, six participants gave a rating of 5 and four gave a rating of 4; the results can be seen in Figure 5.6. Finally, participants were asked to choose how the gestures made the interface feel. There were four choices: faster, slower, confusing, or natural, and participants were allowed to make multiple selections. Eight participants found the gestures made the interface feel more natural, and seven found they also made it feel faster. The results are visualized in Figure 5.7.
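The thesis does not show how these gestures were implemented, so the following is only a minimal SwiftUI sketch of the kind of gesture participants were evaluating: a horizontal swipe that dismisses the current webpage and returns to its ReadList. The view name and the 50-point drag threshold are illustrative assumptions.

import SwiftUI

// Illustrative only: a swipe-right gesture that dismisses the
// current webpage view and returns to the ReadList behind it.
struct WebpageReaderView: View {
    @Environment(\.presentationMode) private var presentationMode
    let title: String

    var body: some View {
        Text(title)
            .frame(maxWidth: .infinity, maxHeight: .infinity)
            .contentShape(Rectangle())   // make the whole area swipeable
            .gesture(
                DragGesture(minimumDistance: 50).onEnded { value in
                    // A rightward drag returns to the previous screen.
                    if value.translation.width > 0 {
                        presentationMode.wrappedValue.dismiss()
                    }
                }
            )
    }
}

Discoverability, the weakness four participants raised, is a known cost of gesture-only affordances like this one: nothing on screen advertises that the swipe exists.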

5.2 Second User Study

The first task of this user study asked participants to explore the Global Library for a timed five minutes. Upon completion of the task, participants were asked how they felt about the interface for navigating the Global Library. Four participants suggested improvements to the design of the button used to enter the Global Library, and three said they wished there was a search bar within it. Five participants complimented the gestures, finding that they helped make the interface feel more 'intuitive.'

Figure 5.5: This chart shows the ratings participants gave for their experience with browsing between webpages in a ReadList. A score of '1' represents 'confusing and difficult' and a score of '5' represents 'simple and straightforward'. Three participants gave a rating of '5', seven gave a rating of '4', and one gave a rating of '3'.

Figure 5.6: This chart shows the ratings participants gave for their experience with creating their own ReadList. A score of '1' represents 'confusing and difficult' and a score of '5' represents 'simple and straightforward'. Six participants gave a rating of '5', four gave a rating of '4', and one gave a rating of '3'.

The second task asked participants to subscribe to any ReadList in the Global Library. Subscribing to a ReadList makes it appear in the user's Home Library, where it is faster to access and find. Once they finished this task, participants were asked how they felt about the interface for subscribing to a ReadList. Six participants criticized the design of the subscription button, which adds a particular ReadList to the Home Library. Three participants also found it challenging to find the newly added ReadList in their Home Library, suggesting that 'search functionality' would be excellent, and one participant suggested an option to sort ReadLists in the Home Library by specific attributes, such as their names or the date they were created; a sketch of these suggestions appears below.
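Both suggestions map cleanly onto the illustrative HomeLibrary model sketched in Chapter 4. The extension below is a hypothetical sketch of how sorting and a simple name search could look, not a description of the app's actual code.

import Foundation

// Hypothetical additions to the illustrative HomeLibrary model:
// attribute-based sorting and a case-insensitive name search.
extension HomeLibrary {
    enum SortKey { case name, dateCreated }

    func sorted(by key: SortKey) -> [ReadList] {
        switch key {
        case .name:
            return readLists.sorted { $0.name < $1.name }
        case .dateCreated:
            // Newest first, so a just-added ReadList surfaces at the top.
            return readLists.sorted { $0.dateCreated > $1.dateCreated }
        }
    }

    func search(_ query: String) -> [ReadList] {
        readLists.filter { $0.name.localizedCaseInsensitiveContains(query) }
    }
}

Sorting by creation date, newest first, would also address the complaint about locating a newly added ReadList among dozens of others.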



Figure 5.7: This chart reveals how participants felt about the gestures available for the interface. Multiple selections were allowed. 'Faster' and 'Natural' received a combined score of 15, while 'slower' and 'confusing' received a combined score of 3.

The third and final task of the user study asked participants to upload a ReadList from their Home Library to the Global Library. Five participants complimented the design of this interface, noting that submitting the ReadList was 'easy,' 'simple,' and 'intuitive.' Three participants wished there was a clearer distinction between self-made ReadLists and subscribed ReadLists within the Home Library.

After the user study, participants were given an evaluation form to complete. The first question asked participants to provide a rating between 1 and 5 on how excited the app made them feel, where 1 represented 'not excited' and 5 represented 'very excited.' This time, eight participants gave a perfect score of 5, as seen in Figure 5.8. The next question asked participants why they gave their respective ratings: six complimented the app's user experience and interface, and five said they liked the 'idea,' finding it very useful for their needs. The form then asked participants to list things they liked and disliked about the application. Three participants said they enjoyed using the gestures to navigate the software, and another three mentioned that they liked reading the content in the Global Library.

The evaluation form then asked participants to rate how often they would use the app, selecting one of four choices: never, once a week, a few times a week, or daily. This time, nearly three quarters (73 percent) said they would use the app a few times a week, and 18 percent said they would use it every day. The results can be seen in Figure 5.9.



Figure 5.8: This chart shows the ratings for how excited the app makes users feel. A rating of 1 represents 'not excited' and 5 represents 'very excited'. Eight participants gave a rating of '5', two participants gave a rating of '4', and one gave a rating of '3'.

Participants were then asked to pick which one of four choices best described the app's interface: very difficult to use, somewhat difficult, mostly simple, or very simple. Sixty-four percent found the interface 'very simple,' and no one rated it as somewhat difficult or very difficult. This is visualized in Figure 5.10.

Figure 5.9: This chart shows how often participants would use this application. Two participants said ‘everyday’, eight participants said ‘a few times a week’, and one participant said ‘never’.



Figure 5.10: This chart reveals how participants would best describe the application's user interface. Seven participants said 'very simple' and another four said 'mostly simple'.

The evaluation form then asked participants to rate their experience from 1 to 5 for exploring ReadLists through the Global Library, with 1 representing 'confusing and difficult' and 5 representing 'simple and straightforward.' The majority of participants, seven, gave a rating of 4, as seen in Figure 5.11. Participants were then asked to rate their experience subscribing to a ReadList in the Global Library, on the same scale. In this case there was a tie, with five participants giving a rating of 4 and another five giving a rating of 5, as seen in Figure 5.12. Next, participants used that same scale to rate their experience uploading a ReadList to the Global Library. A clear majority, eight participants, gave a rating of 4, as seen in Figure 5.13.

The evaluation form then asked participants to rate how the use of gestures made the interface feel. There were four choices: faster, slower, confusing, or natural, and participants were allowed to make multiple selections. Ten participants found the gestures made the app feel faster, and six said they made the app feel more natural, as seen in Figure 5.14. Then, participants were asked whether they preferred browsing through the Global Library or the Facebook News Feed, with three choices: the Global Library, Facebook, or neither. Figure 5.15 shows that slightly more participants preferred the Global Library over Facebook. Finally, participants were asked whether they trusted the content in the Global Library more than the content from their Facebook timeline. Ten of the eleven participants trusted the content in the Global Library more, as visualized in Figure 5.16.
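Chapter 6 notes that no statistical conclusions should be drawn from a sample of eleven, so these counts are reported descriptively. Purely as an illustration of how a preference split like ten of eleven could be tested once a larger sample is collected, here is a dependency-free Swift sketch of an exact two-sided sign test; the function names are ours, and nothing like this appears in the study itself.

import Foundation

// Exact two-sided sign test against the null hypothesis of no
// preference (each participant picks either option with p = 0.5).
func binomialCoefficient(_ n: Int, _ k: Int) -> Double {
    var result = 1.0
    for i in 0..<k { result *= Double(n - i) / Double(i + 1) }
    return result
}

func twoSidedSignTest(successes k: Int, trials n: Int) -> Double {
    // Sum the probability of splits at least as extreme as k of n,
    // then double the tail for a two-sided p-value.
    let extreme = max(k, n - k)
    var tail = 0.0
    for i in extreme...n { tail += binomialCoefficient(n, i) }
    return min(1.0, 2.0 * tail / pow(2.0, Double(n)))
}

// Example: a 10-of-11 split, as in the trust question above.
print(twoSidedSignTest(successes: 10, trials: 11))  // ≈ 0.012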



Figure 5.11: This chart shows the ratings participants gave for their experience with exploring ReadLists in the Global Library. A score of '1' represents 'confusing and difficult' and a score of '5' represents 'simple and straightforward'. Four participants gave a rating of '5' and seven gave a rating of '4'.

Figure 5.12: This chart shows the ratings participants gave for their experience with subscribing to ReadLists. A score of ‘1’ represents ‘confusing and difficult’ and a score of ‘5’ represents ‘simple and straightforward’. Five participants gave a rating of ‘5’, another five gave a rating of ‘4’, and one participant gave a rating of ‘3’.



Figure 5.13: This chart shows the ratings participants gave for their experience with submitting their own ReadLists to the Global Library. A score of ‘1’ represents ‘confusing and difficult’ and a score of ‘5’ represents ‘simple and straightforward’. Two participants gave a rating of ‘5’, eight participants gave a rating of ‘4’, and one participant gave a rating of ‘2’.

Figure 5.14: This chart reveals how participants felt about the gestures available for the interface. Multiple selections were allowed. 'Faster' and 'Natural' received a combined score of 17, while 'slower' and 'confusing' received a combined score of 1.



Figure 5.15: This chart shows what network participants prefer for browsing information. Five participants preferred the Global Library, four participants preferred Facebook’s news feed, and two said neither.

Figure 5.16: This chart reveals what network participants trust more when browsing information. Ten of the eleven participants chose the Global Library over Facebook’s news feed.



Chapter 6

Conclusion & Limitations

The results from the first user study revealed two key insights. First, participants enjoyed using this novel interface for organizing web content. Indeed, Figure 5.2 shows that nearly two-thirds of participants would use this interface daily, and Figure 5.1 shows that every participant rated their excitement about the application a 4 or 5 out of 5, indicating strong satisfaction. Second, the study tells us that the core features of the app were directly useful. In particular, Figures 5.5 and 5.6 show that ten of the eleven participants rated their experience using ReadLists a four or five out of five. Additionally, Figure 5.4 shows that participants were nearly evenly divided on which ReadList they liked the most, indicating that the ReadLists contained content appealing to a broad range of interests, which further improved the user experience of the software.

The second user study reveals similar insights. First, participants enjoyed using the Global Library, but not as much as the Home Library. Figures 5.11, 5.12, and 5.13 show that the largest portion of participants rated the Global Library features a four out of five. Although this rating indicates a good user experience, it also shows room for improvement. Second, the study suggests that this application has the potential to compete with an algorithmically curated timeline like Facebook's. Indeed, Figure 5.15 shows that nearly half of participants would prefer to explore content through the Global Library instead of the Facebook News Feed, indicating that a human-curated feed may have strong demand from consumers. Finally, Figure 5.16 reveals that ten of the eleven participants trust the content in the Global Library more than the content in their Facebook timeline.

Earlier in this thesis, three key issues were noted with existing information networks: the rise of clickbait, misinformation, and filter bubbles. This thesis attempted to create a new information network free of those three attributes while also delivering a positive user experience for consumers. This preliminary study shows clear potential in consumer demand for this type of network. Thus, human-curated information networks should be further studied and developed as a way to bring a new web browsing experience to consumers.

There were two critical limitations to this thesis that we hope to address in future work. First, the sample sizes of both user studies (eleven participants each) were too small to draw any statistically significant conclusions. In addition, the demographics of the participants (all college students in Sarasota) produced results that cannot be generalized to the average internet user. Future user studies should include not just college students but adults from a wide range of ages and backgrounds; results from a population more representative of typical internet users would provide more valuable feedback. The second limitation is the lack of direct comparisons between the Global Library and the Facebook news feed (or other algorithmically curated timelines) in the user studies. If, for instance, participants were asked to use both types of software to complete a particular task, the results could be used to compare which kind of network performed better. Comparing the two types of networks more directly would allow us to determine more accurately whether human-curated software can deliver a better user experience than algorithmically curated software.

Just over twenty years ago, the internet was a rare commodity only starting to be introduced to the world. Today, it is home to misinformation, clickbait, and filter bubbles. This thesis claims that these attributes are not inevitable features of the internet but mistakes that can be fixed to create an improved browsing experience for consumers. The internet is still young, and given its central importance in society and culture, improving its quality can create impacts that will be felt across the world for generations.



Appendix A

A.1 Evaluation Form A



Usability Evaluation - Expose
Complete this form only after the successful completion of the usability test
* Required

1. Email address *

App Overview

2. How excited does this app make you feel? *
Mark only one oval: 1 (Not excited) 2 3 4 5 (Very excited)

3. Why did you choose to give the above rating?

4. What do you like about the app? (if nothing, leave blank)

5. What do you dislike about the app? (if nothing, leave blank)

6. How often would you use this app? *
Mark only one oval: Never / Once a week / A few times every week / Everyday

7. Pick one of the following that best describes the app's user interface
Mark only one oval:
Very difficult to use (I struggled with navigating within the app and frequently lost where I am)
Somewhat difficult (I sometimes struggled navigating within the app and at times needed help)
Mostly simple and easy (I rarely got confused and mostly knew how to use the app)
Very simple and easy (I never got lost and always knew how to navigate the app)

Features Analysis

8. Which ReadList did you like the most?
Mark only one oval: "Antitrust" / "Current Events" / "On Writing Comedy" / "Indian Food Recipes" / "Tech Blogs"

9. Why did you like that WebList the most?

10. Please rate your experience browsing between webpages in a web playlist.
Mark only one oval: 1 (Confusing and difficult) 2 3 4 5 (Simple and straightforward)

11. Why did you choose to give the above rating?

12. Please rate your experience with creating a new WebList. This includes adding new webpages to the web list.
Mark only one oval: 1 (Confusing and difficult) 2 3 4 5 (Simple and straightforward)

13. Why did you choose to give the above rating?

14. The use of gestures made the user interface...
Check all that apply: feel faster / feel slower / more confusing to use / more natural to use


A.2 Evaluation Form B



Usability Evaluation - Expose
Complete this form only after the successful completion of the usability test
* Required

1. Email address *

App Overview

2. How excited does this app make you feel? *
Mark only one oval: 1 (Not excited) 2 3 4 5 (Very excited)

3. Why did you choose to give the above rating?

4. What do you like about the app? (if nothing, leave blank)

5. What do you dislike about the app? (if nothing, leave blank)

6. How often would you use this app? *
Mark only one oval: Never / Once a week / A few times every week / Everyday

7. Pick one of the following that best describes the app's user interface
Mark only one oval:
Very difficult to use (I struggled with navigating within the app and frequently lost where I am)
Somewhat difficult (I sometimes struggled navigating within the app and at times needed help)
Mostly simple and easy (I rarely got confused and mostly knew how to use the app)
Very simple and easy (I never got lost and always knew how to navigate the app)

Features Analysis

8. Please rate your experience with exploring ReadLists throughout the Global Library
Mark only one oval: 1 (Confusing and difficult) 2 3 4 5 (Simple and straightforward)

9. Why did you choose to give the above rating?

10. Please rate your experience with subscribing to a ReadList in the Global Library. This includes adding new webpages to the web list.
Mark only one oval: 1 (Confusing and difficult) 2 3 4 5 (Simple and straightforward)

11. Why did you choose to give the above rating?

12. Please rate your experience with uploading a ReadList to the Global Library. This includes adding new webpages to the web list.
Mark only one oval: 1 (Confusing and difficult) 2 3 4 5 (Simple and straightforward)

13. Why did you choose to give the above rating?

14. The use of gestures made the user interface...
Check all that apply: feel faster / feel slower / more confusing to use / more natural to use

15. Do you prefer browsing through the Global Library or Facebook Newsfeed?
Mark only one oval: Global Library / Facebook Newsfeed / Neither

16. Do you trust the content in the Global Library more than the content from your Facebook Timeline?
Mark only one oval: Yes / No




