Ougd505 research



Melissa Nilles

Technology is Destroying the Quality of Human Interaction

Little by little, Internet and mobile technology seem to be subtly destroying the meaningfulness of our interactions with others, disconnecting us from the world around us, and leading to an imminent sense of isolation in today’s society. Instead of spending time in person with friends, we just call, text or instant message them. It may seem simpler, but we ultimately end up seeing our friends face to face a lot less. Ten texts can’t even begin to equal an hour spent chatting with a friend over lunch. And a smiley-face emoticon is cute, but it could never replace the ear-splitting grin and smiling eyes of one of your best friends. Face time is important, people. We need to see each other.

This doesn’t just apply to our friends; it applies to the world around us. It should come as no surprise that studies have shown face-to-face interaction to comfort us and provide an important sense of well-being, whether it’s with friends or friendly cashiers in the checkout line of Albertson’s. That’s actually the motivation behind Albertson’s decision last year to take all of the self-checkout lanes out of its stores: an eerie lack of human contact. There’s something intangibly real and valuable about talking with someone face to face. This is significant for friends, partners, potential employers, and the other recurring people who make up your everyday world. That person becomes an important existing human connection, not just someone whose disembodied text voice pops up on your cell phone, iPad or computer screen.

It seems we have more extended connections than ever in this digital world, which can be great for networking, if it’s used right. The sad fact of the matter is that most of us don’t use it that way. It’s too hard to keep up with 1,000 friends, let alone 200. At that point, do we even remember their names? We need to start prizing the quality of our connections, not sheer quantity.


One of my best friends from my hometown has 2,241 Facebook friends. Sure, her posts get a ton of feedback, but when I asked her about the quality of those relationships, she told me that she has few friends she can truly trust and spend time with happily. Taking a conundrum like this as a constructive example, we should, at the very least, consider pruning our rampant online connections. Past evolutionary psychology research by British anthropologist and psychologist Robin Dunbar has revealed that people are limited to a certain number of stable, supportive connections in their social network: roughly 150. Furthermore, recent follow-up research by Cornell University’s Bruno Goncalves used Twitter data to show that despite the current ability to connect with vast numbers of people via the Internet, a person can still only truly maintain a friendship with a maximum of 100 to 200 real friends in their social network.

While technology has given us means of social connection that would never have been possible before, and has allowed us to maintain long-distance friendships that would otherwise probably have fallen by the wayside, the fact remains that it is causing us to spread ourselves too thin, as well as slowly ruining the quality of the social interaction we all need as human beings. So what are we doing with 3,000 friends on the Internet? Why are we texting all the time? It seems like a big waste of time to me. Let’s spend more time together with our friends. Let’s make the relationships that count last, and not rely on technology to do the job for us.


Susan Tardanico | Forbes

Is Social Media Sabotaging Real Communication?

On a crisp Friday afternoon last October, Sharon Seline exchanged text messages with her daughter, who was in college. They ‘chatted’ back and forth, mom asking how things were going and daughter answering with positive statements followed by emoticons showing smiles, b-i-g smiles and hearts. Happiness. Later that night, her daughter attempted suicide. In the days that followed, it came to light that she’d been holed up in her dorm room, crying and showing signs of depression — a completely different reality from the one she conveyed in texts, Facebook posts and tweets.

As human beings, our only real method of connection is through authentic communication. Studies show that only 7% of communication is based on the written or verbal word. A whopping 93% is based on nonverbal body language. Indeed, it’s only when we can hear a tone of voice or look into someone’s eyes that we’re able to know when “I’m fine” doesn’t mean they’re fine at all… or when “I’m in” doesn’t mean they’re bought in at all.

This is where social media gets dicey. Awash in technology, anyone can hide behind the text, the e-mail, the Facebook post or the tweet, projecting any image they want and creating an illusion of their choosing. They can be whoever they want to be. And without the ability to receive nonverbal cues, their audiences are none the wiser.

This presents an unprecedented paradox. With all the powerful social technologies at our fingertips, we are more connected – and potentially more disconnected – than ever before. Every relevant metric shows that we are interacting at breakneck speed and frequency through social media. But are we really communicating? With 93% of our communication context stripped away, we are now attempting to forge relationships and make decisions based on phrases. Abbreviations. Snippets. Emoticons. Which may or may not be accurate representations of the truth.

A New Set of Communication Barriers

Social technologies have broken the barriers of space and time, enabling us to interact 24/7 with more people than ever before. But like any revolutionary concept, they have spawned a set of new barriers and threats. Is the focus now on communication quantity versus quality? Superficiality versus authenticity? In an ironic twist, social media has the potential to make us less social; a surrogate for the real thing. For it to be a truly effective communication vehicle, all parties bear a responsibility to be genuine and accurate, and not to allow it to replace human contact altogether.

In the workplace, electronic communication has overtaken face-to-face and voice-to-voice communication by a wide margin. This major shift has been driven by two forces: the speed and geographic dispersion of business, and the lack of comfort with traditional interpersonal communication among a growing segment of the employee population: Gen Y and Millennials. Studies show that these generations – which will comprise more than 50% of the workforce by 2020 – would rather use instant messaging or other social media than stop by an office and talk with someone. This communication preference is one of the “generational gaps” plaguing organizations as Boomers try to manage to a new set of expectations and norms in their younger employees, and vice versa. With these two trends at play, leaders must consider the impact on business relationships and on the ability to collaborate effectively, build trust, and create employee engagement and loyalty.


Further, because most business communication is now done via e-mails, texts, instant messaging, intranets, blogs, websites and other technology-enabled media – sans body language – the potential for misinterpretation is growing. Rushed and stressed, people often do not take the time to consider the nuances of their writing. Conflicts explode over the tone of an e-mail, or that all-important cc: list. When someone writes a text in all capital letters, does it mean they’re yelling? Are one- or two-word responses a sign that the person doesn’t want to engage? On the flip side, does a smiley face or an acknowledgement of agreement really mean they’re bought in and aligned? Conclusions are drawn on frighteningly little information.

We Need a New Golf Course

The idea of doing business on the golf course seems anachronistic these days, but the concept became so iconic because it proved that when colleagues spend personal time together – face to face – more progress can be made, deals can get done and relationships can deepen, allowing the colleagues to function more effectively off the course. This concept has been proven over and over again through correlations between face-to-face relationship-building and employee engagement and loyalty. And years ago, we learned about the power of “Management By Walking Around” in Tom Peters’ groundbreaking book In Search of Excellence. So in this wired world, what’s our new golf course? How do we communicate effectively and build deeper, more authentic relationships when we have only words (truncated at best) instead of voice, face and body expression to convey all the important and powerful nuances that often belie the words? Assuming this trend is here to stay, we need to create cultures where managers, employees and their key stakeholders redouble their efforts to get at the real messages and issues. Here are some suggestions.



Rory Cellan-Jones | BBC

Stephen Hawking warns artificial intelligence could end mankind

Prof Stephen Hawking, one of Britain’s preeminent scientists, has said that efforts to create thinking machines pose a threat to our very existence. He told the BBC: “The development of full artificial intelligence could spell the end of the human race.” His warning came in response to a question about a revamp of the technology he uses to communicate, which involves a basic form of AI. But others are less gloomy about AI’s prospects.

The theoretical physicist, who has the motor neurone disease amyotrophic lateral sclerosis (ALS), is using a new system developed by Intel to speak. Machine-learning experts from the British company SwiftKey were also involved in its creation. Their technology, already employed as a smartphone keyboard app, learns how the professor thinks and suggests the words he might want to use next. Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans.


“It would take off on its own, and redesign itself at an ever increasing rate,”

“Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”


Hayley Jones | HASTAC

Social Media’s Affect on Human Interaction

How does social media affect interaction in our society? Will face-to-face communication ultimately diminish because of these new social technologies? These questions have intrigued many researchers since the advent and popularization of social media in the last decade. Within this topic, social competency is an ideal that most people strive toward, but there is evidence to support the claim that social media is actually harming people’s ability to interact competently in an offline setting.

Studies on the social competency of youths who spend much of their time on social media networks sometimes conflict. For example, a study conducted by the National Institutes of Health found that youths with strong, positive face-to-face relationships may be those most frequently using social media as an additional venue to interact with their peers. As a pretty outgoing person myself, I find myself wanting to use social media as an extra outlet to interact with my friends, whether it is through a random funny post from Tumblr or posting pictures from our adventures. Although I personally agree with this study’s findings, I also believe that social media can be an excellent avenue for introverted people to find a comfortable setting in which to interact. From my own experience, I see many of my friends who aren’t as comfortable with face-to-face interactions thriving in an online environment.

However, on a case-by-case basis, there can be dramatic differences. For example, a study performed by Jean Twenge, a professor of psychology at San Diego State University, showed that a sample of high school students from Connecticut with problematic levels of Internet use were more likely to get into serious fights or carry a weapon. Despite the seriousness of this type of situation, I think that this study really focuses on the extremes of a certain situation, and not the norm of people’s interactions on social networks. The other problem with this particular study is that the connection between this type of antisocial behavior and excessive Internet use has not been set in stone by comprehensive research. There is only a correlation, and it is not clear which behavior is causing the other. This discrepancy especially leads me to believe that social media is mostly positive for young people in my generation.

Despite my opinion that social media is positive, I definitely believe that face-to-face interaction must continue to be our main source of communication. According to Forbes magazine, only 7% of communication is based on the verbal word. That means that over 90% of communication is based on nonverbal cues such as body language, eye contact, and tone of voice. Technology’s rampant popularization over the past decade in terms of social media has meant that texting, Facebook, and Twitter have inevitably taken over as the most efficient ways of communicating with each other. I do agree that the “efficiency factor,” or our reliance on the most efficient ways of communicating, is becoming one of the largest concerns in our interactions with one another. However, there needs to be a constant reminder that face-to-face interaction must remain a staple in our society because it is of a much higher quality and has the ability to satisfy so many more of our inherent social needs.

Social media of all kinds has become such an important part of our society that looking at it in a negative way will only set us back. We as a society must push forward and continue to incorporate social media in more positive ways. Social networking sites have been categorized as both beneficial and consequential to offline contact. Although I believe the effect can be mostly positive, in some cases both sides can be true; the effect that social media has on each person is always different and can only be evaluated on a case-by-case basis. The future of social media is also highly unknown.
According to each company’s respective reports, in March 2013 Facebook had 1.11 billion active users, Instagram had 100 million users, and Twitter had 200 million active users. These numbers have grown immensely even in the past year alone, which underscores that social networking is a rapidly changing field; even if we answer the question of how current social media affects interaction, more questions will continue to arise as these sites continue to change. So now the question remains: Is our society really ready to harness these new social media technologies to our advantage?


Is Facebook Making Us Lonely? Stephen Marche | The Atlantic

Social media—from Facebook to Twitter—have made us more densely networked than ever. Yet for all this connectivity, new research suggests that we have never been lonelier (or more narcissistic)—and that this loneliness is making us mentally and physically ill. A report on what the epidemic of loneliness is doing to our souls and our society.


YVETTE VICKERS, A FORMER Playboy playmate and B-movie star, best known for her role in Attack of the 50 Foot Woman, would have been 83 last August, but nobody knows exactly how old she was when she died. According to the Los Angeles coroner’s report, she lay dead for the better part of a year before a neighbor and fellow actress, a woman named Susan Savage, noticed cobwebs and yellowing letters in her mailbox, reached through a broken window to unlock the door, and pushed her way through the piles of junk mail and mounds of clothing that barricaded the house. Upstairs, she found Vickers’s body, mummified, near a heater that was still running. Her computer was on too, its glow permeating the empty space. The Los Angeles Times posted a story headlined “Mummified Body of Former Playboy Playmate Yvette Vickers Found in Her Benedict Canyon Home,” which quickly went viral. Within two weeks, by Technorati’s count, Vickers’s lonesome death was already the subject of 16,057 Facebook posts and 881 tweets. She had long been a horror-movie icon, a symbol of Hollywood’s capacity to exploit our most basic fears in the silliest ways; now she was an icon of a new and different kind of horror: our growing fear of loneliness. Certainly she received much more attention in death than she did in the final years of her life. With no children, no religious group, and no immediate social circle of any kind, she had begun, as an elderly woman, to look elsewhere for companionship. Savage later told Los Angeles magazine that she had searched Vickers’s phone bills for clues about the life that led to such an end. In the months before her grotesque death, Vickers had made calls not to friends or family but to distant fans who had found her through fan conventions and Internet sites. Vickers’s web of connections had grown broader but shallower, as has happened for many of us. 
We are living in an isolation that would have been unimaginable to our ancestors, and yet we have never been more accessible. Over the past three decades, technology has delivered to us a world in which we need not be out of contact for a fraction of a moment. In 2010, at a cost of $300 million, 800 miles of fiber-optic cable was laid between the Chicago Mercantile Exchange and the New York Stock Exchange to shave three milliseconds off trading times. Yet within this world of instant and absolute communication, unbounded by limits of time or space, we suffer from unprecedented alienation. We have never been more detached from one another, or lonelier. In a world consumed by ever more novel modes of socializing, we have less and less actual society. We live in an accelerating contradiction: the more connected we become, the lonelier we are. We were promised a global village; instead we inhabit the drab cul-de-sacs and endless freeways of a vast suburb of information.

At the forefront of all this unexpectedly lonely interactivity is Facebook, with 845 million users and $3.7 billion in revenue last year. The company hopes to raise $5 billion in an initial public offering later this spring, which will make it by far the largest Internet IPO in history. Some recent estimates put the company’s potential value at $100 billion, which would make it larger than the global coffee industry—one addiction preparing to surpass the other. Facebook’s scale and reach are hard to comprehend: last summer, Facebook became, by some counts, the first Web site to receive 1 trillion page views in a month. In the last three months of 2011, users generated an average of 2.7 billion “likes” and comments every day. On whatever scale you care to judge Facebook—as a company, as a culture, as a country—it is vast beyond imagination.

Despite its immense popularity, or more likely because of it, Facebook has, from the beginning, been under something of a cloud of suspicion. The depiction of Mark Zuckerberg, in The Social Network, as a bastard with symptoms of Asperger’s syndrome, was nonsense. But it felt true. It felt true to Facebook, if not to Zuckerberg. The film’s most indelible scene, the one that may well have earned it an Oscar, was the final, silent shot of an anomic Zuckerberg sending out a friend request to his ex-girlfriend, then waiting and clicking and waiting and clicking—a moment of superconnected loneliness preserved in amber. We have all been in that scene: transfixed by the glare of a screen, hungering for response.
When you sign up for Google+ and set up your Friends circle, the program specifies that you should include only “your real friends, the ones you feel comfortable sharing private details with.” That one little phrase, Your real friends—so quaint, so charmingly mothering—perfectly encapsulates the anxieties that social media have produced: the fears that Facebook is interfering with our real friendships, distancing us from each other, making us lonelier; and that social networking might be spreading the very isolation it seemed designed to conquer.


FACEBOOK ARRIVED IN THE MIDDLE of a dramatic increase in the quantity and intensity of human loneliness, a rise that initially made the site’s promise of greater connection seem deeply attractive. Americans are more solitary than ever before. In 1950, less than 10 percent of American households contained only one person. By 2010, nearly 27 percent of households had just one person. Solitary living does not guarantee a life of unhappiness, of course. In his recent book about the trend toward living alone, Eric Klinenberg, a sociologist at NYU, writes: “Reams of published research show that it’s the quality, not the quantity of social interaction, that best predicts loneliness.” True. But before we begin the fantasies of happily eccentric singledom, of divorcées dropping by their knitting circles after work for glasses of Drew Barrymore pinot grigio, or recent college graduates with perfectly articulated, Steampunk-themed, 300-square-foot apartments organizing croquet matches with their book clubs, we should recognize that it is not just isolation that is rising sharply. It’s loneliness, too. And loneliness makes us miserable. We know intuitively that loneliness and being alone are not the same thing. Solitude can be lovely. Crowded parties can be agony. We also know, thanks to a growing body of research on the topic, that loneliness is not a matter of external conditions; it is a psychological state. A 2005 analysis of data from a longitudinal study of Dutch twins showed that the tendency toward loneliness has roughly the same genetic component as other psychological problems such as neuroticism or anxiety. Still, loneliness is slippery, a difficult state to define or diagnose. 
The best tool yet developed for measuring the condition is the UCLA Loneliness Scale, a series of 20 questions that all begin with this formulation: “How often do you feel …?” As in: “How often do you feel that you are ‘in tune’ with the people around you?” And: “How often do you feel that you lack companionship?” Measuring the condition in these terms, various studies have shown loneliness rising drastically over a very short period of recent history. A 2010 AARP survey found that 35 percent of adults older than 45 were chronically lonely, as opposed to 20 percent of a similar group only a decade earlier. According to a major study by a leading scholar of the subject, roughly 20 percent of Americans—about 60 million people—are unhappy with their lives because of loneliness. Across the Western world, physicians and nurses have begun to speak openly of an epidemic of loneliness.


The new studies on loneliness are beginning to yield some surprising preliminary findings about its mechanisms. Almost every factor that one might assume affects loneliness does so only some of the time, and only under certain circumstances. People who are married are less lonely than single people, one journal article suggests, but only if their spouses are confidants. If one’s spouse is not a confidant, marriage may not decrease loneliness. A belief in God might help, or it might not, as a 1990 German study comparing levels of religious feeling and levels of loneliness discovered. Active believers who saw God as abstract and helpful rather than as a wrathful, immediate presence were less lonely. “The mere belief in God,” the researchers concluded, “was relatively independent of loneliness.” But it is clear that social interaction matters. Loneliness and being alone are not the same thing, but both are on the rise. We meet fewer people. We gather less. And when we gather, our bonds are less meaningful and less easy. The decrease in confidants—that is, in quality social connections—has been dramatic over the past 25 years. In one survey, the mean size of networks of personal confidants decreased from 2.94 people in 1985 to 2.08 in 2004. Similarly, in 1985, only 10 percent of Americans said they had no one with whom to discuss important matters, and 15 percent said they had only one such good friend. By 2004, 25 percent had nobody to talk to, and 20 percent had only one confidant. In the face of this social disintegration, we have essentially hired an army of replacement confidants, an entire class of professional carers. As Ronald Dworkin pointed out in a 2010 paper for the Hoover Institution, in the late ’40s, the United States was home to 2,500 clinical psychologists, 30,000 social workers, and fewer than 500 marriage and family therapists. 
As of 2010, the country had 77,000 clinical psychologists, 192,000 clinical social workers, 400,000 nonclinical social workers, 50,000 marriage and family therapists, 105,000 mental-health counselors, 220,000 substance-abuse counselors, 17,000 nurse psychotherapists, and 30,000 life coaches. The majority of patients in therapy do not warrant a psychiatric diagnosis. This raft of psychic servants is helping us through what used to be called regular problems. We have outsourced the work of everyday caring.


We need professional carers more and more, because the threat of societal breakdown, once principally a matter of nostalgic lament, has morphed into an issue of public health. Being lonely is extremely bad for your health. If you’re lonely, you’re more likely to be put in a geriatric home at an earlier age than a similar person who isn’t lonely. You’re less likely to exercise. You’re more likely to be obese. You’re less likely to survive a serious operation and more likely to have hormonal imbalances. You are at greater risk of inflammation. Your memory may be worse. You are more likely to be depressed, to sleep badly, and to suffer dementia and general cognitive decline. Loneliness may not have killed Yvette Vickers, but it has been linked to a greater probability of having the kind of heart condition that did kill her. And yet, despite its deleterious effect on health, loneliness is one of the first things ordinary Americans spend their money achieving. With money, you flee the cramped city to a house in the suburbs or, if you can afford it, a McMansion in the exurbs, inevitably spending more time in your car. Loneliness is at the American core, a by-product of a long-standing national appetite for independence: The Pilgrims who left Europe willingly abandoned the bonds and strictures of a society that could not accept their right to be different. They did not seek out loneliness, but they accepted it as the price of their autonomy. The cowboys who set off to explore a seemingly endless frontier likewise traded away personal ties in favor of pride and self-respect. The ultimate American icon is the astronaut: Who is more heroic, or more alone? The price of self-determination and self-reliance has often been loneliness. But Americans have always been willing to pay that price. 
Today, the one common feature in American secular culture is its celebration of the self that breaks away from the constrictions of the family and the state, and, in its greatest expressions, from all limits entirely. The great American poem is Whitman’s “Song of Myself.” The great American essay is Emerson’s “Self-Reliance.” The great American novel is Melville’s Moby-Dick, the tale of a man on a quest so lonely that it is incomprehensible to those around him. American culture, high and low, is about self-expression and personal authenticity. Franklin Delano Roosevelt called individualism “the great watchword of American life.”

Self-invention is only half of the American story, however. The drive for isolation has always been in tension with the impulse to cluster in communities that cling and suffocate. The Pilgrims, while fomenting spiritual rebellion, also enforced ferocious cohesion. The Salem witch trials, in hindsight, read like attempts to impose solidarity—as do the McCarthy hearings. The history of the United States is like the famous parable of the porcupines in the cold, from Schopenhauer’s Studies in Pessimism—the ones who huddle together for warmth and shuffle away in pain, always separating and congregating. We are now in the middle of a long period of shuffling away.

In his 2000 book Bowling Alone, Robert D. Putnam attributed the dramatic post-war decline of social capital—the strength and value of interpersonal networks—to numerous interconnected trends in American life: suburban sprawl, television’s dominance over culture, the self-absorption of the Baby Boomers, the disintegration of the traditional family. The trends he observed continued through the prosperity of the aughts, and have only become more pronounced with time: the rate of union membership declined in 2011, again; screen time rose; the Masons and the Elks continued their slide into irrelevance. We are lonely because we want to be lonely. We have made ourselves lonely. The question of the future is this: Is Facebook part of the separating or part of the congregating; is it a huddling-together for warmth or a shuffling-away in pain?

WELL BEFORE FACEBOOK, digital technology was enabling our tendency for isolation, to an unprecedented degree. Back in the 1990s, scholars started calling the contradiction between an increased opportunity to connect and a lack of human contact the “Internet paradox.” A prominent 1998 article on the phenomenon by a team of researchers at Carnegie Mellon showed that increased Internet usage was already coinciding with increased loneliness.
Critics of the study pointed out that the two groups that participated in the study—high-school journalism students who were heading to university and socially active members of community-development boards—were statistically likely to become lonelier over time. Which brings us to a more fundamental question: Does the Internet make people lonely, or are lonely people more attracted to the Internet?

The question has intensified in the Facebook era. A recent study out of Australia (where close to half the population is active on Facebook), titled “Who Uses Facebook?,” found a complex and sometimes confounding relationship between loneliness and social networking. Facebook users had slightly lower levels of “social loneliness”—the sense of not feeling bonded with friends—but “significantly higher levels of family loneliness”—the sense of not feeling bonded with family. It may be that Facebook encourages more contact with people outside of our household, at the expense of our family relationships—or it may be that people who have unhappy family relationships in the first place seek companionship through other means, including Facebook. The researchers also found that lonely people are inclined to spend more time on Facebook: “One of the most noteworthy findings,” they wrote, “was the tendency for neurotic and lonely individuals to spend greater amounts of time on Facebook per day than non-lonely individuals.” And they found that neurotics are more likely to prefer to use the wall, while extroverts tend to use chat features in addition to the wall.

Moira Burke, until recently a graduate student at the Human-Computer Interaction Institute at Carnegie Mellon, used to run a longitudinal study of 1,200 Facebook users. That study, which is ongoing, is one of the first to step outside the realm of self-selected college students and examine the effects of Facebook on a broader population, over time. She concludes that the effect of Facebook depends on what you bring to it. Just as your mother said: you get out only what you put in. If you use Facebook to communicate directly with other individuals—by using the “like” button, commenting on friends’ posts, and so on—it can increase your social capital. Personalized messages, or what Burke calls “composed communication,” are more satisfying than “one-click communication”—the lazy click of a like.
“People who received composed communication became less lonely, while people who received one-click communication experienced no change in loneliness,” Burke tells me. So, you should inform your friend in writing how charming her son looks with Harry Potter cake smeared all over his face, and how interesting her sepia-toned photograph of that tree-framed bit of skyline is, and how cool it is that she’s at whatever concert she happens to be at. That’s what we all want to hear. Even better than sending a private Facebook message is the semi-public conversation, the kind of back-and-forth in which you half ignore the other people who may be listening in. “People whose friends write to them semipublicly on Facebook experience decreases in loneliness,” Burke says. On the other hand, non-personalized use of Facebook—scanning your friends’ status updates and updating the world on your own
activities via your wall, or what Burke calls “passive consumption” and “broadcasting”—correlates to feelings of disconnectedness. It’s a lonely business, wandering the labyrinths of our friends’ and pseudo-friends’ projected identities, trying to figure out what part of ourselves we ought to project, who will listen, and what they will hear. According to Burke, passive consumption of Facebook also correlates to a marginal increase in depression. “If two women each talk to their friends the same amount of time, but one of them spends more time reading about friends on Facebook as well, the one reading tends to grow slightly more depressed,” Burke says. Her conclusion suggests that my sometimes unhappy reactions to Facebook may be more universal than I had realized. When I scroll through page after page of my friends’ descriptions of how accidentally eloquent their kids are, and how their husbands are endearingly bumbling, and how they’re all about to eat a home-cooked meal prepared with fresh local organic produce bought at the farmers’ market and then go for a jog and maybe check in at the office because they’re so busy getting ready to hop on a plane for a week of luxury dogsledding in Lapland, I do grow slightly more miserable. A lot of other people doing the same thing feel a little bit worse, too. Still, Burke’s research does not support the assertion that Facebook creates loneliness. The people who experience loneliness on Facebook are lonely away from Facebook, too, she points out; on Facebook, as everywhere else, correlation is not causation. The popular kids are popular, and the lonely skulkers skulk alone. Perhaps it says something about me that I think Facebook is primarily a platform for lonely skulking. I mention to Burke the widely reported study, conducted by a Stanford graduate student, that showed how believing that others have strong social networks can lead to feelings of depression.
What does Facebook communicate, if not the impression of social bounty? Everybody else looks so happy on Facebook, with so many friends, that our own social networks feel emptier than ever in comparison. Doesn’t that make people feel lonely? “If people are reading about lives that are much better than theirs, two things can happen,” Burke tells me. “They can feel worse about themselves, or they can feel motivated.”


JOHN CACIOPPO, THE director of the Center for Cognitive and Social Neuroscience at the University of Chicago, is the world’s leading expert on loneliness. In his landmark book, Loneliness, released in 2008, he revealed just how profoundly the epidemic of loneliness is affecting the basic functions of human physiology. He found higher levels of epinephrine, the stress hormone, in the morning urine of lonely people. Loneliness burrows deep: “When we drew blood from our older adults and analyzed their white cells,” he writes, “we found that loneliness somehow penetrated the deepest recesses of the cell to alter the way genes were being expressed.” Loneliness affects not only the brain, then, but the basic process of DNA transcription. When you are lonely, your whole body is lonely. To Cacioppo, Internet communication allows only ersatz intimacy. “Forming connections with pets or online friends or even God is a noble attempt by an obligatorily gregarious creature to satisfy a compelling need,” he writes. “But surrogates can never make up completely for the absence of the real thing.” The “real thing” being actual people, in the flesh. When I speak to Cacioppo, he is refreshingly clear on what he sees as Facebook’s effect on society. Yes, he allows, some research has suggested that the greater the number of Facebook friends a person has, the less lonely she is. But he argues that the impression this creates can be misleading. “For the most part,” he says, “people are bringing their old friends, and feelings of loneliness or connectedness, to Facebook.” The idea that a Web site could deliver a more friendly, interconnected world is bogus. The depth of one’s social network outside Facebook is what determines the depth of one’s social network within Facebook, not the other way around. Using social media doesn’t create new social networks; it just transfers established networks from one platform to another. 
For the most part, Facebook doesn’t destroy friendships—but it doesn’t create them, either. In one experiment, Cacioppo looked for a connection between the loneliness of subjects and the relative frequency of their interactions via Facebook, chat rooms, online games, dating sites, and face-to-face contact. The results were unequivocal. “The greater the proportion of face-to-face interactions, the less lonely you are,” he says. “The greater the proportion of online interactions, the lonelier you are.” Surely, I suggest to Cacioppo, this means that Facebook and the like inevitably make
people lonelier. He disagrees. Facebook is merely a tool, he says, and like any tool, its effectiveness will depend on its user. “If you use Facebook to increase face-to-face contact,” he says, “it increases social capital.” So if social media let you organize a game of football among your friends, that’s healthy. If you turn to social media instead of playing football, however, that’s unhealthy. “Facebook can be terrific, if we use it properly,” Cacioppo continues. “It’s like a car. You can drive it to pick up your friends. Or you can drive alone.” But hasn’t the car increased loneliness? If cars created the suburbs, surely they also created isolation. “That’s because of how we use cars,” Cacioppo replies. “How we use these technologies can lead to more integration, rather than more isolation.” The problem, then, is that we invite loneliness, even though it makes us miserable. The history of our use of technology is a history of isolation desired and achieved. When the Great Atlantic and Pacific Tea Company opened its A&P stores, giving Americans self-service access to groceries, customers stopped having relationships with their grocers. When the telephone arrived, people stopped knocking on their neighbors’ doors. Social media brings this process to a much wider set of relationships. Researchers at the HP Social Computing Lab who studied the nature of people’s connections on Twitter came to a depressing, if not surprising, conclusion: “Most of the links declared within Twitter were meaningless from an interaction point of view.” I have to wonder: What other point of view is meaningful? LONELINESS IS CERTAINLY not something that Facebook or Twitter or any of the lesser forms of social media is doing to us. We are doing it to ourselves. Casting technology as some vague, impersonal spirit of history forcing our actions is a weak excuse. We make decisions about how we use our machines, not the other way around. 
Every time I shop at my local grocery store, I am faced with a choice. I can buy my groceries from a human being or from a machine. I always, without exception, choose the machine. It’s faster and more efficient, I tell myself, but the truth is that I prefer not having to wait with the other customers who are lined up alongside the conveyor belt: the hipster mom who disapproves of my high-carbon-footprint pineapple; the lady who tenses to the point of tears while she waits to see if the gods of the credit-card machine will accept or decline; the old
man whose clumsy feebleness requires a patience that I don’t possess. Much better to bypass the whole circus and just ring up the groceries myself. Our omnipresent new technologies lure us toward increasingly superficial connections at exactly the same moment that they make avoiding the mess of human interaction easy. The beauty of Facebook, the source of its power, is that it enables us to be social while sparing us the embarrassing reality of society—the accidental revelations we make at parties, the awkward pauses, the farting and the spilled drinks and the general gaucherie of face-to-face contact. Instead, we have the lovely smoothness of a seemingly social machine. Everything’s so simple: status updates, pictures, your wall. But the price of this smooth sociability is a constant compulsion to assert one’s own happiness, one’s own fulfillment. Not only must we contend with the social bounty of others; we must foster the appearance of our own social bounty. Being happy all the time, pretending to be happy, actually attempting to be happy—it’s exhausting. Last year a team of researchers led by Iris Mauss at the University of Denver published a study looking into “the paradoxical effects of valuing happiness.” Most goals in life show a direct correlation between valuation and achievement. Studies have found, for example, that students who value good grades tend to have higher grades than those who don’t value them. Happiness is an exception. The study came to a disturbing conclusion: Valuing happiness is not necessarily linked to greater happiness. In fact, under certain conditions, the opposite is true. Under conditions of low (but not high) life stress, the more people valued happiness, the lower were their hedonic balance, psychological well-being, and life satisfaction, and the higher their depression symptoms. The more you try to be happy, the less happy you are. Sophocles made roughly the same point. 
Facebook, of course, puts the pursuit of happiness front and center in our digital life. Its capacity to redefine our very concepts of identity and personal fulfillment is much more worrisome than the data-mining and privacy practices that have aroused anxieties about the company. Two of the most compelling critics of Facebook— neither of them a Luddite—concentrate on exactly this point. Jaron Lanier, the author of You Are Not a Gadget, was one of the inventors
of virtual-reality technology. His view of where social media are taking us reads like dystopian science fiction: “I fear that we are beginning to design ourselves to suit digital models of us, and I worry about a leaching of empathy and humanity in that process.” Lanier argues that Facebook imprisons us in the business of self-presenting, and this, to his mind, is the site’s crucial and fatally unacceptable downside. Sherry Turkle, a professor of computer culture at MIT who in 1995 published the digital-positive analysis Life on the Screen, is much more skeptical about the effects of online society in her 2011 book, Alone Together: “These days, insecure in our relationships and anxious about intimacy, we look to technology for ways to be in relationships and protect ourselves from them at the same time.” The problem with digital intimacy is that it is ultimately incomplete: “The ties we form through the Internet are not, in the end, the ties that bind. But they are the ties that preoccupy,” she writes. “We don’t want to intrude on each other, so instead we constantly intrude on each other, but not in ‘real time.’” Lanier and Turkle are right, at least in their diagnoses. Self-presentation on Facebook is continuous, intensely mediated, and possessed of a phony nonchalance that eliminates even the potential for spontaneity. (“Look how casually I threw up these three photos from the party at which I took 300 photos!”) Curating the exhibition of the self has become a 24/7 occupation. Perhaps not surprisingly, then, the Australian study “Who Uses Facebook?” found a significant correlation between Facebook use and narcissism: “Facebook users have higher levels of total narcissism, exhibitionism, and leadership than Facebook nonusers,” the study’s authors wrote.
“In fact, it could be argued that Facebook specifically gratifies the narcissistic individual’s need to engage in self-promoting and superficial behavior.” Rising narcissism isn’t so much a trend as the trend behind all other trends. In preparation for the 2013 edition of its diagnostic manual, the psychiatric profession is currently struggling to update its definition of narcissistic personality disorder. Still, generally speaking, practitioners agree that narcissism manifests in patterns of fantastic grandiosity, craving for attention, and lack of empathy. In a 2008 survey, 35,000 American respondents were asked if they had ever had certain symptoms of narcissistic personality disorder. Among people older than 65, 3 percent reported symptoms. Among people in their 20s,
the proportion was nearly 10 percent. Across all age groups, one in 16 Americans has experienced some symptoms of NPD. And loneliness and narcissism are intimately connected: a longitudinal study of Swedish women demonstrated a strong link between levels of narcissism in youth and levels of loneliness in old age. The connection is fundamental. Narcissism is the flip side of loneliness, and either condition is a fighting retreat from the messy reality of other people. A considerable part of Facebook’s appeal stems from its miraculous fusion of distance with intimacy, or the illusion of distance with the illusion of intimacy. Our online communities become engines of self-image, and self-image becomes the engine of community. The real danger with Facebook is not that it allows us to isolate ourselves, but that by mixing our appetite for isolation with our vanity, it threatens to alter the very nature of solitude. The new isolation is not of the kind that Americans once idealized, the lonesomeness of the proudly nonconformist, independent-minded, solitary stoic, or that of the astronaut who blasts into new worlds. Facebook’s isolation is a grind. What’s truly staggering about Facebook usage is not its volume—750 million photographs uploaded over a single weekend—but the constancy of the performance it demands. More than half its users—and one of every 13 people on Earth is a Facebook user—log on every day. Among 18-to-34-year-olds, nearly half check Facebook minutes after waking up, and 28 percent do so before getting out of bed. The relentlessness is what is so new, so potentially transformative. Facebook never takes a break. We never take a break. Human beings have always created elaborate acts of self-presentation. But not all the time, not every morning, before we even pour a cup of coffee. Yvette Vickers’s computer was on when she died. Nostalgia for the good old days of disconnection would not just be pointless, it would be hypocritical and ungrateful.
But the very magic of the new machines, the efficiency and elegance with which they serve us, obscures what isn’t being served: everything that matters. What Facebook has revealed about human nature—and this is not a minor revelation—is that a connection is not the same thing as a bond, and that instant and total connection is no salvation, no ticket to a happier, better world or a more liberated version of humanity. Solitude used to be good for self-reflection and self-reinvention. But now we are left thinking about who we are all the time, without ever really thinking about who we are. Facebook denies us a pleasure whose profundity we had underestimated: the chance to forget about ourselves for a while, the chance to disconnect.




Jerome Taylor | The Independent

Google chief: My fears for Generation Facebook

Eric Schmidt, the chief executive of Google, has issued a stark warning over the amount of personal data people leave on the internet and suggested that many of them will be forced one day to change their names in order to escape their cyber past. In a startling admission from a man whose company has made billions by perfecting the art of hoarding, storing and retrieving information on us, Mr Schmidt suggested that the enormous quantity of detail we leave online may not be such a good thing after all. The man who – alongside Google’s founders Sergey Brin and Larry Page – runs the world’s largest search engine said that young people will need to go as far as changing their identities if they are to truly erase what they have left online. “I don’t believe society understands what happens when everything is available, knowable and recorded by everyone all the time,” he told the Wall Street Journal. “I mean we really have to think about these things as a society.” For a man whose company is built on the ability to store information and retrieve it again in a faster and more efficient way than its rivals, Mr Schmidt’s admission revealed a surprising concern among Google’s leadership over the importance of data privacy.


But it has also provoked a wider debate on the sheer amount of information we give away about ourselves online and how most of that data is virtually un-erasable. Perhaps more than any other company, Google has helped create a world where we willingly deposit vast amounts of personal data into the public domain – information that might previously have taken months of investigative work by professionals to find. Google has made billions from storing data on its customers’ browsing habits so that it can use that data to target them with personalised adverts. It also runs the kind of websites that have pioneered the open sharing of information online. The Californian internet giant owns YouTube, the world’s largest video sharing website; it handles billions of our emails through Gmail; and – if you live in a big city – chances are that a Google Street View car has photographed your front door. A series of recent acquisitions also suggests it is hoping to move into the social networking market, the area of the internet that most concerns privacy campaigners. Thanks to the global popularity of social networking – an estimated 600 million people have personal online profiles – friends, prospective employers and enemies alike are able to access photographs, videos and blogs that we may have long forgotten with a few simple clicks of a mouse. Recently one columnist in The New York Times went so far as to describe our current world as an age defined by “the impossibility of erasing your posted past and moving on”. Many websites yesterday picked up on the apparent disconnect between Mr Schmidt’s comments and his company’s ethos.
Chris Williams, of the online tech news website The Register, said: “Recording everything and making it knowable by everyone all the time is Google’s stated mission, and it is profiting handsomely from the fact that society doesn’t understand the consequences.” Other blogs remarked that one previous instance when Mr Schmidt had admitted concerns over the amount of personal information stored online was in 2005 when Google blacklisted the online technology magazine Cnet for an entire year. In an article discussing privacy concerns generated by Google’s data mining capabilities, Cnet’s reporters published Mr Schmidt’s salary, named the neighbourhood where he lives, some of his hobbies and political donations. All the information had been gleaned from Google searches. But while bloggers and web forums reacted with tangible scepticism
to Mr Schmidt’s comments, others welcomed his frankness. “His comments are a little ironic but they are also timely,” said Dylan Sharpe from Big Brother Watch, which has campaigned against Google collecting wifi data on web users while taking photographs with its Street View cars. He added: “Google is a company that specialises in knowing where you are, what you are doing and who you are talking to. That’s a scary prospect even though Google’s users sign up to this sort of data collection willingly. “But Mr Schmidt is completely right on how much information we are giving away online. Right now there are millions of young kids and teenagers who, when they apply for jobs in 10 years’ time, will find that there is so much embarrassing stuff about them online that they cannot take down.” Those who wish to delete what they have put up online, meanwhile, may find it next to impossible to entirely erase their cyber past. “What many people do not realise is that as soon as you put something up online you lose possession and control of that information immediately,” said Rik Ferguson, a cyber security expert at Trend Micro. “Anyone can download, store and distribute that information, it’s out of your hands.” Privacy campaigners say more needs to be done to stop young people in particular depositing information online that may come back to haunt them. “I think we need to change people’s mindsets through education rather than legislation but it’s definitely something that we need to talk to our children about,” said Mr Sharpe. Mr Ferguson, meanwhile, believes web users will increasingly demand better levels of data privacy over the coming decade. “What would be ideal is some sort of technology where you as an end user would be able to assign the right to use, copy or distribute information about yourself to people of your own choosing,” he said. “That sort of technology is already used in encrypted emails. 
I’m sure people will soon start asking for some form of encrypted social networking and companies will respond to that demand.”

In his own words...

* “The internet is the first thing that humanity has built that humanity doesn’t understand, the largest experiment in anarchy we’ve ever had.”

* “Show us 14 photos of yourself and we can identify who you are. You think you don’t have 14 photos of yourself on the internet? You’ve got Facebook photos! People will find it’s very useful to have devices that remember what you want to do, because you forgot... But society isn’t ready for questions that will be raised as a result of user-generated content.”

* “When the internet publicity began, I remember being struck by how much the world was not the way we thought it was, that there was infinite variation in how people viewed the world.”

* “People are surprised to find out that an awful lot of people think that they’re idiots.”

Case study: ‘Drunken pirate’ lark destroyed teaching career

The tale of Stacy Snyder, the “drunken pirate”, is a cautionary one for any young person hoping to embark on a promising career. Ms Snyder, a trainee teacher, had passed all her exams and completed her training. Her academic record was unblemished. That is, until her final summer, when her teachers – out of the blue – deemed that the behaviour she had displayed in her personal life was unbecoming of a teacher. Her crime? She had uploaded an image of herself, wearing a pirate costume and drinking from a plastic cup, on to a social networking site with the caption: “drunken pirate.” A colleague at the school where she had been training had seen it and reported it, saying that it was unprofessional to potentially expose pupils to photographs of a teacher drinking alcohol. As university officials told her that her dream career was now out of her reach, she offered to take the photo down, and argued that it was not even possible to see what was in the cup. After all, she told them, “is there anything wrong with someone of a legally permissible age drinking alcohol?” But her pleas were ignored. Ms Snyder never got the certificate she needed to teach and an attempt to sue the university for it was unsuccessful. Placing a photograph of herself in “an unprofessional state” was her downfall: the image had been catalogued by search engines and by the time she realised the danger, it was impossible to take down.


Eli Pariser | TED

Beware Online Filter Bubbles

Mark Zuckerberg, a journalist was asking him a question about the news feed. And the journalist was asking him, “Why is this so important?” And Zuckerberg said, “A squirrel dying in your front yard may be more relevant to your interests right now than people dying in Africa.” And I want to talk about what a Web based on that idea of relevance might look like. 0:40 So when I was growing up in a really rural area in Maine, the Internet meant something very different to me. It meant a connection to the world. It meant something that would connect us all together. And I was sure that it was going to be great for democracy and for our society. But there’s this shift in how information is flowing online, and it’s invisible. And if we don’t pay attention to it, it could be a real problem. So I first noticed this in a place I spend a lot of time -- my Facebook page. I’m progressive, politically -- big surprise -- but I’ve always gone out of my way to meet conservatives. I like hearing what they’re thinking about; I like seeing what they link to; I like learning a thing or two. And so I was surprised when I noticed one day that the conservatives had disappeared from my Facebook feed. And what it turned out was going on was that Facebook was looking at which links I clicked on, and it was noticing that, actually, I was clicking more on my liberal friends’ links than on my
conservative friends’ links. And without consulting me about it, it had edited them out. They disappeared. 1:54 So Facebook isn’t the only place that’s doing this kind of invisible, algorithmic editing of the Web. Google’s doing it too. If I search for something, and you search for something, even right now at the very same time, we may get very different search results. Even if you’re logged out, one engineer told me, there are 57 signals that Google looks at -- everything from what kind of computer you’re on to what kind of browser you’re using to where you’re located -- that it uses to personally tailor your query results. Think about it for a second: there is no standard Google anymore. And you know, the funny thing about this is that it’s hard to see. You can’t see how different your search results are from anyone else’s. 2:42 But a couple of weeks ago, I asked a bunch of friends to Google “Egypt” and to send me screen shots of what they got. So here’s my friend Scott’s screen shot. And here’s my friend Daniel’s screen shot. When you put them side-by-side, you don’t even have to read the links to see how different these two pages are. But when you do read the links, it’s really quite remarkable. Daniel didn’t get anything about the protests in Egypt at all in his first page of Google results. Scott’s results were full of them. And this was the big story of the day at that time. That’s how different these results are becoming. 3:21 So it’s not just Google and Facebook either. This is something that’s sweeping the Web. There are a whole host of companies that are doing this kind of personalization. Yahoo News, the biggest news site on the Internet, is now personalized -- different people get different things. Huffington Post, the Washington Post, the New York Times -- all flirting with personalization in various ways. 
And this moves us very quickly toward a world in which the Internet is showing us what it thinks we want to see, but not necessarily what we need to see. As Eric Schmidt said, “It will be very hard for people to watch or consume something that has not in some sense been tailored for them.” 4:05 So I do think this is a problem. And I think, if you take all of these filters together, you take all these algorithms, you get what I call a filter bubble. And your filter bubble is your own personal,
unique universe of information that you live in online. And what’s in your filter bubble depends on who you are, and it depends on what you do. But the thing is that you don’t decide what gets in. And more importantly, you don’t actually see what gets edited out. So one of the problems with the filter bubble was discovered by some researchers at Netflix. And they were looking at the Netflix queues, and they noticed something kind of funny that a lot of us probably have noticed, which is there are some movies that just sort of zip right up and out to our houses. They enter the queue, they just zip right out. So “Iron Man” zips right out, and “Waiting for Superman” can wait for a really long time. 5:02 What they discovered was that in our Netflix queues there’s this epic struggle going on between our future aspirational selves and our more impulsive present selves. You know we all want to be someone who has watched “Rashomon,” but right now we want to watch “Ace Ventura” for the fourth time. (Laughter) So the best editing gives us a bit of both. It gives us a little bit of Justin Bieber and a little bit of Afghanistan. It gives us some information vegetables; it gives us some information dessert. And the challenge with these kinds of algorithmic filters, these personalized filters, is that, because they’re mainly looking at what you click on first, it can throw off that balance. And instead of a balanced information diet, you can end up surrounded by information junk food. 5:59 What this suggests is actually that we may have the story about the Internet wrong. In a broadcast society -- this is how the founding mythology goes -- in a broadcast society, there were these gatekeepers, the editors, and they controlled the flows of information. And along came the Internet and it swept them out of the way, and it allowed all of us to connect together, and it was awesome. But that’s not actually what’s happening right now. 
What we’re seeing is more of a passing of the torch from human gatekeepers to algorithmic ones. And the thing is that the algorithms don’t yet have the kind of embedded ethics that the editors did. So if algorithms are going to curate the world for us, if they’re going to decide what we get to see and what we don’t get to see, then we need to make sure that they’re not just keyed to relevance. We need to make sure that they also show us things that are uncomfortable or challenging or important -- this is what TED does -- other points of view.


7:03 And the thing is, we’ve actually been here before as a society. In 1915, it’s not like newspapers were sweating a lot about their civic responsibilities. Then people noticed that they were doing something really important. That, in fact, you couldn’t have a functioning democracy if citizens didn’t get a good flow of information, that the newspapers were critical because they were acting as the filter, and then journalistic ethics developed. It wasn’t perfect, but it got us through the last century. And so now, we’re kind of back in 1915 on the Web. And we need the new gatekeepers to encode that kind of responsibility into the code that they’re writing. 7:51 I know that there are a lot of people here from Facebook and from Google -- Larry and Sergey -- people who have helped build the Web as it is, and I’m grateful for that. But we really need you to make sure that these algorithms have encoded in them a sense of the public life, a sense of civic responsibility. We need you to make sure that they’re transparent enough that we can see what the rules are that determine what gets through our filters. And we need you to give us some control so that we can decide what gets through and what doesn’t. Because I think we really need the Internet to be that thing that we all dreamed of it being. We need it to connect us all together. We need it to introduce us to new ideas and new people and different perspectives. And it’s not going to do that if it leaves us all isolated in a Web of one.




Sherry Turkle | NY Times

The Flight from Conversation

At home, families sit together, texting and reading e-mail. At work executives text during board meetings. We text (and shop and go on Facebook) during classes and when we’re on dates. My students tell me about an important new skill: it involves maintaining eye contact with someone while you text someone else; it’s hard, but it can be done. Over the past 15 years, I’ve studied technologies of mobile connection and talked to hundreds of people of all ages and circumstances about their plugged-in lives. I’ve learned that the little devices most of us carry around are so powerful that they change not only what we do, but also who we are.

We’ve become accustomed to a new way of being “alone together.” Technology-enabled, we are able to be with one another, and also elsewhere, connected to wherever we want to be. We want to customize our lives. We want to move in and out of where we are because the thing we value most is control over where we focus our attention. We have gotten used to the idea of being in a tribe of one, loyal to our own party. Our colleagues want to go to that board meeting but pay attention only to what interests them. To some this seems like a good idea, but we can end up hiding from one another, even as we are constantly connected to one another.

A businessman laments that he no longer has colleagues at work. He doesn’t stop by to talk; he doesn’t call. He says that he doesn’t want to interrupt them. He says they’re “too busy on their e-mail.” But then he pauses and corrects himself. “I’m not telling the truth. I’m the one who doesn’t want to be interrupted. I think I should. But I’d rather just do things on my BlackBerry.” A 16-year-old boy who relies on texting for almost everything says almost wistfully, “Someday, someday, but certainly not now, I’d like to learn how to have a conversation.”

In today’s workplace, young people who have grown up fearing conversation show up on the job wearing earphones. Walking through a college library or the campus of a high-tech start-up, one sees the same thing: we are together, but each of us is in our own bubble, furiously connected to keyboards and tiny touch screens. A senior partner at a Boston law firm describes a scene in his office. Young associates lay out their suite of technologies: laptops, iPods and multiple phones. And then they put their earphones on. “Big ones. Like pilots. They turn their desks into cockpits.” With the young lawyers in their cockpits, the office is quiet, a quiet that does not ask to be broken.

In the silence of connection, people are comforted by being in touch with a lot of people — carefully kept at bay. We can’t get enough of one another if we can use technology to keep one another at distances we can control: not too close, not too far, just right. I think of it as a Goldilocks effect. Texting and e-mail and posting let us present the self we want to be. This means we can edit. And if we wish to, we can delete. Or retouch: the voice, the flesh, the face, the body. Not too much, not too little — just right.

Human relationships are rich; they’re messy and demanding. We have learned the habit of cleaning them up with technology. And the move from conversation to connection is part of this. But it’s a process in which we shortchange ourselves.
Worse, it seems that over time we stop caring, we forget that there is a difference. We are tempted to think that our little “sips” of online connection add up to a big gulp of real conversation. But they don’t. E-mail, Twitter, Facebook, all of these have their places — in politics, commerce, romance and friendship. But no matter how valuable, they do not substitute for conversation. Connecting in sips may work for gathering discrete bits of information or for saying, “I am thinking about you.” Or even for saying, “I love you.” But connecting in sips doesn’t work as well when it comes to understanding and knowing one another. In conversation we tend to one another. (The word itself is kinetic; it’s derived from words that mean to move, together.) We can attend to tone and nuance. In conversation, we are called upon to see things from another’s point of view.


Sherry Turkle

Connected but Alone TED Talk

As we expect more from technology, do we expect less from each other? Sherry Turkle studies how our devices and online personas are redefining human connection and communication — and asks us to think deeply about the new kinds of connection we want to have. ‘what excited me most was the idea that we would use what we learned in the virtual world about ourselves, about our identity, to live better lives in the real world.’ ‘I’m still excited by technology, but I believe, and I’m here to make the case, that we’re letting it take us places that we don’t want to go.’ ‘those little devices in our pockets, are so psychologically powerful that they don’t only change what we do, they change who we are.’ ‘I think we’re setting ourselves up for trouble -- trouble certainly in how we relate to each other’ I see that people can’t get enough of each other, if and only if they can have each other at a distance, in amounts they can control. ‘People say, “I’ll tell you what’s wrong with having a conversation. It takes place in real time and you can’t control what you’re going to say.”’ ‘(texts and messages) don’t really work for learning about each other, for really coming to know and understand each other.’


‘We expect more from technology and less from each other. And I ask myself, “Why have things come to this?”’ ‘phones... offer us three gratifying fantasies. One, that we can put our attention wherever we want it to be; two, that we will always be heard; and three, that we will never have to be alone’ ‘Being alone feels like a problem that needs to be solved. And so people try to solve it by connecting.’ “Those who make the most of their lives on the screen come to it in a spirit of self-reflection.” ‘There’s plenty of time for us to reconsider how we use it (technology), how we build it. I’m not suggesting that we turn away from our devices, just that we develop a more self-aware relationship with them, with each other and with ourselves.’ ‘relationships are filled with risk. And then there’s technology -- simpler, hopeful, optimistic, ever-young.’


Matthew Rock | Real Business

Is Digital Technology Destroying Human Interaction?

Over the past few weeks, we’ve surveyed the owners of the UK’s most disruptive new businesses, as part of the build-up to the Real Business Wonga Future 50 report. Our aim was to find out about the concerns and challenges of the UK’s smartest new companies, and their predictions for the year ahead. One particularly surprising theme emerged: even the next generation of hyper-digital entrepreneurs is asking whether the internet, by replacing real, physical human interaction, is having wide-scale negative social consequences.

The discussion was started by Joanna Montgomery, founder of Little Riot, a small Edinburgh design consultancy that’s attracted a lot of attention for its “Pillow Talk”, which connects long-distance lovers via a pillowcase device that transmits the real-time heartbeat of your loved one. The result, says Montgomery, “is an intimate interaction between two lovers, regardless of the distance between them.” In today’s smartphone and computer-obsessed world, says Montgomery, communicating with loved ones via technology has become “cold and two-dimensional.” Screen-based communication is unhitching us from our natural instincts, and she calls for a return to physical, tactile means of interacting. Montgomery is talking to hospitals, nursing homes etc about using her technology to enable mothers to feel close to a newborn, or providing relatives of hospitalised individuals with comfort, by connecting them directly.

Julia Grinham, founder of the bespoke shoe business Upper Street, also detects a digital backlash. “We’re starting to see a move to ‘switch off’ and disconnect from everything constantly being online.” In parallel, she believes that offline retailers will evolve in new and important ways. When making a purchase decision, “people will put greater value on face-to-face connections, and the opportunity to touch, feel and experience,” she predicts.

Jason Foreman, whose business Kinetique is bringing “hybrid” (ie, non-mined) diamonds to the UK, describes the impact of social media and computer games as “toxic” on personal relationships. “The grip of Facebook and computer games has stifled people’s ability to hold or instigate conversations,” he says.

Some young entrepreneurs see a more positive convergence of the online and offline worlds. Alice Amies of neighbourhood lending scheme Streetbank says: “Our hope is to create a nation where people living in cities know at least ten other people living on their street. Our hope is that these local connections will enable people to take a greater sense of ownership over where they live and, as a result, to see an improvement in mental health; combating loneliness; and reducing crime rates - because of a greater sense of accountability.”

The social consequences of the internet become more important because digital technology will only become more prevalent. Nick Massey of online music business Rara.com says 2013 will be all about the effect of technology on society. “Today’s babies are using touch-screen tablet devices before they learn to talk; while grey consumers – the rock and roll generation alienated by the complexity of PC computing – are re-engaging with technology thanks to tablets’ ease of use.”

In 2013, says Brian O’Reilly of energy-saving technology business Treegreen, smart devices will become universal remote controls - for example, for changing the TV channel, switching the heating on/off and starting the washing machine. We’re still only in the foothills of the internet revolution, but it’s reassuring to see the new generation of digital entrepreneurs so mindful of the powerful, potential social consequences.


Bryan Appleyard | New Statesman

The New Luddites: Why Former Digital Prophets are Turning Against Tech

Very few of us can be sure that our jobs will not, in the near future, be done by machines. We know about cars built by robots, cashpoints replacing bank tellers, ticket dispensers replacing train staff, self-service checkouts replacing supermarket staff, telephone operators replaced by “call trees”, and so on. But this is small stuff compared with what might happen next. Nursing may be done by robots, delivery men replaced by drones, GPs replaced by artificially “intelligent” diagnosers and health-sensing skin patches, back-room grunt work in law offices done by clerical automatons and remote teaching conducted by computers. In fact, it is quite hard to think of a job that cannot be partly or fully automated. And technology is a classless wrecking ball – the old blue-collar jobs have been disappearing for years; now they are being followed by white-collar ones.

Ah, you may say, but human beings will always be better. This misses the point. It does not matter whether the new machines never achieve full humanlike consciousness, or even real intelligence; they can almost certainly achieve just enough to do your job – not as well as you, perhaps, but much, much more cheaply. To modernise John Ruskin, “There is hardly anything in the world that some robot cannot make a little worse and sell a little cheaper, and the people who consider price only are this robot’s lawful prey.”

Inevitably, there will be social and political friction. The onset has been signalled by skirmishes such as the London Underground strikes over ticket-office staff redundancies caused by machine-readable Oyster cards, and by the rage of licensed taxi drivers at the arrival of online unlicensed car booking services such as Uber, Lyft and Sidecar. This resentment is intensified by rising social inequality. Everybody now knows that neoliberalism did not deliver the promised “trickle-down” effect; rather, it delivered trickle-up, because, even since the recession began, almost all the fruits of growth have gone to the rich. Working- and middle-class incomes have flatlined or fallen. Now, it seems, the wealthy cyber-elites are creating machines to put the rest of us out of work entirely.

The effect of this is to undermine the central argument of those who hype the benefits of job replacement by machines. They say that new and better jobs will be created. They say this was always true in the past, so it will be true now. (This is the precise correlative of the neoliberals’ “rising tide floats all boats” argument.) But people now doubt the “new and better jobs” line trotted out – or barked – by the prophets of robotisation. The new jobs, if there are any, will more probably be serf-like attenders to the needs of the machine, burger-flippers to the robot classes. Nevertheless, this future, too, is being sold in neoliberal terms.
“I am sure,” wrote Mitch Free (sic) in a commentary for Forbes on 11 June, “it is really hard [to] see when your pay check is being directly impacted but the reality to any market disruption is that the market wants the new technology or business model more than they want what you offer, otherwise it would not get off the ground. The market always wins, you cannot stop it.” Free was writing in response to what probably seemed to him a completely absurd development, a nightmarish impossibility – the return of Luddism.

“Luddite” has, in the past few decades, been such a routine term of abuse for anybody questioning the march of the machines (I get it all the time) that most people assume that, like “fool”, “idiot” or “prat”, it can only ever be abusive. But, in truth, Luddism has always been proudly embraced by the few and, thanks to the present climate of machine mania and stagnating incomes, it is beginning to make a new kind of sense. From the angry Parisian taxi drivers who vandalised a car belonging to an Uber driver to a Luddite-sympathetic column by the Nobel laureate Paul Krugman in the New York Times, Luddism in practice and in theory is back on the streets.

Luddism derives its name from Ned Ludd, who is said to have smashed two “stocking frames” – knitting machines – in a fit of rage in 1779, but who may have been a fictional character. It became a movement, with Ludd as its Robin Hood, between 1811 and 1817, when English textile workers were threatened with unemployment by new technology, which the Luddites defined as “machinery hurtful to Commonality”. Mills were burned, machinery was smashed and the army was mobilised. At one time, according to Eric Hobsbawm, there were more soldiers fighting the Luddites than were fighting Napoleon in Spain. Parliament passed a bill making machine-smashing a capital offence, a move opposed by Byron, who wrote a song so seditious that it was not published until after his death: “. . . we/Will die fighting, or live free,/And down with all kings but King Ludd!”

Once the Luddites had been suppressed, the Industrial Revolution resumed its course and, over the ensuing two centuries, proved the most effective wealth-creating force ever devised by man. So it is easy to say the authorities were on the right side of history and the Luddites on the wrong one. But note that this is based on the assumption that individual sacrifice in the present – in the form of lost jobs and crafts – is necessary for the mechanised future. Even if this were true, there is a dangerous whiff of totalitarianism in the assumption.
Neo-Luddism began to emerge in the postwar period. First, the power of nuclear weapons made it clear to everybody that our machines could now put everybody out of work for ever by the simple expedient of killing them and, second, in the 1980s and 1990s it became apparent that new computer technologies had the power to change our lives completely.


Thomas Pynchon, in a brilliant essay for the New York Times in 1984 – he noted the resonance of the year – responded to the first new threat and, through literature, revitalised the idea of the machine as enemy. “So, in the science fiction of the Atomic Age and the cold war, we see the Luddite impulse to deny the machine taking a different direction. The hardware angle got de-emphasised in favour of more humanistic concerns – exotic cultural evolutions and social scenarios, paradoxes and games with space/time, wild philosophical questions – most of it sharing, as the critical literature has amply discussed, a definition of ‘human’ as particularly distinguished from ‘machine’.”

In 1992, Neil Postman, in his book Technopoly, rehabilitated the Luddites in response to the threat from computers: “The term ‘Luddite’ has come to mean an almost childish and certainly naive opposition to technology. But the historical Luddites were neither childish nor naive. They were people trying desperately to preserve whatever rights, privileges, laws and customs had given them justice in the older world-view.”

Underpinning such thoughts was the fear that there was a malign convergence – perhaps even a conspiracy – at work. In 1961, even President Eisenhower warned of the anti-democratic power of the “military-industrial complex”. In 1967 Lewis Mumford spoke presciently of the possibility of a “mega-machine” that would result from “the convergence of science, technics and political power”. Pynchon picked up the theme: “If our world survives, the next great challenge to watch out for will come – you heard it here first – when the curves of research and development in artificial intelligence, molecular biology and robotics all converge. Oboy.” The possibility is with us still in Silicon Valley’s earnest faith in the Singularity – the moment, possibly to come in 2045, when we build our last machine, a super-intelligent computer that will solve all our problems and enslave or kill or save us. Such things are true only to the extent to which they are believed – and, in the Valley, this is believed, widely.

Environmentalists were obvious allies of neo-Luddism – adding global warming as a third threat to the list – and globalism, with its tendency to destroy distinctively local and cherished ways of life, was an obvious enemy. In recent decades, writers such as Chellis Glendinning, Langdon Winner and Jerry Mander have elevated the entire package into a comprehensive rhetoric of dissent from the direction in which the world is going. Winner wrote of Luddism as an “epistemological technology”. He added: “The method of carefully and deliberately dismantling technologies, epistemological Luddism, if you will, is one way of recovering the buried substance upon which our civilisation rests. Once unearthed, that substance could again be scrutinised, criticised, and judged.”

It was all very exciting, but then another academic rained on all their parades. His name was Ted Kaczynski, although he is more widely known as the Unabomber. In the name of his own brand of neo-Luddism, Kaczynski’s bombs killed three people and injured many more in a campaign that ran from 1978-95. His 1995 manifesto, “Industrial Society and Its Future”, said: “The Industrial Revolution and its consequences have been a disaster for the human race,” and called for a global revolution against the conformity imposed by technology.

The lesson of the Unabomber was that radical dissent can become a form of psychosis and, in doing so, undermine the dissenters’ legitimate arguments. It is an old lesson and it is seldom learned. The British Dark Mountain Project (dark-mountain.net), for instance, is “a network of writers, artists and thinkers who have stopped believing the stories our civilisation tells itself”. They advocate “uncivilisation” in writing and art – an attempt “to stand outside the human bubble and see us as we are: highly evolved apes with an array of talents and abilities which we are unleashing without sufficient thought, control, compassion or intelligence”. This may be true, but uncivilising ourselves to express this truth threatens to create many more corpses than ever dreamed of by even the Unabomber. Obviously, if neo-Luddism is conceived of in psychotic or apocalyptic terms, it is of no use to anybody and could prove very dangerous.
But if it is conceived of as a critical engagement with technology, it could be useful and essential. So far, this critical engagement has been limited for two reasons. First, there is the belief – it is actually a superstition – in progress as an inevitable and benign outcome of free-market economics. Second, there is the extraordinary power of the technology companies to hypnotise us with their gadgets. Since 1997 the first belief has found justification in a management theory that, bizarrely, upon closer examination, turns out to be the mirror image of Luddism.

That was the year in which Clayton Christensen published The Innovator’s Dilemma, judged by the Economist to be one of the most important business books ever written. Christensen launched the craze for “disruption”. Many other books followed and many management courses were infected. Jill Lepore reported in the New Yorker in June that “this fall, the University of Southern California is opening a new program: ‘The degree is in disruption,’ the university announced.” And back at Forbes it is announced with glee that we have gone beyond disruptive innovation into a new phase of “devastating innovation”.

It is all, as Lepore shows in her article, nonsense. Christensen’s idea was simply that innovation by established companies to satisfy customers would be undermined by the disruptive innovation of market newcomers. It was a new version of Henry Ford and Steve Jobs’s view that it was pointless asking customers what they want; the point was to show them what they wanted. It was nonsense because, Lepore says, it was only true for a few, carefully chosen case histories over very short time frames. The point was made even better by Christensen himself when, in 2007, he made the confident prediction that Apple’s new iPhone would fail.

Nevertheless, disruption still grips the business imagination, perhaps because it sounds so exciting. In Luddism you smash the employer’s machines; in disruption theory you smash the competitor’s. The extremity of disruptive theory provides an accidental justification for extreme Luddism. Yet still, technocratic propaganda routinely uses the vocabulary of disruption theory.


The Anti-Tech Movement

Alexandra Ossola | VICE

The NoPhone, a 3D-printed block of plastic about the same size and weight as a smartphone, is entering the final hours of its Kickstarter campaign. Despite getting a ton of media attention, it looks like the “technology-free alternative to constant hand-to-phone contact” won’t meet its $30,000 goal. Even so, the NoPhone hit a nerve: the compulsion to disconnect, and make it clear to everyone else that you’re disconnected. Entrepreneurs and designers have racked their brains for creative, quirky ideas for how to cut us off. But much of the effort goes into making a statement, to show others around you that you have better things to do than to be connected to a device.

When challenged to create a product that “looked beyond the smartwatch,” designers at Code and Theory went to an opposite extreme. The device looks like a caret (^) perched on the user’s ear and emits EEG waves through a bone conduction speaker. The sound is supposed to cause your brain to induce slower theta waves, the same ones it emits when you’re daydreaming or in the shower. The other element is what it signals to others, something like the opposite of whatever it is that Google Glass means. “It’s a statement piece that tells other people you’re taking a break from them and everything else for a few minutes,” reads one review.

Other wearables look slightly less conspicuous while still making a statement. Designer Kunihiko Morinaga developed a clothing line called Focus: Life Gear. Each dress, coat and pouch is made of fabric that shields radio waves, which grants its wearer “protection from the virtual world,” as Morinaga said in an interview. The DIY community has also taken up the gauntlet, designing pouches that render phones useless or hoodies with zippers that can turn off irritating televisions.

Scientists have established the need for these sorts of products; much like drugs and sugar, some of the things we like about technology are the same things that make it bad for us. When you get an email, text or notification, your brain produces dopamine. This makes you feel good. But it also elicits what neuroscientists are calling a seeking behavior: you want more of it. Today, the average person who spends the day in front of the computer is being constantly dosed with dopamine triggered by an onslaught of pings.

The result has been a slew of new disorders, including nomophobia (the fear of not having access to technology) and phantom ringing or vibration syndrome (where you’re positive that your phone rang or vibrated, but it really didn’t). These disorders are far more prevalent than we might like to admit; one 2010 study of phantom vibration syndrome found that 68 percent of subjects experienced the hallucination. Another study way back in 2006 found that one in eight Americans showed at least one symptom of Internet addiction – and that was before the advent of YouTube or affordable smartphones.

So why not just turn your devices off if you want to be disconnected?
Part of it might be that we just can’t. We’re already addicted, and we need help. “When your phone is with you, there is too much temptation to use it,” the NoPhone team said in an email. “When the NoPhone is with you, the temptation is still there. But when you try to use the NoPhone, it will not work. Because it’s a piece of plastic.”

