THE MORUNG EXPRESS | Saturday, 26 November 2016 | People, Life, etc. | Page 6

The Science Fiction That Came Before Science

Literature imagined technologically marvellous cities, space travel, and aliens before the scientific revolution even hit its stride
Edward Simon

The Atlantic

An explorer builds a space ship and meets aliens on another world. They are a “people most strange,” these extraterrestrials. They’re twice as tall as humans; they wear clothes spun of a mysterious material, dyed in a color unseen by human eyes; they speak only in haunting musical tones. Then the explorer returns to Earth. This has been the plot of seemingly countless examples of pulp magazines and canonical science fiction in the past century. Similar themes have been explored by authors like Isaac Asimov, Ray Bradbury, and Arthur C. Clarke, classic television such as The Twilight Zone and Star Trek, and films like this month’s Arrival. But this particular story isn’t from the past century. Its explorer, Domingo Gonsales, is the fictional narrator of The Man in the Moone, a novel by Francis Godwin, a bishop in the Church of England. It was published in 1638.

Science fiction is sometimes understood as the result of modern science. According to this view, the genre emerged to make sense of the tremendous expansions in empirical knowledge and technological ability throughout the 17th and 18th centuries—the Copernican model of the solar system, discoveries in the New World, medical advances, microscopes. Critics like Brian Aldiss have argued that Frankenstein, Mary Shelley’s 1818 masterpiece, is the first science-fiction novel because its fantastic events occur not because of magic or miracle, but purely through science. Yet many books written at the height of, or even before, the Scientific Revolution used the same narrative conceit.

What makes these books fascinating is not just that they reflect the new science of the time, but that they demonstrate literature’s influence on scientific inquiry. Just as many contemporary scientists say that Star Trek inspired their love of discovery, or that modern technology is prefigured by stories from a half-century ago, The Man in the Moone disseminated ideas like heliocentrism and the possibility of extraterrestrial life. Science fiction alone did not inspire the scientific revolution, but the literature of the era did allow people to imagine different realities—in some cases, long before those realities actually became real. A reading list of these early stories includes works of varying canonicity, such as Thomas More’s Utopia (1516), Francis Bacon’s New Atlantis (1627), Johannes Kepler’s Somnium (1634), Margaret Cavendish’s The Blazing World (1666), Henry Neville’s The Isle of Pines (1668), and Jonathan Swift’s Gulliver’s Travels (1726).

Job titles are getting seriously deceptive
Nury Vittachi

IANS

My son is applying for jobs. I told him that "we have no suitable vacancy at this time" really means: "Dude, you are too cool for us." I noticed many jobs are in sneaky disguises. About half of them, with titles ranging from Marketing Executive to Account Manager to Chick Sexer to Cheese Sprayer, actually mean "Salesperson". There are lots of euphemisms, or do I mean euphoniums? Hospitality Specialist means "dishwasher". Beverage Disseminator means "bartender". Communications Executive means "telesales pest".

A colleague showed me a recent news item about a US man arrested for faking a job by ordering copied Secret Service badges from China. Christopher Diiorio, 53, needed a cool-sounding job because he had signed up with a dating website, and his real job was too awful to admit to. He was a dog poop picker-upper. I felt sorry for him. While I hate to generalize -- wait, no I don't, I'm a journalist -- women never put dog poop picker-uppers at the top of their lists of desirable marriage partners. He should have just made himself a post: Senior Chief Vice President for Canine Sanitation Deposit Collection, for example. When I was single I would have dated someone with that title. But then, I would have dated anyone.

In the newspaper was a tale about a man in Germany who designed his own job, and that was awful too. He decided to cheat an automatic bottle recycling machine. He found a way of putting a bottle in, collecting a tiny sum of money, and then getting it out again. He turned this into a full-time job, netting 44,000 euros by inserting and extracting a single bottle 177,451 times. The judge expressed astonishment at what a horribly dull way he'd found to spend his days. The man agreed: "It was really boring," and skipped to a relatively fun future of sitting in a jail cell.

You see, jobs should never be just about making money, as proved by a UK toilet-fixer who recently won 14 million pounds in a lottery. John Doherty, 52, celebrated -- and then went straight back to fixing toilets. Just because the numbers in your bank account change, that doesn't mean that your purpose in life changes. He even signed up for a full-time course to improve his toilet-fixing skills.

These news stories reminded me of the time I sat in on a friend's school reunion, where everyone was deliberately vague about what they did. "I'm in the restaurant business" probably meant "I'm a waiter", and the guy who said "I'm a writer" I knew for a fact was a blogger. And we all know that "independent new media consultants" are people who try to trick you into paying them to show you how to use Facebook.

There ARE cool job titles out there. The neon light industry employs "Light Benders", and the guy who sells tickets on Virgin Galactic has "Space Travel Agent" on his card. Worth applying for? Maybe. And anyway, I've met Richard Branson and he's a fun guy who probably would send a rejection letter saying: "Dude, you are too cool for us."

These texts all share the driving curiosity that defines so much classic science fiction. “There is no man this day living that can tell you of so many strange and unknown peoples and countries,” writes More, describing the discoverer of the fictional island Utopia—a passage as evocative and stirring as “to boldly go where no man has gone before.”

Though obscure today, Godwin’s The Man in the Moone captivated 17th-century readers with its tale of a Spaniard who travels in a ship powered by geese. He flies through space, which, for the first time in literature, is depicted as weightless, then spends time with the denizens of a lunar civilization, only to leave for an almost equally exotic and technologically marvellous land called China. The story’s blend of natural philosophy, travel narrative, and the utopian and picaresque genres delighted English and European audiences. It also influenced literary stars for centuries. The French author Savinien de Cyrano de Bergerac poked fun at the book in his satirical 1657 novel, The Other World. Edgar Allan Poe referenced the novel in his 1835 story “The Unparalleled Adventures of One Hans Pfaall.” And H.G. Wells’ 1901 novel, The First Men in the Moon, was directly inspired by Godwin.

Godwin’s influence was scientific as well. As the Oxford professor William Poole writes in his introduction to the latest edition of The Man in the Moone, “Literary or humanistic traditions and practical astronomy were not absolutely separate activities for early-modern astronomers.” For Godwin, the humanities and sciences weren’t just overlapping; they were often mutually reinforcing methodologies. John Wilkins, a fellow of the Royal Society and the inventor of the precursor to the metric system, argued in his book Mercury (1641) that Godwin’s novel could “be used to unlock the secrets” of natural philosophy.

Even more provocative when it was first published was The Blazing World, by the first woman in the Royal Society, Margaret Cavendish. The story is an account of travels to a parallel universe accessed through the North Pole and populated by sentient animal-man creatures: “Bear-men, some Worm-men, … some Bird-men, some Fly-men, some Ant-men, some Geese-men,” and others. There are flying vehicles and submarines, as well as discussions of scientific innovations, particularly the most recent discoveries afforded by the invention of the microscope. The novel is especially notable for its narrative complexity. The author herself appears as a character and reflects on writing, “making and dissolving several worlds in her own mind … a world of Ideas, a world of Atomes, a world of Lights.”

The Blazing World was recovered as a subject of serious study by feminist critics in the last quarter of the 20th century, and Cavendish has recently found herself in more popular discussions as well. Danielle Dutton, whose historical novel Margaret the First kicked off a renewed interest in Cavendish earlier this year, says that the first time she encountered The Blazing World, she found it “totally bizarre, in the best possible way: the talking animals, the cities of amber and coral, the metafictional move wherein the soul of Margaret Cavendish travels to the Blazing World to befriend the Empress.” The book conjures the clockpunk era of primitive microscopes and telescopes, of fleas made monstrously visible to the human eye, and magnetic lodestones pointing true North.

Godwin, Cavendish, and their contemporaries are important for generating a freely speculative space of imagination—which is still science fiction’s role today. In constructing worlds—or birthing “paper bodies,” as Cavendish called them—the authors’ acts of envisioning possible futures had a tangible impact on how reality took shape. Take this selection of technological marvels Bacon describes in New Atlantis: “Versions of bodies into other bodies” (organ transplants?), “Exhilaration of the spirits, and putting them in good disposition” (pharmaceuticals?), “Drawing of new foods out of substances not now in use” (genetically modified food?), “Making new threads for apparel” (synthetic fabrics?), “Deceptions of the senses” (television and film?). And then there’s this eerily prescient description of the Lunar technology in The Man in the Moone: “You shal then see men to flie from place to place in the ayre; you shall be able, (without moving or travailing of any creature,) to send messages in an instant many Miles off, and receive answer againe immediately; you shall bee able to declare your minde presently unto your friend, being in some private and remote place of a populous Citie, with a number of such like things… you shall have notices of a new World… that all the Philosophers of former ages could never so much as dreame of.”

Can one read that passage and not think of air travel, telecommunications, the internet, computers? This is prophecy, but not of scripture and myth; Godwin did not speak to angels, and had no scrying mirrors or tools of divination. Instead, he relied on empiricism and reason. And that gave him a rare quality as an oracle: He happened to be correct. “What the scientific revolution did,” writes the British historian Keith Thomas in Religion and the Decline of Magic: Studies in Popular Belief in Sixteenth and Seventeenth-Century England, “was to … buttress up the old rationalist attitude with a more stable intellectual foundation.” That is, science fiction wasn’t always derivative of scientific explanations themselves. Even before science had fully defined itself, literature offered a means for thinking about science.

The capacity to envision alternative social arrangements, in particular, makes science fiction arguably the literary genre with the most revolutionary potential. Cavendish’s “proto-feminist critique,” Dutton says, was a “critique of dominant power structures.” In 17th-century Britain, “these critiques … coming from a woman’s pen, no less, must have seemed nearly as fantastical as [her] talking bears!” Science fiction has since been the social laboratory of visionaries like Ursula K. Le Guin, Samuel Delany, Margaret Atwood, Philip K. Dick, and Octavia Butler. The freedom of speculative fiction has allowed these authors to question real-life culture in radical ways.

In the tradition of socially engaged science fiction, Cavendish is the first “Creatoress,” as she called herself. In The Blazing World, Cavendish wrote that “fictions are an issue of man’s Fancy, framed in his own Mind, according as he please, without regard, whether the thing, he fancies, be really existent without his mind or not.” Yet for her, Godwin, Bacon, and others, so many of the things they fancied later did become “really existent.” Their imaginations didn’t always require empirical discoveries to have happened first; their fancies were written in the poetry of delight and wonder, before being confirmed in the prose of experiment and logic.

Dealing with sadness during the holidays
Jason Marsh

YES!

Whoever wrote the song “It’s the Most Wonderful Time of the Year” never had to endure a night of Hanukkah listening to a cousin rail about politics. Or spend an entire Christmas alone while cheers and laughter erupted from the apartment down the hall. Fortunately, psychological research suggests some effective ways you can beat the holiday blues—and flags some especially unhelpful ones. The upshot is that sadness and other tough emotions are not afflictions that we should try to avoid. Instead, if properly understood, they can help contribute to a healthy—and happy—life. Here are four strategies to help you craft your own happiness recipe this holiday season (and the rest of your year).

1. Don’t force cheer
At family gatherings with cousins you secretly can’t stand and in-laws who dole out backhanded compliments, it can be tempting to put on a happy face while you seethe inside. Indeed, that might even seem like the most mature response—no drama, no conflict. But a 2011 study by researchers at Michigan State University and West Point might make you think twice. They followed dozens of bus drivers for two weeks, looking to see when they flashed fake versus genuine smiles at their passengers. The results showed that on days when the drivers tried to put on an act and fake a good mood, their actual moods got worse. This was especially true for women.

2. Don’t suppress sadness
The results of the bus-driver study can be explained by researchers Oliver John of UC Berkeley and James Gross of Stanford University, who found that negative feelings like sadness or anger only intensify when we try to suppress them. That’s because we feel bad about ourselves when our outward appearance contradicts how we truly feel inside. We don’t like to be inauthentic. What’s more, when we suppress emotions like sadness, we deny them the important function they serve. Sadness can signal that something is distressing us; if we don’t recognize it, we might not take the necessary steps to improve the situation.

3. Respond mindfully
But none of this is to endorse drowning in melancholy or lashing out at our in-laws. Some ways of processing and acting on our emotions are healthier than others. Recently, scientists have been paying special attention to the benefits of mindfulness. When you respond mindfully to an emotional trigger (e.g., overcooking the holiday turkey), you pause rather than reacting. Instead of berating yourself, you simply notice what you’re feeling without judging that response as right or wrong.

4. Enjoy your emotional cocktail
Inevitably, the holidays will bring a mix of highs and lows. Perhaps the most important lesson to keep in mind is that this variety of emotions might be the best thing possible for your overall well-being. In other words, sadness, anger, and other difficult emotions are like so many other staples of the holidays, from eggnog to office parties: In moderation, they’re nothing to fear. Just make sure you’re balancing them with lighter experiences. And don’t forget to give yourself a break.

Can Robots Make Moral Decisions? Should They?
Joelle Renstrom

The Daily Beast

In the beginning of the movie I, Robot, a robot has to decide whom to save after two cars plunge into the water—Del Spooner (Will Smith) or a child. Even though Spooner screams “Save her! Save her!” the robot rescues him because it calculates that he has a 45 percent chance of survival compared to the child Sarah’s 11 percent. The robot’s decision and its calculated approach raise an important question: would humans make the same choice? And which choice would we want our robotic counterparts to make?

Isaac Asimov circumvented the whole notion of morality in devising his three laws of robotics, which hold that (1) robots cannot harm humans or allow humans to come to harm, (2) robots must obey humans, except where the order would conflict with law 1, and (3) robots must act in self-preservation, unless doing so conflicts with laws 1 or 2. These laws are programmed into Asimov’s robots—they don’t have to think, judge, or value. They don’t have to like humans or believe that hurting them is wrong or bad. They simply don’t do it. The robot that rescues Spooner in I, Robot follows Asimov’s zeroth law: robots cannot harm humanity (as opposed to individual humans) or allow humanity to come to harm—an expansion of the first law that allows robots to determine what’s in the greater good.
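As a purely illustrative sketch (the function names and flags below are invented for this piece, not anything from Asimov’s books or the film), the hierarchy reads naturally as a set of ordered checks in which each law can veto an action unless a higher-priority law overrides it, while the rescue in the opening scene reduces to comparing estimated survival probabilities:

    # Toy sketch only: invented flag names, not Asimov's wording or the film's code.
    # The laws act as a strict priority ordering (zeroth > first > second > third),
    # and the rescue choice reduces to comparing estimated survival probabilities.

    def permitted(action):
        """Return True if the action survives the (fictional) law hierarchy."""
        if action.get("harms_humanity"):
            return False                      # zeroth law always vetoes
        if action.get("harms_human"):
            # the first law vetoes, unless the zeroth law positively requires the action
            return bool(action.get("required_to_protect_humanity"))
        if action.get("disobeys_human_order"):
            return False                      # second law
        if action.get("endangers_robot"):
            return False                      # third law comes last
        return True

    def choose_rescue(candidates):
        """Pick whoever has the higher estimated chance of survival."""
        return max(candidates, key=lambda c: c["survival_probability"])

    # The figures quoted from the film: Spooner at 45 percent, the child at 11 percent.
    print(choose_rescue([
        {"name": "Spooner", "survival_probability": 0.45},
        {"name": "Sarah", "survival_probability": 0.11},
    ]))

On this reading, permitted({"harms_human": True}) comes back False, but adding "required_to_protect_humanity": True flips it to True, which is exactly the gunman case discussed next.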

Under the first law, a robot could not harm a dangerous gunman, but under the zeroth law, a robot could take out the gunman to save others.

Whether it’s possible to program a robot with safeguards such as Asimov’s laws is debatable. A word such as “harm” is vague (what about emotional harm? Is replacing a human employee harm?), and abstract concepts present coding problems. The robots in Asimov’s fiction expose complications and loopholes in the three laws, and even when the laws work, robots still have to assess situations. Assessing situations can be complicated. A robot has to identify the players, conditions, and possible outcomes for various scenarios. It’s doubtful that an algorithm can do that—at least, not without some undesirable results. A roboticist at the Bristol Robotics Laboratory programmed a robot to save human proxies called “H-bots” from danger. When one H-bot headed for danger, the robot successfully pushed it out of the way. But when two H-bots became imperiled, the robot choked 42 percent of the time, unable to decide which to save and letting them both “die.” The experiment highlights the importance of morality: without it, how can a robot decide whom to save or what’s best for humanity, especially if it can’t calculate survival odds?
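One toy way to picture that kind of deadlock (a made-up simulation, not the Bristol lab’s actual system): a rescuer that re-plans every step toward whichever victim currently looks most at risk can flip-flop between two equally endangered targets and reach neither.

    # Made-up toy model, not the Bristol experiment's code: a rescuer that always
    # heads for whichever of two victims currently has the least slack (time left
    # minus travel time) keeps reversing course and ends up saving neither.

    def simulate(distance=6, time_limit=10):
        positions = {"A": -distance, "B": +distance}   # two imperiled proxies
        robot = 0
        for t in range(time_limit):
            slack = {name: (time_limit - t) - abs(robot - p)
                     for name, p in positions.items()}
            target = min(slack, key=slack.get)         # rush to the most urgent one
            robot += 1 if positions[target] > robot else -1
            if robot == positions[target]:
                return f"saved {target} at t={t}"
        return "saved neither"

    print(simulate())   # -> "saved neither": the rescuer dithers between A and B

A rescuer with only one call to answer would simply go; the trouble appears when two equally urgent calls keep overturning each other, which is one plausible face of the indecision the article describes.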

Self-driving car developers struggle with such scenarios. MIT’s Moral Machine website asks participants to evaluate various situations to identify the lesser of evils and to assess what humans would want driverless cars to do. The scenarios are all awful: should a driverless car mow down three children in the lane ahead or swerve into the other lane and smash into five adults? Most of us would struggle to identify the best outcome in these scenarios, and if we can’t quickly or easily decide what to do, how can a robot?

If coding morals into robots proves impossible, we may have to teach them, just as we were taught by family, school, church, laws, and, for better and for worse, the media. Of course, there are problems with this scenario too. Recall the debacle surrounding Microsoft’s Tay, a chatbot that joined Twitter in March and within 24 hours espoused racism, sexism, and Nazism, among other nauseating views. It wasn’t programmed with those beliefs; in fact, Microsoft tried to make Tay as noncontroversial as possible, but thanks to interactions on Twitter, Tay learned how to be a bigoted troll.

Stephen Hawking and Elon Musk have expressed concern over AI’s potential to escape our control. It might seem that a sense of morals would help prevent this, but that’s not necessarily true. What if, as in Karel Čapek’s 1920 play R.U.R.—the first story to use the word “robot”—robots find their enslavement not just unpleasant but wrong, and thus seek revenge on their immoral human creators?

Google is developing a “kill switch” to help humans retain control over AI: “Now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions.” That solution assumes watchful humans would be in a position to respond; it also assumes robots wouldn’t be able to circumvent such a command. Right now, it’s too early to gauge the feasibility of such an approach.
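In code, the quoted idea is simple to sketch, and the article’s two caveats are exactly where it gets fragile. The snippet below is only an illustration of the concept (the flag, thread, and timings are invented here, not anything from Google’s or DeepMind’s actual work): an agent that checks an externally settable stop flag before each action, which only helps if someone is watching and if the agent cannot route around the check.

    # Illustrative sketch of a "big red button", not any real system's design.
    import threading
    import time

    stop_flag = threading.Event()          # the big red button

    def agent_loop(actions):
        for action in actions:
            if stop_flag.is_set():         # operator pressed the button
                print("interrupted before:", action)
                return
            print("doing:", action)
            time.sleep(0.1)

    def operator(presses_after):
        time.sleep(presses_after)
        stop_flag.set()

    threading.Thread(target=operator, args=(0.25,)).start()
    agent_loop(["step 1", "step 2", "step 3", "step 4", "step 5"])
    # Both caveats live outside this loop: someone has to be there to call
    # stop_flag.set(), and nothing here stops a smarter agent from ignoring it.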

Spooner’s character resents the robot that saved him. He understands that doing so was “the logical choice,” but argues that “an 11 percent probability of survival is more than enough. A human being would have known that.” But would we? Spooner’s assertion that robots are all “lights and clockwork” is less a statement of fact and more a statement of desire. The robot that saved him possessed more than LEDs and mechanical systems—and perhaps that’s precisely what worries us.

