
Bard Science Journal
December 2013




Table of Contents

The Hasty Architectural Engineer: A Rewritten Fable (Gavin Myers)
Guide to Taxonomy: Classifying Organisms
The Importance of Biological Diversity (Jennifer Gillen)
Leon’s Lab: Hands-On Science in Tivoli (Polina Vulakh)
Computer Graphics in Film: An Analysis of Lillian Schwartz’s Pixillation (1970) (Nicholas Carbone)
Science Fiction Vignette (Penny Weber)
Singularity vs. Apocalypse (Charlotte Ames)
Exploring Posthumanism through a Religious Lens
Overview of Special Relativity: Introduction to Einstein
Implications of Special Relativity to St. Augustine’s Theory of Time
Co-Sleeping: Examining Cross-Cultural Reason and Evidence
stem cells: A Poem
The Emergence and Migration of Tuberculosis
Discerning Blood Clotting
Physics REU Results

Editors-in-Chief: Jennifer Gillen and Charlotte Ames

Editors: Donald Long, Polina Vulakh, Dan Pitts, Eli Regen

Contributors: Penny Weber, Elyse Neubauer, Nicholas Carbone, Polina Vulakh, Josephine Williams, Gavin Myers, Marty Abbe-Schneider, Eleonora Beier, Sam Osborn, Trevor LaMountain, Matthew Dalrymple & Alexia Motal

Cover by Eleonora Beier; back cover by Polina Vulakh

Send all questions, comments, and submissions to



Bard Science Journal December 2013 Volume 3, No.1

The Hasty Architectural Engineer
GAVIN MYERS

Author’s Note: I have always been unfathomably irritated by the phrase “Slow and steady wins the race.” Slowness does not win races; slowness helps caution against mistakes. Most people know that, deep down somewhere, and yet they still reference the tale of The Tortoise and the Hare as an archaic example of the benefits of slowness. I have written this story to replace the original fable.

Once upon a time there were two architectural engineers. They each got commissioned to design a bridge at the same time, and decided to turn it into a race (or one did). One engineer was very careful with his calculations and choices, while the other was hasty and didn’t double-check any of his numerical results. Sure enough, the hasty engineer finished his bridge design, and bridge, first. He mocked the other engineer about it the entire time, right up until the careful engineer finished.

A month later, while the hasty engineer was at dinner with his wife, he went to the bathroom and got a phone call. His contractor informed him that his bridge had collapsed due to faulty engineering, causing millions of dollars in damage and ending the lives of 18 loving and loved people, including his friend and coworker, Jasper. The hasty engineer threw up on the floor and began crying. He waited in the bathroom, contemplating all of the ramifications of his stupidity, too ashamed to go out and face the world or deliver the news to his wife.

The careful engineer slept just fine that night, with his wife, and did not have to face any lawsuits or haunting guilt.

The End.



Classifying Species: The Ins and Outs of Taxonomy

Taxonomy refers to the biological identification and classification of species. Each species is given a unique scientific name, which comprises the genus name and the specific epithet. For example, in Homo sapiens, Homo is the genus and sapiens is the specific epithet. The scientific name is always italicized, and the genus is always capitalized. For classification, ranging from broad categories to more specific ones, an organism is placed into a domain, kingdom, phylum, class, order, family, genus, and finally, a species. Taxonomy is important for preserving biodiversity: if we do not know that a certain species exists, we will not know how to conserve it or how important its ecological role is. There are currently three domains of life and six kingdoms of life (domains Bacteria and Archaea are also considered kingdoms).
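The ranked hierarchy described above maps naturally onto a simple data structure. The sketch below is illustrative only: the rank names and the classification of humans are standard biology, but the code and the `scientific_name` helper are not from any article in this issue.

```python
# Standard taxonomic ranks, ordered from broadest to most specific.
RANKS = ("domain", "kingdom", "phylum", "class", "order",
         "family", "genus", "species")

# The full classification of humans, as an example organism.
human = {
    "domain":  "Eukarya",
    "kingdom": "Animalia",
    "phylum":  "Chordata",
    "class":   "Mammalia",
    "order":   "Primates",
    "family":  "Hominidae",
    "genus":   "Homo",
    "species": "sapiens",
}

def scientific_name(organism):
    """Binomial name: capitalized genus plus lowercase specific epithet."""
    return f"{organism['genus'].capitalize()} {organism['species'].lower()}"

print(scientific_name(human))  # Homo sapiens
```

Reading the dictionary in `RANKS` order walks from the broadest category (domain) down to the most specific (species), mirroring the classification sequence in the paragraph above.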



Domain Bacteria includes prokaryotes known as true bacteria, which are found almost everywhere, including our mouths and intestines.

Domain Archaea includes single-celled prokaryotes that live in extreme environments and reproduce asexually.


Domain Eukarya includes eukaryotes, which have a membrane-bound nucleus and other membrane-bound organelles and are typically larger than prokaryotes. All multicellular organisms are members of this domain.

Kingdom Animalia

Animals are multicellular, heterotrophic organisms that lack cell walls. Most animals are motile, reproduce sexually, and have cells organized into tissues. This kingdom includes a range of organisms, from sponges to humans.

Kingdom Fungi

Fungi are heterotrophic eukaryotes that have chitinous cell walls. They are ecologically important because they break down organic materials and often form symbiotic relationships with plants, called mycorrhizae, in which they help supply the plant roots with water and nutrients. Although many associate them with plants, fungi are more closely related to animals.


Kingdom Protista

Kingdom Plantae

Protists are a diverse group of mostly unicellular (or multicellular, lacking tissues) microorganisms. Most taxonomists now believe that this kingdom should be further divided, because these organisms are very different—they range from photoautotrophic algae to the apicomplexans that cause malaria and other diseases.

Plants are multicellular photoautotrophs, meaning they make their own food through photosynthesis. They have vital ecological roles, which include absorbing carbon dioxide and providing oxygen, and being the energy source for animals—without plants, animals would not be alive.

Hudson Valley Species:

The Importance of Biological Diversity by JENNIFER GILLEN


Biological diversity, or biodiversity, is usually defined as the variety of life in a given area. It refers to the complex interactions of animals and plants with other living things and their environments, and includes not only the number of species, but also the diversity of smaller biological units, like genes, and larger biological units, like ecosystems. Because of these complexities, it’s difficult for scientists to know or predict how the loss of one species will affect an ecosystem. A species’ role in its environment may remain unknown until it’s extinct.

Biodiversity is also important for vital ecosystem services—it’s responsible for the clean water you drink, the clean air you breathe, the pollination of flowers and crops, the reduction of disease risks, and many other benefits. It’s also important for tourism, reducing stress, and improving the quality of life. But our lack of knowledge of an ecosystem service or a species’ worth to humans does not make that ecosystem or species valueless. Every living thing has intrinsic value, and this should be our reason to conserve a species, or at least to make it possible for it to continue living. It’s also important to remember that the benefits of biodiversity are not the result of a specific number of species, but of the interactions between those species. We cannot determine an individual species’ role without studying its interactions with, and effects on, its environment.

If you’re wondering how one species could be so significant, think of the sea otter. Sea otters eat sea urchins, which allows kelp populations to grow and remain important habitats for many different species. When sea otter abundance is greatly reduced, sea urchin populations grow unchecked and eat all of the kelp, destroying kelp forest habitats. Organisms that provide an irreplaceable ecological role are called keystone species.
They maintain the structure and balance of their ecosystem, and include herbivores, predators, prey, and plants. The most cited examples are predators that reduce the abundance of herbivores, like sea otters, wolves, fire ants, and sea stars. Of course, not all species are keystone species, but it’s important not to underestimate the significance of any living thing. For example, there’s a bacterium found in this area that produces a purple pigment, violacein, with antifungal and antimicrobial properties. This pigment kills the type of fungus that is causing amphibian declines all over the world, and may start being used to treat animals and prevent these declines. Professor Brooke Jude and many of her seniors are doing research on this bacterium. Other important and interesting organisms include salamanders, turtles, butterflies, waterfowl, and many more.

The Hudson Valley is an area of high biological diversity, and living here gives us a great opportunity to appreciate and understand its importance. According to the New York Department of Environmental Conservation, the Hudson Valley is home to over 2,000 plants and animals, and 90% of the birds, mammals, reptiles, and amphibians that live in the state are found in this region. We live in a place where you can hear the red-eyed vireo or see a ribbon snake on a short walk to class, where you can hear coyotes at night and maybe glimpse a screech owl. Or you can turn over a rock in the woods and find a salamander. This is my fourth year at Bard, and I still can’t believe that we get to share this place with so many different kinds of plants and animals.

The Hudson Valley is home to many different types of organisms because there are so many habitat types. Coastal habitats like salt marshes, tidal creeks, and tidal wetlands are shelter for waterfowl, a number of species of turtles, fishes, and bald eagles. Wetlands, including freshwater and brackish waters, are habitats for ducks, frogs, beavers, and salamanders. Forests are important for breeding songbirds, owls, bears, and bobcats. Grasslands and shrublands are habitats for the endangered bog turtle, certain types of birds, foxes, and butterflies. This variety of habitats allows for the diverse number of organisms that we find in the Hudson Valley.

To show you just how great and diverse this area is, I’ve created a very, very short, and incomplete field guide for some species you can find around here. It includes many animals and plants that I’ve come across, but it is in no way a comprehensive field guide. It may not seem like the most appropriate time for an article about biodiversity or a field guide, but before the winter stupor sets in, I want to remind everyone of all the cool species that we live next to. And I hope you save the guide until it’s a bit more useful, in the spring and summer!

If you’re interested in learning more about local biodiversity, Hudsonia, based at the Bard Field Station, is a great resource. It’s a non-profit environmental research organization that is committed to environmental education and research on biodiversity.
Hudsonia performs habitat assessments, does research on the endangered bog turtle and threatened Blanding’s turtle, and educates the public through programs like biodiversity assessment courses. Hudsonia biologists do research on amphibians, reptiles, moths and butterflies, and plants, and have published papers on wetland restoration, fracking, purple loosestrife, phragmites, and many more ecological topics. If you’re interested in learning more about a species or knowing what species inhabit this area, Animal Diversity Web, the New York DEC website, and the New York DEC’s NatureExplorer are helpful online resources that I used to write this article and the field guide. Thanks to Felicia Keesing, Bruce Robertson, Philip Johns, and Brooke Jude for their help and suggestions.



Leon’s Laboratory:

Bard Students Bring Hands-on Science to the Tivoli Library by POLINA VULAKH


This fall semester brings us a new scientifically-oriented club: Leon’s Laboratory. Headed by Kedian Koehan (’16) and Andrea Szegedy-Maszak (’16), this program conducts hands-on experiments with children at the Tivoli Free Library three times per month. The three experiments explore a monthly theme: September and October dove into plant and food science respectively, while November will focus on chemical reactions. This club first sprouted shoots when former CCE fellow Jananie Ravi (’12) approached Szegedy-Maszak with the idea of starting it up. Both she and Koehan have volunteered at similar programs and events such as the Bard Science Outreach, the Rhinebeck Science Fair, and the 8th Grade Science Days, so Ravi knew the two would be interested in something of that sort. Leon’s Laboratory is the revamping of said programs: an organized, enthusiastic, and welcoming environment for kids to dive into laboratory work without the pressure of a textbook-based, lecture-filled class.

When Szegedy-Maszak was first discussing the idea of the club with Veronica Stork, a representative of the Tivoli Free Library Board, the two realized that one of the main things they wanted to achieve was consistent attendance from the children. This would make the kids more comfortable with one another, which is essential since all the activities are very hands-on and some require teamwork. Thus, they developed the idea of monthly scientific themes. This reinforces the general concepts of each subject and allows for experiments to be continuations of each other. During Plant Month, for example, each child first created their own terrarium from rocks, soil, and plants in a large plastic water bottle for the first experiment. On the second week, they monitored the growth and overall progress of their already created terrariums and observed capillary action using celery stalks and food coloring. Other experiments included leaf chromatography, the difference between natural and artificial foods, and the chemical processes of several invisible inks made from household materials like cornstarch and lemon juice. Having monthly themes not only reinforces the concepts for each topic, but also lets children pick and choose which sessions to attend based on their interests. Through this, Leon’s Laboratory achieved a good portion of regular participants. The clever name of the club itself came from Koehan (“I really was dying for alliteration!”). While originally vetoed by her fellow clubhead, it gathered wide support from the masses, forcing Szegedy-Maszak to cave in.

A huge challenge in this program is articulating complex scientific ideas in a way for all ages to understand. This is where mixing kids in the upper grades of elementary school with others who are in kindergarten comes in handy; the result is very involved, very enthusiastic teamwork. Sometimes an older kid pairs up with a volunteer and goes through the experiment with his younger club members, adding more comfort and accessibility to the procedure. This takes away from the standard classroom setting that kids often associate with science. “It [the experiments] has to be something that we’re excited about, something that we can execute with a lot of wiggle room, and something that the kids can run with in a bunch of different directions,” explains Szegedy-Maszak. Unlike the sterile, lecture-filled environment of most in-school sciences, kids are free to restructure the experiment if they’re curious about a specific portion of it. For example, during the invisible ink experiment, a girl was curious to see what would happen if one reversed the order of the procedure: instead of first applying corn starch to the paper and then revealing the message using iodine, what if one mixed the corn starch and iodine together and wrote on the paper using that solution? Basically, the nature of the chosen experiments yields a lot of room for creative flexibility.

As is custom for laboratory work, each child has their own notebook to record weekly scientific observations. However, this staple of the scientific method came with a twist. Because of the diverse age range, “a few of our kids can write but can’t read,” admits Koehan. “An interesting struggle that I didn’t know would be a reality was to have kids writing in their little science journals and asking us to dictate words to them letter-by-letter so they could write what they want to say.” Thankfully, this challenge is easy to work around by substituting written records with illustrations.

[A portrait of Leon’s Lab founders, Kedian and Andrea.]

“My favorite part [of Leon’s Laboratory] is when the kids can articulate an idea back to you,” shares Koehan. Seeing the kids immediately latch onto new concepts is possibly the most rewarding part of the experience because this shows how eager they are to learn new scientific processes. “I just love hearing ‘woah!’ across the room and then somebody who’s not one of the volunteers explaining it… seeing these kids running up to the light bulb and showing their parents and showing each other,” recalls Szegedy-Maszak. It’s outstanding how enthusiastic the children are about every experiment so far—the clubheads couldn’t be more pleased. They mention that this has a lot to do with the volunteers who show up each week, who “…do a great job of engaging with the kids, or sitting down with an individual and working through a problem. Or speaking to a whole group of kids, or just floating around and handing extra materials out where it’s necessary.” A key part is that the club has had enough volunteers each



week to do that. “Someone I’d like to recognize who isn’t a founder of the club is Rock Delliquanti. Rock has been really, really great,” says Koehan. Not only has he been a volunteer for every session so far, but Delliquanti has also offered up the Sands House kitchen as a place to try out and perfect experiments before conducting them with the children.

Apart from expanding from Tivoli into Bard’s other neighboring towns, Leon’s Laboratory’s future plans include acquiring a stronger presence on campus. Similar to the 8th Grade Science Days, kids from the surrounding community would come to campus and be led through a series of experiments focusing on one theme (as opposed to having five subjects presented in one day). Another idea the clubheads are looking into is a Bard College science fair for the Bard students themselves—this fair would showcase how science is interpreted across all disciplines at Bard. For example, a dancer could present something they choreographed that shows how a certain molecule moves; the framework will be very loose, meant to incorporate the many academic branches at Bard in a multitude of ways.

Leon’s Laboratory is organized to make it easy for anyone to just drop by and join in. The weekly meetings at 5pm in the RKC pods on Wednesdays act as mini brainstorming sessions where the upcoming Friday experiment is discussed. On top of that, lesson plans are sent out to all participants so that everyone stays in the loop even if they’ve missed the Wednesday meeting. If this sounds like something you would be interested in, come by the RKC on a Wednesday at 5, or email Koehan and Szegedy-Maszak at kk9350@ and



Computer Graphics in Lillian Schwartz’s Pixillation (1970) by NICHOLAS CARBONE


In our technology-heavy age, it’s strange to think of a time when films were distant from computer technology. In the 1960s and ’70s, long before computers were easily accessible, American experimental filmmaker Lillian Schwartz worked with AT&T’s Bell Laboratories to experiment with film and computer technology. With the help of Ken Knowlton, a software engineer and computer programmer, she created films that combined her visual experimentations with Knowlton’s computer graphics language. The films they made together pioneered experimentation with digital media and film form. Their first collaboration, Pixillation, reflects how Schwartz used computer pixels to create a deeper understanding of how technological innovation relates to the organic processes of nature.

Pixillation used Knowlton’s EXPLOR computer language to create the diverse abstract forms that Schwartz desired. Schwartz explains that EXPLOR is a “system for computer-generation of still or moving images from explicitly defined patterns, local operations and randomness.” She would devise images on graph paper and transpose them to film with EXPLOR in order to achieve the forms she desired on screen. Bruce Cornwell, working with an optical printer, also helped her layer her film sequences over each other to match her central rhythmic composition within the frame. She would then add color to these sequences to produce a deeper illusion of motion and fluidity. Schwartz’s fluid images are complemented by Gershon Kingsley’s experimental soundtrack, which uses a Moog synthesizer to produce rhythmic beats that mix organic and futuristic sounds, creating an increasing intensity throughout the film.

The power of the images comes from the evolving and moving squares, fluids, and crystals in the film. Schwartz’s fluctuating geometric sequences are juxtaposed with immersions of liquids and oils as well as footage showing the formation of ice crystals [Figures 1 & 2]. The similarity between the natural pattern of ice crystals and the mathematical grids of the squares underlines the convergence of the outside world and the more technologically-simulated world. These converging and diverging visuals also emphasize an organically occurring pattern in a computer-generated display.
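Schwartz’s quoted definition of EXPLOR names three ingredients: explicitly defined patterns, local operations, and randomness. The Python sketch below is a loose modern illustration of that recipe, not EXPLOR itself (EXPLOR was a Bell Labs language of the early 1970s, and none of these function names come from it): an explicit pattern seeds a pixel grid, a local operation recomputes each cell from its neighbors, and random noise perturbs the result, frame by frame.

```python
import random

SIZE = 16  # side length of the square pixel grid

def explicit_pattern(x, y):
    """An explicitly defined pattern: a checkerboard of 2x2 blocks."""
    return (x // 2 + y // 2) % 2

def local_operation(grid, x, y):
    """A local operation: a cell flips if most of its 4-neighbors differ."""
    neighbors = [grid[(y + dy) % SIZE][(x + dx) % SIZE]
                 for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    differing = sum(1 for n in neighbors if n != grid[y][x])
    return 1 - grid[y][x] if differing >= 3 else grid[y][x]

def step(grid, noise=0.05):
    """One frame: apply the local operation everywhere, then random flips."""
    new = [[local_operation(grid, x, y) for x in range(SIZE)]
           for y in range(SIZE)]
    for y in range(SIZE):
        for x in range(SIZE):
            if random.random() < noise:
                new[y][x] = 1 - new[y][x]
    return new

# Seed a frame from the explicit pattern, then evolve it for four frames.
frame = [[explicit_pattern(x, y) for x in range(SIZE)] for y in range(SIZE)]
for _ in range(4):
    frame = step(frame)
```

Iterating `step` produces a sequence of frames in which the regular seed pattern gradually dissolves into partly random structure, loosely analogous to the evolving, mutating grids of squares seen in the film.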

Figure 2

Schwartz’s organic framework delineates the concept of “emergence,” which can be defined as the way patterns and systems form from a number of uncomplicated interactions. The way in which Schwartz utilizes pixels and animates them through the algorithms in EXPLOR exemplifies this theory. She shapes the cinematic image through minute interactions of pixels, which parallels the way life develops and emerges out of small interactions. Her investigation of computer language on film helps to support the “emergence” concept that life and technology form through small accidents and alterations.

By showing the growth of the square shapes, the muddle of liquid substances, and the enlargement of ice crystals, Pixillation points to how our natural and technological environment progresses through cycles. The slight geometric adaptations of Lillian Schwartz’s pixels use diverse colors, designs, and editing to create a deep contemplation of the computer medium and the way life develops. In contrasting the computer pixels with the ice crystals, she presents a clear connection between the formation of nature and the development of an algorithm. Pixillation’s relation to the concept of “emergence” helps to situate the film as a metaphor for how computer technology extends from the natural processes of life and how life keeps evolving through time.

Works Cited

Pixillation (4 minutes) is available to watch on YouTube and Vimeo.

Qtd. in Walter Forsburg, “Lillian Schwartz Sees in Four-Dimensions,” Incite: Journal of Experimental Media 3 (2011). Online.

The Moog synthesizer is an analog synthesizer manufactured by the Moog company, which pioneered the commercial sale of synthesizers in the mid-1960s.

Jeffrey Goldstein, “Emergence as a Construct: History and Issues,” Emergence: Complexity and Organization (1999), 49. Web.

Figure 1



12/24, Day 700-something, Three AM
Dominic King


Christmas carols are full of bullshit. I don’t mean the worshipping of a tiny defenseless baby or the belief that out of the blue some giant old man in the sky is going to swoop down and save our collective, sinful asses. Your beliefs are up to you, that’s one thing I think no man has a right to infringe on. I might question your sanity, but that’s only fair. You’re sure as hell gonna be questioning mine when you read a bit further. If you read further. If you’ve still got eyes to read. If you exist at all, alive. Or not. Hello.

Anyway. Christmas carols are bullshit because there’s no such thing as a silent night. Maybe there was, once, when nights were marked by milky moonlight and friendly, undisturbed shadows, by the sleeping faces of pretty children. But I can’t remember anything like that. I don’t even know if I believe that there ever was anything like that. Sometimes I think the world’s always been this way, always full of screaming and clawing and biting and the growing, growing, growing, growing noises of the roots over his head. I wish they’d stop growing. I can’t see his face anymore.

Take this night, now. This is what I’d call a quiet night, not anything to do with the actual noise level, but because I’m not running half-naked through the freezing woods yelling myself hoarse and trying to aim a shotgun with my bad hand. I’m not surrounded by the Butchers, they’re not fixing me with gleaming stares like they’re waiting for my skin to peel away from my bones. Waiting for me to be revealed as one of their gods. It’s a quiet night, because I actually have time to sit down and dig my broken bit of pencil from my belt and sketch and theorize and write this shit down in case there’s someone out there who’ll outlast me who’s not fucked up or crazy or a dead man walking like I am.

Ha. Dead man walking. That one made me snort aloud and look around to see if anyone had noticed, but the skulls are always grinning, and...I can’t see his face.
Maybe the corner of his mouth, if I squint, but it might just be shadow. I wish I could still remember his laugh. If I had my knife, I’d cut the roots away from his throat and see if maybe it would slip out like air from a tire. I was good at making him laugh, and he was good at making me laugh. I think that’s the best basis for a relationship, don’t you? Laughter breaks down walls.

“Christ, this isn’t a romance column.” Dom stuck his pencil behind his ear, his eyes drifting to the lantern and the fluttering of the stupid, suicidal moths against its glow. There were about twelve of them now. Only four, yesterday. Maybe there were eggs in the soil, maybe soon his whole little home would be bursting with powdery wings, so many he couldn’t breathe for them. That wouldn’t be such a bad way to die, all things considered. He imagined kids, in the future, the far off future where there were kids again, the future that was about as likely as him getting a real Christmas dinner. There’d be three of them, kids, not Christmas dinners, a boy and two girls, sitting at their grandfather’s knee. “Tell me about the Great Plague,” they’d say, because there was no way historians were going to call it the “Colossal Shitfuck of 2016,” or “That Time When Everything Went To Hell”, they were going to pretty it up for the bright-faced students of the 2100s, because that’s what historians did, they took the shit of the previous generation and smothered it in glitter until it looked like victory. No one was going to win this. If there were bright-faced students of the 2100s, if there were children and grandparents and historians ever again, it wouldn’t be because humanity won. It would be because they lost so bad that everything else starved to death. “Tell me about the Great Plague,” they’d say, and the grandfather, too young to know anything about it first-hand but still sure in his perfect second-hand knowledge, would nod sagely and say, “Many great men died in those years. Many great men.” He would sniff, and sigh, and let out half of a grin, letting them know that there was another half to be expected when the joke was told. “Lots of idiots, too, though. 
You kids want to hear about my favorite idiot?” And the kids would nod, and grin their own half-grins, and the grandfather would push his glasses down his nose - or extend his cybernetic eyeballs or whatever the fuck future ancient bastards did to look more condescending - and say, “He wasn’t killed by the Butchers. He didn’t starve or commit suicide. He buried himself underground and he drowned in moths.” And then the kids would laugh, but Dom couldn’t picture that bit. Their mouths gaped and their eyes scrunched up dark, but he only heard the shuffle and growl and scream of the darkness outside.

He shook his head and stretched his arms upwards, wincing as the joints in his shoulders and elbows popped. The heel of his hand hit the dirt ceiling of his hide-out, knocking some sandy soil down into his hair. He cursed and glanced upwards. Grey, watery light filtered through the tiny hole he’d made between roots. He squinted at it. He listened hard, staying utterly still. He heard the shuffling lessen, heard the whispers fade, and then leaned down and carefully crossed off Three AM at the top of his page with his left hand, writing in instead Four Thirty. “Losing my touch.”

He shifted forward, reaching past his lantern to the small bucket sitting by his knee. Whistling tunelessly to himself, he scooped a handful of wet clay from the bucket and patched up his roof one-handed. His right hand stayed cupped in his lap, loosely holding his bundle of stained papers in shaking fingers. He squinted upward to examine his handiwork, then nodded. “Alright,” he said, “day 700 and something.” He blew out the lantern, and looked around the small earthen room at the skulls, lined against one wall, at the other wall, which was all woven roots and glimpses of muscle and skin. “Good morning.”

He crawled out through the lattice of roots that served as his front door to find that the dawn had left him a present. In a heap of fur and exposed spine at the base of his tree lay a rabbit, not a single bite missing and still warm. Died of fear, probably, and then got worked over for brain-stuff. The ears would’ve been frustrating, maybe delaying them enough until the sun drove them off. Dom cracked a smile, and the smile cracked his lip. He sucked at it immediately, careful not to lose a single drop of blood. This was the best fucking luck he’d had in weeks. Maybe he’d get Christmas dinner after all. He skinned the thing, holding its back legs in his teeth and pulling, pulling with his working hand, and hung it from the branch of his tree. It would be out of the reach of anything animal in these woods, and he’d be back long before he had to worry about anything else. He headed towards the river, his bucket dangling from his good hand, his papers stuck back in his belt. He didn’t hurry.
He was a great believer in the restorative power of stopping and smelling the roses. There wouldn’t be buds on the trees for months yet, but there was still something in the air that spoke to Dom of spring, or at least change. He liked to call it “wishful thinking”, the light sort of breeze that played with the too-long hair around his ears. If his calculations were correct, the longest night had been almost five days ago by now. The tide should be turning. Of course, that’s what he’d said last year, as well. But it’s been so cold, Wishful Thinking chimed again. The Butchers are self-destructive, they’ll burn themselves out. This should end.

3/13, Day 512
Dominic King

I keep thinking about using it. Maybe it’d work. Maybe it’d be one of the ones that worked. I bought it early, on a whim - before everything went so wrong - and everyone says the earlier ones are more likely to be real.

He shook his head at the old entry. It was before he’d started recording the time, back when he was using a real pen, not the stub that barely made marks on the paper anymore. It’d been one of the free pens Theo’d gotten from work, shittily made, with George Pryor Elementary School stamped into the side in blue. There’d been dozens of them, scattered across the floor of the apartment, knocked from Theo’s desk when he fell. Dom had never had the chance to clean them up, or the broken glass from the window, or the muddy footprints the Butchers left behind when he slammed the doors against them. He’d just slung Theo across his back and gotten the hell out. He’d found the pen later, tucked into his boyfriend’s pocket. Ever prepared, his Theo.

Would he laugh, if Dom used it, pulled out the little vial of bones and herbs and read the inscription, cheated fate and God and whatever the fuck else and brought him back? Would he look around at the cave Dom had carved out for himself and laugh and say “Not exactly the dream house you promised, baby”? Would he kneel and look at Dom’s hand and laugh and say “If only you’d become a doctor like your parents wanted”? “But I guess neither of us did what our parents wanted after all, did we”? “But I guess this world wasn’t what our parents thought, anyway”? How long would it take before the force of his laughter started pushing his teeth from his mouth?

Dom closed his journal. The dirt between his boots stirred with white wings.

written by PENNY WEBER


Bard Science Journal December 2013 Volume 3, No.1


Singularity vs. Apocalypse

The Fear and Excitement of Life’s Propensity for Itself by CHARLOTTE AMES


The posthuman era represents a time in which the lines between what is “machine” and what is “human” have suddenly been blurred, and we are forced to re-evaluate what separates us from what we create. Anthropologists, sociologists, ethnologists, humanists: everyone has been scrambling to re-evaluate what it means to be “human” with the development of artificial intelligence and upgrades to the body. Our entry into this period has produced several phenomena, each of which will be examined in a manner similar to how religion has traditionally been examined. The goal of this paper is to investigate how contemporary western society is dealing with the revision of its species’ definition by looking at discussions in academic work, pop culture, and news articles. The development of new technologies that change the way we view nature has in turn changed the way we view ourselves in nature and our role in the world. The repercussions of entering such an era can possibly be predicted by analyzing its immediate products in a classical manner.

The posthuman, a recent term coined from science fiction and futurist philosophy, has a variety of meanings, including the deconstruction and revision of the human condition and human nature by academics to adapt to current scientific knowledge of the mind and body, and the movement and ideology of eliminating aging and improving human capabilities with technology, called transhumanism. The posthuman seeks to break free from Renaissance humanism’s belief that human nature is autonomous and the apex of existence through recent advances in technoscientific knowledge. Focusing on the latter, transhumanism (cleverly abbreviated as h+ to represent the goal of being more than human) aspires to transcend the human race, to become something better than before.
In his “History of Transhumanist Thought,” Bostrom writes, “[They] emphasize the enormous potential for genuine improvements in human well-being and human flourishing that are attainable only via technological transformation”. Transhumanists believe that the secret to immortality, and to greater mental and physical power, is attainable through the development of technology. According to Hopkins this is low transhumanism, which “offers little more of an ideal than being a Greek god”, but there is a high transhumanism that is still in the process of being created, and is in search of the “Ultimate”. This group of thinkers relies on advancements in science and biotechnology to extend their lives and overcome the limitations of the human body. Although Hopkins sees this as little more than a desire to excel in the traits biologically used to pick the healthiest mate, and although some dismiss it as “superhumanism,” this enthusiasm for adapted bodies shows a willingness to manipulate the human condition, which we previously thought was uncontrollable. Something I’d also like to address in this examination of posthumanism is the development of artificial intelligence; we see countless examples in recent science news of the development

of “smarter” robots, ones that are programmed with the ability to learn from different sources of input from their environment. There are a few basic categories of objects that we interact with and that informed our sense of self in the humanist era: the body, inanimate objects, responsive objects (digital: one input gives one output), and social objects (humans, virtual networking). The issue with posthumanism is the blurring of the last two categories – responsive machines that can learn. Engineers and scientists are slowly but surely developing artificial intelligence that “learns” in the same way humans do. At the University of Hertfordshire, computer scientists are improving on a robot named DeeChee that learns to talk and communicate like an infant. It cannot yet think like a human, but technologically speaking the gap is one of processing power and of a greater scientific understanding of consciousness. The creation of DeeChee and the countless “learning” robots like it marks modern humanity’s attempt to understand what makes us human by trying to recreate it.

A philosophy called determinism has been gaining traction with the emergence of cognitive science; it holds that every human action (or natural event) can be predicted based on a series of neurochemical reactions to environmental factors. In the West this belief is often associated with Newtonian physics, which depicts all physical matter in the universe as operating under a fixed set of laws. Modern physics and the development of quantum mechanics have, however, eliminated the feasibility of this exact model of determinism – based mostly upon the Heisenberg uncertainty principle and the observer effect, in which the act of observing a system invariably changes its state.
But aside from recent developments in physics, which are not wholly understood in the more applied branches of science, a deterministic perception of society and nature caused by recent techno-scientific knowledge pushes humans and their social interactions into the third category as stated above, and is the source of most resistance to posthumanism, due to its suggestion that we are only machines. If “sheep are machines that turn grass into money,” what isn’t a machine? Something with choice? If we adopt determinism’s meaning, in which free will is an illusion, then “a human” becomes merely a machine designed to propagate itself with a series of chemical reactions. There has long been a struggle to distinguish between organism and machine, and posthumanism has arrived to put forward that they are one and the same.

If we look back in time to the Renaissance and Greek views of nature, we can see that with the progression of technology comes a different view of nature, and hence of our place in it. R.G. Collingwood writes, “Instead of being an organism, the natural world is a machine: a machine in the literal and proper sense of the word, an arrangement of bodily parts designed and put together and set going for a definite purpose by an intelligent mind outside itself. The Renaissance thinkers, like the Greeks, saw in the orderliness of


the natural world an expression of intelligence: but for the Greeks this intelligence was nature’s own intelligence, for the Renaissance thinkers it was the intelligence of something other than nature: the divine creator and ruler of nature.” If we consider the techno-scientific context of these two cultures, and reason that because the Renaissance occurred later chronologically it would have had more advanced technologies, we can make useful observations in relation to posthumanism today. As humans in the Renaissance were creating greater technologies, allowing them to observe and manipulate more of the natural world and their environment, they imposed this anthropocentricity onto nature itself and began to see themselves in this system as utilitarian creators, since nature was a system that had also been created with a “definite purpose”. Over time our technology has drastically increased, and so has our perception of nature as a machine, where each gear of the natural world becomes more visible with a more powerful microscope or a new piece of agricultural machinery.

But this ventures into psychology and philosophy in examining the lines between organism and machine. Returning to the questions raised by determinism and by improvements to the human condition as they relate to anthropology: posthumanism almost completely dismantles cultural anthropology from its base as the traditional definition of human is abandoned. If we are not autonomous, then what is intelligence? Modern cognitive science has raised more existential questions about consciousness than it has answered. One possible answer is that intelligence is a collection of information; Renaissance thinkers believed that the order of nature was only the intelligence of some divine creator. We have become the divine creator in this respect, if our observation and manipulation of nature through technology is collective intelligence.
There is concern and anxiety that we will create something smarter than ourselves, more human than human, but I will address this later. This collection of knowledge and the study of orderliness is an attempt to recreate the meaning of existence by inventing new “life” that is intelligent. Even in posthumanism, we watch humanity’s constant striving to create more of itself. While religion and transhumanism easily antagonize each other, both have arisen in response to the “deflationary account of human nature”. Several tropes drive both: the inherent need and search for meaning, strong devotion and placement of faith, creation and destruction, and the hope of an eternal life or transcendence. Stephen Garner writes, “In the contemporary world both transhumanism and Christianity offer visions of a better world. …Both are committed to social concerns, either directly or as a by-product of their distinctive emphases”. Both are products of society and offer these promises through greater understanding. The constant fear of the “rise of the machines” is evidenced by a myriad of science-fiction movies, and by groups like the Cambridge Project for Existential Risk, which warns of a future in which the AI robots we create become more powerful and lead to the overthrow and extinction of humans. This anxiety is oddly reminiscent of the classical religious archetype of the created overthrowing its creator. In religion we see parallels between the third book of Genesis and the Jewish legend of the Golem, and in posthumanism the fear that the creation of a sentient being will lead to destruction. In product-of-their-time novels like Frankenstein we



can also see the interplay of religion and science in the same way – the creation of life, and the destruction that follows at the hands of that creation, is sacred, even when toyed with by “unholy” technology. Although these cultural references to an old trope seem exaggerated, the fear humans have of losing dominance as a species is a very real concern, as we’ve been at the top of the metaphorical and literal food chain for a long time.

“The Singularity is coming,” is a strangely familiar chant repeated on online posthumanist forums, at science-fiction meetings, and by futurists with picket signs. The Singularity movement is a fairly recent movement of futurists preparing for the arrival of superintelligence and its effect on humanity as mortals become obsolete. This is not a superintelligence that arrives on Earth from somewhere foreign; rather, fairly soon (Kurzweil predicts within 40 years) humans will create a being with intelligence greater than their own, in their quest for meaning through self-emulation in artificial intelligence. The idea that an outside entity will someday control humanity is not new in any regard (see H. G. Wells’s “Empire of the Ants” for merely one example), but this is the first time we’ve taken the Prometheus role as a society and been in the lab trying to create something human-like in both appearance and thought. Anthropologically, the phrase “The end is nigh” is old news – most societies have some sort of end-of-the-world prediction in which humanity is overtaken and destroyed, and it seems like merely a counterpoint to our obsession with creation. However, not all Singulatarians see this event as a doomsday prophecy. In an early email list before the founding of the Singularity Institute for Artificial Intelligence, “discussions contained the duality of anxiety and exhilaration that still characterizes the field”.
There is extreme fear and excitement over the possibility of creating an autonomous, intelligent being that might become the dominant species: excitement for the salvation of the human race at the hands of a species we created that is smarter than us, and fear that that salvation may mean the elimination of humans. These predictions, and the reactions to them from both enthusiasts and laymen, are evocative of the hype surrounding the Christian Rapture, in the fear and excitement of transcendence and as an event horizon beyond which events cannot be predicted. Whether either is coming any time soon in the history of man is yet to be seen, but the comparison remains.

This recurring interest in destruction (and death) leads us to mankind’s innumerable attempts at achieving immortality. We see it in promises of an afterlife in religion, in the years of explorers’ lives devoted to the search for the truth behind the legendary Fountain of Youth, and once again in the transhumanist movement under the term “life extension,” which encompasses mind uploading, whole-brain emulation, and cryogenic preservation. There is a growing movement of people who believe that uploading consciousness is the future of immortality. This goal persists even though everything – from gas giants to ideologies to single-celled organisms – eventually dies. Creation and destruction are the only things we can observe outside our anthropocentric viewpoint as consistent laws of nature, yet the goal persists in a niche of the transhumanist movement. Cryonics reflects both the belief that science will discover the secret to immortality and an attachment to one’s body: using liquid nitrogen, a dead body is frozen, bringing chemical activity to a halt and preventing decomposition. Although a small group

in the posthumanism movement, they warrant a mention in technology’s quest for lengthened life. There is one other niche idea in the immortality of the self, and that is “mind uploading”. This is still in its very early stages, as neuroscience has not advanced enough to fully understand consciousness, but the theory is that in the future one could have the same mind in an upgraded body and live forever.

In his book, Bainbridge takes an extremely pessimistic approach to the abandonment of religion in western society, predicting extreme moral decay and the destruction of society. He writes, “Until about this point in history, humanity was ignorant and beset by insoluble problems. Religion assuaged fears, motivated sacrifice, and strengthened social solidarity. …However, now it has become an impediment to the full flowering of science and technology, and the transformation of humanity. …The industrial revolution launched humanity on a rapid flight toward an unseen destination, and contemporary society is probably not viable in the long run.” His point is a little confusing: he seems to think not only that with the advent of modern science we are no longer ignorant and can solve any problem that comes our way, but also that this is not a viable track for society in the long run. What he doesn’t seem to connect are the striking similarities between religion and technology. People put extreme amounts of faith in science to develop cures for sicknesses and resource shortages, scientists devote their lives to research for the benefit of humanity, and huge amounts of resources are put into monuments for scientific and technological development (e.g. chapels to particle colliders). Across the Secular Abyss posits four explanations of religion: the supernatural, the societal, the exchange, and the cognitive theory. Émile Durkheim’s societal theory of religion is one in which “society was sacred, and religion was its collective expression.
When people worshipped God, they actually adored society. Heaven was not a myth but a metaphor, because deceased people lived on through their contributions to society”. This analysis of religion as a human invention is in line with history’s technological advancements. The pursuit of greater technology and scientific progress contributes to the global system of knowledge. Religion is correct in that the creator is sacred, for humans are the creators of society, and in worshipping God they worship society’s products, namely technology as a collective functional base of intelligence.

Although I have specifically avoided talking about the conflict and tension between posthumanism and religion, the possibility of consilience lingers behind every comparison. Dorothy Nelkin writes about the budding confusion of science with religion, and the inherent spirituality behind science. Many, physicists most prominently, use religious language in discussions of modern discoveries and their implications. Obviously language does not mean science and religion are the same, or could be, but language remains one of the major means of informing culture, and culture is the product of the metaphysical act of reality testing. Shweder continues, “Given the central role of meaning as a causal factor for human beings, the restriction of scope of generalizations in the social sciences and humanities is thus entirely expectable”. If God is in fact dead, our societal replacement for Him is a recreation of religion where the creator-figure is once again ourselves, observing and manipulating nature with technology to become posthuman. I came across a perfect summary of this human tendency in a book about comics. Scott McCloud writes,

“We see ourselves in everything. We assign identities and emotions where none exist. And we make the world over in our image”. Alongside it he draws a car whose front grille resembles a human face and an expressionless “stick-figure” face: two emotionless objects through which we can see emotion portrayed even though there is none. Although he is not writing specifically about the posthuman movement, this still explains anthropocentricity as a byproduct of our having only our own experiences to draw on in the search for meaning through projection of the self. It also represents some of the fear in creating human-like robots – if they are able to interpret human emotion and mimic it, what stops them from manipulating us if they are engineered to have their own desires? Our relationship with technology, combined with this human-centric projection onto the universe, creates a definition of the natural world as a machine: a created universe that has the designed purpose of creating more of itself.

The repercussions of posthumanism are not, as Bainbridge and several other religious writers I came across in my research want to suggest, moral decay and an unsustainable social construct. Instead, posthumanism seems to be stepping in to fill the gap left by the slow disappearance of religion as a method of searching for meaning through observation. Spirituality in the form of existential questions shows no signs of waning, even without the backing of religion, as a method of searching for truth through internal reflection. We will continue to search for meaning through spirituality and posthumanism, consistently getting in our own way by altering the system while trying to observe it.

Works Cited

Bostrom, Nick. “A History of Transhumanist Thought.” Journal of Evolution and Technology, Vol. 14, Issue 1, April 2005.

Keim, Brandon. “Humanoid Robot Learns Language Like a Baby.” Wired Magazine.

Collingwood, Robin George. The Idea of Nature. Oxford University Press, 1960.
Hopkins, Patrick. “A History of Transhumanist Thought.” Journal of Evolution and Technology, Vol. 14, Issue 2, August 2005.

Garner, Stephen. “Transhumanism and Christian Social Concern.” Journal of Evolution and Technology, Vol. 14, Issue 2, August 2005.

“Artificial intelligence – can we keep it in the box?” The Conversation.

Farman, Abou. “Re-Enchantment Cosmologies: Mastery and Obsolescence in an Intelligent Universe.” Anthropological Quarterly, Vol. 85, No. 4, 2012.

Bainbridge, William Sims. Across the Secular Abyss: From Faith to Wisdom. Lexington Books, 2007.

Nelkin, Dorothy. “God Talk: Confusion between Science and Religion: Posthumous Essay.” Science, Technology, & Human Values, Vol. 29, No. 2 (Spring 2004), pp. 139-152.

Shweder, Richard A. “A Polytheistic Conception of the Sciences and the Virtues of Deep Variety.” Committee on Human Development, University of Chicago.






Special Relativity

An Overview of Albert Einstein’s Theory by ELI REGEN


While working in a Swiss patent office in 1905, Albert Einstein changed the world of physics by developing his Theory of Special Relativity, forever altering our understanding of time and space. Relativity rests on two postulates. The first is that the laws of physics are the same for all observers, which also means that there is no such thing as an absolute reference frame. The second is that all observers measure light moving at the same speed in a vacuum, no matter what their velocity. It’s important to clarify that light does slow down in some mediums, such as water, air, and glass, which makes it possible for particles to move faster than light within such a medium. But in a vacuum light always travels at a fixed speed that cannot be exceeded.

For purposes of illustration, some of the dynamics of Special Relativity can be imagined through a scenario in which an astronaut (Smith) pilots a spaceship past the Earth while an astronomer (Jones) observes the ship through a telescope on the ground. We can understand Einstein’s notion of time dilation by looking at their two different perspectives. From Smith’s point of view his ship is standing still while the Earth is moving away from him, but to Jones the Earth is standing still while the ship moves away from it. Both observations are correct, but only within their own reference frames. Now suppose there is a laser mounted on the bottom of the spaceship. When Smith fires it at a frozen lake, which acts as a mirror, he measures the amount of time it takes for the laser to return to his ship. From his perspective, the laser simply travels straight down and back up, so his measurement of the round trip will be twice the ship’s height above the lake, divided by the speed of light (notated in physics as the letter c). Jones will see something very different. Since he is observing a spaceship moving relative to his fixed position on Earth, the laser beam will follow a “V” shaped pattern before it returns to the ship, meaning the laser seemingly travels a far greater distance to get back to that ship. But since the speed of light is constant, Jones will measure more time passing than Smith will. This effect is called time dilation, and it becomes more pronounced the faster an object moves relative to another. As Smith’s ship accelerates and approaches the speed of light (about 300,000,000 meters per second, or 670,616,629 mph), time will slow down more and more; if he could move at the speed of light, time would stop completely. However, due to the first postulate, Smith perceives that he is standing still and the universe is moving around him. From his perspective, time is moving normally for him and has slowed down everywhere else in the universe. It is therefore possible for an astronaut to travel across entire galaxies in what would feel to him like minutes, while for observers back on Earth it would appear to take millions of years.

Another vital aspect of the Theory of Relativity is the notion of simultaneity. If we go back to Smith and Jones, imagine now that Smith is moving at a slow cruising speed during a storm while Jones is observing him. Suddenly, two lightning bolts strike the ship at the front and the back. To Jones, each lightning bolt seems to strike at exactly the same time, but from Smith’s point of view, the bolt that hit the front of his ship came before the one that hit the back of it. This is because Smith is moving toward the place where the bolt struck the front and away from where it hit the back. Another way to imagine this phenomenon is to place two stopwatches at the front and back of the ship, each of which starts ticking only when a lightning bolt strikes it. From Jones’s point of view, both watches start at the same time, but to Smith, the



illustrations by DONALD LONG

watch at the front of the ship will start before the one at the back of it. Thus, two events that occur at the same time for Jones will not be simultaneous for Smith.

Length contraction is another consequence of the relativity of simultaneity: the faster the spaceship travels, the shorter its length will become. If it moved at the speed of light, the ship would appear completely flat to Jones, but Smith wouldn’t notice anything different because everything on his ship would be contracted in the same way. From his reference frame, it is the rest of the universe that has shortened, not his spaceship. Therefore, if he traveled fast enough, the entire length of a galaxy would be shortened to that of a planet! This isn’t an optical illusion of some kind; it is a physical manifestation of relativity.

Relativistic mass is the component of relativity that causes the main divergences from Newton’s classical physics. Imagine now that Jones and Smith are each piloting their own spaceships, headed directly for one another at the same exact speed. They fire elastic projectiles at each other’s ships in such a way that the two collide, and the recoil causes them to bounce back to their owners. This is an example of conservation of momentum, another important aspect of relativity. Momentum is a moving object’s velocity (speed) multiplied by its mass. From Jones’s point of view, he is seemingly standing still while Smith is speeding toward him. Because of time dilation, time for Smith is moving more slowly, and Jones will see that Smith spends less time giving his ball a push. So Smith’s ball will be moving more slowly in Jones’s reference frame than in Smith’s. For the two projectiles to have the same momentum, Smith’s ball must be heavier. This is called relativistic mass: the idea that an object’s mass is greater when it’s moving than when it is still.
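The light-clock reasoning used above to derive time dilation can be checked numerically. The sketch below (in Python; the 3-metre height and the 80%-of-c speed are hypothetical values chosen only for illustration, not figures from the article) computes the Lorentz factor and compares the round-trip times Smith and Jones would measure:

```python
import math

C = 299_792_458.0  # speed of light in a vacuum, metres per second

def lorentz_factor(v):
    """Time-dilation factor gamma = 1 / sqrt(1 - v^2 / c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# Hypothetical illustration values: the ship flies 3 m above the frozen
# lake and moves at 80% of light speed relative to Jones.
height = 3.0
v = 0.8 * C

# Smith's (proper) time: the laser goes straight down and back up.
t_smith = 2.0 * height / C

# Jones sees the longer "V" path, so he measures a longer interval.
t_jones = lorentz_factor(v) * t_smith

print(f"gamma = {lorentz_factor(v):.4f}")
print(f"Smith: {t_smith:.3e} s, Jones: {t_jones:.3e} s")
```

At 80% of light speed the factor works out to exactly 5/3, so Jones measures the laser’s round trip taking two-thirds longer than Smith does.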
When an object moves faster and faster, it gains kinetic energy and its relativistic mass grows without bound as it nears light speed. This result is described by arguably the most famous equation in physics, E = mc². As with the other effects of relativity, this effect is negligible at speeds experienced on Earth: an object would have to move at about 26,000 miles per second for its mass to increase by just one percent. Herein also lies one of the reasons why it is impossible to travel at light speed. As an object moves faster, and its mass increases, it takes more and more energy to accelerate it. If it is moving at a large enough fraction of light speed, then most of the energy applied to it will only increase its mass rather than speed it up. So it would take an infinite amount of energy to accelerate to the speed of light.

Now, one might wonder about the consistency of time dilation. If Jones observes Smith traveling to Alpha Centauri, which is the closest star system to ours, then the astronaut will age more slowly than the astronomer back on Earth. But to Smith, he is standing still while the Earth is speeding away from him, so he sees time move more slowly for Jones. So if Smith returns to Earth, who will have aged more? This is called the twin paradox, which Einstein resolved with the equivalence principle. Simply stated, it says that the force caused by an accelerating ship produces effects equivalent to those caused by a gravitational field. A consequence of General Relativity is that time moves more slowly for observers at the bottom of a gravitational field, so a person hovering high above the ground with a jet pack would age slightly faster than if he were on the surface. This is gravitational time dilation. When Smith turns his ship around to return to Earth, he accelerates in a way that is equivalent to being at the bottom of a gravitational field. In his reference frame, the Earth is high above this field, and time goes by more quickly up there. When he returns to Earth, he finds that Jones has aged more than he has.

When Einstein made public his Theory of Relativity in 1905, it was like a Pandora’s Box of physics, raising as many questions as it answered. Since then, many great physicists have tried to either prove or disprove Einstein’s original theories. Others have sought to fill in holes that Einstein didn’t foresee in 1905.
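The 26,000-miles-per-second figure quoted earlier can be double-checked with a few lines of Python (a sketch; the function name is my own). Setting the Lorentz factor to 1 + fraction and solving for speed gives v = c·√(1 − 1/γ²):

```python
import math

C_MILES = 186_282.0  # speed of light, miles per second (approximate)

def speed_for_mass_increase(fraction):
    """Speed at which relativistic mass exceeds rest mass by `fraction`.

    From gamma = 1 + fraction and gamma = 1 / sqrt(1 - v^2 / c^2),
    solving for v gives v = c * sqrt(1 - 1 / gamma^2).
    """
    gamma = 1.0 + fraction
    return C_MILES * math.sqrt(1.0 - 1.0 / gamma ** 2)

v_one_percent = speed_for_mass_increase(0.01)
print(f"{v_one_percent:,.0f} miles per second")  # close to the quoted 26,000
```

A one-percent mass increase indeed requires roughly 26,000 miles per second, about fourteen percent of light speed.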
But what is truly remarkable is that the Theory of Relativity has, for the most part, been proved accurate, and at the same time it has sparked a broad exploration of our universe that will continue into the foreseeable future.

Implications of Special Relativity to Saint Augustine’s Theory of Time by MATTHEW DALRYMPLE


In this essay I will discuss Saint Augustine’s theory of time, described in Book XI of Confessions, as it relates to the theory of special relativity. Through this discussion I hope to show that Augustine’s theory of time is compatible with simultaneity under the theory of special relativity, although somewhat complicated by it. To do this, I will begin by describing simultaneity as understood through special relativity. Next, I will outline the portions of Augustine’s theory which I consider to be pertinent to simultaneity under special relativity. Lastly, I will discuss how the phenomena and predictions of special relativity complicate, or fail to complicate, Augustine’s theory of time.

One consequence of the theory of special relativity is that simultaneity is no longer objective. That is, according to special relativity, two events may be perceived as simultaneous by one observer yet not be simultaneous for another observer (Krane 39). Similarly, two events can be perceived as happening at the same time when their light rays arrive at the observer simultaneously. Whether an observer thinks that a set of events happened simultaneously or sequentially is a matter of the observer’s reference frame in spacetime, and not a matter of “when” the events occur (as traditionally conceived). If we accept that the laws of physics are the same for all reference frames and that light has the same velocity in all inertial reference frames, then we can no longer claim that separate events occurred simultaneously, other than from our perspective. Simultaneity, therefore, is just a matter of perspective.

For Augustine, the only time which can surely be said to exist is the instant of the present. Augustine applies a rational skepticism to the concepts of past and future and concludes that in order for something to exist, it must do so in the present moment.
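The frame-dependence of simultaneity described above can be made concrete with a short numerical sketch (Python; the two observers, the one-light-second separation, and the 0.5c relative speed are hypothetical values chosen only for illustration). Two events that share a time coordinate in one frame receive different time coordinates under the Lorentz transformation t′ = γ(t − vx/c²):

```python
import math

C = 299_792_458.0  # speed of light, metres per second

def t_prime(t, x, v):
    """Time coordinate of event (t, x) seen from a frame moving at v,
    via the Lorentz transformation t' = gamma * (t - v * x / c^2)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * (t - v * x / C ** 2)

v = 0.5 * C               # relative speed of Observer B's frame
event_back = (0.0, 0.0)   # (t, x) in Observer A's frame
event_front = (0.0, C)    # simultaneous for A, one light-second away

dt = t_prime(*event_front, v) - t_prime(*event_back, v)
print(f"Separation for Observer B: {dt:.3f} s")  # nonzero: not simultaneous
```

For Observer A both events occur at t = 0; for Observer B they are separated by more than half a second, which is exactly the subjectivity of simultaneity at issue here.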
Augustine asks, “Of these three divisions of time, then, how can two, the past and the future, be, when the past no longer is and the future is not yet?” Here, Augustine is pointing out that neither the past nor future can be directly perceived and thus do not exist. Next, Augustine narrows the present down to a moment, or point, in time. And lastly, Augustine comments that memories of the past and expectations of the future can exist, but only within the subject. For Augustine, only the instant of the present can be said to exist. Augustine’s presentist ontology is complicated by special relativity’s conclusion that simultaneity is subjective. As described above, Augustine believes that only things which are in the present moment can be said to exist, however, this view becomes more complicated when we consider that the present moment is dependent on the subject’s reference frame. If an event is observed

in the “present moment” for one person, Observer A, and is not observed in the “present moment” for another, Observer B, Augustine’s theory must claim that the event exists only for Observer A and literally does not exist for Observer B. This clashes with our intuitive response that the event exists for both observers but has either not yet been, or was already, perceived by Observer B. When considered alongside special relativity, it becomes apparent that Augustine’s ontology is entirely dependent upon the subject. The fact that perception is neither instantaneous nor uniform across observers complicates Augustine’s theory.

Augustine’s eternalist ontology, time as “perceived” by God, employed alongside a presentist account of perception, can resolve the complications which arise from subjective simultaneity. If we consider Augustine’s presentist theory of time to be only a theory of perception, and his eternalist theory of “time” to be his ontology, we can account for subjective simultaneity without any strain. Throughout his exposition of time, Augustine posits that God has an eternalist perspective of time. Augustine writes, “[I]n eternity nothing moves into the past: all is present.” According to Augustine, God perceives everything in the present moment, and thus all things always exist externally as they are perceived by God. With this alternate ontology we are able to relieve Augustine’s more anthropocentric presentist ontology and claim that external objects exist without human perception. This allows Augustine to interpret the case of Observer A and Observer B such that Observer A simply perceives the event at a different time than Observer B, while the event always exists in God’s eternity. By employing an eternalist ontology we avoid the conclusion that events may exist for one person while not for another. Overall, Augustine’s theory of time is mostly congruous with special relativity’s notion of simultaneity. 
We have seen that simultaneity, as understood through special relativity, causes complications for Augustine’s presentist ontology but is reconcilable by way of his eternalist ontology. Even with the conclusions of special relativity, Augustine continues to provide insight into how we perceive and experience the world, more than 1,500 years later.

For Augustine, the only time which can surely be said to exist is the instant of the present.

Works Cited

Augustine, Saint. Confessions. Trans. R. S. Pine-Coffin. London: Penguin Books, 1961.
Carroll, Sean. From Eternity to Here: The Quest for the Ultimate Theory of Time. New York: Dutton, 2009.
Krane, Kenneth. Modern Physics. New York: John Wiley & Sons, Inc., 1996.

Bard Science Journal December 2013 Volume 3, No.1



Examining Cross Cultural Reason and Evidence by ELYSE NEUBAUER


Co-sleeping, when the parent(s) and child share a sleeping space, is practiced in many parts of the world. The United States is one place where parents do not widely practice bed-sharing, although room-sharing is sometimes used with small infants. Some parents feel that their children will not grow up to be competent and independent adults if they are not expected to go to sleep on their own (Morelli, 1992). However, research indicates that co-sleeping is not so deleterious to children’s development (Raykeil; Welsh; Barajas, 2011; Simard, 2008). The most compelling reason not to practice bed-sharing is that it increases the risk of Sudden Infant Death Syndrome (SIDS) (AAP, 2011); this risk, however, seems to be restricted to infants under 12 months. Cultural preference seems to dictate whether parents will practice co-sleeping or not. However, after taking certain information into account, an emphasis on co-sleeping might indeed be beneficial for both interdependent and independent cultures.

Contrary to popular belief, co-sleeping is not actually harmful to a child’s development. An article from TIME magazine summarizes results from a 2008 research study, concluding that co-sleeping (especially bed-sharing) disrupts sleep and leads to a series of cognitive and behavioral problems (Sharples). Yet this article misrepresents the results of the study. Simard did find increases in bad dreams, decreases in total sleeping time, and increases in the amount of time it took to fall asleep when mothers exhibited behaviors like co-sleeping and feeding the child during the night if the child woke up. However, the maladaptive patterns linking co-sleeping with bad dreams and loss of total sleep time disappeared when the researchers controlled for other variables. Bad dreams seemed to be caused by anxieties within the child and disruptions of sleep, including feeding in the middle of the night and trying to comfort the child while not in bed with them. 
Loss of sleep time was more likely caused by strained mother/child relationships. Co-sleeping with children after the child had woken up did produce a delay in falling asleep of about 15 minutes or more. However, the mother’s presence as the child initially fell asleep helped the child fall asleep faster (Simard, 2008).

Another study, summarized in an article from Livescience.com, coincides nicely with the reported results of Simard (2008). The study found that cognitive and behavioral deficits occurring in 5-year-olds were due to outside factors not related to co-sleeping during younger years, and concluded that co-sleeping between the ages of 1 and 3 did not result in negative outcomes at 5 years old (Welsh; Barajas, 2011). Most of the parents in the Simard study did not practice any form of co-sleeping. Although not analyzed in the study, it is possible that the maladaptive night-waking rituals are correlated with a parenting style that actually

does not include co-sleeping.

Although co-sleeping does not seem to be responsible for the negative long-term effects mentioned above, it is not without its dangers. The AAP’s official stance on co-sleeping is that it increases the risk of SIDS (AAP, 2011). The Barajas study directly addressed this: summed up in Welsh’s article, Barajas (2011) notes that the AAP’s warning pertains to infants under 1, whereas the Barajas study looked at children from 1 to 3 years of age. Bearing this in mind, bed-sharing does seem to be hazardous for newborns. Parents are not as aware of their bodies during sleep as they think they are, so it is possible to obstruct a newborn’s airway, overheat the infant, or allow the infant to become otherwise entrapped so that the child stops breathing (AAP, 2011). There is also the danger of rolling onto an infant, although this may be rare and is mainly due to overly soft mattresses and intoxicated sleeping partners (Berk, 2010, p. 99). Nevertheless, bed-sharing appears to be more dangerous for an infant than for a 1- to 3-year-old.

Bed-sharing may be hazardous to an infant, but co-sleeping, which includes both bed- and room-sharing, may create a tighter emotional bond between parents and a baby when such hazards are guarded against. In a study that asked Mayan and U.S. parents about their sleeping behaviors and the rationale behind them, Mayan families’ rationale for co-sleeping was that it fostered emotional closeness. In fact, the idea of having a child sleep alone was met with shock and pity for the child (Morelli, 1992). In a recent, informal interview with a U.S. mother who raised two infants, the concept of emotional well-being came up multiple times. Even though she made sure that her children had their own space between 1 and 3 years of age, she did not discourage co-sleeping if the child expressed a need for it. 
She indicated that her decision to listen to the child’s needs in this way made her feel that she was taking care of her “children’s emotional health, letting them feel safe and secure.”

On the other side of the argument, parents living in independent cultures (i.e., the U.S. parents interviewed in Morelli, 1992) stress that teaching a child to self-soothe is extremely important in fostering emotional strength and self-reliance. Some children in this study, though not the majority, were expected to sleep in their own room as soon as they came home from the hospital. By 3 months old, most parents discouraged co-sleeping altogether, with 58% of the babies expected to sleep in their own room (Morelli, 1992). The interviewed mother mentioned above had some interesting theories about how co-sleeping may affect independence: she noted that her oldest child had grown up to be extremely self-reliant, with great personal value placed on independence. She believes that her parenting had actually strengthened her child’s

Contrary to popular belief, co-sleeping is not harmful for a child’s development.



sense of security and self. This viewpoint is consistent with attachment theory: children who have a secure emotional attachment to their parents are more confident during play and less likely to have emotional problems, like anxiety, later in life. Whether proper co-sleeping practices can actually promote healthy independence later in life is a question that should be researched empirically; the idea is nevertheless compelling.

One last issue that may prompt parents to choose separate beds over co-sleeping centers on privacy. Parental privacy may be a more important factor in this decision within the U.S., considering how highly Americans value independence. The interviewed mother noted that privacy is important to her, saying, “I definitely ask for that [privacy] if I need it... I am there for my kids, but I need my own privacy.” In one article, the potential loss of a sex life was brought up by a mother who decided to co-sleep. She states that this is a real issue for some families who practice co-sleeping, and maintains that if the parents do not want to give up either sex or co-sleeping, they must adapt (i.e., adjust their expectations of when and where they will be intimate). The privacy issue, although a genuine concern, is one that can be remedied with a little adaptation to circumstances.

In sum, the most compelling reasons to choose either co-sleeping or separate sleeping arrangements lie in avoiding safety issues and embracing cultural values. Considering the results of research studies (Barajas, 2011; Simard, 2008) and the warnings about infant bed-sharing (AAP, 2011), the risk of harm is much higher when bed-sharing with an infant. Parents who practice co-sleeping should engage in room-sharing for the first year of life and move on to bed-sharing only once the baby reaches 1 year of age. Culturally speaking, interdependent societies that value social interconnectivity feel that co-sleeping fosters healthy emotional closeness. Independent societies, on the other hand, feel that co-sleeping greatly weakens autonomy and self-reliance. If co-sleeping does in fact strengthen the attachment bond by making the baby feel more emotionally close, it may be that co-sleeping is actually a protective factor for later independence. If this is the case, and considering there are no serious cognitive or behavioral deficits caused by co-sleeping, there should not be such large cultural gaps in decisions about co-sleeping. As long as safety concerns are taken into account, co-sleeping may have the potential to foster important cultural values in both collaborative and individualistic cultures.

References

American Academy of Pediatrics. (2011). SIDS and Other Sleep-Related Infant Deaths: Expansion of Recommendations for a Safe Infant Sleeping Environment. Published online October 17, 2011. doi:10.1542/peds.2011-2285
Barajas, G. B., Martin, A. M., Brooks-Gunn, J., & Hale, L. (2011). Mother-Child Bed-sharing in Toddlerhood and Cognitive and Behavioral Outcomes. American Academy of Pediatrics. doi:10.1542/peds.2010-3300
Berk, L. E. (2010). Exploring Lifespan Development. Boston, MA: Allyn & Bacon. 
Morelli, G. A., Oppenheim, D., Rogoff, B., & Goldsmith, D. (1992). Cultural Variation in Infants’ Sleeping Arrangements: Questions of Independence. Developmental Psychology, 28(4), 604-613.
Simard, V., Nielsen, T. A., Tremblay, R. E., Boivin, M., & Montplaisir, J. Y. (2008). Longitudinal Study of Preschool Sleep Disturbance: The Predictive Role of Maladaptive Parental Behaviors, Early Sleep Problems, and Child/Mother Psychological Factors. Arch Pediatr Adolesc Med, 162(4), 360-367. doi:10.1001/archpedi.162.4.360
Sharples, T. (2008, Apr. 08). How Not to Get Baby to Sleep. Time.
Raykeil, H. Three in the Bed. Parenting. Retrieved from http://
Welsh, J. (2011, July 18). It’s OK to Share a Bed with Your Toddler, Study Finds. Retrieved from

ALEXIA MOTAL






stem cells


flowing frictionless above a wide-range track
fishes floating penniless
cannot get to nantes
he who uses his hands to speak trivialities
speaks salty while his tongue drips vitals
contemptuous
his fingers pointed down, their tips
pointed like fountain pens, inward:
inward pointing and grumbling
and muttering wordless personality
each neuron firing an electrical pulse
that merely makes them twitch and bend uselessly
their nails scrape lightly upon the throat
progressive but twitching
fleshes mutter trivialities nonetheless to the bone
and there a message left for none to read
for knowledge spoken judiciously
for god himself is but a single body
with us contained in cells within



The Emergence and Migration of Tuberculosis by SAM OSBORN


There were 5,000,000 reported cases of tuberculosis in 2006. With a third of the world’s population believed to be infected, tuberculosis is one of the most feared diseases in human history (Sreevatsan 1997). Likewise, it is one of the oldest diseases in human history (Gutierrez 2005). The telltale bone and flesh lesions caused by advanced stages of tuberculosis infection have drawn a vivid red line through human history, a record that can be easily surveyed through archaeology.

For years, the model of emergence has pointed to changes in human settlement patterns and bovine domestication as the causative factors. With the domestication of cows, agricultural herds began to congregate in the close proximity in which tuberculosis thrives (Mays 2001). Additionally, humans were beginning to settle in denser populations, and in close proximity to cattle. The assumption, prior to recent genetic research, was that M. bovis spilled over into human populations, and from it evolved modern M. tuberculosis, M. canettii, and M. africanum (Stead 1995; Hare 1967). However, with recent advances in genetics, and new discoveries in archaeology, a more modern model needs to be constructed. New work done to monitor, classify, and respond to emerging diseases makes it possible to analyze tuberculosis, using clues from paleopathology, as an emerging disease.

Tuberculosis is caused by five major mycobacteria comprising the tuberculosis complex. M. tuberculosis is the most common among humans; it, M. africanum, and M. canettii are exclusive to human hosts. M. microti, a close relative of M. tuberculosis, is exclusive to rodent populations (Brosch 2002). M. bovis infects a wide range of hosts, including humans, bovines, oryx, goats, and seals (Brosch 2002). Tuberculosis complex mycobacteria are highly aerobic, acid-fast bacilli that operate as intracellular pathogens, able to survive and replicate within a macrophage (Bauman 2011; Clarke 1987). 
Colonies of mycobacteria form foci on lymph nodes within the lung. These foci burst, forming a bacteria-filled lesion in the lung wall and allowing for pathogen transmission through coughing (Clarke 1987). Cell-mediated immunity can be acquired after two weeks, at which point macrophages can attack the lesions formed in the lung. Patients surviving this primary infection are at risk of developing a post-primary infection from dormant bacteria in the lungs. Such post-primary infections can result in extrapulmonary symptoms, including Pott disease, an infection of the spine, and the rib and bone lesions accessible to archaeologists (Clarke 1987).

Symptomatic clues, like these bone lesions, alert researchers to possible mycobacterial presence. However, to truly trace the proliferation of tuberculosis around the planet, genetic analysis is needed. In situ human remains offer an accurate date, either through specific cultural phases

or radiocarbon dating. Phylogenetic analysis builds accurate trees of progression from one morphology to the next, adding order to an otherwise scrambled archaeological record. Fitting DNA retrieved from ancient sources into evolutionary pathways built from analysis of extant mycobacteria allows for more accurate renderings of pathogen proliferation.
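The radiocarbon dates cited throughout the paper rest on the standard decay relation, a textbook formula not derived in the article: measuring the surviving fraction of carbon-14 in organic remains gives the age directly.

```latex
% Radiocarbon age from the surviving fraction N/N_0 of ^{14}C,
% with half-life t_{1/2} \approx 5{,}730 years:
t = \frac{t_{1/2}}{\ln 2}\,\ln\!\frac{N_0}{N}
```

A sample retaining one eighth of its original carbon-14, for example, is three half-lives old, roughly 17,200 years, which is the order of magnitude of the Natural Trap Cave bison date discussed below.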

Evolutionary Pathways

Early genetic analysis of the tuberculosis-causing mycobacteria began with Sreevatsan et al. in 1997, who analyzed chromosomal heterogeneity based on restriction fragment length polymorphisms (RFLPs). RFLPs measure chromosomal heterogeneity by tracking mobile insertion elements. As species diverge, chromosomal heterogeneity increases, so by tracking genetic diversity within a community it is possible to trace its oldest member. Allelic diversity, based on known mutation frequencies for mycobacteria, suggests that M. tuberculosis encountered a population diversity bottleneck 15,000-20,000 years ago (Sreevatsan 1997).

With recent advances in genetics and archaeology, a more modern model of tuberculosis spread needs to be constructed.

Work done in 2005 by Gutierrez et al. confirms a bottleneck, but pushes the date back to as far as 35,000 years ago. Bacterial substitution rates are calculated at 0.0044-0.0047 per site per million years. This rate of change allowed Gutierrez to analyze the entire tubercle bacilli phylogeny, including both the smooth tubercle bacilli and the mycobacteria complex. Their analysis of bacilli heterogeneity shows that 2.6-2.8 million years are needed to separate M. tuberculosis from the smooth bacilli, which suggests a progenitor of tuberculosis could be as old as 3 million years. Gutierrez speculates that it might have affected early hominids in Africa, making it one of the oldest plagues known to man (Gutierrez et al. 2005).

Brosch et al. in 2002 used regions of difference in mycobacterial genomes to chart genomic additions and subtractions. The team chose specific genes and observed how they were represented within extant tubercle bacilli; the presence or absence of a chromosomal event allows for placement on an evolutionary pathway. For instance, modern M. tuberculosis is denoted as such by the absence of “M. tuberculosis specific deletion 1” (TbD1). Three regions are used frequently in the literature to test mycobacterial lineage: absence of TbD1 (indicates modern M. tuberculosis), presence of IS6110 (indicates mycobacteria), and presence of RD2 (indicates M. bovis). By testing a multitude of regions, ranging from housekeeping genes to IS sequences, they were able to construct a confident phylogenetic tree that included M. canettii, M. tuberculosis, M. africanum, M. microti, and M. bovis. However, since all this work is genetic, only relative dating through specific polymorphic changes can be used; there is no way to attach fixed dates using only genetic analysis of extant strains. They concluded, from numerous polymorphisms in housekeeping genes, that M. canettii split from a common ancestor first. Housekeeping gene polymorphisms represent extreme heterogeneity, suggesting that M. canettii might have split before the Sreevatsan bottleneck of 15,000-20,000 ybp (Figure 1). Next, a branch including M. africanum, M. microti, and M. bovis cleaved off. As M. bovis was diversifying to infect its numerous host types, M. tuberculosis was reaching its modern forms (indicated by the loss of the TbD1 region), diversifying from a common ancestor, first with M. bovis, then with M. canettii (Brosch 2002) (Fig. 1).

While these models chart evolution with precision, they lack any indicators of time or space. To investigate the emergence of tuberculosis in human populations, paleopathologists must turn to the remains of its victims. It is through archaeology that emergence epidemiology can be performed.
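The molecular-clock arithmetic behind the Gutierrez et al. estimate can be sketched as follows. The substitution rates are the ones quoted above; the example per-site divergence of 2.5% is a hypothetical figure chosen purely for illustration, not a value reported in the paper.

```python
# Molecular-clock sketch: divergence time between two lineages from a
# per-site substitution rate. Substitutions accumulate on BOTH branches
# after a split, so total divergence grows at 2 * rate.
def divergence_time_myr(per_site_divergence, rate):
    """Time since the split, in millions of years.

    rate is in substitutions per site per million years.
    """
    return per_site_divergence / (2 * rate)

# Rates quoted by Gutierrez et al.; 0.025 divergence is hypothetical.
t_slow = divergence_time_myr(0.025, 0.0044)  # slower clock -> older split
t_fast = divergence_time_myr(0.025, 0.0047)  # faster clock -> younger split
print(f"{t_fast:.1f}-{t_slow:.1f} Myr")
```

With these hypothetical numbers the two rates bracket a split at roughly 2.7-2.8 million years, on the order of the 2.6-2.8 million years the paper reports for the separation of M. tuberculosis from the smooth bacilli.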

Figure 1: Scheme of the proposed evolutionary pathways of tubercle bacilli, from Brosch et al. Grey boxes denote the loss of specific DNA sequences. Added are two events discussed in the literature that allow dates to be attached to the genetic map: (1) the diversity bottleneck, c. 20,000 ybp (Sreevatsan 1997), and (2) the Wyoming bison, whose mycobacterial DNA is 82.3% similar to M. africanum, 76.6% similar to M. tuberculosis, and 72.7% similar to M. bovis (Rothschild 2001); the bison radiocarbon dates to 17,870 ± 230 ybp.

Emergence in the Old World

The most common model of tuberculosis emergence suggests that increased contact with bovines during the domestication of cattle allowed tuberculosis complex mycobacteria to jump from bovines to humans (Mays 2000; Stead 1995; Hare 1967). This, combined with larger human populations living at higher density during the Neolithic Agricultural Revolution, allowed for the survival and proliferation of tuberculosis among humans (Diamond 2005). Often the mycobacterium attributed to the jump is M. bovis, but recent research has shown it to be a contemporary of M. tuberculosis, not its ancestor. Thus, a new model of Old World emergence should be considered.

According to the archaeological record (which is far from perfect), patient zero for M. tuberculosis infection was a woman and her infant child who died together in a Neolithic settlement called Atlit-Yam on the coast of Israel. Radiocarbon dates from Atlit-Yam place it between 9250 and 8160 years before present, during the late phase of the Pre-Pottery Neolithic C period. The rich floral and faunal remains at Atlit-Yam allowed for a reconstruction of food consumption, showing that cattle comprised 43% of the diet (goat and pig were also present). The skeletal remains were genetically examined at different centers to ensure reliable results. PCR primers targeting M. tuberculosis specific sites, including sites flanking TbD1, were used to confirm the infection. By showing that the inner and outer flanks of TbD1 were adjacent, the researchers confirmed that this lineage had a TbD1 deletion. Additionally, cell-wall lipid biomarkers unique to M. tuberculosis were assayed for and located (Hershkovitz 2008). These results confirm that modern M. tuberculosis was the cause of infection. The fact that M. tuberculosis, a human-exclusive lineage, was the cause, and not an ancestral M. bovis, suggests that these Neolithic transition sites were not transmitting a bovine tuberculosis. Cattle, however, might still have played a role as an amplifying host.

Cattle seem to unite incidences of prehistoric TB. The next case of infection comes from Neolithic cattle farmers in Halberstadt, Derenburg, and Karsdorf, in central Germany. These settlements all date to the Linear Pottery Culture, around 5,000 BC. Burials among these settlements show macroscopic evidence of bone lesions symptomatic of post-primary tuberculosis infection. Lesioned bones were analyzed for IS6110, TbD1, and RD9 (common to M. africanum, M. microti, and M. bovis but not M. tuberculosis). The researchers found that the mycobacterial DNA all carried the IS6110 region. However, many samples were positive for TbD1, and one carried both TbD1 and RD9 (Nicklisch 2012). This suggests that the tuberculosis infection in these German settlements was a common ancestor to modern tuberculosis and the africanum-bovis branch, making the pathogen very similar to that found in the Wyoming bison discussed later (Figure 1).

Research on Medieval burials in England reveals a similar link between tuberculosis and cattle. Exhumation of a cemetery in the rural agrarian community of Wharram Percy revealed 687 articulated human skeletons ranging from the 10th to the 16th century. In addition to post-primary bone lesions, skeletons suffered from the exaggerated spine curvature symptomatic of Pott disease. DNA analysis of nine graves suspected of tuberculosis infection revealed the presence of the mycobacterial IS6110 marker. Additionally, the team screened for M. bovis using a PCR site for RD7, and found no positive matches. This study is held as a prime example of the impact of cattle populations on tuberculosis presence: in addition to being an important part of the site’s economy, archaeological evidence shows that humans and cattle might have lived together in extreme proximity in the same longhouses. However, three animal bone samples from the site were analyzed and none revealed the IS6110 marker. This suggests that while the humans of Wharram Percy suffered from tuberculosis, their close cattle neighbors did not (Mays 2001).
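The marker logic used in these studies amounts to a small decision tree. The sketch below follows the regions as they are described in this article (IS6110 for the tuberculosis complex, TbD1 absence for modern M. tuberculosis, RD9 for the africanum/microti/bovis branch); it is an illustration of the reasoning, not a published typing assay.

```python
# Sketch of ancient-DNA lineage reasoning from region-of-difference
# markers, as the regions are described in this article. The input is
# the set of regions detected in a bone sample.
def interpret_markers(detected):
    if "IS6110" not in detected:
        return "no evidence of tuberculosis complex"
    if "TbD1" not in detected:
        return "modern M. tuberculosis (TbD1 deleted)"
    if "RD9" in detected:
        # e.g. the German sample carrying both TbD1 and RD9
        return "ancestral strain predating the tuberculosis/africanum-bovis split"
    return "TbD1-intact lineage, RD9 not detected"

print(interpret_markers({"IS6110", "TbD1", "RD9"}))
```

Run on the Atlit-Yam result (IS6110 present, TbD1 absent), the tree returns the modern M. tuberculosis branch, matching the paper’s conclusion; the German sample with both TbD1 and RD9 falls on the ancestral branch.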

Emergence in the New World

The first known case of tuberculosis comes from an extinct Pleistocene long-horned bison that suffered from bone lesions. It died 17,870 ± 230 years ago, when it fell into the Natural Trap Cave in Wyoming. There, 30 meters underground, the semiarid conditions and constant low temperatures preserved the DNA in its metacarpal. A spoligotype analysis of the PCR-amplified DNA showed that

the mycobacteria responsible for these lesions were 82.3% similar to M. africanum, 76.6% similar to M. tuberculosis, 72.7% similar to M. bovis, and only 17.4% similar to M. microti (Rothschild 2001). This supports Brosch’s claim that M. bovis was not the ancestral form of modern M. tuberculosis. It also adds a critical date to the phylogenetic tree built by Brosch et al. (Figure 1). This Pleistocene strain represents the branching of M. africanum and M. tuberculosis (as it sits on the africanum side). If Sreevatsan’s bottleneck theory is correct, this Pleistocene infection, and therefore the splitting of tuberculosis and africanum, happened just after the tubercle bacilli population burst at the recent end of the population constriction.

Additionally, this provides tuberculosis with a path to the New World. 17,000 years ago, the Beringian land bridge was still intact and acted as a conduit for a variety of species from Asia to America. The question is: who was the host species, humans or bovids? Spillover is often used to describe movement from animals to humans, but it is possible that humans brought tuberculosis to the New World with them, infecting bovids there. However possible, this seems unlikely for several reasons. First, 17,000 years before present is about 5,000 years before the first confirmed human occupation of North America (c. 11,500 ybp); this date is much debated, and most scholars acknowledge that human movement across Beringia must have occurred before 13,000 ybp if humans could have settled Chile by 12,500 ybp (the oldest human settlement in the New World) (Mithen 2003). Second, tuberculosis requires close proximity to be transmitted. Bison hunters in North America

Figure 2: Map of the major events discussed in this paper. White lines show key places in the settlement of the Americas. Black lines show major tuberculosis events in the archaeological record. (1) Beringian land bridge, c. 25,000 ybp. (2) Wyoming bison, c. 17,000 ybp (Rothschild 2001). (3) Monte Verde, first confirmed human settlement in the New World, c. 12,000 ybp (Mithen 2003). (4) Atlit-Yam, c. 9,000 ybp (Hershkovitz 2008). (5) Neolithic central Germany, c. 7,000 ybp (Nicklisch 2012). (6) South America, 1050 BC-1600 AD (Prat 2002). (7) North and Meso-America, 150 BC-1875 AD (Prat 2002). (8) Wharram Percy, c. 1000 AD (Mays 2001). (9) Possible bone lesions on H. erectus, no genetic evidence, up to 1 million years old (Kappelman 2008).



could easily inhale infected sputum during a butchering, but wild bison could hardly have acquired an infection from humans. This means that tuberculosis could have jumped into human populations twice: once in the Old World through cattle domestication, and again in the New World, into a naive human population, from bison environmental reservoirs already hosting a population of mycobacteria. The herd structure of bison could allow for the persistence of the disease, and sick animals would make easier targets for hunters, increasing the chances of transmission through butchering. Alternatively, tuberculosis could have been endemic in the humans that crossed Beringia. Both human and bovine tuberculosis might have originated in the Old World and then moved, independently of each other, to America.

One final hypothesis should be considered. Beringia functions as a geographic bottleneck, and it is possible that the genetic bottleneck observed by Sreevatsan corresponds with mycobacteria crossing such a land bridge. A genetic bottleneck represents a population contraction, decreasing chromosomal heterogeneity. Such a contraction is evidence of species hardship (death rates exceeding birth rates, resulting in a population of “survivors” who represent the extent of genetic diversity), and thus of increased pressure to expand one’s range. Fifteen to twenty thousand years ago the tuberculosis complex was struggling and diversity was at a minimum; then something happened that allowed it to flourish and diversify into four mycobacterial species and across many hosts. Could this event have been the crossing of the Bering Strait? If so, which side of Beringia represents population contraction, and which represents population expansion? The Wyoming bison acts as evidence of an environmental reservoir in the New World, presumably predating the arrival of humans. 
However, it is unclear whether tuberculosis spilled over from the limited bovid populations, or whether humans were able to bring tuberculosis with them as they crossed the Bering Strait. To answer this question, we must again turn to archaeological evidence to perform emergence epidemiology.

Evidence of American tuberculosis comes largely from macroscopic bone lesions and spine deformation indicative of post-primary infections. Many of the surveys lack the comprehensive molecular analysis needed to determine which lineage of mycobacteria caused the infection. However, the lack of bovine host populations is itself cause for concern, and much of the literature moves toward assembling the hypotheses that could explain tuberculosis presence in the New World. Fundamentally, two options exist: the pathogen was brought by prehistoric hunters from Eurasia to the New World, in one of many confirmed migrations, and kept at low endemic levels, or the bacteria were acquired from environmental reservoirs in the New World (Gomez et al. 2003). Gomez et al. cite several possible native reservoirs: buffalo, wild cats, and dogs can all carry M. bovis and M. tuberculosis, and turkeys can carry M. avium. Yet the pathogen would still need to travel from its place of origin to these hosts, so a conduit host is needed. The human is a great candidate: there is archaeological evidence confirming numerous migrations across Beringia (Mithen 2003); humans can carry the pathogen in the lungs, spreading it through aerial transmission; and survivors of the primary infection can experience a recurrence years later. However, it is unknown whether tuberculosis can survive within a small band of moving humans. If a migratory group could maintain an infection, especially within

the lungs of a survivor of primary infection, the disease could easily spread across the world, independent of bovine or other environmental reservoirs. Even if buffalo were a reservoir, allowing for spillover into North American hunters, the established presence of mycobacteria in South America demands a delivery system from North to South America. Archaeological evidence confirms trade between the American peoples. The fact that tuberculosis could move across the Americas means that it can operate independently of bovids, contrary to the story told by the Old World.
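The link between a population contraction and reduced heterogeneity, invoked in the bottleneck argument above, can be shown with a toy simulation. This sketch is my own illustration, not an analysis from any of the cited papers: survivors of a bottleneck are a random subsample, so rare variants are easily lost for good.

```python
import random

# Toy bottleneck: a large population carrying many allele variants is
# reduced to a handful of random survivors; count distinct variants
# before and after to see diversity collapse.
def surviving_variants(population, n_survivors, seed=0):
    rng = random.Random(seed)  # fixed seed for a reproducible example
    return set(rng.sample(population, n_survivors))

# 10,000 bacteria carrying 50 distinct allele variants.
population = [f"allele_{i % 50}" for i in range(10_000)]
before = len(set(population))                    # 50 variants pre-bottleneck
after = len(surviving_variants(population, 20))  # at most 20 can survive
print(before, after)
```

However severe the contraction, the surviving diversity can never exceed the number of survivors, which is why Sreevatsan’s low allelic diversity reads as the signature of a past contraction.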

The process of zoonotic disease emergence

Tuberculosis has often been discussed historically. In Jared Diamond's Guns, Germs, and Steel, TB is a keystone antagonist. As an unwanted gift of domestication, tuberculosis has been given a simple history: the Agricultural Revolution brought high densities of people close to high densities of bovines, and the pathogen jumped. However, recent outbreaks of zoonotic diseases such as SARS, Hendra and Nipah viruses, bird flu, and Lyme disease have given epidemiologists a new vocabulary for thinking about the specific moment of emergence, a factor often ignored given the early history of TB evolution. Recent publications by Daszak et al. (2000) and Wolfe et al. (2007) outline a more contemporary way of tracing diseases into new populations. Wolfe et al. (2007) offer a review of major emergence events and compare patterns in tropical and temperate regions. They analyze 25 temperate and tropical diseases and run them through statistical comparisons to establish trends in emergence, showing that most zoonotic emergences occur in tropical areas and that temperate diseases tend to be acute, whereas tropical diseases are more likely chronic. Broad statistical trends like these allow scientists to test theories of historic emergence against observable modern trends. To aid in their analysis of emergence patterns, the team constructs a five-stage process of evolution that all human pathogens must navigate before becoming human-exclusive. Additionally, the review acknowledges the large gaps in understanding that surround long-established yet major human diseases, including tuberculosis (Wolfe 2007). Given this gap, it is important to observe that even ancient diseases like tuberculosis once existed in a primary, "confined to animals" stage. Daszak et al. (2000) explore patterns in disease movement between host populations.
They suggest a "spill-over" and "spill-back" effect, wherein a high density of host animals acts to amplify a disease until pathogen density allows for a successful jump into a new susceptible host (Daszak 2000). This fits perfectly with the traditional model of TB movement: domestic cattle acted as an amplifying host, spilling over into early Neolithic farmers and evolving into a higher stage that can be completely sustained within human populations. This explains Old World tuberculosis, but does not address why pre-Columbian South American Indians, without access to bovine species, contracted TB.


The recent discovery and analysis of a Homo erectus skeleton in Turkey showed lesions in the meninges that correspond to tuberculosis. Due to the extreme age of this Middle Pleistocene specimen, no DNA spoligotyping could be performed, so any evidence of a mycobacterial infection is purely symptomatic (Kappelman 2007). If it is indeed a mycobacterial infection, this would be the oldest fossil evidence of tuberculosis, possibly one million years old.


This would mean that patient zero is not a bison, or a cattle farmer in Atlit-Yam, but an early human ancestor on its way out of Africa, supporting the hypothesis that tuberculosis evolved with us in Africa. This idea is supported by several facts already mentioned in this paper. First, disease emergence is statistically more likely to happen in tropical regions than in temperate ones. Tuberculosis would not be the first famous infection to originate in the African tropics, and statistics alone suggest this is more likely than emergence in a temperate area like the Fertile Crescent (Wolfe 2007). Second, Gutierrez speculates, given his three-million-year-old date for ancestral tuberculosis, that mycobacteria might have infected early hominids (Gutierrez 2005). The discovery in Turkey supports this claim and, if true, would show that mycobacteria invaded early hominid populations well before the domestication of anything, much less cattle. If mycobacteria left Africa with us, we would expect to see low epidemic levels associated with a low population of susceptibles. However, as susceptible populations increased in size and density, epidemics would reach higher levels, and post-primary bone lesioning and Pott disease would leave an archaeological record. Such an event would occur during the Neolithic Agricultural Revolution, with its shift to sedentary lifestyles and communal, close living quarters: the best conditions possible for spreading an aerosolized pathogen. This corresponds, in the Old World especially, with the domestication of many animals, including cattle. For years this correlation has been interpreted as causation, but more modern discoveries, especially the prominence of infection in South America, suggest that it is the high population densities, not the proximity to cattle, that caused tuberculosis to reach epidemic levels. This is represented best by the fact that none of the samples of animal bones taken from Wharram Percy were positive for IS6110.
In recent history, tuberculosis has returned to the spotlight. Its continued prevalence, combined with its growing antibiotic resistance, makes it among the most feared diseases of the twenty-first century. The history of tuberculosis is a valuable tool for understanding its modern pathology. Knowing that the pathogen has been evolving with the hominid body for possibly three million years changes the paradigm associated with modern TB and its ability to respond so quickly to pressure. Given the information compiled in this review, I advise close observation of the selective pressures placed on mycobacteria in the future. Clinicians should work in close concert with evolutionary biologists to devise therapies that will not select for a fitter, more prolific mycobacterium. These recent discoveries suggest that tuberculosis might be the oldest extant disease known to humans, a disease that afflicted other, now extinct hominids. This should act as a reminder of the long time frames associated with microbial evolution and the long-term reverberations that might be felt from antibiotic resistance.


Bard Science Journal December 2013 Volume 3, No.1

Works cited

Bauman, Robert, Microbiology: with Diseases and Taxonomy, 2011, Pearson Publishing
Brosch, R. et al., "A new evolutionary scenario for the Mycobacterium tuberculosis complex", PNAS, March 19, 2002
Clark, George A. et al., "The Evolution of Mycobacterial Disease in Human Populations: A Reevaluation", Current Anthropology, vol 28, Feb. 1987
Daszak, P., Cunningham, A.A., Hyatt, A.D., "Emerging Infectious Diseases of Wildlife: Threats to Biodiversity and Human Health", Science, vol 287, Jan. 2000
Diamond, Jared, Guns, Germs, and Steel, 2005, W. W. Norton & Co., Inc., New York
Gomez i Prat, Jordi et al., "Prehistoric Tuberculosis in America: Adding Comments to a Literature Review", Mem Inst Oswaldo Cruz, vol 98, 2003
Gutierrez, M.C., Brisse, S., Brosch, R., Fabre, M., Omaïs, B., et al. (2005), "Ancient origin and gene mosaicism of the progenitor of Mycobacterium tuberculosis", PLoS Pathog 1(1): e5
Hare, R., 1967, "The antiquity of diseases caused by bacteria and viruses: A review of the problem from a bacteriologist's point of view", in Disease in Antiquity, ed. D.R. Brothwell, A.T. Sandison, Springfield: Thomas
Hershkovitz, Israel et al., "Detection and Molecular Characterization of 9000-Year-Old Mycobacterium tuberculosis from a Neolithic Settlement in the Eastern Mediterranean", PLoS ONE, 2008
Kappelman, John, et al., "Brief Communications: First Homo erectus from Turkey and Implications for Migrations into Temperate Eurasia", American Journal of Physical Anthropology, 135:110-116, 2008
Mackowiak, Philip, et al., "On the Origin of American Tuberculosis", Clinical Infectious Diseases, Oxford, Aug. 2005:41
Mays, S. et al., "Paleopathological and Biomolecular Study of Tuberculosis in a Medieval Skeletal Collection from England", American Journal of Physical Anthropology, vol 114, iss. 4, April 2001
Mithen, Steven, After the Ice, Harvard Press, 2006, Cambridge, Mass.
Nicklisch, Nicole et al., "Rib Lesions in Skeletons from Early Neolithic Sites in Central Germany: On the Trail of Tuberculosis at the Onset of Agriculture", American Journal of Physical Anthropology, vol 149, iss. 3, Nov. 2012
Rothschild et al., "Mycobacterium tuberculosis Complex DNA from an Extinct Bison Dated 17,000 Years before Present", Clinical Infectious Diseases, 2001:33, August
Sreevatsan, Srinand, et al., "Restricted gene polymorphisms in the Mycobacterium tuberculosis complex indicates evolutionarily recent global distribution", PNAS, vol 94, Sept. 1997
Stead, W.W., Eisenach, K.D., Cave, M.D., Beggs, M.L., Templeton, G.L., Teon, C.O., & Bates, J.H. (1995), Am. J. Respir. Crit. Care Med., 151
WHO, Global Tuberculosis Report: 2012, WHO Library Cataloguing-in-Publication Data
Wolfe, N.D., Dunavan, C.P., Diamond, J., "Origins of major human infectious diseases", Nature, vol 447, May 2007

ASAP: A COMSOL Model for Discerning Blood Clotting
An REU in Physics


REUs, Research Experiences for Undergraduates, are opportunities sponsored by the National Science Foundation for undergraduates in science, engineering, and mathematics to do hands-on research during the summer in a variety of institutions across the globe.

ASAP Technology

Traumatic injuries are the leading cause of death for those under the age of 44 and the third leading cause of death overall in the United States. Many trauma victims suffer from some sort of trauma-induced blood clotting complication. The standard treatment for a lack of blood clotting ability is a blood transfusion. Unfortunately, the resources necessary for transfusions are both extremely limited and expensive. It is therefore necessary to distinguish between patients whose blood is having clotting complications and those whose blood is clotting correctly, so as to avoid any misallocation of scarce resources. Furthermore, if a patient with normal or high clotting capacity were given a transfusion, it could lead to clots in areas of the body where they are not desired, such as an artery. The current technology used for diagnosing blood clotting cannot be removed from a clinical laboratory. The machine that provides the gold standard for blood clot measurement, TEG (thromboelastography), requires a perfectly level surface and complete isolation from vibration, so it cannot be brought on an ambulance or to the scene of an accident. This creates a large delay in the clot diagnosis process, as the patient's blood cannot be fully examined until they reach the clinical lab of the hospital. The emergency room is also inhospitable to the easily disturbed TEG. Even when a trauma victim is brought directly into the emergency room, a sample of their blood must still be taken to a different area of the hospital containing a clinical lab, the only space that can properly house the TEG system. ASAP technology will provide the same measurements as TEG, but in a portable system. Many trauma victims suffer from severe blood loss, so it is critical that medical technicians know whether or not a given trauma victim is having problems clotting as soon as possible. The faster this can be assessed, the more quickly these victims can be treated, and the more blood loss can be prevented. By making a device that can be used at the scene of the accident or in the ambulance, the process of diagnosis is expedited in a situation where patients need to be treated as quickly as possible. ASAP, short for Actuating Surface-Attached Posts, employs an array of 25-micron-tall posts that can be actuated magnetically. Taking a small sample of a patient's blood using a finger prick, the sample is

inserted into a small closed chamber containing the array of posts. Information about the clotting ability of the blood sample is gained by examining how the actuating posts interact with the thickening blood. As the blood begins to clot and thicken over time, it restricts the motion of the posts, reducing their amplitude of oscillation. By measuring how this amplitude changes over time, we can extract information on the clotting parameters of the blood sample.

The Computer Model

In order to better understand the interaction between the actuating posts and the blood in the chamber, we started to develop a computer model of our system. We are most interested in how the fluid dynamics underneath the tips of the posts relate to the fluid dynamics above the tips. Specifically, we would like to know which region dominates in energy dissipation. The amount of energy dissipated in each region corresponds to the effect that region has on resisting the posts' motion. By identifying where energy is being dissipated, we can understand what part of the clot we are measuring when we examine the change in oscillation amplitude of the posts. We are also interested in how energy dissipation is affected by a change in the height of the channel; by changing the dimensional parameters of the model, we are able to study this effect. While blood is a viscoelastic fluid, the first step to understanding this system is to consider the simpler case of a purely viscous fluid in place of blood. With this much more easily modeled fluid, we hope to create a model of the post structures in a viscous fluid and compare it against the physical one to validate that the model behaves well in the simpler of the two fluids.

Figure 1: Geometry of the first model. Outflow with zero pressure.

Figure 2: A plot of the velocity at 1/10th of an oscillation cycle during the 2nd oscillation.

We will also be able to compare the viscous model against the physical experiment using blood, which may aid in characterizing the viscous properties of the clot. The model that was created this summer started to generate the infrastructure for model analysis and serves as a first step toward a more complex model that can incorporate more realistic characteristics of the system.

Methods

COMSOL Software

The model was created using the multiphysics software COMSOL. COMSOL uses the finite element method, a numerical method for solving the various differential equations that govern physical behavior. The software was chosen because of its ability to incorporate multiple kinds of physics into a single model. In the simplistic version of the model, we only need to account for the interaction between a fluid and a structure. In the future, though, we may want to incorporate the magnetic forces actuating the posts and the structural mechanics of the flexible posts in addition to the fluid mechanics. COMSOL's modular design gives it the capacity to account for all of these properties. Within COMSOL, we are specifically interested in the "Fluid-Structure Interaction" module that is built into the software. The most powerful component of the Fluid-Structure Interaction module is the user's ability to define the interface between a solid region and a fluid region. After defining these boundaries, the module will consider the effects that the solid region has on the fluid and vice versa. Within this module the user is also able to define a prescribed displacement of a given domain or boundary, which COMSOL will account for fluid mechanically. Using COMSOL and the Fluid-Structure Interaction module, we created a two-dimensional model with variable Reynolds number, viscosity, density, oscillation frequency, oscillation


amplitude, and channel height. All of these parameters were set to values that closely match the properties of our experimental model, but can easily be changed if need be.

First Model: Oscillating Boundary Condition

The first model we made was intended to emulate Stokes' Second Problem, which involves an infinitely long flat plate oscillating in an infinite expanse of fluid and has a known solution. Previous experimental work has demonstrated that the fluid flow above the tips of the oscillating posts very closely resembles the flow produced by Stokes' Second Problem. Given this, and the fact that the problem has a known analytical solution we can compare our model against for error, the oscillating plate is a manageable first step in developing our model. To recreate this problem in COMSOL, we first set up a rectangular geometry and defined the domain of that rectangle to be a fluid with a viscosity and density similar to that of whole blood. To model the oscillating plate, we used an "oscillating wall" boundary condition, built into the Fluid-Structure Interaction module, for the bottom edge of the rectangle. The wall velocity was set as a sine function in the x direction and zero in the y direction. To account for the infinite horizontal expanse of fluid, we set up periodic boundary conditions on both sidewalls of the rectangular domain. This was achieved by setting the boundary conditions of both walls to be an "outflow" with zero pressure as the outflow condition. To account for the infinite expanse of fluid in the vertical direction, we set the height of the fluid domain to 1.5 times the thickness of the Stokes boundary layer, beyond which the viscous forces of the fluid damp out nearly all of the perturbation caused by the oscillating plate. Lastly, the upper edge of the domain was set as a no-slip boundary.
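As a worked example of the domain sizing described above, the Stokes boundary layer thickness can be computed directly. The viscosity and drive frequency below are assumed placeholder values, not the model's actual parameters:

```python
import math

def stokes_boundary_layer(nu, omega):
    """Stokes boundary layer thickness, delta = sqrt(2 * nu / omega).
    Beyond a few delta, the oscillating plate's influence has decayed away."""
    return math.sqrt(2.0 * nu / omega)

# Placeholder values: a whole-blood-like kinematic viscosity of ~3.5e-6 m^2/s
# and an assumed 10 Hz drive frequency.
nu = 3.5e-6                  # kinematic viscosity, m^2/s
omega = 2 * math.pi * 10.0   # angular frequency, rad/s
delta = stokes_boundary_layer(nu, omega)
domain_height = 1.5 * delta  # fluid domain sized as in the model above
print(round(delta * 1e6))    # boundary layer thickness in microns: 334
```

A thicker fluid or a slower drive yields a thicker boundary layer, so the domain height must be rechecked whenever those parameters change.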
The method for checking the COMSOL model against the known solution was to plot the velocity of the fluid in the channel as a function of height. This is accomplished in COMSOL by creating a "cut line" vertically through the domain and plotting the computed fluid velocity along that cut line. This data was then exported to a Python script for analysis. Since this model does not

make use of a fluid-structure interaction boundary like our more realistic models will, and thus will not produce the same errors, the intention of this first setup was only to make sure that COMSOL was not behaving erratically and to work out any initial problems. We were more interested in a qualitative assessment of the correspondence between COMSOL's results and the analytical solution than a quantitative one. Something we had not anticipated was that our model began with all of the fluid in the channel at zero velocity, while Stokes' Second Problem assumes the plate has been oscillating for an indefinite amount of time. We wanted to determine at which cycle our model reached a steady state so that we could end our run just after that cycle was complete, saving computation time. For this model we defined error as the difference between the COMSOL solution and the analytical solution. To determine the cycle at which the model reached steady state, we examined the first of the ten time steps we computed for each oscillation cycle and compared the error for each cycle. The next component of the model that we tested was how error was affected by the mesh size. For our steady-state analysis we had been using the "fine" mesh setting, but we wanted to see how the error responded to a coarser mesh. If our model is stable, it should have less error when using a smaller mesh size. We found that the coarser mesh did make the error noticeably larger, which gave us confidence in the stability of the first model.

Second Model: Oscillating Plate Domain

The second model again emulated the flat plate in an infinite expanse of fluid, this time by creating a domain to act as a solid plate instead of just a boundary condition. In this model we wanted to resolve any problems with making a moving solid domain in the simplest case of the oscillating flat plate before moving on to more complex geometries and motions.
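The first model's check against the analytical solution, described above, can be sketched in a short script. The function names and parameter values here are illustrative placeholders, not the actual REU code (whose frequency and viscosity were matched to the experiment):

```python
import math

def stokes_velocity(y, t, U=1.0, omega=2 * math.pi, nu=3.5e-6):
    """Analytical solution to Stokes' Second Problem:
    u(y, t) = U * exp(-k y) * cos(omega t - k y), with k = sqrt(omega / (2 nu)).
    U, omega, and nu are placeholder values, not the model's parameters."""
    k = math.sqrt(omega / (2.0 * nu))
    return U * math.exp(-k * y) * math.cos(omega * t - k * y)

def max_abs_error(heights, cutline_velocities, t):
    """Largest pointwise difference between an exported COMSOL cut line and
    the analytical solution -- the qualitative error measure described above."""
    return max(abs(u - stokes_velocity(y, t))
               for y, u in zip(heights, cutline_velocities))

# Stand-in for an exported cut line: we fake the "COMSOL data" with the
# analytical solution itself, so the script's reported error should be zero.
t = 0.25
heights = [i * 1e-4 for i in range(20)]
fake_cutline = [stokes_velocity(y, t) for y in heights]
print(max_abs_error(heights, fake_cutline, t))  # 0.0
```

Running the same comparison on real exported data, cycle by cycle, is what identified the third oscillation as the steady-state cycle.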
We also made a more appropriate error analysis, since this model utilizes the solid-fluid interface boundary that will be used in future models. The geometric setup of this model was similar to our first model except at the bottom of the channel. Instead of using a moving boundary condition, we created a long, thin plate with sides that sloped downwards to meet the bottom boundary of the fluid domain. The sections of the bottom of the channel on either side of the plate were given a slip boundary condition so fluid could move freely over them. These same sections were given a prescribed displacement of zero in the y direction, but are able to deform freely in the x direction. The solid domain was given a "prescribed displacement" under the Fluid-Structure Interaction module.

Third Model: Rigid Posts

The third model is where we began to deviate from the Stokes' Second Problem model and started to incorporate elements more reminiscent of the actuating posts. For this model, we created an array of posts with the same dimensions as our physical model and attached them to the oscillating base plate. The posts follow the same displacement as the oscillating plate without bending or tilting. While they are not pivoting about their bases as they do in the physical model, the rigid posts add a more realistic geometry to the model. We wanted to be certain that this model was converging and behaving as expected before moving on to a model where the posts actually pivot, which will introduce much more complicated fluid mechanics. The geometry of the third model replaces the flat plate solid domain of the second model with a rigid post solid domain, but is otherwise the same. In the model, the sharp corners of the post tips and where the posts meet the baseplate are rounded out so as to avoid any strange fluid behavior around those points.

Figure 3: (top) Error compared between the plot on the left and the 1/10th time step plot for the 3rd and 5th oscillations. There is a significant decrease in error between the 2nd and 3rd cycles, but very little change between the 3rd and 5th. It was decided that the small decrease in error between the 3rd and 5th cycles was not worth the extra computation time, so we designated the 3rd oscillation as our steady-state cycle. (bottom) Error comparison between mesh sizes.

Figure 4: Geometry of second model. Boundary conditions same as first model everywhere except bottom. Bottom details shown.

Figure 5: Geometry of third model. Boundary conditions same as first model everywhere except bottom. Bottom details shown.

Error Analysis and Results

Varying X-Position

Since both of the solid domain models have a finite length for the plate, the first thing we checked was how the error in fluid velocity varied in the x direction. The error metric we used for this test was again the difference between the analytical solution and the COMSOL solution. We checked the error along the cut line that went directly down the center of the model against a cut line through the region on the periphery, very near the outflow boundaries. For the flat plate model, we found very little change in the error plot. The only discrepancy was very near the base of the plate. The plot through the center of the channel shows the velocity of the fluid that is in contact with the plate's surface and is therefore subject to the no-slip condition at the plate. The plot at the side of the channel is in contact with the slip boundary at the side of the base of the channel, and is consequently more dissimilar to the analytical solution very near the bottom of the channel. Aside from the very bottom of the channel, the error plots are very similar between the side cut line and the center cut line. Despite having a plate that is finite in length, the fluid velocity of the solid domain model does not vary significantly when varying the x position of the cut line. A similar analysis was done for the rigid post model. The fluid velocity generated above the tips of the rigid posts is very similar to the analytical solution to Stokes' Second Problem. An error plot taking the difference between the COMSOL solution above the post tips and Stokes' Second Problem was made for various x positions, and no significant differences in those error plots were found.

Figure 6: (top) A plot of the difference in velocity between the analytical solution and the COMSOL solution in the center of the geometry. (bottom) A plot of the difference in velocity between the analytical and COMSOL solutions very near the side of the geometry. The bottom of the cut line used for the bottom plot touches the slip boundary along the bottom of the channel. The two plots only vary very near zero y position, but otherwise match quite well.



Energy Dissipation Error Analysis

Although the calculation for energy dissipation will be quite different in a model that accounts for a viscoelastic fluid, we wanted to incorporate an analysis of this property into our viscous fluid model. Since average energy dissipation is the parameter we are most interested in, it is also the best quantity on which to base a more definitive error metric. Energy dissipated in a viscous fluid can be described by the shear stress times the velocity of the fluid. Plotting this value as a function of channel height yields an energy dissipation rate in watts per meter. An output of this plot for 50 evenly spaced time steps in one oscillation cycle is taken from COMSOL and fed into a Python script that averages the plots to find the average energy dissipated per second per meter (W/m). We can later compare the energy dissipated above the tips of the posts to the energy dissipated below by integrating this function, yielding an average energy dissipation rate below the tips and above the tips in watts.
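The averaging and integration steps just described can be sketched as follows. The data layout and function names are assumptions for illustration, not the original analysis script:

```python
# Assumed layout: each exported COMSOL plot is a list of (height, dissipation
# rate in W/m) samples at fixed heights, one list per time step.

def time_average(profiles):
    """Average dissipation-rate profiles taken at evenly spaced time steps."""
    n = len(profiles)
    heights = [h for h, _ in profiles[0]]
    avg = [sum(p[i][1] for p in profiles) / n for i in range(len(heights))]
    return heights, avg

def integrate(heights, values, i0, i1):
    """Trapezoidal integral of the averaged profile between samples i0..i1,
    giving an average energy dissipation rate for that region."""
    total = 0.0
    for i in range(i0, i1):
        total += 0.5 * (values[i] + values[i + 1]) * (heights[i + 1] - heights[i])
    return total

# Toy data: a uniform 2 W/m dissipation rate over a 50-micron channel with
# 25-micron posts, repeated for two identical "time steps".
profile = [(i * 5e-6, 2.0) for i in range(11)]
heights, avg = time_average([profile, profile])
below_tips = integrate(heights, avg, 0, 5)   # bottom of channel to post tips
above_tips = integrate(heights, avg, 5, 10)  # post tips to top of channel
print(below_tips, above_tips)  # both ~5e-05
```

Splitting the integral at the post-tip height is what lets the above-versus-below dissipation ratio be computed for different channel heights.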

Figure 7: (left) Average watts dissipated per meter as a function of channel height for the flat plate model. (right) Same plot as left only plotted for post tip height and above. Deviation occurs near post tips. Both averaged over 50 time steps.

Figure 8: (left) Energy dissipation rate as a function of height above and below tips of posts. Without any shearing motion between the posts, there is negligible dissipation below tips. Once we have created a model that generates more shear stress below the tips of the posts, we should see some more interesting results. (right) Plot of the ratio between energy dissipated above post tips and below post tips. Larger ratio value corresponds to more energy dissipation above post tips.



Error was calculated by taking the difference between the analytical solution and the COMSOL solution for average energy dissipation as a function of channel height, then dividing that difference by the maximum average energy dissipation value of the function (this occurred at the surface of the plate in the flat plate model, and just above the tips of the posts in the rigid post model). This metric was chosen because when we first tried a pointwise percentage, the error would blow up any time the dissipated energy was near zero. We felt this unjustly weighted the regions of minimal energy dissipation, since a large error where the energy dissipation is small will not greatly affect the total energy dissipated. By normalizing with the maximum energy dissipation, the high-energy and low-energy regions are weighted more fairly. Again, we do not have an analytical solution for our rigid post model, but we do expect the plot of dissipated energy above the post tips to closely resemble the solution to Stokes' Second Problem, so we compared these two plots for the error analysis of that model. In the flat plate model, error never exceeded 2%. In the rigid post model, error only exceeded 5% very near the tips of the posts, which is to be expected since that is the region likely to deviate from Stokes' Second Problem. Elsewhere in the rigid post model, the solutions corresponded very closely.

Energy Dissipation as a Function of Height

In the rigid post model there are no shearing forces underneath the tips between the posts, and consequently no energy dissipation occurs there. Despite our current model yielding uninteresting results, we wanted to make a script that would be able to analyze how the energy dissipation above and below the tips changes as we vary the height of the channel.
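The max-normalized error metric described above can be sketched in a few lines, with made-up numbers standing in for the dissipation profiles:

```python
def normalized_error(analytical, computed):
    """Pointwise difference normalized by the profile's maximum value, so that
    regions of near-zero dissipation are not unfairly weighted."""
    peak = max(abs(a) for a in analytical)
    return [abs(a - c) / peak for a, c in zip(analytical, computed)]

# Made-up profiles: a pointwise percentage would blow up at the 0.001 sample,
# but the max-normalized metric keeps every point's error small.
analytical = [10.0, 5.0, 1.0, 0.001]
computed = [9.9, 5.1, 1.05, 0.002]
errors = normalized_error(analytical, computed)
print(max(errors) <= 0.02)  # True: every point is within 2% of the peak value
```

With a pointwise percentage, the last sample would report a 100% error despite contributing almost nothing to the total dissipation, which is exactly the distortion this metric avoids.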
As mentioned in the previous section, this is achieved by integrating the time-averaged energy dissipation rate plot from the bottom of the channel to the height of the posts to find the average energy dissipation rate below the tips. The process is repeated for the region above the tips. Our rigid post model and analysis script were able to successfully generate these plots. However, the results are uninteresting due to the lack of a pivoting motion that would cause shearing in the fluid region between the posts.

Conclusion

All three of the models that were created behaved as expected and, after fixing the solver problems, are all numerically stable. The flat plate solid domain model yields an average energy dissipation plot within 2% error of the analytical solution according to the error metric used. The rigid post model is not realistic enough to give any insight into the dissipated energy below the post tips. Once a model is created that accounts for the pivoting motion of the posts, the script used to analyze energy dissipation above versus below the tips will work with that model.

Acknowledgements

I would like to thank my mentors Robert Judith and Dr. Richard Superfine for their guidance and support during my time at UNC. I would also like to thank Sheila Kannappan, Greg Smith,


Zane Beckwith, and anyone else involved in the creation of the CAP REU program for providing me with an enriching research experience. Lastly, I would like to thank all of the CAP REU students for their constructive suggestions over the course of the summer in which this research was done.

References

1. "10 Leading Causes of Death by Age Group, United States." National Vital Statistics System, National Center for Health Statistics, CDC. 2010.

Bard Science Journal is an interdivisional magazine including everything from science fiction to original research by students.

December 2013