Introduction
The Varieties of Gridded Experience
When I trained as a molecular biologist in the 1980s, I spent most of my time characterizing biomolecules. I cloned genes to understand how each gene’s product functioned in cells; I isolated RNA to see how and when it was expressed; and I assayed under what conditions a specific protein might be expressed. One evening, while learning a new protocol for isolating proteins, I asked a postdoctoral fellow in the lab next to mine if he ever thought that the molecular biological sciences would move toward a science of “building things back up” after all this characterization. I was interested in seeing how all the bits of knowledge we gathered fit together to form a picture of how a cell, or an organism, might operate. Not in the piecemeal fashion we were then pursuing, a cellular function discovered in one part of the cell, a molecular signal glimpsed in another. I wanted to see how all these insights might be orchestrated together. If the protein expression problem I was working on could be compared to a single musical instrument, I wanted to hear the grandeur and complexity of the whole cellular symphony.
The postdoctoral fellow replied that he didn’t think the type of science I was interested in would happen for a long time. As he saw it, there was still too much characterizing that had to be done. In his view, the molecular biological sciences of the twentieth century would remain as mostly detailed depictions of molecules, cells, and organisms. We just didn’t have enough information to develop a comprehensively synthetic (as opposed to an overtly analytic) molecular biology. Only after we had characterized the world sufficiently, would we be able to see how the parts fit together. To be a successful scientist, he suggested, I needed to dedicate myself to the intellectual world of a single amino acid on a single molecule.
Well, the era of “building things back up” has most certainly arrived. Many molecular biologists today embrace the use of computation, art, animation, engineering, systems thinking, and design to promote making as a new form of understanding. Disciplines as varied as synthetic biology, evolutionary and developmental biology, systems biology, bioengineering, bioinformatics, and even biodesign flourish because of a deeply held commitment that understanding how biomolecules interact to form organelles and cells is important for understanding how living things operate. If we frame this insight in the terms of literary theory, one could claim that within the last few decades, biomolecular scientists increasingly adopted world building as a strategy for exploring the complex interactions that form living things.1
I’m not the first to notice this shift in biology. Just ask the scientists. Synthetic biologists have written on how “biology is technology” as cells can be used as “platforms” to fabricate economically important biomolecules, such as pharmaceutical drugs.2 Disciplines such as evolutionary and developmental biology temper the study of the selective pressure of environments with a renewed sense of the importance of the internal molecular and physiological constraints of developing organisms,3 and scientists have “pioneered” institutes dedicated to embracing “biological complexity” to fearlessly decipher “vast amounts of data” “to gain valuable insights and achieve breakthroughs across scientific disciplines.”4 Clearly something is afoot and people have noticed.
What I find especially surprising is the speed with which this change appeared. For many commentators, the cause for the rapidity of this change is clear: the use of computers in biology.5 Specifically, these commentators point to the heavy use of computers in biological practices, with their ability to store and correlate large amounts of data, and the use of cybernetic and biological metaphors for thinking about biological systems, giving researchers new conceptual tools to think about complex processes. Although I don’t think these histories of biology are wrong, and I will address these claims in more detail below, I think they are much too limited. An overemphasis on computation as a historical agent obscures two main points. The first is the problem of standardization. It is difficult to turn something as complex as a fruit fly’s body into the data and operable commands that a computer will recognize. There are a large number of historical studies that have labored to demonstrate the conceptual and technological innovations that needed to go into making a body calculable
in the first place.6 This history should not just be swept under the carpet. The second problem is the role of envisioning. I will offer a more concrete definition of “envisioning” below. For now, however, we can think about envisioning as a composite act that mixes imagination, visualization, and desire. To envision something in the biological sciences means having a vision for how something could occur under specific circumstances. Often envisioning requires tangible images, movies, animations, or diagrams to depict phenomena as well as the relations that birth them. Pointing to the adoption of computers as the prime historical agent diminishes the values, desires, and imaginative potentials that make science such an interesting field of study. My goal as a scholar of the cultural and conceptual basis of biology is to create stories that are historically informed, scientifically robust, and imaginatively captivating. Pointing to the rise of computing, while superficially correct, too often tempts authors to condense these complex interactions into a single opaque technological box of agency.
I have spent my career trying to understand these two moments in the history of biology, the standardization of human, animal, and plant bodies and the envisioning of how they work. My first book, The Emergence of Genetic Rationality, was a study of the role of standardization in defining biological relationships. At the time, it was one effort among many that were interested in the problem. This book, Biology in the Grid: Graphic Design and the Envisioning of Life, is a study of the role of envisioning in defining biological relationships. Understanding why so many biologists adopt strategies for world building all at once, however, demands that we understand a bit more about twentieth-century consumption practices and the values that they promote. This is key for understanding how computation and world building in biology evolve as conjoined visions for a world that can be envisioned (imagined and controlled) in all its complex interconnectedness.
Envisioning Consumers
Early twentieth-century industrialized economies had the capacity to produce a lot of stuff; what industries needed were mechanisms to convince consumers that they needed this stuff. Mid-twentieth-century businesses achieved these goals by changing their relationship to consumers.7 Mechanisms for relating to consumers, such as advertising and marketing, helped consumers visualize how products could change their lives, while
mechanisms for consumer feedback, such as focus groups, gave companies ideas about which products might be successfully marketed. Producing products no longer meant simply turning an assembly line on or off; it meant ensuring the object would sell once it was produced.
The development of new ways for envisioning products was key for ensuring the regulation of goods. Companies began using mechanically, chemically, and electronically produced images to ensure that consumers desired what companies were making. Magazines swelled with advertisements for products, and newspapers promoted visually rich features, such as comic strips, to ensure that they sold even on slow news days. Publications and broadcasts splashed images across media, stores developed techniques to display goods in novel ways, and companies used novelties and gimmicks to ensure they maintained that special relationship between a brand and its consumer (do you remember the toy hidden in the cereal box?). These initiatives led to innovations in product design and distribution, and a world built on modern marketing ensued. Companies became especially adept at using visualization technologies to paint the consequences of a world transformed by consumption. This world was not simply focused on using an object’s utilitarian function to sell a product; it instead focused on the big picture, how these products could help a consumer attain a certain lifestyle. This was a world that you could participate in if you had the liquid income needed to purchase the advertised goods. Our ability to envision organisms in the biological sciences not only relies on some of these innovations, it remains tightly bound with the interplay of desire and spectacle in a consumerist economy.8
The Spectacle of the Grid
One especially important innovation in visualization practices was the use of grids for graphic design.9 Grids allowed designers to break down complex forms into simpler elements and assemble them in novel configurations for maximum effect. Texts and images could be combined in novel ways, as publishers fought over the attention of readers. Grids also allowed distributors to circulate images across various media, allowing publishers to deliver scalable content to large audiences across diverse platforms (print, broadcast, and web). Overtly commercial enterprises such as advertising and the entertainment industry weren’t the only ones that used grids. Almost all publications began using grids as a standardized
form of layout, including scientific publications, where gridded layout design was especially welcomed for its ability to easily include more advertisements and illustrations. The advent of graphical user interfaces in personal computing programs ensured that even neophytes in graphic design could produce elegant yet standardized documents using grids. (I know, this neophyte used these programs to create some of the illustrations in the book.) The history is clear. During the twentieth century it became harder and harder to envision the world without using a grid.
The sheer prevalence of grids revealed new ways for thinking about how organisms might build themselves. Scientists began seeing how bodies could be constructed as a series of parts or modules, much like a grid is composed of a series of panels.10 This not only offered scientists new ways of thinking about how bodies might be constructed, it also suggested new ways that bodies could be related. The key realization is that grids allowed scientists to think about how the autonomy of individual parts and the needs of the organism could work together. This delicate balancing act depends on how grids are organized. Each module is part of a larger array of modules, but it also has a degree of autonomy in defining its self-organization. This allowed for the creation of designs based on assemblages of various types of modules as well as the use of single modules in multiple designs. Although it oversimplifies the complexity of how the scientists were thinking about the problem, it might be helpful to see how creating organisms is akin to assembling a structure with Legos. Legos are basic building blocks, similar enough that they can recombine with one another, but different enough that they give one’s final creations different forms and functions when swapped. Limbs can be switched (such as a fruit fly’s small stabilizers for full-blown wings), heads modified (legs inserted where antennae should be), and (this is where the Legos example falters) biochemical pathways can be refashioned.
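A toy sketch may help make this modular logic concrete. The Python fragment below is purely illustrative: the module names, the “body plans,” and the swap are hypothetical stand-ins rather than a model of any real developmental mechanism. It only shows how the same small parts list, rearranged or swapped, yields different forms.

```python
# A toy illustration, not a biological model: the same small set of modules
# can be assembled into different arrangements, and swapping one module for
# another (legs for antennae, wings for halteres) changes the resulting form
# without changing the parts list. All names here are hypothetical stand-ins.

from dataclasses import dataclass

@dataclass(frozen=True)
class Module:
    name: str
    function: str

ANTENNA = Module("antenna", "sensing")
WING = Module("wing", "flight")
HALTERE = Module("haltere", "stabilization")
LEG = Module("leg", "walking")

def assemble(plan):
    """A 'body' is just an ordered arrangement of modules."""
    return " + ".join(module.name for module in plan)

typical_fly = [ANTENNA, WING, HALTERE, LEG]
homeotic_swap = [LEG, WING, WING, LEG]  # legs where antennae should be; wings for halteres

print(assemble(typical_fly))
print(assemble(homeotic_swap))
```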
Perhaps counter-intuitively, theories of modularity provided biologists with a strategy for understanding how an incredible diversity of different forms could be created from seemingly homogeneous building blocks. Similarity and difference, it seems, are much more closely related than we thought. These theories also reinvigorated old existential questions. If the materials of construction for a fly and a human are the same (proteins, nucleic acids, fats, and starches) and the process of construction is similar (with the same signals directing diverse, site-specific structures), then how different from each other are we? The answer, it seemed, increasingly
appeared to be in how the actual modules were assembled, where and when they were placed to form an organism. Welcome to life in the grid, where our ability to understand new forms of biological complexity derives from one of the least likely sources: standardization wrought by the desire for advertisements and entertainments. Grids (and as I will argue in the last chapter of the book, layers for dynamic processes, as well) provide the scaffolding for envisioning how living things operate in the twenty-first century. Analyzing our lives in the grid, then, requires a bifurcated research strategy. It is imperative to recognize the important political constraints and oppressive consequences of the use of grids in thinking about controlling life. As I will argue in depth in chapters 1 and 3, despite the homogenizing appearance of grids, not all lives in a grid are treated similarly. Uncovering these differences is politically important but also requires understanding how grids are ordered. This involves asking questions such as “What are the specific values that grids are intended to support?” Understanding life in the grid also demands the use of imagination. In this way we see how grids can be used to reorder lives to be less oppressive and more creative. Strangely, it is through a study of the most monotone and bureaucratic of terms, “regulation,” that we see how closely the impulse to control and the desire to imagine are bound together through envisioning.
The Varieties of Regulated Experience
The use of grids was just one aspect of a more important shift in the ways that products were regulated. Most people are used to thinking of regulation in terms of the application of rules or laws in social organization. For instance, laws limit how fast we can move by setting speed limits, what we can ingest by designating controlled substances, and how we interact with other bodies by criminalizing specific behaviors. A growing number of scholars from a remarkably diverse range of disciplines have argued that this is much too limited a way to think about how bodies are regulated in twenty-first-century society.11 Central to many of these claims is the work of the late Michel Foucault on how lives, economies, and regulations interact to create regulatory practices supple in their application and responsive to the challenges of random events. Especially central to this new type of regulation is “the production of the collective interest through the play of desire.”12 Economic practices, legal norms, and
ideas about how bodies operate worked in tandem to create new forms of regulatory functions.13 This subtle form of regulation appeared more fluid in its application, more natural in its appearance, more insidious in its presence, and more expansive in its reach. Visual technologies of mid-twentieth-century capital played into this change in how products were regulated. Regulation was no longer a matter of restricting which products were produced for consumption. As noted earlier, it also meant developing desire for the products in the first place.
Regulation is an increasingly important concept in biology as well. Although biologists were beginning to figure out how nucleic acids informed the expression of proteins, they couldn’t explain why only certain proteins were produced at specific times and locations in organisms. This was an interesting conundrum as living things seemed always to be changing. How could the same materials that created a specific stage of an organism’s life create other stages as well? Insects, such as fruit flies, not only grow larger, they also dramatically change their forms as they grow. The untrained eye can find little similarity between the soft grub of a juvenile fly and the well-articulated, chitin-armored appearance of the adult. Even humans go through subtle chemical metamorphoses, as some lactose-intolerant adults lose the ability to metabolize the sugar lactose, a metabolic skill they relied upon as infants. Why did they lose this ability? Or, in more biologically precise terms, how could a limited nucleic acid code account for all the changes organisms exhibit during their lives? Clearly, something was happening to allow some parts of the code to be expressed at one time and not at another. This implied that genes not only needed to be turned on, they had to be turned on at the right time and in the right place. One point that Biology in the Grid argues is that understanding the complexity of genetic regulation not only involved changes in how scientists thought about genes, it required a change in how the concept of regulation was pragmatically deployed in society.
Genetic regulation metamorphosed from being a simple directive (such as turning a gene “on” or “off”) to a complex set of responses to localized cues (such as the response to chemicals in specific cellular environments) that occur in a specific order (where the expression of one chemical initiates an expression of another chemical). Genomes now appear less like a detailed set of instructions to build an organism and more like a set of potentially coordinated responses to localized changes. Take, for instance, this claim by developmental biologist Eric H. Davidson, that a genetic
regulatory network has two components. They are sensitive to cellular signals, meaning that signals have the capacity to “affect transcription of regulatory genes.” And they form networks, where “each regulatory gene has both multiple inputs . . . as well as multiple outputs . . . so each can be conceived as a node of the network.”14 This vision of genetic regulation as multicausal and conditionally responsive is much more complex than a single directive to turn a gene on or off. In this case, it is not just the code of the genes that makes a difference, it is how, when, and where the codes are transcribed and translated. This idea of genetic regulation was key for the emergence of world building in the biological sciences.
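Davidson’s description can be sketched, very loosely, as a small network in which each gene node has multiple inputs and multiple outputs. The Python below is a hypothetical illustration rather than a model of any real regulatory circuit: the gene and cue names are invented, and “expression” is reduced to checking whether a node’s inputs are present.

```python
# A minimal, purely illustrative sketch of the picture Davidson describes:
# each regulatory gene is a node with multiple inputs (cues it responds to)
# and multiple outputs (genes it helps switch on). The gene and cue names
# below are hypothetical placeholders, not real Drosophila genes.

network = {
    # gene: (set of cues required for expression, set of genes it activates)
    "gene_A": ({"maternal_signal"},     {"gene_B", "gene_C"}),
    "gene_B": ({"gene_A", "local_cue"}, {"gene_D"}),
    "gene_C": ({"gene_A"},              {"gene_E"}),
}

def express(cues):
    """Iteratively switch on any gene whose inputs are all present.

    Expression is conditional and ordered: an expressed gene becomes a cue
    that can satisfy the inputs of genes downstream of it.
    """
    state = set(cues)
    changed = True
    while changed:
        changed = False
        for gene, (inputs, outputs) in network.items():
            newly_on = ({gene} | outputs) - state
            if inputs <= state and newly_on:
                state |= newly_on
                changed = True
    return state

# The same "code" yields different expression patterns under different local cues.
print(sorted(express({"maternal_signal"})))
print(sorted(express({"maternal_signal", "local_cue"})))
```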
Although this shift in the idea of regulation has registered greater impact in some fields of biology than in others, it has been surprisingly far-reaching. Its influence on the field of evolutionary and developmental biology is especially profound, as the concept of genetic regulation has played a key role in thinking about how organisms develop. Genetics, too, has had to wrestle with how genes respond to cellular environments. The result has been the characterization of a long list of factors implicated in regulating genes. We now have specific nucleic acid sequences identified (such as initiation and termination sequences, promoters, enhancers, and microRNAs), specific proteins identified (such as transcription factors), and even modifications to the architecture of the nucleic acids (through methylation and histone interactions).15 Understanding the rich interplay between all these factors has led to productive debates in proteomics, structural biology, epigenetics, cell biology, bioengineering, and even synthetic biology. Grids not only helped structure how these interactions were thought to occur, in many cases they enabled the visualizations that allowed for thinking about interactions in the first place. Grids allowed scientists to see how things could be put together in a way that wasn’t just propositional but additive and supple.
This turn to world building has been fruitfully envisioned as a return to a biology of organic holism.16 The more I studied the development of this new way of thinking in biology, though, the more I found parallels with contemporary economic practices more compelling than parallels with older biological theories. Let me offer an extended example to illustrate what I mean. I hate to buy clothes. I really dislike spending an afternoon shopping. I take very little interest in trying things on, waiting in lines, and being jostled by others. Increasingly, I have relied on purchasing my clothes online. A series of innovations in online retail has made it possible
for me to drastically reduce my visits to physical stores. For instance, I now have accounts at retail clothing sites that remember my body size and can even suggest small adjustments in size if a brand is known to run a little smaller or larger than other brands. Some sites even have virtual methods for trying on clothes, such as “virtual fitting rooms.” These can rely on mechanisms such as providing measurements made at home to create a virtual mannequin with the dimensions of your body or uploading an image so that this mannequin resembles you in more than your formal dimensions. In one sense the online experience of buying clothes is becoming more like the store experience, or even the much older, now elitist practice of going to a tailor.17 On the other hand, this process is clearly built from processes of standardization that made it possible to buy off-the-rack in the first place. Yet the process more thoroughly adjusts for how standards can be varied to build a body profile that accommodates even hard-to-fit parts of my body, such as my peculiarly short arms.
Interestingly, innovations in how goods and information are regulated keep my clothes-buying habits and biologists’ conceptions of development from being a return to a pre-twentieth-century emphasis on handcrafted organic holism or a simple extension of twentieth-century standardization.18 When I purchase my shirt online, there is now a layer of informational practices added to a simple retail purchase. Information about my buying habits is stored, my body size and shape recorded, different products virtually compared, and new means of negotiating payment utilized. The same is true for envisioning the development of organisms. The movement of individual cells and molecules has been tracked, potential outcomes from molecular interactions are evaluated, cellular and physiological changes remembered, and complicated issues involving the physics of different scales of interaction are accounted for. Consequently, the seeming return to holism in biology is predicated upon the detailed regulatory apparatus of molecular development even as it confounds the reductionist assumptions of its practitioners. New forms of regulation have not only shaped my clothes-buying habits, they have also informed how scientists think about the methods organisms use to shape themselves.
From Visualization to Envisioning
Images play an especially important role in industrial economies for understanding how things fit together. Two points are especially important to
my argument, and I appeal to the work of media theorist and philosopher Vilém Flusser to help me explain them.19 The first point is phenomenological. Flusser grasped that readers engage with images differently than they do texts. He argued that reading lines of text requires readers to follow a structure “imposed upon us” as our eyes “follow the text of a line from left to right” and “jump from line to line from above to below it” and then “turn the pages from left to right.” Viewers of images on surfaces, such as a printed page, however, “may seize the totality of the picture at a glance, so to speak, and then proceed to analyze it.”20 Flusser is careful to acknowledge that this isn’t a difference in freedom, where our eyes are free to roam a page when looking at an image, but in the order of synthesis and analysis. Texts require analysis first, which can then lead to specific forms of synthetic meaning making, which Flusser terms “historical thought.” Images, however, provide a synthesized experience that invites specific types of analytics. “The one aims at getting somewhere, the other is already there, but may reveal how it got there.”21 The real difference involves the temporality of perception as each treats the viewer’s relationship to “past, present, and future” in different ways.
Recognizing this phenomenological point is important for understanding how visual technologies are especially useful, but not necessarily required, for the types of open-ended regulations we discussed earlier. As most advertising executives know, images are especially useful for evoking desires. They present worlds where associations can be made and possible futures promised. As most lawyers know, images are very difficult to use for thinking in terms of laws.22 They infrequently provide clear prohibitions (the strong prohibition of a no-smoking symbol is the exception) and often don’t suggest strong and clear causalities. Clearly, different forms of informing are important for suggesting different ways that things can be regulated. This is what makes images so useful for world building. They excel at providing a way to imagine how elements fit together to make a complex scene without necessarily saying how these elements specifically interact. This is one reason why many scientific models rely on visualization—they allow a viewer to see how things might fit together. Flusser’s second important point about images is political economic. Flusser recognizes that images produced through technological means operate differently than traditional images. Consequently, they are implicated differently in meaning making. For instance, technical images are not direct representations of the world; they are mostly composed. The term Flusser uses, and I have appropriated, is “envisioned.” Image makers
(he calls them “envisioners”) draw “the concrete out of the abstract” by arranging bits of data into informative patterns. As anyone who has zoomed too closely into the pixels of a digital image can attest, electronic images are little more than collections of specific data points. What makes them meaningful is how they are arranged (or regulated). The representational veracity of these images, then, comes from the series of commands that organize pixels in specific locations in the image. This act of envisioning always takes place through the capabilities of a technical apparatus. “The technical image is an image produced by apparatuses.”23 This is what makes his point political economic, as all images are the products of the capability of the technological apparatus. In the universe of technical images, then, all images reflect the scientific statements that allowed for their production. “As apparatuses themselves are the products of applied scientific texts, in the case of technical images one is dealing with the indirect products of scientific texts.”24 When one envisions a world with technical images, one is always envisioning this world through the technical capacities of the society that created the imaging apparatus.
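Flusser’s point that a technical image is nothing but arranged data points can be made concrete with a small sketch. The Python below is only an illustration of the idea, with arbitrary brightness values: the same flat run of data, arranged two different ways on a gridded surface, yields two different “pictures.”

```python
# An illustrative sketch: an "image" here is nothing but data points whose
# meaning comes entirely from how they are arranged. The values are arbitrary.

values = [0, 0, 9, 9, 0, 0, 9, 9, 9, 9, 0, 0, 9, 9, 0, 0]

def render(pixels, width=4):
    """Arrange a flat run of data points onto a gridded surface."""
    rows = [pixels[i:i + width] for i in range(0, len(pixels), width)]
    return "\n".join("".join("#" if v else "." for v in row) for row in rows)

print(render(values))          # one arrangement of the data points
print()
print(render(sorted(values)))  # the same data points, arranged differently
```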
This insight doesn’t necessarily suggest that images are now postrepresentation (as other theorists have argued).25 It does suggest that the distinction between representation and imagination is no longer a very interesting epistemic criterion by which to judge scientific images: “The gesture of the envisioner is directed from a particle toward a surface that never can be achieved [because of its abstraction], whereas that of the traditional image maker is directed from the world of objects toward an actual surface.”26 Consequently, envisioners are much more interested in using data (visual data bound with the codes of the apparatus) to create informative configurations of images. “Envisioners press buttons to inform, in the strictest sense of that word, namely, to make something improbable out of possibilities. They press buttons to seduce the automatic apparatus into making something that is improbable within its program.” Traditional images inform by either reflecting (representation) or escaping (imagination) the normative visual qualities of a world. Technical images inform by creating unlikely images that reveal new relationships.
Let’s analyze a specific image from evolutionary and developmental biology to anchor Flusser’s abstract insights with the particularities of biomolecular practice. Figure I.1 shows a drosophila embryo stained so that a viewer can visualize the expression of the eve, or even-skipped, pair-rule gene in the upper part of the image.27 This regulatory protein is expressed as a series of discrete bands, in a modular fashion, arranged along the longitudinal axis

Figure I.1. A fruit fly embryo stained for the eve, or even-skipped, pair-rule gene (the seven prominent bands in the upper image and the seven less prominent bands in the lower image). The eve gene has seven different regulatory sequences that can be activated independently. Each of these regions is responsible for producing a discrete band. Each band, then, acts as a module that is both dependent upon its placement in the whole organism and able to act independently. The fluorescent stipples dispersed across the bottom image help visualize individual nuclei of cells. The diffuse central band on the bottom image is the product of the gene kruppel. This beautiful picture is also a testament to the sophistication of visualization techniques in biology that can combine the visual detail of confocal microscopy, the color of multiple forms of fluorescent staining, the specificity of monoclonal antibodies, and genetically engineered probes to identify single molecules. Photograph courtesy of David W. Knowles and Mark D. Biggin of the Berkeley Drosophila Transcription Network.
of the embryo. The bottom image combines the eve staining pattern with a stain that creates a stippled pattern used to visualize each of the cells in the embryo. The more diffuse single central band is a stain specific for the gene kruppel.28 Eve is a pair-rule gene and kruppel a gap gene: if either is mutated, whole segments of the developing drosophila larva will be skipped during development. With eve, the even-numbered segments of the larva are skipped during development, while kruppel mutations delete a massive block of segments from the middle of the embryo (kruppel is German for “cripple”). During the decades
between 1990 and 2010, the pages of journals such as Nature, Science, and Cell were saturated with gorgeously detailed colored pictures tracking the expression of single molecules in (mostly) drosophila larvae. What makes these images a good example of Flusser’s concept of envisioning?
First, these images are not direct representations in the same way that a traditional image is. They were produced using highly specific stains (using either monoclonal antibodies or specific nucleotide sequences) that effectively amplify the visual presence of molecules within a complex molecular soup. They could not be visualized without these stains. Second, the imaging apparatus, confocal microscopy, is different from conventional light microscopy. Confocal microscopy is known for its ability to provide complex images with especially high resolution. These images are created, however, by scanning the whole specimen as a series of closely focused two-dimensional planes that are then reassembled into a grid of three-dimensional images with stunning detail. Confocal microscopy is especially useful in the biological sciences as it gives researchers the ability to visualize planes of three-dimensional objects with relatively little invasiveness. And finally, but certainly not exhaustively, the whole ability to identify and map these molecules in the first place comes from the creation of mutant flies that have lost the ability to express these genes. The ability to envision the normal banding of a developing fly is predicated upon the unlikely events of seeing mutant flies in the past. As Flusser argues, “Envisioners press buttons to inform, in the strictest sense of that word, namely, to make something improbable out of possibilities.”29 So, as you see, the technological coding that produces images of gene expression during development requires multiple levels of envisioning in a technical society. This point was impressively brought home to me when I ordered this image from the Berkeley Drosophila Transcription Network (BDTN). Specifically, I learned that no one fly is represented by this image. The BDTN created this composite image at my request by envisioning data from multiple images that they already had stored in their database. This image of eve expression in drosophila is what Flusser is referring to when he writes that envisioners draw “the concrete” from an “abstract” collection of particles or data points. It’s not that this image isn’t real, as it actually portrays how eve is expressed during drosophila development. It’s just that judging this image through the categories of real vs. unreal is less informative than with traditional images.
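Two of the operations described here, stacking optical sections into a volume and averaging stored images into a composite, can be sketched schematically. The NumPy fragment below is a toy with made-up array sizes, not the BDTN’s actual pipeline, but it shows the sense in which the resulting composite portrays no single fly.

```python
# A schematic sketch (assuming NumPy is available) with invented array sizes:
# optical sections from a confocal scan are stacked into one 3-D volume, and
# several already aligned embryo images are averaged into a single composite.

import numpy as np

rng = np.random.default_rng(0)

# 1. A confocal scan as a series of closely focused two-dimensional planes,
#    reassembled into a three-dimensional volume (a z-stack).
sections = [rng.random((64, 64)) for _ in range(20)]  # 20 hypothetical optical sections
volume = np.stack(sections, axis=0)                   # shape: (20, 64, 64)

# 2. A composite drawn from many stored images: the pixelwise mean portrays a
#    typical expression pattern rather than any single embryo.
stored_embryos = rng.random((12, 64, 64))             # 12 hypothetical registered embryos
composite = stored_embryos.mean(axis=0)               # shape: (64, 64)

print(volume.shape, composite.shape)
```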
Some of what Flusser argues will appear familiar to readers of media theory. Flusser’s recognition of the importance of the medium (or in
Flusser’s case, apparatus) in shaping messages can at times seem like Marshall McLuhan’s emphasis that the medium is the message. Flusser’s focus on the role of envisioning to create anomalous outcomes, however, shifts the focus of analysis away from the normative effects of a medium in meaning making.30 It’s not so much that “the medium is the message,” but that under specific circumstances envisioners can inform using media. His characterization of the abstract qualities of technical images can, at times, seem like Jean Baudrillard’s depiction of the “hyper-real.” Flusser, though, emphasizes the production of concrete results from abstract processes, instead of allowing all meaning to implode through symbolic exchange.31 In a sense, Flusser stands Baudrillard on his head as he is interested in how the abstract elements of symbolic exchange still inform concrete processes. Also, in some circumstances, Flusser’s view of envisioning can seem a lot like Alexander Galloway’s conception of protocol, where envisioners and apparatuses are following protocols to create images.32 Yet, there is a major difference. Flusser thinks that the use of technical images is industrial and not just informational. Consequently, the logic of the apparatus is also a chemical logic of association and not only an informational association through protocols. This is an important distinction as the experiences of industry might be chemically synthesized, but they are most certainly not informationally propositional in the way that protocols are. These distinctions are important for my argument as they help me more deeply explore how bodies became industrially calculable before they became informationally computable. They also suggest why the industries of advertising and entertainment are especially useful for envisioning life in the twenty-first century. Understanding how technical images are produced helps one to realize that bubbling below the slick sterile surface of the modern biological laboratory is an amusement show of special effects. Excess, illusion, and imagination fuel our hunger for knowledge about living organisms, shape the content of what it is we find, and drive the limits of what we think possible.
The Chapters Envisioned
Flusser’s two insights about images, that images promote associative thought and that technical images reflect the codes of a technical society, allow me to argue several important claims in the book that follows. The first insight is that the associative power of images has been especially important for biology. Chapter 1, “Life on the Line: Organic Form,” explores this
insight through an analysis of the images of nineteenth-century morphologist Ernst Haeckel by looking at how the aesthetics of his images relate to his theories of life. It begins by asking how one illustrates that something is alive. In addressing this question, the chapter then analyzes how in Haeckel’s work, a materialist idea of life as organic (meaning composed of carbon) is in tension with a formalist idea of life as organic (meaning the form of the organism regulates the assembly of the parts). The chapter traces these ideas through a formal analysis of Haeckel’s use of curved lines in his masterwork, Kunst-Formen der Natur. It argues that Haeckel used an openly curved or wavy line to suggest vitality and a closed circular line to build architectural volumes for his forms. The chapter closes with a political analysis suggesting that an aesthetic analysis that privileges organic forms is insufficient to analyze the types of oppression that occur in twenty-first-century society.
The second insight from Flusser developed by this book is that the codes of technical images reflect the codes and values of an industrial society. This insight structures much of the content of chapter 2, “Envisioning Grids.” This chapter documents the adoption of grids in the twentieth century as a publishing and display aesthetic: first in advertising and promotions and then in scientific print and electronic publications. The chapter suggests that grids promote two moments in design: a moment of partitioning a complex process into simpler elements and a moment of the reconstruction of these elements in a larger assemblage. This second moment is especially important when looking at the use of grids in the history of graphic design as it allows designers to build diverse displays from similar items. According to philosopher Vilém Flusser, this constructivist moment is due to the visual “magic” performed by images as they create meaning by demonstrating associations across surfaces. Flusser then contrasts this form of associative meaning making with the historical and sharply causal knowledge promoted by texts. The twentieth century, especially, saw a proliferation of images, most of which were made using chemical and electronic technologies. Almost all of these images used the disciplinary logic of grids in their construction. According to Flusser, technologies now create images by assembling representations from quanta, bits, dots, or grains of silver. These quanta, when taken individually, are inherently absurd in that they have no intrinsic meaning. They only acquire meaning when they are assembled into specific patterns through the associative power of grids. What technological images represent, then, is never just the content of the
image, but the political economic circumstances that produce the image and give it meaning. The chapter ends by showing how the development of printed scientific journals in the late twentieth century and the display of web-based scientific journals adopted the regulative power of grids to bring coherence to the disparate facts, values, and sensory experiences produced by a technologized society. Science is thus seen as a constructivist practice in that it uses social codes to make meaning from inherently absurd collections of data. The idea of construction, though, has now changed from a conception of building up potential meanings from parts to a conception of using the codes of technical images to create visual constraints. Constructivism isn’t just a simple assembling of things but a path to creating meaning by limiting the inherently powerful act of association involved in all image making.
The third insight from Flusser, that unlikely events are the most informative, drives much of the analysis of chapter 3, “Warped Grids: Pests and the Problem of Order.” The point behind this chapter is that grids are more than ideal conceptual constructs; they can be strangely responsive material constructs for ordering actual spaces. This chapter takes the historical phenomenology of grids developed in chapter 2 and applies it to a cultural analysis of how bodies, grids, and regulation were related in mid-twentieth-century popular media. It begins with an analysis of a poster advertising 20th Century Fox’s 1958 motion picture The Fly to suggest how the seemingly unrelated spheres of the teleportation of goods and the elimination of pests are products of a desire to develop new strategies for regulating bodies. In the movie, the body parts of a housefly and a man are switched during a teleportation experiment when the teleportation device misregulates the reassembly of the two organisms. It then shows how this idea of mutation by misregulation was originally developed in biology in the work of biologist William Bateson, the father of cyberneticist Gregory Bateson. It does this by concentrating on how William Bateson’s focus on the role of variation in inheritance is an important milestone for thinking about the importance of regulation of bodies (most specifically through his studies of homeovariants). The chapter then moves to an analysis of Foucault’s later work on biopower and its relationship to the development of neoliberalism (from Security, Territory, Population as well as The Birth of Biopolitics) that casts life and liberty as a problem of the regulation of the circulation of goods and bodies. The chapter ends with thoughts on how the regulative function of grids promotes some lives over other lives,
suggesting that the way a society circulates goods contributes not only to “the power to ‘make’ live and ‘let’ die,”33 as claimed by Foucault, but also to the very terms of what life might be and how it structures itself. It makes this point, however, by demonstrating how in society and biology, grids are always material orders of spaces. Most complex spaces consist of many grids, or types of order, existing in relation with each other. So, although grids have defined the possibilities of life, grids can also interact, effectively warping each other, to create degrees of freedom for the pests they intend to control. This is an important insight as it broadens one’s political analysis away from the identification of conditions to seeing how conditions allow for specific types of futures.
The fourth insight is that envisioning is not only a property of image surfaces; its use on surfaces also helps define how we look at things, in this case how organisms are regulated during development. Chapter 4, “Modulations: Envisioning Variations,” begins by demonstrating how important the concept of “modules” is for the development of contemporary biology and for the discipline of evolutionary and developmental biology specifically. Modules, the individual panels within a grid, are defined as having two key properties: they possess a degree of autonomy, meaning that they can act as individual agents with internal dynamics, and they are integrated with other modules, meaning that the functions of the entire grid depend upon the relationship of modules to each other. I locate William Bateson’s theories on variation as a key moment in biology for thinking in terms of modules. Through an analysis of Bateson’s illustrations and writing, the chapter suggests that Bateson’s ideas on segmental variation are an important precursor to the development of modularity in evolutionary and developmental biology. Bateson identified two types of variations: substantive variations, based on how a part is put together; and meristic variations, based on how the parts are arranged. The chapter then turns to the work of Nobel Prize-winning geneticist Edward B. Lewis to demonstrate how this concept of modularity works visually in Lewis’s illustrations, as well as conceptually, in Lewis’s arguments about how development occurs as a sequence of specific genetic events. According to Lewis, the expression of a phenotypic trait, such as the wings of a fruit fly, depends on how molecular events are regulated at the time the trait develops. Sean Carroll has elegantly described this conceptual variation as the “logic of making a series of initially similar modules and then making them different from one another.”34 This concept of development synthesizes a depiction of the developmental
sequence (as found in Haeckel) with the importance of variance (as found in Bateson). Most importantly, this logic of making modules and then varying them into new things allows for the development of an important insight in evolutionary developmental biology, that regulation can lead to variance despite the similarity of materials and processes.
The final chapter of the book is an extension of Flusser’s theories on technical images into the role of grids and layers in envisioning, and controlling, change over time. Chapter 5, “Drawing Together: Composite Lives and Liquid Regulations,” begins where the last chapter left off, with the work of Ed Lewis. It does so, though, by returning to the question asked in the first chapter: how do you illustrate a living thing? Ed Lewis seems to have thought that he could best illustrate development by learning animation. Lewis used stop-motion animation to put his ideas into motion so that he could show how mutations could be used to understand the steps of developmental sequences. This chapter reads Lewis’s animations through the lens of Thomas Lamarre’s theories of animation, where changes over time derive from the differential layering of images over each other, a technique in animation called “compositing.”35 I argue that scientists use animation to draw multiple types of evidence together in a single presentation and then order them alongside each other. This technique allowed animators to freely mix texts and diagrams with live-action images, juxtapose different rates of change, and freely change scales between molecular, cellular, and organismic interactions. Biological explanations, I argue, are at their most robust when they draw on multiple sources to present compelling visions of life.
The Importance of Aesthetic Analysis
Sprinkled throughout these chapters are several important insights about what it means to live composite lives within a grid. I will close this introduction by pointing to a few that I think are the most important. The first of these is the claim that how we order the world is a product of how we envision it. This insight carries with it aesthetic as well as epistemic consequences. Scholars of biology have long used the categories of “form” and “function” to explain why organisms are created the way that they are.36 An analysis of an organism’s functions often stresses how specific traits help an organism exploit an ecological niche. A simple example of this is how the shape of a fish’s fin helps that fish swim. An analysis of an organism’s form, however, usually stresses how
the parts of an organism have been ordered to fit together in the whole organism. Vestigial organs, such as the human tailbone, for instance, are often thought to be preserved because of similarities in development between humans and other primates even though humans have no tail.
As I explore, mainly in chapters 1, 2, and 4, while I see a resurgence of an emphasis on form in biology, we must not be too quick to assume it is a simple return to the holism of nineteenth-century biology, as some have done. I concur with Richard Burian when he claims, “The key to the integration of organisms is not dependent on a master plan, but on the coordination of quasi-autonomous modules.”37 In fact, I would even go so far as to claim that the formal principles operative in biology today are similar to those described by philosopher Gilbert Simondon in his important L’individuation psychique et collective.38 Simondon rejects the idea of form as hylomorphic in that it presupposes what something will become. Instead, he posits the idea of “information” as a process that occurs through the transductive ontogenesis of an individual.39 I believe, however (and this is very different from Simondon), that the history of twentieth-century biology suggests that understanding how bodies are informed also demands understanding how modules interact through the relationships of grids.
Not Just Computing
As I mentioned earlier, others have noticed the shift toward world building in biology but tend to emphasize the use of computation and algorithms as the historical catalyst. For instance, Alireza Iranbakhsh and Seyyed Hassan Seyyedrezaei argue that “The advent of computer and information technology thereafter has been making such conspicuously remarkable changes in every aspect of life that its importance cannot be over accentuated.”40 I understand the seductiveness of this view of history. Biologists today use computers to create databases and evaluate data, process images, keep records of samples, and model outcomes. Even seemingly mundane scientific tools, like water baths, are now computerized to lend them greater precision and programmability. Specifically, scholars have tended to identify the computer’s impact through two different mechanisms: the increasing number of uses to which computers are put in biology41 or the use of computational and informational metaphors to
explain biological processes.42 I think there are compelling historical, sociological, and political reasons to complicate these narratives as they tend to underemphasize the amount of experimental and conceptual work that went into making bodies calculable in the first place and underestimate how firmly rooted scientific practice is in contemporary streams of capital.
My first issue with this claim is historical. Biologists turned to computation later than their colleagues in other disciplines, and even when they did adopt computers, it was initially only in a few applications with robust institutional and conceptual ties to other disciplines. As Robert Ledley wrote in 1959, most biologists and medical professionals using computers “are . . . relatively isolated research workers who are, with only few exceptions, people with extensive cross-disciplinary backgrounds.”43 One of the key places where computers made their presence felt was in molecular modeling, most notably with the studies of John Kendrew on myoglobin, where increasing the resolution of the X-ray crystallographic data demanded more complex Fourier synthesis calculations.44 As Ledley points out, “Computers are being used to assist in the complicated and extensive computations that are frequently involved in obtaining information about the precise atom-structure or the over-all size and shape of crystallizable molecules, from x-ray diffraction patterns.”45 As important as it is to recognize how computers were used for calculating model molecular structures, it is also important to recognize that researchers in other fields used different tools for helping them visualize complex problems. This consideration is especially important for understanding the work of geneticist Ed Lewis, whose experiments play a central role in the book that follows, as he turned to learning stop-motion animation to help him explain his models for genetic contributions to development. Animating his model allowed Lewis to depict the complicated developmental changes of fruit flies while combining representational practices of illustration and live-action photography at a time when computers could not.
The second issue with this claim is sociological. Many recent sociologies of science have stressed the importance of work as an indicator of the effort that needs to go into making scientific claims.46 The focus on work as an analytic allows researchers to understand the important role played by an investigator’s tacit knowledge, the interaction of scientists with instruments and samples, and the institutional support that goes into making scientific claims. It also helps reveal how making items fit into grids requires constant policing. Much like a garden, scientific insights need to be tended.