
EXPOSING THE DATA CENTER Ivan Sergejev

Thesis submitted to the faculty of the Virginia Polytechnic Institute and State University in partial fulfillment of the requirements for the degree of Master of Architecture in Architecture at the College of Architecture and Urban Studies School of Architecture + Design

Hans C. Rott James Bassett David Dugas

08/31/2012 Blacksburg, VA

Keywords: internet, cloud, data center, architecture, design


EXPOSING THE DATA CENTER Ivan Sergejev

ABSTRACT

Given the rapid growth in the importance of the Internet, data centers – the buildings that store information on the web – are quickly becoming the most critical infrastructural objects in the world. However, so far they have received very little, if any, architectural attention. This thesis proclaims data centers to be the “churches” of the digital society and proposes a new type of publicly accessible data center. The thesis starts with a brief overview of the history of data centers and the Internet in general, leading to a manifesto for making data centers into public facilities with an architecture of their own. The paper then proposes a roadmap for the possible future development of the building type, with suggestions for placing future data centers in urban environments, incorporating public programs as part of the building program, and optimizing the inner workings of a typical data center. The final part of the work concentrates on a design for an exemplary new data center, buildable with currently available technologies. This thesis aims to: 1) change the public perception of the internet as a non-physical thing, and of data centers as purely functional infrastructural objects without any deeper cultural significance, and 2) propose a new architectural language for the building type.


CONTENTS

Introduction
    Hardware
    Alien

Opportunity
    The Cloud
    Condition
    Tendencies
    Manifesto

Roadmap
    Argument for the City
    Rhizome
    The Robot Next Door
    Interface
    Inefficiencies
    Science Fiction

Project
    Concepts
    Site
    Program
    Massing
    Logistics
    Server Component
    Support Spaces
    Office Component
    Public Route
    Facade
    Structure
    Drawings and Renderings

Epilogue

Notes

Image Credits

Acknowledgements




INTRODUCTION



1 HARDWARE


“It’s all hardware, dude,” my long-time friend Taaniel Jakobs, a developer of the social networking service Zerply.com, told me towards the end of our Skype chat one late April evening of 2010. It had been a while since we had talked, as he was in San Francisco participating in a high-profile start-up workshop, and Skype was by far the cheapest and easiest way of staying in touch from across the ocean.

I knew Taaniel through a common friend of ours, and the more I learned about the guy, the more interesting our conversations became. Most interesting to me was the fact that he had the audacity to call himself an “architect”, although he had nothing to do with architecture. From what I knew, I was the architect - he was an internet developer. I designed buildings - tangible things made of real materials, built to last for years and designed to make people’s lives better. He coded websites - the ephemeral, “non-physical“ world of the “web”, where everything lasted for a fraction of what I was used to working with.

That particular night I was expressing to Taaniel my fascination with Mark Zuckerberg - the co-founder of the social networking site Facebook. He had become the youngest billionaire in the world on the basis of something no one really knew how to define. Social networking was not a product one could sell, or a service one could charge for. Nevertheless, the guy made billions. In my oblivious exhilaration, I was preaching to Taaniel that the phenomenon of Facebook was a step towards the promise of the internet: to help us become “pure energy beings” by liberating us from the iron grip of physical limitations, to allow us to file our taxes with ease, make money from nothing, travel and see other people without ever leaving the comfort of our living room - even live better lives in artificially constructed online realities. The internet, for me, was the antithesis of the physical and spatial everyday reality we lived in.

Turned out I was wrong.



2 ALIEN

In May 2011, I got an internship with a small boutique architecture office located in New York City. The job was great because, among other things, it had regular hours and was situated in Tribeca. Every night I would leave work and, instead of heading home, take a walk in a random direction, discovering the city a few blocks at a time.

Notes: 1 TenantWise.com Incorporated (2003). Special Report: WTC Tenant Relocation Summary. Retrieved from http://www.tenantwise.com/reports/wtc_relocate.asp

During my walks, I realized that the grain of Manhattan seemed to be made up of two major elements: walls and windows. Whenever you ran into something that deviated from this formula, you knew it was either a museum or a monument of some sort.

2 AT&T Long Lines Building. Retrieved January 12, 2014 from http://www.nycarchitecture.com/SOH/SOH026.htm

One evening though, I happened to stumble onto something I could not quite figure out. It was a monolith of a building with no windows and the scale of a skyscraper. It is typical for high-rises to have a single windowless wall, usually containing the core (shafts, stairs and elevators). This building, however, had no windows at all, and its ownership was not marked in any way (Figure 1).

3 33 Thomas Street. Retrieved January 12, 2014, from http://en.wikipedia.org/wiki/33_Thomas

I walked around the block in an attempt to figure out what the thing was. The huge vents on top of the tower gave me a clue that it might be an infrastructural facility (a ventilation tower for the subway system of Lower Manhattan, I figured), but why would someone put it there - a building apparently unoccupiable by humans - in a place with some of the highest land values in the world?

4 Blum, A. (2012). Tubes: the Journey to the Center of the Internet. New York City, NY. HarperCollins Publishers. p. 202.

Intrigued by my new acquaintance, I went home and typed the address into Google: 33 Thomas St. It turned out the structure was a utility building owned by AT&T1 (also known as the “AT&T Long Lines Building”)2, and contained major long-distance telephony switches, in addition to some “highly secure data center space”3, all located on one of the largest intercontinental cable thoroughfares in the world4. This encounter turned out to be my first contact with the physical location of the internet and the “hardware“ Taaniel was talking about.

Figure 1: 33 Thomas St., New York.



OPPORTUNITY



THE “CLOUD”


Frequent users of the internet have probably noticed that recently the network has been overwhelmed by the term “cloud”. It seems to be everywhere, and every internet provider we use seems to be offering “cloud” services. For example, if one needs to transform a set of pictures into a 3D model, one can use AutoDesk’s 123D Catch web app1. Like a lot of other services, it is based “in the cloud”. One just needs to upload the pictures onto a website, click “Convert”, and in a few minutes receive a file containing a 3D model in mesh form, accessible with a number of 3D modelling software packages. No need to download or install anything – it is all done “in the cloud”. Another example, probably more familiar to most of us, is a service called DropBox2. DropBox allows customers to store their data online and access it from any device. One does not need to worry about the data being accidentally lost as a result of a computer malfunction – all their “stuff” is in the cloud, available anywhere, anytime. Hence a question arises: what is this “cloud” (Figure 1)?

Notes: 1 AutoDesk’s 123D Catch web app link: http://www.123dapp.com/catch

2 DropBox link: https://www.dropbox.com/

3 Coulouris, G., Dollimore, J., Kindberg, T., Blair, G. (2011). Distributed Systems: Concepts and Design (5th Edition). Boston. Addison-Wesley. 4 Ledford, H. (2010, October 14). Supercomputer sets protein-folding record. Nature. Retrieved from http://www.nature.com/news/2010/101014/full/news.2010.541.html

A “cloud” is a popularized term that originated from “cloud computing”, also known as “distributed computing”, the underlying logic of which is that multiple computers are connected via a real-time link, and a particular task is shared and performed on all of them simultaneously3.

Figure 1: The “Cloud”.

Thus, separate computers contribute their computational power to tackling a single task, such as, for example, simulating protein folding4, running an email application, or even storing a large amount of data. If one now imagines a lot of separate machines connected, for example, across the United States, the geographical map of such a structure would arguably start to resemble a cloud – a formless object comprised of a multitude of tiny elements – similar to a meteorological cloud that consists of droplets of water, or the cloud of electrons that revolve around the nucleus of an atom (see Figure 2 on the next page). As in nature, a computational cloud can change shape. Imagine a set of computers subdivided into two sub-sets, with each sub-set performing a certain task. When the computational power required to perform one of the tasks decreases, the computers associated with it can switch and join the second sub-set, thus contributing to the second task being performed faster or better. Thus, a computer cloud is not a stable thing – it can rearrange and “scale” (grow or shrink in size) according to computational needs. A great thing about the cloud is that it allows the end user (like me or you) to have a much lighter portable device. If we assume that the data is stored and “heavy” computations are performed in the cloud, the only thing the end user needs is an interface, a “portal” to access that data – a task that



can be performed by a very simple device. Essentially, it is the cloud that makes devices like tablets and netbooks possible: they can be light and “dumb” because they do not need to perform any operations themselves; instead, they merely provide access to whatever is being carried out in the cloud. Cloud computing is not a new concept. It has been around since the 1950’s, but was not called by its current name until around the mid-2000’s. There is a lot of debate as to who exactly coined the term “cloud”. There has been quite some drama around its origins, such as, for example, the highly publicized and criticized trademark application by Dell for the term “cloud computing” in 20075, or the earlier refused trademark application by entrepreneur Sean O’Sullivan of the now-defunct company NetCentric in 1997, which explicitly had the phrase “cloud computing” in its body6. The cloud seems to be one of those things a number of people arrived at independently and started using simultaneously. Whatever the origins, the term started rapidly gaining popularity in the mid-2000’s and is all but ubiquitous by now.
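To make the idea of sharing a single task across many machines concrete, here is a minimal, hypothetical Python sketch (not taken from any particular cloud service) that splits one job across a pool of worker processes on a single computer; in an actual cloud the same pattern runs across separate machines connected over a network, and the pool can grow or shrink as demand changes.

```python
# A minimal sketch of "distributed computing" on one machine: a single task
# (summing a large range of numbers) is split into chunks and handed to a
# pool of workers running in parallel. In an actual cloud, each worker would
# be a separate machine reachable over the network.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    """Worker job: sum one chunk of the overall range."""
    start, end = bounds
    return sum(range(start, end))

if __name__ == "__main__":
    n, workers = 10_000_000, 4          # the task, and how many workers share it
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]

    # The "cloud" can scale: add or remove workers and the task is simply
    # re-divided among whoever is available.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        total = sum(pool.map(partial_sum, chunks))

    print(total)  # same answer as sum(range(n)), computed in parallel
```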

Figure 2: The map of the internet resembling a cloud. Image credit: The Opte Project7. A partial map of the Internet, rendered based on ping delay and colored based on TLD. Explanation from the Wikipedia Computer Science portal, “Selected Image” category: “Partial map of the Internet based on the January 15, 2005 data found on opte.org. Each line is drawn between two nodes, representing two IP addresses. The length of the lines are indicative of the delay between those two nodes. This graph represents less than 30% of the Class C networks reachable by the data collection program in early 2005. Lines are color-coded according to their corresponding RFC 1918”. Retrieved on January 12, 2014 from http://en.wikipedia.org/wiki/Portal:Computer_science

Notes: 5 Condon, S. (2008, August 8). Dell unlikely to get trademark for ‘cloud computing’. CNET (online). Retrieved from http://news.cnet.com/8301-13578_3-10011577-38.html

6 Regalado, A. (2011, October 31). Who Coined ‘Cloud Computing’? MIT Technology Review (online). Retrieved from http://www.technologyreview.com/news/425970/who-coined-cloud-computing/

7 Opte project link: http://www.blyon.com/philanthropy.php

Although initially only large corporations and research organizations could afford, or were allowed, to use “clouds” (see Figure 4 on the next page), by now regular “mortals” have been given access to the cloud too. As a matter of fact, most of us use it on a daily basis – our e-mails, pictures, and documents are all there; most of the online services ubiquitous today, such as Facebook, Twitter or Google Docs, would never have been possible without the cloud. For example, once pictures are uploaded onto Facebook, one can delete them from one’s own machine (computer, camera, or whatever device was used to take and store them), as they will stay securely in the cloud. Another example could be a blog post which, unless a person prefers to have a copy of everything they do, is typically not written in a text editor such as Microsoft Word, but typed directly online – into the cloud. Then, once it has been posted, that is where it will remain – in the cloud. The cloud enables our life the way we are used to it today. However, despite the fact that the cloud holds our data, friendships and memories, it is surprisingly hard to get into a cloud, or even pinpoint where exactly it could be. Because the cloud has liberated us from the necessity of keeping track of how much “space” we have left on our hard drives, we tend to forget that our data is still a physical thing. Every picture and every comment we have ever posted is stored on a hard drive somewhere. However, we lack any access to it. Be it as shape-changing and ephemeral as it wants, a cloud is a very physical thing, and it lives in very specific physical locations that were built specifically to accommodate it – data centers.



Figure 3: Ferranti Mark 1 computer. Manchester, England. 1951. Image source: Van Buskirk, E. (2008, June 20). First Videogame Music Recorded in 1951. Wired (online). Retrieved from http://www.wired.com/listening_post/2008/06/firstvideogame/

Figure 4: A sketch of the initial map of ARPANET - the original “internet” - by Larry Roberts, dating to the late 1960’s. We can see clearly just how small and exclusive the network’s “early adopters” club was. Image source: Hafner, K., Lyon, M. (1998). Where Wizards Stay Up Late: The Origins Of The Internet. New York. Touchstone.

Figure 5: Internet world map 2007 by IPIntelligence. As we see, the density of nodes is much different from the one in the image above. Image source: http://www.ipligence.com/worldmap/

CONDITION

It all started with those first computers that took up entire rooms (Figure 3). As computer components and their overall size gradually got smaller (according to “Moore’s Law”8), we went from one super-computer, which was a single integrated machine, to super-computers comprised of multiple connected machines. Today, a typical super-computer is not a unique singular super-machine, but rather a massive agglomeration of relatively standard machines, which get their “super” from the ability to parallelize9 a task and execute it together, as well as from the complexity of the infrastructure that ties them together. Thus, super-computers are the original clouds.

Notes: 8 Moore, G.E. (1965, April 19). Cramming more components onto integrated circuits. Electronics Magazine. Retrieved from http://www.cs.utexas.edu/~fussell/courses/cs352h/papers/moore.pdf

9 Gottlieb, A., Almasi, G.S. (1989). Highly parallel computing. Redwood City, Calif. Benjamin/Cummings.

Although even spatially dispersed individual machines could theoretically form a cloud, the least latency and maximum integration are achieved by keeping them in tightly-knit clusters. With time, the number of connected machines in such clusters gradually grew into the thousands, and a new type of space was developed with specific features such as raised floors providing space for ventilation ducts, massive under-ceiling trays for cables, and standardized racks for individual machines. Such a space came to be called a “data center”. Initially, these kinds of spaces were located within the actual agencies and organizations using them, such as, for example, some select research institutions (Figure 4). Later, small businesses could build a data center inside their offices by just cramming a bunch of machines in a corner and walling them off. However, with the growth of the industry and increasing connectivity (Figure 5), data centers became a species of their own, with requirements – such as ventilation and security – far too strict to be met by a room somewhere. Dedicated facilities needed to be built to accommodate them. This was the birth of the modern data center (see Figure 6 on the next page). Typical data centers are not super-computers in that they do not actually perform massive computational tasks, but rather store data and applications. They are one of the major parts of the infrastructure of the internet, the other three being fiber networks, internet exchanges, and of course us – the end users with our devices. Fiber networks are the actual cables that connect separate machines across the globe and through which information travels in the form of binary light signals – zeroes and ones. Internet exchanges are the facilities into which multiple cables run and where they get interconnected. If fiber networks and exchange centers are the links that enable connectivity, then data centers are the end destinations for information on the internet. When we click a link on the screen, the request from our computer goes through the fiber network, and sometimes multiple exchanges, before it arrives at a data center where the actual content of the link is stored. Once the information is accessed, it is sent back to us via the network to be displayed. Thus, a data center somewhere is always the end destination and “home” for the information we upload or request.
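As a small illustration of that round trip, the sketch below uses only Python's standard library to resolve a hostname into the address of a physical server and fetch a page from it; example.com is simply a stand-in hostname used for illustration, not a reference to any specific facility.

```python
# What happens "behind a click", in miniature: a name is resolved to the IP
# address of a real machine (sitting in a data center somewhere), a request
# travels out over the network, and the stored content comes back.
import socket
from urllib.request import urlopen

host = "example.com"  # stand-in hostname for illustration

# 1) DNS lookup: the name resolves to the address of a physical server.
ip_address = socket.gethostbyname(host)
print(f"{host} resolves to {ip_address}")

# 2) The request travels over fiber (possibly through several internet
#    exchanges) to that server, which sends the stored content back.
with urlopen(f"http://{host}/") as response:
    page = response.read()

print(f"received {len(page)} bytes from the server at {ip_address}")
```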



Figure 6: An image of a top-notch modern data center: aisles upon aisles of “cyberrific”13 racks and blinking lights - Facebook’s data center in Prineville, OR. Image source: Brinkmann, M. (2011, April 8). Facebook Open Compute Project. Ghacks.net (online). Retrieved from http://www.ghacks.net/2011/04/08/facebook-open-compute-project/ Image credit: Open Compute Project.

Figure 7: Microsoft’s Dublin data center: a typical data center exterior - no different from a warehouse. Image source: Miller, R. (2009, September 24). Microsoft’s Chiller-less Data Center. Data Center Knowledge (online). Retrieved from http://www.datacenterknowledge.com/archives/2009/09/24/microsofts-chiller-less-data-center/ Image credit: Microsoft Corporation.

When driving by a data center nowadays, one might not be able to distinguish it from a typical warehouse or logistics terminal (Figure 7). They share the same language of corrugated steel and massive proportions, with a few unique features. From the author’s experience, the two main attributes that reveal a data center are: 1) the virtual absence of windows and only a few points of access (as opposed to warehouses, which typically have a large number of docks), and 2) massive HVAC systems that are supposed to keep the equipment inside a data center at an optimal temperature of 61-75°F10. The same features make it easier to spot a data center in urban conditions, where its unfriendly, windowless and bland self sticks out like a sore thumb among the typical urban grain of windows and doors. Today, a lot of data centers are, indeed, warehouses outfitted with nothing but the “shell” of the building itself and the necessary infrastructure, such as power, “backbone” (fibers), ventilation, and security, with the insides rentable by square footage and power capacity (Figure 8). The internet service providers who build and operate data centers guarantee the anonymity of their occupants and the functionality of the data center at all times, regardless of climate, disaster situations, etc. Data centers are rated into “tiers” according to their reliability. As an example, the Uptime Institute rates data centers into four tiers, with the simplest one being Tier I, and the most sophisticated – Tier IV. According to their “Data Center Site Infrastructure Tier Standard: Topology” (2012)11, a Tier I data center is a simple server space that, among other things, needs to be fully shut down once a year for maintenance and does not have redundant infrastructure. In comparison, a Tier IV data center has fully redundant and concurrently operating systems, and remains fully available even when a capacity component is removed for planned maintenance, or simply fails. Since the inception of data centers, there have been a few issues that have governed their operations and design.

Figure 8: Outfitting a data center. Image source: Hamilton, D. (2011, November 16). Mozilla Building Out 1 Megawatt Santa Clara Data Center. Data Center Talk (online). Retrieved from http://www.datacentertalk.com/2011/11/mozilla-building-out-1-megawatt-santa-clara-data-center/

Notes: 10 ASHRAE (2008). Thermal Guidelines for Data Processing Environments (PDF Document). Retrieved from http://tc99.ashraetcs.org/documents/ASHRAE_Extended_Environmental_Envelope_Final_Aug_1_2008.pdf

11 Uptime Institute, LLC (2012). Data Center Site Infrastructure Tier Standard: Topology (PDF Document). Retrieved from http://www.uptimeinstitute.com/publications#Tier-Classification

12 ConnectKentucky (2007). “Global Data Corp. to Use Old Mine for Ultra-Secure Data Storage Facility” (PDF Document). Connected ConnectKentucky Quarterly. p.4. Retrieved from http://www.connectkentucky.org/_documents/connected_fall_FINAL.pdf

13 Blum, A. (2012). Tubes: the Journey to the Center of the Internet. New York City, NY. HarperCollins Publishers. p. 93, par.2.

First, security. Security has been an integral part of the internet, and by extension of data centers, because the internet was conceived as a super-exclusive research network. Later, as large organizations started to use the internet, security had to be provided because of the sensitivity of their data. This obsession with security explains why data centers are among the stealthiest of modern infrastructural facilities, and why they are routinely outfitted with biometric equipment such as finger or retina scanners, fences, and roadblocks on the roads leading to them. At the same time, having become behemoths of 348,000 sq. ft. and more (such as a facility visited by the author in Ashburn, VA), they are vulnerable to terrorism, among other threats. In search of secure space, some critical data centers have been literally put inside mountains and other subterranean environments (e.g. the WikiLeaks or Global Data Corp. servers)12. Another major issue for data centers is power and ventilation. The



Figure 9: Diagram of current data center design inefficiencies. Image source: Palmintier, B., Newman, S., Rocky Mountain Institute (2008, August 5). Systems Thinking for Radically Efficient and Profitable Data Center Design [PowerPoint presentation]. Slide 16.

Figure 10: Approximate energy consumption by data centers in the state of Virginia as of March 2012: ~1/6 of total load; mostly in the National Capital Region.

machines need to be kept in certain climatic conditions to operate optimally and, most importantly, kept from overheating. Considering that computers running at full capacity generate a considerable amount of heat, a big chunk of a data center’s electricity needs is consumed by HVAC systems. For example, the website treehugger.com claims that “according to the Rocky Mountain Institute (RMI) energy analysts Sam Newman and Bryan Palmintier, average data centers are hugely energy inefficient. For every 100 watts these data centers consume, only 2.5 watts result in useful computing (Figure 9). The rest of the power is wasted on low server utilization and inefficiencies in the server power supply, fans and hardware that cool servers, UPS (uninterrupted power supply), lighting, and central cooling.”14 This results in data centers being huge polluters, which remained the industry’s dirty secret for a while, until getting a lot of media coverage in the last few years in both books (such as Andrew Blum’s “Tubes”) and periodicals, most notably The New York Times. In their cover story “Power, Pollution and the Internet” from September 22, 2012, they provide a statistic by Jonathan G. Koomey, a research fellow at Stanford University, who concludes, “nationwide, data centers used about 76 billion kilowatt-hours in 2010, or roughly 2 percent of all electricity used in the country that year.”15 In addition, according to Cormack Lawler, the general manager of hosting operations at Rackspace in Ashburn/Herndon, VA, the energy consumed by data centers spread across a few counties in the National Capital Region of Virginia alone accounts for about a sixth of total energy consumption in the entire state (personal communication, March 15, 2012) (Figure 10). Virginia might be an exception rather than the rule as, along with New York and California, it is home to some of the major internet exchanges and data center clusters in the country16,17. Still, if true, one sixth of total consumption is quite an impressive number for a thing some of us think is not physical at all. The third important factor in data center design is redundancy. Everything in a data center is backed up and multiplied to make sure the facility remains operational even if half of its generators fail and the UPS systems run out earlier than expected, especially when it comes to mission-critical facilities, such as the ones ranked as Tier IV data centers. The server machines themselves typically have dual power wiring in case one source of power fails. In addition, the information contained at a data center, especially when it comes to critical data, is multiplied and backed up to a number of other locations, in case the whole facility goes down (as was the case with the twin towers in New York City on 9/11, described in detail by William J. Mitchell in his 2003 book “Me++”)18. For example, at its facilities, Rackspace offers N+2 redundancy, which means that there are two spares for each piece of infrastructural equipment at the facility: a presumably necessary precaution, but a very expensive one.
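To put the quoted efficiency figure in perspective, here is a back-of-the-envelope Python sketch that uses only the numbers cited above (RMI's 2.5 W of useful computing per 100 W drawn, and Koomey's 76 billion kWh for 2010); combining the two estimates is an illustration of scale, not a measured statistic.

```python
# Back-of-the-envelope arithmetic with the figures quoted in the text:
# RMI's estimate that only 2.5 W of every 100 W drawn becomes useful computing,
# and Koomey's estimate of ~76 billion kWh used by US data centers in 2010.
useful_fraction = 2.5 / 100.0              # 2.5% of the power does "useful" work
overhead_fraction = 1.0 - useful_fraction  # ~97.5% goes to cooling, UPS, idle servers, etc.

us_datacenter_kwh_2010 = 76e9              # Koomey's 2010 estimate, in kWh
useful_kwh = us_datacenter_kwh_2010 * useful_fraction

print(f"useful computing share: {useful_fraction:.1%}, overhead: {overhead_fraction:.1%}")
print(f"of 76 billion kWh, only about {useful_kwh / 1e9:.1f} billion kWh "
      "would count as 'useful' by the RMI yardstick")
```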

Notes: 14 Rocky Mountain Institute (2008, August 7). Designing Radically Efficient and Profitable Data Centers. Treehugger (online). Retrieved from http://www.treehugger.com/gadgets/designing-radically-efficient-and-profitable-data-centers.html

15 Glanz, J. (2012, September 22). Power, Pollution and the Internet. The New York Times. Retrieved from http://www.nytimes.com/2012/09/23/technology/data-centers-waste-vast-amounts-of-energy-belying-industry-image.html?_r=0

16 Blum, A. (2012). Tubes: the Journey to the Center of the Internet. New York City, NY. HarperCollins Publishers. p. 90. 17 O’Brien, D.J. (2013, September 16). Region likely to see continued growth in data center industry. The Washington Post. Retrieved from http://articles.washingtonpost.com/2013-09-15/business/42089033_1_data-centers-internet-exchange-points-loudoun-county

18 Mitchell, W. J. (2003) Me++: The Cyborg Self and the Networked City. Cambridge Mass, USA. The MIT Press. pp. 176-177.



Figure 11: From information society to digital society. Every one of us becomes not only a consumer, but also a producer of data.

Figure 12: Increasingly, as it is getting “outsourced” into the cloud, your memory is becoming the memory of your devices.

[Figure 13 panel text: rows of ones and zeroes (“This is what the code looks like”); “In this form, it could mean anything”; “In the beginning was the Word”; “It can be read from left to right, or right to left, ‘cut’ and ‘pasted’ back together at will to produce meaning”.]

Figure 13: Code. Ones and zeroes that can stand for anything depending on how you read them, be copied, pasted, read from left to right or right to left, cropped. Diagram by the author. Images used: [middle] Malevich, K. (1915). Suprematist Painting: Aeroplane Flying. Oil on canvas, 57.3 x 48.3 cm (22 5/8 x 19 in); The Museum of Modern Art, New York [bottom] Kubrick, S. (1968). “2001: A Space Odyssey”. Movie screenshot.

TENDENCIES

Currently, there are a number of observable tendencies that influence the evolution of data centers. The first one is a macro-tendency: the onset of the so-called “digital age”19, with the ongoing digitalization of our lives as a new spin in the development of the information society (Figures 11 and 12). We are no longer mere receivers of information; we are also the content generators, empowered by our portable devices, abundant connections and services like Wordpress, Facebook and Tumblr. We “like”, “comment”, “tweet” and “post”, and all of these actions feed into our collective view of the world. As data contained in digital format is nothing but a collection of ones and zeroes, it can be copied, modified, erased and restored with greater ease than ever before (Figure 13, and Figure 14 on the next page). All of this adds to the job of data centers – keeping that data stored and available at all times.

Notes: 19 Lévy, P., & Bononno, R. (1998). Becoming virtual: Reality in the digital age. Da Capo Press Inc. 20 IBM Big Data Platform (n.d.). What is Big Data? Retrieved from http://www-01.ibm.com/software/data/bigdata/what-is-big-data.html

21 Hunter, M. (n.d.). Bible Facts and Statistics. Amazing Bible Timeline. Retrieved from http://amazingbibletimeline.com/bible_questions/q10_bible_facts_statistics/

In addition to the data humans produce knowingly, there is also a lot of “dumb” data that is being produced automatically, or passively: credit scores, weather statistics, signals that run between your car’s remote control and the door of your garage, and so on. This is the world of so-called “Big Data” – data that is so huge and random that sometimes we might not even be able to come up with a meaningful method of using it. However, the more ubiquitous computing becomes, and the more devices become connected, the faster this data will grow, and the more space will need to be provided to accommodate it. “Every day, we create 2.5 quintillion bytes of data — so much that 90% of the data in the world today has been created in the last two years alone” – claims IBM on the web-site advertising their Big Data platform20 – a statistic that may seem wild at first, but in no way actually reflects the progress we are making as humankind in the same span of time. Here is an illustration that might drive the point home. A common DVD (digital versatile disk) fits about 4.7 Gigabytes (GB) of information. 1 GB equals 10^9, or 1,000,000,000, or one billion bytes, so the whole disk would fit around 4.7 billion bytes. 1 byte consists of 8 bits, which is the number of bits (basic and thus smallest units of information in computing, available only in two forms – a “zero” or a “one”) necessary to encode a single character – such as a letter, or a number – in digital format. The King James Bible – one of the foundation texts of the current western civilization – has 3,116,480 characters altogether21. This would mean that the entire Bible, not accounting for design, would be about 3 MB, or 0.003 GB. This means that a typical DVD that you may be routinely taking out of your local “Red Box” for an evening chick-flick holds the equivalent of 1,566 (rounded down) copies of the civilization’s most sacred texts. One might argue that this kind of comparison is useless as it totally neglects the actual value of the objects compared. However, this is precisely the point of the above example: it aims to illustrate that 1) not all information is meaningful, and 2) with the formats that we currently use to store information, the numbers really do get wild. It is important to keep in mind that when it comes to digital, quantity does not necessarily reflect quality.
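The arithmetic behind that comparison can be checked in a few lines; the sketch below uses only the figures quoted above (a 4.7-billion-byte DVD, one byte per character, and the cited character count of the King James Bible), showing both the exact result and the result with the 3 MB rounding used in the text.

```python
# Reproducing the back-of-the-envelope comparison from the text:
# how many King James Bibles fit on a 4.7 GB DVD at one byte per character?
DVD_BYTES = 4.7e9             # a single-layer DVD, roughly 4.7 billion bytes
KJV_CHARACTERS = 3_116_480    # character count cited in the text

kjv_bytes = KJV_CHARACTERS            # 1 byte per character, about 3.1 MB
copies_exact = DVD_BYTES / kjv_bytes  # ~1,508 copies with the exact count
copies_rounded = DVD_BYTES / 3e6      # ~1,566 copies if the Bible is rounded to 3 MB

print(f"one Bible is about {kjv_bytes / 1e6:.2f} MB")
print(f"a DVD holds roughly {copies_exact:,.0f} copies "
      f"({copies_rounded:,.0f} with the 3 MB rounding used above)")
```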




Figure 14: Bits are electrons, routed and re-routed in different ways through a maze of transistors with the help of only four “gates“: AND, OR, NAND, NOR - and capable of producing only two answers: Yes (1) or No (0). From this limited set of capabilities arises the whole complexity of today’s computing.


Figure 15: Projection of electricity use by datacenters in the US and the world based on J. Koomey’s data. Image source: Belady, C.L. (2011). Projecting Annual New Data Center Construction Market Size (PDF Document). Microsoft. Global Foundation Services. p. 3. Retrieved from http://www.business-sweden.se/PageFiles/9118/Projecting_Annual_New_Data_Center_Construction_Final.pdf


[Figure 16 diagram labels: Today - organized in clusters, scattered in small portions; in two years - combined as a major node, serving as hybrid infrastructure; in five years - an architectural composition, combined with other infrastructure.]

Figure 16: The rate of growth of Rackspace is given as an illustration. Growing data infrastructure can merge with other types of infrastructure and take on different regional spatial characteristics.

Another interesting thing to notice about digital data is that it is not eternal. It only exists as long as it is being paid for. Once it is not – it disappears. Running servers costs money, and with the massive need for free space, data is being written, rewritten, and rewritten again, without leaving much “cultural sediment”. Not only does this explain the relative absence of the “old” internet from the current internet landscape, it also means that, potentially, if Facebook went bankrupt and could no longer afford to run its servers, you would be left without your pictures, or “friends”. In addition, carriers of information keep evolving, leading to so-called “digital obsolescence”, vividly perceivable in the practice of digital preservation, where one needs to constantly upgrade the formats in which data is stored in order for it to remain accessible22, and to the potential for a so-called “digital dark age”23. This abundance of data and expanding interconnectivity result in a paradox: we do not know the answer to the seemingly simple question “How big is the Internet?” It is so big that it is by now immeasurable. Moreover, if we consider the fact that an “inter-net” is quite literally a network between any two or more machines, then, in addition to the internet visible to regular people, there is an immense number of tiny local “inter-nets” as well as private chunks of the common internet (like the governmental, military and research “inter-nets”). Thus, there is no feasible way to measure it in the first place. This property of the internet gives it a certain fractality and recalls a 1967 paper by the mathematician Benoît Mandelbrot, “How long is the coast of Britain? Statistical Self-Similarity and Fractional Dimension”, in which he examines the “coastline paradox”: the property of a coastline to become increasingly detailed the more you zoom in, so that as the unit of measurement approaches zero, the measured length of the coastline seems to approach infinity24. The internet seems to be exactly like Mandelbrot’s coast of Britain.
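As a small numerical illustration of that paradox, the sketch below uses the empirical relation discussed in Mandelbrot's paper, L(G) ≈ M·G^(1-D), where G is the length of the measuring stick and D is the fractal dimension (roughly 1.25 for the west coast of Britain); the constant M is chosen arbitrarily here, so only the trend - not the absolute lengths - is meaningful.

```python
# The coastline paradox in numbers: the measured length L grows without bound
# as the measuring stick G shrinks, following L(G) ~ M * G**(1 - D).
# D ~ 1.25 is the dimension Mandelbrot reports for the west coast of Britain;
# M is an arbitrary constant here, so only the trend is meaningful.
D = 1.25
M = 1000.0

def measured_length(ruler_km: float) -> float:
    """Apparent coastline length (km) when measured with a ruler of this size."""
    return M * ruler_km ** (1 - D)

for ruler in [100, 10, 1, 0.1, 0.01]:
    print(f"ruler = {ruler:>6} km -> measured length ~ {measured_length(ruler):>8.0f} km")
# The shorter the ruler, the longer the coast appears: the length never converges.
```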

Notes: 22 Hedstrom, M. (1997). Digital preservation: a time bomb for Digital Libraries. Computers and the Humanities, Vol. 31, No. 3. 189-202. Retrieved from http://www.uky.edu/~kiernan/DL/hedstrom.html

23 Kuny, T. (1997). A Digital Dark Ages? Challenges in the Preservation of Electronic Information (PDF Document). 63rd IFLA (International Federation of Library Associations and Institutions) Council and General Conference. Retrieved from http://archive.ifla.org/IV/ifla63/63kuny1.pdf

24 Mandelbrot, B. (1967). How long is the coast of Britain? Statistical Self-Similarity and Fractional Dimension. Science, Vol. 156, No. 3775, 636-638. Retrieved from http://faculty.washington.edu/cet6/pub/Temp/CFR521e/Mandelbrot_1967.pdf

25 Koomey, J. (2007). Estimating Total Power Consumption by Servers in the US and the World (PDF Document). Report. Retrieved from http://hightech.lbl.gov/documents/DATA_CENTERS/svrpwrusecompletefinal.pdf

Regardless of the current size of the internet, it is clear that it is getting yet bigger, resulting in the growth of the data center market as well. In his 2007 report, “Estimating Total Power Consumption by Servers in the US and the World”, Jonathan Koomey demonstrated that the annual growth of data center electricity consumption is about 14% in the United States, and 16% globally, thus virtually doubling every 5 years25 (Figure 15). That is an average estimate, with some companies growing much faster than that. According to Cormack Lawler, general manager of



Figure 17: Going “green”! A scorecard by Greenpeace’s “Cool IT Challenge” from April 2012, rating leaders in IT on their environmental performance. Image source: Lowensohn, J. (2012, July 12). Apple’s Greenpeace cloud rating no longer a ‘fail’. CNET. Retrieved from http://news.cnet.com/8301-13579_3-57470593-37/apples-greenpeace-cloud-rating-no-longer-a-fail/

Rackspace’s hosting operations in Ashburn/Herndon, as of spring 2012 Rackspace had grown from 80,000 machines to 1,000,000 in five years (personal communication, March 15, 2012), which is obviously an example of extraordinary growth (Figure 16 on the previous page). Now, if we recall the statistic pertaining to electricity consumption by servers cited earlier, the future prospects look quite eerie – a fact that has attracted a lot of media attention recently, as discussed above. As a result, several major players in the data center market are making efforts to go “green”, as making their data center operations more sustainable both improves their public image and cuts down costs – a “green data center arms race”, as WIRED magazine calls it26. As a matter of fact, since 2009, Greenpeace has had an initiative called the “Cool IT Challenge” that urges “IT companies to put forth innovation, mitigate their own carbon footprint, and advocate for significant policy changes in the mutual interest of business and the climate.”27 They release annual rankings28 of the leaders in IT, tracing and rating their efforts at building sustainable business models, as well as promoting sustainability (Figure 17). Already in 2005, B. Palmintier and S. Newman, whom we have mentioned before, showed that it is possible to build much more efficient servers with off-the-shelf components29 – an idea that seems to have gained traction with a few IT behemoths, probably most notably in Facebook’s “Open Compute” project. The company set out to design its own servers and fine-tune them to Facebook’s specific requirements, which has by now resulted in data centers that are “38% more efficient and 24% less expensive to build”, the company claims30. Apart from fine-tuning the data center from the inside, geography has recently become an increasingly important factor in building new data centers. Because data centers do not need to be in direct proximity to their end users (fiber networks carry signals at speeds close to the speed of light, so delays are negligible even if you request a page from halfway across the globe), their geography has been defined by a different set of factors, such as 1) abundance of electric power, lately with special interest in renewable energy sources such as hydro and wind power31, 2) relatively low cost of energy, 3) positioning in colder climates to reduce the need to cool the facilities, or even to use the outside air to do so, 4) proximity and ease of connectivity to existing fiber throughways, 5) low taxes, and 6) socio-political stability. Not surprisingly then, the northern states of Iowa, Oregon, Wyoming and Washington have become the main destinations for data center migration inside the United States, and the Nordic countries such as Iceland, Finland and Sweden32 – abroad. Some companies have even gone as far as proposing to build data centers in the sea, such as Google’s patent for a “Water-based data center”33 that would get its power from wave energy and use sea water for cooling. Although moving a data center to a remote arctic location might mean that

Notes: 26 Finley, K. (2013, November 12). Microsoft’s Built-In Power Plants Could Double Data Center Efficiency. Wired (online). Retrieved from http://www.wired.com/wiredenterprise/2013/11/microsoft-fuel-cells/

27 Greenpeace International Cool IT Challenge link: http://www.greenpeace.org/international/en/campaigns/climate-change/cool-it/

28 Pomerantz, D. (2013, April 24). Cisco, Google tie for first in latest Greenpeace ranking of IT sector climate leadership. Blog post. Retrieved from http://www.greenpeace.org/international/en/news/Blogs/Cool-IT/facebook-and-google-like-1-clean-energy-in-da/blog/44893/

29 Palmintier, B., Newman, S., Rocky Mountain Institute (2008, August 5). Systems Thinking for Radically Efficient and Profitable Data Center Design (PowerPoint presentation). 30 Open Compute Project website, “About” section. Retrieved January 12, 2014 from http://www.opencompute.org/about/

31 Finley, K. (2013, November 13). Facebook Says Its New Data Center Will Run Entirely on Wind. Wired (online). Retrieved from http://www.wired.com/wiredenterprise/2013/11/facebook-iowa-wind/



one does not need massive chillers, heat is still being dumped outside, contributing to the ice caps melting. It is not a very sustainable solution. Now, if we come back to the first observation of this chapter that we are entering the “digital age”, we can see how computing is becoming infrastructure (Figure 18), which is just as omnipresent (in the developed world) as tap water, garbage removal or electricity. Data has become a second atmosphere, a usual “dimension” of our modern lives. We are past the time when having access to the internet was “cool”. Nowadays we expect it to be there, and readily available. It is no longer something new – it is essential.

Figure 18: Data centers becoming infrastructure: just another service that makes modern life possible.

Not only have we fully embraced the connectivity enabled by the internet, our faith in it has become so strong that we give away increasingly large pieces of ourselves to it, which seems to repeat a pattern we have gone through with other inventions throughout the history of civilization. First, we entrusted ourselves to kings, queens, dictators and democracies, believing that they could do a better job than we could at discerning what is good for us. Then, we gave up weapons, entrusting our lives to professionals who, we thought, would defend us if something bad happened. At some point, we also gave away our money to the banks to hold it for us, believing that they would do a better job at keeping it safe than we were capable of ourselves. All this liberated our minds from responsibilities and gave us the opportunity to engage other realms, which we otherwise would have been incapable of engaging because of the “ballast”. Now, in yet another spin, we are entrusting our memories and intellectual work to the cloud, trusting they will be safer with it. The cloud is on its way to becoming just as huge and important as, for example, the banking system. Sensing this, a lot of young people are encouraged to go study computing because that is what “rules the world” – just like kids were encouraged to become lawyers or financiers before. In a bit, we will forget the cloud was ever a new thing and will start trusting and relying on it just as much as we rely on electricity or banking.

Notes: 32 Wheatley, M. (2013, June 13). Facebook Opens a Really Cold New Data Center in Sweden. Siliconangle (online). Retrieved from http://siliconangle.com/blog/2013/06/13/facebook-opens-a-really-cool-new-data-center-in-sweden/

33 Clidaras, J., et al. (2008, August 28). Water-Based Data Center. United States Patent Application nr. 20080209234. Retrieved from http://appft1.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=1&f=G&l=50&s1=%2220080209234%22.PGNR.&OS=DN/20080209234&RS=DN/20080209234

34 Bitcoin website link: http://bitcoin.org/en/

The catch, though, is that digital data is not eternal and, as discussed earlier, is only there as long as it is being paid for. If a service you used (say, Facebook) went bankrupt, it would take all the effort you put into it down with it, the same way the banks stripped people of money during the latest financial crisis. In this case, it would not be the money that would disappear (although that is also possible with the growth of Bitcoin34, for example), but mostly your intellectual effort and social connections. However, as data permeates an increasingly large part of our lives, data centers will grow in size, number, and importance, which means that everything “digital” – the stuff we initially thought was ephemeral and intangible – will become more and more physical, signifying the rise of a new materiality: the materiality of the digital, the “silicon materiality” (Figure 19 on the next page). In the past, the worst damage one country could inflict



on another was by bombing its ports and railroad terminals. Nowadays, increasingly, the same is done by attacking servers. With the digitalization of our lives, the data center is becoming the most important building in the modern world.

MANIFESTO

I see the condition emerging from the discussion above as a great opportunity for architecture. Having become the important building type that they are, it is time data centers stopped hiding themselves and made their presence visible. There is a huge amount of aesthetic interest in infrastructure, as illustrated by, for example, Robert Smithson’s poetic account of the derelict infrastructure of New Jersey in his 1967 essay “A Tour of the Monuments of Passaic, New Jersey”35, numerous photographic projects36, 37, or the fairly recent attempts at making infrastructure into public space by BIG (Bjarke Ingels Group) with their Amagerforbraending waste-to-energy plant in Copenhagen. There seems to be a certain sublimity to infrastructure that fascinates us, whether as a result of its size (for example, the massive dams of hydro-electric plants), a certain “weirdness” and “un-occupiability” (for example, sea port cranes), or the sheer power it has in enabling things, shifting them around, or making them possible. Thus, the currently prevailing attitude towards infrastructure as mere machine-like utilitarian construction which does not have to yield to any aesthetic considerations is, in my view, deeply misguided. There are a lot of infrastructural objects that have over time become classics of both engineering and architecture, starting with the aqueducts of Rome and ending with numerous bridges all across the world. I believe that data centers, which speak of today’s world probably more than any other modern building type, can become those objects too – the proud symbols of their times.

Figure 19: The new materiality.

Notes: 35 Smithson, R. (1996). A Tour of the Monuments of Passaic, New Jersey (1967). In Jack Flam (ed.), Robert Smithson: The Collected Writings. Berkeley. 68ff. 36 Urban Omnibus (2012, August 22). Undercity: The Infrastructural Explorations of Steve Duncan. Urban Omnibus (online). Retrieved from http://urbanomnibus.net/2012/08/undercity-the-infrastructural-explorations-of-steve-duncan/

37 Urban Omnibus (2010, November 3). Stanley Greenberg: City as Organism, Only Some of it Visible. Urban Omnibus (online). Retrieved from http://urbanomnibus.net/2010/11/stanley-greenberg-city-as-organism-only-some-of-it-visible/

38 Mitchell, W. J. (2003) Me++: The Cyborg Self and the Networked City. Cambridge Mass, USA. The MIT Press. p.15, par.2.

But it is not just aesthetics. One of the reasons for “revealing the data centers” is raising awareness. Currently, the broader public is unaware of how the internet works, what kind of ecological threats the industry poses, or how much of their life it controls. Knowing about one’s personal energy consumption makes one relate differently to it, and be mindful about how one uses it while simultaneously appreciating it more. It could be the same with computing. If right now what happens between a click of a mouse and the appearance of a website on the display is a limbo (as William J. Mitchell calls it38), with a well-designed data center one would: 1) learn of the existence of that limbo, and 2) get a feel for what that limbo is made of. In addition, if we consider the internet a public space – which it has been so far, given the total freedom of expression and speech, the ability to set up websites, and even its recent real-life contribution to the democratic process in Egypt and a number of other Arab countries – then why can the building



Figure 20: Data - a new religion? Data center - the church for the new “spatially extended cyborg”43?

that hosts the digital dimension of our public lives not give back to the three-dimensional public reality? Although some data center operators are more willing than others to give researchers access and tours of their facilities, no one has yet gone so far as to make a facility public39. Considering how small a fraction of the total cost of a data center the actual structure constitutes40, it might even be financially feasible. However, the most intriguing part about data centers is that they contain a relatively homogeneous program – thousands and thousands of machines, wired together to form a single super-machine. From the outside, they stand as impermeable volumes and, like homogeneous chunks of clay, can potentially be given any shape (Figure 21). For some time, the architecture of data centers was one of leftovers and the “repurposement” of old telegraph buildings (for example, the major hubs at 32 Avenue of the Americas, or 60 Hudson, both located in New York City); nowadays it is an architecture of a total lack of creativity. However, if we realize and exploit the formal potential of data centers, they can become architectural sculptures virtually unrestricted in their formal language, speaking boldly of what they contain. In the near future, their number will be rapidly increasing, so why not use this opportunity to make great architecture out of at least a few of them? In the past, banks, churches and other important socio-economic institutions manifested themselves in magnificent structures. Now, as data is becoming the new money, memory and religion, data centers are on their way to becoming the “monuments”41 and “temples” of the new “network society”42 (Figure 20). They need an architectural language of their own to make that message clear.

Notes: 39 Blum, A. (2012). Tubes: the Journey to the Center of the Internet. New York City, NY. HarperCollins Publishers. pp. 240-241, 248-249. 40 Blum, A. (2012). Tubes: the Journey to the Center of the Internet. New York City, NY. HarperCollins Publishers. pp. 233-234. 41 Blum, A. (2012). Tubes: the Journey to the Center of the Internet. New York City, NY. HarperCollins Publishers. pp.143-144, p.229 par.2. 42 Castells, M. (1996). The Rise of the Network Society: of The Information Age: Economy, Society and Culture. Volume I. Malden, Mass, USA. Blackwell Publishing. 43 Mitchell, W. J. (2003) Me++: The Cyborg Self and the Networked City. Cambridge Mass, USA. The MIT Press. p.39, par.2.

Figure 21: A blend of sculpture and architecture. Architecture free of limitations.



What if we could build data centers into structures like this? Would they make you stop and wonder about their purpose, motivate you to explore and discover?



ROADMAP



Figure 1: Diagram of densification of the network with the multiplication of nodes and connections.

ARGUMENT FOR THE CITY

If we consider expressing data centers architecturally, what would they be like? To start with, I believe that data centers should be located in urban environments. There are obvious advantages to placing them in rural and remote locations, such as security and discreetness. However, some facilities could and should be located in urban settings. Let us take New York City as an example of an urban location for this study. Given its diversity, size, and position in the history of communications, it is a perfect specimen.

Figure 2: A diagrammatic sketch of the “interconnected planet”.

Figure 3: A diagrammatic projection of growth and densification of the data infrastructure onto Manhattan. Major fiber avenues shown in red, internet exchanges and data centers - in yellow.


Today, cities are where most of us live, with the trend of global urbanization continuing for the foreseeable future1. Cities are the main producers and consumers of most resources, including data. So far, proximity to the end user has not been an issue for data centers, as, given the proliferation of fiber cable networks, it takes just milliseconds for information to travel across the United States. This makes the relative remoteness of data centers tolerable. As the proportion of people living in cities gets bigger, and ideas like augmented reality and ubiquitous computing gain traction, moving data centers closer to end users will become a necessity. Already today there are certain applications that need close proximity to ensure low latency of operations. Probably the most important of them is high-frequency, high-volume automated trading2. In trading – which today is mostly carried out by algorithms as opposed to shouting numbers into a phone receiver – milliseconds matter. As everyone is using computers to trade, and all are connected by the same fiber cables, getting new information a millisecond earlier than one’s competitor means placing the sell or buy order first, and thus millions of dollars in gains. This is why in New York City some of the biggest data facilities are in close proximity to the New York Stock Exchange, and also why one of the most important communication nodes used to be in the very heart of Manhattan’s Financial District – inside the twin towers, before 9/11/20013. However, if we propel ourselves 10-20 years into the future, into the world of “Google Glass”4,5 and driverless vehicles, we will see that our daily routines will become so dependent on data that the closer it resides to us, the better. Consider driverless vehicles. As we all know, road conditions are inherently dangerous and need constant coordination. Sometimes the difference between an accident with casualties and a mild annoyance is a matter of milliseconds. If the roads are really on their way to being remotely coordinated, their brain centers ought to be close to where the activity happens, to make sure those milliseconds are taken care of. Additionally, research has shown that by now our expectation of connectivity and

Notes: 1 World Health Organization. Global Health Observatory (n.d.). Urban Population Growth. Situation and trends in key indicators. Retrieved from http://www.who.int/gho/ urban_health/situation_trends/ urban_population_growth_text/ en/

2 Blum, A. (2012). Tubes: the Journey to the Center of the Internet. New York City, NY. HarperCollins Publishers. p. 198. 3 Mitchell, W. J. (2003) Me++: The Cyborg Self and the Networked City. Cambridge Mass, USA. The MIT Press. p.176. 4 Olsson; M. I., et al. (2013, February 21). Wearable Device with Input and Output Structures. United States Patent Application nr.20130044042. Retrieved from http://appft.uspto.gov/ netacgi/nph-Parser?Sect1 =PTO2&Sect2=HITOFF& u=%2Fnetahtml%2FPTO %2Fsearch-adv.html&r=13 &p=1&f=G&l=50&d=PG01 &S1=%2820130221.PD.%20 AND%20Google.AS.%29& OS=PD/20130221%20 AND%20AN/ Google&RS=%28PD/20130221 %20AND%20AN/Google%29, %20http://news.cnet.com/83011023_3-57570533-93/googleglass-patent-application-getsreally-technical/,%20http:// www.google.com/glass/start/

5 Tibken, S. (2013, February 21). Google Glass patent application gets really technical. CNET (online). Retrieved from: http://news. cnet.com/8301-1023_357570533-93/google-glasspatent-application-gets-reallytechnical/



Figure 4: With growth, interweaving and multiplication of networks and nodes, data infrastructure is starting to resemble a rhizome. Image source: Lloyd, J. U., Lloyd, C. G. (1884-1887). Drugs and Medicines of North America. Ch. “Cimicifuga: Allied species - Green rhizome”. Plate XXIII: A fresh rhizome of Cimicifuga Racemosa. Retrieved from http://www. henriettesherbal.com/eclectic/ dmna/pics/dmna-pl-23.html

speed of information retrieval has grown so demanding that if a certain page online takes a moment too long to load (a quarter of a second, Google researchers claim6), we will leave it and go to a different one instead. This, again, is something that could be dealt with by placing data centers closer to the places where the information stored in them is needed. As cutting down latency is something most services would be ready to pay a premium for, moving data centers into cities starts to make good business sense.
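To make these distances concrete, here is a rough back-of-the-envelope sketch (not part of the thesis data); the route lengths are illustrative assumptions, and the calculation covers only the propagation delay in the fiber itself – real networks add routing and switching overhead on top.

    # A rough estimate of one-way delay over optical fiber (illustrative only).
    # Light in silica fiber travels at roughly c / 1.47, i.e. about 204 km per
    # millisecond; the route lengths below are assumptions, not surveyed values.
    KM_PER_MS = 300_000 / 1.47 / 1000  # ~204 km of fiber per millisecond

    def one_way_latency_ms(route_km: float) -> float:
        """Theoretical minimum one-way delay over a fiber route of a given length."""
        return route_km / KM_PER_MS

    for label, km in [("across a borough", 20),
                      ("New York - Chicago", 1200),
                      ("New York - Los Angeles", 4500)]:
        print(f"{label}: ~{one_way_latency_ms(km):.2f} ms one way")

Even the cross-country figure is only a couple of dozen milliseconds, which is why remoteness has been tolerable so far; it is the sub-millisecond margins of trading and real-time coordination that pull facilities closer to the user.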

Figure 5: Diagram of a rhizome in nature. Labeled elements: stem (reproduction); rhizome and tubers (nutrient exchange and storage); roots (nutrient extraction); node; soil. Labeled processes: 1) nutrition, 2) growth, 3) reproduction.

Figure 6: Diagram of the above-ground appearance of a rhizome ("on the surface") and its actual subterranean structure ("in reality").

6 Lohr, S. (2012, February 29). For Impatient Web Users, an Eye Blink Is Just Too Long to Wait. The New York Times. Retrieved from http://www.nytimes.com/2012/03/01/technology/impatient-web-users-flee-slow-loading-sites.html?pagewanted=all&_r=0

RHIZOME

As the infrastructure of the internet evolves from a handful of nodes and a bunch of major fiber lines into a fabric of fibers and a dense population of data-handling facilities (see Figures 1, 2 and 3 on the previous page), the data infrastructure starts to resemble a rhizome (Figure 4).

Notes:

7 Deleuze, G., & Guattari, F. (1980). Mille plateaux. Capitalisme et schizophrénie II. Paris. Les éditions de Minuit.

A rhizome, in nature, is a horizontal stem: it is the vehicle of propagation for multiple species of plants, such as orchid, ginger or bamboo, and most commonly – fungi. It forms a subterranean carpet made of horizontal stems, from the nodes of which grow the vertical shoots (organs of reproduction) and roots (organs responsible for mineral and water "mining" as well as anchoring) (Figure 5).

A biological rhizome has a few peculiar properties. For example, if one cuts out a piece of a rhizome, the rest of it will stay alive and keep developing, unlike a typical stem which, if hurt or destroyed, will negatively affect the rest of the plant. In addition, if we cut out a piece of a rhizome and plant it somewhere else, it will develop into a full new plant. Given the fact that rhizomes are typically subterranean structures, they are also relatively invisible (Figure 6), and invulnerable to hazards that exist above ground, such as fires, animals, or the change of seasons. As a matter of fact, the specific organs of rhizomatic plants called "tubers" are responsible for storing nutrients, commonly in the form of starch, for the plant to use either while producing new shoots or for survival in harsh conditions. Hence, the rhizome's main power is its multiplicity. Although it is basically one entity, any piece of it is totally self-sufficient and capable of surviving and multiplying on its own. This makes destroying a rhizomatic structure very difficult: first of all, being subterranean, it is hard to tell its extents, and second, unless fully destroyed, it will just keep on growing, and will come back and flourish as if nothing ever happened – a fact a lot of gardeners know all too well.

Given their appearance and complex multi-nodal structure, rhizomes have attracted a lot of interest outside botany. In philosophy, the concept of the rhizome was pioneered by Gilles Deleuze and Félix Guattari in their 1980 book "A Thousand Plateaus"7. For them, rhizomes ideally represented



Figure 7: Rhizomatic urban form - different local shapes, same “DNA”, enabled by subterranean connectivity.

Figure 8: A few examples of the advantages of being small, surrounded, hidden: a treasure in a maze; the heart of an onion, hidden behind its outer layers; the city hall of a medieval city; the New York Stock Exchange, hidden in the heart of the Financial District – a relatively small but enormously important object protected by all the buildings around it.

modern culture, with its homogenous, interlaced, non-linear, poly-centric, "messy" structure. In national defense, a concept close to the rhizome has found its application in devising the future of the United States naval fleet as a "distributed fleet", which would consist of smaller vessels spread more evenly across the globe, as opposed to a few super-ships vulnerable to being traced and attacked. Being "distributed" would arguably lend the fleet "greater overall resilience"8, and it would start to resemble a "cloud" or a swarm, able to change shape in order to meet dynamic military needs. In city planning, subway systems are a good example of a rhizome-like structure – a fully interconnected subterranean network with no presence above ground except for entrances and exits, each of which becomes slightly different by adjusting to the local milieu and conditions, but all of which comprise a single bigger whole.

Notes: 8 Axe, D. (2013, March 20). After the Aircraft Carrier: 3 Alternatives to the Navy’s Vulnerable Flattops. Wired (online). Retrieved from http://www.wired.com/ dangerroom/2013/03/replacingaircraft-carriers/3/

9 Mitchell, W. J. (2003) Me++: The Cyborg Self and the Networked City. Cambridge Mass, USA. The MIT Press. p.175.

Just like subway stops, data centers can be manifested in seemingly unrelated locations, behaving as a rhizomatic urban form (Figure 7). When understood in rhizomatic terms, internet exchanges can be perceived as nodes, data centers as tubers, us and our points of access such as laptops, smartphones and the like as the above-ground stems, while the fibers tying the whole thing together become the rhizome itself. Importantly, by approximating the structure of the rhizome, the infrastructure of the internet would benefit from some of its implicit security features, such as interconnectivity, multiplicity and the "cloning" of nodes.

The advantages of a distributed communication infrastructure have already been persuasively defended by William J. Mitchell in his book "Me++: The Cyborg Self and the Networked City", especially in the chapter "Accident and attack", where he writes, "the large decentralized networks that increasingly dominate our globalized world have turned out to be remarkably resistant to random accidents and failures"9. Moving from concentrated computing to distributed computing – smaller nodes, but many more of them – will enhance the physical security of the system overall. If today something went wrong and a plane dove into the cluster of data centers located right underneath the landing path of Dulles Airport, it would be a great blow to the industry. If instead the nodes were smaller and more distributed, that would not be an issue.

Becoming smaller would also allow more data centers to be located in cities, such as New York, as the city itself would then protect the facilities (Figure 8). A huge building sticking out like a sore thumb on a city's skyline – such as, for example, the former "Verizon Tower" at 375 Pearl St., or the cluster in the National Capital Region described earlier – is in no way a secure physical location for anyone's data, especially after the twin towers set a precedent with their collapse on 9/11/2001. In this sense, a small building situated inside a block and surrounded by other buildings, with only one wall open to the street if any, is much more secure than any "protected" but free-standing structure (for example, "Intergate Manhattan",



by Sabey Data Centers)10 could ever be.
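As a toy illustration of Mitchell's point about decentralized networks (not an analysis from this thesis), the short sketch below compares how a hub-and-spoke network and a meshed, rhizome-like network survive the random loss of a few nodes; the network sizes and wiring rules are arbitrary assumptions.

    import random

    def reachable_fraction(nodes, edges, failed):
        # Fraction of the original nodes still reachable from one surviving node,
        # found with a plain breadth-first search over the surviving topology.
        alive = [n for n in nodes if n not in failed]
        if not alive:
            return 0.0
        adj = {n: set() for n in alive}
        for a, b in edges:
            if a in adj and b in adj:
                adj[a].add(b)
                adj[b].add(a)
        seen, queue = {alive[0]}, [alive[0]]
        while queue:
            for nxt in adj[queue.pop()]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return len(seen) / len(nodes)

    def average_survival(nodes, edges, k=5, trials=500):
        return sum(reachable_fraction(nodes, edges, set(random.sample(nodes, k)))
                   for _ in range(trials)) / trials

    N = 50
    nodes = list(range(N))
    hub_and_spoke = [(0, i) for i in range(1, N)]                  # one central node
    mesh = [(i, (i + 1) % N) for i in range(N)] + \
           [(i, (i + 7) % N) for i in range(N)]                    # ring plus chords

    random.seed(0)
    print("hub-and-spoke, avg. reachable after 5 random failures:",
          f"{average_survival(nodes, hub_and_spoke):.0%}")
    print("meshed 'rhizome', avg. reachable after 5 random failures:",
          f"{average_survival(nodes, mesh):.0%}")

Whenever the single hub happens to be among the failures, the hub-and-spoke network falls apart entirely, while the meshed network almost always keeps its surviving nodes connected – the property the thesis borrows from the rhizome.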

Figure 9: Possible versatility of public function. Diagram labels: large block; small block; freestanding; street art; meditation; heating; event space; park; parcour; local data cache.

In all these ways, apart from bringing the physical infrastructure of the internet closer to its virtual (real) counterpart – the Internet, which is distributed, centerless and homogenous – the rhizome would make data centers less vulnerable and allow them to start playing their part in a city’s spatial life.

Notes: 10 Sabey Data Centers (2013). Introducing New York’s Only Purpose-Built Data Center Campus. Brochure. Retrieved from http://sabeydatacenters. com/portfolio-item/intergatemanhattan-brochure/

THE ROBOT NEXT DOOR

Figure 10: "It could be anything" – possible versatility of urban setting and acquired form. Diagram label: off-site.

Figure 11: A local skating paradise on the ground, a data center underneath. Collage by the author. An image of the interior of the HAL supercomputer was used for visualizing the subterranean data center. Image source: Kubrick, S. (1968). "2001: A Space Odyssey". Movie screenshot.

Figure 12: An extreme example of occupation of derelict properties and urban spaces: data center "suburbia" in Detroit. Collage by the author. Base image credit: 100 Abandoned Houses Project. Retrieved from http://www.100abandonedhouses.com/

In their discussion of the rhizome, Gilles Deleuze and Félix Guattari described the rhizome's ability to enter into relationships with its surroundings, or to "deterritorialize" itself. They described it using the now famous example of an interaction between an orchid and a wasp. To a large degree, however, what they were talking about was a phenomenon we have known for a long time, called symbiosis. Symbiosis is defined as any lifelong interaction between two different organisms, regardless of whether it is beneficial to the organisms. There has been a lot of debate about whether only mutually beneficial relationships should be considered symbiotic, but the current state of the issue is that symbiosis can be both benevolent and parasitic11.

Similarly, data centers of the future could enter into symbiotic relationships with their surroundings and become "scavengers" – buildings that "land" on local unoccupied lots and add spatial diversity to their surroundings by their unique presence, or simply fill in the gaps in the city fabric (Figure 9). In an optimal scenario, they would become highly contextual structures, improving the local condition of public space by taking on the form of, say, a public garden in one place, a community skateboard ramp in another, and a bouldering club in a third (Figures 10 and 11). In addition, as prodigious producers of heat, data centers could be used as "heat donors" for the surrounding building stock in the colder months of the year. By now there are numerous precedents for using servers as heating elements, from small-scale distributed "data furnaces"12, to commercial developments13, to successful attempts at heating whole campuses with exhaust heat from servers or, as is the case with ETH Zurich, supercomputers14. Just like Deleuze's rhizomes, data centers of the future could become "deterritorialized" and enter into symbiotic relationships with their surroundings.

On the other hand, post-recession Detroit (and places like Detroit) can be an ideal destination for "parasitic" data center expansion: cheap and abundant energy (including renewable sources), existing industrial building stock, expanses of unutilized real estate, all situated on a major potential fiber avenue (New York – Chicago – onwards west), in a relatively cold climate – a perfect scenario for a data center migration hotspot. By just "devouring" and repurposing whatever is there, data centers can be temporary – or long-term – occupants of land in places like Detroit (Figure 12).

11 Martin, B. D., Schwab, E. (2012). Current usage of symbiosis and associated terminology. International Journal of Biology, 5(1), p32. 12 Liu, J., Goraczko, M., James, S., Belady, C., Lu, J., & Whitehouse, K. (2011, June). The data furnace: heating up with cloud computing. Proceedings of the 3rd USENIX Workshop on Hot Topics in Cloud Computing. Retrieved from https://www.usenix.org/ legacy/events/hotcloud11/tech/ final_files/LiuGoraczko.pdf

13 Verge, J. (2013, March 4). Telus Warms Condos With Heat From Its Servers. Data Center Knowledge (online). Retrieved from http://www. datacenterknowledge.com/ archives/2013/03/04/teluswarm-condos-with-heat-fromservers/

14 IBM (2010, July 2). Made in IBM Labs: IBM Hot WaterCooled Supercomputer Goes Live at ETH Zurich. News release. Zurich. Retrieved from http://www-03.ibm.com/press/ us/en/pressrelease/32049.wss



Figure 13: A data center - a machine on an urban scale.

INTERFACE

Any machine humankind has ever come up with has an "interface", aptly called so because it is exactly that – an "inter-face" – an intermediate layer that separates the inside from the outside, with the outside designed to communicate with us – users, humans – and the insides working according to their own logic. The interface spans the distance between the inside and the outside of a machine, acting as an interpreter between the two. With a good interface, a typical user does not need to know exactly what is going on inside his or her device. Even if they attempted to take a look inside the machine, without proper preparation or training they would not understand it, although they would probably be either fascinated or horrified by it, as we always are when we face something we do not understand.

Figure 14: A diagram of the design of a machine (left), and a building (right). One is designed to be accessible, the other - not.

Notes: 15 Koolhaas, R. (1978). Delirious New York. Thames & Hudson. p.100.

In this sense, a data center is nothing else but a machine on an urban scale (Figure 13). Typically, architecture lets the inside and the outside penetrate one another. It has doors and windows; it has both an inside and an outside that are available to be explored and are made up of fundamentally the same elements. Architecture is permeable – one can be both inside and outside it (Figure 14). A data center, however, is not like that. It is more like an iPhone, or any other device: its insides are highly technical, understandable only to a person trained to understand them, and ruled by the logic of the machine, where, to a large degree, humans have no place. The outside, on the other hand, is geared towards communicating with us. Within a building, this results in a "lobotomy" of the inside and the outside, similar to the one described by Rem Koolhaas in his Delirious New York15 (Figure 15).

Figure 15: “You never know what is behind that wall.” Illustrating the “lobotomy” between the inside and the outside, between the public and the private. Collage by the author. Data center image source: Infopipe. Retrieved from http:// goinfopipe.com/what-we-do/

However, when it comes to an object the scale of a building, what would make a good interface, especially given the absence of doors and windows? You cannot click or tap it. You might be able to tile the wall with screens, but that would either reduce the building to one big screen, erasing its "architecture-ness", or only give you a localized interaction with the building, without revealing its nature as a whole. Thus, the only interface a data center can have is its raw architecture. Given its scale and physicality, however, it is a different kind of interface. The building does not serve as a portal into the ephemeral world of the internet; instead, it focuses your attention on the physical reality that both you and it, and by extension the internet itself, actually occupy. It becomes a landscape that invites one to explore and figure out for oneself what the relationship is between the form and the content of the building.

In the case of a data center, the "lobotomy" of the inside and the outside is also a separation between the public and the private. The claim that, because the internet is a "public" thing, we should all get free physical access to it is nonsense and cannot be realized. There are good reasons why the actual



machines of the data center should remain behind walls and closed doors. However, there is no reason why it cannot have an "interface". If anything, it is this interface that can, and probably should, be made public.


INEFFICIENCIES

We have talked about what the new generation of data centers could look like from the outside and how they are going to relate to their surroundings, and we have agreed that they would be machines on an urban scale, with their outsides relating to us, humans, and their insides working according to their own logic and needs. Let us now take a glimpse inside these beasts.

A thing that strikes one while visiting a data center is how much space it seems to be wasting (Figure 16). It is not a dense machine, but rather a Walmart-like space filled with aisles of generously spaced racks. This is currently necessary for mostly two reasons: ventilation and maintenance. The aisles in a data center are typically divided into two categories: "cold" and "hot" aisles. The names come from the temperature of the air inside them. A cold aisle is where cold air fresh from the chillers is blasted at the server racks from underneath the raised floors; the hot aisles are where the air, having passed the machines and collected their excess heat, is sucked back into the ventilation system of the building to be recycled. This constant movement of air makes the inside of a data center a very loud and windy environment. And although at some data centers not all racks are filled up, the air is still blown through them with the same force, which is obviously very inefficient.

Figure 16: Diagrammatic analysis of the percentage of Intergate Manhattan10 (p.24) taken up by the actual server machines in plan and volumetrically. Includes only the actual server machines, excludes wiring, ventilation ducts, etc. Projected to full capacity.

Figure 17: The same volume of machines takes up only 52% of the available volume of a much smaller site (see next part of the book: “Project”).

Figure 18: Resulting difference in scale between 375 Pearl St. and the proposed building in lower Manhattan, if the machines are densified to the maximum.

The other reason for the wide gaps between racks is the need to access the machines and plug them in and out, or, in some rare cases, to wheel a "crash cart" to a specific machine for manual diagnostics. Servers do not typically have dedicated interfaces, such as monitors or keyboards, and most maintenance operations are handled remotely (in the case of the Rackspace facility visited by the author in the National Capital Region – from Austin, TX). The racks have to be accessible from both sides, so that the machines can be diagnosed from the front and plugged into the grid and the backbone at the back. At the specific facility visited by the author, the daily exchange rate was approximately 14 machines for a total "population" of 7,000, meaning 14 machines had to be removed and 14 new ones put in their place daily, according to staff. Seeing this current standard for data center layout, one cannot help but wonder whether it is indeed the most efficient way to organize such a facility (Figures 17 and 18). However, the inefficiencies inside a data center do not stop at the building scale – they continue down the scales: inside the machine itself, the component layout can be optimized.
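Taking the reported figure at face value – a hedged reading, not data from the facility itself – the implied rate of physical intervention is small:

\[
\frac{14}{7{,}000} = 0.2\% \text{ of the machines per day} \;\approx\; 73\% \text{ of the fleet per year,}
\]

i.e. an average rack position is physically touched roughly once every 500 days. A layout that trades easy walk-up access for density and robotic handling therefore gives up very little day-to-day convenience.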



Figure 19: Volume difference between a PC desktop and a Mac laptop.

Figure 20: Density of component packing inside a MacBook Pro 2010 laptop.

Packing a computer's components tightly is not a new concept. One of the things that make Mac computers so stylish and elegant is the ability of Apple's engineers to fit miniaturized components into tight frames. If we open up the cover of a MacBook Pro and that of a standard desktop PC, the difference in layout efficiency is staggering (Figure 19). In some cases, Apple's computers have to utilize custom-formed bolts and metal holders that clutch multiple components in place simultaneously while using only one holding screw, in order to allow for the densest "packing" and "multitasking" even on the hardware level (Figure 20). Comparing a Mac laptop with a PC desktop is like comparing the super-tight automated parking garages of New York (Figure 21) with the sprawling parking lots of suburban Virginia.

Packing of machines inside a data center can be achieved via different techniques, for example by stripping down and optimizing the actual server machines so that they can be serviced from only one side, and by making them easily "pluggable" into the mainframe. IBM's blade servers and their related enclosures16 are great examples of doing just that. Another avenue is to mechanize the maintenance operations by employing robots, instead of humans, to reach the machines and exchange whatever parts need to be exchanged or fixed. However, as is the case with Macintosh laptops, the denser the layout, the more difficult it becomes to evacuate the heat produced by the computer's working components from the body of the machine. While Apple seems to partially solve this by using the "unibody" shells of their laptops as singular oversized heat sinks, another solution is to use a different medium of heat evacuation, one much more effective than air – a liquid, most commonly water.

Figure 21: Automated parking - a space-saver in urban conditions - is a great example of radically and efficiently densifying a program we are used to seeing in a very different form.

Notes: 16 IBM Systems and Technology Group (2013, January). IBM BladeCenter: Build smarter IT (PDF Document). Brochure. Retrieved from http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=PM&subtype=BR&htmlfid=BLB03002USEN&attachment=BLB03002USEN.PDF

17 IBM (2010, July 2). Made in IBM Labs: IBM Hot Water-Cooled Supercomputer Goes Live at ETH Zurich. News release. Zurich. Retrieved from http://www-03.ibm.com/press/us/en/pressrelease/32049.wss

18 Cray Research (1988). Cray-2 Series of Computer Systems (PDF Document). Brochure. p.5. Retrieved from http://www.craysupercomputers.com/downloads/Cray2/Cray2_Brochure001.pdf

Given water's thermal properties, it arguably "removes heat 4,000 times more efficiently than air"17. This makes it the coolant of choice for some energy-dense applications, such as cooling car engines and power plants. However, liquid cooling has also been successfully applied in computing. Apart from the gaming enthusiasts, who know that adding a liquid cooling loop to a CPU will dramatically improve the performance of the machine, there have been a few pioneering supercomputer designs which used liquid as their cooling medium of choice. One example is the 1985 Cray-2 supercomputer, which was remarkable for a number of reasons. First, it employed a novel approach to stacking motherboards on top of each other in order to achieve groundbreaking computational speed in a tight envelope. This stacking led to another innovation: the computer used liquid cooling (in this case, an inert fluorocarbon liquid) in which the integrated circuit packages and power supplies were submerged18. Another remarkable example is the current Aquasar supercomputer design, which uses "hot water cooling", meaning



Figure 22: Cytosol is the liquid that constitutes most of the volume of a cell and in which the organelles of the cell are suspended (marked yellow). Image source: “Cytoplasm Organelles“. TutorVista.com. Retrieved from http://www. tutorvista.com/content/biology/ biology-iii/cell-organization/ cytoplasm.php.

the water used for cooling the components has a temperature of up to 60°C. This reduces the energy consumption of the facility by 40%. Additionally, at ETH Zurich, where the first Aquasar supercomputer was installed, exhaust energy produced by the machine is used to heat buildings on the university's campus, which reduces the carbon footprint of the computer by 85%19,20,21. Last but not least, the computer uses IBM's BladeCenter technology to, again, densify the design.
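The order of magnitude of the "4,000 times" claim quoted above can be checked against textbook volumetric heat capacities – a rough sanity check, not a figure from the cited sources:

\[
\frac{(\rho c_p)_{\text{water}}}{(\rho c_p)_{\text{air}}}
\approx \frac{1000\,\tfrac{\text{kg}}{\text{m}^3} \times 4.18\,\tfrac{\text{kJ}}{\text{kg·K}}}{1.2\,\tfrac{\text{kg}}{\text{m}^3} \times 1.0\,\tfrac{\text{kJ}}{\text{kg·K}}} \approx 3{,}500,
\]

so, per unit volume moved past the components, water carries away heat a few thousand times more effectively than air – the same order of magnitude as the cited figure.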

Notes: 19 IBM (2010, July 2). Made in IBM Labs: IBM Hot WaterCooled Supercomputer Goes Live at ETH Zurich. News release. Zurich. Retrieved from http://www-03.ibm.com/press/ us/en/pressrelease/32049.wss

SCIENCE FICTION

Figure 23: The "Monolith". Kubrick, S. (1968). "2001: A Space Odyssey". Movie screenshot.

With the two examples just discussed in mind, one may only wonder what the next step in the "architecture" and engineering of computers might be. Maybe one day we will be able to build a "liquid computer", where a special "smart liquid" would act as both the cooling and the "wiring" agent, with microprocessors submerged in a computational "cytosol" (Figure 22) and connections between them established "on demand" (something similar to the supercomputer visualized in the movie "I, Robot"). Or maybe it will become a replica of the "monolith" from Stanley Kubrick's "2001: A Space Odyssey" (Figure 23) – a computer with its components packed so densely that it would appear solid from a distance, only revealing its artificial nature when looked at through a microscope: an architectural artefact approaching the Egyptian pyramids in its density and symbolic significance (Figure 24). We shall see.

20 IBM Research - Zurich (n.d.). Zero-emission Data center: Direct use of waste heat to minimize carbon-dioxide emission (online). Retrieved from http://www.zurich.ibm. com/st/energy/zeroemission. html

21 Broersma, M. (2010, December 14). IBM XeonBased Supercomputer To Hit Three Petaflops. Tech Week Europe (online). Retrieved from http://www.techweekeurope. co.uk/news/ibm-xeon-basedsupercomputer-to-hit-threepetaflops-15866

Figure 24: A diagrammatic section through an Egyptian pyramid - a solid piece of information thousands of years old we still have trouble deciphering.



What if we could build data centers into structures like these? What would they tell our successors about our civilization? Would they start wondering what the structures were - the way we still wonder today about Stonehenge and the pyramids?



PROJECT



Figure 1: Conceptual image of a rock in an urban setting. Image credit: Filip Dujardin. “Fictions”. Retrieved from http:// www.filipdujardin.be/

CONCEPTS

The building project presented herein aims to be a manifestation of the ideas discussed in this book so far, and to serve as an example for the future development of data centers: a new "public data center" which, if not applicable to all locations, might still be possible in some contexts.

Notes: 1 Koolhaas, R. (1978). Delirious New York. Thames & Hudson. p. 294.

Although the topics discussed so far have been informative for figuring out what a data center of the future should be technically, the actual shape of the building was derived from a separate set of concepts and observations, most of them pertaining to the specific location of the building – in the NoHo historic district of Manhattan, NY.

Figure 2: A drawing of an omnidirectional intersection within a 3-dimensional grid.

Figure 3: The grid. Commissioners' Plan of Manhattan, 1811. Image Source: Koolhaas, R. (1978). Delirious New York. Thames & Hudson. p.18-19.

As discussed earlier, a data center is not the most open of building types. With its lack of windows and tight security requirements, an urban data center stands in stark contrast to the rest of the city, which is made of programs that we can access and use. One way to describe such a building would be to call it a "rock" – an object we can walk around but cannot enter, which "just stands there" and could not care less about our existence (Figure 1). The proposed building's shape becomes a direct metaphor for this. Its shape is curvy as opposed to orthogonal, since an orthogonal form would have immediately given away the human thought behind it. Its formal language is so alien to the surrounding context that it seems to have grown out of the ground. Its general inaccessibility, with only a specific "path" that can be taken in order to ascend it, its lack of apparent entrances or exits, and its non-conformity to the typical grain of the elaborately decorated wall-and-window architecture of its surroundings are meant to invoke the feelings one would commonly experience on a hike through the mountains.

In addition, the building's rock-like appearance allows it to reflect on Manhattan's legacy of man-made naturalism. Through the ages, the island's topography has been endlessly modified by human use, leaving virtually no examples of the original landscape. Even the most renowned public park of the island – Central Park – is entirely man-made despite its naturalistic look and feel: designed by Frederick Law Olmsted and Calvert Vaux, it was built from a swamp into a public park through the effort of countless laborers. Thus, by inserting its rock-like bulk onto a block in downtown Manhattan, the building tries to surprise the passer-by as a sublime, slightly surreal and out-of-context object, which is, as a matter of fact, entirely contextual.

The second important concept is that of the "grid" (Figure 2). New York City is defined by its urban grid (Figure 3). As argued by Rem Koolhaas in his book "Delirious New York" (1978), Manhattan's grid is the framework that enables the city to act as a collection of separate and independent worlds, where each block is relatively independent from the others and can be whatever it wants to be1. All of these independent worlds have to play within the framework of the grid, and can interpret it the way they see fit.



Figure 4: The grid and the "microchip aesthetic". A die diagram of the Intel Core i7 Nehalem processor. Image source: Intel Corporation (2008, November 17). Intel Launches Fastest Processor on the Planet. Press release. Santa Clara, Calif. Retrieved from http://www.intel.com/pressroom/archive/releases/2008/20081117comp_sm.htm

Figure 5: Conceptual model of filling the vacant space of the site, making it into a true urban “block”. An image of density.

Figure 6: The grid filling the site (left) and its fractal language (right).

On the other hand, the grid is an indispensable element of computing. If we look closely at microchips and the layouts of motherboards, we see a strict underlying logic: a matrix through which individual electrons are routed and rerouted in order to perform calculations with the help of a handful of elementary operations, or logic "gates" as they are called, such as AND, OR, NOT, NAND and NOR (see Figure 4 on the left, and Figure 14 in the chapter "Opportunity"). From combining these few simple elements in a universe of ways, the whole complexity of modern computing arises.
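As a small aside (not part of the thesis argument), the point about complexity arising from a few gates can be made literal in a dozen lines of code: every gate below is derived from NAND alone, and from those gates a first piece of arithmetic already follows.

    # All gates derived from NAND, the classic "universal" gate.
    def NAND(a: int, b: int) -> int:
        return 0 if (a and b) else 1

    def NOT(a: int) -> int:
        return NAND(a, a)

    def AND(a: int, b: int) -> int:
        return NOT(NAND(a, b))

    def OR(a: int, b: int) -> int:
        return NAND(NOT(a), NOT(b))

    def XOR(a: int, b: int) -> int:
        return AND(OR(a, b), NAND(a, b))

    def half_adder(a: int, b: int) -> tuple:
        # One-bit addition: returns (sum, carry).
        return XOR(a, b), AND(a, b)

    for a in (0, 1):
        for b in (0, 1):
            s, c = half_adder(a, b)
            print(f"{a} + {b} -> sum {s}, carry {c}")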

Notes: 2 Fractal. Retrieved January 12, 2014 from http:// en.wikipedia.org/wiki/Fractal

Closely related to the grid is the notion of the "fractal". Although the grid is a rigid system, it allows for a certain freedom through the application of fractal logic. Citing multiple sources, Wikipedia defines fractals as "typically self-similar patterns, where self-similar means they are 'the same from near as from far'... Fractals may be exactly the same at every scale, or they may be nearly the same at different scales. The definition of fractal goes beyond self-similarity per se to exclude trivial self-similarity and include the idea of a detailed pattern repeating itself."2 I mentioned fractals earlier in relation to a 1967 paper by the mathematician Benoît Mandelbrot titled "How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension" (p.15), where I argued that the internet, and by extension the cloud, is fractal: regardless of how closely we look at it, it reminds us of a mesh of interconnected nodes, with still more nodes and connections to discover as we zoom in. If applied to the grid, the fractal would be a grid within a grid, within a grid, within a grid – and so on to infinity.

The building takes these two related concepts and applies them by uniting all its separate parts via an omnipresent – sometimes explicit, sometimes just implied – grid. First, a virtual 12'x12'x12' grid fills the entire site (Figure 5), and then parts of it rarify while others densify or fracture to accommodate specific programs (Figure 6). The most explicit architectural manifestation of this fractal grid is the public route onto the top of the building: the 12'x12'x12' module fractures and the cubic pattern densifies, with bigger modules providing structural stability in the areas inaccessible to people, while the smaller ones provide for human occupation in the form of stairs and floors. By extension, this modularity and fractality allow for control of the relative transparency and opacity of the building. For example, looking at the public grid from up close, all we see is a collection of cubic elements, but as we zoom out the elements visually merge and start to resemble a cloud ascending the slope of the "rock", squeezed by the neighboring buildings.

The last major conceptual inspiration has been the interplay and tension, or even a certain bipolar separation, between the public and the private realms, the inside and the outside of the facility (see Figure 7 on the next page, or Figure 15 in the chapter "Roadmap"). For one, the building aims to be a public building, but does so by being entirely private.



Figure 7: Separation of the inside and the outside, the private and the public: the “lobotomy” (see p.26). Collage by the author. Data center image source: Infopipe. Retrieved from http:// goinfopipe.com/what-we-do/

It can pull off such a trick because of its radical separation of the inside and the outside: because the private interest is wholly contained inside it, the outside shell can become a public place (see p. 20). In a way, the outside of the building becomes an antipode to the city. If New York is an explosion – this building is silence; if the city is a "Culture of Congestion"3, this building is void. At the same time, inside, it is the city's densest clone, for what it contains makes it extremely dense and city-like – it is packed with information, which is in constant motion and turmoil. In fact, if a city could be represented as pure information, this building would probably be one of its densest nodes.
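To close the discussion of the concepts, the fractal grid described above can be sketched as a simple recursive rule; the cell sizes, the depth of subdivision and the rule deciding where to fracture are illustrative assumptions, not dimensions taken from the project drawings.

    # A cubic cell either stays whole or fractures into eight half-size cells,
    # and so on down to a chosen minimum - "a grid within a grid". With the
    # example rule below, modules high up stay at 12' while those near the
    # ground fracture down to 3' to accommodate stairs and floors.
    def subdivide(origin, size, keep_whole, min_size=3.0):
        x, y, z = origin
        if size <= min_size or keep_whole(origin, size):
            return [(origin, size)]
        half = size / 2.0
        cells = []
        for dx in (0, half):
            for dy in (0, half):
                for dz in (0, half):
                    cells += subdivide((x + dx, y + dy, z + dz), half,
                                       keep_whole, min_size)
        return cells

    rule = lambda origin, size: origin[2] >= 12.0   # keep large modules high up
    cells = subdivide((0.0, 0.0, 0.0), 24.0, rule)
    print(len(cells), "cells; sizes present:", sorted({size for _, size in cells}))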

Notes: 3 Koolhaas, R. (1978). Delirious New York. Thames & Hudson. p. 293, par.2



SITE

The building site is located in Downtown Manhattan (Figure 8), in the historic NoHo district, which is defined by its cast-iron architecture dating back to the end of the 19th and the beginning of the 20th century. There were three major reasons for locating the building here.

Figure 8: Position within Manhattan (map labels: Washington Square Park, Cooper Square, NYU, Broadway, E 4th St., Bowery, Lafayette St., Great Jones St., Houston St.).

Figure 9: Situation plan (scale bar: 0-400'). Site marked in yellow. Base image source for Figures 8 and 9: Google Earth.

The first reason is technical. Although this particular section of Broadway is not a major fiber thoroughfare, as are, for example, Bowery or Hudson Streets (see Figure 10 on the next page), it still serves as an important local fiber avenue. It is also located right next to a major institution – NYU – which could be engaged as the data center's main tenant.

The second reason is social. Not only is this neighborhood a tourist magnet and a shopping heaven, it is also conveniently located at the junction between the East and West Villages, between New York University and Cooper Union (Figure 9). This is one of the "edgiest" neighborhoods in the city, boasting a vibrant street culture of ad-hoc events, theatres, art installations and the like. Thus, this location will ensure that the public space the facility aims to introduce will actually be used, both by the surrounding community and by the plentiful visitors to this part of the city.

The third reason is spatial. This particular parcel of land strikes one as an extremely empty and dilapidated lot among its high-rise surroundings. It is also one of the few parcels along Broadway that are empty, and no corporate or institutional interest has yet made any considerable investment in it, as illustrated by the scarcity and seediness of the existing development. Thus, it is a lot that can be "scavenged" (see p.25); moreover, given the position of the proposed facility inside the block, with its height just barely over its neighbors, it is also a good example of the strategy of being "small" and "hidden" discussed on page 24.

The immediate site for the proposed facility is composed of three separate lots (see Figure 11 on the next page), all located on Manhattan's Block nr. 531. Lot nr.4 is currently occupied by a tourist market made up of temporary tents, with owners selling scarves, hats and various New York memorabilia. The back lot, nr.15, is currently occupied by a former factory building. Lot nr.3 currently holds a two-story building housing a shoe store. In addition, the extension of Great Jones Alley, which falls between lots nr. 3-4 and 15, has turned into a shady back alley where syringes and trash are a common sight. Thus, despite its location right on Broadway and in direct proximity to New York University (NYU), the site is currently well below its occupational capacity, holds buildings of low architectural value, and is not a friendly urban environment by any means (see Figures 12 and 13 on pages 37 and 38).


Figure 10: Major data infrastructure of Lower Manhattan: RED - existing data centers; YELLOW - major fiber avenues; BLUE - location of the proposed data center.

In addition to its ground occupation of lots 3, 4 and 15, the building also occupies the air space above the building on lot nr.7504, thus spanning the whole block from Broadway to Lafayette Street.


The existing buildings on lots 3, 4 and 15 will be demolished, and the new structure will be erected in their place. The building in the back of the site, on lot nr.7504, will stay intact during both construction and occupation.

Figure 11: The different lots comprising the site. Base map and information source: New York City, DoITT City-wide GIS. Retrieved from http://maps.nyc.gov/doitt/nycitymap/



Figure 12: A panoramic view of the site from across Broadway.



Figure 13: Snippets of the site.


Figure 14: One of the largest data centers in the United States, ACC4, owned and operated by DuPont Fabros, located in Ashburn, VA. Total area: 348,000 sq ft; 36.4 MW critical load4. Image credit: DuPont Fabros Technology, Inc. Retrieved from http://www.dft.com/themes/dft/images/data_centers/DFT_ACC4_layout.pdf

PROGRAM

As the specifics of program composition of any data center are somewhat variable and often secret, the program composition of the data center proposed here is not precise. It is instead the result of an interpolation of four major sources:

1) Plans, downloaded online, of one of the largest data centers in the US, called ACC4, which is operated by DuPont Fabros, located in Ashburn, VA, and visited by the author (Figure 14)4,5;

2) A recollection of the spatial arrangement and programmatic composition of a medium-sized data center located in the National Capital Region, also personally visited by the author (Figure 15);

3) An online brochure describing a major urban data center proposal – "Intergate Manhattan", located at 375 Pearl St. in New York City6;

4) General information on the programmatic composition of data centers, available online and as described by industry experts.

Figure 15: An approximate reconstruction, from memory and using Google Earth, of the program of a data center visited by the author: 1 - Server space; 2 - Air Handling Units; 3 - Uninterruptible Power Supply; 4 - Generators; 5 - Server maintenance; 6 - Backbone; 7 - Corridor; 8 - Offices; 9 - Inventory / Assembly; 10 - Security; 11 - Support; 12 - Loading dock; 13 - Storage; 14 - Restrooms; 15 - Kitchen / Recreation.

Figure 16: Isometric diagram of program distribution inside the proposed building.

Notes: 4 DuPont Fabros Technology, Inc. website: http://www.dft. com/data-centers/locationinformation

5 DuPont Fabros Technology, Inc. (n.d.). DFT_ACC4_layout. pdf (PDF Document). Retrieved January 12, 2014 from http:// www.dft.com/themes/dft/images/data_centers/DFT_ACC4_ layout.pdf

6 Sabey Data Centers (2013). Introducing New York’s Only Purpose-Built Data Center Campus. Brochure. Retrieved from http://sabeydatacenters. com/portfolio-item/intergatemanhattan-brochure/

Given the fact that the proposed facility aims to be more advanced than its contemporary counterparts, the amount and density of space allocated to the actual servers might seem disproportionate relative to the other components. This was done based on the assumption that the performance and capacity of the technology going into this data center will evolve and advance. Thus, the program is rather speculative.

Overall, the program of the facility can be divided into four major categories (Figure 16): 1) server space – the space where the actual server machines are installed; 2) support spaces, which include the uninterruptible power supply (UPS), generator rooms, HVAC systems, and an on-site water treatment plant; 3) office space for the data center team; and 4) public space. The last category is a new type of space for this building type, as typically no public spaces are included in a data center. However, as one of the main goals – and also the main innovation – of the building is to be a public building, public spaces account for a rather large part of the program. They include, among others, a visitor center and an open-air public staircase culminating in a rooftop terrace.

Color legend: GREEN - server space; MAGENTA - support; BLUE - office; YELLOW - public; GREY - circulation.



Figure 17: Diagrammatic massing sketches – filling in the block; good neighbor; keeping the rhythm; office block in the back.

MASSING

There are a number of factors that define the massing of the building (Figure 17). First, as discussed in "Concepts", the block is filled with a virtual 12'x12'x12' grid. The grid occupies all the air space inside the block, and its maximum height correlates with the surrounding buildings, thus "topping out" the block.

Notes: 7 Feeney, J. (2010, March 23). Real estate law made simple: What to look for before you buy a place with great views. Daily News. Retrieved from http://www.nydailynews.com/ life-style/real-estate/real-estatelaw-made-simple-buy-placegreat-views-article-1.172485

Into this virtual grid, a "rock" element – the opaque main data center volume – is placed. Given the fact that the building is surrounded by others, the "rock" tries to be a good neighbor: in some places, where there are a lot of windows, it bends away from the neighbors, allowing light and views to be retained by most of the occupants of the surrounding buildings, even though most windows overlooking the site are so-called "lot-line windows" and can be ordered closed at the owner's expense.7 In addition, the built grid, although still filling the site, stays relatively sparse closer to the neighboring buildings to ensure both insolation and privacy for their occupants. The front elevation of the building, which overlooks Broadway, conforms to the scale and general massing of the facades of its neighbors: it keeps its height and width proportionate with theirs, without any stepping back, and divides the available space evenly between the "rock" and the public "grid" in order to support the regular volumetric rhythm of Broadway.

While the server center itself and the public staircase (grid) face Broadway, the office block is placed in the back of the site, on lot nr.15, in place of the existing structure. This allows for a separate entrance for the office, a relative separation of the building's public and office functions, and improved servicing of the building.

LOGISTICS

The building has two separate access routes: a public entrance on Broadway, which is open to a large number of people, and a more secluded one on Great Jones Street (Figure 18 on the next page).

The front of the building facing Broadway is open to the general public. There is no control of any sort at the Broadway entrance, and anyone can enter the maze of the grid and make their way up to the upper deck, which is also entirely publicly accessible. One cannot enter the core of the data center or the office spaces using this entrance. In addition to the staircase, an elevator is provided so that disabled visitors and groups can access the top platform of the building. It is not a typical elevator in that it is designed to move along the 12'x12'x12' grid in all directions. Thus, it does not only



Figure 18: Access to site: YELLOW - public; GREEN - service/deliveries/maintenance; MAGENTA - office/visitor center; BLUE - omnidirectional public elevator.

Figure 19: Robot types inside the building's server space: a robotic arm serving as an elevator for maintenance personnel (right), and a sliding heavy-lifting elevator on top of the server aisles (left).

move vertically, but also horizontally, although always following the same path (blue arrow on the access diagram on the left).


The side entrance facing Great Jones Street takes advantage of the relatively slow pace and side-street feel of the street. This entrance has a threefold function: deliveries, employee entry, and visitor center entry. Delivery trucks can maneuver into the loading dock on Great Jones Street without much inconvenience, which would not have been possible had Broadway been used for this function (green arrow on the diagram on the left). The dock accommodates up to two 48-foot trucks. Right next to the cargo bay is the pedestrian entry for the data center personnel and the visitor center (magenta arrow on the diagram). The pedestrian entry group includes a glazed booth with security personnel and a space with metal/explosive detectors and biometric scanners. No one can access the inside of the building without passing this area first, which is also connected to the loading bay via a window. Regular employees can skip the additional screening by simply scanning their fingerprints (or retinas), while visitors, clients and other parties need to go through the entire screening routine before accessing the data center. Once visitors have passed the security check, they can move freely around the office component of the building. However, to access the maintenance/support spaces or the actual machine space of the data center, additional clearance is required.

There is no staff or visitor parking provided inside the building. Instead, both employees and visitors can use designated spaces in the parking lot located at the intersection of Great Jones and Lafayette Streets.

Inside the building, movement can be divided into two components: the movement of people, and the movement of equipment/cargo.

Figure 20: Sliding platforms on the 0 floor (-24'-0"). Diagram labels: deliveries; node / airlock.

Inside the office and support spaces, human logistics is organized using conventional means: a system of staircases and an elevator. In the server machine space, on the other hand, robotic arms are used for moving around in most cases, although ladders at the ends of each server "aisle" are also provided for use in extreme situations. The elevator inside the office component adheres to the 12'x12'x12' grid logic and serves the entire height of the building from the 0 to the 7th floor; it is large enough to transport both people and cargo. As it is a logistical artery that ties together both the office and service components of the building, it has the same biometric equipment as the entry area to ensure that visitors can only go to designated areas, according to their security clearance status (for example, the 1st, 2nd and 7th floors of the building, but not the 0 floor). The above-ground portion of the office component is served by a staircase similar to the one of the public grid outside: it is made up of a

Exposing the Data Center Project

41


Figure 21: Server aisles inside the "rock".

Figure 22: A step-by-step modularity diagram of the server “brain-center”.

fragmented 12'x12'x12' module. It weaves through the different spaces of the office component and doubles as both a logistic element and a means of egress. The subterranean portion of the building is served by a spiral staircase which connects the maintenance area and the loading bay, with a second spiral staircase spanning all the floors of the generator and UPS room.

Inside the server machine room, transport of both objects and people is carried out by robots, of which there are two types (see Figure 19 on the previous page). The first type is a robotic arm secured onto a moving platform, which slides along rails in all three dimensions inside the server space. These robots, used here purely to illustrate the concept, are inspired by existing industrial prototypes such as the KUKA KR 120 R3500 PRIME K8, which can lift loads of up to 250 pounds, making them suitable for all the necessary applications inside the center: performing maintenance, carrying out plug-in/out operations on either individual machines or entire sets of them, and serving as elevators for the maintenance personnel. The second type of robot is similar to a sliding elevator commonly found in docks, warehouses and industrial plants: it slides along the top of the server aisles and is capable of lifting larger loads, such as entire rack "panels" (see "Server Component" below).

Notes: 8 KUKA Robot Group. Industrial robots: KR 120 R3500 PRIME K (KR QUANTEC PRIME). Retrieved January 12, 2014 from http://www. kuka-robotics.com/en/products/industrial_robots/special/ shelf_mounted_robots/kr120_ r3500_prime_k/

The movement of large objects through the building is handled by two sliding loading platforms based on a scissor lift, which are situated on the 0 floor, connect the loading dock to the storage, maintenance, server and generator rooms, and allow bulky objects to be transported in one go (Figure 20 on the previous page). Each platform measures 11.5' by 11.5' and can be operated either separately or combined with the other for use as a single large platform.

Figure 22 diagram labels: single blade; blade center; rack panel; backbone; ridge.

Figure 23: Diagram of using exhaust heat as energy. The cold water (blue) is run up to the server machines and the resulting exhaust hot water (red) is routed into the heat exchange (green) and then out into the surrounding buildings (yellow).

It is important to remark that the building was designed to have the vast majority of maintenance tasks computerized and performed by robots, either according to prescribed automation protocols or, in more complicated situations, in a manual mode operated remotely by the employees of the facility. The need for actual human access to the server machine space is reduced to a minimum.

SERVER COMPONENT

The server space is the part of the data center that all the other parts are meant to support – it is the core of this building and its reason for being, the place where the networks converge and the information is stored (Figure 21). The proposed data center machine space takes advantage of all the innovations proposed in the previous part of this book. Although the building is not quite a solid thing (as mentioned briefly at the end of the "Roadmap"),



Plan: 1 - Ridge cavity; 2 - Space between ridges.

Section through the superstructure of the server farm: 1 - Ridge cavity; 2 - Space between ridges.

it tries to approach solidity using currently available technologies. Using existing designs for servers and supercomputers as points of reference, the machines are packed more densely than in a typical contemporary data center, and the aisles of the center are vertical as opposed to horizontal, which saves space by eliminating floor divisions. The facility also uses water for cooling, as opposed to air, and utilizes exhaust heat for heating the neighboring buildings (Figure 23 on the previous page) – a concept proposed earlier.

Originating from the concept of fractality, the data center's "brain center" is highly modular (see Figure 22 on the previous page). The initial "cell" of the facility is a blade server, inspired by IBM's BladeCenter QS22 Type 07939. Individual blades get arranged into enclosures, which are inspired by IBM's BladeCenter E Chassis10 enclosure (both IBM designs are used purely as references for illustration of concept in this study). These "blocks" then get plugged into a rack system, or "panel", which is based on a typical 19-inch server rack11 (a system that has been around for a while and is probably going to remain the industry standard for at least the foreseeable future), but optimized to fit perfectly into the 12'x12'x12' spatial module of the facility. The rack "panel" then gets inserted into the superstructure of the server, which consists of the load-bearing structure and the backbone comprising network, plumbing and electricity connections (in Figure 22, the backbone is shown in orange). All of these layers together result in "ridges", which are laid out with enough space between them to be maintained by both robots and humans (Figure 24). The ridges are constructed in such a way that they can be erected and removed as necessary, being based on typical steel-frame construction.
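The nesting of modules can be summarized in a few lines of code; the per-level quantities below are illustrative assumptions, not capacities taken from the referenced IBM products or from the project itself.

    # blade -> enclosure -> rack "panel" -> ridge (all counts are assumptions)
    BLADES_PER_ENCLOSURE = 14
    ENCLOSURES_PER_PANEL = 6
    PANELS_PER_RIDGE = 24

    def blades_in_ridges(n_ridges: int) -> int:
        # Total number of blade servers housed by a given number of ridges.
        return n_ridges * PANELS_PER_RIDGE * ENCLOSURES_PER_PANEL * BLADES_PER_ENCLOSURE

    for ridges in (1, 4, 10):
        print(f"{ridges} ridge(s) -> {blades_in_ridges(ridges):,} blades")

The multiplication is the whole point: each level of the "fractal" packs the one below it, so the capacity of the server core scales by adding or removing whole panels and ridges rather than by rebuilding floors.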

Notes: 9 IBM (2008). BladeCenter QS22 Type 0793 - Installation and User’s Guide (online). Copyright International Business Machines Corporation 2006, 2008. Retrieved from https://publib. boulder.ibm.com/infocenter/ bladectr/documentation/ index.jsp?topic=/com.ibm. bladecenter.qs22.doc/ qs22iu03.html

10 IBM Systems and Technology Group (2010, March). IBM Blade Center E (PDF Document). Data sheet. Retrieved from http://www-01. ibm.com/common/ssi/cgi-bin/ ssialias?infotype=PM&subtyp e=SP&htmlfid=BLD03018USE N&attachment=BLD03018US EN.PDF&appname=STG_BC_ USEN_SP

11 19-inch rack (n.d.). Retrieved January 12, 2014 from http://en.wikipedia.org/ wiki/19-inch_rack

SUPPORT SPACES

Figure 24: Detailed drawings of the server. Elevation, as seen from the outside.

Elevation of the ridge inner cavity, exposing the plumbing and backbone connections.

The support spaces are meant to support the operations of the data center and can be broadly divided into three categories, pertaining to 1) power, 2) HVAC and 3) maintenance/storage. Most of these are concentrated on the southern side of the building, underneath the swooping slope that supports the public grid, and on the subterranean level of the office component (see Figure 25 on the next page).

Power-related spaces include: 1) the grid in/out room, through which the facility taps into the municipal electricity network; 2) the generator room, which contains the alternative autonomous electricity supply units, typically in the form of diesel generators, used to power the facility in the event of electricity grid failure; and 3) the Uninterruptible Power Supply (UPS), in the form of batteries, which levels out short-term power spikes and interruptions, as well as picks up the load of powering the data center while the generators are firing up.

Because the building uses water as the main cooling agent, the typical

Exposing the Data Center Project

43


Figure 25: Support spaces.

UPS Storage / Maintenance

HVAC / Heat Exchange Water Treatment

Generators

Figure 26: Office component. Break-out space

Because the building uses water as its main cooling agent, the typical HVAC installations are not present in the building. The human-occupied parts of the building have floor heating, so both the cooling and the heating of the building are done with either the hot water exhausted by the server in the winter or the same cold water that goes into the servers in the summer. Given the building's position deep inside a block, a large part of it stays in the shade throughout the year, which means that it will not get hot even during the summer. This allows the water necessary for cooling the servers to be produced internally by using the back yard as a concealed "cooling pond". Harvested rainwater and meltwater are also planned to replenish the water supply. The tank and the internal water treatment facility are located at the south-western corner of the building, at the landing of the public staircase.
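To give a sense of scale for the water loop, the sketch below applies the basic relation Q = m * c_p * dT. The IT load and the supply/return temperatures are assumed values for illustration; the thesis does not specify them.

# Assumed figures for illustration: a 2 MW IT load and a 10 K temperature rise.
IT_LOAD_KW = 2000.0     # assumed heat output of the server "rock", kW
SUPPLY_TEMP_C = 18.0    # assumed cooling-water supply temperature, deg C
RETURN_TEMP_C = 28.0    # assumed return temperature after the servers, deg C
CP_WATER = 4.186        # specific heat of water, kJ/(kg*K)

delta_t = RETURN_TEMP_C - SUPPLY_TEMP_C
flow_kg_s = IT_LOAD_KW / (CP_WATER * delta_t)   # Q = m_dot * c_p * dT

print(f"temperature rise across the servers: {delta_t:.0f} K")
print(f"required water flow: {flow_kg_s:.0f} kg/s (roughly {flow_kg_s:.0f} L/s)")
# Nearly all of the electrical load ends up as heat, so roughly the same ~2 MW
# stream is what the heat-exchange room can offer the neighboring buildings in winter.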

Notes:

The third type of support spaces are the storage and maintenance spaces. These are used to store the inventory of server machines, assemble smaller components into larger parts or disassemble them, and, where possible, repair broken machines on the premises. While the rest of the support spaces are true machine rooms, the spaces in this cluster have more of a workshop feel, as the work in them is carried out by personnel.

OFFICE COMPONENT

Meeting

The office component of the building (Figures 26 and 27) is where the data center personnel spend most of their time. Their tasks mostly include fixing problems and monitoring the functioning of the center from their workstations. Although the center would ideally function entirely autonomously, this particular building includes office space to bridge the "evolutionary gap" between the current "attended" data center type and the fully automated type of the future. This potential liberation from any need for human attendance is also the reason for placing the office component in a separate volume rather than including it in the "rock".

Office Office

Visitor center Deliveries

Figure 27: A rendering of the office component of the building with glazing and context removed.

Most of the work spaces of the office component overlook Great Jones Street, aligning themselves along the slender southern façade of the building. This gives the work spaces plentiful insolation and relative spaciousness, as well as a bold presence on the street. The block's architecture is based on the grid, within which the occupiable spaces are arranged, or "packed". In addition to the actual offices and the formal meeting spaces used for both internal meetings and client negotiations, the office component contains plenty of the recreational/socializing space typical of IT offices, such as the break-out space on the 3rd floor and a kitchen on the 2nd. At ground level, the office entrance opens into a spacious multi-level lobby and a gallery that culminates in an atrium containing the visitor center at the opposite side.

Exposing the Data Center Project

44


Top landscape

Figure 28: Public spaces.

PUBLIC ROUTE

Notes:

Probably the most important innovation in the proposed building is that it acts as a public facility. Thus, a large part of its volume is devoted to public programs (Figure 28).

Public stair

Lobby, Gallery Visitor center The bench

Figure 29: A view of the public stair entrance illustrating the varying density of the grid and its morphing from structure to stair.

Figure 30: A rendering of the top landing of the public staircase and the “landscape” on top of the building.

Walking around the neighborhoods of Lower Manhattan, it is easy to notice that there are virtually no places to sit along Broadway. If one does not want to go to a Starbucks or some high-end restaurant, he/she is left either standing or walking. Typically, as one grabs his "chicken over rice" from a Halal food cart during lunch break, he is forced to go off Broadway, into a park - say, Washington Square Park, or Cooper Square - to sit down and actually eat. For whatever reason, a simple bench, an elementary piece of public infrastructure in virtually any other large city, is absent from Broadway.

This is why, in the place where the data center meets the street, at the very bottom of the front wall looking onto Broadway, the windowless, straight-down monolith wall of the center bends just enough to allow for seating. It is as if an alien from a different galaxy that found itself on Broadway by accident wants to be friendly despite its weirdness and size, and offers the local folk something pleasant in exchange for contact.

Right next to the bench, also open onto Broadway, is the public stair. It is the part of the building where "the grid" and "the fractal" get their most extensive incarnation. Conceptually, the stair is a glorified version of the escape staircase so typical of New York City. Thankfully, in New York evacuation stairs are seldom used for their original purpose. People who have an apartment or an office with an evacuation staircase typically use it as a balcony: a hangout spot where one takes sun baths in nice weather, reads, or catches a break from a stressful day, smoking a cigarette while enjoying the view and the noise of the city. Just as those ubiquitous emergency staircases were designed to provide a possibility of escape, so was the stair designed for this building. However, instead of escaping fire, it enables one to escape the hustle and bustle of Broadway and rise above the city streets for a few moments of tranquility. Because the stair applies fractality, it does not have to be the same default element that a typical staircase usually is: in a few places en route it morphs from a stair into a room, where visitors can catch their breath and look around, or just sit down and enjoy being "in-between". Such a composition provides the visitor with a diverse experience of space and can accommodate different functions without changing its structure or formal language (see Figures 29 and 30 on the previous page). As a relatively large part of the staircase complex is air space bound by

Exposing the Data Center Project

45


Figure 31: Section through a junction of the “rock” and the grid.

cubic modules of 12 feet and bigger, it is possible to use the structure as strong points for temporary art exhibits. The pieces are visible from the street as well as from the stair, which allows visitors to circle around and observe them from multiple points of view.

Notes:

Structurally, the stair is made up of standardized units. The different elements are welded together, and unitized Plexiglas elements are then inserted into the smallest modules (0.75x0.75 ft) in such a way that they protect visitors from wind and rain while simultaneously acting as floors and stairs.

Once one has made it up the stair, he/she finds themselves on the roof-top terrace of the data center. As the surface of the "rock" is made of highly durable ceramic, the entire roofscape is walkable, but in the area where most pedestrian traffic is anticipated, the roof is paved with flat concrete plates. The roofscape can be conceptually divided into three zones. The first one is the landing of the public staircase, located in the middle section of the roof. From there, a visitor can go either towards the east end of the building to reach a "plaza" with seating and a spectacular view along Broadway, or to the west, where the roofscape morphs into an "amphitheater". The two are connected via a "bridge".

Figure 32: Facade section: 1 - glass petals, 5"; 2 - preformed panel, 2"; 3 - air gap, 2"; 4 - water barrier; 5 - insulation, 4"; 6 - water barrier; 7 - structural R/C surface; 8 - glass fiber channel.

Figure 33: Individual facade panel. Left: view from top; right: horizontal section.

Yet another public space - the visitor center - is located at the very bottom of the building, inside the office component. The visitor center is the place where registered visitors can take a look at the insides of the data center - the only point where the "interface" is absent and the insides of the machine are exposed. Programmatically, it is a multi-purpose space capable of accommodating lectures and classes, where people would receive an introduction to how data infrastructure works, get to know where their data is being stored, engage in interactive online activities (such as "Trace route", for example), and receive a guided tour of the insides of the center.
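As a hint of what the "Trace route" activity could look like, the sketch below simply wraps the operating system's traceroute utility so a visitor can watch the chain of hops between the visitor center and any site. The target host is an arbitrary example, and the exhibit software imagined here is hypothetical, not part of the thesis design.

# A visitor-center "Trace route" exhibit could be as simple as wrapping the
# operating system's traceroute tool. "tracert" is the Windows equivalent.
import platform
import subprocess

def trace(host: str) -> None:
    """Run the system traceroute utility and stream its hop-by-hop output."""
    cmd = ["tracert", host] if platform.system() == "Windows" else ["traceroute", host]
    try:
        subprocess.run(cmd, check=True)
    except FileNotFoundError:
        print(f"{cmd[0]} is not available on this machine.")

trace("example.com")   # arbitrary example target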

FACADE As the building consists of multiple components, its façade is also diverse, reflecting the different ideas implicit in the building: the openness and public nature of the public grid, the rationality of the office, and the sealed, rock-like nature of the data center itself. The public route has virtually no specially designated façade, as the structure and the transparent acrylic glass panels act as the façade for this component. The office component has a standard glass façade, with composite plate cladding for the walls. The façade of the "rock", on the other hand, is a very special one. It is made up of multiple layers of construction,

Exposing the Data Center Project

46


Figure 34 (far left): facade panel close-up, illustrating the seamlessness of the outer layer of the building's facade. Figure 35 (left): a rendering of a junction of the concrete envelope of the "rock" and the grid, according to the section on the previous page.

Figure 36: The eyes of a fly - an inspiration and precedent of a natural curvilinear seamless object. Photo credit: Thomas Shahan. "Close up of compound eyes and face". Retrieved from http://www.environmentalgraffiti.com/news-thomas-shahans-incredible-insect-macro-photography

Figure 37: A silicon rhizome: the building is made up of the same materials throughout. Not only does this tie together its separate parts, it also makes the facility into a direct extension and exhibit of the city’s own “silicon infrastructure”. Left: initial conceptual sketch; right: diagram.

with the outermost layer consisting of small ceramic petals and the base support structure being reinforced concrete (Figures 31 through 35). The petals are made of super-durable, half-transparent ceramic, hard enough to withstand being walked on, and are secured onto 3D-printed panels measuring 3x3 feet. These panels are then tessellated over the entire outer shell of the "rock" in such a way that the cavities between panels are not visible. Inside the 3D-printed base panels, the petals are connected to individual glass fibers which penetrate the load-bearing structure of the building through water-tight cavities spaced on a 3x3 foot grid. Through these cavities, the light of the working machines inside the building is carried outside and can be seen at night, while light from the outside can be directed inside the server during the day.
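A rough count of these facade components, under assumed numbers (the shell area and petal density below are guesses, not values taken from the drawings), illustrates the scale of the system:

# Assumed numbers: outer shell area of the "rock" and petals per 3x3 ft base panel.
SHELL_AREA_SQFT = 60000.0   # assumed outer surface area of the "rock"
PANEL_SIDE_FT = 3.0         # 3 x 3 ft 3D-printed base panels, as described above
PETALS_PER_PANEL = 100      # assumed petal density per panel

panels = SHELL_AREA_SQFT / PANEL_SIDE_FT ** 2
print(f"base panels:        ~{panels:,.0f}")
print(f"fiber penetrations: ~{panels:,.0f} (one water-tight cavity per 3x3 ft grid point)")
print(f"ceramic petals:     ~{panels * PETALS_PER_PANEL:,.0f}")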

Notes:

Given the façade's layered buildup, rainwater running off the swooping slope of the building disappears behind the petals and runs in the space between the petals and the 3D-printed panels into a concealed runoff water container on the edge of the building facing Broadway. The container stores the water for later use in the building, but is also connected to the city's sewer system for times when the volume of runoff becomes unmanageable. This allows the building to avoid a "waterfall" of rainwater on its western façade, and thus excessive slipperiness of the adjacent sidewalk.

The inspirations behind such a complex facade system for the "rock" are multiple. First, there is a desire to make the curvilinear volume of the server appear as a single, solid, seamless object. With most other facade solutions, it is virtually impossible to avoid some sort of tessellation if one wants a hard surface: expansion joints in the case of exposed concrete construction, panel size limitations and planarity requirements for most cladding systems, and so on. In response to this challenge, instead of ever increasing the modulation of the façade elements, the building goes in the other direction and divides them into elements so tiny that they read as a single thing when seen from a distance - similarly to sand, which has no "seams" even when distributed over large areas, as opposed to, for example, stone. This "seamlessness" aims to add to the otherworldliness of the building's overall shape (Figure 36).

The second aim of the data center façade is to expose the silicon nature of the building (Figure 37). Ceramics, optical fibers, and concrete are all largely composed of silica - the same family of materials that the microchips inside the building are made of. Using a material on the outside which has the same base as the one inside seemed interesting and appropriate for this facility. Third, there is a will to provide a means of contact between the inside and

Exposing the Data Center Project

47


Figure 38: Historic loading bays of the SoHo/NoHo Districts.


the outside of the facility without necessarily allowing access into it. By using the glass petals and the fibers penetrating the concrete shell of the building (see Figure 32, page 46), the light of the inside can be brought outside, and vice versa. This turns the building into a communicative alien which, although impermeable, still shows signs of "life". Interestingly, the amount of light radiating from the inside will depend on the intensity of use of the server machines; the light will be dimmer when the servers are lightly used and brighter during peak times, beaming the density of peoples' online activity back to them.
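The control logic implied here is simple: sample aggregate server utilization and dim or brighten the light fed into the facade fibers accordingly. A minimal sketch, with an assumed linear dimming curve and a small "always faintly lit" floor:

def facade_brightness(utilization: float, floor: float = 0.1) -> float:
    """Map aggregate server utilization (0.0-1.0) to a normalized light level.

    The small floor keeps the building faintly glowing even when the servers idle;
    both the linear curve and the floor value are assumptions for illustration.
    """
    utilization = max(0.0, min(1.0, utilization))
    return floor + (1.0 - floor) * utilization

for u in (0.05, 0.35, 0.70, 0.95):   # sample utilization readings over a day
    print(f"utilization {u:.0%} -> facade light level {facade_brightness(u):.2f}")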

Notes:

A modular, bumpy, glassy texture similar to the one produced by the petals covering the building's facade is actually native to SoHo/NoHo and is typical of the local buildings' loading bays (Figure 38). Because of the lack of space and the resulting positioning of buildings tightly side by side without any side alleys, historically the only loading possibility left in this part of town was directly from the street. It was often solved as a large horizontal hatch in the pavement in front of the building, which would be opened every morning during goods delivery hours, and then again in the evening to take out the trash. These hatches were typically made of cast iron plates with punched round openings filled with thick glass tablets. Such a design enabled a hatch to be sturdy while still letting light filter into the otherwise dark subterranean storage spaces of the building. Interestingly, although these hatches would commonly be the size of a typical door, in some cases the entire sidewalk in front of a building would get covered with them, occasionally morphing into stairs and seating. Thus, the "innovative" facade system of the proposed data center greatly resembles, both in function and in form, the historic loading hatches that have been around the area for over a century.

STRUCTURE The building uses multiple types of structural materials and construction methods.


Figure 39: Diagram of the building’s structural hierarchy.


The "rock" is built of reinforced concrete, as illustrated by the diagrams on the left. Although the whole load-bearing structure needs to be poured concurrently, it can be divided into three major elements: 1) the two main vertical walls (Figure 39, position 1), 2) the inner cross bracing (position 2), which lends rigidity to the vertical walls by making them into a vertical spatial frame, and 3) the lighter beam/slab construction that makes up the roof and the sloping side of the building looking onto Broadway (position 3). The rest of the building is built using steel-frame construction: 1) both end-walls of the "rock" (Figure 39, position 4), 2) the office component (position 5), 3) the grid of the public route (position 6), and

Exposing the Data Center Project

48


Boring data map for Manhattan which was used to create the "Thiessen Polygon" map.

"Thiessen Polygon" map of site classes in Manhattan derived from depth-to-bedrock data only.

Figure 40: The sketch illustrating the structural logic behind combining a curvy element and an orthogonal grid: the grid components “snap” to the concrete shell and are thus stabilized.

Census-tract-based map of site classes in lower Manhattan using all boring data and additional geological information.

Figure 41: Soil types in Manhattan. Image source: The New York City Area Consortium for Earthquake Loss Mitigation (2005). Full Report: Earthquake Risks and Mitigation in the New York, New Jersey and Connecticut Region. p.22 Retrieved from http:// www.nycem.org/techdocs/ FinalReport/13-2soilmanhattan. pdf


4) the "aisles" inside the server space of the data center.

Notes:

Tubular elements were chosen for the steel frame because, given their fully symmetrical cross section, they support the concept of a "grid" much better than a typical structural H-profile. To keep the profiles of the steel elements as elegant and consistent as possible, differences in forces are accommodated either by varying the wall thickness of the tubes inwards or, in the case of vertical columns, by pouring concrete inside the tubes for additional compression strength. As an exception, the aisles inside the data center server and support spaces, as well as the hidden structural elements of the office, use H-profile steel elements, as they 1) carry larger loads and 2) are located outside public view.

The combination of an orthogonal "grid" and the curvilinear "rock" (Figure 40) has its structural virtues. Typically, a structure based on a grid of elements intersecting at right angles is not stable under dynamic loads, such as wind, and requires cross-bracing in all three dimensions, usually solved with diagonal members. In this building, though, the cross bracing is omitted, as the "rock" acts as the stabilizing agent for the grid. The ferroconcrete structure of the "rock", combined with its three-dimensional curvilinear shape, makes it inherently stable and able not only to hold itself up but also to act as a substitute for the typical cross bracing of the grid attached to it. This allows the grid to keep its conceptually pure orthogonality.
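The trade-off between the two strategies can be illustrated with a simplified axial "squash" capacity comparison, ignoring buckling, slenderness and code safety factors; the dimensions and material strengths below are assumptions, not values from the structural drawings:

import math

F_Y_STEEL = 50.0   # ksi, assumed steel yield strength
F_C_CONC = 5.0     # ksi, assumed concrete compressive strength
OUTER_D = 12.0     # in, assumed outer diameter, held constant for a consistent grid

def squash_capacity(wall_t: float, concrete_filled: bool) -> float:
    """Simplified squash load in kips: A_s*f_y, plus 0.85*f_c*A_c if filled."""
    inner_d = OUTER_D - 2.0 * wall_t
    a_steel = math.pi / 4.0 * (OUTER_D ** 2 - inner_d ** 2)
    a_conc = math.pi / 4.0 * inner_d ** 2 if concrete_filled else 0.0
    return a_steel * F_Y_STEEL + 0.85 * F_C_CONC * a_conc

print(f"1/2 in wall, hollow:          {squash_capacity(0.5, False):6.0f} kips")
print(f"1 in wall, hollow:            {squash_capacity(1.0, False):6.0f} kips")
print(f"1/2 in wall, concrete-filled: {squash_capacity(0.5, True):6.0f} kips")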

NEHRP soil classifications (legend to Figure 41): A - hard rock; B - rock; C - dense soil/soft rock; D - soft soils; E - special soils. Black dots represent boring data used.

Figure 42: Manhattan hurricane evacuation map. Image source: NYC Hurricane Evacuation. Retrieved from http://maps.nyc.gov/hurricane/

The soil at the site of the proposed data center is classified as "soft rock" (Figure 41). The absolute elevation of the building's ±0.00 is 40 feet above sea level, which puts it above any of the flood zones in Manhattan (Figure 42). Given the projected mass of the building and the soil conditions at the location, a raft foundation is the optimal foundation type. Although no additional piling is necessary, supplementary caissons can be arranged on a 36x36 foot grid, going down 50 feet to bedrock (bedrock lies at an absolute elevation of approximately -10 feet - see Figure 43).

Figure 43: Southern half of Manhattan bedrock elevation map. Image credit: New York City Area Consortium for Earthquake Loss Mitigation. Retrieved from http://www.nycem.org/techdocs/siteCondsYr1/figures.asp?figure=1
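As a plausibility check only (the weight, raft area and allowable bearing pressure below are assumed, not calculated for the actual design), the raft sizing logic reads:

# Assumed values: total building weight, raft footprint, allowable bearing pressure.
BUILDING_WEIGHT_TONS = 60000.0   # assumed total weight, US tons
RAFT_AREA_SQFT = 30000.0         # assumed raft footprint, sq ft
ALLOWABLE_TSF = 4.0              # assumed allowable pressure for dense soil/soft rock

pressure = BUILDING_WEIGHT_TONS / RAFT_AREA_SQFT   # average bearing pressure, tsf
print(f"average bearing pressure: {pressure:.1f} tsf (allowable assumed {ALLOWABLE_TSF} tsf)")
print("raft alone is plausible" if pressure <= ALLOWABLE_TSF
      else "exceeds allowable: engage the caissons down to bedrock")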

Exposing the Data Center Project

49


DRAWINGS AND RENDERINGS

Exposing the Data Center Project

50


SITE PLAN

0 10’ 25’

50’

Base image source: Google Earth.

100’

200’

Exposing the Data Center Project

51


SECTION A

C

B

1

2 15 LEGEND 3

9

14

12

4

5

6

7

13 10

8

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Server room Breakout space Office Loading dock Gallery Assembly / Maintenance Storage "Node" / Airlock UPS Generators Heat exchange Rainwater run-off tank Water treatment Public stair Public elevator

B

0 3 6

12

24

48

C

11

Exposing the Data Center Project

52


SECTION B

A

LEGEND 1 2 3 4

Server room UPS Generators Public stair

4

1

2

A

3

0 3 6

12

24

48

Exposing the Data Center Project

53


SECTION C

LEGEND

A

1 2 3 4 5 6 7 8 9 10 11

Server room Meeting / Conference Office Breakout space Foyer Loading dock Visitor center Storage Sliding elevator platform Assembly / Maintenance Security

1

2

3 4

3

5

6

7

11

10

9

8

A

0 3 6

12

24

48

Exposing the Data Center Project

54


0 FL. PLAN (-24’-0”)

LEGEND

C

B

1 2 3 4 5 6 7 8

Server room Storage Heat exchange Generators Sliding elevator platform “Node” / Airlock Assembly / Maintenance Elevator

1 2

4 3

A

5

6

A

5

2

B

0 3 6

12

24

48

8

C

7

Exposing the Data Center Project

55


1 FL. PLAN (±0’-0”)

LEGEND

C

B

1 2 3 4 5 6 7 8 9 10 11

1

Server room Visitor center Gallery Foyer Elevator Security checkpoint Loading dock Short term storage Sliding elevator platform UPS Public stair

2

10 3

8

A

11

9

A

9

4

7

5

B

0 3 6

12

24

48

C

6

Exposing the Data Center Project

56


2 FL. PLAN (+24’-0”)

LEGEND

C

B

1 2 3 4 5 6 7 8 9

Server room UPS Sliding elevator platform Office Storage Restrooms Kitchen Elevator Public stair

1

4

2 9

A

A

3 5

6

B

0 3 6

12

24

48

8

C

7

Exposing the Data Center Project

57


3 FL. PLAN (+48’-0”)

LEGEND

C

B

1 2 3 4 5 6

Server room UPS Breakout space Office Elevator Public stair

1

3

2

6

A

4

B

0 3 6

12

24

48

5

C

A

Exposing the Data Center Project

58


5 FL. PLAN (+72’-0”)

LEGEND

C

B

1 2 3 4

Server room Office Elevator Public stair

1

4

A

A

2

B

0 3 6

12

24

48

C

3

Exposing the Data Center Project

59


7 FL. PLAN (+108’-0”)

LEGEND

C

B

1 2 3 4

Server room Meeting / Conference Elevator Public stair

1

4

A

A

3

B

0 3 6

12

24

48

C

2

Exposing the Data Center Project

60


ROOF PLAN

LEGEND

C

B

1 2 3 4

Landing of the public stair "Plaza" "Bridge" "Amphitheater"

2

1

A

A

3

B

0 3 6

12

24

48

C

4

Exposing the Data Center Project

61


SOUTH ELEVATION

0 3 6

12

24

48

Exposing the Data Center Project

62


NORTH ELEVATION (NEIGHBORING BUILDING REMOVED)

0 3 6

12

24

48

Exposing the Data Center Project

63


WEST ELEVATION

0 3 6

12

24

48

Exposing the Data Center Project

64


EAST ELEVATION

0 3 6

12

24

48

Exposing the Data Center Project

65


A panoramic view of the site and the building from across Broadway.

Exposing the Data Center Project

66


View of the building in the context of Lafayette St. as seen from Great Jones Street.

Exposing the Data Center Project

67


A night view of the building and the entrance of the public stair, illustrating the glow of the servers inside the building through the facade onto Broadway.

Exposing the Data Center Project

68


A night view of the building from afar along Broadway.

Exposing the Data Center Project

69


A bird’s eye view of the building in its context. Base image source: Google Earth.

Exposing the Data Center Project

70


A view of the office component of the building as seen from Great Jones Street. The office component, along with the public stair open to Broadway, are the two spaces that really bring out the fractal grid-based logic of the building (The surrounding context and the public stair removed).

Exposing the Data Center Project

71


A horizontal section through the "rock" with servers removed, illustrating some of the structural reinforced concrete flying buttresses inside.

Exposing the Data Center Project

72


A day view of the building as seen from Broadway. (Surrounding context removed)

A night view of the lit-up public stair as seen from Broadway. (Surrounding context removed)

A day view of the public stair, which starts resembling a cloud with its increasing density. (Surrounding context removed)

A view of the public stair entrance illustrating the varying density of the grid and its morphing from structure to stair. (Surrounding context removed)

Exposing the Data Center Project

73


A bird’s eye view of the portion of the landscape on top of the building that overlooks Broadway, with furniture. (Surrounding context removed)

A view along the top landscape of the building. (Surrounding context removed)

A view from the main work space of the office component. (Surrounding context removed)

A view from the break-out space of the office component. (Surrounding context removed)

Exposing the Data Center Project

74


EPILOGUE

Exposing the Data Center Epilogue

75


EPILOGUE After a semester and a half of writing to facilities literally all over the United States, trying to arrange a visit to a working data center, I finally got a reply. It came from David1 of Rackspace, who invited me to visit their facility in the National Capital Region. This contact was the result of a chain spanning a whole four degrees of separation: Taaniel gave me the email address of an employee at Rackspace's HQ in Austin, TX, who used to help Taaniel with his own business. After listening to some of my ideas, he gave me the contact of someone at the Blacksburg office of Rackspace, who, seeing that I was indeed a student, directed me to David.

Notes: 1 Names of Rackspace employees changed. 2 Blum, A. (2012). Tubes: the Journey to the Center of the Internet. New York City, NY. HarperCollins Publishers. p.60.

When I got the news, I did not care where I needed to go, as long as I was cleared to visit a working data center. Luckily, Washington, D.C. is only a few hours away from Virginia Tech, so it did not seem to be too long of a trip. It was only later, during the summer of 2012, that I learned from Andrew Blum's great account of his own visit to the same place2 how important that area was within the infrastructure of the internet. When I got there, the thing that surprised me the most was the people. Although the data center itself was located in a discreet location, surrounded by barbed wire and roadblocks, the people who came out to greet me were, well, people (although I did need to sign a non-disclosure agreement with Rackspace before I could be granted access to the facility - Figure 1). They were not "Men in Black", the "guardians of data", but regular humans like myself, who talked, laughed and procrastinated by playing foosball when they did not feel like working: a demographic reminiscent of the one in the Ubiquitous Computing Lab back at Tech (see Figure 2 on the next page). Upon arrival, the site manager told me that I was virtually a unique exception to the usual facility admission policy. As a rule, nobody penetrated the building deeper than the initial buffer of the security desk and office/conference spaces. Not even clients who were planning to buy or lease space at the facility were allowed into the server space itself. In my case, not only was I allowed in, we actually took our time walking around the building: feeling the contrasting climates of the hot and cold aisles, listening to the humming of the machines reading and writing data coming in from random locations scattered across the world, and even climbing up a ladder to the roof to see a small lounge area among a "field" of HVAC units.

Figure 1: A sample Unilateral Non-disclosure Agreement one has to sign in order to get access to a data center. The body of the document was blurred to preserve the secrecy of terms.

During lunch at a local Taco Bell we discussed the differences of our disciplines and my reasons for being there: - So why are you interested in this again? - ... - Yeah, a skate park on top of a server hub would be awesome! And a racing track, too!

Exposing the Data Center Epilogue

76


The employees of the center left me with the impression of a "mechanical baseball pitching machine", to use Toyo Ito's term for Rem Koolhaas in his memo featured in S,M,L,XL3. For them, everything was binary - it either worked or it did not. Therefore, to make sure something worked, they shot a million counterarguments at an idea to see if it stood. Mine did. The people I met had no problem discussing the most outrageous sci-fi ideas in the field - be it the "singularity"4, machines taking over the world, or whatever. In contrast to the "cybernetic totalists"5, however, they were sure that the very reason for machines' existence was to serve humans. Therefore, nothing could go wrong unless we did it to ourselves. According to them, machines could indeed be smart, or even reproduce themselves. However, machines had no reason to be here by themselves. Most interestingly - and this was something these people knew firsthand - all machines broke. And someone had to be there to fix them.

Notes: 3 Koolhaas, R., & Mau, B. (1995). Small, medium, large, extra-large. New York, NY. Monacelli Press. p.100. 4 Kurzweil, R. (2005). The Singularity Is Near: When Humans Transcend Biology. Viking Penguin. New York. 5 Lanier, J. (2000). One Half a Manifesto. Edge.org (online). Retrieved from http://www. wired.com/wired/archive/8.12/ lanier.html

Us – humans.

Figure 2: Snippets of the Ubiquitous computing Lab at Virginia Tech.

Exposing the Data Center Epilogue

77


Exposing the Data Center

78


NOTES Introduction 1 TenantWise.com Incorporated (2003). Special Report: WTC Tenant Relocation Summary. Retrieved from http://www.tenantwise.com/reports/wtc_relocate.asp 2 AT&T Long Lines Building. Retrieved January 12, 2014 from http://www.nycarchitecture.com/SOH/SOH026.htm 3 33 Thomas Street. Retrieved January 12, 2014, from http://en.wikipedia.org/wiki/33_ Thomas 4 Blum, A. (2012). Tubes: the Journey to the Center of the Internet. New York City, NY. HarperCollins Publishers. p. 202 Opportunity 1 AutoDesk’s 123D Catch web app link: http://www.123dapp.com/catch 2 DropBox link: https://www.dropbox.com/ 3 Coulouris, G., Dollimore, J., Kindberg, T., Blair, G. (2011). Distributed Systems: Concepts and Design (5th Edition). Boston. Addison-Wesley. 4 Ledford, H. (2010, October 14). Supercomputer sets protein-folding record. Nature (online). Retrieved from http://www.nature.com/news/2010/101014/full/news.2010.541 html 5 Condon, S. (2008, August 8). Dell unlikely to get trademark for ‘cloud computing’. CNET (online). Retrieved from http://news.cnet.com/8301-13578_3-10011577-38.html 6 Antonio Regalado (2011, October 31). Who Coined ‘Cloud Computing’? MIT Technology Review (online). Retrieved from http://www.technologyreview.com/news/425970/whocoined-cloud-computing/ 7 Opte project link: http://www.blyon.com/philanthropy.php 8 Moore, G.E. (1965, April 19). Cramming more components onto integrated circuits. Electronics Magazine. Retrieved from http://www.cs.utexas.edu/~fussell/courses/cs352h/ papers/moore.pdf 9 Gottlieb, A., Almasi, G.S. (1989). Highly parallel computing. Redwood City, Calif. Benjamin/Cummings. 10 ASHRAE (2008). Thermal Guidelines for Data Processing Environments (PDF Document). Retrieved from http://tc99.ashraetcs.org/documents/ASHRAE_Extended_ Environmental_Envelope_Final_Aug_1_2008.pdf 11 Uptime Institute, LLC (2012). Data Center Site Infrastructure Tier Standard: Topology (PDF Document). Retrieved from http://www.uptimeinstitute.com/publications#TierClassification 12 ConnectKentucky (2007). “Global Data Corp. to Use Old Mine for Ultra-Secure Data Storage Facility” (PDF Document). Connected - ConnectKentucky Quarterly. p.4. Retrieved from http://www.connectkentucky.org/_documents/connected_fall_FINAL.pdf 13 Blum, A. (2012). Tubes: the Journey to the Center of the Internet. New York City, NY. HarperCollins Publishers. p. 93, par.2 14 Rocky Mountain Institute (2008, August 7). Designing Radically Efficient and Profitable Data Centers. Trehugger (online). Retrieved from http://www.treehugger.com/gadgets/ designing-radically-efficient-and-profitable-data-centers.html 15 Glanz, J. (2012, September 22) Power, Polution and the Internet. The New York

Times. Retrieved from http://www.nytimes.com/2012/09/23/technology/data-centers-wastevast-amounts-of-energy-belying-industry-image.html?_r=0 16 Blum, A. (2012). Tubes: the Journey to the Center of the Internet. New York City, NY. HarperCollins Publishers. p 90. 17 O’Brien, D.J. (2013, September 16). Region likely to see continued growth in data center industry. The Washington Post. Retrieved from http://articles.washingtonpost. com/2013-09-15/business/42089033_1_data-centers-internet-exchange-points-loudouncounty 18 Mitchell, W. J. (2003) Me++: The Cyborg Self and the Networked City. Cambridge Mass, USA. The MIT Press. pp. 176-177 19 Lévy, P., & Bononno, R. (1998). Becoming virtual: Reality in the digital age. Da Capo Press Inc. 20 IBM Big Data Platform (n.d.). What is Big Data? Retrieved January 12, 2014 from http:// www-01.ibm.com/software/data/bigdata/what-is-big-data.html 21 Hunter, M. (n.d.). Bible Facts and Statistics. Amazing Bible Timeline. Retrieved from http://amazingbibletimeline.com/bible_questions/q10_bible_facts_statistics/ 22 Hedstrom, M. (1997). Digital preservation: a time bomb for Digital Libraries. Computers and Humanities. Vol. 31, No. 3. 189 - 202. Retrieved from http://www.uky.edu/~kiernan/DL/ hedstrom.html 23 Kuny, T. (1997). A Digital Dark Ages? Challenges in the Preservation of Electronic Information (PDF Document). 63RD IFLA (International Federation of Library Associations and Institutions) Council and General Conference. Retrieved from http://archive.ifla.org/IV/ ifla63/63kuny1.pdf 24 Mandelbrot, B. (1967). How long is the coast of Britain? Statistical Self-Similarity and Fractional Dimension. Science, Vol. 156, No. 3775, 636-638. Retrieved from: http://faculty. washington.edu/cet6/pub/Temp/CFR521e/Mandelbrot_1967.pdf 25 Koomey, J. (2007). Estimating Total Power Consumption by Servers in the US and the World (PDF Document). Report. Retrieved from http://hightech.lbl.gov/documents/DATA_ CENTERS/svrpwrusecompletefinal.pdf 26 Finley, K. (2013, November 12). Microsoft’s Built-In Power Plants Could Double Data Center Efficiency. Wired (online). Retrieved from http://www.wired.com/ wiredenterprise/2013/11/microsoft-fuel-cells/ 27 Greanpeace International Cool IT Challenge link: http://www.greenpeace.org/ international/en/campaigns/climate-change/cool-it/ 28 Pomerantz, D. (2013, April 24). Cisco, Google tie for first in latest Greenpeace ranking of IT sector climate leadership. Blogpost. Retrieved from http://www.greenpeace.org/ international/en/news/Blogs/Cool-IT/facebook-and-google-like-1-clean-energy-in-da/ blog/44893/ 29 Palmintier, B., Newman, S., Rocky Mountain Institute. (2008, August 5). Systems Thinking for Radically Efficient and Profitable Data Center Design (PowerPoint presentation) 30 Open Compute Project website, “About” section. Retrieved January 12, 2014 from http://www.opencompute.org/about/ 31 Finley, K. (2013, November 13). Facebook Says Its New Data Center Will Run Entirely on Wind. Wired (online). Retrieved from http://www.wired.com/wiredenterprise/2013/11/ facebook-iowa-wind/ 32 Wheatley, M. (2013, June 13). Facebook Opens a Really Cold New Data Center in

Exposing the Data Center Notes

79


Sweden. Siliconangle (online). Retrieved from http://siliconangle.com/blog/2013/06/13/facebook-opens-a-really-cool-new-data-center-in-sweden/

33 Clidaras, J., et al. (2008, August 28). Water-Based Data Center. United States Patent Application nr.20080209234. Retrieved from: http://appft1.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=1&f=G&l=50&s1=%2220080209234%22.PGNR.&OS=DN/20080209234&RS=DN/20080209234

34 Bitcoin website link: http://bitcoin.org/en/

35 Smithson, R. (1996). A Tour of the Monuments of Passaic, New Jersey (1967). Jack Flam (Hg.), Robert Smithson - The collected writings, Berkeley, 68ff.

36 Urban Omnibus (2012, August 22). Undercity: The Infrastructural Explorations of Steve Duncan. Urban Omnibus (online). Retrieved from http://urbanomnibus.net/2012/08/undercity-the-infrastructural-explorations-of-steve-duncan/

37 Urban Omnibus (2010, November 3). Stanley Greenberg: City as Organism, Only Some of it Visible. Urban Omnibus (online). Retrieved from http://urbanomnibus.net/2010/11/stanley-greenberg-city-as-organism-only-some-of-it-visible/

38 Mitchell, W. J. (2003) Me++: The Cyborg Self and the Networked City. Cambridge Mass, USA. The MIT Press. p.15, par.2

39 Blum, A. (2012). Tubes: the Journey to the Center of the Internet. New York City, NY. HarperCollins Publishers. pp. 240-241, 248-249

40 Blum, A. (2012). Tubes: the Journey to the Center of the Internet. New York City, NY. HarperCollins Publishers. pp. 233-234

41 Blum, A. (2012). Tubes: the Journey to the Center of the Internet. New York City, NY. HarperCollins Publishers. pp.143-144, p.229 par.2

42 Castells, M. (1996). The Rise of the Network Society: The Information Age: Economy, Society and Culture. Volume I. Malden, Mass, USA. Blackwell Publishing.

43 Mitchell, W. J. (2003) Me++: The Cyborg Self and the Networked City. Cambridge Mass, USA. The MIT Press. p.39, par.2

Roadmap

1 World Health Organization. Global Health Observatory (n.d.). Urban Population Growth. Situation and trends in key indicators. Retrieved from http://www.who.int/gho/urban_health/situation_trends/urban_population_growth_text/en/

2 Blum, A. (2012). Tubes: the Journey to the Center of the Internet. New York City, NY. HarperCollins Publishers. p. 198

3 Mitchell, W. J. (2003) Me++: The Cyborg Self and the Networked City. Cambridge Mass, USA. The MIT Press. p.176

4 Olsson, M. I., et al. (2013, February 21). Wearable Device with Input and Output Structures. United States Patent Application nr.20130044042. Retrieved from http://appft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&u=%2Fnetahtml%2FPTO%2Fsearch-adv.html&r=13&p=1&f=G&l=50&d=PG01&S1=%2820130221.PD.%20AND%20Google.AS.%29&OS=PD/20130221%20AND%20AN/Google&RS=%28PD/20130221%20AND%20AN/Google%29,%20http://news.cnet.com/8301-1023_3-57570533-93/google-glass-patent-application-gets-really-technical/,%20http://www.google.com/glass/start/

5 Tibken, S. (2013, February 21). Google Glass patent application gets really technical. CNET (online). Retrieved from: http://news.cnet.com/8301-1023_3-57570533-93/google-glass-patent-application-gets-really-technical/

6 Lohr, S. (2012, February 29). For Impatient Web Users, an Eye Blink Is Just Too Long to Wait. The New York Times. Retrieved from http://www.nytimes.com/2012/03/01/technology/impatient-web-users-flee-slow-loading-sites.html?pagewanted=all&_r=0

7 Deleuze, G., & Guattari, F. (1980). Mille plateaux. Capitalisme et schizophrénie II. Paris. Les éditions de Minuit.

8 Axe, D. (2013, March 20). After the Aircraft Carrier: 3 Alternatives to the Navy's Vulnerable Flattops. Wired (online). Retrieved from http://www.wired.com/dangerroom/2013/03/replacing-aircraft-carriers/3/

9 Mitchell, W. J. (2003) Me++: The Cyborg Self and the Networked City. Cambridge Mass, USA. The MIT Press. p.175

10 Sabey Data Centers (2013). Introducing New York's Only Purpose-Built Data Center Campus. Brochure. Retrieved from http://sabeydatacenters.com/portfolio-item/intergate-manhattan-brochure/

11 Martin, B. D., Schwab, E. (2012). Current usage of symbiosis and associated terminology. International Journal of Biology, 5(1), p32.

12 Liu, J., Goraczko, M., James, S., Belady, C., Lu, J., & Whitehouse, K. (2011, June). The data furnace: heating up with cloud computing. Proceedings of the 3rd USENIX Workshop on Hot Topics in Cloud Computing. Retrieved from https://www.usenix.org/legacy/events/hotcloud11/tech/final_files/LiuGoraczko.pdf

13 Verge, J. (2013, March 4). Telus Warms Condos With Heat From Its Servers. Data Center Knowledge (online). Retrieved from http://www.datacenterknowledge.com/archives/2013/03/04/telus-warm-condos-with-heat-from-servers/

14 IBM (2010, July 2). Made in IBM Labs: IBM Hot Water-Cooled Supercomputer Goes Live at ETH Zurich. News release. Zurich. Retrieved from http://www-03.ibm.com/press/us/en/pressrelease/32049.wss

15 Koolhaas, R. (1978). Delirious New York. Thames & Hudson. p.100

16 IBM Systems and Technology Group (2013, January). IBM BladeCenter: Build smarter IT (PDF Document). Brochure. Retrieved from http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=PM&subtype=BR&htmlfid=BLB03002USEN&attachment=BLB03002USEN.PDF

17 IBM (2010, July 2). Made in IBM Labs: IBM Hot Water-Cooled Supercomputer Goes Live at ETH Zurich. News release. Zurich. Retrieved from http://www-03.ibm.com/press/us/en/pressrelease/32049.wss

18 Cray Research (1988). Cray-2 Series of Computer Systems (PDF Document). Brochure. p.5. Retrieved from http://www.craysupercomputers.com/downloads/Cray2/Cray2_Brochure001.pdf

19 IBM (2010, July 2). Made in IBM Labs: IBM Hot Water-Cooled Supercomputer Goes Live at ETH Zurich. News release. Zurich. Retrieved from http://www-03.ibm.com/press/us/en/pressrelease/32049.wss

20 IBM Research - Zurich (n.d.). Zero-emission Data center: Direct use of waste heat to minimize carbon-dioxide emission (online). Retrieved from http://www.zurich.ibm.com/st/energy/zeroemission.html

21 Broersma, M. (2010, December 14). IBM Xeon-Based Supercomputer To Hit Three Petaflops. Tech Week Europe (online). Retrieved from http://www.techweekeurope.co.uk/news/ibm-xeon-based-supercomputer-to-hit-three-petaflops-15866

Exposing the Data Center Notes

80


Project 1 Koolhaas, R. (1978). Delirious New York. Thames & Hudson. p. 294 2 Fractal. Retrieved January 12, 2014 from http://en.wikipedia.org/wiki/Fractal 3 Koolhaas, R. (1978). Delirious New York. Thames & Hudson. p. 293, par.2 4 DuPont Fabros Technology, Inc. website link: http://www.dft.com/data-centers/locationinformation 5 DuPont Fabros Technology, Inc. (n.d.). DFT_ACC4_layout.pdf (PDF Document). Retrieved January 12, 2014 from http://www.dft.com/themes/dft/images/data_centers/ DFT_ACC4_layout.pdf 6 Sabey Data Centers (2013). Introducing New York’s Only Purpose-Built Data Center Campus. Brochure. Retrieved from http://sabeydatacenters.com/portfolio-item/intergatemanhattan-brochure/ 7 Feeney, J. (2010, March 23). Real estate law made simple: What to look for before you buy a place with great views. Daily News. Retrieved from http://www.nydailynews.com/ life-style/real-estate/real-estate-law-made-simple-buy-place-great-views-article-1.172485 8 KUKA Robot Group. Industrial robots: KR 120 R3500 PRIME K (KR QUANTEC PRIME). Retrieved January 12, 2014 from http://www.kuka-robotics.com/en/products/industrial_robots/special/shelf_mounted_robots/kr120_r3500_prime_k/ 9 IBM (2008). BladeCenter QS22 Type 0793 - Installation and User’s Guide (online). Copyright International Business Machines Corporation 2006, 2008. Retrieved from https://publib.boulder.ibm.com/infocenter/bladectr/documentation/index.jsp?topic=/com. ibm.bladecenter.qs22.doc/qs22iu03.html 10 IBM Systems and Technology Group (2010, March). IBM Blade Center E (PDF Document). Data sheet. Retrieved from http://www-01.ibm.com/common/ssi/cgi-bin/ssiali as?infotype=PM&subtype=SP&htmlfid=BLD03018USEN&attachment=BLD03018USEN. PDF&appname=STG_BC_USEN_SP 11 19-inch rack (n.d.). Retrieved January 12, 2014 from http://en.wikipedia.org/wiki/19inch_rack Epilogue

1 Names of Rackspace employees changed.

2 Blum, A. (2012). Tubes: the Journey to the Center of the Internet. New York City, NY. HarperCollins Publishers. p.60.

3 Koolhaas, R., & Mau, B. (1995). Small, medium, large, extra-large. New York, NY. Monacelli Press. p.100.

4 Kurzweil, R. (2005). The Singularity Is Near: When Humans Transcend Biology. New York. Viking Penguin.

5 Lanier, J. (2000). One Half a Manifesto. Edge.org (online). Retrieved from http://www.wired.com/wired/archive/8.12/lanier.html

IMAGE CREDITS

All images by author unless noted otherwise.

p.10
Figure 2: The map of the internet resembling a cloud. Image credit: The Opte Project. Retrieved on January 12, 2014 from http://en.wikipedia.org/wiki/Portal:Computer_science

p.11
Figure 3: Ferranti Mark 1 computer. Manchester, England. 1951. Image source: Van Buskirk, E. (2008, June 20). First Videogame Music Recorded in 1951. Wired (online). Retrieved from http://www.wired.com/listening_post/2008/06/first-videogame/
Figure 4: A sketch of the initial map of ARPANET - the original "internet" - by Larry Roberts, dating to the late 1960's. We can see clearly just how small and exclusive the network's "early adopters" club was. Image source: Hafner, K., Lyon, M. (1998). Where Wizards Stay Up Late: The Origins Of The Internet. New York. Touchstone.
Figure 5: Internet world map 2007 by IPIntelligence. As we see, the density of nodes is much different from the one in the image above. Image source: http://www.ipligence.com/worldmap/

p.12
Figure 6: An image of a top-notch modern data center: aisles upon aisles of "cyberrific"13 racks and blinking lights - Facebook's data center in Prineville, OR. Image source: Brinkmann, M. (2011, April 8). Facebook Open Compute Project. Ghacks.net (online). Retrieved from http://www.ghacks.net/2011/04/08/facebook-open-compute-project/ Image credit: Open Compute Project.
Figure 7: Microsoft's Dublin data center: a typical data center exterior - no different from a warehouse. Image source: Miller, R. (2009, September 24). Microsoft's Chiller-less Data Center. Data Center Knowledge (online). Retrieved from http://www.datacenterknowledge.com/archives/2009/09/24/microsofts-chiller-less-data-center/ Image credit: Microsoft Corporation.
Figure 8: Outfitting a data center. Image source: Hamilton, D. (2011, November 16). Mozilla Building Out 1 Megawatt Santa Clara Data Center. Data Center Talk (online). Retrieved from http://www.datacentertalk.com/2011/11/mozilla-building-out-1-megawatt-santa-clara-data-center/

p.13
Figure 9: Diagram of current data center design inefficiencies. Image source: Palmintier, B., Newman, S., Rocky Mountain Institute (2008, August 5). Systems Thinking for Radically Efficient and Profitable Data Center Design [PowerPoint presentation]. Slide 16.

p.14
Figure 13: Diagram by the author. Images used: [middle] Malevich, K. (1915). Suprematist Painting: Aeroplane Flying. Oil on canvas, 57.3 x 48.3 cm (22 5/8 x 19 in); The Museum of Modern Art, New York. [bottom] Kubrick, S. (1968). "2001: A Space Odyssey". Movie screenshot.

p.15
Figure 15: Projection of electricity use by datacenters in the US and the world based on J. Koomey's data. Image source: Belady, C.L. (2011). Projecting Annual New Data Center Construction market size. (PDF Document) Microsoft. Global Foundation Services. p. 3.

Exposing the Data Center Credits

81


Retrieved from http://www.business-sweden.se/PageFiles/9118/Projecting_Annual_New_Data_Center_Construction_Final.pdf

p.16
Figure 17: Going "green"! A scorecard by Greenpeace's "Cool IT Challenge" from April 2012, rating leaders in IT on their environmental performance. Image source: Lowensohn, J. (2012, July 12). Apple's Greenpeace cloud rating no longer a 'fail'. CNET. Retrieved from http://news.cnet.com/8301-13579_3-57470593-37/apples-greenpeace-cloud-rating-no-longer-a-fail/

p.23
Figure 4: With growth, interweaving and multiplication of networks and nodes, data infrastructure is starting to resemble a rhizome. Image source: Lloyd, J. U., Lloyd, C. G. (1884-1887). Drugs and Medicines of North America. Ch. "Cimicifuga: Allied species - Green rhizome". Plate XXIII: A fresh rhizome of Cimicifuga Racemosa. Retrieved from http://www.henriettesherbal.com/eclectic/dmna/pics/dmna-pl-23.html

p.25
Figure 11: Collage by the author. Source for the image of the interior of the HAL supercomputer: Kubrick, S. (1968). "2001: A Space Odyssey". Movie screenshot.
Figure 12: Collage by the author. Base image credit: 100 Abandoned Houses Project. Retrieved from http://www.100abandonedhouses.com/

p.26
Figure 15: Collage by the author. Data center image source: Infopipe. Retrieved from http://goinfopipe.com/what-we-do/

p.29
Figure 23: The "Monolith". Kubrick, S. (1968). "2001: A Space Odyssey". Movie screenshot.

p.32
Figure 1: Conceptual image of a rock in an urban setting. Image credit: Filip Dujardin. "Fictions". Retrieved from http://www.filipdujardin.be/
Figure 3: The grid. Commissioners plan of Manhattan, 1811. Image source: Koolhaas, R. (1978). Delirious New York. Thames & Hudson. p.18-19.

p.33
Figure 4: The grid and the "microchip aesthetic". A die diagram of the Intel Core i7 Nehalem processor. Image source: Intel Corporation (2008, November 17). Intel Launches Fastest Processor on the Planet. Press-release. Santa Clara, Calif. Retrieved from http://www.intel.com/pressroom/archive/releases/2008/20081117comp_sm.htm

p.34
Figure 7: Collage by the author. Data center image source: Infopipe. Retrieved from http://goinfopipe.com/what-we-do/

p.35
Base image source for Figures 8 and 9: Google Earth.

p.36
Figure 11: The different lots comprising the site. Base map and information source: New York City, DoITT City-wide GIS. Retrieved from http://maps.nyc.gov/doitt/nycitymap/

p.39
Figure 14: One of the largest data centers in the United States, ACC4, owned and operated by DuPont Fabros, located in Ashburn, VA. Total area: 348,000 sft. 36.4 MW critical load4. Image credit: DuPont Fabros Technology, Inc. Retrieved from http://www.dft.com/themes/dft/images/data_centers/DFT_ACC4_layout.pdf

p.47
Figure 36: The eyes of a fly - an inspiration and precedent of a natural curvilinear seamless object. Photo credit: Thomas Shahan. "Close up of compound eyes and face". Retrieved from http://www.environmentalgraffiti.com/news-thomas-shahans-incredible-insect-macro-photography

p.49
Figure 41: Soil types in Manhattan. Image source: The New York City Area Consortium for Earthquake Loss Mitigation (2005). Full Report: Earthquake Risks and Mitigation in the New York, New Jersey and Connecticut Region. p.22 Retrieved from http://www.nycem.org/techdocs/FinalReport/13-2soilmanhattan.pdf
Figure 42: Manhattan hurricane evacuation map. Image source: NYC Hurricane Evacuation. Retrieved from http://maps.nyc.gov/hurricane/
Figure 43: Southern half of Manhattan bedrock elevation map. Image credit: New York City Area Consortium for Earthquake Loss Mitigation. Retrieved from http://www.nycem.org/techdocs/siteCondsYr1/figures.asp?figure=1

p.51
Site plan. Base image source: Google Earth.

p.68
A bird's eye view of the building in its context. Base image source: Google Earth.

Exposing the Data Center Credits

82


ACKNOWLEDGEMENTS This book only became possible thanks to the support of numerous people and organizations, to whom I wish to extend my gratitude: The Institute of International Education and the American Embassy in Estonia, who gave me a chance to come to the United States via their Fulbright Foreign Student Exchange Program, and have supported me for the three years that I spent here; The Estonian Student Fund in the USA, the Estonian National Culture Foundation (Eesti Rahvuskultuuri Fond) and the Cultural Endowment of Estonia (Eesti Kultuurkapital) for providing study grants for the second year of my Master’s program; The School of Architecture + Design (A+D) at Virginia Tech which has become the trial ground for my architectural explorations, as well as provided financial support to bring them to fruition; The Virginia Tech Graduate School, especially the Vice President and Dean of Graduate Education Dr. Karen P. DePauw and the Director of Graduate Student Services Monika Gibson for providing moral, critical and financial support throughout my graduate studies; My thesis committee: Hans. C. Rott, James Bassett, and David Dugas – as well as my “extended committee”: Paola Zellner Bassett (Assistant Professor, A+D, Virginia Tech), Andrew Balster (Director of the Chicago Studio, A+D, Virginia Tech), Jason Forsyth (Doctoral candidate at the Ubiquitous Computing Lab, Virginia Tech) and Christian Gänshirt (Associate Professor, Xi’an Jiaotong-Liverpool University); Rackspace’s staff in the National Capital Region, who welcomed me to their facility and from whom I learned a large amount of what is found above; Taaniel Jakobs with whom my interest in computing originated and developed, and who has been indispensable in making this work take off; And lastly, the Office for Metropolitan Architecture (OMA), which has served as an inspiration throughout my architectural education and later taught me, among other things, the work ethic necessary for bringing this work to the result it deserved. Apart from all of the above, I owe a special thanks to my parents Vitali Sergejev and Svetlana Zaytseva, my wife Kameron Elizabeth BryantSergejev and her family, and to the people who instilled in me the dream of pursuing the impossible: Juri, Irena and Anton Weiss-Wendt.

Exposing the Data Center Acknowledgements

83

