Cornerstone 2013


CORNERSTONE
academic foundation for global leadership

2013 ANNUAL REPORT


CORNERSTONE —a stone that forms the base of a corner of a building, joining two walls —an important quality or feature on which a particular thing depends or is based


CHANCELLOR’S MESSAGE

The Texas A&M University Institute for Advanced Study is a key initiative to launch Texas A&M to the forefront of top-tier universities. I have strongly supported the Institute, including a commitment of $5.2 million to cover half its cost during the five-year startup phase. Through the Chancellor’s Research Initiative (CRI), we are investing $100 million over the next few years to attract a new wave of stars to permanent appointments on our faculty. The fifteen accomplished scholars attracted in the first two TIAS Faculty Fellow classes are working well with our faculty and students, and all are being introduced to the quality of our programs and the opportunities they offer for serious scholarship. One TIAS Fellow has already joined our permanent faculty, and with the support of CRI funds, it is my hope that several more permanent appointments will follow. Taken together, TIAS and CRI are making a dramatic impact on advancing excellence at Texas A&M.

John Sharp
Chancellor, The Texas A&M University System

INTERIM PRESIDENT’S MESSAGE

Prior to beginning my new role as interim president of Texas A&M, I served as vice chancellor and dean of agriculture and life sciences. In that role, I became a strong advocate for the Texas A&M University Institute for Advanced Study. TIAS is a unique institute that invites prominent scholars to collaborate in disciplinary and multidisciplinary research with Texas A&M faculty and students. As a land-grant university, Texas A&M’s founding mission is to promote teaching, research, extension, and service to the people of the state of Texas and beyond. TIAS is an exemplary program that furthers that mission by providing world-class research opportunities for our faculty and students. The product of TIAS is the enrichment of research and education for our state, nation, and world.

Mark Hussey
Interim President, Texas A&M University

“TIAS is wisely designed to catalyze collaborations among student and faculty scholars and to attract a wide range of visitors with challenging ideas.”

-Dudley R. Herschbach, Nobel Prize in Chemistry, 1986


TIAS DIRECTOR’S MESSAGE

The Texas A&M University Institute for Advanced Study (TIAS) is a dream of mine that has been over a decade in the making. I am pleased that, thanks to the hard work of many individuals, TIAS is now an operational reality and is making significant contributions toward advancing the academic excellence of Texas A&M. The Vision 2020 objective is for Texas A&M to become a distinctive university measured by world standards of academic excellence. How do we accomplish this? We must enhance the University’s academic quality—exactly why TIAS was created—by identifying and attracting national academy and Nobel caliber academic talent. TIAS, the cornerstone of Texas A&M’s approach to enhancing teaching, research, and scholarship, creates a streamlined, merit-based structure for moving academic programs to the top tier of universities and the University to global academic leadership. TIAS provides resources and a rigorous process for identifying and attracting world-class talent to Texas A&M. Two years into our five-year start-up plan, TIAS has recruited fifteen eminent scholars, and these scholars are making a positive impact on teaching, research, and scholarship at Texas A&M. Once a substantial endowment has been secured, TIAS will target and attract up to twenty world-renowned scholars each year to team with our exceptional faculty and students. By fostering collaborative relationships among the TIAS scholars, faculty, and students, the Institute advances the University’s research productivity and deepens the students’ educational experiences.

“While we expect continued progress toward the strategic 2020 goals on all fronts, TIAS is the key that will dramatically enhance the standing of Texas A&M among world class universities.”

-Eddie Joe Davis, President, Texas A&M Foundation (pictured on left with John L. Junkins)


Scholarship and creative thinking are the foundation on which great universities thrive. TIAS is the cornerstone of that foundation at Texas A&M.

John L. Junkins
Founding Director
Texas A&M University Institute for Advanced Study


TABLE OF CONTENTS

2   TIAS and Vision 2020
4   Administrative Council
6   Advisory Board
7   TIAS Advocates
9   2013 Gala
11  TIAS Faculty Fellows Overview
12  Faculty Fellows: From Across the Globe
14  2012-2013 Faculty Fellows’ Research
42  2013-2014 Faculty Fellows
52  Financial Overview
54  Charting the Way Forward


“TIAS can provide a platform for recognized academic leaders to interact with students and faculty and thereby greatly encourage creativity in research and teaching on the campus.”

-David M. Lee, Nobel Prize in Physics, 1996

TIAS AND VISION 2020

In late 1997, then Texas A&M President Ray Bowen proposed that the University strive to achieve recognition as one of the top ten universities in the nation—while maintaining the distinctiveness of Aggieland—by the year 2020. Dr. Bowen’s challenge to the University community led to Vision 2020, a strategic plan to reinforce Texas A&M’s reputation as a world-class university. Vision 2020 identifies twelve imperatives. Of these, the Texas A&M University Institute for Advanced Study (TIAS) is designed to accelerate the achievement of four:

• Elevate Our Faculty and Their Teaching, Research, and Scholarship.
• Strengthen Our Graduate Programs.
• Enhance the Undergraduate Academic Experience.
• Build the Letters, Arts, and Sciences Core.

As envisioned by a group of Texas A&M distinguished professors, each year TIAS invites highly acclaimed intellectuals from around the world to serve on campus as TIAS Faculty Fellows to:

• collaborate with faculty and graduate students on cutting-edge research;
• present classroom lectures to graduate and undergraduate students and provide opportunities for engagement that enrich the academic environment; and
• give public lectures to enhance the intellectual atmosphere on campus and throughout the community.

In its first two years, TIAS has brought fifteen Faculty Fellows to Texas A&M. With growth of the existing endowment, TIAS plans to expand to twenty new Faculty Fellows each year.

With partial funding from the Academic Master Plan and generous support from Texas A&M University System Chancellor John Sharp through the Academic Scholars Enhancement Program, TIAS operates across all of the University’s colleges. Recruiting of Faculty Fellows is driven solely by the excellence of the nominations and their correlation with a college’s strategic plans, without regard to existing resources within that college. Nominations for Faculty Fellows come from the University’s distinguished professors and college deans. After an extensive review, an advisory board of nine Texas A&M distinguished professors develops a list of nominees who are recruited to become Faculty Fellows.

While not specifically intended as a recruiting tool, TIAS gives Faculty Fellows the opportunity to experience Texas A&M first-hand and in depth and to consider carefully how its academic environment could advance their goals. TIAS also allows the academic community a low-risk opportunity to assess how each Faculty Fellow might—as a permanent member of the faculty—contribute to Texas A&M’s programs and advance its national and international reputation as a top university and a world-class research institution. Since several Faculty Fellows have agreed to extend their appointments and one Faculty Fellow has joined the faculty permanently, TIAS has clearly emerged as an important vehicle for attracting exceptional scholars to join the Texas A&M faculty.



ADMINISTRATIVE COUNCIL

CO-CHAIRS

Glen A. Laine
Interim Vice President for Research
Texas A&M University

Karan L. Watson
Provost and Executive Vice President for Academic Affairs
Texas A&M University

The Texas A&M University Institute for Advanced Study is designed to accelerate progress towards making Texas A&M a global academic leader by bringing more world-class talent to interact with our current faculty and students. I strongly concur with the Chancellor and Dr. Junkins regarding the importance of this institute in advancing our quality and reputation. We are working effectively together on this critical initiative. I salute TIAS for its resounding success in bringing such accomplished and inspiring individuals to Texas A&M.

The Texas A&M University Institute for Advanced Study has been conceived to foster and enrich the academic and research environment at Texas A&M. The notable scholars who have served as Faculty Fellows, as well as the distinguished scholars who will be attracted in the future, encourage and support dynamic collaborations with Texas A&M’s already distinguished faculty. We are indeed fortunate to have an institute such as TIAS—a key to retaining our best faculty and recruiting stellar faculty and students.

MEMBERS

M. Katherine Banks
Dean, Dwight Look College of Engineering
Director, Texas A&M Engineering Experiment Station
Vice Chancellor for Engineering, The Texas A&M University System

Tadhg P. Begley
Distinguished Professor; holder, Derek Barton Professorship in Chemistry
Robert A. Welch Foundation Chair
Department of Chemistry, College of Science

José Luis Bermúdez
Dean, College of Liberal Arts

Karen L. Butler-Purry
Associate Provost for Graduate and Professional Studies
Office of the Provost and Executive Vice President for Academic Affairs

Bhanu P. Chowdhary
Associate Dean, Research and Graduate Studies
College of Veterinary Medicine & Biomedical Sciences

B. J. Crain
Vice President for Finance and Chief Financial Officer
Division of Finance and Administration

Eddie J. Davis
President, Texas A&M Foundation

Ricky W. Griffin
Distinguished Professor; Blocker Chair in Business
Head, Department of Management, Mays Business School

Joanne R. Lupton
Distinguished Professor and holder, William W. Allen Endowed Chair in Nutrition
Department of Nutrition and Food Science, College of Agriculture & Life Sciences

Kate C. Miller
Dean, College of Geosciences

H. Joseph Newton
Dean; Richard H. Harrison III/External Advisory and Development Council Dean’s Chair
George P. Mitchell ’40 Chair in Statistics, College of Science

Michael G. O’Quinn
Vice President for Government Relations
Office of the President

Timothy D. Phillips
Distinguished Professor of Veterinary Integrative Biosciences
College of Veterinary Medicine & Biomedical Sciences

Thomas R. Saving
Distinguished Professor of Economics; holder, Jeff Montgomery Professorship
Department of Economics, College of Liberal Arts
Director, Private Enterprise Research Center (PERC)

Niall C. Slowey
Professor, Department of Oceanography
College of Geosciences

John N. Stallone
Professor and Acting Head
Department of Veterinary Physiology and Pharmacology
College of Veterinary Medicine & Biomedical Sciences



ADVISORY BOARD

TERM EXPIRES NOVEMBER 30, 2014

Margaret J.M. Ezell, College of Liberal Arts
B. Don Russell Jr., Dwight Look College of Engineering
James E. Womack, College of Veterinary Medicine & Biomedical Sciences

TERM EXPIRES NOVEMBER 30, 2015

R. J. Q. Adams, College of Liberal Arts
Marlan O. Scully, College of Science
Christopher Layne, Bush School of Government and Public Service

TERM EXPIRES NOVEMBER 30, 2016

J.N. Reddy, Dwight Look College of Engineering
Rajan Varadarajan, Mays Business School
Stephen Safe, College of Veterinary Medicine & Biomedical Sciences

TIAS ADVOCATES

The Texas A&M University Institute for Advanced Study is honored to have a distinguished council of advocates who help advance the mission of the Institute. TIAS Advocates help identify others interested in securing the financial foundation that sustains the Institute’s faculty-driven, merit-based mission. The TIAS Advocates include distinguished professors, former presidents and chancellors, and graduates and friends of Texas A&M.

Ray M. Bowen ’58
Jerry S. Cox ’72
John L. Crompton ’77
Edward S. Fry
J. Rick Giardino
John A. Gladysz
Melburn G. Glasscock ’59
William C. Hearn ’63
Rodney C. Hill
Michael A. Hitt
William E. Jenkins
Christopher Layne
Carolyn S. Lohman
Joanne Lupton
John W. (Bill) Lyons, Jr. ’59
George J. Mann
William J. Merrell, Jr. ’71
Charles R. Munnerlyn ’62
Gerald R. North
Erle A. Nye ’59
Thomas W. Powell ’62
Herbert H. Richardson
B. Don Russell ’70
Stephanie W. Sale
Thomas R. Saving
Marlan O. Scully
Les E. Shephard ’77
James M. Singleton IV ’66
Ronald L. Skaggs ’65
Michael L. Slack ’73
Christine A. Stanley ’90
Bruce Thompson
James E. Womack



2013 GALA

Each year, the Texas A&M University Institute for Advanced Study inducts its newest class of Faculty Fellows at a formal gala on the Texas A&M campus. The first class of Fellows was introduced to the Texas A&M community during a January 25, 2013, gala at the Memorial Student Center.

“The Texas A&M University Institute for Advanced Study provides a great opportunity to both collaborate with colleagues and to learn about the excellence that exists at Texas A&M.”

-Peter Stang, National Medal of Science, 2011

The festivities officially began when each Faculty Fellow and guest entered the event under a canopy of sabers provided by Texas A&M’s Ross Volunteers, the official honor guard of the Texas governor. Following a dinner, TIAS Founding Director John Junkins served as master of ceremonies. Dr. Junkins introduced A&M System Chancellor John Sharp and then University President R. Bowen Loftin. He invited each of the six new Faculty Fellows to the stage and provided highlights of each scholar’s career. As a keepsake, Dr. Junkins presented each Fellow with a beautiful bronze replica of Rodin’s “The Thinker.” Guests of the gala danced to big band music played by the Greg Tivis Orchestra. TIAS will introduce its second class of Faculty Fellows at a gala on February 7, 2014.



TIAS FACULTY FELLOWS OVERVIEW

“I am very pleased to be a part of the emerging Texas A&M University Institute for Advanced Study.”

-Roy Glauber, Nobel Prize in Physics, 2005

The hallmark of a great university is for its students to have access to and interact with the finest academic minds in the world, and for its faculty to conduct cutting-edge research for the benefit of mankind. In its first year of operation, TIAS brought six outstanding scholars as Faculty Fellows to the Texas A&M University campus. During its second year, TIAS will bring nine more distinguished researchers to Texas A&M. Each of these fifteen Faculty Fellows is an example of the world-class talent TIAS is attracting to the University to interact with our faculty and students. Two of the Fellows are Nobel laureates, another is a National Medal of Science winner, and the rest hold “national academy” levels of distinction in a wide spectrum of disciplines. Their accomplishments and recognitions tell us an important truth: TIAS is not just a nice idea that might work; it is a reality that is working. During their visits to Texas A&M, TIAS Fellows establish joint research objectives with our faculty, interact with students, and—on a selective basis—give public lectures. This annual influx of talent will enrich the quality of our programs, improve our visibility, and heighten our reputation for excellence. TIAS will dramatically enhance Texas A&M’s standing as a global academic leader.



FACULTY FELLOWS: FROM ACROSS THE GLOBE

2013-2014 FACULTY FELLOWS



TIME OF YOUR LIFE: WHOLE-GENOME APPROACHES TO CLOCK- AND LIGHT-REGULATED GENE EXPRESSION

JAY DUNLAP
Professor of Genetics and holder, Nathan Smith Chair
Professor of Biochemistry
Geisel School of Medicine at Dartmouth

Jay C. Dunlap is best known for his research on the molecular basis of biological clocks, including their synchronization to environmental light-dark cycles and their regulation of daily rhythms in physiology, metabolism, and behavior. A member of the National Academy of Sciences, Dunlap has authored and co-authored more than 150 research papers, as well as a widely used textbook on biological clocks, Chronobiology: Biological Timekeeping. He received a National Institutes of Health Method to Extend Research in Time (MERIT) Award. Dr. Dunlap’s work has been recognized with the Honma International Prize for Biological Rhythms Research, the Genetics Society of America’s Robert L. Metzenberg Award, and the George W. Beadle Medal.

Most life on earth is exposed to daily environmental cycles. To cope with these cycles, a vast array of organisms exhibit daily rhythms in their physiology, metabolism, and behavior that are thought to optimize resource allocation and improve fitness. These biological rhythms are not a simple reaction to environmental change; they are driven by internal circadian clocks that anticipate daily changes in the environment. For example, our heart rate and blood pressure increase in anticipation of waking each morning. Likewise, our capacity for learning and memory and the onset of sleep are also clock-controlled. Because of the clock’s widespread influence on physiology, metabolism, and behavior, disruption of the clock, whether through mutation or by living out of phase with the environment (e.g., shift work, jet lag), significantly impacts physical and mental health.

Figure 1. Cultures of Neurospora crassa.


While these examples are from humans, almost all eukaryotic organisms possess similar biological rhythms that are regulated by conserved circadian clock mechanisms. The circadian clock comprises input pathways that synchronize the oscillator to daily environmental cycles, an oscillator that keeps circadian time, and output pathways that relay time-of-day information from the oscillator to the systems that control physiology, metabolism, and behavior. Pioneering studies in the genetic model system Neurospora crassa have uncovered key genes and mechanisms that mediate circadian oscillator function and light input (Figure 1). Although a number of clock-controlled genes have been identified in Neurospora, determining how the clock controls expression of target genes, defining the molecular pathways involved, and identifying the overt rhythms that clock-controlled genes regulate are key to understanding how the clock controls biological rhythms.



In a collaborative effort currently funded by the National Institutes of Health (NIH) (P01 GM68087), we have studied how light and the circadian clock control the transcription of mRNA. Using high-throughput sequencing technology, we have identified the direct gene targets of WC-1, a blue-light photoreceptor and central component of the circadian clock. WC-1 forms a complex with WC-2, and this complex, called the WCC, functions as a transcription factor (TF) to directly and indirectly control rhythmic gene expression and overt rhythmicity. It is also important for the transcriptional response to light. We found that several of the direct targets of the WCC are themselves TFs, and current NIH-funded work involves determining the downstream targets of these TFs using chromatin immunoprecipitation sequencing (ChIP-seq) and mRNA sequencing (RNA-seq) in wild-type versus TF mutant cells. We have worked closely to analyze ChIP-seq and RNA-seq data from these samples, and three papers are in preparation as a result of these interactions.

In addition to clock- and light-dependent transcription, accumulating evidence suggests that the clock also regulates translation, but how it does so is unknown, and current proteomic techniques are not adequate to address this question. We have therefore extended our collaboration to include light, developmental, and clock control of translation via ribosome profiling. Ribosome profiling is an exciting and powerful new technology that can track translation of all mRNAs in cells simultaneously with codon resolution. Ribosomes in the process of translation protect the mRNA from degradation by specific nucleases (Figure 2). The ribosome-protected mRNA fragments are extracted and used to prepare an Illumina cDNA sequence library. Sequence identification enables determination of the precise mRNA regions that are translated by ribosomes. Furthermore, measurements of relative rates of protein synthesis are obtained by comparing the “ribosome density” for each mRNA with the level of that mRNA (determined by sequencing the RNA population via RNA-seq). Higher ribosome density per mRNA generally corresponds to increased translation. Thus, this approach can provide unprecedented regulatory and structural insight into translational processes in living cells.
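The “ribosome density” comparison described above reduces, at its core, to a per-gene ratio of footprint density to mRNA abundance. The following minimal sketch illustrates the idea; all gene names, read counts, and the RPKM normalization choice are hypothetical illustrations, not data or code from the study.

```python
# Sketch of the ribosome-density comparison: translation efficiency is the
# density of ribosome-protected fragments (RPFs) relative to the mRNA's
# abundance from RNA-Seq. Counts are normalized to reads per kilobase per
# million mapped reads (RPKM) before taking the ratio.
# Gene names and numbers below are hypothetical, for illustration only.

def rpkm(counts, lengths_bp, total_reads):
    """Reads per kilobase of transcript per million mapped reads."""
    return {g: counts[g] / (lengths_bp[g] / 1e3) / (total_reads / 1e6)
            for g in counts}

def translation_efficiency(rpf_counts, rna_counts, lengths_bp):
    rpf = rpkm(rpf_counts, lengths_bp, sum(rpf_counts.values()))
    rna = rpkm(rna_counts, lengths_bp, sum(rna_counts.values()))
    # Higher RPF density per unit mRNA generally indicates more translation.
    return {g: rpf[g] / rna[g] for g in rpf if rna[g] > 0}

rpf_counts = {"geneA": 900, "geneB": 150, "geneC": 450}   # ribosome footprints
rna_counts = {"geneA": 300, "geneB": 300, "geneC": 900}   # RNA-Seq reads
lengths_bp = {"geneA": 1500, "geneB": 1500, "geneC": 3000}

te = translation_efficiency(rpf_counts, rna_counts, lengths_bp)
```

With these toy numbers, geneA carries three times the footprint density expected from its mRNA level, the pattern the passage associates with increased translation.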

These ribosome profiling experiments provide an opportunity to examine all protein synthesis in Neurospora as a function of time of day and after a light treatment. Currently we are analyzing sequences obtained from ribosome profiling of N. crassa grown for 24 hours in the dark. In an initial pilot test, sequences associated with ribosomes mapped to translated sequences as predicted, but additional sequences corresponding to upstream open reading frames (uORFs) were also detected (Figure 3). The ability to detect translation at uORFs will lead to new insight into translational regulation under different experimental conditions. Cultures of N. crassa grown under additional regimens—including protocols to examine circadian and light-induced responses—have been prepared and await analysis. When circadian targets of translational control are identified, we will be in a position to investigate the mechanism of circadian translational control in future studies. Moreover, once these techniques are established in Neurospora, we will extend our analysis of clock control of translation to Drosophila cells in collaboration with Dr. Paul Hardin.
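Conceptually, the wild-type-versus-mutant design described earlier amounts to intersecting two gene sets: genes bound by the TF (ChIP-seq peaks) and genes whose expression changes when the TF is removed (RNA-seq). A minimal sketch, with hypothetical gene identifiers and an assumed two-fold cutoff rather than the study's actual criteria:

```python
import math

# Hedged sketch of the wild-type vs. TF-mutant comparison: a gene is called a
# *direct* target if the TF binds near it (a ChIP-seq peak) AND its expression
# changes when the TF is removed (RNA-seq). All gene IDs, fold changes, and
# the two-fold cutoff are hypothetical placeholders.

def direct_targets(chip_bound, fold_change, fc_cutoff=2.0):
    """Genes with a ChIP-seq peak whose |log2 fold change| passes the cutoff."""
    deregulated = {g for g, fc in fold_change.items()
                   if abs(math.log2(fc)) >= math.log2(fc_cutoff)}
    return chip_bound & deregulated

chip_bound = {"ncu1", "ncu2", "ncu3"}                   # peaks near these genes
fold_change = {"ncu1": 0.2, "ncu2": 1.1, "ncu4": 5.0}   # mutant / wild type

print(sorted(direct_targets(chip_bound, fold_change)))  # ['ncu1']
```

Here only "ncu1" is both bound and strongly deregulated; "ncu2" is bound but unchanged, and "ncu4" changes but is not bound, so neither is called a direct target.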

Figure 2. Diagram of the strategy to identify translated mRNA regions by ribosome profiling. Ribosomes with associated mRNAs are purified, and mRNA regions that are not protected by ribosomes are digested with nuclease (steps 1-17). Protected regions associated with ribosomes are purified (steps 18-29) and converted to DNA for sequencing in a series of steps (not shown). Figure reprinted from: Ingolia, N.T., Brar, G.A., Rouskin, S., McGeachy, A.M., and Weissman, J.S. (2012) Nat Protoc 7, 1534-1550.

Figure 3. Example of ribosome profiling for the NCU06110 gene, which specifies a thiolase. RP3 and RP5 represent 25-mer ribosome profiling data, and RNA-Seq WT 0 min represents mRNA transcriptome data for cells grown in the dark (no light induction), aligned to the Genes 10.6 model gene structure. Purple indicates the coding region of the gene; the coding region is interrupted by an intron that is spliced out of the mRNA; additional introns in the transcript are also indicated. The ribosome profiling data map precisely to the gene’s coding region. The small peak of ribosome profiling data in a region corresponding to the mRNA 5’ leader may represent ribosomes translating a short upstream open reading frame.

IN COLLABORATION WITH:
Matthew S. Sachs, professor, Department of Biology, College of Science
Deborah Bell-Pedersen, professor, Department of Biology, College of Science
Paul Hardin, distinguished professor, Department of Biology, College of Science
Stephen Z. Caster, Ph.D. student, genetics (Bell-Pedersen laboratory)
Jiajie Wei, Ph.D. student, biology (Sachs laboratory)



HOW DO THE OCEAN AND ATMOSPHERE INTERACT, AND WHY IS THIS INTERACTION IMPORTANT?

PETER S. LISS
Professorial Fellow
School of Environmental Sciences
University of East Anglia
Norwich, England

The atmosphere is almost never still. Whenever the wind blows—a gentle breeze or a howling hurricane—it moves material around, including solids such as dust (or larger) particles, liquids such as fresh or salt water, and the gases that make up the atmosphere itself. Perhaps the most important effect of this movement is the global water cycle, in which water vapor is lost from the ocean through evaporation and re-deposited over the land as rain. Without this cycle, multicellular life forms on land (including humans) would not exist.

Peter Liss is renowned for his contributions on the biogeochemical interactions between the ocean and the atmosphere. His work is an integral part of East Anglia’s Centre for Ocean and Atmospheric Sciences. Liss’s research reveals how gases formed biologically in the surface oceans can affect atmospheric chemistry and physics. A fellow of the Royal Society, Liss was the first recipient of the Challenger Society Medal and has received the Plymouth Marine Sciences Medal and the John Jeyes Medal of the Royal Society of Chemistry. He is a guest professor at the Ocean University of Qingdao, China, and a member of Academia Europaea.

Both natural and man-made substances are transferred in both directions across the interface between the ocean and atmosphere. These transfers are important for life in the oceans and affect atmospheric properties including climate and air quality. Three aspects of ocean-atmosphere interactions have been selected here because of their importance and generality: dust from land depositing on the oceans; the formation of particles in the marine atmosphere; and the uptake of carbon dioxide (CO2) by the oceans.

Figure 1. Dust fluxes to the world ocean. “Hot” colors indicate areas receiving large amounts of dust, with regions of low dust inputs shown in “cool” colors. Scale units are grams of dust per square meter per year. (Jickells, et al. 2005)


Dust particles from land are blown into the air, particularly from desert areas in China and Saharan Africa, and are carried by winds around the globe. Saharan winds move dust westward, while dust from the Gobi desert in northern China moves east. Eventually a significant fraction is deposited on the oceans. The Sahara deposits dust on the North Atlantic Ocean, while the Gobi deposits dust particles on the East and South China Seas. Other desert and semi-desert areas supply dust to different parts of the global ocean. The amount of these deposits varies, depending largely on proximity to the source regions (Figure 1).



The mineral grains that make up the dust are not particularly important to the oceans, but certain elements they contain, such as iron, play a significant role. Many of these elements are vital nutrients for the growth of microscopic plants (phytoplankton) in the sea. Close to the coasts there is plenty of iron brought down by rivers, but further offshore its concentration decreases until it is virtually undetectable. Since phytoplankton form the base of the marine food chain (the grass of the sea), the atmospheric source of this nutrient is potentially important for the operation of the whole chain in these offshore regions. This is particularly relevant in the Southern Oceans because of their distance from major landmasses not covered by ice; the relatively small atmospheric input of iron there may limit biological life. Several experiments in various parts of the ocean have shown that adding iron can increase phytoplankton production by a factor of four or five, although the effect is usually short-lived. This has led some to propose artificially fertilizing areas such as the Southern Oceans with iron as a purposeful way of getting the plankton to take up extra CO2, thereby removing it from the atmosphere. Such proposals for geo-engineering the planet should be treated with great caution, since unforeseen or underestimated secondary consequences might do more harm than good.

Much of the iron contained in the dust, however, is not in a form that can be assimilated by the phytoplankton. Iron and other metals in the dust can become available for assimilation through chemical reduction, which renders them soluble. This can occur under the action of dissolved organic matter in seawater activated by sunlight, or by reaction with atmospheric ozone. Such reactions are not well understood.
To address this problem Alison Smyth, master’s candidate, is conducting laboratory experiments to try to establish the role of atmospheric ozone, after deposition to the ocean surface, in bringing about solubilisation of trace metals. Much of the dust blown from the land occurs as relatively large particles. Over the oceans, however, there are considerably greater numbers of fine particles. These can be formed from seawater, particularly under strong winds, or by condensation of gases emitted from surface seawater. These fine particles are important for several reasons. They can directly reflect sunlight back to space, causing cooling of the climate system. Also, if they are the right size, they can act as seeds—cloud condensation nuclei or CCN—on which water vapor can condense to form cloud droplets. Airborne particle concentrations can affect the degree of

20

cloudiness over the oceans and indirectly impact the climate. Currently, the direct and indirect radiative forcings by fine particles, including marine aerosols, represent the largest uncertainty in projections of future climate. Furthermore, another important role fine particles play is as a site for chemical reactions often leading to the formation of active species (oxidants) that keep the atmosphere relatively free of organic pollutants. Such oxidants are sometimes referred to as the “detergent” of the atmosphere. The proposed scheme in Figure 2 shows the process by which the fine particles form and grow to the size required to act as CCN. Although the details of the reaction scheme shown in Figure 2 are not well understood, it is likely to involve organic molecules, sulfur dioxide, amines, iodine, and probably other substances, several of which are themselves formed from gases initially produced by phytoplankton in the surface oceans. The formation of these gases and how they transfer from the sea to the air is an active research area. There has been a lot of discussion about the effects on the global climate of anthropogenic gases produced from burning fossil fuel and through industrial processes. Of these gases, which include refrigerants such as freons, the most significant is carbon dioxide. All organic matter turns to CO2 as it decays, but until we started using large quantities of fossil fuels, the global balance between uptake and release of CO2 was

Figure 2. In aerosol formation bonding particles must cross an energy threshold - the nucleation barrier - beyond which aerosol growth becomes spontaneous (a); Organic molecules such as amines or iodine species (b) formed by phytoplankton in the surface oceans that mingle with atmospheric gases can facilitate the crossing of the barrier by creating a critical nucleus with sulfuric acid molecules (c); leading to aerosol growth (d). Knowing the composition of the critical nucleus will enable researchers to predict the nucleation rate, an important variable in atmospheric models. (Zhang, 2010)

essentially in equilibrium. As we have consumed fossil fuels over the past 300 years or so, the carbon dioxide produced has been building up in the atmosphere faster than plant uptake and dissolution in the oceans can remove it. About a quarter of all the carbon dioxide emitted by burning fossil fuels, land-use change, and other anthropogenic activities is taken up by the oceans by transfer across the air-sea interface. The oceans thus provide a substantial "service" to mankind: without that uptake, the rise in atmospheric CO2 would be significantly greater, with correspondingly larger impacts on global climate. Figure 3 shows how this flow of carbon dioxide across the sea surface is distributed around the oceans. In the warmer tropical parts of the oceans there is net emission of CO2 (hot colors in Figure 3), whereas in colder polar seas the reverse happens (cold colors), with the difference providing the overall uptake.

Given the oceans' substantial ability to absorb CO2, it is vital to be able to predict whether this will change in the future, with clear implications for atmospheric levels of the gas and hence climate. To make such forecasts we need to understand the many processes involved so that they can be incorporated into predictive models.

The CO2 uptake by the oceans has additional consequences that may significantly affect marine life. When dissolved in water, CO2 forms a weak acid, and as its concentration builds up, the ocean becomes more acidic. This has major implications for many creatures, particularly oysters, corals, and some plankton that secrete shells of calcium carbonate, as the shells tend to dissolve as the acid concentration in the ocean increases.

Observations over the last two decades at several oceanic time-series locations have shown a measurable effect on the acidity of the surface oceans, as shown in Figure 4. Although there is considerable seasonal variability, it is clear that the inexorable rise in atmospheric CO2 (top panel, black trace) is matched by an equivalent increase in carbon dioxide in seawater (pCO2, top panel, colored traces), a decrease in carbonate ion (carbonate converted to bicarbonate, bottom panel), and an associated decrease in pH (that is, an increase in seawater acidity; middle panel).

The examples briefly described here show how natural and man-made materials crossing from the oceans to the atmosphere, and vice versa, profoundly affect vital properties of both. These include the functioning of biological life in the oceans, the ability of the atmosphere to cleanse itself of pollutants harmful to humans, and the major role the combined ocean/atmosphere system plays in controlling the climate of the globe. This research is an important part of a worldwide effort, through projects such as the Surface Ocean – Lower Atmosphere Study (SOLAS <www.solas-int.org>), to better understand how the oceans and atmosphere interact.
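The carbonate chemistry behind this acidification can be summarized in two standard textbook equilibria (a sketch added here for clarity, not taken from the report itself):

```latex
\mathrm{CO_2 + H_2O \;\rightleftharpoons\; H_2CO_3 \;\rightleftharpoons\; H^+ + HCO_3^- \;\rightleftharpoons\; 2\,H^+ + CO_3^{2-}}
\qquad
\mathrm{CaCO_3(s) \;\rightleftharpoons\; Ca^{2+} + CO_3^{2-}}
```

The added hydrogen ions titrate carbonate into bicarbonate (the conversion shown in the bottom panel of Figure 4), lowering the carbonate-ion concentration and hence the saturation state of calcium carbonate; once seawater becomes undersaturated, shells of calcium carbonate tend to dissolve.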

Figure 4. Time series of atmospheric CO2 (top panel, black trace) and associated seawater carbon system parameters (explained in text) at three ocean time-series sampling sites (central N. Pacific, green; western N. Atlantic, red; eastern N. Atlantic, blue). (Orr, 2011)

Figure 3. Annual net air-sea carbon dioxide fluxes for the year 2000. (Takahashi et al., 2009)

I N C O L L A B O R AT I O N W I T H :
Shari Yvon-Lewis, associate professor, Department of Oceanography, College of Geosciences
Renyi Zhang, professor, Department of Atmospheric Sciences, College of Geosciences, and Department of Chemistry, College of Science
Piers Chapman, professor, Department of Oceanography, College of Geosciences
Alison Smyth, M.S. student, Department of Oceanography, College of Geosciences



M AT H E M AT I C A L A N D C O M P U TAT I O N A L M O D E L I N G O F M AT E R I A L S B E H AV I O R

ALAN NEEDLEMAN
Professor of Materials Science and Engineering
College of Engineering
University of North Texas

Alan Needleman conducts research in materials science, especially mathematical modeling of fracture, dislocations, and environmental effects on materials. His research focuses on improving understanding of multifunctional material properties. After 40 years at the Massachusetts Institute of Technology and Brown University, he joined the University of North Texas in 2009. Needleman received a Guggenheim Fellowship and is a member of the National Academy of Engineering and the American Academy of Arts and Sciences. He received the Timoshenko Medal from the American Society of Mechanical Engineers, the Prager Medal from the Society of Engineering Science, and the Drucker Medal from the American Society of Mechanical Engineers. The Institute for Scientific Information recognizes him as a Highly Cited Researcher in the fields of engineering and materials science.

Figure 1. Dislocation structures that develop in the bulk of a bent specimen deformed in plane strain. (a) Fine-scale structure obtained using the "2.5-D" dislocation dynamics (DD) simulation scheme. (b) Coarse GND structure obtained using the standard DD method.

Recent studies have used discrete dislocation simulations of the bending of microbeams with two simulation schemes, which differ only in the fine details of how short-range dislocation interactions are represented (Figure 1). The results for bending show that the moment versus rotation response is essentially governed by the long-range interactions among geometrically


Dislocations are atomic-scale configurational line defects in crystalline materials and the main carriers of plastic deformation. Their individual and collective properties reveal an intriguing duality. An individual dislocation is a source of stress concentration; at the same time, its motion causes slip and relieves the stress. Collectively, dislocations are invoked to rationalize the fact that materials yield at a stress orders of magnitude lower than the ideal strength; on the other hand, dislocation long-range interactions and short-range intersections are invoked to explain why materials harden. Dislocations glide, climb, and interact with other lower- and higher-dimensional defects. While still not properly modeled, their self-organization properties are often exploited to produce stronger materials. Because of this complexity, continuum plasticity theories face challenges in modeling macroscopic phenomena, such as fracture and fatigue, and in modeling behavior in small-scale structures, such as thin films.




necessary dislocations (GNDs), so that the refinement in simulation scheme has a small effect (Figure 2a). On the other hand, the elastic strain energy locked in the dislocation structures (e.g., Figure 1) is affected by the short-range interactions (Figure 2b). Irrespective of the simulation method, a size effect is predicted on both strength and energy storage. This behavior cannot presently be predicted using continuum plasticity theories.


Figure 4. (a) Tapered pillars after deformation (left: 3.2-micron-diameter pillar; right: two 0.4-micron-diameter pillars). (b) Stress-strain curves for two different pillar sizes with no taper angle. (c) Same for a 5-degree taper angle. Two realizations are shown in each case.


Figure 2. (a) Normalized bending moment versus rotation response for two sizes of geometrically similar microbeams. Thick lines: “2.5D” scheme; Thin lines: standard 2D scheme. (b) Stored energy normalized by the total work versus specimen size.

Discrete dislocation simulations combining glide and climb have been carried out to model creep. It is generally believed that creep strain is produced by dislocation glide at a rate controlled by dislocation climb. However, direct simulation of the phenomenon has been lacking because of the complexity of incorporating both dislocations and point defects in the same framework (the diffusion of point defects such as vacancies mediates climb). These simulations are among the first of their kind in the world [1].


The principle involves solving two concurrent boundary-value problems (Figure 3). These problems are coupled: the frequency and locations of dislocation climb events are determined by the vacancy distribution, and any climb-induced production or annihilation of vacancies affects the evolution of the vacancy field through a source term in the equation of continuity. The success of this methodology has been demonstrated in that creep deformation becomes an emergent behavior (Figure 3c) rather than a prescribed constitutive relation, as in conventional continuum formulations. A series of parameter sensitivity analyses is being conducted with this framework before it is applied to multilayered material systems.
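In schematic form, the chemical part of this coupling is an unsteady diffusion equation for the vacancy concentration $c$ with a climb-induced source term (generic notation for illustration, not necessarily the symbols used in the original papers):

```latex
\frac{\partial c}{\partial t} \;=\; \nabla \cdot \left( D \, \nabla c \right) \;+\; s(\mathbf{x}, t)
```

Here $D$ is the vacancy diffusivity and $s$ accounts for vacancies produced or annihilated where dislocations climb; the dislocation dynamics problem determines where and how often climb events occur, while the local value of $c$ in turn feeds the climb velocities, coupling the two boundary-value problems.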


Figure 3. Boundary-value problem for (a) mechanical part with Dislocation Dynamics using the superposition method of van der Giessen and Needleman; and (b) chemical part with unsteady diffusion. (c) Typical results of strain versus time at various applied stress levels. These results are obtained when the DD and diffusion problems are supplemented with kinetic equations supplying the dislocation glide and climb velocities as functions of the driving forces.

Studies of the combined effects of micropillar size and taper on mechanical response are being conducted (Figure 4). Micropillar compression has emerged in recent years as a potential alternative to cumbersome nanoindentation. However, evidence of a strong size effect, as well as potential artifacts associated with pillar fabrication techniques, poses challenges for its use as an efficient nanomechanics tool. Using discrete DD simulations, the contributions of fabrication-induced taper and of specimen size to pillar strength and its evolution with straining were apportioned.

Dislocations are not the only defects affecting a material's strength; others include vacancies, twins, and micro-voids. In particular, the monotonic fracture of many metallic alloys is governed by cavitation processes whereby the loss of stress-bearing capacity at the material-point level occurs through the nucleation, growth, and coalescence of microvoids. As engineered materials become "cleaner," the second-phase particles that give rise to voids become smaller and smaller. Continuum models of ductile fracture do not typically involve a length scale. Work is under way on a micromechanics-based model that incorporates a length scale representing the size dependence of plastic deformation in the matrix material (without voids). The constitutive model is being implemented in a previously developed dynamic finite-element code, and the framework is being used to analyze the dependence of fracture initiation and propagation on void size (Figure 5).


Figure 5. (a) Fracture surface of Interstitial-Free steel showing one large dimple and submicron-scale dimples (Dr. Benzerga’s group at Texas A&M). (b) Force versus displacement for three values of the void radius to material length scale ratio in a dynamic three-point bend simulation using the size-dependent fracture model.

I N C O L L A B O R AT I O N W I T H :

[1] Keralavarma, S. M., Cagin, T., Arsenlis, A., Benzerga, A. A., Phys. Rev. Lett. 109, 265504 (2012)


Amine A. Benzerga, associate professor, Department of Aerospace Engineering, Dwight Look College of Engineering
Chris Greer, M.S. student, Department of Aerospace Engineering, Dwight Look College of Engineering
Babak Kondori, Ph.D. student, Department of Materials Science and Engineering, Dwight Look College of Engineering
Sang-Il Lee, Ph.D. student, Department of Materials Science and Engineering, Dwight Look College of Engineering



THE HIGH PRICE OF CHEAP FOOD: RESEARCH ON GLOBAL F O O D S U P P LY C H A I N S

A L E D A V. R O T H
Burlington Industries Distinguished Professor of Supply Chain Management
College of Business and Behavioral Sciences
Clemson University

Aleda V. Roth conducts research on risk analysis, sustainability, and supply chain management. Her research seeks theoretical and practical explanations of how firms can best deploy their operations, global supply chain, and technology strategies for competitive advantage. With more than 200 publications (92 in refereed journals), Roth ranks in the top 1 percent of production and operations management scholars in the United States and seventh worldwide among service management researchers. Over her career, she has received more than $2.75 million in external research funding and more than 70 research and teaching awards, including the Lifetime Achievement Award from the Production and Operations Management Society's College of Service Operations.

The past two decades have witnessed massive shifts of U.S. production eastward to low-cost countries such as China. This shift, coupled with productivity gains, has brought the decimation of U.S. manufacturing jobs into the limelight of public scrutiny. Today, as the economy crawls out of the 2008 financial crisis, we find that many middle-class service jobs have also moved abroad. As these macro events leave many Americans economically challenged, another avalanche is looming: the lure, and the potential liability, of cheap food and ingredients from emerging-market countries (Roth 2008). By Western norms, hygiene, worker safety, environmental practices, quality processes, treatment of animals, and regulation in these countries are well known to be highly substandard. So while globalization has many well-touted benefits, the risks of importing food from emerging countries remain well below the radar of most consumers.

Figure 1. Trends in U.S. Imports from China: Total and Food


Focusing on China as a metaphor for untamed and developing countries, Figure 1 depicts the rapidity of change and the magnitude of Chinese imports. U.S. sourcing of food from China has tripled over a decade, and the pace is escalating along with total imports. Like the opening of the mythical Pandora's box, this work reveals the dark side of global supply chains, whose growing scope and complexity plague food safety. The unintended consequences are creating a conundrum for companies, consumers,



GLOBAL SOURCING OF FOOD

Figure 2. Simplified food supply chain: Farm Supplier → Farming → Marketer/Storage → Processor → Wholesaler/Distributor → Retailer → Consumer, with imports and exports flowing into and out of the chain at intermediate stages.

and policy makers. The Centers for Disease Control and Prevention (CDC) found, for example, that outbreaks were linked to the rise in imported foods, and nearly 45 percent of those outbreaks were tied to food coming from Asia (Bottemiller 2012). The same article cited a Food and Drug Administration (FDA) report that the finding was "... not all that surprising considering the United States is not only importing more food, but more high risk items like seafood, spices, and produce." At the center of our solutions are new strategies for companies, and for governments, based upon ecological principles that can mitigate food supply chain risks. We hope to turn the tide by demonstrating how food supply chains can be redesigned to improve public health and national security by moving toward more local, sustainable agricultural practices.


Most Americans implicitly assume that the food they buy is wholesome and safe. Wrapped around the physical ingredients in food is the intangible ingredient of "trust." However, when we study food supply chains and how they work, we find potential "time bombs."

A typical food supply chain diagram comprises a deceptively simple series of processes that start with the source (or origin) of raw ingredients, followed by a sequence of intermediary steps taken to make and finally deliver the product that is consumed (Figure 2). With globalization, long food supply chains frequently comprise a complex web of suppliers, manufacturers, distributors, wholesalers, and retailers. Surprisingly, these long chains often lack traceability to the source of ingredients and offer little transparency of documentation and information along the entire chain; at the same time, they are coupled with a highly fragmented infrastructure of policies and regulators within and across countries (Roth et al., 2008). This complexity opens the door to deception and opportunistic behaviors among supply chain players, which only heightens the potential risks to consumers.

For example, new U.S. regulations allow chickens raised in the United States to be sent to a slaughterhouse, frozen, and shipped to China, where a Chinese plant processes and packages them to be brought back to the United States for consumption. Research suggests that this change in regulations for chicken processing is fundamentally flawed when it comes to U.S. food safety. While Chinese food processing plants are supposed to operate to the same standards as their U.S. counterparts, regulators are ill-equipped to manage the complexity of the global supply web and cannot guarantee exactly what is coming back. When food enters our ports, less than 2 percent is inspected, and of the items inspected, tests cover only biological contaminants.
There is no routine checking for heavy metals or nonorganic toxins (Roth et al., 2008). Moreover, savvy suppliers sometimes transship products, through other countries such as the Netherlands and Mexico, to evade detection of China as a source.

Figure 3. Global sourcing of ingredients. USA: high fructose corn syrup, sugar, wheat flour (produced and milled), whole grain oats, sunflower oil, strawberry puree, cellulose, red dye #40. China: vitamin and mineral supplements (B1, B2, iron, folic acid), honey. Scotland: sodium alginate. Philippines: carrageenan. Italy: malic acid. Denmark: lecithin (soy). Europe: citric acid. India: guar gum.

Almost a quarter of the average American's food consumption is imported. Walking down the grocery aisles, consumers would be hard-pressed to find processed foods without at least one ingredient from China. Even a simple snack bar, as depicted in Figure 3, contains ingredients from around the globe. In 2010, the U.S. Department of Agriculture (USDA) reported that over one-quarter of the seafood Americans consume comes from China, with tilapia (75 percent), shrimp (70 percent), and cod (50 percent) providing the largest shares. Also emanating from there are large quantities of frozen and canned fruits and vegetables (e.g., 20 percent of frozen spinach), 88 million pounds of candy, spices (e.g., 85 percent of artificial vanilla flavoring and 50 percent of garlic), and other ingredients used in processed foods. China also has a virtual monopoly as a supplier of vitamins, food supplements, and many ingredients in pharmaceuticals. The odds are high that your vitamin C (90 percent), B vitamins (folic acid, 85 percent; thiamine, 85 percent), and "nutritional product ingredients" are sourced from China. Moreover, many U.S. food companies are also looking to China to source organic foods that are often certified by Chinese government agencies. Unfortunately, there is an abundance of loopholes and gaps in the Chinese food

system, even for exports, that are cause for concern (e.g., less rigorous processes for exported ingredients and unlikely tests for pollutants). The bottom line: A significant amount of food that Americans eat crosses the ocean before entering the United States; such foods may have at least trace amounts of contaminants and their cumulative health effects are difficult to trace at best, or at worst unknown.

C A N A RY I N T H E C O A L M I N E

The 2007 pet food debacle may portend the proverbial "canary in the coal mine" scenario for imported human-grade food ingredients. Coal miners would place a canary into the mines prior to entry; if it did not come out alive, the air was not safe for humans. The spate of pet food recalls showed that hundreds of brands, even the most reputable, were brought down by melamine-tainted wheat and rice gluten sourced from China. These minuscule ingredients were blamed for the deaths and sickening of thousands of dogs and cats. Worse yet, as the list of recalled products grew day by day, it was months, and hundreds of products later, before the FDA traced the root cause to unscrupulous Chinese suppliers. Correspondingly, a huge supply



disruption occurred as retailers' shelves remained empty and many consumers faced the question of how and what to feed their pets. At the same time, consumers were unaware that melamine had been found in the urine of U.S. livestock fed melamine-tainted imported feed, and more than 2,500 metric tons of imported Chinese dairy ingredients and products were estimated to have entered the U.S. food system in 2007. Even though there was little research on the cumulative effects of tiny, trace amounts of melamine on humans, the FDA contended that there was "no risk." Back in China, however, kidney ailments were found in thousands of Chinese babies drinking melamine-laced milk.

A 2008 federal study reported that the prevalence of chronic kidney disease in the United States had increased from 20 percent to 25 percent over the past decade, and for older Americans it is significantly higher. Is it a coincidence that these trends parallel the imports from China? The problem is that we don't know. But we do know this: the fallout of the pet food scandal caused U.S. authorities to enforce a law that was on the books but rarely enforced, namely, regulations regarding country-of-origin labeling for fresh meat, fish, fruits, and vegetables. Yet country-of-origin labeling is still not available to consumers for ingredients in any processed foods, including baby foods.

Surveys of companies covering thousands of branded food items found that manufacturers and distributors, for the most part, do not want you to know where they source your food; "ingredient sourcing" remains veiled in secrecy. Requests for import data on food from the U.S. Commerce Department yielded records that were not only difficult to review, but in which most of the U.S. buying companies had "exceptions" allowing them to remain anonymous.
Second, reviews of the marketing and packaging of many of these food items found them well crafted to mislead customers into believing that their ingredients come from American farms, even when at least some are sourced from China. It was further found that many foods labeled as "organic" were sourced from abroad, and increasingly from China. In 2008, ABC

News reported that many of Whole Foods' branded organic frozen fruits and vegetables, including its "California Blend," were actually from China. Despite carrying USDA and Quality Assurance International (QAI) certification labels, neither agency actually performed the certification; rather, they outsourced it to Chinese third parties, still a common practice according to investigations of food certifiers. The consumer backlash was so great that the company responded by changing its sourcing strategy, at least temporarily.

THE GLOBAL SOURCING TRAP

Why is it so important that we know the country of origin for ingredients? To answer this question, we need to consider the potential health risks of accumulating trace amounts of toxins that may emanate from polluted environments. Empirical research on foreign direct investment (FDI) reveals that many Western companies moved their production to emerging markets so they could export their pollution costs (Anguelov and Roth 2012). Accompanying the largest transfer of manufacturing in history, and the demands of the rising middle class in China (Zhao et al., 2006), was an unprecedented escalation in water, air, and soil pollution. Clean water grows scarcer every day: much of China's industrial, municipal, and agricultural waste is left untreated. According to a 2008 National Geographic article, about four billion tons of untreated chemical and human waste flow annually through tributaries of the Yellow River; 50 percent of the river was considered biologically "dead," and more than 65 percent of its water is used in agricultural irrigation. Figure 4 depicts the pollution

levels along the Yellow River. More broadly, about two-thirds of China's surface water is polluted, as is its ground water. The USDA found that China uses the world's highest rates of chemical fertilizers per hectare. Adding to the water scarcity, heavily fertilized plants need much more irrigation, not only drawing down the water tables but also concentrating water contaminants. Further, some toxic fertilizers and pesticides, banned in the United States but still used in China, degrade soil quality as runoff enters the waterways. In total, an estimated one-third of China's waterways are unsuitable even for industrial use, much less for agriculture, which takes up the toxins.

The bottom line is this: with so few controls on pollution, much of China's air, water, and soil are already so contaminated that producing any food there is risky business. The current solutions to food quality are not sustainable: quality risk in global food supply chains cannot be eliminated by fiats, inspections, auditing, and testing. Rethinking our supply chain designs using ecological design principles, for both food and production, is a way forward. Our health and national security may depend on it.

That China is now a pollution haven cannot be masked. A 2013 study indicated that the life expectancy of Chinese citizens has been reduced by 5.5 years due to air pollution. Coal-fired power and manufacturing plants, the main sources of air pollution in China, are also major contributors to its acid rain and atmospheric mercury, among other toxins. A report of the National Academies states that one form of mercury, methylmercury (MeHg), is accumulating in the aquatic food chain. Mercury is found in fish and in fish-eating birds and mammals, and it is well known that dietary exposure beyond 5.8 µg/L of whole blood may be harmful to the nervous system, especially for unborn babies and young children. Coincident with recent reports of heavy metals, including cadmium and lead, in China's domestic food chain, the Chinese government announced that it will begin testing for soil contamination. Again, while there is little transparency in China regarding pollution, all signs point to increasingly widespread adverse health effects, which will be challenging for the Chinese government.

NEXT STEPS

The next step in this research is to further model and evaluate the "hidden" risks associated with long supply chains and set the stage for new business models built on sustainability and corporate social responsibility (CSR). Several studies are in various stages of development. The first will develop and test theory-based models of security risks associated with long supply chains in FDA/USDA-regulated industries. The second will address "system" effects, developing system dynamics models to simulate the long-term impacts of environmental pollution on the hidden costs of food and well-being. The third will examine the extent to which firms' publicly available CSR messages align with their realized supply chains and performance outcomes.

I N C O L L A B O R AT I O N W I T H :
Gregory R. Heim, associate professor and Mays Research Fellow, Department of Information and Operations Management, Mays Business School
Xenophon Koufteros, associate professor and holder, Jenna and Calvin R. Guest Professorship, Department of Information and Operations Management, Mays Business School
Rogelio Oliva, associate professor and Ford Faculty Fellow, Department of Information and Operations Management, Mays Business School

Figure 4. Pollution levels along the Yellow River (Huang)



BALANCE SHEET CRISIS: CAUSES, CONSEQUENCES AND RESPONSES

VERNON L. SMITH
Professor of Economics
Argyros School of Business and Economics/School of Law
Chapman University

Vernon L. Smith earned the 2002 Nobel Prize in Economic Sciences for his groundbreaking work in experimental economics and is a member of the National Academy of Sciences. He is a distinguished fellow of the American Economic Association, an Andersen Consulting Professor of the Year, and a recipient of the Adam Smith Award conferred by the Association of Private Enterprise Education. Smith consulted on the privatization of electric power in Australia and New Zealand. He has authored or co-authored more than 250 articles and books on capital theory, finance, natural resource economics, and experimental economics.

Figure 1.

The Depression and the Great Recession were characterized by large increases in mortgage lending followed by abrupt declines. The net flow of mortgage funds is shown for the Depression era in Figure 1 and for the Great Recession in Figure 2 (upper panels); the lower panels contrast the flow of mortgage funds in excess of its trend with the similarly large inflow of foreign capital. The net flow of mortgage credit turned negative in 1931 and again in 2008; a negative flow means that the volume rate at which old loans are paid down exceeds the volume of new loans made.
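The sign convention behind a "negative net flow" can be made concrete with a small sketch (the figures below are hypothetical, for illustration only; they are not data from the report):

```python
def net_mortgage_flow(new_loans, paydowns):
    """Net flow of mortgage credit: new lending minus repayment of old loans.

    Both arguments are volumes over the same period (e.g., billions of
    dollars per year). A negative result means old loans are being paid
    down faster than new loans are being made.
    """
    return new_loans - paydowns

# In an expansion, new lending dominates and the net flow is positive:
print(net_mortgage_flow(new_loans=60.0, paydowns=45.0))   # 15.0
# In 1931 and 2008, paydowns exceeded new lending, so the flow turned negative:
print(net_mortgage_flow(new_loans=40.0, paydowns=55.0))   # -15.0
```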

Figure 2.


Eleven of the last fourteen recessions were foreshadowed by significant declines in new housing expenditures. But only two of these recessions were major, long-lasting economic slumps: the Great Depression that began in 1929 and the recent Great Recession. Although price bubbles are commonplace in history, severe nationwide episodes in the U.S. economy are rare, and their collapses have not been anticipated or well understood by economic and policy experts.

As happened during the Depression and most recently, housing market collapses can lead to a balance sheet crisis in which home values fall against fixed mortgage indebtedness, causing many households to be plunged into negative net equity, which provides an especially severe economic shock for people of modest means whose net wealth is almost entirely in their homes.



Figure 3.

Figure 4.

The aggregate housing wealth data for the recent economic crisis are shown in Figure 3. Equity is the difference between the value of housing assets and the amount owed on those assets. Because real estate values fall against fixed mortgage debt, the decline is felt entirely in homeowner equity.

Some countries, such as Japan and the United States, have followed standard Keynesian prescriptions of increasing government spending and decreasing taxes (government deficit spending) to "fill the gap" left by a collapse of fixed investment and household durable goods consumption. Others, such as Finland in 1992, Thailand in 1997, and Iceland in 2008, had little choice but to reduce government spending and increase tax revenue to rein in government deficits.

Contrast this recent experience with the recessions of 1973-1975, 1980, 1981-1982, and 1990-1991 (Figure 4), in which housing asset values declined but did not bring a large decline in equity, and the recessions were much less severe. The typical post-World War II recession in the United States did not damage household balance sheets, as seen in Figure 4. These and other data indicate that severe economic downturns and long-lasting slumps are associated with large declines in household equity that plunge many households into negative equity: so-called "underwater" homes, in which the occupants owe more than the market worth of their homes. Losses in household equity are mirrored in households' lender banks, bringing tremendous stress to the financial sector. These banks incur severe losses and decreases in equity as people abandon their homes or stop making mortgage payments and homes are foreclosed at depressed values. Balance sheet crises around the world over the past twenty years provide evidence of the efficacy of alternative anti-recession government policies.
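The equity arithmetic described above can be sketched in a few lines (the dollar amounts are hypothetical, chosen only to illustrate the mechanism; they are not figures from the report):

```python
def home_equity(house_value, mortgage_balance):
    """Equity = market value of the housing asset minus the fixed debt owed on it."""
    return house_value - mortgage_balance

def is_underwater(house_value, mortgage_balance):
    """A home is 'underwater' when the occupants owe more than its market worth."""
    return home_equity(house_value, mortgage_balance) < 0

# Because the mortgage debt is fixed, a fall in house value is felt
# entirely in equity, and can flip it negative:
value_before, value_after, mortgage = 250_000, 175_000, 200_000
print(home_equity(value_before, mortgage))    # 50000
print(home_equity(value_after, mortgage))     # -25000
print(is_underwater(value_after, mortgage))   # True
```

A 30 percent fall in value here wipes out the household's entire equity and leaves it owing 25,000 more than the home is worth, which is the shock the text describes for households whose net wealth is almost entirely in their homes.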


Responses to a heavily damaged financial sector have varied. In the United States, the Federal Reserve relieved banks of large amounts of their nonperforming mortgage securities and the credit default swap insurance on them; this action simply shifted these liabilities from incumbent bank investors to the Federal Reserve. The liability for these toxic assets continued to constitute a claim on future national output and a drag on recovery. Similarly, the U.S. Treasury launched its "too-big-to-fail" program, bailing out only the largest U.S. banks and transferring liability from private investors to taxpayers. Neither policy helped to restore damaged household or bank balance sheets.

In Japan, house prices peaked in the fall of 1990 and fell 25 percent within two years. Nonperforming loans continued to escalate throughout the decade, and house prices had fallen 65 percent by 2004. Banks actually expanded their loans to existing borrowers to allow them to continue meeting their payment obligations, effectively papering over the de facto losses of bank investors.

This allowed Japanese banks to stretch out their loan and equity losses from 1993 to 2004. The consequence of this policy, however, was that the Japanese banks' focus on protecting the return to their incumbent investors left them ill-prepared to provide an undiluted return on new loans funded by new bank investment capital. Total lending by private financial institutions fell at an annual rate of 1.7 percent between 1992 and 2007. This fifteen-year decline in lending is surely an important factor in Japan's sluggish economic growth over the period.

BANKRUPTCY AND DEFAULT AS A BALANCE SHEET REPAIR AND REBOOT PROCESS

Loan losses of 10 to 15 percent of gross domestic product (GDP) are not unusual in serious banking crises. For example, in Sweden, from 1990 to 1994, loan losses amounted to 10.6 percent of GDP. At many banks losses significantly exceeded capital. In these cases, support for the interests of existing shareholders and bondholders reduces the incentive of potential new capital investors to recapitalize the banking sector, and the economy suffers from lack of lending. There are benefits to requiring the losses to be borne by the incumbent shareholders and bondholders. If all bad loans are written down, the losses may wipe out a bank’s capital and stockholders’ equity, and bondholders are forced to take losses until liabilities equal assets. But at that point the bank’s balance sheet is clean. When new investors provide new capital for the bank, that capital doesn’t need to be applied to fill in the holes in the balance sheet, and the investment return from new earnings isn’t diluted by claims on it from previous shareholders. The alternative, in which loan losses aren’t fully recognized, is much less favorable to new capital investors. If previous shareholders are protected from loss, they too have a claim on the yield from capital investment by the new investors, which discourages new investment flows.

Requiring loan losses to be borne by incumbent shareholders helps restore the financial system to health and revive its capacity to lend. In Sweden the loan losses were in many cases borne by the shareholders. The result was that loan losses fell sharply after they spiked from 1991 to 1993. By 1994 they were only slightly above their more normal levels prior to the crisis. Once the bad loans were written off and new capital was raised, banks began lending again. Bank lending bottomed out at the end of 1994 and rose slowly until 1999, when it began a sharp rise that continued unabated for ten years. When banks recapitalize through private markets, with new capital going to restructured balance sheets unencumbered by the claims of past investors, the consequence is to greatly facilitate recovery and restoration of growth in the economy. Swedish stock market and home prices increased sharply once lending recovered. House price increases helped to restore the damaged balance sheets of households. Although the 29.2 percent fall in house prices in Stockholm was almost as great as the U.S. national decline, house prices recovered to their pre-crisis peak by 1998, only five years after the trough. Recovery in the stock market was faster yet. The Swedish response was very different from that of the United States and Japan. It was much swifter, beginning in 1992, with failing banks required to recognize losses and recapitalize. The contrast with Japan, where bank losses were covered up, is stark.
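The loss-absorption sequence described here (losses first wipe out shareholders' equity, then bondholders are written down until liabilities equal assets) can be illustrated with a short sketch. The bank and all of its figures are entirely hypothetical, chosen only to show the mechanics.

```python
# Illustrative only: how loan losses cascade through a stylized bank
# balance sheet, first wiping out shareholder equity and then forcing
# bondholders to absorb losses until liabilities again equal assets.
# All figures are hypothetical.

def absorb_losses(assets, equity, bonds, deposits, loan_losses):
    """Return the balance sheet after writing down bad loans."""
    assets -= loan_losses
    equity_loss = min(equity, loan_losses)       # shareholders take the first hit
    equity -= equity_loss
    bond_loss = min(bonds, loan_losses - equity_loss)  # then bondholders
    bonds -= bond_loss
    return {"assets": assets, "equity": equity, "bonds": bonds, "deposits": deposits}

# A bank with 100 in loans, funded by 8 equity, 12 bonds, and 80 deposits,
# suffers 15 in loan losses: equity is wiped out and bondholders lose 7,
# leaving a clean balance sheet of 85 in assets against 85 in claims.
print(absorb_losses(assets=100, equity=8, bonds=12, deposits=80, loan_losses=15))
```

Once the write-down is complete, new capital raised from investors goes entirely toward new lending rather than toward filling holes left by past losses, which is the incentive the passage above describes.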



LEARNING FROM MARKET CURRENCY DEPRECIATION: FINLAND, THAILAND, AND ICELAND

Data from Finland (1990-1993), Thailand (1994-2003), Iceland (2007-2010), and other countries demonstrate that smaller countries with flexible international exchange rates recover more quickly from balance sheet crises than those whose currency exchange rates cannot freely adjust. In these economies the original excesses were a consequence of large inflows of foreign capital into fixed investment that ultimately reversed. In each case the capital flow reversal led to currency depreciation that initiated a period of export-led economic growth, which aided the balance sheet recovery in a way that government borrowing cannot. Most developed countries that experience a financial crisis also face a surge in government deficits soon afterward. But many of the countries where those deficits have persisted have also had poor growth records, whereas those whose deficits declined after the initial surge experienced more rapid growth. In Finland, Thailand, and Iceland growth of government expenditures—so-called fiscal stimulus—was not a part of the recoveries. In the United States, the United Kingdom, and Japan deficit-financed government spending has lately been the norm. Finnish deficits soared after their financial crisis, from a deficit of .4 percent of GDP in 1991 to a deficit of 9 percent of GDP in 1993. But real government expenditures were lower in each year between 1992 and 1996 than they were at their peak in 1991, during the middle of the crisis. By 1997 the deficit was brought down to 1.6 percent of GDP, well below the economy’s growth rate, so total government debt began to decline as a percentage


of GDP. During the first five years of the Finnish recovery, the GDP growth rate averaged 4.2 percent per year, and in the first ten years from the bottom of the depression, growth averaged 3.7 percent per year. Similar performance was observed for Thailand and Iceland. The Finnish experience stands in stark contrast to the Japanese experience. Japanese government deficits grew rapidly after the country’s financial crisis in 1997 and have continued at an elevated level for 15 years. Central government debt has grown from 49.5 percent of GDP in 1997 to 147.8 percent of GDP in 2012. Annual deficits averaged 6.8 percent of GDP during this period, while the growth rate of GDP averaged only .5 percent per year. In Finland, the growth rate during the first 15 years after its financial crisis averaged 3.6 percent per year, and the government ran, on average, a small surplus. In the United Kingdom, deficits peaked at 11.4 percent of GDP in 2009 and remained elevated at 8.3 percent of GDP in 2011. With significant deficit spending and an annual increase in government expenditures of 4.6 percent since the peak output in the first quarter of 2008, the growth rate in the United Kingdom has been only .9 percent per year.

CONCLUSIONS

In periods of balance sheet crises, monetary policy stimulus is not effective in countering the downturn or in getting the economy to grow quickly again. In many smaller countries, however, flexible exchange rates have helped to counter the balance sheet crisis by stimulating net exports and increasing economic growth. No recovery scenario is painless. The losses are real. But when the losses are taken quickly the economy rebounds, and when they are prolonged, so is the economic stagnation. Elimination of the balance sheet stress is crucial to a relatively rapid recovery.
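The deficit-and-growth arithmetic running through these country comparisons can be sketched numerically: nominal debt rises by the deficit while GDP grows, so the debt-to-GDP ratio can fall even with continued red ink when the deficit is small relative to growth. The parameter values below are illustrative round numbers, not the historical figures cited above.

```python
# A stylized sketch (not the authors' model) of debt-ratio dynamics:
# each year nominal debt grows by the deficit, while nominal GDP grows
# at rate g, so debt/GDP evolves as d' = (d + deficit) / (1 + g).
# Parameter values are hypothetical.

def debt_ratio_path(d0, deficit_ratio, growth, years):
    """Evolve debt/GDP given a constant deficit (share of GDP) and growth rate."""
    d = d0
    path = [d]
    for _ in range(years):
        d = (d + deficit_ratio) / (1 + growth)
        path.append(d)
    return path

# Debt at 60% of GDP, a 1.6%-of-GDP deficit, 4% nominal growth: the ratio falls.
consolidating = debt_ratio_path(0.60, 0.016, 0.04, 10)
# Same starting point with a persistent 6.8% deficit and 0.5% growth: it climbs.
persistent = debt_ratio_path(0.60, 0.068, 0.005, 10)
print(f"after 10 years: consolidating {consolidating[-1]:.2f}, persistent {persistent[-1]:.2f}")
```

The sketch shows why a deficit held below the economy's growth rate lets the debt burden shrink, while large deficits paired with stagnant growth compound it.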

The Keynesian prescription of increasing government spending and reducing taxes is associated with prolonged recession and slow growth during recovery. The opposite prescription of revenue enhancements and government expenditure reductions that reduce fiscal deficits has sustained strong output growth and has generated foreign income to reduce accumulated debts. The Keynesian prescription, which has been followed most extensively by Japan, but also recently by the United States, has led to extreme public sector indebtedness and has been associated with a prolonged record of poor growth.

The data from a variety of countries point toward some clear conclusions. The evidence indicates flaws in the argument that government spending can substitute for private demand, or that tax cuts can stimulate the economy after a sharp downturn due to a balance sheet crisis. Numerous cases demonstrate a clear relationship between deficit spending and prolonged stagnation, and numerous examples of fiscal consolidation are associated with renewed growth.

*This lecture is primarily taken from a recently published paper; sections of that paper are included in this synopsis. Gjerstad, S., and Smith, V. L., “Balance Sheet Crises: Causes, Consequences, and Responses,” Cato Journal, Vol. 33, No. 3 (Fall 2013), pp. 437-470.



EXTREME MIXING: DISCOVERING THE LIMITATIONS OF A VITAL PHENOMENON

KATEPALLI SREENIVASAN
Professor of Physics, Mathematics, and Mechanical Engineering
Dean of Engineering
President, Polytechnic Institute of New York University


Katepalli R. Sreenivasan is an international leader on the nature of turbulent flows, including experiment, theory, and simulations. His expertise crosses the boundaries of physics, engineering, and mathematics. Sreenivasan belongs to the National Academy of Sciences and the National Academy of Engineering and is a Fellow of the American Academy of Arts and Sciences. His numerous honors include a Guggenheim Fellowship and the Otto Laporte Memorial Award of the American Physical Society. Sreenivasan has authored more than 240 research papers. He is interested in human rights, especially as they apply to scientists, and has held unique positions with respect to international science and science policy, especially in developing countries.

Every morning, many of us observe the phenomenon of “mixing” when we add cream to our coffee. If you pour your cream gently and the coffee in your cup is motionless, it may take several hours before the coffee mixes uniformly with the cream. However, by stirring with a spoon, you can mix coffee and cream in mere seconds.

Now let’s suppose you are a scientist conducting research on the relationship between coffee and cream. Your observations might lead to a string of questions:

• Is it more efficient to use a big spoon and stir slowly, or a small spoon and stir rapidly?
• How will the addition of sugar change the process?
• How will the temperature of the coffee influence mixing?
• Is it better for your spoon to be cold or hot?
• What if you used skim milk instead of cream?

Figure 1. (a) It is possible to assess quantitatively how a dye added to jet fluid eventually mixes with background fluid and becomes diluted by it. Mixing does not occur uniformly, and often the boundary between the background fluid and the jet fluid is quite sharp. (b) Results of a model of mixing in two dimensions. The model assumes an artificial velocity field that varies infinitely rapidly in time. The small-scale features obey the so-called “anomalous scaling,” which effectively emphasizes that fluctuations of different sizes behave differently and independently. Panels (a) and (b) show the essential difference between mixing in three dimensions and the results of this model.

The process of mixing two or more substances to create one homogeneous substance is of practical importance to the modern world. For example: In jet engines, we must mix fuel and air (Figure 1) in the right proportion and proper configuration to enable efficient combustion. In environmental sciences, we are concerned with how exhausts from a power plant might affect air quality in its neighborhood. In bio-geodynamics, we study the mixing of ocean currents, especially to understand the distribution of nutrients. In nuclear engineering, we are aware that mixing influences the fusion process tremendously. In astronomy, we may speculate how mixing within the sun can affect its surface supergranules.



The list is very long. Wherever you look, mixing plays critical roles in nature and industry. In many practical applications, we need to find precise answers to well-posed questions about mixing to understand whether any given operation is efficient or viable.

THE “CLOSURE” PROBLEM

Ultimately, mixing is a molecular phenomenon. For example, if your goal is combustion, then a molecule of oxidant must react with a molecule of fuel. We must bring these molecules into close proximity. We accomplish this by stirring the reactants together to generate a background velocity field. By and large, it is this velocity field that determines to what degree the substances are mixed. We are able to arrive at some exact results if the velocity field is simple—that is, constant, periodic, or two-dimensional. Unfortunately, we know hardly anything with precision when the background velocity field is turbulent and three-dimensional, as is the case in most practical circumstances. This is true even if the stirring algorithm is itself a simple one. The specific instance in which most progress has been made is the mixing of so-called passive scalars: the mixing of passive scalars does not influence the background velocity field, which is generated by an external and independent means such as the stirring of the spoon in the coffee-cream example. (For active scalars—involving huge density differences or chemical reactions—mixing itself alters the velocity field.) Though each realization of the passive scalar field obeys a linear equation and is usually subject to linear boundary conditions, turbulent velocity is stochastic in nature; in other words, statistical patterns are stable, but precise prediction of details is impossible. The act of performing averages introduces a system of equations that is “unclosed”; that is, new terms appear for which there are no equations. This “closure” problem is the source of the inherent difficulty in turbulence and turbulent mixing.

In addition, the complex geometry of the practical situations for which we need answers, as well as many other relevant features—such as shocks, density differences, rapid accelerations and decelerations, and high temperatures—makes the problem extremely complex. Our knowledge comes from a combination of model studies, basic dimensional and multiscaling arguments, and large-scale direct numerical simulations. Science has made much progress on all of these fronts, and some researchers are pursuing the so-called Reynolds-averaging of the relevant equations and large-eddy simulations, which directly compute the large scales and model the smaller scales. We have several such projects underway.

EXPERIMENTS AT HIGH RAYLEIGH NUMBERS

Hot and cold fluids routinely mix without reacting chemically, thus causing big changes in seasonal temperatures. In such instances, a surface is typically heated by an energy source. For example, the earth is heated by the sun. The fluid in the vicinity of the surface will become lighter and rise against gravity. Meanwhile, colder fluids will descend. The result is the atmospheric motion known as “convection.” Convection is a dominant form of motion in geophysical and astrophysical situations. On earth, clouds rise by convection. At the core of the sun, the heating from nuclear energy causes hot gases to rise to the surface. Convection is also important in technological applications. Nuclear power plants require cooling, as do computer chips. In flows caused by convection, the parameter of importance is the Rayleigh number. Related to the maximum temperature difference in the flow, the Rayleigh number is usually very high in the examples just cited. In solar convection, for example, the number registers at around 10^24 (one followed by twenty-four zeros).
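For reference, the standard definition of the Rayleigh number is Ra = g * beta * dT * L^3 / (nu * kappa), where beta is the thermal expansion coefficient, nu the kinematic viscosity, and kappa the thermal diffusivity. The sketch below evaluates it with approximate properties of room-temperature air, chosen purely for illustration:

```python
# Evaluate the standard Rayleigh number, Ra = g * beta * dT * L**3 / (nu * kappa).
# The fluid properties below are approximate values for air at room
# temperature and are used purely for illustration.

def rayleigh(g, beta, delta_t, length, nu, kappa):
    """Rayleigh number for a fluid layer of depth `length` heated from below."""
    return g * beta * delta_t * length**3 / (nu * kappa)

ra = rayleigh(
    g=9.81,        # gravitational acceleration, m/s^2
    beta=3.4e-3,   # thermal expansion coefficient of air, 1/K
    delta_t=10.0,  # temperature difference across the layer, K
    length=1.0,    # depth of the fluid layer, m
    nu=1.5e-5,     # kinematic viscosity, m^2/s
    kappa=2.1e-5,  # thermal diffusivity, m^2/s
)
print(f"Ra ~ {ra:.1e}")
```

Even this modest meter-scale example yields Ra on the order of 10^9, which underscores how extreme the solar value of 10^24 is compared with anything reachable in a laboratory.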


The highest Rayleigh number ever attained in a laboratory is 10^17 (one followed by seventeen zeros). Set in the year 2000, this record is unlikely to be broken soon. But we have finished the design of an experiment that will again take us to quite high Rayleigh numbers. In addition, the experiment will make direct connections to other domains. For example, rotation is important in all geophysical flows, and Rayleigh-Taylor turbulence is found whenever a fluid of one density accelerates into another of a different density, as in supernovas, inertial confinement fusion, and oil spills. The simpler case of the Kelvin-Helmholtz instability is shown in Figure 2.

DIRECT NUMERICAL SIMULATIONS OF EXTREME MIXING

Experiments are not the only avenue to study mixing. Modern computational resources enable us to simulate mixing under realistic conditions. The computational resources needed become more and more demanding with increases of the Reynolds number, which is the ratio of inertial forces to viscous forces. The Reynolds number dictates the range of scales in a variety of turbulent flows. The range of temporal and spatial scales is thought to grow as the ¾-power of the Reynolds number, but there are recent indications that this power is even larger. Thus, it becomes a major challenge to achieve fidelity in resolving the smallest scales while keeping the computational domain large enough to contain at least a few of the largest scales. There are two reasons to push for computations at the highest possible Reynolds numbers:

1. In practice, realistic conditions most often correspond to high Reynolds numbers.
2. Most phenomenological knowledge, which provides the foundation of our qualitative and quantitative understanding, is based on the assumption of high Reynolds numbers.

Phenomenological knowledge begins with the premise that all reality consists only of objects and events as perceived by human experience. Testing the veracity of this knowledge is quite critical for the progress of science. This becomes more readily attainable as innovation results in faster and larger computers and produces efficient algorithms to work in an environment of millions of processing elements. The challenge is to ensure that the computations avoid the trap of getting bogged down by inter-processor communications. To solve the problems of extreme mixing, researchers need the largest and fastest computers possible. Texas A&M is positioned to acquire a petascale computer, which can handle more than one quadrillion operations per second. This would help us to push the frontiers of the field as never before.

Figure 2. The panels of the figure form a sequence in time from top to bottom in each column, the right column following the left. The thin bright interface (top left panel) initially separates two fluid streams which are moving in opposite directions. It is made visible by introducing a thin layer of dye. As the fluid layers keep moving, in time this interface undulates, rolls up, shows complex interactions between neighboring rolls, and eventually becomes turbulent even as it grows in thickness. This illustrates a mechanism (known after Lord Kelvin and Hermann von Helmholtz) by which two initially segregated fluids mix together.
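The ¾-power scaling mentioned above has a direct computational consequence, which the following sketch illustrates under the classical assumption that the ratio of largest to smallest scales grows as Re^(3/4), so that a fully resolved three-dimensional grid grows roughly as Re^(9/4):

```python
# Rough estimate of the cost of direct numerical simulation (DNS),
# assuming the classical picture in which the ratio of the largest
# to the smallest length scales grows as Re^(3/4).

def scale_ratio(reynolds: float) -> float:
    """Ratio of largest to smallest length scales, L/eta ~ Re^(3/4)."""
    return reynolds ** 0.75

def grid_points_3d(reynolds: float) -> float:
    """Grid points needed to resolve all scales in 3D: (L/eta)^3 ~ Re^(9/4)."""
    return scale_ratio(reynolds) ** 3

for re in (1e4, 1e6, 1e8):
    print(f"Re = {re:.0e}: L/eta ~ {scale_ratio(re):.1e}, "
          f"3D grid points ~ {grid_points_3d(re):.1e}")
```

A hundredfold increase in Reynolds number multiplies the required grid by roughly a factor of ten billion, which is why progress in this field tracks the arrival of each new generation of supercomputers.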

IN COLLABORATION WITH:
Rodney Bowersox, professor and department head, Department of Mechanical Engineering, Dwight Look College of Engineering
Sharath Girimaji, professor, Department of Mechanical Engineering, Dwight Look College of Engineering
Adonios Karpetis, associate professor, Department of Mechanical Engineering, Dwight Look College of Engineering
Diego Donzis, assistant professor, Department of Mechanical Engineering, Dwight Look College of Engineering
Devesh Ranjan, assistant professor, Department of Mechanical Engineering, Dwight Look College of Engineering



2013-2014 FACULTY FELLOWS

LEIF ANDERSSON

Among the world’s most renowned scholars in the genomic and molecular study of domestic animals, Leif Andersson has carved a scientific niche by approaching farm animals as model organisms. As group leader and professor at Uppsala University in Sweden, Andersson analyzes interbreeding among species of farm animals—such as between wild boars and domestic pigs—to identify the genes and mutations that affect specific traits. He also investigates how the mutations may alter the function and regulation of the genes. Andersson and his research team compare genomes from many species of domestic animals to discover the molecular mechanisms and underlying traits that are important to human and veterinary medicine. They study the genetic background of phenotypic traits such as gaits in horses as well as disorders such as cancer, metabolic syndrome, and inflammatory diseases. Their discoveries provide insights in genetics, animal breeding, evolution, and biomedical research.

“TIAS provides an expanded opportunity for our students, especially our graduate students, to rub shoulders with the elite in their field of study.”

He earned his doctorate from the Swedish University of Agricultural Sciences in 1984. A member of the U.S. National Academy of Sciences and the Royal Swedish Academy of Sciences, Andersson has received the Thureus Prize in Natural History and Medicine from the Royal Society of Sciences, the Linnaeus Prize in Zoology from the Royal Physiographic Society of Lund, the Hilda and Alfred Eriksson Prize in Medicine from the Royal Swedish Academy of Sciences, and the Olof Rudbeck Prize from the Uppsala Medical Society. He has published more than 330 scientific articles, received six patents, and filed applications for two more. He has mentored twenty-five students to doctoral or professional degrees. While serving as a TIAS Faculty Fellow, Andersson will collaborate with researchers in the College of Veterinary Medicine & Biomedical Sciences.

A professor in functional genomics in the Department of Medical Biochemistry and Microbiology at Uppsala University, Andersson also serves as a guest professor in molecular animal genetics at the Swedish University of Agricultural Sciences in Uppsala.

– James E. Womack, Wolf Prize in Agriculture, 2001



SATYA ATLURI

Each spring, the International Conference on Computational & Experimental Engineering and Sciences honors the legacy of professor and researcher Satya Atluri by awarding the Satya N. Atluri ICCES Medal to an individual who has made a significant impact on engineering, the sciences, commerce, and society. The annual award recognizes Atluri’s influence on aerospace and mechanics, as well as his founding of the conference in 1986. Atluri’s widely cited research reveals the workings of complex biological and mechanical systems. Over the last four decades, his work has received support from the National Science Foundation, the U.S. armed forces, the National Aeronautics and Space Administration, the Federal Aviation Administration, the U.S. Department of Energy, and the Nuclear Regulatory Commission, among others. Currently, Atluri is improving the safety of helicopters by developing a mathematical model to better predict the failure of rotors and other major components, and is conducting research on integrated materials science, mathematics, modeling, and engineering of the materials genome. Born in India and a citizen of the United States, Atluri is a Distinguished Professor in the Department of Mechanical and Aerospace Engineering in the Henry Samueli School of Engineering at the University of California, Irvine. A Distinguished Alumnus of the Indian Institute of Science, Atluri earned his


doctorate from the Massachusetts Institute of Technology in 1969 and holds four honorary doctorates. He also has served on the faculties of the University of Washington, the Georgia Institute of Technology, the Massachusetts Institute of Technology, and the University of California, Los Angeles. Atluri is a fellow of the American Academy of Mechanics, the American Institute of Aeronautics and Astronautics, the American Society of Mechanical Engineers, the Aeronautical Society of India, and the Chinese Society of Theoretical and Applied Mechanics, as well as an honorary fellow of the International Congress on Fracture. Atluri has authored or edited 45 monographs and has published more than 800 archival papers in mechanical and aerospace engineering. He has mentored more than 375 doctoral students, postdoctoral scholars, visiting scholars, and visiting professors; many now serve as leaders in governments, industries, and universities around the world. As a TIAS Faculty Fellow, Atluri will collaborate with faculty and students in the Dwight Look College of Engineering’s Department of Aerospace Engineering.

CLAUDE BOUCHARD

For the last thirty-five years, Claude Bouchard has studied the genetics of obesity and the diseases commonly associated with obesity, including type 2 diabetes and hypertension.

Bouchard earned his doctorate in physical anthropology and population genetics from The University of Texas in 1977. He did postgraduate work in Cologne, Germany, and at the University of Montreal.

Bouchard has also documented how genetics influence the ability of humans to adapt to regular exercise in terms of cardiorespiratory fitness and the changes experienced with regular exercise in risks for cardiovascular disease and diabetes. He obtained his results from several experimental twin studies and one large cohort of families exposed to a standardized exercise program. Bouchard’s research relies on physiological, metabolic, and genomics technologies.

Bouchard is a fellow of the American Association for the Advancement of Science and the American Heart Association. He is also a foreign member of the Belgian Royal Academy of Medicine.

In recognition of his work and its influence, the Canadian government selected Bouchard as a Member of the Order of Canada in 2001. While serving as the executive director of Louisiana State University’s Pennington Biomedical Research Center from 1999 to 2010, Bouchard established a human genomics laboratory. Today, as a professor and the center’s John W. Barton, Sr. Endowed Chair in Genetics and Nutrition, Bouchard conducts his full-time research activities in the human genomics laboratory. His current research seeks ways to predict the ability of humans to respond to regular exercise, the risks of an adverse response to an exercise program, and the conditions under which personalized exercise medicine and nutritional recommendations could become reality.

Over the past decade, he has discovered several genes and sequence variants contributing to the understanding of human variations in fat retention, energy metabolism, and responsiveness to regular exercise—particularly for cardiorespiratory endurance and insulin sensitivity.

Having published more than 1,000 articles in peer-reviewed journals, Bouchard has served as the senior advisor for fifteen doctoral students and has mentored more than thirty postdoctoral fellows in his laboratory over the years. As a TIAS Faculty Fellow, Bouchard will collaborate with faculty and students in the College of Education & Human Development to stimulate collaborative research among faculty and graduate students in kinesiology, nutrition, and genomics.



CHRISTODOULOS A. FLOUDAS

A world-renowned authority in mathematical modeling and the optimization of complex systems, Christodoulos A. Floudas conducts research in chemical process systems engineering, which is found at the intersection of chemical engineering, applied mathematics, and operations research. During a career that spans four decades, Floudas has developed useful tools for optimization and found novel pathways for energy conversion and conservation. The scope of his research includes chemical process synthesis and design, process control and operations, discrete-continuous nonlinear optimization, local and global optimization, and computational chemistry and molecular biology. Floudas earned his doctorate in chemical engineering from Carnegie Mellon University in 1986 and joined the faculty at Princeton University as an assistant professor later that year. Today, Floudas is a professor of chemical and biological engineering at Princeton University and has served since 2007 as the Stephen C. Macaleer ‘63 Professor in Engineering and Applied Science. In addition, he has been a visiting professor at England’s Imperial College, the Swiss Federal Institute of Technology, the University of Vienna, the Chemical Process Engineering Research Institute in Greece, and the University of Minnesota.

Floudas is the recipient of numerous awards that include the 2001 American Institute of Chemical Engineers (AIChE) Professional Progress Award for Outstanding Progress in Chemical Engineering; the 2006 AIChE Computing in Chemical Engineering Award; the 2007 Graduate Mentoring Award, Princeton University; One Thousand Global Experts, China 2012-15; SIAM Fellow, 2013; AIChE Fellow, 2013; and the National Award and HELORS Gold Medal, 2013. Floudas has authored two graduate textbooks, Nonlinear Mixed-Integer Optimization, published by Oxford University Press in 1995, and Deterministic Global Optimization, published by Kluwer Academic Publishers in 2000. He is the chief co-editor of The Encyclopedia of Optimization, published by Kluwer Academic Publishers in 2001, with a second edition published by Springer Science+Business Media in 2008.

In 2011, Floudas became a member of the National Academy of Engineering. He is also a member of the Biophysical Society, the Operations Research Society of America, the Mathematical Programming Society, and the Society for Industrial and Applied Mathematics.

As a TIAS Faculty Fellow, Floudas will collaborate with faculty and students in the Dwight Look College of Engineering’s Artie McFerrin Department of Chemical Engineering.

ROY J. GLAUBER

In 1963, U.S. theoretical physicist Roy J. Glauber published his historic paper, “The Quantum Theory of Optical Coherence,” which explained the fundamental characteristics of different types of light—from lasers to light bulbs. Glauber’s theory of photodetection is highly influential in quantum optics and led to his sharing the 2005 Nobel Prize in physics “for his contributions to the quantum theory of optical coherence.” From the beginning of his career, Glauber has worked at the vanguard of modern physics. At age eighteen, as a junior at Harvard University during World War II, Glauber joined the Manhattan Project at Los Alamos, N.M., where he helped to calculate the critical mass for the first atom bombs.

He returned to Harvard after the war and earned his doctorate in 1949. His mentors include another Nobel laureate, theoretical physicist Julian Schwinger, and the scientific leader of the Manhattan Project, theoretical physicist J. Robert Oppenheimer. Today, as Harvard’s Mallinckrodt Professor of Physics, Glauber focuses his research on solving problems in quantum optics, a field that studies the interactions between light and matter. He continues to work in high-energy collision theory and to study the statistical correlation of particles produced in high-energy reactions.

Specific topics of his current research include: the quantum mechanical behavior of trapped wave packets; interactions of light with trapped ions; atom counting, which deals with the statistical properties of free atom beams and their measurement; algebraic methods for dealing with fermion statistics; coherence and correlations of bosonic atoms near the Bose–Einstein condensation; the theory of continuously monitored photon counting and its reaction on quantum sources; the fundamental nature of “quantum jumps,” which are abrupt transitions of an electron, atom, or molecule from one quantum state to another; the resonant transport of multiple particles produced in high-energy collisions; and the multiple diffraction model of proton-proton and proton-antiproton scattering.



ROGER E. HOWE

Over the last five decades, Yale University’s Roger E. Howe has expanded the frontiers of mathematics while also working to better prepare new generations of mathematicians. As a scholar, Howe is best known for his breakthroughs in representation theory, which allows mathematicians to translate problems from abstract algebra into linear algebra, thus making the problems easier to manage. Howe first introduced the concept of the reductive dual pair—often referred to as a “Howe pair”—in a preprint during the 1970s, followed by a formal paper in 1989. This and other significant contributions to mathematics earned Howe membership in the National Academy of Sciences in 1994. Today, as the William R. Kenan, Jr. Professor of Mathematics, Howe continues to work on representation theory, as well as other applications of symmetry, including harmonic analysis, automorphic forms, and invariant theory. Howe received his doctorate in 1969 from the University of California, Berkeley, and taught at the State University of New York at Stony Brook from 1969-74. During that time, he also was a member of the Institute for Advanced Study and served as a research associate at the University of Bonn in Germany.


After joining the faculty of Yale University in 1974, Howe served as the mathematics department's director of graduate studies from 1982-83 and 1986-87 and as department chair from 1992-95. He became Yale's first Frederick Phineas Rose Professor in Mathematics in 1997.

He belongs to the American Academy of Arts and Sciences and the Connecticut Academy of Science and Engineering and was a fellow of the Japan Society for the Promotion of Science and the Institute for Advanced Studies at the Hebrew University of Jerusalem. In 2006, Howe received the American Mathematical Society Award for Distinguished Public Service for his "multifaceted contributions to mathematics and to mathematics education." He became a fellow of the American Mathematical Society in 2012.

As a TIAS Faculty Fellow, Howe will collaborate with faculty and students in the College of Education & Human Development's Department of Teaching, Learning & Culture.


ROBERT S. LEVINE A highly regarded leader in American literary studies, Robert S. Levine has been an influential force in American and African-American literature for thirty years, and more recently has contributed important new work to the study of the literature of the Americas.

Levine received his doctorate from Stanford University in 1981. He has received fellowships from the John Simon Guggenheim Memorial Foundation and the National Endowment for the Humanities. His book Martin Delany, Frederick Douglass, and the Politics of Representative Identity won an Outstanding Book Award from Choice magazine in 1997. In addition to his book on Douglass, he has published Conspiracy and Romance (1989) with Cambridge University Press and Dislocating Race and Nation (2006) with the University of North Carolina Press. His work includes more than fifty-five peer-reviewed articles and five edited collections of essays. Levine is currently under contract with Harvard University Press to write a history of Frederick Douglass's autobiographical writings and cultural legacy, and he is editing two volumes of Douglass's writings for Yale University Press.

His scholarly editions of Herman Melville, Nathaniel Hawthorne, Martin Delany, William Wells Brown, and Harriet Beecher Stowe have brought their extensive writings to wider audiences.

Levine is the general editor of the five-volume Norton Anthology of American Literature, which has been read by hundreds of thousands of students. He is a professor of English and a Distinguished University Professor at the University of Maryland, where he was the founding director of the Center for Literary and Comparative Studies. He sits on a number of editorial boards, including American Literary History; Leviathan: A Journal of Melville Studies; Nathaniel Hawthorne Review; and J19: The Journal for Nineteenth-Century Americanists.

Well known as an enthusiastic teacher, Levine presented a TIAS Eminent Scholar Lecture, "Frederick Douglass, Lincoln, and the Civil War," in October 2013. As a TIAS Faculty Fellow, he is coordinating his other presentations on campus through the College of Liberal Arts' Department of English.



WOLFGANG SCHLEICH With research that extends across several areas of physics, Wolfgang Peter Schleich's major scientific interests lie where theoretical and experimental quantum optics intersect with fundamental questions of quantum mechanics, general relativity, number theory, statistical physics, and non-linear dynamics. Best known for his research into the physics of phase space, in particular Wigner functions, cold atoms, and the interface with solid-state physics, Schleich also conducts tests of general relativity using cold atoms, specifically Bose-Einstein condensates. Schleich earned his doctorate from the Ludwig Maximilian University of Munich in 1984. He has served as a chaired professor of theoretical physics at Germany's Ulm University since 1991, where he directs the Institute for Quantum Physics, and he was vice-chairman of the university council from 2000 to 2003. Recently, he established and secured funding for the Institute for Integrated Quantum Science and Technology, a joint venture of Ulm University and the University of Stuttgart. He is a member of the German National Academy of Sciences Leopoldina, the Academy of Europe, the Austrian Academy of Sciences, the Royal Danish Academy, and the Heidelberg Academy of Sciences and Humanities. Schleich has received numerous awards, among them the Leibniz Prize in 1995, the Max Planck Research Award in 2002, and the Willis E. Lamb Award for Laser Science and Quantum Optics in 2008. He is a fellow of the American Physical Society, the Institute of Physics, the European Optical Society, and the Optical Society of America.

He has organized more than thirty international conferences on quantum optics, was editor of Optics Communications for fifteen years and co-editor of the New Journal of Physics, and served as a divisional associate editor for Physical Review Letters for six years.

While serving as a TIAS Faculty Fellow, Schleich will promote interdisciplinary and collaborative research in quantum science and engineering. He will interact with researchers and students in the Institute for Quantum Science and Engineering, which spans five colleges (agriculture, engineering, liberal arts, science, and veterinary medicine) as well as numerous departments, including chemistry, mathematics, and ecosystems management. These collaborations will focus on applying quantum techniques to problems of interest to multidisciplinary fields within Texas A&M. Planned projects include developing anthrax detectors; creating sky lasers to detect biochemical pathogens; building new magnetometers to detect submarines; and working with sub-diffraction-limited imaging as well as with high-power and XUV laser systems for generating femtosecond pulses.


PETER J. STANG Working with chemical systems built from molecular components, Peter J. Stang has advanced organic chemistry for five decades. In essence, Stang and his team are molecular architects who rearrange the building blocks of chemistry to create new and better products for advanced medicine, information storage, and energy. In recognition of his achievements as a pioneer in supramolecular chemistry, Stang received the National Medal of Science in 2011, followed by the American Chemical Society's (ACS) 2013 Priestley Medal. Stang earned his doctorate in 1966 from the University of California, Berkeley, and subsequently joined Princeton University as a National Institutes of Health Postdoctoral Fellow. He joined the University of Utah in 1969; during his forty-three years there, he rose from assistant professor to dean of the College of Science. Today, he is a Distinguished Professor of Chemistry at the University of Utah.

His major scientific contributions began in the late 1960s with his synthesis of vinyl trifluoromethanesulfonates, which are used to join two completely different hydrocarbons and thus create a chemical reaction. In the 1990s, his synthesis of molecular squares led to the development of several functional, self-assembled chemical systems. Most notably, Stang's method of preparing cyclic structures containing metal-complex units, known as the directional bonding approach, has enabled his team to synthesize elaborate molecules such as tetrahedral frameworks, trigonal prisms, and cage-like metallocyclic dodecahedrons. This allows the construction of intricate molecular frameworks, ultimately leading to the rapid assembly of nanoscale molecular devices for broad-based, practical applications.

Stang is a member of the American Academy of Arts & Sciences, the National Academy of Sciences, the Chinese Academy of Sciences, and the Hungarian Academy of Sciences. He received the Fred Basolo Medal for Outstanding Research in Inorganic Chemistry in 2009, the Paul G. Gassman Distinguished Service Award of the American Chemical Society's Division of Organic Chemistry, and the F.A. Cotton Medal for Excellence in Chemical Research in 2010. He has more than 500 publications and more than 23,500 citations to his credit and has mentored nearly 100 graduate students and postdoctoral fellows. While serving as a TIAS Faculty Fellow, Stang will collaborate with faculty in the College of Science's Department of Chemistry.



FINANCIAL OVERVIEW For initial funding, the Texas A&M University Institute for Advanced Study received a five-year commitment of $5.2 million from Texas A&M University System Chancellor John Sharp, plus a five-year commitment from Texas A&M University of $3.7 million, which includes $1.2 million drawn from funds provided by Herman F. Heep and Minnie Bell Heep through the Texas A&M University Foundation. In addition to this $8.9 million, the Texas A&M colleges provide 30 percent matching funds for the Fellows' salaries and pay expenses associated with the Fellows' research, housing, and travel. Significant financial support is anticipated from the Texas A&M Foundation's forthcoming capital campaign, as well as from members of the TIAS Legacy Society.

TOTAL COMMITTED FUNDS - 5 YEARS: $11 MILLION
The Texas A&M University System (Chancellor John Sharp): $5.2 million
Texas A&M University: $5.8 million

TIAS EXPENDITURES - 5 YEARS: $11 MILLION
TIAS Faculty Fellows: $5 million
TIAS Operations: $2.7 million
Heep Graduate Student Fellowships: $1.2 million
College Faculty Fellow Support: $2.1 million


CHARTING THE WAY FORWARD Great academic institutions like Texas A&M University are built upon exceptional scholarship. The Texas A&M University Institute for Advanced Study (TIAS) is designed to serve as the cornerstone securing the University's future as a top-tier institution of learning and research. The success of TIAS will depend upon a substantial endowment built through the combined efforts of the Texas A&M Capital Campaign, the TIAS Advocates, and the TIAS Legacy Society.

Capital Campaign

Texas A&M University is in the early stages of a multibillion-dollar comprehensive campaign, and it is vital that TIAS be at the forefront of this fund-raising effort. Texas A&M receives less than one-quarter of its total budget from the state's general revenues, and this is unlikely to change. Only by continuing to build partnerships with its private supporters can Texas A&M continue to provide the quality education, research, and service programs that distinguish the University as a top-tier institution. Contributions to TIAS through the Texas A&M Foundation will strengthen the Institute's ability to recruit highly renowned Faculty Fellows and enhance the University's global reputation.

TIAS Advocates

TIAS Advocates, a group composed of influential Texas A&M faculty, former presidents, chancellors, former students, and distinguished friends of the University, help to advance the Institute's goals and position the Institute within the Texas A&M Capital Campaign. TIAS Advocates promote the Institute to others who want to establish a strong financial foundation for the TIAS mission.

The TIAS Legacy Society

Former students, faculty, staff, and friends of Texas A&M become members of the TIAS Legacy Society by making an estate gift to TIAS through the Texas A&M Foundation.

Four current faculty members have contributed generous estate gifts to secure the endowment of TIAS during its first year of operation. Through their gifts, these initial members of the TIAS Legacy Society have demonstrated their conviction that the Institute is crucial to the future of the University:
• Janet Bluemel, professor, Department of Chemistry, College of Science
• John Gladysz, distinguished professor and holder of the Dow Chair in Chemical Invention, Department of Chemistry, College of Science
• John Junkins, distinguished professor and holder of the Royce E. Wisenbaker Chair, Department of Aerospace Engineering, Dwight Look College of Engineering, with Elouise Junkins
• Ozden Ochoa, professor, Department of Mechanical Engineering, Dwight Look College of Engineering

Their financial contributions have helped to permanently underwrite TIAS and to support its mission for years to come.

"Because Texas A&M University is the state's first public institution of higher learning, it is especially significant that Texas A&M continues to lead through the creation of the Texas A&M University Institute for Advanced Study."

-Vernon L. Smith, Nobel Prize in Economic Sciences, 2002


Produced by the Division of Research


Texas A&M University Institute for Advanced Study
Texas A&M University
322 Liberal Arts and Humanities Building
3141 TAMU
College Station, TX 77843-3141
tias.tamu.edu

For inquiries, contact Clifford L. Fry, Ph.D., Associate Director, 979-458-5723, cfry@tamu.edu

