Hofstra Horizons - Fall 2011



president’s COLUMN


Each issue of Hofstra Horizons brings another opportunity to showcase the academic success of our faculty and the recognition it brings to the University. This publication, dedicated to citing some of the latest accomplishments achieved by our prestigious faculty, is also a reminder of the diverse array of expertise available within our community. In this special issue of Horizons, we also spotlight the life and work of two important faculty members who passed away unexpectedly in 2011.

Dr. Dana Brand, former chair of the English Department, was the author of The Spectator and the City in Nineteenth-Century American Literature (Cambridge, 1991) and numerous articles on topics in English, American, and French literature, philosophy, and film. He was also a personal essayist and the author of 2007’s Mets Fan, a collection of essays about his experiences as a baseball fan. He wrote a second book about baseball in 2009, titled The Last Days of Shea. He was an excellent scholar and a great addition to Hofstra’s literary community.

Dr. Vincent R. Brown, an unassumingly brilliant teacher and researcher, was an associate professor in the Psychology Department. He also served as program director in Perception, Action, and Cognition at the National Science Foundation. Dr. Brown was co-principal investigator, with Dr. Simona Doboli, associate professor of computer science at Hofstra, of a National Science Foundation grant on neural network models of individual and group brainstorming. Dr. Brand and Dr. Brown were recognized scholars as well as excellent teachers. Their life’s accomplishments are an inspiration. They will be greatly missed.

Congratulations to Assistant Professor Dilruba Ozmen-Ertekin, Professor J Bret Bennington and Professor Robert Leonard, whose good work is also highlighted in this issue. Your tremendous efforts exemplify Hofstra Pride. On behalf of the Hofstra faculty, administration, staff and students, I am pleased to recognize your accomplishments and wish you continued success.

Sincerely,

Stuart Rabinowitz
President

HOFSTRAhorizons Research and Scholarship at Hofstra University

table of contents

4 \ The World and Dana Brand
8 \ Remembering the Life Work of Vince Brown
12 \ Traffic Jams, Delays, and Mitigation Strategies
20 \ Confessions of a Statistical Paleontologist
Forensic Linguistics at Hofstra: From a Single Course to an International Institute

Cover photo courtesy of Thinkstock.com.

Stuart Rabinowitz, J.D., President
Herman A. Berliner, Ph.D., Provost and Senior Vice President for Academic Affairs
Sofia Kakoulidis, M.B.A., Associate Provost for Research and Sponsored Programs
Alice Diaz-Bonhomme, B.A., Assistant Provost for Research and Sponsored Programs

HOFSTRA HORIZONS is published semiannually in the fall and spring by the Office for Research and Sponsored Programs, 144 Hofstra University, Hempstead, New York 11549-1440. Each issue describes in lay language some of the many research and creative activities conducted at Hofstra. The conclusions and opinions expressed by the investigators and writers are their own and do not necessarily reflect University policy. ©2011 by Hofstra University in the United States. All rights reserved. No part of this publication may be reproduced without the consent of Hofstra University. Inquiries and requests for permission to reprint material should be addressed to: Editor, HOFSTRA HORIZONS, Office for Research and Sponsored Programs, 144 Hofstra University, Hempstead, New York 11549-1440. Telephone: 516-463-6810.

provost’s COLUMN


The Hofstra faculty are a diverse community of scholars who offer our students a broad and well-grounded perspective of the world. We are at once a community that identifies closely with a geographic region and assumes responsibilities to promote it, and a community that looks far beyond local boundaries to the national and international arenas. This issue of Hofstra Horizons highlights the work of three Hofstra faculty and pays tribute to the lives of two very special members of our community.

We were saddened by the loss of two colleagues – and friends – in 2011, Drs. Dana Brand and Vincent Brown. These outstanding scholars will be greatly missed. Dr. Brand, a professor of English and American literature, was highly respected by both students and colleagues. His principal scholarly work, The Spectator and the City in Nineteenth-Century American Literature, is an innovative study of the urban observer in transatlantic texts. An avid Mets fan, Dr. Brand will likely be remembered most for his publications on baseball, such as Mets Fan and The Last Days of Shea, as well as his Mets fan blog. Dr. Brown was an associate professor of psychology. He was a frequent collaborator on myriad subjects, including idea generation, the processing of word meanings, and animal behavior. He and his team of researchers were recognized by the National Science Foundation for their research on the cognitive process of brainstorming.

The article in this issue of Hofstra Horizons written by Dr. Dilruba Ozmen-Ertekin addresses the national problem of traffic congestion. Noting a national congestion cost of $115 billion, Dr. Ozmen-Ertekin assesses various simulation models that can be used to estimate the amount of delay, as well as strategies to mitigate traffic congestion. The article from Dr. J Bret Bennington, Hofstra professor of geology and resident paleontologist, discusses his statistical approach to fossil data through several research projects. For years, he has led undergraduate students on fossil-collecting field trips, and his recent research with former undergraduate student Christa Abatemarco allowed him to apply basic statistics to an analysis of dinosaur tracks preserved at Dinosaur State Park in Connecticut. Finally, the article by Professor Robert Leonard, director of the recently formalized graduate program in forensic linguistics and the Institute for Forensic Linguistics, Threat Assessment and Strategic Analysis, details Dr. Leonard’s pioneering role in the creation of both the program and the institute – which have allowed Hofstra to take the lead in forensic linguistics research and training.

Our goal at Hofstra is to continuously extend knowledge beyond that which is known and to prepare the next generation of leaders to do the same. Thanks to our outstanding faculty, we are able to meet this goal.

Sincerely,

Herman A. Berliner, Ph.D.
Provost and Senior Vice President for Academic Affairs


The World and Dana Brand

By Joseph Fichtelberg, Professor of English

Dana Brand, who died on May 25, 2011, at the age of 56, left a lasting impression on his colleagues, his students, and the profession he so well served. A resourceful and imaginative teacher, he was also a brilliant writer and a dedicated professional and leader. Dana’s chief joy was his family. His greatest delight was baseball. His greatest pleasure was the well-chosen word. He had the rare gift of combining these faculties in a seamless whole in which life, love, and language spoke to the same abiding truths.


Dana did his graduate work at Yale, studying with such luminaries as J. Hillis Miller and A. Bartlett Giamatti, the future commissioner of baseball. He came to Hofstra in 1989, was granted tenure in 1992, and in 1993 became chair of the English Department, a position he held for eight years. Dana guided the department through a significant expansion, overseeing a staff of close to 100. He helped shape a new literature curriculum and administered a diverse composition program. After heading the English Department, Dana chaired its Personnel Committee. An Americanist specializing in literature from the Civil War through the 20th century, he taught widely in HCLAS, New College, and Honors College.

Dana’s writing reveals a singular clarity of vision. His principal scholarly work, The Spectator and the City in Nineteenth-Century



American Literature (Cambridge University Press, 1991) – still in print – is a groundbreaking study of the role of the flaneur in transatlantic texts. As Dana observes in the opening pages, to understand the flaneur is to understand the emergence of modernity. The flaneur, or urban observer, as Charles Baudelaire defined him, “marvels at the eternal beauty and the amazing harmony of life in the capital cities, a harmony so providentially maintained amid the turmoil of human freedom” (Spectator 3). The flaneur, however, was not merely a passive observer of the urban scene. He was the presiding consciousness of a new relation to the world. The flaneur was the first to see in the chaos of modernity a new principle of order. In using the flaneur as the groundwork for his study, however, Dana went well beyond Baudelaire, who saw the figure as a European novelty. By contrast, Dana provides a


genealogy of the urban observer rooted in Renaissance London. That great city incited a wealth of commentary, from nondescript surveys of vital statistics to satirical portraits of its high life and low life. Writers captured high life in visits to Vauxhall Gardens and the Royal Exchange, where prosperity and power were on display. The low life found expression in a remarkable range of urban sketches, from confidence stories to scatological portraits of the city’s “mysteries.” Dana masterfully surveys this rich literature, which culminated in the great essayists of the 18th and 19th centuries. Addison and Steele’s Spectator, Dana argues, offered a way to order this sprawling scene: “They introduce the innovation of a powerful gaze that provides a spectator with a panoramic equanimity ... enjoy[ing] diversity without grossness, randomness without danger, amusing bustle of mild interest rather than terrifying chaos” (33). Mr. Spectator offered the controlling gaze of a new social regime, one in which a wealth of sensations and experiences could be organized with the right habit of mind. It was the brave new world of liberal capitalism served up as a feast of the senses.

In the 19th century, Dana shows, the focus and effects of this literature changed. Broad panoramas of urban life gave way to what he calls “the revival of the familiar essay,” a “highly personal style of writing” that allowed more immediate intimacy with the most diverse experience. That change was in part a response to a quantum shift in city life, which was becoming too vast, too vicious, too complicated to be captured in a single gaze. Other genres arose, including urban detective fiction and the new art of photography, to capture this elusive experience.

American flaneurs, Dana argues, responded to the new regime. Before Dana, no one had so strenuously argued that writers of the American Renaissance could or should be understood as flaneurs.
America was too rural, it was claimed, too Jeffersonian, too pastoral to allow its literature to be shaped by urban spectators. But America’s great flaneurs – Poe, Hawthorne, and, above all, Whitman – were no ordinary observers. In discussing Poe’s “The Murders in the Rue Morgue,”

for example, the world’s first mystery story, Dana shows that Poe eschewed the flaneur’s conventional serenity. Rather, Auguste Dupin was somewhat diabolical, a cerebral, sinister figure who understood the irrational forces driving urban life. Hawthorne, too, demonstrates the limitations of the flaneur’s power. In “Sights from a Steeple,” his narrator attempts to give a synoptic view of the town, only to confess his insufficiency. Hawthorne’s characteristic indirection and modesty betray the increasing opacity of the 19th-century city, yet those traits also suggest the reserve of an artist like Hawthorne, far from the centers of European modernity.

It was left to America’s greatest literary flaneur to demonstrate the true reach of a new modern art. For Dana, Walt Whitman was America’s preeminent urban observer, at home in an opera house or on a Broadway omnibus declaiming Shakespeare while he took in the crowd. Whitman reveled in urban spectacle, but, like Poe and Hawthorne, his art presented the flaneur with a difference. Whitman’s attempts to capture the city, Dana argued, rarely strike home. Too often, his New York portraits read like static tableaux, generic depictions of everyday scenes. There seemed to be a gap between the flaneur’s tools and the city’s stark demands, an absence of poignancy that made the landscape seem almost barren. But Whitman sensed this shortfall, and answered it through a new modernist design. In a compelling reading, Dana relates Whitman’s urban portraiture to photography, which left a deep impression on the poet:

Like taking a photograph, gazing, as Whitman describes it, does not necessarily



privilege any portion of the field of vision nor does it imply that the gazer is having any definable response to or understanding of what he or she observes. The eye that gazes, in a photograph or in an urban crowd, does not surrender its mystery or allow itself to be read. Yet this increases its fascination. As [Walter] Benjamin writes, “... The deeper the remoteness which a glance has to overcome, the stronger will be the spell that is apt to emanate from the gaze.” (166)

The oscillation between remoteness and haunting, clinical intimacy, evident in so many early photographs, shaped Whitman’s greatest urban poem, “Crossing Brooklyn Ferry.” Whitman accomplished what no other flaneur had been able to do: he made modernity personal.

The years between the publication of The Spectator and the City and Mets Fan (McFarland, 2007) were a fallow period for Dana. He published essays on Alfred Hitchcock and F. Scott Fitzgerald. And he began experimenting with a genre he had studied so closely, the personal essay. After a piece on the Mets was published in Newsday in 2005, he turned full force to this new mode. Mets Fan and The Last Days of Shea (Taylor, 2009) were more than paeans to his favorite team. They were meditations on time and loss, love and purpose, idealism and sacrifice. At times, in recording his impressions, Dana himself became a kind of flaneur, as in this description of “The Crowd at Shea”:

Looking over the tops of the heads, I see cornrows, spiked hair, pony tails, and yarmulkes. I see lawyers in bedraggled business suits, men who look like busts of Roman emperors, and kids who cannot believe their good fortune, to be here, to see the game ... All races mix, all cultures and ages gather, here in a broad mass of blue, black, and orange flecked with pink, yellow, and white. (Mets Fan 118)

But the experience of watching the Mets was more than an urban spectacle. For Dana, it was deeply personal.
“When I go to Shea,” he wrote of the old stadium, “I feel as if I am visiting my father and several long lost versions of my daughter. I visit all of



the different eras of my life, and all of the different teams and players who gave me so much happiness as I grew up and grew older. ... So much of what I have known and been seems held in the great curved embrace of the stands, in the rich green symmetry of the field, in the chaos of girders and buttresses and bathrooms and frying food on the concourse behind the seats” (Mets Fan 7).

For Dana, baseball was a commemoration of his life. He knew that it was the rhythms of baseball, the innumerable predictable actions punctuated by accident, that gave the game such depths. That sense of baseball as autobiography, as a kind of urban poem, is most poignantly conveyed in two parallel passages in Mets Fan. In “The Most Exciting Mets Game Ever,” Dana recalls the experience of watching the Mets’ epic postseason game against the 1986 Astros with his parents and sisters:

The game was so tense and important that we couldn’t talk about our lives or about other things during the commercials. Nothing for those few hours was allowed to intrude. All of us cared, to an impossible depth, and all of us cared in different ways. ... In recent years, my parents had become more involved than any of us. They had more time. Now that we were grown, the Mets had taken part of our place. ... They watched every game here in the den, him in the bigger armchair to the right, her on the couch on the left. Nothing now changed in the den except that every year there were a few new things on the shelves. (Mets Fan 59)

But something did change. The Mets won:

Our den was not the right place for what happened next. My sisters and I leapt to our feet and began to do something we had never dreamed of doing as children, when we were much, much smaller.
We joined hands and began to jump up and down, and as objects my parents had collected over the years did not begin to fall from their perches, we squeezed each other’s hands and began to jump tentatively higher and higher, screaming because we were so happy that the Mets had won, that the Mets had survived. ...



My parents are laughing and are pretending to be worried about the shelves and the cabinets toppling to the floor. I am filled with the joy I had at that moment, and the joy I feel looking back on us all there and complete and together. (61-2)

Thirteen years later, the Mets once again found themselves in a playoff, this time with Atlanta. The Mets would not win the World Series that year. The Yankees did. But Dana was back in the den with his family – now including his young daughter. “The furniture, clocks, paintings, and shelves were all where they had been. But while the room in 1986 looked like the room of a late middle-aged couple, while it looked as if it had recently been the room of a whole family, the room in 1999 looked like something much more precarious” (93). Dana’s father, a distinguished physician, had developed Alzheimer’s. This time, there would be no jumping up and down. It was a Sunday night – school the next day – and Dana had to leave before the game ended. As he drove home, he thought of why baseball meant so much to him:

I thought of my father, and my daughter, and I tried to decide if baseball made life seem longer or shorter. In ways, I think it makes life seem longer, because you have this sense of an infinite series of brightly lit evenings recalling other brightly lit evenings, summer afternoons that recall other summer afternoons. ... So you feel as if you remember so much, as if you have seen so much of this long, continuing thing. (94-5)

He pulled into his driveway, deposited his sleeping daughter, and watched the Mets pull out a win. “I was so happy, in the late Connecticut quiet, as my daughter slept, and as my wife waited for all the commotion to be over so she could ask me how the day had been” (96).

In his discussion of Whitman’s “Crossing Brooklyn Ferry,” Dana turned to the last work of the great French intellectual Roland Barthes. In Camera

Lucida, Barthes seeks to explain photography’s aesthetic power. Photography, Dana writes, “offers proof of the reality of time while enabling a sense that it is possible to see across time” (167). Whitman understood this poignant power. His great poem is a study of shared presence, of the subtle connections we feel in every moment, with everyone with whom we share our lives:

Flood-tide below me! I see you face to face!
Clouds of the east – sun there half an hour high – I see you also face to face.
Crowds of men and women attired in the usual costumes, how curious you are to me!
On the ferry-boats the hundreds and hundreds that cross, returning home, are more curious to me than you suppose.
And you that shall cross from shore to shore years hence are more to me, and more in my meditations than you suppose. (1-5)

For Whitman, as for Dana, time was not a “self-destroying medium” but a means of experiencing the present in all its elusive fullness, like “glories strung ... on my smallest sights and hearings” (9). In that sense, Dana’s life and work were a unitary essay, in the French sense of that word: an attempt to see in the many elements of experience a resonant whole. It was our rare privilege to have shared those glories, to have shared Dana’s world.

Hofstra University has established the Dana Brand Memorial Scholarship Fund in Dana’s honor, and we would be grateful for your contribution. You can contribute in two ways – by check or online. Please send checks to:

Meredith Celentano
Assistant Vice President for Development
101 Hofstra University, Hempstead, New York 11549-1010

Contributions to the Dana Brand Memorial Scholarship Fund may be made online at hofstra.edu/giving. Under “Gift Designation,” please specify the Dana Brand Memorial Scholarship Fund.



Remembering the Life Work of Vince Brown

By Brian D. Cox, Associate Professor of Psychology

On August 13, 2011, Hofstra University lost Dr. Vince Brown, an original and productive researcher, an engaged and imaginative teacher, and a colleague who was interested in the work and welfare of other researchers, whether from his own discipline of cognitive psychology, from other departments at Hofstra or from around the world. He was a collaborative scholar who generously sharpened others’ ideas, and took the lesser share of credit. He was also a good friend.


Dr. Brown and his colleagues, including Simona Doboli of the Hofstra Computer Science Department, Paul Paulus and Dan Levine of the University of Texas at Arlington, Ali Minai of the University of Cincinnati, Alex Doboli of Stony Brook University, and Joe Betz of SUNY-Farmingdale, were engaged in a scientific examination, using computational models and behavioral experiments, of the cognitive and social aspects of brainstorming and creativity, funded in part by the National Science Foundation.

Creativity seems to belong to the realm of the arts, not science. All of us develop tricks to help us be creative in our own fields, but the process has always been a bit mysterious, and some of us would like to keep it that way. But Vince and his collaborators showed in many rigorous empirical articles, book chapters, and conference presentations that it is possible to



understand the variables that may help individuals and groups to create more and better ideas.

Any good scientific project begins with compelling and clear questions, and then proceeds through careful, replicable experiments to answer those questions by deeper and deeper iterations of a feedback loop of results and studies. The research team began by studying the widely used process of brainstorming, in which a group of people attack a problem – for example, ways to improve a college campus – by producing, at first uncritically, as many ideas as possible – a process called divergent thinking in psychology – which are then winnowed by eliminating impossible or impractical ideas through logical analysis, or convergent thinking. Use of convergent thinking alone tends to focus in on ideas, inhibiting creative


solutions.

It was thought that use of divergent thinking in social groups would be better than generating ideas alone, because each person in the group would have different starting points, and the group as a whole would generate more ideas. It was also suggested that the suggestions of others would cognitively activate associations, or prime the members’ fund of knowledge about the problem.

The starting point for Vince’s research team, however, was a stubborn contradiction: when groups of people generating divergent ideas on their own were compared with groups generating ideas divergently together, the total number of unique ideas produced by those thinking divergently alone exceeded that of the collaborative group. There may be both social and cognitive reasons for this finding. Each member of a group might show decreased motivation because others could take up the slack, in what is called social loafing, or they might be apprehensive about the evaluation of others. Cognitively, the need to wait one’s turn means that individuals might either forget their idea while waiting, or rehearse the single idea until their turn rather than producing new ones. One must also attend to others’ ideas, or remember others’ ideas to avoid repeating them in one’s own contributions, lowering the rate of production.

Studies have shown that “brainwriting” – writing down one’s ideas and passing them around – or collaborating by computer lessens memory load and increases idea production. Memory load also might explain why providing hints over time works better than providing them all at the beginning. Using the variables discovered in these behavioral experiments, the research team then constructed a computer model of idea generation. Computer models have several advantages.
First, the researcher is forced to define variables in a quantitative way: flexibility, for example, can be defined as number of times a participant generating ideas for campus improvements moves between categories (e.g., social gathering places or types of restaurants) vs. within categories (e.g., menu items). Fluency can be defined as rate of idea production over time.
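The quantitative definitions above can be sketched in a few lines of code. This is a hypothetical illustration, not the research team's actual model: the function names, the category labels, and the sample session are all invented for the example.

```python
# Hypothetical sketch of the two measures described in the text; the
# helper names, categories, and sample data are invented for illustration.

def flexibility(ideas):
    """Count moves between categories in the order ideas were produced."""
    cats = [cat for _, cat, _ in ideas]
    return sum(1 for a, b in zip(cats, cats[1:]) if a != b)

def fluency(ideas):
    """Rate of idea production: ideas per minute over the session."""
    times = [t for t, _, _ in ideas]
    span = max(times) - min(times)
    return len(ideas) / span if span else float(len(ideas))

# A brainstorming session recorded as (minute, category, idea) tuples.
session = [
    (0, "restaurants", "add a taco stand"),
    (1, "restaurants", "longer dining hours"),
    (3, "gathering places", "outdoor amphitheater"),
    (4, "gathering places", "rooftop garden"),
    (6, "restaurants", "student-run cafe"),
]

print(flexibility(session))  # 2 between-category moves
print(fluency(session))      # 5 ideas over 6 minutes, ~0.83 per minute
```

Measures like these let a model's output be compared directly against transcripts of human brainstorming sessions.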

Second, computer models can handle the variation among many variables simultaneously to test their interaction in more ways than the researchers can think of, and thereby provide unexpected outcomes, or a more exhaustive examination of the universe of possible problems. Thus, modeling helps generate hypotheses for future research. One could model a convergent thinker who tends to stay within a category — for example, offering up all the menu ideas, probably starting at a fast fluency rate and slowing as the category is exhausted. The divergent thinker might hop from restaurant type to restaurant type without exploring them deeply. Computer modeling might help understand what happens if you put these two types of thinkers together. Then one could do an empirical experiment to corroborate or disconfirm an aspect of the model.

Finally, aspects of the models can be evaluated analogously in terms of how the brain actually operates at the neural level to produce global effects. For example, the concept of lateral inhibition is present at several levels in the brain: when a particular visual stimulus activates a neuron that fires at lines of 45-degree angles, that neuron inhibits its neighbors from firing at 44 or 46 degrees, accentuating edges and boundaries. Modeling this process might explain why people get stuck in a category in convergent idea generation.

A presentation that Vince and Dr. Doboli gave the week before his death explored the reiterative feedback loops by which associated ideas first prime each other, then accelerate idea development, and finally result in slowing of new idea production and increased repeating of old ideas. In the jargon of dynamical systems modeling, associated ideas first synchronize and coalesce into temporary emergent attractors; as ideas are exhausted, local inhibition predominates, desynchronization develops, the attractors collapse, and generalized inhibition results.
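The competitive dynamics described above can be illustrated with a toy model. This sketch is not the team's published model; the update rule, parameter values, and starting activations are invented for illustration. Each idea category loses activation in proportion to the total activation of its rivals (lateral inhibition) plus a small decay term, so the strongest category eventually suppresses the others, the computational analogue of getting stuck in one category.

```python
# Toy sketch of lateral inhibition among idea categories (hypothetical;
# not the published model). Each unit's activation is reduced by the
# combined activation of the other units, plus a small decay term.

def step(act, inhibition=0.3, decay=0.1):
    """One update of the competition; activations cannot fall below zero."""
    total = sum(act)
    return [max(0.0, a - inhibition * (total - a) - decay * a) for a in act]

act = [1.0, 0.8, 0.6]   # initial activation of three idea categories
for _ in range(5):
    act = step(act)

# The strongest category survives while the others are driven to zero,
# mirroring how convergent thinkers can become fixated on one category.
print(act)
```

Under these made-up parameters, only the initially strongest category keeps a positive activation after a few steps; weakening the inhibition parameter lets more categories stay active, which is exactly the kind of manipulation such models make easy to explore.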
In layman’s terms, the studies have suggested that at that point one way to overcome generalized inhibition is to simply take a break! The power of modeling suggests that a few mathematical rules operating together can produce complex effects. At the time of Vince Brown’s death, the team was beginning to




investigate the thorny problem of how to model the process of separating good ideas from bad ones. In addition, Vince collaborated with Dr. Minai on the issue of the dynamic effects of social networks on idea generation. In this new work, they were exploring how systematically varying the connectivity of members in a social network would affect idea generation. At its most global level, they were beginning to investigate scientists’ social networks by publication citations.

Vince’s preliminary studies of novel conceptual combinations in groups, together with Dr. Alex Doboli’s interest in developing automated computer circuit design tools that could mimic human creativity, led to a major NSF collaborative interdisciplinary research grant with Dr. Paulus, Dr. Minai, Dr. Doboli and Dr. Betz. The goal of this work is to study and compare creative processes in architecture, circuit design, and a general domain through human experiments and computational models. The work tries to uncover some of the mechanisms and conditions by which humans combine existing knowledge to create new insights.

Vince Brown’s wide-ranging curiosity led him to fruitful collaborations in other areas of research as well. For example, he worked closely for 14 years with Dr. David Gorfein on the study of how people process word meanings and on the development of a model (the Activation-Selection Model) to explain the effects of context and experience on the choice of meaning when we encounter, as we do constantly, words that have multiple meanings. It was characteristic of Vince always to try to help his colleagues by bringing them to one another’s attention. At this point, Drs. Brianna Eiter (formerly of Hofstra), Kristin Weingartner of Hofstra, and David Gorfein continue to collaborate on the development of a national database for homographs (words of multiple meanings, with the same spelling). Some of this work, recently presented at scientific



conferences, is still in preparation for publication. Also, in 1995 he became interested in an idea on animal behavior from Dr. Scott Coleman, then an animal behavior graduate student who was taking Vince’s class on neural network modeling and is now an adjunct faculty member at the University of Texas at Arlington. The idea was to apply an existing neural model of human decision making to animal foraging. This foraging work was the basis for one journal article that included Drs. Dan Levine and Roger Mellgren from that department, and several presentations at animal behavior and mathematical psychology conferences.

Vince was also one of the main organizers of a conference at the University of Texas at Arlington on oscillations in neural systems, along with Dr. Levine and Dr. Timothy Shirey (Texas Instruments and University of Texas at Dallas), which led to a co-edited book published in 2000. Through these fruitful collaborations, Vince showed that he truly understood the value of the social community of scientists. He was always organizing colloquia, sharing a meal at a conference, or just having a conversation about his wide range of interests. Vince contributed much to many projects without receiving authorship credit.

In fact, in the last few weeks, among his friends from the University of Texas at Arlington and the University of Richmond, where he taught before arriving at Hofstra, from the National Science Foundation, where he recently served a two-year stint as a program officer, and from his many friends from everywhere else, the word most often heard about Vince is “generous.” He was generous with his time, his enormous expertise, his ideas, and his concern, whether you were a student, a collaborator, a colleague, or a visiting scholar. One also hears: “He made my work so much better” by cutting through fuzzy ideas to the conceptual and mathematical heart of the problem.
Right now, it seems like the absence of his unassuming brilliance has left a hole in that community that will be difficult to fill. He will be sorely missed.


A selection of work by Dr. Vince Brown and his colleagues for further reading:

Brown, V. R., & Gorfein, D. S. (2004). A new look at recognition in the Brown-Peterson distractor paradigm: Towards the application of new methodology to unsolved problems of recognition memory. Memory & Cognition, 32, 674-685.

Brown, V. R., & Paulus, P. B. (2002). Making group brainstorming more effective: Recommendations from an associative memory perspective. Current Directions in Psychological Science, 11, 208-212.

Brown, V., Tumeo, M., Larey, T., & Paulus, P. B. (1998). Modeling cognitive interactions during group brainstorming. Small Group Research, 29, 495-526.

Coleman, S., Brown, V. R., Levine, D. S., & Mellgren, R. L. (2005). A neural network model of foraging decisions made under predation risk. Cognitive, Affective, and Behavioral Neuroscience, 5, 434-451.

DeRosa, D. M., Smith, C. L., & Hantula, D. A. (2007). The medium matters: Mining the long-promised merit of group interaction in creative idea generation tasks in a meta-analysis of the electronic group brainstorming literature. Computers in Human Behavior, 23, 1549-158.

Diehl, M., & Stroebe, W. (1987). Productivity loss in brainstorming groups: Toward the solution of a riddle. Journal of Personality and Social Psychology, 53, 497-509.

Doboli, S., Minai, A. A., & Brown, V. (2007). Adaptive dynamic modularity in a connectionist model of context-dependent idea generation. In Proceedings of the International Conference on Neural Networks, Orlando, Florida, 2183-2188.

Doboli, S., & Brown, V. R. (2010). An emergent attractors model for idea generation process. In Proceedings of the International Conference on Neural Networks, Barcelona, Spain.

Gorfein, D. S., & Brown, V. R. (2007). Saying no to inhibition: The encoding and use of words. In D. S. Gorfein & C. M. MacLeod (Eds.), The Place of Inhibition in Cognition. Washington: APA Books.

Gorfein, D. S., Brown, V. R., & DeBiasi, C. (2007). The activation-selection model of meaning: The son comes out after the sun. Memory & Cognition, 35, 1986-2000.

Gorfein, D. S., & Weingartner, K. M. (2008). On the norming of homophones. Psychological Research Methods, 40, 522-530.

Iyer, L., Doboli, S., Minai, A. A., Brown, V. R., Levine, D. S., & Paulus, P. B. (2009). Neural dynamics of idea generation and the effects of priming. Neural Networks, 22, 674-686.

Iyer, L., Minai, A. A., Doboli, S., & Brown, V. (2007). Modularity and self-organized functional architectures in the brain. In Proceedings of the 7th International Conference on Complex Systems, Boston, MA.

Larey, T. S., & Paulus, P. B. (1999). Group preference and convergent tendencies in small groups: A content analysis of group brainstorming performance. Creativity Research Journal, 12, 175-184.

Levine, D. S., & Brown, V. R. (2007). Uses (and abuses?) of inhibition in network models. In D. S. Gorfein & C. M. MacLeod (Eds.), The Place of Inhibition in Cognition. Washington: APA Books.

Levine, D. S., Brown, V. R., & Shirey, V. T. (Eds.). (2000). Oscillations in Neural Systems. Mahwah, NJ: Lawrence Erlbaum Associates.

Osborn, A. (1957). Applied Imagination. New York: Scribner’s.

Paulus, P. B., Levine, D. S., Brown, V. R., Minai, A. A., & Doboli, S. (2010). Modeling ideational creativity in groups: Connecting cognitive, neural, and computational approaches. Small Group Research, 41, 688-724.

Paulus, P. B., & Brown, V. R. (2007). Toward more creative and innovative group idea generation: A cognitive-social motivational perspective of brainstorming. Social and Personality Compass, 1, 248-265.

Paulus, P. B., & Dzindolet, M. (1993). Social influence processes in group brainstorming. Journal of Personality and Social Psychology, 64, 575-586.

Paulus, P. B., Nakui, T., Putman, V. L., & Brown, V. (2006). Effects of task instructions and brief breaks on brainstorming. Group Dynamics: Theory, Research, and Practice, 10, 206-219.

Friends of Vince Brown can share their memories of him at: http://hofstraremembers.blogspot.com/2011/08/dr-vincent-brown-associate-professor-of.html.

Vince Brown’s family has set up the Vincent R. Brown Memorial Scholarship Fund at Hofstra. Friends may contribute to the fund, which will be awarded to future undergraduate psychology students on the basis of academic accomplishment. Contributions to the Vincent R. Brown Memorial Scholarship Fund may be made online at hofstra.edu/giving. Under Gift Designation, click the box for “in memory of” and type in Dr. Vincent R. Brown Memorial Scholarship Fund, and the contribution will be directed to the fund.

The author thanks David Gorfein, Dan Levine, Paul Paulus, and Simona Doboli for expert comments on drafts of this article.

Hofstra HORIZONS • Fall 2011

Traffic Jams, Delays, and Mitigation Strategies

Photo courtesy of thinkstock.com

Dilruba Ozmen-Ertekin, Ph.D., P.E., Assistant Professor of Engineering

What causes traffic congestion and delays? How can we estimate the amount of delay? What strategies are most commonly used to manage and mitigate traffic congestion? Answers to these questions are explored with a focus on the New York/New Jersey metropolitan area, specifically the Hudson River crossings and their toll plazas. “Time-of-day pricing” is studied in detail as a congestion mitigation strategy.


Former Secretary of Transportation Norman Mineta stated in 2001 that “congestion and delay not only waste our time as individuals, they also burden our businesses and our entire economy.” Recent statistics support this statement. Traffic congestion is a national problem. In 2009 urban areas in the United States experienced 4.8 billion vehicle-hours of delay, resulting in 3.9 billion gallons of additional fuel consumed and a congestion cost of $115 billion to highway users, households and firms throughout the nation [1]. In the New York/New Jersey metropolitan area (Figure 1), served by a vast multimodal transportation system, nearly 700,000 vehicles cross the Hudson River daily from New Jersey to New York [2].

Figure 1. Hudson River Crossings

Given such intensity, moving people and goods efficiently to, from and within the region is no simple task. The complexity of this task is compounded by the rapidly growing city and port traffic and population, further escalating the levels of congestion, as well as air pollution and traffic accidents. Recent congestion statistics demonstrate the severity of the situation: in 2009 the New York-Newark area experienced 0.45 billion vehicle-hours of delay, resulting in 0.35 billion gallons of wasted fuel and a congestion cost of $10.9 billion [1].

Causes of Delays

Highway congestion occurs when traffic demand approaches or exceeds the available capacity of the highway system, creating bottleneck conditions. Traffic demands vary significantly depending on the season of the year, the day of the week, and even the time of day. Also, the capacity, often mistaken as constant, can change because of weather, work zones, or traffic incidents. Figure 2 shows sources of delays in the United States. Accordingly, bottleneck delays are the single largest cause of delays [3].

Estimating Delays: The Case of Toll Plazas

Toll plazas are considered here for delay estimation because they are perfect examples of traffic bottlenecks: many toll payment approach lanes merge into far fewer lanes beyond the toll plaza. Even though toll plazas can have adverse capacity and safety impacts on traffic, especially on urban highways, they serve an important purpose, namely revenue generation for highway agencies. Various traffic management and electronic toll collection (ETC) strategies, such as regular and high-speed E-ZPass and time-of-day pricing (TDP), are implemented as part of toll plaza operations to change traffic supply and demand characteristics and improve network-wide level of service. In recent years, driven by the increasing need to better assess the impact of toll plazas combined with these traffic management strategies, customized and off-the-shelf microsimulation and macrosimulation models of toll plaza operations have been developed. However, it is extremely difficult and expensive to calibrate and implement microsimulation models when projects have severe budget and time constraints. It is therefore necessary to develop alternative macroscopic approaches that are easy to use and inexpensive compared with the more complex microsimulation tools. Several studies, although not dealing with toll plaza operations specifically, used both macroscopic and microscopic tools to predict traffic flow characteristics (e.g., [4], [5]). Such a comparative approach helps determine the validity of the macroscopic approaches. Macroscopic models can also be easily embedded in four-step demand forecasting models to estimate toll plaza delays and the impact of demand management strategies and technologies planned as part of toll collection operations. This helps planners easily perform sensitivity analyses of various alternatives without resorting to microsimulation models that might not be feasible for large-scale statewide studies.

Figure 2. Sources of Congestion (Work Zones, 10%; Bad Weather, 15%; Special Events, 5%; Poor Signal Timing, 5%)

Some Previous Studies About Toll Plaza Delay Estimation

In [6], an analytical delay model was formulated that estimates total delay by accounting for extra travel time due to deceleration, toll paying, acceleration, and time spent waiting in queue. To calibrate the model, a stochastic microsimulation model was developed. The resulting incremental delay formulation was very similar to the formula given in [7]. The results showed that the delay model can yield estimates within 10 percent of simulated values. The author recommended that the delay model be used for preliminary screening of alternative designs and operations, and that further investigation be conducted to determine whether the model can adequately estimate delay based on field data.

In [8], the deployment of ETC was studied by developing a model to maximize the social welfare associated with a toll plaza. A payment choice model was developed to estimate the share of traffic using ETC as a function of delay, price, and a fixed cost of acquiring the in-vehicle transponder. Assuming that welfare depends on the market share of

Hofstra HORIZONS t Fall 2011 13


ETC, and includes delay, gasoline consumption, toll collection costs, and social costs such as air pollution, the authors examined the best combination of ETC lanes and toll discount to maximize welfare. The generalized delay model suggested by [9] for the new Highway Capacity Manual [7] was employed after slight modifications. An application to California’s Carquinez Bridge revealed that too many ETC lanes cause excessive delay to nonequipped users, whereas too high a discount costs the highway agency revenue.

Macroscopic Approach to Estimating Toll Plaza Delays

The total delay experienced at the toll plazas by each vehicle can be expressed as:

d = dd + di + dp + da + dq     (1)

where
di = incremental delay (s/veh)
dd = deceleration delay (s/veh)
dp = service time (s/veh)
da = acceleration delay (s/veh)
dq = initial queue delay (s/veh)

Deceleration delay is the extra travel time incurred while drivers decelerate before reaching a tollbooth. Equation 2 can be used to calculate the deceleration delay [6]:

dd = P(V − Vb)² / (2·d1·V) + (1 − P)(V − Vb)² / (2·d2·V)     (2)

where
dd = average deceleration delay in toll lane (s/veh)
P = proportion of noncommercial vehicles
V = free-flow speed (m/s)
Vb = speed at toll booth (m/s)
d1 = deceleration rate of noncommercial vehicles (m/s²)
d2 = deceleration rate of commercial vehicles (buses and trucks in this case) (m/s²)
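Equation 2 translates directly into code; a minimal sketch (the numeric inputs in the example are illustrative, not values from the article):

```python
def deceleration_delay(V, Vb, P, d1, d2):
    """Average deceleration delay per vehicle in a toll lane (Eq. 2).

    V  : free-flow approach speed (m/s)
    Vb : speed at the toll booth (m/s)
    P  : proportion of noncommercial vehicles
    d1 : deceleration rate, noncommercial vehicles (m/s^2)
    d2 : deceleration rate, commercial vehicles (m/s^2)
    """
    return (P * (V - Vb) ** 2 / (2 * d1 * V)
            + (1 - P) * (V - Vb) ** 2 / (2 * d2 * V))

# Example: a cash lane (Vb = 0) on a 15.65 m/s (35 mph) approach with
# 90 percent noncommercial traffic (illustrative values only).
dd = deceleration_delay(V=15.65, Vb=0.0, P=0.9, d1=2.4, d2=1.48)
```

Note that the delay vanishes when Vb equals V, as it should: no speed change, no deceleration delay.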


Hofstra HORIZONS t Fall 2011

Acceleration delay depends on the free-flow speed and the acceleration characteristics of vehicles, and is given as follows [6]:

da = P(V − Vb)² / (2·a1·V) + (1 − P)(V − Vb)² / (2·a2·V)     (3)

where
da = average acceleration delay in toll lane (s/veh)
a1 = acceleration rate of noncommercial vehicles (m/s²)
a2 = acceleration rate of commercial vehicles (m/s²)
Other variables are as defined before.

According to [7], the incremental delay at signal-controlled intersections on principal arterials is:

d = 900T [ (X − 1) + √( (X − 1)² + 8kIX / (cT) ) ]     (4)

where
d = incremental delay (s/veh)
T = duration of analysis period (hr)
X = lane group volume/capacity ratio, or degree of saturation
k = incremental delay factor that is dependent on traffic controller settings
I = upstream filtering/metering adjustment factor
c = lane group capacity (vph)

In the case of toll plazas, however, because “k” is a parameter to adjust for traffic actuated signals and “I” is a parameter to adjust for filtering and metering by upstream signals, they can be taken as “0.5” (upper limit given in [7]) and “1,” respectively. As a result, the incremental delay equation used here reduces to the form given in Equation 5, which expresses the incremental delay experienced by each vehicle in the toll lane due to random variations in toll processing times and vehicle arrivals. Equation 5 assumes that there are no queuing vehicles at the beginning of the analysis period.

di = 900T [ (X − 1) + √( (X − 1)² + 4X / (C·T·N) ) ]     (5)

where
di = average incremental delay in toll lane (s/veh)
T = analysis period (hr)
X = volume/capacity ratio
C = capacity (vehicles per lane per hour, or vplph)
N = number of toll lanes
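Equation 5 is straightforward to evaluate in code; a minimal sketch (the 375 vplph cash-lane capacity is taken from the comparison later in the article, while the X values are illustrative):

```python
import math

def incremental_delay(X, T, C, N):
    """Average incremental delay per vehicle in a toll lane group (Eq. 5).

    X : volume/capacity ratio (degree of saturation)
    T : analysis period (hr)
    C : capacity (vehicles per lane per hour)
    N : number of toll lanes
    """
    return 900 * T * ((X - 1) + math.sqrt((X - 1) ** 2 + 4 * X / (C * T * N)))

# Delay grows sharply as the lane group nears saturation
# (illustrative values only).
di_low = incremental_delay(X=0.70, T=1, C=375, N=4)
di_high = incremental_delay(X=0.95, T=1, C=375, N=4)
```

Because the square-root term dominates when X < 1, the delay stays small until the lane group approaches saturation, then grows roughly linearly with X − 1 once X exceeds 1.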

In order to take into account any delay due to the presence of initial queues at the toll plaza at the beginning of the analysis period, delay component d3 from [7] is used here as dq:

dq = 1800·Qb·(1 + u)·t / (cT)     (6)

where
dq = average initial queue delay in toll lane (s/veh)
t = duration of oversaturation within T (hr)
u = delay parameter
Qb = total number of vehicles present at the toll lanes at the beginning of T (veh)
c = toll lane group capacity (vph)

The service time required to pay the toll adds a delay to each vehicle that depends on the method of payment (ETC or manual). Here, the length of the analysis period is kept at one hour; the incremental delay is calculated for each hour and then summed to determine the total incremental delay for the peak period. Breaking the entire four-hour peak period into one-hour blocks increases the accuracy of the macroscopic model. The macroscopic approach described here can be implemented very easily by incorporating the delay formulations into a spreadsheet. Sensitivity analysis can also be performed in minutes simply by varying the values of the desired variables, whereas a single run takes about an hour in PARAMICS microsimulation.
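The spreadsheet implementation described above amounts to evaluating Equations 1-6 hour by hour and summing; a minimal sketch (the per-hour component values and the 1500 vph lane-group capacity are illustrative, not values from the article):

```python
def initial_queue_delay(Qb, u, t, c, T):
    """Average initial queue delay per vehicle (Eq. 6).

    Qb : vehicles queued at the toll lanes at the start of T (veh)
    u  : delay parameter
    t  : duration of oversaturation within T (hr)
    c  : toll lane group capacity (vph)
    T  : analysis period (hr)
    """
    return 1800 * Qb * (1 + u) * t / (c * T)

def total_delay(dd, di, dp, da, dq):
    """Total delay per vehicle (Eq. 1): sum of the five components (s/veh)."""
    return dd + di + dp + da + dq

# Hour-by-hour evaluation over the peak period, as the article describes;
# only the first hour carries an initial queue (illustrative values only).
hourly = [
    total_delay(dd=3.5, di=2.8, dp=9.6, da=4.1,
                dq=initial_queue_delay(Qb=5, u=0, t=0.2, c=1500, T=1)),
    total_delay(dd=3.5, di=18.9, dp=9.6, da=4.1, dq=0.0),
]
peak_total_seconds = sum(hourly)
```

Varying any one input and re-running is the sensitivity analysis mentioned above, which is why the macroscopic model can be exercised in minutes rather than the hour a microsimulation run takes.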


Microscopic Modeling of Toll Plazas with PARAMICS The major advantage of microscopic traffic simulation models is the level of detail in the modeling procedure. Modeling the dynamics of traffic flow is essential in the evaluation of the impacts of various operational strategies. Microsimulation provides necessary tools for this approach. Quadstone PARAMICS is a widely used microsimulation software package with a wide range of functionalities such as the default simulation logic in car following, lane changing, route choice, etc., that can be modified using Application Programming Interface (API).

Most of the existing microsimulation packages do not have an accurate toll plaza model. Hence, many researchers have developed customized toll plaza simulation models (e.g., [11], [12], [13]). Although PARAMICS has some of the basic features needed to build a toll plaza model, additional work using the API was performed by [14] to represent toll plaza operations accurately, using NJ Turnpike data. The main feature of this approach is the lane changing logic at the toll plaza. The lane choice decision in PARAMICS is made two links before a junction, and a toll plaza consists of many links with a varying number of lanes. To model this in PARAMICS, it is necessary to have a number of small links with different numbers of lanes. Additionally, toll plaza configurations on the NJ Turnpike are such that, after crossing the toll plaza, vehicles traveling north or south must choose an appropriate ramp to enter the freeway. The default lane choice logic in PARAMICS, when faced with a decision among different paths as described above, fails in the case of a toll plaza with many short links. Figure 3 illustrates the improved lane choice logic.

Figure 3. Logic of the Toll Plaza Lane Changing Behavior in PARAMICS [14] (flowchart: enter toll plaza area → set lane range closer to kerb, median, or current lane → choose appropriate lane group (ETC or cash) → choose lane with least queue → assign wait time or reduce speed based on transaction type → exit toll plaza)

A very important aspect of modeling traffic using microsimulation is the calibration and validation of the models using real-world data. Many model parameters must be adjusted to replicate performance close to the real world. In [14], the geometry was obtained from satellite images and incorporated as overlays in the model. Service time distributions were collected from videographic data gathered at two toll plazas (15W and 16E) on the NJ Turnpike [15]. For the simulation of facilities such as the Hudson River crossings, the average service times from the exit toll plazas of 15W and 16E were used, because cash users there do not carry a ticket but simply pay the requisite toll when they cross the toll plaza.

Hofstra HORIZONS t Fall 2011 15


Table 1. Comparison of Macro and Micro Approaches: average and total weekday AM and PM peak delays (s/veh and hours) at the Holland Tunnel, Lincoln Tunnel, and Goethals Bridge toll plazas, estimated with the macro equations and with PARAMICS microsimulation, together with relative errors for the average delays and mean relative errors. [Table values not recoverable.]

Figure 4. Toll Plaza Lane Configurations at the Holland Tunnel, Lincoln Tunnel, and Goethals Bridge (Orange: Mixed (E-ZPass/Cash), Purple: E-ZPass)

In the development of the customized toll plaza model, [14] used the disaggregate vehicle-by-vehicle electronic transaction data at each toll plaza. From this raw data, origin-destination (OD) demand in terms of the number of E-ZPass and cash users was extracted for the peak and off-peak periods for a typical weekday. This dataset replicates the arrival distribution at the toll plazas in the most accurate fashion. In [14], the toll plaza model was validated for the travel time between ODs, the volumes on the freeway mainline, and the proportion of lane usage at the toll plaza.

Comparing Macroscopic and Microscopic Approaches



The validity of the macroscopic delay calculations was tested using data from three of the Hudson River crossings, namely the Holland and Lincoln Tunnels and the Goethals Bridge, and the results were compared with the delays estimated by PARAMICS. Some of the input data used in this comparison are as follows:

• Service time, dp: 9.6 seconds for cash, 3 seconds for E-ZPass, simply taken as the inverse of the capacity. This approach is recommended for cases where field observations are not available.


• Capacity, C: 1150 vplph for E-ZPass and 375 vplph for cash toll lanes (using the average observed capacity values obtained from NJTA through an email correspondence in 2002).
• Free-flow speed, V [16]:
  Goethals: 22.35 m/s (50 mph) before the plaza, 20.12 m/s (45 mph) after the plaza
  Holland: 15.65 m/s (35 mph)
  Lincoln: 20.12 m/s (45 mph) before the plaza, 15.65 m/s (35 mph) after the plaza
• Speed at toll booth, Vb: 0 for cash, 6.71 m/s (15 mph, posted) for E-ZPass.
• Acceleration and deceleration rates: based on the rates reported in the literature, which vary between 0.56 and 2.69 m/s² for acceleration and between 0.56 and 3.09 m/s² for deceleration ([6], [19], [20], [10]), the deceleration rates for noncommercial and commercial vehicles, d1 and d2, and the acceleration rates for noncommercial and commercial vehicles, a1 and a2, are taken as 2.4 m/s², 1.48 m/s², 1.5 m/s², and 0.97 m/s², respectively.
• Analysis period, T: 1 hr.
• Number of toll lanes, N [16] (lane configurations are shown in Figure 4):
  Goethals: 3 mixed lanes, 5 E-ZPass lanes
  Holland: 4 mixed lanes, 5 E-ZPass lanes
  Lincoln: 7 mixed lanes, 6 E-ZPass lanes
• Total demand (volume, vph), obtained from [17] and [18]:
  Goethals: AM peak: 3041 mixed, 3713 E-ZPass; PM peak: 5432 mixed, 6786 E-ZPass

  Holland: AM peak: 5087 mixed, 5864 E-ZPass; PM peak: 5896 mixed, 4725 E-ZPass
  Lincoln: AM peak: 7544 mixed, 11003 E-ZPass; PM peak: 5379 mixed, 5038 E-ZPass
• Initial queue, Qb (assumed values): AM peak: 5 veh/cash lane, 4 veh/E-ZPass lane; PM peak: 2 veh/cash lane, 1 veh/E-ZPass lane

The results (Table 1) from the macro- and micro-level approaches for the Holland and Lincoln Tunnels and the Goethals Bridge compare very closely, especially for the AM peak period. This justifies the use of macroscopic delay equations in lieu of more complex microsimulation approaches that are both labor intensive and costly to develop.
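The degree of saturation implied by these inputs can be checked directly; a minimal sketch for the E-ZPass lane groups in the AM peak (it uses the capacities, lane counts, and demands listed above, and ignores any E-ZPass vehicles that use the mixed lanes):

```python
# Degree of saturation X = volume / (C × N) for each E-ZPass lane group,
# AM peak, using the input data listed above.
C = 1150  # E-ZPass lane capacity (vplph)

am_ezpass = {  # plaza: (AM peak E-ZPass demand in vph, number of E-ZPass lanes)
    "Goethals": (3713, 5),
    "Holland": (5864, 5),
    "Lincoln": (11003, 6),
}

X_by_plaza = {name: volume / (C * lanes)
              for name, (volume, lanes) in am_ezpass.items()}

for name, X in X_by_plaza.items():
    print(f"{name}: X = {X:.2f}" + (" (oversaturated)" if X > 1 else ""))
```

A lane group with X above 1 is oversaturated, which is exactly the regime where the incremental and initial-queue delay terms dominate the total delay.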

Congestion Management Strategies

There is no “one size fits all” strategy to mitigate traffic congestion; a solution that works well in one geographic area might not be suitable for another. Each region must identify the projects, programs and policies that achieve goals, solve problems and capitalize on opportunities. The most effective strategy is one in which agency actions are complemented by the efforts of businesses, manufacturers, commuters and travelers [1]. Generally speaking, however, strategies to deal with congestion fall into three categories [3]:

1. Capacity Expansions – This can include expanding the base capacity (by adding lanes or building new highways) as well as redesigning specific bottlenecks such as interchanges and intersections to increase their capacity. Additional roadways reduce the rate of congestion increase.

2. Demand Management Through Operational Changes – Getting more out of the existing system.

3. Greener Transportation Choices – Mass transit, nonautomotive travel modes, and land use management.

Demand management using advanced technology (known as Intelligent Transportation Systems, or ITS) is the major policy advocated by many agencies, although it might not be possible to completely avoid expanding capacity by building new roads. ITS solutions include:

• Electronic toll collection (ETC).
• Vehicle tracking using ETC for travel time estimation and incident detection.
• Time-of-day pricing, which provides a financial incentive for drivers to switch to times, routes, or modes of transportation that are less congested. It encourages drivers to use the existing highways more efficiently, and it can also be linked to strategies that improve mobility by making alternatives to the private automobile, such as subways, buses, or commuter rail service, more attractive during peak periods.
• Incident management: identifying incidents more quickly, improving response times, and managing incident scenes more effectively.
• Work zone management: reducing the amount of time work zones need to be in place and moving traffic more effectively through work zones, particularly at peak times.
• Road weather management: predicting bad weather conditions (such as rain, snow, ice, and fog) in specific areas and on specific roadways, allowing for more effective road surface treatment.
• Planned special events traffic management: pre-event planning and coordination and traffic control plans.
• Freeway, arterial, and corridor management using advanced computerized control of traffic signals, ramp meters, and lane usage (lanes that can be reversible, truck-restricted, or reserved for high occupancy vehicles).
• Traveler information, i.e., providing travelers with real-time information on roadway conditions, delays, and advice on alternative routes [3].

Table 2. Changes in Average Delay (s/veh) due to TDP, by plaza, for the weekday AM peak, weekday PM peak, weekend peak, and off-peak periods. [Table values not recoverable.]

Estimated Changes in Delay Due to TDP

The macroscopic methodology, validated by comparison with the well-calibrated PARAMICS microsimulation model as discussed in the previous section, can be used to estimate the changes in toll plaza delays due to the changes in demand resulting from the TDP implemented at the Hudson River crossings for E-ZPass users since March 25, 2001. The changes in delays were estimated for all the Hudson River crossings, not just for the three crossings used in the validation process presented in the previous section. Delays were calculated with the macroscopic approach using July 2000 (before TDP) and July 2001 (after TDP) data, for the weekday and weekend peak and off-peak periods. The peak hours for the facilities considered are 6-9 a.m. and 4-7 p.m. on weekdays; weekend peak hours are from noon to 8 p.m. The off-peak hours are the remainder of the day. As can be seen in Table 2, TDP resulted in significant reductions (>10%) in delay (gray cells), especially for the weekday afternoon peak and weekend peak periods. It is important to note that the delays presented here are validated only against the PARAMICS model, not against real-world data, and thus might not reflect observed delays.

[The analysis results reported in this article are a small portion of the detailed and more comprehensive work previously published in “A Simple Approach to Estimating Changes in Toll Plaza Delays,” Transportation Research Record: Journal of the Transportation Research Board, No. 2047, pp. 66-74, co-authored by Dr. Ozmen-Ertekin and her colleagues.]
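The before/after comparison reduces to evaluating the same delay equations under the July 2000 and July 2001 demands and reporting the relative change; a sketch using Equation 5, where C = 1150 vplph is the article's E-ZPass capacity but the demand volumes are hypothetical:

```python
import math

def incremental_delay(X, T, C, N):
    """Eq. 5: average incremental delay (s/veh) for a toll lane group."""
    return 900 * T * ((X - 1) + math.sqrt((X - 1) ** 2 + 4 * X / (C * T * N)))

# One E-ZPass lane group, before and after TDP shifts some peak demand.
C, N, T = 1150, 5, 1
before = incremental_delay(5200 / (C * N), T, C, N)  # hypothetical pre-TDP demand
after = incremental_delay(4800 / (C * N), T, C, N)   # hypothetical post-TDP demand
pct_change = 100 * (after - before) / before         # negative = reduction
```

Because delay is highly nonlinear near saturation, even a modest demand shift out of the peak can produce the large percentage reductions the article reports.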

References

[1] Texas Transportation Institute, Texas A&M University. (2010). 2010 Urban Mobility Report.
[2] New York Metropolitan Transportation Council. <http://nymtc.org>.
[3] Federal Highway Administration. (2005). Traffic Congestion and Reliability: Trends and Advanced Strategies for Congestion Mitigation.
[4] Stockton, W., Benz, R., Rilett, L., Skowronek, D.A., Vadali, S., and Daniels, G. (2000). Investigating the General Feasibility of High Occupancy Toll Lanes in Texas. Research Report Submitted to Texas Department of Transportation, No. TX-00/4915-1.
[5] Schnell, T., and Aktan, F. (2006). On the Accuracy of Commercially Available Macroscopic and Microscopic Traffic Simulation Tools for Prediction of Workzone Traffic. Federal Highway Administration, Work Zone Mobility and Safety Program.
[6] Lin, F. (2001). Delay Model for Planning Analysis of Main-Line Toll Plazas. Transportation Research Record: Journal of the Transportation Research Board, No. 1776.
[7] Transportation Research Board. (2000). Highway Capacity Manual. Special Report No. 209, 3rd Ed.
[8] Levinson, D., and Chang, E. (2003). A Model for Optimizing Electronic Toll Collection Systems. Transportation Research Part A: Policy and Practice, Vol. 37, No. 4, pp. 293-314.
[9] Fambro, D.B., and Rouphail, N.M. (1997). Generalized Delay Model for Signalized Intersections and Arterial Streets. Transportation Research Record: Journal of the Transportation Research Board, No. 1572.
[10] Akcelik, R., and Besley, M. (2001). Acceleration and Deceleration Models. 23rd Conference of Australian Institutes of Transport Research, Monash University, Melbourne, Australia, 10-12 December.
[11] Al-Deek, H.M., Mohamed, A.A., and Radwan, E.A. (2000). New Model for Evaluation of Traffic Operations at Electronic Toll Collection Plazas. Transportation Research Record: Journal of the Transportation Research Board, No. 1710.
[12] Chien, S.I., Spasovic, L.N., Opie, E.K., Korikanthimathi, V., and Besenski, D. (2005). Simulation-Based Analysis for Toll Plazas with Multiple Toll Methods. 84th Annual Meeting of the Transportation Research Board, Washington, D.C.
[13] Correa, E., Metzner, C., and Nino, N. (2004). TollSim: Simulation and Evaluation of Toll Stations. International Transactions in Operational Research, pp. 121-138.
[14] Ozbay, K., Mudigonda, S., and Bartin, B. (2006). Microscopic Simulation and Calibration of an Integrated Freeway and Toll Plaza Model. 85th Annual Meeting of the Transportation Research Board, Washington, D.C.
[15] Bartin, B., Mudigonda, S., and Ozbay, K. (2006). Estimation of the Impact of Electronic Toll Collection on Air Pollution Levels Using Microscopic Simulation Model of a Large-Scale Transportation Network. 85th Annual Meeting of the Transportation Research Board, Washington, D.C.
[16] Hudson River Bridges and Tunnels. <http://www.nycroads.com/crossings/hudson-river/>.
[17] Holguín-Veras, J., Ozbay, K., and De Cerreno, A. (2005). Evaluation Study of Port Authority of New York and New Jersey’s Time of Day Pricing Initiative. Research Report Submitted to New Jersey Department of Transportation, No. FHWA/NJ-2005-005.
[18] Ozbay, K., Ozmen-Ertekin, D., Yanmaz-Tuzel, O., and Holguín-Veras, J. (2005). Analysis of Time-of-Day Pricing Impacts at Port Authority of New York and New Jersey Facilities. Transportation Research Record: Journal of the Transportation Research Board, No. 1932, pp. 109-118.
[19] Mousa, R.M. (2002). Analysis and Modeling of Measured Delays at Isolated Signalized Intersections. Journal of Transportation Engineering, July/August.
[20] Quiroga, C.A., and Bullock, D. (1999). Measuring Control Delay at Signalized Intersections. Journal of Transportation Engineering, Vol. 125, No. 4, pp. 271-280.

Dr. Dilruba Ozmen-Ertekin earned a B.S. in civil engineering in 1995 and an M.S. in civil engineering (transportation) in 1998 from Middle East Technical University, Ankara, Turkey, and a Ph.D. in civil engineering (transportation) in 2003 from Rutgers University. Dr. Ozmen-Ertekin’s research interest in transportation covers planning, operations and design of transportation facilities, simulation and modeling, economic impact studies, sustainable transportation systems, transportation and land use linkages, transit systems, traffic safety, transportation demand management, ITS, transportation data management, linear and nonlinear optimization techniques, and pavement deterioration models. Dr. Ozmen-Ertekin has extensive practical experience in carrying out research projects at Hofstra University and also as a consultant for companies outside the University, and she was involved in various projects funded by NYMTC, NJDOT, FHWA, and NYSDOT. She has authored and co-authored more than 50 papers and research reports. Most recently, she was the co-principal investigator of a project funded by NYMTC. She is also an affiliated faculty at the Region 2 University Transportation Research Center located at The City College of New York. In recognition of her scholarly contributions to the Department of Civil and Environmental Engineering at Rutgers University, Dr. Ozmen-Ertekin received the Faculty Academic Service Increment Award in 2004. She earned the professional engineer license in April 2006. In 2007 she received a Hofstra University grant-writing stipend for her NSF CAREER proposal.


Confessions of a Statistical Paleontologist

Photo courtesy of thinkstock.com

J Bret Bennington, Ph.D.,

Professor of Geology

“I love diggin’ in the dirt
With just a pick and brush
Finding fossils is my aim
So I’m never in a rush
’Cause the treasures that I seek
Are rare and ancient things
Like Velociraptor’s jaw
Or Archaeopteryx’s wings
I am a paleontologist
That’s who I am, that’s who I am, that’s who I am.”


– They Might Be Giants, Here Comes Science, Disney Sound, 2009



Allow me to introduce myself as Hofstra’s resident paleontologist. Like all paleontologists, I study the natural history of the Earth through the data provided by fossils. Most people assume, as do the song lyrics above, that this entails digging up the bones of prehistoric beasts. I will never forget the look of abject disappointment on my mother’s face the day I informed her, some number of years into my Ph.D. research, that I did NOT dig up dinosaurs. “Well then,” she asked, “what do you study?” “I count fossils,” I told her. I could see that this was not a very satisfying answer. Traditionally, paleontologists have a specialization, a taxonomic group or a geologic time period that they are expert in; I do not.

My interests lie in the fossil record itself. Fossils and their entombing sediments are the remains of the habitats, communities, and ecosystems of the biological past. It is a source of wonder to paleontologists that the strata of the Earth should preserve such a record at all, but at the same time the fossil record is frustrating because it can be very difficult to interpret accurately. The study of all that happens to a living community of organisms as they are transformed into fossils is called taphonomy (literally, “the law of burial”). Taphonomic processes selectively remove some organisms via decay, disarticulate and disperse others, average together cohorts of individuals through time, and generally disturb, destroy, and scramble the intricate details and relationships of the living world. To read the fossil record at face value is naïve, particularly if you are trying to reconstruct the complexities of biological communities and ecological interactions. This was the challenge that caught my attention starting out as a young scientist – how can we extract and more reliably interpret the ecological data preserved in a paleoecological (i.e., fossil) assemblage?

Measuring Ecological Stability in the Fossil Record

Early in my graduate school career, I took an excellent course in statistics offered through the animal science program at Virginia Tech. Up to that point I had never considered myself much inclined toward math, but I had something of an epiphany studying statistics. Beyond knowing how to apply particular statistical techniques, if you understand the basic principles of statistical analysis, you gain powerful insight into what kinds of data are needed to answer research questions and how to look at data with a skeptical eye. It was not long before I began applying my insights from statistical analysis to the fossil record. For my Ph.D. research, I decided to look for evidence of ecological stability over geological spans of time. Several very well-respected paleontologists had observed that the same assemblages of marine shelly fossils seemed to recur in the fossil record over hundreds of thousands to millions of years, even after significant disruptions of their habitats. A few had even suggested that ecological interactions were stabilizing species assemblages, causing the same combinations of species to reassemble in similar abundances whenever the appropriate habitats were re-established. To test this, I headed out to the coal fields of Kentucky and southwest Virginia, where thin intervals of shale with marine fossils deposited during times of exceptionally high global sea level were separated by thousands of feet of strata deposited by rivers during the intervening times of low sea level. Were the fossil assemblages the same from marine interval to marine interval, demonstrating ecological stasis, or did they change with each new drowning of the basin by marine waters and return of marine organisms? From my training in statistics, I realized that to assess how much fossil assemblages changed through time, I needed to determine how variable they were at any one time from place to place. If the temporal variability was equal to or less than the spatial variability, then I could conclude that ecological communities had persisted relatively unchanged through time. As it turned out, I was able to show that in almost all of the marine habitats I tested, the fossil communities showed statistically significant change through time relative to their variability at any one time (Figure 1), with species dropping in and out of habitats and changing their numerical abundance in the fossil assemblages. There was little evidence for ecological interactions controlling community structure. I was already teaching at Hofstra when the results of this work began appearing in print (Bambach and Bennington, 1996; Bennington and Bambach, 1996).

Figure 1.

Figure 2.
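The logic of the test (compare variability through time against variability from place to place) can be sketched in a few lines of Python. The species counts below are invented and the Bray-Curtis dissimilarity metric is my choice for illustration; this is a toy sketch, not the statistical procedure used in the published studies:

```python
def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two species-abundance dicts.

    0 = identical abundance mixes, 1 = no species in common.
    """
    species = set(a) | set(b)
    num = sum(abs(a.get(s, 0) - b.get(s, 0)) for s in species)
    den = sum(a.get(s, 0) + b.get(s, 0) for s in species)
    return num / den if den else 0.0

# Hypothetical counts: two replicate samples within each of two marine intervals.
interval_1 = [{"oyster": 40, "brachiopod": 10, "clam": 5},
              {"oyster": 35, "brachiopod": 14, "clam": 8}]
interval_2 = [{"oyster": 5, "brachiopod": 30, "clam": 25},
              {"oyster": 9, "brachiopod": 28, "clam": 20}]

# Spatial variability: dissimilarity between samples from the SAME interval.
spatial = (bray_curtis(*interval_1) + bray_curtis(*interval_2)) / 2

# Temporal variability: dissimilarity between samples from DIFFERENT intervals.
cross = [bray_curtis(a, b) for a in interval_1 for b in interval_2]
temporal = sum(cross) / len(cross)

print(round(spatial, 2), round(temporal, 2))  # spatial ≈ 0.10, temporal ≈ 0.55
```

In this fabricated example the assemblage changes through time far more than it varies from place to place at one time, which is the pattern reported for the Kentucky and Virginia shales.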

An Experimental Design Approach to the Fossil Record

It is a basic tenet of experimental design that observations must be replicated. A single measurement is not very informative because we don’t know how representative it is of the quantity we are trying to measure. We make repeat observations to quantify the amount of error or bias in our measurements. Some error comes from random chance – the luck of the draw, called sampling error. Other sources of error are systematic – for example, a difference in conditions between the two times an experiment is run. There is also a question of how much replication is necessary to make reliable comparisons between experimental treatments. Not enough replicates, and your results lack statistical confidence and are biased; too many replicates, and you are wasting time, energy, and money. As I contemplated the results of my analysis of stability through time in fossil assemblages, I realized that these issues applied to any attempt to measure attributes of the fossil record. Most paleontologists were fixated on random sampling error and assumed that the solution to accurately quantifying the fossil record was to collect lots of fossils – the more the better. At the same time, there was an underlying assumption that fossiliferous layers were essentially homogeneous across large areas and that one large sample was all you needed to describe a fossil assemblage. From my experience sampling fossils in the field, I was convinced that fossil assemblages were “patchy,” meaning that they were spatially variable even at small scales (I had seen repeatedly how two samples from the same fossil horizon just a few feet apart could contain quite different species abundances). Working with my friend Scott Rutherford, who was then at the University of Rhode Island Graduate School of Oceanography, we wrote computer programs to simulate sampling from virtual fossil assemblages. By changing the number and size of our samples and changing our simulated fossil assemblages from homogeneous to spatially patchy, we were able to demonstrate that, for patchy fossil assemblages, replicate samples were critical for obtaining unbiased estimates of species abundances. Furthermore, we were able to show that the number of replicate samples was much more important than the total number of fossils collected for quantifying fossil assemblages, and that most paleontologists were collecting too many fossils from too few places. Since the real work of paleontology comes in extracting and identifying individual fossils from a bulk sample of rock or sediment (collecting bulk samples is as easy as digging a hole), the results of our computer simulations provided valuable guidance to our colleagues studying fossil assemblages. To make our conclusions clear, we published this work with the attention-grabbing subtitle “How to get more statistical bang for your sampling buck” (Bennington and Rutherford, 1999).
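The simulation result can be illustrated with a small Monte Carlo sketch. This is my own simplified reconstruction, not the original programs: the patch structure, species count, and collecting budget below are all invented for illustration.

```python
import random

N_SPECIES = 5

def make_patches(n_patches, rng):
    """A patchy assemblage: each patch has its own species-abundance mix."""
    patches = []
    for _ in range(n_patches):
        w = [rng.random() for _ in range(N_SPECIES)]
        total = sum(w)
        patches.append([x / total for x in w])
    return patches

def true_abundances(patches):
    """Overall species proportions, averaged across all patches."""
    return [sum(p[s] for p in patches) / len(patches) for s in range(N_SPECIES)]

def collect(patches, n_samples, fossils_per_sample, rng):
    """Estimate abundances; each bulk sample is dug from ONE random patch."""
    counts = [0] * N_SPECIES
    for _ in range(n_samples):
        patch = rng.choice(patches)
        for s in rng.choices(range(N_SPECIES), weights=patch, k=fossils_per_sample):
            counts[s] += 1
    total = sum(counts)
    return [c / total for c in counts]

def mean_error(n_samples, fossils_per_sample, trials=300, seed=42):
    """Average total error of the estimated abundances over many trials."""
    rng = random.Random(seed)
    patches = make_patches(20, rng)
    truth = true_abundances(patches)
    err = 0.0
    for _ in range(trials):
        est = collect(patches, n_samples, fossils_per_sample, rng)
        err += sum(abs(e - t) for e, t in zip(est, truth))
    return err / trials

# The same total of 1,000 fossils, spent two different ways:
one_big_sample = mean_error(n_samples=1, fossils_per_sample=1000)
many_replicates = mean_error(n_samples=20, fossils_per_sample=50)
print(one_big_sample, many_replicates)
```

Under these assumptions the replicated design shows markedly lower error than the single large sample, echoing the paper’s conclusion that the number of replicate samples matters more than the total number of fossils collected.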

Testing Sampling Theory Against the Fossil Record

Computer simulations are useful, but inherently less convincing than examples drawn from real data. So my next project was to follow my own advice and apply what I had learned about sampling to actual fossil assemblages. Fortunately, for several years I had been leading my undergraduate students on field trips to a pair of well-known fossil-collecting sites in central New Jersey, along the banks of Big Brook and Poricy Brook. Here, muddy sands of the Navesink Formation were deposited on the seafloor of the inner continental shelf during the Late Cretaceous Period (now exposed above sea level as the Atlantic Coastal Plain) and include a laterally extensive shell bed composed of extinct oyster, brachiopod, clam, and squid fossils. This layer is densely fossiliferous, and the condition of many of the shells shows that they accumulated over a long period of exposure on the seafloor. This is exactly the kind of fossil assemblage that paleontologists would expect to have been time averaged (laterally homogenized by the accumulation of shells over time) and was thus an ideal fossil layer in which to test my ideas about patchiness and sampling. Wading along the streambanks with a team of Hofstra students, we collected bulk samples of the fossil horizon at a hierarchy of spatial scales, from one meter to several kilometers apart. It took a couple of years to extract, identify, and count all the fossils from all the samples, but finally I had the numbers to analyze and graph. Sure enough, the Navesink shell bed was patchy, despite being time averaged, even at the meter scale (Figure 2). Traditional bulk sampling would have failed to quantify the range of patches, leading to biased estimates of the overall species abundance distribution in the fossil assemblage at each of the two localities (Bennington, 2003). This field-based analysis of the structure of fossil assemblages has been widely cited by other paleontologists since it was published, particularly in the research reports of graduate students and newly minted Ph.D.s. I am also gratified to report that the authors of the third edition of the widely used textbook Principles of Paleontology saw fit to devote several pages to a discussion of this work in their chapter on paleoecology. There is no greater satisfaction for a scholar than knowing your work has been of use to others in answering their own research questions.

Figure 3.


Figure 4.

In fall 2007 and again in 2008, I was invited to the Smithsonian National Museum of Natural History to participate in a pair of workshops designed to bridge the gap between paleoecology and ecology and encourage more research combining data from the paleontological and neontological records. Because biological data usually span less than a decade, while paleontological data may incorporate hundreds to hundreds of thousands of years of time, it is important to understand how these different time scales relate. I was included in these deliberations because of my expertise in quantifying spatial and temporal scales in the fossil record. From these discussions emerged a short manifesto of the critical issues of scale in paleoecology that was published as a guide for future research (Bennington et al., 2009).

Recent Research

Although the intricacies of counting fossils have been the focus of most of my published research over the last two decades, other projects have also benefited from a statistical approach to fossil data. For example, I can now proudly tell my mother that I do study dinosaurs, thanks to the research interests of my former undergraduate student Christa Abatemarco. Taking advantage of an opportunity to examine the remarkable collection of dinosaur tracks preserved at Dinosaur State Park in Connecticut, we decided to measure the best-preserved trackways at the site and apply basic statistics to determine how variable measured parameters such as trackway orientation, footprint length, stride, and digit angle are from trackway to trackway. What we found was that there is a lot of variation from footprint to footprint within a trackway, making it difficult to statistically distinguish many of the trackways we measured (Figure 3). In other words, the 12 trackways we measured could have been made by as few as six individual dinosaurs! We also used the variation within trackway parameters to quantify uncertainties around estimates of speed derived from dinosaur trackways (Abatemarco and Bennington, 2009). More recently, I have been collaborating with Dr. Sylvia Silberger in Hofstra’s Department of Mathematics and some high school research students, including Anna Chung, to use fossil bivalves to estimate the size of the sampling domain (the total population of individuals from which the fossil shells in a sample were drawn). The basic idea here is that your odds of collecting both halves of an individual clam are greater if the sampling domain is small. If you can determine that you have, say, 10 matching clam shells out of a sample of 100 shells, then you can estimate the total number of clams from which your 100 shells were drawn. Although somewhat esoteric, this work has potential uses for quantifying how much spatial and temporal mixing a fossil shell bed has experienced (Figure 4). We recently presented some preliminary results from this study (Bennington, Silberger, and Chung, 2010), and the feedback we received from other paleontologists has provided some interesting avenues of inquiry for two new high school research interns. Finally, I should also mention that replicate sampling is an important consideration in a new research project being conducted with Dr. E. Christa Farmer of Hofstra’s Geology Department under the auspices of the Hofstra University Center for Climate Studies (HUCCS). I am assisting Dr. Farmer in collecting sediment cores from locations across the Great South Bay along the inner barrier island shores of southern Long Island. Our aim is to develop a sedimentary record of the history of deposition in the bay for the purposes of detecting prehistoric hurricane events (powerful hurricanes wash sand from the ocean beaches, over the barrier island, into the bay marshes) and describing how the bay environment has changed over time. During our first field season, we collected four cores from the marsh behind Gilgo Beach in Babylon, New York. We found a significant amount of spatial variability in the sedimentary layering from core to core, demonstrating the potential hazards of trying to adequately describe any location with a single core. Again, replication is proving to be the key to sampling the record of the past.
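The bivalve shell-matching idea above can be formalized with a simple method-of-moments calculation. This is my own sketch of the reasoning, not necessarily the estimator used in the study: if n valves are drawn at random from a domain of N clams (2N valves), the chance that both valves of a particular clam land in the sample is n(n-1) / (2N(2N-1)), so the expected number of matched pairs is N times that quantity.

```python
def expected_pairs(N, n):
    """Expected matched valve pairs when n valves are drawn without
    replacement from a domain of N clams (2N valves total)."""
    return N * n * (n - 1) / (2 * N * (2 * N - 1))

def estimate_domain(n, m):
    """Method-of-moments estimate of domain size N, given n sampled
    valves and m observed matched pairs."""
    # Solve m = n(n-1) / (2(2N-1)) for N
    return (n * (n - 1) / (2 * m) + 1) / 2

# The worked example from the text: 10 matched pairs among 100 shells
print(estimate_domain(100, 10))  # -> 248.0
```

Plugging in the text’s example (m = 10 matched pairs among n = 100 shells) gives an estimated domain of 248 clams, under the strong simplifying assumptions that valves are perfectly mixed and every valve is equally likely to be recovered.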


References

Abatemarco, Christa, and Bennington, J Bret. 2009. A reanalysis of footprints and trackways at the Dinosaur State Park megatracksite using basic statistical methods. Geological Society of America, Abstracts with Programs, 41(7): 264.

Bambach, R. K., and Bennington, J Bret. 1996. Do communities evolve? A major question in evolutionary paleoecology. In Evolutionary Paleobiology, D. Jablonski, D. H. Erwin, and J. H. Lipps, eds., University of Chicago Press, pp. 123-160.

Bennington, J Bret. 2003. Transcending patchiness in the comparative analysis of paleocommunities: A test case from the Upper Cretaceous of New Jersey. PALAIOS 18: 22-33.

Bennington, J Bret, and Bambach, R. K. 1996. Statistical testing for paleocommunity recurrence: Are similar fossil assemblages ever the same? Palaeogeography, Palaeoclimatology, Palaeoecology 127: 107-134.

Bennington, J Bret, DiMichele, W.A., Badgley, C., Bambach, R.K., Barrett, P.M., Behrensmeyer, A.K., Bobe, R., Burnham, R.J., Daeschler, E.B., Van Dam, J., Eronen, J.T., Erwin, D.H., Finnegan, S., Holland, S.M., Hunt, G., Jablonski, D., Jackson, S.T., Jacobs, B.F., Kidwell, S.M., Koch, P.L., Kowalewski, M.J., Labandeira, C.C., Looy, C.V., Lyons, K., Novack-Gottshall, P.M., Potts, R., Roopnarine, P.D., Stromberg, C.A.E., Sues, H., Wagner, P.J., Wilf, P., and Wing, S.L. 2009. Spotlight: Critical issues of scale in paleoecology. PALAIOS 24: 1-4.

Bennington, J Bret, and Rutherford, S. D. 1999. Precision and reliability in paleocommunity comparisons based on cluster-confidence intervals: How to get more statistical bang for your sampling buck. PALAIOS 14: 506-515.

Bennington, J Bret, Silberger, Sylvia, and Chung, Anna. 2010. Estimating the size of the sampling domain from the number of unique bivalved individuals in a paleontological sample. Geological Society of America, Abstracts with Programs, 42(5): 139.

Foote, Michael, and Miller, Arnold. 2007. Principles of Paleontology, 3rd ed. W.H. Freeman and Company, 354 p.

J Bret Bennington is a professor of geology at Hofstra University. Although born in Dayton, Ohio, Dr. Bennington spent most of his formative years on the north shore of Long Island, New York, where he learned to appreciate bagels and a good slice of pizza. He is a graduate of Northport High School (Class of 1981) and attended the University of Rochester, graduating with a B.S. in biology-geology in 1985. Heading south for graduate studies, Dr. Bennington joined Richard Bambach’s paleontology group at Virginia Tech. He was hired at Hofstra University in 1993 and earned his Ph.D. in 1995. At Hofstra, Dr. B (as his students call him) teaches courses in physical geology, historical geology, geomorphology, hydrology, dinosaurs, evolution and Charles Darwin, and paleontology. His main research focus is paleoecology and the statistical analysis of fossil assemblages. Dr. Bennington also has ongoing projects on Cretaceous marine communities and environments of the Atlantic Coastal Plain; predation on fossil oysters; tetrapod and dinosaur trackways; and the glacial geomorphology and geological history of Long Island. For the past seven years Dr. Bennington has co-directed a study abroad program in Ecuador and the Galápagos Islands with Hofstra biology professor Dr. Russell Burke. In that time he has had the good fortune to escort groups of students to the islands in the footsteps of Charles Darwin on five separate occasions. With Dr. Burke and Hofstra’s Dean of Library and Information Services Daniel Rubey, he is a co-founder and organizer of Hofstra’s annual Darwin Day celebration and was a co-director of the Darwin’s Reach conference at Hofstra University in spring 2009, commemorating the 200th anniversary of the birth of Charles Darwin. During these events, “Mr. Darwin” has been known to appear, with Dr. Bennington nowhere to be found. In his free time, Dr. Bennington likes to garden, bike, hike, kayak, read, eat, and drink as much craft beer and single malt scotch as is prudent.


Forensic Linguistics at Hofstra: From a Single Course to an International Institute

Photo courtesy of thinkstock.com

Robert Leonard, Professor of Comparative Literature and Languages; Institute and Graduate Program Director
Contributions by Ginny Greenberg, Director of Public Relations, Hofstra University

Justice requires the objective analysis of evidence

Imagine a justice system in which there was no scientific analysis of evidence. The time of death of a victim would be left to seat-of-the-pants guesswork; the type of bullet used would be difficult to determine; DNA analysis would not exist to place a suspect at the scene of the crime or to exonerate one falsely accused. The emerging field of forensic linguistics seeks to improve the analysis of evidence in the same way that forensic medical examiners, ballistics experts, and DNA scientists have advanced the administration of justice through the scientific analysis of their data. Forensic linguists analyze evidence that is language – for example, suicide notes, threatening letters, ransom notes, interrogations, and contracts. The field also has direct application to intelligence work. Hofstra, a longtime leader in forensic linguistics, has recently formalized America’s first graduate program and institute specializing in this field, both headed by longtime Hofstra professor Rob Leonard.

What is meant by “forensic linguistics”?

This term has evolved to describe all applications of the scientific discipline of linguistics to any language data that are related to the law (and not only data associated with “forensic” criminal analysis). Thus, forensic linguists may study jury instructions, the construction of contracts, interrogation procedures, or methods to investigate whether a suspect wrote a threatening letter to a senator. Institute and Graduate Program Director Rob Leonard’s expertise in forensic linguistics has led to his involvement in a number of high-profile cases, including the Taye Diggs-Idina Menzel arson threat letters, the Hummert murder, the McGuire “suitcase” murder, the Alvarez spy case, the doctored tape case involving the Canadian prime minister, and the John Karr episode of the JonBenet Ramsey murder (in which Dr. Leonard’s comparison of the ransom note and Karr’s writing found no compelling link, prior to the release of DNA results that came to the same conclusion). Dr. Leonard has been admitted as an expert in courts in 10 states as well as in Federal District Court. While linguists routinely consult with police and other law enforcement agencies, they are just as often called upon to consult with defense counsel. Certain interrogation techniques can create conditions in which subjects will utter what appears to be a valid confession but is actually only a response to the interrogation. The presuppositions of interrogators can cause them to believe that a subject has “confessed,” while a close examination of the actual language data reveals that the subject has not.

Forensic linguistics is also brought to bear in the areas of international and domestic counterterrorism, intelligence, and counterintelligence, where focused analysis of language can yield highly useful information.

Hofstra has become the leading U.S. university in forensic linguistics research and training

For several years, Hofstra has successfully supported the research and teaching of forensic linguistics to the degree that it has now taken the lead in the United States, and last fall saw the establishment at Hofstra of the first graduate program in forensic linguistics in North America. Still, much research – and worldwide coordination of scholars and practitioners – needs to be done in this emerging field. To address that situation, this spring Hofstra established an institute to focus on forensic linguistics research, especially its role in strategic intelligence analysis. Through the Institute for Forensic Linguistics, Threat Assessment and Strategic Analysis, Hofstra has established connections to a small community of highly accomplished forensic linguistics scholars worldwide, as well as members of the security and intelligence community who value the insights forensic linguistics can deliver. Thus, the emphasis of the Hofstra institute is security and counterterrorism research and application; the Hofstra graduate program supports direct academic training and research.

HISTORY OF FORENSIC LINGUISTICS AT HOFSTRA

Course work begins

Robert Leonard, professor of comparative literature and languages and institute director.

In 2001 linguistics professor Rob Leonard began teaching courses in language and law and forensic linguistics, longtime research interests of his. In January 2004, Dr. Leonard approached his dean, Bernard Firestone of Hofstra College of Liberal Arts and Sciences (HCLAS), and proposed that Hofstra take the lead nationwide in developing forensic linguistics courses and supporting research in the applications of theoretical linguistics to issues of the law. Dr. Leonard often recounts, especially when giving talks at other universities, how at a time when there were at least 30 linguists across the United States teaching forensic linguistics, only Hofstra had the entrepreneurial foresight to support and develop forensic linguistics teaching, research, and internships. HCLAS created the Forensic Linguistics Project, forerunner of the current programs, and with the Hofstra Law School clinics created internships for undergraduate forensic linguistics interns to assist in language-related cases in which the clinics represented indigent members of the local community.

Leonard trains FBI

By 2006 Dr. Leonard had given plenary addresses to threat assessment and law enforcement organizations around the country, had assisted in major criminal cases – enlisted sometimes by the defense, sometimes by the prosecution – and was recruited by the FBI to participate with the Behavioral Analysis Unit-1 (Counterterrorism and Threat Assessment) in its Forensic Linguistics Workshop for Law Enforcement Practitioners at the National Center for the Analysis of Violent Crime, Critical Incident Response Group, in Quantico, Virginia.

Former SSA Fitzgerald joins Hofstra

When Supervisory Special Agent Jim Fitzgerald, program director of forensic linguistic services of the FBI’s Behavioral Analysis Unit-1, retired from the FBI, Dr. Leonard recruited him as faculty in the new Hofstra Forensic Linguistics Program and as co-director of the institute. (The Behavioral Analysis Unit is the subject of the TV show Criminal Minds, for which Fitzgerald serves as technical adviser.) Fitzgerald pioneered forensic linguistics at the FBI, becoming involved through his groundbreaking work on the Unabomber and other cases that showed linguistic evidence to be pivotal. The FBI created a forensic linguistics program several years ago in response to the real-world demands of counteracting criminal and terrorist attacks, in which, time and again, linguistic evidence proved to be pivotal. For example:

Terror: The Unabomber, DC Sniper, and anthrax cases; communications from Al Qaeda

Threat Assessment: Threats to government, officials, workplaces, schools, and private individuals

Criminal Communications: Extortion, ransom notes, criminal intelligence disinformation, intercepted communications

Through FBI workshops, Hofstra workshops, and other intensive workshops, Fitzgerald and Leonard as a team have trained the ATF, CIA, U.S. Capitol Police, U.S. Secret Service, NYPD, New Jersey State Troopers, Homeland Security, the UN, the Mounties, and the FBI NYFO.

Hofstra’s new graduate program

Last fall, Hofstra announced the founding of the first North American graduate program specializing in forensic linguistics. In this area the United States has lagged; such graduate programs already exist in the UK and Spain, and there is interest in China, in Kenya, and throughout Europe. The Hofstra program, the Master of Arts in Linguistics: Forensic Linguistics, provides rigorous training in the core competencies of linguistics – phonetics, phonology, grammar, syntax, semantics, pragmatics – and teaches how a mastery of those theoretical subfields can advance the cause of justice by striving to analyze language evidence as objectively and scientifically as possible. Adviser to the program is Dr. Leonard’s colleague and research partner, Dr. Roger W. Shuy, Distinguished Research Professor of Linguistics, Emeritus, of Georgetown University. Known as the foremost forensic linguist in the United States, he has consulted on some 600 cases and testified as a linguistics expert witness in 26 states, before the U.S. Senate and U.S. House of Representatives in impeachment trials of U.S. senators and federal judges, and in international criminal tribunal trials. Dr. Shuy has written or edited some 40 books on linguistics, including 10 on forensic linguistics. In addition to former SSA Fitzgerald, Dr. Leonard has recruited a group of dedicated and highly trained linguists from within Hofstra: sociolinguistics is taught by Professor Evelyn Altenberg, dialectology by Dr. Gregory Kershner, general linguistics by Dr. Mari Fujimoto, and phonology, semantics, and pragmatics by Professor Josef Fioretta.

The Hofstra Institute for Forensic Linguistics, Threat Assessment and Strategic Analysis

The institute is an important complement to the M.A. in Forensic Linguistics, for its mandate is far broader than the M.A. program’s. Hofstra’s new institute has the potential to play a pivotal international role in forensic linguistics, threat management, counterterrorism, and related national security competencies. The main foci of the institute are research and training in:

• Forensic linguistics in investigation, prosecution, and interdiction
• Forensic linguistics-enhanced threat assessment techniques

Fitzgerald and Leonard sample cases that demonstrate the value of forensic language analysis in counterterrorism, both international and domestic

Leonard: Ideologue domestic terrorists
Bomb and other threats of violence were made not only to principals whose activities went against the values of the threateners, but also against any organizations, their workers, and the families of workers who had any dealings of any kind with the organizations of the target principals. Analyses of the identities of threateners were outlined, and strategies for tracing and legally obtaining language samples were conceived.

Threats against sitting judges
Forensic linguistic analyses of death threats against judges led to suspect identifications in an East Coast U.S. city.

Courthouse bomb threats
Forensic linguistics allowed narrowing of the suspect pool in phoned-in mass destruction threats in a suburban New York area county.

Fitzgerald: 9/11 attacks on the United States
A careful linguistic analysis of the writings of most of the 18 hijackers indicated that only several of the hijackers – presumably the four leaders, one of whom was on each of the four planes – were aware that they were on a suicide mission. Leader Mohamed Atta’s writings, and inspirational talks he gave to the other hijackers prior to the seizing of the planes, gave the non-ringleaders permission to steal money and jewelry from the passengers, for they were infidels and it would advance the cause of the hijackers’ struggle against the United States. This aided in maintaining control of the 12 non-ringleaders, who might otherwise have backed out of the hijacking missions. Clearly there is no motivation to steal money and jewelry if one is aware one is on a suicide mission.

Unabom
Concepts of forensic linguistics and textual analysis were incorporated for the first time by the FBI in the 1996 case of Theodore Kaczynski, aka the Unabomber. This was the first time in U.S. criminal court that linguistic analysis was utilized in a probable cause affidavit to obtain a search warrant. That search warrant authorized the FBI to enter Kaczynski’s Montana cabin and seize bomb-making equipment and other evidence linking him to the 16 Unabom bombings.

Kidnapping of reporter Daniel Pearl
In a linguistic profile of a letter that accompanied the photos of Pearl with a gun to his head, forensic linguistics demonstrated that the author was a non-native speaker of English who had learned UK English; indeed, it turned out the letter had been written by a Pakistani educated in the UK. Since the letter was posted on the Internet, copycats had access to the documents, and forensic linguistics was utilized to separate those out.

Washington, D.C., sniper
Linguistic analysis of the 19-word tarot card suggested that there were possibly two snipers – one older and one younger; this turned out to be the case. A linguistic analysis of the subsequent letters sent to the police suggested that at least one of the authors was African American and that the motivation was not financial, despite professed indications that it was. They were, indeed, African American, and the motive was to kill the older shooter’s wife.

2005 assassination of former Lebanese prime minister Rafic Hariri
A letter of responsibility, left at the bomb scene in Arabic, was translated; a videotape of the suicide bomber speaking was retrieved. Were the letter and the videotape authored by the same person or team of persons? The spoken communication of the bomber was consistent with the written letter.

1996 Atlanta Olympic bombing (and subsequent bombings of a gay bar and abortion clinics)
Forensic linguistics was utilized to analyze the “Army of God” letters received by the media during the timeframe of these bombings; the letters were forensically compared to the writings of Eric Rudolph. Rudolph’s letters and those involved in the bombings themselves were judged consistent.


Institute Director Robert A. Leonard, Ph.D., Hofstra professor of linguistics for more than 20 years, has also taught at Columbia University, where he earned a doctorate. He possesses both the academic theoretical foundations of the field as well as the requisite experience of applying those foundations to the real world. An internationally known forensic linguistics expert, he has been qualified in Federal District Court and a number of state courts; has consulted for the FBI, UK government, Pennsylvania State Police, NYPD Hate Crimes Task Force, New Jersey Office of the Attorney General, U.S. Attorney’s Office (Eastern District of NY), New Yorker magazine, ABC-TV’s Investigative Unit, and numerous other police departments and major law firms. Leonard has trained FBI and international agents in forensic linguistic techniques at the FBI Behavioral Analysis Unit-1 headquarters in Quantico, Virginia, and has participated with the FBI to give training in New York to U.S. Secret Service, NYPD, NJ State Troopers, Homeland Security, UN, FBI NYFO, and ATF. Institute Co-Director James R. Fitzgerald, M.Sc., is a former FBI supervisory special agent and program director of forensic linguistic services of the FBI’s Behavioral Analysis Unit-1 BAU-1. (Fitzgerald left the FBI in 2007.) The BAU-1 of the Critical Incident Response Group is a component of the National Institute for the Analysis of Violent Crime at the FBI Academy and focuses its efforts on counterterrorism, threat assessment, and other forensic linguistics services. BAU-1 personnel offer services, including behaviorally oriented investigative assistance in counterterrorism matters, threat assessment/textual analysis, weapons of mass destruction, extortions, product tampering, arson and bombing matters, and stalking cases, to the FBI; other international, federal, state, and local law enforcement and intelligence agencies; and the military. 
Fitzgerald is the only fully credentialed profiler and forensic linguist in the history of the FBI. It was Fitzgerald's groundbreaking forensic linguistic work on the Unabomber case that led the FBI to recognize forensic linguistics as a pivotal investigative tool. For several years he held annual forensic linguistics seminars at Quantico for agents from the FBI and around the world. He currently serves as violent crime consultant and forensic linguist at The Academy Group, Inc., of Manassas, Virginia.

The institute seeks to:

• Conduct cutting-edge research and provide training, course work, and workshops for counterterrorism and law enforcement officials and researchers, centered around the core competencies of forensic linguistics and threat assessment, two internationally recognized tools to combat terrorism.

• Hold international workshops in specific language intelligence competencies.

• Offer an Executive Certificate Program in Linguistic Analysis and Security Studies.

• Undertake analyses of incident-related critical communications.

• Plan and execute research programs to expand analytical capabilities.

• Facilitate and coordinate cooperative efforts by U.S. and foreign partner institutions and foster allied offshore intelligence initiatives.

Chief among its goals is a rapid-response international task force of forensic linguists and allied specialists who can react quickly to situations requiring analysis and intervention.



Sample applications of forensic linguistics-enhanced threat assessment:

THREAT LEVEL ANALYSIS: State-of-the-art techniques, based on thousands of FBI cases. When a threat is made, what should be the response? How likely is the threat to be carried out? How can the author be found?

LINGUISTIC DEMOGRAPHIC PROFILES: Increase contextual information and narrow suspect pools. Language structure, grammar, vocabulary, syntax, and phonological representation can potentially reveal a writer's regional and local geographic origin, education level, occupational training, gender, native language, and other features.

AUTHORSHIP ANALYSIS: Further narrows suspect pools and can support investigation and prosecution.

The Hofstra institute – and especially its International Forensic Linguistic Task Force – offers an array of unique and distinctive services not currently offered by any U.S. or allied government agency of which we are aware. As inaugural efforts of the institute, Director Rob Leonard recently traveled to London to train counterterrorism agents in the UK in forensic linguistic techniques, and similarly addressed a gathering of counterterrorism experts in New York. The institute also hosted an on-campus presentation on cybersecurity by Shawn Henry, a top FBI official. Hofstra has the opportunity to become a leading center in the world for forensic linguistics-related teaching, research, and application.
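Authorship analysis of the kind described above is often introduced through simple stylometric measures. The sketch below is a minimal, hypothetical illustration (not the institute's or the FBI's actual methodology): it compares two texts by the relative frequencies of a handful of common function words, a classic stylometric signal, using cosine similarity. The word list and all function names are assumptions for demonstration only; real casework draws on far richer linguistic evidence.

```python
from collections import Counter
import math

# A tiny, illustrative set of English function words; real stylometric
# analyses use much larger, validated lists alongside grammar and syntax.
FUNCTION_WORDS = ["the", "a", "of", "and", "to", "in", "that", "it", "is", "was"]

def profile(text: str) -> list[float]:
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)  # avoid division by zero on empty input
    return [counts[w] / total for w in FUNCTION_WORDS]

def similarity(a: str, b: str) -> float:
    """Cosine similarity between two texts' function-word profiles (0.0-1.0)."""
    pa, pb = profile(a), profile(b)
    dot = sum(x * y for x, y in zip(pa, pb))
    na = math.sqrt(sum(x * x for x in pa))
    nb = math.sqrt(sum(x * x for x in pb))
    if na == 0 or nb == 0:
        return 0.0  # one text contains no function words at all
    return dot / (na * nb)
```

In use, a questioned document would be compared against known writing samples from each candidate author; a higher score suggests, but never by itself proves, common authorship.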


Hofstra at a Glance

LOCATION: Hempstead, Long Island, 25 miles east of New York City. Telephone: 516-463-6600
FOUNDING DATE: 1935
PRESIDENT: Stuart Rabinowitz, J.D.
CHARACTER: A private, nonsectarian, coeducational university
ACCESSIBILITY: Hofstra is 100 percent program accessible to persons with disabilities.
COLLEGES AND SCHOOLS: Hofstra College of Liberal Arts and Sciences; Frank G. Zarb School of Business; School of Communication; School of Education, Health and Human Services; Maurice A. Deane School of Law at Hofstra University; School for University Studies; Hofstra University Honors College; Hofstra University Continuing Education; and Hofstra North Shore-LIJ School of Medicine at Hofstra University.
FACULTY: There are 1,165 faculty members, of whom 533 are full-time. Ninety-three percent of full-time faculty hold the highest degree in their fields.
STUDENT BODY: Full-time undergraduate enrollment of 6,804. Total University enrollment, including part-time undergraduate, graduate, School of Medicine and School of Law, is about 12,000. Male-female ratio is 44-to-56.
DEGREES: Bachelor's degrees are offered in about 140 program options. Graduate degrees, including Ph.D., Ed.D., Psy.D., Au.D., J.D., and M.D. degrees, advanced certificates and professional diplomas, are offered in about 150 program options.
LIBRARIES: The Hofstra libraries contain 1.2 million print volumes and provide 24/7 online access to more than 49,000 full-text journals and 47,000 electronic books.
JANUARY AND SUMMER SESSIONS: Hofstra offers a January session and three summer sessions between May and August.

Trustees of Hofstra University
As of December 2011

OFFICERS
Janis M. Meyer,* Chair
James E. Quinn,* Vice Chair
Peter G. Schiff, Vice Chair
David S. Mack,* Secretary
Stuart Rabinowitz, President

MEMBERS
Alan J. Bernon*
George W. Bilicic, Jr.
Tejinder Bindra
Robert F. Dall*
Helene Fortunoff
Steven J. Freiberg*
Martin B. Greenberg*
Joseph M. Gregory*
Leo A. Guthart
Peter S. Kalikow*
Abby Kenigsberg
Arthur J. Kremer
Karen L. Lutz
Donna M. Mendes*
John D. Miller*
Marilyn B. Monter*
Martha S. Pope
Edwin C. Reed
Robert D. Rosenthal*
Debra A. Sandler*
Thomas J. Sanzone*
Joseph Sparacio*
Frank G. Zarb*

DELEGATES
William F. Nirode, Speaker of the Faculty
Stuart L. Bass,* Chair, University Senate Executive Committee
Elizabeth K. Venuti, Chair, University Senate Planning and Budget Committee
David Zuniga, President, Student Government Association
Alexander Zelinski, Vice President, Student Government Association
Frederick E. Davis, Jr.,* President, Alumni Organization

James M. Shuart,* President Emeritus
Wilbur Breslin, Trustee Emeritus
Emil V. Cianciulli,* Chair Emeritus
John J. Conefry, Jr., Chair Emeritus
Maurice A. Deane,* Chair Emeritus
George G. Dempster,* Chair Emeritus
Joseph L. Dionne,* Trustee Emeritus
Bernard Fixler,* Trustee Emeritus
Florence Kaufman, Trustee Emerita
Walter B. Kissinger, Trustee Emeritus
Ann M. Mallouk,* Chair Emerita
Thomas H. O'Brien, Trustee Emeritus
Arnold A. Saltzman, Trustee Emeritus
Norman R. Tengstrom,* Trustee Emeritus

*Hofstra alumni


Hofstra has become the leading American university in forensic linguistics research and training.

