Reliability Engineering: Probabilistic Models and Maintenance Methods
Second Edition
Joel A. Nachlas
CRC Press is an imprint of Taylor & Francis Group, an Informa business
No claim to original U.S. Government works
Printed on acid-free paper
Version Date: 20161019
International Standard Book Number-13: 978-1-4987-5247-3 (Pack - Book and Ebook)
This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com
and the CRC Press Web site at http://www.crcpress.com
Dedicated to the memory of Betty Nachlas
Preface
The motivation for the preparation of a second edition was my wish to expand the treatment of several topics while maintaining an integrated introductory resource for the study of reliability evaluation and maintenance planning. The focus across all of the topics treated is the use of analytical methods to support the design of dependable and efficient equipment and the planning for the servicing of that equipment. The orientation of the topical development is that probability models provide an effective vehicle for portraying and evaluating the variability that is inherent in the performance and longevity of equipment.
The book is intended to support either an introductory graduate course in reliability theory and preventive maintenance planning or a sequence of courses that address these topics. A fairly comprehensive coverage of the basic models and of various methods of analysis is provided. An understanding of the topics discussed should permit the reader to comprehend the literature describing new and advanced models and methods.
Notwithstanding the emphasis upon initial study, the text should also serve well as a resource for practicing engineers. Engineers who are involved in the design process should find a coherent explanation of the reliability and maintenance issues that will influence the success of the devices they create. Similarly, engineers responsible for the analysis and verification of product reliability or for the planning of maintenance support of fielded equipment should find the material presented here to be relevant and easy to access and use.
In preparing this second edition, the treatment of statistical methods for evaluating reliability has been expanded substantially. Several methods for constructing confidence intervals as part of the parametric estimation effort are described and methods for treating data derived from operating repairable devices have also been added. In addition, the analysis of nonstationary models of repairable equipment maintenance has been updated and expanded. These expansions along with numerous other minor improvements to the text should make this book an even more useful resource for both students and practitioners.
The background required of the reader is a sound understanding of probability. This subsumes capability with calculus. More specifically, the reader should have an understanding of distribution theory, Laplace transforms, convolutions, stochastic processes, and Markov processes. It is also worth mentioning that the use of the methods discussed in this book often involves substantial computational effort, so facility with numerical methods and access to efficient mathematical software is desirable.
One caveat concerning the coverage here is that the treatment is strictly limited to hardware. Reliability and maintenance models have been developed for applications to software, humans, and services systems. No criticism of those efforts is intended but the focus here is simply hardware.
The organization of the text is reasonably straightforward. The elementary concepts of reliability theory are presented sequentially in Chapters 1 through 6. Following this, the commonly used statistical methods for evaluating component reliability are described in Chapters 7 and 8. Chapters 9 through 13 treat repairable systems and maintenance planning models. Here again the presentation is sequential in that simple failure models precede those that include preventive actions and the renewal cases are treated before the more realistic nonrenewal cases. In the final chapter, four interesting special topics, including warranties, are discussed. It is worth noting that four appendices that address aspects of numerical computation are provided. These should be quite useful to the reader.
Naturally, many people have contributed to the preparation of this text. The principal factor in the completion of this book was the support and encouragement of my wife Beverley. An important practical component of my success was the support of Virginia Tech, especially during sabbaticals when progress with writing is so much easier.
I acknowledge the significant computational capability provided to me by the Mathematica software. Many of the analyses included in this text would have been much more taxing or even impossible without the strength and efficiency the Wolfram software provides.
I also wish to extend my thanks directly to three of my students, each of whom contributed to my efforts. Edvin Beqari stimulated my increased interest in and analysis of the diffusion models of degradation. He also directed much of my analysis of that topic. Elliott Mitchell-Colgan helped to expand the sets of exercises included at the end of the chapters. Paul D’Agostino invested very many hours in verifying a majority of the complicated numerical analyses used for examples or for exercise solutions.
I express my profound gratitude to all of my graduate students who have taught me so much about these topics over the years. May we all continue to learn and grow and to enjoy the study of this important subject.
Author
Joel A. Nachlas received his BES from Johns Hopkins University in 1970, his MS in 1972 and his PhD in 1976, both from the University of Pittsburgh. He served on the faculty of the Grado Department of Industrial and Systems Engineering at Virginia Tech for 41 years and retired in March 2016. His research interests are in the applications of probability and statistics to problems in reliability and quality control. In addition to his normal teaching activities during his time at Virginia Tech, he served as the coordinator for the department’s graduate program in operations research and for their dual master’s degree that is operated with École des Mines de Nantes in France. From 1992 to 2011, he regularly taught reliability theory at the École Polytechnique de l’Université Nice Sophia Antipolis. He is the coauthor of more than 50 refereed articles, has served in numerous editorial and referee capacities, and has lectured on reliability and maintenance topics throughout North America and Europe.
1 Introduction
Although we rarely think of it, reliability and maintenance are part of our everyday lives. The equipment, manufactured products, and fabricated infrastructure that contribute substantively to the quality of our lives have finite longevity. Most of us recognize this fact, but we do not always fully perceive the implications of finite system life for our efficiency and safety. Many, but not all, of us also appreciate the fact that our automobiles require regular service, but we do not generally think about the fact that roads and bridges, smoke alarms, electricity generation and transmission devices, and many other machines and facilities we use also require regular maintenance.
We are fortunate to live at a time in which advances in the understanding of materials and energy have resulted in the creation of an enormous variety of sophisticated products and systems, many of which (1) were inconceivable 100 or 200 or even 20 years ago; (2) contribute regularly to our comfort, health, happiness, efficiency, or success; (3) are relatively inexpensive; and (4) require little or no special training on our part. Naturally, our reliance on these devices and systems is continually increasing and we rarely think about failure and the consequences of failure.
Occasionally, we observe a catastrophic failure. Fatigue failures of the fuselage of aircraft [1], the loss of an engine by a commercial jet [1], the Three Mile Island [1] and Chernobyl [1] nuclear reactor accidents, and the Challenger [2] and Columbia [3] space shuttle accidents are all widely known examples of catastrophic equipment failures. The relay circuit failure at the Ohio power plant that precipitated the August 2003 power blackout in the northeastern United States and in eastern Canada [4] is an example of a system failure that directly affected millions of people. When these events occur, we are reminded dramatically of the fallibility of the physical systems on which we depend.
Nearly everyone has experienced less dramatic product failures such as that of a home appliance, the wear out of a battery, and the failure of a light bulb. Many of us have also experienced potentially dangerous examples of product failures such as the blowout of an automobile tire.
Reliability engineering is the study of the longevity and failure of equipment. Principles of science and mathematics are applied to the investigation of how devices age and fail. The intent is that a better understanding of device failure will aid in identifying ways in which product designs can be improved to increase life length and limit the adverse consequences of failure. The key point here is that the focus is upon design. New product and system designs must be shown to be safe and reliable prior to their fabrication and use. A dramatic example of a design for which the reliability was not properly evaluated is the well-known case of the Tacoma Narrows Bridge, which collapsed into the Puget Sound in November 1940, a few months after its completion [1].
A more recent example of a design fault with significant consequences is the 2013 lithium-ion battery fire that occurred on a new Boeing 787 aircraft while it was parked at the Boston airport [5]. Fortunately, the plane was empty, so no one was injured, but the fire and two subsequent fires of the same type resulted in all 787s being grounded until a modification to the battery containment was made. The cost to the airlines using the planes was estimated to be $1.1 million per day.
The study of the reliability of an equipment design also has important economic implications for most products. As Blanchard [6] states, 90% of the life cycle costs associated with the use of a product are fixed during the design phase of a product’s life.
Similarly, an ability to anticipate failure can often imply the opportunity to plan for an efficient repair of equipment when it fails or even better to perform preventive maintenance in order to reduce failure frequency.
There are many examples of products for which system reliability is far better today than it was previously. One familiar example is the television set, which historically experienced frequent failures and which, at present, usually operates without failure beyond its age of obsolescence. Improved television reliability is certainly due largely to advances in circuit technology. However, the ability to evaluate the reliability of new material systems and new circuit designs has also contributed to the gains we have experienced.
Perhaps the most well-recognized system for which preventive maintenance is used to maintain product reliability is the commercial airplane. Regular inspection, testing, repair, and even overhaul are part of the normal operating life of every commercial aircraft. Clearly, the reason for such intense concern for the regular maintenance of aircraft is an appreciation of the influence of maintenance on failure probabilities and thus on safety.
On a personal level, the products for which we are most frequently responsible for maintenance are our automobiles. We are all aware of the inconvenience associated with an in-service failure of our cars and we are all aware of the relatively modest level of effort required to obtain the reduced failure probability that results from regular preventive maintenance.
It would be difficult to overstate the importance of maintenance and especially preventive maintenance. It is also difficult to overstate the extent to which maintenance is undervalued or even disliked. Historically, repair and especially preventive maintenance have often been viewed as inconvenient overhead activities that are costly and unproductive. Very rarely have the significant productivity benefits of preventive maintenance been recognized and appreciated. Recently, there have been reports [7–9] that suggest that it is common experience for factory equipment to lose 10%–40% of productive capacity to unscheduled repairs and that preventive maintenance could drastically reduce these losses. In fact, the potential productivity gains associated with the use of preventive maintenance strategies to reduce the frequency of unplanned failures constitute an important competitive opportunity [9]. The key to exploiting this opportunity is careful planning based on cost and reliability.
This book is devoted to the analytical portrayal and evaluation of equipment reliability and maintenance. As with all engineering disciplines, the language of description is mathematics. The text provides an exploration of the mathematical models that are used to portray, estimate, and evaluate device reliability and those that are used to describe, evaluate, and plan equipment service activities. In both cases, the focus is on design. The models of equipment reliability are the primary vehicle for recognizing deficiencies or opportunities to improve equipment design. Similarly, using reliability as a basis, the models that describe equipment performance as a function of maintenance effort provide a means for selecting the most efficient and effective equipment service strategies.
These examples of various failures share some common features and they also have differences that are used here to delimit the extent of the analyses and discussions. Common features are that (1) product failure is sufficiently important that it warrants engineering effort to try to understand and control it and (2) product design is complicated, so the causes and consequences of failure are not obvious.
There are also some important differences among the examples. Taking an extreme case, the failure of a light bulb and the Three Mile Island reactor accident provide a defining contrast. The Three Mile Island accident was precipitated by the failure of a physical component of the equipment. The progress and severity of the accident were also influenced by the response by humans to the component failure and by established decision policies. In contrast, the failure of a light bulb and its consequences are not usually intertwined with human decisions and performance. The point here is that there are very many modern products and systems for which operational performance depends upon the combined effectiveness of several of (1) the physical equipment, (2) human operators, (3) software, and (4) management protocols.
It is both reasonable and prudent to attempt to include the evaluation of all four of these factors in the study of system behavior. However, the focus of this text is analytical and the discussions are limited to the behavior of the physical equipment.
Several authors have defined analytical approaches to modeling the effects of humans [10] and of software [11] on system reliability. The motivation for doing this is the view that humans cause more system failures than does equipment. This view seems quite correct. Nevertheless, implementation of the existing mathematical models of human and software reliability requires the acceptance of the view that probability models appropriately represent dispersion in human behavior. In the case of software, existing models are based on the assumption that probability models effectively represent hypothesized evolution in software performance over time. The appropriateness of both of these points of view is subject to debate. It is considered here that the human operators of a system do not comprise a homogeneous population for which performance is appropriately modeled using a probability distribution. Similarly, software and operating protocols do not evolve in a manner that one would model using probability functions. As the focus of this text is the definition of representative probability models and their analysis, the discussion is limited to the physical devices.
The space shuttle accidents serve to motivate our focus on the physical behavior of equipment. The 1986 Challenger accident has been attributed to the use of the vehicle in an environment that was more extreme than the one for which it was designed. The 2003 Columbia accident is believed to have been the result of progressive deterioration at the site of damage to its heat shield. Thus, the physical design of the vehicles and the manner in which they were operated were incompatible, and it is the understanding of this interface that we obtain from reliability analysis.
The text is organized in four general sections. The early chapters describe in a stepwise manner the increasingly complete models of reliability and failure. These initial discussions include the key result that our understanding of design configurations usually implies that system reliability can be studied at the component level. This is followed by an examination of statistical methods for estimating reliability. A third section is comprised of five chapters that treat increasingly more complicated and more realistic models of equipment maintenance activities. Finally, several advanced topics are treated in the final chapter.
It is hoped that this sequence of discussions will provide the reader with a basis for further exploration of the topics treated. The development of new methods and models for reliability and maintenance has expanded our understanding significantly and is continuing. The importance of preventive maintenance for safety and industrial productivity is receiving increased attention. The literature that is comprised of reports of new ideas is expanding rapidly. This book is intended to prepare the reader to understand and use the new ideas as well as those that are included here.
As a starting point, note that it often happens that technical terms are created using words that already have colloquial meanings that do not correspond perfectly with their technical usage. This is true of the word reliability. In the colloquial sense, the word reliable is used to describe people who meet commitments. It is also used to describe equipment and other inanimate objects that operate satisfactorily. The concept is clear but not particularly precise. In contrast, for the investigations we undertake in this text, the word reliability has a precise technical definition. This definition is the departure point for our study.
2 System Structures
The point of departure for the study of reliability and maintenance planning is the elementary definition of the term reliability. As mentioned in Chapter 1, the technical definition of reliability is similar to the colloquial definition but is more precise. Formally, the definition is as follows:
Definition 2.1
Reliability is the probability that a device properly performs its intended function over time when operated within the environment for which it is designed.
Observe that there are four specific attributes of this definition of reliability. The four attributes are (1) probability, (2) proper performance, (3) qualification with respect to environment, and (4) time. All four are important. Over this and the next several chapters, we explore a series of algebraic models that are used to represent equipment reliability. We develop the models successively by sequentially including in the models each of the four attributes identified in the previous definition. To start, consider the representation of equipment performance to which we refer as function.
2.1 Status Functions
The question of what constitutes proper operation or proper function for a particular type of equipment is usually specific to the equipment. Rather than attempt to suggest a general definition for proper function, we assume that the appropriate definition for a device of interest has been specified and we represent the functional status of the device as

$$\phi = \begin{cases} 1 & \text{if the device functions properly} \\ 0 & \text{if the device is failed} \end{cases}$$
Note that this representation is intentionally binary. We assume here that the status of the equipment of interest is either satisfactory or failed. There are many types of equipment for which one or more derated states are possible. Discussion of this possibility is postponed until the end of this chapter. We presume that most equipment is comprised of components and that the status of the device is determined by the status of the components. Accordingly, let n be the number of components that make up the device and define the component status variables, x i , as
$$x_i = \begin{cases} 1 & \text{if component } i \text{ is functioning} \\ 0 & \text{if component } i \text{ is failed} \end{cases}$$
so the set of n components that comprise a device is represented by the component status vector
$$\mathbf{x} = \{x_1, x_2, \ldots, x_n\}$$
Next, we represent the dependence of the device status on the component status as the function
$$\phi = \phi(\mathbf{x})$$
and the specific form for the function is determined by the way in which the components interact to determine system function. In the discussions that follow, ϕ(x) is referred to as a “system structure function” or as a “system status function” or simply as a “structure.” In all cases, the intent is to reflect the dependence of the system state upon the states of the components that comprise the system. A parenthetical point is that the terms “device” and “system” are used here in a generic sense and may be interpreted as appropriate.
An observation concerning the component status vector is that it is defined here as a vector of binary elements, so that an n-component system has 2^n possible component status vectors. For example, a three-component system has 2^3 = 8 component status vectors. They are
{1, 1, 1} {1, 0, 1}
{1, 1, 0} {1, 0, 0}
{0, 1, 1} {0, 0, 1}
{0, 1, 0} {0, 0, 0}
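These vectors can be enumerated mechanically for any n; the following is a minimal Python sketch (the function name is an illustrative choice, not part of the text):

```python
from itertools import product

def status_vectors(n):
    """Return every possible binary component status vector of length n."""
    return list(product((1, 0), repeat=n))

vectors = status_vectors(3)
print(len(vectors))  # 2**3 = 8 vectors for a three-component system
```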
Each component status vector yields a corresponding value for the system status function, ϕ(x).
From a purely mathematical point of view, there is no reason to limit the definition of the system status function, so forms that have no practical interpretation can be constructed. In order to avoid any mathematically correct but practically meaningless forms for the system status function, we limit our attention to coherent systems.
Definition 2.2
A coherent system is one for which the system structure function is nondecreasing in each of its arguments.
This means that for each element of the component status vector, x_i, there exists a realization of the vector for which

$$\phi(1_i, \mathbf{x}) > \phi(0_i, \mathbf{x})$$

where $(1_i, \mathbf{x})$ denotes the vector $\mathbf{x}$ with its $i$th entry set to 1 and $(0_i, \mathbf{x})$ denotes the same vector with its $i$th entry set to 0.
Throughout our study of reliability, we will limit our attention to algebraic forms that comply with this restriction.
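For small systems, Definition 2.2 can be verified exhaustively: a structure is coherent if its status function never decreases when a failed component is restored, and if every component matters for at least one realization of the status vector. The following brute-force Python sketch illustrates such a check under these assumptions (function names are illustrative, not from the text):

```python
from itertools import product

def is_coherent(phi, n):
    """Check that phi is nondecreasing in each argument and that
    every component is relevant for at least one status vector."""
    for i in range(n):
        relevant = False
        for x in product((0, 1), repeat=n):
            x_up = tuple(1 if j == i else x[j] for j in range(n))
            x_down = tuple(0 if j == i else x[j] for j in range(n))
            if phi(x_up) < phi(x_down):   # decreasing in argument i
                return False
            if phi(x_up) > phi(x_down):   # component i is relevant
                relevant = True
        if not relevant:
            return False
    return True

print(is_coherent(lambda x: min(x), 3))  # True: series structure is coherent
```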
Generally, we expect that the physical relationships among the components determine the algebraic form of the system status function, ϕ(x).
2.2 System Structures and Status Functions
Among reliability specialists, it is generally accepted that there are four generic types of structural relationships between a device and its components. These are (1) series, (2) parallel, (3) k out of n, and (4) all others. Consider each of these forms in sequence.
2.2.1 Series Systems
The simplest and most commonly encountered configuration of components is the series system. The formal definition of a series system is as follows:
Definition 2.3
A series system is one in which all components must function properly in order for the system to function properly.
The conceptual analog to the series structure is a series-type electrical circuit. However, unlike a series circuit, it is specifically not implied here that the components must be physically connected in sequence. Instead, the point of emphasis is the requirement that all components function. An example of a series system in which the components are not physically connected is the set of legs of a three-legged stool. Another is the set of tires on an automobile. In both examples, the components are not physically connected to each other in a linear configuration. Nevertheless, all of the components must function properly for the system to operate.

FIGURE 2.1
Reliability block diagram for a series system.
The concept of a series circuit is commonly used to define a graphical representation of a series structure. For the three components, this is presented in Figure 2.1.
In general, representations of system structures such as the one in Figure 2.1 are referred to as reliability block diagrams. They are often helpful in understanding the relationships between components.
For the series structure, the requirement that all components must function in order for the system to function implies that a logical algebraic form for the system structure function is

$$\phi(\mathbf{x}) = \min_i \, x_i$$

but an equivalent and more useful form is

$$\phi(\mathbf{x}) = \prod_{i=1}^{n} x_i$$
As examples, consider a three-component series system and the cases
x1 = x2 = 1, x3 = 0, and ϕ(x) = 0
x1 = 1, x2 = x3 = 0, and ϕ(x) = 0
x1 = x2 = x3 = 1, and ϕ(x) = 1
Only the functioning of all components yields system function.
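Since a series system functions only when every one of its components functions, its binary status is the product of the component status variables, and the example cases above can be reproduced directly. A minimal Python sketch (the function name is illustrative):

```python
from math import prod

def series_status(x):
    """Series structure: phi(x) = product of the x_i.

    The product is 1 only when every component status is 1."""
    return prod(x)

print(series_status((1, 1, 0)))  # 0: any failed component fails the system
print(series_status((1, 1, 1)))  # 1: all components functioning
```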
2.2.2 Parallel System
The second type of structure is the parallel structure. The conceptual analog is again the corresponding electrical circuit and the definition is as follows:
Definition 2.4
A parallel system is one in which the proper function of any one component implies system function.
It is again emphasized that no specific physical connections among the components are implied by the definition or by the reliability block diagram. The reliability block diagram for a three-component parallel system is shown in Figure 2.2.
One example of a parallel system is the set of two engines on a two-engine airplane. As long as at least one engine functions, flight is sustained. However, this example implies that simply maintaining flight corresponds to proper function. It is a worthwhile debate to discuss when it is and when it is not an appropriate example of a parallel system.
Another example that is more appealing is the fact that the communications satellites presently in use have triple redundancy for each communications channel. That is, three copies of each set of transmitting components are installed in the satellite and arranged in parallel in order to assure continued operation of the channel. In view of the fact that this implies significant weight increases over the use of only single-configuration transmitters, the satellite provides an example of the importance of reliability as well as one of a parallel structure.
In a similar manner to that for the series system, the structure function for the parallel system may be defined as

$$\phi(\mathbf{x}) = \max_i \, x_i$$

An alternate form that is more amenable to analytical manipulation can be defined using a shorthand developed by Barlow and Proschan [12]:

$$\phi(\mathbf{x}) = \coprod_{i=1}^{n} x_i$$

FIGURE 2.2
Reliability block diagram for a three-component parallel system.

The inverted product symbol, ∐, is called "ip" and is defined as

$$\coprod_{i=1}^{n} x_i = 1 - \prod_{i=1}^{n} (1 - x_i)$$
Once mastered, this shorthand is very convenient. Example cases for the three-component parallel system are
x1 = x2 = 1, x3 = 0, and ϕ(x) = 1
x1 = 1, x2 = x3 = 0, and ϕ(x) = 1
x1 = x2 = x3 = 0, and ϕ(x) = 0
Conceptually, a parallel system is failed only when all system components are failed.
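The observation that a parallel system is failed only when every component is failed translates directly into code; the following is a minimal Python sketch (the function name is illustrative):

```python
def parallel_status(x):
    """Parallel structure: phi(x) = 1 - prod(1 - x_i).

    The product of the (1 - x_i) terms is 1 only when all components
    are failed, so the system status is 0 only in that case."""
    result = 1
    for xi in x:
        result *= (1 - xi)
    return 1 - result

print(parallel_status((1, 0, 0)))  # 1: one functioning component suffices
print(parallel_status((0, 0, 0)))  # 0: all components failed
```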
Before leaving the discussion of parallel structures, it is appropriate to mention the fact that the parallel arrangement of components is often referred to as “redundancy.” This is because the proper function of any of the parallel components implies proper function of the structure. Thus, the additional components are redundant until a component fails. Frequently, parallel structures are included in product designs specifically because of the resulting redundancy. Often but not always, the parallel components are identical. At the same time, there are actually several ways in which the redundancy may be implemented. A distinction is made between redundancy obtained using a parallel structure in which all components function simultaneously and that obtained using parallel components of which one functions and the other(s) wait as standby units until the failure of the functioning unit. Models that describe the reliability of active redundancy and of standby redundancy are presented at the end of Chapter 4.
2.2.3 k-out-of-n Systems
The third type of structure is the k-out-of-n structure. There is no obvious conceptual analog for this structure. A formal definition of it is as follows:
Definition 2.5
A k-out-of-n system is one in which the proper function of any k of the n components that comprise the system implies proper system function.
The usual approach to constructing the reliability block diagram for the k-out-of-n system is to show a parallel diagram and to provide an additional indication that the system is k out of n.
An example of a k-out-of-n system is the rear axle of a large tractor trailer on which the functioning of any three out of the four wheels is sufficient to assure mobility. Another example is the fact that some electronic memory arrays are configured so that the operation of any 126 of the 128 memory addresses corresponds to satisfactory operation.
The algebraic representation of the structure function for a k-out-of-n system is not as compact as those for series and parallel systems. Given the definition of the relationship between component and system status, the most compact algebraic form for the structure function is

$$\phi(\mathbf{x}) = \begin{cases} 1 & \text{if } \sum_{i=1}^{n} x_i \geq k \\ 0 & \text{if } \sum_{i=1}^{n} x_i < k \end{cases}$$
Example cases for a 3-out-of-4 system are

x1 = x2 = x3 = 1, x4 = 0, and ϕ(x) = 1
x1 = x2 = 1, x3 = x4 = 0, and ϕ(x) = 0
x1 = x2 = x3 = 0, x4 = 1, and ϕ(x) = 0
Note that a series system may be considered as an n-out-of-n system and a parallel system may be viewed as a 1-out-of-n system. Thus, the k-out-of-n form provides a generalization that is sometimes useful in analyzing system performance in a generic context.
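The k-out-of-n rule, along with the observation that series and parallel systems are its n-out-of-n and 1-out-of-n special cases, can be illustrated with a short Python sketch (the function name is illustrative):

```python
def k_out_of_n_status(x, k):
    """k-out-of-n structure: the system functions iff at least
    k of the n components function."""
    return 1 if sum(x) >= k else 0

# The 3-out-of-4 example cases above:
print(k_out_of_n_status((1, 1, 1, 0), 3))  # 1
print(k_out_of_n_status((1, 1, 0, 0), 3))  # 0

# Special cases: k = n gives a series system, k = 1 a parallel system.
print(k_out_of_n_status((1, 1, 0), 3))     # 0, as for a series system
print(k_out_of_n_status((1, 0, 0), 1))     # 1, as for a parallel system
```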
Before leaving the discussion of k-out-of-n systems, it is interesting to note that a special case of this type of system is the consecutive k-out-of-n system, of which there are two types. The consecutive k-out-of-n:F system is comprised of n linearly arranged independent components and fails when k or more consecutive components fail. The consecutive k-out-of-n:G system functions when k or more consecutive components function. These two realizations of the consecutive system have been shown to be mirror images, so it is sufficient to analyze either of them.
Examples of the consecutive k-out-of-n:F system have been provided by Chiang and Niu [13]. One of those examples is an oil pipeline in which each of the equally spaced pumps is capable of pushing the oil a distance of k pumping points, so a single failed pump does not fail the system; the next pump maintains flow. If k consecutive pumps fail, the system fails.
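To make the :F failure criterion concrete, here is a hypothetical sketch (the function name and scanning approach are ours, not from the text) that checks a vector of component states for k consecutive failures:

```python
def consecutive_f_system_fails(k, x):
    """Consecutive k-out-of-n:F criterion: the system fails when k or
    more consecutive components are failed (state 0)."""
    run = 0  # length of the current run of failed components
    for state in x:
        run = run + 1 if state == 0 else 0
        if run >= k:
            return True
    return False
```

In the pipeline example, x is the vector of pump states and k is the number of pumping points a single pump can span.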
As indicated earlier, the fourth class of component configurations is the set of all others that are conceivable. This statement is not intended to be misleading. Instead, it is intended to imply that we can establish an equivalence
between any arbitrary component configuration and one based on series and parallel structures. The process of constructing equivalent structures is explained in the next section and is illustrated with a classic example.
2.2.4 Equivalent Structures
The selection of a component configuration is usually made by the device designer in order to assure a specific functional capability. The configuration selected may not match one of the classes discussed earlier. In such a case, there are two ways by which we can obtain equivalent structural forms that may be easier to analyze than the actual one. The two ways are to use either minimum path or minimum cut analyses of the network representation of the system.
As a vehicle for illustrating the two methods, we use the Wheatstone bridge. The reliability block diagram for the bridge is shown in Figure 2.3.
Notice that the bridge structure is not series, parallel, or k out of n. Thus, the earlier algebraic representations cannot be used directly to provide a statement of the system status function.
We can obtain a system status function for the bridge in several ways. One obvious approach is to enumerate all of the component status vectors, to determine the system status for each vector, and to construct a table of system status values. For the five-component bridge structure, this is readily done and the result is shown in Table 2.1.
On the other hand, systems having a greater number of components cannot be handled so easily.
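For the five-component bridge, the enumeration behind Table 2.1 can be sketched in a few lines of Python (an illustration of ours; the component numbering and the minimal paths below follow the conventional labeling of the bridge and are an assumption on our part):

```python
from itertools import product

# Assumed numbering: components 1 and 2 on the left, 4 and 5 on the
# right, and 3 as the bridging component (0-indexed in the tuples).
MIN_PATHS = [(0, 3), (1, 4), (0, 2, 4), (1, 2, 3)]

def bridge_status(x):
    """System status: 1 if every component of some minimal path works."""
    return int(any(all(x[i] for i in path) for path in MIN_PATHS))

# Enumerate all 2**5 = 32 component status vectors, as in Table 2.1.
table = {x: bridge_status(x) for x in product((0, 1), repeat=5)}
```

This brute-force enumeration is exactly the approach that becomes impractical as the number of components grows, since the table has 2 to the power n rows.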
The use of minimum paths will permit us to analyze the bridge structure and other larger systems as well. Start with some definitions:
Definition 2.6
A path vector, x, is a component status vector for which the corresponding system status function has a value of 1.
FIGURE 2.3
Reliability block diagram for a Wheatstone bridge.
the conditions were worse than those of the present day, and much more so than in the eighth and ninth centuries. According to Rabot, it appears from ancient records that considerable areas cultivated in the tenth century are now covered with ice. The first spread of the glaciers took place in the first half of the fourteenth century. In the fifteenth and sixteenth centuries the climate of Iceland ameliorated somewhat, but in the seventeenth there was a readvance, which destroyed several farms about 1640 or 1650. Since then there has been a slight retreat.
The ice-conditions of Greenland are closely related to those of Iceland, and the records of the Norse colonization of Greenland bear out the conclusions drawn from the latter island. Up to the close of the twelfth century ice is hardly ever mentioned in the accounts of voyages, though it is now a great hindrance. Eric, the pioneer explorer of West Greenland, spent three successive winters on the islands in Julianehaab Bay (latitude 60° 45′ N.), and explored the country during the summer; “this cannot be explained otherwise than by assuming that the Polar ice did not reach Cape Farewell and the west coast of Greenland in those days.” In the thirteenth century ice is first specifically mentioned as a danger to navigation, and at the end of the fourteenth century the old Norse sailing route was on account of ice definitely abandoned in favour of one further south. Shortly afterwards the Norse colonies were wiped out by a southward migration of the Eskimos. Even in Norway itself the fourteenth century was a time of dearth, short harvests and political troubles, when corn had to be imported from Germany instead of being exported to Iceland as in former years.
It should be noted that Pettersson’s conclusions are considered invalid by H. H. Hildebrandsson[8] on the ground of the incompleteness of the records.
For the southern hemisphere our records are naturally much rarer and of less antiquity than for the northern hemisphere, and until the tree-rings are investigated we cannot carry our study back beyond the sixteenth century. From some researches into the municipal archives of Santiago de Chile, latitude 33½° S., published by B. V. Mackenna in 1877, we can infer, however, that the general course of
variation since 1520 was similar to that of corresponding regions in North America. Santiago lies in a semi-arid region where a temporary shortage of water is severely felt, the average annual rainfall being only 364 mm. (14.3 inches). The early travellers, however, make no specific mention of drought, and in 1540 Pedro de Valdivia crossed the desert of Atacama with a column of troops and cattle without inconvenience—a feat which would be difficult nowadays. In 1544 there were heavy rains and great floods in June. The next record is for the year 1609, recording another heavy flood on the Mapocho, which was repeated nine years later in 1618. The first recorded drought occurred in the years 1637 to 1640; there was another flood in 1647, after which came a series of severe droughts interrupted by occasional floods, which lasted until the close of the eighteenth century. The first half of the nineteenth century was again comparatively rainy. The records thus indicate a wet period centred about 1600, followed by a dry period during the eighteenth century, exactly parallel to the records from the United States and Europe.
BIBLIOGRAPHY
Huntington, E. “The climatic factor as illustrated in arid America.” Washington, Carnegie Institution, 1914.
——. “The fluctuating climate of North America.” Geogr. J. 40, 1912, pp. 264, 392.
——. “The pulse of Asia.” Boston and New York, 1907.
——. “World power and evolution.” New Haven, 1919.
Pettersson, O. “Climatic fluctuations in historic and prehistoric time.” Svenska Hydrogr.-Biol. Komm. Skrifter, H. 5.
Rabot, Ch. “Essai de chronologie des variations glaciaires.” Bull. géogr. historique et descriptive, No. 2, 1902.
Brückner, E. “Klimaschwankungen seit 1700” ... Vienna, 1890.
Hildebrandsson, H. H. “Sur le prétendu changement du climat européen en temps historique.” Upsala, Nova Acta Regiae Soc. Sci. (4), Vol. 4, No. 5, 1915.
Brooks, C. E. P. “An historical notice of the variations of climate in Chile.” Washington, Dept. Agric., U.S. Weather Bureau, Monthly Weather Rev. 47, 1919, p. 637.
CHAPTER XVIII
The origin of man from an ape-like ancestor[9] is generally admitted, but owing to the incompleteness of the palæontological record we are still in ignorance as to the circumstances, while the place is generally put vaguely as somewhere in Asia, and the time as the late Tertiary (Prof. Elliot Smith places it near the Siwalik hills in the Miocene). For this early period we are reduced to speculation, in which we may reasonably utilize the facts which we have gained about climatic variation.
The chief problem to be explained is why man’s arboreal ancestor left the safe shelter and easy food supply of his primæval forest and ventured forth into the plains. An article by Professor J. Barrell,[10] of Yale University, gives a plausible account of the change, putting it down to necessity, and not to choice. His theory is that the human ancestor lived in the forests spread over Asia, then a vast well-watered plain, during the middle-Tertiary period. Then the gradual uplifting of the Himalayas and other mountain ranges caused a decrease in the rainfall of central Asia, so that ultimately the forests were unable to thrive, and gradually gave place to steppe conditions. The change was slow enough to give the less specialized inhabitants of the forest time to change their habits and evolve into forms suitable to a terrestrial life, and the chief of the animals which took advantage of this period of grace was the pre-human. Forced to live on the ground, with a diminishing food supply, only the most progressive individuals were able to survive, and evolution was rapid. The changing type was saved from being submerged in the great mass of the original type in the forests which continued to exist
further south by the impassable wall of mountains. Major Cherry[11] considers that there is sufficient evidence to prove that a portion of this evolution took place on the seashore, an environment which would have been much more favourable to a small ape-like animal than the open steppe would have been. It is quite likely that the earliest migrations, such as that which carried Pithecanthropus to Java, took place along the shore. But after a time, when increasing brainpower and the use of primitive stone implements enabled man to take the offensive against the larger animals, the centre of activity changed to the steppes. A familiar view of the early development of man was advocated by W. D. Mathew,[12] who writes: “In view of the data obtainable from historical record, from tradition, from the present geographical distribution of higher and lower races of men, from the physical and physiological adaptation of all and especially of the higher races, it seems fair to conclude that the centre of dispersal of mankind in prehistoric times was central Asia, north of the great Himalayan ranges, and that when by progressive aridity that region became desert it was transferred to the regions bordering it to the east, south and west. We may further assume that the environment in which man primarily evolved was not a moist tropical climate, but a temperate and more or less arid one, progressively cold and dry during the course of his evolution. In this region and under these conditions, the race first attained a dominance which enabled it to spread out in successive waves of migration to the most remote parts of the earth.”
We do not know anything of the migrations of the Eolithic and earlier Palæolithic races, except that they spread rapidly over a considerable portion of the earth. Both migration and evolution, especially mental evolution, must have been accelerated by the great changes of climate which were taking place. In the Mindel-Riss interglacial period we know of two types, the Piltdown man (Eoanthropus dawsoni) and the Heidelberg man (Homo heidelbergensis), the latter a true man, though probably not on the direct line of evolution of Homo sapiens. The stress of the succeeding second Glacial period was too great for Eoanthropus, which appears to have died out, but Homo, probably an Asiatic or
African type similar to H. heidelbergensis, survived. The next form, associated with Mousterian implements, is Neanderthal man (H. neanderthalensis), who closely resembled modern man, and all the remains of races which lived subsequently to the last glaciation are those of modern man (H. sapiens), including the magnificent Cro-Magnards and the negroid Grimaldi race. Thus each glaciation has been marked by a step upwards in the scale of humanity; does this mean that the coming of the super-man is contingent on another glacial epoch?
BIBLIOGRAPHY
Barrell, J. “Probable relations of climatic change to the origin of the Tertiary Ape-man.” Scientific Monthly, New York, 4, 1917, p. 16.
Mathew, W. D. “Climate and evolution.” Annals New York Acad. Sci., 24, 1915, p. 212.
London, British Museum. “A guide to the fossil remains of man....” London, 1918.
CHAPTER XIX
It is a remarkable fact in human history that civilization began in regions which are at present inhabited chiefly by backward races, and the centres of progress have shifted from one country to another with the passage of time. Many accidental factors—position on trade-routes, possession of special mineral advantages, and so on, have undoubtedly played a part in this, but it will not be difficult to show that climatic fluctuations have also had their share.
A brilliant study of Ellsworth Huntington[13] has shown that there are certain optimum conditions of climate which are most suitable for efficient work. These conditions, which were determined by an analysis of the output of work in American factories, were then found to be just those which prevail in the most progressive regions of the globe, which are located in the temperate storm-belts, and it is shown in certain instances that fluctuations in the position of this storm-belt coincided with fluctuations in the centres of civilization. A few additional examples of this may be given.
The beginnings of civilization may reasonably be placed with the transition from the Palæolithic to the Neolithic type, a transition which involved much more than just the polishing of stone weapons. It involved also the beginnings of agriculture, crude pottery, and later, the domestication of animals. One of the earliest Neolithic cities known is probably that of Anau, near Askabad in Transcaspia, excavated by Pumpelly in 1904. From the thickness of the accumulated debris the date of first settlement is placed at or before 8000 B.C., i.e. 10,000 years ago, or during the period which in Europe is assigned to the concluding stages of the Würm glaciation.
Pumpelly’s time-estimates are based on careful comparison with accumulations in Merv and other cities. At present the mean annual rainfall in that part of Turkestan is below ten inches a year, and the country is practically desert, and is entirely unfitted for agriculture. But with the remains of the ice-sheet still over Scandinavia and depressions following a more southerly course along the Mediterranean basin and into southern Asia, the rainfall was considerably heavier, and the climate in general was more suited to a progressive race. At the outset we find this Neolithic race living in rectangular houses built of uniform sun-dried bricks; they were skilful potters, cultivating cereals, but at first without domestic animals.
The beginning of Neolithic civilization in Crete is placed by Evans at about 12000 B.C., while on the basis of excavations by de Morgan at Susa in Persia, Montelius places the origin of Neolithic culture in this part of Asia as early as about 18000 B.C. At Susa the deposits are 130 feet thick, and of these the upper 40 feet cover a period of 6000 years.
Thus we see that what may be considered as the great step from savagery to civilization took place while the present centres of progress in Europe and America were still in the Ice Age. At this time the climate of southern Asia must have resembled the present climate of north-west Europe in heavier rainfall and the day-to-day fluctuations of weather—in fact, the districts where civilization began probably had at that time the most stimulating climate in the northern hemisphere.
With the vanishing of the ice-sheets and the setting in of the mild climate of the Maritime phase the Neolithic culture spread rapidly to Europe, and by 2000 B.C. even the Baltic regions were well inhabited, and it is probable that the Aryan race was developing in the Russian steppes. About this time Anau was abandoned owing to increasing aridity.
With the coming of the Bronze Age in western Europe, about 1800 B.C., however, the climate again became colder and rainier, corresponding to the Peat-bog phase or “Classical” rainfall maximum, the deterioration culminating in the Early Iron Age. This period was marked by a great southward spread of the Aryan peoples, and ushered in the Heroic Age of Greece. The races of the Mediterranean, as we have seen, continued to thrive throughout this rainy period, and their power did not diminish until its close, about A.D. 400. This downfall was accelerated if not caused by the pressure of nomad peoples driven out of Asia by the increasing drought. These Asiatic migrations included the great marches of the Tartar hordes and, aided by religious enthusiasm, the conquests of the Moslems.
The early Middle Ages, after the downfall of Rome, appear to have been characterized by a dry warm climate. This was the age of the Vikings, when the Norse races rose to dominance in western Europe, finally invading and occupying large areas of France and Britain, and even extending their power to Sicily. With the increasing cold and wet of the “Mediæval” rainfall maximum came a final burst of Norse migration, which left the homeland poor and scantily populated, and the centre of activity and progress lay once again with the Mediterranean peoples, and especially with Italy and Spain. The Tartar invasions ceased, and against the increasing power of Europe the Moslem wave broke and receded. At the close of this rainy period political dominance again moved north. From that time the fluctuations of climate have been of minor importance, and correspondingly there have been no great shiftings of political power from latitude to latitude.
BIBLIOGRAPHY
Tyler, J. M. “The New Stone Age in Northern Europe.” London, 1921.
Huntington, Ellsworth. “World Power and Evolution.” New Haven, 1919.
Haddon, A. C. “The wanderings of peoples.” Cambridge University Press, 1919.
APPENDIX
To calculate the probable temperature of January or July at any point, the following procedure should be adopted:
Draw a circle round the point of angular radius ten degrees (i.e. set the compass to cover ten degrees of latitude) and divide this into two halves by a line passing from north to south through the centre. By means of squared tracing paper, or otherwise, measure: (a) the amount of ice in the whole circle; (b) the amount of land in the western half; (c) the amount of land in the eastern half. (a) is expressed as a percentage of the area of the whole circle; (b) and (c) as percentages of the area of a semicircle.
The term “ice” includes ice-sheets such as that of Greenland or Antarctica, and also frozen sea or sea closely covered by pack-ice; the latter figure may vary in different months.
The temperature in January or July is then calculated from the following formula:
Temperature = basal temperature + ice coeff. × per cent. of ice + land west coeff. × per cent. of land to west + land east coeff. × per cent. of land to east.
The basal temperatures and the appropriate coefficients are given in the following table.
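The procedure can be sketched as a small function (an illustration of ours, not from the text; the numerical values used below are placeholders, since the actual basal temperatures and coefficients must be taken from the table):

```python
def estimated_temperature(basal, ice_coeff, west_coeff, east_coeff,
                          pct_ice, pct_land_west, pct_land_east):
    """Regression estimate: basal temperature plus each coefficient
    times its measured percentage (ice over the whole circle, land
    over each half-circle)."""
    return (basal
            + ice_coeff * pct_ice
            + west_coeff * pct_land_west
            + east_coeff * pct_land_east)

# Placeholder coefficient values only -- not those of the table.
t = estimated_temperature(270.0, -0.05, 0.10, 0.05, 20.0, 50.0, 30.0)
```

For a circle of five degrees radius the same function applies, with each coefficient divided by two as explained below.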
In calculating the effect of a given slight change of land and sea distribution, it is not necessary to employ the basal temperature. Instead the equation can be treated as a differential, and the change of temperature due to the change of land and ice calculated from the
figures in columns 3 to 5. The figures are given in degrees absolute; 273.0° abs. = 32° F. To convert differences to Fahrenheit, multiply by 1.8.
In the case of the calculation of the effect of comparatively slight and irregular changes in land and sea distribution in a limited area, such as those of the Littorina Sea referred to on p. 128, it may be found that a ten-degree circle is too wide an area to employ, the changes from land to sea at one point being nullified by changes from sea to land at another more distant point. In such a case a smaller unit such as a circle of five degrees radius can be employed. As a rough approximation it may be said that the effect of the conversion of a square mile of land into sea, or vice versa, on the temperature of a neighbouring point is inversely proportional to its distance. Since the area of a five-degree circle is one-quarter that of a ten-degree circle, while the average distance of the land composing it is one-half, we have to divide our regression coefficients by two in order to fit the new data.
This method was applied to obtain the probable temperature distribution on the shores of the Littorina Sea at its maximum extension, and gave results which agreed remarkably well with those calculated by geologists from the animal and plant life of the time.
See London Q. J. R. Meteor. Soc., 43, 1917, pp. 169-171.