My Reading on ASQ CQA HB Part V Part 2

My Reading on ASQ CQA The Handbook 2/2 Part V (VD-VH) My Pre-exam Self Study Notes, 14.7%. 8th Oct 2018

Charlie Chong/ Fion Zhang


Tokamak Fusion Reactor

https://www.iter.org/sci/tkmkresearch




Tokamak Fusion Reactor

https://www.kennethfilar.com/hinge/


The Magical Book of CQA



国泰民安 ("May the nation prosper and the people live in peace")

http://www.freerepublic.com/focus/f-news/1529576/posts



闭门练功 ("Practicing one's skills behind closed doors", i.e., focused self-study)





Fion Zhang at Heilongjiang 8th October 2018



ASQ Mission: The American Society for Quality advances individual, organizational, and community excellence worldwide through learning, quality improvement, and knowledge exchange.



BOK Knowledge                                                     Percentage Score

I. Auditing Fundamentals (30 questions)                           20%
II. Audit Process (60 questions)                                  40%
III. Auditor Competencies (23 questions)                          15.3%
IV. Audit Program Management and Business Applications
    (15 questions)                                                10%
V. Quality Tools and Techniques (22 questions)                    14.7%

Total: 150 questions                                              100%

https://asq.org/cert/resource/docs/cqa_bok.pdf



Part V

Part V Quality Tools and Techniques [22 of the CQA Exam Questions, or 14.7 percent]
_____________________________________________________
Chapter 18 Basic Quality and Problem-Solving Tools / Part VA
Chapter 19 Process Improvement Techniques / Part VB
Chapter 20 Basic Statistics / Part VC
Chapter 21 Process Variation / Part VD
Chapter 22 Sampling Methods / Part VE
Chapter 23 Change Control and Configuration Management / Part VF
Chapter 24 Verification and Validation / Part VG
Chapter 25 Risk Management Tools / Part VH



Part V

Quality Tools and Techniques Auditors use many types of tools to plan and perform an audit, as well as to analyze and report audit results. An understanding of these tools and their application is essential for the performance of an effective audit since both auditors and auditees use various tools and techniques to define processes, identify and characterize problems, and report results. An auditor must have sufficient knowledge of these tools in order to determine whether the auditee is using them correctly and effectively. This section provides basic information on some of the most common tools, their use, and their limitations. For more in-depth information on the application of tools, readers should consult an appropriate textbook.



Part VD1

Chapter 21 Process Variation/Part VD ________________________ VD1. Common And Special Causes (Theory Of Variation) Variation is inherent; it exists in all things. No two entities in the world have exactly the same measurable characteristics. The variation might be small and unnoticeable without the aid of precise and discriminative measuring instruments, or it might be quite large and easily noticeable. Two entities might appear to have the same measurement because of the limitations of the measuring device.





Part VD1

Factors Affecting Variation
Everything is the result of some process, so the chance for some variation in output is built into every process. Because material inputs are the outputs of some prior process, they are subject to variation, and that variation is transferred to the outputs. Variation will exist even in apparently identical processes using seemingly identical resources. Even though a task is defined and performed in the same manner repeatedly, different operators performing the same task, and the same operator performing the same task repeatedly, introduce variation. The precision and resolution of the measuring devices, and the techniques used to collect data, also introduce variation into the output data. Variation can result from changes in various factors, normally classified as follows:

1. People (worker) influences
2. Machinery influences
3. Environmental factors
4. Material influences
5. Measurement influences
6. Method influences

The resulting total variation present in any product is a result of the variations from these six main sources. Because the ramifications of variation in quality are enormous for managers, knowing a process’s capabilities prior to production provides for better utilization of resources. Operating costs are reduced when inspection, rework, safety stock storage, and troubleshooting are eliminated. Proper management requires a deep appreciation of the existence of variation as well as an understanding of its causes and how they can be corrected. Meaning: Safety stock is a term used by logisticians to describe a level of extra stock that is maintained to mitigate risk of stockouts (shortfall in raw material or packaging) caused by uncertainties in supply and demand. Adequate safety stock levels permit business operations to proceed according to their plans. https://en.wikipedia.org/wiki/Safety_stock



Part VD1

Types of Variation
Walter Shewhart, the father of modern quality control, was concerned with the low-cost reduction of variation. Shewhart distinguished two kinds of processes:

(1) a stable process with "inevitable chance variation," and
(2) an unstable process with "assignable cause variation."

Walter Andrew Shewhart (March 18, 1891 – March 11, 1967) was an American physicist, engineer, and statistician, sometimes known as the father of statistical quality control and also associated with the Shewhart cycle. W. Edwards Deming said of him: "As a statistician, he was, like so many of the rest of us, self-taught, on a good background of physics and mathematics." Born in New Canton, Illinois, to Anton and Esta Barney Shewhart, he attended the University of Illinois at Urbana–Champaign before being awarded his doctorate in physics from the University of California, Berkeley, in 1917. He married Edna Elizabeth Hart, daughter of William Nathaniel and Isabelle "Ibie" Lippencott Hart, on August 4, 1914, in Pike County, Illinois.

• If the limits of process variation are well within the band of customer tolerance (specification), then the product can be made and shipped with reasonable assurance that the customer will be satisfied.
• If the limits of process variation just match the band of customer tolerance, then the process should be monitored closely and adjusted when necessary to maximize the amount of satisfactory output.
• If the limits of process variation extend beyond the band of customer tolerance, output should be inspected to determine whether it meets customer requirements.

State of Statistical Control (Stable)
When the amount of variation can be predicted with confidence, the process is said to be in a state of statistical control (stable). Although a singular value cannot be predicted exactly, it can be anticipated to fall within certain limits. Similarly, the long-term average value can be predicted. In an unstable process every batch of product is a source of excitement! It is impossible to predict how much, if any, of the product will fall within the band of customer tolerance.
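The three tolerance cases above can be sketched as a small decision helper. This is an illustrative Python sketch, not part of the handbook: the spec limits and sigma values in the example are hypothetical, and the natural process limits are taken as mean ± 3σ.

```python
def classify_process(mean, sigma, lsl, usl):
    """Compare the natural process limits (mean +/- 3*sigma) to the
    customer tolerance band [lsl, usl], per the three cases above."""
    lo, hi = mean - 3 * sigma, mean + 3 * sigma
    if lo > lsl and hi < usl:
        return "ship with confidence"        # limits well within tolerance
    if lo >= lsl and hi <= usl:
        return "monitor closely and adjust"  # limits just match tolerance
    return "inspect output"                  # limits extend beyond tolerance

# Hypothetical spec band 94-106, process centered at 100:
print(classify_process(100, 1, 94, 106))  # -> ship with confidence
print(classify_process(100, 2, 94, 106))  # -> monitor closely and adjust
print(classify_process(100, 3, 94, 106))  # -> inspect output
```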



Part VD1

The costs necessary to produce satisfactory product are unknown because the organization is forced to carry large quantities of safety stock, and bids for new work must include a safety factor. Shewhart developed simple statistical and graphical tools to inform operators and managers about their processes and to detect promptly when a stable process becomes unstable, and vice versa. These tools, called control charts, come in various forms to accommodate whether measures are attributes or variables, and whether samples are of constant size or not. Deming also recognized Shewhart's two sources of variation, calling them:

• common causes, and
• special causes.

He also distinguished between the duties of those who work in the process and the managers who work on the process.

William Edwards Deming (October 14, 1900 – December 20, 1993) was an American engineer, statistician, professor, author, lecturer, and management consultant. Educated initially as an electrical engineer and later specializing in mathematical physics, he helped develop the sampling techniques still used by the U.S. Department of the Census and the Bureau of Labor Statistics. In his book The New Economics for Industry, Government, and Education,[1] Deming championed the work of Walter Shewhart, including statistical process control, operational definitions, and what Deming called the "Shewhart Cycle,"[2] which had evolved into Plan-Do-Study-Act (PDSA). This was in response to the growing popularity of PDCA, which Deming viewed as tampering with the meaning of Shewhart's original work.[3] Deming is best known for his work in Japan after WWII, particularly his work with the leaders of Japanese industry. That work began in August 1950 at the Hakone Convention Center in Tokyo, when Deming delivered a speech on what he called "Statistical Product Quality Administration". Many in Japan credit Deming as one of the inspirations for what has become known as the Japanese post-war economic miracle of 1950 to 1960, when Japan rose from the ashes of war on the road to becoming the second-largest economy in the world through processes partially influenced by the ideas Deming taught:[4]

• Better design of products to improve service
• Higher level of uniform product quality
• Improvement of product testing in the workplace and in research centers
• Greater sales through side [global] markets

Deming is best known in the United States for his 14 Points (Out of the Crisis, by W. Edwards Deming, preface) and his system of thought he called the "System of Profound Knowledge". The system includes four components or "lenses" through which to view the world simultaneously:

• Appreciating a system
• Understanding variation
• Psychology
• Epistemology, the theory of knowledge[5]

Deming made a significant contribution to Japan's reputation for innovative, high-quality products, and for its economic power. He is regarded as having had more impact on Japanese manufacturing and business than any other individual not of Japanese heritage. Despite being honored in Japan in 1951 with the establishment of the Deming Prize, he was only just beginning to win widespread recognition in the U.S. at the time of his death in 1993.[6] President Ronald Reagan awarded him the National Medal of Technology in 1987. The following year, the National Academy of Sciences gave Deming the Distinguished Career in Science award. https://en.wikipedia.org/wiki/W._Edwards_Deming



Part VD1

Common Causes
Variation that is always present or inherent in a process is called common cause variation. It occurs when one or more of the six previously mentioned factors fluctuate in the normal or expected manner, and it can be improved only by changing a factor. Common causes of variation occur continually and result in controlled variation. They ensue, for example, from the choice of supplier, quality of inputs, worker hiring and training practices, equipment selection, machinery maintenance, and working conditions. If the process variation is excessive, then the process must be changed. Eradicating these stable and predictable causes of variation is the responsibility of the managers of the process. Common causes are beyond the control of workers, as was demonstrated by Deming's famous red bead experiment.1 In that experiment, volunteers were told to produce only white beads from a bowl containing a mixture of white and red beads. Monitoring or criticizing worker performance had no effect on the output. No matter what the workers did, they got red beads—sometimes more, sometimes less, but always some—because the red beads were in the system. Deming estimated that common causes account for 80 percent to 95 percent of workforce variation. This is not the fault of the workers, who normally do their best even in less-than-ideal circumstances. Rather, this is the responsibility of the managers, who work on, not in, the process. Management decides how much money and time is to be spent on designing processes, which impacts the resources and methods that can be used. It is the design of the process that impacts the amount of common cause variation.



Part VD1

Special Causes (also called assignable causes) When variation from one or more factors is abnormal or unexpected, the resultant variation is known as special cause variation. This unexpected level of variation that is observed in an unstable process is due to special causes that are not inherent in the process. Special causes of variation are usually local in time and space, for example, specific to a change in a particular machine or a difference in shift, operator, or weather condition. They appear in a detectable pattern and cause uncontrolled variation. Special causes of variation often result in sudden and extreme departures from the normal, but can also occur in the form of gradual shifts (or drifts) in a characteristic of a process. When a control chart shows a lack of control, skilled investigation should reveal what special causes affect the output. The workers in the process often have the detailed knowledge necessary to guide this investigation.

Structural Variation Structural variation is inherent in the process;2 however, when plotted on a control chart, structural variation appears like a special cause (blip), even though it is predictable. For example, a restaurant experiences a high number of errors in diners’ orders taken on Saturday nights. The number of diners increases by 50 percent or more on every Saturday night, served by the same number of waitpersons and chefs as on other nights.



Part VD1

Achieving Breakthrough Improvement
Building on Shewhart's notions to develop a systematic method for improvement, Juran distinguished between sporadic and chronic problems for quality improvement projects (QIPs). Starting from a state of chaos, a QIP should first seek to control variation by eliminating sporadic problems. When a state of controlled variation is reached, the QIP should then break through to higher levels of quality by eliminating chronic problems, thereby reducing the controlled variation. The notions of control and breakthrough are critical to Juran's thinking. The following scenario demonstrates this concept: A dart player throws darts at two different targets. The darts on the first target are all fairly close to the bull's-eye, but the darts are scattered all over the target. It is difficult for the player to determine whether changing stance (or any other variable) will result in an improved score. The darts thrown at the second target are well off target from the bull's-eye, but the location of the darts is clustered and therefore predictable. When the player determines what variable is causing the darts to miss the bull's-eye, immediate and obvious improvement should result. The impetus behind Juran's work is to achieve repeatable and predictable results. Until that happens, it will be almost impossible to determine whether a quality improvement effort has had any effect. Once a process is in control, breakthroughs are possible because they are detectable.

The following points are essential to an understanding of variation:
• Everything is the result or outcome of some process.
• Variation always exists, although it is sometimes too small to notice.
• Variation can be controlled if its causes are known. The causes should be determined through the practical experience of workers in the process as well as by the expertise of managers.
• Variation can result from special causes, common causes, or structural variation.
• Corrective action cannot be taken unless the variation has been assigned to the proper type of cause. For example, in Deming's bead experiment (white beads = good product, red beads = bad product) the workers who deliver the red beads should not be blamed; the problem is the fault of the system that contains the red beads.
• Tampering by taking actions to compensate for variation within the control limits of a stable process increases rather than decreases variation.
• Practical tools exist to detect variation and to distinguish controlled from uncontrolled variation.



Part VD1

Variation exists everywhere (even the earth wobbles a bit in its journey around the sun). So, too, variation exists at an organizational level—within management's sphere of influence. The organization as a system is subject to common cause variation and special cause variation. Unfortunately, members of management in many organizations do not know about or understand the theory of variation. As a result, management tends to treat all anomalies as special causes, and therefore treats actual common causes with continual tampering. Three examples follow:

• A donut shop, among its variety of products, produces jelly donuts. The fruit mix used to fill the jelly donuts is purchased from a long-time, reliable supplier. From time to time, a consumer complains about the tartness of the donut filling (nature produces berries of varying degrees of sweetness). The shop owner complains to the jelly supplier, who adds more sugar to the next batch (tampering). Several consumers complain about the overly sweet donut filling. The shop owner complains to the supplier, who reduces the amount of sugar in the next batch (tampering). Some consumers complain about tartness, and so it goes.
• Susan, a normally average salesperson, produces 10 percent fewer sales (number of sales, not dollar value) this month. The sales manager criticizes Susan for low sales production and threatens her with compensation loss. Susan responds by an extra effort to sell to anyone who will buy the service, regardless of the dollar volume of the sale (tampering). The sales manager criticizes Susan again, pointing out that dollar volume is more important than number of sales made. Susan concentrates on large-dollar buyers, which take several months to bring to fruition. Susan's monthly figures show a drastic drop and she is severely criticized for lack of productivity. Susan leaves the company and takes the large-dollar prospects with her to a competitor. The system failed due to tampering, but the worker was blamed.
• A VP of finance of a widely known charity continually tinkers with the organization's portfolio of investments, selling or buying whenever a slight deviation is noted, resulting in suboptimal yield from the portfolio.

An organization must focus its attempts at reducing variation. Variation does not need to be eliminated from everything; rather, the organization should focus on reducing variation in those areas most critical to meeting customers' requirements.



Part VD2

VD2. Process Performance Metrics Process capability is the range within which a process is normally able to operate given the inherent variation due to design and selection of materials, equipment, people, and process steps. Knowing the capability of a process means knowing whether a particular specification can be held if the process is in control. If a process is in control, one can then calculate the process capability index. Several formulae are used to describe the capability of a process, comparing it to the specification limits; the two most popular indexes are Cp and Cpk.

• Cp indicates how the width of the process compares to the width of the specification range, while
• Cpk looks at whether the process is sufficiently centered in order to keep both tails from falling outside specifications.

Following are the formulae:

Cp = Specification range / Process range = (Upper Spec − Lower Spec) / 6σ

Cpk = (Upper Spec − Average) / 3σ or (Average − Lower Spec) / 3σ, whichever is smaller

Process Capability
Note that the σ used for this calculation is not the standard deviation of a sample. It is the process sigma based on time-ordered data, such as given by the formula R-bar/d2. Following are the rules often used to determine whether a process is considered capable:
• Cpk > 1.33 (capable)
• Cpk = 1.00 – 1.33 (capable with tight control)
• Cpk < 1.00 (not capable)
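The Cp/Cpk formulas and the capability rules of thumb above can be put together in a few lines. This is a minimal Python sketch, not from the handbook: the spec limits, mean, and sigma in the example are hypothetical, and `sigma` is assumed to already be the process sigma (for example, from R-bar/d2).

```python
def cp(usl, lsl, sigma):
    """Cp = (USL - LSL) / 6*sigma; compares spec width to process width."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mean, sigma):
    """Cpk = min of the two one-sided ratios; penalizes an off-center mean."""
    return min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))

def capability_verdict(cpk_value):
    """Apply the rules of thumb quoted above."""
    if cpk_value > 1.33:
        return "capable"
    if cpk_value >= 1.00:
        return "capable with tight control"
    return "not capable"

# Hypothetical centered process: spec 90-110, mean 100, process sigma 2
print(round(cp(110, 90, 2), 2))                   # -> 1.67
print(round(cpk(110, 90, 100, 2), 2))             # -> 1.67
print(capability_verdict(cpk(110, 90, 100, 2)))   # -> capable
```

Note that for a perfectly centered process Cp and Cpk coincide; as the mean drifts, Cpk drops below Cp.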



Part VD2

Cpk = min[ (USL − μ) / 3σ , (μ − LSL) / 3σ ]



Part VD2

Process Capability Index
In process improvement efforts, the process capability index or process capability ratio is a statistical measure of process capability: the ability of a process to produce output within specification limits. The concept of process capability only holds meaning for processes that are in a state of statistical control. Process capability indices measure how much "natural variation" a process experiences relative to its specification limits, and allow different processes to be compared with respect to how well an organization controls them. If the upper and lower specification limits of the process are USL and LSL, the target process mean is T, the estimated mean of the process is μ, and the estimated variability of the process (expressed as a standard deviation) is σ, then commonly cited values are:

Cpk    Adjusted sigma level (σ)    Area under Φ     Process yield    Process fallout (DPMO/PPM)
0.33   1                           0.3085375387     30.85%           691,462
0.67   2                           0.6914624613     69.15%           308,538
1.00   3                           0.9331927987     93.32%           66,807
1.33   4                           0.9937903347     99.38%           6,209
1.67   5                           0.9997673709     99.9767%         232.6
2.00   6                           0.9999966023     99.99966%        3.40

DPMO: defects per million opportunities (or nonconformities per million opportunities, NPMO)

https://en.wikipedia.org/wiki/Process_capability_index



Part VD2

Index

Cp =

Description Estimates what the process is capable of producing if the process mean were to be centered between the specification limits. Assumes process output is approximately normally distributed.

USL;LSL 6Ďƒ

Cp, lower = Cp, upper =

Îź;LSL 3Ďƒ USL;Îź 3Ďƒ

Cpk = USL;Îź Îź;lSL min [ , ] 3Ďƒ

Cpm = Cpkm =

3Ďƒ

đ??śđ?‘? Îźâˆ’đ?‘‡ 2 Ďƒ

1:

đ??śđ?‘?đ?‘˜ 1:

Îźâˆ’đ?‘‡ 2 Ďƒ

Estimates process capability for specifications that consist of a lower limit only (for example, strength). Assumes process output is approximately normally distributed. Estimates process capability for specifications that consist of an upper limit only (for example, concentration). Assumes process output is approximately normally distributed. Estimates what the process is capable of producing, considering that the process mean may not be centered between the specification limits. (If the process mean is not centered, Cp overestimates process capability.) Cpk<0 if the process mean falls outside of the specification limits. Assumes process output is approximately normally distributed.

Estimates process capability around a target, T. Cpm is always greater than zero. Assumes process output is approximately normally distributed. Cpm is also known as the Taguchi capability index. Estimates process capability around a target, T, and accounts for an off-center process mean. Assumes process output is approximately normally distributed.

https://en.wikipedia.org/wiki/Process_capability_index



Part VD2

Example
Consider a quality characteristic with a target of 100.00 μm and upper and lower specification limits of 106.00 μm and 94.00 μm respectively. If, after carefully monitoring the process for a while, it appears that the process is in control and producing output predictably (as depicted in a run chart), we can meaningfully estimate its mean and standard deviation. If μ and σ are estimated to be 98.94 μm and 1.03 μm, respectively, then:

Cp = (USL − LSL) / 6σ = (106 − 94) / (6 × 1.03) = 1.94

Cpk = min[ (USL − μ)/3σ , (μ − LSL)/3σ ]
    = min[ (106 − 98.94)/(3 × 1.03) , (98.94 − 94)/(3 × 1.03) ]
    = min[ 2.28 , 1.60 ] = 1.60

Cpm = Cp / sqrt(1 + ((μ − T)/σ)²) = 1.94 / sqrt(1 + ((98.94 − 100)/1.03)²) = 1.35

Cpkm = Cpk / sqrt(1 + ((μ − T)/σ)²) = 1.60 / sqrt(1 + ((98.94 − 100)/1.03)²) = 1.11
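The worked example can be checked numerically. A small illustrative Python sketch reproducing the example's figures:

```python
def cp(usl, lsl, sigma):
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mu, sigma):
    return min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))

def cpm(usl, lsl, mu, sigma, target):
    # Taguchi capability index: Cp penalized for distance from target
    return cp(usl, lsl, sigma) / (1 + ((mu - target) / sigma) ** 2) ** 0.5

def cpkm(usl, lsl, mu, sigma, target):
    # Cpk penalized for distance from target
    return cpk(usl, lsl, mu, sigma) / (1 + ((mu - target) / sigma) ** 2) ** 0.5

usl, lsl, target = 106.00, 94.00, 100.00
mu, sigma = 98.94, 1.03

print(round(cp(usl, lsl, sigma), 2))              # -> 1.94
print(round(cpk(usl, lsl, mu, sigma), 2))         # -> 1.6
print(round(cpm(usl, lsl, mu, sigma, target), 2))   # -> 1.35
print(round(cpkm(usl, lsl, mu, sigma, target), 2))  # -> 1.11
```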



Part VD2

Potential Process Capability
Initial process capability studies are often performed as part of the process validation stage of a new product launch. Since this is usually a run of only a few hundred parts, it does not include the normal variability that will be seen in full production, such as small differences from batch to batch of raw material. In this case the study is called potential process capability, with the symbol Ppk used instead of Cpk. To compensate for the reduced variability, the decision points are typically set at:

• Ppk > 1.67 (capable)
• Ppk = 1.33 – 1.67 (capable with tight control)
• Ppk < 1.33 (not capable)

Capability is then studied soon after production release and on an as-needed basis during normal production. Changes to the process due to engineering changes or as part of continuous improvement should also be evaluated for their impact on process capability. If process capability is found to be unsatisfactory, the following may be considered:

• Ensure that the process is centered
• Initiate process improvement projects to decrease variation
• Determine if the specifications can be changed
• Do nothing, but realize that a percentage of output will be outside acceptable variation

When using statistical software programs to evaluate process capability, it is important that the user understand the specific terminology used by the programmers. Although the same concepts may be used, different symbols or formulae may be used.



Part VD3

VD3. Outliers
The dictionary defines an outlier as a statistical observation not homogeneous in value with others of a sample. An outlier is a special case of a special cause: a data point that deviates markedly from the other data points collected or in the sample. An outlier is the result of a special cause, such as using the wrong test equipment or pulling the sample from the wrong bin. A data point identified as an outlier is abnormal and, if not removed from the database, will result in skewed, misleading, or false conclusions. Outliers are the most extreme observations and are either the sample maximum or sample minimum. However, sample maximums and minimums are not normally outliers; outliers are data points so extreme that they do not appear to belong to the same database. Deletion of outlier data may be the correct thing to do, but it is a subjective judgment. The practice of deleting outliers is frowned upon by many scientists due to the potential for researchers to manipulate statistical data in their own self-interest. If the cause of the outlier data point is known, it should be verified before removal from the database. When data points are excluded from data analysis, the rationale should be clearly stated in any subsequent report.
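One common, simple screen for candidate outliers is Tukey's IQR fence. This is an illustrative Python sketch only (the handbook does not prescribe a specific detection rule), and the readings are hypothetical; per the text above, a flagged point still needs a verified special cause before it is removed, and the rationale should be documented.

```python
import statistics

def iqr_outliers(data, k=1.5):
    """Flag points outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's rule)."""
    q1, _, q3 = statistics.quantiles(data, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [x for x in data if x < lo or x > hi]

# Hypothetical measurements; suppose 42.0 was logged on the wrong test equipment
readings = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 42.0, 9.7, 10.1, 10.0]
print(iqr_outliers(readings))  # -> [42.0]
```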





Part VE

Chapter 22 Sampling Methods/Part VE _______________________





Part VE

Statistical Sampling Plans
The auditor should follow the sampling plan required by audit program management. Normally, statistical sampling plans are not required for process or system audits. However, knowledge of sampling methods and techniques may be needed to evaluate auditee sampling processes. Also, auditors need to know the limitations and biases created by taking samples.

Sampling
Sampling is the practice of taking selected items or units from a total population of items or units. The method and reason for taking certain samples, or a certain number of samples, from a population should be based on sampling theory and procedures. Samples may be taken from:

• the total population or universe, or
• subgroups of the population, called strata.

Inferences drawn from the sampling of a stratum, however, may not be valid for the total population. To infer statistical significance from any sample, two conditions must be met:

• the population under consideration must be homogeneous, and
• the sample must be random.







Part VE

Stratified Sampling In statistical surveys, when subpopulations within an overall population vary, it could be advantageous to sample each subpopulation (stratum) independently. Stratification is the process of dividing members of the population into homogeneous subgroups before sampling. The strata should be mutually exclusive: every element in the population must be assigned to only one stratum. The strata should also be collectively exhaustive: no population element can be excluded. Then simple random sampling or systematic sampling is applied within each stratum. The objective is to improve the precision of the sample by reducing sampling error. It can produce a weighted mean that has less variability than the arithmetic mean of a simple random sample of the population. In computational statistics, stratified sampling is a method of variance reduction when Monte Carlo methods are used to estimate population statistics from a known population.

https://en.wikipedia.org/wiki/Stratified_sampling
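Stratified sampling as described above (mutually exclusive, collectively exhaustive strata, with simple random sampling within each stratum) can be sketched as follows. Illustrative Python only; the lot records, the shift-based strata, and the sample sizes are hypothetical.

```python
import random

def stratified_sample(population, strata_key, per_stratum, seed=0):
    """Assign each item to exactly one stratum (mutually exclusive,
    collectively exhaustive), then draw a simple random sample of
    per_stratum items within each stratum."""
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    strata = {}
    for item in population:
        strata.setdefault(strata_key(item), []).append(item)
    return {name: rng.sample(items, min(per_stratum, len(items)))
            for name, items in strata.items()}

# Hypothetical lot records, stratified by production shift
lots = [{"id": i, "shift": "day" if i % 2 == 0 else "night"} for i in range(20)]
sample = stratified_sample(lots, lambda r: r["shift"], per_stratum=3)
print(sorted(sample))      # -> ['day', 'night']
print(len(sample["day"]))  # -> 3
```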



Part VE

Homogeneous
Homogeneous means that the population must be uniform throughout—the bad parts should not be hidden on the bottom of one load—or it could refer to the similarities that should exist when one load is checked against others from a different production setup.

Random
Random means that every item in the population has an equal chance of being checked. To ensure this, samples can be pulled by a random number generator or other unbiased method. The preferred practice is for the auditor to go to the location of the sample and select the sample for the audit. However, there are situations (long distances, convenience, files off-site, and so on) in which the auditee may be permitted to provide the sample population, such as in a file, folder, or logbook, to the auditor, who may then select the sample. When sampling, auditors should record the identity of samples selected, the number in the population from which the samples were taken (if possible), and the number of samples selected for examination. The goal is to provide management with supportable information about the company, with the expectation that management will take action based on the results presented. An auditor must be able to characterize the sampling methods used to management as either statistical or non-statistical but fact-based.

Keywords: auditors should record
• the identity of the samples selected,
• the number in the population from which the samples were taken (if possible), and
• the number of samples selected for examination.


Types of Sampling

Population sampling is the process of taking a subset of subjects that is representative of the entire population. The sample must be of sufficient size to warrant statistical analysis. https://explorable.com/population-sampling

 Haphazard Sampling (also called Convenience or Accidental Sampling)

Haphazard sampling is used by auditors to try to gather information from a representative sample of a population. Items are selected without intentional bias and with the goal of representing the population as a whole. The auditor might ask to see the deficiency reports on the coordinator's desk. These reports might be rationalized as being random and as representing the population as a whole. The auditor might ask for 10 deficiency reports, two from each line, and will ask to be the one who picks them. This might be rationalized as removing the bias from having the coordinator select the sample.

The pro side of haphazard sampling is that it is easy to select the sample, so the audit can be completed more quickly. There is less preparation time, making it possible to do more audits.

The con side of haphazard sampling may outweigh its advantages. If the coordinator is reviewing the deficiency reports for a specific department at the time the auditor walks in, the results of the audit will show that this department has a disproportionate number of deficiencies when compared with the other departments in the sample. The auditor might pick deficiency reports that catch his or her eye for some unknown reason, thus introducing an unknown bias. Haphazard sampling is the easy approach to sampling, but the results may not reflect all departments, lines, items, people, problems, or a myriad of other considerations. The results are not statistically valid, and generalizations about the total population should be made with extreme caution. The results of haphazard sampling are difficult to defend objectively. Of all the non-statistical audit sampling methods, haphazard is arguably the worst.


Haphazard Sampling Haphazard sampling is a sampling method that does not follow any systematic way of selecting participants. An example of Haphazard Sampling would be standing on a busy corner during rush hour and interviewing people who pass by. Haphazard sampling gives little guarantee that your sample will be representative of the entire population. If you were to use this method to conduct a survey to find out who people will vote for president, the results you get may not predict the actual outcome of the election. This is because you would probably only be able to interview people who were probably white-collar workers on their way to work, or those who were not in such a big hurry to get to where they're going, or those who lived or worked near the area where you conducted your survey.

http://aerohaveno.blogspot.com/2014/12/asia-summer-series-shanghai-china-part-1.html

http://www.ilishi.net/html/200909/18715.html

Block Sampling or Cluster Sampling – Statistically Valid

Block sampling or cluster sampling can be used by auditors to gain a reasonably good picture of the population, if the blocks are chosen in a statistical manner. This requires that numerous blocks be chosen before an accurate representation of the total population is obtained, and often more items are examined than if a statistical sample had been selected in the beginning. Normally, auditors don't use block sampling during audits but do use it extensively after a problem has been identified. Auditors and others use block sampling when trying to determine when or how a previously identified problem began, ended, or both. For example, if a problem began in May, the auditor might examine all items made or processed in May to try to determine when the problem began and whether it is still occurring. If a problem with calibration of balances was identified, every balance could be examined to determine when the problem began and whether it affected only those in one building or in one department. Some may recognize these activities as investigative actions taken subsequent to identification of a problem.

The pro side of block sampling is that it allows statistically valid judgments about the block examined. With a sufficiently large number of blocks selected randomly using the same selection criteria, statistically valid judgments about the total population can be made. Single blocks allow the auditor to narrow down the root cause of a previously identified problem by focusing the investigation on the area of concern. Single blocks also allow the auditor to recognize a possible problem with a single machine or a specific process.

The con side of block sampling is that it requires sampling a large number of items—even more than statistically selected samples—before judgments about the total population can be made. Auditors often want more than just information on a particular block of time, products, or locations. Auditors want to be able to provide management with supportable statements about the entire population. For this reason, block sampling is normally not used during the audit to identify problems.
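The block-selection idea above can be sketched in Python. This is a minimal illustration under assumed names (the block labels and function name are mine): blocks are chosen at random, and every item in a chosen block is examined, which is what makes judgments about that block statistically defensible.

```python
import random

def cluster_sample(blocks, n_blocks, seed=None):
    """Select whole blocks at random and examine every item in each chosen block.

    blocks: dict mapping a block id (e.g. a month or a machine) -> list of items.
    n_blocks: how many blocks to select.
    """
    rng = random.Random(seed)
    chosen = rng.sample(sorted(blocks), n_blocks)   # random choice among the blocks
    # Every item in a selected block is examined (a 100% check within the block).
    return {b: list(blocks[b]) for b in chosen}
```

For the May calibration example in the text, the blocks could be production months, with one block singled out for complete examination.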


Judgmental Sampling – Not Statistically Valid

Judgmental sampling can be used by auditors to get a pretty good idea of what is happening, although the results are not statistically valid. In the first approach, the auditor selects samples based on his or her best judgment of what is believed to give a representative picture of the population. These samples are chosen based on the auditor's past experience. Often these samples are taken from areas that expose the company to the greatest risk, such as high-dollar orders, special orders, or critical application orders. The auditor may already know from past history (past audits) that problems have existed in department A, activity C, and with this knowledge, the auditor examines that area in an audit. In judgmental sampling, the auditor may also decide to look at all orders over $2 million or all orders destined for installation in military aircraft. If a problem is found, the auditor examines additional samples to determine the extent of the immediate problem.

In the second approach, the process or system has reached maturity, and very few problems are identified in a general audit using random sampling techniques. The company may then decide to audit all areas in which problems were identified, with the intention of determining whether the activity can be improved beyond its current level. The nuclear industry and several other industries have reached this point and have begun to rely on judgmental sampling to identify areas for improvement.

The pro side of judgmental sampling is extensive. The auditor focuses on areas where previous problems were found and corrected. High-risk areas and activities historically have received the most attention from management. By doing judgmental sampling, the auditor will be providing information on areas known to be of interest to management. Judgmental sampling allows companies to focus their efforts on specific improvements rather than general assessment. It allows the auditor to use his or her time during the audit more effectively. And finally, selection of the audit sample is relatively simple, which leaves more time to prepare for and perform the audit.


The con side of judgmental sampling is that the results are not statistically valid or objectively defensible. Judgmental sampling is open to abuse through retaliation (selecting a group for a detailed audit because of some previous action). Judgmental sampling causes auditors to continue to focus on areas where problems were found previously. It is a fact that an auditor focusing on an area will probably find problems that get recorded and reported. An unwritten law of auditing is that "if we look for it, we will find it." If auditors continue to focus only on areas where problems are found, logic would take them to the extreme where they always audit the same thing over and over. Thus, certain areas would be seen as pristine (unspoilt, in their original condition), while others would be seen as consistently incompetent.

Statistical sampling is needed to provide a baseline from which further auditing using judgmental sampling may proceed. Haphazard sampling should be avoided if at all possible. Block sampling is effective in pinpointing problems, and statistically valid conclusions can be made about the block evaluated, but conclusions about the total population require more work. Judgmental sampling is effective in focusing the auditor's efforts and in identifying areas for improvement in a relatively mature program. The job of the auditor is to know which method is best for obtaining the information needed.


Judgmental Sampling

Judgmental sampling is a non-probability sampling technique in which the researcher selects units to be sampled based on their knowledge and professional judgment. This type of sampling technique is also known as purposive sampling and authoritative sampling. Purposive sampling is used in cases where the specialty of an authority can select a more representative sample, bringing more accurate results than other probability sampling techniques. The process involves nothing but purposely handpicking individuals from the population based on the authority's or the researcher's knowledge and judgment.

Example of Judgmental Sampling: In a study wherein a researcher wants to know what it takes to graduate summa cum laude in college, the only people who can give the researcher firsthand advice are the individuals who graduated summa cum laude. With this very specific and very limited pool of individuals that can be considered as subjects, the researcher must use judgmental sampling.

When to Use Judgmental Sampling: Judgmental sampling design is usually used when a limited number of individuals possess the trait of interest. It is the only viable sampling technique for obtaining information from a very specific group of people. It is also possible to use judgmental sampling if the researcher knows a reliable professional or authority whom he thinks is capable of assembling a representative sample.

Setbacks of Judgmental Sampling: The two main weaknesses of authoritative sampling lie with the authority and with the sampling process, both of which pertain to the reliability of, and the bias introduced by, the sampling technique. Unfortunately, there is usually no way to evaluate the reliability of the expert or the authority. The best way to avoid sampling error introduced by the expert is to choose the best and most experienced authority in the field of interest. As for the sampling process itself, it is usually biased, since no randomization was used in obtaining the sample. It is also worth noting that the members of the population did not have equal chances of being selected. The consequence of this is misrepresentation of the entire population, which will then limit generalizations of the results of the study.

https://explorable.com/judgmental-sampling


Block sampling https://es.slideshare.net/drbharatpaul/sampling-techniques-48927352





Haphazard (convenience) sampling figure: https://es.slideshare.net/drbharatpaul/sampling-techniques-48927352



Judgmental Sampling

https://www.youtube.com/watch?v=-kwdXEXC7yE


Non-Probability Sampling

Other names for the common non-probability methods:
• Haphazard sampling (also called convenience or accidental sampling)
• Snowball sampling
• Judgmental sampling (also called purposive sampling)
• Voluntary sampling

https://www.youtube.com/results?search_query=Rahul+Patwari+sampling



Dr. Rahul Patwari MD. Lectures on Sampling Subject

Links:
Sampling 01: Introduction – https://youtu.be/Cl2uZGGL-_U
Sampling 02: Simple Random Sampling – https://youtu.be/-BRoHNiRM-o
Sampling 03: Stratified Random Sampling – https://youtu.be/rsNCCQhkKN8
Sampling 04: Cluster Sampling – https://youtu.be/pV3FAVr086s
Sampling 05: Systematic Sampling – https://youtu.be/SBsgnpby-Hc
Sampling 06: Non-Probability Sampling – https://youtu.be/-kwdXEXC7yE

https://www.rushu.rush.edu/faculty/rahul-g-patwari-md

https://www.youtube.com/results?search_query=Rahul+Patwari+sampling


Dr. Manishika lecture on Statistical Sampling

https://youtu.be/bQ5_PPRPjG4


Statistical Sampling (Random and Systematic)

For a sampling approach to be considered statistical, the method must use random selection of the items to be evaluated and use probability theory to quantitatively evaluate the results. Statistically valid sampling is necessary to quantify problems resulting from an administrative process or production line. Statistically valid sampling allows the auditor to state in the audit report that "we are 95 percent confident that the actual population deviation rate lies between 1.2 percent and 5 percent. Since this is less than the tolerable deviation rate of 6 percent, the control procedure appears to be functioning as prescribed." With a slightly different sampling technique, the auditor would be able to state, "We are 95 percent confident that the true population deviation rate is less than 4.8 percent, which is less than the tolerable deviation rate of 6 percent." These numbers and confidence levels mean more to upper management than, say, "We think there is a problem in document control."

There are two widely used methods of statistical sampling:
 Simple random sampling
 Systematic sampling
Stratified random sampling and cluster/block sampling can also be applied statistically.


Simple random Simple random sampling ensures that each item in the population has an equal chance of being selected. Random number tables and computer programs can help make the sample selections.

Systematic sampling

Systematic sampling also ensures that each item in the population has an equal chance of being selected. The difference here is that after the sample size is determined, the total population size is divided by the sample size to determine the sampling interval (for example, every third item). The starting point is determined using a random number table. Several computer programs are available to help determine the sample size, the sampling interval, the starting point, and the actual samples to be evaluated.
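The interval-plus-random-start procedure can be sketched in a few lines (a hedged illustration; the function name is mine and the population is assumed to fit in a list):

```python
import random

def systematic_sample(population, n, seed=None):
    """Systematic sampling: a fixed interval through the population, random start."""
    rng = random.Random(seed)
    interval = len(population) // n      # sampling interval, e.g. every k-th item
    start = rng.randrange(interval)      # random starting point within the first interval
    return [population[start + i * interval] for i in range(n)]
```

With a population of 100 and a sample of 10, the interval is 10, so every tenth item is selected from a random starting point between 0 and 9.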

Reporting

It's not difficult to plug numbers into a formula and calculate the results. Management likes to work with numbers that have meaning and that put boundaries around a question or an error rate. This is where statistical sampling comes in. With statistical sampling, auditors can state in the audit reports that "we have 95 percent confidence that the purchase orders are being correctly processed." Sample size depends on the confidence level and what the auditor wants to determine. Naturally, the larger the sample size, the more accurate the estimate. For small populations, the sample size is corrected.

Auditing by statistical sampling is best suited to single-attribute auditing. However, once the item to be audited has been selected, many attributes can be checked during the audit. A purchase order has many attributes that can be checked simultaneously. In this way, one calculation for sample size can be used to report on many attributes. Standards (discussed in the next section) or statistical formulas should be used to determine the appropriate sample size for a required confidence level, such as 95% or 99%. For more information concerning statistical sampling, Dodge-Romig or Bayesian sampling plans, and binomial distributions, consult a comprehensive statistics textbook.

https://www.calculator.net/sample-size-calculator.html?type=1&cl=95&ci=5&pp=10&ps=1000&x=59&y=34
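The calculator linked above implements the standard sample size formula for estimating a proportion, with a finite population correction for small populations. A sketch of that calculation (rounding conventions may differ slightly between calculators; the function name is mine):

```python
import math

def sample_size(confidence_z, margin, p, population=None):
    """Sample size for estimating a proportion to within +/- margin.

    confidence_z: z-score for the confidence level (1.96 ~ 95%, 2.576 ~ 99%).
    margin: desired confidence interval half-width (e.g. 0.05 for +/-5%).
    p: anticipated population proportion (0.5 is the conservative worst case).
    population: if given, apply the finite population correction.
    """
    n0 = confidence_z**2 * p * (1 - p) / margin**2   # infinite-population sample size
    if population is not None:
        n0 = n0 / (1 + (n0 - 1) / population)        # finite population correction
    return math.ceil(n0)
```

At 95% confidence, ±5% margin, and p = 0.5, this gives the familiar 385; correcting for a population of 1000 reduces it to 278.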

Random number tables

How to use a random number table. Let's assume that we have a population of 185 students and each student has been assigned a number from 1 to 185. Suppose we wish to sample 5 students (although we would normally sample more, we will use 5 for this example). Since we have a population of 185, and 185 is a three-digit number, we need to use the first three digits of the numbers listed on the chart. We close our eyes and randomly point to a spot on the chart. For this example, we will assume that we selected 20631 in the first column. We interpret that number as 206 (first three digits). Since we don't have a member of our population with that number, we go down to the next number, 899 (89990). Once again we don't have someone with that number, so we continue at the top of the next column. As we work down the column, we find that the first number to match our population is 100 (actually 10005 on the chart). Student number 100 would be in our sample. Continuing down the chart, we see that the other four subjects in our sample would be students 049, 082, 153, and 164. Researchers use different techniques with these tables. Some researchers read across the table using given sets (in our example, three-digit sets).

Microsoft Excel has a function to produce random numbers. The function is simply =RAND(). Type that into a cell and it will produce a random number in that cell. Copy the formula throughout a selection of cells and it will produce random numbers between 0 and 1. If you would like to modify the formula, you can obtain whatever range you wish. For example, if you wanted random numbers from 1 to 250, you could enter the following formula: =INT(250*RAND())+1. The INT eliminates the digits after the decimal, the 250* creates the range to be covered, and the +1 sets the lowest number in the range.

https://researchbasics.education.uconn.edu/random-number-table/
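The Excel formulas above translate directly to Python's random module. A brief, illustrative sketch; for audit sampling, where repeated selections are unwanted, sampling without replacement via random.sample is usually the better fit than repeated random draws:

```python
import random

# Equivalent of the Excel formula =INT(250*RAND())+1: integers from 1 to 250 inclusive.
draws = [random.randint(1, 250) for _ in range(10)]

# For the 185-student example, draw 5 unique student numbers without replacement,
# rather than scanning a printed random number table:
students = random.sample(range(1, 186), 5)
```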

https://stattrek.com/statistics/random-number-generator.aspx#error

https://www.randomizer.org/


Sample Size Calculator This Sample Size Calculator is presented as a public service of Creative Research Systems survey software. You can use it to determine how many people you need to interview in order to get results that reflect the target population as precisely as needed. You can also find the level of precision you have in an existing sample. Before using the sample size calculator, there are two terms that you need to know. These are: confidence interval and confidence level.

 The confidence interval (also called margin of error) is the plus-or-minus figure usually reported in newspaper or television opinion poll results. For example, if you use a confidence interval of 4 and 47% of your sample picked an answer, you can be "sure" that if you had asked the question of the entire relevant population, between 43% (47 - 4) and 51% (47 + 4) would have picked that answer.
 The confidence level tells you how sure you can be. It is expressed as a percentage and represents how often the true percentage of the population who would pick an answer lies within the confidence interval. The 95% confidence level means you can be 95% certain; the 99% confidence level means you can be 99% certain. Most researchers use the 95% confidence level.
When you put the confidence level and the confidence interval together, you can say that you are 95% sure that the true percentage of the population is between 43% and 51%. The wider the confidence interval you are willing to accept, the more certain you can be that the whole population's answers would be within that range.

https://surveysystem.com/sscalc.htm#one
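The margin of error quoted by such calculators follows from the observed proportion and the sample size. A sketch using the normal approximation (the function name is mine; as a check, an observed 47% from an assumed sample of about 600 gives roughly the ±4% in the example above):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of the confidence interval for an observed proportion p
    from a sample of size n (normal approximation; z = 1.96 for ~95%)."""
    return z * math.sqrt(p * (1 - p) / n)
```

Note the square-root relationship: quadrupling the sample size only halves the margin of error.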


For example, if you asked a sample of 1000 people in a city which brand of cola they preferred, and 60% said Brand A, you can be very certain that between 40 and 80% of all the people in the city actually do prefer that brand, but you cannot be so sure that between 59 and 61% of the people in the city prefer the brand.

https://surveysystem.com/sscalc.htm#one


Sampling Standards (Acceptance Sampling) In this section, three sampling procedures are discussed for application to auditing:  ANSI/ASQ Z1.4-2008: Sampling Procedures and Tables for Inspection by Attributes  ANSI/ASQ Z1.9-2008: Sampling Procedures and Tables for Inspection by Variables for Percent Nonconforming  ASQC Q3-1998: Sampling Procedures and Tables for Inspection of Isolated Lots by Attributes

Note: ANSI/ASQ Z1.4-2003 http://vcg1.com/files/ANSI_ASQC-Z1.4.pdf


Application

We are interested in determining conformance with procedures, instructions, and other program documentation. Audits for adequacy require a point-by-point comparison of the lower-tier document, such as a procedure, with the upper-tier document, such as a standard. This is a 100% check, so statistical sampling does not apply. Effectiveness and performance audits require the judgment skills of the auditor, as applied to the results of the program. Again, statistical sampling does not apply.

Conformance is determining whether an activity or a document is satisfactory or unsatisfactory. This, then, is attribute sampling rather than variable sampling. In addition, we are dealing with whole numbers (1, 2, 3), so we need to deal with discrete probability distributions. The most common of these are the hypergeometric, binomial, and Poisson distributions. For general discussion, we need to assume a large population, N. This condition is met if N is greater than or equal to 10n, where n is the sample size. This condition is not required when the standards are used, as the sample size is corrected for small population (lot) sizes. But what are the characteristics of the lot that we will examine during the audit?

Meaning: hypergeometric. In probability theory and statistics, the hypergeometric distribution is a discrete probability distribution that describes the probability of k successes (random draws for which the object drawn has a specified feature) in n draws, without replacement, from a finite population of size N that contains exactly K objects with that feature, wherein each draw is either a success or a failure. In contrast, the binomial distribution describes the probability of k successes in n draws with replacement.


Hypergeometric Distribution

The probability distribution of a hypergeometric random variable is called a hypergeometric distribution. This lesson describes how hypergeometric random variables, hypergeometric experiments, hypergeometric probability, and the hypergeometric distribution are all related.

Notation

The following notation is helpful when we talk about hypergeometric distributions and hypergeometric probability:
• N: The number of items in the population.
• k: The number of items in the population that are classified as successes.
• n: The number of items in the sample.
• x: The number of items in the sample that are classified as successes.
• kCx: The number of combinations of k things, taken x at a time.
• h(x; N, n, k): hypergeometric probability, the probability that an n-trial hypergeometric experiment results in exactly x successes, when the population consists of N items, k of which are classified as successes.

 Hypergeometric formula. Suppose a population consists of N items, k of which are successes, and a random sample drawn from that population consists of n items, x of which are successes. Then the hypergeometric probability is:

h(x; N, n, k) = [ kCx ] [ N-kCn-x ] / [ NCn ]

The hypergeometric distribution has the following properties:
 The mean of the distribution is equal to n * k / N.
 The variance is n * k * ( N-k ) * ( N-n ) / [ N² * ( N-1 ) ].

https://www.stattrek.com/probability-distributions/hypergeometric.aspx


Example 1

Suppose we randomly select 5 cards without replacement from an ordinary deck of playing cards. What is the probability of getting exactly 2 red cards (i.e., hearts or diamonds)?

Solution: This is a hypergeometric experiment in which we know the following:
• N = 52, since there are 52 cards in a deck.
• k = 26, since there are 26 red cards in a deck.
• n = 5, since we randomly select 5 cards from the deck.
• x = 2, since 2 of the cards we select are red.

We plug these values into the hypergeometric formula as follows:
h(x; N, n, k) = [ kCx ] [ N-kCn-x ] / [ NCn ]
h(2; 52, 5, 26) = [ 26C2 ] [ 26C3 ] / [ 52C5 ]
h(2; 52, 5, 26) = [ 325 ] [ 2600 ] / [ 2,598,960 ]
h(2; 52, 5, 26) = 0.32513

Thus, the probability of randomly selecting 2 red cards is 0.32513.

https://www.stattrek.com/probability-distributions/hypergeometric.aspx
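The worked example can be verified directly with Python's math.comb (the helper name below is mine, a sketch of the formula rather than any library's API):

```python
from math import comb

def hypergeom_pmf(x, N, n, k):
    """P(exactly x successes in n draws without replacement
    from a population of N items containing k successes)."""
    return comb(k, x) * comb(N - k, n - x) / comb(N, n)

# Two red cards in a five-card hand: h(2; 52, 5, 26)
p = hypergeom_pmf(2, 52, 5, 26)   # ~0.32513, matching the example
```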


Hypergeometric distribution, N=250, k=100

http://www.statsref.com/HTML/index.html?hypergeometric.html


The Moving Lot

Most audits are a snapshot covering a specific time period. Our snapshot is of a lot from a continuous production line. As part of the scope of the audit, we might have to examine the deficiency reports (DRs) for the last quarter. Obviously there were DRs written before our time period, and there will be DRs written after we are gone. This might look like:

Stream of DRs occurring ----------{Scope of Audit #1}---------- Time

The population (lot size) consists of the number of DRs written during the selected time period. During a subsequent audit, the lot might be completely separate from the lot selected during this audit, or it might overlap:

Stream of DRs occurring ----{Scope of Audit #1}--{Scope of Audit #2}---- Time

Thus, the lot moves for each audit performed, depending on the goals set for the audit. The process being examined can be said to have an acceptable quality level (AQL) set by the people doing the work. This is the work standard they are attempting to achieve. Now that we understand the nature of our lot, we are ready to use the standards.


QC101 Control Charts

https://www.youtube.com/watch?v=sV5PRDV7hyM


Variable Control Chart

Z1.9 Applicability

Z1.9 has been eliminated from consideration because it is a variables sampling standard, and auditors need to concern themselves with attribute sampling. This leaves Z1.4 and ASQC Q3-1998 for possible use by auditors. We will consider them in turn.


QC101 Attribute Control Charts: P & NP Charts

https://www.youtube.com/embed/sV5PRDV7hyM

https://www.youtube.com/embed/8XEvaR2TPlU


Attribute Control Charts

 ANSI/ASQ Z1.4-2008: Applicability and Use

ANSI/ASQC Z1.4-2008 is the revised and updated version of the old MIL-STD-105. This standard assumes that isolated lots are drawn from a process and sampled separately. The process AQL is a factor in determining the sample size in this case. In auditing, we want to know the maximum error rate, which translates into a limiting quality level (LQL) for the standards. Tables VI-A and VII-A of ANSI/ASQ Z1.4-2008 provide LQLs as a percent nonconforming with a probability of acceptance Pa = 10% and 5%, respectively. Pa = 10% means that there is only a 10% chance that we will accept a lot with a percent nonconforming greater than our specified LQL.

As an example, let's assume that we want a 10% LQL for our lot with Pa = 10% or less, and an AQL of 1.5% for a series of lots. Enter ANSI/ASQ Z1.4-2008 at a sample size of 50, and we can accept the lot with two problems noted but must reject the lot if three problems are noted. The sample size of 50 implies that our population is approximately 500 (N is greater than or equal to 10n). If the population is much less than 500, we need to use the calculations presented in "Proportional Stratified Sampling" in this chapter.

Continuing with the example, we need to choose a random sample of 50 items. Computer random number generator programs or random number tables provide the selection method for our sample. When completed, assuming the results are acceptable, we will be able to say that there is a 90% probability that the audited attribute has a percent defective less than 10%.

When working to very exact requirements, such as low AQL and low LQL, we need to use the operating characteristic curves to determine the discrimination desired. The operating characteristic curves are imprecise when working in this area. This brings up the second standard applicable to auditing, ASQC Q3-1998.
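The plan quoted in the example (sample size 50, accept on 2, reject on 3) can be sanity-checked with the binomial model of the operating characteristic curve: at 10% nonconforming, the probability of acceptance comes out close to 10%, consistent with the Table VI-A interpretation. This is a sketch using the binomial approximation (Z1.4 itself simply tabulates these values; the function name is mine):

```python
from math import comb

def accept_probability(n, c, p):
    """Probability of acceptance for a single sampling plan (n, c):
    P(number of nonconformances in the sample <= c), binomial model."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

pa = accept_probability(50, 2, 0.10)   # n = 50, accept on 2, lot 10% nonconforming
```

The same function evaluated at the 1.5% AQL gives an acceptance probability above 90%, which is the other end of the OC curve: good lots are accepted nearly always, while lots at the LQL are accepted only about 10% of the time.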


Table VI-A Limiting Quality (in percent nonconforming) for Which Pa = 10 Percent (for Normal Inspection, Single Sampling)

• https://www.youtube.com/embed/y5WbL_86OOo
• http://www.ombuenterprises.com/LibraryPDFs/Attributes_Acceptance_Sampling_Understanding_How_it_Works.pdf


Table VI-A Limiting Quality (in percent nonconforming) for Which Pa = 10 Percent (for Normal Inspection, Single Sampling). c = 2, r = 3; this corresponds to AQL = 4.




Single / Double / Multiple Plans

MIL-STD-105E offers three types of sampling plans: single, double, and multiple. The choice is, in general, up to the inspectors. Because of the three possible selections, the standard does not give a sample size directly, but rather a sample size code letter. This, together with the choice of plan type, yields the specific sampling plan to be used. https://qualityinspection.org/inspection-level/

Tables II-A, II-B, and II-C give the plans for normal, tightened, and reduced inspection, respectively.



Why different inspection levels? There is a fairly obvious principle in statistical quality control: the greater the order quantity, the higher the number of samples to check. But should the number of samples depend ONLY on the order quantity? What if this factory had many quality problems recently, and you suspect there are many defects? In this case, you might want more products to be checked. On the other hand, if an inspection requires tests that end up in product destruction, shouldn't the sample size be drastically reduced? And if the quality issues are always present on all the products of a given batch (for reasons inherent to the processes at work), why not check only a few samples?

For these reasons, different levels are proposed by MIL-STD-105E (the widely recognized standard for statistical quality control). It is usually the buyer's responsibility to choose the inspection level: more samples to check means more chances of rejecting products when they are bad, but it also means more days (and dollars) spent in inspection.

The 3 "general" inspection levels

Level I: Has this supplier passed most previous inspections? Do you feel confident in their product quality? Instead of doing no quality control, buyers can check fewer samples by opting for a level-I inspection. However, settling on this level by default, in order to spend less time/money on inspections, is very risky. The likelihood of finding quality problems is lower than generally recommended.


Table I—Sample size code letters

Level II: The most widely used inspection level, to be used by default.

Level III: If a supplier recently had quality problems, this level is appropriate. More samples are inspected, and a batch of products will (most probably) be rejected if it is below the quality criteria defined by the buyer. Some buyers opt for level-III inspections for high-value products. It can also be interesting for small quantities, where the inspection would take only one day whatever the level chosen.

Lot Size N=150

https://www.intouch-quality.com/aql-calculator


http://rsjqa.com/useful-corner/aql-manday-calculator/aql-calculator
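The code-letter lookup described above can be sketched as follows. The mapping below is a partial reconstruction of the general inspection level II column of Table I and the corresponding single-sampling sample sizes; verify the values against the standard before relying on them.

```python
# Partial reconstruction of ANSI/ASQ Z1.4 Table I (general inspection level II
# only) plus the single-sampling sample sizes -- verify against the standard.
CODE_LETTERS_LEVEL_II = [
    (8, "A"), (15, "B"), (25, "C"), (50, "D"), (90, "E"),
    (150, "F"), (280, "G"), (500, "H"), (1200, "J"), (3200, "K"),
]
SAMPLE_SIZE = {"A": 2, "B": 3, "C": 5, "D": 8, "E": 13,
               "F": 20, "G": 32, "H": 50, "J": 80, "K": 125}

def code_letter(lot_size: int) -> str:
    """Return the level-II sample size code letter for a lot size."""
    for upper, letter in CODE_LETTERS_LEVEL_II:
        if lot_size <= upper:
            return letter
    raise ValueError("lot size beyond this partial table")

letter = code_letter(150)
print(letter, SAMPLE_SIZE[letter])  # the N = 150 lot from the slide -> F, n = 20
```

Note this is consistent with the earlier example: a population of roughly 500 falls in the 281-500 band (code H), giving the sample size of 50 used there.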


Table II-A—Single sampling plans for normal inspection (Master table)

Acceptable quality levels (AQL). Sometimes called "acceptable quality limits," AQLs range from 0 to 15 percent or more, with 0 representing the lowest tolerance for defects. Importers' tolerance for "minor" defects tends to be higher than that for "major" and "critical" defects, so they usually choose a different AQL for each of these classes of product defects. For consumer goods, QC professionals typically recommend AQLs of 0, 2.5, and 4 percent for critical, major, and minor defects, respectively.

Choosing an AQL isn't always as simple as adopting the one that similar importers are using; what works for one importer might not work for another to verify that orders are meeting customer expectations. To ensure you choose the best AQL for your circumstances, there are a number of factors to consider, including:

• What quality level your supplier considers reasonable and has agreed to meet
• Your inspection budget (lower AQLs typically require larger sample sizes and more time)
• Your exit-factory date
• The value of the goods in question (more expensive products tend to warrant lower AQLs)

Although you might select what you perceive as a reasonable AQL, that doesn't mean a factory will feel the same way. Agreeing upon standards early is crucial when it comes to QC inspection; the factory may try to dispute the results of an inspection if there is no prior agreement on an appropriate AQL.


Table II-A—Single sampling plans for normal inspection (Master table)

https://www.smartchinasourcing.com/anatomy-ansi-asq-z1-4-industry-standard-aql-table/


Table II-B—Single sampling plans for tightened inspection (Master table)


Table VI-A Limiting Quality (in percent nonconforming) for Which Pa = 10 Percent (for Normal Inspection, Single Sampling)

Limiting Quality (LQ) and the consumer's risk associated with it. Limiting quality is the percentage of nonconforming units (or nonconformities) in a batch or lot for which, for purposes of acceptance sampling, the consumer wishes the probability of acceptance to be restricted to a specified low value. Tables VI and VII give process levels for which the probabilities of lot acceptance under various sampling plans are 10 percent and 5 percent, respectively. If a different value of consumer's risk is required, the OC curves and their tabulated values may be used. For individual lots with percent nonconforming or nonconformities per 100 units equal to the specified LQ values, the probabilities of lot acceptance are less than 10 percent in the case of plans listed in Table VI and less than 5 percent in the case of plans listed in Table VII.

When there is reason for avoiding more than a limiting percentage of nonconforming units (or nonconformities) in a lot or batch, Tables VI and VII may be useful for fixing minimum sample sizes to be associated with the AQL and inspection level specified for the inspection of a series of lots or batches. For example, if an LQ of 5 percent is desired for individual lots with an associated Pa of 10 percent or less, and an AQL of 1.5 percent is designated for inspection of a series of lots or batches, Table VI indicates that the minimum sample size must be that given by Code Letter M.


ASQC Q3-1998 Applicability and Use

ASQC Q3-1998 is designed for isolated lots and uses the hypergeometric probability function, so it applies even more directly to audits than ANSI/ASQ Z1.4-2008. ASQC Q3-1998 also uses the customer's specified limiting quality (LQ) as the basis for sample sizes. The goal is to have a very low probability of acceptance (Pa) for a lot that has a percent nonconforming equal to or worse than the LQ. ASQC Q3-1998 ties back to ANSI/ASQ Z1.4-2008 for AQLs to provide a commonality or cross-reference. Because we will be pulling isolated lots from a continuous process, we will be working with Table B of the ASQC Q3-1998 standard. There are cases when we will work with truly isolated lots, in which case Table A would be used, but this is the exception.

ASQC Q3-1998 is fairly simple to understand. Let's assume that from the DR log, we counted 239 DRs written during the period being audited. Our client isn't overly concerned with detailed compliance with the procedures, but does want each deficiency corrected. For compliance, we select an LQ of 12.5%, which is fairly loose; Table B8 shows our sample size to be 32. For deficiency correction, we select an LQ of 2%, which is fairly tight; Table B4 shows our sample size to be 200. Please note that this is almost a 100% sample because our population is low. From Table C3, we find that in both cases we accept the lot if we find one or zero problems in our sample.

We approach this by selecting a random sample of 200 for auditing deficiency correction. Next, we divide 200 by the 32 samples needed for the compliance portion of the audit, to get a frequency of 6. Thus for the compliance portion, we will use every sixth item from our deficiency correction sample as one of our compliance samples, beginning with item 4 (chosen because it is less than 6). The sequence looks like this:

Deficiency correction items: 1 2 3 4 5 6 7 8 9 10 . . . 190 191 192 193 194 195 196 197 198 199 200
Compliance samples: 1 (item 4), 2 (item 10), . . . , 32 (item 190), 33 (item 196)

Note that because the division does not yield an even number, we end up with 33 total samples for compliance, rather than the 32 from the table. Use the extra sample as part of the audit. We then perform the audit, and if one or zero problems is noted for each sample (200 and 32), we can be 90% confident that the DRs meet the LQ specified for the attribute being checked (12.5% for compliance and 2% for deficiency correction).

Read more: http://www.uotechnology.edu.iq/dep-production/branch3_files/Dr.%20Mahmoud%20Chapter%2010%20Acceptance%20Sampling%20Systems.pdf
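The interleaving scheme above is easy to sketch in code (illustrative only; item numbers stand in for the randomly selected DRs):

```python
# A 200-item random sample for deficiency correction, with every 6th item
# (starting at item 4) reused as the compliance sample.
deficiency_sample = list(range(1, 201))      # positions 1..200
compliance_sample = deficiency_sample[3::6]  # items 4, 10, 16, ..., 196

print(len(compliance_sample))                # 33: one more than the table's 32
print(compliance_sample[0], compliance_sample[-1])  # first item 4, last item 196
```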



Summary Two of the most familiar sampling plans, ANSI/ASQ Z1.4-2008 and ASQC Q3-1998, are readily applied to and used during audits. These allow the auditor to speak with authority to management about the results of the audit. There is a lot to learn about applying the standards to audits, but many people who have previously applied the standards to product acceptance will be able to apply the standards to their auditing. Consider using some of these sampling methods the next time you audit a large population to improve the audit credibility.


Proportional Stratified Sampling

Proportional stratified sampling can be used to gain an understanding of each stratum within a population, but it cannot be used to make statistical inferences about each stratum. The sample size is determined by any one of the statistical methods/standards; the sample is then divided and applied in proportion to the population of each stratum in the total population. If the sample size is not statistically determined, the statistical validity is compromised, and no statistically valid conclusions can be drawn.

For example, suppose the population is 1000 purchase orders over the past year. Eight hundred of these are for amounts of $500 or less, 150 are for amounts of $501–$1000, and 50 are for amounts over $1000. We chose an LQ of 5% and used ASQC Q3-1998, Table B6, to learn that the sample size is 80. To apportion the sample to the strata, set up a proportion for each stratum, X = 80 × (stratum size/1000):

• $500 or less: X = 80 × 800/1000 = 64
• $501–$1000: X = 80 × 150/1000 = 12
• Over $1000: X = 80 × 50/1000 = 4

The samples are chosen from each stratum using the random sampling technique. If the samples are not chosen using a random sampling technique, the results will not be statistically valid.
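The apportionment arithmetic can be sketched as follows (the function name is ours):

```python
# Proportional apportionment of a statistically determined sample (n = 80 from
# ASQC Q3 Table B6 in the example) across strata of 800, 150, and 50 POs.
def apportion(total_sample: int, strata: dict) -> dict:
    """Split total_sample across strata in proportion to each stratum's size."""
    population = sum(strata.values())
    return {name: round(total_sample * size / population)
            for name, size in strata.items()}

strata = {"$500 or less": 800, "$501-$1000": 150, "over $1000": 50}
shares = apportion(80, strata)
print(shares)  # {'$500 or less': 64, '$501-$1000': 12, 'over $1000': 4}
```

In general the rounded shares should be checked to confirm they still sum to the required total sample size; here they do (64 + 12 + 4 = 80).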


With this knowledge, we can accept the total population with one or zero deficiencies and reject the total population with two or more deficiencies. Although we cannot make statistical inferences about each stratum, we gain some information that we can use to investigate a stratum further. If all the deficiencies are found in a particular stratum, the auditor can revise the focus of the audit to investigate that stratum further, and the auditee could justifiably focus corrective action on that stratum.

This method places emphasis on the relative size of each stratum within the population, which may not be what management wants. For example, management may want the auditor to focus on the high-cost items, which is where the greatest risk lies. Proportional stratified sampling is deceptive in that the auditor and the auditee could be misled into drawing conclusions about each stratum instead of the total population. The method, sample size, and results allow the auditor to draw conclusions about the total population only. The auditor can use the results to determine where to focus further investigation, and the auditee can use the results as a guide to determine where to focus corrective actions.


Risks in Sampling

Hypothesis testing creates two competing hypotheses and arrives at a decision based on sampling. Sampling is used to make business decisions regarding the marketability of a product or quality control decisions regarding the acceptance of a batch, lot, or process.

A hypothesis may be: if the lot achieves a certain acceptable quality level (AQL), it will be approved. Or a decision rule may be: if X number of parts is found to be defective, the lot will be rejected. Consumer and producer risk are the chances of making decision errors based on the sample taken.

Producer risk, or Type I error, is the probability that good-quality product is rejected, or the probability that a product survey would indicate that a product is not marketable when it actually is. The producer suffers when this occurs because good product (or marketable product or service) is rejected. The math symbol used to represent producer risk is alpha (α risk). See Figure 22.1.

Consumer risk, or Type II error, is the probability that bad-quality product is accepted, or the probability that a product survey would indicate that a product is marketable when it actually is not. The consumer suffers when this occurs because bad product is accepted (released). The math symbol used to represent consumer risk is beta (β risk). For example, a product recall may be the result of a Type II error. See Figure 22.2.

Sufficient samples of the population must be taken to achieve a certain confidence that Type I and Type II errors will be avoided. Statistically, we can define a sampling plan as one that will give us confidence in the results. Typical confidence levels are 95% or 99%. There is a trade- off between the confidence level you want to achieve versus the cost of sampling.
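Under the binomial model, both risks can be read off a sampling plan's operating characteristic curve. A sketch, using an illustrative plan of n = 50, c = 2 with AQL = 1.5% and LQ = 10% (values chosen to match the earlier Z1.4 example, not prescribed by this chapter):

```python
from math import comb

def accept_prob(n: int, c: int, p: float) -> float:
    """Probability of acceptance under binomial(n, p) with acceptance number c."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Producer's risk (alpha): chance of rejecting a lot that is at the 1.5% AQL.
alpha = 1 - accept_prob(50, 2, 0.015)
# Consumer's risk (beta): chance of accepting a lot that is at the 10% LQ.
beta = accept_prob(50, 2, 0.10)
print(f"alpha = {alpha:.3f}, beta = {beta:.3f}")
```

For this plan alpha is roughly 4% and beta roughly 11%, illustrating the trade-off: tightening one risk (smaller c, or larger n) shifts cost onto the other or onto the inspection budget.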


Figure 22.1 Producer risk or Type I error (note: sample taken from shaded area).


Figure 22.2 Consumer risk or Type II error (note: sample taken from shaded area).


Sampling Summary

This chapter has discussed methods of sampling that are commonly encountered in the performance of product, process, and system audits. Table 22.1 summarizes the methods, their advantages and disadvantages, and their applicability.

Statistically, random sampling must ensure that each item in the population has an equal chance of being selected. A sampling scheme is developed for selection of the samples. This scheme can use a strictly random selection based on random number tables or computerized random number generators, or a systematic sampling scheme based on the number of samples selected and the total population. Non-statistical sampling, although very easy and quick to perform, has many disadvantages, which include potential bias in the sample, inability to make generalizations about the total population, and indefensibility as objective sampling. Statistically valid auditing must have random selection of items to be evaluated and use probability theory to quantitatively evaluate the results.

Two methods can be applied to auditing to provide the ability to make statistically valid conclusions for use by management: statistical sampling for attributes and sampling with standards. These methods allow confidence levels and error estimation based on the results of the evaluation of the samples. Auditors are encouraged to begin using statistically valid sampling when it adds value and improves the effectiveness of the audit report. When using statistical sampling techniques for the first time, the auditor may use one method during an audit and try a different one for the next to learn the various methods. This has the additional advantage of allowing the auditor to educate management on statistical methods and the accurate results that can be obtained. Often managers have not had extensive training in statistical methods (other than as applied to business applications and budget), so it may be the auditor's job to help familiarize management with the value of statistical methods. After the auditor and management become familiar with the methods, a complete audit using statistical methods should be performed and the results reported to management.


Table 22.1 Sampling methods summary.


Table 22.1 Sampling methods summary. (continued)



Part VF

Chapter 23 Change Control and Configuration Management/Part VF
___________________

Configuration Management. Since the advent of the industrial age, organizations have recognized the need to control products and the documents that describe those products to ensure that the latest models and their descriptions match. Historically, this involved blueprints and specification sheets that were updated and noted by date of revision or a revision of model code or letter. Over time, there has been a continual evolution in the means and techniques involved in managing change. However, change must be controlled so that unnecessary risks are avoided.

Before an organization offers a product or service for sale, it must figure out how to provide it. The established way for providing a product or service is to demonstrate how it is configured. A collection of documents such as procedures, specifications, or drawings defines the product or service configuration. Controlling the configuration is called configuration management. Configuration management can include:

• Planning
• Identification and tracking
• Change control
• History
• Archiving
• Auditing

Some companies call this management change control. If there is a change in the management system, it should be controlled relative to the risks to the organization. One aspect of change control is the control of documents. Document control is not new and is a well-established management system control.


Document Control All organizations have documents either internally or externally generated that need to be identified and controlled so that correct, complete, current, and consistent information is distributed among those who need it in order to do their jobs effectively and to meet customer and stakeholder requirements. These documents could be federal or other governmental registers or regulations, employment regulations, industry-specific material and product specifications, maintenance manuals, customer-supplied designs and specifications, standards, organization policies and procedures, price lists, contracts, other purchasing-related documents, business and project plans, and so on. Which documents need to be controlled? Much depends on the nature of the organization’s business and the types of products and services produced, the regulatory climate, federal and state law, industry practices, and organization experience. There are references available that provide guidance. Some of the standards have wording to the effect that documents required by the management systems for environmental concerns, quality, or whatever must be controlled.


Technology Technology is a consideration in document management and change control. Many aspects of change control, such as revision levels, revision dates, signatures, distribution copies, distribution lists, distribution verifications, master lists, and so forth, are holdovers in the development of systems to control what could be termed hardcopy documents. In the past, there were master and derivative blue (or sepia) prints, carbon or mimeographed copies of procedures, and other documents, and the management, distribution, and updating of controlled documents often required a full-time position for one or more persons, depending on the size of the organization. With the advent of word processing, distributed computing, shared drives, and designated or limited file access, the task of keeping documents current and ensuring appropriate distribution became somewhat easier. Many organizations evolved to the state where all of their controlled documents had only one controlled copy, and that copy was on a shared drive or in a certain computer or server.

Hard copies were time and date stamped and considered to be uncontrolled; some were considered to be obsolete the day after they were printed, or as indicated by the time/date stamp. Even then, hard copies of references, industry specifications, customer documents, and the like still existed, requiring mixed systems of hard-copy and electronic documents. With the advent of web-based technology, the internet, and company intranets, the evolution of document control has continued. Access and distribution are through web-page access technology. One copy is maintained online by a designated individual/owner, who has electronic review and approval authority. However, when one abstracts the content from the technology, the elements of effective document control are still evident.

https://1drv.ms/f/s!AgjXpjEHTe0ej1ZlZxoQmNEbj3eG



Configuration Management Control Configuration management is a management oversight activity for monitoring and controlling changes to configured products, services, and systems. Configuration management ensures that existing product, service, or system configuration is documented, traceable, and current (accurate) during its life cycle (series of stages or phases from beginning to end). Configuration management includes planning, identification and tracking, change control, history, archiving, and auditing. Configuration management audits include auditing the configured documents to ensure they meet requirements and auditing the configuration management process/system to ensure it conforms and is effective.

http://slideplayer.com/slide/5823154/

https://www.youtube.com/embed/-xVXAIrZcZU

https://www.youtube.com/embed/i2E1VDjmrXo



A configuration management program includes a plan, procedures, identification, change control, records, and audit processes, as described in the following list:

1. Plan: A configuration plan should include activities to be implemented and goals to be achieved, such as: the XYZ product line will be put under configuration management control, the configuration management control process will be expanded, suppliers will be provided training, or there will be change control training for all administrative assistants, and so on.

2. Procedures and guidelines: An organization may need guidelines for the selection of items to be configured, review frequency, distribution, and contents and control of configuration reports. Guidelines may be needed for establishing the baseline configuration to be controlled (what is needed to define the product, service, or system).

3. Identification process: Items to be under configuration management control should be identified, such as drawings, specifications, control plans, and procedures. Conventions for marking and numbering should be established.

4. Change-control process: There should be a change-control procedure before and after the configuration baseline is established.

5. Records status process: There should be an established method for collecting, recording, processing, maintaining, archiving, and destroying configuration data.

6. Audit process: Process or product audits may be used to audit the configured items and the configuration management process.
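As an illustration only (none of these names come from a standard), the change-control and records-status elements above can be sketched as a baseline plus change log:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ChangeRecord:
    item: str          # configured item affected (e.g., a drawing number)
    description: str   # what changed and why
    approved_by: str
    approved_on: date

@dataclass
class ConfigurationItem:
    identifier: str    # per the identification conventions (step 3)
    revision: str = "A"
    history: list = field(default_factory=list)  # records status (step 5)

    def apply_change(self, record: ChangeRecord, new_revision: str) -> None:
        """Change control (step 4): log the prior revision and its change record,
        then advance to the new revision."""
        self.history.append((self.revision, record))
        self.revision = new_revision

drawing = ConfigurationItem("DWG-1001")
drawing.apply_change(
    ChangeRecord("DWG-1001", "widen tolerance on hole 3", "QA lead", date(2018, 10, 8)),
    new_revision="B",
)
print(drawing.revision, len(drawing.history))  # B 1
```

An audit of the configured item (step 6) can then trace every revision back through its approved change records, which is the traceability the chapter describes.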


Audits can be used to verify that the configured product/service conforms to specified characteristics (product/service audit) and that the product/service can perform its intended function. Additionally, a process audit may be conducted on the configuration process itself. A configuration process audit should verify that the process is adequate, implemented, and maintained. Within configuration management control, organizations should conduct audits of the document control system as they have done in the past. An auditor should verify the effectiveness of the document and record control system by verifying all aspects of the procedures, policies, and practices.


Conclusion Change control includes product design (including hardware, software, and service), process design, project (including schedule), and the management system. The key principles of change control are what was done, why, when, where, by whom, and how, and the result, including the impact of changes to other processes. Configuration management is a key factor of change control because any change could affect various processes and sub-processes and because it is necessary to have a good grasp of which process or part relates to which other process or part. Configuration management is the basis for good process management.


Configuration management (CM) Configuration management (CM) is a systems engineering process for establishing and maintaining consistency of a product's performance, functional, and physical attributes with its requirements, design, and operational information throughout its life. The CM process is widely used by military engineering organizations to manage changes throughout the system lifecycle of complex systems, such as weapon systems, military vehicles, and information systems. Outside the military, the CM process is also used with IT service management as defined by ITIL, and with other domain models in the civil engineering and other industrial engineering segments such as roads, bridges, canals, dams, and buildings. Configuration management (CM). Introduction. CM applied over the life cycle of a system provides visibility and control of its performance, functional, and physical attributes. CM verifies that a system performs as intended, and is identified and documented in sufficient detail to support its projected life cycle. The CM process facilitates orderly management of system information and system changes for such beneficial purposes as to revise capability; improve performance, reliability, or maintainability; extend life; reduce cost; reduce risk and liability; or correct defects. The relatively minimal cost of implementing CM is returned many fold in cost avoidance. The lack of CM, or its ineffectual implementation, can be very expensive and sometimes can have such catastrophic consequences such as failure of equipment or loss of life. CM emphasizes the functional relation between parts, subsystems, and systems for effectively controlling system change. It helps to verify that proposed changes are systematically considered to minimize adverse effects. 
Changes to the system are proposed, evaluated, and implemented using a standardized, systematic approach that ensures consistency, and proposed changes are evaluated in terms of their anticipated impact on the entire system. CM verifies that changes are carried out as prescribed and that documentation of items and systems reflects their true configuration. A complete CM program includes provisions for the storing, tracking, and updating of all system information on a component, subsystem, and system basis.[6] A structured CM program ensures that documentation (e.g., requirements, design, test, and acceptance documentation) for items is accurate and consistent with the actual physical design of the item. In many cases, without CM, the documentation exists but is not consistent with the item itself. For this reason, engineers, contractors, and management are frequently forced to develop documentation reflecting the actual status of the item before they can proceed with a change. This reverse engineering process is wasteful in terms of human and other resources and can be minimized or eliminated using CM. History. Configuration Management originated in the United States Department of Defense in the 1950s as a technical management discipline for hardware material items—and it is now a standard practice in virtually every industry. The CM process became its own technical discipline sometime in the late 1960s when the DoD developed a series of military standards called the "480 series" (i.e., MIL-STD-480, MIL-STD-481 and MIL-STD-483) that were subsequently issued in the 1970s. 
In 1991, the "480 series" was consolidated into a single standard known as MIL-STD-973, which was then replaced by MIL-HDBK-61 pursuant to a general DoD goal of reducing the number of military standards in favor of industry technical standards supported by standards developing organizations (SDOs).[7] This marked the beginning of what has now evolved into the most widely distributed and accepted standard on CM, ANSI-EIA-649-1998.[8] Now widely adopted by numerous organizations and agencies, the CM discipline's concepts are used in systems engineering (SE), integrated logistics support (ILS), Capability Maturity Model Integration (CMMI), ISO 9000, the PRINCE2 project management method, COBIT, the Information Technology Infrastructure Library (ITIL), product lifecycle management, and application lifecycle management. Many of these functions and models have redefined CM from its traditional holistic approach to technical management. Some treat CM as being similar to a librarian activity, and break out change control or change management as a separate or stand-alone discipline.

Overview. CM is the practice of handling changes systematically so that a system maintains its integrity over time. CM implements the policies, procedures, techniques, and tools that manage and evaluate proposed changes, track the status of changes, and maintain an inventory of system and support documents as the system changes. CM programs and plans provide technical and administrative direction to the development and implementation of the procedures, functions, services, tools, processes, and resources required to successfully develop and support a complex system. During system development, CM allows program management to track requirements throughout the life cycle through acceptance and operations and maintenance. As changes inevitably occur in the requirements and design, they must be approved and documented, creating an accurate record of the system status.

Ideally, the CM process is applied throughout the system life cycle. CM is often confused with asset management (AM), which inventories the assets on hand. The key difference is that CM does not manage the financial accounting of the assets but the service that the system supports. The CM process for both hardware and software configuration items comprises five distinct disciplines, as established in MIL-HDBK-61A[9] and in ANSI/EIA-649. These disciplines are carried out as policies and procedures for establishing baselines and for performing a standard change-management process. The IEEE 12207 process (IEEE 12207.2) also has these activities and adds "Release management and delivery". The five disciplines are:

• CM Planning and Management: a formal document and plan to guide the CM program that includes items such as personnel responsibilities and resources; training requirements; administrative meeting guidelines, including a definition of procedures and tools; baselining processes; configuration control and configuration-status accounting; naming conventions; audits and reviews; and subcontractor/vendor CM requirements.
• Configuration Identification (CI): consists of setting and maintaining baselines, which define the system or subsystem architecture, components, and any developments at any point in time. It is the basis by which changes to any part of a system are identified, documented, and later tracked through design, development, testing, and final delivery. CI incrementally establishes and maintains the definitive current basis for configuration status accounting (CSA) of a system and its configuration items (CIs) throughout their lifecycle (development, production, deployment, and operational support) until disposal.
• Configuration Control: includes the evaluation of all change requests and change proposals, and their subsequent approval or disapproval. It covers the process of controlling modifications to the system's design, hardware, firmware, software, and documentation.
• Configuration Status Accounting: includes the process of recording and reporting configuration item descriptions (e.g., hardware, software, firmware) and all departures from the baseline during design and production. In the event of suspected problems, the verification of the baseline configuration and approved modifications can be quickly determined.
• Configuration Verification and Audit: an independent review of hardware and software for the purpose of assessing compliance with established performance requirements, commercial and appropriate military standards, and functional, allocated, and product baselines. Configuration audits verify that the system and subsystem configuration documentation complies with the functional and physical performance characteristics before acceptance into an architectural baseline.
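The change-control and status-accounting disciplines above can be sketched in code. This is a minimal illustrative model only, not a record layout required by any of the standards cited; the class names, fields, and the example change request are all hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ChangeRequest:
    """A proposed modification to a configuration item (hypothetical record)."""
    change_id: str
    description: str
    approved: bool = False

@dataclass
class ConfigurationItem:
    """Tracks a baseline plus every departure from it (status accounting)."""
    name: str
    baseline: str                       # e.g. a drawing or document revision
    history: List[ChangeRequest] = field(default_factory=list)
    revision: int = 0

    def propose(self, change_id: str, description: str) -> ChangeRequest:
        cr = ChangeRequest(change_id, description)
        self.history.append(cr)         # recorded even before approval
        return cr

    def approve(self, cr: ChangeRequest) -> None:
        cr.approved = True
        self.revision += 1              # the item moves to a new configuration

    def status_accounting(self) -> List[str]:
        # Report all departures from the baseline, approved or pending
        return [f"{c.change_id}: {'approved' if c.approved else 'pending'}"
                for c in self.history]

ci = ConfigurationItem("Brake manifold drawing", baseline="Rev 0")
cr = ci.propose("CR-001", "Add redundant O-ring seal")
ci.approve(cr)
print(ci.revision)             # 1
print(ci.status_accounting())  # ['CR-001: approved']
```

The point of the sketch is that every proposed change is captured in the history whether or not it is approved, so the status accounting always reflects all departures from the baseline.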

https://en.wikipedia.org/wiki/Configuration_management



Part VF

Configuration management (CM) originated in the United States Department of Defense in the 1950s as a technical management discipline for hardware material items, and it is now a standard practice in virtually every industry. The CM process became its own technical discipline sometime in the late 1960s when the DoD developed a series of military standards called the "480 series" (i.e., MIL-STD-480, MIL-STD-481, and MIL-STD-483) that were subsequently issued in the 1970s. In 1991, the "480 series" was consolidated into a single standard known as MIL-STD-973, which was then replaced by MIL-HDBK-61 pursuant to a general DoD goal of reducing the number of military standards in favor of industry technical standards supported by standards developing organizations (SDOs).

https://en.wikipedia.org/wiki/Configuration_management






Part VG

Chapter 24 Verification and Validation/Part VG _____________________



Part VG

An audit is a systematic, independent, and documented process for obtaining audit evidence and evaluating it objectively to determine the extent to which audit criteria are fulfilled (ISO 19011:2011). Auditors collect evidence to ensure that requirements are being met. Auditors may verify and/or validate that requirements (audit criteria) are being met. In general, verification is checking or testing, while validation is demonstrating actual performance under intended use. The dictionary does not support the distinction normally drawn between verification and validation in the management systems and system-process audit fields. However, definitions of these terms were used by the FDA in its GMP regulations starting in the 1980s and were later incorporated into the ISO 9000 series standards. Now we can reference the definitions of verification and validation provided in ISO 9000:2005 and the design and development model outlined in ISO 9001:2008, clause 7.3. Verification should be performed to ensure that the system-process outputs have met the system-process requirements (audit criteria). Verification is the authentication of truth or accuracy by such means as facts, statements, citations, measurements, and confirmation by evidence. An element of verification is that it is independent of, or separate from, the normal operation of the process. The act of an auditor checking that the process or product conforms to requirements is verification (as opposed to inspection checks). For example, ISO 9000:2005, clause 3.8.4 notes that verification activities include performing alternative calculations, comparing a new design specification to a similar proven design specification, undertaking tests and demonstrations, and reviewing documents prior to issue. The most common method of verification is the examination of documents and records. Records verify that a process or activity is being performed and the results recorded. Interviewing is another method of verifying that processes meet requirements, through affirmation by the interviewee.



Part VG

Techniques
• Verify by examination of records or documents, or by interviewing.
• Validate by observing or using the product or process.

Validation should be performed to ensure that the system-process outputs are meeting the requirements for the specified application or intended use.

Validation is the demonstration of the ability of the system-processes under investigation to achieve planned results. According to ISO 9000:2005, clause 3.8.5, validation is confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled. Sometimes an activity cannot be verified by records or interviews, and the actual process must be observed operating as intended. The observation can be of the real process or a simulated one. Some activities can only be verified; for example, it would be too costly or impractical to validate a process such as a plant shutdown or start-up or the use of emergency procedures. Sometimes products or activities are only verified because the product would be destroyed or the process ruined by validating it (such as checking the seal on a container). For example, an auditor may assume there is a requirement to post revision dates on revised documents. At the audit, he or she asks about this and is told the computer does it automatically. The auditor may want to validate this process by asking the document coordinator to make a change to a document and then see whether the software program automatically posts today's date.
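The revision-date example above can be expressed as a small validation check. The `revise` routine below is a hypothetical stand-in for the auditee's document-control software; the point the sketch illustrates is that the auditor exercises the process and observes the result rather than trusting the claim.

```python
from datetime import date

def revise(document: dict, new_text: str) -> dict:
    """Hypothetical document-control routine that is claimed to stamp
    the revision date automatically whenever a change is saved."""
    return {**document, "text": new_text, "revised_on": date.today().isoformat()}

# Validation by observation: make a real change and check the posted date,
# instead of merely accepting the coordinator's statement.
doc = {"text": "old wording", "revised_on": "2017-01-01"}
updated = revise(doc, "new wording")
assert updated["revised_on"] == date.today().isoformat()
print("revision date posted automatically:", updated["revised_on"])
```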



Part VG

Process Auditing Techniques. One of the advantages of process-based management systems and process auditing techniques is that the auditor follows along the process steps. In many cases an auditor is able to validate the audit criteria as opposed to just verifying them.







Part VH

Chapter 25 Risk Management Tools/Part VH _____________________

Risk has four main components:
1. probability,
2. hazard,
3. exposure, and
4. consequences.

Ropeik and Gray define risk using these components as "the probability that exposure to a hazard will lead to a negative consequence." Other definitions look at risk as the combination of these components in some fashion or another, such as the mathematical, statistical expected value or a mathematical expression attempting to capture the essence of some, if not all, of the components. An example of the latter is the calculation of risk numbers or risk priority numbers in Design/Process Failure Mode (and Criticality) Analyses (DFMEAs, PFMEAs, DFMECAs). These analysis methods also expand on the hazard component by introducing an evaluation of how easily the failure mode can be detected or prevented, the idea being that something that cannot be easily prevented or detected will pose a higher level of risk than something that is more readily apparent.



Part VH

Quantification Of Risk. Assessment scales that assign numbers or weights in order to rank or prioritize components of risk are subjective and may cause confusion and false conclusions. A simple quantification approach is classification of the elements of risk by category, such as "high," "medium," or "low," and "red," "yellow," or "green." This may apply to risk assessments for disaster and recovery planning, financial plans, product development strategies, product and process design evaluations, product liability exposure, internal controls, environmental assessments, and production and quality systems. Evidence of such assessments can be used to demonstrate prudence and due care by establishing what risks were evaluated, how they were classified, and what was done to address or mitigate the effects. In general, elements of risk can be (1) designed out of a product or a process, (2) detected or the effects minimized, or if neither of these is feasible, then (3) warned against.2
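The high/medium/low classification described above can be sketched as a simple lookup against a severity-by-likelihood matrix. The matrix and its cell values below are illustrative assumptions, not a standard scale; each organization defines its own thresholds.

```python
# Assumed 3x3 risk matrix: (severity, likelihood) -> red/yellow/green rating.
RISK_MATRIX = {
    ("high", "high"): "red",      ("high", "medium"): "red",      ("high", "low"): "yellow",
    ("medium", "high"): "red",    ("medium", "medium"): "yellow", ("medium", "low"): "green",
    ("low", "high"): "yellow",    ("low", "medium"): "green",     ("low", "low"): "green",
}

def classify(severity: str, likelihood: str) -> str:
    """Map a qualitative severity/likelihood pair onto a color rating."""
    return RISK_MATRIX[(severity, likelihood)]

print(classify("high", "low"))       # yellow
print(classify("medium", "medium"))  # yellow
```

Recording which cell each risk fell into, and what was done about the red and yellow ones, is exactly the kind of evidence of due care the paragraph above describes.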



Part VH

The following are methods to identify, assess, and treat risks. Though the techniques were originally designed for specific purposes, they can all be used as tools to manage risk. FMEA was designed to assess product risk, HACCP (hazard analysis and critical control point) was developed to manage food safety hazards, and so on.



Part VH

Other reading: APPLICATION OF FISHBONE DIAGRAM TO DETERMINE THE RISK OF AN EVENT WITH MULTIPLE CAUSES Gheorghe ILIE 1, Carmen Nadia CIOCOIU 2 1 UTI Grup SRL, Soseaua Oltenitei no.107A, Bucharest, Romania, gheorghe.ilie@uti.ro 2 Academy of Economic Studies, Piata Romana, 6, Bucharest, Romania, nadia.ciocoiu@man.ase.ro

http://mrp.ase.ro/no21/f1.pdf



Part VH

Risk Assessment Matrix: http://slideplayer.com/slide/6118604/18/images/18/Risk+Assessment+Matrix.jpg



Part VH

Diligent And Using Due Care. A small company hired an outside firm to assist in a comprehensive assessment of regulatory, environmental, business, operational, and financial risks. The tools and techniques employed were way over the heads of the people involved, and the results were not actionable. In frustration, they turned to a local consultant, who sat the principals and outside counsel around a table and quickly had them list concerns, issues, and the like on a whiteboard. He then distilled these down and put them into general classifications. Next, he had the group rate them in terms of "high," "medium," or "low." After that, they developed short, doable action plans, taking each group in order. Within a couple of sessions they had a real plan with assignments, dates, and review points. This was then shared with investors, insurance providers, local officials, and other parties. When later issues arose, the existence of the plan was used as evidence that the company was diligent and using due care.



Part VH

Failure Mode And Effects Analysis
Failure mode and effects analysis (FMEA) has been in use for many years and is used extensively in the automotive industry. FMEA is used for analyzing designs or processes for potential failure. Its aim is to reduce the risk of failure. There are two types in general use:
• the DFMEA, for analyzing potential design failures, and
• the PFMEA, for analyzing potential process failures.
For example, a small organization engaged in bidding for military contracts for high-tech devices successfully used an FMEA to identify and assess risks for a product never made before. The FMEA aided in evaluating design inputs, assured that potential failure modes were identified and addressed, provided for the identification of the failure modes' root cause(s), determined the actions necessary to eliminate or reduce the potential failure mode, and added a high degree of objectivity to the design review process. The FMEA also directed attention to design features that required additional testing or development, documented risk reduction efforts, provided lessons-learned documentation to aid future FMEAs, and assured that the design was performed with a customer focus.



Part VH

The FMEA methodology is:
1. Define the device design inputs or process functions and requirements.
2. Identify a failure mode (what could go wrong) and the potential effects of the failure.
3. Rank the severity of the effects (using a 1–10 scale, where 1 is minor and 10 is major and without warning).
4. Establish what the root cause(s) could be.
5. Rate the likelihood of occurrence for the failure using a 1–10 scale.
6. Document the present design or present process controls regarding prevention and detection.
7. Rate the likelihood of these controls detecting the failure using a 1–10 scale.
8. Compute the risk priority number (RPN = severity × occurrence × detection).
9. Recommend preventive/corrective action (what action, who will do it, when); note that preventive action is listed first when dealing with the design stage and corrective action first if analyzing potential process failures.
10. Return to step 2 if other potential failures exist.
11. Build and test a prototype.
12. Redo the FMEA after test results are obtained and any necessary or desired changes are made.
13. Retest and, if acceptable, place in production.
14. Document the FMEA process for the knowledge base.
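The severity, occurrence, and detection ratings and the RPN computation in the methodology above reduce to a small calculation. A minimal sketch, with invented failure modes and ratings purely for illustration:

```python
def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk priority number: severity x occurrence x detection, each rated 1-10."""
    for rating in (severity, occurrence, detection):
        assert 1 <= rating <= 10, "each rating uses a 1-10 scale"
    return severity * occurrence * detection

# Hypothetical failure modes: (name, severity, occurrence, detection)
failure_modes = [
    ("wrong drill bit inserted",   7, 4, 3),
    ("O-ring compression set",     9, 2, 5),
    ("surface damage at assembly", 5, 3, 2),
]

# Rank failure modes by RPN, highest risk first, to prioritize action
ranked = sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True)
for name, s, o, d in ranked:
    print(f"{name}: RPN = {rpn(s, o, d)}")
# O-ring compression set: RPN = 90
# wrong drill bit inserted: RPN = 84
# surface damage at assembly: RPN = 30
```

Teams typically act first on the highest RPNs, then recompute after the preventive/corrective actions to confirm the risk was actually reduced.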



Part VH

The collaboration with employees who have been involved in design, development, production, and customer service activities is critical because their knowledge, ideas, and questions about a new product design will be based on their experience at different stages of product realization. Furthermore, if your employees are also some of your customers (end users), obtaining and documenting the employees’ experience is most useful. This experiential input, along with examinations of similar designs (and their FMEAs, nonconforming product and corrective action records, and customer feedback reports), is often the best source for analysis input. Figure 25.1 shows a sample PFMEA.



Part VH

Figure 25.1 Sample PFMEA.

RPN = severity × occurrence × detection



Part VH

Failure mode and effects analysis (FMEA) was one of the first highly structured, systematic techniques for failure analysis. It was developed by reliability engineers in the late 1950s to study problems that might arise from malfunctions of military systems. An FMEA is often the first step of a system reliability study. It involves reviewing as many components, assemblies, and subsystems as possible to identify failure modes, and their causes and effects. For each component, the failure modes and their resulting effects on the rest of the system are recorded in a specific FMEA worksheet. There are numerous variations of such worksheets. An FMEA can be a qualitative analysis, but may be put on a quantitative basis when mathematical failure rate models are combined with a statistical failure mode ratio database. A few different types of FMEA analyses exist, such as:

• Functional
• Design
• Process

Sometimes FMEA is extended to FMECA (failure mode, effects, and criticality analysis) to indicate that a criticality analysis is performed too. FMEA is an inductive reasoning (forward logic), single-point-of-failure analysis and is a core task in reliability engineering, safety engineering, and quality engineering. A successful FMEA activity helps identify potential failure modes based on experience with similar products and processes, or based on common physics-of-failure logic. It is widely used in development and manufacturing industries in various phases of the product life cycle. Effects analysis refers to studying the consequences of those failures on different system levels. Functional analyses are needed as an input to determine correct failure modes, at all system levels, for both functional FMEA and piece-part (hardware) FMEA.

https://en.wikipedia.org/wiki/Failure_mode_and_effects_analysis



Part VH

An FMEA is used to structure mitigation for risk reduction based on either:
• failure (mode) effect severity reduction, or
• lowering the probability of failure,
or both. The FMEA is in principle a full inductive (forward logic) analysis; however, the failure probability can only be estimated or reduced by understanding the failure mechanism. Hence, FMEA may include information on causes of failure (deductive analysis) to reduce the possibility of occurrence by eliminating identified (root) causes.

https://en.wikipedia.org/wiki/Failure_mode_and_effects_analysis



Part VH

An FMEA Methodology

http://www.leanhospitals.pl/en/2017/12/21/analiza-fmea-w-szpitalu/




Part VH

An FMEA Methodology

https://www.slideshare.net/Anleitner/lts-2009-dfmea



Part VH

FMEA-Introduction. The FME(C)A is a design tool used to systematically analyze postulated component failures and identify the resultant effects on system operations. The analysis is sometimes characterized as consisting of two subanalyses, the first being the failure modes and effects analysis (FMEA), and the second, the criticality analysis (CA). Successful development of an FMEA requires that the analyst include all significant failure modes for each contributing element or part in the system. FMEAs can be performed at the system, subsystem, assembly, subassembly, or part level. The FMECA should be a living document during development of a hardware design. It should be scheduled and completed concurrently with the design. If completed in a timely manner, the FMECA can help guide design decisions. The usefulness of the FMECA as a design tool and in the decision-making process is dependent on the effectiveness and timeliness with which design problems are identified. Timeliness is probably the most important consideration. In the extreme case, the FMECA would be of little value to the design decision process if the analysis is performed after the hardware is built. While the FMECA identifies all part failure modes, its primary benefit is the early identification of all critical and catastrophic subsystem or system failure modes so they can be eliminated or minimized through design modification at the earliest point in the development effort; therefore, the FMECA should be performed at the system level as soon as preliminary design information is available and extended to the lower levels as the detail design progresses.

Remark: For more complete scenario modeling, another type of reliability analysis may be considered, for example fault tree analysis (FTA), a deductive (backward logic) failure analysis that may handle multiple failures within the item and/or external to the item, including maintenance and logistics. It starts at a higher functional/system level. An FTA may use the basic failure-mode FMEA records or an effect summary as one of its inputs (the basic events). Interface hazard analysis, human error analysis, and others may be added for completeness in scenario modeling.

Fault tree analysis (FTA) is a top-down, deductive failure analysis in which an undesired state of a system is analyzed using Boolean logic to combine a series of lower-level events. This analysis method is mainly used in the fields of safety engineering and reliability engineering to understand how systems can fail, to identify the best ways to reduce risk or to determine (or get a feeling for) event rates of a safety accident or a particular system level (functional) failure. FTA is used in the aerospace,[1] nuclear power, chemical and process, pharmaceutical,[5] petrochemical and other high-hazard industries; but is also used in fields as diverse as risk factor identification relating to social service system failure. FTA is also used in software engineering for debugging purposes and is closely related to cause-elimination technique used to detect bugs. In aerospace, the more general term "system failure condition" is used for the "undesired state" / top event of the fault tree. These conditions are classified by the severity of their effects. The most severe conditions require the most extensive fault tree analysis. These system failure conditions and their classification are often previously determined in the functional hazard analysis.
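The Boolean combination of lower-level events that FTA performs can be sketched numerically. The two-channel brake tree and the basic-event probabilities below are illustrative assumptions (independent events), not figures from any real analysis.

```python
def or_gate(*probs: float) -> float:
    """OR gate: P(at least one event) for independent events = 1 - prod(1 - p)."""
    survive = 1.0
    for p in probs:
        survive *= (1.0 - p)
    return 1.0 - survive

def and_gate(*probs: float) -> float:
    """AND gate: P(all events) for independent events = prod(p)."""
    out = 1.0
    for p in probs:
        out *= p
    return out

# Hypothetical tree: the top event "loss of braking" occurs only if BOTH
# redundant channels fail; each channel fails if its seal leaks OR its
# valve sticks.
channel = or_gate(1e-4, 5e-5)     # seal leak OR valve stuck
top = and_gate(channel, channel)  # both channels fail
print(f"top event probability: {top:.3e}")
```

Working the tree bottom-up like this shows why redundancy (an AND gate at the top) drives the top-event probability so far below the probability of any single basic event.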

https://en.wikipedia.org/wiki/Failure_mode_and_effects_analysis



Part VH

Functional analysis. The analysis may be performed at the functional level until the design has matured sufficiently to identify specific hardware that will perform the functions; then the analysis should be extended to the hardware level. When performing the hardware-level FMECA, interfacing hardware is considered to be operating within specification. In addition, each part failure postulated is considered to be the only failure in the system (i.e., it is a single-failure analysis). In addition to the FMEAs done on systems to evaluate the impact lower-level failures have on system operation, several other FMEAs are done. Special attention is paid to interfaces between systems and in fact at all functional interfaces. The purpose of these FMEAs is to assure that irreversible physical and/or functional damage is not propagated across the interface as a result of failures in one of the interfacing units. These analyses are done to the piece-part level for the circuits that directly interface with the other units. The FMEA can be accomplished without a CA, but a CA requires that the FMEA has previously identified the system-level critical failures. When both steps are done, the total process is called a FMECA.

Ground rules. The ground rules of each FMEA include a set of project-selected procedures; the assumptions on which the analysis is based; the hardware that has been included and excluded from the analysis; and the rationale for the exclusions. The ground rules also describe the indenture level of the analysis, the basic hardware status, and the criteria for system and mission success. Every effort should be made to define all ground rules before the FMEA begins; however, the ground rules may be expanded and clarified as the analysis proceeds. A typical set of ground rules (assumptions) follows:
• Only one failure mode exists at a time.
• All inputs (including software commands) to the item being analyzed are present and at nominal values.
• All consumables are present in sufficient quantities.
• Nominal power is available.

https://en.wikipedia.org/wiki/Failure_mode_and_effects_analysis



Part VH

Benefits. Major benefits derived from a properly implemented FMECA effort are as follows:
1. It provides a documented method for selecting a design with a high probability of successful operation and safety.
2. A documented, uniform method of assessing potential failure mechanisms, failure modes, and their impact on system operation, resulting in a list of failure modes ranked according to the seriousness of their system impact and likelihood of occurrence.
3. Early identification of single failure points (SFPs) and system interface problems, which may be critical to mission success and/or safety. They also provide a method of verifying that switching between redundant elements is not jeopardized by postulated single failures.
4. An effective method for evaluating the effect of proposed changes to the design and/or operational procedures on mission success and safety.
5. A basis for in-flight troubleshooting procedures and for locating performance-monitoring and fault-detection devices.
6. Criteria for early planning of tests.

From the above list, early identification of SFPs, input to the troubleshooting procedure, and locating of performance-monitoring/fault-detection devices are probably the most important benefits of the FMECA. In addition, the FMECA procedures are straightforward and allow orderly evaluation of the design.

https://en.wikipedia.org/wiki/Failure_mode_and_effects_analysis



Part VH

History. Procedures for conducting FMECA were described in US Armed Forces military procedures document MIL-P-1629 (1949), revised in 1980 as MIL-STD-1629A. By the early 1960s, contractors for the U.S. National Aeronautics and Space Administration (NASA) were using variations of FMECA or FMEA under a variety of names. NASA programs using FMEA variants included Apollo, Viking, Voyager, Magellan, Galileo, and Skylab. The civil aviation industry was an early adopter of FMEA, with the Society of Automotive Engineers (SAE) publishing ARP926 in 1967. After two revisions, ARP926 was replaced by ARP4761, which is now broadly used in civil aviation. During the 1970s, use of FMEA and related techniques spread to other industries. In 1971 NASA prepared a report for the U.S. Geological Survey recommending the use of FMEA in assessment of offshore petroleum exploration. A 1973 U.S. Environmental Protection Agency report described the application of FMEA to wastewater treatment plants. FMEA as applied to HACCP on the Apollo space program moved into the food industry in general. The automotive industry began to use FMEA by the mid-1970s. The Ford Motor Company introduced FMEA to the automotive industry for safety and regulatory consideration after the Pinto affair. Ford applied the same approach to processes (PFMEA) to consider potential process-induced failures prior to launching production. In 1993 the Automotive Industry Action Group (AIAG) first published an FMEA standard for the automotive industry. It is now in its fourth edition. The SAE first published the related standard J1739 in 1994. This standard is also now in its fourth edition.

https://en.wikipedia.org/wiki/Failure_mode_and_effects_analysis



Part VH

Although initially developed by the military, FMEA methodology is now extensively used in a variety of industries including semiconductor processing, food service, plastics, software, and healthcare. Toyota has taken this one step further with its Design Review Based on Failure Mode (DRBFM) approach. The method is now supported by the American Society for Quality, which provides detailed guides on applying it. The standard failure modes and effects analysis (FMEA) and failure modes, effects and criticality analysis (FMECA) procedures identify the product failure mechanisms, but may not model them without specialized software. This limits their applicability to provide a meaningful input to critical procedures such as virtual qualification, root cause analysis, accelerated test programs, and remaining life assessment. To overcome the shortcomings of FMEA and FMECA, a failure modes, mechanisms and effects analysis (FMMEA) has often been used.

https://en.wikipedia.org/wiki/Failure_mode_and_effects_analysis



Part VH

Basic terms. The following covers some basic FMEA terminology.
• Failure: The loss of a function under stated conditions.
• Failure mode: The specific manner or way by which a failure occurs, in terms of failure of the item (being a part or (sub)system) function under investigation; it may generally describe the way the failure occurs. It should at least clearly describe an (end) failure state of the item (or function, in the case of a functional FMEA) under consideration. It is the result of the failure mechanism (cause of the failure mode). For example, a fully fractured axle, a deformed axle, or a fully open or fully closed electrical contact are each a separate failure mode of a DFMEA; they would not be failure modes of a PFMEA. In a PFMEA you examine your process: for a process step such as "insert drill bit," the failure mode would be inserting the wrong drill bit, and the effect of this is too big a hole or too small a hole.
• Failure cause and/or mechanism: Defects in requirements, design, process, quality control, handling, or part application, which are the underlying cause or sequence of causes that initiate a process (mechanism) that leads to a failure mode over a certain time. A failure mode may have more than one cause. For example, "fatigue or corrosion of a structural beam" or "fretting corrosion in an electrical contact" is a failure mechanism and in itself (likely) not a failure mode; the related failure mode (end state) is "full fracture of structural beam" or "an open electrical contact". The initial cause might have been "improper application of corrosion protection layer (paint)" and/or "(abnormal) vibration input from another (possibly failed) system".
• Failure effect: Immediate consequences of a failure on operation, function or functionality, or status of some item.
• Indenture levels (bill of material or functional breakdown): An identifier for system level and thereby item complexity. Complexity increases as levels are closer to one.

https://en.wikipedia.org/wiki/Failure_mode_and_effects_analysis



Part VH

• Local effect: The failure effect as it applies to the item under analysis.
• Next higher level effect: The failure effect as it applies at the next higher indenture level.
• End effect: The failure effect at the highest indenture level or total system.
• Detection: The means of detection of the failure mode by maintainer, operator, or built-in detection system, including estimated dormancy period (if applicable).
• Probability: The likelihood of the failure occurring.
• Risk Priority Number (RPN): Severity (of the event) × Probability (of the event occurring) × Detection (probability that the event would not be detected before the user was aware of it).
• Severity: The consequences of a failure mode. Severity considers the worst potential consequence of a failure, determined by the degree of injury, property damage, system damage, and/or time lost to repair the failure.
• Remarks / mitigation / actions: Additional info, including the proposed mitigation or actions used to lower a risk or justify a risk level or scenario.

https://en.wikipedia.org/wiki/Failure_mode_and_effects_analysis



Part VH

Example of FMEA worksheet (one row, reconstructed from the flattened table):
• Ref.: 1.1.1.1
• Item: Brake manifold; Ref. Designator 2b, channel A, O-ring
• Potential failure mode: Internal leakage from channel A to B
• Potential failure cause(s)/mechanism: a) O-ring compression set (creep); b) surface damage during assembly
• Mission phase: Landing
• Local effects of failure: Decreased pressure to main brake hose
• Next higher level effect: No left wheel braking
• System-level end effect: Severely reduced aircraft deceleration on ground and side drift; partial loss of runway position control; risk of collision
• (P) Probability (estimate): (C) Occasional
• (S) Severity: (V) Catastrophic (this is the worst case)
• (D) Detection (indications to operator, maintainer): (1) Flight computer and maintenance computer will indicate "Left Main Brake, Pressure Low"
• Detection dormancy period: Built-in test interval is 1 minute
• Risk level P*S (+D): Unacceptable
• Actions for further investigation / evidence: Check dormancy period and probability of failure
• Mitigation / requirements: Require redundant independent brake hydraulic channels and/or require redundant sealing; classify O-ring as Critical Part Class 1

https://en.wikipedia.org/wiki/Failure_mode_and_effects_analysis


Probability (P)
It is necessary to look at the cause of a failure mode and the likelihood of occurrence. This can be done by analysis, calculations / FEM, or by looking at similar items or processes and the failure modes that have been documented for them in the past. A failure cause is looked upon as a design weakness. All the potential causes for a failure mode should be identified and documented, in technical terms. Examples of causes are: human errors in handling, manufacturing-induced faults, fatigue, creep, abrasive wear, erroneous algorithms, excessive voltage, or improper operating conditions or use (depending on the ground rules used). A failure mode is given a probability ranking:

A: Extremely Unlikely (virtually impossible, or no known occurrences on similar products or processes, with many running hours)
B: Remote (relatively few failures)
C: Occasional (occasional failures)
D: Reasonably Possible (repeated failures)
E: Frequent (failure is almost inevitable)


Severity (S)
Determine the severity for the worst-case-scenario adverse end effect (state). It is convenient to write these effects down in terms of what the user might see or experience as functional failures. Examples of these end effects are: full loss of function x, degraded performance, functions in reversed mode, too-late functioning, erratic functioning, etc. Each end effect is given a severity number (S) from, say, I (no effect) to V (catastrophic), based on cost and/or loss of life or quality of life. These numbers prioritize the failure modes (together with probability and detectability). Below, a typical classification is given; other classifications are possible. See also hazard analysis.

I: No relevant effect on reliability or safety
II: Very minor; no damage, no injuries; only results in a maintenance action (only noticed by discriminating customers)
III: Minor; low damage, light injuries (affects very little of the system; noticed by average customer)
IV: Critical (causes a loss of primary function; loss of all safety margins; one failure away from a catastrophe; severe damage; severe injuries; at most one possible death)
V: Catastrophic (product becomes inoperative; the failure may result in completely unsafe operation and possibly multiple deaths)


Detection (D)
The means or method by which a failure is detected and isolated by operator and/or maintainer, and the time it may take. This is important for maintainability control (availability of the system), and it is especially important for multiple-failure scenarios. These may involve dormant failure modes (e.g., no direct system effect while a redundant system / item automatically takes over, or when the failure is only problematic during specific mission or system states) or latent failures (e.g., deterioration failure mechanisms, such as a growing crack in metal that has not yet reached a critical length). It should be made clear how the failure mode or cause can be discovered by an operator under normal system operation, or whether it can be discovered by the maintenance crew through some diagnostic action or an automatic built-in system test. A dormancy and/or latency period may be entered.

1: Certain (fault will be caught on test, e.g., poka-yoke)
2: Almost certain
3: High
4: Moderate
5: Low
6: Fault is undetected by operators or maintainers


Dormancy or Latency Period
The average time that a failure mode may remain undetected may be entered if known. For example:
 Seconds, auto-detected by maintenance computer
 8 hours, detected by turn-around inspection
 2 months, detected by scheduled maintenance block X
 2 years, detected by overhaul task x


Indication
If the undetected failure allows the system to remain in a safe / working state, a second failure situation should be explored to determine whether or not an indication will be evident to all operators and what corrective action they may or should take. Indications to the operator should be described as follows:
 Normal. An indication that is evident to an operator when the system or equipment is operating normally.
 Abnormal. An indication that is evident to an operator when the system has malfunctioned or failed.
 Incorrect. An erroneous indication to an operator due to the malfunction or failure of an indicator (i.e., instruments, sensing devices, visual or audible warning devices, etc.).

Perform Detection Coverage Analysis for Test Processes and Monitoring (from the ARP4761 standard)
This type of analysis is useful to determine how effective various test processes are at the detection of latent and dormant faults. The method used to accomplish this involves an examination of the applicable failure modes to determine whether or not their effects are detected, and to determine the percentage of failure rate applicable to the failure modes that are detected. The possibility that the detection means may itself fail latently should be accounted for in the coverage analysis as a limiting factor (i.e., coverage cannot be more reliable than the detection means' availability). Inclusion of the detection coverage in the FMEA can lead to each individual failure that would have been one effect category now being a separate effect category due to the detection coverage possibilities. Another way to include detection coverage is for the FTA to conservatively assume that no holes in coverage due to latent failure in the detection method affect detection of all failures assigned to the failure effect category of concern. The FMEA can be revised if necessary for those cases where this conservative assumption does not allow the top-event probability requirements to be met.

After these three basic steps, the risk level may be provided.


Risk level (P*S) and (D)
Risk is the combination of end-effect probability and severity, where probability and severity include the effect of non-detectability (dormancy time). This may influence the end-effect probability of failure or the worst-case effect severity. The exact calculation may not be easy in all cases, such as those where multiple scenarios (with multiple events) are possible and detectability / dormancy plays a crucial role (as for redundant systems). In that case, fault tree analysis and/or event trees may be needed to determine exact probability and risk levels. Preliminary risk levels can be selected based on a risk matrix like the one shown below, based on MIL-STD-882. The higher the risk level, the more justification and mitigation is needed to provide evidence and lower the risk to an acceptable level. High risk should be indicated to higher-level management, who are responsible for final decision-making.

                                       Severity
Probability   I         II        III       IV            V             VI
A             Low       Low       Low       Low           Moderate      High
B             Low       Low       Low       Moderate      High          Unacceptable
C             Low       Low       Moderate  Moderate      High          Unacceptable
D             Low       Moderate  Moderate  High          Unacceptable  Unacceptable
E             Moderate  Moderate  High      Unacceptable  Unacceptable  Unacceptable
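A risk matrix of this kind can be encoded directly as a lookup table. A minimal sketch using the values from the matrix above (the names and structure are illustrative):

```python
# Risk matrix based on MIL-STD-882: probability rows A-E, severity columns I-VI.
RISK_MATRIX = {
    "A": ["Low", "Low", "Low", "Low", "Moderate", "High"],
    "B": ["Low", "Low", "Low", "Moderate", "High", "Unacceptable"],
    "C": ["Low", "Low", "Moderate", "Moderate", "High", "Unacceptable"],
    "D": ["Low", "Moderate", "Moderate", "High", "Unacceptable", "Unacceptable"],
    "E": ["Moderate", "Moderate", "High", "Unacceptable", "Unacceptable", "Unacceptable"],
}
SEVERITY_COLUMNS = ["I", "II", "III", "IV", "V", "VI"]

def risk_level(probability: str, severity: str) -> str:
    """Look up the preliminary risk level for a probability/severity pair."""
    return RISK_MATRIX[probability][SEVERITY_COLUMNS.index(severity)]

print(risk_level("C", "V"))   # -> High
print(risk_level("E", "IV"))  # -> Unacceptable
```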

After this step, the FMEA has become a FMECA (failure mode, effects, and criticality analysis).


Timing
The FMEA should be updated whenever:
 A new cycle begins (new product/process)
 Changes are made to the operating conditions
 A change is made in the design
 New regulations are instituted
 Customer feedback indicates a problem

Uses
 Development of system requirements that minimize the likelihood of failures
 Development of designs and test systems to ensure that failures have been eliminated or the risk is reduced to an acceptable level
 Development and evaluation of diagnostic systems
 To help with design choices (trade-off analysis)


Advantages
 Catalyst for teamwork and idea exchange between functions
 Collect information to reduce future failures, capture engineering knowledge
 Early identification and elimination of potential failure modes
 Emphasize problem prevention
 Improve company image and competitiveness
 Improve production yield
 Improve the quality, reliability, and safety of a product/process
 Increase user satisfaction
 Maximize profit
 Minimize late changes and associated cost
 Reduce impact on company profit margin
 Reduce system development time and cost
 Reduce the possibility of the same kind of failure in the future
 Reduce the potential for warranty concerns


Limitations
While FMEA identifies important hazards in a system, its results may not be comprehensive, and the approach has limitations. In the healthcare context, FMEA and other risk assessment methods, including SWIFT (structured what-if technique) and retrospective approaches, have been found to have limited validity when used in isolation. Challenges around scoping and organisational boundaries appear to be a major factor in this lack of validity.

If used as a top-down tool, FMEA may only identify major failure modes in a system. Fault tree analysis (FTA) is better suited for top-down analysis. When used as a bottom-up tool, FMEA can augment or complement FTA and identify many more causes and failure modes resulting in top-level symptoms. FMEA is not able to discover complex failure modes involving multiple failures within a subsystem, or to report expected failure intervals of particular failure modes up to the upper-level subsystem or system.

Additionally, the multiplication of the severity, occurrence, and detection rankings may result in rank reversals, where a less serious failure mode receives a higher RPN than a more serious failure mode. The reason for this is that the rankings are ordinal-scale numbers, and multiplication is not defined for ordinal numbers. The ordinal rankings only say that one ranking is better or worse than another, but not by how much. For instance, a ranking of "2" may not be twice as severe as a ranking of "1," and an "8" may not be twice as severe as a "4," but multiplication treats them as though they are. See "level of measurement" for further discussion. Various solutions to these problems have been proposed, e.g., the use of fuzzy logic as an alternative to the classic RPN model.
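A small numeric sketch of the rank-reversal problem described above; the two failure modes and their ratings are purely illustrative:

```python
# Two hypothetical failure modes rated on 1-10 ordinal scales.
# Mode A is far more severe, but its RPN comes out lower than Mode B's,
# because multiplying ordinal ranks treats rating gaps as meaningful sizes.
mode_a = {"severity": 10, "occurrence": 2, "detection": 2}  # near-catastrophic
mode_b = {"severity": 4, "occurrence": 5, "detection": 5}   # minor annoyance

rpn_a = mode_a["severity"] * mode_a["occurrence"] * mode_a["detection"]  # 40
rpn_b = mode_b["severity"] * mode_b["occurrence"] * mode_b["detection"]  # 100

# Rank reversal: the less serious mode B outranks the near-catastrophic mode A.
assert rpn_b > rpn_a
print(rpn_a, rpn_b)  # -> 40 100
```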


Process FMEA can be challenging for participants who have not completed many PFMEAs; they often confuse failure modes with effects and causes. To clarify, a process FMEA shows how the process can go wrong. Using a detailed process map will help the person filling in the worksheet to correctly list the steps of the process being reviewed. The failure mode is then simply how that step can go wrong. For example, process step 1: pick up right-handed part. Can they pick up the wrong part? (Some manufacturing centers have left- and right-handed parts.) The failure mode is putting the left-handed part in; the effect could be a wrecked CNC machine and a scrapped part, or a hole drilled in the wrong location; the cause is keeping inventory of similar parts at the job.

Why is it important to do a PFMEA with regard to the process? When a process is examined, and we ask what can go wrong with it, unknown issues are uncovered, solving problems before they occur and tackling root causes (or at least two whys deep on a five-why analysis). Here the manufacturing engineer could poka-yoke the tooling to prevent a left-handed part in the fixture when running the right-handed parts, or program a touch-off probe in the CNC programming, all before ever making the mistake the first time. If a PFMEA is set up where the failure mode relates to the feature on the print (for example, failure mode: drilled hole too big), no further understanding of what caused the problem is gained.


Numerous PFMEAs have been examined and show that little to no value is gained when reviewing features off of a print as failure modes; little understanding of the cause is gained. New PFMEA practitioners often try to relate the PFMEA failure mode to the feature; numerous authors describe this as trying to inspect in quality, rather than listing the process step, determining how it can go wrong, and building in quality through root cause evaluation. In addition, two shortcomings are: (1) the complexity of the FMEA worksheet, and (2) the intricacy of its use. Entries in an FMEA worksheet are voluminous. The FMEA worksheet is hard to produce, hard to understand and read, and hard to maintain. The use of neural network techniques to cluster and visualize failure modes has recently been suggested.


Types  Functional: before design solutions are provided (or only on high level) functions can be evaluated on potential functional failure effects. General Mitigations ("design to" requirements) can be proposed to limit consequence of functional failures or limit the probability of occurrence in this early development. It is based on a functional breakdown of a system. This type may also be used for Software evaluation.  Concept Design / Hardware: analysis of systems or subsystems in the early design concept stages to analyse the failure mechanisms and lower level functional failures, specially to different concept solutions in more detail. It may be used in trade-off studies.  Detailed Design / Hardware: analysis of products prior to production. These are the most detailed (in mil 1629 called Piece-Part or Hardware FMEA) FMEAs and used to identify any possible hardware (or other) failure mode up to the lowest part level. It should be based on hardware breakdown (e.g. the BoM = Bill of Material). Any Failure effect Severity, failure Prevention (Mitigation), Failure Detection and Diagnostics may be fully analyzed in this FMEA.  Process: analysis of manufacturing and assembly processes. Both quality and reliability may be affected from process faults. The input for this FMEA is amongst others a work process / task Breakdown.


Mission Success


Mission Failure.

https://en.wikipedia.org/wiki/Deepwater_Horizon_oil_spill


FMEA

https://www.youtube.com/embed/QBFRuXo88ic


Critical to Quality, CTQ
In the realm of Six Sigma methodology, there is a tool for displaying the causal relationship among the key business indicators (labeled Y), the critical-to-quality (CTQ) process outputs (labeled y) that directly affect the Ys, and the causal factors that affect the process outputs (labeled x). For example: one key business indicator (an outcome, a dependent variable) is customer retention (Y). CTQ outputs are services delivered on time (y), services delivered correctly (y), and customer satisfied (y). Factors affecting outputs (independent variables) are the scheduling/dispatch system (x); training of service personnel (x); supplies, vehicles, tools, and equipment (x); and time to complete service properly (x). See the relationship of x to y and y to Y in Figure 25.2.

The selection of the key metrics to be included in the balanced scorecard illustrates how key indicators are established. The top-level metrics of the scorecard (typically four) are the ones executives use to make their decisions. Each of these top-level metrics (dependent variables) is backed up by metrics on independent variables, usually available through computer access. Thus, if the marketing vice president wants to know the cause of a negative trend in the customer metric of the scorecard, the vice president can drill down to the variable affecting the negative trend.
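The Y/y/x relationship and the drill-down idea can be sketched as a simple tree. The indicator and factors come from the customer-retention example above, but which x feeds which y is an illustrative assignment, not from the handbook:

```python
# Key business indicator (Y) -> CTQ outputs (y) -> causal factors (x).
# The mapping of x's to y's below is illustrative.
ctq_tree = {
    "customer retention (Y)": {
        "services delivered on time (y)": [
            "scheduling/dispatch system (x)",
            "time to complete service properly (x)",
        ],
        "services delivered correctly (y)": [
            "training of service personnel (x)",
            "supplies, vehicles, tools, and equipment (x)",
        ],
    }
}

def drill_down(tree: dict, indicator: str) -> list:
    """List every causal factor (x) behind the CTQ outputs (y) of an indicator (Y)."""
    return [x for factors in tree[indicator].values() for x in factors]

for factor in drill_down(ctq_tree, "customer retention (Y)"):
    print(factor)
```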


Figure 25.2 Causal relationship in developing key process measurements.


HACCP
Hazard analysis and critical control point (HACCP) is an effective tool to prevent food from being contaminated. HACCP is not a new concept: the Pillsbury Co. developed it for NASA in the late 1950s to prevent food safety incidents on manned space flights. The technique identifies hazards, assesses their significance, and develops control measures (treats the risk).

Seven Principles
1. Conduct a hazard analysis
2. Determine the critical control points (CCPs)
3. Establish critical limits (CLs)
4. Establish monitoring procedures
5. Establish corrective action
6. Establish verification plan
7. Establish records and documented procedures


12 HACCP Application Steps
1. Assemble the HACCP team
2. Describe the product
3. Identify the intended use
4. Construct flow diagram
5. On-site confirmation of flow diagram
6. List all potential hazards; conduct hazard analysis; consider control measures
7. Determine the CCPs
8. Establish critical limits for each CCP
9. Establish a monitoring system for each CCP
10. Establish corrective actions
11. Establish verification procedures
12. Establish documentation and recordkeeping


HHA
The purpose of health hazard assessment (HHA) is to identify health hazards, evaluate proposed hazardous materials, and propose protective measures to reduce the associated risk to an acceptable level.
 The first step of the HHA is to identify and determine quantities of potentially hazardous materials or physical agents (noise, radiation, heat stress, cold stress) involved with the system and its logistical support.
 The next step is to analyze how these materials or physical agents are used in the system and for its logistical support. Based on the use, quantity, and type of substance/agent, estimate where and how personnel exposures may occur and, if possible, the degree or frequency of exposure.
 The final step is to incorporate into the design of the system and its logistical support equipment/facilities cost-effective controls that reduce exposures to acceptable levels.

The life-cycle costs of required controls could be high, and consideration of alternative systems may be appropriate. An HHA evaluates the hazards and costs due to system component materials, evaluates alternative materials, and recommends materials that reduce the associated risks and life-cycle costs. Materials are evaluated if (because of their physical, chemical, or biological characteristics; quantity; or concentrations) they cause or contribute to adverse effects in organisms or offspring, pose a substantial present or future danger to the environment, or result in damage to or loss of equipment or property during the system's life cycle.


An HHA should include the evaluation of the following:
 Chemical hazards: hazardous materials that are flammable, corrosive, toxic, carcinogens or suspected carcinogens, systemic poisons, asphyxiants, or respiratory irritants
 Physical hazards (e.g., noise, heat, cold, ionizing and non-ionizing radiation)
 Biological hazards (e.g., bacteria, fungi)
 Ergonomic hazards (e.g., lifting, task saturation)
 Other hazardous materials that may be introduced by the system during manufacture, operation, or maintenance


The evaluation is performed in the context of the following:
 System, facility, and personal protective equipment requirements (e.g., ventilation, noise attenuation, radiation barriers) to allow safe operation and maintenance. When feasible engineering designs are not available to reduce hazards to acceptable levels, alternative protective measures must be specified (e.g., protective clothing, operation or maintenance procedures) to reduce risk to an acceptable level.
 Potential material substitutions and projected disposal issues. The HHA discusses long-term effects such as the cost of using alternative materials over the life cycle or the capability and cost of disposing of a substance.
 Hazardous material data. The HHA describes the means for identifying and tracking information for each hazardous material. Specific categories of health hazards and impacts that may be considered are acute health, chronic health, cancer, contact, flammability, reactivity, and environment.


The HHA’s hazardous materials evaluation must include the following:
 Identification of the hazardous materials by name(s) and stock numbers (or CAS numbers); the affected system components and processes; the quantities, characteristics, and concentrations of the materials in the system; and source documents relating to the materials.
 Determination of the conditions under which the hazardous materials can release or emit components in a form that may be inhaled, ingested, absorbed by living beings, or leached into the environment.
 Characterization of material hazards and determination of reference quantities and hazard ratings for system materials in question.
 Estimation of the expected usage rate of each hazardous material for each process or component for the system, and the program-wide impact.
 Recommendations for the disposition of each hazardous material identified. If a reference quantity is exceeded by the estimated usage rate, material substitution or altered processes may be considered to reduce risks associated with the material hazards while evaluating the impact on program costs.


For each proposed and alternative material, the assessment must provide the following data for management review:
 Material identification. Includes material identity, common or trade names, chemical name, chemical abstract service (CAS) number, national stock number (NSN), local stock number, physical state, and manufacturers and suppliers.
 Material use and quantity. Includes component name, description, operations details, total system and life-cycle quantities to be used, and concentrations of any mixtures.
 Hazard identification. Identifies the adverse effects of the material on personnel, the system, environment, or facilities.
 Toxicity assessment. Describes expected frequency, duration, and amount of exposure. References for the assessment must be provided.
 Risk calculations. Includes classification of severity and probability of occurrence, acceptable levels of risk, any missing information, and discussions of uncertainties in the data or calculations.


For work performed under contract, details to be specified in the SOW (statement of work) include:
 Minimum risk severity and probability reporting thresholds
 Any selected hazards, hazardous areas, hazardous materials, or other specific items to be examined or excluded
 Specification of desired analysis techniques and/or report formats



Appendixes ________________________________________________ Appendix A ASQ Code of Ethics Appendix B Notes on Compliance, Conformance, and Conformity Appendix C Example Guide for Technical Specialists Appendix D The Institute of Internal Auditors Code of Ethics Appendix E History of Quality Assurance and Auditing Appendix F Certified Quality Auditor Body of Knowledge Appendix G Example Audit Program Schedule Appendix H Example Third- Party Audit Organization Forms Appendix I Example Audit Reports Appendix J Product Line Audit Flowchart Appendix K First, Second, and Third Edition Contributors and Reviewers
