Manufacturing Handbook of Best Practices, Part 15

15 Statistical Process Control

Paul A. Keller

15.1 DESCRIBING DATA

When it comes right down to it, data are boring, just a bunch of numbers. By themselves, data tell us little. For example: 44.373. By itself: nothing. What it lacks is context. Even knowing that it is the measurement in inches for a key characteristic, we still want more: Is this representative of the other parts? How does this compare with what we have made in the past? Context allows us to process the data into information.

Descriptive data are commonly presented as point estimates. We see point estimates in many aspects of our personal and business life: newspapers report the unemployment rate, magazines poll readers' responses, quality departments report scrap rates. Each of these examples, and countless others, provides an estimate of the state of a population through a sample. Yet these point estimates often lack context. Is the reported reader response a good indicator of the general population? Is the response changing from what it has been in the past? Statistics help us to answer these questions. In this chapter, we explore some tools for providing an appropriate context for data.

15.1.1 HISTOGRAMS

A histogram is a graphical tool used to visualize data. It is a bar chart in which each bar represents the number of observations falling within a range of data values. An example is shown in Figure 15.1. An advantage of the histogram is that the process location is clearly identifiable: in Figure 15.1, the central tendency of the data is about 0.4. The variation is also clearly distinguishable: we expect most of the data to fall between 0.1 and 1.0. We can also see whether the data are bounded or have symmetry.

If your data are from a symmetrical distribution, such as the bell-shaped normal distribution, the data will be evenly distributed about a center. If the data are not evenly distributed about the center of the histogram, the distribution is skewed. If the data appear skewed, you should understand the cause of this behavior. Some processes naturally have a skewed distribution, and may also be bounded, such as the concentricity data in Figure 15.1. Concentricity has a natural lower bound at zero, because no measurement can be negative. The majority of the data is just above zero, so there is a sharp demarcation at the zero point representing a bound.

If double or multiple peaks occur, look for the possibility that the data are coming from multiple sources, such as different suppliers or machine adjustments.

One problem that novice practitioners tend to overlook is that the histogram provides only part of the picture. A histogram of a given shape may be produced by many different processes, where the only difference in the data is their order. So a histogram that looks like it fits our needs could have come from data showing random variation about the average, or from data clearly trending toward an undesirable condition. Because the histogram does not consider the sequence of the points, we lack this information. Statistical process control (SPC) provides this context.
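As a quick illustration of how a histogram like Figure 15.1 is assembled, the sketch below bins a set of measurements into fixed-width cells and prints the counts, together with the mean and standard deviation used to judge location and spread. The data values, cell width, and names are hypothetical, not the concentricity data behind Figure 15.1.

```python
# Minimal histogram sketch (hypothetical data, not the Figure 15.1 dataset).
# Bins a list of measurements into fixed-width cells and prints a text bar chart,
# along with the mean and standard deviation used to judge location and spread.
import statistics

measurements = [0.12, 0.31, 0.44, 0.27, 0.55, 0.38, 0.41, 0.22, 0.69, 0.35,
                0.48, 0.19, 0.52, 0.33, 0.81, 0.29, 0.46, 0.37, 0.61, 0.25]

cell_width = 0.2
low = 0.0  # concentricity-type data are bounded below at zero
cells = {}
for x in measurements:
    cell = int((x - low) // cell_width)          # which cell the value falls in
    cells[cell] = cells.get(cell, 0) + 1

print(f"mean = {statistics.mean(measurements):.3f}, "
      f"std dev = {statistics.stdev(measurements):.3f}")
for cell in sorted(cells):
    lower = low + cell * cell_width
    upper = lower + cell_width
    print(f"{lower:4.1f} to {upper:4.1f} | {'#' * cells[cell]}")
```

Reading the printed bars gives the same location-and-spread picture described above, but, as noted, it still says nothing about the order in which the values occurred.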
15.2 OVERVIEW OF SPC

Statistical process control is a method of detecting changes to a process. Unlike more general enumerative statistical tools, such as hypothesis testing, which allow conclusions to be drawn about the past behavior of static populations, SPC is an analytical statistical tool. As such, SPC provides predictions of future process behavior, using its past behavior as a model. Applications of SPC in business are as varied as business itself, including manufacturing, chemical processes, banking, healthcare, and general service. SPC may be applied to any time-ordered data, provided the observations are statistically independent. Methods addressing dependent data are discussed under 15.5.1, Autocorrelation.

The tool of SPC is the statistical control chart, or more simply, the control chart. The control chart was developed in the 1920s by Walter Shewhart while he was working for Bell Laboratories. Shewhart defined statistical control as follows:

A phenomenon is said to be in statistical control when, through the use of past experience, we can predict how the phenomenon will vary in the future.

FIGURE 15.1 Example histogram for non-normal data (concentricity). Best-fit curve: Johnson Sb; K-S test: 0.999. Lack of fit is not significant; specified lower bound = 0.000.

15.2.1 CONTROL CHART PROPERTIES

Control charts take many forms, depending on the process that is being analyzed and the data available from that process. All control charts have the following properties:

• The x-axis is sequential, usually a unit denoting the evolution of time.
• The y-axis is the statistic that is being charted for each point in time. Examples of plotted statistics include an observation, an average of two or more observations, the median of two or more observations, a count of items meeting a criterion of interest, or the percentage of items meeting a criterion of interest.
• Limits are defined for the statistic that is being plotted. These control limits are statistically determined by observing process behavior, providing an indication of the bounds of expected behavior for the plotted statistic. They are never determined using customer specifications or goals.

An example of a control chart is shown in Figure 15.2. In this example, the cycle time for processing an order is plotted on an individual-X control chart, the top chart shown in the figure. The cycle time is observed for a randomly selected order each day and plotted on the control chart. For example, the cycle time for the third order is about 25.

In Figure 15.2, the centerline (PCL, for process center line) of the individual-X chart is the average of the observations (18.6 days). It provides an indication of the process location. Most of the observations will fall somewhere close to this average value, so it is our best guess for future observations, as long as the observations are statistically independent of one another.

We notice from Figure 15.2 that the cycle time process has variation; that is, the observations are different from one another.

FIGURE 15.2 Example of individual-X and moving range control charts (shown with histogram). Average = 18.6, process sigma = 4.7, UCL = 32.8, LCL = 4.4; moving range RBAR = 5.3, UCL = 17.4.
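To make the idea of a plotted statistic concrete, the following sketch computes several of the candidate statistics listed above for hypothetical subgrouped data; the subgroup values and the threshold used for the count and percentage statistics are illustrative only, not taken from the handbook.

```python
# Sketch of candidate plotted statistics for a control chart (hypothetical data).
# Each inner list is one subgroup of observations collected at one point in time.
import statistics

subgroups = [[17, 21, 19], [25, 23, 20], [18, 16, 22], [30, 28, 27]]
threshold = 25  # hypothetical criterion used for the count/percentage statistics

for t, sg in enumerate(subgroups, start=1):
    average = statistics.mean(sg)
    median = statistics.median(sg)
    count_over = sum(1 for x in sg if x > threshold)
    pct_over = 100.0 * count_over / len(sg)
    print(f"t={t}: average={average:.1f}, median={median}, "
          f"count over {threshold}={count_over}, percent over={pct_over:.0f}%")
```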
The third observation, at 25 days, is clearly different from the second observation at 17 days. Does this mean that the process is changing over time?

The individual-X chart has two other horizontal lines, known as control limits. The upper control limit (UCL) is shown in Figure 15.2 as a line at 32.8 days; the lower control limit (LCL) is drawn at 4.4 days. The control limits indicate the predicted boundary of the cycle time. In other words, we don't expect the cycle time to be longer than about 33 days or shorter than about 4 days. For the individual-X chart shown in Figure 15.2, the control limits are calculated as follows:

UCL_x = \bar{x} + 3\sigma_x    (15.1)

LCL_x = \bar{x} - 3\sigma_x    (15.2)

The letter x with the bar over it is read "x bar." The bar notation indicates the average of the parameter, so in this case the average of the x, where x is an observation. The parameter σ_x (read as "sigma of x") refers to the process standard deviation (or process sigma) of the observations, which in this case is calculated using the bottom control chart in Figure 15.2, the moving range chart.

The moving range chart uses the absolute value of the difference (i.e., range) between neighboring observations to estimate the short-term variation. For example, the first plotted point on the moving range chart is the absolute value of the difference between the second observation and the first observation. In this case, the first observation is 27 and the second is 17, so the first plotted value on the moving range chart is 10 (27 - 17). The line labeled RBAR on the moving range chart represents the average moving range, calculated by simply taking the average of the plotted points on the moving range chart. The moving range chart also has control limits, indicating the expected bounds on the moving range statistic. The lower control limit on the moving range chart in this example is zero; the upper control limit is shown in Figure 15.2 as 17.4. The moving range chart's control limits are calculated as

UCL_R = \bar{R} + 3 d_3 \sigma_x    (15.3)

LCL_R = \max(0, \bar{R} - 3 d_3 \sigma_x)    (15.4)

Process sigma, the process standard deviation, is calculated as

\sigma_x = \bar{R} / d_2    (15.5)

For a moving range chart, the parameters d_3 and d_2 are 0.853 and 1.128, respectively.
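A minimal sketch of Equations 15.1 through 15.5 follows, applied to hypothetical cycle-time observations (the 70 observations behind Figure 15.2 are not reproduced in this excerpt). The d_2 and d_3 values are the moving range constants quoted above.

```python
# Individual-X and moving range control limits per Equations 15.1-15.5.
# The cycle-time observations below are hypothetical, not the Figure 15.2 data.
D2, D3 = 1.128, 0.853   # constants for a moving range of size 2 (given in the text)

cycle_times = [27, 17, 25, 14, 22, 19, 16, 24, 18, 21, 15, 23, 20, 17, 26]

moving_ranges = [abs(b - a) for a, b in zip(cycle_times, cycle_times[1:])]
r_bar = sum(moving_ranges) / len(moving_ranges)           # RBAR
sigma_x = r_bar / D2                                      # Eq. 15.5: process sigma
x_bar = sum(cycle_times) / len(cycle_times)

ucl_x = x_bar + 3 * sigma_x                               # Eq. 15.1
lcl_x = x_bar - 3 * sigma_x                               # Eq. 15.2
ucl_r = r_bar + 3 * D3 * sigma_x                          # Eq. 15.3
lcl_r = max(0.0, r_bar - 3 * D3 * sigma_x)                # Eq. 15.4

print(f"PCL={x_bar:.1f}  UCL={ucl_x:.1f}  LCL={lcl_x:.1f}")
print(f"RBAR={r_bar:.1f}  UCL_R={ucl_r:.1f}  LCL_R={lcl_r:.1f}")
```

Applied to the actual Figure 15.2 data, the same calculation yields the limits shown in the figure (PCL = 18.6, UCL = 32.8, LCL = 4.4, RBAR = 5.3, UCL_R = 17.4).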
15.2.2 GENERAL INTERPRETATION OF CONTROL CHARTS

The control limits on the individual-X chart help us to answer the question posed in the section above. Since all the observations fall within the control limits, the answer is, "No, the process has not changed," even though the observations are clearly different. We see variation in all processes, provided we have adequate measurement equipment to detect the variation. The control limits represent the amount of variation we expect to see in the plotted statistic, based on our observations of the process in the past.

The fluctuation of the points between the control limits is due to the variation that is intrinsic (built in) to the process. We say that this variation is due to common causes, meaning that the sources of variation are common to all the observations in the process. Although we don't know what these causes are, their effect on the process is consistent over time.

Recall that the control limits are based on process sigma, which for the individual-X chart is calculated based on the moving range statistic. We can say that process sigma, and the resulting control limits, are determined by estimating the short-term variation in the process. If the process is stable, or in control, then we would expect what we observe now to be about the same as what we'll observe in the future. In other words, the short-term variation should be a good predictor for the longer-term variation if the process is stable.

Points outside the control limits are attributed to a special cause. Although we may not be able to immediately identify the special cause in process terms (for example, cycle time increased due to staff shortages), we have statistical evidence that the process has changed. This process change can occur in two ways.

• A change in process location, also known as a process shift. For example, the average cycle time may have changed from 19 days to 12 days. Process shifts may result in process improvement (for example, cycle time reduction) or process degradation (for example, an increased cycle time). Recognizing this as a process change, rather than just random variation of a stable process, allows us to learn about the process dynamics, and to reduce variation and maintain improvements.
• A change in process variation. The variation in the process may also increase or decrease. Generally, a reduction in variation is considered a process improvement, because the process is then easier to predict and manage.

Control charts are generally used in pairs. One chart, usually drawn as the bottom of the two charts, is used to estimate the variation in the process. In Figure 15.2, the moving range statistic was used to estimate the process variation, and because the chart has no points outside the control limits, the variation is in control. Conversely, if the moving range chart were not in control, the implication would be that the process variation is not stable (i.e., it varies over time), so a single estimate for variation would not be meaningful. Inasmuch as the individual-X chart's control limits are based on this estimate of the variation, the control limits for the individual-X chart should be ignored if the moving range chart is out of control. We must remove the special cause that led to the instability in process variation before we can further analyze the process. Once the special causes have been identified in process terms, the control limits may be recalculated, excluding the data affected by the special causes.
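The interpretation rule described above (check the range chart before trusting the individual-X limits) can be expressed as a short routine. This is an illustration only, not the handbook's code; the limit values and observations are hypothetical.

```python
# Sketch of the paired-chart interpretation rule described above:
# check the moving range chart first; only if it is in control do the
# individual-X limits give a meaningful test for special causes.
def outside(values, lcl, ucl):
    """Return the 1-based indices of points beyond the control limits."""
    return [i for i, v in enumerate(values, start=1) if v < lcl or v > ucl]

def interpret(observations, moving_ranges, limits):
    """limits is a dict with keys lcl_x, ucl_x, lcl_r, ucl_r (illustrative layout)."""
    range_signals = outside(moving_ranges, limits["lcl_r"], limits["ucl_r"])
    if range_signals:
        return ("Process variation is unstable (moving range points "
                f"{range_signals} out of control); ignore the individual-X limits "
                "until the special cause is removed and limits are recalculated.")
    x_signals = outside(observations, limits["lcl_x"], limits["ucl_x"])
    if x_signals:
        return f"Special cause signaled at observations {x_signals}."
    return "No evidence of a process change; only common-cause variation."

# Example with hypothetical numbers resembling Figure 15.2:
limits = {"lcl_x": 4.4, "ucl_x": 32.8, "lcl_r": 0.0, "ucl_r": 17.4}
obs = [27, 17, 25, 14, 22, 35, 19]
mrs = [abs(b - a) for a, b in zip(obs, obs[1:])]
print(interpret(obs, mrs, limits))
```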
15.2.3 DEFINING CONTROL LIMITS

To define the control limits, we need an ample history of the process to set the level of common-cause variation. There are two issues here.

• To distinguish between special causes and common causes, you must have enough subgroups to define the common-cause operating level of your process. This implies that all types of common causes must be included in the data. For example, if we observed the process over one shift, using one operator and a single batch of material from one supplier, we would not be observing all elements of common-cause variation that are likely to be characteristic of the process. If we defined control limits under these limited conditions, then we would likely see special causes arising due to the natural variation in one or more of these factors.
• Statistically, we need to observe a sufficient number of data observations before we can calculate reliable estimates of the variation and, to a lesser degree, the average. In addition, the statistical constants used to define control chart limits (such as d_2) are actually variables, and they approach constants only when the number of subgroups is large. For a subgroup size of 5, for instance, the d_2 value approaches a constant at about 25 subgroups (Duncan, 1986). When a limited number of subgroups are available, short-run techniques may be useful. These are covered later in this chapter.

15.2.4 BENEFITS OF CONTROL CHARTS

Control charts provide benefits in a number of ways.

Control limits represent the common-cause operating level of the process. The region between the upper and lower control limits defines the variation that is expected from the process statistic. This is the variation due to common causes: causes common to all the process observations. We don't concern ourselves with the differences between the observations themselves. If we want to reduce this level of variation, we need to redefine the process, or make fundamental changes to the design of the process. Deming demonstrated this principle with his red bead experiment, which he regularly conducted during his seminars. In this experiment, he used a bucket of beads or marbles. Most of the beads were white, but a small percentage (about 10%) of red beads were thoroughly mixed with the white beads. Students volunteered to be process workers, who would dip a sample paddle into the bucket and produce a day's "production" of 50 beads for the "White Bead Company." Another student would volunteer to be an inspector. The inspector counted the number of white beads in each operator's daily production. The white beads represented usable output that could be sold to White Bead Company's customers, and the red beads were scrap. These results were then reported to a manager, who would invariably chastise operators for a high number of red beads. If the operator's production improved on the next sample, he or she was rewarded; if the production of white beads went down, more chastising.

A control chart of the typical white bead output is shown in Figure 15.3. It's obvious from the figure that there was variation in the process observations: each dip into the bucket yielded a different number of white beads. Has the process changed? No! No one has changed the bucket, yet the number of white beads is different every time. The control limits tell us that we should expect between 0 and 11 red beads in each sample of 50 beads.

FIGURE 15.3 Example control chart (number of red beads per sample) for Deming's red bead experiment. Sample size = 50; PCL = 4.7, UCL = 10.9.
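Figure 15.3 is an attribute (np) chart, and the formula behind its limits is not shown in this excerpt. The sketch below applies the standard binomial-based np-chart limits, which is an assumption here rather than the handbook's derivation, but it reproduces the figure's PCL of 4.7 and UCL of about 10.9.

```python
# Standard np-chart limits (binomial model) applied to the red bead experiment.
# These formulas are not derived in the excerpt above; they are the usual
# Shewhart attribute-chart limits, shown here to reproduce PCL=4.7 and UCL=10.9.
import math

n = 50          # beads per dip
np_bar = 4.7    # average number of red beads per sample, read from Figure 15.3
p_bar = np_bar / n

sigma = math.sqrt(np_bar * (1 - p_bar))
ucl = np_bar + 3 * sigma
lcl = max(0.0, np_bar - 3 * sigma)

print(f"PCL = {np_bar:.1f}, UCL = {ucl:.1f}, LCL = {lcl:.1f}")
# -> roughly PCL = 4.7, UCL = 10.9, LCL = 0.0, matching the "0 to 11 red beads" range
```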
Control limits provide an operational definition of a special cause. As we've seen, process variation is quite natural. Once we accept that every process exhibits some level of variation, we then wonder how much variation is natural for this process. If a particular observation seems large, is it unnaturally large, or should an observation of this magnitude be expected? The control limits remove the subjectivity from this decision, and define this level of natural process variation.

In the absence of control limits, we assume that an arbitrarily large variation is due to a shift in the process. In our zeal to reduce variation, we adjust the process to return it to its prior state. For example, we sample the circled area in the leftmost distribution in Figure 15.4 from a process that (unbeknownst to us) is in control. We feel this value is excessively large, so we assume the process must have shifted. We adjust the process by the amount of deviation between the observed value and the initial process average. The process is now at the level shown in the center distribution in Figure 15.4. We sample from this distribution and observe several values near the initial average, and then sample a value such as the circled area in the center distribution in the figure. We adjust the process upward by the deviation between the new value and the initial mean, resulting in the rightmost distribution shown in the figure. As we continue this process, we can see that we actually increase the total process variation, which is exactly the opposite of our desired effect. Responding to these arbitrary observation levels as if they were special causes is known as tampering. This is also called "responding to a false alarm," since a false alarm is when we think that the process has shifted when it really hasn't. Deming's funnel experiment demonstrates this principle.

FIGURE 15.4 Tampering increases process variation (original variation vs. resulting variation).

In practice, tampering occurs when we attempt to control the process to limits that are narrower than the natural control limits defined by common-cause variation. Some causes of this:

• We try to control the process to specifications, or goals. These limits are defined externally to the process, rather than being based on the statistics of the process.
• Rather than using the suggested control limits defined at ±3 standard deviations from the centerline, we use limits that are tighter (narrower) than these, based on the faulty notion that this will improve the performance of the chart. Using limits defined at ±2 standard deviations from the centerline produces narrower control limits than the ±3 standard deviation limits, so it would appear that the ±2 sigma limits are better at detecting shifts. Assuming normality, the chance of being outside a ±3 standard deviation control limit is 0.27% if the process has not shifted. On average, a false alarm is encountered with these limits once every 370 subgroups (= 1/0.0027). Using ±2 standard deviation control limits, the chance of being outside the limits when the process has not shifted is 4.6%, corresponding to a false alarm every 22 subgroups! If we respond to these false alarms, we tamper and increase variation.
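The 0.27%/370-subgroup and 4.6%/22-subgroup figures follow directly from the normal tail areas; a quick check:

```python
# False alarm rate and average run length for +/- k sigma limits on a stable,
# normally distributed statistic (the 0.27%/370 and 4.6%/22 figures above).
import math

def false_alarm_rate(k):
    """Two-sided probability of a point beyond +/- k sigma when nothing has changed."""
    return math.erfc(k / math.sqrt(2))

for k in (3, 2):
    p = false_alarm_rate(k)
    print(f"+/-{k} sigma: false alarm rate = {p:.4%}, "
          f"about one false alarm every {1 / p:.0f} subgroups")
```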
Control charts prevent searching for special causes that do not exist. As data are collected and analyzed for a process, it seems almost second nature to assume that we can understand the causes of this variation. In Deming's red bead experiment, the manager would congratulate operators when their dips in the bucket resulted in a relatively low number of red beads, and chastise them if they submitted a high number of red beads. This should seem absurd, because the operator had no control over the number of red beads in each random sample. Yet this same experiment happens daily in real business environments. In the cycle time example shown above, suppose the order-processing supervisor, being unfamiliar with statistical process control, expected all orders to be processed at a quick pace, say 15 days. It seemed the process could deliver at this rate, because it had processed orders at or below this level many times in the past. If this was the supervisor's expectation, then he or she may look for a special cause ("This order must be different from the others") that doesn't exist. Instead, he or she should be redesigning the system (i.e., changing the fundamental nature of the bucket).

Control charts result in a stable process, which is predictable. When used on a real-time basis, control charts result in process stability. In the absence of a control chart, a common reaction is to respond to process variation with process adjustments. As discussed above, this tampering results in an unstable process that has increased variation. Personnel using a control chart to monitor the process in real time (as the process produces the observations) are trained to react with process adjustments only when the control chart signals a process shift with an out-of-control point. The resulting process is stable, allowing its future capability to be estimated. In fact, the future performance of a process may be estimated only if the process is stable (see also process capability, later in this chapter).

15.3 CHOOSING A CONTROL CHART

Many control charts are available for our use. One differentiator between control charts is the type of data to be analyzed:

Attribute data: also known as "count" data. Typically, we count the number of times we observe some condition (usually something we don't like, such as a defect or an error) in a given sample from the process.

Variables data: also known as measurement data. Variables data are continuous in nature, generally capable of being measured to enough resolution to provide at least ten unique values for the process being analyzed.

Attribute data have less resolution than variables data, because we count only whether something occurs, rather than take a measurement to see how close we are to the condition. For example, attribute data for a manufacturing process might include the number of items in which the diameter exceeds the specification, whereas variables data for the same process might be the measurement of that part's diameter. Attribute data generally provide us with less information than variables data would for the same process. Attribute data generally do not allow us to predict that the process is trending toward an undesirable state, because by the time the attribute signals a problem, the process is already in that condition. As a result, variables data are considered more useful for defect prevention.
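A small illustration of the resolution argument follows, using hypothetical diameter measurements and an assumed upper specification (neither taken from the handbook):

```python
# Hypothetical diameters and an upper specification, illustrating how reducing
# variables data to attribute data discards the trend information that would
# support defect prevention.
upper_spec = 10.05
diameters = [10.010, 10.018, 10.024, 10.031, 10.038, 10.044, 10.049]  # drifting upward

defectives = sum(1 for d in diameters if d > upper_spec)
print(f"Attribute view: {defectives} of {len(diameters)} parts out of specification")
print("Variables view:", ", ".join(f"{d:.3f}" for d in diameters))
# The attribute count is still zero, while the measurements show a steady drift
# toward the specification limit -- the early warning the text refers to.
```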
15.3.1 ATTRIBUTE CONTROL CHARTS

There are several attribute control charts, each designed for slightly different uses:

• NP chart: for monitoring the number of times a condition occurs, relative to a constant sample size. NP charts are used for binomial data, which exist when each sample unit can either have the condition of interest or not have it. For example, if the condition is "the product is defective," then each sample unit is either defective or not defective. In the NP chart, the value that is plotted is the observed number of units in the sample that meet the condition. For example, if we sample 50 items and 4 are defective, we plot the value 4 for this sample. The NP chart requires a constant sample size, inasmuch as we cannot directly compare 4 observations from 50 units with 5 observations from 150 units. Figure 15.3 provided an example of an NP chart.
• P chart: for monitoring the percentage of samples having the condition, relative to either a fixed or varying sample size. Use the P chart for the same data types and examples as the NP chart. The value plotted is a percentage, so we can use it for varying sample sizes. When the sample sizes vary by more than 20% or so, it's common to see the control limits vary as well.
• C chart: for monitoring the number of times a condition occurs, relative to a constant sample size, when each sample can have more than one instance of the condition. C charts are used for Poisson data. For example, if the condition is a surface scratch, then each sample unit can have 0, 1, 2, 3, etc., defects. The value plotted is the observed number of defects in the sample. For example, if we sample 50 items and 65 scratches are detected, we plot the value 65 for this sample. The C chart requires a constant sample size.
• U chart: for monitoring the rate at which the condition occurs, relative to either a fixed or varying sample size, when each sample can have more than one instance of the condition. Use the U chart for the same data types and examples as the C chart. The value plotted is a rate (defects per unit), so we can use it for varying sample sizes. When the sample sizes vary by more than 20% or so, it's common to see the control limits vary as well. An example of a U chart is shown in Figure 15.5.

FIGURE 15.5 U control chart, number of cracks per injection molding piece (PCL = 0.105 defects per unit).
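The choice among these four charts reduces to two questions, condensed in the sketch below; the function and argument names are illustrative, not the handbook's.

```python
# Chart selection logic condensed from the NP/P/C/U descriptions above.
def choose_attribute_chart(multiple_occurrences_per_unit, constant_sample_size):
    """multiple_occurrences_per_unit: True for defect counts (Poisson data),
    False for defective/not-defective classification (binomial data)."""
    if multiple_occurrences_per_unit:
        return "C chart" if constant_sample_size else "U chart"
    return "NP chart" if constant_sample_size else "P chart"

print(choose_attribute_chart(False, True))   # defective units, fixed n      -> NP chart
print(choose_attribute_chart(True, False))   # scratches per piece, varying n -> U chart
```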
SL3003Ch15Frame Page 340 Tuesday, November 6, 2001 6:02 PM 340 The Manufacturing Handbook of Best Practices • • of liquid to maintain a similar environment that carries over into subsequent temperature observations for a period of time Subgroups formed over a small time frame from these types of processes are sometimes called homogenous subgroups, because the observations within the subgroups are often... 32 36 40 FIGURE 15. 7 Irrational subgroups hug the centerline of this X-bar chart of fill weight • The subgroups are formed from observations taken in a time-ordered • sequence In other words, subgroups cannot be randomly formed from a set of data (or a box of parts); instead, the data composing a subgroup must be a “snapshot” of the process over a small window of time, and the order of the subgroups... 2001 6:02 PM 342 The Manufacturing Handbook of Best Practices 252.5 247.5 Lag 5 242.5 237.5 232.5 227.5 222.5 217.5 217.5 222.5 227.5 232.5 237.5 VISCOSITY 242.5 247.5 252.5 FIGURE 15. 8C Viscosity vs itself, five samples apart 252.5 247.5 Lag 10 242.5 237.5 232.5 227.5 222.5 217.5 217.5 222.5 227.5 232.5 237.5 VISCOSITY 242.5 247.5 252.5 FIGURE 15. 8D Viscosity vs itself, ten samples apart 1.0 ACF 0.6 0.2... effects of autocorrelation, and use this process model as a predictor of the process Changes in the process (relative to this model) can then be detected as © 2002 by CRC Press LLC SL3003Ch15Frame Page 344 Tuesday, November 6, 2001 6:02 PM 344 The Manufacturing Handbook of Best Practices special causes Specially constructed EWMA (wandering mean) charts with moving centerlines, such as is shown in Figure 15. 10,... the process are independent of one another Independence implies that the particular value of an observation in time cannot be predicted based on prior data observations For example, in Deming’s red bead experiment shown in Figure 15. 3, observing a particular value of, say 7 red beads, does not provide us with any information to predict the next observation Our best estimate of every sample is the process... is the observed value, nominal and range are the © 2002 by CRC Press LLC SL3003Ch15Frame Page 336 Tuesday, November 6, 2001 6:02 PM 336 The Manufacturing Handbook of Best Practices nominal and calculated standard range values, respectively, for the particular run, and zi is the standardized value: zi = xi − nominal range (15. 13) In either case, inasmuch as the short-run standardization is done to the... (15. 15) Cpk A measure of both process dispersion and its centering about the average Cp k = MIN (Cpl , Cpu ) (15. 16) where Cpl = − Zl 3 (15. 17) Cpu = − Zu 3 (15. 18) Normal distributions: Zl = x − Low Spec σx (15. 19) Zu = High Spec − x σx (15. 20) where x-double bar is the grand average and σx is process sigma Non-normal distributions: Zl = Znormal , p (15. 21) Zu = Znormal ,1− p (15. 22) Znormal,p and... double bar.” Because the bar notation indicates the average of the parameter, x double bar is the © 2002 by CRC Press LLC SL3003Ch15Frame Page 330 Tuesday, November 6, 2001 6:02 PM 330 The Manufacturing Handbook of Best Practices 1.8 UCL=1.723 1.4 OBSERVATIONS 1.0 0.6 PCL=0.564 0.2 -0.2 -0.6 g LCL=0.594 1.5 Group range: Selected (1-30) Auto drop: OFF CL ordinate: 3.000 Curve: Normal K-S: 0.640 Cpk: 0.81 . 1 1 SL3003Ch15Frame Page 333 Tuesday, November 6, 2001 6:02 PM © 2002 by CRC Press LLC 334 The Manufacturing Handbook of Best Practices number of times, then on average a subgroup of size n =. 
[...]

UCL_{\bar{x}} = \bar{\bar{x}} + 3\sigma_x / \sqrt{n}

LCL_{\bar{x}} = \bar{\bar{x}} - 3\sigma_x / \sqrt{n}

... is read "x double bar." Because the bar notation indicates the average of the parameter, x double bar is the average of the subgroup averages. The ...
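A sketch of the X-bar limits in the fragment above, applied to hypothetical subgroups of five. The d_2 constant for subgroups of five (2.326) is a standard published table value not shown in this excerpt, and only four subgroups are used here for brevity, far fewer than the roughly 25 the chapter recommends.

```python
# X-bar chart limits per the fragment above: x-double-bar +/- 3*sigma_x/sqrt(n).
# Hypothetical subgroups of five; d2 = 2.326 is the standard constant for n = 5
# (a published table value, not given in the excerpt).
import math

subgroups = [
    [20.1, 19.8, 20.4, 20.0, 19.9],
    [20.3, 20.2, 19.7, 20.1, 20.0],
    [19.9, 20.0, 20.2, 19.8, 20.3],
    [20.4, 20.1, 19.9, 20.2, 20.0],
]
n = len(subgroups[0])
d2 = 2.326

averages = [sum(sg) / n for sg in subgroups]
ranges = [max(sg) - min(sg) for sg in subgroups]

x_double_bar = sum(averages) / len(averages)    # average of the subgroup averages
r_bar = sum(ranges) / len(ranges)
sigma_x = r_bar / d2                            # process sigma, as in Eq. 15.5

ucl = x_double_bar + 3 * sigma_x / math.sqrt(n)
lcl = x_double_bar - 3 * sigma_x / math.sqrt(n)
print(f"x-double-bar = {x_double_bar:.3f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")
```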
