Designing Capable and Reliable Products Episode 1 Part 8 doc


Rating (S) to give the target C_pk. The program initially calculates a primer tolerance, t_pi (the statistical sum of the primer tolerances equals the assembly tolerance, t_a), and allocates this to each dimension. The primer tolerances are calculated using equation 3.23, which is derived from equation 3.21. They are used to start the optimization routine, and their respective risk values 'A' are calculated from the process capability maps:

t_{pi} = t_a / \sqrt{n}    (3.23)

where:
t_pi = bilateral primer tolerance for the ith component characteristic
t_a = bilateral tolerance for the assembly
n = number of component tolerances in the assembly stack.

Figure 3.8 Pareto chart showing variance contribution (%) of each characteristic to the final assembly variance (for the paper-based analysis)

Figure 3.9 shows the results of the optimization routine for the tolerances. For capability at this first level to be realized, all the component risks must be below the line on completion of the optimization routine. This is not the case here, and consequently the capability required with this design configuration cannot be achieved. It is evident from Figure 3.9 that two of the six component characteristics are preventing optimization (as was determined in the paper-based analysis), and therefore redesign effort is required here, specifically on the base tolerance of the solenoid body and the pole thickness, which are both impact extruded. A feature of the design related to these two tolerances is the bobbin dimensional tolerance, which sets the position of the pole from the solenoid base.

Figure 3.9 Chart showing that capable tolerances cannot be optimized for the solenoid tolerance stack design

Further redesign data is shown on the final CAPRAtol screen in Figure 3.10. The redesign data includes a basic estimate for C_pk, and an assembly tolerance, calculated from the process capability maps, that would be achievable for the given design parameters. The values of tolerance risk 'A' indicate that optimization is not feasible, with two characteristics dominating (see Figure 3.9).

Figure 3.10 Final CAPRAtol screen for the solenoid tolerance stack design

A redesign solution of the tolerance stack is shown in Figure 3.11. It involves a small design alteration (circled) which eliminates the plastic bobbin from the tolerance stack by machining a shoulder on the inside of the solenoid body, up to which the pole piece is precisely located. Also, secondary machining processes are carried out on the components with the least capability, as determined from Figure 3.9. As there were machining operations on these two components anyway, the increase in component cost was small. Inputting the design parameters into CAPRAtol and proceeding with the optimization routine, we find that the largest tolerances for the least risk are optimized to near equal values, as shown in Figure 3.12. The risk values determined at this stage are very low, close to unity in fact, and the situation looks more promising. Figure 3.13 shows the effects of the material processing and geometry risks for each component from the component manufacturing variability risk, q_m, and these are taken into consideration in the calculation of the final estimates for C_pk and C_p for each tolerance.
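As a quick illustration of equation 3.23 and the statistical (root sum square) stack model behind it, the Python sketch below allocates equal primer tolerances across a stack and checks that their statistical sum recovers the assembly tolerance. It is a minimal sketch, not the CAPRAtol implementation; the function name and the use of the solenoid's ±0.2 mm target over six characteristics are illustrative.

```python
import math

def primer_tolerances(t_a: float, n: int) -> list[float]:
    """Equation 3.23: allocate equal bilateral primer tolerances t_pi
    to n characteristics so that their statistical (root sum square)
    sum equals the assembly tolerance t_a."""
    t_pi = t_a / math.sqrt(n)
    return [t_pi] * n

# Solenoid case: assembly tolerance of +/-0.2 mm shared by six characteristics
tols = primer_tolerances(0.2, 6)
print([round(t, 4) for t in tols])            # six equal values of ~0.0816 mm

# Check: the root sum square of the primer tolerances recovers t_a
rss = math.sqrt(sum(t ** 2 for t in tols))
print(round(rss, 6))                          # 0.2
```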
Figure 3.11 Solenoid end assembly redesign

Figure 3.12 Chart showing optimized tolerances for the solenoid tolerance stack redesign

Figure 3.13 Data screen for the solenoid tolerance stack redesign

Part of the design information provided by the software is the standard deviation multiplier, z, for each component tolerance, shown in Pareto chart form in Figure 3.14. Additionally, sensitivity analysis is used to provide the percentage contribution of each tolerance variance to the final assembly tolerance variance, as shown in Figure 3.15. It is evident that characteristic number 5 has the largest contribution (89.8%), and it is likely that if this characteristic shifts, the final assembly distribution will be shifted from its target value, suggesting a need for SPC in production. However, this redesign solution is very capable (as shown on the final CAPRAtol screen in Figure 3.16), with C_pk = 3.67, which theoretically relates to no failures. The actual process capability of the assembly tolerance will, in fact, lie somewhere between the two values calculated for C_pk and C_p, but by using the former we are considering the possibility of shift throughout the product's life-cycle.

Figure 3.14 Chart of standard deviation multiplier values, z, for each tolerance

Figure 3.15 Chart showing variance contribution of each tolerance to the final assembly variance

The final tolerances the designer would allocate to the component dimensions are also shown in Figure 3.16. The tolerance values are given to three decimal places; if required for practical use, they can be rounded off with minimal effect on the overall assembly tolerance, for example ±0.191 mm to ±0.190 mm.

A comparative worst case assembly tolerance, based on these capable tolerances optimized using the statistical approach, is shown as ±0.306 mm in the lower left-hand corner of Figure 3.16, which is greater than the target of ±0.2 mm. A thorough analysis of the redesign based on the worst case model was presented in Chapter 2. However, the assumption that each component dimension is at its maximum or minimum limit was clearly not the case, and the level of variability experienced differed throughout the components in the assembly stack. In practice, the worst case approach gives little indication of the impact of the component tolerance distributions on the final assembly tolerance distribution, and leaving out this data can have as great an impact as leaving out the tolerances in the first place (Hopp, 1993). The inadequacy of the worst case model is evident, and the statistical treatment of the tolerance stack is more realistic, especially when the effects of shifted distributions are included. This has also been the conclusion of some of the literature discussing tolerance stack models (Chase and Parkinson, 1991; Harry and Stewart, 1988; Wu et al., 1988). Indeed, shifting and drifting of component distributions has been cited as the chief reason for the apparent disenchantment with statistical tolerancing in manufacturing (Evans, 1975).

Modern equipment is frequently composed of thousands of components, all of which interact within various tolerances. Failures often arise from a combination of drift conditions rather than from the failure of a specific component. These are more difficult to predict and are therefore less likely to be foreseen by the designer (Smith, 1993).
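The worst case figure of ±0.306 mm and the variance contributions in Figure 3.15 both follow directly from the individual component tolerances. The sketch below shows the arithmetic; the six tolerance values are illustrative stand-ins chosen only to be consistent with the quoted ±0.306 mm worst case sum (the actual values are those on the Figure 3.16 screen).

```python
import math

def stack_report(tols):
    """Compare the statistical (root sum square) and worst case (linear sum)
    assembly tolerances, and compute each tolerance's percentage contribution
    to the assembly variance -- the basis of a Pareto chart like Figure 3.15."""
    variances = [t ** 2 for t in tols]     # each bilateral tolerance contributes ~t_i^2
    total = sum(variances)
    statistical = math.sqrt(total)         # statistical stack model
    worst_case = sum(tols)                 # every dimension at its extreme limit
    contributions = [100.0 * v / total for v in variances]
    return statistical, worst_case, contributions

tols = [0.191, 0.030, 0.030, 0.025, 0.020, 0.010]   # illustrative values, mm
stat, worst, contrib = stack_report(tols)
print(f"statistical: +/-{stat:.3f} mm, worst case: +/-{worst:.3f} mm")
print(["%.1f%%" % c for c in contrib])   # the dominant term drives the assembly variance
```

Run on these stand-in values, the worst case sum is ±0.306 mm while the statistical sum stays just below the ±0.2 mm target, mirroring the argument made above for the statistical model.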
Figure 3.16 Final CAPRAtol screen for the solenoid tolerance stack redesign, showing the final capable tolerances allocated to the dimensions

3.7 Summary

Recent developments in designing capable tolerance stacks have been reviewed, and their application has been demonstrated via an industrial case study. The data used to determine the tolerances and standard deviations has a realistic base and can provide reliable results in the design problem of allocating optimum capable tolerances in assembly stacks. The CAPRAtol method uses empirical capability data for a number of processes, including material and geometry effects. Component tolerances with the greatest capability are optimized for the given functional assembly tolerance, including the effects of anticipated process shift. The use of the Conformability Map for setting capability targets from FMEA inputs is pivotal in the generation of a capable design solution.

The inadequacy of the worst case approach to tolerance stack design compared to the statistical approach is evident, although it still appears to be popular with designers. The worst case tolerance stack model is inadequate and wasteful when the capability of each dimensional tolerance is high (C_pk ≥ 1.33). Some summarizing comments on the two main approaches are given below.

The 'worst case' tolerance stack approach is characterized by the following:

- Simple to perform
- Assumes each tolerance distribution is at its maximum or minimum limit
- Little information generated for redesign purposes
- Popular as a safeguard, leading to unnecessarily tight tolerances and, therefore, increased costs.

The 'statistical' tolerance stack approach is characterized by the following:

- More difficult mathematically (a computer is necessary)
- Assumes tolerances are random variables
- Opportunities for optimization of tolerances in the assembly
- Can perform sensitivity analysis for redesign purposes
- Can include effects of shifting and drifting of component tolerances
- More realistic representation of the actual situation.

We must promote the use of statistical methods with integrated manufacturing knowledge, through user-friendly platforms, in order to design capable products. Comparing and evaluating assembly stacks, as shown in the case study above, is an essential way of identifying the capability of designs and indicating areas for redesign.

4 Designing reliable products

4.1 Deterministic versus probabilistic design

For many years, designers have applied so-called factors of safety in a deterministic design approach. These factors are used to account for uncertainties in the design parameters, with the aim of generating designs that will ideally avoid failure in service. Load and stress concentrations were the unknown contributing factors, and this led to the term 'factor of ignorance' (Gordon, 1991). The factor of safety, or deterministic, approach still predominates in engineering design culture, although the statistical nature of engineering problems has been studied for many years. Many designs are still based on past experience and intuition, rather than on thorough analysis and experimentation (Kalpakjian, 1995). In general, the deterministic design approach can be expressed by equation 4.1.
Note that the stress and strength are in the same units:

S / FS > L    (4.1)

where:
L = loading stress
S = material strength
FS = factor of safety.

The factors of safety were initially determined from the sensitivity, experienced in practice, of a part made from a particular material to a particular loading situation, and in general the greater the uncertainties experienced, the greater the factor of safety used (Faires, 1965). Table 4.1 shows recommended factor of safety values published 60 years apart, first by Unwin (c.1905) and then by Faires (1965). They are very similar in nature, and in fact the earlier published values are lower in some cases. As engineers learned more about the nature of variability in engineering parameters, but were unable to quantify it satisfactorily, it seems that the factor of safety was increased to accommodate these uncertainties. It is also noticeable that the failure criterion of tensile fracture seems to have been replaced by ductile yielding (judging by the omission of yield-based values c.1905), the yield stress being the preferred criterion of failure. This is because, in general, most machine parts will not function satisfactorily after the permanent deformation caused by the onset of yielding.

The factor of safety had little scientific background, but had an underlying empirical and subjective nature. No one can dispute that at a time when stress analysis was in its infancy this was the best knowledge available, but such factors are still being applied today! Factors of safety recommended in recent literature range from 1.25 to 10 for various material types and loading conditions (Edwards and McKee, 1991; Haugen, 1980).

Figure 4.1 gives an indication that engineers in the 1950s were beginning to think differently about design, with the introduction of a 'true' margin of safety, and that a probabilistic design approach was being advocated. It shows that the design problem was multifactored and variability based. With the increasing use of statistics in engineering around this time, the theories of probabilistic design and reliability were to become established methods in some sectors by the 1960s.

The deterministic approach is not very precise, and the tendency is to use it very conservatively, resulting in overdesigned components, high costs and sometimes ineffectiveness (Modarres, 1993). Carter (1986) notes that stress rupture was responsible for a sufficient number of failures for us to conclude that deterministic design does not always ensure intrinsic reliability, and that room for improvement still exists. Increasing demands for performance, often resulting in operation near limit conditions, have placed increasing emphasis on precision and realism (Haugen, 1980). There has been a great disenchantment with factors of safety for many years, mainly because they disregard the fact that material properties, the dimensions of the components and the externally applied loads are statistical in nature (Dieter, 1986). The deterministic approach is, therefore, not suitable for today's products, where superior functionality and high customer satisfaction should be a design output. The need for more efficient, higher performance products is encouraging more applications of probabilistic methods (Smith, 1995). Probabilistic design methods have been shown to be important when the design cannot be tested to failure and when it is important to minimize weight and/or cost (Dieter, 1986).
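Equation 4.1 amounts to a one-line acceptance check. The sketch below applies it with a factor of safety taken from the dead-load row of Table 4.1; the numerical strength and load values are assumptions for illustration only.

```python
def deterministic_check(strength: float, load: float, fs: float) -> bool:
    """Deterministic design rule of equation 4.1: accept the design when
    S / FS > L, i.e. the strength derated by the factor of safety still
    exceeds the loading stress (S and L in the same units)."""
    return strength / fs > load

# Illustrative: a ductile steel part under dead load; Table 4.1 recommends
# a factor of safety of 3 to 4 based on the ultimate tensile strength Su
S_u = 400.0   # MPa, assumed ultimate tensile strength
L = 90.0      # MPa, assumed loading stress
print(deterministic_check(S_u, L, fs=4.0))   # True: 400/4 = 100 MPa > 90 MPa
```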
In companies where minimizing weight is crucial, for example those in the aerospace industry, probabilistic design techniques can be found, although NASA has found the deterministic approach to be adequate for some structural analysis (NASA, 1995). Non-complex and/or non-critical applications in mechanical design can also make use of probabilistic design techniques, and justify a more in-depth approach if the benefits are related to practitioners and customers alike.

Table 4.1 Factors of safety for ductile and brittle materials and various loading conditions (values shown in brackets from c.1905, without brackets from 1965; Su = ultimate tensile strength, Sy = yield strength)

Type of loading                      | Steel (ductile metals)        | Cast iron (brittle metals)
                                     | Based on Su  | Based on Sy    | Based on Su
Dead load                            | 3 to 4 (3)   | 1.5 to 2       | 5 to 6 (4)
Repeated, one direction/mild shock   | 6 (5)        | 3              | 7 to 8 (6)
Repeated, reversed/mild shock        | 8 (8)        | 4              | 10 to 12 (10)
Shock                                | 10 to 15 (12)| 5 to 7         | 15 to 20 (15)

Surveys have indicated that many products in the industrial sector have in the past been overdesigned (Kalpakjian, 1995). That is, they were either too bulky, were made of materials too high in quality, or were made with unwarranted precision for the intended use. Overdesign may result from uncertainties in design calculations or from the concern of the designer and manufacturer over product safety, in order to avoid user injury or death.

Figure 4.1 The 'true' margin of safety (adapted from Furman, 1981 and Nixon, 1958)

[...]

...method can be found in Ayyub and McCuen (1997), Edwards and McKee (1991), Kottegoda and Rosso (1997), Leitch (1995), Lewis (1996), Metcalfe (1997), Mischke (1992), Rao (1992), and Shigley and Mischke (1989). Straight line plots of the cumulative function are commonly used, but there is no foolproof method that will guide the choice of the distribution (Lipson and Sheth, 1973). Additional goodness-of-fit...

Figure 4.5 Cumulative frequency plot and determination of mean and standard deviation graphically

Figure 4.6 Shape of the Cumulative Distribution Function (CDF) for an arbitrary normal distribution with varying standard deviation (adapted from Carter, 1986)

...population. We observe that 100% of the sample taken has failed, but this failure distribution does not necessarily match...

...and in practice only sufficient observations are made to determine the mean and standard deviation of the stress and strength, leading to the Normal distribution (Mischke, 1970). Therefore, the simplest and most common case has always been when stress and strength are normally distributed (Murty and Naikan, 1997; Vinogradov, 1991). If a complete theory of statistical inference is developed based on the...

...and estimated cumulative frequencies. It is applicable to small samples and does not depend on grouping the data. In both cases, however, their effective use is restricted to the non-linearized domain (Ayyub and McCuen, 1997).

Figure 4.8 Correlation coefficient, r, for several relationships between x and y variables

An alternative method is to fit the 'best' straight line...
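The cumulative frequency plot mentioned in the Figure 4.5 fragment starts from plotting positions assigned to a ranked sample. A minimal sketch of that first step is given below; the mean rank formula i/(N + 1) is one common choice of plotting position (others exist), and the strength data are invented for illustration.

```python
def plotting_positions(samples):
    """Cumulative frequency for a ranked sample using the mean rank
    plotting position i / (N + 1) -- the points behind a cumulative
    frequency plot such as the one described for Figure 4.5."""
    xs = sorted(samples)
    n = len(xs)
    return [(x, (i + 1) / (n + 1)) for i, x in enumerate(xs)]

data = [385, 402, 396, 410, 391, 399, 405, 388]   # illustrative strength data, MPa
for x, F in plotting_positions(data):
    print(f"{x:6.1f}  {100 * F:5.1f}%")            # plot F (probability scale) against x
```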
...test and the Kolmogorov–Smirnov test are available (Ayyub and McCuen, 1997; Leitch, 1995; Mischke, 1992). The χ² test is not applicable when data is sparse (N < 15) and relies on grouping the data. This is because of the need to compare the estimated and observed frequencies for the experimental data. The Kolmogorov–Smirnov test statistic is determined from the difference between the observed and estimated...

...(Siddal, 1983). The random variable may be a set of real numbers corresponding to the outcome of a series of experiments, tests, data measurements, etc. Usually, information relating to these variables for a particular design is not known beforehand. Even if similar design cases are well documented, there are always particular circumstances affecting the distribution functions (Vinogradov, 1991). Three...

...1950). Although certainly not all engineering random variables are normally distributed, a Normal distribution is a good first approximation. Ullman (1992) argues that the assumption that stress and strength are of the Normal type is a reasonable one, because there is not enough data available to warrant anything more sophisticated. A problem with this is that the Normal distribution...

...curve is drawn by hand in the figure, but the use of polynomial curve fitting software will yield more accurate results. This type of graph and its variants are used to determine the parameters of any distribution. For example, from the points on the x-axis corresponding to the percentile points ≈84.1% and ≈15.9% on the cumulative frequency (the percentage probabilities at ±1 of the Standard Normal variate,...

...regarded as a most valuable distribution (Bompas-Smith, 1973). If an improved estimate for the mean and standard deviation of a set of data is the goal, it has been cited that determining the Weibull parameters and then converting to Normal parameters using suitable transformation equations is recommended (Mischke, 1989). Similar estimates for the mean and standard deviation can be found from any initial distribution...

Figure 4.2 Comparison of the probabilistic and deterministic design approaches

4.2 Statistical methods for probabilistic design

4.2.1 Modelling data using statistical distributions

A key problem in probabilistic design is the generation of the statistical distributions from available information about the random variables...
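Two of the techniques referred to in the fragments above, reading the mean and standard deviation off the ≈15.9%, 50% and ≈84.1% points of the cumulative frequency, and checking the fit with the Kolmogorov–Smirnov test, can be sketched in a few lines. This is an illustrative sketch on invented data; note that, strictly, estimating the Normal parameters from the same sample biases the Kolmogorov–Smirnov p-value.

```python
import statistics
from scipy import stats  # for the Kolmogorov-Smirnov goodness-of-fit test

data = [385, 402, 396, 410, 391, 399, 405, 388]  # illustrative strength data, MPa

# Percentile-based estimates: the 50% point approximates the mean, and the
# ~84.1% and ~15.9% points lie one standard deviation either side of it
qs = statistics.quantiles(data, n=1000)   # 999 interpolated cut points
mean_est = qs[499]                        # ~50.0% point
sigma_est = (qs[840] - qs[158]) / 2.0     # half the ~15.9%-to-~84.1% spread
print(f"mean ~ {mean_est:.1f} MPa, sigma ~ {sigma_est:.1f} MPa")

# Kolmogorov-Smirnov test against the fitted Normal distribution: usable for
# small samples and needing no grouping, unlike the chi-squared test
D, p = stats.kstest(data, "norm", args=(mean_est, sigma_est))
print(f"KS statistic D = {D:.3f}, p = {p:.3f}")
```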
