Applied Structural and Mechanical Vibrations: Theory, Methods and Measuring Instrumentation

12 Stochastic processes and random vibrations

12.1 Introduction

A large number of phenomena in science and engineering either defy any attempt at a deterministic description or lend themselves to a deterministic description only at the price of enormous difficulties. Examples of such phenomena are not hard to find: the height of waves in a rough sea, the noise from a jet engine, the electrical noise of an electronic component or, if we remain within the field of vibrations, the vibrations of an aeroplane flying in a patch of atmospheric turbulence, the vibrations of a car travelling on a rough road or the response of a building to earthquake and wind loads.

Without doubt, it is legitimate to ask whether any of the above or similar phenomena is intrinsically deterministic and we are simply incapable of a deterministic description because of their complexity; but the fact remains that we have no way to predict an exact value at a future instant of time, no matter how many records we take or observations we make. However, it is also a fact that repeated observations of these and similar phenomena show that they exhibit certain patterns and regularities that fit into a probabilistic description. This occurrence suggests taking a different and more pragmatic approach, which has turned out to be successful in a large number of practical situations: we simply leave open the question about the intrinsic nature of these phenomena and, for all practical purposes, tackle the problem by defining them as 'random' and adopting a description in terms of probabilistic statements and statistical averages. In other words, we base the decision of whether a certain phenomenon is deterministic or random on the ability to reproduce the data by controlled experiments. If repeated runs of the same experiment produce identical results (within the limits of experimental error), then we regard the phenomenon in question as deterministic; if, on the other hand, different runs of the same experiment do not produce identical results but show patterns and regularities which allow a satisfactory description (and satisfactory predictions) in terms of probability laws, then we speak of a random phenomenon.

12.2 The concept of stochastic process

First of all, a note on terminology: although some authors distinguish between the terms, in what follows we will adopt the common usage in which 'stochastic' is synonymous with 'random' and the two terms can be used interchangeably.

Now, if we refer back to the preceding chapter, it can be noted that the concepts of event and random variable can be conveniently considered as forming two levels of a hierarchy in order of increasing complexity: the information about an event is given by a single number (its probability), whereas the information about a random variable requires the knowledge of the probability of many events. If we take a step further up in the hierarchy we run into the concept of stochastic or random process. Broadly speaking, any process that develops in time or space and can be modelled according to probabilistic laws is a stochastic or random process. More specifically, a stochastic process X(z) consists of a family of random variables indexed by a parameter z which, in turn, can be either discrete or continuous and varies within an index set Z, i.e. $\{X(z),\ z \in Z\}$. In the former case one speaks of a discrete-parameter process, while in the latter case we speak of a continuous-parameter process.
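As a concrete illustration of a family of random variables indexed by a discrete parameter, the short Python sketch below (an illustration of ours, not taken from the book) builds a random walk X(n): for each fixed n, X(n) is a random variable whose distribution we can probe by sampling many realizations.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def random_walk_ensemble(n_realizations: int, n_steps: int) -> np.ndarray:
    """Discrete-parameter process X(n): cumulative sums of +/-1 steps.
    Row j is one realization; column n collects samples of the random
    variable X(n) across the ensemble."""
    steps = rng.choice([-1.0, 1.0], size=(n_realizations, n_steps))
    return np.cumsum(steps, axis=1)

x = random_walk_ensemble(n_realizations=1000, n_steps=100)
# For fixed n, X(n) is a random variable: estimate its mean and variance.
print(x[:, 99].mean(), x[:, 99].var())  # near 0 and near 100, respectively
```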
For our purposes, the interest will be focused on random processes X(t) that develop in time, so that the index parameter will be time t varying within a time interval T; such processes can also be generally indicated with the symbol $\{X(t),\ t \in T\}$. In general, the fact that the parameter t varies continuously does not imply that the set of possible values of X(t) is continuous, although this is often the case. A typical example of a random time record with zero mean (velocity in this specific example, although this is not important for our present purposes) looks like Fig 12.1, which was created by using a set of software-generated random numbers.

[Fig 12.1 Random (velocity) time record]

Also note that a random process can develop in both time and space: consider for example the vibration of a tall and slender structure under the action of wind during a windstorm. The effect of turbulence will be random not only in time but also with respect to the vertical space coordinate y along the structure.

The basic idea of a stochastic process is that for any given value of t, e.g. $t = t_0$, $X(t_0)$ is a random variable, meaning that we can consider its cumulative distribution function (cdf)

$F_X(x; t_0) = P[X(t_0) \le x]$   (12.1a)

or its probability density function (pdf)

$p_X(x; t_0) = \partial F_X(x; t_0)/\partial x$   (12.1b)

where we write $F_X(x; t_0)$ and $p_X(x; t_0)$ to point out the fact that, in general, these functions depend on the particular instant of time $t_0$. Note, however, that if we adhere strictly to the notation of the preceding chapter we should write $F_{X(t_0)}(x)$ and $p_{X(t_0)}(x)$.

By the same token, we can have information on the behaviour of the process X(t) at two particular instants of time $t_1$ and $t_2$ by considering the joint cdf

$F_{XX}(x_1, x_2; t_1, t_2) = P[X(t_1) \le x_1,\ X(t_2) \le x_2]$   (12.2a)

and the corresponding joint pdf

$p_{XX}(x_1, x_2; t_1, t_2) = \partial^2 F_{XX}(x_1, x_2; t_1, t_2)/\partial x_1 \partial x_2$   (12.2b)

or, for any finite number of instants $t_1, t_2, \ldots, t_n$, we can consider the function

$F_{X \cdots X}(x_1, \ldots, x_n; t_1, \ldots, t_n) = P[X(t_1) \le x_1, \ldots, X(t_n) \le x_n]$   (12.3)

and its corresponding joint pdf, so that, by increasing the value of n, we can describe the probabilistic structure of the random process in finer and finer detail. Note that knowledge of the joint distribution function (12.3) gives information for any $m < n$ (e.g. the function of eq (12.2a), where m = 2), since these distribution functions are simply its marginal distribution functions. Similarly, we may extend the concepts above by considering more than one stochastic process, say X(t) and Y(t′), and follow the discussion of Chapter 11 to define their joint pdfs for various possible sets of the index parameters t and t′.

Now, since we can characterize a random variable X by means of its moments and since, for a fixed instant of time, the stochastic process X(t) defines a random variable, we can calculate its first moment (mean value) as

$\mu_X(t) = E[X(t)] = \int_{-\infty}^{\infty} x\, p_X(x; t)\, dx$   (12.4)

or its mth-order moment

$E[X^m(t)] = \int_{-\infty}^{\infty} x^m\, p_X(x; t)\, dx$   (12.5)

and the central moments as in eq (11.36). In the general case, all these quantities now obviously depend on t because they may vary for different instants of time; in other words, if we fix for example two instants of time $t_1$ and $t_2$, we have, in general, $\mu_X(t_1) \ne \mu_X(t_2)$. Similarly, for two instants of time we have the so-called autocorrelation function

$R_{XX}(t_1, t_2) = E[X(t_1)X(t_2)]$   (12.6)

and the autocovariance

$K_{XX}(t_1, t_2) = E\{[X(t_1) - \mu_X(t_1)][X(t_2) - \mu_X(t_2)]\}$   (12.7)

which are related (eq (11.67a)) by the equation

$K_{XX}(t_1, t_2) = R_{XX}(t_1, t_2) - \mu_X(t_1)\mu_X(t_2)$   (12.8)

Particular cases of eqs (12.6) and (12.7) occur when $t_1 = t_2 = t$, so that we obtain, respectively, the mean squared value and the variance:

$R_{XX}(t, t) = E[X^2(t)], \qquad K_{XX}(t, t) = \sigma_X^2(t)$   (12.9)

When two processes are studied simultaneously, the counterpart of eq (12.6) is the cross-correlation function

$R_{XY}(t_1, t_2) = E[X(t_1)Y(t_2)]$   (12.10)

which is related to the cross-covariance

$K_{XY}(t_1, t_2) = E\{[X(t_1) - \mu_X(t_1)][Y(t_2) - \mu_Y(t_2)]\}$   (12.11)

by the equation

$K_{XY}(t_1, t_2) = R_{XY}(t_1, t_2) - \mu_X(t_1)\mu_Y(t_2)$   (12.12)
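The moment functions (12.4)–(12.12) can be estimated by averaging across an ensemble of records. Below is a minimal sketch of ours (not the book's), assuming the ensemble is stored as a NumPy array with one record per row and time discretized at a fixed step, so that $t_k = i_k\,dt$:

```python
import numpy as np

def ensemble_mean(x: np.ndarray) -> np.ndarray:
    """mu_X(t) = E[X(t)], eq (12.4), estimated across the ensemble axis."""
    return x.mean(axis=0)

def autocorrelation(x: np.ndarray, i1: int, i2: int) -> float:
    """R_XX(t1, t2) = E[X(t1) X(t2)], eq (12.6)."""
    return float(np.mean(x[:, i1] * x[:, i2]))

def autocovariance(x: np.ndarray, i1: int, i2: int) -> float:
    """K_XX(t1, t2) = R_XX(t1, t2) - mu_X(t1) mu_X(t2), eq (12.8)."""
    mu = ensemble_mean(x)
    return autocorrelation(x, i1, i2) - mu[i1] * mu[i2]

def cross_correlation(x: np.ndarray, y: np.ndarray, i1: int, i2: int) -> float:
    """R_XY(t1, t2) = E[X(t1) Y(t2)], eq (12.10), for two processes."""
    return float(np.mean(x[:, i1] * y[:, i2]))
```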
Consider now the idea of statistical sampling. With a random variable X we usually perform a series of independent observations and collect a number of samples, i.e. a set of possible values of X. Each observation $x_j$ is a number and, by collecting a sufficient number of observations, we can get an idea of the underlying probability distribution of the random variable X. In the case of a stochastic process X(t), each observation $x_j(t)$ is a time record similar to the one shown in Fig 12.1, and our experiment consists of collecting a sufficient number of time records which can be used to estimate probabilities, expected values etc. A collection of a number—say n—of time records is the engineer's representation of the process and is called an ensemble. A typical ensemble of four time histories is shown in Fig 12.2.

[Fig 12.2 Ensemble of four time histories for the stochastic process X(t)]

As an example, consider the vibrations of an aeroplane in a region of frequent atmospheric turbulence, given the fact that the same plane flies through that region many times a year. During a specific flight we measure a vibration time history $x_1(t)$, during a second flight in similar conditions we measure $x_2(t)$ and so on, where, for instance, if the plane takes about 15 minutes to fly through that region, each record is about 15 minutes long. The statistical population for this random process is the infinite set of time histories that, in principle, could be recorded in similar conditions.

We are thus led to a two-dimensional interpretation of the stochastic process, which we can indicate, whenever convenient, with the symbol X(j, t): for a specific value of t, say $t = t_0$, $X(j, t_0)$ is a random variable and $x_1(t_0), x_2(t_0), \ldots, x_n(t_0)$ are particular realizations, i.e. observed values, of $X(j, t_0)$; on the other hand, for a fixed j, say $j = j_0$, $X(j_0, t)$ is simply a function of time, i.e. a sample function $x_{j_0}(t)$.

With the data at our disposal, the quantities of eqs (12.4)–(12.9) must be understood as ensemble expected values, that is, expected values calculated across the ensemble. However, it is not always possible to collect an ensemble of time records, and the question could be asked if we can gain some information on a random process just by recording a sufficiently long time history and by calculating temporal expected values, i.e. expected values calculated along the sample function at our disposal. An example of such a quantity is the temporal mean obtained from a time history x(t) as

$\langle x(t) \rangle = \lim_{T \to \infty} \frac{1}{T} \int_0^T x(t)\, dt$   (12.13)

The answer to the question is that this is indeed possible in a number of cases and depends on some specific assumptions that can often (reasonably) be made about the characteristics of many stochastic processes of interest.

12.2.1 Stationary and ergodic processes

Strictly speaking, a stationary process is a process whose probabilistic structure does not change with time or, in more mathematical terms, is invariant under an arbitrary shift of the time axis. Stated this way, it is evident that no physically realizable process is stationary, because all processes must begin and end at some time. Nevertheless, the concept is very useful for sufficiently long time records, where by the expression 'sufficiently long' we mean here that the process has a duration which is long compared to the period of its lowest spectral components.

There are many kinds of stationarity, depending on what aspect of the process remains unchanged under a shift of the time axis. For example, a process is said to be mean-value stationary if

$E[X(t + r)] = E[X(t)]$   (12.14a)

for any value of the shift r. Equation (12.14a) implies that the mean value is the same for all times, so that for a mean-value stationary process

$\mu_X(t) = \mu_X = \text{constant}$   (12.14b)
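A crude numerical check of eq (12.14a) is to compare ensemble means at shifted times. The sketch below is a hypothetical illustration of ours; the tolerance is an arbitrary choice that must absorb the estimation noise of a finite ensemble:

```python
import numpy as np

def is_mean_value_stationary(x: np.ndarray, shift: int, tol: float = 0.1) -> bool:
    """Crude check of eq (12.14a): compare the ensemble mean mu_X(t)
    with the shifted mean mu_X(t + r) for a shift of `shift` samples."""
    mu = x.mean(axis=0)                 # mu_X(t) estimated across records
    n = x.shape[1] - shift
    return bool(np.allclose(mu[:n], mu[shift:shift + n], atol=tol))
```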
Similarly, a process is second-moment stationary if

$E[X(t_1 + r)X(t_2 + r)] = E[X(t_1)X(t_2)]$   (12.15a)

for any value of the shift r. For eq (12.15a) to be true, it is not difficult to see that the autocorrelation and covariance functions must not depend on the individual values of $t_1$ and $t_2$ but only on their difference $\tau = t_2 - t_1$, so that we can simply write

$R_{XX}(t_1, t_2) = R_{XX}(\tau), \qquad K_{XX}(t_1, t_2) = K_{XX}(\tau)$   (12.15b)

By the same token, for two stochastic processes X(t) and Y(t) we can speak of joint second-moment stationarity when $R_{XY}(t_1, t_2) = R_{XY}(\tau)$. At this point it is easy to extend these concepts and define, for a given process, covariant stationarity and mth-moment stationarity or, for two processes, joint covariant stationarity, etc. It must be noted that stationarity always reduces the number of necessary time arguments by one: i.e. in the general case the mean depends on one time argument, while for a stationary process it does not depend on time (zero time arguments); the autocorrelation depends on two time arguments in the general case and only on one time argument ($\tau$) in the stationary case, and so on.

Other forms of stationarity are defined in terms of probability distributions rather than in terms of moments. A process is first-order stationary if

$p_X(x; t) = p_X(x; t + r)$   (12.16)

for all values of x, t and r; second-order stationary if

$p_{XX}(x_1, x_2; t_1, t_2) = p_{XX}(x_1, x_2; t_1 + r, t_2 + r)$   (12.17)

for all values of $x_1, x_2, t_1, t_2$ and r. Similarly, the concept can be extended to mth-order stationarity, although the most important types in practical situations are first- and second-order stationarity. In general, a main distinction is made between strictly stationary processes and weakly stationary processes, strict stationarity meaning that the process is mth-order stationary for any value of m, and weak stationarity meaning that the process is mean-value and covariant stationary (note that some authors define weak stationarity as stationarity up to order 2).

If we consider the interrelationships among the various types of stationarity, for our purposes it suffices to say that mth-order stationarity implies all stationarities of lower order, while the same does not apply to mth-moment stationarity. Furthermore, mth-order stationarity also implies mth-moment stationarity so that, necessarily, an mth-order stationary process is also stationary up to the mth moment. Note, however, that it is not always possible to establish a hierarchy among different types of stationarity: for example, it is not possible to say which is stronger between second-moment stationarity and first-order stationarity, because they simply correspond to different behaviours. First-order stationarity certainly implies that all moments $E[X^m(t)]$—which are calculated by using $p_X(x, t)$—are invariant under a time shift, but it gives us no information about the relationship between $X(t_1)$ and $X(t_2)$ when $t_1 \ne t_2$.

Before turning to the issue of ergodicity, it is interesting to investigate some properties of the functions we have introduced above. The first property is the symmetry of the autocorrelation and autocovariance functions, i.e.

$R_{XX}(t_1, t_2) = R_{XX}(t_2, t_1), \qquad K_{XX}(t_1, t_2) = K_{XX}(t_2, t_1)$   (12.18)

which, whenever the appropriate stationarity applies, become

$R_{XX}(\tau) = R_{XX}(-\tau), \qquad K_{XX}(\tau) = K_{XX}(-\tau)$   (12.19)

meaning that autocorrelation and autocovariance are even functions of $\tau$. Also, if we note that $E\{[X(t) \pm X(t + \tau)]^2\} \ge 0$, we get $2R_{XX}(0) \pm 2R_{XX}(\tau) \ge 0$, from which it follows that

$|R_{XX}(\tau)| \le R_{XX}(0)$   (12.20)

for all $\tau$. Similarly, for all $\tau$,

$|K_{XX}(\tau)| \le K_{XX}(0) = \sigma_X^2$   (12.21)

where the equality is a direct consequence of the second of eqs (12.9) when stationarity applies. Moreover, it is not difficult to see that eq (12.8) now reads

$K_{XX}(\tau) = R_{XX}(\tau) - \mu_X^2$   (12.22a)

so that, as often happens in vibrations, if the process is stationary with zero mean, then $K_{XX}(\tau) = R_{XX}(\tau)$. When $K_{XX}(\tau) \to 0$ as $|\tau| \to \infty$ (see below), from eq (12.22a) it follows that

$\lim_{|\tau| \to \infty} R_{XX}(\tau) = \mu_X^2$   (12.22b)
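For a stationary ensemble, the symmetry (12.19) and the bound (12.20) can be illustrated numerically. In the sketch below (ours, not the book's) we also average over time, which anticipates the ergodicity assumption discussed shortly:

```python
import numpy as np

def R_tau(x: np.ndarray, lag: int) -> float:
    """Estimate R_XX(tau) for a stationary ensemble (records row-wise)
    by averaging x(t) x(t + tau) over both the ensemble and time."""
    if lag < 0:
        lag = -lag                      # even function, eq (12.19)
    prod = x[:, :x.shape[1] - lag] * x[:, lag:]
    return float(prod.mean())

# Numerical illustration of eqs (12.19) and (12.20): R_tau(x, -5) equals
# R_tau(x, 5) by construction, and for any lag the estimate satisfies
# abs(R_tau(x, lag)) <= R_tau(x, 0) up to estimation error.
```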
Two things should be noted at this point. First (Chapter 11), Gaussian random processes are completely characterized by the first two moments, i.e. by the mean value and the autocovariance or autocorrelation function. In particular, for a stationary Gaussian process all the information we need is the constant $\mu_X$ and one of the two functions $R_{XX}(\tau)$ or $K_{XX}(\tau)$. Second, for most random processes the autocovariance function rapidly decays to zero with increasing values of $\tau$ (i.e. $K_{XX}(\tau) \to 0$ as $|\tau| \to \infty$) because, as can be intuitively expected, at increasingly larger values of $\tau$ there is an increasing loss of correlation between the values of X(t) and $X(t + \tau)$. Broadly speaking, the rapidity with which $K_{XX}(\tau)$ drops to zero as $|\tau|$ is increased can be interpreted as a measure of the 'degree of randomness' of the process.

If two weakly stationary processes are also cross-covariant stationary, it can easily be shown that the cross-correlation functions $R_{XY}(\tau)$ and $R_{YX}(\tau)$ are neither odd nor even; in general $R_{XY}(\tau) \ne R_{XY}(-\tau)$ but, owing to the property of invariance under a time shift, they satisfy the relations

$R_{XY}(\tau) = R_{YX}(-\tau), \qquad K_{XY}(\tau) = K_{YX}(-\tau)$   (12.23)

while eq (12.12) becomes

$K_{XY}(\tau) = R_{XY}(\tau) - \mu_X \mu_Y$   (12.24)

The final property of cross-correlation and cross-covariance functions of stationary processes is the so-called cross-correlation inequalities, which we state without proof:

$R_{XY}^2(\tau) \le R_{XX}(0)\, R_{YY}(0), \qquad K_{XY}^2(\tau) \le K_{XX}(0)\, K_{YY}(0)$   (12.25)

(We leave the proof to the reader; the starting point is the fact that $E\{[aX(t) + Y(t + \tau)]^2\} \ge 0$, where a is a real number.)

Stated simply, a process is strictly ergodic if a single and sufficiently long time record can be assumed to be representative of the whole process. In other words, if one assumes that a sample function x(t)—in the course of a sufficiently long time T—passes through all the values accessible to it, then the process can reasonably be classified as ergodic. In fact, since T is large, we can subdivide our time record into a number n of long sections of time length Θ, so that the behaviour of x(t) in each section will be independent of its behaviour in any other section. These n sections then constitute as good a representative ensemble of the statistical behaviour of x(t) as any ensemble that we could possibly collect. It follows that time averages should then be equivalent to ensemble averages.

Assuming that a process is ergodic simplifies both the data acquisition phase and the analysis phase. In fact, on one hand we do not need to collect an ensemble of time histories—which is often difficult in many practical situations—and, on the other hand, the single time history at our disposal can be used to calculate all the quantities of interest by replacing ensemble averages with time averages, i.e. by averaging along the sample rather than across the number of samples that form an ensemble.

Ergodicity implies stationarity and hence, depending on the process characteristic we want to consider, we can define many types of ergodicity. For example, the process X(t) is ergodic in mean value if the expression

$\frac{1}{T} \int_0^T x(t)\, dt$   (12.26)

where x(t) is a realization of X(t), tends to E[X(t)] as $T \to \infty$. Mean-value stationarity is obviously implied (incidentally, note that the reverse is not necessarily true, i.e. a mean-value stationary process may or may not be mean-value ergodic, and the same applies for other types of stationarity) because the limit of (12.26) cannot depend on time and hence (eq (12.13))

$\langle x(t) \rangle = \mu_X$   (12.27)

Similarly, the process is second-moment ergodic if it is second-moment stationary and

$\langle x(t)\, x(t + \tau) \rangle = \lim_{T \to \infty} \frac{1}{T} \int_0^T x(t)\, x(t + \tau)\, dt = R_{XX}(\tau)$   (12.28)

These ideas can easily be extended because, for any kind of stationarity, we can introduce a corresponding time average and an appropriate type of ergodicity.
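The practical content of eqs (12.26)–(12.28) is that time averages along a single record may replace ensemble averages. A minimal sketch of ours, valid under the ergodicity assumption and for a record sampled at a fixed step:

```python
import numpy as np

def temporal_mean(record: np.ndarray) -> float:
    """Time average (1/T) * integral of x(t) dt, eq (12.26), for one record."""
    return float(record.mean())

def temporal_autocorr(record: np.ndarray, lag: int) -> float:
    """Time average of x(t) x(t + tau), the left-hand side of eq (12.28),
    for a non-negative lag expressed in samples."""
    return float(np.mean(record[:record.size - lag] * record[lag:]))

# For a mean-value (second-moment) ergodic process these time averages
# should approach mu_X (R_XX(tau)) as the record length grows, so they
# can be compared directly with the ensemble estimates computed earlier.
```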
There exist theorems which give necessary and sufficient (or simply necessary) conditions for ergodicity. We will not consider such mathematical details, which can be found in specialized texts on random processes, but only note that in common practice—unless there are obvious physical reasons not to do so—ergodicity is often tacitly assumed whenever the process under study can be considered stationary. Clearly, this is more an educated guess than a solid argument, but we must always keep in mind that in real-world situations the data at our disposal are very seldom in the form of a numerous ensemble or in the form of an extremely long time history. Stationarity, in turn—besides the fact that we can rely on engineering common sense in many cases of interest—can be checked by hypothesis testing, noting that, in general, it is seldom possible to test for more than mean-value and covariance stationarity. This can be done, for example, by subdividing our sample into shorter sections, calculating sample averages for each section and then examining how these section averages compare with each other and with the corresponding average for the whole sample.

[…]

…the conclusion that the number of upward crossings is directly proportional to T, so that we can write

$E[N_a^+(T)] = \nu_a^+ T$   (12.108)

where we interpret $\nu_a^+$ as the average frequency of upward crossings of the threshold x = a, i.e. the number of crossings per unit time. Now, by isolating a short section of a sample time history (say, of length dt, between the instants $t_0$ and $t_0 + dt$), let us consider a typical situation in which an upward crossing is very likely to occur. The first condition to be met is that at the beginning of the interval—i.e. at time $t_0$—we must have $x(t_0) < a$. […] For a narrow-band process, each upward crossing of the threshold x = a corresponds to one peak greater than a, so that the number of such peaks in the time interval T is given by $\nu_a^+ T$. Also, we can say that each upward crossing of the threshold x = 0 corresponds to one 'cycle' of our smoothly varying time history, so that there are, on average, $\nu_0^+ T$ 'cycles' in the time interval T. (Note that these assumptions are generally not true for wide-band processes, which have highly erratic time histories. In this circumstance it cannot be assumed that each upcrossing of the threshold corresponds to one peak (or maximum) only.)
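From a sampled record, the rate $\nu_a^+$ of eq (12.108) can be estimated simply by counting passages from below to above the threshold between consecutive samples. A minimal sketch of ours; it assumes the sampling step is fine enough that at most one crossing occurs per step:

```python
import numpy as np

def upcrossing_rate(x: np.ndarray, a: float, dt: float) -> float:
    """Estimate nu_a+, the average frequency of upward crossings of the
    threshold x = a (eq (12.108)), from a single sampled time history."""
    below = x[:-1] < a                  # below the threshold at step i
    above = x[1:] >= a                  # at or above it at step i + 1
    n_up = int(np.count_nonzero(below & above))
    T = (x.size - 1) * dt               # total record duration
    return n_up / T
```

For a narrow-band record, `upcrossing_rate(x, 0.0, dt)` estimates $\nu_0^+$, i.e. the average number of 'cycles' per unit time used below.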
Then, in the same interval, the favourable fraction of peaks greater than a can be expressed as the ratio $\nu_a^+ T / \nu_0^+ T = \nu_a^+ / \nu_0^+$ and

$P[\text{peak} > a] = 1 - P_p(a) = \frac{\nu_a^+}{\nu_0^+}$   (12.116)

Differentiating both sides with respect to a gives the desired result, i.e. the probability density function for the occurrence of peaks

$p_p(a) = -\frac{1}{\nu_0^+} \frac{d\nu_a^+}{da}$   (12.117)

If, in particular, the narrow-band process has a Gaussian distribution, we can use eq (12.113) to obtain

$p_p(a) = \frac{a}{\sigma_X^2} \exp\left(-\frac{a^2}{2\sigma_X^2}\right), \qquad a \ge 0$   (12.118)

where we took into account (eq (12.113)) that $\nu_a^+ = \nu_0^+ \exp(-a^2/2\sigma_X^2)$. The distribution of eq (12.118) is well known in probability theory and is called the Rayleigh distribution. From this result it is easy to determine the probability that a peak chosen at random will exceed the level a: this is

$P[\text{peak} > a] = \exp\left(-\frac{a^2}{2\sigma_X^2}\right)$   (12.119a)

or the probability that a peak chosen at random is less than the level a, i.e. the Rayleigh cumulative probability distribution

$P_p(a) = 1 - \exp\left(-\frac{a^2}{2\sigma_X^2}\right)$   (12.119b)

Although the Rayleigh distribution is widely used in a large number of practical problems, it must be noted that the distribution of peaks may differ significantly from eq (12.118) if the underlying probability distribution of the original process is not Gaussian. In these cases, the Weibull distribution generally provides better results. This distribution in its general form is a two-parameter distribution and is often found in statistics books written as

$P(x) = 1 - \exp\left[-\left(\frac{x}{\beta}\right)^{\alpha}\right]$   (12.120a)

where α is a parameter which determines the shape of the distribution and β is a scale parameter which determines the spread of the values. From eq (12.120a) the Weibull probability density function can be obtained by differentiating with respect to x (see the third of eqs (11.20)):

$p(x) = \frac{\alpha}{\beta} \left(\frac{x}{\beta}\right)^{\alpha - 1} \exp\left[-\left(\frac{x}{\beta}\right)^{\alpha}\right]$   (12.120b)

For our purposes, however, we can follow Newland and note that if we call $a_0$ the median (eq (11.45)) of the Rayleigh distribution (12.119b), we have $P_p(a_0) = 1/2$, so that $\exp(-a_0^2/2\sigma_X^2) = 1/2$, from which it follows that $a_0^2 = 2\sigma_X^2 \ln 2$. Substitution of this result into eq (12.119b) gives the Rayleigh distribution in the form

$P_p(a) = 1 - \left(\frac{1}{2}\right)^{(a/a_0)^2}$   (12.121)

which, in turn, is a special case of the one-parameter Weibull distribution (eqs (12.120) with $\alpha = k$ and $\beta = a_0 (\ln 2)^{-1/k}$), i.e.

$P_p(a) = 1 - \left(\frac{1}{2}\right)^{(a/a_0)^k}$   (12.122)

From eq (12.122) we obtain the Weibull pdf

$p_p(a) = \frac{k \ln 2}{a_0} \left(\frac{a}{a_0}\right)^{k-1} \left(\frac{1}{2}\right)^{(a/a_0)^k}$   (12.123)

which is sketched in Fig 12.9 for three different values of k, the case k = 2 representing the Rayleigh pdf. (The reader is invited to sketch a graph of the Weibull cumulative probability distributions of eq (12.122) for the same values of k.)
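Eqs (12.122) and (12.123) translate directly into code. The sketch below (ours, not the book's) evaluates the one-parameter Weibull peak statistics in median units $a_0$, for a > 0:

```python
import numpy as np

def weibull_peak_cdf(a: np.ndarray, a0: float, k: float) -> np.ndarray:
    """P_p(a) = 1 - (1/2)**((a/a0)**k), eq (12.122); k = 2 is Rayleigh."""
    return 1.0 - 0.5 ** ((a / a0) ** k)

def weibull_peak_pdf(a: np.ndarray, a0: float, k: float) -> np.ndarray:
    """p_p(a), eq (12.123): the derivative of eq (12.122) with respect to a."""
    u = (a / a0) ** k
    return (k * np.log(2.0) / a0) * (a / a0) ** (k - 1) * 0.5 ** u

def prob_peak_exceeds(a: float, a0: float, k: float) -> float:
    """P[peak > a] = (1/2)**((a/a0)**k), the complement of eq (12.122)."""
    return 0.5 ** ((a / a0) ** k)

print(prob_peak_exceeds(4.0, 1.0, k=1))  # 1/16: one peak in 16 exceeds 4*a0
print(prob_peak_exceeds(4.0, 1.0, k=2))  # 1/65536: the Rayleigh case
```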
At this point we may ask about the highest peak which can be expected within a time interval T. The average number of cycles in time T (and hence, for a narrow-band process, the average number of peaks) is given by $N = \nu_0^+ T$, where $\nu_0^+$ has been introduced above in this section.

[Fig 12.9 Weibull pdf for different values of k]

Noting that there is no loss of generality in considering the amplitude of peaks in median units, let us call A the (unknown) maximum peak amplitude expected, on average, in time T. In other words, we are putting ourselves in the situation in which the relation $N\, P[\text{peak} > A] = 1$ applies, which in turn implies the equation

$1 - P_p(A) = \frac{1}{N}$   (12.124)

Furthermore, we know from eq (12.116) that

$1 - P_p(A) = \frac{\nu_A^+}{\nu_0^+}$   (12.125a)

and from eq (12.122) that

$1 - P_p(A) = \left(\frac{1}{2}\right)^{A^k}$   (12.125b)

so that, equating eqs (12.125a) and (12.125b) and taking (12.124) into account, we get $(1/2)^{A^k} = 1/N$, from which it follows that

$A = (\log_2 N)^{1/k}$   (12.126a)

Finally, noting that A expresses the maximum amplitude in median units and can therefore be written as $A = a_{\max}/a_0$, where $a_{\max}$ is the maximum amplitude in its appropriate units, we get

$a_{\max} = a_0 (\log_2 N)^{1/k}$   (12.126b)

Equation (12.126b) is a general expression for narrow-band processes when we can reasonably assume that any upcrossing of the zero level corresponds to a full cycle (and hence to a peak), so that the average number of cycles (peaks) in time T is given by $N = \nu_0^+ T$. It is left to the reader to sketch a graph of eq (12.126b), plotting $a_{\max}/a_0$ as a function of the number of cycles N.

For example, if the peak distribution of our process is a Weibull distribution with k = 1, eq (12.126b) shows that, on average, a peak with an amplitude higher than four times the median can be expected every 16 cycles or, in other words, one peak out of 16 peaks will exceed, on average, four times the median. If, on the other hand, the peak distribution is a Rayleigh distribution, the average number of cycles needed to observe one peak higher than four times the median (i.e. $a_{\max}/a_0 = 4$) is given by $N = 2^{4^2} = 2^{16} = 65\ 536$ or, in other words, one peak out of approximately 65 500 peaks will exceed an amplitude of four times the median. Qualitatively, a similar result should be expected just by visual inspection of Fig 12.9, where we note that higher values of k correspond to more and more strongly peaked probability density functions in the vicinity of the median, and hence to lower and lower probabilities for the occurrence of peak values significantly different from $a_0$. The interested reader can find further developments along this line of reasoning, for example, in Newland [8] or Sólnes [10].
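Eq (12.126b) is easy to evaluate numerically; the short sketch below (ours) reproduces the two examples just discussed:

```python
import numpy as np

def expected_max_peak(a0: float, n_cycles: float, k: float) -> float:
    """a_max = a0 * (log2 N)**(1/k), eq (12.126b): the amplitude exceeded,
    on average, by one peak in N cycles of a narrow-band process."""
    return a0 * np.log2(n_cycles) ** (1.0 / k)

print(expected_max_peak(1.0, 16, k=1))      # 4.0: one peak in 16 exceeds 4*a0
print(expected_max_peak(1.0, 65536, k=2))   # 4.0: Rayleigh, one in ~65 500
```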
12.6.3 Notes on fatigue damage due to random excitation

Fatigue is the process by which the strength of a structural member is degraded by the cyclic application of load (stress) or strain, so that the fatigue load a structure can withstand is often significantly less than the load it could withstand if the same load were applied only once. Broadly speaking, fatigue failure is caused by the gradual propagation of cracks in regions of high stress, and the whole process can be divided into three main phases: crack initiation, crack growth and final failure. Although there exist some general guidelines, there is no clear distinction between the various phases and, as a matter of fact, traditional fatigue analysis makes no distinction between fatigue crack initiation and crack growth to failure. Moreover, it seems that none of the theories which have been developed to describe the actual mechanism of crack initiation is universally accepted.

Experimentally, the most important techniques of fatigue testing consist of applying a constant-amplitude, periodically varying load to a test specimen of the material to be tested and estimating its 'fatigue life' by counting the number of cycles to failure ($N_f$). In this situation, the fatigue life depends significantly on only two characteristics of the stress time history: the stress range (maximum stress minus minimum stress) and the mean stress value, the effect of the former being generally more important than the effect of the latter. If, as is often the case, we assume for the moment a zero mean stress value and consider only the stress range S and the number of cycles to failure $N_f$, then a typical experimental test leads to the so-called S–N curve (or Wöhler fatigue curve), which is essentially a plot of log S versus log $N_f$. Analytical approximations of such graphs generally have the form

$N_f S^m = b$   (12.127)

where b and m are positive constants whose values depend on both the material and the geometry of the specimen. More specifically, eq (12.127) does not apply for all values of S, and we can distinguish between two regimes of material behaviour: high-cycle fatigue, in which eq (12.127) applies and failure occurs in excess of approximately $10^3$ cycles (e.g. ASTM Standard E468 [11]), and low-cycle fatigue, in which failure occurs in relatively few cycles (less than approximately $10^3$).
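A minimal sketch of the S–N relation, assuming the power-law form $N_f S^m = b$ given above and purely illustrative (made-up) constants b and m, which in practice must be fitted to test data:

```python
def cycles_to_failure(S: float, b: float, m: float) -> float:
    """N_f = b / S**m, from the S-N relation (12.127): a straight line of
    slope -1/m on the log S versus log N_f plot. The constants b and m
    are hypothetical here and depend on material and specimen geometry."""
    return b / S ** m

# Doubling the stress range divides the fatigue life by 2**m.
print(cycles_to_failure(100.0, b=1e12, m=3))   # 1.0e6 cycles
print(cycles_to_failure(200.0, b=1e12, m=3))   # 1.25e5 cycles
```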
