Lifetime-Oriented Structural Design Concepts - P20
4 Methodological Implementation

$$\Omega = \begin{bmatrix} \ddots & & \\ & -\delta_i - \mathrm{i}\,\omega_i & \\ & & \ddots \end{bmatrix} \qquad (4.315)$$

and the complex conjugate eigenvector matrix

$$\Psi = \begin{bmatrix} \Phi & \Phi^{*} \\ \Phi\,\Omega & \Phi^{*}\Omega^{*} \end{bmatrix} \qquad (4.316)$$

including the complex eigenvector matrix

$$\Phi = \begin{bmatrix} \ldots & \phi_i^{r} - \mathrm{i}\,\phi_i^{i} & \ldots \end{bmatrix}. \qquad (4.317)$$

Unfortunately, the solution of the subspace identification for the system matrix $A$ (4.283) is discrete-time and includes an arbitrary state space transformation. The complex circular frequency matrix $\Lambda$ (4.314), obtained from the solution of the special eigenvalue problem (4.313), is a diagonal matrix and independent of the state space transformation. The continuous-time complex conjugate circular frequency matrix

$$\bar{\Lambda} = \frac{1}{\Delta t}\,\ln(\Lambda) \qquad (4.318)$$

can be calculated with the matrix logarithm $\ln(\bullet)$ (see [228]). The complex eigenvector matrix $\Phi$ (4.317) is similar in discrete time and continuous time but includes an arbitrary state space transformation. Therefore, the transformed complex eigenvector matrix

$$\hat{\Phi} = C \begin{bmatrix} \Phi \\ \Phi\,\Omega \end{bmatrix} \Omega^{-2} \qquad (4.319)$$

is calculated with the output matrix $C$, which includes the same state space transformation (see e.g. [615]).

4.4 Reliability Analysis

Authored by Dietrich Hartmann, Yuri Petryna and Andrés Wellmann Jelic

The computer-based determination and analysis of structural reliability aims at the realistic spatiotemporal assessment of the probability that given structural systems will adequately perform their intended functions subject to the existing environmental conditions. Thus, the computation of time-variant, but also time-invariant, failure probabilities of a structure, in total or in parts, governs the reliability analysis approach. In addition, based on the success gained in reliability analysis in the past, it has become increasingly popular to extend the reliability analysis to a reliability-based optimum design.
In this way, structural optimization, which in sensitive cases often leads to infeasible structural layouts, naturally incorporates probabilistic effects into the optimization variables, criteria and constraints, yielding realistic optimization models. Since various modes of failure are customarily possible, a formidable task has to be solved, in particular for large and complex structural systems. In the following sections the currently most powerful approaches in reliability analysis are described to demonstrate their potential.

4.4.1 General Problem Definition

Authored by Dietrich Hartmann, Yuri Petryna and Andrés Wellmann Jelic

A reliability analysis aims at quantifying the reliability of a structure, accounting for uncertainties inherent to the model properties (material parameters, geometry) as well as to the environmental data (loads). The quantification is achieved by estimating the failure probability $P_f$ of the structure. For problems in the scope of civil engineering, this probability depends on the random nature of the stress values $S$ as well as the resistance values $R$, as depicted in Figure 4.103. Structural failure at the particular time $T$ is defined by the event $R < S$, leading to a failure probability

$$P_f = P(R < S) = \sum_{\text{all } s} P(R < S \mid S = s)\, P(S = s)\,, \qquad (4.320)$$

where the quantities $r$, $s$ are realizations of the random values $R$ and $S$, respectively.

[Figure 4.103 shows the overlapping density functions $f_S(s)$ and $f_R(r)$ with mean values $\mu_s$ and $\mu_r$ and the failure domain in which they intersect.]

Fig. 4.103. General definition of the failure domain depending on scattering resistance (R) and stress (S) values

By assuming an uncorrelated relationship and continuous distribution functions of the random values $R$ and $S$, the formulation in eq.
(4.320) can be simplified to

$$P_f = \int_0^{\infty} F_R(s)\, f_S(s)\, ds \qquad (4.321)$$

where $F_R(s)$ and $f_S(s)$ represent the cumulative distribution function (cdf) of the resistance values and the probability density function (pdf) of the stress values, respectively. Mathematically, the random nature of the values $R$ and $S$ is modelled in terms of a vector of random variables $X = \{X_1, X_2, \ldots, X_d\}^T$ and the corresponding joint probability density function

$$f_{\mathbf{x}}(\mathbf{x}) = \frac{P(x_d < X_d \le x_d + dx_d)}{dx_d}\,. \qquad (4.322)$$

In this context, the parameter $d$ quantifies the total number of random variables, which corresponds to the stochastic dimension of the structural reliability problem under investigation. By using the joint probability density function in eq. (4.322), the failure probability in eq. (4.321) is reformulated as

$$P_f = P[g(X) \le 0] = \int_{g(\mathbf{x},T) \le 0} f_{\mathbf{x}}(\mathbf{x})\, d\mathbf{x}\,. \qquad (4.323)$$

Here $g(\mathbf{x},T) = 0$ is the relevant time-dependent limit state function for a prescribed failure criterion; it divides the safe state from the unsafe state as follows:

$$g(\mathbf{x},T) \begin{cases} \le 0 & \text{failure} \\ > 0 & \text{survival.} \end{cases} \qquad (4.324)$$

In general, multiple distinct limit state functions may be defined. In the following, however, only one limit state function is considered for better readability. By solving the multidimensional integral in eq. (4.323), an estimate of the failure probability at the point in time $T$ is obtained. In addition to the time-dependent formulation of the limit state function in eq. (4.324), the stress values $S$ as well as the resistance values $R$ of certain structural problems may themselves exhibit a significant time dependency. As a consequence, the formulation of the resulting failure probability has to incorporate this time dependency as follows:

$$P_f(t) = P(R(t) < S(t)) = F_T(t)\,. \qquad (4.325)$$

Thus, by evaluating this failure probability for multiple discrete time points $t_i$, the evolution of $P_f(t)$ can be estimated.
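As a quick plausibility check, the one-dimensional integral in eq. (4.321) can be evaluated numerically. The sketch below assumes independent normal distributions for $R$ and $S$ with invented parameters, chosen because the exact result $\Phi(-\beta)$ with $\beta = (\mu_R - \mu_S)/\sqrt{\sigma_R^2 + \sigma_S^2}$ is then available for comparison.

```python
import math

def norm_cdf(x, mu=0.0, sigma=1.0):
    # cumulative distribution function of a normal variable
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def norm_pdf(x, mu=0.0, sigma=1.0):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def failure_probability(mu_r, sig_r, mu_s, sig_s, n=40000):
    # trapezoidal evaluation of P_f = integral of F_R(s) * f_S(s) ds (eq. 4.321)
    lo, hi = mu_s - 8.0 * sig_s, mu_s + 8.0 * sig_s
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        s = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * norm_cdf(s, mu_r, sig_r) * norm_pdf(s, mu_s, sig_s)
    return total * h

# illustrative parameters (assumed): resistance R ~ N(10, 1), stress S ~ N(6, 1.5)
mu_r, sig_r, mu_s, sig_s = 10.0, 1.0, 6.0, 1.5

# exact reference value for independent normals: Phi(-beta)
beta = (mu_r - mu_s) / math.sqrt(sig_r ** 2 + sig_s ** 2)
exact = norm_cdf(-beta)
numeric = failure_probability(mu_r, sig_r, mu_s, sig_s)
```

With these assumed parameters the numerical quadrature and the closed-form value agree to several digits, which is a useful sanity check before moving to problems where no closed form exists.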
Following the differentiation explained above between time-variant and time-invariant modelling of reliability problems, the corresponding solution methods are likewise presented separately in the following.

4.4.2 Time-Invariant Problems

Authored by Dietrich Hartmann, Yuri Petryna and Andrés Wellmann Jelic

Existing methods for solving time-invariant reliability problems can be separated into three main groups: analytical solutions, approximation methods and simulation methods. In the first group an analytical, closed-form solution of the multidimensional integral in eq. (4.323) is sought. However, this approach is only feasible for low-dimensional reliability problems with a small number of random variables. As structural reliability problems in civil engineering typically comprise several random variables together with nonlinear limit state functions, this analytical approach cannot be applied. For the analysis and solution of structural reliability problems, the approximation and simulation methods are the most favorable, so they are explained in more detail.

4.4.2.1 Approximation Methods

Well-developed methods for approximating the failure probability are FORM and SORM (First-Order and Second-Order Reliability Methods). These are analytical solutions converting the integration into an optimization problem. In order to simplify the calculation, the distribution functions of the random variables and the limit state function are transformed into a standardized Gaussian space, as outlined in Figure 4.104. This transformation is defined via the cumulative distribution function

$$F_{X_i}(x_i) = \Phi(y_i) \qquad (4.326)$$

where the $y_i$ are the transformed and standardized Gaussian variables, leading to

$$y_i = \Phi^{-1}\left(F_{X_i}(x_i)\right)\,.$$
(4.327)

This transformation leads to a nonlinear limit state function

$$h(\mathbf{y}) = g\left(F_{X_1}^{-1}(\Phi(y_1)), \ldots, F_{X_m}^{-1}(\Phi(y_m))\right) \qquad (4.328)$$

in almost all cases.

[Figure 4.104 shows the standardization of an exemplary 2D joint distribution: the original variables $X_1$, $X_2$ with limit state function $g(X)$ are mapped onto the standardized variables $Y_1$, $Y_2$ with limit state function $h(Y)$.]

Fig. 4.104. Standardization of an exemplary 2D joint distribution function for a subsequent FORM/SORM analysis

FORM and SORM now simplify these functions by calculating linear and quadratic tangential surfaces, respectively. These surfaces are fitted at the so-called design point $y^{*}$. This point of the limit state function $h(\mathbf{y})$ is defined via the shortest distance (in FORM)

$$\delta = \frac{h(\mathbf{y}) - \sum_{j=1}^{m} y_j\, \dfrac{\partial h}{\partial y_j}}{\left[\sum_{j=1}^{m} \left(\dfrac{\partial h}{\partial y_j}\right)^2\right]^{1/2}} \qquad (4.329)$$

between $h(\mathbf{y})$ and the coordinate origin of the standardized Gaussian space. From this distance measure the safety index

$$\beta = \begin{cases} +\delta, & h(\mathbf{0}) > 0 \\ -\delta, & h(\mathbf{0}) < 0 \end{cases} \qquad (4.330)$$

is derived. This leads to a simplified formulation of the failure probability

$$P_f \approx \Phi(-\beta) \qquad (4.331)$$

in FORM and to

$$P_f = \Phi(-\beta) \prod_{i=1}^{m-1} (1 - \beta\,\kappa_i)^{-1/2} \qquad (4.332)$$

in SORM.

A main computational task in these methods is finding the design point by means of suitable search algorithms. Conceptually simple analytical algorithms, like the Hasofer-Lind algorithm [356] or the derived Rackwitz-Fiessler algorithm [654], were developed first and are still used for well-behaved reliability problems. As the search for the design point $y^{*}$ can be formulated as an optimization problem, gradient-based optimization strategies like the Sequential Quadratic Programming method (SQP) [64] are also heavily employed. More detailed information on FORM/SORM and, particularly, on further developed derivatives of these methods is presented in [653].

A critical view of these approximation methods leads to the following statements. In general, these methods only approximate the failure probability sought, ignoring existing non-linearities, e.g.
in the limit state function, such that a significant error may be introduced while only poor information about this possible error is provided. Furthermore, the identification of the design point $y^{*}$ by means of optimization strategies, as explained above, may deliver only a local minimum of the limit state function, possibly missing the global minimum. Another disadvantage is the low numerical efficiency when solving high-dimensional structural reliability problems (high-dimensional in terms of the number $d$ of random variables). This low efficiency results from the computation of gradients at multiple points of the limit state function, as this computation itself comprises, in most cases, multiple FE analyses. In this context, the authors in [711] state a lack of robustness, accuracy and competitiveness compared to simulation methods for $d > 30$. An exemplary analysis of the influence of the number $d$ on the results is given in [653].

Conversely, the approximation methods are well suited for the computation of small failure probabilities, say $P_f \le 10^{-6}$, when reliability problems with a small number $d$ of random variables are to be analyzed. Problems with multiple limit state functions (unions or intersections of failure domains) can also be analyzed very efficiently when extended versions of FORM/SORM, as summarized in [653], are employed. Due to this high efficiency for low-dimensional problems (in terms of random variables), these approximation methods are widely used in the scope of Reliability-Based Design Optimization (RBDO, see Section 4.5), as stated in [300, 519].

4.4.2.2 Simulation Methods

Authored by Dietrich Hartmann, Andrés Wellmann Jelic and Yuri Petryna

In contrast to the approximation methods named above, the class of Monte Carlo Simulations (MCS) has to be mentioned. These methods use the given density functions to create multiple sets of realizations of all random variables.
For each set of realizations, a deterministic analysis of the limit state function $g(\mathbf{x})$ is performed, in civil engineering predominantly a structural analysis using the Finite Element Method. Afterwards, the results are evaluated with respect to failure or survival. In order to simplify the description of the analysis results, an indicator function

$$I(g(\mathbf{x})) = \begin{cases} 1, & g(\mathbf{x}) < 0 \\ 0, & g(\mathbf{x}) \ge 0 \end{cases} \qquad (4.333)$$

is used. This leads to an alternative formulation of the failure probability in eq. (4.323):

$$P_f = \int_{-\infty}^{\infty} I(g(\mathbf{x}))\, f_X(\mathbf{x})\, d\mathbf{x}\,. \qquad (4.334)$$

In a discrete simulation this can be reduced to the finite sum

$$P_f = \frac{1}{n} \sum_{i=1}^{n} I[g(\mathbf{x}_i) < 0] \qquad (4.335)$$

with $n$ describing the number of simulations and $\mathbf{x}_i$ the $i$-th set of generated realizations.

The big disadvantage of the classical Monte Carlo Simulation is that the accuracy of the estimated results is proportional to $1/\sqrt{n}$. Therefore, an increase of accuracy by one order of magnitude demands an increase in the number of executed simulations by around two orders of magnitude. The main reason is the clustered generation of realizations of the random variables near their mean values. As the failure probabilities demanded in structural engineering are very small, an uneconomic number of simulations would have to be performed in order to obtain good estimates. Consequently, the class of variance-reducing methods has been developed based on the classic Monte Carlo Simulation. Some variants are e.g. Importance Sampling, Stratified Sampling or Adaptive Sampling; more details can be found in [159, 706].

4.4.2.2.1 Importance Sampling

As representative of the above-named variance-reducing simulation methods, the main principles of Importance Sampling are explained briefly. The Importance Sampling method moves the main generation point for realizations near the design point $y^{*}$, shown in eq. (4.329), and then defines a new simulation density function $h(\mathbf{v})$ in $y^{*}$.
This expands the integral in eq. (4.323) to

$$P_f = \int \cdots \int I(\mathbf{v})\, \frac{f_{\mathbf{x}}(\mathbf{v})}{h_V(\mathbf{v})}\, h_V(\mathbf{v})\, d\mathbf{v}\,. \qquad (4.336)$$

Hence, the failure probability can be estimated with

$$P_f = \frac{1}{m} \sum_{n=1}^{m} I(\mathbf{v}_n)\, \frac{f_{\mathbf{x}}(\mathbf{v}_n)}{h_V(\mathbf{v}_n)} \qquad (4.337)$$

using $m$ simulation runs and the samples $\mathbf{v}_n$ drawn from $h$. In order to calculate accurate estimates of the failure probability, a good choice of the sampling density $h(\mathbf{v})$ is essential. The variance of eq. (4.337) is

$$\mathrm{Var}[P_f] = \frac{1}{m-1} \left[ \frac{1}{m} \sum_{n=1}^{m} I(\mathbf{v}_n) \left( \frac{f_{\mathbf{x}}(\mathbf{v}_n)}{h_V(\mathbf{v}_n)} \right)^2 - P_f^2 \right], \qquad (4.338)$$

leading to a coefficient of variation

$$\upsilon_{P_f} = \frac{\left(\mathrm{Var}[P_f]\right)^{1/2}}{P_f}\,. \qquad (4.339)$$

The exact solution for $P_f$ is obtained for a definition of $h_V(\mathbf{v})$ proportional to the real density function $f_X(\mathbf{v})$, which, however, implies knowledge of the very probability being sought. Instead, [760] proposes the use of the original density function $f_V(\mathbf{v})$, a normal or a uniform distribution. The design point $y^{*}$ can be determined from a previously executed FORM or SORM calculation, respectively.

4.4.2.2.2 Latin Hypercube Sampling

Stochastic modelling of random values for reliability analysis using direct Monte-Carlo simulation requires a huge number of samples if the failure probability is small. In other cases, one requires solely statistical characteristics of the structural response, such as displacements or stresses, estimated over a certain range of input values. Sometimes, critical parameter combinations corresponding to structural failure conditions, i.e. to limit states, are of interest. For those purposes, the number of required simulations can be significantly reduced by special simulation techniques such as Latin Hypercube Sampling (LHS) [285].

According to the direct Monte-Carlo approach, uniformly distributed random values $x_k$, $k = 1, 2, \ldots$ are generated within the probability range $[0, 1]$ and then transformed into the actual random samples of a certain variable $X_k$ by means of its inverse probability function $X_k = F_X^{-1}(x_k)$.
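The inverse-transform step $X_k = F_X^{-1}(x_k)$ can be sketched for a distribution whose cdf inverts in closed form; the exponential distribution and its rate below are illustrative assumptions, since any invertible cdf works the same way.

```python
import math
import random

random.seed(1)

# map uniform samples u in [0, 1) onto an exponential distribution with
# rate lam; the exponential cdf F(x) = 1 - exp(-lam * x) inverts in closed
# form (the choice of distribution and rate is an assumption for illustration)
lam = 2.0

def inv_cdf_exponential(u):
    # F^{-1}(u) = -ln(1 - u) / lam
    return -math.log(1.0 - u) / lam

samples = [inv_cdf_exponential(random.random()) for _ in range(100_000)]
mean = sum(samples) / len(samples)   # should approach E[X] = 1/lam = 0.5
```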
In such a way, a uniform distribution of the $x_k$ can be "mapped" onto an arbitrary distribution function of interest. For the most practically important statistical distributions, like the normal distribution, the probability density function (PDF) is concentrated more or less around the mean value. Thus, rare values $X_k$ corresponding to the tails of the PDF can be reliably generated only with a large number of Monte-Carlo simulations.

The main idea of Latin Hypercube Sampling is to divide the probability range $[0, 1]$ into a number $N_{sim}$ of equal intervals and take their centroids, randomly permutated, for the mapping onto the probability function of interest (Figure 4.105). Here, $N_{sim}$ denotes the number of simulations, i.e. the size of the set of samples. It is evident that LHS covers the entire range of values much better than direct MCS for the same, relatively small number $N_{sim}$. The applications of LHS to stochastic structural analysis in [474, 624] confirm that already ten to a hundred LHS simulations provide acceptable results.

[Figure 4.105 shows, for Monte-Carlo Simulation and Latin Hypercube Sampling, the mapping of samples $x_k$ through the probability function $F_X(x)$ onto random values $X_k$, the resulting random sets over the simulation number, and histograms of the generated samples for both methods.]

Fig. 4.105.
Comparison of Latin Hypercube Sampling and Monte-Carlo Simulation

Figure 4.105 illustrates the difference between LHS and MCS by means of $N_{sim} = 50$ simulations: the histogram of the random set of a Gaussian variable generated by LHS is almost ideal compared to that generated by MCS using the same $N_{sim}$. Latin Hypercube Sampling has been successfully applied in [624] to the stochastic sensitivity analysis of a reinforced/prestressed concrete bridge and to the calculation of the limit state points in the framework of the Response Surface Method (Chapter 4.4.2.3).

4.4.2.2.3 Subset Methods

A novel and very promising simulation method called Subset Simulation (SubSim) has been proposed by Au & Beck in [67] for estimating small $P_f$ values. This method reduces the numerical effort compared to direct MCS by expressing small failure probabilities as a product of larger, conditional probabilities. These conditional probabilities are estimated for decreasing intermediate failure events (subsets) $\{F_i\}_{i=1}^{m}$ such that

$$F_1 \supset F_2 \supset \cdots \supset F_m = F \qquad (4.340)$$

with $F$ defined as the main failure event. Consequently, the failure probability sought,

$$P_f = P(F_m) = P(F_1) \prod_{i=1}^{m-1} P(F_{i+1} \mid F_i)\,, \qquad (4.341)$$

is defined as the product of all conditional failure probabilities $P(F_{i+1} \mid F_i)$. By selecting the intermediate failure events $F_i$ appropriately, large corresponding failure probability values are achieved, such that they can be computed efficiently by direct Monte Carlo estimators. Three variants of Subset Simulation have been developed so far, namely SubSim/MCMC, SubSim/Splitting and SubSim/Hybrid. All variants are based on the same adaptive simulation procedure; however, they differ in the generation of the next conditional sample when reaching an intermediate failure event. A general summary of these variants together with their application in the context of a benchmark study is given in [68].
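The product decomposition of eq. (4.341) can be illustrated on a toy one-dimensional problem where all probabilities are known in closed form; the standard normal variable and the thresholds below are assumptions for demonstration only.

```python
import math

def tail(b):
    # P(X > b) for a standard normal variable X
    return 0.5 * math.erfc(b / math.sqrt(2.0))

# nested intermediate failure events F_i = {X > b_i} with b_1 < b_2 < b_3,
# so that F_1 contains F_2 contains F_3 = F (thresholds are assumed, for illustration)
b = [1.0, 2.0, 3.0]

p_f_direct = tail(b[-1])        # P(F), the small target probability
p_f_product = tail(b[0])        # P(F_1)
for i in range(len(b) - 1):
    # conditional probability P(F_{i+1} | F_i) = P(X > b_{i+1}) / P(X > b_i)
    p_f_product *= tail(b[i + 1]) / tail(b[i])
```

The chain of conditional factors reproduces the target value exactly; the point of SubSim is that each factor is of order 0.1 to 0.2 and can therefore be estimated from a modest number of conditional samples, whereas estimating the final probability of roughly 1.3e-3 by crude MCS would require tens of thousands of runs.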
This benchmark study on reliability estimation for structural systems in higher dimensions has been organized since 2004 by the Institute of Engineering Mechanics, University of Innsbruck, Austria (Prof. Schuëller, Prof. Pradlwarter). The intermediate results of this benchmark, presented in [710], attest a general applicability together with a very high computational efficiency to almost all Subset Simulation variants.

4.4.2.3 Response Surface Methods

The reliability assessment of structures is usually focused on the evaluation of the failure probability:

$$p_f = P\left[g(X) \le 0\right] = \int_{g(X) \le 0} f_{\mathbf{x}}(X)\, dX\,. \qquad (4.342)$$

This task includes, on the one hand, an expensive structural analysis for the determination of the limit state function $g(X)$ and, on the other hand, the solution of the multi-dimensional integral (4.342). The reliability analysis of large structures imposes a number of typical difficulties influencing the choice of an appropriate approach to calculate failure probabilities:

• the need for multiple and expensive nonlinear analyses of the entire structure;
• generally implicit limit state functions, which can be determined only for discrete values of the governing parameters;
• the vagueness about the parameters dominating the failure probability.

The Response Surface Method (RSM) [271, 160, 712], combined with efficient Monte-Carlo simulations [712], helps to avoid at least the first two difficulties. Its application includes in general the following steps. First, the limit state points are determined by means of deterministic nonlinear structural simulations, as described above. The corresponding critical values of the random variables $X^{(k)}$ belong to the implicit limit state function $g(X) = 0$ and satisfy global equilibrium of the system, for example in the static case:

$$F_I(u, X^{(k)}) = \lambda P(X^{(k)}) \;\rightarrow\; X^{(k)} \;\big|\; g(X^{(k)}) = 0\,. \qquad (4.343)$$

[...]
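The surrogate idea behind the RSM can be sketched in one dimension: fit an inexpensive polynomial to a few evaluations of the limit state function, then run the Monte Carlo estimator of eq. (4.335) on the surrogate. The "true" function below is an invented quadratic, which makes the fitted surface exact; for a real implicit limit state determined by FE analyses the fit would only be approximate.

```python
import numpy as np

rng = np.random.default_rng(0)

# "expensive" limit state function standing in for a nonlinear FE analysis
# (a purely illustrative assumption, not taken from the source)
def g_true(x):
    return 3.0 - x - 0.1 * x ** 2

# step 1: evaluate the limit state at a few support points only
support = np.linspace(-4.0, 4.0, 7)
g_support = g_true(support)

# step 2: fit a quadratic response surface to the support points
g_surrogate = np.poly1d(np.polyfit(support, g_support, deg=2))

# step 3: cheap Monte Carlo on the surrogate, with X ~ N(0, 1)
x = rng.standard_normal(200_000)
p_f_surrogate = np.mean(g_surrogate(x) <= 0.0)

# reference: Monte Carlo on the "expensive" function with the same samples
p_f_true = np.mean(g_true(x) <= 0.0)
```

Once the surrogate is fitted, each of the 200 000 surrogate evaluations costs a polynomial evaluation instead of a structural analysis, which is the whole economy of the method.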
4.5 Optimization and Design

Authored by Dietrich Hartmann

As demonstrated in Section 4.1.4, structural design problems can be transformed into equivalent structural optimization problems, because 'optimizing something' is always inherent in design. Naturally, only the numerically representable aspects of a design problem can be captured by such a conversion. It is customary to divide structural optimization [...] aforementioned design variables, optimization criterion and constraints:

• design variables $x_i$, $i = 1, 2, 3, \ldots, n$, representing the vital parameters of a structural system, concentrated in the design vector $\mathbf{x}$;
• optimization criterion or objective function, introduced as $f(\mathbf{x})$ and, as a rule, being a non-linear function of the design variables $x_i$, $i = 1, 2, 3, \ldots, n$. In some cases of optimum design the [...]

[...] unacceptable and infeasible designs in practice and to achieve robust optimum designs, structural reliability must be incorporated into the optimization model, taking into account the governing uncertainties with respect to the structural parameters, the geometry of the structural system and the loading. As expedited in the previous section on uncertainties, here solely stochastic structural uncertainties are [...]

[...] components are a function of the design variables $\mathbf{x}$; generalized $\mathbf{u}(\mathbf{x})$ = displacement vector, also being a function of the vector $\mathbf{x}$ and describing the basic constituent of the structural response $\mathbf{r}$; $\mathbf{F}(\mathbf{x})$ = generalized load vector as a function of the design variables $\mathbf{x}$.

b. System equations of motion, if dynamic effects need to be included in the design:

$$M(\mathbf{x})\,\ddot{\mathbf{u}}(\mathbf{x}) + D(\mathbf{x})\,\dot{\mathbf{u}}(\mathbf{x}) + \ldots$$

[...] are, however, sizing and shaping of structural systems and their structural parts,
is fixed already and must remain unaltered As a result, material optimization is disregarded such that only size and shape optimization is dealt with In lifetime-oriented design, size as well as shape optimization... ⎠ j=1 and λj = estimates of the Lagrange multipliers Using the SQP approach gives rapidly an optimum design vector for structural design, provided that continuous functions can be assumed 4.5 Optimization and Design 555 4.5.3.2 Derivative-Free Strategies In practical engineering problems, where the design optimization must be based on nonlinear mechanics simulation methods, e.g nonlinear transient... what types of design variables are to be optimized Further characteristics are then catenated to the methodology applied within the iterative optimization as structural analysis kernel From the previous remarks it is known that different finite computational methods may be chosen Details of the canonical classification in optimum design are, therefore, resolved in the next subsection Once a structural optimization... and a further amplification by means of stochastic quantities leading to stochastic structural optimization Of course, the 4.5 Optimization and Design 551 computational effort increases considerable, however, in particular in lifetimeoriented design the enhancements are clearly indispensable Modification of the deterministic structural optimization model requires to introduce stochastic parameters (basic... parameters in structural analysis are not known exactly and thus introduce uncertainties Those of the input information can be generally classified into load, material and structural uncertainties Additionally, we must account for model uncertainties and output uncertainties mirrored in structural response variables We consider in this contribution only stochastic approaches to handle uncertainties in structural. .. 
[...] relevant components of the constraint vector $\mathbf{g}(\mathbf{x}) \le \mathbf{0}$ only indirectly depend on their design variables. Vice versa, computing an implicit constraint, say the $j$-th constraint $g_j(\mathbf{x}) \le 0$, induces a complete and probably computationally extensive structural analysis of the total structure for the current design vector $\mathbf{x}$. It is readily identifiable that this forms a serious obstacle because of [...]
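The SQP approach referred to in this section can be sketched with an off-the-shelf solver. The two-variable sizing problem below is entirely an illustrative assumption (objective, constraint and bounds are invented), and the analytic constraint merely stands in for what would, in practice, be an implicit constraint requiring a full structural analysis per evaluation.

```python
from scipy.optimize import minimize

# illustrative sizing sketch (all numbers assumed): minimize the material
# "volume" f(x) = x1 + x2 of two members subject to one implicit-style
# constraint; in a real application each constraint evaluation would
# trigger a structural analysis for the current design vector x
def objective(x):
    return x[0] + x[1]

def constraint(x):
    # SLSQP expects inequality constraints in the form c(x) >= 0;
    # the requirement x1 * x2 >= 2 stands in for a stress/displacement check
    return x[0] * x[1] - 2.0

res = minimize(
    objective,
    x0=[3.0, 3.0],
    method="SLSQP",          # a sequential quadratic programming variant
    bounds=[(0.1, 10.0), (0.1, 10.0)],
    constraints=[{"type": "ineq", "fun": constraint}],
)
# by symmetry the optimum is x1 = x2 = sqrt(2), with f = 2*sqrt(2)
```

The solver only ever calls `objective` and `constraint` as black boxes, which is exactly why the per-call cost of an implicit structural constraint dominates the overall effort of such an optimization.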
