
Physics-informed neural networks for the analysis and optimization of structures





Physics-informed neural networks for the analysis and optimization of structures

MAI TIEN HAU

February 2023

Department of Architectural Engineering
The Graduate School
Sejong University

A dissertation submitted to the Faculty of Sejong University in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Architectural Engineering.

Approved by Professor Jaehong Lee, Major Advisor
Approved by Professor Kihak Lee, Chair of the committee
Approved by Professor Dongkyu Lee, Member of the dissertation committee
Approved by Professor Seunghye Lee, Member of the dissertation committee
Approved by Professor JongJae Lee, Member of the dissertation committee

ABSTRACT

This thesis is concerned with the nonlinear analysis, stability analysis, and size optimization of truss structures based on physics-informed neural networks (PINNs). For nonlinear analysis, a robust and simple unsupervised neural network framework is proposed to perform the geometrically nonlinear analysis of inelastic truss structures. To guide the training process, a loss function built from the total potential energy principle under boundary conditions (BCs) is minimized in the suggested NN model, whose weights and biases are treated as design variables; the training data contain only the spatial coordinates of the joints. In each training iteration, a feedforward pass, the physical laws, and back-propagation are applied to adjust the parameters of the network so as to minimize the loss function. Once the network is properly trained, the mechanical responses of inelastic structures can be obtained easily without any conventional structural analysis or incremental-iterative algorithms. Several benchmark examples concerning the geometrically and materially nonlinear analysis of truss structures are examined to demonstrate the effectiveness and reliability of the proposed paradigm.

Subsequently, the proposed model is applied for the first time to the stability analysis of truss structures. Unlike most existing work, the neural network (NN) is designed to directly locate the critical point by minimizing a loss function involving the residual load and a property of the stiffness matrix, both established from the network outputs, the loads, and the BCs. This is significant because the first critical point is located at the end of training, corresponding to the minimum of the loss function, without utilizing any incremental-iterative methods.

Additionally, this dissertation develops a Bayesian deep neural network-based parameterization framework to directly solve the optimum design of geometrically nonlinear trusses for the first time. In this approach, the parameters of the network, rather than the members' cross-sectional areas, are regarded as the decision variables of the structural optimization problem. The loss function is constructed to minimize the total structural weight while satisfying all constraints of the optimization problem, which are evaluated with the support of finite element analysis (FEA) and the arc-length method. Furthermore, Bayesian optimization (BO) is applied to automatically tune the hyperparameters of the network. The effectiveness of this model is demonstrated through a series of numerical examples of geometrically nonlinear space trusses, and the obtained results show that the framework can overcome the drawbacks of typical machine-learning applications in computational mechanics.

Finally, a physics-informed neural energy-force network (PINEFN) framework is constructed, for the first time, to directly solve the optimum design of truss structures; structural analysis is completely removed from the implementation of the global optimization. Herein, the loss function is designed based on the output values and physical laws to guide the training. Only the NN is used in this scheme to minimize the loss function, with the weights and biases of the network considered as design variables. In this model, the spatial coordinates of truss members serve as input data, while the corresponding cross-sectional areas and redundant forces, unknown to the network, are taken as outputs. The obtained results indicate that the approach not only reduces the computational cost dramatically but also yields higher accuracy and faster convergence than the recent literature. With these outstanding features, it promises a unified solver-free numerical simulation for solving complex problems in structural optimization.

Keywords: Physics-informed; Geometric nonlinearity; Structural stability; Hyperparameter optimization; Force method; Critical points; Complementary energy; Bayesian optimization; Truss optimization

CONTENTS

ABSTRACT i
LIST OF TABLES vii
LIST OF FIGURES xi
1 INTRODUCTION
  1.1 Structural nonlinear analysis
  1.2 Size optimization
  1.3 Physics-informed neural networks
  1.4 Objective
  1.5 Organization
2 PHYSICS-INFORMED NEURAL NETWORK FOR NONLINEAR ANALYSIS OF TRUSS STRUCTURES 11
  2.1 An overview 11
  2.2 PINN for nonlinear analysis 12
    2.2.1 Problem statement 12
    2.2.2 Unsupervised learning-based approach framework 14
  2.3 PINN for structural stability analysis 17
    2.3.1 Problem statement 17
    2.3.2 Direct instability-informed neural network framework 20
  2.4 Numerical examples 23
    2.4.1 Material and geometrical nonlinearities 23
    2.4.2 Geometrical nonlinearity 31
    2.4.3 Material nonlinearity 38
    2.4.4 Structural stability 43
  2.5 Conclusions 56
3 BAYESIAN DEEP NEURAL NETWORK-BASED PARAMETERIZATION FRAMEWORK FOR OPTIMUM DESIGN OF NONLINEAR STRUCTURES 58
  3.1 Introduction 58
  3.2 Statement of structural optimization problem 60
  3.3 BDNN-based parameterization framework 61
    3.3.1 DNN-based parameterization model 63
    3.3.2 Hyperparameter tuning 66
  3.4 Numerical examples 70
    3.4.1 25-bar space truss 71
    3.4.2 52-bar dome truss 77
    3.4.3 56-bar space truss 81
    3.4.4 120-bar dome truss 85
  3.5 Conclusions 89
4 PHYSICS-INFORMED NEURAL ENERGY-FORCE NETWORK FOR STRUCTURAL OPTIMIZATION 90
  4.1 Introduction 90
  4.2 Structural optimization based on energy-force methods 94
  4.3 Physics-informed neural energy-force network 101
  4.4 Numerical examples 105
    4.4.1 Ten-bar truss 106
    4.4.2 200-bar planar truss 111
    4.4.3 25-bar space truss 117
    4.4.4 72-bar space truss 120
    4.4.5 120-bar dome truss 126
  4.5 Conclusions 130
5 CONCLUSIONS AND FUTURE WORK 131
  5.1 Conclusions 131
  5.2 Future work 133
REFERENCES 135
LIST OF PUBLICATIONS 150
ABSTRACT IN KOREAN 152
ACKNOWLEDGEMENTS 155

Fig. 3.5 Iteration history of the SQP algorithm for different initial areas for the 25-bar space truss
Fig. 3.6 Weight convergence histories of the optimal network and other studies for the 25-bar space truss

3.4.2 52-bar dome truss

Next, a 52-bar dome truss, illustrated in Fig. 3.7, is examined. This benchmark was previously studied by Saka and Ulker [39]. The data of the optimization problem are tabulated in Table 3.2. All members of the structure are classified into groups with respect to the design variables, as labeled in the same figure. An external load of 150 kN is applied in the negative z-direction at joints 6-13. Since the structure is symmetric, the vertical displacements of nodes 1, 2, 6, and 7 are restricted to 10 mm.

Fig. 3.7 Schematic of a 52-bar dome truss structure

Table 3.8 Optimum hyperparameters obtained by using the BO for different problems

Test problem         Hidden layers  Neurons per layer  Activation  Learning rate
52-bar dome truss    3              37                 ReLU        0.01489
56-bar space truss   4              59                 ReLU        0.09405
120-bar dome truss   -              45                 ReLU        0.04433

Table 3.9 Statistics of the optimal weight for different problems (weight in kg)

Test problem         Best        Worst       Mean        Std    95% CI
52-bar dome truss    2,141.010   2,146.903   2,142.715   0.665  2,142.280 to 2,143.149
56-bar space truss   14,549.691  14,554.903  14,550.723  0.508  14,550.422 to 14,551.023
120-bar dome truss   5,836.717   5,841.903   5,838.019   0.605  5,837.661 to 5,838.376

As in the previously investigated example, the optimal hyperparameters of the network, namely three hidden layers, 37 neurons per hidden layer, the ReLU activation function, and a learning rate of 0.01489 (Table 3.8), are found by the BO with respect to the minimum weight of 2,141.01 kg. From Table 3.9, it can easily be seen that the best (2,141.01 kg), mean (2,142.715 kg), and 95% CI (2,142.280 kg to 2,143.149 kg) values of the optimum weight are close, with a Std of less than 0.7 kg. Fig. 3.8 plots the convergence curve, which gives a more detailed view of the hyperparameter-tuning performance. Clearly, the optimal hyperparameters are achieved after only 20 training runs. On the other hand, the optimal results corresponding to the optimal network, including the cross-sectional areas, weight, and deflection constraints, are summarized and compared with other studies in Tables 3.10-3.11. It is obvious that our approach outperforms the others in terms of minimum weight, while all displacements are free from constraint violations. More specifically, the best optimum weight obtained by the proposed method (2,141.01 kg) is smaller than those of DNN-DE (2,142.41 kg), FEA-DE (2,141.9 kg), and Saka (5,161 kg). Clearly, the optimal weight found by Saka lags far behind the other studies, probably because it is trapped in a local optimum; our model entirely overcomes this challenge. In addition, a comparison of the convergence rates is shown in Fig. 3.9. As can be seen in the plot, the structural mass decreases rapidly in the first ten epochs, and the solution is found after only 237 epochs. Again, our procedure converges very quickly to the optimal solution, while the others are still a
long way from the target value. This can be attributed to the fact that the network is trained with gradient-based optimization; hence, the number of evaluations decreases much more quickly. Furthermore, the obtained results demonstrate the efficiency of the proposed framework.

Table 3.10 Comparison of optimal results for the 52-bar dome truss

Design variable Ai (cm²)  Saka [39]  FEA-DE [56]  DNN-DE [56]  Present
A1                        81.820     2.000        2.000        2.004
A2                        22.410     2.000        2.000        2.000
A3                        33.580     2.000        2.000        2.004
A4                        14.450     2.000        2.000        2.001
A5                        10.640     16.672       16.753       16.692
A6                        25.160     17.585       17.869       17.544
A7                        2.000      2.519        2.301        2.509
A8                        2.000      2.000        2.000        2.008
Best weight (kg)          5,161      2,141.9      2,142.41     2,141.010

Table 3.11 The displacement constraints of the 52-bar dome truss

Displacement (mm)  Saka [39]  FEA-DE [56]  DNN-DE [56]  Present
w1                 -2.772     1.328        1.206        1.339
w2                 -2.826     0.720        0.742        0.743
w6                 13.045     10.000       10.000       10.000
w7                 9.491      10.000       9.648        9.987

Fig. 3.8 The convergence history of the HPO using BO for the 52-bar dome truss structure
Fig. 3.9 The weight convergence histories of the optimal network and other works for the 52-bar dome truss

3.4.3 56-bar space truss

The next optimization problem deals with the 56-bar space truss shown in Fig. 3.10. As shown in the figure, the structure has several stories, and the cross-sectional areas of the members, which serve as design variables, are collected into four groups. It is subjected to concentrated loads of 45.5 kN in the positive x-direction at nodes 1, 3, 5, 7, 9, 11, 13, and 15, and 91 kN in the negative z-direction at nodes 1, 2, 3, and 4. The design data, including Young's modulus, density, and design variable bounds, are listed in Table 3.2. All joint displacements in the x-direction are limited to the interval [-30, 30] mm.

Fig. 3.10 A 56-bar space truss structure

The optimal results, including the hyperparameters, design variables, total weight, and deflection constraints, are reported in Tables 3.8, 3.9, 3.12, and 3.13.
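The hyperparameter search behind Table 3.8 can be illustrated with a minimal Gaussian-process Bayesian optimization loop using an expected-improvement (EI) infill strategy. The one-dimensional objective below is a hypothetical stand-in for "train the network and report its best weight"; the kernel, length scale, bounds, and sample counts are illustrative assumptions, not the dissertation's actual settings.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Hypothetical stand-in for "train the network, return its best weight":
    # a smooth function of x = log10(learning rate) with a minimum at x = -1.5.
    return (x + 1.5) ** 2 + 0.1

def rbf(a, b, length=0.7):
    # Squared-exponential kernel between two 1-D point sets.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def gp_posterior(xt, yt, xq, noise=1e-6):
    # Standard GP regression posterior mean and standard deviation.
    K = rbf(xt, xt) + noise * np.eye(len(xt))
    Ks = rbf(xt, xq)
    mu = Ks.T @ np.linalg.solve(K, yt)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sigma, best):
    # EI acquisition for minimization.
    z = (best - mu) / sigma
    cdf = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    return (best - mu) * cdf + sigma * pdf

# 10 random initial samples, then EI-driven infill over a candidate grid.
x_obs = rng.uniform(-4.0, 0.0, size=10)
y_obs = objective(x_obs)
grid = np.linspace(-4.0, 0.0, 401)
for _ in range(15):
    mu, sigma = gp_posterior(x_obs, y_obs, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y_obs.min()))]
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))

best_x = x_obs[np.argmin(y_obs)]
print(f"best log10(lr): {best_x:.3f}  objective: {y_obs.min():.4f}")
```

The infill step balances exploitation (low posterior mean) and exploration (high posterior uncertainty), which is why a handful of EI samples typically suffices where a grid search would need far more training runs.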
Accordingly, the best combination of hyperparameters (4 hidden layers, 59 neurons, ReLU, learning rate 0.09405) is found by the BO with respect to the minimum weight (14,549.691 kg), requiring only 25 training runs. It can be observed that the best optimum weight is close to the mean (14,550.723 kg) and worst (14,554.903 kg) weights, with errors of less than 0.008% and 0.036%, respectively. Furthermore, the best optimal mass is found with a small Std value (0.508 kg). In addition, the lower (14,550.422 kg) and upper (14,551.023 kg) bounds of the 95% CI are not significantly different and are close to the best weight. The convergence history of the BO is depicted in Fig. 3.11. As observed, the best-fitted network model is achieved after only a few additional samples, found by the EI infill strategy with 10 initial samples. Additionally, the optimum weight attained by the network with the optimal hyperparameters is the best design without constraint violations. Note that although Saka [39] reported the smallest weight (13,577.160 kg), the displacement constraints are violated at nodes 1 (31.1542 mm) and 2 (32.2244 mm) in the x-direction. As indicated in Ref. [56], the sensitivity of the nonlinear response and the control parameters of the numerical method used by Saka can affect the optimal result. Fig. 3.12 displays the convergence curves of the present method and FEA-DE for the structural weight. Once again, the convergence speed accelerates within the first 50 epochs, and the optimal solution is achieved after only 198 nonlinear analyses.

Table 3.12 Comparison of optimal results for the 56-bar space truss

Design variable Ai (cm²)  Saka [39]   FEA-DE      Present
A1                        7.440       8.002       7.951
A2                        111.020     115.427     115.609
A3                        5.000       5.001       5.005
A4                        46.460      52.761      52.642
Best weight (kg)          13,577.160  14,550.004  14,549.691

Table 3.13 The displacement constraints of the 56-bar space truss

Displacement (mm)  Saka [39]  FEA-DE   Present
u1                 31.1542    29.0021  28.9957
u2                 32.2244    30.0002  30.0000
u3                 27.0624    25.3639  25.3520
u4                 26.7943    25.1181  25.1044
u5                 24.3559    22.5559  22.5625
u6                 23.2623    21.5386  21.5389
u7                 19.9771    18.5977  18.5931
u8                 20.2158    18.8190  18.8160
u9                 14.6074    13.4029  13.4083
u10                14.8078    13.5845  13.5914
u11                13.9259    12.8816  12.8889
u12                12.7964    11.8268  11.8277
u13                7.7910     7.1061   7.1175
u14                6.6797     6.0706   6.0757
u15                5.4646     4.9662   4.9702
u16                5.6804     5.1646   5.1701

Fig. 3.11 The convergence history of the HPO using BO for the 56-bar space truss structure
Fig. 3.12 The weight convergence histories of the optimal network and FEA-DE for the 56-bar space truss

3.4.4 120-bar dome truss

The last problem optimizes the 120-bar dome truss structure shown in Fig. 3.13. All cross-sectional areas of the members are categorized into seven groups corresponding to the design variables. The design data for this benchmark are given in Table 3.2. The system is subjected to vertical loads in the negative z-direction: 60 kN at node 1, 30 kN at nodes 2-13, and 10 kN at nodes 14-37. The vertical displacement of the free nodes under this loading is limited to 10 mm.

Fig. 3.13 120-bar dome space truss structure

The optimal hyperparameters were obtained after 25 training runs, as shown in Table 3.8 and Fig. 3.14. For the optimal network, the optimum results, including the weight, statistics, design variables, and constraint values, are reported in Tables 3.9 and 3.14-3.15. First, from the data in Table 3.9, the confidence interval (95% CI = 5,837.661 kg to 5,838.376 kg) is narrow and close to the best (5,836.717 kg), mean (5,838.019 kg), and worst (5,841.903 kg) weights, with a small Std (0.605 kg). Next, it can be seen that the optimum weight found by Saka [39] (7,587 kg) was the heaviest and violated the design constraints (-10.015 mm). In addition, it is interesting that in this example, the optimum weight obtained by the FEA-DE (6,504.674 kg) is also much larger than that of the proposed paradigm (5,836.717 kg), which violates no constraints.
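The parameterization idea underlying these comparisons, in which the network weights replace the cross-sectional areas as design variables and the loss combines the structural weight with a displacement-constraint penalty, can be sketched on a toy problem. The two-member bar chain, its loads, the penalty factor, and the finite-difference Adam loop below are all illustrative assumptions standing in for the dissertation's FEA/arc-length-based loss and back-propagation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-member bar chain (hypothetical data); its analytic tip deflection
# sum(F*L/(E*A)) stands in for a real FEA response.
E = 200e3                          # Young's modulus, N/mm^2
L = np.array([1000.0, 1500.0])     # member lengths, mm
F = np.array([50e3, 30e3])         # member forces, N
rho = 7.85e-6                      # density, kg/mm^3
d_max = 5.0                        # allowable tip deflection, mm

def deflection(A):
    return float(np.sum(F * L / (E * A)))

def weight(A):
    return float(rho * np.sum(L * A))

# Tiny 1-4-1 network mapping a per-group feature to a cross-sectional area;
# its 13 weights and biases are the design variables.
x_in = np.array([[0.0], [1.0]])    # one scalar feature per member group

def areas_from(p):
    W1, b1, W2, b2 = p[:4].reshape(1, 4), p[4:8], p[8:12], p[12]
    h = np.tanh(x_in @ W1 + b1)             # hidden layer, shape (2, 4)
    a = h @ W2 + b2                         # raw output per group
    return np.logaddexp(0.0, a) * 100.0     # softplus keeps areas > 0 (mm^2)

def loss(p, penalty=1e3):
    A = areas_from(p)
    viol = max(0.0, deflection(A) / d_max - 1.0)   # normalized violation
    return weight(A) + penalty * viol ** 2

# Adam on finite-difference gradients stands in for back-propagation,
# keeping the sketch free of autodiff dependencies.
p = rng.normal(0.0, 0.5, size=13)
m = np.zeros(13); v = np.zeros(13)
lr, beta1, beta2, eps = 0.02, 0.9, 0.999, 1e-8
for t in range(1, 2501):
    f0 = loss(p)
    g = np.array([(loss(p + 1e-6 * np.eye(13)[i]) - f0) / 1e-6
                  for i in range(13)])
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    p -= lr * (m / (1 - beta1 ** t)) / (np.sqrt(v / (1 - beta2 ** t)) + eps)

A_opt = areas_from(p)
print("areas:", A_opt, "weight:", weight(A_opt), "deflection:", deflection(A_opt))
```

After training, the network's outputs are the sized members; the optimizer never manipulates areas directly, only the network parameters, which is the scheme's key departure from the DE-style search it is compared against.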
This can be explained by the fact that the DE algorithm may become trapped in a local optimum. Clearly, our approach gives the best result in terms of both the optimum weight and the constraints. Hence, tuning the hyperparameters with the BO plays an important role in improving accuracy and finding the global solution, as shown in Fig. 3.14. A comparison of the convergence histories of the optimal network and FEA-DE is illustrated in Fig. 3.15. As in the above examples, the proposed framework always converges much more rapidly than the FEA-DE: it requires only 200 nonlinear analyses, while the DE requires a large number of nonlinear analyses (13,740). Again, this demonstrates the efficiency of the self-tuning DNN for solving optimum design problems with geometrically nonlinear behavior.

Table 3.14 Comparison of optimal results for the 120-bar dome truss

Design variable Ai (cm²)  Saka [39]  FEA-DE     Present
A1                        17.500     9.693      5.496
A2                        45.560     45.096     42.184
A3                        25.450     25.785     25.077
A4                        8.440      4.829      2.077
A5                        22.300     24.116     24.805
A6                        15.960     14.997     15.454
A7                        3.900      2.000      2.048
Best weight (kg)          7,587.000  6,504.674  5,836.717

Table 3.15 The displacement constraints of the 120-bar dome truss

Displacement (mm)  Saka [39]  FEA-DE  Present
w1                 -7.518     -8.336  -10.000
w5                 -10.015    -9.838  -10.000
w19                0.907      0.641   0.606
w20                0.125      -4.401  -5.766

Fig. 3.14 The convergence histories of the HPO using BO for the 120-bar dome truss structure
Fig. 3.15 The weight convergence histories of the optimal network and FEA-DE for the 120-bar dome truss

3.5 Conclusions

In this chapter, an efficient self-tuning hyperparameter BDNN-based framework is developed to solve the design optimization of truss structures with geometrically nonlinear behavior. The DNN is built to parameterize the cross-sectional areas of the members. In addition, the BO framework is integrated with the network's training process to search hyperparameters in a self-adjusting manner. Instead of searching for the cross-sectional areas directly, the parameters of the network, as the new design variables of the structural optimization, are estimated by training to minimize the loss function. Therein, FEA is utilized to support the construction of the loss function, and the network directly performs the structural optimization. When the training phase ends, the minimum weight of the structure is obtained immediately, and the BO is utilized to determine the best optimum weight corresponding to the optimal hyperparameters of the network. Based on the investigated numerical examples, several conclusions are made as follows:

(i) The numerical results indicate that our model consistently outperforms others in terms of solution quality and convergence rate.
(ii) The network can automatically tune its hyperparameters in the learning process to avoid being trapped in a local optimum, which is one of the major strengths of this approach.
(iii) In light of these outstanding features, it is a promising alternative for solving complex problems with nonlinear behavior.

CHAPTER 4

PHYSICS-INFORMED NEURAL ENERGY-FORCE NETWORK FOR STRUCTURAL OPTIMIZATION

4.1 Introduction

The objective of the design optimization of truss structures is to minimize the structural weight while satisfying all constraints. In general, although a variety of algorithms have been employed to address this problem, they all work on the same basic principle, as shown in Fig. 4.1(a): the optimization tool requires numerical simulations, such as FEA, to estimate the structural responses during each iteration of the optimizer. These algorithms can be divided into two main classes. In the first, gradient-based algorithms have been successfully applied to search for optimal solutions [36,96,97]; however, this approach cannot cope with the absence of gradient information for the objective and constraint functions. The second class consists of gradient-free algorithms, which rely on evolutionary and population-genetics principles to address the optimal design of truss structures [49-51]. Although these algorithms have achieved some success, they require many function evaluations, exhibit slow convergence, and incur high computational cost.
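The conventional loop of Fig. 4.1(a) can be made concrete with a minimal differential-evolution sketch, in which every candidate design triggers one "structural analysis". The two-bar toy model, the bounds, and the DE settings below are illustrative assumptions, not taken from the dissertation; counting the analysis calls shows exactly the cost that a solver-free formulation aims to eliminate.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "FEA": two-bar chain with analytic tip deflection sum(F*L/(E*A))
# (hypothetical data, same role as a real finite element solve).
E = 200e3                          # N/mm^2
L = np.array([1000.0, 1500.0])     # mm
F = np.array([50e3, 30e3])         # N
rho = 7.85e-6                      # kg/mm^3
d_max = 5.0                        # mm

fea_calls = 0

def penalized_weight(A):
    """One call = one 'structural analysis' of a candidate design."""
    global fea_calls
    fea_calls += 1
    defl = np.sum(F * L / (E * A))
    viol = max(0.0, defl / d_max - 1.0)
    return rho * np.sum(L * A) + 1e3 * viol ** 2

# Minimal DE/rand/1/bin loop: every trial member of every generation
# needs a fresh analysis, which is where the cost accumulates.
lo, hi = 10.0, 500.0               # area bounds, mm^2
NP, Fm, CR, gens = 20, 0.7, 0.9, 120
pop = rng.uniform(lo, hi, size=(NP, 2))
fit = np.array([penalized_weight(x) for x in pop])
for _ in range(gens):
    for i in range(NP):
        a, b, c = pop[rng.choice([j for j in range(NP) if j != i], 3,
                                 replace=False)]
        trial = np.where(rng.random(2) < CR, a + Fm * (b - c), pop[i])
        trial = np.clip(trial, lo, hi)
        f = penalized_weight(trial)
        if f < fit[i]:               # greedy one-to-one selection
            pop[i], fit[i] = trial, f

best = pop[np.argmin(fit)]
print(f"best areas: {best}  weight proxy: {fit.min():.4f}  "
      f"analyses: {fea_calls}")
```

Even on this two-variable toy, the population search performs thousands of analyses; on a real geometrically nonlinear truss each of those would be an arc-length-traced FEA solve, which motivates the chapter's analysis-free alternative.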


