
Machine learning modelling of tensile force in anchored geomembrane liners



Geosynthetics International

K. V. N. S. Raviteja (1,2), K. V. B. S. Kavya (3), R. Senapati (4) and K. R. Reddy (5)

(1) SIRE Research Fellow, Department of Civil, Materials, and Environmental Engineering, University of Illinois, Chicago, IL, USA
(2) Assistant Professor, Department of Civil Engineering, SRM University AP, Amaravati, Guntur, India, E-mail: raviteja.k@srmap.edu.in
(3) Research Scholar, Department of Civil Engineering, SRM University AP, Amaravati, Guntur, India, E-mail: kvbskavya@gmail.com
(4) Assistant Professor, Department of Computer Science and Engineering, SRM University AP, Amaravati, Guntur, India, E-mail: rajiv.s@srmap.edu.in
(5) Professor, Department of Civil, Materials, and Environmental Engineering, University of Illinois, Chicago, IL, USA, E-mail: kreddy@uic.edu (corresponding author)

Received 28 October 2022, accepted 26 February 2023

ABSTRACT: Geomembrane (GM) liners anchored in the trenches of municipal solid waste (MSW) landfills undergo pull-out failure when the applied tensile stresses exceed the ultimate strength of the liner. The present study estimates the tensile strength of the GM liner against pull-out failure from anchorage with the help of machine-learning (ML) techniques. Five ML models, namely multilayer perceptron (MLP), extreme gradient boosting (XGB), support vector regression (SVR), random forest (RF) and locally weighted regression (LWR), were employed in this work. The effects of anchorage geometry, soil density and interface friction on the tensile strength of the GM were studied. In this study, 1520 samples of soil–GM interface friction were used. The ML models were trained and tested with 90% and 10% of the data, respectively. The performance of the ML models was statistically examined using the coefficients of determination (R², R²adj) and mean square errors (MSE, RMSE). In addition, an external validation model and K-fold cross-validation techniques were used to check the models' performance and accuracy. Among the chosen ML models, MLP was found to be superior in accurately predicting the tensile strength of the GM liner. The developed methodology is useful for tensile strength estimation and can be beneficially employed in landfill design.

KEYWORDS: Geosynthetics, Anchorage capacity, Machine learning, Geoenvironment, Landfill

REFERENCE: Raviteja, K. V. N. S., Kavya, K. V. B. S., Senapati, R. and Reddy, K. R. (2023). Machine-learning modelling of tensile force in anchored geomembrane liners. Geosynthetics International. https://doi.org/10.1680/jgein.22.00377

1. INTRODUCTION

A composite liner consisting of a compacted clay liner (CCL), or a geosynthetic clay liner (GCL), and a geomembrane (GM) is used to prevent leachate from escaping from municipal solid waste (MSW) landfills. The GM is placed over the CCL or GCL and overlain by a leachate drainage layer. An anchor system secures the GM to avoid pull-out failure. Figure 1 shows the schematic representation of the liner and GM anchorage system in MSW landfills. Ensuring the stability and integrity of the composite liner system is crucial in landfill design. The anchor system secures GM liners in order to avoid pull-out failure caused by stresses induced by the drainage layer (Koerner et al. 1986; Sharma and Reddy 2004). It is reported that the geosynthetic interface components are highly influenced by the properties of the overlying waste (Reddy et al. 2017). The conventional limit equilibrium analysis lacks the ability to determine displacement along the critical shear plane and report strain levels within the composite liner system (Reddy et al. 1996).

Figure 1. Schematic representation of GM liner anchored in a V-shaped trench (cover soil, drainage layer, GM liner, CCL/GCL, backfill and native soil)

Anchor systems could be of different geometries (simple runout, rectangular, L-shape and V-shape) with soil backfilled in the trenches (Koerner et al. 1986). GM liners are often prone to pull-out failure along the side slopes of the landfill during installation. Figure 2 presents the pull-out force and corresponding resistance forces developed along the liner embedded in a V-shaped trench. The anchorage capacity should be designed in an optimal way so that it acts rigid when the mobilised tension is low, and flexible when the mobilised tension reaches the ultimate tensile strength, to avoid tear in the GM liner. It is important to determine the tensile force (T) based on all the system variabilities (soil properties, anchorage geometry and CCL–GM interface shear characteristics) to ensure anchorage stability.

Figure 2. Anchorage showing the mobilised tension T (kN/m) and interface frictional resistance f acting along the length of the GM liner (geometry defined by dcs, dat, Lro and Lat)
A large number of physical tests and evaluations needs to be conducted on pull-out apparatus and shear box equipment for the experimental assessment of tensile forces in the GM liner. It is recommended to conduct one test on the tensile properties of the liner for every 100 000 ft² (TCEQ 2017); that is, a 600-acre landfill site requires more than 3000 conformance tests to determine the tensile properties of the liner. Further, the variability associated with the various design parameters of the anchorage could demand repetitive testing for proper judgement (Raviteja and Basha 2021). The friction angles at the CCL–GM and sand–GM interfaces are the most critical parameters in anchor trench design. Most pull-out failures on the side slope are initiated at the soil–GM interface. Recent and past studies have shown that low frictional resistance at the interface, the tensile stiffness of the liner and failure of the soil mass along the preferential slip lines in granular soils are some of the major causes. Inadequate analysis of soil–geosynthetic interface characteristics would result in pull-out failure.

Koerner et al. (1986) analysed anchorage resistance by determining the pressure exerted by cover and backfill soil on the GM liner. Further, four design models were developed to address the stability and tension factors for cover soils on GM-lined slopes (Koerner and Hwu 1991). Qian et al. (2002) derived an expression for tensile force in the liners for simple, rectangular and V-shaped anchors by considering the normal stress from cover soil. The anchor trench pull-out resistance was analysed and compared for four design models (Raviteja and Basha 2018). A significant variability is associated with soil–GM liner interface friction that needs to be incorporated in the design of the anchor trench (Raviteja and Basha 2015). A target reliability-based design optimisation was proposed for a V-shaped anchor trench against pull-out failure (Basha and Raviteja 2016). Huang and Bathurst (2009) developed statistical bilinear and nonlinear models for predicting the pull-out capacity of geosynthetics. The cyclic interface shear properties between sandy gravel and high-density polyethylene (HDPE) GM were experimentally evaluated and further modelled through a constitutive relationship (Cen et al. 2019). Miyata et al. (2019) proposed ML regression models to predict the pull-out capacity of steel strip reinforcement. The pull-out coefficient is determined using analytical techniques that rely on soil engineering properties, namely stress-related skin friction between soil and geosynthetics (Samanta et al. 2022).

In general, artificial intelligence (AI) techniques frequently outperform traditional and deterministic solutions. AI approaches such as artificial neural networks (ANN), genetic programming (GP) and support vector machines (SVM) are more sophisticated, resulting in wide usage for geotechnical engineering designs. Several authors have identified the importance of extensive database analysis in better predicting experimental results. Machine learning (ML)-based applications are gaining prominence in geotechnical engineering (Sharma et al. 2019; Hu and Solanki 2021; Mittal et al. 2021; Rauter and Tschuchnigg 2021; Zhang et al. 2021). Chou et al. (2015) used an evolutionary metaheuristic intelligence model to estimate the tensile loads in geosynthetic-reinforced soil structures. The applicability of five different ML models in determining the peak shear strength of soil–geocomposite drainage layer interfaces was verified by Chao et al. (2021). ANN models successfully estimate anticipated settlement in geosynthetic-reinforced soil foundations (Raja and Shukla 2021). It is reported that the pull-out coefficient in geogrids could be accurately predicted using random forest regression (RFR) (Pant and Ramana 2022). Ghani et al. (2021) studied the response of strip footing resting on prestressed geotextile-reinforced industrial waste using ANN and extreme ML. The complex heterogeneous nature of soil properties and the peculiar interaction with various geosynthetic materials can be simulated and well analysed using ML models. Chao et al. (2023) experimentally validated the peak shear strength of the clay–GM interface predicted using AI algorithms.

This paper used five different ML models to build an anchorage model for assessing tensile force against pull-out failure. A dataset was compiled from published test results that include soil parameters, soil–liner interface friction angle (δ), side slope angle (α) and allowable tensile force (Ta). The ML models were studied using K-fold cross-validation (CV) and grid search to find hyperparameters for a better prediction of results. A comparative analysis was carried out to determine the superior ML model.
2. METHODOLOGY

2.1 Anchored GM liner tensile force against pull-out force

The tension mobilised in the anchored GM liner is affected by the friction at the soil–liner interface, the overburden pressure from the soil cover, the liner alignment, the trench geometry, construction activities and equipment loads at the crest portion. A high mobilised tension may pull the liner out of the anchorage. Conversely, a rigid anchorage can lead to tearing of the GM liner. Figure 2 shows the GM liner anchorage indicating the resisting (f) and pull-out (T) forces. The anchor-holding capacity should preferably lie between the allowable tensile force and the ultimate tensile force of the GM liner to avoid both pull-out failure and tearing of the GM liner. However, as suggested by Koerner (1998) and Qian et al. (2002), pull-out failure is preferable to tensile failure of the GM liner. Basha and Raviteja (2016) reported the following equation to calculate the allowable GM tensile force against pull-out failure (Ta), based on the Qian et al. (2002) theory, considering the GM liner as a continuous member throughout its length:

$$T_a = \frac{\gamma\, d_{cs}\, L_{ro} \tan\delta + \gamma\,(2 d_{cs} + 0.5\, d_{at})\, L_{at} \tan\delta \cos\psi}{\cos\alpha - \sin\alpha \tan\delta} \qquad (1)$$

where γ is the unit weight of soil, dcs is the depth of cover soil, Lro is the runout length, δ is the interface friction angle, ψ is the trench angle, α is the angle of the side slope, and Lat and dat are the length and depth of the anchor trench, respectively.
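For illustration, the short sketch below evaluates Equation 1 as reconstructed above for one trial anchorage configuration. The function name and the sample parameter values are illustrative only (chosen within the ranges later reported in Table 1) and are not a worked example from the paper.

```python
import math

def allowable_tensile_force(gamma, d_cs, L_ro, delta_deg, psi_deg, alpha_deg, L_at, d_at):
    """Allowable GM tensile force Ta (kN/m) from Equation 1 (Basha and Raviteja 2016)."""
    delta = math.radians(delta_deg)   # soil-liner interface friction angle
    psi = math.radians(psi_deg)       # trench angle
    alpha = math.radians(alpha_deg)   # side slope angle
    resisting = (gamma * d_cs * L_ro * math.tan(delta)
                 + gamma * (2.0 * d_cs + 0.5 * d_at) * L_at * math.tan(delta) * math.cos(psi))
    return resisting / (math.cos(alpha) - math.sin(alpha) * math.tan(delta))

# Illustrative values only, roughly at the mean values of Table 1
print(allowable_tensile_force(gamma=17.0, d_cs=0.3, L_ro=1.5,
                              delta_deg=24.0, psi_deg=40.0, alpha_deg=33.0,
                              L_at=0.8, d_at=0.3))
```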
2.2 Multilayer perceptron (MLP)

The MLP is a multilayered network with input, output and hidden units that can represent a variety of nonlinear functions. The MLP is a type of artificial neural network. Interactions between the inputs and outputs can be represented using multilayer neural networks. Each layer consists of neurons linked across different layers with connection weights. The connection weights are adjusted based on the output error, which is the difference between the ideal and predicted output, when propagating backwards. Backpropagation is the method for updating weights in such multilayered neural networks. The graphical representation of the MLP architecture is shown in Figure 3.

Figure 3. Architecture of MLP (input signals x1, x2, …, xm; hidden-layer activations z1, z2, …, zn; weights wh and vh; output ŷ; error signals propagated backwards)

For example, a two-layer network with one hidden layer and one output layer, provided with a requisite number of hidden units and appropriate activation functions, can represent any Boolean and continuous function within tolerance. The algorithm is trained as given in the following steps. Step 1: Initialise the structure of the network as well as the weights with small random values and different biases in the network. Step 2: Forward computing: apply the training examples ((x1, y1), (x2, y2), …, (xm, ym)) to the network one by one, where x (input vector) = {γ, dcs, Lro, δ, ψ, α, Lat, dat} and y (output vector) = {Ta}. Step 3: Update weights: the predicted output ŷ = Ta is obtained for a particular configuration of the network. If there is a difference between y and ŷ, the weight vectors are adjusted based on the computation of the error signals to the neurons. Step 4: Repeat the process with updated weights until the model converges, so that the error between the actual and predicted output is small.

Consider a training dataset of m samples (x1, x2, …, xm). The forward propagation calculation is given in Equations 2 and 3. Each neuron consists of linear and activation functions, as shown in Figure 3. Further, the loss is calculated using the function in Equation 4.

$$z_h = a(w_h^{T} x_i) \qquad (2)$$
$$\hat{y} = v_h^{T} z_h \qquad (3)$$
$$E = (y - \hat{y})^{2} \qquad (4)$$

where zh indicates the activation of the hidden layer, a is the activation function, wh is the hidden-layer weight vector, vh is the output weight vector and E is the loss function. The calculated errors are back-propagated, updating the weights wh and vh by gradient descent as given in Equations 5 and 6. The number of hidden layers and the number of neurons in each layer affect the model performance.

$$\Delta v_h = -\eta \frac{\partial E}{\partial v_h} \qquad (5)$$
$$\Delta w_h = -\eta \frac{\partial E}{\partial w_h} \qquad (6)$$

where η is the learning factor.
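The paper does not list its MLP configuration in this excerpt, so the following is only a minimal scikit-learn sketch of an MLP regressor mapping the eight inputs {γ, dcs, Lro, δ, ψ, α, Lat, dat} to Ta. The CSV file name, the column names, the hidden-layer size and the iteration limit are placeholders; only the 90%/10% split follows the text.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical CSV holding the nine columns described in the Database section
data = pd.read_csv("anchorage_dataset.csv")
X = data[["gamma", "d_cs", "L_ro", "delta", "psi", "alpha", "L_at", "d_at"]]
y = data["T_a"]

# 90% training / 10% testing, as stated in the abstract
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42)

# Input scaling plus a single hidden layer; layer size and max_iter are illustrative
mlp = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=42))
mlp.fit(X_train, y_train)
print("R2 on the held-out 10%:", mlp.score(X_test, y_test))
```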
2.3 Extreme gradient boosting (XGB)

XGB is a supervised ML algorithm proposed by Chen and Guestrin (2016). Gradient tree boosting is a technique that works efficiently for classification and regression applications. The regular boosting algorithm works on the principle of ensembling multiple weak learners sequentially to form a strong learner. Generally, a weak learner is a small decision tree with few splits, and each tree learns from the errors made by the previous model until there is no further improvement. XGB works on the same principle with additional regularisation parameters, which improve the model's accuracy by preventing overfitting. Figure 4 illustrates the architecture of XGB. The current output of the mth tree is the sum of the previous tree output and the hypothesis function of the current tree multiplied by the regularisation parameter (Equations 7 and 8).

$$T_m(X) = T_{m-1}(X) + (\alpha_r)_m\, h_m(X, r_{m-1}) \qquad (7)$$
$$\operatorname*{arg\,min}_{\alpha_r} \sum_{i=1}^{m} L\big[Y_i,\; T_{i-1}(X_i) + (\alpha_r)_i\, h_i(X_i, r_{i-1})\big] \qquad (8)$$

where Tm(X) is the mth tree output, (αr)i is a regularisation parameter, ri is the residual computed with the ith tree, hi is a function trained to predict the residuals and L(Y, T(X)) is the differentiable loss function.

Figure 4. Process of XGB (trees T1(X), T2(X), …, Tm(X) are fitted sequentially to the dataset (X, Y); at each step αi is computed along with the residuals ri = Y − Ŷ on which the next tree is trained, and the results are summed to give the output)
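A minimal sketch of gradient-boosted regression for the same inputs is given below, assuming the open-source xgboost Python package (which implements the Chen and Guestrin 2016 algorithm) and reusing the hypothetical training split from the MLP sketch above. The number of trees, learning rate, depth and regularisation value are illustrative starting points, not the values tuned in the paper.

```python
from xgboost import XGBRegressor

# Hyperparameters are placeholders; the paper tunes its models by grid search (Section 2.7)
xgb = XGBRegressor(n_estimators=300, learning_rate=0.1, max_depth=4,
                   reg_lambda=1.0, random_state=42)
xgb.fit(X_train, y_train)
print("R2 on the held-out 10%:", xgb.score(X_test, y_test))
```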
2.4 Support vector regression (SVR)

SVMs are supervised machine-learning models designed for classification and later extended to regression problems (Vapnik 1995). An SVM predicts discrete categorical labels as a classifier, whereas SVR predicts continuous variables as a regressor (Vapnik 1997). Although SVR is based on the same processing principle, it is used to solve regression problems, unlike SVM. Problem-solving involves constructing a hyperplane that separates the positive and negative parts of the data, along with two decision boundaries that are parallel to the hyperplane. This is known as the insensitive region ε (the ε-tube), which makes the data linearly separable. In SVR, the algorithm forms the best tube by formulating an optimisation problem, balancing model complexity against prediction errors to provide a good approximation. The search converges at a hyperplane that holds the maximum amount of training data within the boundaries (ε-tube). To estimate a linear function, SVR can be formulated as given in Equation 9.

Consider a two-dimensional (2D) training set (x1, x2, …, xn), where x is an input variable, y is a target variable and n is the number of samples. The core goal of SVR is to obtain y. The deviation from the actual output is kept within ε by minimising the Euclidean norm of the weight vector w (Equation 10), subject to the constraints in Equation 11. The algorithm finds a weight vector for which most samples lie within the margin. Prediction error lying outside the margin can be decreased by inserting the slack variables ξi and ξi*, which convert the hard margin to a soft margin. The optimisation functions are provided in Equations 12 and 14, and the corresponding constraints are given in Equations 13 and 15. If the hyperparameter C is too high, the model will not allow large slacks; if C = 0, the slack variables are not penalised, so they can grow as large as possible, resulting in poor performance of the model.

$$y = (w \cdot x) + b \qquad (9)$$
$$\text{Minimise: } \tfrac{1}{2}\lVert w \rVert^{2} \qquad (10)$$
$$\text{Constraints: } \begin{cases} y_i - (w \cdot x_i) - b \le \varepsilon \\ (w \cdot x_i) + b - y_i \le \varepsilon \end{cases} \qquad (11)$$
$$\text{Minimise: } \tfrac{1}{2}\lVert w \rVert^{2} + C \sum_{i=1}^{n} (\xi_i + \xi_i^{*}) \qquad (12)$$
$$\text{Constraints: } \begin{cases} y_i - (w \cdot x_i) - b \le \varepsilon + \xi_i \\ (w \cdot x_i) + b - y_i \le \varepsilon + \xi_i^{*} \\ \xi_i,\ \xi_i^{*} \ge 0 \end{cases} \qquad (13)$$
$$\text{Minimise}(w, b): \tfrac{1}{2}\lVert w \rVert^{2} + C \sum_{i=1}^{n} (\xi_i + \xi_i^{*}) \qquad (14)$$
$$\text{Constraints: } \begin{cases} y_i - \sum_{k=1}^{K} w_k x_{i,k} - b \le \varepsilon + \xi_i \\ \sum_{k=1}^{K} w_k x_{i,k} + b - y_i \le \varepsilon + \xi_i^{*} \\ \xi_i,\ \xi_i^{*} \ge 0 \end{cases} \qquad (15)$$

where b is a dimensionless constant.

The model described above is for linear regression problems. SVR is flexible and can perform nonlinear regression by projecting the data into a high-dimensional space using kernel methods to avoid complexity. In the nonlinear case, SVR adopts a kernel function (Φ) that represents the nonlinear relationship between w and x. Figure 5 presents the architecture of SVR. Among the various kernel functions, radial basis and polynomial functions have been successfully employed for geotechnical engineering problems (Debnath and Dey 2018).

Figure 5. Architecture of SVR (ε-tube bounded by +ε and −ε around the regression function in the input–output plane, with slack ξ for points outside the tube)

2.5 Random forest (RF)

RF is a bootstrap aggregation-based ensemble machine-learning algorithm built on decision trees, developed by Breiman (2001). The method incorporates randomness during the attribute selection phase and resamples the training data with replacement. Bagging forces the ensemble model to generate a variety of decision trees, where each tree acts on a different data subset. Since the trees are built from a random selection of samples and features, numerous random trees are created, forming a random forest. The RF modelling procedure is depicted in Figure 6. The RF method is superior to the decision tree technique: RF has low variance and low bias because it averages a number of decision trees trained on various parts of the same training data, making it better for prediction.

A dataset with m samples creates a decision tree from each of several bootstrap samples by considering only a random subset of the total of F features. Hence, D features are evaluated for each tree at each split (D = √F). The correlation between trees is reduced when they are trained with a random subset of features. As with bagging, training is typically performed for a large number of trees. As given in Equation 16, the average of all random tree outputs O1, O2, O3, …, On is used as the regression model output (Cn).

$$C_n = \frac{1}{n} \sum_{i=1}^{n} O_i \qquad (16)$$

Figure 6. Process of RFR (n decision trees trained on bootstrap samples of the dataset; their outputs are averaged to give the final output)
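The following is a minimal scikit-learn sketch of the SVR (with a radial basis function kernel, one of the kernels named above) and RF regressors for the same hypothetical training split used in the earlier sketches. The kernel choice, C, ε and the number of trees are illustrative defaults rather than the grid-searched values used in the study.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# SVR with an RBF kernel; inputs are scaled because SVR is sensitive to feature scale
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
svr.fit(X_train, y_train)

# Random forest averaging 200 bootstrap trees (Equation 16); D = sqrt(F) features per split
rf = RandomForestRegressor(n_estimators=200, max_features="sqrt", random_state=42)
rf.fit(X_train, y_train)

for name, model in [("SVR", svr), ("RF", rf)]:
    print(name, "R2 on the held-out 10%:", model.score(X_test, y_test))
```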
2.6 Locally weighted regression (LWR)

LWR is a non-parametric, supervised learning algorithm. As the name indicates, LWR predictions are based on the data close to the new instance, and the contribution of each training example is weighted according to its distance from the new instance. LWR has no separate training phase, so the entire work is performed during the testing/prediction phase. Further, LWR considers the full dataset to make predictions, unlike simple regression, constructing a regression line local to each new data point. Thus, LWR overcomes the limitations of linear regression by assigning weights to the training data (Cleveland and Devlin 1988). The weights are higher for data points close to the new point being predicted by the algorithm, as shown in Figure 7. The method chooses Θ to minimise the modified cost function (Equation 17). The computation of the weighting function wi is given in Equation 18. The learning algorithm selects the parameters Θ for better predictions, and this approximation calculates the estimated target value for the query instance.

$$\sum_{i=1}^{m} w_i \big(y_i - \Theta^{T} x_i\big)^{2} \qquad (17)$$
$$w_i = e^{-\left(x_i - x\right)^{2} / \left(2\tau^{2}\right)} \qquad (18)$$

where τ is the bandwidth of the weighting kernel, controlling how quickly the weights decay with distance from the query point x.

Figure 7. Architecture of LWR (training points near the query point receive the highest weights and dominate the local fit)
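Since LWR is not part of the standard ML libraries, the following is a minimal NumPy sketch of Equations 17 and 18: for each query point, Gaussian weights are computed from the distance to the training points and a weighted least-squares fit gives the local prediction. The weighted least-squares solver and the bandwidth value τ are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def lwr_predict(X_train, y_train, x_query, tau=0.5):
    """Locally weighted regression prediction for one query point (Equations 17 and 18)."""
    Xb = np.c_[np.ones(len(X_train)), X_train]   # add a bias column
    xq = np.r_[1.0, x_query]
    # Gaussian weights: w_i = exp(-||x_i - x||^2 / (2 tau^2)), Equation 18
    d2 = np.sum((X_train - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * tau ** 2))
    W = np.diag(w)
    # Weighted least squares minimising Equation 17: theta = (Xb^T W Xb)^-1 Xb^T W y
    theta = np.linalg.pinv(Xb.T @ W @ Xb) @ Xb.T @ W @ y_train
    return xq @ theta

# Illustrative usage with standardised synthetic inputs (scaling matters, since tau is distance based)
rng = np.random.default_rng(0)
X_demo = rng.normal(size=(50, 3))
y_demo = X_demo @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)
print(lwr_predict(X_demo, y_demo, X_demo[0]))
```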
2.7 Grid search hyperparameter optimisation

In ML models, hyperparameters must be specified to adapt a model to the dataset. The general effects of hyperparameters on a model are often understood, but determining the appropriate values and combinations of interacting hyperparameters for a given dataset can be complex. Systematically searching different settings of the model hyperparameters and selecting the subset that produces the best model on a given dataset is one of the best approaches; this is referred to as hyperparameter optimisation or tuning. The scikit-learn library in Python provides this functionality with different optimisation techniques that can produce a high-performing set of hyperparameters. Random search and grid search are the two primary and most widely used techniques for tuning hyperparameters. In this study, grid search is used to generate an optimised model. It treats the search space as a grid of hyperparameter values and evaluates each position in the grid. Grid search is ideal for double-checking combinations that have previously performed well.

2.8 K-fold CV

K-fold CV is a predictive analytic framework broad enough to be applied to various types of models. It consists of the following steps. (1) The original data is randomly divided into K subsamples, which serve as the training data. (2) For each fold, the model is estimated using K − 1 subsamples, with the Kth subsample serving as the validation dataset. The process is repeated until every subsample has served as the validation set, and the model results are averaged across the folds. K-fold CV can also be extended by dividing the original data into a subset that goes through the K-fold CV process, while the rest of the data forms another subset used to evaluate the final model performance. This final subset is often termed the test data; the testing set is utilised to assess the generalisation error of the finalised model (Zhang et al. 2021). K-fold CV therefore considers all three components: training, validation and testing. Although there is no definite rule for determining the value of K, the number of folds was set to five in this study, as suggested by Kohavi (1995) and Wang et al. (2015).
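A minimal scikit-learn sketch of the grid search with five-fold CV described above is shown below for the RF model, reusing the hypothetical training split from the MLP sketch. The parameter grid is illustrative; the paper does not list the values it actually searched in this excerpt.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, KFold

# Illustrative grid of candidate hyperparameter values
param_grid = {"n_estimators": [100, 200, 400],
              "max_depth": [None, 5, 10],
              "max_features": ["sqrt", 1.0]}

cv = KFold(n_splits=5, shuffle=True, random_state=42)   # K = 5, as adopted in the study
search = GridSearchCV(RandomForestRegressor(random_state=42),
                      param_grid, cv=cv, scoring="r2")
search.fit(X_train, y_train)
print("Best hyperparameters:", search.best_params_)
print("Best cross-validated R2:", search.best_score_)
```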
3. DATABASE

In ML, the size of the sample is a crucial factor in developing an effective prediction model. In this study, 1520 samples were generated from collated laboratory results of soil–GM interface friction to formulate the models (Raviteja and Basha 2015). The ranges of the anchorage geometric parameters were chosen to be wide, covering all practical possibilities. A dataset of 1441 samples (excluding outliers) was compiled with nine variable parameters: soil–liner interface friction angle (δ), unit weight of soil (γ), runout length (Lro), depth of anchor trench (dat), depth of cover soil (dcs), slope angle of trench (ψ), length of anchor trench (Lat), side slope angle (α) and allowable GM tensile force (Ta). The statistics of these parameters are listed in Table 1, with the mean value (μ), standard deviation (σ) and size of the dataset (n). The extent of variability among the input parameters is shown in Figure 8.

Table 1. Statistical descriptors for input and output variables

| Statistic | δ (°) | γ (kN/m³) | Lro (m) | dat (m) | dcs (m) | ψ (°) | Lat (m) | α (°) | Ta (kN/m) |
|---|---|---|---|---|---|---|---|---|---|
| μ | 24.01 | 17.00 | 1.72 | 0.30 | 0.29 | 40.83 | 0.78 | 33.45 | 14.98 |
| σ | 7.05 | 0.97 | 0.72 | 0.11 | 0.12 | 18.67 | 0.40 | 6.61 | 10.10 |
| Max | 42 | 18.7 | 3.0 | 0.5 | 0.5 | 84.28 | 1.5 | 44.98 | 45.95 |
| Min | – | 15.3 | 0.1 | 0.1 | 0.1 | 7.96 | 0.1 | 22 | 0.51 |
| n | 1441 | 1441 | 1441 | 1441 | 1441 | 1441 | 1441 | 1441 | 1441 |

Figure 8. Histograms indicating the wide range of variability among the input parameters (γ, δ, dat, Lro, dcs, ψ, α and Lat)

Raviteja and Basha (2018) developed a mathematical model to accurately predict anchorage capacity, which is a function of friction angle, unit weight of cover soil and trench geometry. There are standard ranges for the variables, such as the geometry of the trench, the depth of soil cover and the thickness of the GM liner, to be used in MSW containment facilities. The anchorage capacity is computed by varying these variables within the practical ranges, as shown in Table 1, as well as using 65 measured friction angle values. These results are then used as data points for ML modelling. Thus, with the modelled variables and the 65 measured friction angle values, there was a total of 1441 data points for ML modelling.

In general, deep neural network (DNN) models such as convolutional neural networks (CNN), recurrent neural networks (RNN) and generative adversarial networks (GAN) require a large amount of data (from hundreds to thousands of samples) to build a successful model. However, such DNN models were not used in the present study. The present study employs MLP (shallow neural network), XGB (regression/classification), SVR (regression/classification), RF (classification/regression) and LWR (regression). For the chosen ML models, as a general thumb-rule, 50–100 data points per predictor are required to build an efficient ML model (Gopaluni 2010; Bujang et al. 2018; Hecht and Zitzmann 2020). It is also crucial to evaluate all other influencing variables when determining the optimal amount of data for analysis. The present study employed 1441 input data samples for each of the nine variables used in the analysis.

3.1 Correlation analysis

The correlation coefficient is determined to understand the relationship between the dataset parameters. Pearson's correlation coefficient is used in this study to quantify the relationship between each pair of parameters in terms of strength and direction. Correlation coefficient (ρ) values range from −1 to +1 (−1 ≤ ρ ≤ 1), where +1 indicates a strong positive correlation and −1 indicates a strong inverse correlation. The heat map presenting the correlation coefficients of the chosen parameters is given in Figure 9. The analysis was useful in identifying the significant governing parameters and in the further development of the model algorithm. As is evident from Table 2, the soil–liner interface friction (δ) has a strong influence in governing the tensile force against pull-out failure.

Figure 9. Heat map representation of the correlation coefficient matrix (δ shows the strongest positive correlation with Ta, ρ = 0.58)

3.2 Examination of outliers

Outliers are points that differ from the remaining observations, taking extreme values. These outliers can interfere with training and cause algorithms to underperform, resulting in less accurate ML models. Specific to this study, the box plot was used to clean the data. It is a standardised method of displaying the data distribution using a five-point summary (minimum, 1st quartile (Q1), median, 3rd quartile (Q3), maximum) and is used to detect outliers.
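A minimal pandas sketch of the Pearson correlation analysis and the box-plot style outlier screening described in Sections 3.1 and 3.2 is given below. The 1.5×IQR whisker rule is the usual box-plot convention and is assumed here; the file and column names are the hypothetical ones used in the earlier sketches.

```python
import pandas as pd

data = pd.read_csv("anchorage_dataset.csv")   # hypothetical file with the nine columns

# Pearson correlation matrix (the basis of the Figure 9 heat map)
corr = data.corr(method="pearson")
print(corr["T_a"].sort_values(ascending=False))

# Box-plot style outlier screening using the conventional 1.5*IQR whisker rule
q1, q3 = data.quantile(0.25), data.quantile(0.75)
iqr = q3 - q1
inside = ~((data < (q1 - 1.5 * iqr)) | (data > (q3 + 1.5 * iqr))).any(axis=1)
cleaned = data[inside]
print(len(data), "samples before screening,", len(cleaned), "after")
```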
