
A Deep Learning-Based Computational Algorithm for Identifying Damage Load Condition: An Artificial Intelligence Inverse Problem Solution for Failure Analysis


Shaofei Ren, Guorong Chen, Tiange Li, Qijun Chen and Shaofan Li

Abstract: In this work, we have developed a novel machine (deep) learning computational framework to determine and identify damage loading parameters (conditions) for structures and materials based on the permanent or residual plastic deformation distribution or damage state of the structure. We have shown that the developed machine learning algorithm can accurately and (practically) uniquely identify both prior static and impact loading conditions in an inverse manner, based on the residual plastic strain and plastic deformation as forensic signatures. The paper presents the detailed machine learning algorithm, the data acquisition and learning processes, and validation/verification examples. This development may have significant impacts on forensic material analysis and structural failure analysis, and it provides a powerful tool for material and structure forensic diagnosis, determination, and identification of damage loading conditions in accidental failure events, such as car crashes and infrastructure or building collapses.

Keywords: Artificial intelligence (AI), deep learning, forensic materials engineering, plastic deformation, structural failure analysis.

1 Introduction

Most engineering materials, products and structures are designed, manufactured or constructed with the intention that they function properly. However, they can fail, become damaged, or fail to operate as intended for various reasons, including material or design flaws, extreme loading, etc. It is important to identify the reasons for these failures or damage situations in order to improve designs and detect flaws in the materials or designs. One of the essential requirements for identifying the reasons for these failures is to know the loading conditions that led to them.

Engineering products and structures are designed with specific intent and function. They may fail due to reasons including material shortcomings, construction flaws, extreme loading, or other conditions and behaviors exceeding their design parameters. To improve designs and prevent failures, it is important to analyze actual failures and identify the associated loading conditions.

Unfortunately, these loading conditions are not readily known, while the forensic signatures such as the plastic strains or plastic deformations are easily measurable. For example, in car crashes, the impact loads on cars are not known, while the permanent deformations can be quantified after the fact. If the impact loads can be determined, it could potentially help insurance companies determine which party is responsible for the accident, and help car manufacturers develop more realistic crash test scenarios. Both of these have high potential for concrete and significant economic impact.

Particularly in the case of car crashes, it is very important to know the impact loads for two reasons. First, this will greatly help to determine which party is mainly responsible for the accident. Second, accidents are real crash tests. If the accident data can be added to the crash test data, it can help enhance car designs substantially.

What emerges from these considerations is an inverse problem of finding loading conditions from engineering responses. This represents an inverse of current engineering practice, in which the typical setup is to develop finite element models of structures, subject them to static and dynamic loading conditions, and then compute the resulting strains and residual displacements.

As a general methodology, we believe that machine learning (ML) and artificial intelligence (AI) techniques provide an effective solution for inverse problems. ML and AI encompass powerful tools for extracting complicated relationships between input and output sampling data, typically through a training process, and then using the uncovered relationship to make predictions [Hastie, Tibshirani and Friedman (2009); Nasrabadi (2007)]. ML and AI have found a large number of successful applications in various fields beyond their birthplace in computer science [Sebastiani (2002); Bratko, Cormack, Filipič et al. (2006); Sajda (2006)]. Recent years have also witnessed a number of studies devoted to applying ML techniques to explore forensic materials engineering problems [Jones, Keatley, Goulermas et al. (2018); Mena (2016)]. In the context of this paper, the measurable engineering responses are fed as input, with the loading conditions as output. The inverse nature of the problem does not hinder ML and AI's effectiveness in discovering complex mathematical relationships between input and output.

This paper develops a novel machine learning computational framework to determine and identify damage loading conditions for structures and materials based on their permanent or residual plastic deformation or damage state. Our work combines the current mature state of finite element models with the recent advances in machine learning methodologies. This approach advances the state of the art in forensic materials engineering, which seeks to examine material evidence and determine the original causes [Lei, Liu, Du et al. (2019); Zheng, Zheng and Zhang (2018); Zhou, Tang, Liu et al. (2018); Kirchdoerfer and Ortiz (2018)]. We believe the ML-based approach can solve many previously intractable problems, for which prior approaches incurred impractical computational costs due to the scale of the finite element models, the large number of degrees of freedom, and the complex and dynamic nature of the loading forces.

The rest of the paper begins with a detailed description of the machine learning algorithm used. We then outline our process for gathering the required data to train the machine learning algorithms. This is followed by examples that demonstrate how we solve the inverse problem, including a cantilever beam of inelastic materials statically loaded at different locations, and the same beam loaded dynamically with impact loading. We seek to demonstrate with these examples that the machine learning algorithms can accurately identify both static loading and impact loading conditions based on observed residual plastic strain or deformation.

2 Methods

2.1 Deep neural network model

Recently, machine learning, especially deep neural networks, has become one of the most popular keywords in every scientific or engineering field. Deep learning architectures, such as deep neural networks, deep belief networks and recurrent neural networks, have been applied to fields including computer vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics, drug design and board game programs, where they have produced results comparable to and in some cases superior to human experts, e.g. [Ciresan, Meier and Schmidhuber (2012); Krizhevsky, Sutskever and Hinton (2012)].

Figure 1: Flowchart of developing the machine learning neural network

Using machine learning methods, one can make relatively accurate predictions for some problems that were difficult to solve before. In this work, we adopted a fundamental model of a deep neural network (DNN), namely an artificial neural network. There are five stages in our model, as shown in Fig. 1.

(1) Data collection: collect data from the Abaqus software.

(2) Data cleaning and processing with feature selection and feature engineering. During this stage, we applied dimension reduction, removed irrelevant data, and created new features so the model could perform better.

The original static data set has 120 variables, which represent 60 pairs of displacements in the x and y directions. The 60 pairs of data cover three parts of the cantilever beam: the top, middle, and bottom rows. Due to their similarity, the top and bottom data were eliminated and we only kept the middle part. Since the cantilever beam has a relatively small displacement in the x-direction, we only kept the y-displacements to simplify the model. For the feature engineering, we used our domain expertise and mathematical methods to create new related features and established five new variables, including the slope obtained from plotting displacement x against displacement y, the summation of the y-displacements, the amplitude of the x-displacement, and the centroid distance. (First, we find a linear fit for the deformation change, namely the approximate slope of displacement x versus displacement y from the plotted data. Second, we take the sum of the y-displacements as another new feature. Third, to make the feature more obvious, we create a variable as the top x-displacement minus the bottom x-displacement. Last but not least, we use the centroid distance as a new feature. The equation is

where ux is the displacement in x and uy is the displacement in y.) For the dynamic data, we took a more radical approach. As with the static data, we pick only 20 variables, but instead of keeping them directly we use the product of displacement x with displacement y, which makes all the variables new. We then applied the summation of these variables and the centroid distance of the new data set (a code sketch of this feature-engineering step is given after this list).

(3) Building the network: initialize the biases, weights, number of layers, and number of neurons in each layer.

(4) Application of the deep neural network to obtain a specific mathematical model. It is noted that the basic mathematical model for the first stage of the neural network is a simplified projection pursuit regression [Friedman and Stuetzle (1981)], which takes the form

$$\hat{y}=\sum_{j} g\!\left(\mathbf{w}_{j}\cdot\mathbf{x}\right),$$

where g(x) is an activation function, w is a distributed weight, and x is an observation. During the second stage, the weights w are redistributed through back-propagation to minimize the loss or error used.

(5) Using a chosen test data set, which is different from the training sets, to validate the model and analyze errors.
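As a reference for stage (2), the following is a minimal NumPy sketch of the feature-engineering step described above. The exact formulas, in particular the centroid distance and the dynamic-case reduction, are not reproduced in the paper, so the expressions below are assumptions for illustration only.

```python
import numpy as np

def static_features(ux, uy):
    """Reduce the mid-line (ux, uy) displacement pairs of a static case to the
    engineered features of stage (2); formulas are illustrative assumptions."""
    slope = np.polyfit(ux, uy, 1)[0]            # approximate slope of uy vs. ux
    uy_sum = uy.sum()                           # summation of y-displacements
    ux_amp = ux[0] - ux[-1]                     # top minus bottom x-displacement
    centroid = np.hypot(ux.mean(), uy.mean())   # assumed centroid distance
    return np.concatenate([uy, [slope, uy_sum, ux_amp, centroid]])

def dynamic_features(ux, uy):
    """For dynamic cases, keep the products ux*uy plus their sum and the
    centroid distance of the new data set (again an assumed formula)."""
    prod = ux * uy
    return np.concatenate([prod, [prod.sum(), np.hypot(ux.mean(), uy.mean())]])
```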

2.2 Models and settings

As mentioned earlier, a deep neural network [LeCun, Bengio and Hinton (2015)] is used in this study to solve this inverse problem, with the procedures considered for the features. One sample fully-connected layer is shown in Fig. 2 [Castrounis (2016)]. For one particular neuron j, the employed mathematical model is described as

$$y_{j}=\sigma\!\left(\beta_{0}+\sum_{k}\beta_{k}X_{k}\right).\qquad(3)$$

One can clearly observe from Fig. 2 that the input data flow (X), multiplied by the distributed weights (βk) and then added to a bias (β0) as the argument of an activation function σ(x), reflects the mathematical model in Eq. (3). Among the options for the activation function are the hyperbolic tangent, the sigmoid, and the ReLU. According to Ramachandran et al., "currently, the most successful and widely-used activation function is the Rectified Linear Unit (ReLU)" [Ramachandran, Zoph and Le (2018)]. In our case, the Rectified Linear Unit (ReLU) performs well as the activation function σ(x) for both static and dynamic loading conditions. ReLU provides the nonlinear function f(x) = max(0, x). Over-fitting is usually a concern when a model performs too "well" on the training data set. In order to prevent over-fitting, dropout was applied during the training process; "the key idea is to randomly drop units (along with their connections) from the neural network during training" [Srivastava, Hinton, Krizhevsky et al. (2014)].
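For concreteness, the single-neuron model of Eq. (3) with a ReLU activation can be sketched in a few lines of Python; the variable names are illustrative only.

```python
import numpy as np

def neuron_output(x, beta, beta0):
    """y_j = sigma(beta0 + sum_k beta_k * x_k), with sigma chosen as ReLU."""
    z = beta0 + np.dot(beta, x)      # weighted sum plus bias
    return np.maximum(0.0, z)        # ReLU activation: f(z) = max(0, z)
```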

Figure 2: Illustration of the neuron structure of the neural network

Furthermore, the loss function used is the Mean Square Error (MSE), which is defined as

$$\mathrm{MSE}=\frac{1}{n}\sum_{i=1}^{n}\left(y_{i}-\hat{y}_{i}\right)^{2},$$

where the y_i are the correct outputs and the ŷ_i are the network predictions. MSE is commonly adopted as a loss function in statistical models. It provides an intuitive measurement of errors. The objective is to minimize the MSE and make the model fit both the training data and the validation data well. An optimizer adjusts the weights between forward and backward propagation in order to achieve this objective. The optimization tool used herein to minimize the MSE is the Adam optimizer [Kingma and Ba (2014)] with an exponentially decaying learning rate.
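A minimal TensorFlow sketch of this loss/optimizer setup is given below. The initial learning rate matches Section 2.3; the exponential-decay steps and rate are not reported in the paper and are assumed values.

```python
import tensorflow as tf

# MSE loss and Adam optimizer with an exponentially decaying learning rate,
# as described above; decay_steps and decay_rate are assumed values.
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.0035, decay_steps=1000, decay_rate=0.96)
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
loss_fn = tf.keras.losses.MeanSquaredError()
```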

2.3 Implementation

TensorFlow is an open-source software library for dataflow programming across a range of tasks. It is a symbolic math library, and is also used for machine learning applications such as neural networks. In this project, we developed a machine learning computer code using TensorFlow.

There are four hidden layers in the developed DNN code, with a learning rate of 0.0035 and a dropout of 0.05. We use ReLU as the activation function, and we ran the training for 18,000 steps. In general, the number of hidden neurons should be between the size of the input layer and the size of the output layer. The input parameters of the neural network are the plastic (permanent) displacements along both the horizontal and vertical directions of all the nodes of the FEM mesh of the cantilever beam, except the boundary nodes. The size of the input layer is 60 or more after the numerical processing, depending on the mesh of the model. The size of the output layer is 3 or 8, depending on the numerical model. Thus, we choose from eight to sixty hidden neurons per layer in our machine learning code. After trying different numbers of hidden layers, we found that a neural network with four hidden layers provides good computational results for the FEM mesh size and numerical model used. More hidden layers lead to much more calculation time, but not much better results. Thus, we chose four hidden layers as the default structure for our machine learning test code. The number of hidden neurons should be less than twice the size of the input layer. Hence, we chose 32 neurons for the first hidden layer, 64 neurons each for the second and third hidden layers, and 8 neurons for the final hidden layer.
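The network described above can be sketched with the Keras API of TensorFlow as follows. The layer sizes, activation, dropout rate, learning rate, and loss follow Section 2.3; where exactly dropout is inserted and the use of a linear output layer are assumptions.

```python
import tensorflow as tf

def build_dnn(input_dim=60, output_dim=8, dropout_rate=0.05):
    """Four hidden layers (32, 64, 64, 8 neurons) with ReLU and dropout, as in
    Section 2.3; dropout placement and the linear output layer are assumptions."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(input_dim,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dropout(dropout_rate),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dropout(dropout_rate),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dropout(dropout_rate),
        tf.keras.layers.Dense(8, activation="relu"),
        tf.keras.layers.Dense(output_dim),   # load parameters (3 or 8 values)
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(0.0035), loss="mse")
    return model

# Example usage: 8 outputs for the static multi-node case, 3 for dynamic cases.
# model = build_dnn(output_dim=3)
# model.fit(x_train, y_train, epochs=..., validation_data=(x_val, y_val))
```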

3 Data collection

3.1 Geometric and material properties

Table 1: Mechanical properties of AISI 4340 steel (33 HRc) (from [Guo and Yen (2004); Johnson and Cook (1985)])

A 2D cantilever beam was chosen for the study, with a length of 5 m and a width of 1 m. The finite element model of the cantilever beam was developed with the ABAQUS software. A plane strain condition was assumed throughout this study, and the CPE4R element was used, which is a 4-node bilinear plane strain quadrilateral element [Simulia (2011)]. In order to ensure the accuracy of the numerical calculation, different mesh sizes of the beam were used in a convergence analysis, and the final mesh size was chosen as 0.25 m. Accordingly, the number of nodes is 105, and the number of elements is 80.

We chose AISI 4340 steel (33 HRc) as the material of the cantilever beam, which is modeled using the Johnson-Cook plasticity model [Guo and Yen (2004); Johnson and Cook (1985)]. It is modeled as a thermo-elastoplastic solid, with the flow stress expressed as

$$\sigma=\left(A+B\,\epsilon_{p}^{\,n}\right)\left(1+C\,\ln\frac{\dot{\epsilon}_{p}}{\dot{\epsilon}_{0}}\right)\left(1-T^{*m}\right),$$

where σ is the flow stress, ε_p is the equivalent plastic strain, ε̇_p is the strain rate, ε̇_0 is the reference strain rate, A, B, C, m, n are material constants, and T* = (T − T_0)/(T_m − T_0) is the homologous temperature, which relates the absolute temperature T to the reference temperature T_0 and the melting temperature T_m.
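For reference, the standard Johnson-Cook flow stress can be evaluated as in the short Python function below. Since the paper does not reproduce the equation explicitly, the expression (and the default reference values) should be read as an assumed, illustrative form of the model.

```python
import numpy as np

def johnson_cook_stress(eps_p, eps_dot, T, A, B, C, n, m,
                        eps_dot0=1.0, T0=293.0, Tm=1793.0):
    """Assumed standard Johnson-Cook flow stress:
        sigma = (A + B*eps_p^n) * (1 + C*ln(eps_dot/eps_dot0)) * (1 - T_star^m),
    with homologous temperature T_star = (T - T0)/(Tm - T0).
    The reference values eps_dot0, T0 and Tm are illustrative defaults."""
    T_star = (T - T0) / (Tm - T0)
    return ((A + B * eps_p**n)
            * (1.0 + C * np.log(eps_dot / eps_dot0))
            * (1.0 - T_star**m))
```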

The critical failure strain is defined as [Guo and Yen (2004); Johnson and Cook (1985)]:

$$\epsilon_{f}=\left[D_{1}+D_{2}\exp\!\left(D_{3}\,\sigma^{*}\right)\right]\left[1+D_{4}\,\ln\frac{\dot{\epsilon}_{p}}{\dot{\epsilon}_{0}}\right]\left[1+D_{5}\,T^{*}\right],$$

where the D_i are material constants and σ* is the dimensionless pressure-stress ratio. The material parameters of AISI 4340 steel are listed in Tab. 1.
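Similarly, the Johnson-Cook critical failure strain can be sketched as below; again this is the textbook form of the criterion, assumed here because the paper's equation is not reproduced.

```python
import numpy as np

def johnson_cook_failure_strain(sigma_star, eps_dot, T, D,
                                eps_dot0=1.0, T0=293.0, Tm=1793.0):
    """Assumed standard Johnson-Cook failure strain:
        eps_f = [D1 + D2*exp(D3*sigma_star)] * [1 + D4*ln(eps_dot/eps_dot0)]
                * [1 + D5*T_star],
    where sigma_star is the dimensionless pressure-stress ratio and
    D = (D1, ..., D5) are the material constants of Tab. 1."""
    D1, D2, D3, D4, D5 = D
    T_star = (T - T0) / (Tm - T0)
    return ((D1 + D2 * np.exp(D3 * sigma_star))
            * (1.0 + D4 * np.log(eps_dot / eps_dot0))
            * (1.0 + D5 * T_star))
```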

3.2 Boundary conditions and loads

Figure 3: Finite element model of the cantilever beam and the loading positions

Figure 4: Dynamic loading time history

The finite element model of the cantilever beam is presented in Fig. 3. All degrees of freedom of the five nodes on the left edge at x = 0 m are rigidly fixed. As shown in this figure, seven numbered nodal points were chosen for applying loads, and corresponding sets of simulation data were generated. In most cases when loads are applied to the three points closest to the support of the cantilever beam (nodes 5, 6 and 7 in Fig. 3), the residual displacement and plastic strain are zero at all nodes, which leads to a multiple-answer issue for the DNN, since zero residual displacement corresponds to three different loading locations. Thus, the loads at these points are excluded and only loading points 1-4 are considered in the training, as shown in Fig. 3.

Both static and dynamic responses of the cantilever beam under concentrated loading forces are computed using ABAQUS. It should be noted that the loading history of each concentrated force acting on the beam, which can cause plastic deformation, is characterized by a bi-linear loading and unloading curve with pulse width t and loading amplitude Fmax, as shown in Fig. 4.

Four case studies have been conducted to test, validate, and verify the deep learning algorithm and the trained neural network:

(1) Prediction of static loads acting on the numbered nodes (also referred to as training nodes in this paper). Static loads were simultaneously imposed on one to four of the numbered nodes shown in Fig. 3. Responses of the beam under static loads of different amplitudes were numerically predicted, and the database for the deep learning was developed. Then, a different deformation of the beam, caused by loads applied to the numbered nodes, was generated by the ABAQUS program, and the amplitude of the static load was predicted by the deep learning algorithm and compared with the exact solution.

(2) Prediction of static loads acting between the numbered nodes. In this case study, the database for the deep learning was also developed by applying static forces to the numbered nodes. Then, a deformation of the beam caused by a load acting on a node between two adjacent numbered nodes (as shown in Fig. 3) was generated by the ABAQUS program. The amplitude and position of the static load were predicted by the deep learning algorithm.

(3) Prediction of the impact load acting on the numbered nodes. Impact loads were imposed on one numbered node. Responses of the beam under impact loads with different amplitudes and durations were numerically predicted. Then, a different deformation of the beam caused by an impact load acting on this numbered node was given, and both the amplitude and duration of the impact load were predicted.

Table 2: Loads for the static analysis

Table 3: Loads for the dynamic analysis

Figure 5: Permanent displacement distribution: (a) plastic displacement along the horizontal direction, and (b) plastic displacement along the vertical direction

(4) Prediction of the impact load acting between the numbered nodes. This case study is similar to the second case study. Position, amplitude and duration of the impact load were all predicted by the deep learning algorithm.

The bi-linear pulse load-time history curve is assumed throughout this study, as shown in Fig. 4. For the static analysis, the step time T equals 1 s, and the durations of loading and unloading are 0.5 s each. For the dynamic response, the durations of loading and unloading are significantly reduced to the order of 0.01-0.03 s to simulate impact loads. Meanwhile, in order to minimize the influence of inertial effects and obtain the final stable deformation of the beam, the step time T for the dynamic analysis is set to 50-100 s. For the static and dynamic analyses, the amplitude, duration and step time for the first two case studies are listed in Tab. 2 and Tab. 3, respectively. For the multi-point conditions, the amplitude of the load is reduced. Taking the static analysis as an example, the amplitude of the load was between 2.0 × 10⁷ and 7.0 × 10⁷ N for the two-node cases, 1.0 × 10⁷ and 5.0 × 10⁷ N for the three-node cases, and 1.0 × 10⁷ and 4.0 × 10⁷ N for the four-node cases.
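The bi-linear pulse of Fig. 4 can be written as a simple piecewise-linear function of time; the sketch below is illustrative, with function and argument names that are not from the paper.

```python
def bilinear_pulse(t, F_max, t_load, t_unload):
    """Bi-linear load-time history of Fig. 4: linear ramp up to F_max over
    t_load, then linear ramp back down to zero over t_unload."""
    if t < 0.0 or t > t_load + t_unload:
        return 0.0
    if t <= t_load:
        return F_max * t / t_load
    return F_max * (1.0 - (t - t_load) / t_unload)

# Static analysis: bilinear_pulse(t, F_max, 0.5, 0.5) within a step time of 1 s;
# impact cases use loading/unloading durations on the order of 0.01-0.03 s.
```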

3.3 Training data

When forces of the above magnitudes are applied to the cantilever beam, it develops permanent plastic deformation. One particular example of the plastic displacement is shown in Fig. 5, while the associated residual plastic strain distribution is shown in Fig. 6.

After obtaining the residual plastic displacements from all FEM nodes of the beam, we combine them together as the input of one set of training data for the DNN. The process is shown in Fig. 7. The right table of Fig. 7 is the input of one set of raw training data for the DNN. The output of this set of training data is the value and location of the external force corresponding to this plastic displacement distribution for static models, and the location, magnitude and duration corresponding to this plastic displacement distribution for dynamic models.
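The assembly of one training sample in Fig. 7 amounts to flattening the residual nodal displacements into an input vector and attaching the load descriptor as the label. A minimal sketch, with illustrative array names, is given below.

```python
import numpy as np

def assemble_sample(ux, uy, load_label):
    """Build one (input, output) pair as in Fig. 7: the residual nodal
    displacements form the DNN input vector, and the label holds the load
    location and magnitude (static) or location, magnitude and duration
    (dynamic)."""
    x = np.concatenate([ux.ravel(), uy.ravel()])   # one row of the input table
    y = np.asarray(load_label, dtype=float)
    return x, y

# Stacking many simulated cases gives the training matrices, e.g.
# X_train = np.vstack([assemble_sample(u, v, lbl)[0] for (u, v, lbl) in cases])
```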

4 Results and discussions

4.1 Prediction of the static loads on training nodes

Table 4: The correct and predicted loads of the testing cases (10⁷ N)

Generally speaking, in machine learning, the more data are available, the more accurate a prediction may be. However, there is still no rule about how much data is enough; it depends on how complex the problem and the learning algorithm are. The rules we used to generate data are as follows:

Figure 6: Permanent plastic strain distribution: (a) plastic strain component ε^p_11, (b) plastic strain component ε^p_22, and (c) plastic strain component ε^p_12

1. Generate the preliminary database with a small number of samples.

2. Train the model to see how it performs and whether the predicted accuracy meets the requirement.

3. If the results are not accurate enough, generate more data to see if the performance increases.

4. If the performance increases, repeat Steps 1 to 3 until the result reaches the required accuracy.

5. If the performance stays still or increases only slowly, modify the learning model or the learning parameters.

For our static problem, we generated 290 sets of data (about 20 sets for each load case), from which we obtained a good training effect. The training effect is usually evaluated by the training loss, which is a value representing the fit of the model to the training data.

Figure 7: Collecting one set of training data input

Figure 8: Training loss

In this study, the mean square error (MSE) between the outputs and the correct results was chosen as the training loss. The loss is updated in each epoch, which shows the fit of the model. The record of the loss throughout the process in our case is shown in Fig. 8.

Table 5: Predicted errors of the testing cases (%)

In Fig. 8, the loss gradually decreases and reaches a minimum value of 0.2611 after 18k steps. The minimum value and the smooth descent of the loss curve indicate that the model was trained steadily and fits the training data very well.

To test the performance of the model, four sets of testing data were generated. Each set of testing data represents one type of loading combination, as shown in Tab. 4. For this problem, we designed the output layer with eight neurons, which represent the loads in two directions at the four valid loading points, as mentioned in Section 3.2. Feeding the testing data to our DNN model, we get the output shown in Tab. 4. The errors of the prediction are listed in Tab. 5.

In Tab. 4, due to the eight-neuron output layer, all eight values of the output are nonzero, but the values at the actually loaded nodes are much larger than the values at the other nodes. For example, in Case 1, the real load is located at node 4. In the predicted result, the x- and y-components at node 4 are obviously much larger than the values at the other nodes, and they are very close to the real values, with errors of 1.71% and 1.998%. The outputs of the other three test cases show the same pattern. The predicted errors are all smaller than 5%, as shown in Tab. 5. Therefore, the trained DNN model can correctly predict the loading locations and the magnitudes of the static loads acting on the training nodes.
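The way Tab. 4 is read can be expressed as a small post-processing step: the loaded node is taken to be the one whose predicted force components are largest. The sketch below assumes an output layout of [F1x, F1y, ..., F4x, F4y], which is our reading of the eight-neuron layer rather than something stated explicitly in the paper.

```python
import numpy as np

def interpret_static_output(pred):
    """Pick the loaded node and its force components from the 8-neuron output
    (assumed layout [F1x, F1y, ..., F4x, F4y])."""
    pairs = pred.reshape(4, 2)                            # rows: nodes 1-4
    node = int(np.argmax(np.linalg.norm(pairs, axis=1))) + 1
    Fx, Fy = pairs[node - 1]
    return node, Fx, Fy
```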

4.2 Prediction of the static loads between training nodes

Table 6: Correct values of the testing data

Table 7: Predicted results of the testing data

Table 8: Predicted errors of the testing data (%)

In Section 4.1 we demonstrated the capability of the DNN model to predict the static loads acting on the training nodes. In this section, following the data collection rules outlined in Section 4.1, we trained the DNN model with 133 sets of data that were obtained by applying static loads to the four valid training nodes individually. Then, we tested the model with 8 sets of deformation states caused by 8 different loads acting in the intervals between the training nodes, to see if the DNN can make extended predictions. The testing data are shown in Tab. 6. We designed the output layer with three neurons, which represent the location of the load and the magnitudes of the load in the x and y directions, because the loads are all acting individually in this section. The predicted results are shown in Tab. 7 and Tab. 8.

According to the results, all the predicted locations were in the correct intervals. The correct interval is important information, which indicates the location range of the load. One can then further locate the load by subdividing or reducing the interval. For the prediction of the specific location and of the magnitudes in the x and y directions, the maximum errors are 6.554%, 7.366% and 4.424%, respectively. The average error of the output is 3.098%, which is less than 5%. The errors of the prediction are small and can meet many engineering requirements. Therefore, the results show that the DNN model is able to predict static loads in the intervals between the training nodes.

4.3 Prediction of the impact loads on training nodes

Table 9: Predicted errors of the nodal impact loads (%)

Figure 9: Training loss

Figure 10: Training and testing data

Compared to static problems, dynamic problems are more common in real situations but also more complicated. Following the data collection rules stated in Section 4.1, for the dynamic problem discussed here, about 40 sets of data for each load case produce a good training effect. We finally trained the DNN model with 175 sets of deformation states caused by different single vertical impact loads acting on the training points. Then five sets of test data were generated to test the performance of the trained model. All the training and testing data are shown in Fig. 10. The output layer has three neurons, which represent the location, magnitude and duration of the impact loads. The training loss throughout the training process is shown in Fig. 9. The minimum loss (MSE) reached 0.056 after 18k iteration steps, which shows a good fit of the model to the training data. The predictions for these impact loads are shown in Fig. 11. The predicted loads and the correct loads are very close to each other in the three-dimensional space shown. The maximum errors of the predicted location and magnitude are 3.321% and 2.338%, respectively, as shown in Tab. 9. The maximum error of the duration is 12.321%. During the study, we found that the load duration is more difficult to predict than the other two parameters. The overall errors of these five testing cases are less than 5.4%. This shows that the DNN also works very well in the prediction of impact loads located on the training nodes, especially for the location and magnitude of the impact.

Figure 11: Predicted results of the loads on the nodes

4.4 Prediction of the impact loads between training nodes

Table 10: Predicted errors of the loads in intervals (%)

Figure 12: Training and testing data

Figure 13: Predicted results of the loads in the intervals

For the prediction of impact loads acting within different intervals, we generated three sets of testing deformations, caused by three different impact loads acting in each interval individually. Then, we used the DNN model from Section 4.3, which had been trained with the 175 sets of deformations produced by single impact loads acting on training nodes. The values of the training data and the test loads are shown in Fig. 12. The output of the DNN model is shown in Fig. 13, and the errors of the output are listed in Tab. 10. The maximum errors of the location, magnitude and duration are 9.718%, 6.869% and 7.213%, respectively. Compared to Section 4.3, the overall errors are larger but still less than 10%, and the predicted locations are all in the correct intervals. It is concluded that the DNN can also predict loads in the intervals between the training nodes.

The results show that the DNN model is capable of interpolating between the training data by itself. However, the interpolation accuracy still depends on the density of the training data: the higher the density of the training data, the higher the prediction accuracy will be.

Thus, while training the DNN model, we can choose some typical loading cases as training data depending on the accuracy demand, which saves a lot of time in the training phase. To further reduce the prediction error, another approach is developed, as explained in Section 4.5.

4.5 Improving the accuracy of the prediction of the load location

Figure 14: Refined intervals

Table 11: Correct values of the predicted impact loads

Table 12: Predicted values of the impact loads

According to the above results, the DNN model can identify the correct interval in which the load acts. Therefore, after the first prediction, we can concentrate our attention on the predicted interval. To further locate the true value of the load, we can subdivide the interval into several smaller intervals and then add more training data within this area. Here, we took Testing Case 3 in Section 4.4 as an example. The DNN model had predicted that the load was located in interval 1-2, so we subdivided interval 1-2 into three smaller intervals. The location coordinates of the smaller intervals are 4.25-4.375 m, 4.375-4.625 m and 4.625-5 m, as shown in Fig. 14. Eighteen sets of new training data were generated on each of the new numbered points 5 and 6. Then, we retrained the DNN model with the data on Nodes 1, 2, 5 and 6. Different from the previous model, which was trained with all the data, the retrained model only predicts loads in interval 1-2 and achieves a much higher accuracy in the prediction. Two sets of testing data were generated, as shown in Tab. 11. The predicted results and the errors before and after the retraining are shown in Tab. 12 and Tab. 13. The predicted errors of all three parameters are greatly reduced by refining the interval, from 9.02%, 10.583% and 5.945% to 0.798%, 2.978% and 4.864%, respectively. Also, the prediction of the smaller intervals reached 100% accuracy. Therefore, by refining the interval and retraining the DNN model step by step, it is possible to finally reach the required accuracy. Based on the convergence tendency, it is also shown that a finer mesh will lead to correct finer intervals and high-accuracy results. Thus, if we had large amounts of data with forces applied at a large variety of locations, which is equivalent to dividing the cantilever beam into a large number of sections, the predicted results would still lie in the correct interval. The accuracy would approach 100% if we had an unlimited amount of data.

Table 13: Predicted errors of the nodal impact loads (%)

5 Conclusions

In this study, we applied deep (machine) learning techniques to solve an inverse engineering problem: the identification of the location, amplitude, and duration of forces applied to a structure, both static and dynamic, based on the permanent residual deformation as well as the permanent strain distribution. For static problems, the developed machine learning algorithm can predict both the location and the amplitude of the force with high accuracy. The location of the applied load does not need to come from the training data; after studying the training data, the machine learning algorithm can automatically use interpolation to find an applied load location that is not in the training data. That is to say, we only need to train the neural network with a limited number of loading sets, and it can then predict loading at any location on the boundary of the cantilever beam. This is a remarkable success for an inverse problem solution that was previously impossible to realize because of the non-uniqueness of inverse problem solutions. Based on this study, we conclude that, by using forensic signatures such as the permanent deformation and residual plastic strain distributions, one can (practically) uniquely determine the applied load conditions inversely, with accuracy high enough for most engineering purposes.

For dynamic loading problems, the current version of our machine learning algorithm can also predict both the location and amplitude of applied loads with high accuracy, whereas the accuracy of the prediction of the load duration is not as high, even though it remains acceptable, i.e. the error is within 5% to 10%. These results are very promising for both static and dynamic inverse problems, and they demonstrate that ML or AI based approaches can solve many previously intractable problems. We shall soon report our machine learning algorithm for inverse failure analysis of three-dimensional structures with more complex geometries and loading conditions, including crashworthiness analysis and forensic analysis of structural failures induced by seismic or other extreme hazardous events.
