
Pushing Mathematical Limits, a Neural Network Learns Fluid Flow

2021-09-24 06:45:28
Engineering 2021, Issue 5

Dana Mackenzie

Senior Technology Writer

Drop a pebble into a flowing stream of water. It may not change the pattern of flow very much. But if you drop a pebble into a different place, it may change a lot. Who can predict?

Answer: A neural network can. A group of computer scientists and mathematicians at the California Institute of Technology (Caltech) in Pasadena, CA, USA, has opened up a new arena for artificial intelligence (AI) by showing that a neural network can teach itself how to solve a broad class of fluid flow problems, much more rapidly and more accurately than any previous computer program [1].

"When our group got together two years ago, we discussed which scientific domains are ripe for disruption by AI," said Animashree Anandkumar, a professor of computing and mathematical sciences and co-leader of the artificial intelligence for science (AI4Science) initiative at Caltech. "We decided that if we could find a strong framework for solving partial differential equations, we could have a wide impact." Their first target was the Navier–Stokes equations in two dimensions, which describe the motion of an infinitely thin sheet of water (Fig. 1) [1]. Their neural network, which they call a "Fourier neural operator," dramatically outperforms previous differential equation solvers on this type of problem, exceeding their speed by a factor of 400 and improving their accuracy by 30%.
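For orientation, the two-dimensional incompressible Navier–Stokes equations are commonly written in vorticity form (generic textbook notation, sketched here rather than copied from Ref. [1]):

$$\partial_t \omega + u \cdot \nabla \omega = \nu \, \Delta \omega + f, \qquad \nabla \cdot u = 0,$$

where $u$ is the velocity field, $\omega = \nabla \times u$ its vorticity, $\nu$ the viscosity, and $f$ a forcing term.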

Partial differential equations (PDEs) are the kind of equation that Isaac Newton's laws of motion naturally lead to. For this reason, they are fundamental to science, and any major advance in solving them would have broad ramifications. "We are having discussions with so many teams, from industry and academia and national labs," said Anandkumar. "We are already doing experiments on fluid flow in three dimensions." One good use case would be the equations for modeling nuclear fusion, Anandkumar said. Another would be materials design, she added, especially plastic and elastic materials, an area in which team member Kaushik Bhattacharya, a professor of mechanics and materials science, "has deep experience."

Computers emerged, in part, out of efforts during the Second World War to predict projectile motion using differential equations [2]. They have been used ever since to solve differential equations, with varying degrees of accuracy and success. But previous approaches, whether they involved traditional computer programming or AI, have always worked on one "instance" of an equation at a time. For example, they can figure out how one pebble dropped in one place affects the flow of water. Then they can learn how a pebble dropped in a different place changes it. But they will not generalize to understand how any pebble dropped in any place changes the flow. That is the ambitious goal behind the Caltech Fourier neural operator.

There is, of course, a good reason why this has not been done before. Neural networks excel at learning associations between what mathematicians call finite-dimensional spaces. For example, the Google AI program AlphaGo, which beat the strongest human Go player, learned a function between Go positions (which are finite, though astronomical, in number) and Go moves [3]. By contrast, the Fourier neural operator takes as input the initial velocity field of a fluid and produces as output the velocity field a certain time later. Both of these velocity fields live in infinite-dimensional spaces, which is just a mathematical way of saying that there are infinitely many ways in which you can toss a pebble into flowing water.
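In the language of operator learning (a standard framing, given here as an illustration rather than a quotation from the paper), the object to be learned is a map between function spaces,

$$G^{\dagger}: \mathcal{A} \to \mathcal{U}, \qquad a \mapsto u,$$

where the input $a$ (say, an initial velocity field) and the output $u$ (the velocity field some time later) are themselves functions, and a network $G_{\theta} \approx G^{\dagger}$ must be trained from only finitely many example pairs $(a_j, u_j)$.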

Fig. 1. Water flows in a thin sheet over a fountain. The Caltech AI4Science team reports that a neural network can predict the motion of such two-dimensional fluid flow much more rapidly and accurately than computer programs using standard methods to solve differential equations [1]. Their work, which has potentially broad ramifications for advancing science through improved modeling of natural phenomena such as nuclear fusion, continues with experiments on fluid flow in three dimensions. Credit: Pixabay (public domain).

The Caltech team trained the Fourier neural operator by showing it a few thousand instances of a Navier–Stokes equation solved by traditional methods [1]. The network is then evaluated by a "cost function," which measures how far off its predictions are from the correct solution, and it evolves in a way that gradually improves its predictions. Because the network starts with a curated set of inputs and outputs, this is called "supervised learning." Google's original version of AlphaGo learned by a combination of supervised and unsupervised learning (though a later version used unsupervised learning only) [3]. Other neural network programs used in image processing typically employ supervised learning [4].
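As a minimal sketch of what such a supervised training loop looks like in practice (an illustration with made-up stand-ins, not the authors' code: the small convolutional "model" and the random tensors stand in for the real operator network and the solver-generated training pairs):

```python
import torch
import torch.nn as nn

# Stand-in network: in Ref. [1] this would be the Fourier neural operator.
model = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.GELU(),
                      nn.Conv2d(16, 1, 3, padding=1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
cost_fn = nn.MSELoss()  # the "cost function": mean squared prediction error

# Stand-in data: pairs of (initial field, field at a later time), which in the
# real setup are precomputed by a traditional differential equation solver.
inputs = torch.randn(64, 1, 32, 32)
targets = torch.randn(64, 1, 32, 32)

for step in range(100):
    prediction = model(inputs)           # network's guess at the later field
    loss = cost_fn(prediction, targets)  # how far off from the correct solution
    optimizer.zero_grad()
    loss.backward()   # backpropagate the error ...
    optimizer.step()  # ... and adjust the weights to shrink it
```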

But no matter how much training data you have, you might not be able to explore more than the tiniest part of an infinite-dimensional space. You cannot try out all the places where you could put a pebble into a stream. And without some kind of prior assumptions, your network is not guaranteed to correctly predict what happens when the pebble is dropped into a new place.

For this and other reasons, "We wanted to take the relevant parts of neural networks and combine them with domain-specific understanding on the math side," said Andrew Stuart, another AI4Science team member and a professor of computing and mathematical sciences.

Specifically, Stuart knew that linear PDEs, the simplest kind of PDE, can be solved with the well-known method of Green's functions, a device used to solve difficult ordinary and partial differential equations that may be unsolvable by other methods [5]. Basically, it provides a template for an appropriate solution to the equation. This template can be approximated in a finite-dimensional space, so it reduces the problem from infinite dimensions to finite dimensions.
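In its simplest form, the idea is this: for a linear differential operator $L$, once a single kernel $G$ (the Green's function) is known, the solution for any forcing $f$ follows by integration,

$$L u = f \quad \Longrightarrow \quad u(x) = \int G(x, y)\, f(y)\, \mathrm{d}y,$$

so the kernel $G(x, y)$ is exactly the "template" described above.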

The Navier–Stokes equations are nonlinear, so no such template is known for them. But if there were something similar to a Green's function for the Navier–Stokes equation, a nonlinear but still finite-dimensional template, then a neural network should be able to learn it. There was no guarantee that this would work, but Stuart called it a "well-informed gamble." Experience has shown time and time again that neural networks are extremely good at learning nonlinear maps in finite-dimensional spaces, he said.
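The "Fourier" in the operator's name refers to how such a learned template can be applied efficiently: transform the field into frequency space, weight a limited number of low-frequency modes with learned parameters, and transform back. A stripped-down sketch of that operation (my illustration of the general idea, not the authors' code) might look like this:

```python
import torch
import torch.nn as nn

class SpectralConv2d(nn.Module):
    """One Fourier-space convolution: FFT, weight low modes, inverse FFT."""
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes  # low-frequency Fourier modes kept per dimension
        self.weights = nn.Parameter(  # learned complex weights for those modes
            torch.randn(channels, channels, modes, modes, dtype=torch.cfloat)
            / channels)

    def forward(self, v):              # v: (batch, channels, height, width)
        v_hat = torch.fft.rfft2(v)     # the field in frequency space
        out_hat = torch.zeros_like(v_hat)
        m = self.modes
        # Multiply only the lowest modes by the learned weights; by the
        # convolution theorem this is a global convolution in real space.
        out_hat[:, :, :m, :m] = torch.einsum(
            "bixy,ioxy->boxy", v_hat[:, :, :m, :m], self.weights)
        return torch.fft.irfft2(out_hat, s=v.shape[-2:])  # back to real space
```

In a full network, the output of such a layer is typically added to a pointwise linear transform of the field and passed through a nonlinearity, with several layers stacked; truncating to a fixed number of Fourier modes is what reduces the infinite-dimensional problem to a finite-dimensional one.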

Learning a nonlinear operator between infinite-dimensional spaces is a "holy grail" of computational science, said Daniele Venturi, an assistant professor of applied mathematics at the University of California, Santa Cruz, in Santa Cruz, CA, USA. Venturi, whose research involves differential equations and infinite-dimensional function spaces, said he is not convinced that the Caltech group has gotten there yet. "It is in general impossible to learn a nonlinear map between infinite-dimensional spaces based on a finite number of input–output pairs," he said. "But it is possible to approximate it. The main question is really the computational cost of such approximation, and its accuracy and efficiency. The results they have shown are really, really impressive."

In addition to unprecedented speed and accuracy, the Caltech group's method has other remarkable properties [1]. By design, it can predict the fluid flow even in places where you have no initial data and predict the result of disturbances not seen before. The program also confirms an emergent behavior of solutions to the Navier–Stokes equations: Over time, they redistribute energy from long to short wavelengths. This phenomenon, called an "energy cascade," was proposed by Andrei Kolmogorov in the 1940s as an explanation for turbulence in fluids [6].
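For context, a textbook statement of Kolmogorov's 1941 scaling (a classical result about fully developed turbulence, not a finding of Ref. [1]) is that the energy spectrum in the inertial range follows

$$E(k) \sim C\, \varepsilon^{2/3}\, k^{-5/3},$$

where $k$ is the wavenumber, $\varepsilon$ the rate of energy dissipation, and $C$ a dimensionless constant: energy injected at large scales drains steadily toward ever-smaller eddies.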

The next frontier for the Fourier neural operator is three-dimensional fluid flow, where turbulence and chaos are major obstacles. Can neural networks tame chaos? "We know that chaos means we cannot precisely predict the fluid motion over long time horizons," Anandkumar said. "But we also know from theory that there are statistical invariants, such as invariant measures and stable attractors." If the neural network could learn where the attractors are, it would be possible to make better probabilistic predictions, even when precise deterministic projections are impossible. Anandkumar points out that the network could control a chaotic system so that it does not head toward an undesirable attracting state. "In nuclear fusion, for example, the ability to control disruptions, such as loss of stability of the plasma, becomes very important," she said.
