
Brain-Inspired Artificial Intelligence: Advances and Applications

Aerospace China, 2021, Issue 1

JIA Tianyuan, FAN Chaoqiong, WANG Lina, WANG Liya*, WU Xia*

1 School of Artificial Intelligence, Beijing Normal University, Beijing 100875

2 Administration for Research and Development, Beijing Normal University, Beijing 100875

3 National Key Laboratory of Science and Technology on Aerospace Intelligence Control, Beijing 100854

4 Beijing Aerospace Automatic Control Institute, Beijing 100070

Abstract: Recent advances in Artificial Intelligence (AI) have indicated that inspiration from the brain can effectively improve the intelligence of AI computational models, even when the inspiration is only local and partial. Nevertheless, achieving and exceeding human-level intelligence still requires deeper investigation of, and inspiration from, the brain. The goal of brain-inspired intelligence is to achieve human-level intelligence by drawing on the brain's neural mechanisms and cognitive-behavioral mechanisms. To this end, this paper introduces the relationship between AI and neuroscience, the current status of brain-inspired intelligence, future work on intelligent control systems, and its profound influence on other fields.

Key words: artificial intelligence, brain-inspired intelligence, neuroscience, human brain

1 INTRODUCTION

For a long time, creating intelligent machines has been a great aspiration of human beings, and Artificial Intelligence (AI) is the most promising technology for achieving it. The basic definition and long-term goals of AI were put forward at its creation, namely "Artificial Intelligence to achieve a human intelligence level" and "simulating, extending and expanding human intelligence". After more than 60 years, significant achievements have been made in the AI area. Moreover, with the development of neural networks, AI has shown extraordinary ability in many cognitive tasks; for example, AlphaGo defeated human players at the strategic board game Go. Despite this impressive performance, open questions remain: How much energy do such systems consume? Will AI surpass human intelligence? In fact, there is a very obvious gap between intelligent machines and human beings in terms of cognitive function, and owing to their architecture and underlying principles, the generality of existing AI systems is very limited.

The traditional computing paradigm is based on the Von Neumann machine shown in Figure 1, a stored-program computer whose program cannot be updated in response to changes and requirements of the external environment. Specifically, data is transferred from the memory unit to the central processing unit, and the result is returned to the memory unit after computation; this leads to extremely frequent data transfer between memory and processor and therefore high energy consumption. Constrained by the Von Neumann architecture, current intelligent systems have obvious shortcomings compared with the human brain in perception, cognition, control, and other respects. In particular, the energy consumption of the human brain is very low, which is mainly related to time-dependent neuronal and synaptic functionality, as seen in Figure 2. This is because memory and computing are not separated in the human brain: the neural network comprises both memory and processing.

Figure 1 Von Neumann architecture

Figure 2 Schematic of the organizational principles of the brain [6]

From the perspective of principles, almost all AI systems need to build a model first and then transform it into a specific type of computation (such as search, automatic reasoning, or machine learning), while the human brain can deal with such problems directly without explicit modeling. Therefore, if future AI systems are to become more general and intelligent, one of the key problems to be solved is the ability to model automatically.

Considering the vast range of possible solutions and the enormous energy and throughput requirements, building general AI at a human level is a challenge. Hence, discovering the inner workings of the human brain will play a vital role.

2 THE RELATIONSHIP BETWEEN AI AND NEUROSCIENCE

After a long history of research, the mechanisms of information processing in the brain are still not fully understood, and an intelligent system fully consistent with the human brain therefore remains to be achieved. In promoting the performance of AI, we should take inspiration from the human brain and learn from its working mechanisms rather than merely imitate it. Moreover, with the development of neuroscience, it is now possible to record brain activity during various cognitive tasks and obtain data at different levels, such as brain regions, neural clusters, and neurons. The benefits of the coordinated development of AI and neuroscience are therefore two-fold: on the one hand, neuroscience research can inspire considerable new learning architectures and algorithms, which in turn can be used to further explore brain mechanisms and promote the development of neuroscience; on the other hand, the efficiency of existing AI techniques can be further validated by neuroscience.

2.1 Potential Inspiration from Neuroscience to AI

At the micro level, the types and numbers of biological neurons and synapses differ considerably across brain regions, and their structures and functions can be dynamically adjusted according to the complexity of different tasks. Existing experimental results show that excitatory neurons have a good classification effect when applied in feedforward neural networks. On the synaptic side, Spike-Timing Dependent Plasticity (STDP) is a timing-dependent learning rule for connection weights: the change in synaptic weight depends mainly on the times at which the presynaptic and postsynaptic neurons fire, and a mathematical mapping from the firing-time difference to the weight update describes the change in connection strength.

At the mesoscopic level, the effective fusion of specific brain connection patterns with random background network noise allows a biological neural network to maintain a specific function while retaining dynamic plasticity. Transistors are used to imitate the function of neurons and synapses, thereby mapping functions of the human brain onto hardware. Many questions about the connectivity between neurons and synapses remain open: Why do different connection modes come into being? What are the corresponding functional differences? What is their significance for realizing cognitive function?

At the macro level, cooperation between different brain regions enables highly intelligent cognitive functions, such as reinforcement learning, long-term and short-term memory, and the biological visual system, all of which achieve complex cognition through inter-regional cooperation. The connections between brain regions not only determine signal transmission but also reflect the mechanisms of information processing.
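The STDP rule described at the micro level above is often modeled as an exponential function of the pre/post firing-time difference. The following is a minimal illustrative sketch; the parameter values (learning rates `a_plus`, `a_minus` and time constant `tau`) are assumptions chosen for illustration, not values from this paper:

```python
import math

def stdp_weight_update(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Classic exponential STDP window (illustrative parameters).

    dt = t_post - t_pre in milliseconds. If the presynaptic spike
    precedes the postsynaptic spike (dt > 0), the synapse is
    potentiated; otherwise it is depressed. The magnitude of the
    change decays exponentially with |dt|.
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau)    # potentiation (LTP)
    elif dt < 0:
        return -a_minus * math.exp(dt / tau)   # depression (LTD)
    return 0.0
```

A presynaptic spike arriving shortly before a postsynaptic one strengthens the connection more than one arriving long before it, capturing the timing dependence described in the text.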

If AI is to achieve intelligence at a human level, it is essential to integrate functions and information processing mechanisms from the micro, mesoscopic, and macro levels, as the brain does. Only by integrating mechanisms across these scales can we substantially go beyond existing computational models and achieve real innovation.

2.2 Further Validation from Neuroscience to AI

At present, it is generally believed that the Artificial Neural Network (ANN) has poor interpretability: the neural network model is a black box lacking relevant theory to support it. Further developments in neuroscience are expected to help address the interpretability of neural networks. Early ANN research mainly borrowed the basic concepts of the neuron and the synaptic connection, while the working principles of specific neurons, the formation of synapses, and the network structure differ greatly from those of biological neural networks.

Prof. Wolfgang Maass categorizes neural networks into three generations according to the function of their neurons. The first generation, referred to as perceptrons, performs a thresholding operation and outputs 1 or 0. The second generation adds a continuous nonlinearity to the neuronal unit, with which it can produce a continuous range of output values. This nonlinear difference between the first and second generations has been essential for extending neural networks to more complex applications and deeper implementations. However, an ANN with more layers is still a rough simulation of the nervous system, and its flexibility is poor compared with that of the human brain. The third generation mainly uses "integrate-and-fire" spiking neurons (e.g., Spiking Neural Networks, SNNs), which exchange information through spikes, similar to information exchange in the human brain. The most important distinction between the second- and third-generation networks is the principle of information processing: the former uses real-valued computation (the amplitude of the signal), whereas SNNs use the timing of the signals (the spikes) to process information.
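The three generations can be contrasted with a toy sketch: a hard-threshold perceptron, a sigmoid unit, and a leaky integrate-and-fire neuron whose output is the *timing* of spikes rather than a real value. All parameter values here are illustrative assumptions:

```python
import math

def perceptron(x, w, theta=0.0):
    """First generation: hard threshold, output 0 or 1."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > theta else 0

def sigmoid_unit(x, w):
    """Second generation: continuous nonlinearity, real-valued output."""
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-s))

def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Third generation: leaky integrate-and-fire neuron.

    Information is carried by *when* spikes occur, not by a
    real-valued amplitude. Returns the list of spike times.
    """
    v, spikes = 0.0, []
    for t, i in enumerate(input_current):
        v = leak * v + i           # integrate input with leak
        if v >= threshold:         # fire and reset
            spikes.append(t)
            v = 0.0
    return spikes
```

Fed a constant current, the LIF neuron emits a regular spike train; the same information that a second-generation unit would encode as amplitude is here encoded in spike timing.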

3 RESEARCH STATUS ON BRAIN-INSPIRED INTELLIGENCE

At present, it is recognized that a better understanding of the biology of the brain could play a vital role in building intelligent machines. The development of AI and neuroscience shows that current advances in AI have been inspired by research on neural computation in humans and animals. In turn, neuroscience benefits from advances in information technology and AI, which will inspire the next generation of information technology. This section reviews the current research on brain-inspired models and information processing, mainly introducing research on cognitive functions and neural network models.

3.1 Research on Cognitive Function Based on Memory, Attention and Reasoning

It is believed that by applying memory, reasoning, and attention mechanisms, many core problems of AI can be effectively solved. Today, research concentrates on the following questions. (1) What is stored in the memory unit? (2) How is memory represented in the neural memory unit? (3) How can semantic activation be performed quickly when the memory store is large? (4) How can a hierarchical memory structure be constructed? (5) How can hierarchical information reasoning be conducted? (6) How can redundant information be forgotten or compressed? (7) How can the reasoning and understanding abilities of a system be evaluated? (8) How can inspiration be obtained from human or animal memory mechanisms? These cognitive mechanisms are instructive for AI, and some of them have yielded preliminary results in modeling cognitive functions. In the following, we present some examples of recent AI developments inspired and guided by neuroscience findings.

1) Episodic memory

A central principle of neuroscience is that intelligent behavior depends on multiple memory systems, including both reinforcement-based and instance-based mechanisms. The latter form of memory, also known as episodic memory, is usually associated with circuits in the medial temporal lobe, mainly the hippocampus.

Episodic memory has found a good application in the Deep Q-Network (DQN). One of the key components of DQN is "experience replay", whereby the network stores a subset of the training data in an instance-based way and then conducts "offline replay" to learn from past successes and failures. Experience replay is critical for maximizing data efficiency, avoids the instability of learning only from the continuous stream of current experience, and allows the network to learn a viable value function even in complex, highly structured sequential environments. The hippocampus encodes new information after one-shot learning, and this information gradually integrates into the cerebral cortex during sleep or rest. This consolidation is accompanied by replay in the hippocampus and neocortex, regarded as structured patterns of neural activity that accompany learning events, as seen in Figure 3. The replay buffer in DQN can therefore be regarded as a very primitive hippocampus, permitting complementary learning in a computer. Later work showed that when replay of high-reward events is prioritized, the benefits of experience replay in DQN increase, just as hippocampal replay seems to favor events associated with strong reinforcement. The prior experience stored in the memory buffer can not only be used to gradually adjust the parameters of the deep network toward the best strategy (as in DQN), but can also support rapid behavioral changes based on individual experiences. In fact, theoretical neuroscience has demonstrated the potential benefits of episodic control: in the hippocampus of the biological brain, rewarded action sequences can be internally reactivated from a fast, rewritable memory.

Figure 3 Schematic illustration of complementary learning systems [5]
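The "primitive hippocampus" role of the replay buffer described above can be sketched in a few lines. This is an illustrative simplification, not the DeepMind implementation; capacity and sampling strategy are assumptions:

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal experience-replay buffer in the spirit of DQN.

    Transitions are recorded as they occur and later replayed in
    random order, which breaks the temporal correlations that make
    learning from a continuous stream of experience unstable.
    """

    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)  # oldest memories evicted first

    def store(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform sampling; prioritized variants weight high-reward
        # transitions more heavily, echoing biased hippocampal replay.
        return random.sample(list(self.buffer), batch_size)

    def __len__(self):
        return len(self.buffer)
```

During training, the agent would call `store` after every environment step and periodically train on a `sample` drawn "offline" from the buffer.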

At present, brain-inspired algorithms based on episodic memory significantly outperform traditional algorithms on many tasks. Lin et al. presented an episodic-memory-based reinforcement learning algorithm called Episodic Memory Deep Q-Networks (EMDQN), which leverages episodic memory to supervise an agent during training. Savinov et al. proposed a new curiosity method that uses episodic memory and allows the agent to create rewards for itself.

2) Attention

The brain does not learn by global optimization over a unified, undifferentiated neural network. On the contrary, the biological brain is modular, with distinct but interacting subsystems supporting key functions such as memory, language, and cognitive control. Traditional Convolutional Neural Network (CNN) models work directly on the whole image or video frame and give equal priority to all pixels. However, the primate visual system does not process all inputs in parallel: visual attention is strategically shifted between spatial locations and objects, concentrating processing resources and representational coordinates on a series of regions. In AI architectures with this mechanism, the system takes a glimpse of the input at each step, updates its internal state representation, and then selects the next sampling location. One such network is able to use this selective attention mechanism to ignore irrelevant objects and perform well on difficult object classification tasks with noise.

In addition, the attention mechanism allows the computational cost (such as the number of network parameters) to scale favorably with the size of the input image. Google put forward the concept of the self-attention mechanism, which can significantly improve natural language processing tasks, and network architectures built around attention were subsequently proposed.
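The self-attention mechanism mentioned above can be sketched as scaled dot-product attention over a short token sequence. This is a minimal illustrative version (a single head, no learned projection matrices), assuming the queries, keys, and values are given as lists of vectors:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(q, k, v):
    """Scaled dot-product attention: each query attends to all keys.

    q, k, v: lists of d-dimensional vectors, one per token.
    Returns one output vector per query, a weighted mix of the values.
    """
    d = len(q[0])
    out = []
    for qi in q:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(a * b for a, b in zip(qi, kj)) / math.sqrt(d)
                  for kj in k]
        weights = softmax(scores)  # how much each position is attended to
        out.append([sum(w * vj[t] for w, vj in zip(weights, v))
                    for t in range(len(v[0]))])
    return out
```

With query, key, and value all set to the same token embeddings, each token's output is a convex combination of the sequence, weighted toward the tokens most similar to it.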

3) Continual learning

Agents are expected to learn and remember many different tasks over multiple timescales. Therefore, biological and artificial agents must have the ability of continual learning; that is, they should not forget how to perform previous tasks while mastering new ones. However, neural networks lack this capability and suffer from catastrophic forgetting, which remains a significant challenge for the development of AI. There is emerging evidence that specific mechanisms can protect knowledge about previous tasks from interference during new learning. After two days of learning, some new synapses grow, as shown in Figure 4. These mechanisms include reducing or increasing the plasticity of certain synapses, and such changes are related to the retention of knowledge over a period of months; if they are "erased", forgetting occurs. Theoretical models suggest that memory can be protected by synaptic transitions between different levels of plasticity. Based on these findings, scientists proposed Elastic Weight Consolidation (EWC), which identifies the subset of network weights important to previous tasks and anchors those parameters by slowing their updates, so as to realize continual learning. In this way, the network can learn multiple tasks without increasing its capacity, and weights are effectively shared among tasks with related structure.

Figure 4 Illustration of parallel neurobiological models of synaptic consolidation and the EWC algorithm [5]
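The EWC idea of anchoring weights in proportion to their importance to earlier tasks is commonly expressed as a quadratic penalty added to the new task's loss. The sketch below assumes flattened parameter lists; `fisher` stands for an (assumed precomputed) per-parameter Fisher information estimate, and `lam` is an illustrative regularization strength:

```python
def ewc_loss(task_loss, params, old_params, fisher, lam=1000.0):
    """EWC-regularized loss (sketch).

    Parameters important to a previous task (large Fisher value) are
    anchored near their old values; unimportant ones stay free to
    move and learn the new task.
    """
    penalty = sum(f * (p - p_old) ** 2
                  for f, p, p_old in zip(fisher, params, old_params))
    return task_loss + (lam / 2.0) * penalty
```

Moving a high-importance weight away from its consolidated value raises the loss sharply, while a low-importance weight can be repurposed at almost no cost, mirroring the selective synaptic stabilization described above.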

Inspired by the mechanism of long-term memory formation in humans, Han et al. introduced Episodic Memory Activation and Reconsolidation (EMAR) for continual relation learning. Zeng et al. proposed an Orthogonal Weights Modification algorithm with an added context-dependent processing module to overcome catastrophic forgetting in neural networks, enabling the model to learn continually.

3.2 Research on Neural Network Models

Although the traditional ANN imitates the nervous system at the micro scale, in its neurons, synaptic connections, and structure, its imitation of the information processing mechanisms revealed by neuroscience remains superficial. The Deep Neural Network (DNN) model is inspired by the hierarchical information processing mechanism of the human brain; it has made major breakthroughs in computing and intelligent applications and achieved great success in the fields of AI and pattern recognition.

Inspired by the biological visual system, the CNN applies the local connectivity between biological neurons (the local receptive field) and the hierarchical structure of information processing. The receptive field of a CNN grows from the lowest layer to the highest. Specifically, low-level areas V1 and V2 mainly extract edge and shape features respectively; high-level features are combinations of low-level features and become more abstract from low levels to high levels. Deep learning methods based on the CNN have made breakthroughs in many vision and speech tasks. Their end-to-end modeling and learning ability subverts the traditional separation of feature extraction and classifier learning; that is, they break the boundary between feature and classifier and learn both in an integrated way. Reinforcement Learning (RL) is one of the most important ways humans learn through interaction. By integrating the perceptual ability of the DNN with the decision-making ability of RL, the DeepMind team built the Deep Reinforcement Learning (DRL) model. The model is designed according to the process by which human beings automatically choose the best strategy and take the best action by interacting with the environment, an AI method closer to the way humans think, and it shows that cooperation between different functional networks can raise the level of an intelligent system. For example, AlphaGo, developed based on DRL, defeated human players at the strategic board game Go.
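The layer-by-layer growth of the receptive field noted above can be computed for a simple stack of convolutions. This sketch assumes each layer is described only by its kernel size and stride (ignoring dilation and padding effects):

```python
def receptive_field(layers):
    """Receptive field of a stack of conv layers.

    layers: list of (kernel_size, stride) tuples, first layer first.
    Each layer widens the field by (kernel - 1) input positions per
    unit, scaled by the cumulative stride ("jump") of earlier layers,
    so the field grows from layer to layer as in the visual cortex.
    """
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf
```

For example, three stacked 3x3 stride-1 convolutions see a 7x7 input patch, while adding strides makes the field grow much faster, which is why deep layers can represent whole objects rather than edges.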

The traditional DNN is not good at dealing with temporal problems, yet the data and problems in many scenarios are highly temporal. The Recurrent Neural Network (RNN) is designed for temporal signals, especially the RNN based on Long Short-Term Memory (LSTM); unfortunately, it also needs a large number of training samples to ensure generalization performance. The Hierarchical Temporal Memory (HTM) model proposed by Hawkins studies the brain's information processing mechanism more deeply, characterizing the six-layer structure of the cerebral cortex, the mechanism of information transmission between neurons at different levels, and the information processing principle of the cortical column. The HTM model makes it possible to process time-series information and is hence widely used in object recognition and tracking, detection of abnormal human behavior, and so on.
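A single LSTM step, whose gating is what lets the RNN retain information over long time spans, can be sketched for the scalar case. The weight layout `W` (mapping each gate name to input weight, recurrent weight, and bias) is an illustrative assumption:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One step of a scalar LSTM cell (sketch).

    W maps gate name ('f', 'i', 'o', 'g') to (w_x, w_h, b).
    The gates decide what to forget, what to write, and what to
    expose -- the "long short-term memory" mechanism.
    """
    f = sigmoid(W['f'][0] * x + W['f'][1] * h_prev + W['f'][2])   # forget gate
    i = sigmoid(W['i'][0] * x + W['i'][1] * h_prev + W['i'][2])   # input gate
    o = sigmoid(W['o'][0] * x + W['o'][1] * h_prev + W['o'][2])   # output gate
    g = math.tanh(W['g'][0] * x + W['g'][1] * h_prev + W['g'][2]) # candidate
    c = f * c_prev + i * g         # cell state carries long-range memory
    h = o * math.tanh(c)           # hidden state is the exposed output
    return h, c
```

The additive cell-state update `c = f * c_prev + i * g` is the key design choice: it lets gradients flow across many timesteps without the vanishing that plagues a plain RNN.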

In summary, building better brain-inspired neural network computation models through deep interaction with neuroscience is one of the hot topics of brain-inspired intelligence.

4 FUTURE WORK OF BRAIN-INSPIRED INTELLIGENCE IN AN INTELLIGENT CONTROL SYSTEM

Brain-inspired intelligence is an important way to realize general AI, and its applications would be more extensive than those of traditional AI, which prompts us to rethink the significance of studying brain-inspired intelligence. In some specific areas, such as face recognition and handwritten character recognition, a trained DNN can even surpass human recognition accuracy. However, it requires tremendous manual design, and its recognition capability is limited to specific problems and environments. The purpose of studying brain-inspired intelligence is to overcome these limitations, so that with a small number of samples and little human intervention, machines can possess multitask coordination abilities at a human level.

Currently, almost all automatic control strategies based on pre-programming fail to meet the requirements of future advanced multi-functional aircraft performing multiple tasks in a complex combat environment. The main goal of a future intelligent control system is to improve the capability of autonomous flight control. For specific applications in the aerospace field, enabled by brain-inspired intelligence, intelligent control systems and unmanned aerial systems should possess capabilities of self-adaptation, self-healing, and self-learning, intelligently and cooperatively at a human level. Such systems could then execute environment perception, trajectory planning, multi-task scheduling, and autonomous flight control in a coordinated manner with limited or even no human intervention, meeting the requirements of future advanced multi-functional intelligent and unmanned systems for multiple tasks in a complex combat environment. Future applications of brain-inspired intelligence would be information processing tasks at which humans currently outperform computers, such as multimodal perceptual information processing (vision, hearing, touch, etc.), language understanding, knowledge reasoning, humanoid robotics, and human-computer cooperation. Brain-inspired intelligence can also be used for:

1) Aircraft environmental perception, interaction, decision-making, and autonomous control;

2) Human-computer interaction based assessment of psychology, health, and risk;

3) Intelligent systems for selecting, appointing, and assessing personnel;

4) Big-data-based public services and national security.

5 CONCLUSION

In this paper, we first introduced the gap between AI and the brain in terms of architecture and principles, then analyzed the dual benefits of the collaborative development of AI and neuroscience. The current research on brain-inspired models and future work on intelligent control systems were also comprehensively described. It is believed that through interdisciplinary analysis and design spanning AI and neuroscience, we can explore brand-new and more powerful intelligence schemes for general AI.

Acknowledgement

This work is supported by the General Program of the National Natural Science Foundation of China (Grant No. 61876021) and the General Program of the Beijing Natural Science Foundation (Grant No. 4212037).
