
Eagle-Vision-Based Object Detection Method for UAV Formation in Hazy Weather


School of Automation Science and Electrical Engineering, Beihang University, Beijing 100083, P. R. China

(Received 18 May 2020; revised 30 June 2020; accepted 25 July 2020)

Abstract: Inspired by the eagle's visual system, an eagle-vision-based object detection method for unmanned aerial vehicle (UAV) formation in hazy weather is proposed in this paper. To restore the hazy image, the values of atmospheric light and transmission are estimated on the basis of the signal processing mechanism of the ON and OFF channels in the eagle's retina. Local features of the dehazed image are calculated according to the color antagonism mechanism and contrast sensitivity function of the eagle's visual system. A center-surround operation is performed to simulate the response of the receptive field. The final saliency map is generated by the Random Forest algorithm. Experimental results verify that the proposed method is capable of detecting UAVs in hazy images and has superior performance over traditional methods.

Key words: object detection; eagle visual system; UAV formation; image dehazing

0 Introduction

Locating and collision avoidance are essential tasks in formation maintenance and reconstruction. Unmanned aerial vehicle (UAV) detection provides guarantees for solving these tasks, and thus enhances the anti-interference ability and robustness of a formation in challenging environments.

Object detection has been widely studied in recent years. Its implementations can be mainly divided into three classes: pixel-based analysis methods, feature-based methods[1] and deep learning methods[2]. Pixel-based methods such as the inter-frame difference algorithm and optical flow are efficient, but only capable of moving-target detection. Feature-based methods, which commonly include image matching, saliency detection and feature classifiers, are less robust in complex environments. With the development of computer vision technology, object detection methods based on deep learning have greatly improved the accuracy of detection tasks. However, deep learning methods have high computational complexity, which makes them difficult to implement on a UAV platform with limited memory.

Compared with the general object detection task, object detection for UAVs faces the following technical challenges: (1) The environment is often affected by adverse lighting or weather conditions such as haze and rain, and (2) the scale of the object changes drastically, so it is easy to miss small targets. It is imperative to ensure that detectors work reliably in the presence of such conditions. Although current object detection methods obtain remarkable results on benchmark datasets, their ability to cope with adverse conditions such as hazy weather is limited.

Biological vision-based approaches provide innovative ideas for solving computer vision tasks. It is widely known that eagles have superb vision. The high spatial acuity and contrast sensitivity of the eagle eye enable eagles to spot targets accurately during hunting. Currently, raptor vision mechanisms have been employed for contour detection[3], saliency detection[4], aerial refueling[5-6], autonomous landing[7], target detection[8], imaging guidance[9], etc.

Inspired by the eagle vision mechanism, an object detection method for UAV formation in hazy weather is proposed in this paper. The main contributions of this paper are: (1) An image dehazing method is proposed. Based on the structure and interactions of the eagle's retina, the ON and OFF channels are modeled to estimate the atmospheric light and transmission of the hazy image. (2) An object detection method is proposed based on the eagle's visual attention mechanism. Color antagonism and the contrast sensitivity function of the eagle eye are introduced for feature extraction. Compared with traditional methods, the proposed method restores the details and features of hazy images more naturally, and shows advantages in the accuracy and reliability of UAV object detection.

The rest of this paper is organized as follows. Section 1 presents an image dehazing method based on the ON and OFF channels. In Section 2, a visual attention model of the eagle is established for object detection. Comparative experiments are conducted and simulation results are shown in Section 3. Conclusions are given in Section 4.

1 Eagle-Vision-Based Dehazing

A number of traditional state-of-the-art dehazing algorithms[10-11] exhibit obvious halo effects and color shift in large sky regions, which makes them unsuitable for UAV image dehazing. To overcome these drawbacks, an eagle-vision-based dehazing method is proposed.

1.1 ON and OFF channels

In biological visual systems, the fovea is the most acute and crucial area of the retina. Different from mammals, which have a single fovea, the eagle has a deep fovea in the middle of the retina and a shallow fovea in the temporal retina, which is considered an important factor in its outstanding vision[12].

The eagle's retina is mainly composed of several types of nerve cells: photoreceptors, horizontal cells, bipolar cells, amacrine cells and ganglion cells. Cones and rods, the two types of photoreceptors in the eagle's retina, dominate bright and scotopic vision, respectively. The photoreceptors hyperpolarize in response to light and release the neurotransmitter glutamate onto bipolar cells. Bipolar cells are divided into two types according to their different responses to the neurotransmitter: ON bipolar cells, which depolarize, and OFF bipolar cells, which hyperpolarize. The photoreceptors make synapses with ON bipolar cells and OFF bipolar cells, and the two types of bipolar cells make synapses with ON and OFF ganglion cells, respectively. Together with their electrical synapses, these cells form a pathway in the retina known as the ON and OFF channels[13]. Cones and rods transmit electrical signals differently in the ON and OFF channels. Concretely, cones transmit signals to ON and OFF cone bipolar cells and subsequently to ganglion cells. Different from cones, rod bipolar cells do not transmit signals to ganglion cells directly, but make synapses with AII-type amacrine cells instead. The rod signals are then relayed to ON and OFF cone bipolar cells by the amacrine cells, which carry the signals to the ganglion cells. Moreover, horizontal cells also exert a certain modulating effect on the signals. The interaction between the various cells completes the transmission and perception of lightness and darkness in the retina, as shown in simplified form in Fig.1, where HC, BC, AC and GC are short for horizontal cell, bipolar cell, amacrine cell and ganglion cell, respectively, and "+" and "-" denote ON-type and OFF-type cells.

Fig.1 A simplified schematic of ON and OFF channels

In the eagle's visual system, the ON channels perceive lightness while the OFF channels perceive darkness. For a hazy image I^c(x, y), we normalize each color channel and compute the input signals transmitted by the photoreceptors, where I_on(x, y) and I_off(x, y) refer to the input signals of the ON and OFF channels, respectively, and R, G, B represent the red, green and blue channels of the image.

1.1.1 Horizontal cells

Laterally connected horizontal cells receive a wide range of output signals transmitted by the photoreceptors, and integrate and modulate the signals according to brightness. A triphasic modulation by horizontal cells is thought to enhance image contrast[14]. Thus, we use a modified sigmoid function s(p) to simulate the effect of this modulation. The outputs of the horizontal cells in the ON and OFF channels are defined accordingly,

where δ ∈ {on, off}, and α and β are constant parameters that control the shape and translation of s(p), respectively.
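The exact form of s(p) is not reproduced in the text; a minimal sketch, assuming a logistic shape in which α steepens the curve and β translates it (matching the roles stated above, with the values α = 7 and β = -0.5 used in the Section 3 experiments), could look like this:

```python
import numpy as np

def modulation(p, alpha=7.0, beta=-0.5):
    """Horizontal-cell modulation: a modified sigmoid s(p).

    The logistic form below is an assumption; the text only states that
    alpha controls the shape of s(p) and beta its translation.
    """
    return 1.0 / (1.0 + np.exp(-alpha * (p + beta)))
```

The same function serves both pathways, applied to the ON and OFF input signals respectively.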

1.1.2 Bipolar cells

Bipolar cells have receptive fields with a center-surround structure, which helps transmit high-accuracy visual information. A Gaussian function is used to simulate the response of the receptive field. The outputs of the ON and OFF bipolar cells I^δ_BC(x, y) are computed accordingly,

where σ_BC is the size of the receptive field, and G(x, y; σ) is a two-dimensional Gaussian function.
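As a sketch, the bipolar-cell stage can be approximated by a single Gaussian convolution of the horizontal-cell output (σ_BC = 7 is the value used in the experiments; combining the stages by plain convolution is an assumption here):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bipolar_response(hc_output, sigma_bc=7.0):
    """Approximate the bipolar-cell receptive field of one pathway
    (ON or OFF) by convolving the horizontal-cell output with a
    two-dimensional Gaussian G(x, y; sigma_BC)."""
    return gaussian_filter(hc_output, sigma=sigma_bc)
```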

1.1.3 Amacrine cells

In the rod pathway, rod bipolar cells do not connect directly with ganglion cells but via amacrine cells and cone bipolar cells. Amacrine cells connect with cone bipolar cells by gap junctions, making excitatory electrical synapses with ON bipolar cells and inhibitory synapses with OFF bipolar cells. Maximum and minimum filtering are used to simulate the excitation and inhibition of the AII amacrine cells in the ON and OFF channels,

where Ω(x, y) is a local patch centered at (x, y).
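The maximum and minimum filtering described above can be sketched with standard morphological filters; the size of the patch Ω(x, y) is an assumption:

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def amacrine_response(i_on_bc, i_off_bc, patch=15):
    """Excitation/inhibition of AII amacrine cells.

    The excitatory gap junction onto the ON pathway is modeled as a local
    maximum over the patch Omega(x, y); the inhibitory synapse onto the
    OFF pathway as a local minimum (the patch size is assumed).
    """
    i_on_ac = maximum_filter(i_on_bc, size=patch)
    i_off_ac = minimum_filter(i_off_bc, size=patch)
    return i_on_ac, i_off_ac
```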

1.1.4 Ganglion cells

Ganglion cells receive signals from bipolar cells originating from cones, as I^δ_BC(x, y), on the one hand, and from rods, as I^δ_AC(x, y), on the other. According to previous research, the concentric structure and spatial characteristics of the ganglion-cell receptive field can be well simulated by a difference of Gaussians (DoG) function[15].

Fig.2 Schematic diagram of receptive field simulated by DoG

where σ_cen and σ_sur denote the sizes of the center and surround structures of the receptive field, and A_cen and A_sur refer to the gain factors of the two structures. The receptive fields of ganglion cells simulated by the DoG function are shown in Fig.2. The ON-type and OFF-type receptive fields are displayed in Figs.2(a) and (b), respectively. The outputs of the ON and OFF channels are then calculated from these responses.
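A DoG receptive-field kernel built from the parameters named above might be sketched as follows; the discrete kernel size is an assumption, while σ_cen, σ_sur, A_cen and A_sur take the experimental values from Section 3:

```python
import numpy as np

def dog_kernel(size=9, sigma_cen=0.8, sigma_sur=0.7, a_cen=1.0, a_sur=1.0):
    """DoG receptive field: gain-weighted center Gaussian minus
    gain-weighted surround Gaussian (ON type; the OFF type is its negation)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)

    def gauss(sigma):
        k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
        return k / k.sum()  # normalize so each Gaussian sums to 1

    return a_cen * gauss(sigma_cen) - a_sur * gauss(sigma_sur)
```

With equal gains the kernel sums to zero, so it responds only to local contrast rather than to uniform regions.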

1.2 Image dehazing

The formation of hazy images can be explained by the atmospheric scattering model

I(x, y) = J(x, y) t(x, y) + A (1 - t(x, y))

where I(x, y) and J(x, y) denote the observed image and the original image, A is the global atmospheric light, and t(x, y) is the transmission of the reflected light. Considering that the dark channel prior is invalid in the sky region, the sky region is separated first in the proposed method, and the values of A and t are calculated based on the outputs of the ON and OFF channels.

Obviously, the sky region is characterized by high brightness, low saturation and low contrast. Corresponding to these characteristics, the conditions for segmentation are specified with thresholds l, s and c on brightness, saturation and contrast, respectively. A segmentation image S(x, y) is obtained according to these conditions. After the sky region is segmented, the atmospheric light A can easily be estimated from the ON channel. Concretely, we take the average of the largest 5% of the values of I^on_AC(x, y) belonging to the sky region as the estimate of A.
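These two steps, thresholding the sky region and averaging the brightest 5% of the ON-channel response inside it, can be sketched as below; the brightness, saturation and local contrast maps are assumed to be precomputed and normalized to [0, 1]:

```python
import numpy as np

def sky_mask(brightness, saturation, contrast, l=0.65, s=0.1, c=0.08):
    """Segment the sky: high brightness, low saturation, low contrast."""
    return ((brightness > l) & (saturation < s) & (contrast < c)).astype(np.uint8)

def estimate_atmospheric_light(i_on_ac, mask, top_frac=0.05):
    """Average the largest 5% of ON-channel values inside the sky region."""
    sky_vals = i_on_ac[mask > 0]
    k = max(1, int(round(top_frac * sky_vals.size)))
    return float(np.sort(sky_vals)[-k:].mean())
```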

According to the atmospheric scattering model and the dark channel prior, the transmission t can be estimated.

The second term of Eq.(15) is the dark channel of the normalized image. Considering the similarity between the OFF-channel and dark-channel images, we first use the output of the OFF channel to estimate t,

where ω keeps a small amount of haze to make the image look natural. To fix the halo effect, we define the transmission of the sky region as a constant value t_sky. The final transmission map is calculated as a weighted sum of t_sky and t_off(x, y). The weights are selected by the pixel values of S_g(x, y), which is S(x, y) after Gaussian filtering,

where σ_s is the size of the Gaussian filter. Finally, guided filtering is performed to refine the transmission map. The restored image is then calculated by inverting the atmospheric scattering model.
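The blending of t_sky with t_off and the final restoration can be sketched as follows. Using S_g directly as the blending weight is an assumption, as is the lower bound t0 on the transmission, a common safeguard against noise amplification:

```python
import numpy as np

def blend_transmission(t_off, sg, t_sky=0.65):
    """Weighted sum of the constant sky transmission and the OFF-channel
    estimate, weighted by the Gaussian-filtered sky mask Sg in [0, 1]."""
    return sg * t_sky + (1.0 - sg) * t_off

def restore(hazy, A, t, t0=0.1):
    """Invert the atmospheric scattering model: J = (I - A) / max(t, t0) + A."""
    t = np.maximum(t, t0)[..., None]  # broadcast over the color axis
    return np.clip((hazy - A) / t + A, 0.0, 1.0)
```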

2 Eagle-Vision-Based Object Detection

Visual attention describes the process by which the visual system gives priority to a specific area in an image and allocates more resources to it. The living and predation environments of eagles are complex and changeable. Mueller's research[16] stated that the choice of targets during predation is not random; eagles show more interest in special objects or regions than in the background. Dutta et al.[17] performed electrical stimulation experiments on barn owls, concluding that the barn owl's visual system has a pop-out mechanism, which helps them locate targets accurately. Moreover, related research shows that conspicuousness[18], oddity[19] and color differences[16] are typical factors that affect target selection in the eagle's predatory behavior, and these characteristics are all related to visual attention. The high density of cone cells in the eagle's foveas makes eagles more sensitive to color and contrast features than other species.

In this section, we propose an object detection algorithm based on the eagle's visual attention mechanism. The intensity, color antagonism and contrast sensitivity of the image are calculated, and a spatial Gaussian pyramid is generated to simulate the center-surround receptive-field mechanism. Random Forest is used to further improve the performance of the algorithm and generate a saliency map. The region with the highest saliency value in the map is regarded as the object. An illustration of the proposed method is shown in Fig.3.

Fig.3 Schematic diagram of the proposed object detection method

2.1 Feature extraction

The cones in the eagle's retina play an important role in color perception. Many studies have confirmed that the eagle has four types of single cone cells, which can sense four different frequency bands of light. In contrast to the three types in humans, the eagle may have acute four-color vision[20]. Color information is processed in an antagonistic manner in the eagle's visual system. The four types of photoreceptors generate red-green and blue-yellow antagonistic signals, and then transfer the signals along the tectofugal pathway. We calculate the primary features of the image based on the eagle's color antagonistic mechanism. Given an RGB input image, an intensity channel I is first computed.

Color antagonistic features are calculated using four colors: red R, green G, blue B and yellow Y. The yellow channel of the RGB image, Y = 0.5 × (R + G), is synthesized from the red and green channels. Two color antagonistic channels are created: RG = R - G for the red-green antagonistic channel and BY = B - Y for the blue-yellow antagonistic channel.
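The opponent channels defined above are simple per-pixel arithmetic; a sketch, assuming an RGB image normalized to [0, 1] and taking the intensity as the channel average (the intensity formula itself is an assumption, since the text does not reproduce it):

```python
import numpy as np

def antagonistic_channels(rgb):
    """Intensity and the RG / BY opponent channels of an RGB image."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    I = (R + G + B) / 3.0  # intensity (assumed average form)
    Y = 0.5 * (R + G)      # synthesized yellow channel
    RG = R - G             # red-green antagonism
    BY = B - Y             # blue-yellow antagonism
    return I, RG, BY
```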

Spatial contrast sensitivity describes the ability to distinguish adjacent areas with contrast differences. The maximum distinguishable frequency is defined as visual acuity, which can be obtained by anatomical or behavioral experiments. The former calculates visual acuity by measuring the density of photoreceptors. Related results indicate that the visual acuity of the wedge-tailed eagle is about 140 cpd (cycles per degree), while that of humans is about 33-73 cpd[21]. Behavioral experiments yield similar results and conclude that the eagle's visual acuity is much higher than that of other avians[22]. The relationship between contrast sensitivity and visual acuity can be described by the contrast sensitivity function (CSF). Compared with other species, the eagle's CSF has a narrower pass-band, showing a symmetrical inverted-U-shaped trend. Its distribution, concentrated on high spatial frequencies, reflects that eagles are sensitive to dim features. According to normalized sensitivity data for the eagle, the CSF can be well fitted by a double-exponential function[3],

where K_c, K_s, α_c and α_s are fixed parameters. We apply the CSF to extract texture and local contrast features of the image, and the contrast sensitivity channel is computed from g(x, y), the grayscale of the input image.
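The fitted CSF formula is not reproduced in the text; one plausible difference-of-exponentials reading of a "double-exponential" fit, using the parameter names above and the experimental values from Section 3, would be:

```python
import numpy as np

def csf(f, kc=0.2, ks=0.3, alpha_c=0.25, alpha_s=0.32):
    """Assumed double-exponential CSF:
    K_c * exp(-alpha_c * f) - K_s * exp(-alpha_s * f).

    Only the parameter names (K_c, K_s, alpha_c, alpha_s) come from the
    text; the functional form is a hypothesis for illustration.
    """
    return kc * np.exp(-alpha_c * f) - ks * np.exp(-alpha_s * f)
```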

2.2 Center-surround operation

The tectofugal pathway, composed of the retina, optic tectum, nucleus rotundus and ectostriatum, is the most important pathway in the eagle's visual system. Neurons in the tectofugal pathway transmit visual information progressively, extracting and integrating primary visual features such as color, edges and textures. The receptive field of optic nerve cells in the tectofugal pathway has a center-surround structure: neurons are activated by stimuli presented in the center region and inhibited by stimuli in the surround region. Inspired by Itti's visual attention model[23], a modified difference pyramid is used to simulate the operation of the center-surround structure. To avoid the loss of information caused by interpolation in Itti's model, we adopt a seven-layer pyramid. The center is the feature map at scale s ∈ {1, 2, 3}, and the corresponding surround is at scale s + γ, with γ = 4. A feature vector is calculated accordingly,

where F ∈ {I, RG, BY, C} represents the four types of features, and the operator Θ represents point-by-point subtraction after normalizing the sizes of the feature maps at different scales. All feature maps are uniformly normalized to the size of scale 4. The feature map at scale 4 is not differenced with the other layers but is taken as a feature directly. Thus, we obtain a 16-dimensional feature vector for a single RGB image.
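The across-scale differencing can be sketched as below. The pyramid is built by Gaussian smoothing and 2x down-sampling, and all maps are resized to the scale-4 resolution before the point-by-point subtraction; taking the absolute difference follows Itti's model and is an assumption here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def build_pyramid(feature, levels=7):
    """Seven-layer pyramid: scale 1 is the input, each level halves the size."""
    pyr = [feature]
    for _ in range(levels - 1):
        pyr.append(gaussian_filter(pyr[-1], 1.0)[::2, ::2])
    return pyr

def center_surround_maps(feature, gamma=4):
    """Center scales s in {1, 2, 3} vs. surround scales s + gamma,
    plus the scale-4 map taken directly: four maps per feature."""
    pyr = build_pyramid(feature)
    target = pyr[3].shape  # scale 4 is the common output resolution

    def resize(m):
        return zoom(m, (target[0] / m.shape[0], target[1] / m.shape[1]), order=1)

    maps = [np.abs(resize(pyr[s - 1]) - resize(pyr[s + gamma - 1]))
            for s in (1, 2, 3)]
    maps.append(pyr[3])
    return maps
```

Applied to the four feature channels {I, RG, BY, C}, this yields the 16-dimensional per-pixel feature vector.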

2.3 Random Forest algorithm

Random Forest[24] is a supervised ensemble learning algorithm widely used in classification and regression tasks. On the basis of bagging, random attribute selection is implemented in the training of the decision trees, which gives Random Forests good accuracy, insensitivity to feature outliers, and strong resistance to interference. To enhance the performance of object detection in specific scenes, Random Forest is used to promote the synthesis of saliency maps. In this paper, object detection is regarded as a binary classification task. The procedure of the proposed object detection algorithm is summarized as follows.

Step 1 Annotate the training samples: pixels in the target area are marked as 1, and the others are marked as 0.

Step 2 Extract the low-level features, including intensity, color antagonism and contrast sensitivity, and obtain a 16-dimensional feature vector for each image.

Step 3 Train the Random Forest classifier with the labeled images.

Step 4 Input the test images to the classifier, and generate a preliminary saliency map according to the confidence of the predictions.

Step 5 Binarize the saliency map. The threshold b for binarization is set in proportion to the maximum pixel value M(x, y) of the saliency map, with the proportionality coefficient σ fixed to 0.75 to separate the salient regions.
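Steps 3-5 can be sketched with scikit-learn. The feature matrix is assumed to hold the 16-dimensional vectors from Section 2.2, one row per pixel, and the number of trees is an assumption:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_detector(features, labels, n_trees=100):
    """features: (n_pixels, 16); labels: 1 inside targets, 0 elsewhere."""
    clf = RandomForestClassifier(n_estimators=n_trees, random_state=0)
    clf.fit(features, labels)
    return clf

def saliency_and_binarize(clf, features, shape, sigma=0.75):
    """Saliency = predicted target-class confidence; the threshold
    b = sigma * max(M) follows the statement that b is proportional
    to the map's maximum (the exact formula is assumed)."""
    sal = clf.predict_proba(features)[:, 1].reshape(shape)
    b = sigma * sal.max()
    return sal, (sal >= b).astype(np.uint8)
```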

Finally, the overall flow chart of the method is shown in Fig.4.

Fig.4 Overall flow chart of the method

3 Results and Analysis

The performance of the proposed dehazing and visual attention models is tested on two cases of image sequences captured in UAV formation scenarios. In the first experiment, we test the effect of the dehazing method on both cases and compare it with He's algorithm[10]. In the second experiment, we test the object detection algorithm and compare it with the HC[1], SR[25], SO[26] and GS[27] methods. The details of the two experiments are given as follows.

3.1 Image dehazing

The parameters used in this experiment are set as follows: α = 7, β = -0.5, σ_BC = 7, σ_cen = 0.8, σ_sur = 0.7, A_cen = A_sur = 1, l = 0.65, s = 0.1, c = 0.08, ω = 0.98, t_sky = 0.65, σ_s = 5.

Fig.5 shows the results of processing the two cases of experimental images with He's method and with ours. As shown in Fig.5, He's method enhances the image contrast remarkably; however, the color tone of the restored image shifts and loses the true color of the original appearance. The estimated transmission is lower than the actual value, which leads to a severe halo effect in the sky region. The halo effect makes UAV detection more challenging. By contrast, our result restores clear details of the scenarios with a natural color tone.

Fig.5 Results of image dehazing

Fig.6 Results of object detection

3.2 Object detection

The parameters used in this experiment are set as follows: K_c = 0.2, K_s = 0.3, α_c = 0.25, α_s = 0.32.

In this experiment, the Random Forest is trained with three images, and the other images in the same scenario are used for testing. Fig.6(a) is the dehazed image processed by the eagle-vision-based dehazing method, Figs.6(b)-(f) exhibit the saliency maps generated by the SR, HC, SO, GS and proposed methods, and Fig.6(g) is the binarized saliency map of Fig.6(f). The results show that all UAVs are highlighted and the background is suppressed by the proposed method. In comparison, the object detected by SR is incomplete. In the saliency maps of HC, SO and GS, the background of the test image is not well suppressed.

Fig.7 shows the testing results on a more challenging task. Fig.7(a) is the hazy image, Figs.7(b)-(e) present the saliency maps generated by SR, HC, SO and the proposed method, Fig.7(f) is the binarized saliency map of Fig.7(e), Fig.7(g) shows the detection result with bounding boxes, and Fig.7(h) gives enlarged views of Figs.7(a) and (g), showing the details of the target area. As shown in Figs.7(a) and (h), the UAV with a red drogue overlaps with a building in the background, which makes it hard to detect. Comparing the saliency maps, the pixels in the saliency map generated by the proposed method have higher values in the areas of both targets. However, in the saliency map generated by SR, the UAV with a red drogue is not determined to be a salient object, and pixels belonging to the background reach high values in the HC and SO results. Thus, our method is more accurate and reliable than the other three methods.

Fig.7 Detection results in challenging task

Fig.8 Detection results on hazy and dehazed images

Fig.8(a) shows the saliency map of the hazy image. The detection results of the proposed method on the hazy and dehazed images are presented in Figs.8(b) and 8(c). In the saliency map of the dehazed image (Fig.6(f)), the saliency value of the object area is significantly higher than that of the background area, and all the objects are detected accurately and marked by bounding boxes. Without dehazing, the features and details of the images are not recovered. The saliency of the objects is decreased, and a UAV fails to be detected in Case 2. Since the threshold for saliency-map binarization is proportional to the maximum pixel value of the map, and the difference in saliency between objects and background is smaller, object detection is more susceptible to background interference. Therefore, the precision is lower due to false positives in the background, and the dehazing method plays a definite role in detection under severe weather conditions.

Fig.9 shows the continuous detection error over the image sequence in Case 1. The resolution of the images in Case 1 is 640 pixel × 480 pixel. The UAVs in the 30 continuous images are labeled manually. The detection error is defined as the distance between the center coordinates of the bounding boxes in the labeled image and in the saliency map. The results verify that the proposed method has the smallest detection error among the five methods.

Fig.9 Detection error
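The error metric above reduces to the Euclidean distance between box centers; a sketch, assuming (x, y, w, h) bounding boxes:

```python
import math

def detection_error(box_gt, box_pred):
    """Distance between the centers of a labeled and a detected
    (x, y, w, h) bounding box, in pixels."""
    gx, gy = box_gt[0] + box_gt[2] / 2.0, box_gt[1] + box_gt[3] / 2.0
    px, py = box_pred[0] + box_pred[2] / 2.0, box_pred[1] + box_pred[3] / 2.0
    return math.hypot(gx - px, gy - py)
```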

4 Conclusions

In this paper, an eagle-vision-based object detection method for UAV formation in hazy weather is proposed. Inspired by the signal processing mechanism of the ON and OFF channels in the eagle's retina, the values of atmospheric light and transmission are estimated to restore the hazy image. An object detection method based on the eagle's visual attention mechanism is then presented. The performance of the proposed algorithm is tested and compared on two cases of images captured in UAV formation scenarios. Experimental results verify that the proposed method has superior performance over traditional methods. Moreover, the proposed method is robust and reliable in challenging environments, which could provide guarantees for UAV formation.
