
Enhancement Dataset for Low Altitude Unmanned Aerial Vehicle Detection


1. Zhejiang Jiande General Aviation Research Institute, Jiande 311612, P. R. China; 2. Department of General Aviation, Civil Aviation Management Institute of China, Beijing 100102, P. R. China; 3. School of Electronic and Information Engineering, Shenyang Aerospace University, Shenyang 110136, P. R. China; 4. College of Robotics, Beijing Union University, Beijing 100101, P. R. China; 5. School of Artificial Intelligence, Shenyang Aerospace University, Shenyang 110136, P. R. China

(Received 28 June 2021; revised 25 October 2021; accepted 21 November 2021)

Abstract: In recent years, the number of incidents involving unmanned aerial vehicles (UAVs) has increased conspicuously, creating an increasingly urgent demand for anti-UAV systems and, in particular, for high detection accuracy with respect to low-altitude UAVs. Detection methods based on deep learning have great potential for low-altitude UAV detection, but they need high-quality datasets to cope with the high false alarm rate (FAR) and high missing alarm rate (MAR) typical of this task, and a dedicated high-quality low-altitude UAV detection dataset is still lacking. The few known UAV detection datasets are either withheld from public release by their proposers or of poor quality. In this paper, a comprehensive enhanced dataset containing UAVs and jamming objects is proposed. A large number of high-definition UAV images are obtained through real-world shooting, web crawling, and data enhancement. Moreover, to cope with the challenge of detecting low-altitude UAVs against complex backgrounds and at long distance, as well as the confusion caused by jamming objects, noise with jamming characteristics is added to the dataset. Finally, the dataset is used to train, validate, and test four mainstream deep learning models. The results indicate that data enhancement, the addition of noise containing jamming objects, and the inclusion of UAV images with complex backgrounds and long distance significantly improve the accuracy of UAV detection. This work will strongly promote the development of anti-UAV systems and provides more convincing evaluation criteria for the optimization of UAV detection models.

Key words: unmanned aerial vehicle (UAV); UAV dataset; object detection; deep learning

0 Introduction

The wide popularization and use of low-altitude unmanned aerial vehicles (UAVs) facilitate people's lives, while bringing significant challenges to the safety of critical infrastructure such as airports, power stations, and water treatment plants. In urban environments, the need to detect UAVs is more pressing than ever before. In 2020 alone, 19 cases of unauthorized UAV flight ("black flight") were recorded in China. One recent high-profile UAV-related event is the notable Guangzhounan Railway Station drone incident. In complex low-altitude areas, specialized detection devices such as radar cannot detect UAVs effectively. To curb unauthorized UAV flight, it is necessary to develop anti-UAV systems that can detect UAVs accurately in low-altitude areas[1].

In recent years, many countermeasures against UAVs have been proposed. Early UAV detection methods mainly depend on radar. Pisa et al.[2] evaluated the radar cross sections of commercial UAVs for anti-UAV passive radar source selection, finding values between −18 dB and −1 dB in the 1–4 GHz frequency range. This preliminary analysis laid a solid foundation for selecting the carrier frequency of anti-UAV radar, for both conventional and passive radars. However, radar cannot detect UAVs accurately in complex urban environments, so UAV detection methods based on computer vision have been proposed to address the issue. Wang et al.[3] proposed a detection method for the visual surveillance of small flying UAVs, in which Gaussian mixture background modeling was used in a compressive sensing domain together with low-rank and sparse matrix decomposition of local images.

At present, how to improve the accuracy of UAV detection while reducing the false alarm rate (FAR) and missing alarm rate (MAR) is still an open problem. Benefiting from powerful deep learning models, the increasing availability of computing power, and large-scale labeled datasets, deep learning has achieved great success in the field of object detection[4-5], with well-known examples including YOLO[6-9] and EfficientDet[10]. Huge, high-quality datasets are the fundamental guarantee for deep learning to solve computer vision tasks. Datasets such as Microsoft COCO[11], the pattern analysis, statistical modelling and computational learning visual object classes (PASCAL VOC)[12], and ImageNet[13] enable high accuracy and low FAR and MAR for most object classes, and serve as valuable benchmarks for common object classification, localization, and detection. Unfortunately, a dedicated high-quality low-altitude UAV detection dataset is still lacking.

A dataset named ATAUAV was proposed in 2020 for UAV detection, in which 10 000 different UAV images were collected from videos and Google image search[14]. When using YOLOv3-Tiny[7] for UAV detection, the highest accuracy on the dataset is 82.7%. When the dataset is used for actual detection on the platform built by the authors' team, the detection speed is 29.6 frames/s. However, the dataset is used only for the development of the team's airborne anti-UAV system, and the authors stated categorically that they would not share it with others.

In the same year, the real-world object detection dataset for quadcopter unmanned aerial vehicle detection (RWQUAV) was proposed, which consists of 51 446 training images and 5 375 test images[15]. The highest UAV detection accuracy on this dataset is only 55%. UAV images at long distance and against complex backgrounds are very scarce in the dataset, resulting in high FAR and MAR. At the same time, too many of the UAV images are extracted from videos and differ only slightly from one another, which leads to a large number of images but poor actual performance. Moreover, obtaining UAV images solely by real-world shooting consumes substantial manpower and material resources, and updating the dataset later will be difficult. It is therefore necessary to explore simpler and more effective methods of obtaining UAV images, such as data enhancement.

Inspired by the literature mentioned above, the main contributions of this paper are as follows:

(1) A novel UAV detection dataset is presented and shared, addressing the lack of a public dataset in the field of UAV detection and of a high-quality dataset for the uniform evaluation of algorithmic innovations in UAV detection.

(2) Multiple data enhancement methods are used for the first time in a UAV dataset and are proven to improve dataset performance. This provides an effective way to obtain UAV images and dramatically reduces the difficulty and cost of UAV image acquisition.

(3) Images of the main jamming targets for UAV detection, such as birds and kites, are added to the dataset for the first time and are proven effective in reducing the FAR and MAR of UAV detection.

(4) Objects that interfere with UAV detection in specific scenes, including airliners, bats, balloons, and airships, are added to the dataset, further improving its practical application value so that models can obtain better performance in such scenes.

(5) To fully demonstrate the successful application of the proposed dataset to the challenging problem of drone detection, we use it to train and validate the YOLOv4-Tiny, CenterNet, YOLOv4, and EfficientDet models, and perform a direct side-by-side comparison of the detection results.

The theoretical contribution of this paper is as follows. Noise containing jamming objects is introduced into the generation of the UAV dataset, avoiding the over-fitting that occurs when models are trained on a single-object dataset.

The methodological contribution can be summarized as follows. Many data enhancement methods, such as geometric texture enhancement, optical enhancement, and image mixing enhancement, are introduced into the generation of the UAV dataset, further reducing the FAR and MAR of UAV detection.

1 Dataset Preparation

1.1 Real world images generation process

1.1.1 Real world images of UAVs

Real-world images of UAVs are obtained in two ways. The first is shooting videos of various types of UAVs against a variety of complex backgrounds and at long distance; the videos are exported as images at 1 frame/s with the Free Studio software. Videos covering a wide range of complex backgrounds and long distances are taken deliberately in order to extend the UAV image samples with these two features, because very few such images exist on the internet. A relatively long output interval is chosen because, if the interval were too small, the flight attitude of the UAVs and the backgrounds in consecutive frames would be too similar. An example is given in Fig.1.
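The sampling logic behind the 1 frame/s export can be sketched as follows. This is not the Free Studio tool used by the authors, only a minimal illustration of which frame indices survive when one frame is kept per interval; the function name is hypothetical.

```python
def sample_frame_indices(fps, n_frames, interval_s=1.0):
    """Return the indices of the frames to export when keeping one
    frame every `interval_s` seconds of video, mirroring the 1 frame/s
    export used for the real-world UAV videos."""
    step = max(1, round(fps * interval_s))
    return list(range(0, n_frames, step))
```

For a 3 s clip shot at 30 frames/s, only three frames are kept, which is what keeps consecutive exported images from showing nearly identical flight attitudes and backgrounds.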

The other way to obtain real-world images of UAVs is by web crawling from major mainstream picture websites. These images have richer backgrounds and include various types of UAVs. Some of them are high-definition close-up shots containing more information about the UAVs' appearance. An example is given in Fig.2.

Fig.1 An example of UAV images from real world shooting

Fig.2 An example of UAV images from web crawler

1.1.2 Real world images of birds and kites

Real-world images of birds and kites are collected by web crawler, and over 70% of them are long-distance shots. Two examples are given in Figs.3 and 4, respectively.

Fig.3 An example of bird images from web crawler

Fig.4 An example of kite images from web crawler

1.1.3 Real world images of secondary jamming objects

Secondary jamming objects include airliners, bats, balloons, and airships. Although they are not the main cause of the high MAR of UAV detection, they cause interference in specific detection scenarios. Adding their images further enhances the generalization ability of the models and increases the usefulness of the dataset.

Real-world images of secondary jamming objects are collected by web crawler. Four examples are given in Figs.5—8, respectively.

Fig.5 An example of bat images from web crawler

Fig.6 An example of airship images from web crawler

Fig.7 An example of airliner images from web crawler

Fig.8 An example of balloon images from web crawler

1.2 Data enhancement for UAV images

The data enhancement methods used in the dataset can be divided into three main categories: geometric texture enhancement, optical enhancement, and image mixing enhancement. Concretely, geometric texture enhancement includes flipping or rotation, Gaussian or salt-and-pepper noise, Gaussian blur, and affine transformation; optical enhancement includes changes in brightness, contrast, and sharpening; and image mixing enhancement includes MixUp, CutMix, Mosaic, and DCGANs. These methods are shown intuitively in Fig.9.

1.2.1 Geometric texture enhancement

Geometric texture enhancement operates in the geometric space of images. The main methods used in this paper are randomly flipping along the x- or y-axis and rotating clockwise by 90°, 180°, 270°, or 360°; adding Gaussian or salt-and-pepper noise with a mean value of 0 and a variance of 0.001 to 0.2; adding Gaussian blur; and applying affine transformation[16-18]. Examples of flipping and rotation, adding Gaussian or salt-and-pepper noise, adding Gaussian blur, and applying affine transformation are given in Figs.10—13, respectively.

Fig.9 Data enhancement methods used in datasets

Fig.10 An example of flipping or rotation from geometric texture enhancement

Fig.11 An example of adding Gaussian or Salt Pepper noise from geometric texture enhancement

Fig.12 An example of adding Gaussian blur from geometric texture enhancement

Fig.13 An example of affine transformation from geometric texture enhancement
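The flip/rotate and noise operations above can be sketched in NumPy. This is a minimal illustration, not the authors' implementation; the function names, the fixed seed, and the [0, 1] intensity scale for the Gaussian variance are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducible augmentation

def random_flip_rotate(img):
    """Randomly flip along the x- or y-axis, then rotate by a random
    multiple of 90 degrees."""
    if rng.random() < 0.5:
        img = img[::-1, :]   # flip along the x-axis (vertical flip)
    if rng.random() < 0.5:
        img = img[:, ::-1]   # flip along the y-axis (horizontal flip)
    return np.rot90(img, k=int(rng.integers(0, 4)))

def gaussian_noise(img, var=0.01):
    """Additive zero-mean Gaussian noise; `var` is the variance on a
    [0, 1] intensity scale (the paper uses 0.001 to 0.2)."""
    noisy = img.astype(np.float32) / 255.0 + rng.normal(0.0, np.sqrt(var), img.shape)
    return (np.clip(noisy, 0.0, 1.0) * 255).astype(np.uint8)

def salt_pepper_noise(img, amount=0.01):
    """Corrupt a fraction `amount` of pixels with salt (255) or pepper (0)."""
    out = img.copy()
    mask = rng.random(img.shape[:2]) < amount   # which pixels are corrupted
    salt = rng.random(img.shape[:2]) < 0.5      # salt vs pepper per pixel
    out[mask & salt] = 255
    out[mask & ~salt] = 0
    return out
```

Each operation preserves the label geometry trivially (flips and 90° rotations map bounding boxes to known positions), which is why these transforms are cheap to apply to an annotated detection dataset.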

1.2.2 Optical enhancement

Optical enhancement adjusts the visual (photometric) space of images. The main methods used in this paper are random changes in brightness, contrast, and sharpening[19-20]. Examples of changes in brightness, contrast, and sharpening are given in Figs.14—16, respectively.

Fig.14 An example of change in brightness from optical enhancement

Fig.15 An example of change in contrast from optical enhancement

Fig.16 An example of change in sharpening from optical enhancement
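The brightness, contrast, and sharpening adjustments are point and small-kernel operations. The sketch below shows one common formulation, assuming 8-bit images; it is an illustration under those assumptions, not the authors' exact pipeline.

```python
import numpy as np

def adjust_brightness_contrast(img, brightness=0.0, contrast=1.0):
    """Linear point operation out = contrast * img + brightness,
    clipped back to the valid 8-bit range."""
    out = img.astype(np.float32) * contrast + brightness
    return np.clip(out, 0, 255).astype(np.uint8)

def sharpen(img):
    """Apply a standard 3x3 sharpening kernel per channel
    (edge-replicated padding keeps the output the same size)."""
    kernel = np.array([[ 0, -1,  0],
                       [-1,  5, -1],
                       [ 0, -1,  0]], dtype=np.float32)
    h, w = img.shape[:2]
    pad = np.pad(img.astype(np.float32), ((1, 1), (1, 1), (0, 0)), mode="edge")
    out = np.zeros_like(img, dtype=np.float32)
    for dy in range(3):          # accumulate the 3x3 correlation
        for dx in range(3):
            out += kernel[dy, dx] * pad[dy:dy + h, dx:dx + w]
    return np.clip(out, 0, 255).astype(np.uint8)
```

Because the sharpening kernel sums to 1, flat regions are unchanged while edges are amplified, which is the property that makes blurred UAV silhouettes stand out after enhancement.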

1.2.3 Image mixing enhancement

Image mixing enhancement combines multiple image samples from a dataset to synthesize new samples. It has the following characteristics. First, it requires two or more image samples to participate in the augmentation process. Second, the semantic information of the newly generated samples depends on the semantics of the participating samples, and the enhanced samples often do not match human visual expectations. The main methods used in this paper are MixUp[21], CutMix[22], Mosaic[9], and DCGANs[23], examples of which are given in Figs.17—20, respectively.

Fig.17 An example of MixUp from image mixing enhancement

Fig.18 An example of CutMix from image mixing enhancement

Fig.19 An example of Mosaic from image mixing enhancement

Fig.20 An example of DCGANs from image mixing enhancement
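MixUp and CutMix, the two simplest of the mixing methods above, can be sketched as follows. This follows the published formulations in general terms; the fixed default seed and function names are assumptions of the sketch, and in real training the labels (and, for detection, the bounding boxes) are combined alongside the pixels.

```python
import numpy as np

def mixup(img_a, img_b, alpha=0.2, rng=None):
    """Blend two whole images with a Beta(alpha, alpha) coefficient;
    during training the labels are mixed with the same coefficient."""
    rng = np.random.default_rng(0) if rng is None else rng
    lam = float(rng.beta(alpha, alpha))
    mixed = lam * img_a.astype(np.float32) + (1.0 - lam) * img_b.astype(np.float32)
    return mixed.astype(np.uint8), lam

def cutmix(img_a, img_b, rng=None):
    """Paste a random rectangular region of img_b into img_a."""
    rng = np.random.default_rng(0) if rng is None else rng
    h, w = img_a.shape[:2]
    lam = rng.beta(1.0, 1.0)
    cut_h = int(h * np.sqrt(1.0 - lam))      # patch size shrinks as lam grows
    cut_w = int(w * np.sqrt(1.0 - lam))
    cy, cx = int(rng.integers(0, h)), int(rng.integers(0, w))
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    out = img_a.copy()
    out[y1:y2, x1:x2] = img_b[y1:y2, x1:x2]
    return out
```

Both operations produce images that "do not match human visual expectations" in exactly the sense described above: a MixUp output is a translucent superposition of two scenes, and a CutMix output has a hard rectangular seam.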

2 Dataset Establishment

The structure of the dataset is given in Fig.21.

Fig.21 Structure of dataset

2.1 Parameters of dataset

The image format of the dataset is JPG, and the image dimension is 628×628×3, i.e., three-channel RGB images. There are seven classes: 46 571 UAV images, 43 001 bird images, 44 340 kite images, 12 305 airship images, 11 321 airliner images, 10 587 bat images, and 10 363 balloon images, with a total of 200 654 annotations.

2.2 Label of dataset

The annotations are in XML format, and the specific labeling process is as follows. First, we manually labeled a small subset of each class using the LabelImg software, and then trained EfficientDet on it to obtain a reasonably accurate model. Next, we fed all the images into this model and output the predicted bounding boxes and class labels. Using the pre-trained model in this way significantly improved the efficiency and accuracy of labeling. Of course, not all images can be labeled accurately, so we manually inspected every automatically generated annotation and corrected the unreasonable labels, which did not take much time. Because MixUp, CutMix, and Mosaic enhancements are used, an image may contain objects of one or more classes, or multiple instances of the same class, so the XML file corresponding to an image may contain multiple tags; readers should note this when using the dataset.
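Downstream code must therefore read every object entry in each annotation file, not just the first. Assuming the XML follows the common Pascal-VOC layout (`<object>`/`<name>`/`<bndbox>`), which the paper does not state explicitly, a minimal parser using only the standard library looks like this:

```python
import xml.etree.ElementTree as ET

def parse_voc_xml(xml_text):
    """Read every <object> entry from a Pascal-VOC style annotation.
    Returns a list of (class_name, (xmin, ymin, xmax, ymax)) tuples,
    so multi-object files produced by MixUp / CutMix / Mosaic are
    handled correctly."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):          # one entry per annotated object
        name = obj.findtext("name")
        bb = obj.find("bndbox")
        box = tuple(int(bb.findtext(t)) for t in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((name, box))
    return boxes
```

Iterating over all `<object>` elements rather than calling `find("object")` once is exactly the difference between recovering all 200 654 annotations and silently dropping the extra boxes in mixed images.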


2.3 Partition of dataset

Following the holdout method of cross-validation, the dataset is statically divided at a fixed proportion, ensuring that the training, validation, and test sets are completely disjoint. This avoids the spuriously high accuracy that overlapping subsets would produce in actual testing, making the subsequent experimental results more scientific. Training set : validation set : test set = 80% : 10% : 10%. This split refers both to the division rules of major public datasets and to the scale of our dataset.
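The static holdout split above can be sketched with the standard library alone. A fixed seed makes the partition reproducible, and taking contiguous slices of a single shuffled list guarantees the three subsets are disjoint by construction; the function name and seed are assumptions of the sketch.

```python
import random

def holdout_split(items, ratios=(0.8, 0.1, 0.1), seed=0):
    """Statically partition `items` into disjoint train/val/test subsets
    at the fixed proportions used for this dataset."""
    assert abs(sum(ratios) - 1.0) < 1e-9
    items = list(items)
    random.Random(seed).shuffle(items)   # seeded shuffle -> reproducible split
    n = len(items)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])
```

Splitting once and saving the resulting file lists, rather than re-shuffling per run, is what keeps later experiments comparable against the same held-out test images.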

3 Dataset Verification

The data enhancement methods used in this paper have been proven by their authors to improve detection accuracy to a certain extent for most classes in public datasets such as Microsoft COCO and PASCAL VOC. However, whether they improve the accuracy of UAV detection still needs to be verified experimentally, especially after UAV images with complex backgrounds and long distance are deliberately added.

It is important to note that the datasets for UAV detection published by the other teams mentioned earlier are not publicly available and cannot be obtained. Therefore, their UAV detection accuracy is extracted from the corresponding papers. The average precision (AP50) of the ATAUAV and RWQUAV datasets for UAV detection is given in Table 1.

Table 1 AP50 of ATAUAV and RWQUAV datasets for UAV detection (unit: %)

3.1 Experimental conditions and parameter setup

The experiments are divided into Experiment 1, Experiment 2, and Experiment 3. The purpose of Experiment 1 is to obtain the accuracy of UAV detection purely under complex backgrounds and long distance, especially given a dataset with a huge number of such images, so as to provide a benchmark for future research. Previous literature does not emphasize expanding this kind of image; it only mentions that detection accuracy is reduced for complex backgrounds and long distance, without giving detailed data. The purpose of Experiment 2 is to verify the effectiveness of adding images of jamming objects such as birds and kites in improving accuracy and reducing the FAR and MAR. The purpose of Experiment 3 is to verify the effectiveness of the data enhancement methods applied to UAV images in improving accuracy and reducing the FAR and MAR. The training, validation, and testing of all experiments are completed on an NVIDIA RTX 2080Ti graphics card array. The dataset is divided by the static holdout method with training set : validation set : test set = 80% : 10% : 10%; the number of training epochs is set to 200, the learning rate to 0.0001, and the training batch size to 32. Four mainstream models are used in the experiments: YOLOv4-Tiny with the CSPDarknet53-Tiny backbone, CenterNet[24] with the Hourglass backbone, YOLOv4 with the CSPDarknet-53 backbone, and EfficientDet with the D7 backbone.

3.1.1 Experiment 1

The dataset used in Experiment 1 consists of 7 010 original UAV images without data enhancement. Moreover, these are purely UAV images at long distance and against complex backgrounds. The precision-recall (PR) curves of the four models are shown in Figs.22(a—d). The UAV detection accuracy, FAR, and MAR under long distance and complex backgrounds are shown in Table 2.

Table 2 AP50, FAR & MAR (remote), and FAR & MAR (complex backgrounds) of four models for UAV detection in Experiment 1 (unit: %)

Fig.22 PR curves obtained from Experiment 1

3.1.2 Experiment 2

The dataset used in Experiment 2 consists of the original UAV images together with the bird and kite images, without data enhancement. The PR curves of the four models are shown in Fig.23, and the UAV detection accuracy, FAR, and MAR under long distance and complex backgrounds are shown in Table 3.

Fig.23 PR curves obtained from Experiment 2

Table 3 AP50, FAR & MAR (remote), and FAR & MAR (complex backgrounds) of four models for UAV detection in Experiment 2 (unit: %)

3.1.3 Experiment 3

The dataset used in Experiment 3 consists of 46 571 UAV images (the original UAV images plus those generated by data enhancement), 43 001 bird images, and 44 340 kite images. The PR curves of the four models are shown in Fig.24. The UAV detection accuracy, FAR, and MAR under long distance and complex backgrounds are shown in Table 4.

Fig.24 PR curves obtained from Experiment 3

Table 4 AP50, FAR & MAR (remote), and FAR & MAR (complex backgrounds) of four models for UAV detection in Experiment 3 (unit: %)

3.2 Experimental results and analysis

The accuracy of the ATAUAV and RWQUAV datasets for UAV detection is given in Table 1; the data are extracted from the corresponding papers.

3.2.1 Experiment 1

As can be seen from Fig.22 and Table 2, the accuracy of UAV detection at long distance and against complex backgrounds is not very high; the highest AP50, about 71.32%, is achieved by EfficientDet. However, it is not particularly low compared with Refs.[14-15]. In fact, this AP50 is even higher than that reported in Ref.[15] under normal circumstances.

3.2.2 Experiment 2

As can be seen from Fig.23 and Table 3, the AP50 of the four models for UAVs is much higher than in Experiment 1. This is because a large number of bird and kite images, which are highly similar to UAVs at long distance and against complex backgrounds, are added, so the FAR and MAR of UAV detection in these conditions are significantly reduced. Although the dataset used here is not expanded by data enhancement, the four models achieve higher AP50 for UAV detection than on the ATAUAV and RWQUAV datasets, while significantly reducing the FAR and MAR at long distance and in complex backgrounds. Even YOLOv4-Tiny, the simplest of the four models, still reaches an AP50 of 86.61% for UAV detection.

3.2.3 Experiment 3

From Fig.24 and Table 4, it can be seen that the AP50 of the four models for UAV detection is further improved when the dataset with data enhancement is used. Compared with the results of Experiment 2, YOLOv4-Tiny, CenterNet, and YOLOv4 are improved by about 5%. This shows that, after data enhancement, the three models learn more detailed image features, which helps them better detect UAVs at long distance and in complex backgrounds. The improvement of the EfficientDet model, however, is marginal, about 0.16%, showing that the effect of data enhancement is not obvious for models with complex network structures such as EfficientDet. In other words, the experimental results indicate that, for the specific target of UAVs, data enhancement can indeed improve accuracy to a certain extent and reduce the FAR and MAR when the model has a simple network structure; when the network is sufficiently complex and large, however, the effect of data enhancement is not significant.

The successful application of data enhancement essentially increases the data by introducing prior knowledge, which reduces the structural risk of the model and improves its generalization ability. Specifically, the various data enhancement methods synthesize many images correlated with the original UAV images that cannot be obtained by real-world shooting.

For example, flipping and rotation help the model better detect UAVs flying in abnormal postures. Random Gaussian or salt-and-pepper noise mimics images of UAVs flying on rainy days. Gaussian blur improves the model's accuracy in detecting UAVs when lens blur occurs due to unavoidable factors. Affine transformation, with its random rotation and random stretching and compression of the image, can significantly change the appearance of UAV images and helps the model detect UAVs from local features when part of the UAV is not clear. The brightness and contrast enhancements are essentially based on gray-scale changes in UAV images, which help the model exclude the impact on accuracy of gray-scale changes caused by lighting and weather.

3.3 Visualization results

The EfficientDet model with the D7 backbone is applied to some randomly selected original and enhanced images covering some or all of the classes. The detection results are shown in Fig.25.

Fig.25 Final object detection results of using EfficientDet

4 Conclusions

A dataset dedicated to UAV detection is built to a high standard with high performance. By incorporating a large number of UAV images at long distance and against complex backgrounds, the high FAR and MAR of existing UAV datasets are avoided. The addition of noise with secondary jamming objects further improves the generalization ability of models, avoids the over-fitting caused by existing UAV datasets containing only single-target UAV images, and improves the accuracy of UAV detection. Moreover, an effective way to obtain UAV images is provided, dramatically reducing the difficulty and cost of UAV image acquisition. The effectiveness of the proposed dataset is verified through mainstream object detection models and a large number of comparative experiments. The field of UAV detection thus gains a truly usable, high-standard, high-performance dedicated dataset. With new data enhancement methods, we believe that our dataset can be further expanded and will achieve even better performance with deep learning models.
