
Spatial-Resolution Independent Object Detection Framework for Aerial Imagery

Computers, Materials & Continua, 2021, Issue 8

Sidharth Samanta, Mrutyunjaya Panda, Somula Ramasubbareddy, S. Sankar and Daniel Burgos

1Department of CSA, Utkal University, Bhubaneswar, 751004, India

2Department of Information Technology, VNRVJIET, Hyderabad, 500090, India

3Department of CSE, Sona College of Technology, Salem, 636005, India

4Research Institute for Innovation & Technology in Education (UNIR iTED), Universidad Internacional de La Rioja (UNIR), Logroño, 26006, Spain

Abstract: Earth surveillance through aerial images allows more accurate identification and characterization of objects present on the surface from space and airborne platforms. The progression of deep learning and computer vision methods and the availability of heterogeneous multispectral remote sensing data make the field more fertile for research. With the evolution of optical sensors, aerial images are becoming more precise and larger, which leads to a new kind of problem for object detection algorithms. This paper proposes the "Sliding Region-based Convolutional Neural Network (SRCNN)," an extension of the Faster Region-based Convolutional Neural Network (RCNN) object detection framework that makes it independent of the image's spatial resolution and size. The proposed model uses a sliding-box strategy to segment the image during detection. The proposed framework outperforms the state-of-the-art Faster RCNN model when processing images with significantly different spatial resolution values. The SRCNN is also capable of detecting objects in images of any size.

Keywords: Computer vision; deep learning; multispectral images; remote sensing; object detection; convolutional neural network; Faster RCNN; sliding-box strategy

1 Introduction

Surveillance of a large geographical area through aerial imagery is undoubtedly faster than conventional methods that use a horizontal perspective. Although there are some cases where aerial imagery cannot be used for surveillance, such as person or facial detection and pedestrian or vehicle license plate detection, it can be used to detect the number and types of vehicles in a city or any geographical area. Performing this task from a horizontal perspective is very expensive in terms of planning, procurement and execution, whereas it is computationally quite simple to analyse from an aerial perspective. The field of computer vision has resolved numerous surveillance problems, irrespective of their type and complexity. Surveying the Earth from an aerial view by using deep learning has not only reduced time and cost but has also become more accurate and robust with the availability of training data and computation power. There are many application areas, such as the study of vegetation distribution in an area and of changes in the shape and size of agricultural land, towns or slums, where machines outperform humans in both time and efficiency.

1.1 Object Detection

Object detection is a computer vision technique widely used in surveillance. It is generally used to determine the number, type and position of particular objects in an image. There are many state-of-the-art object detection frameworks, such as the Region-based Fully Convolutional Network (RFCN) [1], Single-Shot Detector (SSD) [2], You Only Look Once (YOLO) [3] and RCNN [4] and its multiple variants, such as Mask RCNN [5], Fast RCNN [6], Faster RCNN [7], YOLO version 2 [8] and YOLO version 3 [9]. Each of these frameworks uses different methods and principles to detect objects, but all are based on deep neural networks. This study uses Faster RCNN rather than SSD or YOLO because of its accuracy [10], although it is slower and more resource-intensive. When detecting objects in an extremely large aerial image, time and computational resources can be traded for accuracy.

A change in the size of the objects in the image makes the detection process more complex for the algorithm. When a trained model processes an input image with a higher or lower spatial resolution than the training dataset, the Region Proposal Network (RPN) of Faster RCNN fails to provide a Region of Interest (RoI), because the RPN uses anchor boxes of the same sizes as those learned during training. For example, an object detection model trained on a dataset with a spatial resolution of 7.5 cm cannot perform well on an image with a spatial resolution of 30 cm. The same applies to image size: a model trained on images with dimensions of 250 px × 250 px cannot perform accordingly on larger images of 1000 px × 1000 px or smaller images of 100 px × 100 px. A hedged numerical illustration of this proportionality is sketched below.
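Since spatial resolution is the ground size of one pixel, the apparent pixel extent of an object is simply its physical size divided by the resolution. A minimal sketch (the 4.5 m vehicle length is an illustrative value, not taken from the paper):

```python
def pixel_extent(object_size_cm: float, resolution_cm_per_px: float) -> float:
    """Apparent length of an object in pixels at a given spatial resolution."""
    return object_size_cm / resolution_cm_per_px

# A ~4.5 m vehicle appears 60 px long at 7.5 cm/px but only 15 px at 30 cm/px,
# so anchor boxes tuned to one scale will not match the other.
print(pixel_extent(450, 7.5))   # 60.0
print(pixel_extent(450, 30.0))  # 15.0
```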

1.2 Problem Statement

Innovations in optical sensors, storage devices and sensor carriers like satellites, airplanes and drones have revolutionized the remote sensing and Geographic Information System (GIS) industries. These sensors produce a huge number of multispectral images with different characteristics, such as spatial resolution. The spatial resolution of an aerial image can be defined as the actual size of an individual pixel on the surface, as demonstrated in Fig. 1. Images with lower spatial resolution values appear clearer and larger than those with relatively higher spatial resolution values.

An object detection model trained on an aerial image dataset will perform accordingly on test images with the same spatial resolution, but its accuracy drops drastically when it is tested on images with a different spatial resolution. Almost all existing state-of-the-art frameworks fail to detect objects in this scenario. Though image cropping can be used where the spatial resolution of the training image is less than that of the testing image, the reverse case (i.e., the spatial resolution of the training image is higher than that of the testing image) cannot be handled with this technique.

Figure 1: 250 × 250 px of four different images with different spatial resolution values

1.3 Research Contributions

This paper proposes an extension to the state-of-the-art Faster RCNN. It is based on the sliding-window strategy and uses a mathematically derived optimal window size for precise detection. The primary use cases of the proposed model are as follows:

(i) Detecting objects in images of any spatial resolution value and size, such as vehicles in a city [11] and tree crowns in a forest [12].

(ii) Object detection in images captured from drones [13] or aircraft [14], where the elevation is not fixed; for a fixed sensor, the spatial resolution value is directly proportional to the elevation.

(iii) Detection of small and very small objects, such as headcounts at protests or social gatherings [15,16].

(iv) Microscopic object detection, such as cells [17], molecules [18], pathogens [19], red blood cells [20] and blob objects [21].

The rest of the paper is organized as follows. Section 2 provides an overview of critical works on object detection in remote sensing and aerial imagery and on methods for dealing with size and resolution. Sections 3 and 4 present the proposed model and its results, respectively. Finally, Section 5 contains the conclusion.

2 Related Works

A considerable body of literature has reviewed the application of deep-learning-based computer vision techniques to aerial imagery. The authors of [22] surveyed about 270 publications related to object detection, covering detection by (i) template matching, (ii) knowledge matching, (iii) image analysis and (iv) machine learning. They also raised a concern about the availability of labelled data for supervised learning. Han et al. [23] proposed a framework in which a weakly labelled dataset can be used to extract high-level features. The problem of object orientation in remote sensing imagery is addressed in [24-26].

Diao et al. [27] proposed a deep belief network, whereas [28] used a convolutional neural network for object detection. In [29], a basic RCNN model is used, and in [30] a single-stage densely connected feature pyramid network is used for object detection specifically in very-high-resolution remote sensing imagery. The studies in [31,32] used the SSP and the state-of-the-art YOLO 9000, respectively. Huang et al. [33] used a densely connected YOLO based on the SSP. The proposed model aims to provide a framework that can process any aerial image with any spatial resolution value. Although very few studies have addressed this problem, the semantics of [34-36] and the method used in [37] are close to the working principle of the proposed model.

3 Proposed Method

This study proposes an extension based on the sliding-window strategy; it is therefore called the Sliding Region-based Convolutional Neural Network. In the proposed model, the slider box shown in Fig. 2i(a) traverses the entire input image, just like a convolution operation with a determined stride value; the stride value is derived from the spatial resolution of the input image. At each box position, the model performs the object detection process of the stock Faster RCNN on the fragment of the image that falls under the footprint of the slider box, as demonstrated in Fig. 2i(b). Fig. 2 shows the architecture of the proposed SRCNN, which is divided into three phases.

• Phase 1: Image Analysis

• Phase 2: Image Pre-Processing

• Phase 3: Object Detection

Figure 2: Architecture of the proposed sliding RCNN

3.1 Phase 1: Image Analysis

Phase 1 of the proposed model includes data acquisition, data analysis and a box dimension proposal. This phase plays a vital role in normalizing the spatial resolution factor. As illustrated in Fig. 1 in Section 1.2, the apparent object size changes with the spatial resolution value. The image therefore has to be scaled in such a way that the objects in the training and testing images appear similar in size. In Fig. 3, the visual object size is very similar in (a) and (b), as the image in (b) is down-scaled almost three times. For the proposed model, the original dimension of the scaled image can be the size of the slider box. For an input image with dimensions a × b, the box length m can be derived from the average side length s of the training images and the spatial resolutions of the training image, r, and the testing image, R, as follows:

m = s × (r / R)    (1)

Thus, the slider box width is the product of the training image width and the ratio between the spatial resolution of the training image and that of the input image. This value is also helpful when cropping a large image for piecewise processing.

Figure 3: Scaling of an image. (a) Spatial resolution: 25 cm/px, image size: 250 × 250 px; (b) spatial resolution: 7.5 cm/px, image size: 848 × 848 px
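A minimal sketch of the Eq. (1) box-size proposal, using the Fig. 3 values as a sanity check (function and variable names are illustrative):

```python
def slider_box_length(train_side_px: int, train_res_cm: float, test_res_cm: float) -> int:
    """Eq. (1): scale the training image side by the ratio of spatial resolutions."""
    return round(train_side_px * train_res_cm / test_res_cm)

# Fig. 3 check: a 250 px side at 25 cm/px corresponds to ~833 px at 7.5 cm/px,
# close to the 848 px image shown in Fig. 3(b).
print(slider_box_length(250, 25.0, 7.5))  # 833
```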

3.2 Phase 2: Image Pre-Processing

Phase 2 of the proposed model is image pre-processing, which includes image size analysis and padding. The size of the slider box, evaluated in Section 3.1, depends on the spatial resolutions of the training and testing images and on the dimensions of the training image. But the slider box has to traverse every pixel of the testing image, so it must be compatible with the image size. In Fig. 4ii, the original image is too short to accommodate the last set of slider boxes. The image area covered by these last boxes would be exempted from the object detection process, which cannot be ignored. This problem can be solved by either image resizing or image padding, in such a way that the end of the last slider box converges with the end of the image, as demonstrated in Figs. 4i and 4iii.

Figure 4: Overlapping problem and difference between resizing and padding

It is observed in Fig. 4 that the object size in the padded image is the same as in the original image, but the object size in the resized image is bigger than in the original, similarly to Fig. 1. This means that resizing the image results in a significant change in spatial resolution. Thus, the proposed model uses padding rather than resizing. The given image is padded with 0s in such a way that the sliding boxes can cover the entire image area. To determine the padding amount, two cases have to be considered for a slider box of length m that takes p steps to cover an image of length n with an overlap fraction O. The best and worst cases are demonstrated in Fig. 5.

Figure 5: Box sliding demonstration

a) Best Case:

The last box converges perfectly with the image, as shown in Fig. 5 (case 1). The size of the image is then:

n = m[p − (p − 1)O]    (2)

b) Worst Case:

The last box does not converge with the image, as shown in Fig. 5 (case 2). The box will take p′ steps to cover the image:

p′ = ⌈(n − mO) / (m(1 − O))⌉    (3)

With p′ instances, an image of length n′ = m[p′ − (p′ − 1)O] is needed to converge perfectly, as in the best case, so the required padding amount is n′ − n (Eq. (6)). The same formula applies to vertical sliding as well.
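A minimal sketch of the Phase 2 padding computation under the above definitions (O is the overlap fraction; all names are illustrative):

```python
import math

def steps_to_cover(n: int, m: int, overlap: float) -> int:
    """Eq. (3): number of slider-box positions p' needed to span a length-n axis."""
    stride = m * (1.0 - overlap)
    return math.ceil((n - m * overlap) / stride)

def padding_needed(n: int, m: int, overlap: float) -> int:
    """Eq. (6): zeros to append along one axis so the last box meets the image edge."""
    p = steps_to_cover(n, m, overlap)
    n_prime = m * (p - (p - 1) * overlap)  # best-case length n' for p positions
    return math.ceil(n_prime) - n
```

The same two helpers are applied once to the horizontal axis and once to the vertical axis.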

3.3 Phase 3: Object Detection

Phase 3 of the proposed model is detection. The fraction of the image that falls under the footprint of the slider box is selected, and the image matrix is processed by the Faster RCNN to detect the objects. Here, a trained Faster RCNN model is used to detect objects in the input image; rather than taking the whole image at once, it takes the box image, i.e., the portion of the input image covered by the sliding box. By using Eq. (3), the row instances P_r and the column instances P_c can be evaluated for an input image of dimensions a × b. The product of P_c and P_r is the total number of iterations I.
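A minimal sketch of the Phase 3 traversal, assuming a generic detect(window) callable that stands in for the trained Faster RCNN and returns boxes in window coordinates (all names are illustrative):

```python
import numpy as np

def sliding_detect(image: np.ndarray, m: int, overlap: float, detect) -> list:
    """Run the detector on every m x m slider-box position and merge the results.

    `image` is assumed to be already padded (Phase 2) so the boxes tile it exactly.
    `detect` maps an m x m crop to (x0, y0, x1, y1, score, cls) tuples in crop
    coordinates; detections are shifted back into full-image coordinates.
    """
    stride = int(m * (1.0 - overlap))
    detections = []
    for top in range(0, image.shape[0] - m + 1, stride):       # P_r row instances
        for left in range(0, image.shape[1] - m + 1, stride):  # P_c column instances
            window = image[top:top + m, left:left + m]
            for (x0, y0, x1, y1, score, cls) in detect(window):
                detections.append((x0 + left, y0 + top, x1 + left, y1 + top, score, cls))
    return detections
```

Because neighbouring windows overlap, the same object can be reported twice; a final non-maximum-suppression pass over the merged list is a natural way to deduplicate, though the paper does not detail this step.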

4 Results and Discussion

A computer with an Intel i5 8th-generation processor, 8 GB of RAM and a dedicated 4 GB NVIDIA GTX 1050 Ti graphics card was used to train a Faster RCNN model with the TensorFlow open-source library. Pre-trained weights named "faster_rcnn_inception_v2_coco_2018" were used to initialize the parameters for transfer learning. The model was trained for nineteen hours on the benchmark VEDAI dataset [38]. The experimental code and weights used in this paper for evaluation are available at https://github.com/sidharthsamanta/srcnn.
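The paper does not list its inference code; the following is a minimal sketch of running a TF1-style frozen Faster RCNN graph of the kind distributed with the TensorFlow Object Detection API, using that API's standard tensor names (the file path is illustrative):

```python
import numpy as np
import tensorflow.compat.v1 as tf  # TF1-style graph execution

tf.disable_eager_execution()

# Load the frozen inference graph (path is illustrative).
graph_def = tf.GraphDef()
with tf.io.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")

with tf.Session(graph=graph) as sess:
    window = np.zeros((1, 256, 256, 3), dtype=np.uint8)  # one 256 x 256 slider-box crop
    boxes, scores, classes = sess.run(
        ["detection_boxes:0", "detection_scores:0", "detection_classes:0"],
        feed_dict={"image_tensor:0": window},
    )
```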

Four types of images with spatial resolutions of 7.5, 12.5, 15.5 and 30.5 cm (a sample image with its ground truth is shown in Fig. 6) were used for testing (Fig. 7). Each type contained three images of 256 px × 256 px, the same size as the training images. All images were processed by both Faster RCNN and the proposed SRCNN to determine the accuracy and precision of the proposed framework.

Figure 6: Portion of test Christchurch.jpg with ground truth

i. Image Analysis: Details of the testing images are given in Tab. 1. The Box Size column of the table is the length of the slider box, calculated using the formula derived in Eq. (1).

Figure 7: Overview of test images with spatial resolutions of 7.5, 12.5, 15 and 30.5 cm, respectively

Table 1: Training and test image description

ii. Image Pre-processing: The padding amount is calculated for each image with 5% overlap by using Eq. (6). The Padding Value column of Tab. 1 lists the padding values for each test image: the first number is the number of 0s added on the right side of the image, and the second is the number of 0s appended at the bottom. The 0s can be padded on any side of the image, as this has no effect on performance. A usage check of the helpers sketched in Section 3 follows below.
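As a hypothetical end-to-end check of the sketches above, take training images of 256 px at 12.5 cm/px and a 256 px test image at 7.5 cm/px (the resolutions stated in this section):

```python
m = slider_box_length(256, 12.5, 7.5)  # -> 427 px slider box, per Eq. (1)
pad = padding_needed(256, m, 0.05)     # -> 171 zeros on the right and at the bottom
print(m, pad)
```

Here the box is larger than the image, so a single padded 427 px × 427 px window covers the whole test image.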

iii. Object Detection: The detector is now deployed on top of the sliding window to process the image fragment that falls under its footprint. The process continues until the box reaches the vertical and horizontal ends. Fig. 8 illustrates the sliding detection process.

iv. Evaluation: The outcomes of the proposed model on the four sets of input images described in Tab. 1 are compared with those of the Faster RCNN model in Tab. 2. The confusion matrix is used for calculating the accuracy (Eq. (7)) and precision (Eq. (8)).

(a) True Positives (TP): Objects that are present in the ground truth and correctly detected in the output.

(b) True Negatives (TN): Objects that are not present in the ground truth and not detected in the output. For object detection and localization, TN is always considered 0.

Figure 8: Sliding detection process on a sample image with 15 cm resolution

Table 2: Accuracy and precision of Faster RCNN and the proposed SRCNN

(c) False Positives (FP): Objects that are not present in the ground truth but are detected in the output.

(d) False Negatives (FN): Objects that are present in the ground truth but are not detected in the output.

From these counts, the standard definitions give

Accuracy = (TP + TN) / (TP + TN + FP + FN)    (7)

Precision = TP / (TP + FP)    (8)
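A minimal sketch of the evaluation step under these definitions, with TN fixed at 0 as stated above (the counts are hypothetical):

```python
def accuracy(tp: int, fp: int, fn: int, tn: int = 0) -> float:
    """Eq. (7), with TN = 0 for detection and localization."""
    return (tp + tn) / (tp + tn + fp + fn)

def precision(tp: int, fp: int) -> float:
    """Eq. (8)."""
    return tp / (tp + fp)

print(accuracy(tp=42, fp=5, fn=3))  # hypothetical counts
print(precision(tp=42, fp=5))
```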

v. Discussion: When the spatial resolution was the same as that of the training data, i.e., 12.5 cm, both models performed identically, as the two are then the same. But as the spatial resolution increased or decreased, the performance of the stock Faster RCNN deteriorated. There was a significant drop in accuracy as well as in precision when the Faster RCNN dealt with images having spatial resolutions of 7.5 cm and 15 cm. At a resolution of 30 cm it performed worst, with zero accuracy and zero precision, whereas the proposed SRCNN showed better results at every spatial resolution.

5 Conclusion

Detection of an object is a complex task due to ambiguity in object position, orientation and light source. A small modification of the sensor may change the scale of the objects present in the image. This scaling can be normalized by the proposed method, as it segments the image before detection. The proposed SRCNN outperformed the stock Faster RCNN on image samples with completely different spatial resolution values. It was additionally observed that the model can work with images of much smaller or far larger dimensions.

The size problem could also be addressed by using an internal slider box during the convolution operation. However, when an image with very large dimensions undergoes a convolution operation directly, it creates a very large number of intermediate activations and parameters, and storing and processing these could cause even a high-configuration personal computer to run out of memory. The extended part of the SRCNN could also be implemented in a different state-of-the-art framework, such as YOLO or SSD.

Funding Statement: The authors received no specific funding for this study.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
