
Application of UAV-Based Imaging and Deep Learning in Assessment of Rice Blast Resistance

Rice Science, 2023, Issue 6

LIN Shaodan, YAO Yue, LI Jiayi, LI Xiaobin, MA Jie, WENG Haiyong, CHENG Zuxin, YE Dapeng

(1. College of Mechanical and Electrical Engineering, Fujian Agriculture and Forestry University, Fuzhou 350002, China; 2. College of Mechanical and Intelligent Manufacturing, Fujian Chuanzheng Communications College, Fuzhou 350007, China; 3. Fujian Key Laboratory of Agricultural Information Sensing Technology, Fuzhou 350002, China; 4. College of Agriculture, Fujian Agriculture and Forestry University, Fuzhou 350002, China)

Abstract: Rice blast is regarded as one of the major diseases of rice. Screening rice genotypes with high resistance to rice blast is a key strategy for ensuring global food security. Unmanned aerial vehicle (UAV)-based imaging, coupled with deep learning, can acquire high-throughput imagery related to rice blast infection. In this study, we developed a segmented detection model (called RiceblastSegMask) for rice blast detection and resistance evaluation. The feasibility of different backbones and target detection models was further investigated. RiceblastSegMask is a two-stage instance segmentation model, comprising an image-denoising backbone network, a feature pyramid, a trinomial tree fine-grained feature extraction combination network, and an image pixel codec module. The results showed that the model combining image denoising and fine-grained feature extraction based on the Swin Transformer, with feature pixels matched to feature labels by the trinomial tree recursive algorithm, performed the best. The overall accuracy for instance segmentation of RiceblastSegMask reached 97.56%, and it demonstrated a satisfactory accuracy of 90.29% for grading varietal resistance to rice blast. These results indicate that low-altitude remote sensing using UAVs, in conjunction with the proposed RiceblastSegMask model, can efficiently calculate the extent of rice blast infection, offering a new phenotypic tool for evaluating rice blast resistance on a field scale in rice breeding programs.

Key words: rice blast; segmentation detection; trinomial tree; Swin Transformer; unmanned aerial vehicle

Rice blast, caused by the fungus Magnaporthe oryzae, is considered one of the major diseases of rice. M. oryzae can infect various parts of rice, including leaves, stems, nodes, and panicles (Kim et al, 2017), leading to a significant decrease in rice yield of 10% to 30%, and even total loss of the grain (Feng et al, 2022). Managing rice blast has become more challenging due to the declining resistance of current rice varieties when confronted with new and more virulent strains of the fungus (Laha et al, 2017). Efforts have been made to combat rice blast through several strategies, such as fungicides and breeding. However, the use of eco-unfriendly fungicides can result in pesticide residue, raising environmental concerns. Therefore, considerable efforts are being made to screen rice genotypes for resistance to M. oryzae, which is a crucial measure for ensuring global food security.

In addition to morphology and pathogenicity analysis, molecular characterization is commonly undertaken during the process of rice breeding (Shahriar et al, 2020). Manual grading of leaf symptoms based on the disease spot area plays a critical role in assessing disease resistance. For instance, Bal et al (2019) conducted a field study to manually screen and evaluate the resistance of 14 mungbean genotypes against yellow mosaic disease, Cercospora leaf spot, or powdery mildew based on the lesions on the foliage. The length of lesions on the leaves is also recorded to assess rice resistance to Xanthomonas oryzae (Jin et al, 2020). Furthermore, blast symptoms on rice blades and sheaths at different infection stages are measured to investigate the resistance mechanisms of rice against M. oryzae (Ma et al, 2022). This research highlights the importance of manual disease lesion measurement in breeding, despite its labor-intensive and inefficient nature.

Fortunately, optical-based technologies have been taken into account with advancements in rapid, real-time, and nondestructive estimation and discrimination of specific plant traits, such as assessing drought and salt stress in magneto-primed triticale seeds (Alvarez et al, 2021), discriminating herbicide-resistant genotypes in kochia (Nugent et al, 2018), and selecting salinity-tolerant cultivars in okra (Feng et al, 2020). Additionally, Zhang et al (2022) presented a method for selecting bacterial blight resistant rice varieties by coupling visible/near-infrared hyperspectral imaging and deep learning. Hyperspectral imaging was used to evaluate the resistance of different sugar beet genotypes to Cercospora leaf spot, and it successfully quantified fungal sporulation (Oerke et al, 2019). Moreover, Brugger et al (2021) demonstrated that spectral signatures in the ultraviolet range related to pigments and flavonoids, combined with deep learning, enabled the non-invasive characterization of the interaction between barley and powdery mildew.

Field phenotyping to dissect QTLs and candidate genes using various platforms, from ground-based to aerial systems, is considered a frontier in crop breeding (Araus and Cairns, 2014). With the development of unmanned aerial vehicle (UAV) remote sensing, it has been applied to field-scale trait evaluation, such as yield estimation in oilseed rape, spinach, and rice (Zhou et al, 2017; Wan et al, 2018; Ariza-Sentís et al, 2023), biomass dynamic monitoring in rice (Cen et al, 2019), and time-series canopy phenotyping of genetic variants in soybean (Li et al, 2023). Cai et al (2018) explored the feasibility of UAV remote sensing for evaluating the severity of narrow brown leaf spot in rice and found that the vegetation index of excess green minus excess red, calculated from RGB (red, green, and blue) images, achieved the best detection performance. An et al (2021) applied UAV hyperspectral imaging to spatially and temporally extract areas of rice false smut infection with an overall accuracy of 85.19%, using spectral and temporal features as input for random forests. High-resolution RGB and multispectral images from a UAV platform have been applied to quantify different levels of sheath blight in rice fields with an overall accuracy of 63%, indicating that a consumer-grade UAV remote sensing system, integrating digital and multispectral cameras, has the potential to detect sheath blight on a field scale (Zhang et al, 2018). In summary, UAV-based remote sensing has played a key role in smart agriculture, thanks to its advantages of high spatial resolution, efficiency, and cost effectiveness. The use of this technology can facilitate the acquisition of rice phenomics in interaction with biotic/abiotic factors on a field scale during the breeding process.

To the best of our knowledge, few studies have quantified rice blast resistance in different rice genotypes in the field using low-altitude UAV-based images. Therefore, this study aimed to investigate the potential of UAV-based remote sensing coupled with deep learning to quantify rice blast resistance in different rice genotypes in the field. The specific goals were to propose a backbone network for image denoising and fine-grained feature extraction, build a model for accurate instance segmentation of rice blast, and evaluate rice blast resistance in different genotypes based on the established model.

RESULTS AND DISCUSSION

Training performance of different backbone networks under RiceblastSegMask model

The RiceblastSegMask model was trained using different backbone networks, including ResNet50, ResNet101, Swin-Tiny, and Swin-Base. The training process was conducted on three Tesla V100 GPUs. The batch size was set to 21 to obtain the pre-training weights. The initial learning rate was set at 1 × 10⁻⁴, and the weight was attenuated to 1 × 10⁻⁴ after 89 999 epochs. The gradient curves of the detection box, the target mask, and the instance segmentation were used as parameters to evaluate the model's training performance and obtain the optimal model after sufficient training (Fig. 1). It can be observed that the gradient trends of RiceblastSegMask using ResNet50, ResNet101, Swin-Tiny, and Swin-Base as the backbone network were all relatively stable without significant fluctuations and converged at the end of training, indicating that the models were fully trained.

Fig. 1. Gradient descent curve of RiceblastSegMask model with backbone networks of ResNet50, ResNet101, Swin-Tiny, and Swin-Base.
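For concreteness, the hyperparameters above can be assembled into a training loop such as the following minimal PyTorch sketch. The paper states the learning rate, weight decay value, and batch size, but not the optimizer, scheduler, or loss; AdamW, the stand-in model, and the placeholder loss below are therefore assumptions that only illustrate the loop structure.

```python
"""Minimal sketch of the training setup described above (assumptions noted)."""
import torch
import torch.nn as nn

# Stand-in for RiceblastSegMask; the real model is a two-stage instance
# segmentation network with a Swin Transformer backbone.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, 1))

# AdamW is an assumption; lr and weight decay values are from the paper.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-4)

batch_size = 21  # split across three GPUs in the paper

for step in range(10):  # the paper trains for ~90 000 epochs; 10 here for illustration
    images = torch.randn(batch_size, 3, 224, 224)
    targets = torch.randn(batch_size, 1, 224, 224)
    loss = nn.functional.mse_loss(model(images), targets)  # placeholder loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```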

The details of each training parameter for the different backbone networks under RiceblastSegMask are shown in Table 1. RiceblastSegMask with Swin-Tiny as the backbone required the least time to complete the training process. In terms of box loss for the detection frame, mask loss, and segmentation loss, the model using the ResNet50 backbone achieved the smallest values compared with the other backbone networks. However, when using Swin-Base as the backbone network, the classification loss was the smallest, at 0.027. Overall, although the box loss, mask loss, and segmentation loss using ResNet as the backbone network were slightly better than those with the Swin Transformer, the training time and classification loss showed the opposite pattern.

Table 1. RiceblastSegMask training parameters using different backbone networks.

Comparison of testing performance of RiceblastSegMask model using different backbone networks for rice blast resistance grading

To determine the best backbone network, the box average precision and the mask average precision of instance segmentation were further investigated at different rice blast levels with the RiceblastSegMask model (Table 2). For the ResNet50 backbone network, the box average precision at rice blast levels 1, 2, 5, 7, and 8 reached 100.00%, 98.88%, 100.00%, 98.80%, and 97.87%, respectively, while the mask average precision of instance segmentation for disease levels 2, 4, 5, 6, 7, and 8 reached 95.95%, 97.27%, 100.00%, 97.04%, 96.26%, and 93.99%, respectively. For the ResNet101 backbone network, the box average precision for disease levels 1, 4, 5, and 6 was 100.00%, 99.00%, 100.00%, and 99.00%, and the mask average precision of instance segmentation for disease levels 1 and 3 was 97.33% and 95.46%, respectively. For the Swin-Base backbone network, the box average precision for disease levels 1, 5, 6, and 9 reached 100.00%, 100.00%, 99.00%, and 98.98%, respectively, while the mask average precision of instance segmentation for disease level 9 reached 97.56%. Based on these results, the box average precision at different disease levels and the average precision of instance segmentation did not differ significantly between the Swin Transformer and ResNet. However, the training time for the Swin Transformer (Table 1) was almost half of that for ResNet. Therefore, considering both the training time and the average precision, the Swin Transformer was considered the better backbone network for the RiceblastSegMask model.

Table 2. Box average precision and mask average precision of instance segmentation at different rice blast levels with the RiceblastSegMask model using different backbone networks (%).

The confusion matrix of the Swin-Base backbone network on the test dataset at different rice blast infection levels is shown in Fig. 2. It can be seen that the Swin-Base backbone network achieved satisfactory performance, with accuracies for all nine disease levels exceeding 97%.

Comparison of different target detection models for rice blast grading

Fig. 2. Confusion matrix of Swin-Base at different levels of rice blast infection.

The proposed RiceblastSegMask model was further compared with popular single-stage target detection models (YOLACT and YOLACT++) and two-stage target detection models (Fast R-CNN, Mask R-CNN, and Transfiner) using average precision and mean average precision. YOLACT and YOLACT++ adopt an architecture based on a fully convolutional neural network with an additional mask prediction branch. YOLACT was the first real-time instance segmentation framework, but it forgoes an explicit localization step. The average precision of the YOLACT model on Microsoft Common Objects in Context (COCO) reaches 29.8% (Bolya et al, 2020). Mask R-CNN is an end-to-end, flexible multi-task detection framework that can accomplish target detection, target instance segmentation, and target keypoint detection. The average precision of the Mask R-CNN model on COCO reaches 36.4%. Fast R-CNN utilizes shared convolutional image feature layers in the spatial pyramid pooling network, which greatly reduces the time complexity of the model, but the selective search for candidate boxes remains time-consuming. The mean average precision of the Fast R-CNN model on PASCAL Visual Object Classes (VOC) 2007 reaches 66.9% (Li et al, 2021). Transfiner makes significant performance improvements on three large-scale instance segmentation datasets, COCO, Cityscapes, and BDD100K, especially in the edge regions of objects (Ke et al, 2022). It can be observed that the accuracies of RiceblastSegMask at IoU 0.50, IoU 0.75, IoU 0.90, and IoU 0.50-0.90 are 96.83%, 87.33%, 56.33%, and 80.16%, respectively, which are higher than those of YOLACT, YOLACT++, Fast R-CNN, Mask R-CNN, and Transfiner (Table 3). This demonstrates that RiceblastSegMask has an advantage for rice blast grading.
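As background for the IoU thresholds reported in Table 3, the generic sketch below shows how mask IoU gates whether a predicted lesion counts as a true positive at a given threshold; it is an illustration of the metric, not code from the paper.

```python
"""Mask IoU and its use with the AP thresholds in Table 3 (0.50, 0.75, 0.90)."""
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-union of two boolean masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / union if union else 0.0

# Toy example: two overlapping square lesions on a 10 x 10 grid.
pred = np.zeros((10, 10)); pred[2:7, 2:7] = 1
gt = np.zeros((10, 10)); gt[3:8, 3:8] = 1
iou = mask_iou(pred, gt)
for thr in (0.50, 0.75, 0.90):
    print(f"IoU={iou:.2f} -> true positive at threshold {thr}: {iou >= thr}")
```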

Assessment of rice blast resistance among different genotypes

From the 5 m low-altitude UAV-based images processed for rice blast identification using RiceblastSegMask, as shown in Fig. 3, valuable information can be derived to assess rice blast resistance among different genotypes. It is evident that there were significant differences in the number of infected leaves and infection levels among resistant, moderately resistant, and susceptible genotypes. Among them, resistant varieties (N246 Red Glutinous 442, D36 BC1F4_RGD-7S, and D343 South 11S) exhibited the fewest infected leaves, while susceptible varieties (P148 Wild Aroma BX, N175 Hengben BX, and D596 F4_Kinong B) were the most susceptible to rice blast infection.

Table 3. Average precision of different target detection models (%).

Fig. 3. Representative unmanned aerial vehicle (UAV)-based images for rice blast resistance grading among different genotypes using RiceblastSegMask.

The high-throughput calculation of rice blast lesions on rice leaves from UAV-based images allows agronomists to easily assess rice blast resistance in the field during breeding. As depicted in Fig. 4-A, instance segmentation using the RiceblastSegMask model accurately calculated the rice blast lesions. Overall, RiceblastSegMask achieved satisfactory results, with an overall grading accuracy of 90.29% across different levels of rice blast resistance (Fig. 4-B). Susceptible rice varieties exhibited the highest grading accuracy of 92.5%, while both resistant and moderately resistant rice varieties showed grading accuracies exceeding 80%. Notably, 16% of moderately resistant rice varieties were misclassified as susceptible.

This research proposed a segmentation detection model for rice blast, called RiceblastSegMask, which utilized the Swin Transformer as the backbone network. This approach effectively enhanced the accuracy of fine-grained recognition of target features and the subsequent segmentation of target region instances. Furthermore, the feature pyramid and trinomial tree structure contributed to capturing more local details of the target. Additionally, the pixel decoder of RiceblastSegMask decoded the output query for each node in the tree to predict the final instance label. The instance segmentation accuracy reached 97.56%, surpassing that of YOLACT, YOLACT++, Fast R-CNN, Mask R-CNN, and Transfiner. The overall accuracy in rice blast resistance grading was 90.29%, providing theoretical and technical support for efficiently screening rice genotypes with high resistance to rice blast in breeding programs.

Fig. 4. Instance segmentation of rice blast based on RiceblastSegMask (A) and confusion matrix of rice blast resistance assessment (B).

Moreover, this study utilized UAV-generated images for rice blast disease acquisition. Compared with ground-based acquisition, UAV-based images were acquired more rapidly and could cover a larger amount of data in a shorter period, significantly reducing data acquisition time. Additionally, UAVs can fly at low altitudes, capturing high-resolution images that offer rich details. As a result, utilizing drones for image acquisition has become a powerful tool in various applications, including agriculture.

METHODS

Experimental site

The experimental site was located at the Longyan Shanghang Rice Breeding Demonstration Base in Chadi Town, Shanghang County, Longyan City, Fujian Province, China (longitude 116.575°, latitude 25.020°) (Fig. S1). A total of 1 358 rice genotypes harboring different combinations of genes were utilized to detect rice blast resistance. Rice seedlings were cultivated using a moist method, and fertilizer application followed local field practices, with N, P, and K fertilizers applied at rates of 162.6, 90.6, and 225.0 kg/hm², respectively, in the form of CO(NH₂)₂, Ca(H₂PO₄)₂·H₂O, and KCl. The paddy field was maintained with a water layer throughout the growth period, and no measures such as baking or spraying were taken to control rice blast.

The identification of disease resistance against leaf blast in rice was carried out through natural field induction. A field disease nursery was established in Chadi Town, Shanghang County, a major rice-growing area in Fujian Province with severe outbreaks of rice blast disease. The selected plots were conveniently irrigated and of moderate fertility. The entire field was divided into 30 large plots, each covering an area of more than 20 m², and each large plot was further divided into 60 small subplots. One rice variety was planted in each small subplot, occupying an area of 0.02 m², and a protective row of the inducer variety 'Minghui 86' was established around each plot. Conventional fertilization and irrigation were applied, and no control measures were implemented against diseases or pests. Rice blast disease images were collected once leaf blast had stabilized in the inducer variety.

UAV-based image collection

A commercial UAV (DJI Mavic 2 Pro, Dajiang Innovations, Shenzhen, China) was used to collect high-resolution RGB images (5 472 × 3 648 pixels) using a camera with the following parameters: 1-inch CMOS (complementary metal oxide semiconductor) sensor, f/2.8 aperture, and 28 mm equivalent focal length (L1D-20c camera, Hasselblad, Germany). The data collection location was in Longyan City, Fujian Province, China, as indicated in Fig. S1. The UAV was deployed between 14:00 and 16:00 at the yellow ripening stage in early- to mid-July and mid-October 2022. The weather conditions were sunny, with a temperature of 30 °C and a humidity of 72%. The UAV flew at an altitude of 5 m with a camera exposure time of 2 000 ms, and the image resolution was 0.1 mm/pixel. The UAV acquired pictures at a frequency of 2 frames per second (fps). The flight route was planned to achieve 60% forward overlap and 75% lateral overlap to ensure optimal performance in image mosaicking. The UAV was flown multiple times to obtain images during the observation of rice blast disease. In total, 1 702 high-resolution UAV-based images were gathered in this study. After collecting the image data, we manually observed and counted the number of rice leaves infected by rice blast based on the acquired images.
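To see what the stated overlap settings imply for flight planning, the sketch below derives the per-image ground footprint from the reported 0.1 mm/pixel resolution and image size, then computes the trigger and flight-line spacing. The footprint values are derived quantities under these assumptions, not numbers reported in the paper.

```python
"""Illustrative flight-plan arithmetic from the stated overlaps (assumptions noted)."""

PIXELS_W, PIXELS_H = 5472, 3648
GSD_M = 0.1e-3  # 0.1 mm/pixel, per the paper

footprint_w = PIXELS_W * GSD_M  # ground width covered by one image (m)
footprint_h = PIXELS_H * GSD_M  # ground height covered by one image (m)

forward_overlap, lateral_overlap = 0.60, 0.75
trigger_spacing = footprint_h * (1 - forward_overlap)  # along-track distance between shots
line_spacing = footprint_w * (1 - lateral_overlap)     # distance between adjacent flight lines

print(f"footprint: {footprint_w:.2f} m x {footprint_h:.2f} m")
print(f"trigger every {trigger_spacing:.2f} m; flight lines {line_spacing:.2f} m apart")
```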

Rice blast level grading standard

The classification of rice blast disease levels was based on the standards set by the International Rice Research Institute (Anderson, 1991), as presented in Table S1. Rice blast resistance is divided into three categories based on the extent of infection: resistant (R), moderately resistant (M), and susceptible (S). Specifically, disease level 0 corresponds to R, levels 1-3 correspond to M, and levels 4-9 correspond to S. The disease levels of rice blast are depicted in Fig. S2, and the distribution of instances for each level is shown in Table S1. Given the similarity in visual appearance between adjacent infection levels, instances at lower levels are sometimes categorized as a higher level during manual annotation.

Following the experiment, the assessment of variety resistance to rice blast was conducted based on the results of disease level segmentation. Disease grades were further categorized following the guidelines proposed by Atkins (1967), as implemented in the sketch below: (i) when A > B + C + D and C + D ≥ 1, the variety is marked as M; (ii) when A > B + C + D and C + D = 0, it is marked as R; (iii) when A ≤ B + C + D and A + B > C + D, it is marked as M; (iv) when A ≤ B + C + D and A + B < C + D, it is marked as S; and (v) when no disease spots are present, it is marked as R. Here, A, B, C, and D correspond to the numbers of instances at disease levels 0-2, 3-7, 8, and 9, respectively.
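The following function is a direct transcription of rules (i)-(v); only the handling of the boundary case A + B = C + D, which the text does not specify, is an assumption.

```python
"""Atkins (1967)-style resistance grading from counts at disease levels 0-2 (A),
3-7 (B), 8 (C), and 9 (D)."""

def grade_resistance(a: int, b: int, c: int, d: int) -> str:
    """Return 'R' (resistant), 'M' (moderately resistant), or 'S' (susceptible)."""
    if a + b + c + d == 0:                    # rule (v): no disease spots at all
        return "R"
    if a > b + c + d:
        return "M" if c + d >= 1 else "R"     # rules (i) and (ii)
    if a + b > c + d:                         # rule (iii)
        return "M"
    # rule (iv); the boundary case a + b == c + d is unspecified and falls to 'S' here
    return "S"

# Example: mostly low-level lesions with one severe lesion -> moderately resistant.
print(grade_resistance(a=12, b=3, c=1, d=0))  # 'M'
```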

Detection model

To handle both the rice blast lesions and the noise present in the UAV-collected rice blast images, we proposed a rice blast detection model called RiceblastSegMask. This model integrates image denoising and fine-grained feature extraction and operates as a two-stage instance segmentation model. It comprises an image-denoising backbone network, a feature pyramid, a trinomial tree fine-grained feature extraction combination network, and an image pixel codec module (Fig. S3).

Swin Transformer backbone network

In conventional convolutional neural networks, multi-sized features are typically obtained through pooling operations. Pooling increases the receptive field of each convolutional kernel, facilitating the extraction of features at different scales. Our model adopted the Swin Transformer as its backbone network, which performs a pooling-like operation called patch merging. Patch merging synthesizes four adjacent small patches into a larger one, enabling the network to capture features of different sizes effectively (see the sketch below). As illustrated in Fig. S4, the Swin Transformer achieves the integration of shallow to deep features by employing receptive fields of multiple sizes. Utilizing the Swin Transformer as the backbone network effectively enhances the fine-grained recognition of target features, which is crucial for subsequent target region instance segmentation.
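The sketch below mirrors the standard Swin Transformer patch-merging module: four adjacent patches are concatenated channel-wise, halving the spatial resolution, and a linear projection doubles the channel dimension. It follows the public Swin implementation rather than code from the paper.

```python
"""Standard Swin-style patch merging: (B, H, W, C) -> (B, H/2, W/2, 2C)."""
import torch
import torch.nn as nn

class PatchMerging(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(4 * dim)
        self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, H, W, C) with H and W even
        x0 = x[:, 0::2, 0::2, :]  # top-left patch of each 2x2 group
        x1 = x[:, 1::2, 0::2, :]  # bottom-left
        x2 = x[:, 0::2, 1::2, :]  # top-right
        x3 = x[:, 1::2, 1::2, :]  # bottom-right
        x = torch.cat([x0, x1, x2, x3], dim=-1)  # (B, H/2, W/2, 4C)
        return self.reduction(self.norm(x))      # (B, H/2, W/2, 2C)

# A 56 x 56 feature map with 96 channels becomes 28 x 28 with 192 channels.
out = PatchMerging(96)(torch.randn(1, 56, 56, 96))
print(out.shape)  # torch.Size([1, 28, 28, 192])
```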

Image denoising

The backbone network incorporates the Swin Transformer with an input image denoising function. It primarily consists of six Swin Transformer layers (STL), forming residual Swin Transformer blocks (RSTB). These blocks are followed by a 3 × 3 convolutional layer responsible for feature extraction and residual connection to generate high-quality denoised images (Liang et al, 2021). Specifically, a low-quality (LQ) input image ILQ is fed into the network, resulting in an intermediate shallow feature F0 ∈ R^(H×W×Cin), where H, W, and Cin represent the height, width, and number of input image channels, respectively. To extract this shallow feature F0, we employed a 3 × 3 convolutional layer, ConvSF(·), as follows: F0 = ConvSF(ILQ). The convolutional layer is well-suited for shallow visual processing, ensuring stable optimization and improved results. Additionally, it provides a straightforward mechanism for mapping the input image space into a higher-dimensional feature space. We then extract the deep feature FDF ∈ R^(H×W×C) from F0: FDF = ConvDF(F0), where ConvDF(·) denotes the deep feature extraction module, which consists of K RSTBs and a 3 × 3 convolutional layer. The intermediate features F1, F2, ···, FK and the output deep feature FDF are extracted block by block as follows:

Fi = ConvRSTB-i(Fi-1), i = 1, 2, ···, K
FDF = Conv(FK)

Here, ConvRSTB-i(·) denotes the i-th RSTB, and Conv(·) represents the final convolutional layer. The inclusion of the convolutional layer at the end of feature extraction introduces the inductive bias of the convolutional operation into the Swin Transformer-based network, forming a solid foundation for the subsequent aggregation of shallow and deep features.
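The pipeline above (shallow conv, K residual blocks, final conv with a skip connection) can be sketched as follows, in the spirit of the SwinIR design of Liang et al (2021). For brevity, a generic transformer encoder stands in for the windowed attention of a real RSTB, so this is a structural sketch rather than the paper's implementation.

```python
"""Schematic denoising backbone: ConvSF -> K RSTBs -> Conv, with residual skips."""
import torch
import torch.nn as nn

class RSTB(nn.Module):
    """Residual block: six transformer layers plus a 3x3 conv, with a skip."""
    def __init__(self, dim: int, num_layers: int = 6):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.body = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.conv = nn.Conv2d(dim, dim, 3, padding=1)

    def forward(self, x):
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)  # (B, H*W, C) token sequence
        y = self.body(seq).transpose(1, 2).reshape(b, c, h, w)
        return x + self.conv(y)             # residual connection

class DenoisingBackbone(nn.Module):
    def __init__(self, in_ch: int = 3, dim: int = 32, k: int = 2):
        super().__init__()
        self.conv_sf = nn.Conv2d(in_ch, dim, 3, padding=1)       # shallow feature F0
        self.rstbs = nn.ModuleList(RSTB(dim) for _ in range(k))  # K RSTBs
        self.conv_last = nn.Conv2d(dim, dim, 3, padding=1)       # final conv

    def forward(self, lq):
        f0 = self.conv_sf(lq)            # F0 = ConvSF(I_LQ)
        f = f0
        for rstb in self.rstbs:          # Fi = ConvRSTB-i(Fi-1)
            f = rstb(f)
        return f0 + self.conv_last(f)    # FDF aggregated with the shallow feature

out = DenoisingBackbone()(torch.randn(1, 3, 32, 32))
print(out.shape)  # torch.Size([1, 32, 32, 32])
```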

Feature location trinomial tree structure

Within the framework of our model, high-quality images, generated through the combination of residual STLs, were used as inputs to the backbone network. These images were then connected to the feature pyramid network (FPN) to extract multi-level depth features. Based on this, a region of interest (RoI) feature pyramid was constructed to segment instances in a coarse-to-fine manner. Specifically, RiceblastSegMask first extracted features at three levels of the FPN to form the RoI pyramid, where the RoI size progressively increased through {28, 56, 112}. Instead of selecting a single-layer FPN feature based solely on object size, RiceblastSegMask utilized information loss points distributed across multiple layers in the RoI pyramid. The model treated these points as an unordered sequence with dimensions of C × N, where N represents the total number of nodes and C indicates the feature channel dimension. It then predicted the corresponding segmentation label for each point.

In the model, feature regions, ranging from shallow to deep, were decomposed and represented as a trinomial tree structure. The nodes of this trinomial tree were derived from feature propagation within the FPN of the backbone network, with different layers signifying different feature granularities. The structure of this tree is depicted in Fig. S5. A node from a lower-level RoI feature (e.g., 28 × 28 resolution) had three corresponding children in its adjacent higher-level RoI (e.g., 56 × 56 resolution), recursively expanding until reaching the optimal feature level. To reduce the computational load, only pixels predicted as lost nodes higher up the trinomial tree were decomposed, and the maximum tree depth was set to 3. Specifically, information loss points detected from the lowest level (28 × 28) to the highest level (112 × 112) were treated as root nodes. These root nodes were then expanded with three child nodes each, moving from top to bottom, forming a multi-level tree. For large-scale feature extraction, the child points can be magnified, and the level can be increased to observe finer local details of the target (Ke et al, 2022).
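The coarse-to-fine expansion can be sketched as the recursion below: loss points at the coarsest RoI level become roots, and each loss node spawns three children at the next finer level up to the maximum depth. The choice of which three finer-grid offsets to expand, and the toy loss-point predicate, are simplifications; the paper does not spell out the exact expansion rule.

```python
"""Illustrative trinomial-tree expansion over the {28, 56, 112} RoI pyramid."""
from dataclasses import dataclass, field

LEVELS = [28, 56, 112]  # RoI feature resolutions, coarse to fine
MAX_DEPTH = 3

@dataclass
class Node:
    level: int  # index into LEVELS
    y: int
    x: int
    children: list = field(default_factory=list)

def expand(node: Node, is_loss_point) -> None:
    """Recursively grow up to three children per loss node at the next finer level."""
    if node.level + 1 >= len(LEVELS) or node.level + 1 >= MAX_DEPTH:
        return
    scale = LEVELS[node.level + 1] // LEVELS[node.level]  # = 2 between levels
    # Three of the four finer-grid positions (trinomial rather than quadtree);
    # which three are chosen here is an assumption for illustration.
    for dy, dx in ((0, 0), (0, 1), (1, 0)):
        child = Node(node.level + 1, node.y * scale + dy, node.x * scale + dx)
        if is_loss_point(child.level, child.y, child.x):
            node.children.append(child)
            expand(child, is_loss_point)

# Toy predicate standing in for the model's information-loss prediction.
root = Node(0, 5, 7)
expand(root, lambda lvl, y, x: (y + x) % 3 == 0)
print(len(root.children), "children at level 1")
```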

Encoding and decoding of pixel sequences

To enhance the characteristics of each information loss node, RiceblastSegMask's Node Encoder utilized three different information cues for encoding each trinomial tree node. These cues included fine-grained depth features extracted from the corresponding positions and levels of the FPN pyramid, relative position coding to capture distance correlations between nodes within the RoI, and information about adjacent points surrounding each node to account for local details. For each node, we extracted features from the 3 × 3 neighborhood and then compressed them to the original feature dimension using a fully connected layer. Intuitively, this helped locate object edges and capture their local shapes. As shown in the Node Encoding module in Fig. S3, the fine-grained and local features were concatenated and added to the position vector to produce the encoded nodes.
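A minimal sketch of this encoding step, combining the three cues exactly as described (neighborhood compression, concatenation, position addition), is shown below; all dimensions are illustrative assumptions.

```python
"""Node encoding: concat(fine-grained, compressed 3x3 local) + position embedding."""
import torch
import torch.nn as nn

class NodeEncoder(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.compress = nn.Linear(9 * dim, dim)  # 3x3 neighborhood -> original dim
        self.pos_embed = nn.Linear(2, 2 * dim)   # relative (y, x) within the RoI

    def forward(self, fine, neighborhood, rel_pos):
        # fine: (N, dim) point feature from the node's FPN level
        # neighborhood: (N, 9, dim) features of the 3x3 window around the node
        # rel_pos: (N, 2) relative position of the node within its RoI
        local = self.compress(neighborhood.flatten(1))  # (N, dim)
        encoded = torch.cat([fine, local], dim=-1)      # (N, 2*dim)
        return encoded + self.pos_embed(rel_pos)        # add relative position coding

enc = NodeEncoder()
out = enc(torch.randn(5, 64), torch.randn(5, 9, 64), torch.rand(5, 2))
print(out.shape)  # torch.Size([5, 128])
```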

Once the trinomial tree nodes were encoded, the multi-head attention module in the Sequence Encoder was used to model correlations between points, facilitating the fusion and updating of features among the nodes in the input sequence. Each layer of the sequence encoder consisted of multi-head self-attention modules and a fully connected feed-forward network. Unlike standard transformer decoders, RiceblastSegMask's pixel decoder was a simple multilayer perceptron (MLP) without a multi-head attention module. It decoded the output query for each node in the tree to predict the final instance label.
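The encoder/decoder pair reduces to a few standard modules, as in the sketch below: self-attention layers fuse features across the encoded nodes, and a plain MLP predicts a per-node instance label. Layer counts and sizes are illustrative assumptions.

```python
"""Sequence encoder (multi-head self-attention) + MLP pixel decoder over tree nodes."""
import torch
import torch.nn as nn

dim, num_nodes, num_classes = 128, 5, 10  # sizes chosen for illustration

encoder_layer = nn.TransformerEncoderLayer(
    d_model=dim, nhead=8, dim_feedforward=4 * dim, batch_first=True
)
sequence_encoder = nn.TransformerEncoder(encoder_layer, num_layers=3)

pixel_decoder = nn.Sequential(  # plain MLP, no multi-head attention
    nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, num_classes)
)

nodes = torch.randn(1, num_nodes, dim)           # encoded trinomial-tree nodes
labels = pixel_decoder(sequence_encoder(nodes))  # per-node instance logits
print(labels.shape)  # torch.Size([1, 5, 10])
```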

By decomposing the image for each instance, we calculated the rice blast lesions and analyzed the correlation between rice varieties and the affected leaf area, as well as the extent to which rice varieties were infected. We designated a circle with a radius of 2 cm as the reference object within the detection area. Using this reference, we calculated its area and perimeter and established the 'actual value/pixel' relationship from its actual and pixel measurements. We then converted the pixels in the lesion area into actual values using this relationship. The formulae for calculating the lesion measurements are as follows:

Crd = (c / cp) × Cp

where Crd represents the actual perimeter of the lesion area, Cp represents the lesion perimeter in pixels, c represents the actual perimeter of the reference object, and cp represents the reference perimeter in pixels.

Srd = (s / sp) × Sp

where Srd represents the actual lesion area, Sp represents the lesion area in pixels, s represents the actual area of the reference object, and sp represents the reference area in pixels.
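The conversion amounts to scaling pixel measurements by the reference ratios, as in the sketch below. The lesion pixel values would come from the instance masks; the numbers here are illustrative only.

```python
"""Pixel-to-actual conversion via the 2 cm circular reference object."""
import math

REF_RADIUS_CM = 2.0
ref_perimeter_cm = 2 * math.pi * REF_RADIUS_CM  # c: actual reference perimeter
ref_area_cm2 = math.pi * REF_RADIUS_CM ** 2     # s: actual reference area

def lesion_actual(perim_px: float, area_px: float,
                  ref_perim_px: float, ref_area_px: float) -> tuple:
    """Convert lesion perimeter/area in pixels to cm and cm^2 via the reference."""
    perim_cm = perim_px * ref_perimeter_cm / ref_perim_px  # Crd = (c / cp) * Cp
    area_cm2 = area_px * ref_area_cm2 / ref_area_px        # Srd = (s / sp) * Sp
    return perim_cm, area_cm2

# Example: reference circle measured at 400 px perimeter / 12 732 px area.
print(lesion_actual(perim_px=150.0, area_px=900.0,
                    ref_perim_px=400.0, ref_area_px=12732.0))
```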

Model performance evaluation

Each model underwent testing across nine rice blast infection levels. The data were divided into training, validation, and test sets at a ratio of 5:4:1. The training set excluded the test set, while the validation set was drawn from the training set but was not involved in the training process. These data allowed for a relatively objective evaluation of the model (Lin et al, 2022). To compare the test results of the improved method with other techniques, we randomly selected 100 images from the dataset as a test set. Through multiple controlled experiments, we obtained two performance metrics, precision and recall, to measure the effectiveness of the model in rice blast detection. Precision was calculated as Precision = TP / (TP + FP), and recall as Recall = TP / (TP + FN). Here, TP represents the number of positive samples predicted correctly, FN represents the number of positive samples incorrectly predicted as negative, and FP represents the number of negative samples incorrectly predicted as positive.
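These two formulas translate directly into code; the counts in the example are illustrative.

```python
"""Precision and recall exactly as defined above."""

def precision_recall(tp: int, fp: int, fn: int) -> tuple:
    precision = tp / (tp + fp) if tp + fp else 0.0  # Precision = TP / (TP + FP)
    recall = tp / (tp + fn) if tp + fn else 0.0     # Recall = TP / (TP + FN)
    return precision, recall

# Example: 90 correct detections, 10 false alarms, 5 missed lesions.
p, r = precision_recall(tp=90, fp=10, fn=5)
print(f"precision={p:.3f}, recall={r:.3f}")  # precision=0.900, recall=0.947
```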

ACKNOWLEDGEMENT

This study was supported by the Natural Science Foundation of Fujian Province, China (Grant No. 2022J01611).

SUPPLEMENTAL DATA

The following materials are available in the online version of this article at http://www.sciencedirect.com/journal/rice-science; http://www.ricescience.org.

Fig. S1. Location of experimental site and overview of rice RGB images captured by unmanned aerial vehicle remote sensing platform.

Fig. S2. Typical images of different disease levels of rice blast.

Fig. S3. Architecture of model for rice blast detection.

Fig. S4. Swin Transformer multi-size feature receptive field.

Fig. S5. Trinomial tree structure.

Table S1. Rating scale of rice blast disease levels and instance distribution.
