Extraction of cotton seedling growth information using UAV visible light remote sensing images
Dai Jianguo1,2, Xue Jinli1,2, Zhao Qingzhan1,2, Wang Qiong3, Chen Bing3, Zhang Guoshun1,2, Jiang Nan1,2
(1. College of Information Science and Technology, Shihezi University, Shihezi 832003, China; 2. Geospatial Information Engineering Technology Research Center of XPCC, Shihezi 832003, China; 3. Xinjiang Academy of Agricultural and Reclamation Science, Shihezi 832003, China)
To improve the timeliness and accuracy of cotton seedling information acquisition, this paper proposes a seedling-information extraction method based on visible-light remote sensing images. First, high-resolution images of cotton at the 3-4 leaf stage were acquired with a self-assembled low-altitude UAV platform, and cotton targets were identified and segmented by combining color feature analysis with the Otsu adaptive thresholding method. After weed interference was removed with a grid method, morphological features of the cotton regions were extracted to build an SVM-based plant counting model. Finally, the model was used to extract emergence rate, canopy coverage, and growth uniformity, and to map the spatial distribution of emergence rate and canopy coverage. The model achieved a test accuracy of 97.17%. Applied to the whole image, it estimated an emergence rate of 64.89%, differing from the ground-truth value by only 0.89 percentage points. Growth uniformity was further analyzed from canopy coverage and its coefficient of variation. The proposed method enables rapid seedling monitoring over large cotton fields, and the results can provide technical support for seedling-based precision agriculture.
UAV; remote sensing; cotton field; emergence rate; coverage; visible light
在現(xiàn)代農(nóng)業(yè)生產(chǎn)過程中,精準(zhǔn)的苗情信息是實(shí)現(xiàn)農(nóng)作物因苗管理的關(guān)鍵。傳統(tǒng)苗情獲取主要依靠植保員田間抽樣調(diào)查、手動(dòng)估算,這種方式主觀性強(qiáng)、精確度低且費(fèi)時(shí)費(fèi)力,難以滿足當(dāng)前農(nóng)業(yè)發(fā)展要求。尤其隨著精準(zhǔn)農(nóng)業(yè)的快速發(fā)展,大面積農(nóng)作物信息的快速準(zhǔn)確獲取已成為農(nóng)田精細(xì)管理的重要前提,在現(xiàn)代農(nóng)業(yè)研究領(lǐng)域受到極大關(guān)注。
自20世紀(jì)90年代起,基于機(jī)器視覺技術(shù)的苗情監(jiān)測在農(nóng)業(yè)領(lǐng)域逐步得到廣泛研究。經(jīng)過多年發(fā)展,國內(nèi)外學(xué)者在作物生長監(jiān)測、病蟲草害監(jiān)測及產(chǎn)量估測等方面的研究[1-4]取得了豐碩的成果。但早期圖像主要通過固定式(手持、三腳架)或行走式(農(nóng)用機(jī)械、農(nóng)用車、手推車)設(shè)備獲取,在獲取范圍、獲取速度等方面受到了一定限制,難以實(shí)現(xiàn)大面積的農(nóng)情監(jiān)測[5-8]。
With continuous advances in unmanned aerial vehicle (UAV) and sensor technology, field image acquisition has become more diverse. Compared with stationary and mobile platforms, multi-sensor UAV platforms are efficient, low-cost, flexible, and better suited to complex field environments [9-10]. They not only overcome the difficulties of manual image acquisition but also avoid the crop damage caused by human contact, providing a new technical means of acquiring agricultural information over large areas.
目前,針對(duì)無人機(jī)在農(nóng)業(yè)中的應(yīng)用研究十分活躍。趙必權(quán)等[11]將高分辨率遙感影像分割后提取油菜的形態(tài)特征,采用逐步回歸分析方法,建立了油菜苗期株數(shù)監(jiān)測模型,決定系數(shù)2為0.809。Gn?dinger等[12]采用圖像拉伸變換、多閾值分割等方法檢測無人機(jī)影像中的玉米苗并計(jì)算出苗率,誤差為5%。劉帥兵等[13]對(duì)采集的玉米苗期無人機(jī)遙感影像進(jìn)行顏色空間變換實(shí)現(xiàn)作物與土壤的分割,再通過Harris、Moravec和Fast角點(diǎn)檢測算法進(jìn)行玉米株數(shù)識(shí)別,總體識(shí)別率可達(dá)97.8%。陳道穎等[14]采集航空可見光影像,結(jié)合K-means聚類、BP神經(jīng)網(wǎng)絡(luò)等算法實(shí)現(xiàn)了煙草苗株數(shù)量統(tǒng)計(jì)。牛亞曉等[15]使用自主搭建的無人機(jī)平臺(tái)獲取了多光譜遙感影像,采用監(jiān)督分類與植被指數(shù)統(tǒng)計(jì)直方圖相結(jié)合的方式實(shí)現(xiàn)了田間尺度冬小麥覆蓋度提取。Jin等[16]結(jié)合低空無人機(jī)影像對(duì)小麥出苗密度進(jìn)行估算,誤差僅為9.01株/m2。
Cotton is planted over a vast area in Xinjiang (about 50% of the cultivated land), with a high degree of mechanization and a strong demand for fine management, so fast and accurate large-area seedling surveys are urgently needed. UAV remote sensing combined with machine learning has already shown clear advantages in crop identification and growth monitoring, laying the groundwork for rapid large-area extraction of seedling information. Targeting the difficulty, error, and inefficiency of manual seedling surveys in large fields, this paper therefore uses a UAV remote sensing platform to acquire high-resolution cotton imagery and studies the accurate extraction of emergence rate, canopy coverage, and growth uniformity, aiming at quantitative evaluation of seedling-stage cotton growth to support fine cotton management.
研究區(qū)位于新疆生產(chǎn)建設(shè)兵團(tuán)第八師145團(tuán)蘑菇湖村。該地區(qū)(85°92′02″~85°92′31″E,44°39′14″~44°39′25″N)屬典型的溫帶大陸性氣候,冬季長而嚴(yán)寒,夏季短而炎熱。年平均氣溫為6.2~7.9 ℃,年日照時(shí)長為2 318~2 732 h,年均降水量為180~270 mm[17]。該地區(qū)棉花主要采用機(jī)采棉種植模式(行距配置為一膜6行,行外間距為66 cm,行內(nèi)間距為10 cm;株距為10 cm)。選取研究區(qū)大小為40 m×40 m,棉花種植品種為新陸早2號(hào),進(jìn)行試驗(yàn)時(shí)棉花處于3~4葉期。
本文試驗(yàn)數(shù)據(jù)于2018年5月25日通過無人機(jī)平臺(tái)獲取。飛行平臺(tái)為大疆DJI四旋翼無人機(jī)悟Inspire 1 PRO,最大起飛質(zhì)量3.5 kg,無風(fēng)環(huán)境下水平飛行速度22 m/s,配備智能飛行電池TB47,最大飛行時(shí)長約為18 min。傳感器采用大疆禪思X5(ZENMUSE X5)可見光相機(jī),有效像素1 600萬。影像獲取時(shí)相機(jī)分辨率像素設(shè)定為4 608 ×3 456,焦距15 mm(定焦拍攝)。航拍時(shí)天氣晴朗,無風(fēng)無云,飛行高度設(shè)置為10 m,航向重疊度為60%,旁向重疊度為65%。拍攝時(shí)鏡頭垂直向下,懸停拍攝。
本次試驗(yàn)共采集93幅棉花可見光影像,數(shù)據(jù)以24位真彩色JEPG格式進(jìn)行存儲(chǔ)。利用Pix4Dmapper軟件對(duì)其進(jìn)行正射校正和影像拼接,拼接后得到的影像長約37.5 m,寬約35.4 m,空間分辨率為0.29 cm。為便于后續(xù)試驗(yàn)處理,首先對(duì)拼接影像進(jìn)行裁剪去除邊緣異常值。其次,采用Photoshop圖像處理工具對(duì)影像數(shù)據(jù)進(jìn)行切片分割,選取30幅典型圖像進(jìn)行試驗(yàn)。
Ground surveys were carried out immediately after the UAV data were acquired. With six rows per film, intra- and inter-film row spacings of 10 and 66 cm, and a total width of 2.28 m, the ground-survey quadrat size was set to 2.28 m × 2.28 m. Five quadrats were laid out in the study area, and the emerged plants in each were counted (Fig. 1). Because the field was sown with one seed per hole, the number of seeds sown per quadrat could be calculated; the emergence rate is the ratio of emerged plants to seeds sown, so the emergence rate of each quadrat followed directly. The mean over the five quadrats gave an overall emergence rate of 65.78% for the experimental area.

Fig.1 Image of the study area and quadrat distribution
本研究技術(shù)路線如圖2所示,主要包括4個(gè)步驟:1)對(duì)獲取得遙感影像進(jìn)行預(yù)處理,選取典型圖像進(jìn)行試驗(yàn);2)對(duì)圖像進(jìn)行顏色特征分析,結(jié)合最大類間方差法實(shí)現(xiàn)植被與背景分離并去除雜草影響,提取棉花目標(biāo);3)提取二值圖像中各連通區(qū)域的形態(tài)特征,通過相關(guān)性分析篩選分類變量,基于支持向量機(jī)(support vector machine,SVM)構(gòu)建棉花株數(shù)識(shí)別模型;4)將模型應(yīng)用于整幅影像,獲得研究區(qū)棉花出苗株數(shù),進(jìn)而計(jì)算棉花出苗率、冠層覆蓋度及棉株長勢(shì)均勻性。棉田影像分割、特征提取、相關(guān)性分析、SVM分類器構(gòu)建分別采用MATLAB、SPSS等軟件進(jìn)行處理。

Fig.2 Technical route
2.1.1 Color feature analysis
Separating vegetation from the background (soil) is the prerequisite for obtaining cotton seedling information. Cotton-field images consist mainly of soil (brown), plastic film (white), and plants (green: cotton and weeds), so the red, green, and blue components of the RGB color space can be combined linearly to maximize the difference between crop and soil/film and thus separate crop from background [18-21]. The choice of color model is also critical for accurate segmentation of color images [22]. This paper therefore analyzes the colors of cotton-field images using eight candidate color indices: the green-blue difference index (GBDI), excess green index (ExG), normalized green-red difference index (NGRDI), and normalized green-blue difference index (NGBDI) in the RGB model, the green chrominance component Cg of the YCrCb space, the a* and b* components of the L*a*b* space, and the H component of the HSV space; the best index is chosen by feature analysis and comparison of segmentation results on cotton-field images. For the color-space conversion, the red, green, and blue channels are first normalized into double-precision color parameters, from which each index is computed; the conversion formula and the candidate indices are given in Eq. (1) and Table 1.
r = R/(R+G+B), g = G/(R+G+B), b = B/(R+G+B)    (1)
where R, G, and B are the color components of the 24-bit image, ranging from 0 to 255, and r, g, and b are the normalized color components, ranging from 0 to 1.

Table 1 List of color indices
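As an illustration (not the paper's MATLAB code), the normalization of Eq. (1) and the four RGB indices can be sketched in NumPy; the definitions GBDI = g − b, ExG = 2g − r − b, NGRDI = (g − r)/(g + r), and NGBDI = (g − b)/(g + b) are the standard forms these names usually denote, assumed here since Table 1's contents are not reproduced.

```python
import numpy as np

def color_indices(img):
    """Compute candidate RGB color indices from an H x W x 3 uint8 image.

    Normalization follows Eq. (1): each channel is divided by the
    per-pixel channel sum, so r + g + b = 1 for every pixel.
    """
    rgb = img.astype(np.float64)
    total = rgb.sum(axis=2)
    total[total == 0] = 1.0          # avoid division by zero on black pixels
    r, g, b = (rgb[..., i] / total for i in range(3))
    return {
        "GBDI":  g - b,                                   # green-blue difference
        "ExG":   2 * g - r - b,                           # excess green
        "NGRDI": (g - r) / np.where(g + r == 0, 1, g + r),
        "NGBDI": (g - b) / np.where(g + b == 0, 1, g + b),
    }

# A green vegetation pixel scores higher than a neutral gray film pixel:
demo = np.array([[[60, 160, 40], [200, 200, 200]]], dtype=np.uint8)
idx = color_indices(demo)
```

On neutral pixels (r = g = b) all four indices collapse to zero, which is exactly why green vegetation stands out against soil and film.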
2.1.2 Otsu threshold segmentation
傳統(tǒng)的圖像分割算法主要包括邊緣檢測、閾值處理、基于區(qū)域分割等方式。最大類間方差法(Otsu)作為閾值分割方法,因其直觀性和實(shí)現(xiàn)的簡單性,在圖像分割方法中一直占有重要地位[29]。Otsu方法由日本學(xué)者大津展之于1979年提出,按照?qǐng)D像的灰度特性將圖像分成背景和目標(biāo)兩部分,并基于統(tǒng)計(jì)學(xué)的方法來選取一個(gè)閾值,使得這個(gè)閾值盡可能的將兩者分開。首先,分別得到目標(biāo)和背景的像元比例(以閾值為界限)0、1,及平均灰度0、1;然后計(jì)算影像總的平均灰度g;最后計(jì)算類間方差,方差越大,說明兩部分差異越明顯,分割圖像越清晰。而最佳閾值就是類間方差最大時(shí)的閾值[30],公式如下:


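A minimal exhaustive-search implementation of Eqs. (2)-(4) for an 8-bit gray image (a sketch for illustration, not the paper's code):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximizing the between-class variance
    sigma^2 = w0 * w1 * (mu0 - mu1)^2 over all 8-bit gray levels."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()                     # gray-level probabilities
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()     # class proportions
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2      # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Two well-separated gray populations: the threshold falls between them.
img = np.array([[30] * 8 + [220] * 8], dtype=np.uint8)
t = otsu_threshold(img)
binary = img > t
```

In the paper's pipeline the input would be the chosen color-index image rescaled to 8 bits, and `binary` the vegetation mask.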
分割完成后會(huì)得到多個(gè)連通區(qū)域。由于3~4葉期棉花苗連片生長現(xiàn)象已比較常見,使得一個(gè)連通區(qū)域內(nèi)往往包含多株棉花,且圖像中粘連棉株很難通過形態(tài)學(xué)操作進(jìn)行二次分割,因此無法通過對(duì)連通區(qū)域簡單操作直接獲得棉株數(shù)量。前期研究表明,連通區(qū)域的形態(tài)特征與該區(qū)域內(nèi)植株數(shù)量間存在很強(qiáng)的相關(guān)性[11,19],因此可以通過提取連通區(qū)形態(tài)特征構(gòu)建模型實(shí)現(xiàn)棉株計(jì)數(shù)。
2.2.1 Weed noise removal
由于棉田中雜草與棉花植株顏色相近,在經(jīng)過顏色指數(shù)及Otsu自適應(yīng)閾值分割后得到的二值圖像會(huì)存在少量的雜草,因此需要進(jìn)一步處理才可進(jìn)行棉花株數(shù)統(tǒng)計(jì)。本文首先嘗試通過腐蝕膨脹及形態(tài)學(xué)操作剔除雜草噪聲。腐蝕是一種消除邊界點(diǎn)使邊界向內(nèi)部收縮的過程,而膨脹是腐蝕的逆向操作,是使邊界向外擴(kuò)張的過程。本文首先選擇合適的結(jié)構(gòu)元素對(duì)圖像進(jìn)行腐蝕操作,以消除面積較小的雜草,然后通過膨脹操作盡量恢復(fù)被腐蝕掉的棉田的形態(tài)特征。
同時(shí),觀察棉田影像發(fā)現(xiàn),由于鋪膜的原因,只有位于未經(jīng)地膜覆蓋的壟間雜草才會(huì)被劃分為有效連通區(qū)域。基于此,本文提出網(wǎng)格法去除雜草。該方法將獲取的棉花影像旋轉(zhuǎn)為田壟豎直的方向,然后對(duì)圖像進(jìn)行網(wǎng)格線劃分,統(tǒng)計(jì)區(qū)域內(nèi)質(zhì)心個(gè)數(shù),當(dāng)個(gè)數(shù)少于3時(shí)則認(rèn)為是雜草需要去除。網(wǎng)格線去雜草示意圖如圖3所示,其中,網(wǎng)格的寬度根據(jù)圖像分辨率及棉苗行距確定,本試驗(yàn)間距劃分為50像素值。

Fig.3 Schematic of grid-based weed removal
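A minimal sketch of the grid idea, assuming ridges already run vertically so centroids can be binned by their x coordinate (the function name and data are illustrative, not from the paper):

```python
import numpy as np

def grid_weed_filter(centroids, width, cell=50, min_count=3):
    """Bin region centroids into vertical strips `cell` pixels wide;
    strips holding fewer than `min_count` centroids are treated as weeds
    and their centroids are dropped."""
    centroids = np.asarray(centroids, dtype=float)
    bins = (centroids[:, 0] // cell).astype(int)          # strip index per point
    counts = np.bincount(bins, minlength=int(np.ceil(width / cell)))
    keep = counts[bins] >= min_count                      # dense strips = crop rows
    return centroids[keep]

# A dense crop row near x = 60-70 px is kept; a lone weed at x = 140 is dropped.
pts = [(60, 10), (62, 80), (65, 150), (68, 240), (140, 90)]
kept = grid_weed_filter(pts, width=200)
```

Unlike morphological opening, this filter never touches pixel shapes, so the morphological features fed to the counting model are preserved.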
2.2.2 Feature selection
本文共對(duì)比研究了10種圖像形態(tài)學(xué)參數(shù),如表2所示。通過計(jì)算Person相關(guān)系數(shù)獲得每個(gè)參數(shù)與植株數(shù)量間的相關(guān)性,作為建模特征選取的依據(jù)。

Table 2 List of image morphological parameters
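The screening step is ordinary Pearson correlation with a |r| > 0.7 cutoff; a self-contained sketch with hypothetical feature values (not the paper's data):

```python
import numpy as np

def pearson_r(x, y):
    """Plain Pearson correlation coefficient between two 1-D arrays."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum())

# Hypothetical region features vs. plant counts: area tracks the count
# almost linearly, while a noisy feature does not.
counts = np.array([1, 2, 3, 4, 5, 6, 7])
area = np.array([110, 205, 330, 410, 540, 620, 760])   # strongly correlated
noise = np.array([5, 3, 9, 2, 8, 1, 7])                # weakly correlated
selected = [name for name, f in [("area", area), ("noise", noise)]
            if abs(pearson_r(f, counts)) > 0.7]
```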
2.2.3 SVM-based plant counting model
SVM作為監(jiān)督分類方法,以VC維(Vapnik-Chervo-nenkis dimension)理論為依據(jù)、以結(jié)構(gòu)風(fēng)險(xiǎn)最小化為原則,追求在有限樣本信息下實(shí)現(xiàn)模型復(fù)雜度與模型學(xué)習(xí)能力之間的平衡,以獲得最好的推廣能力。
因此,本文利用形態(tài)特征結(jié)合支持向量機(jī)分類器來估算二值化圖像中每個(gè)區(qū)域內(nèi)棉花植株數(shù)量。基于SVM的棉花株數(shù)估算模型主要包括圖像特征提取、劃分?jǐn)?shù)據(jù)集、訓(xùn)練及測試3部分。分別為:1)選取30幅圖像進(jìn)行試驗(yàn),分割后提取二值圖像中連通區(qū)域的形態(tài)特征作為模型的分類變量,并手動(dòng)標(biāo)記每個(gè)目標(biāo)區(qū)域內(nèi)植株數(shù)量作為模型識(shí)別結(jié)果,共計(jì)獲取3 710條樣本數(shù)據(jù)。由于樣本中最大粘連為7株,因此,標(biāo)簽范圍為1~7;2)數(shù)據(jù)集劃分。將樣本中80%用于模型訓(xùn)練,20%用于模型測試,訓(xùn)練過程中采用五折交叉驗(yàn)證;3)模型訓(xùn)練。本文試驗(yàn)核函數(shù)選用徑向基函數(shù)(radial basis function,RBF),懲罰系數(shù)為1,為0.1。其中懲罰系數(shù)代表對(duì)誤差的容忍程度,越大對(duì)誤差的容忍度越小,模型容易過擬合;越小對(duì)誤差的容忍度越大,模型容易欠擬合。是核函數(shù)RBF自帶的一個(gè)參數(shù),隱含地決定了數(shù)據(jù)映射到新特征空間后的分布,越大,支持向量越少;越小,支持向量越多。
2.2.4 Model accuracy verification
綜合采用準(zhǔn)確率和混淆矩陣對(duì)模型精度進(jìn)行驗(yàn)證。其中準(zhǔn)確率用于驗(yàn)證模型的有效性。混淆矩陣用于驗(yàn)證模型的可靠性。
The leaf area index is a common indicator of crop growth. Because leaf overlap is small at the 3-4 leaf stage, vertical canopy coverage is used instead of the leaf area index for cotton growth monitoring. Canopy coverage is computed per binarized sub-image (800 × 800 pixels) to characterize the growth of that region:

C = Nveg / Ntotal × 100%    (5)
where Nveg is the number of vegetation pixels and Ntotal the total number of pixels in the sub-image.
基于冠層覆蓋度的平均值計(jì)算變異系數(shù)(coefficient of variation,CV)表征棉花長勢(shì)的均勻性。變異系數(shù)公式如下所示:

在本研究中隨機(jī)選取30幅棉田圖像,以50×50的像素截取圖像中棉花植株、土壤、地膜樣本點(diǎn)(每幅圖像分別提取10個(gè)棉花植株、土壤、地膜測試點(diǎn)),進(jìn)行顏色特征分析。圖4為棉花植株、土壤、地膜在8個(gè)顏色指數(shù)上的統(tǒng)計(jì)直方圖。各分量特征值均為0~255,為凸顯不同地物的差異性,將部分圖像波段特征值進(jìn)行了截取。

Fig.4 Color histograms of cotton, soil and plastic film
Figure 4 shows the color histograms of the three surface types (cotton, soil, and film) under the different color indices, and Fig. 5 the corresponding segmentation results. On the green-blue difference there are clear gaps between soil/film and cotton, and segmentation is good. On the ExG component, cotton shares a small region with soil and its pixels are spread rather evenly across feature values, so segmentation is poor. NGRDI and Cg share the drawback that cotton and film overlap considerably, with appreciable pixel counts in the valley where they meet, so the film cannot be removed cleanly and striped film edges appear in the segmentation. For the NGBDI component, soil and film lie at the front of the range and the valley between cotton and soil/film is distinct, so the Otsu threshold algorithm segments very well. Under the a* component of the L*a*b* model, cotton and soil have some pixels at the valley boundary, so impurities may appear; under the b* component, cotton separates clearly from soil and film and the pixel count at the valley is almost zero, giving very good segmentation. On the H component of the HSV model there is an obvious valley between cotton and soil/film; although the pixel count at the valley is not zero, almost no soil or film falls within the cotton range of 30-150 (gray value), so segmentation is also good. Comprehensive analysis of the eight color features shows that the GBDI, NGBDI, a*, b*, and H components all segment well; comparing the segmentation results, GBDI yields the fewest impurities and the most complete segmentation, so GBDI was chosen as the color index for cotton segmentation.
The effect of removing weed noise by erosion-dilation morphological operations is shown in Fig. 6. Comparing before and after, the method removes larger weeds poorly and sometimes mistakenly deletes cotton plants, which affects the emergence-rate calculation; the morphological operations also alter the morphological features of the cotton, introducing artificial error into the model classification. The effect of the grid method is shown in Fig. 7; it removes weeds without affecting the morphological features of the image.

Fig.5 Otsu segmentation results under different color features

Fig.6 Weed noise removal by erosion and dilation

Fig.7 Weed removal by the grid method
3.3.1 Feature variable selection
Ten morphological parameters of the image connected regions were extracted as candidate features (Table 2). To select the best variables, the Pearson correlation between each feature and the plant count was computed, as shown in Table 3.

Table 3 Correlation between feature variables and plant count
觀察表3可以發(fā)現(xiàn),除形態(tài)特征7、9與植株數(shù)量間相關(guān)性較差外,其余形態(tài)特征參數(shù)與植株數(shù)量的相關(guān)性絕對(duì)值均大于0.6,其中2(周長)與植株數(shù)量相關(guān)性最大,為0.771。本試驗(yàn)中選擇相關(guān)系數(shù)大于0.7的形態(tài)參數(shù)作為最終的特征變量進(jìn)行建模,即1(面積)、2(周長)、3(主軸)、5(外接矩形周長)、6(外接矩形長寬比)、8(外接矩形面積與周長比)。
3.3.2 Model evaluation
構(gòu)建的SVM模型進(jìn)行測試時(shí),平均準(zhǔn)確率為97.17%,其中測試準(zhǔn)確率最低為94.12%,最高為99.00%。表4為混淆矩陣,比較了連通區(qū)域內(nèi)植株的實(shí)際數(shù)量和預(yù)測數(shù)量。從表中可以發(fā)現(xiàn),植株數(shù)量為1、2、3、6、7時(shí),準(zhǔn)確率較高;而植株數(shù)量為4、5時(shí),準(zhǔn)確率下降幅度較大,相互之間誤分類比較嚴(yán)重;另一方面,當(dāng)實(shí)際植株數(shù)量為1~4時(shí),更容易發(fā)生過高估計(jì)。通過分析發(fā)現(xiàn),誤差可能是由棉花植株的不均勻性及植株冠層重疊引起的,當(dāng)植株間的陰影、間隙等混雜時(shí)會(huì)引起形態(tài)特征誤差,使其誤分到大植株數(shù)量一類;而當(dāng)植株冠層重疊度較大時(shí),才會(huì)誤分到小植株數(shù)量一類。基于混淆矩陣求得的Kappa系數(shù)為0.899 6,表明模型一致性較好。

Table 4 Confusion matrix
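The Kappa value quoted above follows mechanically from any confusion matrix; a small self-contained sketch (toy 2-class matrix, not the paper's Table 4):

```python
import numpy as np

def cohen_kappa(cm):
    """Cohen's kappa from a confusion matrix: (po - pe) / (1 - pe),
    where po is observed agreement and pe chance agreement."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                          # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
    return (po - pe) / (1 - pe)

# A toy matrix: 90% observed agreement over balanced classes gives kappa = 0.8.
kappa = cohen_kappa([[45, 5], [5, 45]])
```

Values above roughly 0.8, like the paper's 0.8996, are conventionally read as "almost perfect" agreement.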
進(jìn)一步對(duì)比圖像識(shí)別和人工計(jì)數(shù)的出苗株數(shù),以確定所提出方法的可行性。其中,圖像識(shí)別統(tǒng)計(jì)誤差在0.8%~4.7%之間,平均誤差2.52%。圖8為模型預(yù)測株數(shù)與真實(shí)株數(shù)間的比較分析。可以看出,模型預(yù)測株數(shù)比真實(shí)株數(shù)偏高,但總體上兩者具有一致性。以上分析表明,利用SVM模型可以有效預(yù)測棉花植株數(shù)量,且精度高、誤差小,統(tǒng)計(jì)結(jié)果可靠性較強(qiáng)。

Fig.8 Comparison of true and predicted plant numbers
3.4.1 Emergence rate calculation
To test the model's recognition accuracy at different plot scales, the mosaicked image was cropped to different areas; the results are given in Table 5. The ground-survey emergence rate is the mean of the five-point sampling survey in the cotton field. Table 5 shows that emergence rates recognized from the seedling-stage imagery agree closely with the manual survey, and the error decreases as the monitored area increases.

Table 5 Cotton emergence rate at different plot scales
3.4.2 Overall seedling analysis
小范圍內(nèi)棉花冠層覆蓋度可以反映棉花長勢(shì)情況,因此,基于模型識(shí)別結(jié)果計(jì)算圖像子單元的(像素值為800×800)的棉花冠層平均覆蓋度,并繪制棉苗冠層覆蓋度分布圖直觀顯示苗情信息,如圖9a所示。同時(shí),繪制棉花出苗的熱力圖直觀顯示棉花出苗狀況,如圖 9b所示。從棉花整體出苗率(64.89%)、棉花冠層平均覆蓋度(7.17%)、棉花冠層平均覆蓋度圖像及變異系數(shù)(10.98%)來看,該地塊棉花出苗少、長勢(shì)不均勻,整齊度較差。

Fig.9 Spatial distribution of cotton canopy coverage and emergence rate
Taking visible-light images of cotton at the 3-4 leaf stage as the study object, this paper obtains binary cotton images through color feature analysis and Otsu automatic threshold segmentation, removes weeds with the grid-line method, and builds a plant-count recognition model from plant morphological features, thereby deriving emergence rate, canopy coverage, and other seedling information. The approach acquires cotton seedling information quickly and accurately in a short time, providing technical support for fine cotton-field management. The main conclusions are as follows:
1)基于顏色指數(shù)(GBDI)的圖像分割能夠解決棉田影像中地膜反光性強(qiáng)、影像明暗變化明顯等問題,提高圖像識(shí)別的適應(yīng)性和魯棒性。同時(shí),針對(duì)棉田雜草分布特征,本文提出的網(wǎng)格線去雜草方法,能夠避免形態(tài)學(xué)去噪方法造成形態(tài)特征改變的問題,去噪更加精準(zhǔn)。
2)利用形態(tài)學(xué)特征構(gòu)建的SVM分類模型能夠有效解決葉片粘連時(shí)棉株的計(jì)數(shù)問題。模型分類精度達(dá)到97.17%,統(tǒng)計(jì)誤差在0.8%~4.7%之間,平均誤差2.52%。將模型應(yīng)用于3種不同面積尺度地塊上,預(yù)測的出苗率誤差分別為5.33%、3.03%、0.89%,且隨著監(jiān)測面積的增加,誤差呈下降趨勢(shì),說明模型在更大面積棉田上具有更好的適用性。同時(shí),基于圖像識(shí)別結(jié)果獲得的棉花冠層覆蓋度及冠層間變異系數(shù),可有效獲取棉花整體長勢(shì)及均勻性信息。
本文研究發(fā)現(xiàn)圖像分辨率對(duì)監(jiān)測模型有較大影響,在3~4葉苗期,0.29 cm的分辨率效果最好,未來還可針對(duì)苗期不同階段最適合的分辨率開展進(jìn)一步研究。此外,隨著深度學(xué)習(xí)技術(shù)的快速發(fā)展,深度卷積神經(jīng)網(wǎng)絡(luò)對(duì)圖像中的抽象特征具有更好的提取和表達(dá)能力,對(duì)于真實(shí)農(nóng)田復(fù)雜背景、不同圖像分辨率、不同獲取設(shè)備、不同亮度的圖像都具備很好的魯棒性,實(shí)用性良好,這也將成為下一步重點(diǎn)研究方向。
[1]賈彪. 基于計(jì)算機(jī)視覺技術(shù)的棉花長勢(shì)監(jiān)測系統(tǒng)構(gòu)建[D]. 石河子:石河子大學(xué),2014. Jia Biao. Establishment of System for Monitoring Cotton Growth Based on Computer Vision Technology[D]. Shihezi: Shihezi University, 2014. (in Chinese with English abstract)
[2]孫素云. 基于圖像處理和支持向量機(jī)的蘋果樹葉部病害的分類研究[D]. 西安:西安科技大學(xué),2017. Sun Suyun. Classification of Apple Leaf Disease Based on Image Processing and Support Vector Machine[D]. Xi’an: Xi’an University of Science and Technology, 2017. (in Chinese with English abstract)
[3]魏全全. 應(yīng)用數(shù)字圖像技術(shù)進(jìn)行冬油菜氮素營養(yǎng)診斷的初步研究[D]. 武漢:華中農(nóng)業(yè)大學(xué),2016. Wei Quanquan. Preliminary Study on Diagnosing Nitrogen Status of Winter Rapeseed Based on Digital Image Processing Technique[D]. Wuhan: HuaZhong Agricultural University, 2016. (in Chinese with English abstract)
[4]李嵐?jié)瑥埫龋螡? 應(yīng)用數(shù)字圖像技術(shù)進(jìn)行水稻氮素營養(yǎng)診斷[J]. 植物營養(yǎng)與肥料學(xué)報(bào),2015,21(1):259-268. Li Lantao, Zhang Meng, Ren Tao, et al. Diagnosis of N nutrition of rice using digital image processing technique[J]. Journal of Plant Nutrition and Fertilizers, 2015, 21(1): 259-268. (in Chinese with English abstract)
[5]Burgos-Artizzu X, Angela R, Guijarro M, et al. Real-time image processing for crop/weed discrimination in maize fields[J]. Computers and Electronics in Agriculture, 2011, 75(2): 337-346.
[6]任世龍,宜樹華,陳建軍,等. 基于不同數(shù)碼相機(jī)和圖像處理方法的高寒草地植被蓋度估算的比較[J]. 草業(yè)科學(xué),2014,31(6):1007-1013. Ren Shilong, Yi Shuhua, Chen Jianjun, et al. Comparisons of alpine grassland fractional vegetation cover estimation using different digital cameras and different image analysis methods[J]. Pratacultural Science, 2014, 31(6): 1007-1013. (in Chinese with English abstract)
[7]賈洪雷,王剛,郭明卓,等. 基于機(jī)器視覺的玉米植株數(shù)量獲取方法與試驗(yàn)[J]. 農(nóng)業(yè)工程學(xué)報(bào),2015,31(3):215-220. Jia Honglei, Wang Gang, Guo Mingzhuo, et al. Methods and experiments of obtaining corn population based on machine vision[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2015, 31(3): 215-220. (in Chinese with English abstract)
[8]劉濤,孫成明,王力堅(jiān),等. 基于圖像處理技術(shù)的大田麥穗計(jì)數(shù)[J]. 農(nóng)業(yè)機(jī)械學(xué)報(bào),2014,45(2):282-290. Liu Tao, Sun Chengming, Wang Lijian, et al. In-field wheatear counting based on Image processing technology[J]. Transactions of the Chinese Society for Agricultural Machinery, 2014, 45(2): 282-290. (in Chinese with English abstract)
[9]高林,楊貴軍,于海洋,等. 基于無人機(jī)高光譜遙感的冬小麥葉面積指數(shù)反演[J]. 農(nóng)業(yè)工程學(xué)報(bào),2016,32(22):113-120. Gao Lin, Yang Guijun, Yu Haiyang, et al. Retrieving winter wheat leaf area index based on unmanned aerial vehicle hyperspectral remote sensing[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2016, 32(22): 113-120. (in Chinese with English abstract)
[10]戴建國,張國順,郭鵬,等. 基于無人機(jī)遙感可見光影像的北疆主要農(nóng)作物分類方法[J]. 農(nóng)業(yè)工程學(xué)報(bào),2018,34(18):122-129. Dai Jianguo, Zhang Guoshun, Guo Peng, et al. Classification method of main crops in northern Xinjiang based on UAV visible waveband images[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2018, 34(18): 122-129. (in Chinese with English abstract)
[11]趙必權(quán),丁幼春,蔡曉斌,等. 基于低空無人機(jī)遙感技術(shù)的油菜機(jī)械直播苗期株數(shù)識(shí)別[J]. 農(nóng)業(yè)工程學(xué)報(bào),2017,33(19):115-123. Zhao Biquan, Ding Youchun, Cai Xiaobin, et al. Seedlings number identification of rape planter based on low altitude unmanned aerial vehicles remote sensing technology[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2017, 33(19): 115-123. (in Chinese with English abstract)
[12]Gnädinger F, Schmidhalter U. Digital counts of maize plants by unmanned aerial vehicles (UAVs)[J/OL]. Remote Sensing, 2017, 9(6): 544.
[13]劉帥兵,楊貴軍,周成全,等. 基于無人機(jī)遙感影像的玉米苗期株數(shù)信息提取[J]. 農(nóng)業(yè)工程學(xué)報(bào),2018,34(22):69-77. Liu Shuaibing, Yang Guijun, Zhou Chengquan, et al. Extraction of maize seedling number information based on UAV imagery[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2018, 34(22): 69-77. (in Chinese with English abstract)
[14]陳道穎,張娟,黃國強(qiáng),等. 一種基于航空可見光圖像的煙草數(shù)量統(tǒng)計(jì)方法[J]. 湖北農(nóng)業(yè)科學(xué),2017,56(7):1348-1350. Chen Daoying, Zhang Juan, Huang Guoqiang, et al. A statistic method for tobacco based on airborne images[J]. Hubei Agricultural Sciences, 2017, 56(7): 1348-1350. (in Chinese with English abstract)
[15]牛亞曉,張立元,韓文霆,等. 基于無人機(jī)遙感與植被指數(shù)的冬小麥覆蓋度提取方法[J]. 農(nóng)業(yè)機(jī)械學(xué)報(bào),2018,49(4):212-221. Niu Yaxiao, Zhang Liyuan, Han Wenting, et al. Fractional vegetation cover extraction method of winter wheat based on UAV remote sensing and vegetation index[J]. Transactions of the Chinese Society for Agricultural Machinery, 2018, 49(4): 212-221. (in Chinese with English abstract)
[16]Jin Xiuliang, Liu Shouyang, Baret Frédéric, et al. Estimates of plant density of wheat crops at emergence from very low altitude UAV imagery[J]. Remote Sensing of Environment, 2017, 198: 105-114.
[17]郭鵬,武法東,戴建國,等. 基于無人機(jī)可見光影像的農(nóng)田作物分類方法比較[J]. 農(nóng)業(yè)工程學(xué)報(bào),2017,33(13):112-119. Guo Peng, Wu Fadong, Dai Jianguo, et al. Comparison of farmland crop classification methods based on visible light images of unmanned aerial vehicles[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2017, 33(13): 112-119. (in Chinese with English abstract)
[18]Zhao Biquan, Zhang Jian, Yang Chenghai, et al. Rapeseed seedling stand counting and seeding performance evaluation at two early growth stages based on unmanned aerial vehicle imagery[J]. Frontiers in Plant Science, 2018, 9: 1362.
[19]Li Bo, Xu Xiangming, Han Jiwan, et al. The estimation of crop emergence in potatoes by UAV RGB imagery[J]. Plant Methods, 2019, 15(1): 15.
[20]Han Liang, Yang Guijun, Dai Huayang, et al. Modeling maize above-ground biomass based on machine learning approaches using UAV remote-sensing data[J]. Plant Methods, 2019, 15(1): 10.
[21]Bakhshipour A, Jafari A. Evaluation of support vector machine and artificial neural networks in weed detection using shape features[J]. Computers and Electronics in Agriculture, 2018, 145: 153-160.
[22]蘇博妮,化希耀,范振岐. 基于顏色特征的水稻病害圖像分割方法研究[J]. 計(jì)算機(jī)與數(shù)字工程,2018,46(8):1638-1642. Su Boni, Hua Xiyao, Fan Zhenqi. Image segmentation of rice disease based on color features[J]. Computer & Digital Engineering, 2018, 46(8): 1638-1642. (in Chinese with English abstract)
[23]Zheng Yang, Zhu Qibing, Huang Min, et al. Maize and weed classification using color indices with support vector data description in outdoor fields[J]. Computers and Electronics in Agriculture, 2017, 141: 215-222.
[24]Gitelson A, Kaufman Y, Stark R, et al. Novel algorithms for remote estimation of vegetation fraction[J]. Remote Sensing of Environment, 2002, 80(1): 76-87.
[25]汪小欽,王苗苗,王紹強(qiáng),等. 基于可見光波段無人機(jī)遙感的植被信息提取[J]. 農(nóng)業(yè)工程學(xué)報(bào),2015,31(5):152-157. Wang Xiaoqin, Wang Miaomiao, Wang Shaoqiang, et al. Extraction of vegetation information from visible unmanned aerial vehicle images[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2015, 31(5): 152-157. (in Chinese with English abstract)
[26]Prasetyo E, Adityo R, Suciati N, et al. Mango leaf classification with boundary moments of centroid contour distances as shape features[C]//2018 International Seminar on Intelligent Technology and Its Applications (ISITIA), 2018: 317-320.
[27]Zhao YS, Gong L, Huang YX, et al. Robust tomato recognition for robotic harvesting using feature images fusion[J/OL]. Sensors, 2016, 16(2): 173.
[28]王禮,方陸明,陳珣,等. 基于Lab顏色空間的花朵圖像分割算法[J]. 浙江萬里學(xué)院學(xué)報(bào),2018,142(3):67-73. Wang Li, Fang Luming, Chen Xun, et al. Flower image segmentation algorithm based on Lab color space[J]. Journal of Zhejiang Wanli University, 2018, 142(3): 67-73. (in Chinese with English abstract)
[29]Wang Fubin, Pan Xingchen. Image segmentation for somatic cell of milk based on niching particle swarm optimization Otsu[J]. Engineering in Agriculture, Environment and Food, 2019, 12(2): 141-149.
[30]Xiao L Y, Ouyang H L, Fan C D. An improved Otsu method for threshold segmentation based on set mapping and trapezoid region intercept histogram[J]. Optik, 2019, 196: 106-163.
Extraction of cotton seedling growth information using UAV visible light remote sensing images
Dai Jianguo1,2, Xue Jinli1,2, Zhao Qingzhan1,2, Wang Qiong3, Chen Bing3, Zhang Guoshun1,2, Jiang Nan1,2
(1. College of Information Science and Technology, Shihezi University, Shihezi 832003, China; 2. Geospatial Information Engineering Technology Research Center of XPCC, Shihezi 832003, China; 3. Xinjiang Academy of Agricultural and Reclamation Science, Shihezi 832003, China)
Rapid and accurate acquisition of seedling information is an important prerequisite for fine farmland management and a basis for advancing precision agriculture. UAV remote sensing images combined with machine vision technology have shown clear advantages for crop detection in the field. However, existing research has concentrated mainly on crops such as maize, wheat, and rape and has typically extracted only the emergence rate or coverage; reports on acquiring the overall seedling situation of cotton remain scarce. To address the time-consuming and inefficient manual collection of cotton seedling information, this article explores a new method of extracting seedling information from unmanned aerial vehicle (UAV) visible-light remote sensing images. First, images of cotton at the 3-4 leaf stage were captured by a UAV equipped with a high-resolution visible-light sensor, and typical images were selected for the experiments after a series of preprocessing operations such as correction, stitching, and cropping. Separating cotton from the background (soil, mulch) is a primary prerequisite for obtaining cotton seedling information. The segmentation performance of eight color indices on the cotton images was compared and analyzed, and the green-blue difference index (GBDI) was selected and combined with Otsu threshold segmentation to separate cotton from the background, because the GBDI component was found to produce fewer impurities and more complete segmentation. To prevent weed noise from affecting the subsequent experiments, morphological and grid-based weed removal were both applied, and the results showed that the grid method removed weeds more effectively than the morphological method.
A mapping relationship between morphological characteristics and the number of cotton plants was then established to estimate plant numbers. Because conglutinated cotton plants are difficult to separate by morphological operations, 10 morphological features were extracted as candidate variables for an SVM plant-number estimation model. A total of 3710 samples were obtained in the experiment, 80% of which were randomly selected for classification modeling, while the remaining 20% were used for testing. Based on Pearson correlation analysis, 6 features with correlation coefficients above 0.7 were selected. The model was applied to the whole image to obtain the number of emerged cotton plants in the study area, from which the seedling emergence rate, canopy coverage, and growth uniformity were calculated in turn. The results showed that the classification accuracy of the SVM plant-number estimation model reached 97.17%, with statistical errors ranging from 0.8% to 4.7% and an average error of 2.52%. The error decreased as the monitored area increased, indicating that the model is even better suited to larger cotton fields. The cotton emergence rate, canopy coverage, and growth-uniformity coefficient of variation were 64.89%, 7.17%, and 10.98%, respectively. The method based on UAV visible-light imagery effectively improves the efficiency of seedling-information acquisition in cotton fields, and the results can provide technical support for subsequent field management and fine plant protection.
unmanned aerial vehicle; remote sensing; cotton field; emergence rate; coverage; visible light
Dai Jianguo, Xue Jinli, Zhao Qingzhan, Wang Qiong, Chen Bing, Zhang Guoshun, Jiang Nan.Extraction of cotton seedling growth information using UAV visible light remote sensing images[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2020, 36(4): 63-71. (in Chinese with English abstract) doi:10.11975/j.issn.1002-6819.2020.04.008 http://www.tcsae.org
Received: 2019-09-09
Revised: 2020-01-14
Supported by the National Natural Science Foundation of China (31460317, 41661089) and the National Key Research and Development Program of China (2017YFB0504203)
Dai Jianguo, professor, research interests: agricultural informatization and remote sensing technology. Email: daijianguo2002@sina.com
10.11975/j.issn.1002-6819.2020.04.008
TP751;S562
A
1002-6819(2020)-04-0063-09