Soybean yield estimation and lodging discrimination based on lightweight UAV and point cloud deep learning
Abstract: The unmanned aerial vehicle (UAV) platform has emerged as a powerful tool in soybean (Glycine max (L.) Merr.) breeding phenotype research due to its high throughput and adaptability. However, previous studies have predominantly relied on statistical features such as vegetation indices and textures, overlooking the crucial structural information embedded in the data. Feature fusion has often been confined to a one-dimensional exponential form, which can decouple spatial and spectral information and neglect their interactions at the data level. In this study, we leverage our team's cross-circling oblique (CCO) route photography and Structure-from-Motion with Multi-View Stereo (SfM-MVS) techniques to reconstruct the three-dimensional (3D) structure of soybean canopies. New point cloud deep learning models, SoyNet and SoyNet-Res, were further created with two novel data-level fusion strategies that integrate spatial structure and color information. Our results reveal that incorporating RGB color and vegetation index (VI) spectral information with spatial structure information leads to a significant reduction in root mean square error (RMSE) for yield estimation (22.55 kg ha-1) and an improvement in F1-score for five-class lodging discrimination (0.06) at the S7 growth stage. The SoyNet-Res model employing multi-task learning exhibits better accuracy in yield estimation (RMSE: 349.45 kg ha-1) when compared with H2O-AutoML. Furthermore, our findings indicate that multi-task deep learning outperforms single-task learning in lodging discrimination, achieving a top-2 accuracy of 0.87 and a top-3 accuracy of 0.97 for the five-class task. In conclusion, the point cloud deep learning method exhibits tremendous potential for learning multi-phenotype tasks, laying the foundation for optimizing soybean breeding programs.
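The abstract describes data-level fusion that combines spatial structure (XYZ), RGB color, and vegetation-index channels into a single per-point input. The paper's exact fusion design is not given here, so the following is a minimal illustrative sketch under assumed conventions: the Excess Green index (ExG) computed on chromatic (brightness-normalized) coordinates stands in for the abstract's unspecified VI, and `fuse_point_features` is a hypothetical helper name.

```python
import numpy as np

def fuse_point_features(xyz: np.ndarray, rgb: np.ndarray) -> np.ndarray:
    """Concatenate XYZ, normalized RGB, and an ExG channel per point.

    xyz: (N, 3) float array of point coordinates.
    rgb: (N, 3) uint8 array of point colors in [0, 255].
    Returns an (N, 7) float array: [x, y, z, r, g, b, ExG].
    """
    rgb_n = rgb.astype(np.float64) / 255.0
    # Chromatic coordinates reduce sensitivity to brightness differences.
    s = rgb_n.sum(axis=1, keepdims=True)
    s[s == 0] = 1.0  # avoid division by zero for pure-black points
    chrom = rgb_n / s
    # Excess Green index: ExG = 2g - r - b on normalized channels.
    exg = 2.0 * chrom[:, 1] - chrom[:, 0] - chrom[:, 2]
    return np.hstack([xyz, rgb_n, exg[:, None]])

# Example: one green (vegetation-like) point and one gray (soil-like) point.
pts = np.array([[0.0, 0.0, 0.5], [1.0, 1.0, 0.1]])
cols = np.array([[40, 180, 40], [120, 120, 120]], dtype=np.uint8)
feats = fuse_point_features(pts, cols)
print(feats.shape)  # (2, 7)
```

A network consuming such fused inputs sees spatial and spectral channels jointly at the data level, rather than combining them after separate feature extraction, which is the distinction the abstract draws against one-dimensional exponential fusion.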