A Hybrid Vegetation Detection Framework: Integrating Vegetation Indices and Convolutional Neural Network
Saved in:
Main Authors:
Format: article
Language: EN
Published: MDPI AG, 2021
Subjects:
Online Access: https://doaj.org/article/4590f249746c4de18e2d8130955ae910
Summary: Vegetation inspection and monitoring is a time-consuming task. In the era of Industrial Revolution 4.0 (IR 4.0), unmanned aerial vehicles (UAVs), commercially known as drones, are increasingly being adopted for vegetation inspection and monitoring activities. However, most off-the-shelf drones are less favoured by vegetation maintenance departments for on-site inspection because their cameras offer limited spectral bands, restricting advanced vegetation analysis. Most of these drones are equipped only with a standard red, green, and blue (RGB) camera. Additional spectral bands produce more accurate vegetation analysis, but only at the cost of more advanced camera hardware, such as a multispectral camera. Vegetation indices (VIs) are a technique to maximise detection sensitivity to vegetation characteristics while minimising other, non-vegetation factors. The emergence of machine learning has gradually influenced existing vegetation analysis techniques to improve detection accuracy. This study explores VI techniques for identifying vegetation objects. The selected VIs are the Visible Atmospherically Resistant Index (VARI), the Green Leaf Index (GLI), and the Vegetation Index green (VIgreen). The chosen machine learning technique is You Only Look Once (YOLO), a convolutional neural network (CNN) that performs object detection in real time; the CNN model has a symmetrical structure along the direction of the tensor flow. Several series of data collection were conducted at identified locations to obtain aerial images, and the proposed hybrid methods were tested on these captured images to observe vegetation detection performance. Segmentation in image analysis divides the image into target pixels for further detection testing. Based on our findings, more than 70% of the vegetation objects in the images were accurately detected, which reduces the misdetection issue faced by previous VI techniques. The hybrid segmentation methods perform best with the combination of VARI and YOLO, at 84% detection accuracy.
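Since the summary only names the three vegetation indices, the sketch below shows how they could be computed from a plain RGB drone image using their standard definitions (VARI = (G − R)/(G + R − B), GLI = (2G − R − B)/(2G + R + B), VIgreen = (G − R)/(G + R)). This is a minimal illustration, not the paper's implementation: the NumPy/Pillow usage, the file name, and the 0.1 VARI threshold for the segmentation mask are assumptions made for the example.

```python
# Sketch: computing VARI, GLI, and VIgreen from a standard RGB aerial image.
# The formulas follow the common definitions of these indices; file name and
# threshold below are illustrative assumptions, not values from the paper.
import numpy as np
from PIL import Image


def vegetation_indices(rgb: np.ndarray) -> dict:
    """Return VARI, GLI, and VIgreen maps for an HxWx3 RGB array."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-9  # rough guard against division by zero on flat/dark pixels

    vari = (g - r) / (g + r - b + eps)             # Visible Atmospherically Resistant Index
    gli = (2 * g - r - b) / (2 * g + r + b + eps)  # Green Leaf Index
    vigreen = (g - r) / (g + r + eps)              # Vegetation Index green

    return {"VARI": vari, "GLI": gli, "VIgreen": vigreen}


if __name__ == "__main__":
    # Hypothetical aerial image; any RGB drone photo would do.
    image = np.asarray(Image.open("aerial_sample.jpg").convert("RGB"))
    indices = vegetation_indices(image)

    # Simple segmentation step: pixels whose VARI exceeds a chosen threshold
    # are treated as candidate vegetation. In the hybrid method described in
    # the abstract, such a mask would be paired with YOLO object detections.
    vegetation_mask = indices["VARI"] > 0.1
    print(f"Candidate vegetation pixels: {vegetation_mask.mean():.1%}")
```

In this kind of hybrid pipeline, the index map acts as a cheap per-pixel vegetation cue computed from an ordinary RGB camera, while the CNN detector supplies object-level bounding boxes; combining the two is what the abstract reports as reaching 84% detection accuracy with VARI and YOLO.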