Abstract
Real-time object detection and segmentation are fundamental but challenging problems in remote sensing and surveillance applications, including satellite and aerial imaging. They play a crucial role in various management and monitoring tasks and have received notable attention in recent years. This paper presents a real-time, efficient system in which the deep learning-based U-Net model is explored for multi-object segmentation in aerial drone images. We perform data augmentation and apply transfer learning to improve model efficiency. We train the U-Net segmentation model with different base architectures, namely VGG-16, ResNet-50, and MobileNet, and compare their performance, concluding that U-Net with a MobileNet backbone achieves the best results. The experimental results demonstrate that data augmentation improves the model's performance, yielding segmentation accuracies of 92%, 93%, and 95% with the VGG-16, ResNet-50, and MobileNet base architectures, respectively.
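The sketch below illustrates the general approach the abstract describes: a U-Net decoder on top of an ImageNet-pretrained backbone (transfer learning), with simple flip/rotation data augmentation. It is a minimal illustration, not the authors' code; the use of the `segmentation_models` library, the input size, the class count, and the loss/metric choices are all assumptions, and swapping the backbone name reproduces the VGG-16 / ResNet-50 / MobileNet comparison only in spirit.

```python
# Minimal sketch (not the paper's implementation): U-Net with a MobileNet encoder
# pre-trained on ImageNet, plus basic augmentation. Hyperparameters are illustrative.
import tensorflow as tf
import segmentation_models as sm

sm.set_framework("tf.keras")

NUM_CLASSES = 23                # illustrative; the abstract does not state the class count
INPUT_SHAPE = (256, 256, 3)     # illustrative input resolution

# U-Net decoder on an ImageNet-pretrained MobileNet encoder (transfer learning).
model = sm.Unet(
    backbone_name="mobilenet",   # try "vgg16" or "resnet50" for the backbone comparison
    input_shape=INPUT_SHAPE,
    classes=NUM_CLASSES,
    activation="softmax",
    encoder_weights="imagenet",  # reuse ImageNet features
    encoder_freeze=True,         # optionally freeze the encoder at the start of training
)
model.compile(
    optimizer="adam",
    loss="categorical_crossentropy",
    metrics=[sm.metrics.iou_score, "accuracy"],
)

# Data augmentation: the same random transform must be applied to image and mask,
# which is done here by sharing the random seed between the two generators.
aug = dict(horizontal_flip=True, vertical_flip=True, rotation_range=15)
img_gen = tf.keras.preprocessing.image.ImageDataGenerator(**aug)
msk_gen = tf.keras.preprocessing.image.ImageDataGenerator(**aug)

def paired_generator(images, masks, batch_size=8, seed=42):
    """Yield augmented (image, one-hot mask) batches with synchronized transforms."""
    gi = img_gen.flow(images, batch_size=batch_size, seed=seed)
    gm = msk_gen.flow(masks, batch_size=batch_size, seed=seed)
    for x, y in zip(gi, gm):
        yield x, y

# Usage (hypothetical arrays):
# model.fit(paired_generator(train_images, train_masks), steps_per_epoch=100, epochs=30)
```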
| Original language | English (US) |
|---|---|
| Pages (from-to) | 1745-1758 |
| Number of pages | 14 |
| Journal | Journal of Real-Time Image Processing |
| Volume | 18 |
| Issue number | 5 |
| DOIs | |
| Publication status | Published - Oct 2021 |
| Externally published | Yes |
Keywords
- Deep learning
- Real-time
- Remote sensing
- Satellite images
- U-Net