NAS for underwater sceneries (MSc AI)
During my MSc, I explored meta-learning for underwater object detection. While many works tackle object detection in terrestrial scenes, few methods address the challenges specific to underwater sceneries. These challenges include light absorption and scattering by suspended particles, and the small average size of the objects detected underwater, among other factors.
After exploring several types of meta-learning approaches, as surveyed by Hospedales et al. [1], I settled on exploring Neural Architecture Search more thoroughly. Following the established trend of object detection with Convolutional Neural Networks, I focused mainly on the feature-extraction component, also known as the backbone.
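To give a flavour of what searching for a backbone involves, here is a minimal, illustrative sketch of the search phase in one-shot NAS of the kind DetNAS builds on: each architecture is encoded as a sequence of per-block operation choices, and an evolutionary loop mutates the best candidates. The `fitness` function below is a hypothetical stand-in; in the real method it would be the validation detection accuracy of the sampled path, evaluated with the shared weights of a pre-trained supernet.

```python
import random

NUM_BLOCKS = 8    # backbone depth (illustrative, not DetNAS's actual setting)
NUM_CHOICES = 4   # candidate operations per block (e.g. ShuffleNet-style variants)

def random_arch():
    """Sample a random architecture encoding: one op index per block."""
    return [random.randrange(NUM_CHOICES) for _ in range(NUM_BLOCKS)]

def mutate(arch, prob=0.2):
    """Resample each block's op with probability `prob`."""
    return [random.randrange(NUM_CHOICES) if random.random() < prob else op
            for op in arch]

def fitness(arch):
    # Toy stand-in: peaks when every block picks op 1. In practice this
    # would be the sampled path's validation mAP under supernet weights.
    return -sum((op - 1) ** 2 for op in arch)

def evolve(generations=30, population=20, parents=5):
    """Simple evolutionary search: keep the top `parents`, refill by mutation."""
    pop = [random_arch() for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        best = pop[:parents]
        pop = best + [mutate(random.choice(best))
                      for _ in range(population - parents)]
    return max(pop, key=fitness)

best = evolve()
```

The appeal of the one-shot setup is that the expensive supernet training happens once, after which evaluating each candidate path is cheap, so the evolutionary loop can explore many backbones.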
This work led to my first conference publication (see /publications/), where we examine the following main questions:
- Is pre-training with ImageNet useful for underwater detection?
- How far are state-of-the-art algorithms from performing robustly underwater?
- How good are their generalization capabilities?
The DetNAS [2] method was compared against previous results obtained with YOLOv3 [3] on two publicly available underwater object detection datasets.
Footnotes:

1. Hospedales, T.M., Antoniou, A., Micaelli, P., & Storkey, A.J. (2020). Meta-Learning in Neural Networks: A Survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44, 5149-5169.
2. Chen, Y., Yang, T., Zhang, X., Meng, G., Xiao, X., & Sun, J. (2019). DetNAS: Backbone Search for Object Detection. Advances in Neural Information Processing Systems.
3. Redmon, J., Divvala, S.K., Girshick, R.B., & Farhadi, A. (2016). You Only Look Once: Unified, Real-Time Object Detection. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 779-788.