References:
[1] T. G. O’Brien, “Abundance, Density and Relative Abundance,” in Camera Traps in Animal Ecology, A. F. O’Connell, J. D. Nichols, and K. U. Karanth, Eds. Tokyo: Springer, 2011, pp. 71-96.
[2] A. Swanson, M. Kosmala, C. Lintott, and C. Packer, “A generalised approach for producing, quantifying, and validating citizen science data from wildlife images,” Conservation Biology, vol. 30, no. 3, pp. 520-531, Apr. 2016.
[3] S. Schneider, S. Greenberg, G. W. Taylor, and S. C. Kremer, “Three critical factors affecting automated image species recognition performance for camera traps,” Ecology and Evolution, vol. 10, no. 7, pp. 3503-3517, Mar. 2020.
[4] M. S. Norouzzadeh, A. Nguyen, M. Kosmala, et al., “Automatically identifying wild animals in camera trap images with deep learning,” Proceedings of the National Academy of Sciences, vol. 115, no. 25, pp. E5716-E5725, Jun. 2018.
[5] J. Martin, W. M. Kitchens, and J. E. Hines, “Importance of well-designed monitoring programs for the conservation of endangered species,” Conservation Biology, vol. 21, no. 2, pp. 472-481, Apr. 2007.
[6] A. Panesar, Machine Learning and AI for Healthcare. Coventry: Apress, 2019.
[7] M. Mitchell, Artificial Intelligence: A Guide for Thinking Humans. London: Penguin, 2019.
[8] Z. Alom, T. M. Taha, C. Yakopcic, S. Westberg, P. Sidike, et al., “The History Began from AlexNet: A Comprehensive Survey on Deep Learning Approaches,” Post-Doctoral research, Dept. Comp. Sci., Univ. of Dayton, OH, 2018.
[9] D. H. Hubel and T. N. Wiesel, “Receptive fields of single neurones in the cat’s striate cortex,” The Journal of Physiology, vol. 148, no. 3, pp. 574-591, Oct. 1959.
[10] S. Sutherland, “The vision of David Marr,” Nature, vol. 298, pp. 691-692, Aug. 1982.
[11] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning (Adaptive Computation and Machine Learning series). Cambridge, MA: MIT Press, 2016.
[12] Z. Yang, T. Dan, and Y. Yang, “Multi-temporal Remote Sensing Image Registration Using Deep Convolutional Features,” IEEE Access, vol. 6, pp. 38544-38555, Jul. 2018.
[13] V. H. Phung and E. J. Rhee, “A High-Accuracy Model Average Ensemble of Convolutional Neural Networks for Classification of Cloud Image Patches on Small Datasets,” Applied Sciences, vol. 9, no. 21, Art. no. 4500, Nov. 2019.
[14] A. Gomez, G. Diez, A. Salazar, and A. Diaz, “Animal Identification in Low Quality Camera-Trap Images Using Very Deep Convolutional Neural Networks and Confidence Thresholds,” in Proc. International Symposium on Visual Computing, Las Vegas, NV, 2016, pp. 747-756.
[15] ImageNet. (2016, May 31). Large Scale Visual Recognition Challenge 2016 (ILSVRC2016) [Online]. Available: http://image-net.org/challenges/LSVRC/2016/
[16] Zooniverse. (2020). Snapshot Serengeti [Online]. Available: https://www.zooniverse.org/projects/zooniverse/snapshot-serengeti
[17] M. A. Tabak, M. S. Norouzzadeh, D. W. Wolfson, S. K. Sweeney, et al., “Machine learning to classify animal species in camera trap images: Applications in ecology,” Methods in Ecology and Evolution, vol. 10, no. 4, pp. 585-590, Nov. 2018.
[18] Parks Canada. (2019, Oct. 16). Wildlife webcams and remote cameras [Online]. Available: https://www.pc.gc.ca/en/nature/science/controle-monitoring/cameras
[19] O. J. Robinson, V. R. Gutierrez, and D. Fink, “Correcting for bias in distribution modelling for rare species using citizen science data,” Diversity and Distributions, vol. 24, no. 4, pp. 460-472, Dec. 2017.
[20] A. Swanson, M. Kosmala, C. Lintott, R. Simpson, A. Smith, and C. Packer, “Snapshot Serengeti, high-frequency annotated camera trap images of 40 mammalian species in an African savanna,” Scientific Data, vol. 2, Art. no. 150026, Jun. 2015.
[21] T. Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, “Focal Loss for Dense Object Detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 42, no. 2, pp. 318-327, Feb. 2020.
[22] M. Lotfy, R. Shubair, and S. Albarqouni, “Investigation of Focal Loss in Deep Learning Models For Femur Fractures Classification,” in Proc. 2019 IEEE International Conference on Electrical and Computing Technologies and Applications, UAE, 2019.
[23] K. Pasupa, S. Vatathanavaro, and S. Tungjitnob, “Convolutional neural networks based focal loss for class imbalance problem: a case study of canine red blood cells morphology classification,” Journal of Ambient Intelligence and Humanized Computing, Feb. 2020.
[24] S. Schneider. (2019, Sep. 9). Camera Trap Species Classifier [Source code]. Available: https://github.com/Schnei1811/Camera_Trap_Species_Classifier
[25] TensorFlow Addons. (2020, Aug. 5). focal_loss.py [Source code]. Available: https://github.com/tensorflow/addons/blob/v0.11.2/tensorflow_addons/losses/focal_loss.py
[26] G. Piosenka. (2020, Jul. 27). 225 Bird Species [Online]. Available: https://www.kaggle.com/gpiosenka/100-bird-species