All-in-one computation vs computational-offloading approaches: a performance evaluation of object detection strategies on Android mobile devices
Abstract
Object detection gives a computer the ability to classify objects in an image or video. However, a high-specification device is needed to achieve good performance. One way to enable low-specification devices to perform better is to offload the computation from the low-specification device to another device with better specifications. This paper evaluates the performance of object detection strategies on an Android mobile phone: all-in-one computation on the phone versus computation offloaded to an Nvidia Jetson Nano. The experiment carries out video surveillance from the Android mobile phone under two scenarios: all-in-one object detection computation on a single Android device, and object detection computation decoupled between an Android device and an Nvidia Jetson Nano. In the offloading scenario, the Android application sends the video input for object detection over the RTSP/RTMP streaming protocol to the Nvidia Jetson Nano, which acts as the RTSP/RTMP server; the object detection output is then sent back to the Android device to be displayed to the user. The results show that the Android device (a Huawei Y7 Pro), with an average performance of 1.82 FPS and an average computing time of 552 ms per frame, improves significantly when working with the Nvidia Jetson Nano: the average frame rate rises to 10 FPS and the average computing time drops to 95 ms. Thus, decoupling object detection computation between an Android device and an Nvidia Jetson Nano using the system presented in this paper successfully improves detection speed.
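As a rough consistency check on the reported figures, a per-frame computing time implies an upper bound on the achievable frame rate (FPS ≈ 1000 / latency in ms). The sketch below applies this to the 552 ms and 95 ms averages from the abstract; small deviations from the reported FPS averages are expected, since the average of per-frame rates is not the reciprocal of the average latency.

```python
def avg_fps_from_latency(latency_ms: float) -> float:
    """Frame rate (frames/s) implied by an average per-frame computing time."""
    return 1000.0 / latency_ms

# On-device computation (Huawei Y7 Pro): 552 ms per frame
print(round(avg_fps_from_latency(552), 2))  # ~1.81 FPS (paper reports 1.82)

# Offloaded to the Nvidia Jetson Nano: 95 ms per frame
print(round(avg_fps_from_latency(95), 2))   # ~10.53 FPS (paper reports ~10)
```

The implied values bracket the reported averages, which is consistent with the computing time, not the streaming path, being the bottleneck in both scenarios.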
Article Details
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work.
[19] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., vol. 2016-Decem, pp. 779–788, 2016, doi: 10.1109/CVPR.2016.91.