Object Position Estimation based on Dual Sight Perspective Configuration


Dion Setiawan
Maulana Ifdhil Hanafi
Indra Riyanto
Akhmad Musafa

Abstract

The development of a coordination system requires a dataset, because the dataset provides information about the system's surroundings that the coordination system can use to make decisions. Therefore, the capability to process and display the positions of objects around the robots is necessary. This paper presents a method for predicting an object's position, based on the Indoor Positioning System (IPS) concept and on object position estimation with a multi-camera (i.e., stereo vision) system. The method requires two inputs to estimate the ball's position: the input image and the robot's relative position. The approach adopts simple, easy-to-compute techniques: trigonometry, angle rotation, and linear functions. The method was tested on a ROS and Gazebo simulation platform. The experimental results show that this configuration can estimate an object's position with a Mean Squared Error of 0.383 meters. In addition, the R-squared value of the distance calibration is 0.9932, which implies that the system performs very well at estimating an object's position.
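The abstract describes combining each robot's known relative pose with bearing measurements through trigonometry and angle rotation. A minimal sketch of such a dual-sightline triangulation is shown below; the function name and the ray-intersection formulation are illustrative assumptions, not the authors' exact implementation.

```python
import math

def estimate_position(p1, theta1, p2, theta2):
    """Estimate an object's 2-D position from two sight lines.

    p1, p2  : (x, y) positions of the two observers
    theta1,
    theta2  : absolute bearing angles (radians) toward the object
    Returns the intersection of the two rays, or None if they are parallel.
    """
    # Unit direction vectors of the two sight lines
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    # Solve p1 + t*d1 = p2 + s*d2 for t (a 2x2 linear system)
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        return None  # parallel sight lines: no unique intersection
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Two observers at (0, 0) and (4, 0) both sight an object at (2, 2):
# bearings of 45 and 135 degrees respectively.
print(estimate_position((0, 0), math.pi / 4, (4, 0), 3 * math.pi / 4))
```

In practice each bearing would be derived from the ball's pixel position in a camera image plus the robot's heading, so the estimate inherits both localization and detection error.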


Article Details

How to Cite
[1]
D. Setiawan, M. Hanafi, I. Riyanto, and A. Musafa, “Object Position Estimation based on Dual Sight Perspective Configuration”, INFOTEL, vol. 13, no. 2, pp. 94-103, May 2021.
Section
Electronics
Author Biographies

Dion Setiawan, Universitas Budi Luhur

First Author

Maulana Ifdhil Hanafi, Universitas Budi Luhur

Third Author

Akhmad Musafa, Universitas Budi Luhur

Supervisor of this research
