Hand Pose Recognition Using Hu Moments
Abstract
Shape-based computer vision has considerable potential for human-computer interaction. Hand poses can serve as symbols of interaction between humans and computers, much as a variety of hand poses are used in sign language. Different hand poses can be used to replace mouse functions, to control robots, and so on. This research focuses on building a hand pose recognition system using Hu Moments. Recognition begins by segmenting the input image to produce the ROI (Region of Interest), i.e., the palm area. Edge detection is then applied, after which the Hu Moment values are extracted. These values are quantized against a codebook produced by K-Means during training; quantization assigns the input to the codeword with the smallest Euclidean distance to its Hu Moment vector. Based on the experimental results, the system recognizes hand poses with an accuracy of 88.57%.
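The extraction and quantization steps described above can be sketched in code. This is a minimal illustration, not the paper's implementation: it computes the seven standard Hu invariants from a binary image and assigns a feature vector to the nearest codeword by Euclidean distance. The image shape, codebook, and function names are illustrative assumptions.

```python
import numpy as np

def hu_moments(img):
    """Compute the seven Hu moment invariants of a 2-D intensity image."""
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xbar = (xs * img).sum() / m00          # centroid x
    ybar = (ys * img).sum() / m00          # centroid y

    def mu(p, q):
        # central moment of order (p, q)
        return (((xs - xbar) ** p) * ((ys - ybar) ** q) * img).sum()

    def eta(p, q):
        # scale-normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)

    # Hu's seven rotation-invariant combinations
    h1 = n20 + n02
    h2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    h3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    h4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    h5 = ((n30 - 3 * n12) * (n30 + n12)
          * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          + (3 * n21 - n03) * (n21 + n03)
          * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    h6 = ((n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
          + 4 * n11 * (n30 + n12) * (n21 + n03))
    h7 = ((3 * n21 - n03) * (n30 + n12)
          * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          - (n30 - 3 * n12) * (n21 + n03)
          * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    return np.array([h1, h2, h3, h4, h5, h6, h7])

def quantize(vec, codebook):
    """Index of the codeword nearest to vec (Euclidean distance)."""
    distances = np.linalg.norm(codebook - vec, axis=1)
    return int(np.argmin(distances))
```

In practice the codebook rows would be the K-Means cluster centers learned from training Hu Moment vectors; at recognition time each input ROI is reduced to its seven invariants and labeled by the nearest codeword, which is the smallest-Euclidean-distance rule the abstract describes.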
Article Details
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work.