
A Design and Implementation of Object Recognition based Interactive Game Contents using Kinect Sensor and Unity 3D Engine

키넥트 센서와 유니티 3D 엔진기반의 객체 인식 기법을 적용한 체험형 게임 콘텐츠 설계 및 구현

  • Jung, Se-hoon (School of Major Connection, Youngsan University) ;
  • Lee, Ju-hwan (Dept. of Multimedia Eng., Sunchon University) ;
  • Jo, Kyeong-Ho (Dept. of Multimedia Eng., Sunchon University) ;
  • Park, Jae-Seong (Dept. of Multimedia Eng., Sunchon University) ;
  • Sim, Chun Bo (School of Information Com. and Multimedia Eng., Sunchon University)
  • Received : 2018.10.15
  • Accepted : 2018.10.17
  • Published : 2018.12.31

Abstract

We propose an object recognition system and experiential game content that use Kinect to maximize the recognition rate of underwater robot objects. To validate the proposed system, we implement an ice hockey game based on object-aware interactive content. The object recognition system, a preprocessing module, is built on Kinect and OpenCV, and network sockets handle the object recognition communication between client and server. The system development method suggested in this study addresses a shortcoming of existing research: degraded object recognition at long distances. In the performance evaluation, the system recognized 90.49% of the underwater robot target objects with 80% accuracy at a distance of 2 m, yielding an F-measure of 42.46%. At 2.5 m, it recognized 82.87% of the target objects with 60.5% accuracy (F-measure 34.96%), and at 3 m it recognized 98.50% of the target objects with 59.4% accuracy (F-measure 37.04%).
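The abstract reports a recognition rate, an accuracy, and an F-measure at each distance. How the paper maps "accuracy" and "recognition rate" onto precision and recall is not spelled out here, so the following is only the generic definition of the F-measure as the harmonic mean of precision and recall (the function name `f_measure` is ours, not from the paper):

```python
def f_measure(precision, recall):
    """Harmonic mean of precision and recall (the balanced F1 score)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# When precision and recall are equal, the F-measure equals them both.
print(f_measure(0.8, 0.8))  # 0.8
```

The harmonic mean penalizes imbalance: a system with high recall but low precision (or vice versa) scores well below the arithmetic mean of the two.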

Keywords

Fig. 1. Conceptual Diagram of Kinect and Underwater Robot.

Fig. 2. Overall Structure of Proposed System.

Fig. 3. Structure of Object Recognition Algorithm.

Fig. 4. Principle of Kinect Camera.

Fig. 5. Set Tracking Window.

Fig. 6. Set Threshold of Object Recognition.
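Fig. 6 suggests that recognition begins by thresholding the Kinect depth image so that only pixels within a target depth band survive as foreground. A minimal sketch of such depth-band thresholding, assuming depth values in millimetres (the band limits and the name `depth_threshold` are our illustration, not values from the paper):

```python
def depth_threshold(depth, near, far):
    """Binarize a depth map: 1 where the pixel lies inside [near, far] mm."""
    return [[1 if near <= d <= far else 0 for d in row] for row in depth]

# A 2x3 depth map with one region inside an assumed 1500-2500 mm band.
mask = depth_threshold([[1200, 2000, 2400],
                        [3000, 1800,  900]], near=1500, far=2500)
print(mask)  # [[0, 1, 1], [0, 1, 0]]
```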

Fig. 7. Set Blur.

Fig. 8. Objects Classification Through Labeling.
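Fig. 8 shows objects being separated through labeling, i.e. grouping connected foreground pixels of the binary mask into numbered regions. The paper builds on OpenCV; the pure-Python flood-fill below is only a stand-in to illustrate 4-connected component labeling:

```python
from collections import deque

def label_components(mask):
    """Assign a distinct label to each 4-connected foreground region of a 0/1 grid."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not labels[sy][sx]:
                count += 1                      # start a new region
                labels[sy][sx] = count
                queue = deque([(sy, sx)])
                while queue:                    # flood-fill the region
                    y, x = queue.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = count
                            queue.append((ny, nx))
    return labels, count

mask = [[1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1]]
labels, count = label_components(mask)
print(count)  # 2
```

The location variables of Fig. 9 would then follow from each label's extent: the minimum and maximum x and y of its pixels give the region's bounding box.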

Fig. 9. Location Variables Values of Labeling Scope.

Fig. 10. Structure of Communication Module.
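Fig. 10 depicts the communication module, and the abstract states that network sockets carry the recognition results between client and server. A minimal sketch of one round trip over a connected local socket pair; the comma-separated `label,x,y` line format is our assumption, not the paper's actual protocol:

```python
import socket

def encode_position(label, x, y):
    """Pack one recognized object's label and pixel position as a text line."""
    return f"{label},{x},{y}\n".encode("ascii")

def decode_position(data):
    """Unpack a line produced by encode_position."""
    label, x, y = data.decode("ascii").strip().split(",")
    return int(label), int(x), int(y)

# Demonstrate one round trip over a connected local socket pair.
server, client = socket.socketpair()
client.sendall(encode_position(1, 320, 240))
print(decode_position(server.recv(64)))  # (1, 320, 240)
server.close()
client.close()
```

In the actual system the recognizer side would send such a message per frame, and the Unity 3D client would translate the received coordinates into in-game object positions.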

Fig. 11. Structure of Game Content.

Fig. 12. Game Aquarium(H/W).

Fig. 14. Adjusted Depth Value.

Fig. 15. Adjusted Blur Cut.

Fig. 16. Adjusted Size.

Table 1. Implementation Environment

Fig. 13. Implementation Results.

Table 2. Performance Evaluation with Kinect Distance

Table 3. Results of F-measure with Kinect Distance

Table 4. Qualitative Comparison Evaluation with Existing Study
