• Title/Summary/Keyword: upper-body joint segmentation


Robust 2D human upper-body pose estimation with fully convolutional network

  • Lee, Seunghee; Koo, Jungmo; Kim, Jinki; Myung, Hyun
    • Advances in Robotics Research, v.2 no.2, pp.129-140, 2018
  • With the growing demand for human pose estimation in applications such as human-computer interaction and human activity recognition, numerous approaches have been proposed to detect the 2D poses of people in images more efficiently. Despite many years of research, estimating human poses from images still often fails to produce satisfactory results. In this study, we propose a robust 2D human body pose estimation method using an RGB camera sensor. The method is efficient and cost-effective, since an RGB camera is far cheaper than the high-priced sensors more commonly used. To estimate upper-body joint positions, semantic segmentation with a fully convolutional network is exploited: from the acquired RGB images, per-joint heatmaps are generated, from which the coordinates of each joint are accurately estimated. The network architecture was designed to learn and detect joint locations via sequential prediction processing. The proposed method was tested and validated for efficient estimation of the human upper-body pose, and the obtained results reveal the potential of a simple RGB camera sensor for human pose estimation applications.
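
A minimal illustrative sketch (not the authors' code) of how joint coordinates can be recovered from per-joint heatmaps such as those produced by a fully convolutional network; the heatmap resolution, joint count, and the random maps standing in for network output are assumptions made for the example:

```python
import numpy as np

def joints_from_heatmaps(heatmaps: np.ndarray) -> np.ndarray:
    """heatmaps: (num_joints, H, W) array of per-joint confidence maps.
    Returns a (num_joints, 2) array of (x, y) pixel coordinates, taking
    the location of the maximum response in each map."""
    num_joints, h, w = heatmaps.shape
    flat_idx = heatmaps.reshape(num_joints, -1).argmax(axis=1)
    ys, xs = np.unravel_index(flat_idx, (h, w))
    return np.stack([xs, ys], axis=1)

# Toy example: 7 hypothetical upper-body joints (head, neck, shoulders,
# elbows, wrists) on a 64x64 heatmap grid; random maps stand in for
# actual network output.
rng = np.random.default_rng(0)
fake_heatmaps = rng.random((7, 64, 64))
print(joints_from_heatmaps(fake_heatmaps))
```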

Development of a prototype simulator for dental education (치의학 교육을 위한 프로토타입 시뮬레이터의 개발)

  • Mi-El Kim; Jaehoon Sim; Aein Mon; Myung-Joo Kim; Young-Seok Park; Ho-Beom Kwon; Jaeheung Park
    • The Journal of Korean Academy of Prosthodontics, v.61 no.4, pp.257-267, 2023
  • Purpose. The purpose of this study was to fabricate a prototype robotic simulator for dental education, to test whether it could simulate mandibular movements, and to assess whether the simulator could respond to stimuli during dental practice. Materials and methods. A virtual simulator model was developed based on segmentation of the hard tissues from cone-beam computed tomography (CBCT) data. The simulator frame was 3D printed in polylactic acid (PLA), and dentiforms and silicone face skin were added. Servo actuators controlled the simulator's movements, and its response to dental stimuli was implemented with pressure and water level sensors. A water level test was performed to determine the specific threshold of the water level sensor. The mandibular movements and mandibular range of motion of the simulator were tested both in computer simulation and on the physical model. Results. The prototype robotic simulator consisted of an operational unit, an upper body with an electric device, and a head with a temporomandibular joint (TMJ) and dentiforms. The simulator's TMJ provided two degrees of freedom, implementing rotational and translational movements. In the water level test, the threshold of the water level sensor was 10.35 ml. The mandibular range of motion of the simulator was 50 mm in both the computer simulation and the physical model. Conclusion. Although further advancements are still required to improve its efficiency and stability, the upper-body prototype simulator has the potential to be useful in dental practice education.
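
A minimal illustrative sketch (not the authors' implementation) of the kind of threshold check described above, in which a water-level reading is compared against the reported 10.35 ml threshold to trigger a simulated patient response; the sensor-reading and response functions are hypothetical stand-ins for the simulator's hardware I/O:

```python
# Threshold reported for the water level sensor in the water level test.
WATER_LEVEL_THRESHOLD_ML = 10.35

def read_water_level_ml() -> float:
    """Hypothetical sensor read; a real simulator would query the water level sensor."""
    return 12.0  # fixed value so the example runs without hardware

def trigger_response() -> None:
    """Hypothetical actuator command (e.g., a servo-driven mandibular movement)."""
    print("Water level above threshold: triggering simulated response")

def control_step() -> None:
    # One pass of a simple control loop: read the sensor, compare to the
    # threshold, and respond only when the threshold is exceeded.
    if read_water_level_ml() >= WATER_LEVEL_THRESHOLD_ML:
        trigger_response()

control_step()
```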