• Title/Summary/Keyword: Hand tracking

Search results: 350

Real-time Human Pose Estimation using RGB-D images and Deep Learning

  • Rim, Beanbonyka;Sung, Nak-Jun;Ma, Jun;Choi, Yoo-Joo;Hong, Min
    • Journal of Internet Computing and Services, v.21 no.3, pp.113-121, 2020
  • Human Pose Estimation (HPE), which localizes the human body joints, has high potential for high-level applications in computer vision. The main challenges of real-time HPE are occlusion, illumination change, and diversity of pose appearance. A single RGB image can be fed into an HPE framework to reduce computation cost, since it requires only a depth-independent device such as a common camera, webcam, or phone camera. However, HPE based on a single RGB image cannot solve the above challenges because of the inherent characteristics of color and texture. Depth information, on the other hand, lets an HPE framework detect human body parts in 3D coordinates and can help solve those challenges. However, depth-based HPE requires a depth-dependent device, which has space constraints and is costly. In particular, the result of depth-based HPE is less reliable because it requires pose initialization and its frame tracking is less stable. This paper therefore proposes a new HPE method that is robust in estimating self-occlusion. Many human body parts can be occluded by other body parts, but this paper focuses only on head self-occlusion. The new method combines an RGB image-based HPE framework with a depth information-based HPE framework. We evaluated the performance of the proposed method with the COCO Object Keypoint Similarity (OKS) metric. By taking advantage of both the RGB image-based and the depth information-based HPE methods, our RGB-D-based HPE method achieved an mAP of 0.903 and an mAR of 0.938, showing that it outperforms both RGB-based and depth-based HPE.
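The abstract above evaluates with COCO Object Keypoint Similarity. As a rough illustration of what that metric computes, here is a minimal sketch for one person; the per-keypoint constants below are illustrative placeholders, not the official COCO values.

```python
import math

# Per-keypoint constants (kappa_i) control how forgiving OKS is for
# each joint type; these five values are illustrative placeholders.
KAPPAS = [0.026, 0.025, 0.025, 0.035, 0.035]

def oks(pred, gt, visibility, scale, kappas=KAPPAS):
    """Object Keypoint Similarity between predicted and ground-truth
    keypoints of one person.

    pred, gt   -- lists of (x, y) tuples, same length as kappas
    visibility -- list of ints; keypoints with v > 0 are labeled
    scale      -- object scale s (sqrt of the person's segment area)
    """
    num, labeled = 0.0, 0
    for (px, py), (gx, gy), v, k in zip(pred, gt, visibility, kappas):
        if v <= 0:
            continue  # unlabeled keypoints do not count
        d2 = (px - gx) ** 2 + (py - gy) ** 2
        num += math.exp(-d2 / (2 * scale ** 2 * k ** 2))
        labeled += 1
    return num / labeled if labeled else 0.0
```

A perfect prediction scores 1.0; mAP/mAR are then computed by thresholding OKS at several levels across the dataset.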

Pictorial Model of Upper Body based Pose Recognition and Particle Filter Tracking (그림모델과 파티클필터를 이용한 인간 정면 상반신 포즈 인식)

  • Oh, Chi-Min;Islam, Md. Zahidul;Kim, Min-Wook;Lee, Chil-Woo
    • Proceedings of the Korean HCI Society Conference, 2009.02a, pp.186-192, 2009
  • In this paper, we present a recognition method for frontal upper-body human poses. In HCI (Human-Computer Interaction) and HRI (Human-Robot Interaction), when an interaction is established, the user usually faces the robot or computer and uses hand gestures, so we focus on frontal upper-body poses. There are two main difficulties: first, the human pose consists of many parts, which causes a high DOF (Degree Of Freedom) and makes modeling the pose difficult; second, matching image features against the model information is difficult. Using a Pictorial Model, we model the main poses that occupy most of the space of frontal upper-body poses and recognize them against a main-pose database. The parameters of the recognized main pose then initialize a particle filter, which predicts the posterior distribution of the pose parameters and determines a more specific pose by updating the model parameters from the particle with the maximum likelihood. By recognizing main poses and then tracking the specific pose, we recognize frontal upper-body human poses.

The Palm Line Extraction and Analysis using Fuzzy Method (퍼지 기법을 이용한 손금 추출 및 분석)

  • Kim, Kwang-Baek;Song, Doo-Heon
    • Journal of the Korea Institute of Information and Communication Engineering, v.14 no.11, pp.2429-2434, 2010
  • In this paper, we propose a method to extract and analyze palm lines with a fuzzy method. To extract the palm region, we transform the original RGB color space to the YCbCr color space and extract skin colors in the ranges Y: 65-255, Cb: 25-255, Cr: 130-255, using them as thresholds. Possible noise is removed by an 8-directional contour-tracking algorithm and the morphological characteristics of the palm. Edges are then extracted from the noise-free image with a stretching method and a Sobel mask. A fuzzy binarization algorithm is applied to remove any remaining fine noise so that only the palm lines and the boundary of the hand remain. Since palm reading is done with the major lines, we use the morphological characteristics of the analyzable palm lines and fuzzy inference rules. Experiments verify that the proposed method gives better visibility, and is thus more useful for palm reading, than the previous method.
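The skin-color thresholding step can be sketched per pixel using the standard BT.601 RGB-to-YCbCr conversion and the ranges quoted in the abstract; the abstract does not state which YCbCr variant the authors used, so the conversion here is an assumption.

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    y  =        0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b):
    """Threshold test with the ranges quoted in the abstract
    (Y: 65-255, Cb: 25-255, Cr: 130-255)."""
    y, cb, cr = rgb_to_ycbcr(r, g, b)
    return 65 <= y <= 255 and 25 <= cb <= 255 and 130 <= cr <= 255
```

Applying `is_skin` over every pixel yields the binary palm mask that the contour-tracking and morphology steps then clean up.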

WOI : Determining Area of Interest and Gaze Analysis for Task Switching in a Window Unit Behavior Measurement in Windowing System GUI (WOI : 윈도윙 시스템 GUI에서의 창 단위 작업 전환 행위 측정을 위한 관심영역 지정 및 시선 분석)

  • Ko, Eunji;Choi, Sun-Young
    • Journal of the Korea Institute of Information and Communication Engineering, v.20 no.5, pp.963-971, 2016
  • This paper studies gaze analysis for measuring task-switching behavior across multiple windows in the GUI of a windowing system. Previous methods that define an area of interest for categorizing gaze have difficulty with dynamic content that disappears or changes over time. This study instead suggests a new method that defines the area of interest at the level of a window during the eye-tracking experiment. We construct the concept of a WOI (Window of Interest) in the GUI of a windowing system, develop a system using an eye-tracker device, and carry out a number of experiments. We then analyze the number of task switches and the proportion of watched content. The proposed method extends previous research in that it can measure and analyze multitasking behaviors across multiple windows.
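Categorizing gaze by window and counting task switches could look roughly like the sketch below. The rectangle-based hit test and the front-to-back window stack are assumptions about how a windowing system exposes its state, not the paper's implementation.

```python
def window_at(x, y, stack, rects):
    """Topmost window under the gaze point (x, y). `stack` lists window
    ids from front to back; `rects` maps id -> (left, top, width, height).
    Returns None if the point hits no window."""
    for wid in stack:
        left, top, w, h = rects[wid]
        if left <= x < left + w and top <= y < top + h:
            return wid
    return None

def count_switches(gaze_points, stack, rects):
    """Number of window-to-window task switches along a gaze trace."""
    switches, prev = 0, None
    for x, y in gaze_points:
        wid = window_at(x, y, stack, rects)
        if wid is not None and prev is not None and wid != prev:
            switches += 1
        if wid is not None:
            prev = wid
    return switches
```

In a real experiment the window stack and rectangles would be re-queried per gaze sample, since windows move and reorder over time.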

3D View Controlling by Using Eye Gaze Tracking in First Person Shooting Game (1 인칭 슈팅 게임에서 눈동자 시선 추적에 의한 3차원 화면 조정)

  • Lee, Eui-Chul;Cho, Yong-Joo;Park, Kang-Ryoung
    • Journal of Korea Multimedia Society, v.8 no.10, pp.1293-1305, 2005
  • In this paper, we propose a method for controlling the gaze direction of a 3D FPS game character by detecting eye gaze in successive images captured by a USB camera attached beneath an HMD. The proposed method is composed of three parts. In the first part, we detect the user's pupil center with a real-time image-processing algorithm applied to the successive input images. In the second part, calibration, the geometric relationship is determined between the monitor gazing position and the detected eye position gazing at that monitor position. In the last part, the final gaze position on the HMD monitor is tracked and the 3D view in the game is controlled by the gaze position based on the calibration information. Experimental results show that our method can be used by handicapped game players who cannot use their hands. It can also increase interest and immersion by synchronizing the gaze direction of the game player with that of the game character.
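The calibration step maps detected pupil positions to screen coordinates. The paper's geometric mapping is not specified in the abstract, so the sketch below substitutes the simplest plausible model: an independent least-squares linear fit per axis from calibration samples.

```python
def fit_axis(pupil_vals, screen_vals):
    """Least-squares fit of screen = a * pupil + b for one axis,
    from paired calibration samples."""
    n = len(pupil_vals)
    mean_p = sum(pupil_vals) / n
    mean_s = sum(screen_vals) / n
    var = sum((p - mean_p) ** 2 for p in pupil_vals)
    cov = sum((p - mean_p) * (s - mean_s)
              for p, s in zip(pupil_vals, screen_vals))
    a = cov / var
    return a, mean_s - a * mean_p

class GazeMapper:
    """Maps a detected pupil center to a screen coordinate using two
    independent linear fits (x and y). A simplified stand-in for the
    paper's calibration."""
    def __init__(self, pupil_pts, screen_pts):
        self.ax = fit_axis([p[0] for p in pupil_pts],
                           [s[0] for s in screen_pts])
        self.ay = fit_axis([p[1] for p in pupil_pts],
                           [s[1] for s in screen_pts])

    def map(self, px, py):
        (a, b), (c, d) = self.ax, self.ay
        return a * px + b, c * py + d
```

A real system would use more calibration targets and a richer mapping (e.g. a homography) to absorb camera tilt and eye geometry.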

3D Object Location Identification Using Finger Pointing and a Robot System for Tracking an Identified Object (손가락 Pointing에 의한 물체의 3차원 위치정보 인식 및 인식된 물체 추적 로봇 시스템)

  • Gwak, Dong-Gi;Hwang, Soon-Chul;Ok, Seo-Won;Yim, Jung-Sae;Kim, Dong Hwan
    • Journal of the Korean Society of Manufacturing Technology Engineers, v.24 no.6, pp.703-709, 2015
  • In this work, a robot that grasps and delivers an object in response to a simple finger-pointing command from a hand- or arm-handicapped person is introduced. In this robot system, a Leap Motion sensor is used to obtain the user's finger-motion data, and a Kinect sensor measures the 3D (three-dimensional) position of the desired object. Once the handicapped user points at the object, its exact 3D position is determined using an image-processing technique and a coordinate transformation between the Leap Motion and Kinect sensors. The resulting information is transmitted to the robot controller, and the robot successfully grasps the target and delivers it to the handicapped person.
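The coordinate transformation between the two sensor frames amounts to applying a rigid transform p' = R p + t. The rotation and translation below are placeholders; in practice they come from an extrinsic calibration between the Leap Motion and Kinect mounts.

```python
def transform_point(R, t, p):
    """Map a 3D point p from one sensor frame into another via the rigid
    transform p' = R p + t. R is a 3x3 rotation given as nested lists,
    t a translation vector; both are assumed known from calibration."""
    return tuple(sum(R[i][j] * p[j] for j in range(3)) + t[i]
                 for i in range(3))
```

Chaining this with the Kinect's depth-to-camera projection gives the object position in whichever frame the robot controller expects.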

Big data, how to balance privacy and social values (빅데이터, 프라이버시와 사회적 가치의 조화방안)

  • Hwang, Joo-Seong
    • Journal of Digital Convergence, v.11 no.11, pp.143-153, 2013
  • Big data is expected to bring forth enormous public good as well as economic opportunity. However, there is ongoing concern about privacy, not only from public authorities but also from private enterprises. Big data is suspected to aggravate the existing privacy battleground by introducing new types of privacy risk, such as the risk of exposing behavioral patterns. On the other hand, big data is asserted to be a new way to bypass traditional behavioral tracking such as cookies, DPI, fingerprinting, etc., because it is not based on a targeted person. This paper examines whether big data can help capture the behavioral patterns of consumers without threatening or damaging their privacy, and discerns the difference between traditional behavioral tracking and big data analysis from the perspective of privacy.

A Study on the Development of iGPS 3D Probe for RDS for the Precision Measurement of TCP (RDS(Robotic Drilling System)용 TCP 정밀계측을 위한 iGPS 3D Probe 개발에 관한 연구)

  • Kim, Tae-Hwa;Moon, Sung-Ho;Kang, Seong-Ho;Kwon, Soon-Jae
    • Journal of the Korean Society of Manufacturing Process Engineers, v.11 no.6, pp.130-138, 2012
  • There are increasing demands from industry for intelligent robot-calibration solutions that can be tightly integrated into the manufacturing process. The proposed solution simplifies conventional robot-calibration and teaching methods, avoiding tedious procedures and lengthy training time. The iGPS (Indoor GPS) system is a laser-based real-time dynamic tracking/measurement system. Its key element is acquiring and reporting three-dimensional (3D) information, either as an integrated system or through manual contact-based measurements by a user. A 3D probe is introduced: the user holds the probe in hand and moves the probe tip over the object, and the X, Y, and Z coordinates of the probe tip are measured in real time with high accuracy. In this paper, a new approach to robot calibration and teaching is introduced by implementing a 3D measurement system that measures and tracks an object moving in up to six degrees of freedom. The general concept and kinematics of the metrology system, as well as the derivation of an error budget for the general device, are described. Experimental results on geometry and related error identification for an easy compensation/teaching method on an industrial robot are also included.

Research on Human Posture Recognition System Based on The Object Detection Dataset (객체 감지 데이터 셋 기반 인체 자세 인식시스템 연구)

  • Liu, Yan;Li, Lai-Cun;Lu, Jing-Xuan;Xu, Meng;Jeong, Yang-Kwon
    • The Journal of the Korea Institute of Electronic Communication Sciences, v.17 no.1, pp.111-118, 2022
  • In computer vision research, two-dimensional human pose estimation is a very broad research direction, especially in pose tracking and behavior recognition, where it has great significance. Acquiring human pose targets, which is essentially the study of how to accurately identify human targets in images, has been a hot research topic in recent years. Human pose recognition is used both in artificial intelligence and in daily life, and the quality of a pose-recognition system is determined mainly by the success rate and the accuracy of the recognition process. In this work, the human body is labeled with 17 key points, and the key points are segmented to ensure the accuracy of the labeling information. For recognition, the comprehensive MS COCO dataset is used to train a deep neural network model on a large number of samples, progressing from simple step-by-step training to efficient training, so that a good accuracy rate can be obtained.
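The 17 key points mentioned above are the standard MS COCO keypoint set; packing per-keypoint coordinates and visibility flags into an annotation might look like the sketch below, where the dict layout is an illustrative choice rather than the COCO JSON format itself.

```python
# The 17 keypoints of the MS COCO person-keypoint annotation scheme.
COCO_KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

def label_person(coords, visibilities):
    """Pack per-keypoint coordinates and visibility flags into a
    name-indexed dict. Visibility follows COCO: 0 = unlabeled,
    1 = labeled but occluded, 2 = labeled and visible."""
    return {name: (x, y, v)
            for name, (x, y), v
            in zip(COCO_KEYPOINTS, coords, visibilities)}
```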

Golf Swing Classification Using Fuzzy System (퍼지 시스템을 이용한 골프 스윙 분류)

  • Park, Junwook;Kwak, Sooyeong
    • Journal of Broadcast Engineering, v.18 no.3, pp.380-392, 2013
  • A method to classify a golf swing motion into 7 sections using a Kinect sensor and a fuzzy system is proposed. The inputs to the fuzzy logic are the positions of the golf club and its head, extracted from the golfer's joint positions and color information obtained by a Kinect sensor. The proposed method consists of three modules: one for extracting joint information, another for detecting and tracking the golf club, and the last for classifying golf-swing motions. The first module extracts the hand position from the joint information provided by the Kinect sensor. The second module detects the golf club and its head with the Hough line transform, based on the hand coordinates. Using fuzzy logic as the classification engine reduces recognition errors and consequently improves the robustness of classification. In experiments on real-time video clips, the proposed method achieves a classification reliability of 85.2%.
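A fuzzy classifier over club-head position can be sketched with triangular membership functions; the section names and angle ranges in the usage below are illustrative placeholders, not the paper's actual rule base.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: rises from a to a peak at b,
    then falls back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify_swing(club_angle, sections):
    """Return the swing section whose fuzzy membership over the
    club-head angle is highest. `sections` maps a section name to
    its (a, b, c) triangle parameters."""
    return max(sections, key=lambda s: triangular(club_angle, *sections[s]))
```

In the paper seven such sections would overlap along the swing, and the maximum-membership rule picks the current one frame by frame.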