• Title/Summary/Keyword: Body Gesture Recognition

A Bio-Feedback Controller for Image Training (이미지 트레이닝을 위한 바이오 피드백 컨트롤러)

  • Ahn, Jin-Ho; Moon, Myoung-Jib; Kim, Ho-Ryong; Kim, Kyung-Sik
    • Journal of The Institute of Information and Telecommunication Facilities Engineering, v.10 no.3, pp.92-97, 2011
  • In this paper, a controller that recognizes human gestures from EMG signals is presented. The compact, band-type controller is developed for image training that exercises a specific area of the body, and it uses dry silver-fiber electrodes that are easy to attach to and detach from the skin. The captured EMG signals are amplified, frequency-filtered, and converted to 10-bit digital values within the controller, then transmitted wirelessly to a server. Since the gesture recognition rate of the proposed controller on the biceps reaches 80%, we consider its practical potential very promising. A sketch of this kind of signal chain appears below.
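
As an illustration only, here is a minimal Python sketch of the filter-then-quantize chain the abstract describes. The sampling rate, filter band, and ADC voltage range are assumptions for the sketch, not values taken from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000          # assumed sampling rate (Hz); not stated in the abstract
ADC_BITS = 10      # 10-bit conversion, as described in the paper

def bandpass(emg, low=20.0, high=450.0, fs=FS, order=4):
    """Band-limit the raw EMG; 20-450 Hz is a common surface-EMG band."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, emg)

def quantize_10bit(signal, v_range=3.3):
    """Map the filtered signal into 10-bit codes (0..1023); 3.3 V is a guess."""
    clipped = np.clip(signal, -v_range / 2, v_range / 2)
    return np.round((clipped / v_range + 0.5) * (2**ADC_BITS - 1)).astype(np.uint16)

raw = np.random.randn(FS)              # stand-in for one second of raw EMG
codes = quantize_10bit(bandpass(raw))  # values ready for wireless transmission
```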

Interaction Intent Analysis of Multiple Persons using Nonverbal Behavior Features (인간의 비언어적 행동 특징을 이용한 다중 사용자의 상호작용 의도 분석)

  • Yun, Sang-Seok; Kim, Munsang; Choi, Mun-Taek; Song, Jae-Bok
    • Journal of Institute of Control, Robotics and Systems, v.19 no.8, pp.738-744, 2013
  • According to cognitive science research, the interaction intent of humans can be estimated through an analysis of their expressed behaviors. This paper proposes a novel methodology for reliable intention analysis of humans based on this approach. To identify intention, eight behavioral features are extracted from four characteristics of human-human interaction, and we outline a set of core components of human nonverbal behavior. These nonverbal behaviors are handled by recognition modules built on multimodal sensors, one per modality: localizing the speaker's sound source (audition), recognizing frontal faces and facial expressions (vision), and estimating human trajectories, body pose and lean, and hand gestures (spatial sensing). As a post-processing step, temporal confidence reasoning improves recognition performance, and an integrated human model quantitatively classifies intention from the multidimensional cues by applying weight factors. Interactive robots can thus make informed engagement decisions to interact effectively with multiple persons. Experimental results show that the proposed scheme works successfully between human users and a robot in human-robot interaction. A sketch of this kind of weighted cue fusion appears below.
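
The paper's eight behavioral features and weight factors are not enumerated in the abstract; the sketch below only illustrates the general idea of fusing per-modality cues into a scalar intent score, using hypothetical cue names and uniform placeholder weights.

```python
# Hypothetical per-modality confidences in [0, 1]; the actual feature
# set and learned weight factors from the paper are not published here.
cues = {"sound_source": 0.7, "frontal_face": 0.9, "expression": 0.4,
        "trajectory": 0.6, "body_pose": 0.8, "body_lean": 0.5,
        "hand_gesture": 0.3}
weights = {k: 1.0 / len(cues) for k in cues}  # uniform placeholder weights

def intent_score(cues, weights):
    """Weighted sum of nonverbal cues -> scalar interaction-intent score."""
    return sum(weights[k] * v for k, v in cues.items())

engage = intent_score(cues, weights) > 0.5    # threshold is an assumption
```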

The Audience Behavior-based Emotion Prediction Model for Personalized Service (고객 맞춤형 서비스를 위한 관객 행동 기반 감정예측모형)

  • Ryoo, Eun Chung; Ahn, Hyunchul; Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems, v.19 no.2, pp.73-85, 2013
  • In today's information society, knowledge services that create value from information are growing in importance. The development of IT has made information easy to collect and use, and many companies actively apply customer information to marketing in a variety of industries. Since the turn of the 21st century, companies have also used culture and the arts, closely linked to their commercial interests, to manage their corporate image and marketing. It is difficult, however, for companies to attract or maintain consumers' interest through technology alone, so cultural activities have become a common tool of differentiation, and many firms use the customer's experience as a new marketing strategy to respond effectively to a competitive market. Accordingly, there is a fast-growing need for personalized services that provide people with new experiences based on profile information capturing individual characteristics such as language, symbols, behavior, and emotion. Through such services we can judge the interaction between people and content and maximize customer experience and satisfaction. Among the various lines of work on customer-centered service, emotion recognition research has emerged recently. Existing studies mostly recognized emotion from bio-signals, voice, or faces, which show large emotional changes, but equipment limitations and service environments make it difficult to predict people's emotions this way. In this paper, we therefore develop an emotion prediction model based on a vision-based interface to overcome these limitations. Building on prior work on recognizing emotion from gesture and posture, we developed a model that recognizes people's emotional states from body gesture and posture using the difference-image method, and we identified the best-validated model for predicting four kinds of emotions. The proposed model automatically determines and predicts four human emotions: sadness, surprise, joy, and disgust. To build the model, an event booth was installed in the KOCCA lobby, where suitable stimulus movies were shown to collect participants' body gestures and postures as their emotions changed. Body movements were then extracted using the difference-image method, and the data were prepared for modeling with a neural network. The proposed model was tested with three time-frame settings (20, 30, and 40 frames), and the best-performing setting was adopted. Before building the three models, the entire set of 97 samples was divided into learning, test, and validation sets. The model itself is an artificial neural network trained with the back-propagation algorithm, with the learning rate set to 10% and the momentum rate to 10%. The sigmoid function was used as the transfer function, and the network is a three-layer perceptron with one hidden layer and four output nodes. Based on the test set, learning was stopped at 50,000 iterations, after the minimum error had been reached, in order to locate the best stopping point. We finally measured each model's accuracy and identified the best model for predicting each emotion. The results showed prediction accuracies of 100% for sadness and 96% for joy with the 20-frame model, and 88% for surprise and 98% for disgust with the 30-frame model. The findings are expected to provide an effective algorithm for personalized service in industries such as advertisement, exhibition, and performance. A small sketch of such a back-propagation network appears below.
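
For illustration, a minimal NumPy sketch of the kind of network the abstract specifies: a three-layer perceptron with one hidden layer, four sigmoid outputs, and back-propagation with a 10% learning rate and 10% momentum. The input and hidden sizes are assumptions; the paper's difference-image feature dimensionality is not given here.

```python
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_HIDDEN, N_OUT = 20, 10, 4   # input/hidden sizes assumed; 4 outputs =
                                    # sadness, surprise, joy, disgust

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

W1 = rng.normal(0, 0.1, (N_IN, N_HIDDEN))
W2 = rng.normal(0, 0.1, (N_HIDDEN, N_OUT))
V1 = np.zeros_like(W1); V2 = np.zeros_like(W2)   # momentum buffers
LR, MOM = 0.1, 0.1        # 10% learning rate and 10% momentum, as stated

def train_step(x, t):
    """One back-propagation step for a (feature vector, one-hot label) pair."""
    global W1, W2, V1, V2
    h = sigmoid(x @ W1)                 # hidden activations
    y = sigmoid(h @ W2)                 # output activations
    dy = (y - t) * y * (1 - y)          # output delta (squared-error loss)
    dh = (dy @ W2.T) * h * (1 - h)      # hidden delta
    V2 = MOM * V2 - LR * np.outer(h, dy); W2 += V2
    V1 = MOM * V1 - LR * np.outer(x, dh); W1 += V1
    return y
```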

Design and Implementation of Immersive Media System Based on Dynamic Projection Mapping and Gesture Recognition (동적 프로젝션 맵핑과 제스처 인식 기반의 실감 미디어 시스템 설계 및 구현)

  • Kim, Sang Joon; Koh, You Jon; Choi, Yoo-Joo
    • KIPS Transactions on Software and Data Engineering, v.9 no.3, pp.109-122, 2020
  • Recently, projection mapping, which has attracted great attention in the field of realistic media, has come to be regarded as a technology for increasing users' immersion. Most existing methods, however, perform projection mapping onto static objects. In this paper, we developed a technology that tracks users' movements and dynamically maps media content onto their bodies. The projected media content is built through predefined gestures using only the user's bare hands, without special devices. An interactive immersive media system was implemented by integrating this dynamic projection mapping with gesture-based drawing. The proposed system recognizes the movements and open/closed states of the user's hands and selects the functions needed to draw a picture. Users can draw freely, changing the brush color by sampling the colors of real objects. In addition, the user's drawing is dynamically projected onto the user's body, allowing users to design and "wear" their own T-shirts in real time. A sketch of one way to detect the open/closed hand state appears below.
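
The abstract does not describe how the open/closed hand state is classified; one common OpenCV approach, shown below as a hedged sketch, counts deep convexity defects (the valleys between spread fingers) in a binary hand mask. The depth and count thresholds are guesses.

```python
import cv2
import numpy as np

def hand_is_open(mask):
    """Classify a binary hand mask as open (spread fingers) or closed (fist)
    by counting deep convexity defects between fingers."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return False
    hand = max(contours, key=cv2.contourArea)        # largest blob = hand
    hull = cv2.convexHull(hand, returnPoints=False)  # hull as point indices
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return False
    # defects[:, 0, 3] holds defect depths in fixed-point 1/256-pixel units
    deep = int(np.sum(defects[:, 0, 3] / 256.0 > 20.0))
    return deep >= 3   # three or more finger valleys -> open hand
```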

Interactive drawing with user's intentions using image segmentation

  • Lim, Sooyeon
    • International Journal of Internet, Broadcasting and Communication, v.10 no.3, pp.73-80, 2018
  • This study introduces an interactive drawing system, a tool that allows users to sketch and draw according to their own intentions. The proposed system lets users express themselves more creatively through a tool that reproduces their original idea as a drawing and transforms it with their body. Users can actively participate in producing the artwork by exploring the spectator's unique formative language, and they are given the opportunity to experience a creative process by transforming an arbitrary drawing into various shapes with their gestures. The system uses segmentation of the drawing image as a way to extend the user's initial drawing idea, including transforming a two-dimensional drawing into a volume-like, three-dimensional form. In this process a psychological space is created that can stimulate the user's imagination and onto which objects of desire can be projected. This personification of the drawing gives users familiarity with the artwork and lets them express their emotions to others indirectly. Interactive drawing, which has moved beyond information transfer to an emotional concept of interaction, can thus create a cooperative sensory image across the user's time and space and occupy an important position in the multimedia society. A rough sketch of the segmentation step appears below.
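
As a rough sketch of the segmentation step, the snippet below splits a grayscale drawing into independently transformable parts with Otsu thresholding and connected components; the actual segmentation method used in the paper is not specified in the abstract.

```python
import cv2
import numpy as np

def segment_drawing(gray):
    """Split a user's line drawing (dark strokes on light paper) into parts
    via connected components; each part can then be transformed by gestures."""
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n, labels = cv2.connectedComponents(binary)
    # skip label 0 (background); return one binary mask per drawn part
    return [(labels == i).astype(np.uint8) * 255 for i in range(1, n)]
```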

Rotation Invariant 3D Star Skeleton Feature Extraction (회전무관 3D Star Skeleton 특징 추출)

  • Chun, Sung-Kuk; Hong, Kwang-Jin; Jung, Kee-Chul
    • Journal of KIISE: Software and Applications, v.36 no.10, pp.836-850, 2009
  • Human posture recognition has attracted tremendous attention in ubiquitous environments, the performing arts, and robot control, so many researchers in pattern recognition and computer vision are now working on efficient posture recognition systems. Most existing studies, however, are very sensitive to human variations such as rotation or translation of the body, because the feature extracted in the first step of a typical posture recognition system is influenced by these variations. To reduce this sensitivity and improve recognition results, this paper presents feature extraction methods based on a 3D Star Skeleton and Principal Component Analysis (PCA) in a multi-view environment. The proposed system uses eight projection maps, a kind of depth map, as input data; the projection maps are extracted during visual hull generation. From these data, the system constructs the 3D Star Skeleton and extracts a rotation-invariant feature using PCA. In the experiments, we extract the feature from the 3D Star Skeleton, recognize the human posture with it, and show that the proposed method is robust to human variations. A sketch of PCA-based rotation normalization appears below.
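
A minimal sketch of the PCA idea: projecting skeleton points onto their own principal axes removes the dependence on global body rotation, since a rotated point set yields correspondingly rotated axes. The six-point input and the choice of k are assumptions for illustration, not the paper's exact pipeline.

```python
import numpy as np

def pca_feature(skeleton_points, k=3):
    """Project 3D star-skeleton points onto their top-k principal axes.
    Aligning data to its own axes cancels any global rotation."""
    X = skeleton_points - skeleton_points.mean(axis=0)   # center the points
    _, _, Vt = np.linalg.svd(X, full_matrices=False)     # rows = principal axes
    return (X @ Vt[:k].T).ravel()                        # rotation-normalized feature

pts = np.random.rand(6, 3)      # stand-in for 6 star-skeleton extremities
feat = pca_feature(pts)         # same feature (up to sign) for any rotation of pts
```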

NUI/NUX framework based on intuitive hand motion (직관적인 핸드 모션에 기반한 NUI/NUX 프레임워크)

  • Lee, Gwanghyung; Shin, Dongkyoo; Shin, Dongil
    • Journal of Internet Computing and Services, v.15 no.3, pp.11-19, 2014
  • The natural user interface/experience (NUI/NUX) provides a natural motion interface without devices or tools such as mice, keyboards, pens, or markers. Until now, typical motion recognition methods have used markers, receiving the coordinates of each marker as relative data and storing each coordinate value in a database; but recognizing motion accurately requires more markers, and attaching the markers and processing the data take considerable time. Moreover, since NUI/NUX frameworks have been developed without regard for the most important property, intuitiveness, usability problems arise and users are forced to learn the conventions of many different frameworks. To address this problem, we avoid markers and implement the framework so that anyone can use it. We also designed a multimodal NUI/NUX framework that handles voice, body motion, and facial expression simultaneously, and we propose a new algorithm for mouse operation that recognizes intuitive hand gestures and maps them onto the monitor. The resulting "hand mouse" can be operated easily and intuitively. A sketch of such a hand-to-cursor mapping appears below.
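
A hedged sketch of the hand-to-cursor mapping: normalize the tracked hand position in camera coordinates, then scale it to the monitor. The camera and screen resolutions here are assumptions; the paper's actual mapping algorithm is not reproduced.

```python
def hand_to_cursor(hand_x, hand_y, cam_w=640, cam_h=480,
                   screen_w=1920, screen_h=1080):
    """Map a tracked hand position in camera coordinates onto the monitor."""
    u = min(max(hand_x / cam_w, 0.0), 1.0)   # normalize and clamp to [0, 1]
    v = min(max(hand_y / cam_h, 0.0), 1.0)
    return int(u * (screen_w - 1)), int(v * (screen_h - 1))

print(hand_to_cursor(320, 240))   # center of the camera frame -> screen center
```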

Human Tracking and Body Silhouette Extraction System for Humanoid Robot (휴머노이드 로봇을 위한 사람 검출, 추적 및 실루엣 추출 시스템)

  • Kwak, Soo-Yeong; Byun, Hye-Ran
    • The Journal of Korean Institute of Communications and Information Sciences, v.34 no.6C, pp.593-603, 2009
  • In this paper, we propose a new integrated computer vision system designed to track multiple human beings and extract their silhouettes with an active stereo camera. The proposed system consists of three modules: detection, tracking, and silhouette extraction. Detection is performed by camera ego-motion compensation and disparity segmentation. For tracking, we present an efficient mean-shift-based method in which tracked objects are characterized by disparity-weighted color histograms. The silhouette is obtained by two-step segmentation: a trimap is estimated in advance and then effectively incorporated into a graph-cut framework for fine segmentation. Evaluated against ground-truth data, the system detects and tracks multiple people very well and produces high-quality silhouettes. The proposed system can assist gesture and gait recognition in the field of Human-Robot Interaction (HRI). A sketch of histogram-based mean-shift tracking appears below.
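
For illustration, a minimal OpenCV mean-shift tracker over a hue histogram. The paper's method additionally weights the histogram by stereo disparity, which this sketch omits; the histogram bins and termination criteria are conventional defaults, not the paper's settings.

```python
import cv2

def track(frames, init_box):
    """Mean-shift tracking: back-project a hue histogram of the initial ROI
    into each frame, then shift the window toward the probability mode."""
    x, y, w, h = init_box
    roi = cv2.cvtColor(frames[0][y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([roi], [0], None, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    box = init_box
    for frame in frames[1:]:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        back = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        _, box = cv2.meanShift(back, box, term)
        yield box   # (x, y, w, h) of the tracked person in each frame
```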

States, Behaviors and Cues of Infants (영아의 상태, 행동, 암시)

  • Kim, Tae-Im
    • Korean Parent-Child Health Journal, v.1, pp.56-74, 1998
  • The language of the newborn, like that of adults, is one of gesture, posture, and expression (Lewis, 1980). Helping parents understand and respond to their newborn's cues will make caring for their baby more enjoyable and may well provide the foundation for a communicative bond that will last a lifetime. Infant state provides a dynamic pattern reflecting the full behavioral repertoire of the healthy infant (Brazelton, 1973, 1984). States are organized in a predictable temporal sequence and provide a basic classification of conditions that occur over and over again (Wolff, 1987). They are recognized by characteristic behavioral patterns, physiological changes, and the infant's level of responsiveness. Most importantly, however, states give caregivers a framework for observing and understanding infant behavior. When parents can determine whether their infant is asleep, awake, or drowsy, and know the implications that recognizing these states has both for the infant's behavior and for their caregiving, many things about taking care of a newborn become much easier and more rewarding. Most parents have the skills and the desire to do what is best for their infant. The skills parents bring to the interaction are the ability to read their infant's cues; to stimulate the baby through touch, movement, talking, and looking; and to respond in a contingent manner to the infant's signals. Among the crucial skills infants bring to the interaction are perceptual abilities (hearing and seeing), the capacity to look at another person for a period of time, and the ability to smile, be consoled, adapt their body to holding or movement, and respond regularly and predictably. Research demonstrates that the absence of these skills in either partner adversely affects parent-infant interaction and later development. Observing early parent-infant interactions during the hospital stay is important in order to identify parent-infant pairs in need of continued monitoring (Barnard et al., 1989).

Point Cloud Content in Form of Interactive Holograms (포인트 클라우드 형태의 인터랙티브 홀로그램 콘텐츠)

  • Kim, Dong-Hyun; Kim, Sang-Wook
    • The Journal of the Korea Contents Association, v.12 no.9, pp.40-47, 2012
  • Extending existing media art, this work proposes a new mode of viewing in which awareness and perception are instrumentalized through the human body and interaction itself becomes the way of watching. The point-cloud visual images resemble the pointillist technique (Pointage) of Western painting; the traditional painting technique is reconfigured by digital means. In this paper, aesthetic elements and digital technology are fused to produce video in point-cloud form, the video is projected onto holographic film, and spectators interact with the content through gestures. The production process consists of concept planning, content creation, production of point-cloud images, design of 3D gestures for interaction, and holographic film projection. The content expresses the visual and experiential process of memory recall as it takes place in human consciousness: uncertain memories are materialized and recalled, with their vagueness represented as diffuse point-cloud shapes in the image. As the spectator manipulates the images through interactive gestures, the embodied act of recall is completed. A loose sketch of the point-cloud image idea appears below.
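
As a loose illustration of the point-cloud image idea, the sketch below samples a grayscale frame on a grid and lifts brightness to depth, producing a pointillism-like 3D point set. The sampling step and depth scale are arbitrary choices; the paper's actual production pipeline is not described at this level of detail.

```python
import numpy as np

def image_to_point_cloud(gray, step=4, z_scale=50.0):
    """Turn a grayscale frame into a pointillism-like 3D point cloud,
    using pixel brightness as depth."""
    ys, xs = np.mgrid[0:gray.shape[0]:step, 0:gray.shape[1]:step]
    zs = gray[ys, xs] / 255.0 * z_scale        # brighter pixel -> nearer point
    return np.stack([xs.ravel(), ys.ravel(), zs.ravel()], axis=1)

frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # stand-in frame
cloud = image_to_point_cloud(frame)            # (N, 3) array of x, y, z points
```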