• Title/Summary/Keyword: User tracking


A Study on Real-time Tracking Method of Horizontal Face Position for Optimal 3D T-DMB Content Service (지상파 DMB 단말에서의 3D 컨텐츠 최적 서비스를 위한 경계 정보 기반 실시간 얼굴 수평 위치 추적 방법에 관한 연구)

  • Kang, Seong-Goo;Lee, Sang-Seop;Yi, June-Ho;Kim, Jung-Kyu
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.6 / pp.88-95 / 2011
  • An embedded mobile device generally has lower computational power than a general-purpose computer because of its limited system specifications. Consequently, conventional face tracking and face detection methods, which require complex algorithms to achieve high recognition rates, are unsuitable for a mobile environment that targets real-time detection. On the other hand, applying a real-time tracking and detection algorithm would enable a two-way interactive multimedia service between a user and a mobile device, providing far better quality of service than a one-way service. It is therefore necessary to develop a real-time face and eye tracking technique optimized for the mobile environment. For this reason, this paper proposes a method for tracking the horizontal face position of a user on a T-DMB device to enhance the quality of 3D DMB content. The proposed method uses edge orientation to estimate the left and right boundaries of the face, and the horizontal position and size of the face are then determined from the color edge information. The Sobel gradient vector is projected vertically to select candidate face boundaries, and a smoothing method and a peak-detection method are proposed for the precise decision. Because general face detection algorithms use multi-scale feature vectors, their detection time is too long for a mobile environment; the proposed algorithm, which uses a single-scale detection method, can detect the face faster than conventional face detection methods.
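The abstract outlines a single-scale pipeline: horizontal Sobel edges, vertical projection onto the x-axis, smoothing, then peak detection for the left/right face boundaries. The following is a minimal sketch of that idea; the function name, window size, and threshold are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: estimate horizontal face boundaries by vertically projecting
# Sobel edge strength, smoothing the profile, and picking peaks.
import numpy as np
import cv2

def horizontal_face_boundaries(gray, smooth_win=15):
    # Horizontal gradient: vertical face boundaries produce strong |dI/dx|.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    profile = np.abs(gx).sum(axis=0)               # project edge strength onto the x-axis
    kernel = np.ones(smooth_win) / smooth_win
    smooth = np.convolve(profile, kernel, mode="same")   # simple moving-average smoothing
    # Peak detection: local maxima above a fraction of the global maximum.
    thresh = 0.5 * smooth.max()
    peaks = [x for x in range(1, len(smooth) - 1)
             if smooth[x] > thresh
             and smooth[x] >= smooth[x - 1]
             and smooth[x] >= smooth[x + 1]]
    if len(peaks) < 2:
        return None
    # Leftmost and rightmost strong peaks serve as the face's left/right boundary.
    return peaks[0], peaks[-1]

# Usage: bounds = horizontal_face_boundaries(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
```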

Immersive user interfaces for visual telepresence in human-robot interaction (사람과 로봇간 원격작동을 위한 몰입형 사용자 인터페이스)

  • Jang, Su-Hyeong
    • Proceedings of the HCI Society of Korea Conference / 2009.02a / pp.406-410 / 2009
  • As studies on more realistic human-robot interfaces are actively carried out, interest in telepresence, which remotely controls a robot and obtains environmental information through a video display, is increasing. To provide natural telepresence services by moving a remote robot, the user's behavior must be recognized. The user-movement recognition used in previous telepresence systems was difficult and costly to implement, which limited its application to human-robot interaction. In this paper, using the Nintendo Wii controller, which has recently attracted much attention, together with infrared LEDs, we propose an immersive user interface that easily recognizes the user's position and gaze direction and provides remote video information through an HMD.
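The Wii remote's IR camera reports blob coordinates for infrared LEDs, and two LEDs a known distance apart allow a rough estimate of the user's distance and horizontal viewing direction. The sketch below illustrates that widely used two-LED geometry; the resolution, field-of-view, and LED-spacing constants are assumptions, and the paper's actual processing may differ.

```python
# Hedged sketch of two-IR-LED position/direction estimation with a Wii remote.
import math

CAM_W, CAM_H = 1024, 768          # Wii remote IR camera resolution
FOV_X = math.radians(45.0)        # approximate horizontal field of view (assumed)
LED_SPACING_M = 0.20              # assumed physical distance between the two IR LEDs

def head_position(p1, p2):
    """p1, p2: (x, y) IR blob coordinates reported by the Wii remote."""
    sep_px = math.hypot(p1[0] - p2[0], p1[1] - p2[1])
    if sep_px == 0:
        return None
    # Angle subtended by the LED pair gives the distance to the LED bar.
    angle = sep_px / CAM_W * FOV_X
    distance = (LED_SPACING_M / 2.0) / math.tan(angle / 2.0)
    # Midpoint offset from the image centre gives the horizontal direction (yaw).
    mid_x = (p1[0] + p2[0]) / 2.0
    yaw = (mid_x / CAM_W - 0.5) * FOV_X
    return distance, yaw

# Usage: distance, yaw = head_position((512, 380), (620, 384))
```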


User Interfaces for Visual Telepresence in Human-Robot Interaction Using Wii Controller (WII 컨트롤러를 이용한 사람과 로봇간 원격작동 사용자 인터페이스)

  • Jang, Su-Hyung;Yoon, Jong-Won;Cho, Sung-Bae
    • Journal of the HCI Society of Korea / v.3 no.1 / pp.27-32 / 2008
  • As studies on more realistic human-robot interfaces are actively carried out, interest in telepresence, which remotely controls a robot and obtains environmental information through a video display, is increasing. To provide natural telepresence services by moving a remote robot, the user's behavior must be recognized. The user-movement recognition used in previous telepresence systems was difficult and costly to implement, which limited its application to human-robot interaction. In this paper, using the Nintendo Wii controller, which has recently attracted much attention, together with infrared LEDs, we propose an immersive user interface that easily recognizes the user's position and gaze direction and provides remote video information through an HMD.


A Consecutive Motion and Situation Recognition Mechanism to Detect a Vulnerable Condition Based on Android Smartphone

  • Choi, Hoan-Suk;Lee, Gyu Myoung;Rhee, Woo-Seop
    • International Journal of Contents / v.16 no.3 / pp.1-17 / 2020
  • Human motion recognition is essential for user-centric services such as surveillance-based security, elderly condition monitoring, exercise tracking, and daily calorie expenditure analysis. It is typically based on analysis of movement data such as the acceleration and angular velocity of a target user. Existing motion recognition studies only measure basic information (e.g., the user's stride, number of steps, speed) or recognize a single motion (e.g., sitting, running, walking). Thus, a new mechanism is required to identify transitions between single motions so that a user's consecutive motion can be assessed more accurately, and to recognize the body and surrounding situations arising from that motion. In this paper, we collect human movement data through Android smartphones in real time for five target single motions and propose a mechanism that recognizes a consecutive motion, including transitions among the motions, and the resulting situation, using a state transition model to check whether a vulnerable (life-threatening) condition, especially for the elderly, has occurred. Through implementation and experiments, we demonstrate that the proposed mechanism recognizes a consecutive motion and the user's situation accurately and quickly. In a recognition experiment on a mixed sequence resembling daily motions, the proposed adaptive weighting method showed improvements of 4% (holding time = 15 sec), 88% (30 sec), and 6.5% (60 sec) compared to the static method.
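To illustrate the state-transition-with-holding-time idea described above, here is a minimal sketch in which a sudden transition into a lying state that persists beyond a holding time raises a vulnerable-condition flag. The motion labels, the rule, and the 30-second default are assumptions for illustration only, not the paper's exact model.

```python
# Hedged sketch: flag a vulnerable condition when a user transitions into "lying"
# and stays there longer than a configured holding time.
import time

class VulnerableConditionMonitor:
    def __init__(self, holding_time_s=30.0):
        self.holding_time_s = holding_time_s
        self.prev_motion = None
        self.lying_since = None

    def update(self, motion, now=None):
        """motion: a recognized single motion, e.g. 'walking', 'sitting', 'lying'."""
        now = time.time() if now is None else now
        # A transition into 'lying' from any other motion starts the timer.
        if motion == "lying" and self.prev_motion not in (None, "lying"):
            self.lying_since = now
        elif motion != "lying":
            self.lying_since = None
        self.prev_motion = motion
        # Vulnerable condition: the user keeps lying beyond the holding time.
        return self.lying_since is not None and (now - self.lying_since) >= self.holding_time_s

# Usage:
#   monitor = VulnerableConditionMonitor(holding_time_s=30)
#   alarm = monitor.update("lying")   # call once per recognized motion sample
```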

A Fast Vision-based Head Tracking Method for Interactive Stereoscopic Viewing

  • Putpuek, Narongsak;Chotikakamthorn, Nopporn
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2004.08a / pp.1102-1105 / 2004
  • In this paper, the problem of tracking a viewer's head in a desktop-based interactive stereoscopic display system is considered. A fast, low-cost approach is important for such a computing environment. The system under consideration uses shutter glasses for stereoscopic display. The proposed method uses an image taken from a single low-cost video camera. Using a simple feature extraction algorithm, the points obtained from the image of the user-worn shutter glasses are used to estimate the glasses' center, their local 'yaw' angle measured with respect to the glasses' center, and their global 'yaw' angle measured with respect to the camera location. The stereoscopic image synthesis program uses these estimates to interactively adjust the two-view stereoscopic image pair displayed on the computer screen. The adjustment is carried out so that the resulting stereoscopic picture, when viewed from the current user position, provides close-to-real perspective and depth perception. However, because the algorithm and device are designed for fast computation, the estimation is typically not precise enough to provide flicker-free interactive viewing. An error concealment method is therefore proposed to alleviate the problem. This concealment method should be sufficient for applications that do not require a high degree of visual realism and interaction.
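A minimal sketch of the pose estimate described above: from the two image points at the ends of the shutter glasses, compute the center, a local yaw from the foreshortened glasses width, and a global yaw from the center's offset from the optical axis. The image width, field of view, and frontal-width calibration are illustrative assumptions.

```python
# Hedged sketch: glasses centre and yaw angles from two feature points in one camera image.
import math

IMG_W = 640                      # image width in pixels (assumed)
FOV_X = math.radians(60.0)       # assumed horizontal field of view of the camera
FRONTAL_WIDTH_PX = 180.0         # calibrated apparent glasses width when facing the camera

def glasses_pose(left_pt, right_pt):
    """left_pt, right_pt: (x, y) image coordinates of the glasses' end points."""
    cx = (left_pt[0] + right_pt[0]) / 2.0
    cy = (left_pt[1] + right_pt[1]) / 2.0
    width = math.hypot(right_pt[0] - left_pt[0], right_pt[1] - left_pt[1])
    # Local yaw: foreshortening of the glasses' width relative to the frontal view.
    ratio = min(width / FRONTAL_WIDTH_PX, 1.0)
    local_yaw = math.acos(ratio)
    # Global yaw: horizontal offset of the centre from the optical axis.
    global_yaw = (cx / IMG_W - 0.5) * FOV_X
    return (cx, cy), local_yaw, global_yaw

# Usage: centre, local_yaw, global_yaw = glasses_pose((200, 240), (370, 244))
```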


Development of a Simulator for a Mobile Robot Based on iPhone (아이폰 기반의 이동로봇 시뮬레이터 개발)

  • Kim, Dong Hun
    • Journal of the Korean Institute of Intelligent Systems / v.23 no.1 / pp.29-34 / 2013
  • This study presents the remote control of a mobile robot using an iPhone over ad hoc communication. Two interfaces are proposed for controlling the mobile robot with an iPhone: remote control by a user and autonomous control. To evaluate the effectiveness of trajectory-following algorithms, a simulator is developed in which a virtual robot follows a reference trajectory on a monitor through the iPhone interface. In the proposed simulator, algorithms are tested to see how well they perform trajectory following for a mobile robot. Comparative results for remote user control and autonomous control are shown. Experimental results show that the proposed simulator can be used effectively to test autonomous tracking algorithms.
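The paper does not spell out the tested algorithms, so purely as an illustration of the kind of trajectory-following step such a simulator would exercise, here is a sketch of a virtual differential-drive robot steering proportionally toward the next waypoint. The gains, speed, and kinematic model are assumptions.

```python
# Hedged sketch: one update step of simple waypoint following for a virtual robot.
import math

def follow_step(pose, waypoint, v=0.2, k_heading=1.5, dt=0.05):
    """pose: (x, y, theta); waypoint: (x, y). Returns the updated pose."""
    x, y, theta = pose
    wx, wy = waypoint
    desired = math.atan2(wy - y, wx - x)
    err = math.atan2(math.sin(desired - theta), math.cos(desired - theta))  # wrap to [-pi, pi]
    omega = k_heading * err                      # proportional heading control
    x += v * math.cos(theta) * dt                # unicycle kinematics
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return (x, y, theta)

# Usage: advance the virtual robot along a reference trajectory, switching to the
# next waypoint once the robot is within some tolerance of the current one.
```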

Hidden Indicator Based PIN-Entry Method Using Audio Signals

  • Seo, Hwajeong;Kim, Howon
    • Journal of Information and Communication Convergence Engineering / v.15 no.2 / pp.91-96 / 2017
  • PIN-entry interfaces carry a high risk of leaking secret values if malicious attackers perform shoulder-surfing attacks with advanced monitoring and observation devices. To make PIN entry secure, many studies have considered invisible radio channels as a secure medium for delivering private information. However, these methods remain vulnerable if a malicious adversary can obtain a hint of the secret values from the user's naïve gestures. In this paper, we revisit the state-of-the-art radio-channel-based bimodal PIN-entry method and analyze its information leakage by exploiting sight-tracking attacks. The proposed sight-tracking attack significantly reduces the original password complexity by 93.8% after post-processing. To keep the security level strong, we introduce an advanced bimodal PIN-entry technique. The new technique delivers the secret indicator information through a secure radio channel, and the smartphone screen displays only the multiple indicator options without the corresponding numbers. The user then selects the target value by following the circular layout. The method completely hides the password and is secure against advanced shoulder-surfing attacks.
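One plausible reading of the hidden-indicator step above is sketched below: the device privately tells the user over audio which indicator anchors which digit, the screen shows only the indicators on a circular digit layout, and the device decodes the tapped slot back to the digit. This is an illustrative interpretation, not the paper's exact protocol; all names and the layout are assumptions.

```python
# Hedged sketch: hidden-indicator PIN digit entry on a circular layout.
import random
import secrets

DIGITS = 10                               # digits 0..9 laid out clockwise on a circle
INDICATORS = ["A", "B", "C", "D"]         # indicator symbols shown on screen (no numbers)

def new_round():
    """Place the indicators at distinct random slots; speak the secret anchor over audio."""
    slots = random.sample(range(DIGITS), len(INDICATORS))      # visible on screen
    positions = dict(zip(INDICATORS, slots))
    spoken = (secrets.choice(INDICATORS), secrets.randbelow(DIGITS))  # audio only
    return positions, spoken

def decode(positions, spoken, tapped_slot):
    """The user taps the slot (digit - spoken_digit) steps clockwise from the spoken
    indicator; the device reverses that offset to recover the entered digit."""
    indicator, base_digit = spoken
    steps = (tapped_slot - positions[indicator]) % DIGITS
    return (base_digit + steps) % DIGITS
```

An observer sees only the indicator layout and the tapped slot; without the audio channel, every digit remains equally likely.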

Performance Comparison of Manual and Touch Interface using Video-based Behavior Analysis

  • Lee, Chai-Woo;Bahn, Sang-Woo;Kim, Ga-Won;Yun, Myung-Hwan
    • Journal of the Ergonomics Society of Korea / v.29 no.4 / pp.655-659 / 2010
  • The objective of this study is to quantitatively incorporate user observation into the usability evaluation of mobile interfaces using monitoring techniques from first- and third-person points of view. An experiment was conducted to monitor and record users' behavior with the Ergoneers Dikablis gaze-tracking device. The experiment compared two mobile phones, one with a button keypad interface and one with a touchscreen interface. The subjects were 20 people with similar experience and proficiency in using mobile devices. Video recordings were coded with Noldus Observer XT to find usage patterns and to gather quantitative data for analysis in terms of effectiveness, efficiency, and satisfaction. Results showed that the button keypad interface was generally better than the touchscreen interface: finger and gaze movements were much simpler when performing the given tasks on the button keypad. While previous studies have mostly evaluated usability with performance measures based only on task results, this study contributes a method in which the behavioral patterns of interaction are evaluated.

Implementation of Gesture Interface for Projected Surfaces

  • Park, Yong-Suk;Park, Se-Ho;Kim, Tae-Gon;Chung, Jong-Moon
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.1 / pp.378-390 / 2015
  • Image projectors can turn any surface into a display, and integrating surface projection with a user interface transforms it into an interactive display with many possible applications. Hand gesture interfaces are often used with projector-camera systems. Hand detection through color image processing is affected by the surrounding environment: a lack of illumination and color detail strongly influences the detection process and lowers the recognition success rate. In addition, the projection itself can interfere with detection. To overcome these problems, a gesture interface based on depth images is proposed for projected surfaces. In this paper, a depth camera is used for hand recognition and for effectively extracting the hand region from the scene. A hand detection and finger tracking method based on depth images is proposed, and based on it a touch interface for the projected surface is implemented and evaluated.
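As a minimal sketch of depth-based hand segmentation over a projected surface: pixels sufficiently closer to the camera than the calibrated surface depth are treated as the hand, and a simple fingertip estimate is the contour point farthest from the hand's centroid. The thresholds and the fingertip heuristic are assumptions, not the paper's method.

```python
# Hedged sketch: segment the hand from a depth image and pick a fingertip candidate.
import numpy as np
import cv2

def find_fingertip(depth_mm, surface_depth_mm, touch_band=(10, 150)):
    """depth_mm: depth image in millimetres; surface_depth_mm: calibrated surface depth."""
    near, far = touch_band
    height = surface_depth_mm - depth_mm.astype(np.int32)      # height above the surface
    mask = ((height > near) & (height < far)).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)                  # largest blob assumed to be the hand
    m = cv2.moments(hand)
    if m["m00"] == 0:
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    pts = hand.reshape(-1, 2)
    d = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)
    return tuple(pts[np.argmax(d)])                            # farthest contour point from centroid

# Usage: fingertip = find_fingertip(depth_frame, surface_depth_mm=1200)
```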

Augmented-Reality Survey: from Concept to Application

  • Kim, Soo Kyun;Kang, Shin-Jin;Choi, Yoo-Joo;Choi, Min-Hyung;Hong, Min
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.2 / pp.982-1004 / 2017
  • Recent advances in the field of augmented reality (AR) have shown that the technology is a fundamental part of modern immersive interactive systems for achieving user engagement and a dynamic user experience. This survey paper describes a variety of new AR explorations and discusses the issues relevant to the contemporary development of the fundamental technologies and applications. Most of the literature on the pertinent topics (taxonomy, the core tracking and sensing technologies, the hardware and software platforms, and the domain-specific applications) is then chronologically surveyed in varying detail and supplemented with the cited papers. This paper portrays the diversity of research in the AR field together with an overview of the benefits and limitations of the competing and complementary technologies.