• Title/Summary/Keyword: orientation estimation


Pedestrian Gait Estimation and Localization using an Accelerometer (가속도 센서를 이용한 보행 정보 및 보행자 위치 추정)

  • Kim, Hui-Sung; Lee, Soo-Yong
    • The Journal of Korea Robotics Society, v.5 no.4, pp.279-285, 2010
  • This paper presents the use of a 3-axis accelerometer for obtaining gait information, including the number of steps, stride length, and walking distance. Travel distance is usually calculated by double integration of the accelerometer output with respect to time; however, accumulated errors due to drift are inevitable. A change in the accelerometer's orientation also causes error, because gravity is added to the measured acceleration. Unless all three axis orientations are completely identified, the accelerometer alone does not provide the correct acceleration for estimating travel distance. We propose a way of minimizing the error due to orientation change. Pedestrian localization is implemented using the heading angle and the travel distance, with the heading angle estimated from rate-gyro and magnetic-compass measurements. The performance of the localization is presented with experimental data.
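The drift problem this abstract describes can be made concrete with a minimal sketch (hypothetical Python, not the authors' implementation): gravity must be subtracted before integrating, and any residual gravity-estimate error grows quadratically in the distance estimate.

```python
import numpy as np

def travel_distance(acc, dt, g_est):
    """Double-integrate acceleration after subtracting an estimated
    gravity component; any residual bias still drifts quadratically."""
    lin = acc - g_est              # remove (imperfect) gravity estimate
    vel = np.cumsum(lin) * dt      # first integration: velocity
    pos = np.cumsum(vel) * dt      # second integration: position
    return pos[-1]

# 1 s of a constant 1 m/s^2 forward acceleration plus gravity
acc = np.full(100, 9.81 + 1.0)
d = travel_distance(acc, dt=0.01, g_est=9.81)   # close to 0.5 m
```

Mis-estimating gravity by even 0.1 m/s^2 shifts this one-second distance by about 0.05 m, which is the orientation-dependent error source the paper targets.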

Sound Source Tracking Control of a Mobile Robot Using a Microphone Array (마이크로폰 어레이를 이용한 이동 로봇의 음원 추적 제어)

  • Han, Jong-Ho; Han, Sun-Sin; Lee, Jang-Myung
    • Journal of Institute of Control, Robotics and Systems, v.18 no.4, pp.343-352, 2012
  • To make a mobile robot follow a sound source, the relative position and orientation of the source with respect to the robot are estimated using a microphone array. In this research, the differences among the arrival times of the sound at each of three microphones are used to calculate the distance and orientation of the sound source from the mobile robot carrying the array. The cross-correlation between two signals is applied to detect their time difference, which provides a more reliable and precise value than conventional methods. Fuzzy rules are applied to generate the tracking direction toward the sound source, and the results are used to control the mobile robot in real time. The efficiency of the proposed algorithm is demonstrated through real experiments in comparison with conventional approaches.
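The cross-correlation step used for the inter-microphone time difference can be sketched as follows (a generic illustration; the function and signal names are my own, not the paper's):

```python
import numpy as np

def time_delay(sig_a, sig_b):
    """Delay of sig_b relative to sig_a, in samples, taken from the
    peak of their cross-correlation (positive: sig_b lags sig_a)."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    return int(np.argmax(corr)) - (len(sig_a) - 1)

mic_a = np.sin(0.3 * np.arange(100))
mic_b = np.concatenate([np.zeros(5), mic_a])  # same sound, 5 samples later
lag = time_delay(mic_a, mic_b)                # 5
```

Dividing the lag by the sampling rate gives the time difference in seconds; combined with the known microphone spacing and the speed of sound, it yields the bearing to the source.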

Brain Source Localization using EEG Signals (EEG신호를 이용한 뇌 신호원 국부화에 관한 연구)

  • Jung, Jae-Chul; Song, Min; Lee, He-Young
    • Proceedings of the IEEK Conference, 2002.06e, pp.133-136, 2002
  • EEG (electroencephalography) is generated by the electrical activity of neurons in the cortex. The EEG waveform changes according to physical and mental states, so EEG is used in the diagnosis of conditions such as brain tumors and epilepsy, as well as in HCI (Human-Computer Interface) applications. This paper describes the estimation of the orientation and location of dipole sources. The forward model consists of a three-layer spherical head model and a current dipole model, from which the EEG is generated using an analytical solution. Using the MNLS (Minimum-Norm Least-Squares) method, the orientation and location of the dipole moment are estimated.

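The minimum-norm least-squares idea can be illustrated in a simplified linear form (a discretized lead-field matrix assumed known; this is not the paper's spherical-head analytic model):

```python
import numpy as np

def mnls(leadfield, eeg):
    """Minimum-norm least-squares source estimate: among all moment
    vectors q with leadfield @ q ~ eeg, the Moore-Penrose
    pseudoinverse returns the one of minimum Euclidean norm."""
    return np.linalg.pinv(leadfield) @ eeg

rng = np.random.default_rng(0)
L = rng.standard_normal((16, 6))   # 16 electrodes, 2 dipoles x 3 components
q_true = rng.standard_normal(6)
q_hat = mnls(L, L @ q_true)        # exact recovery when L has full column rank
```

With more sources than electrodes the system is underdetermined, and the pseudoinverse then picks the minimum-norm solution, which is precisely the MNLS regularization the abstract refers to.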

Mobile Robot Destination Generation by Tracking a Remote Controller Using a Vision-aided Inertial Navigation Algorithm

  • Dang, Quoc Khanh; Suh, Young-Soo
    • Journal of Electrical Engineering and Technology, v.8 no.3, pp.613-620, 2013
  • A new remote control algorithm for a mobile robot is proposed, in which the remote controller consists of a camera and inertial sensors. Initially, the relative position and orientation of the robot are estimated by capturing four circular landmarks on the robot's plate. When the remote controller moves to point to the destination, the camera's pointing trajectory is estimated using an inertial navigation algorithm. The destination is transmitted wirelessly to the robot, which is then controlled to move there. Because the destination is estimated with inertial sensors, quick movement of the remote controller is possible; also, unlike vision-only control, the robot may leave the camera's field of view.

Estimation of Real Boundary with Subpixel Accuracy in Digital Imagery (디지털 영상에서 부화소 정밀도의 실제 경계 추정)

  • Kim, Tae-Hyeon; Moon, Young-Shik; Han, Chang-Soo
    • Journal of the Korean Society for Precision Engineering, v.16 no.8, pp.16-22, 1999
  • In this paper, an efficient algorithm for estimating real edge locations with subpixel accuracy is described. Digital images are acquired by projection onto the image plane followed by sampling, and most real edge locations are lost in this process, which lowers measurement accuracy. For accurate measurement, we propose an algorithm that estimates the real boundary between two adjacent pixels in digital imagery with subpixel accuracy. We first define a 1D edge operator based on the moment invariant. To extend it to 2D data, the edge orientation at each pixel is estimated by LSE (Least-Squares Error) line/circle fitting of a set of pixels around the edge boundary. Then, using the pixels along the line perpendicular to the estimated edge orientation, the real boundary is calculated with subpixel accuracy. Experimental results using real images show that the proposed method is robust to local noise while maintaining low measurement error.

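The 1D subpixel step can be illustrated with a much simpler stand-in for the moment-based operator: locating the edge at the centroid of the gradient magnitude (an assumed simplification, not the paper's moment-invariant operator):

```python
import numpy as np

def subpixel_edge(profile):
    """Subpixel location of a step edge in a 1D intensity profile,
    taken as the centroid of the gradient magnitude (a simple
    stand-in for a moment-based edge operator)."""
    grad = np.abs(np.gradient(np.asarray(profile, dtype=float)))
    idx = np.arange(len(grad))
    return float(np.sum(idx * grad) / np.sum(grad))

# step edge blurred across the pixel at index 3
edge = subpixel_edge([0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 1.0])   # 3.0
```

The point is that the fractional edge position is recovered from how the step's intensity is shared between neighboring pixels, which the sampling process would otherwise discard.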

Effect of orientation, interval size, target location on interpolation estimates on CRT display. (CRT 표시장치에서 내삽 추정치에 대한 방향, 크기, 위치의 효과)

  • 노재호
    • Journal of the Ergonomics Society of Korea, v.9 no.1, pp.35-42, 1990
  • This study is concerned with the accuracy (absolute error) with which subjects can interpolate the location of a target between two graduation markers, across 4 orientations and 6 interval sizes on a CRT display. Stimuli were graphic images of a linear, end-marked, ungraduated scale containing a target. The location of the target was estimated in units over the range 1-99. The error of the estimates was smallest near the ends and the middle of the baseline. The median error was less than 2 units, the modal error was 1, and nearly all errors (99.7%) were within 10 units. There exists an interval size, about 400 pixels, that minimizes interpolation error. Interpolation estimation is shown to be affected by size, location, and their interactions (orientation × location, size × location). Accuracy of interpolation performance is discussed in relation to the absolute error associated with visual performance.


Performance Enhancement of Soccer Robot System by Changing Color Patch (칼라 패치 변경을 이용한 축구 로봇 시스템의 성능 개선)

  • Ko, Chang-Gun; Jang, Mun-Hee; Lee, Suk-Gyu
    • IEMEK Journal of Embedded Systems and Applications, v.4 no.3, pp.118-125, 2009
  • This paper proposes a novel method to enhance the performance of a soccer robot system using an optimal color patch mounted on the robot. In a soccer robot system, the position and orientation of the robot can be estimated in real time from the color patch; however, this estimation is very sensitive to the pattern of the patch. In addition, the pattern recognition and navigation algorithms are run independently to reduce computation time. The experimental results show that the proposed patch pattern effectively reduces the position and orientation error of the robot.

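How a color patch yields pose can be sketched generically (a hypothetical two-blob patch; the paper's actual patch layout differs): with two distinguishable blob centroids in the overhead camera image, position is their midpoint and heading follows from their offset.

```python
import numpy as np

def patch_pose(front, rear):
    """Robot pose from two color-blob centroids in the overhead
    image: position = midpoint, heading = front-to-rear direction."""
    front = np.asarray(front, dtype=float)
    rear = np.asarray(rear, dtype=float)
    dx, dy = front - rear
    return (front + rear) / 2.0, np.arctan2(dy, dx)

center, heading = patch_pose((4.0, 2.0), (2.0, 2.0))  # center (3, 2), heading 0
```

The sensitivity the abstract mentions is visible here: small segmentation noise in either centroid perturbs the heading by roughly the noise divided by the blob separation, so the patch geometry directly bounds the orientation accuracy.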

Analyzing Construction Workers' Recognition of Hazards by Estimating Visual Focus of Attention

  • Fang, Yihai; Cho, Yong K.
    • International conference on construction engineering and project management, 2015.10a, pp.248-251, 2015
  • High injury and fatality rates remain a serious problem in the construction industry. Many construction injuries and fatalities could be prevented if workers recognized potential hazards and took action in time. Much effort has been devoted to improving workers' hazard-recognition ability through various safety training and education methods; however, a reliable approach for evaluating this ability is missing. Previous studies in human behavior and psychology indicate that the visual focus of attention (VFOA) is a good indicator of a worker's actual focus. In this direction, this study introduces an automated approach for estimating the VFOA of equipment operators using a head-orientation-based VFOA estimation method. The proposed method is validated in a virtual reality scenario using an immersive head-mounted display. Results show that the method can effectively estimate the VFOA of test subjects in different test scenarios. The findings broaden the knowledge of detecting the visual focus and distraction of construction workers, and point to future work on improving workers' ability to recognize hazards.

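At its simplest, a head-orientation-based VFOA test reduces to checking whether the direction to a target falls inside an attention cone around the head-orientation vector (an illustrative sketch with an assumed cone half-angle, not the authors' model):

```python
import numpy as np

def attends(head_dir, target_dir, half_angle_deg=30.0):
    """True if target_dir lies within an assumed attention cone
    around the head-orientation vector head_dir."""
    a = np.asarray(head_dir, dtype=float)
    b = np.asarray(target_dir, dtype=float)
    a /= np.linalg.norm(a)
    b /= np.linalg.norm(b)
    ang = np.degrees(np.arccos(np.clip(a @ b, -1.0, 1.0)))
    return bool(ang <= half_angle_deg)

attends([1, 0, 0], [1, 0.2, 0])   # True: about 11 degrees off axis
attends([1, 0, 0], [0, 1, 0])     # False: 90 degrees off axis
```

Logging this test against the known positions of hazards over time gives the kind of attention/distraction record the study uses to evaluate hazard recognition.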

A Parallel Implementation of Multiple Non-overlapping Cameras for Robot Pose Estimation

  • Ragab, Mohammad Ehab; Elkabbany, Ghada Farouk
    • KSII Transactions on Internet and Information Systems (TIIS), v.8 no.11, pp.4103-4117, 2014
  • Image processing and computer vision algorithms are attracting growing interest in a variety of application areas such as robotics and man-machine interaction. Vision allows the development of approaches that are more flexible, more intelligent, and less intrusive than most other sensor systems. In this work, we determine the location and orientation of a mobile robot, which is crucial for performing its tasks. To operate in real time, the various vision routines need to be sped up. We therefore present and evaluate a method for introducing parallelism into the multiple non-overlapping camera pose estimation algorithm proposed in [1], in which the problem is solved in real time using multiple non-overlapping cameras and the Extended Kalman Filter (EKF). Four cameras arranged in two back-to-back pairs are mounted on the platform of a moving robot. An important benefit of using multiple cameras for robot pose estimation is the ability to resolve vision uncertainties such as the bas-relief ambiguity. The proposed method is based on algorithmic skeletons for low, medium, and high levels of parallelization. The analysis shows that the use of a multiprocessor system enhances system performance by about 87%. In addition, the proposed design is scalable, which is necessary in this application, where the number of features changes repeatedly.
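The EKF at the core of the pose estimator follows the standard predict/update cycle; a generic single step can be sketched as follows (textbook form, not the implementation from [1]):

```python
import numpy as np

def ekf_step(x, P, z, f, h, F, H, Q, R):
    """One Extended Kalman Filter cycle: f and h are the (nonlinear)
    motion and measurement models, F and H their Jacobians evaluated
    at the current estimate, Q and R the process/measurement noise."""
    # predict
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # update with measurement z
    y = z - h(x_pred)                      # innovation
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

In the multi-camera setting, each camera contributes rows to h and H, and the per-feature measurement processing is what the paper's algorithmic skeletons parallelize.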

2D-3D Pose Estimation using Multi-view Object Co-segmentation (다시점 객체 공분할을 이용한 2D-3D 물체 자세 추정)

  • Kim, Seong-heum; Bok, Yunsu; Kweon, In So
    • The Journal of Korea Robotics Society, v.12 no.1, pp.33-41, 2017
  • We present a region-based approach for accurate pose estimation of small mechanical components. Our algorithm consists of two key phases: multi-view object co-segmentation and pose estimation. In the first phase, we describe an automatic method to extract binary masks of a target object captured from multiple viewpoints. For initialization, we assume the target object is bounded by a convex volume of interest defined by a few user inputs. The co-segmented target object shares the same geometric representation in space and has color models distinct from those of the backgrounds. In the second phase, we retrieve a 3D model instance with the correct upright orientation and estimate the relative pose of the object observed in the images. Our energy function, combining region and boundary terms for the proposed measures, maximizes the overlap of regions and boundaries between the multi-view co-segmentations and the projected masks of the reference model. Based on high-quality co-segmentations consistent across all viewpoints, our final results are accurate model indices and pose parameters for the extracted object. We demonstrate the effectiveness of the proposed method on various examples.