• Title/Summary/Keyword: vision camera

Search Results: 1,372

Stairs Walking of a Biped Robot (2족 보행 로봇의 계단 보행)

  • 성영휘;안희욱
    • Journal of the Institute of Convergence Signal Processing / v.5 no.1 / pp.46-52 / 2004
  • In this paper, we present a case study of developing a miniature humanoid robot with 16 degrees of freedom, a height of 42 cm, and a weight of 1.5 kg. For ease of implementation, integrated RC servo motors are adopted as actuators, and a digital camera is mounted on the robot's head, so it can transmit vision data to a remote host computer via a wireless modem. The robot can perform staircase walking as well as straight walking and turning in any direction. The user-interface program running on the host computer contains a robot graphic simulator and a motion editor, which are used to generate and verify the robot's walking motions. The experimental results show that the robot has various walking capabilities, including straight walking, turning, and stair walking.

Visual Servoing-Based Paired Structured Light Robot System for Estimation of 6-DOF Structural Displacement (구조물의 6자유도 변위 측정을 위한 비주얼 서보잉 기반 양립형 구조 광 로봇 시스템)

  • Jeon, Hae-Min;Bang, Yu-Seok;Kim, Han-Geun;Myung, Hyun
    • Journal of Institute of Control, Robotics and Systems / v.17 no.10 / pp.989-994 / 2011
  • This study demonstrates the feasibility of a visual servoing-based paired structured light (SL) robot for estimating structural displacement under various external loads. The paired SL robot proposed in our previous study was composed of two screens facing each other, each with one or two lasers and a camera. It was found that the paired SL robot could estimate translational and rotational displacement, each in 3-DOF, with high accuracy and low cost; however, its measurable range was fairly limited by the screen size. In this paper, therefore, a visual servoing-based 2-DOF manipulator that controls the pose of the lasers is introduced. By controlling the positions of the projected laser points so that they remain on the screen, the proposed robot can estimate the displacement regardless of the screen size. We performed various simulations and experimental tests to verify the performance of the newly proposed robot. The results show that the proposed system overcomes the range limitation of the former system and can be used to accurately estimate structural displacement.
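
A minimal proportional visual-servoing sketch of this idea (our illustration, not the paper's actual controller): detect the projected laser spot in the camera image and nudge the 2-DOF manipulator so the spot stays at the screen center. The gain and the pixel coordinates below are assumptions.

```python
import numpy as np

def servo_step(spot_px, center_px, angles, gain=0.002):
    """One proportional visual-servoing update.
    spot_px, center_px: pixel coordinates of the detected laser spot
    and the target (screen center); angles: (pan, tilt) of the 2-DOF
    laser manipulator in radians. The gain is illustrative."""
    error = np.asarray(center_px, float) - np.asarray(spot_px, float)
    return angles + gain * error  # drive the pixel error toward zero

# Usage: successive detections converge toward the 320x240 image center.
angles = np.array([0.0, 0.0])
for spot in [(400, 300), (352, 264), (333, 250)]:
    angles = servo_step(spot, (320, 240), angles)
```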

On the Development of an Inspection Algorithm for Micro Ball Grid Array Solder Balls (μBGA 패키지 납볼 결함 검사 알고리듬 개발에 관한 연구)

  • 박종욱;양진세;최태영
    • Journal of the Microelectronics and Packaging Society / v.8 no.3 / pp.1-9 / 2001
  • This paper proposes an inspection algorithm for micro ball grid array (μBGA) solder balls, motivated by the difficulty of finding defective balls by human visual inspection due to their small dimensions. Specifically, an automated vision-based inspection algorithm for μBGAs is developed, which can inspect solder balls not only for so-called two-dimensional errors, such as missing balls and position and size errors, but also for height errors. The inspection algorithm uses two-dimensional images of the μBGA obtained under special blue illumination and processes them with a rotation-invariant sub-algorithm. It can also detect height errors when a two-camera system is available. Simulation results show that the proposed algorithm detects ball defects more efficiently than conventional algorithms.
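
The two-dimensional checks listed above (missing balls, position errors, size errors) can be sketched as a comparison of detected ball centroids against the nominal grid. A minimal sketch assuming a blob detector has already produced (x, y, diameter) measurements; the tolerances are illustrative, not the paper's values.

```python
import numpy as np

def inspect_balls(detected, expected_grid, pos_tol=0.05,
                  size_range=(0.25, 0.35)):
    """Classify 2-D solder-ball defects against the nominal grid.
    detected: list of (x, y, diameter) tuples from a blob detector;
    expected_grid: iterable of nominal (x, y) ball centers.
    Units are millimetres; all tolerances are illustrative."""
    defects = []
    for ex, ey in expected_grid:
        if not detected:
            defects.append(("missing", (ex, ey)))
            continue
        # nearest detected ball to this nominal site
        bx, by, dia = min(detected,
                          key=lambda b: np.hypot(b[0] - ex, b[1] - ey))
        err = float(np.hypot(bx - ex, by - ey))
        if err > 3 * pos_tol:            # nothing close enough: missing
            defects.append(("missing", (ex, ey)))
        elif err > pos_tol:              # present but misplaced
            defects.append(("position", (ex, ey), err))
        elif not (size_range[0] <= dia <= size_range[1]):
            defects.append(("size", (ex, ey), dia))
    return defects
```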

Rule-based Detection of Vehicles in Traffic Scenes (교통영상에서의 규칙에 기반한 차량영역 검출기법)

  • Park, Young-Tae
    • Journal of the Institute of Electronics Engineers of Korea SP / v.37 no.3 / pp.31-40 / 2000
  • A robust scheme for locating and counting vehicles in urban traffic scenes, a core component of vision-based traffic monitoring systems, is presented. The method is based on evidential reasoning: vehicle evidence in the background-subtraction image is obtained by a new locally optimum thresholding, and the evidence is merged by three heuristic rules using geometric constraints. The locally optimum thresholding guarantees the separation of bright and dark vehicle evidence even when vehicles overlap or have a color similar to the background. Experimental results on diverse traffic scenes show that the detection performance is very robust to operating conditions such as camera location and weather. The method may be applied even when no vehicle movement is observed, since a static image is processed without the use of frame differencing.
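
The abstract does not spell out the locally optimum thresholding, so the sketch below shows one plausible reading: per-window Otsu thresholding of the absolute background-difference image, which adapts the threshold to local lighting. The window size and bin count are assumptions.

```python
import numpy as np

def otsu_threshold(vals: np.ndarray) -> float:
    """Threshold maximizing between-class variance (Otsu's method)."""
    hist, edges = np.histogram(vals, bins=64)
    mids = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_var = mids[0], -1.0
    for i in range(1, len(hist)):
        w0, w1 = hist[:i].sum(), hist[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (hist[:i] * mids[:i]).sum() / w0
        m1 = (hist[i:] * mids[i:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, mids[i]
    return best_t

def local_threshold(diff: np.ndarray, win: int = 64) -> np.ndarray:
    """Threshold |frame - background| window by window, so vehicle
    evidence separates even under uneven illumination."""
    out = np.zeros(diff.shape, dtype=bool)
    for y in range(0, diff.shape[0], win):
        for x in range(0, diff.shape[1], win):
            block = diff[y:y + win, x:x + win]
            out[y:y + win, x:x + win] = block > otsu_threshold(block.ravel())
    return out
```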

Synchronization System of Robot-centered Information for Context Understanding (상황 이해를 위한 로봇 중심 정보 동기화 시스템)

  • Lim, G.H.;Lee, S.;Suh, I.H.;Kim, H.S.;Son, J.H.
    • Proceedings of the IEEK Conference / 2006.06a / pp.933-934 / 2006
  • High-level perceptual tasks such as context understanding, SLAM, and object recognition are essential for an intelligent robot to provide services that support humans. Such robots typically use camera sensors for vision information, sonar or laser sensors for range information, encoders for the angular velocity of the wheels, and so on. This information is generated at different time intervals by different hardware devices and software algorithms, while generating high-level information requires a specific mixture of low-level information, represented in a form the robot can use within its ecological niche. In conventional robot systems, each perceptual module acquires a resource by tight coupling whenever it is needed, so resources and information cannot easily be shared, and delayed information may even become invalid. In this paper, we propose a synchronization system of robot-centered information for context understanding. Our system represents information according to the robot's capacities and synchronizes the asynchronously generated information using a blackboard architecture.
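
As a rough illustration of the blackboard idea (our sketch, not the authors' implementation), the board can timestamp every posting and serve readers the sample nearest a query time, so modules running at different rates obtain a time-consistent snapshot.

```python
import bisect
from collections import defaultdict

class Blackboard:
    """Timestamped store for asynchronously generated sensor data."""

    def __init__(self):
        self._data = defaultdict(list)  # channel -> sorted [(t, value)]

    def post(self, channel: str, t: float, value) -> None:
        bisect.insort(self._data[channel], (t, value))

    def nearest(self, channel: str, t: float):
        """Return the (timestamp, value) entry closest to time t."""
        entries = self._data[channel]
        i = bisect.bisect_left(entries, (t,))
        candidates = entries[max(0, i - 1):i + 1]
        return min(candidates, key=lambda e: abs(e[0] - t))

# Usage: vision at ~30 Hz, laser at ~10 Hz; a context-understanding
# module queries both channels at the camera timestamp.
bb = Blackboard()
bb.post("laser", 0.00, "scan0")
bb.post("laser", 0.10, "scan1")
bb.post("camera", 0.07, "frame2")
print(bb.nearest("laser", 0.07))  # -> (0.1, 'scan1')
```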

Real-Time Automatic Tracking of Facial Feature (얼굴 특징 실시간 자동 추적)

  • 박호식;배철수
    • Journal of the Korea Institute of Information and Communication Engineering / v.8 no.6 / pp.1182-1187 / 2004
  • Robust, real-time, fully automatic tracking of facial features is required for many computer vision and graphics applications. In this paper, we describe a fully automatic system that tracks eyes and eyebrows in real time. The pupils are tracked using the red-eye effect with an infrared-sensitive camera equipped with infrared LEDs. Templates are used to parameterize the facial features. For each new frame, the pupil coordinates are used to extract cropped images of the eyes and eyebrows. The template parameters are recovered by projecting these extracted images onto a PCA basis constructed during the training phase from example images. The system runs at 30 fps, requires no manual initialization or calibration, and is shown to work well on sequences with considerable head motion and occlusion.
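
The PCA recovery step admits a compact sketch: build an eigen-basis from flattened training crops, then read off a new crop's template parameters as its projection coefficients. A minimal sketch assuming fixed-size grayscale crops; the function names are ours.

```python
import numpy as np

def build_pca_basis(train: np.ndarray, k: int):
    """train: (n_images, n_pixels) flattened grayscale training crops.
    Returns the mean image and the top-k principal components."""
    mean = train.mean(axis=0)
    # Rows of vt are eigen-images, ordered by decreasing variance.
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    return mean, vt[:k]

def template_parameters(crop, mean, basis):
    """Template parameters of a cropped eye/eyebrow image: its
    projection coefficients onto the PCA basis."""
    return basis @ (crop.ravel() - mean)

def reconstruct(coeffs, mean, basis):
    """Rebuild the best-fitting template from its parameters."""
    return coeffs @ basis + mean
```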

Global Localization of Mobile Robots Using Omni-directional Images (전방위 영상을 이용한 이동 로봇의 전역 위치 인식)

  • Han, Woo-Sup;Min, Seung-Ki;Roh, Kyung-Shik;Yoon, Suk-June
    • Transactions of the Korean Society of Mechanical Engineers A / v.31 no.4 / pp.517-524 / 2007
  • This paper presents a global localization method using circular correlation of omni-directional images. The localization of a mobile robot, especially indoors, is a key component in the development of useful service robots. Though stereo vision is widely used for localization, its performance is limited by computational complexity and a narrow view angle. To compensate for these shortcomings, we utilize a single omni-directional camera that can capture instantaneous 360° panoramic images around the robot. Nodes near the robot are identified from the correlation coefficients of the CHL (Circular Horizontal Line) between the landmark image and the currently captured image. After finding candidate nearby nodes, the robot moves to the nearest node based on the correlation values and the positions of these nodes. To accelerate computation, the correlation values are calculated using Fast Fourier Transforms. Experimental results and performance in a real home environment have shown the feasibility of the method.
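
The CHL matching admits a compact sketch: treat each CHL as a 1-D intensity vector sampled around the panorama and compute its normalized circular cross-correlation with a landmark's CHL via the FFT, as the abstract indicates. The peak value scores node similarity and the peak index gives the relative heading. The one-sample-per-degree resolution below is an assumption.

```python
import numpy as np

def circular_correlation(chl_a: np.ndarray, chl_b: np.ndarray) -> np.ndarray:
    """Normalized circular cross-correlation of two 1-D CHL intensity
    vectors, computed in O(n log n) via the FFT."""
    a = (chl_a - chl_a.mean()) / (chl_a.std() + 1e-12)
    b = (chl_b - chl_b.mean()) / (chl_b.std() + 1e-12)
    # corr[k] = (1/n) * sum_i a[i] * b[(i + k) mod n]
    corr = np.fft.ifft(np.fft.fft(a).conj() * np.fft.fft(b)).real
    return corr / len(a)

# Usage: the same scene rotated by 42 degrees peaks at lag 42.
landmark = np.random.rand(360)        # one sample per degree
current = np.roll(landmark, 42)
corr = circular_correlation(landmark, current)
print(int(np.argmax(corr)), round(corr.max(), 3))  # 42, ~1.0
```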

Improved Tracking System and Realistic Drawing for Real-Time Water-Based Sign Pen (향상된 트래킹 시스템과 실시간 수성 사인펜을 위한 사실적 드로잉)

  • Hur, Hyejung;Lee, Ju-Young
    • Journal of the Korea Society of Computer and Information / v.19 no.2 / pp.125-132 / 2014
  • In this paper, we present a marker-less fingertip and brush tracking system that uses an inexpensive web camera. Parallel computation using CUDA is applied to the tracking system, which can therefore run in real time on inexpensive hardware such as a laptop or desktop. We also present a realistic water-based sign-pen drawing model and its implementation. The realistic drawing application, together with our inexpensive real-time fingertip and brush tracking system, suggests the art class of the future and can be utilized as a test-bed for future high-technology education environments.

Control of Mobile Robot Navigation Using Vision Sensor Data Fusion by Nonlinear Transformation (비선형 변환의 비젼센서 데이터융합을 이용한 이동로봇 주행제어)

  • Jin Tae-Seok;Lee Jang-Myung
    • Journal of Institute of Control, Robotics and Systems / v.11 no.4 / pp.304-313 / 2005
  • The robots that will be needed in the near future are human-friendly robots that can coexist with humans and support them effectively. To realize this, a robot needs to recognize its position and heading to perform intelligently in an unknown environment, and mobile robots may navigate by means of a number of sensing systems, such as sonar or vision. Note that in conventional fusion schemes the measurement depends only on the current data sets, so more sensors are required to measure a given physical parameter or to improve measurement accuracy. In this research, instead of adding more sensors to the system, the temporal sequence of data sets is stored and utilized for accurate measurement. As a general approach to sensor fusion, a UT-based sensor fusion (UTSF) scheme using the Unscented Transformation (UT) is proposed for either joint or disjoint data structures and applied to landmark identification for mobile robot navigation. The theoretical basis is illustrated by examples, and the effectiveness is demonstrated through simulations and experiments. The proposed UTSF scheme is applied to the navigation of a mobile robot in both structured and unstructured environments, and its performance is verified by computer simulation and experiment.
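
The Unscented Transformation at the core of the scheme can be sketched generically: deterministically choose 2n+1 sigma points of the input Gaussian, push them through the nonlinearity, and re-estimate the mean and covariance from the weighted outputs. This is a generic UT sketch with the basic kappa parameterization, not the authors' UTSF code; the measurement function and numbers are illustrative.

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=1.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear function f
    using the 2n+1 sigma points of the Unscented Transformation."""
    n = len(mean)
    root = np.linalg.cholesky((n + kappa) * cov)   # matrix square root
    sigma = [mean] + [mean + c for c in root.T] + [mean - c for c in root.T]
    w = np.full(2 * n + 1, 0.5 / (n + kappa))      # weights sum to 1
    w[0] = kappa / (n + kappa)
    y = np.array([f(s) for s in sigma])
    y_mean = w @ y
    y_cov = sum(wi * np.outer(yi - y_mean, yi - y_mean)
                for wi, yi in zip(w, y))
    return y_mean, y_cov

# Usage: range-bearing observation of a landmark at (2, 1) from an
# uncertain position (illustrative numbers).
h = lambda p: np.array([np.hypot(p[0], p[1]), np.arctan2(p[1], p[0])])
m, c = unscented_transform(np.array([2.0, 1.0]), 0.01 * np.eye(2), h)
```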

Posture Change Recognition System using Visual Information (영상정보에 의한 자세변화 감지 시스템)

  • Jo, Sung-Won;Han, Kyong-Ho
    • Journal of IKEEE / v.14 no.4 / pp.291-296 / 2010
  • This paper addresses the detection of pitching and rolling posture changes using visual image changes caused by road slope conditions. When a moving vehicle tilts in some direction, the objects in the vehicle's camera images shift up or down and left or right. This is similar to how humans balance, relying on detected changes in the visual image as well as on the vestibular organs and semicircular canals of the inner ear. Experiments show that the proposed method can use the camera image both as image information itself and for posture change detection.
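
One standard way to measure the up/down and left/right image shift described above (our illustration, not necessarily the authors' method) is phase correlation between consecutive grayscale frames: the peak of the normalized cross-power spectrum gives the translation.

```python
import numpy as np

def image_shift(prev: np.ndarray, curr: np.ndarray):
    """Estimate the (dy, dx) translation from prev to curr by phase
    correlation; the sign of dy/dx indicates the pitch/roll direction."""
    f1, f2 = np.fft.fft2(prev), np.fft.fft2(curr)
    cross = f2 * f1.conj()
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the frame size to negative values
    h, w = prev.shape
    return (dy - h if dy > h // 2 else dy,
            dx - w if dx > w // 2 else dx)

# Usage: a frame shifted 5 px downward reads as (5, 0).
frame = np.random.rand(240, 320)
print(image_shift(frame, np.roll(frame, 5, axis=0)))  # (5, 0)
```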