• Title/Summary/Keyword: camera vision

Search results: 1,386

A Study on the Application of Spatial-Knowledge-Tags using Human Motion in Intelligent Space

  • Jin, Tae-Seok; Morioka, Kazuyuki; Niitsuma, Mihoko; Sasaki, Takeshi; Hashimoto, Hideki
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2005.06a / pp.31-36 / 2005
  • Intelligent Space (iSpace) is a space in which many intelligent devices, such as computers and sensors, are distributed. Through the cooperation of these devices, the environment itself acquires intelligence. In iSpace, the locations of multiple humans and other objects are obtained and tracked using multiple cameras and a color-based method. In addition, we describe a context-aware information system based on Spatial-Knowledge-Tags (SKT). The SKT system enables humans to access stored information and data based on their spatial location. The proposed tracking method is applied to the intelligent environment, and its performance is verified by experiments.
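
The tag-lookup idea behind SKT can be illustrated with a short, purely hypothetical Python sketch; the class and function names, the 2-D positions, and the 0.5 m radius are assumptions, not details from the paper. A tracked human position is matched against the stored tag locations and the nearest tag's data is returned.

```python
import numpy as np

# Hypothetical illustration of the Spatial-Knowledge-Tag (SKT) idea: each tag
# stores data at a 2-D location, and a tracked human retrieves the tag whose
# position is closest to (and within reach of) his or her own.
class SpatialKnowledgeTag:
    def __init__(self, position, payload):
        self.position = np.asarray(position, dtype=float)  # (x, y) in the room frame
        self.payload = payload                              # stored information

def query_skt(tags, human_position, radius=0.5):
    """Return the payload of the nearest tag within `radius` meters, else None."""
    human_position = np.asarray(human_position, dtype=float)
    best, best_dist = None, radius
    for tag in tags:
        dist = np.linalg.norm(tag.position - human_position)
        if dist <= best_dist:
            best, best_dist = tag, dist
    return best.payload if best else None

tags = [SpatialKnowledgeTag((1.0, 2.0), "printer instructions"),
        SpatialKnowledgeTag((4.5, 0.5), "meeting-room schedule")]
print(query_skt(tags, human_position=(1.2, 1.9)))  # -> "printer instructions"
```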

The Laser Range Finder for the Mobile Robot Navigation using a Lock-in Amplifier

  • Yoon, Hee-Sun; Shin, Myung-Kwan; Park, Kyi-Hwan
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2005.06a / pp.1423-1426 / 2005
  • Map building is essential for mobile robot navigation. It requires a specific vision system such as a CCD camera, a range-finding system, or similar hardware. A laser range finder is attractive because highly collimated beams are obtained easily, giving good lateral resolution. A laser diode is used as a continuous laser source. An automatic current control circuit and a bias-T mix the AC signal with a DC bias, and this combined signal drives the laser diode. The main idea of the distance calculation is to detect the phase shift between the reference signal and the signal detected by the photo detector. For the signal processing, a lock-in amplifier system is addressed in this paper. We use the diffusely reflected beam to detect the phase shift, but this beam is a minute signal that is easily buried in noise; the lock-in amplifier is therefore used to measure the amplitude and phase of signals buried in noise.
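
The distance-from-phase idea can be illustrated numerically. The sketch below is not the authors' hardware implementation; the modulation frequency, sampling rate, amplitudes, and noise level are invented, and only the generic relation $d = c\,\Delta\phi / (4\pi f)$ together with sine/cosine demodulation (the core of a lock-in amplifier) is shown.

```python
import numpy as np

# Minimal numerical sketch of lock-in phase detection (parameters are assumptions).
c = 3e8                        # speed of light [m/s]
f_mod = 10e6                   # assumed modulation frequency [Hz]
fs = 200e6                     # assumed sampling rate [Hz]
t = np.arange(0.0, 1e-3, 1.0 / fs)

true_distance = 3.0            # [m]
true_phase = 4 * np.pi * f_mod * true_distance / c      # round-trip phase shift

# Weak diffusely reflected return buried in noise.
detected = 0.01 * np.sin(2 * np.pi * f_mod * t - true_phase) \
           + 0.05 * np.random.randn(t.size)

# Lock-in demodulation: multiply by in-phase/quadrature references and average.
i_comp = np.mean(detected * np.sin(2 * np.pi * f_mod * t))
q_comp = np.mean(detected * np.cos(2 * np.pi * f_mod * t))
phase = -np.arctan2(q_comp, i_comp)                      # recovered phase shift
distance = c * phase / (4 * np.pi * f_mod)               # d = c * dphi / (4*pi*f)
print(f"estimated distance: {distance:.2f} m")           # ~3.00 m
```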

Human-Robot Interaction in Real Environments by Audio-Visual Integration

  • Kim, Hyun-Don; Choi, Jong-Suk; Kim, Mun-Sang
    • International Journal of Control, Automation, and Systems / v.5 no.1 / pp.61-69 / 2007
  • In this paper, we developed a reliable sound localization system, including a VAD (Voice Activity Detection) component using three microphones, as well as a face tracking system using a vision camera. Moreover, we proposed a way to integrate the three systems for human-robot interaction, compensating for errors in speaker localization and effectively rejecting unnecessary speech or noise signals arriving from undesired directions. To verify the system's performance, we installed the proposed audio-visual system in a prototype robot, called IROBAA (Intelligent ROBot for Active Audition), and demonstrated how to integrate the audio-visual system.
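
One plausible fusion rule in this spirit is sketched below; the function name, the bearing representation, and the 15-degree tolerance are assumptions rather than the paper's values. A VAD-gated sound bearing is accepted only when it agrees with the bearing of a tracked face, and the visual bearing is then used as the corrected speaker direction.

```python
# Hedged sketch of an audio-visual fusion rule (names and tolerance are assumed).
def fuse_audio_visual(sound_bearing_deg, face_bearings_deg,
                      voice_active, tolerance_deg=15.0):
    """Return a corrected speaker bearing, or None if the sound event is rejected."""
    if not voice_active:                          # VAD reports no speech activity
        return None
    for face in face_bearings_deg:
        # wrap-around angular difference between sound and face bearings
        if abs((sound_bearing_deg - face + 180.0) % 360.0 - 180.0) <= tolerance_deg:
            return face                           # trust the finer visual bearing
    return None                                   # speech-like sound but no face: reject

print(fuse_audio_visual(42.0, [40.0, -75.0], voice_active=True))   # -> 40.0
print(fuse_audio_visual(120.0, [40.0, -75.0], voice_active=True))  # -> None
```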

Visibility Sensor with Stereo Infrared Light Sources for Mobile Robot Motion Estimation (주행 로봇 움직임 추정용 스테레오 적외선 조명 기반 Visibility 센서)

  • Lee, Min-Young; Lee, Soo-Yong
    • Journal of Institute of Control, Robotics and Systems / v.17 no.2 / pp.108-115 / 2011
  • This paper describes a new sensor system for mobile robot motion estimation using stereo infrared light sources and a camera. Visibility has been applied to robotic obstacle-avoidance path planning and localization. Using a simple visibility computation, the environment is partitioned into many visibility sectors. Based on the recognized edges, the sector to which the robot belongs is identified, which greatly reduces the search area for localization. Geometric modeling of the vision system enables estimation of the characteristic pixel position with respect to the robot movement. Finite difference analysis is used for incremental movement, and the error sources are investigated. With two characteristic points in the image, such as vertices, the robot position and orientation are successfully estimated.
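
The finite-difference flavor of the motion estimation can be sketched as follows. This is a generic toy construction, not the paper's sensor geometry: two known vertices are projected by a simple pinhole model, the Jacobian with respect to the planar pose (x, y, theta) is formed by finite differences, and one least-squares step recovers the incremental motion.

```python
import numpy as np

# Toy model (assumed, not from the paper): a camera rigidly mounted on the robot
# observes two known 3-D vertices; the incremental pose is recovered from pixels.
FOCAL = 500.0                                   # assumed focal length [px]
VERTICES = np.array([[2.0, 0.5, 1.0],           # two known 3-D points (world frame, m)
                     [2.5, -0.4, 0.8]])

def project(pose):
    """Pixel coordinates of the known vertices seen from pose = (x, y, theta)."""
    x, y, th = pose
    c, s = np.cos(th), np.sin(th)
    pix = []
    for X, Y, Z in VERTICES:
        xr = c * (X - x) + s * (Y - y)           # world -> camera frame
        yr = -s * (X - x) + c * (Y - y)          # (camera looks along the robot x-axis)
        pix.extend([FOCAL * yr / xr, FOCAL * Z / xr])
    return np.array(pix)

def estimate_motion(prev_pose, observed_pix, eps=1e-5):
    """One Gauss-Newton step with a finite-difference Jacobian."""
    residual = observed_pix - project(prev_pose)
    J = np.zeros((observed_pix.size, 3))
    for k in range(3):
        step = np.zeros(3)
        step[k] = eps
        J[:, k] = (project(prev_pose + step) - project(prev_pose)) / eps
    delta, *_ = np.linalg.lstsq(J, residual, rcond=None)
    return prev_pose + delta

true_pose = np.array([0.05, 0.02, np.deg2rad(3.0)])
print(estimate_motion(np.zeros(3), project(true_pose)))   # close to true_pose
```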

Detection of Preceding Vehicles Based on a Multistage Combination of Edge Features and Horizontal Symmetry (에지특징의 단계적 조합과 수평대칭성에 기반한 선행차량검출)

  • Song, Gwang-Yul; Lee, Joon-Woong
    • Journal of Institute of Control, Robotics and Systems / v.14 no.7 / pp.679-688 / 2008
  • This paper presents an algorithm capable of detecting preceding vehicles using a forward-looking camera. Accurate measurement of the locations where vehicles contact the road surface is a prerequisite for intelligent vehicle technologies based on monocular vision. Relying on multistage processing of edge features relevant to the generation of vehicle hypotheses, the proposed algorithm creates candidate positions for the left and right boundaries of vehicles and searches the candidates for pairs likely to be vehicle boundaries by evaluating horizontal symmetry. The proposed algorithm is shown to be successful by experiments performed on images acquired from a moving vehicle.
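
A simplified version of the symmetry test is sketched below; the candidate boundary columns are assumed to come from the earlier edge-feature stage, and the width limits and scoring are invented. Each left/right candidate pair is scored by how well the enclosed image region mirrors about its vertical center line.

```python
import numpy as np

# Simplified symmetry-based boundary pairing (parameters are assumptions).
def horizontal_symmetry_score(gray, left, right, top, bottom):
    """1.0 means the region is perfectly mirror-symmetric about its center line."""
    roi = gray[top:bottom, left:right].astype(float)
    return 1.0 - np.abs(roi - roi[:, ::-1]).mean() / 255.0

def best_boundary_pair(gray, candidate_columns, top, bottom,
                       min_width=40, max_width=200):
    """Pick the candidate (left, right) pair with the highest symmetry score."""
    best, best_score = None, -1.0
    for i, left in enumerate(candidate_columns):
        for right in candidate_columns[i + 1:]:
            if not (min_width <= right - left <= max_width):
                continue
            score = horizontal_symmetry_score(gray, left, right, top, bottom)
            if score > best_score:
                best, best_score = (left, right), score
    return best, best_score

# Synthetic image whose intensities are symmetric about column 50.
img = np.tile(np.concatenate([np.arange(50), np.arange(50)[::-1]]), (60, 1)).astype(np.uint8)
print(best_boundary_pair(img, [0, 20, 100], top=0, bottom=60))   # -> ((0, 100), 1.0)
```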

Optical Implementation of Real-Time Two-Dimensional Hopfield Neural Network Model Using Multifocus Hololens (Multifocus Hololens를 이용한 실시간 2차원 Hopfield 신경회로망 모델의 광학적 실험)

  • 박인호; 서춘원; 이승현; 이우상; 김은수; 양인응
    • Journal of the Korean Institute of Telematics and Electronics / v.26 no.10 / pp.1576-1583 / 1989
  • In this paper, we describe a real-time optical implementation of the Hopfield neural network model for two-dimensional associative memory using a commercial LCTV and a multifocus holographic lens (hololens). For real-time processing capability, the LCTV serves as the memory mask and the input spatial light modulator. The inner product between the input pattern and the memory matrix is computed by the multifocus holographic lens. The output signal is then electrically thresholded and fed back to the system input through a 2-D CCD camera. The good experimental results suggest that the proposed system can be applied to pattern recognition and machine vision in the future.
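
The processing loop realized optically here (inner product with the memory matrix, thresholding, feedback to the input) corresponds to the standard Hopfield associative-memory update. A purely digital sketch follows; the stored patterns and their size are invented for illustration.

```python
import numpy as np

# Digital analogue of the optical loop: memory matrix from outer products of
# stored bipolar patterns; each iteration takes the inner product with the
# current state, thresholds it, and feeds the result back as the new input.
def hopfield_weights(patterns):
    P = np.array(patterns, dtype=float)              # each row is a +-1 pattern
    W = P.T @ P                                      # sum of outer products
    np.fill_diagonal(W, 0.0)                         # no self-feedback
    return W

def recall(W, probe, iterations=10):
    state = np.array(probe, dtype=float)
    for _ in range(iterations):
        state = np.where(W @ state >= 0, 1.0, -1.0)  # inner product + threshold
    return state

stored = [[1, -1, 1, -1, 1, -1, 1, -1],
          [1, 1, 1, 1, -1, -1, -1, -1]]
W = hopfield_weights(stored)
noisy = [1, -1, 1, 1, 1, -1, 1, -1]                  # first pattern with one bit flipped
print(recall(W, noisy))                              # converges back to the first pattern
```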

Navigation of a Mobile Robot Using the Hand Gesture Recognition

  • Kim, Il-Myung; Kim, Wan-Cheol; Yun, Jae-Mu; Jin, Tae-Seok; Lee, Jang-Myung
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2001.10a / pp.126.3-126 / 2001
  • A new method to govern the navigation of a mobile robot is proposed, based on two procedures: acquiring vision information through a 2-DOF camera that serves as the communication medium between a human and the mobile robot, and analyzing and acting on the recognized hand-gesture commands. In previous research, mobile robots moved passively by following landmarks, beacons, and so on. To accommodate various changes of situation, a new control system manages the dynamic navigation of the mobile robot. Moreover, without the commonly used expensive equipment or complex algorithms for hand-gesture recognition, a reliable hand-gesture recognition system is efficiently implemented to convey human commands to the mobile robot under a few constraints.
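
Once a gesture has been recognized, steering the robot reduces to a lookup from gesture class to motion command. The sketch below is illustrative only; the gesture labels and velocity values are assumptions, not taken from the paper.

```python
# Illustrative gesture-to-command mapping (labels and velocities are assumed).
GESTURE_COMMANDS = {
    "point_forward": (0.3, 0.0),    # (linear velocity [m/s], angular velocity [rad/s])
    "point_left":    (0.2, 0.5),
    "point_right":   (0.2, -0.5),
    "open_palm":     (0.0, 0.0),    # stop
}

def command_from_gesture(gesture):
    """Map a recognized gesture label to a velocity command; stop if unknown."""
    return GESTURE_COMMANDS.get(gesture, (0.0, 0.0))

print(command_from_gesture("point_left"))   # -> (0.2, 0.5)
```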

Development of 3-dimensional measuring robot cell (3차원 측정 로보트 셀 개발)

  • Park, Kang; Cho, Koung-Rae; Shin, Hyun-Oh; Kim, Mun-Sang
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1991.10a / pp.1139-1143 / 1991
  • Using industrial robots and sensors, we developed an in-line car body inspection system that provides high flexibility and sufficient accuracy. The Car Body Inspection (CBI) cell consists of two industrial robots, two corresponding carriages, a camera vision system, a process computer with multi-tasking ability, and several LDSs. Because industrial robots guarantee sufficient repeatability, the CBI cell adopts the concept of relative measurement instead of absolute measurement. By comparing the actual measured data with reference data, the dimensional errors of the corresponding points can be calculated. The length of the robot arms changes with ambient temperature, which affects the measuring accuracy. To compensate for this error, a robot arm calibration process was implemented: by measuring a reference jig, the differential changes of the robot arms due to temperature fluctuation can be calculated and compensated.
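
The relative-measurement and thermal-compensation idea fits in a few lines; all numbers and names below are invented placeholders. Each measured point is compared with its reference value after subtracting the drift observed on the reference jig.

```python
import numpy as np

# Sketch of relative measurement with jig-based thermal compensation (values assumed).
reference_points = np.array([[100.0, 200.0, 50.0],    # nominal car-body points [mm]
                             [150.0, 180.0, 55.0]])
measured_points = np.array([[100.4, 200.1, 50.2],     # robot/vision measurements [mm]
                            [150.5, 180.2, 55.1]])
jig_drift = np.array([0.3, 0.1, 0.1])                  # offset observed on the reference jig

dimensional_error = (measured_points - jig_drift) - reference_points
print(dimensional_error)        # per-point deviation after thermal compensation
```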

Hard calibration of a structured light for the Euclidian reconstruction (3차원 복원을 위한 구조적 조명 보정방법)

  • 신동조; 양성우; 김재희
    • Proceedings of the IEEK Conference / 2003.11a / pp.183-186 / 2003
  • A vision sensor should be calibrated before a Euclidean shape reconstruction can be inferred. Point-to-point calibration, also referred to as hard calibration, estimates the calibration parameters from a set of 3D-to-2D point pairs. We propose a new method for determining the set of 3D-to-2D pairs for structured-light hard calibration. The pairs are determined simply from the epipolar geometry between the camera image plane and the projector plane, together with a projector calibration grid pattern. The projector calibration is divided into two stages: a world 3D data acquisition stage and a corresponding 2D data acquisition stage. After the 3D data points are derived using the cross ratio, the corresponding 2D points in the projector plane are determined from the fundamental matrix and the horizontal grid ID of the projector calibration pattern. Euclidean reconstruction can then be achieved by linear triangulation, and experimental results from simulation are presented.
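
The final linear-triangulation step is standard (DLT) and can be sketched as follows; `P_cam` and `P_proj` stand for the calibrated camera and projector projection matrices, and the toy matrices and test point used below are placeholders, not calibration results from the paper.

```python
import numpy as np

# Generic linear (DLT) triangulation from one camera pixel and one projector "pixel".
def triangulate(P_cam, P_proj, uv_cam, uv_proj):
    u1, v1 = uv_cam
    u2, v2 = uv_proj
    A = np.vstack([u1 * P_cam[2] - P_cam[0],
                   v1 * P_cam[2] - P_cam[1],
                   u2 * P_proj[2] - P_proj[0],
                   v2 * P_proj[2] - P_proj[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                               # homogeneous solution (null space of A)
    return X[:3] / X[3]                      # back to Euclidean coordinates

# Toy check with two synthetic projection matrices and a known point.
K = np.diag([800.0, 800.0, 1.0])
P_cam = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_proj = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])
X_true = np.array([0.1, -0.05, 2.0, 1.0])
uv = lambda P: (P @ X_true)[:2] / (P @ X_true)[2]
print(triangulate(P_cam, P_proj, uv(P_cam), uv(P_proj)))   # ~ [0.1, -0.05, 2.0]
```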

Blur-Invariant Feature Descriptor Using Multidirectional Integral Projection

  • Lee, Man Hee; Park, In Kyu
    • ETRI Journal / v.38 no.3 / pp.502-509 / 2016
  • Feature detection and description are key ingredients of common image processing and computer vision applications. Most existing algorithms focus on robust feature matching under challenging conditions such as in-plane rotations and scale changes; consequently, they usually fail when the scene is blurred by camera shake or an object's motion. To solve this problem, we propose a new feature description algorithm that is robust to image blur and significantly improves feature matching performance. The proposed algorithm builds a feature descriptor by computing the integral projection along four angular directions ($0^{\circ}$, $45^{\circ}$, $90^{\circ}$, and $135^{\circ}$) and combining the four projection vectors into a single high-dimensional vector. Intensive experiments show that the proposed descriptor outperforms existing descriptors for different types of blur caused by linear motion, nonlinear motion, and defocus. Furthermore, the proposed descriptor is robust to intensity changes and image rotation.
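
A literal reading of this construction is easy to sketch; the patch size, the assignment of image axes to $0^{\circ}$ and $90^{\circ}$, and the normalization are my assumptions. The patch is projected along the four directions and the projection vectors are concatenated into one descriptor.

```python
import numpy as np

# Integral-projection descriptor sketch: sums along rows, columns, diagonals
# and anti-diagonals, concatenated and length-normalized.
def integral_projection_descriptor(patch):
    patch = patch.astype(float)
    h, w = patch.shape
    proj_0 = patch.sum(axis=1)                                   # row sums
    proj_90 = patch.sum(axis=0)                                  # column sums
    proj_45 = np.array([np.trace(patch, offset=k)                # diagonal sums
                        for k in range(-h + 1, w)])
    proj_135 = np.array([np.trace(patch[:, ::-1], offset=k)      # anti-diagonal sums
                         for k in range(-h + 1, w)])
    desc = np.concatenate([proj_0, proj_45, proj_90, proj_135])
    return desc / (np.linalg.norm(desc) + 1e-12)                 # unit length

patch = np.random.default_rng(0).integers(0, 256, size=(32, 32))
print(integral_projection_descriptor(patch).shape)               # (32+63+32+63,) = (190,)
```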