• Title/Summary/Keyword: camera vision

Search results: 1,386 (processing time: 0.032 seconds)

Navigation of a Mobile Robot Using Hand Gesture Recognition (손 동작 인식을 이용한 이동로봇의 주행)

  • Kim, Il-Myeong; Kim, Wan-Cheol; Yun, Gyeong-Sik; Lee, Jang-Myeong
    • Journal of Institute of Control, Robotics and Systems / v.8 no.7 / pp.599-606 / 2002
  • A new method to govern the navigation of a mobile robot using hand-gesture recognition is proposed, based on two procedures: one acquires vision information through a 2-DOF camera that serves as the communication medium between a human and the mobile robot, and the other analyzes the recognized hand-gesture commands and controls the mobile robot accordingly. In previous research, mobile robots moved passively by following landmarks, beacons, and the like; in this paper, to accommodate various changes of situation, a new control system that manages the dynamic navigation of the mobile robot is proposed. Moreover, without the expensive equipment or complex algorithms generally used for hand-gesture recognition, a reliable hand-gesture recognition system is efficiently implemented that conveys human commands to the mobile robot under a few constraints.
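The gesture-to-command interface described above can be illustrated with a trivial dispatch table. The gesture names and velocity values below are assumptions for illustration only, not the paper's actual command set:

```python
# Purely illustrative: map recognized hand-gesture labels to motion
# commands. Gesture names and velocities are assumed, not the paper's.
COMMANDS = {
    "open_palm":   ("stop", 0.0, 0.0),   # (action, linear m/s, angular rad/s)
    "point_left":  ("turn", 0.0, 0.5),
    "point_right": ("turn", 0.0, -0.5),
    "fist":        ("go",   0.3, 0.0),
}

def dispatch(gesture):
    """Return the motion command for a gesture; unknown gestures stop the robot."""
    return COMMANDS.get(gesture, ("stop", 0.0, 0.0))

print(dispatch("fist"))     # ('go', 0.3, 0.0)
print(dispatch("unknown"))  # ('stop', 0.0, 0.0)
```

Defaulting unknown gestures to a stop command is a common safety choice for such interfaces.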

Real-time Simultaneous Localization and Mapping (SLAM) for Vision-based Autonomous Navigation (영상기반 자동항법을 위한 실시간 위치인식 및 지도작성)

  • Lim, Hyon; Lim, Jongwoo; Kim, H. Jin
    • Transactions of the Korean Society of Mechanical Engineers A / v.39 no.5 / pp.483-489 / 2015
  • In this paper, we propose monocular visual simultaneous localization and mapping (SLAM) for large-scale environments. The proposed method continuously computes the current 6-DoF camera pose and 3D landmark positions from video input, and successfully builds consistent maps from challenging outdoor sequences using a monocular camera as the only sensor. By using a binary descriptor and metric-topological mapping, the system demonstrates real-time performance on a large-scale outdoor dataset without utilizing GPUs or reducing the input image size. The effectiveness of the proposed method is demonstrated on various challenging video sequences.
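The binary-descriptor choice mentioned in the abstract is what makes real-time matching feasible: descriptors are compared by Hamming distance, which reduces to an XOR and a bit count. A minimal sketch with toy 8-bit descriptors (real BRIEF/ORB descriptors are 256 bits, and the paper's pipeline is far more involved):

```python
# Toy sketch of binary-descriptor matching by Hamming distance.
# Descriptors here are 8-bit ints; real ones are 256-bit strings.

def hamming(d1: int, d2: int) -> int:
    """Number of differing bits between two binary descriptors."""
    return bin(d1 ^ d2).count("1")

def match(query, train, max_dist=3):
    """Greedy nearest-neighbour matching of two descriptor lists."""
    matches = []
    for qi, q in enumerate(query):
        best = min(range(len(train)), key=lambda ti: hamming(q, train[ti]))
        if hamming(q, train[best]) <= max_dist:
            matches.append((qi, best))
    return matches

query = [0b10110010, 0b01001101]
train = [0b10110011, 0b11110000, 0b01001100]
print(match(query, train))  # [(0, 0), (1, 2)]
```

A full SLAM front end would add ratio tests and geometric verification on top of this, but the inner loop is exactly this cheap.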

Interactive Typography System using Combined Corner and Contour Detection

  • Lim, Sooyeon; Kim, Sangwook
    • International Journal of Contents / v.13 no.1 / pp.68-75 / 2017
  • Interactive typography is a process in which a user communicates by interacting with text and its moving elements. This research covers interactive typography that responds to a user's gestures in real time. To make the system language-independent, the entered text data is first rendered as image data; interaction points are then set by analyzing the image with computer vision techniques such as the Harris corner detector and contour detection. User interaction is achieved using skeleton information tracked by a depth camera: by synchronizing the user's skeleton information acquired by Kinect (a depth camera) with the typography components (interaction points), all user gestures are linked with the typography in real time. An experiment was conducted, in both English and Korean, in which users reported an 81% satisfaction level with an interactive typography system whose text components moved discretely in accordance with the users' gestures. The experiment also showed that the perceived sensibility varied with the size and speed of the text and with the interactive alteration. The results show that interactive typography can potentially be an accurate communication tool, not merely a uniform text-transmission system.
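For reference, the Harris corner response used to pick interaction points can be sketched from first principles. A production system would use an optimized library implementation; the 3x3 window and the constant k below are common defaults, not values from the paper:

```python
# Sketch of the Harris response R = det(M) - k * trace(M)^2, where M is
# the gradient structure tensor summed over a small window.

def harris_response(img, k=0.04):
    h, w = len(img), len(img[0])
    ix = [[0.0] * w for _ in range(h)]   # horizontal gradient
    iy = [[0.0] * w for _ in range(h)]   # vertical gradient
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            ix[y][x] = (img[y][x + 1] - img[y][x - 1]) / 2.0
            iy[y][x] = (img[y + 1][x] - img[y - 1][x]) / 2.0
    R = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            a = b = c = 0.0              # structure tensor over a 3x3 window
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    gx, gy = ix[y + dy][x + dx], iy[y + dy][x + dx]
                    a += gx * gx
                    b += gx * gy
                    c += gy * gy
            R[y][x] = (a * c - b * b) - k * (a + c) ** 2
    return R

# A bright square on a dark background: corners score positive,
# straight edges negative, flat regions zero.
img = [[9 if 3 <= y <= 7 and 3 <= x <= 7 else 0 for x in range(11)]
       for y in range(11)]
R = harris_response(img)
assert R[7][7] > 0 > R[7][5] and R[1][1] == 0
```

The sign pattern is the point: thresholding R positively keeps corners (good interaction points on glyph strokes) while rejecting straight edges.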

A Lane Departure Warning Algorithm Based on an Edge Distribution Function (에지분포함수 기반의 차선이탈경보 알고리즘)

  • 이준웅; 이성웅
    • Transactions of the Korean Society of Automotive Engineers / v.9 no.3 / pp.143-154 / 2001
  • An algorithm for estimating the lane departure of a vehicle is derived and implemented based on an EDF (edge distribution function) obtained from grey-level images taken by a CCD camera mounted on the vehicle. As a function of edge direction, the EDF shows the distribution of edge directions and allows the possibility of lane departure to be estimated from its symmetry axis and local maxima. The EDF plays three important roles: 1) it reduces noise effects caused by the dynamic road scene; 2) it makes lane identification possible without camera modeling; 3) it turns the LDW (lane departure warning) problem into a mathematical one. In lane-departure situations, when the vehicle approaches or runs in the vicinity of the lane marks, the orientation of the lane marks in the image changes, and this change is immediately reflected in the EDF; accordingly, lane departure is estimated by studying the shape of the EDF. The proposed EDF-based algorithm enhances adaptability to random, dynamic road environments and leads to a reliable LDW system.

  • PDF
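An edge distribution function of this kind can be sketched as an orientation histogram of image gradients. The bin count and magnitude weighting below are assumptions for illustration; the paper's analysis then studies the symmetry axis and local maxima of this distribution:

```python
# Sketch of an edge distribution function (EDF): a histogram of gradient
# orientations (0-180 deg) weighted by gradient magnitude. As a lane mark
# rotates in the image during a departure, the histogram peak shifts.
import math

def edge_distribution(img, bins=18, mag_thresh=1.0):
    h, w = len(img), len(img[0])
    edf = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y][x + 1] - img[y][x - 1]) / 2.0
            gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            mag = math.hypot(gx, gy)
            if mag < mag_thresh:
                continue                      # suppress weak, noisy edges
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            edf[int(ang / (180.0 / bins)) % bins] += mag
    return edf

# A clean diagonal step edge puts all its mass in one orientation bin.
img = [[9 if x > y else 0 for x in range(8)] for y in range(8)]
edf = edge_distribution(img)
print(edf.index(max(edf)))  # 13  (the 130-140 degree bin)
```

The magnitude threshold is what gives the noise-suppression role the abstract describes: weak texture edges never enter the histogram.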

A Real-time Vision Inspection System at a Laver Production Line (해태 생산라인에서의 실시간 시각검사 시스템)

  • Kim, Gi-Weon; Kim, Bong-Gi
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2007.06a / pp.601-604 / 2007
  • This paper describes laver surface inspection using real-time image processing; the system detects defective laver on a production line. First, a laver image is read in real time using an area-scan CCD camera. The image is then converted into a binary image using a high-speed image-processing board, features of the laver are extracted from the binary image, and surface defect detection is finally performed using these features. In this paper, the area feature of the laver image is used.

  • PDF
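The binarize-then-measure-area decision can be sketched as follows. The threshold and the acceptable area band are made up for illustration; in the paper the binarization is done by a hardware imaging board:

```python
# Illustrative sketch (assumed thresholds, not the authors' values):
# binarize a grey image and use the foreground area as the pass/fail
# feature for a laver sheet.

def binarize(img, thresh=128):
    """1 where the pixel is at least `thresh`, else 0."""
    return [[1 if p >= thresh else 0 for p in row] for row in img]

def area(binary):
    """Number of foreground pixels in the binary image."""
    return sum(sum(row) for row in binary)

def inspect(img, min_area, max_area, thresh=128):
    """Accept the sheet only if its foreground area is in the pass band."""
    return min_area <= area(binarize(img, thresh)) <= max_area

img = [[200, 200, 10],
       [200, 200, 10],
       [10,  10,  10]]
print(inspect(img, min_area=3, max_area=6))  # True (area = 4)
```

A too-small area indicates holes or tears; a too-large one indicates overlap or foreign matter, which is why a band rather than a single threshold is used here.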

On the Measurement of the Depth and Distance from Defocused Images Using the Regularization Method (비초점화 영상에서 정칙화법을 이용한 깊이 및 거리 계측)

  • 차국찬; 김종수
    • Journal of the Korean Institute of Telematics and Electronics B / v.32B no.6 / pp.886-898 / 1995
  • One way to measure distance in computer vision is to use focus and defocus, and two methods exist. The first calculates the distance from the focused points of an image (MMDFP: measuring the distance to the focal plane). The second measures the distance from the difference between two images taken with different camera parameters, i.e., different apertures or focal planes (MMDCI: measuring the distance by comparing two images). The problem with existing MMDFP approaches is deciding the threshold value for detecting the most sharply focused object in a defocused image; this can be solved by comparing only the error energy in a 3x3 window between the two images. In MMDCI, the difficulty is the influence of deflection effects, so to minimize this influence we use two differently focused images instead of images with different apertures. First, the amount of defocus between the two images is measured by introducing regularization, and then the distance from the camera to the objects is calculated by a new distance-measurement equation. Simulation results show that the distance can be measured from two differently defocused images and that our approach is more robust than the different-aperture method on noisy images.

  • PDF
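The geometry underlying depth from defocus can be sketched with the standard thin-lens relation. This is the textbook model only, not the paper's regularized estimator; f, v, and D denote focal length, lens-to-sensor distance, and aperture diameter:

```python
# Thin-lens sketch: the blur-circle diameter d of a point at distance u
# (beyond the focused plane) is d = D * v * (1/f - 1/v - 1/u), so an
# estimated blur amount can be inverted to give the depth u.

def depth_from_blur(d, f, v, D):
    """Object distance u from blur-circle diameter d."""
    inv_u = 1.0 / f - 1.0 / v - d / (D * v)
    return 1.0 / inv_u

# With f = 50 mm and the sensor placed to focus at u = 1 m,
# zero blur must return exactly 1 m; any blur implies a farther object.
f, u0 = 0.05, 1.0
v = 1.0 / (1.0 / f - 1.0 / u0)             # lens equation: 1/f = 1/u + 1/v
print(depth_from_blur(0.0, f, v, D=0.01))  # 1.0
```

The hard part, which this sketch omits entirely, is estimating the blur amount d itself from two differently focused images; that is what the paper's regularization addresses.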

Object and Pose Recognition with Boundary Extraction from 3 Dimensional Depth Information (3 차원 거리 정보로부터 물체 윤곽추출에 의한 물체 및 자세 인식)

  • Gim, Seong-Chan; Yang, Chang-Ju; Lee, Jun-Ho; Kim, Jong-Man; Kim, Hyoung-Suk
    • Journal of the Institute of Electronics Engineers of Korea SC / v.48 no.6 / pp.15-23 / 2011
  • A method for precise three-dimensional distance measurement and object recognition using a single camera, as an alternative to the stereo-vision approach, is proposed. Precise three-dimensional information about objects is obtained using a single camera, a laser light, and a rotating flat mirror. With a simple thresholding operation on the depth information, objects can be segmented, and by comparing the signatures of the object boundaries with a database, objects can be recognized. Simulation results for object recognition, improved by the precise distance measurement, are presented.
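The boundary-signature matching step can be sketched with the common centroid-distance signature. Which signature the paper actually uses is not specified, so this is an assumed but representative choice:

```python
# Sketch: a boundary "signature" as the list of distances from the shape
# centroid to its boundary points, matched against database signatures.
import math

def signature(boundary):
    """Centroid-to-boundary distances for a list of (x, y) points."""
    cx = sum(x for x, y in boundary) / len(boundary)
    cy = sum(y for x, y in boundary) / len(boundary)
    return [math.hypot(x - cx, y - cy) for x, y in boundary]

def best_match(sig, database):
    """Name of the database entry with the smallest summed difference."""
    def cost(name):
        return sum(abs(a - b) for a, b in zip(sig, database[name]))
    return min(database, key=cost)

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
db = {"square": signature([(0, 0), (2, 0), (2, 2), (0, 2)]),
      "bar":    signature([(0, 0), (4, 0), (4, 1), (0, 1)])}
print(best_match(signature(square), db))  # square
```

A centroid-distance signature is translation-invariant by construction, which is one reason it is a popular choice for this kind of lookup.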

A vision based mobile robot travelling among obstructions

  • Ishigawa, Seiji; Gouhara, Kouichi; Kouichi-Ide; Kato, Kiyoshi
    • Institute of Control, Robotics and Systems: Conference Proceedings (제어로봇시스템학회:학술대회논문집) / 1988.10b / pp.810-815 / 1988
  • This paper presents a mobile robot that travels using visual information. The mobile robot is equipped solely with a TV camera as a sensor, and views from the TV camera are transferred to a separately installed microcomputer through an image acquisition device. An acquired image is processed there and the information necessary for travel is extracted; instructions based on this information are then sent from the microcomputer to the mobile robot, triggering its next action. Among the several application programs that have been developed for the mobile robot beyond the overall control program, this paper focuses on travel control in a model environment with obstructions, together with an overview of the whole system. The behaviour of the present mobile robot when travelling among obstructions was investigated experimentally, and satisfactory results were obtained.

  • PDF

Analysis of Rotational Motion of Skid Steering Mobile Robot using Marker and Camera (마커와 카메라를 이용한 스키드 구동 이동 로봇의 회전 운동 분석)

  • Ha, Jong-Eun
    • The Journal of the Korea institute of electronic communication sciences / v.11 no.2 / pp.185-190 / 2016
  • This paper analyzes the characteristics of a mobile robot's motion by automatically detecting markers on the robot with a camera. Analyzing motion behavior as a function of the control parameters is important when developing control algorithms for robot operation or autonomous navigation. For this purpose, four chessboard patterns are attached to the robot, with their locations adjusted to lie on a single plane, and a homography is used to compute the actual amount of the robot's movement. The presented method is tested using a P3-AT robot and gives reliable results.
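The homography step can be sketched as follows: once a 3x3 matrix H mapping image points to the ground plane is known (taken as given here; estimating it from the four chessboard patterns is the part the paper automates), the robot's heading follows from any two marker points:

```python
# Sketch: apply a plane-to-plane homography H to image points, then read
# off the robot heading from two mapped marker points. H is assumed
# already estimated; the identity below stands in for a calibrated one.
import math

def apply_homography(H, pt):
    """Map an image point through H with the projective division."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def heading(front_pt, rear_pt, H):
    """Robot heading (radians) from front and rear marker image points."""
    fx, fy = apply_homography(H, front_pt)
    rx, ry = apply_homography(H, rear_pt)
    return math.atan2(fy - ry, fx - rx)

# Identity homography: image coordinates are already metric.
H = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(round(math.degrees(heading((1, 1), (0, 0), H))))  # 45
```

Tracking this heading frame by frame is what allows the rotational motion of the skid-steered robot to be measured against its commanded turn rates.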

Performing Missions of a Minicar Using a Single Camera (단안 카메라를 이용한 소형 자동차의 임무 수행)

  • Kim, Jin-Woo; Ha, Jong-Eun
    • The Journal of the Korea institute of electronic communication sciences / v.12 no.1 / pp.123-128 / 2017
  • This paper deals with performing missions through autonomous navigation using a camera and other sensors. Extracting the pose of the car is necessary for it to navigate safely within the given road, and a homography is used to find it. The color image is converted into a grey image, and thresholding and edge detection are used to find control points. Two control points are converted into world coordinates using the homography to obtain the angle and position of the car, and color is used to detect the traffic signal. Experiments confirmed that the given tasks were performed well.
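The color-based traffic-signal detection mentioned above can be sketched by counting pixels inside simple RGB ranges. The thresholds here are assumptions for illustration, not the paper's calibrated values:

```python
# Sketch (assumed RGB ranges): classify a traffic-signal region of
# interest by counting pixels that fall inside a red or green band.

def classify_signal(pixels):
    """Return 'red', 'green', or 'none' from a list of (r, g, b) pixels."""
    red = sum(1 for r, g, b in pixels if r > 150 and g < 100 and b < 100)
    green = sum(1 for r, g, b in pixels if g > 150 and r < 100 and b < 100)
    if max(red, green) < 3:          # too few matching pixels: no signal
        return "none"
    return "red" if red >= green else "green"

roi = [(200, 30, 40)] * 5 + [(90, 90, 90)] * 20
print(classify_signal(roi))  # red
```

Requiring a minimum pixel count before declaring a signal is a cheap guard against isolated noisy pixels in the region of interest.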