Title/Summary/Keyword: camera vision

Search results: 1,386

Walking Features Detection for Human Recognition

  • Viet, Nguyen Anh;Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.11 no.6 / pp.787-795 / 2008
  • Human recognition from camera images is an interesting topic in computer vision. While fingerprint and face recognition have become common, gait is considered a new biometric feature for recognition at a distance. In this paper, we propose a gait recognition algorithm based on the knee angle, the distance between the two feet, the walking velocity, and the head direction of a person appearing in the camera view over one gait cycle. Background subtraction is first used to extract the binary moving object; from this we detect the leg and head regions and obtain the gait features (leg angle and leg swing amplitude). Another feature, the walking speed, is computed once a gait cycle has finished. We then compute the errors between the calculated features and the stored features for recognition. The method gives good results in tests on indoor and outdoor scenes from both lateral and oblique views.

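A minimal sketch of the pipeline outlined in the abstract above: background subtraction to isolate the walking silhouette, a crude per-sequence feature vector, and recognition by smallest feature error. The MOG2 subtractor, the particular feature computed here, and the gallery comparison are editorial assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def extract_gait_features(video_path):
    """Crude features from background subtraction: here only the mean centroid
    displacement per frame (a proxy for walking speed) is computed."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
    centroids = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                                  # binary moving object
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        ys, xs = np.nonzero(mask)
        if len(xs) > 0:
            centroids.append((xs.mean(), ys.mean()))
    cap.release()
    if len(centroids) < 2:
        return None
    c = np.array(centroids)
    speed = np.linalg.norm(np.diff(c, axis=0), axis=1).mean()           # pixels per frame
    # Leg angle and swing amplitude would be measured on the segmented leg region here.
    return np.array([speed])

def recognize(features, gallery):
    """Return the gallery identity whose stored feature vector gives the smallest error."""
    errors = {name: np.linalg.norm(features - stored) for name, stored in gallery.items()}
    return min(errors, key=errors.get)
```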

A Technique for Alignment to True North Based on Camera in Meteorological Installation (풍황 계측 타워 설치시 카메라를 사용한 진북 맞추기 기법)

  • Yoo Neung Soo;Nam Yoo Su;Lee Jeong Wan
    • Journal of Institute of Control, Robotics and Systems / v.11 no.2 / pp.122-126 / 2005
  • A technique for alignment to true north is presented, based on synchronized measurements of camera images and the output voltage of a wind direction sensor. The true wind direction is estimated from the images by least-squares image processing, and the estimated value is then compared with the measured output voltage of the sensor. An uncertainty analysis of the component errors of the proposed method in a practical situation is performed. The proposed technique is applied to a real meteorological (wind measuring) tower at the Daekwanryung test site.

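A hedged sketch of the comparison step described above, assuming a linear relation between the image-derived wind direction and the sensor output voltage; the sample values, the linear model, and the variable names are illustrative, not taken from the paper.

```python
import numpy as np

# Synchronized samples (illustrative only): wind direction obtained by image processing,
# in degrees from true north, and the simultaneous wind-vane output voltage.
direction_from_image = np.array([12.0, 35.5, 61.2, 90.4, 121.8])   # degrees
sensor_voltage = np.array([0.17, 0.49, 0.85, 1.26, 1.69])          # volts

# Least-squares fit of direction = gain * voltage + offset; the offset term is the
# misalignment between the sensor's zero direction and true north.
A = np.column_stack([sensor_voltage, np.ones_like(sensor_voltage)])
(gain, offset), *_ = np.linalg.lstsq(A, direction_from_image, rcond=None)

print(f"gain = {gain:.2f} deg/V, offset from true north = {offset:.2f} deg")
```
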
Implementation of an Underwater ROV for Detecting Foreign Objects in Water

  • Lho, Tae-Jung
    • Journal of Information and Communication Convergence Engineering / v.19 no.1 / pp.61-66 / 2021
  • An underwater remotely operated vehicle (ROV) has been implemented that can inspect foreign substances through a CCD camera while running in water. The maximum thrust of the ROV's running thruster is 139.3 N, allowing the ROV to move forward and backward at a speed of 1.03 m/s underwater. The structural strength of the guard frame was analyzed for a collision with a wall while traveling at 1.03 m/s underwater, and it was found to be safe. The maximum running speed of the ROV is 1.08 m/s and the working speed is 0.2 m/s in a 5.8-m-deep wave pool, which satisfies the target performance. While traveling underwater at 0.2 m/s, the inspection camera could read characters 3 mm wide at a depth of 1.5 m, which means the system can sufficiently identify foreign objects in the water.

Object tracking algorithm through RGB-D sensor in indoor environment (실내 환경에서 RGB-D 센서를 통한 객체 추적 알고리즘 제안)

  • Park, Jung-Tak;Lee, Sol;Park, Byung-Seo;Seo, Young-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.10a / pp.248-249 / 2022
  • In this paper, we propose a method for classifying and tracking objects based on information about multiple users obtained with RGB-D cameras. The 3D and color information acquired through the RGB-D camera is used to store a description of each user. We propose a user classification and location tracking algorithm over the entire image that computes the similarity between users in the current frame and the previous frame from each user's location and appearance.

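A minimal sketch of the frame-to-frame association step described above: each user is summarized by a 3D position and a color histogram, and detections in the current frame are matched to the previous frame by a combined similarity cost. The cost weights, the histogram-intersection appearance measure, and the Hungarian assignment are editorial assumptions rather than the authors' exact algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def association_cost(prev_users, curr_users, w_pos=1.0, w_app=1.0):
    """Cost matrix combining 3D distance and appearance distance.
    Each user is a dict with 'position' (3-vector, metres) and 'hist'
    (normalized color histogram summing to 1)."""
    cost = np.zeros((len(prev_users), len(curr_users)))
    for i, p in enumerate(prev_users):
        for j, c in enumerate(curr_users):
            d_pos = np.linalg.norm(p["position"] - c["position"])
            d_app = 1.0 - np.minimum(p["hist"], c["hist"]).sum()   # 1 - histogram intersection
            cost[i, j] = w_pos * d_pos + w_app * d_app
    return cost

def track(prev_users, curr_users):
    """Assign current detections to previous user IDs by minimum total cost."""
    cost = association_cost(prev_users, curr_users)
    rows, cols = linear_sum_assignment(cost)
    return {prev_users[i]["id"]: curr_users[j] for i, j in zip(rows, cols)}
```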

Vehicle Classification and Tracking Based on Deep Learning

  • Hyochang Ahn;Yong-Hwan Lee
    • Journal of Web Engineering / v.21 no.4 / pp.1283-1294 / 2022
  • Traffic volume is gradually increasing due to the development of technology and the concentration of people in cities. As a result, traffic congestion and traffic accidents are becoming social problems. Detecting and tracking vehicles with computer vision is of great help in providing important information such as road traffic conditions and crime situations. However, camera-based vehicle detection and tracking is affected by the environment in which the camera is installed. In this paper, we therefore propose a deep learning-based vehicle classification and tracking scheme for complex and diverse environments. Using the YOLO model as the deep learning model, robust vehicle tracking can be performed quickly and accurately in various environments compared with traditional methods.

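The abstract above does not state which YOLO version or framework was used; the snippet below shows one common way to run YOLO-based vehicle detection with built-in tracking, using the Ultralytics package as an assumed stand-in.

```python
# Assumed setup: Ultralytics YOLO with a pretrained COCO model; in COCO,
# classes 2, 5 and 7 are car, bus and truck.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
results = model.track(source="traffic.mp4", classes=[2, 5, 7], persist=True, stream=True)

for frame_result in results:
    for box in frame_result.boxes:
        track_id = int(box.id) if box.id is not None else -1
        cls_name = model.names[int(box.cls)]
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        print(f"id={track_id} {cls_name} bbox=({x1:.0f},{y1:.0f},{x2:.0f},{y2:.0f})")
```
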
Development of the Computer Vision based Continuous 3-D Feature Extraction System via Laser Structured Lighting (레이저 구조광을 이용한 3차원 컴퓨터 시각 형상정보 연속 측정 시스템 개발)

  • Im, D. H.;Hwang, H.
    • Journal of Biosystems Engineering / v.24 no.2 / pp.159-166 / 1999
  • A system has been developed to continuously extract real 3-D geometric feature information from 2-D images of objects fed randomly on a conveyor. Two sets of structured laser lights were utilized, and the laser structured-light projection image was acquired by the camera, triggered by the signal of a photo-sensor mounted on the conveyor. A camera calibration matrix, which transforms 2-D image coordinates into 3-D world coordinates, was obtained using six known points. The maximum error after calibration was 1.5 mm within a height range of 103 mm. A correlation equation between the shift of the laser light and the height was derived; the height estimated from this correlation showed a maximum error of 0.4 mm within the same 103 mm height range. Interactive 3-D geometric feature extraction software was developed using Microsoft Visual C++ 4.0 under the Windows environment, and the extracted 3-D geometric feature information was reconstructed into a 3-D surface using MATLAB.

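The calibration described above maps 2-D image coordinates to 3-D world coordinates from six known points. A standard way to estimate such a mapping is the direct linear transform (DLT) for the 3x4 projection matrix, sketched below with made-up correspondences; the paper's exact formulation may differ.

```python
import numpy as np

def dlt_projection_matrix(world_pts, image_pts):
    """Estimate the 3x4 camera projection matrix P (up to scale) from six or more
    3-D world / 2-D image point correspondences by linear least squares (DLT)."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

# Illustrative correspondences (not the paper's data): six known calibration points.
world = [(0, 0, 0), (100, 0, 0), (0, 100, 0), (0, 0, 103), (100, 100, 0), (100, 0, 103)]
image = [(320, 240), (420, 238), (322, 140), (318, 300), (421, 139), (419, 301)]
P = dlt_projection_matrix(world, image)
```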

Stereo Vision-based Visual Odometry Using Robust Visual Feature in Dynamic Environment (동적 환경에서 강인한 영상특징을 이용한 스테레오 비전 기반의 비주얼 오도메트리)

  • Jung, Sang-Jun;Song, Jae-Bok;Kang, Sin-Cheon
    • The Journal of Korea Robotics Society / v.3 no.4 / pp.263-269 / 2008
  • Visual odometry is a popular approach to estimating robot motion using a monocular or stereo camera. This paper proposes a novel visual odometry scheme using a stereo camera for robust estimation of 6-DOF motion in dynamic environments. False feature matches and the uncertainty of the depth information provided by the camera can generate outliers that deteriorate the estimation. The outliers are removed by analyzing the magnitude histogram of the motion vectors of the corresponding features and by the RANSAC algorithm. Features extracted from a dynamic object such as a human also make the motion estimation inaccurate. To eliminate the effect of dynamic objects, several candidate dynamic objects are generated by clustering the 3D positions of features, and each candidate is checked, based on the standard deviation of its features, as to whether it is a real dynamic object. The accuracy and practicality of the proposed scheme are verified by several experiments and comparisons with both IMU- and wheel-based odometry. It is shown that the proposed scheme works well when wheel slip occurs or dynamic objects exist.

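A sketch of the first outlier-rejection idea mentioned above: keep only feature matches whose motion-vector magnitude falls near the dominant mode of the magnitude histogram. The bin count and tolerance are illustrative choices, and the RANSAC stage the paper also applies is omitted here.

```python
import numpy as np

def filter_by_magnitude_histogram(prev_pts, curr_pts, n_bins=32, keep_bins=1):
    """Return a boolean inlier mask over matched feature pairs: matches whose
    displacement magnitude lies outside the dominant histogram bin (plus a small
    neighbourhood) are treated as outliers from bad matches or moving objects."""
    vectors = np.asarray(curr_pts, dtype=float) - np.asarray(prev_pts, dtype=float)
    magnitudes = np.linalg.norm(vectors, axis=1)
    hist, edges = np.histogram(magnitudes, bins=n_bins)
    dominant = int(np.argmax(hist))                       # most common displacement
    lo = edges[max(dominant - keep_bins, 0)]
    hi = edges[min(dominant + keep_bins + 1, n_bins)]
    return (magnitudes >= lo) & (magnitudes <= hi)
```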

Enhancement of 3D Scanning Performance by Correcting the Photometric Distortion of a Micro Projector-Camera System (초소형 카메라-프로젝터의 광학왜곡 보정을 이용한 위상변이 방식 3차원 스캐닝의 성능 향상)

  • Park, Go Gwang;Baek, Seung-Hae;Park, Soon-Yong
    • Journal of Institute of Control, Robotics and Systems / v.19 no.3 / pp.219-226 / 2013
  • A distortion correction technique is presented to enhance the 3D scanning performance of a micro-size camera-projector system. Recently, several types of micro-size digital projectors and cameras have become available; however, there has been little effort to develop a micro-size 3D scanning system. We develop a micro-size 3D scanning system based on the structured-light technique. Three phase-shifted sinusoidal patterns are projected, captured, and analyzed by the system to reconstruct the 3D shape of very small objects. To overcome the inherent optical imperfections of the micro 3D sensor, we correct the vignetting and blooming effects that distort the phase image. Error analysis and 3D scanning results on small real objects are presented to show the performance of the developed 3D scanning system.

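The abstract above does not give the phase-recovery formula; assuming the common three-step algorithm with 120-degree shifts, the wrapped phase follows from the three captured patterns as sketched below. The flat-field style vignetting correction is likewise an editorial assumption, not the paper's method.

```python
import numpy as np

def three_step_phase(i1, i2, i3):
    """Wrapped phase from three sinusoidal patterns shifted by -120, 0 and +120 degrees:
    phi = atan2(sqrt(3) * (I1 - I3), 2*I2 - I1 - I3)."""
    i1, i2, i3 = (np.asarray(x, dtype=np.float64) for x in (i1, i2, i3))
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

def correct_vignetting(image, flat_field, eps=1e-6):
    """Simple flat-field correction: divide by a normalized image of a uniform white
    target so that radial intensity falloff does not bias the recovered phase."""
    gain = flat_field / (flat_field.max() + eps)
    return image / (gain + eps)
```
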
An Obstacle Detection and Avoidance Method for Mobile Robot Using a Stereo Camera Combined with a Laser Slit

  • Kim, Chul-Ho;Lee, Tai-Gun;Park, Sung-Kee;Kim, Jai-Hie
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2003.10a / pp.871-875 / 2003
  • Detecting and avoiding obstacles is one of the important tasks of mobile navigation. In a real environment, when a mobile robot encounters dynamic obstacles, it must detect and avoid them simultaneously to keep its body safe. In previous vision systems, a mobile robot has used the camera as either a passive sensor or an active sensor. This paper proposes a new obstacle detection algorithm that uses a stereo camera as both a passive and an active sensor. Our system estimates the distances to obstacles by both passive correspondence and active correspondence using a laser slit. The system operates in three steps. First, a far-off obstacle is detected from the disparity given by stereo correspondence. Next, a close obstacle is obtained from the laser slit beam projected in the same stereo image. Finally, we implement an obstacle avoidance algorithm, adopting the modified Dynamic Window Approach (DWA), using the acquired obstacle distances.

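The passive step above recovers distance from stereo disparity. A minimal sketch of that relation, depth = focal length x baseline / disparity, is given below with illustrative camera parameters; the laser-slit (active) step and the modified DWA are not reproduced here.

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Depth from stereo disparity for a rectified pinhole pair: Z = f * B / d.
    Non-positive disparities are returned as infinity (no valid depth)."""
    d = np.asarray(disparity_px, dtype=np.float64)
    with np.errstate(divide="ignore"):
        return np.where(d > 0, focal_px * baseline_m / d, np.inf)

# Illustrative parameters, not from the paper: ~700 px focal length, 12 cm baseline;
# a 20-pixel disparity then corresponds to a depth of about 4.2 m.
print(disparity_to_depth(20.0, focal_px=700.0, baseline_m=0.12))
```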

Moving Target Indication using an Image Sensor for Small UAVs (소형 무인항공기용 영상센서 기반 이동표적표시 기법)

  • Yun, Seung-Gyu;Kang, Seung-Eun;Ko, Sangho
    • Journal of Institute of Control, Robotics and Systems / v.20 no.12 / pp.1189-1195 / 2014
  • This paper addresses a Moving Target Indication (MTI) algorithm which can be used for small Unmanned Aerial Vehicles (UAVs) equipped with image sensors. MTI is a system (or an algorithm) that detects moving objects. The principle of the MTI algorithm is to analyze the difference between successive image frames. It is difficult to detect moving objects in images recorded from dynamic cameras attached to moving platforms, such as UAVs flying at low altitudes over a variety of terrain, since the acquired images contain two motion components: camera motion and object motion. Therefore, the motion of independent objects can be obtained only after the camera motion is thoroughly compensated by proper manipulation. In this study, the camera motion is removed using Wiener filter-based image registration, one of the non-parametric methods. In addition, an image pyramid structure is adopted to reduce the computational complexity for UAVs. We demonstrate the effectiveness of our method with experimental results on outdoor video sequences.
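A crude sketch of the two stages described above, with phase correlation standing in for the paper's Wiener filter-based registration and a fixed difference threshold standing in for its detection logic; both substitutions are editorial assumptions.

```python
import cv2
import numpy as np

def moving_target_mask(prev_gray, curr_gray, pyr_levels=2, thresh=25):
    """Crude MTI: estimate the global (camera) translation on a downsampled pyramid
    level, warp the previous frame to compensate, then threshold the frame difference
    so that only independently moving objects remain."""
    small_prev, small_curr = prev_gray, curr_gray
    for _ in range(pyr_levels):                           # image pyramid to cut computation
        small_prev = cv2.pyrDown(small_prev)
        small_curr = cv2.pyrDown(small_curr)
    (dx, dy), _ = cv2.phaseCorrelate(np.float32(small_prev), np.float32(small_curr))
    scale = 2 ** pyr_levels                               # shift back to full resolution
    M = np.float32([[1, 0, dx * scale], [0, 1, dy * scale]])
    registered = cv2.warpAffine(prev_gray, M, prev_gray.shape[::-1])
    diff = cv2.absdiff(curr_gray, registered)             # residual motion = moving objects
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask
```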