• Title/Summary/Keyword: Camera-based Recognition


A Study on the Gesture Matching Method for the Development of Gesture Contents (체감형 콘텐츠 개발을 위한 연속동작 매칭 방법에 관한 연구)

  • Lee, HyoungGu
    • Journal of Korea Game Society / v.13 no.6 / pp.75-84 / 2013
  • This paper introduces a method for recording and matching poses and gestures on the Windows PC platform. The method uses the Xtion gesture-detection camera for Windows PCs. An API was first developed to process and compare the depth data, RGB image data, and skeleton data obtained from the camera. A pose-matching method that selectively compares only the valid joints was developed, and for gesture matching, a recognition method that can distinguish wrong poses between the constituent poses was developed. A tool that records and tests sample data to extract the specified poses and gestures was also built. Six different poses and gestures were captured and tested; poses were recognized with 100% accuracy and gestures with 99%, validating the proposed method.
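
The paper's pose matcher compares only the joints that are valid for a given pose. A minimal Python sketch of that idea follows; the joint layout, the Euclidean distance metric, and the 0.15 m threshold are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def pose_match(sample_joints, live_joints, valid_mask, threshold=0.15):
    """Compare a live skeleton against a recorded sample pose.

    sample_joints, live_joints: (N, 3) arrays of joint positions (metres).
    valid_mask: boolean array marking the joints that matter for this pose.
    Returns True when every valid joint lies within `threshold` of the sample.
    """
    diffs = np.linalg.norm(sample_joints - live_joints, axis=1)
    return bool(np.all(diffs[valid_mask] < threshold))

# Hypothetical 4-joint skeleton where only the two hand joints are "valid".
sample = np.array([[0.0, 1.0, 2.0], [0.3, 1.0, 2.0], [-0.3, 1.0, 2.0], [0.0, 0.5, 2.0]])
live = sample + np.random.normal(scale=0.02, size=sample.shape)
valid = np.array([False, True, True, False])
print(pose_match(sample, live, valid))
```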

Implementation of a sensor fusion system for autonomous guided robot navigation in outdoor environments (실외 자율 로봇 주행을 위한 센서 퓨전 시스템 구현)

  • Lee, Seung-H.;Lee, Heon-C.;Lee, Beom-H.
    • Journal of Sensor Science and Technology / v.19 no.3 / pp.246-257 / 2010
  • Autonomous guided robot navigation, which consists of following unknown paths and avoiding unknown obstacles, is a fundamental capability for unmanned robots in outdoor environments. Following an unknown path requires techniques such as path recognition, path planning, and robot pose estimation. In this paper, we propose a novel sensor fusion system for autonomous guided robot navigation in outdoor environments. The proposed system consists of three monocular cameras and an array of nine infrared range sensors. The two cameras mounted on the robot's right and left sides are used to recognize unknown paths and to estimate the robot's relative pose on those paths through a Bayesian sensor fusion method, while the camera mounted at the front of the robot is used to recognize abrupt curves and unknown obstacles. The infrared range sensor array improves the robustness of obstacle avoidance. The forward camera and the infrared range sensor array are fused through a rule-based method for obstacle avoidance. Experiments in outdoor environments show that a mobile robot equipped with the proposed sensor fusion system successfully performed real-time autonomous guided navigation.
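
The abstract describes a rule-based fusion of the forward camera and the nine-element infrared range array for obstacle avoidance. The sketch below illustrates one way such rules could look; the thresholds and the specific decision logic are assumptions, not the authors' rules.

```python
def avoidance_command(camera_obstacle, ir_ranges_m, stop_dist=0.5, slow_dist=1.0):
    """Rule-based fusion of a forward-camera obstacle flag and a 9-element
    infrared range array (hypothetical thresholds, in metres)."""
    nearest = min(ir_ranges_m)
    if camera_obstacle and nearest < stop_dist:
        return "stop"            # both sensors agree an obstacle is close
    if camera_obstacle or nearest < slow_dist:
        return "slow_and_steer"  # one sensor reports a possible obstacle
    return "follow_path"         # no evidence of an obstacle

print(avoidance_command(True, [2.0, 1.8, 0.4, 2.5, 3.0, 3.0, 3.0, 3.0, 3.0]))
```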

Smart Streetlight based on Accident Recognition using Raspberry Pi Camera OpenCV (라즈베리파이 카메라 OpenCV를 활용한 사고 인식 기반 스마트 가로등)

  • Dong-Jin, Kim;Won-Seok, Choi;Sung-Pyo, Ju;Seung-Min, Yoo;Jae-Yong, Choi;Hyoung-Keun, Park
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.6 / pp.1229-1236 / 2022
  • In this paper, we studied an accident-aware smart streetlight intended to prevent secondary accidents on highways. The system uses an Arduino and sensors to inform drivers of weather conditions, incorporates functions such as LED brightness control according to sunlight and to vehicles driving at night, and uses a Raspberry Pi camera with OpenCV to learn to recognize various traffic accidents, natural disasters, and wildlife.
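
A small sketch of the LED brightness-control idea mentioned in the abstract follows; the lux threshold and the duty-cycle levels are hypothetical values chosen for illustration only.

```python
def led_duty_cycle(ambient_lux, vehicle_detected):
    """Return a PWM duty cycle (0.0-1.0) for the streetlight LED.

    Hypothetical policy: stay off in daylight, run at a low night-time level,
    and go to full brightness while a vehicle (or detected incident) is present.
    """
    if ambient_lux > 200:            # daylight threshold (assumed value)
        return 0.0
    return 1.0 if vehicle_detected else 0.3

print(led_duty_cycle(ambient_lux=15, vehicle_detected=True))   # 1.0
print(led_duty_cycle(ambient_lux=15, vehicle_detected=False))  # 0.3
```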

Combining Object Detection and Hand Gesture Recognition for Automatic Lighting System Control

  • Pham, Giao N.;Nguyen, Phong H.;Kwon, Ki-Ryong
    • Journal of Multimedia Information System / v.6 no.4 / pp.329-332 / 2019
  • Recent smart lighting systems combine sensors and lights: they turn lights on/off and adjust their brightness based on object motion and ambient brightness. Such systems are often applied in buildings, rooms, garages, and parking lots, but they are controlled by lighting sensors and motion sensors, that is, by ambient illumination and motion detection alone. In this paper, we propose an automatic lighting control system that uses a single camera for buildings, rooms, and garages. The proposed system integrates the results of digital image processing, namely motion detection and hand gesture detection, to switch and dim the lighting system. The experimental results show that the proposed system works very well and can be considered for automatic lighting of such spaces.
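
Motion detection is one of the image-processing building blocks the abstract mentions. Below is a minimal frame-differencing sketch using OpenCV; the threshold and minimum-area values are assumptions rather than the paper's parameters.

```python
import cv2

def motion_present(prev_gray, curr_gray, min_area=500):
    """Frame-differencing motion detector for consecutive grayscale frames.

    Returns True when any changed region is larger than `min_area` pixels.
    """
    diff = cv2.absdiff(prev_gray, curr_gray)                   # per-pixel change
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)  # binarize changes
    mask = cv2.dilate(mask, None, iterations=2)                # close small gaps
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return any(cv2.contourArea(c) > min_area for c in contours)
```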

Detection and Recognition of Illegally Parked Vehicles Based on an Adaptive Gaussian Mixture Model and a Seed Fill Algorithm

  • Sarker, Md. Mostafa Kamal;Weihua, Cai;Song, Moon Kyou
    • Journal of information and communication convergence engineering / v.13 no.3 / pp.197-204 / 2015
  • In this paper, we present an algorithm for the detection of illegally parked vehicles based on a combination of image-processing algorithms. A digital camera fixed on the illegal-parking region captures the video frames. An adaptive Gaussian mixture model (GMM) is used for background subtraction in a complex environment to identify the regions of moving objects in our test video. Stationary objects are then detected using pixel-level features over time sequences, and a stationary vehicle is detected using the local features of the object, so that information about illegally parked vehicles is successfully obtained. An automatic alarm system can be triggered according to the regulations of the particular illegal-parking region. The results obtained on a test video sequence of a real-time traffic scene show that the proposed method is effective.
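
The adaptive GMM background-subtraction step can be sketched with OpenCV's MOG2 implementation, as below; the history and variance-threshold settings, the test video file name, and the noise filtering are illustrative assumptions, and the stationary-object counting logic is only outlined in a comment.

```python
import cv2

# Adaptive Gaussian mixture background model (OpenCV's MOG2 implementation);
# the history/threshold values are illustrative, not the paper's settings.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)

cap = cv2.VideoCapture("traffic.mp4")   # hypothetical test video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)    # moving-object regions
    fg_mask = cv2.medianBlur(fg_mask, 5) # suppress speckle noise
    # A pixel that stays in the foreground for many consecutive frames is a
    # candidate stationary object (the counting logic is omitted in this sketch).
    cv2.imshow("foreground", fg_mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```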

Lane Positioning in Highways Based on Road-sign Tracking by Kalman Filter (칼만필터 기반의 도로표지판 추적을 이용한 차량의 횡방향 위치인식)

  • Lee, Jaehong;Kim, Hakil
    • Transactions of the Korean Society of Automotive Engineers / v.22 no.3 / pp.50-59 / 2014
  • This paper proposes a method for localizing a vehicle, in particular its lateral position, for the purpose of recognizing the driving lane. By tracking road signs, the relative position between the vehicle and the sign is calculated, and the absolute position is obtained using the known installation regulations for road signs. The proposed method uses a Kalman filter for road-sign tracking and analyzes the motion using the pinhole camera model. To classify the road sign, ORB (Oriented FAST and Rotated BRIEF) features from the input image are matched against a database. From the absolute position of the vehicle, the driving lane is recognized. Experiments are performed on highway driving videos, and the results show that the proposed method can compensate for common GPS localization errors.
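
The core tracking step is a Kalman filter over the sign's image position. A minimal constant-velocity sketch using OpenCV's KalmanFilter follows; the noise covariances and the example detections are assumptions, not values from the paper.

```python
import cv2
import numpy as np

# Constant-velocity Kalman filter for tracking a road sign's pixel position
# (state: x, y, vx, vy; measurement: x, y). Noise values are assumptions.
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
kf.errorCovPost = np.eye(4, dtype=np.float32)

for detection in [(320, 180), (322, 182), (326, 186)]:  # hypothetical detections
    prediction = kf.predict()                            # predicted sign position
    kf.correct(np.array(detection, dtype=np.float32).reshape(2, 1))
    print(prediction[:2].ravel())
```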

Lane and Obstacle Recognition Using Artificial Neural Network (신경망을 이용한 차선과 장애물 인식에 관한 연구)

  • Kim, Myung-Soo;Yang, Sung-Hoon;Lee, Sang-Ho;Lee, Suk
    • Journal of the Korean Society for Precision Engineering / v.16 no.10 / pp.25-34 / 1999
  • In this paper, an algorithm is presented to recognize the lane and obstacles in highway road images. The road images obtained by a video camera undergo pre-processing that includes filtering, edge detection, and identification of lanes. After this pre-processing, part of the image is grouped into 27 sub-windows and fed into a three-layer feed-forward neural network. The neural network is trained to indicate the road direction and the presence or absence of an obstacle. The proposed algorithm has been tested with images different from the training images and demonstrated its efficacy for recognizing the lane and obstacles. Based on the test results, it can be said that the algorithm successfully combines traditional image processing and neural network principles towards a simpler and more efficient driver warning or assistance system.
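
A three-layer feed-forward pass over features from the 27 sub-windows could look like the sketch below; the hidden-layer size, activations, and output encoding are assumptions, since the abstract does not specify them.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W1, b1, W2, b2):
    """Three-layer feed-forward pass: 27 sub-window inputs -> hidden -> outputs."""
    h = np.tanh(x @ W1 + b1)             # hidden layer
    logits = h @ W2 + b2
    return 1.0 / (1.0 + np.exp(-logits)) # e.g. [road direction, obstacle flag]

# Hypothetical shapes: 27 inputs, 10 hidden units, 2 outputs.
W1, b1 = rng.normal(size=(27, 10)), np.zeros(10)
W2, b2 = rng.normal(size=(10, 2)), np.zeros(2)
x = rng.random(27)                        # features from the 27 sub-windows
print(forward(x, W1, b1, W2, b2))
```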

Development of Vision-Based Inspection System for Detecting Crack on the Lining of Concrete Tunnel (비젼센서를 이용한 콘크리트 터널 라이닝 균열검사 시스템의 개발)

  • 고봉수;조남규
    • Journal of the Korean Society for Precision Engineering / v.20 no.12 / pp.96-104 / 2003
  • To assess tunnel safety, cracks in the tunnel lining are conventionally measured by inspectors who observe them with the naked eye. Such manual inspection, however, is slow and subjective. This paper therefore proposes a vision-based inspection system for measuring cracks in the tunnel lining quickly and objectively. The system consists of an on-vehicle system and a lab system. The on-vehicle system acquires image data with a line CCD camera; the lab system extracts cracks and reports their thickness, length, and orientation using image processing. To improve the accuracy of crack recognition, the geometric properties of a crack were incorporated into the image processing. The proposed system was verified through experiments in both laboratory and field environments.
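
Crack extraction and the measurement of thickness, length, and orientation can be sketched with standard OpenCV operations, as below; the thresholding scheme, morphology, and elongation test are illustrative choices, not the authors' exact pipeline.

```python
import cv2
import numpy as np

def extract_crack_stats(gray):
    """Rough crack extraction from a grayscale lining image: threshold dark
    structures, filter noise, then take length/thickness/orientation from the
    minimum-area rectangle of each elongated blob. Parameter values are
    illustrative, not the paper's."""
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    stats = []
    for c in contours:
        if len(c) < 20:                      # ignore small specks
            continue
        (_, _), (w, h), angle = cv2.minAreaRect(c)
        length, thickness = max(w, h), min(w, h)
        if length > 5 * max(thickness, 1):   # keep elongated, crack-like blobs
            stats.append({"length_px": length, "thickness_px": thickness,
                          "orientation_deg": angle})
    return stats
```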

Quality Inspection of Dented Capsule using Curve Fitting-based Image Segmentation

  • Kwon, Ki-Hyeon;Lee, Hyung-Bong
    • Journal of the Korea Society of Computer and Information / v.21 no.12 / pp.125-130 / 2016
  • Automatic quality inspection by computer vision can provide a solution for the pharmaceutical industry. Pharmaceutical capsules are easily affected by flaws such as dents, cracks, and holes. Solving this quality-inspection problem requires computationally efficient image-processing techniques such as thresholding, boundary edge detection, and segmentation; some automated systems are available, but they are very expensive. In this paper, we developed a dented-capsule image-processing technique using edge-based image segmentation and TLS (Total Least Squares) curve fitting, and adopted a low-cost camera module for capturing capsule images. We tested and evaluated the accuracy and the training and testing times of classification algorithms, namely PCA (Principal Component Analysis), ICA (Independent Component Analysis), and SVM (Support Vector Machine). The results show that PCA and ICA have low accuracy, while SVM is accurate enough to classify dented capsules.
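
TLS curve fitting is the key step named in the abstract; for a straight boundary segment it reduces to an orthogonal (total) least-squares fit, which the sketch below implements via SVD. The dent measure and the sample edge points are assumptions for illustration.

```python
import numpy as np

def tls_line_fit(points):
    """Total least squares (orthogonal) line fit to 2D edge points.

    Returns a point on the line (the centroid) and a unit direction vector.
    This is the generic TLS formulation, not the paper's exact procedure.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]                  # vt[0] is the principal direction

def max_deviation(points, centroid, direction):
    """Largest orthogonal distance of the edge points from the fitted line;
    a large value suggests a dent along the capsule boundary."""
    pts = np.asarray(points, dtype=float) - centroid
    normal = np.array([-direction[1], direction[0]])
    return float(np.max(np.abs(pts @ normal)))

edge = [(0, 0), (1, 0.1), (2, -0.05), (3, 0.8), (4, 0.05)]  # hypothetical edge samples
c, d = tls_line_fit(edge)
print(max_deviation(edge, c, d))
```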

A Framework of Recognition and Tracking for Underwater Objects based on Sonar Images : Part 2. Design and Implementation of Realtime Framework using Probabilistic Candidate Selection (소나 영상 기반의 수중 물체 인식과 추종을 위한 구조 : Part 2. 확률적 후보 선택을 통한 실시간 프레임워크의 설계 및 구현)

  • Lee, Yeongjun;Kim, Tae Gyun;Lee, Jihong;Choi, Hyun-Taek
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.3 / pp.164-173 / 2014
  • In underwater robotics, vision is a key element for recognition in underwater environments. However, due to turbidity, an underwater optical camera is rarely usable, and an underwater imaging sonar, as an alternative, delivers low-quality sonar images that are not stable or accurate enough to detect natural objects by image processing. To address this, artificial landmarks based on the characteristics of ultrasonic waves, together with a recognition method based on a shape-matrix transformation, were proposed and validated in Part 1. However, that approach does not work properly on undulating and dynamically noisy sea bottoms. To solve this, we propose a framework consisting of a likelihood-candidate selection phase, a final-candidate selection phase, a recognition phase, and a tracking phase over sequential images, together with a particle-filter-based selection mechanism to eliminate false candidates and a mean-shift-based tracking algorithm. All four steps run in parallel and in real time, and the proposed framework is flexible enough to add or modify internal algorithms. A pool test and a sea trial were carried out to prove its performance, and the experimental results are analyzed in detail. The information obtained from the tracking phase, such as relative distance and bearing, is expected to be used for the control and navigation of underwater robots.
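
The tracking phase uses a mean-shift algorithm; a minimal OpenCV sketch over single-channel sonar frames follows. The histogram size, the initial window, and the assumption of 8-bit frames are illustrative, not the paper's implementation details.

```python
import cv2
import numpy as np

def track_candidate(frames, init_window):
    """Mean-shift tracking of a recognized landmark across sonar frames.

    `frames` are 8-bit single-channel sonar images and `init_window` is the
    (x, y, w, h) box from the recognition phase; both are assumptions here.
    """
    x, y, w, h = init_window
    roi = frames[0][y:y + h, x:x + w]
    roi_hist = cv2.calcHist([roi], [0], None, [64], [0, 256])   # intensity model
    cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    window, track = init_window, []
    for frame in frames[1:]:
        back_proj = cv2.calcBackProject([frame], [0], roi_hist, [0, 256], 1)
        _, window = cv2.meanShift(back_proj, window, term)      # shift the window
        track.append(window)                                    # (x, y, w, h) per frame
    return track
```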