• Title/Summary/Keyword: Camera-based Recognition


Image Distortion Compensation for Improved Gait Recognition (보행 인식 시스템 성능 개선을 위한 영상 왜곡 보정 기법)

  • Jeon, Ji-Hye;Kim, Dae-Hee;Yang, Yoon-Gi;Paik, Joon-Ki;Lee, Chang-Su
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.4 / pp.97-107 / 2009
  • In image-based gait recognition systems, physical factors such as the camera angle and lens distortion, and environmental factors such as illumination, determine the recognition performance. In this paper we present a robust gait recognition method that compensates for various types of image distortion. The proposed method is compared with an existing gait recognition algorithm while taking both physical and environmental distortion factors in the input image into account. More specifically, we first present an efficient image distortion compensation algorithm based on the projective transform, and then test its feasibility by comparing recognition performance with and without the compensation step. The proposed method yields universal gait data that are invariant to both distance and environment; the compensated data improved the gait recognition rate by about 41.5% for indoor images and about 55.5% for outdoor images. The proposed method can be used effectively for database (DB) construction and for searching and tracking specific objects.
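
The projective-transform compensation step named in this abstract can be illustrated with OpenCV. Below is a minimal sketch assuming four manually chosen point correspondences between the distorted view and a fronto-parallel reference view; the image path and point coordinates are placeholders, not values from the paper.

```python
import cv2
import numpy as np

# Load a frame from the walking sequence (placeholder path).
frame = cv2.imread("gait_frame.png")

# Four corresponding points: corners of the walkway as seen by the camera (src)
# and where they should map in a distortion-free, fronto-parallel view (dst).
src = np.float32([[112, 408], [530, 396], [590, 470], [60, 480]])
dst = np.float32([[100, 400], [540, 400], [540, 480], [100, 480]])

# Estimate the 3x3 projective (homography) matrix and warp the frame with it.
H = cv2.getPerspectiveTransform(src, dst)
compensated = cv2.warpPerspective(frame, H, (frame.shape[1], frame.shape[0]))

cv2.imwrite("gait_frame_compensated.png", compensated)
```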

RBFNNs-based Recognition System of Vehicle License Plate Using Distortion Correction and Local Binarization (왜곡 보정과 지역 이진화를 이용한 RBFNNs 기반 차량 번호판 인식 시스템)

  • Kim, Sun-Hwan;Oh, Sung-Kwun
    • The Transactions of The Korean Institute of Electrical Engineers / v.65 no.9 / pp.1531-1540 / 2016
  • In this paper, we propose a vehicle license plate recognition system based on Radial Basis Function Neural Networks (RBFNNs) that uses local binarization and the Canny edge algorithm. To detect the license plate area and recognize the plate numbers, binary images are generated using local binarization methods, which take local brightness into account, together with Canny edge detection. The generated binary images provide information about the size and position of the license plate. Additionally, image warping is used to compensate for the distortion of images captured from the side. After the license plate numbers are extracted, the dimensionality of the number images is reduced through Principal Component Analysis (PCA) and the result is used as the input to the RBFNNs. The Particle Swarm Optimization (PSO) algorithm is used to optimize several essential parameters that improve the accuracy of the RBFNNs, including the number of clusters and the fuzzification coefficient used in the FCM algorithm and the polynomial orders of the networks. Image data sets are obtained by varying the distance between a stationary vehicle and the camera and are then used to evaluate the performance of the proposed system.
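
The plate-localization preprocessing described here, brightness-aware local binarization plus Canny edges, can be sketched with OpenCV. The block size, offset constant, and Canny thresholds below are illustrative assumptions, not values taken from the paper.

```python
import cv2

# Grayscale input image of the vehicle (placeholder path).
gray = cv2.imread("vehicle.png", cv2.IMREAD_GRAYSCALE)

# Local (adaptive) binarization: each pixel is thresholded against the mean
# brightness of its 31x31 neighborhood, which tolerates uneven illumination.
binary = cv2.adaptiveThreshold(
    gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 31, 10)

# Canny edge map, used together with the binary image to locate the plate
# from its rectangular edge structure.
edges = cv2.Canny(gray, 100, 200)

cv2.imwrite("plate_binary.png", binary)
cv2.imwrite("plate_edges.png", edges)
```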

Vision-based hand Gesture Detection and Tracking System (비전 기반의 손동작 검출 및 추적 시스템)

  • Park Ho-Sik;Bae Cheol-soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.12C / pp.1175-1180 / 2005
  • We present a vision-based hand gesture detection and tracking system. Most conventional hand gesture recognition systems use simple hand detection methods, such as background subtraction under assumed static observation conditions, which are not robust against camera motion, illumination changes, and so on. We therefore propose a statistical method that detects and recognizes hand regions in images using their geometrical structure. Our hand tracking system also employs multiple cameras to reduce occlusion problems, and non-synchronous multiple observations improve system scalability. In our experiments, the proposed method achieves a recognition rate of 99.28%, an improvement of 3.91% over the conventional appearance-based method.
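
The abstract does not spell out its statistical model, so the sketch below uses a common stand-in for statistical hand-region detection: an assumed skin-color range followed by selection of the largest connected region. The HSV thresholds and file name are illustrative only and are not the paper's method.

```python
import cv2
import numpy as np

frame = cv2.imread("hand_frame.png")
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Crude skin-color likelihood: pixels whose hue/saturation/value fall inside an
# assumed skin range are treated as hand candidates.
lower = np.array([0, 40, 60], dtype=np.uint8)
upper = np.array([25, 180, 255], dtype=np.uint8)
mask = cv2.inRange(hsv, lower, upper)

# Keep the largest connected component as the hand-region hypothesis.
num, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
if num > 1:
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    hand_mask = np.uint8(labels == largest) * 255
    cv2.imwrite("hand_mask.png", hand_mask)
```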

Field Test of Automated Activity Classification Using Acceleration Signals from a Wristband

  • Gong, Yue;Seo, JoonOh
    • International conference on construction engineering and project management / 2020.12a / pp.443-452 / 2020
  • Workers' awkward postures and unreasonable physical loads can be corrected by monitoring construction activities, thereby increasing the safety and productivity of construction workers and projects. However, manual identification is time-consuming and subject to high human variance. In this regard, an automated activity recognition system based on an inertial measurement unit can help collect motion data rapidly and precisely. With the acceleration data, machine learning algorithms are used to train classifiers that automatically categorize activities. In previous studies, however, input acceleration data were extracted either from designed experiments or from simple construction work, so the collected data series are discontinuous and the activity categories are insufficient for real construction circumstances. This study aims to collect acceleration data during long-term continuous work in a construction project and to validate the feasibility of an activity recognition algorithm on the continuous motion data. The data collection covers two different workers performing formwork at the same site. An accelerometer, as well as a portable camera, is attached to each worker during the entire working session to simultaneously record motion data and the working activity. Supervised machine learning models are trained to classify activity at hierarchical levels, reaching 96.9% testing accuracy for recognizing rest versus work and 85.6% testing accuracy for identifying stationary, traveling, and rebar installation actions.
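
The hierarchical classification idea (first rest vs. work, then finer action classes on the work windows) can be sketched with scikit-learn on windowed acceleration features. The window length, feature set, random-forest classifiers, and synthetic data below are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(acc, fs=25, win_s=2):
    """Split an (N, 3) acceleration stream into windows of simple statistics."""
    step = fs * win_s
    feats = []
    for start in range(0, len(acc) - step, step):
        w = acc[start:start + step]
        feats.append(np.hstack([w.mean(axis=0), w.std(axis=0),
                                np.abs(np.diff(w, axis=0)).mean(axis=0)]))
    return np.array(feats)

# Placeholder data standing in for wristband recordings and manual labels.
rng = np.random.default_rng(0)
X = window_features(rng.normal(size=(5000, 3)))
y_level1 = rng.integers(0, 2, len(X))   # 0 = rest, 1 = work
y_level2 = rng.integers(0, 3, len(X))   # 0 = stationary, 1 = traveling, 2 = rebar

# Level 1: rest vs. work.
clf1 = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y_level1)

# Level 2: finer actions, trained only on windows labeled "work".
work = y_level1 == 1
clf2 = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[work], y_level2[work])
```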


3D Object Recognition for Localization of Outdoor Robotic Vehicles (실외 주행 로봇의 위치 추정을 위한 3 차원 물체 인식)

  • Baek, Seung-Min;Kim, Jae-Woong;Lee, Jang-Won;Zhaojin, Lu;Lee, Suk-Han
    • The HCI Society of Korea: Conference Proceedings / 2008.02a / pp.200-204 / 2008
  • In this paper, to solve the localization problem for outdoor navigation of robotic vehicles, we present a particle filter based 3D object recognition framework that can estimate the pose of a building or its entrance. The framework fuses multiple pieces of evidence and matches the object model over a sequence of images for robust recognition and pose estimation of 3D objects. The proposed approach features 1) the automatic selection and collection of an optimal set of evidence, 2) the derivation of multiple interpretations, as particles representing possible object poses in 3D space, and the assignment of their probabilities based on matching the object model with the evidence, and 3) the particle filtering of these interpretations over time with additional evidence obtained from a sequence of images. The proposed approach has been validated by stereo-camera based experiments on 3D object recognition and pose estimation, where a combination of photometric and geometric features is used as evidence.
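
A generic particle filter over object poses, of the kind this abstract describes, can be sketched as follows. The pose parameterization, motion noise, and matching score are placeholders; the paper's evidence-selection and model-matching steps are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500                                   # number of pose hypotheses (particles)

# Each particle is a pose hypothesis (x, y, z, yaw); the initial spread is assumed.
particles = rng.normal(loc=[0, 0, 5, 0], scale=[2, 2, 2, 0.5], size=(N, 4))
weights = np.full(N, 1.0 / N)

def match_score(pose, frame):
    """Placeholder for matching the 3D object model against image evidence."""
    return np.exp(-0.5 * np.sum(pose[:3] ** 2))   # dummy likelihood

for frame in range(10):                   # one iteration per new image
    # Predict: diffuse particles to account for vehicle / object motion.
    particles += rng.normal(scale=[0.2, 0.2, 0.2, 0.05], size=(N, 4))

    # Update: reweight each pose hypothesis by how well the model matches.
    weights *= np.array([match_score(p, frame) for p in particles])
    weights /= weights.sum()

    # Resample when the effective number of particles drops too low.
    if 1.0 / np.sum(weights ** 2) < N / 2:
        idx = rng.choice(N, size=N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)

# Pose estimate: weighted mean of the surviving hypotheses.
pose_estimate = np.average(particles, axis=0, weights=weights)
```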


The Chinese Characters Learning Contents Based on Gesture Recognition Using HMM Algorithm (HMM을 이용한 제스처 인식 기반 한자 학습 콘텐츠)

  • Song, Dae-Hyeon;Kim, Dong-Min;Lee, Chil-Woo
    • Journal of Korea Multimedia Society / v.15 no.8 / pp.1067-1074 / 2012
  • In this paper, we propose Chinese-character learning content based on gesture recognition using the HMM (hidden Markov model) algorithm. The system's input image is obtained as 3-dimensional information from a TOF camera, and gesture recognition consists of estimating the user's posture from two infrared images and recognizing gestures from the sequence of continuous poses. For human-computer communication, the system lets the user operate it easily through body motion alone, without any additional equipment. Because the system raises immersion and interest by using two large displays and various multimedia elements, it can maximize information transmission. The edutainment Chinese-character content proposed in this paper provides the educational benefit that users can master Chinese characters naturally and with interest, and a synergy effect through the content experience can be expected because it is based on gesture recognition.
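
Gesture classification with HMMs, as named in this abstract, is commonly done by training one HMM per gesture and picking the model with the highest likelihood for a new pose sequence. Below is a minimal sketch using the hmmlearn package; the gesture set, feature dimension, state count, and synthetic data are assumptions.

```python
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
GESTURES = ["swipe_left", "swipe_right", "circle"]   # hypothetical gesture set

# Train one Gaussian HMM per gesture on its pose sequences.
models = {}
for name in GESTURES:
    # Placeholder training data: 20 sequences of 30 frames, 6-D pose features.
    seqs = [rng.normal(size=(30, 6)) for _ in range(20)]
    X = np.vstack(seqs)
    lengths = [len(s) for s in seqs]
    m = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
    m.fit(X, lengths)
    models[name] = m

# Recognition: score an unseen pose sequence under each model.
test_seq = rng.normal(size=(30, 6))
predicted = max(models, key=lambda g: models[g].score(test_seq))
print("recognized gesture:", predicted)
```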

Development of a Recognition System of Smile Facial Expression for Smile Treatment Training (웃음 치료 훈련을 위한 웃음 표정 인식 시스템 개발)

  • Li, Yu-Jie;Kang, Sun-Kyung;Kim, Young-Un;Jung, Sung-Tae
    • Journal of the Korea Society of Computer and Information / v.15 no.4 / pp.47-55 / 2010
  • In this paper, we propose a recognition system of smile facial expressions for smile treatment training. The proposed system detects face candidate regions in camera images using Haar-like features. It then verifies whether each detected face candidate region is a face or non-face using SVM (Support Vector Machine) classification. For the detected face image, illumination normalization based on histogram matching is applied in order to minimize the effect of illumination changes. In the facial expression recognition step, the system computes a facial feature vector using PCA (Principal Component Analysis) and recognizes the smile expression using a multilayer perceptron artificial neural network. The proposed system lets the user train smile expressions by recognizing the user's smile expression in real time and displaying the amount of smile expression. Experimental results show that the proposed system improves the correct recognition rate by using SVM-based face region verification and histogram-matching-based illumination normalization.
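
The overall pipeline described here (Haar-like face detection, illumination normalization, PCA features, multilayer perceptron) can be sketched with OpenCV and scikit-learn. The image paths and labels are placeholders, histogram equalization stands in for the paper's histogram matching, and the SVM verification step is omitted.

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

# Haar cascade face detector shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_vector(img_path):
    gray = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        return None
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face = cv2.resize(gray[y:y + h, x:x + w], (64, 64))
    face = cv2.equalizeHist(face)          # illumination normalization (stand-in)
    return face.flatten().astype(np.float32)

# Hypothetical labeled data: vectors for smile (1) and neutral (0) faces.
X = np.array([v for v in (face_vector(p) for p in ["smile1.png", "neutral1.png"]) if v is not None])
y = np.array([1, 0])[: len(X)]

pca = PCA(n_components=min(20, len(X)))    # facial feature vector via PCA
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
mlp.fit(pca.fit_transform(X), y)           # multilayer perceptron smile classifier
```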

Hand Gesture Interface Using Mobile Camera Devices (모바일 카메라 기기를 이용한 손 제스처 인터페이스)

  • Lee, Chan-Su;Chun, Sung-Yong;Sohn, Myoung-Gyu;Lee, Sang-Heon
    • Journal of KIISE: Computing Practices and Letters / v.16 no.5 / pp.621-625 / 2010
  • This paper presents a hand motion tracking method for a hand gesture interface using the camera in mobile devices such as smart phones and PDAs. When the camera moves according to the user's hand gesture, global optical flow is generated, so robust hand movement estimation is possible by considering the dominant optical flow found through histogram analysis of the motion direction. A continuous hand gesture is segmented into unit gestures by motion state estimation using the motion phase, which is determined by the velocity and acceleration of the estimated hand motion. Feature vectors are extracted during movement states, and hand gestures are recognized at the end state of each gesture. A support vector machine (SVM), a k-nearest neighbor classifier, and a normal Bayes classifier are used for classification. The SVM shows an 82% recognition rate for 14 hand gestures.
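
The dominant-optical-flow idea can be sketched with OpenCV's dense flow and a direction histogram. The frame source, bin count, and magnitude threshold below are assumptions, not the paper's parameters.

```python
import cv2
import numpy as np

prev = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

# Dense optical flow between consecutive frames (the camera moves with the hand).
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

# Convert flow vectors to magnitude / direction and keep significant motion.
mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
moving = mag > 1.0                         # assumed magnitude threshold (pixels)

# Histogram of motion directions over 8 bins; the peak bin is the dominant
# global flow, taken as the current hand-movement direction.
hist, edges = np.histogram(ang[moving], bins=8, range=(0, 2 * np.pi))
dominant_dir = edges[np.argmax(hist)] + np.pi / 8
print("dominant motion direction (rad):", dominant_dir)
```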

Occluded Object Motion Tracking Method based on Combination of 3D Reconstruction and Optical Flow Estimation (3차원 재구성과 추정된 옵티컬 플로우 기반 가려진 객체 움직임 추적방법)

  • Park, Jun-Heong;Park, Seung-Min;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.5 / pp.537-542 / 2011
  • A mirror neuron is a neuron that fires both when an animal acts and when the animal observes the same action performed by another. We propose a 3D-reconstruction-based method for tracking the motion of occluded objects, analogous to the way the mirror neuron system fires even when the observed action is hidden. To model a system in which intention is recognized through such a firing effect, we compute depth information from the images of a stereo camera and reconstruct three-dimensional data. The movement direction of the object is estimated by optical flow on the image data produced by the 3D reconstruction. For a 3D reconstruction that enables tracking of occluded parts, image data are first acquired by the stereo camera, and the optical flow result is made robust to noise by a Kalman filter estimation algorithm. The reconstructed 3D image data are saved as a history while the object's motion is tracked. When the whole object or part of it disappears from the stereo camera's view because it is occluded by other objects, the object is restored by retrieving image data from the history of saved past images, and its motion continues to be tracked.
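
Two of the building blocks named here, stereo depth from disparity and Kalman-filter smoothing of a tracked position, can be sketched with OpenCV. The block-matcher settings, noise covariances, and the measured positions (including a None for an occluded frame) are placeholders, not the paper's configuration.

```python
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Depth cue: block-matching disparity between the stereo pair.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Constant-velocity Kalman filter over the object's (x, y) image position,
# used to keep the track stable through noise and short occlusions.
kf = cv2.KalmanFilter(4, 2)                # state: x, y, vx, vy; measurement: x, y
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
kf.errorCovPost = np.eye(4, dtype=np.float32)

for measured_xy in [(120.0, 80.0), (124.0, 83.0), None, (133.0, 90.0)]:
    prediction = kf.predict()              # predicted position, even when occluded
    if measured_xy is not None:            # correct only when the object is visible
        kf.correct(np.array(measured_xy, dtype=np.float32).reshape(2, 1))
    print("predicted position:", prediction[:2].ravel())
```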

YOLO-based lane detection system (YOLO 기반 차선검출 시스템)

  • Jeon, Sungwoo;Kim, Dongsoo;Jung, Hoekyung
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.3 / pp.464-470 / 2021
  • Automobiles were long used as a simple means of transportation, but as they rapidly become intelligent and smart, and as preferences for them grow, research on IT technology convergence is underway, requiring high-performance basic functions such as driver convenience and safety. As a result, autonomous and semi-autonomous vehicles have been developed, but these technologies sometimes deviate from lanes because of environmental problems, situations that the autonomous vehicle cannot judge, or lanes that the lane detector fails to recognize. To reduce such lane-departure problems in the lane detection system of autonomous vehicles, this paper exploits the fast recognition characteristic of YOLO (You Only Look Once) and uses a CSI camera to recognize situations affected by the surrounding environment. We propose a lane detection system that recognizes the driving situation and collects driving data to extract the region of interest.
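
A YOLO-based detector of the kind this abstract describes can be run through OpenCV's DNN module. The sketch below assumes a hypothetical Darknet model fine-tuned with a lane class; the cfg/weights file names, input size, and confidence threshold are placeholders, not artifacts released with the paper.

```python
import cv2
import numpy as np

# Hypothetical YOLO model with a "lane" class (placeholder file names).
net = cv2.dnn.readNetFromDarknet("yolo-lane.cfg", "yolo-lane.weights")
layer_names = net.getUnconnectedOutLayersNames()

frame = cv2.imread("road.png")             # e.g. a CSI-camera frame
h, w = frame.shape[:2]

blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(layer_names)

# Keep confident detections; each detected box is a lane region of interest.
for out in outputs:
    for det in out:
        scores = det[5:]
        conf = scores[np.argmax(scores)]
        if conf > 0.5:
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            x, y = int(cx - bw / 2), int(cy - bh / 2)
            cv2.rectangle(frame, (x, y), (x + int(bw), y + int(bh)), (0, 255, 0), 2)

cv2.imwrite("lane_detections.png", frame)
```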