• Title/Summary/Keyword: Camera-based Recognition

Multiple Moving Person Tracking based on the IMPRESARIO Simulator

  • Kim, Hyun-Deok; Jin, Tae-Seok
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2008.05a / pp.877-881 / 2008
  • In this paper, we propose a real-time people tracking system with multiple CCD cameras for security inside a building. The cameras are mounted on the ceiling of the laboratory so that the image data of passing people are fully overlapped. The implemented system recognizes people moving in various directions. To track people even when their images partially overlap, the proposed system estimates and tracks a bounding box enclosing each person in the tracking region. The approximated convex hull of each individual in the tracking area is obtained to provide more accurate tracking information. To achieve this goal, we propose a method for 3D walking-human tracking based on the IMPRESARIO framework, incorporating cascaded classifiers into hypothesis evaluation. The efficiency of adaptively selecting the cascaded classifiers is also presented, and we show that the cascaded classifiers improve the reliability of the likelihood calculation. Experimental results show that the proposed method can smoothly and effectively detect and track walking humans even through environments such as dense forests.
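
A minimal sketch of the per-person bounding-box and convex-hull extraction step described above, assuming an OpenCV (>= 4) background-subtraction pipeline; it is not the authors' IMPRESARIO implementation, and the function name and area threshold are illustrative.

```python
# Sketch: extract per-person blobs from a ceiling-camera foreground mask,
# then keep a bounding box and an approximated convex hull for each blob.
import cv2
import numpy as np

def extract_person_regions(frame_bgr, back_sub, min_area=500):
    """Return (bounding_box, convex_hull) pairs for each large foreground blob."""
    fg_mask = back_sub.apply(frame_bgr)                         # foreground mask
    _, fg_mask = cv2.threshold(fg_mask, 200, 255,
                               cv2.THRESH_BINARY)               # drop MOG2 shadows
    fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN,
                               np.ones((5, 5), np.uint8))       # remove speckle
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL,  # OpenCV >= 4
                                   cv2.CHAIN_APPROX_SIMPLE)
    regions = []
    for c in contours:
        if cv2.contourArea(c) < min_area:                       # ignore noise blobs
            continue
        box = cv2.boundingRect(c)                               # (x, y, w, h)
        hull = cv2.convexHull(c)                                # approximated outline
        regions.append((box, hull))
    return regions

# usage: back_sub = cv2.createBackgroundSubtractorMOG2(); call per frame
```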

Color-Based Real-Time Hand Region Detection with Robust Performance in Various Environments (다양한 환경에 강인한 컬러기반 실시간 손 영역 검출)

  • Hong, Dong-Gyun; Lee, Donghwa
    • IEMEK Journal of Embedded Systems and Applications / v.14 no.6 / pp.295-311 / 2019
  • The smart product market is growing year by year, and smart products are being used in many areas. There are various ways for users to interact with smart products, such as voice recognition, touch, and finger movements. Detecting an accurate hand region is the most important prerequisite for recognizing hand movements. In this paper, we propose a method to detect an accurate hand region in real time in various environments. Conventional methods for detecting a hand region include using the depth information of a multi-sensor camera, detecting the hand through machine learning, and detecting the hand region with a color model. Among these, the multi-sensor camera and machine learning approaches require a large amount of computation, so a high-performance PC is essential. Heavy computation is not suitable for embedded systems, and a high-end PC raises the price of a smart product. The algorithm proposed in this paper detects the hand region using a color model, corrects the problems of existing hand detection algorithms, and detects an accurate hand region across various experimental environments.
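
A minimal color-model sketch of the hand-region step, assuming OpenCV and a commonly cited Cr/Cb skin range; the threshold values are illustrative assumptions, not the paper's corrected algorithm.

```python
# Sketch: threshold skin tones in YCrCb (OpenCV's ordering of YCbCr) and keep
# the largest connected component as the hand candidate.
import cv2
import numpy as np

def detect_hand_region(frame_bgr):
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    # commonly cited skin range in Cr/Cb; tune per environment
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n <= 1:
        return None                                    # no skin-colored blob found
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return (labels == largest).astype(np.uint8) * 255  # binary hand mask
```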

Nonlinear model for estimating depth map of haze removal (안개제거의 깊이 맵 추정을 위한 비선형 모델)

  • Lee, Seungmin; Ngo, Dat; Kang, Bongsoon
    • Journal of IKEEE / v.24 no.2 / pp.492-496 / 2020
  • Visibility deteriorates in hazy weather, making it difficult to accurately recognize the information captured by a camera. Research is being actively conducted to remove haze so that camera-based applications such as object localization/detection and lane recognition can operate normally even in hazy weather. In this paper, we propose a nonlinear model for depth-map estimation, based on an extensive analysis showing that the difference between brightness and saturation in a hazy image increases non-linearly with scene depth. Quantitative evaluation (MSE, SSIM, TMQI) shows that the proposed haze removal method based on the nonlinear model is superior to other state-of-the-art methods.
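
The exact nonlinear mapping fitted in the paper is not reproduced here; the sketch below only illustrates the general idea of computing the per-pixel brightness-saturation gap and passing it through a saturating nonlinearity to obtain a depth map. The function form and the constants a and b are assumptions.

```python
# Sketch: depth map driven by the brightness-saturation gap of a hazy image.
import cv2
import numpy as np

def estimate_depth(frame_bgr, a=1.0, b=2.0):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.float32) / 255.0
    s, v = hsv[..., 1], hsv[..., 2]
    diff = np.clip(v - s, 0.0, 1.0)               # brightness minus saturation
    depth = a * (1.0 - np.exp(-b * diff))         # grows non-linearly with the gap
    return cv2.GaussianBlur(depth, (15, 15), 0)   # smooth local estimates

# the depth map then feeds a standard transmission recovery step,
# e.g. t = exp(-beta * depth), to restore the haze-free image.
```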

Multiple Camera-Based Correspondence of Ground Foot for Human Motion Tracking (사람의 움직임 추적을 위한 다중 카메라 기반의 지면 위 발의 대응)

  • Seo, Dong-Wook; Chae, Hyun-Uk; Jo, Kang-Hyun
    • Journal of Institute of Control, Robotics and Systems / v.14 no.8 / pp.848-855 / 2008
  • In this paper, we describe correspondence among multiple images taken by multiple cameras. Correspondence among multiple views is an interesting problem that often appears in applications such as visual surveillance and gesture recognition systems. We use the principal axis and the ground-plane homography to estimate the feet of a person. The principal axis is obtained from the silhouette region of the person, which is extracted by subtracting predetermined multiple background models from the current image containing the moving person. To calculate the ground-plane homography, we use landmarks on the ground plane in 3D space; the homography thus relates common ground points between two different views. A person's foot occupies exactly the same position in 3D space regardless of the view, and in this paper we represent it by an intersection: the intersection occurs where the principal axis in one image crosses the ground plane transformed from another image. However, the positions of this intersection differ depending on the camera view. Therefore, we construct the correspondence between the intersection in the current image and the intersection transformed from the other image by the homography. Corresponding points are confirmed when they lie within a short distance measured on the top-view plane. We then track a person by these corresponding points on the ground plane. Experimental results show that the proposed algorithm detects persons for tracking based on the correspondence of intersections with an accuracy of almost 90%.
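
A minimal sketch of the correspondence test, assuming OpenCV: the foot point detected on the ground in view A is mapped into view B through the ground-plane homography and accepted only if it falls within a short distance of view B's own detection. The landmark coordinates and the distance threshold are placeholders, not the authors' calibration.

```python
# Sketch: ground-plane homography from shared landmarks, then a distance test
# between a mapped foot point and the other view's detection.
import cv2
import numpy as np

# ground-plane landmarks seen in both views (hypothetical pixel coordinates)
pts_a = np.float32([[100, 400], [500, 410], [480, 200], [120, 190]])
pts_b = np.float32([[ 80, 380], [520, 400], [470, 180], [110, 170]])
H_ab, _ = cv2.findHomography(pts_a, pts_b)       # maps view A ground to view B

def corresponds(foot_a, foot_b, max_dist=20.0):
    """True if foot_a (view A) and foot_b (view B) are the same ground point."""
    p = np.float32([[foot_a]])                   # shape (1, 1, 2) for OpenCV
    mapped = cv2.perspectiveTransform(p, H_ab)[0, 0]
    return np.linalg.norm(mapped - np.float32(foot_b)) < max_dist

print(corresponds((300.0, 395.0), (295.0, 385.0)))
```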

A Vision-based Damage Detection for Bridge Cables (교량케이블 영상기반 손상탐지)

  • Ho, Hoai-Nam; Lee, Jong-Jae
    • 한국방재학회:학술대회논문집 / 2011.02a / pp.39-39 / 2011
  • This study presents an effective vision-based system for cable-bridge damage detection. In principle, both the outer and the inner parts of bridge cables need to be inspected. Starting in August 2010, a new research project supported by the Korea Ministry of Land, Transport and Maritime Affairs (MLTM) was initiated, focusing on damage detection for cable systems. This study focuses only on the surface damage detection algorithm of the vision-based system; an overview of the vision-based cable damage detection is given in Fig. 1. Basically, the algorithm combines an image enhancement technique with principal component analysis (PCA) to detect damage on cable surfaces. In more detail, the input image from a camera is processed with the image enhancement technique to improve image quality and is then projected into the PCA sub-space. Finally, the Mahalanobis square distance is used for pattern recognition. The algorithm was verified through laboratory tests on three types of cable surface. It gave very good results, and the next step of this study is to apply the algorithm to real cable bridges.
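
A minimal sketch of the described chain (enhancement, PCA sub-space projection, Mahalanobis square distance), assuming OpenCV and scikit-learn; the enhancement choice (CLAHE), patch layout, component count, and decision threshold are assumptions, not the authors' settings.

```python
# Sketch: learn a PCA sub-space from undamaged cable patches, then score a new
# patch by its Mahalanobis square distance in that sub-space.
import cv2
import numpy as np
from sklearn.decomposition import PCA

def enhance(gray):
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray)                       # contrast enhancement step

def fit_reference(patches):                        # patches: (N, h*w) array, N > 10
    pca = PCA(n_components=10).fit(patches)
    z = pca.transform(patches)
    mean = z.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(z, rowvar=False))
    return pca, mean, cov_inv

def mahalanobis_sq(patch, pca, mean, cov_inv):
    z = pca.transform(patch.reshape(1, -1))[0] - mean
    return float(z @ cov_inv @ z)                  # large value => likely damage
```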

EEG-based Customized Driving Control Model Design (뇌파를 이용한 맞춤형 주행 제어 모델 설계)

  • Jin-Hee Lee; Jaehyeong Park; Je-Seok Kim; Soon, Kwon
    • IEMEK Journal of Embedded Systems and Applications / v.18 no.2 / pp.81-87 / 2023
  • With the development of BCI devices, it is now possible to use EEG-based control technology, for example to move a robot's arms or legs to help with daily life. In this paper, we propose a customized BCI-based vehicle control model. The model collects the driver's EEG signals through a BCI device, determines the intended command by analyzing those signals, and then controls the direction of the vehicle based on that information. Because the EEG signals are noisy, the direction control is supplemented with a camera-based eye-tracking method to increase the accuracy of the recognized direction. By fusing the direction recognized from the EEG signal with the eye-tracking result, the vehicle was controlled in five directions: left turn, right turn, forward, backward, and stop. In the experimental results, the direction-recognition accuracy of the proposed model is about 75% or higher.
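
A minimal sketch of one possible fusion rule between the EEG-decoded command and the eye-tracking estimate; the five commands match the abstract, but the agreement/confidence policy is an illustrative assumption, not the paper's model.

```python
# Sketch: accept the EEG command when the gaze estimate agrees (or the EEG
# decode is very confident); otherwise fall back to the safe STOP state.
COMMANDS = {"left", "right", "forward", "backward", "stop"}

def fuse_command(eeg_cmd: str, eeg_conf: float, gaze_cmd: str) -> str:
    assert eeg_cmd in COMMANDS and gaze_cmd in COMMANDS
    if eeg_cmd == gaze_cmd:
        return eeg_cmd                 # both modalities agree
    if eeg_conf > 0.9:
        return eeg_cmd                 # trust a very confident EEG decode
    return "stop"                      # disagreement on noisy input: stop

print(fuse_command("left", 0.6, "left"))      # -> left
print(fuse_command("left", 0.6, "forward"))   # -> stop
```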

Wavelet Transform Technology for Translation-invariant Iris Recognition (위치 이동에 무관한 홍채 인식을 위한 웨이블렛 변환 기술)

  • Lim, Cheol-Su
    • The KIPS Transactions: Part B / v.10B no.4 / pp.459-464 / 2003
  • This paper proposes a wavelet-based image transform algorithm for human iris recognition. The technique is applied in the preprocessing stage, where the iris image is extracted from the user's eye captured by an imaging device such as a CCD camera, and it resolves the problems caused by translation and dilation of the iris due to torsional rotation of the eye and tilt of the head. Feature values are computed with the proposed translation-invariant wavelet transform algorithm rather than with the conventional wavelet transform method. We extract the best-matching iris feature values and compare the stored feature codes with the incoming data to identify the user. Our experiments demonstrate that this technique has a significant advantage in verification over other general wavelet algorithms in terms of FAR and FRR.
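
A minimal sketch of a translation-invariant feature extractor built on the undecimated (stationary) wavelet transform from PyWavelets; it shows only the general shift-insensitivity idea, and the wavelet, decomposition level, and sub-band statistics are assumptions rather than the paper's transform.

```python
# Sketch: stationary wavelet decomposition of a normalized iris band, followed
# by shift-robust statistics per detail sub-band as the feature code.
import numpy as np
import pywt

def iris_feature_vector(iris_band, wavelet="haar", level=2):
    """iris_band: 2-D float array whose sides are divisible by 2**level."""
    coeffs = pywt.swt2(iris_band, wavelet, level=level)
    feats = []
    for _, (cH, cV, cD) in coeffs:                    # detail sub-bands per level
        for band in (cH, cV, cD):
            feats.extend([band.mean(), band.std()])   # shift-robust statistics
    return np.array(feats)

code_a = iris_feature_vector(np.random.rand(64, 256))
code_b = iris_feature_vector(np.random.rand(64, 256))
distance = np.linalg.norm(code_a - code_b)            # compare against stored codes
```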

Design & Implementation of Lipreading System using the Articulatory Controls Analysis of the Korean 5 Vowels (한국어 5모음의 조음적 제어 분석을 이용한 자동 독화에 관한 연구)

  • Lee, Kyong-Ho; Kum, Jong-Ju; Rhee, Sang-Bum
    • Journal of the Korea Computer Industry Society / v.8 no.4 / pp.281-288 / 2007
  • In this paper, we set six interest points around the lips and analyze and characterize how the distances between these six points change when people pronounce the five Korean vowels. A total of 450 data samples were gathered and analyzed. Based on this analysis, a recognition system was constructed and recognition experiments were performed. In this system, a camera connected to a computer was used to measure the distance vector between the six interest points. For the experiments, 80 normal persons were sampled, and the observational error between samples was corrected using a normalization method. Data from 30 persons were used for analysis and data from 50 persons for the experiments. We constructed three recognition systems, of which the neural network system gave the best recognition rate of 87.44%.
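
A minimal sketch of the pipeline implied above, assuming scikit-learn: build the pairwise distance vector of the six lip points, normalize it for speaker scale, and classify into one of the five vowels with a small neural network. The training data below are random placeholders and the network size is an assumption, not the paper's configuration.

```python
# Sketch: 15 pairwise distances of 6 lip points -> normalized feature -> MLP.
import numpy as np
from itertools import combinations
from sklearn.neural_network import MLPClassifier

def distance_vector(points):                       # points: (6, 2) array
    d = [np.linalg.norm(points[i] - points[j])
         for i, j in combinations(range(6), 2)]    # all 15 pairwise distances
    d = np.asarray(d)
    return d / d.max()                             # simple scale normalization

# hypothetical training data: one distance vector per utterance, label 0..4
X = np.vstack([distance_vector(np.random.rand(6, 2)) for _ in range(450)])
y = np.random.randint(0, 5, size=450)              # the 5 Korean vowels

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)
print(clf.predict(X[:3]))
```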

A Study on the Environment Recognition System of Biped Robot for Stable Walking (안정적 보행을 위한 이족 로봇의 환경 인식 시스템 연구)

  • Song, Hee-Jun; Lee, Seon-Gu; Kang, Tae-Gu; Kim, Dong-Won; Park, Gwi-Tae
    • Proceedings of the KIEE Conference / 2006.07d / pp.1977-1978 / 2006
  • This paper discusses a vision-based sensor fusion system for biped robot walking. Most research on biped walking robots has focused on the walking algorithm itself. However, developing vision systems for biped walking robots is an important and urgent issue, since such robots are ultimately developed not only for research but also to be utilized in real life. In this research, systems for environment recognition and tele-operation have been developed for task assignment and execution by the biped robot, as well as a human-robot interaction (HRI) system. To carry out certain tasks, an object tracking system using a modified optical flow algorithm and an obstacle recognition system using enhanced template matching and a hierarchical support vector machine algorithm, fed by a wireless vision camera, are implemented in a sensor fusion system together with the other sensors installed in the biped walking robot. Systems for robot manipulation and for communication with the user have also been developed.
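
A minimal sketch of the object-tracking ingredient only, using standard pyramidal Lucas-Kanade optical flow in OpenCV; the paper's modified optical flow, enhanced template matching, and hierarchical SVM stages are not reproduced here.

```python
# Sketch: track feature points between consecutive camera frames with
# pyramidal Lucas-Kanade optical flow and keep only the successful matches.
import cv2
import numpy as np

def track_points(prev_gray, curr_gray, prev_pts):
    """prev_pts: (N, 1, 2) float32 corner points from the previous frame."""
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   prev_pts, None,
                                                   winSize=(21, 21),
                                                   maxLevel=3)
    good = status.ravel() == 1
    return prev_pts[good], curr_pts[good]          # matched point pairs

# usage sketch: seed prev_pts with cv2.goodFeaturesToTrack(prev_gray, 200, 0.01, 10)
```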

Presentation Control System using Vision Based Hand-Gesture Recognition (Vision 기반 손동작 인식을 활용한 프레젠테이션 제어 시스템)

  • Lim, Kyoung-Jin; Kim, Eui-Jeong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2010.10a / pp.281-284 / 2010
  • In this paper, we present hand-gesture recognition for practical computing on color images from a camera. The color images are binarized and labeled using the YCbCr color model. For each labeled area, the center point of the hand is found by searching for the maximum inscribed circle, which is obtained by applying a Voronoi diagram. The hand region is then extracted by analyzing the elliptic components adjacent to the found maximum circle. Using these elliptic components and the maximum inscribed circle, we present a presentation control system. This algorithm has the advantage that background objects with colors similar to the hand, which cause problems for hand-gesture recognition in various environments, can be effectively eliminated.
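
A minimal sketch of the palm-center step, assuming OpenCV: after YCbCr skin binarization, the maximum inscribed circle of the hand blob is taken at the peak of the distance transform, which is an equivalent shortcut to the Voronoi-diagram search described above. The skin thresholds are illustrative assumptions.

```python
# Sketch: skin binarization in YCrCb, then the distance-transform peak gives
# the center and radius of the maximum inscribed circle of the hand region.
import cv2
import numpy as np

def palm_center(frame_bgr):
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))   # skin binarization
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
    radius = float(dist.max())                 # radius of max inscribed circle
    if radius == 0:
        return None
    y, x = np.unravel_index(np.argmax(dist), dist.shape)
    return (int(x), int(y)), radius            # circle center and radius
```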
