• Title/Summary/Keyword: vision recognition

1,048 search results

Ultrasonic and Vision Data Fusion for Object Recognition (초음파센서와 시각센서의 융합을 이용한 물체 인식에 관한 연구)

  • Ko, Joong-Hyup;Kim, Wan-Ju;Chung, Myung-Jin
    • Proceedings of the KIEE Conference / 1992.07a / pp.417-421 / 1992
  • Ultrasonic and vision data need to be fused for efficient object recognition, especially in mobile robot navigation. In the proposed approach, the whole ultrasonic echo signal is utilized, and data fusion is performed based on each sensor's characteristics. The approach is shown to be effective through experimental results.
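The abstract does not reproduce the fusion rule, but a standard way to combine two range sensors according to their characteristics is inverse-variance weighting of independent estimates. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def fuse_estimates(z_ultra, var_ultra, z_vision, var_vision):
    """Fuse two independent range estimates by inverse-variance weighting.

    Each measurement is weighted by the reciprocal of its variance, so the
    more reliable sensor dominates; the fused variance is always smaller
    than either input variance.
    """
    w_u = 1.0 / var_ultra
    w_v = 1.0 / var_vision
    fused = (w_u * z_ultra + w_v * z_vision) / (w_u + w_v)
    fused_var = 1.0 / (w_u + w_v)
    return fused, fused_var
```

With equal variances this reduces to a plain average; with unequal variances the estimate shifts toward the better sensor.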


Obstacle Recognition Using the Vision and Ultrasonic Sensor in a Mobile Robot (영상과 초음파 정보를 이용한 이동로보트의 장애물 인식)

  • Park, Min-Ki;Park, Min-Yong
    • Journal of the Korean Institute of Telematics and Electronics B / v.32B no.9 / pp.1154-1161 / 1995
  • In this paper, a new method is proposed in which vision and ultrasonic sensors are used to recognize obstacles and to obtain their position and size. Ultrasonic sensors are used to obtain the actual navigation path width of the mobile robot. In conjunction with camera images of the path, obstacles are recognized and their distance, direction, and width are determined. The characteristics of the sensors and mobile robots in general use make it difficult to recognize all environments; accordingly, a restricted environment is employed for this study.


Pose and Expression Invariant Alignment based Multi-View 3D Face Recognition

  • Ratyal, Naeem;Taj, Imtiaz;Bajwa, Usama;Sajid, Muhammad
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.10 / pp.4903-4929 / 2018
  • In this study, a fully automatic pose and expression invariant 3D face alignment algorithm is proposed to handle frontal and profile face images, based on a two-pass coarse-to-fine alignment strategy. The first pass coarsely aligns the face images to an intrinsic coordinate system (ICS) through a single 3D rotation, and the second pass aligns them at a fine level using a minimum nose tip-scanner distance (MNSD) approach. For face recognition, multi-view faces are synthesized to exploit real 3D information and test the efficacy of the proposed system. Owing to its optimal separating hyperplane (OSH), a Support Vector Machine (SVM) is employed for the multi-view face verification (FV) task. In addition, a multi-stage unified classifier based face identification (FI) algorithm is employed, which combines results from seven base classifiers, two parallel face recognition algorithms, and an exponential rank combiner, all in a hierarchical manner. The performance figures of the proposed methodology are corroborated by extensive experiments performed on four benchmark datasets: GavabDB, Bosphorus, UMB-DB, and FRGC v2.0. Results show marked improvement in alignment accuracy and recognition rates. Moreover, a computational complexity analysis has been carried out for the proposed algorithm, which reveals its superiority in terms of computational efficiency as well.
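The coarse first pass, a single 3D rotation into an intrinsic coordinate system, can be approximated by rotating a point cloud onto its principal axes. A hedged sketch (PCA-based alignment; the paper's exact ICS construction is not given in the abstract):

```python
import numpy as np

def coarse_align(points):
    """Coarsely align an N x 3 point cloud to its principal axes with one rotation.

    The eigenvectors of the covariance matrix form an orthonormal basis;
    rotating into that basis decorrelates the axes, analogous in spirit to
    aligning a face scan to an intrinsic coordinate system.
    """
    centered = points - points.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov(centered.T))
    R = vecs[:, ::-1]            # largest-variance axis first
    if np.linalg.det(R) < 0:     # keep a proper rotation (det = +1)
        R[:, -1] *= -1
    return centered @ R, R
```

After alignment, the covariance of the transformed points is diagonal, i.e. the cloud's dominant directions coincide with the coordinate axes.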

Scene Recognition based Autonomous Robot Navigation robust to Dynamic Environments (동적 환경에 강인한 장면 인식 기반의 로봇 자율 주행)

  • Kim, Jung-Ho;Kweon, In-So
    • The Journal of Korea Robotics Society / v.3 no.3 / pp.245-254 / 2008
  • Recently, many vision-based navigation methods have been introduced as intelligent robot applications. However, many of these methods mainly focus on finding the database image that corresponds to a query image. Thus, if the environment changes, for example, when objects move in the environment, a robot is unlikely to find consistent corresponding points with one of the database images. To solve these problems, we propose a novel navigation strategy that uses fast motion estimation and a practical scene recognition scheme that prepares for the kidnapping problem, defined as the problem of re-localizing a mobile robot after it has undergone an unknown motion or visual occlusion. This algorithm is based on motion estimation by a camera to plan the next movement of the robot and an efficient outlier rejection algorithm for scene recognition. Experimental results demonstrate the capability of the vision-based autonomous navigation in dynamic environments.
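The abstract does not specify the outlier rejection algorithm; a common stand-in for rejecting bad feature correspondences (e.g. from objects that moved) is RANSAC. A toy sketch estimating a 2D image translation between matched points (all names and parameters are illustrative, not the paper's scheme):

```python
import numpy as np

def ransac_translation(src, dst, iters=200, thresh=2.0, seed=0):
    """Toy RANSAC: estimate a 2D translation between matched point sets,
    flagging correspondences that disagree with the consensus as outliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                        # hypothesis from one match
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit the translation on the consensus set only.
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return t, best_inliers
```

Correspondences on moved objects fall outside the consensus and are discarded, which is the intuition behind robustness to dynamic scenes.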


Recognition of Patterns and Marks on the Glass Panel of Computer Monitor (컴퓨터 모니터용 유리 패널의 문자 마크 인식)

  • Ahn, In-Mo;Lee, Kee-Sang
    • The Transactions of the Korean Institute of Electrical Engineers P / v.52 no.1 / pp.35-41 / 2003
  • In this paper, a machine vision system for recognizing and classifying the patterns and marks engraved by die molding or laser marking on the glass panels of computer monitors is suggested and evaluated experimentally. The vision system is equipped with a neural network and an NGC pattern classifier that includes a search process based on normalized grayscale correlation and adaptive binarization. This system is found to be applicable even to cases in which segmenting the pattern area from the background with an ordinary blob coloring technique is quite difficult. The inspection process is accomplished through NGC hypothesis generation and ANN verification. The proposed pattern recognition system is composed of three parts: an NGC matching process with a preprocessing unit for acquiring the best-quality binary image data, a neural network-based recognition algorithm, and the learning algorithm for the neural network. Another contribution of this paper is a method of generating the training patterns from only a few typical product samples in place of real images of all types of good products.
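Normalized grayscale correlation, the matching score behind the NGC classifier, can be sketched directly; the brute-force search below is a simplification of whatever search process the paper actually uses:

```python
import numpy as np

def ngc_score(patch, template):
    """Normalized grayscale correlation in [-1, 1]; invariant to brightness
    offset and contrast scaling because both signals are mean-centered."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def ngc_search(image, template):
    """Exhaustive NGC search: best-matching top-left position and its score."""
    H, W = image.shape
    h, w = template.shape
    best, best_s = (0, 0), -1.0
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            s = ngc_score(image[y:y + h, x:x + w], template)
            if s > best_s:
                best_s, best = s, (y, x)
    return best, best_s
```

Because the score is brightness-invariant, a mark embedded on a lighter or darker background still correlates perfectly with its template.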

Object Recognition for Mobile Robot using Context-based Bi-directional Reasoning (상황 정보 기반 양방향 추론 방법을 이용한 이동 로봇의 물체 인식)

  • Lim, G.H.;Ryu, G.G.;Suh, I.H.;Kim, J.B.;Zhang, G.X.;Kang, J.H.;Park, M.K.
    • Proceedings of the KIEE Conference / 2007.04a / pp.6-8 / 2007
  • In this paper, we propose a reasoning system for object recognition and space classification that uses not only visual features but also contextual information. A mobile robot, especially a vision-based one, must perceive objects and classify spaces in real environments. Several visual features, such as texture, SIFT, and color, are used for object recognition. Because of sensor uncertainty and object occlusion, vision-based perception faces many difficulties. To show the validity of our reasoning system, experimental results are presented in which objects and spaces are inferred by bi-directional rules even with partial and uncertain information. The system combines top-down and bottom-up approaches.
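The bi-directional (bottom-up plus top-down) rule inference can be illustrated with a toy rule base; the rules and predicates below are invented examples, not taken from the paper:

```python
# Toy knowledge base: each rule maps a set of evidence to a conclusion.
RULES = {
    frozenset({"keyboard", "monitor"}): "desk",
    frozenset({"desk", "whiteboard"}): "office",
}

def forward_chain(facts):
    """Bottom-up reasoning: derive new conclusions from observed facts
    until a fixed point is reached (objects -> space)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for cond, concl in RULES.items():
            if cond <= facts and concl not in facts:
                facts.add(concl)
                changed = True
    return facts

def backward_expected(goal):
    """Top-down reasoning: which evidence would support a hypothesized
    conclusion (space -> expected objects)?"""
    for cond, concl in RULES.items():
        if concl == goal:
            return set(cond)
    return set()
```

Forward chaining fills in what partial observations imply, while the backward pass tells the robot what to look for next, which is the intuition behind inference under partial and uncertain information.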


Vision based place recognition using Bayesian inference with feedback of image retrieval

  • Yi, Hu;Lee, Chang-Woo
    • Proceedings of the Korea Information Processing Society Conference / 2006.11a / pp.19-22 / 2006
  • In this paper we present a vision-based place recognition method that uses a Bayesian method with feedback from image retrieval. Both the Bayesian method and the image retrieval method are based on interest features that are invariant to many image transformations. The interest features are detected using the Harris-Laplacian detector, and descriptors are then generated from the image patches centered at the features' positions, in the same manner as SIFT. The Bayesian method contains two stages: learning and recognition. The image retrieval result is fed back into the Bayesian recognition stage to achieve robustness and confidence. The experimental results show the effectiveness of our method.
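The recognition stage of a Bayesian place recognizer amounts to the usual posterior update over place hypotheses; the retrieval feedback could then reweight the prior for the next step. A minimal sketch of the update itself (the paper's exact model is not given in the abstract):

```python
import numpy as np

def bayes_update(prior, likelihoods):
    """One Bayesian recognition step over place hypotheses:
    posterior ∝ likelihood × prior, normalized to sum to 1."""
    post = np.asarray(prior, dtype=float) * np.asarray(likelihoods, dtype=float)
    return post / post.sum()
```

Repeated updates with observation likelihoods concentrate the posterior on the correct place even if any single observation is ambiguous.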


A Vision-Based Method to Find Fingertips in a Closed Hand

  • Chaudhary, Ankit;Vatwani, Kapil;Agrawal, Tushar;Raheja, J.L.
    • Journal of Information Processing Systems / v.8 no.3 / pp.399-408 / 2012
  • Hand gesture recognition is an important area of research in the field of Human Computer Interaction (HCI). The geometric attributes of the hand play an important role in hand shape reconstruction and gesture recognition. In particular, fingertips are among the most important attributes for the detection of hand gestures and can provide valuable information from hand images. Many methods are available in the scientific literature for fingertip detection with an open hand, but only very poor results are available for fingertip detection when the hand is closed. This paper presents a new method for the detection of fingertips in a closed hand using a corner detection method and an advanced edge detection algorithm. It is important to note that skin color segmentation did not work for fingertip detection in a closed hand. The proposed method therefore applies Gabor filter techniques to detect edges and then applies a corner detection algorithm to locate fingertips along those edges. To check its accuracy, the method was tested on a large number of images taken with a webcam and achieved a high detection rate. The method was further implemented on video to test its validity for real-time image capture. This closed-hand fingertip detection would help in controlling an electro-mechanical robotic hand via hand gestures in a natural way.
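The corner detection applied after edge extraction can be illustrated with a Harris-style response map; the simplified version below (3x3 box window, no Gaussian smoothing) is an assumption about the kind of detector used, not the paper's exact algorithm:

```python
import numpy as np

def box_sum(a):
    """3x3 box filter (zero padding) computed via shifted sums."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def harris_response(img, k=0.04):
    """Harris corner response map for a grayscale image.

    The structure tensor entries (Ix^2, Iy^2, IxIy) are summed over a local
    window; the response det - k * trace^2 is high only where the gradient
    varies in two directions, i.e. at corners rather than along edges.
    """
    Iy, Ix = np.gradient(img.astype(float))
    Sxx = box_sum(Ix * Ix)
    Syy = box_sum(Iy * Iy)
    Sxy = box_sum(Ix * Iy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace
```

On a step-edge image the response is negative along the edge and peaks at the corner, which is why corner detection can pick out fingertip-like points on an edge map.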

A Study on Vision-based Robust Hand-Posture Recognition Using Reinforcement Learning (강화 학습을 이용한 비전 기반의 강인한 손 모양 인식에 대한 연구)

  • Jang Hyo-Young;Bien Zeung-Nam
    • Journal of the Institute of Electronics Engineers of Korea CI / v.43 no.3 s.309 / pp.39-49 / 2006
  • This paper proposes a hand-posture recognition method using reinforcement learning to improve the performance of vision-based hand-posture recognition. The difficulties in vision-based hand-posture recognition lie in viewing-direction dependency and the self-occlusion problem caused by the high degree of freedom of the human hand. General approaches to these problems include using multiple cameras and limiting the relative angle between the cameras and the user's hand. When multiple cameras are used, however, a fusion technique is needed to reach a final decision. Limiting the angle of the user's hand restricts the user's freedom. The proposed method combines angular features and appearance features to describe hand postures through a two-layered data structure and reinforcement learning. The validity of the proposed method is evaluated by applying it to a hand-posture recognition system using three cameras.