• Title/Summary/Keyword: Vision-based recognition


Design and Application of Vision Box Based on Embedded System (Embedded System 기반 Vision Box 설계와 적용)

  • Lee, Jong-Hyeok
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.13 no.8
    • /
    • pp.1601-1607
    • /
    • 2009
  • A vision system is an object recognition system that analyzes image information captured through a camera. Vision systems can be applied to various fields, and automobile type recognition is one of them. There has been much research on automobile type recognition algorithms, but these algorithms involve complex computation and therefore require long processing times. In this paper, we designed a vision box based on an embedded system and proposed an automobile type recognition system using the vision box. In pretests, the system achieved a 100% recognition rate under optimal conditions. When the conditions were changed by lighting and viewing angle, recognition remained possible, but the pattern score was lowered. It was also observed that the proposed system satisfies the processing-time and recognition-rate criteria of industrial applications.
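
A rough sense of the "pattern score" idea can be given with ordinary normalized template matching; the sketch below is an illustrative stand-in (OpenCV, with hypothetical template files and acceptance threshold), not the vision box's actual implementation.

```python
# Illustrative sketch: normalized-correlation pattern matching as a stand-in for
# the vision box's "pattern score". Template paths and the 0.8 threshold are
# assumptions, not values from the paper.
import cv2

def best_pattern_match(image_path, template_paths):
    """Return (best_label, best_score) over a set of vehicle-type templates."""
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    best_label, best_score = None, -1.0
    for label, tpl_path in template_paths.items():
        template = cv2.imread(tpl_path, cv2.IMREAD_GRAYSCALE)
        # TM_CCOEFF_NORMED yields a score in [-1, 1]; higher means a closer match.
        result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, _ = cv2.minMaxLoc(result)
        if max_val > best_score:
            best_label, best_score = label, max_val
    return best_label, best_score

if __name__ == "__main__":
    templates = {"sedan": "sedan_tpl.png", "suv": "suv_tpl.png"}  # hypothetical files
    label, score = best_pattern_match("captured_frame.png", templates)
    # Under changed lighting or viewing angle the score drops, mirroring the
    # lowered "pattern score" reported in the abstract.
    print(label, score, "accepted" if score > 0.8 else "rejected")
```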

Recognition Performance of Vestibular-Ocular Reflex Based Vision Tracking System for Mobile Robot (이동 로봇을 위한 전정안반사 기반 비젼 추적 시스템의 인식 성능 평가)

  • Park, Jae-Hong;Bhan, Wook;Choi, Tae-Young;Kwon, Hyun-Il;Cho, Dong-Il;Kim, Kwang-Soo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.5
    • /
    • pp.496-504
    • /
    • 2009
  • This paper presents the recognition performance of a VOR (Vestibular-Ocular Reflex) based vision tracking system for a mobile robot. The VOR is a reflex eye movement that, during head movements, produces an eye movement in the direction opposite to the head movement, thus keeping the image of an object of interest at the center of the retina. We applied this physiological concept to a vision tracking system to achieve high recognition performance in mobile environments. The proposed method was implemented in a vision tracking system consisting of a motion sensor module and an actuation module with a vision sensor. We tested the developed system on an x/y stage and a rate table for linear motion and angular motion, respectively. The experimental results show that the recognition rate of the VOR-based method is three times higher than that of a conventional non-VOR vision system, mainly because the VOR-based tracking system keeps the line of sight fixed on the object, reducing image blur in dynamic environments. This suggests that the VOR concept proposed in this paper can be applied efficiently to vision tracking systems for mobile robots.
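
The reflex itself is simple to state: the camera is counter-rotated by the angular motion the gyros measure. A minimal sketch, with hypothetical sensor values and a reflex gain of 1.0:

```python
# Minimal sketch of the VOR idea: drive the camera pan/tilt with the negative of
# the gyro-measured body rotation so the line of sight stays on the object.
# The sensor values and update interface are hypothetical, not the paper's API.
def vor_step(pan_deg, tilt_deg, gyro_yaw_dps, gyro_pitch_dps, dt):
    """Integrate body rates and counter-rotate the camera (reflex gain = 1.0)."""
    pan_deg  -= gyro_yaw_dps * dt    # camera pans opposite to body yaw
    tilt_deg -= gyro_pitch_dps * dt  # camera tilts opposite to body pitch
    return pan_deg, tilt_deg

# Example: a 10 deg/s yaw of the robot over 0.1 s is cancelled by a -1 deg pan.
pan, tilt = vor_step(0.0, 0.0, gyro_yaw_dps=10.0, gyro_pitch_dps=0.0, dt=0.1)
print(pan, tilt)  # -> -1.0 0.0
```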

Design and Implementation of Vision Box Based on Embedded Platform (Embedded Platform 기반 Vision Box 설계 및 구현)

  • Kim, Pan-Kyu;Lee, Jong-Hyeok
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.11 no.1
    • /
    • pp.191-197
    • /
    • 2007
  • A vision system is an object recognition system that analyzes image information captured through a camera. Vision systems can be applied to various fields, and vehicle recognition is one of them. Many vehicle recognition algorithms have been proposed, but they involve complex computation, so they require long processing times and sometimes cause problems. In this research, we proposed a vehicle type recognition system using a vision box based on an embedded platform. In tests, the system achieved a 100% recognition rate under optimal conditions. When the conditions were changed by lighting, noise, and viewing angle, the recognition rate decreased as the pattern score was lowered, and the recognition speed slowed.

Object Recognition using Smart Tag and Stereo Vision System on Pan-Tilt Mechanism

  • Kim, Jin-Young;Im, Chang-Jun;Lee, Sang-Won;Lee, Ho-Gil
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2005.06a
    • /
    • pp.2379-2384
    • /
    • 2005
  • We propose a novel method for object recognition using a smart tag system with stereo vision on a pan-tilt mechanism. We developed a smart tag that includes an IRED device; the smart tag is attached to the object. We also developed a stereo vision system that pans and tilts so that the object image is centered in each camera view. The stereo vision system on the pan-tilt mechanism can map the position of the IRED to the robot coordinate system by using the pan and tilt angles. Then, to map the size and pose of the object to the robot coordinate system, we used a simple model-based vision algorithm. To increase the feasibility of tag-based object recognition, we implemented our approach using techniques that are as easy and simple as possible.
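
The coordinate-mapping step can be sketched as pinhole stereo triangulation followed by a rotation through the pan and tilt angles; the baseline, focal length, and frame conventions below are illustrative assumptions, not the paper's calibration.

```python
# Illustrative sketch: triangulate the IRED in the camera frame, then rotate
# through the pan/tilt angles into the robot frame. All numbers are assumptions.
import numpy as np

def triangulate(xl, xr, y, fx, baseline):
    """Pinhole stereo: disparity -> depth; returns a point in the camera frame."""
    disparity = xl - xr
    z = fx * baseline / disparity
    x = xl * z / fx
    y_cam = y * z / fx
    return np.array([x, y_cam, z])

def camera_to_robot(p_cam, pan_rad, tilt_rad):
    """Undo the head's pan (about Y) and tilt (about X) to reach robot axes."""
    c, s = np.cos(pan_rad), np.sin(pan_rad)
    R_pan = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    c, s = np.cos(tilt_rad), np.sin(tilt_rad)
    R_tilt = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    return R_pan @ R_tilt @ p_cam

p_cam = triangulate(xl=40.0, xr=30.0, y=5.0, fx=700.0, baseline=0.12)
print(camera_to_robot(p_cam, pan_rad=0.2, tilt_rad=-0.1))
```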


Development of Non-Contacting Automatic Inspection Technology of Precise Parts (정밀부품의 비접촉 자동검사기술 개발)

  • Lee, Woo-Sung;Han, Sung-Hyun
    • Transactions of the Korean Society of Machine Tool Engineers
    • /
    • v.16 no.6
    • /
    • pp.110-116
    • /
    • 2007
  • This paper presents a new technique for real-time recognition of the shapes and model numbers of parts based on an active vision approach. The main focus of this paper is to apply a 3D object recognition technique to non-contacting inspection of the shape and external form of precision parts based on pattern recognition. In the field of computer vision, there have been many object recognition approaches, and most of them focus on recognition from a given input image (passive vision). It is, however, hard to distinguish an object from model objects that have similar aspects to each other. Recently, active vision has been perceived as one of the promising approaches to realizing a robust object recognition system. The performance is illustrated by experiments on several parts and models.

A Computer Vision-Based Banknote Recognition System for the Blind with an Accuracy of 98% on Smartphone Videos

  • Sanchez, Gustavo Adrian Ruiz
    • Journal of the Korea Society of Computer and Information
    • /
    • v.24 no.6
    • /
    • pp.67-72
    • /
    • 2019
  • This paper proposes a computer vision-based banknote recognition system intended to assist the blind. The system is robust and fast in recognizing banknotes in videos recorded with a smartphone in real-life scenarios. To reduce the computation time and enable robust recognition in cluttered environments, this study segments the banknote candidate area from the background using a technique called the Pixel-Based Adaptive Segmenter (PBAS). The Speeded-Up Robust Features (SURF) interest point detector is used, and SURF feature vectors are computed only when sufficient interest points are found. The proposed algorithm achieves a recognition accuracy of 98%, a 100% true recognition rate, and a 0% false recognition rate. Although Korean banknotes are used as a working example, the proposed system can be applied to recognize other countries' banknotes.
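
A minimal sketch of this pipeline follows, using OpenCV stand-ins: MOG2 background subtraction in place of PBAS, and ORB in place of SURF (SURF requires the non-free opencv-contrib build). The keypoint gate mirrors the "compute features only when sufficient interest points are found" step.

```python
# Sketch with stand-in components; not the paper's implementation.
import cv2

backsub = cv2.createBackgroundSubtractorMOG2()   # stand-in for PBAS
detector = cv2.ORB_create(nfeatures=500)          # stand-in for SURF

def banknote_descriptors(frame, min_keypoints=30):
    """Segment the candidate region, then describe it only if enough keypoints exist."""
    mask = backsub.apply(frame)                    # candidate (foreground) region
    keypoints = detector.detect(frame, mask)
    if len(keypoints) < min_keypoints:
        return None                                # skip frame: too few interest points
    keypoints, descriptors = detector.compute(frame, keypoints)
    return descriptors

# Usage: feed smartphone video frames; match `descriptors` against a banknote
# template database (e.g. with cv2.BFMatcher) to classify the denomination.
```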

Deep Learning Machine Vision System with High Object Recognition Rate using Multiple-Exposure Image Sensing Method

  • Park, Min-Jun;Kim, Hyeon-June
    • Journal of Sensor Science and Technology
    • /
    • v.30 no.2
    • /
    • pp.76-81
    • /
    • 2021
  • In this study, we propose a machine vision system with a high object recognition rate. By utilizing a multiple-exposure image sensing technique, the proposed deep learning-based machine vision system can cover a wide light intensity range without additional learning processes for different light intensity ranges. If the proposed machine vision system fails to recognize object features, the system operates in a multiple-exposure sensing mode and detects the target object obscured in near-dark or overly bright regions. Furthermore, short- and long-exposure images from the multiple-exposure sensing mode are synthesized to obtain accurate object feature information, which results in image information with a wide dynamic range. Even with object recognition resources trained over a light intensity range of only 23 dB, the prototype machine vision system with the multiple-exposure imaging method demonstrated object recognition performance over a light intensity range of up to 96 dB.
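
The synthesis of short- and long-exposure frames can be sketched with OpenCV's Mertens exposure fusion; the sensor's actual on-chip synthesis in the paper may differ.

```python
# Illustrative sketch: merge a short- and a long-exposure frame into one frame
# that keeps detail in both dark and bright regions (Mertens exposure fusion).
import cv2

def fuse_exposures(short_exp_path, long_exp_path):
    """Blend a short- and a long-exposure frame into a wide dynamic-range image."""
    short_img = cv2.imread(short_exp_path)
    long_img = cv2.imread(long_exp_path)
    merge = cv2.createMergeMertens()
    fused = merge.process([short_img, long_img])   # float image roughly in [0, 1]
    return (fused * 255).clip(0, 255).astype("uint8")

# The fused frame would then be passed to the deep learning recognizer in place
# of a single-exposure frame when the first recognition attempt fails.
```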

Automatic Recognition of In-Process mold Dies Based on Reverse Engineering Technology (형상 역공학을 통한 공정중 금형 가공물의 자동인식)

  • 김정권;윤길상;최진화;김동우;조명우;박균명
    • Proceedings of the Korean Society of Machine Tool Engineers Conference
    • /
    • 2003.10a
    • /
    • pp.420-425
    • /
    • 2003
  • Generally, reverse engineering means obtaining CAD data from an unidentified shape using a vision or 3D laser scanner system. In this paper, we studied a machine-vision-based reverse engineering system to obtain information about an in-process model. Vision technology is widely used in today's factories because it can inspect in-process objects easily, quickly, and accurately. The following tasks were mainly investigated and implemented. We obtained more precise data by correcting the camera's distortion, compensating for slit-beam error, and revising the acquired images. Furthermore, we approximated the measured data with B-spline curves and surfaces for precision. Until now, there have been many case studies of shape recognition, but they were unsuitable for application in the field because they took too much processing time and suffered frequent recognition failures. This paper proposes a recognition algorithm that prevents such errors and offers applications in the field.
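
The B-spline approximation step can be sketched with SciPy's smoothing spline fit; the sample points and smoothing factor below are illustrative assumptions.

```python
# Illustrative sketch: fit a smoothing B-spline to noisy measured profile points,
# analogous to the B-spline approximation step described above.
import numpy as np
from scipy.interpolate import splprep, splev

# Noisy section points from a (hypothetical) slit-beam scan of a circular profile.
t = np.linspace(0, 2 * np.pi, 50)
x = np.cos(t) + 0.01 * np.random.randn(50)
y = np.sin(t) + 0.01 * np.random.randn(50)

tck, _ = splprep([x, y], s=0.05, per=True)   # smoothing, closed-curve B-spline
u_fine = np.linspace(0, 1, 400)
x_s, y_s = splev(u_fine, tck)                # resampled smooth curve for CAD export
```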


ADD-Net: Attention Based 3D Dense Network for Action Recognition

  • Man, Qiaoyue;Cho, Young Im
    • Journal of the Korea Society of Computer and Information
    • /
    • v.24 no.6
    • /
    • pp.21-28
    • /
    • 2019
  • In recent years, with the development of artificial intelligence and the success of deep models, deep learning has been deployed in all fields of computer vision. Action recognition, as an important branch of human perception and computer vision research, has attracted more and more attention. Action recognition is a challenging task due to the special complexity of human movement, and the same movement may vary across multiple individuals. Human actions exist as continuous image frames in video, so action recognition requires more computational power than processing static images, and the simple use of a CNN cannot achieve the desired results. Recently, attention models have achieved good results in computer vision and natural language processing. In particular, for video action classification, adding an attention model makes it more effective to focus on motion features and improves performance. It also intuitively explains which part the model attends to when making a particular decision, which is very helpful in real applications. In this paper, we propose a 3D dense convolutional network based on an attention mechanism (ADD-Net) for recognizing human motion behavior in video.
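
As a rough illustration of the idea (not the authors' ADD-Net), the sketch below gates a 3D convolution block with squeeze-and-excitation style channel attention, so spatio-temporal features are reweighted before classification.

```python
# Illustrative PyTorch sketch of attention applied to 3D convolutional features.
import torch
import torch.nn as nn

class Attn3DBlock(nn.Module):
    def __init__(self, in_ch, out_ch, reduction=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.attn = nn.Sequential(                 # channel attention weights
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(out_ch, out_ch // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv3d(out_ch // reduction, out_ch, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):                          # x: (N, C, T, H, W) video clip
        feat = self.conv(x)
        return feat * self.attn(feat)              # emphasize motion-relevant channels

clip = torch.randn(2, 3, 16, 112, 112)             # batch of 16-frame RGB clips
print(Attn3DBlock(3, 32)(clip).shape)              # -> torch.Size([2, 32, 16, 112, 112])
```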

A Novel Approach to Mugshot Based Arbitrary View Face Recognition

  • Zeng, Dan;Long, Shuqin;Li, Jing;Zhao, Qijun
    • Journal of the Optical Society of Korea
    • /
    • v.20 no.2
    • /
    • pp.239-244
    • /
    • 2016
  • Mugshot face images, routinely collected by police, usually contain both frontal and profile views. Existing automated face recognition methods exploited mugshot databases by enlarging the gallery with synthetic multi-view face images generated from the mugshot face images. This paper, instead, proposes to match the query arbitrary view face image directly to the enrolled frontal and profile face images. During matching, the 3D face shape model reconstructed from the mugshot face images is used to establish corresponding semantic parts between query and gallery face images, based on which comparison is done. The final recognition result is obtained by fusing the matching results with frontal and profile face images. Compared with previous methods, the proposed method better utilizes mugshot databases without using synthetic face images that may have artifacts. Its effectiveness has been demonstrated on the Color FERET and CMU PIE databases.
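
The final fusion step can be sketched as a weighted combination of the query's similarity to the enrolled frontal and profile images; the cosine similarity and equal weights below are illustrative assumptions, not the paper's exact scheme.

```python
# Illustrative sketch of fusing frontal and profile matching results into one score.
import numpy as np

def fused_score(query_feat, frontal_feat, profile_feat, w_frontal=0.5, w_profile=0.5):
    """Cosine similarity against each gallery view, fused by a weighted sum."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return w_frontal * cos(query_feat, frontal_feat) + w_profile * cos(query_feat, profile_feat)

# Identification: compute the fused score against every gallery subject and
# report the subject with the highest value.
```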