• Title/Summary/Keyword: vision recognition

Improved Inference for Human Attribute Recognition using Historical Video Frames

  • Ha, Hoang Van;Lee, Jong Weon;Park, Chun-Su
    • Journal of the Semiconductor & Display Technology
    • /
    • v.20 no.3
    • /
    • pp.120-124
    • /
    • 2021
  • Recently, human attribute recognition (HAR) has attracted considerable attention due to its wide application in video surveillance systems. Recent deep-learning-based solutions for HAR require time-consuming training processes. In this paper, we propose a post-processing technique that utilizes historical video frames to improve prediction results without re-training or modifying existing deep-learning-based classifiers. Experimental results on a large-scale benchmark dataset show the effectiveness of the proposed method.
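
The abstract does not state how the historical frames are combined; the following is a minimal sketch, assuming a simple moving average of per-frame attribute probabilities over a sliding window as the post-processing step.

```python
from collections import deque

import numpy as np


class TemporalSmoother:
    """Aggregate per-frame attribute probabilities over recent history.

    Illustrative sketch only: the exact aggregation rule is not given in the
    abstract, so a moving average over the last `window` frames is assumed.
    """

    def __init__(self, window: int = 10):
        self.history = deque(maxlen=window)

    def update(self, frame_probs: np.ndarray) -> np.ndarray:
        """frame_probs: (num_attributes,) sigmoid outputs for one frame."""
        self.history.append(frame_probs)
        # Average the stored probabilities across the window.
        return np.mean(self.history, axis=0)


# Usage: feed the classifier output for each new frame of a tracked person.
smoother = TemporalSmoother(window=10)
smoothed = smoother.update(np.array([0.7, 0.4, 0.9]))  # hypothetical attributes
labels = smoothed > 0.5
```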

Clustering Technique Using Relevance of Data and Applied Algorithms (데이터와 적용되는 알고리즘의 연관성을 이용한 클러스터링 기법)

  • Han Woo-Yeon;Nam Mi-Young;Rhee PhillKyu
    • The KIPS Transactions:PartB
    • /
    • v.12B no.5 s.101
    • /
    • pp.577-586
    • /
    • 2005
  • Many algorithms have been proposed for face recognition, one of the most successful applications in the image processing, pattern recognition, and computer vision fields. Recent research investigates which facial attributes make a target harder or easier to recognize. In this paper, we propose a method to improve recognition performance by using the relevance between face data and the applied algorithms, because the recognition performance of each algorithm changes according to facial attributes such as illumination and expression. In the experiments, we use an n-tuple classifier, PCA, and Gabor wavelets as recognition algorithms, and we propose three vectorization methods. First, after clustering the test data with the k-means algorithm, we estimate the fitness of the three recognition algorithms for each cluster and then compose new clusters by merging clusters that select the same algorithm. We estimate the similarity of a test sample to each new cluster and recognize the target using the nearest cluster. As a result, we observe that the recognition performance improves over that of a single algorithm without clustering.
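
A rough sketch of the cluster-then-select idea described above; the recognizers (n-tuple classifier, PCA, Gabor wavelet in the paper) are stood in for by hypothetical callables, and the `validate` and `run` helpers are assumptions for illustration, not interfaces from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans


def select_algorithms(train_vectors, validate, algorithms, n_clusters=5):
    """Return a fitted k-means model and the best algorithm per cluster.

    validate(algorithm, member_indices) -> accuracy of `algorithm` on the
    given cluster members (assumed helper).
    """
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(train_vectors)
    best = {}
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        scores = [validate(a, members) for a in algorithms]
        best[c] = algorithms[int(np.argmax(scores))]
    return km, best


def recognize(query_vector, km, best, run):
    """Route the query to the algorithm chosen for its nearest cluster."""
    cluster = int(km.predict(query_vector.reshape(1, -1))[0])
    return run(best[cluster], query_vector)
```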

Unsupervised Transfer Learning for Plant Anomaly Recognition

  • Xu, Mingle;Yoon, Sook;Lee, Jaesu;Park, Dong Sun
    • Smart Media Journal
    • /
    • v.11 no.4
    • /
    • pp.30-37
    • /
    • 2022
  • Disease threatens plant growth, and recognizing the type of disease is essential to applying a remedy. In recent years, deep learning has brought significant improvement to this task; however, a large volume of labeled images is required to obtain decent performance, and annotated images are difficult and expensive to obtain in the agricultural field. Therefore, designing an efficient and effective strategy with few labeled data is one of the challenges in this area. Transfer learning, which carries knowledge from a source domain to a target domain, has been borrowed to address this issue with comparable results. However, current transfer learning strategies can be regarded as supervised methods, because they assume that many labeled images are available in the source domain. In contrast, unsupervised transfer learning uses only the images in the source domain, which is more convenient since collecting images is much easier than annotating them. In this paper, we leverage unsupervised transfer learning to perform plant disease recognition and achieve better performance than supervised transfer learning in many cases. Besides, a vision transformer, with a larger model capacity than convolutional networks, is utilized to obtain a better pretrained feature space. With vision-transformer-based unsupervised transfer learning, we achieve better results than current works on two datasets. In particular, we obtain 97.3% accuracy with only 30 training images per class on the Plant Village dataset. We hope that our work encourages the community to pay attention to vision-transformer-based unsupervised transfer learning in the agricultural field when few labeled images are available.
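
A minimal sketch of the unsupervised-transfer-learning setup, assuming a DINO self-supervised ViT-S/16 backbone as the label-free source-domain model and a linear head fine-tuned on the few labeled plant images; the paper's actual pretraining objective, class count, and fine-tuning recipe may differ.

```python
import torch
import torch.nn as nn

# Self-supervised (label-free) ViT backbone; DINO weights are an assumption,
# chosen only to illustrate "unsupervised" source-domain pretraining.
backbone = torch.hub.load('facebookresearch/dino:main', 'dino_vits16')
for p in backbone.parameters():
    p.requires_grad = False          # keep the unsupervised features fixed

num_classes = 38                     # e.g. Plant Village classes (assumption)
head = nn.Linear(384, num_classes)   # 384 = ViT-S/16 embedding dimension

optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()


def train_step(images, labels):
    """Fine-tune only the linear head on the few labeled target images."""
    with torch.no_grad():
        feats = backbone(images)     # (B, 384) CLS embeddings
    loss = criterion(head(feats), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```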

Design of Optimized RBFNNs based on Night Vision Face Recognition Simulator Using the (2D)2 PCA Algorithm ((2D)2 PCA알고리즘을 이용한 최적 RBFNNs 기반 나이트비전 얼굴인식 시뮬레이터 설계)

  • Jang, Byoung-Hee;Kim, Hyun-Ki;Oh, Sung-Kwun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.24 no.1
    • /
    • pp.1-6
    • /
    • 2014
  • In this study, we propose optimized RBFNNs for a night vision face recognition simulator with the aid of the $(2D)^2$ PCA algorithm. Images acquired with a CCD camera at night have low brightness, which makes it difficult to perform face recognition; for this reason, a night vision camera is used to capture images at night. The Ada-Boost algorithm is used to detect face regions in images containing both face and non-face areas, and histogram equalization is applied to minimize image distortion. These high-dimensional images are reduced to low-dimensional representations using the $(2D)^2$ PCA algorithm. Face recognition is performed with a polynomial-based RBFNNs classifier, and the essential design parameters of the classifier are optimized by means of Differential Evolution (DE). The performance of the optimized RBFNNs based on $(2D)^2$ PCA is evaluated with the night vision face recognition system and IC&CI Lab data.
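
A NumPy sketch of the $(2D)^2$ PCA reduction step described above; the projection dimensions are illustrative, and the surrounding face detection, histogram equalization, RBFNN, and DE stages are omitted.

```python
import numpy as np


def two_d_squared_pca(images, d_rows=10, d_cols=10):
    """(2D)^2 PCA: learn left/right projections that shrink each image matrix.

    images: array of shape (N, h, w). Returns X (h, d_rows), Z (w, d_cols),
    and the mean image. The kept dimensions are illustrative choices.
    """
    A = np.asarray(images, dtype=np.float64)
    mean = A.mean(axis=0)
    D = A - mean
    # Row-direction covariance (right multiplication), shape (w, w).
    G_row = np.einsum('nij,nik->jk', D, D) / len(A)
    # Column-direction covariance (left multiplication), shape (h, h).
    G_col = np.einsum('nji,nki->jk', D, D) / len(A)
    _, vec_r = np.linalg.eigh(G_row)
    _, vec_c = np.linalg.eigh(G_col)
    Z = vec_r[:, ::-1][:, :d_cols]   # top right-projection eigenvectors
    X = vec_c[:, ::-1][:, :d_rows]   # top left-projection eigenvectors
    return X, Z, mean


def project(image, X, Z, mean):
    """Reduce an (h, w) face image to a (d_rows, d_cols) feature matrix."""
    return X.T @ (image - mean) @ Z
```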

Histogram Based Hand Recognition System for Augmented Reality (증강현실을 위한 히스토그램 기반의 손 인식 시스템)

  • Ko, Min-Su;Yoo, Ji-Sang
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.15 no.7
    • /
    • pp.1564-1572
    • /
    • 2011
  • In this paper, we propose a new histogram-based hand recognition algorithm for augmented reality. A hand recognition system enables useful interaction between a user and a computer. However, vision-based hand gesture recognition is difficult because of its viewing-angle dependency and the complexity of human hand shapes. The hand recognition system proposed in this paper is based on features derived from hand geometry and consists of two steps: in the first step, the hand region is extracted from the image captured by a camera, and in the second step, hand gestures are recognized. We first extract the hand region by removing the background using skin color information. We then recognize the hand shape by determining hand feature points from the histogram of the extracted hand region. Finally, we design an augmented reality system that controls a 3D object with the recognized hand gesture. Experimental results show that the proposed algorithm achieves more than 91% accuracy for hand recognition with low computational cost.
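
A brief OpenCV sketch of the two-step pipeline (skin-color segmentation followed by a histogram of the hand region); the skin thresholds and the column-wise histogram feature are assumptions for illustration, not values from the paper.

```python
import cv2
import numpy as np

# Assumed HSV skin-color bounds; the paper's actual thresholds are not given here.
SKIN_LOW = np.array([0, 48, 80], dtype=np.uint8)
SKIN_HIGH = np.array([20, 255, 255], dtype=np.uint8)


def extract_hand_region(frame_bgr):
    """Step 1: remove the background and keep skin-colored pixels."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, SKIN_LOW, SKIN_HIGH)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return mask


def hand_profile(mask):
    """Step 2: column-wise histogram of the hand mask; its peaks roughly
    correspond to extended fingers and can serve as gesture features."""
    return (mask > 0).sum(axis=0)   # one pixel count per image column


# Usage per captured frame:
# mask = extract_hand_region(frame)
# profile = hand_profile(mask)
# gesture = classify(profile)       # hypothetical downstream classifier
```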

A Study on The Classification of Target-objects with The Deep-learning Model in The Vision-images (딥러닝 모델을 이용한 비전이미지 내의 대상체 분류에 관한 연구)

  • Cho, Youngjoon;Kim, Jongwon
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.22 no.2
    • /
    • pp.20-25
    • /
    • 2021
  • A target-object classification method was implemented using a deep-learning-based detection model on real-time images. The detection model, supported by extensive data collection and machine learning processes, was used to classify similar target-objects. The recognition model was implemented by changing the processing structure of the detection model and combining it with a newly developed vision-processing module. To classify the target-objects, identity and similarity measures were defined and applied to the detection model. Industrial use of the recognition model was also considered by verifying its effectiveness on real-time images of an actual soccer game. The detection model and the newly constructed recognition model were compared and verified on real-time images, and further work was conducted to optimize the recognition model in a real-time environment.
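
The abstract does not define its identity and similarity measures; the sketch below assumes cosine similarity between appearance embeddings of detections and stored reference embeddings, purely to illustrate one way a detector's outputs could be regrouped into target-object classes.

```python
import numpy as np


def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))


def classify_detections(detections, references, threshold=0.7):
    """detections: list of (box, embedding); references: {name: embedding}.

    Returns (box, name or None) pairs; None marks a detection whose best
    similarity falls below the assumed threshold."""
    results = []
    for box, emb in detections:
        name, score = max(((n, cosine(emb, r)) for n, r in references.items()),
                          key=lambda t: t[1])
        results.append((box, name if score >= threshold else None))
    return results
```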

CNN-based Building Recognition Method Robust to Image Noises (이미지 잡음에 강인한 CNN 기반 건물 인식 방법)

  • Lee, Hyo-Chan;Park, In-hag;Im, Tae-ho;Moon, Dai-Tchul
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.3
    • /
    • pp.341-348
    • /
    • 2020
  • The ability to extract useful information from an image, as the human eye does, is an interface technology essential for AI computer implementation. Building recognition has a lower recognition rate than other image recognition tasks because of the variety of building shapes, seasonal changes in ambient noise, and distortion caused by viewing angle and distance. The computer-vision-based building recognition algorithms presented so far have limitations in discernment and expandability due to the manual definition of building characteristics. This paper adopts a deep-learning CNN (Convolutional Neural Network) model and proposes a new method that improves the recognition rate even when building images change with season, illumination, angle, and perspective. Partial images that characterize a building, such as window or wall images, are introduced and trained together with whole building images. Experimental results show that the building recognition rate is improved by about 14% compared to a general CNN model.
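
A minimal PyTorch sketch of training a CNN on whole building images together with characteristic partial images (windows, walls); the directory layout, backbone choice, and hyperparameters are assumptions rather than details from the paper.

```python
import torch
import torch.nn as nn
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical folders; both are assumed to contain the same building-class
# subdirectories so that ImageFolder assigns matching labels.
whole = datasets.ImageFolder('buildings/whole', transform=tfm)
partial = datasets.ImageFolder('buildings/partial', transform=tfm)  # window/wall crops
loader = DataLoader(ConcatDataset([whole, partial]), batch_size=32, shuffle=True)

model = models.resnet18(num_classes=len(whole.classes))  # stand-in CNN backbone
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```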