• Title/Summary/Keyword: Video recognition

비디오속의 얼굴추적 및 PCA기반 얼굴포즈분류와 (2D)2PCA를 이용한 얼굴인식 (Face Tracking and Recognition in Video with PCA-based Pose-Classification and (2D)2PCA recognition algorithm)

  • 김진율;김용석
    • 한국지능시스템학회논문지, Vol.23 No.5, pp.423-430, 2013
  • Conventional face recognition is performed in controlled settings: the subject must look straight at the camera, or the camera must be installed where a specific face pose can be captured, such as facing the front of a walkway. These constraints inconvenience people and limit where face recognition can be applied. To overcome these limitations, this paper proposes a method that tracks and recognizes a subject's face in video even when the subject moves freely without particular restrictions. The face in the video is continuously tracked with an IVT (Incremental Visual Tracking) tracker, which compensates for changes in face scale and tilt as the face is extracted. Because the angle between the subject and the camera is not restricted, the extracted face images cover a variety of poses, so the pose must be determined before recognition. The paper uses PCA (Principal Component Analysis)-based pose classification and performs recognition only when the image extracted by the tracker is judged similar to one of the five trained poses in the pose database, which raises the recognition rate. For recognition, the PCA, 2DPCA, and (2D)²PCA algorithms are compared in terms of recognition rate and execution time.
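As a rough illustration of the (2D)²PCA features mentioned in this abstract, the following NumPy sketch computes the row- and column-direction projection matrices from aligned face crops and projects a probe image for nearest-neighbour matching; array shapes and the number of retained components are assumptions, not values from the paper.

```python
import numpy as np

def fit_2d2pca(images, d_cols=8, d_rows=8):
    """Fit (2D)^2PCA projection matrices from aligned grayscale face crops.

    images: array of shape (M, m, n).
    Returns (Z, X, mean) with Z of shape (m, d_rows) and X of shape (n, d_cols).
    """
    A = np.asarray(images, dtype=np.float64)
    mean = A.mean(axis=0)
    centered = A - mean
    # Column-direction scatter: sum_i (A_i - mean)^T (A_i - mean)
    G_col = np.einsum('imn,imk->nk', centered, centered) / len(A)
    # Row-direction scatter: sum_i (A_i - mean)(A_i - mean)^T
    G_row = np.einsum('imn,ikn->mk', centered, centered) / len(A)
    wc, Vc = np.linalg.eigh(G_col)
    wr, Vr = np.linalg.eigh(G_row)
    X = Vc[:, np.argsort(wc)[::-1][:d_cols]]   # top column-direction eigenvectors
    Z = Vr[:, np.argsort(wr)[::-1][:d_rows]]   # top row-direction eigenvectors
    return Z, X, mean

def project(image, Z, X, mean):
    """Feature matrix C = Z^T (A - mean) X; probes are matched to the gallery
    feature with the smallest Frobenius-norm distance."""
    return Z.T @ (np.asarray(image, dtype=np.float64) - mean) @ X
```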

Human Motion Recognition Based on Spatio-temporal Convolutional Neural Network

  • Hu, Zeyuan;Park, Sange-yun;Lee, Eung-Joo
    • 한국멀티미디어학회논문지, Vol.23 No.8, pp.977-985, 2020
  • To address the difficulty of feature extraction and the low accuracy of human action recognition, this paper proposes a network structure that combines the batch normalization algorithm with the GoogLeNet network model. Bringing the batch normalization idea from image classification into action recognition, the algorithm normalizes the network's training samples per mini-batch. The convolutional network takes an RGB image as the spatial input and stacked optical flow as the temporal input, and the spatial and temporal streams are fused to obtain the final action recognition result. The architecture was trained and evaluated on the standard video action benchmarks UCF101 and HMDB51, reaching accuracies of 93.42% and 67.82%, respectively. The results show that the improved convolutional neural network significantly raises the recognition rate and has clear advantages in action recognition.
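The two-stream design described above (an RGB spatial input, a stacked optical-flow temporal input, batch-normalized convolutional streams, and score-level fusion) can be sketched in PyTorch as follows; the small backbone is only a stand-in for the GoogLeNet model, and all sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class StreamCNN(nn.Module):
    """Small batch-normalized CNN standing in for the GoogLeNet backbone."""
    def __init__(self, in_channels, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1),
            nn.BatchNorm2d(32),          # mini-batch normalization after each conv
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

class TwoStreamNet(nn.Module):
    """Late fusion of a spatial (RGB) stream and a temporal (stacked optical-flow) stream."""
    def __init__(self, num_classes=101, flow_stack=10):
        super().__init__()
        self.spatial = StreamCNN(3, num_classes)                 # single RGB frame
        self.temporal = StreamCNN(2 * flow_stack, num_classes)   # x/y flow for each stacked frame

    def forward(self, rgb, flow):
        # Average the class scores of the two streams (late fusion).
        return 0.5 * (self.spatial(rgb) + self.temporal(flow))

# Example: one RGB frame and a 10-frame optical-flow stack per clip.
scores = TwoStreamNet()(torch.randn(4, 3, 224, 224), torch.randn(4, 20, 224, 224))
```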

스마트폰 기반 행동인식 기술 동향 (Trends in Activity Recognition Using Smartphone Sensors)

  • 김무섭;정치윤;손종무;임지연;정승은;정현태;신형철
    • 전자통신동향분석, Vol.33 No.3, pp.89-99, 2018
  • Human activity recognition (HAR) is a technology that aims to automatically recognize what a person is doing from their body motion and gestures. HAR is essential in many applications such as human-computer interaction, health care, rehabilitation engineering, video surveillance, and artificial intelligence. Smartphones are becoming the most popular platform for activity recognition owing to their convenience, portability, and ease of use. The most noticeable change in smartphone-based activity recognition is the adoption of deep learning algorithms, which has led to successful learning outcomes. In this article, we analyze the technology trends of activity recognition using smartphone sensors, the challenging issues for future development, and a strategy change in how activity recognition datasets are generated.

A Computer Vision-Based Banknote Recognition System for the Blind with an Accuracy of 98% on Smartphone Videos

  • Sanchez, Gustavo Adrian Ruiz
    • 한국컴퓨터정보학회논문지, Vol.24 No.6, pp.67-72, 2019
  • This paper proposes a computer vision-based banknote recognition system intended to assist the blind. The system is robust and fast at recognizing banknotes in videos recorded with a smartphone in real-life scenarios. To reduce computation time and enable robust recognition in cluttered environments, the banknote candidate area is segmented from the background with the Pixel-Based Adaptive Segmenter (PBAS). The Speeded-Up Robust Features (SURF) interest point detector is used, and SURF feature vectors are computed only when sufficient interest points are found. The proposed algorithm achieves a recognition accuracy of 98%, a 100% true recognition rate, and a 0% false recognition rate. Although Korean banknotes are used as a working example, the proposed system can be applied to other countries' banknotes.
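A hedged sketch of the pipeline described above: background subtraction isolates the banknote candidate, SURF keypoints are computed only when enough interest points are found, and ratio-test matching ranks the reference notes. OpenCV's MOG2 subtractor stands in for PBAS, which OpenCV does not ship; SURF assumes an opencv-contrib build with nonfree modules enabled, and the thresholds and the `reference_descriptors` dictionary are placeholders.

```python
import cv2

# Background subtraction as a stand-in for PBAS; it plays the same role of
# isolating the banknote candidate region from a cluttered background.
bg_subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

# SURF requires opencv-contrib-python built with the nonfree modules enabled.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
matcher = cv2.BFMatcher(cv2.NORM_L2)

def recognize(frame, reference_descriptors, min_keypoints=25, ratio=0.75):
    """Return the reference banknote label with the most good matches, or None."""
    mask = bg_subtractor.apply(frame)                      # candidate-area mask
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    keypoints, descriptors = surf.detectAndCompute(gray, mask)
    if descriptors is None or len(keypoints) < min_keypoints:
        return None                                        # too few interest points: skip frame
    best_label, best_score = None, 0
    for label, ref_desc in reference_descriptors.items():
        matches = matcher.knnMatch(descriptors, ref_desc, k=2)
        good = [p[0] for p in matches
                if len(p) == 2 and p[0].distance < ratio * p[1].distance]
        if len(good) > best_score:
            best_label, best_score = label, len(good)
    return best_label
```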

객체 탐지와 행동인식을 이용한 영상내의 비정상적인 상황 탐지 네트워크 (Abnormal Situation Detection on Surveillance Video Using Object Detection and Action Recognition)

  • 김정훈;최종혁;박영호;나스리디노프 아지즈
    • 한국멀티미디어학회논문지, Vol.24 No.2, pp.186-198, 2021
  • Security control with surveillance cameras currently depends on people watching all of the video directly, which is labor-intensive and makes it difficult to detect every abnormal situation. In this paper, we propose a deep neural network model, called AT-Net, that automatically detects abnormal situations in surveillance video, and we introduce an automatic video surveillance system built on this model. In particular, AT-Net alleviates the ambiguity of existing abnormal-situation detection methods by mapping features that represent relationships between people and objects in the video onto a new tensor structure based on sparse coding. In experiments on real surveillance videos, AT-Net achieved an F1-score of about 89%, improving abnormal-situation detection performance by more than 25% over existing methods.
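AT-Net's exact tensor mapping is specific to the paper, but the sparse-coding step it builds on can be illustrated generically with scikit-learn: learn a dictionary over per-frame relation features and stack the sparse codes over a temporal window. The feature values, dimensions, and window length below are synthetic placeholders, not the paper's configuration.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# Toy stand-in for per-frame person/object relation features; in AT-Net these
# would come from upstream object-detection and action-recognition networks.
rng = np.random.default_rng(0)
frame_features = rng.normal(size=(500, 128))

# Learn an overcomplete dictionary and encode each frame sparsely.
dico = MiniBatchDictionaryLearning(n_components=256, alpha=1.0, random_state=0)
sparse_codes = dico.fit_transform(frame_features)   # shape (500, 256), mostly zeros

# Stack codes over a sliding temporal window to form a tensor for a classifier.
window = 16
tensors = np.stack([sparse_codes[i:i + window]
                    for i in range(len(sparse_codes) - window + 1)])
print(tensors.shape)  # (num_windows, 16, 256)
```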

영상객체 spFACS ASM 알고리즘을 적용한 얼굴인식에 관한 연구 (ASM Algorithm Applied to Image Object spFACS Study on Face Recognition)

  • 최병관
    • 디지털산업정보학회논문지, Vol.12 No.4, pp.1-12, 2016
  • Digital imaging technology has developed into a state-of-the-art IT convergence and composite industry that goes beyond the limits of the multimedia industry; in the field of smart object recognition in particular, various face recognition techniques are being actively studied in connection with smartphone applications. Recently, face recognition has evolved, via object recognition technology, into intelligent video detection and recognition; object detection and recognition are being applied to IP cameras, and research that combines image object recognition with face recognition remains active. This paper first reviews technology trends and the technical elements required for recognizing human objects, and then proposes a smile detection scheme for image objects based on spFACS (Smile Progress Facial Action Coding System). The scheme (1) fits the face with the ASM algorithm and suggests how to evaluate it effectively on image objects, and (2) applies the result to the recognized face object to detect the tooth area, demonstrating the effect of extracting feature points according to the recognized facial expression.
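As a loose, illustrative stand-in for the ASM-landmark and tooth-region idea above, the following sketch scores smiles from mouth-corner landmarks using dlib's 68-point predictor; the model path and the threshold are assumptions, and this is not the paper's spFACS procedure.

```python
import dlib
import cv2

detector = dlib.get_frontal_face_detector()
# Assumed model path; dlib's 68-point model is used here instead of a trained ASM.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def smile_score(gray):
    """Ratio of mouth width to jaw width; it rises as the mouth widens into a smile."""
    faces = detector(gray)
    if not faces:
        return None
    pts = predictor(gray, faces[0])
    mouth_w = abs(pts.part(54).x - pts.part(48).x)   # mouth corners (landmarks 48, 54)
    jaw_w = abs(pts.part(16).x - pts.part(0).x)      # jaw extremes (landmarks 0, 16)
    return mouth_w / max(jaw_w, 1)

frame = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
score = smile_score(frame)
if score is not None and score > 0.42:                # assumed threshold
    print("smile detected")
```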

환경에 적응적인 얼굴 추적 및 인식 방법 (A New Face Tracking and Recognition Method Adapted to the Environment)

  • 주명호;강행봉
    • 정보처리학회논문지B, Vol.16B No.5, pp.385-394, 2009
  • Because the human face is not a rigid object, tracking and recognizing faces is not easy. In particular, differences in the input images caused by changes in face pose and in ambient illumination are the main reasons face recognition is difficult. This paper proposes a framework and a preprocessing method that address these two problems when tracking and recognizing faces in video. To track and recognize faces effectively despite pose changes, an independent Gaussian distribution is first estimated for each face pose from training images using Principal Component Analysis, and a Gaussian Mixture Model is then built for each person. To handle face images captured under different lighting conditions, each input face image is first decomposed into reflectance and illuminance using the SSR (Single Scale Retinex) model. The reflectance is re-adjusted by histogram equalization within a predefined range, and the illuminance is re-approximated with a manifold model learned from images that contain no illumination changes. Combining these two features allows faces to be tracked and recognized efficiently in footage captured indoors or outdoors. To track faces more efficiently in video, the weights of the constructed model are updated at each frame with the EM algorithm based on the previous frame's tracking result, so the continuously changing face pose in the video is estimated. On test footage acquired under various indoor lighting conditions and at several outdoor locations, the proposed method outperformed previously studied methods.
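The SSR decomposition used above for illumination handling can be sketched in a few lines: the illuminance is estimated with a wide Gaussian blur, and the reflectance is the log residual, which is then clipped to a predefined range and histogram-equalized. The sigma and clipping range below are assumed placeholders rather than the paper's settings.

```python
import cv2
import numpy as np

def single_scale_retinex(gray, sigma=80.0, eps=1e-6):
    """Split a grayscale face crop into log-reflectance and log-illuminance.

    Illuminance is estimated with a wide Gaussian blur; reflectance is the
    log residual, which is re-equalized before recognition.
    """
    img = gray.astype(np.float64) + eps
    illuminance = cv2.GaussianBlur(img, (0, 0), sigma)
    reflectance = np.log(img) - np.log(illuminance + eps)
    return reflectance, np.log(illuminance + eps)

def normalize_reflectance(reflectance, clip_range=(-1.5, 1.5)):
    """Clip to a predefined range, rescale to 8 bits, and histogram-equalize."""
    r = np.clip(reflectance, *clip_range)
    r = ((r - clip_range[0]) / (clip_range[1] - clip_range[0]) * 255).astype(np.uint8)
    return cv2.equalizeHist(r)
```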

An Optimized e-Lecture Video Search and Indexing framework

  • Medida, Lakshmi Haritha;Ramani, Kasarapu
    • International Journal of Computer Science & Network Security, Vol.21 No.8, pp.87-96, 2021
  • The demand for e-learning through video lectures is rapidly increasing due to its diverse advantages over traditional learning methods, which has led to massive volumes of web-based lecture videos. Indexing and retrieving a lecture video, or a topic within one, has thus proved to be an exceptionally challenging problem. Many techniques in the literature are either visual or audio based, but not both. Since the visual and audio components are equally important for content-based indexing and retrieval, the current work uses both. A framework for automatic topic-based indexing and search based on the innate content of lecture videos is presented. The text on the slides is extracted with the proposed Merged Bounding Box (MBB) text detector, and the audio is transcribed with Google Speech Recognition (GSR) technology. This hybrid approach generates indexing keywords from the merged transcripts of the video and audio extractors. Search within the indexed documents is optimized with Naïve Bayes (NB) classification and K-Means clustering: the optimized search retrieves results by searching only the relevant document cluster within predefined categories rather than the whole lecture video corpus. The work is carried out on a dataset built by assigning categories to lecture video transcripts gathered from e-learning portals. Search performance is assessed by accuracy and time taken, and the improved accuracy of the proposed indexing technique is compared with the accepted chain indexing technique.
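A minimal scikit-learn sketch of the retrieval optimization described above: transcripts are vectorized, a Naïve Bayes classifier routes a query to a predefined category, and K-Means clustering restricts the search to the nearest cluster instead of the whole corpus. The toy transcripts, labels, and cluster count are illustrative assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

# Merged slide-OCR + speech transcripts per lecture video, with assumed category labels.
transcripts = ["gradient descent and backpropagation in neural networks",
               "tcp three way handshake and congestion control",
               "relational algebra joins and database normalization"]
labels = ["machine_learning", "networks", "databases"]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(transcripts)

category_clf = MultinomialNB().fit(X, labels)                        # routes queries to a category
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X.toarray())

def search(query):
    """Rank only the documents in the query's nearest cluster, not the whole corpus."""
    q = vectorizer.transform([query])
    category = category_clf.predict(q)[0]
    cluster_id = clusters.predict(q.toarray())[0]
    candidates = [i for i, c in enumerate(clusters.labels_) if c == cluster_id]
    ranked = sorted(candidates, key=lambda i: -cosine_similarity(q, X[i])[0, 0])
    return category, ranked

print(search("how does backpropagation work"))
```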

방송뉴스 인식에서의 잡음 처리 기법에 대한 고찰 (A Study on Noise-Robust Methods for Broadcast News Speech Recognition)

  • 정용주
    • 대한음성학회지:말소리, No.50, pp.71-83, 2004
  • Broadcast news speech recognition has recently become one of the most attractive research areas. If broadcast news can be transcribed automatically and its contents stored as text instead of as the video or audio signal itself, searching multimedia databases for what we need becomes much easier. However, the desired speech signal in broadcast news is usually affected by interfering signals such as background noise and/or music, and the speech of a reporter speaking over the telephone or with a poor microphone is severely distorted by channel effects. Such interference and distortion are the main reasons for poor performance in broadcast news speech recognition. In this paper, we investigate methods to cope with these problems and observe performance improvements in noisy broadcast news speech recognition.
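The abstract does not spell out the specific techniques investigated, so as a generic example of the kind of channel compensation commonly applied to telephone or poor-microphone speech, the sketch below applies cepstral mean normalization (CMN) to MFCC features with librosa; the file name is hypothetical and this is not necessarily the paper's method.

```python
import librosa

# Cepstral mean normalization: subtract the per-utterance mean from the MFCCs
# to remove the convolutional channel bias (telephone lines, poor microphones).
audio, sr = librosa.load("news_segment.wav", sr=16000)     # hypothetical file
mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)     # shape (13, frames)
mfcc_cmn = mfcc - mfcc.mean(axis=1, keepdims=True)         # channel-compensated features
```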

Human Gait Recognition Based on Spatio-Temporal Deep Convolutional Neural Network for Identification

  • Zhang, Ning;Park, Jin-ho;Lee, Eung-Joo
    • 한국멀티미디어학회논문지, Vol.23 No.8, pp.927-939, 2020
  • Gait recognition can identify people from a long distance, which is very important for making monitoring systems more intelligent. Among human biometric features, gait has the advantages of being available remotely, robust, and hard to disguise. Traditional gait feature extraction, shaped by the development of behavior recognition, relied on manual feature design and cannot meet the needs of fine-grained gait recognition. The emergence of deep convolutional neural networks has freed researchers from complex feature engineering by learning usable features automatically from data, and such networks have been widely adopted. In this paper, feature metric learning is conducted in three-dimensional space by combining the three-dimensional convolutional features of the gait sequence with a Siamese structure. This method captures information along both the spatial and temporal dimensions of the continuous, periodic gait sequence and further improves the accuracy and practicality of gait recognition.
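A compact PyTorch sketch of the combination described above: a small 3D-convolutional encoder embeds a gait silhouette sequence, the same weights serve as both Siamese branches, and a contrastive loss performs the metric learning. The input sizes, embedding dimension, and margin are assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Gait3DEncoder(nn.Module):
    """Tiny 3D-convolutional encoder for a gait silhouette sequence (B, 1, T, H, W)."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=3, padding=1), nn.BatchNorm3d(32), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.BatchNorm3d(64), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

def contrastive_loss(emb_a, emb_b, same_identity, margin=1.0):
    """Pull embeddings of the same person together, push different people apart."""
    dist = F.pairwise_distance(emb_a, emb_b)
    return (same_identity * dist.pow(2)
            + (1 - same_identity) * F.relu(margin - dist).pow(2)).mean()

encoder = Gait3DEncoder()                       # shared weights act as both Siamese branches
seq_a = torch.randn(8, 1, 16, 64, 44)           # 16-frame silhouette clips
seq_b = torch.randn(8, 1, 16, 64, 44)
labels = torch.randint(0, 2, (8,)).float()      # 1 = same identity, 0 = different
loss = contrastive_loss(encoder(seq_a), encoder(seq_b), labels)
```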