Title/Summary/Keyword: Recognition Research

A Study on the Measurement of the Human Factor Algorithm for 3D Image Objects (3차원 영상 객체 휴먼팩터 알고리즘 측정에 관한 연구)

  • Choi, Byungkwan
    • Journal of Korea Society of Digital Industry and Information Management / v.14 no.2 / pp.35-47 / 2018
  • With the 4th industrial revolution, digital image technology has developed beyond the limits of the multimedia industry into advanced IT convergence industries. In particular, application technologies related to HCI element algorithms in the field of 3D image object recognition are being actively developed, and 3D image object recognition has evolved into intelligent image sensing and recognition technology through 3D modeling. Image recognition has been actively studied in image processing, covering object recognition processing, face recognition, and 3D object recognition. In this paper, we propose a research method for human factor 3D image recognition that applies a human factor algorithm to 3D object recognition: (1) a 3D object recognition method based on 3D modeling, image system analysis and design, and analysis of human cognitive technology; and (2) a 3D object recognition parameter estimation method using the FACS algorithm together with an optimal object recognition measurement method. We also propose a method to effectively evaluate psychological research techniques using 3D image objects; we study 3D recognition and apply the results to object recognition elements to extract and examine the feature points of the recognition technology.

A Novel and Efficient Feature Extraction Method for Iris Recognition

  • Ko, Jong-Gook;Gil, Youn-Hee;Yoo, Jang-Hee;Chung, Kyo-Il
    • ETRI Journal / v.29 no.3 / pp.399-401 / 2007
  • With a growing emphasis on human identification, iris recognition has recently received increasing attention. Iris recognition comprises eye imaging, iris segmentation, verification, and so on. In this letter, we propose a novel and efficient iris recognition method that employs a cumulative-sum-based grey change analysis. Experimental results demonstrate that the proposed method can be used for human identification in an efficient manner.
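
The cumulative-sum idea can be pictured briefly: mean grey values of cells from the normalized iris image are grouped, cumulative sums over each group are computed, and upward or downward changes are encoded as the iris code. The sketch below is a minimal illustration of this idea only; the actual cell layout, group size, and coding rules in the letter may differ.

```python
import numpy as np

def cumulative_sum_code(cell_means, group_size=5):
    """Encode grey-level changes over groups of cell mean values.

    Within each group, cumulative sums of the mean-subtracted values are
    computed; cells lying between the minimum and maximum of the cumulative
    sum are coded 1 (upward change) or 2 (downward change), others 0.
    """
    codes = np.zeros(len(cell_means), dtype=int)
    for start in range(0, len(cell_means) - group_size + 1, group_size):
        group = cell_means[start:start + group_size]
        cs = np.cumsum(group - group.mean())
        i_max, i_min = int(np.argmax(cs)), int(np.argmin(cs))
        lo, hi = sorted((i_max, i_min))
        for i in range(lo, hi + 1):
            codes[start + i] = 1 if i_min < i_max else 2
    return codes

# Mean grey values of cells along one row of a normalized iris image
row_means = np.array([120, 125, 133, 140, 138, 131, 124, 118, 119, 126], float)
print(cumulative_sum_code(row_means))
```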

Hybrid Model-Based Motion Recognition for Smartphone Users

  • Shin, Beomju;Kim, Chulki;Kim, Jae Hun;Lee, Seok;Kee, Changdon;Lee, Taikjin
    • ETRI Journal / v.36 no.6 / pp.1016-1022 / 2014
  • This paper presents a hybrid model solution for user motion recognition. The use of a single classifier in motion recognition models does not guarantee a high recognition rate. To enhance the motion recognition rate, a hybrid model consisting of decision trees and artificial neural networks is proposed. We define six user motions commonly performed in an indoor environment. To demonstrate the performance of the proposed model, we conduct a real field test with ten subjects (five males and five females). Experimental results show that the proposed model achieves a higher recognition rate than single classifiers.
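
As a rough illustration of the hybrid idea, the sketch below routes a motion sample through a shallow decision tree to a coarse motion group and then refines the decision with a small neural network trained for that group. The feature layout, class labels, and the particular two-stage split are illustrative assumptions, not the authors' exact design.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 6))                  # e.g., accelerometer statistics per window
coarse = (X[:, 0] > 0).astype(int)             # stage-1 label: static vs. dynamic motions
y = coarse * 3 + rng.integers(0, 3, size=600)  # six motion classes, three per branch

# Stage 1: a shallow tree routes each sample to a motion group.
tree = DecisionTreeClassifier(max_depth=3).fit(X, coarse)

# Stage 2: one small neural network per group refines the decision.
nets = {g: MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
            .fit(X[coarse == g], y[coarse == g]) for g in (0, 1)}

def predict(x):
    g = int(tree.predict(x.reshape(1, -1))[0])
    return int(nets[g].predict(x.reshape(1, -1))[0])

print(predict(X[0]))
```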

KMSAV: Korean multi-speaker spontaneous audiovisual dataset

  • Kiyoung Park;Changhan Oh;Sunghee Dong
    • ETRI Journal / v.46 no.1 / pp.71-81 / 2024
  • Recent advances in deep learning for speech and visual recognition have accelerated the development of multimodal speech recognition, yielding many innovative results. We introduce a Korean audiovisual speech recognition corpus. This dataset comprises approximately 150 h of manually transcribed and annotated audiovisual data, supplemented with an additional 2,000 h of untranscribed videos collected from YouTube under the Creative Commons License. The dataset is intended to be freely accessible for unrestricted research purposes. Along with the corpus, we propose an open-source framework for automatic speech recognition (ASR) and audiovisual speech recognition (AVSR). We validate the effectiveness of the corpus with evaluations using state-of-the-art ASR and AVSR techniques, capitalizing on both pretrained models and fine-tuning processes. After fine-tuning, ASR and AVSR achieve character error rates of 11.1% and 18.9%, respectively. This error difference highlights the need for improvement in AVSR techniques. We expect that our corpus will be an instrumental resource for supporting improvements in AVSR.
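
The reported character error rates follow the usual definition: character-level edit distance divided by the reference length. A minimal scorer, assuming this standard definition:

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: Levenshtein distance / reference length."""
    ref, hyp = list(reference), list(hypothesis)
    d = list(range(len(hyp) + 1))              # rolling row of the edit-distance table
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1,          # deletion
                                   d[j - 1] + 1,      # insertion
                                   prev + (r != h))   # substitution
    return d[len(hyp)] / len(ref)

print(f"{cer('음성 인식', '음성 일식'):.3f}")  # one substitution in five characters -> 0.200
```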

Robust Sign Recognition System at Subway Stations Using Verification Knowledge

  • Lee, Dongjin;Yoon, Hosub;Chung, Myung-Ae;Kim, Jaehong
    • ETRI Journal / v.36 no.5 / pp.696-703 / 2014
  • In this paper, we present a walking guidance system for the visually impaired for use at subway stations. This system, which is based on environmental knowledge, automatically detects and recognizes both exit numbers and arrow signs in natural outdoor scenes. The visually impaired can therefore use the system to find their own way through a subway station (for example, using the exit numbers and directions provided). The proposed walking guidance system consists mainly of three stages: (a) sign detection using the MCT-based AdaBoost technique, (b) sign recognition using support vector machines and hidden Markov models, and (c) three verification techniques to discriminate between signs and non-signs. The experimental results indicate that our sign recognition system performs well, with a detection rate of 98%, a recognition rate of 99.5%, and a false-positive error rate of 0.152.
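
As background for stage (a), the modified census transform (MCT) on which the AdaBoost detector operates compares each pixel of a 3 × 3 neighborhood with the neighborhood mean, yielding a 9-bit, illumination-robust index per pixel. A minimal, unoptimized sketch:

```python
import numpy as np

def mct(image):
    """Modified census transform: each 3x3 neighborhood is compared with
    its own mean, producing a 9-bit index robust to illumination changes."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint16)
    for y in range(h - 2):
        for x in range(w - 2):
            patch = image[y:y + 3, x:x + 3].astype(float)
            bits = (patch > patch.mean()).flatten()
            out[y, x] = int("".join("1" if b else "0" for b in bits), 2)
    return out

img = (np.arange(25).reshape(5, 5) * 10).astype(np.uint8)
print(mct(img))
```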

Automatic Container Code Recognition from Multiple Views

  • Yoon, Youngwoo;Ban, Kyu-Dae;Yoon, Hosub;Kim, Jaehong
    • ETRI Journal / v.38 no.4 / pp.767-775 / 2016
  • Automatic container code recognition from a captured image is used for tracking and monitoring containers, but it often fails when the code is not captured clearly. In this paper, we increase the accuracy of container code recognition by using multiple views. A character-level integration method combines the codes recognized from different single views to generate a new code. A decision-level integration then selects the most probable result from among the single-view codes and the new integrated code. The experiments confirmed that the proposed integration works successfully. Recognition from single views achieved an accuracy of around 70% on test images collected at a working pier, whereas the proposed integration method achieved an accuracy of 96%.
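
Character-level integration can be pictured as a position-by-position, confidence-weighted vote across the codes recognized from each view, as in the minimal sketch below; the per-character confidence scores and tie handling are illustrative assumptions, not the paper's exact scoring.

```python
from collections import defaultdict

def integrate(views):
    """views: list of (code string, list of per-character confidences)."""
    length = max(len(code) for code, _ in views)
    merged = []
    for i in range(length):
        votes = defaultdict(float)
        for code, conf in views:
            if i < len(code):
                votes[code[i]] += conf[i]      # confidence-weighted vote
        merged.append(max(votes, key=votes.get))
    return "".join(merged)

views = [("TCKU3051862", [0.9] * 11),
         ("TCKU3O51862", [0.8] * 11),   # 0/O confusion in one view
         ("TCKU3051862", [0.7] * 11)]
print(integrate(views))                 # TCKU3051862
```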

Intra- and Inter-frame Features for Automatic Speech Recognition

  • Lee, Sung Joo;Kang, Byung Ok;Chung, Hoon;Lee, Yunkeun
    • ETRI Journal / v.36 no.3 / pp.514-517 / 2014
  • In this paper, alternative dynamic features for speech recognition are proposed. The goal of this work is to improve speech recognition accuracy by deriving a representation of distinctive dynamic characteristics from the speech spectrum. This work was inspired by two temporal dynamics of a speech signal: the highly non-stationary nature of speech and the inter-frame change of the speech spectrum. We adopt a sub-frame spectrum analyzer to capture very rapid spectral changes within a speech analysis frame. In addition, we attempt to measure spectral fluctuations in a more complex manner than traditional dynamic features such as delta or double-delta. To evaluate the proposed features, speech recognition tests were conducted in smartphone environments. The experimental results show that simply augmenting the feature streams with the proposed features improves the recognition accuracy of a hidden Markov model-based speech recognizer.
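
For reference, the traditional delta features that the proposed features are measured against are a regression over neighboring frames, d_t = sum_n n * (c_{t+n} - c_{t-n}) / (2 * sum_n n^2). A minimal sketch of this conventional baseline:

```python
import numpy as np

def delta(features, N=2):
    """Conventional delta features over a (frames, coeffs) array,
    computed per frame with edge padding at the boundaries."""
    T = features.shape[0]
    padded = np.pad(features, ((N, N), (0, 0)), mode="edge")
    denom = 2 * sum(n * n for n in range(1, N + 1))
    return np.stack([
        sum(n * (padded[t + N + n] - padded[t + N - n]) for n in range(1, N + 1)) / denom
        for t in range(T)
    ])

cepstra = np.random.default_rng(0).normal(size=(100, 13))  # e.g., 13 MFCCs per frame
d = delta(cepstra)                          # first-order dynamics
dd = delta(d)                               # double-delta
print(np.hstack([cepstra, d, dd]).shape)    # (100, 39)
```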

Hybrid Facial Representations for Emotion Recognition

  • Yun, Woo-Han;Kim, DoHyung;Park, Chankyu;Kim, Jaehong
    • ETRI Journal / v.35 no.6 / pp.1021-1028 / 2013
  • Automatic facial expression recognition is a widely studied problem in computer vision and human-robot interaction. A range of facial descriptors has been studied for facial expression recognition. Some prominent descriptors were presented in the first facial expression recognition and analysis challenge (FERA2011), in which the Local Gabor Binary Pattern Histogram Sequence descriptor showed the most powerful description capability. In this paper, we introduce hybrid facial representations for facial expression recognition, which offer more powerful description capability at a lower dimensionality. Our descriptors consist of a block-based descriptor and a pixel-based descriptor. The block-based descriptor represents micro-orientation and micro-geometric structure information, and the pixel-based descriptor represents texture information. We validate our descriptors on two public databases, and the results show that our descriptors perform well with a relatively low dimensionality.
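
To give a flavor of block-based description, the sketch below concatenates uniform local binary pattern histograms over a grid of facial blocks; the authors' micro-orientation and micro-geometric descriptors are more elaborate than this plain-LBP stand-in.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def block_lbp_descriptor(face, grid=(8, 8), P=8, R=1):
    """Concatenate uniform-LBP histograms computed over a grid of blocks."""
    lbp = local_binary_pattern(face, P, R, method="uniform")
    n_bins = P + 2                                 # uniform patterns + "non-uniform"
    bh, bw = face.shape[0] // grid[0], face.shape[1] // grid[1]
    hists = []
    for by in range(grid[0]):
        for bx in range(grid[1]):
            block = lbp[by * bh:(by + 1) * bh, bx * bw:(bx + 1) * bw]
            h, _ = np.histogram(block, bins=n_bins, range=(0, n_bins))
            hists.append(h / max(h.sum(), 1))      # per-block normalization
    return np.concatenate(hists)

face = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)
print(block_lbp_descriptor(face).shape)            # (8 * 8 * 10,) = (640,)
```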

Multimodal audiovisual speech recognition architecture using a three-feature multi-fusion method for noise-robust systems

  • Sanghun Jeon;Jieun Lee;Dohyeon Yeo;Yong-Ju Lee;SeungJun Kim
    • ETRI Journal / v.46 no.1 / pp.22-34 / 2024
  • Exposure to varied noisy environments impairs the recognition performance of artificial-intelligence-based speech recognition technologies. Systems whose performance degrades under noise can be deployed only as limited services that assure good performance in certain environments, which impairs the general quality of speech recognition services. This study introduces an audiovisual speech recognition (AVSR) model that is robust to various noise settings, mimicking the elements of human dialogue recognition. The model converts word embeddings and log-Mel spectrograms into feature vectors for audio recognition. A dense spatio-temporal convolutional neural network model extracts features from log-Mel spectrograms transformed for visual-based recognition. This approach exhibits improved aural and visual recognition capabilities. We assess the signal-to-noise ratio in nine synthesized noise environments, and the proposed model exhibits lower average error rates. The error rate of the AVSR model using the three-feature multi-fusion method is 1.711%, compared with 3.939% for the general model. Owing to its enhanced stability and recognition rate, the model is applicable in noise-affected environments.
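
A minimal sketch of the audio front end, a log-Mel spectrogram extractor, is shown below; the parameter values (80 Mel bands, 25 ms window, 10 ms hop) are common ASR defaults rather than the paper's exact settings, and the file name is hypothetical.

```python
import numpy as np
import librosa

def log_mel(wav_path, sr=16000, n_mels=80):
    """Load a waveform and return a (frames, n_mels) log-Mel spectrogram."""
    y, sr = librosa.load(wav_path, sr=sr)
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=400, hop_length=160, n_mels=n_mels)  # 25 ms window, 10 ms hop
    return librosa.power_to_db(mel, ref=np.max).T

# feats = log_mel("utterance.wav")   # hypothetical input file
# print(feats.shape)
```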

Improved Two-Phase Framework for Facial Emotion Recognition

  • Yoon, Hyunjin;Park, Sangwook;Lee, Yongkwi;Han, Mikyong;Jang, Jong-Hyun
    • ETRI Journal / v.37 no.6 / pp.1199-1210 / 2015
  • Automatic emotion recognition based on facial cues, such as facial action units (AUs), has received considerable attention in the last decade due to its wide variety of applications. Current computer-based automated two-phase facial emotion recognition procedures first detect AUs from input images and then infer target emotions from the detected AUs. However, more robust AU detection and AU-to-emotion mapping methods are required to deal with the error accumulation problem inherent in the multiphase scheme. Motivated by our key observation that a single AU detector does not perform equally well for all AUs, we propose a novel two-phase facial emotion recognition framework in which the presence of AUs is detected by the group decisions of multiple AU detectors and a target emotion is inferred from the combined AU detection decisions. Our emotion recognition framework consists of three major components: multiple AU detection, AU detection fusion, and AU-to-emotion mapping. The experimental results on two real-world face databases demonstrate improved performance over the previous two-phase method using a single AU detector, in terms of both AU detection accuracy and correct emotion recognition rate.
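
The two-phase idea can be sketched as follows: each AU's presence is decided by a majority vote over several detectors, and the fused AU set is mapped to an emotion. The detector outputs and the AU-to-emotion rules below are illustrative assumptions (the AU6 + AU12 rule for happiness follows common FACS usage), not the paper's learned mapping.

```python
from collections import Counter

def fuse_au(detections):
    """detections: list of dicts {AU number: 0/1}, one dict per detector."""
    votes = Counter()
    for d in detections:
        for au, present in d.items():
            votes[au] += present
    n = len(detections)
    return {au for au, v in votes.items() if v > n / 2}   # majority decision

RULES = {"happiness": {6, 12}, "surprise": {1, 2, 5, 26}, "sadness": {1, 4, 15}}

def map_emotion(aus):
    # Pick the emotion whose defining AU set is best covered by the fused AUs.
    scores = {emo: len(aus & req) / len(req) for emo, req in RULES.items()}
    return max(scores, key=scores.get)

detectors = [{6: 1, 12: 1, 4: 0}, {6: 1, 12: 0, 4: 0}, {6: 1, 12: 1, 4: 1}]
aus = fuse_au(detectors)          # {6, 12}
print(aus, map_emotion(aus))      # -> happiness
```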