• Title/Summary/Keyword: Feature representation

Two-Stage Neural Networks for Sign Language Pattern Recognition (수화 패턴 인식을 위한 2단계 신경망 모델)

  • Kim, Ho-Joon
    • Journal of the Korean Institute of Intelligent Systems, v.22 no.3, pp.319-327, 2012
  • In this paper, we present a sign language recognition model which does not use any wearable devices for object tracking. System design and implementation issues such as data representation, feature extraction, and pattern classification methods are discussed. The proposed data representation method for sign language patterns is robust to spatio-temporal variance of the feature points. We present a feature extraction technique which improves computation speed by reducing the amount of feature data. A neural network model capable of incremental learning is described, and its behavior and learning algorithm are introduced. We define a measure which reflects the relevance between the feature values and the pattern classes; this measure makes it possible to select more effective features without any degradation of performance. The proposed model is evaluated empirically through experiments using six types of sign language patterns.
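
The paper's relevance measure between feature values and pattern classes is not spelled out in the abstract; the sketch below uses mutual information as a stand-in to illustrate relevance-based feature selection of the kind described (the function name, the `keep_ratio` parameter, and the synthetic data are illustrative assumptions).

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def rank_features_by_relevance(X, y, keep_ratio=0.5):
    """Score each feature by its relevance to the class labels and keep
    only the top fraction.  Mutual information is used here as a stand-in
    for the paper's (unspecified) relevance measure."""
    scores = mutual_info_classif(X, y)             # one score per feature column
    order = np.argsort(scores)[::-1]               # most relevant first
    n_keep = max(1, int(keep_ratio * X.shape[1]))
    selected = order[:n_keep]
    return X[:, selected], selected, scores

# Example: 200 gesture samples, 40 features, 6 sign classes (synthetic)
X = np.random.rand(200, 40)
y = np.random.randint(0, 6, size=200)
X_sel, idx, scores = rank_features_by_relevance(X, y, keep_ratio=0.25)
print(X_sel.shape, idx[:5])
```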

Face Representation and Face Recognition using Optimized Local Ternary Patterns (OLTP)

  • Raja, G. Madasamy;Sadasivam, V.
    • Journal of Electrical Engineering and Technology, v.12 no.1, pp.402-410, 2017
  • For many years, researchers in face description have represented and recognized faces using methods that include subspace discriminant analysis, statistical learning, and non-statistical approaches, but automatic face recognition remains an interesting and challenging problem. This paper presents a novel and efficient face image representation method based on Optimized Local Ternary Pattern (OLTP) texture features. The face image is divided into several regions, from which the OLTP texture feature distributions are extracted and concatenated into a feature vector that acts as a face descriptor. Recognition is performed using nearest-neighbor classification with the Chi-square distance as the similarity measure. Extensive experimental results on the Yale B, ORL and AR face databases show that OLTP consistently performs much better than other well-recognized texture models for face recognition.
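
A minimal sketch of the classification side described in the abstract: regional histogram concatenation followed by nearest-neighbor matching with the Chi-square distance. The OLTP coding step itself is not reproduced, and the grid size and bin count are assumptions.

```python
import numpy as np

def chi_square_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two histogram feature vectors."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def regional_histograms(code_map, grid=(4, 4), n_bins=256):
    """Split a per-pixel texture-code map into a grid of regions and
    concatenate the per-region histograms into one face descriptor.
    The OLTP coding of the pixels is assumed to have been done already."""
    h, w = code_map.shape
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = code_map[i * h // grid[0]:(i + 1) * h // grid[0],
                             j * w // grid[1]:(j + 1) * w // grid[1]]
            hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins))
            feats.append(hist / max(hist.sum(), 1))   # normalise each region
    return np.concatenate(feats)

def nearest_neighbor(query, gallery_feats, gallery_labels):
    """Classify by the gallery descriptor with the smallest chi-square distance."""
    d = [chi_square_distance(query, g) for g in gallery_feats]
    return gallery_labels[int(np.argmin(d))]
```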

Neural Text Categorizer for Exclusive Text Categorization

  • Jo, Tae-Ho
    • Journal of Information Processing Systems, v.4 no.2, pp.77-86, 2008
  • This research proposes a new neural network for text categorization which uses an alternative representation of documents instead of numerical vectors. Since the proposed neural network is intended originally only for text categorization, it is called NTC (Neural Text Categorizer) in this research. Numerical vectors representing documents in text mining tasks have two inherent problems: huge dimensionality and sparse distribution. Although many feature selection methods have been developed to address the first problem, the reduced dimension still remains large, and if the dimension is reduced excessively by feature selection, the robustness of text categorization is degraded. Even if SVM (Support Vector Machine) is tolerant of huge dimensionality, it is not tolerant of the second problem. The goal of this research is to address both problems at the same time by proposing a new representation of documents and a new neural network that uses this representation as its input.
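
The abstract does not detail NTC's alternative document representation, but the two problems it cites, huge dimensionality and sparsity of numerical document vectors, can be seen in a few lines (the toy corpus below is an assumption):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "neural networks for text categorization",
    "support vector machines handle high dimensional data",
    "sparse document vectors hurt many classifiers",
]
vec = TfidfVectorizer()
X = vec.fit_transform(docs)                 # scipy sparse document-term matrix
n_docs, n_terms = X.shape                   # dimensionality grows with the vocabulary
density = X.nnz / (n_docs * n_terms)        # most entries are zero
print(f"{n_terms} dimensions, {density:.1%} non-zero entries")
```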

Depth Image-Based Human Action Recognition Using Convolution Neural Network and Spatio-Temporal Templates (시공간 템플릿과 컨볼루션 신경망을 사용한 깊이 영상 기반의 사람 행동 인식)

  • Eum, Hyukmin;Yoon, Changyong
    • The Transactions of The Korean Institute of Electrical Engineers, v.65 no.10, pp.1731-1737, 2016
  • In this paper, a method is proposed to recognize human actions as nonverbal expressions; the proposed method is composed of two steps, action representation and action recognition. First, the MHI (Motion History Image) is used in the action representation step. This step performs segmentation based on depth information and generates spatio-temporal templates that describe actions. Second, a CNN (Convolutional Neural Network), which performs feature extraction and classification, is employed in the action recognition step. It extracts convolutional feature vectors and then uses a classifier to recognize actions. The recognition performance of the proposed method is demonstrated by comparing it with other action recognition methods in the experimental results.
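
A rough sketch of the action representation step as described, a Motion History Image built from thresholded frame differences; the paper's depth-based segmentation and CNN classifier are not reproduced, and the thresholds are assumptions.

```python
import numpy as np

def update_mhi(mhi, motion_mask, timestamp, duration):
    """Classic Motion History Image update: pixels where motion occurs are
    set to the current timestamp; entries older than `duration` are cleared."""
    mhi = mhi.copy()
    mhi[motion_mask] = timestamp
    mhi[mhi < timestamp - duration] = 0
    return mhi

def mhi_from_depth_sequence(frames, motion_thresh=10, duration=20):
    """Build an MHI template from a sequence of depth frames by thresholding
    frame differences (a simplified stand-in for depth-based segmentation)."""
    mhi = np.zeros_like(frames[0], dtype=np.float32)
    for t in range(1, len(frames)):
        diff = np.abs(frames[t].astype(np.int32) - frames[t - 1].astype(np.int32))
        mhi = update_mhi(mhi, diff > motion_thresh, t, duration)
    return mhi / max(len(frames) - 1, 1)   # normalise to [0, 1] as CNN input
```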

Analysis of 2-Dimensional Object Recognition Using discrete Wavelet Transform (이산 웨이브렛 변환을 이용한 2차원 물체 인식에 관한 연구)

  • Park, Kwang-Ho;Kim, Chang-Gu;Kee, Chang-Doo
    • Journal of the Korean Society for Precision Engineering, v.16 no.10, pp.194-202, 1999
  • A method for pattern recognition based on the wavelet transform is proposed in this paper. The boundary of the object to be recognized contains the shape information of the machine part. The contour is first represented as a one-dimensional signal, normalized with respect to translation, rotation, and scale, and then used to build the wavelet-transform representation of the object. Wavelets allow a function to be decomposed into a multi-resolution hierarchy of localized frequency bands. The recognition of 2-dimensional objects is described using the discrete wavelet transform (DWT) as the shape analysis technique. The feature vectors obtained from the wavelet analysis are classified using a multi-layer neural network. The results show that, compared with Fourier descriptors, the wavelet-based representation is more stable and efficient for recognition, and in particular its performance on objects corrupted with noise is better than that of the other method.
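
A hedged sketch of the described pipeline up to feature extraction: a centroid-distance contour signature followed by a discrete wavelet decomposition (using the `pywt` package). The wavelet choice and decomposition level are assumptions, and the neural-network classifier is omitted.

```python
import numpy as np
import pywt

def contour_signature(contour):
    """Represent a closed 2-D contour as a 1-D centroid-distance signal,
    normalised for translation and scale (rotation handling is omitted)."""
    contour = np.asarray(contour, dtype=np.float64)
    centroid = contour.mean(axis=0)
    r = np.linalg.norm(contour - centroid, axis=1)   # translation invariant
    return r / r.max()                               # scale invariant

def dwt_features(signal, wavelet="db4", level=3):
    """Multi-level discrete wavelet transform of the contour signal; the
    coarse approximation coefficients serve as the feature vector that
    would be fed to a multi-layer neural network."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return coeffs[0]
```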

3D Geometric Reasoning for Solid Model Conversion and Feature Recognition (솔리드 모델 변환과 특징형상인식을 위한 기하 추론)

  • Han, Jeonghyun
    • Journal of the Korea Computer Graphics Society, v.3 no.2, pp.77-84, 1997
  • Solid modeling refers to techniques for unambiguous representations of three-dimensional objects. The most widely used techniques for solid modeling have been Constructive Solid Geometry (CSG) and Boundary Representation (BRep). Contemporary solid modeling systems typically support both representations, and bilateral conversions between CSG and BRep are essential. However, computing a CSG from a BRep is largely an open problem. This paper presents 3D geometric reasoning algorithms for converting a BRep into a special CSG, called Destructive Solid Geometry (DSG), whose Boolean operations are all subtractions. The major application area of BRep-to-DSG conversion is feature recognition, which is essential for integrating CAD and CAM.
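
As a structural illustration only (not the paper's conversion algorithm), a DSG can be thought of as a base stock plus an ordered list of subtracted volumes, each of which maps naturally to a machining feature; the class and volume names below are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Volume:
    """Placeholder for a solid volume (e.g., a block or cylinder to subtract)."""
    name: str

@dataclass
class DSG:
    """Destructive Solid Geometry: a base stock from which every Boolean
    operation is a subtraction, as described in the abstract."""
    stock: Volume
    subtractions: List[Volume] = field(default_factory=list)

    def subtract(self, v: Volume):
        self.subtractions.append(v)

part = DSG(stock=Volume("rectangular stock"))
part.subtract(Volume("through hole"))
part.subtract(Volume("pocket"))
print([v.name for v in part.subtractions])   # candidate machining features
```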

Improving Transformer with Dynamic Convolution and Shortcut for Video-Text Retrieval

  • Liu, Zhi;Cai, Jincen;Zhang, Mengmeng
    • KSII Transactions on Internet and Information Systems (TIIS), v.16 no.7, pp.2407-2424, 2022
  • Recently, the Transformer has made great progress in video retrieval tasks due to its high representation capability. In the Transformer structure, the cascaded self-attention modules are capable of capturing long-distance feature dependencies, but local feature details are likely to deteriorate. In addition, increasing the depth of the structure is likely to produce learning bias in the learned features. In this paper, an improved Transformer structure named TransDCS (Transformer with Dynamic Convolution and Shortcut) is proposed. A Multi-head Conv-Self-Attention module is introduced to model local dependencies and improve the efficiency of local feature extraction. Meanwhile, an augmented shortcuts module based on a dual identity matrix is applied to enhance the conduction of input features and mitigate the learning bias. The proposed model is tested on the MSRVTT, LSMDC and ActivityNet benchmarks, and it surpasses all previous solutions for the video-text retrieval task. For example, on the LSMDC benchmark, a gain of about 2.3% MdR and 6.1% MnR is obtained over recently proposed multimodal methods.
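
A hedged approximation of the Multi-head Conv-Self-Attention idea, not the authors' exact module: a standard self-attention branch for long-range dependencies combined with a depthwise convolution branch for local detail, plus a residual shortcut. Layer sizes, the kernel size, and the way the branches are fused are assumptions.

```python
import torch
import torch.nn as nn

class ConvSelfAttention(nn.Module):
    """Self-attention for global dependencies plus a depthwise temporal
    convolution for local feature detail, fused through a residual sum."""
    def __init__(self, dim, n_heads=8, kernel_size=3):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.local = nn.Conv1d(dim, dim, kernel_size,
                               padding=kernel_size // 2, groups=dim)  # depthwise
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                      # x: (batch, seq_len, dim)
        a, _ = self.attn(x, x, x)              # long-range dependencies
        c = self.local(x.transpose(1, 2)).transpose(1, 2)  # local detail
        return self.norm(x + a + c)            # shortcut keeps input features flowing

x = torch.randn(2, 16, 256)                    # 2 clips, 16 tokens, 256-d features
print(ConvSelfAttention(256)(x).shape)         # torch.Size([2, 16, 256])
```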

Systematic Determination of Number of Clusters Based on Input Representation Coverage (클러스터 분석을 위한 IRC기반 클러스터 개수 자동 결정 방법)

  • 신미영
    • Journal of the Institute of Electronics Engineers of Korea CI, v.41 no.6, pp.39-46, 2004
  • One of the significant issues in cluster analysis is identifying the proper number of clusters hidden in given data. In this paper we propose a novel approach to systematically determine the number of clusters based on Input Representation Coverage (IRC), newly defined as a quantified value of how well the original input data in a Gaussian feature space can be captured with a certain number of clusters. Its usability and applicability are also investigated via experiments with synthetic data. Our experimental results show that the proposed approach is quite useful in approximately finding the real number of clusters implicitly contained in the data.
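
The abstract does not give the IRC formula, so the sketch below substitutes a simple coverage score, the fraction of total variance captured by k cluster centres, to illustrate the "smallest k with sufficient coverage" selection procedure; the threshold and the synthetic data are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def coverage(X, k):
    """Stand-in 'coverage' score: fraction of total variance explained by
    k cluster centres (not the paper's IRC definition)."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    total = ((X - X.mean(axis=0)) ** 2).sum()
    return 1.0 - km.inertia_ / total

def choose_k(X, k_max=10, threshold=0.9):
    """Pick the smallest k whose coverage exceeds the threshold."""
    for k in range(1, k_max + 1):
        if coverage(X, k) >= threshold:
            return k
    return k_max

# Three well-separated synthetic clusters
X = np.vstack([np.random.randn(50, 2) + c for c in ([0, 0], [8, 8], [0, 10])])
print(choose_k(X))   # expected to return 3 for this synthetic data
```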

Development of Feature Based Modeller Using Boundary Representation (경계표현법을 기본으로 한 특징형상 모델러의 개발)

  • 홍상훈;서효원;이상조
    • Transactions of the Korean Society of Mechanical Engineers, v.17 no.10, pp.2446-2456, 1993
  • Owing to the progress of computer science, CAD/CAM technology has developed greatly in each area, but the problems in the integration of CAD and CAM are not yet solved completely. The reason is that exchanging data between CAD and CAM is difficult because the domains of design and manufacturing are different in nature. To solve this problem, a feature-based modeller is developed in this study, which makes it possible to communicate between design and manufacturing through features. To model features, the concept of a semi-bounded plane is introduced and implemented in a B-rep sheet model using a half-edge data structure. Features are then created on a part by local modification of its boundary based on feature template information. This approach generalizes the modelling of features in a geometric model.
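
A minimal sketch of the half-edge boundary structure mentioned in the abstract, as one might lay it out in code; the semi-bounded-plane and feature-template logic of the modeller is not shown, and all names are illustrative.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Vertex:
    x: float
    y: float
    z: float

@dataclass
class HalfEdge:
    origin: Vertex
    twin: Optional["HalfEdge"] = None     # opposite half-edge on the adjacent face
    next: Optional["HalfEdge"] = None     # next half-edge around the same face loop
    face: Optional["Face"] = None

@dataclass
class Face:
    outer: Optional[HalfEdge] = None      # one half-edge of the outer loop

def face_vertices(face):
    """Walk the face loop via `next` pointers and collect its vertices."""
    verts, he = [], face.outer
    while he is not None:
        verts.append(he.origin)
        he = he.next
        if he is face.outer:
            break
    return verts
```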

Segmentation of Bacterial Cells Based on a Hybrid Feature Generation and Deep Learning (하이브리드 피처 생성 및 딥 러닝 기반 박테리아 세포의 세분화)

  • Lim, Seon-Ja;Vununu, Caleb;Kwon, Ki-Ryong;Youn, Sung-Dae
    • Journal of Korea Multimedia Society, v.23 no.8, pp.965-976, 2020
  • We present in this work a segmentation method for E. coli bacterial images generated via phase contrast microscopy, using a deep learning based hybrid feature generation. Unlike conventional machine learning methods that use hand-crafted features, we adopt a denoising autoencoder in order to generate a precise and accurate representation of the pixels. We first construct a hybrid vector that combines the original image, a difference of Gaussians, and image gradients. The created hybrid features are then given to a deep autoencoder that learns the pixels' internal dependencies and the cells' shape and boundary information. The latent representations learned by the autoencoder are used as the inputs of a softmax classification layer, and the direct outputs from the classifier represent the coarse segmentation mask. Finally, the classifier's outputs are used as prior information for a graph-partitioning based fine segmentation. We demonstrate that the proposed hybrid vector representation preserves the global shape and boundary information of the cells, allowing the majority of the cellular patterns to be retrieved without any post-processing.
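
A small sketch of the hybrid per-pixel feature construction the abstract describes (raw intensity, difference of Gaussians, and image gradients stacked channel-wise); the denoising autoencoder, softmax classifier and graph-partitioning stages are not reproduced, and the Gaussian scales are assumptions.

```python
import numpy as np
from scipy import ndimage

def hybrid_pixel_features(image, sigma1=1.0, sigma2=2.0):
    """Per-pixel hybrid feature vector combining the raw intensity, a
    difference-of-Gaussians response and the two image gradients, stacked
    channel-wise as the autoencoder input."""
    image = image.astype(np.float32)
    dog = ndimage.gaussian_filter(image, sigma1) - ndimage.gaussian_filter(image, sigma2)
    gy, gx = np.gradient(image)
    return np.stack([image, dog, gx, gy], axis=-1)   # shape: (H, W, 4)

img = np.random.rand(64, 64)
print(hybrid_pixel_features(img).shape)   # (64, 64, 4)
```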