• Title/Summary/Keyword: discriminative features

CRF-Based Figure/Ground Segmentation with Pixel-Level Sparse Coding and Neighborhood Interactions

  • Zhang, Lihe;Piao, Yongri
    • Journal of information and communication convergence engineering
    • /
    • v.13 no.3
    • /
    • pp.205-214
    • /
    • 2015
  • In this paper, we propose a new approach to learning a discriminative model for figure/ground segmentation by incorporating the bag-of-features and conditional random field (CRF) techniques. We advocate the use of image patches instead of superpixels as the basic processing unit. Superpixels have a homogeneous appearance and adhere to object boundaries, whereas an image patch often contains more discriminative information (e.g., local image structure) for distinguishing categories. We use pixel-level sparse coding to represent an image patch. With the proposed feature representation, the unary classifier achieves considerable binary segmentation performance. Further, we integrate unary and pairwise potentials into the CRF model to refine the segmentation results. The pairwise potentials include color and texture potentials with neighborhood interactions, and an edge potential. High segmentation accuracy is demonstrated on three benchmark datasets: the Weizmann horse dataset, the VOC2006 cow dataset, and the MSRC multiclass dataset. Extensive experiments show that the proposed approach performs favorably against state-of-the-art approaches.
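
A minimal sketch of the kind of binary CRF energy the abstract describes, with unary costs from a patch classifier and a contrast-sensitive pairwise term on a 4-connected grid. The potential forms and the `crf_energy` helper are illustrative assumptions, not the paper's exact color, texture, and edge potentials.

```python
# Illustrative sketch only: a generic binary CRF energy with unary terms from
# a patch classifier and contrast-sensitive Potts pairwise terms.
import numpy as np

def crf_energy(labels, unary, image, w_pair=1.0, sigma=10.0):
    """labels: (H, W) in {0, 1}; unary: (H, W, 2) per-label cost;
    image: (H, W) grayscale used for the contrast-sensitive weight."""
    h, w = labels.shape
    energy = unary[np.arange(h)[:, None], np.arange(w)[None, :], labels].sum()
    # 4-connected pairwise term: penalize label changes, less across strong edges.
    for dy, dx in [(0, 1), (1, 0)]:
        a, b = labels[:h - dy, :w - dx], labels[dy:, dx:]
        diff = (image[:h - dy, :w - dx] - image[dy:, dx:]) ** 2
        weight = w_pair * np.exp(-diff / (2 * sigma ** 2))
        energy += (weight * (a != b)).sum()
    return energy
```

A lower energy corresponds to a better figure/ground labeling; in practice the labeling would be optimized with graph cuts or a similar inference routine.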

A Multi-Scale Parallel Convolutional Neural Network Based Intelligent Human Identification Using Face Information

  • Li, Chen;Liang, Mengti;Song, Wei;Xiao, Ke
    • Journal of Information Processing Systems
    • /
    • v.14 no.6
    • /
    • pp.1494-1507
    • /
    • 2018
  • Intelligent human identification using face information has been a research hotspot, with applications ranging from Internet of Things (IoT) services, intelligent self-service banking, and intelligent surveillance to public safety and intelligent access control. Because 2D face images are usually captured from a long distance in an unconstrained environment, exploiting this advantage and making human recognition suitable for wider intelligent applications with higher security and convenience must overcome several key difficulties: gray-scale changes caused by illumination variation, occlusion caused by glasses, hair, or a scarf, and self-occlusion and deformation caused by pose or expression variation. Many solutions have been proposed, but most of them improve recognition performance under only one influence factor, which still cannot meet real face recognition scenarios. In this paper, we propose a multi-scale parallel convolutional neural network architecture to extract deep, robust facial features with high discriminative ability. Extensive experiments are conducted on the CMU-PIE, extended FERET, and AR databases, and the results show that the proposed algorithm exhibits excellent discriminative ability compared with other existing algorithms.
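
A minimal PyTorch sketch of a multi-scale parallel CNN: several branches with different kernel sizes process the same face image, and their pooled features are concatenated before classification. The branch depths, kernel sizes, and the `MultiScaleParallelCNN` class are assumptions for illustration, not the architecture reported in the paper.

```python
# Illustrative sketch only: parallel branches at different receptive-field scales.
import torch
import torch.nn as nn

class MultiScaleParallelCNN(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        def branch(k):
            return nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=k, padding=k // 2), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=k, padding=k // 2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4),
            )
        # Each branch sees the face at a different scale (3x3, 5x5, 7x7 kernels).
        self.branches = nn.ModuleList([branch(k) for k in (3, 5, 7)])
        self.classifier = nn.Linear(3 * 32 * 4 * 4, num_classes)

    def forward(self, x):
        feats = [b(x).flatten(1) for b in self.branches]
        return self.classifier(torch.cat(feats, dim=1))
```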

Facial Manipulation Detection with Transformer-based Discriminative Features Learning Vision (트랜스포머 기반 판별 특징 학습 비전을 통한 얼굴 조작 감지)

  • Van-Nhan Tran;Minsu Kim;Philjoo Choi;Suk-Hwan Lee;Hoanh-Su Le;Ki-Ryong Kwon
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.11a
    • /
    • pp.540-542
    • /
    • 2023
  • Due to the serious issues posed by facial manipulation technologies, many researchers are becoming increasingly interested in the identification of face forgeries. The majority of existing face forgery detection methods leverage the powerful data-adaptation ability of neural networks to derive distinguishing traits. These deep learning-based detection methods frequently treat the detection of fake faces as a binary classification problem and employ a softmax loss to guide CNN training. However, the features learned under a softmax loss alone are not sufficiently discriminative. To overcome these limitations, in this study we introduce a novel discriminative feature learning method based on the Vision Transformer architecture. Additionally, a separation-center loss is designed to compress the intra-class variation of original faces while enlarging inter-class differences in the embedding space.
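
A minimal sketch of a center-loss-style term that captures the same idea: pull embeddings toward their class center (intra-class compactness) while keeping the real and fake centers apart (inter-class separation). The exact form of the paper's separation-center loss is not given here; the `SeparationCenterLoss` module and its margin are assumptions for illustration, and such a term would normally be added to a standard cross-entropy loss.

```python
# Illustrative sketch only: intra-class compactness plus inter-center separation.
import torch
import torch.nn as nn

class SeparationCenterLoss(nn.Module):
    def __init__(self, feat_dim, margin=1.0):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(2, feat_dim))  # real / fake centers
        self.margin = margin

    def forward(self, feats, labels):
        # Compactness: squared distance of each embedding to its own class center.
        intra = ((feats - self.centers[labels]) ** 2).sum(dim=1).mean()
        # Separation: keep the two class centers at least `margin` apart.
        center_dist = (self.centers[0] - self.centers[1]).norm()
        inter = torch.clamp(self.margin - center_dist, min=0.0)
        return intra + inter
```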

A Sparse Target Matrix Generation Based Unsupervised Feature Learning Algorithm for Image Classification

  • Zhao, Dan;Guo, Baolong;Yan, Yunyi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.6
    • /
    • pp.2806-2825
    • /
    • 2018
  • Unsupervised learning has shown good performance on image, video, and audio classification tasks, and much progress has been made so far. It studies how systems can learn to represent particular input patterns in a way that reflects the statistical structure of the overall collection of input patterns. Many promising deep learning systems are commonly trained in a greedy layerwise unsupervised manner. The performance of these deep learning architectures benefits from the ability of unsupervised learning to disentangle abstractions and pick out useful features. However, existing unsupervised learning algorithms are often difficult to train, partly because of the large number of hyperparameters they require. Tuning these hyperparameters is a laborious task that requires expert knowledge, rules of thumb, or extensive search. In this paper, we propose a simple and effective unsupervised feature learning algorithm for image classification, which explicitly optimizes for population and lifetime sparsity. First, a sparse target matrix is built using competitive rules. Then, the sparse features are optimized by minimizing the Euclidean ($L_2$) error between the sparse target and the competitive-layer outputs. Finally, a classifier is trained using the obtained sparse features. Experimental results show that the proposed method achieves good performance for image classification and provides discriminative features that generalize well.
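
A minimal NumPy sketch of the overall loop: build a sparse target matrix from the competitive-layer outputs (here, a simple top-k winner rule per sample) and refit the weights by least squares to minimize the $L_2$ error to that target. The `sparse_target` rule and the iteration count are assumptions for illustration, not the paper's exact competitive rules.

```python
# Illustrative sketch only: sparse-target construction plus L2 refitting.
import numpy as np

def sparse_target(outputs, k_per_sample=5):
    """Keep the k largest activations per sample (population sparsity),
    set the winners to 1, and zero everything else."""
    target = np.zeros_like(outputs)
    idx = np.argsort(outputs, axis=1)[:, -k_per_sample:]
    np.put_along_axis(target, idx, 1.0, axis=1)
    return target

def fit_features(X, n_features=64, n_iters=10, k=5, seed=0):
    rng = np.random.default_rng(seed)
    W = 0.01 * rng.standard_normal((X.shape[1], n_features))
    for _ in range(n_iters):
        T = sparse_target(X @ W, k)                # competitive step
        W, *_ = np.linalg.lstsq(X, T, rcond=None)  # minimize ||XW - T||_2
    return W
```

The learned features X @ W would then be fed to an ordinary classifier, as the abstract describes.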

New Feature Selection Method for Text Categorization

  • Wang, Xingfeng;Kim, Hee-Cheol
    • Journal of information and communication convergence engineering
    • /
    • v.15 no.1
    • /
    • pp.53-61
    • /
    • 2017
  • The preferred feature selection methods for text classification are filter-based. In a common filter-based feature selection scheme, unique scores are assigned to features; these features are then sorted according to their scores. The last step is to add the top-N features to the feature set. In this paper, we propose an improved global feature selection scheme in which this last step is modified to obtain a more representative feature set. The proposed method aims to improve the classification performance of global feature selection methods by creating a feature set that represents all classes almost equally. For this purpose, a local feature selection method is used to label features according to their discriminative power over the classes, and these labels are used when producing the feature set. Experimental results obtained using the well-known 20 Newsgroups and Reuters-21578 datasets with the k-nearest neighbor algorithm and a support vector machine indicate that the proposed method improves classification performance in terms of a widely known metric ($F_1$).
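
A minimal sketch of the class-balancing idea: each feature is labeled with the class it discriminates best (from a local score), and the final feature set is then filled class by class in round-robin order of a global score. The `balanced_feature_selection` function and both scoring inputs are assumptions for illustration, not the paper's exact local and global measures.

```python
# Illustrative sketch only: round-robin selection so every class is represented.
import numpy as np

def balanced_feature_selection(local_scores, global_scores, n_select):
    """local_scores: (n_features, n_classes) per-class discriminative power;
    global_scores: (n_features,) single filter score per feature."""
    best_class = local_scores.argmax(axis=1)
    n_classes = local_scores.shape[1]
    # Within each class label, rank features by their global score.
    per_class = [
        sorted(np.where(best_class == c)[0], key=lambda f: -global_scores[f])
        for c in range(n_classes)
    ]
    selected, rnd = [], 0
    while len(selected) < n_select and rnd <= max(len(p) for p in per_class):
        for c in range(n_classes):
            if rnd < len(per_class[c]) and len(selected) < n_select:
                selected.append(per_class[c][rnd])
        rnd += 1
    return selected
```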

Finger Vein Recognition Using Generalized Local Line Binary Pattern

  • Lu, Yu;Yoon, Sook;Xie, Shan Juan;Yang, Jucheng;Wang, Zhihui;Park, Dong Sun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.5
    • /
    • pp.1766-1784
    • /
    • 2014
  • Finger vein images contain rich oriented features. The local line binary pattern (LLBP) is a good oriented-feature representation method extended from the local binary pattern (LBP), but it is limited in that it can extract only horizontal and vertical line patterns, so useful information in an image may not be fully exploited. In this paper, an orientation-selectable LLBP method, called the generalized local line binary pattern (GLLBP), is proposed for finger vein recognition. GLLBP extends LLBP to line-pattern extraction in any orientation. To effectively improve matching accuracy, the soft power metric is employed to calculate the matching score. Furthermore, to fully utilize the oriented features in an image, the matching scores from the line patterns with the best discriminative ability are fused using the Hamacher rule to obtain the final matching score for recognition. Experimental results on our database, MMCBNU_6000, show that the proposed method performs much better than state-of-the-art algorithms that use oriented and local features, such as LBP, LLBP, Gabor filters, steerable filters, and the local direction code (LDC).
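
A minimal sketch of score-level fusion with the Hamacher rule, taking the Hamacher product of per-orientation matching scores. It assumes the scores are similarities normalized to [0, 1]; the GLLBP feature extraction and the soft power metric are omitted.

```python
# Illustrative sketch only: Hamacher product fusion of per-orientation scores.
def hamacher(a, b, eps=1e-12):
    """Hamacher product t-norm: a*b / (a + b - a*b)."""
    return (a * b) / (a + b - a * b + eps)

def fuse_scores(scores):
    """Fuse matching scores from the most discriminative orientations."""
    fused = scores[0]
    for s in scores[1:]:
        fused = hamacher(fused, s)
    return fused

# Example: scores from three line-pattern orientations for one probe/gallery pair.
print(fuse_scores([0.82, 0.76, 0.91]))
```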

Statistical Speech Feature Selection for Emotion Recognition

  • Kwon Oh-Wook;Chan Kwokleung;Lee Te-Won
    • The Journal of the Acoustical Society of Korea
    • /
    • v.24 no.4E
    • /
    • pp.144-151
    • /
    • 2005
  • We evaluate the performance of emotion recognition from speech signals when an ordinary speaker talks to an entertainment robot. For each frame of a speech utterance, we extract frame-based features: pitch, energy, formants, band energies, mel-frequency cepstral coefficients (MFCCs), and the velocity/acceleration of pitch and MFCCs. For discriminative classifiers, a fixed-length utterance-based feature vector is computed from the statistics of the frame-based features. Using a speaker-independent database, we evaluate the performance of two promising classifiers: the support vector machine (SVM) and the hidden Markov model (HMM). For angry/bored/happy/neutral/sad emotion classification, the SVM and HMM classifiers yield 42.3% and 40.8% accuracy, respectively. We show that this accuracy is significant compared with the performance of foreign human listeners.
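
A minimal sketch of how variable-length frame-based features can be turned into a fixed-length utterance vector via simple statistics and then fed to an SVM. The particular statistics (mean, standard deviation, minimum, maximum) and the `utterance_vector` helper are assumptions for illustration; the paper's exact statistics may differ.

```python
# Illustrative sketch only: utterance-level statistics of frame features + SVM.
import numpy as np
from sklearn.svm import SVC

def utterance_vector(frames):
    """frames: (n_frames, n_feat) array, e.g. pitch, energy, MFCCs per frame."""
    return np.concatenate([
        frames.mean(axis=0), frames.std(axis=0),
        frames.min(axis=0), frames.max(axis=0),
    ])

def train_emotion_svm(utterances, labels):
    """utterances: list of (n_frames, n_feat) arrays; labels: emotion ids."""
    X = np.stack([utterance_vector(u) for u in utterances])
    clf = SVC(kernel="rbf", C=1.0)
    clf.fit(X, labels)
    return clf
```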

Personal Identification Using Inner Face of Fingers from Contactless Hand Image (비접촉 손 영상에서 손가락 면을 이용한 개인 식별)

  • Kim, Min-Ki
    • Journal of Korea Multimedia Society
    • /
    • v.17 no.8
    • /
    • pp.937-945
    • /
    • 2014
  • A multi-modal biometric system can fall back on another biometric trait when one trait is deficient. It also has the advantage of improving personal identification performance by combining multiple biometric traits, so studies on new biometric traits have been carried out continuously. The inner surface of the finger is a relatively new biometric trait. Its two major features, knuckle lines and wrinkles, can be used as discriminative features. This paper proposes a finger identification method based on displacement vectors to effectively handle the variations that appear in contactless hand images. First, the proposed method produces displacement vectors by connecting corresponding points obtained by matching each pair of local blocks. It then recognizes a finger by measuring the similarity among all detected displacement vectors. Experimental results on the public CASIA hand image database show that the proposed method can be effectively applied to personal identification.
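
A minimal sketch of the block-matching idea: for each local block of one finger image, find the best-matching position in the other image and record the offset as a displacement vector, then score the pair by how consistent those vectors are. The block size, search range, SSD matching, and the spread-based `similarity` score are assumptions for illustration, not the paper's exact matching and similarity measures.

```python
# Illustrative sketch only: block matching, displacement vectors, consistency score.
import numpy as np

def block_displacements(img_a, img_b, block=16, search=8):
    """Return one (dy, dx) displacement per block of img_a matched into img_b."""
    a, b = img_a.astype(float), img_b.astype(float)
    h, w = a.shape
    vectors = []
    for y in range(0, h - block, block):
        for x in range(0, w - block, block):
            ref = a[y:y + block, x:x + block]
            best, best_dv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        ssd = ((b[yy:yy + block, xx:xx + block] - ref) ** 2).sum()
                        if ssd < best:
                            best, best_dv = ssd, (dy, dx)
            vectors.append(best_dv)
    return np.array(vectors)

def similarity(vectors):
    # Genuine pairs tend to give coherent displacements, i.e. low spread.
    return 1.0 / (1.0 + vectors.std(axis=0).sum())
```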

New approach to two wheelers detection using Cell Comparison

  • Lee, Yeunghak;Kim, Taesun;Lee, Sanghoon;Shim, Jaechang
    • Journal of Multimedia Information System
    • /
    • v.1 no.1
    • /
    • pp.45-53
    • /
    • 2014
  • This article describes a system for detecting two-wheelers ridden by people, based on a modified histogram of oriented gradients (HOG), for vision-based intelligent vehicles. These features, which use a correlation-coefficient parameter, are able to classify the variable and complicated shapes of a two-wheeler under different viewpoints as well as the rider's appearance. Our system also maintains the simple evaluation of the traditional formulation while being more discriminative. In this paper, we propose an evolutionary method that trains part-based models for multi-view detection: frontal, rear, and side views (within $60^{\circ}$). Our experimental results show that a detection system for two-wheelers ridden by people based on the proposed approach achieves a higher detection accuracy than traditional features.
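
A minimal sketch of the cell-comparison idea: compute an orientation histogram for each HOG cell, then use the correlation coefficients between pairs of cell histograms as features that capture relationships between cells. The cell size, bin count, and the all-pairs comparison in `cell_comparison_features` are assumptions for illustration, not the paper's modified HOG; it also assumes cells with non-zero gradient energy so the correlations are well defined.

```python
# Illustrative sketch only: per-cell orientation histograms and their correlations.
import numpy as np

def cell_histograms(magnitude, orientation, cell=8, bins=9):
    """magnitude/orientation: (H, W) gradient magnitude and angle in degrees [0, 180)."""
    h, w = magnitude.shape
    hists = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            o = orientation[y:y + cell, x:x + cell].ravel()
            m = magnitude[y:y + cell, x:x + cell].ravel()
            hist, _ = np.histogram(o, bins=bins, range=(0, 180), weights=m)
            hists.append(hist)
    return np.array(hists)

def cell_comparison_features(hists):
    # Correlation coefficient between every pair of cell histograms.
    corr = np.corrcoef(hists)
    return corr[np.triu_indices_from(corr, k=1)]
```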

Classification of Cognitive States from fMRI data using Fisher Discriminant Ratio and Regions of Interest

  • Do, Luu Ngoc;Yang, Hyung Jeong
    • International Journal of Contents
    • /
    • v.8 no.4
    • /
    • pp.56-63
    • /
    • 2012
  • In recent decades, the analysis of human brain activity has made considerable progress using the functional magnetic resonance imaging (fMRI) technique. fMRI data provide a sequence of three-dimensional images of brain activity that can be used to detect instantaneous cognitive states by applying machine learning methods. In this paper, we propose a new approach for distinguishing human cognitive states such as "observing a picture" versus "reading a sentence" and "reading an affirmative sentence" versus "reading a negative sentence". Since fMRI data are high-dimensional (about 100,000 features per sample), extremely sparse, and noisy, feature selection is a very important step for increasing classification accuracy and reducing processing time. We used the Fisher Discriminant Ratio to select the most powerful discriminative features from selected Regions of Interest (ROIs). The experimental results showed that our approach achieves the best performance compared with other feature extraction methods, with average accuracies of approximately 95.83% in the first study and 99.5% in the second study.
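
A minimal sketch of Fisher Discriminant Ratio feature ranking for a two-class problem: for each feature $j$, $\mathrm{FDR}_j = (\mu_{1,j} - \mu_{2,j})^2 / (\sigma_{1,j}^2 + \sigma_{2,j}^2)$, and the top-k features are kept. The ROI masking step is omitted, and `select_top_features` with its default k is an assumption for illustration.

```python
# Illustrative sketch only: per-feature Fisher Discriminant Ratio ranking.
import numpy as np

def fisher_discriminant_ratio(X, y):
    """X: (n_samples, n_features); y: binary labels in {0, 1}.
    FDR_j = (mu0_j - mu1_j)**2 / (var0_j + var1_j)."""
    X0, X1 = X[y == 0], X[y == 1]
    num = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
    den = X0.var(axis=0) + X1.var(axis=0) + 1e-12  # guard against zero variance
    return num / den

def select_top_features(X, y, k=500):
    scores = fisher_discriminant_ratio(X, y)
    return np.argsort(scores)[::-1][:k]
```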