• Title/Summary/Keyword: information expression


A New Incremental Learning Algorithm with Probabilistic Weights Using Extended Data Expression

  • Yang, Kwangmo;Kolesnikova, Anastasiya;Lee, Won Don
    • Journal of information and communication convergence engineering / v.11 no.4 / pp.258-267 / 2013
  • A new incremental learning algorithm using extended data expression, based on probabilistic compounding, is presented in this paper. The incremental learning algorithm generates an ensemble of weak classifiers and compounds them into a strong classifier using weighted majority voting to improve classification performance. We introduce a new probabilistic weighted majority voting scheme founded on extended data expression, in which the class distribution of the output is used to compound the classifiers. UChoo, a decision tree classifier for extended data expression, is used as the base classifier, as it yields an extended output expression that defines the class distribution of the output. Extended data expression and the UChoo classifier are powerful techniques for classification and rule refinement problems. In this paper, extended data expression is applied to obtain probabilistic results with probabilistic majority voting. To show its performance advantages, the new algorithm is compared with Learn++, an incremental ensemble-based algorithm.
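The weighted majority voting over class distributions that this abstract describes can be sketched in a few lines. This is a minimal generic illustration, not the paper's UChoo-based implementation; the classifier weights and class distributions below are made-up examples:

```python
def probabilistic_weighted_vote(distributions, weights):
    """Combine per-classifier class distributions into one decision.

    distributions: one class-probability list per weak classifier.
    weights: one voting weight per classifier.
    Returns the index of the winning class.
    """
    n_classes = len(distributions[0])
    combined = [0.0] * n_classes
    for dist, w in zip(distributions, weights):
        for c, p in enumerate(dist):
            combined[c] += w * p  # weight each classifier's class distribution
    return max(range(n_classes), key=lambda c: combined[c])

# Three weak classifiers, two classes; the second classifier is most trusted.
dists = [[0.6, 0.4], [0.2, 0.8], [0.55, 0.45]]
weights = [1.0, 2.0, 1.0]
print(probabilistic_weighted_vote(dists, weights))  # prints 1
```

Unlike plain majority voting, each classifier contributes its full class distribution, so a confident minority vote can outweigh two lukewarm ones, as in the example above.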

Facial Data Visualization for Improved Deep Learning Based Emotion Recognition

  • Lee, Seung Ho
    • Journal of Information Science Theory and Practice / v.7 no.2 / pp.32-39 / 2019
  • A convolutional neural network (CNN) has been widely used in facial expression recognition (FER) because it can automatically learn discriminative appearance features from an expression image. To make full use of its discriminating capability, this paper suggests a simple but effective method for CNN based FER. Specifically, instead of an original expression image that contains facial appearance only, the expression image with facial geometry visualization is used as input to CNN. In this way, geometric and appearance features could be simultaneously learned, making CNN more discriminative for FER. A simple CNN extension is also presented in this paper, aiming to utilize geometric expression change derived from an expression image sequence. Experimental results on two public datasets (CK+ and MMI) show that CNN using facial geometry visualization clearly outperforms the conventional CNN using facial appearance only.

Micro-Expression Recognition Based on Optical Flow Features and Improved MobileNetV2

  • Xu, Wei;Zheng, Hao;Yang, Zhongxue;Yang, Yingjie
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.6 / pp.1981-1995 / 2021
  • When a person tries to conceal emotions, the real emotions manifest themselves in the form of micro-expressions. Research on facial micro-expression recognition remains extremely challenging in the field of pattern recognition, because it is difficult to devise a feature extraction method that copes with micro-expressions' small changes and short duration. Most methods rely on hand-crafted features to extract subtle facial movements. In this study, we introduce a method that combines optical flow and deep learning. First, we extract the onset frame and the apex frame from each video sequence. Then, the motion features between these two frames are extracted using the optical flow method. Finally, the features are fed into an improved MobileNetV2 model, and an SVM is applied to classify the expressions. To evaluate the effectiveness of the method, we conduct experiments on the public spontaneous micro-expression database CASME II. Under leave-one-subject-out cross-validation, the recognition accuracy reaches 53.01% and the F-score reaches 0.5231. The results show that the proposed method can significantly improve micro-expression recognition performance.
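The leave-one-subject-out protocol used in this evaluation can be sketched generically. This is a schematic in plain Python with toy subject labels; the optical-flow, MobileNetV2, and SVM stages of the paper are abstracted into a caller-supplied placeholder:

```python
def leave_one_subject_out(samples, train_and_eval):
    """Run leave-one-subject-out (LOSO) cross-validation.

    samples: list of (subject_id, features, label) tuples.
    train_and_eval: callable(train, test) -> number of correct test predictions.
    Returns overall accuracy over all held-out samples.
    """
    subjects = sorted({s for s, _, _ in samples})
    correct = total = 0
    for held_out in subjects:
        train = [x for x in samples if x[0] != held_out]
        test = [x for x in samples if x[0] == held_out]
        correct += train_and_eval(train, test)  # evaluate on the unseen subject
        total += len(test)
    return correct / total

# Toy run: a "classifier" that always predicts label 0.
data = [("s1", [0.1], 0), ("s1", [0.2], 1), ("s2", [0.3], 0)]
always_zero = lambda train, test: sum(1 for _, _, y in test if y == 0)
print(leave_one_subject_out(data, always_zero))  # prints 0.666...
```

Holding out an entire subject, rather than random samples, prevents the model from being tested on expressions of a person it has already seen, which is why LOSO accuracies on CASME II are much lower than random-split accuracies.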

A Study on the BIBFRAME's Acceptance of Representative Expression of RDA Toolkit Beta (BIBFRAME에서 RDA Toolkit Beta 대표표현형 적용 방안에 관한 연구)

  • Lee, Mihwa
    • Journal of Korean Library and Information Science Society / v.51 no.1 / pp.1-20 / 2020
  • This study explores how BIBFRAME could accept the new concept of representative expression in the RDA Toolkit Beta, which was released in 2019. The research methods were literature review and mapping between the RDA Toolkit Beta and BIBFRAME. BIBFRAME's acceptance of the representative expression of the RDA Toolkit Beta is suggested as follows. First, the properties of representative expression should be defined in BIBFRAME, because the RDA Toolkit Beta defines expression attributes and representative expression elements separately. Three options for BIBFRAME's acceptance are (1) to develop newly devised representative expression properties, (2) to specify representative expression properties using refinements, and (3) to differentiate representative expression properties using classes. Second, relationship elements should be defined in BIBFRAME to link a work and the representative expression that transfers the expression attributes to the work. This study will contribute to revising BIBFRAME, because it focuses on BIBFRAME's acceptance of the RDA Toolkit Beta, which reflects LRM.

Risk Situation Recognition Using Facial Expression Recognition of Fear and Surprise Expression (공포와 놀람 표정인식을 이용한 위험상황 인지)

  • Kwak, Nae-Jong;Song, Teuk Seob
    • Journal of the Korea Institute of Information and Communication Engineering / v.19 no.3 / pp.523-528 / 2015
  • This paper proposes an algorithm for risk situation recognition using facial expression. The proposed method recognizes the surprise and fear expressions among humans' various emotional expressions in order to recognize risk situations. The method first extracts the facial region from the input image, then detects the eye and lip regions from the extracted face. It then applies uniform LBP to each region, discriminates the facial expression, and recognizes the risk situation. The proposed method is evaluated on the Cohn-Kanade database for facial expression recognition. The database contains six basic human facial expressions: smile, sadness, surprise, anger, disgust, and fear. The proposed method produces good facial expression recognition results and discriminates risk situations well.
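The uniform LBP operator this abstract relies on can be sketched as follows. This is a minimal pure-Python illustration on a single 3×3 neighborhood; the face, eye, and lip detection stages of the paper are omitted, and the sample patch values are made up:

```python
def lbp_code(patch):
    """Basic 8-neighbor LBP code for the center pixel of a 3x3 patch."""
    center = patch[1][1]
    # Clockwise neighbor order starting at the top-left pixel.
    neighbors = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                 patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    bits = [1 if n >= center else 0 for n in neighbors]
    return sum(b << i for i, b in enumerate(bits)), bits

def is_uniform(bits):
    """A pattern is 'uniform' if it has at most two 0/1 transitions circularly."""
    transitions = sum(bits[i] != bits[(i + 1) % len(bits)]
                      for i in range(len(bits)))
    return transitions <= 2

patch = [[9, 9, 9],
         [1, 5, 9],
         [1, 1, 1]]
code, bits = lbp_code(patch)
print(code, is_uniform(bits))  # prints: 15 True
```

Uniform patterns (an edge-like run of 1s, as above) cover most naturally occurring textures, so binning all non-uniform codes together yields a much shorter histogram per face region.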

A Study on the Expression Propensity of Typography in Korean Advertisement - Focused on Printing Advertisement after 2000year - (한국 광고의 타이포그래피 표현 경향 연구 - 2000년도 이후 인쇄광고를 중심으로 -)

  • Kim, Dong-Bin
    • Archives of design research / v.20 no.1 s.69 / pp.219-228 / 2007
  • A print advertisement is an aggregate of commercial information-communication text consisting of image signs and language signs: verbal appeal conveyed through characters is mixed with visual stimulation, and information is delivered through the picture. Typography is the process of visualizing the verbal appeal in a print advertisement, so the study of typography as a visual expression element of print advertising is very important. The typography of Korean print advertising achieved rapid qualitative growth after 2000, following the 1990s. This study examines the typographic expression tendencies of Korean print advertising after 2000 in terms of changes in expression structure, expression rules, and expression methods, extracting three analysis criteria for each. Based on case studies, it then presents directions for expanding typographic expression in print advertising.


A Framework for Facial Expression Recognition Combining Contextual Information and Attention Mechanism

  • Jianzeng Chen;Ningning Chen
    • Journal of Information Processing Systems / v.20 no.4 / pp.535-549 / 2024
  • Facial expressions (FEs) serve as fundamental components for human emotion assessment and human-computer interaction. Traditional convolutional neural networks tend to overlook valuable information during the FE feature extraction, resulting in suboptimal recognition rates. To address this problem, we propose a deep learning framework that incorporates hierarchical feature fusion, contextual data, and an attention mechanism for precise FE recognition. In our approach, we leveraged an enhanced VGGNet16 as the backbone network and introduced an improved group convolutional channel attention (GCCA) module in each block to emphasize the crucial expression features. A partial decoder was added at the end of the backbone network to facilitate the fusion of multilevel features for a comprehensive feature map. A reverse attention mechanism guides the model to refine details layer-by-layer while introducing contextual information and extracting richer expression features. To enhance feature distinguishability, we employed islanding loss in combination with softmax loss, creating a joint loss function. Using two open datasets, our experimental results demonstrated the effectiveness of our framework. Our framework achieved an average accuracy rate of 74.08% on the FER2013 dataset and 98.66% on the CK+ dataset, outperforming advanced methods in both recognition accuracy and stability.

Recognition of Human Facial Expression in a Video Image using the Active Appearance Model

  • Jo, Gyeong-Sic;Kim, Yong-Guk
    • Journal of Information Processing Systems / v.6 no.2 / pp.261-268 / 2010
  • Tracking human facial expression within a video image has many useful applications, such as surveillance and teleconferencing, etc. Initially, the Active Appearance Model (AAM) was proposed for facial recognition; however, it turns out that the AAM has many advantages as regards continuous facial expression recognition. We have implemented a continuous facial expression recognition system using the AAM. In this study, we adopt an independent AAM using the Inverse Compositional Image Alignment method. The system was evaluated using the standard Cohn-Kanade facial expression database, the results of which show that it could have numerous potential applications.

Considerations for BIBFRAME Acceptance of Expression and Representative Expression Attributes in LRM (BIBFRAME에서 LRM 표현형 및 대표표현형 속성 적용시 고려사항)

  • Lee, Mihwa
    • Journal of the Korean BIBLIA Society for library and Information Science / v.30 no.2 / pp.33-50 / 2019
  • Cataloging principles, cataloging rules, and encoding formats should consider the acceptance of LRM, because LRM has replaced FRBR as the conceptual model. This study identifies considerations for BIBFRAME's acceptance of the expression and representative expression attributes in LRM, using literature reviews and expert interviews. First, a work in BIBFRAME, which lacks expression as an entity, could be mapped to both the work and expression of LRM, sustaining the expression by linking two works (work and expression). Second, BIBFRAME must consider the association between representative expression attributes and the specific expressions whose values can be transferred to those attributes. Third, representative expression attributes differ according to work type in LRM, and language, media, intended audience, and scale, which can be used as representative expression attributes in BIBFRAME, should be changed into classes. Fourth, relation properties should be articulated to expand the networks between the expressions originating from a work in BIBFRAME. This study analyzes LRM and BIBFRAME with a focus on the expression entity and the representative expression attributes. Further LRM study is needed on cataloging principles and cataloging rules.

Considerations on gene chip data analysis

  • Lee, Jae-K.
    • Proceedings of the Korean Society for Bioinformatics Conference / 2001.08a / pp.77-102 / 2001
  • Different high-throughput chip technologies are available for genome-wide gene expression studies. Quality control and prescreening analysis are important for rigorous analysis on each type of gene expression data. Statistical significance evaluation of differential expression patterns is needed. Major genome institutes develop database and analysis systems for information sharing of precious expression data.
