• Title/Summary/Keyword: information expression


A Study on Building Structures and Processes for Intelligent Web Document Classification (지능적인 웹문서 분류를 위한 구조 및 프로세스 설계 연구)

  • Jang, Young-Cheol
    • Journal of Digital Convergence / v.6 no.4 / pp.177-183 / 2008
  • This paper offers a solution based on intelligent document classification for building a user-centric information retrieval system that accepts user-centric linguistic expression. Structures for expressing user intention and a fine-grained document classification process using EBL, similarity, a knowledge base, and user intention are proposed. To avoid the need for large, exact semantic information, a hybrid process is designed that integrates keyword, thesaurus, probability, and user-intention information. A user intention tree hierarchy is built, and a method of extracting group intention between keywords and user intentions is proposed. These structures and processes are implemented in the HDCI (Hybrid Document Classification with Intention) system. HDCI consists of a user-intention analysis stage and a web-document classification stage; the classification stage is composed of a knowledge-base process, a similarity process, and a hybrid coordinating process. With the help of the intention-related structures and the hybrid coordinating process, HDCI can efficiently categorize web documents according to a user's complex linguistic expression with little a priori information.

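The hybrid coordination described in this abstract can be illustrated with a minimal sketch: keyword evidence and user-intention evidence are scored separately and combined with weights to pick a category. The category names, weights, and scoring functions below are illustrative assumptions, not the HDCI implementation.

```python
# A minimal sketch (not the paper's implementation) of hybrid coordination:
# keyword and user-intention scores are combined into one category score.
from collections import Counter

CATEGORIES = {
    "travel": {"keywords": {"flight", "hotel", "tour"}, "intentions": {"plan_trip"}},
    "finance": {"keywords": {"loan", "stock", "interest"}, "intentions": {"invest"}},
}

def keyword_score(tokens, keywords):
    counts = Counter(tokens)
    return sum(counts[k] for k in keywords) / max(len(tokens), 1)

def intention_score(user_intentions, category_intentions):
    if not user_intentions:
        return 0.0
    return len(user_intentions & category_intentions) / len(user_intentions)

def classify(tokens, user_intentions, w_kw=0.6, w_int=0.4):
    # Hybrid coordination: linear combination of the two evidence sources.
    scores = {
        name: w_kw * keyword_score(tokens, spec["keywords"])
        + w_int * intention_score(user_intentions, spec["intentions"])
        for name, spec in CATEGORIES.items()
    }
    return max(scores, key=scores.get), scores

label, scores = classify(["cheap", "flight", "and", "hotel"], {"plan_trip"})
print(label, scores)
```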

Song Player by Distance Measurement from Face (얼굴에서 거리 측정에 의한 노래 플레이어)

  • Shin, Seong-Yoon;Lee, Min-Hye;Shin, Kwang-Seong;Lee, Hyun-Chang
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.667-669 / 2022
  • This paper presents Face Song Player, a system that recognizes an individual's facial expression and plays music appropriate for that person. The system learns information about facial contour lines, extracts an average, and thereby acquires facial shape information. The MUCT database was used for training. For expression recognition, an algorithm was designed that uses the differences in the characteristic features of each expression relative to expressionless baseline images.

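A hedged sketch of the difference-from-neutral idea in this abstract: an expression is scored by how landmark-derived distance measurements deviate from an expressionless baseline, and the nearest expression template wins. The landmark coordinates and templates below are hypothetical; the paper itself learns from the MUCT database.

```python
# Illustrative only: measurements, templates, and landmarks are assumptions.
import numpy as np

def measurements(landmarks):
    # landmarks: (N, 2) array of facial landmark coordinates.
    # A few illustrative distances; real systems use many more.
    mouth_width = np.linalg.norm(landmarks[0] - landmarks[1])
    mouth_open  = np.linalg.norm(landmarks[2] - landmarks[3])
    brow_raise  = np.linalg.norm(landmarks[4] - landmarks[5])
    return np.array([mouth_width, mouth_open, brow_raise])

def classify_expression(landmarks, neutral, templates):
    # Deviation of the current face from the expressionless baseline.
    delta = measurements(landmarks) - measurements(neutral)
    # Nearest expression template (e.g. "happy", "sad") wins.
    return min(templates, key=lambda name: np.linalg.norm(delta - templates[name]))

neutral = np.array([[0, 0], [40, 0], [20, 5], [20, 15], [10, 60], [10, 70]], float)
happy   = neutral + [[-5, 0], [5, 0], [0, 0], [0, 5], [0, 0], [0, 0]]
templates = {"happy": np.array([10.0, 5.0, 0.0]), "sad": np.array([-6.0, -2.0, -3.0])}
print(classify_expression(happy, neutral, templates))  # expected: "happy"
```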

Ensemble Gene Selection Method Based on Multiple Tree Models

  • Mingzhu Lou
    • Journal of Information Processing Systems / v.19 no.5 / pp.652-662 / 2023
  • Identifying highly discriminating genes is a critical step in tumor recognition tasks based on microarray gene expression profile data and machine learning. Gene selection based on tree models has been the subject of several studies. However, these methods rely on a single-tree model, which is often not robust to ultra-high-dimensional microarray datasets, resulting in the loss of useful information and unsatisfactory classification accuracy. Motivated by the limitations of single-tree-based gene selection, this study examines ensemble gene selection methods based on multiple tree models to improve the classification performance of tumor identification. Specifically, the three most representative tree models were selected: ID3, random forest, and gradient boosting decision tree. Each tree model selects the top-n genes from the microarray dataset based on its intrinsic mechanism. Subsequently, three ensemble gene selection methods, namely multiple-tree model intersection, multiple-tree module union, and multiple-tree module cross-union, were investigated. Experimental results on five benchmark public microarray gene expression datasets show that the multiple-tree module union is significantly superior to gene selection based on a single tree model and to other competitive gene selection methods in classification accuracy.
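
A minimal sketch of the ensemble idea, under the assumption that scikit-learn's decision tree (entropy criterion), random forest, and gradient boosting models stand in for the paper's ID3, random forest, and GBDT: each model ranks genes by its own importance measure, and the top-n lists are combined by union or intersection. The data here is synthetic.

```python
# Hedged sketch: not the paper's pipeline, only the combine-top-n-lists idea.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))             # 60 samples x 200 "genes"
y = (X[:, 3] + X[:, 17] > 0).astype(int)   # toy labels driven by two genes

def top_n_genes(model, n=10):
    model.fit(X, y)
    return set(np.argsort(model.feature_importances_)[-n:])

models = [
    DecisionTreeClassifier(criterion="entropy", random_state=0),  # ID3-like
    RandomForestClassifier(n_estimators=200, random_state=0),
    GradientBoostingClassifier(random_state=0),
]
selections = [top_n_genes(m) for m in models]

union_genes = set.union(*selections)                  # multiple-tree union
intersection_genes = set.intersection(*selections)    # multiple-tree intersection
print(sorted(union_genes), sorted(intersection_genes))
```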

An Intelligent Emotion Recognition Model Using Facial and Bodily Expressions

  • Jae Kyeong Kim;Won Kuk Park;Il Young Choi
    • Asia Pacific Journal of Information Systems / v.27 no.1 / pp.38-53 / 2017
  • As sensor and image processing technologies make it easy to collect information on users' behavior, many researchers have examined automatic emotion recognition based on facial expressions, body expressions, and tone of voice, among other cues. In the multimodal case combining facial and body expressions, most studies have relied on normal cameras, which generally produce only two-dimensional images and therefore yield a limited amount of information. In the present research, we propose an artificial neural network-based model that uses a high-definition webcam and a Kinect sensor to recognize users' emotions from facial and bodily expressions while they watch a movie trailer. We validate the proposed model in a naturally occurring field environment rather than in an artificially controlled laboratory setting. The results of this research should support the wider use of emotion recognition models in advertisements, exhibitions, and interactive shows.
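
A minimal sketch, with synthetic data, of the general approach: facial and bodily feature vectors are fused and fed to a small neural network classifier. This is not the paper's architecture or feature set.

```python
# Illustrative feature-level fusion followed by a small neural network.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n = 300
facial = rng.normal(size=(n, 20))   # e.g. facial landmark / expression features
body   = rng.normal(size=(n, 12))   # e.g. Kinect joint-angle features
emotion = (facial[:, 0] + body[:, 0] > 0).astype(int)  # toy binary label

X = np.hstack([facial, body])       # simple feature-level fusion
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=1)
clf.fit(X[:200], emotion[:200])
print("held-out accuracy:", clf.score(X[200:], emotion[200:]))
```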

An Action Unit co-occurrence constraint 3DCNN based Action Unit recognition approach

  • Jia, Xibin;Li, Weiting;Wang, Yuechen;Hong, SungChan;Su, Xing
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.3 / pp.924-942 / 2020
  • Facial expressions vary considerably from person to person because of psychological factors, whereas facial actions are comparatively stable because the anatomic structure of the face is fixed. Improving the performance of action unit recognition therefore facilitates facial expression recognition and provides a solid basis for mental state analysis. However, it remains a challenging task and recognition accuracy is limited, because the muscle movements involved are tiny and the corresponding facial actions are not obvious. Taking into account that muscle movements affect one another when a person expresses an emotion, this paper makes full use of the co-occurrence relationships among action units (AUs). To capture the dynamic characteristics of AUs as well, a 3D Convolutional Neural Network (3DCNN) is adopted as the base framework, and multiple action units around the brows, nose, and mouth, which contribute most to emotion expression, are recognized with their co-occurrence relationships imposed as a constraint. Experiments were conducted on the public CASME dataset and its variant CASME2. The results show that the proposed AU co-occurrence constrained 3DCNN-based AU recognition approach outperforms current approaches and demonstrates the effectiveness of exploiting AU relationships in AU recognition.
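
A hedged PyTorch sketch of the two ingredients named in the abstract: a small 3D CNN over a frame sequence that outputs multi-label AU logits, and a loss term that nudges predicted pairwise AU co-activations toward a co-occurrence prior. The architecture sizes and co-occurrence values are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

NUM_AUS = 6

class AU3DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(16, NUM_AUS)

    def forward(self, clips):            # clips: (B, 1, T, H, W)
        return self.head(self.features(clips).flatten(1))  # AU logits

# co_occur[i, j] close to 1 means AU i and AU j tend to fire together (assumed values).
co_occur = torch.eye(NUM_AUS)
co_occur[0, 1] = co_occur[1, 0] = 0.8    # hypothetical brow-related pair

def loss_fn(logits, targets, lam=0.1):
    bce = nn.functional.binary_cross_entropy_with_logits(logits, targets)
    probs = torch.sigmoid(logits)
    # Encourage predicted pairwise co-activation to match the prior.
    pairwise = probs.unsqueeze(2) * probs.unsqueeze(1)      # (B, AU, AU)
    constraint = ((pairwise - co_occur) ** 2).mean()
    return bce + lam * constraint

model = AU3DCNN()
clips = torch.randn(4, 1, 16, 32, 32)              # batch of 16-frame clips
targets = torch.randint(0, 2, (4, NUM_AUS)).float()
print(loss_fn(model(clips), targets))
```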

Comparison of Expression Profiles between Trophozoite and Cyst of Acanthamoeba castellanii

  • Moon, Eun-Kyung;Kong, Hyun-Hee
    • Biomedical Science Letters / v.18 no.3 / pp.313-318 / 2012
  • Acanthamoeba is an opportunistic pathogen known to cause granulomatous amoebic encephalitis and amoebic keratitis. Acanthamoeba has a life cycle consisting of a trophozoite and a cyst stage, and the cyst is highly resistant to various antibiotics and therapeutic agents. To understand the encystation mechanism of Acanthamoeba, the expression profiles of the trophozoite and the cyst were compared by gene ontology (GO) analysis. Ribosomal proteins and cytoskeletal proteins were highly expressed in the trophozoite, whereas various proteases and signal transduction- and protein turnover-related proteins were highly expressed in the cyst. These results correlated with eukaryotic orthologous groups (KOG) assignment and microarray analysis of Acanthamoeba trophozoite and cyst ESTs. The differential expression profiles of the trophozoite and the cyst provide important clues for research on the encystation mechanism of cyst-forming protozoa, including Acanthamoeba.
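
A minimal sketch, using hypothetical annotation tables, of the kind of GO-category comparison described: highly expressed genes from each stage are counted per GO category and tabulated side by side.

```python
# Illustrative only: gene names and GO categories are placeholders, not the paper's data.
import pandas as pd

trophozoite = pd.DataFrame({
    "gene": ["rpl3", "actin", "rps9", "tubulin"],
    "go_category": ["ribosome", "cytoskeleton", "ribosome", "cytoskeleton"],
})
cyst = pd.DataFrame({
    "gene": ["cp1", "ras", "ub1"],
    "go_category": ["protease", "signal transduction", "protein turnover"],
})

# Side-by-side counts of GO categories for the two stages.
counts = pd.concat(
    {"trophozoite": trophozoite["go_category"].value_counts(),
     "cyst": cyst["go_category"].value_counts()},
    axis=1,
).fillna(0).astype(int)
print(counts)
```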

A study on the Expressive Trends of Materials according to the Paradigmatic Variation (패러다임 변화에 의한 재료의 표현경향에 관한 연구)

  • 김은주;류호창
    • Proceedings of the Korean Institute of Interior Design Conference / 2003.05a / pp.196-199 / 2003
  • In modern society, as architectural materials have become a main subject of interior design expression, their influence on architecture is becoming substantial. There are multiple ways of using architectural materials, and their aesthetic value is greatly enhanced by the fact that they can be used to create unrestricted, new expression. The modern era is uncertain and rapidly changing; in this era of chaos and diversity, materialism is being superseded by idealism. The rapid exchange of information among countries, together with the development of new technologies and materials, increases the possibility of new expressions. Each new material that adds new value influences other arts as well as architecture, and these characteristics of new expression can be viewed as closely related to current idealism. Therefore, studying the expressive trends of architectural materials provides a better understanding and interpretation of interior design expression. Accordingly, the purpose of this study is not only to understand architectural materials but also to investigate the characteristics of expression in interior design in light of paradigmatic variation in modern times.


Facial Expression Recognition with Fuzzy C-Means Clustering Algorithm and Neural Network Based on Gabor Wavelets

  • Youngsuk Shin;Chansup Chung;Lee, Yillbyung
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2000.04a / pp.126-132 / 2000
  • This paper presents a facial expression recognition method based on Gabor wavelets that uses a fuzzy C-means (FCM) clustering algorithm and a neural network. Features of facial expressions are extracted in two steps. In the first step, the Gabor wavelet representation extracts the edges of major face components using the average value of the image's 2-D Gabor wavelet coefficient histogram. In the next step, sparse features of facial expressions are extracted from this edge information using the FCM clustering algorithm. The facial expression recognition results are compared with dimensional values of internal states derived from semantic ratings of emotion-related words. The dimensional model can recognize not only the six facial expressions related to Ekman's basic emotions but also expressions of various internal states.

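A hedged sketch of the two-step feature extraction described above: a hand-built Gabor kernel highlights edges, the mean response stands in for the coefficient-histogram average used as a threshold, and a small fuzzy C-means implementation clusters the strong-response pixels into sparse features. The image and all parameters are synthetic.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=6.0, gamma=0.5, psi=0.0):
    # Standard real-valued Gabor kernel.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam + psi)

def fcm(points, c=2, m=2.0, iters=50, seed=0):
    # Compact fuzzy C-means: alternate center and membership updates.
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(points)))
    u /= u.sum(axis=0)                         # memberships sum to 1 per point
    for _ in range(iters):
        w = u ** m
        centers = (w @ points) / w.sum(axis=1, keepdims=True)
        d = np.linalg.norm(points[None, :, :] - centers[:, None, :], axis=2) + 1e-9
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=0)
    return centers, u

image = np.zeros((64, 64))
image[20, 10:30] = 1.0                          # a horizontal "edge" (e.g. a brow)
image[45, 25:45] = 1.0                          # another one (e.g. a mouth line)

response = np.abs(convolve2d(image, gabor_kernel(theta=np.pi / 2), mode="same"))
edge_pixels = np.argwhere(response > response.mean())   # mean-value threshold
centers, memberships = fcm(edge_pixels.astype(float), c=2)
print(np.round(centers, 1))
```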

Clinicopathological Significance of CD133 and ALDH1 Cancer Stem Cell Marker Expression in Invasive Ductal Breast Carcinoma

  • Mansour, Sahar F;Atwa, Maha M
    • Asian Pacific Journal of Cancer Prevention / v.16 no.17 / pp.7491-7496 / 2015
  • Background: Biomarkers in breast neoplasms provide invaluable information regarding prognosis and help determine the optimal treatment. We investigated the possible correlation between the cancer stem cell (CSC) markers CD133 and ALDH1 and clinicopathological parameters in invasive ductal breast carcinomas. Aim: To assess the correlation between the expression of the CSC markers CD133 and ALDH1 and clinicopathological parameters of invasive ductal breast carcinomas. Materials and Methods: Immunohistochemical analysis of CD133 and ALDH1 was performed on a series of 120 modified radical mastectomy (MRM) specimens diagnosed as invasive ductal breast carcinoma. Results: Expression of both CD133 and ALDH1 was significantly associated with tumor size, tumor stage (TNM), and lymph node metastasis. A negative correlation between CD133 and ALDH1 was found. Conclusions: Detecting the expression of CD133 and ALDH1 in invasive ductal breast carcinomas may help in more accurately predicting aggressive behavior and determining the optimal treatment.
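
A minimal sketch, with hypothetical counts rather than the paper's data, of the kind of association test used to relate marker expression to a clinicopathological parameter such as lymph node status.

```python
# Illustrative chi-square test of independence on a 2x2 contingency table.
from scipy.stats import chi2_contingency

# rows: CD133 negative / positive; columns: node-negative / node-positive (made-up counts)
table = [[30, 15],
         [20, 55]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}")
```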

Comparison of the Cluster Validation Techniques using Gene Expression Data (유전자 발현 자료를 이용한 군집 타당성분석 기법 비교)

  • Jeong, Yun-Kyoung;Baek, Jang-Sun
    • Proceedings of the Korean Data and Information Science Society Conference / 2006.04a / pp.63-76 / 2006
  • Several clustering algorithms for analyzing gene expression data, and cluster validation techniques that assess the quality of their outcomes, have been suggested, but evaluations of these cluster validation techniques have seldom been carried out. In this paper we compare various cluster validity indices on simulated data and real genomic data, and find that Dunn's index is the more effective and robust measure in small simulations and with real gene expression data.

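A hedged sketch of Dunn's index, the validity measure this abstract singles out: the ratio of the smallest between-cluster distance to the largest within-cluster diameter, with higher values indicating better-separated clusters. The data below is synthetic.

```python
import numpy as np

def dunn_index(points, labels):
    clusters = [points[labels == k] for k in np.unique(labels)]
    # Largest intra-cluster diameter.
    diam = max(
        np.max(np.linalg.norm(c[:, None, :] - c[None, :, :], axis=2))
        for c in clusters
    )
    # Smallest distance between points of different clusters.
    sep = min(
        np.min(np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2))
        for i, a in enumerate(clusters) for b in clusters[i + 1:]
    )
    return sep / diam

rng = np.random.default_rng(2)
pts = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])
lbl = np.array([0] * 20 + [1] * 20)
print(round(dunn_index(pts, lbl), 2))   # well-separated clusters -> large index
```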