• Title/Summary/Keyword: formal classification


Construction of Hierarchical Classification of User Tags using WordNet-based Formal Concept Analysis (WordNet기반의 형식개념분석기법을 이용한 사용자태그 분류체계의 구축)

  • Hwang, Suk-Hyung
    • Journal of the Korea Society of Computer and Information / v.18 no.10 / pp.149-161 / 2013
  • In this paper, we propose a novel approach to constructing classification hierarchies for the user tags of folksonomies, using a WordNet-based Formal Concept Analysis tool called TagLighter, which was developed in this research. To give evidence of the usefulness of this approach in practice, we describe experiments on user tag data from the Bibsonomy.org site. The classification hierarchies of user tags constructed by our approach allow a better understanding of, and deeper insight into, tagged data during information retrieval and data analysis on folksonomy-based systems. We expect that the proposed approach can be used in web data mining for folksonomy-based web services, social networking systems and semantic web applications.
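
To make the idea concrete, here is a minimal, illustrative sketch (not the TagLighter tool itself, whose internals the abstract does not detail): tags serve as formal objects, their WordNet hypernyms as attributes, and formal concepts are enumerated naively over the resulting context.

```python
# Toy illustration of WordNet-based FCA over user tags (not TagLighter):
# tags are formal objects, their WordNet hypernyms are attributes, and
# formal concepts are enumerated by a naive closure search.
from itertools import combinations
from nltk.corpus import wordnet as wn   # requires nltk.download('wordnet')

tags = ["python", "java", "jazz", "rock"]

def hypernym_names(tag):
    """Hypernym lemma names along the first noun sense's paths."""
    synsets = wn.synsets(tag, pos=wn.NOUN)
    if not synsets:
        return set()
    names = set()
    for path in synsets[0].hypernym_paths():
        names.update(s.lemmas()[0].name() for s in path)
    return names

# Formal context: tag -> set of WordNet-derived attributes.
context = {t: hypernym_names(t) for t in tags}

def extent(attrs):
    """Objects (tags) possessing every attribute in attrs."""
    return frozenset(t for t, a in context.items() if attrs <= a)

def intent(objs):
    """Attributes shared by every object in objs."""
    sets = [context[t] for t in objs]
    return frozenset(set.intersection(*sets)) if sets else frozenset()

# Naive enumeration of (extent, intent) formal concepts.
concepts = {(extent(intent(set(c))), intent(extent(intent(set(c)))))
            for r in range(1, len(tags) + 1)
            for c in combinations(tags, r)}

for ext, itt in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(ext), "<-", sorted(itt)[:4])
```

Ordering the concepts by extent size already yields a rough hierarchy: smaller extents (more specific tags) sit below larger ones that share fewer attributes.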

A FCA-based Classification Approach for Analysis of Interval Data (구간데이터분석을 위한 형식개념분석기반의 분류)

  • Hwang, Suk-Hyung;Kim, Eung-Hee
    • Journal of the Korea Society of Computer and Information / v.17 no.1 / pp.19-30 / 2012
  • Based on internet infrastructures such as diverse information devices, social network systems and cloud computing environments, distributed and sharable data are growing explosively. Recently, as a data analysis and mining technique for extracting, analyzing and classifying inherent and useful knowledge and information, Formal Concept Analysis on binary or many-valued data has been successfully applied in many diverse fields. However, little formal concept analysis research has addressed interval data, whose attributes take interval values. In this paper, we propose a new approach for the classification of interval data based on formal concept analysis. We also present a supporting tool (iFCA) that implements the proposed approach: binarization of the interval data table, concept extraction, and construction of concept hierarchies. Finally, through experiments on real-world data sets, we demonstrate that our approach provides useful and effective ways of analyzing and mining interval data.
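
The key step the abstract describes is turning an interval-valued table into a binary formal context. The sketch below assumes a simple overlap-based binarization scheme; the actual iFCA tool may use a different scaling, so treat this only as an illustration of the general technique.

```python
# Illustrative binarization of an interval-valued attribute (not iFCA):
# each candidate binary attribute means "the interval overlaps [lo, hi)",
# producing an ordinary binary context for standard FCA.

# Each object carries an interval value for the attribute "temperature".
objects = {
    "sample_a": (5, 12),
    "sample_b": (10, 20),
    "sample_c": (18, 30),
}

# Candidate binary attributes as fixed sub-ranges.
bins = [(0, 10), (10, 20), (20, 30)]

def overlaps(interval, bin_range):
    lo, hi = interval
    blo, bhi = bin_range
    return lo < bhi and blo < hi

# Binary context: object -> set of bin attributes it satisfies.
context = {
    name: {f"T[{blo},{bhi})" for (blo, bhi) in bins if overlaps(iv, (blo, bhi))}
    for name, iv in objects.items()
}

for name, attrs in context.items():
    print(name, sorted(attrs))
# From here, ordinary concept extraction (as in the previous sketch)
# applies to the binarized table.
```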

A Sentence Sentiment Classification reflecting Formal and Informal Vocabulary Information (형식적 및 비형식적 어휘 정보를 반영한 문장 감정 분류)

  • Cho, Sang-Hyun;Kang, Hang-Bong
    • The KIPS Transactions:PartB / v.18B no.5 / pp.325-332 / 2011
  • Social Network Services (SNS) such as Twitter, Facebook and Myspace have gained popularity worldwide. In particular, sentiment analysis of SNS users' sentences is important because it is very useful for opinion mining. In this paper, we propose a new sentiment classification method for sentences that contain both formal vocabulary and informal vocabulary such as emoticons and newly coined words. Previous methods used only formal vocabulary to classify the sentiment of sentences; however, they are not very effective because internet users write sentences that contain informal vocabulary. In addition, we suggest constructing a domain-specific sentiment vocabulary because the same word may express different sentiments in different domains. Feature vectors are extracted from the sentiment vocabulary information and classified by a Support Vector Machine (SVM). The proposed method shows good classification accuracy.
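
As a rough illustration of the pipeline described above (the sentences, labels and tokenization choices here are invented for the example and are not the paper's actual resources), informal tokens such as emoticons can be kept as features alongside formal words and fed to an SVM:

```python
# Toy sketch of a formal + informal feature pipeline with an SVM.
# Data and lexical choices are hypothetical; only the overall flow
# mirrors the approach described in the abstract.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

train_sentences = [
    "the movie was wonderful :)",          # positive
    "great acting and a great plot",       # positive
    "what a terrible waste of time",       # negative
    "boring... i almost fell asleep :(",   # negative
]
train_labels = [1, 1, 0, 0]

# A loose token pattern keeps informal tokens like ":)" as features.
vectorizer = CountVectorizer(token_pattern=r"[^\s]+", lowercase=True)
X = vectorizer.fit_transform(train_sentences)

clf = LinearSVC()
clf.fit(X, train_labels)

test = vectorizer.transform(["that ending was terrible :("])
print(clf.predict(test))  # expected: [0] (negative)
```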

Character Analysis Method based on the Value Type of the Human (인간 가치 유형에 기반한 캐릭터 분석 방법론 제안)

  • Song, Minho
    • The Journal of the Korea Contents Association / v.17 no.9 / pp.650-660 / 2017
  • This study suggests a new method of analyzing the personality types of characters in narrative. First, we examine the history of character-type taxonomies in narrative theory. Until now, the classification of character types in narrative theory has consisted largely of a formal classification based on roles in the narrative, a content classification based on human internal qualities, and a complementary classification uniting the two criteria. The problem with the existing schemes is that the content classification based on human internal qualities, despite its usefulness, is difficult to apply for categorization, while the role-based classification offers little as a practical analysis methodology because it is purely formal. In this study, we address this problem by introducing Shalom Schwartz's human value types and correlating a character's value type with its narrative role as a new character analysis methodology. Schwartz's theory of value types is a very effective way to grasp the motivation of human behavior, and it appears to be very meaningful in analyzing the directivity of characters.

Classification and Verification of Semantic Constraints in ebXML BPSS

  • Kim, Jong-Woo;Kim, Hyoung-Do
    • Proceedings of the CALSEC Conference / 2004.02a / pp.318-326 / 2004
  • The ebXML (Electronic Business using eXtensible Markup Language) Specification Schema provides a nominal set of specification elements necessary to specify a collaboration between business partners based on XML. As a part of the ebXML Specification Schema, BPSS (Business Process Specification Schema) has been provided to support the direct specification of the set of elements required to configure a runtime system to execute a set of ebXML business transactions. The BPSS is available in two stand-alone representations, a UML version and an XML version. Due to the limitations of UML notation and XML syntax, however, the current ebXML BPSS specification is insufficient to completely specify the formal semantic constraints of modeling elements. In this study, we propose a classification schema for the BPSS semantic constraints and describe how to represent those constraints formally using OCL (Object Constraint Language). As a way to verify a Business Process Specification (BPS) against the formal semantic constraints, we suggest a rule-based approach that represents the constraints as rules and uses them to verify BPSs in a CLIPS prototype implementation.
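
The paper's prototype uses OCL and CLIPS; the sketch below only illustrates the flavor of rule-based semantic constraint checking, against a simplified, hypothetical BPSS-like XML fragment, and should not be read as the real schema or rule set.

```python
# Minimal, hypothetical illustration of rule-based constraint checking
# over a BPSS-like fragment (the real work uses OCL and a CLIPS engine;
# element names and the constraint here are simplified examples).
import xml.etree.ElementTree as ET

bpss = ET.fromstring("""
<ProcessSpecification>
  <BusinessTransaction name="CreateOrder"/>
  <BinaryCollaboration name="Ordering">
    <BusinessTransactionActivity name="RequestOrder"
                                 businessTransaction="CreateOrder"/>
    <BusinessTransactionActivity name="CancelOrder"
                                 businessTransaction="VoidOrder"/>
  </BinaryCollaboration>
</ProcessSpecification>
""")

# Constraint rule: every BusinessTransactionActivity must reference a
# BusinessTransaction that is actually declared in the specification.
declared = {bt.get("name") for bt in bpss.iter("BusinessTransaction")}
for bta in bpss.iter("BusinessTransactionActivity"):
    ref = bta.get("businessTransaction")
    if ref not in declared:
        print(f"violation: activity '{bta.get('name')}' references "
              f"undeclared transaction '{ref}'")
```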


Regional Characteristics of Street Fashion In China -Focused on Yanji, Beijing, Shanghai in 2008 F/W- (중국 스트리트 패션에 나타난 지역적 특성 -2008년 F/W, 엔지, 베이징, 상하이를 중심으로-)

  • Kim, Chan-Ju;Yu, Hae-Kyung
    • Journal of the Korean Society of Clothing and Textiles / v.34 no.10 / pp.1581-1595 / 2010
  • This paper investigates the regional characteristics of street fashion in China. Yanji, Beijing, and Shanghai were chosen as three cities that differ in location, weather, population, and industrial structure. A total of 592 pictures were collected through internet and fashion magazine searches for street fashion in Beijing and Shanghai; in addition, photos were taken in Yanji. Pictures from each city were classified into groups based on the overall image of top, bottom, and accessories to identify the characteristics of each group's style. The classification process included two stages. The first stage produced two groups: formal and casual. The second stage divided formal into business formal and retro formal, and casual into 11 sub-groups: easy, sporty, feminine, sexy, ethnic, girlish, nippon, trendy, bulky, military, and mixed. Easy casual showed the highest frequency across the three cities and military style the lowest. Shanghai showed higher frequencies of sporty, trendy, and military styles than the other cities. The styles revealed similarities and differences among the cities that reflect their distinct regional characteristics.

Feature Expansion based on LDA Word Distribution for Performance Improvement of Informal Document Classification (비격식 문서 분류 성능 개선을 위한 LDA 단어 분포 기반의 자질 확장)

  • Lee, Hokyung;Yang, Seon;Ko, Youngjoong
    • Journal of KIISE / v.43 no.9 / pp.1008-1014 / 2016
  • Data such as Twitter posts, Facebook posts, and customer reviews belong to the informal document group, whereas newspapers, which undergo a grammar correction step, belong to the formal document group. Finding consistent rules or patterns in informal documents is difficult compared to formal documents. Hence, additional approaches are needed to improve informal document analysis. In this study, we classified Twitter data, a representative informal document type, into ten categories. To improve performance, we revised and expanded features based on the LDA (Latent Dirichlet Allocation) word distributions. Using the top-ranked LDA words, other words were separated or bundled, and the feature set was thus expanded repeatedly. Finally, we conducted document classification with the expanded features. Experimental results indicated that the proposed method improved the micro-averaged F1-score by 7.11 percentage points compared to the results before the feature expansion step.
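
A minimal sketch of LDA-based feature expansion, assuming a simple "append the dominant topic's top words to each document" rule (the paper's exact separation/bundling rules, corpus, and topic count are not reproduced here):

```python
# Toy sketch of LDA-based feature expansion. The corpus, topic count,
# and expansion rule are illustrative assumptions, not the paper's setup.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "new phone battery lasts all day",
    "phone screen cracked after one drop",
    "the match ended with a late goal",
    "our team won the final goal goal",
]

vec = CountVectorizer()
X = vec.fit_transform(docs)
vocab = vec.get_feature_names_out()

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)   # document-topic distributions

# Expand each document's features with the top words of its dominant topic.
top_k = 3
expanded = []
for i, doc in enumerate(docs):
    topic = doc_topics[i].argmax()
    top_words = [vocab[j] for j in lda.components_[topic].argsort()[::-1][:top_k]]
    expanded.append(doc + " " + " ".join(top_words))

for e in expanded:
    print(e)
# The expanded texts would then be vectorized again and passed to the
# downstream classifier.
```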

Grassmann's Mathematical Epistemology and Generalization of Vector Spaces (그라스만의 수학 인식과 벡터공간의 일반화)

  • Lee, Hee Jung;Shin, Kyunghee
    • Journal for History of Mathematics / v.26 no.4 / pp.245-257 / 2013
  • Hermann Grassmann classified mathematics and extended the dimension of vector spaces by using a dialectic of contrasts. In this paper, we investigate his mathematical ideas and their background, and the process by which he classified mathematics. He formed a synthetic conception of mathematics based on the contrasts of 'equal' and 'unequal', 'discrete' and 'indiscrete'. He also showed the creation of new mathematics and a process of generalization using the contrasts of 'special' and 'general', 'real' and 'formal'. In addition, we examine his distinctive use of 'real' and 'formal' in generalizing the basis and dimension of a vector space. This research on Grassmann offers meaningful suggestions for the effective teaching and learning of linear algebra.

Modeling and Validation of Semantic Constraints for ebXML Business Process Specifications (ebXML 비즈니스 프로세스 명세를 위한 의미 제약의 모델링과 검증)

  • Kim, Jong-Woo;Kim, Hyoung-Do
    • Asia pacific journal of information systems / v.14 no.1 / pp.79-100 / 2004
  • As a part of the ebXML (Electronic Business using eXtensible Markup Language) framework, BPSS (Business Process Specification Schema) has been provided to support the direct specification of the set of elements required to configure a runtime system to execute a set of ebXML business transactions. The BPSS is available in two stand-alone representations, a UML version and an XML version. Due to the limitations of UML notation and XML syntax, however, the current ebXML BPSS specification fails to specify its formal semantic constraints completely. In this study, we propose a constraint classification scheme for the BPSS specification and describe how to formally represent those semantic constraints using OCL (Object Constraint Language). As a way to validate a Business Process Specification (BPS) against the formal semantic constraints, we suggest a rule-based approach to representing them and demonstrate its detailed mechanism for applying the rule-based constraints to the BPS with a prototype implementation.

Analyzing RDF Data in Linked Open Data Cloud using Formal Concept Analysis

  • Hwang, Suk-Hyung;Cho, Dong-Heon
    • Journal of the Korea Society of Computer and Information / v.22 no.6 / pp.57-68 / 2017
  • The Linked Open Data (LOD) cloud is quickly becoming one of the largest collections of interlinked datasets and the de facto standard for publishing, sharing and connecting pieces of data on the Web. Data publishers from diverse domains publish their data using the Resource Description Framework (RDF) data model and provide SPARQL endpoints for querying it, creating a global, distributed and interconnected dataspace on the LOD cloud. Although it is possible to extract structured data as query results by using SPARQL, users have very poor support for analyzing and visualizing the RDF data in those results. To tackle this issue, we propose a novel approach, based on Formal Concept Analysis, for analyzing and visualizing useful information from the LOD cloud. The RDF data analysis and visualization technique proposed in this paper can be utilized in semantic web data mining, extracting and analyzing the information and knowledge inherent in LOD and supporting their classification and visualization.
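
As a rough sketch of the first half of such a workflow, the snippet below queries a public SPARQL endpoint (DBpedia is used purely as a convenient LOD example, not necessarily the dataset studied in the paper) and arranges the results as a formal context of resources versus rdf:type values, ready for concept extraction as in the earlier FCA sketch:

```python
# Illustrative only: fetch a small RDF sample from a public SPARQL
# endpoint and build a formal context (objects = resources,
# attributes = rdf:type values) for subsequent Formal Concept Analysis.
from collections import defaultdict
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?s ?type WHERE {
  ?s a dbo:ProgrammingLanguage ;
     a ?type .
} LIMIT 200
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

# Formal context: resource URI -> set of type URIs.
context = defaultdict(set)
for row in results["results"]["bindings"]:
    context[row["s"]["value"]].add(row["type"]["value"])

for resource, types in list(context.items())[:5]:
    print(resource, "->", len(types), "types")
```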