• Title/Summary/Keyword: textual knowledge

Search Results: 37

An Ontology-based Knowledge Management System - Integrated System of Web Information Extraction and Structuring Knowledge -

  • Mima, Hideki;Matsushima, Katsumori
    • Proceedings of the CALSEC Conference / 2005.03a / pp.55-61 / 2005
  • We introduce a new web-based knowledge management system, currently in progress, in which XML-based web information extraction and our structuring-knowledge technologies are combined using ontology-based natural language processing. Our aim is to provide efficient access to heterogeneous information on the web, enabling users to draw effortlessly on a wide range of textual and non-textual resources, such as newspapers and databases, so as to accelerate knowledge acquisition from such sources. To achieve efficient knowledge management, we first propose an XML-based web information extraction method that uses a sophisticated control language to extract data from web pages. By building on standard XML technologies, our approach makes information extraction easy through a) detaching rules from processing, b) restricting the target for processing, and c) interactive operations for developing extraction rules. We then propose a structuring-knowledge system which includes 1) automatic term recognition, 2) domain-oriented automatic term clustering, 3) similarity-based document retrieval, 4) real-time document clustering, and 5) visualization. The system supports integrating different types of databases (textual and non-textual) and retrieving different types of information simultaneously. By further explaining the specification and implementation techniques of the system, we demonstrate how it can accelerate knowledge acquisition on the web, even for novices in the field.
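The "detaching rules from processing" idea above can be sketched with a small rule-driven extractor: extraction rules live in a data table, separate from the extraction code, so they can be edited interactively without touching the extractor. This is a minimal sketch assuming a hypothetical XML page layout and field names, not the system's actual control language.

```python
# Rule-driven extraction sketch: rules are data, detached from processing.
import xml.etree.ElementTree as ET

PAGE = """
<page>
  <article><title>Ontology news</title><date>2005-03-01</date></article>
  <article><title>XML extraction</title><date>2005-03-02</date></article>
</page>
"""

# Rules: field name -> element path relative to each record node.
RULES = {"title": "title", "date": "date"}

def extract(xml_text, record_path, rules):
    """Apply the rule table to every record node in the page."""
    root = ET.fromstring(xml_text)
    return [
        {name: node.findtext(path) for name, path in rules.items()}
        for node in root.findall(record_path)
    ]

records = extract(PAGE, "article", RULES)
print(records[0]["title"])  # Ontology news
```

Because the rules are plain data, restricting the processing target or revising a rule means editing `RULES`, not the extractor.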


Exploring Simultaneous Presentation in Online Restaurant Reviews: An Analysis of Textual and Visual Content

  • Lin Li;Gang Ren;Taeho Hong;Sung-Byung Yang
    • Asia Pacific Journal of Information Systems / v.29 no.2 / pp.181-202 / 2019
  • The purpose of this study is to explore the effect of different types of simultaneous presentation (i.e., reviewer information, textual and visual content, and similarity between textual and visual content) on review usefulness and review enjoyment in online restaurant reviews (ORRs), as these factors are interrelated yet have rarely been examined together in previous research. Using Latent Dirichlet Allocation (LDA) topic modeling and state-of-the-art machine learning (ML) methodologies, we found that review readability in textual content and salient objects in images in visual content have a significant impact on both review usefulness and review enjoyment. Moreover, similarity between textual and visual content was found to be a major factor in determining review usefulness but not review enjoyment. As for reviewer information, reputation, expertise, and location of residence were found to be significantly related to review enjoyment. This study contributes to the body of knowledge on ORRs and provides valuable implications for general users and managers in the hospitality and tourism industries.
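The LDA topic-modeling step mentioned above can be sketched with scikit-learn on a toy corpus. The four reviews and the choice of two topics are illustrative assumptions, not the paper's data or settings.

```python
# Minimal LDA sketch: infer per-review topic proportions from word counts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = [
    "great pasta and friendly service",
    "the pasta sauce was rich and tasty",
    "slow service but nice interior design",
    "beautiful interior and cozy atmosphere",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(reviews)  # review-by-term count matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)  # rows: reviews, cols: topic proportions

print(doc_topics.shape)  # (4, 2)
```

Each row of `doc_topics` is a probability distribution over topics; in a study like the one above, such distributions become features describing a review's textual content.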

QualityRank : Measuring Authority of Answer in Q&A Community using Social Network Analysis (QualityRank : 소셜 네트워크 분석을 통한 Q&A 커뮤니티에서 답변의 신뢰 수준 측정)

  • Kim, Deok-Ju;Park, Gun-Woo;Lee, Sang-Hoon
    • Journal of KIISE:Databases / v.37 no.6 / pp.343-350 / 2010
  • We can obtain the answers we want by asking questions in a Knowledge Search Service (KSS) based on a Q&A community. However, it is becoming more difficult to find credible documents among the enormous number available, since many anonymous users, regardless of credibility, participate in answering questions. In previous work on KSS, researchers evaluated the quality of documents based on non-textual information, e.g. recommendation count and click count, and textual information, e.g. answer length, attached data, and conjunction count, and then used the evaluation results to enhance search performance. However, non-textual information has the problem that it is difficult to gather enough of it from users in the early stages of a Q&A thread. Textual information also has limitations for evaluating quality, because it judges by partial factors such as answer length and conjunction count. In this paper, we propose the QualityRank algorithm to address these problems. The algorithm ranks relevant and credible answers by considering textual/non-textual information together with user centrality based on Social Network Analysis (SNA). Experimental validation confirms that our algorithm improves ranking performance over results based on textual/non-textual information alone.
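The combination of answer features with answerer centrality described above can be sketched as follows. The tiny interaction graph, the weights, and the use of degree centrality are illustrative assumptions; the paper's actual algorithm and SNA measure may differ.

```python
# Toy sketch: score answers by textual + non-textual features + centrality.
from collections import defaultdict

# Edges of a Q&A interaction graph: (answerer, asker) pairs.
edges = [("alice", "bob"), ("alice", "carol"), ("bob", "carol"), ("dave", "alice")]

def degree_centrality(edge_list):
    """Normalized degree centrality: degree / (n - 1)."""
    deg, nodes = defaultdict(int), set()
    for u, v in edge_list:
        deg[u] += 1
        deg[v] += 1
        nodes.update((u, v))
    n = len(nodes)
    return {u: deg[u] / (n - 1) for u in nodes}

centrality = degree_centrality(edges)

# Candidate answers: (answerer, textual score, non-textual score), both in [0, 1].
answers = [("alice", 0.6, 0.2), ("dave", 0.9, 0.1)]

def quality(answerer, textual, non_textual, w=(0.4, 0.3, 0.3)):
    """Weighted blend of the three evidence sources (weights are assumptions)."""
    return w[0] * textual + w[1] * non_textual + w[2] * centrality[answerer]

ranked = sorted(answers, key=lambda a: quality(*a), reverse=True)
print(ranked[0][0])  # alice
```

Here alice's higher centrality (1.0 vs. roughly 0.33) outweighs dave's stronger textual score, which mirrors the intuition that author standing in the network supplements sparse early-stage signals.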

Text-Confidence Feature Based Quality Evaluation Model for Knowledge Q&A Documents (텍스트 신뢰도 자질 기반 지식 질의응답 문서 품질 평가 모델)

  • Lee, Jung-Tae;Song, Young-In;Park, So-Young;Rim, Hae-Chang
    • Journal of KIISE:Software and Applications / v.35 no.10 / pp.608-615 / 2008
  • In Knowledge Q&A services, where information is created by unspecified users, document quality is an important factor in user satisfaction with search results. Previous work on quality prediction of Knowledge Q&A documents evaluates the quality of documents using non-textual information, such as click counts and recommendation counts, and focuses on enhancing retrieval performance by incorporating the quality measure into the retrieval model. Although the non-textual information used in previous work was proven useful by experiments, a data sparseness problem may occur when predicting the quality of newly created documents with such information. To solve the data sparseness problem of non-textual features, this paper proposes new features for document quality prediction, namely text-confidence features, which indicate how trustworthy the content of a document is. The proposed features, extracted directly from the document content, are robust against the data sparseness problem, compared to non-textual features, which can only be collected through the participation of service users. Experiments conducted on real-world Knowledge Q&A documents suggest that text-confidence features show performance comparable to the non-textual features. We believe the proposed features can serve as effective features for document quality prediction and improve the performance of Knowledge Q&A services in the future.
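Features "extracted directly from the document content" in the spirit described above might look like the sketch below. The specific features (token count, URL presence, exclamation ratio, average token length) are illustrative assumptions, not the paper's actual feature set.

```python
# Sketch of content-only "text-confidence" style features: no user
# participation (clicks, recommendations) is needed to compute them.
import re

def text_confidence_features(text):
    tokens = text.split()
    n = len(tokens)
    return {
        "length": n,                                        # token count
        "has_url": int(bool(re.search(r"https?://", text))),  # cites a source?
        "exclaim_ratio": text.count("!") / max(len(text), 1),  # shouting?
        "avg_token_len": sum(map(len, tokens)) / max(n, 1),
    }

feats = text_confidence_features("See http://example.com for the full derivation.")
print(feats["has_url"])  # 1
```

Because these values exist the moment a document is written, they sidestep the sparseness problem of signals that accumulate only after users interact with the answer.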

The Effectiveness of Foreign Language Learning in Virtual Environments and with Textual Enhancement Techniques in the Metaverse (메타버스의 가상환경과 텍스트 강화기법을 활용한 외국어 학습 효과)

  • Jeonghyun Kang;Seulhee Kwon;Donghun Chung
    • Knowledge Management Research / v.25 no.1 / pp.155-172 / 2024
  • This study investigates the effectiveness of foreign language learning through diverse treatments in virtual settings, particularly by combining different virtual environments with three textual enhancement techniques. A 2 × 3 mixed-factorial design was used, treating virtual environment as a within-subject factor and textual enhancement technique as a between-subject factor. Participants watched two videos, each in a different virtual learning environment, with one randomly assigned textual enhancement technique. The results showed that the interaction between virtual environments and textual enhancement techniques had a statistically significant impact on presence among groups. Regarding the main effects of virtual environments, significant differences were observed in flow and in attitude between pre- and post-learning. The main effects of textual enhancement notably influenced flow, intention to use, learning satisfaction, and learning confidence. This study highlights the potential of the Metaverse for foreign language learning, suggesting that learner experiences and effects vary across virtual environments.

Natural language processing techniques for bioinformatics

  • Tsujii, Jun-ichi
    • Proceedings of the Korean Society for Bioinformatics Conference / 2003.10a / pp.3-3 / 2003
  • With biomedical literature expanding so rapidly, there is an urgent need to discover and organize knowledge extracted from texts. Although factual databases contain crucial information, the overwhelming amount of new knowledge remains in textual form (e.g. MEDLINE). In addition, new terms are constantly coined as new relationships linking genes, drugs, proteins, etc. are discovered. As the biomedical literature expands, more systems are applying a variety of methods to automate the process of knowledge acquisition and management. In my talk, I focus on the GENIA project of our group at the University of Tokyo, the objective of which is to construct an information extraction system for protein-protein interactions from MEDLINE abstracts. The talk covers (1) techniques we use for named entity recognition: (1-a) SOHMM (Self-organized HMM), (1-b) Maximum Entropy Model, (1-c) lexicon-based recognizer; (2) treatment of term variants and acronym finders; (3) event extraction using a full parser; and (4) linguistic resources for text mining (the GENIA corpus): (4-a) semantic tags, (4-b) structural annotations, (4-c) co-reference tags, (4-d) the GENIA ontology. I will also talk about a possible extension of our work that links the findings of molecular biology with clinical findings, and argue that text-based or concept-based biology would be a viable alternative to systems biology, which tends to emphasize the role of simulation models in bioinformatics.
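Of the named entity recognition techniques listed above, the lexicon-based recognizer is the simplest to sketch: look up known terms in a sentence. The tiny lexicon and example sentence are illustrative assumptions, not the GENIA resources.

```python
# Toy lexicon-based recognizer: find known biomedical terms in a sentence.
LEXICON = {"interleukin-2": "protein", "IL-2": "protein", "NF-kappa B": "protein"}

def recognize(sentence, lexicon):
    """Return (term, type, offset) for each lexicon term found, by position."""
    found = []
    for term, etype in lexicon.items():
        start = sentence.find(term)
        if start != -1:
            found.append((term, etype, start))
    return sorted(found, key=lambda t: t[2])

hits = recognize("IL-2 gene expression requires NF-kappa B activation.", LEXICON)
print([h[0] for h in hits])  # ['IL-2', 'NF-kappa B']
```

Dictionary lookup alone cannot handle unseen terms or variants, which is why the talk pairs it with statistical recognizers (SOHMM, Maximum Entropy) and term-variant treatment.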


Multilayer Knowledge Representation of Customer's Opinion in Reviews (리뷰에서의 고객의견의 다층적 지식표현)

  • Vo, Anh-Dung;Nguyen, Quang-Phuoc;Ock, Cheol-Young
    • Annual Conference on Human and Language Technology / 2018.10a / pp.652-657 / 2018
  • With the rapid development of e-commerce, many customers can now express their opinions on various kinds of products in discussion groups, on merchant sites, on social networks, etc. Discerning a consensus opinion about a product sold online is difficult as more and more reviews become available on the internet. Opinion mining, also known as sentiment analysis, is the task of automatically detecting and understanding the sentimental expressions about a product in customers' textual reviews. Recently, researchers have proposed various approaches to sentiment mining, applying techniques at the document, sentence, and aspect levels. Aspect-based sentiment analysis is attracting wide interest from researchers; however, more complex algorithms are needed to address this issue precisely on larger corpora. This paper introduces an approach to knowledge representation for the task of analyzing product aspect ratings. We focus on how to form the nature of sentiment representation from textual opinions by utilizing representation-learning methods, which include word embedding and compositional vector models. Our experiment was performed on a dataset of reviews from the electronics domain, and the obtained results show that the proposed system outperformed methods from previous studies.
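A compositional vector model of the kind mentioned above can be as simple as averaging word embeddings into a phrase vector. The 3-dimensional toy vectors are illustrative assumptions, not trained embeddings, and averaging is only one of several composition functions.

```python
# Minimal compositional vector sketch: phrase vector = mean of word vectors.
TOY_EMBEDDINGS = {
    "battery": [0.9, 0.1, 0.0],
    "life": [0.7, 0.2, 0.1],
    "poor": [0.0, 0.1, 0.9],
}

def phrase_vector(words, emb):
    """Average the embeddings of the known words in the phrase."""
    vecs = [emb[w] for w in words if w in emb]
    dim = len(next(iter(emb.values())))
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

v = phrase_vector(["poor", "battery", "life"], TOY_EMBEDDINGS)
print([round(x, 3) for x in v])  # [0.533, 0.133, 0.333]
```

In an aspect-rating pipeline, such phrase vectors for opinion expressions would feed a classifier or regressor that predicts the sentiment score of each product aspect.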


Study on the Improvement of Extraction Performance for Domain Knowledge based Wrapper Generation (도메인 지식 기반 랩퍼 생성의 추출 성능 향상에 관한 연구)

  • Jeong Chang-Hoo;Choi Yun-Soo;Seo Jeong-Hyeon;Yoon Hwa-Mook
    • Journal of Internet Computing and Services / v.7 no.4 / pp.67-77 / 2006
  • Wrappers play an important role in extracting specified information from various sources. Wrapper rules, by which information is extracted, are often created from domain-specific knowledge. Domain-specific knowledge helps in recognizing the meaning of the text representing various entities and values and in detecting their formats. However, such domain knowledge becomes powerless when value-representing data are not labeled with appropriate textual descriptions, or when there is nothing but a hyperlink where certain text labels or values are expected. In order to alleviate these problems, we propose a probabilistic method for recognizing the entity type, i.e. generating wrapper rules, when there is no label associated with value-representing text. In addition, we have devised a method for using the information reachable by following hyperlinks when textual data are not immediately available on the target web page. Our experimental work shows that the proposed methods help increase the precision of the resulting wrapper, particularly in extracting the title information, the most important entity on a web page. The proposed methods can be useful in building a more efficient and accurate information extraction system for various information sources without user intervention.
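Recognizing an entity type from an unlabeled value, as described above, can be sketched as scoring the value's surface format against candidate types. The regex patterns and the confidence scores are illustrative assumptions standing in for the paper's probabilistic model.

```python
# Sketch: guess the entity type of an unlabeled value from its format.
import re

# (pattern, type, assumed confidence) -- a stand-in for learned probabilities.
PATTERNS = [
    (r"^\d{4}-\d{2}-\d{2}$", "date", 0.9),
    (r"^\$?\d+(\.\d{2})?$", "price", 0.7),
    (r"^[A-Z][\w ,:'-]{9,}$", "title", 0.6),
]

def guess_type(value):
    """Return the highest-scoring type whose pattern matches the value."""
    best = ("unknown", 0.0)
    for pattern, etype, score in PATTERNS:
        if re.match(pattern, value) and score > best[1]:
            best = (etype, score)
    return best[0]

print(guess_type("2006-07-15"))  # date
print(guess_type("Domain Knowledge based Wrapper Generation"))  # title
```

A wrapper generator could apply such format evidence whenever a page offers a bare value with no accompanying text label, falling back to hyperlink-reachable context when even the value is absent.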


Higher Order Knowledge Processing: Pathway Database and Ontologies

  • Fukuda, Ken Ichiro
    • Genomics & Informatics / v.3 no.2 / pp.47-51 / 2005
  • Molecular mechanisms of biological processes are typically represented as 'pathways' that have a graph-analogical network structure. However, due to the diversity of topics that pathways cover, their constituent biological entities are highly diverse and their semantics is embedded implicitly. The kinds of interactions that connect biological entities are likewise diverse. Consequently, how to model or process pathway data is not a trivial issue. In this review article, we give an overview of the challenges in pathway database development, taking the INOH project as an example.
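The "graph-analogical network structure" above can be made concrete as a typed graph: nodes are biological entities, and each edge carries an interaction type. The entities and interaction kinds below are illustrative assumptions, not INOH data.

```python
# Minimal typed-graph sketch of a pathway fragment: edges carry semantics.
pathway = {
    ("TNF", "TNFR1"): "binding",
    ("TNFR1", "NF-kappaB"): "activation",
    ("NF-kappaB", "IkB gene"): "transcription",
}

def downstream(entity, edges):
    """List (target, interaction type) pairs reachable in one step."""
    return [(tgt, kind) for (src, tgt), kind in edges.items() if src == entity]

print(downstream("TNFR1", pathway))  # [('NF-kappaB', 'activation')]
```

The modeling difficulty the review points to is visible even here: "binding", "activation", and "transcription" are semantically very different relations, so a real pathway database needs an ontology to make such edge types explicit and comparable.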

An intelligent system for automatic data extraction in E-Commerce Applications

  • Cardenosa, Jesus;Iraola, Luis;Tovar, Edmundo
    • Proceedings of the Korea Intelligent Information System Society Conference / 2001.01a / pp.202-208 / 2001
  • One of the most frequent uses of the Internet is data gathering. Data can concern many themes, but one of the most demanded fields is tourist information. Normally, the databases that support these systems are maintained manually. However, there is another approach: extracting data automatically, for instance from public textual information on the Web. This approach consists of extracting data from textual sources (public or not) and serving them, totally or partially, to users in the form they want. The obtained data can automatically maintain databases that support different systems, such as WAP mobile telephones or commercial systems accessed through natural language interfaces. This process has three main actors: first, the information itself, which is present in a particular context; second, the information supplier (extracting data from the existing information); and third, the user or information searcher. This value-added chain reuses and gives value to existing data, even when these data were not originally intended for that final use, by means of the described technology. The main advantage of this approach is that it makes the information source independent of the information user; that is, the original information belongs to a particular context, not necessarily the context of the user. This paper describes the application based on this approach developed by the authors in the FLEX ESPRIT IV project no. EP29158, in the work package "Knowledge Extraction & Data Mining", where information captured from digital newspapers is extracted and reused in a tourist-information context.
