• Title/Summary/Keyword: Document information retrieval

410 search results

Jointly Image Topic and Emotion Detection using Multi-Modal Hierarchical Latent Dirichlet Allocation

  • Ding, Wanying;Zhu, Junhuan;Guo, Lifan;Hu, Xiaohua;Luo, Jiebo;Wang, Haohong
    • Journal of Multimedia Information System / v.1 no.1 / pp.55-67 / 2014
  • Image topic and emotion analysis is an important component of online image retrieval, which has become very popular in the fast-growing social media community. However, because of the gap between images and texts, there is very little work in the literature that detects an image's topics and emotions in a unified framework, even though topics and emotions are two levels of semantics that often work together to comprehensively describe an image. In this work, a unified model, the Joint Topic/Emotion Multi-Modal Hierarchical Latent Dirichlet Allocation (JTE-MMHLDA) model, is proposed; it extends the earlier LDA, mmLDA, and JST models to capture topic and emotion information from heterogeneous data at the same time. Specifically, a two-level graphical model is built so that topics and emotions are shared across the whole document collection. Experimental results on a Flickr dataset indicate that the proposed model efficiently discovers images' topics and emotions, outperforming the text-only system by 4.4% and the vision-only system by 18.1% in topic detection, and outperforming the text-only system by 7.1% and the vision-only system by 39.7% in emotion detection.
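
For orientation, the following is a minimal sketch of the generative story behind a joint topic/emotion multi-modal LDA of this kind. The topic/emotion counts, vocabulary sizes, and Dirichlet hyperparameters are illustrative assumptions; the paper's actual JTE-MMHLDA graphical model and its inference procedure are not reproduced here.

```python
import numpy as np

# Illustrative generative story for a joint topic/emotion multi-modal LDA
# (hyperparameters, sizes, and structure are assumptions, not the paper's).
rng = np.random.default_rng(0)
K, E = 10, 6                 # assumed numbers of topics and emotions
V_text, V_vis = 5000, 1000   # text vocabulary / visual-word vocabulary sizes

phi_text = rng.dirichlet(np.full(V_text, 0.01), size=K)  # topic -> text words
phi_vis  = rng.dirichlet(np.full(V_vis, 0.01), size=K)   # topic -> visual words
psi_text = rng.dirichlet(np.full(V_text, 0.01), size=E)  # emotion -> text words

def generate_document(n_text=50, n_vis=30, n_emo=5, alpha=0.5, gamma=0.5):
    theta = rng.dirichlet(np.full(K, alpha))  # per-document topic mixture
    pi = rng.dirichlet(np.full(E, gamma))     # per-document emotion mixture
    text = [rng.choice(V_text, p=phi_text[rng.choice(K, p=theta)]) for _ in range(n_text)]
    vis  = [rng.choice(V_vis,  p=phi_vis[rng.choice(K, p=theta)])  for _ in range(n_vis)]
    emo  = [rng.choice(V_text, p=psi_text[rng.choice(E, p=pi)])    for _ in range(n_emo)]
    return text, vis, emo

text_words, visual_words, emotion_words = generate_document()
print(len(text_words), len(visual_words), len(emotion_words))  # 50 30 5
```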


Detection of Porno Sites on the Web using Fuzzy Inference (퍼지추론을 적용한 웹 음란문서 검출)

  • 김병만;최상필;노순억;김종완
    • Journal of the Korean Institute of Intelligent Systems / v.11 no.5 / pp.419-425 / 2001
  • A method to detect the many pornographic documents on the Internet is presented in this paper. The proposed method applies a fuzzy inference mechanism to conventional information retrieval techniques. First, several example porno sites are provided by users, and candidate words representative of porno documents are extracted from these documents; lexical analysis and stemming are performed in this process. Then, three values, the term frequency (TF), the document frequency (DF), and the heuristic information (HI), are computed for each candidate word. Finally, fuzzy inference over these three values is performed to weight the candidate words, and the weights are used to determine whether a given site is sexual or not. Experiments on a small test collection showed the proposed method to be useful for detecting sexual sites automatically.
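
The abstract does not give the authors' membership functions or rule base, so the following Python sketch only illustrates how TF, DF, and HI might be combined by Mamdani-style fuzzy inference to weight a candidate word; all shapes, rules, and thresholds are assumptions.

```python
# Minimal sketch: fuzzy inference over TF, DF, HI to weight a candidate word.
# Membership functions and rules are illustrative, not the authors' own.

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_weight(tf, df, hi):
    # Fuzzify each input (all assumed normalized to [0, 1]).
    tf_high = tri(tf, 0.3, 1.0, 1.7)
    df_low  = tri(df, -0.7, 0.0, 0.7)
    hi_high = tri(hi, 0.3, 1.0, 1.7)
    # Example rules (min for AND, max to aggregate):
    # R1: IF tf is high AND hi is high THEN weight is high
    # R2: IF df is low  AND hi is high THEN weight is high
    r1 = min(tf_high, hi_high)
    r2 = min(df_low, hi_high)
    return max(r1, r2)  # crisp weight in [0, 1]

print(fuzzy_weight(tf=0.8, df=0.2, hi=0.9))  # ~0.71
```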


Domain Question Answering System (도메인 질의응답 시스템)

  • Yoon, Seunghyun;Rhim, Eunhee;Kim, Deokho
    • KIISE Transactions on Computing Practices / v.21 no.2 / pp.144-147 / 2015
  • Question Answering (QA) services can provide exact answers to user questions written in natural-language form. This research focuses on how to build a QA system for a specific domain. The online and offline architecture of the targeted-domain QA system, covering domain detection, question analysis, reasoning, information retrieval, filtering, answer extraction, re-ranking, and answer generation, as well as the data preparation, is presented herein. Test results on an official Frequently Asked Questions (FAQ) set showed 68% top-1 accuracy and 77% top-5 accuracy. The contribution of each part, such as the question analysis system, document search engine, knowledge graph engine, and re-ranking module, to achieving the final answer is also presented.
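
As a reading aid, the following Python sketch strings the listed stages together; every stage is a trivial stub so the control flow runs end to end, and none of it is the authors' implementation.

```python
# Staged QA pipeline sketch; all stage bodies are placeholder stubs.

def detect_domain(q): return "device-support"                       # stub
def analyze_question(q, d): return {"domain": d, "terms": q.split()}
def retrieve(index, a): return [doc for doc in index
                                if any(t in doc for t in a["terms"])]
def filter_candidates(cands, a): return cands[:10]
def extract_answers(cands, a): return [(c, 1.0) for c in cands]
def reason(kg, a): return [(kg[t], 0.8) for t in a["terms"] if t in kg]
def rerank(answers, a): return sorted(answers, key=lambda x: -x[1])

def answer(question, index, kg):
    a = analyze_question(question, detect_domain(question))
    cands = filter_candidates(retrieve(index, a), a)
    ranked = rerank(extract_answers(cands, a) + reason(kg, a), a)
    return ranked[:5]  # top-5 answer candidates

print(answer("how do I reset the device",
             ["reset: hold power 10s"], {"reset": "hold power 10s"}))
```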

XML Translation of Structural Calculation Document and Information Retrieval in 3-D View of Bridge Information Model (교량 구조계산서 XML 문서변환 및 3차원 모델에서의 문서정보 검색)

  • Kim, Bong-Geun;Park, Ang-Il;Kim, Se-Jin;Eom, In-Soo;Lee, Sang-Ho
    • Proceedings of the Computational Structural Engineering Institute Conference / 2010.04a / pp.375-378 / 2010
  • This paper presents a method for converting engineering document information into semi-structured XML documents and linking them to a three-dimensional bridge model. To this end, a module was first developed that automatically extracts the parts of a structural calculation document mapped to each member of the 3-D bridge model, using a technique that recovers the document structure from the detailed table of contents of the structural calculation document. In addition, an IFC-based bridge information model was developed to manage the information of the 3-D bridge model; it supports representing the logical organization of bridge elements in terms of spatial, physical, and group elements. Using these techniques, a pilot tool for retrieving structural calculation information from a 3-D bridge model viewer was developed, a 3-D model of a composite bridge consisting of four unit bridges was built, and the structural calculation document of each bridge was converted into an XML document. The suitability of the proposed method was verified by showing that the detailed document information for any component the user selects can be retrieved across the two information systems.
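
To make the document-to-member linking concrete, here is a minimal Python sketch in which calculation-report sections converted to XML carry a member attribute that a 3-D model viewer can query. The element and attribute names are invented for illustration; the paper's actual schema and IFC mapping are richer.

```python
# Sketch: look up the XML-converted calculation sections mapped to a member.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<calc_report bridge="Unit-1">
  <section id="3.2" member="Girder-G1" title="Girder flexural check">
    Design moment Md = 1820 kN*m ...
  </section>
  <section id="3.3" member="Pier-P1" title="Pier axial check">
    ...
  </section>
</calc_report>
""")

def sections_for_member(root, member_id):
    """Return the calculation sections linked to one 3-D model member."""
    return [(s.get("id"), s.get("title")) for s in root.iter("section")
            if s.get("member") == member_id]

print(sections_for_member(doc, "Girder-G1"))  # [('3.2', 'Girder flexural check')]
```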


Recognition of Word-level Attributes in Machine-printed Document Images (인쇄 문서 영상의 단어 단위 속성 인식)

  • Gwak, Hui-Gyu;Kim, Su-Hyeong
    • Journal of KIISE: Software and Applications / v.28 no.5 / pp.412-421 / 2001
  • This paper proposes a method for extracting attribute information for individual words in document images. Word-level attribute recognition has diverse applications, such as improving the accuracy and speed of word-image matching, raising the recognition rate of OCR systems, and reproducing documents, and it can also support image retrieval and summary generation through meta-information extraction. The proposed system considers five attributes of a word image: language type (Korean, English), style (bold, italic, regular, underlined), character size (10, 12, 14 points), character count (Korean: 2, 3, 4, 5; English: 4, 5, 6, 7, 8, 9, 10), and typeface (Myeongjo, Gothic). For attribute recognition, two features are used for language type, three for style, one each for character size and character count, one for Korean typeface, and two for English typeface. The classifiers, neural networks, quadratic discriminant functions (QDF), and linear discriminant functions (LDF), are organized hierarchically. Experiments on 26,400 word images combining the five attributes demonstrate that the proposed method achieves good attribute recognition performance with only a small number of features.
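
The following Python sketch only illustrates the hierarchical arrangement described, where language type is decided first and language-specific classifiers then handle the remaining attributes; the features, thresholds, and two-way decisions are placeholders, not the paper's neural-network/QDF/LDF classifiers.

```python
# Sketch: hierarchical word-attribute classification (placeholder features).

def classify_word(features):
    # Stage 1: language type gates which downstream classifiers run.
    lang = "korean" if features["lang_f1"] > 0.5 else "english"
    # Stage 2: language-specific typeface decision (invented thresholds).
    if lang == "korean":
        typeface = "gothic" if features["stroke_uniformity"] > 0.6 else "myeongjo"
    else:
        typeface = "serifed" if features["serif_score"] > 0.5 else "sans"
    # Stage 3: style decision shared by both branches.
    style = "bold" if features["density"] > 0.4 else "regular"
    return {"language": lang, "typeface": typeface, "style": style}

print(classify_word({"lang_f1": 0.8, "stroke_uniformity": 0.7, "density": 0.3}))
```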


Automating Warehouse Management Using a Bar-Code System (바-코드 시스템을 이용한 창고관리의 자동화)

  • 이성열
    • Journal of Korea Society of Industrial Information Systems / v.4 no.1 / pp.20-27 / 1999
  • This study presents an Automated Warehouse Management System (AWMS) that uses a bar-code system. The AWMS is designed to work with an Integrated Production Management System (IPMS), which includes four basic modules: sales management, production management, material management, and data management. Whenever material is stored in or retrieved from the warehouse, the event can be processed quickly and accurately just by reading the 13-digit bar code, which includes a 5-digit position code, and typing in the quantity of material. Consequently, the AWMS, working with the IPMS, can automatically reconcile the item counts on documents with those in the warehouse, and it makes it possible to identify material quantities in real time.
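
The abstract does not specify the exact field layout of the bar code, so the following Python sketch simply assumes the first 5 of the 13 digits are the position code and the remaining digits identify the item, with the quantity keyed in separately.

```python
# Sketch: decode a 13-digit warehouse bar code (field split is an assumption).

def parse_barcode(code: str):
    if len(code) != 13 or not code.isdigit():
        raise ValueError("expected a 13-digit numeric bar code")
    return {"position": code[:5], "item": code[5:]}

record = parse_barcode("1020300045678")
record["quantity"] = 25  # quantity is typed in by the operator
print(record)  # {'position': '10203', 'item': '00045678', 'quantity': 25}
```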


Design and Implementation of Web Crawler utilizing Unstructured data

  • Tanvir, Ahmed Md.;Chung, Mokdong
    • Journal of Korea Multimedia Society / v.22 no.3 / pp.374-385 / 2019
  • A web crawler is a program commonly used by search engines to find new content on the Internet, and the use of crawlers has made the web easier for users. In this paper, we structure unstructured data in order to collect data from web pages. Our system is able to choose words near a given keyword across multiple documents in an unstructured way; neighboring data for the keyword were collected through word2vec. The system's goal is filtering at the data-acquisition level for a large taxonomy. The main problem in text taxonomy is how to improve classification accuracy, and to improve it we propose a new weighting method for TF-IDF, modifying the TF algorithm to handle unstructured data. Finally, our system proposes a web-page search crawling algorithm, derived from TF-IDF and the RL web search algorithm, to enhance the efficiency of retrieving relevant information. This paper also examines the working of crawlers and crawling algorithms in search engines for efficient information retrieval.
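
The paper's modified TF variant is not spelled out in the abstract; the following Python sketch therefore shows only the standard TF-IDF formulation that such a modification would start from.

```python
# Sketch: standard TF-IDF weighting over tokenized documents.
import math
from collections import Counter

def tf_idf(docs):
    """docs: list of token lists -> list of {term: weight} maps."""
    df = Counter(t for d in docs for t in set(d))  # document frequencies
    n = len(docs)
    weights = []
    for d in docs:
        tf = Counter(d)
        weights.append({t: (c / len(d)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights

docs = [["web", "crawler", "search"], ["web", "page", "search"],
        ["word2vec", "keyword"]]
print(tf_idf(docs)[0])
```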

A Study on the Integration of Information Extraction Technology for Detecting Scientific Core Entities based on Large Resources (대용량 자원 기반 과학기술 핵심개체 탐지를 위한 정보추출기술 통합에 관한 연구)

  • Choi, Yun-Soo;Cheong, Chang-Hoo;Choi, Sung-Pil;You, Beom-Jong;Kim, Jae-Hoon
    • Journal of Information Management / v.40 no.4 / pp.1-22 / 2009
  • Large-scale information extraction plays an important role in advanced information retrieval, as well as in question answering and summarization. Information extraction can be defined as a process of converting unstructured documents into formalized, tabular information, and consists of named-entity recognition, terminology extraction, coreference resolution, and relation extraction. Since these elementary technologies have so far been studied independently, integrating all the necessary processes of information extraction is not trivial, owing to the diversity of their input/output formats and operating environments; as a result, it is difficult to process scientific documents so as to extract both named entities and technical terms at once. In this study, we define scientific core entities as a set of 10 types of named entities and technical terminologies in the biomedical domain. In order to extract these entities from scientific documents automatically and at once, we develop a framework for scientific core entity extraction that embraces all the pivotal language processors: the named-entity recognizer, the coreference resolver, and the terminology extractor. Each module of the integrated system has been evaluated on various corpora as well as on KEEC 2009. The system will be utilized for various information service areas such as information retrieval, question answering (Q&A), document indexing, and dictionary construction.
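
The integration problem can be pictured as wrapping heterogeneous extractors behind one calling convention; the Python sketch below does this with stub modules for NER, coreference resolution, and terminology extraction. All module internals are placeholders, not the framework's actual processors.

```python
# Sketch: heterogeneous extractors unified behind one pipeline interface.

def ner(text): return [("BRCA1", "GENE")]            # stub recognizer
def coref(text): return {"it": "BRCA1"}              # stub resolver
def term_extract(text): return ["tumor suppressor"]  # stub extractor

PIPELINE = [("entities", ner), ("coreference", coref), ("terms", term_extract)]

def extract_core_entities(text):
    """Run each wrapped module over the same document and merge the outputs."""
    return {name: module(text) for name, module in PIPELINE}

print(extract_core_entities("BRCA1 is a gene; it acts as a tumor suppressor."))
```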

Text Filtering using Iterative Boosting Algorithms (반복적 부스팅 학습을 이용한 문서 여과)

  • Hahn, Sang-Youn;Zang, Byoung-Tak
    • Journal of KIISE: Software and Applications / v.29 no.4 / pp.270-277 / 2002
  • Text filtering is the task of deciding whether a document is relevant to a specified topic. As the Internet and the Web become widespread and the number of documents delivered by e-mail grows explosively, the importance of text filtering increases as well. The aim of this paper is to improve the accuracy of text filtering systems by using machine learning techniques; we apply AdaBoost algorithms to the filtering task. An AdaBoost algorithm generates and combines a series of simple hypotheses, each of which decides the relevance of a document to a topic on the basis of whether or not the document includes a certain word. We begin with an existing AdaBoost algorithm that uses weak hypotheses with outputs of 1 or -1, then extend it to use weak hypotheses with real-valued outputs, which were proposed recently to improve error-reduction rates and final filtering performance. Next, we attempt to improve AdaBoost's performance further by setting the initial weights randomly according to the continuous Poisson distribution, executing AdaBoost, repeating these steps several times, and then combining all the hypotheses learned; this mitigates the overfitting that may occur when learning from a small amount of data. Experiments have been performed on the real document collections used in TREC-8, a well-established text retrieval contest; the dataset includes Financial Times articles from 1992 to 1994. The experimental results show that AdaBoost with real-valued hypotheses outperforms AdaBoost with binary-valued hypotheses, and that AdaBoost iterated with random weights further improves filtering accuracy. A comparison with all the participants of the TREC-8 filtering task is also provided.
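
For concreteness, the following is a minimal Python implementation of the basic binary-valued AdaBoost variant the paper starts from, with word-presence stumps as weak hypotheses; the real-valued outputs and the iterated Poisson-weighted restarts that the paper adds are noted in comments only.

```python
# Minimal AdaBoost with word-presence stumps, h(x) in {+1, -1}.
# The paper's extensions (real-valued h, Poisson-randomized restarts) would
# replace the stump outputs and the initial weights below.
import math

def adaboost(docs, labels, vocab, rounds=10):
    """docs: list of word sets; labels in {+1, -1}."""
    n = len(docs)
    w = [1.0 / n] * n                        # instance weights
    ensemble = []                            # (alpha, word, polarity)
    for _ in range(rounds):
        best = None
        for word in vocab:                   # pick the lowest-error stump
            for pol in (1, -1):              # h(x) = pol if word present else -pol
                err = sum(wi for wi, d, y in zip(w, docs, labels)
                          if (pol if word in d else -pol) != y)
                if best is None or err < best[0]:
                    best = (err, word, pol)
        err, word, pol = best
        err = min(max(err, 1e-9), 1 - 1e-9)  # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, word, pol))
        # Re-weight: misclassified documents gain weight, then normalize.
        w = [wi * math.exp(-alpha * y * (pol if word in d else -pol))
             for wi, d, y in zip(w, docs, labels)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def classify(ensemble, doc):
    score = sum(a * (p if word in doc else -p) for a, word, p in ensemble)
    return 1 if score >= 0 else -1

docs = [{"stocks", "market"}, {"recipe", "cook"}, {"market", "trade"}]
ens = adaboost(docs, [1, -1, 1],
               vocab={"stocks", "market", "recipe", "cook", "trade"}, rounds=3)
print(classify(ens, {"market", "prices"}))  # 1 (relevant)
```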

A Study on the Development of Search Algorithm for Identifying the Similar and Redundant Research (유사과제파악을 위한 검색 알고리즘의 개발에 관한 연구)

  • Park, Dong-Jin;Choi, Ki-Seok;Lee, Myung-Sun;Lee, Sang-Tae
    • The Journal of the Korea Contents Association / v.9 no.11 / pp.54-62 / 2009
  • To avoid redundant investment in the project-selection process, it is necessary to check whether a submitted research topic has already been proposed or carried out at another institution. This is possible through search engines that adopt keyword-matching algorithms based on Boolean techniques over a national research-results database. Even though the accuracy and speed of such information retrieval have improved, they still have fundamental limits caused by keyword matching. This paper examines an implemented TF-IDF-based algorithm and presents an experiment in which a search engine retrieves documents similar or redundant with respect to a research proposal and ranks them by priority. In addition to the generic TF-IDF algorithm, feature weighting and k-nearest-neighbors classification are implemented. The test documents were extracted from the NDSL (National Digital Science Library) web directory service.
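
A minimal Python sketch of the retrieval core follows, ranking prior projects by TF-IDF cosine similarity to a new proposal and returning the k nearest; the paper's specific feature-weighting scheme is not reproduced.

```python
# Sketch: rank prior projects by TF-IDF cosine similarity, k-NN cut.
import math
from collections import Counter

def tfidf_vectors(docs):
    df = Counter(t for d in docs for t in set(d))
    n = len(docs)
    return [{t: c * math.log((n + 1) / df[t]) for t, c in Counter(d).items()}
            for d in docs]

def cosine(u, v):
    dot = sum(u[t] * v[t] for t in u if t in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def k_nearest(proposal, corpus, k=5):
    vecs = tfidf_vectors(corpus + [proposal])   # vectorize jointly
    query, rest = vecs[-1], vecs[:-1]
    ranked = sorted(range(len(rest)), key=lambda i: -cosine(query, rest[i]))
    return ranked[:k]  # indices of the k most similar prior projects

corpus = [["deep", "learning", "vision"], ["bridge", "design", "xml"],
          ["deep", "vision", "net"]]
print(k_nearest(["deep", "learning", "net"], corpus, k=2))
```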