• Title/Abstract/Keywords: text information

Search results: 4,361 items (processing time: 0.028 seconds)

텍스트 마이닝을 활용한 사용자 핵심 요구사항 분석 방법론 : 중국 온라인 화장품 시장을 중심으로 (A Methodology for Customer Core Requirement Analysis by Using Text Mining : Focused on Chinese Online Cosmetics Market)

  • 신윤식;백동현
    • 산업경영시스템학회지 / Vol. 44, No. 2 / pp.66-77 / 2021
  • Companies widely use surveys to identify customer requirements, but surveys have some problems. First, responses are passive because the questionnaire is pre-designed by the company acting as surveyor. Second, the surveyor needs good preliminary knowledge to improve the quality of the survey. Text mining, on the other hand, is an excellent way to compensate for these limitations. The importance of online reviews has grown steadily, and the amount of available text data has increased enormously as Internet usage has risen. At the same time, text mining, the set of techniques for extracting high-quality information from text data, continues to improve. However, previous studies have tended to focus on improving the accuracy of individual analysis techniques. This study proposes a methodology that combines several text mining techniques and makes three main contributions. First, it can extract information from text data without preliminary design by a surveyor. Second, no prior knowledge is needed to extract the information. Lastly, the method provides a quantitative sentiment score that can be used in decision-making.
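As a rough illustration of the kind of quantitative sentiment score the abstract mentions (this is not the cited paper's method; the lexicon and weights below are invented for the example):

```python
# Minimal illustration: a quantitative sentiment score for a review,
# computed from a small hand-made polarity lexicon (hypothetical values).
LEXICON = {"good": 1.0, "great": 2.0, "love": 1.5, "bad": -1.0, "terrible": -2.0}

def sentiment_score(review: str) -> float:
    """Average polarity of lexicon words found in the review (0.0 if none)."""
    tokens = review.lower().split()
    hits = [LEXICON[t] for t in tokens if t in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

print(sentiment_score("great texture but terrible smell"))  # 0.0 (2.0 and -2.0 cancel out)
```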

본문 데이타베이스 연구에 관한 고찰과 그 전망 (Future and Directions for Research in Full Text Databases)

  • 노정순
    • 한국문헌정보학회지 / Vol. 17 / pp.49-83 / 1989
  • A full-text retrieval system is a natural language document retrieval system in which the full text of every document in a collection is stored on a computer, so that every word in every sentence of every document can be located by the machine. This kind of IR system has recently become widely available online in the fields of legal, newspaper, journal, and reference book indexing, and research interest in the field has increased accordingly. In this paper, research on full-text databases and retrieval systems is reviewed, directions for research in this field are speculated upon, questions in the field that need answering are considered, and the variables affecting online full-text retrieval and the various roles those variables play in a research study are described. Two obvious research questions in full-text retrieval have been how well full-text retrieval performs and how to improve the retrieval performance of full-text databases. Research to improve retrieval performance has incorporated ranking or weighting algorithms based on word occurrences, combined menu-driven and query-driven systems, and improvements in computer architectures and database record structures. The recent increase in the number of full-text databases of various sizes, forms, and subject matters, and recent developments in computer architecture, artificial intelligence, and videodisc technology promise new directions for research and scholarly growth. Studies on the interrelationships among the elements of the full-text retrieval situation, and on the relationship between each element and retrieval performance, may give a professional view of the theory and practice of full-text retrieval.

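A minimal sketch of the word-occurrence weighting and ranking the review discusses, using scikit-learn's TF-IDF implementation as a stand-in (the surveyed systems predate it and used their own schemes):

```python
# Illustrative only: rank full-text documents against a query by TF-IDF weighting.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["full text retrieval stores every word of every document",
        "menu driven systems combined with query driven retrieval",
        "videodisc technology and computer architecture"]
query = ["retrieval of full text documents"]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(docs)
query_vector = vectorizer.transform(query)

# Rank documents by cosine similarity to the query.
scores = cosine_similarity(query_vector, doc_matrix).ravel()
for rank, idx in enumerate(scores.argsort()[::-1], start=1):
    print(rank, round(float(scores[idx]), 3), docs[idx])
```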

Enhancing Text Document Clustering Using Non-negative Matrix Factorization and WordNet

  • Kim, Chul-Won;Park, Sun
    • Journal of information and communication convergence engineering / Vol. 11, No. 4 / pp.241-246 / 2013
  • A classic document clustering technique may incorrectly classify documents into different clusters when documents that should belong to the same cluster share no terms. Recently, internal and external knowledge-based approaches have been used to overcome this problem in text document clustering. However, the clustering results of these approaches are influenced by the inherent structure and topical composition of the documents, and organizing knowledge into an ontology is expensive. In this paper, we propose a new, enhanced text document clustering method using non-negative matrix factorization (NMF) and WordNet. The semantic terms extracted as cluster labels by NMF represent the inherent structure of a document cluster well. The proposed method further improves the quality of document clustering by using these cluster labels together with term weights based on the term mutual information of WordNet. The experimental results demonstrate that the proposed method achieves better performance than other text clustering methods.
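A minimal sketch of the NMF part of the proposed method, assuming scikit-learn; the WordNet-based term mutual information weighting described in the abstract is omitted here:

```python
# Rough sketch: NMF on a TF-IDF matrix, with the top terms of each NMF basis
# vector used as semantic cluster labels (toy corpus, illustrative only).
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the cat sat on the mat", "dogs and cats are pets",
        "stock markets fell sharply", "investors sold shares in the market"]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)

nmf = NMF(n_components=2, init="nndsvd", random_state=0)
W = nmf.fit_transform(X)                     # document-by-cluster weights
terms = tfidf.get_feature_names_out()

for k, component in enumerate(nmf.components_):
    top = component.argsort()[::-1][:3]      # top-3 terms act as the cluster label
    print(f"cluster {k}: {[terms[i] for i in top]}")
print("assignments:", W.argmax(axis=1))      # hard cluster assignment per document
```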

학술논문의 내용구조에 의한 전문검색시스템 구현과 성능평가에 관한 연구 (A Study on the Implementation and Performance Evaluation of Full-text Information Retrieval System based on Scientific Paper's Content Structure)

  • 이두영;이병기
    • 정보관리학회지 / Vol. 15, No. 3 / pp.73-93 / 1998
  • Because a document's content structure is closely related to users' information needs, this study sets up and tests the hypothesis that indexing the full text of documents divided into content-unit structures can improve retrieval effectiveness compared with conventional full-text database construction. To verify this hypothesis, a content-structure model for scholarly papers was first defined; based on this model, an experimental full-text database was built from about 70 scholarly papers in computer-related fields, and retrieval effectiveness was then measured to experimentally evaluate the performance of the content-structure-based full-text retrieval system.

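The abstract reports measuring retrieval effectiveness on the experimental database; a small sketch of the standard precision/recall computation (the paper's exact measures are not given here, so this is only the usual formulation):

```python
# Standard effectiveness measures over retrieved vs. relevant document IDs.
def precision_recall(retrieved: set[str], relevant: set[str]) -> tuple[float, float]:
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

print(precision_recall({"d1", "d2", "d3"}, {"d2", "d3", "d4", "d5"}))  # ~(0.67, 0.5)
```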

Extending TextAE for annotation of non-contiguous entities

  • Lever, Jake;Altman, Russ;Kim, Jin-Dong
    • Genomics & Informatics / Vol. 18, No. 2 / pp.15.1-15.6 / 2020
  • Named entity recognition tools are used to identify mentions of biomedical entities in free text and are essential components of high-quality information retrieval and extraction systems. Without good entity recognition, methods will mislabel searched text and will miss important information or identify spurious text that frustrates users. Most tools do not capture non-contiguous entities, which are separate spans of text that together refer to an entity, e.g., the entity "type 1 diabetes" in the phrase "type 1 and type 2 diabetes." This type of entity is commonly found in biomedical texts, especially in lists, where multiple biomedical entities are named in shortened form to avoid repeating words. Most text annotation systems, which enable users to view and edit entity annotations, do not support non-contiguous entities. Therefore, experts cannot even visualize non-contiguous entities, let alone annotate them to build valuable datasets for machine learning methods. To combat this problem, and as part of the BLAH6 hackathon, we extended the TextAE platform to allow visualization and annotation of non-contiguous entities. This enables users to add new subspans to existing entities by selecting additional text. We integrate this new functionality with TextAE's existing editing functionality to allow easy changes to entity annotations and editing of relation annotations involving non-contiguous entities, with importing from and exporting to the PubAnnotation format. Finally, we roughly quantify the problem across the entire accessible biomedical literature to highlight that a substantial number of non-contiguous entities appear in lists and would be missed by most text mining systems.
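As a rough illustration of the non-contiguous entity pattern the abstract describes (this is not the authors' quantification code; the regex is a hypothetical approximation for "X and Y Z" list constructions):

```python
# Detect one common non-contiguous entity pattern, e.g. "type 1 and type 2
# diabetes", where the entity "type 1 diabetes" has no contiguous span.
import re

# Hypothetical pattern: "<modifier> and <modifier> <head noun>"
PATTERN = re.compile(r"\b(\w+(?: \w+)?) and (\w+(?: \w+)?) (\w+)\b")

text = "Patients with type 1 and type 2 diabetes were enrolled."
for m in PATTERN.finditer(text):
    first_mod, second_mod, head = m.groups()
    print(f"possible non-contiguous entity: '{first_mod} {head}' (from '{m.group(0)}')")
```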

An Exploratory Analysis of Online Discussion of Library and Information Science Professionals in India using Text Mining

  • Garg, Mohit;Kanjilal, Uma
    • Journal of Information Science Theory and Practice / Vol. 10, No. 3 / pp.40-56 / 2022
  • This paper aims to implement a topic modeling technique for extracting the topics of online discussions among library professionals in India. Topic modeling is an established text mining technique popularly used for modeling text data from Twitter, Facebook, Yelp, and other social media platforms. The present study modeled the online discussions of Library and Information Science (LIS) professionals posted on Lis Links. The text data of these posts was extracted using a program written in R with the package "rvest." The data was pre-processed to remove blank posts, posts with text in non-English fonts, punctuation, URLs, emails, etc. Topic modeling with the Latent Dirichlet Allocation algorithm was applied to the pre-processed corpus to identify the topic associated with each post. A frequency analysis of word occurrences in the text corpus was also performed. The most frequent words included: library, information, university, librarian, book, professional, science, research, paper, question, answer, and management. This shows that LIS professionals actively discussed exams, research, and library operations on the Lis Links forum. The study categorized the online discussions on Lis Links into ten topics, i.e. "LIS Recruitment," "LIS Issues," "Other Discussion," "LIS Education," "LIS Research," "LIS Exams," "General Information related to Library," "LIS Admission," "Library and Professional Activities," and "Information Communication Technology (ICT)." The majority of the posts belonged to "LIS Exams," followed by "Other Discussion" and "General Information related to Library."
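A minimal sketch of Latent Dirichlet Allocation topic modeling on forum-style posts, assuming scikit-learn rather than the R tooling ("rvest" plus an LDA package) the study actually used:

```python
# Illustrative only: LDA over a toy corpus of forum-style posts.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

posts = ["question about library science exam answer",
         "university librarian recruitment notification",
         "research paper on information management"]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(posts)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top_terms = [terms[i] for i in topic.argsort()[::-1][:4]]   # top words per topic
    print(f"topic {k}: {top_terms}")
```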

문서 길이 정규화를 이용한 문서 요약 자동화 시스템 구현 (Implementation of Text Summarize Automation Using Document Length Normalization)

  • 이재훈;김영천;이성주
    • 한국지능시스템학회:학술대회논문집 / 한국퍼지및지능시스템학회 2001년도 추계학술대회 학술발표 논문집 / pp.51-55 / 2001
  • With the rapid growth of the World Wide Web and electronic information services, information is becoming available on-line at an incredible rate. One result is the oft-decried information overload. No one has time to read everything, yet we often have to make critical decisions based on what we are able to assimilate. The technology of automatic text summarization is becoming indispensable for dealing with this problem. Text summarization is the process of distilling the most important information from a source to produce an abridged version for a particular user or task. Information retrieval (IR) is the task of searching a set of documents for query-relevant documents, while text summarization can be considered the task of searching a document, viewed as a set of sentences, for topic-relevant sentences. In this paper, we show that document length normalization, a technique drawn from information retrieval, yields document information that is more reliable and better suited to the query. Experimental results on newspaper articles show that the document length normalization method is superior to methods that use the query alone.

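A hedged sketch of sentence scoring with document length normalization; the pivoted-normalization form and the constants below are assumptions, since the abstract does not spell out the exact formula:

```python
# Score sentences for extractive summarization by query-term overlap,
# normalized by a pivoted length normalization (assumed formulation).
def sentence_score(sentence: str, query_terms: set[str],
                   pivot: float = 10.0, slope: float = 0.75) -> float:
    tokens = sentence.lower().split()
    overlap = sum(1 for t in tokens if t in query_terms)
    norm = (1.0 - slope) * pivot + slope * len(tokens)   # pivoted length normalization
    return overlap / norm if norm else 0.0

query = {"summarization", "text"}
for s in ["Text summarization distills the most important information.",
          "No one has time to read everything on the web these days."]:
    print(round(sentence_score(s, query), 3), "|", s)
```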

한글 문자 입력 인터페이스 개발을 위한 눈-손 Coordination에 대한 연구 (A Study on the Eye-Hand Coordination for Korean Text Entry Interface Development)

  • 김정환;홍승권;명노해
    • 대한인간공학회지 / Vol. 26, No. 2 / pp.149-155 / 2007
  • Recently, various devices requiring text input, such as mobile phones, IPTVs, PDAs, and UMPCs, have emerged, and the frequency of text entry on them is increasing. This study focused on the evaluation of Korean text entry interfaces. Various models for evaluating text entry interfaces have been proposed, most of them based on the human cognitive process for text input. That cognitive process is divided into two components: a visual scanning process and a finger movement process. The time spent on visual scanning is modeled by the Hick-Hyman law, while the time for finger movement is determined by Fitts' law. Three questions arise for model-based evaluation of text entry interfaces. First, do the human cognitive processes during text entry (visual scanning and finger movement) occur sequentially, as the models assume? Second, can previous models predict actual text input time? Third, does the human cognitive process for text input vary with users' text entry speed? There was a gap between the measured text input time and the predicted time, and the gap was larger for participants who entered text quickly. The reason was found by investigating eye-hand coordination during the text input process. Contrary to the assumption that a visual scan of the keyboard is followed by a finger movement, the experienced group performed visual scanning and finger movement simultaneously. Arrival lead time was investigated to measure the extent of overlap between the two processes; 'arrival lead time' is the interval between the eye fixation on the target button and the button click. In addition, the experienced group used fewer fixations during text entry than the novice group. This result will contribute to the improvement of evaluation models for text entry interfaces.
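A worked sketch of the two laws named in the abstract under the sequential assumption the paper questions; the coefficients a and b are invented placeholders for the device-specific constants such a model would need to estimate:

```python
from math import log2

def hick_hyman(n_choices: int, a: float = 0.2, b: float = 0.15) -> float:
    """Visual-scan / choice time (s) for n equally likely keys."""
    return a + b * log2(n_choices)

def fitts(distance: float, width: float, a: float = 0.1, b: float = 0.12) -> float:
    """Finger-movement time (s) to a key of a given width at a given distance."""
    return a + b * log2(distance / width + 1)   # Shannon formulation

# Sequential assumption questioned by the paper: total time = scan time + movement time.
print(hick_hyman(12) + fitts(distance=30.0, width=8.0))
```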

텍스트 데이터 시각화를 위한 MVC 프레임워크 (A MVC Framework for Visualizing Text Data)

  • 최광선;정교성;김수동
    • 지능정보연구 / Vol. 20, No. 2 / pp.39-58 / 2014
  • As awareness of the importance of big data spreads and related technologies advance, how to visualize the results of big data processing and analysis has recently become a topic of great interest. This is because data visualization is a highly effective way to communicate analysis results more clearly and effectively. Visualization serves as a graphical user interface (GUI) through which the analysis system and the user communicate. In general, the more independent this GUI part is from data processing and analysis, the easier the system is to develop and maintain, so it is important to minimize the coupling between the GUI and the data processing and management parts by applying design patterns such as MVC (Model-View-Controller). Big data can be broadly divided into structured and unstructured data; structured data is relatively easy to visualize, whereas visualizing unstructured data is complex and varied. Nevertheless, as the analysis and use of unstructured data spread, visualization systems are being built in system-specific ways, according to each system's purpose, to escape the limitations of traditional visualization tools designed for structured data. Moreover, for text data, which accounts for most unstructured data analysis today, the applicable techniques are highly varied, including language analysis, text mining, and social network analysis, so a visualization technique applied to one system is not easily transferred to another. This is because the information model for text analysis results is often not designed to be applicable across different systems. To address this problem, this study identifies the common components of various text data analysis and visualization cases and presents a standardized information model, the text data visualization model, and on this basis proposes TexVizu, a visualization framework that serves as a system model connecting the model to the GUI part of the visualization.
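A generic MVC sketch of the decoupling the abstract argues for; the class and field names are illustrative assumptions, not the actual TexVizu information model:

```python
# Minimal MVC separation: the view renders whatever the model holds, and the
# controller is the only component that feeds analysis results into the model.
from dataclasses import dataclass, field

@dataclass
class TextVizModel:                       # Model: standardized analysis result
    terms: dict[str, float] = field(default_factory=dict)   # term -> weight

class TextVizView:                        # View: rendering only, no analysis logic
    def render(self, model: TextVizModel) -> str:
        return "\n".join(f"{t}: {'*' * int(w * 10)}" for t, w in model.terms.items())

class TextVizController:                  # Controller: updates the model, asks the view to render
    def __init__(self, model: TextVizModel, view: TextVizView):
        self.model, self.view = model, view
    def update(self, term_weights: dict[str, float]) -> str:
        self.model.terms = term_weights
        return self.view.render(self.model)

print(TextVizController(TextVizModel(), TextVizView()).update({"library": 0.9, "text": 0.5}))
```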

Deep-Learning Approach for Text Detection Using Fully Convolutional Networks

  • Tung, Trieu Son;Lee, Gueesang
    • International Journal of Contents / Vol. 14, No. 1 / pp.1-6 / 2018
  • Text, as one of the most influential inventions of humanity, has played an important role in human life since ancient times. The rich and precise information embodied in text is very useful in a wide range of vision-based applications: text data extracted from images can support automatic annotation, indexing, language translation, and assistance systems for impaired persons. Natural-scene text detection is therefore an active and important research topic in computer vision and document analysis. Previous methods perform poorly because of numerous false-positive and false-negative regions. In this paper, a fully convolutional network (FCN)-based method with a supervised architecture is used to localize textual regions. The model was trained directly on images, with pixel values used as inputs and binary ground truth used as labels. The method was evaluated on the ICDAR-2013 dataset and proved comparable to other feature-based methods. It could expedite future research on text detection using deep-learning-based approaches.
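A tiny FCN-style sketch in PyTorch of per-pixel text-region prediction; this only illustrates the general encoder-decoder idea, not the network used in the paper:

```python
# Downsample with convolutions, then upsample back to a per-pixel text/non-text map.
import torch
import torch.nn as nn

class TinyTextFCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2),   # 1-channel text-region logits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

logits = TinyTextFCN()(torch.randn(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 1, 64, 64]) -- same spatial size as the input
```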