
A Study on an Extension Compression Algorithm of Mixed Text by Hangeul-Alphabet

  • Ji, Kang-yoo;Cho, Mi-nam;Hong, Sung-soo;Park, Soo-bong
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2002년도 ITC-CSCC -1 / pp.446-449 / 2002
  • This paper presents an improved data compression algorithm for mixed text files composed of 2-byte completion-type Hangeul and 1-byte alphabet characters. The original LZW algorithm compresses alphabet text files efficiently but compresses 2-byte completion-type Hangeul text files inefficiently. To solve this problem, a compression algorithm using a 2-byte prefix field and a 2-byte suffix field in the compression table was developed, but it introduced another problem: the compression ratio for alphabet text files decreased. This paper proposes an improved LZW algorithm, the Extended LZW (ELZW) algorithm, whose compression table uses a 2-byte prefix field and a 1-byte suffix field; the prefix field holds a pointer (index) into the compression table, and the suffix field holds a counter for overlapping or recurring text data. To increase the compression ratio further, after the compression table is constructed, table entries are packed into bit strings of different lengths depending on whether they represent an alphabet character, a Hangeul character, or a pointer. The proposed ELZW algorithm outperforms 1-byte LZW by 7.0125 percent and 2-byte LZW by 11.725 percent.

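A minimal sketch of the dictionary mechanics involved, assuming a plain LZW baseline: each table entry is reachable through a 2-byte pointer, matching the abstract's prefix-field description. The ELZW repeat counter and the per-type bit packing are not reproduced, and all names below are illustrative.

```python
# Plain LZW sketch whose dictionary entries are addressed by 2-byte pointers,
# i.e. the (prefix pointer, suffix byte) table layout the abstract describes.
# The ELZW repeat-counter and bit-packing refinements are omitted.

def lzw_compress(data: bytes) -> list[int]:
    """Return a list of 2-byte table pointers (codes) for `data`."""
    # Codes 0-255 are single-byte roots; new entries get codes from 256 up.
    table = {bytes([i]): i for i in range(256)}
    codes, phrase = [], b""
    for byte in data:
        candidate = phrase + bytes([byte])
        if candidate in table:
            phrase = candidate           # keep extending the current match
        else:
            codes.append(table[phrase])  # emit pointer to the longest match
            if len(table) < 0xFFFF:      # pointer must fit in 2 bytes
                table[candidate] = len(table)
            phrase = bytes([byte])
    if phrase:
        codes.append(table[phrase])
    return codes

text = "한글과 alphabet이 섞인 텍스트".encode("utf-8")
print(lzw_compress(text)[:10])
```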

An effective approach to generate Wikipedia infobox of movie domain using semi-structured data

  • Bhuiyan, Hanif;Oh, Kyeong-Jin;Hong, Myung-Duk;Jo, Geun-Sik
    • 인터넷정보학회논문지 / Vol. 18, No. 3 / pp.49-61 / 2017
  • Wikipedia infoboxes have emerged as an important structured information source on the web. Composing the infobox for an article requires a considerable amount of manual effort from the author, and because of this manual involvement, infoboxes suffer from inconsistency, data heterogeneity, incompleteness, schema drift, and similar problems. Prior works attempted to solve these problems by generating infoboxes automatically from the corresponding article text. However, many Wikipedia articles do not have enough text content to generate an infobox. In this paper, we present an automated approach that generates infoboxes for the movie domain of Wikipedia by extracting information from several web sources instead of relying on article text alone. The proposed methodology uses semantic relations in the article content together with available semi-structured information on the web. It runs the article text through several classification steps to identify the appropriate template in the large pool of templates, then extracts the values of the corresponding template attributes from the web and generates the infobox. A comprehensive experimental evaluation demonstrated that the proposed scheme is an effective and efficient approach to generating Wikipedia infoboxes.
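
The pipeline can be pictured as a small skeleton. The sketch below is hypothetical: the template list, the classifier, and the web extractor are stand-ins for the paper's components, not its actual code.

```python
# Hypothetical skeleton of the pipeline described above: classify the article
# to choose a template, then fill the template's attributes from
# semi-structured web sources. Every name and body here is a stand-in.

TEMPLATE_ATTRS = {  # illustrative subset of movie-domain templates
    "Infobox film": ["director", "starring", "released", "runtime"],
    "Infobox television": ["creator", "network", "num_episodes"],
}

def select_template(article_text: str) -> str:
    # The paper runs classification over article content; a keyword
    # heuristic stands in for that classifier here.
    lowered = article_text.lower()
    return "Infobox television" if "television series" in lowered else "Infobox film"

def extract_attribute(title: str, attr: str) -> str | None:
    # Stand-in for querying semi-structured web sources for the value of
    # `attr`; returns None when nothing is found.
    return None

def generate_infobox(title: str, article_text: str) -> dict[str, str]:
    template = select_template(article_text)
    values = {a: extract_attribute(title, a) for a in TEMPLATE_ATTRS[template]}
    return {a: v for a, v in values.items() if v is not None}
```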

A Study of Hangul Text Steganography based on Genetic Algorithm (유전 알고리즘 기반 한글 텍스트 스테가노그래피의 연구)

  • 지선수
    • 한국산업정보학회논문지 / Vol. 21, No. 3 / pp.7-12 / 2016
  • To improve security in the hostile environment of the Internet, steganography focuses on hiding a secret message inside a cover medium; it thus complements encryption. This paper proposes a text steganography technique that uses Hangul. To raise the security level, the secret message is encrypted using the crossover operator of a genetic algorithm. The message is then embedded into a cover text so as to produce a stego-text with no change in the characteristics or structure of the cover medium. We confirm that maintaining an embedding capacity of 3.69% of the cover medium increases the size of the stego-text by 14%.
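
As a rough illustration of the crossover-based encryption step, the sketch below recombines the secret message's bit string with a shared key stream using one-point crossover; the key stream, the crossover point, and all names are assumptions, not the paper's exact operator.

```python
# Minimal sketch, under assumptions: encrypt a secret message by applying
# one-point crossover between its bit string and a shared key bit string.
# The embedding into Hangul cover text is not shown.
import random

def bits(msg: bytes) -> list[int]:
    """MSB-first bit string of a byte sequence."""
    return [(b >> i) & 1 for b in msg for i in range(7, -1, -1)]

def one_point_crossover(a: list[int], b: list[int], point: int):
    """Swap the tails of two equal-length bit strings after `point`."""
    return a[:point] + b[point:], b[:point] + a[point:]

secret = bits("비밀".encode("utf-8"))
key = [random.randint(0, 1) for _ in secret]  # shared key stream (assumed)
point = len(secret) // 2                      # crossover point (assumed)
cipher, _ = one_point_crossover(secret, key, point)
print(cipher[:16])
```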

Text Summarization on Large-scale Vietnamese Datasets

  • Ti-Hon, Nguyen;Thanh-Nghi, Do
    • Journal of information and communication convergence engineering / Vol. 20, No. 4 / pp.309-316 / 2022
  • This investigation is aimed at automatic text summarization on large-scale Vietnamese datasets. Vietnamese articles were collected from newspaper websites, and plain text was extracted to build a dataset of 1,101,101 documents. A new single-document extractive text summarization model was then proposed and evaluated on this dataset. In this summary model, the k-means algorithm clusters the sentences of the input document under different text representations, such as BoW (bag-of-words), TF-IDF (term frequency - inverse document frequency), Word2Vec (word-to-vector), GloVe, and FastText. The summary algorithm then uses the trained k-means model to rank the candidate sentences and builds the summary from the highest-ranked ones. The empirical F1 results were 51.91% ROUGE-1, 18.77% ROUGE-2, and 29.72% ROUGE-L, compared with 52.33% ROUGE-1, 16.17% ROUGE-2, and 33.09% ROUGE-L for a competitive abstractive model. The advantage of the proposed model is that it performs well with O(n, k, p) = O(n(k + 2/p)) + O(n log₂ n) + O(np) + O(nk²) + O(k) time complexity.
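
A condensed sketch of this clustering-based summarizer, assuming scikit-learn and a TF-IDF representation (one of the representations the paper tests), might look as follows; the ranking detail is simplified to taking the sentence nearest each centroid.

```python
# Sketch of k-means extractive summarization: embed sentences as TF-IDF
# vectors, cluster them, and keep the sentence closest to each centroid.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import pairwise_distances_argmin_min

def summarize(sentences: list[str], k: int = 3) -> list[str]:
    k = min(k, len(sentences))  # can't have more clusters than sentences
    X = TfidfVectorizer().fit_transform(sentences)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    # For each cluster, pick the sentence nearest its centroid.
    idx, _ = pairwise_distances_argmin_min(km.cluster_centers_, X)
    return [sentences[i] for i in sorted(set(idx))]
```

Ranking by distance to the centroid is one simple reading of "uses the trained k-means model to rank the candidate sentences"; the paper's exact ranking may differ.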

디지털 텍스트의 음절을 이용한 운율 정보 시각화에 관한 연구 (A Study on Rhythm Information Visualization Using Syllable of Digital Text)

  • 박선희;이재중;박진완
    • 한국콘텐츠학회:학술대회논문집 / 한국콘텐츠학회 2009년도 춘계 종합학술대회 논문집 / pp.120-126 / 2009
  • As the information age grows rapidly, the amount of digital text is increasing, and visualization work aimed at grasping this mass of digital text is increasing accordingly. Existing digital text visualization designs have concentrated on highlighting the meaning of a text and connecting sentence to sentence by introducing stemming algorithms and extracting word frequencies to give shape to key terms. As a result, they have fallen short in expressing prosody, which can visualize the emotional feel of digital text. The phonological unit that expresses prosody most effectively is the syllable. In a sentence, the syllable is the most basic unit for pronouncing words, phrases, and sentences, and stress, tone, and the lengths of prosodic elements are all based on the syllable. Sonority, the notion most closely tied to defining the syllable, expresses the airflow from the lungs and the kinetic energy of speech as acoustic energy. From this perspective, this study examines the phonological definition and properties of the syllable as a property of digital text and investigates a method for visualizing prosody through diagrams. In the experiment, digital text is converted into phonetic symbols, the sonority of syllables, which underlies the rhythm of every language, is computed, and the syllabified text's prosodic information is visualized as an image. Visualizing the prosodic information reveals the syllable structure of a digital text and expresses the text's sentiment through diagrams built from a systematic formula, aiding the user's understanding. The aim is to implement digital information visualization designed so that the prosody of the text can be grasped more easily.

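To make the sonority idea concrete, the sketch below assigns each phoneme class a value on a common textbook sonority scale (not necessarily the paper's exact scale) and computes the curve whose peaks mark syllable nuclei; the phoneme classification is assumed to be given by an upstream phonetic-transcription step.

```python
# Illustrative sonority-curve sketch: map phoneme classes to sonority values
# and read syllable nuclei off the peaks. The scale is a common textbook
# hierarchy, not the paper's exact one.
SONORITY = {"vowel": 5, "glide": 4, "liquid": 3,
            "nasal": 2, "fricative": 1, "stop": 0}

def sonority_curve(phones: list[tuple[str, str]]) -> list[int]:
    """phones: (symbol, class) pairs, e.g. [("s", "fricative"), ("a", "vowel")]."""
    return [SONORITY[cls] for _, cls in phones]

word = [("t", "stop"), ("e", "vowel"), ("k", "stop"), ("s", "fricative"),
        ("t", "stop"), ("ɯ", "vowel")]  # "텍스트", roughly transcribed
print(sonority_curve(word))  # peaks mark syllable nuclei
```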

A Term Importance-based Approach to Identifying Core Citations in Computational Linguistics Articles

  • Kang, In-Su
    • 한국컴퓨터정보학회논문지 / Vol. 22, No. 9 / pp.17-24 / 2017
  • Core citation recognition is the task of identifying the influential articles among the prior articles that a scholarly article cites. Previous approaches have employed citing-text occurrence information, textual similarities between citing and cited articles, and so on. This study proposes a term-based approach to core citation recognition that exploits the importance of the individual terms appearing in an in-text citation to calculate an influence strength for each cited article. Term importance is computed from various frequency statistics, such as term frequency (tf) in the in-text citation, tf in the citing article, inverse sentence frequency in the citing article, and inverse document frequency in a collection of articles. Experiments on a previous test set consisting of computational linguistics articles show that the term-based approach performs comparably with the previous approaches. The proposed technique could easily be extended with other term units such as n-grams and phrases, or with new term-importance formulae.
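
A minimal sketch of a term-based influence score in this spirit: the tf × idf-style combination below is one plausible instantiation, not the paper's exact formula, and the in-text citation contexts are assumed to be pre-extracted.

```python
# Sketch: score a cited article by summing the importance of the terms in
# its in-text citation context. The combining formula is illustrative.
import math
from collections import Counter

def influence_strength(citation_context: list[str],
                       citing_article: list[str],
                       doc_freq: dict[str, int], n_docs: int) -> float:
    tf_ctx = Counter(citation_context)  # tf within the in-text citation
    tf_art = Counter(citing_article)    # tf within the citing article
    score = 0.0
    for term, tf in tf_ctx.items():
        idf = math.log(n_docs / (1 + doc_freq.get(term, 0)))
        score += tf * math.log(1 + tf_art[term]) * idf
    return score
```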

Keyword Automatic Extraction Scheme with Enhanced TextRank using Word Co-Occurrence in Korean Document (한글 문서의 단어 동시 출현 정보에 개선된 TextRank를 적용한 키워드 자동 추출 기법)

  • 송광호;민지홍;김유성
    • 한국어정보학회:학술대회논문집 / 한국어정보학회 2016년도 제28회 한글및한국어정보처리학술대회 / pp.62-66 / 2016
  • For semantics-based processing of documents, extracting keywords that represent a document's content is a crucial step in terms of both accuracy and efficiency. However, existing studies that extract keywords from a single document either show low accuracy or have been validated only on limited domains, making their results hard to trust. This study therefore proposes a keyword extraction technique that is both accurate and applicable to texts from diverse domains: word co-occurrence information and a graph model are combined with a new variant of the TextRank algorithm applied on top of them. Performance evaluation of the proposed technique confirmed that it achieves higher accuracy than previous studies.

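As a baseline illustration, the sketch below builds a word co-occurrence graph over a sliding window and ranks nodes with PageRank, as plain TextRank does; the paper's enhancements to TextRank are not reproduced.

```python
# Plain TextRank baseline: co-occurrence graph over a sliding window,
# ranked with PageRank. Tokenization and stopword removal are assumed done.
import itertools
import networkx as nx

def textrank_keywords(tokens: list[str], window: int = 2, top_k: int = 5):
    graph = nx.Graph()
    for i in range(len(tokens) - window + 1):
        for a, b in itertools.combinations(tokens[i:i + window], 2):
            if a != b:
                graph.add_edge(a, b)  # co-occurrence within the window
    ranks = nx.pagerank(graph)
    return sorted(ranks, key=ranks.get, reverse=True)[:top_k]

tokens = "키워드 추출 은 문서 의 키워드 를 그래프 로 추출".split()
print(textrank_keywords(tokens))
```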

An Improved Coverless Text Steganography Algorithm Based on Pretreatment and POS

  • Liu, Yuling;Wu, Jiao;Chen, Xianyi
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15, No. 4 / pp.1553-1567 / 2021
  • Steganography is currently a hot research topic in the area of information security and privacy protection. However, most previous steganography methods are not robust against steganalysis and attacks because they are usually carried out by modifying covers. In this paper, we propose an improved coverless text steganography algorithm based on pretreatment and part of speech (POS): Chinese character components are used as locating marks, the POS is used to hide the number of keywords, and finally the retrieval of stego-texts is optimized by pretreatment. Experiments verify that our algorithm performs well in terms of embedding capacity, embedding success rate, and extraction accuracy, given appropriate locating-mark lengths and a large-scale text database.
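
The core coverless idea, reduced to a toy sketch: no cover is modified; instead, natural texts that already carry each secret keyword right after a locating mark are retrieved from a database. The character-component marks and the POS-based hiding of the keyword count are simplified away here, and all names are illustrative.

```python
# Toy coverless retrieval: for each secret keyword, find a database text in
# which the keyword already follows the locating mark, and transmit that
# text unchanged. A stand-in for the paper's mark/POS scheme.
def find_stego_texts(keywords: list[str], database: list[str],
                     mark: str) -> list[str]:
    chosen = []
    for kw in keywords:
        for text in database:
            pos = text.find(mark)
            if pos != -1 and text.startswith(kw, pos + len(mark)):
                chosen.append(text)  # this text transmits `kw` unmodified
                break
        else:
            raise LookupError(f"no cover text found for {kw!r}")
    return chosen
```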

A Public Open Civil Complaint Data Analysis Model to Improve Spatial Welfare for Residents - A Case Study of Community Welfare Analysis in Gangdong District - (거주민 공간복지 향상을 위한 공공 개방 민원 데이터 분석 모델 - 강동구 공간복지 분석 사례를 중심으로 -)

  • 신동윤
    • 한국BIM학회 논문집 / Vol. 13, No. 3 / pp.39-47 / 2023
  • This study introduces a model for enhancing community well-being through the use of public open data. To assess the abstract notion of residential satisfaction objectively, text data from civil complaints is analyzed, and because the data is publicly accessible, data collection costs are minimized. First, relevant complaint text data is collected and refined by removing extraneous information. The processed data is then combined with other meaningful datasets and subjected to topic modeling, a text mining technique. The derived insights are visualized using Geographic Information System (GIS) and Application Programming Interface (API) data. The efficacy of this analytical model was demonstrated in the Godeok/Gangil area. The proposed methodology allows for comprehensive analysis across time, space, and categories, and the approach remains flexible, incorporating additional public open data as needed within the overarching framework.
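
A minimal sketch of the topic-modeling step, assuming scikit-learn's LDA over refined complaint texts; the GIS/API visualization stage is omitted, and the sample complaints are invented placeholders.

```python
# Sketch: LDA topic modeling over refined civil-complaint texts.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

complaints = ["불법 주정차 단속 요청",      # invented placeholder complaints
              "공원 시설물 파손 신고",
              "불법 주정차 민원"]

vec = CountVectorizer()
X = vec.fit_transform(complaints)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for t, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-3:][::-1]]
    print(f"topic {t}: {top}")  # top terms characterize each complaint topic
```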

Joint Hierarchical Semantic Clipping and Sentence Extraction for Document Summarization

  • Yan, Wanying;Guo, Junjun
    • Journal of Information Processing Systems / Vol. 16, No. 4 / pp.820-831 / 2020
  • Extractive document summarization aims to select a few sentences from a given document while preserving its main information, but current extractive methods do not consider the repeated-sentence-information problem, which is especially acute in news document summarization. In view of the importance and redundancy of news text information, we propose in this paper a neural extractive summarization approach with joint sentence semantic clipping and selection, which effectively solves the problem of repeated sentences in news text summaries. Specifically, a hierarchical selective encoding network is constructed for both sentence-level and document-level representations, and data containing important information is extracted from the news text; a sentence extractor strategy is then adopted for joint scoring and clipping of redundant information. In this way, our model strikes a balance between extracting important information and filtering redundant information. Experimental results on the CNN/Daily Mail dataset and on a Court Public Opinion News dataset that we built show the effectiveness of the proposed approach in terms of ROUGE metrics, especially for redundant-information filtering.
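
The paper's neural encoder is not reproduced here; as a stand-in for its joint score-and-clip idea, the following MMR-style sketch greedily selects high-scoring sentences while penalizing similarity to sentences already chosen. The sentence scores are assumed to come from some upstream model, and the TF-IDF similarity is a simplification.

```python
# MMR-style stand-in for "joint scoring and redundancy clipping": greedily
# pick high-scoring sentences, discounted by similarity to those chosen.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def score_and_clip(sentences: list[str], scores: list[float],
                   k: int = 3, lam: float = 0.7) -> list[str]:
    sim = cosine_similarity(TfidfVectorizer().fit_transform(sentences))
    chosen: list[int] = []
    while len(chosen) < min(k, len(sentences)):
        best = max((i for i in range(len(sentences)) if i not in chosen),
                   key=lambda i: lam * scores[i]
                   - (1 - lam) * max((sim[i][j] for j in chosen), default=0.0))
        chosen.append(best)
    return [sentences[i] for i in sorted(chosen)]
```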