• Title/Summary/Keyword: 단어 벡터 (word vector)

Search results: 299

Image Compression Using Edge Map And Multi-Sided Side Match Finite-State Vector Quantization (윤곽선 맵과 다중 면 사이드 매치 유한상태 벡터 양자화를 이용한 영상 압축)

  • Cho, Seong-Hwan; Kim, Eung-Sung
    • Journal of the Korea Academia-Industrial cooperation Society / v.8 no.6 / pp.1419-1427 / 2007
  • In this paper, we propose an algorithm that implements multi-sided side match finite-state vector quantization (MSMVQ). After extracting edge information from an image and classifying its blocks into edge blocks and non-edge blocks, we construct an edge map. Edge blocks are subdivided into sixteen classes using discrete cosine transform (DCT) AC coefficients. Based on the edge map information, a state codebook is made from the master codebook, and the side match calculation is done over two or three sides of the current block. To reduce the number of transmitted bits, a decision is made whether or not to encode a non-edge block with the master codebook, based on the previously coded blocks. A variable-length coder is also used to reduce the bits allocated to the codeword indices sent to the decoder. Compared with the side match finite-state vector quantization (SMVQ) and two-sided SMVQ (TSMVQ) algorithms on the Zelda, Lenna, Bridge, and Peppers images, the new algorithm shows better picture quality than both SMVQ and TSMVQ.
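
As an illustration of the side-match step described above, here is a minimal sketch of ordinary two-sided side matching in Python: the state codebook is the subset of master codewords whose border pixels best continue the already-coded top and left neighbors. Block size, codebook size, and all data are toy assumptions; the paper's multi-sided extension and edge-map classes are not reproduced.

```python
import numpy as np

def side_match_state_codebook(master_codebook, top_block, left_block, state_size=8):
    """Pick the state codebook by two-sided side matching: keep the
    codewords whose borders best continue the coded neighbors."""
    distortions = np.zeros(len(master_codebook))
    for i, cw in enumerate(master_codebook):
        d = 0.0
        if top_block is not None:   # codeword top row vs. upper block's bottom row
            d += np.sum((cw[0, :] - top_block[-1, :]) ** 2)
        if left_block is not None:  # codeword left column vs. left block's right column
            d += np.sum((cw[:, 0] - left_block[:, -1]) ** 2)
        distortions[i] = d
    state_indices = np.argsort(distortions)[:state_size]
    return master_codebook[state_indices]

def encode_block(block, state_codebook):
    """Index of the nearest state codeword to the current block."""
    errs = np.sum((state_codebook - block) ** 2, axis=(1, 2))
    return int(np.argmin(errs))

# toy usage: 256 random 4x4 codewords, one block with coded top/left neighbors
rng = np.random.default_rng(0)
master = rng.random((256, 4, 4))
top, left, cur = rng.random((3, 4, 4))
state_cb = side_match_state_codebook(master, top, left)
print("chosen state codeword index:", encode_block(cur, state_cb))
```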

Related Documents Classification System by Similarity between Documents (문서 유사도를 통한 관련 문서 분류 시스템 연구)

  • Jeong, Jisoo; Jee, Minkyu; Go, Myunghyun; Kim, Hakdong; Lim, Heonyeong; Lee, Yurim; Kim, Wonil
    • Journal of Broadcast Engineering / v.24 no.1 / pp.77-86 / 2019
  • This paper proposes a system that analyzes and classifies collected documents using machine learning. Data are collected based on keywords associated with a specific domain, and non-conceptual tokens such as special characters are removed. Each word in the collected documents is then tagged by a Korean morphological analyzer, extracting nouns, verbs, and sentence units. The documents are embedded using a Doc2Vec model, which converts documents into vectors. The similarity between documents is measured through the embedding model, and document classifiers are trained with machine learning algorithms. Among the classifiers compared, the support vector machine performed best, with an F1-score of 0.83.
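
A minimal sketch of the pipeline described above, using gensim's Doc2Vec and a scikit-learn SVM; the toy corpus, labels, and hyperparameters are placeholders rather than the paper's settings.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.svm import SVC
from sklearn.metrics import f1_score

# toy corpus: pre-tokenized (e.g., morpheme-analyzed) documents with labels
docs = [(["단어", "벡터", "학습"], "nlp"),
        (["문서", "유사도", "분류"], "nlp"),
        (["주가", "예측", "모형"], "finance"),
        (["환율", "변동", "분석"], "finance")] * 10

# embed every document as a fixed-length vector with Doc2Vec
tagged = [TaggedDocument(words=w, tags=[str(i)]) for i, (w, _) in enumerate(docs)]
model = Doc2Vec(tagged, vector_size=50, window=3, min_count=1, epochs=40)

# train an SVM classifier on the document vectors, evaluate with F1
X = [model.infer_vector(w) for w, _ in docs]
y = [label for _, label in docs]
clf = SVC(kernel="rbf").fit(X[: len(X) // 2], y[: len(y) // 2])
pred = clf.predict(X[len(X) // 2 :])
print("F1:", f1_score(y[len(y) // 2 :], pred, average="macro"))
```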

A Tensor Space Model based Deep Neural Network for Automated Text Classification (자동문서분류를 위한 텐서공간모델 기반 심층 신경망)

  • Lim, Pu-reum; Kim, Han-joon
    • Database Research / v.34 no.3 / pp.3-13 / 2018
  • Text classification is one of the text mining technologies that classifies a given textual document into its appropriate categories; it is used in various fields such as spam email detection, news classification, question answering, sentiment analysis, and chatbots. In general, text classification systems utilize machine learning algorithms, and among them, naïve Bayes and support vector machines, which are suited to text data, are known to perform reasonably well. Recently, with the development of deep learning technology, several studies have applied deep neural networks such as recurrent neural networks (RNN) and convolutional neural networks (CNN) to improve the performance of text classification systems. However, current text classification techniques are still far from perfect. This paper focuses on the fact that text is usually expressed as a vector over word dimensions alone, which impairs the semantic information inherent in the text, and proposes a neural network architecture based upon a semantic tensor space model.
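
As a loose illustration of the idea, the sketch below contrasts the usual word-dimension vector with a second-order term-by-concept representation; the tiny hand-made concept axis is purely an assumption, not the paper's actual tensor space model.

```python
import numpy as np

# word-only representation: one axis, word counts, no semantics
vocab = ["stock", "price", "goal", "match", "vote", "party"]

# term x concept tensor: a second semantic axis per word occurrence
# (the three toy concepts here are an illustrative assumption)
concepts = {"economy": {"stock", "price"},
            "sports": {"goal", "match"},
            "politics": {"vote", "party"}}

def doc_tensor(tokens):
    t = np.zeros((len(vocab), len(concepts)))
    for tok in tokens:
        if tok in vocab:
            for j, members in enumerate(concepts.values()):
                if tok in members:
                    t[vocab.index(tok), j] += 1
    return t

# the flattened tensor would be the input to the classification network
x = doc_tensor(["stock", "price", "vote"])
print(x)             # rows: terms, columns: concepts
print(x.flatten())   # network input, preserving the semantic axis
```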

Scientific Awareness appearing in Korean Tokusatsu Series - With a focus on Vectorman: Warriors of the Earth (한국 특촬물 시리즈에 나타난 과학적 인식 - <지구용사 벡터맨>을 중심으로)

  • Bak, So-young
    • (The) Research of the performance art and culture / no.43 / pp.293-322 / 2021
  • The present study examined the scientific awareness appearing in Korean tokusatsu series by focusing on Vectorman: Warriors of the Earth. As a work representing Korean tokusatsu, Vectorman: Warriors of the Earth achieved the greatest success among such series. It was released thanks to the continued popularity of Japanese tokusatsu since the mid-1980s and the trend of robot animations. Due to the chronic problems of Korean children's programming, namely the oversupply of imported programs and repeated reruns, the need for domestically produced children's programs continued to come to the fore. However, as the popularity of Korean animation waned beginning in the mid-1990s, the burden of producing animation inevitably increased. As a result, Vectorman: Warriors of the Earth was produced as a tokusatsu rather than an animation, and because this was a time when an environment for using special effects technology was being fostered in broadcasting stations, computer visual effects were actively used for the series. The response to the new domestically produced tokusatsu series was explosive. The Vectorman series explained the abilities of cosmic beings using specific scientific terms such as DNA synthesis, brain cell transformation, and special psychological control devices instead of ambiguous phrases like "the scientific technology of space." Although the series does not describe processes and causes in detail, the way it defines technology in concrete terms rather than science fiction shows how scientific imagination manifests in specific forms in Korean society. Furthermore, the equal relationship between Vectorman and the aliens shows how the science of space, explained with the scientific terms of Earth, expresses confidence in the advancement of Korean scientific technology, which represents Earth. However, the female characters fail to gain entry into the domain of science and are portrayed as unscientific beings, revealing limitations in the series' scientific awareness.

Korean Word Sense Disambiguation using Dictionary and Corpus (사전과 말뭉치를 이용한 한국어 단어 중의성 해소)

  • Jeong, Hanjo; Park, Byeonghwa
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.1-13 / 2015
  • As opinion mining in big data applications has been highlighted, a great deal of research on unstructured data has been conducted. Social media services on the Internet generate unstructured or semi-structured data every second, often in the natural languages we use in daily life. Many words in human languages have multiple meanings or senses, which makes it very difficult for computers to extract useful information from such datasets. Traditional web search engines are usually based on keyword search, which can yield incorrect results far from users' intentions. Even though much progress has been made over the years in improving search engines so they provide users with appropriate results, there is still much room for improvement. Word sense disambiguation plays a very important role in natural language processing and is considered one of the most difficult problems in the area. Major approaches to word sense disambiguation can be classified as knowledge-based, supervised corpus-based, and unsupervised corpus-based. This paper presents a method that automatically generates a corpus for word sense disambiguation by taking advantage of the examples in existing dictionaries, avoiding expensive sense-tagging processes. It tests the effectiveness of the method with a naïve Bayes model, one of the supervised learning algorithms, using the Korean Standard Unabridged Dictionary and the Sejong Corpus. The Korean Standard Unabridged Dictionary has approximately 57,000 sentences; the Sejong Corpus has about 790,000 sentences tagged with both part-of-speech and senses. For the experiments, the dictionary and the corpus were evaluated both combined and separately, using cross-validation. Only nouns, the targets of word sense disambiguation, were selected: 93,522 word senses among 265,655 nouns, together with 56,914 sentences from related proverbs and examples, were additionally combined into the corpus. The Sejong Corpus was easily merged with the dictionary because it is tagged with the sense indices defined by the Korean Standard Unabridged Dictionary. Sense vectors were formed after the merged corpus was created. Terms used in creating the sense vectors were added to the named-entity dictionary of a Korean morphological analyzer. Using the extended named-entity dictionary, term vectors were extracted from the input sentences. Given an extracted term vector and the sense vector model built during preprocessing, the sense of each term was determined by vector space model based word sense disambiguation. The experiments show that better precision and recall are obtained with the corpus merged from the dictionary examples and the Sejong Corpus. This suggests the method can practically enhance the performance of Internet search engines and help derive more accurate sentence meanings in natural language processing tasks pertinent to search engines, opinion mining, and text mining. The naïve Bayes classifier used in this study is a supervised learning algorithm based on Bayes' theorem, under the assumption that all senses are independent. Even though this assumption is unrealistic and ignores correlations between attributes, the naïve Bayes classifier is widely used because of its simplicity, and in practice it is known to be very effective in many applications such as text classification and medical diagnosis. However, further research is needed to consider all possible combinations and/or partial combinations of the senses in a sentence. The effectiveness of word sense disambiguation may also be improved if rhetorical structures or morphological dependencies between words are analyzed through syntactic analysis.
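
A minimal sketch of vector-space-model disambiguation in the spirit described above: sense vectors are built from dictionary example sentences, and the sense whose vector is most similar to the input context is chosen. The toy sense inventory and plain bag-of-words weighting are assumptions, not the paper's exact setup.

```python
import numpy as np
from collections import Counter

# toy sense inventory: example sentences per sense of the ambiguous noun "배"
sense_examples = {
    "배(pear)":  [["달콤한", "배", "과일", "먹다"], ["배", "과수원", "수확"]],
    "배(ship)":  [["배", "항구", "출항", "바다"], ["바다", "배", "선원"]],
    "배(belly)": [["배", "아프다", "소화"], ["밥", "배", "부르다"]],
}

vocab = sorted({w for exs in sense_examples.values() for ex in exs for w in ex})

def bow(tokens):
    """Bag-of-words vector over the shared vocabulary."""
    c = Counter(tokens)
    return np.array([c[w] for w in vocab], dtype=float)

# sense vector = summed bag-of-words vector of that sense's example sentences
sense_vecs = {s: sum(bow(ex) for ex in exs) for s, exs in sense_examples.items()}

def disambiguate(context_tokens):
    """Choose the sense with the highest cosine similarity to the context."""
    v = bow(context_tokens)
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return max(sense_vecs, key=lambda s: cos(sense_vecs[s], v))

print(disambiguate(["바다", "출항", "배"]))  # -> 배(ship)
```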

Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding (CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로)

  • Park, Hyun-jung; Song, Min-chae; Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.59-83 / 2018
  • With the increasing importance of sentiment analysis for grasping the needs of customers and the public, various types of deep learning models have been actively applied to English texts. In deep-learning sentiment analysis of English texts, the natural language sentences in the training and test datasets are usually converted into sequences of word vectors before being fed to the models. Here, word vectors generally refer to vector representations of the words obtained by splitting sentences on space characters. There are several ways to derive word vectors, one of which is Word2Vec, used to produce the 300-dimensional Google word vectors from about 100 billion words of Google News data; these have been widely used in studies of sentiment analysis of reviews from various fields such as restaurants, movies, laptops, and cameras. Unlike in English, morphemes play an essential role in sentiment analysis and sentence structure analysis in Korean, a typical agglutinative language with well-developed postpositions and endings. A morpheme can be defined as the smallest meaningful unit of a language, and a word consists of one or more morphemes; for example, the word '예쁘고' consists of the morphemes '예쁘' (adjective stem) and '고' (connective ending). Reflecting the significance of Korean morphemes, it seems reasonable to adopt the morpheme as the basic unit in Korean sentiment analysis. Therefore, in this study, we use 'morpheme vectors' as input to a deep learning model rather than the 'word vectors' mainly used for English text. A morpheme vector is a vector representation of a morpheme and can be derived by applying an existing word vector derivation mechanism to sentences divided into their constituent morphemes. Several questions arise here. What is the desirable range of POS (part-of-speech) tags when deriving morpheme vectors to improve the classification accuracy of a deep learning model? Is it proper to apply a typical word vector model, which relies primarily on the form of words, to Korean with its high homonym ratio? Will text preprocessing such as correcting spelling or spacing errors affect the classification accuracy, especially when drawing morpheme vectors from Korean product reviews with many grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which may be encountered first when applying deep learning models to Korean texts. As a starting point, we summarize them as three central research questions. First, which is more effective as the initial input of a deep learning model: morpheme vectors from grammatically correct texts of a domain other than the analysis target, or morpheme vectors from considerably ungrammatical texts of the same domain? Second, what is an appropriate morpheme vector derivation method for Korean with regard to the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can we reach a satisfactory level of classification accuracy when applying deep learning to Korean sentiment analysis? As an approach to these questions, we generate various types of morpheme vectors reflecting the research questions and compare the classification accuracy through a non-static CNN (convolutional neural network) model that takes the morpheme vectors as input. As training and test datasets, 17,260 cosmetics product reviews from Naver Shopping are used. To derive the morpheme vectors, we use data from the same domain as the target and data from another domain: about 2 million Naver Shopping cosmetics product reviews and 520,000 Naver News articles, arguably corresponding to Google's News data. The six primary sets of morpheme vectors constructed in this study differ on three criteria. First, they come from two data sources: Naver News, with high grammatical correctness, and Naver Shopping cosmetics product reviews, with low grammatical correctness. Second, they differ in the degree of preprocessing, namely only sentence splitting versus additional spelling and spacing corrections after sentence separation. Third, they vary in the form of input fed to the word vector model: the morphemes themselves, or the morphemes with their POS tags attached. The morpheme vectors further vary in the range of POS tags considered, the minimum frequency of morphemes included, and the random initialization range. All morpheme vectors are derived through a CBOW (continuous bag-of-words) model with context window 5 and vector dimension 300. The results suggest that using text from the same domain even with lower grammatical correctness, performing spelling and spacing corrections as well as sentence splitting, and incorporating morphemes of all POS tags, including the incomprehensible category, lead to better classification accuracy. The POS tag attachment, devised for the high proportion of homonyms in Korean, and the minimum frequency threshold for including a morpheme seem to have no definite influence on classification accuracy.
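
A minimal sketch of the morpheme vector derivation described above, using gensim's Word2Vec in CBOW mode with context window 5 and 300 dimensions as stated in the abstract; the toy morpheme-split reviews stand in for the output of a Korean morphological analyzer.

```python
from gensim.models import Word2Vec

# toy morpheme-split product reviews; in practice these would come from a
# Korean morphological analyzer applied to the review corpus
reviews = [["촉촉", "하", "고", "발림", "좋", "다"],
           ["향", "이", "좋", "고", "촉촉", "하", "다"],
           ["발림", "이", "별로", "이", "다"]] * 100

# CBOW (sg=0) with context window 5 and 300-dim vectors, as in the paper
model = Word2Vec(sentences=reviews, sg=0, window=5, vector_size=300,
                 min_count=1, epochs=10)

vec = model.wv["촉촉"]          # morpheme vector, one row of the CNN input
print(vec.shape)                # (300,)
print(model.wv.most_similar("촉촉", topn=2))
```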

A Study on VQ/HMM using Nonlinear Clustering and Smoothing Method (비선형 집단화와 완화기법을 이용한 VQ/HMM에 관한 연구)

  • 정희석; 강철호
    • The Journal of the Acoustical Society of Korea / v.18 no.3 / pp.35-42 / 1999
  • In this paper, a modified clustering algorithm is proposed to improve the discrimination of a discrete HMM (hidden Markov model); it increases the recognition rate by 2.16% compared with the original HMM using the K-means or LBG algorithm. In addition, to prevent the drop in recognition rate caused by insufficient training data in HMM training, a modified probabilistic smoothing method is proposed, which increases the recognition rate by 3.07% in the speaker-independent case. In experiments applying the two proposed algorithms together, the average recognition rate increased by 4.66% in the speaker-independent case compared with the original VQ/HMM.
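
For context, here is a plain LBG codebook training loop of the kind used by the baseline above (the abstract does not specify the proposed nonlinear clustering itself, so this is only the standard algorithm); the splitting perturbation and feature data are placeholders.

```python
import numpy as np

def lbg(data, codebook_size, eps=0.01, iters=20):
    """Plain LBG vector quantization: grow the codebook by splitting,
    then refine each size with K-means-style iterations."""
    codebook = data.mean(axis=0, keepdims=True)
    while len(codebook) < codebook_size:
        # split every codeword into a perturbed pair
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(iters):
            # assign each vector to its nearest codeword
            d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            nearest = d.argmin(axis=1)
            # move each codeword to the centroid of its cell
            for j in range(len(codebook)):
                cell = data[nearest == j]
                if len(cell):
                    codebook[j] = cell.mean(axis=0)
    return codebook

rng = np.random.default_rng(1)
feats = rng.normal(size=(1000, 12))   # e.g., cepstral feature vectors
cb = lbg(feats, codebook_size=16)
print(cb.shape)                       # (16, 12)
```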

The Postprocessor of Automatic Segmentation for Synthesis Unit Generation (합성단위 자동생성을 위한 자동 음소 분할기 후처리에 대한 연구)

  • 박은영; 김상훈; 정재호
    • The Journal of the Acoustical Society of Korea / v.17 no.7 / pp.50-56 / 1998
  • This paper studies a postprocessor that compensates for the phoneme boundary errors of an automatic phoneme segmenter. The work responds to the need for large amounts of phoneme-level segmentation and phoneme-labeled data in speech/linguistic research for speech synthesis, prosody modeling, and the automatic generation of synthesis units. Manual segmentation and labeling are hard to keep consistent and very time-consuming, so automatic segmentation technology is becoming ever more important. This paper therefore proposes a postprocessor that narrows the error range of automatic segmentation boundaries, so that automatic segmentation results can be used directly as synthesis units, and shows that it is useful for building large prosody databases for synthesis. The proposed postprocessor trains a multi-layer perceptron (MLP) on feature vectors from hand-corrected data; new phoneme boundaries are then extracted from the automatic segmentation results of the speech-language translation system developed at ETRI (Electronics and Telecommunications Research Institute) using the trained MLP. On a synthesis database of isolated words, the segmentation results corrected by the postprocessor improved the segmentation rate by about 25% over the speech-language translation system, and the absolute error (|hand-label position − auto-label position|) improved by about 39%. This shows that an MLP-based postprocessor can narrow the range of automatic segmentation errors and can be used for building large prosody databases for synthesis and for the automatic generation of synthesis units.
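
A loose sketch of the postprocessing idea: an MLP learns to predict the offset between an automatic boundary and the hand-corrected one from features around the boundary. The feature layout, regressor choice, and synthetic data are assumptions; ETRI's actual setup is not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# synthetic training data: 5 frames of 12-dim features around each automatic
# boundary, with a target offset (in frames) to the hand-corrected boundary;
# both are placeholders standing in for real hand-labeled segmentation data
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5 * 12))
true_offset = X[:, :12].mean(axis=1)      # synthetic correction target

# train the MLP postprocessor to predict the boundary correction
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500).fit(X, true_offset)

# refine one automatic boundary by adding the predicted offset
auto_boundary = 137                        # frame index from the auto segmenter
refined = auto_boundary + mlp.predict(X[:1])[0]
print(f"refined boundary: {refined:.2f} frames")
```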

A Study on Isolated Words Speech Recognition in a Running Automobile (주행중인 자동차 환경에서의 고립단어 음성인식 연구)

  • 유봉근
    • Proceedings of the Acoustical Society of Korea Conference / 1998.06e / pp.381-384 / 1998
  • To secure both driver safety and convenience in a running automobile, this paper enables always-on speech input and output without auxiliary switch operation. To obtain noise-robust thresholds, the reference energy and zero crossing rate are updated at regular intervals, and end point detection is performed automatically and accurately in real time in two stages using a bandpass filter. DMS (Dynamic Multi-Section) models are used as reference patterns, and the use of two models is proposed to improve speaker discrimination. For robustness against the noise environment of a moving vehicle, conditions are divided into normal driving (under 80 km/h) and high-speed driving (over 80 km/h), and the model is selected automatically according to the varying noise level of the vehicle. Thirteenth-order PLP features and One-Stage Dynamic Programming (OSDP) are used as the feature vector and recognition algorithm. In experiments on 33 frequently used vehicle convenience control commands, recognition rates of 89.75% (speaker-independent) and 90.08% (speaker-dependent) were obtained on the Jungbu and Yeongdong expressways (over 80 km/h), and 92.29% (speaker-independent) and 92.42% (speaker-dependent) on the Gyeongbu expressway. In a low-speed driving environment (under 80 km/h, on cement and asphalt roads in and around Seoul), recognition rates were 92.89% (speaker-independent) and 94.44% (speaker-dependent).
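
A minimal sketch of energy/zero-crossing-rate endpoint detection with thresholds derived from noise-only frames, in the spirit of the description above; the frame sizes, threshold rules, and synthetic signal are assumptions, and the paper's two-stage bandpass scheme is not reproduced.

```python
import numpy as np

def frame_features(x, frame=240, hop=80):
    """Per-frame log energy and zero-crossing rate."""
    feats = []
    for s in range(0, len(x) - frame, hop):
        f = x[s:s + frame]
        energy = np.log(np.sum(f ** 2) + 1e-10)
        zcr = np.mean(np.abs(np.diff(np.sign(f))) > 0)
        feats.append((energy, zcr))
    return np.array(feats)

def detect_endpoints(x, noise_frames=25):
    """Mark speech frames where energy or ZCR exceeds thresholds derived
    from the leading noise-only frames; in a car these thresholds would
    be refreshed periodically, as the paper describes."""
    feats = frame_features(x)
    noise = feats[:noise_frames]
    e_thr = noise[:, 0].mean() + 3 * noise[:, 0].std()
    z_thr = noise[:, 1].mean() + 2 * noise[:, 1].std()
    speech = (feats[:, 0] > e_thr) | (feats[:, 1] > z_thr)
    idx = np.flatnonzero(speech)
    return (idx[0], idx[-1]) if len(idx) else None

# toy signal: noise, then a 440 Hz tone burst, then noise (8 kHz sampling)
sr, rng = 8000, np.random.default_rng(2)
sig = rng.normal(0, 0.01, 2 * sr)
t = np.arange(int(0.5 * sr)) / sr
sig[sr:sr + len(t)] += 0.5 * np.sin(2 * np.pi * 440 * t)
print("speech frames:", detect_endpoints(sig))
```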

The Classification Method of Document Plagiarism Similarity Based on Similar Syntagma Tree and Non-Index Terms (유사 어절 트리와 비 색인어 기반의 문서 표절 유사도 분류 방법)

  • 천승환; 김미영; 이귀상
    • Journal of the Korea Computer Industry Society / v.3 no.8 / pp.1039-1048 / 2002
  • It is difficult and laborious to distinguish originals from plagiarized copies among electronic or online-submitted documents, especially student homework, because in many cases the assignments are written on the same subject. Existing methods, which find the most appropriate category using the frequency of index terms in the documents to be classified, are not appropriate for this problem. In this paper, a new classification method is proposed to distinguish originals from plagiarized copies among similarly written documents; it is based on a syntagma vector built from the similar-syntagma tree structure and non-index terms.
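
A loose sketch of comparing documents by eojeol (syntagma) vectors rather than index terms, as the method above suggests; the similar-syntagma tree itself is not reproduced, and the whitespace tokenization and toy sentences are placeholders.

```python
from collections import Counter
import math

def syntagma_vector(doc):
    """Bag of eojeol (space-delimited Korean word units), keeping even the
    function-word-bearing units an index-term approach would discard."""
    return Counter(doc.split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    common = set(a) & set(b)
    num = sum(a[w] * b[w] for w in common)
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

original = "컴퓨터는 프로그램에 따라 자료를 처리하는 기계이다"
suspect = "컴퓨터는 프로그램에 따라 데이터를 처리하는 기계이다"
sim = cosine(syntagma_vector(original), syntagma_vector(suspect))
print(f"plagiarism similarity: {sim:.2f}")  # a high value flags a likely copy
```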
