• Title/Summary/Keyword: Lexicon

273 search results

Combination of the Verb ha- 'do' and Entity Type Nouns in Korean: A Generative Lexicon Approach (개체유형 명사와 동사 '하-'의 결합에 관한 생성어휘부 이론적 접근)

  • 임서현;이정민
    • Language and Information / v.8 no.1 / pp.77-100 / 2004
  • This paper aims to account for the direct combination of an entity type noun with the verb HA- 'do' (e.g., piano-rul ha- 'piano-ACC do') in Korean, based on Generative Lexicon Theory (Pustejovsky, 1995). The verb HA- 'do' coerces some entity type nouns (e.g., pap 'boiled rice') into event type ones, by virtue of the qualia of the nouns. Typically, a telic-based type coercion supplies individual predication to the HA- construction, and an agentive-based type coercion evokes a stage-level interpretation. Type coercion has certain constraints on the choice of qualia. We further point out that qualia cannot be a warehouse of pragmatic information. Qualia are composed of the information necessary to explain the lattice structure of lexical meaning and co-occurrence constraints, distinct from accidental information. Finally, we seriously consider co-composition as an alternative to type coercion for the crucial operation of type shift.
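The qualia-driven coercion described in this abstract can be pictured with a toy data structure; the sketch below is only an illustration of the idea, and the field names and simplified coercion rule are illustrative assumptions, not the paper's formalism.

```python
# Toy sketch of qualia structure and HA-driven type coercion; field names and the
# coercion rule are illustrative assumptions, not the paper's formalism.
from dataclasses import dataclass

@dataclass
class Qualia:
    formal: str    # what kind of entity the noun denotes
    telic: str     # purpose/function event (e.g., 'eat' for pap 'boiled rice')
    agentive: str  # coming-into-being event (e.g., 'cook' for pap)

@dataclass
class Noun:
    form: str
    sem_type: str  # 'entity' or 'event'
    qualia: Qualia

def coerce_with_ha(noun: Noun, role: str = "telic") -> str:
    """Coerce an entity-type noun into an event reading when combined with HA- 'do'.

    role='telic'    -> individual-level predication
    role='agentive' -> stage-level (ongoing activity) reading
    """
    if noun.sem_type != "entity":
        return f"{noun.form}-rul ha-: no coercion needed"
    event = getattr(noun.qualia, role)
    level = "individual-level" if role == "telic" else "stage-level"
    return f"{noun.form}-rul ha- ~ '{event} {noun.form}' ({level} reading)"

pap = Noun("pap", "entity", Qualia(formal="food", telic="eat", agentive="cook"))
print(coerce_with_ha(pap, "telic"))     # pap-rul ha- ~ 'eat pap' (individual-level reading)
print(coerce_with_ha(pap, "agentive"))  # pap-rul ha- ~ 'cook pap' (stage-level reading)
```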


Automatic bilingual lexicon construction via bilingual parallel corpus and pivot language (이국어 병렬말뭉치와 중간언어를 활용한 이국어 사전 자동구축)

  • Seo, Hyeong-Won;Kwon, Hong-Seok;Kim, Jae-Hoon
    • Proceedings of the Korea Information Processing Society Conference / 2013.05a / pp.307-310 / 2013
  • This paper proposes a new method for automatically constructing bi-directional bilingual lexicons between Korean-Spanish and Korean-French. Because it is generally difficult to build parallel corpora directly between Korean and Spanish or French, we propose a modified context-vector method that builds context vectors from English(EN)-Korean(KR)/Spanish(ES)/French(FR) parallel corpora, with English as the pivot language, and computes the similarity between them. Since bilingual parallel corpora between English and other languages are relatively widely available, this method makes it comparatively easy to construct KR-ES and KR-FR bi-directional bilingual lexicons. Experiments with the proposed method achieved an accuracy of up to 85% (ES→KR).
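For readers unfamiliar with the context-vector approach, the sketch below illustrates the general pivot idea under simple assumptions: each Korean and each Spanish word is represented by its co-occurrence counts with English words from the respective parallel corpus, and translation candidates are ranked by cosine similarity. The variable names and the bare count weighting are illustrative, not the paper's exact method.

```python
# Pivot-based context-vector sketch: KR and ES words are both represented over the
# shared English pivot vocabulary, then compared by cosine similarity.
from collections import Counter
import math

def context_vectors(aligned_pairs):
    """aligned_pairs: iterable of (source_tokens, english_tokens) sentence pairs."""
    vecs = {}
    for src_tokens, en_tokens in aligned_pairs:
        for w in src_tokens:
            vecs.setdefault(w, Counter()).update(en_tokens)  # counts over English pivot words
    return vecs

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def translations(kr_word, kr_vecs, es_vecs, top_n=5):
    """Rank Spanish candidates for a Korean word by similarity in the English pivot space."""
    v = kr_vecs[kr_word]
    scored = [(es_w, cosine(v, es_v)) for es_w, es_v in es_vecs.items()]
    return sorted(scored, key=lambda x: x[1], reverse=True)[:top_n]

# Hypothetical usage; kr_en_pairs and es_en_pairs stand in for the KR-EN and ES-EN corpora.
# kr_vecs = context_vectors(kr_en_pairs)
# es_vecs = context_vectors(es_en_pairs)
# print(translations("학교", kr_vecs, es_vecs))
```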

Analysis of Emotions in Lyrics by Combining Deep Learning BERT and Emotional Lexicon (딥러닝 모델(BERT)과 감정 어휘 사전을 결합한 음원 가사 감정 분석)

  • Yoon, Kyung Seob;Oh, Jong Min
    • Proceedings of the Korean Society of Computer Information Conference / 2022.07a / pp.471-474 / 2022
  • The music streaming service market has grown steadily, with Spotify and YouTube Music showing the most notable recent growth. The recommendation systems of both services are widely appreciated because they keep suggesting music that users are likely to enjoy. Recommendation performance can be regarded as proportional to the number of features available for recommendation, since the more information the system has, the better it can match what the user wants. In this paper, we analyze the proportions of emotions conveyed by song lyrics using a hybrid sentiment analysis model that combines the existing lexicon-based approach with a machine-based approach using the deep learning model BERT, retaining the strengths of each while compensating for their weaknesses. Using these emotion ratios as weighting features for tracks can enable more sophisticated recommendations that incorporate emotional information.
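A hybrid of this kind can be sketched as a weighted blend of a lexicon score and a transformer classifier's output. In the sketch below the lexicon entries, the checkpoint name, and the 50/50 weighting are placeholders, not the authors' configuration.

```python
# Hybrid lyric-emotion scorer blending a lexicon score with a transformer classifier's
# signed probability; lexicon entries, checkpoint name, and weighting are placeholders.
from transformers import pipeline

# Hypothetical emotion lexicon: word -> polarity weight in [-1, 1]
EMOTION_LEXICON = {"사랑": 0.8, "행복": 0.9, "눈물": -0.6, "이별": -0.7}

def lexicon_score(lyric_line: str) -> float:
    hits = [EMOTION_LEXICON[w] for w in lyric_line.split() if w in EMOTION_LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

# Placeholder checkpoint: substitute any fine-tuned Korean sentiment model here.
bert_clf = pipeline("sentiment-analysis", model="some-korean-sentiment-bert")

def hybrid_score(lyric_line: str, alpha: float = 0.5) -> float:
    """Blend the lexicon score with the classifier's signed probability (both in [-1, 1])."""
    out = bert_clf(lyric_line)[0]  # label scheme depends on the checkpoint
    signed = out["score"] if out["label"].upper().startswith("POS") else -out["score"]
    return alpha * lexicon_score(lyric_line) + (1 - alpha) * signed

print(hybrid_score("눈물 속에 피어난 사랑"))
```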


SEQUENTIAL MINIMAL OPTIMIZATION WITH RANDOM FOREST ALGORITHM (SMORF) USING TWITTER CLASSIFICATION TECHNIQUES

  • J.Uma;K.Prabha
    • International Journal of Computer Science & Network Security / v.23 no.4 / pp.116-122 / 2023
  • Sentiment classification techniques are commonly divided into three major categories: machine learning (ML) methods, lexicon-based (LB) methods, and hybrid methods. Machine learning methods apply well-known ML algorithms to linguistic features. In this paper, classification and identification are carried out with an optimization technique called sequential minimal optimization with the Random Forest algorithm (SMORF), aimed at improving the performance and efficiency of the sentiment classification framework. Three existing classification algorithms are compared with the proposed SMORF algorithm. The experimental evaluation reports precision (P), recall (R), F-measure (F), and accuracy. The proposed SMORF achieves the highest accuracy.
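The abstract does not spell out how the SMO-trained SVM and the Random Forest are combined; one plausible reading is an ensemble of the two classifiers, sketched below in scikit-learn (whose SVC is trained with an SMO-based solver) as a hard-voting pair over TF-IDF features. The toy tweets and the voting scheme are assumptions.

```python
# Sketch of an SMO-trained SVM paired with a Random Forest as a hard-voting ensemble
# over TF-IDF features; toy data and the combination scheme are assumptions.
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC  # scikit-learn's SVC uses libsvm's SMO-style solver

tweets = ["great service and friendly staff", "worst experience ever", "love this phone",
          "not good at all", "absolutely fantastic", "terrible, would not recommend",
          "really happy with the update", "awful battery life"]
labels = [1, 0, 1, 0, 1, 0, 1, 0]  # 1 = positive, 0 = negative (toy data)

smorf = make_pipeline(
    TfidfVectorizer(),
    VotingClassifier(
        estimators=[
            ("smo_svm", SVC(kernel="linear")),
            ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ],
        voting="hard",
    ),
)
smorf.fit(tweets, labels)
print(smorf.predict(["really great phone", "terrible service"]))
```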

Reorganizing Social Issues from R&D Perspective Using Social Network Analysis

  • Shun Wong, William Xiu;Kim, Namgyu
    • Journal of Information Technology Applications and Management / v.22 no.3 / pp.83-103 / 2015
  • The rapid development of internet technologies and social media over the last few years has generated a huge amount of unstructured text data, which contains a great deal of valuable information and issues. Text mining, that is, extracting meaningful information from unstructured text data, has therefore gained attention from many researchers in various fields. Topic analysis is a text mining application that is used to determine the main issues in a large volume of text documents. However, it is difficult to identify related issues or meaningful insights because the number of issues derived through topic analysis is too large. Furthermore, traditional issue-clustering methods can only be performed based on the co-occurrence frequency of issue keywords across many documents. An association between issues with a low co-occurrence frequency therefore cannot be recognized by traditional issue-clustering methods, even if those issues are strongly related from other perspectives. In this research, we propose a methodology to reorganize social issues from a research and development (R&D) perspective using social network analysis. Using an R&D perspective lexicon, issues that consistently share the same R&D keywords can be identified through social network analysis. In this study, the R&D keywords associated with a particular issue imply the key technology elements needed to solve that issue. Issue clustering can then be performed based on the analysis results. Furthermore, the relationships between issues that share the same R&D keywords can be reorganized more systematically by grouping them into clusters according to the R&D perspective lexicon. We expect that our methodology will contribute to establishing efficient R&D investment policies at the national level by enhancing the reusability of R&D knowledge, based on issue clustering using the R&D perspective lexicon. In addition, business companies could utilize the results by aligning R&D with their business strategy plans, helping them develop innovative products and new technologies that sustain innovative business models.
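The keyword-sharing idea lends itself to a small network sketch: issues become nodes, edges link issues that share R&D-perspective keywords, and communities in the graph serve as clusters. The sketch below uses networkx and greedy modularity clustering; the issue/keyword data and the choice of community algorithm are illustrative assumptions, not the paper's.

```python
# Issue network: nodes are issues, edges link issues that share R&D-perspective keywords,
# and graph communities give issue clusters. Data and algorithm choice are illustrative.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical mapping extracted with an R&D perspective lexicon: issue -> R&D keywords
issue_keywords = {
    "fine dust":    {"air filter", "sensor network"},
    "indoor air":   {"air filter", "IoT"},
    "traffic jam":  {"sensor network", "autonomous driving"},
    "elderly care": {"IoT", "wearable"},
}

G = nx.Graph()
G.add_nodes_from(issue_keywords)
issues = list(issue_keywords)
for i, a in enumerate(issues):
    for b in issues[i + 1:]:
        shared = issue_keywords[a] & issue_keywords[b]
        if shared:  # issues sharing R&D keywords are linked, weighted by overlap size
            G.add_edge(a, b, weight=len(shared))

clusters = greedy_modularity_communities(G, weight="weight")
for cluster in clusters:
    print(sorted(cluster))
```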

A Study on the Automatic Lexical Acquisition for Multi-linguistic Speech Recognition (다국어 음성 인식을 위한 자동 어휘모델의 생성에 대한 연구)

  • 지원우;윤춘덕;김우성;김석동
    • The Journal of the Acoustical Society of Korea / v.22 no.6 / pp.434-442 / 2003
  • Software internationalization, the process of making software easier to localize for specific languages, has deep implications when applied to speech technology, where the goal of the task lies in the very essence of the particular language. A great deal of work and fine-tuning has gone into language processing software based on ASCII or a single language, say English, thus making a port to different languages difficult. The inherent identity of a language manifests itself in its lexicon, where its character set, phoneme set, and pronunciation rules are revealed. We propose a decomposition of the lexicon building process into four discrete and sequential steps, preceded by a preprocessing stage that translates from a language-specific code to Unicode: (step 1) transliterating code points from Unicode, (step 2) applying phonetic standardization rules, (step 3) applying grapheme-to-phoneme rules, and (step 4) applying phonological processes.
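The four-step decomposition can be pictured as a simple function pipeline; in the sketch below every rule table is a tiny illustrative stand-in for the language-specific rule sets the paper refers to.

```python
# Pipeline sketch of the four-step lexicon building process; all rule tables are toy stand-ins.
def to_unicode(raw: bytes, encoding: str = "euc-kr") -> str:
    """Preprocessing: convert a language-specific encoding to Unicode."""
    return raw.decode(encoding)

def transliterate(text: str) -> str:
    """Step 1: transliterate Unicode code points into a working romanization."""
    table = {"한": "han", "글": "geul"}  # illustrative entries only
    return "".join(table.get(ch, ch) for ch in text)

def standardize(translit: str) -> str:
    """Step 2: apply phonetic standardization rules (toy rule: collapse 'eu' to 'u')."""
    return translit.replace("eu", "u")

def grapheme_to_phoneme(standard: str) -> list[str]:
    """Step 3: apply grapheme-to-phoneme rules (toy rule table)."""
    g2p = {"h": "h", "a": "a", "n": "n", "g": "k", "u": "u", "l": "l"}
    return [g2p.get(ch, ch) for ch in standard]

def phonological_processes(phonemes: list[str]) -> list[str]:
    """Step 4: apply phonological processes (identity placeholder for rules such as assimilation)."""
    return phonemes

entry = phonological_processes(
    grapheme_to_phoneme(standardize(transliterate(to_unicode("한글".encode("euc-kr")))))
)
print(entry)  # ['h', 'a', 'n', 'k', 'u', 'l']
```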

A Crowdsourcing-based Emotional Words Tagging Game for Building a Polarity Lexicon in Korean (한국어 극성 사전 구축을 위한 크라우드소싱 기반 감성 단어 극성 태깅 게임)

  • Kim, Jun-Gi;Kang, Shin-Jin;Bae, Byung-Chull
    • Journal of Korea Game Society / v.17 no.2 / pp.135-144 / 2017
  • Sentiment analysis refers to a way of analyzing a writer's subjective opinions or feelings through text. For effective sentiment analysis, it is essential to build an emotional word polarity lexicon. This paper introduces a crowdsourcing-based game that we have developed for efficiently building a polarity lexicon in Korean. First, we collected a corpus from related Internet communities using a crawler and segmented it into words using the Twitter POS analyzer. These POS-tagged words are presented in a mobile tagging game in which players voluntarily tag the polarities of the words, and the results are collected into a database. So far we have tagged the polarities of about 1,200 words. We expect that our research can contribute to Korean sentiment analysis research, especially in the game domain, by collecting more emotional word data in the future.
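Aggregating the players' tags into lexicon entries can be sketched as a majority vote with a minimum-agreement threshold; the thresholds and example tags below are illustrative, not the authors' implementation.

```python
# Majority-vote aggregation of crowdsourced polarity tags into a lexicon; thresholds are illustrative.
from collections import Counter, defaultdict

def build_polarity_lexicon(tags, min_votes=3, min_agreement=0.6):
    """tags: iterable of (word, polarity) pairs, polarity in {'positive', 'negative', 'neutral'}."""
    votes = defaultdict(Counter)
    for word, polarity in tags:
        votes[word][polarity] += 1
    lexicon = {}
    for word, counts in votes.items():
        total = sum(counts.values())
        label, n = counts.most_common(1)[0]
        if total >= min_votes and n / total >= min_agreement:
            lexicon[word] = label  # keep only sufficiently agreed-upon words
    return lexicon

player_tags = [("좋다", "positive"), ("좋다", "positive"), ("좋다", "positive"),
               ("렉", "negative"), ("렉", "negative"), ("렉", "neutral")]
print(build_polarity_lexicon(player_tags))  # {'좋다': 'positive', '렉': 'negative'}
```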

A domain-specific sentiment lexicon construction method for stock index directionality (주가지수 방향성 예측을 위한 도메인 맞춤형 감성사전 구축방안)

  • Kim, Jae-Bong;Kim, Hyoung-Joong
    • Journal of Digital Contents Society / v.18 no.3 / pp.585-592 / 2017
  • As the development of personal devices has made everyday internet use much easier than before, finding information and sharing it through social media has become commonplace. In particular, communities specialized in individual fields have become so influential that they can significantly affect our society, and businesses and governments now pay attention to reflecting these opinions in their strategies. The stock market fluctuates with various social factors. In order to take social trends into account, many studies of the stock market have made use of big data analysis as well as traditional approaches based on buzz volume; representative examples are studies using text data such as newspaper articles. In this paper, we analyze posts from 'Paxnet', a site for securities specialists, to supplement the limitations of news data. Based on this, we help researchers analyze investor sentiment by generating a domain-specific sentiment lexicon for the stock market.
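One simple way to derive such domain-specific polarities, sketched below, is to score each word by the smoothed log-odds of its appearing in posts from days the index closed up versus days it closed down; this scoring rule and the helper names in the commented usage lines are assumptions, not the paper's exact procedure.

```python
# Domain-specific polarity scores from up-day vs. down-day word occurrences (illustrative scoring).
import math
from collections import Counter

def domain_sentiment_lexicon(posts, min_count=5):
    """posts: iterable of (tokens, direction) where direction is 'up' or 'down' for that day."""
    up, down = Counter(), Counter()
    for tokens, direction in posts:
        (up if direction == "up" else down).update(set(tokens))
    lexicon = {}
    for word in set(up) | set(down):
        total = up[word] + down[word]
        if total >= min_count:
            # smoothed log-odds of appearing on up-days versus down-days
            lexicon[word] = math.log((up[word] + 1) / (down[word] + 1))
    return lexicon

# Hypothetical usage; tokenize() and label_by_next_day_close() stand in for real preprocessing.
# posts = [(tokenize(p.text), label_by_next_day_close(p.date)) for p in paxnet_posts]
# lexicon = domain_sentiment_lexicon(posts)
```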

Automatic Conversion of English Pronunciation Using Sequence-to-Sequence Model (Sequence-to-Sequence Model을 이용한 영어 발음 기호 자동 변환)

  • Lee, Kong Joo;Choi, Yong Seok
    • KIPS Transactions on Software and Data Engineering / v.6 no.5 / pp.267-278 / 2017
  • As the same letter can be pronounced differently depending on word context, one should refer to a lexicon in order to pronounce a word correctly. The phonetic alphabets that lexicons adopt, as well as the pronunciations they describe for the same word, can differ from lexicon to lexicon. In this paper, we use a sequence-to-sequence model, which is widely used in deep learning research, to convert automatically from one pronunciation to another. Twelve seq2seq models are implemented based on pronunciation training data collected from 4 different lexicons. The exact-match accuracy of the models ranges from 74.5% to 89.6%. The aim of this study is twofold. One is to understand the properties of the phonetic alphabets and pronunciations used in various lexicons. The other is to understand the characteristics of seq2seq models through error analysis.
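A minimal character-level encoder-decoder of the kind used for this task is sketched below in PyTorch; the symbol inventory, hyperparameters, and the single toy training step are illustrative and do not reflect the configurations reported in the paper.

```python
# Minimal character-level seq2seq (GRU encoder-decoder) for pronunciation-to-pronunciation
# conversion; symbol sets, sizes, and the toy training step are illustrative assumptions.
import torch
import torch.nn as nn

SRC_SYMS = ["<pad>", "<sos>", "<eos>"] + list("aeiouptkbdgmnlrszfvʃʒθðŋæɪʊə")
TGT_SYMS = SRC_SYMS  # in practice the source and target phonetic alphabets differ
src_idx = {s: i for i, s in enumerate(SRC_SYMS)}
tgt_idx = {s: i for i, s in enumerate(TGT_SYMS)}

class Seq2Seq(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.src_emb = nn.Embedding(len(SRC_SYMS), hidden)
        self.tgt_emb = nn.Embedding(len(TGT_SYMS), hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, len(TGT_SYMS))

    def forward(self, src, tgt_in):
        _, h = self.encoder(self.src_emb(src))          # encode the source pronunciation
        dec, _ = self.decoder(self.tgt_emb(tgt_in), h)  # teacher forcing on the shifted target
        return self.out(dec)

model = Seq2Seq()
loss_fn = nn.CrossEntropyLoss(ignore_index=tgt_idx["<pad>"])
optim = torch.optim.Adam(model.parameters(), lr=1e-3)

# One toy training step on a single (source, target) pronunciation pair: 'kæt' -> 'kat'.
src = torch.tensor([[src_idx[c] for c in "kæt"] + [src_idx["<eos>"]]])
tgt = [tgt_idx["<sos>"]] + [tgt_idx[c] for c in "kat"] + [tgt_idx["<eos>"]]
tgt_in, tgt_out = torch.tensor([tgt[:-1]]), torch.tensor([tgt[1:]])

optim.zero_grad()
logits = model(src, tgt_in)                      # (batch, target length, vocabulary)
loss = loss_fn(logits.transpose(1, 2), tgt_out)  # cross-entropy over the vocabulary axis
loss.backward()
optim.step()
print(float(loss))
```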