• Title/Abstract/Keyword: sentence

Search results: 1,648 items

가상세계 속에 보인 일본어의 가족 간의 문말 표현에 대해 - 교수매체로서의 문말의 정중체와 종조사 사용에 대해 (The Expression of Ending Sentence in Family Conversations in the Virtual Language - Focusing on Politeness and Sentence-final Particle with Instructional Media -)

  • 양정순
    • 비교문화연구 / Vol. 39 / pp.433-460 / 2015
  • This paper analyzed politeness and sentence-final expressions in family conversations rendered in the virtual language of cartoon characters. In the historical genre, younger speakers tend to attach sentence-final particles to the polite form, while older speakers tend to attach them to the plain form; in other fiction genres, both younger and older speakers attach sentence-final particles to the plain form. The use of honorifics is determined by circumstances and charactonym. Comparing translated conversations with the originals revealed differences between the translated works and the source texts. When Japanese instructors use such material as an instructional medium for teaching Japanese, they should give students supplementary explanations. In the virtual language of cartoons, 'WA' and 'KASIRA', which female speakers usually use, are also used by male speakers, and 'ZO' and 'ZE', which male speakers usually use, are also used by female speakers. In translation, 'KANA' and 'KASIRA' are rendered as 'KA?', 'WA', 'ZO', and 'ZE' as 'A(EO)?', and 'WAYO' and 'ZEYO' as 'AYO(EOYO)'. When sentence-final particles appear in the virtual language of cartoons, supplementary explanations and further examination are needed.
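
The particle-to-ending correspondences reported in this abstract amount to a small lookup table. A minimal sketch in Python (the romanized forms are taken verbatim from the abstract; the function name is ours):

```python
# Correspondences between Japanese sentence-final particles and the Korean
# endings they are translated into, as reported in the abstract above.
PARTICLE_TO_KOREAN = {
    "KANA": "KA?", "KASIRA": "KA?",
    "WA": "A(EO)?", "ZO": "A(EO)?", "ZE": "A(EO)?",
    "WAYO": "AYO(EOYO)", "ZEYO": "AYO(EOYO)",
}

def translate_particle(particle: str) -> str:
    """Look up the Korean ending used for a Japanese sentence-final particle."""
    return PARTICLE_TO_KOREAN.get(particle.upper(), "(no fixed correspondence)")

print(translate_particle("kasira"))  # -> KA?
```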

감정 표현구 단위 분류기와 문장 단위 분류기의 결합을 통한 주관적 문장 분류의 성능 향상 (Combining Sentimental Expression-level and Sentence-level Classifiers to Improve Subjective Sentence Classification)

  • 강인호
    • 정보처리학회논문지B / Vol. 14B, No. 7 / pp.559-566 / 2007
  • A subjective sentence is a sentence containing subjective content, from which the author's opinion about a product or event can be inferred. The subjective expressions that mark such content may be spread evenly across the sentence or may appear only in a limited region. Therefore, for more accurate classification, information from subjective or objective expression phrases conveying facts or sentiments is needed in addition to features covering the whole sentence. This study proposes a method that improves the performance of a subjective/objective sentence classifier by combining classification results based on the whole sentence with results based on sentimental expressions. Since one sentence can contain several expressions, multiple expression-level results are obtained and combined with the sentence-level result through machine learning. Experiments show that combining the two highest-valued expression-level results with the whole-sentence result yields an accuracy of 79.7%, a 2.5% improvement.
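
A minimal sketch of the combination step described above, assuming each classifier outputs a subjectivity score in [0, 1]: the two largest expression-level scores are joined with the whole-sentence score into a feature vector for a meta-classifier. The feature layout and the logistic-regression learner are illustrative stand-ins, not the paper's exact setup.

```python
from sklearn.linear_model import LogisticRegression

def combine_features(sentence_score, expression_scores):
    """Join the whole-sentence score with the two largest expression-level
    scores, padding with 0.0 when a sentence has fewer than two expressions."""
    top_two = sorted(expression_scores, reverse=True)[:2]
    top_two += [0.0] * (2 - len(top_two))
    return [sentence_score] + top_two

# Hypothetical per-sentence scores with gold labels (1 = subjective).
X = [combine_features(0.8, [0.9, 0.4]),
     combine_features(0.3, [0.2]),
     combine_features(0.6, [0.7, 0.5, 0.1]),
     combine_features(0.2, [])]
y = [1, 0, 1, 0]

meta = LogisticRegression().fit(X, y)
print(meta.predict([combine_features(0.7, [0.85])]))
```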

문형 사전을 위한 문형 빈도 조사 (Studying the frequencies of sentence pattern for a sentence patterns dictionary)

  • 김유미
    • 인지과학 / Vol. 16, No. 2 / pp.123-140 / 2005
  • This paper surveys the occurrence and usage frequencies of sentence patterns in order to design an automatic sentence-pattern checker based on an electronic sentence-pattern dictionary for Korean language education. It first defines the concept of a sentence pattern in Korean education and classifies patterns into two types: syntactic patterns, centered on predicates, and expression patterns, centered on bound nouns, endings, and particles. It then analyzes how these patterns appear in learner corpora. Two learner corpora were built: a standard corpus of material that learners must acquire, and an error corpus of learners' own production. Occurrence frequencies were surveyed in the standard corpus, composed of Korean language textbooks, and usage frequencies were surveyed in the error corpus, a collection of texts written by learners. The frequency ranking of learners' pattern usage is recorded in the electronic sentence-pattern dictionary, which will optimize pattern search speed.
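
A toy sketch of the frequency survey described above, using plain substring matching over a miniature corpus. A real checker would match patterns against morphologically analyzed text; the corpus and patterns below are illustrative.

```python
from collections import Counter

def pattern_frequencies(sentences, patterns):
    """Count raw occurrences of each expression pattern in a corpus."""
    counts = Counter()
    for sent in sentences:
        for p in patterns:
            counts[p] += sent.count(p)
    return counts

standard_corpus = ["밥을 먹고 싶다.", "집에 가고 싶어요.", "한번 읽어 보다."]
patterns = ["고 싶", "어 보"]  # expression patterns, hyphens stripped

# Entries sorted by frequency, mirroring the frequency-ordered dictionary.
for pattern, n in pattern_frequencies(standard_corpus, patterns).most_common():
    print(pattern, n)
```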


Sentence-Chain Based Seq2seq Model for Corpus Expansion

  • Chung, Euisok; Park, Jeon Gue
    • ETRI Journal / Vol. 39, No. 4 / pp.455-466 / 2017
  • This study focuses on a method for sequential data augmentation to alleviate data sparseness problems. Specifically, we present corpus expansion techniques for enhancing the coverage of a language model. Recent recurrent neural network studies show that a seq2seq model can be applied to language generation; it has the ability to generate new sentences from given input sentences. We present a method of corpus expansion using a sentence-chain based seq2seq model. For training, sentence chains are used as triples: the first two sentences in a triple feed the encoder of the seq2seq model, while the last sentence becomes the target sequence for the decoder. Using only internal resources, evaluation results show an improvement of approximately 7.6% in relative perplexity over a baseline language model of Korean text. Additionally, compared with a previous study, the sentence-chain approach reduces the size of the training data by 38.4% while generating 1.4 times the number of n-grams, with superior performance for English text.
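
The triple construction is simple to sketch: slide a window of three consecutive sentences over a document, join the first two as the encoder input, and keep the third as the decoder target.

```python
def sentence_chain_triples(sentences):
    """Build seq2seq training pairs from a document's sentence chain:
    sentences i and i+1 form the encoder input, sentence i+2 the target."""
    pairs = []
    for i in range(len(sentences) - 2):
        source = sentences[i] + " " + sentences[i + 1]
        pairs.append((source, sentences[i + 2]))
    return pairs

doc = ["I went to the market.",
       "It was crowded with people.",
       "I bought some apples.",
       "Then I walked home."]
for src, tgt in sentence_chain_triples(doc):
    print(f"ENC: {src}\nDEC: {tgt}\n")
```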

Modality-Based Sentence-Final Intonation Prediction for Korean Conversational-Style Text-to-Speech Systems

  • Oh, Seung-Shin; Kim, Sang-Hun
    • ETRI Journal / Vol. 28, No. 6 / pp.807-810 / 2006
  • This letter presents a prediction model for sentence-final intonations in Korean conversational-style text-to-speech systems, in which we introduce the linguistic feature of 'modality' as a new parameter. Based on their function and meaning, we classify the tonal forms in speech data into tone types meaningful for speech synthesis and use the result of this classification to build our prediction model with a tree-structured classification algorithm. To show that modality is more effective for the prediction model than features such as sentence type or speech act, an experiment is performed on a test set of 970 utterances with a training set of 3,883 utterances. The results show that modality contributes more to the determination of sentence-final intonation than sentence type or speech act, and that prediction accuracy improves by up to 25% when the modality feature is introduced.
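
A hedged sketch of the prediction setup: a classification tree over categorical features that include modality, trained to predict a tone type. The feature values and the boundary-tone labels below are hypothetical, and scikit-learn's generic decision tree stands in for the paper's tree-structured classification algorithm.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeClassifier

# Hypothetical utterance records: (modality, sentence_type, speech_act),
# labeled with an observed sentence-final tone type.
X = [["request", "imperative", "directive"],
     ["question", "interrogative", "ask"],
     ["statement", "declarative", "inform"],
     ["question", "declarative", "confirm"]]
y = ["L%", "H%", "L%", "H%"]  # illustrative boundary-tone labels

model = make_pipeline(OneHotEncoder(handle_unknown="ignore"),
                      DecisionTreeClassifier())
model.fit(X, y)
print(model.predict([["question", "declarative", "ask"]]))
```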


언어기반의 인지시스템을 위한 시공간적 기초화 (Spatiotemporal Grounding for a Language Based Cognitive System)

  • 안현식
    • 제어로봇시스템학회논문지 / Vol. 15, No. 1 / pp.111-119 / 2009
  • For daily-life interaction with humans, robots need the capability to encode and store cognitive information and to retrieve it contextually. In this paper, spatiotemporal grounding of cognitive information for a language-based cognitive system is presented. The cognitive information of an event occurring at the robot is described as a sentence, stored in memory, and retrieved contextually. Each sentence is parsed, classified by its functional type, and analyzed for argument structure to connect it to cognitive information. With the proposed grounding, cognitive information is encoded into sentence form and stored in a sentence memory together with an object descriptor. Sentences are retrieved to answer human questions by searching temporal information in the sentence memory and performing spatial reasoning on schematic imagery. An experiment shows the feasibility and efficiency of the spatiotemporal grounding for an advanced service robot.
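
A minimal sketch of such a sentence memory, assuming each event is stored as a timestamped sentence plus an object descriptor, with keyword-and-time retrieval. The class and field names are ours, and the schematic-imagery reasoning step is omitted.

```python
from datetime import datetime

class SentenceMemory:
    """Stores event sentences with object descriptors; retrieval filters
    on temporal information before any spatial reasoning would run."""

    def __init__(self):
        self.entries = []

    def encode(self, sentence, objects):
        self.entries.append({"time": datetime.now(),
                             "sentence": sentence,
                             "objects": objects})

    def retrieve(self, keyword, since=None):
        return [e for e in self.entries
                if (keyword in e["sentence"] or keyword in e["objects"])
                and (since is None or e["time"] >= since)]

memory = SentenceMemory()
memory.encode("The red cup is on the table.", ["cup", "table"])
memory.encode("A person entered the room.", ["person", "room"])
print(memory.retrieve("cup"))
```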

비성도 검사 문형에 따른 경직형 뇌성마비 화자의 비성도 특성 (The Characteristics of Nasalance in Speakers with Spastic Cerebral Palsy according to the Types of Sentence used for Nasalance Test)

  • 남현욱; 유재연
    • 말소리와 음성과학 / Vol. 2, No. 1 / pp.121-125 / 2010
  • The purpose of this study was to compare the characteristics of nasalance in speakers with spastic cerebral palsy (CP) according to the type of sentence used for the nasalance test. Twenty-eight speakers with spastic CP participated in this study. The experiment analyzed nasalance for a prolonged vowel utterance and for the Sea sentence, the Zoo sentence, and the Mother sentence; the three sentences differ in their ratio of nasal consonants. The results show significant differences among the sentence types used for the nasalance test.
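
Nasalance itself is conventionally computed as the ratio of nasal acoustic energy to total (nasal plus oral) energy, expressed as a percentage. A one-function sketch with hypothetical per-frame microphone energies:

```python
def nasalance(nasal_energy, oral_energy):
    """Mean nasalance score in percent over an utterance's frames."""
    scores = [n / (n + o) * 100 for n, o in zip(nasal_energy, oral_energy)]
    return sum(scores) / len(scores)

# Hypothetical frame energies from the nasal and oral microphones.
print(round(nasalance([0.2, 0.5, 0.4], [0.8, 0.6, 0.7]), 1))
```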


SSF: Sentence Similar Function Based on word2vector Similar Elements

  • Yuan, Xinpan; Wang, Songlin; Wan, Lanjun; Zhang, Chengyuan
    • Journal of Information Processing Systems / Vol. 15, No. 6 / pp.1503-1516 / 2019
  • In this paper, to improve the accuracy of long-sentence similarity calculation, we propose a sentence similarity calculation method based on a system similarity function. The algorithm uses word2vector representations as the system elements for calculating sentence similarity. The higher accuracy of our algorithm derives from two characteristics: one is the negative effect of the penalty term, and the other is that the sentence similar function (SSF) based on word2vector similar elements does not satisfy the exchange rule, i.e., it is not symmetric. In later studies, we found that the time complexity of our algorithm depends on the process of calculating similar elements, so we build an index of potentially similar elements during word-vector training. Finally, the experimental results show that our algorithm has higher accuracy than the word mover's distance (WMD) and the shortest query time of the three SSF calculation methods.
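
A rough sketch of an asymmetric element-matching similarity in the spirit of SSF: each query word is matched to its most similar candidate word by cosine similarity, and weakly matched words incur a penalty. The penalty form and threshold are our assumptions, not the paper's exact formula; note that the function is deliberately not symmetric.

```python
import numpy as np

def ssf(query_vecs, cand_vecs, penalty=0.1, threshold=0.5):
    """Average best-match cosine similarity of query words against
    candidate words, penalizing query words with no good match."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    best = [max(cos(q, c) for c in cand_vecs) for q in query_vecs]
    unmatched = sum(1 for s in best if s < threshold)
    return sum(best) / len(best) - penalty * unmatched

rng = np.random.default_rng(0)
q = [rng.normal(size=50) for _ in range(4)]  # stand-ins for word2vec vectors
c = [rng.normal(size=50) for _ in range(6)]
print(ssf(q, c))  # in general, ssf(q, c) != ssf(c, q)
```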

Joint Hierarchical Semantic Clipping and Sentence Extraction for Document Summarization

  • Yan, Wanying; Guo, Junjun
    • Journal of Information Processing Systems / Vol. 16, No. 4 / pp.820-831 / 2020
  • Extractive document summarization aims to select a few sentences from a given document while preserving its main information, but current extractive methods do not address the sentence-information repetition problem, especially in news document summarization. In view of the importance and redundancy of news text information, we propose a neural extractive summarization approach with joint sentence semantic clipping and selection, which effectively mitigates repeated sentences in news summaries. Specifically, a hierarchical selective encoding network is constructed for both sentence-level and document-level representations, and data containing important information is extracted from the news text; a sentence extractor strategy is then adopted for joint scoring and redundant-information clipping. In this way, our model strikes a balance between important-information extraction and redundant-information filtering. Experimental results on both the CNN/Daily Mail dataset and a Court Public Opinion News dataset that we built show the effectiveness of our proposed approach in terms of ROUGE metrics, especially for redundant-information filtering.
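
The balance between importance and redundancy can be illustrated with a generic MMR-style selection loop (an intuition aid, not the paper's neural extractor): each step picks the sentence whose importance score, discounted by its maximum similarity to already-selected sentences, is highest.

```python
def select_sentences(scores, sim, k, lam=0.7):
    """Greedily pick k sentences, trading importance against redundancy."""
    selected, candidates = [], set(range(len(scores)))
    while candidates and len(selected) < k:
        def gain(i):
            overlap = max((sim[i][j] for j in selected), default=0.0)
            return lam * scores[i] - (1 - lam) * overlap
        best = max(candidates, key=gain)
        selected.append(best)
        candidates.remove(best)
    return selected

scores = [0.9, 0.85, 0.4, 0.7]   # hypothetical importance scores
sim = [[1.0, 0.9, 0.1, 0.2],     # hypothetical pairwise similarities
       [0.9, 1.0, 0.1, 0.3],
       [0.1, 0.1, 1.0, 0.2],
       [0.2, 0.3, 0.2, 1.0]]
print(select_sentences(scores, sim, k=2))  # [0, 3]: the near-duplicate is skipped
```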

Sentence model based subword embeddings for a dialog system

  • Chung, Euisok; Kim, Hyun Woo; Song, Hwa Jeon
    • ETRI Journal / Vol. 44, No. 4 / pp.599-612 / 2022
  • This study focuses on improving a word embedding model to enhance the performance of downstream tasks, such as those of dialog systems. To improve traditional word embedding models, such as skip-gram, it is critical to refine the word features and expand the context model. In this paper, we approach the word model from the perspective of subword embedding and attempt to extend the context model by integrating various sentence models. Our proposed sentence model is a subword-based skip-thought model that integrates self-attention and relative position encoding techniques. We also propose a clustering-based dialog model for downstream task verification and evaluate its relationship with the sentence-model-based subword embedding technique. The proposed subword embedding method produces better results than previous methods in evaluating word and sentence similarity. In addition, the downstream task verification, a clustering-based dialog system, demonstrates an improvement of up to 4.86% over the results of FastText in previous research.
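
The subword view of a word can be illustrated FastText-style: a word's vector is composed from the vectors of its character n-grams, so rare or unseen words still receive representations. The hashing scheme, table size, and dimension below are illustrative only.

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=5):
    """Character n-grams with boundary markers, as in FastText."""
    w = f"<{word}>"
    return [w[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

def word_vector(word, table, buckets=10000):
    """Average the n-gram vectors found by hashing into a fixed table.
    (Python's hash() varies across runs; a real system uses a stable hash.)"""
    idxs = [hash(g) % buckets for g in char_ngrams(word)]
    return table[idxs].mean(axis=0)

table = np.random.default_rng(0).normal(size=(10000, 50))
print(word_vector("embedding", table).shape)  # (50,) even for unseen words
```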