• Title/Summary/Keyword: 문장 자동생성 (automatic sentence generation)

Search Results: 194

Investigating an Automatic Method in Summarizing a Video Speech Using User-Assigned Tags (이용자 태그를 활용한 비디오 스피치 요약의 자동 생성 연구)

  • Kim, Hyun-Hee
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.46 no.1
    • /
    • pp.163-181
    • /
    • 2012
  • We investigated how useful video tags were in summarizing video speech and how valuable positional information was for speech summarization. Furthermore, we examined the similarity among sentences selected for a speech summary in order to reduce its redundancy. Based on these analysis results, we then designed and evaluated a method for automatically summarizing speech transcripts using a modified Maximum Marginal Relevance model. This model not only reduced redundancy but also enabled the use of social tags, title words, and sentence positional information. Finally, we compared the proposed method to the Extractor system, in which key sentences of a video speech are chosen using the frequency and location information of speech content words. Results showed that the precision of the proposed method was higher than that of the Extractor system, while the difference in recall rates was not statistically significant.
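
As a concrete illustration of the selection criterion described in this abstract, here is a minimal Python sketch of MMR-style sentence extraction. The bag-of-words cosine similarity, the small positional boost, and the lambda weighting are illustrative assumptions, not the paper's exact modified model; in the paper's setting, `query_terms` would hold the social tags and title words.

```python
# Minimal sketch of Maximum Marginal Relevance (MMR) summary extraction.
# Assumptions: cosine over bag-of-words vectors, a simple positional prior,
# and a fixed lambda trade-off between relevance and redundancy.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    common = set(a) & set(b)
    num = sum(a[t] * b[t] for t in common)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def mmr_summarize(sentences, query_terms, lam=0.7, k=5):
    """Select k sentences relevant to the query (tags + title words)
    yet dissimilar to the already-selected sentences."""
    vecs = [Counter(s.lower().split()) for s in sentences]
    qvec = Counter(t.lower() for t in query_terms)
    n = len(sentences)
    selected = []
    while len(selected) < min(k, n):
        best, best_score = None, float("-inf")
        for i in range(n):
            if i in selected:
                continue
            relevance = cosine(vecs[i], qvec)
            relevance += 0.1 * (1.0 - i / n)  # earlier sentences get a small boost
            redundancy = max((cosine(vecs[i], vecs[j]) for j in selected),
                             default=0.0)
            score = lam * relevance - (1 - lam) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return [sentences[i] for i in sorted(selected)]

sents = ["Tags help summarize video speech.",
         "Position of a sentence matters.",
         "Tags help summarize video speech again."]
print(mmr_summarize(sents, ["tags", "summary", "speech"], k=2))
```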

Korean Probabilistic Dependency Grammar Induction by morpheme (형태소 단위의 한국어 확률 의존문법 학습)

  • Choi, Seon-Hwa;Park, Hyuk-Ro
    • The KIPS Transactions:PartB
    • /
    • v.9B no.6
    • /
    • pp.791-798
    • /
    • 2002
  • In this thesis, we present a new method for inducing a probabilistic dependency grammar (PDG) from a text corpus. As words in Korean are composed of more basic morphemes, various dependency relations exist within a word. If the induction process does not take these in-word dependency relations into account, the accuracy of the resulting grammar may be poor. In comparison with previous PDG induction methods, the main difference of the proposed method is that it takes in-word dependency relations into account as well as inter-word dependency relations. To assess the performance of the proposed method, we conducted an experiment using a manually tagged corpus of 25,000 sentences compiled by the Korea Advanced Institute of Science and Technology (KAIST). The grammar induction produced 2,349 dependency rules. A parser using these rules showed 69.77% accuracy, measured as the number of correct dependency relations relative to the total number of dependency relations in the best-1 parse trees of the sample sentences. The result shows that taking in-word dependency relations into account during grammar induction yields a more accurate dependency grammar.
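
To make the induction idea concrete, the toy sketch below estimates dependency-rule probabilities by relative frequency from morpheme-level dependency arcs. The tag set and corpus layout are assumptions; the paper's method additionally distinguishes in-word from inter-word relations and evaluates the induced rules with a parser.

```python
# Toy sketch of probabilistic dependency-grammar induction at the morpheme
# level: count (head_tag, dependent_tag) arcs and normalize per head tag.
from collections import Counter

def induce_pdg(treebank):
    """treebank: iterable of sentences; each sentence is a list of
    (dependent_tag, head_tag) arcs over morphemes."""
    rule_counts = Counter()
    head_counts = Counter()
    for sentence in treebank:
        for dep_tag, head_tag in sentence:
            rule_counts[(head_tag, dep_tag)] += 1
            head_counts[head_tag] += 1
    # relative-frequency estimate P(dependent_tag | head_tag)
    return {rule: c / head_counts[rule[0]] for rule, c in rule_counts.items()}

# toy corpus with Korean morpheme tags (NNG noun, JKS subject marker,
# VV verb, EF sentence-final ending)
corpus = [[("JKS", "NNG"), ("NNG", "VV"), ("EF", "VV")],
          [("NNG", "VV"), ("EF", "VV")]]
print(induce_pdg(corpus))
```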

Performance Evaluation of Video Recommendation System with Rich Metadata (풍부한 메타데이터를 가진 동영상 추천 시스템의 성능 평가)

  • Min Hwa Cho;Da Yeon Kim;Hwa Rang Lee;Ha Neul Oh;Sun Young Lee;In Hwan Jung;Jae Moon Lee;Kitae Hwang
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.23 no.2
    • /
    • pp.29-35
    • /
    • 2023
  • This paper enables sentence-based video search by extending previous research that automatically generated rich metadata from videos and searched videos by keywords. For search by sentence, each sentence is morphologically analyzed, keywords are extracted and weighted, and videos are recommended by applying the ranking algorithm developed in the previous research. Evaluating search performance requires a sufficient number of videos and user experiences; since these were not available, three indirect evaluation methods were used: evaluation of overall user satisfaction, comparison of recommendation scores with user satisfaction, and evaluation of user satisfaction by video category. The evaluation showed that the rich metadata construction and the video recommendation implementation in this paper give users high search satisfaction.
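
The ranking step can be pictured with the hedged sketch below, which scores videos by TF-IDF-weighted overlap between the query-sentence keywords and each video's metadata keywords. The weighting scheme and the metadata layout are assumptions rather than the paper's exact algorithm.

```python
# Sketch of keyword-weighted video ranking: extract keywords from the query
# sentence (morphological analysis omitted), weight them TF-IDF style over
# the video metadata, and rank videos by summed score.
import math
from collections import Counter

def rank_videos(query_keywords, videos):
    """videos: dict of video_id -> list of metadata keywords."""
    n = len(videos)
    df = Counter()                      # document frequency per keyword
    for kws in videos.values():
        df.update(set(kws))
    scores = {}
    for vid, kws in videos.items():
        tf = Counter(kws)
        scores[vid] = sum(tf[k] * math.log(n / df[k])
                          for k in query_keywords if k in tf)
    return sorted(scores.items(), key=lambda x: x[1], reverse=True)

videos = {"v1": ["cat", "kitchen", "cooking"],
          "v2": ["dog", "park", "running"]}
print(rank_videos(["cat", "cooking"], videos))
```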

Automated Story Generation with Image Captions and Recursive Calls (이미지 캡션 및 재귀호출을 통한 스토리 생성 방법)

  • Isle Jeon;Dongha Jo;Mikyeong Moon
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.24 no.1
    • /
    • pp.42-50
    • /
    • 2023
  • Technological development has brought digital innovation across the media industry, including production and editing techniques, and the OTT and streaming era has diversified how consumers view content. The convergence of big data and deep learning networks has enabled automatic text generation in formats such as news articles, novels, and scripts, but few studies have reflected the author's intention or generated contextually smooth stories. In this paper, we describe the flow of pictures in a storyboard using image caption generation techniques and automatically generate story-tailored scenarios through a language model. Using CNN- and attention-based image captioning, we generate sentences describing the pictures in the storyboard and feed them into KoGPT-2, a Korean natural language processing model, to automatically generate scenarios that meet the planning intention. In this way, scenarios customized to the author's intention and story can be produced in large quantities, easing the burden of content creation, and artificial intelligence can participate in the overall digital content production process to advance media intelligence.
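
The scenario-generation step can be sketched as follows: a caption produced by the (omitted) CNN-plus-attention captioner is fed to KoGPT-2 and extended recursively by re-feeding the growing story. A minimal sketch, assuming the public skt/kogpt2-base-v2 release for the model id and tokenizer settings; the sampling parameters are illustrative.

```python
# Hedged sketch of recursive story extension with KoGPT-2. The captioning
# model is omitted; a caption string stands in for its output.
from transformers import PreTrainedTokenizerFast, GPT2LMHeadModel

tokenizer = PreTrainedTokenizerFast.from_pretrained(
    "skt/kogpt2-base-v2", bos_token="</s>", eos_token="</s>",
    unk_token="<unk>", pad_token="<pad>", mask_token="<mask>")
model = GPT2LMHeadModel.from_pretrained("skt/kogpt2-base-v2")

def extend_story(caption: str, rounds: int = 3, step: int = 48) -> str:
    """Recursively append model continuations, re-feeding the growing story."""
    story = caption
    for _ in range(rounds):
        ids = tokenizer.encode(story, return_tensors="pt")
        out = model.generate(ids, max_length=ids.shape[1] + step,
                             do_sample=True, top_p=0.92,
                             pad_token_id=tokenizer.pad_token_id)
        story = tokenizer.decode(out[0], skip_special_tokens=True)
    return story

print(extend_story("한 소년이 바닷가에서 오래된 지도를 발견한다."))
```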

A Model to Automatically Generate Non-verbal Expression Information for Korean Utterance Sentence (한국어 발화 문장에 대한 비언어 표현 정보를 자동으로 생성하는 모델)

  • Jaeyoon Kim;Jinyea Jang;San Kim;Minyoung Jung;Hyunwook Kang;Saim Shin
    • Annual Conference on Human and Language Technology
    • /
    • 2023.10a
    • /
    • pp.91-94
    • /
    • 2023
  • To develop an artificial intelligence agent capable of natural interaction, non-verbal expressions must be considered in addition to verbal ones. This paper introduces a study on generating motion, a form of non-verbal expression, from Korean utterance sentences. We built a dataset from YouTube videos and ran experiments with a model implemented using T2M-GPT, an existing text-to-motion model, together with the language encoder of VL-KE-T5, which was jointly trained on heterogeneous modality data. In the experiments, the motion representations generated for Korean utterance texts achieved an FID score of 0.11, demonstrating the feasibility of generating non-verbal expression information from Korean utterances.
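
For reference, the FID score cited above is the standard Fréchet distance between Gaussian fits of the real and generated feature distributions. The sketch below computes it; random arrays stand in for features that would come from a pretrained motion encoder.

```python
# Standard Fréchet Inception Distance (FID) over feature vectors:
# FID = ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 * sqrtm(C1 @ C2))
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real: np.ndarray, feats_gen: np.ndarray) -> float:
    mu1, mu2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_gen, rowvar=False)
    covmean = sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):   # numerical noise can yield tiny imaginary parts
        covmean = covmean.real
    return float(((mu1 - mu2) ** 2).sum() + np.trace(c1 + c2 - 2 * covmean))

real = np.random.randn(256, 64)    # stand-ins for encoded motion features
gen = np.random.randn(256, 64)
print(fid(real, gen))
```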


Statistical Analysis of Korean Phonological Variations Using a Grapheme-to-phoneme System (발음열 자동 생성기를 이용한 한국어 음운 변화 현상의 통계적 분석)

  • 이경님;정민화
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.7
    • /
    • pp.656-664
    • /
    • 2002
  • We present a statistical analysis of Korean phonological variations using a grapheme-to-phoneme (G2P) system. The G2P system used for the experiments generates pronunciation variants by applying rules modeling obligatory and optional phonemic changes and allophonic changes. These rules are derived from morphophonological analysis and the government's standard pronunciation rules. The G2P system is optimized for continuous speech recognition by generating phonetic transcriptions for training and by constructing a pronunciation dictionary for recognition. In this paper, we describe Korean phonological variations by analyzing the statistics of phonemic change rule applications over the 60,000 sentences in the Samsung PBS Speech DB. Our results show that the most frequent obligatory phonemic variations are, in order, liaison, tensification, aspirationalization, and nasalization of obstruents, and that the most frequent optional phonemic variations are, in order, initial consonant h-deletion, insertion of a final consonant with the same place of articulation as the following consonant, and deletion of a final consonant with the same place of articulation as the following consonant. These statistics can be used to improve the performance of speech recognition systems.
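
A rough sketch of this kind of analysis: run a rule-based G2P over a corpus and tally how often each phonological rule fires. The jamo-level patterns below are toy stand-ins; a real system encodes the standard pronunciation rules (liaison, tensification, nasalization, and so on) in full.

```python
# Toy sketch of rule-application statistics in a rule-based G2P system.
from collections import Counter

RULES = {
    "nasalization": ("ㄱㄴ", "ㅇㄴ"),   # e.g. 국물 -> [궁물]
    "liaison": ("ㄱㅇ", "ㄱ"),          # simplified placeholder pattern
}

def apply_rules(jamo_string: str, stats: Counter) -> str:
    out = jamo_string
    for name, (pattern, replacement) in RULES.items():
        hits = out.count(pattern)
        if hits:
            stats[name] += hits
            out = out.replace(pattern, replacement)
    return out

stats = Counter()
corpus = ["ㄱㅜㄱㄴㅜㄹ", "ㅂㅏㄱㅇㅔ"]   # toy jamo-level transcriptions
for sent in corpus:
    apply_rules(sent, stats)
print(stats.most_common())              # rule-frequency statistics
```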

Automatic Speech Recognition Research at Fujitsu (후지쯔에 있어서의 음성 자동인식의 현상과 장래)

  • Nara, Yasuhiro;Kimura, Shinta;Loken-Kim, K.H.
    • The Journal of the Acoustical Society of Korea
    • /
    • v.10 no.1
    • /
    • pp.82-91
    • /
    • 1991
  • The history of automatic speech recognition research and the current and future speech products at Fujitsu are introduced here. Speech recognition research at Fujitsu started in 1970. Our research efforts have resulted in the production of a speaker-dependent 12,000-word discrete/connected word recognizer (F2360) and a speaker-independent 17-word discrete word recognizer (F2355L/S). Currently, we are working on a larger-vocabulary speech recognizer in which an input utterance is matched against networks representing possible phonemic variations. Its application to text input is also discussed.


Probing Sentence Embeddings in L2 Learners' LSTM Neural Language Models Using Adaptation Learning

  • Kim, Euhee
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.3
    • /
    • pp.13-23
    • /
    • 2022
  • In this study, we leveraged a probing method to evaluate how a pre-trained L2 LSTM language model represents sentences with relative and coordinate clauses. The probing experiment employed adapted models based on the pre-trained L2 language models to trace the syntactic properties of sentence embedding vector representations. The probing dataset was automatically generated using several templates covering different sentence structures. To classify the syntactic properties of sentences for each probing task, we measured the adaptation effects of the language models using syntactic priming. We performed linear mixed-effects model analyses to statistically relate the adaptation effects and to reveal how the L2 language models represent syntactic features of English sentences. When the L2 language models were compared with the baseline L1 Gulordava language models, analogous results were found for each probing task. In addition, we confirmed that the L2 language models hierarchically encode the syntactic features of relative and coordinate clauses in their sentence embedding representations.
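
The adaptation-effect measurement can be sketched as follows: score a target sentence's loss before and after briefly fine-tuning the model on structurally similar prime sentences, and take the loss reduction as the priming (adaptation) effect. GPT-2 stands in here only because the paper's L2 LSTM checkpoint is not assumed to be available; the probing logic is the same.

```python
# Sketch of adaptation learning as a probe: the loss drop on a target
# sentence after adapting on primes measures syntactic priming.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def sentence_loss(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()

def adaptation_effect(primes, target, lr=1e-4, steps=1):
    before = sentence_loss(target)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        for p in primes:
            ids = tok(p, return_tensors="pt").input_ids
            loss = model(ids, labels=ids).loss
            opt.zero_grad(); loss.backward(); opt.step()
    return before - sentence_loss(target)   # positive = adaptation occurred

primes = ["The boy who the girl saw smiled.",
          "The dog that the cat chased barked."]
print(adaptation_effect(primes, "The man who the woman met laughed."))
```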

High-Quality Multimodal Dataset Construction Methodology for ChatGPT-Based Korean Vision-Language Pre-training (ChatGPT 기반 한국어 Vision-Language Pre-training을 위한 고품질 멀티모달 데이터셋 구축 방법론)

  • Jin Seong;Seung-heon Han;Jong-hun Shin;Soo-jong Lim;Oh-woog Kwon
    • Annual Conference on Human and Language Technology
    • /
    • 2023.10a
    • /
    • pp.603-608
    • /
    • 2023
  • This study examines the need to build a large-scale vision-language multimodal dataset for training Korean Vision-Language Pre-training models. Korean vision-language multimodal datasets are currently scarce, and high-quality data are hard to obtain. We therefore propose a dataset construction methodology that translates foreign (English) vision-language data into Korean using machine translation and then applies generative AI on top of it. Among various caption generation methods, we propose a new method that uses ChatGPT to automatically generate natural, high-quality Korean captions. This guarantees better caption quality than conventional machine translation, and we ensemble multiple translation results to build the multimodal dataset effectively. In addition, we introduce Caption Projection Consistency, a semantic-similarity-based evaluation measure, compare English-to-Korean caption projection performance across translation systems, and present criteria for evaluating it. Finally, we present a new methodology for building a Korean image-text multimodal dataset with ChatGPT and show that it outperforms representative machine translators in English-to-Korean caption projection. Our work thus shows a way to automatically build scarce high-quality Korean datasets at scale, which we expect to contribute to improving the performance of deep-learning-based Korean Vision-Language Pre-training models.
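
One plausible reading of the Caption Projection Consistency measure is cross-lingual semantic similarity between the source caption and its projected Korean caption; the sketch below implements that reading with a multilingual sentence encoder. The checkpoint name is an assumption, and the paper's exact metric may differ.

```python
# Hedged sketch: score an English-to-Korean caption projection by cosine
# similarity of multilingual sentence embeddings.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def projection_consistency(caption_en: str, caption_ko: str) -> float:
    emb = encoder.encode([caption_en, caption_ko], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()

print(projection_consistency(
    "A dog is running on the beach.",
    "개 한 마리가 해변에서 달리고 있다."))
```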


Korean-Hanja Translation System based on Semantic Processing (의미처리 기반의 한글-한자 변환 시스템)

  • Kim, Hong-Soon;Sin, Joon-Choul;Ok, Cheol-Young
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2011.04a
    • /
    • pp.398-401
    • /
    • 2011
  • In word processors, converting Hangul words that have Hanja forms is done by the user at the syllable or word level, which is time-consuming and inefficient. This paper proposes a system that automatically converts Hangul to contextually appropriate Hanja through semantic processing of Korean sentences. Context-aware Hangul-Hanja conversion first requires accurate morphological analysis and homograph disambiguation. To this end, we implemented a hidden-Markov-model-based system that tags morphemes and homographs simultaneously. The proposed system extracted unigrams and bigrams from about 11 million eojeols (word phrases) of the morpho-semantically tagged Sejong corpus, building an eojeol emission probability dictionary from the unigrams and a transition probability dictionary from the bigrams. After part-of-speech and homograph tagging, nouns are converted to the Hanja forms registered in the Standard Korean Language Dictionary. To verify the performance of the implemented system, we tested on held-out sentence-level data drawn from the entire Sejong corpus; the system achieved 90.35% precision in Hanja conversion for homographs that have Hanja forms.
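
The tagging-then-conversion pipeline can be pictured with the toy sketch below: Viterbi decoding over an HMM whose emission (unigram) and transition (bigram) tables would be estimated from the Sejong corpus, followed by dictionary lookup of Hanja for the disambiguated noun senses. All probabilities, tags, and dictionary entries here are illustrative, not the paper's actual resources.

```python
# Toy sketch: HMM Viterbi tagging of morphemes/homographs, then Hanja lookup.
import math

def viterbi(words, tags, emit, trans, start):
    """emit[(tag, word)], trans[(prev_tag, tag)], start[tag] are probabilities."""
    V = [{t: (math.log(start.get(t, 1e-12)) +
              math.log(emit.get((t, words[0]), 1e-12)), [t]) for t in tags}]
    for w in words[1:]:
        row = {}
        for t in tags:
            best_prev = max(V[-1], key=lambda p: V[-1][p][0] +
                            math.log(trans.get((p, t), 1e-12)))
            score = (V[-1][best_prev][0] +
                     math.log(trans.get((best_prev, t), 1e-12)) +
                     math.log(emit.get((t, w), 1e-12)))
            row[t] = (score, V[-1][best_prev][1] + [t])
        V.append(row)
    return max(V[-1].values())[1]

# illustrative sense-tagged Hanja dictionary (이성: reason vs. opposite sex)
HANJA = {("이성", "NNG_01"): "理性", ("이성", "NNG_02"): "異性"}

def to_hanja(word, sense_tag):
    return HANJA.get((word, sense_tag), word)

tags = ["NNG_01", "JKS"]
emit = {("NNG_01", "이성"): 0.9, ("JKS", "이"): 0.8}
trans = {("NNG_01", "JKS"): 0.7}
start = {"NNG_01": 0.6}
print(viterbi(["이성", "이"], tags, emit, trans, start))  # ['NNG_01', 'JKS']
print(to_hanja("이성", "NNG_01"))                          # 理性
```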