• Title/Summary/Keyword: Natural Language Generation

u-SPACE: ubiquitous Smart Parenting And Customized Education (u-SPACE: 육아 보조 및 맞춤 교육을 위한 유비쿼터스 시스템)

  • Min, Hye-Jin;Park, Doo-Jin;Chang, Eun-Young;Lee, Ho-Joon;Park, Jong-C.
    • 한국HCI학회:학술대회논문집 / 2006.02a / pp.94-102 / 2006
  • As parents spend more time on social activities, children are spending more time at home alone. There is therefore a need to protect children from the indoor hazards to which they are easily exposed, without greatly restricting their independence, and to provide appropriate guidance according to their psychological and emotional state. This study protects children from physical hazards using RFID technology and, using natural language processing, provides multimedia content such as music and animation matched to the child's psychological and emotional state. It also helps children handle their own affairs by providing information such as schedule management, which requires continuous attention, and guidance on using home appliances in daily life. We design a virtual home and present simulation results of these services, centered on realizable scenarios.
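
The abstract above describes a rule-based service flow: RFID readings trigger hazard warnings, and the child's detected emotional state drives the choice of multimedia content. A minimal Python sketch of that flow is shown below; the zone names, emotion labels, and content mappings are hypothetical stand-ins invented for illustration, not the paper's actual design.

```python
# Minimal sketch of the u-SPACE service flow described in the abstract.
# Zone names, emotion labels, and content mappings are hypothetical.

HAZARD_ZONES = {"kitchen_stove", "balcony", "medicine_cabinet"}

CONTENT_BY_EMOTION = {
    "sad":   {"music": "calm_piano.mp3", "animation": "cheerful_story.mp4"},
    "angry": {"music": "soft_lullaby.mp3", "animation": "breathing_game.mp4"},
    "happy": {"music": "upbeat_song.mp3", "animation": "dance_along.mp4"},
}

def on_rfid_event(child_id: str, zone: str) -> str:
    """React to an RFID reading that places a child in a zone."""
    if zone in HAZARD_ZONES:
        return f"ALERT: guide {child_id} away from hazardous zone '{zone}'"
    return f"OK: {child_id} is in safe zone '{zone}'"

def recommend_content(emotion: str) -> dict:
    """Pick multimedia content matching the child's detected emotional state."""
    return CONTENT_BY_EMOTION.get(emotion, {"music": "neutral_bgm.mp3",
                                            "animation": "daily_tips.mp4"})

if __name__ == "__main__":
    print(on_rfid_event("child01", "kitchen_stove"))
    print(recommend_content("sad"))
```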

A Study of Designing the Automatic Information Retrieval System based on Natural Language (자연어를 이용한 자동정보검색시스템 구축에 관한 연구)

  • Seo, Hwi
    • Journal of the Korean Society for Library and Information Science / v.35 no.4 / pp.141-160 / 2001
  • This study develops a new system for conducting information retrieval automatically. The system is programmed in Delphi 4.0 (Pascal) and consists of automatic indexing, a clustering technique, the establishment and expression of hierarchical term relations, and an automatic retrieval technique. The browser system can thus automatically control all the processes of information searching, such as the representation, generation, and expansion of queries, the construction of a search strategy, and feedback searching.
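
The entry describes automatic indexing, term-hierarchy relations, and query generation/expansion. The original system was written in Delphi 4.0; the short Python sketch below only illustrates the general idea of an inverted index combined with hierarchy-based query expansion, with all documents and terms invented for the example.

```python
# Sketch of automatic retrieval with term-hierarchy query expansion.
# The original system was implemented in Delphi; this Python version and
# all sample data are hypothetical illustrations of the described idea.
from collections import defaultdict

docs = {
    1: "natural language generation for dialogue systems",
    2: "statistical language models and text generation",
    3: "image retrieval with deep features",
}

# Narrower terms for each broader term (a tiny term hierarchy).
hierarchy = {"language": ["generation", "dialogue"], "retrieval": ["indexing"]}

# Automatic indexing: build an inverted index from tokens to document ids.
index = defaultdict(set)
for doc_id, text in docs.items():
    for token in text.split():
        index[token].add(doc_id)

def expand(query_terms):
    """Extend the query with narrower terms from the hierarchy."""
    expanded = set(query_terms)
    for term in query_terms:
        expanded.update(hierarchy.get(term, []))
    return expanded

def search(query):
    terms = expand(query.lower().split())
    # Rank documents by how many (expanded) query terms they contain.
    scores = defaultdict(int)
    for term in terms:
        for doc_id in index.get(term, ()):
            scores[doc_id] += 1
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(search("language"))  # documents 1 and 2 match via expansion
```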

Spelling Correction in Korean Using the `Eojeol` generation Dictionary (어절 생성 사전을 이용한 한국어 철자 교정)

  • Lee, Yeong-Sin;Park, Yeong-Ja;Song, Man-Seok
    • The KIPS Transactions: Part B / v.8B no.1 / pp.98-104 / 2001
  • This paper proposes Korean spelling correction using an 'eojeol' (word-phrase) generation dictionary. The eojeol generation dictionary is searched based on an edit-distance computation between two strings that takes syllable characteristics into account, and it generates candidate eojeols for an erroneous eojeol without relying on information specific to the language or the error type. It also produces the possible morphological analyses of the corrected eojeols, so that linguistic information can be applied when ranking the candidates without performing morphological analysis again. The proposed spelling correction consists of two steps. First, possible error-corrected stems are computed from the erroneous eojeol. Second, the eojeol generation dictionary is searched with the computed stems to generate candidate original eojeols. The corrected results are then ranked using part-of-speech tagging and co-occurrence information. Evaluating the automatic spelling correction performance of the system on 3,000 eojeols, 93% were corrected correctly at the word level.
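
A minimal Python sketch of the two-step procedure described in the abstract is given below: candidates within a small edit distance are generated from a (here hypothetical) generation dictionary and then ranked. Frequency is used as a stand-in ranking signal; the paper itself ranks candidates with part-of-speech tagging and co-occurrence information, and its edit distance additionally considers Korean syllable characteristics.

```python
# Sketch of the two-step correction described above: generate candidate
# eojeols within a small edit distance, then rank them. The dictionary and
# the frequency-based ranking are hypothetical stand-ins.

def edit_distance(a: str, b: str) -> int:
    """Plain Levenshtein distance over syllable blocks (Korean characters)."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # delete
                                     dp[j - 1] + 1,      # insert
                                     prev + (ca != cb))  # substitute
    return dp[-1]

# Hypothetical generation dictionary: candidate eojeol -> corpus frequency.
generation_dict = {"하늘이": 120, "하늘을": 95, "하늘에": 80, "마늘이": 15}

def correct(error_eojeol: str, max_dist: int = 1):
    # Steps 1-2: search the dictionary for candidates within the edit distance.
    candidates = [(w, f) for w, f in generation_dict.items()
                  if edit_distance(error_eojeol, w) <= max_dist]
    # Ranking: here simply by frequency, as a placeholder for the paper's
    # POS-tagging and co-occurrence scoring.
    return sorted(candidates, key=lambda wf: wf[1], reverse=True)

print(correct("하능이"))  # [('하늘이', 120)]
```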

Generative Linguistic Steganography: A Comprehensive Review

  • Xiang, Lingyun;Wang, Rong;Yang, Zhongliang;Liu, Yuling
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.3 / pp.986-1005 / 2022
  • Text steganography is one of the most prominent and promising research interests in the information security field. With the unprecedented success of neural networks and natural language processing (NLP), recent years have seen a surge of research on generative linguistic steganography (GLS). This paper provides a thorough and comprehensive review that summarizes the existing key contributions, creates a novel taxonomy for GLS according to the NLP technique and the steganographic encoding algorithm, and then summarizes the characteristics of generative linguistic steganographic methods in order to analyze the relationships and differences among the various types. The paper also introduces and analyzes several evaluation metrics for assessing the performance of GLS from diverse perspectives. Finally, it outlines future research directions intended to guide follow-up research and innovation.
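
As a concrete illustration of the "steganographic encoding algorithm" axis of the taxonomy surveyed above, the toy Python sketch below implements block (bin) encoding: at each generation step the candidate next words are split into 2^b groups and the choice is made by the next b secret bits. The tiny per-step candidate lists are a hypothetical stand-in for a neural language model's predictions.

```python
# Toy block-encoding GLS: hide bits in the choice among candidate words.
# The fixed per-step candidate lists stand in for a language model's
# top-k predictions; real GLS systems derive them from a neural LM.

STEP_CANDIDATES = [
    ["cat", "dog", "man", "car"],           # candidates offered at step 1
    ["sat", "ran", "slept", "ate"],         # candidates offered at step 2
    ["quietly", "down", "there", "alone"],  # candidates offered at step 3
]
BITS_PER_STEP = 2  # 2^2 = 4 candidates -> 2 secret bits per word

def embed(secret_bits: str, start: str = "the") -> str:
    """Generate a cover sentence whose word choices encode secret_bits."""
    words, pos = [start], 0
    for candidates in STEP_CANDIDATES:
        chunk = secret_bits[pos:pos + BITS_PER_STEP].ljust(BITS_PER_STEP, "0")
        words.append(candidates[int(chunk, 2)])  # bin index = next secret bits
        pos += BITS_PER_STEP
    return " ".join(words)

def extract(cover: str) -> str:
    """Recover the secret bits from the chosen words' positions."""
    chosen = cover.split()[1:]  # skip the fixed start word
    return "".join(format(c.index(w), f"0{BITS_PER_STEP}b")
                   for c, w in zip(STEP_CANDIDATES, chosen))

stego = embed("011001")
print(stego)            # "the dog slept down"
print(extract(stego))   # "011001"
```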

Characteristic Analysis of Housing Design of Michael Graves - As the generation passes - (마이클 그레이브스의 주택 디자인 특성 - 시대적 흐름 배경중심으로 -)

  • Shin Kyung-Joo;Jeon Lea-Jin;Noh Sang-Wan
    • Korean Institute of Interior Design Journal / v.14 no.2 s.49 / pp.3-11 / 2005
  • The subject of this paper is to identify the characteristics of Michael Graves' housing design. I collected information about his designs and learned his design language through his works from different periods. For analysis I chose five actual houses built since 1960, when he started his career. Through analysis of the exterior views, floor plans, and overall arrangement photos, I reached the following conclusions. Over his whole working career (1960 to present), his design style has gone through distinctive changes that can be divided into three periods: the 1960s to early 1970s, the mid and late 1970s, and the 1980s to the present. In the 1960s to early 1970s, his works exhibited transparency, openness, consistency, complication and contrast in space, abstraction, white color, and metaphor. The mid and late 1970s showed white-colored figures combined with geometrical structure and post-modernist features such as ornamentation, natural elements, and metaphor. From the 1980s to the present, his works have exhibited ornamentation, natural features, historical references, interconnection, variety, symbolism, and tradition.

A Study on the Development Methodology for User-Friendly Interactive Chatbot (사용자 친화적인 대화형 챗봇 구축을 위한 개발방법론에 관한 연구)

  • Hyun, Young Geun;Lim, Jung Teak;Han, Jeong Hyeon;Chae, Uri;Lee, Gi-Hyun;Ko, Jin Deuk;Cho, Young Hee;Lee, Joo Yeoun
    • Journal of Digital Convergence / v.18 no.11 / pp.215-226 / 2020
  • The chatbot is emerging as an important interface for business. This change is due to the continued development of chatbot-related research, from NLP to NLU and NLG. However, methodological research on eliciting domain knowledge and developing it into a user-friendly conversational interface remains weak in the chatbot development process. In this paper, in order to present process criteria for chatbot development, we applied the methodology presented in a previous paper to an actual project and improved the development methodology. As a result, the productivity of the test phase, the most important step, was improved by 33.3%, and the number of iterations was reduced to 37.5%. Based on these results, the "3 Phase and 17 Tasks Development Methodology" is presented, which is expected to dramatically reduce trial and error in chatbot development.

Automatic Electronic Medical Record Generation System using Speech Recognition and Natural Language Processing Deep Learning (음성인식과 자연어 처리 딥러닝을 통한 전자의무기록자동 생성 시스템)

  • Hyeon-kon Son;Gi-hwan Ryu
    • The Journal of the Convergence on Culture Technology / v.9 no.3 / pp.731-736 / 2023
  • Recently, the medical field has been adopting mandatory Electronic Medical Record (EMR) and Electronic Health Record (EHR) systems that computerize and manage medical records, and distributing them throughout the medical industry so that patients' past records can be used in further care. However, the conversations between medical professionals and patients that occur during general consultations and counseling sessions are not separately recorded or stored, so this additional, important patient information cannot be utilized efficiently. We therefore propose an electronic medical record system that uses speech recognition and natural language processing deep learning to store conversations between medical professionals and patients in text form, automatically extracts and summarizes important consultation information, and generates electronic medical records. The system acquires text through recognition of the consultation between the medical professional and the patient. The acquired text is divided into sentences, and the importance of the keywords contained in those sentences is calculated. Based on the calculated importance, the system ranks the sentences and summarizes them to create the final electronic medical record data. Quantitative analysis verifies that the proposed system performs well.
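
The summarization stage described in the abstract (sentence splitting, keyword-importance scoring, sentence ranking) can be sketched in a few lines of Python. The transcript, the frequency-based importance measure, and the sentence scoring below are hypothetical simplifications; speech recognition is assumed to have produced the text already.

```python
# Sketch of the summarization stage described above: split the transcribed
# consultation into sentences, score keywords, rank sentences, keep the top k.
import re
from collections import Counter

transcript = (
    "환자분 어디가 불편하신가요. 이틀 전부터 기침과 미열이 있습니다. "
    "해열제를 복용하셨나요. 네 해열제를 먹어도 미열이 계속됩니다. "
    "기침과 미열이 지속되니 흉부 엑스레이를 찍어 보겠습니다."
)

def summarize(text: str, top_k: int = 2) -> list[str]:
    sentences = [s.strip() for s in re.split(r"[.?!]\s*", text) if s.strip()]
    words = [w for s in sentences for w in s.split()]
    importance = Counter(words)                      # keyword importance
    scored = [(sum(importance[w] for w in s.split()) / len(s.split()), s)
              for s in sentences]                    # average keyword score
    scored.sort(reverse=True)
    return [s for _, s in scored[:top_k]]            # ranked summary sentences

print(summarize(transcript))
```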

Multi-Document Summarization Method Based on Semantic Relationship using VAE (VAE를 이용한 의미적 연결 관계 기반 다중 문서 요약 기법)

  • Baek, Su-Jin
    • Journal of Digital Convergence / v.15 no.12 / pp.341-347 / 2017
  • As the amount of document data increases, users need summarized information to understand documents. However, existing document summarization methods rely on overly simple statistics, so research on multi-document summarization that handles sentence ambiguity and generates meaningful sentences is insufficient. In this paper, we investigate semantic connections and a preprocessing step that filters out unnecessary information. Based on lexical semantic pattern information, we propose a multi-document summarization method that strengthens the semantic connectivity between sentences using a VAE. Using sentence word vectors, sentences are reconstructed after learning from the compressed information and the attribute discriminator generated as latent variables, and semantic connection processing produces a natural summary sentence. Comparing the proposed method with other document summarization methods showed a small but consistent improvement in performance, demonstrating that semantic sentence generation and connectivity can be increased. In future work, we will study how to extend semantic connections by experimenting with various attribute settings.
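
A minimal sketch of the VAE component over sentence vectors, assuming PyTorch and arbitrary dimensions, is shown below. It covers only the encode-reparameterize-decode step with the standard reconstruction-plus-KL loss; the paper's attribute discriminator and semantic-connectivity scoring are not reproduced.

```python
# Minimal VAE over sentence embeddings (PyTorch), as a sketch of the latent
# representation step described above. Dimensions and data are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SentenceVAE(nn.Module):
    def __init__(self, input_dim=300, hidden_dim=128, latent_dim=32):
        super().__init__()
        self.enc = nn.Linear(input_dim, hidden_dim)
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
                                 nn.Linear(hidden_dim, input_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    recon_loss = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + kl

# Toy training step on random "sentence vectors".
model = SentenceVAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(16, 300)          # 16 sentence embeddings (placeholder data)
optimizer.zero_grad()
recon, mu, logvar = model(x)
loss = vae_loss(recon, x, mu, logvar)
loss.backward()
optimizer.step()
print(float(loss))
```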

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.71-88 / 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character based on sequential input data. N-gram models have been widely used, but such a model cannot capture the correlation between input units efficiently, since it is a probabilistic model based on the frequency of each unit in the training set. Recently, as deep learning algorithms have developed, recurrent neural network (RNN) models and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependencies between the objects that are entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To learn a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally includes a huge number of words or morphemes, the size of the dictionary is very large, which increases model complexity. In addition, word-level or morpheme-level models can only generate vocabulary items contained in the training set. Furthermore, with highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to cause errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit that makes up Korean text. We construct the language model using three or four LSTM layers. Each model was trained using the stochastic gradient algorithm and more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was conducted on Old Testament texts using the deep learning package Keras with the Theano backend. After preprocessing the texts, the dataset included 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters and an output of the following 21st character. In total, 1,023,411 input-output pairs were included in the dataset, which we divided into training, validation, and test sets in a 70:15:15 ratio. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss evaluated on the validation set, the perplexity evaluated on the test set, and the time taken to train each model. As a result, all the optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm. The stochastic gradient algorithm took the longest to train for both the 3- and 4-layer LSTM models. On average, the 4-LSTM-layer model took 69% longer to train than the 3-LSTM-layer model, yet its validation loss and perplexity were not improved significantly and even became worse under some conditions. On the other hand, when comparing the automatically generated sentences, the 4-LSTM-layer model tended to generate sentences closer to natural language than the 3-LSTM model. Although there were slight differences in the completeness of the generated sentences between the models, the sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost grammatically perfect. The results of this study are expected to be widely used for Korean language processing in the fields of language processing and speech recognition, which form the basis of artificial intelligence systems.
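
A sketch of the 3-layer character (phoneme)-level LSTM language model described in the abstract, written with the Keras Sequential API: 20 one-hot encoded symbols in, a softmax over the 74-symbol vocabulary out. The paper used Keras on the Theano backend; this sketch uses tensorflow.keras, and the layer size and placeholder data are hypothetical.

```python
# Sketch of the phoneme(character)-level LSTM LM described above.
import numpy as np
from tensorflow.keras import Input, Sequential
from tensorflow.keras.layers import LSTM, Dense

SEQ_LEN, VOCAB = 20, 74  # 20-character window, 74 unique symbols (per the paper)

model = Sequential([
    Input(shape=(SEQ_LEN, VOCAB)),
    LSTM(256, return_sequences=True),
    LSTM(256, return_sequences=True),
    LSTM(256),                        # last LSTM layer returns final state only
    Dense(VOCAB, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

# Placeholder one-hot data standing in for the (20-char input, 21st-char output) pairs.
x = np.eye(VOCAB)[np.random.randint(0, VOCAB, size=(32, SEQ_LEN))]
y = np.eye(VOCAB)[np.random.randint(0, VOCAB, size=32)]
model.fit(x, y, epochs=1, verbose=0)
```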

Voice Synthesis Detection Using Language Model-Based Speech Feature Extraction (언어 모델 기반 음성 특징 추출을 활용한 생성 음성 탐지)

  • Seung-min Kim;So-hee Park;Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology / v.34 no.3 / pp.439-449 / 2024
  • Recent rapid advancements in voice generation technology have enabled the natural synthesis of voices from text alone. However, this progress has also led to an increase in malicious activities, such as voice phishing (vishing), in which generated voices are exploited for criminal purposes. Numerous models have been developed to detect synthesized voices, typically by extracting features from the voice and using these features to determine the likelihood that the voice was generated. This paper proposes a new model for extracting voice features to address misuse cases arising from generated voices. It utilizes a deep learning-based audio codec model and the pre-trained natural language processing model BERT to extract novel voice features. To assess the suitability of the proposed feature extraction model for voice detection, four generated-voice detection models were created using the extracted features, and their performance was evaluated. For comparison, three voice detection models based on the Deepfeature approach proposed in previous studies were evaluated against the other models in terms of accuracy and EER. The model proposed in this paper achieved an accuracy of 88.08% and a low EER of 11.79%, outperforming the existing models. These results confirm that the voice feature extraction method introduced in this paper can be an effective tool for distinguishing between generated and real voices.
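
The EER reported above is the operating point at which a detector's false-acceptance and false-rejection rates are equal. A small Python sketch of how it is typically computed from detection scores, using random placeholder scores rather than the paper's model outputs:

```python
# Sketch of the EER metric: the point where the false-positive rate and the
# false-negative rate of a generated-voice detector are (approximately) equal.
# Labels and scores below are random placeholders, not the paper's results.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)            # 1 = generated, 0 = real
scores = labels * 0.6 + rng.normal(0, 0.3, 1000)  # placeholder detector scores

fpr, tpr, _ = roc_curve(labels, scores)
fnr = 1 - tpr
eer_index = np.nanargmin(np.abs(fpr - fnr))       # where FPR ~= FNR
eer = (fpr[eer_index] + fnr[eer_index]) / 2
print(f"EER = {eer:.2%}")
```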