• Title/Summary/Keyword: User utterance

Search Results: 39

Implement of Semi-automatic Labeling Using Transcripts Text (전사텍스트를 이용한 반자동 레이블링 구현)

  • Won, Dong-Jin;Chang, Moon-soo;Kang, Sun-Mee
    • Journal of the Korean Institute of Intelligent Systems / v.25 no.6 / pp.585-591 / 2015
  • In transcription for spoken-language research, labeling is the task of linking text-represented utterances to the recorded speech. Most existing labeling tools work entirely manually. The semi-automatic labeling we propose consists of an automation module and a manual adjustment module. The automation module extracts voice boundaries using G. Saha's algorithm and predicts utterance boundaries from the number and length of the utterances in the established transcript text. To preserve the accuracy of existing manual tools, we also provide a manual adjustment user interface for revising the automatically labeled utterance boundaries. The implemented tool is up to 27% faster than existing manual labeling tools.
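The automation module above rests on two steps: find voiced regions in the signal, then align them with the utterances already present in the transcript. Below is a minimal Python sketch of that idea; it uses a plain short-time-energy threshold as a stand-in for the G. Saha endpoint-detection algorithm cited in the abstract, and the frame size, hop, noise window, and threshold factor are illustrative choices, not values from the paper.

```python
import numpy as np

def voiced_regions(signal, sr=16000, frame_ms=25, hop_ms=10, k=3.0):
    """Return (start_sample, end_sample) pairs whose short-time energy exceeds
    a noise-based threshold (a stand-in for Saha-style endpoint detection)."""
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    frames = np.lib.stride_tricks.sliding_window_view(signal, frame)[::hop]
    energy = (frames ** 2).mean(axis=1)
    # Estimate noise statistics from the first few frames (assumed silence).
    noise = energy[:20]
    threshold = noise.mean() + k * noise.std()
    voiced = energy > threshold

    regions, start = [], None
    for i, v in enumerate(voiced):
        if v and start is None:
            start = i
        elif not v and start is not None:
            regions.append((start * hop, i * hop + frame))
            start = None
    if start is not None:
        regions.append((start * hop, len(signal)))
    return regions

def assign_utterance_boundaries(regions, transcript_utterances):
    """Place as many boundaries as the transcript implies, weighting by
    utterance text length (a rough proxy for the paper's 'number and
    length of utterances'; not the paper's actual prediction rule)."""
    lengths = np.array([len(u) for u in transcript_utterances], dtype=float)
    total_voiced = sum(e - s for s, e in regions)
    targets = np.cumsum(lengths / lengths.sum()) * total_voiced

    boundaries, acc, t = [], 0, 0
    for s, e in regions:
        acc += e - s
        while t < len(targets) - 1 and acc >= targets[t]:
            boundaries.append(e)   # candidate boundary at the end of this region
            t += 1
    return boundaries
```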

Modality Classification for an Example-Based Dialogue System (예제 기반 대화 시스템을 위한 양태 분류)

  • Kim, Min-Jeong;Hong, Gum-Won;Song, Young-In;Lee, Yeon-Soo;Lee, Do-Gil;Rim, Hae-Chang
    • MALSORI / v.68 / pp.75-93 / 2008
  • An example-based dialogue system utilizes utterance-response pairs stored in a dialogue database, and its most important part is finding the utterance most similar to the user's input utterance. Modality, which is characterized as conveying the speaker's involvement in the propositional content of a given utterance, is one of the core sentence features. For example, the sentence "I want to go to school." has a modality of hope. In this paper, we propose a modality classification system that predicts sentence modality in order to improve the performance of example-based dialogue systems. We also define a modality tag set for a dialogue system and validate this tag set using a rule-based modality classification system. Experimental results show that our modality tag set and modality classification system improve the performance of an example-based dialogue system.
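The abstract mentions validating the modality tag set with a rule-based classifier. A toy sketch of such a classifier is shown below; the tags and surface patterns are illustrative guesses, not the paper's actual tag set or rules.

```python
import re

# Illustrative modality tags and surface cues (not the paper's tag set).
MODALITY_RULES = [
    ("hope",        re.compile(r"\b(want to|hope to|wish|would like to)\b", re.I)),
    ("obligation",  re.compile(r"\b(must|have to|should|need to)\b", re.I)),
    ("possibility", re.compile(r"\b(can|could|may|might)\b", re.I)),
    ("request",     re.compile(r"\b(please|could you|would you)\b", re.I)),
]

def classify_modality(utterance: str) -> str:
    """Return the first matching modality tag, or a default."""
    for tag, pattern in MODALITY_RULES:
        if pattern.search(utterance):
            return tag
    return "declarative"

print(classify_modality("I want to go to school."))  # -> "hope"
```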

The Effect of Preceding Utterance on the User Experience in the Voice Agent Interactions - Focus on the Conversational Types in the Smart Home Context - (음성 에이전트 상호작용에서 선행 발화가 사용자 경험에 미치는 영향 - 스마트홈 맥락에서 대화 유형 조건을 중심으로 -)

  • Kang, Yeseul;Na, Gyounghwa;Choi, Junho
    • The Journal of the Convergence on Culture Technology / v.7 no.1 / pp.620-631 / 2021
  • This study tests the effect of a voice agent's preceding utterances on user experience in a smart-home context, by conversation type. Based on two conversation types (task-oriented vs. relationship-oriented) and two utterance types (preceding vs. response), four scenarios were designed for the experiment. A total of 62 participants were divided into two groups by utterance type and exposed to scenarios of both conversation types. Likeability, psychological reactance, and perceived intelligence were measured as user-experience variables for the conversational agent. The results showed a main effect on likeability for task-oriented conversations and on psychological reactance for preceding utterances. An interaction effect showed that preceding utterances improved likeability and perceived intelligence in task-oriented conversations.

Preceded Utterance Conversational Agent's Effect on User Experience with User's Task Performance and Conversational Agent's Self-Disclosure (선제 발화하는 대화형 에이전트가 사용자 경험에 미치는영향: 사용자 과제 수행과 대화형 에이전트의 자기노출을 중심으로)

  • Shin, Hyorim;Lee, Soyeon;Kang, Hyunmin
    • The Journal of the Convergence on Culture Technology / v.8 no.1 / pp.565-576 / 2022
  • The scope and functions of conversational agents are gradually expanding. In particular, research and technology development are under way on conversational agents that can speak first, without being called by the user. However, the field is still in its early stages, and there is little research on how a conversational agent that makes preceding utterances affects users. Accordingly, this study used a 2×3 mixed design with the user's task-performance condition and the agent's self-disclosure as independent variables, and measured intimacy, functional satisfaction, psychological reactance, and workload as dependent variables to identify the effects of a preceding-utterance conversational agent on user experience.

A Method for Measuring Inter-Utterance Similarity Considering Various Linguistic Features (다양한 언어적 자질을 고려한 발화간 유사도 측정 방법)

  • Lee, Yeon-Su;Shin, Joong-Hwi;Hong, Gum-Won;Song, Young-In;Lee, Do-Gil;Rim, Hae-Chang
    • The Journal of the Acoustical Society of Korea / v.28 no.1 / pp.61-69 / 2009
  • This paper presents an improved method for measuring inter-utterance similarity in an example-based dialogue system, which searches a dialogue database for the utterance most similar to a given user utterance in order to generate a response. Unlike general inter-sentence similarity measures, an inter-utterance similarity measure for an example-based dialogue system should consider not only word distribution but also various linguistic features that affect natural conversation, such as affirmation/negation, tense, modality, and sentence type. Previous approaches, however, do not sufficiently reflect these features. This paper proposes a new utterance similarity measure that analyzes and reflects these linguistic features to improve accuracy. By also considering the substitutability of the features, the proposed method can work with a limited number of examples. Experimental results show that the proposed method achieves a 10%p improvement in accuracy over the previous method.
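One way to picture the proposed measure is as a lexical similarity score blended with agreement on discrete linguistic features (negation, tense, modality, sentence type). The Python sketch below follows that reading with made-up feature weights; it is not the paper's actual formula.

```python
from collections import Counter
from math import sqrt

def cosine(u_tokens, v_tokens):
    """Plain bag-of-words cosine similarity."""
    cu, cv = Counter(u_tokens), Counter(v_tokens)
    dot = sum(cu[w] * cv[w] for w in cu)
    norm = sqrt(sum(c * c for c in cu.values())) * sqrt(sum(c * c for c in cv.values()))
    return dot / norm if norm else 0.0

# Illustrative feature weights (not from the paper).
FEATURE_WEIGHTS = {"negation": 0.3, "tense": 0.1, "modality": 0.2, "sentence_type": 0.2}

def utterance_similarity(u, v, w_lex=0.6):
    """u, v: dicts like {'tokens': [...], 'negation': False, 'tense': 'present',
    'modality': 'hope', 'sentence_type': 'declarative'}."""
    lex = cosine(u["tokens"], v["tokens"])
    feat = sum(w for f, w in FEATURE_WEIGHTS.items() if u[f] == v[f])
    feat /= sum(FEATURE_WEIGHTS.values())
    return w_lex * lex + (1 - w_lex) * feat
```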

Lip Reading Method Using CNN for Utterance Period Detection (발화구간 검출을 위해 학습된 CNN 기반 입 모양 인식 방법)

  • Kim, Yong-Ki;Lim, Jong Gwan;Kim, Mi-Hye
    • Journal of Digital Convergence / v.14 no.8 / pp.233-243 / 2016
  • Because of speech recognition problems in noisy environments, Audio-Visual Speech Recognition (AVSR) systems, which combine speech information with visual information, have been proposed since the mid-1990s, and lip reading has played a significant role in them. This study aims to improve word recognition rates using only lip-shape detection, for an efficient AVSR system. After preprocessing for lip-region detection, Convolutional Neural Network (CNN) techniques are applied for utterance-period detection and lip-shape feature vector extraction, and Hidden Markov Models (HMMs) are then used for recognition. The utterance-period detection achieves a 91% success rate, higher than general threshold methods. In lip-reading recognition, the user-dependent experiment records 88.5% and the user-independent experiment 80.2%, improved results compared to previous studies.
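The pipeline described above (lip-region frames → CNN → HMM) can be illustrated with a small frame-level CNN that labels each mouth crop as speaking or silent, giving a rough utterance-period mask for an HMM to smooth. The PyTorch sketch below is illustrative only; the architecture and the 32×32 grayscale input size are assumptions, not the network from the paper.

```python
import torch
import torch.nn as nn

class LipFrameCNN(nn.Module):
    """Toy CNN that classifies a 32x32 grayscale mouth crop as
    speaking (1) or silent (0); not the paper's architecture."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, 2)

    def forward(self, x):                           # x: (batch, 1, 32, 32)
        h = self.features(x).flatten(1)
        return self.classifier(h)                   # logits over {silent, speaking}

# Example: label a batch of mouth crops; the per-frame argmax gives a
# rough utterance-period mask that an HMM could then smooth.
model = LipFrameCNN()
frames = torch.randn(8, 1, 32, 32)                  # placeholder frames
speaking_mask = model(frames).argmax(dim=1)
```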

A Design and Implementation of Natural Language Dialogue Understanding System Based on Discourse Information and Plan Recognition (대화정보를 이용한 계획인식 기반형 자연언어 대화이해 시스템의 설계 및 구현)

  • 김영길;최병욱
    • Journal of the Korean Institute of Telematics and Electronics B / v.33B no.3 / pp.159-168 / 1996
  • In this paper, a natural language dialogue understanding system based on discourse information and plan recognition is designed and implemented. The system analyzes the user's input utterance and acquires discourse information to perform plan recognition and produce cooperative responses. This paper proposes a method of controlling a dialogue based on an algorithm for extracting the discourse information. When the discourse information for dialogue understanding is extracted, the system uses the feature-structure values obtained from a Korean parser. It can therefore offer the response the user wants, manage the dialogue at the utterance level, and improve the efficiency of dialogue understanding. We apply the system to the hotel reservation domain and show how the discourse information is used to control the dialogue.
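As a rough picture of how discourse information drawn from a parser's feature structure could drive the hotel-reservation dialogue, the sketch below treats the feature structure as a nested dict, folds it into a discourse state, and picks the next system move by slot filling. The keys, slots, and control policy are hypothetical; the paper's actual feature-structure format and plan-recognition procedure are not reproduced here.

```python
# Hypothetical feature structure produced by a parser for
# "I'd like to book a double room for two nights."
feature_structure = {
    "speech_act": "request",
    "pred": "reserve",
    "args": {"room_type": "double", "nights": 2},
}

def update_discourse_state(state, fs):
    """Fold one utterance's feature structure into the discourse state
    (a simple slot-filling stand-in for plan recognition)."""
    state.setdefault("history", []).append(fs["speech_act"])
    state.setdefault("slots", {}).update(fs.get("args", {}))
    return state

def next_system_move(state):
    """Ask for whichever required slot is still missing."""
    for slot in ("room_type", "nights", "check_in_date"):
        if slot not in state.get("slots", {}):
            return ("ask", slot)
    return ("confirm", state["slots"])

state = update_discourse_state({}, feature_structure)
print(next_system_move(state))   # -> ('ask', 'check_in_date')
```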

A Machine Learning based Method for Measuring Inter-utterance Similarity for Example-based Chatbot (예제 기반 챗봇을 위한 기계 학습 기반의 발화 간 유사도 측정 방법)

  • Yang, Min-Chul;Lee, Yeon-Su;Rim, Hae-Chang
    • Journal of the Korea Academia-Industrial cooperation Society / v.11 no.8 / pp.3021-3027 / 2010
  • An example-based chatbot generates a response to a user's utterance by searching for the most similar utterance in a collection of dialogue examples. Although finding an appropriate example is critical, because it directly determines response quality, few studies have examined which features should be considered and how they should be used when searching for similar utterances. In this paper, we propose a machine learning framework that uses various linguistic features. Experimental results show that using semantic and lexical features together significantly improves performance over conventional approaches in terms of 1) utilization of the example database, 2) precision of example matching, and 3) quality of responses.
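A minimal reading of the framework is: turn each (user utterance, candidate example) pair into a vector of lexical and semantic features and let a learned model score it. The scikit-learn sketch below does exactly that with two toy features and placeholder training pairs; neither the features nor the data come from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pair_features(user_tokens, example_tokens, user_sem, example_sem):
    """Two toy features: lexical Jaccard overlap and semantic-tag match."""
    a, b = set(user_tokens), set(example_tokens)
    jaccard = len(a & b) / len(a | b) if a | b else 0.0
    sem_match = 1.0 if user_sem == example_sem else 0.0
    return [jaccard, sem_match]

# Placeholder training pairs: (features, is_appropriate_example).
X = np.array([
    pair_features(["book", "room"], ["reserve", "room"], "request", "request"),
    pair_features(["book", "room"], ["weather", "today"], "request", "inform"),
])
y = np.array([1, 0])
ranker = LogisticRegression().fit(X, y)

def best_example(user_tokens, user_sem, candidates):
    """candidates: list of (tokens, semantic_tag, response); return the response
    of the highest-scoring example."""
    scored = [(ranker.predict_proba([pair_features(user_tokens, t, user_sem, s)])[0, 1], r)
              for t, s, r in candidates]
    return max(scored)[1]
```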

Effective Korean Speech-act Classification Using the Classification Priority Application and a Post-correction Rules (분류 우선순위 적용과 후보정 규칙을 이용한 효과적인 한국어 화행 분류)

  • Song, Namhoon;Bae, Kyoungman;Ko, Youngjoong
    • Journal of KIISE / v.43 no.1 / pp.80-86 / 2016
  • A speech act is the behavior a user intends by an utterance, and speech-act classification is important in a dialogue system. Machine learning and rule-based methods have mainly been used for speech-act classification. In this paper, we propose a speech-act classification method based on the combination of a support vector machine (SVM) and transformation-based learning (TBL). The user's utterance is first classified by the SVM, which is preferentially applied to categories with a low utterance rate in the training data. Then, when an utterance receives negative scores across all categories, it is passed to a rule-based post-correction phase. Our method achieves higher performance than the baseline system, along with error reduction.
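The flow described above can be sketched as: score the utterance with one-vs-rest SVMs, prefer the rarest category whose score is non-negative, and fall back to correction rules when every score is negative. The scikit-learn sketch below follows that flow with a hypothetical tag set, toy training data, and made-up fallback rules.

```python
from collections import Counter
from sklearn.svm import LinearSVC
from sklearn.feature_extraction.text import TfidfVectorizer

# Placeholder training data (not the paper's corpus or tag set).
utterances = ["please book a room", "what time is it", "yes that is fine",
              "cancel my reservation", "thanks a lot"]
speech_acts = ["request", "question", "accept", "request", "thank"]

vec = TfidfVectorizer()
X = vec.fit_transform(utterances)
svm = LinearSVC().fit(X, speech_acts)

# Rarer categories are checked first, mirroring the "classification
# priority" idea in the abstract.
freq = Counter(speech_acts)
priority = sorted(svm.classes_, key=lambda c: freq[c])

def post_correction(utterance):
    """Hypothetical fallback rules for utterances that every SVM rejects."""
    if utterance.rstrip().endswith("?"):
        return "question"
    return "inform"

def classify(utterance):
    scores = dict(zip(svm.classes_, svm.decision_function(vec.transform([utterance]))[0]))
    if all(s < 0 for s in scores.values()):     # every one-vs-rest score negative
        return post_correction(utterance)
    for category in priority:                   # rarest acceptable category wins
        if scores[category] >= 0:
            return category
```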

Prediction of Domain Action Using a Neural Network (신경망을 이용한 영역 행위 예측)

  • Lee, Hyun-Jung;Seo, Jung-Yun;Kim, Hark-Soo
    • Korean Journal of Cognitive Science / v.18 no.2 / pp.179-191 / 2007
  • In a goal-oriented dialogue, speakers' intentions can be represented by domain actions, each consisting of a speech act paired with a concept sequence. Predicting the domain action of the user's next utterance is useful for correcting errors that occur during speech recognition, and predicting the domain action of the system's next utterance is useful for generating flexible responses. In this paper, we propose a model that predicts the domain action of the next utterance using a neural network. The proposed model takes a dialogue-history vector and the current domain action as inputs to the network. In experiments, the proposed model showed a precision of 80.02% in speech-act prediction and 82.09% in concept-sequence prediction.
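Read plainly, the model concatenates a dialogue-history vector with an encoding of the current domain action and feeds the result to a feed-forward network with two output heads, one for the speech act and one for the concept sequence. The PyTorch sketch below follows that reading; the vector sizes, tag-set sizes, and architecture are placeholders, not the paper's.

```python
import torch
import torch.nn as nn

N_SPEECH_ACTS = 10      # placeholder tag-set sizes, not the paper's
N_CONCEPT_SEQS = 20
HISTORY_DIM = 32        # placeholder dialogue-history vector size

class DomainActionPredictor(nn.Module):
    """Predicts the next utterance's (speech act, concept sequence) from a
    dialogue-history vector plus the current domain action."""
    def __init__(self, hidden=64):
        super().__init__()
        in_dim = HISTORY_DIM + N_SPEECH_ACTS + N_CONCEPT_SEQS
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh())
        self.speech_act_head = nn.Linear(hidden, N_SPEECH_ACTS)
        self.concept_head = nn.Linear(hidden, N_CONCEPT_SEQS)

    def forward(self, history_vec, cur_act_onehot, cur_concept_onehot):
        h = self.body(torch.cat([history_vec, cur_act_onehot, cur_concept_onehot], dim=-1))
        return self.speech_act_head(h), self.concept_head(h)

# Example forward pass with placeholder inputs.
model = DomainActionPredictor()
hist = torch.zeros(1, HISTORY_DIM)
act = torch.zeros(1, N_SPEECH_ACTS); act[0, 3] = 1.0
con = torch.zeros(1, N_CONCEPT_SEQS); con[0, 7] = 1.0
act_logits, concept_logits = model(hist, act, con)
```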
