• Title/Summary/Keyword: Speaker Verification (화자증명)

Search Results: 22

Applying Polite level Estimation and Case-Based Reasoning to Context-Aware Mobile Interface System (존대등분 계산법과 사례기반추론을 활용한 상황 인식형 모바일 인터페이스 시스템)

  • Kwon, Oh-Byung; Choi, Suk-Jae; Park, Tae-Hwan
    • Journal of Intelligence and Information Systems / v.13 no.3 / pp.141-160 / 2007
  • User interface has been regarded as a crucial issue in increasing the acceptance of mobile services. In particular, although it is important that the machine, as speaker, communicate with the human, as listener, in a timely and polite manner, fundamental studies addressing this issue have been very rare. Hence, the purpose of this paper is to propose a methodology for estimating politeness level in a given context-aware setting, and then to design a context-aware system for a polite mobile interface. We focus on the Korean language for politeness-level estimation because a polite interface depends heavily on cultural and linguistic characteristics. A nested Minkowski aggregation model, which amends the Minkowski aggregation model, is adopted as a privacy-preserving similarity measure for case retrieval in distributed computing environments such as ubiquitous computing. To show the feasibility of the proposed methodology, a simulation-based experiment with drama cases was performed to evaluate its performance.

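The privacy-preserving case retrieval described in this abstract can be illustrated with a small sketch: per-attribute distances are first aggregated within attribute groups, so each party in a distributed setting could compute its own group's sub-distance locally, and a second Minkowski aggregation then combines the group distances. The attribute names, groups, weights, and exponents below are illustrative assumptions, not values from the paper.

```python
def minkowski(dists, weights, p):
    """Weighted Minkowski aggregation of a list of distances."""
    total = sum(w * (d ** p) for d, w in zip(dists, weights))
    return total ** (1.0 / p)

def nested_minkowski(case_a, case_b, groups, p_inner=2, p_outer=2):
    """Aggregate distances within attribute groups, then across groups.

    `groups` is a list of (attribute_names, group_weight) pairs; attribute
    values are assumed normalized to [0, 1]. Names here are hypothetical.
    """
    group_dists, group_weights = [], []
    for attrs, weight in groups:
        dists = [abs(case_a[k] - case_b[k]) for k in attrs]
        inner_w = [1.0 / len(attrs)] * len(attrs)  # equal weights within a group
        group_dists.append(minkowski(dists, inner_w, p_inner))
        group_weights.append(weight)
    return minkowski(group_dists, group_weights, p_outer)

# Two toy "politeness context" cases with made-up attributes.
case_a = {"age": 0.3, "rank": 0.8, "familiarity": 0.2}
case_b = {"age": 0.5, "rank": 0.6, "familiarity": 0.9}
groups = [(["age", "rank"], 0.6), (["familiarity"], 0.4)]

print(nested_minkowski(case_a, case_b, groups))
```

With `p_inner = p_outer = 2` this reduces to a weighted Euclidean-style aggregation; other exponents trade off how strongly the largest attribute difference dominates the retrieved nearest case.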

Improvement of Character-net via Detection of Conversation Participant (대화 참여자 결정을 통한 Character-net의 개선)

  • Kim, Won-Taek; Park, Seung-Bo; Jo, Geun-Sik
    • Journal of the Korea Society of Computer and Information / v.14 no.10 / pp.241-249 / 2009
  • Recently, many studies on video annotation and representation have been proposed to analyze video for search and abstraction. In this paper, we present a method for extracting the picture elements of conversational participants in video, together with an enhanced representation of the characters using those elements, collectively called Character-net. Because the previous Character-net decides conversational participants simply as the characters detected during a script's holding time, it suffers a serious limitation: some listeners cannot be detected as participants. Yet the participants who complete the story in a video are a very important factor in understanding the context of a conversation. The picture elements used to detect conversational participants consist of six items: subtitle, scene, order of appearance, characters' eyes, shot patterns, and lip motion. We describe how to use these elements to detect conversational participants and how to improve the representation of Character-net. The conversational participants can be detected accurately when the proposed elements are combined and satisfy certain conditions. The experimental evaluation shows that the proposed method brings significant advantages in both detecting conversational participants and enhancing the representation of Character-net.
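The combination of cues described in this abstract can be sketched as a simple scoring rule: a character counts as a conversation participant when enough of the six elements fire at once. The cue names, boolean encoding, and threshold below are illustrative assumptions; the paper's actual combination conditions are more specific.

```python
# Hypothetical boolean cues standing in for the six picture elements:
# subtitle, scene membership, order of appearance, eye contact,
# shot pattern, and lip motion.
CUES = ["subtitle", "in_scene", "appearance_order",
        "eye_contact", "shot_pattern", "lip_motion"]

def is_participant(cues, threshold=3):
    """A character is judged a participant when at least `threshold`
    of the six cues are present in the current conversation segment."""
    score = sum(1 for c in CUES if cues.get(c, False))
    return score >= threshold

speaker = {"subtitle": True, "in_scene": True, "lip_motion": True}
listener = {"in_scene": True, "eye_contact": True, "shot_pattern": True}
bystander = {"in_scene": True}

print(is_participant(speaker), is_participant(listener), is_participant(bystander))
# prints: True True False
```

Note that a rule like this captures the abstract's key point: a silent listener (no subtitle, no lip motion) can still be detected as a participant from visual cues alone, which the script-holding-time approach misses.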