• Title/Summary/Keyword: 對話 (dialogue)


EFFECT OF THE SHIP NOISE ON THE INTELLIGENCE ABILITY OF MAN (선박소음이 인간지능력에 미치는 영향에 관한 연구)

  • PARK Jung-Hee
    • Korean Journal of Fisheries and Aquatic Sciences
    • /
    • v.8 no.3
    • /
    • pp.127-132
    • /
    • 1975
  • This is an experimental study that aimed to find possible relationships between ship noise and the intelligence quotient (I.Q.) and creativity of crew members, conducted from June 5 to August 24, 1975. The experiment was carried out on the university training ships Oh-Bae-San Ho (1,126 tons) and Kwan-Ak-San Ho (243 tons) and the training ship Baek-Kyung Ho (380 tons) of Je-ju College, where a total of 144 students were engaged in practical exercises. The following results were obtained: a decrease in I.Q. relative to the scores obtained in the classroom was evident; soon after embarking, the students on deck scored 7% lower than in the classroom, while the students in the engine room scored 13% lower. I.Q. returned toward normal after three days on board, apparently indicating that the students had adapted to the ship's noise, but no remarkable improvement was visible during the period from 3 to 35 days on the ship. One remarkable and unexpected finding was that problems of auditory discernment were solved much more easily amid noise that made oral communication impossible (102 dB) than amid noise in which conversation was still possible (67 dB).

The Effect of AI Agent's Multi Modal Interaction on the Driver Experience in the Semi-autonomous Driving Context : With a Focus on the Existence of Visual Character (반자율주행 맥락에서 AI 에이전트의 멀티모달 인터랙션이 운전자 경험에 미치는 효과 : 시각적 캐릭터 유무를 중심으로)

  • Suh, Min-soo;Hong, Seung-Hye;Lee, Jeong-Myeong
    • The Journal of the Korea Contents Association
    • /
    • v.18 no.8
    • /
    • pp.92-101
    • /
    • 2018
  • As interactive AI speakers become popular, voice recognition is regarded as an important vehicle-driver interaction method in autonomous driving situations. The purpose of this study is to confirm whether multimodal interaction, in which feedback is delivered through both the auditory mode and a visual AI character on screen, is more effective for optimizing the user experience than the auditory mode alone. Participants performed interaction tasks with the AI speaker while driving, selecting and adjusting music, and we measured information and system quality, presence, perceived usefulness and ease of use, and continuance intention. The analysis showed no multimodal effect of the visual character on most user-experience factors, nor on continuance intention. Rather, the auditory-only mode was found to be more effective than the multimodal mode for the information-quality factor. In the semi-autonomous driving stage, which requires the driver's cognitive effort, multimodal interaction is not effective for optimizing the user experience compared with single-mode interaction.

Phenomenological Research on Recovery Lived Experience of Stroke Inpatients (뇌졸중 입원 환자들의 회복체험에 관한 현상학적 연구)

  • Song, A-Young;Kim, Su-Kyoung
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.7
    • /
    • pp.200-207
    • /
    • 2017
  • The components that influence recovery were investigated to understand the lived experience of recovery among stroke inpatients. Using the phenomenological research methodology reported by Giorgi, three conversation sessions were conducted with 12 stroke inpatients. The conversations were recorded with the participants' consent, and the data were analyzed according to scientific phenomenological methods. Sixteen summarized meaning units were integrated to derive 10 main meanings and 6 themes. The themes of the recovery experience included the burden of help, performance of independent roles, self-overcoming, return to society, psychological support, and enhanced motivation for recovery. These findings can be used to anticipate the difficulties stroke patients experience during hospitalization and to propose a direction of intervention for recovery. Rehabilitation experts must provide intervention for the recovery of stroke patients based on forming a therapeutic relationship that strengthens psychological support and motivation, and must devise strategies for self-overcoming and proper cooperation with patients' families.

Image Dequantization using Optimization (최적화 기반 영상 역양자화)

  • Choi, Min-Gyu;Kim, Tae-Hoon;Ahn, Jong-Woo
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.34 no.7
    • /
    • pp.296-303
    • /
    • 2007
  • Color quantization replaces the color of each pixel with the closest representative color, so the resulting image is partitioned into uniformly colored regions. As a consequence, the continuous, detailed variations of color over the corresponding regions in the original image are lost through color quantization. In this paper, we present a novel blind scheme for restoring such variations from a color-quantized input image without a priori knowledge of the quantization method. Our scheme identifies which pairs of uniformly colored regions in the input image should have continuous variations of color in the resulting image. Such regions are then seamlessly stitched through optimization while preserving the closest representative colors. The user can optionally indicate which regions should be separated or stitched by scribbling constraint brushes across the regions. We demonstrate the effectiveness of our approach through diverse examples, such as photographs, cartoons, and artistic illustrations.
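
The stitching step described above can be viewed as a least-squares problem. The sketch below is a minimal, single-channel interpretation of that idea, not the paper's actual formulation: it trades a smoothness term over neighboring pixels against a data term that keeps each pixel near its quantized value, and it omits the representative-color constraints and user scribbles. The weight `lam` and the grayscale simplification are assumptions for illustration.

```python
# Minimal optimization-based dequantization sketch (single channel, assumed setup).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def grid_laplacian(h, w):
    """Graph Laplacian of a 4-connected h x w pixel grid."""
    n = h * w
    idx = np.arange(n).reshape(h, w)
    horiz = (idx[:, :-1].ravel(), idx[:, 1:].ravel())
    vert = (idx[:-1, :].ravel(), idx[1:, :].ravel())
    i = np.concatenate([horiz[0], vert[0]])
    j = np.concatenate([horiz[1], vert[1]])
    A = sp.coo_matrix((np.ones(len(i)), (i, j)), shape=(n, n))
    A = A + A.T                                    # symmetric adjacency matrix
    D = sp.diags(np.asarray(A.sum(axis=1)).ravel())
    return (D - A).tocsr()

def dequantize_channel(q, lam=0.5):
    """Minimize sum_{i~j} (x_i - x_j)^2 + lam * sum_i (x_i - q_i)^2.

    q: 2-D float array of quantized intensities in [0, 1].
    """
    h, w = q.shape
    L = grid_laplacian(h, w)
    A = (L + lam * sp.eye(h * w)).tocsc()
    x = spla.spsolve(A, lam * q.ravel())
    return x.reshape(h, w)

# Example: an 8x8 image quantized to two levels; the hard step between the
# levels is smoothed into a gradual transition.
q = np.repeat(np.array([[0.25] * 4 + [0.75] * 4]), 8, axis=0)
print(dequantize_channel(q).round(2))
```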

Training Avatars Animated with Human Motion Data (인간 동작 데이타로 애니메이션되는 아바타의 학습)

  • Lee, Kang-Hoon;Lee, Je-Hee
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.33 no.4
    • /
    • pp.231-241
    • /
    • 2006
  • Creating controllable, responsive avatars is an important problem in computer games and virtual environments. Recently, large collections of motion capture data have been exploited for increased realism in avatar animation and control. Large motion sets have the advantage of accommodating a broad variety of natural human motion. However, when a motion set is large, the time required to identify an appropriate sequence of motions becomes the bottleneck for achieving interactive avatar control. In this paper, we present a novel method for training avatar behaviors from unlabelled motion data in order to animate and control avatars at minimal runtime cost. Based on a machine learning technique called Q-learning, our training method allows the avatar to learn how to act in any given situation through trial-and-error interactions with a dynamic environment. We demonstrate the effectiveness of our approach through examples that include avatars interacting with each other and with the user.
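
For readers unfamiliar with the underlying technique, the sketch below is a minimal tabular Q-learning loop in the spirit of the abstract: an avatar learns by trial and error which motion clip to play in each situation. The toy environment, the discrete state (lateral offset to a drifting target), and all hyperparameters are illustrative assumptions, not the authors' setup, which operates over large motion-capture data sets.

```python
# Minimal tabular Q-learning sketch (toy environment, assumed parameters).
import random
from collections import defaultdict

ACTIONS = ["step_left", "step_right", "idle"]        # available motion clips

def step(state, action):
    """Toy dynamics: the state is the avatar's lateral offset to a drifting target."""
    offset = state
    if action == "step_left":
        offset -= 1
    elif action == "step_right":
        offset += 1
    offset += random.choice([-1, 0, 1])              # the target drifts randomly
    reward = 1.0 if offset == 0 else -0.1 * abs(offset)
    return max(-5, min(5, offset)), reward

Q = defaultdict(float)                               # Q[(state, action)] -> value
alpha, gamma, eps = 0.1, 0.9, 0.2

for episode in range(2000):
    s = random.randint(-5, 5)
    for _ in range(30):
        # epsilon-greedy exploration over motion clips
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        # Q-learning update: bootstrap from the best action in the next state
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy: which clip the avatar would pick in each state.
for s in range(-3, 4):
    print(s, max(ACTIONS, key=lambda act: Q[(s, act)]))
```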

VOC Summarization and Classification based on Sentence Understanding (구문 의미 이해 기반의 VOC 요약 및 분류)

  • Kim, Moonjong;Lee, Jaean;Han, Kyouyeol;Ahn, Youngmin
    • KIISE Transactions on Computing Practices
    • /
    • v.22 no.1
    • /
    • pp.50-55
    • /
    • 2016
  • To understand customers' opinions of and demands on a company's products or services, it is important to analyze VOC (Voice of Customer) data; however, it is difficult to understand context in VOC because of segmented and duplicated sentences and a variety of dialog contexts. In this article, parts of speech (POS) and morphemes were selected as language resources because of their semantic importance within documents, and based on these we defined LSPs (Lexico-Semantic Patterns) to understand the structure and semantics of sentences and extracted summaries from key sentences; furthermore, the LSPs were used to connect segmented sentences and remove contextual repetition. We also defined LSPs by category and classified documents according to the categories of the main sentences matched by the LSPs. In the experiment, we summarized and classified VOC documents and compared the results with previous methodologies.
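
To make the LSP idea concrete, the sketch below shows a toy lexico-semantic-pattern matcher: patterns are regular expressions over POS-tagged tokens, sentences that match are kept as key sentences, and the matched category labels classify the document. The categories, the toy tagger, and the English sample sentences are stand-ins for illustration only; the paper's LSPs operate over Korean morphemes and semantic classes.

```python
# Toy lexico-semantic-pattern (LSP) matching sketch (assumed categories and lexicon).
import re

# Each category is matched by a list of patterns, written here as regular
# expressions over "word/TAG" tokens produced by a toy POS tagger.
LSP_RULES = {
    "complaint": [r"\b(not|never)/ADV \w+/VERB", r"\bbroken/ADJ"],
    "request":   [r"\bplease/INTJ", r"\bwant/VERB \w+/NOUN"],
    "praise":    [r"\b(great|excellent)/ADJ"],
}

def tag(sentence):
    """Stand-in POS tagger with a tiny fixed lexicon (defaults to NOUN)."""
    lexicon = {"not": "ADV", "never": "ADV", "broken": "ADJ", "please": "INTJ",
               "want": "VERB", "works": "VERB", "great": "ADJ"}
    return " ".join(f"{w}/{lexicon.get(w, 'NOUN')}" for w in sentence.lower().split())

def classify(sentence):
    """Return the categories whose LSPs match the tagged sentence."""
    tagged = tag(sentence)
    hits = [cat for cat, patterns in LSP_RULES.items()
            if any(re.search(p, tagged) for p in patterns)]
    return hits or ["other"]

# Summarization by key sentences: keep only sentences that match at least one LSP.
doc = ["The screen is broken", "I want a refund please", "Delivery was on time"]
summary = [s for s in doc if classify(s) != ["other"]]
print(summary)
print({s: classify(s) for s in doc})
```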

A study on Classification of Character Emoticon as the Techno-code (테크노-코드로서의 캐릭터 이모티콘 분류체계 연구)

  • Lyou, Chul-Gyun;Kim, Jeong-Yeon
    • The Journal of the Korea Contents Association
    • /
    • v.15 no.4
    • /
    • pp.479-489
    • /
    • 2015
  • The paradigm of communication is changing with the generalization of mobile computing and the extension of network services. In mobile messenger communication, people increasingly use character emoticons in place of linear text. This means that character emoticons function as narrative characters, becoming a nonlinear techno-code that can substitute for the alphabetic code. As Vilém Flusser said, post-modern communication has arrived. In this kind of communication, storytelling with character emoticons is effective enough to tell a variety of stories. From this point of view, this paper attempts to establish the functions of character emoticons and suggests a classification of character emoticon series. The paper also structures a post-modern communication model as a new paradigm of visual communication.

A Study on the Mixed Reality (MR) Based Storytelling Convergence Coding Education (혼합현실(MR)기반 스토리텔링형 융합 코딩교육에 관한 연구)

  • Lee, Byong-Kwon;Jung, Doo-Yong
    • Journal of Internet of Things and Convergence
    • /
    • v.5 no.2
    • /
    • pp.27-32
    • /
    • 2019
  • With software education becoming a required subject in elementary and middle schools in 2018, educational solutions have emerged, and there is a need for digital content for software education rather than simple coding practice. In addition, current coding education requires research on how to arouse the interest of elementary, middle, and high school students. In this study, we incorporated interactive (UX/UI) features based on mixed reality (MR) to stimulate interest in coding education and proposed a way to cultivate logical thinking by applying storytelling-based coding rather than rote instruction. As a result, we propose a logical-thinking education method for elementary, middle, and high school students that uses mixed reality (MR) technology to move beyond existing rote instruction and to break away from the rote coding education that is a problem in schools.

Case Studies on Planning and Learning for Large-Scale CGFs with POMDPs through Counterfire and Mechanized Infantry Scenarios (대화력전 및 기계화 보병 시나리오를 통한 대규모 가상군의 POMDP 행동계획 및 학습 사례연구)

  • Lee, Jongmin;Hong, Jungpyo;Park, Jaeyoung;Lee, Kanghoon;Kim, Kee-Eung;Moon, Il-Chul;Park, Jae-Hyun
    • KIISE Transactions on Computing Practices
    • /
    • v.23 no.6
    • /
    • pp.343-349
    • /
    • 2017
  • Combat modeling and simulation (M&S) of large-scale computer-generated forces (CGFs) enables the development of even the most sophisticated strategies of combat warfare and efficiently facilitates a comprehensive simulation of an upcoming battle. The DEVS-POMDP framework has been proposed, in which the DEVS formalism, describing the explicit behavior rules in military doctrines, and a POMDP model, describing the autonomous behavior of the CGFs, are hierarchically combined to capture the complexity of realistic combat modeling and simulation. However, it is well documented that computing the optimal policy of a POMDP model is computationally demanding. In this paper, we show through case studies of a counterfire warfare scenario and a mechanized infantry brigade's offensive operations that not only can the performance of CGFs be improved by an efficient POMDP tree search algorithm, but CGFs can also conveniently learn the behavior model of the enemy.
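
As background for the POMDP machinery the abstract relies on, the sketch below shows a discrete belief-state update (a Bayes filter) followed by a one-step lookahead over actions. The tiny "enemy posture" model, its probabilities, and the one-step lookahead are illustrative assumptions only; they are not the paper's DEVS-POMDP framework, and a real planner would use a tree search (e.g., Monte Carlo sampling over belief states) rather than one-step lookahead.

```python
# Minimal POMDP belief-update sketch (made-up toy model, not the paper's scenario).
import numpy as np

STATES = ["enemy_defending", "enemy_advancing"]
ACTIONS = ["hold_fire", "counterfire"]
OBS = ["quiet", "artillery_detected"]

# T[a][s, s']: state transition probabilities under action a
# Z[a][s', o]: probability of observing o after landing in state s'
T = {a: np.array([[0.9, 0.1],
                  [0.2, 0.8]]) for a in ACTIONS}
Z = {a: np.array([[0.8, 0.2],
                  [0.3, 0.7]]) for a in ACTIONS}
# R[a][s]: immediate reward of action a when the true state is s
R = {"hold_fire":   np.array([0.0, -5.0]),
     "counterfire": np.array([-1.0, 4.0])}

def belief_update(b, a, o):
    """Bayes filter: b'(s') is proportional to Z[a][s', o] * sum_s T[a][s, s'] * b(s)."""
    predicted = b @ T[a]                          # predict the next-state distribution
    posterior = Z[a][:, OBS.index(o)] * predicted # weight by observation likelihood
    return posterior / posterior.sum()

def greedy_action(b):
    """One-step lookahead: pick the action with the highest expected immediate reward."""
    return max(ACTIONS, key=lambda a: float(b @ R[a]))

b = np.array([0.5, 0.5])                          # initially unsure of enemy intent
for o in ["artillery_detected", "artillery_detected", "quiet"]:
    a = greedy_action(b)
    b = belief_update(b, a, o)
    print(f"obs={o:<19} action={a:<12} belief={b.round(2)}")
```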

Speakers' Intention Analysis Based on Partial Learning of a Shared Layer in a Convolutional Neural Network (Convolutional Neural Network에서 공유 계층의 부분 학습에 기반 한 화자 의도 분석)

  • Kim, Minkyoung;Kim, Harksoo
    • Journal of KIISE
    • /
    • v.44 no.12
    • /
    • pp.1252-1257
    • /
    • 2017
  • In dialogues, speakers' intentions can be represented by sets of an emotion, a speech act, and a predicator. Therefore, dialogue systems should capture and process these implied characteristics of utterances. Many previous studies have treated these determinations as independent classification problems, but others have shown them to be associated with each other. In this paper, we propose an integrated model that simultaneously determines emotions, speech acts, and predicators using a convolutional neural network. The proposed model consists of particular abstraction layers, in which mutually independent information about these characteristics is abstracted, and a shared abstraction layer, in which combinations of the independent information are abstracted. During training, the errors of emotions, speech acts, and predicators are partially back-propagated through the layers. In the experiments, the proposed integrated model showed better performance (by 2%p in emotion determination, 11%p in speech act determination, and 3%p in predicator determination) than independent determination models.
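
A minimal sketch of the overall architecture described above is given below: a shared convolutional abstraction layer feeding three task-specific heads for emotion, speech act, and predicator. The layer sizes, label counts, and the simple summed loss are assumptions for illustration; the paper's particular abstraction layers and its partial back-propagation scheme are only approximated here by the separate per-task losses.

```python
# Shared-layer multi-task CNN sketch in PyTorch (assumed sizes and label counts).
import torch
import torch.nn as nn

class SpeakerIntentionCNN(nn.Module):
    """Shared abstraction layer with three task-specific heads."""
    def __init__(self, vocab=5000, emb=128, n_emotion=7, n_speech_act=11, n_pred=40):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        # shared abstraction layer: 1-D convolution over the utterance tokens
        self.shared = nn.Sequential(
            nn.Conv1d(emb, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1), nn.Flatten())
        # task-specific output heads
        self.emotion = nn.Linear(256, n_emotion)
        self.speech_act = nn.Linear(256, n_speech_act)
        self.predicator = nn.Linear(256, n_pred)

    def forward(self, tokens):                    # tokens: (batch, seq_len) int ids
        h = self.shared(self.emb(tokens).transpose(1, 2))
        return self.emotion(h), self.speech_act(h), self.predicator(h)

model = SpeakerIntentionCNN()
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One dummy training step on random data, just to show the joint update:
# each task contributes its own loss before a single backward pass.
tokens = torch.randint(0, 5000, (8, 20))
y_emo, y_act, y_pred = (torch.randint(0, n, (8,)) for n in (7, 11, 40))
logits_emo, logits_act, logits_pred = model(tokens)
loss = (loss_fn(logits_emo, y_emo) + loss_fn(logits_act, y_act)
        + loss_fn(logits_pred, y_pred))
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```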