• Title/Summary/Keyword: Multimodal Information


A study of effective contents construction for AR based English learning (AR기반 영어학습을 위한 효과적 콘텐츠 구성 방향에 대한 연구)

  • Kim, Young-Seop; Jeon, Soo-Jin; Lim, Sang-Min
    • Journal of The Institute of Information and Telecommunication Facilities Engineering / v.10 no.4 / pp.143-147 / 2011
  • A system using augmented reality can save time and cost, and its feasibility has been verified in various fields because it removes the sense of unreality found in purely virtual spaces. Augmented reality therefore has wide potential for use. In general, multimodal feedback through the visual, auditory, and tactile senses is a well-known method for enhancing immersion when interacting with a virtual object. By adopting a tangible object, we can provide a touch sensation to users; a 3D model of the same scale overlays the whole area of the tangible object, so the marker area is invisible, which contributes to presenting immersive and natural images to users. Finally, multimodal feedback also creates better immersion; in this paper, sound feedback is considered. As an initial step toward further improving immersion, augmented-reality learning content for children is presented. Augmented reality lies in an intermediate stage between the virtual world and the real world, and its adaptability is estimated to be higher than that of virtual reality.


Multimodal Emotion Recognition using Face Image and Speech (얼굴영상과 음성을 이용한 멀티모달 감정인식)

  • Lee, Hyeon Gu; Kim, Dong Ju
    • Journal of Korea Society of Digital Industry and Information Management / v.8 no.1 / pp.29-40 / 2012
  • A challenging research issue of growing importance to those working in human-computer interaction is to endow a machine with emotional intelligence. Emotion recognition technology thus plays an important role in human-computer interaction research, as it allows more natural, human-like communication between human and computer. In this paper, we propose a multimodal emotion recognition system using face and speech to improve recognition performance. For face-based emotion recognition, a distance measure is calculated by 2D-PCA of the MCS-LBP image with a nearest-neighbor classifier; for speech-based emotion recognition, a likelihood measure is obtained from a Gaussian mixture model built on pitch and mel-frequency cepstral coefficient features. The individual matching scores obtained from face and speech are combined by weighted summation, and the fused score is used to classify the human emotion. Experimental results show that the proposed method improves recognition accuracy by about 11.25% to 19.75% compared with either uni-modal approach, confirming that the proposed approach achieves a significant and effective performance improvement.
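The weighted-summation fusion step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the min-max normalization, the weight value, and all function names are assumptions.

```python
# Illustrative sketch of weighted-summation score fusion across two modalities.

def min_max_normalize(scores):
    """Scale a list of matching scores into [0, 1]."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def fuse_scores(face_scores, speech_scores, w_face=0.6):
    """Combine per-class scores from the two modalities by weighted summation."""
    f = min_max_normalize(face_scores)
    s = min_max_normalize(speech_scores)
    return [w_face * fi + (1.0 - w_face) * si for fi, si in zip(f, s)]

def classify(face_scores, speech_scores, labels, w_face=0.6):
    """Pick the emotion label with the highest fused score."""
    fused = fuse_scores(face_scores, speech_scores, w_face)
    return labels[max(range(len(fused)), key=fused.__getitem__)]
```

For example, `classify([0.2, 0.9, 0.4], [0.3, 0.7, 0.8], ["angry", "happy", "sad"])` returns `"happy"`, since that class scores highest after fusion. In practice the weight would be tuned on validation data.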

A study on the implementation of identification system using facial multi-modal (얼굴의 다중특징을 이용한 인증 시스템 구현)

  • 정택준; 문용선
    • Journal of the Korea Institute of Information and Communication Engineering / v.6 no.5 / pp.777-782 / 2002
  • This study proposes multimodal recognition, instead of existing monomodal biometrics, using multiple facial features to improve recognition accuracy and user convenience. Each biometric feature vector is obtained as follows. For the face, features are calculated by principal component analysis with wavelet multiresolution. For the lips, a filter is first used to detect the lip edges; then, using a thinned image and the least-squares method, the coefficients of an edge equation are estimated. A further feature is the distance ratio between facial parameters. The features above are fed into a backpropagation neural network for classification, and experiments were conducted with these inputs. Based on the experimental results, we discuss the advantages and efficiency of the approach.
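The lip-edge step above fits an equation to thinned edge pixels by least squares. A minimal sketch of that idea, assuming a straight-line model y = a·x + b over (x, y) pixel coordinates (the paper's actual edge model is not specified here):

```python
# Ordinary least-squares fit of a line y = a*x + b to edge-pixel coordinates.

def least_squares_line(points):
    """Fit y = a*x + b to (x, y) points; returns (a, b)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    denom = n * sxx - sx * sx
    a = (n * sxy - sx * sy) / denom
    b = (sy - a * sx) / n
    return a, b
```

Feeding in thinned edge coordinates yields the equation coefficients (a, b), which can then serve as part of the biometric feature vector.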

Audio and Video Bimodal Emotion Recognition in Social Networks Based on Improved AlexNet Network and Attention Mechanism

  • Liu, Min; Tang, Jun
    • Journal of Information Processing Systems / v.17 no.4 / pp.754-771 / 2021
  • In continuous dimensional emotion recognition, the parts that highlight emotional expression differ across modalities, and the influence of each modality on the emotional state also differs. This paper therefore studies the fusion of the two most important modalities in emotion recognition, voice and visual expression, and proposes a dual-modal emotion recognition method that combines an improved AlexNet network with an attention mechanism. After simple preprocessing of the audio and video signals, prior knowledge is first used to extract audio features. Facial expression features are then extracted by the improved AlexNet network. Finally, a multimodal attention mechanism fuses the facial expression and audio features, and an improved loss function mitigates the missing-modality problem, improving the robustness of the model and the performance of emotion recognition. Experimental results show that the concordance correlation coefficients of the proposed model in the arousal and valence dimensions were 0.729 and 0.718, respectively, which are superior to several comparative algorithms.
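The concordance correlation coefficient (CCC) used as the evaluation metric above is a standard formula; a small self-contained sketch (function name is illustrative):

```python
# Concordance correlation coefficient (CCC) between predicted and true
# continuous labels, e.g. arousal or valence traces.

def ccc(pred, true):
    """CCC = 2*cov / (var_p + var_t + (mean_p - mean_t)^2); 1.0 is perfect."""
    n = len(pred)
    mp = sum(pred) / n
    mt = sum(true) / n
    vp = sum((p - mp) ** 2 for p in pred) / n
    vt = sum((t - mt) ** 2 for t in true) / n
    cov = sum((p - mp) * (t - mt) for p, t in zip(pred, true)) / n
    return 2 * cov / (vp + vt + (mp - mt) ** 2)
```

Unlike plain Pearson correlation, CCC also penalizes bias and scale mismatch between predictions and ground truth, which is why it is the usual metric for dimensional emotion recognition.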

Layout Based Multimodal Contents Authoring Tool for Digilog Book (디지로그 북을 위한 레이아웃 기반 다감각 콘텐츠 저작 도구)

  • Park, Jong-Hee; Woo, Woon-Tack
    • Proceedings of the HCI Society of Korea Conference / 2009.02a / pp.512-515 / 2009
  • In this paper, we propose a layout-based multimodal contents authoring tool for Digilog Book. In the authoring step, users repeatedly create a virtual area with a mouse or pen-type device and select properties for the area. When authoring is finished, the system recognizes the printed page number and generates a page layout that includes the areas and property information. The page layout is represented as a scene graph and stored in XML format. The Digilog Book viewer loads the stored page layout, analyzes the properties, and then augments virtual contents or executes functions based on each area. Users can easily author visual and auditory contents using a hybrid interface. In the AR environment, the system provides area templates to help with area creation. In addition, the proposed tool separates the page-recognition module from the page-tracking module, so many pages can be authored with only a single marker. Experiments showed that the proposed authoring tool achieves reasonable performance time in an AR environment. We expect the proposed tool to be applicable to many fields such as education and publication.


Design of Lightweight Artificial Intelligence System for Multimodal Signal Processing (멀티모달 신호처리를 위한 경량 인공지능 시스템 설계)

  • Kim, Byung-Soo; Lee, Jea-Hack; Hwang, Tae-Ho; Kim, Dong-Sun
    • The Journal of the Korea institute of electronic communication sciences / v.13 no.5 / pp.1037-1042 / 2018
  • Neuromorphic technology, which learns and processes information by imitating the human brain, has been researched for decades. Hardware implementations of neuromorphic systems are configured as highly parallel processing structures with a number of simple computational units, achieving high processing speed, low power consumption, and low hardware complexity. Recently, interest in neuromorphic technology for low-power, small embedded systems has increased rapidly. To implement low-complexity hardware, it is necessary to reduce the input data dimension without accuracy loss. This paper proposes a low-complexity artificial intelligence engine that consists of parallel neuron engines and a feature extractor; the engine comprises a number of neuron engines and their controller to process multimodal sensor data. We verified the performance of the proposed neuron engine, including the designed artificial intelligence engines, the feature extractor, and a Micro Controller Unit (MCU).

A development of a multimodal patch-type probe for measuring blood flow and oxygen saturation in carotid artery (경동맥 혈류 속도 및 산소 포화도 측정을 위한 다중모드 패치형 프로브 개발)

  • Youn, Sangyeon; Lee, Kijoon; Kim, Jae Gwan; Hwang, Jae Youn
    • The Journal of the Acoustical Society of Korea / v.38 no.4 / pp.443-449 / 2019
  • When a cardiovascular emergency occurs, it is important to reduce the elapsed time to emergency medical service in order to protect the patient's internal organs. The decision to perform cardiopulmonary resuscitation is usually made by carotid palpation, which detects the pulse of the carotid artery; however, this method diagnoses the patient's condition according to the rescuer's subjective judgment, and excessive pressure on the carotid of a patient with weakened cardiopulmonary function can block cerebral blood flow. In this study, we developed a multimodal patch-type probe, based on multi-channel ultrasound Doppler pairs and an oxygen-saturation measurement module, that can monitor cardiopulmonary function. In-vivo experiments showed that the developed probe can serve as a novel tool that increases the survival rate of cardiovascular disease patients by monitoring the patient's cardiopulmonary function objectively, quantitatively, and promptly in an emergency situation.

Driver Drowsiness Detection Model using Image and PPG data Based on Multimodal Deep Learning (이미지와 PPG 데이터를 사용한 멀티모달 딥 러닝 기반의 운전자 졸음 감지 모델)

  • Choi, Hyung-Tak; Back, Moon-Ki; Kang, Jae-Sik; Yoon, Seung-Won; Lee, Kyu-Chul
    • Database Research / v.34 no.3 / pp.45-57 / 2018
  • Drowsiness while driving is a very dangerous driver condition that can lead directly to a major accident. Traditional drowsiness detection methods exist for assessing the driver's condition, but they are limited in generalized state recognition that reflects the individual characteristics of drivers. In recent years, deep-learning-based state recognition studies have been proposed; deep learning has the advantage of extracting features without human intervention and deriving a more generalized recognition model. In this study, we propose a state recognition model that is more accurate than existing deep-learning methods by learning from images and PPG simultaneously to assess the driver's condition. This paper examines the effect of the driver's image and PPG data on drowsiness detection and tests whether using them together improves the performance of the learning model. We confirmed an accuracy improvement of around 3% when using image and PPG together rather than the image alone. In addition, the multimodal deep-learning-based model that classifies the driver's condition into three categories achieved a classification accuracy of 96%.
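The core idea above is combining two modalities (image features and PPG features) into one input before classification. A toy sketch of that fusion, assuming simple feature-vector concatenation and a nearest-centroid stand-in for the paper's deep network; all names, state labels, and values here are illustrative:

```python
# Toy early-fusion sketch: concatenate image and PPG feature vectors,
# then classify with a nearest-centroid rule over three driver states.

def fuse(image_feat, ppg_feat):
    """Concatenate the two modality feature vectors into one input vector."""
    return list(image_feat) + list(ppg_feat)

def nearest_centroid(vec, centroids):
    """Return the state label whose centroid is closest in Euclidean distance."""
    def dist(c):
        return sum((v - ci) ** 2 for v, ci in zip(vec, c)) ** 0.5
    return min(centroids, key=lambda state: dist(centroids[state]))
```

In the paper the classifier is a multimodal deep network rather than a centroid rule, but the fusion point is the same: both modalities contribute to a single joint representation, which is what yields the reported ~3% gain over the image alone.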

Person Authentication using Multi-Modal Biometrics (다중생체인식을 이용한 사용자 인증)

  • 이경희; 최우용; 지형근; 반성범; 정용화
    • Proceedings of the Korea Institute of Information Security and Cryptology Conference / 2003.07a / pp.204-207 / 2003
  • Biometric technology is preferred over traditional password or token methods in terms of reliability, but it is highly sensitive to environmental influences, which limits its performance. To overcome this limitation of single-biometric technology, various studies on multimodal biometrics, which combine several kinds of biometric information, are under way. This paper briefly introduces multimodal biometric technology and confirms, through multimodal recognition experiments that use face and voice information together with Support Vector Machines (SVM), that performance can be improved.


Multimodal User Interfaces for Web Services (웹 서비스를 위한 멀티 모달 사용자 인터페이스)

  • Song Ki-Sub; Kim Yeon-Seok; Lee Kyong-Ho
    • Proceedings of the Korean Information Science Society Conference / 2006.06b / pp.46-48 / 2006
  • This paper proposes a method for dynamically generating a multimodal user interface from the WSDL document of a web service. To this end, we introduce XForms and VoiceXML, the user-interface technologies proposed by the W3C, and present a user-interface generation algorithm based on XForms. The proposed method analyzes the structure of the WSDL document and maps each data type in the schema to a suitable control, composing an optimal multimodal user interface.
