• Title/Summary/Keyword: Multimodal Information

Impact Analysis of nonverbal multimodals for recognition of emotion expressed virtual humans (가상 인간의 감정 표현 인식을 위한 비언어적 다중모달 영향 분석)

  • Kim, Jin Ok
    • Journal of Internet Computing and Services
    • /
    • v.13 no.5
    • /
    • pp.9-19
    • /
    • 2012
  • Virtual humans used as HCI agents in digital contents express various emotions across modalities such as facial expression and body posture. However, few studies have considered combinations of such nonverbal modalities in emotion perception. Computational engine models must consider how a combination of nonverbal modalities, such as facial expression and body posture, will be perceived by users in order to implement an emotionally expressive virtual human. This paper analyzes the impact of nonverbal multimodality on the design of emotion-expressing virtual humans. First, the relative impacts of different modalities are analyzed by exploring emotion recognition across modalities for a virtual human. Then, an experiment evaluates the contribution of congruent facial and postural expressions to the recognition of basic emotion categories, as well as of the valence and activation dimensions. Measurements are also carried out on the impact of incongruent multimodal expressions on the recognition of superposed emotions, which are known to be frequent in everyday life. Experimental results show that congruence of facial and postural expression facilitates the perception of emotion categories; categorical recognition is influenced mainly by the facial modality, whereas the postural modality is preferred when judging the level of the activation dimension. These results will be used in the implementation of an animation engine and behavior synchronization for emotion-expressing virtual humans.

Digital Multimodal Storytelling: Understanding Learner Perceptions (디지털 멀티모달 스토리텔링: 학습자 인식에 대한 이해)

  • Chung, Sun Joo
    • Journal of Convergence for Information Technology
    • /
    • v.11 no.3
    • /
    • pp.174-184
    • /
    • 2021
  • The present study intends to understand how multimodality can be implemented in a content course curriculum and how students perceive multimodal tasks. Twenty-eight students majoring in English were engaged in a digital storytelling assignment as part of the content curriculum. Findings from the questionnaire and reflective essays that investigated students' perceptions of digital storytelling showed that the assignment helped students engage in the task and feel motivated. In comparison to traditional writing tasks, students perceived digital storytelling to be more engaging and motivating, but felt that it required more mental effort and caused more anxiety. By supporting students in exploring technology and implementing multimodal aspects in the learning process, digital storytelling can encourage engagement and autonomous learning, helping students create meaningful works that are purposeful and enjoyable.

Nano Bio Imaging for NT and BT

  • Moon, DaeWon
    • Proceedings of the Korean Vacuum Society Conference
    • /
    • 2015.08a
    • /
    • pp.51.2-51.2
    • /
    • 2015
  • Understanding interfacial phenomena has been one of the main research issues not only in semiconductors but also in life sciences. I have been trying to meet atomic-scale surface and interface analysis challenges from the semiconductor industry and, furthermore, to extend the application scope to biomedical areas. Optical imaging has been most widely and successfully used for biomedical imaging, but complementary ion beam imaging techniques based on mass spectrometry and ion scattering can provide more detailed, molecule-specific and nanoscale information. In this presentation, I will review the 27-year history of medium energy ion scattering (MEIS) development at KRISS and DGIST for nanoanalysis. An electrostatic MEIS system constructed at KRISS after the FOM (Netherlands) design has been successfully applied to gate oxide analysis and quantitative surface analysis. Recently, we developed a time-of-flight (TOF) MEIS system, for the first time in the world. With TOF-MEIS, we reported quantitative compositional profiling with single-atomic-layer resolution for 0.5~3 nm CdSe/ZnS conjugated QDs, as well as ultra-shallow junctions and FinFETs of As-implanted Si. With this new TOF-MEIS nanoanalysis technique, details of nanostructured materials can be measured quantitatively. Progress in TOF-MEIS analysis of various nano- and biotechnologies will be discussed. For the last 10 years, I have been trying to develop multimodal nanobio imaging techniques for cardiovascular and brain tissues. Firstly, in atherosclerotic plaque imaging, multimodal analysis using coherent anti-Stokes Raman scattering (CARS) and time-of-flight secondary ion mass spectrometry (TOF-SIMS) showed that increased cholesterol palmitate may contribute to the formation of a necrotic core by increasing cell death. Secondly, surface plasmon resonance imaging ellipsometry (SPRIE) was developed for cell biointerface imaging of cell adhesion, migration, and infiltration dynamics for HUVEC, CASMC, and T cells. Thirdly, we developed an ambient mass spectrometric imaging system for live cells and tissues. Preliminary results on the mouse brain hippocampus and hypothalamus will be presented. In conclusion, multimodal optical and mass spectrometric imaging provides overall structural and morphological information along with complementary molecule-specific information, which can be a useful methodology for biomedical studies. Future challenges in optical and mass spectrometric imaging for new biomedical applications will be discussed.

A Methodology of Multimodal Public Transportation Network Building and Path Searching Using Transportation Card Data (교통카드 기반자료를 활용한 복합대중교통망 구축 및 경로탐색 방안 연구)

  • Cheon, Seung-Hoon;Shin, Seong-Il;Lee, Young-Ihn;Lee, Chang-Ju
    • Journal of Korean Society of Transportation
    • /
    • v.26 no.3
    • /
    • pp.233-243
    • /
    • 2008
  • Recognition of the importance and role of public transportation is increasing because of traffic problems in many cities. In spite of this paradigm change, previous research on public transportation trip assignment has limits in some respects. Especially in multimodal public transportation networks, many factors should be considered, such as transfers, operational time schedules, waiting time, and travel cost. After the metropolitan integrated transfer discount system was introduced, transfer trips between modes increased, changing users' route choices. Moreover, with the advent of the high-technology public transportation card called the smart card, users' travel information can be recorded automatically, giving researchers a new analytical methodology for multimodal public transportation networks. This paper suggests a methodology for building a new multimodal public transportation network from transportation card data using computer programming methods. First, we propose a method for building an integrated transportation network based on bus and urban railroad stations, in order to make full use of the travel information in transportation card data. Second, we show how to connect broken transfer links with computer-based programming techniques, which helps solve the transfer problems that existing transportation networks have. Lastly, we give a methodology for users' path finding and network establishment among multiple modes in multimodal public transportation networks. Using the proposed methodology, it becomes easy to build multimodal public transportation networks from existing bus and urban railroad station coordinates. Also, large-scale multimodal public transportation networks can be built without extra work such as transfer link connection. In the end, this study can contribute to solving the multimodal path-finding problem, which is regarded as an unsolved issue in existing transportation networks.
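The network-building and path-finding steps described above can be sketched as a graph of stops joined by explicit transfer links, searched with Dijkstra's algorithm. The stop names, travel times, and the 5-minute transfer penalty below are invented for illustration; the paper's actual card-data-driven construction is not reproduced here.

```python
import heapq

def shortest_path(edges, source, target):
    """Dijkstra over a multimodal stop graph.

    edges: dict mapping node -> list of (neighbor, minutes).
    Returns (total_minutes, path) or (None, []) if unreachable.
    """
    dist = {source: 0}
    prev = {}
    pq = [(0, source)]
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        if u == target:
            break
        for v, w in edges.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if target not in dist:
        return None, []
    path = [target]
    while path[-1] != source:
        path.append(prev[path[-1]])
    return dist[target], path[::-1]

# Hypothetical network: two bus stops, two rail stations, and an
# explicit transfer link (5-minute penalty) connecting the modes.
network = {
    "bus:A":  [("bus:B", 10)],
    "bus:B":  [("xfer:B", 5)],   # transfer link added during network building
    "xfer:B": [("rail:B", 0)],
    "rail:B": [("rail:C", 7)],
}

cost, route = shortest_path(network, "bus:A", "rail:C")
```

Because transfers are ordinary edges with their own weights, a single shortest-path query naturally accounts for transfer penalties across modes.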

A study of using quality for Radial Basis Function based score-level fusion in multimodal biometrics (RBF 기반 유사도 단계 융합 다중 생체 인식에서의 품질 활용 방안 연구)

  • Choi, Hyun-Soek;Shin, Mi-Young
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.45 no.5
    • /
    • pp.192-200
    • /
    • 2008
  • Multimodal biometrics is a method for personal authentication and verification using two or more types of biometric data. RBF-based score-level fusion applies a pattern recognition algorithm to multimodal biometrics, seeking the optimal decision boundary to classify score feature vectors, each of which consists of the matching scores obtained from several unimodal biometric systems for a sample. In this case, all matching scores are assumed to have the same reliability. However, recent research reports that the quality of the input sample affects biometric performance, and matching scores with low reliability caused by low-quality samples are not considered in pattern recognition modeling for multimodal biometrics. To solve this problem, this paper proposes an RBF-based score-level fusion approach that employs quality information from the input biometric data to adjust the decision boundary. As a result, the proposed method using quality information showed better recognition performance than both unimodal biometrics and the usual RBF-based score-level fusion without quality information.
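A minimal sketch of quality-aware score-level fusion, assuming a pre-trained RBF network (the centers, weights, and sigma below are hypothetical) and one simple reading of "quality information": each unimodal matching score is down-weighted by its sample quality before classification. The paper's actual boundary-adjustment scheme is not specified here.

```python
import math

def rbf_score(x, centers, weights, sigma):
    """Evaluate a (pre-trained) RBF network on a score vector x."""
    total = 0.0
    for c, w in zip(centers, weights):
        sq_dist = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        total += w * math.exp(-sq_dist / (2 * sigma ** 2))
    return total

def fuse_scores(scores, qualities, centers, weights, sigma, threshold=0.5):
    """Quality-aware fusion: down-weight each unimodal matching score
    by its sample quality, then classify with the RBF network."""
    adjusted = [s * q for s, q in zip(scores, qualities)]
    return rbf_score(adjusted, centers, weights, sigma) >= threshold
```

With a single "genuine" prototype center at (1.0, 1.0), high-quality score vectors near the prototype are accepted, while the same scores with one low-quality channel are pulled away from the prototype and rejected.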

Extraction Analysis for Crossmodal Association Information using Hypernetwork Models (하이퍼네트워크 모델을 이용한 비전-언어 크로스모달 연관정보 추출)

  • Heo, Min-Oh;Ha, Jung-Woo;Zhang, Byoung-Tak
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2009.02a
    • /
    • pp.278-284
    • /
    • 2009
  • Multimodal data, which have several modalities such as video, images, sound, and text for one piece of content, are increasing. Since this type of data has an ill-defined format, it is not easy to represent its crossmodal information explicitly. We therefore propose a new method to extract and analyze vision-language crossmodal association information, using documentary video data about nature. We collected pairs of images and captions from three genres of documentaries (jungle, ocean, and universe) and extracted a set of visual words and a set of text words from them. From this analysis, we found that the two modalities carry semantic associations in their crossmodal association information.
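The extraction step can be illustrated by its pairwise core: counting how often a visual word and a text word co-occur across image-caption pairs. The paper's hypernetwork samples higher-order hyperedges over both vocabularies; the sketch below, with invented documentary data, shows only the simplest pairwise association counting.

```python
from collections import Counter
from itertools import product

def crossmodal_associations(samples):
    """Count co-occurrences of (visual word, text word) across
    image-caption pairs; a pairwise stand-in for hyperedge sampling."""
    counts = Counter()
    for visual_words, text_words in samples:
        for v, t in product(set(visual_words), set(text_words)):
            counts[(v, t)] += 1
    return counts

# Hypothetical documentary frames: visual words extracted from image
# patches, paired with words from the caption.
data = [
    ({"blue", "wave"}, {"ocean", "deep"}),
    ({"blue", "fish"}, {"ocean", "reef"}),
    ({"green", "leaf"}, {"jungle"}),
]
assoc = crossmodal_associations(data)
```

Frequent pairs (here, "blue" with "ocean") indicate semantic association between the visual and language modalities.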

Sensitivity Lighting System Based on multimodal (멀티모달 기반의 감성 조명 시스템)

  • Kwon, Sun-Min;Jung, In-Bum
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.16 no.4
    • /
    • pp.721-729
    • /
    • 2012
  • In this paper, human sensibility is measured in a multimodal environment, and a sensitivity lighting system is implemented according to the derived emotional indexes. We use LED lighting because it is ecologically friendly, highly efficient, and long-lived. In particular, LED lighting provides various color schemes even in a single lighting bulb. To recognize human sensibility, we use image information and arousal state information, which form the multimodal basis, and calculate emotional indexes from them. In experiments, as the LED lighting color varies according to users' emotional indexes, we show that the system provides human-friendly lighting compared to existing systems.
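The last step, driving an LED color from an emotional index, can be sketched as below. The valence/arousal parameterization and the particular color mapping are illustrative assumptions, not the paper's method.

```python
def emotion_to_rgb(valence, arousal):
    """Map a 2-D emotion index (valence and arousal, each in [-1, 1])
    to an LED RGB color. Illustrative mapping: arousal drives red,
    valence drives green, and calm (low-arousal) states go blue."""
    def clamp(x):
        return max(0, min(255, int(round(x))))
    red = clamp(127.5 * (1 + arousal))
    green = clamp(127.5 * (1 + valence))
    blue = clamp(127.5 * (1 - arousal))
    return red, green, blue
```

An excited, positive state thus renders as a warm yellow, while a subdued, negative state renders as blue.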

A Calibration Method for Multimodal dual Camera Environment (멀티모달 다중 카메라의 영상 보정방법)

  • Lim, Su-Chang;Kim, Do-Yeon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.19 no.9
    • /
    • pp.2138-2144
    • /
    • 2015
  • A multimodal dual camera system has a stereo-like configuration equipped with an infrared thermal camera and an optical camera. This paper presents a stereo calibration method for such a system, using a target board that can be recognized by both the thermal and the optical camera. While a typical stereo calibration method is usually performed with extracted intrinsic and extrinsic camera parameters, consecutive image processing steps are applied in this paper as follows. Firstly, corner points are detected in the two images, and the pixel error rate, the size difference, and the rotation angle between the two images are calculated using the pixel coordinates of the detected corner points. Secondly, calibration is performed with the calculated values via an affine transform. Lastly, the result image is reconstructed by mapping regions onto the calibrated image.
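The affine-transform step can be sketched as solving for the six affine parameters from corner correspondences between the thermal and optical images. The sketch below uses an exact solve from three corner pairs with invented coordinates; a real calibration would use many detected corners and a least-squares fit.

```python
def affine_from_corners(src, dst):
    """Solve dst = A @ src + t from three corner correspondences,
    returning (a, b, tx, c, d, ty) for the map
    (x, y) -> (a*x + b*y + tx, c*x + d*y + ty)."""
    def solve3(rows, rhs):
        # Gauss-Jordan elimination with partial pivoting on a 3x3 system
        m = [row[:] + [r] for row, r in zip(rows, rhs)]
        for i in range(3):
            p = max(range(i, 3), key=lambda r: abs(m[r][i]))
            m[i], m[p] = m[p], m[i]
            for r in range(3):
                if r != i:
                    f = m[r][i] / m[i][i]
                    m[r] = [mr - f * mi for mr, mi in zip(m[r], m[i])]
        return [m[i][3] / m[i][i] for i in range(3)]

    rows = [[x, y, 1.0] for x, y in src]
    a, b, tx = solve3(rows, [x for x, _ in dst])
    c, d, ty = solve3(rows, [y for _, y in dst])
    return (a, b, tx, c, d, ty)

def apply_affine(p, params):
    """Map a pixel coordinate through the estimated affine transform."""
    a, b, tx, c, d, ty = params
    x, y = p
    return (a * x + b * y + tx, c * x + d * y + ty)

# Hypothetical corners: the thermal image is shifted by (2, 3) pixels.
params = affine_from_corners([(0, 0), (1, 0), (0, 1)],
                             [(2, 3), (3, 3), (2, 4)])
```

Once the parameters are estimated, every pixel of one modality can be remapped into the other's coordinate frame to reconstruct the aligned result image.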

Multimodal Medical Image Fusion Based on Double-Layer Decomposer and Fine Structure Preservation Model (복층 분해기와 상세구조 보존모델에 기반한 다중모드 의료영상 융합)

  • Zhang, Yingmei;Lee, Hyo Jong
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.11 no.6
    • /
    • pp.185-192
    • /
    • 2022
  • Multimodal medical image fusion (MMIF) fuses two images containing different structural details, generated in two different modes, into one comprehensive, information-rich image, which can help doctors improve the accuracy of observation and treatment of patients' diseases. Therefore, a method based on a double-layer decomposer and a fine structure preservation model is proposed. Firstly, the double-layer decomposer decomposes the source images into energy layers and structure layers, which preserves details well. Secondly, the structure layers are fused by combining the structure tensor operator (STO) and a max-abs rule. As for the energy layers, a fine structure preservation model is proposed to guide the fusion, further improving image quality. Finally, the fused image is obtained by adding the two sub-fused images formed through the fusion rules. Experiments show that our method performs excellently compared with several typical fusion methods.
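The generic two-layer pipeline can be sketched as below, with stand-ins for the paper's components: a 3x3 box blur plays the role of the energy (base) layer, the residual is the structure (detail) layer, structure layers are fused by max-abs, and energy layers by plain averaging. The paper's actual decomposer, STO rule, and fine structure preservation model are not reproduced.

```python
def box_blur(img):
    """3x3 mean filter with replicated borders: a stand-in energy layer."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[min(max(i + di, 0), h - 1)][min(max(j + dj, 0), w - 1)]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            out[i][j] = sum(vals) / 9.0
    return out

def fuse_two_layer(img_a, img_b):
    """Two-layer fusion sketch: structure = image minus its blur,
    fused by max-abs; energy fused by averaging; then recombined."""
    ea, eb = box_blur(img_a), box_blur(img_b)
    h, w = len(img_a), len(img_a[0])
    fused = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            sa = img_a[i][j] - ea[i][j]            # structure (detail) layers
            sb = img_b[i][j] - eb[i][j]
            s = sa if abs(sa) >= abs(sb) else sb   # max-abs structure rule
            e = (ea[i][j] + eb[i][j]) / 2.0        # simple energy rule
            fused[i][j] = e + s
    return fused
```

The max-abs rule keeps, at each pixel, the stronger detail from either modality, which is why the fused image retains structures visible in only one of the source images.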