Title/Summary/Keyword: Multi-Modality

Interruption in Digital Convergence: Focused on Multi-Modality and Multi-Tasking (디지털 컨버전스에서의 인터럽션: 멀티 모달리티와 멀티 태스킹 간의 상호 관계를 중심으로)

  • Lee, Ki-Ho;Jung, Seung-Ki;Kim, Hye-Jin;Lee, In-Seong;Kim, Jin-Woo
    • Journal of the Ergonomics Society of Korea / v.26 no.3 / pp.67-80 / 2007
  • Digital convergence, defined as the creative fusion of once-independent technologies and services, has recently been attracting attention. Interruptions among internal functions happen frequently in digital convergence products because many functions that used to live in separate products are merged into a single product. Multi-tasking and multi-modality are two distinctive features of interruption in digital convergence products, but their impact on the user has not yet been investigated. This study conducted a controlled experiment to investigate the impact of multi-tasking and multi-modality on subjective satisfaction and objective performance with digital convergence products. The results indicate that multi-tasking and multi-modality have substantial effects both individually and jointly. The paper ends with the practical and theoretical implications of the results, as well as research limitations and directions for future research.
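
As an illustration of the 2x2 design this abstract implies (multi-tasking on/off crossed with single vs. multiple modality), here is a minimal sketch of the corresponding factorial analysis. The column names and generated data are hypothetical, not the authors' materials.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(7)

# Hypothetical per-participant performance scores for the 2x2 design.
df = pd.DataFrame({
    "multitasking":  np.repeat(["off", "on"], 20),
    "multimodality": np.tile(["single", "multi"], 20),
    "performance":   rng.normal(70, 10, size=40),
})

# Tests the main effects and the interaction the abstract reports.
model = smf.ols("performance ~ C(multitasking) * C(multimodality)", data=df).fit()
print(anova_lm(model, typ=2))
```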

A Framework of User Authentication for Financial Transaction based Multi-Biometrics in Mobile Environments (모바일 환경에서 다중 바이오인식 기반의 금융 거래를 위한 사용자 인증 프레임워크)

  • Han, Seung-Jin
    • Journal of the Korea Society of Computer and Information / v.20 no.1 / pp.143-151 / 2015
  • Biometric technology has been proposed as a new means of replacing conventional PINs or passwords because it is hard to lose and difficult to use illegally. However, unlike a PIN, a password, or other personal information, biometric data cannot be changed once it has been exposed and used illegally. The existing single modality with a single biometric is therefore critically vulnerable to exposure. In this paper, we instead use multi-modality and multi-biometrics for authentication between users and a TTP (trusted third party), or between users and financial institutions. We thereby propose a more reliable method and compare it with existing methods in terms of security and performance.
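
The abstract does not specify the fusion scheme; a common way to combine multiple biometrics is score-level fusion. A minimal sketch, with illustrative modality names, weights, and acceptance threshold (all assumptions, not the paper's values):

```python
def normalize(score: float, lo: float, hi: float) -> float:
    """Min-max normalize a raw matcher score to [0, 1]."""
    return (score - lo) / (hi - lo)

def fuse_scores(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted-sum fusion of normalized per-modality match scores."""
    total = sum(weights.values())
    return sum(weights[m] * s for m, s in scores.items()) / total

# Hypothetical matchers (scores already normalized to [0, 1]).
scores  = {"face": 0.82, "fingerprint": 0.91, "voice": 0.64}
weights = {"face": 0.4, "fingerprint": 0.4, "voice": 0.2}

ACCEPT_THRESHOLD = 0.75  # illustrative operating point
fused = fuse_scores(scores, weights)
print("authenticated" if fused >= ACCEPT_THRESHOLD else "rejected", round(fused, 3))
```

A single compromised modality pulls the fused score down rather than defeating authentication outright, which is the robustness argument the abstract makes.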

Multiple Task Performance and Psychological Refractory Period in Children: Focusing on PRP Paradigm Tasks (유아의 다중과제 수행과 심리적 불응기: PRP 패러다임 과제를 중심으로)

  • Kim, Bokyung;Yi, Soon Hyung
    • Korean Journal of Child Studies / v.38 no.3 / pp.75-90 / 2017
  • Objective: This study aimed to identify children's cognitive processing and performance characteristics during multiple task performance. It examined whether their multiple task performance and psychological refractory period (PRP) varied by task condition (stimulus onset asynchrony [SOA] and task difficulty) and stimulus modality. Methods: Seventy 5-year-olds were recruited. Multi-task tools were developed using the E-Prime software. The children were required to respond to two stimuli (visual or auditory) presented with a very short time difference, and their response times (RTs) were recorded. Results: As the SOA increased, the RTs in the first task increased, while the RTs in the second task and the PRP decreased. The RTs of the first and second tasks, and the PRP, were significantly longer for difficult tasks than for easy tasks. Additionally, there was an interaction effect between SOA and task difficulty. Although there was no main effect of stimulus modality, task difficulty moderated the modality effect: in the high-difficulty condition, the RTs of the first and second tasks and the PRP were significantly longer for the visual-visual task than for the auditory-auditory task. Conclusion: These results inform theoretical discussions on children's multi-task mechanisms and the performance cost of multiple tasks. They also provide practical implications and information on the composition of multi-tasks suitable for children in educational environments.
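
For readers unfamiliar with the paradigm, a minimal sketch of how the PRP effect is typically quantified: the slowing of the second response at short versus long SOA. The RT samples below are hypothetical.

```python
import statistics

# Hypothetical second-task response times (ms), grouped by SOA (ms).
rt2_by_soa = {
    50:   [980, 1010, 995, 1002],   # short SOA: second response is delayed
    150:  [890, 905, 880, 898],
    1000: [620, 640, 615, 633],     # long SOA: tasks barely overlap
}

mean_rt2 = {soa: statistics.mean(rts) for soa, rts in rt2_by_soa.items()}

# PRP effect: second-task slowing at the shortest SOA relative to the longest.
prp = mean_rt2[min(mean_rt2)] - mean_rt2[max(mean_rt2)]
print(f"PRP effect: {prp:.1f} ms")  # positive value = refractory cost
```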

A Study on the Weight Allocation Method of Humanist Input Value and Multiplex Modality using Tacit Data (암묵 데이터를 활용한 인문학 인풋값과 다중 모달리티의 가중치 할당 방법에 관한 연구)

  • Lee, Won-Tae;Kang, Jang-Mook
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.14 no.4 / pp.157-163 / 2014
  • A user's emotional state is recognized as a very important parameter in communication between companies, government, and individuals. In many studies, researchers use voice tone, voice speed, facial expression, the direction and speed of body movement, and gestures to recognize this state. Multiplex modality is more precise than a single modality, but it has a limited recognition rate, incurs data-processing overload from multi-sensing, and requires a good algorithm to deduce the sensed value. That is, because each modality has a different concept and different properties, errors may occur when converting human sensibility into standard values. To deal with this, the modality that expresses sensibility needs to be extracted from the multiplex modality using techniques such as relational network analysis, context understanding, and digital filtering. If, in a specific sensibility-recognition situation, the priority modality is processed in full while the surrounding modalities are reduced to implicit values, a robust system can be composed with less consumption of computing resources. As a result, this paper proposes how to assign weights to multiplex modalities using implicit (tacit) data.
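
A hedged sketch of the weight-allocation idea described above: give the priority modality a fixed share of the weight and split the remainder among the surrounding modalities in proportion to their recognition confidence. The modality names, priority share, and confidences are assumptions, not the paper's values.

```python
def allocate_weights(confidences: dict[str, float], priority: str,
                     priority_share: float = 0.6) -> dict[str, float]:
    """Give priority_share to the priority modality; split the remaining
    weight among the other modalities in proportion to their confidence."""
    others = {m: c for m, c in confidences.items() if m != priority}
    rest = sum(others.values()) or 1.0  # guard against all-zero confidences
    weights = {m: (1 - priority_share) * c / rest for m, c in others.items()}
    weights[priority] = priority_share
    return weights

# Hypothetical per-modality recognition confidences for affect sensing.
conf = {"voice_tone": 0.7, "facial_expression": 0.9, "gesture": 0.5}
print(allocate_weights(conf, priority="facial_expression"))
# Weights sum to 1.0; low-confidence modalities contribute proportionally less.
```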

The Effects of Multi-Modality on the Use of Smart Phones

  • Lee, Gaeun;Kim, Seongmin;Choe, Jaeho;Jung, Eui Seung
    • Journal of the Ergonomics Society of Korea / v.33 no.3 / pp.241-253 / 2014
  • Objective: The objective of this study was to examine the multi-modal interaction effects of input-mode switching on the use of smart phones. Background: Multi-modality is considered an efficient alternative for the input and output of information in mobile environments. However, current mobile UI (user interface) systems have various limitations: they overlook transitions between different modes and the usability of combined multi-modal use. Method: A pre-survey identified five representative smart phone tasks by function. The first experiment involved the use of a uni-mode for five single tasks; the second experiment involved the use of a multi-mode for three dual tasks. The dependent variables were user preference and task completion time. The independent variable in the first experiment was the type of mode (i.e., touch, pen, or voice), while the variable in the second experiment was the type of task (i.e., internet searching, subway map, memo, gallery, and application store). Results: In the first experiment, there was no difference between the use of pen and touch input; however, a specific mode type was preferred depending on the functional characteristics of the task. In the second experiment, the results showed that user preference depended on the order and combination of modes. Even with mode transitions, users preferred multi-modes that included voice. Conclusion: The order and combination of modes may affect the usability of multi-modes. Therefore, when designing a multi-modal system, the frequent transitions between various mobile contents in different modes should be properly considered. Application: The findings may be utilized as a user-centered design guideline for mobile multi-modal UI systems.
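
A minimal sketch of the kind of comparison the first experiment reports: mean task completion time per input mode and task. The tasks, modes, and times below are hypothetical.

```python
import pandas as pd

# Hypothetical trial data: completion time (s) per task and input mode.
df = pd.DataFrame({
    "task": ["search", "search", "search", "memo", "memo", "memo"],
    "mode": ["touch", "pen", "voice"] * 2,
    "completion_s": [12.1, 12.4, 9.8, 15.2, 11.0, 10.1],
})

# Mean completion time by task and mode; the best mode differs by task,
# which is the pattern the abstract describes.
print(df.pivot_table(index="task", columns="mode", values="completion_s"))
```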

Multi-modality image fusion via generalized Riesz-wavelet transformation

  • Jin, Bo;Jing, Zhongliang;Pan, Han
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.11 / pp.4118-4136 / 2014
  • To preserve the spatial consistency of low-level features, the generalized Riesz-wavelet transform (GRWT) is adopted for fusing multi-modality images. The proposed method can capture directional image structure arbitrarily by exploiting a suitably parameterized fusion model and additional structural information. Its fusion patterns are controlled by a heuristic fusion model based on image phase and coherence features, which explores and keeps structural information efficiently and consistently. A performance analysis of the proposed method on real-world images demonstrates that it is competitive with state-of-the-art fusion methods, especially in combining structural information.
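
The GRWT itself is specialized, but the coefficient-level fusion pattern such methods build on can be sketched with an ordinary separable wavelet: average the coarse approximation and keep the larger-magnitude detail coefficient. This is a generic illustration, not the paper's method.

```python
import numpy as np
import pywt

def fuse_wavelet(img_a: np.ndarray, img_b: np.ndarray,
                 wavelet: str = "db2", level: int = 2) -> np.ndarray:
    """Fuse two registered single-channel images in the wavelet domain."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2]                 # average approximation band
    for da, db in zip(ca[1:], cb[1:]):            # per-level detail bands
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)

# Hypothetical co-registered modality images (e.g., two medical scans).
a = np.random.rand(64, 64)
b = np.random.rand(64, 64)
print(fuse_wavelet(a, b).shape)
```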

Human Action Recognition Via Multi-modality Information

  • Gao, Zan;Song, Jian-Ming;Zhang, Hua;Liu, An-An;Xue, Yan-Bing;Xu, Guang-Ping
    • Journal of Electrical Engineering and Technology / v.9 no.2 / pp.739-748 / 2014
  • In this paper, we propose pyramid appearance and global structure action descriptors on both RGB and depth motion history images (MHIs), together with a model-free method for human action recognition. In the proposed algorithm, we first construct motion history images for both the RGB and depth channels, with the depth information used to filter the RGB information. Different action descriptors are then extracted from the depth and RGB MHIs to represent the actions. Finally, a multi-modality collaborative representation and recognition model, in which the multi-modality information enters the objective function naturally so that information fusion and action recognition are performed together, is proposed to classify human actions. To demonstrate the superiority of the proposed method, we evaluate it on the MSR Action3D and DHA datasets, both well-known benchmarks for human action recognition. Large-scale experiments show that our descriptors are robust, stable, and efficient, and that they outperform state-of-the-art algorithms; furthermore, the combined descriptors perform much better than any single descriptor. Moreover, the proposed model outperforms state-of-the-art methods on both the MSR Action3D and DHA datasets.
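
A minimal NumPy sketch of the motion history image construction the pipeline starts from, for a single channel; the descriptor extraction and collaborative-representation classifier are beyond this sketch, and all thresholds are illustrative.

```python
import numpy as np

def update_mhi(mhi: np.ndarray, frame_diff: np.ndarray,
               tau: float = 255.0, delta: float = 255.0 / 30,
               motion_thresh: float = 25.0) -> np.ndarray:
    """Set currently moving pixels to tau; decay static pixels by delta."""
    moving = frame_diff > motion_thresh
    return np.where(moving, tau, np.maximum(mhi - delta, 0.0))

# Hypothetical grayscale frame sequence (RGB or depth channel).
frames = [np.random.randint(0, 256, (120, 160)).astype(float) for _ in range(10)]
mhi = np.zeros_like(frames[0])
for prev, cur in zip(frames, frames[1:]):
    mhi = update_mhi(mhi, np.abs(cur - prev))
print(mhi.min(), mhi.max())  # recent motion is bright, older motion has faded
```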

Ubiquitous Context-aware Modeling and Multi-Modal Interaction Design Framework (유비쿼터스 환경의 상황인지 모델과 이를 활용한 멀티모달 인터랙션 디자인 프레임웍 개발에 관한 연구)

  • Kim, Hyun-Jeong;Lee, Hyun-Jin
    • Archives of design research / v.18 no.2 s.60 / pp.273-282 / 2005
  • In this study, we proposed the Context Cube as a conceptual model of user context, together with a multi-modal interaction design framework for developing ubiquitous services by understanding user context and analyzing the correlation between context awareness and multi-modality, both of which help infer the meaning of a context and offer services that meet user needs. We also developed a case study to verify the Context Cube's validity and applied the proposed interaction design framework to derive a personalized ubiquitous service. Context awareness can be understood in terms of information properties consisting of the user's basic activity, the location of the user and devices (environment), the time, and the user's daily schedule; this enables us to construct a three-dimensional conceptual model, the Context Cube. We also developed a ubiquitous interaction design process that encompasses multi-modal interaction design by studying the features of user interaction represented on the Context Cube.
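
A hedged sketch of the three-dimensional Context Cube idea: indexing user context by activity, location/device, and time, and looking up a matching service. The dimension values and service map are illustrative assumptions, not the paper's model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Context:
    activity: str   # e.g., "commuting", "meeting"
    location: str   # e.g., "home", "office", "subway"
    time_slot: str  # e.g., "morning", "afternoon"

# The cube as a sparse mapping from context cells to candidate services.
context_cube = {
    Context("commuting", "subway", "morning"): "push news briefing (audio mode)",
    Context("meeting", "office", "afternoon"): "mute alerts; route calls to text",
}

def infer_service(ctx: Context) -> str:
    """Look up the service adaptation for a context cell, if any."""
    return context_cube.get(ctx, "no adaptation")

print(infer_service(Context("commuting", "subway", "morning")))
```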
