• Title/Abstract/Keyword: Human-computer-interaction

Search results: 634 items (processing time 0.026 s)

Integrated Approach of Multiple Face Detection for Video Surveillance

  • Kim, Tae-Kyun;Lee, Sung-Uk;Lee, Jong-Ha;Kee, Seok-Cheol;Kim, Sang-Ryong
    • Proceedings of the IEEK Conference
    • /
    • Proceedings of the IEEK 2003 Summer Conference, Vol. IV
    • /
    • pp.1960-1963
    • /
    • 2003
  • For applications such as video surveillance and human-computer interfaces, we propose an efficient, integrated method to detect and track faces. Various visual cues are combined in the algorithm: motion, skin color, global appearance, and facial pattern detection. The ICA (Independent Component Analysis)-SVM (Support Vector Machine)-based pattern detection is performed on the candidate region extracted from motion, color, and global appearance information. Simultaneous execution of detection and short-term tracking also increases the rate and accuracy of detection. Experimental results show a detection rate of 91% with very few false alarms, running at about 4 frames per second for 640 by 480 pixel images on a 1 GHz Pentium IV.

  • PDF
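The cue-combination pipeline described in the abstract above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the skin-color and motion thresholds are made up, and the `verify` callback stands in for the ICA-SVM pattern detector.

```python
import numpy as np

def skin_color_mask(frame):
    # Hypothetical skin-color cue: threshold the normalized red channel.
    r = frame[..., 0] / (frame.sum(axis=-1) + 1e-6)
    return (r > 0.35) & (r < 0.55)

def motion_mask(frame, prev_frame, thresh=15):
    # Hypothetical motion cue: simple frame differencing.
    diff = np.abs(frame.astype(int) - prev_frame.astype(int)).sum(axis=-1)
    return diff > thresh

def detect_faces(frame, prev_frame, verify):
    # Combine the cheap cues into a candidate region, then run the
    # expensive pattern verifier (ICA features + SVM in the paper)
    # only on the candidate region's bounding box.
    candidates = skin_color_mask(frame) & motion_mask(frame, prev_frame)
    ys, xs = np.nonzero(candidates)
    if len(ys) == 0:
        return []
    box = (xs.min(), ys.min(), xs.max(), ys.max())
    return [box] if verify(frame, box) else []
```

Restricting the verifier to cue-supported candidate regions is what makes near-real-time rates plausible on modest hardware.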

Interaction Protocol on the COLAB Platform

  • Kwon, Daniel D.;Suh, Young-Ho;Kim, Yong;Hwang, Dae-Joon
    • Proceedings of the Korean Society for Emotion and Sensibility Conference
    • /
    • Proceedings of the 1998 Spring Conference of the Korean Society for Emotion and Sensibility
    • /
    • pp.304-308
    • /
    • 1998
  • Technical advances in computer networks and the Internet have brought a new communication era and provide effective solutions for cooperative work and research. These advances introduced the concept of cyberspace, in which many people at different locations can take part in research and projects at the same time. In this paper, we present a fast and effective interaction protocol adapted to the COLAB (COLlaborative LABoratory) system, which uses a high-speed ATM network. The COLAB system is developed for researchers carrying out large projects in a collaborative research environment. The interaction protocol we developed supports multiple sessions and multiple channels over a TCP/IP network and provides a flexible way to control multimedia data on the network.

  • PDF
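The abstract does not specify the COLAB wire format, but a multi-session, multi-channel protocol over TCP/IP generally tags each message with session and channel identifiers so that several logical streams can share one connection. A purely hypothetical framing sketch:

```python
import json
import struct

def pack_message(session_id, channel_id, payload):
    # Hypothetical framing: a fixed 8-byte header (session id, channel id,
    # body length) followed by a JSON-encoded body. The real COLAB wire
    # format is not described in the abstract.
    body = json.dumps(payload).encode()
    return struct.pack("!HHI", session_id, channel_id, len(body)) + body

def unpack_message(data):
    # Parse the header, then decode exactly `length` bytes of body.
    session_id, channel_id, length = struct.unpack("!HHI", data[:8])
    return session_id, channel_id, json.loads(data[8:8 + length].decode())
```

A receiver can demultiplex by dispatching on `(session_id, channel_id)`, which is what lets one TCP connection carry many concurrent collaboration channels.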

Multi-channel Speech Enhancement Using Blind Source Separation and Cross-channel Wiener Filtering

  • Jang, Gil-Jin;Choi, Chang-Kyu;Lee, Yong-Beom;Kim, Jeong-Su;Kim, Sang-Ryong
    • The Journal of the Acoustical Society of Korea
    • /
    • Vol. 23, No. 2E
    • /
    • pp.56-67
    • /
    • 2004
  • Despite abundant research on blind source separation (BSS) in many types of simulated environments, its performance is still not satisfactory in real environments. The major obstacles appear to be the finite filter length of the assumed mixing model and nonlinear sensor noise. This paper presents a two-step speech enhancement method with multiple microphone inputs. The first step performs a frequency-domain BSS algorithm to produce multiple outputs without any prior knowledge of the mixed source signals. The second step further removes the remaining cross-channel interference by a spectral cancellation approach using a probabilistic source absence/presence detection technique. The desired primary source is detected in every frame of the signal, and the secondary source is estimated in the power spectral domain using the other BSS output as a reference interfering source. The estimated secondary source is then subtracted to reduce the cross-channel interference. Our experimental results show good separation and enhancement performance on real recordings of speech and music signals compared to conventional BSS methods.
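The second-stage cancellation can be illustrated in the power spectral domain. This is a simplified spectral-subtraction-style sketch, not the paper's exact Wiener formulation: the `leak_gain` and `floor` parameters are hypothetical, and the presence/absence weighting is reduced to a fixed gain.

```python
import numpy as np

def cross_channel_cancel(primary_psd, reference_psd, leak_gain=0.3, floor=0.05):
    # Treat the other BSS output as a reference for the interference that
    # leaked into the primary channel: subtract its scaled power spectrum,
    # then floor the result to avoid negative power, as in classic
    # spectral subtraction. In the paper the subtraction would be gated
    # per frame by a probabilistic source absence/presence detector.
    cleaned = primary_psd - leak_gain * reference_psd
    return np.maximum(cleaned, floor * primary_psd)
```

Flooring at a fraction of the original spectrum trades residual interference for reduced musical-noise artifacts, a standard compromise in spectral-domain enhancement.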

3D Pose Estimation of a Human Arm for Human-Computer Interaction - Application of Mechanical Modeling Techniques to Computer Vision

  • Han Young-Mo
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • Vol. 42, No. 4
    • /
    • pp.11-18
    • /
    • 2005
  • To express intention, humans often use body language as well as vocal language, and gestures using the arms and hands are the most representative form of body language. It is therefore very important to understand human arm motion in human-computer interaction. In this respect, we present how to estimate the 3D pose of human arms using computer vision systems. We first focus on the idea that human arm motion consists mostly of revolute joint motions, and present an algorithm for understanding the 3D motion of a revolute joint using vision systems. We then apply it to estimating the 3D pose of human arms: the algorithm for a single revolute joint is applied to each of the revolute joints of the human arm, one after another. In designing the algorithms we focus on seeking closed-form solutions with high accuracy, because we aim to apply them to human-computer interaction for ubiquitous computing and virtual reality.
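The per-joint idea can be illustrated with a closed-form angle computation for a single revolute joint, assuming the joint axis and the link directions are already known in 3D; recovering those quantities from images, as the paper does, is the harder part and is not shown here.

```python
import numpy as np

def joint_angle(axis, p_ref, p_obs):
    # Closed-form angle of a revolute joint: project the reference and
    # observed link vectors onto the plane perpendicular to the joint
    # axis, then take the signed angle between the two projections.
    a = axis / np.linalg.norm(axis)
    u = p_ref - np.dot(p_ref, a) * a   # projection of reference link
    v = p_obs - np.dot(p_obs, a) * a   # projection of observed link
    norm = np.linalg.norm(u) * np.linalg.norm(v)
    cos_t = np.dot(u, v) / norm
    sin_t = np.dot(a, np.cross(u, v)) / norm
    return np.arctan2(sin_t, cos_t)
```

Applying such a per-joint solution sequentially from the shoulder outward, as the abstract suggests, yields the full arm pose one revolute joint at a time.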

Smart Deaf Emergency Application Based on Human-Computer Interaction Principles

  • Ahmed, Thowiba E;Almadan, Naba Abdulraouf;Elsadek, Alma Nabil;Albishi, Haya Zayed;Al-Qahtani, Norah Eid;Alghamdi, arah Khaled
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 21, No. 4
    • /
    • pp.284-288
    • /
    • 2021
  • Human-computer interaction is a discipline concerned with the design, evaluation, and implementation of interactive systems for human use. In this paper we propose the design of a smart deaf emergency application based on Human-Computer Interaction (HCI) principles. Nowadays everything around us is becoming smart: people already have smartphones, smartwatches, smart cars, smart houses, and many other technologies that offer a wide range of useful options. We therefore propose a smart mobile application using Text Telephone, or TeleTYpe (TTY), technology to help people with deafness or impaired hearing communicate and seek help in emergencies. Deaf people find it difficult to communicate with others, especially in an emergency, yet in all societies deaf people must have the same right to use emergency services as everyone else. With the proposed application, people with deafness or impaired hearing can request help with one touch; their location is determined and their status is sent to the emergency services through the application, making it easier to reach them and provide assistance. The application covers several emergency categories (traffic, police, road safety, ambulance, firefighting). The expected result of this design is an interactive, experiential, efficient, and comprehensive human-computer interactive application that achieves user satisfaction.

Robot Vision to Audio Description Based on Deep Learning for Effective Human-Robot Interaction

  • Park, Dongkeon;Kang, Kyeong-Min;Bae, Jin-Woo;Han, Ji-Hyeong
    • The Journal of Korea Robotics Society
    • /
    • Vol. 14, No. 1
    • /
    • pp.22-30
    • /
    • 2019
  • For effective human-robot interaction, a robot needs not only to understand the current situational context well, but also to convey its understanding to the human participant in an efficient way. The most convenient way to deliver the robot's understanding is for the robot to express it using voice and natural language. Recently, artificial intelligence for video understanding and natural language processing has developed very rapidly, especially based on deep learning. This paper therefore proposes a deep-learning-based method for turning robot vision into an audio description. The applied model is a pipeline of two deep learning models: one generating a natural language sentence from robot vision, and one generating voice from the generated sentence. We also conduct a real robot experiment to show the effectiveness of our method in human-robot interaction.
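The two-stage structure can be sketched as plain function composition, with `captioner` and `tts` as stand-ins for the two trained deep models (the abstract does not name the specific architectures):

```python
def vision_to_audio(frames, captioner, tts):
    # Two-stage pipeline: a video-captioning model turns the robot's
    # camera frames into a natural language sentence, and a text-to-speech
    # model turns that sentence into an audio waveform. Both callables
    # are hypothetical placeholders for the trained models.
    sentence = captioner(frames)
    audio = tts(sentence)
    return sentence, audio
```

Keeping the caption as an intermediate, human-readable artifact is what lets the robot both speak and log what it believes it is seeing.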

Bringing Human Computer Interaction in Computer Science Classrooms : Case Study on Teaching User-Centric Design to Computer Science Students

  • Jeong, Young-Joo;Jeong, Goo-Cheol
    • The Journal of Korean Institute for Practical Engineering Education
    • /
    • Vol. 2, No. 1
    • /
    • pp.164-173
    • /
    • 2010
  • In recent decades, a focus on usability and an emphasis on user-centric design have become more prevalent in the field of software design. However, it is not always easy for engineers and computer scientists to put themselves in the users' shoes. Human-computer interaction (HCI) is a field of study focused on making technologies easier and more intuitive for users. This paper is based on teaching HCI skills to undergraduate computer science students in a software application design course. Specifically, this paper presents: first, the HCI skills taught to the students; second, the tendencies and challenges of the students in creating user-centric applications; and lastly, suggestions based on our findings for promoting HCI in developing user-friendly software. While firmer conclusions shall be reserved for more formal empirical studies, the findings in this paper offer implications and suggestions for promoting a user-centric approach among software designers and developers in the technology industry.

  • PDF

A Cognitive and Emotional Strategy for Computer Game Design

  • Choi, Dong-Seong;Kim, Ho-Young;Kim, Jin-Woo
    • Asia Pacific Journal of Information Systems
    • /
    • Vol. 10, No. 1
    • /
    • pp.165-187
    • /
    • 2000
  • The computer game market has grown rapidly, with numerous games produced all over the world. Most games are developed to make gamers have fun while playing, yet there has been little research on the elements of games that create the perception of fun. The objectives of this research are to identify which features provide fun, and then to analyze these aspects both qualitatively and quantitatively. Through surveys of game players and developers, this study provides several insights into what makes certain computer games fun. Fun games share many common characteristics; by grouping and organizing these traits and compiling the data for use in an AHP (Analytic Hierarchy Process), we measured the disparity in the perception of 'fun' between game developers and game users.

  • PDF
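The AHP step mentioned above can be illustrated with the standard principal-eigenvector weighting of a pairwise-comparison matrix; the "fun" factors and comparison values in the test are invented for illustration, not taken from the paper.

```python
import numpy as np

def ahp_priorities(comparison):
    # Standard AHP weighting: the normalized principal (dominant)
    # eigenvector of the pairwise-comparison matrix gives the relative
    # priorities of the compared factors.
    vals, vecs = np.linalg.eig(np.asarray(comparison, dtype=float))
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()  # sign and scale normalize out here
```

Computing priorities separately from developer and player comparison data, then comparing the two weight vectors, is one way to quantify the "fun-perception" disparity the abstract describes.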

Design of Parallel Input Pattern and Synchronization Method for Multimodal Interaction

  • Im, Mi-Jeong;Park, Beom
    • Journal of the Ergonomics Society of Korea
    • /
    • Vol. 25, No. 2
    • /
    • pp.135-146
    • /
    • 2006
  • Multimodal interfaces are recognition-based technologies that interpret and encode hand gestures, eye gaze, movement patterns, speech, physical location, and other natural human behaviors. A modality is the type of communication channel used for interaction; it also covers the way an idea is expressed or perceived, or the manner in which an action is performed. Multimodal interfaces constitute the multimodal interaction processes that occur, consciously or unconsciously, while a human communicates with a computer, so their input/output forms differ from those of existing interfaces. Moreover, different people show different cognitive styles, and individual preferences play a role in the selection of one input mode over another. Therefore, to develop an effective design for multimodal user interfaces, the input/output structure needs to be formulated through research on human cognition. This paper analyzes the characteristics of each human modality and suggests combinations of modalities and dual coding for formulating multimodal interaction. It then designs a multimodal language and an input synchronization method according to the granularity of input synchronization. To effectively guide the development of next-generation multimodal interfaces, substantial cognitive modeling will be needed to understand the temporal and semantic relations between different modalities, their joint functionality, and their overall potential for supporting computation in different forms. This paper is expected to show multimodal interface designers how to organize and integrate human input modalities while interacting with multimodal interfaces.
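One simple realization of input-synchronization granularity is a time window that groups events arriving on parallel modalities into a single multimodal command (the classic "put that there" pattern of co-timed speech and pointing). The event format and window length below are hypothetical, not the paper's design:

```python
from dataclasses import dataclass

@dataclass
class ModalityEvent:
    modality: str   # e.g. "speech", "gesture", "gaze"
    payload: str
    t: float        # arrival timestamp in seconds

def fuse_events(events, window=0.5):
    # Group events whose timestamps fall within `window` seconds of the
    # first event in the group; each group is treated as one multimodal
    # command. The window length is the synchronization granularity.
    events = sorted(events, key=lambda e: e.t)
    groups, current = [], []
    for e in events:
        if current and e.t - current[0].t > window:
            groups.append(current)
            current = []
        current.append(e)
    if current:
        groups.append(current)
    return groups
```

Choosing the window is the crux: too short and co-intended speech and gesture are split apart; too long and separate commands are merged, which is why the paper ties the design to cognitive research on temporal relations between modalities.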