• Title/Abstract/Keyword: human and computer interaction

607 search results (processing time: 0.028 s)

Analysis on Psychological and Educational Effects in Children and Home Robot Interaction

  • 김병준;한정혜
    • 정보교육학회논문지
    • /
    • Vol. 9, No. 3
    • /
    • pp.501-510
    • /
    • 2005
  • For home robots to interact smoothly with humans, research on human-robot interaction (HRI) is urgently needed. This study examined how interaction with the recently developed home robot 'iRobi' affected children's psychological perceptions, and how effective learning with the home robot was. On the psychological side, interaction with the home robot led children to perceive it as a friendly partner capable of interaction and was found to relieve their anxiety. On the learning side, using the home robot produced higher learning concentration, interest, and academic achievement than other learning media (books, WBI). Home robots therefore appear to hold positive value as emotional and educational interaction tools for children.


Challenges and New Approaches in Genomics and Bioinformatics

  • Park, Jong Hwa;Han, Kyung Sook
    • Genomics & Informatics
    • /
    • Vol. 1, No. 1
    • /
    • pp.1-6
    • /
    • 2003
  • In conclusion, the seemingly fuzzy and disorganized data of biology, with thousands of different layers ranging from the molecule to the Internet, have so far resisted precise mapping and successful prediction by mathematicians, physicists, and computer scientists. Genomics and bioinformatics are the fields that process such complex data. Insights into the nature of biological entities as complex interaction networks are opening a door toward a generalized representation of biological entities. The main challenge of genomics and bioinformatics now lies in 1) how to mine the networks of the domains of bioinformatics, namely the literature, metabolic pathways, proteomes, and structures, in terms of interaction; and 2) how to generalize these networks so that the information can be integrated into computable genomic data regardless of layer. Once bioinformatists succeed in finding a general principle for the way components interact with each other to form organic interaction networks at genomic scale, true simulation and prediction of life in silico will become possible.

Introduction to Visual Analytics Research

  • 오유상;이충기;오주영;양지현;곽희나;문성우;박소환;고성안
    • 한국컴퓨터그래픽스학회논문지
    • /
    • Vol. 22, No. 5
    • /
    • pp.27-36
    • /
    • 2016
  • Building on computer graphics and human-computer interaction (HCI) technology, visualization tools for effective data analysis have advanced considerably. This work grew into the research field of visual analytics; since its first symposium in 2006, the field has expanded to combine various data mining and interaction techniques with information visualization, studying user-centered big-data analysis and decision-making systems. In Korea, however, the field is still not widely known, and compared to domestic computer graphics and HCI research, techniques for designing systems that support big-data analysis and decision making through visualization lag behind. This paper therefore reviews the basic philosophy of visual analytics research and surveys the data and visualization techniques used in papers published at the 2015 IEEE Symposium on Visual Analytics Science and Technology (VAST), to help domestic computer graphics researchers understand the field.

Probabilistic Graph Based Object Category Recognition Using the Context of Object-Action Interaction

  • 윤성백;배세호;박한재;이준호
    • 한국통신학회논문지
    • /
    • Vol. 40, No. 11
    • /
    • pp.2284-2290
    • /
    • 2015
  • Human actions are highly effective context information for improving category recognition of objects with large appearance variation. In this study, we used human actions as context for object category recognition through a simple probabilistic graph model based on a Bayesian approach. Experiments on cups, phones, scissors, and spray bottles of various appearances showed that recognizing the human action associated with an object's use improved object recognition performance by 8% to 28%.
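The abstract's idea of fusing appearance evidence with action context in a Bayesian graph can be sketched as a toy naive-Bayes posterior. All probabilities below are hypothetical illustrations, not values from the paper:

```python
# Toy fusion of appearance and action evidence:
# P(object | appearance, action) ∝ P(appearance|object) * P(action|object) * P(object)

objects = ["cup", "phone", "scissors", "spray"]
prior = {o: 0.25 for o in objects}                      # uniform prior

# Appearance alone is ambiguous between cup and spray bottle.
p_appearance = {"cup": 0.30, "phone": 0.10, "scissors": 0.10, "spray": 0.30}

# Observing a "drinking" action strongly favors the cup.
p_action_drinking = {"cup": 0.70, "phone": 0.05, "scissors": 0.05, "spray": 0.20}

def posterior(p_app, p_act, prior):
    """Multiply the likelihoods with the prior and normalize."""
    unnorm = {o: p_app[o] * p_act[o] * prior[o] for o in prior}
    z = sum(unnorm.values())
    return {o: v / z for o, v in unnorm.items()}

post = posterior(p_appearance, p_action_drinking, prior)
best = max(post, key=post.get)
```

With the action observed, the posterior resolves the appearance ambiguity in favor of the cup, which is the mechanism behind the reported 8%-28% improvement.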

Classification of Three Different Emotion by Physiological Parameters

  • Jang, Eun-Hye;Park, Byoung-Jun;Kim, Sang-Hyeob;Sohn, Jin-Hun
    • 대한인간공학회지
    • /
    • Vol. 31, No. 2
    • /
    • pp.271-279
    • /
    • 2012
  • Objective: This study classified three different emotional states (boredom, pain, and surprise) using physiological signals. Background: Emotion recognition studies have tried to recognize human emotion from physiological signals; reliable emotion detection is important for applying emotion recognition to human-computer interaction systems. Method: 122 college students participated in this experiment. Three different emotional stimuli were presented to participants, and physiological signals, i.e., EDA (electrodermal activity), SKT (skin temperature), PPG (photoplethysmogram), and ECG (electrocardiogram), were measured for 1 minute as a baseline and for 1~1.5 minutes during the emotional state. The signals were analyzed for 30 seconds of the baseline and of the emotional state, and 27 features were extracted. Emotion classification was performed with DFA (discriminant function analysis) in SPSS 15.0, using the difference values obtained by subtracting the baseline from the emotional state. Results: Physiological responses during emotional states differed significantly from baseline, and the emotion classification accuracy was 84.7%. Conclusion: Our study showed that emotions can be classified from various physiological signals. However, future studies should obtain additional signals from other modalities, such as facial expression, face temperature, or voice, to improve the classification rate, and should examine the stability and reliability of this result against the accuracy of other classification algorithms. Application: This work can help emotion recognition studies recognize various human emotions from physiological signals and can be applied to human-computer interaction systems for emotion detection. It can also be useful for developing emotion theory, profiling emotion-specific physiological responses, and establishing the basis for emotion recognition systems in human-computer interaction.
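The study's preprocessing step, subtracting each participant's baseline from the emotional-state measurement before classifying, can be sketched as follows. The real study used discriminant function analysis in SPSS; a simple nearest-centroid rule stands in for it here, and all feature values and centroids are hypothetical:

```python
# Baseline subtraction followed by a nearest-centroid classifier
# over difference features [ΔEDA, ΔSKT] (stand-in for the paper's DFA).

def baseline_diff(emotional, baseline):
    """Difference features: emotional-state value minus baseline value."""
    return [e - b for e, b in zip(emotional, baseline)]

centroids = {                 # hypothetical mean difference vectors per emotion
    "boredom":  [0.1, -0.2],
    "pain":     [1.5,  0.4],
    "surprise": [0.8, -0.6],
}

def classify(diff):
    """Assign the emotion whose centroid is nearest in squared distance."""
    def dist2(c):
        return sum((d - x) ** 2 for d, x in zip(diff, c))
    return min(centroids, key=lambda k: dist2(centroids[k]))

sample = baseline_diff(emotional=[3.2, 33.0], baseline=[1.8, 32.7])
label = classify(sample)
```

Working on differences rather than raw values removes per-participant offsets, which is why the study subtracts the baseline before running the discriminant analysis.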

Two person Interaction Recognition Based on Effective Hybrid Learning

  • Ahmed, Minhaz Uddin;Kim, Yeong Hyeon;Kim, Jin Woo;Bashar, Md Rezaul;Rhee, Phill Kyu
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 13, No. 2
    • /
    • pp.751-770
    • /
    • 2019
  • Action recognition is an essential task in computer vision due to the variety of prospective applications, such as security surveillance, machine learning, and human-computer interaction. The availability of more video data than ever before and the strong performance of deep convolutional neural networks also make action recognition in video essential. Unfortunately, limited hand-crafted video features and the scarcity of benchmark datasets make the multi-person action recognition task in video challenging. In this work, we propose a deep convolutional neural network-based Effective Hybrid Learning (EHL) framework for two-person interaction classification in video. Our approach exploits a pre-trained network model (VGG16 from the University of Oxford Visual Geometry Group) and extends Faster R-CNN (a state-of-the-art region-based convolutional neural network detector). We combine a semi-supervised learning method with an active learning method to improve overall performance. Numerous types of two-person interaction exist in the real world, which makes this a challenging task. In our experiments, we consider a limited number of actions, such as hugging, fighting, linking arms, talking, and kidnapping, in two environments: simple and complex. We show that our trained model with an active semi-supervised learning architecture gradually improves performance. In a simple environment, using the Intelligent Technology Laboratory (ITLab) dataset from Inha University, accuracy increased to 95.6%; in a complex environment, accuracy reached 81%. Compared to supervised learning methods, our method reduces data-labeling time for the ITLab dataset. We also conduct extensive experiments on human action recognition benchmarks such as the UT-Interaction and HMDB51 datasets and obtain better performance than state-of-the-art approaches.
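The active semi-supervised routing the abstract describes (high-confidence predictions become pseudo-labels; low-confidence clips are sent to a human annotator) can be sketched schematically. Clip names, labels, confidence scores, and thresholds below are hypothetical; the real pipeline gets its confidences from a VGG16 / Faster R-CNN model:

```python
# Schematic active semi-supervised routing of unlabeled video clips.

unlabeled = {                    # clip_id -> (predicted_label, confidence)
    "clip01": ("hugging", 0.97),
    "clip02": ("fighting", 0.55),
    "clip03": ("talking", 0.91),
    "clip04": ("linking arms", 0.40),
}

PSEUDO_LABEL_THRESHOLD = 0.90    # accept the model's prediction above this
QUERY_THRESHOLD = 0.60           # ask a human annotator below this

pseudo_labeled, to_annotate = {}, []
for clip, (label, conf) in unlabeled.items():
    if conf >= PSEUDO_LABEL_THRESHOLD:
        pseudo_labeled[clip] = label   # added to the training set automatically
    elif conf < QUERY_THRESHOLD:
        to_annotate.append(clip)       # routed to the active-learning query
```

Each retraining round shrinks the queried set, which is how the approach reduces labeling time relative to fully supervised training.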

Realistic Visual Simulation of Water Effects in Response to Human Motion using a Depth Camera

  • Kim, Jong-Hyun;Lee, Jung;Kim, Chang-Hun;Kim, Sun-Jeong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 11, No. 2
    • /
    • pp.1019-1031
    • /
    • 2017
  • In this study, we propose a new method for simulating water responding to human motion. Motion data obtained from motion-capture devices are represented as a jointed skeleton, which interacts with the velocity field in the water simulation. To integrate the motion data into the water simulation space, it is necessary to establish a mapping relationship between two fields with different properties. However, there can be severe numerical instability if the mapping breaks down, with the realism of the human-water interaction being adversely affected. To address this problem, our method extends the joint velocity mapped to each grid point to neighboring nodes. We refine these extended velocities to enable increased robustness in the water solver. Our experimental results demonstrate that water animation can be made to respond to human motions such as walking and jumping.
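The coupling step the abstract describes, mapping each joint's velocity to its enclosing grid node and extending it to neighboring nodes, can be sketched in 2-D. The grid size, kernel radius, and falloff weights below are illustrative choices, not the paper's values:

```python
# Splat a skeleton joint's velocity onto an N x N grid, extending it to
# neighboring nodes with a simple distance-based falloff so the water
# solver receives a smooth field instead of a single isolated sample.

N = 8
grid = [[(0.0, 0.0)] * N for _ in range(N)]     # (vx, vy) per node

def splat(grid, jx, jy, vel, radius=1):
    gx, gy = int(jx), int(jy)                   # enclosing grid node
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            x, y = gx + dx, gy + dy
            if 0 <= x < N and 0 <= y < N:
                w = 1.0 / (1 + abs(dx) + abs(dy))   # illustrative falloff
                vx, vy = grid[y][x]
                grid[y][x] = (vx + w * vel[0], vy + w * vel[1])

splat(grid, jx=3.4, jy=4.7, vel=(2.0, 0.0))     # e.g. a hand moving in +x
```

Spreading the sample over a neighborhood avoids the degenerate single-node mapping that, per the abstract, causes numerical instability in the water solver.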

Digital Leveraging: The Methodology of Applying Technology to Human Life

  • 한석영;김희철;황원주
    • 한국멀티미디어학회논문지
    • /
    • Vol. 22, No. 2
    • /
    • pp.322-333
    • /
    • 2019
  • Since the launch of smartphones, various miniaturized smart devices, such as wearables and IoT devices, have become deeply embedded in human life and have created a technology-oriented society. In such a society, technology development itself is important, but it seems more important to use existing technology appropriately and deliver it effectively to human life. As computing became personal after the appearance of the PC, human-centered computing such as HCI and UCD began to appear. However, most of this research focused on technology that makes it convenient for humans to interact with computers, such as computer systems design and UX development. In a technology-oriented society, it seems more urgent to apply existing technology to human life. In this paper, we propose a methodology, 'Digital Leveraging', which guides how to apply technology effectively to human life. Digital Leveraging is a way of converging technology and the humanities.

A Motion Capture and Mapping System: Kinect Based Human-Robot Interaction Platform

  • 윤중선
    • 한국산학기술학회논문지
    • /
    • Vol. 16, No. 12
    • /
    • pp.8563-8567
    • /
    • 2015
  • We propose a human-robot interaction platform based on motion capture and mapping. We describe the design, operation, and implementation of a platform that performs capture, processing, and execution: capturing human motion, planning motion from the captured data, and actuating a device. As implementation examples, we describe a reliable, high-performance Kinect-based capture module, an interactive cyber avatar robot implemented in the processor, and control of a physical robot through the processor. The proposed platform and its implementations are expected to serve as a way of realizing a new device control scheme based on motion capture and mapping.
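The capture-process-execute pipeline the platform describes can be sketched minimally: a captured joint angle is clamped into a safe range and turned into an actuator command. The joint names and servo range are illustrative assumptions; a real system would read the skeleton stream from the Kinect SDK and drive actual hardware:

```python
# Minimal capture -> process -> execute sketch for joint-to-servo mapping.

def process(joint_angle_deg, servo_min=0.0, servo_max=180.0):
    """Clamp a captured human joint angle into the servo's safe range."""
    return max(servo_min, min(servo_max, joint_angle_deg))

def execute(servo_cmds):
    """Stand-in for sending commands to the physical (or avatar) robot."""
    return {joint: f"servo<-{angle}" for joint, angle in servo_cmds.items()}

captured = {"right_elbow": 200.0, "left_elbow": 95.0}   # hypothetical capture
commands = execute({j: process(a) for j, a in captured.items()})
```

Clamping in the processing stage keeps out-of-range capture data (e.g. tracking glitches) from being forwarded to the robot, which matters for the reliability the abstract emphasizes.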