• Title/Summary/Keyword: Human-Computer Interaction

Search Results: 277

User Adaptation Using User Model in Intelligent Image Retrieval System (지능형 화상 검색 시스템에서의 사용자 모델을 이용한 사용자 적응)

  • Kim, Yong-Hwan;Rhee, Phill-Kyu
    • The Transactions of the Korea Information Processing Society / v.6 no.12 / pp.3559-3568 / 1999
  • Information overload is an inevitable problem of modern electronic life. It is increasingly difficult to retrieve information that satisfies a user's needs from an uncontrolled flood of digital resources, such as the rapidly growing internet. Many information retrieval systems have therefore been researched and deployed. Text retrieval systems have largely met users' information needs, whereas image retrieval systems have not dealt with them adequately. In this paper, we propose an intelligent user interface for image retrieval to address this problem. It is based on the HCOS (Human-Computer Symmetry) model, a layered interaction model between human and computer. Its methodology reduces the user's information overhead and the semantic gap between user and system. The interface is implemented with machine learning algorithms, a decision tree and a backpropagation neural network, to give the intelligent image retrieval system (IIRS) user adaptation capabilities.
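The abstract names a decision tree and a backpropagation neural network for user adaptation but gives no implementation details. As a loose illustration only (the feature names and feedback data below are hypothetical, not from the paper), the sketch trains a single logistic neuron by gradient descent, the simplest case of backpropagation, to predict whether a user will judge a retrieved image relevant:

```python
import math

def train_relevance_model(samples, labels, lr=0.5, epochs=2000):
    """Train a single logistic neuron (the simplest backprop case)
    on (feature_vector, relevant?) feedback collected from a user."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
            g = p - y                        # gradient of log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical feedback: [color_match, texture_match] -> clicked or not.
X = [[0.9, 0.8], [0.8, 0.9], [0.2, 0.1], [0.1, 0.3]]
y = [1, 1, 0, 0]
w, b = train_relevance_model(X, y)
```

After training, the model scores new images by their feature match, adapting the ranking to this user's past feedback.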


Development of the Guidelines for Expressing Big Data Visualization (공간빅데이터 시각화 가이드라인 연구)

  • Kim, So-Yeon;An, Se-Yun;Ju, Hannah
    • The Journal of the Korea Contents Association / v.21 no.2 / pp.100-112 / 2021
  • With the recent growth of the big data technology market, interest in visualization technology has steadily increased in the past few years. Data visualization is used across a wide range of disciplines, such as information science, computer science, human-computer interaction, statistics, data mining, cartography, and journalism, with a slightly different meaning in each. In smart cities, which require multidisciplinary research, big data visualization enables an objective and scientific approach to developing user-centered smart city services and related policies. In particular, spatially based data visualization supports efficient collaboration among diverse stakeholders during the city policy-making process. In this paper, we derive a user-centered method for expressing spatial big data visualizations by examining the process and principles of spatially based big data visualization from the viewpoint of effective information delivery, rather than treating visualization as merely a tool.

An Exploratory Research for Development of Design of Sensor-based Smart Clothing - Focused on the Healthcare Clothing Based on Bio-monitoring Technology - (센서 기반형 스마트 의류의 디자인 개발을 위한 탐색적 연구 - 생체 신호 센서 기술에 기반한 건강관리용 의류를 중심으로 -)

  • Cho Ha-Kyung;Lee Joo-Hyeon;Lee Chung-Keun;Lee Myoung-Ho
    • Science of Emotion and Sensibility / v.9 no.2 / pp.141-150 / 2006
  • Since the late 1990s, 'smart clothing' has been developed in various ways to meet users' needs and to help people interact with computers more naturally through diverse designs. Recently, researchers have presented various applications of the smart clothing concept. Among them, smart clothing with a health care system is likely to see the highest demand in the market; in particular, clothing that checks health status with built-in sensors is expected to outsell other types. Research and development in this field have accordingly accelerated. Our research institution has developed biometric sensors suitable for smart clothing and designed garments for diagnosing diseases such as cardiac and respiratory disorders. The smart clothing newly developed in this study looks similar to previous designs, but its built-in fabric interaction makes it more comfortable to wear. When worn, it monitors the wearer's health status and transmits the signals to a connected computer, so the results can be observed easily in real time. This smart clothing is a new kind of garment that supports the prevention of cardiac and respiratory diseases using its built-in biometric sensors, and it serves as an archetype of how smart clothing can work in the market.


The Effect of Data-Guided Artificial Wind in a Yacht VR Experience on Positive Affect (요트 VR 체험에서 데이터 기반의 인공풍이 정적 정서에 미치는 영향)

  • Cho, Yesol;Lee, Yewon;Lim, Dojeon;Ryoo, Taedong;Jonas, John Claud;Na, Daeyoung;Han, Daseong
    • Journal of the Korea Computer Graphics Society / v.28 no.3 / pp.67-77 / 2022
  • The touch of natural wind is one of the most common sensations people experience in daily life. However, little research has examined how natural wind can be reproduced in a VR environment and whether multisensory content with artificial wind improves human emotion. To address these issues, we propose a wind reproduction VR system guided by video and wind capture data, and we study its effect on positive affect. We collected wind direction and speed data together with a 360-degree video on a yacht, and used these data to produce a multisensory VR environment with our system. Nineteen college students participated in experiments in which the Korean version of the Positive and Negative Affect Schedule (K-PANAS) was used to measure their emotions. We found that the 'inspired' and 'active' emotions increased significantly after experiencing the yacht VR content with artificial wind. Our results also show that the 'interested' emotion is the one most notably affected by the presence or absence of wind. The presented system can be used effectively in various VR applications such as interactive media and experiential content.

A Finger Counting Method for Gesture Recognition (제스처 인식을 위한 손가락 개수 인식 방법)

  • Lee, DoYeob;Shin, DongKyoo;Shin, DongIl
    • Journal of Internet Computing and Services / v.17 no.2 / pp.29-37 / 2016
  • Humans develop and maintain relationships through communication, which divides broadly into verbal and non-verbal forms. Verbal communication uses language or characters, while non-verbal communication uses body language. In everyday conversation we use gestures together with language. Gestures belong to non-verbal communication and can convey an opinion through a variety of shapes and movements. For this reason, gestures have drawn attention as a means of implementing NUI/NUX in the fields of HCI and HRI. In this paper, we propose a method for detecting the hand region and recognizing the number of extended fingers using Kinect and the geometric features of the hand. A Kinect depth image is used to detect the hand region, and the finger count is identified by comparing the distances between the hand outline and the central point of the hand. The proposed method achieves an average finger-count recognition rate of 98.5%. By increasing the expressive range of gestures, it can help enhance human-computer interaction.
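The core geometric idea in this abstract, comparing outline-to-center distances to count fingers, can be sketched without Kinect hardware. This is a simplified illustration, not the authors' code: the contour is synthetic and the threshold is made up. Each contiguous run of contour points far from the palm center is counted as one fingertip:

```python
import math

def count_fingers(contour, center, thresh):
    """Count fingertips as contiguous runs of contour points whose
    distance from the palm center exceeds `thresh`."""
    fingers = 0
    inside = False
    for p in contour:
        d = math.dist(p, center)
        if d > thresh and not inside:   # entering a far run: a fingertip region
            fingers += 1
            inside = True
        elif d <= thresh:
            inside = False
    return fingers

# Synthetic closed contour: a unit circle with three radial bumps ("fingers").
contour = []
for deg in range(360):
    r = 2.0 if deg % 120 < 20 else 1.0
    a = math.radians(deg)
    contour.append((r * math.cos(a), r * math.sin(a)))
print(count_fingers(contour, (0.0, 0.0), 1.5))  # -> 3
```

A real pipeline would first segment the hand from the depth image and trace its contour; this sketch covers only the counting step.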

Study of the user-oriented GUI design to build the database of Korean traditional materials on internet (인터넷에서의 한국 전통 소재 데이터베이스 구축을 위한 사용자 중심의 그래픽 유저 인터페이스 디자인 연구)

  • 이현주;박영순;김영인;김서경;방경란;이정현
    • Archives of design research / v.13 no.4 / pp.125-135 / 2000
  • As computers have become widespread and multimedia and the internet are used frequently, the volume and exchange of information have increased explosively. A more organized information system is therefore needed for efficient use of information, and the importance of information design and interface design is widely recognized. This thesis presents a user-oriented interface design prototype based on analysis of the user environment and information design. To this end, it begins with a theoretical study of information design and of the principles and elements of GUI (Graphical User Interface) design. Information design subdivides and organizes information according to its characteristics and systematizes the information structure in view of the interrelationships among pieces of information. GUI design, that is, graphics-based interface design, should consider GUI elements and principles so that users can access information easily. Much information thus needs to be structured systematically and also visualized in an interface design that is aesthetic as well as functional. In conclusion, this study presents an effective user-oriented interface design prototype by surveying theories of information design and GUI design and applying them to a GUI design.


Toward an integrated model of emotion recognition methods based on reviews of previous work (정서 재인 방법 고찰을 통한 통합적 모델 모색에 관한 연구)

  • Park, Mi-Sook;Park, Ji-Eun;Sohn, Jin-Hun
    • Science of Emotion and Sensibility / v.14 no.1 / pp.101-116 / 2011
  • Current research on emotion detection classifies emotions using information from facial, vocal, and bodily expressions, or from physiological responses. This study reviews three representative emotion recognition methods grounded in psychological theories of emotion. First, we review methods based on facial expressions, which are supported by Darwin's theory. Second, we review methods based on physiological changes, which rely on James's theory. Last, we review multimodal methods that combine signals from the face, dialogue, posture, or the peripheral nervous system, which draw on both theories. In each part, we examine research findings as well as the theoretical background on which each method relies. The review argues that an integrated model of emotion recognition methods is needed to advance the field. The integrated model suggests that emotion recognition should incorporate additional physiological signals, such as brain responses or facial temperature, should be based on a multidimensional model of emotion, and should take into account cognitive appraisal factors during emotional experience.


Face Recognition using Eigenfaces and Fuzzy Neural Networks (고유 얼굴과 퍼지 신경망을 이용한 얼굴 인식 기법)

  • 김재협;문영식
    • Journal of the Institute of Electronics Engineers of Korea CI / v.41 no.3 / pp.27-36 / 2004
  • Detecting and recognizing human faces in images is an important aspect of applications involving interaction between humans and computers. In this paper, we propose a face recognition method using eigenfaces and fuzzy neural networks. Principal Component Analysis (PCA) is one of the most successful techniques used to recognize faces in images. In this technique, the eigenvectors (eigenfaces) and eigenvalues are extracted from a covariance matrix constructed from the image database. Face recognition is performed by projecting an unknown image into the subspace spanned by the eigenfaces and comparing its position in face space with the positions of known individuals. Based on this technique, we propose a new face recognition algorithm consisting of five steps: preprocessing, eigenface generation, design of fuzzy membership functions, neural network training, and recognition. First, each face image in the database is preprocessed and the eigenfaces are created. Fuzzy membership degrees are assigned to 135 eigenface weights, and these degrees are fed into a neural network for training. After training, the output value of the network is interpreted as the degree of closeness to each face in the training database.
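The PCA recipe stated in the abstract (covariance matrix, eigenfaces, projection into face space, nearest-match comparison) can be sketched in a few lines of NumPy. The tiny random "faces" below are stand-ins for real images, and the paper's fuzzy membership and neural network stages are omitted:

```python
import numpy as np

def build_eigenfaces(images, k):
    """images: (n, d) array of flattened face images; returns mean face
    and the top-k eigenvectors (eigenfaces) of the covariance matrix."""
    mean = images.mean(axis=0)
    A = images - mean
    cov = A.T @ A / len(images)
    vals, vecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    eigenfaces = vecs[:, ::-1][:, :k]   # top-k principal directions
    return mean, eigenfaces

def project(x, mean, eigenfaces):
    return (x - mean) @ eigenfaces      # weights in face space

def recognize(x, gallery, mean, eigenfaces):
    """Index of the gallery face closest to x in face space."""
    w = project(x, mean, eigenfaces)
    W = project(gallery, mean, eigenfaces)
    return int(np.argmin(np.linalg.norm(W - w, axis=1)))

rng = np.random.default_rng(0)
gallery = rng.normal(size=(5, 64))      # five fake 8x8 "faces", flattened
mean, eig = build_eigenfaces(gallery, k=3)
probe = gallery[2] + rng.normal(scale=0.05, size=64)  # noisy view of face 2
print(recognize(probe, gallery, mean, eig))
```

The paper replaces the final nearest-neighbor step with fuzzy membership degrees fed into a trained neural network; this sketch shows only the eigenface front end.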

Design of Korean eye-typing interfaces based on multilevel input system (단계식 입력 체계를 이용한 시선 추적 기반의 한글 입력 인터페이스 설계)

  • Kim, Hojoong;Woo, Sung-kyung;Lee, Kunwoo
    • Journal of the HCI Society of Korea / v.12 no.4 / pp.37-44 / 2017
  • Eye-typing is a human-computer input method driven by gaze location data. Because it requires no physical motion other than eye movement, it is widely used as an input system for people with paralysis. However, no eye-typing interface based on Korean characters has yet been proposed. This research therefore implements an eye-typing interface optimized for Korean. To begin with, design objectives were established from two characteristic problems of eye-typing: significant noise and the Midas touch problem. A multilevel input system was introduced to deal with the noise, and an area free of input buttons was applied to solve the Midas touch problem. Two eye-typing interfaces were then proposed on phonological grounds, reflecting that each Korean syllable is generated by combining several phonemes. Named the consonant-vowel integrated interface and the separated interface, the two designs input Korean in stages through grouped phonemes. Finally, they were evaluated through comparative experiments against the conventional Double-Korean keyboard interface and through analysis of gaze flow. The newly designed interfaces showed potential as practical eye-typing tools.

A Study on Hand Region Detection for Kinect-Based Hand Shape Recognition (Kinect 기반 손 모양 인식을 위한 손 영역 검출에 관한 연구)

  • Park, Hanhoon;Choi, Junyeong;Park, Jong-Il;Moon, Kwang-Seok
    • Journal of Broadcast Engineering / v.18 no.3 / pp.393-400 / 2013
  • Hand shape recognition is a fundamental technique for natural human-computer interaction. In this paper, we discuss how to detect the hand region effectively for Kinect-based hand shape recognition. Because Kinect captures color images and infrared (depth) images together, both can be exploited to detect the hand region: the hand can be found either from pixels with skin color or from pixels at a specific depth. After analyzing the performance of each approach, we therefore need a method that properly combines the two to extract a clean silhouette of the hand region, since the hand shape recognition rate depends on the fineness of the detected silhouette. Finally, by comparing the hand shape recognition rates obtained with different hand region detection methods in general environments, we propose a high-performance hand region detection method.
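The abstract describes two cues for hand detection, skin color and depth, and the need to combine them. One simple fusion, shown here as an assumption (the paper's exact combination rule is not given in the abstract), is a logical AND: keep only pixels that both look like skin and lie in the expected hand depth range:

```python
import numpy as np

def hand_mask(color_mask, depth, near, far):
    """Combine a skin-color mask with a depth gate: keep only pixels
    that pass both the color test and the depth-range test."""
    depth_mask = (depth >= near) & (depth <= far)
    return color_mask & depth_mask

# Toy 4x4 frame: skin detected in the left half, hand depth ~0.5 m
# in the top half, background at 2.0 m below.
color_mask = np.array([[1, 1, 0, 0]] * 4, dtype=bool)
depth = np.array([[0.5] * 4] * 2 + [[2.0] * 4] * 2)
mask = hand_mask(color_mask, depth, near=0.3, far=0.8)
print(mask.sum())  # -> 4 (only the top-left 2x2 block survives both tests)
```

The AND suppresses skin-colored background objects (wrong depth) and near objects that are not skin, which is why fusing the two cues yields a cleaner silhouette than either alone.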