• Title/Summary/Keyword: Human-Computer Interaction (인간과 컴퓨터의 상호작용)

Search Results: 280

Study of the user-oriented GUI design to build the database of Korean traditional materials on internet (인터넷에서의 한국 전통 소재 데이터베이스 구축을 위한 사용자 중심의 그래픽 유저 인터페이스 디자인 연구)

  • 이현주;박영순;김영인;김서경;방경란;이정현
    • Archives of design research / v.13 no.4 / pp.125-135 / 2000
  • As computers have become popular and multimedia and the internet are widely used, the volume and exchange of information have increased explosively. A more organized information system is therefore needed for more efficient use of information, and the importance of information design and interface design is now widely recognized. This thesis presents a user-oriented interface design prototype based on an analysis of the user environment and information design. For this purpose, a theoretical study of information design and of the principles and elements of GUI (Graphic User Interface) design is conducted first. Information design subdivides and organizes information according to its characteristics and systemizes the information structure in consideration of the interrelationships among pieces of information. GUI design, that is, interface design based on graphics, should take GUI elements and principles into account so that users can access information easily. In short, large amounts of information need to be structured systematically and also visualized in the interface design, aesthetically as well as functionally. In conclusion, this study presents an effective user-oriented interface design prototype by reviewing theories of information design and GUI design and applying them to GUI design.


Toward an integrated model of emotion recognition methods based on reviews of previous work (정서 재인 방법 고찰을 통한 통합적 모델 모색에 관한 연구)

  • Park, Mi-Sook;Park, Ji-Eun;Sohn, Jin-Hun
    • Science of Emotion and Sensibility / v.14 no.1 / pp.101-116 / 2011
  • Current research on emotion detection classifies emotions using information from facial, vocal, and bodily expressions or from physiological responses. This study reviews three representative emotion recognition methods, each grounded in psychological theories of emotion. First, a literature review of emotion recognition methods based on facial expressions was conducted; these studies are supported by Darwin's theory. Second, methods based on physiological changes were reviewed; this line of research relies on James' theory. Last, emotion recognition based on multimodality (i.e., combining signals from the face, dialogue, posture, or the peripheral nervous system) was reviewed; these studies draw on both Darwin's and James' theories. In each part, research findings were examined along with the theoretical background on which each method relies. The review argues for an integrated model of emotion recognition methods to advance how emotion is recognized. The integrated model suggests that emotion recognition methods should include additional physiological signals such as brain responses or facial temperature, should be based on a multidimensional model of emotion, and should take cognitive appraisal factors during emotional experience into consideration.
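An integrated, multimodal recognizer of the kind proposed here is, at its simplest, a late-fusion classifier over several signal streams. The sketch below illustrates only that idea; the label set, modality names, and equal weighting are assumptions, and cognitive appraisal factors would enter as additional features or priors rather than as anything shown here.

```python
import numpy as np

# Illustrative label set; a dimensional (valence/arousal) view would use continuous targets instead.
LABELS = ["happy", "sad", "angry", "relaxed"]

def late_fusion(per_modality_probs, weights=None):
    """Combine class probabilities from several modalities.
    per_modality_probs: dict mapping a modality name (e.g. 'face', 'voice',
    'physiology', 'brain') to a probability vector over LABELS."""
    probs = np.stack(list(per_modality_probs.values()))        # (M, C)
    if weights is None:
        weights = np.full(len(per_modality_probs), 1.0 / len(per_modality_probs))
    fused = weights @ probs                                    # weighted average over modalities
    return LABELS[int(np.argmax(fused))], fused

# Example: fuse face-based and physiology-based classifier outputs.
label, fused = late_fusion({
    "face":       np.array([0.6, 0.1, 0.2, 0.1]),
    "physiology": np.array([0.3, 0.2, 0.1, 0.4]),
})
```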


Face Recognition using Eigenfaces and Fuzzy Neural Networks (고유 얼굴과 퍼지 신경망을 이용한 얼굴 인식 기법)

  • 김재협;문영식
    • Journal of the Institute of Electronics Engineers of Korea CI / v.41 no.3 / pp.27-36 / 2004
  • Detection and recognition of human faces in images is an important aspect of applications that involve interaction between humans and computers. In this paper, we propose a face recognition method using eigenfaces and fuzzy neural networks. Principal Component Analysis (PCA) is one of the most successful techniques used to recognize faces in images. In this technique, the eigenvectors (eigenfaces) and eigenvalues are extracted from a covariance matrix constructed from the image database. Face recognition is performed by projecting an unknown image into the subspace spanned by the eigenfaces and comparing its position in face space with the positions of known individuals. Based on this technique, we propose a new face recognition algorithm consisting of five steps: preprocessing, eigenface generation, design of the fuzzy membership function, training of the neural network, and recognition. First, each face image in the face database is preprocessed and eigenfaces are created. Fuzzy membership degrees are assigned to 135 eigenface weights, and these membership degrees are then input to a neural network for training. After training, the output value of the neural network is interpreted as the degree of closeness to each face in the training database.
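The eigenface projection step described in this abstract is standard PCA and can be sketched compactly. The snippet below is a minimal NumPy illustration, not the authors' implementation; the 135-component count, the Gaussian-shaped membership function, and all variable names are assumptions for demonstration only.

```python
import numpy as np

def build_eigenfaces(images, n_components=135):
    """images: (N, H*W) array of flattened, preprocessed face images."""
    mean_face = images.mean(axis=0)
    centered = images - mean_face
    # SVD of the centered data yields the eigenvectors of the covariance matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:n_components]             # (n_components, H*W)
    weights = centered @ eigenfaces.T          # projection of each gallery image
    return mean_face, eigenfaces, weights

def fuzzy_membership(weights, centers, widths):
    """Assumed Gaussian-style membership of each eigenface weight; a stand-in
    for the paper's fuzzy membership function design."""
    return np.exp(-((weights - centers) / widths) ** 2)

def project(image, mean_face, eigenfaces):
    """Project an unknown face into the eigenface subspace."""
    return (image - mean_face) @ eigenfaces.T
```

The resulting membership vector would then be fed to a small feed-forward network trained on the gallery identities; nearest-neighbour comparison of the projected weights is the classical eigenface baseline that the fuzzy-neural variant builds on.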

Inexpensive Visual Motion Data Glove for Human-Computer Interface Via Hand Gesture Recognition (손 동작 인식을 통한 인간 - 컴퓨터 인터페이스용 저가형 비주얼 모션 데이터 글러브)

  • Han, Young-Mo
    • The KIPS Transactions: Part B / v.16B no.5 / pp.341-346 / 2009
  • The motion data glove is a representative human-computer interaction tool that inputs human hand gestures to computers by measuring their motions. It is essential equipment for new computer technologies including home automation, virtual reality, biometrics, and motion capture. To promote its popular use, this paper attempts to develop an inexpensive visual-type motion data glove that can be used without any special equipment. The proposed approach has a distinctive feature: it can be built at low cost because it does not use the expensive motion-sensing fibers employed in conventional approaches, which makes easy production and widespread use possible. Instead of a mechanical method using motion-sensing fibers, it adopts a visual method obtained by improving conventional optical motion capture technology. Compared with conventional visual methods, the proposed method has the following advantages and original contributions. First, conventional visual methods use many cameras and much equipment to reconstruct 3D pose while eliminating occlusions, whereas the proposed method adopts a monocular vision approach that keeps the equipment simple and inexpensive. Second, conventional monocular methods have difficulty reconstructing the 3D pose of occluded parts in images because they are vulnerable to occlusion, whereas the proposed approach can reconstruct occluded parts by using originally designed thin-bar-shaped optical indicators. Third, many conventional methods use nonlinear numerical image analysis algorithms and therefore suffer from initialization issues and long computation times; the proposed method addresses these inconveniences with a closed-form image analysis algorithm obtained from an original formulation. Fourth, many conventional closed-form algorithms use approximations in their formulation and thus suffer from low accuracy and limited applicability due to singularities; the proposed method avoids these disadvantages through an original formulation in which the closed-form algorithm is derived using exponential-form twist coordinates instead of approximations or local parameterizations such as Euler angles.
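The exponential-form twist coordinates mentioned in the last point parameterize a rigid-body motion without the singularities of Euler angles. The following is a minimal sketch of the standard SE(3) exponential map from screw theory, offered only as background for that formulation; it is not the paper's closed-form image analysis algorithm, and the function names are illustrative.

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix of a 3-vector, so that hat(w) @ x == np.cross(w, x)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def twist_exp(w, v, theta):
    """Exponential map of a twist (w, v) with magnitude theta -> 4x4 SE(3) matrix.
    w is a unit rotation axis, v the translational part of the twist."""
    W = hat(w)
    # Rodrigues' rotation formula for the rotational part.
    R = np.eye(3) + np.sin(theta) * W + (1.0 - np.cos(theta)) * (W @ W)
    # Translational part of the SE(3) exponential (Murray-Li-Sastry form).
    p = (np.eye(3) - R) @ (W @ v) + np.outer(w, w) @ v * theta
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T
```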

A Study on the Interaction Smart Space Model in the Untact Environment (언택트 환경에서의 스마트 인터랙션 공간 모델 연구)

  • Yun, Chang Ok;Lee, Byung Chun;Kwon, Kyung Su
    • Journal of the Korea Convergence Society / v.12 no.1 / pp.89-97 / 2021
  • Recently, as forced indoor living has become more important in the untact era, the connections and relationships between spatial environments are growing. A smart interaction environment for providing services in various spaces collects and processes surrounding-environment information through various sensors and delivers the desired information at the required place and time. In such an environment, a new interaction paradigm is needed so that the user can select and focus on environmental information. In this paper, we provide guidelines based on models and patterns for designing various interactions around space: through an interaction-model-based approach we provide guidelines for space-oriented interaction design, and through guideline-based patterns and templates we propose an ideal interaction environment. Finally, by providing a space-oriented interaction environment suitable for smart interaction, users can freely obtain the information they want.

A Finger Counting Method for Gesture Recognition (제스처 인식을 위한 손가락 개수 인식 방법)

  • Lee, DoYeob;Shin, DongKyoo;Shin, DongIl
    • Journal of Internet Computing and Services / v.17 no.2 / pp.29-37 / 2016
  • Humans develop and maintain relationships through communication, which is largely divided into verbal and non-verbal communication. Verbal communication involves the use of language or characters, while non-verbal communication uses body language. In everyday conversation we use gestures together with language. Gestures belong to non-verbal communication and can express an opinion through a variety of shapes and movements. For this reason, gestures have drawn attention as a means of implementing NUI/NUX in the fields of HCI and HRI. In this paper, using Kinect and the geometric features of the hand, we propose a method for detecting the hand region and recognizing the number of extended fingers. The Kinect depth image is used to detect the hand region, and the number of fingers is identified by comparing the distances between the hand outline and the center point of the hand. The average recognition rate of the proposed method for the number of fingers is 98.5%. The proposed method can help enhance human-computer interaction by widening the expressive range of gestures.
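The outline-versus-center comparison can be sketched as a threshold on the contour's distance profile. This assumes the hand mask has already been segmented from the depth image; the ratio threshold and helper names are illustrative, not values from the paper.

```python
import numpy as np

def count_fingers(outline_points, palm_center, finger_ratio=1.6):
    """Count extended fingers from a hand outline.
    outline_points: (N, 2) array of ordered contour coordinates.
    palm_center: (2,) array, e.g. the centroid of the hand mask.
    finger_ratio: distance threshold relative to the mean contour distance
                  (an assumed value, tuned per setup)."""
    dists = np.linalg.norm(outline_points - palm_center, axis=1)
    threshold = finger_ratio * dists.mean()
    above = dists > threshold
    # Each contiguous run of contour points beyond the threshold is one fingertip region;
    # counting rising edges of the boolean profile counts those runs.
    runs = np.diff(above.astype(int))
    return int((runs == 1).sum())
```

In practice the distance profile would be smoothed and the wrist side of the contour excluded, but comparing contour distances against the palm center is the core of the counting step.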

Design of Korean eye-typing interfaces based on multilevel input system (단계식 입력 체계를 이용한 시선 추적 기반의 한글 입력 인터페이스 설계)

  • Kim, Hojoong;Woo, Sung-kyung;Lee, Kunwoo
    • Journal of the HCI Society of Korea / v.12 no.4 / pp.37-44 / 2017
  • Eye-typing is a human-computer interaction input method driven by gaze-location data. It is widely used as an input system for paralyzed users because it requires no physical motion other than eye movement. However, an eye-typing interface for Korean characters had not yet been proposed, so this research implements an eye-typing interface optimized for Korean. To begin with, design objectives were established based on two features of eye-typing: significant noise and the Midas touch problem. A multilevel input system was introduced to deal with noise, and an area free of input buttons was applied to address the Midas touch problem. Two eye-typing interfaces were then designed on phonological grounds, reflecting the fact that each Korean syllable is generated from a combination of several phonemes. Named the consonant-vowel integrated interface and the separated interface, the two designs input Korean in stages through grouped phonemes. Finally, evaluation consisted of comparative experiments against the conventional Double-Korean keyboard interface and an analysis of gaze flow. As a result, the newly designed interfaces showed potential as practical eye-typing tools.
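The multilevel idea (first dwell on a group of jamo, then on a single jamo within that group) can be sketched as a small dwell-based state machine. The grouping, dwell threshold, and class names below are illustrative assumptions, not the consonant-vowel integrated or separated layouts evaluated in the paper.

```python
from dataclasses import dataclass, field

# Illustrative grouping of Korean jamo into gaze-selectable buttons.
GROUPS = {
    "G1": ["ㄱ", "ㄴ", "ㄷ", "ㄹ"],
    "G2": ["ㅁ", "ㅂ", "ㅅ", "ㅇ"],
    "V1": ["ㅏ", "ㅓ", "ㅗ", "ㅜ"],
}

@dataclass
class MultilevelEyeTyper:
    dwell_frames: int = 30            # assumed dwell threshold (~0.5 s at 60 Hz)
    level: int = 1                    # 1 = choose a group, 2 = choose a jamo
    current_group: str | None = None
    typed: list[str] = field(default_factory=list)
    _count: int = 0
    _last_target: str | None = None

    def on_gaze(self, target: str) -> None:
        """target: id of the button the gaze currently falls on."""
        if target != self._last_target:
            self._last_target, self._count = target, 0
        self._count += 1
        if self._count < self.dwell_frames:
            return
        self._count = 0
        if self.level == 1 and target in GROUPS:
            self.current_group, self.level = target, 2       # descend to jamo level
        elif self.level == 2 and target in GROUPS[self.current_group]:
            self.typed.append(target)                        # commit a jamo
            self.level, self.current_group = 1, None         # back to group level
```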

A Study on Hand Region Detection for Kinect-Based Hand Shape Recognition (Kinect 기반 손 모양 인식을 위한 손 영역 검출에 관한 연구)

  • Park, Hanhoon;Choi, Junyeong;Park, Jong-Il;Moon, Kwang-Seok
    • Journal of Broadcast Engineering / v.18 no.3 / pp.393-400 / 2013
  • Hand shape recognition is a fundamental technique for implementing natural human-computer interaction. In this paper, we discuss a method for effectively detecting the hand region in Kinect-based hand shape recognition. Since Kinect captures color images and infrared (depth) images together, both can be exploited to detect the hand region: a hand region can be found from pixels with skin color or from pixels at a specific depth. After analyzing the performance of each cue, we therefore need a method of properly combining the two to extract a clean silhouette of the hand region, because the hand shape recognition rate depends on the fineness of the detected silhouette. Finally, by comparing the hand shape recognition rates obtained with different hand region detection methods in general environments, we propose a high-performance hand region detection method.
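Combining the two cues amounts to a per-pixel AND of a skin-color mask and a depth mask, followed by some cleanup. The sketch below is only an illustration of that combination; the HSV bounds and depth window are placeholder values, not the thresholds used in the paper.

```python
import cv2
import numpy as np

def detect_hand_region(color_bgr, depth_mm, depth_window=(400, 800)):
    """Return a binary hand mask from aligned color and depth frames.
    depth_window: assumed near-range band (in mm) where the hand is expected."""
    # Cue 1: skin color in HSV space (placeholder bounds).
    hsv = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2HSV)
    skin_mask = cv2.inRange(hsv, np.array([0, 30, 60]), np.array([20, 150, 255]))

    # Cue 2: pixels inside the expected depth band.
    near, far = depth_window
    depth_mask = ((depth_mm > near) & (depth_mm < far)).astype(np.uint8) * 255

    # Combine both cues and clean up the silhouette with a morphological opening.
    hand_mask = cv2.bitwise_and(skin_mask, depth_mask)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(hand_mask, cv2.MORPH_OPEN, kernel)
```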

Appraising the Interface Features of Web Search Engines Based on User-defined Relevance Criteria (이용자정의형 적합성 기준을 토대로 한 웹검색엔진 인터페이스 평가)

  • Kim, Yang-Woo
    • Journal of the Korean BIBLIA Society for library and Information Science / v.22 no.1 / pp.247-262 / 2011
  • Although a significant amount of research has identified various dimensions of relevance along with exhaustive lists of relevance criteria, there seems to have been less effort to apply those findings to actual systems design. Based on this observation, this paper investigates the extent to which those relevance criteria have been incorporated into the interface features of major commercial Web search engines and suggests what more can and should be done. Before examining the actual system features, the paper compares recent relevance research in Information Science with other human-factors studies, both in Information Science and in the neighboring discipline of HCI, in an attempt to identify studies that are conceptually similar to relevance research but not labeled as such. Similarities and differences between these studies are presented. Recommendations for applicable interface features include: 1) further personalization of interface designs; 2) author-supplied meta tags for Web contents; and 3) extension of beyond-topical representations based on link structure.

A Study on Interaction Design of Companion Robots Based on Emotional State (감정 상태에 따른 컴패니언 로봇의 인터랙션 디자인 : 공감 인터랙션을 중심으로)

  • Oh, Ye-Jeon;Shin, Yoon-Soo;Lee, Jee-Hang;Kim, Jin-Woo
    • Journal of Digital Contents Society / v.18 no.7 / pp.1293-1301 / 2017
  • Recent changes in social structure, such as the rise of nuclear families and individualization, are leading to personal and social problems that can be worsened by the amplification of negative emotions. The absence of family members who once provided a sense of psychological stability can be considered a representative cause of the emotional difficulties of modern people. These personal and social problems can be addressed through the empathic interaction of a companion robot that communicates with users in daily life. In this study, we developed a refined empathic interaction design through prototyping of emotional robots. As a result, it was confirmed that face-based interaction strongly affects the emotional interaction with the robot and that the robot's interaction improves the sense of empathy it conveys. This study has theoretical and practical significance in that it makes the emotional robot's interaction more sophisticated and presents guidelines for empathic interaction design based on the experimental results.