• Title/Abstract/Keyword: Human computer

Search results: 5,011 items (processing time 0.041 s)

A Study on Functionality of Fashion Design Combined with Technology

  • 권기영
    • Journal of the Korean Society of Clothing and Textiles / Vol. 28, No. 1 / pp.88-99 / 2004
  • The purpose of this study is to examine the significance and symbolic meaning of fashion design combined with machines, and to develop aesthetically excellent designs. The study analyzed related documents, journals, and fashion magazines. The results are as follows. The cyborg discourse and the virtual human body of the virtual world predict the emergence of a new kind of human. Today, up-to-date equipment such as wearable computers has begun to be built into clothing, and such technology-based dress and ornaments increasingly influence human life in practicality and function. This equipment divides by purpose into military, medical, leisure, information access and communication, security, and business assistance, and by item into headwear, clothing, and accessories. The significance of clothing combined with machines can be viewed from practical, economic, and aesthetic standpoints, while its symbolic meanings are dis-identity in digital society, the dissolving border between human and machine, and the uncertainty of the human being. Clothing combined with high technology thus shapes its own unique and individual system. To become the future clothing for humans, however, it must escape the limits of its medium, technology itself, suggest a new agenda for humans and society, and continually affirm the human being's existence and identity.

Comparison of Impulses Experienced on Human Joints Walking on the Ground to Those Experienced Walking on a Treadmill

  • So, Byung-Rok;Yi, Byung-Ju;Han, Seog-Young
    • International Journal of Control, Automation, and Systems / Vol. 6, No. 2 / pp.243-252 / 2008
  • It has been reported that long-term exercise on a treadmill (running machine) may injure the joints of the human lower extremities. Previous work on the analysis of human walking motion, however, is mostly based on clinical statistics and experimental methodology. This paper proposes an analytical methodology. Specifically, it compares normal walking on the ground with walking on a treadmill in terms of the external and internal impulses exerted on the joints of the human lower extremities. First, a modeling procedure for impulses, impulse geometry, and impulse measures for the human lower-extremity model is briefly introduced, and a new impulse measure for analyzing internal impulse is developed. Based on these analytical tools, we analyze the external and internal impulses through a planar 7-linked human lower-extremity model. Simulation shows that walking on a treadmill exhibits greater internal impulses on the knee and ankle joints of the supporting leg than walking on the ground. To corroborate the effectiveness of the proposed methodology, a force platform was developed to measure the external impulses exerted on the ground for both normal walking and treadmill walking. The experimental results correspond well to the simulation results.
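The external impulses the force platform measures are, at bottom, time integrals of the ground reaction force over the contact interval. A minimal numerical sketch of that integral (the sampling rate and force values are illustrative, not taken from the paper):

```python
def impulse(forces, dt):
    """Trapezoidal time-integral of a sampled ground-reaction force, in N*s."""
    total = 0.0
    for f0, f1 in zip(forces, forces[1:]):
        total += 0.5 * (f0 + f1) * dt
    return total

# A constant 10 N load sampled every 0.1 s over 0.5 s of contact
samples = [10.0] * 6  # samples at t = 0.0, 0.1, ..., 0.5 s
print(impulse(samples, 0.1))  # 5.0 N*s
```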

Stereo-Vision-Based Human-Computer Interaction with Tactile Stimulation

  • Yong, Ho-Joong;Back, Jong-Won;Jang, Tae-Jeong
    • ETRI Journal / Vol. 29, No. 3 / pp.305-310 / 2007
  • If a virtual object in a virtual environment represented by a stereo vision system could be touched by a user with some tactile feeling on his/her fingertip, the sense of reality would be heightened. To create a visual impression as if the user were directly pointing to a desired point on a virtual object with his/her own finger, we need to align virtual space coordinates and physical space coordinates. Also, if there is no tactile feeling when the user touches a virtual object, the virtual object would seem to be a ghost. Therefore, a haptic interface device is required to give some tactile sensation to the user. We have constructed such a human-computer interaction system in the form of a simple virtual reality game using a stereo vision system, a vibro-tactile device module, and two position/orientation sensors.
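Aligning the virtual and physical coordinate frames, as the abstract requires, can be posed as fitting a rigid transform to corresponding points reported by the position/orientation sensors. One standard way is the SVD-based (Kabsch) least-squares fit; this sketch is an assumption about the approach, not the paper's published calibration procedure:

```python
import numpy as np

def align_frames(physical, virtual):
    """Least-squares rigid transform so that virtual ~= R @ physical + t (Kabsch)."""
    P, V = np.asarray(physical, float), np.asarray(virtual, float)
    cp, cv = P.mean(axis=0), V.mean(axis=0)          # centroids of each point set
    H = (P - cp).T @ (V - cv)                        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cv - R @ cp
    return R, t
```

Given a few fingertip positions measured in both frames during calibration, `R` and `t` then map any sensed physical point into virtual-space coordinates.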


Wireless EMG-based Human-Computer Interface for Persons with Disability

  • Lee, Myoung-Joon;Moon, In-Hyuk;Kim, Sin-Ki;Mun, Mu-Seong
    • Institute of Control, Robotics and Systems: Conference Proceedings / ICCAS 2003 / pp.1485-1488 / 2003
  • This paper proposes a wireless EMG-based human-computer interface (HCI) for persons with disabilities. For the HCI, four interaction commands are defined by combining three shoulder-elevation motions: left, right, and both-shoulder elevation. The motions are recognized by comparing EMG signals on the levator scapulae muscles against double thresholds. Real-time EMG processing hardware is implemented for acquiring the EMG signals and recognizing the motions. To achieve real-time processing, high-pass, low-pass, band-pass, and band-rejection filters, together with a full rectifier and a mean-absolute-value circuit, are embedded on a board with a high-speed microprocessor. The recognized results are transferred to a wireless client system, such as a mobile robot, via a Bluetooth module. Experimental results with the implemented real-time EMG processing hardware show that the proposed wireless EMG-based HCI is feasible for the disabled.
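The double-threshold recognition described above is essentially hysteresis detection on each channel's mean absolute value, with the two channel states combined into a command. A minimal sketch (the threshold values and command names are hypothetical, not from the paper):

```python
class DoubleThresholdDetector:
    """Hysteresis (double-threshold) onset/offset detector for one EMG channel."""
    def __init__(self, t_on, t_off):
        assert t_off < t_on, "offset threshold must sit below onset threshold"
        self.t_on, self.t_off = t_on, t_off
        self.active = False

    def update(self, mav):
        """Feed one mean-absolute-value sample; return whether the muscle is 'on'."""
        if self.active and mav < self.t_off:
            self.active = False
        elif not self.active and mav > self.t_on:
            self.active = True
        return self.active

def shoulder_command(left_on, right_on):
    """Map per-shoulder activity to one of the four interface commands."""
    return {(False, False): 'rest', (True, False): 'left',
            (False, True): 'right', (True, True): 'both'}[(left_on, right_on)]
```

The gap between the two thresholds keeps a noisy sample near the boundary from toggling the command on and off, which is why a single threshold is not enough here.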


Human-Computer Interaction System for the Disabled Using Recognition of Face Direction

  • 정상현;문인혁
    • The Institute of Electronics Engineers of Korea: Conference Proceedings / Proceedings of the 2001 IEEK Summer Conference (4) / pp.175-178 / 2001
  • This paper proposes a novel human-computer interaction system for the disabled based on recognition of face direction. Face direction is recognized by comparing the positions of the centers of gravity of the face region and of facial features such as the eyes and eyebrows. The face region is first selected using color information, and the facial features are then extracted by applying a separation filter to the face region. Face direction is recognized at 6.57 frames/sec with a success rate of 92.9%, without any special image-processing hardware. We implement a human-computer interaction system using an on-screen menu and show the validity of the proposed method through experimental results.
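The core center-of-gravity comparison can be sketched as follows; the masks, dead zone, and left/right convention are illustrative assumptions (the actual direction labels depend on whether the camera image is mirrored):

```python
import numpy as np

def centroid_x(mask):
    """x-coordinate of the center of gravity of a binary mask."""
    _, xs = np.nonzero(mask)
    return xs.mean()

def face_direction(face_mask, feature_mask, dead_zone=2.0):
    """Classify gaze direction from the offset between the feature centroid
    (eyes/eyebrows) and the face-region centroid, with a small dead zone."""
    offset = centroid_x(feature_mask) - centroid_x(face_mask)
    if offset > dead_zone:
        return 'right'
    if offset < -dead_zone:
        return 'left'
    return 'center'
```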


Implementation of a Real-time EMG-based Human-Computer Interface

  • 이명준;문인혁;강성재;김경훈;문무성
    • The Institute of Electronics Engineers of Korea: Conference Proceedings / Proceedings of the 2003 IEEK Summer Conference, Vol. V / pp.2729-2732 / 2003
  • This paper proposes a real-time method for recognizing shoulder-elevation motions by comparing EMG signals on the levator scapulae muscles against double threshold values. To achieve real-time operation, we implement EMG signal-processing hardware embedding band-rejection filter, low-pass filter, full-rectifier, and moving-average circuits, and a high-speed microprocessor implements the double-threshold method. The shoulder motions available for the human-computer interface are elevation of the left, right, and both shoulders. Experimental results show that the proposed real-time processing hardware and double-threshold method are useful for a real-time EMG-based human-computer interface.


Sign Language Avatar System Based on Hyper Sign Sentences

  • 오영준;박광현;장효영;김대진;정진우;변증남
    • Korea Information Processing Society: Conference Proceedings / Proceedings of the 2006 KIPS Spring Conference / pp.621-624 / 2006
  • This paper points out the limitations of existing sign-language generation systems in processing performance and in the motion of body elements, and proposes hyper sign sentences to improve on them. A hyper sign sentence extends the structure of a conventional sign sentence and consists of sign words together with motion symbols for body elements. Following the proposed generation method, the system synthesizes sign motions by concatenating hyper sign segments and produces avatar motions for a sign sentence that closely resemble those of a real signer.


Automatic Human Emotion Recognition from Speech and Face Display - A New Approach

  • 딩�E령;이영구;이승룡
    • Korean Institute of Information Scientists and Engineers: Conference Proceedings / Proceedings of the 2011 Korea Computer Congress, Vol. 38, No. 1(B) / pp.231-234 / 2011
  • Audiovisual human emotion recognition can be considered a good approach for multimodal human-computer interaction. However, optimal fusion of the multimodal information remains a challenge. To overcome these limitations and bring robustness to the interface, we propose a framework for automatic human emotion recognition from speech and face display. In this paper, we develop a new approach to model-level information fusion, based on the relationship between speech and facial expression, to detect temporal segments automatically and perform multimodal information fusion.

Using Spatial Ontology in the Semantic Integration of Multimodal Object Manipulation in Virtual Reality

  • Irawati, Sylvia;Calderon, Daniela;Ko, Hee-Dong
    • The HCI Society of Korea: Conference Proceedings / Proceedings of the 2006 HCI Korea Conference, Part 1 / pp.884-892 / 2006
  • This paper describes a framework for multimodal object manipulation in virtual environments. The gist of the proposed framework is the semantic integration of multimodal input, using a spatial ontology and the user context to integrate the interpretation results from the inputs into a single interpretation. The spatial ontology, which describes the spatial relationships between objects, is used together with the current user context to resolve ambiguities in the user's commands. These commands are used to reposition objects in the virtual environment. We discuss how the spatial ontology is defined and used to assist the user in placing objects in the virtual environment as they would be placed in the real world.
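Resolving an ambiguous command against spatial relations can be illustrated with a toy version of the idea; the relation triples and instance names below are invented for the example, not taken from the paper's ontology:

```python
# Hypothetical scene knowledge: (object, relation, landmark) triples,
# plus a map from object types to concrete instances.
RELATIONS = {
    ('cup1', 'on', 'table'),
    ('cup2', 'on', 'shelf'),
}
INSTANCES = {'cup': ['cup1', 'cup2']}

def resolve(obj_type, relation, landmark):
    """Pick the instance whose spatial relation matches the user's description,
    e.g. "the cup on the table"; return None when the reference stays ambiguous."""
    matches = [o for o in INSTANCES[obj_type]
               if (o, relation, landmark) in RELATIONS]
    return matches[0] if len(matches) == 1 else None

print(resolve('cup', 'on', 'table'))  # cup1
```

In the paper's framework, the user context (e.g. the currently selected or most recently manipulated object) would further narrow the candidates when the spatial relation alone does not.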


Modeling of Superficial Pain using ANNs

  • Matsunaga, Nobutomo;Kuroki, Asayo;Kawaji, Shigeyasu
    • Institute of Control, Robotics and Systems: Conference Proceedings / ICCAS 2005 / pp.1293-1298 / 2005
  • In environments where humans coexist with robots, safety is very important. Unlike factory automation, however, the robot cannot be separated from the human in the time domain or the space domain, so a new concept is needed. One approach is to attend to human sensory and emotional feeling; this study focuses on "pain," a typical unpleasant feeling arising when a robot contacts us. To design a controller based on pain, this paper proposes an artificial superficial pain model (ASPM) for pain caused by impact. The ASPM consists of a mechanical pain model, a skin model, and gate control realized by artificial neural networks (ANNs). The proposed ASPM is evaluated by experiments.
