• Title/Summary/Keyword: Motion Interface

A Study on Technical Elements for Vision Therapy based on VR HMD (VR HMD에서의 비전 테라피 활용을 위한 기술 요소 연구)

  • Choi, Sangmi;Kim, Jungho;Kwon, Soonchul;Lee, Seunghyun
    • Journal of the Institute of Electronics and Information Engineers / v.53 no.12 / pp.161-168 / 2016
  • Thanks to the mass production and wide availability of smartphones and HMDs (head mounted displays), VR (virtual reality) is now being applied in various areas. The VR HMD is interface equipment that allows users to have realistic experiences through human sensory organs such as vision and hearing. Since most VR equipment operates through a binocular display, 360-degree video content, and depth information, the VR mechanism is closely related to human senses, especially vision. Previous studies have focused on how to minimize negative effects such as motion sickness or visual fatigue; little attention has been paid to research on visual treatment. Therefore, the focus of this study is to develop technical elements for the utilization of vision therapy with the VR HMD and to explore possible areas of application. To this end, we analyzed past case studies and technical elements to identify 16 areas for vision therapy. We also developed the optical parameters for utilization of VR HMD visual targets. The results of this study are expected to be used in the development of visual targets for vision therapy based on the VR HMD.

Study on the Design and Usability factor analysis of Web Multimedia Contents (웹 멀티미디어 컨텐츠의 디자인과 유용성분석에 대한 연구)

  • Koh, Eun-Young;Shin, Soon-Ho
    • Archives of design research / v.17 no.4 / pp.69-78 / 2004
  • This thesis investigates the relationship between web multimedia and design. Today, web sites need many approaches that increase users' intuitive understanding when communicating information. In the digital era, multimedia is composed of various elements such as text, image, sound, animation, and video. The method of the thesis follows these research questions. First, the structuring elements of web sites were classified by form and content into HTML, Flash, and mixed types. Second, a questionnaire was administered to university students selected by random sampling; the goal of the survey was to learn how users understand web multimedia design. The results of the analysis may be summarized as follows: 1) The image design of the NIKE web site, built with Flash, was evaluated as the best, and this study finds that the NIKE site needs more supporting information and interactive design. 2) The SAMSUNG web site, based on HTML, was evaluated as achieving good design effect through text and image for conveying information, but this study finds it needs supporting motion image design. In conclusion, this thesis suggests that each web medium should be constructed with different web design components according to the information, interaction, and image design of the site.

Gaze Detection by Computing Facial and Eye Movement (얼굴 및 눈동자 움직임에 의한 시선 위치 추적)

  • 박강령
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.2 / pp.79-88 / 2004
  • Gaze detection is locating, by computer vision, the position on a monitor screen where a user is looking. Gaze detection systems have numerous fields of application: they are applicable to man-machine interfaces that help the handicapped use computers, and to view control in three-dimensional simulation programs. In our work, we implement gaze detection with a computer vision system using a single IR-LED-based camera. To detect the gaze position, we locate facial features, which is performed effectively with the IR-LED-based camera and an SVM (Support Vector Machine). When a user gazes at a position on the monitor, we can compute the 3D positions of those features based on 3D rotation and translation estimation and an affine transform. The gaze position determined by facial movement is then computed from the normal vector of the plane defined by the computed 3D feature positions. In addition, we use a trained neural network to detect the gaze position from eye movement. Experimental results show that we can obtain the facial and eye gaze position on a monitor, with an RMS error of about 4.8 cm between the computed positions and the real ones.
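The core geometric step the abstract describes (a gaze point from the normal vector of the plane through three facial features) can be sketched as follows. This is a minimal illustration, not the paper's method: all coordinates, the monitor-plane convention (z = 0), and the anchoring of the gaze ray at the feature centroid are hypothetical assumptions.

```python
import numpy as np

def gaze_on_monitor(p1, p2, p3, monitor_z=0.0):
    """Intersect the facial-plane normal, anchored at the feature centroid,
    with a monitor plane at z = monitor_z. Inputs are 3-D feature positions."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    centroid = (p1 + p2 + p3) / 3.0
    normal = np.cross(p2 - p1, p3 - p1)
    normal = normal / np.linalg.norm(normal)
    if normal[2] > 0:          # make the normal point toward the monitor
        normal = -normal
    t = (monitor_z - centroid[2]) / normal[2]
    return centroid + t * normal   # 3-D point lying on the monitor plane

# hypothetical feature positions (cm): two eye corners and the nose tip
point = gaze_on_monitor((-5, 0, 50), (5, 0, 50), (0, -6, 52))
```

A real system would add the rotation/translation estimation and camera calibration the abstract mentions; this only shows the final plane-normal intersection.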

A Content-based Video Rate-control Algorithm Interfaced to Human-eye (인간과 결합한 내용기반 동영상 율제어)

  • 황재정;진경식;황치규
    • The Journal of Korean Institute of Communications and Information Sciences / v.28 no.3C / pp.307-314 / 2003
  • In a general multiple-video-object coder, objects of greater interest, such as a speaker or a moving object, are consistently coded with higher priority. Since the priority of each object may not be fixed over the whole sequence and may vary from frame to frame, it must be adjusted within a frame. In this paper, we analyze the independent rate control algorithm and the global algorithm, in which the QP value is controlled by static parameters: object importance or priority, target PSNR, and weighted distortion. The priority among the static parameters is analyzed and adjusted into dynamic parameters according to the visual interest or importance obtained through a camera interface. Target PSNR and weighted distortion are derived proportionally from magnitude, motion, and distortion. We apply those parameters to weighted-distortion control and priority-based control, resulting in efficient bit-rate distribution. As a result, fewer bits are allocated to video objects of lower visual importance and more bits to those of higher visual importance. The time for visual quality to stabilize is reduced to fewer than 15 frames of the coded sequence. In terms of PSNR, the proposed scheme shows quality higher by more than 2 dB compared with conventional schemes. Thus the coding scheme interfaced to the human eye proves to be an efficient video coder for handling multiple video objects.
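The allocation idea in the abstract (bits distributed across objects in proportion to priority weights derived from magnitude, motion, and distortion) can be sketched as below. This is a simplified illustration under assumed inputs; the weight formula, field names, and values are hypothetical, not the paper's actual rate-control equations.

```python
def allocate_bits(frame_budget, objects):
    """Split a per-frame bit budget across video objects in proportion to a
    priority weight built from normalized size, motion, and distortion terms.
    objects: list of dicts with 'size', 'motion', 'distortion' in [0, 1]."""
    weights = [o['size'] + o['motion'] + o['distortion'] for o in objects]
    total = sum(weights)
    return [frame_budget * w / total for w in weights]

# hypothetical two-object frame: an active speaker and a static background
bits = allocate_bits(10000, [
    {'size': 0.5, 'motion': 0.9, 'distortion': 0.8},  # speaker: high priority
    {'size': 0.4, 'motion': 0.1, 'distortion': 0.2},  # background: low priority
])
```

The speaker object receives the larger share, matching the abstract's goal of spending bits where visual importance is higher.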

Hallym Jikimi: A Remote Monitoring System for Daily Activities of Elders Living Alone (한림 지킴이: 독거노인 일상 활동 원격 모니터링 시스템)

  • Lee, Seon-Woo;Kim, Yong-Joong;Lee, Gi-Sup;Kim, Byung-Jung
    • Journal of KIISE:Computing Practices and Letters / v.15 no.4 / pp.244-254 / 2009
  • This paper describes a remote system to monitor the circadian behavioral patterns of elders who live alone. The proposed system was designed and implemented to provide the required functionalities of a remote monitoring system for elders more conveniently and reliably, building on the first-phase prototype [2]. The developed system is composed of an in-house sensing system and a server system. The in-house sensing system is a set of wireless sensor nodes with pyroelectric infrared (PIR) sensors that detect the motion of the elder. Each sensing node sends its detection signal to a home gateway via a wireless link, and the home gateway stores the received signals in a remote database. The server system is composed of a database server and a web server, which provide a web-based monitoring system for caregivers (friends, family, and social workers) for a more cost-effective intelligent care service. The improved second-phase system adds 'automatic diagnosis' and 'going out detection' functionalities and an enhanced user interface. We evaluated the first- and second-phase monitoring systems in real field experiments of 3 and 4 months of continuous operation, installed in 9 and 15 elders' houses, respectively. The experimental results show promising possibilities for estimating the behavioral patterns and current status of the elder despite the simplicity of the sensing capability.
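A rule of the kind the abstract's 'going out detection' implies (no PIR detection anywhere in the house for some interval) might be sketched as follows. The threshold, timestamps, and the rule itself are hypothetical assumptions for illustration; the paper's actual detection logic is not specified in the abstract.

```python
from datetime import datetime, timedelta

def is_out(motion_events, now, threshold_minutes=60):
    """Presume the elder is out (or inactive) when no PIR sensor in the
    house has fired within the threshold interval before `now`.
    motion_events: list of datetimes of PIR detections."""
    if not motion_events:
        return True
    idle = now - max(motion_events)
    return idle > timedelta(minutes=threshold_minutes)

# hypothetical day: last motion 75 minutes ago -> presumed out
now = datetime(2009, 4, 1, 14, 0)
events = [datetime(2009, 4, 1, 12, 30), datetime(2009, 4, 1, 12, 45)]
out = is_out(events, now)
```

In practice such a rule would be one input to the 'automatic diagnosis' layer rather than a standalone alarm, to avoid false positives during sleep.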

A Control Method for designing Object Interactions in 3D Game (3차원 게임에서 객체들의 상호 작용을 디자인하기 위한 제어 기법)

  • 김기현;김상욱
    • Journal of KIISE:Computing Practices and Letters / v.9 no.3 / pp.322-331 / 2003
  • As the complexity of a 3D game increases with the various factors of the game scenario, controlling the interrelation of game objects becomes a problem. A game system therefore needs to coordinate the responses of the game objects, and it is also necessary to control the animated behaviors of the game objects in terms of the game scenario. To produce realistic game simulations, a system must include a structure for designing the interactions among game objects. This paper presents a method for designing a dynamic control mechanism for the interaction of game objects in the game scenario. For this method, we suggest a game agent system as a framework based on intelligent agents that can make decisions using specific rules. Game agent systems are used to manage environment data, to simulate the game objects, to control interactions among game objects, and to support a visual authoring interface that can define the various interrelations of the game objects. These techniques can handle the autonomy level of the game objects, the associated collision avoidance method, and so on. They also make possible coherent decision-making by the game objects when the scene changes. In this paper, rule-based behavior control was designed to guide the simulation of the game objects; the rules are pre-defined by the user through a visual interface for designing their interactions. The Agent State Decision Network, which is composed of visual elements, passes information along and infers the current state of the game objects. All of these methods can monitor and check variations in the motion state between game objects in real time. Finally, we present a validation of the control method together with a simple case-study example.

  • In this paper, we design and implement supervised classification systems for high-resolution satellite images. The systems support various interfaces and statistical data on training samples so that the most effective training data can be selected. In addition, new classification algorithms and satellite image formats can be extended easily through the modularized systems. The classifiers take into account the characteristics of the spectral bands of the selected training data. They provide various supervised classification algorithms, including parallelepiped, minimum distance, Mahalanobis distance, maximum likelihood, and fuzzy theory. We used IKONOS images as input and verified the systems for the classification of high-resolution satellite images.
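Of the supervised classifiers listed above, minimum distance is the simplest to illustrate: each pixel's spectral vector is assigned to the class whose mean is nearest. The band values and class means below are hypothetical, not taken from the IKONOS data.

```python
import numpy as np

def min_distance_classify(pixel, class_means):
    """Assign a pixel's spectral vector to the class with the nearest
    mean vector (Euclidean distance), i.e. the minimum-distance classifier."""
    dists = {c: np.linalg.norm(np.asarray(pixel) - np.asarray(m))
             for c, m in class_means.items()}
    return min(dists, key=dists.get)

# hypothetical 4-band class means (e.g. blue, green, red, near-infrared)
means = {'water':      [30, 40, 20, 10],
         'vegetation': [40, 60, 50, 120],
         'urban':      [90, 85, 80, 70]}
label = min_distance_classify([35, 45, 25, 15], means)
```

Mahalanobis distance and maximum likelihood refine this by weighting the distance with each class's covariance, at the cost of estimating that covariance from the training samples.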

An Implementation of Dynamic Gesture Recognizer Based on WPS and Data Glove (WPS와 장갑 장치 기반의 동적 제스처 인식기의 구현)

  • Kim, Jung-Hyun;Roh, Yong-Wan;Hong, Kwang-Seok
    • The KIPS Transactions:PartB / v.13B no.5 s.108 / pp.561-568 / 2006
  • The WPS (Wearable Personal Station) for the next-generation PC can be defined as a core terminal of 'ubiquitous computing' that includes information processing and network functions and overcomes spatial limitations in the acquisition of new information. As a way to acquire significant dynamic gesture data from haptic devices, traditional gesture recognizers based on desktop PCs with wired communication modules have several restrictions, such as spatial constraints, complexity of the transmission media (cable elements), limitation of motion, and inconvenience of use. Accordingly, in this paper, to overcome these problems, we implement a hand gesture recognition system using a fuzzy algorithm and a neural network for the Post PC (an embedded ubiquitous environment using a Bluetooth module and WPS). We also propose the most efficient and reasonable hand gesture recognition interface for the Post PC through evaluation and analysis of the performance of each gesture recognition system. The proposed gesture recognition system consists of three modules: 1) a gesture input module that processes the motion of the dynamic hand into input data; 2) a Relational Database Management System (RDBMS) module that segments significant gestures from the input data; and 3) two different recognition modules, a fuzzy max-min module and a neural network module, that recognize significant gestures among continuous/dynamic gestures. Experimental results show an average recognition rate of 98.8% for the fuzzy max-min module and 96.7% for the neural network module on significant dynamic gestures.

Development of the Whole Body 3-Dimensional Topographic Radiotherapy System (3차원 전신 정위 방사선 치료 장치의 개발)

  • Jung, Won-Kyun;Lee, Byung-Yong;Choi, Eun-Kyung;Kim, Jong-Hoon;An, Seung-Do;Lee, Seok;Min, Chul-Ki;Park, Cham-Bok;Jang, Hye-Sook
    • Progress in Medical Physics / v.10 no.2 / pp.63-71 / 1999
  • For the purpose of utilization in 3-D conformal radiotherapy and whole-body radiosurgery, the Whole Body 3-Dimensional Topographic Radiation Therapy System has been developed. A whole-body frame was constructed to be installed on the couch. Radiopaque catheters were engraved on it for the dedicated coordinate system, and a MeV-Green immobilizer was used for patient setup with the help of side panels and plastic rods. By designing and constructing the whole-body frame in this way, geometrical limitations on gantry rotation in 3-D conformal radiotherapy could be minimized, and the problem that radiation transmission may be altered at particular incident angles was solved. By analyzing CT images containing information on the patient setup with respect to the whole-body frame, localization and coordination of the target are performed so that patient setup error may be eliminated between simulation and treatment. For setup verification, changes in patient positioning are detected and adjusted to minimize setup error by comparing body outlines using 3 CCTV cameras. To enhance the efficiency of the treatment procedure, this can be done in real time by watching the change of patient setup on the monitor; the method of image subtraction in IDL (Interactive Data Language) was used to visualize the change of patient setup. A rotating X-ray system was constructed for detecting target movement due to internal organ motion. Landmark screws were implanted either on the bones around the target or inside the target, and the variation of the target location with respect to the markers may be visualized, in order to minimize internal setup error, through the anterior and lateral image information taken from the rotating X-ray system. For CT simulation, simulation software was developed using IDL on a GUI (Graphical User Interface) basis for the PC; it includes functions for graphic handling, editing, and data acquisition of images of internal organs as well as the target, for the preparation of treatment planning.
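The setup-verification step by image subtraction can be illustrated in a few lines. The abstract's implementation is in IDL; this is a plain NumPy stand-in under assumed inputs, with small arrays standing in for the CCTV body-outline frames and a hypothetical tolerance.

```python
import numpy as np

def setup_changed(reference, current, tolerance=5):
    """Flag a patient-setup change when any pixel of the current outline
    frame differs from the reference frame by more than `tolerance`
    (simple image subtraction, as in the IDL-based verification)."""
    diff = np.abs(current.astype(int) - reference.astype(int))
    return bool(np.any(diff > tolerance))

# hypothetical 4x4 outline frames: one pixel shifts between acquisitions
ref = np.zeros((4, 4), dtype=np.uint8)
cur = ref.copy()
cur[1, 2] = 40          # simulated patient movement
changed = setup_changed(ref, cur)
```

Displaying `diff` itself, rather than just the flag, reproduces the visualization the abstract describes: nonzero regions show exactly where the outline moved.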

Web-based Text-To-Sign Language Translating System (웹기반 청각장애인용 수화 웹페이지 제작 시스템)

  • Park, Sung-Wook;Wang, Bo-Hyeun
    • Journal of the Korean Institute of Intelligent Systems / v.24 no.3 / pp.265-270 / 2014
  • Hearing-impaired people have difficulty hearing, so it is also hard for them to learn letters that represent sound and text that conveys complex and abstract concepts. It has therefore been a natural choice for hearing-impaired people to communicate in sign language, which employs facial expression and hand and body motion. However, the major communication methods in daily life are text and speech, which are big obstacles for hearing-impaired people in accessing information, learning, pursuing intellectual activities, and getting jobs. As delivering information via the internet becomes common, hearing-impaired people experience even more difficulty in accessing information, since the internet represents information mostly in text form. This intensifies the imbalance of information accessibility. This paper reports a web-based text-to-sign-language translating system that helps web designers use sign language in web page design. Since the system is web-based, web designers can use it with any common computing environment capable of internet browsing. The system takes the form of a bulletin board as its user interface. When web designers write paragraphs and post them through the bulletin board to the translating server, the server translates the incoming text into sign language, animates it with a 3D avatar, and records the animation in an MP4 file. The file addresses are fetched by the bulletin board, which enables web designers to embed the translated sign language files into their web pages using HTML5 or JavaScript. We also analyzed the text used by public-service web pages, identified new words for the translating system, and added them to improve translation. This addition is expected to encourage wide and easy access to public-service web pages for hearing-impaired people.

A Biomechanical Study on a New Surgical Procedure for the Treatment of Intertrochanteric Fractures in relation to Osteoporosis of Varying Degrees (대퇴골 전자간 골절의 새로운 수술기법에 관한 생체역학적 분석)

  • 김봉주;이성재;권순용;탁계래;이권용
    • Journal of Biomedical Engineering Research / v.24 no.5 / pp.401-410 / 2003
  • This study investigates the biomechanical efficacy of various cement augmentation techniques, with or without pressurization, for femurs with varying degrees of osteoporosis. A biomechanical analysis using the finite element method (FEM) was undertaken to evaluate the surgical procedures. Simulated models include the non-cemented (i.e., hip screw only, Type I), the cement-augmented (Type II), and the cement-augmented with pressurization (Type III) models. To simulate the fracture plane and other interfacial regions, 3-D contact elements were used with appropriate friction coefficients. Material properties of the cancellous bone were varied to accommodate varying degrees of osteoporosis (Singh indices II-V). For each model, the following items were analyzed to investigate the effect of the surgical procedures in relation to osteoporosis of varying degrees: (a) von Mises stress distribution within the femoral head in terms of volumetric percentages, (b) peak von Mises stress (PVMS) within the femoral head and the surgical constructs, (c) maximum von Mises strain (MVMS) within the femoral head, and (d) micromotions at the fracture plane and at the interfacial region between the surgical construct and the surrounding bone. Type III showed the lowest PVMS and MVMS in the cancellous bone near the bone-construct interface regardless of bone density, an indication that it is least likely to suffer construct loosening due to failure of the host bone; its efficacy was more prominent when the bone density level was low. Micromotions at the construct interface were lowest in Type III, followed by Type I and Type II, at about 15-20% of those of the other types, which suggests that pressurization was most effective in limiting interfacial motion. Our results demonstrate that cement augmentation with a hip screw can be more effective when used with the pressurization technique for the treatment of intertrochanteric fractures. For patients with low bone density, its effectiveness can be more pronounced in limiting construct loosening and promoting bone union.