• Title/Summary/Keyword: hands tracking


A Study on Key Arrangement of Virtual Keyboard Based on Eyeball Input System (안구 입력 시스템 기반의 화상키보드 키 배열 연구)

  • Sa Ya Lee;Jin Gyeong Hong;Joong Sup Lee
    • Smart Media Journal
    • /
    • v.13 no.4
    • /
    • pp.94-103
    • /
    • 2024
  • The eyeball input system is a text input system built on eye-tracking technology and virtual-keyboard character-input technology. The virtual keyboard in current use has a rectangular QWERTY layout optimized for a multi-input method that uses all ten fingers of both hands simultaneously. Eye tracking, however, is a single-input method driven solely by eye movement, with only one focal point available for input, so problems arise when it is paired with a rectangular keyboard designed for multi-input. To address this, previous studies on the shape, type, and movement of the muscles attached to the eyeball were reviewed, which showed that eye movement is fundamentally rotational rather than linear. This study therefore proposes a new key arrangement in which the keys are placed in a circular structure suited to rotational motion, instead of the rectangular, two-handed layout of current virtual keyboards (a layout sketch follows below). A performance-verification experiment comparing the circular arrangement against the existing rectangular one confirmed that the circular arrangement is a viable replacement for the rectangular virtual-keyboard layout.
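
For illustration, a circular key arrangement of the kind the study proposes can be generated with a few lines of geometry. This is a minimal sketch under my own assumptions (evenly spaced keys, hypothetical ring radii and character groupings), not the paper's actual layout:

```python
import math

def circular_layout(chars, radius=1.0, center=(0.0, 0.0)):
    """Place characters evenly on a ring around `center`.

    Returns a dict mapping each character to its (x, y) key position.
    """
    n = len(chars)
    positions = {}
    for i, ch in enumerate(chars):
        angle = 2 * math.pi * i / n          # even angular spacing
        positions[ch] = (center[0] + radius * math.cos(angle),
                         center[1] + radius * math.sin(angle))
    return positions

# Hypothetical grouping: frequent letters on an inner ring, the rest outside,
# so shorter gaze rotations reach the most common keys first.
inner = circular_layout("ETAOINSH", radius=1.0)
outer = circular_layout("RDLUCMFWYPVBGKJQXZ", radius=2.0)
```

Grouping frequent letters on an inner ring is one plausible refinement; the paper itself reports only the circular-versus-rectangular comparison.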

Design and Development of University Asset Management Systems (대학 자산관리 시스템의 설계 및 구현)

  • Park, Chul-Young;Park, Dae-Heon;Cho, Sung-Eon;Park, Jang-Woo
    • Journal of Advanced Navigation Technology
    • /
    • v.13 no.6
    • /
    • pp.971-976
    • /
    • 2009
  • This paper presents the design and implementation of an asset management system suited to universities, which hold a wide variety of assets. A university consists of many departments, each with a large amount of experimental equipment, so recording and managing assets by hand is very difficult. In addition, equipment moves freely between labs inside the school, which makes asset locations hard to trace, and items that receive little attention are liable to go missing. These conditions, which arise frequently in university asset management, should be reflected in the design and implementation of the management system. In the proposed system, asset locations are recognized through a route-tracking method, so the loss of high-priced assets can be detected, and their entry, removal, and lending can be controlled efficiently (a minimal sketch of the route-tracking idea follows below). The system is configured to minimize manager intervention across the asset-management process and is therefore likely to reduce the manager's workload. The proposed system and implementation method are particularly suitable for small and medium-scale asset management, path tracking, and history management.
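
The route-tracking idea can be pictured as an append-only log of reader sightings per asset, with the latest sighting taken as the current location. The sketch below is a hypothetical illustration (the `Sighting` record, reader names, and the RFID-reader assumption are mine, not the paper's):

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class Sighting:
    reader: str              # e.g. an RFID reader at a lab doorway (hypothetical)
    timestamp: datetime

@dataclass
class Asset:
    asset_id: str
    route: List[Sighting] = field(default_factory=list)  # ordered sighting history

    def record(self, reader: str) -> None:
        self.route.append(Sighting(reader, datetime.now()))

    def current_location(self) -> Optional[str]:
        return self.route[-1].reader if self.route else None

# A tagged oscilloscope moves from Lab A to Lab B; its route doubles as its history.
scope = Asset("OSC-0042")
scope.record("LabA-door")
scope.record("LabB-door")
print(scope.current_location())   # -> LabB-door
```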


Evaluation of Quantitative Effectiveness of MR-DTI Analysis with and without Functional MRI (기능적 자기공명영상 사용유무에 따른 확산텐서영상 분석의 유효성 평가)

  • Lee, Dong-Hoon;Park, Ji-Won;Hong, Cheol-Pyo
    • The Journal of Korean Physical Therapy
    • /
    • v.25 no.5
    • /
    • pp.260-265
    • /
    • 2013
  • Purpose: This study evaluated the quantitative effectiveness of region-of-interest (ROI) placement in MR-DTI analysis with and without fMRI activation results. Methods: Ten right-handed normal volunteers participated. DTI and fMRI datasets for each subject were acquired on a 1.5 T MRI system. For neural fiber tracking, ROIs were drawn in two ways: seed points were placed either within fMRI activation areas or in areas selected manually by users. The targeted neural fiber tract was the corticospinal tract (CST). Quantitative analyses were performed and compared: the number of pixels traversed by the fiber tract in each brain volume was counted, and the ratios of extracted fiber pixels to ROI pixels and of fiber pixels to whole-brain pixels were calculated (the arithmetic is sketched below). Results: The CST extracted with fMRI-guided ROIs showed a higher fiber distribution than with hand-drawn ROIs: average pixel counts were 4553.8 versus 1943.3, average ROI-area ratios were 33.87 versus 22.52, and average percentages of the whole-brain volume were 2.06 versus 0.87. Conclusion: These results indicate that fMRI-guided ROI placement can yield more objective and more significant findings for studies of neural fiber recovery mechanisms and brain rehabilitation.
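
The reported figures are plain pixel-count ratios, and a quick back-calculation shows they are internally consistent. The snippet below is illustrative arithmetic over the abstract's averages only (variable names are mine):

```python
# Averages reported in the abstract: fMRI-guided ROI vs. manually drawn ROI.
fiber_px  = {"fmri": 4553.8, "manual": 1943.3}  # extracted CST fiber pixels
brain_pct = {"fmri": 2.06,   "manual": 0.87}    # fiber pixels as % of whole brain

for method in ("fmri", "manual"):
    # Back out the implied whole-brain pixel count from the reported percentage.
    implied_brain = fiber_px[method] / (brain_pct[method] / 100.0)
    print(f"{method}: ~{implied_brain:,.0f} whole-brain pixels implied")
# Both imply roughly 220k whole-brain pixels, a sanity check on the percentages.
```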

Fingertip Detection through Atrous Convolution and Grad-CAM (Atrous Convolution과 Grad-CAM을 통한 손 끝 탐지)

  • Noh, Dae-Cheol;Kim, Tae-Young
    • Journal of the Korea Computer Graphics Society
    • /
    • v.25 no.5
    • /
    • pp.11-20
    • /
    • 2019
  • With the development of deep learning technology, research is actively being carried out on user-friendly interfaces suitable for virtual- and augmented-reality applications. To support interfaces driven by the user's hands, this paper proposes a deep-learning-based fingertip detection method that tracks fingertip coordinates so users can select virtual objects or write and draw in the air. The method first crops the approximate fingertip region from the input image using Grad-CAM, then applies a convolutional neural network with atrous convolution to the cropped image to detect the fingertip location (a rough sketch of this two-stage pipeline follows below). The approach is simpler and easier to implement than existing object-detection algorithms and requires no annotation preprocessing. To verify the method, we implemented an air-writing application, which achieved a recognition rate of 81% at 76 ms per frame, allowing smooth, real-time writing in the air without perceptible delay.
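
The two-stage pipeline can be sketched in PyTorch roughly as follows. This is a minimal illustration under my own assumptions (a generic classification backbone for the Grad-CAM pass, a toy dilated-convolution head, my own thresholding); it is not the authors' network:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def grad_cam(model, layer, image, class_idx):
    """Coarse Grad-CAM heat map for `class_idx` over the given conv layer."""
    acts, grads = [], []
    h1 = layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    model(image)[0, class_idx].backward()   # assumes (1, num_classes) logits
    h1.remove(); h2.remove()
    w = grads[0].mean(dim=(2, 3), keepdim=True)   # gradients pooled per channel
    cam = F.relu((w * acts[0]).sum(dim=1))        # weighted activation map
    return cam / cam.max()

def cam_crop(image, cam, thresh=0.5):
    """Crop the image to the bounding box of the thresholded heat map."""
    cam = F.interpolate(cam[None], size=image.shape[2:], mode="bilinear")[0, 0]
    ys, xs = torch.nonzero(cam > thresh, as_tuple=True)
    return image[:, :, ys.min():ys.max() + 1, xs.min():xs.max() + 1]

# Toy detection head with atrous (dilated) convolutions: a large receptive
# field at full resolution, ending in a per-pixel fingertip-score map.
head = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=2, dilation=2), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=4, dilation=4), nn.ReLU(),
    nn.Conv2d(32, 1, 1),
)
```

The argmax over the head's output map on the cropped image would then give the fingertip coordinate.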

Technological Trend of Endoscopic Robots (내시경 로봇의 기술동향)

  • Kim, Min Young;Cho, Hyungsuck
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.20 no.3
    • /
    • pp.345-355
    • /
    • 2014
  • Since the beginning of the 21st century, innovative technologies in robotic and telepresence surgery have revolutionized minimal-access surgery and have continued to advance it in recent years. One such field is endoscopic surgery, in which an endoscope and endoscopic instruments are inserted into the body through small incisions or natural openings and the operation is carried out laparoscopically. Because developments in this technology are vast, this review describes only the technological state of the art and trends in endoscopic robots, limited further to key components, their functional requirements, and their operational procedures in surgery. It first describes the technological limitations in developing key components and then focuses on the performance required of their functions, including position control, tracking, navigation, and manipulation of the flexible endoscope body and its end effector. Despite rapid progress in these functional components, endoscopic surgical robots need to become much smaller, less expensive, and easier to operate, and they should seamlessly integrate emerging technologies for intelligent vision and dexterous hands, not only from surgical and ergonomic points of view but also for safety. We believe that medical robotic technology for endoscopic surgery will continue to be revolutionized in the near future, enough to replace almost all current forms of endoscopic surgery; that issue is addressed elsewhere in other review articles.

Non-restraint Master Interface of Minimally Invasive Surgical Robot Using Hand Motion Capture (손동작 영상획득을 이용한 최소침습수술로봇 무구속 마스터 인터페이스)

  • Jang, Ik-Gyu
    • Journal of Biomedical Engineering Research
    • /
    • v.37 no.3
    • /
    • pp.105-111
    • /
    • 2016
  • Introduction: A surgical robot is an alternative instrument that takes over difficult, precise surgical operations, and its master interface should be operationally intuitive enough to transfer natural motions. Master interfaces based on a contacting mechanical handle constrain hand motion through mechanical singularity, isotropy, and coupling problems. This paper verifies the feasibility of an intuitive, non-restraint master interface that tracks hand motion with infrared cameras and only three reflective markers, with no hardware handle. Materials & Methods: We configured the software and hardware, arranging six infrared cameras and attaching three reflective markers to the hand to measure 3D coordinates, from which seven motion components are obtained: grasp, yaw, pitch, roll, px, py, and pz (one standard construction is sketched below). We then connected the virtual master to the slave surgical robot (Laparobot) and observed its feasibility. To verify the motion output, we compared the non-restraint master against a clinometer (and protractor) over 0 to 180 degrees at 10-degree intervals with 1000 samples each, recording the standard deviation as the error rate. Results: The average angle values of the non-restraint master interface corresponded closely to the clinometer (and protractor) readings, with low error rates during motion. Investigation & Conclusion: We confirmed the feasibility and accuracy of a 3D non-restraint master interface that offers intuitive motion without a contacting hardware handle, from which high intuitiveness and dexterity in surgical robots can be expected.
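
Recovering position and orientation from three reflective markers reduces to building an orthonormal frame from three 3D points. The numpy sketch below shows one standard construction; the marker conventions and Euler order are my assumptions, not the paper's definitions, and the grasp channel would need an extra cue such as inter-marker distance:

```python
import numpy as np

def hand_pose(p1, p2, p3):
    """Position and roll/pitch/yaw of the frame spanned by three 3D markers.

    p1 is taken as the origin, p1->p2 as the x-axis, and p3 fixes the plane.
    Returns (position, (roll, pitch, yaw)) with angles in radians.
    """
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    x = (p2 - p1) / np.linalg.norm(p2 - p1)
    z = np.cross(x, p3 - p1)
    z /= np.linalg.norm(z)
    y = np.cross(z, x)
    R = np.column_stack([x, y, z])          # rotation matrix of the hand frame
    # ZYX Euler extraction (yaw about z, pitch about y, roll about x).
    yaw   = np.arctan2(R[1, 0], R[0, 0])
    pitch = np.arcsin(-R[2, 0])
    roll  = np.arctan2(R[2, 1], R[2, 2])
    return p1, (roll, pitch, yaw)

# Markers on the coordinate axes give the identity pose (all angles zero).
pos, (roll, pitch, yaw) = hand_pose([0, 0, 0], [1, 0, 0], [0, 1, 0])
```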

Offline In-Hand 3D Modeling System Using Automatic Hand Removal and Improved Registration Method (자동 손 제거와 개선된 정합방법을 이용한 오프라인 인 핸드 3D 모델링 시스템)

  • Kang, Junseok;Yang, Hyeonseok;Lim, Hwasup;Ahn, Sang Chul
    • Journal of the HCI Society of Korea
    • /
    • v.12 no.3
    • /
    • pp.13-23
    • /
    • 2017
  • In this paper, we propose a new in-hand 3D modeling system that improves user convenience. Because traditional modeling systems are inconvenient to use, in-hand modeling systems, in which the object is manipulated by hand, have been studied; however, they typically require additional equipment or specific constraints to remove the hands from the data for good modeling. We propose a contact-state change-detection algorithm for automatic hand removal, together with an improved ICP algorithm that handles outliers and additionally uses color for accurate registration (a generic colored-ICP call is sketched below). The proposed algorithms enable accurate modeling without additional equipment or constraints. Experiments on real data show that the proposed system achieves accurate modeling under general, unconstrained conditions.
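
Registration with a color term, as the abstract describes, is available off the shelf in recent Open3D releases; the call below is a generic stand-in for that step, not the authors' improved ICP (their outlier handling and exact color weighting are specific to the paper):

```python
import numpy as np
import open3d as o3d

def register_fragment(source, target, voxel=0.01, init=None):
    """Align one RGB-D fragment of the handheld object to the next."""
    src = source.voxel_down_sample(voxel)
    tgt = target.voxel_down_sample(voxel)
    for pcd in (src, tgt):   # colored ICP needs normals on both clouds
        pcd.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
    result = o3d.pipelines.registration.registration_colored_icp(
        src, tgt, voxel * 1.5,
        init if init is not None else np.identity(4),
        o3d.pipelines.registration.TransformationEstimationForColoredICP(),
        o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=50))
    return result.transformation   # 4x4 rigid transform, source -> target
```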

Recognition of Finger Language Using FCM Algorithm (FCM 알고리즘을 이용한 지화 인식)

  • Kim, Kwang-Baek;Woo, Young-Woon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.6
    • /
    • pp.1101-1106
    • /
    • 2008
  • People with hearing difficulties are deprived of satisfactory interaction with hearing people because there are few channels of communication between them: most hearing people cannot understand sign language, the gestural language that people with hearing difficulties use as their principal means of communication. In this paper, we propose a finger-language (fingerspelling) recognition method using the FCM algorithm to enable such communication. In the proposed method, skin regions are extracted from camera images using the YCbCr and HSI color spaces, and the locations of the two hands are traced by applying a 4-directional edge-tracking algorithm to the extracted skin regions. Final hand regions are obtained from the traced regions after noise removal based on morphological information, and are then classified and recognized by the FCM algorithm (the front of this pipeline is sketched below). In experiments on camera images of finger language, we verified that the proposed method effectively extracts both hand regions and recognizes finger language.
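
The front of the pipeline (a skin mask in YCbCr with morphological cleanup) maps directly onto OpenCV, and FCM itself is a few lines of numpy. The threshold bounds and the textbook FCM below are generic assumptions, not the paper's tuned parameters:

```python
import cv2
import numpy as np

def skin_mask(bgr):
    """Rough skin mask in YCrCb with morphological noise removal."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))  # textbook bounds
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

def fcm(data, c, m=2.0, iters=100):
    """Plain fuzzy C-means over an (n, features) array.

    Returns (cluster centers, membership matrix with rows summing to 1).
    """
    data = np.asarray(data, float)
    u = np.random.dirichlet(np.ones(c), size=len(data))    # random memberships
    for _ in range(iters):
        um = u ** m
        centers = um.T @ data / um.sum(axis=0)[:, None]    # fuzzy-weighted means
        d = np.linalg.norm(data[:, None] - centers[None], axis=2) + 1e-9
        u = 1.0 / (d ** (2 / (m - 1)))                     # closed-form update
        u /= u.sum(axis=1, keepdims=True)
    return centers, u
```

`fcm` would then be run on feature vectors extracted from the final hand regions to assign them to fingerspelling classes.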

A Study on the Gesture Based Virtual Object Manipulation Method in Multi-Mixed Reality

  • Park, Sung-Jun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.2
    • /
    • pp.125-132
    • /
    • 2021
  • In this paper, we propose a method for constructing a collaborative mixed-reality environment and for working with wearable IoT devices. Mixed reality (MR) combines virtual and augmented reality: objects in the real and virtual worlds can be viewed at the same time, and unlike VR, MR HMDs do not induce motion sickness. They are wireless and are attracting attention as a technology for industrial applications. The Myo wearable device enables arm-rotation tracking and hand-gesture recognition using a three-axis sensor, an EMG sensor, and an acceleration sensor. Although a variety of MR research is in progress, work on environments in which multiple people participate in mixed reality and manipulate virtual objects with their own hands remains insufficient. We therefore propose a method for constructing a collaborative environment and an interaction method for smooth interaction, aimed at applying mixed reality in real industrial settings. As a result, two people could participate in the mixed-reality environment simultaneously and share a synchronized virtual object, each interacting with it through the Myo wearable interface.

Performance Comparison for Exercise Motion Classification Using Deep Learning-based OpenPose (OpenPose기반 딥러닝을 이용한 운동동작분류 성능 비교)

  • Nam Rye Son;Min A Jung
    • Smart Media Journal
    • /
    • v.12 no.7
    • /
    • pp.59-67
    • /
    • 2023
  • Recently, research on behavior analysis that tracks human posture and movement has been active. OpenPose, open-source software developed at CMU in 2017, is a representative method for estimating human pose and behavior: it can detect and estimate body parts such as the body, face, and hands in real time, making it applicable to fields such as smart healthcare, exercise training, security systems, and medicine. In this paper, we propose a method for classifying the four exercise movements users most commonly perform in the gym, Squat, Walk, Wave, and Fall-down, using OpenPose-based deep learning models, a DNN and a CNN (a minimal keypoint-classifier sketch follows below). Training data are collected by capturing user movements from recorded videos and real-time camera captures, and the collected dataset is preprocessed with OpenPose. The preprocessed dataset is then used to train the proposed DNN and CNN models for exercise-movement classification. Model errors are evaluated with MSE, RMSE, and MAE; the evaluation showed that the proposed DNN model outperformed the proposed CNN model.
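
A pose-keypoint classifier of the kind described needs very little machinery. The PyTorch sketch below assumes OpenPose's BODY_25 output flattened to 50 (x, y) values; the layer widths and class names are placeholders, not the paper's architecture:

```python
import torch
import torch.nn as nn

NUM_KEYPOINTS = 25                      # OpenPose BODY_25 model
CLASSES = ["squat", "walk", "wave", "fall_down"]

# A small DNN over flattened (x, y) keypoints; widths are illustrative only.
model = nn.Sequential(
    nn.Linear(NUM_KEYPOINTS * 2, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, len(CLASSES)),
)

keypoints = torch.rand(1, NUM_KEYPOINTS * 2)   # stand-in for one OpenPose frame
pred = CLASSES[model(keypoints).argmax(dim=1).item()]
```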