• Title/Summary/Keyword: marker vision


Performance Enhancement of the Attitude Estimation using Small Quadrotor by Vision-based Marker Tracking (영상기반 물체추적에 의한 소형 쿼드로터의 자세추정 성능향상)

  • Kang, Seokyong; Choi, Jongwhan; Jin, Taeseok
    • Journal of the Korean Institute of Intelligent Systems, v.25 no.5, pp.444-450, 2015
  • The accuracy of a small, low-cost CCD camera is insufficient to provide data for precisely tracking unmanned aerial vehicles (UAVs). This study shows how a UAV can hover over a human-designated tracking target by using a CCD camera rather than imprecise GPS data. To realize this, UAVs need to recognize their attitude and position in known as well as unknown environments, and their localization should occur naturally. Estimating the UAV's attitude through environment recognition is one of the most important problems for hovering. In this paper, we describe a method for estimating the attitude of a UAV using image information from a marker on the floor. The method combines the position observed from GPS sensors with the attitude estimated from images captured by a fixed camera. Using the a priori known path of the UAV in world coordinates and a perspective camera model, we derive geometric constraint equations that relate the image-frame coordinates of the floor marker to the estimated attitude of the UAV. Since the equations are based on the estimated position, measurement error is always present; the proposed method therefore uses the error between the observed and estimated image coordinates to localize the UAV, applying a Kalman filter scheme. Its performance is verified by image-processing results and experiments.
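
The abstract combines a marker-derived attitude measurement with a motion model through a Kalman filter. Below is a minimal sketch of one linear predict/update cycle, assuming the attitude has already been extracted from the floor-marker image; the state, models, and noise values are illustrative placeholders, not the paper's.

```python
# Minimal Kalman-filter sketch (illustrative): fuse a predicted UAV attitude with
# an attitude measurement derived from a floor-marker image.
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle of a linear Kalman filter."""
    # Predict: propagate state and covariance with the motion model F.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: weigh the marker-derived measurement z by the Kalman gain.
    y = z - H @ x_pred                    # innovation (measurement residual)
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Example: 3-state attitude (roll, pitch, yaw) with a direct attitude measurement.
x, P = np.zeros(3), np.eye(3) * 0.1
F, Q = np.eye(3), np.eye(3) * 1e-3        # random-walk attitude model (assumption)
H, R = np.eye(3), np.eye(3) * 1e-2        # marker-vision measurement noise (assumption)
z = np.radians([1.5, -0.8, 10.0])         # attitude estimated from the marker image
x, P = kalman_step(x, P, z, F, Q, H, R)
```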

An Implementation of Table-top based Augmented Reality System for Motor Rehabilitation of the Paretic Hand (손 마비환자의 재활운동을 위한 테이블-탑 증강현실 시스템 구현)

  • Lee, Seokjun; Park, Kil Houm; Lee, Yang Soo; Kwak, Ho Wan; Moon, Gye Wan; Choi, Jae Hun; Jung, Soon Ki
    • Journal of Korea Multimedia Society, v.16 no.2, pp.254-268, 2013
  • This paper presents an augmented reality (AR) based rehabilitation exercise system to enhance the motor function of the hands of paretic/hemiparetic patients. Existing rehabilitation systems rely on mechanical apparatus, but we aim to make the system usable at home, with easy configuration and minimal equipment, through a computer-vision-based approach. The proposed method evaluates the interaction status of fingertip actions by using the positions and contacts of fingertip markers. We obtain the 2D positions of the fingertip markers from a single camera and then recover their 3D positions in the calibrated camera space by using an ARToolKit marker. A simple geometric calculation converts the 2D interest points into 3D interaction points for the interactive tasks in the AR environment. Experimental results show that the proposed method is practical and readily applicable to applications involving personal AR interaction.
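
Converting a 2D fingertip detection into a 3D interaction point can be illustrated as a ray-plane intersection against the table plane defined by the ARToolKit marker. This is only a sketch under that assumption; the intrinsics and marker pose below are placeholders, and the paper's exact calculation may differ.

```python
# Sketch: lift a 2D fingertip pixel to a 3D point on the marker (table) plane,
# given camera intrinsics K and the marker pose (R, t) in camera coordinates.
import numpy as np

def backproject_to_marker_plane(uv, K, R, t):
    """Intersect the camera ray through pixel uv with the marker's z = 0 plane."""
    ray = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])  # ray direction, camera frame
    n = R[:, 2]                    # marker z-axis expressed in camera coordinates
    s = (n @ t) / (n @ ray)        # ray parameter at the plane intersection
    X_cam = s * ray                # 3D point in camera coordinates
    return R.T @ (X_cam - t)       # same point in marker (table) coordinates

K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], float)   # placeholder intrinsics
R, t = np.eye(3), np.array([0.0, 0.0, 0.5])                      # placeholder marker pose
print(backproject_to_marker_plane((350, 260), K, R, t))          # lands on z = 0
```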

Finger-Gesture Recognition Using Concentric-Circle Tracing Algorithm (동심원 추적 알고리즘을 사용한 손가락 동작 인식)

  • Hwang, Dong-Hyun; Jang, Kyung-Sik
    • Journal of the Korea Institute of Information and Communication Engineering, v.19 no.12, pp.2956-2962, 2015
  • In this paper, we propose a novel algorithm, the concentric-circle tracing algorithm, which recognizes finger shapes and counts the number of extended fingers using a low-cost web camera. The algorithm improves usability by relying on an inexpensive web camera and enhances user comfort by not requiring an additional marker or sensor. Besides counting fingers, it efficiently extracts shape information indicating whether each finger is extended or folded. Experimental results show that finger gestures can be recognized with an average accuracy of 95.48%, confirming that hand gestures are a useful method for HCI input and remote-control commands.
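
A minimal sketch of the concentric-circle idea: sample a circle around the palm center in a binary hand mask and count the foreground runs it crosses. The mask, palm center, and radius are assumed inputs, and the published algorithm's details (multiple circles, wrist handling) are not reproduced here.

```python
# Count how many foreground arcs (finger-like blobs) one circle around the palm
# centre crosses in a binary hand mask. Illustrative only.
import numpy as np

def count_crossings(mask, center, radius, samples=360):
    cx, cy = center
    angles = np.linspace(0, 2 * np.pi, samples, endpoint=False)
    xs = np.clip((cx + radius * np.cos(angles)).astype(int), 0, mask.shape[1] - 1)
    ys = np.clip((cy + radius * np.sin(angles)).astype(int), 0, mask.shape[0] - 1)
    ring = mask[ys, xs] > 0
    # Each 0 -> 1 transition along the circle marks one crossing (finger or wrist).
    return int(np.sum(ring & ~np.roll(ring, 1)))

# Toy example: one vertical finger-like blob above the palm centre -> 1 crossing.
mask = np.zeros((200, 200), np.uint8)
mask[0:100, 95:105] = 255
print(count_crossings(mask, center=(100, 120), radius=60))
```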

Augmented Reality exhibition content implemented using Project Tango (프로젝트 탱고 기반의 증강현실 전시 콘텐츠 구현)

  • Kim, Ji-seong; Lee, Dong-cheol
    • Journal of the Korea Institute of Information and Communication Engineering, v.21 no.12, pp.2312-2317, 2017
  • Museums are converging with digital technology to convey information to visitors in various ways. Augmented reality technology registers virtual objects seamlessly onto the real world and, combined with the information-providing role of exhibits, offers a high sense of immersion and realism because it can engage several of the user's senses. However, location-based augmented reality may register virtual objects inaccurately because of GPS error, and vision-based augmented reality can augment content only at positions where a marker is placed. To address this, we implemented exhibition content that interacts with the real world using Project Tango. The content was built on a Lenovo Phab 2 Pro with the Project Tango SDK in Unity 3D. Visitors experienced improved immersion and realism, and the approach could be combined with various exhibition settings such as shopping malls as well as museums.

ARVisualizer : A Markerless Augmented Reality Approach for Indoor Building Information Visualization System

  • Kim, Albert Hee-Kwan; Cho, Hyeon-Dal
    • Spatial Information Research, v.16 no.4, pp.455-465, 2008
  • Augmented reality (AR) has tremendous potential for visualizing geospatial information, especially on actual physical scenes. However, to use augmented reality in a mobile system, much research has relied on GPS or ubiquitous marker-based approaches. Although several papers address vision-based markerless tracking, previous approaches provide fairly good results only under largely controlled environments. Localization and tracking of the current position become a more complex problem indoors. Many works have proposed radio-frequency (RF) based tracking and localization, but this raises deployment problems with large numbers of RF sensors and readers. In this paper, we present a novel markerless AR approach for an indoor (and possibly outdoor) navigation system using only the monoSLAM (monocular simultaneous localization and map building) algorithm, as part of our broader effort to develop a mobile seamless indoor/outdoor u-GIS system. The paper briefly explains the basic SLAM algorithm and then describes the implementation of our system.
Smart HCI Based on the Informations Fusion of Biosignal and Vision (생체 신호와 비전 정보의 융합을 통한 스마트 휴먼-컴퓨터 인터페이스)

  • Kang, Hee-Su; Shin, Hyun-Chool
    • Journal of the Institute of Electronics Engineers of Korea SC, v.47 no.4, pp.47-54, 2010
  • We propose a smart human-computer interface that replaces the conventional mouse. The interface can control the cursor and issue command actions with the hand alone, without any held object. Four finger motions (left click, right click, hold, and drag) are enough to express all mouse functions. Cursor movement is controlled through image processing. The measures we use for inference are the entropy of the EMG signal, Gaussian modeling, and maximum-likelihood estimation. In the image processing for cursor control, we use color recognition to obtain the center point of the fingertip from a marker and map that point onto the cursor. The accuracy of finger-motion inference is over 95%, and cursor control works naturally without delay. We implemented the whole system to verify its performance and utility.
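
The biosignal side of the abstract names three ingredients: an entropy measure of the EMG signal, Gaussian modeling, and maximum-likelihood estimation. The sketch below strings them together in the simplest way; the per-gesture Gaussian parameters and the signal window are placeholders, not values from the paper.

```python
# Entropy feature from one EMG window, then maximum-likelihood classification
# under per-gesture Gaussian models (placeholder parameters).
import numpy as np

def emg_entropy(window, bins=32):
    """Shannon entropy of the amplitude histogram of one EMG window."""
    hist, _ = np.histogram(window, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def classify_ml(feature, class_params):
    """Pick the gesture whose Gaussian model gives the highest likelihood."""
    def log_lik(mu, sigma):
        return -0.5 * ((feature - mu) / sigma) ** 2 - np.log(sigma)
    return max(class_params, key=lambda g: log_lik(*class_params[g]))

params = {"left_click": (2.1, 0.3), "right_click": (2.8, 0.3),   # hypothetical
          "hold": (3.5, 0.4), "drag": (4.2, 0.4)}                # (mu, sigma) per motion
window = np.random.randn(256) * 0.5                              # stand-in EMG window
print(classify_ml(emg_entropy(window), params))
```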

Human Legs Motion Estimation by using a Single Camera and a Planar Mirror (단일 카메라와 평면거울을 이용한 하지 운동 자세 추정)

  • Lee, Seok-Jun; Lee, Sung-Soo; Kang, Sun-Ho; Jung, Soon-Ki
    • Journal of KIISE: Computing Practices and Letters, v.16 no.11, pp.1131-1135, 2010
  • This paper presents a method to capture the posture of the human lower limbs in 3D space using a single camera and a planar mirror. The system estimates the pose of the camera facing the mirror by using four coplanar IR markers attached to the mirror, and the training space is then set up from the relationship between the mirror and the camera. When a patient steps on the weight board, the system obtains the relative position between the patient's feet. Markers are attached to the sides of both legs, so some markers are invisible to the camera because of self-occlusion; the reflections of the markers in the mirror partially resolve this problem within a single-camera system. The 3D positions of the markers are estimated from the geometric information of the camera in the training space, and the system finally estimates and visualizes the posture and motion of both legs from these 3D marker positions.
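
The geometric core of a camera-plus-mirror setup is that reflecting the real camera across the mirror plane yields a virtual second camera, so a marker seen directly and in the mirror behaves like a stereo observation. Below is a small sketch of that reflection, assuming the mirror plane has already been estimated from the four coplanar IR markers; the plane and pose values are illustrative, not the paper's.

```python
# Reflect a camera pose across the mirror plane n.x + d = 0 (n unit-length) to get
# the virtual camera. Pose convention: R_cw is the camera orientation in the world
# frame and C is the camera centre in world coordinates.
import numpy as np

def reflect_camera(R_cw, C, n, d):
    S = np.eye(3) - 2.0 * np.outer(n, n)   # Householder reflection matrix
    R_virt = S @ R_cw                       # improper rotation: the virtual view is mirrored
    C_virt = S @ C - 2.0 * d * n            # reflected camera centre
    return R_virt, C_virt

n = np.array([0.0, 0.0, 1.0])               # mirror normal (illustrative)
d = -1.0                                    # mirror plane at z = 1 m (illustrative)
R_cw, C = np.eye(3), np.zeros(3)            # real camera at the world origin
print(reflect_camera(R_cw, C, n, d))
```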

Indoor Navigation System for Visually Impaired Persons Using Camera and Range Sensors (카메라와 거리센서를 이용한 시각장애인 실내 보행안내 시스템)

  • Lee, Jin-Hee; Shin, Byeong-Seok
    • Journal of Korea Multimedia Society, v.14 no.4, pp.517-528, 2011
  • In this paper, we propose an indoor navigation system that allows visually impaired persons to walk safely to their destination. The proposed system analyzes images taken with the camera and finds the ID of a marker to identify the pedestrian's absolute position. Using the distance and angle obtained from an IMU (inertial measurement unit) accelerometer and a gyro sensor, the system computes the pedestrian's position relative to the previous position to determine the next direction. At the same time, it simplifies the complex spatial structure in front of the user by means of ultrasonic sensors and determines an avoidance direction by estimating obstacle patterns, and it uses a few IR (infrared) sensors to detect stairs. Our system locates visually impaired persons by fusing multiple sensors and helps users arrive at their destination safely.
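
The positioning logic described above alternates between absolute fixes (a recognized marker ID at a known location) and IMU dead reckoning between markers. A minimal sketch under that reading follows; the marker map, step length, and heading values are made-up placeholders.

```python
# Dead reckoning between marker fixes (illustrative; map and sensor values are made up).
import math

MARKER_POSITIONS = {101: (0.0, 0.0), 102: (5.0, 0.0), 103: (5.0, 8.0)}  # hypothetical map

class PedestrianTracker:
    def __init__(self):
        self.x, self.y, self.heading = 0.0, 0.0, 0.0   # metres, metres, radians

    def on_marker(self, marker_id):
        """Camera recognised a marker: reset to its known absolute position."""
        if marker_id in MARKER_POSITIONS:
            self.x, self.y = MARKER_POSITIONS[marker_id]

    def on_imu_step(self, step_length, heading_change):
        """Accelerometer gave a step length, gyro gave a heading change."""
        self.heading += heading_change
        self.x += step_length * math.cos(self.heading)
        self.y += step_length * math.sin(self.heading)

tracker = PedestrianTracker()
tracker.on_marker(101)
tracker.on_imu_step(0.7, 0.0)            # one 0.7 m step straight ahead
tracker.on_imu_step(0.7, math.pi / 2)    # turn left, then step
print(tracker.x, tracker.y)
```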

Augmented Reality to Localize Individual Organ in Surgical Procedure

  • Lee, Dongheon; Yi, Jin Wook; Hong, Jeeyoung; Chai, Young Jun; Kim, Hee Chan; Kong, Hyoun-Joong
    • Healthcare Informatics Research, v.24 no.4, pp.394-401, 2018
  • Objectives: Augmented reality (AR) technology has become rapidly available and is suitable for various medical applications, since it can provide effective visualization of intricate anatomical structures inside the human body. This paper describes the procedure to develop an AR app with Unity3D and the Vuforia software development kit and publish it to a smartphone for the localization of critical tissues or organs that cannot easily be seen by the naked eye during surgery. Methods: In this study, Vuforia version 6.5 integrated with the Unity Editor was installed on a desktop computer and configured to develop an Android AR app for the visualization of internal organs. Three-dimensional segmented human organs were extracted from a computed tomography file using Seg3D software and overlaid on a target body surface through the developed app with an artificial marker. Results: To aid beginners in using AR technology for medical applications, a 3D model of the thyroid and surrounding structures was created from a thyroid cancer patient's DICOM file and visualized on the neck of a medical training mannequin through the developed AR app. The individual organs, including the thyroid, trachea, carotid artery, jugular vein, and esophagus, were localized on the surgeon's Android smartphone. Conclusions: Vuforia software can help even researchers, students, or surgeons without computer vision expertise to develop an AR app easily and use it to visualize and localize critical internal organs without incision. This could allow AR technology to be utilized extensively for various medical applications.

Scientometrics-based R&D Topography Analysis to Identify Research Trends Related to Image Segmentation (이미지 분할(image segmentation) 관련 연구 동향 파악을 위한 과학계량학 기반 연구개발지형도 분석)

  • Young-Chan Kim; Byoung-Sam Jin; Young-Chul Bae
    • Journal of the Korean Society of Industry Convergence, v.27 no.3, pp.563-572, 2024
  • Image processing and computer vision technologies are becoming increasingly important in a variety of application fields that require techniques and tools for sophisticated image analysis. In particular, image segmentation plays an important role in image analysis. In this study, to identify recent research trends in image segmentation techniques, we used the Web of Science (WoS) database to analyze the R&D topography based on the network structure of the author-keyword co-occurrence matrix. Analysis of research articles published from 2015 to 2023 shows that R&D in this field is largely concentrated in four areas: (1) research on collecting and preprocessing image data to build higher-performance image segmentation models, (2) research on image segmentation using statistics-based models or machine-learning algorithms, (3) research on image segmentation for medical image analysis, and (4) deep-learning-based image segmentation R&D. The scientometrics-based analysis performed in this study not only maps the trajectory of R&D related to image segmentation but can also serve as a marker for future exploration in this dynamic field.
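
The core scientometric step named in the abstract, building an author-keyword co-occurrence matrix from bibliographic records, can be sketched as follows. The records are made-up examples rather than WoS data, and the study's actual pipeline (keyword normalization, thresholding, network mapping) is not reproduced.

```python
# Build a keyword co-occurrence count from per-article author-keyword lists.
from collections import Counter
from itertools import combinations

records = [                                            # made-up example records
    ["image segmentation", "deep learning", "medical imaging"],
    ["image segmentation", "clustering"],
    ["deep learning", "medical imaging", "U-Net"],
]

cooccurrence = Counter()
for keywords in records:
    # Each unordered keyword pair within one article counts as one co-occurrence.
    for a, b in combinations(sorted({k.lower() for k in keywords}), 2):
        cooccurrence[(a, b)] += 1

for (a, b), weight in cooccurrence.most_common(5):
    print(f"{a} -- {b}: {weight}")
```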