• Title/Summary/Keyword: marker vision


Optical Flow-Based Marker Tracking Algorithm for Collaboration Between Drone and Ground Vehicle (드론과 지상로봇 간의 협업을 위한 광학흐름 기반 마커 추적방법)

  • Beck, Jong-Hwan;Kim, Sang-Hoon
    • KIPS Transactions on Software and Data Engineering, v.7 no.3, pp.107-112, 2018
  • In this paper, an optical-flow-based keypoint detection and tracking technique is proposed for collaboration between a vision-equipped flying drone and ground robots. Target detection with a moving vision system poses many challenging problems, so we combine an improved FAST algorithm for feature detection with the Lucas-Kanade method for optical-flow motion tracking, achieving 40% faster processing than previous work. A proposed image binarization method tailored to the given marker also improves marker detection accuracy. We further study how to optimize the embedded system, which performs complex computations for intelligent functions with very limited resources, while maintaining the drone's weight and moving speed. In future work, we aim to develop smarter collaborating robots by learning and recognizing targets even against complex backgrounds.
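The Lucas-Kanade step this abstract combines with FAST amounts to solving, over a window, the overdetermined brightness-constancy system Ix·u + Iy·v = −It for the motion (u, v). A minimal sketch of that least-squares step (NumPy, with fabricated gradient samples consistent with a known motion — not the authors' implementation):

```python
import numpy as np

# Spatial gradients (Ix, Iy) sampled at several pixels inside the tracking
# window. These values are fabricated for illustration; in practice they
# come from image derivatives around a FAST keypoint.
grads = np.array([
    [1.0, 0.0],
    [0.0, 1.0],
    [2.0, 1.0],
    [1.0, 3.0],
])

true_motion = np.array([1.5, -0.5])   # the (u, v) the window actually moved by

# Brightness constancy: Ix*u + Iy*v + It = 0  =>  -It = Ix*u + Iy*v
neg_It = grads @ true_motion

# Lucas-Kanade: least-squares solution of grads @ (u, v) = -It
# via the 2x2 normal equations (A^T A) v = A^T b.
ATA = grads.T @ grads
ATb = grads.T @ neg_It
motion = np.linalg.solve(ATA, ATb)

print(motion)   # recovers [ 1.5 -0.5]
```

The 2×2 system is solvable only when the window's gradients span both directions, which is exactly why a corner detector such as FAST is paired with this tracker.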

Scholarly Assessment of Aruco Marker-Driven Worker Localization Techniques within Construction Environments (Aruco marker 기반 건설 현장 작업자 위치 파악 적용성 분석)

  • Choi, Tae-Hun;Kim, Do-Kuen;Jang, Se-Jun
    • Journal of the Korea Institute of Building Construction, v.23 no.5, pp.629-638, 2023
  • This study introduces an approach to monitoring worker locations in indoor construction settings. While traditional methods such as GPS and NTRIP work well for outdoor localization, their precision degrades indoors. This research therefore adopts ArUco markers: leveraging computer vision, the markers are used to measure the distance between a worker and a marker and thereby pinpoint the worker's location with improved accuracy. The method was rigorously evaluated in a real-world construction scenario, assessing system stability, the influence of lighting conditions, the maximum measurable distance, and the range of recognition angles. Stability was tested by moving the camera at a uniform velocity and gauging marker recognition; the effect of luminosity on marker detectability was examined by varying the ambient lighting; and moving the camera through space established both the maximum distance and the maximum angle at which markers remained recognizable.
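The distance measurement this study relies on reduces, in the simplest case, to the pinhole-camera relation Z = f · W / w, where f is the focal length in pixels, W the physical marker width, and w its apparent width in the image. A hedged sketch (the focal length and marker size below are made-up values, and real ArUco pipelines use full corner-based pose estimation rather than this 1-D approximation):

```python
def marker_distance(focal_px: float, marker_width_m: float, width_px: float) -> float:
    """Approximate camera-to-marker distance from the pinhole model.

    focal_px       -- camera focal length expressed in pixels (illustrative value)
    marker_width_m -- physical side length of the printed marker, in metres
    width_px       -- apparent side length of the marker in the image, in pixels
    """
    return focal_px * marker_width_m / width_px

# A 0.15 m marker seen 60 px wide by a camera with an 800 px focal length:
print(marker_distance(800.0, 0.15, 60.0))  # -> 2.0 metres
```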

Indoor Location and Pose Estimation Algorithm using Artificial Attached Marker (인공 부착 마커를 활용한 실내 위치 및 자세 추정 알고리즘)

  • Ahn, Byeoung Min;Ko, Yun-Ho;Lee, Ji Hong
    • Journal of Korea Multimedia Society, v.19 no.2, pp.240-251, 2016
  • This paper presents a real-time indoor location and pose estimation method that uses simple artificial markers and image analysis techniques for warehouse automation. Conventional indoor localization methods cannot work robustly in warehouses, where severe environmental changes commonly occur as stocked goods are moved. To overcome this problem, the proposed framework places artificial markers with distinct interior patterns at predefined positions on the warehouse floor. The algorithm obtains marker candidate regions from a captured image by a simple binarization and labeling procedure, then extracts the interior pattern information from each candidate region to decide whether it is a true marker. The extracted interior pattern and the marker's outer boundary are used to estimate the location and heading angle of the localization system. Experimental results show that the proposed method provides performance almost equivalent to that of a conventional method using an expensive LIDAR sensor and the AMCL algorithm.
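The candidate-extraction step described above — binarize, then label connected regions — can be sketched in plain Python (the toy image and threshold are invented for illustration; the paper's marker verification from interior patterns is omitted):

```python
from collections import deque

def binarize(img, thresh):
    """Simple global threshold: 1 where the pixel exceeds thresh, else 0."""
    return [[1 if p > thresh else 0 for p in row] for row in img]

def label_components(binary):
    """4-connected component labeling via BFS; returns the number of blobs."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                count += 1                      # new marker candidate found
                q = deque([(y, x)])
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return count

# Toy grayscale image containing two bright blobs (marker candidates).
img = [
    [0, 200, 200, 0,   0],
    [0, 200,   0, 0,   0],
    [0,   0,   0, 0, 180],
    [0,   0,   0, 0, 180],
]
print(label_components(binarize(img, 100)))  # -> 2 candidate regions
```

Each labeled region would then be checked against the expected interior pattern to reject false candidates.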

Vision-based Motion Control for the Immersive Interaction with a Mobile Augmented Reality Object (모바일 증강현실 물체와 몰입형 상호작용을 위한 비전기반 동작제어)

  • Chun, Jun-Chul
    • Journal of Internet Computing and Services, v.12 no.3, pp.119-129, 2011
  • Vision-based human-computer interaction is an emerging field of science and industry that provides a natural way for humans and computers to communicate. In particular, growing demand for mobile augmented reality requires efficient interaction techniques between users and augmented virtual objects. This paper presents a novel approach to constructing and controlling a marker-less mobile augmented reality object. Replacing a traditional marker, the human hand serves as the interface of the marker-less system. To implement marker-less mobile augmentation within the limited resources of a mobile device, compared with desktop environments, we propose a method that extracts an optimal hand region, which plays the role of the marker, and augments the object in real time using the camera attached to the mobile device. Optimal hand region detection consists of detecting the hand region with a YCbCr skin color model and extracting the optimal rectangular region with the Rotating Calipers algorithm; the extracted rectangle then takes the role of a traditional marker. The proposed method resolves the problem of losing track of fingertips when the hand is rotated or occluded in hand-marker systems. Experiments show that the proposed framework can effectively construct and control augmented virtual objects in mobile environments.
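The skin-color gate in the pipeline above can be approximated with the BT.601 RGB→YCbCr conversion and a fixed Cb/Cr box; the range [77, 127] × [133, 173] below is a commonly cited skin interval, not necessarily this paper's exact thresholds:

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    y  =       0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b):
    """Classify a pixel as skin if (Cb, Cr) falls inside a typical skin box.

    The [77, 127] x [133, 173] bounds are a widely used heuristic, assumed
    here for illustration.
    """
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return 77 <= cb <= 127 and 133 <= cr <= 173

print(is_skin(200, 150, 120))  # typical skin tone -> True
print(is_skin(0, 255, 0))      # saturated green   -> False
```

The binary skin mask produced this way is what the Rotating Calipers step would then enclose with a minimum-area rectangle.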

A Fast Way for Alignment Marker Detection and Position Calibration (Alignment Marker 고속 인식 및 위치 보정 방법)

  • Moon, Chang Bae;Kim, HyunSoo;Kim, HyunYong;Lee, Dongwon;Kim, Tae-Hoon;Chung, Hae;Kim, Byeong Man
    • KIPS Transactions on Software and Data Engineering, v.5 no.1, pp.35-42, 2016
  • The core of the machine vision frequently used at the pre-/post-production stages is marker alignment technology. In this paper, we propose a method to detect, at high speed, the angle and position of a product using a unique pattern in the marker stamped on it, and to calibrate them. To determine the marker's angle and position, marker candidates are extracted using a variation of the integral histogram, and clustering is then applied to reduce the candidates. Experimental results show an improvement of about 5 s 719 ms in processing time and better precision in detecting the product's rotation angle.
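The integral histogram the method varies builds on the summed-area idea: after one pass, any rectangular region statistic can be read in O(1) from four table lookups, which is what makes scanning many candidate windows fast. A minimal NumPy sketch of that core trick (with a made-up image; the paper's per-bin histogram extension is not shown):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero first row/column for clean indexing."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def region_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] recovered from four table lookups."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(6, 8))     # toy image
ii = integral_image(img)
print(region_sum(ii, 1, 2, 4, 7) == img[1:4, 2:7].sum())  # -> True
```

An integral histogram keeps one such table per intensity bin, so whole-window histograms come at the same O(1) query cost.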

3D Orientation and Position Tracking System of Surgical Instrument with Optical Tracker and Internal Vision Sensor (광추적기와 내부 비전센서를 이용한 수술도구의 3차원 자세 및 위치 추적 시스템)

  • Joe, Young Jin;Oh, Hyun Min;Kim, Min Young
    • Journal of Institute of Control, Robotics and Systems, v.22 no.8, pp.579-584, 2016
  • When surgical instruments are tracked in an image-guided surgical navigation system, a high-accuracy stereo vision system, called an optical tracker, is generally used. However, an optical tracker has the disadvantage that the line of sight between the tracker and the surgical instrument must be maintained. To compensate for this, an internal vision sensor is attached to the surgical instrument in this paper. By monitoring the target marker pattern attached to the patient with this vision sensor, the instrument can be tracked even when the optical tracker's line of sight is occluded. A series of basic experiments verifies the system's effectiveness, followed by an integration experiment. The results show a rotational error bounded by a maximum of 1.32° (mean 0.35°) and a translational error within a maximum of 1.72 mm (mean 0.58 mm), confirming that the proposed tool tracking method using an internal vision sensor is useful and effective for overcoming the occlusion problem of the optical tracker.
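The fallback described above is, at its core, rigid-transform bookkeeping: when the optical tracker is occluded, the tool pose relative to the patient marker can still be chained from the tool-to-camera calibration and the camera's marker observation. A sketch with made-up, translation-only 4×4 poses (real poses carry rotation from a pose estimator, which this simplification omits):

```python
import numpy as np

def translation(x, y, z):
    """4x4 homogeneous transform with identity rotation (illustrative only)."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

# Hypothetical calibration/observation chain:
T_tool_cam   = translation(0.0, 0.0, 0.10)    # internal camera in the tool frame
T_cam_marker = translation(0.05, 0.0, 0.30)   # patient marker as seen by that camera

# Tool pose relative to the patient marker, obtained without the optical tracker:
T_tool_marker = T_tool_cam @ T_cam_marker

print(T_tool_marker[:3, 3])  # translation components: [0.05, 0.0, 0.4]
```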

A Study on Autonomous Indoor Flight using Computer Vision System and Smartphone (컴퓨터비전과 스마트폰을 활용한 실내 자동비행체에 관한 연구)

  • Choi, Young;Kim, Kye-Young
    • KIPS Transactions on Software and Data Engineering, v.2 no.5, pp.353-358, 2013
  • In this paper, we present an indoor flying vehicle capable of hands-off autonomous navigation to designated places within indoor environments. The vehicle relies on computer vision techniques and a smartphone so that it can be flown indoors without high-performance sensors, which are too expensive for commercialization. Experimental results show that the proposed implementation performs meaningfully in a typical building.

A Method of Lane Marker Detection Robust to Environmental Variation Using Lane Tracking (차선 추적을 이용한 환경변화에 강인한 차선 검출 방법)

  • Lee, Jihye;Yi, Kang
    • Journal of Korea Multimedia Society, v.21 no.12, pp.1396-1406, 2018
  • Lane detection is a key function in developing autonomous vehicle technology. In this paper, we propose a lane marker detection algorithm robust to environmental variation and targeted at low-cost embedded computing devices. The algorithm consists of two phases: an initialization phase, which is slow but relatively accurate, and a tracking phase, which is fast and reliable under limited conditions. The initialization phase detects lane markers using a set of filters that exploit various lane marker features. The tracking phase uses a Kalman filter to accelerate detection; during tracking, we measure the reliability of the detection results and switch back to the initialization phase if the confidence level falls below a threshold. By combining the two phases, we achieve high accuracy and acceptable speed even on low-cost computing resources that cannot run computation-intensive approaches such as deep learning. Experimental results show a detection accuracy of about 95% on average and a processing speed of about 20 frames per second on a Raspberry Pi 3, a low-cost device.
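The tracking phase's Kalman update, and the fall-back to the slower initialization phase when confidence drops, can be sketched with a scalar filter on a single lane-marker coordinate (the noise values and the 0.5 confidence threshold are illustrative, not the paper's):

```python
class ScalarKalman:
    """1-D Kalman filter tracking, e.g., a lane marker's lateral position."""

    def __init__(self, x=0.0, p=1.0, q=0.0, r=1.0):
        # x: state estimate, p: its variance,
        # q: process noise, r: measurement noise (illustrative values).
        self.x, self.p, self.q, self.r = x, p, q, r

    def step(self, z):
        self.p += self.q                  # predict: inflate uncertainty
        k = self.p / (self.p + self.r)    # Kalman gain
        self.x += k * (z - self.x)        # update with measurement z
        self.p *= (1.0 - k)
        return self.x

def choose_phase(confidence, threshold=0.5):
    """Switch back to full detection when tracking confidence collapses."""
    return "tracking" if confidence >= threshold else "initialization"

kf = ScalarKalman()
print(kf.step(10.0))        # -> 5.0 (pulled halfway toward the measurement)
print(choose_phase(0.9))    # -> tracking
print(choose_phase(0.2))    # -> initialization
```

The two-phase structure trades a rare, expensive re-detection for a cheap per-frame update, which is what makes 20 fps feasible on a Raspberry Pi-class device.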

Vision-based hybrid 6-DOF displacement estimation for precast concrete member assembly

  • Choi, Suyoung;Myeong, Wancheol;Jeong, Yonghun;Myung, Hyun
    • Smart Structures and Systems, v.20 no.4, pp.397-413, 2017
  • Precast concrete (PC) members are currently employed for general construction or partial replacement to shorten the construction period. Because assembly work in PC construction requires connecting PC members accurately, measuring their 6-DOF (degree-of-freedom) relative displacement is essential. Systems based on multiple planar markers and cameras can monitor the 6-DOF relative displacement of PC members, but conventional methods such as direct linear transformation (DLT) for homography estimation, applied to calculate the displacement between camera and marker, have several major problems: when the marker is partially hidden, the DLT method cannot be applied at all, and when the marker images are blurred, its estimation error increases. To solve these problems, a hybrid method combining the advantages of DLT and MCL (Monte Carlo localization) is proposed, which evaluates the 6-DOF relative displacement more accurately than either method alone. Each subsystem captures an image of a marker, extracts its subpixel coordinates, and transfers the data to a main system via a wireless communication network; the main system uses the data for 3D visualization, and the real-time movements of the PC members are displayed on a tablet PC. To prove its feasibility, the hybrid method is compared with the DLT method and MCL in real experiments.
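The DLT step the hybrid method builds on estimates the 3×3 homography H from point correspondences by stacking two linear equations per pair and taking the null space via SVD. A self-contained NumPy sketch with four synthetic correspondences (a textbook DLT, not the authors' code):

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate H (up to scale) with dst ~ H @ src via the direct linear transform."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two rows of A in A @ h = 0.
        rows.append([-x, -y, -1,  0,  0,  0, u * x, u * y, u])
        rows.append([ 0,  0,  0, -x, -y, -1, v * x, v * y, v])
    A = np.array(rows)
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)     # null-space vector = homography entries
    return H / H[2, 2]           # normalize so H[2, 2] == 1

# Synthetic test: a known scale-plus-translation homography.
H_true = np.array([[2.0, 0.0, 1.0],
                   [0.0, 2.0, 2.0],
                   [0.0, 0.0, 1.0]])
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
dst = [(2 * x + 1, 2 * y + 2) for x, y in src]
H_est = dlt_homography(src, dst)
print(np.allclose(H_est, H_true))  # -> True
```

The four-point minimum is exactly why partial marker occlusion breaks plain DLT, motivating the MCL fallback the paper proposes.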

Cooperative UAV/UGV Platform for a Wide Range of Visual Information (광범위 시야 정보를 위한 UAV와 UGV의 협업 연구)

  • Lee, Jae-Keun;Jung, Hahmin;Kim, Dong Hun
    • Journal of the Korean Institute of Intelligent Systems, v.24 no.3, pp.225-232, 2014
  • In this study, a cooperative UAV and UGV platform is proposed to obtain a wide range of visual information. The UAV recognizes a pattern marker on the UGV and tracks the UGV without user control, providing wide-range visual information to the user in the UGV. The UGV, controlled by a user, is equipped with an aluminum board on which the UAV can take off and land. The UAV carries two cameras: one to recognize the pattern marker and another to provide wide-range visual information to the UGV's user. The proposed vision-based approach reliably detects and tracks the target marker on the UGV and then lands on it. Experimental results show that the approach effectively realizes a cooperative UAV/UGV platform for obtaining wide-range visual information.
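The marker-following behavior — keeping the UGV's pattern marker centered in the UAV's camera — reduces, in its simplest form, to a proportional controller on the pixel error (the gain, image size, and sign convention below are invented for illustration, not the paper's controller):

```python
def follow_command(marker_cx, marker_cy, img_w=640, img_h=480, kp=0.01):
    """Proportional velocity command steering the UAV toward the marker.

    Returns (vx, vy) proportional to the marker's pixel offset from the
    image center; the flight controller would translate these into motion
    that re-centers the marker. Gain kp and image size are assumed values.
    """
    ex = marker_cx - img_w / 2    # horizontal pixel error
    ey = marker_cy - img_h / 2    # vertical pixel error
    return kp * ex, kp * ey

# Marker seen 100 px right of and 50 px above the image center:
print(follow_command(420, 190))  # -> (1.0, -0.5)
```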