• Title/Summary/Keyword: vision-based technology

Localization using Ego Motion based on Fisheye Warping Image

  • Choi, Yun Won; Choi, Kyung Sik; Choi, Jeong Won; Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.20 no.1 / pp.70-77 / 2014
  • This paper proposes a novel ego-motion-based localization algorithm that uses Lucas-Kanade optical flow on warped images obtained through fish-eye lenses mounted on a robot. An omnidirectional image sensor is desirable for real-time view-based recognition because all of the information around the robot is captured simultaneously. Preprocessing (distortion correction, image merging, etc.) of the omnidirectional image, whether it is acquired with a catadioptric mirror or stitched from multiple cameras, is essential because the raw image is difficult to interpret directly. The core of the proposed algorithm may be summarized as follows: First, we capture instantaneous 360° panoramic images around the robot through downward-facing fish-eye lenses. Second, we extract motion vectors from the preprocessed images using Lucas-Kanade optical flow. Third, we estimate the robot's position and heading with an ego-motion method that uses the directions of the motion vectors and the vanishing point obtained by RANSAC. We confirmed the reliability of the proposed algorithm by comparing the positions and angles it estimated with those measured by a Global Vision Localization System.
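
    The optical-flow step of this abstract maps onto a standard routine; below is a minimal sketch using OpenCV's pyramidal Lucas-Kanade tracker. Frame names (`prev_gray`, `curr_gray`) and parameter values are illustrative assumptions, and the paper's fisheye warping and RANSAC vanishing-point stages are not reproduced.

    ```python
    import cv2
    import numpy as np

    def extract_motion_vectors(prev_gray, curr_gray):
        """Track corners between two preprocessed (dewarped) panoramic frames."""
        # Pick corner features in the previous frame.
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=7)
        if pts is None:
            return np.empty((0, 2)), np.empty((0, 2))
        # Track them into the current frame with pyramidal Lucas-Kanade.
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None,
                                                  winSize=(21, 21), maxLevel=3)
        ok = status.ravel() == 1
        p0 = pts.reshape(-1, 2)[ok]
        p1 = nxt.reshape(-1, 2)[ok]
        return p0, p1 - p0  # feature positions and their motion vectors
    ```

    The returned vectors are what a RANSAC stage would consume to vote for a vanishing point and, from it, the rotational component of the ego-motion.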

Tele-operating System of Field Robot for Cultivation Management - Vision based Tele-operating System of Robotic Smart Farming for Fruit Harvesting and Cultivation Management

  • Ryuh, Youngsun; Noh, Kwang Mo; Park, Joon Gul
    • Journal of Biosystems Engineering / v.39 no.2 / pp.134-141 / 2014
  • Purposes: This study validated a Robotic Smart Work System that provides better working conditions and high productivity in unstructured environments such as the bio-industry, based on a tele-operation system for fruit harvesting with a low-cost 3-D positioning system at the laboratory level. Methods: For a Robotic Smart Work System for fruit harvesting and cultivation management in agriculture, a vision-based tele-operating system and 3-D position information are the key elements. This study proposed Robotic Smart Farming, an agricultural version of the Robotic Smart Work System, and validated a 3-D position information system built from a low-cost omni camera and a laser marker system in the lab environment. Results: Tasks such as harvesting a fixed target and cultivation management were accomplished even with a short time delay (30 ms to 100 ms). Although automated conveyor work requiring accurate timing and positioning yields high productivity, tele-operation guided by the user's intuition is more efficient in unstructured environments that require target selection and judgment. Conclusions: The system increased work efficiency and stability by combining ancillary intelligence with the user's experience and know-how. In addition, senior and female workers can operate the system easily because it reduces labor and minimizes user fatigue.

Intelligent System based on Command Fusion and Fuzzy Logic Approaches - Application to mobile robot navigation

  • Jin, Taeseok; Kim, Hyun-Deok
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.5 / pp.1034-1041 / 2014
  • This paper proposes a fuzzy inference model for obstacle avoidance by a mobile robot with an active camera that intelligently searches for the goal location in unknown environments using command fusion based on situational commands from a vision sensor. Instead of a "physical sensor fusion" method, which generates the robot's trajectory from an environment model and sensory data, a "command fusion" method is used to govern the robot's motions. The navigation strategy combines fuzzy rules tuned for both goal approach and obstacle avoidance. We describe experimental results obtained with the proposed method that demonstrate successful navigation using real vision data.
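
    Command fusion of this kind can be illustrated with a toy two-behavior controller: each behavior emits a steering command plus a fuzzy weight, and the fused command is a weighted average. The membership shape, angles, and distances below are illustrative assumptions, not the paper's tuned rule base.

    ```python
    def trimf(x, a, b, c):
        """Triangular membership function on [a, c] peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def fuse_commands(goal_bearing_deg, obstacle_bearing_deg, obstacle_dist_m):
        """Blend a goal-approach command with an obstacle-avoidance command."""
        # Goal-approach behavior: steer straight toward the goal.
        goal_cmd, goal_w = goal_bearing_deg, 1.0
        # Obstacle-avoidance behavior: steer away from the obstacle's side,
        # with a weight that rises as the obstacle gets closer (< 1.5 m).
        avoid_cmd = -90.0 if obstacle_bearing_deg >= 0 else 90.0
        avoid_w = trimf(obstacle_dist_m, -0.1, 0.0, 1.5)
        # Defuzzify by weighted averaging of the two commands.
        return (goal_w * goal_cmd + avoid_w * avoid_cmd) / (goal_w + avoid_w)
    ```

    With a distant obstacle the avoidance weight vanishes and the robot heads to the goal; as the obstacle nears, its command progressively dominates without any hard mode switch, which is the point of fusing commands rather than sensor data.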

Omni Camera Vision-Based Localization for Mobile Robots Navigation Using Omni-Directional Images

  • Kim, Jong-Rok; Lim, Mee-Seub; Lim, Joon-Hong
    • Journal of Institute of Control, Robotics and Systems / v.17 no.3 / pp.206-210 / 2011
  • Vision-based robot localization is challenging because of the vast amount of visual information involved, which demands extensive storage and processing time. To deal with these challenges, we propose the use of features extracted from omni-directional panoramic images and present a localization method for a mobile robot equipped with an omni-directional camera. The core of the proposed scheme may be summarized as follows: First, we utilize an omni-directional camera that captures instantaneous 360° panoramic images around the robot. Second, nodes around the robot are extracted from the correlation coefficients of the circular horizontal line between the stored landmark image and the currently captured image. Third, the robot's position is determined from these locations by the proposed correlation-based landmark image matching. To accelerate computation, node candidates are pre-selected using color information, and the correlation values are calculated with Fast Fourier Transforms. Experiments show that the proposed method is effective for global localization of mobile robots and robust to lighting variations.
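
    The FFT speed-up rests on the correlation theorem: the circular cross-correlation of two 1-D intensity profiles is the inverse FFT of one spectrum times the conjugate of the other. A minimal sketch follows (array names are assumed; the paper's color-based candidate filtering is omitted).

    ```python
    import numpy as np

    def circular_correlation(landmark_line, current_line):
        """Normalized circular cross-correlation of two 1-D intensity profiles."""
        # Z-normalize so the peak value behaves like a correlation coefficient.
        a = (landmark_line - landmark_line.mean()) / (landmark_line.std() + 1e-9)
        b = (current_line - current_line.mean()) / (current_line.std() + 1e-9)
        # Correlation theorem: circ-corr(a, b) = IFFT(FFT(a) * conj(FFT(b))).
        corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real / len(a)
        shift = int(np.argmax(corr))  # best circular alignment (heading offset)
        return corr[shift], shift
    ```

    For lines of length N this costs O(N log N) instead of the O(N²) of testing every circular shift directly, which is what makes matching against many landmark nodes feasible.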

3D Vision-Based Local Path Planning System of a Humanoid Robot for Obstacle Avoidance

  • Kang, Tae-Koo; Lim, Myo-Taeg; Park, Gwi-Tae; Kim, Dong W.
    • Journal of Electrical Engineering and Technology / v.8 no.4 / pp.879-888 / 2013
  • This paper addresses a vision-based local path planning system for obstacle avoidance. To handle obstacles that lie beyond the field of view (FOV), we propose a Panoramic Environment Map (PEM) built with the MDGHM-SIFT algorithm. Moreover, we propose a Complexity Measure (CM) and a Fuzzy logic-based Avoidance Motion Selection (FAMS) system that enable a humanoid robot to decide its own direction and walking motion automatically when avoiding an obstacle. The CM automates the choice of avoidance direction, whereas the FAMS system chooses the avoidance path and walking motion based on environmental conditions such as the size of the obstacle and the available space around it. The proposed system was applied to a humanoid robot of our own design. The experimental results show that the proposed method can effectively decide the avoidance direction and walking motion of a humanoid robot.
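
    Building a panoramic map hinges on matching keypoints across overlapping views. The sketch below uses OpenCV's standard SIFT as a stand-in for the paper's MDGHM-SIFT variant, with Lowe's ratio test to keep only distinctive matches.

    ```python
    import cv2

    def match_views(img1, img2):
        """Match SIFT keypoints between two overlapping grayscale views."""
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)
        if des1 is None or des2 is None:
            return kp1, kp2, []
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        # Lowe's ratio test: accept a match only if it clearly beats the runner-up.
        good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
                if m.distance < 0.75 * n.distance]
        return kp1, kp2, good
    ```

    The matched pairs feed a homography or stitching step that extends the map beyond the instantaneous FOV.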

Real-Time Precision Vehicle Localization Using Numerical Maps

  • Han, Seung-Jun; Choi, Jeongdan
    • ETRI Journal / v.36 no.6 / pp.968-978 / 2014
  • Autonomous vehicle technology based on information technology and software will lead the automotive industry in the near future. Vehicle localization is a core technology for developing autonomous vehicles, providing the location information needed for control and decision-making. This paper proposes an effective vision-based localization technology for autonomous vehicles. In particular, it makes use of numerical maps that are widely used in the field of geographic information systems and have already been built in advance. Vehicle ego-motion estimation and road-marking feature extraction techniques are adopted and then combined by an extended Kalman filter and a particle filter to form the localization system. The implementation shows remarkable results: an 18 ms mean processing time and a 10 cm location error. In addition, autonomous driving and parking were successfully completed with an unmanned vehicle within a 300 m × 500 m space.
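
    To make the filtering idea concrete, here is a minimal extended Kalman filter step for a planar pose [px, py, heading]: ego-motion drives the prediction, and a map-matched road-marking position fix drives the correction. The state layout and noise matrices are illustrative assumptions, and the paper's particle-filter stage is not reproduced.

    ```python
    import numpy as np

    def ekf_step(x, P, ego_motion, z_fix, Q, R):
        """x = [px, py, heading]; ego_motion = (ds, dtheta); z_fix = (px, py) fix."""
        ds, dth = ego_motion
        # Predict: move ds along the current heading, then rotate by dtheta.
        x_pred = x + np.array([ds * np.cos(x[2]), ds * np.sin(x[2]), dth])
        F = np.array([[1.0, 0.0, -ds * np.sin(x[2])],
                      [0.0, 1.0,  ds * np.cos(x[2])],
                      [0.0, 0.0,  1.0]])
        P_pred = F @ P @ F.T + Q
        # Update with the position measurement from road-marking map matching.
        H = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])
        y = z_fix - H @ x_pred                     # innovation
        S = H @ P_pred @ H.T + R                   # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
        return x_pred + K @ y, (np.eye(3) - K @ H) @ P_pred
    ```

    In a combined scheme of this kind, a particle filter typically handles the multi-modal map-matching ambiguity while the EKF smooths the ego-motion between fixes.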

Development of a Test Environment for Performance Evaluation of the Vision-aided Navigation System for VTOL UAVs

  • Sebeen Park; Hyuncheol Shin; Chul Joo Chung
    • Journal of Advanced Navigation Technology / v.27 no.6 / pp.788-797 / 2023
  • In this paper, we introduce a test environment for evaluating a vision-aided navigation system, an alternative navigation system for vertical take-off and landing (VTOL) unmanned aerial vehicles when the global positioning system (GPS) is unavailable. It is efficient to use a virtual environment to test and evaluate a vision-aided navigation system under development, but no suitable equipment has yet been developed in Korea. The proposed test environment therefore evaluates the navigation system by generating modeled input signals, simulating the system's operating environment, and monitoring its output signals. This paper comprehensively describes the research procedure, from the derivation of the requirement specifications through the hardware/software design that meets them to the production of the test environment. The test environment has been used to evaluate the vision-aided navigation algorithm we are developing and to conduct simulation-based pre-flight tests.

An Improved Multiple Interval Pixel Sampling based Background Subtraction Algorithm

  • Mahmood, Muhammad Tariq; Choi, Young Kyu
    • Journal of the Semiconductor & Display Technology / v.18 no.3 / pp.1-6 / 2019
  • Foreground/background segmentation in video sequences is often one of the first tasks in machine vision applications, making it a critical part of the system. In this paper, we present an improved sample-based technique that provides a robust background image as well as a segmentation mask. The conventional multiple interval sampling (MIS) algorithm suffers from unbalanced computation time across frames and rapid changes in the confidence factor of background pixels. To balance the amount of computation, a random pixel-update scheme is proposed, and spatial and temporal smoothing is adopted to increase the reliability of the confidence factor. The proposed method lets the sampling queue hold data that are more dispersed in time and space, and it yields a more continuous and reliable confidence factor. Experimental results show that our method estimates a stable background image and foreground mask.
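
    The random-update idea can be sketched with a ViBe-style sample model: each pixel keeps N past samples, a pixel is classified as background if enough samples lie close to its current value, and background pixels refresh one randomly chosen sample with small probability, spreading updates evenly over time. The thresholds below are illustrative, not the paper's MIS parameters.

    ```python
    import numpy as np

    def segment_and_update(frame, samples, radius=20, min_matches=2, subsample=16):
        """frame: HxW uint8 gray image; samples: NxHxW stack of past samples."""
        # A pixel is background if enough stored samples are within `radius`.
        diff = np.abs(samples.astype(np.int16) - frame.astype(np.int16))
        background = (diff < radius).sum(axis=0) >= min_matches
        # Randomly refresh one stored sample per background pixel at rate
        # 1/subsample, so update cost is spread evenly across frames.
        h, w = frame.shape
        lucky = background & (np.random.randint(0, subsample, (h, w)) == 0)
        idx = np.random.randint(0, samples.shape[0], (h, w))
        ys, xs = np.nonzero(lucky)
        samples[idx[ys, xs], ys, xs] = frame[ys, xs]
        return ~background  # foreground mask
    ```

    Because only a random fraction of background pixels is rewritten per frame, the per-frame cost stays nearly constant, which is exactly the computation-balancing property the abstract claims.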

U2Net-based Single-pixel Imaging Salient Object Detection

  • Zhang, Leihong; Shen, Zimin; Lin, Weihong; Zhang, Dawei
    • Current Optics and Photonics / v.6 no.5 / pp.463-472 / 2022
  • At certain wavelengths, single-pixel imaging is considered a solution that can achieve high-quality imaging while reducing cost. However, imaging complex scenes is an overhead-intensive process for single-pixel imaging systems, so low efficiency and high consumption are the biggest obstacles to their practical application; improving efficiency to reduce overhead addresses this problem. Salient object detection is commonly used as a pre-processing step in computer vision tasks, mimicking the human ability to focus on the information-rich regions of complex natural scenes, thereby reducing overhead and improving efficiency. In this paper, we therefore explore salient object detection for single-pixel imaging and propose a scheme that reconstructs images from Fourier-basis measurements and applies a U2Net model for salient object detection.
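
    Fourier-basis single-pixel reconstruction is commonly done with four-step phase shifting: for each sampled spatial frequency, four sinusoidal patterns offset by π/2 are projected, and the four bucket readings combine into one complex Fourier coefficient. Below is a simulation-style sketch; `measure(pattern)` stands in for the physical single-pixel detector and is an assumption, as is the sampled frequency set.

    ```python
    import numpy as np

    def fourier_spi_reconstruct(measure, h, w, freqs):
        """freqs: iterable of integer (fy, fx) spatial frequencies to sample."""
        yy, xx = np.mgrid[0:h, 0:w]
        spectrum = np.zeros((h, w), dtype=complex)
        for fy, fx in freqs:
            readings = []
            for phase in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2):
                # Sinusoidal illumination pattern at this frequency and phase.
                pattern = 0.5 + 0.5 * np.cos(
                    2 * np.pi * (fy * yy / h + fx * xx / w) + phase)
                readings.append(measure(pattern))  # single-pixel measurement
            i0, i90, i180, i270 = readings
            # Four-step phase shifting yields one complex Fourier coefficient.
            spectrum[fy % h, fx % w] = (i0 - i180) + 1j * (i90 - i270)
        # Conjugate-symmetric sampling is left to the caller; take the real part.
        return np.fft.ifft2(spectrum).real

    # Example with a simulated bucket detector on a known scene:
    # measure = lambda p: (p * scene).sum()
    ```

    Sampling only the low-frequency coefficients is what lets such systems trade resolution for far fewer measurements, which is the efficiency lever the abstract targets.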

Indoor Surveillance Camera based Human Centric Lighting Control for Smart Building Lighting Management

  • Yoon, Sung Hoon; Lee, Kil Soo; Cha, Jae Sang; Mariappan, Vinayagam; Lee, Min Woo; Woo, Deok Gun; Kim, Jeong Uk
    • International Journal of Advanced Culture Technology / v.8 no.1 / pp.207-212 / 2020
  • Human-centric lighting (HCL) control is a major focus of smart lighting system design, providing energy-efficient lighting attuned to occupants' moods and rhythms in smart buildings. This paper proposes HCL control using indoor surveillance cameras to improve occupant motivation and well-being in indoor environments such as residential and industrial buildings. In the proposed approach, indoor surveillance camera video streams are used to predict daylight levels, occupancy, and occupant-specific emotional features with advanced computer vision techniques, and these human-centric features are transmitted to the smart building light management system. The light management system is connected to Internet of Things (IoT) lighting devices and controls the illumination of the lighting devices relevant to each occupant. An experimental model of the proposed concept was implemented using RGB LED lighting devices connected to an IoT-enabled open-source controller on the network, along with a networked video surveillance solution. The experimental results were verified with a custom automatic lighting control daemon application integrating OpenCV-based computer vision methods to predict the human-centric features; based on the estimated features, the lighting illumination level and colors are controlled automatically. The results from the daemon system are analyzed and used to develop a real-time lighting control strategy.
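
    As a concrete stand-in for the occupancy part of such a pipeline, the sketch below runs OpenCV's built-in HOG person detector on a surveillance frame and maps the count to a dimming level. The detector choice and the two-level policy are illustrative assumptions; the paper's emotional-feature estimation and IoT transport are not reproduced.

    ```python
    import cv2

    # OpenCV ships a pretrained HOG + linear-SVM pedestrian detector.
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    def lighting_command(frame_bgr):
        """Return (occupant count, brightness 0-255) for one surveillance frame."""
        boxes, _ = hog.detectMultiScale(frame_bgr, winStride=(8, 8))
        occupants = len(boxes)
        # Simple policy: dim when the room is empty, full brightness otherwise.
        brightness = 40 if occupants == 0 else 255
        return occupants, brightness
    ```

    In a deployed system the brightness value would be published to the IoT lighting controller (for example over MQTT, a hypothetical transport here) rather than returned to the caller.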