• Title/Summary/Keyword: vision-based technology

The Performance Analysis of Integrated Navigation System Based on the Tactical Communication and VISION for the Accurate Localization of Unmanned Robot (무인로봇 정밀위치추정을 위한 전술통신 및 영상 기반의 통합항법 성능 분석)

  • Choi, Ji-Hoon;Park, Yong-Woon;Song, Jae-Bok;Kweon, In-So
    • Journal of the Korea Institute of Military Science and Technology / v.14 no.2 / pp.271-280 / 2011
  • This paper presents a navigation system based on tactical communication and vision for unmanned robots performing perimeter surveillance operations in outdoor environments. The robot's GPS errors are compensated by the reference station of a C2 (command and control) vehicle, and WiBro (Wireless Broadband) is used for communication between the two systems. In outdoor environments, GPS signals can easily be blocked by trees and buildings; in such environments, however, a vision system is very effective because many features are available. With a feature map of the operation environment, the robot can estimate its position by image matching and pose estimation. In the navigation system, the operation mode is therefore switched by a navigation manager according to the environmental conditions. The experimental results show that the unmanned robot can estimate its position very accurately in outdoor environments.
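
The mode-switching idea described in this abstract can be pictured with a minimal sketch; the class, function, and threshold values below are assumptions for illustration, not the paper's actual navigation manager.

```python
from enum import Enum

class NavMode(Enum):
    GPS_CORRECTED = 1   # GPS corrected by the C2 vehicle's reference station
    VISION = 2          # image matching against a prebuilt feature map

def select_mode(num_visible_satellites: int, num_matched_features: int) -> NavMode:
    """Switch the operation mode according to the environment conditions."""
    if num_visible_satellites >= 6:      # open sky: trust corrected GPS
        return NavMode.GPS_CORRECTED
    if num_matched_features >= 30:       # GPS blocked, but the scene is feature-rich
        return NavMode.VISION
    return NavMode.GPS_CORRECTED         # fall back to (degraded) GPS
```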

Collision Avoidance Using Omni Vision SLAM Based on Fisheye Image (어안 이미지 기반의 전방향 영상 SLAM을 이용한 충돌 회피)

  • Choi, Yun Won;Choi, Jeong Won;Im, Sung Gyu;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.22 no.3 / pp.210-216 / 2016
  • This paper presents a novel collision-avoidance technique for mobile robots based on omni-directional vision simultaneous localization and mapping (SLAM). The method estimates the avoidance path and speed of a robot from the location of an obstacle, which is detected using Lucas-Kanade optical flow in images obtained from fisheye cameras mounted on the robot. Conventional methods suggest avoidance paths by constructing an artificial force field around the obstacle found in the complete map obtained through SLAM; robots can also avoid obstacles using speed commands based on robot modeling and a curved movement path. Recent research has improved these approaches by optimizing the algorithms for actual robots, but robots that use omni-directional vision SLAM to acquire the surrounding information at once have been studied comparatively little. A robot running the proposed algorithm avoids obstacles according to the avoidance path estimated from the map obtained through omni-directional vision SLAM on fisheye images, and then returns to its original path. In particular, it avoids obstacles at various speeds and directions using acceleration components derived from motion information obtained by analyzing the surroundings of the obstacles. The experimental results confirm the reliability of the avoidance algorithm through a comparison between the position obtained by the proposed algorithm and the real position recorded while avoiding obstacles.
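
As a rough, hedged sketch of the obstacle-detection step named in this abstract (pyramidal Lucas-Kanade optical flow), the snippet below tracks corner features between two grayscale frames and keeps the points that moved more than a threshold; the fisheye unwarping, SLAM map, and avoidance-path planning are not reproduced here, and the parameter values are assumptions.

```python
import cv2
import numpy as np

def detect_obstacle_motion(prev_gray, curr_gray, flow_threshold=2.0):
    """Return image points whose optical-flow magnitude exceeds the threshold."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return np.empty((0, 2), dtype=np.float32)
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = status.ravel() == 1
    motion = np.linalg.norm(nxt[ok] - pts[ok], axis=2).ravel()
    return nxt[ok][motion > flow_threshold].reshape(-1, 2)
```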

Performance Analysis of Vision-based Positioning Assistance Algorithm (비전 기반 측위 보조 알고리즘의 성능 분석)

  • Park, Jong Soo;Lee, Yong;Kwon, Jay Hyoun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.3 / pp.101-108 / 2019
  • Due to recent improvements in computer processing speed and image processing technology, research is being actively carried out to combine camera information with existing GNSS (Global Navigation Satellite System) and dead reckoning. In this study, a vision-based positioning assistance algorithm was developed to estimate the distance to an object from stereo images. In addition, a GNSS/on-board vehicle sensor/vision-based positioning algorithm was developed by combining the vision-based positioning algorithm with the existing positioning algorithm. For the performance analysis, the velocity calculated from an actual driving test was used to correct the navigation solution, and simulation tests were performed to analyse the effect of velocity precision. The analysis confirms that position accuracy improves by about 4% when vision information is added, compared with the existing GNSS/on-board-based positioning algorithm.
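
A hedged illustration of the stereo-distance idea mentioned above: for a rectified stereo pair, depth follows from depth = f·B/disparity. The block-matching parameters and function below are assumptions for illustration, not the authors' implementation.

```python
import cv2
import numpy as np

def stereo_depth_map(left_gray, right_gray, focal_px, baseline_m):
    """Estimate a per-pixel depth map [m] from a rectified stereo pair."""
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    # OpenCV returns fixed-point disparities scaled by 16
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth = np.full_like(disparity, np.inf)      # unmatched pixels stay at infinity
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth
```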

Intelligent Monitoring System for Solitary Senior Citizens with Vision-Based Security Architecture (영상보안 구조 기반의 지능형 독거노인 모니터링 시스템)

  • Kim, Soohee;Jeong, Youngwoo;Jeong, Yue Ri;Lee, Seung Eun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.639-641 / 2022
  • With the aging of the population, much research on monitoring systems for solitary senior citizens is under way. In general, a monitoring system provides its service by processing vision data, sensor readings, and measurement values on a server. Design that considers data security is essential because the server-based structure carries a risk of data leakage. In this paper, we propose an intelligent monitoring system for solitary senior citizens with a vision-based security architecture. The proposed system protects privacy and ensures high security through an architecture that blocks communication between the camera module and the server by employing an edge AI module. The edge AI module was designed in Verilog HDL and verified by implementation on a Field Programmable Gate Array (FPGA). We tested the proposed system on 5,144 frames of data and demonstrated that a danger-detection signal is generated correctly when no human motion is detected for a certain period.
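
The edge AI module itself is a Verilog design on an FPGA; purely to illustrate the reported decision rule (raise a danger signal when no human motion is detected for a set period), here is a hypothetical Python sketch with assumed parameter values.

```python
def danger_signal(motion_flags, no_motion_limit=900):
    """motion_flags: per-frame booleans (True = human motion detected)."""
    quiet = 0
    for detected in motion_flags:
        quiet = 0 if detected else quiet + 1
        if quiet >= no_motion_limit:   # e.g. 900 frames ~ 30 s at 30 fps (assumed)
            return True
    return False
```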

Real-time geometry identification of moving ships by computer vision techniques in bridge area

  • Li, Shunlong;Guo, Yapeng;Xu, Yang;Li, Zhonglong
    • Smart Structures and Systems / v.23 no.4 / pp.359-371 / 2019
  • As part of a structural health monitoring system, the relative geometric relationship between ships and bridges has been recognized as important for bridge authorities and ship owners in avoiding ship-bridge collisions. This study proposes a novel computer vision method for the real-time identification of the geometric parameters of moving ships based on a single shot multibox detector (SSD), transfer learning, and monocular vision. The identification framework consists of a ship detection module (coarse scale) and a geometric parameter calculation module (fine scale). For ship detection, the SSD, a deep learning algorithm, is fine-tuned on ship image samples downloaded from the Internet to obtain rectangular regions of interest at the coarse scale. Subsequently, for the geometric parameter calculation, an accurate ship contour is created using morphological operations on the saturation channel of the hue, saturation, and value color space. Furthermore, a local coordinate system is constructed using a projective geometry transformation to calculate the geometric parameters of ships, such as width, length, height, location, and velocity. Application of the proposed method to in-situ video images, obtained from cameras mounted on the girder of the Wuhan Yangtze River Bridge above the shipping channel, confirmed the efficiency, accuracy, and effectiveness of the proposed method.
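
A hedged sketch of the fine-scale contour step described above (thresholding the saturation channel and cleaning it with morphological operations); the Otsu threshold and kernel size are assumptions, and the SSD detection and projective-geometry stages are omitted.

```python
import cv2

def ship_contour(roi_bgr):
    """Return the largest contour found in the saturation channel of the ROI."""
    hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)
    sat = hsv[:, :, 1]
    _, mask = cv2.threshold(sat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # remove speckle
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # fill small holes
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None
```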

Auto Parts Visual Inspection in Severe Changes in the Lighting Environment (조명의 변화가 심한 환경에서 자동차 부품 유무 비전검사 방법)

  • Kim, Giseok;Park, Yo Han;Park, Jong-Seop;Cho, Jae-Soo
    • Journal of Institute of Control, Robotics and Systems / v.21 no.12 / pp.1109-1114 / 2015
  • This paper presents an improved learning-based visual inspection method for auto parts under severe lighting changes. Automobile sunroof frames are produced automatically by robots on most production lines. In the sunroof frame manufacturing process, there is a quality problem in which some parts, such as bolts, are missing. Instead of manual sampling inspection using mechanical jig instruments, a learning-based machine vision system was proposed in previous research [1]. However, when applied to the actual sunroof frame production process, the inspection accuracy of that vision system drops considerably because of severe illumination changes. To cope with this variable environment, selective feature vectors and cascade classifiers are used for each auto part, and the inspection accuracy is further improved by re-learning on the misclassified data. The effectiveness of the proposed visual inspection method is verified through extensive experiments on a real sunroof production line.
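
A minimal sketch, not the authors' code, of a presence/absence check using an OpenCV cascade classifier trained for a single part; the cascade file name and detection parameters are placeholders.

```python
import cv2

def part_present(image_gray, cascade_path="bolt_cascade.xml"):
    """Return True if the trained cascade detects the part in the image."""
    cascade = cv2.CascadeClassifier(cascade_path)
    hits = cascade.detectMultiScale(image_gray, scaleFactor=1.1, minNeighbors=5)
    return len(hits) > 0
```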

Guidance Line Extraction Algorithm using Central Region Data of Crop for Vision Camera based Autonomous Robot in Paddy Field (비전 카메라 기반의 무논환경 자율주행 로봇을 위한 중심영역 추출 정보를 이용한 주행기준선 추출 알고리즘)

  • Choi, Keun Ha;Han, Sang Kwon;Park, Kwang-Ho;Kim, Kyung-Soo;Kim, Soohyun
    • The Journal of Korea Robotics Society / v.11 no.1 / pp.1-8 / 2016
  • In this paper, we propose a new guidance line extraction algorithm for a vision-camera-based autonomous agricultural robot in a paddy field. Finding the central point or region of a rice row is an important step in guidance line extraction. To improve the accuracy of the guidance line, we use the central-region data of the crop, exploiting the fact that the directions of the rice leaves converge toward the central area of the rice row. The guidance line is extracted from the intersection points of extended virtual lines using a modified robust regression. The extended virtual lines are obtained by extending each straight-line segment created on the edges of the rice plants in the image using the Hough transform. We also verified the accuracy of the proposed algorithm through experiments in a real wet paddy.
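
A hedged sketch of the line-extraction idea referenced above: detect rice-row segments with the probabilistic Hough transform and fit one guidance line through their midpoints. The paper's central-region extraction, virtual-line intersections, and modified robust regression are simplified here, and all parameter values are assumptions.

```python
import cv2
import numpy as np

def guidance_line(edge_image):
    """Fit a single guidance line (vx, vy, x0, y0) through detected row segments."""
    segments = cv2.HoughLinesP(edge_image, 1, np.pi / 180, threshold=50,
                               minLineLength=30, maxLineGap=10)
    if segments is None:
        return None
    mids = np.array([[(x1 + x2) / 2.0, (y1 + y2) / 2.0]
                     for x1, y1, x2, y2 in segments[:, 0]], dtype=np.float32)
    vx, vy, x0, y0 = cv2.fitLine(mids, cv2.DIST_HUBER, 0, 0.01, 0.01).ravel()
    return vx, vy, x0, y0
```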

Robot Vision to Audio Description Based on Deep Learning for Effective Human-Robot Interaction (효과적인 인간-로봇 상호작용을 위한 딥러닝 기반 로봇 비전 자연어 설명문 생성 및 발화 기술)

  • Park, Dongkeon;Kang, Kyeong-Min;Bae, Jin-Woo;Han, Ji-Hyeong
    • The Journal of Korea Robotics Society / v.14 no.1 / pp.22-30 / 2019
  • For effective human-robot interaction, robots not only need to understand the current situational context well, but also need to convey that understanding to the human participant efficiently. The most convenient way for a robot to deliver its understanding is to express it by voice in natural language. Recently, artificial intelligence for video understanding and natural language processing has developed very rapidly, especially based on deep learning. This paper therefore proposes a deep-learning-based method for turning robot vision into spoken audio descriptions. The applied model is a pipeline of two deep learning models: one generates a natural language sentence from the robot's vision, and the other generates voice from the generated sentence. We also conduct a real robot experiment to show the effectiveness of our method in human-robot interaction.
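
The two-stage pipeline described above (vision to sentence, sentence to voice) can be pictured with the hypothetical sketch below; `describe_frame` is a placeholder for whatever captioning network is used, and pyttsx3 merely stands in for the speech stage, not the system the authors built.

```python
import pyttsx3

def describe_frame(frame) -> str:
    """Placeholder: run an image-captioning model and return a sentence."""
    raise NotImplementedError("plug in a captioning model here")

def speak_scene(frame) -> None:
    sentence = describe_frame(frame)     # stage 1: vision -> natural language
    engine = pyttsx3.init()              # stage 2: natural language -> voice
    engine.say(sentence)
    engine.runAndWait()
```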

Recent Trends and Prospects of 3D Content Using Artificial Intelligence Technology (인공지능을 이용한 3D 콘텐츠 기술 동향 및 향후 전망)

  • Lee, S.W.;Hwang, B.W.;Lim, S.J.;Yoon, S.U.;Kim, T.J.;Kim, K.N.;Kim, D.H.;Park, C.J.
    • Electronics and Telecommunications Trends / v.34 no.4 / pp.15-22 / 2019
  • Recent technological advances in three-dimensional (3D) sensing devices and machine learning, such as deep learning, have enabled data-driven 3D applications. Research on artificial intelligence has developed rapidly over the past few years, and 3D deep learning has been introduced. This is the result of the availability of high-quality big data, increases in computing power, and the development of new algorithms; before the introduction of 3D deep learning, the main targets for deep learning were one-dimensional (1D) audio files and two-dimensional (2D) images. The research field of deep learning has extended from discriminative models, such as classification/segmentation/reconstruction models, to generative models, such as style transfer and the generation of non-existing data. Unlike 2D learning data, 3D learning data are not easy to acquire. Although low-cost 3D data acquisition sensors have become increasingly popular owing to advances in 3D vision technology, the generation and acquisition of 3D data are still very difficult, and even when 3D data can be acquired, post-processing remains a significant problem. Moreover, it is not easy to directly apply existing network models, such as convolutional networks, owing to the various ways in which 3D data are represented. In this paper, we summarize technological trends in AI-based 3D content generation.

A Study on Three-Dimensional Model Reconstruction Based on Laser-Vision Technology (레이저 비전 기술을 이용한 물체의 3D 모델 재구성 방법에 관한 연구)

  • Nguyen, Huu Cuong;Lee, Byung Ryong
    • Journal of the Korean Society for Precision Engineering / v.32 no.7 / pp.633-641 / 2015
  • In this study, we proposed a three-dimensional (3D) scanning system based on a laser-vision technique and a rotary mechanism for automatic 3D model reconstruction. The proposed scanning system consists of a laser projector, a camera, and a turntable. For laser-camera calibration, a new and simple method was proposed. 3D point cloud data of the scanned object's surface were collected by integrating the laser profiles extracted from laser stripe images at the corresponding rotation angles of the rotary mechanism. The problem of obscured laser profiles was solved by adding an additional camera at another viewpoint. From the collected 3D point cloud data, the 3D model of the scanned object was reconstructed based on a facet representation. The reconstructed 3D models demonstrated the effectiveness and applicability of the proposed 3D scanning system for 3D model-based applications.
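
As a hedged sketch of the point-cloud integration step, assuming each laser profile has already been triangulated into 3D points expressed in the turntable frame, the snippet below rotates every profile by its table angle into a common frame and merges them; the data layout and function name are assumptions.

```python
import numpy as np

def merge_profiles(profiles_by_angle):
    """profiles_by_angle: {angle_deg: (N, 3) array of points}; returns (M, 3) cloud."""
    cloud = []
    for angle_deg, pts in profiles_by_angle.items():
        a = np.radians(angle_deg)
        rot_z = np.array([[np.cos(a), -np.sin(a), 0.0],
                          [np.sin(a),  np.cos(a), 0.0],
                          [0.0,        0.0,       1.0]])
        cloud.append(pts @ rot_z.T)      # rotate the profile by the table angle
    return np.vstack(cloud)
```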