• Title/Summary/Keyword: technology vision

Multi-robot Mapping Using Omnidirectional-Vision SLAM Based on Fisheye Images

  • Choi, Yun-Won; Kwon, Kee-Koo; Lee, Soo-In; Choi, Jeong-Won; Lee, Suk-Gyu
    • ETRI Journal / v.36 no.6 / pp.913-923 / 2014
  • This paper proposes a global mapping algorithm for multiple robots based on omnidirectional-vision simultaneous localization and mapping (SLAM), using an object-extraction method built on Lucas-Kanade optical-flow motion detection and images obtained through fisheye lenses mounted on the robots. The multi-robot mapping algorithm draws a global map from the map data of the individual robots. Global mapping is time-consuming because map data must be exchanged among the robots while all areas are searched. An omnidirectional image sensor offers many advantages for object detection and mapping because it captures all of the information around a robot simultaneously. The computational cost of the correction algorithm is reduced compared with existing methods by correcting only the objects' feature points. The proposed algorithm has two steps: first, a local map is created for each robot based on omnidirectional-vision SLAM; second, a global map is generated by merging the individual maps from the multiple robots. The reliability of the proposed mapping algorithm is verified by comparing maps built with the proposed algorithm against the real maps.
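
For orientation, here is a minimal sketch of the object-extraction step the abstract describes, using OpenCV's pyramidal Lucas-Kanade tracker on a stationary or ego-motion-compensated view. The function name, feature counts, and motion threshold are illustrative assumptions, not the authors' implementation.

```python
# Sketch: extract candidate moving-object feature points between two frames
# with pyramidal Lucas-Kanade optical flow. Feature counts and the motion
# threshold are illustrative assumptions.
import cv2
import numpy as np

def moving_feature_points(prev_gray, curr_gray, motion_thresh=2.0):
    # Corner features in the previous frame.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return np.empty((0, 2), dtype=np.float32)
    # Track them into the current frame.
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = status.ravel() == 1
    old = pts[ok].reshape(-1, 2)
    new = nxt[ok].reshape(-1, 2)
    # Points whose displacement exceeds the threshold are treated as
    # belonging to moving objects rather than the static background.
    disp = np.linalg.norm(new - old, axis=1)
    return new[disp > motion_thresh]
```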

Reflectance estimation for infrared and visible image fusion

  • Gu, Yan; Yang, Feng; Zhao, Weijun; Guo, Yiliang; Min, Chaobo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.8 / pp.2749-2763 / 2021
  • The desired result of infrared (IR) and visible (VIS) image fusion should contain the textural details of the VIS image and the salient targets of the IR image. However, detail in the dark regions of a VIS image has low contrast and blurry edges, which degrades fusion performance. To resolve the problem of fuzzy details in dark regions, we propose a reflectance-estimation method for IR and VIS image fusion. To maintain and enhance details in these dark regions, a dark region approximation (DRA) is proposed to optimize the Retinex model. With the DRA-improved Retinex model, a quasi-Newton method is adopted to estimate the reflectance of the VIS image. The final fusion result is obtained by fusing the DRA-based reflectance of the VIS image with the IR image. Our method simultaneously retains the low-visibility details of VIS images and the high-contrast targets of IR images. Experimental results show that, compared with several advanced approaches, the proposed method is superior in detail preservation and visual quality.
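
As a rough illustration of reflectance estimation with a quasi-Newton solver, the sketch below decomposes a log-domain image under a simple smooth-illumination energy and minimizes it with L-BFGS-B. The energy, the weight `alpha`, and the function name are assumptions, not the paper's DRA-optimized model; it is a toy intended for small images.

```python
# Sketch: log-domain Retinex-style reflectance estimation, s = r + l,
# solved with a quasi-Newton method (L-BFGS-B). Toy energy, small images only.
import numpy as np
from scipy.optimize import minimize

def estimate_reflectance(vis, alpha=0.1):
    s = np.log1p(vis.astype(np.float64))  # log image, s = r + l

    def energy(l_flat):
        l = l_flat.reshape(s.shape)
        gx = np.diff(l, axis=1)           # illumination gradients
        gy = np.diff(l, axis=0)
        # Data term keeps l near s; smoothness term makes l a smooth base layer.
        return np.sum((l - s) ** 2) + alpha * (np.sum(gx ** 2) + np.sum(gy ** 2))

    def grad(l_flat):
        l = l_flat.reshape(s.shape)
        g = 2.0 * (l - s)
        gx = np.diff(l, axis=1)
        gy = np.diff(l, axis=0)
        g[:, :-1] -= 2.0 * alpha * gx
        g[:, 1:] += 2.0 * alpha * gx
        g[:-1, :] -= 2.0 * alpha * gy
        g[1:, :] += 2.0 * alpha * gy
        return g.ravel()

    res = minimize(energy, s.ravel().copy(), jac=grad, method="L-BFGS-B")
    l_hat = res.x.reshape(s.shape)        # estimated log illumination
    return s - l_hat                      # log reflectance r = s - l
```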

Real-time geometry identification of moving ships by computer vision techniques in bridge area

  • Li, Shunlong; Guo, Yapeng; Xu, Yang; Li, Zhonglong
    • Smart Structures and Systems / v.23 no.4 / pp.359-371 / 2019
  • As part of a structural health monitoring system, the relative geometric relationship between a ship and a bridge has been recognized as important for bridge authorities and ship owners to avoid ship-bridge collisions. This study proposes a novel computer vision method for the real-time identification of the geometric parameters of moving ships, based on a single-shot multibox detector (SSD) with transfer learning and monocular vision. The identification framework consists of a ship-detection (coarse-scale) module and a geometric-parameter-calculation (fine-scale) module. For ship detection, the SSD, a deep learning algorithm, is fine-tuned on ship images downloaded from the Internet to obtain rectangular regions of interest at the coarse scale. For the geometric-parameter calculation, an accurate ship contour is then extracted using morphological operations on the saturation channel in hue-saturation-value (HSV) color space. A local coordinate system is constructed using a projective geometry transformation to calculate geometric parameters of the ships such as width, length, height, position, and velocity. Application of the proposed method to in situ video images, obtained from cameras mounted on the girder of the Wuhan Yangtze River Bridge above the shipping channel, confirms its efficiency, accuracy, and effectiveness.
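
A minimal sketch of the fine-scale step as the abstract describes it (saturation-channel binarization, morphological clean-up, contour extraction), using standard OpenCV calls; the Otsu threshold and kernel size are assumptions, not the paper's tuned parameters.

```python
# Sketch: ship contour from the HSV saturation channel of a detected ROI.
import cv2

def ship_contour(roi_bgr):
    hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)
    sat = hsv[:, :, 1]                              # saturation channel
    # Binarize, then close small gaps and remove speckle noise.
    _, mask = cv2.threshold(sat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    # Take the largest external contour as the ship outline.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None
```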

Particle Filters using Gaussian Mixture Models for Vision-Based Navigation (영상 기반 항법을 위한 가우시안 혼합 모델 기반 파티클 필터)

  • Hong, Kyungwoo; Kim, Sungjoong; Bang, Hyochoong; Kim, Jin-Won; Seo, Ilwon; Pak, Chang-Ho
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.47 no.4 / pp.274-282 / 2019
  • Vision-based navigation of unmanned aerial vehicles is a significant technology that can compensate for the vulnerability of the widely used GPS/INS integrated navigation system. However, existing image matching algorithms are not suitable for matching aerial images against a database. For this reason, this paper proposes particle filters using Gaussian mixture models to handle the matching between the aerial image and the database for vision-based navigation. The particle filters estimate the position of the aircraft by comparing the correspondences between the aerial image and the database under a Gaussian-mixture-model assumption. Finally, a Monte Carlo simulation is presented to demonstrate the performance of the proposed method.
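
The sketch below shows one way a particle filter can weight particles with a Gaussian-mixture measurement likelihood over image/database correspondences, followed by systematic resampling. The mixture form, the shared isotropic `sigma`, and the resampling choice are assumptions, not the authors' formulation.

```python
# Sketch: one particle-filter measurement update with a GMM likelihood.
import numpy as np

def pf_update(particles, weights, correspondences, mix_w, sigma=15.0):
    # particles: (N, 2) candidate aircraft positions
    # correspondences: (K, 2) positions implied by matched features
    # mix_w: (K,) mixture weights summing to 1
    diff = particles[:, None, :] - correspondences[None, :, :]   # (N, K, 2)
    sq = np.sum(diff ** 2, axis=2)
    comp = np.exp(-0.5 * sq / sigma ** 2) / (2 * np.pi * sigma ** 2)
    like = comp @ mix_w                    # GMM likelihood per particle
    weights = weights * like
    weights /= weights.sum()
    # Systematic resampling to avoid weight degeneracy.
    edges = np.cumsum(weights)
    u = (np.random.rand() + np.arange(len(weights))) / len(weights)
    idx = np.minimum(np.searchsorted(edges, u), len(weights) - 1)
    return particles[idx], np.full(len(weights), 1.0 / len(weights))
```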

Hardware Design of VLIW coprocessor for Computer Vision Application (컴퓨터 비전 응용을 위한 VLIW 보조프로세서의 하드웨어 설계)

  • Choi, Byeong-Yoon
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.9 / pp.2189-2196 / 2014
  • In this paper, a VLIW (Very Long Instruction Word) vision coprocessor that can efficiently accelerate automotive computer vision algorithms is designed. The VLIW coprocessor executes four instructions per clock cycle through an 8-stage pipelined structure and provides 36 integer and floating-point instructions to accelerate computer vision algorithms for pedestrian detection. The processor achieves an operating frequency of about 300 MHz with about 210,900 gates in a 45 nm CMOS technology, and its estimated performance is 1.2 GOPS (giga-operations per second). A vision system composed of a vision primitive engine and eight VLIW coprocessors can execute pedestrian detection at 25-29 frames per second (FPS). Because the VLIW coprocessor has a high detection rate and a loosely coupled interface with the host processor, it can be efficiently applied to a wide range of vision applications.
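
The quoted 1.2 GOPS figure is simply the issue width times the clock rate; a one-line sanity check:

```python
# 4 instructions issued per cycle at a 300 MHz clock.
issue_width, clock_hz = 4, 300e6
print(f"{issue_width * clock_hz / 1e9:.1f} GOPS")  # -> 1.2 GOPS
```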

Human Visual Ability Enhancement Technology Trends and Development Prospects (인간 시각 능력 향상 기술 동향 및 발전 전망)

  • C.Y. Jeong; M.S. Kim; S.R. Yun; K.D. Moon; H.C. Shin
    • Electronics and Telecommunications Trends / v.39 no.4 / pp.63-72 / 2024
  • Vision is a process in which the brain and eyes collaborate to enable sight by analyzing light reflected from objects. Vision is also the most crucial of the five basic human senses for recognizing environments. The eyes contain 70% of the sensory receptors in the body, and 90% of the information processed by the brain is visual. Currently, approximately 2.2 billion people worldwide have vision impairments, and a recent study estimated the global economic productivity losses due to vision impairment and blindness at approximately $410 billion. Additionally, as people age, the eye's ability to focus declines, leading to presbyopia, which typically begins in the 40s. Since people rely heavily on vision in their daily lives, vision problems can significantly reduce quality of life. Approaches to solving vision problems can be broadly categorized into visual prostheses requiring surgery, sensory substitution based on neuroplasticity, and smart glasses for presbyopia. We present the trends and future development prospects for three key areas of research: visual prostheses, visual substitution technologies, and smart glasses technologies. These areas are being explored with the aim of addressing visual impairments and blindness.

Unusual Motion Detection for Vision-Based Driver Assistance

  • Fu, Li-Hua; Wu, Wei-Dong; Zhang, Yu; Klette, Reinhard
    • International Journal of Fuzzy Logic and Intelligent Systems / v.15 no.1 / pp.27-34 / 2015
  • For a vision-based driver assistance system, unusual motion detection is one of the important means of preventing accidents. In this paper, we propose a real-time unusual-motion-detection model comprising two stages: salient region detection and unusual motion detection. In the salient-region-detection stage, we present an improved temporal attention model. In the unusual-motion-detection stage, three factors (speed, motion direction, and distance) are extracted to detect unusual motion. A series of experiments demonstrates the proposed method and shows the feasibility of the model.
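
As a rough sketch of how the three named factors might be scored from a tracked object's trajectory; the thresholds, the normalization, and the crisp weighted combination are assumptions and a plain stand-in for the paper's fuzzy formulation, which is not reproduced here.

```python
# Sketch: crisp unusual-motion score from speed, direction, and distance.
import numpy as np

def unusual_score(track, ego_pos, speed_max=40.0, dist_min=20.0):
    # track: (T, 2) image positions of one object over T >= 3 frames
    v = np.diff(track, axis=0)                   # frame-to-frame motion
    speed = np.linalg.norm(v, axis=1).mean()
    # Direction instability: 1 - mean cosine between consecutive motion vectors.
    cos = np.sum(v[1:] * v[:-1], axis=1) / (
        np.linalg.norm(v[1:], axis=1) * np.linalg.norm(v[:-1], axis=1) + 1e-9)
    direction = 1.0 - cos.mean()                 # in [0, 2]
    dist = np.linalg.norm(track[-1] - ego_pos)   # distance to ego vehicle
    # Equal-weight combination in [0, 1]; higher means more unusual.
    return (min(speed / speed_max, 1.0)
            + direction / 2.0
            + min(dist_min / (dist + 1e-9), 1.0)) / 3.0
```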

Autonomous Omni-Directional Cleaning Robot System Design

  • Choi, Jun-Yong; Ock, Seung-Ho; Kim, San; Kim, Dong-Hwan
    • Institute of Control, Robotics and Systems Conference Proceedings / 2005.06a / pp.2019-2023 / 2005
  • In this paper, an autonomous omni-directional cleaning robot that recognizes obstacles and its battery charger is introduced. It utilizes robot vision, ultrasonic sensors, and infrared sensor information together with appropriate algorithms. Three omni-directional wheels allow the robot to move in any direction, enabling faster maneuvering than a simple tracked robot. The robot system transfers commands and image data through Bluetooth wireless modules so that it can be operated remotely. The robot vision, combined with the sensor data, enables autonomous behavior. Autonomous battery-charger searching is implemented using map building together with camera and sensor information, which overcomes the error caused by wheel slip.
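
As a side note on the drive geometry, the standard inverse-kinematics model for three omni-directional wheels spaced 120° apart maps a desired body velocity (vx, vy) and yaw rate w to individual wheel speeds. A minimal sketch, where the mounting angles and the center-to-wheel distance R are assumptions rather than the paper's platform parameters:

```python
# Sketch: inverse kinematics for a three-omni-wheel base.
import numpy as np

def wheel_speeds(vx, vy, w, R=0.15):
    # R: distance from the robot center to each wheel [m]
    angles = np.radians([0.0, 120.0, 240.0])   # wheel mounting angles
    # Each wheel drives tangentially to the robot body, so its speed is the
    # body velocity projected onto that tangent plus the rotational term.
    return np.array([-np.sin(a) * vx + np.cos(a) * vy + R * w
                     for a in angles])
```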

Control of Visual Tracking System with a Random Time Delay (랜덤한 시간 지연 요소를 갖는 영상 추적 시스템의 제어)

  • Oh, Nam-Kyu; Choi, Goon-Ho
    • Journal of the Semiconductor & Display Technology / v.10 no.3 / pp.21-28 / 2011
  • In recent years, owing to advances in image processing technology, research on building control systems with vision sensors has been stimulated. However, a random time delay must be considered, because the time needed to obtain an image-processing result varies in such a system. This delay is an obstacle to visual tracking control in a real system. In this paper, two vision controllers were implemented, the first built around a PID controller and the second around a Smith predictor, and the possibility of overcoming the random time delay in a visual tracking system was shown. A number of simulations and experiments were carried out to show the validity of this study.
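
A minimal sketch of a Smith predictor wrapped around a PID loop for a first-order plant with a nominal delay of `delay` samples (delay >= 1). The plant model, gains, and class name are illustrative assumptions; note that the paper addresses a random delay, which a predictor with a fixed nominal delay only approximates.

```python
# Sketch: discrete Smith predictor + PID for a first-order delayed plant.
from collections import deque

class SmithPredictorPID:
    def __init__(self, kp, ki, kd, a, b, delay, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.a, self.b = a, b                # delay-free model: x+ = a*x + b*u
        self.x_fast = 0.0                    # delay-free model state
        self.buf = deque([0.0] * delay, maxlen=delay)  # delayed model outputs
        self.i_term, self.prev_e = 0.0, 0.0

    def step(self, ref, y_meas, u_prev):
        # Advance the internal model with the last applied control input.
        self.x_fast = self.a * self.x_fast + self.b * u_prev
        y_delayed_model = self.buf[0]        # model output d samples ago
        self.buf.append(self.x_fast)
        # Predicted undelayed output = measurement + (fast model - delayed model).
        y_pred = y_meas + (self.x_fast - y_delayed_model)
        e = ref - y_pred
        self.i_term += e * self.dt
        d_term = (e - self.prev_e) / self.dt
        self.prev_e = e
        return self.kp * e + self.ki * self.i_term + self.kd * d_term
```

In use, each sample period the caller computes `u = ctrl.step(ref, y, u)` and applies `u` to the plant; the PID then acts on the predicted undelayed output instead of the stale measurement.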

Position Control of an Object Using Vision Sensor (비전 센서를 이용한 물체의 위치 제어)

  • Ha, Eun-Hyeon; Choi, Goon-Ho
    • Journal of the Semiconductor & Display Technology / v.10 no.2 / pp.49-56 / 2011
  • In recent years, owing to advances in image processing technology, research on building control systems with vision sensors has been stimulated. However, the time delay must be considered, because it takes time to obtain the image-processing result in such a system. This delay is an obstacle to real-time control. In this paper, the locations of two objects are recognized, using a pattern matching technique, from a single image acquired by a camera, and are fed to a position control system as feedback data. The possibility of overcoming the time-delay problem with a PID controller was also shown. A number of experiments were carried out to show the validity of this study.
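
A minimal sketch of the feedback path the abstract describes: normalized cross-correlation template matching to locate an object, with the resulting position error fed to a PID controller. The gains, sample time, and helper names are assumptions, not the paper's implementation.

```python
# Sketch: template-matching position measurement + PID feedback.
import cv2

def object_position(frame_gray, template_gray):
    # Normalized cross-correlation; the best-match location is the object.
    res = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(res)
    h, w = template_gray.shape
    return (max_loc[0] + w // 2, max_loc[1] + h // 2)  # object center (x, y)

class PID:
    def __init__(self, kp=0.8, ki=0.05, kd=0.1, dt=0.033):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.i, self.prev = 0.0, 0.0

    def step(self, error):
        self.i += error * self.dt
        d = (error - self.prev) / self.dt
        self.prev = error
        return self.kp * error + self.ki * self.i + self.kd * d
```

Per frame, the controller would be driven with the error between the target position and `object_position(...)`, one PID instance per axis.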