• Title/Summary/Keyword: vision-based technology

A Study on Public Library Book Location Guidance System based on AI Vision Sensor

  • Soyoung Kim;Heesun Kim
    • International Journal of Internet, Broadcasting and Communication, v.16 no.3, pp.253-261, 2024
  • The role of the library is that of a public institution providing academic information to a variety of people, including students, the general public, and researchers. As the importance of lifelong education grows, libraries are evolving beyond simply storing and lending materials into complex cultural spaces that share knowledge and information through educational programs and cultural events. One of the problems library users face is locating the books they want to borrow. Location errors arise from delays in updating the library database for borrowed books, incorrect labeling, and books temporarily placed in other locations, and the biggest consequence is that users spend a long time searching for the books they want. In this paper, we propose a system that visually indicates the location of books in real time using an AI vision sensor and LEDs. The system generates a QR code containing the call number of the borrowed book; when the AI vision sensor recognizes this QR code, the exact location of the book is indicated with an LED so that users can find it easily. We believe the AI vision sensor-based book location guidance system dramatically improves book search and management efficiency, and this technology is expected to have great potential not only in libraries and bookstores but also in a variety of other fields.
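
For illustration, a minimal Python sketch of the QR-code encode/decode step the abstract describes, assuming the `qrcode` package and OpenCV as implementation choices (the paper does not specify libraries; the call number and the LED call are hypothetical):

```python
import io

import cv2
import numpy as np
import qrcode

call_number = "004.6-K49s"  # hypothetical call number for a borrowed book
buf = io.BytesIO()
qrcode.make(call_number).save(buf)  # encode the call number as a QR code (PNG)
frame = cv2.imdecode(np.frombuffer(buf.getvalue(), np.uint8), cv2.IMREAD_COLOR)

# Vision sensor side: detect and decode the QR code from a camera frame,
# then drive the shelf LED (stubbed here with a print) to guide the user.
decoded, _, _ = cv2.QRCodeDetector().detectAndDecode(frame)
if decoded:
    print(f"Turn on shelf LED for call number {decoded}")
```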

Integrated System for Autonomous Proximity Operations and Docking

  • Lee, Dae-Ro;Pernicka, Henry
    • International Journal of Aeronautical and Space Sciences, v.12 no.1, pp.43-56, 2011
  • An integrated guidance, navigation, and control (GNC) system for autonomous proximity operations and docking of two spacecraft was developed. Position maneuvers were determined by integrating the state-dependent Riccati equation, formulated from nonlinear relative motion dynamics, with relative navigation using a rendezvous laser vision (lidar) system and a vision sensor system. In the vision sensor system, a switch between sensors was made along the approach phase in order to provide continuously effective navigation. As an extension of the rendezvous laser vision system, an automated terminal guidance scheme based on the Clohessy-Wiltshire state transition matrix was used to formulate a "V-bar hopping approach" reference trajectory. A proximity operations strategy was then adapted from the approach strategy used with the automated transfer vehicle. Attitude maneuvers, determined from a linear quadratic Gaussian-type controller with quaternion-based attitude estimation using star trackers or a vision sensor system, provided precise attitude control and robustness under uncertainties in the moments of inertia and external disturbances. These functions were integrated into an autonomous GNC system that can perform proximity operations and meet all conditions for successful docking. A six-degree-of-freedom simulation was used to demonstrate the effectiveness of the integrated system.
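
The terminal guidance scheme rests on the Clohessy-Wiltshire state transition matrix, which is standard; a minimal Python sketch follows, assuming the usual axis convention (x radial, y along-track, z cross-track) and a hypothetical sample mean motion, not the paper's code:

```python
import numpy as np

def cw_stm(n: float, t: float) -> np.ndarray:
    """6x6 Clohessy-Wiltshire STM for the state [x, y, z, vx, vy, vz]."""
    s, c = np.sin(n * t), np.cos(n * t)
    return np.block([
        [np.array([[4 - 3*c,     0, 0],
                   [6*(s - n*t), 1, 0],
                   [0,           0, c]]),
         np.array([[s/n,         2*(1 - c)/n,     0],
                   [2*(c - 1)/n, (4*s - 3*n*t)/n, 0],
                   [0,           0,               s/n]])],
        [np.array([[3*n*s,       0, 0],
                   [6*n*(c - 1), 0, 0],
                   [0,           0, -n*s]]),
         np.array([[c,    2*s,     0],
                   [-2*s, 4*c - 3, 0],
                   [0,    0,       c]])],
    ])

# Propagate a 100 m V-bar offset over 60 s (ISS-like mean motion, assumed)
n = 0.0011  # rad/s
state = np.array([0.0, -100.0, 0.0, 0.0, 0.0, 0.0])
print(cw_stm(n, 60.0) @ state)
```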

A Study on the Robot Vision Control Schemes of N-R and EKF Methods for Tracking the Moving Targets (이동 타겟 추적을 위한 N-R과 EKF방법의 로봇비젼제어기법에 관한 연구)

  • Hong, Sung-Mun;Jang, Wan-Shik;Kim, Jae-Meung
    • Journal of the Korean Society of Manufacturing Technology Engineers, v.23 no.5, pp.485-497, 2014
  • This paper presents robot vision control schemes based on the Newton-Raphson (N-R) and extended Kalman filter (EKF) methods for tracking moving targets. The vision system model used in this study involves six camera parameters: some represent the uncertainty in the camera's orientation and focal length, while the others represent the unknown relative position between the camera and the robot. Both the N-R and EKF methods are employed to estimate the six camera parameters. Based on these six parameters, estimated using three cameras, the robot's joint angles with respect to the moving targets are computed using both methods. The two robot vision control schemes are tested experimentally by tracking a moving target, and the results are compared to evaluate their strengths and weaknesses.
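
As a sketch of the N-R side of such an estimator, here is a generic Newton-Raphson/Gauss-Newton correction with a finite-difference Jacobian; the projection model below is a toy stand-in, not the paper's six-parameter vision system model:

```python
import numpy as np

def numerical_jacobian(f, theta, eps=1e-6):
    """Finite-difference Jacobian of the measurement model f at theta."""
    y0 = f(theta)
    J = np.zeros((y0.size, theta.size))
    for i in range(theta.size):
        d = np.zeros_like(theta)
        d[i] = eps
        J[:, i] = (f(theta + d) - y0) / eps
    return J

def nr_step(f, theta, y_meas):
    """One N-R/Gauss-Newton correction of the parameters toward the measurements."""
    J = numerical_jacobian(f, theta)
    return theta + np.linalg.lstsq(J, y_meas - f(theta), rcond=None)[0]

# Toy two-parameter model (stand-in for the six-parameter camera model)
f = lambda th: np.array([th[0] + th[1], 2.0 * th[0], th[1] ** 2])
theta = np.array([1.0, 1.0])
for _ in range(10):
    theta = nr_step(f, theta, y_meas=np.array([3.0, 2.0, 4.0]))
print(theta)  # converges to approximately [1.0, 2.0]
```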

A vision based people tracking and following for mobile robots using CAMSHIFT and KLT feature tracker (캠시프트와 KLT특징 추적 알고리즘을 융합한 모바일 로봇의 영상기반 사람추적 및 추종)

  • Lee, S.J.;Won, Mooncheol
    • Journal of Korea Multimedia Society, v.17 no.7, pp.787-796, 2014
  • Many mobile robot navigation methods utilize laser scanners, ultrasonic sensors, vision cameras, and so on for detecting obstacles and following paths. Humans, however, navigate using only vision (i.e., their eyes). In this paper, we study a mobile robot control method based on camera vision alone. A Gaussian mixture model and a shadow removal technique are used to separate the foreground from the background in the camera image. The mobile robot follows a person using a combination of the CAMSHIFT and KLT feature tracker algorithms applied to the foreground information. The algorithm is verified by experiments in which a person is tracked and followed by a robot in a hallway.
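
A rough OpenCV sketch of the front half of this pipeline, assuming MOG2 as the Gaussian-mixture background model (using its built-in shadow labeling for removal) and CAMSHIFT on a hue back-projection of the foreground; the KLT fusion, the initial window, and the robot command are omitted or hypothetical:

```python
import cv2

cap = cv2.VideoCapture(0)
bg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
track_window = (200, 150, 80, 160)  # hypothetical initial person box (x, y, w, h)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

for _ in range(300):  # process a few hundred frames, then stop
    ok, frame = cap.read()
    if not ok:
        break
    fg = bg.apply(frame)
    fg[fg == 127] = 0  # MOG2 labels shadow pixels 127; drop them from the mask
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], fg, [16], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    _, track_window = cv2.CamShift(backproj, track_window, term)
    print("person window:", track_window)  # a heading command would use this
cap.release()
```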

Image-Based Visual Servoing Control of a SCARA Robot

  • Han, Sung-Hyun;Lee, Man-Hyung;Hashimoto, Hideki
    • Journal of Mechanical Science and Technology, v.14 no.7, pp.782-788, 2000
  • In this paper, we present a new approach to visual feedback control using image-based visual servoing with stereo vision. To control the position and orientation of a robot with respect to an object, a new technique based on binocular stereo vision is proposed. The stereo vision enables us to calculate an exact image Jacobian not only around a desired location but also at other locations. The suggested technique can guide a robot manipulator to the desired location without a priori knowledge, such as the relative distance to the desired location or a model of the object, even when the initial positioning error is large. This paper describes a model of the stereo vision system and how to generate feedback commands. The performance of the proposed visual servoing system is illustrated by experimental results and compared with that of conventional control methods on an assembly robot.
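
For reference, the classical image-based visual servoing step such schemes build on, sketched in Python with the standard monocular point-feature interaction matrix; the paper's stereo image Jacobian is not reproduced here, and the feature points and depths below are assumptions:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """2x6 image Jacobian for a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1/Z,  0,    x/Z, x*y,        -(1 + x*x), y],
        [0,    -1/Z,  y/Z, 1 + y*y,    -x*y,      -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera velocity command v = -gain * pinv(L) * (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Hypothetical two-feature example at 1 m depth
v = ibvs_velocity([(0.1, 0.0), (-0.1, 0.0)], [(0.0, 0.0), (0.0, 0.0)], [1.0, 1.0])
print(v)  # 6-vector: translational and angular camera velocity
```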

Optical Flow Based Collision Avoidance of Multi-Rotor UAVs in Urban Environments

  • Yoo, Dong-Wan;Won, Dae-Yeon;Tahk, Min-Jea
    • International Journal of Aeronautical and Space Sciences, v.12 no.3, pp.252-259, 2011
  • This paper focuses on dynamic modeling, control system design, and vision-based collision avoidance for multi-rotor unmanned aerial vehicles (UAVs). Multi-rotor UAVs are rotary-wing UAVs with multiple rotors. They can be utilized in various military situations, such as surveillance and reconnaissance, and for obtaining visual information from steep terrain or disaster sites. This paper introduces a quad-rotor model together with its control system, which consists of a proportional-integral-derivative controller and a vision-based collision avoidance control system. For a UAV to navigate safely in places with many obstacles, such as buildings and offices, a collision avoidance algorithm covering obstacle detection, avoidance maneuvering, and related functions must be installed in the UAV's hardware. This paper introduces the optical flow method, one of the vision-based collision avoidance techniques, and describes collision avoidance simulations of a multi-rotor UAV in various virtual environments to demonstrate its avoidance performance.
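
A simplified sketch of a flow-balance avoidance rule in this spirit: compute dense optical flow and yaw away from the image half with the larger flow magnitude. The Farneback parameters and the command convention are assumptions; the paper's quad-rotor dynamics and PID loops are not shown:

```python
import cv2
import numpy as np

def avoidance_command(prev_gray, gray):
    """Return a yaw command in [-1, 1]: positive steers right, negative left."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    w = mag.shape[1] // 2
    left, right = mag[:, :w].mean(), mag[:, w:].mean()
    # Larger flow on the left implies a looming obstacle there: yaw right.
    return (left - right) / (left + right + 1e-9)
```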

Development of a Ubiquitous Vision System for Location-awareness of Multiple Targets by a Matching Technique for the Identity of a Target; a New Approach

  • Kim, Chi-Ho;You, Bum-Jae;Kim, Hag-Bae
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference, 2005.06a, pp.68-73, 2005
  • Various techniques have been proposed for detecting and tracking targets in real-world computer vision systems, e.g., visual surveillance systems and intelligent transport systems (ITSs). In particular, a distributed vision system is required to realize these techniques over a wide area. In this paper, we develop a ubiquitous vision system for location awareness of multiple targets. Each vision sensor composing the system can perform exact segmentation of a target using color and motion information and can visually track multiple targets in real time. We construct the ubiquitous vision system as a multi-agent system by regarding each vision sensor as an agent (a vision agent), and we solve the problem of matching a target's identity during handover between vision agents with a protocol-based approach. For this, we propose the identified contract net (ICN) protocol. The ICN protocol is independent of the number of vision agents and needs no calibration between them, which improves the speed, scalability, and modularity of the system. We apply the ICN protocol to the ubiquitous vision system we constructed; in several experiments, the system produced reliable results and the ICN protocol operated successfully.
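
A toy Python sketch in the spirit of the contract-net style handover described above: a manager agent announces a departing target's appearance signature and a contractor agent bids with its match score. The class names, message format, and histogram-intersection score are invented for illustration and are not the ICN protocol's actual specification:

```python
import numpy as np

class VisionAgent:
    def __init__(self, name):
        self.name = name
        self.tracks = {}  # track_id -> appearance signature (color histogram)

    def announce(self, track_id):
        """Manager side: broadcast the identity signature of a leaving target."""
        return {"task": "handover", "id": track_id,
                "signature": self.tracks[track_id]}

    def bid(self, announcement, candidate_hist):
        """Contractor side: score a local candidate against the announced identity."""
        a, b = announcement["signature"], candidate_hist
        score = np.minimum(a, b).sum() / max(a.sum(), 1e-9)  # histogram intersection
        return {"agent": self.name, "id": announcement["id"], "score": float(score)}

a, b = VisionAgent("agent_A"), VisionAgent("agent_B")
a.tracks["T1"] = np.array([0.2, 0.5, 0.3])
print(b.bid(a.announce("T1"), np.array([0.25, 0.45, 0.30])))
```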

Vision-based Obstacle State Estimation and Collision Prediction using LSM and CPA for UAV Autonomous Landing (무인항공기의 자동 착륙을 위한 LSM 및 CPA를 활용한 영상 기반 장애물 상태 추정 및 충돌 예측)

  • Seongbong Lee;Cheonman Park;Hyeji Kim;Dongjin Lee
    • Journal of Advanced Navigation Technology, v.25 no.6, pp.485-492, 2021
  • Vision-based autonomous precision landing of a UAV requires precise position estimation and landing guidance. For safe landing, the system must also judge the safety of the landing point against ground obstacles and guide the landing only when safety is ensured. In this paper, we propose vision-based navigation and an algorithm for determining the safety of the landing point to perform autonomous precision landings. For vision-based navigation, a CNN is used to detect the landing pad, and the detection information is used to derive an integrated navigation solution; a Kalman filter is also designed and applied to improve position estimation performance. To determine the safety of the landing point, obstacles are detected and their positions estimated in the same manner, and the speed of each obstacle is estimated using the least squares method (LSM). Whether a collision with an obstacle will occur is determined from the closest point of approach (CPA), calculated using the estimated state of the obstacle. Finally, we perform flight tests to verify the proposed algorithm.
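
A compact sketch of the two named steps, assuming LSM means a least-squares line fit over recent obstacle positions and CPA the closest point of approach; the synthetic track, the collision threshold, and the frame convention (UAV at the origin) are assumptions:

```python
import numpy as np

def lsm_velocity(times, positions):
    """Fit p(t) = p0 + v*t by least squares; return the velocity v."""
    A = np.column_stack([np.ones_like(times), times])
    coef, *_ = np.linalg.lstsq(A, positions, rcond=None)
    return coef[1]

def cpa(rel_pos, rel_vel):
    """Time and miss distance at the closest point of approach."""
    t = -rel_pos @ rel_vel / max(rel_vel @ rel_vel, 1e-9)
    t = max(t, 0.0)  # CPA in the past means the obstacle is diverging
    return t, np.linalg.norm(rel_pos + rel_vel * t)

t = np.array([0.0, 0.1, 0.2, 0.3])
p = np.array([[5.0, 0.0], [4.6, 0.1], [4.2, 0.2], [3.8, 0.3]])  # synthetic track (m)
v = lsm_velocity(t, p)
t_cpa, d_cpa = cpa(p[-1], v)  # UAV assumed at the origin and hovering
print(f"collision risk: {d_cpa < 1.0} (miss {d_cpa:.2f} m at t+{t_cpa:.1f} s)")
```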

Light-Adaptive Vision System for Remote Surveillance Using an Edge Detection Vision Chip

  • Choi, Kyung-Hwa;Jo, Sung-Hyun;Seo, Sang-Ho;Shin, Jang-Kyoo
    • Journal of Sensor Science and Technology, v.20 no.3, pp.162-167, 2011
  • In this paper, we propose a vision system using a field-programmable gate array (FPGA) and a smart vision chip. The output of the vision chip varies with illumination conditions, which makes the chip suitable for surveillance in dynamic environments. However, because the output swing of the smart vision chip is too small for the FPGA to reliably confirm the warning signal, a modification was needed to obtain a reliable signal. The proposed system is based on the transmission control protocol/internet protocol (TCP/IP), which enables monitoring from a remote place. The warning signal indicates that an object is too near.
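
A bare-bones Python sketch of the TCP/IP leg of such a system: push a proximity warning from the FPGA-side host to a remote monitor. The host, port, threshold, and message format are placeholders, and the vision chip/FPGA interface itself is not shown:

```python
import socket

def send_warning(edge_activity: float, threshold: float = 0.8,
                 host: str = "192.168.0.10", port: int = 5000) -> None:
    """Raise a remote alarm when the edge output indicates a near object."""
    if edge_activity < threshold:
        return  # nothing near enough to report
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall(f"WARN object_near activity={edge_activity:.2f}\n".encode())
```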

Real-time Robotic Vision Control Scheme Using Optimal Weighting Matrix for Slender Bar Placement Task (얇은 막대 배치작업을 위한 최적의 가중치 행렬을 사용한 실시간 로봇 비젼 제어기법)

  • Jang, Min Woo;Kim, Jae Myung;Jang, Wan Shik
    • Journal of the Korean Society of Manufacturing Technology Engineers, v.26 no.1, pp.50-58, 2017
  • This paper proposes a real-time robotic vision control scheme that uses a weighting matrix to efficiently process the vision data obtained while the robot moves toward a target. The scheme is based on a vision system model that, in contrast to previous studies, can actively accommodate changes in camera parameters and robot position. The vision control algorithm comprises parameter estimation, joint angle estimation, and weighting matrix models. To demonstrate the effectiveness of the proposed control scheme, the vision data obtained while the camera moves toward the target are processed both without and with the weighting matrix, and the position accuracy of the two cases is compared experimentally on a slender-bar placement task.
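
A schematic weighted least-squares update suggesting how a weighting matrix can emphasize more reliable (e.g., more recent) vision data; the exponential weighting, forgetting factor, and dimensions below are assumptions, not the paper's estimator:

```python
import numpy as np

def weighted_ls(J, residuals, weights):
    """Solve min ||W^(1/2) (J d - r)||^2 for the parameter update d."""
    W = np.diag(weights)
    return np.linalg.solve(J.T @ W @ J, J.T @ W @ residuals)

# Hypothetical recency weights: the newest of k samples counts most
k, lam = 8, 0.7
weights = lam ** np.arange(k - 1, -1, -1)  # assumed forgetting factor

rng = np.random.default_rng(0)
J = rng.normal(size=(k, 3))                # 8 stacked measurements, 3 parameters
r = J @ np.array([0.1, -0.2, 0.05])        # residuals consistent with a known update
print(weighted_ls(J, r, weights))          # recovers [0.1, -0.2, 0.05]
```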