• Title/Abstract/Keyword: Vision Based System

Search results: 1,699 items (processing time 0.029 s)

Vision-based multipoint measurement systems for structural in-plane and out-of-plane movements including twisting rotation

  • Lee, Jong-Han;Jung, Chi-Young;Choi, Eunsoo;Cheung, Jin-Hwan
    • Smart Structures and Systems / Vol. 20 No. 5 / pp.563-572 / 2017
  • The safety of structures is closely associated with their out-of-plane behavior. In particular, long and slender beam structures are increasingly used in design and construction, so evaluating the lateral and torsional behavior of a structure is important for its safety during construction as well as under service conditions. The current contact measurement method using displacement meters cannot measure independent movements directly and also requires caution when installing the meters. Therefore, in this study, a vision-based system was used to measure the in-plane and out-of-plane displacements of a structure. The image processing algorithm was based on reference objects, including multiple targets, in Lab color space. The captured targets were synchronized using a load indicator connected wirelessly to a data logger system on the server. A laboratory beam test was carried out to compare the displacements and rotation obtained from the proposed vision-based measurement system with those from the current measurement method using string potentiometers. The test results showed that the proposed vision-based measurement system can be applied successfully and easily to evaluating both the in-plane and out-of-plane movements of a beam, including twisting rotation.
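
The abstract does not give the implementation of the target-detection step, but a minimal OpenCV sketch of locating colored reference targets in Lab color space could look like the following; the Lab threshold bounds, function name, and the assumption that targets appear as the largest matching blobs are illustrative, not details from the paper.

```python
import cv2
import numpy as np

def find_target_centroids(frame_bgr, lab_lower, lab_upper, max_targets=4):
    """Locate colored reference targets in a frame using Lab color thresholds.

    lab_lower / lab_upper are (L, a, b) bounds for the target color; they
    would have to be calibrated for the actual targets.
    """
    lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB)
    mask = cv2.inRange(lab, np.array(lab_lower), np.array(lab_upper))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keep the largest blobs and return their centroids in pixel coordinates.
    contours = sorted(contours, key=cv2.contourArea, reverse=True)[:max_targets]
    centroids = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids
```

Once the pixel centroids of multiple targets are tracked per frame and scaled by the known target geometry, relative motions between targets yield the in-plane, out-of-plane, and twisting components.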

이동 로봇의 비젼 기반 제어 (Vision Based Mobile Robot Control)

  • 김진환
    • 전기학회논문지P / Vol. 60 No. 2 / pp.63-67 / 2011
  • This paper presents mobile robot control based on a vision system. The proposed vision-based controller consists of a camera tracking controller and a formation controller. The camera tracking controller uses an adaptive gain based on image-based visual servoing (IBVS). The formation controller, which is designed in the sense of Lyapunov stability, follows the leader. Simulation results show that the proposed vision-based mobile robot control is valid for indoor mobile robot applications.
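
The controller itself is only outlined in the abstract; as a rough illustration, a numpy sketch of a classical IBVS law with a simple error-dependent gain is given below. The normalized point features, known depths, function names, and the particular gain schedule are assumptions, not the paper's design.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(points, desired, depths, lam_min=0.2, lam_max=1.0):
    """Camera velocity command v = -lambda * L^+ * e for point features.

    The adaptive gain here simply grows with the feature-error norm between
    lam_min and lam_max; it only stands in for the paper's gain schedule.
    """
    L = np.vstack([interaction_matrix(x, y, Z) for (x, y), Z in zip(points, depths)])
    e = (np.asarray(points) - np.asarray(desired)).reshape(-1)
    lam = lam_min + (lam_max - lam_min) * np.tanh(np.linalg.norm(e))
    return -lam * np.linalg.pinv(L) @ e
```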

컴퓨터 비젼 방법을 이용한 3차원 물체 위치 결정에 관한 연구 (A Study on the Determination of 3-D Object's Position Based on Computer Vision Method)

  • 김경석
    • 한국생산제조학회지 / Vol. 8 No. 6 / pp.26-34 / 1999
  • This study presents an alternative method for determining an object's position based on a computer vision method. The approach develops a vision system model that defines the reciprocal relationship between 3-D real space and the 2-D image plane. The model involves bilinear six-view parameters, which are estimated using the relationship between camera-space locations and the real coordinates of known positions. Based on the parameters estimated for each independent camera, the position of an unknown object is determined using a sequential estimation scheme that processes the data of the unknown points in the 2-D image plane of each camera. This vision control method is robust and reliable, and it overcomes the difficulties of conventional approaches, such as precise calibration of the vision sensor, exact kinematic modeling of the robot, and correct knowledge of the relative position and orientation of the robot and CCD camera. Finally, the developed vision control method is tested experimentally by determining object positions in space using the computer vision system. The results show that the presented method is precise and compatible.
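
The bilinear six-view-parameter model is not reproduced in the abstract; as a stand-in, the sketch below estimates a generic linear projective camera model from known 3-D/2-D correspondences by least squares (a standard direct linear transform) and then recovers an unknown 3-D point seen by two cameras. Treat the 11-parameter DLT formulation and the function names as assumptions rather than the paper's method.

```python
import numpy as np

def estimate_camera(X_world, x_pix):
    """Least-squares 3x4 projection matrix from known 3-D points and their
    2-D image coordinates (direct linear transform); needs >= 6 points."""
    rows = []
    for (X, Y, Z), (u, v) in zip(X_world, x_pix):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 4)

def triangulate(P1, x1, P2, x2):
    """Recover an unknown 3-D point from its images in two estimated cameras."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```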


Development of a Ubiquitous Vision System for Location-awareness of Multiple Targets by a Matching Technique for the Identity of a Target;a New Approach

  • Kim, Chi-Ho;You, Bum-Jae;Kim, Hag-Bae
    • 제어로봇시스템학회:학술대회논문집 / ICCAS 2005 / pp.68-73 / 2005
  • Various techniques have been proposed for the detection and tracking of targets in order to develop real-world computer vision systems, e.g., visual surveillance systems, intelligent transport systems (ITSs), and so forth. In particular, the idea of a distributed vision system is required to realize these techniques over a wide area. In this paper, we develop a ubiquitous vision system for location-awareness of multiple targets. Each vision sensor composing the system can perform exact segmentation of a target using color and motion information and visual tracking of multiple targets in real time. We construct the ubiquitous vision system as a multiagent system by regarding each vision sensor as an agent (a vision agent). The matching problem for the identity of a target is therefore solved as a handover through a protocol-based approach, for which we propose the identified contract net (ICN) protocol. The ICN protocol is independent of the number of vision agents and does not require calibration between them, which improves the speed, scalability, and modularity of the system. We apply the ICN protocol to the ubiquitous vision system constructed for our experiments. The system produces reliable results, and the ICN protocol operates successfully throughout several experiments.
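
The ICN message set is not specified in the abstract; the sketch below only illustrates a generic contract-net-style handover, in which the agent currently tracking a target announces the task, neighboring agents bid with a visibility score, and the best bidder takes over while the target identity is carried along. All class and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Announcement:
    target_id: int          # identity carried across agents (the "identified" part)
    color_signature: tuple  # appearance cue used to re-acquire the target

class VisionAgent:
    def __init__(self, name):
        self.name = name
        self.tracked = {}     # target_id -> last announcement/observation

    def bid(self, ann):
        """Return a visibility score for the announced target, or None if unseen."""
        obs = self.detect(ann.color_signature)   # local segmentation/tracking step
        return None if obs is None else obs["confidence"]

    def detect(self, color_signature):
        raise NotImplementedError  # backed by the agent's own camera in a real system

def handover(manager, ann, agents):
    """Contract-net style award: the neighbor with the best bid takes over tracking."""
    bids = {a: a.bid(ann) for a in agents if a is not manager}
    candidates = {a: b for a, b in bids.items() if b is not None}
    if not candidates:
        return None                      # no agent currently sees the target
    winner = max(candidates, key=candidates.get)
    winner.tracked[ann.target_id] = ann  # identity preserved without any camera calibration
    return winner
```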


A completely non-contact recognition system for bridge unit influence line using portable cameras and computer vision

  • Dong, Chuan-Zhi;Bas, Selcuk;Catbas, F. Necati
    • Smart Structures and Systems / Vol. 24 No. 5 / pp.617-630 / 2019
  • Currently, most vision-based structural identification research focuses either on structural input (vehicle location) estimation or on structural output (structural displacement and strain responses) estimation. Structural condition assessment at the global level using only the vision-based structural output cannot give a normalized response irrespective of the type and/or load configuration of the vehicles. Combining the vision-based structural input and the structural output from non-contact sensors overcomes this disadvantage while reducing cost, time, and labor, including cable wiring work. In conventional traffic monitoring, traffic closure is sometimes required for bridge structures, which may cause other severe problems such as traffic jams and accidents. In this study, a completely non-contact structural identification system is proposed, mainly targeting the identification of the bridge unit influence line (UIL) under operational traffic. Both the structural input (vehicle location information) and output (displacement responses) are obtained using only cameras and computer vision techniques. Multiple cameras are synchronized by audio signal pattern recognition. The proposed system is verified with a laboratory experiment on a scaled bridge model under a small moving truck load and a field application on a campus footbridge under a moving golf cart load. The UILs are successfully identified in both cases. Pedestrian loads are also estimated with the extracted UIL, and the predicted pedestrian weights are within acceptable ranges.
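
The combination step described above (vision-measured load position plus vision-measured displacement yielding a UIL) can be illustrated with a least-squares sketch, assuming a single moving unit load and an influence line discretized at fixed stations; the interpolation scheme and variable names are assumptions, not the paper's formulation.

```python
import numpy as np

def identify_uil(load_positions, displacements, stations, axle_weight=1.0):
    """Least-squares unit influence line from synchronized load positions and
    measured displacements, assuming one axle of known weight crosses the span.

    load_positions : vehicle location at each time step (from the traffic camera)
    displacements  : structural displacement at each time step (from the structure camera)
    stations       : sorted positions along the span where the UIL is discretized
    """
    n_t, n_s = len(load_positions), len(stations)
    A = np.zeros((n_t, n_s))
    for k, xp in enumerate(load_positions):
        # Linear interpolation: the load at xp contributes to the two nearest stations.
        j = np.clip(np.searchsorted(stations, xp) - 1, 0, n_s - 2)
        w = (xp - stations[j]) / (stations[j + 1] - stations[j])
        A[k, j], A[k, j + 1] = axle_weight * (1 - w), axle_weight * w
    uil, *_ = np.linalg.lstsq(A, np.asarray(displacements), rcond=None)
    return uil
```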

A Study on Public Library Book Location Guidance System based on AI Vision Sensor

  • Soyoung Kim;Heesun Kim
    • International Journal of Internet, Broadcasting and Communication / Vol. 16 No. 3 / pp.253-261 / 2024
  • Libraries serve as public institutions that provide academic information to a variety of people, including students, the general public, and researchers. As the importance of lifelong education is emphasized these days, libraries are evolving beyond simply storing and lending materials into complex cultural spaces that share knowledge and information through various educational programs and cultural events. One of the problems library users face is locating the books they want to borrow. This problem arises from errors in book locations caused by delays in updating the library database after loans, incorrect labeling, and books temporarily shelved in other places. The biggest problem is that it takes users a long time to search for the books they want to borrow. In this paper, we propose a system that visually displays the location of books in real time using an AI vision sensor and LEDs. The AI vision sensor-based book location guidance system generates a QR code containing the call number of the borrowed book. When the AI vision sensor recognizes this QR code, the exact location of the book is displayed visually with LEDs so that users can find it easily. We believe that the AI vision sensor-based book location guidance system dramatically improves book search and management efficiency, and this technology is expected to have great potential not only in libraries and bookstores but also in a variety of other fields.
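
The sensor and LED hardware are not detailed in the abstract, so the sketch below covers only the software side of the described flow: encoding a call number in a QR code with the `qrcode` package and decoding it from a camera frame with OpenCV's QR detector. The shelf map and the LED routine are placeholders.

```python
import cv2
import qrcode

def make_call_number_qr(call_number, path="call_number.png"):
    """Encode a book's call number in a QR code image (e.g., generated at checkout)."""
    qrcode.make(call_number).save(path)

def guide_to_book(frame_bgr, shelf_map, light_led):
    """Decode a scanned QR code and light the LED of the shelf holding the book.

    shelf_map maps call numbers to shelf/LED identifiers; light_led stands in
    for whatever routine drives the LED strip in the actual installation.
    """
    detector = cv2.QRCodeDetector()
    call_number, _, _ = detector.detectAndDecode(frame_bgr)
    if call_number and call_number in shelf_map:
        light_led(shelf_map[call_number])
        return shelf_map[call_number]
    return None
```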

DSP를 이용한 스테레오 비젼 로봇의 설계에 관한 연구 (The Stereoscopic Vision Robot System Design with DSP Processor)

  • 노석환;강희조;류광렬
    • 한국정보통신학회:학술대회논문집 / 한국해양정보통신학회 2003 Fall Conference / pp.264-267 / 2003
  • This paper presents the design of a stereoscopic vision robot system using a DSP processor. The stereo vision robot consists of a control system, a vision system, and a host computer. The vision system was implemented on a 32-bit DSP processor, and the stereo image processing applied a correlation-coefficient method. Experimental results showed that the robot was controlled smoothly through image recognition, with an image recognition rate of about 95%.
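
The abstract only names the correlation-coefficient method for stereo matching; a compact numpy sketch of normalized cross-correlation block matching between rectified left/right grayscale images, with illustrative block size and disparity range, is shown below.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation (correlation coefficient) of two patches."""
    a, b = a - a.mean(), b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def disparity_at(left, right, row, col, block=7, max_disp=32):
    """Disparity of one pixel by searching the best-correlated block along the row."""
    h = block // 2
    ref = left[row - h:row + h + 1, col - h:col + h + 1].astype(float)
    best_d, best_score = 0, -1.0
    for d in range(0, min(max_disp, col - h) + 1):
        cand = right[row - h:row + h + 1, col - d - h:col - d + h + 1].astype(float)
        score = ncc(ref, cand)
        if score > best_score:
            best_d, best_score = d, score
    return best_d
```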


Deep Learning Machine Vision System with High Object Recognition Rate using Multiple-Exposure Image Sensing Method

  • Park, Min-Jun;Kim, Hyeon-June
    • 센서학회지 / Vol. 30 No. 2 / pp.76-81 / 2021
  • In this study, we propose a machine vision system with a high object recognition rate. By utilizing a multiple-exposure image sensing technique, the proposed deep learning-based machine vision system can cover a wide light intensity range without additional training over that range. If the system fails to recognize object features, it switches to a multiple-exposure sensing mode and detects the target object that is hidden in nearly dark or saturated bright regions. The short- and long-exposure images from the multiple-exposure sensing mode are then synthesized to obtain accurate object feature information, generating image information with a wide dynamic range. Even though the object recognition resources for the deep learning process covered a light intensity range of only 23 dB, the prototype machine vision system with the multiple-exposure imaging method demonstrated object recognition performance over a light intensity range of up to 96 dB.
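
The synthesis step can be illustrated with OpenCV's Mertens exposure fusion, which merges short- and long-exposure frames without camera-response calibration; whether the paper uses this particular fusion algorithm is not stated, so treat it as a stand-in.

```python
import cv2
import numpy as np

def fuse_exposures(short_exposure_bgr, long_exposure_bgr):
    """Merge a short- and a long-exposure frame of the same scene into one
    frame that keeps detail in both the bright and the dark regions."""
    merger = cv2.createMergeMertens()
    fused = merger.process([short_exposure_bgr, long_exposure_bgr])
    # process() returns a float image in roughly [0, 1]; rescale for the recognition network.
    return np.clip(fused * 255.0, 0, 255).astype(np.uint8)
```

The fused frame is then fed to the unchanged recognition network, which is the point of the method: the network does not have to be retrained for extreme lighting.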

조명 변화에 강인한 로봇 축구 시스템의 색상 분류기 (Robust Color Classifier for Robot Soccer System under Illumination Variations)

  • 이성훈;박진현;전향식;최영규
    • 대한전기학회논문지:시스템및제어부문D / Vol. 53 No. 1 / pp.32-39 / 2004
  • Color-based vision systems are used to recognize our team's robots, the opponent team's robots, and the ball in robot soccer systems. Such systems have the drawback of being very sensitive to color variations caused by brightness changes. In this paper, a neural network trained with data obtained under various illumination conditions is used to classify colors in a modified YUV color space for the robot soccer vision system. For this purpose, a new method of measuring brightness using a color card is proposed. After the neural network is constructed, a look-up table is generated to replace the network and reduce computation time. Experimental results show that the proposed color classification method is robust under illumination variations.
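
A minimal sketch of the classifier-plus-lookup-table idea follows, assuming labeled (Y, U, V) training samples collected under the different illumination conditions and scikit-learn's MLPClassifier as the network; the paper's exact network, modified YUV transform, and quantization step are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_color_classifier(yuv_samples, labels):
    """Small neural network mapping (Y, U, V) pixels to color-class labels."""
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000)
    net.fit(np.asarray(yuv_samples, dtype=float) / 255.0, labels)
    return net

def build_lut(net, step=4):
    """Precompute class labels over a quantized YUV cube so that the per-pixel
    cost at run time is a table lookup instead of a network evaluation."""
    grid = np.arange(0, 256, step)
    y, u, v = np.meshgrid(grid, grid, grid, indexing="ij")
    pts = np.stack([y.ravel(), u.ravel(), v.ravel()], axis=1).astype(float) / 255.0
    return net.predict(pts).reshape(len(grid), len(grid), len(grid))

def classify_pixel(lut, y, u, v, step=4):
    return lut[y // step, u // step, v // step]
```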

RV 차량용 싱킹 시트의 용접 품질 검사 시스템 개발 (Development of Welding Quality Inspection System for RV Sinking Seat)

  • 윤상환;김한종;김성관
    • 제어로봇시스템학회논문지 / Vol. 14 No. 1 / pp.75-80 / 2008
  • This paper presents a vision-based autonomous inspection system for welding quality control of an RV sinking seat. To overcome the precision errors that arise from visual inspection by an operator in the manufacturing process of an RV sinking seat, a machine vision based welding quality control system is proposed. It consists of a CMOS camera and the NI vision system. The geometry of the welding bead, which is the welding quality criterion, is measured from the captured image after a median filter is applied. The image processing software for the system was developed using NI LabVIEW. The proposed welding quality inspection system for the RV sinking seat was verified experimentally.
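
The image-processing chain is only named (median filter, bead geometry measurement); a minimal OpenCV sketch of that kind of measurement, assuming the bead appears as the largest bright blob and that a millimeters-per-pixel scale is known, is shown below. The threshold value and function name are illustrative.

```python
import cv2

def measure_bead(image_gray, mm_per_pixel, thresh=200):
    """Estimate welding-bead length and width from a grayscale seat-frame image.

    The threshold and the largest-bright-blob assumption are illustrative; a
    production system would calibrate both for the actual fixture and lighting.
    """
    filtered = cv2.medianBlur(image_gray, 5)             # suppress sensor noise
    _, mask = cv2.threshold(filtered, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    bead = max(contours, key=cv2.contourArea)
    (_, _), (w, h), _ = cv2.minAreaRect(bead)             # bead bounding box in pixels
    length, width = max(w, h) * mm_per_pixel, min(w, h) * mm_per_pixel
    return length, width
```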