• Title/Summary/Keyword: multiple vision

Development of a Vision-based Blank Alignment Unit for Press Automation Process (프레스 자동화 공정을 위한 비전 기반 블랭크 정렬 장치 개발)

  • Oh, Jong-Kyu;Kim, Daesik;Kim, Soo-Jong
    • Journal of Institute of Control, Robotics and Systems / v.21 no.1 / pp.65-69 / 2015
  • A vision-based blank alignment unit for a press automation line is introduced in this paper. A press is a machine tool that changes the shape of a blank by applying pressure and is widely used in industries requiring mass production. In traditional press automation lines, a mechanical centering unit consisting of guides and ball bearings is employed to align a blank before a robot inserts it into the press. However, it can only align blanks of limited sizes and shapes, and it cannot be applied to processes where more than two blanks are inserted simultaneously. To overcome these problems, we developed a press centering unit based on vision sensors for press automation lines. The specification of the vision system is determined by considering the blank information and the required accuracy. Vision application software with pattern recognition, camera calibration, and monitoring functions is designed to successfully detect multiple blanks. Through real experiments with an industrial robot, we validated that the proposed system can align blanks of various sizes and shapes and successfully detect more than two simultaneously inserted blanks.
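
The abstract does not include implementation details; below is a minimal sketch of the core idea it describes, detecting a blank's position by pattern matching in a calibrated camera image and converting the pixel offset into a correction for the robot. The pixel-to-millimeter scale, acceptance threshold, and function names are illustrative assumptions, not the authors' actual software.

```python
import cv2

# Illustrative assumption: the camera has been calibrated beforehand so that
# a pixel offset in the image maps to a metric offset on the blank table.
MM_PER_PIXEL = 0.5  # hypothetical scale obtained from camera calibration

def locate_blank(image_gray, template_gray):
    """Find the best match of a blank template and return its pixel position."""
    result = cv2.matchTemplate(image_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_loc, max_val  # top-left corner of the match and its score

def alignment_offset_mm(image_gray, template_gray, nominal_xy):
    """Offset (in mm) between the detected blank and its nominal position."""
    (x, y), score = locate_blank(image_gray, template_gray)
    if score < 0.7:  # hypothetical acceptance threshold
        raise RuntimeError("blank not found")
    dx = (x - nominal_xy[0]) * MM_PER_PIXEL
    dy = (y - nominal_xy[1]) * MM_PER_PIXEL
    return dx, dy  # correction the robot applies before inserting the blank
```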

A completely non-contact recognition system for bridge unit influence line using portable cameras and computer vision

  • Dong, Chuan-Zhi;Bas, Selcuk;Catbas, F. Necati
    • Smart Structures and Systems / v.24 no.5 / pp.617-630 / 2019
  • Currently most of the vision-based structural identification research focus either on structural input (vehicle location) estimation or on structural output (structural displacement and strain responses) estimation. The structural condition assessment at global level just with the vision-based structural output cannot give a normalized response irrespective of the type and/or load configurations of the vehicles. Combining the vision-based structural input and the structural output from non-contact sensors overcomes the disadvantage given above, while reducing cost, time, labor force including cable wiring work. In conventional traffic monitoring, sometimes traffic closure is essential for bridge structures, which may cause other severe problems such as traffic jams and accidents. In this study, a completely non-contact structural identification system is proposed, and the system mainly targets the identification of bridge unit influence line (UIL) under operational traffic. Both the structural input (vehicle location information) and output (displacement responses) are obtained by only using cameras and computer vision techniques. Multiple cameras are synchronized by audio signal pattern recognition. The proposed system is verified with a laboratory experiment on a scaled bridge model under a small moving truck load and a field application on a footbridge on campus under a moving golf cart load. The UILs are successfully identified in both bridge cases. The pedestrian loads are also estimated with the extracted UIL and the predicted weights of pedestrians are observed to be in acceptable ranges.
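
The abstract states that the cameras are synchronized by audio signal pattern recognition but gives no algorithmic detail. A common way to do this is to estimate the time offset between two cameras' audio tracks by cross-correlation; the sketch below illustrates that idea under assumed inputs (the sample rate and function names are not from the paper).

```python
import numpy as np

def estimate_time_offset(audio_a, audio_b, sample_rate=48000):
    """Estimate the lag (in seconds) between two cameras' audio tracks.

    audio_a, audio_b: 1-D numpy arrays recording the same scene sound.
    """
    # Full cross-correlation of the two zero-mean signals.
    a = audio_a - np.mean(audio_a)
    b = audio_b - np.mean(audio_b)
    corr = np.correlate(a, b, mode="full")
    # The index of the correlation peak gives the sample lag between tracks.
    lag_samples = int(np.argmax(corr)) - (len(b) - 1)
    return lag_samples / sample_rate

# The estimated offset can then be used to shift one camera's frame timestamps
# so that vehicle positions and displacement responses refer to a common clock.
```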

Evaluation of Video Codec for AI-based Multiple Tasks (인공지능 기반 멀티태스크를 위한 비디오 코덱의 성능평가 방법)

  • Kim, Shin;Lee, Yegi;Yoon, Kyoungro;Choo, Hyon-Gon;Lim, Hanshin;Seo, Jeongil
    • Journal of Broadcast Engineering / v.27 no.3 / pp.273-282 / 2022
  • MPEG-VCM (Video Coding for Machines) aims to standardize a video codec for machines. VCM provides data sets and anchors, which serve as reference data for comparison, for several machine vision tasks including object detection, object segmentation, and object tracking. The evaluation template can be used to compare compression and machine vision task performance between the anchor data and various proposed video codecs. However, performance comparison is currently carried out separately for each machine vision task, and no method is provided for evaluating multiple machine vision tasks on a single bitstream. In this paper, we propose a performance evaluation method for a video codec on AI-based multi-tasks. Based on bits per pixel (BPP), which measures the size of a single bitstream, and mean average precision (mAP), which measures the accuracy of each task, we define three criteria for multi-task performance evaluation: the arithmetic average, the weighted average, and the harmonic average of the per-task mAP values. In addition, because the dynamic range of mAP may differ greatly from task to task, the multi-task performance results are calculated and evaluated on normalized mAP values so that tasks with wide dynamic ranges do not dominate the result.
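
The three aggregation criteria lend themselves to a short worked example. The sketch below assumes the per-task mAP values have already been normalized per task; the paper's exact normalization and weights are not given here, so the default weighting and example numbers are illustrative only.

```python
from statistics import harmonic_mean

def multi_task_scores(normalized_map, weights=None):
    """Aggregate normalized per-task mAP values into three multi-task scores.

    normalized_map: per-task mAP values, already normalized so that tasks
    with very different dynamic ranges are comparable.
    weights: optional per-task weights (assumed to sum to 1) for the weighted average.
    """
    n = len(normalized_map)
    if weights is None:
        weights = [1.0 / n] * n  # equal weighting by default (assumption)

    arithmetic = sum(normalized_map) / n
    weighted = sum(w * m for w, m in zip(weights, normalized_map))
    harmonic = harmonic_mean(normalized_map)
    return arithmetic, weighted, harmonic

# Example: detection, segmentation, and tracking evaluated on one bitstream.
print(multi_task_scores([0.72, 0.65, 0.58], weights=[0.5, 0.3, 0.2]))
```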

Obstacle Avoidance Method for Multi-Agent Robots Using IR Sensor and Image Information (IR 센서와 영상정보를 이용한 다 개체 로봇의 장애물 회피 방법)

  • Jeon, Byung-Seung;Lee, Do-Young;Choi, In-Hwan;Mo, Young-Hak;Park, Jung-Min;Lim, Myo-Taeg
    • Journal of Institute of Control, Robotics and Systems / v.18 no.12 / pp.1122-1131 / 2012
  • This paper presents an obstacle avoidance method for scout or industrial robots in unknown environments using an IR sensor and a vision system. In the proposed method, the robots share the locations of obstacles in real time, so each robot can choose the best path for obstacle avoidance. Using the IR sensor and vision system, multiple robots efficiently evade obstacles through the proposed cooperation method. No landmarks are placed on the walls or floor of the experimental environment, and the obstacles have no specific color or shape. To obtain obstacle information, the vision system extracts obstacle coordinates using an image labeling method. The IR sensor provides the obstacle range and the locomotion direction used to decide the optimal path for avoiding the obstacle. The experiment was conducted in a 7 m × 7 m indoor environment with two-wheeled mobile robots. It is shown that multiple robots efficiently move along the optimal path in cooperation with each other in a space where obstacles are located.
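
The abstract says obstacle coordinates are extracted with an image labeling method. A minimal sketch of that step using connected-component labeling on a binary obstacle mask is shown below; the Otsu thresholding and minimum-area filter are assumptions for illustration, not the authors' exact pipeline.

```python
import cv2

def obstacle_centroids(gray_image, min_area=200):
    """Return (x, y) centroids of labeled obstacle blobs in a grayscale image."""
    # Hypothetical segmentation: obstacles contrast with the floor, so Otsu
    # thresholding separates them into a binary mask.
    _, mask = cv2.threshold(gray_image, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Connected-component labeling; stats hold blob areas, centroids are (x, y).
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    result = []
    for i in range(1, num):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            result.append(tuple(centroids[i]))
    return result  # obstacle coordinates shared among the robots
```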

Self-Localization of Autonomous Mobile Robot using Multiple Landmarks (다중 표식을 이용한 자율이동로봇의 자기위치측정)

  • 강현덕;조강현
    • Journal of Institute of Control, Robotics and Systems / v.10 no.1 / pp.81-86 / 2004
  • This paper describes self-localization of a mobile robot from multiple landmark candidates in an outdoor environment. Our robot uses an omnidirectional vision system for efficient self-localization; this vision system acquires views in all directions. The robot uses landmarks that appear larger than other objects in the image, such as buildings, sculptures, and placards, and takes vertical edges and their merged regions as features. In our previous work, we found that landmark matching is difficult when the selected landmark candidates belong to image regions with repeating vertical edges. To overcome this problem, the robot uses merged regions of vertical edges: if the interval between vertical edges is short, the robot bundles them into the same region, and these merged regions are selected as landmark candidates. The merged vertical edge regions reduce the ambiguity of landmark matching. The robot compares landmark candidates between the previous and current images and is thus able to track the same landmark across the image sequence using the proposed feature and matching method. Experiments conducted on our campus show that the robust landmark matching method yields efficient self-localization.
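
The key step described above is bundling closely spaced vertical edges into one merged region. The sketch below illustrates that grouping under assumed details: edge extraction via a horizontal Sobel gradient, a column-strength threshold, and a gap threshold are all illustrative choices rather than the paper's exact values.

```python
import cv2
import numpy as np

def merged_vertical_edge_regions(gray_image, gap_threshold=5):
    """Group columns containing strong vertical edges into merged regions.

    Returns a list of (start_col, end_col) intervals; nearby vertical edges
    are bundled into one region, reducing ambiguity in landmark matching.
    """
    # A horizontal gradient responds strongly to vertical edges.
    grad_x = cv2.Sobel(gray_image, cv2.CV_32F, 1, 0, ksize=3)
    column_strength = np.abs(grad_x).sum(axis=0)
    edge_cols = np.where(column_strength > column_strength.mean() * 2)[0]

    regions = []
    for col in edge_cols:
        if regions and col - regions[-1][1] <= gap_threshold:
            regions[-1][1] = int(col)          # extend the current merged region
        else:
            regions.append([int(col), int(col)])  # start a new region
    return [(start, end) for start, end in regions]
```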

Vision-based Hand Gesture Detection and Tracking System (비전 기반의 손동작 검출 및 추적 시스템)

  • Park Ho-Sik;Bae Cheol-soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.12C / pp.1175-1180 / 2005
  • We present a vision-based hand gesture detection and tracking system. Most conventional hand gesture recognition systems rely on simple hand detection methods, such as background subtraction under assumed static observation conditions, which are not robust against camera motion, illumination changes, and so on. Therefore, we propose a statistical method that recognizes and detects hand regions in images using geometrical structures. In addition, our hand tracking system employs multiple cameras to reduce occlusion problems, and non-synchronous multiple observations enhance system scalability. In our experiments, the proposed method achieves a recognition rate of 99.28%, which is 3.91% higher than the conventional appearance-based method.

Real-Time Vehicle Detection in Traffic Scenes using Multiple Local Region Information (국부 다중 영역 정보를 이용한 교통 영상에서의 실시간 차량 검지 기법)

  • 이대호;박영태
    • Proceedings of the IEEK Conference / 2000.06d / pp.163-166 / 2000
  • A real-time, computer-vision-based traffic detection scheme enables efficient traffic control using automatically computed traffic information, as well as obstacle detection for moving automobiles. In a traffic detection system, traffic information is extracted by segmenting vehicle regions from road images. In this paper, we propose an improved vehicle segmentation method for road images that uses multiple local region information. Because the multiple overlapping local regions in the same lane are processed sequentially from the smallest, traffic detection errors can be corrected.

The automatic tire classifying vision system for real-time processing (실시간 처리를 위한 타이어 자동 선별 비젼 시스템)

  • 박귀태;김진헌;정순원;송승철
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 1992.10a / pp.358-363 / 1992
  • The tire manufacturing process demands classification of tire types when tires are transferred between inner processes. Though most processes are well automated, the classification still relies greatly upon human visual inspection, which has been an obstacle to factory automation in tire manufacturing companies. This paper proposes an effective vision system that can be usefully applied to the tire classification process in real time. The system adopts a parallel architecture using multiple transputers and contains preprocessing algorithms for character recognition. The system is easily expandable to handle large data sets that can be processed separately.

Vision-based hand gesture recognition system for object manipulation in virtual space (가상 공간에서의 객체 조작을 위한 비전 기반의 손동작 인식 시스템)

  • Park, Ho-Sik;Jung, Ha-Young;Ra, Sang-Dong;Bae, Cheol-Soo
    • Proceedings of the IEEK Conference / 2005.11a / pp.553-556 / 2005
  • We present a vision-based hand gesture recognition system for object manipulation in virtual space. Most conventional hand gesture recognition systems rely on simple hand detection methods, such as background subtraction under assumed static observation conditions, which are not robust against camera motion, illumination changes, and so on. Therefore, we propose a statistical method that recognizes and detects hand regions in images using geometrical structures. In addition, our hand tracking system employs multiple cameras to reduce occlusion problems, and non-synchronous multiple observations enhance system scalability. Experimental results show the effectiveness of our method.

Multiple Vision Based Micromanipulation System for 3D-Shaped Micro Parts Assembly

  • Lee, Seok-Joo;Park, Gwi-Tae;Kim, Kyunghwan;Kim, Deok-Ho;Park, Jong-Oh
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2001.10a / pp.103.5-103 / 2001
  • This paper presents a visual feedback system that controls a micromanipulator using multiple sources of microscopic vision information. Micromanipulation stations are typically built around an optical microscope; however, the single field of view of an optical microscope essentially limits the workspace of the micromanipulator, and its low depth of field makes it difficult to handle 3D-shaped micro objects. The system consists of a stereoscopic microscope, three CCD cameras, the micromanipulator, and a personal computer. The use of a stereoscopic microscope, which has a long working distance and a high depth of field with a selective field of view, improves the recognizability of 3D-shaped micro objects and provides a way to overcome several essential limitations in micromanipulation. Thus, visual feedback information is very important in handling micro objects for overcoming those limitations and provides a means for the ...
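
The abstract is cut off before describing the control loop itself. As a hedged illustration of what one step of microscope-camera visual feedback can look like, the sketch below applies a proportional correction to the manipulator stage based on the image-space error of the tracked micro part; the camera-to-stage scale, gain, and `move_stage` interface are hypothetical and not the authors' system.

```python
import numpy as np

# Hypothetical calibration: micrometers of stage motion per image pixel.
UM_PER_PIXEL = np.array([0.8, 0.8])
GAIN = 0.5  # proportional gain of the visual feedback loop

def visual_feedback_step(part_xy_px, target_xy_px, move_stage):
    """One iteration of image-based visual feedback for a micromanipulator.

    part_xy_px:   detected (x, y) position of the micro part in the image
    target_xy_px: desired (x, y) position in the image
    move_stage:   callable commanding a relative stage move in micrometers
    """
    error_px = np.asarray(target_xy_px, dtype=float) - np.asarray(part_xy_px, dtype=float)
    correction_um = GAIN * error_px * UM_PER_PIXEL
    move_stage(correction_um)        # small corrective motion toward the target
    return float(np.linalg.norm(error_px))  # remaining pixel error for a convergence check
```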
