• Title/Summary/Keyword: camera vision


Moving Object Extraction and Relative Depth Estimation of Background Regions in Video Sequences (동영상에서 물체의 추출과 배경영역의 상대적인 깊이 추정)

  • Park Young-Min; Chang Chu-Seok
    • The KIPS Transactions: Part B, v.12B no.3 s.99, pp.247-256, 2005
  • One of the classic research problems in computer vision is stereo, i.e., the reconstruction of three-dimensional shape from two or more images. This paper deals with the problem of extracting depth information of non-rigid dynamic 3D scenes from general 2D video sequences taken by a monocular camera, such as movies, documentaries, and dramas. Depths of the blocks are extracted from the estimated block motions through the following two steps: (i) calculation of global parameters concerning camera translation and focal length, using the locations of blocks and their motions; (ii) calculation of each block's depth relative to the average image depth, using the global parameters together with the location and motion of the block. Both singular and non-singular cases are tested with various video sequences. The resultant relative depths and ego-motion object shapes are virtually identical to those perceived by human vision.
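
As a rough illustration of step (ii), the sketch below assumes a purely translating camera, under which a static block's image motion magnitude is inversely proportional to its depth; the function and its interface are illustrative assumptions, not the paper's formulation.

    # Minimal sketch: relative block depth from block motion under an assumed
    # purely translating camera (motion magnitude ~ 1 / depth).
    import numpy as np

    def relative_block_depths(block_motions):
        """block_motions: (N, 2) array of per-block motion vectors in pixels/frame.
        Returns depths normalized so a block moving at the mean motion magnitude
        has depth 1.0 (smaller value = nearer)."""
        mags = np.linalg.norm(block_motions, axis=1)
        mags = np.maximum(mags, 1e-6)      # guard against zero motion
        return mags.mean() / mags

    # Example: the fastest-moving block comes out nearest.
    print(relative_block_depths(np.array([[4.0, 0.0], [2.0, 0.0], [1.0, 0.0]])))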

Position Improvement of a Mobile Robot by Real Time Tracking of Multiple Moving Objects (실시간 다중이동물체 추적에 의한 이동로봇의 위치개선)

  • Jin, Tae-Seok; Lee, Min-Jung; Tack, Han-Ho; Lee, In-Yong; Lee, Joon-Tark
    • Journal of the Korean Institute of Intelligent Systems, v.18 no.2, pp.187-192, 2008
  • The Intelligent Space (ISpace) provides challenging research fields for surveillance, human-computer interfacing, networked camera conferencing, industrial monitoring, and service and training applications. ISpace is a space where many intelligent devices, such as computers and sensors, are distributed. For these devices to cooperate within the environment, it is very important that the system knows their location information in order to offer useful services. To achieve these goals, we present a method for representing, tracking, and following humans by fusing distributed multiple vision systems in ISpace, with application to pedestrian tracking in a crowd. This paper describes appearance-based tracking of unknown objects with the distributed vision system in intelligent space. First, we discuss how object color information is obtained and how the color-appearance-based model is constructed from this data. Then, we discuss the global color model based on the local color information. The process of learning the global model and the experimental results are also presented.

A Study on Multi-Object Tracking Method using Color Clustering in ISpace (컬러 클러스터링 기법을 이용한 공간지능화의 다중이동물체 추척 기법)

  • Jin, Tae-Seok; Kim, Hyun-Deok
    • Journal of the Korea Institute of Information and Communication Engineering, v.11 no.11, pp.2179-2184, 2007
  • The Intelligent Space (ISpace) provides challenging research fields for surveillance, human-computer interfacing, networked camera conferencing, industrial monitoring, and service and training applications. ISpace is a space where many intelligent devices, such as computers and sensors, are distributed. For these devices to cooperate within the environment, it is very important that the system knows their location information in order to offer useful services. To achieve these goals, we present a method for representing, tracking, and following humans by fusing distributed multiple vision systems in ISpace, with application to pedestrian tracking in a crowd. This paper describes appearance-based tracking of unknown objects with the distributed vision system in intelligent space. First, we discuss how object color information is obtained and how the color-appearance-based model is constructed from this data. Then, we discuss the global color model based on the local color information. The process of learning the global model and the experimental results are also presented.
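
A rough illustration of the kind of color-appearance model this abstract describes, using k-means clustering over the pixel colors of an object patch; the clustering choice and parameters are assumptions for the sketch, not the paper's exact local/global model.

    # Sketch: compact color-appearance model of an object patch via k-means.
    import numpy as np
    from sklearn.cluster import KMeans

    def color_appearance_model(patch_bgr, n_clusters=4):
        """Cluster pixel colors of an object patch into dominant colors and weights."""
        pixels = patch_bgr.reshape(-1, 3).astype(np.float32)
        km = KMeans(n_clusters=n_clusters, n_init=4, random_state=0).fit(pixels)
        weights = np.bincount(km.labels_, minlength=n_clusters) / len(pixels)
        return km.cluster_centers_, weights

    # Local models built per camera could then be merged into a global model,
    # e.g. by re-clustering the union of their weighted cluster centers.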

A Study of 3D World Reconstruction and Dynamic Object Detection using Stereo Images (스테레오 영상을 활용한 3차원 지도 복원과 동적 물체 검출에 관한 연구)

  • Seo, Bo-Gil; Yoon, Young Ho; Kim, Kyu Young
    • Journal of the Korea Academia-Industrial cooperation Society, v.20 no.10, pp.326-331, 2019
  • In the real world there are both dynamic and static objects, but an autonomous vehicle or mobile robot cannot distinguish between them, even though a human can do so easily. Clearly distinguishing static objects from dynamic objects is important for an autonomous vehicle or mobile robot to drive successfully and stably. To do this, various sensor systems can be used, such as cameras and LiDAR. Stereo camera images are often used for autonomous driving. They can be used in object-recognition tasks such as segmentation, classification, and tracking, as well as in navigation tasks such as 3D world reconstruction. This study suggests an online method of distinguishing static and dynamic objects using stereo vision for an autonomous vehicle or mobile robot. The method was applied to a 3D world map reconstructed from stereo vision for navigation and achieved 99.81% accuracy.
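
A minimal sketch of the stereo part of such a pipeline, assuming a rectified stereo pair and OpenCV's semi-global block matching; the parameters and the dynamic-object rule mentioned in the comments are assumptions, not the paper's implementation.

    # Sketch: disparity from a rectified stereo pair, converted to metric depth.
    import cv2
    import numpy as np

    def stereo_depth(left_gray, right_gray, focal_px, baseline_m):
        sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
        disparity = sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0
        disparity[disparity <= 0] = np.nan        # mark invalid matches
        return focal_px * baseline_m / disparity  # depth in metres

    # Points whose measured depth changes between frames faster than the robot's
    # ego-motion predicts could then be flagged as dynamic and excluded from the
    # 3D world map used for navigation.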

Blurred Image Enhancement Techniques Using Stack-Attention (Stack-Attention을 이용한 흐릿한 영상 강화 기법)

  • Park Chae Rim; Lee Kwang Ill; Cho Seok Je
    • KIPS Transactions on Software and Data Engineering, v.12 no.2, pp.83-90, 2023
  • Blurred images are an important factor in lowering image recognition rates in computer vision. Blur mainly occurs when the camera is unstable or out of focus, or when an object in the scene moves quickly during the exposure time. Blurred images greatly degrade visual quality and weaken visibility, and this phenomenon occurs frequently despite the continuous development of digital camera technology. In this paper, a modified building module based on a deep multi-patch neural network, designed with convolutional neural networks, captures the details of the input image, and attention techniques focus on objects in the blurred image in several ways to strengthen it. Weights are measured and assigned at different scales to differentiate changes in blur, and the image is restored from coarse to fine levels so that global and local regions are adjusted sequentially. This method shows excellent results in recovering degraded image quality, improving object detection and feature extraction, and complementing color constancy.
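
The sketch below illustrates only the general multi-patch-plus-attention idea (encode the full image and its patches, then fuse features through a channel-attention gate); the layer sizes, the two-patch split, and the attention gate are assumptions for illustration, not the paper's architecture.

    # Schematic PyTorch sketch of a multi-patch block with channel attention.
    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        def __init__(self, channels, reduction=4):
            super().__init__()
            self.gate = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
                nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())

        def forward(self, x):
            return x * self.gate(x)                      # re-weight feature channels

    class MultiPatchBlock(nn.Module):
        """Encode the full image and its two halves, then fuse with attention."""
        def __init__(self, channels=32):
            super().__init__()
            self.enc = nn.Conv2d(3, channels, 3, padding=1)
            self.att = ChannelAttention(channels)
            self.out = nn.Conv2d(channels, 3, 3, padding=1)

        def forward(self, img):
            top, bottom = img.chunk(2, dim=2)            # split into two patches
            patch_feat = torch.cat([self.enc(top), self.enc(bottom)], dim=2)
            return self.out(self.att(self.enc(img) + patch_feat))

    deblurred = MultiPatchBlock()(torch.randn(1, 3, 64, 64))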

Determination and evaluation of dynamic properties for structures using UAV-based video and computer vision system

  • Rithy Prak; Ji Ho Park; Sanggi Jeong; Arum Jang; Min Jae Park; Thomas H.-K. Kang; Young K. Ju
    • Computers and Concrete, v.31 no.5, pp.457-468, 2023
  • Buildings, bridges, and dams are examples of civil infrastructure that play an important role in public life. These structures are prone to structural variations over time as a result of external forces that might disrupt their operation, cause structural integrity issues, and raise safety concerns for occupants. Therefore, monitoring the state of a structure, also known as structural health monitoring (SHM), is essential. Owing to the emergence of the fourth industrial revolution, next-generation sensors, such as wireless sensors, UAVs, and video cameras, have recently been utilized to improve the quality and efficiency of building forensics. This study presents a method that uses a target-based system to estimate the dynamic displacement, and the corresponding dynamic properties, of structures from UAV-based video. A laboratory experiment was performed to verify the tracking technique by using a shaking table to excite an SDOF specimen and comparing the results from a laser distance sensor, an accelerometer, and a fixed camera. A field test was then conducted to validate the proposed framework. One target marker is placed on the specimen, and another marker is attached to the ground, serving as a stationary reference to account for undesired UAV movement. The results from the UAV and the stationary camera showed a root mean square (RMS) error of 2.02% for the displacement, and after post-processing the displacement data with an OMA method, the identified natural frequency and damping ratio showed high accuracy and close agreement. The findings illustrate the capability and reliability of the UAV-based methodology for evaluating the dynamic properties of structures.
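
As a back-of-the-envelope illustration of turning a displacement record into a natural frequency and damping ratio, the sketch below uses FFT peak picking and the half-power bandwidth rule; it is a generic estimate, not the specific OMA method used in the study.

    # Sketch: natural frequency and damping ratio from a displacement time history.
    import numpy as np

    def peak_frequency_and_damping(displacement, fs):
        """fs: sampling rate in Hz. Uses FFT peak picking + half-power bandwidth."""
        x = displacement - np.mean(displacement)
        spec = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        k = np.argmax(spec[1:]) + 1                      # skip the DC bin
        fn = freqs[k]
        band = freqs[spec >= spec[k] / np.sqrt(2.0)]     # -3 dB band around the peak
        zeta = (band.max() - band.min()) / (2.0 * fn)    # half-power bandwidth rule
        return fn, zeta

    # Example: a lightly damped 2 Hz oscillation sampled at 100 Hz.
    t = np.arange(0.0, 30.0, 0.01)
    print(peak_frequency_and_damping(np.exp(-0.1 * t) * np.sin(2 * np.pi * 2.0 * t), 100.0))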

Implementation of the SLAM System Using a Single Vision and Distance Sensors (단일 영상과 거리센서를 이용한 SLAM시스템 구현)

  • Yoo, Sung-Goo; Chong, Kil-To
    • Journal of the Institute of Electronics Engineers of Korea SC, v.45 no.6, pp.149-156, 2008
  • A SLAM (Simultaneous Localization and Mapping) system finds a global position and builds a map from sensing data while an unmanned robot navigates an unknown environment. Two kinds of systems have been developed: one uses distance measurement sensors such as ultrasonic and laser sensors, and the other uses a stereo vision system. SLAM with distance measurement sensors has low computing time and low cost, but the precision of the system can be degraded by measurement error or sensor non-linearity. In contrast, a stereo vision system can accurately measure the 3D space, but it needs a high-end system for the complex calculations and is expensive. In this paper, we implement a SLAM system using a single camera image and PSD sensors. It detects obstacles with the front PSD sensor and then perceives the size and features of the obstacles by image processing. Probabilistic SLAM was implemented using the sensor and image data, and we verify the performance of the system by real experiments.
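
A minimal sketch of how a PSD range reading and the obstacle's image position might be combined into a single (range, bearing) observation for a probabilistic SLAM filter; the pinhole model and the numbers in the example are assumptions, not the paper's sensor model.

    # Sketch: fuse a PSD range with an image column into a 2D landmark observation.
    import math

    def landmark_from_psd_and_image(range_m, pixel_u, image_width, focal_px):
        """Returns the obstacle position (x forward, y lateral) in the robot frame."""
        bearing = math.atan2(pixel_u - image_width / 2.0, focal_px)  # rad, + to the right
        return range_m * math.cos(bearing), range_m * math.sin(bearing), bearing

    # Example: obstacle 0.8 m away, seen 40 px right of centre in a 320 px wide image.
    print(landmark_from_psd_and_image(0.8, 200, 320, 250.0))

    # An EKF-style SLAM filter would use such (range, bearing) pairs as measurements
    # when updating the robot pose and the landmark map.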

Basic Study on Quality Evaluation Technique for Peeled Garlics(I) -Rotation system for vision-based garlic sorter- (박피 마늘의 품위판정 기술개발에 관한 기초연구(I) -영상식 마늘 선별기용 반전장치 개발-)

  • 이종환; 이성범; 안청운
    • Journal of Biosystems Engineering, v.26 no.3, pp.271-278, 2001
  • Many workers in garlic peeling factories separate sound peeled garlics from unpeeled and defective ones by hand. In order to reduce the seasonal labor requirement and operating cost, a mechanized garlic sorting system such as a vision-based garlic sorter should be developed. This study was conducted as one of the basic studies on developing a quality evaluation technique for peeled garlics, especially to develop a system for acquiring whole-surface images of garlics with a CCD camera. The following results were obtained from this study. 1. A belt-type garlic rotation system was devised for the vision-based garlic sorter and was tested to decide the design criteria and optimum conveying speed. 2. To evaluate the performance of the developed garlic rotation system, feeding rate and rotating rate were measured under four experimental factors: the inclined angle of the rotating belt, the inclined angle of the feeding belt, the height of the plate arrays on the feeding belt, and the conveying speed of the belts. The capacity of the system according to the mixture ratio of peeled and unpeeled garlics was analyzed as a feasibility test. 3. For an inclined angle of the rotating belt of 20° and a plate array height on the feeding belt of 22㎜, the maximum rotating rate for garlic samples including unpeeled ones was 81.1% at a conveying speed of 4.2 garlic/sec. Under these conditions, the maximum feeding rate was 85% at a feeding belt inclined angle of 6.5°. 4. The capacity of the developed garlic rotation system was almost constant regardless of the mixture ratio of peeled and unpeeled garlics, and its range was 2.95∼3.92 garlic/sec. At a conveying speed of 4.2 garlic/sec, the capacity of the garlic rotation system was calculated as 58∼64 kg/hr. 5. To improve the performance of the garlic rotation system, it is recommended to develop a device that slides garlics onto the feeding belt.


Collision Avoidance for Indoor Mobile Robotics using Stereo Vision Sensor (스테레오 비전 센서를 이용한 실내 모바일 로봇 충돌 회피)

  • Kwon, Ki-Hyeon; Nam, Si-Byung; Lee, Se-Hun
    • Journal of the Korea Academia-Industrial cooperation Society, v.14 no.5, pp.2400-2405, 2013
  • We detect obstacles for a UGV (unmanned ground vehicle) from a compound image, which is generated by masking the color image with the depth image from a stereo vision sensor. The stereo vision sensor gathers distance information with a stereo camera. The obstacle information from the depth compound image can be sent to the mobile robot, and the robot can localize itself in the indoor area. We test the performance of the mobile robot in terms of the distance between the obstacle and the robot's position, and we also test the color, depth, and compound images respectively. Moreover, we test the performance in terms of the number of frames per second processed by the operating machine. The results show that the compound image gives improved performance in both distance estimation and frame rate.
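
A minimal sketch of one way such a compound image could be formed, by keeping only the color pixels whose depth falls inside a near-range band; the threshold and interface are assumptions, not the paper's processing chain.

    # Sketch: compound obstacle image = color image masked by a depth threshold.
    import numpy as np

    def compound_obstacle_image(color_bgr, depth_m, max_range_m=1.5):
        mask = (depth_m > 0) & (depth_m < max_range_m)   # valid, near-range pixels
        out = np.zeros_like(color_bgr)
        out[mask] = color_bgr[mask]
        return out, mask

    # The per-column density of the mask can then drive a simple steering rule that
    # turns the robot away from the side with the most near-range pixels.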

Development of On-line Grading Algorithm of Green Pepper Using Machine Vision (기계시각에 의한 풋고추 온라인 등급판정 알고리즘 개발)

  • Cho, N. H.; Lee, S. H.; Hwang, H.; Lee, Y. H.; Choi, S. M.; Park, J. R.; Cho, K. H.
    • Journal of Biosystems Engineering, v.26 no.6, pp.571-578, 2001
  • Production of green peppers has increased over the past ten years in Korea, as consumers' preference has turned toward fresh peppers. This study was conducted to develop an on-line grading algorithm for green peppers using machine vision, with the aim of developing an automatic on-line grading and sorting system. The machine vision system was composed of a progressive scan RGB CCD camera, a frame grabber, and sets of 3-wave fluorescent lamps. The length and curvature, which were the main quality factors of a green pepper, were measured after removing the stem region. The first derivative of the thickness profile was used to remove the stem area from the segmented image of the pepper. A new boundary was generated after the stem was removed, and a baseline of the pepper, used for the curvature determination, was also generated. The developed algorithm showed that the accuracy of the size measurement was 86.6% and the accuracy of the curvature (bend) measurement was 91.9%. Processing time spent for grading was around 0.17 sec per pepper.
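
As a rough illustration of the thickness-profile idea, the sketch below measures the silhouette thickness per image column and cuts at the first sharp change in its first derivative, assuming the pepper lies roughly horizontally with the stem toward the left; the threshold and orientation are assumptions, not the paper's algorithm.

    # Sketch: stem removal from a binary pepper silhouette via the thickness profile.
    import numpy as np

    def remove_stem(binary_mask, jump_threshold=5):
        """binary_mask: 2D 0/1 array; returns the mask with stem-side columns cleared."""
        thickness = binary_mask.sum(axis=0)              # silhouette thickness per column
        derivative = np.diff(thickness.astype(int))      # first derivative of the profile
        jumps = np.where(np.abs(derivative) > jump_threshold)[0]
        if len(jumps) == 0:
            return binary_mask                           # no obvious stem/body boundary
        body = binary_mask.copy()
        body[:, :jumps[0] + 1] = 0                       # drop columns up to the boundary
        return body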
