• Title/Summary/Keyword: camera vision

Search results: 1,386

Estimation of Urban Traffic State Using Black Box Camera (차량 블랙박스 카메라를 이용한 도시부 교통상태 추정)

  • Haechan Cho;Yeohwan Yoon;Hwasoo Yeo
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.22 no.2 / pp.133-146 / 2023
  • Traffic states in urban areas are essential for effective traffic operation and control. However, installing traffic sensors on numerous road sections is extremely expensive. Estimating the traffic state with vehicle-mounted cameras, which have a high penetration rate, is therefore a more cost-effective solution. However, previously proposed methods based on object tracking or optical flow have a high computational cost and require consecutive frames to obtain traffic states. Accordingly, we propose a method that detects vehicles and lanes with object detection networks and sets the region between lanes as a region of interest to estimate the traffic density of the corresponding area. The proposed method uses only computationally inexpensive object detection models and can estimate traffic states from sampled rather than consecutive frames. The traffic density estimation accuracy was over 90% on black box videos collected from two buses with different characteristics.
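
The density idea above can be sketched in a few lines: count detected vehicles whose box centers fall inside the lane region of interest, then divide by the covered road length. The box format, rectangular ROI, and road length below are illustrative assumptions, not the authors' implementation.

```python
def box_center(box):
    """Center (x, y) of an axis-aligned box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def traffic_density(vehicle_boxes, lane_roi, road_length_m):
    """Vehicles per meter whose centers lie in a rectangular lane ROI."""
    rx1, ry1, rx2, ry2 = lane_roi
    inside = 0
    for box in vehicle_boxes:
        cx, cy = box_center(box)
        if rx1 <= cx <= rx2 and ry1 <= cy <= ry2:
            inside += 1
    return inside / road_length_m

boxes = [(10, 40, 30, 60), (50, 45, 70, 65), (200, 10, 220, 30)]
density = traffic_density(boxes, lane_roi=(0, 30, 100, 80), road_length_m=50.0)
print(density)  # 2 of the 3 boxes fall in the ROI -> 0.04 vehicles/m
```

Because only box positions from a single sampled frame are needed, this matches the abstract's point that no consecutive frames or tracking are required.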

A Study on the Estimation of Multi-Object Social Distancing Using Stereo Vision and AlphaPose (Stereo Vision과 AlphaPose를 이용한 다중 객체 거리 추정 방법에 관한 연구)

  • Lee, Ju-Min;Bae, Hyeon-Jae;Jang, Gyu-Jin;Kim, Jin-Pyeong
    • KIPS Transactions on Software and Data Engineering / v.10 no.7 / pp.279-286 / 2021
  • Recently, a policy of keeping a physical distance of at least 1 m between people has been enforced in public places to prevent the spread of COVID-19. In this paper, we propose a method for measuring distances between people in real time, together with an automated system that flags objects within 1 m of each other in stereo images acquired by drones or CCTVs. A problem with existing methods for estimating distances between multiple objects is that a single CCTV cannot provide three-dimensional information about the objects, which is necessary to measure distances between people who stand right next to each other or overlap in a two-dimensional image. Furthermore, existing methods rely only on bounding box information to locate a person. Therefore, to obtain the exact two-dimensional coordinates at which a person is located, we extract the person's key points, convert them to three-dimensional coordinates using stereo vision and camera calibration, and estimate the Euclidean distance between people. In experiments evaluating the accuracy of the 3D coordinates and the inter-object distances, the average error was within 0.098 m for distances between multiple people within 1 m.

Images Grouping Technology based on Camera Sensors for Efficient Stitching of Multiple Images (다수의 영상간 효율적인 스티칭을 위한 카메라 센서 정보 기반 영상 그룹핑 기술)

  • Im, Jiheon;Lee, Euisang;Kim, Hoejung;Kim, Kyuheon
    • Journal of Broadcast Engineering / v.22 no.6 / pp.713-723 / 2017
  • Since a panoramic image overcomes the limited viewing angle of a single camera and provides a wide field of view, it has been studied extensively in computer vision and stereo camera applications. To generate a panoramic image, stitching images taken by several ordinary cameras is widely used instead of a wide-angle camera, since wide-angle lenses introduce distortion. Image stitching creates descriptors of feature points extracted from multiple images, compares their similarities, and links the images together into one. Each feature point carries several hundred dimensions of information, so data processing time grows as more images are stitched. In particular, when a panorama is generated from images of an object taken by many unspecified cameras, extracting the overlapping feature points of similar images takes even longer. In this paper, we propose a preprocessing step that makes stitching efficient for images obtained from many unspecified cameras of one object or environment: images are pre-grouped based on camera sensor information, reducing the number of images stitched at one time, and the groups are then stitched hierarchically into one large panorama. Experimental results confirmed that the proposed grouping preprocessing greatly reduces the stitching time for a large number of images.
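
The grouping preprocessing can be sketched as bucketing images by their sensor metadata, so that only images likely to overlap are stitched together first. The metadata fields (GPS position, yaw) and the grid/sector thresholds below are invented for illustration; the paper does not specify these exact values.

```python
def group_key(meta, grid_deg=0.001, yaw_sector=45):
    """Quantize position and heading so nearby, similarly oriented shots share a key."""
    lat, lon, yaw = meta["lat"], meta["lon"], meta["yaw"]
    return (round(lat / grid_deg), round(lon / grid_deg), int(yaw // yaw_sector))

def group_images(metas):
    groups = {}
    for name, meta in metas.items():
        groups.setdefault(group_key(meta), []).append(name)
    return groups

metas = {
    "a.jpg": {"lat": 37.5001, "lon": 127.0002, "yaw": 10},
    "b.jpg": {"lat": 37.5002, "lon": 127.0001, "yaw": 30},
    "c.jpg": {"lat": 37.6000, "lon": 127.2000, "yaw": 10},
}
groups = group_images(metas)
print(groups)  # a.jpg and b.jpg share a group; c.jpg is far away
```

Each group would then be stitched internally, and the resulting partial panoramas stitched hierarchically, which is where the reported time savings come from.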

Autonomous Mobile Robot System Using Adaptive Spatial Coordinates Detection Scheme based on Stereo Camera (스테레오 카메라 기반의 적응적인 공간좌표 검출 기법을 이용한 자율 이동로봇 시스템)

  • Ko Jung-Hwan;Kim Sung-Il;Kim Eun-Soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.31 no.1C / pp.26-35 / 2006
  • In this paper, an autonomous mobile robot system for intelligent path planning using a spatial coordinate detection scheme based on a stereo camera is proposed. In the proposed system, the face region of a moving person is detected in the left image of the stereo pair using the YCbCr color model, and its center coordinates are computed with the centroid method; using these data, the stereo camera mounted on the mobile robot is controlled to track the moving target in real time. Moreover, depth information is recovered from the disparity map obtained from the left and right images captured by the tracking-controlled stereo camera system, together with the perspective transformation between the 3-D scene and the image plane. Finally, based on an analysis of these calculated coordinates, a mobile robot system capable of intelligent path planning and estimation is derived. In driving experiments with 240 frames of stereo images, the error ratios between the calculated and measured values of the distance from the mobile robot to the objects, and of the relative distances between the objects, were found to be as low as 2.19% and 1.52% on average, respectively.
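
Two steps in the pipeline above lend themselves to a short sketch: the centroid method for locating the detected face region, and depth recovery from disparity (Z = f·B/d). The YCbCr skin detection is replaced here by an already-computed binary mask, and the calibration numbers are illustrative.

```python
def centroid(mask):
    """Center of mass of a binary mask given as a list of rows of 0/1."""
    total = sx = sy = 0
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if v:
                total += 1
                sx += x
                sy += y
    return (sx / total, sy / total)

def depth_from_disparity(disparity_px, focal_px=500.0, baseline_m=0.1):
    return focal_px * baseline_m / disparity_px

mask = [
    [0, 1, 1, 0],
    [0, 1, 1, 0],
]
cx, cy = centroid(mask)
print((cx, cy), depth_from_disparity(25.0))  # centroid (1.5, 0.5), depth 2.0 m
```

The centroid drives the camera's tracking control, while the recovered depth feeds the robot's path planning, mirroring the structure described in the abstract.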

Performance Enhancement of the Attitude Estimation using Small Quadrotor by Vision-based Marker Tracking (영상기반 물체추적에 의한 소형 쿼드로터의 자세추정 성능향상)

  • Kang, Seokyong;Choi, Jongwhan;Jin, Taeseok
    • Journal of the Korean Institute of Intelligent Systems / v.25 no.5 / pp.444-450 / 2015
  • The accuracy of a small, low-cost CCD camera is insufficient to provide data for precisely tracking unmanned aerial vehicles (UAVs). This study shows how a UAV can hover over a tracked target using a CCD camera rather than imprecise GPS data. To realize this, UAVs need to recognize their attitude and position in both known and unknown environments, and their localization should occur naturally. Estimating a UAV's attitude through environment recognition is one of the most important problems for UAV hovering. In this paper, we describe a method for estimating the attitude of a UAV using image information from a marker on the floor. The method combines the position observed from GPS sensors with the attitude estimated from images captured by a fixed camera. Using the a priori known path of the UAV in world coordinates and a perspective camera model, we derive the geometric constraint equations relating the image-frame coordinates of the floor marker to the estimated attitude of the UAV. Since the equations are based on the estimated position, measurement error is always present. The proposed method uses the error between the observed and estimated image coordinates to localize the UAV, applying a Kalman filter scheme. Its performance is verified by image processing results and experiments.
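
The Kalman filter scheme mentioned above can be illustrated with a toy scalar update: predict the attitude from the previous estimate, then correct it with the attitude observed from the floor marker. The noise variances and angle values below are arbitrary illustrative numbers, not the paper's parameters.

```python
def kalman_update(x_pred, p_pred, z_meas, r_meas):
    k = p_pred / (p_pred + r_meas)      # Kalman gain
    x = x_pred + k * (z_meas - x_pred)  # blend prediction and measurement
    p = (1 - k) * p_pred                # uncertainty shrinks after the update
    return x, p

x, p = 10.0, 4.0  # predicted roll angle (deg) and its variance
x, p = kalman_update(x, p, z_meas=12.0, r_meas=4.0)
print(x, p)  # gain 0.5 -> estimate 11.0, variance 2.0
```

With equal prediction and measurement variances the gain is 0.5, so the estimate lands halfway between the GPS-driven prediction and the marker observation; a more trusted camera (smaller r_meas) would pull the estimate further toward the image measurement.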

Image Mosaicking Using Feature Points Based on Color-invariant (칼라 불변 기반의 특징점을 이용한 영상 모자이킹)

  • Kwon, Oh-Seol;Lee, Dong-Chang;Lee, Cheol-Hee;Ha, Yeong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.2 / pp.89-98 / 2009
  • In the field of computer vision, image mosaicking is a common method for effectively extending the restricted field of view of a camera by combining a set of separate images into a single seamless image. Image mosaicking based on feature points has recently been a focus of research because the geometric transformation can be estimated simply, regardless of the distortions and intensity differences caused by camera motion across consecutive images. Yet, since most feature-point matching algorithms extract feature points from gray values, identifying corresponding points becomes difficult under changing illumination or in images with similar intensities. To solve these problems, this paper proposes an image mosaicking method based on feature points that uses the color information of the images. The digital values acquired from a digital color camera are converted to the values of a virtual camera with distinct narrow bands. Values based on surface reflectance, invariant to the chromaticity of various illuminations, are then derived from the virtual camera values and defined as color-invariant values. The validity of these color-invariant values is verified in a test using a Macbeth ColorChecker under simulated illuminations, which also compares the proposed method with the conventional SIFT algorithm. The matching accuracy of the feature points extracted by the proposed method is increased, while image mosaicking using color information is also achieved.
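
The paper derives its own narrow-band color invariants, which are not reproduced here. As a much simpler stand-in that conveys the idea, this sketch shows the classic normalized-rgb chromaticity, which is invariant to a uniform scaling of illumination intensity (all channels multiplied by the same factor).

```python
def chromaticity(r, g, b):
    """Normalized rgb: unchanged when (r, g, b) is scaled by a common factor."""
    s = r + g + b
    return (r / s, g / s, b / s)

print(chromaticity(60, 120, 20))   # a pixel under dim light
print(chromaticity(120, 240, 40))  # the same surface, twice as bright
```

Both calls yield the same triple, so features computed from such values would match across the illumination change; the paper's invariants additionally handle illumination chromaticity, which this toy measure does not.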

Inspection System for The Metal Mask (Metal Mask 검사시스템)

  • 최경진;이용현;박종국
    • Journal of the Institute of Electronics Engineers of Korea SC / v.40 no.2 / pp.1-9 / 2003
  • We developed an experimental system to inspect metal masks and, in this paper, introduce its inspection algorithm. The system is composed of an ASC (Area Scan Camera) and a belt-type xy-table. The whole area of the metal mask is divided into several inspection blocks, each equal in area to the FOV (field of view). For each block, the camera image is compared to a reference image generated from the Gerber file. The rotation angle of the metal mask is calculated from the linear equation obtained by substituting the two end points of the horizontal boundary of a specific hole in the camera image. To compute the position error caused by the belt-type xy-table, a Hough transform (HT) over the distances among the holes in the two images is used. The center of the reference image is shifted by the calculated position error so that it coincides with the camera image. The properties of the holes in each image, such as centroid, size, width, and height, are computed through labeling. Whether a hole was made correctly by the laser machine is judged by comparing the centroid and size of the hole in each image. Finally, we built the experimental system and applied this algorithm.
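
The rotation-angle step can be sketched directly: the mask's tilt is the slope of the line through the two end points of a hole's horizontal boundary, per the "linear equation" described above. The coordinates are invented for illustration.

```python
import math

def mask_rotation_deg(p1, p2):
    """Tilt of the mask from two end points of a hole's horizontal boundary."""
    (x1, y1), (x2, y2) = p1, p2
    return math.degrees(math.atan2(y2 - y1, x2 - x1))

angle = mask_rotation_deg((100, 200), (300, 210))
print(round(angle, 2))  # 10 px rise over 200 px run -> about 2.86 degrees
```

Once the tilt and the HT-derived translation are known, the reference image can be aligned to the camera image before the per-hole centroid and size comparisons.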

Terrain Geometry from Monocular Image Sequences

  • McKenzie, Alexander;Vendrovsky, Eugene;Noh, Jun-Yong
    • Journal of Computing Science and Engineering / v.2 no.1 / pp.98-108 / 2008
  • Terrain reconstruction from images is an ill-posed, yet commonly desired, Structure from Motion task when compositing visual effects into live-action photography. These surfaces are required for choreographing a scene, casting physically accurate shadows of CG elements, and handling occlusions. We present a novel framework for generating the geometry of landscapes from extremely noisy point cloud datasets obtained via limited-resolution techniques, particularly optical flow based vision algorithms applied to live-action video plates. Our contribution is a new statistical approach that removes erroneous tracks ('outliers') by employing a unique combination of well-established techniques, including Gaussian Mixture Models (GMMs) for robust parameter estimation and Radial Basis Functions (RBFs) for scattered data interpolation, to exploit the natural constraints of this problem. Our algorithm offsets the tremendously laborious task of modeling these landscapes by hand, automatically generating a visually consistent, camera-position-dependent, thin-shell surface mesh within seconds for a typical tracking shot.
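
As a greatly simplified stand-in for the outlier stage: the authors fit a Gaussian Mixture Model to the track statistics, whereas the sketch below fits a single Gaussian and discards tracks beyond 1.5 sigma, just to show the shape of the filtering step. The RBF interpolation stage is omitted, and all numbers are illustrative.

```python
import math

def filter_outliers(errors, k=1.5):
    """Keep values within k standard deviations of the mean."""
    n = len(errors)
    mean = sum(errors) / n
    var = sum((e - mean) ** 2 for e in errors) / n
    sigma = math.sqrt(var)
    return [e for e in errors if abs(e - mean) <= k * sigma]

errors = [0.1, 0.2, 0.15, 0.12, 5.0]  # one grossly wrong track
kept = filter_outliers(errors)
print(kept)  # the 5.0 track is rejected
```

A mixture model, unlike this single Gaussian, can separate several coexisting error populations, which is why the paper's GMM is better suited to the very noisy optical-flow point clouds it targets.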

Background Subtraction in Dynamic Environment based on Modified Adaptive GMM with TTD for Moving Object Detection

  • Niranjil, Kumar A.;Sureshkumar, C.
    • Journal of Electrical Engineering and Technology / v.10 no.1 / pp.372-378 / 2015
  • Background subtraction is the first processing stage in video surveillance. It is a general term for a process that aims to separate foreground objects from the background; the goal is to construct and maintain a statistical representation of the scene that the camera sees, whose output feeds a higher-level process. Background subtraction under a dynamic environment in video sequences is one such complex task, and an important research topic in image analysis and computer vision. This work presents background modeling based on a modified adaptive Gaussian mixture model (GMM) with three-frame temporal differencing (TTD) in dynamic environments. Results of background subtraction on several sequences in various testing environments show that the proposed method is efficient and robust in dynamic environments and achieves good accuracy.
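
The temporal-differencing half of the approach can be sketched on tiny integer frames: a pixel is flagged as moving only when it differs from both of the two preceding frames, which suppresses noise that a single frame difference would pass. The GMM background model is not shown, and the threshold is an illustrative assumption.

```python
def three_frame_diff(f_prev2, f_prev1, f_cur, thresh=20):
    """Motion mask: 1 where the pixel differs from both earlier frames."""
    h, w = len(f_cur), len(f_cur[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d1 = abs(f_cur[y][x] - f_prev1[y][x])
            d2 = abs(f_cur[y][x] - f_prev2[y][x])
            if d1 > thresh and d2 > thresh:  # logical AND of two differences
                mask[y][x] = 1
    return mask

f0 = [[10, 10], [10, 10]]
f1 = [[10, 10], [10, 10]]
f2 = [[10, 200], [10, 10]]  # one pixel jumps in brightness
motion = three_frame_diff(f0, f1, f2)
print(motion)  # only the changed pixel is flagged
```

In the paper this mask is combined with the adaptive GMM's foreground estimate, so that transient lighting changes the GMM absorbs do not leak into the detection.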

Developing a Dynamic Selection Algorithm in Multiple Cameras (다중 카메라의 동적인 선택 알고리즘 개발)

  • Jang, Seok-Woo;Choi, Hyun-Jun;Lee, Suk-Yun
    • Proceedings of the Korean Society of Computer Information Conference / 2013.01a / pp.223-225 / 2013
  • In this paper, we propose an algorithm that, in an environment with multiple cameras, dynamically selects the camera best suited to the surrounding conditions. The proposed algorithm takes the initial input images and extracts from them brightness and texture features, which best characterize the surrounding environment. It then generates a rule for selecting the camera that best reflects the extracted brightness and texture values, thereby automatically choosing the camera appropriate for the current surroundings. Experimental results show that the proposed method operates well in a variety of environments and, by selecting the camera suited to the surroundings, extracts more accurate three-dimensional information.
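
The selection rule can be sketched as scoring each camera's frame by mean brightness and a simple texture measure (intensity variance), then picking the best-scoring camera. The target brightness, the variance weighting, and the frame values below are invented for illustration; the paper does not publish its exact rule.

```python
def features(frame):
    """Mean brightness and intensity variance of a small grayscale frame."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    return mean, var

def select_camera(frames, target_brightness=128.0):
    def score(item):
        mean, var = features(item[1])
        # lower is better: near-target brightness, richer texture
        return abs(mean - target_brightness) - 0.01 * var
    return min(frames.items(), key=score)[0]

frames = {
    "cam0": [[20, 20], [20, 20]],      # too dark and featureless
    "cam1": [[100, 160], [120, 140]],  # well exposed, textured
}
chosen = select_camera(frames)
print(chosen)  # cam1
```

The chosen camera's frames would then feed the downstream 3-D extraction mentioned in the abstract, where a well-exposed, textured view matters most.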
