• Title/Summary/Keyword: camera image


Motion Detection using Adaptive Background Image and Pixel Space (적응적 배경영상과 픽셀 간격을 이용한 움직임 검출)

  • 지정규;이창수;오해석
    • Journal of Information Technology Applications and Management, v.10 no.3, pp.45-54, 2003
  • Security systems with web cameras have developed remarkably in the Internet era. Using images transmitted from a remote camera, such a system can recognize the current situation and take proper action through the web. Existing motion detection methods simply use difference images, background-image techniques, or block matching algorithms that set an initial block, establish a search area, and look for a similar block. However, these methods have difficulty detecting exact motion because of noise. The method proposed in this paper takes the difference between the input image and an initial background image, and then updates the changed background over $N{\times}M$ pixel masks as time goes on. Motion can then be detected efficiently by examining pixels at a fixed spacing instead of operating on every pixel.

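A minimal sketch of the scheme described in the abstract above, assuming grayscale frames; the block size, pixel spacing, thresholds, and update rate below are illustrative values, not the paper's:

```python
import cv2
import numpy as np

BLOCK = 16      # assumed N x M mask size for background updates
STEP = 4        # assumed pixel spacing: test every 4th pixel instead of all pixels
DIFF_T = 25     # assumed per-pixel difference threshold

def detect_motion(frame, background, alpha=0.1):
    """Return (motion flag, updated background)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    diff = np.abs(gray - background)

    # Check only pixels on a fixed-spacing grid rather than every pixel.
    moving = diff[::STEP, ::STEP] > DIFF_T
    motion = moving.mean() > 0.01          # assumed motion-ratio threshold

    # Update the background only in blocks that show no motion, so it adapts
    # to gradual changes (e.g. lighting) without absorbing moving objects.
    h, w = gray.shape
    for y in range(0, h, BLOCK):
        for x in range(0, w, BLOCK):
            block = diff[y:y+BLOCK, x:x+BLOCK]
            if block.mean() < DIFF_T:
                background[y:y+BLOCK, x:x+BLOCK] = (
                    (1 - alpha) * background[y:y+BLOCK, x:x+BLOCK]
                    + alpha * gray[y:y+BLOCK, x:x+BLOCK])
    return motion, background
```

Checking only every STEP-th pixel reduces the per-frame cost roughly by a factor of STEP squared, which is the efficiency argument made in the abstract.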

Location Identification Using a Fisheye Lens and Landmarks Placed on the Ceiling in a Cleaning Robot (어안렌즈와 천장의 위치인식 마크를 활용한 청소로봇의 자기 위치 인식 기술)

  • Kang, Tae-Gu;Lee, Jae-Hyun;Jung, Kwang-Oh;Cho, Deok-Yeon;Yim, Choog-Hyuk;Kim, Dong-Hwan
    • Journal of Institute of Control, Robotics and Systems, v.15 no.10, pp.1021-1028, 2009
  • In this paper, a location identification method for a cleaning robot is introduced, using a camera aimed at a room ceiling on which three point landmarks are projected. These three points are produced by a laser source placed on the auto charger. A fisheye lens covering almost 150 degrees is used, and the image is transferred to a camera image grabber. The widely shot image has an inevitable distortion even though a wide range is covered. This distortion is flattened using an image warping scheme. Several vision processing techniques, such as intersection extraction, erosion, and curve fitting, are employed. Next, the three point marks are identified and their correspondence is investigated. Through this image processing and distortion adjustment, the robot's location is identified over a wide geometrical coverage.
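
A rough illustration of the front-end steps in Python, using OpenCV's fisheye model as a stand-in for the paper's warping scheme; the intrinsics K, distortion D, and brightness threshold are placeholders:

```python
import cv2
import numpy as np

# Placeholder fisheye intrinsics; real values come from a lens calibration.
K = np.array([[300.0, 0.0, 320.0],
              [0.0, 300.0, 240.0],
              [0.0, 0.0, 1.0]])
D = np.array([0.1, -0.05, 0.0, 0.0])   # fisheye distortion coefficients

def find_ceiling_marks(fisheye_img):
    # Flatten the fisheye distortion (stand-in for the paper's warping scheme).
    undistorted = cv2.fisheye.undistortImage(fisheye_img, K, D, Knew=K)

    # Isolate the three bright laser landmarks and clean up with erosion.
    gray = cv2.cvtColor(undistorted, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    mask = cv2.erode(mask, np.ones((3, 3), np.uint8))

    # Landmark centroids; the robot pose follows from their correspondence.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centers = [cv2.minEnclosingCircle(c)[0] for c in contours]
    return undistorted, centers
```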

Automated measurement of tool wear using an image processing system

  • Sawai, Nobushige;Song, Joonyeob;Park, Hwayoung
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference, 1995.10a, pp.311-314, 1995
  • This paper presents a method for measuring tool wear parameters based on two-dimensional image information. The tool wear images were obtained from an ITV camera with magnifying and lighting devices, and were analyzed using image processing techniques such as thresholding, noise filtering, and boundary tracing. Thresholding was used to transform the captured gray-scale image into a binary image for rapid sequential image processing. The threshold level was determined using a novel technique in which the brightness histograms of two concentric windows containing the tool wear image were compared. The use of noise filtering and boundary tracing to reduce measurement errors was also explored. Performance tests of measurement precision and processing speed showed that the direct method was highly effective for intermittent tool wear monitoring.

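A sketch of the thresholding and measurement steps; the paper's exact histogram-comparison criterion is not reproduced here, so the midpoint rule between the two windows' mean brightness is an assumption:

```python
import cv2
import numpy as np

def wear_threshold(gray, cx, cy, inner=40, outer=80):
    """Estimate a binarization threshold from two concentric windows centred
    on the wear region (inner window: wear zone, outer window: background)."""
    win_in = gray[cy-inner:cy+inner, cx-inner:cx+inner]
    win_out = gray[cy-outer:cy+outer, cx-outer:cx+outer]
    # Assumed criterion: midpoint between the two windows' mean brightness.
    return int(0.5 * (win_in.mean() + win_out.mean()))

def measure_wear(gray, cx, cy):
    t = wear_threshold(gray, cx, cy)
    _, binary = cv2.threshold(gray, t, 255, cv2.THRESH_BINARY)
    binary = cv2.medianBlur(binary, 5)                       # noise filtering
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)    # boundary tracing
    areas = [cv2.contourArea(c) for c in contours]
    return max(areas) if areas else 0.0   # e.g. wear area in pixels
```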

User Positioning Method Based on Image Similarity Comparison Using Single Camera (단일 카메라를 이용한 이미지 유사도 비교 기반의 사용자 위치추정)

  • Song, Jinseon;Hur, SooJung;Park, Yongwan;Choi, Jeonghee
    • The Journal of Korean Institute of Communications and Information Sciences, v.40 no.8, pp.1655-1666, 2015
  • In this paper, a user-position estimation method using a single camera is proposed for both indoor and outdoor environments. Conventionally, GPS-based and RF-based estimation methods have been widely studied for outdoor and indoor environments, respectively, but each is useful only in one of the two. In this context, this study adopts a vision-based approach that is applicable to both. Since distance or position cannot be extracted from a single still image alone, reference images pre-stored in an image database are used to identify the current position from the image captured by a single camera. Each reference image is tagged with the position at which it was captured. To find the reference image most similar to the current image, the SURF algorithm is used for feature extraction, and outliers among the extracted features are discarded using the RANSAC algorithm. The performance of the proposed method is evaluated in two buildings and their surroundings, covering both indoor and outdoor environments.
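
A compact sketch of the matching stage: SURF features (this requires the opencv-contrib build), a ratio test, and a RANSAC homography whose inlier count serves as the similarity score; the score definition and database lookup are illustrative assumptions:

```python
import cv2
import numpy as np

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
bf = cv2.BFMatcher(cv2.NORM_L2)

def similarity(query_img, ref_img, ratio=0.7):
    """Number of RANSAC inliers between a query and a reference image."""
    kq, dq = surf.detectAndCompute(query_img, None)
    kr, dr = surf.detectAndCompute(ref_img, None)
    if dq is None or dr is None:
        return 0

    # Lowe's ratio test keeps only distinctive matches.
    pairs = bf.knnMatch(dq, dr, k=2)
    matches = [p[0] for p in pairs
               if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    if len(matches) < 4:
        return 0

    src = np.float32([kq[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kr[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC homography discards outlier correspondences.
    _, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return int(inlier_mask.sum()) if inlier_mask is not None else 0

# The estimated position is the tagged position of the best-scoring reference:
# best_ref = max(reference_db, key=lambda r: similarity(query, r.image))
```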

Development and Application of High-resolution 3-D Volume PIV System by Cross-Correlation (해상도 3차원 상호상관 Volume PIV 시스템 개발 및 적용)

  • Kim Mi-Young;Choi Jang-Woon;Lee Hyun;Lee Young-Ho
    • Proceedings of the KSME Conference, 2002.08a, pp.507-510, 2002
  • An algorithm for three-dimensional particle image velocimetry (3D-PIV) was developed for measuring the 3-D velocity field of complex flows. The measurement system consists of two or three CCD cameras and one RGB image grabber. The flow field size is $1500{\times}100{\times}180$ mm, the tracer particles are Nylon 12 (1 mm), and the illuminator is a halogen-type lamp (100 W). Stereo photogrammetry is adopted for the three-dimensional geometrical measurement of the tracer particles. For stereo-pair matching, the camera parameters must be determined in advance by camera calibration based on the collinearity equations. To calculate a particle's 3-D position by stereo photogrammetry, the eleven parameters of each camera are obtained through calibration, and the epipolar line is used for stereo-pair matching. The 3-D position of a particle is then calculated from the camera parameters, the centers of projection of the cameras, and the photographic coordinates of the particle, based on the collinearity condition. Velocity vectors are obtained from the 3-D particle positions in the first and second frames, and error vectors are removed by applying the continuity equation. This study also developed various 3D-PIV animation techniques.

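For reference, the eleven camera parameters mentioned above correspond to the standard direct linear transformation (DLT) form of the collinearity condition, mapping a particle's space coordinates $(X, Y, Z)$ to its photographic coordinates $(x, y)$ for each calibrated camera (shown in the usual textbook form, not necessarily the authors' exact notation):

$$x = \frac{L_1 X + L_2 Y + L_3 Z + L_4}{L_9 X + L_{10} Y + L_{11} Z + 1}, \qquad y = \frac{L_5 X + L_6 Y + L_7 Z + L_8}{L_9 X + L_{10} Y + L_{11} Z + 1}$$

With the $L_i$ known for each camera from calibration, a particle's 3-D position follows by intersecting the projection rays of the matched image points, and the epipolar line derived from these equations constrains the stereo-pair matching.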

Estimation of Camera Calibration Parameters using Line Corresponding Method (선 대응 기법을 이용한 카메라 교정파라미터 추정)

  • 최성구;고현민;노도환
    • The Transactions of the Korean Institute of Electrical Engineers D, v.52 no.10, pp.569-574, 2003
  • Computer vision systems are broadly adopted in applications such as autonomous vehicles and product-line inspection because they can deal with the environment flexibly. To apply them in such industries, however, the problem of recognizing the system's own position parameters must be solved, which calls for camera calibration. Camera calibration involves the intrinsic parameters, which describe electrical and optical characteristics, and the extrinsic parameters, which express the pose and position of the camera, and these parameters have to be re-estimated as the environment changes. In traditional methods, however, calibration is performed off-line, so the parameters must be estimated all over again whenever conditions vary. In this paper, we propose a camera calibration method using line correspondences in image sequences captured under varying environments. The method statistically compensates for the correspondence errors of point-based methods through line extraction, and line correspondence is robust to environmental variation. Experimental results show that the estimated parameter error is within 1%, demonstrating the method's effectiveness.
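
As background for the quantities being estimated, the intrinsic and extrinsic parameters enter the standard projective camera model shown below (textbook notation, not the paper's line-based formulation):

$$s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K\,[\,R \mid t\,]\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}, \qquad K = \begin{bmatrix} f_x & \gamma & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$$

Here $K$ holds the intrinsic (optical and electrical) parameters while $R$ and $t$ express the camera pose and position; the proposed method re-estimates them from line correspondences tracked across the image sequence rather than from point correspondences.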

Camera Parameter Extraction Method for Virtual Studio Applications by Tracking the Location of TV Camera (가상스튜디오에서 실사 TV 카메라의 3-D 기준 좌표와 추적 영상을 이용한 카메라 파라메타 추출 방법)

  • 한기태;김회율
    • Journal of Broadcast Engineering, v.4 no.2, pp.176-186, 1999
  • To produce images that convey realism to the audience in a virtual studio system, it is important to precisely synchronize the foreground objects with the computer-generated background image. In this paper, we propose a method of extracting the camera parameters needed for this synchronization by tracking the pose of the TV camera. We derive an equation for extracting the camera parameters from inverse perspective equations for tracking the camera pose, together with the 3-D transformation between the base coordinates and the estimated coordinates. We show the validity of the proposed method in terms of the accuracy ratio between the parameters computed from the equation and the real parameters applied to a TV camera.

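The extraction step can be viewed as a pose-from-reference-points problem: given 3-D reference coordinates and their tracked image positions, the camera rotation and translation are recovered by inverting the perspective projection. The sketch below uses OpenCV's solvePnP as a generic stand-in for the authors' derived equation; the reference points and intrinsics are placeholders:

```python
import cv2
import numpy as np

# Placeholder 3-D reference (base) coordinates fixed in the studio, in metres.
object_pts = np.array([[0.0, 0.0, 0.0],
                       [1.0, 0.0, 0.0],
                       [1.0, 1.0, 0.0],
                       [0.0, 1.0, 0.0]], dtype=np.float32)

# Placeholder intrinsics of the TV camera (from a prior calibration).
K = np.array([[1200.0, 0.0, 960.0],
              [0.0, 1200.0, 540.0],
              [0.0, 0.0, 1.0]])

def camera_pose(tracked_image_pts):
    """Recover rotation and translation from tracked 2-D reference points."""
    ok, rvec, tvec = cv2.solvePnP(object_pts,
                                  np.asarray(tracked_image_pts, np.float32),
                                  K, None)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec   # feed these to the CG renderer so the background follows
```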

A Study of the Scene-based NUC Using Image-patch Homogeneity for an Airborne Focal-plane-array IR Camera (영상 패치 균질도를 이용한 항공 탑재 초점면배열 중적외선 카메라 영상 기반 불균일 보정 기법 연구)

  • Kang, Myung-Ho;Yoon, Eun-Suk;Park, Ka-Young;Koh, Yeong Jun
    • Korean Journal of Optics and Photonics, v.33 no.4, pp.146-158, 2022
  • The detector of a focal-plane-array mid-wave infrared (MWIR) camera has different response characteristics for each detector pixel, resulting in nonuniformity between detector pixels. In addition, image nonuniformity occurs due to heat generation inside the camera during operation. To solve this problem, in the process of camera manufacturing it is common to use a gain-and-offset table generated from a blackbody to correct the difference between detector pixels. One method of correcting nonuniformity due to internal heat generation during the operation of the camera generates a new offset value based on input frame images. This paper proposes a technique for dividing an input image into block image patches and generating offset values using only homogeneous patches, to correct the nonuniformity that occurs during camera operation. The proposed technique may not only generate a nonuniformity-correction offset that can prevent motion marks due to camera-gaze movement of the acquired image, but may also improve nonuniformity-correction performance with a small number of input images. Experimental results show that distortion such as flow marks does not occur, and good correction performance can be confirmed even with half the number of input images or fewer, compared to the traditional method.
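
A condensed sketch of the offset-update idea: the frame is tiled into patches, only low-variance (homogeneous) patches contribute, and each pixel's deviation from its patch mean is averaged over frames into an offset table. The patch size, homogeneity test, and averaging rule are illustrative assumptions rather than the paper's exact procedure:

```python
import numpy as np

PATCH = 16     # assumed patch size
VAR_T = 4.0    # assumed homogeneity threshold on patch variance

def update_offset(frames):
    """Accumulate a scene-based offset table from a short stack of frames."""
    h, w = frames[0].shape
    offset_sum = np.zeros((h, w), np.float64)
    count = np.zeros((h, w), np.float64)

    for img in frames:
        f = img.astype(np.float64)
        for y in range(0, h, PATCH):
            for x in range(0, w, PATCH):
                patch = f[y:y+PATCH, x:x+PATCH]
                if patch.var() < VAR_T:          # use homogeneous patches only
                    # Deviation from the patch mean is attributed to per-pixel
                    # non-uniformity rather than to scene content.
                    offset_sum[y:y+PATCH, x:x+PATCH] += patch - patch.mean()
                    count[y:y+PATCH, x:x+PATCH] += 1

    offset = np.where(count > 0, offset_sum / np.maximum(count, 1), 0.0)
    return offset   # subtract from incoming frames to correct non-uniformity
```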

Global Localization Based on Ceiling Image Map (천장 영상지도 기반의 전역 위치추정)

  • Heo, Hwan;Song, Jae-Bok
    • The Journal of Korea Robotics Society, v.9 no.3, pp.170-177, 2014
  • This paper proposes a novel upward-looking camera-based global localization using a ceiling image map. The ceiling images obtained through the SLAM process are integrated into the ceiling image map using a particle filter. Global localization is performed by matching the ceiling image map with the current ceiling image using SURF keypoint correspondences. The robot pose is then estimated by the coordinate transformation from the ceiling image map to the global coordinate system. A series of experiments show that the proposed method is robust in real environments.
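
A rough sketch of the matching-and-pose step (SURF again requires the opencv-contrib build): keypoints from the current ceiling image are matched to the ceiling image map, a RANSAC similarity transform is fitted, and the transformed image center gives the robot position; the map-to-world conversion is reduced to a single placeholder scale factor:

```python
import cv2
import numpy as np

surf = cv2.xfeatures2d.SURF_create(400)
matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)

def localize(ceiling_img, ceiling_map, pixels_per_meter=100.0):
    """Estimate (x, y, theta) of the robot in the ceiling-map frame."""
    kc, dc = surf.detectAndCompute(ceiling_img, None)
    km, dm = surf.detectAndCompute(ceiling_map, None)
    matches = matcher.match(dc, dm)

    src = np.float32([kc[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([km[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Rotation + translation (+ scale) transform fitted with RANSAC.
    A, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    theta = np.arctan2(A[1, 0], A[0, 0])       # heading in the map frame

    # Assume the robot sits below the image center (upward-looking camera):
    # map-frame position of that center, converted by a placeholder scale.
    h, w = ceiling_img.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    mx = A[0, 0] * cx + A[0, 1] * cy + A[0, 2]
    my = A[1, 0] * cx + A[1, 1] * cy + A[1, 2]
    return mx / pixels_per_meter, my / pixels_per_meter, theta
```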

Determination of Epipolar Geometry for High Resolution Satellite Images

  • Noh Myoung-Jong;Cho Woosug
    • Proceedings of the KSRS Conference, 2004.10a, pp.652-655, 2004
  • The geometry of a satellite image captured by a linear pushbroom scanner differs from that of a frame camera image. Since the exterior orientation parameters of a satellite image vary scan line by scan line, its epipolar geometry also differs from that of a frame camera image. The 2D affine orientation for epipolar images of linear pushbroom scanner systems was well established using the collinearity equations (Tetsu Ono, 1999). Another epipolar geometry of linear pushbroom scanner systems was more recently established by Habib (2002), who reported that the epipolar geometry of linear pushbroom satellite images is realized by parallel projection based on 2D affine models. In this paper, we compare Ono's method with Habib's method and, in addition, propose a method for generating epipolar resampled images. IKONOS stereo images were used in the experiments for generating epipolar images.

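For context, under the parallel (affine) projection model attributed to Habib (2002), the epipolar constraint between corresponding points $(x, y)$ and $(x', y')$ reduces to the affine form below; this is the standard affine fundamental-matrix result, given here for illustration rather than taken from the paper:

$$\begin{bmatrix} x' & y' & 1 \end{bmatrix}\begin{bmatrix} 0 & 0 & a \\ 0 & 0 & b \\ c & d & e \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = a x' + b y' + c x + d y + e = 0$$

Because the constraint is linear in both images' coordinates, the epipolar lines are straight and mutually parallel, which is what makes 2D affine epipolar resampling of pushbroom imagery tractable.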