• Title/Summary/Keyword: camera image


Analysis of Effect on Camera Distortion for Measuring Velocity Using Surface Image Velocimeter (표면영상유속측정법을 이용한 유속 측정 시 카메라 왜곡 영향 분석)

  • Lee, Jun Hyeong;Yoon, Byung Man;Kim, Seo Jun
    • Ecology and Resilient Infrastructure / v.8 no.1 / pp.1-8 / 2021
  • A surface image velocimeter (SIV) measures the velocity of a particle group by calculating the intensity distribution of the particle group in two consecutive images of the water surface using a cross-correlation method. Therefore, to increase the accuracy of the flow velocity calculated by an SIV, it is important to accurately calculate the displacement of the particle group in the images; in other words, the change in the physical distance of the particle group between the two analyzed images must be accurately calculated. In images of an actual river taken with a camera, lens distortion inevitably occurs, which affects the displacement calculation. In this study, we analyzed the effect of camera lens distortion on the displacement calculation using a dense, uniformly spaced grid board. The results showed that the lens distortion gradually increased in the radial direction from the center of the image: the displacement calculation error reached 8.10% at the outer edge of the image and was within 5% near the center. In the future, camera lens distortion correction can be applied to improve the accuracy of river surface velocity measurements.
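The cross-correlation step the abstract describes can be sketched in one dimension: slide one frame over the other, take the shift with the highest correlation score, and convert pixels to a physical velocity. This is a minimal illustration, not the paper's implementation; the ground sampling distance and frame interval below are made-up values that would in practice come from calibration.

```python
# Minimal 1D sketch of the cross-correlation step used by surface image
# velocimetry (SIV): find the pixel displacement of an intensity pattern
# between two consecutive frames, then convert it to a physical velocity.

def cross_correlation_displacement(frame_a, frame_b, max_shift):
    """Return the shift (in pixels) that best aligns frame_b with frame_a."""
    best_shift, best_score = 0, float("-inf")
    for shift in range(-max_shift, max_shift + 1):
        score = 0.0
        for i, a in enumerate(frame_a):
            j = i + shift
            if 0 <= j < len(frame_b):
                score += a * frame_b[j]
        if score > best_score:
            best_score, best_shift = score, shift
    return best_shift

# A bright particle group centred at index 3 moves 2 pixels to the right.
frame_a = [0, 0, 1, 3, 1, 0, 0, 0, 0, 0]
frame_b = [0, 0, 0, 0, 1, 3, 1, 0, 0, 0]
shift_px = cross_correlation_displacement(frame_a, frame_b, max_shift=4)

# Convert pixel displacement to velocity (illustrative ground sampling
# distance and frame interval; in practice these come from calibration).
metres_per_pixel = 0.01
frame_interval_s = 0.1
velocity = shift_px * metres_per_pixel / frame_interval_s
print(shift_px, velocity)  # shift_px == 2, velocity about 0.2 m/s
```

Lens distortion enters exactly here: if the per-pixel physical distance is wrong near the image edges, the computed velocity inherits that error.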

An Automatic Mapping Points Extraction Algorithm for Calibration of the Wide Angle Camera (광각 카메라 영상의 보정을 위한 자동 정합 좌표 추출 방법)

  • Kim, Byung-Ik;Kim, Dae-Hyeon;Bae, Tae-Wuk;Kim, Young-Choon;Shim, Tae-Eun;Kim, Duk-Gyoo
    • Journal of Korea Multimedia Society / v.13 no.3 / pp.410-416 / 2010
  • This paper presents an automatic extraction method that searches for the mapping points used in the calibration of images acquired by a wide-angle CCD camera. The algorithm first removes noise from the distorted image and then obtains an edge image. The proposed method extracts the distortion points by comparing threshold values of the histograms of the horizontal and vertical pixel lines in the edge image. This processing step can be applied directly to the original output image of the wide-angle CCD camera. The results of the proposed method are compared against hand-marked reference images for two wide-angle CCD cameras with different view angles, using the difference between the result images. Experimental results show that the proposed method can determine the distortion-calibration constant of a wide-angle CCD camera regardless of lens type, distortion shape, and image type.
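The distortion-calibration constant such methods estimate usually parameterizes a radial model. As a hedged sketch (the abstract does not state the model; the single-parameter polynomial form and the value of `k1` below are illustrative assumptions):

```python
# Single-parameter polynomial radial distortion model commonly used in
# wide-angle lens calibration: a point is displaced along the ray from the
# image centre by a factor (1 + k1 * r^2).

def distort(x, y, k1, cx=0.0, cy=0.0):
    """Map an undistorted point to its radially distorted position."""
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    factor = 1.0 + k1 * r2
    return cx + dx * factor, cy + dy * factor

# With negative k1 (barrel distortion, typical of wide-angle lenses) points
# are pulled toward the centre, and the effect grows with radial distance.
near = distort(0.1, 0.0, k1=-0.2)  # barely moved
far = distort(1.0, 0.0, k1=-0.2)   # pulled in to x = 0.8
print(near, far)
```

Inverting this mapping (given `k1` found from the extracted grid points) is what straightens the distorted image.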

Image Synthesis and Multiview Image Generation using Control of Layer-based Depth Image (레이어 기반의 깊이영상 조절을 이용한 영상 합성 및 다시점 영상 생성)

  • Seo, Young-Ho;Yang, Jung-Mo;Kim, Dong-Wook
    • Journal of the Korea Institute of Information and Communication Engineering / v.15 no.8 / pp.1704-1713 / 2011
  • This paper proposes a method to generate multiview images from a synthesized image consisting of layered objects. A camera system consisting of a depth camera and an RGB camera is used to capture the objects and extract 3-dimensional information. Considering the position and distance of each object in the synthesized image, the objects are composited into a layered image. The synthesized image is then expanded into multiview images using multiview generation tools. In this paper, we synthesized two images consisting of objects and a human, and generated multiview images with 37 viewpoints from the synthesized images.
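The layer-based compositing step can be sketched as a per-pixel depth test: at every pixel, the nearest layer that covers it wins. This is a generic illustration of the idea, not the paper's tool chain; the layer format below is an assumption.

```python
# Layer-based compositing sketch: each layer carries a colour image and a
# per-pixel depth (None = transparent); at every pixel the nearest
# (smallest-depth) covering layer wins.

def composite(layers, width, height, background=0):
    """layers: list of (color_rows, depth_rows) with matching shapes."""
    out = [[background] * width for _ in range(height)]
    for y in range(height):
        nearest = [float("inf")] * width
        for color, depth in layers:
            for x in range(width):
                d = depth[y][x]
                if d is not None and d < nearest[x]:
                    nearest[x] = d
                    out[y][x] = color[y][x]
    return out

# A 1x3 example: a far background layer (depth 5) and a near object layer
# (depth 2) that covers only the centre pixel.
far_layer = ([[10, 10, 10]], [[5, 5, 5]])
near_layer = ([[20, 20, 20]], [[None, 2, None]])
print(composite([far_layer, near_layer], width=3, height=1))  # [[10, 20, 10]]
```

Shifting each layer horizontally in proportion to its depth before compositing is, in essence, how the multiview variants are produced.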

A Study on Iris Image Restoration Based on Focus Value of Iris Image (홍채 영상 초점 값에 기반한 홍채 영상 복원 연구)

  • Kang, Byung-Jun;Park, Kang-Ryoung
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.2 s.308 / pp.30-39 / 2006
  • Iris recognition identifies a user based on unique iris texture patterns in the region that dilates or contracts the pupil. Iris recognition systems extract the iris pattern from an iris image captured by an iris recognition camera, so recognition performance is affected by the quality of the captured image. If the iris image is blurred, the iris pattern is distorted, which increases the FRR (False Rejection Rate). Optical defocusing is the main cause of blurred iris images. Conventional iris recognition cameras use one of two focusing methods: fixed focusing or auto-focusing. With fixed focusing, users must repeatedly align their eyes within the DOF (Depth of Field) until the system acquires a well-focused iris image, which is very inconvenient. With auto-focusing, the camera moves a focus lens under an auto-focusing algorithm to capture the best-focused image, but this requires additional hardware, such as a sensor to measure the distance between the user and the camera lens and a motor to move the focus lens. This increases the size and cost of the camera, so such cameras cannot be used in small mobile devices. To overcome these problems, we propose a method to increase the DOF with an iris image restoration algorithm based on the focus value of the iris image. When we tested the proposed algorithm with the Panasonic BM-ET100, the operating range increased from 48-53 cm to 46-56 cm.
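A "focus value" of the kind the abstract relies on is typically a measure of high-frequency content: a sharp image responds strongly to a Laplacian-like kernel, a defocused one does not. The measure below is a generic sketch under that assumption, not necessarily the one used in the paper.

```python
# Focus value sketch: sum of squared responses of a discrete 4-neighbour
# Laplacian over the image interior.  Sharp images score higher than
# blurred versions of the same scene.

def focus_value(img):
    h, w = len(img), len(img[0])
    total = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x] + img[y][x - 1]
                   + img[y][x + 1] - 4 * img[y][x])
            total += lap * lap
    return total

sharp = [[0, 0, 0, 0],
         [0, 9, 9, 0],
         [0, 9, 9, 0],
         [0, 0, 0, 0]]
# Roughly box-blurred version of the same bright square.
blurred = [[1, 2, 2, 1],
           [2, 4, 4, 2],
           [2, 4, 4, 2],
           [1, 2, 2, 1]]
print(focus_value(sharp) > focus_value(blurred))  # True
```

A restoration algorithm driven by such a value can deblur frames whose score falls below a threshold, effectively widening the usable DOF without extra hardware.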

Multiple Camera Based Imaging System with Wide-view and High Resolution and Real-time Image Registration Algorithm (다중 카메라 기반 대영역 고해상도 영상획득 시스템과 실시간 영상 정합 알고리즘)

  • Lee, Seung-Hyun;Kim, Min-Young
    • Journal of the Institute of Electronics Engineers of Korea SC / v.49 no.4 / pp.10-16 / 2012
  • For high-speed visual inspection in the semiconductor industry, it is essential to acquire two-dimensional images of regions of interest with both a large field of view (FOV) and high resolution. In this paper, an imaging system is proposed to achieve high image quality in terms of both precision and FOV; it is composed of a single lens, a beam splitter, two camera sensors, and a stereo image-grabbing board. For object images acquired simultaneously from the two camera sensors, Zhang's camera calibration method is first applied to calibrate each camera. Second, to find a mathematical mapping function between the two images acquired from the different viewpoints, the matching matrix from multiview camera geometry is calculated based on the image homography. Through this homography, the two images are finally registered to secure a large inspection FOV. Because an inspection system using multiple images from multiple cameras needs a very fast processing unit for real-time image matching, parallel processing hardware and software such as the Compute Unified Device Architecture (CUDA) are utilized. As a result, a matched image can be obtained from the two separate images in real time. Finally, the acquired homography is evaluated in terms of accuracy through a series of experiments, and the results show the effectiveness of the proposed system and method.
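The registration step hinges on applying a 3x3 homography H, which maps points between the two views in homogeneous coordinates (p' ~ Hp). A minimal sketch, with a made-up similarity matrix rather than a calibrated one:

```python
# Apply a 3x3 homography to a 2D point: lift to homogeneous coordinates,
# multiply, then divide by the resulting w to return to the image plane.

def apply_homography(H, x, y):
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w

# Illustrative homography: uniform scale by 2 plus translation (10, 5).
H = [[2.0, 0.0, 10.0],
     [0.0, 2.0, 5.0],
     [0.0, 0.0, 1.0]]
print(apply_homography(H, 3.0, 4.0))  # (16.0, 13.0)
```

Evaluating the registration accuracy then amounts to measuring the distance between mapped points and their detected correspondences in the other image.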

Calibration of Thermal Camera with Enhanced Image (개선된 화질의 영상을 이용한 열화상 카메라 캘리브레이션)

  • Kim, Ju O;Lee, Deokwoo
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.4 / pp.621-628 / 2021
  • This paper proposes a method to calibrate a thermal camera with three different perspectives. In particular, the intrinsic parameters of the camera and the re-projection errors are provided to quantify the accuracy of the calibration result. The camera's three lenses capture the same scene, but their images do not overlap, and the image resolution is worse than that of an RGB camera. In computer vision, camera calibration is one of the most important and fundamental tasks for calculating the distance between the camera(s) and a target object, or the three-dimensional (3D) coordinates of a point on a 3D object. Once calibration is complete, the intrinsic and extrinsic parameters of the camera(s) are available. The intrinsic parameters consist of the focal length, skew factor, and principal point, and the extrinsic parameters consist of the relative rotation and translation of the camera(s). This study estimated the intrinsic parameters of thermal cameras that have three lenses with different perspectives. In particular, image enhancement based on a deep learning algorithm was carried out to improve the quality of the calibration results. Experimental results are provided to substantiate the proposed method.
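The re-projection error used to quantify calibration accuracy can be sketched with the plain pinhole model (skew assumed zero; all intrinsic values below are illustrative, not the paper's estimates): project a 3D point with the intrinsics, then measure its pixel distance to the detected image point.

```python
# Re-projection error sketch under the pinhole model:
#   u = fx * X / Z + cx,   v = fy * Y / Z + cy
import math

def project(point3d, fx, fy, cx, cy):
    X, Y, Z = point3d
    return fx * X / Z + cx, fy * Y / Z + cy

def reprojection_error(point3d, observed_uv, fx, fy, cx, cy):
    u, v = project(point3d, fx, fy, cx, cy)
    return math.hypot(u - observed_uv[0], v - observed_uv[1])

# A point at (0.1, 0.2, 1.0) m projects to (380, 360) with these
# intrinsics; the detector found it 1 pixel off in u.
err = reprojection_error((0.1, 0.2, 1.0), (381.0, 360.0),
                         fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(err)  # 1.0
```

Averaging this error over all calibration-board corners gives the single accuracy figure calibration reports typically quote.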

Enhancement on Time-of-Flight Camera Images (Time-of-Flight 카메라 영상 보정)

  • Kim, Sung-Hee;Kim, Myoung-Hee
    • Proceedings of the Korean HCI Society Conference / 2008.02a / pp.708-711 / 2008
  • Time-of-flight (ToF) cameras deliver intensity data as well as range information for the objects in a scene. However, systematic problems during acquisition lead to distorted values in both distance and amplitude. In this paper we propose a method to acquire reliable distance information over the entire scene by correcting each type of information based on the other. The amplitude image is enhanced based on the depth values, and this leads to depth correction, especially for far pixels.
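The idea of letting one ToF channel correct the other can be sketched as follows: depth readings with low amplitude (a weak returned signal, typical of far pixels) are unreliable, so they are replaced by an amplitude-weighted average of their reliable neighbours. The threshold and weighting below are illustrative choices, not the paper's method.

```python
# 1D sketch of amplitude-guided ToF depth correction: low-amplitude depth
# samples are replaced by an amplitude-weighted mean of reliable neighbours.

def correct_depth(depth, amplitude, min_amp=0.2):
    out = list(depth)
    for i, a in enumerate(amplitude):
        if a >= min_amp:
            continue  # reliable pixel, keep as-is
        num = den = 0.0
        for j in (i - 1, i + 1):
            if 0 <= j < len(depth) and amplitude[j] >= min_amp:
                num += amplitude[j] * depth[j]
                den += amplitude[j]
        if den > 0:
            out[i] = num / den
    return out

depth     = [2.0, 2.1, 9.9, 2.3]   # pixel 2 is a distance outlier ...
amplitude = [0.9, 0.8, 0.05, 0.7]  # ... with a very weak returned signal
print(correct_depth(depth, amplitude))  # pixel 2 pulled toward neighbours
```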


A Study on a Motion Recognition from Moving Images with Camera Works

  • Murakami, Shin-ichi;Shindoh, Tomohiko
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1998.06b / pp.35-40 / 1998
  • This paper describes an automatic method for recognizing the contents of moving images. The recognition proceeds in two steps. First, camera works in the moving images are analyzed and moving objects are extracted. Next, the motion of each object is recognized using previously acquired knowledge. These techniques can be applied to the construction of an efficient image database.


Real-time camera tracking using co-planar feature points (동일 평면상에 존재하는 특징점 검출을 이용한 실시간 카메라 추적 기법)

  • Lee, Seok-Han
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.17 no.5 / pp.358-366 / 2024
  • This paper proposes a method for real-time camera tracking that detects and employs feature points located on a planar object in 3D space. The proposed approach operates in two stages. First, multiple feature points are detected in the 3D space, and then only those that lie on the planar object are selected. The camera's extrinsic parameters are then estimated using the projective geometry relating the feature points on the plane to the camera's image plane. The experiments were conducted in a typical indoor environment with regular lighting, without any special illumination setup. In contrast to conventional approaches, the proposed method can detect new feature points on the planar object in real time and employ them for camera tracking. This allows continuous tracking even when the reference features used for camera pose initialization are not available. The experimental results show an average re-projection error of about 5 to 7 pixels, which is relatively small given the image resolution, demonstrating that camera tracking is possible even in the absence of reference features within the image.
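The first stage, keeping only the feature points that lie on the plane, can be sketched as a plane-membership test: given a plane n·p = d, retain points whose signed distance falls within a tolerance. The plane, points, and tolerance below are illustrative values, not the paper's data.

```python
# Select co-planar 3D feature points: keep those whose distance to the
# plane n . p = d is within a tolerance (n assumed to be a unit normal).

def coplanar_points(points, normal, d, tol=0.01):
    nx, ny, nz = normal
    return [p for p in points
            if abs(nx * p[0] + ny * p[1] + nz * p[2] - d) <= tol]

# Plane z = 0: normal (0, 0, 1), d = 0.
points = [(0.1, 0.2, 0.0),    # exactly on the plane
          (0.5, 0.5, 0.003),  # within tolerance (noisy detection)
          (0.3, 0.1, 0.4)]    # 0.4 off the plane, rejected
print(coplanar_points(points, (0.0, 0.0, 1.0), 0.0))
# keeps the first two points only
```

The surviving plane-to-image correspondences then feed the second stage, where the extrinsic parameters are estimated from the projective relation between the plane and the image plane.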

Image Processing using Thermal Infrared Image (열적외선 이미지를 이용한 영상 처리)

  • Jeong, Byoung-Jo;Jang, Sung-Whan
    • Journal of the Korea Academia-Industrial cooperation Society / v.10 no.7 / pp.1503-1508 / 2009
  • This study applied real-time image processing techniques to thermal infrared camera images. The thermal infrared image data were rendered with hot, cool, and rainbow color mappings according to the changing temperature. Histogram-based image processing techniques were used to detect the shade-contrast characteristics of the thermal infrared image, and the edges of the thermal infrared image were extracted to classify objects. Moreover, the temperature was extracted from the image using the image information program.
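A rainbow-style temperature mapping of the kind described can be sketched as linear interpolation between colour stops. The range and the blue-green-red stops below are illustrative assumptions, not the study's actual palette.

```python
# Map a temperature to an RGB colour by interpolating blue (cold) ->
# green (mid-range) -> red (hot) over an assumed temperature range.

def rainbow(temp, t_min=0.0, t_max=100.0):
    t = max(0.0, min(1.0, (temp - t_min) / (t_max - t_min)))
    if t < 0.5:          # blue -> green
        f = t / 0.5
        return (0, int(255 * f), int(255 * (1 - f)))
    f = (t - 0.5) / 0.5  # green -> red
    return (int(255 * f), int(255 * (1 - f)), 0)

print(rainbow(0))    # (0, 0, 255)  pure blue
print(rainbow(50))   # (0, 255, 0)  pure green
print(rainbow(100))  # (255, 0, 0)  pure red
```

Hot and cool mappings are the degenerate cases of the same idea with different colour stops (e.g. black-to-red-to-white for a hot map).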