• Title/Summary/Keyword: camera image


Point Cloud Generation Method Based on Lidar and Stereo Camera for Creating Virtual Space (가상공간 생성을 위한 라이다와 스테레오 카메라 기반 포인트 클라우드 생성 방안)

  • Lim, Yo Han; Jeong, In Hyeok; Lee, San Sung; Hwang, Sung Soo
    • Journal of Korea Multimedia Society / v.24 no.11 / pp.1518-1525 / 2021
  • Due to the growth of the VR industry and the rise of the digital twin industry, the importance of implementing 3D data identical to real space is increasing. However, doing so requires expert personnel and a huge amount of time. In this paper, we propose a system that generates point cloud data with the same shape and color as a real space simply by scanning the space. The proposed system integrates 3D geometric information from a lidar and color information from a stereo camera into one point cloud. Since the number of 3D points generated by the lidar is not enough to express a real space with good quality, some of the pixels of the 2D image generated by the camera are mapped to the correct 3D coordinates to increase the number of points. Additionally, to minimize data size, overlapping points are filtered out so that only one point exists at the same 3D coordinates. Finally, the 6DoF pose information generated from the lidar point cloud is replaced with the pose generated from the camera image to position the points more accurately. Experimental results show that the proposed system easily and quickly generates point clouds very similar to the scanned space.
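A minimal sketch of the densification and de-duplication steps described in the abstract, assuming a pinhole stereo camera with known intrinsics `K`, a 4x4 camera-to-world transform, and a per-pixel depth map; all names are illustrative and not taken from the paper:

```python
import numpy as np

def densify_point_cloud(depth, rgb, K, T_cam_to_world, voxel=0.01):
    """Back-project stereo depth pixels to 3D, color them, and keep one
    point per voxel cell so duplicates at the same coordinates are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - K[0, 2]) * z / K[0, 0]          # pinhole back-projection
    y = (v[valid] - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)
    pts = (T_cam_to_world @ pts_cam.T).T[:, :3]      # move into the lidar/world frame
    colors = rgb[valid]

    # Duplicate filtering: quantize to a voxel grid and keep one point per cell.
    keys = np.floor(pts / voxel).astype(np.int64)
    _, keep = np.unique(keys, axis=0, return_index=True)
    return pts[keep], colors[keep]
```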

Design and Implementation of TDMA-Based Wireless IP Video Transmission System (TDMA 기반 무선 IP 영상 전송 시스템 설계 및 구현)

  • Sang-Ok, Yoon; Gyeong-Hyu, Seok
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.17 no.6 / pp.1025-1032 / 2022
  • In this paper, a TDMA-based PoE wireless multi-IP camera transmission system using wireless communication technology is developed to reduce the construction costs imposed by existing wired CCTV surveillance and IP camera systems. The aim is to design and implement long-distance wireless video transmission technology. A transmission/reception prototype of the IP video transmission system, together with the wireless multi-IP camera transmission terminal and the image acquisition device (SB-200) for the proposed technology, is described.

Images Grouping Technology based on Camera Sensors for Efficient Stitching of Multiple Images (다수의 영상간 효율적인 스티칭을 위한 카메라 센서 정보 기반 영상 그룹핑 기술)

  • Im, Jiheon; Lee, Euisang; Kim, Hoejung; Kim, Kyuheon
    • Journal of Broadcast Engineering / v.22 no.6 / pp.713-723 / 2017
  • Since a panoramic image can overcome the limitation of a camera's viewing angle and provides a wide field of view, it has been actively studied in the fields of computer vision and stereo cameras. To generate a panoramic image, stitching images taken by a plurality of general cameras is widely used instead of a wide-angle camera, because it reduces image distortion. The image stitching technique creates descriptors of feature points extracted from multiple images, compares the similarities of the feature points, and links the images together into one image. Each feature point carries several hundred dimensions of information, and the data processing time increases as more images are stitched. In particular, when a panorama is generated from images photographed by a number of unspecified cameras of one object, the extraction and matching of overlapping feature points across similar images takes a long time. In this paper, we propose a preprocessing step to efficiently perform stitching on images obtained from a number of unspecified cameras for one object or environment. The data processing time is reduced by pre-grouping images based on camera sensor information, which reduces the number of images to be stitched at one time; stitching is then done hierarchically to create one large panorama. Experimental results confirm that the proposed grouping preprocessing greatly reduces the stitching time for a large number of images.
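A simplified sketch of the grouping-then-hierarchical-stitching idea, assuming each image carries position/orientation sensor metadata; the grouping key and the use of OpenCV's stitcher are illustrative choices, not the authors' exact pipeline:

```python
import cv2
from collections import defaultdict

def group_by_sensor(images, metadata, pos_cell=5.0, yaw_cell=30.0):
    """Bucket images by coarse position (metres) and heading (degrees)
    so only neighbouring views are stitched together at the first level."""
    groups = defaultdict(list)
    for img, meta in zip(images, metadata):
        key = (round(meta["x"] / pos_cell), round(meta["y"] / pos_cell),
               round(meta["yaw"] / yaw_cell))
        groups[key].append(img)
    return list(groups.values())

def hierarchical_stitch(images, metadata):
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    partials = []
    for group in group_by_sensor(images, metadata):
        if len(group) == 1:                  # nothing to stitch in this bucket
            partials.append(group[0])
            continue
        status, pano = stitcher.stitch(group)
        if status == cv2.Stitcher_OK:
            partials.append(pano)
    # Second level: stitch the per-group panoramas into one large panorama.
    status, final = stitcher.stitch(partials)
    return final if status == cv2.Stitcher_OK else None
```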

Interactive Projection by Closed-loop based Position Tracking of Projected Area for Portable Projector (이동 프로젝터 투사영역의 폐회로 기반 위치추적에 의한 인터랙티브 투사)

  • Park, Ji-Young; Rhee, Seon-Min; Kim, Myoung-Hee
    • Journal of KIISE: Software and Applications / v.37 no.1 / pp.29-38 / 2010
  • We propose an interactive projection technique that displays details of a large image at high resolution and brightness by tracking a portable projector. A closed-loop tracking method updates the projected image as the user changes the position of the detail area by moving the portable projector. A marker embedded in the large image indicates the position to be occupied by the detail image projected by the portable projector. The marker is extracted from sequential images acquired by a camera attached to the portable projector. The marker position in the large display image is updated under the constraint that the centers of the marker and the camera frame coincide in every camera frame. The projective transformation used to warp the detail image is calculated from the marker's position and shape in the camera frame. The marker's four corner points are determined by a four-step segmentation process consisting of HSI-based camera image preprocessing, edge extraction by the Hough transform, a quadrangle test, and a cross-ratio test. The interactive projection system implemented with the proposed method runs at about 24 fps. In the user study, overall feedback on the system's usability was very positive.
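For illustration, a short sketch of how the projective transformation for warping can be computed once the marker's four corner points are known, using standard OpenCV calls; the corner detection itself (HSI preprocessing, Hough transform, quadrangle and cross-ratio tests) is assumed to have been done:

```python
import cv2
import numpy as np

def warp_detail_image(detail_img, marker_corners_cam, out_size):
    """Compute the projective transform that maps the detail image onto the
    quadrilateral formed by the marker's four corners in the camera frame,
    then warp the image accordingly (corners ordered TL, TR, BR, BL)."""
    h, w = detail_img.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32(marker_corners_cam)
    H = cv2.getPerspectiveTransform(src, dst)        # 3x3 homography
    return cv2.warpPerspective(detail_img, H, out_size)
```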

Eliminating Color Mixing of Projector-Camera System for Fast Radiometric Compensation (컬러 보정의 고속화를 위한 프로젝터-카메라 시스템의 컬러 혼합 성분 제거)

  • Lee, Moon-Hyun; Park, Han-Hoon; Park, Jong-Il
    • Journal of Broadcast Engineering / v.13 no.6 / pp.941-950 / 2008
  • The quality of a projector's output image is influenced by surrounding conditions such as the shape and color of the screen and the environmental light. Therefore, techniques that ensure desirable image quality regardless of such conditions have been in demand and are being steadily developed; radiometric compensation is a representative one. In general, radiometric compensation is achieved by measuring the color of the screen and the environmental light through an analysis of the camera image of the projector's output, and then adjusting the color of the projector input image in a pixel-wise manner. This process is not time-consuming for small images, but its speed drops linearly with image size; for large images, reducing the required processing time therefore becomes a critical problem. This paper proposes a fast radiometric compensation method. The method uses color filters to eliminate the color mixing between the projector and the camera, because the speed of radiometric compensation depends mainly on measuring this color mixing; with the color filters, there is no need to measure it. In experiments, the proposed method improved the compensation speed by 44 percent while maintaining the quality of the projector output image. This method is expected to be a key technique for the widespread use of projectors for large-scale, high-quality displays.
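A toy sketch of the underlying compensation model, assuming the commonly used linear projector-camera relation C = V·P + F, where V is the per-pixel color mixing matrix and F the contribution of environmental light; when color filters remove the cross-channel mixing, V becomes diagonal and the per-pixel inverse reduces to three scalar divisions. The symbols are illustrative, not the paper's notation:

```python
import numpy as np

def compensate(desired, V, F):
    """Solve V @ p + F = desired for the projector input p at one pixel.
    With a full 3x3 mixing matrix V this needs a matrix inverse; when color
    filters remove cross-channel mixing, V is diagonal and the solution is
    just a per-channel division."""
    if np.allclose(V, np.diag(np.diag(V))):          # filtered case: no color mixing
        p = (desired - F) / np.diag(V)
    else:                                            # general case: invert the mixing
        p = np.linalg.solve(V, desired - F)
    return np.clip(p, 0.0, 1.0)
```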

Appropriate Digital Camera System for Digital Ultraviolet Photography (디지털 자외선 사진을 위한 적정 디지털 카메라 시스템)

  • Lee, Young-Kyu; Har, Dong-Hwan
    • The Journal of the Korea Contents Association / v.10 no.7 / pp.40-48 / 2010
  • Reflected-ultraviolet photography is applied to crime-scene evidence, archaeology, and dermatology. In the past, ultraviolet photography was done with standard black-and-white film, because film emulsion is more sensitive to near-UV light than the CCD (Charge-Coupled Device) or CMOS (Complementary Metal-Oxide-Semiconductor) sensor of a digital camera. In this research, we aim to improve the quality of ultraviolet photographs and to find the best alternative digital camera by utilizing a consumer digital camera. To achieve this, we removed the IR cutoff filter from a digital camera and, by using a modified UV pass filter, verified the increase in image resolution of digital ultraviolet photographs. We also analyzed the reproducibility of digital ultraviolet photographs according to the type, size, and pixel count of the image sensor. Furthermore, this research resulted in the development of a practical digital camera system based on a consumer digital camera. Eventually, it will contribute to practical use in the various fields of digital ultraviolet photography.

Calibration of Omnidirectional Camera by Considering Inlier Distribution (인라이어 분포를 이용한 전방향 카메라의 보정)

  • Hong, Hyun-Ki; Hwang, Yong-Ho
    • Journal of Korea Game Society / v.7 no.4 / pp.63-70 / 2007
  • Since the fisheye lens has a wide field of view, it can capture the scene and illumination in all directions from a far smaller number of omnidirectional images. Due to these advantages, the omnidirectional camera is widely used in surveillance and in reconstructing the 3D structure of a scene. In this paper, we present a new self-calibration algorithm for an omnidirectional camera from uncalibrated images that takes the inlier distribution into account. First, a parametric non-linear projection model of the omnidirectional camera is estimated with known rotation and translation parameters. After deriving the projection model, we can compute an essential matrix of the camera under unknown motions and then determine the camera information: rotation and translation. Standard deviations are used as a quantitative measure to select a proper inlier set. The experimental results show that we can achieve a precise estimation of the omnidirectional camera model and the extrinsic parameters, including rotation and translation.
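As a rough illustration of the relative-pose step, the following pinhole-style sketch estimates an essential matrix with RANSAC and recovers rotation and translation; the paper itself works with a non-linear omnidirectional projection model and a standard-deviation-based inlier selection, which this sketch does not reproduce:

```python
import cv2

def relative_pose(pts1, pts2, K):
    """Estimate the essential matrix between two views with RANSAC and
    recover the relative rotation R and (unit-scale) translation t.
    pts1, pts2: Nx2 matched image points; K: 3x3 intrinsic matrix."""
    E, inlier_mask = cv2.findEssentialMat(pts1, pts2, K,
                                          method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inlier_mask)
    return R, t, inlier_mask
```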

A Study on the CMOS Camera robust to radiation environments (방사선 환경에 강인한 CMOS카메라에 관한 연구)

  • Baek, Dong-Hyun; Kim, Bae-Hoon
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.13 no.1 / pp.27-34 / 2020
  • Human access is restricted in environments where radiation sources are used, so observation equipment must be radiation-resistant since it remains exposed. Tungsten (the highest specific gravity and melting point) and lead (the lowest) were selected as shielding materials; to reduce the dose from a Cobalt-60 radiation source to 1/8, the tungsten shield required a volume of 432.6 cm³ and a thickness of 2.4 cm, while the lead shield required a volume of 961 cm³ and a thickness of 3.6 cm. Applying this method, we produced a radiation-resistant CMOS camera consisting of a camera module with a CMOS image sensor and a housing with a radiation-shielding structure. Using the head-detachable 2M AHD camera (No. ①) that survived the experiment for selecting the optimal shielding thickness, and with associated equipment such as the camera and adapters also shielded, it was confirmed that the structural design is appropriate, as the camera operated well at doses higher than 1.88×10⁶ rad. Therefore, the camera technology is expected to be applicable to high-radiation environments and to be commercially feasible.
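The 1/8 dose-reduction target corresponds to three half-value layers (2³ = 8); dividing the reported thicknesses by three gives the per-material half-value layer implied by the abstract's own numbers. A quick sketch of that arithmetic:

```python
import math

def shield_thickness(attenuation_factor, hvl_cm):
    """Thickness needed to cut the dose by a given factor, in half-value
    layers: n = log2(factor), thickness = n * HVL."""
    n_hvl = math.log2(attenuation_factor)
    return n_hvl * hvl_cm

# Back out the per-material half-value layer implied by the abstract's numbers:
# a 1/8 dose reduction is three half-value layers (2**3 = 8).
hvl_w  = 2.4 / math.log2(8)   # ≈ 0.8 cm per HVL for the tungsten shield
hvl_pb = 3.6 / math.log2(8)   # ≈ 1.2 cm per HVL for the lead shield
print(shield_thickness(8, hvl_w), shield_thickness(8, hvl_pb))  # 2.4, 3.6
```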

An Easy Camera-Projector Calibration Technique for Structured Light 3-D Reconstruction (구조광 방식 3차원 복원을 위한 간편한 프로젝터-카메라 보정 기술)

  • Park, Soon-Yong; Park, Go-Gwang; Zhang, Lei
    • The KIPS Transactions: Part B / v.17B no.3 / pp.215-226 / 2010
  • The structured-light 3D reconstruction technique uses a coded pattern to find correspondences between the camera image and the projector image. To calculate the 3D coordinates of the correspondences, the camera and the projector must be calibrated, and the calibration results affect the accuracy of the 3D reconstruction. Conventional camera-projector calibration techniques commonly require either expensive hardware rigs or complex algorithms. In this paper, we propose an easy camera-projector calibration technique that needs neither a hardware rig nor a complex algorithm, and therefore enhances the efficiency of structured-light 3D reconstruction. We present two camera-projector systems to show the calibration results. Error analysis on the two systems is performed based on the projection error of the camera and the projector and on the 3D reconstruction of world reference points.
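One common way to realize such a calibration, shown here only as a hedged sketch and not necessarily the paper's method, is to treat the projector as an inverse camera and feed the decoded structured-light correspondences to the standard OpenCV calibration routine; the returned RMS value is a projection error of the kind used in the error analysis:

```python
import cv2

def calibrate_projector(obj_points, proj_points, proj_size):
    """Calibrate the projector as an inverse camera. obj_points: lists of 3D
    calibration-target points (one array per pose); proj_points: the
    projector-pixel coordinates of those points, recovered by decoding the
    coded pattern; proj_size: projector resolution (width, height)."""
    rms, K_proj, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, proj_points, proj_size, None, None)
    return rms, K_proj, dist   # rms is the reprojection (projection) error
```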

Markerless camera pose estimation framework utilizing construction material with standardized specification

  • Harim Kim; Heejae Ahn; Sebeen Yoon; Taehoon Kim; Thomas H.-K. Kang; Young K. Ju; Minju Kim; Hunhee Cho
    • Computers and Concrete / v.33 no.5 / pp.535-544 / 2024
  • In the rapidly advancing landscape of computer vision (CV) technology, there is a burgeoning interest in its integration with the construction industry. Camera calibration is the process of deriving the intrinsic and extrinsic parameters that determine how the coordinates of the 3D real world are projected onto the 2D image plane, where the intrinsic parameters are internal factors of the camera and the extrinsic parameters are external factors such as the position and rotation of the camera. Camera pose estimation, or extrinsic calibration, which estimates the extrinsic parameters, provides essential information for CV applications in construction, since it can be used for indoor navigation of construction robots and for field monitoring by restoring depth information. Traditionally, camera pose estimation methods relied on target objects such as markers or patterns. However, these marker- or pattern-based methods are often time-consuming because a target object must be installed for the estimation. As a solution to this challenge, this study introduces a novel framework that facilitates camera pose estimation using standardized materials commonly found on construction sites, such as concrete forms. The proposed framework obtains 3D real-world coordinates by referring to construction materials with known specifications, extracts the 2D coordinates of the corresponding image plane through keypoint detection, and derives the camera's pose through the perspective-n-point (PnP) method, which estimates the extrinsic parameters by matching 3D-2D coordinate pairs. This framework presents a substantial advancement as it streamlines the extrinsic calibration process, thereby potentially enhancing the efficiency of CV technology application and data collection at construction sites. This approach holds promise for expediting and optimizing various construction-related tasks by automating and simplifying the calibration procedure.
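A minimal sketch of the PnP step described above, assuming the four corners of a form panel with hypothetical standardized dimensions and their detected 2D keypoints; the dimensions and names are illustrative only, not the paper's implementation:

```python
import cv2
import numpy as np

# Illustrative only: 3D corners (metres) of a standardized form panel, e.g. 0.6 x 1.2 m,
# defined in the object's own coordinate frame.
FORM_CORNERS_3D = np.float32([[0, 0, 0], [0.6, 0, 0], [0.6, 1.2, 0], [0, 1.2, 0]])

def estimate_camera_pose(corners_2d, K, dist_coeffs=None):
    """Recover the camera's extrinsic parameters (rotation, translation) by
    matching the known 3D corner coordinates with their detected 2D keypoints."""
    ok, rvec, tvec = cv2.solvePnP(FORM_CORNERS_3D, np.float32(corners_2d),
                                  K, dist_coeffs)
    R, _ = cv2.Rodrigues(rvec)          # rotation vector -> 3x3 matrix
    cam_position = -R.T @ tvec          # camera centre in the object frame
    return R, tvec, cam_position
```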