• Title/Summary/Keyword: 카메라 모델 (camera model)


Parametric Video Compression Based on Panoramic Image Modeling (파노라믹 영상 모델에 근거한 파라메트릭 비디오 압축)

  • Sim Dong-Gyu
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.43 no.4 s.310
    • /
    • pp.96-107
    • /
    • 2006
  • In this paper, a low-bitrate video coding method based on new panoramic modeling is proposed for panning cameras. An input video frame from a panning camera is decomposed into a background image, rectangular moving-object regions, and a residual image. In coding the background, we employ a panoramic model that can account for several image-formation processes, such as perspective projection, lens distortion, vignetting, and illumination effects. Moving objects are detected, and their minimum bounding rectangular regions are coded with a JPEG-2000 coder. We have evaluated the effectiveness of the proposed algorithm on several indoor and outdoor sequences and found that the PSNR is improved by 1.3-4.4 dB compared to that of JPEG-2000.
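
The decomposition described above (background model, moving-object rectangles, residual) can be illustrated with a short sketch. This is a hedged, generic illustration, not the paper's coder: the background is assumed to be already reconstructed from the panoramic model, and the thresholds and OpenCV-based detection are hypothetical choices.

```python
# Hypothetical sketch of the frame decomposition step: find moving-object
# bounding rectangles against a background estimate and form a residual image.
import cv2
import numpy as np

def decompose_frame(frame, background, diff_thresh=25, min_area=100):
    """Split a frame into moving-object rectangles and a residual image."""
    gray_f = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray_f, gray_b)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

    # Minimum bounding rectangles of the detected moving regions.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    rects = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

    # Residual: what the background model fails to explain.
    residual = cv2.absdiff(frame, background)
    return rects, residual
```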

Reconstruction of Transmitted Images from Images Displayed on Video Terminals (영상 단말에 전송된 이미지를 이용한 전송 영상 복원)

  • Park, Su-Kyung;Lee, Seon-Oh;Sim, Dong-Gyu
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.49 no.1
    • /
    • pp.49-57
    • /
    • 2012
  • An image reconstruction algorithm is proposed to estimate the transmitted original images from images displayed on a video terminal. The proposed algorithm acquires images displayed on video terminal screens with a camera and then estimates the transmitted images from the acquired images. However, camera-acquired images exhibit geometric and color distortions caused by the characteristics of the camera and display devices. We correct the geometric distortion with a homography-based algorithm and the color distortion with a weighted-linear model. The experimental results show that the proposed algorithm yields promising estimation performance with respect to the peak signal-to-noise ratio (PSNR): PSNR values of the estimated images with respect to the corresponding original images range from 28 to 29 dB.
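
The two correction steps named in the abstract, a homography for geometry and a linear model for color, can be sketched as below. This is an assumption-laden outline, not the paper's exact pipeline: the screen corner coordinates are taken as given, and a plain (unweighted) per-channel gain/offset fit stands in for the paper's weighted-linear model.

```python
# Hedged sketch: undo the geometric distortion of a camera-captured screen with
# a homography, then fit a simple per-channel linear color model.
import cv2
import numpy as np

def rectify_screen(captured, screen_corners, width, height):
    """Warp the captured screen region back onto the display's pixel grid."""
    dst = np.float32([[0, 0], [width, 0], [width, height], [0, height]])
    H = cv2.getPerspectiveTransform(np.float32(screen_corners), dst)
    return cv2.warpPerspective(captured, H, (width, height))

def fit_linear_color_model(measured, reference):
    """Fit gain/offset per channel so that reference ~= a * measured + b."""
    models = []
    for c in range(3):
        x, y = measured[..., c].ravel(), reference[..., c].ravel()
        a, b = np.polyfit(x, y, 1)          # least-squares line per channel
        models.append((a, b))
    return models
```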

Data-driven camera manipulation about vertical locomotion in a virtual environment (가상환경에서 수직 운동에 대한 데이터 기반 카메라 조작)

  • Seo, Seung-Won;Noh, Seong-Rae;Lee, Ro-Un;Park, Seung-Jun;Kang, Hyeong-Yeop
    • Journal of the Korea Computer Graphics Society
    • /
    • v.28 no.3
    • /
    • pp.13-21
    • /
    • 2022
  • In this paper, the goal is to investigate how camera manipulation can minimize motion sickness and maximize immersion when a user moves through a virtual environment that requires vertical movement. Because users typically experience virtual reality in a flat physical space, their actual movement differs from the virtual movement; the resulting sensory conflict can cause virtual reality motion sickness. We therefore propose and implement three camera manipulation techniques and determine through user experiments which is the most appropriate.
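
The abstract does not spell out its three techniques, so the sketch below only illustrates one generic comfort measure of this kind: exponentially smoothing the camera's vertical position so abrupt height changes are softened. It should be read as a hypothetical example, not as one of the paper's proposals.

```python
# Illustrative only: damp vertical camera motion with exponential smoothing.
class SmoothedVerticalCamera:
    def __init__(self, smoothing=0.15):
        self.smoothing = smoothing      # 0 = frozen camera, 1 = follow instantly
        self.cam_y = None

    def update(self, target_y):
        """Move the camera height a fraction of the way toward the avatar's height each frame."""
        if self.cam_y is None:
            self.cam_y = target_y
        self.cam_y += self.smoothing * (target_y - self.cam_y)
        return self.cam_y
```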

Camera Motion and Structure Recovery Using Two-step Sampling (2단계 샘플링을 이용한 카메라 움직임 및 장면 구조 복원)

  • 서정국;조청운;홍현기
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.40 no.5
    • /
    • pp.347-356
    • /
    • 2003
  • Camera pose and scene geometry estimation from video sequences is widely used in areas such as image composition. Structure and motion recovery based on auto-calibration makes it possible to insert synthetic 3D objects into real but unmodeled scenes and to render their views from the recovered camera positions. However, most previous methods require bundle adjustment or a nonlinear minimization process for more precise results. This paper presents a new auto-calibration algorithm for video sequences based on two steps: the first selects key frames, and the second removes key frames with inaccurate camera matrices based on absolute quadric estimation using LMedS. The experimental results demonstrate that the proposed method achieves precise camera pose estimation and scene geometry recovery without bundle adjustment. In addition, virtual objects have been inserted into the real images using the recovered camera trajectories.
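
LMedS (Least Median of Squares), the robust criterion used above to discard key frames with inaccurate camera matrices, can be outlined generically as follows. The model-fitting and residual functions are placeholders and the trial count is an arbitrary choice; this is not the paper's exact procedure.

```python
# Generic LMedS sketch: among models fitted to random minimal samples, keep the
# one whose median squared residual over all data is smallest.
import numpy as np

def lmeds(data, fit_model, residuals, sample_size, n_trials=500, rng=None):
    """data: array of observations; fit_model/residuals: user-supplied callables."""
    rng = np.random.default_rng() if rng is None else rng
    best_model, best_med = None, np.inf
    for _ in range(n_trials):
        idx = rng.choice(len(data), size=sample_size, replace=False)
        model = fit_model(data[idx])
        med = np.median(residuals(model, data) ** 2)
        if med < best_med:
            best_model, best_med = model, med
    return best_model, best_med
```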

Long-Distance Plume Detection Simulation for a New MWIR Camera (장거리 화염 탐지용 적외선 카메라 성능 광선추적 수치모사)

  • Yoon, Jeeyeon;Ryu, Dongok;Kim, Sangmin;Seong, Sehyun;Yoon, Woongsup;Kim, Jieun;Kim, Sug-Whan
    • Korean Journal of Optics and Photonics
    • /
    • v.25 no.5
    • /
    • pp.245-253
    • /
    • 2014
  • We report a realistic field-performance simulation for a new MWIR camera. It is designed for early detection of missile plumes over a distance range of a few hundred kilometers. Both imaging and radiometric performance of the camera are studied by using real-scale integrated ray tracing, including targets, atmosphere, and background scene models. The simulation results demonstrate that the camera would satisfy the imaging and radiometric performance requirements for field operation.
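
The radiometric side of such a long-range detection study can be approximated, very roughly, with a point-source link budget: irradiance at the aperture falls off with the square of the range and is attenuated by the atmosphere. The formula and numbers below are illustrative assumptions, not values or methods from the paper's integrated ray-tracing simulation.

```python
# Back-of-the-envelope radiometry: E = tau * I / R^2 for a point source of
# radiant intensity I [W/sr] at range R [m] through path transmittance tau.
def aperture_irradiance(intensity_w_per_sr, range_m, transmittance):
    return transmittance * intensity_w_per_sr / range_m ** 2

# Made-up example: 1 kW/sr plume, 300 km range, 0.3 atmospheric transmittance.
E = aperture_irradiance(1.0e3, 300e3, 0.3)   # ~3.3e-9 W/m^2 at the aperture
```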

Multiple Background Modeling using Local Binary Pattern (국부이진패턴을 이용한 다중 배경 모델링 방법)

  • Chae, Young-Soo;Kim, Hyun-Cheol;Kim, Whoi-Yul
    • Proceedings of the IEEK Conference
    • /
    • 2008.06a
    • /
    • pp.1001-1002
    • /
    • 2008
  • In this paper, we propose a multiple background modeling method based on local binary patterns (LBP) to handle sudden changes in illumination or scene. The proposed method maintains an independent background model for each scene and updates it continuously. When the proportion of detected foreground pixels exceeds a threshold, a suitable model is selected from the existing models, or a new model is created, to replace the current background model. This allows the method to respond immediately and efficiently to scene changes while maintaining the performance of the background model. Experimental results confirm that the proposed method works effectively on videos with sudden indoor illumination changes and on multiple views captured with a pan-tilt-zoom camera.
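
The local binary pattern feature on which these background models are built can be sketched compactly. The basic 3x3 LBP code below is a standard formulation; the paper's multi-model maintenance and switching logic is not shown.

```python
# Minimal LBP sketch: an 8-bit code per pixel comparing each neighbor to the center.
import numpy as np

def lbp_image(gray):
    """Compute the basic 3x3 LBP code for every interior pixel of a grayscale image."""
    g = gray.astype(np.int16)
    center = g[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= (neighbor >= center).astype(np.uint8) << bit
    return codes
```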


Implementation of the Color Matching Between Mobile Camera and Mobile LCD Based on RGB LUT (모바일 폰의 카메라와 LCD 모듈간의 RGB 참조표에 기반한 색 정합의 구현)

  • Son Chang-Hwan;Park Kee-Hyon;Lee Cheol-Hee;Ha Yeong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.43 no.3 s.309
    • /
    • pp.25-33
    • /
    • 2006
  • This paper proposes a device-independent color matching algorithm based on a 3D RGB lookup table (LUT) between a mobile camera and a mobile LCD (Liquid Crystal Display) to improve color fidelity. The proposed algorithm is composed of three steps: device characterization, gamut mapping, and 3D RGB LUT design. First, the mobile LCD is characterized using a sigmoidal function, unlike conventional methods such as GOG (Gain Offset Gamma) and S-curve modeling, based on the observed electro-optical transfer function of the mobile LCD. Next, the mobile camera is characterized by fitting the digital values of a GretagColor chart captured under daylight (D65) to tristimulus values (CIELAB) using polynomial regression. However, the CIELAB values estimated by polynomial regression can exceed the boundary of the CIELAB color space, so these values are corrected by linear compression of lightness and chroma. Then, gamut mapping is used to overcome the gamut difference between the mobile camera and the mobile LCD. Finally, for real-time processing, a 3D RGB LUT is built from these steps, and its performance is evaluated and compared with that of conventional methods.
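
The real-time part of such a pipeline, applying a precomputed 3D RGB LUT, is sketched below with trilinear interpolation. The LUT shape, the [0, 1] normalization, and the interpolation scheme are assumptions; the characterization and gamut-mapping steps that would fill the table are not shown.

```python
# Hedged sketch: map one RGB triple through an (N, N, N, 3) LUT by trilinear interpolation.
import numpy as np

def apply_3d_lut(rgb, lut):
    n = lut.shape[0]
    pos = np.clip(np.asarray(rgb, dtype=float), 0.0, 1.0) * (n - 1)
    i0 = np.floor(pos).astype(int)          # lower lattice index per channel
    i1 = np.minimum(i0 + 1, n - 1)          # upper lattice index, clamped
    f = pos - i0                            # fractional position inside the cell
    out = np.zeros(3)
    for corner in range(8):                 # blend the 8 surrounding LUT entries
        bits = [(corner >> k) & 1 for k in range(3)]
        idx = tuple(i1[k] if bits[k] else i0[k] for k in range(3))
        weight = np.prod([f[k] if bits[k] else 1.0 - f[k] for k in range(3)])
        out += weight * lut[idx]
    return out
```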

Scaling Attack Method for Misalignment Error of Camera-LiDAR Calibration Model (카메라-라이다 융합 모델의 오류 유발을 위한 스케일링 공격 방법)

  • Yi-ji Im;Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.33 no.6
    • /
    • pp.1099-1110
    • /
    • 2023
  • The recognition systems used in autonomous driving and robot navigation perform vision tasks such as object recognition, tracking, and lane detection after multi-sensor fusion to improve performance. Research on deep learning models based on the fusion of a camera and a LiDAR sensor is currently very active. However, deep learning models are vulnerable to adversarial attacks that modulate the input data. Existing attacks on multi-sensor-based autonomous driving recognition systems focus on suppressing obstacle detection by lowering the confidence score of the object recognition model, but they are limited in that the attack works only on the targeted model. For attacks on the sensor fusion stage, errors can cascade into the vision tasks performed after fusion, and this risk needs to be considered. In addition, an attack on LiDAR point cloud data, which is difficult to inspect visually, makes it hard to tell whether an attack has occurred. In this study, we propose an image-scaling-based attack method that reduces the accuracy of LCCNet, a camera-LiDAR calibration (fusion) model. The proposed method performs a scaling attack on the input LiDAR points. In attack-performance experiments with scaling of various sizes, the attack induced fusion errors of more than 77% on average.
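
At its core, the perturbation described above amounts to rescaling the LiDAR input before it reaches the calibration network. The toy sketch below only illustrates that idea; the actual attack on LCCNet operates on the model's projected inputs, and its evaluation protocol is more involved than this.

```python
# Conceptual sketch: uniformly scale LiDAR points and sweep the scale factor,
# feeding each perturbed cloud to the calibration model (model call not shown).
import numpy as np

def scale_point_cloud(points_xyz, scale):
    """Scale every 3D point about the sensor origin by a single factor."""
    return np.asarray(points_xyz, dtype=float) * scale

clean_cloud = np.random.rand(1000, 3)        # stand-in for a real LiDAR scan
for s in (0.8, 0.9, 1.1, 1.2):
    attacked = scale_point_cloud(clean_cloud, s)
    # Compare the calibration model's output extrinsics on `attacked` versus
    # `clean_cloud` to measure the induced misalignment error.
```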

Improved Polynomial Model for Multi-View Image Color Correction (다시점 영상 색상 보정을 위한 개선된 다항식 모델)

  • Jung, Jae-Il;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.38C no.10
    • /
    • pp.881-886
    • /
    • 2013
  • Even though a multi-view camera system can capture multiple images at different viewpoints, the color distributions of the captured multi-view images can be inconsistent. This problem decreases the quality of multi-view images and the performance of subsequent image processing. In this paper, we propose an improved polynomial model for effectively correcting the color inconsistency. The algorithm is fully automatic, requires no pre-processing, and considers occlusion regions of the multi-view image. We use a 5th-order polynomial model to define a mapping curve from the source view to the reference view. The estimated curve can be seriously distorted when the dynamic range of the extracted correspondences is low, so we additionally estimate first-order polynomial models for the bottom and top regions of the dynamic range. The colors of the source view are then modified via these models. The proposed algorithm shows good subjective results and achieves better objective quality than conventional color correction algorithms.
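
The mapping the abstract describes, a 5th-order polynomial for most of the range with linear fits at the extremes, can be sketched per channel as below. The thresholds for the bottom/top regions and the [0, 1] intensity normalization are illustrative assumptions, and correspondence extraction and occlusion handling are omitted.

```python
# Hedged sketch: fit a 5th-order source->reference intensity mapping, with
# first-order fits taking over at the low and high ends of the dynamic range.
import numpy as np

def fit_color_mapping(src_vals, ref_vals, low=0.1, high=0.9):
    poly5 = np.polynomial.Polynomial.fit(src_vals, ref_vals, 5)
    lo_mask, hi_mask = src_vals <= low, src_vals >= high
    lin_lo = np.polynomial.Polynomial.fit(src_vals[lo_mask], ref_vals[lo_mask], 1) if lo_mask.sum() > 1 else poly5
    lin_hi = np.polynomial.Polynomial.fit(src_vals[hi_mask], ref_vals[hi_mask], 1) if hi_mask.sum() > 1 else poly5

    def correct(x):
        y = poly5(x)
        y = np.where(x <= low, lin_lo(x), y)    # linear model near the bottom
        y = np.where(x >= high, lin_hi(x), y)   # linear model near the top
        return np.clip(y, 0.0, 1.0)
    return correct
```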

The Stabilization Loop Design for a Drone-Mounted Camera Gimbal System Using Intelligent-PID Controller (Intelligent-PID 제어기를 사용한 드론용 짐발 시스템의 안정화기 설계)

  • Byun, Gi-sig;Cho, Hyung-rae
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.15 no.1
    • /
    • pp.102-108
    • /
    • 2016
  • A flying drone generates vibrations over a wide range of frequencies, so a gimbal stabilization loop is required to obtain clean and accurate images from the camera attached to the drone in this environment. The drone gimbal system comprises the structure that supports the camera module and the stabilization loop, which tracks the commanded angle precisely while rejecting external vibration. This study developed a single-axis dynamic model for the stabilization loop of a drone gimbal system and applied both a classical PID controller and an intelligent PID controller. The stabilization loop was designed in MATLAB/Simulink, and the performance of each controller was compared through simulation. In particular, the intelligent PID controller can be designed with almost no knowledge of the dynamic model, and the simulations demonstrate that the commanded angle can be tracked without retuning the controller parameters even when the model characteristics change.
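
For reference, the classical PID law used as the baseline above can be written as a short discrete-time sketch. The gains, time step, and sign conventions are placeholders, and the intelligent-PID extension (online estimation of the unknown dynamics) is not reproduced here.

```python
# Minimal discrete PID controller for a single gimbal axis.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint_deg, measured_deg):
        """Return the control output that drives the gimbal angle toward the setpoint."""
        error = setpoint_deg - measured_deg
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: run the controller at 1 kHz toward a level (0-degree) setpoint.
controller = PID(kp=2.0, ki=0.5, kd=0.05, dt=0.001)
u = controller.step(setpoint_deg=0.0, measured_deg=1.5)
```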