• Title/Summary/Keyword: camera image

Search Results: 4,917

Analysis of Relationship between Objective Performance Measurement and 3D Visual Discomfort in Depth Map Upsampling (깊이맵 업샘플링 방법의 객관적 성능 측정과 3D 시각적 피로도의 관계 분석)

  • Gil, Jong In; Mahmoudpour, Saeed; Kim, Manbae
    • Journal of Broadcast Engineering / v.19 no.1 / pp.31-43 / 2014
  • A depth map is an important component for stereoscopic image generation. Since a depth map acquired from a depth camera has a low resolution, upsampling a low-resolution depth map to a high-resolution one has been studied over the past decades. Upsampling methods are evaluated with objective tools such as PSNR, sharpness degree, and blur metric. In addition, the subjective quality is compared using virtual views generated by DIBR (depth image based rendering). However, works analyzing the relation between depth map upsampling and stereoscopic images are relatively few. In this paper, we investigate the relationship between the subjective evaluation of stereoscopic images and the objective performance of upsampling methods using cross correlation and linear regression. Experimental results demonstrate that the correlation between edge PSNR and visual fatigue is the highest, while the blur metric has the lowest correlation. Further, from the linear regression we obtain the relative weights of the objective measurements, and we introduce a formula that can estimate the 3D performance of conventional or new upsampling methods.
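
As a rough illustration of the kind of analysis described above, the sketch below computes Pearson correlations between objective upsampling scores and subjective fatigue ratings and then fits a linear regression to obtain relative weights. It is a minimal Python sketch, not the authors' code, and every number and variable name is invented for the example.

    import numpy as np

    # Hypothetical scores for five upsampling methods (invented numbers).
    edge_psnr   = np.array([31.2, 33.5, 29.8, 35.1, 32.4])   # dB
    sharpness   = np.array([0.62, 0.71, 0.55, 0.78, 0.66])
    blur_metric = np.array([0.41, 0.35, 0.47, 0.30, 0.38])
    fatigue     = np.array([3.1, 2.4, 3.6, 2.0, 2.8])         # subjective visual-fatigue scores

    def pearson(x, y):
        # Normalized cross correlation at zero lag (Pearson coefficient).
        return np.corrcoef(x, y)[0, 1]

    for name, metric in [("edge PSNR", edge_psnr),
                         ("sharpness", sharpness),
                         ("blur metric", blur_metric)]:
        print(f"corr({name}, fatigue) = {pearson(metric, fatigue):+.3f}")

    # Linear regression: fatigue ~ w0 + w1*edge_psnr + w2*sharpness + w3*blur_metric
    X = np.column_stack([np.ones_like(fatigue), edge_psnr, sharpness, blur_metric])
    weights, *_ = np.linalg.lstsq(X, fatigue, rcond=None)
    print("relative weights (intercept first):", np.round(weights, 3))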

An Object Detection and Tracking System using Fuzzy C-means and CONDENSATION (Fuzzy C-means와 CONDENSATION을 이용한 객체 검출 및 추적 시스템)

  • Kim, Jong-Ho; Kim, Sang-Kyoon; Hang, Goo-Seun; Ahn, Sang-Ho; Kang, Byoung-Doo
    • Journal of Korea Society of Industrial Information Systems / v.16 no.4 / pp.87-98 / 2011
  • Detecting a moving object in video and tracking it are basic and necessary preprocessing steps in many video systems such as object recognition, context awareness, and intelligent visual surveillance. In this paper, we propose a method that can detect a moving object quickly and accurately under conditions where the background and lighting change in real time. Furthermore, our system robustly detects an object even when the target object is occluded by other objects. For effective detection, an eigenspace representation and Fuzzy C-means (FCM) clustering are combined, and a CONDENSATION algorithm is used to robustly track the detected object. First, training data collected from background images are linearly transformed using Principal Component Analysis (PCA). Second, an eigen-background is constructed from selected principal components that discriminate well between object and background. Next, an object is detected with FCM applied to the convolution of the eigenvectors from the previous steps with the input image. Finally, the object is tracked by feeding the coordinates of the detected object into the CONDENSATION algorithm. Images containing various moving objects at the same time are collected and used as training data so that the system, with a fixed camera, adapts to changes of light and background. Test results show that the proposed method robustly detects an object under changes of light and background and under partial movement of the object.
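
The eigen-background step above can be sketched with plain NumPy as follows. This is only a minimal illustration of a PCA-based background model (flagging pixels that the background subspace reconstructs poorly); the component count and threshold are assumptions.

    import numpy as np

    def build_eigen_background(bg_frames, n_components=8):
        # bg_frames: (N, H, W) grayscale background training images.
        n, h, w = bg_frames.shape
        X = bg_frames.reshape(n, -1).astype(np.float64)
        mean = X.mean(axis=0)
        # PCA via SVD of the centered data; keep the leading eigenvectors.
        _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
        return mean, vt[:n_components]             # (H*W,), (k, H*W)

    def detect_foreground(frame, mean, eigvecs, thresh=25.0):
        x = frame.reshape(-1).astype(np.float64) - mean
        # Project onto the eigen-background subspace, reconstruct, and keep
        # pixels with a large reconstruction residual as foreground.
        recon = eigvecs.T @ (eigvecs @ x)
        residual = np.abs(x - recon).reshape(frame.shape)
        return residual > thresh                    # boolean foreground mask

In the paper's pipeline, detection actually uses FCM on the convolution of the eigenvectors with the input image, and the detected coordinates are tracked with CONDENSATION (a particle filter); the residual thresholding here is only a simplified stand-in.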

Three-Dimensional Image Display System using Stereogram and Holographic Optical Memory Techniques (스테레오그램과 홀로그래픽 광 메모리 기술을 이용한 3차원 영상 표현 시스템)

  • 김철수;김수중
    • The Journal of Korean Institute of Communications and Information Sciences / v.27 no.6B / pp.638-644 / 2002
  • In this paper, we implemented a three-dimensional image display system using stereogram and holographic optical memory techniques, which can store many images and reconstruct them automatically. In this system, to store and reconstruct stereo images, the incident angle of the reference beam must be controlled in real time, so we used a BPH (binary phase hologram) and an LCD (liquid crystal display) to control the reference beam. The reference beams are obtained by Fourier transform of BPHs designed with an SA (simulated annealing) algorithm, and the BPHs are displayed on the LCD at 0.05-second intervals using application software to reconstruct the stereo images. The input images are displayed on the LCD without a polarizer/analyzer to maintain uniform beam intensities regardless of the brightness of the input images. The input images and BPHs are edited using application software (Photoshop) with the same recording-schedule time interval for storage. The reconstructed stereo images are acquired by capturing the output images with a CCD camera behind the analyzer, which transforms phase information into image brightness information. In the output plane, we used an LCD shutter synchronized to a monitor that displays alternating left- and right-eye images for depth perception. We demonstrated an optical experiment that repeatedly stores and reconstructs four stereo images in BaTiO$_3$ using the proposed holographic optical memory techniques.
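
For readers unfamiliar with SA-designed binary phase holograms, the sketch below shows a generic simulated-annealing loop that flips binary phase pixels so that the Fourier-plane intensity of the pattern approaches a normalized target. It is not the authors' design procedure; the cost function, cooling schedule, and all parameters are assumptions.

    import numpy as np

    def design_bph(target_intensity, n_iters=5000, t0=1.0, cooling=0.999, seed=0):
        # target_intensity: 2D array normalized so that it sums to 1.
        # The BPH is binary phase: each pixel is +1 or -1 (phase 0 or pi).
        rng = np.random.default_rng(seed)
        h, w = target_intensity.shape
        bph = rng.choice([-1.0, 1.0], size=(h, w))

        def cost(p):
            recon = np.abs(np.fft.fft2(p)) ** 2
            recon /= recon.sum()
            return np.sum((recon - target_intensity) ** 2)

        c, t = cost(bph), t0
        for _ in range(n_iters):
            i, j = rng.integers(h), rng.integers(w)
            bph[i, j] *= -1.0                       # propose flipping one pixel
            c_new = cost(bph)
            if c_new < c or rng.random() < np.exp((c - c_new) / t):
                c = c_new                           # accept the flip
            else:
                bph[i, j] *= -1.0                   # reject: undo the flip
            t *= cooling                            # cool the temperature
        return bph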

Vision-based Mobile Robot Localization and Mapping using fisheye Lens (어안렌즈를 이용한 비전 기반의 이동 로봇 위치 추정 및 매핑)

  • Lee Jong-Shill; Min Hong-Ki; Hong Seung-Hong
    • Journal of the Institute of Convergence Signal Processing / v.5 no.4 / pp.256-262 / 2004
  • A key capability of an autonomous mobile robot is to localize itself and build a map of the environment simultaneously. In this paper, we propose a vision-based localization and mapping algorithm for a mobile robot using a fisheye lens. To acquire high-level features with scale invariance, a camera with a fisheye lens facing the ceiling is attached to the robot. These features are used in map building and localization. As preprocessing, the input image from the fisheye lens is calibrated to remove radial distortion, and then labeling and convex hull techniques are used to segment the ceiling and wall regions of the calibrated image. In the initial map building process, features are calculated for each segmented region and stored in the map database. Features are continuously calculated for sequential input images and matched to the map; if some features are not matched, they are added to the map. This map matching and updating process continues until map building is finished. Localization is used during map building and when searching for the location of the robot on the map: the features calculated at the robot's position are matched to the existing map to estimate its real position, and the map database is updated at the same time. With the proposed method, the elapsed time for map building is within 2 minutes for a 50㎡ region, the positioning accuracy is ±13 cm, and the error of the robot's heading angle is ±3 degrees.

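The preprocessing steps named above (fisheye undistortion, labeling, convex hull) can be approximated with OpenCV as below. This is a generic sketch, not the paper's pipeline: the intrinsic matrix K and distortion vector D are placeholders from a hypothetical fisheye calibration, and the simple brightness-based ceiling segmentation is an assumption.

    import cv2
    import numpy as np

    def undistort_fisheye(img, K, D):
        # K: 3x3 intrinsics, D: 4-element fisheye distortion coefficients,
        # both obtained from a prior calibration step (not shown).
        return cv2.fisheye.undistortImage(img, K, D, Knew=K)

    def ceiling_hull(gray):
        # Rough ceiling/wall split: Otsu threshold, keep the largest bright
        # connected component, and return its convex hull as the ceiling region.
        _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        n_labels, labels = cv2.connectedComponents(binary)
        if n_labels < 2:
            return None
        sizes = [(labels == i).sum() for i in range(1, n_labels)]
        largest = 1 + int(np.argmax(sizes))
        pts = np.column_stack(np.nonzero(labels == largest))[:, ::-1]   # (x, y)
        return cv2.convexHull(pts.astype(np.int32))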

Laver(Kim) Thickness Measurement and Control System Design (해태(김)두께측정 및 조절 장치 설계)

  • Lee, Bae-Kyu; Choi, Young-Il; Kim, Jung-Hwa
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.11 / pp.226-233 / 2013
  • This study concerns the laver thickness measurement and control devices associated with an automatic laver (Kim) drying machine. After the water and steam are separated, a fixed amount of the mixture (water and laver) is put into the mold. In this process, to determine the size and thickness (weight) of the laver, a constant LED lamp light source and a vision sensor (camera) that captures the image are installed, and the measured image values are transmitted in real time to an embedded computer. A built-in measurement-and-control application displays the measurements of each channel separately on a monitor and sends servo signals to each channel so that the set target is maintained. Previously, the laver drying device relied on the experience of workers, who adjusted the laver thickness manually with a lever; by installing an actuator on the lever of each channel, the proposed device improves the product quality. In addition, it yields productivity gains and labor savings.
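
As a loose illustration of the measurement-and-control idea described above (not the authors' implementation), the sketch below estimates a per-channel thickness proxy from the transmitted LED backlight and produces a proportional servo correction toward a set point. The regions of interest, gain, and sign convention are all invented.

    import numpy as np

    def thickness_proxy(frame, channel_rois):
        # Thicker laver transmits less backlight, so lower mean brightness in a
        # channel's region of interest is taken as a thicker sheet.
        return {ch: 255.0 - frame[y0:y1, x0:x1].mean()
                for ch, (y0, y1, x0, x1) in channel_rois.items()}

    def servo_corrections(proxies, setpoint, gain=0.05):
        # Simple proportional control: positive output opens the lever (more
        # mixture, thicker sheet), negative output closes it. Gain and sign
        # are illustrative only.
        return {ch: gain * (setpoint - value) for ch, value in proxies.items()}

    # Hypothetical usage with a grayscale frame and two channels:
    # rois = {"ch1": (100, 200, 0, 320), "ch2": (100, 200, 320, 640)}
    # commands = servo_corrections(thickness_proxy(frame, rois), setpoint=120.0)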

Fiber Classification and Detection Technique Proposed for Applying on the PVA-ECC Sectional Image (PVA-ECC단면 이미지의 섬유 분류 및 검출 기법)

  • Kim, Yun-Yong; Lee, Bang-Yeon; Kim, Jin-Keun
    • Journal of the Korea Concrete Institute / v.20 no.4 / pp.513-522 / 2008
  • The fiber dispersion performance in fiber-reinforced cementitious composites is a crucial factor with respect to achieving the desired mechanical performance. However, evaluating the fiber dispersion performance in the composite PVA-ECC (Polyvinyl alcohol-Engineered Cementitious Composite) is extremely challenging because of the low contrast of PVA fibers against the cement-based matrix. In the present work, an enhanced fiber detection technique is developed and demonstrated. Using a fluorescence technique on the PVA-ECC, PVA fibers are observed as green dots in the cross-section of the composite. After the fluorescence image is captured with a charge-coupled device (CCD) camera through a microscope, the fibers are detected more accurately by employing a series of processes based on categorization, watershed segmentation, and morphological reconstruction.
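
The watershed and morphological reconstruction stages named above are standard image-processing operations; a minimal scikit-image sketch is shown below. It is not the authors' categorization procedure, and the seed offset, threshold, and peak spacing are assumptions.

    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import threshold_otsu
    from skimage.morphology import reconstruction
    from skimage.segmentation import watershed
    from skimage.feature import peak_local_max

    def detect_fibers(green_channel):
        # 1) Suppress background texture with morphological reconstruction
        #    (dilate a lowered seed under the original image).
        img = green_channel.astype(float)
        cleaned = reconstruction(np.clip(img - 20, 0, None), img, method='dilation')

        # 2) Threshold the cleaned image to get candidate fiber dots.
        binary = cleaned > threshold_otsu(cleaned)

        # 3) Split touching dots with a distance-transform watershed.
        distance = ndi.distance_transform_edt(binary)
        peaks = peak_local_max(distance, min_distance=3, labels=binary.astype(int))
        markers = np.zeros(distance.shape, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
        return watershed(-distance, markers, mask=binary)   # one label per fiber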

Face and Hand Tracking using MAWUPC algorithm in Complex background (복잡한 배경에서 MAWUPC 알고리즘을 이용한 얼굴과 손의 추적)

  • Lee, Sang-Hwan; An, Sang-Cheol; Kim, Hyeong-Gon; Kim, Jae-Hui
    • Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.2 / pp.39-49 / 2002
  • This paper proposes the MAWUPC (Motion Adaptive Weighted Unmatched Pixel Count) algorithm to track multiple objects of similar color. The MAWUPC algorithm is a new method that combines color and motion effectively. We apply it to face and hand tracking against a complex background in an image sequence captured with a single camera. The MAWUPC algorithm is an improvement of the previously proposed AWUPC (Adaptive Weighted Unmatched Pixel Count) algorithm, which is based on the concept of Moving Color that effectively combines color and motion information. The proposed algorithm incorporates a color transform for enhancing a specific color, the UPC (Unmatched Pixel Count) operation for detecting motion, and a discrete Kalman filter for reflecting motion. The proposed algorithm has advantages in reducing the adverse effect of occlusion among target objects and, at the same time, in rejecting static background objects whose color is similar to that of the tracked objects. This paper shows the efficiency of the proposed MAWUPC algorithm through face and hand tracking experiments on several image sequences with complex backgrounds, face-hand occlusion, and crossing hands.
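
The UPC operation mentioned above is essentially a block-wise count of pixels whose intensity changed between consecutive frames. The sketch below shows that counting step plus an invented color-weighting stage; the actual MAWUPC color transform and Kalman-filter motion prediction are not reproduced, and the block size, threshold, and mixing factor are assumptions.

    import numpy as np

    def unmatched_pixel_counts(prev, curr, diff_thresh=15, block=8):
        # Count, per block, the pixels whose grayscale value changed by more
        # than diff_thresh between two consecutive frames.
        unmatched = np.abs(curr.astype(int) - prev.astype(int)) > diff_thresh
        h, w = unmatched.shape
        h, w = h - h % block, w - w % block
        return (unmatched[:h, :w]
                .reshape(h // block, block, w // block, block)
                .sum(axis=(1, 3)))                  # (h/block, w/block) counts

    def weighted_counts(counts, color_score, alpha=0.7):
        # Illustrative weighting: emphasize blocks whose color resembles the
        # target (e.g. a skin-color likelihood in [0, 1]) before thresholding.
        norm = counts / max(counts.max(), 1)
        return alpha * norm + (1.0 - alpha) * color_score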

Auto Exposure Control System using Variable Time Constants (가변 시상수를 이용한 자동 노출제어 시스템)

  • Kim, Hyun-Sik; Lee, Sung-Mok; Jang, Won-Woo; Ha, Joo-Young; Kim, Joo-Hyun; Kang, Bong-Soon; Lee, Gi-Dong
    • Journal of the Korea Institute of Information and Communication Engineering / v.11 no.2 / pp.257-264 / 2007
  • In order to obtain a fine picture, a camera has many convenient functions; representative ones are Auto Focus (AF), Auto White Balance (AWB), and Auto Exposure (AE). In this paper, we present a new algorithm for an Auto Exposure control system, one of these useful functions. The proposed Auto Exposure control algorithm is based on an IIR filter with a variable time constant. First, in order to establish the standards of exposure control, we compare the change of picture luminance with the luminance of an object in the Zone System. Second, we build an ideal luminance characteristic graph from these results. Finally, we find the right exposure value by comparing the ideal luminance characteristic graph with the current exposure of the scene. To reach a suitable exposure state, we make use of an IIR filter instead of a conventional method using a micro-controller. The proposed system therefore has a simple structure, and we use it in a compact image sensor module for handheld devices.
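
A first-order IIR filter with a variable time constant, as referred to above, can be written in a few lines. The sketch below is only an illustration of the idea (switching between a fast and a slow smoothing factor depending on the luminance error); it does not reproduce the paper's Zone-system characteristic graph, and all constants are assumptions.

    def update_exposure(current_exposure, target_luma, measured_luma,
                        fast_alpha=0.5, slow_alpha=0.05, error_gate=0.25):
        # Relative luminance error of the current frame.
        error = (target_luma - measured_luma) / max(target_luma, 1e-6)
        # Crude proportional guess at the exposure that would cancel the error.
        desired = current_exposure * (1.0 + error)
        # Variable time constant: converge quickly on large errors, smooth
        # slowly on small ones to avoid visible flicker.
        alpha = fast_alpha if abs(error) > error_gate else slow_alpha
        return (1.0 - alpha) * current_exposure + alpha * desired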

Comparison of DEM Accuracy and Quality over Urban Area from SPOT, EOC and IKONOS Stereo Pairs (SPOT, EOC, IKONOS 스테레오 영상으로부터 생성된 도심지역 DEM의 정확도 및 성능 비교분석)

  • 임용조;김태정
    • Korean Journal of Remote Sensing / v.18 no.4 / pp.221-231 / 2002
  • In this study we applied a DEM generation algorithm developed in-house to satellite images at various resolutions and discussed the results. We tested SPOT images at 10 m resolution, EOC images at 6.6 m, and IKONOS images at 1 m resolution. These images cover the same urban area in Daejeon city. For the camera model, we used Gupta & Hartley's (1997) DLT model for all three image sets. We carried out accuracy assessment using USGS DTED for SPOT and EOC and 23 check points for IKONOS. The assessment showed that the SPOT DEM had about 38 m RMS error, the EOC DEM 12 m RMS error, and the IKONOS DEM 6.5 m RMS error. In terms of image resolution, the SPOT and EOC DEM errors correspond to 2∼4 pixels, whereas the IKONOS DEM error corresponds to 6∼7 pixels; in pixel terms, the IKONOS DEM contains larger errors. However, in the IKONOS DEM, individual buildings, apartments, and major roads are identifiable. All three DEMs contained errors due to height discontinuity, occlusion, and shadow. These experiments show that our algorithm can generate an urban DEM from 1 m resolution imagery, but that the algorithm needs to be improved to minimize the effects of occlusion and building shadows on DEMs.
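
The RMS figures above are straightforward to reproduce in principle: sample the generated DEM at reference points and compare heights. The sketch below is a generic check-point evaluation, not the authors' assessment code; the array layout and units are assumptions.

    import numpy as np

    def dem_rmse(dem, checkpoints):
        # checkpoints: iterable of (row, col, reference_height_m); the DEM is
        # sampled at each point and compared against the reference height.
        errors = [dem[r, c] - z_ref for r, c, z_ref in checkpoints]
        return float(np.sqrt(np.mean(np.square(errors))))

    # Expressed in pixels of the source imagery, as in the abstract:
    # rmse_pixels = dem_rmse(dem, points) / ground_sample_distance_m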

Estimation of the Original Location of Haechi (Haetae) Statues in Front of Gwanghwamun Gate Using Archival Photos from Early 1900s and Newly Taken Photos by Image Analysis (1900년대 초반의 기록사진과 디지털 카메라 사진분석을 활용한 광화문 앞 해치상의 원위치 추정)

  • Oh, Hyundok; Nam, Ho Hyun; Yoo, Yeongsik; Kim, Jung Gon; Kang, Kitaek; Yoo, Woo Sik
    • Journal of Conservation Science / v.37 no.5 / pp.491-504 / 2021
  • Gwanghwamun Gate of Gyeongbokgung Palace was dismantled and relocated during the Japanese colonial period, destroyed during the Korean War, reconstructed with reinforced concrete in 1968, and finally erected at its present location in 2010. A pair of Haechi statues located in front of Gwanghwamun was dismantled and relocated several times, and the statues have yet to be returned precisely to their original positions. This study assesses the historical accuracy of their current placement under the Gwanghwamun Square Restructuring Project of the Seoul Metropolitan Government and the Cultural Heritage Administration based on archival photos from the early 1900s, and proposes a method to estimate the original positions of the Haechi through image analysis of contemporary photographs and recent digital camera photos. We estimated the original position of the Haechi before the Japanese colonial period by identifying the shooting location of the archival photo and reproducing contemporary photographs by calculating the angle and distance to the Haechi from the shooting location. The leftmost and rightmost Haechi were originally located about 9.6 m to the east and 7.4 m to the north and about 1.9 m to the west and 8.0 m to the north, respectively, of their current location indicators. As the first attempt to determine the original location of a building and its accessories using archival photos, this study launches a new scientific methodology for the restoration of cultural properties.
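
The geometric core of the estimation above is converting an angle and a distance measured from the reconstructed shooting location into ground offsets. The sketch below shows only that conversion under an assumed bearing convention; recovering the shooting location and angles from the archival photographs is the substantive part of the study and is not reproduced here.

    import math

    def offset_from_camera(distance_m, bearing_deg):
        # Convert a distance and a bearing (degrees clockwise from north) into
        # east/north offsets in metres, measured from the shooting location.
        east = distance_m * math.sin(math.radians(bearing_deg))
        north = distance_m * math.cos(math.radians(bearing_deg))
        return east, north

    # Example: a point 10 m away at a bearing of 45 degrees lies about
    # 7.07 m east and 7.07 m north of the camera position.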