• Title/Summary/Keyword: 입체 오차 지도 (stereoscopic error map)

The usability analysis of the Ray-sum technique and SSD (Shaded Surface display) technique in stomach CT Scan (위장 CT 검사에서 Ray-sum 기법과 SSD(Shaded Surface Display) 기법의 유용성 분석)

  • Kim, Hyun-Joo;Cho, Jae-Hwan;Song, Hoon
    • Journal of Digital Contents Society / v.12 no.2 / pp.151-156 / 2011
  • This study evaluated images produced by the Ray-sum and Shaded Surface Display (SSD) techniques, two post-scan CT image reconstruction methods, to confirm their usefulness for providing three-dimensional information in patients with stomach cancer. Raw data were acquired from 20 stomach cancer patients with a 64-MDCT scanner and then reconstructed. Both the Ray-sum and SSD reconstructions were found to depict the relevant anatomical structures accurately. In the precision assessment, the lesion locations in the Ray-sum and SSD reconstructions largely coincided with those found by gastrofiberscopy, although discrepancies of more than 6 cm relative to the gastrofiberscopic findings were observed in some cases. In addition, the image interpretations of the lesions showed a high degree of agreement with the endoscopic and pathological findings.

Pre-processing of Depth map for Multi-view Stereo Image Synthesis (다시점 영상 합성을 위한 깊이 정보의 전처리)

  • Seo Kwang-Wug;Han Chung-Shin;Yoo Ji-Sang
    • Journal of Broadcast Engineering / v.11 no.1 s.30 / pp.91-99 / 2006
  • Pre-processing is an image processing step used to enhance image quality or to convert a given image into a form suitable for a specific purpose. An 8-bit depth map obtained by a depth camera usually contains many noise components caused by the characteristics of the camera, and its edges are more distorted by the surface properties of the object and by the illumination conditions than the corresponding edges in the RGB texture image. Noise-removal filters can reduce these noise components, but they cannot properly recover the distorted edges of the depth map. In this paper, we propose an algorithm that both reduces noise and enhances the quality of the depth-map edges by exploiting the edges of the RGB texture image. Consequently, errors in the multi-view stereo image synthesis process are reduced.
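A minimal sketch of this general idea (not the authors' algorithm): the depth map is median-filtered to suppress noise, and depth values in a narrow band around edges detected in the registered RGB texture are smoothed again so that depth discontinuities follow the more reliable texture edges. File names, filter sizes, and thresholds below are illustrative assumptions.

```python
import cv2
import numpy as np

depth = cv2.imread("depth_8bit.png", cv2.IMREAD_GRAYSCALE)   # 8-bit depth map
rgb = cv2.imread("texture.png")                               # registered RGB texture

# 1) remove speckle-like depth noise
denoised = cv2.medianBlur(depth, 5)

# 2) edges from the (more reliable) RGB texture
gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

# 3) re-smooth depth only inside a narrow band around the texture edges,
#    so residual noise near depth discontinuities is suppressed
band = cv2.dilate(edges, np.ones((3, 3), np.uint8), iterations=2) > 0
local_med = cv2.medianBlur(denoised, 9)
refined = np.where(band, local_med, denoised).astype(np.uint8)

cv2.imwrite("depth_refined.png", refined)
```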

Hybrid Camera System with a TOF and DSLR Cameras (TOF 깊이 카메라와 DSLR을 이용한 복합형 카메라 시스템 구성 방법)

  • Kim, Soohyeon;Kim, Jae-In;Kim, Taejung
    • Journal of Broadcast Engineering / v.19 no.4 / pp.533-546 / 2014
  • This paper presents a method for constructing a hybrid (color and depth) camera system using photogrammetric techniques. A TOF depth camera is attractive because it measures the range to objects in real time, but it suffers from low resolution and from noise that depends on surface conditions. It is therefore necessary not only to correct depth noise and distortion but also to combine the depth camera with a high-resolution texture source when generating a 3D model. We estimated the geometry of the hybrid camera with a conventional relative orientation algorithm and performed texture mapping by backward mapping based on the collinearity condition. The proposed method was compared with another algorithm in terms of model accuracy and texture-mapping quality, and it yielded higher model accuracy.
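As a hedged illustration of the backward-mapping step under the collinearity condition (a sketch, not the paper's implementation): a 3D point reconstructed from the TOF depth data is projected into the DSLR image with an assumed relative orientation (R, t) and interior parameters (f, cx, cy), and the colour is sampled at that position.

```python
import numpy as np

def project_collinear(X, R, t, f, cx, cy):
    """Project a 3D point into the DSLR image plane (pinhole collinearity)."""
    Xc = R @ np.asarray(X) + t          # TOF frame -> DSLR camera frame
    u = f * Xc[0] / Xc[2] + cx          # collinearity condition, x component
    v = f * Xc[1] / Xc[2] + cy          # collinearity condition, y component
    return u, v

def backward_map(points_3d, rgb_image, R, t, f, cx, cy):
    """Assign a DSLR colour to each 3D point reconstructed from the depth map."""
    h, w = rgb_image.shape[:2]
    colours = np.zeros((len(points_3d), 3), dtype=rgb_image.dtype)
    for i, X in enumerate(points_3d):
        u, v = project_collinear(X, R, t, f, cx, cy)
        ui, vi = int(round(u)), int(round(v))
        if 0 <= ui < w and 0 <= vi < h:  # keep only points landing inside the image
            colours[i] = rgb_image[vi, ui]
    return colours
```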

A study on the simplification of HRTF within low frequency region (저역 주파수 영역에서 HRTF의 간략화에 관한 연구)

  • Lee, Chai-Bong
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.5 no.6 / pp.581-587 / 2010
  • In this study, we investigated how simplifying the low-frequency region of the Head-Related Transfer Function (HRTF) affects sound localization. HRTFs were measured and analyzed for this purpose. The standard deviation of the HRTFs showed that the directional dependence in the low-frequency region is smaller than in the high-frequency region, which suggests that simplification is possible at low frequencies. The simplification was performed by flattening the low-frequency amplitude characteristics through the insertion of a high-pass filter whose cutoff is set to the boundary frequency. Listening experiments were then performed to evaluate the simplified HRTFs. The results showed that direction perception, measured by the sound localization error, was not influenced by this simplification of the HRTF frequency characteristics, and that the front-back confusion rate was not affected by simplification below 1 kHz of the HRTF. We therefore conclude that sound localization is not affected by simplifying the HRTF frequency characteristics below 1 kHz. This result is expected to be useful for reducing the size of speech information without degrading the directional characteristics of the speech signal.
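A rough frequency-domain sketch of the flattening idea (the paper inserts a high-pass filter; here the low-frequency magnitude of a measured head-related impulse response is simply set to the level found at the boundary frequency, which is an assumption of this sketch, not the authors' procedure):

```python
import numpy as np

def simplify_hrtf(hrir, fs=44100, f_boundary=1000.0):
    """Flatten the HRTF magnitude below f_boundary, keep it unchanged above."""
    n = len(hrir)
    H = np.fft.rfft(hrir)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    low = freqs < f_boundary
    # amplitude at the boundary frequency used as the flat low-frequency level
    flat_level = np.abs(H[~low][0])

    H_simpl = H.copy()
    H_simpl[low] = flat_level * np.exp(1j * np.angle(H[low]))  # flat magnitude, original phase

    return np.fft.irfft(H_simpl, n=n)
```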

An Analysis of Visual Fatigue Caused From Distortions in 3D Video Production (3D 영상의 제작 왜곡이 시청 피로도에 미치는 영향 분석)

  • Jang, Hyung-Jun;Kim, Yong-Goo
    • Journal of Broadcast Engineering / v.17 no.1 / pp.1-16 / 2012
  • To improve the workflow of 3D video production, this paper analyzes the visual fatigue caused by distortions introduced at the 3D production stage through a set of subjective visual assessment tests. To establish objective indicators for these tests, the various distortions arising in production are categorized into 7 representative visual-fatigue-producing factors, and 4 test video clips are produced that combine different amounts of camera movement and object movement in the scene. Each test video is then distorted according to each of the 7 factors at 7 levels of severity, resulting in 196 five-second video clips. Using these materials and following Recommendation ITU-R BT.1438, subjective assessment tests were conducted with 101 participants. The results provide the relative importance and the tolerance limit of each visual-fatigue-producing factor, corresponding to the various distortions encountered in 3D video production.

Development of Three-Dimensional Gamma-ray Camera (방사선원 3차원 위치탐지를 위한 방사선 영상장치 개발)

  • Lee, Nam-Ho;Hwang, Young-Gwan;Park, Soon-Yong
    • Journal of the Korea Institute of Information and Communication Engineering / v.19 no.2 / pp.486-492 / 2015
  • A radiation-source imaging system is essential for responding to radiation leakage accidents and minimizing damage from radioactive materials, and it is expected to play an important role in nuclear plant decommissioning. In this study, the stereoscopic camera principle was applied to develop a new radiation imaging device that can extract three-dimensional position information of a radiation source. This three-dimensional radiation imaging device (K3-RIS) has a compact structure consisting only of a radiation sensor, a CCD camera, and a pan-tilt unit. Its features are the acquisition of stereoscopic radiation images through controlled changes of the sensor position, high-resolution detection through continuous scan-mode control, and stereoscopic image signal processing. The performance of K3-RIS was tested with a gamma-ray source (Cs-137) in a radiation calibration facility, and the results showed a position error of less than 3% regardless of the distance to the source.
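For illustration, the stereoscopic principle named in the abstract reduces, in its simplest two-view form, to range from disparity; the sketch below is a generic example with assumed numbers, not the K3-RIS implementation.

```python
def triangulate_range(x_left, x_right, focal_px, baseline_m):
    """Range to a point seen at pixel columns x_left / x_right in two views."""
    disparity = x_left - x_right                  # pixels
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_m / disparity      # metres

# e.g. a source imaged 12.5 px apart with f = 800 px and a 0.5 m baseline
print(triangulate_range(412.5, 400.0, 800.0, 0.5))   # 32.0 m
```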

Development and Comparative Analysis of Mapping Quality Prediction Technology Using Orientation Parameters Processed in UAV Software (무인기 소프트웨어에서 처리된 표정요소를 이용한 도화품질 예측기술 개발 및 비교분석)

  • Lim, Pyung-Chae;Son, Jonghwan;Kim, Taejung
    • Korean Journal of Remote Sensing / v.35 no.6_1 / pp.895-905 / 2019
  • Commercial Unmanned Aerial Vehicle (UAV) image processing software products currently used in the industry provide camera calibration information and block bundle adjustment accuracy, but they do not report the mapping accuracy achievable from the input UAV images. In this paper, the mapping quality is calculated from the orientation parameters produced by UAV image processing software, and these orientation parameters are loaded into a digital photogrammetric workstation (DPW) to verify the reliability of the calculated quality. Mapping quality is defined in terms of three accuracies: Y-parallax, relative model accuracy, and absolute model accuracy. The Y-parallax indicates whether comfortable stereoscopic viewing of a stereo pair is possible; the relative model accuracy is the relative bundle adjustment accuracy between stereo pairs in the model coordinate system; and the absolute model accuracy is the bundle adjustment accuracy in the absolute coordinate system. For the experiments, 723 images with a GSD of 5 cm, acquired by a rotary-wing UAV over an urban area, were used to analyze the mapping quality. The relative model accuracy predicted by the proposed technique and the maximum error observed in the DPW agreed closely, with differences of less than 0.11 m; similarly, the maximum error of the absolute model accuracy predicted by the proposed technique was less than 0.16 m.
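A hedged sketch of the Y-parallax check named above: after relative orientation, corresponding points of a stereo pair should lie on the same image row, so the residual y-difference of tie points is a direct quality measure for stereo viewing. The input arrays are assumed tie-point image coordinates.

```python
import numpy as np

def y_parallax_stats(pts_left, pts_right):
    """pts_left, pts_right: (N, 2) arrays of matched image coordinates (x, y)."""
    dy = pts_left[:, 1] - pts_right[:, 1]          # y-parallax of each tie point
    return {"rmse": float(np.sqrt(np.mean(dy ** 2))),
            "max_abs": float(np.max(np.abs(dy)))}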

Urban Building Change Detection Using nDSM and Road Extraction (nDSM 및 도로망 추출 기법을 적용한 도심지 건물 변화탐지)

  • Jang, Yeong Jae;Oh, Jae Hong;Lee, Chang No
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.3 / pp.237-246 / 2020
  • Recently, as high-resolution satellite data have become available, frequent DSM (Digital Surface Model) generation over urban areas has become possible, and change detection at the level of individual buildings using high-resolution DSMs has been studied with various methods. To detect building changes with DSMs, a DSM must first be generated from a stereo satellite image pair. The change detection method based on the D-DSM (Differential DSM) uses the elevation difference between two DSMs from different dates, but it is difficult to apply a precise vertical threshold because the two DSMs may contain elevation errors. In this study, we focus on detecting urban structure changes using the D-nDSM (Differential nDSM), based on the nDSM (Normalized DSM), which expresses only the heights of structures and buildings above the terrain. We also reduce noise with morphological filtering and, to improve the extraction of roadside buildings, exploit an urban road network extracted from the nDSM. Experiments were conducted on high-resolution stereo satellite images from two dates, and the results were compared for the D-DSM method, the D-nDSM method, and the D-nDSM method combined with road extraction. The D-DSM method achieved an accuracy of about 30% to 55% depending on the vertical threshold; the D-nDSM approach achieved 59% without and 77.9% with morphological filtering; and the D-nDSM method with road extraction reached a change detection accuracy of 87.2%.
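A minimal sketch of the D-nDSM differencing and filtering step (thresholds and array names are assumptions, and the road-extraction step is omitted): nDSM = DSM - DTM keeps only above-ground heights, the difference of the two epochs is thresholded, and morphological opening removes small noise blobs.

```python
import numpy as np
from scipy import ndimage

def building_change_mask(dsm_t1, dtm_t1, dsm_t2, dtm_t2, height_thresh=2.5):
    ndsm_t1 = dsm_t1 - dtm_t1                  # normalized DSM, epoch 1
    ndsm_t2 = dsm_t2 - dtm_t2                  # normalized DSM, epoch 2
    d_ndsm = ndsm_t2 - ndsm_t1                 # differential nDSM

    changed = np.abs(d_ndsm) > height_thresh   # candidate change pixels

    # morphological opening suppresses isolated noise pixels
    return ndimage.binary_opening(changed, structure=np.ones((3, 3)))
```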

Representation of Population Distribution based on Residential Building Types by using the Dasymetric Mapping in Seoul (대시메트릭 매핑 기법을 이용한 서울시 건축물별 주거인구밀도의 재현)

  • Lee, Sukjoon;Lee, Sang Wook;Hong, Bo Yeong;Eom, Hongmin;Shin, Hyu-Seok;Kim, Kyung-Min
    • Spatial Information Research / v.22 no.3 / pp.89-99 / 2014
  • The aim of this study is to represent the residential population distribution in Seoul, Korea, more precisely by means of dasymetric mapping. Dasymetric mapping is a method that derives a detailed spatial distribution from coarsely aggregated statistical data by using ancillary spatial data related to the main data. Two datasets are used here: the 2010 population data aggregated by output area in Seoul as the main data, and building footprint data, including register information, as the ancillary spatial data. Using the binary method, residential buildings are extracted as the areas where residents actually live; a regression method is then used to estimate weights for the population density according to building type and gross floor area. Finally, the three-dimensional density of the residential population is reproduced and a detailed dasymetric map is drawn. The result is a more realistic model of the population distribution and a more accurate population map of Seoul, which can be applied to various future studies on regional population.
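A hedged sketch of the allocation step: the census population of an output area is distributed over its residential buildings in proportion to gross floor area weighted by a building-type coefficient. The weights below are arbitrary example values; the paper estimates them by regression.

```python
TYPE_WEIGHT = {"apartment": 1.0, "multi_family": 0.8, "detached": 0.6}  # assumed weights

def allocate_population(area_population, buildings):
    """buildings: list of dicts with 'type' and 'gross_floor_area' (m^2)."""
    weights = [TYPE_WEIGHT.get(b["type"], 0.0) * b["gross_floor_area"]
               for b in buildings]
    total = sum(weights)
    if total == 0:
        return [0.0] * len(buildings)
    return [area_population * w / total for w in weights]

# e.g. 300 residents of one output area split over three buildings
print(allocate_population(300, [
    {"type": "apartment",    "gross_floor_area": 5000},
    {"type": "multi_family", "gross_floor_area": 1200},
    {"type": "detached",     "gross_floor_area": 300},
]))
```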

Pose Transformation of a Frontal Face Image by Invertible Meshwarp Algorithm (역전가능 메쉬워프 알고리즘에 의한 정면 얼굴 영상의 포즈 변형)

  • 오승택;전병환
    • Journal of KIISE: Software and Applications / v.30 no.1_2 / pp.153-163 / 2003
  • In this paper, we propose a new image-based rendering (IBR) technique for transforming the pose of a face using only a frontal face image and its mesh, without a three-dimensional model. To substitute for the 3D geometric model, we first build a standard mesh set of a reference person for several views of the face: front, left, right, half-left, and half-right. For a given person, only the frontal mesh of the frontal face image to be transformed is composed; the meshes for the other views are generated automatically from the standard mesh set. The frontal face image is then geometrically transformed to produce different views using the Invertible Meshwarp Algorithm, which is improved to tolerate the overlap or inversion of neighboring vertices in the mesh. The same warping algorithm is used to generate opening and closing effects for the eyes and mouth. To evaluate the transformation performance, we captured image sequences of 10 persons rotating their heads horizontally and measured the location error of 14 main features between corresponding original and transformed facial images; that is, for each feature point we compared its distance from the center of both eyes in the original and in the transformed image and averaged the differences. The resulting average feature location error is about 7.0% of the distance from the center of both eyes to the center of the mouth.
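A sketch of the evaluation metric described above (function and argument names are assumptions): each of the 14 features is compared by its distance from the eye-center point in the original and in the transformed image, and the mean difference is reported as a percentage of the eye-center-to-mouth distance.

```python
import numpy as np

def feature_location_error(orig_pts, warped_pts, eye_c_orig, eye_c_warp, mouth_c_orig):
    """orig_pts, warped_pts: (14, 2) pixel coordinates of the matched features."""
    d_orig = np.linalg.norm(orig_pts - eye_c_orig, axis=1)     # eye-center to feature, original
    d_warp = np.linalg.norm(warped_pts - eye_c_warp, axis=1)   # eye-center to feature, transformed
    mean_diff = np.mean(np.abs(d_orig - d_warp))

    # normalize by the eye-center-to-mouth distance in the original image
    norm = np.linalg.norm(np.asarray(mouth_c_orig) - np.asarray(eye_c_orig))
    return 100.0 * mean_diff / norm                            # error in percent
```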