• Title/Summary/Keyword: Image-Based Rendering


Fusion technology in applied geophysics

  • Matsuoka Toshifumi
    • 한국지구물리탐사학회:학술대회논문집 / 2003.11a / pp.21-26 / 2003
  • The visualization of three-dimensional geophysical data is forcing a revolution in the way of working, allowing the discovery and production of hydrocarbons at much lower costs than previously thought possible. Many aspects of this revolution happen behind the scenes, such as the database structure, the storage and retrieval of data, and the exchange of data among programs. The user side has also changed in how the interpreter (or manager, or processor) actually looks at and interacts with the data. One example is the use of opacity in volume rendering: its judicious application can assist in imaging geologic features in three-dimensional seismic data. This development of new technology is based on a philosophy of synergy among the disciplines of the oil industry. Group interaction fostered by large-room visualization environments enables the integration of disciplines we strive for, by putting the petrophysicist, geologist, geophysicist, and reservoir engineer in one place, looking at one image together, without jargon or geography separating them. These tools developed in the oil industry can also be applied in civil engineering, for example in geological and geophysical surveys prior to construction. Many examples will show how three-dimensional geophysical technology might revolutionize the oil business now and in the future. This change can be considered a fusion process at the data, information, and knowledge levels.
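The opacity idea mentioned above can be pictured with a minimal sketch: a hand-rolled transfer function that makes only a narrow band of data values opaque, composited front-to-back along one ray. All names and the band parameters here are hypothetical illustrations, not anything from the paper:

```python
import numpy as np

def opacity_tf(v, center=0.7, width=0.1):
    """Map a normalized data value to opacity, emphasizing a narrow
    value band (a hypothetical transfer function; real tools expose
    user-editable opacity curves)."""
    return np.clip(1.0 - np.abs(v - center) / width, 0.0, 1.0)

def composite_ray(samples, center=0.7, width=0.1):
    """Front-to-back alpha compositing of scalar samples along one ray."""
    color, alpha = 0.0, 0.0
    for v in samples:
        a = opacity_tf(v, center, width)
        color += (1.0 - alpha) * a * v   # use the value itself as grey level
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                 # early ray termination
            break
    return color, alpha
```

With a narrow band around 0.7, a sample at 0.7 is fully opaque and hides everything behind it, while a sample at 0.5 contributes nothing; this is how opacity isolates one amplitude range of a seismic volume.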


Segmentation and Visualization of Left Ventricle in MR Cardiac Images (자기공명심장영상의 좌심실 분할과 가시화)

  • 정성택;신일홍;권민정;박현욱
    • Journal of Biomedical Engineering Research / v.23 no.2 / pp.101-107 / 2002
  • This paper presents a segmentation algorithm to extract the endocardial and epicardial contours of the left ventricle in MR cardiac images. The algorithm is based on a generalized gradient vector flow (GGVF) snake and a prediction of the initial contour (PIC). In particular, the proposed algorithm uses physical characteristics of the endocardial and epicardial contours, cross-profile correlation matching (CPCM), and a mixed interpolation model. In the experiments, the proposed method is applied to short-axis MR cardiac image sets acquired on Siemens, Medinus, and GE MRI systems. The experimental results show that the proposed algorithm can extract acceptable epicardial and endocardial walls. We calculate quantitative parameters from the segmented results, which are displayed graphically. The segmented left ventricle is visualized volumetrically by surface rendering. The proposed algorithm is implemented on Windows using Visual C++.
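The cross-profile correlation matching step can be pictured as sliding a 1-D intensity profile taken across the contour against a reference profile and keeping the best-correlated shift. The sketch below is illustrative only; the function names and the clip-based boundary handling are our assumptions, not the paper's implementation:

```python
import numpy as np

def ncc(p, q):
    """Normalized cross-correlation of two 1-D intensity profiles."""
    p = (p - p.mean()) / (p.std() + 1e-12)
    q = (q - q.mean()) / (q.std() + 1e-12)
    return float(np.mean(p * q))

def best_shift(profile, reference, max_shift=5):
    """Slide `profile` against `reference` and return the integer shift
    with the highest correlation -- the core idea of profile matching."""
    n = len(reference)
    best, best_s = -2.0, 0
    for s in range(-max_shift, max_shift + 1):
        idx = np.clip(np.arange(n) + s, 0, len(profile) - 1)
        c = ncc(profile[idx], reference)
        if c > best:
            best, best_s = c, s
    return best_s
```

In a snake-based segmenter, a shift like this would nudge each contour point toward the position whose profile best matches the previous slice or phase.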

A Voxelization for Geometrically Defined Objects Using Cutting Surfaces of Cubes (큐브의 단면을 이용한 기하학적인 물체의 복셀화)

  • Gwun, Ou-Bong
    • The KIPS Transactions:PartA / v.10A no.2 / pp.157-164 / 2003
  • Volume graphics has recently received much attention as a medical image analysis tool. In visualization based on volume graphics, a process called voxelization transforms geometrically defined objects into volumetric objects, which makes it possible to volume-render geometrically defined data together with sampled data. This paper suggests a voxelization method using the cutting surfaces of cubes, implements the method on a PC, and evaluates it with simple geometric modeling data to explore the validity of the method. The method can compute an exact normal vector for each voxel, produces no holes among voxels, and supports multi-resolution representation.
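As a toy illustration of voxelization (not the paper's cutting-surface algorithm, which additionally analyzes how the surface cuts each cube), the sketch below voxelizes an implicitly defined sphere on a regular grid. Because the object stays geometrically defined, the exact surface normal is available analytically at any voxel, which is the property the abstract highlights:

```python
import numpy as np

def voxelize_sphere(n=16, r=0.4):
    """Voxelize the sphere of radius r on an n^3 grid over [-0.5, 0.5]^3:
    a voxel is set when its center lies inside the implicit surface."""
    xs = (np.arange(n) + 0.5) / n - 0.5          # voxel-center coordinates
    X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
    return X**2 + Y**2 + Z**2 <= r**2

def normal_at(p):
    """Exact unit normal of the sphere at point p, from the analytic
    gradient of x^2 + y^2 + z^2."""
    p = np.asarray(p, dtype=float)
    return p / np.linalg.norm(p)
```

A sampled volume would have to estimate this normal by finite differences; keeping the geometric definition alongside the voxels is what makes the normal exact.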

Development of High Dynamic Range Panorama Environment Map Production System Using General-Purpose Digital Cameras (범용 디지털 카메라를 이용한 HDR 파노라마 환경 맵 제작 시스템 개발)

  • Park, Eun-Hea;Hwang, Gyu-Hyun;Park, Sang-Hun
    • Journal of the Korea Computer Graphics Society / v.18 no.2 / pp.1-8 / 2012
  • High dynamic range (HDR) images represent a far wider numerical range of exposures than common digital images, so they can accurately store the intensity levels of light found in real-world scenes. Although professional HDR cameras that support fast, accurate capture have been developed, their high cost prevents their use in typical working environments. The common lower-cost method of producing an HDR image is to take a set of photos of the target scene at a range of exposures with a general-purpose camera and then merge them into an HDR image with commercial software. However, this method needs complicated and accurate camera calibration, and creating the HDR environment maps used to produce high-quality imaging content involves delicate, time-consuming manual work. In this paper, we present an automatic HDR panorama environment map generation system constructed to simplify the complicated job of taking the pictures. We show that our system can be effectively applied to photo-realistic compositing tasks that combine 3D graphic models with a 2D background scene using image-based lighting techniques.
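The multi-exposure merge the abstract refers to can be sketched as exposure-weighted averaging in linear radiance, in the spirit of the classic Debevec-Malik merge. This simplified version assumes the inputs are already linearized, which is exactly the calibration step the paper notes as difficult:

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Merge bracketed LDR exposures (values in [0,1], assumed linear)
    into one HDR radiance map by exposure-weighted averaging."""
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weight: trust mid-tones
        num += w * img / t                  # radiance estimate from this shot
        den += w
    return num / np.maximum(den, 1e-12)
```

The hat-shaped weight down-weights under- and over-exposed pixels, so each pixel's radiance comes mostly from the exposures that recorded it in mid-tones.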

High-quality Texture Extraction for Point Clouds Reconstructed from RGB-D Images (RGB-D 영상으로 복원한 점 집합을 위한 고화질 텍스쳐 추출)

  • Seo, Woong;Park, Sang Uk;Ihm, Insung
    • Journal of the Korea Computer Graphics Society / v.24 no.3 / pp.61-71 / 2018
  • When triangular meshes are generated from point clouds reconstructed in global space through camera pose estimation against captured RGB-D streams, the quality of the resulting meshes improves as more triangles are used. However, 3D reconstructed models beyond a certain size begin to suffer from unsightly artifacts due to the insufficient precision of RGB-D sensors, as well as from significant memory requirements and rendering costs. In this paper, for the generation of 3D models appropriate for real-time applications, we propose an effective technique that extracts high-quality textures for moderately sized meshes from the captured colors associated with the reconstructed point sets. In particular, we show that a simple method based on the mapping between the 3D global space resulting from camera pose estimation and the 2D texture space can generate textures effectively for 3D models reconstructed from captured RGB-D image streams.
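The 3D-to-2D mapping at the core of such a technique can be sketched as a pinhole projection of a reconstructed point into a captured frame, followed by a colour lookup. The intrinsics `K`, pose `(R, t)`, and nearest-neighbour sampling below are simplifying assumptions, not the paper's actual texture-space construction:

```python
import numpy as np

def project(point, K, R, t):
    """Project a 3-D world point to pixel coordinates with a pinhole
    camera: x = K (R p + t), then divide by depth."""
    p = K @ (R @ np.asarray(point, dtype=float) + t)
    return p[:2] / p[2]

def sample_color(image, uv):
    """Nearest-neighbour colour lookup at pixel (u, v); None if outside."""
    u, v = int(round(uv[0])), int(round(uv[1]))
    h, w = image.shape[:2]
    if 0 <= v < h and 0 <= u < w:
        return image[v, u]
    return None
```

Running this for every point (or texel) against the frame whose pose saw it best is the basic loop behind colouring a reconstruction from RGB-D captures.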

3D Head Modeling using Depth Sensor

  • Song, Eungyeol;Choi, Jaesung;Jeon, Taejae;Lee, Sangyoun
    • Journal of International Society for Simulation Surgery / v.2 no.1 / pp.13-16 / 2015
  • Purpose: We conducted a study on reconstructing the shape of the head in 3D using a ToF depth sensor. A time-of-flight (ToF) camera is a range imaging system that resolves distance based on the known speed of light, measuring the time of flight of a light signal between the camera and the subject for each point of the image. This is the safest way of measuring the head shape of plagiocephaly patients in 3D. The texture, appearance, and size of the head were reconstructed from the measured data, and we used a signed distance function (SDF) method for precise reconstruction. Materials and Methods: To generate a precise model, a mesh was generated using marching cubes and the SDF. Results: The ground truth was determined by measuring 10 experiment participants 3 times each, and the corresponding parts of the 3D models created in this experiment were measured as well. Actual head circumference and the reconstructed model were measured according to the layer-3 standard, and measurement errors were calculated. As a result, we obtained accurate results with an average error of 0.9 cm (standard deviation 0.9, min 0.2, max 1.4). Conclusion: The suggested method was able to complete the 3D model while minimizing errors, and the model is effective for quantitative and objective evaluation. However, because measurements were made according to the layer-3 standard, the measurement range somewhat lacks the 3D information needed to manufacture protective helmets. The measurement range will need to be widened, by scanning the entire head circumference, to facilitate production of more precise and effective protective helmets in the future.
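The SDF-based fusion step can be sketched as the standard weighted running average of truncated signed distances along a sensor ray (a generic TSDF update as used in depth fusion, not necessarily the authors' exact pipeline; the truncation distance is an assumed parameter):

```python
import numpy as np

def tsdf_update(tsdf, weight, depth, voxel_z, trunc=0.05):
    """Fuse one depth measurement into a column of voxels at depths
    voxel_z: clip the signed distance to [-trunc, trunc], normalize
    to [-1, 1], and blend it into the running average with weight +1."""
    sdf = np.clip(depth - voxel_z, -trunc, trunc) / trunc
    new_w = weight + 1.0
    fused = (tsdf * weight + sdf) / new_w
    return fused, new_w
```

After fusing many frames, the zero crossing of the fused field marks the head surface, which marching cubes then turns into a mesh.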

Performance Improvement of Tone Compression of HDR Images and Qualitative Evaluations using a Modified iCAM06 Technique (Modified iCAM06 기법을 이용한 HDR 영상의 tone compression 개선과 평가)

  • Jang, Jae-Hoon;Lee, Sung-Hak;Sohng, Kyu-Ik
    • Journal of Korea Multimedia Society / v.12 no.8 / pp.1055-1065 / 2009
  • High-dynamic-range (HDR) rendering technology maps the broad dynamic range of luminance in a real-world scene (up to 9 log units) down to the 8-bit dynamic range that is the common output of a display. Among such techniques, iCAM06 has a superior capacity for rendering HDR images: it makes color appearance predictions for HDR images based on CIECAM02 and incorporates spatial processing models of the human visual system (HVS) for contrast enhancement. However, iCAM06 has several problems, including obscure user-controllable factors that must be decided. These factors seriously affect the output image, but users have difficulty finding adequate guidance on how to adjust them. The suggested model therefore gives a quantitative formulation of the user-controllable factors of iCAM06, to find suitable values corresponding to different viewing conditions, and improves the subjective visual quality of displayed images under varying illumination.
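A heavily simplified sketch of the base/detail decomposition behind iCAM06-style tone compression (1-D here, with a Gaussian blur standing in for iCAM06's bilateral filter, and an assumed compression exponent playing the role of the user-controllable factor the paper formalizes): compress only the smooth base layer of log luminance, then add the detail back.

```python
import numpy as np

def gaussian_blur1d(x, sigma=2.0):
    """Blur a 1-D signal with a sampled Gaussian kernel (edge padding)."""
    r = int(3 * sigma)
    k = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma) ** 2)
    k /= k.sum()
    return np.convolve(np.pad(x, r, mode="edge"), k, mode="valid")

def two_scale_tone_map(L, exponent=0.6, sigma=2.0):
    """Split log luminance into base + detail, compress only the base."""
    logL = np.log10(np.maximum(np.asarray(L, dtype=float), 1e-6))
    base = gaussian_blur1d(logL, sigma)
    detail = logL - base
    return 10.0 ** (exponent * base + detail)
```

Compressing only the base layer shrinks the overall range while the detail layer, which carries local contrast, passes through untouched; choosing `exponent` per viewing condition is the kind of factor the paper formulates quantitatively.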


Manufacture of 3-Dimensional Image and Virtual Dissection Program of the Human Brain (사람 뇌의 3차원 영상과 가상해부 풀그림 만들기)

  • Chung, M.S.;Lee, J.M.;Park, S.K.;Kim, M.K.
    • Proceedings of the KOSOMBE Conference / v.1998 no.11 / pp.57-59 / 1998
  • For medical students and doctors, knowledge of the three-dimensional (3D) structure of the brain is very important in the diagnosis and treatment of brain diseases. Two-dimensional (2D) tools (e.g., anatomy books) and traditional 3D tools (e.g., plastic models) are not sufficient for understanding the complex structures of the brain, and it is not always possible to dissect a cadaver brain when necessary. To overcome this problem, virtual dissection programs of the brain have been developed. However, most programs include only 2D images that permit neither free dissection nor free rotation, and many are made from radiographs, which are not as realistic as a sectioned cadaver because they do not reveal true color and have limited resolution. It is also necessary to make virtual dissection programs for each race and ethnic group. We therefore attempted to make a virtual dissection program using a 3D image of the brain from a Korean cadaver. The purpose of this study is to present an educational tool for those interested in the anatomy of the brain. The procedure was as follows. A brain extracted from a 58-year-old male Korean cadaver was embedded in gelatin solution and serially sectioned at 1.4 mm thickness using a meat slicer. The 130 sectioned specimens were input to a computer using a scanner ($420\times456$ resolution, true color), and the 2D images were aligned with an alignment program written in the IDL language. Outlines of the brain components (cerebrum, cerebellum, brain stem, lentiform nucleus, caudate nucleus, thalamus, optic nerve, fornix, cerebral artery, and ventricle) were manually drawn from the 2D images in CorelDRAW. Multimedia data, including text and voice comments, were added to help the user learn about the brain components. 3D images of the brain were reconstructed through volume-based rendering of the 2D images. Using the 3D image of the brain as the main feature, the virtual dissection program was written in the IDL language. Various dissection functions were implemented, such as cutting the 3D image of the brain at a free angle to show its plane, presenting multimedia data of brain components, and rotating the 3D image of the whole brain or selected components at a free angle. This virtual dissection program is expected to become more advanced, and to be used widely via the Internet or CD titles as an educational tool for medical students and doctors.
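The free-angle dissection described above amounts to resampling the stack of aligned sections along an arbitrary plane. A minimal nearest-neighbour sketch (the axis order, plane parameterization, and function names are our assumptions for illustration):

```python
import numpy as np

def oblique_slice(volume, origin, u, v, size=32):
    """Sample an oblique plane from a slice stack by nearest-neighbour
    lookup: each plane point is origin + i*u + j*v in voxel coordinates."""
    origin = np.asarray(origin, dtype=float)
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    out = np.zeros((size, size))
    for i in range(size):
        for j in range(size):
            p = origin + i * u + j * v
            z, y, x = np.round(p).astype(int)
            if all(0 <= c < s for c, s in zip((z, y, x), volume.shape)):
                out[i, j] = volume[z, y, x]   # outside the stack stays 0
    return out
```

With axis-aligned `u` and `v` this reproduces an original section; tilting them yields the cut-at-any-angle views the program offers.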


Automatic Lower Extremity Vessel Extraction based on Bone Elimination Technique in CT Angiography Images (CT 혈관 조영 영상에서 뼈 소거법 기반의 하지 혈관 자동 추출)

  • Kim, Soo-Kyung;Hong, Helen
    • Journal of KIISE:Software and Applications / v.36 no.12 / pp.967-976 / 2009
  • In this paper, we propose automatic lower-extremity vessel extraction based on rigid registration and bone elimination in CT and CT angiography images. First, automatic partitioning of the lower extremity based on anatomy is proposed to account for the local movement of the bones. Second, rigid registration based on a distance map is performed to estimate the movement of the bones between the CT and CT angiography images. Third, bone elimination and vessel masking techniques are proposed to remove bones from the CT angiography image while preventing vessels near bone from being eroded. Fourth, post-processing based on vessel tracking is proposed to reduce the effects of misalignment and of noise such as cartilage. To evaluate our method, we performed visual inspection, accuracy measurement, and timing. For visual inspection, the results of general subtraction, registered subtraction, and the proposed method are compared using volume rendering and maximum intensity projection. For accuracy evaluation, the intensity distributions of the CT angiography image, the subtraction-based method, and the proposed method are analyzed. Experimental results show that bones are accurately eliminated and vessels are robustly extracted without loss of other structures. The total processing time over thirteen patient datasets was 40 seconds on average.
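The interplay of bone elimination and vessel masking can be sketched as: erase voxels flagged by the (registered) bone mask unless a vessel mask protects them. The array names and background value below are illustrative; in the paper the masks come from distance-map registration and vessel tracking:

```python
import numpy as np

def eliminate_bone(cta, bone_mask, vessel_mask, bg=-1000.0):
    """Blank out bone voxels in a CT angiography array, but keep any
    voxel the vessel mask marks as contrast-enhanced vessel."""
    out = cta.copy()
    out[bone_mask & ~vessel_mask] = bg   # erase bone only; vessels survive
    return out
```

Without the protecting mask, registration error of even one voxel would carve into vessels that run directly along the bone surface, which is the erosion problem the abstract addresses.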

Large Point Cloud-based Pipe Shape Reverse Engineering Automation Method (대용량 포인트 클라우드 기반 파이프 형상 역설계 자동화 방법 연구)

  • Kang, Tae-Wook;Kim, Ji-Eum
    • Journal of the Korea Academia-Industrial cooperation Society / v.17 no.3 / pp.692-698 / 2016
  • Recently, the market for facility extension and maintenance has grown while new facility construction has declined. In this context, it is important to examine the reverse engineering of MEP (mechanical, electrical, and plumbing) facilities, which have high operation and management costs in the architecture domain. The purpose of this study was to propose a method for automating the reverse engineering of pipe shapes from large point clouds. Related research was surveyed, an automation method for reverse engineering pipe shapes that accounts for large point clouds was proposed, and a prototype was developed and its results validated. The proposed method is suitable for large-scale data processing: in the validation, the standard deviation of rendering performance for searching the massive 3D point cloud data was 0.004 seconds.
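Fast, consistent search times over a massive point cloud, as measured above, are typically achieved with a spatial index. A minimal uniform-grid hash sketch (cell size and names are assumed; it is correct only for query radii up to one cell, since just the 27 neighbouring cells are scanned):

```python
import numpy as np
from collections import defaultdict

def build_grid(points, cell=0.1):
    """Hash point indices into a uniform grid keyed by integer cell coords."""
    grid = defaultdict(list)
    for i, p in enumerate(points):
        grid[tuple(np.floor(p / cell).astype(int))].append(i)
    return grid

def radius_query(points, grid, q, radius, cell=0.1):
    """Return indices of points within `radius` of q, scanning only the
    3x3x3 block of cells around q instead of every point."""
    c = np.floor(q / cell).astype(int)
    hits = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for i in grid.get((c[0] + dx, c[1] + dy, c[2] + dz), []):
                    if np.linalg.norm(points[i] - q) <= radius:
                        hits.append(i)
    return hits
```

Because each query touches a constant number of cells regardless of cloud size, query times stay nearly flat as the data grows, which is the behaviour a small timing standard deviation reflects.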