• Title/Summary/Keyword: 3D imaging system

Accuracy and precision of integumental linear dimensions in a three-dimensional facial imaging system

  • Kim, Soo-Hwan;Jung, Woo-Young;Seo, Yu-Jin;Kim, Kyung-A;Park, Ki-Ho;Park, Young-Guk
    • The Korean Journal of Orthodontics / v.45 no.3 / pp.105-112 / 2015
  • Objective: A recently developed facial scanning method uses three-dimensional (3D) surface imaging with a light-emitting diode. Such scanning captures surface data in high-resolution color at relatively fast speeds. The purpose of this study was to evaluate the accuracy and precision of 3D images obtained using the Morpheus 3D® scanner (Morpheus Co., Seoul, Korea). Methods: The sample comprised 30 subjects aged 24-34 years (mean, 29.0 ± 2.5 years). To test the correlation between direct and 3D image measurements, 21 landmarks were labeled on the face of each subject. Sixteen direct measurements were obtained twice using digital calipers; the same measurements were then made on two sets of 3D facial images. The mean values of the measurements obtained with both methods were compared. To investigate precision, a comparison was made between the two sets of measurements taken with each method. Results: Five of the 16 anthropometric variables differed significantly between the two methods. However, in 12 of the 16 cases, the mean difference was under 1 mm, and the average difference across all variables was 0.75 mm. Precision was high for both methods, with error magnitudes under 0.5 mm. Conclusions: 3D scanning images have high precision and fairly good congruence with traditional anthropometry, with mean differences of less than 1 mm. 3D surface imaging with the Morpheus 3D® scanner is therefore a clinically acceptable method of recording facial integumental data.
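The agreement analysis described above, paired direct and scan-derived measurements judged by their mean difference against a roughly 1 mm threshold, can be sketched as follows. The caliper and scan values here are invented illustrative data, not the study's measurements.

```python
import numpy as np

# Hypothetical paired measurements (mm) for one anthropometric variable:
# direct caliper readings vs. the same distances measured on the 3D scan.
caliper = np.array([31.2, 28.5, 30.1, 29.8, 32.0])
scan3d = np.array([31.6, 28.1, 30.4, 30.2, 31.7])

def method_agreement(a, b):
    """Mean signed difference (bias) and mean absolute difference."""
    diff = b - a
    return diff.mean(), np.abs(diff).mean()

bias, mad = method_agreement(caliper, scan3d)
clinically_acceptable = mad < 1.0  # the study's ~1 mm congruence criterion
```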

Water-Fat Imaging with Automatic Field Inhomogeneity Correction Using Joint Phase Magnitude Density Function at Low Field MRI (저자장 자기공명영상에서 위상-크기 결합 밀도 함수를 이용한 자동 불균일 자장 보정 물-지방 영상 기법)

  • Kim, Pan-Ki;Ahn, Chang-Beom
    • Investigative Magnetic Resonance Imaging / v.15 no.1 / pp.57-66 / 2011
  • Purpose: A new inhomogeneity correction method based on the two-point Dixon sequence is proposed to obtain water and fat images on a 0.35 T low-field magnetic resonance imaging (MRI) system. Materials and Methods: A joint phase-magnitude density function (JPMF) is obtained from the in-phase and out-of-phase images acquired with the two-point Dixon method. The range of the water signal is determined from the JPMF, and a 3D inhomogeneity map is obtained from the phase of the corresponding water volume. This 3D inhomogeneity map is then used to correct the inhomogeneity field iteratively. Results: The proposed water-fat imaging method was successfully applied to various organs, and the proposed 3D inhomogeneity correction algorithm performs well across the full set of multi-slice images. Conclusion: The proposed water-fat separation method using the JPMF is robust to field inhomogeneity. The three-dimensional inhomogeneity map and the iterative inhomogeneity correction algorithm improve water and fat imaging substantially.
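As background to the method above, the basic two-point Dixon separation (without the paper's JPMF-based field-map correction) follows from the signal model IP = W + F and OP = W − F. A minimal sketch on synthetic data:

```python
import numpy as np

# Signal model for the two-point Dixon method: the in-phase image is
# IP = W + F and the out-of-phase image is OP = W - F, so the water and
# fat images follow by simple addition/subtraction. Field-inhomogeneity
# phase errors in OP are what the JPMF-based correction would remove first.
def dixon_two_point(ip, op):
    water = 0.5 * (ip + op)
    fat = 0.5 * (ip - op)
    return water, fat

# Synthetic 2x2 example with known water/fat content per pixel.
W_true = np.array([[1.0, 0.0], [0.5, 0.2]])
F_true = np.array([[0.0, 1.0], [0.5, 0.8]])
W_est, F_est = dixon_two_point(W_true + F_true, W_true - F_true)
```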

360-degree Viewable Cylindrical Integral Imaging System Using Electroluminescent Films

  • Jung, Jae-Hyun;Park, Gil-Bae;Kim, Yun-Hee;Lee, Byoung-Ho
    • Korean Information Display Society Conference Proceedings / 2009.10a / pp.1254-1257 / 2009
  • A 360-degree viewable three-dimensional display based on integral imaging is proposed. A cylindrically arranged point light source array, generated by an electroluminescent (EL) pinhole film, reconstructs a 360-degree viewable virtual 3D image at the center of the cylinder. In this paper, the principle of operation and experimental results are presented.
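The cylindrical pinhole arrangement underlying the display can be sketched geometrically. The radius, pinhole counts, and vertical pitch below are illustrative assumptions, not the paper's parameters.

```python
import math

# Point light sources (EL-film pinholes) placed on a cylinder of a given
# radius: each ring covers the full 360 degrees, which is what allows the
# reconstructed image at the cylinder's center to be viewed all around.
def cylinder_pinholes(radius, n_around, n_vertical, pitch_z):
    points = []
    for k in range(n_vertical):
        z = k * pitch_z
        for i in range(n_around):
            theta = 2.0 * math.pi * i / n_around
            points.append((radius * math.cos(theta),
                           radius * math.sin(theta), z))
    return points

pts = cylinder_pinholes(radius=50.0, n_around=360, n_vertical=10, pitch_z=2.0)
```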

360-degree Viewable Cylindrical Integral Imaging System Using Electroluminescent Films

  • Jung, Jae-Hyun;Park, Gil-Bae;Kim, Yun-Hee;Lee, Byoung-Ho
    • Korean Information Display Society Conference Proceedings / 2009.10a / pp.1330-1333 / 2009
  • A 360-degree viewable three-dimensional display based on integral imaging is proposed. A cylindrically arranged point light source array, generated by an electroluminescent (EL) pinhole film, reconstructs a 360-degree viewable virtual 3D image at the center of the cylinder. In this paper, the principle of operation and experimental results are presented.

Three-dimensional Dynamic Display System Based on Integral Imaging

  • Jung, Sung-Yong;Min, Sung-Wook;Park, Jae-Hyeung;Lee, Byoung-Ho
    • Journal of Information Display / v.3 no.1 / pp.22-26 / 2002
  • A three-dimensional dynamic display system based on computer-generated integral imaging is discussed, and its feasibility is verified via basic experiments. Integrated images observed from different viewing points exhibit full parallax, and an animated 3D image was implemented successfully. Moreover, using a large Fresnel lens array was found to help widen the viewing angle and make the system more practical.

Server and Client Simulator for Web-based 3D Image Communication

  • Ko, Jung-Hwan;Lee, Sang-Tae;Kim, Eun-Soo
    • Journal of Information Display / v.5 no.4 / pp.38-44 / 2004
  • In this paper, a server and client simulator for a web-based multi-view 3D image communication system is implemented using IEEE 1394 digital cameras, an Intel Xeon server computer, and Microsoft's DirectShow programming library. In the proposed system, a two-view image is first captured with the IEEE 1394 stereo camera; its disparity information is then extracted for compression on the Intel Xeon server and transmitted to the client system, where multi-view images are generated through an intermediate-view reconstruction method and finally displayed on the 3D display monitor. Experiments show that the proposed system can display 8-view images with an 8-bit gray level at a frame rate of 15 fps.
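The intermediate-view reconstruction step can be sketched as a disparity-weighted pixel shift. This simplified version ignores occlusion handling and hole filling, which a real system must address, and the toy image and uniform disparity are invented for illustration.

```python
import numpy as np

# Approximate a view at fractional baseline position alpha in [0, 1] by
# shifting each pixel of the left image by alpha times its disparity.
# Occlusion handling and hole filling, needed in practice, are omitted.
def intermediate_view(left, disparity, alpha):
    h, w = left.shape
    view = np.zeros_like(left)
    for y in range(h):
        for x in range(w):
            xs = x + int(round(alpha * disparity[y, x]))
            if 0 <= xs < w:
                view[y, x] = left[y, xs]
    return view

left = np.tile(np.arange(8.0), (2, 1))   # toy 2x8 "image"
disp = np.full((2, 8), 2.0)              # uniform 2-pixel disparity
mid = intermediate_view(left, disp, 0.5) # halfway view: 1-pixel shift
```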

Effect of field-of-view size on gray values derived from cone-beam computed tomography compared with the Hounsfield unit values from multidetector computed tomography scans

  • Shokri, Abbas;Ramezani, Leila;Bidgoli, Mohsen;Akbarzadeh, Mahdi;Ghazikhanlu-Sani, Karim;Fallahi-Sichani, Hamed
    • Imaging Science in Dentistry / v.48 no.1 / pp.31-39 / 2018
  • Purpose: This study aimed to evaluate the effect of field-of-view (FOV) size on the gray values derived from cone-beam computed tomography (CBCT) compared with the Hounsfield unit values from multidetector computed tomography (MDCT) scans as the gold standard. Materials and Methods: A radiographic phantom was designed with 4 acrylic cylinders. One cylinder was filled with distilled water, and the other 3 were filled with 3 types of bone substitute: namely, Nanobone, Cenobone, and Cerabone. The phantom was scanned with 2 CBCT systems using 2 different FOV sizes, and 1 MDCT system was used as the gold standard. The mean gray values (MGVs) of each cylinder were calculated in each imaging protocol. Results: In both CBCT systems, significant differences were noted in the MGVs of all materials between the 2 FOV sizes (P<.05), except for Cerabone in the Cranex3D system. Significant differences were found in the MGVs of each material compared with the others in both FOV sizes for each CBCT system. No significant difference was seen between the Cranex3D CBCT system and the MDCT system in the MGVs of bone substitutes on images obtained with a small FOV. Conclusion: The size of the FOV significantly changed the MGVs of all bone substitutes, except for Cerabone in the Cranex3D system. Both CBCT systems were able to distinguish the 3 types of bone substitutes based on a comparison of their MGVs. The Cranex3D CBCT system used with a small FOV showed a significant correlation with MDCT results.
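The mean-gray-value computation at the core of the comparison above amounts to averaging voxel values inside a region of interest placed on each material cylinder. A minimal sketch on a synthetic slice follows; the geometry and values are illustrative, not the phantom's.

```python
import numpy as np

# Average the gray values inside a circular region of interest (ROI)
# centered on a material cylinder in one reconstructed slice.
def mean_gray_value(slice_img, cx, cy, radius):
    h, w = slice_img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
    return slice_img[mask].mean()

# Synthetic slice: zero background with a uniform "cylinder" of value 100.
img = np.zeros((64, 64))
yy, xx = np.mgrid[0:64, 0:64]
img[(xx - 32) ** 2 + (yy - 32) ** 2 <= 10 ** 2] = 100.0
mgv = mean_gray_value(img, cx=32, cy=32, radius=5)
```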

Automatic Surface Matching for the Registration of LIDAR Data and MR Imagery

  • Habib, Ayman F.;Cheng, Rita W.T.;Kim, Eui-Myoung;Mitishita, Edson A.;Frayne, Richard;Ronsky, Janet L.
    • ETRI Journal / v.28 no.2 / pp.162-174 / 2006
  • Several photogrammetric and geographic information system applications such as surface matching, object recognition, city modeling, environmental monitoring, and change detection deal with multiple versions of the same surface that have been derived from different sources and/or at different times. Surface registration is a necessary procedure prior to the manipulation of these 3D datasets. This need is also applicable in the field of medical imaging, where imaging modalities such as magnetic resonance imaging (MRI) can provide temporal 3D imagery for monitoring disease progression. This paper will present a general automated surface registration procedure that can establish correspondences between conjugate surface elements. Experimental results using light detection and ranging (LIDAR) and MRI data will verify the feasibility, robustness, and accuracy of this approach.
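A common way to implement the kind of automated rigid surface registration described above is iterative closest point (ICP) with a closed-form SVD (Kabsch) update. The paper's actual matching procedure differs in detail, so this is only an illustrative sketch on a toy point set.

```python
import numpy as np

def kabsch(src, dst):
    """Closed-form rotation R and translation t minimizing ||R@src_i + t - dst_i||."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, mu_d - R @ mu_s

def icp(src, dst, iters=5):
    """Alternate brute-force nearest-neighbor matching with a rigid update."""
    cur = src.copy()
    for _ in range(iters):
        nn = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1).argmin(1)
        R, t = kabsch(cur, dst[nn])
        cur = cur @ R.T + t
    return cur

# Toy "surface": a 4x4x4 grid, displaced by a small known rigid motion.
g = np.arange(4.0)
dst = np.array([[x, y, z] for x in g for y in g for z in g])
c, s = np.cos(0.05), np.sin(0.05)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
src = dst @ Rz.T + np.array([0.05, -0.03, 0.02])
aligned = icp(src, dst)
```

With a displacement small relative to the point spacing, the nearest-neighbor matches are exact and the registration recovers the grid in one update.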

Standard Terminology System Referenced by 3D Human Body Model

  • Choi, Byung-Kwan;Lim, Ji-Hye
    • Journal of Information and Communication Convergence Engineering / v.17 no.2 / pp.91-96 / 2019
  • In this study, a system to increase the expressiveness of existing standard terminology using three-dimensional (3D) data is designed. We analyze the existing medical terminology system through a literature search and an expert focus-group survey. A human body image is generated using a 3D modeling tool, and the anatomical positions of the human body are mapped to 3D coordinate identifications (IDs) and metadata. We define terms representing 3D human body positions in a total of 12 categories, including semantic terminology entity and semantic disorder. The Blender and 3ds Max programs are used to create the 3D model from medical imaging data. The generated 3D human body model is expressed by a coordinate-type ID (x, y, and z axes) based on the anatomical position and mapped to a semantic entity that carries the meaning. We propose a standard terminology system enabling integration and utilization of the 3D human body model, coordinate IDs, and metadata. In the future, through cooperation with Electronic Health Record systems, this work can contribute to clinical research by generating higher-quality big data.
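The coordinate-ID-plus-metadata mapping described above can be sketched as a simple lookup structure. The ID format, field names, and example terms are illustrative assumptions, not the paper's actual scheme.

```python
# Each anatomical position on the 3D body model gets a coordinate-based
# ID plus metadata tying it to a semantic terminology entity. The ID
# format, field names, and example terms below are invented placeholders.
def coord_id(x, y, z):
    """Encode model-space coordinates as a coordinate-type identifier."""
    return f"X{x:+04d}Y{y:+04d}Z{z:+04d}"

terminology = {
    coord_id(12, -35, 140): {
        "term": "frontal bone",        # semantic terminology entity
        "category": "body structure",  # stand-in for one of the 12 categories
    },
    coord_id(4, -20, 55): {
        "term": "epigastric region",
        "category": "body structure",
    },
}

entry = terminology[coord_id(12, -35, 140)]
```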