• Title/Summary/Keyword: 3D imaging system

Search Results: 498

Integral imaging system with enhanced depth of field using birefringence lens array

  • Park, Chan-Kyu;Lee, Sang-Shin;Hwang, Yong-Seok
    • Proceedings of the Korean Information Display Society Conference / 2008.10a / pp.1135-1137 / 2008
  • In this paper, an integral imaging technique is proposed for reconstructing 3D (three-dimensional) objects with enhanced depth of field, both computationally and optically. A lens array made of a birefringent material is adopted to obtain the reconstruction. The elemental image sets are picked up through a common micro-lens array and utilized to present reconstructed 3D images through the adopted lens array.

Optical implementation of 3D image correlator using integral imaging technique

  • Piao, Yongri;Kim, Seok-Tae;Kim, Eun-Soo
    • Journal of the Korea Institute of Information and Communication Engineering / v.13 no.8 / pp.1659-1665 / 2009
  • In this paper, we propose an implementation of a 3D image correlator using the integral imaging technique. In the proposed method, elemental images of the reference and signal 3D objects are recorded through lenslet arrays, and high-resolution reference and signal output-plane images are then optically reconstructed by displaying these elemental images on a display panel. 3D object recognition is performed through cross-correlation between the reconstructed reference and signal plane images. Compared with previous methods, the proposed method provides more precise 3D object recognition by using the high-resolution output-plane images, and it can be implemented as an all-optical structure for a real-time 3D object recognition system. Optical experiments were carried out to show the feasibility of the proposed method, and the results are presented.
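
The cross-correlation step described in this abstract can be sketched digitally with FFT-based 2D correlation. This is a minimal illustration of the principle, not the authors' optical implementation; the array sizes and the recognition metric are made up for the example.

```python
import numpy as np

def cross_correlate(reference, signal):
    """FFT-based circular 2D cross-correlation of two plane images."""
    F = np.fft.fft2(reference)
    G = np.fft.fft2(signal)
    corr = np.fft.ifft2(F * np.conj(G)).real
    return np.fft.fftshift(corr)

def peak_to_mean(corr):
    """A simple recognition score: correlation peak over mean magnitude."""
    return corr.max() / np.abs(corr).mean()

# Identical planes correlate strongly; unrelated noise does not.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
match = peak_to_mean(cross_correlate(ref, ref))
mismatch = peak_to_mean(cross_correlate(ref, rng.random((64, 64))))
```

A sharp peak in the correlation plane signals that the reconstructed reference and signal images depict the same 3D object.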

Reproducibility of the sella turcica landmark in three dimensions using a sella turcica-specific reference system

  • Pittayapat, Pisha;Jacobs, Reinhilde;Odri, Guillaume A.;Vasconcelos, Karla De Faria;Willems, Guy;Olszewski, Raphael
    • Imaging Science in Dentistry / v.45 no.1 / pp.15-22 / 2015
  • Purpose: This study was performed to assess the reproducibility of identifying the sella turcica landmark in a three-dimensional (3D) model by using a new sella-specific landmark reference system. Materials and Methods: Thirty-two cone-beam computed tomographic scans (3D Accuitomo® 170, J. Morita, Kyoto, Japan) were retrospectively collected. The 3D data were exported into the Digital Imaging and Communications in Medicine standard and then imported into the Maxilim® software (Medicim NV, Sint-Niklaas, Belgium) to create 3D surface models. Five observers identified four osseous landmarks in order to create the reference frame and then identified two sella landmarks. The x, y, and z coordinates of each landmark were exported. The observations were repeated after four weeks. Statistical analysis was performed using the multiple paired t-test with Bonferroni correction (intraobserver precision: p<0.005, interobserver precision: p<0.0011). Results: The intraobserver mean precision of all landmarks was <1 mm. Significant differences were found when comparing the intraobserver precision of each observer (p<0.005). For the sella landmarks, the intraobserver mean precision ranged from 0.43±0.34 mm to 0.51±0.46 mm. The intraobserver reproducibility was generally good. The overall interobserver mean precision was <1 mm. Significant differences between each pair of observers for all anatomical landmarks were found (p<0.0011). The interobserver reproducibility of sella landmarks was good, with >50% precision in locating the landmark within 1 mm. Conclusion: A newly developed reference system offers high precision and reproducibility for sella turcica identification in a 3D model without being based on two-dimensional images derived from 3D data.
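
The precision values reported in this abstract are statistics over distances between repeated landmark identifications. A hypothetical sketch of how intraobserver precision can be computed from two sessions of x, y, z coordinates (the coordinates below are invented for illustration, not study data):

```python
import numpy as np

def intraobserver_precision(first, second):
    """Mean and SD of Euclidean distances between two identification sessions.

    first, second: (n_landmarks, 3) arrays of x, y, z coordinates in mm.
    """
    d = np.linalg.norm(first - second, axis=1)
    return d.mean(), d.std()

# Illustrative coordinates (mm) for two landmarks picked twice.
session1 = np.array([[10.0, 20.0, 5.0], [11.0, 19.5, 5.2]])
session2 = np.array([[10.3, 20.1, 5.1], [11.1, 19.4, 5.0]])
mean_d, sd_d = intraobserver_precision(session1, session2)
```

A mean distance below 1 mm between sessions corresponds to the sub-millimeter intraobserver precision reported above.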

A Study on the Implementation of a Portable Hologram Recording System for Optical Education

  • Cheolyoung, Go;Juyoung, Hong;Kwangpyo, Hong;Eunyeop, Shin;Leehwan, Hwang;Soonchul, Kwon;Seunghyun, Lee
    • International Journal of Advanced Culture Technology / v.10 no.4 / pp.478-486 / 2022
  • Holography is a human-friendly technology that can express 3D stereoscopic information without eye fatigue. It allows the mysterious features of light to be experienced directly and indirectly, along with the principles of 3D imaging, which makes it excellent curriculum material. Although holography is thus very effective for teaching students optics and 3D imaging technology, such education has not yet been systematically established. The reasons are the cost burden of expensive equipment and laboratories and the lack of easily accessible educational equipment. We implemented a portable hologram recording system to solve this problem. In addition, because photopolymers are used instead of silver halides, which require harmful chemicals when developing the recording medium, the system is suitable for educating not only the general public but also young students. To improve the completeness of the recorded result, the mechanism, light source, and recording-medium parts of the system were newly constructed to remedy the problems of existing systems. The proposed system will make holography easily accessible to many people in a variety of fields, not just education. Through the engaging experience of the various features and principles of light and the production of highly satisfying holograms, we hope holography will become popular in many fields, including education.

A Dual-Band Through-the-Wall Imaging Radar Receiver Using a Reconfigurable High-Pass Filter

  • Kim, Duksoo;Kim, Byungjoon;Nam, Sangwook
    • Journal of Electromagnetic Engineering and Science / v.16 no.3 / pp.164-168 / 2016
  • A dual-band through-the-wall imaging radar receiver for a frequency-modulated continuous-wave radar system was designed and fabricated. The operating frequency bands of the receiver are S-band (2-4 GHz) and X-band (8-12 GHz). If the target is behind a wall, wall-reflected waves are rejected by a reconfigurable Gm-C high-pass filter. The filter is designed using a high-order admittance synthesis method and consists of transconductor circuits and capacitors. The cutoff frequency of the filter can be tuned by changing the reference current. The receiver system is fabricated on a printed circuit board using commercial devices. Measurements show 44.3 dB gain and a 3.7 dB noise figure for the S-band input, and 58 dB gain and a 3.02 dB noise figure for the X-band input. The cutoff frequency of the filter can be tuned from 0.7 MHz to 2.4 MHz.
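
The tuning mechanism described above follows from the standard first-order Gm-C relation f_c = Gm/(2πC): since a transconductor's Gm scales with its bias (reference) current, changing that current shifts the cutoff. A small numeric sketch; the component values are illustrative assumptions, not taken from the paper:

```python
import math

def gmc_cutoff_hz(gm_siemens, c_farads):
    """First-order Gm-C filter cutoff frequency: f_c = Gm / (2*pi*C)."""
    return gm_siemens / (2 * math.pi * c_farads)

# Raising the reference current raises Gm, which raises the cutoff.
C = 10e-12                        # 10 pF, illustrative capacitor value
f_lo = gmc_cutoff_hz(44e-6, C)    # Gm = 44 uS  -> roughly 0.7 MHz
f_hi = gmc_cutoff_hz(150e-6, C)   # Gm = 150 uS -> roughly 2.4 MHz
```

With these assumed values the cutoff spans about 0.7-2.4 MHz, matching the tuning range reported in the abstract.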

Assessment and Comparison of Three Dimensional Exoscopes for Near-Infrared Fluorescence-Guided Surgery Using Second-Window Indocyanine-Green

  • Cho, Steve S.;Teng, Clare W.;Ravin, Emma De;Singh, Yash B.;Lee, John Y.K.
    • Journal of Korean Neurosurgical Society / v.65 no.4 / pp.572-581 / 2022
  • Objective : Compared to microscopes, exoscopes have advantages in field depth, ergonomics, and educational value. Exoscopes are especially well poised for adaptation to fluorescence-guided surgery (FGS) due to their excitation source, light path, and image processing capabilities. We evaluated the feasibility of near-infrared FGS using a 3-dimensional (3D), 4K exoscope with near-infrared fluorescence imaging capability. We then compared it to the most sensitive commercially available near-infrared exoscope system (3D, 960p). In-vitro and intraoperative comparisons were performed. Methods : Serial dilutions of indocyanine green (1-2000 ㎍/mL) were imaged with the 3D, 4K Olympus Orbeye (system 1) and the 3D, 960p VisionSense Iridium (system 2). Near-infrared sensitivity was calculated using signal-to-background ratios (SBRs). In addition, three patients with brain tumors were administered indocyanine green and imaged with system 1, with two also imaged with system 2 for comparison. Results : Systems 1 and 2 detected near-infrared fluorescence from indocyanine green concentrations of >250 ㎍/L and >31.3 ㎍/L, respectively. Intraoperatively, system 1 visualized strong near-infrared fluorescence from two strongly gadolinium-enhancing meningiomas (SBR=2.4, 1.7). The high-resolution, bright images were sufficient for the surgeon to appreciate the underlying anatomy in the near-infrared mode. However, system 1 was not able to visualize fluorescence from a weakly enhancing intraparenchymal metastasis. In contrast, system 2 successfully visualized both the meningioma and the metastasis but lacked high-resolution stereopsis. Conclusion : Three-dimensional exoscope systems provide an alternative visualization platform for both standard microsurgery and near-infrared fluorescence-guided surgery. However, when tumor fluorescence is weak (i.e., low fluorophore uptake, deep tumors), highly sensitive near-infrared visualization systems may be required.
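
The SBR metric used in this abstract is simply the mean fluorescence intensity inside the tumor region divided by the mean intensity outside it. A minimal sketch with an invented frame and mask, not the study's imaging pipeline:

```python
import numpy as np

def signal_to_background_ratio(image, tumor_mask):
    """SBR: mean NIR intensity inside the tumor mask over mean outside it."""
    signal = image[tumor_mask].mean()
    background = image[~tumor_mask].mean()
    return signal / background

# Illustrative frame: a bright fluorescent patch on a dim background.
frame = np.full((8, 8), 10.0)
mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 2:5] = True
frame[mask] = 24.0
sbr = signal_to_background_ratio(frame, mask)
```

An SBR well above 1 means the tumor fluorescence is clearly distinguishable from the surrounding tissue.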

Depth-Conversion in Integral Imaging Three-Dimensional Display by Means of Elemental Image Recombination

  • Ser, Jang-Il;Shin, Seung-Ho
    • Korean Journal of Optics and Photonics / v.18 no.1 / pp.24-30 / 2007
  • We have studied depth conversion of the reconstructed image by means of recombination of the elemental images in an integral imaging system for 3D display. With such recombination, depth conversion to pseudoscopic, orthoscopic, real, or virtual images, as well as to an arbitrary depth without any distortion, is possible under proper conditions. The conditions on the recombination required for depth conversion are derived theoretically, and images reconstructed from the converted elemental images are presented.
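
One classic elemental-image recombination, rotating each elemental image 180° about its own center, converts a pseudoscopic reconstruction into an orthoscopic one. This sketch assumes the pickup image is a regular grid of elemental images; the grid and image sizes are illustrative, and this is not necessarily the paper's specific recombination scheme.

```python
import numpy as np

def rotate_elemental_images(ei_array, ny, nx):
    """Rotate each elemental image 180 degrees about its own center.

    ei_array: full pickup image containing an ny x nx grid of elemental
    images. The per-image flip is the classic pseudoscopic-to-orthoscopic
    depth conversion.
    """
    h, w = ei_array.shape
    eh, ew = h // ny, w // nx
    out = np.empty_like(ei_array)
    for j in range(ny):
        for i in range(nx):
            block = ei_array[j*eh:(j+1)*eh, i*ew:(i+1)*ew]
            out[j*eh:(j+1)*eh, i*ew:(i+1)*ew] = block[::-1, ::-1]
    return out

demo = np.arange(16.0).reshape(4, 4)  # a 2x2 grid of 2x2 elemental images
converted = rotate_elemental_images(demo, 2, 2)
```

Applying the conversion twice restores the original elemental images, since a 180° rotation is its own inverse.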

Automatic Generation of 3D Face Model from Trinocular Images

  • Yi, Kwang-Do;Ahn, Sang-Chul;Kwon, Yong-Moo;Ko, Han-Seok;Kim, Hyoung-Gon
    • Journal of the Korean Institute of Telematics and Electronics S / v.36S no.7 / pp.104-115 / 1999
  • This paper proposes an efficient method for 3D modeling of a human face from trinocular images by reconstructing the face surface using range data. By using a trinocular camera system, we mitigate the tradeoff between the occlusion problem and the limited range resolution that is the critical limitation of binocular camera systems. We also propose MPC_MBS (Matching Pixel Count Multiple Baseline Stereo), an area-based matching method that reduces the boundary-overreach phenomenon and improves both accuracy and precision in matching; its computing time can be reduced significantly by removing redundancies. In the model generation, sub-pixel-accurate surface data are achieved by 2D interpolation of disparity values and sampled to form regular triangular meshes. The data size of the triangular mesh model can be controlled by merging vertices that lie on the same plane within a user-defined error threshold.
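
The Matching Pixel Count cost underlying MPC_MBS counts the pixel pairs whose difference falls under a threshold, instead of summing all differences as SAD/SSD do, which limits the influence of outliers near object boundaries. A minimal sketch of the per-window cost; the window contents and threshold below are made up:

```python
import numpy as np

def matching_pixel_count(win1, win2, threshold=10):
    """MPC cost: number of pixel pairs whose absolute difference is small.

    Counting matched pixels, rather than summing differences, keeps a few
    large outliers (e.g., at depth discontinuities) from dominating the
    cost, which reduces the boundary-overreach effect.
    """
    diff = np.abs(win1.astype(int) - win2.astype(int))
    return int(np.count_nonzero(diff <= threshold))

# One outlier pixel (200 vs 40) barely affects the score.
a = np.array([[10, 20], [30, 200]])
b = np.array([[12, 25], [28, 40]])
score = matching_pixel_count(a, b)
```

In a multiple-baseline setup, such per-window scores are accumulated across the camera pairs and the disparity with the highest total count wins.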

Enhanced Image Mapping Method for Computer-Generated Integral Imaging System

  • Lee Bin-Na-Ra;Cho Yong-Joo;Park Kyoung-Shin;Min Sung-Wook
    • The KIPS Transactions: Part B / v.13B no.3 s.106 / pp.295-300 / 2006
  • The integral imaging system is an auto-stereoscopic display that allows users to see 3D images without wearing special glasses. In the integral imaging system, the 3D object information is captured from several viewpoints and stored as elemental images. Users can then see a reconstructed 3D image when the elemental images are displayed through a lens array. The elemental images can also be created by computer graphics, which is referred to as computer-generated integral imaging. The process of creating the elemental images is called image mapping. Several image mapping methods have been proposed in the past, such as PRR (Point Retracing Rendering), MVR (Multi-Viewpoint Rendering), and PGR (Parallel Group Rendering). However, they suffer from heavy rendering computation or performance barriers as the number of elemental lenses in the lens array increases, which makes them difficult to use in real-time graphics applications such as virtual reality or real-time interactive games. In this paper, we propose a new image mapping method named VVR (Viewpoint Vector Rendering) that improves real-time rendering performance. The paper first describes the concept of VVR and compares the performance of its image mapping process with that of previous methods, and then discusses possible directions for future improvements.
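
Of the earlier mapping methods named in this abstract, PRR (Point Retracing Rendering) is the simplest to sketch: each 3D scene point is projected through every lens (here modeled as a pinhole) onto the elemental-image plane behind the array. The geometry below is a simplified 1-D illustration under assumed dimensions, not the paper's VVR method:

```python
import numpy as np

def point_retracing(points, lens_pitch, gap, n_lenses, res):
    """PRR-style pickup: project each 3D point through every pinhole lens
    onto the elemental-image plane a distance `gap` behind the lens array.

    points: list of (x, z) with lateral position x and depth z > 0 in
    front of the array. Returns an (n_lenses, res) stack of 1-D elemental
    images. The pinhole model and units are illustrative simplifications.
    """
    ei = np.zeros((n_lenses, res))
    for x, z in points:
        for k in range(n_lenses):
            lens_x = (k + 0.5) * lens_pitch
            # Similar triangles give the image offset behind the lens.
            u = lens_x + (lens_x - x) * gap / z
            px = int((u - k * lens_pitch) / lens_pitch * res)
            if 0 <= px < res:
                ei[k, px] += 1.0
    return ei

# One point imaged by a 3-lens array into 8-pixel elemental images.
ei = point_retracing([(1.5, 50.0)], lens_pitch=1.0, gap=3.0, n_lenses=3, res=8)
```

Because every scene point must be retraced through every lens, the cost grows with both the point count and the lens count, which is exactly the scaling problem VVR is proposed to improve.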