• Title/Summary/Keyword: 3D Images

The change of pupil size after viewing three dimensional TV (2안식 입체TV 주시전후의 동공면적의 변화)

  • Cho, Am
    • Proceedings of the ESK Conference
    • /
    • 1995.04a
    • /
    • pp.187-198
    • /
    • 1995
  • The physiological change of the eyes while viewing 3D TV was investigated, using the change in pupil size as the evaluation measure. The results are as follows: (1) pupil size decreases after viewing 3D images; (2) indoor illumination has a significant effect on pupil size in both the 2D and 3D cases; (3) less change in pupil size was observed under indoor illumination. Thus, considering only the visual load on the eye, indoor illumination is preferable when viewing 3D images.

Analysis of Skin Movement Artifacts Using MR Images (자기공명 영상을 이용한 피부 움직임 에러 분석에 관한 연구)

  • N. Miyata;M. Kouchi;M. Mochimaru
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.21 no.8
    • /
    • pp.164-170
    • /
    • 2004
  • Skin movement artifacts refer to the relative motion of the skin with respect to the underlying bones, which is of great importance in joint biomechanics and the internal kinematics of the human body. This paper describes a novel experiment that measures the skin movement of a hand based on MR (magnetic resonance) images in conjunction with surface modeling techniques. The proposed approach consists of 3 phases: (1) MR scanning of a hand with surface markers, (2) 3D reconstruction from the MR images, and (3) registration of the 3D models. The MR images of the hand are captured in 3 different postures, and the surface markers attached to the skin are employed to trace the skin motion. After reconstruction of the 3D models from the scanned MR images, global registration is applied to the 3D models based on the bone shapes in the different postures. The registration results are then used to trace the skin movement by measuring the positions of the surface markers.
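
The registration phase described above can be sketched with a least-squares rigid alignment (the Kabsch method), a standard way to align corresponding bone-surface points from two postures. This is an illustrative sketch, not the paper's exact algorithm; the function name and synthetic data are assumptions.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (Kabsch) aligning src points to dst.

    Given corresponding 3D points from two postures, returns a rotation R
    and translation t such that dst is approximately src @ R.T + t.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])                 # guard against reflections
    R = Vt.T @ D @ U.T
    t = mu_d - mu_s @ R.T
    return R, t

# Recover a known rotation and translation from synthetic marker points.
rng = np.random.default_rng(2)
src = rng.random((20, 3))
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
dst = src @ R_true.T + t_true
R, t = rigid_register(src, dst)
```

With exact correspondences the recovered transform matches the one used to generate the data, up to floating-point error.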

Nonlinear 3D Image Correlator Using Fast Computational Integral Imaging Reconstruction Method (고속 컴퓨터 집적 영상 복원 방법을 이용한 비선형 3D 영상 상관기)

  • Shin, Donghak;Lee, Joon-Jae
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.16 no.10
    • /
    • pp.2280-2286
    • /
    • 2012
  • In this paper, we propose a novel nonlinear 3D image correlator using a fast computational integral imaging reconstruction (CIIR) method. To implement the fast CIIR method, the magnification process is eliminated. In the proposed correlator, elemental images of the reference and target objects are picked up by lenslet arrays. From these elemental images, reference and target plane images are reconstructed on the output plane by means of the fast CIIR method. Pattern recognition is then performed on the outputs of nonlinear cross-correlations between the reconstructed reference and target plane images; the nonlinear correlation operation improves the recognition of 3D objects. To show the feasibility of the proposed method, preliminary experiments are carried out and the results are presented in comparison with the conventional method.
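
The nonlinear cross-correlation step can be sketched with a kth-law correlator, a common choice in the nonlinear-correlator literature; the paper's exact nonlinearity is not specified here, and the function name and test images are assumptions. A minimal NumPy sketch:

```python
import numpy as np

def nonlinear_cross_correlation(reference, target, k=0.5):
    """kth-law nonlinear cross-correlation of two plane images.

    k = 1 reduces to a linear matched filter; k < 1 compresses the
    spectral magnitudes, which sharpens the correlation peak.
    """
    R = np.fft.fft2(reference)
    T = np.fft.fft2(target)
    # Apply the kth-law nonlinearity to the magnitudes, keep the phases.
    Rk = np.abs(R) ** k * np.exp(1j * np.angle(R))
    Tk = np.abs(T) ** k * np.exp(1j * np.angle(T))
    corr = np.fft.ifft2(Rk * np.conj(Tk))
    return np.abs(np.fft.fftshift(corr))

# A matching target yields a much stronger correlation peak than a
# non-matching one, which is the basis of the recognition decision.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
peak_match = nonlinear_cross_correlation(ref, ref).max()
peak_mismatch = nonlinear_cross_correlation(ref, rng.random((64, 64))).max()
```

Recognition then reduces to thresholding or ranking these peak values across candidate targets.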

An Analysis of 3D Mesh Accuracy and Completeness of Combination of Drone and Smartphone Images for Building 3D Modeling (건물3D모델링을 위한 드론과 스마트폰영상 조합의 3D메쉬 정확도 및 완성도 분석)

  • Han, Seung-Hee;Yoo, Sang-Hyeon
    • Journal of Cadastre & Land InformatiX
    • /
    • v.52 no.1
    • /
    • pp.69-80
    • /
    • 2022
  • Drone photogrammetry generally acquires images vertically or obliquely from above, so when photographing for the purpose of three-dimensional modeling, image matching for the ground-level parts of a building and the spatial accuracy of the point cloud data are poor, resulting in poor 3D mesh completeness. To overcome this, this study acquired smartphone images from the ground, analyzed the spatial accuracy of each image set, and evaluated the improvement in accuracy and completeness of the 3D mesh when the smartphone images are combined with the drone images. As a result of the study, the horizontal (x, y) accuracy of the drone photogrammetry was about 1/200,000, similar to that of traditional photogrammetry. The accuracy was affected more by the photographing angle toward the object than by an increase in the number of photos. Combining the smartphone images did not significantly affect the accuracy, but it improved the completeness of the 3D mesh to about LoD3, which satisfies the digital twin city standard. Therefore, combining drone images with smartphone or DSLR images taken from the ground is judged to be sufficient for building 3D models for a digital twin city.

3D Brain-Endoscopy Using VRML and 2D CT images (VRML을 이용한 3차원 Brain-endoscopy와 2차원 단면 영상)

  • Kim, D.O.;Ahn, J.Y.;Lee, D.H.;Kim, N.K.;Kim, J.H.;Min, B.G.
    • Proceedings of the KOSOMBE Conference
    • /
    • v.1998 no.11
    • /
    • pp.285-286
    • /
    • 1998
  • Virtual brain-endoscopy is an effective method to detect lesions in the brain. The brain is one of the most important parts of the human body and is not an easy part to operate on, so reconstructing it in 3D can be very helpful to doctors. In this paper, a method of matching the 3D object with the corresponding 2D CT slice is suggested to increase reliability. The 3D brain-endoscopy is reconstructed from 35 slices of 2D CT images. A plate in the 3D brain-endoscopy can be dragged upward or downward to bring up the relevant 2D CT image, which guides the user in recognizing the exact part he or she is investigating. VRML Script is used to switch the images, and a PlaneSensor node is used to transmit the y coordinate value to the CT image. The result was tested on a PC with a 400 MHz clock speed, 512 MB of RAM, and a FireGL 3000 3D accelerator. The VRML file size is 3.83 MB. There was no delay in controlling the 3D world and no collision when changing the CT images. This brain-endoscopy can also be put to practical use in medical education over the Internet.

3D Film Image Inspection Based on the Width of Optimized Height of Histogram (히스토그램의 최적 높이의 폭에 기반한 3차원 필름 영상 검사)

  • Jae-Eun Lee;Jong-Nam Kim
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.23 no.2
    • /
    • pp.107-114
    • /
    • 2022
  • In order to classify 3D film images as right or wrong, it is necessary to detect the pattern in the image. However, if the contrast of the pixels in a 3D film image is low, the pattern might not be clear, making it difficult to classify the image as right or wrong. In this paper, we propose a method of classifying 3D film images as right or wrong by obtaining the histogram of each image and comparing its width at a specific frequency. Since the classification uses only the width of the histogram, the analysis process is not complicated. In the experiment, the histograms of right and wrong 3D film images were distinctly different; the proposed algorithm reflects these features and classified all 3D film images accurately at a specific frequency of the histogram. The performance of the proposed algorithm was verified to be the best in comparison tests with other methods such as image subtraction, Otsu thresholding, Canny edge detection, morphological geodesic active contours, and support vector machines, showing that excellent classification accuracy can be obtained without detecting the patterns in the 3D film images.
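
The histogram-width feature can be sketched as follows: take the gray-level histogram and measure how many gray levels have counts above a chosen frequency level. This is an illustrative reading of the abstract, not the paper's exact procedure; the function name, the count level, and the synthetic images are assumptions.

```python
import numpy as np

def histogram_width_at(image, level):
    """Width (in gray levels) of the histogram region whose bin counts
    exceed `level`: the scalar feature used to separate the two classes.
    """
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    above = np.nonzero(hist > level)[0]
    if above.size == 0:
        return 0
    return int(above[-1] - above[0] + 1)

# A low-contrast image concentrates its histogram in a few bins (small
# width); an image with spread-out intensities yields a larger width
# at the same count level.
rng = np.random.default_rng(1)
narrow = np.full((100, 100), 128, dtype=np.uint8)
wide = rng.integers(0, 256, size=(100, 100), dtype=np.uint8)
w_narrow = histogram_width_at(narrow, level=5)
w_wide = histogram_width_at(wide, level=5)
```

Classification then amounts to thresholding this width, which is why no explicit pattern detection is needed.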

View Synthesis Using OpenGL for Multi-viewpoint 3D TV (다시점 3차원 방송을 위한 OpenGL을 이용하는 중간영상 생성)

  • Lee, Hyun-Jung;Hur, Nam-Ho;Seo, Yong-Duek
    • Journal of Broadcast Engineering
    • /
    • v.11 no.4 s.33
    • /
    • pp.507-520
    • /
    • 2006
  • In this paper, we propose an application of OpenGL functions for novel view synthesis from multi-view images and depth maps. While image-based rendering generates synthetic images by processing camera views with a graphics engine, little has been known about how to feed the given images and depth information to the graphics engine and render the scene. This paper presents an efficient way of constructing a 3D space from camera parameters, reconstructing the 3D scene from color and depth images, and synthesizing virtual views, as well as their depth images, in real time.
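
The geometric core of this kind of depth-image-based view synthesis is lifting each depth-map pixel into 3D camera space before reprojecting it into the virtual view. A minimal sketch of the backprojection step, assuming NumPy and a pinhole intrinsic matrix K (the function name and numbers are illustrative, not from the paper):

```python
import numpy as np

def backproject(depth, K):
    """Lift a depth map to 3D camera-space points using intrinsics K.

    Each pixel (u, v) with depth z maps to z * K^{-1} [u, v, 1]^T; the
    resulting points can then be projected into a virtual camera.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    rays = np.linalg.inv(K) @ pix              # rays at unit depth
    pts = rays * depth.reshape(1, -1)          # scale rays by depth
    return pts.T.reshape(h, w, 3)

# Intrinsics: focal length 500 px, principal point at pixel (32, 24).
K = np.array([[500.0,   0.0, 32.0],
              [  0.0, 500.0, 24.0],
              [  0.0,   0.0,  1.0]])
depth = np.full((48, 64), 2.0)                 # flat scene 2 units away
pts = backproject(depth, K)
```

In an OpenGL pipeline these points would be rendered as colored geometry from the virtual camera's pose; the depth buffer then yields the synthesized view's depth image for free.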

3D Building Detection and Reconstruction from Aerial Images Using Perceptual Organization and Fast Graph Search

  • Woo, Dong-Min;Nguyen, Quoc-Dat
    • Journal of Electrical Engineering and Technology
    • /
    • v.3 no.3
    • /
    • pp.436-443
    • /
    • 2008
  • This paper presents a new method for building detection and reconstruction from aerial images. In our approach, we extract useful building location information from the generated disparity map to segment the objects of interest and consequently reduce the unnecessary line segments extracted in the low-level feature extraction step. Hypothesis selection is carried out using an undirected graph, in which closed cycles represent complete rooftop hypotheses. We test the proposed method with synthetic images generated from the Avenches dataset of Ascona aerial images. The experimental results show that the extracted 3D line segments of the reconstructed buildings have an average error of 1.69 m, and that our method can be used efficiently for building detection and reconstruction from aerial images.
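
The idea that a closed cycle in the line-segment graph corresponds to a complete rooftop outline can be illustrated with a simple depth-first cycle check on an undirected graph. This is a sketch of the graph concept only, not the paper's fast graph-search algorithm; the function name and toy graphs are assumptions.

```python
def has_cycle(edges, n):
    """Detect a cycle in an undirected graph with n vertices.

    In the rooftop-hypothesis setting, vertices are line-segment
    junctions; a closed cycle corresponds to a complete rooftop outline,
    while an open polyline is an incomplete hypothesis.
    """
    adj = [[] for _ in range(n)]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    seen = [False] * n
    def dfs(v, parent):
        seen[v] = True
        for w in adj[v]:
            if not seen[w]:
                if dfs(w, v):
                    return True
            elif w != parent:
                return True        # back edge closes a cycle
        return False
    return any(not seen[v] and dfs(v, -1) for v in range(n))

square = [(0, 1), (1, 2), (2, 3), (3, 0)]   # closed rooftop outline
chain = [(0, 1), (1, 2), (2, 3)]            # open polyline
closed_found = has_cycle(square, 4)
open_found = has_cycle(chain, 4)
```

A hypothesis-selection stage would keep only subgraphs that close into cycles and score them as candidate rooftops.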

An Efficient Depth Measurement of 3D Microsystem from Stereo Images (입체화상으로부터 3차원 마이크로계의 효과적인 깊이측정)

  • Hwang, J.W.;Lee, J.;Yoon, D.Y.
    • Transactions of the Korean Society of Machine Tool Engineers
    • /
    • v.16 no.5
    • /
    • pp.178-182
    • /
    • 2007
  • This study presents an efficient depth measurement for 3-dimensional microsystems using the disparity histogram from stereo images. A user-friendly Windows program written in C++ implements various methods for stereo-image processing, in which minimization of the matching-pixel error at unique points of the stereo images is carried out as a pre-processing step. Although MPC was adopted among the various methods, the resulting measurements still require optimization of the window sizes and post-processing corrections for the stereo images. The present Windows program is promising for efficiently measuring 3-dimensional depth when reconstructing the 3-dimensional structure of microsystems.
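
Once disparities have been matched, depth recovery for a rectified stereo pair follows the standard triangulation relation Z = f * B / d. A minimal sketch (function name and the focal length, baseline, and disparity values are illustrative):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Triangulated depth for a rectified stereo pair: Z = f * B / d.

    disparity_px : horizontal pixel shift of a matched point
    focal_px     : focal length expressed in pixels
    baseline_mm  : distance between the two camera centers
    """
    if disparity_px <= 0:
        raise ValueError("matched points must have positive disparity")
    return focal_px * baseline_mm / disparity_px

# Larger disparity means the point is closer to the cameras, which is
# why a disparity histogram summarizes the depth structure of a scene.
near = depth_from_disparity(40, focal_px=800, baseline_mm=60)   # 1200.0 mm
far = depth_from_disparity(10, focal_px=800, baseline_mm=60)    # 4800.0 mm
```

A disparity histogram therefore maps directly to a depth histogram, with each histogram peak corresponding to a dominant depth layer in the micro-scene.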

An Input/Output Technology for 3-Dimensional Moving Image Processing (3차원 동영상 정보처리용 영상 입출력 기술)

  • Son, Jung-Young;Chun, You-Seek
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.35S no.8
    • /
    • pp.1-11
    • /
    • 1998
  • One of the desired features for realizing high-quality information and telecommunication services in the future is "the sensation of reality". This will be achieved only with visual communication based on 3-dimensional (3-D) moving images. The main difficulties in realizing 3-D moving image communication are that no data transmission technology has been developed for the huge amount of data involved in 3-D images, and no technologies have been established for recording and displaying 3-D images in real time. The currently known stereoscopic imaging technologies can present only depth, not motion parallax, so they are not effective in creating the sensation of reality without eyeglasses. More effective 3-D imaging technologies for achieving the sensation of reality are those based on multiview 3-D images, which provide object images that change as the eyes move in different directions. In this paper, a multiview 3-D imaging system composed of 8 CCD cameras in a case, an RGB (red, green, blue) beam projector, and a holographic screen is introduced. In this system, the 8 view images are recorded by the 8 CCD cameras and transmitted to the beam projector in sequence by a signal converter. This signal converter converts each camera signal into 3 color signals (RGB), multiplexes each color signal from the 8 cameras into a serial signal train, and drives the corresponding color channel of the beam projector at a 480 Hz frame rate. The beam projector projects images onto the holographic screen through an LCD shutter consisting of 8 LCD strips. The image of each LCD strip, created by the holographic screen, forms a sub-viewing zone. Since the ON period and sequence of the LCD strips are synchronized with those of the camera image sampling and the beam projector image projection, the multiview 3-D moving images are viewed at the viewing zone.
