• Title/Summary/Keyword: 3D image analysis

A Comparison of System Performances Between Rectangular and Polar Exponential Grid Imaging System (POLAR EXPONENTIAL GRID와 장방형격자 영상시스템의 영상분해도 및 영상처리능력 비교)

  • Jae Kwon Eem
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.31B no.2
    • /
    • pp.69-79
    • /
    • 1994
  • The conventional machine vision system, which has a uniform rectangular grid, requires a tremendous amount of computation for processing and analysing an image, especially in 2-D image transformations such as scaling and rotation and in the 3-D recovery problem typical of robot application environments. In this study, an imaging system with nonuniformly distributed image sensors simulating the human visual system, referred to as the Polar Exponential Grid (PEG), is compared with the existing conventional uniform rectangular grid system in terms of image resolution and computational complexity. By mimicking the geometric structure of the PEG sensor cell, we obtained PEG-like images using computer simulation. With the images obtained from the simulation, the image resolution of the two systems is compared, and some basic image processing tasks such as image scaling and rotation are implemented on the PEG sensor system to examine its performance. Furthermore, the Fourier transform of a PEG image is described and implemented from an image-analysis point of view. Also, the range and heading-angle measurement errors usually encountered in 3-D coordinate recovery with a stereo camera system are calculated for the PEG sensor system and compared with those obtained from the uniform rectangular grid system. In fact, the PEG imaging system not only reduces the computational requirements but also has scale- and rotation-invariance properties in the Fourier spectrum. Hence the PEG system has a more suitable image coordinate system for image scaling, rotation, and image recognition problems. The range and heading-angle measurement errors of the PEG system are less than those of the uniform rectangular grid system in the practical measurement range.

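The nonuniform sampling behind a PEG sensor can be sketched by resampling a uniform-grid image onto exponentially spaced rings. The function below is a minimal illustration of that idea; its name, parameters, and nearest-neighbour sampling are our assumptions, not details from the paper:

```python
import numpy as np

def to_polar_exponential_grid(img, n_rings=32, n_sectors=64, r_min=1.0):
    """Resample a uniform-grid grayscale image onto a polar exponential grid.

    Ring radii grow exponentially from r_min out to the largest radius that
    fits in the image, mimicking foveal (dense-center, sparse-periphery)
    sampling.
    """
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    # Exponentially spaced radii: r_i = r_min * (r_max / r_min) ** (i / (n_rings - 1))
    radii = r_min * (r_max / r_min) ** (np.arange(n_rings) / (n_rings - 1))
    angles = 2.0 * np.pi * np.arange(n_sectors) / n_sectors
    peg = np.empty((n_rings, n_sectors), dtype=img.dtype)
    for i, r in enumerate(radii):
        ys = np.clip(np.round(cy + r * np.sin(angles)).astype(int), 0, h - 1)
        xs = np.clip(np.round(cx + r * np.cos(angles)).astype(int), 0, w - 1)
        peg[i] = img[ys, xs]  # nearest-neighbour sampling for brevity
    return peg
```

In this representation a rotation of the source image becomes a circular shift along the sector axis and a scaling becomes a shift along the ring axis, which is the origin of the invariance properties the abstract mentions.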

Analysis of the Optimized 3D Depth of Integral Imaging (집적영상 방식 3D 디스플레이의 최적 입체감에 관한 분석)

  • Choi, Hee-Jin
    • Korean Journal of Optics and Photonics
    • /
    • v.23 no.1
    • /
    • pp.32-35
    • /
    • 2012
  • In this paper, an analysis of the optimized 3D depth of integral imaging is proposed. We achieve this by calculating the amount of image distortion and considering the threshold of recognition in the human visual system. Experimental results are also provided to test the theory.

3D Line Segment Detection from Aerial Images using DEM and Ortho-Image (DEM과 정사영상을 이용한 항공 영상에서의 3차원 선소추출)

  • Woo Dong-Min;Jung Young-Kee;Lee Jeong-Yong
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.54 no.3
    • /
    • pp.174-179
    • /
    • 2005
  • This paper presents a 3D line segment extraction method that can be used in generating 3D rooftop models. The core of our method is that a 3D line segment is extracted by line fitting of elevation data on the 2D line coordinates of an ortho-image. In order to use elevations in line fitting, the elevations should be reliable; to measure the reliability of an elevation, we employ the concept of self-consistency. We test the effectiveness of the proposed method with a quantitative accuracy analysis using synthetic images generated from the Avenches data set of Ascona aerial images. Experimental results indicate that the proposed method shows average 3D line errors of 0.16-0.30 meters, about 10% of those of the conventional area-based method.
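
The core step above can be sketched as a least-squares fit of DEM elevations parameterised by distance along the 2D ortho-image line. The sketch below assumes a simple linear model z = a·t + b and omits the paper's self-consistency weighting; the function name is ours:

```python
import numpy as np

def fit_3d_line_segment(xy, z):
    """Fit elevations along a 2D line to get a 3D line segment.

    xy : (N, 2) ortho-image line coordinates (e.g. in metres)
    z  : (N,) DEM elevations sampled at those coordinates
    Returns the two 3D endpoints of the fitted segment.
    """
    xy = np.asarray(xy, float)
    z = np.asarray(z, float)
    # Parameterise each point by its distance along the 2D line.
    t = np.linalg.norm(xy - xy[0], axis=1)
    # Least-squares line fit z = a*t + b over all samples.
    a, b = np.polyfit(t, z, 1)
    z0, z1 = a * t[0] + b, a * t[-1] + b
    return np.array([*xy[0], z0]), np.array([*xy[-1], z1])
```

Fitting over all DEM samples rather than taking the two endpoint elevations is what lets unreliable individual elevations be averaged out.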

3D Analysis of Scene and Light Environment Reconstruction for Image Synthesis (영상합성을 위한 3D 공간 해석 및 조명환경의 재구성)

  • Hwang, Yong-Ho;Hong, Hyun-Ki
    • Journal of Korea Game Society
    • /
    • v.6 no.2
    • /
    • pp.45-50
    • /
    • 2006
  • In order to generate a photo-realistic synthesized image, we should reconstruct the light environment by 3D analysis of the scene. This paper presents a novel method for identifying the positions and characteristics of the lights (both global and local) in a real image, which are used to illuminate the synthetic objects. First, we generate a High Dynamic Range (HDR) radiance map from omni-directional images taken by a digital camera with a fisheye lens. Then, the positions of the camera and light sources in the scene are identified automatically from the correspondences between images, without a priori camera calibration. Light sources are classified according to whether they illuminate the whole scene, and we then reconstruct the 3D illumination environment. Experimental results showed that the proposed method, combined with distributed ray tracing, makes it possible to achieve photo-realistic image synthesis. Animators and lighting experts in the film and animation industry are expected to benefit from it.

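HDR radiance-map recovery merges differently exposed images of the same scene. The sketch below assumes a linear camera response and a crude validity weight; the paper's recovery from fisheye images is more involved, so treat this purely as an illustration:

```python
import numpy as np

def hdr_radiance(exposures, times, z_min=5, z_max=250):
    """Merge differently exposed images into a radiance map.

    Each pixel's radiance is a weighted average of (value / exposure_time),
    with under- and over-exposed values down-weighted so that every pixel
    is estimated from the exposures that captured it well.
    """
    num = np.zeros(exposures[0].shape, float)
    den = np.zeros_like(num)
    for img, t in zip(exposures, times):
        z = img.astype(float)
        # Down-weight clipped pixels instead of discarding them outright.
        w = np.where((z >= z_min) & (z <= z_max), 1.0, 1e-6)
        num += w * z / t
        den += w
    return num / den
```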

Foreground Extraction and Depth Map Creation Method based on Analyzing Focus/Defocus for 2D/3D Video Conversion (2D/3D 동영상 변환을 위한 초점/비초점 분석 기반의 전경 영역 추출과 깊이 정보 생성 기법)

  • Han, Hyun-Ho;Chung, Gye-Dong;Park, Young-Soo;Lee, Sang-Hun
    • Journal of Digital Convergence
    • /
    • v.11 no.1
    • /
    • pp.243-248
    • /
    • 2013
  • In this paper, foreground depth is analyzed by focus and color-analysis grouping for 2D/3D video conversion, and a method of generating foreground depth using focus and motion information is proposed. A candidate foreground image is generated from the estimated motion of the image focus information in order to extract the foreground from 2D video. The foreground area is then extracted by a filling process that applies color analysis to the hole areas of inner objects in the candidate foreground image. Depth information is generated by analyzing the focus values in the actual frame to allocate depth to the extracted foreground area, and the depth is weighted by motion information. The results of a previously proposed algorithm are compared with those of the method proposed in this paper to evaluate the quality of the generated depth information.

The Search of Image Outline Using 3D Viewpoint Change (3차원 시점 변화를 활용한 이미지 외곽라인 검색 제안)

  • Kim, Sungkon
    • The Journal of the Convergence on Culture Technology
    • /
    • v.5 no.3
    • /
    • pp.283-288
    • /
    • 2019
  • We propose a method to search for similar images by using outlines and viewpoints. In a first test, three-dimensional images, whose motion cannot be controlled, showed lower search accuracy than static flat images. For cause analysis, data for six specific species of tropical fish were selected. We created 3D graphics of each species and generated 144 image outlines from 12-step viewpoints covering top, bottom, left, and right. Tropical fish images of each type were collected and ranked by similarity search. The study showed that each species of tropical fish has many distinctive viewpoints. To increase the accuracy of the search, a user interface was created that lets the user select a point of view. When the user selects the viewpoint of an image, the method shows results that take the range of related viewpoints into account.

Measuring the Degree of Crop Growth through Image Analysis (영상 분석을 통한 작물의 생육 정도 측정)

  • Heo, Gyeongyong;Choi, Eun Young;Kim, Ji Hong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.657-659
    • /
    • 2022
  • Hydroponics and aquaponics are attracting attention because they enable automated farm management and stable production thanks to the spread of smart farms. Several issues need to be addressed in applying smart farms; one of them is responding flexibly to demand by automatically deciding when to ship, which requires a method for automatically determining the growth level of crops. In this paper, we focus on the simple fact that the area and volume occupied by crops increase as they grow, and show that it is possible to monitor the growth process of crops with 2D and 3D cameras and to determine their degree of growth by calculating area and volume. The method still needs to be verified in various environments and with various crops, but for common crops in hydroponics and aquaponics, the growth level can be determined through analysis of images acquired with 2D and 3D cameras.

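The 2D area measure above can be sketched with a simple color threshold that counts crop pixels. The threshold rule and margin below are illustrative assumptions, not the paper's segmentation; a real deployment would calibrate them per camera and crop:

```python
import numpy as np

def crop_pixel_area(rgb, green_margin=20):
    """Estimate the image area (in pixels) occupied by a crop.

    A pixel is counted as crop if its green channel dominates both red
    and blue by at least `green_margin`. Converting pixel area to a
    physical area would additionally require camera calibration.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    mask = (g - r >= green_margin) & (g - b >= green_margin)
    return int(mask.sum())
```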

Development and Evaluation of the Usefulness for Hoffman Brain Phantom Based on 3D Printing Technique (3D 프린팅 기법 기반의 Hoffman Brain 팬텀 개발 및 유용성 평가)

  • Park, Hoon-Hee;Lee, Joo-Young
    • Journal of radiological science and technology
    • /
    • v.42 no.6
    • /
    • pp.441-446
    • /
    • 2019
  • The purpose of this paper is to assess the usefulness of a phantom produced with 3D printing technology by reproducing the original phantom. Using CT, we obtained geometric information from the original phantom, and the acquired file was printed by the SLA method in ABS material. For inspection, SPECT/CT was used to obtain images. We filled both phantoms with a solution of 1 mCi of 99mTcO4 mixed in 1 liter of water and acquired images in accordance with the standard protocol. Using Image J, the SNR for each slice of the image was obtained, with attenuation-corrected (AC) images used as the reference. For the analysis of the acquired images, ROIs were set in the white matter and gray matter sections of each image, and the average intensity values within the ROIs were compared. According to the results of this study, the 3D printed phantom's SNR is about 0.1 higher than that of the conventional phantom, and the intensity ratio was 1:3.4 for the original phantom and 1:3.2 for the printed phantom. Therefore, if a calibration value is applied, it is assumed that the printed phantom can be used as an alternative to the original.
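
The per-slice SNR measurement can be sketched as mean over standard deviation within an ROI, one common Image J-style definition; the paper does not state its exact formula, so the definition below is an assumption:

```python
import numpy as np

def roi_snr(image_slice, roi):
    """SNR of a slice inside a rectangular ROI, defined as mean / std.

    roi : (y0, y1, x0, x1) bounds of the region of interest.
    """
    y0, y1, x0, x1 = roi
    patch = image_slice[y0:y1, x0:x1].astype(float)
    return patch.mean() / patch.std()
```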

3D Building Reconstruction and Visualization by Clustering Airborne LiDAR Data and Roof Shape Analysis

  • Lee, Dong-Cheon;Jung, Hyung-Sup;Yom, Jae-Hong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.25 no.6_1
    • /
    • pp.507-516
    • /
    • 2007
  • Segmentation and organization of LiDAR (Light Detection and Ranging) data of the Earth's surface are difficult tasks because the captured LiDAR data are composed of irregularly distributed point clouds that lack semantic information. The reason for this difficulty is that the data provide a huge number of spatial coordinates without topological and/or relational information among the points. This study introduces a LiDAR data segmentation technique that utilizes histograms of LiDAR height-image data and analyzes roof shape for 3D reconstruction and visualization of buildings. One advantage of utilizing LiDAR height-image data is that no registration is required, because the LiDAR data are geo-referenced and ortho-projected; in consequence, measurements on the image provide absolute reference coordinates. The LiDAR image allows measurement of the initial building boundaries, which is used to estimate the locations of the side walls and to form the planar surfaces that represent approximate building footprints. LiDAR points close to each side wall were grouped together, and then least-squares planar surface fitting with the segmented point clouds was performed to determine the precise location of each wall of a building. Finally, roof shape analysis was performed using accumulated slopes along profiles of the roof top. However, simulated LiDAR data were used for analyzing roof shape because buildings with various roof shapes do not exist in the test area. The proposed approach has been tested on a heavily built-up urban residential area. A 3D digital vector map produced by digitizing compiled aerial photographs was used to evaluate the accuracy of the results. Experimental results show the efficiency of the proposed methodology for 3D building reconstruction and large-scale digital mapping, especially for urban areas.
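
The least-squares planar fitting step for each wall's point cluster can be sketched with an SVD of the centred points; the function name is ours, and the paper may use a different formulation:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a 3D point cluster (e.g. LiDAR
    returns grouped near one wall). Returns (centroid, unit normal).
    """
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    # SVD of the centred points: the last right-singular vector is the
    # direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[-1]
```

The SVD route is numerically stable even for near-vertical walls, where fitting z as a function of (x, y) would fail.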

Evaluation of Radioactivity Concentration According to Radioactivity Uptake on Image Acquisition of PET/CT 2D and 3D (PET/CT 2D와 3D 영상 획득에서 방사능 집적에 따른 방사능 농도의 평가)

  • Park, Sun-Myung;Hong, Gun-Chul;Lee, Hyuk;Kim, Ki;Choi, Choon-Ki;Seok, Jae-Dong
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.14 no.1
    • /
    • pp.111-114
    • /
    • 2010
  • Purpose: There has been recent interest in radioactivity uptake and the acquisition of radioactivity-concentration images. The degree of uptake is strongly affected by many factors, including the 18F-FDG injection volume, tumor size, and blood glucose level. Therefore, we investigated how radioactivity uptake in a target influences 2D and 3D image analysis and elucidated the radioactivity concentrations that mediate this effect. This study shows the relationship between radioactivity uptake and 2D/3D image acquisition at various radioactivity concentrations. Materials and Methods: We acquired 2D and 3D images using a 1994 NEMA PET phantom and a GE Discovery STe 16 PET/CT (GE, U.S.A.), setting the ratios of background to hot-sphere radioactivity concentration to 1:2, 1:4, 1:8, 1:10, 1:20, and 1:30, respectively, with 10 minutes allowed for CT attenuation correction and acquisition. For reconstruction, we applied an iterative method with 2 iterations and 20 subsets to both 2D and 3D. For analyzing the images, we set the same ROI at the center of the hot sphere and in the background, measured the radioactivity counts of each, and compared them. Results: The measured ratios of hot-sphere to background radioactivity concentration were 1:1.93, 1:3.86, 1:7.79, 1:8.04, 1:18.72, and 1:26.90 in 2D, and 1:1.95, 1:3.71, 1:7.10, 1:7.49, 1:15.10, and 1:23.24 in 3D. The percentage differences were 3.50%, 3.47%, 8.12%, 8.02%, 10.58%, and 11.06% in 2D (minimum 3.47%, maximum 11.06%), and 3.66%, 4.80%, 8.38%, 23.92%, 23.86%, and 22.69% in 3D. Conclusion: The difference in accumulated concentration increases significantly as the radioactivity concentration increases, and the radioactivity concentration in 2D images is affected less than in 3D. Therefore, when a patient receives a follow-up scan with a changed acquisition mode, the scan should be conducted with the understanding that these factors may affect the quantitative analysis results, and these differences should be taken into account at reading.
