• Title/Summary/Keyword: image analysis algorithm


A Study on the Hyperspectral Image Classification with the Iterative Self-Organizing Unsupervised Spectral Angle Classification (반복최적화 무감독 분광각 분류 기법을 이용한 하이퍼스펙트럴 영상 분류에 관한 연구)

  • Jo Hyun-Gee;Kim Dae-Sung;Yu Ki-Yun;Kim Yong-Il
    • Korean Journal of Remote Sensing
    • /
    • v.22 no.2
    • /
    • pp.111-121
    • /
    • 2006
  • Classification using the spectral angle is an approach based on the fact that the spectra of the same type of surface object in remotely sensed data are approximately linearly scaled variations of one another, owing to atmospheric and topographic effects. Much recent research has addressed unsupervised classification using the spectral angle, but little of it considers the characteristics of hyperspectral data. In this study, we propose ISOMUSAC (Iterative Self-Organizing Modified Unsupervised Spectral Angle Classification), which remedies the defects of previous unsupervised spectral angle classification methods. ISOMUSAC uses angle division to select seed points, computes cluster centers using the spectral angle, and iteratively merges and splits clusters. As a result, the proposed algorithm reduces processing time and, by visual and quantitative analysis, produces better classification results than previous unsupervised algorithms. For quantitative comparison with previous unsupervised spectral angle classification methods, we also propose a validity index based on the spectral angle.
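The core operation the abstract relies on, comparing spectra by the angle between them rather than their magnitude, can be sketched as follows. This is a minimal illustration under assumed names and toy spectra, not the authors' ISOMUSAC implementation:

```python
import numpy as np

def spectral_angle(a, b):
    """Angle (radians) between two spectra; a small angle means similar material."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def assign_clusters(pixels, centers):
    """Assign each pixel spectrum to the cluster center with the smallest spectral angle."""
    angles = np.array([[spectral_angle(p, c) for c in centers] for p in pixels])
    return angles.argmin(axis=1)

# Two pixels that are scaled versions of each other have angle ~0,
# so they land in the same cluster regardless of brightness.
pixels = np.array([[1.0, 2.0, 3.0], [2.0, 4.0, 6.0], [3.0, 1.0, 0.5]])
centers = np.array([[1.0, 2.0, 3.0], [3.0, 1.0, 0.5]])
labels = assign_clusters(pixels, centers)
print(labels)
```

The brightness-invariance shown here is exactly why the spectral angle suits data with atmospheric and topographic scaling effects.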

Analysis on the Snow Cover Variations at Mt. Kilimanjaro Using Landsat Satellite Images (Landsat 위성영상을 이용한 킬리만자로 만년설 변화 분석)

  • Park, Sung-Hwan;Lee, Moung-Jin;Jung, Hyung-Sup
    • Korean Journal of Remote Sensing
    • /
    • v.28 no.4
    • /
    • pp.409-420
    • /
    • 2012
  • Since the Industrial Revolution, CO2 levels have risen along with climate change. In this study, we quantitatively analyze time-series changes in snow cover and statistically predict its vanishing point using remote sensing. The study area is Mt. Kilimanjaro, Tanzania. Twenty-three Landsat-5 TM and Landsat-7 ETM+ images spanning the 27 years from June 1984 to July 2011 were acquired. First, atmospheric correction was performed on each image using the COST atmospheric correction model. Second, the snow cover area was extracted using the NDSI (Normalized Difference Snow Index) algorithm. Third, the minimum height of the snow cover was determined using the SRTM DEM. Finally, the vanishing point of the snow cover was predicted using a linear trend line. The analysis was run both on all 23 images and on the 17 dry-season images. Results show that the snow cover area decreased by approximately 6.47 km², from 9.01 km² to 2.54 km², a 73% reduction. The minimum height of the snow cover increased by approximately 290 m, from 4,603 m to 4,893 m. The trend lines show the snow cover area decreasing by approximately 0.342 km² per year in the dry season and 0.421 km² per year overall, while the minimum height of the snow cover rises by approximately 9.848 m per year in the dry season and 11.251 m per year overall. Based on this analysis, the snow cover will have vanished by 2020 at the 95% confidence level. This study can support monitoring of global climate change by providing the change in snow cover area, and it offers reference data for future research on this or similar areas.
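The NDSI step can be illustrated with a short sketch. The band arrays are hypothetical, and the 0.4 snow threshold is a commonly used value for Landsat rather than one stated in the abstract:

```python
import numpy as np

def ndsi(green, swir):
    """Normalized Difference Snow Index: (Green - SWIR) / (Green + SWIR)."""
    green = green.astype(float)
    swir = swir.astype(float)
    return (green - swir) / (green + swir + 1e-12)  # epsilon avoids division by zero

def snow_mask(green, swir, threshold=0.4):
    """Pixels above the threshold are flagged as snow; 0.4 is a common choice."""
    return ndsi(green, swir) > threshold

# Hypothetical 2x2 reflectance tiles: snow is bright in green, dark in SWIR.
green = np.array([[0.8, 0.2], [0.7, 0.1]])
swir = np.array([[0.1, 0.3], [0.2, 0.15]])
mask = snow_mask(green, swir)
print(mask)
```

Counting `True` pixels per image and multiplying by the pixel footprint gives the snow cover area whose trend the study analyzes.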

A study on evaluation of the image with washed-out artifact after applying scatter limitation correction algorithm in PET/CT exam (PET/CT 검사에서 냉소 인공물 발생 시 산란 제한 보정 알고리즘 적용에 따른 영상 평가)

  • Ko, Hyun-Soo;Ryu, Jae-kwang
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.22 no.1
    • /
    • pp.55-66
    • /
    • 2018
  • Purpose: In PET/CT exams, a washed-out artifact can occur because of severe patient motion or high specific activity; it degrades not only qualitative reading but also quantitative analysis. Scatter limitation correction, by GE, is an algorithm that corrects the washed-out artifact and recovers the images in a PET scan. The purpose of this study is to measure, in a phantom experiment, the threshold of specific activity below which an image showing the washed-out artifact can be recovered to its original uptake values, and to compare clinical patients' quantitative data before and after correction. Materials and Methods: PET and CT images of a ⁶⁸Ge cylinder phantom were acquired with no misalignment (D0) and with 1, 2, 3, and 4 cm of misalignment (D1-D4), at 20 levels of specific activity from 20 to 20,000 kBq/ml. For 34 patients who underwent ¹⁸F-FDG whole-body PET/CT exams, we also measured, before and after correction: the misalignment distance of the Foley catheter line between the CT and PET images, the specific activity that produces the washed-out artifact, the SUVmean of muscle in the artifact slice, the SUVmax of lesions in the artifact slice, and the SUVmax of other lesions outside the artifact slice. SPSS 21 was used to test the SUV differences before and after scatter limitation correction with a paired t-test. Results: In the phantom experiment, the SUVmean of the ⁶⁸Ge cylinder decreased as the specific activity of ¹⁸F increased, and decreased further as the misalignment between CT and PET grew; conversely, the effect of the correction increased with misalignment. Below 50 kBq/ml there was no washed-out artifact and the SUVmean matched the original. At D0 and D1, the SUVmean recovered to the original value (0.95) below 120 kBq/ml when scatter limitation correction was applied. At D2 and D3, the SUVmean recovered to the original below 100 kBq/ml, and at D4 below 80 kBq/ml. In the 34 clinical patients, the average misalignment distance was 2.02 cm and the average specific activity producing the washed-out artifact was 490.15 kBq/ml. The average SUVmean of muscle and the average SUVmax of lesions in the artifact slice differed significantly before and after correction by paired t-test (t = -13.805, p = 0.000 and t = -2.851, p = 0.012, respectively), whereas the average SUVmax of lesions outside the artifact slice did not (t = -1.173, p = 0.250). Conclusion: The scatter limitation correction algorithm of the GE PET/CT scanner helps correct the washed-out artifact caused by patient motion or high specific activity and recover the PET images. By measuring the misalignment distance between the CT and PET images and the specific activity, and then applying the scatter limitation algorithm, we can read images showing the washed-out artifact more accurately without repeating the scan.
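The paired t-test used for the before/after SUV comparison can be sketched as below. The SUV values are hypothetical, and the code implements the standard paired t statistic rather than anything SPSS-specific:

```python
import math

def paired_t(before, after):
    """Paired t statistic: mean of per-patient differences over its standard error."""
    d = [a - b for a, b in zip(after, before)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance of differences
    return mean / math.sqrt(var / n)

# Hypothetical SUVmean values in the artifact slice for 5 patients,
# before and after applying the correction.
before = [0.40, 0.50, 0.30, 0.60, 0.45]
after = [0.90, 1.05, 0.75, 1.10, 0.90]
t = paired_t(before, after)
print(round(t, 2))
```

A paired test is the right choice here because the two measurements come from the same patients, so per-patient differences remove between-patient variability.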

Quantification of Myocardial Blood flow using Dynamic N-13 Ammonia PET and factor Analysis (N-13 암모니아 PET 동적영상과 인자분석을 이용한 심근 혈류량 정량화)

  • Choi, Yong;Kim, Joon-Young;Im, Ki-Chun;Kim, Jong-Ho;Woo, Sang-Keun;Lee, Kyung-Han;Kim, Sang-Eun;Choe, Yearn-Seong;Kim, Byung-Tae
    • The Korean Journal of Nuclear Medicine
    • /
    • v.33 no.3
    • /
    • pp.316-326
    • /
    • 1999
  • Purpose: We evaluated the feasibility of extracting pure left ventricular blood pool and myocardial time-activity curves (TACs), and of generating factor images, from human dynamic N-13 ammonia PET using factor analysis. The myocardial blood flow (MBF) estimates obtained with factor analysis were compared with those obtained with a user-drawn region-of-interest (ROI) method. Materials and Methods: Stress and rest N-13 ammonia cardiac PET images were acquired for 23 min in 5 patients with coronary artery disease using a GE Advance tomograph. Factor analysis generated physiological TACs and factor images from the normalized TACs of each dixel. The algorithm involved four steps: (a) data preprocessing; (b) principal component analysis; (c) oblique rotation with positivity constraints; and (d) factor image computation. Areas under the curves and MBF estimated with the two-compartment N-13 ammonia model were used to validate the accuracy of the TACs generated by factor analysis. The MBF estimated by factor analysis was compared with the values estimated by the ROI method. Results: MBF values obtained by factor analysis correlated linearly with MBF obtained by the ROI method (slope = 0.84, r = 0.91). Left ventricular blood pool TACs obtained by the two methods agreed well (area-under-curve ratios: 1.02 for 0-1 min, 0.98 for 0-2 min, 0.86 for 1-2 min). Conclusion: The results of this study demonstrate that MBF can be measured accurately and noninvasively with dynamic N-13 ammonia PET imaging and factor analysis. The method is simple and accurate, and can measure MBF without blood sampling, ROI definition, or spillover correction.
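Step (b) of the algorithm, principal component analysis of the normalized dixel TACs, can be sketched as follows. The two curve shapes and the mixing are synthetic stand-ins for patient data, assumed for illustration only:

```python
import numpy as np

# Hypothetical dixel-by-frame matrix: each row is a time-activity curve that
# mixes a fast-clearing blood-pool shape with a slower myocardial uptake shape.
rng = np.random.default_rng(0)
frames = 20
t = np.linspace(0, 3, frames)
blood = np.exp(-t)        # blood-pool TAC shape
myo = 1 - np.exp(-t)      # myocardial TAC shape
mix = rng.uniform(0, 1, size=(100, 1))
tacs = mix * blood + (1 - mix) * myo + rng.normal(0, 0.01, (100, frames))

# PCA via SVD of the mean-centered TAC matrix: the leading components
# capture the physiological variation that the later rotation step turns
# into non-negative physiological factors.
centered = tacs - tacs.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = (s ** 2) / (s ** 2).sum()
print(explained[:3])
```

Because every dixel is a mixture of two underlying curves, almost all variance concentrates in the first components, which is what makes the subsequent oblique rotation with positivity constraints well posed.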


The Relationship Analysis between the Epicenter and Lineaments in the Odaesan Area using Satellite Images and Shaded Relief Maps (위성영상과 음영기복도를 이용한 오대산 지역 진앙의 위치와 선구조선의 관계 분석)

  • CHA, Sung-Eun;CHI, Kwang-Hoon;JO, Hyun-Woo;KIM, Eun-Ji;LEE, Woo-Kyun
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.19 no.3
    • /
    • pp.61-74
    • /
    • 2016
  • The purpose of this paper is to analyze the relationship between lineament features and the epicenter of a medium-sized earthquake (magnitude 4.8) that occurred on January 20, 2007 in the Odaesan area, using a 1:25,000-scale shaded relief map and satellite images from LANDSAT-8 and KOMPSAT-2. Previous studies analyzed lineament features in tectonic settings primarily on two-dimensional satellite images and shaded relief maps. These methods, however, limit the visual interpretation of relief, long considered the major component of lineament extraction. To overcome the limitations of two-dimensional images, this study extracted lineaments from three-dimensional images produced from a digital elevation model and a drainage network map, which reduces the mapping errors introduced by visual interpretation. In addition, spline interpolation was performed to produce the density maps of lineament frequency, intersection, and length needed to estimate the lineament density at the epicenter. An algorithm was developed to compute the Value of the Relative Density (VRD), the relative lineament density of a map: the lineament density of each map grid cell divided by the maximum density value on the map. It is thus a quantified value indicating how concentrated the lineament density is across the area affected by the earthquake. Using this algorithm, the VRD at the earthquake epicenter computed from the frequency, intersection, and length density maps ranged from approximately 0.60 to 0.90. Because the mapped images differed in parameters such as solar altitude and azimuth, however, the mean VRD was used rather than values categorized by image. The mean VRD from the frequency map was approximately 0.85, about 21% higher than the means from the intersection and length maps, demonstrating the close relationship between lineaments and the epicenter. We therefore conclude that the density map analysis described in this study, based on lineament extraction, is valid and can serve as a primary analysis tool for future earthquake research.
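The VRD computation as described, each grid cell's density divided by the map's maximum density, is simple to sketch. The density map below is hypothetical:

```python
import numpy as np

def value_of_relative_density(density_grid):
    """VRD: each cell's lineament density divided by the map's maximum density."""
    return density_grid / density_grid.max()

# Hypothetical 3x3 lineament-frequency density map (e.g. lineaments per cell).
grid = np.array([[2.0, 4.0, 1.0],
                 [8.0, 10.0, 5.0],
                 [3.0, 6.0, 2.0]])
vrd = value_of_relative_density(grid)
print(vrd)
```

Normalizing by the map maximum is what lets VRD values from frequency, intersection, and length maps, which have different units, be compared on one 0-1 scale.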

Analysis of 3D Accuracy According to Determination of Calibration Initial Value in Close-Range Digital Photogrammetry Using VLBI Antenna and Mobile Phone Camera (VLBI 안테나와 모바일폰 카메라를 활용한 근접수치사진측량의 캘리브레이션 초기값 결정에 따른 3차원 정확도 분석)

  • Kim, Hyuk Gi;Yun, Hong Sik;Cho, Jae Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.33 no.1
    • /
    • pp.31-43
    • /
    • 2015
  • This study conducted camera calibration on the VLBI antenna at the Space Geodetic Observation Center in Sejong City with a low-cost digital camera embedded in a mobile phone, in order to determine three-dimensional position coordinates of the VLBI antenna from stereo images. Initial values for the camera calibration were obtained with the Direct Linear Transformation (DLT) algorithm and with the commercial digital photogrammetry system PhotoModeler Scanner® ver. 6.0, respectively, and the resulting calibrations were refined and compared through a bundle adjustment with the nonlinear collinearity condition equations. Although the two methods produced significantly different initial values, the final calibrations were consistent whichever method supplied the initial value. Furthermore, three-dimensional coordinates of feature points on the VLBI antenna were calculated from each calibration and compared with reference coordinates obtained from a total station. Both methods yielded the same standard deviations of X = 0.004 ± 0.010 m, Y = 0.001 ± 0.015 m, and Z = 0.009 ± 0.017 m, a centimeter-level accuracy. From this result, we conclude that a mobile phone camera opens the way for a variety of image processing studies, such as 3D reconstruction from captured images.

Analysis of Mass Transport in PEMFC GDL (연료전지 가스확산층(GDL) 내의 물질거동에 대한 연구)

  • Jeong, Hee-Seok;Kim, Jeong-Ik;Lee, Seong-Ho;Lim, Cheol-Ho;Ahn, Byung-Ki;Kim, Charn-Jung
    • Transactions of the Korean Society of Mechanical Engineers B
    • /
    • v.36 no.10
    • /
    • pp.979-988
    • /
    • 2012
  • The 3D structure of the GDL of a fuel cell was measured using high-resolution X-ray tomography in order to study material transport in the GDL. A computational algorithm was developed to remove noise from the 3D image and to construct 3D elements representing the carbon fibers of the GDL, which were used for both structural and fluid analyses. Changes in the pore structure of the GDL under various compression levels were calculated, and the corresponding volume meshes were generated to evaluate the anisotropic gas permeability within the GDL as a function of compression. Furthermore, the transport of liquid water and reactant gases was simulated using the volume of fluid (VOF) and pore-network model (PNM) techniques, and the simulation results for liquid water transport in the GDL were validated by analogous experiments visualizing fluid diffusion in porous media. Through this research, a procedure for simulating material transport in a deformed GDL has been developed; this will help in optimizing the clamping force of fuel cell stacks as well as in determining design parameters of the GDL, such as thickness and porosity.
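One small piece of such a pipeline can be sketched: porosity measured from a segmented tomography volume, and its change under compression. Both the random volume and the solid-volume-conservation formula below are illustrative assumptions, not the paper's model:

```python
import numpy as np

# Hypothetical binarized tomography volume: 1 = carbon fiber, 0 = pore.
rng = np.random.default_rng(1)
volume = (rng.random((40, 40, 20)) < 0.22).astype(np.uint8)  # ~78% pore space

def porosity(binary_volume):
    """Pore fraction of a segmented volume (pores are the zero voxels)."""
    return 1.0 - binary_volume.mean()

def compressed_porosity(eps0, strain):
    """Porosity after uniaxial compression, assuming the solid (fiber) volume
    is conserved while total thickness shrinks: eps = 1 - (1 - eps0)/(1 - strain)."""
    return 1.0 - (1.0 - eps0) / (1.0 - strain)

eps0 = porosity(volume)
print(round(eps0, 3), round(compressed_porosity(eps0, 0.3), 3))
```

This is the kind of geometric quantity that feeds the permeability and VOF/PNM transport calculations the abstract describes.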

Development of 3D Mapping System for Web Visualization of Geo-spatial Information Collected from Disaster Field Investigation (재난현장조사 공간정보 웹 가시화를 위한 3차원 맵핑시스템 개발)

  • Kim, Seongsam;Nho, Hyunju;Shin, Dongyoon;Lee, Junwoo;Kim, Hyunju
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.5_4
    • /
    • pp.1195-1207
    • /
    • 2020
  • With the development of GeoWeb technology, 2D/3D spatial information services over the web have also been used increasingly in disaster management applications. This paper proposes a web-based 3D geo-spatial information mapping platform that visualizes the various spatial information collected at a disaster site in a web environment. It presents a mapping service for the various types of 2D/3D spatial data and large LiDAR point clouds collected at disaster and accident sites, built on HTML5/WebGL, standard web technologies, and open source software. First, the collected 2D field survey data are stored in a spatial DB using the open-source PostGIS and rendered in the web environment through the WMS service of GeoServer. Second, to render large 3D point clouds efficiently in a web environment, the Potree algorithm is applied, which organizes the point cloud into 2D tiles using a multi-resolution octree structure. Finally, an OpenLayers3-based 3D web mapping pilot system was developed for web visualization of 2D/3D spatial information, implementing basic and applied functions for controlling and measuring 3D maps through a graphical user interface (GUI). In further research, overlaying various 2D survey data and spatial imagery of disaster sites on the web-based 3D geo-spatial information system is expected to support scientific investigation and analysis of disaster accidents.
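The multi-resolution octree idea behind Potree, where each node keeps a small point budget and spills the rest to its eight children so that shallow levels act as low-detail previews, can be sketched as follows. The class and capacity here are illustrative, not Potree's actual data structure:

```python
class OctreeNode:
    """Minimal octree: each node stores at most `capacity` points; overflow
    is pushed into 8 children, giving coarse-to-fine levels of detail."""

    def __init__(self, center, half, capacity=4):
        self.center, self.half, self.capacity = center, half, capacity
        self.points, self.children = [], None

    def insert(self, p):
        if self.children is None and len(self.points) < self.capacity:
            self.points.append(p)   # stays at this (coarser) level of detail
            return
        if self.children is None:
            self._split()
        self.children[self._index(p)].insert(p)

    def _index(self, p):
        # 3-bit child index from the point's octant relative to the center.
        cx, cy, cz = self.center
        return (p[0] > cx) | ((p[1] > cy) << 1) | ((p[2] > cz) << 2)

    def _split(self):
        h = self.half / 2
        cx, cy, cz = self.center
        self.children = [OctreeNode((cx + (h if i & 1 else -h),
                                     cy + (h if i & 2 else -h),
                                     cz + (h if i & 4 else -h)), h, self.capacity)
                         for i in range(8)]

root = OctreeNode((0.0, 0.0, 0.0), 10.0, capacity=2)
for p in [(1, 1, 1), (-1, -1, -1), (2, 2, 2), (3, 1, 2), (-2, 3, -1)]:
    root.insert(p)
print(root.points)
```

A viewer streams only the nodes whose detail level matches the current view, which is what makes very large point clouds renderable in a browser.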

Quality Assurance of Multileaf Collimator Using Electronic Portal Imaging (전자포탈영상을 이용한 다엽시준기의 정도관리)

  • ;Jason W Sohn
    • Progress in Medical Physics
    • /
    • v.14 no.3
    • /
    • pp.151-160
    • /
    • 2003
  • The application of more complex radiotherapy techniques using multileaf collimation (MLC), such as 3D conformal radiation therapy and intensity-modulated radiation therapy (IMRT), has increased the importance of verifying leaf position and motion. Because of their reliability and empirical robustness, films have traditionally been used for quality assurance (QA) of the MLC. However, the ease of use of electronic portal imaging devices (EPIDs) and their ability to provide digital data have drawn attention to them as an alternative to film for routine QA, despite concerns about their clinical feasibility, efficacy, and cost-to-benefit ratio. In this study, we developed a method for daily QA of the MLC using electronic portal images (EPIs). The suitability of the EPID for routine QA was verified by comparing portal films, obtained simultaneously during irradiation, against the known prescription input to the MLC controller. Two specially designed test patterns of dynamic MLC were used for image acquisition. Quantitative off-line analysis using an edge detection algorithm enhanced the verification procedure, complementing on-line qualitative visual assessment. In conclusion, EPIs were sufficient for daily QA of MLC leaf position, with an accuracy matching that of portal films.
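The edge detection used to locate leaf positions can be illustrated on a one-dimensional intensity profile: the leaf edge is taken at the steepest gradient. The profile values and this simple gradient criterion are stand-ins, since the abstract does not specify the detector:

```python
import numpy as np

def leaf_edge_position(profile):
    """Locate a leaf edge in a portal-image intensity profile as the pixel
    with the steepest intensity gradient."""
    grad = np.gradient(profile.astype(float))  # central differences
    return int(np.abs(grad).argmax())

# Hypothetical profile across a leaf: open field (high signal), then a sharp
# drop under the leaf (low signal).
profile = np.array([100, 101, 99, 100, 98, 55, 12, 10, 11, 9, 10])
edge = leaf_edge_position(profile)
print(edge)
```

Comparing the detected edge pixel against the prescribed leaf position, per leaf and per test pattern, turns the visual check into a quantitative daily QA measurement.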


A Framework of Recognition and Tracking for Underwater Objects based on Sonar Images : Part 2. Design and Implementation of Realtime Framework using Probabilistic Candidate Selection (소나 영상 기반의 수중 물체 인식과 추종을 위한 구조 : Part 2. 확률적 후보 선택을 통한 실시간 프레임워크의 설계 및 구현)

  • Lee, Yeongjun;Kim, Tae Gyun;Lee, Jihong;Choi, Hyun-Taek
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.3
    • /
    • pp.164-173
    • /
    • 2014
  • In underwater robotics, vision is a key element of recognition in underwater environments. Due to turbidity, however, an underwater optical camera is rarely usable. An underwater imaging sonar, the alternative, delivers low-quality sonar images that are not stable or accurate enough for natural objects to be found by image processing. For this reason, artificial landmarks based on the characteristics of ultrasonic waves, and a method of recognizing them by a shape matrix transformation, were proposed and validated in Part 1. That method, however, does not work properly over undulating and dynamically noisy sea bottoms. To solve this, we propose a framework comprising, in sequence, a likelihood-candidate selection phase, a final-candidate selection phase, a recognition phase, and a tracking phase over image sequences; within it, a particle filter based selection mechanism to eliminate fake candidates and a mean shift based tracking algorithm are also proposed. All four steps run in parallel and in real time. The proposed framework is flexible, allowing internal algorithms to be added or modified. A pool test and a sea trial were carried out to prove the performance, and the experimental results are analyzed in detail. The information obtained in the tracking phase, such as relative distance and bearing, is expected to be used for the control and navigation of underwater robots.
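The likelihood-based elimination of fake candidates can be sketched with the systematic-resampling step of a particle filter: low-likelihood candidates rarely survive resampling. The candidates and likelihoods below are hypothetical, and this is a generic resampler, not the authors' framework:

```python
import random

def resample(particles, weights):
    """Systematic resampling: draw n survivors with probability proportional
    to weight, using evenly spaced thresholds from one random start."""
    n = len(particles)
    total = sum(weights)
    step = total / n
    start = random.uniform(0, step)
    out, cum, i = [], weights[0], 0
    for k in range(n):
        u = start + k * step
        while i < n - 1 and cum < u:
            i += 1
            cum += weights[i]
        out.append(particles[i])
    return out

random.seed(3)
# Hypothetical (x, y) candidate positions with likelihoods from sonar-image
# matching; the two strong candidates dominate after resampling.
candidates = [(10, 5), (40, 7), (41, 8), (90, 2)]
likelihood = [0.05, 0.45, 0.45, 0.05]
survivors = resample(candidates, likelihood)
print(survivors)
```

Repeating this predict-weight-resample cycle over the image sequence concentrates the candidate set on the true landmark, which the mean shift tracker then follows frame to frame.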