• Title/Summary/Keyword: Iterative technique


Comparison of Effectiveness about Image Quality and Scan Time According to Reconstruction Method in Bone SPECT (영상 재구성 방법에 따른 Bone SPECT 영상의 질과 검사시간에 대한 실효성 비교)

  • Kim, Woo-Hyun;Jung, Woo-Young;Lee, Ju-Young;Ryu, Jae-Kwang
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.13 no.1
    • /
    • pp.9-14
    • /
    • 2009
  • Purpose: In nuclear medicine today, many studies and efforts are being made to reduce scan time, as well as the waiting time between injection of a radiopharmaceutical and the examination. Several methods are used clinically, such as developing new radiopharmaceuticals that are absorbed into target organs more quickly and shortening acquisition time by increasing the number of gamma camera detectors. Each equipment manufacturer has also improved its image processing techniques to reduce scan time. In this paper, we analyzed the differences in image quality between the commercialized, clinically applied FBP and 3D OSEM reconstruction methods and the Astonish reconstruction method (a fast iterative reconstruction method from Philips), as well as the effect of scan time on image quality. Materials and Methods: We investigated 32 patients who underwent Bone SPECT from June to July 2008 at the Department of Nuclear Medicine, ASAN Medical Center, Seoul. Images were acquired at 40 sec/frame and 20 sec/frame on a Philips PRECEDENCE 16 gamma camera and reconstructed with the Astonish, 3D OSEM, and FBP methods. For qualitative analysis, a blinded test was performed in which clinical interpreting physicians reviewed all images from each reconstruction method. For quantitative analysis, we measured the target to non-target ratio by drawing ROIs centered on the lesions; each image was analyzed with an ROI of the same location and size. Results: In the qualitative analysis, there was no significant difference in image quality with acquisition time. In the quantitative analysis, images reconstructed with the Astonish method showed good quality, with better sharpness and clearer distinction between lesions and surrounding tissue. The mean and standard deviation of the target to non-target ratio for the 40 sec/frame and 20 sec/frame images were: Astonish (40 sec: 13.91±5.62, 20 sec: 13.88±5.92), 3D OSEM (40 sec: 10.60±3.55, 20 sec: 10.55±3.64), and FBP (40 sec: 8.30±4.44, 20 sec: 8.19±4.20). Comparing the ratios from the 20 sec and 40 sec images, Astonish (t=0.16, p=0.872), 3D OSEM (t=0.51, p=0.610), and FBP (t=0.73, p=0.469) all showed no statistically significant difference in image quality with acquisition time, although with FBP some individual images did differ between 40 sec/frame and 20 sec/frame due to various factors. Conclusions: In the effort to reduce nuclear medicine scan times, hardware development has slowed while software has advanced at a relentless pace. Advances in computer hardware have reduced image reconstruction time, and expanded storage capacity now enables iterative methods that previously could not be performed because of technical limits. We observed that as image processing techniques develop, scan time can be reduced while image quality remains at a similar level. Maintaining exam quality while reducing scan time lessens patient discomfort and perceived waiting time, improves the accessibility of nuclear medicine exams, and provides better service to patients and to the clinical physicians who order the exams, thereby improving the standing of nuclear medicine departments.
Concurrent Imaging: a new function that lets each image acquisition parameter be set individually, so that images with different parameters can be acquired simultaneously in a single examination.
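
The quantitative analysis above rests on two pieces of machinery: a target to non-target (T/NT) ratio computed from same-sized ROIs, and a paired t-test across acquisition times. A minimal Python sketch, using illustrative values rather than the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# One illustrative case: T/NT ratio from same-sized lesion and background
# ROIs (uniform 5x5 pixel boxes of Poisson counts).
lesion_roi = rng.poisson(140, (5, 5))
background_roi = rng.poisson(10, (5, 5))
t_nt = lesion_roi.mean() / background_roi.mean()
print(f"target to non-target ratio = {t_nt:.2f}")

# Paired comparison of per-patient ratios at 40 s/frame vs 20 s/frame
# (synthetic values; 32 patients as in the study).
ratios_40s = rng.normal(13.9, 5.6, size=32)
ratios_20s = ratios_40s + rng.normal(0.0, 0.5, size=32)
t, p = stats.ttest_rel(ratios_40s, ratios_20s)
print(f"paired t = {t:.2f}, p = {p:.3f}")  # p > 0.05 -> no significant difference
```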


Increase of Tc-99m RBC SPECT Sensitivity for Small Liver Hemangioma using Ordered Subset Expectation Maximization Technique (Tc-99m RBC SPECT에서 Ordered Subset Expectation Maximization 기법을 이용한 작은 간 혈관종 진단 예민도의 향상)

  • Jeon, Tae-Joo;Bong, Jung-Kyun;Kim, Hee-Joung;Kim, Myung-Jin;Lee, Jong-Doo
    • The Korean Journal of Nuclear Medicine
    • /
    • v.36 no.6
    • /
    • pp.344-356
    • /
    • 2002
  • Purpose: RBC blood pool SPECT has been used to diagnose focal liver lesions such as hemangioma owing to its high specificity. However, low spatial resolution is a major limitation of this modality. Recently, ordered subset expectation maximization (OSEM) has been introduced to obtain tomographic images for clinical application. We compared this modified iterative reconstruction method, OSEM, with conventional filtered back projection (FBP) for imaging of liver hemangioma. Materials and Methods: Sixty-four projections were acquired with a dual-head gamma camera for 28 lesions in 24 patients with cavernous hemangioma of the liver, and the raw data were transferred to a Linux-based personal computer. After converting the header files to Interfile format, OSEM was performed under various combinations of subsets (1, 2, 4, 8, 16, and 32) and iteration numbers (1, 2, 4, 8, and 16) to find the best setting for liver imaging, which in our investigation was 4 iterations and 16 subsets. All images were then processed by both FBP and OSEM, and three experts reviewed them without any clinical information. Results: On blinded review of the 28 lesions, OSEM images showed image quality at least equal to or better than that of FBP in nearly all cases. Although there was no significant difference in the detection of large lesions over 3 cm, 5 lesions of 1.5 to 3 cm in diameter were detected by OSEM only. Both techniques failed to depict 4 small lesions under 1.5 cm. Conclusion: OSEM provided better contrast and definition in the depiction of liver hemangioma as well as higher sensitivity in the detection of small lesions. Furthermore, this reconstruction method does not require a high-performance computer system or long reconstruction time; OSEM therefore appears to be a good method for RBC blood pool SPECT in the diagnosis of liver hemangioma.
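
For readers unfamiliar with the reconstruction method compared here, the OSEM update is compact enough to sketch. The following is a minimal illustration assuming a generic linear projection model y = A x; a real SPECT reconstruction would also model attenuation, scatter, and collimator response, all omitted here:

```python
import numpy as np

def osem(y, A, n_subsets=16, n_iters=4, eps=1e-12):
    """Ordered Subset Expectation Maximization.
    y: measured projections (m,); A: system matrix (m, n)."""
    m, n = A.shape
    x = np.ones(n)                          # non-negative initial estimate
    subsets = np.array_split(np.arange(m), n_subsets)
    for _ in range(n_iters):                # study's best setting: 4 iters, 16 subsets
        for idx in subsets:
            As, ys = A[idx], y[idx]
            ratio = ys / (As @ x + eps)     # measured / modeled projections
            # multiplicative update, normalized by the subset sensitivity
            x *= (As.T @ ratio) / (As.T @ np.ones(len(idx)) + eps)
    return x

# Tiny synthetic check: recover a random non-negative image.
rng = np.random.default_rng(1)
A = rng.random((64, 32))
x_true = rng.random(32)
x_hat = osem(A @ x_true, A)
print(f"relative error = {np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true):.3f}")
```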

Evaluating Reverse Logistics Networks with Centralized Centers : Hybrid Genetic Algorithm Approach (집중형센터를 가진 역물류네트워크 평가 : 혼합형 유전알고리즘 접근법)

  • Yun, YoungSu
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.55-79
    • /
    • 2013
  • In this paper, we propose a hybrid genetic algorithm (HGA) approach to effectively solve the reverse logistics network with centralized centers (RLNCC). In the proposed HGA approach, a genetic algorithm (GA) is used as the main algorithm. For implementing the GA, a new bit-string representation scheme using 0 and 1 values is suggested, which makes it easy to construct the initial GA population. As genetic operators, the elitist strategy in enlarged sampling space developed by Gen and Chang (1997), a new two-point crossover operator, and a new random mutation operator are used for selection, crossover, and mutation, respectively. For the hybrid concept, the iterative hill climbing method (IHCM) developed by Michalewicz (1994) is inserted into the HGA search loop. The IHCM is a local search technique that precisely explores the space to which the GA search has converged. The RLNCC is composed of collection centers, remanufacturing centers, redistribution centers, and secondary markets; of these, exactly one collection center, remanufacturing center, redistribution center, and secondary market should be opened in the network. Some assumptions are made to implement the RLNCC effectively. The RLNCC is represented by a mixed integer programming (MIP) model using indexes, parameters, and decision variables. The objective function of the MIP model minimizes the total cost, which consists of transportation, fixed, and handling costs. The transportation cost arises from transporting the returned products between the centers and secondary markets. The fixed cost is determined by the opening or closing decision at each center and secondary market: for example, if there are three collection centers (with opening costs of 10.5, 12.1, and 8.9 for collection centers 1, 2, and 3, respectively) and only collection center 1 is opened, the fixed cost is 10.5. The handling cost is the cost of treating the products returned from customers at each center and secondary market opened at each RLNCC stage. The RLNCC is solved by the proposed HGA approach. In numerical experiments, the proposed HGA is compared with a conventional competing approach, the GA approach of Yun (2013), using various measures of performance; the latter has no local search technique such as the IHCM of the proposed HGA approach. CPU time, optimal solution, and optimal setting are used as performance measures. Two types of RLNCC with different numbers of customers, collection centers, remanufacturing centers, redistribution centers, and secondary markets are presented for comparing the two approaches. The MIP models are programmed in Visual Basic 6.0, and the computing environment is an IBM-compatible PC with a 3.06 GHz CPU and 1 GB RAM running Windows XP. The parameters of the HGA and GA approaches are 10,000 generations in total, a population size of 20, a crossover rate of 0.5, a mutation rate of 0.1, and a search range of 2.0 for the IHCM. A total of 20 runs are made to eliminate the randomness of the searches. With performance comparisons, network representations by opening/closing decisions, and convergence plots for the two types of RLNCC, the experimental results show that the HGA obtains significantly better optimal solutions than the GA, although the GA is slightly faster in CPU time. The proposed HGA approach thus proves more efficient than the conventional GA approach on both types of RLNCC, since the former combines a GA search process with an additional local search process, while the latter uses a GA search process alone. In future work, much larger RLNCCs will be tested for the robustness of our approach.
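
The hybrid structure described above, a GA main loop with a hill-climbing local search inserted into it, can be sketched generically. The fitness function, problem size, and operator details below are placeholders, not the paper's MIP model:

```python
import random

def fitness(bits):                    # placeholder objective (maximize)
    return sum(bits)

def two_point_crossover(a, b):
    i, j = sorted(random.sample(range(len(a)), 2))
    return a[:i] + b[i:j] + a[j:], b[:i] + a[i:j] + b[j:]

def mutate(bits, rate=0.1):
    return [1 - g if random.random() < rate else g for g in bits]

def hill_climb(bits, tries=20):       # stand-in for the IHCM local search
    best = bits[:]
    for _ in range(tries):
        cand = best[:]
        k = random.randrange(len(cand))
        cand[k] = 1 - cand[k]         # explore the neighborhood of the GA solution
        if fitness(cand) > fitness(best):
            best = cand
    return best

def hga(n_bits=30, pop_size=20, generations=100, cx_rate=0.5):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        children = [pop[0][:]]        # elitist strategy: carry the best forward
        while len(children) < pop_size:
            p1, p2 = random.sample(pop[:10], 2)
            c1, c2 = (two_point_crossover(p1, p2) if random.random() < cx_rate
                      else (p1[:], p2[:]))
            children += [mutate(c1), mutate(c2)]
        pop = children[:pop_size]
        pop[0] = hill_climb(pop[0])   # hybrid step inside the GA loop
    return max(pop, key=fitness)

print(fitness(hga()))
```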

Standard Penetration Test Performance in Sandy Deposits (모래지반에서 표준관입시험에 따른 관입거동)

  • Dung, N.T.;Chung, Sung-Gyo
    • Journal of the Korean Geotechnical Society
    • /
    • v.29 no.10
    • /
    • pp.39-48
    • /
    • 2013
  • This paper presents an equation to describe the penetration behavior during the standard penetration test (SPT) in sandy deposits. An energy balance approach is adopted, and the driving mechanism of the SPT sampler is conceptually modeled as that of a miniature open-ended steel pipe pile driven into sand. The equation involves three sets of input parameters, including hyperbolic parameters (m and λ) that are difficult to determine. An iterative technique is therefore applied to find the optimized values of m and λ from three measured values in routine SPT data. A well-documented record verifies that the simulated penetration curves are in good agreement with the measured ones. At a given depth, an increase in m results in a decrease in λ and an increase in the curvature of the penetration curve as well as in the simulated N-value. Generally, the predicted penetration curve becomes nearly straight beyond the seating drive zone, and this is more pronounced as soil density increases. The simulation method can therefore be applied to extrapolating prematurely terminated test data, i.e., to determining the N-value equivalent to a 30 cm penetration. A simple linear equation is considered for obtaining similar results.
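
The abstract does not reproduce the penetration equation itself, so the following is only a generic illustration of the iterative step it describes: fitting two hyperbolic parameters (m, λ) to three measured points from a routine SPT record by least squares. The hyperbolic form and the numbers below are assumptions, not the paper's model:

```python
import numpy as np
from scipy.optimize import least_squares

def penetration(blows, m, lam):
    """Hypothetical hyperbolic penetration curve: depth vs. blow count."""
    return blows / (m + lam * blows)

blows = np.array([5.0, 15.0, 30.0])   # three measured (blow count, depth) pairs
depth = np.array([7.0, 18.0, 27.0])   # illustrative depths, in cm

res = least_squares(
    lambda p: penetration(blows, *p) - depth,
    x0=[0.5, 0.01],                   # initial guess for (m, lam)
    bounds=([1e-6, 1e-6], [np.inf, np.inf]),
)
m_opt, lam_opt = res.x
print(f"m = {m_opt:.4f}, lambda = {lam_opt:.4f}")
# The fitted curve can then be extrapolated to the blow count at a full
# 30 cm penetration when a test was terminated prematurely.
```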

Image Restoration and Segmentation for PAN-sharpened High Multispectral Imagery (PAN-SHARPENED 고해상도 다중 분광 자료의 영상 복원과 분할)

  • Lee, Sanghoon
    • Korean Journal of Remote Sensing
    • /
    • v.33 no.6_1
    • /
    • pp.1003-1017
    • /
    • 2017
  • Multispectral image data of high spatial resolution is required to obtain accurate information on the ground surface, but multispectral data has lower resolution than panchromatic data. The PAN-sharpening fusion technique produces multispectral data at the higher resolution of the panchromatic image. Recently, the object-based approach has been applied to high spatial resolution data more often than the conventional pixel-based one. Object-based image analysis requires image segmentation, which produces objects as groups of pixels. Image segmentation can be achieved effectively by a process that merges two neighboring regions step by step in a regional adjacency graph (RAG). In satellite remote sensing, the operational environment of the satellite sensor degrades the image during acquisition. This degradation increases the variation of pixel values within homogeneous areas and thereby deteriorates the accuracy of image segmentation. An iterative approach that reduces the difference between the values of two neighboring pixels in the same area is employed to alleviate this variation. The size of the segmented regions is associated with the quality of the segmentation and is decided by a stopping rule in the merging process. In this study, the image restoration and segmentation were evaluated quantitatively using simulation data and were also applied to three high-resolution PAN-sharpened multispectral images: DubaiSat-2 data with 1 m panchromatic resolution over LA, USA, and KOMPSAT-3 data with 0.7 m panchromatic resolution over Daejeon and Chungcheongnam-do in the Korean peninsula. The experimental results imply that the proposed method can improve analytical accuracy in remote sensing applications of high-resolution PAN-sharpened multispectral imagery.
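
A hedged sketch of the merging step: starting from an over-segmentation, repeatedly merge the pair of adjacent regions with the most similar mean intensity until a stopping rule (here a simple region-count limit, not the paper's rule) is met. This stand-alone version only illustrates the idea; a production implementation would use a proper RAG library such as scikit-image's:

```python
import numpy as np

def merge_regions(labels, image, max_regions=10):
    """labels: initial over-segmentation (H, W) ints; image: (H, W) floats."""
    means = {r: image[labels == r].mean() for r in np.unique(labels)}

    def adjacent_pairs():
        pairs, (h, w) = set(), labels.shape
        for y in range(h):
            for x in range(w):
                for dy, dx in ((0, 1), (1, 0)):   # 4-connectivity
                    if y + dy < h and x + dx < w:
                        a, b = labels[y, x], labels[y + dy, x + dx]
                        if a != b:
                            pairs.add((min(a, b), max(a, b)))
        return pairs

    while len(means) > max_regions:
        # merge the adjacent pair with the smallest mean-intensity difference
        a, b = min(adjacent_pairs(), key=lambda p: abs(means[p[0]] - means[p[1]]))
        labels[labels == b] = a
        means[a] = image[labels == a].mean()
        del means[b]
    return labels

rng = np.random.default_rng(2)
img = rng.random((16, 16))
init = np.kron(np.arange(16).reshape(4, 4), np.ones((4, 4), dtype=int))  # 4x4 blocks
seg = merge_regions(init.copy(), img)
print(len(np.unique(seg)), "regions after merging")
```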

A Metrics-Based Approach to the Reorganization of Class Hierarchy Structures (클래스계층구조의 품질평가척도를 기반으로 하는 재구성기법)

  • Hwang, Sun-Hyung;Yang, Hea-Sool;Hwang, Young-Sub
    • The KIPS Transactions:PartD
    • /
    • v.10D no.5
    • /
    • pp.859-872
    • /
    • 2003
  • Class hierarchies often constitute the backbone of object-oriented software, so their quality is quite crucial. Building class hierarchies of good quality is a very important and common task in object-oriented software development, but such hierarchies are not easy to build. Moreover, a class hierarchy under construction is frequently restructured and refined until it satisfies the requirements in an iterative and incremental development lifecycle. There has therefore been renewed interest in methodologies and tools that assist object-oriented developers in this task. In this paper, we define a set of quantitative metrics that capture a rough estimate of the complexity of a class hierarchy structure. In addition, we suggest a set of algorithms that transform an original class hierarchy into a reorganized one based on the proposed metrics. Furthermore, we prove that each algorithm is "object-preserving": the set of objects is never changed by applying the algorithm to a class hierarchy. The technique presented in this paper can be used as a guideline for the construction, restructuring, and refinement of class hierarchies, and the proposed metrics-based algorithms can serve developers as a useful instrument for object-oriented software development.
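
The abstract names but does not define its metrics, so as a purely illustrative sketch, here are two classic class-hierarchy measures (depth of inheritance tree and number of children) computed over a hypothetical parent map:

```python
from collections import defaultdict

# Hypothetical class hierarchy: child -> parent (None marks the root).
hierarchy = {
    "Object": None,
    "Shape": "Object",
    "Circle": "Shape",
    "Square": "Shape",
    "RoundedSquare": "Square",
}

def dit(cls):
    """Depth of inheritance tree: number of ancestors up to the root."""
    depth = 0
    while hierarchy[cls] is not None:
        cls = hierarchy[cls]
        depth += 1
    return depth

# Number of children (direct subclasses) per class.
children = defaultdict(list)
for child, parent in hierarchy.items():
    if parent is not None:
        children[parent].append(child)

for c in hierarchy:
    print(f"{c}: DIT = {dit(c)}, NOC = {len(children[c])}")
```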

Turbid water atmospheric correction for GOCI: Modification of MUMM algorithm (GOCI영상의 탁한 해역 대기보정: MUMM 알고리즘 개선)

  • Lee, Boram;Ahn, Jae Hyun;Park, Young-Je;Kim, Sang-Wan
    • Korean Journal of Remote Sensing
    • /
    • v.29 no.2
    • /
    • pp.173-182
    • /
    • 2013
  • The early Sea-viewing Wide Field-of-view Sensor (SeaWiFS) atmospheric correction algorithm, which is the basis of the atmospheric correction algorithm for the Geostationary Ocean Color Imager (GOCI), assumes that the water-leaving radiance is negligible at near-infrared (NIR) wavelengths. For this reason, all of the satellite-measured radiance at the NIR wavelengths is assigned to aerosols. However, this assumption causes underestimation of the water-leaving radiance when applied to turbid Case-2 waters. To overcome this problem, the Management Unit of the North Sea Mathematical Models (MUMM) atmospheric correction algorithm was developed for turbid waters. The MUMM algorithm introduces a new parameter α, representing the ratio of the water-leaving reflectances at the two NIR wavelengths; α is calculated by a statistical method and assumed to be constant throughout the study area. Using this algorithm, comparatively accurate water-leaving radiances can be obtained in moderately turbid waters where the NIR water-leaving reflectance is less than approximately 0.01. However, the algorithm still underestimates the water-leaving radiance in extremely turbid water, because α changes with the concentration of suspended particles. In this study, we modified the MUMM algorithm to calculate an appropriate value of α using an iterative technique. As a result, the accuracy of the water-leaving reflectance was significantly improved: the root mean square error (RMSE) of the modified MUMM algorithm was 0.002, compared with 0.0048 for the original MUMM algorithm.
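
A hedged sketch of the modification described above: rather than fixing α, re-estimate it iteratively until the aerosol/water decomposition stabilizes. The empirical α-turbidity relation below is a made-up stand-in for the paper's statistics, and the radiative transfer is reduced to a toy two-band linear system:

```python
def alpha_model(rho_w865):
    # Hypothetical empirical relation (not the paper's): alpha falls from
    # ~1.9 toward ~1.5 as turbidity (proxied by rho_w865) increases.
    return 1.5 + 0.4 / (1.0 + 50.0 * max(rho_w865, 0.0))

def split_nir(rho_765, rho_865, epsilon=1.0, n_iter=50, tol=1e-8):
    """Decompose Rayleigh-corrected NIR reflectance into aerosol and
    water-leaving parts by iterating on alpha = rho_w(765)/rho_w(865).
    epsilon is the assumed aerosol ratio rho_a(765)/rho_a(865)."""
    alpha = alpha_model(0.0)          # start from the clear-water value
    rho_w865 = 0.0
    for _ in range(n_iter):
        # Solve the 2x2 linear system:
        #   rho_765 = epsilon * rho_a865 + alpha * rho_w865
        #   rho_865 =           rho_a865 +         rho_w865
        rho_w865 = (rho_765 - epsilon * rho_865) / (alpha - epsilon)
        new_alpha = alpha_model(rho_w865)
        if abs(new_alpha - alpha) < tol:
            break
        alpha = new_alpha             # update alpha from the turbidity estimate
    return max(rho_w865, 0.0), alpha

print(split_nir(rho_765=0.020, rho_865=0.012))
```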

FEM-based Seismic Reliability Analysis of Real Structural Systems (실제 구조계의 유한요소법에 기초한 지진 신뢰성해석)

  • Huh Jung-Won;Haldar Achintya
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.19 no.2 s.72
    • /
    • pp.171-185
    • /
    • 2006
  • A sophisticated reliability analysis method is proposed to evaluate the reliability of real, complicated, nonlinear dynamic structural systems excited by short-duration dynamic loadings such as earthquake motions, by intelligently integrating the response surface method, the finite element method, the first-order reliability method, and an iterative linear interpolation scheme. The method explicitly considers all major sources of nonlinearity and uncertainty in the load- and resistance-related random variables. A unique feature of the technique is that the seismic loading is applied in the time domain, providing an alternative to the classical random vibration approach. The four-parameter Richard model is used to represent the flexibility of connections in real steel frames, and uncertainties in the Richard parameters are also incorporated in the algorithm. The laterally flexible steel frame is then reinforced with reinforced concrete shear walls, whose stiffness degradation after cracking is also considered. The applicability of the method to real structures is demonstrated with three examples: a laterally flexible steel frame with fully restrained connections, the same frame with partially restrained connections of different rigidities, and a steel frame reinforced with concrete shear walls.
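
The full method couples FEM response surfaces with FORM, which is beyond a short sketch. As a self-contained illustration of the reliability-index search alone, here is the classic Hasofer-Lind/Rackwitz-Fiessler (HL-RF) iteration of the first-order reliability method for an explicit limit-state function in standard normal space; the limit state below is a placeholder, not the paper's structural model:

```python
import numpy as np

def g(u):                  # placeholder limit state (failure when g < 0)
    return 3.0 - u[0] - 0.5 * u[1] ** 2

def grad_g(u, h=1e-6):     # central-difference gradient
    return np.array([(g(u + h * e) - g(u - h * e)) / (2 * h)
                     for e in np.eye(len(u))])

def form_hlrf(u0, n_iter=50, tol=1e-8):
    u = np.asarray(u0, dtype=float)
    for _ in range(n_iter):
        grad = grad_g(u)
        # HL-RF update: project onto the linearized limit-state surface
        u_new = (grad @ u - g(u)) / (grad @ grad) * grad
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    beta = np.linalg.norm(u)   # reliability index
    return beta, u             # u is the most probable failure point

beta, u_star = form_hlrf([1.0, 1.0])
print(f"beta = {beta:.3f}, u* = {u_star}")
```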

Implementation of Stopping Criterion Algorithm using Variance Values of LLR in Turbo Code (터보부호에서 LLR 분산값을 이용한 반복중단 알고리즘 구현)

  • Jeong Dae-Ho;Kim Hwan-Yong
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.43 no.9 s.351
    • /
    • pp.149-157
    • /
    • 2006
  • Turbo codes, a kind of error correction coding technique, are used in digital mobile communication systems. As the number of iterations increases, a turbo code achieves remarkable BER performance over an AWGN channel. However, beyond a certain point in several channel environments, further iterations yield very little improvement while adding delay and computation in proportion to the number of iterations. To solve this problem, an efficient criterion is needed to stop the iteration process and prevent unnecessary delay and computation. This paper proposes an efficient and simple criterion for stopping the iteration process in turbo decoding. Using the variance of the LLR values in the turbo decoder, the proposed algorithm can greatly reduce the average number of iterations without BER performance degradation in all SNR regions. Simulation results show that the average number of iterations in the upper SNR region is reduced by about 34.66%~41.33% compared to the method using the variance of the extrinsic information, and in the lower SNR region by about 13.93%~14.45% compared to the CE algorithm and about 13.23%~14.26% compared to the SDR algorithm.
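
A hedged sketch of the stopping rule described above: track the variance of the decoder's LLR values after each iteration and stop early once it crosses a threshold. The decoder below is a stub that merely mimics growing LLR magnitudes, and the threshold is illustrative; the paper's exact decision rule is not reproduced in the abstract:

```python
import numpy as np

def decode_iteration(llr, rng):
    """Stub for one turbo iteration: in a real decoder the LLR magnitudes
    (and hence their variance) grow as the decoder becomes confident."""
    return llr * 1.6 + rng.normal(0.0, 0.1, llr.shape)

def turbo_decode(channel_llr, max_iters=8, var_threshold=25.0, seed=0):
    rng = np.random.default_rng(seed)
    llr = channel_llr.copy()
    for it in range(1, max_iters + 1):
        llr = decode_iteration(llr, rng)
        if np.var(llr) > var_threshold:   # confident LLRs -> stop iterating
            break
    bits = (llr < 0).astype(int)          # hard decision
    return bits, it

llr0 = np.random.default_rng(1).normal(1.0, 1.0, 1024)
bits, used = turbo_decode(llr0)
print(f"stopped after {used} of 8 iterations")
```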

A New SPW Scheme for PAPR Reduction in OFDM Systems by Using Genetic Algorithm (유전자 알고리즘을 적용한 SPW에 의한 새로운 OFDM 시스템 PAPR 감소 기법)

  • Kim Sung-Soo;Kim Myoung-Je;Kee Jong-Hae
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.16 no.11 s.102
    • /
    • pp.1131-1137
    • /
    • 2005
  • An orthogonal frequency division multiplexing (OFDM) system suffers from a high peak-to-average power ratio (PAPR) due to the superposition of many sub-carriers. To improve the PAPR performance, this paper proposes a new genetic sub-block phase weighting (GA-SPW) scheme based on the SPW technique. Selective mapping (SLM), partial transmit sequence (PTS), and the previously proposed SPW all become more effective as the number of sub-blocks and phase elements increases; however, all of them are limited in the number of sub-blocks, since the search effort grows exponentially with it. The proposed GA-SPW therefore uses a genetic algorithm (GA) to reduce the amount of computation: the number of calculations in the iterative phase search depends on the population size and number of generations rather than on the number of sub-blocks and phase elements. The superiority of the proposed method is demonstrated in the experimental results and analysis.
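
A hedged sketch of the GA-SPW idea: partition the subcarriers into sub-blocks, let a GA search the per-block phase weights (restricted here to ±1), and keep the weighting with the lowest PAPR. All parameter values below are illustrative, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(3)
N, B = 64, 4                                     # subcarriers, sub-blocks
X = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N)   # QPSK symbols
blocks = np.array_split(np.arange(N), B)

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def apply_phases(phases):
    Xw = X.copy()
    for b, ph in zip(blocks, phases):            # weight each sub-block
        Xw[b] *= ph
    return np.fft.ifft(Xw)                       # OFDM time-domain signal

def fitness(genome):                             # genes in {0,1} -> phases +1/-1
    return -papr_db(apply_phases(1 - 2 * np.asarray(genome)))

pop = [list(rng.integers(0, 2, B)) for _ in range(10)]
for _ in range(20):                              # generations, not 2^B search
    pop.sort(key=fitness, reverse=True)
    parents, children = pop[:5], []
    for _ in range(5):
        p1, p2 = rng.choice(len(parents), 2, replace=False)
        cut = rng.integers(1, B)                 # one-point crossover
        child = parents[p1][:cut] + parents[p2][cut:]
        if rng.random() < 0.1:                   # mutation: flip one gene
            child[rng.integers(0, B)] ^= 1
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print(f"best PAPR = {-fitness(best):.2f} dB")
```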