• Title/Summary/Keyword: Parameters setting range

Three-Dimensional Volume Assessment Accuracy in Computed Tomography Using a Phantom (모형물을 이용한 전산화 단층 촬영에서 3차원적 부피측정의 정확성 평가)

  • Kim, Hyun-Su; Wang, Ji-Hwan; Lim, Il-Hyuk; Park, Ki-Tae; Yeon, Seong-Chan; Lee, Hee-Chun
    • Journal of Veterinary Clinics, v.30 no.4, pp.268-272, 2013
  • The purpose of this study was to assess the effects of reconstruction kernel and slice thickness on the accuracy of spiral CT-based volume assessment over a range of object sizes typical of synthetic simulated tumors. Spiral CT scanning was performed at various reconstruction kernels (soft tissue, standard, bone) and slice thicknesses (1, 2, and 3 mm) using a phantom made of gelatin and 10 synthetic simulated tumors of different sizes (diameter 3.0-12.0 mm). Three-dimensional volume assessments were obtained using an automated software tool. Results were compared with the reference volume by calculating the percentage error, as sketched below. Statistical analysis was performed using ANOVA, with statistical significance set at P < 0.05. In general, smaller slice thicknesses and larger sphere diameters produced more accurate volume assessments than larger slice thicknesses and smaller sphere diameters. The measured volumes were larger than the actual volumes by a factor that depended on slice thickness; for the 100 HU simulated tumors, where the differences were statistically significant, a 1 mm slice thickness produced on average a 27.41% overestimate of volume, 2 mm a 45.61% overestimate, and 3 mm a 93.36% overestimate. However, there was no statistically significant difference in volume error between spiral CT scans in which only the reconstruction kernel was changed. These results support the conclusion that simulated tumor size and slice thickness are significant parameters in determining volume measurement errors. For an accurate volumetric measurement of an object, it is critical to select an appropriate slice thickness and to consider the size of the object.
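The abstract does not spell out the formula; a minimal sketch of the percentage-error comparison against the known volume of a spherical simulated tumor (illustrative numbers, not the study's data) could look like this:

```python
import numpy as np

def sphere_volume(diameter_mm: float) -> float:
    """Reference volume of a spherical simulated tumor, in mm^3."""
    r = diameter_mm / 2.0
    return 4.0 / 3.0 * np.pi * r ** 3

def percentage_error(measured_mm3: float, reference_mm3: float) -> float:
    """Signed percentage error of a CT-based volume measurement."""
    return (measured_mm3 - reference_mm3) / reference_mm3 * 100.0

# Example: a 6.0 mm sphere whose segmented volume came out as 144.0 mm^3
ref = sphere_volume(6.0)                       # ~113.1 mm^3
print(round(percentage_error(144.0, ref), 1))  # ~27.3 % overestimation
```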

The Influence of Iteration and Subset on True X Method in F-18-FPCIT Brain Imaging (F-18-FPCIP 뇌 영상에서 True-X 재구성 기법을 기반으로 했을 때의 Iteration과 Subset의 영향)

  • Choi, Jae-Min; Kim, Kyung-Sik; NamGung, Chang-Kyeong; Nam, Ki-Pyo; Im, Ki-Cheon
    • The Korean Journal of Nuclear Medicine Technology, v.14 no.1, pp.122-126, 2010
  • Purpose: F-18-FPCIT, which has a strong affinity for the dopamine transporter (DAT) located at presynaptic nerve terminals, offers diagnostic information about DAT density in the region of the striatum, especially in Parkinson's disease. In this study, we varied the iteration and subset settings and measured SUV ± SD and contrast from brain and phantom images reconstructed at each setting, in order to suggest an appropriate range for the iteration and subset values. Materials and Methods: This study was performed with 10 normal volunteers without any history of Parkinson's disease or cerebral disease and a Flangeless Esser PET phantom from Data Spectrum Corporation. 5.3 ± 0.2 mCi of F-18-FPCIT was injected into the normal group, and the PET phantom was assembled according to the ACR PET phantom instructions; its actual ratio between hot spheres and background was 2.35:1. Brain and phantom images were acquired 3 hours after injection, with a 10-minute acquisition. A Siemens Biograph 40 TruePoint scanner was used, and the True-X method was applied for image reconstruction. The iteration and subset values were set to 2 iterations/8 subsets, 3 iterations/16 subsets, 6 iterations/16 subsets, 8 iterations/16 subsets, and 8 iterations/21 subsets, respectively. To measure SUVs on the brain images, ROIs were drawn on the right putamen. The coefficient of variation (CV) was calculated to indicate uniformity at each iteration and subset combination, as sketched below. In the phantom study, we measured the actual ratio between hot spheres and background at each combination; ROIs of the same size were drawn at the same slice and location. Results: Mean SUVs were 10.60, 12.83, 13.87, 13.98, and 13.5 at each combination, and the fluctuations between successive settings were 22.36%, 10.34%, 1.1%, and 4.8%, respectively. The fluctuation of mean SUV was lowest between 6 iterations/16 subsets and 8 iterations/16 subsets. CV was 9.07%, 11.46%, 13.56%, 14.91%, and 19.47%, respectively, which means that as the iteration and subset values get higher, image uniformity gets worse. The fluctuations of CV between successive settings were 2.39, 2.1, 1.35, and 4.56, and the fluctuation of uniformity was lowest between 6 iterations/16 subsets and 8 iterations/16 subsets. In the contrast test, the measured ratios were 1.92:1, 2.12:1, 2.10:1, 2.13:1, and 2.11:1 at each combination; the setting of 8 iterations and 16 subsets reproduced the ratio between hot spheres and background most closely. Conclusion: Based on the findings of this study, SUVs and uniformity may differ when other reconstruction parameters, such as the filter or FWHM, are varied. Mean SUV and uniformity showed the lowest fluctuation at 6 iterations/16 subsets and 8 iterations/16 subsets, and 8 iterations/16 subsets showed the hot-sphere-to-background ratio nearest to the true value. However, it cannot be concluded that only 6 iterations/16 subsets and 8 iterations/16 subsets produce images suitable for clinical diagnosis; other factors may yield better images. For more accurate clinical diagnosis through quantitative analysis of striatal DAT density, quantitative reference values from healthy subjects need to be secured.
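The abstract does not show the CV formula; a minimal sketch of the coefficient-of-variation computation used here as a uniformity index (hypothetical ROI values, not the study's measurements) might be:

```python
import numpy as np

# Hypothetical ROI mean SUVs at each (iterations, subsets) setting;
# the study's own values come from ROIs drawn on the right putamen.
suv_by_setting = {
    (2, 8):  [10.1, 10.9, 10.8],
    (3, 16): [12.5, 13.0, 13.1],
    (6, 16): [13.7, 14.0, 13.9],
}

def coefficient_of_variation(values) -> float:
    """CV (%) = sample standard deviation / mean * 100."""
    v = np.asarray(values, dtype=float)
    return v.std(ddof=1) / v.mean() * 100.0

for setting, suvs in sorted(suv_by_setting.items()):
    print(setting, round(coefficient_of_variation(suvs), 2))
```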

Tribological study on the thermal stability of thick ta-C coating at elevated temperatures

  • Lee, Woo Young; Ryu, Ho Jun; Jang, Young Jun; Kim, Gi Taek; Deng, Xingrui; Umehara, Noritsugu; Kim, Jong Kuk
    • Proceedings of the Korean Vacuum Society Conference, 2016.02a, pp.144.2-144.2, 2016
  • Diamond-like carbon (DLC) coatings have been widely applied to mechanical components and cutting tools owing to their high hardness and wear resistance. Among them, hydrogenated amorphous carbon (a-C:H) coatings are well known for their low friction and for the stable production of both thin and thick films, but they have been reported to wear away easily at high temperature. Non-hydrogenated tetrahedral amorphous carbon (ta-C) is well suited to industrial application owing to the good thermal stability arising from its high sp³-bonding fraction of 70-80%. However, the large compressive stress of ta-C coatings limits the deposition of thick ta-C coatings. In this study, a thick ta-C coating was deposited onto an Inconel alloy disk by the FCVA technique; the thickness of the ta-C coating was about 3.5 μm. The tribological behavior of the ta-C coated disks sliding against Si₃N₄ balls was examined at elevated temperatures of 23, 100, 200, and 300 °C, with the temperature range extended until peel-off was observed. The experimental results showed that the friction coefficient decreased from 0.14 to 0.05 with increasing temperature up to 200 °C. At 300 °C, the friction coefficient increased dramatically after 5,000 cycles and the coating then delaminated. This phenomenon is attributed to two causes: (1) thermal degradation and (2) graphitization of the ta-C coating. First, thermal degradation was demonstrated by the wear-rate calculation (a common formulation is sketched below): the wear rate of the ta-C coatings increased with elevated temperature. To investigate the relationship between hardness and graphitization, additional thick ta-C coatings (2, 3, and 5 μm) were deposited. As the coating thickness increased, hardness decreased from 58 to 49 GPa, which indicates that graphitization was accelerated. Therefore, we are now trying to increase the sp³ fraction of the ta-C coating and to control the coating parameters for thermal stability of thick ta-C at high temperatures.
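The proceedings abstract does not state which wear-rate formulation it used; the specific wear rate k = V / (F·s) is a common convention for ball-on-disk tests and is sketched below with purely illustrative numbers:

```python
def specific_wear_rate(wear_volume_mm3: float, load_n: float, sliding_distance_m: float) -> float:
    """Specific wear rate k = V / (F * s), in mm^3 / (N * m)."""
    return wear_volume_mm3 / (load_n * sliding_distance_m)

# Illustrative values only (not from the paper): 0.02 mm^3 of coating removed
# under a 5 N normal load over 500 m of sliding.
print(specific_wear_rate(0.02, 5.0, 500.0))  # 8e-06 mm^3/(N*m)
```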

Volume Change of Spiral Computed Tomography due to Changes in the Parameters (파라미터의 변경에 따라 나선형 전산화 단층 촬영의 체적 변화)

  • Lee, JunHaeng
    • Journal of the Korean Society of Radiology, v.7 no.4, pp.307-311, 2013
  • This study examined the change in artifact volume by analyzing how the image changes with the threshold setting in 3D reconstruction and with the scan parameters (slice thickness and helical pitch), to explore whether pathology can still be fully distinguished when CT is performed at a lower dose than is currently used in order to reduce exposure. It also attempted to identify scan parameters acceptable for reducing the exposure dose in CT. For materials and methods, silicone was used to produce the samples: five spherical samples were produced with diameters at 10-millimeter intervals (50, 40, 30, 20, and 10 mm), and scanning was performed at a fixed tube voltage of 120 kVp and tube current of 50 mA. Scans were acquired with varied slice thickness (1.0, 2.0, 3.0, 5.0, and 7.0 mm) and helical pitch (1.5, 2.0, 3.0). The images at intervals of 1.0, 2.0, 3.0, 5.0, and 7.0 mm were transmitted to the workstation, the threshold was changed (lower limits of -200, -50, and 50, with an upper limit of 1,000) using the volume rendering technique, 3D images were reconstructed, and the artifact volume was measured (a minimal voxel-counting sketch follows below). In conclusion, a helical pitch of 1.5 showed the least change in volume, and a helical pitch of 3.0 the greatest reduction in volume. The experiment suggested that as slice thickness increased, the artifact volume decreased further below the actual measurement. Furthermore, in the 3D image reconstruction, the artifact volume changed least when the threshold range was set to -200 to 1,000. Based on these results, a reduction in exposure dose can be expected.
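The paper does not describe the internals of its volume-measurement tool; a minimal voxel-counting sketch of threshold-based volume measurement (synthetic data and an assumed voxel size, for illustration only) could look like this:

```python
import numpy as np

def thresholded_volume_mm3(hu_volume: np.ndarray, voxel_size_mm,
                           lower_hu: float, upper_hu: float = 1000.0) -> float:
    """Measure object volume by counting voxels whose HU value lies in [lower_hu, upper_hu]."""
    mask = (hu_volume >= lower_hu) & (hu_volume <= upper_hu)
    voxel_volume = float(np.prod(voxel_size_mm))   # dx * dy * dz in mm^3
    return float(mask.sum()) * voxel_volume

# Illustrative only: a synthetic 64^3 volume with a bright sphere in an air-like background.
z, y, x = np.mgrid[:64, :64, :64]
phantom = np.full((64, 64, 64), -1000.0)
phantom[(x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 <= 10 ** 2] = 120.0
print(thresholded_volume_mm3(phantom, (0.5, 0.5, 2.0), lower_hu=-200.0))  # sphere volume in mm^3
```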

Comparison of Association Rule Learning and Subgroup Discovery for Mining Traffic Accident Data (교통사고 데이터의 마이닝을 위한 연관규칙 학습기법과 서브그룹 발견기법의 비교)

  • Kim, Jeongmin; Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems, v.21 no.4, pp.1-16, 2015
  • Traffic accidents have been one of the major causes of death worldwide for the last several decades. According to World Health Organization statistics, approximately 1.24 million deaths occurred on the world's roads in 2010. In order to reduce future traffic accidents, multipronged approaches have been adopted, including traffic regulations, injury-reducing technologies, driver training programs, and so on. Records on traffic accidents are generated and maintained for this purpose. To make these records meaningful and effective, it is necessary to analyze the relationship between traffic accidents and related factors, including vehicle design, road design, weather, driver behavior, etc. Insight derived from these analyses can be used in accident prevention approaches. Traffic accident data mining is an activity for finding useful knowledge about such relationships that is not yet well known and that users may be interested in. Many studies on mining accident data have been reported over the past two decades. Most of them focused mainly on predicting accident risk using accident-related factors. Supervised learning methods such as decision trees, logistic regression, k-nearest neighbors, and neural networks are used for this prediction. However, the prediction models derived from these algorithms are too complex for humans to understand, because the main purpose of these algorithms is prediction, not explanation of the data. Some studies use unsupervised clustering algorithms to divide the data into several groups, but the derived groups themselves are still not easy for humans to understand, so additional analytic work is necessary. Rule-based learning methods are adequate when we want to derive a comprehensible form of knowledge about the target domain. They derive a set of if-then rules that represent the relationship between the target feature and the other features. Rules are fairly easy for humans to understand, so they can provide insight and comprehensible results. Association rule learning and subgroup discovery are representative rule-based learning methods for descriptive tasks. These two algorithms have been used in a wide range of areas, from transaction analysis and accident data analysis to the detection of statistically significant patient risk groups, the discovery of key persons in social communities, and so on. We use both the association rule learning method and the subgroup discovery method to discover useful patterns from a traffic accident dataset consisting of many features, including driver profile, accident location, accident type, vehicle information, regulation violations, and so on. The association rule learning method, which is an unsupervised learning method, searches for frequent item sets in the data and translates them into rules (see the sketch below). In contrast, the subgroup discovery method is a kind of supervised learning method that discovers rules for user-specified concepts satisfying a certain degree of generality and unusualness. Depending on which aspect of the data we focus our attention on, we may combine multiple relevant features of interest into a synthetic target feature and give it to the rule learning algorithms. After a set of rules is derived, some postprocessing steps are taken to make the rule set more compact and easier to understand by removing uninteresting or redundant rules.
We conducted a set of experiments mining our traffic accident data in both unsupervised and supervised mode to compare these rule-based learning algorithms. The experiments with the traffic accident data reveal that association rule learning, in its pure unsupervised mode, can discover some hidden relationships among the features. Under a supervised learning setting with a combinatorial target feature, however, the subgroup discovery method finds good rules much more easily than the association rule learning method, which requires a lot of effort to tune its parameters.
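Neither the mining tool nor its parameters are named in the abstract; as a minimal, self-contained illustration of the support/confidence machinery behind association rule learning (made-up accident records, not the study's dataset), one might write:

```python
from itertools import combinations
from collections import Counter

# Hypothetical one-hot accident records (feature=value items), for illustration only.
records = [
    {"weather=rain", "road=curve", "severity=injury"},
    {"weather=rain", "road=straight", "severity=injury"},
    {"weather=clear", "road=curve", "severity=damage"},
    {"weather=rain", "road=curve", "severity=injury"},
]

def frequent_itemsets(records, min_support=0.5, max_len=2):
    """Count itemsets up to max_len items and keep those meeting min_support."""
    counts = Counter()
    for rec in records:
        for k in range(1, max_len + 1):
            for itemset in combinations(sorted(rec), k):
                counts[itemset] += 1
    n = len(records)
    return {s: c / n for s, c in counts.items() if c / n >= min_support}

def rules(freq, min_confidence=0.7):
    """Turn frequent itemsets into if-then rules: antecedent -> consequent."""
    out = []
    for itemset, support in freq.items():
        if len(itemset) < 2:
            continue
        for i in range(len(itemset)):
            antecedent = itemset[:i] + itemset[i + 1:]
            confidence = support / freq[antecedent]
            if confidence >= min_confidence:
                out.append((antecedent, itemset[i], support, confidence))
    return out

freq = frequent_itemsets(records)
for antecedent, consequent, support, confidence in rules(freq):
    print(f"{antecedent} -> {consequent} (support={support:.2f}, confidence={confidence:.2f})")
```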

Koreanized Analysis System Development for Groundwater Flow Interpretation (지하수유동해석을 위한 한국형 분석시스템의 개발)

  • Choi, Yun-Yeong
    • Journal of the Korean Society of Hazard Mitigation, v.3 no.3 s.10, pp.151-163, 2003
  • In this study, an algorithm for the groundwater flow process was established to develop a Koreanized groundwater program that handles the geographic and geologic conditions of aquifers showing dynamic behavior in the groundwater flow system. All input data settings of the 3-DFM model developed in this study are organized in Korean, and the model contains a help function for each input item; it is designed to show detailed information about each input parameter when the mouse pointer is placed on the corresponding parameter. The model is also designed so that the geologic boundary condition of each stratum or the initial head data can easily be specified in the worksheet. In addition, the model displays input boxes for each analysis condition, so that parameter setting for steady and unsteady flow analysis, as well as for the analysis of the characteristics of each stratum, is less complicated than in the existing MODFLOW. Descriptions of the input data are displayed on the right side of the window, the analysis results on the left side, and the results can also be viewed as a TXT file. The model developed in this study is a numerical model using the finite difference method (a minimal sketch of such a scheme is given below), and its applicability was examined by comparing and analyzing observed groundwater heads against heads simulated using the actual recharge amount and estimated parameters. The 3-DFM model was applied to the Sehwa-ri and Songdang-ri area of Jeju, Korea, to analyze the groundwater flow system under pumping, and the observed and computed groundwater heads were almost in accordance with each other, with errors in the range of 0.03-0.07 percent. Computation of equipotentials and velocity vectors from the simulation performed before pumping started in the study area shows that groundwater flows evenly from Nopen-orum and Munseogi-orum toward Wolang-bong, Yongnuni-orum, and Songja-bong. These analysis results are in accordance with those of MODFLOW.
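The paper does not publish the 3-DFM discretization itself; a minimal 2D steady-state finite-difference head solver (uniform grid, homogeneous aquifer, fixed-head left/right and no-flow top/bottom boundaries, all values illustrative) conveys the basic idea:

```python
import numpy as np

def solve_steady_head(nx=50, ny=50, h_left=10.0, h_right=5.0, tol=1e-6, max_iter=20000):
    """Steady-state 2D groundwater head via Jacobi iteration of the
    finite-difference Laplace equation on a uniform grid."""
    h = np.full((ny, nx), (h_left + h_right) / 2.0)
    h[:, 0], h[:, -1] = h_left, h_right          # fixed-head boundaries
    for _ in range(max_iter):
        h_new = h.copy()
        h_new[1:-1, 1:-1] = 0.25 * (h[1:-1, :-2] + h[1:-1, 2:] + h[:-2, 1:-1] + h[2:, 1:-1])
        h_new[0, 1:-1] = h_new[1, 1:-1]          # no-flow boundary (mirror)
        h_new[-1, 1:-1] = h_new[-2, 1:-1]
        if np.abs(h_new - h).max() < tol:
            return h_new
        h = h_new
    return h

heads = solve_steady_head()
print(heads[25, ::10])   # head declines roughly linearly from 10 m toward 5 m
```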

Evaluating Reverse Logistics Networks with Centralized Centers : Hybrid Genetic Algorithm Approach (집중형센터를 가진 역물류네트워크 평가 : 혼합형 유전알고리즘 접근법)

  • Yun, YoungSu
    • Journal of Intelligence and Information Systems, v.19 no.4, pp.55-79, 2013
  • In this paper, we propose a hybrid genetic algorithm (HGA) approach to effectively solve the reverse logistics network with centralized centers (RLNCC). In the proposed HGA approach, a genetic algorithm (GA) is used as the main algorithm. For implementing the GA, a new bit-string representation scheme using 0 and 1 values is suggested, which makes it easy to create the initial GA population. As genetic operators, the elitist strategy in an enlarged sampling space developed by Gen and Chang (1997), a new two-point crossover operator, and a new random mutation operator are used for selection, crossover, and mutation, respectively. For the hybrid concept of the GA, an iterative hill climbing method (IHCM) developed by Michalewicz (1994) is inserted into the HGA search loop. The IHCM is a local search technique that precisely explores the space to which the GA search has converged (a minimal sketch of this hybrid loop follows below). The RLNCC is composed of collection centers, remanufacturing centers, redistribution centers, and secondary markets in reverse logistics networks. Of these centers and secondary markets, only one collection center, one remanufacturing center, one redistribution center, and one secondary market should be opened in the reverse logistics network. Some assumptions are made to implement the RLNCC effectively. The RLNCC is represented by a mixed integer programming (MIP) model using indexes, parameters, and decision variables. The objective function of the MIP model is to minimize the total cost, which consists of transportation cost, fixed cost, and handling cost. The transportation cost arises from transporting the returned products between the centers and secondary markets. The fixed cost is determined by the opening or closing decision at each center and secondary market; that is, if there are three collection centers (with opening costs of 10.5, 12.1, and 8.9 for collection centers 1, 2, and 3, respectively), and collection center 1 is opened while the others are closed, then the fixed cost is 10.5. The handling cost is the cost of treating the products returned from customers at each center and secondary market opened at each RLNCC stage. The RLNCC is solved by the proposed HGA approach. In the numerical experiments, the proposed HGA and a conventional competing approach are compared with each other using various measures of performance. As the conventional competing approach, the GA approach of Yun (2013) is used; this GA approach has no local search technique such as the IHCM used in the HGA approach. As measures of performance, CPU time, the optimal solution, and the optimal setting are used. Two types of RLNCC with different numbers of customers, collection centers, remanufacturing centers, redistribution centers, and secondary markets are presented for comparing the performance of the HGA and GA approaches. The MIP models for the two types of RLNCC are programmed in Visual Basic Version 6.0, and the computing environment is an IBM-compatible PC with a 3.06 GHz CPU and 1 GB RAM running Windows XP. The parameters used in the HGA and GA approaches are a total of 10,000 generations, a population size of 20, a crossover rate of 0.5, a mutation rate of 0.1, and an IHCM search range of 2.0. A total of 20 runs are made to eliminate the randomness of the HGA and GA searches.
Based on the performance comparisons, the network representations by opening/closing decision, and the convergence processes for the two types of RLNCC, the experimental results show that the HGA performs significantly better than the GA in terms of the optimal solution, though the GA is slightly quicker than the HGA in terms of CPU time. Finally, it has been shown that the proposed HGA approach is more efficient than the conventional GA approach for the two types of RLNCC, since the former has a GA search process as well as a local search process as an additional search scheme, while the latter has a GA search process alone. For future study, much larger RLNCCs will be tested to confirm the robustness of our approach.
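As an illustration of the hybrid idea only (a GA loop over bit-string open/close decisions with a bit-flip hill-climbing refinement), the following sketch uses a toy cost model and made-up costs, not the paper's operators, MIP model, or data:

```python
import random

random.seed(0)

# Hypothetical fixed opening and handling costs per candidate center.
FIXED_COST = [10.5, 12.1, 8.9, 11.4, 9.7]
HANDLING_COST = [3.2, 2.5, 4.1, 2.9, 3.6]

def total_cost(bits):
    """Toy objective: fixed + handling cost of opened centers, with a large
    penalty unless exactly one center is opened (RLNCC-style constraint)."""
    opened = [i for i, b in enumerate(bits) if b]
    if len(opened) != 1:
        return 1e6 + 1e3 * abs(len(opened) - 1)
    i = opened[0]
    return FIXED_COST[i] + HANDLING_COST[i]

def hill_climb(bits):
    """Iterative hill climbing: accept single bit flips that reduce the cost."""
    best, improved = bits[:], True
    while improved:
        improved = False
        for i in range(len(best)):
            cand = best[:]
            cand[i] ^= 1
            if total_cost(cand) < total_cost(best):
                best, improved = cand, True
    return best

def hga(pop_size=20, generations=200, cx_rate=0.5, mut_rate=0.1):
    n = len(FIXED_COST)
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=total_cost)
        next_pop = pop[:2]                       # simple elitism
        while len(next_pop) < pop_size:
            p1, p2 = random.sample(pop[:10], 2)
            child = p1[:]
            if random.random() < cx_rate:        # two-point crossover (one child)
                a, b = sorted(random.sample(range(n), 2))
                child[a:b] = p2[a:b]
            if random.random() < mut_rate:       # random bit-flip mutation
                j = random.randrange(n)
                child[j] ^= 1
            next_pop.append(hill_climb(child))   # local search inside the GA loop
        pop = next_pop
    return min(pop, key=total_cost)

best = hga()
print(best, total_cost(best))  # typically opens only center index 2: 8.9 + 4.1 = 13.0
```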