• Title/Summary/Keyword: System Optimization


Quantitative Analysis of Brain Metabolite Spectrum Depending on the Concentration of the Contrast Media in Phantom (팬텀 내 조영제 농도에 따른 뇌 대사물질 Spectrum의 정량분석)

  • Shin, WoonJae;Gang, EunBo;Chun, SongI
    • Journal of the Korean Society of Radiology / v.9 no.1 / pp.47-53 / 2015
  • Quantitative analysis of the MR spectrum depending on the molar concentration of contrast media in a cerebral metabolite phantom was performed. A PRESS pulse sequence was used to obtain MR spectra on a 3.0T MRI system (Achieva, Philips Healthcare, Best, the Netherlands); the phantom contained brain metabolites such as N-acetyl aspartate (NAA), choline (Cho), creatine (Cr), and lactate (Lac). In this study, optimization of the MRS PRESS pulse sequence depending on the concentration of contrast media (0, 0.1, and 0.3 mmol/ℓ) was evaluated for various repetition times (TR; 1500, 1700, and 2000 ms). In the control (contrast-media-free) group, NAA and Cho signals were higher at TR 2000 ms than at 1700 and 1500 ms, while Cr had its highest peak signal at TR 1500 ms. At a contrast-media concentration of 0.1 mmol/ℓ, metabolite signals at TR 1700 ms increased relative to the other TRs (NAA by 73%, Cho by 249%, Cr by 37%), and signals also increased at 0.3 mmol/ℓ. At 0.5 mmol/ℓ of contrast agent, the cerebral metabolite peaks were reduced, falling below control-group levels at TR 1500 and 2000 ms. The ratios of metabolite peaks such as NAA/Cr and Cho/Cr decreased as the contrast-agent concentration increased from 0.1 to 0.5 mmol/ℓ. The authors found the following optimization of the PRESS sequence for 3.0T MRS: low contrast-agent concentrations (0.1 and 0.3 mmol/ℓ) gave the highest signal intensity, while at high concentration the signal reduction was smallest at TR 1700 ms. In conclusion, the authors believe that reducing TR is helpful for acquiring maximum signal intensity.
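As a rough illustration of the ratio analysis described above, a minimal sketch of computing NAA/Cr and Cho/Cr from fitted peak signals; the peak amplitudes are hypothetical placeholders, not values from the paper:

```python
# Minimal sketch of metabolite ratio analysis; amplitudes are hypothetical.
peaks = {
    "NAA": 12.4,  # N-acetyl aspartate, ~2.0 ppm
    "Cho": 3.1,   # choline, ~3.2 ppm
    "Cr": 5.2,    # creatine, ~3.0 ppm
    "Lac": 1.8,   # lactate, ~1.3 ppm
}

# Ratios relative to creatine, the usual internal reference
naa_cr = peaks["NAA"] / peaks["Cr"]
cho_cr = peaks["Cho"] / peaks["Cr"]
print(f"NAA/Cr = {naa_cr:.2f}, Cho/Cr = {cho_cr:.2f}")
```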

Optimization of Support Vector Machines for Financial Forecasting (재무예측을 위한 Support Vector Machine의 최적화)

  • Kim, Kyoung-Jae;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.241-254 / 2011
  • Financial time-series forecasting is one of the most important issues in the field because it is essential to the risk management of financial institutions. Researchers have therefore tried to forecast financial time series using various data mining techniques such as regression, artificial neural networks, decision trees, and k-nearest neighbor. Recently, support vector machines (SVMs) have been popular in this research area because they do not require huge training data and have a low risk of overfitting. However, a user must determine several design factors heuristically in order to use an SVM; the selection of an appropriate kernel function and its parameters and proper feature subset selection are the major ones. Beyond these factors, proper selection of an instance subset may also improve the forecasting performance of an SVM by eliminating irrelevant and distorting training instances. Nonetheless, few studies have applied instance selection to SVMs, especially in the domain of stock market prediction. Instance selection tries to choose a proper instance subset from the original training data; it can be considered a method of knowledge refinement that maintains the instance base. This study proposes a novel instance selection algorithm for SVMs. The proposed technique uses a genetic algorithm (GA) to optimize the instance selection process and the kernel parameters simultaneously. We call this model ISVM (SVM with Instance selection). The GA searches for optimal or near-optimal values of the kernel parameters and for relevant instances, so the chromosomes carry two sets of codes: one for the kernel parameters and one for instance selection. For the controlling parameters of the GA search, the population size is set at 50 organisms, the crossover rate at 0.7, and the mutation rate at 0.1; as the stopping condition, 50 generations are permitted. The application data consist of technical indicators and the direction of change in the daily Korea stock price index (KOSPI), totaling 2,218 trading days. We separate the data into training, test, and hold-out sets of 1,056, 581, and 581 samples, respectively. This study compares ISVM to several comparative models: logistic regression (Logit), backpropagation neural networks (ANN), nearest neighbor (1-NN), conventional SVM (SVM), and SVM with GA-optimized kernel parameters (PSVM). The experimental results show that ISVM outperforms 1-NN by 15.32%, ANN by 6.89%, Logit and SVM by 5.34%, and PSVM by 4.82% on the hold-out data, using only 556 of the 1,056 original training instances. In addition, a two-sample test for proportions is used to examine whether ISVM significantly outperforms the comparative models; the results indicate that ISVM outperforms ANN and 1-NN at the 1% statistical significance level, and Logit, SVM, and PSVM at the 5% level.
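A minimal sketch of the simultaneous optimization the abstract describes: a hand-rolled GA whose chromosome encodes RBF kernel parameters plus a binary instance-selection mask, evaluated with scikit-learn's SVC. The encoding, ranges, and synthetic stand-in data are illustrative assumptions, not the authors' exact implementation; only the GA settings (population 50, crossover 0.7, mutation 0.1, 50 generations) come from the abstract:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))             # stand-in for technical indicators
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # stand-in for KOSPI direction
X_tr, y_tr, X_val, y_val = X[:150], y[:150], X[150:], y[150:]

N_POP, N_GEN, P_CX, P_MUT = 50, 50, 0.7, 0.1   # GA settings from the abstract

def decode(chrom):
    # first two genes encode RBF kernel parameters, the rest an instance mask
    C = 10.0 ** (chrom[0] * 4 - 2)        # C in [1e-2, 1e2)
    gamma = 10.0 ** (chrom[1] * 4 - 3)    # gamma in [1e-3, 1e1)
    mask = chrom[2:] > 0.5
    return C, gamma, mask

def fitness(chrom):
    C, gamma, mask = decode(chrom)
    if mask.sum() < 10 or len(set(y_tr[mask])) < 2:
        return 0.0                        # reject degenerate instance subsets
    clf = SVC(C=C, gamma=gamma).fit(X_tr[mask], y_tr[mask])
    return clf.score(X_val, y_val)

pop = rng.random((N_POP, 2 + len(X_tr)))
for _ in range(N_GEN):
    fit = np.array([fitness(c) for c in pop])
    a, b = rng.integers(N_POP, size=(2, N_POP))       # binary tournament
    children = pop[np.where(fit[a] > fit[b], a, b)].copy()
    for i in range(0, N_POP - 1, 2):                  # uniform crossover
        if rng.random() < P_CX:
            swap = rng.random(children.shape[1]) < 0.5
            tmp = children[i, swap].copy()
            children[i, swap] = children[i + 1, swap]
            children[i + 1, swap] = tmp
    mut = rng.random(children.shape) < P_MUT          # uniform mutation
    children[mut] = rng.random(int(mut.sum()))
    pop = children

best = max(pop, key=fitness)
C, gamma, mask = decode(best)
print(f"C={C:.3g}, gamma={gamma:.3g}, instances kept: {int(mask.sum())}/{len(X_tr)}")
```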

Numerical and Experimental Study on the Coal Reaction in an Entrained Flow Gasifier (습식분류층 석탄가스화기 수치해석 및 실험적 연구)

  • Kim, Hey-Suk;Choi, Seung-Hee;Hwang, Min-Jung;Song, Woo-Young;Shin, Mi-Soo;Jang, Dong-Soon;Yun, Sang-June;Choi, Young-Chan;Lee, Gae-Goo
    • Journal of Korean Society of Environmental Engineers / v.32 no.2 / pp.165-174 / 2010
  • A numerical model of the coal gasification reactions occurring in an entrained flow coal gasifier is presented in this study. The purpose is to develop a reliable CFD (Computational Fluid Dynamics) evaluation method for the coal gasifier, useful not only for basic design but also for further optimization of system operation. The coal gasification reaction consists of a series of processes, namely water evaporation, coal devolatilization, heterogeneous char reactions, and gas-phase reactions of the coal off-gas, in a two-phase, turbulent, radiatively participating medium. Both numerical and experimental studies are carried out for the 1.0 ton/day entrained flow coal gasifier installed at the Korea Institute of Energy Research (KIER). The comprehensive computer program in this study is built on a commercial CFD program by implementing several subroutines necessary for the gasification process, including an Eddy-Breakup model together with a harmonic mean approach for turbulent reaction. A Lagrangian approach for particle trajectories is further adopted, considering the turbulence effect caused by the nonlinearity of the drag force. The developed program is successfully evaluated against experimental data such as profiles of temperature and gaseous species concentrations, together with the cold gas efficiency. Intensive investigation is then made of the size distribution of the pulverized coal particles, the slurry concentration, and the design parameters of the gasifier. These parameters are compared and evaluated against one another through the calculated syngas production rate and cold gas efficiency, and appear to directly affect gasification performance. Considering the complexity of entrained coal gasification, even though the results of this study look physically reasonable and consistent across the parametric study, more elaborate modeling together with systematic evaluation against experimental data is necessary to develop a reliable CFD-based design tool.
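The harmonic mean closure mentioned above blends a kinetic (Arrhenius) rate with a turbulent-mixing (Eddy-Breakup) rate so that the slower process limits the overall reaction. A minimal sketch under assumed, illustrative constants, not KIER's actual model coefficients:

```python
import math

def kinetic_rate(T, A=2.0e8, E=1.2e8, Ru=8314.0):
    # Arrhenius kinetic rate [1/s]; A and E are illustrative constants
    return A * math.exp(-E / (Ru * T))

def eddy_breakup_rate(k, eps, Y_fuel, Y_ox, s, C_ebu=4.0):
    # Mixing-limited rate [1/s]: turbulence frequency eps/k scaled by the
    # limiting reactant mass fraction (s = stoichiometric oxidizer/fuel ratio)
    return C_ebu * (eps / k) * min(Y_fuel, Y_ox / s)

def effective_rate(R_kin, R_ebu):
    # Harmonic mean blending: the slower of kinetics and mixing controls
    return 1.0 / (1.0 / R_kin + 1.0 / R_ebu)

R_kin = kinetic_rate(T=1600.0)
R_ebu = eddy_breakup_rate(k=1.5, eps=30.0, Y_fuel=0.05, Y_ox=0.15, s=2.5)
print(f"effective reaction rate = {effective_rate(R_kin, R_ebu):.3g} 1/s")
```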

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.127-142 / 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. They have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models, and in recent years they have gained more popularity than unsupervised models such as deep belief networks because of their impressive applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation, an abbreviation for "backward propagation of errors", is a common method of training artificial neural networks used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network; the gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture that is particularly well adapted to classifying images. This architecture makes convolutional networks fast to train, which in turn helps us train the deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks rest on three basic ideas: local receptive fields, shared weights, and pooling. By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to only a small region of the input (or previous layer's) neurons. Shared weights mean that the same weights and bias are used for each of the local receptive fields, so all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, usually placed immediately after convolutional layers; their role is to simplify the information in the output of the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks took weeks several years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs: a class of artificial neural network in which connections between units form a directed cycle. This creates an internal state that allows the network to exhibit dynamic temporal behavior; unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, namely vanishing and exploding gradients: the gradient can get smaller and smaller as it is propagated back through the layers, which makes learning in early layers extremely slow. The problem actually gets worse in RNNs, since gradients are propagated backward not just through layers but also through time, so if the network runs for a long time the gradient can become extremely unstable and hard to learn from. It has proven possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs; LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
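As a concrete illustration of the three convolutional ideas just described (local receptive fields, shared weights, and pooling), a minimal numpy sketch with toy sizes and random weights, purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((8, 8))        # toy grayscale input
kernel = rng.normal(size=(3, 3))  # one weight set reused everywhere (shared weights)
bias = 0.1

# Convolution: each output neuron sees only a 3x3 local receptive field,
# and every one of them reuses the same kernel and bias.
H = image.shape[0] - 2
feature_map = np.empty((H, H))
for i in range(H):
    for j in range(H):
        patch = image[i:i + 3, j:j + 3]                                     # local receptive field
        feature_map[i, j] = np.maximum(0.0, np.sum(patch * kernel) + bias)  # ReLU

# 2x2 max pooling: condense the feature map, keeping the strongest responses
pooled = feature_map[:6, :6].reshape(3, 2, 3, 2).max(axis=(1, 3))
print(feature_map.shape, pooled.shape)  # (6, 6) -> (3, 3)
```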

Evaluation of beam delivery accuracy for Small sized lung SBRT in low density lung tissue (Small sized lung SBRT 치료시 폐 실질 조직에서의 계획선량 전달 정확성 평가)

  • Oh, Hye Gyung;Son, Sang Jun;Park, Jang Pil;Lee, Je Hee
    • The Journal of Korean Society for Radiation Therapy / v.31 no.1 / pp.7-15 / 2019
  • Purpose: The purpose of this study is to evaluate beam delivery accuracy for small-sized lung SBRT through experiment. To assess the accuracy, the Eclipse TPS (treatment planning system) equipped with Acuros XB and radiochromic film were used to obtain the dose distributions. By comparing calculated and measured dose distributions, the margin for the PTV (planning target volume) in lung tissue was evaluated. Materials and Methods: CT images of a Rando phantom were acquired, and virtual target volumes of several sizes (diameter 2, 3, 4, and 5 cm) were planned in the right lung. All plans were normalized so that the prescription dose covered 95% of the target volume, using 6 MV FFF VMAT with 2 arcs. To compare the calculated and measured dose distributions, film was inserted into the Rando phantom and irradiated in the axial direction. The evaluation indexes were the percentage difference (%Diff) for absolute dose, the RMSE (root-mean-square error) for relative dose, and the coverage ratio and average dose in the PTV. Results: The maximum difference at the center point was -4.65% for the 2 cm diameter target. The RMSE between the calculated and measured off-axis dose distributions indicated that the measured distribution for the 2 cm diameter differed from the calculated one and was less accurate than that for the 5 cm diameter. In addition, the prescribed 95% dose (D95) for the 2 cm diameter did not cover the PTV, and its average dose was the lowest of all sizes. Conclusion: This study demonstrated that a small-sized PTV is not sufficiently covered by the prescribed dose in low-density lung tissue. All experimental indexes for the 2 cm diameter differed markedly from the other sizes, showing that a minimized PTV is not treated accurately and affects the results of radiation therapy. An extended margin for small PTVs in low-density lung tissue is considered necessary to enhance the target center dose, without constraining the maximum dose in optimization.
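A minimal sketch of the two film-analysis indexes named above, computed on hypothetical calculated/measured doses (illustrative numbers, not the study's data):

```python
import numpy as np

def percent_diff(calculated, measured):
    # %Diff for absolute dose at a point (e.g., the target center)
    return 100.0 * (measured - calculated) / calculated

def rmse(calculated, measured):
    # Root-mean-square error between relative (off-axis) dose profiles
    c, m = np.asarray(calculated), np.asarray(measured)
    return float(np.sqrt(np.mean((c - m) ** 2)))

# Hypothetical example values (Gy and normalized profile points)
print(percent_diff(calculated=2.15, measured=2.05))             # about -4.65 %
print(rmse([1.0, 0.95, 0.60, 0.20], [0.98, 0.90, 0.55, 0.22]))  # profile RMSE
```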

Optimization of Medium Components using Response Surface Methodology for Cost-effective Mannitol Production by Leuconostoc mesenteroides SRCM201425 (반응표면분석법을 이용한 Leuconostoc mesenteroides SRCM201425의 만니톨 생산배지 최적화)

  • Ha, Gwangsu;Shin, Su-Jin;Jeong, Seong-Yeop;Yang, HoYeon;Im, Sua;Heo, JuHee;Yang, Hee-Jong;Jeong, Do-Youn
    • Journal of Life Science / v.29 no.8 / pp.861-870 / 2019
  • This study was undertaken to establish optimal medium compositions for cost-effective mannitol production by Leuconostoc mesenteroides SRCM201425 isolated from kimchi. L. mesenteroides SRCM201425 was selected for efficient mannitol production based on fructose analysis and was identified by its 16S rRNA gene sequence as well as by carbohydrate fermentation pattern analysis. To enhance mannitol production, the effects of carbon, nitrogen, and mineral sources on mannitol production were first screened using a Plackett-Burman design (PBD). The effects of 11 variables on mannitol production were investigated, of which three (fructose, sucrose, and peptone) were selected. In the second step, the concentrations of fructose, sucrose, and peptone were optimized using a central composite design (CCD) and response surface analysis. The predicted optimal concentrations of fructose, sucrose, and peptone were 38.68 g/l, 30 g/l, and 39.67 g/l, respectively. The mathematical response model was reliable, with a coefficient of determination of R² = 0.9185. Mannitol production increased 20-fold compared with MRS medium, corresponding to a mannitol yield of 97.46% relative to MRS supplemented with 100 g/l fructose in a flask system. Furthermore, production in the optimized medium was cost-effective. The findings of this study are expected to be useful for the biological production of mannitol, as an alternative to catalytic hydrogenation, which generates byproducts and additional production costs.
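A minimal sketch of the response-surface step: fit a second-order polynomial to central-composite-design data and locate the stationary point of the fitted surface. The two factors, runs, and responses below are hypothetical placeholders, not the study's measurements:

```python
import numpy as np

# Hypothetical CCD runs: columns = fructose, peptone [g/l]; y = mannitol [g/l]
X = np.array([[30, 30], [50, 30], [30, 50], [50, 50],
              [26, 40], [54, 40], [40, 26], [40, 54], [40, 40]], float)
y = np.array([18.0, 21.0, 20.5, 22.0, 17.5, 21.5, 19.0, 22.5, 24.0])

x1, x2 = X[:, 0], X[:, 1]
# Quadratic model: y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
b = np.linalg.lstsq(A, y, rcond=None)[0]

# Stationary point: set the gradient of the fitted surface to zero
# (in practice one also checks the Hessian H is negative definite for a maximum)
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
g = -np.array([b[1], b[2]])
opt = np.linalg.solve(H, g)
print("predicted optimum (fructose, peptone) =", opt)
```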

Analysis of the Effect of Objective Functions on Hydrologic Model Calibration and Simulation (목적함수에 따른 매개변수 추정 및 수문모형 정확도 비교·분석)

  • Lee, Gi Ha;Yeon, Min Ho;Kim, Young Hun;Jung, Sung Ho
    • Journal of Korean Society of Disaster and Security / v.15 no.1 / pp.1-12 / 2022
  • An automatic optimization technique is used to estimate the optimal parameters of a hydrologic model, and different hydrologic responses can result depending on the objective function. In this study, the parameters of an event-based rainfall-runoff model were estimated using various objective functions, the reproducibility of the hydrograph under each objective function was evaluated, and appropriate objective functions were proposed. As the rainfall-runoff model, the storage function model (SFM), a lumped hydrologic model used for runoff simulation in the current Korean flood forecasting system, was selected. To evaluate hydrograph reproducibility for each objective function, 9 rainfall events were selected for the Cheoncheon basin, upstream of Yongdam Dam, and 7 widely used objective functions were applied to estimate the SFM parameters for each rainfall event. The reproducibility of the hydrographs simulated with the optimal parameter sets from the different objective functions was then analyzed. As a result, RMSE, NSE, and RSR, which include an error-squared term in the objective function, showed the highest accuracy for all rainfall events except Event 7. PBIAS and VE, which include an error term relative to the observed flow, also showed relatively stable hydrograph reproducibility. However, MIA, which adjusts parameters sensitive to high flow and low flow simultaneously, showed very low hydrograph reproducibility.
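For reference, standard definitions of the objective functions named above, as a sketch; the paper's exact formulations may differ in detail:

```python
import numpy as np

def rmse(obs, sim):
    # Root-mean-square error
    obs, sim = np.asarray(obs), np.asarray(sim)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def nse(obs, sim):
    # Nash-Sutcliffe efficiency: 1 is perfect, below 0 is worse than the mean
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rsr(obs, sim):
    # RMSE normalized by the standard deviation of the observations
    return rmse(obs, sim) / float(np.std(np.asarray(obs)))

def pbias(obs, sim):
    # Percent bias of simulated relative to observed flow volume
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 100 * np.sum(obs - sim) / np.sum(obs)

def ve(obs, sim):
    # Volumetric efficiency
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1 - np.sum(np.abs(sim - obs)) / np.sum(obs)
```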

Optimization and Stabilization of Automated Synthesis Systems for Reduced 68Ga-PSMA-11 Synthesis Time (68Ga-PSMA-11 합성 시간 단축을 위한 자동합성장치의 최적화 및 안정성 연구)

  • Ji hoon KANG;Sang Min SHIN;Young Si PARK;Hea Ji KIM;Hwa Youn JANG
    • Korean Journal of Clinical Laboratory Science / v.56 no.2 / pp.147-155 / 2024
  • Gallium-68 prostate-specific membrane antigen-11 (68Ga-PSMA-11) is a positron emission tomography radiopharmaceutical in which a Glu-urea-Lys-based ligand is labeled with 68Ga; it binds specifically to PSMA and is widely used for imaging recurrent prostate cancer and metastases. However, the preparation and quality control testing of 68Ga-PSMA-11 in medical institutions takes over 60 minutes, limiting the daily capacity of 68Ge/68Ga generators. While the generator nominally provides 1,110 MBq (30 mCi), its activity decreases over time, and the labeling yield declines irregularly. Consequently, additional preparations are needed, increasing radiation exposure for medical technicians, prolonging patient wait times, and necessitating production schedule adjustments. This study aimed to reduce the 68Ga-PSMA-11 preparation time and optimize the automated synthesis system. By shortening the reaction time between 68Ga and the PSMA-11 precursor and adjusting the number of purification steps, a faster and more cost-effective method was tested while maintaining quality. The final synthesis time was reduced from 30 to 20 minutes while meeting the standards for HEPES content, residual EtOH solvent content, and radiochemical purity. The optimized procedure minimizes radiation exposure for medical technicians, reduces patient wait times, and maintains consistent production schedules, making it suitable for clinical application.
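To see why the 10-minute reduction matters, the decay arithmetic for 68Ga (physical half-life about 67.7 minutes) can be sketched as follows; the scenario is illustrative, not taken from the paper:

```python
import math

T_HALF_GA68 = 67.71  # 68Ga half-life in minutes

def remaining_fraction(minutes):
    # Fraction of activity surviving after the given elapsed time
    return math.exp(-math.log(2) * minutes / T_HALF_GA68)

old = remaining_fraction(30)  # activity fraction left after a 30 min synthesis
new = remaining_fraction(20)  # after the optimized 20 min synthesis
print(f"relative gain: {new / old - 1:.1%}")  # ~10.8% more activity per batch
```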

Optimization of Total Arc Degree for Stereotactic Radiotherapy by Using Integral Biologically Effective Dose and Irradiated Volume (정위방사선치료 시 적분 생물학적 유효선량 및 방사선조사용적을 이용한 Total Arc Degree의 최적화)

  • Lim Do Hoon;Lee Myung Za;Chun Ha Chung;Kim Dae Yong
    • Radiation Oncology Journal / v.19 no.2 / pp.199-204 / 2001
  • Purpose: To find the optimal value of the total arc degree to protect normal brain tissue from high-dose radiation in stereotactic radiotherapy planning. Methods and Materials: With the XKnife-3 planning system and a 4 MV linear accelerator, the authors generated plans under various parameter values: one isocenter; collimator diameters of 12, 20, 30, 40, 50, and 60 mm; total arc degrees of 100°, 200°, 300°, 400°, 500°, and 600°; and arc intervals of 30° or 45°. After planning was complete, the plans were compared with one another using V50 (the volume of normal brain receiving a high radiation dose) and the integral biologically effective dose. Results: At a 30° arc interval, V50 decreased as the total arc degree increased, for every collimator diameter. At a 45° arc interval, V50 decreased with increasing total arc degree up to 400°, but increased at 500° and 600°. At a 30° arc interval, the integral biologically effective dose likewise decreased as the total arc degree increased, for every collimator diameter. At a 45° arc interval with collimator diameters under 40 mm, the integral biologically effective dose decreased with increasing total arc degree; with the larger collimator diameters (50 and 60 mm) it decreased up to 400° of total arc degree but increased at 500° and 600°. Conclusion: In stereotactic radiotherapy planning for brain lesions, a total arc degree of 400° is optimal. In particular, when a larger collimator of 50 mm diameter or more must be used, total arc degrees of 500° and 600° increase both V50 and the integral biologically effective dose. Stereotactic radiotherapy planning with 400° of total arc degree can therefore increase the therapeutic ratio and make effective use of personnel and machine resources in the radiotherapy department.
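For reference, the biologically effective dose that the integral metric above accumulates over the irradiated volume is conventionally written in the linear-quadratic form below (a standard textbook expression; the paper's exact volume integral may differ), where n fractions of dose d are delivered and ΔV_i are the normal-brain volume elements:

```latex
\mathrm{BED} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right),
\qquad
\mathrm{Integral\ BED} \approx \sum_{i} \mathrm{BED}(D_i)\,\Delta V_i
```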


Comparison of Helical TomoTherapy with Linear Accelerator Based Intensity-modulated Radiotherapy for Head & Neck Cases (두경부암 환자에 대한 선량체적 히스토그램에 따른 토모치료외 선형가속기기반 세기변조방사선치료의 정량적 비교)

  • Kim, Dong-Wook;Yoon, Myong-Geun;Park, Sung-Yong;Lee, Se-Byeong;Shin, Dong-Ho;Lee, Doo-Hyeon;Kwak, Jung-Won;Park, So-Ah;Lim, Young-Kyung;Kim, Jin-Sung;Shin, Jung-Wook;Cho, Kwan-Ho
    • Progress in Medical Physics / v.19 no.2 / pp.89-94 / 2008
  • TomoTherapy has the merit of treating cancer with intensity-modulated radiation, combining precise 3-D imaging from computerized tomography (CT scanning) with highly targeted radiation beams and rotating beamlets. In this paper, we compare the dose distributions of TomoTherapy and linear-accelerator-based intensity-modulated radiotherapy (IMRT) for 10 head & neck patients, using the TomoTherapy unit newly installed and operated at the National Cancer Center since September 2006. Furthermore, we estimate how the homogeneity and the normal tissue complication probability (NTCP) are changed by target motion. Inverse planning was carried out using the CadPlan planning system (CadPlan R.6.4.7, Varian Medical Systems Inc., 3100 Hansen Way, Palo Alto, CA 94304-1129, USA). For each patient, a TomoTherapy plan was also made using the TomoTherapy Hi-Art System (Hi-Art2_2_4 2.2.4.15, TomoTherapy Incorporated, 1240 Deming Way, Madison, WI 53717-1954, USA) with the same targets and optimization goals. All TomoTherapy plans compared favorably with the IMRT plans in sparing the organs at risk while keeping equivalent target dose homogeneity. Our results suggest that TomoTherapy is able to reduce the NTCP further while keeping similar target dose homogeneity.
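A minimal sketch of an NTCP estimate in the Lyman-Kutcher-Burman form often used for such plan comparisons; the DVH and the parameters TD50, m, and n below are hypothetical illustrations, not the study's model:

```python
import math

def gEUD(doses, volumes, n):
    # generalized equivalent uniform dose from a (dose, partial-volume) histogram
    a = 1.0 / n
    return sum(v * d ** a for d, v in zip(doses, volumes)) ** (1.0 / a)

def ntcp_lkb(doses, volumes, TD50, m, n):
    # Lyman-Kutcher-Burman NTCP: normal CDF of the normalized gEUD
    t = (gEUD(doses, volumes, n) - TD50) / (m * TD50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Hypothetical parotid-like DVH and parameters
doses = [10.0, 20.0, 30.0, 40.0]  # Gy
volumes = [0.4, 0.3, 0.2, 0.1]    # fractional volumes summing to 1
print(f"NTCP = {ntcp_lkb(doses, volumes, TD50=28.4, m=0.18, n=1.0):.2%}")
```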
