• Title/Summary/Keyword: Changing algorithm


Metamodeling Construction for Generating Test Case via Decision Table Based on Korean Requirement Specifications (한글 요구사항 기반 결정 테이블로부터 테스트 케이스 생성을 위한 메타모델링 구축화)

  • Woo Sung Jang;So Young Moon;R. Young Chul Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.9
    • /
    • pp.381-386
    • /
    • 2023
  • Most existing research on test case generation extracts test cases from models. In practice, however, test cases often must be generated from natural language requirements, which calls for combining natural language analysis with requirements engineering. Analyzing requirements written in Korean is difficult because sentence expressions can carry diverse meanings. As one method of generating test cases from Korean natural language requirements, we study a pipeline of natural language requirement definition analysis, a C3Tree model, a cause-effect graph, and a decision table. As an intermediate step, this paper generates test cases from C3Tree-model-based decision tables using meta-modeling. This method has the advantage that the model-to-model and model-to-text transformation processes can be maintained easily by modifying only the transformation rules: if an existing model is modified or a new model is added, only the transformation rules need to change, not the program's algorithm. In the evaluation, all combinations in the decision table were automatically generated as test cases.
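The paper's pipeline is metamodel-driven, but its end result, enumerating every condition combination of a decision table as a test case, can be illustrated directly. A minimal Python sketch, assuming a hypothetical table of binary conditions and a hand-written action rule (neither is from the paper):

    from itertools import product

    # Hypothetical decision table: three binary conditions and an action rule.
    conditions = ["logged_in", "has_permission", "resource_exists"]

    def expected_action(logged_in, has_permission, resource_exists):
        if not logged_in:
            return "redirect_to_login"
        if not has_permission:
            return "deny_access"
        return "serve_page" if resource_exists else "show_not_found"

    # Enumerate all 2^n condition combinations as test cases, mirroring the
    # paper's result that every decision table combination becomes a test case.
    test_cases = [
        {"inputs": dict(zip(conditions, combo)), "expected": expected_action(*combo)}
        for combo in product([True, False], repeat=len(conditions))
    ]

    for case in test_cases:
        print(case)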

Application study of random forest method based on Sentinel-2 imagery for surface cover classification in rivers - A case of Naeseong Stream - (하천 내 지표 피복 분류를 위한 Sentinel-2 영상 기반 랜덤 포레스트 기법의 적용성 연구 - 내성천을 사례로 -)

  • An, Seonggi;Lee, Chanjoo;Kim, Yongmin;Choi, Hun
    • Journal of Korea Water Resources Association
    • /
    • v.57 no.5
    • /
    • pp.321-332
    • /
    • 2024
  • Understanding the status of surface cover in riparian zones is essential for river management and flood disaster prevention. Traditional survey methods rely on expert interpretation of vegetation through vegetation mapping or indices. However, these methods are limited in their ability to accurately reflect dynamically changing river environments. Against this backdrop, this study applied the Random Forest method to satellite imagery to assess the distribution of vegetation in rivers over multiple years, using the Naeseong Stream as a case study. Remote sensing data from Sentinel-2 imagery were combined with ground truth data on the Naeseong Stream's surface cover in 2016. The Random Forest machine learning algorithm was used to extract and train on 1,000 samples per surface cover class from ten predetermined sampling areas, followed by validation. A sensitivity analysis, an annual surface cover analysis, and an accuracy assessment were conducted to evaluate the method's applicability. The results showed an accuracy of 85.1% against the validation data. The sensitivity analysis indicated the highest efficiency with 30 trees, 800 samples, and the downstream river section. The surface cover analysis accurately reflected the actual river environment. The accuracy analysis identified 14.9% boundary and internal errors, with high accuracy observed in six categories, excluding scattered and herbaceous vegetation. Although this study focused on a single river, applying the surface cover classification method to multiple rivers is necessary to obtain more accurate and comprehensive data.
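A minimal scikit-learn sketch of the classification step, assuming per-pixel Sentinel-2 band reflectances and ground-truth class labels have already been sampled (the arrays below are random placeholders; only the 30-tree setting comes from the paper's sensitivity analysis):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Hypothetical arrays: X holds Sentinel-2 band reflectances per pixel sample,
    # y holds surface cover class labels from the 2016 ground truth.
    rng = np.random.default_rng(0)
    X = rng.random((8000, 10))          # e.g. 1,000 samples x 8 classes, 10 bands
    y = rng.integers(0, 8, size=8000)   # 8 surface cover classes (placeholder)

    X_train, X_val, y_train, y_val = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)

    # 30 trees: the setting the paper's sensitivity analysis found most efficient.
    clf = RandomForestClassifier(n_estimators=30, random_state=0)
    clf.fit(X_train, y_train)

    print("validation accuracy:", accuracy_score(y_val, clf.predict(X_val)))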

Target dose study of effects of changes in the AAA Calculation resolution on Lung SABR plan (Lung SABR plan시 AAA의 Calculation resolution 변화에 의한 Target dose 영향 연구)

  • Kim, Dae Il;Son, Sang Jun;Ahn, Bum Seok;Jung, Chi Hoon;Yoo, Suk Hyun
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.26 no.2
    • /
    • pp.171-176
    • /
    • 2014
  • Purpose: To analyze the changes in target dose caused by varying the calculation grid of the AAA in lung SABR plans, to investigate the associated effects, and to consider a suitable method of application. Materials and Methods: All 4D CT images used for planning were acquired with a Brilliance Big Bore CT (Philips, Netherlands). Lung SABR plans (Eclipse™ ver. 10.0.42, Varian, USA) were calculated with the anisotropic analytical algorithm (AAA, ver. 10, Varian Medical Systems, Palo Alto, CA, USA) using calculation grids of 1.0, 3.0, and 5.0 mm. Results: Across 10 lung SABR cases calculated with each of the 1.0 mm, 3.0 mm, and 5.0 mm grids, the 1.0 mm grid yielded a V98 of about 99.5% ± 1.5% of the prescribed dose, a Dmin of about 92.5% ± 1.5% of the prescribed dose, and a homogeneity index (HI) of 1.0489 ± 0.0025. With the 3.0 mm grid, V98 was about 90% ± 4.5% of the prescribed dose, Dmin about 87.5% ± 3%, and HI about 1.07 ± 1. With the 5.0 mm grid, V98 was about 63% ± 15% of the prescribed dose, Dmin about 83% ± 4%, and HI about 1.13 ± 0.2. Conclusion: A 1.0 mm calculation grid improves the accuracy of dose calculation over 3.0 mm and 5.0 mm grids, although calculation time increases, especially for relatively small PTVs. Because lung targets involve relatively large dose spread, low density, and small PTVs, a 1.0 mm calculation grid is considered the better choice.
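The reported target metrics can be computed from a calculated dose grid and a PTV mask. A minimal sketch under assumed array inputs; the homogeneity index here (maximum PTV dose over prescribed dose) is one common convention and an assumption, not necessarily the study's exact definition:

    import numpy as np

    def plan_metrics(dose, ptv_mask, prescribed_dose):
        """Compute V98, Dmin, and a homogeneity index for the PTV.

        dose: 3D array of dose values (Gy); ptv_mask: boolean array, same shape.
        """
        ptv_dose = dose[ptv_mask]
        v98 = np.mean(ptv_dose >= 0.98 * prescribed_dose) * 100  # % of PTV volume
        d_min = ptv_dose.min() / prescribed_dose * 100            # % of prescription
        hi = ptv_dose.max() / prescribed_dose                     # assumed HI convention
        return v98, d_min, hi

    # Hypothetical example: a coarse grid blurs the dose at the PTV edge,
    # lowering V98 and Dmin, as the paper observes for 3.0 and 5.0 mm grids.
    dose = np.full((20, 20, 20), 50.0)
    mask = np.zeros_like(dose, dtype=bool)
    mask[5:15, 5:15, 5:15] = True
    print(plan_metrics(dose, mask, prescribed_dose=48.0))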

Development of Position Encoding Circuit for a Multi-Anode Position Sensitive Photomultiplier Tube (다중양극 위치민감형 광전자증배관을 위한 위치검출회로 개발)

  • Kwon, Sun-Il;Hong, Seong-Jong;Ito, Mikiko;Yoon, Hyun-Suk;Lee, Geon-Song;Sim, Kwang-Souk;Rhee, June-Tak;Lee, Dong-Soo;Lee, Jae-Sung
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.42 no.6
    • /
    • pp.469-477
    • /
    • 2008
  • Purpose: The goal of this paper is to present the design and performance of a position encoding circuit for a 16×16 array of a position-sensitive multi-anode photomultiplier tube for small-animal PET scanners. The circuit, which reduces the number of readout channels from 256 to 4, is based on a charge division method using a resistor array. Materials and Methods: The position encoding circuit was simulated with PSpice before fabrication. It reads out the signals from H9500 flat-panel PMTs (Hamamatsu Photonics K.K., Japan) on which 1.5×1.5×7.0 mm³ L0.9GSO (Lu1.8Gd0.2SiO5:Ce) crystals were mounted. For coincidence detection, two different PET modules were used: one consisted of a single 29×29 L0.9GSO crystal layer, and the other of two layers, 28×28 and 29×29 L0.9GSO, offset by half a crystal pitch in the x- and y-directions. A crystal mapping algorithm was also developed to identify crystals. Results: Each crystal was clearly visible in flood images. The crystal identification capability was further enhanced by changing the values of the resistors near the edge of the resistor array. The energy resolution of individual crystals was about 11.6% (SD 1.6). The flood images were segmented well with the proposed crystal mapping algorithm. Conclusion: The position encoding circuit yielded clear crystal separation and sufficient energy resolution with the H9500 flat-panel PMT and L0.9GSO crystals, and is suitable for use in small-animal PET scanners.
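Charge division reduces the 256 anode signals to four corner outputs, from which the interaction position is recovered by Anger-type ratio arithmetic. A minimal sketch of that decoding step, assuming four digitized corner charges (the corner convention is an assumption; the paper's specific resistor values are not used):

    def decode_position(a, b, c, d):
        """Anger-type decoding of four corner charges (A, B, C, D).

        Returns (x, y) in [-1, 1] and the summed energy signal.
        The corner assignment (A: top-left, B: top-right, C: bottom-left,
        D: bottom-right) is an assumed convention.
        """
        total = a + b + c + d
        x = ((b + d) - (a + c)) / total
        y = ((a + b) - (c + d)) / total
        return x, y, total

    # Hypothetical event: charge concentrated toward the top-right corner.
    print(decode_position(a=120.0, b=420.0, c=80.0, d=260.0))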

The effects of physical factors in SPECT (물리적 요소가 SPECT 영상에 미치는 영향)

  • 손혜경;김희중;나상균;이희경
    • Progress in Medical Physics
    • /
    • v.7 no.1
    • /
    • pp.65-77
    • /
    • 1996
  • Using 2-D and 3-D Hoffman brain phantoms, a 3-D Jaszczak phantom, and single photon emission computed tomography (SPECT), the effects of data acquisition parameters, attenuation, noise, scatter, and reconstruction algorithm on image quantitation as well as image quality were studied. For the data acquisition parameters, images were acquired while varying the angular increment of rotation and the radius. A smaller angular increment resulted in superior image quality. A smaller radius from the center of rotation also gave better image quality, since resolution degrades as the distance from the detector to the object increases. Using the flood data of the Jaszczak phantom, the optimal attenuation coefficient was derived as 0.12 cm⁻¹ for all collimators, and all images were subsequently corrected for attenuation using this coefficient. The flood data from the Jaszczak phantom showed a concave line profile without attenuation correction and a flat line profile with it, and the attenuation correction improved both image quality and image quantitation. To study the effects of noise, images were acquired for 1, 2, 5, 10, and 20 minutes. The 20-minute image showed much better noise characteristics than the 1-minute image, indicating that increasing the counting time reduces the noise, which follows a Poisson distribution. Images were also acquired using dual energy windows, one for the main photopeak and another for the scatter peak, and were compared with and without scatter correction. Scatter correction improved image quality so that the cold spheres and bar patterns in the Jaszczak phantom were clearly visualized; applied to the 3-D Hoffman brain phantom, it likewise yielded better image quality. In conclusion, SPECT images were significantly affected by the data acquisition parameters, attenuation, noise, scatter, and reconstruction algorithm, and these factors must be optimized or corrected to obtain useful SPECT data in clinical applications.
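One common way to apply a uniform attenuation coefficient such as the derived 0.12 cm⁻¹ is a first-order Chang-type correction, in which each reconstructed pixel is divided by its mean attenuation factor over many ray paths to the body outline. A minimal sketch under an assumed circular outline (illustrative only; not necessarily the correction procedure used in this study):

    import numpy as np

    def chang_correction(image, mu, pixel_size_cm, n_angles=64):
        """First-order Chang attenuation correction for a 2D reconstructed slice.

        Each pixel is divided by the mean of exp(-mu * L) over n_angles ray
        paths from the pixel to the boundary of an assumed circular outline.
        """
        ny, nx = image.shape
        cy, cx = (ny - 1) / 2, (nx - 1) / 2
        radius = min(cx, cy) * pixel_size_cm      # assumed circular body outline
        yy, xx = np.mgrid[0:ny, 0:nx]
        rx = (xx - cx) * pixel_size_cm
        ry = (yy - cy) * pixel_size_cm
        factors = np.zeros_like(image, dtype=float)
        for th in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
            # distance from each pixel to the circle boundary along direction th
            proj = rx * np.cos(th) + ry * np.sin(th)
            perp2 = rx**2 + ry**2 - proj**2
            L = np.sqrt(np.maximum(radius**2 - perp2, 0.0)) - proj
            factors += np.exp(-mu * np.maximum(L, 0.0))
        return image / (factors / n_angles)

    slice_img = np.ones((128, 128))
    corrected = chang_correction(slice_img, mu=0.12, pixel_size_cm=0.4)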


A Study on People Counting in Public Metro Service using Hybrid CNN-LSTM Algorithm (Hybrid CNN-LSTM 알고리즘을 활용한 도시철도 내 피플 카운팅 연구)

  • Choi, Ji-Hye;Kim, Min-Seung;Lee, Chan-Ho;Choi, Jung-Hwan;Lee, Jeong-Hee;Sung, Tae-Eung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.131-145
    • /
    • 2020
  • In line with the trend of industrial innovation, IoT technology, already used in a variety of fields, is emerging as a key element in the creation of new business models and the provision of user-friendly services through combination with big data. Data accumulated from Internet-of-Things (IoT) devices are used in many ways to build convenience-based smart systems, since they enable customized intelligent services through analysis of user environments and patterns. Recently, IoT has been applied to innovation in the public domain, for smart cities and smart transportation, such as solving traffic and crime problems using CCTV. In particular, when planning underground services or establishing a movement-volume control information system to enhance the convenience of citizens and commuters amid the congestion of public transportation such as subways and urban railways, it is necessary to comprehensively consider both the ease of securing real-time service data and the stability of security. However, previous studies that utilize image data are limited by privacy issues and by degraded object detection performance under abnormal conditions. The IoT device-based sensor data used in this study are free from privacy issues because they do not identify individuals, and can therefore be used effectively to build intelligent public services for the general, unspecified public. We utilize IoT-based infrared sensor devices for an intelligent pedestrian tracking system in a metro service that many people use daily; the temperature data measured by the sensors are transmitted in real time. The experimental environment for collecting data in real time was established at the equally spaced midpoints of a 4×4 grid in the ceiling of subway entrances where actual passenger traffic is high, measuring the temperature change of objects entering and leaving the detection spots. The measured data were preprocessed by setting reference values for the 16 areas and calculating the difference between each area's temperature and its reference value per unit of time, a method that maximizes the visibility of movement within the detection area. In addition, the values were scaled up by a factor of 10 to reflect temperature differences between areas more sensitively; for example, a sensor reading of 28.5℃ at a given time was analyzed as the value 285. The collected data thus have the characteristics of both time series data and image data of 4×4 resolution. Reflecting these characteristics, we propose a hybrid algorithm, CNN-LSTM (Convolutional Neural Network-Long Short Term Memory), that combines a CNN, which excels at image classification, with an LSTM, which is especially suitable for analyzing time series data. In this study, the CNN-LSTM algorithm is used to predict the number of people passing through one of the 4×4 detection areas.
We validated the proposed model by comparing its performance with other artificial intelligence algorithms: Multi-Layer Perceptron (MLP), Long Short Term Memory (LSTM), and RNN-LSTM (Recurrent Neural Network-Long Short Term Memory). In the experiment, the proposed CNN-LSTM hybrid model showed the best predictive performance. By utilizing the proposed devices and models, various metro services, such as real-time monitoring of public transport facilities and congestion-based emergency response, are expected to be provided without issues concerning personal information. However, the data were collected from only one side of the entrances, and data collected over a short period were applied to the prediction, so verification in other environments remains a limitation. If experimental data are collected in more varied environments, or the training data are supplemented with measurements from other sensors, the proposed model is expected to gain further reliability.
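A minimal Keras sketch of the hybrid architecture under assumed shapes: sequences of 4×4 temperature-difference frames feed a small CNN per time step, and the per-frame features are passed to an LSTM that predicts the count (layer sizes, sequence length, and training data are hypothetical, not the paper's configuration):

    import numpy as np
    from tensorflow.keras import layers, models

    TIMESTEPS = 10  # hypothetical sequence length of 4x4 sensor frames

    model = models.Sequential([
        layers.Input(shape=(TIMESTEPS, 4, 4, 1)),
        # CNN applied to each 4x4 frame independently
        layers.TimeDistributed(layers.Conv2D(16, kernel_size=2, activation="relu")),
        layers.TimeDistributed(layers.Flatten()),
        # LSTM over the sequence of per-frame CNN features
        layers.LSTM(32),
        layers.Dense(1),  # predicted passenger count for a detection area
    ])
    model.compile(optimizer="adam", loss="mse")

    # Hypothetical training data: scaled temperature differences and counts.
    X = np.random.rand(256, TIMESTEPS, 4, 4, 1)
    y = np.random.randint(0, 5, size=(256, 1)).astype("float32")
    model.fit(X, y, epochs=2, batch_size=32, verbose=0)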

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.135-149
    • /
    • 2020
  • Artificial intelligence is changing the world, and financial markets are no exception. Robo-Advisors are actively being developed, making up for the weaknesses of traditional asset allocation methods and replacing the parts that are difficult for them. A Robo-Advisor makes automated investment decisions with artificial intelligence algorithms and is used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets; it structurally avoids concentration of investment risk, offers stability in the management of large funds, and has been widely used in the financial field. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible, it not only scales to billions of examples in limited memory environments but also trains very fast compared with traditional boosting methods, and it is frequently used in many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict the risk of assets and applies the predicted risk to the covariance estimation process. Because an optimized asset allocation model estimates investment proportions from historical data, estimation errors arise between the estimation period and the actual investment period, and these errors adversely affect the optimized portfolio's performance. This study aims to improve the stability and performance of the model by predicting the volatility of the next investment period and thereby reducing the estimation errors of the optimized asset allocation model, narrowing the gap between theory and practice. For the empirical test of the suggested model, we used Korean stock market price data covering 17 years, from 2003 to 2019, composed of the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care, and staples sectors. We accumulated predictions with a moving-window method using 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results, and analyzed portfolio performance in terms of cumulative rate of return over this long period. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative return and reduction of estimation errors: the total cumulative return was 45.748%, about 5% higher than that of the risk parity model, and the estimation errors were reduced in 9 of the 10 industry sectors. Reducing the estimation errors increases the stability of the model and makes it easier to apply in practical investment. Many financial models and asset allocation models are limited in practical investment by the fundamental question of whether the past characteristics of assets will persist into the future in changing financial markets.
However, this study not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting the risks of assets with a state-of-the-art algorithm. There are various studies on parametric estimation methods for reducing estimation errors in portfolio optimization; we suggest a new, machine learning based method for reducing them in an optimized asset allocation model. This study is therefore meaningful in that it proposes an advanced artificial intelligence asset allocation model for fast-developing financial markets.
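A minimal sketch of the combination: an XGBoost regressor forecasts each asset's next-period volatility, the forecasts rescale the historical covariance matrix, and risk parity weights are solved on the adjusted matrix (the feature construction and rescaling rule are illustrative assumptions, not the paper's exact procedure):

    import numpy as np
    from scipy.optimize import minimize
    from xgboost import XGBRegressor

    def risk_parity_weights(cov):
        """Solve for long-only weights with equal risk contributions on `cov`."""
        n = cov.shape[0]

        def objective(w):
            port_var = w @ cov @ w
            rc = w * (cov @ w)                # risk contribution of each asset
            return np.sum((rc - port_var / n) ** 2)

        cons = ({"type": "eq", "fun": lambda w: w.sum() - 1},)
        res = minimize(objective, np.full(n, 1 / n),
                       bounds=[(0.0, 1.0)] * n, constraints=cons)
        return res.x

    # Hypothetical daily returns for 10 sectors.
    rng = np.random.default_rng(0)
    returns = rng.normal(0, 0.01, size=(1000, 10))

    # Toy features: forecast next-period volatility per asset from lagged
    # realized volatility with one XGBoost model per asset.
    window = 20
    realized_vol = np.array([returns[i - window:i].std(axis=0)
                             for i in range(window, len(returns))])
    X, y = realized_vol[:-1], realized_vol[1:]
    pred_vol = np.vstack([
        XGBRegressor(n_estimators=50).fit(X[:, [j]], y[:, j]).predict(X[-1:, [j]])
        for j in range(10)
    ]).ravel()

    # Rescale the historical covariance with predicted volatilities (assumed rule).
    hist_cov = np.cov(returns.T)
    hist_vol = np.sqrt(np.diag(hist_cov))
    scale = pred_vol / hist_vol
    adj_cov = hist_cov * np.outer(scale, scale)
    print(risk_parity_weights(adj_cov).round(3))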

The Availability of the step optimization in Monaco Planning system (모나코 치료계획 시스템에서 단계적 최적화 조건 실현의 유용성)

  • Kim, Dae Sup
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.26 no.2
    • /
    • pp.207-216
    • /
    • 2014
  • Purpose: We present a method for closing the gap that arises in the Monaco treatment planning system when re-optimization performed under the same conditions as the initial treatment plan yields a different plan. Materials and Methods: In the Monaco treatment planning system, the inverse calculation for volumetric modulated arc therapy or intensity modulated radiation therapy is optimized in two steps. In this study, the first plan was completed by running the full optimization without changing the optimization conditions between step 1 and step 2, as in a typical sequential optimization; a pencil beam algorithm and a Monte Carlo algorithm were applied in step 2. We compared the initial plan with a plan re-optimized under the same optimization conditions, and then evaluated the planned dose by measurement. When re-optimizing the initial treatment plan, the second plan applied step-wise adjustment of the optimization conditions. Results: When the usual optimization was run again under the same conditions as the completed initial treatment plan, the result was not the same. The dose-volume histograms from the treatment planning system showed a similar trend, but the re-optimized plan exhibited different values that did not satisfy the optimized dose conditions, dose homogeneity, and dose limits, and dosimetric comparison showed differences of more than 20%. With different dose algorithms, the measured results likewise disagreed. Conclusion: The optimization process reaches the ultimate goal of treatment planning through repeated trial and error. If re-optimization is performed trusting only the completed initial treatment plan, a different treatment plan may result, and a merely similar plan may not satisfy the optimization goals. When performing re-optimization, step-wise optimization conditions should be applied while verifying the dose distribution throughout the optimization process.

Evaluation of Dose Change by Using the Deformable Image Registration (DIR) on the Intensity Modulated Radiation Therapy (IMRT) with Glottis Cancer (성문암 세기조절 방사선치료에서 변형영상정합을 이용한 선량변화 평가)

  • Kim, Woo Chul;Min, Chul Kee;Lee, Suk;Choi, Sang Hyoun;Cho, Kwang Hwan;Jung, Jae Hong;Kim, Eun Seog;Yeo, Seung-Gu;Kwon, Soo-Il;Lee, Kil-Dong
    • Progress in Medical Physics
    • /
    • v.25 no.3
    • /
    • pp.167-175
    • /
    • 2014
  • The purpose of this study is to evaluate the variation in the dose delivered to patients with glottis cancer under intensity modulated radiation therapy (IMRT), using 3D registration of CBCT (cone beam CT) images and deformable image registration (DIR) techniques. CBCT images obtained at one-week intervals were registered using a B-spline algorithm in the DIR system, and doses were recalculated on the newly obtained CBCT images. The dose distributions to the tumor and the critical organs were compared with the reference plan. Between weeks 3 and 5, body weight increased by 1.38 to 2.04 kg on average, and the body surface contour changed by 2.1 mm. From the third week, the dose delivered to the carotid increased by more than 8.76% relative to the plan, while the dose to the thyroid gland decreased by 26.4%. The physical evaluation factors of the tumor, PITV, TCI, rDHI, mDHI, and CN, decreased by 4.32%, 5.78%, 44.54%, 12.32%, and 7.11%, respectively. Moreover, Dmax, Dmean, V67.50, and D95 for the PTV changed by 2.99%, 1.52%, 5.78%, and 11.94%, respectively. Although tumor volume did not change with weight, the patient's body shape did, and IMRT with its narrow margins responded sensitively to such changes. For glottis IMRT, the patient's weight changes should be observed and recorded, the actual dose distribution should be evaluated using DIR techniques, and adaptive treatment planning during the treatment course is needed to deliver an accurate dose to the patient.
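A minimal sketch of B-spline deformable registration between a planning CT and a later CBCT using SimpleITK, a common open-source toolkit (the study's DIR system is not stated to be SimpleITK, and the file names and parameters below are assumptions):

    import SimpleITK as sitk

    # Hypothetical inputs: planning CT (fixed) and a weekly CBCT (moving).
    fixed = sitk.ReadImage("planning_ct.nii.gz", sitk.sitkFloat32)
    moving = sitk.ReadImage("week3_cbct.nii.gz", sitk.sitkFloat32)

    # Initialize a B-spline transform over the fixed image domain.
    mesh_size = [8, 8, 8]  # assumed control-point grid
    initial_tx = sitk.BSplineTransformInitializer(fixed, mesh_size)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsLBFGSB(numberOfIterations=100)
    reg.SetInitialTransform(initial_tx, inPlace=False)
    reg.SetInterpolator(sitk.sitkLinear)

    tx = reg.Execute(fixed, moving)

    # Resample the CBCT onto the planning CT frame so doses recalculated on it
    # can be compared with the reference plan, as in the study's workflow.
    warped = sitk.Resample(moving, fixed, tx, sitk.sitkLinear, 0.0)
    sitk.WriteImage(warped, "week3_cbct_deformed.nii.gz")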

VKOSPI Forecasting and Option Trading Application Using SVM (SVM을 이용한 VKOSPI 일 중 변화 예측과 실제 옵션 매매에의 적용)

  • Ra, Yun Seon;Choi, Heung Sik;Kim, Sun Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.4
    • /
    • pp.177-192
    • /
    • 2016
  • Machine learning is a field of artificial intelligence. It refers to an area of computer science concerned with giving machines the ability to perform their own data analysis, decision making, and forecasting. One representative machine learning model is the artificial neural network, a statistical learning algorithm inspired by the neural networks of biology. Other machine learning models include the decision tree, naive Bayes, and SVM (support vector machine) models. Among these, we use the SVM model in this study because it is designed for the classification and regression analysis that fit our study well. The core principle of the SVM is to find a reasonable hyperplane that separates different groups in the data space. Given information about data in any two groups, the SVM model judges which group new data belong to based on the hyperplane obtained from the given data set; thus, the more meaningful data available, the better the machine learns. In recent years, many financial experts have focused on machine learning, seeing the possibility of combining it with the financial field, where vast amounts of financial data exist. Machine learning techniques have proved powerful in describing non-stationary and chaotic stock price dynamics, and much research on forecasting stock prices with machine learning algorithms has been conducted successfully. Recently, financial companies have begun to provide Robo-Advisor services, a compound of "robot" and "advisor," which can perform various financial tasks through advanced algorithms using rapidly changing, huge amounts of data. A Robo-Advisor's main tasks are to advise investors according to their personal investment propensity and to manage portfolios automatically. In this study, we propose a method of forecasting the Korean volatility index, VKOSPI, using the SVM model and applying the forecast to real option trading to increase trading performance. VKOSPI is a measure of the future volatility of the KOSPI 200 index based on KOSPI 200 index option prices, similar to the VIX index based on S&P 500 option prices in the United States. The Korea Exchange (KRX) calculates and announces the VKOSPI index in real time. VKOSPI behaves like ordinary volatility and affects option prices: the directions of VKOSPI and option prices show a positive relation regardless of option type (call and put options with various strike prices), because if volatility increases, all call and put option premiums increase as the probability of exercise rises. Investors can see in real time how much an option price rises with volatility through Vega, the Black-Scholes measure of an option's sensitivity to changes in volatility. Therefore, accurate forecasting of VKOSPI movements is one of the important factors that can generate profit in option trading. In this study, we verified with real option data that accurate forecasts of VKOSPI can yield a large profit in real option trading. To the best of our knowledge, there have been no studies on predicting the direction of VKOSPI with machine learning and applying the prediction to actual option trading.
In this study, we predicted daily VKOSPI changes with the SVM model and then took intraday option strangle positions, which profit as option prices fall, only on days when VKOSPI was expected to decline during the daytime. We analyzed the results and tested whether the approach is applicable to real option trading based on the SVM's predictions. The prediction accuracy for VKOSPI was 57.83% on average, and the number of position entries was 43.2, less than half of the benchmark (100); a small number of trades is an indicator of trading efficiency. In addition, the experiment showed that the trading performance was significantly higher than the benchmark.
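A minimal scikit-learn sketch of the forecasting step, assuming lagged VKOSPI values as features and a binary decline/no-decline label (the series, lag count, and split are hypothetical; the paper's exact inputs are not listed in the abstract):

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    # Hypothetical daily VKOSPI series; label = 1 if the next day's VKOSPI declines.
    rng = np.random.default_rng(0)
    vkospi = 15 + np.cumsum(rng.normal(0, 0.3, size=500))

    lags = 5
    X = np.array([vkospi[i - lags:i] for i in range(lags, len(vkospi) - 1)])
    y = (vkospi[lags + 1:] < vkospi[lags:-1]).astype(int)

    split = int(len(X) * 0.8)
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    model.fit(X[:split], y[:split])

    # Enter a strangle position only on days the model predicts a VKOSPI decline.
    signals = model.predict(X[split:])
    print("predicted decline days:", int(signals.sum()), "of", len(signals))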