• Title/Summary/Keyword: Performance Model (성능모델)


Nitroglycerin-Challenged Tc-99m MIBI Quantitative Gated SPECT to Predict Functional Recovery After Coronary Artery Bypass Surgery (니트로글리세린 투여 Tc-99m-MIBI 정량 게이트 심근SPECT를 이용한 관상동맥우회로술 후 심근 기능 회복 예측)

  • Lee, Dong-Soo;Kim, Yu-Kyeong;Cheon, Gi-Jeong;Paeng, Jin-Chul;Lee, Myoung-Mook;Kim, Ki-Bong;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine
    • /
    • v.37 no.5
    • /
    • pp.278-287
    • /
    • 2003
  • Purpose: The performance of nitroglycerin-challenged Tc-99m-MIBI quantitative gated SPECT for the detection of viable myocardium was compared with rest/24-hour redistribution Tl-201 SPECT. Materials and Methods: In 22 patients with coronary artery disease, rest Tl-201 / dipyridamole stress Tc-99m-MIBI gated / 24-hour redistribution Tl-201 SPECT was performed, and gated SPECT was repeated on-site after sublingual administration of nitroglycerin (0.6 mg). Follow-up gated SPECT was done 3 months after coronary artery bypass graft surgery. For 20 segments per patient, perfusion at rest and at 24-hour redistribution, and wall motion and thickening at baseline and in the nitroglycerin-challenged state, were quantified. Four quantitative viability markers were evaluated and compared: (1) rest thallium uptake, (2) thallium uptake on 24-hour redistribution SPECT, (3) systolic wall thickening at baseline, and (4) systolic wall thickening under nitroglycerin challenge. Results: Among 100 revascularized dysfunctional segments, wall motion improved in 66 segments (66%) on follow-up gated myocardial SPECT after bypass surgery. On receiver operating characteristic (ROC) curve analysis, the sensitivity and specificity of rest and 24-hour delayed redistribution Tl-201 SPECT were 79% and 44%, and 82% and 44%, respectively, at the optimal cutoff of 50% Tl-201 uptake. The sensitivity and specificity of systolic wall thickening at baseline and under nitroglycerin challenge were 49% and 50%, and 64% and 65%, respectively, at the optimal cutoff of 15% systolic wall thickening. The area under the ROC curve of nitroglycerin-challenged systolic wall thickening was significantly larger than that of baseline systolic wall thickening (p=0.004). Conclusion: Nitroglycerin-challenged quantitative gated Tc-99m-MIBI SPECT was a useful method for predicting functional recovery of dysfunctional myocardium.
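The reported sensitivities and specificities come from thresholding each quantitative marker against observed wall-motion recovery. A minimal sketch of that evaluation in Python, using made-up segment values rather than the study's data:

```python
# Sensitivity/specificity of a viability marker at a fixed cutoff.
# Segment values below are illustrative only -- not the study's data.

def sens_spec(marker_values, recovered, cutoff):
    """Classify a segment 'viable' when marker >= cutoff.

    Sensitivity = viable calls among segments that recovered;
    specificity = non-viable calls among segments that did not.
    """
    tp = sum(1 for m, r in zip(marker_values, recovered) if m >= cutoff and r)
    fn = sum(1 for m, r in zip(marker_values, recovered) if m < cutoff and r)
    tn = sum(1 for m, r in zip(marker_values, recovered) if m < cutoff and not r)
    fp = sum(1 for m, r in zip(marker_values, recovered) if m >= cutoff and not r)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical systolic wall thickening (%) and recovery outcomes.
thickening = [5, 10, 16, 20, 8, 25, 12, 18, 3, 22]
recovered = [False, True, True, True, False, True, True, True, False, False]

sensitivity, specificity = sens_spec(thickening, recovered, cutoff=15)
```

Sweeping the cutoff and plotting sensitivity against 1 − specificity yields the ROC curve used in the comparison.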

Traffic Forecasting Model Selection of Artificial Neural Network Using Akaike's Information Criterion (AIC(Akaike's Information Criterion)을 이용한 교통량 예측 모형)

  • Kang, Weon-Eui;Baik, Nam-Cheol;Yoon, Hye-Kyung
    • Journal of Korean Society of Transportation
    • /
    • v.22 no.7 s.78
    • /
    • pp.155-159
    • /
    • 2004
  • Recently, there have been many attempts to apply artificial neural network (ANN) structures and training methods to traffic volume forecasting. ANNs have a powerful pattern-recognition capability as flexible non-linear models. However, because of their non-linearity and large number of parameters, ANNs are prone to overfitting. This research applies a variety of model selection criteria to mitigate the overfitting problem. In particular, it analyzes which selection criterion avoids overfitting and guarantees transferability over time. The results of this study are as follows. First, the model selected in-sample does not guarantee the best out-of-sample performance; that is, as many existing studies have found, the best in-sample model bears little relation to out-of-sample performance. Second, regarding the stability of the model selection criteria, AIC3, AICC, and BIC are usable, but AIC4 shows large variation compared with the best model. In time-series analysis and forecasting, more quantitative data analysis and further time-series analysis are needed, because model uncertainty affects the correlation between in-sample and out-of-sample performance.
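The criteria compared above all trade in-sample fit against model complexity. A small sketch of the standard least-squares forms of AIC, AIC3, AICC, and BIC (the candidate models and numbers are hypothetical, not the paper's):

```python
import math

def aic_family(n, sse, k):
    """Information criteria for a least-squares model with k parameters
    fitted to n observations (Gaussian-likelihood form)."""
    base = n * math.log(sse / n)
    aic = base + 2 * k
    aic3 = base + 3 * k                           # heavier-penalty variant
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)    # small-sample correction
    bic = base + k * math.log(n)
    return {"AIC": aic, "AIC3": aic3, "AICC": aicc, "BIC": bic}

# Two hypothetical ANN candidates: the larger one fits slightly better
# in-sample but pays a bigger complexity penalty.
small = aic_family(n=100, sse=250.0, k=5)
large = aic_family(n=100, sse=245.0, k=25)
```

Under every criterion here the small network wins: the marginal in-sample gain of the large network does not cover its penalty, which is exactly the overfitting control the criteria provide.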

Study on the Fire Risk Prediction Assessment due to Deterioration contact of combustible cables in Underground Common Utility Tunnels (지하공동구내 가연성케이블의 열화접촉으로 인한 화재위험성 예측평가)

  • Ko, Jaesun
    • Journal of the Society of Disaster Information
    • /
    • v.11 no.1
    • /
    • pp.135-147
    • /
    • 2015
  • Modern underground common utility tunnels are underground facilities that jointly accommodate two or more kinds of services required both for citizens' daily lives and for the central functions of the country: electricity, telecommunications, waterworks, city gas, and sewerage, as well as air-conditioning and heating facilities, vacuum dust collectors, and information-processing cables. However, it is difficult to respond quickly to fire accidents in them, and hard to enter a tunnel to extinguish a fire because of the toxic gases and smoke generated when the various cables burn. Thus, in the event of a fire, not only is the nerve center of the country paralyzed, with significant property damage and loss of communication, but citizens are also greatly inconvenienced. Noting that in domestic and foreign common utility tunnel fire cases to date most fires have broken out through short circuits caused by electrical work and through deterioration contact of combustible cables, the purpose of this paper is to scientifically analyze fire behavior by building a model of an actual common utility tunnel and reproducing a fire. A fire experiment was conducted with a line-type fixed-temperature detector, fire door, connection deluge set, and ventilation equipment installed in the tunnel, with transmission and distribution cables coated with fireproof paint over a certain section, and with the heating pipes given a fireproof covering. As a result, in the case of Type II, the maximum temperature was measured as 932°C, and the line-type fixed-temperature detector displayed the fire location exactly on the receiver at a constant temperature. The transmission and distribution cables painted with fireproof paint over a certain section (Type III) were found not to be fire resistant, while the fireproof-covered heating pipes were fire resistant for about 30 minutes.
Fire simulation was also carried out by entering the fire load from the real fire test; the resulting maximum temperature of 943°C is almost identical to the 932°C measured in the real fire test. Therefore, it is considered that fire behavior can be predicted by conducting a fire simulation using only the tunnel's fire load, and the simulated heat release rate, smoke layer height, and O2, CO, and CO2 concentrations are judged applicable as values for a real fire experiment. By analyzing and accumulating the experimental data built in this study together with fire cases continuously every year, and by complementing laws, regulations, and administrative manuals, it is expected that more reliable information on domestic underground common utility tunnel fire accidents can be provided, contributing to effective and systematic construction and maintenance.
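As a quick check of the agreement reported above, the relative error between the simulated and measured peak temperatures can be computed directly from the two figures in the text:

```python
# Agreement between the real fire test and the simulation, expressed as
# the relative error of the simulated peak temperature (values from the
# abstract above).
measured_c = 932.0
simulated_c = 943.0
relative_error = abs(simulated_c - measured_c) / measured_c  # about 1.2%
```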

Prognostic Usefulness of Maximum Standardized Uptake Value on FDG-PET in Surgically Resected Non-small-cell Lung Cancer (수술로 제거된 비소세포폐암의 예후 예측에 있어 FDG-PET 최대 표준화 섭취계수의 유용성)

  • Nguyen Xuan Canh;Lee Won-Woo;Sung Sook-Whan;Jheon Sang-Hoon;Kim Yu-Kyeong;Lee Dong-Soo;Chung June-Key;Lee Myung-Chul;Kim Sang-Eun
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.40 no.4
    • /
    • pp.205-210
    • /
    • 2006
  • Purpose: FDG uptake on positron emission tomography (PET) has been considered a prognostic indicator in non-small cell lung cancer (NSCLC). The aim of this study was to assess the clinical significance of the maximum standardized uptake value (maxSUV) for recurrence prediction in patients with surgically resected NSCLC. Materials and Methods: NSCLC patients (n=42, F:M=14:28, age 62.3±12.3 y) who underwent curative resection after FDG-PET were enrolled. Twenty-nine patients had pathologic stage I disease, and 13 had pathologic stage II. Thirty-one patients were additionally treated with adjuvant oral chemotherapy. MaxSUVs of primary tumors were analyzed for correlation with tumor recurrence and compared with pathologic or clinical prognostic indicators. The median follow-up duration was 16 mo (range, 3-26 mo). Results: Ten (23.8%) of the 42 patients experienced recurrence during a median follow-up of 7.5 mo (range, 3-13 mo). Univariate analysis revealed that disease-free survival (DFS) was significantly correlated with maxSUV (<7 vs. ≥7, p=0.006), tumor size (<3 cm vs. ≥3 cm, p=0.024), and tumor cell differentiation (well/moderate vs. poor, p=0.044). However, multivariate Cox proportional hazards analysis identified maxSUV as the single determinant of DFS (p=0.014). Patients with a maxSUV of ≥7 (n=10) had a significantly lower 1-year DFS rate (50.0%) than those with a maxSUV of <7 (n=32, 87.5%). Conclusion: MaxSUV is a significant independent predictor of recurrence in surgically resected NSCLC. FDG uptake can be added to other well-known factors in prognosis prediction of NSCLC.
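DFS rates like those above are usually obtained with the Kaplan-Meier product-limit estimator. A self-contained sketch with hypothetical follow-up data for a high- and a low-maxSUV group (not the study's patients):

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.

    times:  follow-up in months; events: True = recurrence observed,
    False = censored. Returns (event_time, S(t)) pairs.
    """
    surv, curve = 1.0, []
    for t in sorted(set(times)):
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei)
        n = sum(1 for ti in times if ti >= t)  # still at risk at t
        if d:
            surv *= 1 - d / n
            curve.append((t, surv))
    return curve

# Hypothetical DFS data: high-maxSUV (>=7) vs. low-maxSUV (<7) groups.
hi_times, hi_events = [3, 5, 7, 9, 12, 12], [True, True, True, False, True, False]
lo_times, lo_events = [6, 10, 12, 12, 12, 12], [True, False, False, False, False, False]

hi_curve = kaplan_meier(hi_times, hi_events)
lo_curve = kaplan_meier(lo_times, lo_events)
```

The survival curve drops only at observed event times; censored patients leave the risk set without causing a drop, which is how unequal follow-up is handled.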

Pervaporation Separation of Ethanol Aqueous Solution through Carbonate-type Polyurethane Membrane II. The Effect of Pendent Anionic Group (카보네이트형 폴리우레탄막을 이용한 에탄올 수용액의 투과증발분리 II. 음이온성기에 의한 영향)

  • Han, In Ki;Oh, Boo Keum;Lee, Young Moo;Noh, Si Tae
    • Applied Chemistry for Engineering
    • /
    • v.3 no.4
    • /
    • pp.595-604
    • /
    • 1992
  • Carbonate-type polyurethane resins containing anionic moieties were synthesized by the NCO-terminated prepolymer method. Membranes were manufactured from the polymer solution, and the separation of aqueous ethanol solution was investigated. To enhance the properties of the urethane resin, a carbonate-type polyol (PTMCG) was used. α′,α″-Dimethylolpropionic acid was used as a chain extender to increase the hydrophilicity of the urethane membrane. The ionization of the pendent carboxylic groups in the urethane resin was carried out using trimethylamine. To confirm the formation of anionic groups in the urethane resin, IR spectra of model compounds were compared with those of the urethane resins. It was confirmed that the concentration of hard segments and hydrogen bonding contributed to the properties of the urethane resin in which the mole ratio of chain extender to polyol ranged from 3:1 to 5:1. The carbonate-type polyurethane containing pendent carboxylic groups (PU) had a Tg of around −25°C and a Tm of 45°C, measured by DSC. The transition temperatures of the one containing pendent anionic groups (APU), prepared by ionization of PU, shifted to a temperature region 8-10°C lower than those of PU. The pervaporation membrane was prepared by the casting method, with N,N-dimethylformamide (DMF) as the solvent and hexamethylene diisocyanate (HMDI) as the crosslinking agent. The degree of swelling increased with the ethanol concentration in the mixture, and the swelling degree of the membrane could be controlled by crosslinking. The pervaporation results were as follows: separation factor, 2.3-9.8; flux, 27-79.5 g/m²·hr. Pervaporation separation capacity could be enhanced by reducing the molecular weight of the polyol from 2,000 to 1,000.
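The separation factor and flux quoted above follow from the standard pervaporation definitions; a small sketch with hypothetical feed and permeate compositions (not the paper's measurements):

```python
def separation_factor(feed_water_frac, permeate_water_frac):
    """Pervaporation separation factor alpha(water/ethanol) from
    weight fractions of water in the feed and permeate."""
    xw, xe = feed_water_frac, 1 - feed_water_frac
    yw, ye = permeate_water_frac, 1 - permeate_water_frac
    return (yw / ye) / (xw / xe)

def flux(permeate_mass_g, area_m2, time_hr):
    """Permeate flux in g/(m^2 * hr)."""
    return permeate_mass_g / (area_m2 * time_hr)

# Hypothetical run: a 10 wt% water feed enriched to 40 wt% water permeate.
alpha = separation_factor(0.10, 0.40)
j = flux(permeate_mass_g=1.2, area_m2=0.03, time_hr=1.0)
```

The illustrative flux of 40 g/m²·hr falls inside the paper's reported 27-79.5 g/m²·hr range; the separation factor of 6.0 is likewise only an example value.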


A Study on GPU-based Iterative ML-EM Reconstruction Algorithm for Emission Computed Tomographic Imaging Systems (방출단층촬영 시스템을 위한 GPU 기반 반복적 기댓값 최대화 재구성 알고리즘 연구)

  • Ha, Woo-Seok;Kim, Soo-Mee;Park, Min-Jae;Lee, Dong-Soo;Lee, Jae-Sung
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.43 no.5
    • /
    • pp.459-467
    • /
    • 2009
  • Purpose: Maximum likelihood-expectation maximization (ML-EM) is a statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although ML-EM has many advantages in accuracy and utility, its use is limited by the computational burden of iterative processing on a CPU (central processing unit). In this study, we developed a parallel computing technique on a GPU (graphics processing unit) for the ML-EM algorithm. Materials and Methods: Using a GeForce 9800 GTX+ graphics card and CUDA (compute unified device architecture), NVIDIA's parallel computing technology, the projection and backprojection in the ML-EM algorithm were parallelized. The time spent per iteration on computing the projection, the errors between measured and estimated data, and the backprojection was measured. Total time included the latency of data transmission between RAM and GPU memory. Results: The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively; in this case, computing speed was improved about 15-fold on the GPU. When the number of iterations was increased to 1024, the CPU- and GPU-based computations took 18 min and 8 sec in total, respectively. The improvement was about 135-fold and was caused by delays in CPU-based computing after a certain number of iterations. In contrast, the GPU-based computation showed very small variation in time delay per iteration due to the use of shared memory. Conclusion: GPU-based parallel computation for ML-EM significantly improved computing speed and stability. The developed GPU-based ML-EM algorithm could easily be modified for other imaging geometries.
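The projection and backprojection steps that the paper offloads to the GPU form the core ML-EM multiplicative update. A CPU-side NumPy sketch of that update on a toy system (illustrative only, not the paper's CUDA implementation):

```python
import numpy as np

def ml_em(A, y, n_iter=32, eps=1e-12):
    """ML-EM update: x <- x / (A^T 1) * A^T( y / (A x) ), i.e. forward
    projection, ratio to the measurement, backprojection, then a
    multiplicative image update."""
    x = np.ones(A.shape[1])
    norm = A.T @ np.ones(A.shape[0])      # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                      # forward projection
        ratio = y / np.maximum(proj, eps) # compare with measured data
        x = x * (A.T @ ratio) / np.maximum(norm, eps)  # backproject, update
    return x

# Toy 2-pixel / 3-detector system with noiseless data from a known image.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
y = A @ x_true
x_hat = ml_em(A, y, n_iter=200)
```

On the GPU version described above, the `A @ x` and `A.T @ ratio` products are exactly the per-iteration kernels that get parallelized.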

A Topic Modeling-based Recommender System Considering Changes in User Preferences (고객 선호 변화를 고려한 토픽 모델링 기반 추천 시스템)

  • Kang, So Young;Kim, Jae Kyeong;Choi, Il Young;Kang, Chang Dong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.43-56
    • /
    • 2020
  • Recommender systems help users make the best choice among various options. In particular, recommender systems play an important role on internet sites, where innumerable pieces of digital information are generated every second. Many studies on recommender systems have focused on accurate recommendation, but some problems must be overcome for a recommender system to be commercially successful. First, there is a lack of transparency: users cannot know why products are recommended. Second, the recommender system cannot immediately reflect changes in user preferences: although a user's product preferences change over time, the system must rebuild its model to reflect them. Therefore, in this study, we proposed a recommendation methodology that applies topic modeling and sequential association rule mining to review data to solve these problems. Product reviews provide useful information for recommendation because they include not only a rating of the product but also various contents such as user experiences and emotional state; reviews therefore imply user preference for the product. Topic modeling is thus useful for explaining why items are recommended to users, and sequential association rule mining is useful for identifying changes in user preferences. The proposed methodology is largely divided into two phases. The first phase creates a user profile based on topic modeling: after extracting topics from user reviews on products, a user profile over topics is created. The second phase recommends products using sequential rules that appear in users' buying behaviors as time passes, where the buying behaviors are derived from the change in each user's topics.
A collaborative filtering-based recommender system was developed as a benchmark, and we compared the performance of the proposed methodology against it using Amazon's review dataset. As evaluation metrics, accuracy, recall, precision, and F1 were used. For topic modeling, collapsed Gibbs sampling was conducted, and we extracted 15 topics. Among the main topics, topics 1, 3, 4, 7, 9, 13, and 14 are related to "comedy shows", "high-teen drama series", "crime investigation drama", "horror theme", "British drama", "medical drama", and "science fiction drama", respectively. In the comparative analysis, the proposed methodology outperformed the collaborative filtering-based recommender system. From the results, we found that the time just prior to the recommendation was very important for inferring changes in user preference. Therefore, the proposed methodology not only secures the transparency of the recommender system but also reflects user preferences that change over time. However, the proposed methodology has some limitations: it cannot recommend products precisely when the number of products included in a topic is large, and the number of sequential patterns is small because the number of topics is small. Future research needs to address these limitations.
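The second phase's sequential rules can be illustrated with a minimal first-order miner over per-user topic sequences; the trajectories, topic labels, and thresholds below are invented for illustration, not taken from the Amazon dataset:

```python
from collections import Counter

def sequential_rules(sequences, min_support=2):
    """First-order sequential rules 'topic a -> topic b' mined from
    time-ordered topic sequences (one sequence per user).

    support(a->b)    = number of adjacent (a, b) transitions;
    confidence(a->b) = support / number of transitions leaving a.
    """
    pair_counts, from_counts = Counter(), Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            pair_counts[(a, b)] += 1
            from_counts[a] += 1
    return {
        (a, b): {"support": c, "confidence": c / from_counts[a]}
        for (a, b), c in pair_counts.items() if c >= min_support
    }

# Hypothetical per-user topic trajectories over time
# (e.g. topic 13 might stand for "medical drama").
user_topics = [
    [1, 1, 13, 13],
    [1, 13, 13, 9],
    [1, 13, 4],
]
rules = sequential_rules(user_topics)
```

A rule such as (1, 13) with high confidence would then drive recommendations of topic-13 products to users currently in topic 1, and the topic itself explains why the items were recommended.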

Adaptive Lock Escalation in Database Management Systems (데이타베이스 관리 시스템에서의 적응형 로크 상승)

  • Chang, Ji-Woong;Lee, Young-Koo;Whang, Kyu-Young;Yang, Jae-Heon
    • Journal of KIISE:Databases
    • /
    • v.28 no.4
    • /
    • pp.742-757
    • /
    • 2001
  • Since database management systems (DBMSs) have limited lock resources, transactions requesting locks beyond the limit must be aborted. In the worst case, if such transactions are aborted repeatedly, the DBMS can become paralyzed, i.e., transactions execute but cannot commit. Lock escalation is considered a solution to this problem, but existing lock escalation methods do not provide a complete solution. In this paper, we propose a new lock escalation method, adaptive lock escalation, that solves most of these problems. First, we propose a general model for lock escalation and present the concept of the unescalatable lock, which is the major cause of transaction aborts. Second, we propose the notions of semi lock escalation, lock blocking, and selective relief as mechanisms to control the number of unescalatable locks, and then propose the adaptive lock escalation method using these notions. Adaptive lock escalation reduces needless aborts and guarantees that the DBMS is not paralyzed under excessive lock requests; it also allows graceful degradation of performance under those circumstances. Third, through extensive simulation, we show that adaptive lock escalation outperforms existing lock escalation methods. The results show that, compared with existing methods, adaptive lock escalation reduces the number of aborts and the average response time, and increases the throughput to a great extent. In particular, the number of concurrent transactions can be increased 16- to 256-fold. The contribution of this paper is significant in that it formally analyzes the role of lock escalation in lock resource management and identifies the detailed underlying mechanisms. Existing lock escalation methods rely on users or the system administrator to handle excessive lock requests;
in contrast, adaptive lock escalation relieves users of this responsibility by providing graceful degradation and preventing system paralysis through automatic control of unescalatable locks. Thus, adaptive lock escalation can contribute to developing the self-tuning DBMSs that draw much attention these days.
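Plain (non-adaptive) row-to-table lock escalation, the baseline the paper builds on, can be sketched as follows. The threshold and lock-table layout are illustrative, and the paper's adaptive mechanisms (semi lock escalation, lock blocking, selective relief) are deliberately not modeled here:

```python
class LockTable:
    """Minimal sketch of row-to-table lock escalation: once one
    transaction holds more than `threshold` row locks on a table, they
    are traded for a single table-level lock, freeing lock resources."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.row_locks = {}       # (txn, table) -> set of row ids
        self.table_locks = set()  # (txn, table)

    def lock_row(self, txn, table, row):
        if (txn, table) in self.table_locks:
            return                # already covered by a table lock
        rows = self.row_locks.setdefault((txn, table), set())
        rows.add(row)
        if len(rows) > self.threshold:      # escalate to a table lock
            del self.row_locks[(txn, table)]
            self.table_locks.add((txn, table))

    def held_locks(self):
        """Total lock-table entries currently consumed."""
        return sum(len(r) for r in self.row_locks.values()) + len(self.table_locks)

lt = LockTable(threshold=3)
for row in range(5):
    lt.lock_row("T1", "orders", row)
```

The escalation here is unconditional; a table lock that conflicts with other transactions' row locks is exactly the "unescalatable lock" situation whose control is the subject of the paper.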


An Outlier Detection Using Autoencoder for Ocean Observation Data (해양 이상 자료 탐지를 위한 오토인코더 활용 기법 최적화 연구)

  • Kim, Hyeon-Jae;Kim, Dong-Hoon;Lim, Chaewook;Shin, Yongtak;Lee, Sang-Chul;Choi, Youngjin;Woo, Seung-Buhm
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.33 no.6
    • /
    • pp.265-274
    • /
    • 2021
  • Outlier detection research on ocean data has traditionally been performed using statistical and distance-based machine learning algorithms. Recently, AI-based methods have received much attention, and so-called supervised learning methods, which require classification information for the data, are mainly used. Supervised learning demands much time and cost because classification information (labels) must be designated manually for all training data. In this study, an autoencoder based on unsupervised learning was applied for outlier detection to overcome this problem. Two experiments were designed: univariate learning, in which only SST was used among the observation data of Deokjeok Island, and multivariate learning, in which SST, air temperature, wind direction, wind speed, air pressure, and humidity were used. The data cover the 25 years from 1996 to 2020, and pre-processing considering the characteristics of ocean data was applied. Outlier detection on real SST data was then attempted with the trained univariate and multivariate autoencoders. To compare model performance, various outlier detection methods were applied to synthetic data with artificially inserted errors. Quantitative evaluation of these methods showed multivariate/univariate accuracies of about 96%/91%, respectively, indicating that the multivariate autoencoder had better outlier detection performance. Outlier detection using an unsupervised learning-based autoencoder is expected to be used in various ways in that it can reduce subjective classification errors and the cost and time required for data labeling.
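The detection rule behind the autoencoder approach is reconstruction-error thresholding. As a sketch, the trained nonlinear autoencoder is replaced here by its linear analogue, a PCA projection (the optimum of a linear autoencoder), applied to synthetic two-channel data with injected spikes; all data and the 3-sigma threshold are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "multivariate observations": two correlated channels
# (stand-ins for SST and air temperature) plus a few injected spikes.
t = np.linspace(0, 4 * np.pi, 400)
X = np.column_stack([np.sin(t), 0.8 * np.sin(t)])
X += rng.normal(scale=0.02, size=X.shape)
outlier_idx = [50, 200, 350]
X[outlier_idx] += np.array([1.5, -1.5])        # artificial errors

# Linear "autoencoder": encode to a 1-D bottleneck, decode back.
mu = X.mean(axis=0)
Xc = X - mu
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
W = Vt[:1].T                                   # tied encoder/decoder weights
recon = (Xc @ W) @ W.T + mu                    # encode -> decode

# Flag samples whose reconstruction error exceeds mean + 3*std.
err = np.linalg.norm(X - recon, axis=1)
threshold = err.mean() + 3 * err.std()
flagged = np.where(err > threshold)[0]
```

Normal samples lie near the learned subspace and reconstruct well; the injected spikes do not, so their reconstruction error is large and they are flagged.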

A preliminary assessment of high-spatial-resolution satellite rainfall estimation from SAR Sentinel-1 over the central region of South Korea (한반도 중부지역에서의 SAR Sentinel-1 위성강우량 추정에 관한 예비평가)

  • Nguyen, Hoang Hai;Jung, Woosung;Lee, Dalgeun;Shin, Daeyun
    • Journal of Korea Water Resources Association
    • /
    • v.55 no.6
    • /
    • pp.393-404
    • /
    • 2022
  • Reliable terrestrial rainfall observations from satellites at finer spatial resolution are essential for urban hydrological and microscale agricultural demands. Although various traditional "top-down" approach-based satellite rainfall products are widely used, they are limited in spatial resolution. This study aims to assess the potential of a novel "bottom-up" approach to rainfall estimation, the parameterized SM2RAIN model, applied to C-band SAR Sentinel-1 satellite data (SM2RAIN-S1), to generate high-spatial-resolution terrestrial rainfall estimates (0.01° grid/6-day) over Central South Korea. Its performance was evaluated for both spatial and temporal variability using the respective rainfall data from a conventional reanalysis product and a rain gauge network over a 1-year period for two different sub-regions of Central South Korea: the mixed-forest-dominated middle sub-region and the cropland-dominated west coast sub-region. The evaluation results indicated that the SM2RAIN-S1 product can capture general rainfall patterns in Central South Korea and holds potential for high-spatial-resolution rainfall measurement at the local scale over different land covers, while providing less biased rainfall estimates relative to rain gauge observations. Moreover, the SM2RAIN-S1 rainfall product performed better in mixed forests in terms of Pearson's correlation coefficient (R = 0.69), implying the suitability of 6-day SM2RAIN-S1 data for capturing the temporal dynamics of soil moisture and rainfall in mixed forests. However, in terms of RMSE and bias, better performance was obtained with the SM2RAIN-S1 rainfall product over croplands rather than mixed forests, indicating that the larger errors induced by high evapotranspiration losses (especially in mixed forests) need to be addressed in further improvement of SM2RAIN.
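The "bottom-up" SM2RAIN idea inverts the soil-water balance so that rainfall is inferred from soil-moisture increments plus a loss term. A minimal sketch of that inversion (the functional form is the commonly published SM2RAIN approximation; the parameter values and soil-moisture series are illustrative, not the paper's calibration):

```python
def sm2rain(s, dt, Z, a, b):
    """Bottom-up rainfall estimate from a relative soil-moisture series s
    (values in 0..1) by inverting the soil-water balance, SM2RAIN-style:

        p(t) ~= Z * ds/dt + a * s(t)**b

    where Z is the water capacity of the soil layer [mm] and a*s**b models
    drainage-type losses. Negative estimates are clipped to zero.
    """
    rain = []
    for i in range(1, len(s)):
        ds_dt = (s[i] - s[i - 1]) / dt
        p = Z * ds_dt + a * s[i] ** b
        rain.append(max(p, 0.0))
    return rain

# Hypothetical 6-day relative soil moisture: one wetting event, then dry-down.
s = [0.30, 0.55, 0.50, 0.45, 0.42, 0.40]
rain_mm = sm2rain(s, dt=1.0, Z=80.0, a=2.0, b=4.0)
```

Only the wetting step produces a rainfall estimate; the dry-down intervals invert to negative values and are clipped, which is why the approach depends on soil-moisture observations that are frequent enough to catch each wetting event.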