• Title/Summary/Keyword: Non-Prediction Algorithm


Prediction of Lung Cancer Based on Serum Biomarkers by Gene Expression Programming Methods

  • Yu, Zhuang;Chen, Xiao-Zheng;Cui, Lian-Hua;Si, Hong-Zong;Lu, Hai-Jiao;Liu, Shi-Hai
    • Asian Pacific Journal of Cancer Prevention
    • /
    • v.15 no.21
    • /
    • pp.9367-9373
    • /
    • 2014
  • In the diagnosis of lung cancer, rapid distinction between small cell lung cancer (SCLC) and non-small cell lung cancer (NSCLC) tumors is very important. Serum markers, including lactate dehydrogenase (LDH), C-reactive protein (CRP), carcino-embryonic antigen (CEA), neuron-specific enolase (NSE) and Cyfra21-1, are reported to reflect lung cancer characteristics. In this study, lung tumors were classified on the basis of biomarkers (measured in 120 NSCLC and 60 SCLC patients) by setting up optimal biomarker joint models with a powerful computerized tool, gene expression programming (GEP). GEP is a learning algorithm that combines the advantages of genetic programming (GP) and genetic algorithms (GAs). It focuses specifically on relationships between variables in sets of data, builds models to explain these relationships, and has been used successfully in formula finding and function mining. As a basis for defining a GEP environment for SCLC and NSCLC prediction, three explicit predictive models were constructed. CEA and NSE are frequently used lung cancer markers in clinical trials, and CRP, LDH and Cyfra21-1 also have significant meaning in lung cancer; based on CEA and NSE, three GEP models were set up: GEP1 (CEA, NSE, Cyfra21-1), GEP2 (CEA, NSE, LDH) and GEP3 (CEA, NSE, CRP). The best classification result was obtained when CEA, NSE and Cyfra21-1 were combined: 128 of 135 subjects in the training set (94.8%) and 40 of 45 subjects in the test set (88.9%) were classified correctly. With GEP2, the accuracy decreased by 1.5% in the training set and 6.6% in the test set; with GEP3, by 0.82% and 4.45%, respectively. Serum Cyfra21-1 is a useful and sensitive serum biomarker for discriminating between NSCLC and SCLC, and GEP modeling is a promising tool in the diagnosis of lung cancer.
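
A GEP model is a closed-form expression evolved over input variables and encoded as a linear chromosome in Karva notation, which decodes breadth-first into an expression tree. The sketch below illustrates only that decoding/evaluation step; the operators and terminal names are hypothetical, not the actual biomarker models from the study.

```python
from collections import deque

# Minimal sketch of decoding a GEP K-expression (Karva notation) into an
# expression tree and evaluating it. Operators and terminals are
# illustrative, not the paper's evolved biomarker models.
ARITY = {'+': 2, '-': 2, '*': 2, '/': 2}  # terminals have arity 0

def eval_kexpression(gene, terminals):
    """Decode a K-expression level by level (breadth-first) and evaluate it.

    Trailing unused symbols (the gene's tail in GEP) are simply ignored.
    """
    nodes = [[sym, []] for sym in gene]  # each node: [symbol, children]
    queue = deque([nodes[0]])
    i = 1
    while queue and i < len(nodes):
        node = queue.popleft()
        for _ in range(ARITY.get(node[0], 0)):
            child = nodes[i]
            i += 1
            node[1].append(child)
            queue.append(child)

    def evaluate(node):
        sym, children = node
        if not children:
            return terminals[sym]
        a, b = (evaluate(c) for c in children)
        return {'+': a + b, '-': a - b, '*': a * b, '/': a / b}[sym]

    return evaluate(nodes[0])

# "+*abc" decodes level-order to root '+', children '*' and 'a',
# then '*' gets children 'b' and 'c': the expression (b*c) + a.
print(eval_kexpression("+*abc", {'a': 1.0, 'b': 2.0, 'c': 3.0}))  # 7.0
```

In GEP proper, a population of such chromosomes is evolved with genetic operators (mutation, transposition, crossover) against a fitness function such as classification accuracy.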

Prediction of Failure Time of Tunnel Applying the Curve Fitting Techniques (곡선적합기법을 이용한 터널의 파괴시간 예측)

  • Yoon, Yong-Kyun;Jo, Young-Do
    • Tunnel and Underground Space
    • /
    • v.20 no.2
    • /
    • pp.97-104
    • /
    • 2010
  • The materials failure relation $\ddot{\Omega}=A{(\dot{\Omega})}^\alpha$, where $\Omega$ is a measurable quantity such as displacement and the dot superscript denotes the time derivative, may be used to analyze the accelerating creep of materials. The coefficients A and $\alpha$ are determined by fitting the given data sets. In this study, we attempt to predict the failure time of a tunnel using the materials failure relation. Four fitting techniques applying the relation are tried for forecasting the failure time: the log velocity versus log acceleration technique, the log time versus log velocity technique and the inverse velocity technique are based on linear least squares fits, while the non-linear least squares technique utilizes the Levenberg-Marquardt algorithm. Since the log velocity versus log acceleration technique uses a logarithmic representation of the materials failure relation, it indicates the suitability of the relation for predicting the failure time of a tunnel. The linear correlation between log velocity and log acceleration is satisfactory (R = 0.84), which indicates that the materials failure relation is a suitable model for predicting the failure time of a tunnel. Comparing the real failure time of the tunnel with the failure times predicted by the four curve fittings shows that the log time versus log velocity technique gives the best prediction.
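
Of the four techniques, the inverse velocity technique is the simplest to illustrate: when creep follows the materials failure relation with $\alpha \approx 2$, inverse velocity decays linearly in time, so a linear least squares fit extrapolated to $1/\dot{\Omega} = 0$ estimates the failure time. A minimal sketch on synthetic data (not the paper's tunnel measurements):

```python
import numpy as np

def predict_failure_time(t, velocity):
    """Fit 1/v = a*t + b by linear least squares, extrapolate to 1/v = 0.

    Under the materials failure relation with alpha ~ 2, inverse velocity
    decreases linearly with time, so its zero crossing estimates failure.
    """
    inv_v = 1.0 / np.asarray(velocity, dtype=float)
    a, b = np.polyfit(np.asarray(t, dtype=float), inv_v, 1)
    return -b / a  # time at which fitted inverse velocity reaches zero

# Synthetic accelerating creep: v = C / (tf - t), true failure at tf = 100
tf, C = 100.0, 5.0
t = np.linspace(0.0, 80.0, 50)
v = C / (tf - t)
print(predict_failure_time(t, v))  # ~100.0
```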

CT-Based Radiomics Signature for Preoperative Prediction of Coagulative Necrosis in Clear Cell Renal Cell Carcinoma

  • Kai Xu;Lin Liu;Wenhui Li;Xiaoqing Sun;Tongxu Shen;Feng Pan;Yuqing Jiang;Yan Guo;Lei Ding;Mengchao Zhang
    • Korean Journal of Radiology
    • /
    • v.21 no.6
    • /
    • pp.670-683
    • /
    • 2020
  • Objective: The presence of coagulative necrosis (CN) in clear cell renal cell carcinoma (ccRCC) indicates a poor prognosis, while the absence of CN indicates a good prognosis. The purpose of this study was to build and validate a radiomics signature based on preoperative CT imaging data to estimate CN status in ccRCC. Materials and Methods: Altogether, 105 patients with pathologically confirmed ccRCC were retrospectively enrolled in this study and then divided into training (n = 72) and validation (n = 33) sets. Thereafter, 385 radiomics features were extracted from the three-dimensional volumes of interest of each tumor, and 10 traditional features were assessed by two experienced radiologists using triple-phase CT-enhanced images. A multivariate logistic regression algorithm was used to build the radiomics score and traditional predictors in the training set, and their performance was assessed and then tested in the validation set. The radiomics signature to distinguish CN status was then developed by incorporating the radiomics score and the selected traditional predictors. The receiver operating characteristic (ROC) curve was plotted to evaluate the predictive performance. Results: The area under the ROC curve (AUC) of the radiomics score, which consisted of 7 radiomics features, was 0.855 in the training set and 0.885 in the validation set. The AUC of the traditional predictor, which consisted of 2 traditional features, was 0.843 in the training set and 0.858 in the validation set. The radiomics signature showed the best performance with an AUC of 0.942 in the training set, which was then confirmed with an AUC of 0.969 in the validation set. Conclusion: The CT-based radiomics signature that incorporated radiomics and traditional features has the potential to be used as a non-invasive tool for preoperative prediction of CN in ccRCC.
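
The predictive performance above is reported as the area under the ROC curve. A minimal sketch of computing AUC directly from predicted scores via the Mann-Whitney U identity, on illustrative scores rather than the study's data:

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the Mann-Whitney identity: the probability that a randomly
    chosen positive case scores higher than a randomly chosen negative
    case, with ties counting one half."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Illustrative scores: higher score -> model predicts CN present
labels = np.array([1, 1, 1, 0, 0, 0, 0])
scores = np.array([0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.1])
print(roc_auc(scores, labels))  # 11/12 ~ 0.917
```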

Development of Grid Based Distributed Rainfall-Runoff Model with Finite Volume Method (유한체적법을 이용한 격자기반의 분포형 강우-유출 모형 개발)

  • Choi, Yun-Seok;Kim, Kyung-Tak;Lee, Jin-Hee
    • Journal of Korea Water Resources Association
    • /
    • v.41 no.9
    • /
    • pp.895-905
    • /
    • 2008
  • Analyzing hydrologic processes in a watershed requires both various geographical data and hydrological time series data. Recently, not only geographical data such as DEMs (Digital Elevation Models) and hydrologic thematic maps but also hydrological time series from numerical weather prediction and rainfall radar have been provided as grid data, and there are studies on hydrologic analysis using these grid data. In this study, GRM (Grid-based Rainfall-runoff Model), a physically based distributed rainfall-runoff model, has been developed to simulate short-term rainfall-runoff processes effectively using these grid data. The kinematic wave equation is used to simulate overland flow and channel flow, and the Green-Ampt model is used to simulate infiltration. The governing equation is discretized by the finite volume method. TDMA (TriDiagonal Matrix Algorithm) is applied to solve the systems of linear equations, and the Newton-Raphson iteration method is applied to solve the non-linear terms. The developed model was applied to simplified hypothetical watersheds to examine its reasonableness against results from $Vflo^{TM}$. It was then applied to the Wicheon watershed for verification; its applicability to a real site was examined, and simulation results showed good agreement with measured hydrographs.
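
The tridiagonal systems produced by the finite volume discretization can be solved in O(n) by TDMA (the Thomas algorithm), a forward elimination followed by back substitution. A self-contained sketch, verified against a dense solver:

```python
import numpy as np

def tdma(a, b, c, d):
    """Thomas algorithm for a tridiagonal system.

    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side.
    Forward elimination then back substitution, O(n).
    """
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Check against a dense solve on a small diagonally dominant system
a = np.array([0.0, -1.0, -1.0, -1.0])
b = np.array([4.0, 4.0, 4.0, 4.0])
c = np.array([-1.0, -1.0, -1.0, 0.0])
d = np.array([5.0, 5.0, 5.0, 5.0])
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(tdma(a, b, c, d), np.linalg.solve(A, d)))  # True
```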

Long Range Forecast of Garlic Productivity over S. Korea Based on Genetic Algorithm and Global Climate Reanalysis Data (전지구 기후 재분석자료 및 인공지능을 활용한 남한의 마늘 생산량 장기예측)

  • Jo, Sera;Lee, Joonlee;Shim, Kyo Moon;Kim, Yong Seok;Hur, Jina;Kang, Mingu;Choi, Won Jun
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.23 no.4
    • /
    • pp.391-404
    • /
    • 2021
  • This study developed a long-term prediction model for the potential yield of garlic based on a genetic algorithm (GA), utilizing global climate reanalysis data. The GA is used to extract the inherent signals in the global climate reanalysis data that are directly and indirectly connected with garlic yield potential. Our results indicate that both deterministic and probabilistic forecasts reasonably capture the inter-annual variability of crop yields, with temporal correlation coefficients significant at the 99% confidence level and superior categorical forecast skill, with a hit rate of 93.3% for 2 × 2 and 73.3% for 3 × 3 contingency tables. Furthermore, the GA method, which considers linear and non-linear relationships between predictors and predictands, shows superior forecast skill in terms of both stability and skill scores compared with a linear method. Since the model can predict potential yield before the start of farming, it is expected to help establish long-term plans to stabilize the demand and price of agricultural products and to prepare countermeasures for possible problems in advance.
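
One common way a GA "digs out" predictive signals from a large pool of reanalysis fields is to evolve a binary mask over candidate predictors against a skill-based fitness. The sketch below uses synthetic data and a deliberately simple correlation-based fitness; the encoding, operators and fitness are illustrative assumptions, not the study's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for reanalysis predictors: 40 candidate series over
# 30 years; the target (yield anomaly) depends mainly on columns 3 and 17.
X = rng.standard_normal((30, 40))
y = 0.8 * X[:, 3] - 0.6 * X[:, 17] + 0.1 * rng.standard_normal(30)

def fitness(mask):
    """Mean |correlation| of selected predictors with the target, with a
    small penalty per selected predictor (illustrative, not the paper's
    skill score)."""
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        return -1.0
    r = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in idx]
    return float(np.mean(r)) - 0.01 * idx.size

def tournament(fits):
    i, j = rng.integers(0, len(fits), 2)
    return i if fits[i] >= fits[j] else j

def evolve(pop_size=30, n_gen=40, p_mut=0.02):
    n = X.shape[1]
    pop = rng.random((pop_size, n)) < 0.1  # sparse initial masks
    best, best_fit = pop[0].copy(), fitness(pop[0])
    for _ in range(n_gen + 1):
        fits = np.array([fitness(m) for m in pop])
        if fits.max() > best_fit:
            best_fit, best = float(fits.max()), pop[fits.argmax()].copy()
        children = []
        for _ in range(pop_size):
            p1, p2 = pop[tournament(fits)], pop[tournament(fits)]
            cut = rng.integers(1, n)                      # one-point crossover
            child = np.concatenate([p1[:cut], p2[cut:]])
            child ^= rng.random(n) < p_mut                # bit-flip mutation
            children.append(child)
        pop = np.array(children)
    return best, best_fit

mask, score = evolve()
print(sorted(np.flatnonzero(mask)), round(score, 3))
```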

Comparison of Prediction Accuracy Between Classification and Convolution Algorithm in Fault Diagnosis of Rotatory Machines at Varying Speed (회전수가 변하는 기기의 고장진단에 있어서 특성 기반 분류와 합성곱 기반 알고리즘의 예측 정확도 비교)

  • Moon, Ki-Yeong;Kim, Hyung-Jin;Hwang, Se-Yun;Lee, Jang Hyun
    • Journal of Navigation and Port Research
    • /
    • v.46 no.3
    • /
    • pp.280-288
    • /
    • 2022
  • This study examined the diagnosis of abnormalities and faults in equipment whose rotational speed changes even during regular operation. The purpose was to suggest a procedure for properly applying machine learning to time series data that exhibit non-stationary characteristics as the rotational speed changes. Anomaly and fault diagnosis was performed using machine learning: k-Nearest Neighbor (k-NN), Support Vector Machine (SVM), and Random Forest. To compare the diagnostic accuracy, an autoencoder was used for anomaly detection and a convolution-based Conv1D was additionally used for fault diagnosis. Feature vectors comprising statistical and frequency attributes were extracted, and normalization and dimensional reduction were applied to them. Changes in the diagnostic accuracy of machine learning according to feature selection, normalization, and dimensional reduction are explained, and the hyperparameter optimization process and layer structure are described for each algorithm. The results show that, with appropriate feature treatment, machine learning can accurately diagnose failures of a variable-rotation machine, although convolution algorithms have been widely applied to this problem.

Prediction of Decompensation and Death in Advanced Chronic Liver Disease Using Deep Learning Analysis of Gadoxetic Acid-Enhanced MRI

  • Subin Heo;Seung Soo Lee;So Yeon Kim;Young-Suk Lim;Hyo Jung Park;Jee Seok Yoon;Heung-Il Suk;Yu Sub Sung;Bumwoo Park;Ji Sung Lee
    • Korean Journal of Radiology
    • /
    • v.23 no.12
    • /
    • pp.1269-1280
    • /
    • 2022
  • Objective: This study aimed to evaluate the usefulness of quantitative indices obtained from deep learning analysis of gadoxetic acid-enhanced hepatobiliary phase (HBP) MRI and their longitudinal changes in predicting decompensation and death in patients with advanced chronic liver disease (ACLD). Materials and Methods: We included patients who underwent baseline and 1-year follow-up MRI from a prospective cohort that underwent gadoxetic acid-enhanced MRI for hepatocellular carcinoma surveillance between November 2011 and August 2012 at a tertiary medical center. Baseline liver condition was categorized as non-ACLD, compensated ACLD, and decompensated ACLD. The liver-to-spleen signal intensity ratio (LS-SIR) and liver-to-spleen volume ratio (LS-VR) were automatically measured on the HBP images using a deep learning algorithm, and their percentage changes at the 1-year follow-up (ΔLS-SIR and ΔLS-VR) were calculated. The associations of the MRI indices with hepatic decompensation and a composite endpoint of liver-related death or transplantation were evaluated using a competing risk analysis with multivariable Fine and Gray regression models, including baseline parameters alone and both baseline and follow-up parameters. Results: Our study included 280 patients (153 male; mean age ± standard deviation, 57 ± 7.95 years) with non-ACLD, compensated ACLD, and decompensated ACLD in 32, 186, and 62 patients, respectively. Patients were followed for 11-117 months (median, 104 months). In patients with compensated ACLD, baseline LS-SIR (sub-distribution hazard ratio [sHR], 0.81; p = 0.034) and LS-VR (sHR, 0.71; p = 0.01) were independently associated with hepatic decompensation. The ΔLS-VR (sHR, 0.54; p = 0.002) was predictive of hepatic decompensation after adjusting for baseline variables. ΔLS-VR was an independent predictor of liver-related death or transplantation in patients with compensated ACLD (sHR, 0.46; p = 0.026) and decompensated ACLD (sHR, 0.61; p = 0.023). 
Conclusion: MRI indices automatically derived from the deep learning analysis of gadoxetic acid-enhanced HBP MRI can be used as prognostic markers in patients with ACLD.

Efficient Coding of Motion Vector Predictor using Phased-in Code (Phased-in 코드를 이용한 움직임 벡터 예측기의 효율적인 부호화 방법)

  • Moon, Ji-Hee;Choi, Jung-Ah;Ho, Yo-Sung
    • Journal of Broadcast Engineering
    • /
    • v.15 no.3
    • /
    • pp.426-433
    • /
    • 2010
  • The H.264/AVC video coding standard performs inter prediction using variable block sizes to improve coding efficiency. Since variable block sizes allow accurate prediction of the motion of both homogeneous and non-homogeneous regions, residual information can be reduced effectively. However, each motion vector must be transmitted to the decoder, and in low bit rate environments motion vector information takes approximately 40% of the total bitstream. Motion vector competition was therefore proposed to reduce the amount of motion vector information. Since motion vector competition reduces the size of the motion vector difference, only a small number of bits is required for motion vector information; however, the index of the best motion vector predictor must also be sent for decoding. In this paper, we propose a new codeword table based on the phased-in code to encode the index of the motion vector predictor efficiently. Experimental results show that the proposed algorithm reduces the average bit rate by 7.24% at similar PSNR values and improves the average image quality by 0.36 dB at similar bit rates.
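
A phased-in (truncated binary) code spends fewer bits on the first few indices when the alphabet size is not a power of two, which suits a small, skewed set of predictor candidates. A sketch of generating such a codeword table for n symbols (the concept only, not the paper's specific table design):

```python
def phased_in_codes(n):
    """Phased-in (truncated binary) code table for n symbols.

    With k = floor(log2 n), the first 2**(k+1) - n symbols get k-bit
    codewords; the remaining symbols get (k+1)-bit codewords. When n is a
    power of two this reduces to plain fixed-length binary.
    """
    k = n.bit_length() - 1
    short = 2 ** (k + 1) - n  # number of k-bit codewords
    codes = []
    for i in range(n):
        if i < short:
            codes.append(format(i, f'0{k}b') if k > 0 else '')
        else:
            codes.append(format(i + short, f'0{k + 1}b'))
    return codes

print(phased_in_codes(3))  # ['0', '10', '11']
print(phased_in_codes(5))  # ['00', '01', '10', '110', '111']
```

The resulting table is prefix-free, so the decoder can parse the predictor index without an explicit length field.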

Traffic Congestion Estimation by Adopting Recurrent Neural Network (순환인공신경망(RNN)을 이용한 대도시 도심부 교통혼잡 예측)

  • Jung, Hee jin;Yoon, Jin su;Bae, Sang hoon
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.16 no.6
    • /
    • pp.67-78
    • /
    • 2017
  • Traffic congestion cost is increasing annually, and congestion caused by CBD traffic accounts for more than half of the total. Recent advances in Big Data and AI paved the way to Industry 4.0, and these new technologies have created tremendous changes in traffic information dissemination; accurate and timely traffic information will ultimately help decrease traffic congestion cost. This study therefore focused on developing both recurrent and non-recurrent congestion prediction models for urban roads by adopting the Recurrent Neural Network (RNN), a branch of machine learning. A network with two hidden layers trained by the scaled conjugate gradient backpropagation algorithm was selected and tested. The analysis yielded 25 meaningful links out of 33 total links with acceptable mean square errors. The authors concluded that the RNN model is feasible for predicting congestion.
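
What distinguishes an RNN from a feed-forward network for this task is the hidden state carried across time steps of the traffic sequence. A minimal forward-pass sketch of a simple (Elman-style) recurrent cell, with random untrained weights and invented input features for illustration only:

```python
import numpy as np

rng = np.random.default_rng(42)

# Dimensions: 4 input features per time step (e.g. speed, flow, occupancy,
# time-of-day -- hypothetical), 8 hidden units, 1 output (congestion
# index). Weights are random here; in practice they would be trained,
# e.g. by scaled conjugate gradient backpropagation as in the study.
n_in, n_hidden, n_out = 4, 8, 1
W_xh = rng.standard_normal((n_hidden, n_in)) * 0.3
W_hh = rng.standard_normal((n_hidden, n_hidden)) * 0.3
W_hy = rng.standard_normal((n_out, n_hidden)) * 0.3

def rnn_forward(sequence):
    """Run the recurrent cell over a (T, n_in) sequence; the hidden
    state h carries information from earlier time steps forward."""
    h = np.zeros(n_hidden)
    outputs = []
    for x in sequence:
        h = np.tanh(W_xh @ x + W_hh @ h)  # the recurrence
        outputs.append(W_hy @ h)
    return np.array(outputs)

seq = rng.standard_normal((6, n_in))  # e.g. six 5-minute intervals
out = rnn_forward(seq)
print(out.shape)  # (6, 1)
```

Because of the recurrence, the prediction at the last step differs from what the same cell would produce given only that step's input, which is precisely how the model exploits temporal context.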

A Prediction of N-value Using Artificial Neural Network (인공신경망을 이용한 N치 예측)

  • Kim, Kwang Myung;Park, Hyoung June;Goo, Tae Hun;Kim, Hyung Chan
    • The Journal of Engineering Geology
    • /
    • v.30 no.4
    • /
    • pp.457-468
    • /
    • 2020
  • Problems arising during pile design for plant construction and civil and architectural works mostly come from uncertainty in geotechnical characteristics. In particular, the N-value measured through the Standard Penetration Test (SPT) is the most important datum. However, it is difficult to obtain the N-value by drilling investigation throughout the whole target area: there are many constraints such as licensing, time, cost, equipment access and residential complaints, and it is impossible to obtain geotechnical characteristics through drilling investigation within the short bidding period of overseas projects. The geotechnical characteristics at non-drilling investigation points are usually determined by the engineer's empirical judgment, which can lead to errors in pile design and quantity calculation, causing construction delay and cost increase. This problem could be overcome if the N-value could be predicted at non-drilling investigation points from a limited minimum of drilling investigation data. This study was conducted to predict the N-value using an Artificial Neural Network (ANN), one of the Artificial Intelligence (AI) methods. An artificial neural network processes a limited amount of geotechnical data through a biologically inspired logic process, providing more reliable results for the input variables. The purpose of this study is to predict the N-value at non-drilling investigation points through patterns learned by multi-layer perceptron and error back-propagation algorithms using minimum geotechnical data. The reliability of the values predicted by the AI method was reviewed against the measured values, and high reliability was confirmed. To resolve the geotechnical uncertainty, we will perform sensitivity analysis of the input variables in the next steps to increase the learning effect, and some technical updates of the program may be needed. We hope that our study will be helpful to design works in the future.
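
The multi-layer perceptron with error back-propagation described above can be sketched end to end in a few dozen lines. The borehole inputs, target function and network size below are synthetic assumptions for illustration; real training data would come from SPT logs:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in for borehole data: inputs (x, y, depth), target
# N-value an arbitrary smooth function of them (illustrative only).
X = rng.uniform(0, 1, (200, 3))
y = (10 + 30 * X[:, 2] + 5 * np.sin(3 * X[:, 0]) * X[:, 1]).reshape(-1, 1)
y_scaled = y / 50.0  # scale targets roughly into [0, 1]

# One hidden layer of 16 tanh units, linear output
W1 = rng.standard_normal((3, 16)) * 0.5; b1 = np.zeros(16)
W2 = rng.standard_normal((16, 1)) * 0.5; b2 = np.zeros(1)

lr = 0.1
losses = []
for epoch in range(500):
    H = np.tanh(X @ W1 + b1)              # forward pass
    pred = H @ W2 + b2
    err = pred - y_scaled
    losses.append(float((err ** 2).mean()))
    # Error back-propagation: gradients of mean squared error
    g_pred = 2 * err / len(X)
    gW2 = H.T @ g_pred; gb2 = g_pred.sum(0)
    g_H = g_pred @ W2.T * (1 - H ** 2)    # tanh derivative
    gW1 = X.T @ g_H; gb1 = g_H.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1        # gradient descent update
    W2 -= lr * gW2; b2 -= lr * gb2

print(round(losses[0], 4), round(losses[-1], 4))  # loss shrinks with training
```

Once trained on the available boreholes, the same forward pass evaluated at non-drilling coordinates gives the predicted N-values.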