• Title/Summary/Keyword: mean absolute error


Comparison of Image Quality among Different Computed Tomography Algorithms for Metal Artifact Reduction (금속 인공물 감소를 위한 CT 알고리즘 적용에 따른 영상 화질 비교)

  • Gui-Chul Lee;Young-Joon Park;Joo-Wan Hong
    • Journal of the Korean Society of Radiology
    • /
    • v.17 no.4
    • /
    • pp.541-549
    • /
    • 2023
  • The aim of this study was to conduct a quantitative analysis of CT image quality according to algorithms designed to reduce metal artifacts induced by metal components. Ten baseline images were obtained with the standard filtered back-projection algorithm using a spectral detector-based CT scanner and the CT ACR 464 phantom, and ten images were also obtained on the identical phantom with the standard filtered back-projection algorithm after inducing metal artifacts. After applying the metal artifact reduction algorithm to the raw data of the images with metal artifacts, ten further images were obtained, and ten more were obtained by additionally applying the virtual monoenergetic algorithm. Regions of interest were set for polyethylene, bone, acrylic, air, and water in module 1 of the CT ACR 464 phantom to compare the Hounsfield units for each algorithm. The algorithms were individually analyzed using root mean square error, mean absolute error, signal-to-noise ratio, peak signal-to-noise ratio, and the structural similarity index to assess overall image quality. When the Hounsfield units of each algorithm were compared, a significant difference was found between the images with different algorithms (p < .05), and large changes were observed in images using the virtual monoenergetic algorithm in all regions of interest except acrylic. The image quality indices revealed that images with the metal artifact reduction algorithm had the highest resolution, but the structural similarity index was highest for images with the metal artifact reduction algorithm followed by the additional virtual monoenergetic algorithm. For CT images, the metal artifact reduction algorithm proved more effective than the virtual monoenergetic algorithm at reducing metal artifacts, but to obtain high-quality CT images it is important to understand the advantages and image-quality differences of the algorithms and to apply them appropriately.
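For readers who want to reproduce the comparison, here is a minimal sketch of the five image-quality indices named in the abstract, computed between a reference slice and a test slice. The array names, synthetic data, and the data_range value are illustrative assumptions, not the paper's code.

```python
import numpy as np
from skimage.metrics import structural_similarity  # pip install scikit-image

def quality_indices(ref: np.ndarray, test: np.ndarray, data_range: float = 4096.0):
    """Return RMSE, MAE, SNR (dB), PSNR (dB), and SSIM for two same-sized images."""
    err = ref.astype(np.float64) - test.astype(np.float64)
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mae = np.mean(np.abs(err))
    snr = 10.0 * np.log10(np.sum(ref.astype(np.float64) ** 2) / np.sum(err ** 2))
    psnr = 10.0 * np.log10(data_range ** 2 / mse)
    ssim = structural_similarity(ref, test, data_range=data_range)
    return rmse, mae, snr, psnr, ssim

# Synthetic stand-ins for, e.g., an FBP reconstruction and a MAR reconstruction.
rng = np.random.default_rng(0)
ref = rng.normal(0.0, 100.0, (512, 512))
test = ref + rng.normal(0.0, 10.0, ref.shape)
print(quality_indices(ref, test))
```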

Estimation and assessment of natural drought index using principal component analysis (주성분 분석을 활용한 자연가뭄지수 산정 및 평가)

  • Kim, Seon-Ho;Lee, Moon-Hwan;Bae, Deg-Hyo
    • Journal of Korea Water Resources Association
    • /
    • v.49 no.6
    • /
    • pp.565-577
    • /
    • 2016
  • The objective of this study is to propose a method for computing a Natural Drought Index (NDI) that does not consider man-made drought facilities. Principal Component Analysis (PCA) was used to estimate the NDI. Three-month moving cumulative runoff, soil moisture, and precipitation during 1977~2012 were selected as input data for the NDI. Observed precipitation data were collected from the KMA ASOS (Korea Meteorological Administration Automated Synoptic Observing System), while model-driven runoff and soil moisture from the Variable Infiltration Capacity (VIC) model were used. Time series analysis, drought characteristic analysis, and spatial analysis were used to assess the utility of the NDI and to compare it with the existing SPI, SRI, and SSI. The NDI precisely reflected the onset and termination of past drought events, with a mean absolute error of 0.85 in the time series analysis, and it described drought duration and inter-arrival time well, with errors of 1.3 and 1.0, respectively, in the drought characteristic analysis. The NDI also reflected regional drought conditions well in the spatial analysis. Accuracy ranks for drought onset, termination, duration, and inter-arrival time were calculated for the NDI, SPI, SRI, and SSI; the results showed that the NDI is more precise than the others. The NDI overcomes the limitations of univariate drought indices and can be useful for drought analysis as a representative measure of different types of drought, such as meteorological, hydrological, and agricultural droughts.
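A minimal sketch of how a PCA-based index like the NDI can be formed from the three standardized inputs, with the first principal component serving as the index. The random input data and variable names are placeholders, not the study's records.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
months = 432  # e.g., a monthly series spanning 1977-2012
raw = rng.normal(size=(months, 3))  # columns: precipitation, runoff, soil moisture

# 3-month moving cumulative values for each variable.
cum3 = np.vstack([np.convolve(raw[:, j], np.ones(3), mode="valid") for j in range(3)]).T

z = (cum3 - cum3.mean(axis=0)) / cum3.std(axis=0)  # standardize each variable
pca = PCA(n_components=1)
ndi = pca.fit_transform(z).ravel()                 # first principal component
ndi = (ndi - ndi.mean()) / ndi.std()               # rescale to zero mean, unit variance
print(ndi[:12])
```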

Short-term Prediction of Travel Speed in Urban Areas Using an Ensemble Empirical Mode Decomposition (앙상블 경험적 모드 분해법을 이용한 도시부 단기 통행속도 예측)

  • Kim, Eui-Jin;Kim, Dong-Kyu
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.38 no.4
    • /
    • pp.579-586
    • /
    • 2018
  • Short-term prediction of travel speed has been widely studied using data-driven non-parametric techniques. There is, however, a lack of research on prediction for urban areas, whose dynamics are complex because of traffic signals and intersections. The purpose of this study is to develop a hybrid approach combining ensemble empirical mode decomposition (EEMD) and an artificial neural network (ANN) for predicting urban travel speed. The EEMD decomposes the time-series travel speed data into intrinsic mode functions (IMFs) and a residue. The decomposed IMFs represent local characteristics at different time scales, and each is predicted with a separate ANN. The IMFs can be predicted more accurately than the original travel speed because they mitigate the complexity of the original data, such as non-linearity, non-stationarity, and oscillation. The predicted IMFs are summed to produce the predicted travel speed. To evaluate the proposed method, travel speed data from dedicated short-range communication (DSRC) in Daegu City are used, with performance evaluations targeting the links that are particularly hard to predict. The results show the developed model has a mean absolute error rate of 10.41% under normal conditions and 25.35% under breakdown conditions for the 15-min-ahead prediction, outperforming a simple ANN model. The developed model contributes to the provision of reliable traffic information in urban transportation management systems.
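A compact sketch of the EEMD-ANN hybrid described above, assuming the PyEMD package (pip install EMD-signal) and synthetic speed data; the lag order and network size are illustrative choices, not the paper's settings.

```python
import numpy as np
from PyEMD import EEMD
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
t = np.arange(600)
speed = 40 + 8 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 2, t.size)  # fake travel speed

# Decompose into IMFs (rows); the components approximately reconstruct the signal.
imfs = EEMD(trials=50)(speed)

def lagged(x, p=6):
    """Build an autoregressive design matrix: p lagged values -> next value."""
    X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
    return X, x[p:]

forecast = 0.0
for comp in imfs:  # one small ANN per IMF/residue, predictions summed
    X, y = lagged(comp)
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
    forecast += net.predict(comp[-6:].reshape(1, -1))[0]

print("one-step-ahead speed forecast:", round(forecast, 2))
```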

The Impacts of Proposed Landfill Sites on Housing Values

  • Jung, Su Kwan
    • Environmental and Resource Economics Review
    • /
    • v.21 no.3
    • /
    • pp.743-776
    • /
    • 2012
  • This study utilizes the meta-analysis for benefits transfer (MA-BT) approach to measure the social costs of the 7 target sites in the City and County of Honolulu. The estimated MA models (MA-1 and MA-2) were evaluated against validity and reliability criteria. The study used a parametric t-test and a non-parametric signed-rank test to check validity, and a transfer error, measured as an absolute percentage difference, to check reliability. GIS was used for data collection to measure social costs for each target site. The results clearly demonstrated that social costs were substantially higher than direct costs and varied with market conditions and the methods used. In terms of the validity and reliability criteria, the MA models were preferred to the mean transfer value approach. The MA-BT approach is desirable for measuring social costs for these 7 proposed landfill sites, where data are inaccessible, time frames are short, and budgets are small. If researchers and planners have enough time and money, they can conduct primary research; if not, the meta-analysis for benefits transfer approach is much better than no framework. The use of GIS can help identify secondary data within a specific radius of each target site.
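The reliability check above rests on a transfer error measured as an absolute percentage difference; a short sketch with invented numbers, not the study's estimates:

```python
def transfer_error(transferred: float, observed: float) -> float:
    """Absolute percentage difference between a transferred and an observed value."""
    return abs(transferred - observed) / abs(observed) * 100.0

# e.g., a transferred social-cost estimate vs. a policy-site value
print(transfer_error(1_250_000.0, 1_000_000.0))  # 25.0 (% error)
```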


An Efficient BC Approach to Compute Fractal Dimension of Coastlines (개선된 BC법과 해안선의 프랙탈 차원 계산)

  • So, Hye-Rim;So, Gun-Baek;Jin, Gang-Gyoo
    • Journal of Navigation and Port Research
    • /
    • v.40 no.4
    • /
    • pp.207-212
    • /
    • 2016
  • The box-counting (BC) method is one of the most commonly used methods for calculating the fractal dimension of binary images in engineering, science, medicine, geology, and other fields, owing to its simplicity and reliability. However, it handles only square images whose side lengths are a power of 2, so as to avoid discarding unused pixels in images of arbitrary size. In this paper, we present a more efficient BC method, based on the original one, that is applicable to images of arbitrary size. The proposed approach allows the box count to take real values, improving estimation accuracy. The mean absolute error is computed on two deterministic fractal images whose theoretical dimensions are well known, for comparison with the existing BC method and the triangular BC method. The experimental results show that the proposed method outperforms both; it is then used to assess the complexity of coastline images of Korea and Chodo Island taken from Google Maps.
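For reference, here is a sketch of the standard box-counting computation the paper builds on: count the boxes containing any foreground pixel at several box sizes, then take the slope of log(count) versus log(1/size). The paper's refinement (arbitrary image sizes, real-valued box counts) is not reproduced here.

```python
import numpy as np

def box_count_dimension(img: np.ndarray, sizes=(2, 4, 8, 16, 32, 64)) -> float:
    """Estimate the fractal dimension of a binary image by box counting."""
    counts = []
    for s in sizes:
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s  # crop to a multiple of s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())  # boxes containing any pixel
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a filled square should give a dimension close to 2.
img = np.zeros((256, 256), dtype=bool)
img[64:192, 64:192] = True
print(box_count_dimension(img))
```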

A High Speed Block Turbo Code Decoding Algorithm and Hardware Architecture Design (고속 블록 터보 코드 복호 알고리즘 및 하드웨어 구조 설계)

  • 유경철;신형식;정윤호;김근회;김재석
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.41 no.7
    • /
    • pp.97-103
    • /
    • 2004
  • In this paper, we propose a high-speed block turbo code decoding algorithm and an efficient hardware architecture. Multimedia wireless data communication systems need channel codes with strong error-correcting capability. Block turbo codes support variable code rates and packet sizes and achieve high performance through the soft-decision iterative decoding of turbo codes. However, they suffer from long decoding times because of the iterative decoding and the complicated extrinsic information computation. The proposed algorithm reduces this decoding time by using a threshold that represents the channel information. Once the threshold has been set from simulation results, the algorithm skips the calculation for bits with good channel information and assigns them '1', the highest reliability value. The threshold is determined from the absolute mean and the standard deviation of the log-likelihood ratio (LLR), under the assumption that the LLR distribution is Gaussian. A hardware design in Verilog HDL reduces decoding time by about 30% compared with the conventional algorithm and requires about 20K logic gates and 32 Kbit of memory.
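A sketch of the thresholding idea in Python rather than hardware: the threshold is set from the absolute mean and standard deviation of the LLRs, and bits above it are pinned to the highest reliability value so their extrinsic-information computation can be skipped. The scale factor k is an assumed tuning knob, standing in for the simulation-chosen threshold.

```python
import numpy as np

rng = np.random.default_rng(3)
llr = rng.normal(2.0, 1.0, 1024)        # stand-in LLRs for one received code block

k = 1.0                                  # tuning constant, chosen by simulation
threshold = np.mean(np.abs(llr)) + k * np.std(llr)

reliable = np.abs(llr) > threshold       # bits with good channel information
reliability = np.empty_like(llr)
reliability[reliable] = 1.0              # highest reliability; skip extrinsic update
reliability[~reliable] = np.abs(llr)[~reliable]  # only these enter the full decoding
print(f"{reliable.mean():.1%} of bits skip the extrinsic computation")
```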

Forecasting the Steel Cargo Volumes in Incheon Port using System Dynamics (System Dynamics를 활용한 인천항 철재화물 물동량 예측에 관한 연구)

  • Park, Sung-Il;Jung, Hyun-Jae;Jeon, Jun-Woo;Yeo, Gi-Tae
    • Journal of Korea Port Economic Association
    • /
    • v.28 no.2
    • /
    • pp.75-93
    • /
    • 2012
  • Steel cargoes, as core raw materials for the manufacturing industry, play an important role in increasing a port's handling volume. In particular, steel cargoes are fundamental to vitalizing the Port of Incheon, as they have been recognized as the primary key cargo items among bulk cargoes. In this respect, the IPA (Incheon Port Authority) ambitiously developed port complex facilities, including dedicated terminals and their hinterland, in the northern part of Incheon. However, this complex has suffered from low cargo-handling volumes and has faced operational difficulties due to decreased net profits. In general, import and export steel cargo volumes fluctuate sensitively with internal and external economic indices, yet there is scant research on forecasting the steel cargo volume of Incheon port using such indices. To fill this gap, this research predicts the steel cargoes of the Port of Incheon using the well-established System Dynamics methodology. As a result, the steel cargo volume handled in Incheon port is forecast to grow from about 8 million tons to about 10 million tons over the simulation period (2011-2020). The Mean Absolute Percentage Error (MAPE) is 0.0013, which verifies the model's accuracy.
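The MAPE used for the model check is straightforward to compute; a sketch with placeholder tonnage figures (the 0.0013 above is the paper's result, not this data):

```python
import numpy as np

def mape(actual, forecast) -> float:
    """Mean absolute percentage error, as a fraction (multiply by 100 for %)."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs((actual - forecast) / actual))

actual = [8.1e6, 8.6e6, 9.2e6]      # observed steel cargo volumes (tons), invented
forecast = [8.0e6, 8.7e6, 9.1e6]    # simulated volumes, invented
print(mape(actual, forecast))
```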

Reproducibility and Validity of a Self-Administered Semiquantitative Food Frequency Questionnaire (자기기록식 반정량 식이섭취 빈도조사의 신뢰도 및 타당도 연구)

  • 김미경;이상선;안윤옥
    • Korean Journal of Community Nutrition
    • /
    • v.1 no.3
    • /
    • pp.376-394
    • /
    • 1996
  • This study evaluated the reproducibility and validity of the self-administered semiquantitative food frequency questionnaire used in a large prospective cohort study (Korean Cancer Research Survey) of middle-aged men. The questionnaire was administered twice at an interval of approximately two years (December 1992 - January 1995), and four or five 24-hour recalls per subject were collected at intervals of approximately three months. The results were as follows. 1) Although the distributions of the intakes estimated by the questionnaire were somewhat wider, the mean nutrient intakes of the group estimated by the questionnaires and by the multiple 24-hour recalls were roughly comparable. 2) For reproducibility, the correlations of absolute (unadjusted) and calorie-adjusted nutrient intakes from the two questionnaires were above 0.5, and the weighted kappa values were above 0.4. 3) The Pearson correlation coefficients between unadjusted nutrient intakes averaged 0.40 (from 0.13 for calcium to 0.58 for carbohydrate) for the first questionnaire vs. the 24-hour recalls, and 0.28 for the second questionnaire vs. the 24-hour recalls; the Spearman rank-order correlation coefficients were similar. When energy intake was adjusted, the correlations were slightly reduced, averaging 0.28 for the first questionnaire and 0.25 for the second. To correct for the measurement error of the 24-hour recall data, deattenuated correlation coefficients were calculated; they averaged 0.53 for the first questionnaire and 0.37 for the second for unadjusted intakes, and 0.44 and 0.37, respectively, for calorie-adjusted intakes. 4) Agreement between the questionnaires and the 24-hour recalls was low (k < 0.4). On average, 37% (energy-adjusted values) and 40% (unadjusted values) of subjects were classified into the same quartile by the 24-hour recalls and the first questionnaire, more than 10% of subjects on average fell into the extreme quartiles of the two methods, and 8.2% of subjects on average classified in the lowest quartile of unadjusted nutrient intake by the 24-hour recalls were in the highest quartile by the first questionnaire. These data indicate that our self-administered semiquantitative food frequency questionnaire is reproducible. Correlation coefficients comparing nutrient intakes measured by the two different dietary assessment methods were below 0.5, so the validity of the questionnaire is not high.
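The deattenuation step mentioned in point 3) typically follows the Rosner-Willett correction r_true = r_obs * sqrt(1 + lambda/n), where lambda is the within- to between-person variance ratio of the recalls and n is the number of recalls per subject; a sketch with illustrative inputs, not the study's variance estimates:

```python
import math

def deattenuate(r_obs: float, s2_within: float, s2_between: float, n_recalls: int) -> float:
    """Correct an observed correlation for within-person variation in the reference method."""
    lam = s2_within / s2_between      # within- to between-person variance ratio
    return r_obs * math.sqrt(1.0 + lam / n_recalls)

# e.g., observed r = 0.40, variance ratio 2.0, four recalls per subject
print(deattenuate(0.40, s2_within=2.0, s2_between=1.0, n_recalls=4))  # ~0.49
```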


Predicting the Popularity of Post Articles with Virtual Temperature in Web Bulletin (웹게시판에서 가상온도를 이용한 게시글의 인기 예측)

  • Kim, Su-Do;Kim, So-Ra;Cho, Hwan-Gue
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.10
    • /
    • pp.19-29
    • /
    • 2011
  • A blog provides commentary, news, or content on a particular subject, and an important feature of many blogs is their interactive format. Sometimes there is a heated debate on a topic, and an article becomes a political or sociological issue. In this paper, we propose a method to predict the popularity of an article in advance. First, we use the hit count as the factor for predicting an article's popularity. We define a saturation point and derive a model that predicts the hit count at the saturation point from the correlation between the early hit count and the hit count at the saturation point. Finally, we predict the virtual temperature of an article using four categories (explosive, hot, warm, cold). Using only the first 30 minutes' hit count, we can predict the virtual temperature of Internet discussion articles via the hit count at the saturation point with more than 70% accuracy. For the hot, warm, and cold categories, accuracy exceeds 86% from 30 minutes' hit count and 90% from 70 minutes' hit count.
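A sketch of the prediction scheme: fit a simple linear model of the saturation-point hit count on the early hit count over past articles, then map the predicted count to a temperature category. The data, model form, and cutoffs here are invented for illustration, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(4)
early = rng.uniform(10, 500, 200)                   # 30-minute hit counts, invented
saturation = 12 * early + rng.normal(0, 200, 200)   # eventual hit counts, invented

slope, intercept = np.polyfit(early, saturation, 1)  # simple linear model

def virtual_temperature(early_hits: float) -> str:
    """Map a predicted saturation-point hit count to a temperature category."""
    predicted = slope * early_hits + intercept
    for label, cutoff in [("explosive", 5000), ("hot", 2000), ("warm", 500)]:
        if predicted >= cutoff:
            return label
    return "cold"

print(virtual_temperature(300))
```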

Proposal of a Step-by-Step Optimized Campus Power Forecast Model using CNN-LSTM Deep Learning (CNN-LSTM 딥러닝 기반 캠퍼스 전력 예측 모델 최적화 단계 제시)

  • Kim, Yein;Lee, Seeun;Kwon, Youngsung
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.10
    • /
    • pp.8-15
    • /
    • 2020
  • A forecasting method using deep learning does not give consistent results across datasets with different characteristics, even with the same forecasting model and parameters; for example, a forecasting model X optimized on dataset A will not produce optimal results on another dataset B. The forecasting model therefore needs to be optimized to the characteristics of the dataset to increase its accuracy. This paper proposes novel optimization steps covering outlier removal, dataset classification, and a CNN-LSTM-based hyperparameter tuning process to forecast the daily power usage of a university campus from hourly data. The proposed model achieves high forecasting accuracy, with a MAPE of 2%, using a single power input variable. The model can be used in an EMS to suggest improved strategies to users and consequently to improve power efficiency.
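A minimal CNN-LSTM sketch in the spirit of the proposed model, using Keras: a Conv1D stage extracts local patterns from 24 hourly power readings, an LSTM models the sequence, and a dense head forecasts the next day's usage. Layer sizes, the 24-hour window, and the single input variable are assumptions for illustration, not the paper's tuned configuration.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(24, 1)),                      # 24 hourly power values, 1 variable
    layers.Conv1D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(2),
    layers.LSTM(32),
    layers.Dense(1),                                  # next-day power usage
])
model.compile(optimizer="adam", loss="mae")

# Train on synthetic stand-in data shaped (samples, timesteps, features).
X = np.random.rand(256, 24, 1).astype("float32")
y = X.mean(axis=(1, 2))
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:1], verbose=0))
```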