• Title/Summary/Keyword: Finding error

Search Results: 469

The Accuracy of ICD codes for Cerebrovascular Diseases in Medical Insurance Claims (의료보험청구자료중 뇌혈관질환 상병기호의 정확도에 관한 연구)

  • Park, Jong-Ku;Kim, Ki-Soon;Lee, Tae-Yong;Lee, Kang-Sook;Lee, Duk-Hee;Lee, Sun-Hee;Jee, Sun-Ha;Suh, Il;Koh, Kwang-Wook;Ryu, So-Yeon;Park, Kee-Ho;Park, Woon-Je;Kim, Chun-Bae
    • Journal of Preventive Medicine and Public Health
    • /
    • v.33 no.1
    • /
    • pp.76-82
    • /
    • 2000
  • Objectives : We attempted to assess the accuracy of ICD codes for cerebrovascular diseases in medical insurance claims (ICMIC) and to investigate the reasons for error. This study was designed as a preliminary study to establish a nationwide surveillance system. Methods : A total of 626 patients whose medical insurance claims indicated a diagnosis of cerebrovascular disease during the period from 1993 to 1997 were selected from the Korea Medical Insurance Corporation cohort (KMIC cohort: 115,600 persons). The KMIC cohort comprised 10% of the insured who had taken health examinations in both 1990 and 1992. Registered medical record administrators were trained in the survey technique and gathered data from March to May 1999. The definition of cerebrovascular disease in this study included cases which met one of two criteria (Minnesota, WHO) or showed 'definite stroke' in CT/MRI findings. We asked the medical record administrators to explain the error if the final diagnosis was not coded as stroke. Results : The accuracy rate of the ICMIC was 83.0% (425 cases). Medical records were not available for 8.2% (51 cases) due to the closing of hospitals, the absence of a computer system, omission of medical records, etc. Sixty-three cases (10.0%) were classified as impossible to interpret due to insufficient records of 'major clinical symptoms' or 'neurological deficits'. The most common reason for error was 'to meet the review criteria for medical insurance benefits' (52.9%). The department where errors in the ICMIC occurred most frequently was the hospital's medical insurance claims department. Conclusion : The accuracy rate of the ICMIC was 83.0%.

  • PDF
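The reported 83.0% is consistent with the counts in the abstract if the rate is computed over the records that could actually be reviewed, i.e. excluding unavailable and uninterpretable cases. A quick check (this reading of the denominator is an inference from the numbers, not stated in the abstract):

```python
total = 626            # claims coded as cerebrovascular disease
unavailable = 51       # medical records not available (8.2%)
uninterpretable = 63   # insufficient records to interpret (10.0%)
accurate = 425         # confirmed as stroke on review

reviewable = total - unavailable - uninterpretable  # 512 records
accuracy = accurate / reviewable
print(round(accuracy * 100, 1))  # 83.0, matching the reported rate
```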

A Fast Algorithm of the Belief Propagation Stereo Method (신뢰전파 스테레오 기법의 고속 알고리즘)

  • Choi, Young-Seok;Kang, Hyun-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.45 no.5
    • /
    • pp.1-8
    • /
    • 2008
  • The belief propagation method, which has been studied recently, yields good performance in disparity extraction. The method models a target function as an energy function based on a Markov random field (MRF) and solves the stereo matching problem by finding the disparity that minimizes the energy function. MRF models provide a robust and unified framework for vision problems such as stereo and image restoration. The belief propagation method produces quite accurate results, but it is difficult to implement in real time because its computational complexity is higher than that of other stereo methods. To relieve this problem, in this paper, we propose a fast algorithm for the belief propagation method. The energy function consists of a data term and a smoothness term. The data term usually corresponds to the difference in brightness between correspondences, and the smoothness term indicates the continuity of adjacent pixels. Smoothness information is created from messages, which are assigned using four different message arrays for the pixel positions adjacent in four directions. The processing time for the four message arrays accounts for 80 percent of the whole program execution time. In the proposed method, the messages are produced not in four arrays but in a single array, which dramatically reduces the processing time required for message calculation. In the last step of the disparity extraction process, the messages are read from the single integrated array, and this algorithm requires 1/4 the computational complexity of the conventional method. Our method is evaluated by comparing its disparity error rate with that of the conventional method. Experimental results show that the proposed method remarkably reduces the execution time while rarely increasing disparity error.
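The single-direction message update at the heart of the method can be sketched as follows; the disparity set, the truncated-linear smoothness cost, and the constants LAMBDA and TAU are illustrative assumptions, not the paper's configuration:

```python
# Minimal sketch of one belief-propagation message update for stereo.
DISPARITIES = range(4)   # candidate disparity labels (illustrative)
LAMBDA = 1.0             # smoothness weight (assumed)
TAU = 2.0                # truncation of the linear smoothness cost (assumed)

def smoothness_cost(d1, d2):
    """Smoothness term: truncated-linear cost on neighboring disparities."""
    return LAMBDA * min(abs(d1 - d2), TAU)

def update_message(data_costs, incoming_messages):
    """Message from a pixel to one neighbor: for each receiver disparity d,
    the minimum over sender disparities d' of (data cost at d' + messages
    from the other neighbors + smoothness cost between d' and d)."""
    h = [data_costs[dp] + sum(m[dp] for m in incoming_messages)
         for dp in DISPARITIES]
    return [min(h[dp] + smoothness_cost(dp, d) for dp in DISPARITIES)
            for d in DISPARITIES]

# Toy example: data costs (brightness differences) for 4 disparities and
# two messages arriving from the pixel's other neighbors.
costs = [3.0, 1.0, 0.5, 2.0]
msgs = [[0.0, 0.2, 0.1, 0.3], [0.1, 0.0, 0.0, 0.2]]
print(update_message(costs, msgs))
```

In the conventional scheme this update fills four directional arrays per iteration; the paper's speed-up comes from storing the messages in a single integrated array instead.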

Does Water Consumption Cause Economic Growth Vice-Versa, or Neither? Evidence from Korea (한국에서의 물소비와 경제성장 -오차수정모형을 이용하여-)

  • Lim, Hea-Jin;Yoo, Seung-Hoon;Kwak, Seung-Jun
    • Journal of Korea Water Resources Association
    • /
    • v.37 no.10
    • /
    • pp.869-880
    • /
    • 2004
  • The purpose of this study is to examine the relationship between water consumption and economic growth in Korea, and to obtain policy implications from the results. To this end, we attempt to provide a more careful consideration of the causality issues by applying rigorous techniques of Granger causality. Tests for unit roots, co-integration, and Granger causality based on an error-correction model are presented. The existence of bi-directional causality between water consumption and economic growth in Korea is detected. This finding has various implications for policy analysts and forecasters in Korea. Economic growth requires enormous water consumption, though there are many other factors contributing to economic growth, and water consumption is but one of them. Thus, this study generates confidence in decisions to invest in water supply infrastructure. Moreover, this study lends support to the argument that an increase in real income, ceteris paribus, gives rise to water consumption. Economic growth results in a higher proportion of national income spent on water supply services and stimulates further water consumption.
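Granger causality tests based on an error-correction model typically take the following standard form (generic notation and lag lengths, not necessarily the paper's):

```latex
\Delta W_t = \alpha_0 + \sum_{i=1}^{p} \beta_i \,\Delta W_{t-i}
           + \sum_{j=1}^{q} \gamma_j \,\Delta Y_{t-j}
           + \lambda\, \hat{e}_{t-1} + \varepsilon_t
```

where \(W_t\) is water consumption, \(Y_t\) is real income, and \(\hat{e}_{t-1}\) is the lagged residual from the cointegrating regression of \(W_t\) on \(Y_t\). Income Granger-causes water consumption if the \(\gamma_j\) are jointly significant or \(\lambda\) is significant; the symmetric equation for \(\Delta Y_t\) tests the reverse direction, and significance in both directions yields the bi-directional causality reported above.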

Analysis of the applicability of parameter estimation methods for a transient storage model (저장대모형의 매개변수 산정을 위한 최적화 기법의 적합성 분석)

  • Noh, Hyoseob;Baek, Donghae;Seo, Il Won
    • Journal of Korea Water Resources Association
    • /
    • v.52 no.10
    • /
    • pp.681-695
    • /
    • 2019
  • The Transient Storage Model (TSM) is one of the most widely used models accounting for complex solute transport in natural rivers; it characterizes river properties with four key parameters. The TSM parameters are estimated via inverse modeling: parameter estimation is carried out by solving an optimization problem that finds the simulated curve best fitted to the measured curve obtained from a tracer test. Several studies have reported uncertainty in parameter estimation arising from the non-convexity of this problem. In this study, we assessed the best combination of optimization method and objective function for TSM parameter estimation using Cheong-mi Creek tracer test data. In order to find the optimization setting best guaranteeing convergence and speed, Evolutionary Algorithm (EA) based global optimization methods, such as CCE of SCE-UA and MCCE of SP-UCI, and error-based objective functions were compared using the Shuffled Complex-Self Adaptive Hybrid EvoLution (SC-SAHEL) framework. Overall, the results showed that multi-EA SC-SAHEL with the Percent Mean Squared Error (PMSE) objective function is the best optimization setting, being the fastest and most stable in convergence.
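The parameter search described above can be sketched as a toy inverse problem. The PMSE normalization, the two-parameter Gaussian stand-in for the TSM, and the plain random search (used here in place of SC-SAHEL's evolutionary algorithms) are all illustrative assumptions:

```python
import math
import random

def pmse(observed, simulated):
    """Percent mean squared error between measured and simulated curves.
    The study's exact normalization is not given here; MSE as a percentage
    of the squared mean observation is assumed for illustration."""
    n = len(observed)
    mse = sum((o - s) ** 2 for o, s in zip(observed, simulated)) / n
    mean_obs = sum(observed) / n
    return 100.0 * mse / (mean_obs ** 2)

def toy_model(params, times):
    """Hypothetical stand-in for a TSM simulation: a Gaussian-shaped
    breakthrough curve with two parameters (the real TSM has four)."""
    peak_time, spread = params
    return [math.exp(-((t - peak_time) / spread) ** 2) for t in times]

def random_search(observed, times, bounds, n_iter=2000, seed=1):
    """Plain global random search standing in for the evolutionary
    optimizers (CCE, MCCE) compared in the study."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(n_iter):
        cand = [rng.uniform(lo, hi) for lo, hi in bounds]
        err = pmse(observed, toy_model(cand, times))
        if err < best_err:
            best, best_err = cand, err
    return best, best_err

times = [0.5 * i for i in range(20)]
observed = toy_model((4.0, 1.5), times)   # synthetic "measured" curve
est, err = random_search(observed, times, [(0.0, 10.0), (0.5, 5.0)])
print(est, err)
```

The non-convexity mentioned in the abstract shows up here too: many parameter pairs produce similar curves, which is why the choice of optimizer and objective function matters.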

A Study on the Quantitative Analysis of Portable XRF for the Components Analysis of Metal Cultural Heritage (금속문화재 성분분석을 위한 휴대용 XRF 정량분석법 연구)

  • Lim, So-Mang;Kwon, Young-Suk;Cho, Young-Rae;Chung, Won-Sub
    • Journal of Conservation Science
    • /
    • v.37 no.5
    • /
    • pp.451-463
    • /
    • 2021
  • In this study, we conducted component analyses with portable XRF detectors on four Au-Cu alloy standard samples and improved their accuracy by drawing up a calibration curve based on ICP-OES standard values. The portable XRF analysis found absolute errors of 0.3 to 3.7 wt% for Au and 0.2 to 8.2 wt% for Cu, confirming that the error range and standard deviation differed from one detector to another. Furthermore, the calibration curve improved their accuracy, such that the relative error rates of Au and Cu decreased from 9.8% and 14% to 3.5% and 3.7%, respectively. Accordingly, an experiment to verify the calibration curve was conducted using unknown samples, finding that the measured values of the unknown samples fell on the calibration curve. Therefore, to accurately analyze the components of metal cultural heritage items, it is necessary to prepare a calibration curve for each element after checking whether the detector is suitable for the artifact.
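The calibration step amounts to fitting a least-squares line from detector readings to ICP-OES reference values and then correcting new readings with it. A minimal sketch with hypothetical numbers (not the paper's data):

```python
def fit_calibration(measured, reference):
    """Least-squares line reference = a * measured + b, used to correct
    portable-XRF readings against ICP-OES standard values."""
    n = len(measured)
    mx = sum(measured) / n
    my = sum(reference) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(measured, reference))
    sxx = sum((x - mx) ** 2 for x in measured)
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Hypothetical Au readings (wt%) from one detector vs ICP-OES values.
xrf = [18.0, 42.5, 61.0, 83.5]
icp = [20.0, 45.0, 63.0, 85.0]
a, b = fit_calibration(xrf, icp)
corrected = [a * x + b for x in xrf]   # readings after calibration
print(a, b, corrected)
```

As in the study, a separate curve would be fitted per element (Au, Cu) and per detector, since the error range differs between detectors.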

Calculation of surface image velocity fields by analyzing spatio-temporal volumes with the fast Fourier transform (고속푸리에변환을 이용한 시공간 체적 표면유속 산정 기법 개발)

  • Yu, Kwonkyu;Liu, Binghao
    • Journal of Korea Water Resources Association
    • /
    • v.54 no.11
    • /
    • pp.933-942
    • /
    • 2021
  • Surface image velocimetry was developed to measure river flow velocity safely and effectively in the flood season. Among the several surface image velocimetry methods, spatio-temporal image velocimetry is in the spotlight, since it can estimate the mean velocity over a period of time. Because spatio-temporal image velocimetry analyzes a series of images all at once, it greatly reduces analysis time. It has a drawback, however, in finding the main flow direction: if the direction of the spatio-temporal image does not coincide with the main flow direction, it may cause significant error in velocity. The present study proposes a new method to find the main flow direction by applying a fast Fourier transform (FFT) to a spatio-temporal (image) volume, which is constructed by accumulating the river surface images along the time direction. The method consists of two steps: the first finds the main flow direction in the space image, and the second calculates the velocity magnitude in the main flow direction in the spatio-temporal image. In the first step, a time-accumulated image was made from the spatio-temporal volume along the time direction. We analyzed this time-accumulated image using the FFT and determined the main flow direction from the transformed image. Then a spatio-temporal image in the main flow direction was extracted from the spatio-temporal volume. Once again, the spatio-temporal image was analyzed by FFT and velocity magnitudes were calculated from the transformed image. The proposed method was applied to a series of artificial images for error analysis. It was shown that the proposed method could analyze a two-dimensional flow field with fairly good accuracy.
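The core idea, that the spectrum of a pattern moving at constant velocity concentrates along a line whose slope gives that velocity, can be sketched in one spatial dimension plus time. The naive DFT, the image size, and the sinusoidal pattern are illustrative stand-ins for the paper's FFT pipeline:

```python
import cmath
import math

def dft(seq):
    """Naive 1-D discrete Fourier transform (an O(N^2) stand-in for the
    FFT, kept dependency-free for illustration)."""
    n = len(seq)
    return [sum(seq[j] * cmath.exp(-2j * math.pi * k * j / n)
                for j in range(n)) for k in range(n)]

def dft2(image):
    """2-D DFT of image[t][x]: transform each time row, then each column."""
    rows = [dft(row) for row in image]
    cols = [dft([rows[t][k] for t in range(len(rows))])
            for k in range(len(rows[0]))]
    return [[cols[k][w] for k in range(len(cols))] for w in range(len(rows))]

# Synthetic spatio-temporal image: a sinusoidal surface pattern advected
# at a constant velocity V (pixels per frame). Sizes are illustrative.
N, T, V = 32, 32, 2.0
F0 = 4 / N  # spatial frequency of the pattern, cycles per pixel
sti = [[math.cos(2 * math.pi * F0 * (x - V * t)) for x in range(N)]
       for t in range(T)]

spec = dft2(sti)
# Energy of a pattern moving at velocity v lies on the line
# omega + v * k = 0, so the spectral peak location gives the velocity.
peak, best = -1.0, None
for w in range(T):
    for k in range(1, N // 2):       # skip DC and negative spatial freqs
        if abs(spec[w][k]) > peak:
            peak, best = abs(spec[w][k]), (w, k)
w, k = best
omega = w if w <= T // 2 else w - T  # signed temporal frequency bin
velocity = -omega / k
print(velocity)  # 2.0
```

The paper's two-step procedure applies the same principle twice: once on the time-accumulated image to find the flow direction, and once on the spatio-temporal image along that direction to get the velocity magnitude.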

Development and application of automation algorithm for optimal parameter combination in two-dimensional flow analysis model (2차원 흐름해석모형의 매개변수 최적조합결정 자동화 알고리즘의 개발과 적용)

  • An, Sehyuck;Shin, Eun-taek;Song, Chang Geun;Park, Sungwon
    • Journal of Korea Water Resources Association
    • /
    • v.56 no.spc1
    • /
    • pp.1007-1014
    • /
    • 2023
  • Two-dimensional flow analysis, a fundamental component of hydrodynamics, plays a pivotal role in numerically simulating fluid behavior in rivers and waterways. This modeling approach relies heavily on parameters such as eddy viscosity and roughness coefficient to accurately represent flow characteristics. Therefore, an appropriate combination of parameters is very important for accurately simulating flow characteristics. In this study, an automation algorithm was developed and applied to find the optimal combination of parameters. Previously, when applying a two-dimensional flow analysis model, researchers usually depended on an empirical approach, which caused many difficulties in finding optimal parameter values. Using experimental data, we tracked the errors resulting from various parameter combinations and applied an algorithm, implemented in the Python language, that can determine the optimal combination of parameters. The automation algorithm can easily determine the most accurate combination by comparing the flow velocity errors of the two-dimensional flow analysis results across 121 (11×11) parameter combinations. The automation algorithm is thus expected to be highly useful for promptly and straightforwardly determining the parameter combination with the smallest error.
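The 121-combination search can be sketched as a simple nested grid search in Python. The simulate_velocity stand-in and the parameter ranges are hypothetical, since the real study runs a full 2-D flow model for each combination:

```python
def simulate_velocity(eddy_viscosity, roughness):
    """Hypothetical stand-in for one 2-D flow analysis run: returns a
    simulated velocity for a given parameter pair. The real study runs
    a full hydrodynamic model at this step."""
    return (1.0 - 0.5 * eddy_viscosity) / (30.0 * roughness)

def grid_search(measured, ev_values, n_values):
    """Evaluate every (eddy viscosity, roughness coefficient) combination
    and keep the pair whose velocity error is smallest."""
    best, best_err = None, float("inf")
    for ev in ev_values:
        for n in n_values:
            err = abs(simulate_velocity(ev, n) - measured)
            if err < best_err:
                best, best_err = (ev, n), err
    return best, best_err

# 11 x 11 = 121 combinations, as in the study; the ranges are illustrative.
ev_values = [0.10 + 0.01 * i for i in range(11)]    # eddy viscosity
n_values = [0.020 + 0.002 * i for i in range(11)]   # roughness coefficient
best, err = grid_search(1.2, ev_values, n_values)
print(best, err)
```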

Regional Differentiation of Agrarian Practices in the Late Choson Period as Reflected in Wu Ha-Young's Cheonilrok ("천일록(千一錄)"을 통해 본 조선후기 농업의 지역적 특성)

  • Jung, Chi-Young
    • Journal of the Korean association of regional geographers
    • /
    • v.9 no.2
    • /
    • pp.119-134
    • /
    • 2003
  • This paper analyzes Wu Ha-Young's Cheonilrok in order to reconstruct the regional characteristics of farming in the late 18th-century Korean countryside. This objective is approached through the examination of various indices drawn from the volume, such as environment, distribution of arable lands, major crops, agricultural techniques, and productivity. The main finding of this research is that, unlike today's homogeneous picture of agriculture, quite significant differences in agrarian practices existed across the country in the past. The regional differentiation was attributable foremost to the natural environment. To elaborate, landform, climate and soil influenced the distribution and use of land plots, the kinds of main crops produced, and agricultural productivity. The region-specific agricultural techniques resulted from cumulative processes of trial and error against the given environment. Other social and economic conditions, including population, skill of the peasants, size of landownership, and irrigation facilities, sustained the regional differentiation of agriculture.

  • PDF

Stock Market Forecasting : Comparison between Artificial Neural Networks and Arch Models

  • Merh, Nitin
    • Journal of Information Technology Applications and Management
    • /
    • v.19 no.1
    • /
    • pp.1-12
    • /
    • 2012
  • Data mining is the process of searching and analyzing large quantities of data to find meaningful patterns and rules. The Artificial Neural Network (ANN) is one of the tools of data mining and is becoming very popular for forecasting future values. Some of the areas where it is used are banking, medicine, retailing and fraud detection. In finance, artificial neural networks are used in various disciplines including stock market forecasting. In stock market time series, due to high volatility, it is very important to choose a model which reads volatility and forecasts future values considering volatility as one of the major attributes for forecasting. In this paper, an attempt is made to develop two models - one using a feed-forward back-propagation Artificial Neural Network and the other using the Autoregressive Conditional Heteroskedasticity (ARCH) technique - for forecasting stock market returns. The parameters considered for the design of the optimal ANN model are input and output data normalization, the transfer function and the number of neurons at the input, hidden and output layers, the number of hidden layers, and the values of momentum, learning rate and error tolerance. Simulations have been done using daily closing prices of the Sensex. Stock market returns are chosen as input data and the output is the forecasted return. Simulations of the models have been done using MATLAB® 6.1.0.450 and EViews 4.1. Convergence and performance of the models have been evaluated on the basis of the simulation results. Performance evaluation is done on the basis of the errors calculated between the actual and predicted values.
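Several of the listed design choices (data normalization, transfer function, momentum, learning rate, error tolerance) can be seen together in a minimal backpropagation sketch. The network size, data, and hyperparameter values are illustrative assumptions, not the paper's configuration:

```python
import math
import random

def minmax(xs, lo=0.1, hi=0.9):
    """Min-max normalization into [lo, hi], one of the listed design choices."""
    a, b = min(xs), max(xs)
    return [lo + (hi - lo) * (x - a) / (b - a) for x in xs]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(xs, ys, n_hidden=4, lr=0.3, momentum=0.8, tol=1e-3,
          epochs=5000, seed=0):
    """One-hidden-layer feed-forward network trained by on-line
    backpropagation with a momentum term, stopping at an error tolerance.
    A toy stand-in for the paper's ANN, not its actual configuration."""
    rng = random.Random(seed)
    w1 = [[rng.uniform(-0.5, 0.5), rng.uniform(-0.5, 0.5)]
          for _ in range(n_hidden)]                           # [weight, bias]
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden + 1)]  # + output bias
    v1 = [[0.0, 0.0] for _ in range(n_hidden)]                # momentum terms
    v2 = [0.0] * (n_hidden + 1)
    mse = float("inf")
    for _ in range(epochs):
        mse = 0.0
        for x, y in zip(xs, ys):
            h = [sigmoid(w * x + b) for w, b in w1]
            out = sigmoid(sum(w2[i] * h[i] for i in range(n_hidden)) + w2[-1])
            err = y - out
            mse += err * err
            d_out = err * out * (1.0 - out)
            for i in range(n_hidden):
                d_h = d_out * w2[i] * h[i] * (1.0 - h[i])
                v2[i] = momentum * v2[i] + lr * d_out * h[i]
                w2[i] += v2[i]
                v1[i][0] = momentum * v1[i][0] + lr * d_h * x
                v1[i][1] = momentum * v1[i][1] + lr * d_h
                w1[i][0] += v1[i][0]
                w1[i][1] += v1[i][1]
            v2[-1] = momentum * v2[-1] + lr * d_out
            w2[-1] += v2[-1]
        mse /= len(xs)
        if mse < tol:
            break
    return mse

# Toy "previous return -> next return" pairs (illustrative numbers, not
# Sensex data), normalized before training.
raw = [1.5, -1.0, 1.2, -0.8, 0.9, -0.5, 0.6, -0.2]
xs, ys = minmax(raw[:-1]), minmax(raw[1:])
final_mse = train(xs, ys)
print(final_mse)
```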

Prediction of Elementary Students' Computer Literacy Using Neural Networks (신경망을 이용한 초등학생 컴퓨터 활용 능력 예측)

  • Oh, Ji-Young;Lee, Soo-Jung
    • Journal of The Korean Association of Information Education
    • /
    • v.12 no.3
    • /
    • pp.267-274
    • /
    • 2008
  • A neural network is a modeling technique useful for finding hidden patterns in data through a repetitive learning process and for predicting target values for new data. In this study, we built multilayer perceptron neural networks to predict students' computer literacy based on their personal characteristics, home and social environment, and academic record in other subjects. The prediction performance of the network was compared with that of a widely used prediction method, the regression model. From our experiments, it was found that personal characteristic features best explained a student's computer proficiency level, whereas the features of home and social environment resulted in the worst prediction accuracy of all. Moreover, the developed neural network model produced far more accurate predictions than the regression model.

  • PDF
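The performance comparison described above comes down to computing error metrics between actual and predicted values for each model. A minimal sketch with hypothetical literacy scores and predictions:

```python
def mae(actual, predicted):
    """Mean absolute error between actual and predicted values."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root mean squared error between actual and predicted values."""
    return (sum((a - p) ** 2 for a, p in zip(actual, predicted))
            / len(actual)) ** 0.5

# Hypothetical computer-literacy scores and the two models' predictions
# (illustrative numbers, not the study's data).
actual = [72, 85, 90, 64, 78]
nn_pred = [70, 83, 88, 66, 77]    # neural-network predictions
reg_pred = [68, 80, 95, 60, 74]   # regression-model predictions
print(mae(actual, nn_pred), mae(actual, reg_pred))  # 1.8 4.4
print(rmse(actual, nn_pred), rmse(actual, reg_pred))
```

A lower error for the neural network, as in these made-up numbers, is the kind of result the study reports.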