• Title/Summary/Keyword: 오차평가기법 (error evaluation techniques)

Search Result 656, Processing Time 0.03 seconds

A Study on Compilation of Monthly Benchmarked Construction Indicators (벤치마킹 기법을 활용한 월별 건설지표 작성)

  • Min, Kyung-Sam
    • Survey Research / v.10 no.1 / pp.113-139 / 2009
  • It is desirable to use a monthly benchmarked construction indicator that carries the characteristics of the statistical data in an annual survey in order to analyze the cyclical behavior of construction activity. The benchmarked indicator is expected to improve data quality in terms of accuracy, consistency, comparability, and completeness. In this paper, benchmarking methodologies for compiling monthly construction indicators are researched using monthly prompt data holding short-term fluctuations and annual survey data regarded as more accurate statistics than the monthly data. Benchmarking is the methodology by which a high-frequency series is adjusted so that it retains the short-term and cyclical phenomena and the long-term trend of the two data groups while ensuring consistency of the annual sums between the high-frequency and low-frequency data. This paper considered numerical approaches such as the pro rata distribution method, the proportional Denton method, and the EFL or HP filter Benchmark-to-Indicator ratio method, as well as model-based approaches such as the Chow-Lin method and the Fernández method. The benchmarked construction indicators were estimated by the above benchmarking methods with practical data, and these methods were empirically reviewed and compared. For construction indicators with severe seasonal fluctuations and irregulars, the numerical approaches appeared to perform more accurately than the model-based approaches. Among the numerical methods, the proportional Denton method, the one in general use, performed slightly better. The HP filter Benchmark-to-Indicator ratio method may be considered when the annual survey data contain survey or measurement errors.
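Of the numerical methods the abstract compares, pro rata distribution is the simplest: every monthly value is scaled by a single factor so the months sum to the annual benchmark. A minimal sketch (function name and figures are illustrative, not from the paper):

```python
def pro_rata_benchmark(monthly, annual_total):
    """Scale monthly indicator values by one common factor so that
    their sum matches the annual benchmark figure."""
    factor = annual_total / sum(monthly)
    return [m * factor for m in monthly]
```

Because each year gets its own factor, this method can introduce an artificial step between December and January (the "step problem"), which the proportional Denton method smooths away by minimizing period-to-period changes in the benchmark-to-indicator ratio.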


Development of BIM based LID Facilities Supply Auto-checking Module (BIM 기반 LID 시설 물량 자동 검토 모듈 개발)

  • Choi, Junwoo;Jung, Jongsuk;Lim, Seokhwa;Choi, Joungjoo;Kim, Shin;Hyun, Kyounghak
    • Journal of Environmental Impact Assessment / v.26 no.3 / pp.195-206 / 2017
  • Recently, discussion about BIM-based LID (Low Impact Development) facility management systems has become active because interest in LID techniques for urban water cycle restoration is increasing. For this reason, this paper developed an auto-checking module for the BIM (Building Information Model) based supply output table. This module will be the foundation of a BIM-based LID facility total management system. The research proceeded as follows: (1) select the target area, (2) build a BIM model of the LID facilities and extract the supply output table, (3) develop the comparison module, (4) analyze the results. As a result, the authors modeled 27 LID facilities and developed the supply output table comparison automation module. With it, the authors could find differences between the supply output table based on 2D design documents and the BIM-based supply output table, made an improvement suggestion for the design plan, and could lay the foundation of the BIM-based LID facility total management system.
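The comparison step amounts to a per-facility diff between the two quantity tables; a hypothetical sketch (function name and table layout are assumptions, not from the paper):

```python
def diff_supply_tables(table_2d, table_bim):
    """Return facilities whose quantities differ between the table taken
    from 2D design documents and the table extracted from the BIM model.
    Tables are {facility_name: quantity} mappings; a missing facility
    counts as quantity 0."""
    diffs = {}
    for name in sorted(set(table_2d) | set(table_bim)):
        q2d = table_2d.get(name, 0.0)
        qbim = table_bim.get(name, 0.0)
        if q2d != qbim:
            diffs[name] = qbim - q2d  # positive: BIM reports more
    return diffs
```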

Study on the Prediction of Motion Response of Fishing Vessels using Recurrent Neural Networks (순환 신경망 모델을 이용한 소형어선의 운동응답 예측 연구)

  • Janghoon Seo;Dong-Woo Park;Dong Nam
    • Journal of the Korean Society of Marine Environment & Safety / v.29 no.5 / pp.505-511 / 2023
  • In the present study, a deep learning model was established to predict the motion response of small fishing vessels. Hydrodynamic performances were evaluated for two small fishing vessels to build the dataset for the deep learning model. A Long Short-Term Memory (LSTM) network, one of the recurrent neural network architectures, was utilized. The input data of the LSTM model consisted of time series of the six (6) degrees-of-freedom motions and the wave height, and the output label was the time series of the six (6) degrees-of-freedom motions. Hyperparameter and input window length studies were performed to optimize the LSTM model. The time series motion response for different wave directions was predicted by the established LSTM model. The predicted time series motion response showed good overall agreement with the analysis results. As the length of the time series increased, the differences between the predicted values and the analysis results grew, owing to the reduced influence of long-term data in the training process. The overall error of the predicted data indicated that more than 85% of the data showed an error within 10%. The established LSTM model is expected to be utilized in monitoring and alarm systems for small fishing vessels.
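The input/label layout described above is a sliding window over per-timestep feature vectors of the six DOF motions plus wave height; a sketch of the windowing step (feature order and window length are assumptions, not stated in the abstract):

```python
def make_windows(series, window):
    """series: list of per-timestep feature vectors, here assumed to be
    [surge, sway, heave, roll, pitch, yaw, wave_height].
    Each input sample is `window` consecutive vectors; each label is the
    six motion components of the timestep that follows the window."""
    inputs, labels = [], []
    for t in range(len(series) - window):
        inputs.append(series[t:t + window])
        labels.append(series[t + window][:6])  # drop wave height from the label
    return inputs, labels
```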

Respiratory air flow transducer calibration technique for forced vital capacity test (노력성 폐활량검사시 호흡기류센서의 보정기법)

  • Cha, Eun-Jong;Lee, In-Kwang;Jang, Jong-Chan;Kim, Seong-Sik;Lee, Su-Ok;Jung, Jae-Kwan;Park, Kyung-Soon;Kim, Kyung-Ah
    • Journal of the Korea Academia-Industrial cooperation Society / v.10 no.5 / pp.1082-1090 / 2009
  • Peak expiratory flow rate (PEF) is a very important diagnostic parameter obtained from the forced vital capacity (FVC) test. The expiratory flow rate increases during a short initial period and may cause measurement error in PEF, particularly due to the non-ideal dynamic characteristics of the transducer. The present study evaluated the initial rise slope ($S_r$) of the flow rate signal to compensate the transducer output data. The 26 standard signals recommended by the American Thoracic Society (ATS) were generated and flown through the velocity-type respiratory air flow transducer while the transducer output signal was simultaneously acquired. Most PEF values and the corresponding outputs ($N_{PEF}$) were well fitted by a quadratic equation with a high correlation coefficient of 0.9997, but two signals (ATS #2 and #26) resulted in significant deviations of $N_{PEF}$, with relative errors >10%. The relationship between the relative error in $N_{PEF}$ and $S_r$ was found to be linear, and the $N_{PEF}$ data were compensated on that basis. As a result, the 99% confidence interval of the PEF error turned out to be approximately 2.5%, less than a quarter of the 10% upper limit recommended by the ATS. Therefore, the present compensation technique proved to be very accurate, complying with the international ATS standard, and would be useful for calibrating respiratory air flow transducers.
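Under the linear error model the study reports, the compensation step reduces to removing the modeled relative error from the raw output; a sketch with hypothetical fit coefficients `a` and `b` (the paper's actual coefficients are not given in the abstract):

```python
def compensate_npef(n_pef_raw, s_r, a, b):
    """Model the relative error of the transducer output as
    err = a * s_r + b, linear in the initial rise slope s_r,
    then divide the raw reading by (1 + err) to recover the
    compensated value. a and b are illustrative fit constants."""
    rel_err = a * s_r + b
    return n_pef_raw / (1.0 + rel_err)
```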

Performance Improvement of Speaker Recognition by MCE-based Score Combination of Multiple Feature Parameters (MCE기반의 다중 특징 파라미터 스코어의 결합을 통한 화자인식 성능 향상)

  • Kang, Ji Hoon;Kim, Bo Ram;Kim, Kyu Young;Lee, Sang Hoon
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.6 / pp.679-686 / 2020
  • In this thesis, an enhanced feature extraction method for vocal source signals and a score combination method using MCE-based weight estimation over the scores of multiple feature vectors are proposed to improve the performance of speaker recognition systems. The proposed feature vector is composed of perceptual linear predictive cepstral coefficients, skewness, and kurtosis extracted from low-pass filtered glottal flow signals, so as to eliminate the flat spectrum region, which carries no meaningful information. The proposed feature was used to improve a conventional speaker recognition system based on mel-frequency cepstral coefficients and perceptual linear predictive cepstral coefficients extracted from the speech signals, with Gaussian mixture models. In addition, to increase the reliability of the estimated scores, instead of estimating the weight from the probability distribution of the conventional score, the scores evaluated with the conventional vocal tract features and with the proposed feature are fused by the MCE-based score combination method to find the optimal speaker. The experimental results showed that the proposed feature vectors contain valid information for recognizing the speaker. In addition, when speaker recognition was performed by combining the MCE-based multiple feature parameter scores, the recognition system outperformed the conventional one, particularly with small numbers of Gaussian mixtures.
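The fusion step combines each speaker's per-feature scores linearly; the speaker with the highest fused score wins. A sketch assuming the weights have already been estimated by MCE training (names and values are illustrative):

```python
def fuse_and_decide(score_table, weights):
    """score_table: {speaker_id: [score_feature1, score_feature2, ...]}.
    Fuse each speaker's per-feature scores with the given weights and
    return the speaker with the highest combined score."""
    fused = {
        spk: sum(w * s for w, s in zip(weights, scores))
        for spk, scores in score_table.items()
    }
    return max(fused, key=fused.get)
```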

Building a Model to Estimate Pedestrians' Critical Lags on Crosswalks (횡단보도에서의 보행자의 임계간격추정 모형 구축)

  • Kim, Kyung Whan;Kim, Daehyon;Lee, Ik Su;Lee, Deok Whan
    • KSCE Journal of Civil and Environmental Engineering Research / v.29 no.1D / pp.33-40 / 2009
  • The critical lag of crosswalk pedestrians is an important parameter in analyzing traffic operation at unsignalized crosswalks; however, there has been little research in this field in Korea. The purpose of this study is to develop a model to estimate the critical lag. Among the elements that influence the critical lag, the age of pedestrians and the length of the crosswalk, which have fuzzy characteristics, were considered, and each rejected or accepted lag was collected on crosswalks with lengths ranging from 3.5 m to 10.5 m. The observed critical lags range from 2.56 sec to 5.56 sec. Age and crosswalk length were each divided into three fuzzy variables, and the critical lag of each case was estimated by Raff's technique, yielding a total of 9 fuzzy rules. Based on the rules, an ANFIS (Adaptive Neuro-Fuzzy Inference System) model to estimate the critical lag was built. The predictive ability of the model was evaluated by comparing the observed critical lags with those estimated by the model. The $R^2$, MAE, and MSE statistics are 0.96, 0.097, and 0.015 respectively, so the model is judged to explain the results well. During this study, it was found that the critical lag increases rapidly beyond a pedestrian age of 40 years.
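Raff's technique, used above to estimate the critical lag for each fuzzy case, takes the critical value as the lag length at which the cumulative share of accepted lags no longer than t equals the share of rejected lags longer than t. A minimal sketch with made-up lag samples:

```python
def raff_critical_lag(accepted, rejected):
    """Raff's method: scan candidate lag lengths and return the one where
    the fraction of accepted lags <= t is closest to the fraction of
    rejected lags > t (the crossing of the two cumulative curves)."""
    candidates = sorted(set(accepted) | set(rejected))
    best_t, best_gap = None, float("inf")
    for t in candidates:
        f_acc = sum(a <= t for a in accepted) / len(accepted)
        f_rej = sum(r > t for r in rejected) / len(rejected)
        if abs(f_acc - f_rej) < best_gap:
            best_gap, best_t = abs(f_acc - f_rej), t
    return best_t
```

A finer estimate interpolates between the two candidate lags that bracket the crossing, but the discrete scan above conveys the idea.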

Error Analysis of Delivered Dose Reconstruction Using Cone-beam CT and MLC Log Data (콘빔 CT 및 MLC 로그데이터를 이용한 전달 선량 재구성 시 오차 분석)

  • Cheong, Kwang-Ho;Park, So-Ah;Kang, Sei-Kwon;Hwang, Tae-Jin;Lee, Me-Yeon;Kim, Kyoung-Joo;Bae, Hoon-Sik;Oh, Do-Hoon
    • Progress in Medical Physics / v.21 no.4 / pp.332-339 / 2010
  • We aimed to set up an adaptive radiation therapy platform using cone-beam CT (CBCT) and multileaf collimator (MLC) log data, and to analyze the trend of dose calculation errors during the procedure based on a phantom study. We took CT and CBCT images of a Catphan-600 (The Phantom Laboratory, USA) phantom and made a simple step-and-shoot intensity-modulated radiation therapy (IMRT) plan based on the CT. The original plan doses were recalculated based on the CT ($CT_{plan}$) and the CBCT ($CBCT_{plan}$). The delivered monitor unit weights and leaf positions during beam delivery for each MLC segment were extracted from the MLC log data, and we reconstructed the delivered doses based on the CT ($CT_{recon}$) and the CBCT ($CBCT_{recon}$) using the extracted information. Dose calculation errors were evaluated by two-dimensional dose discrepancies (with $CT_{plan}$ as the benchmark), the gamma index, and dose-volume histograms (DVHs). From the dose differences and DVHs, the delivered dose was estimated to be slightly, but insignificantly, greater than the planned dose. The gamma index results showed that dose calculation errors on the CBCT, using planned or reconstructed data, were relatively greater than for the CT-based calculation. In addition, there were significant discrepancies at the edges of each beam, although these were smaller than the errors due to the inconsistency between CT and CBCT. $CBCT_{recon}$ showed the coupled effect of the two kinds of errors above; nevertheless, the total error decreased even though the overall uncertainty in evaluating the delivered dose on the CBCT increased. Therefore, it is necessary to evaluate dose calculation errors separately as setup error, dose calculation error due to CBCT image quality, and reconstructed dose error, which is what we actually want to know.
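The gamma index used in the evaluation combines dose difference and distance-to-agreement into a single pass/fail metric (gamma <= 1 passes). A 1D sketch of the standard formulation (grid spacing and tolerances here are illustrative, not the paper's settings):

```python
import math

def gamma_1d(ref, ev, spacing, dose_tol, dist_tol):
    """For each reference point, take the minimum over all evaluated points
    of sqrt((dose difference / dose_tol)^2 + (distance / dist_tol)^2).
    ref, ev: dose samples on a common grid with the given spacing."""
    gammas = []
    for i, dr in enumerate(ref):
        best = min(
            math.sqrt(((de - dr) / dose_tol) ** 2
                      + ((j - i) * spacing / dist_tol) ** 2)
            for j, de in enumerate(ev)
        )
        gammas.append(best)
    return gammas
```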

Evaluation of Setup Uncertainty on the CTV Dose and Setup Margin Using Monte Carlo Simulation (몬테칼로 전산모사를 이용한 셋업오차가 임상표적체적에 전달되는 선량과 셋업마진에 대하여 미치는 영향 평가)

  • Cho, Il-Sung;Kwark, Jung-Won;Cho, Byung-Chul;Kim, Jong-Hoon;Ahn, Seung-Do;Park, Sung-Ho
    • Progress in Medical Physics / v.23 no.2 / pp.81-90 / 2012
  • The effect of setup uncertainties on the CTV dose and the correlation between setup uncertainties and the setup margin were evaluated by Monte Carlo based numerical simulation. Patient-specific information from an IMRT treatment plan for rectal cancer designed on the VARIAN Eclipse planning system, including the planned dose distribution and tumor volume information, was used in the Monte Carlo simulation program. The simulation program was developed for the purposes of the study on a Linux environment using open source packages, GNU C++ and the ROOT data analysis framework. All misalignments of patient setup were assumed to follow the central limit theorem, so systematic and random errors were generated according to Gaussian statistics with a given standard deviation as the simulation input parameter. After the setup error simulations, the change of dose in the CTV volume was analyzed from the simulation results. In order to verify the conventional margin recipe, the correlation between setup error and setup margin was compared with the margin formula developed for three-dimensional conformal radiation therapy. The simulation was performed a total of 2,000 times for each simulation input of systematic and random errors independently. The standard deviation used for generating patient setup errors was varied from 1 mm to 10 mm in 1 mm steps. For systematic errors, the minimum CTV dose $D_{min}^{syst}$ decreased from 100.4% to 72.50% and the mean dose $\bar{D}_{syst}$ decreased from 100.45% to 97.88%, while the standard deviation of the dose distribution in the CTV volume increased from 0.02% to 3.33%. Random errors likewise reduced the mean and minimum dose to the CTV volume: the minimum CTV dose $D_{min}^{rand}$ was reduced from 100.45% to 94.80% and the mean CTV dose $\bar{D}_{rand}$ decreased from 100.46% to 97.87%. As with systematic errors, the standard deviation of the CTV dose ${\Delta}D_{rand}$ increased from 0.01% to 0.63%. After calculating a margin size for each systematic and random error, the "population ratio" was introduced and applied to verify the margin recipe. It was found that the conventional margin formula satisfies the margin objective for IMRT treatment of rectal cancer. The developed Monte Carlo based simulation program is considered potentially useful for studying patient setup errors and dose coverage of the CTV volume under varying margin sizes and setup errors.
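The error-generation step described above can be pictured as drawing Gaussian setup shifts and evaluating a dose model at each shifted position. A sketch in which `dose_at` is a hypothetical stand-in for the planned dose lookup (the real program works on a 3D dose grid):

```python
import random
import statistics

def simulate_setup_errors(n_trials, sigma_mm, dose_at, seed=0):
    """Draw n_trials Gaussian setup shifts with standard deviation sigma_mm
    and record the CTV dose the dose model returns for each shifted setup.
    Returns (minimum dose, mean dose, standard deviation of dose)."""
    rng = random.Random(seed)  # seeded for reproducibility
    doses = [dose_at(rng.gauss(0.0, sigma_mm)) for _ in range(n_trials)]
    return min(doses), statistics.mean(doses), statistics.pstdev(doses)
```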

INTEGRATED RAY TRACING MODEL FOR END-TO-END PERFORMANCE VERIFICATION OF AMON-RA INSTRUMENT (AMON-RA 광학계를 활용한 통합적 광선 추적 기법의 지구 반사율 측정 성능 검증)

  • Lee, Jae-Min;Park, Won-Hyun;Ham, Sun-Jeong;Yi, Hyun-Su;Yoon, Jee-Yeon;Kim, Sug-Whan;Choi, Ki-Hyuk;Kim, Zeen-Chul;Lockwood, Mike
    • Journal of Astronomy and Space Sciences / v.24 no.1 / pp.69-78 / 2007
  • The international EARTHSHINE mission is to measure the 1% anomaly of the Earth's global albedo and the total solar irradiance (TSI) using the Amon-Ra instrument around Lagrange point 1. We developed a new ray tracing based integrated end-to-end simulation tool that overcomes the shortcomings of existing end-to-end performance simulation techniques. We then studied the in-orbit radiometric performance of the breadboard Amon-Ra visible channel optical system. The TSI variation and the Earth albedo anomaly, reported elsewhere, were used as the key input variables in the simulation. The output flux at the instrument focal plane confirms that the integrated ray tracing based end-to-end science simulation delivers the correct level of incident power to the Amon-Ra instrument, well within the required measurement error budget of better than ${\pm}0.28%$. Using the global angular distribution model (ADM), the incident flux is then used to estimate the Earth's global albedo and the TSI variation, confirming the validity of the primary science cases at the L1 halo orbit. These results imply that the integrated end-to-end ray tracing technique reported here can serve as an effective and powerful building block of the on-line science analysis tool supporting the international EARTHSHINE mission currently being developed.

Evaluation of Size for Crack around Rivet Hole Using Lamb Wave and Neural Network (초음파 판파와 신경회로망 기법을 적용한 리뱃홀 부위의 균열 크기 평가)

  • Choi, Sang-Woo;Lee, Joon-Hyun
    • Journal of the Korean Society for Nondestructive Testing / v.21 no.4 / pp.398-405 / 2001
  • The rivet joint has the typical structural feature that it can be an initiation site for fatigue cracks, owing to the combination of local stress concentration around the rivet hole and moisture trapping. From the viewpoint of structural assurance, it is crucial to evaluate the size of cracks around rivet holes by appropriate nondestructive evaluation techniques. The Lamb wave, one of the guided waves, offers an efficient tool for nondestructive inspection of plates. Neural networks, considered the most suitable approach for pattern recognition, have been used by researchers in the NDE field to classify different types of flaws and flaw sizes. In this study, crack size evaluation around the rivet hole using a neural network based on the back-propagation algorithm was carried out by extracting features from the ultrasonic Lamb wave for an Al2024-T3 aircraft skin panel. Special attention was paid to reducing the coupling effect between the transducer and the specimen by extracting features related to the time and frequency components of the ultrasonic waveform. It was demonstrated clearly that features extracted from the time and frequency domain data of the Lamb wave signal were very useful for determining the size of cracks initiated from the rivet hole through the neural network.
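As a minimal stand-in for the back-propagation network, a single linear neuron trained by gradient descent illustrates the shape of the feature-to-crack-size mapping (features, targets, and learning rate below are illustrative, not the paper's data):

```python
def train_linear(features, targets, lr=0.05, epochs=2000):
    """Train a single linear neuron (weights + bias) by stochastic gradient
    descent on a squared-error loss, mapping Lamb-wave feature vectors to a
    crack-size estimate. A one-neuron toy, not the paper's full network."""
    n = len(features[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(features, targets):
            pred = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = pred - t  # gradient of 0.5 * err^2 w.r.t. pred
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b
```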
