• Title/Summary/Keyword: Adjustment Models


Analysis of Three Dimensional Position Using Unit Models in Aerial Photogrammetry (단위 모델을 이용한 항공 사진의 3차원 위치 해석)

  • 강인준;유복모
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.4 no.1 / pp.49-57 / 1986
  • Block adjustment procedures are usually classified into three groups according to the smallest unit: the bundle, the model, or the strip. In this paper, three-dimensional position analysis using a unit model in aerial photogrammetry is studied, and error distributions are analyzed with respect to the control point patterns.


Improving the Performance of Risk-adjusted Mortality Modeling for Colorectal Cancer Surgery by Combining Claims Data and Clinical Data

  • Jang, Won Mo;Park, Jae-Hyun;Park, Jong-Hyock;Oh, Jae Hwan;Kim, Yoon
    • Journal of Preventive Medicine and Public Health / v.46 no.2 / pp.74-81 / 2013
  • Objectives: The objective of this study was to evaluate the performance of risk-adjusted mortality models for colorectal cancer surgery. Methods: We investigated patients (n=652) who had undergone colorectal cancer surgery (colectomy, colectomy of the rectum and sigmoid colon, total colectomy, total proctectomy) at five teaching hospitals during 2008. Mortality was defined as 30-day or in-hospital surgical mortality. Risk-adjusted mortality models were constructed using claims data (basic model) with the addition of TNM staging (TNM model), physiological data (physiological model), surgical data (surgical model), or all clinical data (composite model). Multiple logistic regression analysis was performed to develop the risk-adjustment models. To compare the performance of the models, both the c-statistic with Hanley-McNeil pair-wise testing and the ratio of observed to expected mortality within quartiles of mortality risk were evaluated, assessing discrimination and calibration respectively. Results: The physiological model (c=0.92), surgical model (c=0.92), and composite model (c=0.93) displayed a similar improvement in discrimination, whereas the TNM model (c=0.87) displayed little improvement over the basic model (c=0.86). The discriminatory power of the models did not differ by the Hanley-McNeil test (p>0.05). Within each quartile of mortality risk, the composite and surgical models displayed an observed-to-expected mortality ratio close to 1. Conclusions: The addition of clinical data to claims data efficiently enhances the performance of risk-adjusted postoperative mortality models in colorectal cancer surgery. We recommend that the performance of such models be evaluated through both discrimination and calibration.
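
A minimal sketch of the discrimination and calibration checks described in this abstract, on synthetic data: a logistic risk model, its c-statistic, and the observed-to-expected (O/E) mortality ratio within quartiles of predicted risk. All variable names and data are illustrative, not the study's.

```python
# Sketch: risk-adjusted mortality model with discrimination (c-statistic)
# and calibration (O/E ratio by risk quartile) checks on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 652
X = rng.normal(size=(n, 5))                     # claims + clinical covariates (synthetic)
true_logit = X @ np.array([0.8, -0.5, 0.6, 0.3, -0.4]) - 3.0
y = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))  # 30-day mortality (0/1)

model = LogisticRegression(max_iter=1000).fit(X, y)
p = model.predict_proba(X)[:, 1]                # expected (predicted) mortality risk

print("c-statistic:", roc_auc_score(y, p))      # discrimination

# Calibration: observed/expected mortality within quartiles of predicted risk
quartile = np.digitize(p, np.quantile(p, [0.25, 0.5, 0.75]))
for q in range(4):
    mask = quartile == q
    oe = y[mask].mean() / p[mask].mean()
    print(f"quartile {q + 1}: O/E = {oe:.2f}")  # close to 1 means well calibrated
```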

Data Envelopment Analysis with Imprecise Data Based on Robust Optimization (부정확한 데이터를 가지는 자료포락분석을 위한 로버스트 최적화 모형의 적용)

  • Lim, Sungmook
    • Journal of Korean Society of Industrial and Systems Engineering / v.38 no.4 / pp.117-131 / 2015
  • Conventional data envelopment analysis (DEA) models require that inputs and outputs be given as crisp values. Very often, however, some inputs and outputs are imprecise and known only to lie within bounded intervals. While a typical approach to addressing this situation for optimization models such as DEA is to conduct sensitivity analysis, that provides only a limited ex-post measure against data imprecision. Robust optimization provides a more effective ex-ante measure, in which the data imprecision is directly incorporated into the model. This study aims to apply the robust optimization approach to DEA models with imprecise data. Based upon a recently developed robust optimization framework that allows flexible adjustment of the level of conservatism, we propose two robust optimization DEA formulations with imprecise data: a multiplier model and an envelopment model. We demonstrate that the two models consider different risks regarding imprecise efficiency scores, and that existing DEA models with imprecise data are special cases of the proposed models. We show that robust optimization for the multiplier DEA model considers the risk that estimated efficiency scores exceed the true values, while that for the envelopment DEA model deals with the risk that estimated efficiency scores fall short of the true values. We also show that efficiency scores stratified in terms of probabilistic bounds on constraint violations can be obtained from the proposed models. We finally illustrate the proposed approach using a sample data set and show how the results can be used for ranking DMUs.
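
For orientation, a sketch of the conventional (crisp) CCR multiplier DEA model that the robust formulations above generalize, solved as a linear program; the data and DMU set are made up for illustration.

```python
# Sketch: crisp CCR multiplier DEA model as an LP (the baseline that
# robust DEA formulations with interval data extend).
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0]])  # inputs,  rows = DMUs
Y = np.array([[1.0], [2.0], [1.5]])                  # outputs, rows = DMUs

def ccr_efficiency(k):
    """Efficiency of DMU k: max u.y_k s.t. v.x_k = 1, u.Y_j - v.X_j <= 0."""
    m, s = X.shape[1], Y.shape[1]
    c = np.concatenate([np.zeros(m), -Y[k]])         # linprog minimizes -u.y_k
    A_ub = np.hstack([-X, Y])                        # u.Y_j - v.X_j <= 0 for all j
    b_ub = np.zeros(X.shape[0])
    A_eq = np.concatenate([X[k], np.zeros(s)]).reshape(1, -1)  # v.x_k = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (m + s))
    return -res.fun

for k in range(X.shape[0]):
    print(f"DMU {k}: efficiency = {ccr_efficiency(k):.3f}")
```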

A Study for Software Sizing Method (소프트웨어 규모 측정 방법 연구)

  • 박석규;박중양
    • Journal of the Korea Computer Industry Society / v.5 no.4 / pp.471-480 / 2004
  • The capability to estimate software effort, duration, and cost depends on an accurate size estimate of the software to be developed. A simplified function point (FP) approach to software size estimation is described, which skips the computation step for the value adjustment factor and thus obtains the final adjusted FP directly from the unadjusted FP. The research seeks suitable models based on statistical regression in a case study of 783 software projects. Models are also built for subsets of projects by development type: new development, enhancement, and re-development.
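
A minimal sketch of the simplified approach described above: regress final adjusted function points directly on unadjusted function points, skipping the value adjustment factor. The data and fitted coefficients are illustrative, not the paper's regression on the 783 projects.

```python
# Sketch: linear regression of adjusted FP (AFP) on unadjusted FP (UFP),
# bypassing the value adjustment factor computation.
import numpy as np

ufp = np.array([120.0, 250.0, 480.0, 90.0, 700.0])   # unadjusted FP (synthetic)
afp = np.array([130.0, 260.0, 500.0, 100.0, 720.0])  # adjusted FP (synthetic)

# Ordinary least squares fit: afp ~ b0 + b1 * ufp
b1, b0 = np.polyfit(ufp, afp, 1)
print(f"AFP = {b0:.1f} + {b1:.3f} * UFP")
print("predicted AFP for UFP=300:", b0 + b1 * 300)
```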


Adaptive Exponential Smoothing Method Based on Structural Change Statistics (구조변화 통계량을 이용한 적응적 지수평활법)

  • Kim, Jeong-Il;Park, Dae-Geun;Jeon, Deok-Bin;Cha, Gyeong-Cheon
    • Proceedings of the Korean Operations and Management Science Society Conference / 2006.11a / pp.165-168 / 2006
  • Exponential smoothing methods do not adapt well to unexpected changes in the underlying process. Over the past few decades a number of adaptive smoothing models have been proposed that allow continuous adjustment of the smoothing constant in order to detect unexpected changes much earlier. However, most previous studies presented ad hoc adaptive forecasting procedures without any theoretical background. In this paper, we propose a detection-adaptation procedure applied to the simple and Holt's linear methods. We derive level-change and slope-change detection statistics based on Bayesian statistical theory and present the distribution of the statistics obtained by simulation. The proposed procedure is compared with previous adaptive forecasting models using simulated data and economic time series data.
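
To illustrate the general idea of continuously adjusting the smoothing constant, here is a sketch of a classical adaptive scheme (the Trigg-Leach tracking signal); it is not the Bayesian detection statistic derived in the paper.

```python
# Sketch: adaptive simple exponential smoothing via the Trigg-Leach
# tracking signal (NOT the paper's Bayesian change-detection statistic).
import numpy as np

def adaptive_ses(y, phi=0.2):
    """alpha_t = |smoothed error| / smoothed |error|."""
    level, e_s, ae_s = y[0], 0.0, 1e-8
    forecasts = [level]
    for obs in y[1:]:
        err = obs - level
        e_s = phi * err + (1 - phi) * e_s          # smoothed error
        ae_s = phi * abs(err) + (1 - phi) * ae_s   # smoothed absolute error
        alpha = abs(e_s) / ae_s                    # adaptive smoothing constant
        level = level + alpha * err
        forecasts.append(level)
    return np.array(forecasts)

# A level shift at t=50: the adaptive alpha grows and tracks the new level fast.
rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(10, 1, 50), rng.normal(20, 1, 50)])
print(adaptive_ses(y)[45:55].round(2))
```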


Study on the Optimal Selection of Rotor Track and Balance Parameters using Non-linear Response Models and Genetic Algorithm (로터 트랙 발란스(RTB) 파라미터 최적화를 위한 비선형 모델링 및 GA 기법 적용 연구)

  • Lee, Seong Han;Kim, Chang Joo;Jung, Sung Nam;Yu, Young Hyun;Kim, Oe Cheul
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.44 no.11 / pp.989-996 / 2016
  • This paper develops a rotor track and balance (RTB) algorithm using nonlinear RTB models and a real-coded hybrid genetic algorithm. The RTB response data, computed from trim solutions as the adjustment parameters are varied, are used to build nonlinear RTB models based on quadratic interpolation functions. Nonlinear programming problems that minimize the track deviations and the airframe vibration responses are formulated to find the optimum settings of the balance weights, trim-tab deflections, and pitch-link lengths of each blade. These are efficiently solved using a real-coded genetic algorithm hybridized with particle swarm optimization techniques for convergence acceleration. The nonlinear RTB models and the optimized RTB parameters are compared with those computed using linear models to validate the proposed techniques. The results show that the nonlinear models are more accurate and yield lower RTB responses than their linear counterparts.
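
A toy sketch of the quadratic-interpolation response modeling step for a single adjustment parameter; the paper fits multivariate quadratic models from trim solutions and optimizes them with a hybrid real-coded GA, and the numbers below are invented for illustration.

```python
# Sketch: fit a quadratic response model of track deviation vs. one
# adjustment parameter (e.g., trim-tab deflection) and minimize it.
import numpy as np
from scipy.optimize import minimize_scalar

tab = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])    # tab deflection settings (synthetic)
track = np.array([6.1, 2.9, 1.2, 1.8, 4.6])    # track deviation responses (synthetic)

coeffs = np.polyfit(tab, track, 2)             # quadratic interpolation model
model = np.poly1d(coeffs)

res = minimize_scalar(model, bounds=(-2.0, 2.0), method="bounded")
print(f"optimal tab deflection = {res.x:.2f}, predicted deviation = {res.fun:.2f}")
```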

A Baltic Dry Index Prediction using Deep Learning Models

  • Bae, Sung-Hoon;Lee, Gunwoo;Park, Keun-Sik
    • Journal of Korea Trade / v.25 no.4 / pp.17-36 / 2021
  • Purpose - This study provides useful information to stakeholders by forecasting the tramp shipping market, which is completely competitive and shows huge fluctuations in freight rates due to low barriers to entry. Moreover, this study identifies the most effective parameters for Baltic Dry Index (BDI) prediction and an optimal model by analyzing and comparing deep learning models such as the artificial neural network (ANN), recurrent neural network (RNN), and long short-term memory (LSTM). Design/methodology - This study uses various data models based on big data. The deep learning models considered are specialized for time series data. The study takes three perspectives to identify useful models for time series data, comparing prediction accuracy according to the selection of external variables and across models. Findings - The BDI research reflects the latest trends since 2015, using weekly data from 1995 to 2019 (25 years). Additionally, we searched for the best combination of inputs for BDI forecasting through external factors such as supply, demand, raw materials, and economic indicators. Moreover, we sought to increase BDI prediction accuracy by combining various hard-to-predict external variables with the fundamentals of supply and demand. Originality/value - Unlike previous studies, the BDI forecasts reflect the latest stabilizing trend since 2015. Additionally, we examine how the model's predictive accuracy varies with the input of statistically validated variables. Moreover, we seek the optimal model that minimizes the error value through parameter adjustment in the ANN model. Thus, this study helps future shipping stakeholders make decisions through BDI forecasts.
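
A minimal sketch of an LSTM set up for one-step-ahead forecasting from a sliding window of weekly values, as in the comparison above; the window length, layer size, and synthetic series are assumptions, not the paper's tuned configuration.

```python
# Sketch: one-step-ahead forecasting of a weekly series with an LSTM.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM, Dense

# Synthetic stand-in for ~25 years of weekly BDI-like data.
series = np.sin(np.linspace(0, 60, 1300)) + np.random.normal(0, 0.1, 1300)

window = 12                                   # 12 past weeks -> next week
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., np.newaxis]                        # (samples, timesteps, features)

model = Sequential([
    Input(shape=(window, 1)),
    LSTM(32),
    Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

print("next-week forecast:", model.predict(X[-1:], verbose=0).ravel())
```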

An Approach to Calibrating a Progression Adjustment Factor at Signalized Intersections - Toward Theory of Background - (신호등(信號燈) 연동화보정계수(連動化補正係數) 산출(算出) 모형(模型)의 개발(開發) - 이론적(理論的) 고찰(考察)을 중심(中心)으로 -)

  • Lee, Yong Jae;Choi, Woo Hyuck
    • KSCE Journal of Civil and Environmental Engineering Research / v.14 no.3 / pp.379-390 / 1994
  • Recent delay models have assumed random arrivals with a constant average flow rate throughout the cycle. However, where signals are spaced closely together or form part of a progressive system, platoon flows are common and more closely represent reality. In such cases, the estimated delay pattern differs considerably from the observed one. To address this problem, the 1985 HCM takes a Progression Adjustment Factor (PAF) into account. The 1985 HCM, however, is deficient in how it defines and applies the factor, in particular the platoon ratio ($R_p$) and the platoon arrival type. The purpose of this study is to investigate theoretically the predictive ability of the individual models concerned by comparing the estimated delay and PAF suggested by NCHRP Report 339, the KHCM, or the USHCM (1985) with observations obtained by field survey at a signalized intersection.
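
A hedged sketch of the platoon ratio and a multiplicative progression adjustment, in the spirit of HCM-style delay analysis; the PAF value and its application below are placeholders for illustration, not entries from the 1985 HCM tables.

```python
# Sketch: platoon ratio R_p and a multiplicative progression adjustment.
# The PAF value here is a placeholder, not a value from any HCM table.
def platoon_ratio(p_green, g, C):
    """R_p = P / (g/C): fraction of vehicles arriving on green, normalized
    by the green ratio. R_p > 1 indicates favorable progression."""
    return p_green / (g / C)

g, C = 30.0, 90.0          # effective green and cycle length [s]
p_green = 0.6              # observed fraction of arrivals during green
rp = platoon_ratio(p_green, g, C)

paf = 0.85                 # placeholder PAF for favorable progression
delay = 20.0               # delay from a random-arrival model [s/veh]
print(f"R_p = {rp:.2f}, progression-adjusted delay = {paf * delay:.1f} s/veh")
```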


The Research for Practical Use of GPS/Leveling (GPS/Leveling의 실용적 활용 방안에 관한 연구)

  • Park, Byung-Uk;Choi, Yun-Soo;Shin, Sang-Ho
    • Journal of Korean Society for Geospatial Information Science / v.10 no.2 s.20 / pp.107-114 / 2002
  • This study aimed to estimate the accuracy of GPS/Leveling and to present its usability in public surveying. For this purpose, we carried out GPS surveys of bench marks and control points in the Hongsung area. Orthometric heights calculated by two GPS/Leveling methods were compared with the reference heights: one method is based on geoid models (EGM96, OSU91A, KGEOID99), and the other on network adjustment using fixed points. The geoid-model results show RMSEs of $\pm0.061$ m for EGM96, $\pm0.725$ m for OSU91A, and $\pm0.598$ m for KGEOID99. The network-adjustment results show a best RMSE of $\pm0.043$ m when three fixed bench marks are used, so this method can be applied effectively to leveling. GPS/Leveling should be applicable to fourth-order public leveling and to the height determination of public control points.
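
For reference, the geoid-model variant of GPS/Leveling rests on the standard relation between the GPS-derived ellipsoidal height $h$, the geoid undulation $N$ taken from a model such as EGM96, and the orthometric height $H$ (a textbook relation, not specific to this paper):

$$H \approx h - N$$

so the accuracy of the recovered orthometric height is driven directly by the accuracy of the geoid model, which is consistent with the spread of RMSEs reported above.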


Negative Exponential Disparity Based Deviance and Goodness-of-fit Tests for Continuous Models: Distributions, Efficiency and Robustness

  • Jeong, Dong-Bin;Sahadeb Sarkar
    • Journal of the Korean Statistical Society / v.30 no.1 / pp.41-61 / 2001
  • The minimum negative exponential disparity estimator (MNEDE), introduced by Lindsay (1994), is an excellent competitor to the minimum Hellinger distance estimator (Beran 1977) as a robust and yet efficient alternative to the maximum likelihood estimator in parametric models. In this paper we define the negative exponential deviance test (NEDT) as an analog of the likelihood ratio test (LRT), and show that the NEDT is asymptotically equivalent to the LRT at the model and under a sequence of contiguous alternatives. We establish that the asymptotic strong breakdown point for a class of minimum disparity estimators, containing the MNEDE, is at least 1/2 in continuous models. This result leads us to anticipate robustness of the NEDT under data contamination, and we demonstrate it empirically. In fact, in the simulation settings considered here the empirical level of the NEDT shows more stability than that of the Hellinger deviance test (Simpson 1989). The NEDT is illustrated through an example data set. We also define a goodness-of-fit statistic to assess the adequacy of a specified parametric model, and establish its asymptotic normality under the null hypothesis.
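
For context, a minimal sketch of the disparity framework of Lindsay (1994) under which both estimators arise; the specific residual adjustment function used for the negative exponential disparity should be taken from the paper itself:

$$\rho_G(\hat f, f_\theta) = \int G\bigl(\delta(x)\bigr)\, f_\theta(x)\, dx, \qquad \delta(x) = \frac{\hat f(x)}{f_\theta(x)} - 1,$$

where $\hat f$ is a nonparametric (e.g., kernel) density estimate and $f_\theta$ the model density. Choosing $G(\delta) = 2\bigl(\sqrt{\delta+1}-1\bigr)^2$ recovers the (twice-squared) Hellinger distance, and the minimum disparity estimator minimizes $\rho_G$ over $\theta$.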
