• Title/Summary/Keyword: Error Criteria

A UNIFIED CONVERGENCE ANALYSIS FOR SECANT-TYPE METHODS

  • Argyros, Ioannis Konstantinos;Magrenan, Angel Alberto
    • Journal of the Korean Mathematical Society / v.51 no.6 / pp.1155-1175 / 2014
  • We present a unified local and semilocal convergence analysis for secant-type methods for approximating a locally unique solution of a nonlinear equation in a Banach space setting. Our analysis includes the computation of bounds on the limit points of the majorizing sequences involved. Under the same computational cost, our semilocal convergence criteria can be weaker, the error bounds more precise, and, in the local case, the convergence balls larger and the error bounds tighter than in earlier studies such as [1-3,7-14,16,20,21], at least for Newton's method and the secant method. Numerical examples are also presented to illustrate the theoretical results obtained in this study.
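
To make the setting concrete, here is a minimal numerical sketch (mine, not the authors') of the classical secant iteration the paper analyzes, written for a scalar equation f(x) = 0; the paper itself works with divided-difference operators in a general Banach space, which this toy example does not attempt to reproduce.

```python
# Hedged sketch: the scalar secant iteration
#   x_{n+1} = x_n - f(x_n) * (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1})).
# Function, starting points, and tolerances below are illustrative only.

def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Approximate a root of f, starting from the two points x0 and x1."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:                       # divided difference degenerates
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

# Example: the real root of x^3 - 2 = 0 is about 1.259921
print(secant(lambda x: x**3 - 2.0, 1.0, 2.0))
```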

Improved SE SD Algorithm based on MMSE for MIMO Detection (MIMO 검파를 위한 MMSE 기반의 향상된 SE SD 알고리듬)

  • Cho, Hye-Min;Park, Soon-Chul;Han, Dong-Seog
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.3A / pp.231-237 / 2010
  • Multi-input multi-output (MIMO) systems are used to improve the transmission rate in proportion to the number of antennas; however, detection at the receiver is computationally very expensive. Sphere decoding (SD) is a detection algorithm with reduced complexity. In this paper, an improved Schnorr-Euchner SD (SE SD) is proposed, based on the minimum mean square error (MMSE) and Euclidean distance criteria, without additional complexity.
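
As a rough illustration of the MMSE criterion mentioned above, the sketch below shows the MMSE filtering-and-slicing step that sphere decoders such as SE SD commonly use to obtain an initial estimate and search center; it is not the authors' improved algorithm, and all names and dimensions are assumptions.

```python
# Hedged sketch of MMSE detection for a MIMO system y = H x + n.
import numpy as np

def mmse_detect(H, y, noise_var, constellation):
    """MMSE filtering followed by slicing each symbol to the nearest constellation point."""
    nt = H.shape[1]
    # MMSE filter: W = (H^H H + sigma^2 I)^{-1} H^H
    W = np.linalg.solve(H.conj().T @ H + noise_var * np.eye(nt), H.conj().T)
    z = W @ y                                   # soft symbol estimates
    # slice each estimate to the closest constellation symbol (Euclidean distance)
    return np.array([constellation[np.argmin(np.abs(constellation - s))] for s in z])

# Example: 2x2 MIMO with QPSK symbols (all values illustrative)
rng = np.random.default_rng(0)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
x = qpsk[rng.integers(0, 4, size=2)]
y = H @ x + 0.05 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
print(mmse_detect(H, y, noise_var=0.005, constellation=qpsk), x)
```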

Error Analysis of the Image Measurement System (영상 측정 시스템의 오차 분석)

  • 김준희;유은이;사승윤;김광래;유봉환
    • Proceedings of the Korean Society of Precision Engineering Conference / 1996.11a / pp.490-495 / 1996
  • Although the use of computer vision systems in modern industry is increasing, precise measurement remains difficult because of distortion-induced measurement error. Among its causes, edge distortion arising from blurred images is dominant; blurring occurs when the camera cannot discriminate its precise focus. Calibrating and generalizing this distortion is therefore important, and discrimination criteria must be established from images acquired at precise focus. In addition, radial distortion displaces a given image point inward or outward from its ideal location; this type of distortion is mainly caused by a flawed radial curvature of the lens elements. We therefore analyzed the distortion with respect to changes in lens magnification.
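
The abstract does not state the distortion model used; the standard polynomial model of radial distortion, which describes the inward or outward displacement mentioned above, can be written as follows, with k_1 and k_2 lens-dependent coefficients whose sign determines barrel versus pincushion distortion.

```latex
% Standard polynomial radial distortion model (illustrative; not necessarily the paper's):
% an undistorted image point (x_u, y_u) at radius r is displaced to (x_d, y_d).
\begin{aligned}
r^2 &= x_u^2 + y_u^2, \\
x_d &= x_u\,(1 + k_1 r^2 + k_2 r^4), \qquad
y_d  = y_u\,(1 + k_1 r^2 + k_2 r^4).
\end{aligned}
```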

Testing for Lack of Fit via the Generalized Neyman Smooth Test

  • Lee, Geung-Hee
    • Journal of the Korean Statistical Society / v.27 no.3 / pp.305-318 / 1998
  • Smooth tests based on an $L_2$ error between a truncated Fourier series estimator and the true function have shown good power against a wide class of alternatives. These tests have the same form as the Neyman smooth test, whose performance depends on the selected order, the basis, and the form of the estimator. We construct flexible data-driven Neyman smooth tests by changing the basis and by combining model selection criteria with different series estimators. A simulation study shows that the generalized Neyman smooth test with the best basis provides good power against a wider class of alternatives than other data-driven Neyman smooth tests based on a fixed form of estimator, a fixed basis, and a fixed criterion.
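
For reference, the order-$k$ Neyman smooth statistic that these tests share can be written as below, with $\{\phi_j\}$ an orthonormal basis on $[0,1]$ and $U_i = F_0(X_i)$ the probability-integral-transformed observations (notation mine; the paper varies the basis, the series estimator, and the order-selection criterion).

```latex
% Order-k Neyman smooth statistic; under the null hypothesis it is
% asymptotically chi-squared with k degrees of freedom, and data-driven
% versions choose k by a model selection criterion.
T_k \;=\; \sum_{j=1}^{k} \left( \frac{1}{\sqrt{n}} \sum_{i=1}^{n} \phi_j(U_i) \right)^{2},
\qquad U_i = F_0(X_i).
```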

Comparative Study of Map Generalization Algorithms with Different Tolerances (임계치 설정에 따른 지도 일반화 기법의 성능 비교 연구)

  • Lee, Jae-Eun;Park, Woo-Jin;Yu, Ki-Yun
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2010.04a / pp.19-21 / 2010
  • In this study we analyze how different tolerances influence the performance of linear generalization operators in map generalization. For the analysis, we apply generalization operators, specifically two simplification algorithms provided in commercial GIS software, to a 1:1000 digital topographic map and examine how the positional error changes with the tolerance, evaluating the changes through quantitative assessments. The results show that this analysis can serve as a criterion for determining a proper tolerance in linear generalization.
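
The abstract does not name the two simplification algorithms; as a point of reference, the sketch below implements Douglas-Peucker simplification, the most common linear simplification operator in commercial GIS packages, with `tolerance` playing the role of the threshold whose setting the study evaluates.

```python
# Hedged sketch of Douglas-Peucker line simplification (illustrative only;
# the operators actually used in the paper's GIS software are not named).
import math

def _point_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * (px - ax) - dx * (py - ay)) / math.hypot(dx, dy)

def douglas_peucker(points, tolerance):
    """Keep only vertices farther than `tolerance` from the current chord, recursively."""
    if len(points) < 3:
        return list(points)
    dists = [_point_line_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] <= tolerance:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:i + 1], tolerance)
    right = douglas_peucker(points[i:], tolerance)
    return left[:-1] + right

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(line, tolerance=0.5))
```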

Evaluations of predicted models fitted for data mining - comparisons of classification accuracy and training time for 4 algorithms (데이터마이닝기법상에서 적합된 예측모형의 평가 -4개분류예측모형의 오분류율 및 훈련시간 비교평가 중심으로)

  • Lee, Sang-Bock
    • Journal of the Korean Data and Information Science Society / v.12 no.2 / pp.113-124 / 2001
  • Four classification models, among them CHAID, logistic regression, and bagging trees, are compared on the SAS artificial data set HMEQ in terms of classification accuracy and training time. Bagging trees achieves the lowest error rate, although its run time is slower than that of the other models; logistic regression has the best run time, but no model is uniformly efficient on both criteria.
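
A rough sketch of this kind of comparison, using scikit-learn rather than SAS, is shown below; CHAID has no scikit-learn implementation, so only logistic regression and bagged trees appear, and the file name and column names for an HMEQ-style data set are assumptions.

```python
# Hedged sketch: compare error rate and training time of two classifiers.
import time
import pandas as pd
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("hmeq.csv").dropna()          # hypothetical local copy of the HMEQ data
X = pd.get_dummies(df.drop(columns=["BAD"]))   # "BAD" assumed to be the default-flag target
y = df["BAD"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "bagging trees": BaggingClassifier(DecisionTreeClassifier(), n_estimators=50),
}
for name, model in models.items():
    start = time.perf_counter()
    model.fit(X_tr, y_tr)
    elapsed = time.perf_counter() - start
    error_rate = 1.0 - model.score(X_te, y_te)
    print(f"{name}: error rate {error_rate:.3f}, training time {elapsed:.2f}s")
```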

Validation of the Unplugged Robot Education System Capable of Computerless Coding Education

  • Song, Jeong-Beom;Lee, Tae-Wuk
    • Journal of the Korea Society of Computer and Information / v.20 no.6 / pp.151-159 / 2015
  • In traditional programming education, computers were used as the main tool, so it was difficult to provide instruction in environments without computers or to learners without computer skills. To address this problem, this study developed and validated an unplugged robot education system capable of computerless programming education. The key feature of the proposed system is that a program can be written simply by connecting programming blocks, shaped as flow-chart symbols, that carry built-in commands. Validation of the system was performed by a specialist group. Validity was very high, with content validity ratio (CVR) values over 0.7 on all evaluation criteria except "Ease of error debugging" and "Linkage to the educational curriculum," whose CVR values were each 0.6. Future work will address the two lower-scoring areas by, respectively, improving the system to support debugging of errors that may occur during programming and developing a user guide to support linkage to the educational curriculum.
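
For reference, the content validity ratio reported above is commonly computed with Lawshe's formula, sketched below; the example numbers are illustrative, chosen so that the result matches the 0.6 case mentioned in the abstract.

```python
# Lawshe's content validity ratio (the usual definition of CVR):
#   CVR = (n_e - N/2) / (N/2),
# where n_e is the number of panelists rating an item "essential"
# and N is the total number of panelists.

def content_validity_ratio(n_essential: int, n_panelists: int) -> float:
    half = n_panelists / 2
    return (n_essential - half) / half

# Example (illustrative): 8 of 10 experts rate an item essential -> CVR = 0.6
print(content_validity_ratio(8, 10))
```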

The Selection of Growth Models in Technological Forecasting

  • Oh, Hyun-Seung
    • Journal of the Korean Operations Research and Management Science Society / v.16 no.1 / pp.120-134 / 1991
  • Various technological forecasting models have been proposed to represent the time pattern of technological growth. Of the six models studied, some perform significantly better than others, especially at low penetration levels, in predicting future levels of growth. Criteria for selecting an appropriate technological growth model are examined in this study. Two major characteristics that differentiate the models were selected: the skew of the curve and the underlying assumptions regarding the variance of the model's error structure. Although the use of statistical techniques still requires some subjective input and interpretation, this study provides practical procedures for selecting technological growth models and helps reduce or control potential sources of judgmental inconsistency in the analyst's decision.
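
The abstract does not list the six models; as an illustration of fitting and comparing growth curves of different skew, the sketch below fits two common technological growth models, the symmetric logistic curve and the skewed Gompertz curve, to synthetic data with SciPy.

```python
# Hedged sketch: fit two S-shaped growth models to a time series and compare fit.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, L, k, t0):
    """Symmetric S-curve with ceiling L, growth rate k, midpoint t0."""
    return L / (1.0 + np.exp(-k * (t - t0)))

def gompertz(t, L, b, c):
    """Skewed S-curve: slower approach to the ceiling L."""
    return L * np.exp(-b * np.exp(-c * t))

t = np.arange(0, 20, 1.0)
y = logistic(t, 100.0, 0.6, 10.0) + np.random.default_rng(1).normal(0, 2, t.size)

for name, model, p0 in [("logistic", logistic, (100, 0.5, 10)),
                        ("Gompertz", gompertz, (100, 5, 0.3))]:
    params, _ = curve_fit(model, t, y, p0=p0, maxfev=10000)
    sse = np.sum((y - model(t, *params)) ** 2)
    print(f"{name}: parameters {np.round(params, 3)}, SSE {sse:.1f}")
```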

A Study on the Methods for Assessing Construct Validity (구성 타당성 평가방법에 관한 연구)

  • 이광희;이선규;장성호
    • Journal of Korean Society of Industrial and Systems Engineering / v.22 no.50 / pp.1-9 / 1999
  • The purpose of this study is to establish a basis for assessing the construct validity of measures used in organizational research. The classic Campbell and Fiske (1959) criteria are found to be lacking in their assumptions, diagnostic information, and power: the inherent confounding of measurement error with systematic trait and method effects is a severe limitation for a proper interpretation of convergent and discriminant validity. The confirmatory factor analysis (CFA) approach overcomes most of the limitations of Campbell and Fiske's method, but it confounds random error with the unique variance specific to a measure. The second-order confirmatory factor analysis (SOCFA) approach was shown to rest on rather restrictive assumptions that are unlikely to be met in practice. The first-order, multiple-informant, multiple-item (FOMIMI) model is a viable option, but it may also be of limited use because of the large number of measures it requires.
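
The contrast drawn above rests on the standard multitrait-multimethod (MTMM) measurement decomposition, written in generic notation below (not the paper's own); Campbell and Fiske's correlation-based criteria cannot separate its three components, while the CFA approach estimates the trait and method loadings explicitly but, as noted, still leaves random error confounded with measure-specific uniqueness in the residual term.

```latex
% Generic CFA decomposition of an MTMM measure x_{ij}: trait i measured by method j,
% with trait factor T_i, method factor M_j, loadings lambda_{ij}, mu_{ij},
% and residual epsilon_{ij} (random error plus unique variance).
x_{ij} \;=\; \lambda_{ij}\, T_i \;+\; \mu_{ij}\, M_j \;+\; \varepsilon_{ij}.
```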

A Sensorless Control of IPMSM using the Adaptive Back-EMF Estimator and Improved Instantaneous Reactive Power Compensator (적응 역기전력 추정기와 개선된 순시 무효전력 보상기를 이용한 돌극형 영구자석 전동기의 센서리스 제어)

  • Lee, Joonmin;Hong, Joo-Hoon;Kim, Young-Seok
    • The Transactions of The Korean Institute of Electrical Engineers / v.65 no.5 / pp.794-803 / 2016
  • This paper proposes a sensorless control system for an IPMSM with an adaptive back-EMF estimator and an improved instantaneous reactive power compensator. A saliency-based back-EMF is estimated using the adaptive algorithm. The estimated back-EMF is fed to a phase-locked loop (PLL) and to the improved instantaneous reactive power (IRP) compensator, which estimate the rotor position/speed and compensate the error between the estimated and actual positions, respectively. The stability of the proposed system is established through Popov's hyperstability criterion. The validity of the proposed algorithm is verified by simulations and experiments.
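
As a rough illustration of the position/speed extraction step, the sketch below implements a simple PI-based phase-locked loop that tracks rotor angle and speed from a back-EMF expressed in the stationary alpha-beta frame; it stands in for neither the paper's adaptive back-EMF estimator nor its IRP compensator, and all gains and waveforms are illustrative.

```python
# Hedged sketch of a PI-based PLL tracking rotor angle from an estimated back-EMF.
import math

def pll_track(e_ab, dt, kp=200.0, ki=20000.0):
    """Track rotor angle from samples (e_alpha, e_beta) = (-E sin(theta), E cos(theta))."""
    theta_hat, omega_hat, integ = 0.0, 0.0, 0.0
    out = []
    for e_a, e_b in e_ab:
        # phase detector: approximately E * sin(theta - theta_hat) near lock
        err = -e_a * math.cos(theta_hat) - e_b * math.sin(theta_hat)
        integ += ki * err * dt
        omega_hat = kp * err + integ          # PI controller -> speed estimate
        theta_hat = (theta_hat + omega_hat * dt) % (2.0 * math.pi)
        out.append((theta_hat, omega_hat))
    return out

# Example: constant electrical speed of 100 rad/s, unit back-EMF, 10 kHz sampling
dt, omega, E = 1e-4, 100.0, 1.0
samples = [(-E * math.sin(omega * k * dt), E * math.cos(omega * k * dt)) for k in range(2000)]
theta_hat, omega_hat = pll_track(samples, dt)[-1]
print(round(theta_hat, 3), round(omega_hat, 1))   # omega_hat should settle near 100
```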