• Title/Summary/Keyword: Error Estimates

Design and Implementation of Pedestrian Position Information System in GPS-disabled Area (GPS 수신불가 지역에서의 보행자 위치정보시스템의 설계 및 구현)

  • Kwak, Hwy-Kuen; Park, Sang-Hoon; Lee, Choon-Woo
    • Journal of the Korea Academia-Industrial cooperation Society / v.13 no.9 / pp.4131-4138 / 2012
  • In this paper, we propose a Pedestrian Position Information System (PPIS) that uses low-cost inertial sensors in GPS-disabled areas. The proposed scheme estimates the pedestrian's attitude/heading angle and detects steps. Additionally, the estimation error due to the inertial sensors is mitigated by using additional sensors. We implemented a portable hardware module to evaluate the performance of the proposed system. In experiments inside a building, the position estimation error was measured as approximately 2.4%.
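The dead-reckoning loop this abstract describes can be summarized in a short sketch: detect steps as peaks in the accelerometer magnitude, then advance the position one stride at a time along the estimated heading. This is a minimal illustration, not the authors' implementation; the threshold, stride length, and function names are assumptions.

```python
import numpy as np

def detect_steps(acc_mag, fs, threshold=1.2, min_interval=0.3):
    """Detect steps as local peaks of the acceleration magnitude (in g)
    above `threshold`, at least `min_interval` seconds apart (assumed values)."""
    min_gap = int(min_interval * fs)
    steps, last = [], -min_gap
    for i in range(1, len(acc_mag) - 1):
        is_peak = acc_mag[i - 1] < acc_mag[i] >= acc_mag[i + 1]
        if is_peak and acc_mag[i] > threshold and i - last >= min_gap:
            steps.append(i)
            last = i
    return steps

def dead_reckon(headings_rad, step_length=0.7, origin=(0.0, 0.0)):
    """Advance the position by one stride along the heading at each detected step."""
    x, y = origin
    track = [(x, y)]
    for h in headings_rad:
        x += step_length * np.cos(h)
        y += step_length * np.sin(h)
        track.append((x, y))
    return track
```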

Estimation of Chest Compression Depth using two Accelerometers during CPR (심폐소생술에서 두 개의 가속도 센서를 활용한 흉부 압박 깊이 추정)

  • Song, Yeong-Tak; Oh, Jae-Hoon; Suh, Young-Soo; Chee, Young-Joon
    • Journal of Biomedical Engineering Research / v.31 no.5 / pp.407-411 / 2010
  • During cardiopulmonary resuscitation (CPR), the correct chest compression depth and period are very important for increasing the chance of resuscitation. For feedback on chest compression depth, depth-monitoring devices based on an accelerometer have been developed and are widely used, but this method tends to overestimate the compression depth on a bed. To overcome this limitation, a chest compression depth estimation method using two accelerometers is suggested. With an additional accelerometer placed between the patient and the mattress, the compression of the mattress is also measured and used to compensate for the overestimation error. The experimental results show that the single accelerometer estimated the depth as 61.4 mm for an actual compression depth of 43.6 mm on the mattress, whereas the dual-accelerometer estimate was 44.6 mm, close to the actual depth. With automatic zeroing at every compression, the integration error in the depth can be reduced. The dual-accelerometer method is effective in increasing the accuracy of chest compression depth estimation.
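A minimal sketch of the dual-accelerometer idea, assuming the per-cycle zeroing described above: integrate each accelerometer twice within one compression cycle, subtract the mattress displacement from the chest displacement, and take the peak-to-peak value as the depth. Signal names and the mean-removal de-trending are illustrative assumptions, not the authors' code.

```python
import numpy as np

def displacement(acc, fs):
    """Double-integrate acceleration (m/s^2) into displacement (m) with
    simple cumulative sums; drift is handled by per-cycle zeroing below."""
    dt = 1.0 / fs
    return np.cumsum(np.cumsum(acc) * dt) * dt

def compression_depths(chest_acc, mattress_acc, fs, cycle_idx):
    """Per-compression depth: chest displacement minus mattress displacement,
    integrated separately inside each cycle so the integration error is
    zeroed at every compression boundary. `cycle_idx` holds the assumed
    start indices of the compression cycles."""
    depths = []
    for s, e in zip(cycle_idx[:-1], cycle_idx[1:]):
        chest = displacement(chest_acc[s:e] - np.mean(chest_acc[s:e]), fs)
        matt = displacement(mattress_acc[s:e] - np.mean(mattress_acc[s:e]), fs)
        rel = chest - matt                    # sternum motion relative to mattress
        depths.append(rel.max() - rel.min())  # peak-to-peak depth per cycle
    return depths
```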

Improving CMD Areal Density Analysis: Algorithms and Strategies

  • Wilson, R.E.
    • Journal of Astronomy and Space Sciences / v.31 no.2 / pp.121-130 / 2014
  • Essential ideas, successes, and difficulties of Areal Density Analysis (ADA) for color-magnitude diagrams (CMDs) of resolved stellar populations are examined, with explanation of various algorithms and strategies for optimal performance. A CMD-generation program computes theoretical datasets with simulated observational error, and a solution program inverts the problem by the method of Differential Corrections (DC) so as to compute parameter values from observed magnitudes and colors, with standard error estimates and correlation coefficients. ADA promises not only impersonal results but also significant saving of labor, especially where a given dataset is analyzed with several evolution models. Observational errors and multiple star systems, along with various single-star characteristics and phenomena, are modeled directly via the Functional Statistics Algorithm (FSA). Unlike Monte Carlo, FSA does not depend on a random number generator. Discussions include difficulties and overall requirements, such as the need for fast evolutionary computation and realization of goals within machine memory limits. Degradation of results due to the influence of pixelization on derivatives, Initial Mass Function (IMF) quantization, IMF steepness, low Areal Densities ($\mathcal{A}$), and large variation in $\mathcal{A}$ is reduced or eliminated through a variety of schemes that are explained sufficiently for general application. The Levenberg-Marquardt and MMS algorithms for improvement of solution convergence are contained within the DC program. An example of convergence, which typically is very good, is shown in tabular form. A number of theoretical and practical solution issues are discussed, as are prospects for further development.
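The Differential Corrections solver with Levenberg-Marquardt damping that the abstract mentions reduces, at each iteration, to solving damped normal equations. A generic one-step sketch (not the paper's code; `residual` and `jacobian` are assumed callables):

```python
import numpy as np

def lm_step(residual, jacobian, params, lam):
    """One Levenberg-Marquardt update: solve
    (J^T J + lam * diag(J^T J)) dp = -J^T r for the correction dp."""
    r = residual(params)   # vector of observed-minus-computed values
    J = jacobian(params)   # m x n matrix of partial derivatives
    JTJ = J.T @ J
    dp = np.linalg.solve(JTJ + lam * np.diag(np.diag(JTJ)), -J.T @ r)
    return params + dp
```

In this framework the standard error estimates and correlation coefficients quoted in the abstract come from the scaled inverse of J^T J at convergence.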

Neuro-controller for a XY positioning table (XY 테이블의 신경망제어)

  • Jang, Jun Oh
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.3 / pp.375-382 / 2004
  • This paper presents control designs using neural networks (NN) for an XY positioning table. The proposed neuro-controller is composed of an outer PD tracking loop for stabilization of the fast flexible-mode dynamics and an inner NN loop that compensates for the system nonlinearities. A tuning algorithm is given for the NN weights, so that the NN compensation scheme becomes adaptive, guaranteeing small tracking errors and bounded weight estimates. Formal nonlinear stability proofs are given to show that the tracking error is small. The proposed neuro-controller is implemented and tested on an IBM PC-based XY positioning table and is applicable to many precision XY tables. The algorithm, simulation, and experimental results are described; the experimental results are shown to be superior to those of conventional control.
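The structure described, an outer PD loop with an adaptive NN inner loop, can be sketched as below for a single axis. The adaptation law shown is a generic filtered-error rule with damping (sigma-modification style), standing in for the paper's tuning algorithm; all gains, sizes, and names are assumptions.

```python
import numpy as np

class NeuroPDController:
    """Outer PD tracking loop plus a one-hidden-layer NN that learns to
    cancel the plant nonlinearity (sketch, not the authors' controller)."""
    def __init__(self, n_states, n_hidden, kv=20.0, lam=5.0,
                 gamma=0.05, kappa=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        self.V = rng.normal(scale=0.5, size=(n_hidden, n_states))  # fixed input weights
        self.W = np.zeros(n_hidden)          # adapted output weights
        self.kv, self.lam = kv, lam          # PD-type gains
        self.gamma, self.kappa = gamma, kappa  # learning rate, damping

    def control(self, e, de, x, dt):
        """e, de: position/velocity tracking errors; x: NN regressor of states."""
        r = de + self.lam * e                # filtered tracking error
        h = np.tanh(self.V @ x)              # hidden-layer output
        u = self.kv * r + self.W @ h         # PD term + NN compensation
        # weight adaptation: filtered-error correlation plus damping term,
        # which keeps the weight estimates bounded
        self.W += dt * self.gamma * (h * r - self.kappa * abs(r) * self.W)
        return u
```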

A Simplified Time Domain Channel Tracking Scheme in OFDM Systems with Null Sub-Carriers (Null 부반송파를 갖는 OFDM 시스템에서 단순화된 시간영역 채널 추적 방식)

  • Jeon, Hyoung-Goo
    • The Journal of Korean Institute of Communications and Information Sciences / v.32 no.4C / pp.418-424 / 2007
  • This paper proposes a scheme to track the channel response in OFDM systems with null sub-carriers. The proposed channel tracking scheme first estimates the channel response in the frequency domain by using decision-directed data. Time-domain channel estimation is then performed to further remove additive white Gaussian noise (AWGN) components. Because of the channel estimation in the frequency domain, no inverse matrix calculation is required in the time-domain channel estimation. The computational reduction of the proposed method is about 93% compared with the conventional time-domain channel estimation method. Mean square error (MSE) and bit error rate (BER) performances are evaluated by computer simulation. The proposed method shows the same performance as the conventional time-domain channel estimation despite the significant computational reduction.
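A generic sketch of decision-directed tracking followed by time-domain denoising: per-carrier least-squares estimates on the used carriers, an IFFT, truncation to the assumed channel length, and an FFT back. This simple tap truncation only stands in for the paper's scheme, which specifically handles the leakage caused by null sub-carriers; indices and names are assumptions.

```python
import numpy as np

def track_channel(rx_sym, decided_sym, used_idx, n_fft, n_taps):
    """Decision-directed channel tracking: LS estimate on used carriers,
    then time-domain truncation to n_taps to suppress AWGN."""
    H = np.zeros(n_fft, dtype=complex)
    H[used_idx] = rx_sym[used_idx] / decided_sym[used_idx]  # per-carrier LS
    h = np.fft.ifft(H)       # time-domain impulse-response estimate
    h[n_taps:] = 0.0         # keep only the leading taps (noise suppression)
    return np.fft.fft(h)     # back to the frequency domain
```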

A Comparison of CME Arrival Time Estimations by the WSA/ENLIL Cone Model and an Empirical Model

  • Jang, Soo-Jeong; Moon, Yong-Jae; Lee, Kyoung-Sun; Na, Hyeon-Ock
    • The Bulletin of The Korean Astronomical Society / v.37 no.1 / pp.92.1-92.1 / 2012
  • In this work we have examined the performance of the WSA/ENLIL cone model provided by the Community Coordinated Modeling Center (CCMC). The WSA/ENLIL model simulates the propagation of coronal mass ejections (CMEs) from the Sun into the heliosphere. We estimate the shock arrival times at the Earth using 29 halo CMEs from 2001 to 2002. These halo CMEs have cone model parameters from Michalek et al. (2007) as well as associated interplanetary (IP) shocks. We compare the CME arrival times from the WSA/ENLIL cone model with the IP shock observations. For the WSA/ENLIL cone model, the root mean square (RMS) error is about 13 hours and the mean absolute error (MAE) is approximately 10.4 hours. We compared these estimates with those of the empirical model by Kim et al. (2007), for which the RMS and MAE errors are about 10.2 hours and 8.7 hours, respectively. We are investigating several possible causes of the relatively large errors of the WSA/ENLIL cone model, such as the cone model velocities, the CME density enhancement factor, and CME-CME interaction.
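The two quoted metrics are straightforward to compute from paired predicted and observed arrival times; a minimal sketch (hour units assumed):

```python
import numpy as np

def arrival_time_errors(predicted_hr, observed_hr):
    """RMS error and mean absolute error (in hours) between predicted and
    observed shock arrival times, the two metrics quoted above."""
    err = np.asarray(predicted_hr) - np.asarray(observed_hr)
    return np.sqrt(np.mean(err ** 2)), np.mean(np.abs(err))
```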

A Study on the Determinants of Bilateral Trade : Evidence from China and US

  • He, Yugang
    • East Asian Journal of Business Economics (EAJBE) / v.7 no.1 / pp.27-38 / 2019
  • Purpose - Recently, the trade war between China and the US has been escalating and has attracted worldwide attention. Against this background, this paper takes China and the US as an example to explore the determinants of bilateral trade. Research design, data, and methodology - Quarterly data from 2000-Q1 to 2017-Q4 are used in an empirical analysis under econometric approaches such as fully modified least squares and vector error correction estimates. Result - The results illustrate that the sizes of the two economies have the greatest positive effect on bilateral trade between China and the US. The real exchange rate has a positive effect on bilateral trade. In the short run, the nominal exchange rate has a negative effect, the US average price has a positive effect, and China's average price has a negative effect on bilateral trade. Bilateral trade between China and the US also suffered from the economic crisis of 2008. Even though bilateral trade deviates from the long-run equilibrium in the short run, an error correction mechanism pulls it back to the long-run equilibrium. Conclusion - This paper provides empirical evidence for both governments; based on the results, both governments should take corresponding measures to promote the development of bilateral trade between China and the US.
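For illustration, a single-equation Engle-Granger-style error correction model captures the mechanism the abstract describes: short-run deviations from the long-run relation are pulled back by the error-correction term. The paper itself uses fully modified least squares and a vector error correction model, which this numpy-only sketch does not reproduce.

```python
import numpy as np

def error_correction_model(y, x):
    """Step 1: long-run (cointegrating) regression y_t = a + b*x_t.
    Step 2: regress dy_t on dx_t and the lagged residual; a negative
    coefficient on the lagged residual signals correction back toward
    the long-run equilibrium."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta                       # deviation from equilibrium
    dy, dx = np.diff(y), np.diff(x)
    Z = np.column_stack([np.ones_like(dx), dx, resid[:-1]])
    gamma, *_ = np.linalg.lstsq(Z, dy, rcond=None)
    return beta, gamma                         # gamma[-1]: error-correction coefficient
```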

Doppler Frequency Compensated Detection and Ranging Algorithm for High-speed Targets (도플러 주파수가 보상된 고속 표적 탐지 및 레인징 알고리즘)

  • Youn, Jae-Hyuk; Kim, Kwan-Soo; Yang, Hoon-Gee; Chung, Young-Seek; Lee, Won-Woo; Bae, Kyung-Bin
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.12B / pp.1244-1250 / 2010
  • This paper presents a detection and ranging algorithm for high-speed targets in high-PRF radar. Unlike conventional methods, the algorithm first estimates the Doppler frequency from a quasi-periodic pulse train prior to range processing. The estimated Doppler frequency is used to compensate for the phase error embedded in the received signal, which allows the signal to be integrated coherently in the range direction and localizes the target's signature at low SNR. We present the derivation of the proposed algorithm and discuss how system parameters such as the range/Doppler sampling condition, processing time, and Doppler estimation error affect its performance, which is verified by simulations.
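The Doppler-first processing order can be sketched as follows: estimate the Doppler frequency over slow time, remove the corresponding pulse-to-pulse phase rotation, then integrate coherently. The crude FFT-peak estimator here merely stands in for the paper's quasi-periodic pulse-train estimator; the array layout and names are assumptions.

```python
import numpy as np

def estimate_doppler(pulses, prf):
    """Crude Doppler estimate: peak of the slow-time FFT in the strongest
    range bin. pulses: complex array of shape (n_pulses, n_range_bins)."""
    bin_idx = np.argmax(np.sum(np.abs(pulses) ** 2, axis=0))
    spec = np.fft.fft(pulses[:, bin_idx])
    freqs = np.fft.fftfreq(pulses.shape[0], d=1.0 / prf)
    return freqs[np.argmax(np.abs(spec))]

def doppler_compensated_integration(pulses, prf, fd_hat):
    """Remove the pulse-to-pulse phase rotation exp(j*2*pi*fd*m/prf) caused
    by target motion, then sum the pulses coherently over slow time."""
    m = np.arange(pulses.shape[0])
    correction = np.exp(-1j * 2.0 * np.pi * fd_hat * m / prf)
    return np.sum(pulses * correction[:, None], axis=0)
```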

A Comparative Study of the Parameter Estimation Method about the Software Mean Time Between Failure Depending on Makeham Life Distribution (메이크헴 수명분포에 의존한 소프트웨어 평균고장간격시간에 관한 모수 추정법 비교 연구)

  • Kim, Hee Cheul; Moon, Song Chul
    • Journal of Information Technology Applications and Management / v.24 no.1 / pp.25-32 / 2017
  • For repairable software systems, the Mean Time Between Failures (MTBF) is used as a measure of system stability, so software reliability requirements and reliability characteristics can be evaluated through the MTBF. In this paper, we compare MTBF estimates obtained with different parameter estimation methods under the Makeham life distribution: the least-squares method (a regression method) and the maximum likelihood method. As a result, the MTBF from the least-squares method shows a non-decreasing pattern, while that from the maximum likelihood method shows a non-increasing pattern as the failure time increases. Compared with the observed MTBF, the maximum likelihood estimate deviates less than the least-squares (regression) estimate. Thus, in terms of MTBF, maximum likelihood estimation is more efficient than the regression method, and it can also be judged efficient in terms of the coefficient of determination, the mean square error, and the mean error of prediction.
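A minimal sketch of the maximum likelihood side of the comparison, assuming the Gompertz-Makeham hazard h(t) = λ + αe^{βt}; the least-squares alternative would instead regress an empirical quantity on the model form. Starting values and the optimizer choice are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def makeham_nll(params, t):
    """Negative log-likelihood for failure times t under the hazard
    h(t) = lam + alpha*exp(beta*t), with cumulative hazard
    H(t) = lam*t + (alpha/beta)*(exp(beta*t) - 1)."""
    lam, alpha, beta = params
    if min(lam, alpha, beta) <= 0:
        return np.inf
    hazard = lam + alpha * np.exp(beta * t)
    cum_hazard = lam * t + (alpha / beta) * (np.exp(beta * t) - 1.0)
    return -np.sum(np.log(hazard) - cum_hazard)

def fit_makeham_mle(t):
    """Maximum likelihood fit (starting values are assumptions)."""
    res = minimize(makeham_nll, x0=np.array([0.01, 0.01, 0.1]),
                   args=(np.asarray(t, dtype=float),), method="Nelder-Mead")
    return res.x  # (lam, alpha, beta)
```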

Capturing the Short-run and Long-run Causal Behavior of Philippine Stock Market Volatility under Vector Error Correction Environment

  • CAMBA, Abraham C. Jr.
    • The Journal of Asian Finance, Economics and Business / v.7 no.8 / pp.41-49 / 2020
  • This study investigates the short-run and long-run causal behavior of Philippine stock market index volatility in a vector error correction environment. The variables were tested first for stationarity and then for a long-run equilibrium relationship. Moreover, an impulse response function was estimated to examine the extent to which innovations in the independent variables explain Philippine stock market index volatility. The results reveal that the volatility of the Philippine stock market index exhibits a long-run equilibrium relationship with the Peso-Dollar exchange rate, the London Interbank Offered Rate, and crude oil prices. The VECM estimates of the short-run dynamics indicate that, in the short run, increases (i.e., depreciation) in the Peso-Dollar exchange rate and increases in the London Interbank Offered Rate both cause PSEI volatility to increase. The adjustment coefficients associated with the long-run dynamics validate the presence of a unidirectional causal long-run relationship from the Peso-Dollar exchange rate, the London Interbank Offered Rate, and crude oil prices to PSEI volatility, and a bidirectional causal long-run relationship between PSEI volatility and the London Interbank Offered Rate. The impulse response functions developed within the VECM framework demonstrate the positive and negative reactions of PSEI volatility to unanticipated Peso-Dollar exchange rate, London Interbank Offered Rate, and crude oil price shocks.
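A sketch of this workflow with statsmodels, under an assumed data layout and lag order: unit-root checks, a VECM with one cointegrating relation, the adjustment (error-correction) coefficients, and impulse responses. Column names, `k_ar_diff`, and the deterministic specification are assumptions, not the paper's settings.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import VECM

def vecm_workflow(data, names):
    """data: (T, 4) array whose columns are, say, PSEI volatility, the
    Peso-Dollar rate, LIBOR, and crude oil prices (hypothetical layout)."""
    # 1) Unit-root (stationarity) test on each series in levels
    for j, name in enumerate(names):
        pval = adfuller(data[:, j])[1]
        print(f"ADF p-value for {name}: {pval:.3f}")
    # 2) VECM with one cointegrating relation; "ci" puts the constant
    #    inside the cointegration relation (lag order is an assumption)
    res = VECM(data, k_ar_diff=2, coint_rank=1, deterministic="ci").fit()
    # 3) Adjustment (error-correction) coefficients: the loadings used
    #    above to infer long-run causality
    print("adjustment coefficients (alpha):\n", res.alpha)
    # 4) Impulse responses of each variable to one-s.d. shocks
    irf = res.irf(periods=10)
    return res, irf
```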