• Title/Summary/Keyword: Error estimator


A Novel Scheme for Code Tracking Bias Mitigation in Band-Limited Global Navigation Satellite Systems (위성 기반 측위 시스템에서의 부호 추적편이 완화 기법)

  • Yoo, Seung-Soo; Kim, Sang-Hun; Yoon, Seok-Ho; Song, Iich-Ho; Kim, Sun-Yong
    • The Journal of Korean Institute of Communications and Information Sciences, v.32 no.10C, pp.1032-1041, 2007
  • The global navigation satellite system (GNSS), which is the core technology for location-based services, adopts direct sequence/spread spectrum (DS/SS) as its modulation method. The success of a DS/SS system depends on the synchronization between the received and locally generated pseudo noise (PN) signals. As a step in the synchronization process, the tracking scheme performs fine adjustment to bring the phase difference between the two PN signals to zero. The most widely used tracking scheme is the delay locked loop with an early-minus-late discriminator (EL-DLL). In the ideal case, the EL-DLL is the best estimator among the various DLLs. However, in a band-limited multipath environment, the EL-DLL exhibits tracking bias. In this paper, the timing offset range of the correlation function is divided into the advanced offset range (AOR) and the delayed offset range (DOR), centered on the correct synchronization time point. The tracking bias arises from two causes: symmetry distortion between the correlation values in the AOR and DOR, and a mismatch between the time point of the maximum correlation value and the synchronization time point. The former and the latter are referred to as type I and type II tracking bias, respectively. In this paper, it is shown that, when the receiver has finite bandwidth in the presence of multipath signals, the type II tracking bias becomes a more dominant error factor than the type I tracking bias, and the correlation values in the AOR remain almost unchanged. Exploiting these characteristics, we propose a novel tracking bias mitigation scheme and demonstrate that the tracking accuracy of the proposed scheme is higher than that of the conventional scheme, both in the presence and in the absence of noise.
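
The early-minus-late operation described above is simple enough to sketch. The following Python fragment is an illustrative sketch only (the oversampling factor, correlator spacing, and nearest-chip replica model are assumptions, not taken from the paper): it forms early and late correlations against a local PN replica and returns the normalized EL discriminator that drives the code phase toward alignment.

```python
import numpy as np

def el_dll_discriminator(rx, pn, code_phase, spacing=0.5, samples_per_chip=4):
    """Early-minus-late discriminator for PN code tracking (illustrative sketch).

    rx               : received baseband samples (real-valued numpy array)
    pn               : locally generated PN sequence, one chip per entry, values +/-1
    code_phase       : current code-phase estimate, in chips
    spacing          : early/late offset from the prompt replica, in chips
    samples_per_chip : oversampling factor of the received signal
    """
    t = np.arange(len(rx)) / samples_per_chip          # time axis in chips

    def correlate(offset_chips):
        # Nearest-chip replica shifted by the current phase estimate plus the offset.
        idx = np.floor(t - code_phase + offset_chips).astype(int) % len(pn)
        return abs(np.dot(rx, pn[idx]))

    early = correlate(+spacing)    # replica advanced by `spacing` chips
    late = correlate(-spacing)     # replica delayed by `spacing` chips
    # Normalized EL error signal: zero when the replica is aligned with the
    # received code, assuming an undistorted (symmetric) correlation function.
    return (early - late) / (early + late + 1e-12)
```

In a band-limited multipath channel the correlation peak is distorted and shifted, so a discriminator of this kind settles at a biased code phase; that bias is what the proposed scheme mitigates.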

The Comparative Study of NHPP Software Reliability Model Based on Log and Exponential Power Intensity Function (로그 및 지수파우어 강도함수를 이용한 NHPP 소프트웨어 무한고장 신뢰도 모형에 관한 비교연구)

  • Yang, Tae-Jin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology, v.8 no.6, pp.445-452, 2015
  • Software reliability in the software development process is an important issue, and software process improvement helps produce a reliable software product. Infinite-failure NHPP software reliability models presented in the literature exhibit either constant, monotonically increasing, or monotonically decreasing failure occurrence rates per fault. This paper proposes reliability models with log and power intensity functions (log-linear, log-power, and exponential-power), which are efficient for software reliability applications. The model parameters were estimated with the maximum likelihood estimator and the bisection method, and model selection was based on the mean square error (MSE) and the coefficient of determination ($R^2$). A real failure data set was analyzed to compare the log and power intensity functions, and the Laplace trend test was employed to ensure the reliability of the data. The study confirms that the log-type models are also efficient in terms of reliability (coefficient of determination of 70% or more) and can be used as alternatives to the conventional models. Software developers should use prior knowledge of the software when selecting a growth model, which can help identify failure modes.
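
The estimation and model-selection steps named above (maximum likelihood with bisection, then MSE and $R^2$) can be sketched generically. The sketch below assumes the log-linear member of the log-type family, $\lambda(t) = \exp(a + bt)$, observed over $(0, T]$ with an increasing trend ($b > 0$); the exact parameterizations and data in the paper may differ.

```python
import numpy as np

def bisection(f, lo, hi, tol=1e-8, max_iter=200):
    """Find a root of f on [lo, hi] by bisection (f must change sign on the bracket)."""
    flo = f(lo)
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if abs(fmid) < tol or (hi - lo) < tol:
            return mid
        if flo * fmid < 0.0:
            hi = mid
        else:
            lo, flo = mid, fmid
    return 0.5 * (lo + hi)

def fit_log_linear_nhpp(failure_times, T):
    """MLE for the log-linear intensity lambda(t) = exp(a + b*t) on (0, T].

    For fixed b the MLE of exp(a) is n*b / (exp(b*T) - 1); substituting it into the
    log-likelihood leaves a one-dimensional score equation in b, solved by bisection.
    The bracket below is arbitrary and assumes an increasing trend (b > 0).
    """
    times = np.asarray(failure_times, float)
    n = len(times)

    def score(b):
        # Derivative of the profile log-likelihood in b (numerically stable form).
        return times.sum() + n / b - n * T / (1.0 - np.exp(-b * T))

    b_hat = bisection(score, 1e-6, 5.0)
    # a = log(n*b) - log(exp(b*T) - 1), written to avoid overflow for large b*T.
    a_hat = np.log(n * b_hat) - b_hat * T - np.log1p(-np.exp(-b_hat * T))
    return a_hat, b_hat

def mse_r2(observed_counts, fitted_counts):
    """Model-selection criteria named in the abstract: mean square error and R^2."""
    obs = np.asarray(observed_counts, float)
    fit = np.asarray(fitted_counts, float)
    sse = np.sum((obs - fit) ** 2)
    sst = np.sum((obs - obs.mean()) ** 2)
    return sse / len(obs), 1.0 - sse / sst
```

The same `mse_r2` scoring applies to any of the candidate intensity functions once their mean value functions have been fitted to the cumulative failure counts.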

Analysis of influence of parameter error for extended EMF based sensorless control and flux based sensorless control of PM synchronous motor (영구자석 동기전동기의 확장 역기전력 기반 센서리스 제어와 자속기반 센서리스 제어의 파라미터 오차의 영향 분석)

  • Park, Wan-Seo; Cho, Kwan-Yuhl; Kim, Hag-Wone
    • Journal of the Korea Academia-Industrial cooperation Society, v.20 no.3, pp.8-15, 2019
  • PM synchronous motor drives with vector control have been applied to a wide range of industrial applications due to their high efficiency. The rotor position information for vector control of a PM synchronous motor is obtained from rotor position sensors or rotor position estimators. Sensorless control based on the mathematical model of the PM synchronous motor is generally used, and it can be classified into back-EMF-based sensorless control and magnet-flux-based sensorless control. The rotor position estimation performance of back-EMF-based sensorless control deteriorates at low speeds since the magnitude of the back EMF is proportional to the motor speed. The magnitude of the magnet flux used for estimating the rotor position in flux-based sensorless control is independent of the motor speed, so the estimation performance is excellent over a wide speed range. However, the estimation performance of model-based sensorless control may be influenced by motor parameter variation, since the rotor position estimator uses the mathematical model of the PM synchronous motor. In this paper, the rotor position estimation performance of the back-EMF-based and flux-based sensorless controls is analyzed theoretically and compared through simulation and experiment when motor parameters, including the stator resistance and inductance, are varied.
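
As a rough illustration of the back-EMF route discussed above, the sketch below reconstructs a simplified extended EMF in the stationary frame from measured voltages and currents and recovers the rotor angle with atan2. It is a simplified sketch with assumed parameter names and sign conventions, not the paper's estimator; it mainly shows how errors in R and L enter the angle estimate directly.

```python
import numpy as np

def estimate_rotor_angle(v_alpha, v_beta, i_alpha, i_beta, di_alpha, di_beta, R, L):
    """Rotor-angle estimate from a simplified extended-EMF model (illustrative sketch).

    The extended EMF in the stationary alpha-beta frame is approximated as
        e = v - R*i - L*di/dt,
    so any error in the assumed R or L leaks directly into e and therefore
    into the estimated angle, which is the effect analyzed in the paper.
    """
    e_alpha = v_alpha - R * i_alpha - L * di_alpha
    e_beta = v_beta - R * i_beta - L * di_beta
    # With e = E * [-sin(theta), cos(theta)] (one common sign convention),
    # the rotor angle follows from the direction of the EMF vector.
    theta = np.arctan2(-e_alpha, e_beta)
    return theta
```

Because the EMF magnitude shrinks with speed, the parameter-error terms dominate this estimate at low speed, which is exactly the weakness the flux-based method avoids.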

Spatial Upscaling of Aboveground Biomass Estimation using National Forest Inventory Data and Forest Type Map (국가산림자원조사 자료와 임상도를 이용한 지상부 바이오매스의 공간규모 확장)

  • Kim, Eun-Sook; Kim, Kyoung-Min; Lee, Jung-Bin; Lee, Seung-Ho; Kim, Chong-Chan
    • Journal of Korean Society of Forest Science, v.100 no.3, pp.455-465, 2011
  • In order to assess and mitigate climate change, the role of forest biomass as a carbon sink has to be understood spatially and quantitatively. Since existing forest statistics cannot provide spatial information about forest resources, the spatial distribution of forest biomass needs to be predicted under an alternative scheme. This study focuses on developing an upscaling method that expands forest variables from the plot to the landscape scale to estimate spatially explicit aboveground biomass (AGB). For this, forest stand variables were extracted from National Forest Inventory (NFI) data and used to develop AGB regression models by tree species. Dominant/codominant height and crown density were used as explanatory variables of the AGB regression models. The spatial distribution of AGB could be estimated using the AGB models, the forest type map, and a stand height map developed from the forest type map and height regression models. Finally, the total amount of forest AGB in Danyang was estimated to be 6,606,324 tons. This estimate was within the standard error of the AGB statistic calculated by the sample-based estimator, 6,518,178 tons. This AGB upscaling method provides a means of easily estimating biomass over large areas. However, because the forest type map used as the base map was produced from categorical data, the method is limited in improving the precision of the AGB map.
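
The plot-to-landscape upscaling can be pictured with a short sketch: species-specific regression models predict per-hectare AGB from dominant/codominant height and crown density, and the predictions are summed over the stands in the forest type map. The coefficients, species keys, and the linear model form below are placeholders for illustration, not the fitted models from the study.

```python
# Hypothetical per-species AGB regression coefficients:
# (intercept, coefficient on dominant height, coefficient on crown density).
AGB_MODELS = {
    "pine":  (-12.0, 9.5, 0.40),
    "oak":   (-15.0, 11.2, 0.35),
    "mixed": (-10.0, 8.0, 0.45),
}

def predict_stand_agb(species, dom_height_m, crown_density_pct, area_ha):
    """Predict stand aboveground biomass (tons) from mapped stand attributes."""
    b0, b1, b2 = AGB_MODELS[species]
    agb_per_ha = max(b0 + b1 * dom_height_m + b2 * crown_density_pct, 0.0)
    return agb_per_ha * area_ha

# Upscaling step: sum the model predictions over all stands in the forest type map.
stands = [
    {"species": "pine", "dom_height_m": 18.0, "crown_density_pct": 70, "area_ha": 12.3},
    {"species": "oak",  "dom_height_m": 15.0, "crown_density_pct": 60, "area_ha": 8.1},
]
total_agb = sum(predict_stand_agb(**s) for s in stands)
print(f"Total AGB: {total_agb:,.1f} tons")
```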

Empirical Study About ODA Effects on Job Creation

  • Seung Hee Ha; JaeHong Park
    • Journal of Korea Trade, v.26 no.6, pp.1-19, 2022
  • Purpose - This study empirically investigates the effects of Official Development Assistance (ODA) on the economic activities of private actors in recipient countries. As a proxy for the economic activities of private actors, we utilize the job creation activities of foreign subsidiaries in recipient countries. The foreign subsidiaries provide a foundation for economic development by creating paying jobs. That is, if ODA has been successfully transferred to foreign subsidiaries, these subsidiaries should support economic growth and stimulate the local market by providing jobs. These jobs eventually lead to the achievement of the primary aims of foreign aid, including poverty reduction. Thus, this study empirically examines the relationship between ODA and the number of jobs created by foreign subsidiaries in recipient countries. Design/methodology - This is the first study to examine the effects of ODA on the job creation of foreign subsidiaries, because it has been hard to obtain internal information related to the employment status of foreign subsidiaries. Fortunately, we have a unique panel dataset provided by the Export-Import Bank of Korea (KEXIM) for 2006 to 2013. In terms of the empirical specification, we use the generalized least squares (GLS) method. The panel GLS estimator allows efficient estimation that overcomes the limitations of panel data: it allows for heteroscedasticity across panels and for autocorrelation of the error term within each panel. Findings - We find that ODA influences job creation in foreign subsidiaries. In particular, ODA creates more jobs in sales than in managerial or production positions. This study also shows that the effect of ODA on the foreign subsidiaries' job creation activities depends on the purpose of the ODA. By examining ODA effects on the foreign subsidiaries' economic activities (e.g., job creation), this study fills a gap in the current literature. Originality/value - Existing studies that focus on the ODA effect take either a macroeconomic or a microeconomic point of view. However, neither approach explains how well foreign aid has influenced the private economic actors of recipient countries. In essence, previous researchers found it difficult to obtain the necessary data on internal employment status from foreign subsidiaries. However, thanks to the Export-Import Bank of Korea, this study shows that ODA indeed influences the job creation activities of foreign subsidiaries, even after controlling for other factors such as FDI, GDP growth rate, employment rate, household expenditure, mother firms' share, etc. By doing so, we can examine how ODA influences the job creation of foreign subsidiaries, which may help economic development and reduce poverty in recipient countries.
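
The panel GLS strategy described in the methodology can be approximated by a two-step feasible GLS: fit pooled OLS, estimate an error variance per panel from the residuals, and refit with inverse-standard-deviation weights. The sketch below is illustrative only; it handles panel-level heteroscedasticity but omits the within-panel autocorrelation structure the study also allows for, and all variable names are assumptions.

```python
import numpy as np

def feasible_gls_panel(y, X, panel_ids):
    """Two-step feasible GLS with panel-wise heteroscedasticity (illustrative sketch).

    y         : (n,) dependent variable (e.g., subsidiary job creation)
    X         : (n, k) regressors with a constant column (ODA, FDI, GDP growth, ...)
    panel_ids : (n,) array identifying the panel each observation belongs to
    """
    panel_ids = np.asarray(panel_ids)

    # Step 1: pooled OLS to obtain residuals.
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_ols

    # Step 2: estimate one error variance per panel and build GLS weights.
    weights = np.empty_like(y, dtype=float)
    for pid in np.unique(panel_ids):
        mask = panel_ids == pid
        sigma2 = np.mean(resid[mask] ** 2)
        weights[mask] = 1.0 / np.sqrt(sigma2)

    # Weighted least squares = GLS under a diagonal, panel-constant covariance.
    Xw, yw = X * weights[:, None], y * weights
    beta_gls, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    return beta_gls
```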

A development of DS/CDMA MODEM architecture and its implementation (DS/CDMA 모뎀 구조와 ASIC Chip Set 개발)

  • 김제우; 박종현; 김석중; 심복태; 이홍직
    • The Journal of Korean Institute of Communications and Information Sciences, v.22 no.6, pp.1210-1230, 1997
  • In this paper, we suggest an architecture for a DS/CDMA transceiver composed of one pilot channel used as a reference and multiple traffic channels. The pilot channel, an unmodulated PN code, is used as the reference signal for PN code synchronization and data demodulation. The coherent demodulation architecture is exploited for the reverse link as well as for the forward link. The characteristics of the suggested DS/CDMA system are as follows. First, we suggest an interlaced quadrature spreading (IQS) method. In this method, the PN code for the I-phase of the 1st channel is used for the Q-phase of the 2nd channel, and the PN code for the Q-phase of the 1st channel is used for the I-phase of the 2nd channel, and so on, which is quite different from the existing spreading schemes of DS/CDMA systems such as IS-95 digital CDMA cellular or W-CDMA for PCS. By using IQS spreading, we can drastically reduce the zero-crossing rate of the RF signals. Second, we introduce an adaptive threshold setting for PN code synchronization and an initial acquisition method that uses a single PN code generator and halves the acquisition time compared with existing methods, and we exploit state machines to reduce the reacquisition time. Third, various functions, such as automatic frequency control (AFC), automatic level control (ALC), a bit-error-rate (BER) estimator, and spectral shaping for reducing adjacent channel interference, are introduced to improve the system performance. Fourth, we designed and implemented the DS/CDMA MODEM for variable transmission rate applications, from 16 kbps to 1.024 Mbps. We developed and verified the DS/CDMA MODEM architecture through mathematical analysis and various kinds of simulations. The ASIC design was done using VHDL coding and synthesis. To cope with several different kinds of applications, we developed the transmitter and receiver ASICs separately. While a single transmitter or receiver ASIC contains three channels (one for the pilot and the others for the traffic channels), by combining several transmitter ASICs we can expand the number of channels up to 64. The ASICs are now in use for implementing line-of-sight (LOS) radio equipment.
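
The interlaced quadrature spreading (IQS) idea can be sketched as a code-swapping pattern: each traffic channel spreads its I branch with the PN code the previous channel used on its Q branch, and vice versa. The Python fragment below illustrates only that pattern at baseband; chip rates, pulse shaping, and the actual PN generators are assumptions, not taken from the paper.

```python
import numpy as np

def iqs_spread(channels, pn_i, pn_q):
    """Interlaced quadrature spreading (illustrative sketch).

    channels : list of (data_i, data_q) tuples, one per channel, each an array of
               +/-1 symbols already repeated to the chip rate
    pn_i     : list of I-branch PN codes, one per channel (+/-1 chips, same length)
    pn_q     : list of Q-branch PN codes, one per channel (+/-1 chips, same length)

    Channel k spreads its I branch with the previous channel's Q-branch code and its
    Q branch with the previous channel's I-branch code, so adjacent channels
    interlace their quadrature PN codes.
    """
    spread = []
    for k, (d_i, d_q) in enumerate(channels):
        code_i = pn_q[k - 1] if k > 0 else pn_i[0]   # swap with the previous channel
        code_q = pn_i[k - 1] if k > 0 else pn_q[0]
        spread.append(d_i * code_i + 1j * d_q * code_q)
    # Composite complex baseband signal: sum of all spread channels.
    return np.sum(spread, axis=0)
```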


A Study on Sample Allocation for Stratified Sampling (층화표본에서의 표본 배분에 대한 연구)

  • Lee, Ingue; Park, Mingue
    • The Korean Journal of Applied Statistics, v.28 no.6, pp.1047-1061, 2015
  • Stratified random sampling is a powerful sampling strategy that reduces the variance of estimators by incorporating useful auxiliary information to stratify the population. Sample allocation is one of the important decisions in selecting a stratified random sample. There are two common methods, proportional allocation and Neyman allocation, when the data collection cost can be assumed equal across observation units. Theoretically, Neyman allocation, which considers the size and standard deviation of each stratum, is known to be more effective than proportional allocation, which incorporates only stratum size information. However, if the information on the standard deviations is inaccurate, the performance of Neyman allocation is in doubt. It has also been pointed out that Neyman allocation is not suitable for multi-purpose sample surveys that require the estimation of several characteristics. In addition to sampling error, non-response error is another factor affecting the statistical precision of the estimator and should be considered when evaluating a sampling strategy. We propose new sample allocation methods that use the available information on stratum response rates at the design stage to improve stratified random sampling. The proposed methods are efficient when response rates differ considerably among strata. In particular, the method using population sizes and response rates improves on Neyman allocation in multi-purpose sample surveys.
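
For reference, the two standard allocations compared above are direct formulas: proportional allocation sets n_h = n * N_h / N, while Neyman allocation sets n_h = n * N_h * S_h / sum_h(N_h * S_h). A short sketch with made-up stratum figures:

```python
def proportional_allocation(n, stratum_sizes):
    """Allocate total sample size n proportionally to the stratum sizes N_h."""
    N = sum(stratum_sizes)
    return [n * N_h / N for N_h in stratum_sizes]

def neyman_allocation(n, stratum_sizes, stratum_sds):
    """Allocate n proportionally to N_h * S_h (Neyman allocation)."""
    weights = [N_h * S_h for N_h, S_h in zip(stratum_sizes, stratum_sds)]
    total = sum(weights)
    return [n * w / total for w in weights]

# Illustrative strata: sizes and (assumed known) standard deviations.
sizes = [5000, 3000, 2000]
sds = [10.0, 25.0, 5.0]
print(proportional_allocation(1000, sizes))  # [500.0, 300.0, 200.0]
print(neyman_allocation(1000, sizes, sds))   # roughly [370, 556, 74]
```

The allocations proposed in the paper adjust these weights with stratum response rates, which is why they pay off when response rates differ strongly across strata.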

A Study on Forecasting Accuracy Improvement of Case Based Reasoning Approach Using Fuzzy Relation (퍼지 관계를 활용한 사례기반추론 예측 정확성 향상에 관한 연구)

  • Lee, In-Ho; Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems, v.16 no.4, pp.67-84, 2010
  • In business, forecasting is the work of anticipating what is expected to happen in the future in order to make managerial decisions and plans. Therefore, accurate forecasting is very important for major managerial decision making and is the basis for various business strategies. However, it is very difficult to make an unbiased and consistent estimate because of the uncertainty and complexity of the future business environment. That is why we should use scientific forecasting models to support business decision making and make an effort to minimize the model's forecasting error, the difference between the observation and the estimate. Nevertheless, minimizing the error is not an easy task. Case-based reasoning is a problem-solving method that utilizes similar past cases to solve the current problem. To build successful case-based reasoning models, it is very important to retrieve not only the most similar case but also the most relevant one. To retrieve similar and relevant cases from past cases, the measurement of similarity between cases is a key factor. In particular, if the cases contain symbolic data, it is more difficult to measure the distances. The purpose of this study is to improve the forecasting accuracy of the case-based reasoning approach using fuzzy relations and composition. Two methods are adopted to measure the similarity between cases containing symbolic data: one derives the similarity matrix following binary logic (a judgment of sameness between two symbolic values), and the other derives the similarity matrix following fuzzy relation and composition. This study is conducted in the following order: data gathering and preprocessing, model building and analysis, validation analysis, and conclusion. First, in the data gathering and preprocessing stage, we collect a data set including categorical dependent variables. The data set gathered is cross-sectional, and its independent variables include several qualitative variables expressed as symbolic data. The research data consist of many financial ratios and the corresponding bond ratings of Korean companies. The ratings employed in this study cover all bonds rated by one of the bond rating agencies in Korea. Our total sample includes 1,816 companies whose commercial papers were rated in the period 1997~2000. Credit grades are defined as outputs and classified into 5 rating categories (A1, A2, A3, B, C) according to credit levels. Second, in the model building and analysis stage, we derive similarity matrices following binary logic and fuzzy composition to measure the similarity between cases containing symbolic data. The types of fuzzy composition used are max-min, max-product, and max-average. The analysis is then carried out with the case-based reasoning approach using the derived similarity matrix. Third, in the validation stage, we verify the validity of the model through the McNemar test based on the hit ratio. Finally, we draw a conclusion from the study. As a result, the similarity measurement method using fuzzy relation and composition shows better forecasting performance than the method using binary logic for measuring similarity between symbolic data. However, differences in forecasting performance among the types of fuzzy composition are not statistically significant. The contributions of this study are as follows. We propose an alternative methodology in which fuzzy relation and fuzzy composition are applied to measure the similarity between symbolic data, which is the most important factor in building a case-based reasoning model.
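
As a minimal sketch of the composition step described above, the function below composes two fuzzy relation matrices under the max-min, max-product, or max-average rule. How the relations are built from the bond-rating attributes is specific to the study; the matrices and their interpretation here (query-to-symbol and symbol-to-case similarities) are illustrative assumptions.

```python
import numpy as np

def fuzzy_compose(R, S, mode="max-min"):
    """Compose fuzzy relations R (p x q) and S (q x r) into a p x r relation.

    max-min:     T[i, k] = max_j min(R[i, j], S[j, k])
    max-product: T[i, k] = max_j R[i, j] * S[j, k]
    max-average: T[i, k] = max_j (R[i, j] + S[j, k]) / 2
    """
    R, S = np.asarray(R, float), np.asarray(S, float)
    if mode == "max-min":
        combined = np.minimum(R[:, :, None], S[None, :, :])
    elif mode == "max-product":
        combined = R[:, :, None] * S[None, :, :]
    elif mode == "max-average":
        combined = (R[:, :, None] + S[None, :, :]) / 2.0
    else:
        raise ValueError(f"unknown composition mode: {mode}")
    return combined.max(axis=1)

# Illustrative retrieval: R relates the query case to symbolic attribute values,
# S relates those values to stored cases; the composition gives query-to-case similarity.
R = [[0.9, 0.4, 0.1]]                       # 1 query x 3 symbolic values
S = [[0.8, 0.2], [0.5, 0.9], [0.1, 0.6]]    # 3 values x 2 stored cases
print(fuzzy_compose(R, S, "max-min"))       # -> [[0.8, 0.4]]
```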