• Title/Abstract/Keywords: error check

Search results: 623 items (processing time: 0.026 seconds)

계통운영시스템 계통해석 프로그램 정확도 향상에 관한 연구 (A Study on the Enhancement of Accuracy of Network Analysis Applications in Energy Management Systems)

  • 조윤성
    • 조명전기설비학회논문지
    • /
    • Vol. 29, No. 12
    • /
    • pp.88-96
    • /
    • 2015
  • This paper describes a new method for enhancing the accuracy of network analysis applications in energy management systems. Topology processing, state estimation, power flow analysis, and contingency analysis play a key role in the stable and reliable operation of power systems. The aim of topology processing is to determine the electrical buses and electrical islands that reflect the actual state of the power system, and its results are used as input data for the other applications. A new method, which includes topology error analysis based on an inconsistency check, a coherency check, a bus mismatch check, and an outaged-device check, is proposed to enhance the accuracy of network analysis. The proposed methodology has been implemented in an energy management system, and the Korean power system has been used as the test system.
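
A minimal sketch of the topology-processing step the abstract describes: nodes joined by closed switching devices are grouped into electrical buses, buses joined by in-service branches are grouped into electrical islands, and a simple inconsistency check compares modelled and telemetered device statuses. The data model, field names, and example devices below are illustrative assumptions, not the paper's actual EMS implementation.

```python
# Illustrative topology processing: merge nodes into electrical buses and
# buses into electrical islands with a union-find pass over device statuses.

class UnionFind:
    def __init__(self, items):
        self.parent = {x: x for x in items}

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def build_topology(nodes, switches, branches):
    """switches: (node_a, node_b, closed); branches: (node_a, node_b, in_service)."""
    # Step 1: electrical buses = nodes joined by closed switches/breakers.
    bus_uf = UnionFind(nodes)
    for a, b, closed in switches:
        if closed:
            bus_uf.union(a, b)
    bus_of = {n: bus_uf.find(n) for n in nodes}

    # Step 2: electrical islands = buses joined by in-service branches.
    island_uf = UnionFind(set(bus_of.values()))
    for a, b, in_service in branches:
        if in_service:
            island_uf.union(bus_of[a], bus_of[b])
    island_of = {bus: island_uf.find(bus) for bus in set(bus_of.values())}
    return bus_of, island_of

def inconsistency_check(device_status):
    """Flag devices whose telemetered status disagrees with the modelled status."""
    return [dev for dev, (modelled, telemetered) in device_status.items()
            if modelled != telemetered]

if __name__ == "__main__":
    nodes = ["N1", "N2", "N3", "N4"]
    switches = [("N1", "N2", True), ("N3", "N4", False)]
    branches = [("N2", "N3", True)]
    buses, islands = build_topology(nodes, switches, branches)
    print("bus of each node:", buses)
    print("island of each bus:", islands)
    print("status inconsistencies:", inconsistency_check({"CB-7": (True, False)}))
```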

반응적응 시험설계법을 이용하는 통계적 해석모델 검증 기법 연구 (A Study on the Statistical Model Validation using Response-adaptive Experimental Design)

  • 정병창;허영철;문석준;김영중
    • 한국소음진동공학회:학술대회논문집
    • /
    • 한국소음진동공학회 2014년도 추계학술대회 논문집
    • /
    • pp.347-349
    • /
    • 2014
  • Model verification and validation (V&V) is a current research topic for building computational models with high predictive capability by addressing the relevant concepts, processes, and statistical techniques. The hypothesis test for validity checking is one of the model validation techniques and gives a guideline for evaluating the validity of a computational model when only limited experimental data exist due to restricted test resources (e.g., time and budget). The hypothesis test for validity checking mainly employs the Type I error, the risk of rejecting a valid computational model, for the validity evaluation, since quantification of the Type II error is not feasible for model validation. However, the Type II error, the risk of accepting an invalid computational model, should also be considered for engineered products whose predicted results carry high risk. This paper proposes a technique, named response-adaptive experimental design, that reduces the Type II error by adaptively designing experimental conditions for the validation experiment. A tire tread block problem and a numerical example are employed to show the effectiveness of the response-adaptive experimental design for the validity evaluation.
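
As a reminder of the two error types the abstract weighs against each other, here is a small Monte Carlo sketch (not the paper's method): it estimates the Type I error (rejecting a valid model) and the Type II error (accepting an invalid model) of a simple two-sided z-test on the mean, under an assumed true bias for the invalid case. The sample size, bias, and 5% significance level are illustrative assumptions.

```python
# Monte Carlo estimate of Type I / Type II error for a two-sided z-test
# comparing a model prediction mu0 against limited experimental data.
import numpy as np

rng = np.random.default_rng(0)

def reject(data, mu0, sigma):
    """Reject H0: 'model mean mu0 is valid' if |z| exceeds the 5% critical value."""
    z = (data.mean() - mu0) / (sigma / np.sqrt(len(data)))
    return abs(z) > 1.96  # two-sided 5% critical value of the standard normal

mu0, sigma, n, trials = 10.0, 1.0, 5, 20000
bias = 1.0  # assumed true bias of the "invalid" model

# Type I error: data actually come from the valid model (mean = mu0).
type1 = np.mean([reject(rng.normal(mu0, sigma, n), mu0, sigma)
                 for _ in range(trials)])

# Type II error: data come from a biased process but H0 is not rejected.
type2 = np.mean([not reject(rng.normal(mu0 + bias, sigma, n), mu0, sigma)
                 for _ in range(trials)])

print(f"estimated Type I error:  {type1:.3f}")  # close to 0.05
print(f"estimated Type II error: {type2:.3f}")  # shrinks as n or the bias grows
```

The second estimate shows why adaptively adding experimental conditions (larger or better-placed samples) reduces the risk of accepting an invalid model.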


Laser imager의 성능관리에 대한 연구 (A Study on the Performance Management of the Laser Imager)

  • 이형진;인경환;이원홍;김건중
    • 대한디지털의료영상학회논문지
    • /
    • Vol. 3, No. 1
    • /
    • pp.126-132
    • /
    • 1997
  • Purpose: To apply the Check density and Adjust density programs of the laser imager's calibration mode to an auto-processor quality control program, after comparing film density variations with the densities corrected by the Adjust density program. Methods: Check and Adjust density variations were observed on a control chart against the standard steps and values twice a week for seven months, from December 1995 to June 1996. (1) Density values were measured at each step after printing out the 17-step sensitometric pattern of the Check density program. (2) In the same way, density values were measured after correcting the density with the Adjust density program whenever they exceeded the allowable error limit. Results: For the Check density program, the rates of exceeding the limits for density difference (DD) and middle density (MD) were: FL-IM3543 DD = 75%, MD = 72.5%; FL-IM D DD = 0%, MD = 30.8% (14.5%). After correcting the density with the Adjust density program, the exceeding rates of both laser imagers were zero percent. The standard deviations were lower for the FL-IM D than for the FL-IM3543 on the Check density control chart, but higher on the Adjust density control chart. Conclusion: (1) Check density variations by printing out the sensitometric pattern at least once a week for quality control of the laser imager. (2) In a dusty location, check the laser beam transmission after cleaning the laser optical unit once a month. (3) Be sure to measure and check density values with the Adjust density program whenever they exceed the allowable error limit. (4) Maintain better film density by performing the Adjust density program even if the Check density values are within the normal limit.
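
A small illustrative sketch of the kind of check described above: reading densities from a 17-step sensitometric pattern, computing the middle density (MD) and density difference (DD) indices, and flagging when they drift outside an allowable error limit. The step indices, baselines, and tolerances below are assumptions for illustration, not the values used in the study.

```python
# Illustrative density check for laser imager QC: compare middle density (MD)
# and density difference (DD) of a 17-step pattern against baseline limits.

def qc_check(step_densities, baseline_md, baseline_dd,
             md_tol=0.15, dd_tol=0.15,               # assumed tolerances
             mid_step=8, low_step=4, high_step=12):  # assumed step indices
    if len(step_densities) != 17:
        raise ValueError("expected a 17-step sensitometric pattern")
    md = step_densities[mid_step]
    dd = step_densities[high_step] - step_densities[low_step]
    out_of_limit = abs(md - baseline_md) > md_tol or abs(dd - baseline_dd) > dd_tol
    return md, dd, out_of_limit

if __name__ == "__main__":
    densities = [0.18 + 0.20 * i for i in range(17)]  # dummy measured values
    md, dd, flag = qc_check(densities, baseline_md=1.78, baseline_dd=1.60)
    verdict = "run the Adjust density program" if flag else "within limits"
    print(f"MD = {md:.2f}  DD = {dd:.2f}  -> {verdict}")
```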


DECODING OF LEXICODES $S_{10,4}$

  • KIM, D.G.
    • Journal of the Korean Society for Industrial and Applied Mathematics
    • /
    • Vol. 6, No. 1
    • /
    • pp.47-52
    • /
    • 2002
  • In this paper we propose a simple decoding algorithm for the 4-ary lexicographic codes (or lexicodes) of length 10 with minimum distance 4, written $S_{10,4}$. It is based on the syndrome decoding method: using the syndrome vector, an error is detected and then corrected from the four parity-check equations.
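
The abstract relies on standard syndrome decoding. The sketch below illustrates that general method on a binary Hamming (7,4) code rather than on the 4-ary lexicode $S_{10,4}$ itself (which would require GF(4) arithmetic): the syndrome $s = Hy^T$ detects an error, and matching $s$ to a column of $H$ locates and corrects a single error.

```python
# Syndrome decoding illustrated on the binary Hamming (7,4) code:
# a nonzero syndrome detects an error; its value points to the flipped bit.
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],   # parity-check matrix: column i is the
              [0, 1, 1, 0, 0, 1, 1],   # binary representation of i + 1
              [0, 0, 0, 1, 1, 1, 1]])

def decode(y):
    y = np.array(y) % 2
    syndrome = (H @ y) % 2
    if not syndrome.any():
        return y, None                           # no error detected
    # Single error: the syndrome equals the column of H at the error position.
    pos = int(np.argmax((H.T == syndrome).all(axis=1)))
    y[pos] ^= 1                                  # correct it
    return y, pos

if __name__ == "__main__":
    codeword = [1, 0, 1, 1, 0, 1, 0]             # a valid Hamming codeword
    received = codeword.copy()
    received[4] ^= 1                             # inject a single bit error
    corrected, pos = decode(received)
    print("error at position:", pos, " corrected:", corrected.tolist())
```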


부유분진(PM10) 측정기 상태 코드 분석을 통한 자동 품질검사 알고리즘 개선 및 평가 (Improvement and Evaluation of Automatic Quality Check Algorithm for Particulate Matter (PM10) by Analysis of Instrument Status Code)

  • 김미경;박영산;류상범;조정훈
    • 대기
    • /
    • Vol. 29, No. 4
    • /
    • pp.501-509
    • /
    • 2019
  • Asian dust is a meteorological phenomenon in which sand particles are raised from arid and semi-arid regions (the Taklamakan Desert, the Gobi Desert, and Inner Mongolia in China), transported by the westerlies, and deposited on the surface. Asian dust has negative effects on human health as well as environmental, social, and economic aspects. For monitoring Asian dust, the Korea Meteorological Administration operates 29 stations equipped with continuous ambient particulate monitors. Kim et al. (2016) developed an automatic quality check (AQC) algorithm for objective and systematic quality checking of observed PM10 concentrations and evaluated the AQC against the results of a manual quality check (MQC). The results showed that the AQC algorithm could detect abnormal observations efficiently, but it also produced a large number of false alarms resulting from the valid error check. To remedy this deficiency and to develop an AQC system that can be applied in real time, the AQC was modified. Based on an analysis of instrument status codes, the valid error check process was revised and six status codes were additionally treated as normal. The time continuity check and the spike check were also modified so that no posterior data are referenced at inspection time. Two years of observed PM10 concentration data and the corresponding MQC results were used to evaluate the modified AQC against the original AQC algorithm. The false alarm ratio decreased from 0.44 to 0.09, while the accuracy and the probability of detection were well preserved despite the exclusion of posterior data at inspection time.
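
A minimal sketch of the kind of real-time checks the abstract mentions: a status-code whitelist, a time continuity check, and a spike check that consult only data prior to the inspection time. The code set, window length, and thresholds below are illustrative assumptions, not the operational values of the KMA algorithm.

```python
# Illustrative real-time quality checks for PM10: status-code check,
# time continuity check, and spike check using only prior observations.
NORMAL_STATUS_CODES = {0, 1, 2}           # assumed set of "normal" instrument codes

def status_check(status_code):
    return status_code in NORMAL_STATUS_CODES

def time_continuity_check(history, current, max_step=200.0):
    """Flag an implausibly large jump from the most recent prior value (ug/m3)."""
    if not history:
        return True
    return abs(current - history[-1]) <= max_step

def spike_check(history, current, window=6, spike_factor=5.0):
    """Flag a value far above the recent prior median (no posterior data used)."""
    recent = history[-window:]
    if len(recent) < window:
        return True                        # not enough history; do not flag
    median = sorted(recent)[window // 2]
    return current <= spike_factor * max(median, 10.0)

def aqc(history, current, status_code):
    ok = (status_check(status_code)
          and time_continuity_check(history, current)
          and spike_check(history, current))
    return "normal" if ok else "suspect"

if __name__ == "__main__":
    prior = [35, 40, 38, 42, 37, 41]       # previous hourly PM10 values
    print(aqc(prior, 44, status_code=0))   # normal
    print(aqc(prior, 900, status_code=0))  # suspect (continuity/spike checks fail)
```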

Effects of LDPC Code on the BER Performance of MPSK System with Imperfect Receiver Components over Rician Channels

  • Djordjevic, Goran T.;Djordjevic, Ivan B.;Ivanis, Predrag N.
    • ETRI Journal
    • /
    • Vol. 31, No. 5
    • /
    • pp.619-621
    • /
    • 2009
  • In this letter, we study the influence of receiver imperfections on bit error rate (BER) degradations in detecting low-density parity-check coded multilevel phase-shift keying signals transmitted over a Rician fading channel. Based on the analytical system model that we previously developed and using Monte Carlo simulations, we determine the BER degradations caused by the simultaneous influences of stochastic phase error, quadrature error, in-phase/quadrature mismatch, and the fading severity.
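
A simplified Monte Carlo sketch of the setting the letter studies: PSK symbols sent over a Rician fading channel and demodulated by a receiver with a residual (stochastic) phase error, an I/Q amplitude mismatch, and a quadrature error. The LDPC coding is omitted and all parameter values (Rician K-factor, phase jitter, mismatch, quadrature error) are illustrative assumptions; this only shows how such imperfections are injected into a BER simulation.

```python
# Monte Carlo BER of QPSK over a Rician fading channel with a noisy carrier-phase
# reference, an I/Q amplitude mismatch, and a quadrature error at the receiver.
import numpy as np

rng = np.random.default_rng(1)

def simulate_ber(n_sym=200_000, ebn0_db=10.0, k_factor=5.0,
                 phase_jitter_rad=0.08, amp_mismatch=0.10, quad_error_rad=0.05):
    bits = rng.integers(0, 2, size=(n_sym, 2))
    s = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)  # Gray QPSK

    # Rician fading: deterministic line-of-sight part plus Rayleigh scattered part.
    los = np.sqrt(k_factor / (k_factor + 1))
    scat = np.sqrt(1 / (2 * (k_factor + 1)))
    h = los + scat * (rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym))

    n0 = 1 / (2 * 10 ** (ebn0_db / 10))            # Es = 1, 2 bits per symbol
    noise = np.sqrt(n0 / 2) * (rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym))
    r = h * s + noise

    # Imperfect receiver: residual phase error after equalization, then amplitude
    # imbalance and quadrature error on the Q branch.
    phase_err = phase_jitter_rad * rng.standard_normal(n_sym)
    y = (r / h) * np.exp(-1j * phase_err)
    y_i = y.real
    y_q = (1 + amp_mismatch) * (y.imag * np.cos(quad_error_rad)
                                - y.real * np.sin(quad_error_rad))

    bhat = np.column_stack([(y_i < 0).astype(int), (y_q < 0).astype(int)])
    return np.mean(bhat != bits)

print(f"BER with imperfect receiver: {simulate_ber():.2e}")
print(f"BER with ideal receiver:     "
      f"{simulate_ber(phase_jitter_rad=0, amp_mismatch=0, quad_error_rad=0):.2e}")
```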

Fully parallel low-density parity-check code-based polar decoder architecture for 5G wireless communications

  • Dinesh Kumar Devadoss;Shantha Selvakumari Ramapackiam
    • ETRI Journal
    • /
    • Vol. 46, No. 3
    • /
    • pp.485-500
    • /
    • 2024
  • A hardware architecture is presented to decode (N, K) polar codes based on a low-density parity-check code-like decoding method. By applying suitable pruning techniques to the dense graph of the polar code, the decoder architectures are optimized using fewer check nodes (CN) and variable nodes (VN). Pipelining is introduced in the CN and VN architectures, reducing the critical path delay. Latency is reduced further by a fully parallelized, single-stage architecture, compared with the log N stages of the conventional belief propagation (BP) decoder. The designed decoder for short-to-intermediate code lengths was implemented on a Virtex-7 field-programmable gate array (FPGA). It achieved a throughput of 2.44 Gbps, which is four times and 1.4 times higher than those of the fast simplified successive cancellation and combinational decoders, respectively. The proposed decoder for the (1024, 512) polar code yielded a bit error rate of $10^{-4}$ at $E_b/N_0$ = 2.7 dB and converged faster than BP decoding on a dense parity-check matrix. Moreover, the proposed decoder was also implemented on a Xilinx UltraScale FPGA and verified against the fifth-generation new radio physical downlink control channel specification. The superior error-correcting performance and better hardware efficiency make our decoder a suitable alternative to the successive cancellation list decoders used in 5G wireless communication.
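
The check-node/variable-node message passing at the heart of the LDPC-like decoding this architecture builds on can be sketched in a few lines. Below is a generic min-sum belief-propagation decoder over an arbitrary binary parity-check matrix; it is not the pruned polar-code graph or the pipelined hardware of the paper, and the toy matrix and LLRs are assumptions for illustration.

```python
# Generic min-sum belief-propagation decoding over a binary parity-check matrix H.
# Check nodes (CN) combine signs and minimum magnitudes; variable nodes (VN)
# accumulate the channel LLR plus incoming CN messages.
import numpy as np

def min_sum_decode(H, channel_llr, max_iter=20):
    m, n = H.shape
    edges = [(i, j) for i in range(m) for j in range(n) if H[i, j]]
    v2c = {e: channel_llr[e[1]] for e in edges}       # VN -> CN messages
    c2v = {e: 0.0 for e in edges}                     # CN -> VN messages

    for _ in range(max_iter):
        # CN update: sign product and minimum magnitude of the other inputs.
        for i in range(m):
            cols = [j for j in range(n) if H[i, j]]
            for j in cols:
                others = [v2c[(i, k)] for k in cols if k != j]
                sign = np.prod(np.sign(others)) if others else 1.0
                c2v[(i, j)] = sign * (min(abs(x) for x in others) if others else 0.0)
        # VN update and tentative decision.
        total = channel_llr.copy()
        for (i, j) in edges:
            total[j] += c2v[(i, j)]
        hard = (total < 0).astype(int)
        if not ((H @ hard) % 2).any():                # all parity checks satisfied
            return hard, True
        for (i, j) in edges:
            v2c[(i, j)] = total[j] - c2v[(i, j)]      # exclude the target CN's message
    return hard, False

if __name__ == "__main__":
    H = np.array([[1, 1, 0, 1, 0, 0],                 # toy parity-check matrix
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 0, 0, 1, 1]])
    llr = np.array([2.1, -0.3, 1.8, 1.2, 0.9, 2.4])   # noisy LLRs for the all-zero word
    decoded, ok = min_sum_decode(H, llr)
    print("decoded:", decoded, " parity satisfied:", ok)
```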

Novel construction of quasi-cyclic low-density parity-check codes with variable code rates for cloud data storage systems

  • Vairaperumal Bhuvaneshwari;Chandrapragasam Tharini
    • ETRI Journal
    • /
    • Vol. 45, No. 3
    • /
    • pp.404-417
    • /
    • 2023
  • This paper proposes a novel method for constructing quasi-cyclic low-density parity-check (QC-LDPC) codes of medium to high code rates that can be applied in cloud data storage systems requiring better error-correction capabilities. The novelty of this method lies in the construction of sparse base matrices with girth greater than 4, which can then be expanded with a lift factor to produce high-code-rate QC-LDPC codes. Investigations revealed that the proposed large-sized QC-LDPC codes with high code rates have low encoding complexity and provide a bit error rate (BER) of $10^{-10}$ at 3.5 dB $E_b/N_0$, whereas conventional LDPC codes show a BER of $10^{-7}$ at 3 dB $E_b/N_0$. Subsequently, the proposed QC-LDPC code was implemented in a software-defined radio using the NI USRP 2920 hardware platform, achieving a BER of $10^{-6}$ at 4.2 dB $E_b/N_0$. The performance of the proposed codes in terms of encoding/decoding speed and storage overhead was then investigated when applied to cloud data storage (GCP). Our results revealed that the proposed codes required much less time for encoding and decoding of 10 MB data files and produced less storage overhead than the conventional LDPC and Reed-Solomon codes.
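
The construction step the abstract describes (a sparse base matrix expanded by a lift factor into a quasi-cyclic parity-check matrix) can be sketched generically: each base-matrix entry becomes a Z x Z circulant permutation matrix (or an all-zero block), and a quick check verifies that the expanded matrix contains no 4-cycles, i.e. girth greater than 4. The exponent values and lift factor below are arbitrary illustrative choices, not the paper's construction.

```python
# Expand a QC-LDPC base (exponent) matrix into a full parity-check matrix:
# entry p >= 0 becomes the ZxZ identity cyclically shifted by p; entry -1 a zero block.
import numpy as np
from itertools import combinations

def expand(base, Z):
    m, n = base.shape
    H = np.zeros((m * Z, n * Z), dtype=int)
    for i in range(m):
        for j in range(n):
            p = base[i, j]
            if p >= 0:
                H[i * Z:(i + 1) * Z, j * Z:(j + 1) * Z] = np.roll(np.eye(Z, dtype=int), p, axis=1)
    return H

def girth_greater_than_4(H):
    """No 4-cycles: every pair of columns shares at most one row."""
    for a, b in combinations(range(H.shape[1]), 2):
        if int(np.dot(H[:, a], H[:, b])) > 1:
            return False
    return True

if __name__ == "__main__":
    base = np.array([[ 0,  1,  2, -1],     # illustrative exponent matrix
                     [ 2, -1,  0,  1],
                     [-1,  2,  1,  0]])
    H = expand(base, Z=5)
    print("H shape:", H.shape, " max column weight:", H.sum(axis=0).max())
    print("girth > 4:", girth_greater_than_4(H))
```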

REPEATED LOW-DENSITY BURST ERROR DETECTING CODES

  • Dass, Bal Kishan;Verma, Rashmi
    • 대한수학회지
    • /
    • Vol. 48, No. 3
    • /
    • pp.475-486
    • /
    • 2011
  • The paper deals with codes detecting repeated low-density burst errors of a specified weight or less. Linear codes capable of detecting such errors are studied, and codes capable of simultaneously correcting and detecting such errors are also dealt with. The paper obtains lower and upper bounds on the number of parity-check digits required for such codes, and an example of such a code is provided.
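
As a concrete illustration of what "detecting low-density bursts of a specified weight or less" means, the sketch below exhaustively verifies, for a small linear code given by its parity-check matrix, that every error confined to a burst of length b with Hamming weight at most w produces a nonzero syndrome. The example matrix, burst length, and weight are illustrative assumptions; the paper's bounds concern how few parity-check digits suffice for this property.

```python
# Verify that a linear code (given by parity-check matrix H) detects every
# error confined to a burst of length <= b with Hamming weight <= w:
# detection holds iff every such nonzero error has a nonzero syndrome.
import numpy as np
from itertools import combinations

def detects_low_density_bursts(H, b, w):
    n = H.shape[1]
    for start in range(n - b + 1):                    # burst window position
        window = range(start, start + b)
        for weight in range(1, w + 1):
            for support in combinations(window, weight):
                e = np.zeros(n, dtype=int)
                e[list(support)] = 1
                if not ((H @ e) % 2).any():           # zero syndrome: undetected
                    return False, support
    return True, None

if __name__ == "__main__":
    # Parity-check matrix of a small example code (illustrative, not from the paper).
    H = np.array([[1, 0, 1, 1, 0, 0, 1],
                  [0, 1, 1, 0, 1, 0, 1],
                  [0, 0, 0, 1, 1, 1, 1]])
    ok, counterexample = detects_low_density_bursts(H, b=3, w=2)
    print("all bursts of length <= 3, weight <= 2 detected:", ok)
```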

패리티 검사비트를 이용한 새로운 오류정정 기술 (Error Correcting Technique with the Use of a Parity Check Bit)

  • 현종식;한영열
    • 한국산업정보학회:학술대회논문집
    • /
    • 한국산업정보학회 1997년도 추계학술대회 발표논문집:21세기를 향한 정보통신 기술의 전망
    • /
    • pp.137-146
    • /
    • 1997
  • The simplest bit-error detection scheme is to append a parity bit to the end of a bit sequence. In this paper, an error-correction technique using a single parity bit is proposed, and the performance of the proposed system is analyzed. The analytical error probability of the proposed system is compared with computer-simulation results of the proposed system, and also with the error probability of a BPSK system, showing the resulting signal-to-noise ratio gain.
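
One standard way to obtain error correction from a single parity-check bit with soft BPSK decisions is the Wagner rule: if the hard decisions violate the parity check, flip the least reliable bit. The sketch below shows that rule as a generic illustration of correcting with one parity bit; it is not claimed to be the specific technique of this paper, and the block length and Eb/N0 are illustrative assumptions.

```python
# Wagner-rule decoding of a single-parity-check code over BPSK/AWGN:
# if the parity of the hard decisions is wrong, flip the least reliable bit.
import numpy as np

rng = np.random.default_rng(2)

def wagner_decode(received):
    hard = (received < 0).astype(int)           # BPSK: +1 -> bit 0, -1 -> bit 1
    if hard.sum() % 2:                          # even parity violated
        hard[np.argmin(np.abs(received))] ^= 1  # flip the least reliable position
    return hard

def simulate(n_blocks=50_000, k=7, ebn0_db=4.0):
    rate = k / (k + 1)
    sigma = np.sqrt(1 / (2 * rate * 10 ** (ebn0_db / 10)))
    errors_coded = errors_uncoded = 0
    for _ in range(n_blocks):
        bits = rng.integers(0, 2, k)
        codeword = np.append(bits, bits.sum() % 2)      # append even-parity bit
        rx = (1 - 2 * codeword) + sigma * rng.standard_normal(k + 1)
        decoded = wagner_decode(rx)
        errors_coded += np.count_nonzero(decoded[:k] != bits)
        errors_uncoded += np.count_nonzero((rx[:k] < 0).astype(int) != bits)
    total = n_blocks * k
    print(f"raw-decision BER: {errors_uncoded / total:.2e}   "
          f"Wagner-decoded BER: {errors_coded / total:.2e}")

simulate()
```

The comparison is made on the same received symbols, so it isolates the gain of the decoding rule itself rather than comparing coded and uncoded transmissions at equal energy per information bit.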
