• Title/Summary/Keyword: Error level

Search Result 2,511, Processing Time 0.028 seconds

An Error Diffusion Technique Based on Principal Distance (주거리 기반의 오차확산 방법)

  • Gang, Gi-Min;Kim, Chun-U
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.38 no.1
    • /
    • pp.1-10
    • /
    • 2001
  • In order to generate a gray-scale image on a binary imaging device such as a digital printer, the gray-scale image must first be converted into a binary image by a halftoning technique. This paper presents a new error diffusion technique that achieves homogeneous dot distributions in the binary image. The paper first defines 'the minimum pixel distance', the distance from the current pixel under binarization to the nearest minor pixel. The gray levels of the input image are then converted into a new variable, the principal distance, on which the error diffusion operates. Whereas existing error diffusion techniques diffuse the gray-level difference caused by binarization to the neighboring pixels, the proposed method propagates the difference in principal distances. Quantization is accomplished by comparing the updated principal distance with the minimum pixel distance. To calculate the minimum pixel distance efficiently, an MPOA (Minor Pixel Offset Array) is employed, reducing both computational load and memory use.

  • PDF
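
The paper's principal-distance variant is not reproduced in the abstract; as a point of reference, here is a minimal sketch of conventional error diffusion (Floyd-Steinberg weights), the scan-and-diffuse scheme the proposed method modifies. All names and the 8-bit grayscale assumption are illustrative.

```python
# Conventional error diffusion halftoning with Floyd-Steinberg weights.
# The proposed method replaces the diffused gray-level error with the
# difference of principal distances, but shares this overall structure.

def error_diffusion(image, width, height):
    """Binarize a grayscale image (0-255), diffusing the quantization
    error to not-yet-processed neighbors."""
    img = [row[:] for row in image]          # work on a copy
    out = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            old = img[y][x]
            new = 255 if old >= 128 else 0   # threshold at mid-gray
            out[y][x] = new
            err = old - new                  # quantization error
            # Floyd-Steinberg distribution: 7/16, 3/16, 5/16, 1/16
            for dx, dy, w in ((1, 0, 7), (-1, 1, 3), (0, 1, 5), (1, 1, 1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < width and 0 <= ny < height:
                    img[ny][nx] += err * w / 16.0
    return out
```

On a flat mid-gray input this produces a roughly 50/50 dot pattern whose mean approximates the input level; the paper's contribution is making that pattern homogeneous.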

ATC-55 Based Friction Damper Design Procedure for Controlling Inelastic Seismic Responses (비탄성 지진응답 제어를 위한 ATC-55에 기반한 마찰감쇠기 설계절차)

  • Kim, Hyoung-Seop;Min, Kyung-Won;Lee, Sang-Hyun;Park, Ji-Hun
    • Journal of the Earthquake Engineering Society of Korea
    • /
    • v.9 no.1 s.41
    • /
    • pp.9-16
    • /
    • 2005
  • The purpose of this paper is to present a design procedure for a friction damper that controls the elastic and inelastic responses of building structures under earthquake excitation. The equivalent damping and period added by the friction damper are estimated using the ATC-40 and ATC-55 procedures, which provide an equivalent linear system for a bilinear one, and a design formula for achieving a target performance level with the friction damper is then presented. It is shown that an error exists between the responses obtained from this formula and those obtained by nonlinear analysis, and that the characteristics of the error vary with the hardening ratio, yield strength ratio, and structural period. Equations compensating for this error are proposed based on the least-squares method, and numerical results indicate that the error is significantly reduced. The proposed formula can therefore be used, without much error, to design a friction damper for retrofitting a structure exhibiting either elastic or inelastic behavior.
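
The abstract's compensation equations are fitted by least squares; the sketch below shows that step in generic form on hypothetical data (the sample error ratios, the linear model form, and the period values are illustrative, not the paper's actual equations).

```python
# Generic least-squares fit of a correction equation, in the spirit of
# the paper's error-compensation step. Data and model are illustrative.

def least_squares_line(xs, ys):
    """Fit y = a*x + b by ordinary least squares (closed form)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# hypothetical data: ratio of equivalent-linear to nonlinear response
# versus structural period (seconds)
periods = [0.2, 0.4, 0.6, 0.8, 1.0]
error_ratio = [1.30, 1.22, 1.15, 1.09, 1.02]
a, b = least_squares_line(periods, error_ratio)
# divide by the fitted trend to compensate the systematic error
corrected = [r / (a * t + b) for t, r in zip(periods, error_ratio)]
```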

Precision GPS Orbit Determination and Analysis of Error Characteristics (정밀 GPS 위성궤도 결정 및 오차 특성 분석)

  • Bae, Tae-Suk
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.27 no.4
    • /
    • pp.437-444
    • /
    • 2009
  • A bi-directional, multi-step numerical integrator was developed to determine GPS (Global Positioning System) orbits with a dynamic approach; it shows micrometer-level accuracy at GPS altitude. The accelerations due to planets other than the Moon and the Sun are so small that they are absorbed into the empirical forces of the Solar Radiation Pressure (SRP) model. The satellite orbit parameters are estimated by least-squares adjustment using both the integrated orbit and the published IGS (International GNSS Service) precise orbit. For this estimation, the integration must be applied not only to the acceleration itself but also to the partial derivatives of the acceleration with respect to the unknown parameters. The accuracy of the satellite orbit is evaluated by the RMS (root mean square) of the residuals computed from the estimated orbit parameters. The overall RMS of the orbit error during March 2009 was 5.2 mm, and the absolute orbit error showed no specific pattern with respect to satellite type or coordinate-frame direction. The SRP model used in this study includes only the direct and once-per-revolution terms; residual errors at twice-per-revolution therefore remain and need further investigation.
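
The paper's integrator is high-order and bi-directional; as a hedged illustration of the multi-step idea only, here is a two-step Adams-Bashforth integrator on a harmonic oscillator rather than a real orbit. All names are illustrative.

```python
import math

def ab2(f, t0, y0, h, steps):
    """Integrate y' = f(t, y) with two-step Adams-Bashforth.
    y is a tuple of state components; the first step is bootstrapped
    with one midpoint (RK2) step."""
    def add(u, v, s):                       # u + s*v, component-wise
        return tuple(a + s * b for a, b in zip(u, v))
    k1 = f(t0, y0)                          # RK2 bootstrap for step 1
    k2 = f(t0 + h / 2, add(y0, k1, h / 2))
    ys = [y0, add(y0, k2, h)]
    fs = [k1, f(t0 + h, ys[1])]
    for n in range(1, steps):
        # AB2: y_{n+1} = y_n + h*(3/2 f_n - 1/2 f_{n-1})
        y_next = tuple(y + h * (1.5 * fn - 0.5 * fp)
                       for y, fn, fp in zip(ys[-1], fs[-1], fs[-2]))
        ys.append(y_next)
        fs.append(f(t0 + (n + 1) * h, y_next))
    return ys

# test problem: x'' = -x, exact solution x(t) = cos(t)
osc = lambda t, y: (y[1], -y[0])
path = ab2(osc, 0.0, (1.0, 0.0), 0.01, 100)
```

A multi-step method reuses previously evaluated force terms, which is why it suits expensive orbital force models; the real integrator would also propagate the variational equations for the parameter partials.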

A Parallel Equalization Algorithm with Weighted Updating by Two Error Estimation Functions (두 오차 추정 함수에 의해 가중 갱신되는 병렬 등화 알고리즘)

  • Oh, Kil-Nam
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.49 no.7
    • /
    • pp.32-38
    • /
    • 2012
  • In this paper, a parallel equalization algorithm using two error estimation functions is proposed to eliminate the intersymbol interference caused by multipath propagation in the received signal. In the proposed algorithm, multilevel two-dimensional signals are treated as equivalent binary signals, and the error is estimated with both a sigmoid nonlinearity, which is effective in the initial phase of equalization, and a threshold nonlinearity, which gives good steady-state performance. The two errors are scaled by a weight that reflects the relative accuracy of the two estimates, and the two filters are updated differentially. As a result, the combined output of the two filters approaches the optimum: by smoothly blending the two operating modes, the algorithm achieves fast convergence in the initial stage of equalization and a low steady-state error level at the same time. The usefulness of the proposed algorithm was verified through computer simulations and comparison with a conventional method.
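
The two error estimators can be sketched for an equivalent binary (+1/-1) signal as below; the sigmoid slope and the weighting rule are illustrative assumptions, not the paper's exact definitions.

```python
import math

def sigmoid_error(y, slope=2.0):
    """Soft-decision error, e = tanh(slope*y) - y: robust when early
    decisions are unreliable."""
    return math.tanh(slope * y) - y

def threshold_error(y):
    """Hard-decision error, e = sign(y) - y: accurate near steady state."""
    return (1.0 if y >= 0 else -1.0) - y

def combined_error(y, w):
    """Weighted error, 0 <= w <= 1. Increasing w shifts the update
    toward the hard decision as equalization converges; w stands in
    for the algorithm's relative-accuracy weight."""
    return (1 - w) * sigmoid_error(y) + w * threshold_error(y)
```

Updating two parallel filters with these two errors and blending their outputs gives the smooth hand-off between acquisition and tracking that the abstract describes.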

Radiation-Induced Soft Error Detection Method for High Speed SRAM Instruction Cache (고속 정적 RAM 명령어 캐시를 위한 방사선 소프트오류 검출 기법)

  • Kwon, Soon-Gyu;Choi, Hyun-Suk;Park, Jong-Kang;Kim, Jong-Tae
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.35 no.6B
    • /
    • pp.948-953
    • /
    • 2010
  • In this paper, we propose a multi-bit soft error detection method that can be used in the instruction cache of a superscalar CPU. The proposed method targets the high-speed static RAM used for instruction caches. Using 1D parity with interleaving, it incurs less memory overhead and detects more multi-bit errors than comparable methods. It only detects the occurrence of soft errors in the static RAM; correction is handled like a cache miss. When a soft error occurs, it is detected by the 1D parity, and the instruction cache simply fetches the affected words again from lower-level memory. This method can detect multi-bit errors within a window of up to 4×4 bits.
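
The detection idea can be sketched as follows: physically adjacent bits are assigned to different parity groups, so a burst of adjacent upsets flips each affected group's parity and is caught. The word size and interleaving degree below are illustrative, not the paper's exact layout.

```python
# 1D parity with 4-way interleaving: parity bit i covers the bit
# positions congruent to i modulo 4. Detection only; a detected error
# is repaired by refetching the word from lower-level memory, like a
# cache miss.

INTERLEAVE = 4

def encode_parity(bits):
    """Return one even-parity bit per interleaved group."""
    parity = [0] * INTERLEAVE
    for i, b in enumerate(bits):
        parity[i % INTERLEAVE] ^= b
    return parity

def has_error(bits, stored_parity):
    """True if any interleaved parity group mismatches."""
    return encode_parity(bits) != stored_parity

word = [1, 0, 1, 1, 0, 0, 1, 0] * 4           # a 32-bit cache word
p = encode_parity(word)
corrupted = word[:]
for i in range(3):                            # burst of 3 adjacent upsets
    corrupted[8 + i] ^= 1
```

Because the three adjacent upsets land in three different parity groups, each group sees a single flip and the burst is detected, which a single non-interleaved parity bit would miss for even-sized bursts.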

Convergence Property Analysis of Multiple Modulus Self-Recovering Equalization According to Error Dynamics Boosting (다중 모듈러스 자기복원 등화의 오차 역동성 증강에 따른 수렴 특성 분석)

  • Oh, Kil Nam
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.17 no.1
    • /
    • pp.15-20
    • /
    • 2016
  • Existing multiple-modulus self-recovering equalization schemes have not been applied to the initial stage of equalization; instead, they have been used to improve steady-state performance. In this paper, for a self-recovering equalizer that takes the multiple moduli as its desired response, the initial convergence performance is improved by extending the dynamics of the errors through error boosting, and the resulting characteristics are analyzed. In the proposed method, the error is boosted in proportion to the symbol decision for the equalizer output. By extending the error dynamics, the equalizer acquires an initial convergence capability and shows excellent performance in both initial convergence rate and steady-state error level. In particular, the proposed method covers the entire equalization process with a single algorithm; the existing approaches of switching over to, or concurrently running, other operating modes or algorithms are unnecessary. The usefulness of the proposed method was verified by simulations under channel conditions with multipath propagation and additive noise, with performance analyzed for self-recovering equalization of high-order signal constellations.
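
As a hedged one-dimensional sketch of the error rule described above (the paper works with two-dimensional constellations, and its exact boosting rule is not given in the abstract), a multiple-modulus error boosted in proportion to the symbol decision might look like:

```python
def mma_error_boosted(y, moduli):
    """Multiple-modulus error with decision-proportional boosting.
    The modulus nearest |y| acts as the desired response (the symbol
    decision); the CMA-style error term is then scaled by that
    decision, enlarging the error dynamics for outer-ring symbols.
    The boosting rule is an illustrative assumption."""
    r = min(moduli, key=lambda m: abs(abs(y) - m))  # decision modulus
    base = y * (r * r - y * y)                      # CMA-like error term
    return r * base                                 # boost ∝ decision
```

Samples already on a decision modulus produce zero error, while samples far from the outer moduli are pushed harder, which is the mechanism the abstract credits for faster initial convergence.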

Wine Quality Prediction by Using Backward Elimination Based on XGBoosting Algorithm

  • Umer Zukaib;Mir Hassan;Tariq Khan;Shoaib Ali
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.2
    • /
    • pp.31-42
    • /
    • 2024
  • Many industries rely on quality certification to promote their products or brands, yet obtaining quality certification, especially from human experts, is a difficult task. Machine learning plays a vital role here, with many applications in assigning and assessing quality certifications for different products at a macro level. Wine, like other products, comes in many brands, and machine learning can help ensure its quality. In this research, we use two datasets that are publicly available in the UC Irvine Machine Learning Repository to predict wine quality: a red wine dataset with 1,599 records and a white wine dataset with 4,898 records. The study is twofold. First, we apply backward elimination to determine how the dependent variable depends on the independent variables and to predict it; the technique identifies which independent variables are most likely to improve wine quality. Second, we use the robust machine learning algorithm XGBoost for efficient prediction of wine quality. We evaluate the model using root mean square error, mean absolute error, R2, and mean square error. Comparing the results of XGBoost with other state-of-the-art machine learning techniques, the experiments show that XGBoost outperforms them.
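
The backward-elimination stage can be sketched as below. XGBoost itself is omitted; a plain least-squares model stands in as the scorer, the synthetic data replaces the wine datasets, and the stopping tolerance is an illustrative assumption.

```python
import random

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_rmse(X, y, cols):
    """Least-squares fit on the selected columns; return training RMSE."""
    Xs = [[row[c] for c in cols] + [1.0] for row in X]   # add intercept
    n = len(cols) + 1
    A = [[sum(r[i] * r[j] for r in Xs) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * t for r, t in zip(Xs, y)) for i in range(n)]
    w = solve(A, b)
    res = [sum(wi * xi for wi, xi in zip(w, r)) - t for r, t in zip(Xs, y)]
    return (sum(e * e for e in res) / len(y)) ** 0.5

def backward_eliminate(X, y, tol=1e-6):
    """Repeatedly drop the feature whose removal increases the training
    RMSE the least, while the increase stays below tol."""
    cols = list(range(len(X[0])))
    while len(cols) > 1:
        base = fit_rmse(X, y, cols)
        scores = [(fit_rmse(X, y, [c for c in cols if c != d]), d)
                  for d in cols]
        best, drop = min(scores)
        if best - base > tol:
            break
        cols.remove(drop)
    return cols
```

On data where the target depends only on some features, the irrelevant ones are eliminated first; the surviving features are then the candidates handed to the final predictor.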

Research of Satellite Autonomous Navigation Using Star Sensor Algorithm (별 추적기 알고리즘을 활용한 위성 자율항법 연구)

  • Hyunseung Kim;Chul Hyun;Hojin Lee;Donggeon Kim
    • Journal of Space Technology and Applications
    • /
    • v.4 no.3
    • /
    • pp.232-243
    • /
    • 2024
  • Estimating the position of a satellite in orbit is essential for performing various missions in space, including planetary exploration, because it directly affects the success of the mission. As a study of autonomous satellite navigation, this work estimates the satellite's attitude and real-time orbital position using a star sensor algorithm with two star trackers and an Earth sensor. A simulator was constructed to implement the star sensor algorithm, and the position error of the satellite estimated by the presented technique was analyzed. Due to lens distortion and errors in the centroiding algorithm, the average attitude estimation error was at the level of 2.6 rad in the roll direction, and the resulting average position error in the altitude direction was 516 m. The proposed attitude and position estimation technique is expected to contribute to analyzing star sensor performance and improving position estimation accuracy.
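
Attitude determination from two vector observations, the kind of computation a star-tracker-plus-Earth-sensor system performs, can be sketched with the classical TRIAD method. The vectors and frames below are illustrative; the paper's simulator, lens-distortion model, and centroiding are omitted.

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def norm(a):
    m = math.sqrt(sum(x * x for x in a))
    return tuple(x / m for x in a)

def triad(v1_ref, v2_ref, v1_body, v2_body):
    """Rotation matrix R (reference -> body) from two vector pairs,
    e.g. a star direction and the nadir direction."""
    def frame(u, v):
        t1 = norm(u)
        t2 = norm(cross(u, v))
        t3 = cross(t1, t2)
        return (t1, t2, t3)
    r = frame(v1_ref, v2_ref)     # orthonormal triad, reference frame
    s = frame(v1_body, v2_body)   # same triad observed in body frame
    # R = sum_k s_k r_k^T maps each reference triad vector onto its
    # body-frame counterpart
    return [[sum(s[k][i] * r[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]
```

Errors in the measured unit vectors (from distortion or centroiding, as in the abstract) propagate directly into this rotation, which is how attitude error becomes position error downstream.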

A Study on an Error Correction Code Circuit for a Level-2 Cache of an Embedded Processor (임베디드 프로세서의 L2 캐쉬를 위한 오류 정정 회로에 관한 연구)

  • Kim, Pan-Ki;Jun, Ho-Yoon;Lee, Yong-Surk
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.46 no.1
    • /
    • pp.15-23
    • /
    • 2009
  • Microprocessors, which require correct arithmetic operations, have been the subject of in-depth research on soft errors. Among microprocessor components, the memory cell is the most vulnerable to soft errors, and because it holds important data and instructions, a soft error in a memory cell can greatly affect the entire process or operation. If soft errors go undetected, arithmetic operations and processes produce unexpected outcomes without the user realizing it. In architectural design, the common tool for detecting and correcting soft errors is an error check and correction code: the Itanium and IBM PowerPC G5 microprocessors carry Hamming and Rasio codes in their level-2 caches. That work, however, targets large server devices and does not consider power consumption. As operating and threshold voltages shrink with the emergence of high-density, low-power embedded microprocessors, there is an urgent need for ECC (error check and correction) circuits suited to them. In this study, the input/output data of the level-2 cache were analyzed using SimpleScalar-ARM, and a 32-bit H-matrix for the level-2 cache of an embedded microprocessor is proposed. From the point of view of power consumption, the proposed H-matrix, implemented with a Cadence schematic editor, is comparable to a modified Hamming code evaluated with HSPICE. The MiBench programs and the TSMC 0.18 um process were used for verification.
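
The H-matrix coding the abstract refers to builds on single-error-correcting Hamming codes; a minimal sketch on 4 data bits (Hamming(7,4)) rather than the paper's 32-bit cache words:

```python
# Parity-check matrix H: column i is the binary representation of i+1,
# so a nonzero syndrome directly names the flipped position (1-based).
H = [[(i + 1) >> r & 1 for i in range(7)] for r in range(3)]

def syndrome(code):
    return [sum(h * c for h, c in zip(row, code)) % 2 for row in H]

def encode(d):
    """Place the 4 data bits in positions 3, 5, 6, 7 and set the three
    parity bits (positions 1, 2, 4) so that the syndrome is zero."""
    code = [0, 0, d[0], 0, d[1], d[2], d[3]]
    for r, pos in ((0, 0), (1, 1), (2, 3)):   # parity positions 1, 2, 4
        code[pos] = sum(H[r][i] * code[i] for i in range(7)) % 2
    return code

def correct(code):
    """Locate and flip a single erroneous bit via the syndrome."""
    s = syndrome(code)
    pos = s[0] + 2 * s[1] + 4 * s[2]          # 0 means no error
    fixed = code[:]
    if pos:
        fixed[pos - 1] ^= 1
    return fixed
```

Power-oriented H-matrix design, the paper's actual concern, amounts to choosing the columns of H so that the XOR trees computing this syndrome switch as little as possible for typical cache traffic.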

Compiler triggered C level error check (컴파일러에 의한 C레벨 에러 체크)

  • Zheng, Zhiwen;Youn, Jong-Hee M.;Lee, Jong-Won;Paek, Yun-Heung
    • The KIPS Transactions:PartA
    • /
    • v.18A no.3
    • /
    • pp.109-114
    • /
    • 2011
  • We describe a technique for automatically checking that compiler optimizations are sound, i.e., that their transformations preserve semantics. IR (Intermediate Representation) optimization is an important step in a compiler back end, but IR optimization errors are difficult for compiler developers to detect and debug. We therefore introduce a C-level error check system for verifying the correctness of these IR transformations. Our system is built on the Memory Comparison-based Clone detector (MeCC), a tool that detects semantic equivalence at the C level, so we first create an IR-to-C converter that translates the IR into C code before and after each compiler optimization phase. MeCC accepts only C code as input; it uses a path-sensitive, semantics-based static analyzer to estimate the memory state at the exit point of each procedure and compares those states to decide whether two procedures are equivalent. However, MeCC cannot guarantee that two semantically equivalent programs always reach 100% similarity, or that two programs with different semantics never do. To increase the reliability of the results, we describe how to generate the C code in the IR-to-C transformation phase and how to pass optimization information to MeCC so that these unexpected cases are avoided. The methodology is illustrated on three familiar optimizations, dead code elimination, instruction scheduling, and common sub-expression elimination, and our experimental results show that the C-level error check system is highly reliable.
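
MeCC decides equivalence statically; as a hedged stand-in for the soundness property being checked, the sketch below compares the pre- and post-optimization versions of one of the abstract's example transformations (common sub-expression elimination) dynamically, on sample inputs. The function bodies are illustrative.

```python
# Soundness of an optimization means the two versions agree on all
# inputs; here we spot-check that agreement on samples.

def before(a, b):
    # original code: (a + b) recomputed three times
    return (a + b) * (a + b) + (a + b)

def after_cse(a, b):
    # after common sub-expression elimination: computed once
    t = a + b
    return t * t + t

def equivalent_on(samples, f, g):
    """True if f and g agree on every sample input."""
    return all(f(*s) == g(*s) for s in samples)
```

A static checker like MeCC generalizes this: instead of sampling inputs, it compares abstract memory states at each procedure's exit, so agreement is established for all paths rather than a finite test set.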