• Title/Summary/Keyword: Noise Mapping


Performance Analysis of Generic Bit Error Rate of M-ary Square QAM (정방형 M진 직교 진폭 변조 신호의 일반화된 BER 성능 분석)

  • Cho, Kyong-Kuk;Yoon, Dong-Weon
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.38 no.11
    • /
    • pp.41-48
    • /
    • 2001
  • The exact general bit error rate (BER) expression of M-ary square quadrature amplitude modulation (QAM) for arbitrary M has not been derived so far. In this paper, a generalized closed-form expression for the BER performance of M-ary square QAM with Gray code bit mapping is derived and analyzed over an additive white Gaussian noise (AWGN) channel. The derivation is based on the consistency of the bit-mapping format in the Gray-coded signal constellation and is generalized from the results for M = 16, 64, and 256.

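The exact generalized expression is derived in the paper itself; as a rough companion, the sketch below only computes the widely used nearest-neighbour approximation for Gray-coded square M-QAM over AWGN, not the paper's closed form. The Eb/N0 value is an arbitrary example.

```python
# Approximate BER of Gray-coded square M-QAM over AWGN.
# This is the standard nearest-neighbour approximation, NOT the exact
# generalized closed form derived in the paper above.
import math

def qfunc(x: float) -> float:
    """Gaussian Q-function, Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def qam_ber_approx(M: int, ebn0_db: float) -> float:
    """Approximate bit error rate for square M-QAM with Gray mapping."""
    k = math.log2(M)                      # bits per symbol
    ebn0 = 10.0 ** (ebn0_db / 10.0)       # Eb/N0 as a linear ratio
    arg = math.sqrt(3.0 * k * ebn0 / (M - 1.0))
    return (4.0 / k) * (1.0 - 1.0 / math.sqrt(M)) * qfunc(arg)

if __name__ == "__main__":
    for M in (16, 64, 256):
        print(M, f"{qam_ber_approx(M, 10.0):.3e}")   # BER at Eb/N0 = 10 dB
```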

Construction of Structured q-ary LDPC Codes over Small Fields Using Sliding-Window Method

  • Chen, Haiqiang;Liu, Yunyi;Qin, Tuanfa;Yao, Haitao;Tang, Qiuling
    • Journal of Communications and Networks
    • /
    • v.16 no.5
    • /
    • pp.479-484
    • /
    • 2014
  • In this paper, we consider the construction of cyclic and quasi-cyclic structured q-ary low-density parity-check (LDPC) codes over a designated small field. The construction is performed with a pre-defined sliding window, which performs a regular mapping from the original field to the target field under certain parameters. Compared with the original codes, the newly constructed codes provide better flexibility in the choice of code rate, code length, and field size. The constructed codes over small fields, with code lengths from tens to hundreds, perform well under the q-ary sum-product algorithm (QSPA) over the additive white Gaussian noise channel and are comparable to the improved sphere-packing bound. These codes may find applications in wireless sensor networks (WSNs), where delay and energy are severely constrained.

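The sliding-window mapping itself is not detailed in this abstract. As a loose illustration only, the sketch below re-maps symbols of a larger binary-extension field to a smaller one by sliding a bit window over each symbol's binary representation; it is a generic stand-in, not the authors' construction, and `slide_map` and its parameters are hypothetical.

```python
# Illustrative only: a generic "sliding-window" re-mapping of symbols from a
# larger binary-extension field to a smaller one by windowing their bit
# representations. The paper's actual construction is more involved; this
# sketch only conveys the general flavour of field down-mapping.
def slide_map(symbols, m: int, r: int, step: int = 1):
    """Map GF(2^m) symbols (given as integers) to GF(2^r) symbols by sliding
    an r-bit window over each symbol's m-bit representation."""
    out = []
    for s in symbols:
        bits = [(s >> i) & 1 for i in range(m)]          # LSB-first bit list
        for start in range(0, m - r + 1, step):
            window = bits[start:start + r]
            out.append(sum(b << i for i, b in enumerate(window)))
    return out

print(slide_map([13, 6], m=4, r=2))   # 4-bit symbols mapped down to 2-bit symbols
```
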
Finite Element Analysis on Residual Aligning Torque and Frictional Energy of a Tire with Detailed Tread Blocks (트레드 블록을 고려한 타이어의 잔류 복원 토크 및 마찰 에너지에 대한 유한 요소 해석)

  • 김기운;정현성;조진래;양영수
    • Transactions of the Korean Society of Automotive Engineers
    • /
    • v.12 no.4
    • /
    • pp.173-180
    • /
    • 2004
  • The tread pattern of a tire has an important effect on tire performance characteristics such as handling, wear, noise, and hydroplaning. However, finite element analysis of a patterned tire with detailed tread blocks has been limited by the complexity of meshing the tread blocks and the huge computation time, although computation time has been reduced by advances in computer technology. Modeling the tread blocks usually requires creating a solid model in CAD software, so generating meshes of a patterned tire from a CAD model is a complicated and time-consuming task. A new, efficient and convenient method for generating meshes of a patterned tire has been developed. In this method, 3-D meshes of the tread pattern are created by mapping 2-D meshes of the tread geometry onto the 3-D tread surfaces and extruding them through the tread depth. The tread pattern meshes are then assembled with the tire body meshes using a tie contact constraint. Residual aligning torque and frictional energy are calculated using the patterned tire model and compared with experimental results. The calculated results of the patterned tire model are shown to be in good agreement with the experimental ones.

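The mesh-generation step described above (mapping a planar tread-pattern mesh onto the tread surface and extruding it through the tread depth) can be illustrated with a minimal geometric sketch. The cylindrical tread surface, node layout, and dimensions below are assumptions for illustration, not the authors' actual procedure.

```python
# Minimal sketch: map planar (x, y) tread-pattern nodes onto an assumed
# cylindrical tread surface and extrude through the tread depth to obtain
# layered 3-D nodes. The real method maps onto the actual profiled tread
# surface of the tire model; this only shows the idea.
import numpy as np

def map_and_extrude(nodes_2d, radius=0.3, depth=0.01, n_layers=3):
    """nodes_2d: (N, 2) array of (arc-length x, axial y) coordinates [m]."""
    nodes_2d = np.asarray(nodes_2d, dtype=float)
    layers = []
    for k in range(n_layers + 1):
        r = radius - depth * k / n_layers        # extrude radially inward
        theta = nodes_2d[:, 0] / radius          # wrap arc length around the tire
        x = r * np.cos(theta)
        z = r * np.sin(theta)
        y = nodes_2d[:, 1]                       # axial coordinate is unchanged
        layers.append(np.column_stack([x, y, z]))
    return np.stack(layers)                      # (n_layers+1, N, 3)

grid = [[0.00, 0.00], [0.02, 0.00], [0.00, 0.05], [0.02, 0.05]]
print(map_and_extrude(grid).shape)               # (4, 4, 3)
```
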
Human Visual System based Automatic Underwater Image Enhancement in NSCT domain

  • Zhou, Yan;Li, Qingwu;Huo, Guanying
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.2
    • /
    • pp.837-856
    • /
    • 2016
  • Underwater image enhancement has received considerable attention in recent decades owing to the poor visibility and low contrast of underwater images. In this paper, we propose a new automatic underwater image enhancement algorithm that combines nonsubsampled contourlet transform (NSCT) domain enhancement techniques with mechanisms of the human visual system (HVS). We apply an HVS-based multiscale retinex algorithm in the NSCT domain to eliminate non-uniform illumination, and adopt a threshold denoising technique to suppress underwater noise. The proposed algorithm incorporates the luminance masking and contrast masking characteristics of the HVS into the NSCT domain to yield a new HVS-based NSCT. Moreover, we define two nonlinear mapping functions: the first manipulates the HVS-based NSCT contrast coefficients to enhance edges, and the second is a gain function that modifies the lowpass subband coefficients to adjust the global dynamic range. As a result, our algorithm achieves contrast enhancement, image denoising, and edge sharpening automatically and simultaneously. Experimental results illustrate that the proposed algorithm outperforms state-of-the-art algorithms in both subjective evaluation and quantitative assessment. In addition, our algorithm enhances underwater images automatically without any parameter tuning.

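The two nonlinear mapping functions mentioned above (one acting on contrast coefficients, one acting as a gain on the lowpass subband) are not specified in this abstract. Below is a generic sketch of what such coefficient-domain mappings often look like (a sigmoid-style contrast stretch and a power-law gain); the function shapes and the parameters `alpha` and `gamma` are assumptions, not the paper's definitions.

```python
# Generic illustrations of coefficient-domain nonlinear mappings, NOT the
# specific functions defined in the paper. `alpha` and `gamma` are
# hypothetical tuning parameters.
import numpy as np

def contrast_map(coeffs, alpha=2.0):
    """Boost bandpass coefficient magnitudes while preserving sign
    (a tanh-style stretch normalized to the maximum magnitude)."""
    m = np.max(np.abs(coeffs)) + 1e-12
    x = np.abs(coeffs) / m
    boosted = np.tanh(alpha * x) / np.tanh(alpha)
    return np.sign(coeffs) * boosted * m

def lowpass_gain(lowpass, gamma=0.7):
    """Power-law gain on the lowpass subband to adjust global dynamic range."""
    m = np.max(lowpass) + 1e-12
    return m * (lowpass / m) ** gamma

band = np.array([-0.8, -0.1, 0.0, 0.2, 0.9])
print(contrast_map(band).round(3))
print(lowpass_gain(np.array([0.1, 0.5, 1.0])).round(3))
```
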
A Trellis-based Technique for Blind Channel Estimation and Equalization

  • Cao, Lei;Chen, Chang-Wen;Orlik, Philip;Zhang, Jinyun;Gu, Daqing
    • Journal of Communications and Networks
    • /
    • v.6 no.1
    • /
    • pp.19-25
    • /
    • 2004
  • In this paper, we present a trellis-based blind channel estimation and equalization technique that couples two kinds of adaptive Viterbi algorithms. First, initial blind channel estimation is accomplished by combining a list parallel Viterbi algorithm with least mean square (LMS) updating. In this stage, multiple trellis mappings are preserved simultaneously and ranked by their path metrics; equivalently, multiple channel estimates are maintained and updated as each symbol is received. Second, the best channel estimate from this stage is adopted to set up the whole trellis, and the conventional adaptive Viterbi algorithm is then applied to detect the symbols and refine the channel estimate alternately. A small delay is introduced in symbol detection and decision feedback to smooth the impact of noise. An automatic switch between the two stages is also proposed by exploiting the evolution of the path metrics and the linear constraint inherent in the trellis mapping. Simulations show that the proposed scheme performs well overall in terms of mean square error (MSE) of the channel estimate, robustness to the initial channel guess, computational complexity, and channel equalization.

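The LMS channel update used alongside the list Viterbi search follows the textbook recursion; the sketch below shows only that recursion in isolation, with a placeholder step size and toy symbols, not the full list-Viterbi machinery described above.

```python
# Textbook LMS update of an FIR channel estimate h, given a hypothesized
# transmitted symbol window (as a survivor path would supply). Step size
# `mu` and the example symbols are placeholders.
import numpy as np

def lms_channel_update(h, x_window, y_obs, mu=0.05):
    """One LMS step: h <- h + mu * e * conj(x), with e = y - h^T x.

    h        : current channel estimate, shape (L,)
    x_window : L most recent hypothesized symbols (newest first), shape (L,)
    y_obs    : received sample for this symbol interval
    """
    e = y_obs - np.dot(h, x_window)          # prediction error for this branch
    return h + mu * e * np.conj(x_window), e

# Toy usage: true 2-tap channel, BPSK symbols, no noise.
rng = np.random.default_rng(0)
h_true = np.array([1.0, 0.4])
h_est = np.zeros(2)
syms = rng.choice([-1.0, 1.0], size=200)
for n in range(1, len(syms)):
    x = np.array([syms[n], syms[n - 1]])     # newest-first symbol window
    y = np.dot(h_true, x)
    h_est, _ = lms_channel_update(h_est, x, y)
print(h_est.round(3))                        # should approach [1.0, 0.4]
```
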
Study on Optimization of Fuel Injection Parameters and EGR Rate of Off-road Diesel Engine by Taguchi Method (다구찌 방법을 적용한 Off-road 디젤 엔진의 분사조건 및 EGR 율 최적화에 관한 연구)

  • Ha, Hyeongsoo;Ahn, Juengkyu;Park, Chansu;Kang, Jeongho
    • Transactions of the Korean Society of Automotive Engineers
    • /
    • v.22 no.7
    • /
    • pp.84-89
    • /
    • 2014
  • Emission regulations have been tightened not only for on-road vehicle engines but also for off-road engines, which is why emission reduction technologies are now widely applied to off-road engines. In this study, engine parameters (injector hole number, injection timing, and EGR rate) were optimized for the reduction of NOx and smoke emissions using the sensitivity and S/N-ratio analyses of the Taguchi method (DOE). As a result, this paper presents the optimum parameter values for NOx and smoke emission reduction. A reproducibility verification showed that the predicted NOx and smoke values had errors below 10%. Consequently, the method and results of this study will serve as a quantitative reference for EGR control mapping in a follow-up study.

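S/N-ratio analysis for responses that should be minimized, such as NOx and smoke, typically uses the Taguchi "smaller-the-better" criterion; a minimal sketch follows, with invented example readings.

```python
# Taguchi "smaller-the-better" signal-to-noise ratio, the usual choice when
# the response (e.g. NOx or smoke) should be minimized. The example readings
# are invented for illustration.
import math

def sn_smaller_the_better(readings):
    """S/N = -10 * log10( mean(y_i^2) )."""
    mean_sq = sum(y * y for y in readings) / len(readings)
    return -10.0 * math.log10(mean_sq)

nox_ppm = [182.0, 175.0, 190.0]          # repeated measurements at one run
print(f"S/N = {sn_smaller_the_better(nox_ppm):.2f} dB")
```
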
Classification of Imbalanced Data Based on MTS-CBPSO Method: A Case Study of Financial Distress Prediction

  • Gu, Yuping;Cheng, Longsheng;Chang, Zhipeng
    • Journal of Information Processing Systems
    • /
    • v.15 no.3
    • /
    • pp.682-693
    • /
    • 2019
  • Traditional classification methods mostly assume that the class distribution is balanced, whereas imbalanced data are widespread in the real world, so solving the problem of classifying imbalanced data is important. In the Mahalanobis-Taguchi system (MTS), the classification model is constructed from a reference space and a measurement scale derived from a single normal group, which makes it well suited to the imbalanced-data problem. In this paper, an improved method, MTS-CBPSO, is constructed by introducing chaotic mapping and binary particle swarm optimization in place of the orthogonal array and signal-to-noise ratio (SNR) for selecting valid variables, with G-mean, F-measure, and dimensionality reduction taken as the classification optimization targets. The proposed method is applied to financial distress prediction for Chinese listed companies. Compared with traditional MTS and common classifiers such as SVM, C4.5, and k-NN, MTS-CBPSO achieves better prediction accuracy and dimensionality reduction.

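Two building blocks named above can be sketched briefly: the Mahalanobis distance measured against a single "normal" reference group, and a logistic chaotic map of the kind commonly used to drive chaotic PSO variants. This is a generic illustration, not the authors' full MTS-CBPSO procedure; the sample data are invented.

```python
# Generic building blocks: Mahalanobis distance w.r.t. a normal reference
# group, and a logistic chaotic map often used in chaotic PSO variants.
# Not the authors' full MTS-CBPSO algorithm.
import numpy as np

def mahalanobis_to_reference(x, reference):
    """Distance of sample x from the reference (normal) group."""
    mu = reference.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

def logistic_map(x0=0.37, n=5, r=4.0):
    """Chaotic logistic map x_{k+1} = r * x_k * (1 - x_k), with r = 4."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

rng = np.random.default_rng(1)
normal_group = rng.normal(size=(50, 3))          # "healthy" majority samples
print(round(mahalanobis_to_reference(np.array([2.0, 2.0, 2.0]), normal_group), 2))
print([round(v, 3) for v in logistic_map()])
```
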
Nonlinear structural model updating based on the Deep Belief Network

  • Mo, Ye;Wang, Zuo-Cai;Chen, Genda;Ding, Ya-Jie;Ge, Bi
    • Smart Structures and Systems
    • /
    • v.29 no.5
    • /
    • pp.729-746
    • /
    • 2022
  • In this paper, a nonlinear structural model updating methodology based on the Deep Belief Network (DBN) is proposed. First, the instantaneous parameters of the vibration responses are obtained by the discrete analytical mode decomposition (DAMD) method and the Hilbert transform (HT). The instantaneous parameters are regarded as the independent variables and the nonlinear model parameters as the dependent variables, and the DBN is used to approximate the nonlinear mapping between them. Finally, the instantaneous parameters of the measured vibration responses are fed into the well-trained DBN. Owing to the strong learning and generalization abilities of the DBN, the updated nonlinear model parameters can be estimated directly. Two nonlinear shear-type structure models under two types of excitation and various noise levels are adopted in numerical simulations to validate the effectiveness of the proposed approach; their nonlinear properties are simulated via the hysteretic parameters of a Bouc-Wen model and a Giuffré-Menegotto-Pinto model, respectively. In addition, the proposed approach is verified on a three-story shear-type frame with a piezoelectric friction damper (PFD). Simulated and experimental results suggest that the nonlinear model updating approach has high computational efficiency and precision.

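The Bouc-Wen hysteretic model mentioned above has a standard differential form for its hysteretic variable, dz/dt = A·dx/dt − β·|dx/dt|·|z|^(n−1)·z − γ·(dx/dt)·|z|^n. A minimal forward-Euler sketch is shown below; the parameter values and the sinusoidal displacement history are illustrative, not the ones identified in the paper.

```python
# Minimal Bouc-Wen hysteretic-variable integration (forward Euler) under a
# sinusoidal displacement history. Parameter values are illustrative only.
import math

def bouc_wen_z(x_hist, dt, A=1.0, beta=0.5, gamma=0.5, n=1.0):
    """Integrate dz/dt = A*dx - beta*|dx|*|z|^(n-1)*z - gamma*dx*|z|^n."""
    z, zs = 0.0, []
    for k in range(1, len(x_hist)):
        dx = (x_hist[k] - x_hist[k - 1]) / dt
        dz = A * dx - beta * abs(dx) * (abs(z) ** (n - 1)) * z - gamma * dx * abs(z) ** n
        z += dz * dt
        zs.append(z)
    return zs

dt = 0.01
x = [0.02 * math.sin(2 * math.pi * 1.0 * k * dt) for k in range(500)]
z = bouc_wen_z(x, dt)
print(round(max(z), 4), round(min(z), 4))       # bounds of the hysteretic variable
```
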
Alzheimer progression classification using fMRI data (fMRI 데이터를 이용한 알츠하이머 진행상태 분류)

  • Ju Hyeon-Noh;Hee-Deok Yang
    • Smart Media Journal
    • /
    • v.13 no.4
    • /
    • pp.86-93
    • /
    • 2024
  • The development of functional magnetic resonance imaging (fMRI) has contributed significantly to mapping brain functions and understanding brain networks at rest. This paper proposes a CNN-LSTM-based model to classify the progression stages of Alzheimer's disease. First, four preprocessing steps are applied to remove noise from the fMRI data before feature extraction. Second, a U-Net architecture is used to extract spatial features from the preprocessed data. Third, the extracted spatial features are passed through an LSTM to capture temporal features, which are then used for classification. Experiments were conducted by adjusting the temporal dimension of the data. Using 5-fold cross-validation, an average accuracy of 96.4% was achieved, indicating that the proposed method has high potential for identifying the progression of Alzheimer's disease from fMRI data.

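The overall pipeline (a convolutional spatial encoder whose per-timepoint features feed an LSTM classifier) can be sketched roughly as below. The layer sizes, the plain convolutional encoder standing in for the paper's U-Net, the input resolution, and the number of output classes are all assumptions for illustration.

```python
# Rough sketch of a CNN-LSTM classifier over an fMRI time series: a small 3-D
# convolutional encoder (standing in for the paper's U-Net) is applied to each
# volume, and the per-timepoint features are fed to an LSTM. Layer sizes and
# the 3-class head are assumptions.
import torch
import torch.nn as nn

class CnnLstmClassifier(nn.Module):
    def __init__(self, hidden=64, n_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(                  # per-volume spatial encoder
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(4), nn.Flatten(),     # -> (batch, 8*4*4*4)
        )
        self.lstm = nn.LSTM(8 * 4 * 4 * 4, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                              # x: (batch, time, 1, D, H, W)
        b, t = x.shape[:2]
        feats = self.encoder(x.reshape(b * t, *x.shape[2:]))
        feats = feats.reshape(b, t, -1)                # temporal feature sequence
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])                      # class logits

model = CnnLstmClassifier()
dummy = torch.randn(2, 10, 1, 16, 16, 16)              # 2 subjects, 10 timepoints
print(model(dummy).shape)                               # torch.Size([2, 3])
```
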
A Study on the Optimization of Thickness Deviation of a 3.5-inch Light Guide Plate Using Design of Experiments (실험계획법을 통한 3.5인치 도광판의 두께 편차 최적화에 대한 연구)

  • Hyo-Eun Lee;Jong-Sun Kim
    • Design & Manufacturing
    • /
    • v.18 no.2
    • /
    • pp.41-50
    • /
    • 2024
  • In this study, experimental design methods were used to derive optimal process conditions for improving the thickness uniformity of a 0.40 mm thick, 3.5-inch light guide plate. Process mapping and expert group analysis were used to identify the factors that influence the thickness of injection-molded products; the key factors identified were mold temperature, injection speed, packing pressure, packing time, clamp force, and flash time. Considering the resin manufacturer's recommended process conditions and the process conditions used for similar light guide plates, a three-level range was selected for each influencing factor, and an L27 orthogonal array of process conditions was generated using the Taguchi method. Injection molding of the 3.5-inch light guide plates was performed according to this array, thickness measurements were taken, and the results were analyzed using the signal-to-noise ratio to maximize the Cpk value, leading to the optimal process conditions. The thickness uniformity of the product was then analyzed under the derived optimum conditions: the results showed a Cpk value of 3.22, a 97.5% improvement over the process conditions used for similar light guide plates.
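
The process capability index Cpk targeted by the optimization is computed from the measured thickness distribution and the specification limits; a minimal sketch follows, where the specification limits and sample thicknesses are invented and only the formula itself is standard.

```python
# Process capability index Cpk from measured thicknesses and spec limits.
# The specification limits and sample values here are invented for
# illustration; only the formula is standard.
import statistics

def cpk(samples, lsl, usl):
    """Cpk = min(USL - mean, mean - LSL) / (3 * sigma)."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)       # sample standard deviation
    return min(usl - mu, mu - lsl) / (3.0 * sigma)

thickness_mm = [0.401, 0.399, 0.400, 0.402, 0.398, 0.400]
print(round(cpk(thickness_mm, lsl=0.39, usl=0.41), 2))
```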