• Title/Summary/Keyword: MSE (Mean Square Error)


A Numerically Controlled Oscillator for Multi-Carrier Channel Separation in Cdma2000 3X (Cdma2000 3X 다중 반송파 채널 분리용 수치 제어 발진기)

  • Lim In-Gi;Kim Whan-Woo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.11A
    • /
    • pp.1271-1277
    • /
    • 2004
  • We propose a fine phase tuner and a rounding processor for a numerically controlled oscillator (NCO), yielding a reduced phase error in generating a digital sine waveform. By using the fine phase tuner presented in this paper, when the ratio of the desired sine wave frequency to the clock frequency is expressed as a fraction, an accurate adjustment in representing the fractional value can be achieved with simple hardware. In addition, the proposed rounding processor reduces the effects of phase truncation on the output spectrum. Logic simulation results of the NCO for multi-carrier channel separation in the cdma2000 3X multi-carrier receiver system using these techniques show that the noise spectrum and mean square error (MSE) are reduced by 8.68 dB and 5.5 dB, respectively, compared to those of the truncation method, and by 2.38 dB and 0.83 dB, respectively, compared to those of Paul's scheme.
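The effect of phase truncation versus rounding can be seen in a minimal NCO sketch. This is a generic illustration with hypothetical parameters (a 32-bit accumulator and a 12-bit sine lookup table), not the paper's design; the accumulator bits discarded before addressing the table are either truncated or rounded away.

```python
import numpy as np

# Hypothetical NCO parameters, chosen only for illustration.
ACC_BITS, LUT_BITS = 32, 12
LUT = np.sin(2 * np.pi * np.arange(2**LUT_BITS) / 2**LUT_BITS)

def nco(fcw, n, round_phase=False):
    """Generate n samples for frequency control word fcw."""
    phase = (fcw * np.arange(n, dtype=np.int64)) % 2**ACC_BITS
    shift = ACC_BITS - LUT_BITS
    if round_phase:                                  # round discarded bits
        idx = ((phase + (1 << (shift - 1))) >> shift) % 2**LUT_BITS
    else:                                            # plain phase truncation
        idx = phase >> shift
    return LUT[idx]

# A fractional frequency-to-clock ratio, where truncation error shows up
fcw, n = 2**ACC_BITS // 100 + 7, 4096
ideal = np.sin(2 * np.pi * fcw * np.arange(n) / 2**ACC_BITS)
mse_trunc = np.mean((nco(fcw, n) - ideal)**2)
mse_round = np.mean((nco(fcw, n, round_phase=True) - ideal)**2)
```

Rounding the discarded phase bits centers the phase error around zero instead of biasing it to one side, which is why its MSE comes out lower than that of truncation.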

Efficient Channel Estimation Method for ZigBee Receiver in Train Environment (철도 환경에서 ZigBee 수신기를 위한 효율적인 채널 추정 기법)

  • Lee, Jingu;Kim, Daehyun;Kim, Jaehoon;Kim, Younglok
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.4
    • /
    • pp.12-19
    • /
    • 2016
  • A railway monitoring system is under study to forecast derailments and accidents caused by train defects. Because the monitoring system is composed of a ZigBee-based wireless sensor network communicating between the inside and outside of the train, a study of the wireless channel is required. In particular, if a multipath delay profile exists in the channel, an equalizer and channel estimator may be required to prevent receiver performance degradation. Therefore, we analyzed the wireless channel in the train environment using measured data and proposed a channel estimation method that exploits the characteristics of the chip code, taking the channel characteristics of the train environment into consideration. The performance of the proposed method is demonstrated in terms of mean square error (MSE), computational complexity, and bit error rate (BER).
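The chip-code idea can be sketched in a few lines: a known ±1 chip sequence has a sharply peaked autocorrelation, so correlating the received signal against shifted copies of it recovers each tap of a short multipath delay profile. This is a hedged stand-in for the paper's estimator; the code length, channel taps, and noise level below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

L_CODE, L_CH = 1024, 4
chips = rng.choice([-1.0, 1.0], size=L_CODE)       # known chip sequence
h_true = np.array([1.0, 0.5, 0.2, 0.1])            # multipath channel taps
rx = np.convolve(chips, h_true)[:L_CODE]           # chips through the channel
rx += 0.01 * rng.standard_normal(L_CODE)           # additive receiver noise

# Correlator per delay: E[rx[n] * chips[n-k]] ~= h[k] for random chips
h_est = np.array([np.dot(rx[k:], chips[:L_CODE - k]) / (L_CODE - k)
                  for k in range(L_CH)])
mse = np.mean((h_est - h_true)**2)
```

The residual MSE here comes from the finite-length cross-correlation of the chip sequence with its own shifts plus the receiver noise, both of which shrink as the code gets longer.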

Using Artificial Neural Network in the reverse design of a composite sandwich structure

  • Mortda M. Sahib;Gyorgy Kovacs
    • Structural Engineering and Mechanics
    • /
    • v.85 no.5
    • /
    • pp.635-644
    • /
    • 2023
  • The design of honeycomb sandwich structures is often challenging because these structures can be tailored from a variety of possible core and face sheet configurations; therefore, the design of sandwich structures is characterized as a time-consuming and complex task. A data-driven computational approach that integrates an analytical method and an Artificial Neural Network (ANN) is developed by the authors to rapidly predict the design of sandwich structures for a targeted maximum structural deflection. The elaborated ANN reverse design approach is applied to obtain the thickness of the sandwich core, the thickness of the laminated face sheets, and safety factors for the composite sandwich structure. The data required for building the ANN model were obtained using the governing equations of the sandwich components in conjunction with the Monte Carlo method. Then, the functional relationship between the input and output features was created using the neural network backpropagation (BP) algorithm. The input variables were the dimensions of the sandwich structure, the applied load, the core density, and the maximum deflection, which was the reverse input given by the designer. The outstanding performance of the reverse ANN model was revealed through a low mean square error (MSE) together with a coefficient of determination (R2) close to unity. Furthermore, the output of the model was in good agreement with the analytical solution, with a maximum error of 4.7%. The combination of the reverse concept and the ANN may provide a potentially novel approach to the design of sandwich structures. The main added value of this study is the elaboration of a reverse ANN model, which provides a low-cost computational technique and saves time in the design or redesign of sandwich structures compared to analytical and finite element approaches.
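The reverse-design pipeline (Monte Carlo sampling of a governing equation, then fitting a model that maps the target deflection back to a design variable) can be sketched with a toy problem. Here a simply supported beam deflection formula stands in for the sandwich governing equations, and a linear fit on log-features stands in for the ANN; MSE and R2 are the same evaluation metrics named in the abstract. All dimensions and material values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo sampling of a toy governing equation: delta = P*L^3 / (48*E*I)
N = 2000
P = rng.uniform(1e3, 5e3, N)            # load [N]
L = rng.uniform(0.5, 2.0, N)            # span [m]
t = rng.uniform(0.005, 0.05, N)         # thickness [m] (the design target)
E, b = 70e9, 0.1                        # assumed modulus [Pa] and width [m]
I = b * t**3 / 12
delta = P * L**3 / (48 * E * I)         # resulting maximum deflection

# "Reverse" model: predict log(t) from the target deflection and the loads.
X = np.column_stack([np.log(delta), np.log(P), np.log(L), np.ones(N)])
y = np.log(t)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ coef
mse = np.mean((y - y_hat)**2)
r2 = 1 - np.sum((y - y_hat)**2) / np.sum((y - y.mean())**2)
```

Because this toy relation is exactly log-linear, the fit is near perfect; a real sandwich model is nonlinear enough that an ANN earns its keep, but the dataset-generation and MSE/R2 evaluation steps look the same.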

Adaptive Discrete Wavelet Transform Based on Block Energy for JPEG2000 Still Images (JPEG2000 정지영상을 위한 블록 에너지 기반 적응적 이산 웨이블릿 변환)

  • Kim, Dae-Won
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.8 no.1
    • /
    • pp.22-31
    • /
    • 2007
  • The algorithm proposed in this paper is based on wavelet decomposition and the energy computation of the composed blocks, so the amount of calculation and complexity is minimized by adaptively replacing the DWT coefficients and managing resources effectively. We now live in a world of multimedia applications for many digital appliances and mobile devices. Among these, digital image compression is a very important technology for digital cameras to store digital images and transmit them to other sites, and JPEG2000 is one of the cutting-edge technologies for compressing still images efficiently. Digital camera technology mainly uses digital image compression so that images can be efficiently saved locally and transferred to other sites without loss. The JPEG2000 standard is applicable for processing digital images to be stored, sent, and received over wired and/or wireless networks. The discrete wavelet transform (DWT) is one of the main differences from previous digital image compression standards such as JPEG, as the DWT is performed on the entire image rather than on many split blocks. Several digital images were tested with this method and restored for comparison with the results of the conventional DWT, which shows that the proposed algorithm obtains better results, without any significant degradation in terms of MSE, PSNR, and the number of zero coefficients, when the energy-based adaptive DWT is applied.
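The block-energy idea can be sketched with a one-level Haar DWT per 8x8 block: blocks whose detail sub-bands carry little energy have those details dropped, and the reconstruction is scored by MSE and PSNR. This is a toy illustration, not the JPEG2000 codepath; the Haar wavelet, the energy measure, and the threshold are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def haar2d(b):
    """One-level 2-D Haar DWT of a block; returns (LL, LH, HL, HH)."""
    lo = (b[:, 0::2] + b[:, 1::2]) / np.sqrt(2)
    hi = (b[:, 0::2] - b[:, 1::2]) / np.sqrt(2)
    return ((lo[0::2] + lo[1::2]) / np.sqrt(2), (lo[0::2] - lo[1::2]) / np.sqrt(2),
            (hi[0::2] + hi[1::2]) / np.sqrt(2), (hi[0::2] - hi[1::2]) / np.sqrt(2))

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d (the transform is orthonormal)."""
    lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
    hi = np.empty_like(lo)
    lo[0::2], lo[1::2] = (ll + lh) / np.sqrt(2), (ll - lh) / np.sqrt(2)
    hi[0::2], hi[1::2] = (hl + hh) / np.sqrt(2), (hl - hh) / np.sqrt(2)
    b = np.empty((lo.shape[0], lo.shape[1] * 2))
    b[:, 0::2], b[:, 1::2] = (lo + hi) / np.sqrt(2), (lo - hi) / np.sqrt(2)
    return b

# Smooth ramp image with one textured (noisy) quadrant
img = np.add.outer(np.arange(64.0), np.arange(64.0))
img[:32, :32] += 50 * rng.standard_normal((32, 32))

out = np.empty_like(img)
for i in range(0, 64, 8):
    for j in range(0, 64, 8):
        ll, lh, hl, hh = haar2d(img[i:i + 8, j:j + 8])
        if np.sum(lh**2 + hl**2 + hh**2) < 100.0:    # low-energy block:
            lh = hl = hh = np.zeros_like(ll)         # drop its details
        out[i:i + 8, j:j + 8] = ihaar2d(ll, lh, hl, hh)

mse = np.mean((img - out)**2)
psnr = 10 * np.log10(255.0**2 / mse)
```

Only the smooth blocks lose their (tiny) detail coefficients, so the MSE stays small and the PSNR high, which is the qualitative behavior the abstract claims for the energy-based adaptive DWT.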


The Comparative Study of NHPP Software Reliability Model Based on Log and Exponential Power Intensity Function (로그 및 지수파우어 강도함수를 이용한 NHPP 소프트웨어 무한고장 신뢰도 모형에 관한 비교연구)

  • Yang, Tae-Jin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.8 no.6
    • /
    • pp.445-452
    • /
    • 2015
  • Software reliability is an important issue in the software development process, and software process improvement helps to finish with a reliable software product. Infinite-failure NHPP software reliability models presented in the literature exhibit either constant, monotonically increasing, or monotonically decreasing failure occurrence rates per fault. This paper proposes reliability models with log and power intensity functions (log linear, log power, and exponential power), which are efficient for software reliability applications. The parameters were estimated using the maximum likelihood estimator and the bisection method, and model selection was based on the mean square error (MSE) and the coefficient of determination ($R^2$) in order to identify the most efficient model. Failure analysis using a real data set was carried out for the proposed log and power intensity functions, and the results were compared across the models. To ensure the reliability of the data, the Laplace trend test was employed. This study confirms that the log-type model is also efficient in terms of reliability (its coefficient of determination is 70% or more) and can be used as an alternative to conventional models in this field. Based on this paper, software developers should consider the growth model, using prior knowledge of the software, to help identify failure modes.
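The MSE/$R^2$ model-selection step can be sketched with the log-linear NHPP, whose mean value function is m(t) = (e^a / b)(e^(bt) - 1). This is a hedged stand-in: the paper fits by maximum likelihood with bisection, whereas here a simple grid search minimizing MSE on synthetic cumulative failure counts illustrates the same selection statistics.

```python
import numpy as np

rng = np.random.default_rng(2)

def m(t, a, b):
    """Mean value function of the log-linear NHPP."""
    return np.exp(a) / b * (np.exp(b * t) - 1.0)

t = np.arange(1.0, 21.0)                            # observation times
k = m(t, 0.5, 0.1) + rng.normal(0.0, 0.5, t.size)   # synthetic failure counts

# Grid search for (a, b) minimizing the MSE of the fitted mean value curve
mse_fit, a_hat, b_hat = min(
    ((np.mean((k - m(t, a, b))**2), a, b)
     for a in np.linspace(0.0, 1.0, 51)
     for b in np.linspace(0.01, 0.30, 59)),
    key=lambda z: z[0])

# Model selection statistics used in the paper: MSE and R^2
r2 = 1.0 - mse_fit * t.size / np.sum((k - k.mean())**2)
```

Comparing competing intensity functions then amounts to fitting each one the same way and preferring the candidate with lower MSE and higher $R^2$.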

High Noise Density Median Filter Method for Denoising Cancer Images Using Image Processing Techniques

  • Priyadharsini.M, Suriya;Sathiaseelan, J.G.R
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.11
    • /
    • pp.308-318
    • /
    • 2022
  • Noise is a serious issue while sending images via electronic communication. Impulse noise, which is created by unsteady voltage, is one of the most common noises in digital communication and is introduced while pictures are collected during the acquisition process. It is possible to obtain accurate diagnostic images by removing these noises without affecting the edges and tiny features. The new average High Noise Density Median Filter (HNDMF) proposed in this paper operates in two stages for each pixel, deciding whether the test pixel is degraded by salt and pepper noise (SPN). In the first stage, a detector identifies corrupted pixels; in the second stage, each corrupted pixel is replaced by a noise-free processed pixel produced by the new average filter for its window. The comparison of known image denoising methods is discussed, and a new decision-based weighted median filter is used to remove impulse noise. Using the Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR), and Structure Similarity Index Method (SSIM) metrics, the paper examines the performance of the Gaussian Filter (GF), Adaptive Median Filter (AMF), and PHDNF. A detailed simulation process is performed to demonstrate the merit of the presented model on the Mini-MIAS dataset. The experimental values obtained show that the HNDMF model reaches better performance with maximum picture quality. Images affected by various amounts of salt and pepper noise, as well as speckle noise, are evaluated and provided as experimental results. According to the quality metrics, the HNDMF method produces a superior result compared to the existing filter methods, accurately detecting salt and pepper noise pixels and replacing their values with mean and median values. The proposed method improves the median filter with a significant change.
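A generic two-stage decision-based median filter can be sketched as follows. This illustrates the detector/replacer structure described above, not the exact HNDMF: stage 1 flags pixels stuck at the extremes 0/255 as salt-and-pepper candidates; stage 2 replaces only the flagged pixels with the median of the noise-free pixels in a 3x3 window. The toy image and 20% noise density are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "clean" image: a smooth ramp whose values stay inside (0, 255)
img = 40.0 + np.add.outer(np.arange(64.0), np.arange(64.0)) * (175.0 / 126.0)
noisy = img.copy()
mask = rng.random(img.shape) < 0.2                 # 20% impulse noise
noisy[mask] = rng.choice([0.0, 255.0], size=int(mask.sum()))

flagged = (noisy == 0.0) | (noisy == 255.0)        # stage 1: detector
out = noisy.copy()
pad = np.pad(noisy, 1, mode='edge')
fpad = np.pad(flagged, 1, mode='edge')
for i, j in zip(*np.where(flagged)):               # stage 2: replacement
    win = pad[i:i + 3, j:j + 3]
    good = win[~fpad[i:i + 3, j:j + 3]]            # noise-free neighbours only
    out[i, j] = np.median(good) if good.size else np.median(win)

mse_noisy = np.mean((img - noisy)**2)
mse_filt = np.mean((img - out)**2)
psnr = 10 * np.log10(255.0**2 / mse_filt)
```

Because unflagged pixels are left untouched, edges and fine detail away from the impulses survive, which is the main advantage decision-based filters have over applying a plain median everywhere.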

Optimizing Clustering and Predictive Modelling for 3-D Road Network Analysis Using Explainable AI

  • Rotsnarani Sethy;Soumya Ranjan Mahanta;Mrutyunjaya Panda
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.9
    • /
    • pp.30-40
    • /
    • 2024
  • Building an accurate 3-D spatial road network model has become an active area of research nowadays, professing to be a new paradigm in developing smart roads and intelligent transportation systems (ITS) that will help public and private road operators achieve better road mobility and eco-routing, so that better road traffic, lower carbon emissions, and road safety may be ensured. Dealing with such large-scale 3-D road network data poses challenges in obtaining accurate elevation information for a road network in order to better estimate CO2 emissions and provide accurate routing for vehicles in an Internet of Vehicles (IoV) scenario. Clustering and regression techniques are found suitable for discovering the missing elevation information for some points in the 3-D spatial road network dataset, which is envisaged as giving the public a better eco-routing experience. Furthermore, Explainable Artificial Intelligence (xAI) has recently drawn the attention of researchers for making models more interpretable, transparent, and comprehensible, thus enabling the design of efficient choice-based models depending upon user requirements. The 3-D road network dataset, comprising the spatial attributes (longitude, latitude, altitude) of North Jutland, Denmark, collected from the publicly available UCI repositories, is preprocessed through feature engineering and scaling to ensure optimal accuracy for the clustering and regression tasks. K-Means clustering and regression using a Support Vector Machine (SVM) with a radial basis function (RBF) kernel are employed for the 3-D road network analysis. Silhouette scores and the number of clusters are chosen for measuring cluster quality, whereas error metrics such as MAE (Mean Absolute Error) and RMSE (Root Mean Square Error) are considered for evaluating the regression method. For better interpretability of the clustering and regression models, SHAP (Shapley Additive Explanations), a powerful xAI technique, is employed in this research.
From extensive experiments, it is observed that SHAP analysis validated the importance of latitude and altitude in predicting longitude, particularly in the four-cluster setup, providing critical insights into model behavior and feature contributions, with an accuracy of 97.22% and strong performance metrics across all classes, having an MAE of 0.0346 and an MSE of 0.0018. On the other hand, the ten-cluster setup, while faster in SHAP analysis, presented challenges in interpretability due to increased clustering complexity. Hence, the K-Means clustering with K=4 and SVM hybrid models demonstrated superior performance and interpretability, highlighting the importance of careful cluster selection to balance model complexity and predictive accuracy.
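The clustering stage can be sketched with a numpy-only K-Means on synthetic (longitude, latitude, altitude) points. This is an illustrative stand-in: the paper's pipeline would normally use scikit-learn for K-Means/SVM and the shap package for SHAP, and the four synthetic regions, seeding strategy, and purity check below are all assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(4)

# Four synthetic "road regions" with (lon, lat, alt)-like coordinates
centers = np.array([[9.8, 57.0, 10.0], [10.0, 57.2, 40.0],
                    [10.2, 56.9, 25.0], [9.9, 57.4, 60.0]])
X = np.vstack([c + rng.normal(0.0, [0.02, 0.02, 2.0], (200, 3))
               for c in centers])
X = (X - X.mean(0)) / X.std(0)            # feature scaling, as in the paper

def kmeans(X, k, iters=50):
    # Spread seed points for a reproducible sketch (k-means++ in practice)
    C = X[::len(X) // k][:k].copy()
    for _ in range(iters):
        lab = np.argmin(((X[:, None, :] - C)**2).sum(-1), axis=1)
        C = np.array([X[lab == m].mean(0) if np.any(lab == m) else C[m]
                      for m in range(k)])
    return lab, C

labels, C = kmeans(X, 4)

# Cluster purity against the known generating regions
purity = sum(np.bincount(labels[i * 200:(i + 1) * 200]).max()
             for i in range(4)) / 800.0
```

With K equal to the number of underlying regions, the recovered partition matches the generating structure almost exactly; choosing K too large fragments the regions, which mirrors the interpretability cost the abstract reports for the ten-cluster setup.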

Blind frequency offset estimation method in OFDM systems (OFDM에서 블라인드 주파수 옵셋 추정 방법)

  • Jeon, Hyoung-Goo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.15 no.4
    • /
    • pp.823-832
    • /
    • 2011
  • In this paper, an efficient blind carrier frequency offset (CFO) estimation method for orthogonal frequency division multiplexing (OFDM) systems is proposed. In the proposed method, we obtain two time-separated received OFDM symbols by using both the cyclic prefix and an oversampling technique, and a cost function is defined using the two OFDM symbols. We show that the cost function can be approximately expressed as a cosine function. Using a property of the cosine function, a formula for estimating the CFO is derived. The CFO estimator requires three cost function values calculated independently at three different frequency offset points. The proposed method is very efficient in computational complexity since no search for the minimum cost value is required. The proposed method reduces the amount of FFT computation by 97% compared with the ML method. Unlike conventional methods such as the ML method and the MUSIC method, the accuracy of the proposed method is independent of the search resolution since a closed-form solution exists. Computer simulations show that the performance of the proposed method is superior to those of the MUSIC and ML methods.
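The closed-form idea can be shown in miniature. This is a hedged sketch, not the paper's exact estimator: if a cost function behaves like J(x) = A·cos(w·(x - x0)) + B near its extremum, then three samples J(-d), J(0), J(d) pin down x0 with no search, via tan(w·x0) = tan(w·d/2) · (J(-d) - J(d)) / (J(-d) + J(d) - 2·J(0)).

```python
import numpy as np

def three_point_offset(J_m, J_0, J_p, w, d):
    """Recover x0 of J(x) = A*cos(w*(x - x0)) + B from J(-d), J(0), J(d)."""
    ratio = (J_m - J_p) / (J_m + J_p - 2.0 * J_0)
    return np.arctan(np.tan(w * d / 2.0) * ratio) / w

# Synthetic check with an arbitrary cosine-shaped cost (illustrative values)
A, B, w, x0, d = 2.0, 5.0, 3.0, 0.123, 0.4
J = lambda x: A * np.cos(w * (x - x0)) + B
x0_hat = three_point_offset(J(-d), J(0.0), J(d), w, d)
```

Both A and B cancel in the ratio, so the same formula works whether the three samples straddle a peak or a trough, which is exactly why no grid search over candidate offsets is needed.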

Determining Optimal Aggregation Interval Size for Travel Time Estimation and Forecasting with Statistical Models (통행시간 산정 및 예측을 위한 최적 집계시간간격 결정에 관한 연구)

  • Park, Dong-Joo
    • Journal of Korean Society of Transportation
    • /
    • v.18 no.3
    • /
    • pp.55-76
    • /
    • 2000
  • We propose a general solution methodology for identifying the optimal aggregation interval sizes as a function of the traffic dynamics and the frequency of observations for four cases: i) link travel time estimation, ii) corridor/route travel time estimation, iii) link travel time forecasting, and iv) corridor/route travel time forecasting. We first develop statistical models which define the Mean Square Error (MSE) for the four different cases and interpret the models from a traffic flow perspective. The emphasis is on i) the tradeoff between precision and bias, ii) the difference between estimation and forecasting, and iii) the implication of the correlation between links for corridor/route travel time estimation and forecasting. We then apply the proposed models to real-world travel time data from Houston, Texas, which were collected as part of the Automatic Vehicle Identification (AVI) system of the Houston TranStar system. The best aggregation interval sizes for link travel time estimation and forecasting were different and were a function of the traffic dynamics. For the best aggregation interval sizes for corridor/route travel time estimation and forecasting, the covariance between links had an important effect.
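The precision/bias tradeoff behind the optimal interval can be illustrated with a toy simulation (synthetic numbers, not the Houston AVI data): the true link travel time drifts over an hour, each probe vehicle reports a noisy observation, and the interval mean serves as the estimate. Longer intervals average away observation noise but smear the drift; shorter intervals track the drift but are noisy.

```python
import numpy as np

rng = np.random.default_rng(5)

T = 3600                                             # one hour, in seconds
truth = 300.0 + 60.0 * np.sin(2 * np.pi * np.arange(T) / T)  # drifting truth [s]
obs_t = np.sort(rng.choice(T, 720, replace=False))   # ~1 probe per 5 s
obs = truth[obs_t] + rng.normal(0.0, 30.0, obs_t.size)

def aggregated_mse(width):
    """MSE of the interval-mean estimator for a given aggregation width."""
    errs = []
    for start in range(0, T, width):
        sel = (obs_t >= start) & (obs_t < start + width)
        if sel.any():
            est = obs[sel].mean()
            errs.append(np.mean((truth[start:start + width] - est)**2))
    return float(np.mean(errs))

mse_by_width = {w: aggregated_mse(w) for w in (30, 120, 600, 1800)}
```

The MSE is smallest at an intermediate width: the variance term shrinks like 1/(observations per interval) while the bias term grows with how much the truth moves inside one interval, which is the tradeoff the statistical models above formalize.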


Iterative Reduction of Blocking Artifact in Block Transform-Coded Images Using Wavelet Transform (웨이브렛 변환을 이용한 블록기반 변환 부호화 영상에서의 반복적 블록화 현상 제거)

  • 장익훈;김남철
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.24 no.12B
    • /
    • pp.2369-2381
    • /
    • 1999
  • In this paper, we propose an iterative algorithm for reducing the blocking artifact in block transform-coded images by using a wavelet transform. In the proposed method, an image is considered as a set of one-dimensional horizontal and vertical signals, and a one-dimensional wavelet transform is utilized in which the mother wavelet is the first-order derivative of a Gaussian-like function. The blocking artifact is reduced by removing the blocking component, which causes the variance at the block boundary position in the first-scale wavelet domain to be abnormally higher than those at the other positions, using a minimum mean square error (MMSE) filter in the wavelet domain. This filter minimizes the MSE between the ideal blocking-component-free signal and the restored signal in the neighborhood of block boundaries in the wavelet domain. It also uses the local variance in the wavelet domain for pixel-adaptive processing. The filtering and the projection onto a convex set of the quantization constraint are performed iteratively in alternating fashion. Experimental results show that the proposed method yields not only a PSNR improvement of about 0.56-1.07 dB, but also subjective quality nearly free of the blocking artifact and edge blur.
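The detection premise can be checked with a 1-D sketch (the MMSE filtering step itself is omitted): convolving a row with a first-derivative-of-Gaussian kernel turns the per-block DC offsets of a blocky signal into strong responses near the 8-pixel block boundaries, so the response variance there is abnormally high. The signal model and kernel width below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

x = np.arange(-4.0, 5.0)
g = np.exp(-x**2 / 2.0)
dog = -x * g / g.sum()                    # first derivative of a Gaussian

N = 512
row = np.cumsum(rng.normal(0.0, 1.0, N))  # smooth-ish 1-D "image row"
row += np.repeat(rng.normal(0.0, 4.0, N // 8), 8)   # blocky DC offsets

resp = np.convolve(row, dog, mode='same')
pos = np.arange(N) % 8
near_boundary = (pos == 0) | (pos == 7)   # samples adjacent to a boundary
var_boundary = resp[near_boundary].var()
var_interior = resp[~near_boundary].var()
```

The elevated variance at boundary-adjacent positions is the signature the MMSE filter exploits: it suppresses the wavelet coefficients there toward what the blocking-free signal would produce, leaving genuine edges elsewhere untouched.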
