• Title/Summary/Keyword: Mean Square Error (MSE)

Search results: 295

A Response Surface Model Based on Absorbance Data for the Growth Rates of Salmonella enterica Serovar Typhimurium as a Function of Temperature, NaCl, and pH

  • Park, Shin-Young;Seo, Kyo-Young;Ha, Sang-Do
    • Journal of Microbiology and Biotechnology / v.17 no.4 / pp.644-649 / 2007
  • A response surface model was developed for predicting the growth rates of Salmonella enterica sv. Typhimurium in tryptic soy broth (TSB) medium as a function of the combined effects of temperature, pH, and NaCl. TSB containing six different concentrations of NaCl (0, 2, 4, 6, 8, and 10%) was adjusted to seven initial pH levels (pH 4, 5, 6, 7, 8, 9, and 10) and incubated at 10 or $20^{\circ}C$. For all experimental variables, the primary growth curves were well fitted ($r^2 = 0.900$ to $0.996$) to a Gompertz equation to obtain growth rates. The secondary response surface model for natural-logarithm transformations of the growth rates as a function of the combined effects of temperature, pH, and NaCl was obtained by SAS's general linear analysis. The predicted growth rates of S. Typhimurium generally decreased at basic (9, 10) or acidic (5, 6) pH levels or with increasing NaCl concentration (0-8%). The response surface model was identified as an appropriate secondary model for the growth rates on the basis of the coefficient of determination ($r^2 = 0.960$), mean square error (MSE = 0.022), bias factor ($B_f = 1.023$), and accuracy factor ($A_f = 1.164$). Therefore, the developed secondary model provided reliable predictions of the combined effect of temperature, NaCl, and pH on the growth rates of S. Typhimurium in TSB medium.
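
A minimal sketch of the workflow this abstract describes, fitting a Gompertz primary model and computing the MSE, bias factor ($B_f$), and accuracy factor ($A_f$); the data arrays are hypothetical stand-ins, not the authors' absorbance measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, mu, lam):
    # Modified Gompertz growth model (Zwietering form):
    # A = asymptote, mu = maximum growth rate, lam = lag time
    return A * np.exp(-np.exp(mu * np.e / A * (lam - t) + 1))

# Hypothetical absorbance-derived growth data (time in hours)
t = np.array([0, 4, 8, 12, 16, 20, 24, 28, 32], dtype=float)
y = np.array([0.01, 0.02, 0.08, 0.35, 0.90, 1.40, 1.65, 1.72, 1.75])

(A, mu, lam), _ = curve_fit(gompertz, t, y, p0=[1.8, 0.2, 6.0])
pred = gompertz(t, A, mu, lam)

mse = np.mean((y - pred) ** 2)              # mean square error
ratio = np.log10(pred[1:] / y[1:])          # skip t=0 to avoid near-zero values
Bf = 10 ** ratio.mean()                     # bias factor
Af = 10 ** np.abs(ratio).mean()             # accuracy factor
print(f"mu={mu:.3f}/h  MSE={mse:.4f}  Bf={Bf:.3f}  Af={Af:.3f}")
```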

Blind Frequency Offset Estimation for Radio Resource Saving in OFDM

  • Jeon, Hyoung-Goo;Kim, Kyoung-Soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.10C / pp.1001-1009 / 2009
  • In this paper, an efficient blind frequency offset estimation method for radio resource saving in orthogonal frequency division multiplexing (OFDM) systems is proposed. In the proposed method, we obtain two temporally distinct received OFDM signal blocks by using the cyclic prefix and define a cost function over the two blocks. We show that the cost function can be approximately expressed as a closed-form cosine function. The approximated cosine function can be obtained from three independent cost function values calculated at three different frequency offsets. The frequency offset is then estimated by finding the minimizer of the approximated cosine function, without searching the entire frequency offset range. Unlike conventional methods such as the MUSIC method, the accuracy of the proposed method is independent of the search resolution, since a closed-form solution exists. Computer simulations show that the performance of the proposed method is superior to that of the MUSIC and oversampling methods.
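
A toy illustration of the closed-form idea: if the cost function is (approximately) a shifted cosine in the normalized offset, three probes determine it exactly, and its minimizer follows without any grid search. The probe positions and cost shape below are assumptions for illustration:

```python
import numpy as np

def estimate_offset(cost, probes=(-1/3, 0.0, 1/3)):
    """Fit J(eps) ~ a + c*cos(2*pi*eps) + d*sin(2*pi*eps) through three
    probes, then return the eps that minimizes the fitted cosine."""
    eps = np.asarray(probes)
    theta = 2 * np.pi * eps
    M = np.column_stack([np.ones(3), np.cos(theta), np.sin(theta)])
    a, c, d = np.linalg.solve(M, np.array([cost(e) for e in eps]))
    th_min = np.arctan2(d, c) + np.pi       # minimum of R*cos(theta - psi)
    e = th_min / (2 * np.pi)
    return (e + 0.5) % 1.0 - 0.5            # wrap into [-0.5, 0.5)

# Hypothetical cosine-shaped cost with its minimum at eps0 = 0.21
eps0 = 0.21
J = lambda e: 1.0 - np.cos(2 * np.pi * (e - eps0))
print(estimate_offset(J))                   # ~0.21, no search over offsets
```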

Numerical evaluation of gamma radiation monitoring

  • Rezaei, Mohsen;Ashoor, Mansour;Sarkhosh, Leila
    • Nuclear Engineering and Technology / v.51 no.3 / pp.807-817 / 2019
  • Airborne Gamma Ray Spectrometry (AGRS), with important applications such as gathering radiation information on the ground surface, geochemical measurement of the abundance of potassium, thorium, and uranium in the outer earth layer, and environmental and nuclear site surveillance, plays a key role in nuclear science and human life. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm, an advanced method for unconstrained nonlinear optimization, combined with Artificial Neural Networks (ANNs), provides a noteworthy opportunity for modern AGRS. In this study, a new AGRS system empowered by ANN-BFGS was proposed and evaluated on available empirical AGRS data. To that end, different architectures of adaptive ANN-BFGS were implemented for a set of published experimental AGRS outputs. The selected training approach, with its low iteration cost and nondiagonal scaling allocation, is a powerful algorithm for AGRS data due to the data's inherent stochastic properties. Experiments were performed with different architectures and trainings; the selected scheme achieved the smallest number of epochs, the minimum Mean Square Error (MSE), and the best performance compared with other optimization strategies and algorithms. The proposed method can be implemented on cost-effective, minimal electronic equipment for real-time processing, which allows it to be used on board a light Unmanned Aerial Vehicle (UAV). The advanced adaptation properties and models of the neural network, the training of the stochastic process, and its implementation on a DSP yield an affordable, reliable, and low-cost AGRS design. The main outcome of the study shows that this method increases the quality of the curvature information of AGRS data while the cost of the algorithm is reduced at each iteration, so the proposed ANN-BFGS is a trustworthy and appropriate model for gamma-ray data reconstruction and analysis based on advanced artificial intelligence systems.
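
A minimal sketch of the ANN-BFGS idea under stated assumptions: a tiny one-hidden-layer network whose flattened weights are optimized by SciPy's BFGS routine against an MSE objective. The channel-to-target mapping and all data here are synthetic stand-ins, not the paper's AGRS data or architecture:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic stand-in for AGRS data: 8 detector channels -> one target value
X = rng.normal(size=(200, 8))
y = np.sin(X @ rng.normal(size=8)) + 0.05 * rng.normal(size=200)

n_in, n_hid = 8, 10
shapes = [(n_in, n_hid), (n_hid,), (n_hid, 1), (1,)]
sizes = [int(np.prod(s)) for s in shapes]

def unpack(w):
    parts, i = [], 0
    for s, n in zip(shapes, sizes):
        parts.append(w[i:i + n].reshape(s))
        i += n
    return parts

def mse_loss(w):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)            # one hidden tanh layer
    pred = (h @ W2 + b2).ravel()
    return np.mean((pred - y) ** 2)     # the MSE objective BFGS minimizes

w0 = rng.normal(scale=0.1, size=sum(sizes))
res = minimize(mse_loss, w0, method="BFGS", options={"maxiter": 300})
print(f"final MSE = {res.fun:.4f}")
```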

A Novel RGB Image Steganography Using Simulated Annealing and LCG via LSB

  • Bawaneh, Mohammed J.;Al-Shalabi, Emad Fawzi;Al-Hazaimeh, Obaida M.
    • International Journal of Computer Science & Network Security / v.21 no.1 / pp.143-151 / 2021
  • The enormous prevalence of transferring official confidential digital documents via the Internet shows the urgent need to deliver confidential messages to the recipient without letting any unauthorized person know the contents of the secret messages or detect their existence. Several steganography techniques, such as Least Significant Bit (LSB) substitution, Secure Cover Selection (SCS), the Discrete Cosine Transform (DCT), and Palette Based (PB) methods, have been applied to prevent an intruder from analyzing and recovering the secret transferred message. The steganography methods used should withstand the challenges of steganalysis techniques in terms of analysis and detection. This paper presents a novel and robust framework for color image steganography that combines a Linear Congruential Generator (LCG), simulated annealing (SA), Caesar cryptography, and the LSB substitution method in one system, in order to resist steganalysis and deliver data securely to its destination. SA, with the support of the LCG, finds an optimal minimum-sniffing path inside a cover color (RGB) image; the confidential message is then encrypted and embedded within the RGB image path as a host medium using the Caesar and LSB procedures. The embedding and extraction of the secret message require common knowledge between sender and receiver; this knowledge is represented by the SA initialization parameters, the LCG seed, the Caesar key agreement, and the secret message length. A steganalysis intruder will not understand or detect the secret message inside the host image without the correct knowledge of the manipulation process. The constructed system satisfies the main requirements of image steganography in terms of robustness against confidential message extraction, high-quality visual appearance, low mean square error (MSE), and high peak signal-to-noise ratio (PSNR).
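
A condensed sketch of the embedding pipeline under stated assumptions: a byte-wise Caesar shift, an LCG-derived pixel visiting order, and LSB substitution on a grayscale stand-in image (the paper's SA path optimization and RGB handling are omitted here):

```python
import numpy as np

def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator driving the pseudo-random path."""
    out, x = [], seed
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x)
    return np.array(out)

def embed(img, message, seed=42, key=3):
    """Caesar-shift the message bytes, then hide the bits in the LSBs of
    pixels visited in LCG order (illustrative sketch only)."""
    flat = img.flatten()
    shifted = bytes((b + key) % 256 for b in message.encode())
    bits = np.unpackbits(np.frombuffer(shifted, dtype=np.uint8))
    path = np.argsort(lcg(seed, flat.size))[:bits.size]  # unique pixel order
    flat[path] = (flat[path] & 0xFE) | bits
    stego = flat.reshape(img.shape)
    mse = np.mean((img.astype(float) - stego.astype(float)) ** 2)
    psnr = 10 * np.log10(255 ** 2 / mse) if mse else float("inf")
    return stego, mse, psnr

cover = np.random.default_rng(1).integers(0, 256, (64, 64), dtype=np.uint8)
stego, mse, psnr = embed(cover, "secret")
print(f"MSE={mse:.5f}  PSNR={psnr:.1f} dB")
```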

Comparative Analysis on the Performance of NHPP Software Reliability Model with Exponential Distribution Characteristics

  • Park, Seung-Kyu
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.4 / pp.641-648 / 2022
  • In this study, the performance of NHPP software reliability models with exponential-type distribution characteristics (Exponential Basic, Inverse Exponential, Lindley, and Rayleigh) was comparatively analyzed, and based on this, the optimal reliability model was presented. To analyze the software failure phenomenon, failure time data collected during system operation were used, and parameter estimation was performed by applying the maximum likelihood estimation (MLE) method. Through various comparative analyses (mean square error analysis, predictive power of the mean value function against true values, intensity function evaluation, and reliability evaluation with mission time applied), the Lindley model was found to be an efficient model with the best performance. Through this study, the reliability performance of distributions with exponential-type characteristics, for which no prior research cases exist, was newly identified, providing basic design data that software developers can use in the initial stage.
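
As a concrete instance of MLE for an exponential-type NHPP model, here is a sketch fitting the Goel-Okumoto model, whose mean value function is $m(t) = a(1 - e^{-bt})$, by root-finding on the profile score equation and scoring the fit by MSE against observed cumulative failure counts. The failure times are hypothetical:

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical failure times (hours), observed up to time T
times = np.array([5., 12., 22., 35., 51., 70., 94., 123., 160., 210.])
T, n = 250.0, len(times)

# Goel-Okumoto NHPP: m(t) = a*(1 - exp(-b*t)); profile a(b) = n/(1 - exp(-b*T))
def score(b):
    return n / b - n * T * np.exp(-b * T) / (1 - np.exp(-b * T)) - times.sum()

b = brentq(score, 1e-6, 1.0)     # root of the score equation (bisection-style)
a = n / (1 - np.exp(-b * T))

m = lambda t: a * (1 - np.exp(-b * t))
mse = np.mean((m(times) - np.arange(1, n + 1)) ** 2)  # fit vs. observed counts
print(f"a={a:.2f}  b={b:.5f}  MSE={mse:.4f}")
```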

Real-time prediction on the slurry concentration of cutter suction dredgers using an ensemble learning algorithm

  • Han, Shuai;Li, Mingchao;Li, Heng;Tian, Huijing;Qin, Liang;Li, Jinfeng
    • International conference on construction engineering and project management / 2020.12a / pp.463-481 / 2020
  • Cutter suction dredgers (CSDs) are widely used in various dredging constructions such as channel excavation, wharf construction, and reef construction. During CSD construction, the main operation is to control the swing speed of the cutter to keep the slurry concentration in a proper range. However, the slurry concentration cannot be monitored in real time, i.e., there is a "time-lag effect" in the log of slurry concentration, making it difficult for operators to make optimal control decisions. To address this issue, a scheme that uses real-time monitored indicators to predict the current slurry concentration is proposed in this research. The characteristics of the CSD monitoring data are first studied, and a set of preprocessing methods is presented. We then put forward the concept of an "index class" to select the important indices. Finally, an ensemble learning algorithm is set up to fit the relationship between the slurry concentration and the indices of the index classes. In the experiment, log data over seven days of a practical dredging construction were collected. For comparison, the Deep Neural Network (DNN), Long Short-Term Memory (LSTM), Support Vector Machine (SVM), Random Forest (RF), Gradient Boosting Decision Tree (GBDT), and Bayesian Ridge algorithms were tried. The results show that our method has the best performance, with an R2 of 0.886 and a mean square error (MSE) of 5.538. This research provides an effective way to predict the slurry concentration of CSDs in real time and can help improve the stationarity and production efficiency of dredging construction.
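
The paper's exact pipeline is not reproduced here, but a minimal sketch of an ensemble regressor evaluated with the same metrics (MSE and R2) might look as follows; the monitoring indices and targets are synthetic placeholders:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

# Placeholder indices (e.g., swing speed, cutter depth, flow rate, ...)
X = rng.normal(size=(2000, 6))
y = 20 + 3 * X[:, 0] - 2 * X[:, 1] * X[:, 2] + rng.normal(scale=1.5, size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, max_depth=3).fit(X_tr, y_tr)

pred = model.predict(X_te)
print(f"MSE = {mean_squared_error(y_te, pred):.3f}")
print(f"R2  = {r2_score(y_te, pred):.3f}")
```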

Study on the Improvement of Lung CT Image Quality using 2D Deep Learning Network according to Various Noise Types

  • Min-Gwan Lee;Chanrok Park
    • Journal of the Korean Society of Radiology / v.18 no.2 / pp.93-99 / 2024
  • Digital medical imaging, especially computed tomography (CT), must be considered in terms of the noise distribution caused by converting X-ray photons into digital imaging signals. Recently, denoising techniques based on deep learning architectures have been increasingly used in the medical imaging field. Here, we evaluated the noise reduction effect for various noise types based on the U-net deep learning model in lung CT images. The input data for deep learning were generated by applying Gaussian noise, Poisson noise, salt-and-pepper noise, and speckle noise to the ground truth (GT) image. In particular, two types of Gaussian noise input data were applied, with standard deviation values of 30 and 50. The applied hyper-parameters were the Adam optimizer, 100 epochs, and a learning rate of 0.0001. To analyze the quantitative values, the mean square error (MSE), peak signal-to-noise ratio (PSNR), and coefficient of variation (COV) were calculated. According to the results, the U-net model was confirmed to be effective for noise reduction under all conditions set in this study. In particular, it showed the best performance for Gaussian noise.
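
A short sketch of how the four noisy input types and the quantitative metrics can be produced; the image is a random stand-in for a CT slice, and the U-net itself is omitted:

```python
import numpy as np

def add_noise(gt, kind, rng, sigma=30):
    """Generate a noisy network input from a ground-truth image."""
    img = gt.astype(float)
    if kind == "gaussian":
        out = img + rng.normal(0, sigma, img.shape)
    elif kind == "poisson":
        out = rng.poisson(img).astype(float)
    elif kind == "salt_pepper":
        out = img.copy()
        mask = rng.random(img.shape)
        out[mask < 0.02], out[mask > 0.98] = 0, 255
    elif kind == "speckle":
        out = img * (1 + rng.normal(0, 0.1, img.shape))
    return np.clip(out, 0, 255)

def mse_psnr(gt, noisy):
    mse = np.mean((gt.astype(float) - noisy) ** 2)
    return mse, 10 * np.log10(255 ** 2 / mse)

rng = np.random.default_rng(0)
gt = rng.integers(0, 256, (128, 128)).astype(np.uint8)  # stand-in CT slice
for kind in ["gaussian", "poisson", "salt_pepper", "speckle"]:
    mse, psnr = mse_psnr(gt, add_noise(gt, kind, rng))
    print(f"{kind:12s} MSE={mse:8.1f}  PSNR={psnr:5.2f} dB")
```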

Machine Learning-based Rapid Seismic Performance Evaluation for Seismically-deficient Reinforced Concrete Frame

  • Kang, TaeWook;Kang, Jaedo;Oh, Keunyeong;Shin, Jiuk
    • Journal of the Earthquake Engineering Society of Korea / v.28 no.4 / pp.193-203 / 2024
  • Existing reinforced concrete (RC) building frames constructed before seismic design was applied have seismically deficient structural details, and buildings with such details show brittle behavior, failing early due to low shear performance. Various reinforcement systems, such as fiber-reinforced polymer (FRP) jacketing systems, are being studied to reinforce seismically deficient RC frames. Because of the step-by-step modeling and analysis process, existing seismic performance assessment and reinforcement design of buildings consume an enormous amount of workforce and time. To overcome these shortcomings, various machine learning (ML) models were developed using input and output datasets for seismic loads and reinforcement details built through the finite element (FE) model developed in previous studies. To assess the performance of the seismic performance prediction models developed in this study, the mean squared error (MSE), R-square (R2), and residuals of each model were compared. Overall, the applied ML models were found to rapidly and effectively predict the seismic performance of buildings according to changes in load and reinforcement details without overfitting. In addition, the best-fit model for each seismic performance class was selected by analyzing the performance of the ML models by class.
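
A compact sketch of the comparison step, evaluating several candidate regressors by MSE, R2, and residuals on a synthetic dataset (the study's FE-derived features and targets are replaced by placeholders):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(3)

# Placeholder features (seismic load, reinforcement details) -> response
X = rng.normal(size=(500, 5))
y = 1.2 * X[:, 0] - 0.8 * X[:, 1] ** 2 + rng.normal(scale=0.3, size=500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {"Ridge": Ridge(), "SVR": SVR(),
          "RF": RandomForestRegressor(random_state=0)}
for name, mdl in models.items():
    pred = mdl.fit(X_tr, y_tr).predict(X_te)
    resid = y_te - pred
    print(f"{name:6s} MSE={mean_squared_error(y_te, pred):.3f}  "
          f"R2={r2_score(y_te, pred):.3f}  max|resid|={np.abs(resid).max():.3f}")
```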

Improvement of LMS Algorithm Convergence Speed with Updating Adaptive Weight in Data-Recycling Scheme

  • Kim, Gwang-Jun;Jang, Hyok;Suk, Kyung-Hyu;Na, Sang-Dong
    • Journal of the Korea Institute of Information Security & Cryptology / v.9 no.4 / pp.11-22 / 1999
  • Least-mean-square (LMS) adaptive filters have proven extremely useful in a number of signal processing tasks. However, LMS adaptive filters suffer from a slow rate of convergence, for a given steady-state mean square error, compared with recursive least squares adaptive filters. In this paper, an efficient signal interference control technique is introduced to improve the convergence speed of the LMS algorithm: the tap-weight vectors are updated by reusing data that would otherwise be discarded by the adaptive transversal filter, in a scheme with data-recycling buffers. Computer simulations show that the proposed algorithm converges faster and reaches a lower MSE than the conventional LMS as the step-size parameter $\mu$ increases, as seen in the experimentally computed learning curves. We also find that the convergence speed of the proposed algorithm increases by a factor of (B+1), where B is the number of data-recycling buffers, without added computational complexity. In experiments under the same conditions as the LMS algorithm, the adaptive transversal filter with the proposed data-recycling buffer algorithm efficiently rejected channel intersymbol interference (ISI) and increased the convergence speed while avoiding a heavier computational burden.
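
A minimal sketch of the data-recycling idea, assuming a simple channel-identification setup: each input/desired pair updates the tap weights (B+1) times instead of once. This illustrates the concept, not the authors' exact scheme:

```python
import numpy as np

def lms_recycle(x, d, n_taps=8, mu=0.01, B=2):
    """LMS adaptive transversal filter that reuses each data pair (B+1)
    times per sample via a conceptual data-recycling buffer."""
    w = np.zeros(n_taps)
    mse = []
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # tap inputs [x[n], ..., x[n-7]]
        for _ in range(B + 1):              # recycle the same pair B extra times
            e = d[n] - w @ u
            w += mu * e * u                 # standard LMS weight update
        mse.append(e ** 2)
    return w, np.array(mse)

rng = np.random.default_rng(0)
h = np.array([1.0, 0.5, -0.3, 0.1, 0.0, 0.0, 0.0, 0.0])  # unknown channel
x = rng.normal(size=5000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.normal(size=len(x))

w, mse = lms_recycle(x, d, B=2)
print("identified channel taps:", np.round(w, 3))
```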

The Comparative Study of NHPP Software Reliability Model Based on Exponential and Inverse Exponential Distribution

  • Kim, Hee-Cheul;Shin, Hyun-Cheul
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.9 no.2 / pp.133-140 / 2016
  • Software reliability in the software development process is an important issue, and software process improvement helps in finishing with a reliable software product. Infinite-failure NHPP software reliability models presented in the literature exhibit either constant, monotonically increasing, or monotonically decreasing failure occurrence rates per fault. In this paper, we proposed reliability models based on the exponential and inverse exponential distributions, which are efficient for software reliability applications. Parameters were estimated using the maximum likelihood estimator with the bisection method, and model selection was based on the mean square error (MSE) and the coefficient of determination ($R^2$). Failure analysis using a real data set was carried out to compare the properties of the exponential and inverse exponential distributions. To assure the reliability of the data, the Laplace trend test was employed. This study confirmed that the inverse exponential distribution model is also efficient in terms of reliability (its coefficient of determination is 80% or more) and can be used as an alternative to conventional models. From this paper, software developers should consider the life distribution, informed by prior knowledge of the software, to help identify failure modes.
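
Since the abstract leans on the Laplace trend test to justify the data, a small sketch of that test may help; the failure times are hypothetical, and the factor $U$ is negative under reliability growth:

```python
import numpy as np

def laplace_trend(times, T):
    """Laplace trend factor for event times on (0, T]: U < 0 suggests
    reliability growth (decreasing intensity), U > 0 deterioration."""
    n = len(times)
    return (np.mean(times) - T / 2) / (T * np.sqrt(1.0 / (12 * n)))

# Hypothetical failure times with lengthening inter-failure gaps
times = np.array([5., 12., 22., 35., 51., 70., 94., 123., 160., 210.])
U = laplace_trend(times, T=250.0)
print(f"Laplace factor U = {U:.2f}")   # negative here -> reliability growth
```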