• Title/Summary/Keyword: Cross-feature Analysis

Model-based localization and mass-estimation methodology of metallic loose parts

  • Moon, Seongin;Han, Seongjin;Kang, To;Han, Soonwoo;Kim, Munsung
    • Nuclear Engineering and Technology / v.52 no.4 / pp.846-855 / 2020
  • A loose part monitoring system is used to detect unexpected loose parts in the reactor coolant system of a nuclear power plant. A new methodology for the localization and mass estimation of loose parts is still needed, owing to the high estimation error of conventional methods. In addition, model-based diagnostics has recently emphasized the importance of a model that describes the behavior of a mechanical system or component. The purpose of this study is to propose a new localization and mass-estimation method based on finite element analysis (FEA) and an optimization technique. First, an FEA model that simulates the propagation behavior of the bending wave generated by a metal-sphere impact was validated by performing an impact test, with a corresponding FEA and optimization, on a downsized steam-generator structure. Second, a novel methodology based on FEA and the optimization technique was proposed to estimate the impact location and mass of a loose part simultaneously. The usefulness of the methodology was then validated through a series of FEAs and blind tests. A new feature vector, the cross-correlation function, was also proposed to predict the impact location and mass of a loose part, and its usefulness was validated. The proposed methodology is expected to be useful in model-based diagnostics for estimating impact parameters such as the mass, velocity, and impact location of a loose part. In addition, the FEA-based model can be used to optimize sensor positions to improve the quality of the data collected at nuclear power plant sites.
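
As a rough illustration of the cross-correlation feature named in this abstract, the following Python sketch computes the cross-correlation between two synthetic sensor traces and reads off the peak lag as a time-delay estimate. The sampling rate, the signal shapes, and the names `signal_a`, `signal_b`, and `fs` are assumptions for illustration; the paper's FEA signals and optimization stage are not reproduced.

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

# Hypothetical sampling rate and two synthetic traces standing in for the
# accelerometer signals recorded on the steam-generator structure.
fs = 100_000                      # samples per second (assumed)
t = np.arange(0, 0.01, 1 / fs)
signal_a = np.sin(2 * np.pi * 5_000 * t) * np.exp(-t * 800)
signal_b = np.roll(signal_a, 25) + 0.05 * np.random.randn(t.size)

# Cross-correlation function used as a feature vector: its shape encodes both
# the arrival-time difference and the amplitude relation between sensors.
xcorr = correlate(signal_b, signal_a, mode="full")
lags = correlation_lags(signal_b.size, signal_a.size, mode="full")

# The lag at the correlation peak gives the inter-sensor time-delay estimate,
# which relates to the distance from the impact point to each sensor.
peak_lag = lags[np.argmax(xcorr)]
time_delay = peak_lag / fs
print(f"estimated inter-sensor delay: {time_delay * 1e6:.1f} microseconds")

# Normalized correlation values can serve as the feature vector fed to a
# regression or optimization stage for location/mass estimation.
feature_vector = xcorr / np.max(np.abs(xcorr))
```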

Analysis of Magnetic Resonance Characteristics and Images of Korean Red Ginseng (홍삼의 자기공명 특성과 영상 분석)

  • 김성민;임종국
    • Journal of Biosystems Engineering / v.28 no.3 / pp.253-260 / 2003
  • In this study, the feasibility of magnetic resonance techniques for nondestructive internal quality evaluation of Korean red ginseng was examined. Relaxation time constants were measured for various grades of red ginseng. A solid-state magnetic resonance imaging technique was applied to image dried red ginseng, which has a low moisture content (about 13%). A 7-tesla magnetic resonance imaging system operating at a proton resonant frequency of 300 MHz was used to acquire MR images of dried Korean red ginseng. A comparison of cross-cut digital images and magnetic resonance images of heaven-grade ginseng, good-grade ginseng with an internal cavity, and good-grade ginseng with an internal white part suggested the feasibility of evaluating the internal quality of Korean red ginseng with MRI techniques. A good-grade red ginseng containing abnormal tissues such as cavities or white parts was identified from the signal intensity of the MR image, which reflects the magnetic resonance properties of the proton nucleus. Analysis of a one-dimensional profile of the acquired MR image of Korean red ginseng showed that normal and abnormal tissues could be easily discriminated. MR techniques thus offer an effective way to detect internal defects of red ginseng.
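
The one-dimensional profile analysis mentioned above can be illustrated with a short sketch: a single row of a placeholder MR slice is extracted and thresholded to flag low-signal regions. The array `mr_slice` and the threshold rule are hypothetical; the actual 7 T acquisition and grading criteria are not modeled.

```python
import numpy as np

# Stand-in for one 2D slice of the MR image (intensity values); in practice
# this would be loaded from the scanner's output files.
mr_slice = np.random.rand(128, 128)

# Take a one-dimensional profile across the middle row of the slice, as in
# the profile analysis described in the abstract.
profile = mr_slice[mr_slice.shape[0] // 2, :]

# Hypothetical threshold: low-signal points along the profile are flagged as
# candidate internal defects (cavities or white parts alter the signal).
threshold = profile.mean() - profile.std()
defect_mask = profile < threshold
print(f"{defect_mask.sum()} of {profile.size} profile points flagged as abnormal")
```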

Technical Investigation into the In-situ Electron Backscatter Diffraction Analysis for the Recrystallization Study on Extra Low Carbon Steels

  • Kim, Ju-Heon;Kim, Dong-Ik;Kim, Jong Seok;Choi, Shi-Hoon;Yi, Kyung-Woo;Oh, Kyu Hwan
    • Applied Microscopy / v.43 no.2 / pp.88-97 / 2013
  • A technical investigation was carried out to identify the problems arising during in-situ heating electron backscatter diffraction (EBSD) analysis inside a scanning electron microscope (SEM). EBSD patterns were successfully acquired up to 830°C without degradation of pattern quality in the steels. Several technical problems, such as image drift and surface microstructure pinning, occurred during the in-situ experiments. The image-drift problem was successfully prevented by operating in constant-current supply mode. It was revealed that the surface-pinning problem resulted from the formation of TiO2 oxide particles during heating inside the SEM chamber. The surface-pinning phenomenon was considerably reduced by an additional platinum and carbon multi-layer coating applied before the in-situ heating experiment, and was completely prevented by improving the vacuum level of the SEM chamber through leakage control. Plan-view in-situ observation provides a better understanding of the overall features of recrystallization, whereas cross-sectional in-situ observation provides a clearer understanding of the recrystallization mechanism.

Cross-Domain Text Sentiment Classification Method Based on the CNN-BiLSTM-TE Model

  • Zeng, Yuyang;Zhang, Ruirui;Yang, Liang;Song, Sujuan
    • Journal of Information Processing Systems / v.17 no.4 / pp.818-833 / 2021
  • To address the problems of low precision, insufficient feature extraction, and poor contextual modeling in existing text sentiment analysis methods, a hybrid model, CNN-BiLSTM-TE (convolutional neural network, bidirectional long short-term memory, and topic extraction), was proposed. First, Chinese text data were converted into vectors via transfer learning with Word2Vec. Second, local features were extracted by the CNN. Then, contextual information was extracted by the BiLSTM network and the sentiment polarity was obtained using softmax. Finally, topics were extracted using term frequency-inverse document frequency and K-means. Compared with the CNN, BiLSTM, and gated recurrent unit (GRU) models, the CNN-BiLSTM-TE model's F1-score was higher by 0.0147, 0.006, and 0.0052, respectively; compared with the CNN-LSTM, LSTM-CNN, and BiLSTM-CNN models, its F1-score was higher by 0.0071, 0.0038, and 0.0049, respectively. The experimental results showed that the CNN-BiLSTM-TE model can effectively improve various metrics in application. Finally, scalability was verified on a takeaway (food-delivery) dataset, which demonstrates the model's value in practical applications.
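
A minimal PyTorch sketch of the CNN-plus-BiLSTM portion of the pipeline described above, with a softmax output for sentiment polarity. All layer sizes and the vocabulary size are assumptions rather than the paper's settings, and the Word2Vec initialization and TF-IDF/K-means topic extraction are omitted.

```python
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    """CNN for local n-gram features followed by a BiLSTM for context."""
    def __init__(self, vocab_size=20_000, embed_dim=300, conv_channels=128,
                 lstm_hidden=128, num_classes=2):
        super().__init__()
        # Embedding layer; in the paper this would be initialized from Word2Vec.
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # 1-D convolution over the token dimension extracts local features.
        self.conv = nn.Conv1d(embed_dim, conv_channels, kernel_size=3, padding=1)
        # Bidirectional LSTM captures forward and backward context.
        self.bilstm = nn.LSTM(conv_channels, lstm_hidden,
                              batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * lstm_hidden, num_classes)

    def forward(self, token_ids):
        x = self.embedding(token_ids)          # (batch, seq, embed)
        x = x.transpose(1, 2)                  # (batch, embed, seq) for Conv1d
        x = torch.relu(self.conv(x))
        x = x.transpose(1, 2)                  # back to (batch, seq, channels)
        _, (h_n, _) = self.bilstm(x)
        # Concatenate the final forward and backward hidden states.
        h = torch.cat([h_n[-2], h_n[-1]], dim=1)
        return torch.softmax(self.classifier(h), dim=1)

# Toy usage: a batch of two sequences of 50 token ids.
model = CNNBiLSTM()
dummy = torch.randint(0, 20_000, (2, 50))
print(model(dummy).shape)   # torch.Size([2, 2])
```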

Gaussian process regression model to predict factor of safety of slope stability

  • Arsalan, Mahmoodzadeh;Hamid Reza, Nejati;Nafiseh, Rezaie;Adil Hussein, Mohammed;Hawkar Hashim, Ibrahim;Mokhtar, Mohammadi;Shima, Rashidi
    • Geomechanics and Engineering / v.31 no.5 / pp.453-460 / 2022
  • It is essential for geotechnical engineers to study and predict the stability of slopes, since the collapse of a slope may result in catastrophic events. In the study presented here, a Gaussian process regression (GPR) approach was used to predict the factor of safety (FOS) of slopes. The model uses a total of 327 slope cases from Iran, each with a unique combination of geometric and shear-strength parameters, which were analyzed with the PLAXIS software to determine their FOS. The K-fold (K = 5) cross-validation (CV) technique was used to analyze the accuracy of the models' predictions. In conclusion, the GPR model showed excellent ability in predicting the FOS of slopes, with an R2 value of 0.8355, an RMSE of 0.1372, and a MAPE of 6.6389%. According to the sensitivity analysis, the friction angle and the unit weight are, in descending order, the most effective parameters for determining slope stability.
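
A minimal scikit-learn sketch of the workflow the abstract describes: a Gaussian process regressor scored with 5-fold cross-validation. The feature columns and the randomly generated data are placeholders for the 327 PLAXIS-analyzed slope cases, and the RBF kernel choice is an assumption.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from sklearn.model_selection import KFold, cross_val_score

# Placeholder feature matrix: e.g. slope height, slope angle, cohesion,
# friction angle, unit weight. The real study uses 327 PLAXIS-derived cases.
rng = np.random.default_rng(0)
X = rng.uniform(size=(327, 5))
y = 1.0 + 0.8 * X[:, 3] - 0.5 * X[:, 4] + 0.05 * rng.normal(size=327)  # toy FOS

# GPR with an RBF kernel; kernel choice and hyperparameters are assumptions.
gpr = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)

# 5-fold cross-validation (K = 5), mirroring the validation scheme above.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
r2_scores = cross_val_score(gpr, X, y, cv=cv, scoring="r2")
print("mean R2 over 5 folds:", r2_scores.mean())
```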

Performance Comparison for Radar Target Classification of Monostatic RCS and Bistatic RCS (모노스태틱 RCS와 바이스태틱 RCS의 표적 구분 성능 분석)

  • Lee, Sung-Jun;Choi, In-Sik
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.21 no.12 / pp.1460-1466 / 2010
  • In this paper, we analyzed the performance of radar target classification using the monostatic and bistatic radar cross section (RCS) for four different wire targets. The short-time Fourier transform (STFT) and continuous wavelet transform (CWT) were used to extract features from the monostatic and bistatic RCS of each target, and a multi-layered perceptron (MLP) neural network was used as the classifier. The results show that the CWT yields better performance than the STFT for both the monostatic and bistatic RCS. When the STFT was used, the performance of the bistatic RCS was slightly better than that of the monostatic RCS; however, when the CWT was used, the performance of the monostatic RCS was slightly better than that of the bistatic RCS. Consequently, the bistatic RCS is shown to be a good candidate for radar target classification in combination with the monostatic RCS.
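
A minimal sketch of the feature-extraction and classification stages named above, using SciPy's STFT, PyWavelets' CWT, and scikit-learn's MLPClassifier on synthetic stand-in RCS time series. Signal parameters, wavelet scales, and the network size are assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import stft
import pywt
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

def make_target_signal(freq, n=256, fs=1_000.0):
    """Synthetic stand-in for the RCS time series of one target."""
    t = np.arange(n) / fs
    return np.cos(2 * np.pi * freq * t) + 0.1 * rng.normal(size=n)

def stft_features(sig, fs=1_000.0):
    _, _, Z = stft(sig, fs=fs, nperseg=64)
    return np.abs(Z).ravel()

def cwt_features(sig):
    coeffs, _ = pywt.cwt(sig, scales=np.arange(1, 31), wavelet="morl")
    return np.abs(coeffs).ravel()

# Two target classes with slightly different scattering signatures (toy data).
X, y = [], []
for label, freq in enumerate([60.0, 90.0]):
    for _ in range(40):
        sig = make_target_signal(freq)
        X.append(cwt_features(sig))      # swap in stft_features(sig) to compare
        y.append(label)
X, y = np.array(X), np.array(y)

# Multi-layered perceptron classifier, as used in the study.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```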

Hybrid LSTM and Deep Belief Networks with Attention Mechanism for Accurate Heart Attack Data Analytics

  • Mubarak Albathan
    • International Journal of Computer Science & Network Security / v.24 no.10 / pp.1-16 / 2024
  • Owing to its complexity and its high diagnosis and treatment costs, heart attack (HA) is the leading cause of death globally. The widespread impact and high morbidity and mortality rates of heart failure make accurate and fast prognosis and diagnosis crucial. Because of the complexity of medical data, early and accurate prediction of HA is difficult, and healthcare providers must evaluate data quickly and accurately to intervene. This novel hybrid approach predicts HA using Long Short-Term Memory (LSTM) networks, Deep Belief Networks (DBNs) with an attention mechanism, and robust data mining to fill this essential gap. HA is predicted using Kaggle, PhysioNet, and UCI datasets. Wearable sensor data, ECG signals, and demographic and clinical data provide a solid analytical base. To maintain consistency, ECG signals are normalized and segmented after thorough cleaning to remove missing values and noise. Feature extraction employs approaches such as Principal Component Analysis (PCA) and autoencoders to select time-domain (MNN, SDNN, RMSSD, pNN50) and frequency-domain (PSD in the VLF, LF, and HF bands) characteristics. The hybrid model architecture uses LSTM networks for sequence learning and DBNs for feature representation and selection to create a robust and comprehensive prediction model. Accuracy, precision, recall, F1-score, and ROC-AUC are measured after training with a cross-entropy loss and SGD optimization. The LSTM-DBN model outperforms existing predictive methods in accuracy, sensitivity, and specificity. The findings show that combining several data sources with powerful algorithms can improve heart attack prediction. The proposed architecture performed well on multiple datasets, with an accuracy of 96.00%, a sensitivity of 98%, an AUC of 0.98, and an F1-score of 0.97. This high performance demonstrates the system's dependability. Moreover, the proposed approach outperforms state-of-the-art systems.
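
The time-domain HRV features listed in the abstract (MNN, SDNN, RMSSD, pNN50) have standard definitions; the sketch below computes them from a hypothetical list of R-R intervals in milliseconds. The LSTM-DBN model itself and the frequency-domain PSD features are not reproduced.

```python
import numpy as np

def hrv_time_domain_features(rr_ms):
    """Compute MNN, SDNN, RMSSD, and pNN50 from R-R intervals (milliseconds)."""
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)                      # successive R-R differences
    mnn = rr.mean()                         # mean of R-R intervals
    sdnn = rr.std(ddof=1)                   # standard deviation of R-R intervals
    rmssd = np.sqrt(np.mean(diff ** 2))     # root mean square of successive diffs
    pnn50 = 100.0 * np.mean(np.abs(diff) > 50.0)  # % of diffs exceeding 50 ms
    return {"MNN": mnn, "SDNN": sdnn, "RMSSD": rmssd, "pNN50": pnn50}

# Toy R-R interval series standing in for intervals derived from a cleaned,
# segmented ECG signal.
rr_intervals = [812, 790, 845, 830, 870, 905, 860, 835, 800, 815]
print(hrv_time_domain_features(rr_intervals))
```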

Fast and Accurate Rigid Registration of 3D CT Images by Combining Feature and Intensity

  • June, Naw Chit Too;Cui, Xuenan;Li, Shengzhe;Kim, Hak-Il;Kwack, Kyu-Sung
    • Journal of Computing Science and Engineering / v.6 no.1 / pp.1-11 / 2012
  • Computed tomography (CT) images are widely used for analyzing the temporal evolution of a disease or monitoring its progression. Follow-up examinations of CT scans of the same patient require a 3D registration technique. In this paper, an automatic and robust method is proposed for the rigid registration of 3D CT images. The proposed method involves two steps. First, the two CT volumes are aligned based on their principal axes; then, the alignment from the previous step is refined by optimizing a voxel-intensity similarity score. Normalized cross-correlation (NCC) is used as the similarity metric, and a downhill simplex method is employed to find the optimal score. The performance of the algorithm is evaluated on phantom images and synthetic knee CT images. By extracting the initial transformation parameters from the principal axes of the binary volumes, the search space for the parameters in the optimization step is reduced. Thus, the overall registration time is decreased without degrading accuracy. The preliminary experimental results of the study demonstrate that the proposed method can be applied to rigid registration problems with real patient images.
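
A minimal sketch of the intensity-refinement step described above: normalized cross-correlation as the similarity metric and a downhill simplex (Nelder-Mead) search, here reduced to a pure 3D translation for brevity. The volumes are synthetic stand-ins, and the principal-axis initialization and rotation handling are omitted.

```python
import numpy as np
from scipy.ndimage import shift
from scipy.optimize import minimize

def ncc(a, b):
    """Normalized cross-correlation between two volumes of equal shape."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float(np.mean(a * b))

# Synthetic fixed volume and a moving volume offset by a known translation.
rng = np.random.default_rng(2)
fixed = rng.normal(size=(32, 32, 32))
true_offset = np.array([2.0, -3.0, 1.5])
moving = shift(fixed, true_offset, order=1, mode="nearest")

def cost(translation):
    # Apply the candidate translation to the moving volume and compare.
    resampled = shift(moving, -np.asarray(translation), order=1, mode="nearest")
    return -ncc(fixed, resampled)           # minimize negative NCC

# Downhill simplex (Nelder-Mead) refinement, starting from a rough initial
# guess such as one obtained from principal-axis alignment.
result = minimize(cost, x0=np.zeros(3), method="Nelder-Mead")
print("recovered translation:", np.round(result.x, 2))
```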

The Object Image Detection Method using statistical properties (통계적 특성에 의한 객체 영상 검출방안)

  • Kim, Ji-hong
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.7 / pp.956-962 / 2018
  • As a study of object feature detection from images, we describe methods for identifying tree species in a forest using pictures taken from a drone. Commonly used methods for extracting object features include the GLCM (Gray Level Co-occurrence Matrix) and Gabor filters. In this research, we proposed an object extraction method that uses the statistical properties of the trees, because their leaves are very similar. After extracting sample images from the original images, we detect the objects using cross-correlation between the original image and the sample images. Through this experiment, we found that the mean value and standard deviation of the sample images are very important factors for identifying the object. Analysis of the color components of the RGB and HSV models is also used to identify the object.
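
A minimal OpenCV sketch of the cross-correlation step described above: a sampled patch is matched against the original image with normalized correlation, and the sample's mean and standard deviation are reported as the statistics the abstract highlights. The file names and the detection threshold are hypothetical.

```python
import cv2
import numpy as np

# Hypothetical file names for the drone image and a sampled tree-crown patch.
image = cv2.imread("forest_scene.png", cv2.IMREAD_GRAYSCALE)
sample = cv2.imread("tree_sample.png", cv2.IMREAD_GRAYSCALE)

# Statistical properties of the sample image used to characterize the species.
sample_mean, sample_std = float(sample.mean()), float(sample.std())
print(f"sample mean={sample_mean:.1f}, std={sample_std:.1f}")

# Normalized cross-correlation between the original image and the sample image.
response = cv2.matchTemplate(image, sample, cv2.TM_CCORR_NORMED)

# Keep locations whose correlation exceeds a (hypothetical) threshold.
threshold = 0.9
ys, xs = np.where(response >= threshold)
print(f"{len(xs)} candidate object locations found")
```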

An Ensemble Approach to Detect Fake News Spreaders on Twitter

  • Sarwar, Muhammad Nabeel;UlAmin, Riaz;Jabeen, Sidra
    • International Journal of Computer Science & Network Security / v.22 no.5 / pp.294-302 / 2022
  • Detection of fake news is a complex and challenging task. The generation of fake news is very hard to stop; only measures to control its circulation can help minimize its impact. Humans tend to believe misleading false information. Researchers have started with social media sites to categorize content as real or fake news. False information can mislead individuals or organizations, potentially causing major failures and financial losses. Automatic detection of false information circulating on social media is an emerging research area that has been gaining the attention of both industry and academia since the 2016 US presidential elections. Fake news has severe negative effects on individuals and organizations, and its hostile effects extend to society at large, so timely detection of fake news is important. This research focuses on the detection of fake news spreaders. In this context, six models were developed, trained, and tested on the PAN 2020 dataset. N-gram-based and user-statistics-based models were trained with different hyperparameter values, and an extensive grid search with cross-validation was applied to each machine learning model. For the n-gram-based models, out of the numerous available algorithms, this research focused on those yielding the best results, as assessed from a close reading of state-of-the-art related work in the field. For better accuracy, the authors developed models using Random Forest, Logistic Regression, SVM, and XGBoost, all trained with cross-validated grid-searched hyperparameters. The advantages of this research over previous work are the user-statistics-based model and the ensemble learning model, which were designed to classify Twitter users as fake-news spreaders or not with high reliability. The user-statistics model used 17 features, on the basis of which it categorized a Twitter user as malicious. A new dataset based on the predictions of the machine learning models was constructed, and three combination techniques, simple mean, logistic regression, and random forest, were then applied in an ensemble model. Logistic regression used as the ensemble combiner gave the best training and testing results, achieving an accuracy of 72%.
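
A minimal scikit-learn sketch of the pattern described above: base classifiers tuned with cross-validated grid search and then combined, with logistic regression as the ensemble combiner. The random placeholder features stand in for the n-gram and 17 user-statistics features from PAN 2020, and the hyperparameter grids are illustrative only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Placeholder feature matrix standing in for n-gram / user-statistics features.
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 17))
y = rng.integers(0, 2, size=300)   # 1 = fake-news spreader, 0 = not

# Cross-validated grid search over each base model's hyperparameters.
rf_search = GridSearchCV(RandomForestClassifier(random_state=0),
                         {"n_estimators": [100, 300], "max_depth": [None, 10]},
                         cv=5)
svm_search = GridSearchCV(SVC(probability=True, random_state=0),
                          {"C": [0.1, 1, 10]}, cv=5)

# Ensemble: base-model predictions are combined by a logistic regression,
# mirroring the best-performing combination reported in the abstract.
ensemble = StackingClassifier(
    estimators=[("rf", rf_search), ("svm", svm_search)],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
ensemble.fit(X, y)
print("training accuracy:", ensemble.score(X, y))
```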