• Title/Summary/Keyword: Network Feature Extraction


Real-time Fault Diagnosis of Induction Motor Using Clustering and Radial Basis Function (클러스터링과 방사기저함수 네트워크를 이용한 실시간 유도전동기 고장진단)

  • Park, Jang-Hwan;Lee, Dae-Jong;Chun, Myung-Geun
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers
    • /
    • v.20 no.6
    • /
    • pp.55-62
    • /
    • 2006
  • For the fault diagnosis of three-phase induction motors, we construct an experimental unit and then develop a diagnosis algorithm based on pattern recognition. The experimental unit consists of a machinery module for the induction motor drive and a data acquisition module to obtain the fault signal. As the first step of the diagnosis procedure, preprocessing is performed to simplify and normalize the acquired current. To simplify the data, the three-phase current is transformed into the magnitude of the Concordia vector. As the next step, feature extraction is performed by kernel principal component analysis (KPCA) and linear discriminant analysis (LDA). Finally, we use a classifier based on a radial basis function (RBF) network. To show its effectiveness, the proposed diagnostic system has been intensively tested with various data acquired under different electrical and mechanical faults with varying load.
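The preprocessing step above, collapsing three phase currents into one Concordia-vector magnitude, can be sketched in a few lines of numpy. This is a minimal illustration, assuming the standard power-invariant Clarke/Concordia transform; the abstract does not state which scaling the authors used.

```python
import numpy as np

def concordia_magnitude(ia, ib, ic):
    """Magnitude of the Concordia (Clarke) vector of three-phase currents.
    Power-invariant scaling is assumed (not stated in the abstract)."""
    i_alpha = np.sqrt(2.0 / 3.0) * (ia - 0.5 * ib - 0.5 * ic)
    i_beta = (ib - ic) / np.sqrt(2.0)
    return np.hypot(i_alpha, i_beta)

# A healthy, balanced motor yields a nearly constant magnitude
# (a circle in the alpha-beta plane); faults distort that circle.
t = np.linspace(0, 0.1, 1000)
ia = np.cos(2 * np.pi * 60 * t)
ib = np.cos(2 * np.pi * 60 * t - 2 * np.pi / 3)
ic = np.cos(2 * np.pi * 60 * t + 2 * np.pi / 3)
mag = concordia_magnitude(ia, ib, ic)
```

For a balanced sinusoidal supply the magnitude is constant, so deviations from a flat trace are exactly the fault signature the downstream KPCA/LDA features can pick up.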

FRS-OCC: Face Recognition System for Surveillance Based on Occlusion Invariant Technique

  • Abbas, Qaisar
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.8
    • /
    • pp.288-296
    • /
    • 2021
  • Automated face recognition in a runtime environment is gaining more and more importance in the fields of surveillance and urban security. This is a difficult task given the constantly changing image landscape with varying features and attributes. For a system to be beneficial in industrial settings, it is pertinent that its efficiency is not compromised when running on roads, intersections, and busy streets. However, recognition in such uncontrolled circumstances is a major problem in real-life applications. This paper addresses the main problem of face recognition in which the full face is not visible (occlusion). This is a common occurrence, as any person can change his or her appearance by wearing a scarf or sunglasses, or by merely growing a mustache or beard. Such discrepancies in facial appearance are frequently encountered in uncontrolled circumstances and can defeat security systems based on face recognition. Although this type of variation is very common in real-life environments, it has been studied relatively little in the literature, though it is now a major focus for researchers. Existing state-of-the-art techniques suffer from several limitations, the most significant being low usability and poor response time in case of any calamity. In this paper, an improved face recognition system, FRS-OCC, is developed to solve the occlusion problem. To build the FRS-OCC system, color and texture features are used, and an incremental learning algorithm (Learn++) then selects the more informative features. Afterward, a trained stack-based autoencoder (SAE) deep learning algorithm is used to recognize a human face. Overall, the FRS-OCC system introduces algorithms that enhance the response time to guarantee a benchmark quality of service in any situation. To test and evaluate the performance of the proposed FRS-OCC system, the AR face dataset is utilized. On average, the FRS-OCC system outperformed other state-of-the-art methods, achieving an SE of 98.82%, SP of 98.49%, AC of 98.76%, and AUC of 0.9995. The obtained results indicate that the FRS-OCC system can be used in any surveillance application.
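The SE/SP/AC figures reported above are standard confusion-matrix metrics. A small sketch of how they are computed, using illustrative counts that are not taken from the paper:

```python
import numpy as np

def classification_metrics(tp, fn, tn, fp):
    """Sensitivity (SE), specificity (SP), and accuracy (AC)
    from confusion-matrix counts."""
    se = tp / (tp + fn)               # true positive rate
    sp = tn / (tn + fp)               # true negative rate
    ac = (tp + tn) / (tp + fn + tn + fp)
    return se, sp, ac

# Hypothetical counts for illustration only (not the paper's data).
se, sp, ac = classification_metrics(tp=168, fn=2, tn=196, fp=3)
```

AUC, the fourth figure quoted, is threshold-independent and is computed from the ranking of classifier scores rather than from a single confusion matrix.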

Simultaneous monitoring of motion ECG of two subjects using Bluetooth Piconet and baseline drift

  • Dave, Tejal;Pandya, Utpal
    • Biomedical Engineering Letters
    • /
    • v.8 no.4
    • /
    • pp.365-371
    • /
    • 2018
  • Uninterrupted monitoring of multiple subjects by medical technicians or physicians is required for mass casualty events, in hospital environments, or for sports. Movement of the subjects under monitoring requires such a system to be wireless, sometimes demands multiple transmitters and a receiver as a base station, and the monitored parameter must not be corrupted by noise before further diagnosis. A Bluetooth Piconet network is envisioned, where each subject carries a Bluetooth transmitter module that acquires a vital sign continuously and relays it to a Bluetooth-enabled device where further signal processing is done. In this paper, a wireless network is realized to capture the ECG of two subjects performing different activities, such as cycling, jogging, and staircase climbing, at a 100 Hz sampling frequency using a prototyped Bluetooth module. The paper demonstrates the removal of baseline drift using the Fast Fourier Transform and Inverse Fast Fourier Transform, and the removal of high-frequency noise using a moving average and the Savitzky-Golay (S-Golay) algorithm. Experimental results highlight the efficacy of the proposed work for monitoring any vital sign parameter of multiple subjects simultaneously. The importance of removing baseline drift before high-frequency noise removal is shown using experimental results. It is possible to use the Bluetooth Piconet framework to capture ECG simultaneously for more than two subjects. For applications where there is larger body movement, baseline drift removal is a major concern, and hence, along with wireless transmission issues, baseline drift must be removed before high-frequency noise removal for further feature extraction.
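The two denoising stages described, FFT/IFFT baseline-drift removal followed by moving-average smoothing, can be sketched in numpy. This is a minimal illustration on synthetic data; the 0.5 Hz drift cut-off and 5-sample window are assumed illustrative values, not the paper's parameters.

```python
import numpy as np

def remove_baseline_drift(ecg, fs, cutoff_hz=0.5):
    """High-pass in the frequency domain: FFT, zero the bins below
    cutoff_hz (the baseline wander), then inverse FFT back to time."""
    spectrum = np.fft.rfft(ecg)
    freqs = np.fft.rfftfreq(len(ecg), d=1.0 / fs)
    spectrum[freqs < cutoff_hz] = 0.0
    return np.fft.irfft(spectrum, n=len(ecg))

def moving_average(x, window=5):
    """Simple moving-average smoothing for high-frequency noise."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

fs = 100  # Hz, the sampling rate used in the paper
t = np.arange(0, 10, 1.0 / fs)
clean = np.sin(2 * np.pi * 1.3 * t)        # stand-in for an ECG component
drift = 0.8 * np.sin(2 * np.pi * 0.1 * t)  # slow baseline wander
corrected = remove_baseline_drift(clean + drift, fs)
smoothed = moving_average(corrected)
```

Running drift removal first, as the paper emphasizes, matters because a moving average applied to a drifting signal smooths along the drifting baseline rather than the true one.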

Deep Learning based Raw Audio Signal Bandwidth Extension System (딥러닝 기반 음향 신호 대역 확장 시스템)

  • Kim, Yun-Su;Seok, Jong-Won
    • Journal of IKEEE
    • /
    • v.24 no.4
    • /
    • pp.1122-1128
    • /
    • 2020
  • Bandwidth extension refers to restoring and expanding a narrowband (NB) signal that is damaged in the encoding and decoding process, due to a lack of channel capacity or the characteristics of the codec installed in a mobile communication device, by converting it into a wideband (WB) signal. Bandwidth extension research mainly focuses on voice signals and operates on the high band in the frequency domain, as in SBR (Spectral Band Replication) and IGF (Intelligent Gap Filling), restoring disappeared or damaged high bands based on complex feature extraction processes. In this paper, we propose a model that outputs a bandwidth-extended signal based on an autoencoder, using the residual connections of one-dimensional convolutional neural networks (CNN); the bandwidth is extended from an input time-domain signal of a certain length without complicated pre-processing. In addition, it was confirmed that the damaged high band can be restored even by training on a dataset containing various types of sound sources, including music, not limited to speech.
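To make the contrast concrete, here is a toy numpy illustration of the hand-crafted band-replication idea (SBR-style) that the paper's learned autoencoder replaces: patch the empty upper half of the spectrum with a scaled copy of the lower half. The gain value is arbitrary; this is not the paper's model.

```python
import numpy as np

def toy_band_replication(nb_signal, gain=0.5):
    """Copy the scaled lower half of the spectrum into the upper half -
    a crude sketch of the SBR idea, replaced in the paper by a learned
    1-D CNN autoencoder operating directly on the time-domain signal."""
    spec = np.fft.rfft(nb_signal)
    half = len(spec) // 2
    spec[half:2 * half] = gain * spec[:half]
    return np.fft.irfft(spec, n=len(nb_signal))

n = 1000
t = np.arange(n)
nb = np.sin(2 * np.pi * 10 * t / n)  # narrowband input: one low tone
wb = toy_band_replication(nb)        # now also carries high-band energy
```

Hand-crafted replication like this cannot recover content that is uncorrelated with the low band, which is the gap the learned model aims to fill.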

Shooting sound analysis using convolutional neural networks and long short-term memory (합성곱 신경망과 장단기 메모리를 이용한 사격음 분석 기법)

  • Kang, Se Hyeok;Cho, Ji Woong
    • The Journal of the Acoustical Society of Korea
    • /
    • v.41 no.3
    • /
    • pp.312-318
    • /
    • 2022
  • This paper proposes a model that classifies the type of gun and infers information about the sound source location using a deep neural network. The proposed classification model is composed of convolutional neural networks (CNN) and long short-term memory (LSTM). For training and testing the model, we use the Gunshot Audio Forensic Dataset generated by a project supported by the National Institute of Justice (NIJ). The acoustic signals are transformed into Mel-spectrograms and provided as training and test data for the proposed model. The model is compared with a control model consisting of convolutional neural networks only. The proposed model shows high accuracy of more than 90 %.
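The feature-extraction step, turning audio into a time-frequency image a CNN can consume, can be sketched with a plain magnitude spectrogram. The paper uses a Mel-spectrogram; the Mel filterbank is omitted here for brevity, and the frame/hop sizes are illustrative.

```python
import numpy as np

def magnitude_spectrogram(signal, n_fft=256, hop=128):
    """Frame the signal, apply a Hann window, and take |FFT| per frame.
    A plain STFT magnitude; a Mel filterbank would be applied on top."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))  # (frames, freq bins)

audio = np.random.default_rng(0).standard_normal(4096)
spec = magnitude_spectrogram(audio)  # 2-D "image" for the CNN front end
```

The CNN layers then learn local time-frequency patterns from this image, while the LSTM models how those patterns evolve across frames.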

COVID-19 Diagnosis from CXR images through pre-trained Deep Visual Embeddings

  • Khalid, Shahzaib;Syed, Muhammad Shehram Shah;Saba, Erum;Pirzada, Nasrullah
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.5
    • /
    • pp.175-181
    • /
    • 2022
  • COVID-19 is an acute respiratory syndrome that affects the host's breathing and respiratory system. The first case of the novel disease was reported in 2019; it created a state of emergency in the whole world and was declared a global pandemic within months after the first case. The disease created a socioeconomic crisis globally. The emergency has made it imperative for professionals to take the necessary measures to make early diagnoses of the disease. The conventional diagnosis for COVID-19 is through Polymerase Chain Reaction (PCR) testing. However, in many rural societies, these tests are not available or take a long time to provide results. Hence, we propose a COVID-19 classification system by means of machine learning and transfer learning models. The proposed approach identifies individuals with COVID-19 and distinguishes them from those who are healthy with the help of Deep Visual Embeddings (DVE). Five state-of-the-art models, VGG-19, ResNet50, Inceptionv3, MobileNetv3, and EfficientNetB7, were used in this study along with five different pooling schemes to perform deep feature extraction. In addition, the features are normalized using standard scaling, and 4-fold cross-validation is used to validate the performance over multiple versions of the validation data. The best results of 88.86% UAR, 88.27% specificity, 89.44% sensitivity, 88.62% accuracy, 89.06% precision, and 87.52% F1-score were obtained using ResNet-50 with average pooling and logistic regression with class weights as the classifier.
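The pooling scheme that performed best above, average pooling over a CNN feature map, reduces each channel to a single number before the classifier sees it. A minimal sketch; the 7x7x2048 shape is the commonly quoted ResNet-50 final feature map for a 224x224 input, used here as an assumption.

```python
import numpy as np

def global_average_pool(feature_map):
    """Collapse an (H, W, C) CNN feature map to a C-dimensional
    embedding by averaging each channel over its spatial grid."""
    return feature_map.mean(axis=(0, 1))

# Illustrative shape: ResNet-50's final feature map is commonly
# 7x7x2048, so the pooled deep visual embedding has length 2048.
fmap = np.random.default_rng(1).standard_normal((7, 7, 2048))
embedding = global_average_pool(fmap)
```

The resulting fixed-length embedding is what gets standard-scaled and fed to the logistic regression classifier under 4-fold cross-validation.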

Short-Term Water Quality Prediction of the Paldang Reservoir Using Recurrent Neural Network Models (순환신경망 모델을 활용한 팔당호의 단기 수질 예측)

  • Jiwoo Han;Yong-Chul Cho;Soyoung Lee;Sanghun Kim;Taegu Kang
    • Journal of Korean Society on Water Environment
    • /
    • v.39 no.1
    • /
    • pp.46-60
    • /
    • 2023
  • Climate change causes fluctuations in water quality in the aquatic environment, which can change water circulation patterns and severely harm aquatic ecosystems in the future. Therefore, research is needed to predict and respond to water quality changes caused by climate change in advance. In this study, we predicted the dissolved oxygen (DO), chlorophyll-a, and turbidity of the Paldang reservoir for about two weeks using long short-term memory (LSTM) and gated recurrent units (GRU), deep learning algorithms based on recurrent neural networks. The model was built on real-time water quality data and meteorological data. The observation period was set from July to September in the summer of 2021 (Period 1) and from March to May in the spring of 2022 (Period 2). We selected the algorithm with optimal predictive power for each water quality parameter. In addition, to improve the predictive power of the model, an important-variable extraction technique using random forest was applied so that only the important variables were used as input variables. In both Periods 1 and 2, the predictive power after extracting important variables was further improved. Except for DO in Period 2, GRU was selected as the best model for all water quality parameters. This methodology can be useful for preventive water quality management by identifying the variability of water quality in advance and predicting water quality over a short period.
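Recurrent models like the LSTM/GRU above are typically trained on sliding windows of past observations that predict the next step. A minimal sketch of that windowing; the 24-step lookback and toy data are illustrative assumptions, not the study's configuration.

```python
import numpy as np

def make_windows(series, lookback):
    """Turn a (time, features) array into (samples, lookback, features)
    inputs and next-step targets for a recurrent model."""
    X = np.stack([series[i:i + lookback]
                  for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X, y

# Toy multivariate series; columns could stand for DO,
# chlorophyll-a, and turbidity (illustrative only).
series = np.random.default_rng(2).standard_normal((100, 3))
X, y = make_windows(series, lookback=24)
```

The random-forest importance step described in the abstract would prune the feature columns of `series` before this windowing, so the recurrent model only sees the informative variables.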

Deep learning-based anomaly detection in acceleration data of long-span cable-stayed bridges

  • Seungjun Lee;Jaebeom Lee;Minsun Kim;Sangmok Lee;Young-Joo Lee
    • Smart Structures and Systems
    • /
    • v.33 no.2
    • /
    • pp.93-103
    • /
    • 2024
  • Despite the rapid development of sensors, structural health monitoring (SHM) still faces challenges in monitoring due to the degradation of devices and harsh environmental loads. These challenges can lead to measurement errors, missing data, or outliers, which can affect the accuracy and reliability of SHM systems. To address this problem, this study proposes a classification method that detects anomaly patterns in sensor data. The proposed classification method involves several steps. First, data scaling is conducted to adjust the scale of the raw data, which may have different magnitudes and ranges. This step ensures that the data is on the same scale, facilitating the comparison of data across different sensors. Next, informative features in the time and frequency domains are extracted and used as input for a deep neural network model. The model can effectively detect the most probable anomaly pattern, allowing for the timely identification of potential issues. To demonstrate the effectiveness of the proposed method, it was applied to actual data obtained from a long-span cable-stayed bridge in China. The results of the study have successfully verified the proposed method's applicability to practical SHM systems for civil infrastructures. The method has the potential to significantly enhance the safety and reliability of civil infrastructures by detecting potential issues and anomalies at an early stage.
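The scaling and feature-extraction steps described above can be sketched in numpy. The paper's exact feature set is not listed in the abstract; RMS, peak, kurtosis, and dominant frequency are assumed here as common SHM choices.

```python
import numpy as np

def extract_features(accel, fs):
    """Scale the raw acceleration, then compute a few common time- and
    frequency-domain features: RMS, peak, kurtosis, dominant frequency.
    (Illustrative feature set; the paper's exact set is not listed.)"""
    x = (accel - accel.mean()) / accel.std()  # data scaling step
    rms = np.sqrt(np.mean(x ** 2))
    peak = np.max(np.abs(x))
    kurtosis = np.mean(x ** 4) / np.mean(x ** 2) ** 2
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    dom_freq = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
    return np.array([rms, peak, kurtosis, dom_freq])

fs = 100
t = np.arange(0, 10, 1.0 / fs)
accel = (np.sin(2 * np.pi * 2.0 * t)
         + 0.1 * np.random.default_rng(3).standard_normal(len(t)))
features = extract_features(accel, fs)  # input vector for the DNN
```

One such vector per sensor window becomes the input to the deep neural network that labels the most probable anomaly pattern.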

Determination of High-pass Filter Frequency with Deep Learning for Ground Motion (딥러닝 기반 지반운동을 위한 하이패스 필터 주파수 결정 기법)

  • Lee, Jin Koo;Seo, JeongBeom;Jeon, SeungJin
    • Journal of the Earthquake Engineering Society of Korea
    • /
    • v.28 no.4
    • /
    • pp.183-191
    • /
    • 2024
  • Accurate seismic vulnerability assessment requires high-quality and large amounts of ground motion data. Ground motion time series contain not only the seismic waves but also background noise. Therefore, it is crucial to determine the high-pass cut-off frequency to reduce the background noise. Traditional methods for determining the high-pass filter frequency are based on human inspection, such as comparing the noise and signal Fourier Amplitude Spectra (FAS), f² trend-line fitting, and inspection of the displacement curve after filtering. However, these methods are subject to human error and unsuitable for automating the process. This study used a deep learning approach to determine the high-pass filter frequency. We used the Mel-spectrogram for feature extraction and the mixup technique to overcome the lack of data. We selected convolutional neural network (CNN) models such as ResNet, DenseNet, and EfficientNet for transfer learning. Additionally, we chose ViT and DeiT as transformer-based models. The results showed that ResNet had the highest performance, with R² (the coefficient of determination) at 0.977 and the lowest mean absolute error (MAE) and root mean square error (RMSE) at 0.006 and 0.074, respectively. When applied to a seismic event and compared to the traditional methods, the high-pass filter frequency determined by the deep learning method differed by only 0.1 Hz, which demonstrates that it can be used as a replacement for traditional methods. We anticipate that this study will pave the way for automating ground motion processing, which could be applied to systems that handle large amounts of data efficiently.
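The mixup augmentation mentioned above blends pairs of training examples and their targets with a Beta-distributed weight, effectively enlarging a small dataset. A minimal sketch; alpha=0.2 and the toy inputs are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2):
    """mixup: blend two training examples and their labels/targets with
    a weight lam ~ Beta(alpha, alpha). alpha=0.2 is a common choice,
    assumed here rather than taken from the paper."""
    lam = np.random.default_rng().beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

# Two toy "Mel-spectrogram" inputs with scalar regression targets
# standing in for high-pass cut-off frequencies (illustrative values).
a, b = np.ones((64, 64)), np.zeros((64, 64))
x_mix, y_mix = mixup(a, 0.8, b, 0.2)
```

Because the target here is a continuous cut-off frequency, the same interpolation applies to the regression target as to the spectrogram input.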

Performance Comparison of CNN-Based Image Classification Models for Drone Identification System (드론 식별 시스템을 위한 합성곱 신경망 기반 이미지 분류 모델 성능 비교)

  • YeongWan Kim;DaeKyun Cho;GunWoo Park
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.4
    • /
    • pp.639-644
    • /
    • 2024
  • Recent developments in the use of drones on battlefields, extending beyond reconnaissance to firepower support, have greatly increased the importance of technologies for early automatic drone identification. In this study, to identify an effective image classification model that can distinguish drones from other aerial targets of similar size and appearance, such as birds and balloons, we utilized a dataset of 3,600 images collected from the internet. We adopted a transfer learning approach that combines the feature extraction capabilities of three pre-trained convolutional neural network models (VGG16, ResNet50, InceptionV3) with an additional classifier. Specifically, we conducted a comparative analysis of the performance of these three pre-trained models to determine the most effective one. The results showed that the InceptionV3 model achieved the highest accuracy at 99.66%. This research represents a new endeavor in utilizing existing convolutional neural network models and transfer learning for drone identification, which is expected to make a significant contribution to the advancement of drone identification technologies.