• Title/Summary/Keyword: 1차원 컨볼루션 (one-dimensional convolution)

Search results: 10

Customized AI Exercise Recommendation Service for the Balanced Physical Activity (균형적인 신체활동을 위한 맞춤형 AI 운동 추천 서비스)

  • Chang-Min Kim;Woo-Beom Lee
    • Journal of the Institute of Convergence Signal Processing / v.23 no.4 / pp.234-240 / 2022
  • This paper proposes a customized AI exercise recommendation service for balancing the relative amount of exercise according to the working environment of each occupation. The WISDM database, collected using acceleration and gyro sensors, is a dataset that classifies physical activities into 18 categories. Our system first classifies the 18 physical activities into three types (whole body, upper body, and lower body), then recommends an adaptive exercise based on the analyzed activity type. A one-dimensional convolutional neural network is used to classify the physical activity. The proposed model is composed of convolution blocks in which 1D convolution layers with various kernel sizes are connected in parallel. By applying multiple 1D convolution layers to the input pattern, the convolution blocks can effectively extract the detailed local features that would otherwise require deep neural network models. In an evaluation against a previous recurrent neural network, the proposed model achieved a remarkable 98.4% accuracy.
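
The parallel multi-kernel convolution block described in this abstract can be sketched roughly as follows. This is a minimal NumPy illustration with random stand-in kernels; the actual model learns its kernel weights, and the abstract does not specify the kernel sizes, so the ones below are assumptions:

```python
import numpy as np

def conv1d_same(x, kernel):
    """'Same'-length 1D convolution of a signal with one kernel."""
    return np.convolve(x, kernel, mode="same")

def parallel_conv_block(x, kernel_sizes=(3, 5, 7), rng=None):
    """Apply several 1D convolutions with different kernel sizes in
    parallel and stack the outputs as feature channels."""
    rng = np.random.default_rng(0) if rng is None else rng
    feats = []
    for k in kernel_sizes:
        kernel = rng.standard_normal(k)   # stand-in for learned weights
        feats.append(conv1d_same(x, kernel))
    return np.stack(feats, axis=0)        # shape: (n_kernels, len(x))

signal = np.sin(np.linspace(0, 4 * np.pi, 128))  # toy accelerometer trace
features = parallel_conv_block(signal)
```

Each kernel size captures local patterns at a different temporal scale, which is the stated motivation for the parallel design.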

Machine Tool State Monitoring Using Hierarchical Convolution Neural Network (계층적 컨볼루션 신경망을 이용한 공작기계의 공구 상태 진단)

  • Kyeong-Min Lee
    • Journal of the Institute of Convergence Signal Processing / v.23 no.2 / pp.84-90 / 2022
  • Machine tool state monitoring is a process that automatically detects the state of a machine. In the manufacturing process, the efficiency of machining and the quality of the product are affected by the condition of the tool. Worn or broken tools can cause serious problems in process performance and lower product quality. Therefore, it is necessary to develop a system that prevents tool wear and damage during the process so that the tool can be replaced in a timely manner. This paper proposes a method for diagnosing five tool states using a deep learning-based hierarchical convolutional neural network so that tools can be changed at the right time. The one-dimensional acoustic signal generated when the machine cuts the workpiece is converted into a frequency-based power spectral density two-dimensional image and used as input to a convolutional neural network. The learning model diagnoses the five tool states through three hierarchical steps. The proposed method showed high accuracy compared to the conventional method. In addition, it can be utilized in a smart factory fault diagnosis system that monitors various machine tools through real-time connections.
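
The signal-to-image conversion this paper relies on, turning a 1-D acoustic signal into a 2-D power spectral density image, can be sketched as below. The frame length, hop size, and Hann window here are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def psd_image(signal, frame_len=64, hop=32):
    """Turn a 1-D signal into a 2-D power-spectral-density image by
    taking the squared FFT magnitude of overlapping windowed frames."""
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * np.hanning(frame_len)
        spectrum = np.fft.rfft(frame)
        frames.append(np.abs(spectrum) ** 2 / frame_len)  # one frame's PSD
    return np.array(frames).T  # rows: frequency bins, cols: time frames

t = np.linspace(0, 1, 1024, endpoint=False)
acoustic = np.sin(2 * np.pi * 50 * t)  # toy stand-in for cutting noise
image = psd_image(acoustic)
```

The resulting frequency-by-time array is what a 2-D convolutional network can consume directly.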

EPS Gesture Signal Recognition using Deep Learning Model (심층 학습 모델을 이용한 EPS 동작 신호의 인식)

  • Lee, Yu ra;Kim, Soo Hyung;Kim, Young Chul;Na, In Seop
    • Smart Media Journal / v.5 no.3 / pp.35-41 / 2016
  • In this paper, we propose hand-gesture signal recognition based on an EPS (Electronic Potential Sensor) using a deep learning model. Signals extracted from the electric-field-based EPS contain considerable noise, which must be removed in pre-processing. After the noise is removed with a filter that uses frequency features, the signals, which have only a one-dimensional voltage-value feature, are reconstructed through a dimensional transformation so that a convolution operation can be applied. The reconstructed signal data is then classified and recognized using a multi-layer deep learning model. Since a statistical model based on probability is sensitive to its initial parameters, its results can change after training in the modeling phase. A deep learning model can overcome this problem thanks to its multiple training layers. In our experiments, we used two different deep learning structures, a convolutional neural network and a recurrent neural network, and compared them with a statistical model algorithm on four kinds of gestures. The convolutional neural network produced better recognition results than the other algorithms for EPS gesture signal recognition.
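
The "dimensional transformation" step, reshaping a 1-D voltage sequence into a 2-D array so that a convolution can operate on it, amounts to something like this minimal sketch (the 8x8 target size is an arbitrary assumption):

```python
import numpy as np

voltage = np.arange(64, dtype=float)  # toy 1-D EPS voltage samples
image = voltage.reshape(8, 8)         # reconstruct as a 2-D input for convolution
```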

Design of Convolutional RBFNNs Pattern Classifier for Two dimensional Face Recognition (2차원 얼굴 인식을 위한 Convolutional RBFNNs 패턴 분류기 설계)

  • Kim, Jong-Bum;Oh, Sung-Kwun
    • Proceedings of the KIEE Conference / 2015.07a / pp.1355-1356 / 2015
  • This paper designs a two-dimensional face recognition system using a convolution-based RBFNNs pattern classifier. The proposed method alternately connects convolution layers, which perform feature extraction and dimensionality reduction, with subsampling layers to transform a 2-D image into a 1-D feature array. The resulting 1-D feature array is then used as the input to the RBFNNs pattern classifier, which performs the recognition. The premise part of the RBFNNs uses the FCM clustering algorithm, and the connection weights use first-order linear functions. The least square estimation (LSE) method is used to estimate the polynomial coefficients. The CMU PIE Database is used to evaluate the performance of the proposed model.
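
The least square estimation (LSE) step for the first-order consequent polynomials can be sketched as follows. The data here is synthetic, and this single global fit stands in for the per-cluster fits the RBFNN would actually perform:

```python
import numpy as np

# Toy data: 1-D features x and targets y from a noisy line y = 2x + 0.5.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=50)
y = 2.0 * x + 0.5 + rng.normal(0, 0.01, size=50)

# Design matrix for a first-order (linear) consequent polynomial a*x + b.
A = np.column_stack([x, np.ones_like(x)])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)  # LSE solution [a, b]
```

`lstsq` minimizes the squared residual, which is exactly the least-squares criterion named in the abstract.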


Swear Word Detection through Convolutional Neural Network (딥러닝 기반 욕설 탐지)

  • Kim, Yumin;Gang, Hyobin;Han, Suhyeun;Jeong, Hieyong
    • Proceedings of the Korea Information Processing Society Conference / 2021.11a / pp.685-686 / 2021
  • As individuals' social media activity grows, an increasing number of users abuse anonymity to direct profanity at others without hesitation. This study aims to verify whether profanity can be detected by crawling profanity data from abusive chat rooms, building a dataset, and training a convolutional network on it, and whether the position of the detected profanity within the full sentence can be located and blurred. As pre-processing, all characters other than Korean and whitespace were removed, the text was tokenized into morpheme units, stop words were removed, and padding was applied. A one-dimensional convolution was used as the learning model, with 80% of the collected data used for training and the remaining 20% for testing. Compared with a simple keyword-based classification model, the model used in this study improved accuracy by about 14%. The tests also confirmed that when a sentence contained profanity, both the profanity and its position information were successfully obtained.
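
The padding and 80/20 split described in the abstract can be sketched in plain Python; the token ids and maximum length below are illustrative, not taken from the paper:

```python
def pad_sequences(seqs, max_len, pad_id=0):
    """Right-pad (or truncate) token-id sequences to a fixed length."""
    return [seq[:max_len] + [pad_id] * max(0, max_len - len(seq))
            for seq in seqs]

def train_test_split(data, train_ratio=0.8):
    """Split a list into train/test portions (80/20 by default)."""
    cut = int(len(data) * train_ratio)
    return data[:cut], data[cut:]

token_ids = [[5, 9, 2], [7], [3, 1, 4, 1, 5, 9]]
padded = pad_sequences(token_ids, max_len=4)
train, test = train_test_split(list(range(10)))
```

Fixed-length padded sequences are what a 1-D convolution layer expects as a rectangular input batch.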

A Trellis-Coded 3-Dimensional OFDM System (격자 부호화 3차원 직교 주파수분할다중화 시스템)

  • Li, Shuang;Kang, Seog Geun
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.9 / pp.1635-1641 / 2017
  • In this paper, a trellis-coded 3-dimensional (3-D) orthogonal frequency division multiplexing (OFDM) system is presented and its performance is analyzed. A set-partitioning technique for trellis coding with respect to a 3-D signal constellation is also presented. We show theoretically that the proposed system, which exploits a trellis coding scheme with recursive systematic convolutional (RSC) codes of rate R = 1/3 and 2/3, can improve symbol error rate (SER) performance by up to 7.8 dB compared with the uncoded OFDM system in an additive white Gaussian noise (AWGN) channel. Computer simulation confirms that the theoretical analysis of the proposed system is very accurate. The proposed trellis-coded 3-D OFDM system is therefore considered well suited for high-quality digital transmission without increasing the required bandwidth.
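
For illustration, a rate-1/3 convolutional encoder emits three coded bits per input bit, one per generator polynomial. The sketch below uses a simple feed-forward (non-recursive) encoder with an assumed constraint length of 3; the paper's recursive systematic (RSC) codes additionally feed part of the output back into the shift register, which this sketch omits:

```python
def conv_encode_r13(bits, g=(0b111, 0b101, 0b011)):
    """Rate-1/3 feed-forward convolutional encoder: each input bit emits
    three output bits, one per generator polynomial (constraint length 3)."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111        # shift the new bit in
        for gen in g:                             # one output per generator
            out.append(bin(state & gen).count("1") % 2)
    return out

coded = conv_encode_r13([1, 0, 1, 1])  # 4 input bits -> 12 coded bits
```

The 3x redundancy is what the trellis decoder later exploits to lower the SER.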

The Impact of the PCA Dimensionality Reduction for CNN based Hyperspectral Image Classification (CNN 기반 초분광 영상 분류를 위한 PCA 차원축소의 영향 분석)

  • Kwak, Taehong;Song, Ahram;Kim, Yongil
    • Korean Journal of Remote Sensing / v.35 no.6_1 / pp.959-971 / 2019
  • CNN (Convolutional Neural Network) is a representative deep learning algorithm that can extract high-level spatial and spectral features, and it has been applied to hyperspectral image classification. However, one significant drawback of applying CNNs to hyperspectral images is the high dimensionality of the data, which increases training time and processing complexity. To address this problem, several CNN-based hyperspectral image classification studies have exploited PCA (Principal Component Analysis) for dimensionality reduction. One limitation is that the spectral information of the original image can be lost through PCA. Although it is clear that the use of PCA affects accuracy and CNN training time, the impact of PCA on CNN-based hyperspectral image classification has been understudied. The purpose of this study is to analyze the quantitative effect of PCA in CNNs for hyperspectral image classification. The hyperspectral images were first transformed through PCA and fed into the CNN model while varying the size of the reduced dimensionality. In addition, 2D-CNN and 3D-CNN frameworks were applied to analyze the sensitivity of PCA with respect to the convolution kernel in the model. Experimental results were evaluated based on classification accuracy, learning time, variance ratio, and training process. The size of the reduced dimensionality was most efficient when the explained variance ratio reached 99.7%-99.8%. Since, relative to the 2D kernel, the 3D kernel achieved higher classification accuracy with the original CNN than with the PCA-CNN, the results revealed that dimensionality reduction was relatively less effective for the 3D kernel.
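
Selecting the number of principal components by a target explained variance ratio, as in the 99.7%-99.8% operating point this study identifies, can be sketched with an SVD-based PCA. The toy data below is an assumption for illustration, not a real hyperspectral scene:

```python
import numpy as np

def pca_reduce(X, target_ratio=0.997):
    """Project X onto the fewest principal components whose cumulative
    explained variance ratio reaches target_ratio."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    ratio = np.cumsum(S ** 2) / np.sum(S ** 2)      # cumulative variance
    k = int(np.searchsorted(ratio, target_ratio) + 1)
    return Xc @ Vt[:k].T, k

rng = np.random.default_rng(0)
# Toy "hyperspectral" matrix: 100 pixels x 30 bands, rank-3 signal + noise.
signal = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 30))
X = signal + 0.01 * rng.normal(size=(100, 30))
X_reduced, n_components = pca_reduce(X)
```

Because nearly all the variance sits in the low-rank signal, only a handful of the 30 bands survive the cut, which is exactly the training-time saving the study measures.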

Statistical Model of 3D Positions in Tracking Fast Objects Using IR Stereo Camera (적외선 스테레오 카메라를 이용한 고속 이동객체의 위치에 대한 확률모델)

  • Oh, Jun Ho;Lee, Sang Hwa;Lee, Boo Hwan;Park, Jong-Il
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.1 / pp.89-101 / 2015
  • This paper proposes a statistical model of 3-D positions when tracking moving targets using an uncooled infrared (IR) stereo camera system. The proposed model is derived from two errors. One is the position error caused by pixel sampling in the digital image. The other is the timing jitter that results from the irregular capture timing of the infrared cameras. The capture timing of the IR camera is measured using the jitter meter designed in this paper, and the observed jitters are statistically modeled as a Gaussian distribution. This paper derives an integrated probability distribution by combining the jitter error with the pixel position error; the combined error is modeled as the convolution of the two error distributions. To verify the proposed statistical position error model, this paper conducts experiments in tracking moving objects with the IR stereo camera. The 3-D positions of the object are accurately measured by a trajectory scanner and are also estimated by stereo matching from the IR stereo camera system. According to the experiments, the positions of the moving object are estimated within the statistically reliable range derived by convolving the two probability models of pixel position error and timing jitter. The proposed statistical model is expected to be applicable to estimating the uncertain 3-D positions of moving objects in diverse fields.
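
The combined error model, a convolution of the pixel-position error density with the timing-jitter density, can be checked numerically: for two zero-mean Gaussians the convolution is again Gaussian with the variances summed. The sigma values below are assumptions for illustration, not the paper's measurements:

```python
import numpy as np

# Discretized Gaussian pdfs for pixel-position error and timing jitter.
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

def gaussian(x, sigma):
    return np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

p_pixel = gaussian(x, 1.0)    # assumed pixel-sampling error, sigma = 1
p_jitter = gaussian(x, 2.0)   # assumed timing-jitter error, sigma = 2

# Combined error = convolution of the two densities.
p_combined = np.convolve(p_pixel, p_jitter, mode="same") * dx

# Theory: the result is Gaussian with variance 1^2 + 2^2 = 5.
var = np.sum(x**2 * p_combined * dx)
```

The numerical variance of the convolved density matches the theoretical sum of variances, which is the property the paper's reliability range rests on.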

Three-Dimensional Convolutional Vision Transformer for Sign Language Translation (수어 번역을 위한 3차원 컨볼루션 비전 트랜스포머)

  • Horyeor Seong;Hyeonjoong Cho
    • The Transactions of the Korea Information Processing Society / v.13 no.3 / pp.140-147 / 2024
  • In the Republic of Korea, people with hearing impairments are the second-largest demographic within the registered disability community, following those with physical disabilities. Despite this demographic significance, research on sign language translation technology is limited for several reasons, including the limited market size and the lack of adequately annotated datasets. Despite these difficulties, some researchers continue to improve the performance of sign language translation technologies by employing recent advances in deep learning such as the transformer architecture, as transformer-based models have demonstrated noteworthy performance in tasks such as action recognition and video classification. This study focuses on enhancing the recognition performance of sign language translation by combining transformers with a 3D-CNN. Through experimental evaluations using the PHOENIX-Weather-2014T dataset [1], we show that the proposed model exhibits performance comparable to existing models in terms of Floating Point Operations Per Second (FLOPs).

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.205-225 / 2018
  • Convolutional Neural Network (ConvNet) is one class of powerful deep neural network that can analyze and learn hierarchies of visual features. The first such neural network (the Neocognitron) was introduced in the 1980s. At that time, neural networks were not broadly used in either industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, which revived people's interest in neural networks. The success of the Convolutional Neural Network rests on two main factors. The first is the emergence of advanced hardware (GPUs) for sufficient parallel computation. The second is the availability of large-scale datasets such as the ImageNet (ILSVRC) dataset for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, gathering a large-scale dataset to train a ConvNet is difficult and requires a lot of effort. Moreover, even with a large-scale dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be solved by transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning cases. The first uses the ConvNet as a fixed feature extractor; the second fine-tunes the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (e.g., trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only.
However, applying features with high dimensional complexity extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from multiple ConvNet layers capture different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single ConvNet layer representation. Our primary pipeline has three steps. First, images from the target task are fed forward through the pre-trained AlexNet, and the activation features from its three fully connected layers are extracted. Second, the activation features of the three layers are concatenated to obtain a multiple-ConvNet-layer representation, since it carries more information about the image. When the three fully connected layer features are concatenated, the resulting image representation has 9192 (4096+4096+1000) dimensions. However, features extracted from multiple ConvNet layers are redundant and noisy since they come from the same ConvNet. Thus, in a third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning improves. To evaluate the proposed method, experiments are conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple-ConvNet-layer representations against single-ConvNet-layer representations, using PCA for feature selection and dimensionality reduction. Our experiments demonstrate the importance of feature selection for the multiple-ConvNet-layer representation.
Moreover, our proposed approach achieved 75.6% accuracy compared to the 73.9% achieved by the FC7 layer on the Caltech-256 dataset, 73.1% compared to the 69.2% achieved by the FC8 layer on the VOC07 dataset, and 52.2% compared to the 48.7% achieved by the FC7 layer on the SUN397 dataset. We also showed that our proposed approach achieved superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on the Caltech-256, VOC07, and SUN397 datasets respectively, compared to existing work.
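
The concatenation step above, stacking the FC6, FC7, and FC8 activations into one 9192-dimensional vector before PCA, can be sketched as follows; random vectors stand in for real AlexNet activations, and the batch size and number of kept components are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for one image's FC6/FC7 (4096-d) and FC8 (1000-d) activations.
fc6 = rng.normal(size=4096)
fc7 = rng.normal(size=4096)
fc8 = rng.normal(size=1000)
multi_layer = np.concatenate([fc6, fc7, fc8])  # 4096 + 4096 + 1000 = 9192

# Over a small batch, PCA keeps only the salient directions.
batch = rng.normal(size=(20, 9192))            # toy batch of such vectors
centered = batch - batch.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
reduced = centered @ Vt[:10].T                 # keep 10 principal components
```

Concatenation-then-PCA is the pipeline's core idea: combine complementary layer features, then strip the redundancy that comes from sharing one network.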