• Title/Summary/Keyword: CNN (Convolutional Neural Networks)


Design of Pet Behavior Classification Method Based On DeepLabCut and Mask R-CNN (DeepLabCut과 Mask R-CNN 기반 반려동물 행동 분류 설계)

  • Kwon, Juyeong;Shin, Minchan;Moon, Nammee
    • Proceedings of the Korea Information Processing Society Conference / 2021.11a / pp.927-929 / 2021
  • As households that treat pets as family members (the so-called 'Pet-Family' demographic) increase, the pet market is growing rapidly. For this reason, this paper proposes a pet behavior classification method based on object segmentation through pet object identification and on body coordinate estimation. The method collects pet video data through CCTV. A Mask R-CNN (Region Convolutional Neural Networks) model is applied to the collected video data for instance segmentation of the pet, and body coordinate values are estimated with a DeepLabCut model. The resulting video data and estimated body coordinates are then fed into a CNN (Convolutional Neural Networks)-LSTM (Long Short-Term Memory) model to classify the behavior. By analyzing and classifying behavior with this model, we expect to provide a foundation for responding appropriately to dangerous situations and unexpected behavior of pets.
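To make the final stage of this pipeline concrete, the following is a minimal sketch of a CNN-LSTM that fuses per-frame image features with estimated body coordinates; the layer sizes, keypoint count, and class count are assumptions for illustration, not the authors' configuration.

```python
# Minimal sketch (assumed architecture): per-frame CNN features are concatenated with
# DeepLabCut-style keypoint coordinates and fed to an LSTM that classifies the clip.
import torch
import torch.nn as nn

class CnnLstmBehavior(nn.Module):
    def __init__(self, n_keypoints=9, n_classes=5, feat_dim=64, hidden=128):
        super().__init__()
        # small CNN backbone applied independently to every frame
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # LSTM over [frame feature || (x, y) keypoint coordinates]
        self.lstm = nn.LSTM(feat_dim + 2 * n_keypoints, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, frames, keypoints):
        # frames: (B, T, 3, H, W); keypoints: (B, T, n_keypoints, 2)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        seq = torch.cat([feats, keypoints.flatten(2)], dim=-1)
        out, _ = self.lstm(seq)
        return self.head(out[:, -1])           # classify from the last time step

if __name__ == "__main__":
    model = CnnLstmBehavior()
    clip = torch.randn(2, 16, 3, 112, 112)     # 2 clips of 16 frames (toy data)
    kpts = torch.rand(2, 16, 9, 2)             # 9 (x, y) body coordinates per frame
    print(model(clip, kpts).shape)             # -> torch.Size([2, 5])
```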

Research Trends in CNN-based Fingerprint Classification (CNN 기반 지문분류 연구 동향)

  • Jung, Hye-Wuk
    • The Journal of the Convergence on Culture Technology / v.8 no.5 / pp.653-662 / 2022
  • Recently, various studies have been conducted on fingerprint classification using Convolutional Neural Networks (CNN), which are widely used for multidimensional and complex pattern recognition tasks such as images. A CNN-based fingerprint classification method can be executed by integrating the conventional two-step process, generally divided into feature extraction and classification, into a single step. Because CNN-based methods automatically extract features from fingerprint images, they have the advantage of shortening the process. In addition, since they can learn diverse features of incomplete or low-quality fingerprints, they are flexible in extracting features under exceptional conditions. In this paper, we identify research trends in CNN-based fingerprint classification and discuss future research directions through an analysis of experimental methods and results.
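The single-step idea the survey refers to, one CNN mapping a fingerprint image directly to a class with no separate feature-extraction stage, can be sketched as follows; the architecture and the five Henry-style classes are illustrative assumptions, not taken from any surveyed paper.

```python
# Minimal sketch of an end-to-end CNN fingerprint classifier (assumed architecture).
import torch
import torch.nn as nn

CLASSES = ["arch", "tented_arch", "left_loop", "right_loop", "whorl"]

fingerprint_cnn = nn.Sequential(           # grayscale fingerprint -> class logits
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, len(CLASSES)),
)

if __name__ == "__main__":
    x = torch.randn(8, 1, 128, 128)         # a batch of 128x128 fingerprint images
    print(fingerprint_cnn(x).shape)         # -> torch.Size([8, 5])
```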

Comparison of Spatial and Frequency Images for Character Recognition (문자인식을 위한 공간 및 주파수 도메인 영상의 비교)

  • Abdurakhmon, Abduraimjonov;Choi, Hyeon-yeong;Ko, Jaepil
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2019.05a / pp.439-441 / 2019
  • Deep learning has become a powerful and robust approach in artificial intelligence, and one of its most impressive tools is the convolutional neural network (CNN), a state-of-the-art solution for object recognition. For instance, when a CNN is applied to the MNIST handwritten digit dataset, the results are generally good because all digits in MNIST are centered. The real world, however, is different: when digits are shifted away from the center, it becomes difficult for a CNN to recognize them as reliably as before. To address this issue, we create frequency-domain images from spatial images using the Fast Fourier Transform (FFT).
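The preprocessing idea is easy to illustrate: the magnitude of the 2-D FFT does not change when the content is (circularly) translated, so frequency images can be fed to the CNN instead of raw spatial images. The sketch below uses a toy digit; the shift amounts are arbitrary.

```python
# Minimal sketch: convert a spatial image to a shift-robust frequency image with the FFT.
import numpy as np

def to_frequency_image(img):
    """Return the log-magnitude spectrum of a 2-D grayscale image."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))   # center the zero frequency
    return np.log1p(np.abs(spectrum))              # magnitude is shift-invariant

if __name__ == "__main__":
    digit = np.zeros((28, 28)); digit[10:18, 10:18] = 1.0   # a toy "digit" blob
    shifted = np.roll(digit, shift=(5, 7), axis=(0, 1))     # same digit, off-center
    f1, f2 = to_frequency_image(digit), to_frequency_image(shifted)
    print(np.allclose(f1, f2))   # True: circular shifts leave the magnitude unchanged
```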


Artificial neural network for classifying with epilepsy MEG data (뇌전증 환자의 MEG 데이터에 대한 분류를 위한 인공신경망 적용 연구)

  • Yujin Han;Junsik Kim;Jaehee Kim
    • The Korean Journal of Applied Statistics / v.37 no.2 / pp.139-155 / 2024
  • This study performed a multi-class classification task to distinguish mesial temporal lobe epilepsy patients with left hippocampal sclerosis (left mTLE), mesial temporal lobe epilepsy patients with right hippocampal sclerosis (right mTLE), and healthy controls (HC) using magnetoencephalography (MEG) data. We applied various artificial neural networks and compared the results. Modeling with convolutional neural networks (CNN), recurrent neural networks (RNN), and graph neural networks (GNN) showed that the average k-fold accuracy was highest for the CNN-based model, followed by the GNN-based and RNN-based models, whereas the wall time was shortest for the RNN-based model, followed by the GNN-based and CNN-based models. The graph neural network, which achieves good accuracy and runtime and scales well to network-structured data, appears to be the most suitable model for future brain research.
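A rough sketch of the comparison protocol (k-fold accuracy and wall time) on synthetic MEG-like data is given below; the tiny CNN and RNN are stand-ins chosen for brevity, the channel, epoch, and fold counts are assumptions, and the GNN branch is omitted because it would require a graph library.

```python
# Minimal sketch of a k-fold accuracy / wall-time comparison on synthetic MEG-like data.
import time
import numpy as np
import torch
import torch.nn as nn
from sklearn.model_selection import StratifiedKFold

torch.manual_seed(0)
N, C, T, K = 120, 32, 100, 3                     # samples, channels, time steps, classes
X = torch.randn(N, C, T)
y = torch.randint(0, K, (N,))

def make_cnn():
    return nn.Sequential(nn.Conv1d(C, 16, 7, padding=3), nn.ReLU(),
                         nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, K))

class TinyRNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.gru = nn.GRU(C, 16, batch_first=True)
        self.fc = nn.Linear(16, K)
    def forward(self, x):                        # (B, C, T) -> (B, T, C) for the GRU
        out, _ = self.gru(x.transpose(1, 2))
        return self.fc(out[:, -1])

def kfold_eval(make_model, folds=5, epochs=20):
    accs, start = [], time.time()
    skf = StratifiedKFold(n_splits=folds, shuffle=True, random_state=0)
    for tr, te in skf.split(np.zeros((N, 1)), y.numpy()):
        tr, te = torch.as_tensor(tr), torch.as_tensor(te)
        model = make_model()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(epochs):                  # full-batch training, enough for a sketch
            opt.zero_grad()
            nn.functional.cross_entropy(model(X[tr]), y[tr]).backward()
            opt.step()
        accs.append((model(X[te]).argmax(1) == y[te]).float().mean().item())
    return sum(accs) / folds, time.time() - start

if __name__ == "__main__":
    for name, factory in [("CNN", make_cnn), ("RNN", TinyRNN)]:
        acc, wall = kfold_eval(factory)
        print(f"{name}: mean k-fold accuracy={acc:.3f}, wall time={wall:.1f}s")
```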

Hybrid CNN-SVM Based Seed Purity Identification and Classification System

  • Suganthi, M;Sathiaseelan, J.G.R.
    • International Journal of Computer Science & Network Security / v.22 no.10 / pp.271-281 / 2022
  • The challenges of manual seed classification can be overcome with a reliable and autonomous seed purity identification and classification technique, a highly practical and commercially important requirement of the agricultural industry. Researchers can create new data mining methods with improved accuracy using current machine learning and artificial intelligence approaches. Seed classification can help with quality grading, seed quality control, and impurity identification. Seeds have traditionally been classified based on characteristics such as colour, shape, and texture. Generally, this is done by experts who visually examine each sample, a very time-consuming and tedious task; the task is straightforward to automate, making automated seed sorting far more efficient than manual inspection. Computer vision technologies based on machine learning (ML), symmetry, and, more specifically, convolutional neural networks (CNNs) have been widely used in related fields, resulting in greater labour efficiency in many cases. To sort a sample of 3000 seeds, KNN, SVM, CNN and hybrid CNN-SVM classification algorithms were used. The proposed hybrid system includes a model that uses advanced deep learning techniques to categorise several well-known seeds. In most cases, the CNN-SVM model outperformed the comparable SVM and CNN models, demonstrating the effectiveness of using CNN-SVM to evaluate data. The findings of this research show that CNN-SVM can be used to analyse data with promising results. Future work should examine more seed varieties to expand the use of CNN-SVMs in data processing.
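The hybrid arrangement can be sketched as a small CNN that acts purely as a feature extractor, with an SVM trained on the pooled features; the backbone, image size, and class count below are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of a CNN-SVM hybrid: CNN features in, SVM decision out.
import torch
import torch.nn as nn
from sklearn.svm import SVC

cnn_backbone = nn.Sequential(                    # untrained here; normally pre-trained
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),       # -> 32-dimensional feature vector
)

def extract_features(images):
    with torch.no_grad():
        return cnn_backbone(images).numpy()

if __name__ == "__main__":
    # toy stand-in for seed images: 60 RGB images of 64x64 pixels, 3 seed classes
    images = torch.randn(60, 3, 64, 64)
    labels = torch.randint(0, 3, (60,)).numpy()
    svm = SVC(kernel="rbf").fit(extract_features(images), labels)
    print(svm.predict(extract_features(images[:5])))
```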

Bio-signal Data Augmentation Technique for CNN based Human Activity Recognition (CNN 기반 인간 동작 인식을 위한 생체신호 데이터의 증강 기법)

  • Gerelbat BatGerel;Chun-Ki Kwon
    • Journal of the Institute of Convergence Signal Processing / v.24 no.2 / pp.90-96 / 2023
  • Securing large amounts of training data for deep neural networks, including convolutional neural networks, is important for avoiding overfitting and for achieving good performance. In practice, however, the labeled training data that can be secured for deep learning is very limited. To overcome this, several augmentation methods have been proposed in the literature that generate a large amount of additional training data by transforming or manipulating the data already acquired. Unlike for training data such as images and text, however, augmentation methods that generate additional bio-signal training data for convolutional neural network based human activity recognition are scarcely found in the literature. This study therefore proposes a simple but effective augmentation method for bio-signal training data for convolutional neural network based human activity recognition. The usefulness of the proposed method is validated by showing that human activity is recognized with high accuracy by a convolutional neural network trained with the augmented bio-signal training data.
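The abstract does not spell out the exact transformation used, so the sketch below shows only common bio-signal augmentations (jittering, amplitude scaling, small circular time shifts) that expand a labelled sensor window for CNN-based activity recognition.

```python
# Minimal sketch of generic bio-signal augmentation (not necessarily the paper's method).
import numpy as np

def augment_window(window, rng, n_copies=3):
    """window: (channels, samples) array of one labelled bio-signal segment."""
    copies = []
    for _ in range(n_copies):
        w = window + rng.normal(0.0, 0.01, size=window.shape)   # additive jitter
        w = w * rng.uniform(0.9, 1.1)                            # amplitude scaling
        w = np.roll(w, rng.integers(-10, 10), axis=1)            # small circular time shift
        copies.append(w)
    return np.stack(copies)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    emg_window = rng.standard_normal((4, 200))     # 4 channels x 200 samples (toy data)
    print(augment_window(emg_window, rng).shape)   # -> (3, 4, 200)
```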

Convolutional Neural Network based Audio Event Classification

  • Lim, Minkyu;Lee, Donghyun;Park, Hosung;Kang, Yoseb;Oh, Junseok;Park, Jeong-Sik;Jang, Gil-Jin;Kim, Ji-Hwan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.6 / pp.2748-2760 / 2018
  • This paper proposes an audio event classification method based on convolutional neural networks (CNNs). CNNs are highly effective at distinguishing complex shapes in images, and the proposed system uses audio features as the CNN's input image. Mel-scale filter bank features are extracted from each frame and concatenated over 40 consecutive frames, and the concatenated frames are treated as an input image. The output layer of the CNN produces probabilities of audio events (e.g., dog bark, siren, forest). The event probabilities for all images in an audio segment are accumulated, and the audio event with the highest accumulated probability is taken as the classification result. The proposed method classified thirty audio events with an accuracy of 81.5% on the UrbanSound8K, BBC Sound FX, DCASE2016, and FREESOUND datasets.
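The segment-level decision rule described here, stacking mel filter-bank features over 40 consecutive frames into one image, obtaining per-image class probabilities from a CNN, and accumulating them over the segment, can be sketched as follows; the 40-mel, 3-class stand-in CNN is an assumption for illustration.

```python
# Minimal sketch of the accumulate-and-argmax decision rule over 40-frame feature images.
import torch
import torch.nn as nn

N_MELS, N_FRAMES, N_CLASSES = 40, 40, 3           # e.g. dog bark / siren / forest

cnn = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, N_CLASSES),
)

def classify_segment(mel_frames):
    """mel_frames: (n_frames, n_mels) mel filter-bank features of one audio segment."""
    accumulated = torch.zeros(N_CLASSES)
    for start in range(0, mel_frames.shape[0] - N_FRAMES + 1, N_FRAMES):
        image = mel_frames[start:start + N_FRAMES].T          # (n_mels, 40) "image"
        probs = cnn(image[None, None]).softmax(dim=-1)[0]     # per-image probabilities
        accumulated += probs                                   # accumulate over segment
    return int(accumulated.argmax())                           # highest accumulated wins

if __name__ == "__main__":
    segment = torch.randn(400, N_MELS)       # 400 frames of toy mel features
    print(classify_segment(segment))
```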

An Intelligent Fire Learning and Detection System Using Convolutional Neural Networks (컨볼루션 신경망을 이용한 지능형 화재 학습 및 탐지 시스템)

  • Cheoi, Kyungjoo;Jeon, Minseong
    • KIPS Transactions on Software and Data Engineering / v.5 no.11 / pp.607-614 / 2016
  • In this paper, we propose an intelligent fire learning and detection system using convolutional neural networks (CNNs). The convolutional layers of the CNN automatically extract various features of flame and smoke images, and these features are learned to classify each region as flame, smoke, or no fire. To detect fire in an image, candidate fire regions are first extracted and then passed through the CNN. Experimental results on various images show that our system outperforms previous work.
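The two-stage flow can be sketched as follows: candidate fire regions are first extracted (here with a crude colour-threshold heuristic, an assumption for illustration only) and each candidate crop is then classified by a CNN as flame, smoke, or no fire.

```python
# Minimal sketch of candidate-region extraction followed by CNN classification.
import numpy as np
import torch
import torch.nn as nn

fire_cnn = nn.Sequential(                 # stand-in 3-class classifier (flame/smoke/none)
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 3),
)

def candidate_regions(image, size=32):
    """Yield fixed-size crops whose red channel dominates (a toy candidate detector)."""
    h, w, _ = image.shape
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            crop = image[y:y + size, x:x + size]
            if crop[..., 0].mean() > 1.3 * crop[..., 2].mean():   # red >> blue heuristic
                yield crop

def classify_candidates(image):
    labels = ["flame", "smoke", "no_fire"]
    results = []
    for crop in candidate_regions(image):
        x = torch.from_numpy(crop).float().permute(2, 0, 1)[None] / 255.0
        results.append(labels[int(fire_cnn(x).argmax())])
    return results

if __name__ == "__main__":
    frame = (np.random.rand(128, 128, 3) * 255).astype(np.uint8)   # toy RGB frame
    print(classify_candidates(frame))
```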

Wood Classification of Japanese Fagaceae using Partial Sample Area and Convolutional Neural Networks

  • FATHURAHMAN, Taufik;GUNAWAN, P.H.;PRAKASA, Esa;SUGIYAMA, Junji
    • Journal of the Korean Wood Science and Technology / v.49 no.5 / pp.491-503 / 2021
  • Wood identification is regularly performed by observing wood anatomy such as colour, texture, fibre direction, and other characteristics. The manual process, however, can be time-consuming, especially when identification is required in high volume. Considering this, a convolutional neural network (CNN)-based program is applied to improve image classification results. The research focuses on the algorithm's accuracy and efficiency in dealing with dataset limitations. For this, a sample selection process is proposed in which only a small portion of the existing image is taken, while still being expected to represent the overall picture, in order to maintain and improve the generalisation capability of the CNN method in the classification stage. The experiments yielded an average F1 score of up to 93.4% for medium sample area sizes (200 × 200 pixels) across the CNN architectures tested (VGG16, ResNet50, MobileNet, DenseNet121, and Xception based), while the DenseNet121-based architecture was found to best maintain the generalisation of its model across all sample area sizes (100, 200, and 300 pixels). The experimental results showed that the proposed algorithm can be an accurate and reliable solution.
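The partial-sample-area idea can be sketched as random fixed-size patches (e.g. 200 × 200 pixels) cropped from each wood image and classified with a standard backbone such as DenseNet121; the patch count, species count, and untrained weights below are assumptions for illustration.

```python
# Minimal sketch: classify random 200x200 patches of a wood image with a DenseNet121 head.
import torch
import torch.nn as nn
from torchvision import models

N_SPECIES = 5

def random_patches(image, size=200, n=4):
    """image: (3, H, W) tensor; returns n random size x size crops."""
    _, h, w = image.shape
    ys = torch.randint(0, h - size + 1, (n,))
    xs = torch.randint(0, w - size + 1, (n,))
    return torch.stack([image[:, y:y + size, x:x + size] for y, x in zip(ys, xs)])

model = models.densenet121(weights=None)                 # or load pretrained weights
model.classifier = nn.Linear(model.classifier.in_features, N_SPECIES)

if __name__ == "__main__":
    wood_image = torch.rand(3, 600, 900)                  # toy stand-in cross-section image
    patches = random_patches(wood_image)                   # (4, 3, 200, 200)
    print(model(patches).shape)                            # -> torch.Size([4, 5])
```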

TsCNNs-Based Inappropriate Image and Video Detection System for a Social Network

  • Kim, Youngsoo;Kim, Taehong;Yoo, Seong-eun
    • Journal of Information Processing Systems / v.18 no.5 / pp.677-687 / 2022
  • We propose a detection algorithm based on tree-structured convolutional neural networks (TsCNNs) that finds pornography, propaganda, or other inappropriate content on a social media network. The algorithm sequentially applies a typical convolutional neural network (CNN) in a tree-like structure to minimize classification errors among similar classes and thus improves accuracy. We implemented the detection system and conducted experiments on a data set comprising 6 ordinary classes and 11 inappropriate classes collected from the Korean military social network. Each model of the proposed algorithm was trained, and the performance was then evaluated on the images and videos identified. Experimental results with 20,005 new images showed that overall image-identification accuracy reached 99.51%, and the algorithm reduced the identification errors of the typical CNN algorithm by 64.87%. By reducing false alarms in video identification from the domain, the TsCNNs achieved optimal performance of 98.11% when using 10-minute frame-sampling intervals. This indicates that classification with proper sampling reduces both computational burden and false alarms.
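The tree-structured routing can be sketched as a root CNN that first decides the coarse group (ordinary vs. inappropriate) and a group-specific CNN that then assigns the fine class; the tiny networks below are illustrative stand-ins, not the paper's models.

```python
# Minimal sketch of tree-structured CNN routing: coarse group first, then fine class.
import torch
import torch.nn as nn

def small_cnn(n_out):
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, n_out),
    )

root = small_cnn(2)                             # level 1: ordinary vs. inappropriate
leaves = {0: small_cnn(6), 1: small_cnn(11)}    # level 2: 6 ordinary / 11 inappropriate classes

def classify(image):
    group = int(root(image[None]).argmax())             # route the image down the tree
    fine = int(leaves[group](image[None]).argmax())     # refine within the chosen group
    return group, fine

if __name__ == "__main__":
    img = torch.rand(3, 64, 64)                 # toy RGB image
    print(classify(img))
```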