• Title/Abstract/Keywords: CNN algorithms

A Comparative Study of Alzheimer's Disease Classification using Multiple Transfer Learning Models

  • Prakash, Deekshitha;Madusanka, Nuwan;Bhattacharjee, Subrata;Park, Hyeon-Gyun;Kim, Cho-Hee;Choi, Heung-Kook
    • Journal of Multimedia Information System / Vol. 6, No. 4 / pp. 209-216 / 2019
  • Over the past decade, the availability of machine learning techniques, particularly predictive algorithms and automatic pattern recognition in medical imaging, has allowed researchers to solve complex medical problems and gain a deeper understanding of them. In this study, transfer learning is used to classify Magnetic Resonance (MR) images with a pre-trained Convolutional Neural Network (CNN). Rather than training an entire model from scratch, the transfer learning approach fine-tunes pre-trained CNN models to classify MR images into Alzheimer's disease (AD), mild cognitive impairment (MCI), and normal control (NC). The performance of this method is evaluated on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset while varying the learning rate of the model. To demonstrate the transfer learning approach, several pre-trained deep learning models, GoogLeNet, VGG-16, AlexNet, and ResNet-18, are compared on the AD classification task. GoogLeNet achieved the highest overall training and testing accuracies, 99.84% and 98.25% respectively, clearly exceeding those of the other models.
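
For readers unfamiliar with the fine-tuning workflow described in this abstract, a minimal PyTorch sketch follows; the dataset path, slice format, and hyperparameters are illustrative assumptions rather than details taken from the paper.

```python
# A minimal transfer-learning sketch, assuming MR slices saved as images in
# class subfolders (AD/MCI/NC); paths and hyperparameters are hypothetical.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Standard ImageNet preprocessing so the pre-trained weights stay meaningful.
tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("adni_slices/train", transform=tf)  # hypothetical path
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Load a pre-trained ResNet-18 and replace the final layer for AD / MCI / NC.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 3)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)  # learning rate is the swept hyperparameter

for epoch in range(10):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```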

Automatic melody extraction algorithm using a convolutional neural network

  • Lee, Jongseol;Jang, Dalwon;Yoon, Kyoungro
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 11, No. 12 / pp. 6038-6053 / 2017
  • In this study, we propose an automatic melody extraction algorithm using deep learning. In this algorithm, feature images generated from the energy of frequency bands are extracted from polyphonic audio files, and a convolutional neural network (CNN) is applied to the feature images. In the training data, a short frame of polyphonic music is labeled with a musical note, and a CNN-based classifier is trained to determine the pitch value of a short frame of the audio signal. Our goal is a novel melody extraction structure: the proposed algorithm is simple and, instead of combining various signal processing techniques, uses only a CNN to find the melody in polyphonic audio. Despite its simple structure, promising results are obtained in the experiments. Compared with state-of-the-art algorithms, the proposed algorithm does not give the best result, but comparable results are obtained, and we believe they could be improved with appropriate training data. In this paper, melody extraction and the proposed algorithm are introduced first, the proposed algorithm is then explained in detail, and finally our experiments and a comparison of results are presented.
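
The frame-level pitch classification idea can be sketched as follows; the feature-image dimensions, number of pitch classes, and network layers are assumptions for illustration, not the paper's architecture.

```python
# A minimal sketch of a frame-level pitch classifier, assuming each training example is a
# single-channel "feature image" (frequency bands x time frames) labeled with one of
# N_PITCHES note classes; all sizes below are hypothetical.
import torch
import torch.nn as nn

N_PITCHES = 61               # assumed number of note classes (e.g., a 5-octave range)
FEATURE_SHAPE = (1, 64, 32)  # assumed (channels, frequency bands, frames)

class FramePitchCNN(nn.Module):
    def __init__(self, n_classes=N_PITCHES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (FEATURE_SHAPE[1] // 4) * (FEATURE_SHAPE[2] // 4), 128),
            nn.ReLU(),
            nn.Linear(128, n_classes),   # one logit per candidate pitch
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# One forward pass over a batch of feature images yields per-frame pitch predictions.
model = FramePitchCNN()
batch = torch.randn(8, *FEATURE_SHAPE)
logits = model(batch)          # shape: (8, N_PITCHES)
pitch_ids = logits.argmax(dim=1)
```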

Visual Object Tracking Fusing CNN and Color Histogram based Tracker and Depth Estimation for Automatic Immersive Audio Mixing

  • Park, Sung-Jun;Islam, Md. Mahbubul;Baek, Joong-Hwan
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14, No. 3 / pp. 1121-1141 / 2020
  • We propose a robust visual object tracking algorithm that fuses a convolutional neural network tracker, trained offline on a large number of video repositories, with a color histogram based tracker to track objects for immersive audio mixing. Our algorithm addresses the occlusion and large-movement problems of the CNN-based GOTURN generic object tracker. The key idea is to train, offline, a binary classifier on the color histogram similarity values estimated by both trackers, use it to select the appropriate tracker for the target, and update both trackers with the predicted bounding box position so that tracking continues. Furthermore, a histogram similarity constraint is applied before updating the trackers to maximize tracking accuracy. Finally, we compute the depth (z) of the target object with a prominent unsupervised monocular depth estimation algorithm to obtain the 3D position needed to mix immersive audio onto that object. The proposed algorithm shows about 2% higher accuracy than the GOTURN algorithm on the VOT2014 tracking benchmark. Our tracker can also track multiple objects by running the single-object tracker per target, although this is not demonstrated on any MOT benchmark.
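
The tracker-selection step can be illustrated with a simplified OpenCV sketch that compares each tracker's predicted box against a reference color histogram; the paper trains a binary classifier on such similarity values, whereas the stand-in below simply keeps the more similar box.

```python
# A simplified sketch of the tracker-selection idea: compare the color-histogram similarity
# of each tracker's predicted box against the target's reference histogram and keep the more
# consistent prediction. Function names and histogram settings are illustrative assumptions.
import cv2
import numpy as np

def hsv_histogram(frame, box):
    """Normalized H-S histogram of the image patch inside box = (x, y, w, h)."""
    x, y, w, h = [int(v) for v in box]
    patch = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([patch], [0, 1], None, [30, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def select_box(frame, cnn_box, hist_box, ref_hist):
    """Pick the candidate box whose appearance is closer to the reference histogram."""
    sim_cnn = cv2.compareHist(ref_hist.astype(np.float32),
                              hsv_histogram(frame, cnn_box).astype(np.float32),
                              cv2.HISTCMP_CORREL)
    sim_hist = cv2.compareHist(ref_hist.astype(np.float32),
                               hsv_histogram(frame, hist_box).astype(np.float32),
                               cv2.HISTCMP_CORREL)
    return cnn_box if sim_cnn >= sim_hist else hist_box
```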

Effective Hand Gesture Recognition by Key Frame Selection and 3D Neural Network

  • Hoang, Nguyen Ngoc;Lee, Guee-Sang;Kim, Soo-Hyung;Yang, Hyung-Jeong
    • 스마트미디어저널 / Vol. 9, No. 1 / pp. 23-29 / 2020
  • This paper presents an approach to dynamic hand gesture recognition based on a 3D Convolutional Neural Network (3D_CNN), later extended to 3D Residual Networks (3D_ResNet), combined with neural network based key frame selection. Typically, a 3D deep neural network classifies gestures from image frames randomly sampled from a video. In this work, to improve classification performance, we use key frames that represent the overall video as the input to the classification network. The key frames are extracted by SegNet instead of conventional clustering algorithms for video summarization (VSUMM), which require heavy computation; by using a deep neural network, key frame selection can be performed in real time. Experiments are conducted with 3D convolutional models, namely 3D_CNN, Inflated 3D_CNN (I3D), and 3D_ResNet, for gesture classification. Our algorithm achieves up to 97.8% classification accuracy on the Cambridge gesture dataset. The experimental results show that the proposed approach is efficient and outperforms existing methods.
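
A minimal PyTorch sketch of classifying a clip of selected key frames with a 3D CNN is shown below; the clip length, input resolution, and layer sizes are assumptions, and the SegNet-based key-frame selector is abstracted into a plain index list.

```python
# A minimal 3D-CNN sketch for clip classification, assuming a fixed number of selected key
# frames per gesture clip; the key-frame selection network is replaced by a dummy index list.
import torch
import torch.nn as nn

NUM_KEY_FRAMES = 16      # assumed clip length after key-frame selection
NUM_CLASSES = 9          # number of gesture classes in the Cambridge hand-gesture dataset

class Gesture3DCNN(nn.Module):
    def __init__(self, n_classes=NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, clip):                     # clip: (batch, 3, frames, H, W)
        return self.classifier(self.features(clip).flatten(1))

def gather_key_frames(video, key_indices):
    """Keep only the selected key frames; video is (3, T, H, W)."""
    return video[:, key_indices]

video = torch.randn(3, 60, 112, 112)                         # a dummy 60-frame clip
key_indices = torch.linspace(0, 59, NUM_KEY_FRAMES).long()   # stand-in for learned selection
clip = gather_key_frames(video, key_indices).unsqueeze(0)
logits = Gesture3DCNN()(clip)                                # shape: (1, NUM_CLASSES)
```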

Autonomous-Driving Vehicle Learning Environments using Unity Real-time Engine and End-to-End CNN Approach (유니티 실시간 엔진과 End-to-End CNN 접근법을 이용한 자율주행차 학습환경)

  • 사비르 호사인;이덕진
    • 로봇학회논문지 / Vol. 14, No. 2 / pp. 122-130 / 2019
  • Collecting rich but meaningful training data plays a key role in machine learning and deep learning research for self-driving vehicles. This paper gives a detailed overview of existing open-source simulators that can be used to train self-driving vehicles. After reviewing these simulators, we propose a new approach to building a synthetic autonomous vehicle simulation platform suitable for learning and training artificial intelligence algorithms. Specifically, we develop a synthetic simulator with various realistic situations and weather conditions that allow an autonomous shuttle to learn realistic scenarios and handle unexpected events. The virtual environment mimics the behavior of a real shuttle vehicle in the physical world: instead of running the whole training experiment physically, scenarios in 3D virtual worlds are used to compute the parameters and train the model. From the simulator, the user can obtain data for various situations and use it for training. Flexible options are available to choose sensors, monitor the output, and implement any autonomous driving algorithm. Finally, we verify the effectiveness of the developed simulator by implementing an end-to-end CNN algorithm to train a self-driving shuttle.
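
The end-to-end idea, regressing a steering command directly from a simulator camera image, can be sketched as follows; the network layout and image size are assumptions in the spirit of NVIDIA's PilotNet rather than the authors' exact model.

```python
# A minimal end-to-end steering sketch: a CNN regresses the steering angle directly from a
# front-camera image. Image size, layer sizes, and the format of the data exported from the
# Unity simulator are illustrative assumptions.
import torch
import torch.nn as nn

class SteeringCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(), nn.Linear(64, 50), nn.ReLU(), nn.Linear(50, 1),
        )

    def forward(self, image):                 # image: (batch, 3, H, W) from the simulator camera
        return self.regressor(self.features(image))

model = SteeringCNN()
images = torch.randn(4, 3, 66, 200)           # assumed camera crop resolution
angles = torch.randn(4, 1)                    # steering labels recorded while driving in simulation
loss = nn.functional.mse_loss(model(images), angles)
loss.backward()
```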

Ball Grid Array Solder Void Inspection Using Mask R-CNN

  • Kim, Seung Cheol;Jeon, Ho Jeong;Hong, Sang Jeen
    • 반도체디스플레이기술학회지 / Vol. 20, No. 2 / pp. 126-130 / 2021
  • The ball grid array (BGA) is one of the packaging methods used in high-density printed circuit boards. Solder void defects caused by voids in the solder ball during the BGA process do not directly affect product reliability, but depending on their size or location they may accelerate aging of the device on the PCB layer or interface surface. Void inspection is important because it is related to product yield. The most important step in the optical inspection of solder voids is the segmentation of solder and void. Although several segmentation algorithms exist for vision inspection, none of them can segment every image ideally, and X-ray images with poor contrast and high noise are difficult to handle with conventional image processing. This paper suggests addressing this problem with Mask R-CNN instead of a hand-crafted digital image processing algorithm. The Mask R-CNN model can be trained with images pre-processed to increase contrast or alleviate noise, which provides a more efficient system for complex object segmentation than the conventional approach.
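
Adapting an off-the-shelf Mask R-CNN to this two-class (solder, void) segmentation task might look like the torchvision sketch below; the class labels, image handling, and training details are assumptions, not the paper's configuration.

```python
# A minimal sketch of adapting torchvision's Mask R-CNN to two foreground classes
# (solder ball and void) for X-ray images; label assignments are hypothetical.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 3  # background + solder + void

model = maskrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box and mask heads so the network predicts our two foreground classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, NUM_CLASSES)

# Inference on a (pre-processed, contrast-enhanced) X-ray image tensor scaled to [0, 1].
model.eval()
image = torch.rand(3, 512, 512)               # stand-in for a real X-ray image
with torch.no_grad():
    prediction = model([image])[0]            # dict with 'boxes', 'labels', 'scores', 'masks'
void_masks = prediction["masks"][prediction["labels"] == 2]  # assuming label 2 = void
```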

A Study on the Recognition of English Pronunciation based on Artificial Intelligence (인공지능 기반 영어 발음 인식에 관한 연구)

  • 이철승;백혜진
    • 한국전자통신학회논문지 / Vol. 16, No. 3 / pp. 519-524 / 2021
  • The Fourth Industrial Revolution has recently attracted attention from countries around the world, led by the major advanced economies. Artificial intelligence, a core technology of the Fourth Industrial Revolution, is evolving by converging with various fields; it is strongly influencing the edutech sector, where much interest and effort are being devoted to transforming education. In this paper, we build an experimental environment using the DTW speech recognition algorithm, train deep learning models on various native and non-native speaker data, and measure the similarity of English pronunciation in comparison with a CNN algorithm, so that non-native speakers can correct their pronunciation toward that of native speakers.
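
The DTW comparison at the core of this study can be sketched in a few lines of NumPy; the MFCC features and length normalization below are assumptions used only to illustrate how a pronunciation distance could be computed.

```python
# A minimal DTW sketch for comparing two pronunciations, assuming both utterances have already
# been converted to MFCC frame sequences (arrays of shape (frames, coefficients)).
import numpy as np

def dtw_distance(ref, test):
    """Dynamic time warping cost between two feature sequences (smaller = more similar)."""
    n, m = len(ref), len(test)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(ref[i - 1] - test[j - 1])    # local frame distance
            cost[i, j] = d + min(cost[i - 1, j],             # insertion
                                 cost[i, j - 1],             # deletion
                                 cost[i - 1, j - 1])         # match
    return cost[n, m] / (n + m)                              # length-normalized cost

# Example: compare a learner's MFCC sequence against a native speaker's reference.
native = np.random.randn(120, 13)       # hypothetical reference utterance
learner = np.random.randn(140, 13)      # hypothetical non-native utterance
print("pronunciation distance:", dtw_distance(native, learner))
```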

ConvXGB: A new deep learning model for classification problems based on CNN and XGBoost

  • Thongsuwan, Setthanun;Jaiyen, Saichon;Padcharoen, Anantachai;Agarwal, Praveen
    • Nuclear Engineering and Technology / Vol. 53, No. 2 / pp. 522-531 / 2021
  • We describe a new deep learning model, Convolutional eXtreme Gradient Boosting (ConvXGB), for classification problems based on convolutional neural networks and Chen et al.'s XGBoost. As well as image data, ConvXGB also supports general classification problems through a data preprocessing module. ConvXGB consists of several stacked convolutional layers that learn features of the input automatically, followed by XGBoost as the last layer for predicting the class labels. The ConvXGB model is simplified by reducing the number of parameters under appropriate conditions, since it is not necessary to re-adjust the weight values in a back-propagation cycle. Experiments on several data sets from the UCL Repository, including image and general data sets, showed that our model handled the classification problems slightly better than CNN and XGBoost alone for all tested data sets, and was sometimes significantly better.
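
The ConvXGB idea, convolutional layers for feature learning with XGBoost as the final classifier, can be sketched as follows; the layer sizes and dummy data are illustrative assumptions, not the published architecture.

```python
# A conceptual sketch: convolutional layers produce feature vectors and XGBoost, rather than a
# softmax layer, predicts the class labels. All shapes and settings here are hypothetical.
import torch
import torch.nn as nn
import xgboost as xgb

class ConvFeatureExtractor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )

    def forward(self, x):
        return self.net(x)

extractor = ConvFeatureExtractor()

# Dummy data standing in for a small image classification set (e.g., 28x28 grayscale).
images = torch.randn(256, 1, 28, 28)
labels = torch.randint(0, 10, (256,))

# The convolutional part is used purely as a feature extractor here (no backprop through it),
# and XGBoost is fit on the resulting feature vectors as the final classification layer.
with torch.no_grad():
    feats = extractor(images).numpy()
clf = xgb.XGBClassifier(n_estimators=100, max_depth=4)
clf.fit(feats, labels.numpy())
pred = clf.predict(feats[:5])
```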

Development of an Efficient 3D Object Recognition Algorithm for Robotic Grasping in Cluttered Environments (혼재된 환경에서의 효율적 로봇 파지를 위한 3차원 물체 인식 알고리즘 개발)

  • 송동운;이재봉;이승준
    • 로봇학회논문지 / Vol. 17, No. 3 / pp. 255-263 / 2022
  • 3D object detection pipelines often incorporate RGB-based object detection methods such as YOLO, which detect object classes and bounding boxes in the RGB image. However, in complex environments where objects are heavily cluttered, bounding-box approaches may show degraded performance due to overlapping bounding boxes. Mask-based methods such as Mask R-CNN handle such situations better thanks to their detailed object masks, but they require much longer data preparation time than bounding-box-based approaches. In this paper, we present a 3D object recognition pipeline that uses either the YOLO or Mask R-CNN real-time object detection algorithm, a K-nearest clustering algorithm, a mask reduction algorithm, and finally a Principal Component Analysis (PCA) algorithm to efficiently detect the 3D poses of objects in a complex environment. Furthermore, we also present an improved YOLO-based 3D object detection algorithm that uses a prioritized heightmap clustering algorithm to handle overlapping bounding boxes. The suggested algorithms were successfully used at the Artificial-Intelligence Robot Challenge (ARC) 2021 competition with excellent results.
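
The final PCA step of the pipeline, estimating an object's position and orientation from its segmented 3D points, can be sketched as follows; the upstream detection and clustering stages are assumed to have already produced the point set.

```python
# A minimal sketch of the PCA pose-estimation step: given the 3D points belonging to one
# segmented object, the principal axes give an orientation estimate and the centroid gives
# the position. The synthetic point cloud below is a stand-in for real segmented data.
import numpy as np

def estimate_pose_pca(points):
    """points: (N, 3) array of an object's 3D points; returns (centroid, rotation matrix)."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Eigen-decomposition of the covariance matrix; eigenvectors are the principal axes.
    cov = np.cov(centered.T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]            # sort axes from longest to shortest extent
    axes = eigvecs[:, order]
    if np.linalg.det(axes) < 0:                  # enforce a right-handed frame
        axes[:, -1] *= -1
    return centroid, axes

# Example with a synthetic elongated cluster standing in for a segmented object.
pts = np.random.randn(500, 3) * np.array([0.10, 0.03, 0.01]) + np.array([0.4, 0.0, 0.2])
position, orientation = estimate_pose_pca(pts)
```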

A Comparison of Meta-learning and Transfer-learning for Few-shot Jamming Signal Classification

  • Jin, Mi-Hyun;Koo, Ddeo-Ol-Ra;Kim, Kang-Suk
    • Journal of Positioning, Navigation, and Timing / Vol. 11, No. 3 / pp. 163-172 / 2022
  • Typical anti-jamming technologies based on array antennas, Space-Time Adaptive Processing (STAP) and Space-Frequency Adaptive Processing (SFAP), are very effective algorithms for nulling and beamforming. However, they do not perform equally well for all types of jamming signals. If the anti-jamming algorithm is not optimized for each signal type, anti-jamming performance deteriorates and the operational stability of the system worsens due to unnecessary computation. Therefore, a jamming classification technique is required to obtain optimal anti-jamming performance. Machine learning, which has recently been in the spotlight, can be considered for classifying jamming signals. In general, supervised learning for classification requires a huge amount of data and new training for unfamiliar signals. For jamming signal classification, it is difficult to obtain a large amount of data because an outdoor jamming signal reception environment is difficult to configure and the attacker's signal type is unknown. Therefore, this paper proposes a few-shot jamming signal classification technique using meta-learning and transfer learning to train the model with a small amount of data. A training dataset is constructed from the anti-jamming algorithm's input data within the GNSS receiver when jamming signals are applied. For meta-learning, the Model-Agnostic Meta-Learning (MAML) algorithm with a general Convolutional Neural Network (CNN) model is used, and the same CNN model is used for transfer learning. They are trained through episodic training using training datasets on our Python-based simulator. The results show that both algorithms can be trained with less data and respond immediately to new signal types. The performances of the two algorithms are also compared to determine which is more suitable for classifying jamming signals.
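
A simplified episodic few-shot sketch is given below: it adapts a small CNN on a support set and evaluates on a query set, which mirrors the transfer-learning baseline, while full MAML would additionally meta-optimize the shared initialization across many such episodes. All shapes and episode sizes are assumptions.

```python
# A simplified sketch of one N-way K-shot episode with dummy spectrogram-like inputs standing
# in for the receiver's anti-jamming input data; the episode configuration is hypothetical.
import torch
import torch.nn as nn

N_WAY, K_SHOT, N_QUERY = 5, 5, 15   # assumed episode configuration

class SmallCNN(nn.Module):
    def __init__(self, n_classes=N_WAY):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def run_episode(model, support_x, support_y, query_x, query_y, inner_steps=5, lr=1e-2):
    """Adapt on the support set, then report accuracy on the query set."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(inner_steps):
        optimizer.zero_grad()
        loss = nn.functional.cross_entropy(model(support_x), support_y)
        loss.backward()
        optimizer.step()
    with torch.no_grad():
        acc = (model(query_x).argmax(1) == query_y).float().mean().item()
    return acc

support_x = torch.randn(N_WAY * K_SHOT, 1, 32, 32)
support_y = torch.arange(N_WAY).repeat_interleave(K_SHOT)
query_x = torch.randn(N_WAY * N_QUERY, 1, 32, 32)
query_y = torch.arange(N_WAY).repeat_interleave(N_QUERY)
print("episode accuracy:", run_episode(SmallCNN(), support_x, support_y, query_x, query_y))
```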