• Title/Summary/Keyword: Deep Convolutional Neural Network (심층 합성 곱 신경망)

Search Results: 79

A Performance Comparison of Protein Profiles for the Prediction of Protein Secondary Structures (단백질 이차 구조 예측을 위한 단백질 프로파일의 성능 비교)

  • Chi, Sang-Mun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.22 no.1
    • /
    • pp.26-32
    • /
    • 2018
  • Protein secondary structures are important information for studying the evolution, structure, and function of proteins. Recently, deep learning methods have been actively applied to predict the secondary structure of proteins using only protein sequence information. In these methods, the most widely used input features are protein profiles transformed from protein sequences. In this paper, to obtain effective protein profiles, profiles were constructed using protein sequence search methods such as PSI-BLAST and HHblits. We adjusted the similarity threshold for selecting the homologous sequences used in profile construction, as well as the number of iterations in which homologous sequence information was incorporated into the profile. The resulting profiles were used as inputs to convolutional neural networks and recurrent neural networks to predict secondary structures. The profile created by adding evolutionary information only once proved effective.
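
A minimal sketch of the kind of setup this abstract describes, not the authors' code: a profile matrix (e.g., a PSI-BLAST PSSM with 20 scores per residue) is fed to a small 1D convolutional network that predicts a 3-state secondary structure label per residue. Layer sizes and kernel widths are illustrative assumptions.

```python
# A sketch, not the authors' network: a profile matrix (20 scores per
# residue) -> 1D CNN -> per-residue 3-state (helix/strand/coil) logits.
import torch
import torch.nn as nn

class ProfileCNN(nn.Module):
    def __init__(self, profile_dim=20, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(profile_dim, 64, kernel_size=11, padding=5), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=11, padding=5), nn.ReLU(),
        )
        self.classifier = nn.Conv1d(64, num_classes, kernel_size=1)

    def forward(self, profile):               # profile: (batch, 20, length)
        return self.classifier(self.features(profile))

# toy usage: one sequence of length 120 -> (1, 3, 120) logits
logits = ProfileCNN()(torch.randn(1, 20, 120))
```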

Method of Predicting Path Loss and Base Station Topography Classification using Artificial Intelligent in Mobile Communication Systems (이동통신 시스템에서 인공지능을 이용한 경로 손실 예측 및 기지국 지형 구분 방법)

  • Kim, Jaejeong;Lee, Heejun;Ji, Seunghwan
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.5
    • /
    • pp.703-713
    • /
    • 2022
  • Accurate and rapid network establishment is important in mobile communication systems. Currently, the base station parameters used to establish a network are determined by a cell planning tool. However, new cell planning must be performed for every new base station installation, and parameters unsuitable for the actual environment may be set because information that is not reflected in the cell planning tool, such as obstacles, is missing. In this paper, we propose methods for path loss prediction using a DNN and for topography classification using a CNN in a SON server. After topography classification, the SON server configures the base station parameters according to the topography and updates the parameters for each topography type. The proposed methods can automatically configure base station parameters that take topography information and environmental changes into account.
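
As a rough illustration of the path-loss branch (the paper does not specify its architecture here), the sketch below regresses path loss from a handful of hypothetical engineering features with a small fully connected network; the feature set and layer sizes are assumptions.

```python
# A sketch under assumed inputs (distance, frequency, antenna heights), not
# the paper's feature set or architecture: a small DNN regressing path loss.
import torch
import torch.nn as nn

path_loss_dnn = nn.Sequential(
    nn.Linear(4, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),                # predicted path loss in dB
)

# toy input: [distance_km, frequency_GHz, tx_height_m, rx_height_m]
x = torch.tensor([[1.2, 3.5, 30.0, 1.5]])
predicted_loss_db = path_loss_dnn(x)
```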

Object Detection on the Road Environment Using Attention Module-based Lightweight Mask R-CNN (주의 모듈 기반 Mask R-CNN 경량화 모델을 이용한 도로 환경 내 객체 검출 방법)

  • Song, Minsoo;Kim, Wonjun;Jang, Rae-Young;Lee, Ryong;Park, Min-Woo;Lee, Sang-Hwan;Choi, Myung-seok
    • Journal of Broadcast Engineering
    • /
    • v.25 no.6
    • /
    • pp.944-953
    • /
    • 2020
  • Object detection plays a crucial role in a self-driving system. With advances in image recognition based on deep convolutional neural networks, research on object detection has been actively explored. In this paper, we propose a lightweight version of Mask R-CNN, which has been the most widely used model for object detection, to efficiently predict the location and shape of various objects in the road environment. Furthermore, feature maps are adaptively re-calibrated to improve detection performance by applying attention modules to the network layers that play different roles within Mask R-CNN. Various experimental results on real driving scenes demonstrate that the proposed method maintains high detection performance with significantly fewer network parameters.
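
The re-calibration idea can be illustrated with a squeeze-and-excitation style channel attention block; this is a generic stand-alone sketch, not the module or placement actually used inside the paper's Mask R-CNN.

```python
# A generic squeeze-and-excitation style block: global average pooling
# produces per-channel weights that re-calibrate the feature map.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                     # x: (B, C, H, W)
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                          # adaptively re-calibrated map

recalibrated = ChannelAttention(256)(torch.randn(2, 256, 32, 32))
```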

Fraud Detection System Model Using Generative Adversarial Networks and Deep Learning (생성적 적대 신경망과 딥러닝을 활용한 이상거래탐지 시스템 모형)

  • Ye Won Kim;Ye Lim Yu;Hong Yong Choi
    • Information Systems Review
    • /
    • v.22 no.1
    • /
    • pp.59-72
    • /
    • 2020
  • Artificial intelligence is establishing itself as a familiar tool rather than an intractable concept. In line with this trend, the financial sector is also looking to improve existing systems, including the Fraud Detection System (FDS). Sophisticated cyber financial fraud is difficult to detect with the original rule-based FDS because payment environments have diversified and the number of electronic financial transactions has increased. To overcome the limitations of the present FDS, this paper suggests three types of artificial intelligence models: the Generative Adversarial Network (GAN), Deep Neural Network (DNN), and Convolutional Neural Network (CNN). The GAN shows how the data imbalance problem can be addressed, while the DNN and CNN show how abnormal financial trading patterns can be precisely detected. In the experiments of this paper, WGAN had the largest improvement effect on the data imbalance problem, while the DNN model was comparatively more effective for fraud classification.
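
A minimal sketch of how a trained GAN generator could be used to address the imbalance between normal and fraudulent transactions: synthesize extra minority-class feature vectors before training the downstream classifier. The latent and feature dimensions are assumptions, and the adversarial training loop and discriminator are omitted.

```python
# A sketch, not the paper's architecture: a GAN generator used to oversample
# the minority (fraud) class. Dimensions are assumptions.
import torch
import torch.nn as nn

latent_dim, feature_dim = 32, 30      # hypothetical transaction feature size

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, feature_dim),
)

# after adversarial training, synthesize extra fraud-like samples to balance
# the training set for the downstream DNN/CNN classifier
with torch.no_grad():
    synthetic_fraud = generator(torch.randn(1000, latent_dim))
```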

Development of Radar Super Resolution Algorithm based on a Deep Learning (딥러닝 기술 기반의 레이더 초해상화 알고리즘 기술 개발)

  • Ho-Jun Kim;Sumiya Uranchimeg;Hemie Cho;Hyun-Han Kwon
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2023.05a
    • /
    • pp.417-417
    • /
    • 2023
  • Urban flooding is a water-related disaster that can paralyze the main functions of a city, and the risk of flooding and inundation has recently increased due to localized heavy rainfall. Localized heavy rainfall refers to intense rain concentrated in a limited area over a short period, and radar is increasingly used for rainfall estimation and forecasting in urban areas. Radar is an instrument that estimates rainfall by analyzing the signal reflected from hydrometeors or clouds. The main purpose of the Korea Meteorological Administration's S-band weather radar is to detect weather phenomena over South Korea and to prepare for severe weather. Because its observation radius is wide, it is not well suited to urban areas, whereas X-band dual-polarization radar provides observations with high spatio-temporal resolution, so the accuracy of rainfall estimation and forecasting for urban areas is relatively high. Therefore, in this study we developed a technique that uses deep learning-based super resolution to generate high-resolution (HR) images from low-resolution (LR) S-band radar data. Super-resolution research started with simple interpolation methods such as nearest-neighbor and bicubic interpolation, and recent deep learning-based super-resolution algorithms are mostly built on convolutional neural networks (CNN), the most widely adopted architecture. The super-resolution model was constructed using X-band radar reflectivity data as the high-resolution (HR) target and S-band radar reflectivity data as the low-resolution (LR) input. The dataset was built around heavy rainfall events in Seoul from 2018 to 2020. The results of converting low-resolution images to high resolution with the deep super-resolution network trained on this dataset were evaluated with metrics such as the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM). This study is expected to enable the production of radar data with higher spatial resolution than existing methods.
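
A minimal SRCNN-style sketch of the LR-to-HR mapping described above, assuming the S-band field has already been interpolated to the X-band grid; the layer configuration is illustrative and not the model used in the study.

```python
# A generic SRCNN-style sketch (illustrative layer sizes), mapping an
# upsampled low-resolution reflectivity patch to a high-resolution estimate.
import torch
import torch.nn as nn

radar_sr = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=9, padding=4), nn.ReLU(),
    nn.Conv2d(64, 32, kernel_size=1),           nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=5, padding=2),
)

# toy usage: one 64x64 upsampled S-band patch -> HR reflectivity estimate
hr_estimate = radar_sr(torch.randn(1, 1, 64, 64))
```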


Comparison of Prediction Accuracy Between Classification and Convolution Algorithm in Fault Diagnosis of Rotatory Machines at Varying Speed (회전수가 변하는 기기의 고장진단에 있어서 특성 기반 분류와 합성곱 기반 알고리즘의 예측 정확도 비교)

  • Moon, Ki-Yeong;Kim, Hyung-Jin;Hwang, Se-Yun;Lee, Jang Hyun
    • Journal of Navigation and Port Research
    • /
    • v.46 no.3
    • /
    • pp.280-288
    • /
    • 2022
  • This study examined the diagnosis of abnormalities and faults in equipment whose rotational speed changes even during regular operation. The purpose of this study was to suggest a procedure for properly applying machine learning to time series data that exhibit non-stationary characteristics as the rotational speed changes. Anomaly and fault diagnosis was performed using machine learning: k-Nearest Neighbor (k-NN), Support Vector Machine (SVM), and Random Forest. To compare diagnostic accuracy, an autoencoder was used for anomaly detection and a convolution-based Conv1D model was additionally used for fault diagnosis. Feature vectors comprising statistical and frequency attributes were extracted, and normalization and dimensionality reduction were applied to them. Changes in the diagnostic accuracy of machine learning according to feature selection, normalization, and dimensionality reduction are explained, and the hyperparameter optimization process and layer structure are described for each algorithm. Finally, the results show that machine learning can accurately diagnose the failure of a variable-speed rotating machine given appropriate feature treatment, although convolution algorithms have been widely applied to the considered problem.
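
A compact sketch of the feature-based branch described above, using scikit-learn with hypothetical feature counts: extracted statistical/frequency features are normalized, reduced in dimension, and passed to a classical classifier.

```python
# A sketch of the feature-based branch with hypothetical feature counts:
# extracted features -> normalization -> dimensionality reduction -> SVM.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC

X = np.random.rand(200, 24)           # 24 statistical/frequency features
y = np.random.randint(0, 3, 200)      # e.g., normal / anomaly / fault labels

clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC())
clf.fit(X, y)
print(clf.score(X, y))                # training accuracy on the toy data
```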

Business Application of Convolutional Neural Networks for Apparel Classification Using Runway Image (합성곱 신경망의 비지니스 응용: 런웨이 이미지를 사용한 의류 분류를 중심으로)

  • Seo, Yian;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.1-19
    • /
    • 2018
  • A large amount of data is now available for the research and business sectors to extract knowledge from. These data can take the form of unstructured data such as audio, text, and images, and can be analyzed by deep learning methodology. Deep learning is now widely used for various estimation, classification, and prediction problems. In particular, the fashion business adopts deep learning techniques for apparel recognition, apparel search and retrieval engines, and automatic product recommendation. The core model of these applications is image classification using Convolutional Neural Networks (CNN). A CNN is made up of neurons that learn parameters such as weights as inputs pass through the network toward the outputs. A CNN has a layered structure well suited to image classification, as it is comprised of convolutional layers for generating feature maps, pooling layers for reducing the dimensionality of the feature maps, and fully connected layers for classifying the extracted features. However, most classification models have been trained on online product images, which are taken under controlled conditions, such as images of the apparel itself or of a professional model wearing the apparel. Such images may not be effective for training the classification model when one wants to classify street fashion or walking images, which are taken in uncontrolled situations and involve people's movement and unexpected poses. Therefore, we propose to train the model with a runway apparel image dataset, which captures mobility. This allows the classification model to be trained with far more variable data and enhances adaptation to diverse query images. To achieve both convergence and generalization of the model, we apply Transfer Learning to our training network. As Transfer Learning in CNNs is composed of pre-training and fine-tuning stages, we divide the training step into two. First, we pre-train our architecture with a large-scale dataset, the ImageNet dataset, which consists of 1.2 million images in 1,000 categories including animals, plants, activities, materials, instrumentations, scenes, and foods. We use GoogLeNet as our main architecture, as it has achieved high accuracy with efficiency in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Second, we fine-tune the network with our own runway image dataset. For the runway image dataset, we could not find any previously published public dataset, so we collected 2,426 images of 32 major fashion brands from Google Image Search, including Anna Molinari, Balenciaga, Balmain, Brioni, Burberry, Celine, Chanel, Chloe, Christian Dior, Cividini, Dolce and Gabbana, Emilio Pucci, Ermenegildo, Fendi, Giuliana Teso, Gucci, Issey Miyake, Kenzo, Leonard, Louis Vuitton, Marc Jacobs, Marni, Max Mara, Missoni, Moschino, Ralph Lauren, Roberto Cavalli, Sonia Rykiel, Stella McCartney, Valentino, Versace, and Yves Saint Laurent. We performed 10-fold experiments to account for the random generation of training data, and our proposed model achieved an accuracy of 67.2% on the final test. Our research offers several advantages over previous related studies: to the best of our knowledge, no previous study has trained a network for apparel image classification on a runway image dataset. We suggest the idea of training the model with images that capture all possible postures, denoted as mobility, by using our own runway apparel image dataset. Moreover, by applying Transfer Learning and using the checkpoint and parameters provided by TensorFlow Slim, we reduced the time spent training the classification model to about 6 minutes per experiment. This model can be used in many business applications where the query image may be a runway image, a product image, or a street fashion image. Specifically, runway query images can be used in a mobile application service during fashion week to facilitate brand search, street style query images can be classified during fashion editorial tasks to label the brand or style, and website query images can be processed by an e-commerce multi-complex service to provide item information or recommend similar items.
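
A minimal PyTorch sketch of the two-stage transfer-learning recipe described above (the paper itself used GoogLeNet with TensorFlow Slim checkpoints): load an ImageNet-pretrained GoogLeNet, freeze the convolutional features, and replace the final layer for the 32 runway brand classes. The weights argument assumes a recent torchvision; older versions use pretrained=True instead.

```python
# A sketch, not the authors' code: ImageNet pre-training (via pretrained
# weights) followed by fine-tuning a new 32-class head on runway images.
import torch.nn as nn
from torchvision import models

# load GoogLeNet with ImageNet weights (older torchvision: pretrained=True)
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                      # freeze pre-trained features
model.fc = nn.Linear(model.fc.in_features, 32)   # new head: 32 fashion brands
# fine-tuning would then train only model.fc on the runway image dataset
```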

Identification of Multiple Cancer Cell Lines from Microscopic Images via Deep Learning (심층 학습을 통한 암세포 광학영상 식별기법)

  • Park, Jinhyung;Choe, Se-woon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.05a
    • /
    • pp.374-376
    • /
    • 2021
  • For the diagnosis of cancer-related diseases in clinical practice, a pathological examination using a biopsy is essential after a basic diagnosis using imaging equipment. Proceeding with such a biopsy requires the assistance of specialists with expert knowledge, such as oncologists and clinical pathologists, as well as a minimum amount of time for confirmation. In recent years, research on building systems capable of automatically classifying cancer cells using artificial intelligence has been actively conducted. However, previous studies based on limited algorithms show limitations in the types of cells handled and in classification accuracy. In this study, we propose a method to identify a total of four cancer cell lines using a convolutional neural network, a kind of deep learning. Optical images obtained through cell culture were used to train EfficientNet after pre-processing such as identifying the locations of cells and segmenting the images with OpenCV. Various hyperparameters were applied to the EfficientNet-based model, and InceptionV3 was also trained to compare and analyze the performance. As a result, the cells were classified with a high accuracy of 96.8%, and this analysis method is expected to be helpful in confirming cancer.
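
A minimal sketch of the described pipeline idea, not the authors' implementation: locate candidate cell regions with OpenCV, crop and resize them, and score each crop with an EfficientNet classifier over four cell lines. The file name, thresholding choices, and untrained model are all illustrative assumptions (torchvision's efficientnet_b0 requires a reasonably recent version).

```python
# A sketch: OpenCV-based cell localisation followed by EfficientNet scoring.
import cv2
import torch
from torchvision import models

image = cv2.imread("cells.png")                 # hypothetical micrograph file
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# rough cell localisation: Otsu threshold + external contours (OpenCV 4.x API)
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

model = models.efficientnet_b0(num_classes=4)   # 4 cell lines; untrained here
model.eval()
with torch.no_grad():
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        crop = cv2.resize(image[y:y + h, x:x + w], (224, 224))
        tensor = torch.from_numpy(crop).permute(2, 0, 1).float().div(255)[None]
        logits = model(tensor)                  # scores for the 4 cell lines
```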


A study on combination of loss functions for effective mask-based speech enhancement in noisy environments (잡음 환경에 효과적인 마스크 기반 음성 향상을 위한 손실함수 조합에 관한 연구)

  • Jung, Jaehee;Kim, Wooil
    • The Journal of the Acoustical Society of Korea
    • /
    • v.40 no.3
    • /
    • pp.234-240
    • /
    • 2021
  • In this paper, mask-based speech enhancement is improved for effective speech recognition in noisy environments. In mask-based speech enhancement, the enhanced spectrum is obtained by multiplying the noisy speech spectrum by the mask. The VoiceFilter (VF) model is used for mask estimation, and the Spectrogram Inpainting (SI) technique is used to remove residual noise from the enhanced spectrum. In this paper, we propose a combined loss to further improve speech enhancement. To effectively remove the residual noise in the speech, the positive part of the Triplet loss is used together with the component loss. For the experiments, the TIMIT database was reconstructed using NOISEX-92 noise and background music samples under various Signal-to-Noise Ratio (SNR) conditions. The Source-to-Distortion Ratio (SDR), Perceptual Evaluation of Speech Quality (PESQ), and Short-Time Objective Intelligibility (STOI) were used as performance evaluation metrics. When the VF model was trained with the mean squared error and the SI model was trained with the combined loss, SDR, PESQ, and STOI improved by 0.5, 0.06, and 0.002, respectively, compared to the system trained only with the mean squared error.
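
A hypothetical sketch of the masking step and of a combined loss in the spirit described above: a reconstruction term plus a positive-pair distance term pulling the enhanced spectrum toward the clean target. The paper's exact component and Triplet loss formulations are not reproduced here.

```python
# A hypothetical sketch: mask-based enhancement plus a combined loss with a
# reconstruction term and a positive-pair distance term.
import torch
import torch.nn.functional as F

def enhance(noisy_spectrum, mask):
    # enhanced spectrum = noisy spectrum x estimated mask (element-wise)
    return noisy_spectrum * mask

def combined_loss(enhanced, clean, alpha=1.0):
    reconstruction = F.mse_loss(enhanced, clean)
    # "positive" term: distance between the enhanced (anchor) and the clean
    # (positive) spectra, averaged over the batch
    positive_term = torch.linalg.norm(enhanced - clean, dim=-1).mean()
    return reconstruction + alpha * positive_term

noisy = torch.rand(4, 257, 100)       # (batch, frequency bins, frames)
mask = torch.rand(4, 257, 100)
clean = torch.rand(4, 257, 100)
loss = combined_loss(enhance(noisy, mask), clean)
```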

Multi-Object Goal Visual Navigation Based on Multimodal Context Fusion (멀티모달 맥락정보 융합에 기초한 다중 물체 목표 시각적 탐색 이동)

  • Jeong Hyun Choi;In Cheol Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.9
    • /
    • pp.407-418
    • /
    • 2023
  • Multi-Object Goal Visual Navigation (MultiOn) is a visual navigation task in which an agent must visit multiple object goals in an unknown indoor environment in a given order. Existing models for the MultiOn task suffer from the limitation that they cannot utilize an integrated view of multimodal context because they use only a unimodal context map. To overcome this limitation, in this paper we propose a novel deep neural network-based agent model for the MultiOn task. The proposed model, MCFMO, uses a multimodal context map containing visual appearance features, semantic features of environmental objects, and goal object features. Moreover, the proposed model effectively fuses these three heterogeneous feature maps into a global multimodal context map by using a point-wise convolutional neural network module. Lastly, the proposed model adopts an auxiliary task learning module to predict the observation status, goal direction, and goal distance, which can guide the agent to learn the navigation policy efficiently. Through various quantitative and qualitative experiments using the Habitat-Matterport3D simulation environment and scene dataset, we demonstrate the superiority of the proposed model.
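
The point-wise fusion can be sketched as a 1x1 convolution over the channel-wise concatenation of the three context maps; the channel sizes below are illustrative, not those of MCFMO.

```python
# A sketch of 1x1 (point-wise) convolutional fusion of three context maps.
import torch
import torch.nn as nn

class PointwiseFusion(nn.Module):
    def __init__(self, c_vis=32, c_sem=16, c_goal=8, c_out=64):
        super().__init__()
        self.fuse = nn.Conv2d(c_vis + c_sem + c_goal, c_out, kernel_size=1)

    def forward(self, vis, sem, goal):        # all maps share the same H x W
        return self.fuse(torch.cat([vis, sem, goal], dim=1))

fused = PointwiseFusion()(torch.randn(1, 32, 64, 64),
                          torch.randn(1, 16, 64, 64),
                          torch.randn(1, 8, 64, 64))
```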