• Title/Summary/Keyword: Deep Learning Dataset

Study on the Video Stabilizer based on a Triplet CNN and Training Dataset Synthesis (Triplet CNN과 학습 데이터 합성 기반 비디오 안정화기 연구)

  • Yang, Byongho;Lee, Myeong-jin
    • Journal of Broadcast Engineering / v.25 no.3 / pp.428-438 / 2020
  • Jitter in digital video lowers visibility and degrades the efficiency of image processing and compression. In this paper, we propose a video stabilizer architecture based on a triplet CNN and a method of synthesizing training datasets based on video synthesis. Compared with a conventional deep-learning video stabilization method, the proposed video stabilizer reduces wobbling distortion.
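
As an illustration only, the sketch below shows what a triplet-input stabilizer of this kind might look like: three consecutive frames are stacked on the channel axis and a small CNN regresses affine warp parameters for the current frame. The layer sizes, the affine parameterization, and the TripletStabilizer name are assumptions, not the architecture from the paper.

```python
# A minimal sketch of a triplet-input CNN stabilizer (illustrative assumptions only).
import torch
import torch.nn as nn

class TripletStabilizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # 6 outputs = parameters of a 2x3 affine stabilization transform
        self.head = nn.Linear(128, 6)

    def forward(self, prev_f, cur_f, next_f):
        x = torch.cat([prev_f, cur_f, next_f], dim=1)          # (B, 3, H, W)
        theta = self.head(self.features(x).flatten(1)).view(-1, 2, 3)
        # Warp the current frame with the predicted affine grid
        grid = nn.functional.affine_grid(theta, cur_f.size(), align_corners=False)
        return nn.functional.grid_sample(cur_f, grid, align_corners=False)

frames = [torch.rand(1, 1, 128, 128) for _ in range(3)]        # prev, current, next
stabilized = TripletStabilizer()(*frames)                      # (1, 1, 128, 128)
```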

Content-Aware Convolutional Neural Network for Object Recognition Task

  • Poernomo, Alvin;Kang, Dae-Ki
    • International journal of advanced smart convergence / v.5 no.3 / pp.1-7 / 2016
  • In existing Convolutional Neural Networks (CNNs) for object recognition, few efforts are known to reduce noise in the input images. Both the convolution and pooling layers extract features without considering image noise, treating all pixels as equally important. In the computer vision field, weighting pixel importance has been studied: seam carving resizes an image by sacrificing the least important pixels, leaving only the most important ones. We propose a new way to combine the seam carving approach with an existing CNN model for object recognition. We attempt to remove the noise, i.e., the "unimportant" pixels, from the image before convolution and pooling in order to obtain better feature representations. Our model shows promising results on the CIFAR-10 dataset.
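
For readers unfamiliar with seam carving, the following is a minimal numpy sketch of gradient-energy seam removal of the kind that could precede convolution and pooling. It is a generic seam-carving routine, not the paper's exact integration with the CNN.

```python
# Remove one minimum-energy vertical seam from a 2D grayscale image (generic sketch).
import numpy as np

def remove_vertical_seam(gray):
    h, w = gray.shape
    # Simple gradient-magnitude energy map
    energy = np.abs(np.gradient(gray, axis=0)) + np.abs(np.gradient(gray, axis=1))
    cost = energy.copy()
    for i in range(1, h):                       # dynamic-programming cumulative cost
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            cost[i, j] += cost[i - 1, lo:hi].min()
    out = np.empty((h, w - 1), dtype=gray.dtype)
    j = int(cost[-1].argmin())                  # backtrack the cheapest seam
    for i in range(h - 1, -1, -1):
        out[i] = np.delete(gray[i], j)
        if i > 0:
            lo, hi = max(j - 1, 0), min(j + 2, w)
            j = lo + int(cost[i - 1, lo:hi].argmin())
    return out

img = np.random.rand(32, 32)
narrower = remove_vertical_seam(img)            # shape (32, 31)
```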

Document Summarization Model Based on General Context in RNN

  • Kim, Heechan;Lee, Soowon
    • Journal of Information Processing Systems / v.15 no.6 / pp.1378-1391 / 2019
  • In recent years, automatic document summarization has been widely studied in natural language processing thanks to the remarkable progress of deep learning models. To decode a word, existing models for abstractive summarization usually represent the context of a document as a weighted sum of the hidden states of the input words. Because these weights change at each decoding step, they reflect only the local context of the document, which makes it difficult to generate a summary that reflects its overall context. To solve this problem, we introduce the notion of a general context and propose a summarization model based on it. The general context reflects the overall context of the document and is independent of the decoding step. Experimental results on the CNN/Daily Mail dataset show that the proposed model outperforms existing models.
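
The sketch below illustrates the distinction the abstract draws between a per-step (local) attention context and a step-independent general context computed once per document. The dimensions, scoring functions, and the way the two contexts are combined are illustrative assumptions, not the paper's formulation.

```python
# Local (per-step) attention context vs. a "general context" computed once per document.
import torch
import torch.nn as nn

class GeneralContextAttention(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.score = nn.Linear(hidden * 2, 1)      # scores decoder state vs. encoder state
        self.global_score = nn.Linear(hidden, 1)   # step-independent scores
        self.combine = nn.Linear(hidden * 3, hidden)

    def general_context(self, enc_states):
        # Computed once per document: weights do not depend on the decoding step.
        w = torch.softmax(self.global_score(enc_states), dim=1)   # (B, T, 1)
        return (w * enc_states).sum(dim=1)                        # (B, H)

    def forward(self, dec_state, enc_states, gen_ctx):
        # Local context: weights recomputed at every decoding step.
        T = enc_states.size(1)
        pair = torch.cat([enc_states, dec_state.unsqueeze(1).expand(-1, T, -1)], dim=-1)
        a = torch.softmax(self.score(pair), dim=1)                # (B, T, 1)
        local_ctx = (a * enc_states).sum(dim=1)                   # (B, H)
        return torch.tanh(self.combine(torch.cat([dec_state, local_ctx, gen_ctx], dim=-1)))

enc = torch.rand(2, 40, 256)                   # 40 encoded input words
attn = GeneralContextAttention()
g = attn.general_context(enc)                  # fixed for the whole document
out = attn(torch.rand(2, 256), enc, g)         # one decoding step
```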

Lightweight image classifier for CIFAR-10

  • Sharma, Akshay Kumar;Rana, Amrita;Kim, Kyung Ki
    • Journal of Sensor Science and Technology / v.30 no.5 / pp.286-289 / 2021
  • Image classification is one of the fundamental applications of computer vision: it enables a system to identify an object in an image. Recently, image classification applications have broadened their scope from computer applications to edge devices. The convolutional neural network (CNN) is the main class of deep learning neural networks widely used in computer vision tasks, and it delivers high accuracy. However, CNN algorithms use a large number of parameters and incur high computational costs, which hinders their implementation on edge hardware devices. To address this issue, this paper proposes a lightweight image classifier that provides good accuracy while using fewer parameters. The proposed classifier splits the input into three paths and uses different scales of receptive fields to extract more feature maps while using fewer parameters during training, resulting in a small model. Tested on the CIFAR-10 dataset, the model achieves an accuracy of 90% with 0.26M parameters, which is better than state-of-the-art models, and it can be implemented on edge devices.
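
A minimal sketch of a three-path block with different receptive fields, in the spirit of the classifier described above. The kernel sizes, channel counts, and depth are assumptions; the sketch does not reproduce the 0.26M-parameter model.

```python
# Three parallel paths with 1x1, 3x3, and 5x5 receptive fields, concatenated per block.
import torch
import torch.nn as nn

class ThreePathBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        branch = out_ch // 3
        self.p1 = nn.Conv2d(in_ch, branch, 1)                           # small receptive field
        self.p3 = nn.Conv2d(in_ch, branch, 3, padding=1)                # medium
        self.p5 = nn.Conv2d(in_ch, out_ch - 2 * branch, 5, padding=2)   # large
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        x = torch.cat([self.p1(x), self.p3(x), self.p5(x)], dim=1)
        return torch.relu(self.bn(x))

class LightweightClassifier(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.body = nn.Sequential(
            ThreePathBlock(3, 48), nn.MaxPool2d(2),
            ThreePathBlock(48, 96), nn.MaxPool2d(2),
            ThreePathBlock(96, 192), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(192, num_classes)

    def forward(self, x):
        return self.head(self.body(x).flatten(1))

logits = LightweightClassifier()(torch.rand(4, 3, 32, 32))    # CIFAR-10-sized input
```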

Speech Emotion Recognition Using 2D-CNN with Mel-Frequency Cepstrum Coefficients

  • Eom, Youngsik;Bang, Junseong
    • Journal of information and communication convergence engineering / v.19 no.3 / pp.148-154 / 2021
  • With the advent of context-aware computing, many attempts have been made to understand emotions. Among these, Speech Emotion Recognition (SER) recognizes a speaker's emotions from speech information. SER succeeds by selecting distinctive features and classifying them in an appropriate way. In this paper, the performance of SER using neural network models (e.g., a fully connected network (FCN) and a convolutional neural network (CNN)) with Mel-Frequency Cepstral Coefficients (MFCC) is examined in terms of the accuracy and distribution of emotion recognition. On the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) dataset, after tuning the model parameters, a two-dimensional Convolutional Neural Network (2D-CNN) with MFCC showed the best performance, with an average accuracy of 88.54% over five emotions (anger, happiness, calm, fear, and sadness) spoken by men and women. In addition, an examination of the distribution of recognition accuracies across the neural network models shows that the 2D-CNN with MFCC can be expected to achieve an overall accuracy of 75% or more.
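
A minimal sketch of the MFCC front end plus a small 2D CNN, assuming a librosa feature extractor. The number of MFCC coefficients, the layer sizes, and the synthetic input are illustrative assumptions, not the tuned model from the paper.

```python
# MFCC "image" extraction followed by a small 2D CNN over (coefficients x frames).
import numpy as np
import librosa
import torch
import torch.nn as nn

# One second of synthetic audio stands in for a RAVDESS utterance.
y = np.random.randn(22050).astype(np.float32)
mfcc = librosa.feature.mfcc(y=y, sr=22050, n_mfcc=40)           # (40, T)
x = torch.from_numpy(mfcc).float().unsqueeze(0).unsqueeze(0)    # (1, 1, 40, T)

cnn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 5),        # 5 emotions: anger, happiness, calm, fear, sadness
)

logits = cnn(x)              # (1, 5) emotion scores
```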

Rotation-robust text localization technique using deep learning (딥러닝 기반의 회전에 강인한 텍스트 검출 기법)

  • Choi, In-Kyu;Kim, Jewoo;Song, Hyok;Yoo, Jisang
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2019.06a / pp.80-81 / 2019
  • In this paper, we propose a technique for detecting arbitrarily oriented text in natural scene images. The basic framework for text detection is based on Faster R-CNN [1]. First, a Region Proposal Network (RPN) generates bounding boxes containing text of different orientations. Then, for each bounding box generated by the RPN, feature maps pooled at three different sizes are extracted and merged. From the merged feature map, the text/non-text score, the axis-aligned bounding box coordinates, and the rotated bounding box coordinates are all predicted. Finally, the detection results are obtained using Non-Maximum Suppression (NMS). Training and testing were carried out on the COCO Text 2017 dataset [2], and subjective evaluation confirmed that regions rotated appropriately for slanted text can be obtained.
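
The sketch below illustrates the multi-scale pooling and joint prediction heads described in the abstract: each proposal is pooled at three sizes, the pooled features are merged, and the text/non-text score, axis-aligned box, and rotated box are predicted together. The pooling shapes, channel counts, and the (cx, cy, w, h, angle) rotated-box parameterization are assumptions for illustration.

```python
# Multi-scale RoI pooling plus joint score / aligned-box / rotated-box heads (sketch).
import torch
import torch.nn as nn
from torchvision.ops import roi_align

class RotatedTextHead(nn.Module):
    def __init__(self, channels=256):
        super().__init__()
        self.pool_sizes = [(7, 7), (11, 3), (3, 11)]       # three pooling shapes
        in_dim = channels * sum(h * w for h, w in self.pool_sizes)
        self.fc = nn.Sequential(nn.Linear(in_dim, 1024), nn.ReLU())
        self.cls = nn.Linear(1024, 2)        # text / non-text score
        self.axis_box = nn.Linear(1024, 4)   # axis-aligned box offsets
        self.rot_box = nn.Linear(1024, 5)    # rotated box (cx, cy, w, h, angle)

    def forward(self, feat, boxes):
        pooled = [roi_align(feat, boxes, size, spatial_scale=1.0 / 16)
                  for size in self.pool_sizes]
        merged = torch.cat([p.flatten(1) for p in pooled], dim=1)
        h = self.fc(merged)
        return self.cls(h), self.axis_box(h), self.rot_box(h)

feat = torch.rand(1, 256, 38, 50)                       # backbone feature map
boxes = [torch.tensor([[10., 10., 200., 60.]])]         # one RPN proposal for the image
scores, aligned, rotated = RotatedTextHead()(feat, boxes)
```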

Deep Learning based Sentence Analysis for Query Generation (검색어 생성을 위한 딥 러닝 기반 문장 분석 연구)

  • Na, Seong-Won;Yoon, Kyoungro
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2018.06a / pp.336-337 / 2018
  • Recently, CNN-RNN architectures that extract visual information from an image and model the correlations among multi-label classification results to produce a sentence have made significant progress. Because the output of this architecture summarizes the information of an image in a sentence, it is rich in semantic information and can also be used for similar-content retrieval. However, when a person appears in the resulting sentence, the search returns overly broad and inaccurate results. In this paper, we therefore aim to generate more specific queries by recognizing people in the sentence and assigning them an identity. We treat this as a Named Entity Recognition problem, a subfield of natural language processing, and address it using the widely used Bidirectional-LSTM-CRF model and the CoNLL2003 dataset.
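
A minimal sketch of the bidirectional LSTM emission part of such a tagger in the CoNLL2003 setting. The CRF transition layer of the actual Bidirectional-LSTM-CRF model is replaced by a greedy per-token argmax to keep the sketch short; the vocabulary size, embedding size, and tag count are assumptions.

```python
# BiLSTM per-token tag emissions for NER; CRF decoding is omitted for brevity.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size=20000, emb=100, hidden=128, num_tags=9):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.emit = nn.Linear(hidden * 2, num_tags)    # per-token tag scores (emissions)

    def forward(self, token_ids):
        h, _ = self.lstm(self.emb(token_ids))
        return self.emit(h)                            # (B, T, num_tags)

tagger = BiLSTMTagger()
emissions = tagger(torch.randint(0, 20000, (1, 12)))   # one 12-token sentence
tags = emissions.argmax(dim=-1)                        # greedy stand-in for CRF decoding
```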

Breast Cancer Classification Using Convolutional Neural Network

  • Alshanbari, Eman;Alamri, Hanaa;Alzahrani, Walaa;Alghamdi, Manal
    • International Journal of Computer Science & Network Security / v.21 no.6 / pp.101-106 / 2021
  • Breast cancer is the leading cause of cancer deaths in women, and knowing the type of breast cancer at an early stage can help prevent the dangers of later stages. Because the performance of deep learning depends on a large amount of labeled data, this paper presents a convolutional neural network for classifying breast cancer images as benign or malignant. Our network contains 11 layers and ends with a softmax output. Experiments on the public BreakHis dataset show that the proposed method outperforms state-of-the-art methods.
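
A minimal sketch of a small CNN ending in a two-class softmax for benign/malignant classification. The abstract does not specify the 11-layer configuration, so the layer arrangement and channel counts below are assumptions.

```python
# Small CNN with a softmax over two classes for histopathology-patch classification.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(128, 2), nn.Softmax(dim=1),    # benign vs. malignant probabilities
)

probs = model(torch.rand(1, 3, 224, 224))    # e.g. a resized BreakHis-style patch
```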

Bottleneck-based Siam-CNN Algorithm for Object Tracking (객체 추적을 위한 보틀넥 기반 Siam-CNN 알고리즘)

  • Lim, Su-Chang;Kim, Jong-Chan
    • Journal of Korea Multimedia Society / v.25 no.1 / pp.72-81 / 2022
  • Visual object tracking is one of the most fundamental problems in computer vision: it localizes the region of a target object in a video with a bounding box. In this paper, a custom CNN is designed to extract strong and diverse object features. The network is constructed as a Siamese network for use as a feature extractor. The input images are passed through convolution blocks composed of bottleneck layers, which emphasize the features. The feature maps of the target object and the search area extracted from the Siamese network are fed into a local proposal network, which estimates the object region. The performance of the tracking algorithm was evaluated on the OTB2013 dataset, using the success plot and precision plot as evaluation metrics. In the experiments, the tracker achieved 0.611 on the success plot and 0.831 on the precision plot.
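
A minimal sketch of a bottleneck-based Siamese feature extractor. Cross-correlation of the two branches stands in for the paper's local proposal network, and the block configuration and channel counts are assumptions.

```python
# Siamese feature extractor built from residual bottleneck blocks, with a
# cross-correlation response map between the exemplar and search branches.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Bottleneck(nn.Module):
    def __init__(self, ch, mid):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, mid, 1), nn.ReLU(),               # squeeze
            nn.Conv2d(mid, mid, 3, padding=1), nn.ReLU(),
            nn.Conv2d(mid, ch, 1),                          # expand
        )

    def forward(self, x):
        return F.relu(x + self.body(x))                     # residual connection

class SiamFeature(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(),
            Bottleneck(64, 16), nn.MaxPool2d(2),
            Bottleneck(64, 16),
        )

    def forward(self, x):
        return self.net(x)

extract = SiamFeature()
target = extract(torch.rand(1, 3, 127, 127))     # exemplar (target) branch
search = extract(torch.rand(1, 3, 255, 255))     # search-area branch
response = F.conv2d(search, target)              # cross-correlation response map
```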

A Study on the Alternative Method of Video Characteristics Using Captioning in Text-Video Retrieval Model (텍스트-비디오 검색 모델에서의 캡션을 활용한 비디오 특성 대체 방안 연구)

  • Dong-hun, Lee;Chan, Hur;Hyeyoung, Park;Sang-hyo, Park
    • IEMEK Journal of Embedded Systems and Applications / v.17 no.6 / pp.347-353 / 2022
  • In this paper, we propose a method that performs text-video retrieval by replacing video features with captions. In general, existing embedding-based models require both joint embedding space construction and a CNN-based video encoding process, which demands a large amount of computation during training as well as inference. To overcome this problem, we introduce a video-captioning module and replace the visual features of a video with the captions it generates. Specifically, we adopt a caption generator that converts candidate videos into captions at inference time, enabling direct comparison between the text query and the candidate videos without a joint embedding space. Experiments show that the proposed model successfully reduces the amount of computation and the inference time by skipping visual processing and joint embedding space construction on two benchmark datasets, MSR-VTT and VATEX.
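
A minimal sketch of the retrieval step once candidate videos are represented by generated captions: the text query is compared directly against the captions. TF-IDF cosine similarity is used here as a simple stand-in for the paper's text-matching model, and the captions are hypothetical examples.

```python
# Rank candidate videos by comparing the query against their generated captions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

captions = {                                   # video_id -> caption from a captioning module
    "vid_001": "a man is cooking pasta in a kitchen",
    "vid_002": "a dog is catching a frisbee in a park",
    "vid_003": "two people are playing table tennis",
}

query = "someone cooking food"
vec = TfidfVectorizer().fit(list(captions.values()) + [query])
cap_mat = vec.transform(captions.values())
scores = cosine_similarity(vec.transform([query]), cap_mat)[0]

ranked = sorted(zip(captions, scores), key=lambda p: p[1], reverse=True)
print(ranked)                                  # best-matching video first
```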