• Title/Summary/Keyword: Deep Features


Research for Drone Target Classification Method Using Deep Learning Techniques (딥 러닝 기법을 이용한 무인기 표적 분류 방법 연구)

  • Soonhyeon Choi;Incheol Cho;Junseok Hyun;Wonjun Choi;Sunghwan Sohn;Jung-Woo Choi
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.27 no.2
    • /
    • pp.189-196
    • /
    • 2024
  • Classification of drones and birds is challenging due to diverse flight patterns and limited data availability. Previous research has focused on identifying the flight patterns of unmanned aerial vehicles by emphasizing dynamic features such as speed and heading. However, this approach tends to neglect crucial spatial information, making accurate discrimination of unmanned aerial vehicle characteristics difficult. Furthermore, traditional machine learning techniques have not offered training methods for situations with class-imbalanced data. In this paper, we propose a data processing method that preserves angle information while maintaining positional details, enabling the deep learning model to better comprehend the positional information of drones. Additionally, we introduce a training technique to address the issue of data imbalance.
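
The abstract does not spell out the imbalance-handling technique, so the following is only a minimal, hypothetical sketch of one common remedy (inverse-frequency class weighting) applied to a generic classifier; it is not the paper's method, and the data shapes, class ratio, and network are assumptions.

```python
# Hypothetical sketch of class-weighted training for imbalanced drone/bird data.
# This is NOT the paper's technique; it only illustrates one common remedy.
import numpy as np
import tensorflow as tf
from sklearn.utils.class_weight import compute_class_weight
from tensorflow.keras import layers, models

# Assumed placeholder data: 2-D track "images" encoding position/angle per sample.
x_train = np.random.rand(1000, 64, 64, 1).astype("float32")
y_train = np.random.choice([0, 1], size=1000, p=[0.9, 0.1])  # 0 = bird, 1 = drone

# Inverse-frequency weights so the rare class contributes more to the loss.
weights = compute_class_weight("balanced", classes=np.array([0, 1]), y=y_train)
class_weight = dict(enumerate(weights))

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, class_weight=class_weight)
```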

Violent crowd flow detection from surveillance cameras using deep transfer learning-gated recurrent unit

  • Elly Matul Imah;Riskyana Dewi Intan Puspitasari
    • ETRI Journal
    • /
    • v.46 no.4
    • /
    • pp.671-682
    • /
    • 2024
  • Violence can be committed anywhere, even in crowded places. It is hence necessary to monitor human activities for public safety. Surveillance cameras can monitor surrounding activities but require human assistance to continuously monitor every incident. Automatic violence detection is needed for early warning and fast response. However, such automation is still challenging because of low video resolution and blind spots. This paper uses ResNet50V2 and the gated recurrent unit (GRU) algorithm to detect violence in the Movies, Hockey, and Crowd video datasets. Spatial features were extracted from each frame sequence of the video using a pretrained ResNet50V2 model and were then classified by the best-performing trained GRU model. The experimental results were then compared with wavelet feature extraction methods and classification models such as the convolutional neural network and long short-term memory. The results show that the proposed combination of ResNet50V2 and GRU is robust and delivers the best performance in terms of accuracy, recall, precision, and F1-score. The use of ResNet50V2 for feature extraction can improve model performance.
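
The pipeline described above (a pretrained ResNet50V2 backbone feeding per-frame features into a GRU) can be sketched roughly as follows; the frame count, layer sizes, and binary output are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch: per-frame spatial features from a frozen ResNet50V2, aggregated by a GRU.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_FRAMES, H, W, C = 30, 224, 224, 3  # assumed frame-sequence shape

# Pretrained ResNet50V2 used purely as a per-frame spatial feature extractor.
backbone = tf.keras.applications.ResNet50V2(
    include_top=False, weights="imagenet", pooling="avg")
backbone.trainable = False

inputs = layers.Input(shape=(NUM_FRAMES, H, W, C))
# Apply the same backbone to every frame of the clip.
frame_features = layers.TimeDistributed(backbone)(inputs)  # (batch, frames, 2048)
x = layers.GRU(128)(frame_features)                        # temporal modelling
outputs = layers.Dense(1, activation="sigmoid")(x)         # violent vs. non-violent

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy",
                       tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])
```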

Deep Neural Architecture for Recovering Dropped Pronouns in Korean

  • Jung, Sangkeun;Lee, Changki
    • ETRI Journal
    • /
    • v.40 no.2
    • /
    • pp.257-265
    • /
    • 2018
  • Pronouns are frequently dropped in Korean sentences, especially in text messages in the mobile phone environment. Restoring dropped pronouns can be a beneficial preprocessing task for machine translation, information extraction, spoken dialog systems, and many other applications. In this work, we address the problem of dropped pronoun recovery by resolving two simultaneous subtasks: detecting zero-pronoun sentences and determining the type of dropped pronouns. The problems are statistically modeled by encoding the sentence and classifying types of dropped pronouns using a recurrent neural network (RNN) architecture. Various RNN-based encoding architectures were investigated, and the stacked RNN was shown to be the best model for Korean zero-pronoun recovery. The proposed method does not require any manual features to be implemented; nevertheless, it shows good performance.
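
As a rough illustration of the stacked-RNN encoder-plus-classifier idea described above, the sketch below assumes a word-index input, two stacked GRU layers, and a softmax over dropped-pronoun types (including a "none" class); the vocabulary size, tag set, and layer widths are assumptions, not values from the paper.

```python
# Sketch: stacked GRU encoder classifying a sentence into dropped-pronoun types
# (plus a "no dropped pronoun" class). All sizes below are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000   # assumed word/morpheme vocabulary size
MAX_LEN = 40         # assumed maximum sentence length in tokens
NUM_TYPES = 8        # assumed number of pronoun-type classes, incl. "none"

inputs = layers.Input(shape=(MAX_LEN,), dtype="int32")
x = layers.Embedding(VOCAB_SIZE, 128, mask_zero=True)(inputs)
x = layers.GRU(128, return_sequences=True)(x)   # first recurrent layer
x = layers.GRU(128)(x)                          # stacked second recurrent layer
outputs = layers.Dense(NUM_TYPES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```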

IoT Device Classification According to Context-aware Using Multi-classification Model

  • Zhang, Xu;Ryu, Shinhye;Kim, Sangwook
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.3
    • /
    • pp.447-459
    • /
    • 2020
  • The Internet of Things (IoT) paradigm has been flourishing for the last two decades. Researchers around the globe aim to map every real-world object to a virtual counterpart, and consequently the number of IoT devices is escalating exponentially. The rapid evolution of these IoT devices has created a major challenge: device classification. In order to classify devices comprehensively and accurately, this paper proposes a context-aware multi-classification model that classifies smart devices according to people's contexts. However, the classification features of contextual data from different contexts are difficult to extract, a problem that deep learning algorithms are capable of solving.
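
The abstract does not describe the network itself, so the following is only a minimal, hypothetical sketch of a deep classifier mapping an encoded context vector to device classes; the feature length, class count, and layer sizes are all assumptions.

```python
# Hypothetical sketch: a small dense network mapping contextual features
# (time, location, activity, etc., encoded numerically) to device classes.
# Feature layout and class count are assumptions, not the paper's design.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CONTEXT_FEATURES = 16   # assumed length of the encoded context vector
NUM_DEVICE_CLASSES = 10     # assumed number of device categories

model = models.Sequential([
    layers.Input(shape=(NUM_CONTEXT_FEATURES,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_DEVICE_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```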

Convolutional Neural Network Based Image Processing System

  • Kim, Hankil;Kim, Jinyoung;Jung, Hoekyung
    • Journal of information and communication convergence engineering
    • /
    • v.16 no.3
    • /
    • pp.160-165
    • /
    • 2018
  • This paper designs and develops an image processing system that integrates feature extraction and matching with a convolutional neural network (CNN), rather than relying on the conventional approach of processing feature extraction and matching separately in image recognition systems. To implement it, the proposed system runs the CNN and analyzes its performance against the conventional image processing system. The system extracts the features of an image with the CNN and then learns them in the neural network, achieving a recognition accuracy of 84%. Because the proposed system is a model that recognizes learned images through deep learning, it can run in batch mode and work easily on any platform (including embedded platforms) that can read all kinds of files. It also does not require implementing separate feature extraction and matching algorithms, which saves time and makes it efficient. As a result, it can be widely used as an image recognition program.
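
As an illustration of folding feature extraction and classification into a single CNN, a minimal sketch follows; the input size, depth, and class count are assumptions rather than the system's actual design.

```python
# Sketch: a single CNN that learns feature extraction and classification end to end,
# instead of separate hand-crafted extraction and matching stages. Shapes are assumed.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 10  # assumed number of image categories

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),                                 # learned image features
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # classification head
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```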

Design of Fuzzy k-Nearest Neighbors Classifiers based on Feature Extraction by using Stacked Autoencoder (Stacked Autoencoder를 이용한 특징 추출 기반 Fuzzy k-Nearest Neighbors 패턴 분류기 설계)

  • Rho, Suck-Bum;Oh, Sung-Kwun
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.64 no.1
    • /
    • pp.113-120
    • /
    • 2015
  • In this paper, we propose a feature extraction method using stacked autoencoders composed of restricted Boltzmann machines (RBMs). The stacked autoencoder is a type of deep network, and RBMs are probabilistic graphical models that can be interpreted as stochastic neural networks. In pattern classification problems, feature extraction is a key issue. We use the stacked autoencoder networks to extract new features that have a positive influence on classification performance. After feature extraction, the fuzzy k-nearest neighbors algorithm is used as the classifier for the newly extracted data set. To evaluate the classification ability of the proposed pattern classifier, we conduct experiments on several machine learning data sets.
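
A minimal sketch of the two-stage idea (stacked-autoencoder feature extraction followed by a fuzzy k-NN classifier) is given below; it omits the RBM-based pretraining and uses assumed layer sizes, k, and fuzzifier m, so it is only an approximation of the approach described.

```python
# Sketch: stacked-autoencoder feature extraction followed by a simple fuzzy k-NN.
# Layer sizes, k, and the fuzzifier m are assumptions; RBM pretraining is omitted.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_autoencoder(input_dim, hidden_dims=(64, 32, 16)):
    """Stacked autoencoder; the encoder half yields the new features."""
    inputs = layers.Input(shape=(input_dim,))
    x = inputs
    for h in hidden_dims:                      # encoder
        x = layers.Dense(h, activation="relu")(x)
    code = x
    for h in reversed(hidden_dims[:-1]):       # mirrored decoder
        x = layers.Dense(h, activation="relu")(x)
    outputs = layers.Dense(input_dim, activation="linear")(x)
    autoencoder = models.Model(inputs, outputs)
    autoencoder.compile(optimizer="adam", loss="mse")
    encoder = models.Model(inputs, code)       # used after training for features
    return autoencoder, encoder

def fuzzy_knn_predict(x_train, y_train, x_query, k=5, m=2.0):
    """Fuzzy k-NN on NumPy arrays: memberships weighted by 1/d^(2/(m-1))."""
    dists = np.linalg.norm(x_train - x_query, axis=1) + 1e-12
    nn = np.argsort(dists)[:k]                 # indices of the k nearest neighbours
    w = 1.0 / dists[nn] ** (2.0 / (m - 1.0))
    classes = np.unique(y_train)
    memberships = np.array([w[y_train[nn] == c].sum() for c in classes])
    return classes[np.argmax(memberships)]
```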

Drone Simulation Technologies (드론 시뮬레이션 기술)

  • Lee, S.J.;Yang, J.G.;Lee, B.S.
    • Electronics and Telecommunications Trends
    • /
    • v.35 no.4
    • /
    • pp.81-90
    • /
    • 2020
  • The use of machine learning technologies such as deep and reinforcement learning has proliferated in various domains with the advancement of deep neural network studies. To make the learning successful, both big data acquisition and fast processing are required. However, for some physical world applications such as autonomous drone flight, it is difficult to achieve efficient learning because learning with a premature A.I. is dangerous, cost-ineffective, and time-consuming. To solve these problems, simulation-based approaches can be considered. In this study, we analyze recent trends in drone simulation technologies and compare their features. Subsequently, we introduce Octopus, which is a highly precise and scalable drone simulator being developed by ETRI.

Fabrication of Polymeric Optical Waveguide by LIGA (LIGA공정을 이용한 정밀 고분자 광도파로 제작)

  • Kim, Jin-Tae;Kim, Byeong-Cheol;Choi, Choon-Gi;Yoon, Keun-Byoung;Jeong, Myung-Yung
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.27 no.6
    • /
    • pp.997-1006
    • /
    • 2003
  • The LIGA technique has evolved as a basic fabrication process for micro-structures. The present report deals with the basic technological features in the sequence of the LIGA technique, such as deep X-ray lithography (DXRL), electroplating, and moulding processes, at the Pohang Light Source (PLS). We designed a 3-D structured master for the fabrication of a polymeric optical waveguide and manufactured the waveguide from this master using a hot embossing process. The polymeric optical waveguide could be produced with $\pm 1\,\mu\text{m}$ accuracy and good surface roughness.

Properties of BzK Galaxies Selected in DLS F1 Field

  • Kim, Seongjae;Shim, Hyunjin;Hwang, Ho Seong;Gobat, Raphael;Daddi, Emanuele
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.43 no.1
    • /
    • pp.58.1-58.1
    • /
    • 2018
  • The redshift range $1.4{\lesssim}z{\lesssim}2.5$ is often called the 'redshift desert' because of the difficulty of measuring spectroscopic redshifts when the major spectroscopic features shift into near-infrared wavelengths (Steidel et al. 2004). One of the most efficient and fastest ways to select galaxies in this redshift range is the BzK technique designed by Daddi et al. (2004). Combining deep BVRz data from the Deep Lens Survey with a wide-field (~4.08 deg$^2$) K-band image, we select 1200 star-forming BzKs (sBzKs) and 120 passive BzKs (pBzKs) at K < 21.25. We discuss the photometric redshifts, star formation rates, and stellar masses of the selected BzKs. A possible large-scale structure at $1.4 \lesssim z < 1.6$ based on the spatial distribution of the BzKs is also introduced.
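
For reference, the BzK technique of Daddi et al. (2004) is a simple two-colour criterion; the sketch below applies it with the K < 21.25 cut quoted in the abstract, assuming AB magnitudes as in the original definition.

```python
# Sketch of the Daddi et al. (2004) BzK colour selection (AB magnitudes assumed),
# applied with the K < 21.25 cut quoted in the abstract. Inputs are NumPy arrays.
import numpy as np

def classify_bzk(B, z, K, k_cut=21.25):
    """Label each object as 'sBzK', 'pBzK', or 'other'."""
    bzk = (z - K) - (B - z)                          # the BzK colour index
    bright = K < k_cut
    sbzk = bright & (bzk >= -0.2)                    # star-forming BzK galaxies
    pbzk = bright & (bzk < -0.2) & ((z - K) > 2.5)   # passive BzK galaxies
    return np.where(sbzk, "sBzK", np.where(pbzk, "pBzK", "other"))
```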

Siamese Network for Learning Robust Feature of Hippocampi

  • Ahmed, Samsuddin;Jung, Ho Yub
    • Smart Media Journal
    • /
    • v.9 no.3
    • /
    • pp.9-17
    • /
    • 2020
  • Hippocampus is a complex brain structure embedded deep into the temporal lobe. Studies have shown that this structure gets affected by neurological and psychiatric disorders and it is a significant landmark for diagnosing neurodegenerative diseases. Hippocampus features play very significant roles in region-of-interest based analysis for disease diagnosis and prognosis. In this study, we have attempted to learn the embeddings of this important biomarker. As conventional metric learning methods for feature embedding is known to lacking in capturing semantic similarity among the data under study, we have trained deep Siamese convolutional neural network for learning metric of the hippocampus. We have exploited Gwangju Alzheimer's and Related Dementia cohort data set in our study. The input to the network was pairs of three-view patches (TVPs) of size 32 × 32 × 3. The positive samples were taken from the vicinity of a specified landmark for the hippocampus and negative samples were taken from random locations of the brain excluding hippocampi regions. We have achieved 98.72% accuracy in verifying hippocampus TVPs.