• Title/Summary/Keyword: Deep Features

Search Results: 1,096

Design and Implementation of I/O Performance Benchmarking Framework for Linux Container

  • Oh, Gijun;Son, Suho;Yang, Junseok;Ahn, Sungyong
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.13 no.1
    • /
    • pp.180-186
    • /
    • 2021
  • In cloud computing services it is important to share system resources among multiple instances according to user requirements. In particular, the issue of efficiently distributing I/O resources across multiple instances has drawn attention due to the rise of emerging data-centric technologies such as big data and deep learning. However, it is difficult to evaluate the I/O resource distribution of Linux containers, one of the core technologies of cloud computing, since conventional I/O benchmarks do not support features related to container management. In this paper, we propose a new I/O performance benchmarking framework that can easily evaluate the resource distribution of Linux containers using existing I/O benchmarks, by supporting container-related features and an integrated user interface. According to the performance evaluation with the trace-replay benchmark, the proposed framework incurs negligible performance overhead while making it convenient to evaluate the I/O performance of multiple Linux containers.
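By way of illustration, a harness in this spirit can be sketched in a few lines: launch containers with different blkio weights, run the same I/O benchmark in each concurrently, and compare the measured bandwidth. This is a minimal sketch, not the authors' framework; it assumes Docker with the cgroup-v1 blkio controller and a hypothetical image (alpine-fio) with fio preinstalled.

```python
import json
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Hypothetical weight assignment: a higher blkio weight should get more bandwidth.
WEIGHTS = {"c1": 100, "c2": 500, "c3": 1000}

def run_fio(name: str, weight: int) -> float:
    """Run a direct-I/O random-read fio job inside a container; return MiB/s."""
    cmd = [
        "docker", "run", "--rm", "--name", name,
        "--blkio-weight", str(weight),
        "alpine-fio",  # hypothetical image with fio preinstalled
        "fio", "--name=job", "--rw=randread", "--bs=4k", "--size=256m",
        "--direct=1", "--runtime=30", "--time_based", "--output-format=json",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    bw_kib = json.loads(out.stdout)["jobs"][0]["read"]["bw"]  # fio reports KiB/s
    return bw_kib / 1024

if __name__ == "__main__":
    # Run the containers concurrently so the blkio weights actually compete.
    with ThreadPoolExecutor(max_workers=len(WEIGHTS)) as pool:
        futures = {n: pool.submit(run_fio, n, w) for n, w in WEIGHTS.items()}
        for n, fut in futures.items():
            print(f"{n} (weight {WEIGHTS[n]}): {fut.result():.1f} MiB/s")
```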

PATN: Polarized Attention based Transformer Network for Multi-focus image fusion

  • Pan Wu;Zhen Hua;Jinjiang Li
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.4
    • /
    • pp.1234-1257
    • /
    • 2023
  • In this paper, we propose a framework for multi-focus image fusion (MFIF) called PATN. In our approach, by aggregating deep features extracted with a U-type Transformer and shallow features extracted with the PSA module, PATN captures both long-range texture information and local detail information of the image. Meanwhile, the edge-preserving information of the fused image is enhanced using a dense residual block containing the Sobel gradient operator, and three loss functions are introduced to retain more source-image texture information. PATN is compared with 17 advanced MFIF methods on three datasets to verify its effectiveness and robustness.
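As a rough illustration of the edge-preserving idea, the sketch below (assumed PyTorch, not the authors' PATN code) shows a residual block that injects fixed Sobel gradients alongside the learned features.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SobelResidualBlock(nn.Module):
    """Residual block that concatenates fixed Sobel x/y gradients with the
    input features so edge information survives the fusion step."""
    def __init__(self, channels: int):
        super().__init__()
        sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        sobel_y = sobel_x.t()
        # Non-trainable depthwise Sobel kernels, one x/y pair per channel.
        kernel = torch.stack([sobel_x, sobel_y]).unsqueeze(1)  # (2, 1, 3, 3)
        self.register_buffer("kernel", kernel.repeat(channels, 1, 1, 1))
        self.channels = channels
        self.conv = nn.Conv2d(3 * channels, channels, 3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Depthwise convolution computes per-channel horizontal/vertical gradients.
        grads = F.conv2d(x, self.kernel, padding=1, groups=self.channels)
        return x + F.relu(self.conv(torch.cat([x, grads], dim=1)))
```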

Improving Adversarial Domain Adaptation with Mixup Regularization

  • Bayarchimeg Kalina;Youngbok Cho
    • Journal of information and communication convergence engineering
    • /
    • v.21 no.2
    • /
    • pp.139-144
    • /
    • 2023
  • Engineers prefer deep neural networks (DNNs) for solving computer vision problems. However, DNNs pose two major problems. First, neural networks require large amounts of well-labeled data for training. Second, the covariate shift problem is common in computer vision. Domain adaptation has been proposed to mitigate this problem. Recent work on adversarial-learning-based unsupervised domain adaptation (UDA) has explained transferability and enabled models to learn robust features. Despite this advantage, current methods do not guarantee the distinguishability of the latent space unless they consider class-aware information of the target domain. Furthermore, source and target examples alone cannot efficiently extract domain-invariant features from the encoded spaces. To alleviate these problems of existing UDA methods, we propose adding mixup regularization to the adversarial discriminative domain adaptation (ADDA) method. We validated the effectiveness and generality of the proposed method through experiments under three adaptation scenarios: MNIST to USPS, SVHN to MNIST, and MNIST to MNIST-M.
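The core of mixup is a convex combination of two examples with a Beta-distributed coefficient. A minimal sketch, assuming NumPy and mixing across domains; the paper's exact integration into ADDA may differ:

```python
import numpy as np

def domain_mixup(x_src: np.ndarray, x_tgt: np.ndarray, alpha: float = 0.2):
    """Mix source and target batches; lam also serves as a soft domain label
    (1.0 = pure source, 0.0 = pure target)."""
    lam = np.random.beta(alpha, alpha)   # mixing coefficient, lam ~ Beta(alpha, alpha)
    n = min(len(x_src), len(x_tgt))      # align batch sizes
    x_mix = lam * x_src[:n] + (1.0 - lam) * x_tgt[:n]
    return x_mix, lam

# The domain discriminator can then be trained on x_mix with the soft label
# lam instead of hard 0/1 domain labels, smoothing its decision boundary.
```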

Semantic Similarity Calculation based on Siamese TRAT (A Semantic Similarity Algorithm Combining a Transformer Encoder and a Siamese Network)

  • Lu, Xing-Cen;Joe, Inwhee
    • Annual Conference of KIPS
    • /
    • 2021.05a
    • /
    • pp.397-400
    • /
    • 2021
  • To solve the problem that existing methods cannot adequately represent the semantic features of sentences, we propose Siamese TRAT, a semantic feature extraction model based on a Transformer encoder. The Transformer is used to fully extract the semantic information within a sentence and perform deep semantic encoding. In addition, an interactive attention mechanism is introduced to extract the similarity features of the association between two sentences, which makes the model better at capturing the important semantic information inside each sentence and improves its semantic understanding and generalization ability. The experimental results show that the proposed model significantly improves accuracy on semantic similarity calculation tasks in English and Chinese, and is more effective than existing methods.
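A Siamese arrangement means the two sentences pass through the same encoder with shared weights before their vectors are compared. A minimal sketch, assuming PyTorch, with illustrative sizes and the paper's interactive attention omitted:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEncoder(nn.Module):
    def __init__(self, vocab_size: int, d_model: int = 256, nhead: int = 4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def encode(self, tokens: torch.Tensor) -> torch.Tensor:
        h = self.encoder(self.embed(tokens))  # (batch, seq, d_model)
        return h.mean(dim=1)                  # mean-pool to one sentence vector

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # Both branches reuse the same weights -- the "Siamese" part.
        return F.cosine_similarity(self.encode(a), self.encode(b))
```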

Dense Neural Network Graph-based Point Cloud Classification

  • El Khazari, Ahmed;Lee, Hyo Jong
    • Annual Conference of KIPS
    • /
    • 2019.05a
    • /
    • pp.498-500
    • /
    • 2019
  • A point cloud is a flexible set of points that provides a scalable geometric representation applicable to different computer graphics tasks. We propose a method based on EdgeConv and densely connected layers to aggregate features for better classification. Our proposed approach shows significant performance improvement compared to state-of-the-art deep neural network-based approaches.
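EdgeConv builds a k-nearest-neighbor graph around each point and applies a shared MLP to edge features before max-aggregation. A minimal sketch of that operation, assuming PyTorch (DGCNN-style); the authors' densely connected aggregation is not reproduced here:

```python
import torch
import torch.nn as nn

def knn(x: torch.Tensor, k: int) -> torch.Tensor:
    # x: (batch, num_points, dims); indices of the k nearest neighbors per point.
    dist = torch.cdist(x, x)                                  # pairwise distances
    return dist.topk(k + 1, largest=False).indices[..., 1:]   # drop self (dist 0)

class EdgeConv(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, k: int = 20):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        idx = knn(x, self.k)                                  # (B, N, k)
        neighbors = torch.gather(
            x.unsqueeze(1).expand(-1, x.size(1), -1, -1), 2,
            idx.unsqueeze(-1).expand(-1, -1, -1, x.size(-1)))  # (B, N, k, D)
        center = x.unsqueeze(2).expand_as(neighbors)
        # Edge feature [x_i, x_j - x_i], shared MLP, then max over neighbors.
        edge = torch.cat([center, neighbors - center], dim=-1)  # (B, N, k, 2D)
        return self.mlp(edge).max(dim=2).values                # (B, N, out_dim)
```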

Deep Learning-based Deraining: Performance Comparison and Trends

  • Cho, Minji;Park, Ye-In;Cho, Yubin;Kang, Suk-Ju
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.16 no.5
    • /
    • pp.225-232
    • /
    • 2021
  • Deraining is an image restoration task that must balance local details against broad contextual information while recovering images. Current studies adopt attention mechanisms, which have been actively researched in natural language processing, to handle both global and local features. This paper classifies existing deraining methods and provides a comparative analysis and performance comparison on several datasets, with a focus on generalization.
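For readers unfamiliar with the mechanism, attention in restoration networks often amounts to a learned per-pixel gate over the feature maps. A minimal, generic sketch assuming PyTorch; it does not correspond to any specific method in the survey:

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Predicts a per-pixel gate so the network can focus on local rain
    streaks while a parallel branch carries the broad context."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels // 2, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels // 2, 1, 1), nn.Sigmoid())

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return feat * self.gate(feat)  # (B,C,H,W) * (B,1,H,W), broadcast over C
```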

New Record of Three Colpodean Ciliates (Ciliophora: Colpodea) from Korea

  • Kim, Kang-San;Min, Gi-Sik
    • Korean Journal of Environmental Biology
    • /
    • v.33 no.4
    • /
    • pp.375-382
    • /
    • 2015
  • We discovered three soil ciliates of the class Colpodea, Colpoda henneguyi Fabre-Domergue, 1889; C. lucida Greeff, 1888; and Bursaria truncatella Müller, 1773, from Obong-ri, Ayajin-ri, and Elwang-ri (Korea), respectively. Colpoda henneguyi had the following features: often wider preorally than postorally; size in vivo 60-80 μm × 50-70 μm; extrusomes indistinct in vivo, cylindroid, approximately 1 μm long; notches caused by a deep diagonal groove; yellowish globules on the cortex of the cell; 10-12 postoral kineties; silverline system aspera-type. Colpoda lucida exhibited the following features: broadly reniform; size in vivo 70-90 μm × 50-70 μm; conspicuous extrusomes, 3.5-5 μm long in vivo, cylindroid to fusiform; 13-16 postoral kineties; silverline system cucullus-type. Bursaria truncatella had the following features: bursiform; size in vivo 300-470 μm × 120-260 μm; macronucleus coiled, with highly variable shapes, 600-1100 μm × 30-40 μm in vivo; micronuclei 16-25 in number, approximately 4 μm in diameter; extrusomes cylindroid, 3-4 μm long in vivo. This is the first report of these colpodean ciliates from Korea, and we describe the species based on observations of live and impregnated (protargol and silver nitrate impregnation) specimens.

A Car Plate Area Detection System Using Deep Convolution Neural Network

  • Jeong, Yunju;Ansari, Israfil;Shim, Jaechang;Lee, Jeonghwan
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.8
    • /
    • pp.1166-1174
    • /
    • 2017
  • In general, detection of the vehicle license plate is a step preceding license plate recognition and has been actively studied for several decades. In this paper, we propose an algorithm to detect the license plate area of a moving vehicle in video captured by a fixed camera installed on the road, using convolutional neural network (CNN) technology. First, license plate images and non-license plate images are applied to a previously trained CNN model (AlexNet) to extract and classify features. Then, after detecting the moving vehicle in the video, the CNN detects the license plate area by distinguishing license plate regions from non-license plate regions based on the extracted features. Experimental results show relatively good performance in various environments such as incomplete lighting, noise due to rain, and low resolution. In addition, the proposed system can also be used independently to detect and then hide the license plate area, protecting the public's personal information.
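The first step, reusing a pretrained AlexNet's convolutional features with a small plate/non-plate head, can be sketched as follows (assumed PyTorch/torchvision; illustrative, not the authors' implementation):

```python
import torch
import torch.nn as nn
from torchvision import models

class PlateClassifier(nn.Module):
    """Frozen AlexNet convolutional features + a binary plate/non-plate head."""
    def __init__(self):
        super().__init__()
        backbone = models.alexnet(weights="IMAGENET1K_V1")
        self.features = backbone.features           # pretrained conv layers
        for p in self.features.parameters():
            p.requires_grad = False                 # reuse features as-is
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d((6, 6)), nn.Flatten(),
            nn.Linear(256 * 6 * 6, 2))              # plate vs. non-plate logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))
```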

Convolutional Neural Network based Audio Event Classification

  • Lim, Minkyu;Lee, Donghyun;Park, Hosung;Kang, Yoseb;Oh, Junseok;Park, Jeong-Sik;Jang, Gil-Jin;Kim, Ji-Hwan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.6
    • /
    • pp.2748-2760
    • /
    • 2018
  • This paper proposes an audio event classification method based on convolutional neural networks (CNNs). CNNs have a great advantage in distinguishing complex shapes in images, and the proposed system uses audio features as an input image to a CNN. Mel-scale filter bank features are extracted from each frame, and the features of 40 consecutive frames are concatenated and regarded as one input image. The output layer of the CNN generates probabilities of audio events (e.g., dog bark, siren, forest). The event probabilities for all images in an audio segment are accumulated, and the audio event with the highest accumulated probability is taken as the classification result. The proposed method classified thirty audio events with an accuracy of 81.5% on the UrbanSound8K, BBC Sound FX, DCASE2016, and FREESOUND datasets.
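The feature pipeline described above, stacking 40 mel filter-bank frames into one input image and accumulating per-image probabilities over a segment, can be sketched as follows (assuming librosa and NumPy; cnn_predict is a hypothetical callable returning a per-class probability vector):

```python
import numpy as np
import librosa

def segment_to_images(path: str, n_mels: int = 40, frames: int = 40):
    """Turn an audio file into a list of (n_mels x frames) log-mel 'images'."""
    y, sr = librosa.load(path, sr=None)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    logmel = librosa.power_to_db(mel)              # (n_mels, total_frames)
    n = logmel.shape[1] // frames
    # Each block of 40 consecutive frames becomes one CNN input image.
    return [logmel[:, i * frames:(i + 1) * frames] for i in range(n)]

def classify_segment(images, cnn_predict) -> int:
    """Accumulate per-image class probabilities; return the argmax event."""
    total = sum(cnn_predict(img) for img in images)  # element-wise sum
    return int(np.argmax(total))
```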