• Title/Summary/Keyword: feature weights


A Study on Reducing Learning Time of Deep-Learning using Network Separation (망 분리를 이용한 딥러닝 학습시간 단축에 대한 연구)

  • Lee, Hee-Yeol;Lee, Seung-Ho
    • Journal of IKEEE
    • /
    • v.25 no.2
    • /
    • pp.273-279
    • /
    • 2021
  • In this paper, we propose an algorithm that shortens learning time by performing individual learning on partitions of the deep-learning structure. The proposed algorithm consists of four processes: setting the network division point, extracting feature vectors, removing feature noise, and classifying classes. First, in the division-point setting process, the point at which the network structure is split for effective feature-vector extraction is determined. Second, in the feature-vector extraction process, feature vectors are extracted without additional learning, using previously learned weights. Third, in the feature-noise removal process, the extracted feature vectors are received and the output value of each class is learned to remove noise from the data. Fourth, in the class classification process, the noise-removed feature vector is input to a multi-layer perceptron, which is trained to output the result. To evaluate the performance of the proposed algorithm, we experimented with the Extended Yale B face database. In the experiments, the proposed algorithm reduced the time required for a single learning pass by 40.7% compared with the existing algorithm. In addition, the number of learning iterations needed to reach the target recognition rate was smaller than with the existing algorithm. These results confirm that both the single-pass learning time and the total learning time improved over the existing algorithm.
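
The split-learning idea can be sketched roughly as follows; the layer shapes, the ReLU feature layer, the softmax head, and the learning rate are illustrative stand-ins, not the paper's configuration:

```python
import math

def extract_features(x, frozen_w):
    # forward pass through the frozen layer (ReLU); no weights are updated
    # here, so this step costs no back-propagation
    return [max(0.0, sum(wi * xi for wi, xi in zip(row, x))) for row in frozen_w]

def train_head(feats, labels, n_classes, lr=0.1, epochs=50):
    # only the small classifier head (linear layer + softmax) is trained
    dim = len(feats[0])
    w = [[0.0] * dim for _ in range(n_classes)]
    for _ in range(epochs):
        for x, y in zip(feats, labels):
            scores = [sum(wi * xi for wi, xi in zip(row, x)) for row in w]
            m = max(scores)
            exps = [math.exp(s - m) for s in scores]
            z = sum(exps)
            for c in range(n_classes):
                grad = exps[c] / z - (1.0 if c == y else 0.0)
                for j in range(dim):
                    w[c][j] -= lr * grad * x[j]
    return w

def predict(x, frozen_w, head_w):
    f = extract_features(x, frozen_w)
    scores = [sum(wi * fi for wi, fi in zip(row, f)) for row in head_w]
    return scores.index(max(scores))
```

Because `extract_features` never updates `frozen_w`, each training pass touches only the small head, which is where the per-pass speedup comes from.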

Guaranteeing delay bounds based on the Bandwidth Allocation Scheme (패킷 지연 한계 보장을 위한 공평 큐잉 기반 대역할당 알고리즘)

  • 정대인
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.25 no.8A
    • /
    • pp.1134-1143
    • /
    • 2000
  • We propose a scheduling algorithm, the Bandwidth Allocation Scheme (BAS), that guarantees bounded delay in a switching node. It is based on the notion of the GPS (Generalized Processor Sharing) mechanism, which clarified the concept of fair queueing under a fluid-flow model of traffic. The main objective of this paper is to determine the session-level weights that define the GPS server. The introduction and derivation of the so-called 'system equation' characterize our approach. With multiple classes of traffic, we define a set of service curves, one for each class. Constrained to the profiles of the individual service curves required for delay satisfaction, the sets of weights are determined as a function of both the delay requirements and the traffic parameters. The schedulability test conditions, which are necessary to implement call admission control, are also derived to ensure that the proposed bandwidth allocation scheme can support delay guarantees for all accepted traffic classes. Notably, the weight values are tunable in accordance with the varying system status rather than fixed. This adaptability benefits the efficiency of bandwidth sharing.
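
As a rough illustration of how delay bounds can drive GPS weights (using a simplified burst-drain delay model, not the paper's 'system equation'): if class i declares a leaky-bucket burst sigma_i and a delay bound D_i, and GPS gives it at least (w_i / sum(w)) * C of the link rate C, then requiring the burst to drain within D_i yields a minimum rate sigma_i / D_i per class:

```python
def min_rates(classes):
    # classes: list of (sigma_bits, delay_bound_seconds) pairs
    return [sigma / d for sigma, d in classes]

def gps_weights(classes, link_rate):
    # normalized weights proportional to each class's minimum required rate
    rates = min_rates(classes)
    if sum(rates) > link_rate:
        raise ValueError("schedulability test failed: demand exceeds link rate")
    total = sum(rates)
    return [r / total for r in rates]
```

The `ValueError` branch plays the role of a schedulability test: if the summed minimum rates exceed the link rate, the class set cannot be admitted.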


An Image Merging Method for Two High Dynamic Range Images of Different Exposure (노출 시간이 다른 두 HDR 영상의 융합 기법)

  • Kim, Jin-Heon
    • Journal of Korea Multimedia Society
    • /
    • v.13 no.4
    • /
    • pp.526-534
    • /
    • 2010
  • This paper describes an algorithm that merges two HDR pictures taken with different exposure times for display on LDR devices such as LCDs or CRTs. The proposed method does not generate a radiance map, but merges directly using weights computed from the input images. The weights are first produced on a per-pixel basis and then blended with a Gaussian function. This prevents sparkle noise caused by abrupt changes in the weights and contributes to a smooth connection between the two images' information. The chrominance information of the images is merged by a weighted averaging scheme using the deviations of the RGB averages and their differences. The algorithm is characterized by representing the unsaturated areas of the two original images well and connecting the image information smoothly. The proposed method uses only the two input images and automatically tunes its whole internal process to them, so autonomous operation is possible when it is embedded in HDR cameras that use a double-shutter scheme or double sensor cells.
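
A minimal one-dimensional sketch of the idea, with an assumed mid-gray well-exposedness weight and a 3-tap blur standing in for the paper's Gaussian blending:

```python
def well_exposed(v):
    # weight peaks at mid-gray and falls toward the saturated extremes
    return max(1e-6, 1.0 - abs(v - 0.5) * 2.0)

def blur(w):
    # small Gaussian-like smoothing of the weights, clamping at the borders;
    # this is what suppresses abrupt weight changes ("sparkle" noise)
    kernel, out, n = (0.25, 0.5, 0.25), [], len(w)
    for i in range(n):
        out.append(sum(k * w[min(max(i + o, 0), n - 1)]
                       for k, o in zip(kernel, (-1, 0, 1))))
    return out

def merge(img_a, img_b):
    # per-pixel weights, smoothed, then a normalized weighted average
    wa = blur([well_exposed(v) for v in img_a])
    wb = blur([well_exposed(v) for v in img_b])
    return [(a * x + b * y) / (a + b)
            for a, b, x, y in zip(wa, wb, img_a, img_b)]
```

Saturated pixels in one exposure receive nearly zero weight, so the merged value is taken mostly from the other exposure.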

A Convolutional Neural Network Model with Weighted Combination of Multi-scale Spatial Features for Crop Classification (작물 분류를 위한 다중 규모 공간특징의 가중 결합 기반 합성곱 신경망 모델)

  • Park, Min-Gyu;Kwak, Geun-Ho;Park, No-Wook
    • Korean Journal of Remote Sensing
    • /
    • v.35 no.6_3
    • /
    • pp.1273-1283
    • /
    • 2019
  • This paper proposes an advanced crop classification model that combines a procedure for weighted combination of spatial features extracted from multi-scale input images with a conventional convolutional neural network (CNN) structure. The proposed model first extracts spatial features from patches of different sizes in convolution layers, and then assigns different weights to the extracted spatial features, accounting for feature-specific importance, using squeeze-and-excitation block sets. The novelty of the model lies in its ability to extract spatial features useful for classification and to account for their relative importance. A case study of crop classification with multi-temporal Landsat-8 OLI images in Illinois, USA was carried out to evaluate the classification performance of the proposed model. The impact of patch size on crop classification was first assessed with a single-patch model to find useful patch sizes. The classification performance of the proposed model was then compared with those of two conventional CNN models: the single-patch model and a multi-patch model without feature-specific weights. In the comparison experiments, the proposed model alleviated misclassification patterns by considering the spatial characteristics of the different crops in the study area, achieving the best classification accuracy among the compared models. Based on the case-study results, the proposed model, which can account for the relative importance of spatial features, should apply effectively to the classification of objects with different spatial characteristics, not only crops.
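
The squeeze-and-excitation weighting can be illustrated on toy per-scale feature vectors; the tiny hand-made FC weights below stand in for the learned parameters of a real SE block:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_weight(feature_maps, w1, w2):
    # squeeze: global average per scale
    squeezed = [sum(f) / len(f) for f in feature_maps]
    # excitation: two tiny fully connected layers (ReLU, then sigmoid gates)
    hidden = [max(0.0, sum(a * s for a, s in zip(row, squeezed))) for row in w1]
    gates = [sigmoid(sum(a * h for a, h in zip(row, hidden))) for row in w2]
    # reweight each scale's features by its importance gate
    return [[g * v for v in f] for g, f in zip(gates, feature_maps)]
```

Each scale's features are scaled by a learned gate in (0, 1), which is how feature-specific importance enters the combination.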

Automatic Meniscus Segmentation from Knee MR Images using Multi-atlas-based Locally-weighted Voting and Patch-based Edge Feature Classification (무릎 MR 영상에서 다중 아틀라스 기반 지역적 가중 투표 및 패치 기반 윤곽선 특징 분류를 통한 반월상 연골 자동 분할)

  • Kim, SoonBeen;Kim, Hyeonjin;Hong, Helen;Wang, Joon Ho
    • Journal of the Korea Computer Graphics Society
    • /
    • v.24 no.4
    • /
    • pp.29-38
    • /
    • 2018
  • In this paper, we propose an automatic method for segmenting the meniscus in knee MR images through automatic meniscus localization, multi-atlas-based locally-weighted voting, and patch-based edge-feature classification. First, after segmenting the bone and knee articular cartilage, the volume of interest of the meniscus is automatically localized. Second, the meniscus is segmented by multi-atlas-based locally-weighted voting, taking into account weights for shape and intensity distribution within the volume of interest. Finally, to remove leakage into the collateral ligaments, which have similar intensity, the meniscus is refined using patch-based edge-feature classification that considers shape and distance weights. The Dice similarity coefficient between the proposed method and manual segmentation was 80.13% for the medial meniscus and 80.81% for the lateral meniscus, improvements of 7.25% and 1.31%, respectively, over multi-atlas-based locally-weighted voting alone.
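
Locally-weighted label fusion can be sketched as follows, with a Gaussian intensity-similarity kernel as an assumed weighting (the paper additionally weights by shape):

```python
import math

def local_weight(target_patch, atlas_patch, sigma=1.0):
    # Gaussian kernel on the local intensity difference
    d2 = sum((t - a) ** 2 for t, a in zip(target_patch, atlas_patch))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def weighted_vote(target_patch, atlases):
    # atlases: list of (atlas_patch, label); each atlas's vote is weighted by
    # how well its patch matches the target locally
    tally = {}
    for patch, label in atlases:
        tally[label] = tally.get(label, 0.0) + local_weight(target_patch, patch)
    return max(tally, key=tally.get)
```

Atlases whose local appearance matches the target dominate the vote, which is what makes the fusion "locally" weighted rather than a global majority vote.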

A novel classification approach based on Naïve Bayes for Twitter sentiment analysis

  • Song, Junseok;Kim, Kyung Tae;Lee, Byungjun;Kim, Sangyoung;Youn, Hee Yong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.6
    • /
    • pp.2996-3011
    • /
    • 2017
  • With the rapid growth of web technology and the spread of smart devices, social networking services (SNS) are widely used. As a result, a huge amount of data is generated from SNS such as Twitter, and sentiment analysis of SNS data is very important for various applications and services. In existing sentiment analysis based on the Naïve Bayes algorithm, the same number of attributes is usually employed to estimate the weight of each class, and uncountable, meaningless attributes are included. This decreases the accuracy of sentiment analysis. In this paper, two methods are proposed to resolve these issues: reflecting the difference between the numbers of positive and negative words when calculating the weights, and eliminating insignificant words in the feature-selection step of the Multinomial Naïve Bayes (MNB) algorithm. Performance comparison demonstrates that the proposed scheme significantly increases accuracy compared to the existing Multivariate Bernoulli Naïve Bayes (BNB) algorithm and the MNB scheme.
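
For reference, a minimal multinomial Naïve Bayes with Laplace smoothing; the class-imbalance-aware weighting and feature elimination proposed in the paper are not reproduced here:

```python
import math
from collections import Counter

def train_mnb(docs, labels):
    # docs: list of token lists; labels: parallel list of class names
    classes = set(labels)
    priors, counts, totals = {}, {}, {}
    vocab = set(w for d in docs for w in d)
    for c in classes:
        cdocs = [d for d, l in zip(docs, labels) if l == c]
        priors[c] = math.log(len(cdocs) / len(docs))
        counts[c] = Counter(w for d in cdocs for w in d)
        totals[c] = sum(counts[c].values())
    return priors, counts, totals, vocab

def classify(doc, model):
    priors, counts, totals, vocab = model
    best, best_score = None, -math.inf
    for c in priors:
        score = priors[c]
        for w in doc:
            if w in vocab:  # unseen words are ignored
                score += math.log((counts[c][w] + 1) / (totals[c] + len(vocab)))
        if score > best_score:
            best, best_score = c, score
    return best
```

The paper's first proposal amounts to adjusting the per-class weight estimates when the positive and negative vocabularies differ in size, rather than treating both classes as having the same number of attributes.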

A Study on an Image Classifier using Multi-Neural Networks (다중 신경망을 이용한 영상 분류기에 관한 연구)

  • Park, Soo-Bong;Park, Jong-An
    • The Journal of the Acoustical Society of Korea
    • /
    • v.14 no.1
    • /
    • pp.13-21
    • /
    • 1995
  • In this paper, we improve an image-classifier algorithm based on neural-network learning. It consists of two steps: input-pattern generation and implementation of a global neural network using an improved back-propagation algorithm. The feature vector for pattern recognition consists of codebook data obtained from self-organizing feature-map learning, which decreases both the number of input neurons and the computational cost. The global neural network algorithm used in the classifier adds a control part and an address-memory part to the back-propagation algorithm to control the weights and unit offsets. The simulation results show that it does not fall into local minima, easily scales to a large neural network, and greatly decreases the learning time.
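
The codebook-based input reduction can be sketched as vector quantization; the hand-made two-entry codebook below stands in for one learned by a self-organizing feature map:

```python
def nearest_code(x, codebook):
    # index of the closest codebook vector (squared Euclidean distance)
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(range(len(codebook)), key=lambda i: dist2(x, codebook[i]))

def quantize(blocks, codebook):
    # the classifier's input pattern is the sequence of winning code indices,
    # far shorter than the raw pixel blocks
    return [nearest_code(b, codebook) for b in blocks]
```

Feeding indices (or one-hot codes) instead of raw pixels is what shrinks the input-neuron count and the computational cost.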


Topical Clustering Techniques of Twitter Documents Using Korean Wikipedia (한글 위키피디아를 이용한 트위터 문서의 주제별 클러스터링 기법)

  • Chang, Jae-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.14 no.5
    • /
    • pp.189-196
    • /
    • 2014
  • Recently, the need to retrieve documents is growing in SNS environments such as Twitter. To support Twitter search, a clustering technique that classifies the massive set of retrieved documents by topic is required. However, due to the nature of Twitter, previous simple techniques apply only to a limited extent to clustering Twitter documents. To overcome this problem, we propose a new clustering technique suited to the Twitter environment. In the proposed method, we augment the feature vectors representing the Twitter documents with new terms and recalculate the feature weights using Korean Wikipedia. In addition, we performed experiments with Korean Twitter documents and demonstrated the usefulness of the proposed method through a performance comparison with previous techniques.
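
The term-augmentation step can be sketched as follows; the `related` mapping is a stand-in for Korean Wikipedia lookups, and the 0.5 boost is an assumed down-weight for augmented terms:

```python
from collections import Counter

def augment(terms, related, boost=0.5):
    # start from raw term frequencies in the tweet
    weights = {t: float(c) for t, c in Counter(terms).items()}
    for t in terms:
        for r in related.get(t, ()):
            # augmented terms get a reduced weight so they refine, not dominate
            weights[r] = weights.get(r, 0.0) + boost
    total = sum(weights.values())
    return {t: w / total for t, w in weights.items()}
```

Expanding short, sparse tweets this way gives the clustering step more topical overlap to work with than the tweets' own few terms.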

A Comparative Study of Feature Extraction Methods for Authorship Attribution in the Text of Traditional East Asian Medicine with a Focus on Function Words (한의학 고문헌 텍스트에서의 저자 판별 - 기능어의 역할을 중심으로 -)

  • Oh, Junho
    • Journal of Korean Medical classics
    • /
    • v.33 no.2
    • /
    • pp.51-59
    • /
    • 2020
  • Objectives : We study which features most effectively support authorship attribution for texts of Traditional East Asian Medicine. Methods : The authorship-attribution performance of a Support Vector Machine (SVM) was compared by cross-validation, depending on whether function words or content words were used, single words or collocations, and whether IDF weighting was applied, using the 'Variorum of the Nanjing' as the experimental corpus. Results : The combination 'function words/uni-bigram/TF' performed best, with an accuracy of 0.732, while the combination 'content words/unigram/TF-IDF' showed the lowest accuracy, 0.351. Conclusions : The authorship attribution of Traditional East Asian Medicine texts shows the following. First, function words play a more important role than content words. Second, collocations were relatively important among content words, but single words carry more meaning among function words. Third, unlike in general text analysis, IDF weighting worsened performance.
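
The best-performing feature recipe (function words, unigrams plus bigrams, raw TF, no IDF) can be sketched as below; the function-word list is an illustrative subset, not the study's actual list, and bigrams here are formed over the filtered sequence:

```python
from collections import Counter

FUNCTION_WORDS = {"之", "也", "者", "而", "以"}  # illustrative subset only

def tf_features(tokens):
    # keep only function words, then count unigrams and adjacent bigrams;
    # note the bigrams skip any content words that were filtered out
    kept = [t for t in tokens if t in FUNCTION_WORDS]
    feats = {("u", t): c for t, c in Counter(kept).items()}
    feats.update({("b",) + pair: c
                  for pair, c in Counter(zip(kept, kept[1:])).items()})
    return feats
```

Such raw-TF vectors would then be fed to an SVM under cross-validation; per the study's third finding, applying IDF on top of them hurt rather than helped.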

Content-based Image Retrieval using Color and Block Region Features (컬러와 블록영역 특징을 이용한 내용기반 화상 검색)

  • 최기호
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.27 no.6C
    • /
    • pp.610-618
    • /
    • 2002
  • This paper presents a new image-retrieval method based on color-space and block-region information. The color-space information of an image is obtained from a color binary set, and the block-region information from regional segmentation and its features. Candidate images for retrieval are determined by comparing the color features and binary set of the query image against the image-feature database. In particular, retrieval using similarity measurements can weight both the color spatial distribution and the objective block-region features. The effectiveness of this retrieval method using color-spatial and block-region features is demonstrated by an implementation on an image database of 6,000 images.
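
A possible shape for the combined similarity measurement; the alpha/beta weights, the intersection normalization, and the block-match score are assumptions, not the paper's formulas:

```python
def hist_intersection(h1, h2):
    # histogram intersection, normalized by the query histogram mass
    return sum(min(a, b) for a, b in zip(h1, h2)) / max(sum(h1), 1e-9)

def block_match(b1, b2):
    # fraction of block-region labels that agree between the two images
    return sum(1 for a, b in zip(b1, b2) if a == b) / len(b1)

def similarity(query, image, alpha=0.6, beta=0.4):
    # blended score over the color feature and the block-region feature
    return (alpha * hist_intersection(query["color"], image["color"])
            + beta * block_match(query["blocks"], image["blocks"]))
```

Ranking the database by such a blended score lets the color spatial distribution and the block-region features contribute with separate, tunable weights.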