• Title/Summary/Keyword: pruning method

Search results: 167

Anomaly detection in particulate matter sensor using hypothesis pruning generative adversarial network

  • Park, YeongHyeon;Park, Won Seok;Kim, Yeong Beom
    • ETRI Journal / v.43 no.3 / pp.511-523 / 2021
  • The World Health Organization provides guidelines for managing the particulate matter (PM) level because a higher PM level represents a threat to human health. To manage the PM level, a procedure for measuring the PM value is needed first. We use a PM sensor that collects the PM level by the laser-based light scattering (LLS) method because it is more cost-effective than a beta attenuation monitor-based or tapered element oscillating microbalance-based sensor. However, an LLS-based sensor is more likely to malfunction than these higher-cost sensors. In this paper, we regard all malfunctions, including the collection of abnormal values or missing data, as anomalies, and we aim to detect these anomalies for the maintenance of PM measuring sensors. We propose a novel architecture for this task that we call the hypothesis pruning generative adversarial network (HP-GAN). Through comparative experiments, we achieve AUROC and AUPRC values of 0.948 and 0.967, respectively, in the detection of anomalies in LLS-based PM measuring sensors. We conclude that our HP-GAN is a cutting-edge model for anomaly detection.
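
A minimal sketch of reconstruction-based anomaly scoring for sensor windows, in the spirit of GAN-based detection; this is not the authors' HP-GAN, and `generator`, the window representation, and the MSE score are assumptions:

```python
# Sketch only: score PM-sensor windows by reconstruction error and flag
# anomalies above a threshold. Assumes `generator` is any trained model
# (e.g., the generator of a GAN or an autoencoder) that maps a window to
# a "normal-looking" reconstruction of the same shape.
import numpy as np

def anomaly_scores(generator, windows):
    """Return one anomaly score (mean squared reconstruction error) per window."""
    scores = []
    for w in windows:
        recon = generator(w)
        scores.append(float(np.mean((np.asarray(w) - np.asarray(recon)) ** 2)))
    return np.array(scores)

def detect_anomalies(generator, windows, threshold):
    """Flag windows whose reconstruction error exceeds the chosen threshold."""
    return anomaly_scores(generator, windows) > threshold
```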

Performance Analysis of Optimal Neural Network structural BPN based on character value of Hidden node (은닉노드의 특징 값을 기반으로 한 최적신경망 구조의 BPN성능분석)

  • 강경아;이기준;정채영
    • Journal of the Korea Society of Computer and Information / v.5 no.2 / pp.30-36 / 2000
  • Hidden nodes act as functional units that classify the features of a given input pattern. Therefore, the number of hidden nodes in a neural network has an important effect on the result, yet deciding that number for the back-propagation learning algorithm remains a problem. If too few hidden nodes are used, learning cannot be completed because the given input patterns cannot be classified sufficiently. If too many are used, unnecessary computation and wasted memory lead to overfitting, so the recognition rate drops and generalization suffers. This paper therefore suggests a method that decides the number of hidden nodes using feature information computed from the parameters of the learning algorithm. The node with the maximum feature value is excluded from the pruning targets; the feature value of each remaining hidden node is compared with the average feature value of the rest, and hidden nodes whose feature value is smaller than that average are pruned. This determines the optimum structure of the multi-layer neural network and improves its learning speed.
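
The pruning rule described above is concrete enough to sketch; the function below is an illustrative reconstruction (names and data layout are ours, not the paper's):

```python
# Keep the hidden node with the largest feature value, compare every other
# node's feature value with the average of the remaining nodes, and prune
# the nodes whose feature value falls below that average.
import numpy as np

def nodes_to_keep(feature_values):
    """Return indices of hidden nodes that survive pruning."""
    values = np.asarray(feature_values, dtype=float)
    max_idx = int(np.argmax(values))          # excluded from the pruning targets
    rest_avg = np.delete(values, max_idx).mean()
    return [i for i, v in enumerate(values) if i == max_idx or v >= rest_avg]

# Example: nodes 1 and 4 fall below the average of the non-maximal nodes.
print(nodes_to_keep([0.9, 0.1, 0.4, 0.35, 0.05]))   # [0, 2, 3]
```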

Large Vocabulary Continuous Speech Recognition Based on Language Model Network (언어 모델 네트워크에 기반한 대어휘 연속 음성 인식)

  • 안동훈;정민화
    • The Journal of the Acoustical Society of Korea / v.21 no.6 / pp.543-551 / 2002
  • In this paper, we present an efficient decoding method that performs in real time for a 20k-word continuous speech recognition task. The basic search method is a one-pass Viterbi decoder over the search space constructed from a novel language model (LM) network. With the consistent search-space representation that the LM network derives from various language models, we incorporate basic pruning strategies, and the surviving tokens constitute a dynamic search space. To facilitate post-processing, the decoder subsequently produces a word graph and an N-best list. The decoder is tested on a 20k-word database and evaluated with respect to accuracy and real-time factor (RTF).
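
The beam-pruning step of a token-passing Viterbi decoder can be sketched as follows; this is a generic illustration, not the paper's LM-network search space, and the token representation is an assumption:

```python
# Keep only tokens whose log score lies within `beam_width` of the best
# score in the current frame; the surviving tokens form the dynamic
# search space for the next frame.

def prune_tokens(tokens, beam_width):
    """tokens: dict mapping a search-space state to its best log score."""
    if not tokens:
        return tokens
    best = max(tokens.values())
    return {state: score for state, score in tokens.items()
            if score >= best - beam_width}

# Example: with a beam width of 10, the weakest hypothesis is discarded.
print(prune_tokens({"sa": -42.0, "sb": -45.5, "sc": -60.0}, 10.0))
```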

A Progressive Skyline Region Decision Method (점진적인 스카이라인 영역 결정 기법)

  • Kim, Jin-Ho;Park, Young-Bae
    • Journal of KIISE:Databases / v.34 no.1 / pp.70-83 / 2007
  • Most work on skyline queries has focused on static data objects. With the advance of mobile applications, however, the need for continuous skyline queries over moving objects has been increasing. To process continuous skyline queries, a 4-phased decision method for skyline regions was recently proposed. However, it is not feasible for large numbers of data objects because of the high cost of computing skyline regions. To solve this problem, this paper first provides a theoretical analysis of the 4-phased decision method. We then propose a progressive decision method of skyline regions for the 4-phased decision method, which consists of distance-based pruning and extent shrinking of region decision lines. The proposed method efficiently reduces the cost of deciding skyline regions in the 4-phased method. This paper also presents experimental results showing the effectiveness of the proposed method.
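
For readers unfamiliar with skyline queries, the dominance-based pruning they rely on can be illustrated with a basic block-nested-loop skyline over static points; this is background only, not the paper's 4-phased region decision or its distance-based pruning:

```python
# A point dominates another if it is no worse in every dimension and strictly
# better in at least one (smaller is better here). Dominated points are pruned.

def dominates(p, q):
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    result = []
    for p in points:
        if any(dominates(q, p) for q in result):
            continue                                      # p is pruned
        result = [q for q in result if not dominates(p, q)] + [p]
    return result

print(skyline([(1, 5), (2, 2), (4, 1), (3, 3)]))          # (3, 3) is pruned
```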

A Density-based k-Nearest Neighbors Query Method (밀도 기반의 k-최근접 질의 처리)

  • Jang, In-Sung;Han, Eun-Young;Cho, Dae-Soo
    • Journal of the Korean Association of Geographic Information Studies / v.6 no.4 / pp.59-70 / 2003
  • Spatial database systems provide many query types, and most of them require frequent disk I/O and much CPU time. k-NN search finds the k objects closest to the query point, and several k-NN search methods have been proposed so far. Among these, the MINMAX distance method aims to avoid accessing unnecessary nodes by adopting a pruning technique, but it still accesses more disk pages than necessary while pruning. In this paper, we propose a new k-NN search algorithm based on object density. We predict the radius expected to contain the k nearest objects using the density of the data set, search for objects within this radius, and enlarge the radius if the search fails. Experimental results show that this method outperforms the MINMAX distance method, accessing up to 22% fewer disk pages (7% fewer on average).
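
The radius-prediction idea described above can be sketched as follows; this is a simplified in-memory version (no R-tree, uniform-density assumption), with parameter names of our own choosing:

```python
# Estimate a circle expected to contain k points from the global density,
# run a range search within that radius, and enlarge the radius if fewer
# than k points are found.
import math

def knn_by_density(points, query, k, area):
    """points: list of (x, y); area: extent of the 2-D data space."""
    density = len(points) / area                          # points per unit area
    radius = math.sqrt(k / (math.pi * density))           # circle expected to hold k points
    while True:
        hits = [p for p in points if math.dist(p, query) <= radius]
        if len(hits) >= k:
            return sorted(hits, key=lambda p: math.dist(p, query))[:k]
        radius *= 1.5                                      # too few hits: grow the radius

print(knn_by_density([(0, 0), (1, 1), (2, 2), (5, 5)], (0, 0), k=2, area=36.0))
```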

Hierarchical ART2 Classification Model Combined with the Adaptive Searching Strategy (적응적 탐색 전략을 갖춘 계층적 ART2 분류 모델)

  • 김도현;차의영
    • Journal of KIISE:Software and Applications / v.30 no.7_8 / pp.649-658 / 2003
  • We propose a hierarchical ART2 network architecture for performance improvement and a fast pattern classification model using fitness-based selection. The hierarchical network creates coarse clusters in the first ART2 layer by unsupervised learning, and then creates fine clusters for each first-layer cluster in the second layer by supervised learning. First, an input pattern is compared with each first-layer cluster and candidate clusters are selected by a fitness measure. We design an optimized fitness function that prunes clusters by measuring the relative distance ratio between an input pattern and the clusters, which improves both speed and accuracy. Next, the input pattern is compared with the second-layer clusters connected to the selected candidates to find the winner cluster. Finally, the pattern is classified by the label of the winner cluster. Experimental results show that the proposed method is more accurate and faster than other approaches.
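
The candidate-selection step with distance-ratio pruning might look like the following sketch; the exact fitness function and the tolerance value are our assumptions, not the paper's formulation:

```python
# Compare an input pattern with the first-layer cluster centres and keep only
# the clusters whose distance, relative to the nearest cluster, is within a
# tolerance; the rest are pruned before the second-layer comparison.
import numpy as np

def candidate_clusters(x, centers, tolerance=2.0):
    """Return indices of first-layer clusters kept for the finer search."""
    dists = np.linalg.norm(np.asarray(centers, dtype=float) - np.asarray(x, dtype=float), axis=1)
    ratios = dists / max(dists.min(), 1e-12)              # relative distance ratio
    return [i for i, r in enumerate(ratios) if r <= tolerance]

print(candidate_clusters([0.2, 0.1], [[0.0, 0.0], [0.3, 0.2], [2.0, 2.0]]))   # [0, 1]
```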

Structure Optimization of Neural Networks using Rough Set Theory (러프셋 이론을 이용한 신경망의 구조 최적화)

  • 정영준;이동욱;심귀보
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1998.03a / pp.49-52 / 1998
  • Neural networks perform well in pattern classification, control, and many other fields thanks to their learning ability. However, there is no effective rule or systematic approach to determine their optimal structure. In this paper, we propose a new pruning-based method to find the optimal structure of a feed-forward multi-layer neural network by eliminating its redundant elements. To find the redundant elements, we analyze the error and weight changes using rough set theory while executing the back-propagation learning algorithm.
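
As a crude stand-in for the redundancy analysis (the paper uses rough set theory, which is not reproduced here), one way to flag candidate elements for pruning is to look at how little their weights change during back-propagation:

```python
# Hypothetical heuristic, not the paper's method: hidden units whose outgoing
# weight vectors barely changed over training are flagged as possibly redundant.
import numpy as np

def possibly_redundant_units(weights_before, weights_after, change_tol=1e-3):
    """weights_*: (n_hidden, n_out) outgoing weight matrices; returns unit indices."""
    delta = np.linalg.norm(np.asarray(weights_after) - np.asarray(weights_before), axis=1)
    return [i for i, d in enumerate(delta) if d < change_tol]
```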

Pruning Methodology for Reducing the Size of Speech DB for Corpus-based TTS Systems (코퍼스 기반 음성합성기의 데이터베이스 축소 방법)

  • 최승호;엄기완;강상기;김진영
    • The Journal of the Acoustical Society of Korea / v.22 no.8 / pp.703-710 / 2003
  • Because of their human-like synthesized speech quality, corpus-based text-to-speech (CB-TTS) systems have recently been studied actively worldwide. However, their large speech database (DB) severely restricts their application. In this paper, we propose and evaluate three DB reduction algorithms designed to address this drawback. The first method is based on a K-means clustering approach that selects k representatives among multiple unit instances. The second method keeps only those unit instances that are selected during synthesis when a domain-restricted text is used as input to the synthesizer. The third method is a hybrid of the above two: a large text is used as input, the unit instances used during synthesis and their occurrence information are extracted, and a modified K-means clustering that also takes this occurrence information into account is then applied. Finally, we compare the three pruning methods by evaluating synthesized speech quality at similar DB reduction rates. Based on perceptual listening tests, we conclude that the last method performs best among the three algorithms; moreover, the results show that it can reduce the DB size without loss of speech quality.
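
The first reduction method (k-means selection of representatives) can be sketched as follows; the feature representation and parameter names are assumptions, and this is not the paper's modified clustering:

```python
# For one unit type, cluster its instances and keep only the instance closest
# to each cluster centre as a representative; all other instances are pruned
# from the DB.
import numpy as np
from sklearn.cluster import KMeans

def representative_instances(features, k):
    """features: (n_instances, n_features) array; returns indices to keep."""
    features = np.asarray(features, dtype=float)
    k = min(k, len(features))
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(features)
    keep = {int(np.argmin(np.linalg.norm(features - c, axis=1)))
            for c in km.cluster_centers_}
    return sorted(keep)
```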

Improving Generalization Performance of Neural Networks using Natural Pruning and Bayesian Selection (자연 프루닝과 베이시안 선택에 의한 신경회로망 일반화 성능 향상)

  • 이현진;박혜영;이일병
    • Journal of KIISE:Software and Applications / v.30 no.3_4 / pp.326-338 / 2003
  • The objective of neural network design and model selection is to construct an optimal network with good generalization performance. However, training data contain noise and are often insufficient in number, which creates a gap between the true probability distribution and the empirical one. This gap makes the learning parameters overfit the training data and deviate from the true data distribution, which is called the overfitting phenomenon. An overfitted neural network approximates the training data well but gives poor predictions for new, unseen data, and the overfitting becomes more severe as the complexity of the network increases. In this paper, taking a statistical viewpoint, we propose an integrated process of neural network design and model selection to improve generalization performance. First, using natural gradient learning with adaptive regularization, we obtain, with fast convergence, optimal parameters that are not overfitted to the training data. By applying natural pruning to these parameters, we generate several candidate network models of different sizes. Finally, we select an optimal model among the candidates based on the Bayesian Information Criterion. Through computer simulations on benchmark problems, we confirm the generalization and structure-optimization performance of the proposed integrated process of learning and model selection.
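
The final selection step, choosing among pruned candidate networks with the Bayesian Information Criterion, can be sketched as below; this uses the generic BIC formula, not necessarily the paper's exact criterion:

```python
# BIC = -2 * log-likelihood + (number of parameters) * log(number of samples);
# among candidates of different sizes, the model with the smallest BIC wins.
import math

def bic(log_likelihood, n_params, n_samples):
    return -2.0 * log_likelihood + n_params * math.log(n_samples)

def select_model(candidates, n_samples):
    """candidates: list of (name, log_likelihood, n_params); returns the best name."""
    return min(candidates, key=lambda c: bic(c[1], c[2], n_samples))[0]

# Example: the pruned network wins when the likelihood gap is small.
print(select_model([("full", -120.0, 50), ("pruned", -123.0, 20)], n_samples=200))
```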

Generation of Efficient Fuzzy Classification Rules for Intrusion Detection (침입 탐지를 위한 효율적인 퍼지 분류 규칙 생성)

  • Kim, Sung-Eun;Khil, A-Ra;Kim, Myung-Won
    • Journal of KIISE:Software and Applications / v.34 no.6 / pp.519-529 / 2007
  • In this paper, we investigate the use of fuzzy rules for efficient intrusion detection. We use an evolutionary algorithm to optimize the set of fuzzy rules for intrusion detection by constructing fuzzy decision trees. For efficient execution of the evolutionary algorithm, we use supervised clustering to generate an initial set of membership functions for the fuzzy rules. In our method, both the performance and the complexity of the fuzzy rules (or fuzzy decision trees) are taken into account in fitness evaluation. We also use evaluation with data partitioning, membership-degree caching, and zero-pruning to reduce the time needed to construct and evaluate fuzzy decision trees. For performance evaluation, we experimented with our method on the KDD'99 Cup intrusion detection data and confirmed that it outperforms existing methods. Compared with the KDD'99 Cup winner, the accuracy was increased by 1.54% while the cost was reduced by 20.8%.
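
The zero-pruning mentioned above, skipping branches whose membership degree is zero while evaluating a fuzzy decision tree, can be sketched as follows; the tree representation is our assumption:

```python
# Accumulate class scores down the tree, multiplying membership degrees along
# the path; branches with zero membership for the current example are skipped.

def classify(node, example, degree=1.0):
    if "label" in node:                                   # leaf node
        return {node["label"]: degree}
    scores = {}
    value = example[node["attribute"]]
    for membership_fn, child in node["branches"]:
        m = membership_fn(value)
        if m == 0.0:
            continue                                      # zero-pruning: skip the branch
        for label, s in classify(child, example, degree * m).items():
            scores[label] = scores.get(label, 0.0) + s
    return scores

tree = {"attribute": "duration",
        "branches": [(lambda v: 1.0 if v < 10 else 0.0, {"label": "normal"}),
                     (lambda v: max(0.0, (v - 5) / 10), {"label": "attack"})]}
print(classify(tree, {"duration": 3.0}))                  # {'normal': 1.0}
```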