• Title/Summary/Keyword: Entropy model

Search results: 489

Design of the ICMEP Algorithm for the Highly Efficient Entropy Encoding (고효율 엔트로피 부호화를 위한 ICMEP 알고리즘 설계)

  • 이선근;임순자;김환용
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.41 no.4
    • /
    • pp.75-82
    • /
    • 2004
  • The channel transmission rate is increased by combining the Huffman algorithm, which yields minimum average code lengths for image information and good instantaneous decoding capability, with the Lempel-Ziv algorithm, which offers fast processing during compression. To raise the processing speed of the compression stage, the ICMEP algorithm is proposed and an entropy encoder for HDTV is designed and verified. The ICMEP entropy encoder was designed in a top-down manner, with source code and test benches written in behavioral VHDL. Simulation results confirm that the implemented ICMEP entropy encoder improves overall system efficiency by preventing memory saturation and increasing the compression ratio.
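
The ICMEP design itself is a VHDL hardware implementation, but the entropy-coding building block it pairs with Lempel-Ziv is standard Huffman coding. The following is a minimal, purely illustrative sketch in Python (not the paper's VHDL design); the sample string is hypothetical.

```python
# Minimal Huffman prefix-code construction from symbol frequencies (illustrative only).
import heapq
from collections import Counter

def huffman_codes(data: bytes) -> dict:
    """Return a prefix-code table {symbol: bitstring} built from symbol frequencies."""
    freq = Counter(data)
    # Heap entries: [weight, tie-breaker, [(symbol, code), ...]]
    heap = [[w, i, [(sym, "")]] for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate single-symbol input
        return {heap[0][2][0][0]: "0"}
    counter = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)             # two least frequent subtrees
        hi = heapq.heappop(heap)
        lo[2] = [(s, "0" + c) for s, c in lo[2]]
        hi[2] = [(s, "1" + c) for s, c in hi[2]]
        heapq.heappush(heap, [lo[0] + hi[0], counter, lo[2] + hi[2]])
        counter += 1
    return dict(heap[0][2])

sample = b"entropy encoding example"
table = huffman_codes(sample)
encoded = "".join(table[b] for b in sample)
print(len(encoded), "bits coded vs", 8 * len(sample), "bits uncoded")
```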

Tri-training algorithm based on cross entropy and K-nearest neighbors for network intrusion detection

  • Zhao, Jia;Li, Song;Wu, Runxiu;Zhang, Yiying;Zhang, Bo;Han, Longzhe
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.12
    • /
    • pp.3889-3903
    • /
    • 2022
  • To address the problem of low detection accuracy caused by training noise from mislabeling when Tri-training is used for network intrusion detection (NID), we propose a Tri-training algorithm based on cross entropy and K-nearest neighbors (TCK) for network intrusion detection. The proposed algorithm replaces the classification error rate with cross-entropy to better capture the difference between the practical and predicted distributions of the model and to reduce the prediction bias that mislabeled data induces on unlabeled data; K-nearest neighbors are used to remove mislabeled data and reduce their number. To verify the effectiveness of the proposed algorithm, experiments were conducted on 12 UCI datasets and the NSL-KDD network intrusion dataset, and four indexes, accuracy, recall, F-measure and precision, were used for comparison. The experimental results show that TCK outperforms the conventional Tri-training algorithms as well as Tri-training algorithms that use only the cross-entropy or the K-nearest neighbor strategy.
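
The two ingredients of TCK, cross-entropy as a confidence measure and K-nearest-neighbor filtering of mislabeled pseudo-labels, can be sketched independently of the full Tri-training loop. The snippet below is a minimal illustration with hypothetical data, not the authors' implementation.

```python
import numpy as np

def cross_entropy(probs: np.ndarray, labels: np.ndarray) -> float:
    """Mean cross-entropy between predicted class probabilities and (pseudo-)labels."""
    eps = 1e-12
    return float(-np.mean(np.log(probs[np.arange(len(labels)), labels] + eps)))

def knn_filter(X: np.ndarray, y: np.ndarray, k: int = 5) -> np.ndarray:
    """Keep samples whose label matches the majority label of their k nearest neighbors."""
    keep = np.zeros(len(y), dtype=bool)
    for i in range(len(y)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                          # exclude the point itself
        nn = np.argsort(d)[:k]
        votes = np.bincount(y[nn], minlength=y.max() + 1)
        keep[i] = votes.argmax() == y[i]
    return keep

# toy usage with hypothetical pseudo-labeled data
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = (X[:, 0] > 0).astype(int)
y[:5] = 1 - y[:5]                              # inject label noise
print("kept", knn_filter(X, y).sum(), "of", len(y), "pseudo-labeled samples")
probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
print("cross-entropy:", round(cross_entropy(probs, np.array([0, 1, 1])), 3))
```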

An Efficiency Analysis of Supply Chain Quality Management Using the Multi-stage DEA Model: Focused on the Domestic Defense Industry Companies (다단계 DEA 모형을 활용한 공급망 품질경영 효율성 분석: 국내 방산업체를 대상으로)

  • Jeon, Gyeryong;Yoo, Hanjoo
    • Journal of Korean Society for Quality Management
    • /
    • v.47 no.1
    • /
    • pp.163-186
    • /
    • 2019
  • Purpose: The purpose of this study was to present a methodology for assessing the efficiency of supply chain quality management that reflects the characteristics of defense industries, providing academic and policy implications for strengthening the quality competitiveness of military supplies. Methods: Using empirical data from the defense industry, an efficiency evaluation was conducted with a multi-stage DEA/Entropy model for the defense companies subject to the 2017 quality-level survey of military goods manufacturers. Results: In the first stage of the multi-stage DEA model, the quality management performance efficiency analysis, efficiency under both the CCR and BCC models was higher than that of the parent companies. In the second stage, the CCR model showed slightly higher efficiency than the parent companies and the BCC model showed higher efficiency than the parent companies. The overall efficiency of the multi-stage DEA model, obtained by multiplying the first-stage efficiency by the second-stage efficiency, was also higher than that of the parent companies. Conclusion: The results show that the efficiency of supply chain quality management performance and profitability in the defense industry can be analyzed for the first time using the multi-stage DEA/Entropy model, identifying specific inefficiencies and supporting objective decision making.
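
The DEA building block of the study can be illustrated with the standard input-oriented CCR envelopment model; the paper's multi-stage DEA/Entropy model chains such evaluations and adds entropy weighting, which is not reproduced here. A minimal sketch with hypothetical decision-making units:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X: np.ndarray, Y: np.ndarray, o: int) -> float:
    """Input-oriented CCR efficiency of DMU `o`.
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs).
    The BCC model would add the convexity constraint sum(lambda) = 1."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                                  # minimize theta
    A_ub = np.zeros((m + s, n + 1))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[o]                         # sum_j lambda_j x_ij <= theta * x_io
    A_ub[:m, 1:] = X.T
    A_ub[m:, 1:] = -Y.T                         # sum_j lambda_j y_rj >= y_ro
    b_ub[m:] = -Y[o]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun

# toy example: 4 DMUs, 2 inputs, 1 output (hypothetical data, not the survey data)
X = np.array([[2.0, 3.0], [4.0, 1.0], [4.0, 4.0], [6.0, 2.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0]])
print([round(ccr_efficiency(X, Y, j), 3) for j in range(len(X))])
```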

An Anomalous Event Detection System based on Information Theory (엔트로피 기반의 이상징후 탐지 시스템)

  • Han, Chan-Kyu;Choi, Hyoung-Kee
    • Journal of KIISE:Information Networking
    • /
    • v.36 no.3
    • /
    • pp.173-183
    • /
    • 2009
  • We present a real-time monitoring system that uses entropy to detect anomalous network events. Entropy captures the degree of disorder in a system: when an abnormal factor disturbs the current system, the entropy shows an abrupt change. In this paper we deliberately model the Internet in order to measure the entropy; packets flowing between the two networks tend to sustain the current entropy value. The proposed system tracks the entropy over time to pinpoint sudden changes in its value. The time series of entropy values is transformed into a two-dimensional domain to help visually inspect activities on the network. We evaluate the system using network traffic traces containing notorious worms and DoS attacks on a testbed. Furthermore, we compare the proposed system with time-series forecasting methods such as EWMA, Holt-Winters, and PCA in terms of sensitivity. The results suggest that our approach detects anomalies with fairly high accuracy. Our contributions are twofold: (1) highly sensitive detection of anomalies and (2) visualization of network activities to alert on anomalies.
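
The core quantity such a system tracks is the Shannon entropy of a traffic-feature distribution computed per time window; an abrupt shift in that value flags an anomaly. A minimal sketch with toy packet data follows (the paper's windowing, visualization, and testbed traces are not reproduced):

```python
import math
from collections import Counter

def shannon_entropy(items) -> float:
    """Shannon entropy (bits) of the empirical distribution of `items`,
    e.g. the destination ports observed in one time window."""
    counts = Counter(items)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# toy traffic windows: a sharp drop in destination-port entropy can indicate
# a worm or DoS flood concentrating traffic on a few ports
normal_window = [80, 443, 53, 22, 8080, 443, 80, 25, 993, 110]
attack_window = [445] * 9 + [80]
print("normal window:", round(shannon_entropy(normal_window), 2), "bits")
print("attack window:", round(shannon_entropy(attack_window), 2), "bits")
```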

Information entropy based algorithm of sensor placement optimization for structural damage detection

  • Ye, S.Q.;Ni, Y.Q.
    • Smart Structures and Systems
    • /
    • v.10 no.4_5
    • /
    • pp.443-458
    • /
    • 2012
  • The structural health monitoring (SHM) benchmark study on the optimal sensor placement problem for the instrumented Canton Tower has been launched. It follows the success of the modal identification and model updating for the Canton Tower in the previous benchmark study, and focuses on the optimal placement of vibration sensors (accelerometers) with the aim of improving the SHM system. In this paper, the sensor placement problem for the Canton Tower and the benchmark model for this study are first detailed. An information entropy based sensor placement method for damage detection is then proposed and applied to the benchmark problem. The procedure to be implemented for structural damage detection using the data obtained from the optimal sensor placement strategy is introduced, and the information on structural damage is specified. The information entropy based method is applied to measure the uncertainties throughout the damage detection process using the obtained data. Accordingly, a multi-objective optimization problem for sensor placement is formulated. The optimal solution is the one that provides equally most informative data for all objectives, so that the data obtained are most informative for structural damage detection. To validate the effectiveness of the optimally determined sensor placement, damage detection is performed on different damage scenarios of the benchmark model using noise-free and noise-corrupted measurements, respectively. The results show that, in comparison with the existing in-service sensor deployment on the structure, the optimally determined placement further enhances the capability of damage detection.
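
As background for the entropy-based placement idea, the information entropy of the parameter estimates is (up to constants) minimized by maximizing the determinant of the Fisher information matrix assembled from the mode shapes at the instrumented DOFs. The greedy sketch below illustrates only that single-objective version; the paper formulates a multi-objective problem over damage-detection objectives, and the mode-shape matrix here is a random stand-in, not the Canton Tower benchmark model.

```python
import numpy as np

def greedy_sensor_placement(Phi: np.ndarray, n_sensors: int) -> list:
    """Greedily pick sensor DOFs that maximize log-det of the Fisher information
    matrix Q = Phi_s^T Phi_s, which (up to constants) minimizes the information
    entropy of the modal-parameter estimates.  Phi: (n_dof, n_modes)."""
    selected, remaining = [], list(range(Phi.shape[0]))
    for _ in range(n_sensors):
        best_dof, best_val = None, -np.inf
        for d in remaining:
            rows = Phi[selected + [d], :]
            # small regularizer keeps log-det defined before n_sensors >= n_modes
            q = rows.T @ rows + 1e-9 * np.eye(Phi.shape[1])
            val = np.linalg.slogdet(q)[1]
            if val > best_val:
                best_dof, best_val = d, val
        selected.append(best_dof)
        remaining.remove(best_dof)
    return selected

# toy example: 20 candidate DOFs, 3 modes (hypothetical mode shapes)
rng = np.random.default_rng(1)
Phi = rng.normal(size=(20, 3))
print(greedy_sensor_placement(Phi, 5))
```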

Part-Of-Speech Tagging using multiple sources of statistical data (이종의 통계정보를 이용한 품사 부착 기법)

  • Cho, Seh-Yeong
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.18 no.4
    • /
    • pp.501-506
    • /
    • 2008
  • Statistical POS tagging is prone to error because of the inherent limitations of statistical data, especially when a single source of data is used. It is therefore widely agreed that the possibility of further improvement lies in exploiting various knowledge sources. However, these data sources are bound to be inconsistent with each other. This paper shows the feasibility of applying a maximum entropy model to Korean POS tagging. We use n-gram data and trigger-pair data as the knowledge sources, and show how the perplexity measure varies when the two sources are combined with the maximum entropy method. In the experiment, a trigram model produced 94.9% accuracy with a Hidden Markov Model, and accuracy increased to 95.6% when it was combined with trigger-pair data using the maximum entropy method. This clearly shows the possibility of further improvement as more knowledge sources are developed and combined with the ME method.
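
A full maximum entropy tagger learns feature weights with an iterative procedure such as GIS, but the way two knowledge sources are fused, a weighted log-linear combination that is then renormalized, and the perplexity measure used for evaluation can be sketched compactly. The distributions and weights below are hypothetical, not the paper's trained model.

```python
import math

def loglinear_combine(p1: dict, p2: dict, w1: float = 0.5, w2: float = 0.5) -> dict:
    """Log-linearly combine two tag distributions (the core operation of a
    maximum-entropy model once feature weights are known) and renormalize."""
    tags = set(p1) | set(p2)
    scores = {t: (p1.get(t, 1e-9) ** w1) * (p2.get(t, 1e-9) ** w2) for t in tags}
    z = sum(scores.values())
    return {t: s / z for t, s in scores.items()}

def perplexity(token_probs) -> float:
    """Perplexity of a sequence given the model probability of each correct tag."""
    h = -sum(math.log2(p) for p in token_probs) / len(token_probs)
    return 2 ** h

# hypothetical tag distributions for one word from the two knowledge sources
ngram = {"NOUN": 0.6, "VERB": 0.3, "ADJ": 0.1}
trigger = {"NOUN": 0.8, "VERB": 0.15, "ADJ": 0.05}
print(loglinear_combine(ngram, trigger))
print("perplexity of a toy sequence:", round(perplexity([0.9, 0.7, 0.95, 0.6]), 3))
```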

An Adaptive Data Compression Algorithm for Video Data (사진데이타를 위한 한 Adaptive Data Compression 방법)

  • 김재균
    • Journal of the Korean Institute of Telematics and Electronics
    • /
    • v.12 no.2
    • /
    • pp.1-10
    • /
    • 1975
  • This paper presents an adaptive data compression algorithm for video data. The coding complexity caused by the high correlation in the given data sequence is alleviated by coding the difference sequence rather than the data sequence itself. Adaptation to the nonstationary statistics of the data is confined within a code set consisting of two constant-length codes and six modified Shannon-Fano codes. It is assumed that the probability distributions of the difference data sequence and of the data entropy are Laplacian and Gaussian, respectively. The adaptive coding performance is compared for two code-selection criteria: the entropy and $P_0 = \Pr[\text{difference value} = 0]$. It is shown that a compression ratio of 2:1 is achievable with adaptive coding. The gain of adaptive coding over fixed coding is shown to be about 10% in compression ratio and 15% in code efficiency. In addition, $P_0$ is found to be not only a convenient criterion for code selection but also a parameter efficient enough to perform almost as well as the entropy.
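
The two code-selection criteria compared in the paper, the entropy of the difference sequence and $P_0 = \Pr[\text{difference} = 0]$, are easy to compute from a scan line. A minimal sketch with a hypothetical line of correlated samples (the actual fixed-length and modified Shannon-Fano code set is not reproduced):

```python
import numpy as np

def difference_code_stats(samples: np.ndarray):
    """Compute the difference sequence of a scan line and the two statistics
    used for adaptive code selection: the entropy of the differences and
    P0 = Pr[difference == 0]."""
    diff = np.diff(samples.astype(int))
    values, counts = np.unique(diff, return_counts=True)
    p = counts / counts.sum()
    entropy = -np.sum(p * np.log2(p))
    p0 = counts[values == 0].sum() / counts.sum()
    return diff, entropy, p0

# toy "video" line: highly correlated samples give a peaked difference distribution
line = np.array([100, 101, 101, 102, 103, 103, 103, 104, 106, 106], dtype=np.uint8)
diff, h, p0 = difference_code_stats(line)
print("differences:", diff)
print("entropy = %.2f bits/sample, P0 = %.2f" % (h, p0))
# a simple adaptive rule in the spirit of the paper: choose a short fixed-length
# code when P0 is high, and a variable-length code otherwise
```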


A Comparative Assessment of the Efficacy of Frequency Ratio, Statistical Index, Weight of Evidence, Certainty Factor, and Index of Entropy in Landslide Susceptibility Mapping

  • Park, Soyoung;Kim, Jinsoo
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.1
    • /
    • pp.67-81
    • /
    • 2020
  • The rapid climatic changes caused by global warming are producing abnormal weather conditions worldwide, which in some regions have increased the frequency of landslides. This study analyzed and compared landslide susceptibility using the Frequency Ratio (FR), Statistical Index, Weight of Evidence, Certainty Factor, and Index of Entropy (IoE) models for Woomyeon Mountain in South Korea. Through the construction of a landslide inventory map, 164 landslide locations were identified in total; 114 (70%) were chosen at random for model training and the remaining 50 (30%) were reserved for model validation. Sixteen landslide conditioning factors related to topography, hydrology, pedology, and forestry were considered. The results were evaluated and compared using the relative operating characteristic curve and statistical indexes. The analysis showed that the FR and IoE models performed better than the other models. The FR model, with a prediction rate of 0.805, performed slightly better than the IoE model, whose prediction rate was 0.798; both models had the same sensitivity of 0.940. The IoE model gave a specificity of 0.329 and an accuracy of 0.710, outperforming the FR model, which gave 0.276 and 0.680, respectively, in predicting spatial landslide occurrence in the study area. The generated landslide susceptibility maps can be useful for disaster management and land-use planning.
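
For reference, the Frequency Ratio and Index of Entropy statistics named above have simple closed forms per conditioning factor. The sketch below uses hypothetical class counts for a single factor, not the Woomyeon Mountain data:

```python
import numpy as np

def frequency_ratio(landslide_counts, class_counts):
    """FR_i = (landslide share of class i) / (area share of class i)."""
    ls = np.asarray(landslide_counts, dtype=float)
    cl = np.asarray(class_counts, dtype=float)
    return (ls / ls.sum()) / (cl / cl.sum())

def index_of_entropy_weight(fr):
    """Factor weight W_j from the index-of-entropy formulation:
    P_ij = FR_ij / sum(FR),  H_j = -sum P_ij log2 P_ij,
    I_j = (H_jmax - H_j) / H_jmax with H_jmax = log2(#classes),
    W_j = I_j * mean(FR)."""
    fr = np.asarray(fr, dtype=float)
    p = fr / fr.sum()
    h = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    h_max = np.log2(len(fr))
    return (h_max - h) / h_max * fr.mean()

# hypothetical slope-angle factor with 4 classes
fr = frequency_ratio(landslide_counts=[5, 20, 60, 29], class_counts=[400, 300, 200, 100])
print("FR per class:", np.round(fr, 2))
print("IoE weight of this factor:", round(index_of_entropy_weight(fr), 3))
```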

Which country's end devices are most sharing vulnerabilities in East Asia? (거시적인 관점에서 바라본 취약점 공유 정도를 측정하는 방법에 대한 연구)

  • Kim, Kwangwon;Won, Yoon Ji
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.25 no.5
    • /
    • pp.1281-1291
    • /
    • 2015
  • Compared to the past, people can now control end devices via open channels. Although these open channels provide convenience to users, they frequently turn into security holes. In this paper, we propose a new human-centered security risk analysis method that puts weight on the relationships between end devices. The measure is derived from the concept of the entropy rate, which is the uncertainty per node in a network. As there are limitations to using the entropy rate as a measure when comparing networks of different sizes, we divide the entropy rate of a network by the maximum entropy rate of that network. We also show how to avoid violating irreducibility, which is a precondition for the entropy rate of a random walk on a graph.
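
The normalization described above, dividing the entropy rate of a random walk on the device graph by a maximum entropy rate so that differently sized networks become comparable, can be sketched as follows. Using log2(n-1), the rate of the complete graph, as the maximum is an assumption made for illustration; the paper's exact normalization and its irreducibility fix may differ, and the adjacency matrix is hypothetical.

```python
import numpy as np

def random_walk_entropy_rate(A: np.ndarray) -> float:
    """Entropy rate (bits/step) of the simple random walk on an undirected,
    connected (irreducible) graph with adjacency matrix A:
    H = sum_i pi_i * H(P_i), with pi_i = deg(i) / sum(deg), P_ij = A_ij / deg(i)."""
    deg = A.sum(axis=1)
    if np.any(deg == 0):
        raise ValueError("isolated node: the walk is not irreducible")
    pi = deg / deg.sum()
    P = A / deg[:, None]
    row_entropy = np.array([-np.sum(p[p > 0] * np.log2(p[p > 0])) for p in P])
    return float(pi @ row_entropy)

def normalized_entropy_rate(A: np.ndarray) -> float:
    """Divide by log2(n-1), the entropy rate of the complete graph on the same
    node set (one plausible 'maximum entropy rate' for comparing network sizes)."""
    n = A.shape[0]
    return random_walk_entropy_rate(A) / np.log2(n - 1)

# toy 4-node network of end devices (hypothetical adjacency)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
print(round(normalized_entropy_rate(A), 3))
```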

Analysis on the Amino Acid Distributions with Position in Transmembrane Proteins

  • Chi, Sang-Mun
    • Journal of the Korean Data and Information Science Society
    • /
    • v.16 no.4
    • /
    • pp.745-758
    • /
    • 2005
  • This paper presents a statistical analysis of the position-specific distributions of amino acid residues in transmembrane proteins. A hidden Markov model segments membrane proteins into regions of homogeneous statistical properties from variable-length amino acid sequences. The segmented residues are analyzed using the chi-square statistic and relative entropy in order to find position-specific amino acids. The analysis showed that isoleucine and valine are concentrated in the center of membrane-spanning regions, while tryptophan, tyrosine, and positively charged residues are found frequently near both ends of the membrane.
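
The relative entropy used here is the Kullback-Leibler divergence between a position-specific amino-acid distribution and a background distribution; positions with large divergence are the position-specific ones. A minimal sketch over a reduced, hypothetical alphabet:

```python
import numpy as np

def relative_entropy(p: np.ndarray, q: np.ndarray) -> float:
    """KL divergence D(p || q) in bits between a position-specific amino-acid
    distribution p and a background distribution q."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

# toy example over a reduced 4-letter alphabet {I, V, W, other}:
# hypothetical frequencies at the membrane centre vs. the overall background
background = np.array([0.10, 0.10, 0.02, 0.78])
centre_pos = np.array([0.25, 0.22, 0.01, 0.52])
print("D(centre || background) =", round(relative_entropy(centre_pos, background), 3), "bits")
```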
