• Title/Summary/Keyword: coverage of two sets


Quantitative measures of thoroughness of FBD simulations for PLC-based digital I&C system

  • Lee, Dong-Ah;Kim, Eui-Sub;Yoo, Junbeom
    • Nuclear Engineering and Technology, v.53 no.1, pp.131-141, 2021
  • Simulation is a widely used functional verification method for FBD programs of PLC-based digital I&C systems in nuclear power plants. It is difficult, however, to estimate the thoroughness (i.e., effectiveness or quality) of a simulation in the absence of a clear measure for the estimation. This paper proposes two sets of structural coverage adequacy criteria for FBD simulation, toggle coverage and modified condition/decision coverage, which can estimate the thoroughness of simulation scenarios for FBD programs, as recommended by international standards for functional safety. We developed two supporting tools to generate numerous simulation scenarios and to automatically measure the coverage achieved by the scenarios. The results of our experiment on five FBD programs demonstrate that the measures and tools can help software engineers estimate the thoroughness of simulations and improve the simulation scenarios quantitatively.
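As an illustration of the first criterion above, toggle coverage can be measured over a recorded simulation trace of boolean signals: each signal must be observed both rising (0→1) and falling (1→0). This is a minimal, hypothetical sketch; the trace format and signal names are invented, not the authors' tools:

```python
def toggle_coverage(trace):
    """trace: list of dicts mapping signal name -> bool, one per simulation cycle."""
    signals = list(trace[0].keys())
    seen = {s: {"rise": False, "fall": False} for s in signals}
    for prev, curr in zip(trace, trace[1:]):
        for s in signals:
            if not prev[s] and curr[s]:
                seen[s]["rise"] = True   # 0 -> 1 transition observed
            elif prev[s] and not curr[s]:
                seen[s]["fall"] = True   # 1 -> 0 transition observed
    covered = sum(v["rise"] + v["fall"] for v in seen.values())
    return covered / (2 * len(signals))

trace = [
    {"pump_on": False, "alarm": False},
    {"pump_on": True,  "alarm": False},
    {"pump_on": False, "alarm": True},
]
print(toggle_coverage(trace))  # 0.75: pump_on rose and fell, alarm only rose
```

A scenario generator can then be scored by the fraction it returns, and scenarios extended until every signal has toggled both ways.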

Proposal of Test Scenario Set for Wireless Location Determination Technologies Performance Evaluation (무선 측위 시스템의 성능 평가를 위한 시험 시나리오 집합 제안)

  • Son Seok-Bo;Kim Young-Baek;Park Chan-Sik;Lee Sang-Jeong
    • Journal of Institute of Control, Robotics and Systems, v.12 no.8, pp.759-764, 2006
  • This paper introduces the test plan and scenario sets proposed by the CDG (CDMA Development Group) for performance evaluation of wireless location determination technologies, and proposes new test criteria and scenario sets better suited to the Korean environment. We propose two scenario sets: one based on wireless network coverage and the other based on test type. We evaluate the performance of an AGPS (Assisted-GPS) receiver designed by Hanyang Navicom Co., Ltd. and analyze the results according to the proposed test criteria and scenario sets.

Constrained Relay Node Deployment using an improved multi-objective Artificial Bee Colony in Wireless Sensor Networks

  • Yu, Wenjie;Li, Xunbo;Li, Xiang;Zeng, Zhi
    • KSII Transactions on Internet and Information Systems (TIIS), v.11 no.6, pp.2889-2909, 2017
  • Wireless sensor networks (WSNs) have attracted much attention in recent years due to their potential for various applications. In this paper, we investigate how to efficiently deploy relay nodes into traditional static WSNs with constrained locations, aiming to satisfy specific industrial requirements such as average energy consumption and average network reliability. This constrained relay node deployment problem (CRNDP) is known to be an NP-hard optimization problem. We address this multi-objective (MO) optimization problem with an improved Artificial Bee Colony algorithm with a linear local search (MOABCLLS), which extends an improved ABC with two strategies of MO optimization. To verify the effectiveness of the MOABCLLS, two versions of MO ABC, two standard genetic algorithms, NSGA-II and SPEA2, and two different MO trajectory algorithms are included for comparison. We employ these metaheuristics on a test data set obtained from the literature. For an in-depth analysis of the behavior of the MOABCLLS compared to traditional methodologies, a statistical procedure is utilized to analyze the results. The study concludes that constrained relay node deployment using the MOABCLLS outperforms the other algorithms, based on two MO quality metrics: hypervolume and coverage of two sets.
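The two quality metrics named in the last sentence have standard definitions for minimization problems: the set-coverage metric C(A, B) is the fraction of points in B dominated by at least one point of A, and the 2-D hypervolume is the area dominated by a front up to a reference point. A minimal sketch, independent of the paper's implementation:

```python
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def coverage(A, B):
    """C(A, B): fraction of B dominated by at least one point of A."""
    return sum(any(dominates(a, b) for a in A) for b in B) / len(B)

def hypervolume_2d(front, ref):
    """Hypervolume of a nondominated 2-D front w.r.t. reference point (minimization)."""
    hv, prev_y = 0.0, ref[1]
    for x, y in sorted(front):          # ascending x, hence descending y
        hv += (ref[0] - x) * (prev_y - y)
        prev_y = y
    return hv

A = [(1, 4), (2, 2), (4, 1)]
B = [(2, 4), (3, 3), (5, 1)]
print(coverage(A, B))                   # 1.0: every point of B is dominated by A
print(hypervolume_2d(A, (6, 6)))        # 20.0
```

Note that C is not symmetric: C(A, B) = 1 does not imply C(B, A) = 0, which is why both directions are usually reported.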

Development of a Novel Long-Range 16S rRNA Universal Primer Set for Metagenomic Analysis of Gastrointestinal Microbiota in Newborn Infants

  • Ku, Hye-Jin;Lee, Ju-Hoon
    • Journal of Microbiology and Biotechnology, v.24 no.6, pp.812-822, 2014
  • Metagenomic analysis of the human intestinal microbiota has extended our understanding of the role of these bacteria in improving human intestinal health; however, a number of reports have shown that current total fecal DNA extraction methods and 16S rRNA universal primer sets could affect the species coverage and resolution of these analyses. Here, we improved the extraction method for total DNA from human fecal samples by optimization of the lysis buffer, boiling time (10 min), and bead-beating time (0 min). In addition, we developed a new long-range 16S rRNA universal PCR primer set targeting the V6 to V9 regions with a 580 bp DNA product length. This new 16S rRNA primer set was evaluated by comparison with two previously developed 16S rRNA universal primer sets and showed high species coverage and resolution. The optimized total fecal DNA extraction method and newly designed long-range 16S rRNA universal primer set will be useful for the highly accurate metagenomic analysis of adult and infant intestinal microbiota with minimization of any bias.

Development of Matching Priors for P(X < Y) in Exponential Distributions

  • Lee, Gunhee
    • Journal of the Korean Statistical Society, v.27 no.4, pp.421-433, 1998
  • In this paper, matching priors for P(X < Y) are investigated when both distributions are exponential. Two recent approaches for finding noninformative priors are introduced. The first is Berger and Bernardo's forward and backward reference prior, which maximizes the expected Kullback-Leibler divergence between the posterior and prior densities. The second is the matching prior, identified by matching the one-sided posterior credible interval with the frequentist's desired confidence level. The general form of the second-order matching prior is presented so that the one-sided posterior credible intervals agree with the frequentist's desired confidence levels up to O(n⁻¹). The frequentist coverage probabilities of confidence sets based on several noninformative priors are compared for small sample sizes via Monte Carlo simulation.
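For context on the quantity being estimated: when X and Y are independent exponentials with rates λx and λy, the stress-strength reliability has the closed form P(X < Y) = λx/(λx + λy). A quick Monte Carlo check of this identity (not the paper's prior construction):

```python
import random

def p_x_less_y(lx, ly):
    """Exact P(X < Y) for X ~ Exp(rate lx), Y ~ Exp(rate ly)."""
    return lx / (lx + ly)

def mc_estimate(lx, ly, n=100_000, seed=1):
    """Monte Carlo estimate of P(X < Y) from independent exponential draws."""
    rng = random.Random(seed)
    hits = sum(rng.expovariate(lx) < rng.expovariate(ly) for _ in range(n))
    return hits / n

lx, ly = 2.0, 3.0
print(p_x_less_y(lx, ly))            # 0.4 exactly
print(round(mc_estimate(lx, ly), 2))
```

The identity follows by integrating P(Y > x) = e^(−λy·x) against the density λx·e^(−λx·x) of X.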


EPfuzzer: Improving Hybrid Fuzzing with Hardest-to-reach Branch Prioritization

  • Wang, Yunchao;Wu, Zehui;Wei, Qiang;Wang, Qingxian
    • KSII Transactions on Internet and Information Systems (TIIS), v.14 no.9, pp.3885-3906, 2020
  • Hybrid fuzzing, which combines fuzzing and concolic execution, has proved its ability to achieve higher code coverage and therefore find more bugs. However, current hybrid fuzzers usually suffer from inefficiency and poor scalability when applied to complex, real-world program testing. We observed that the performance bottleneck is the inefficient cooperation between the fuzzer and the concolic executor, together with slow symbolic emulation. In this paper, we propose a novel solution named EPfuzzer to improve hybrid fuzzing. EPfuzzer implements two key ideas: 1) only the hardest-to-reach branch is prioritized for concolic execution, to avoid generating uninteresting inputs; and 2) only the input bytes relevant to the target branch to be flipped are symbolized, to reduce the overhead of symbolic emulation. With these optimizations, EPfuzzer can be efficiently targeted at the hardest-to-reach branch. We evaluated EPfuzzer with three sets of programs: five real-world applications and two popular benchmarks (LAVA-M and the Google Fuzzer Test Suite). The evaluation results showed that EPfuzzer was much more efficient and scalable than the state-of-the-art concolic execution engine (QSYM), finding more bugs and achieving better code coverage. In addition, we discovered seven previously unknown security bugs in five real-world programs and reported them to the vendors.
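The first key idea can be sketched with a toy scheduler: among branches the fuzzer has reached but never flipped, pick the least-exercised one as the next concolic target. The counters and selection rule here are an illustrative proxy, not EPfuzzer's actual fitness function:

```python
def pick_hardest_branch(hit_counts, flipped):
    """hit_counts: branch id -> how often the fuzzer reached the branch guard.
    flipped: set of branch ids the fuzzer already covered in both directions.
    Returns the unflipped branch hit least often (a proxy for 'hardest'), or None."""
    candidates = {b: n for b, n in hit_counts.items() if b not in flipped}
    if not candidates:
        return None                       # everything covered: no concolic target
    return min(candidates, key=candidates.get)

hit_counts = {"br_12": 904, "br_33": 3, "br_47": 58}
print(pick_hardest_branch(hit_counts, flipped={"br_33"}))  # br_47
```

In a real hybrid fuzzer this selection would feed the target branch to the concolic executor, which then symbolizes only the input bytes that influence that branch's condition.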

Comparison of tree-based ensemble models for regression

  • Park, Sangho;Kim, Chanmin
    • Communications for Statistical Applications and Methods, v.29 no.5, pp.561-589, 2022
  • When multiple classification and regression trees are combined, tree-based ensemble models such as random forest (RF) and Bayesian additive regression trees (BART) are produced. We compare the model structures and performance of various ensemble models for regression settings in this study. RF learns from bootstrapped samples and selects a splitting variable from the predictors gathered at each node. The BART model is specified as a sum of trees and is fitted using the Bayesian backfitting algorithm. Through extensive simulation studies, the strengths and drawbacks of the two methods in the presence of missing data, high-dimensional data, or highly correlated data are investigated. In the presence of missing data, BART performs well in general, whereas RF provides adequate coverage. BART outperforms RF on high-dimensional, highly correlated data. However, in all of the scenarios considered, RF has a shorter computation time. The performance of the two methods is also compared using two real data sets that represent the aforementioned situations, and the same conclusion is reached.
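The bagging idea behind RF can be illustrated in miniature: fit many weak regressors (here, one-split stumps) on bootstrap resamples and average their predictions. This is a pure-Python sketch of the general principle only, not the RF or BART implementations compared in the paper:

```python
import random

def fit_stump(xs, ys):
    """Best single threshold split on one feature, minimizing squared error."""
    best = None
    for t in xs:
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = sum((y - ml) ** 2 for y in left) + sum((y - mr) ** 2 for y in right)
        if best is None or err < best[0]:
            best = (err, t, ml, mr)
    if best is None:                      # degenerate resample: fall back to the mean
        m = sum(ys) / len(ys)
        return lambda x: m
    _, t, ml, mr = best
    return lambda x: ml if x <= t else mr

def bagged_predict(xs, ys, x_new, n_trees=50, seed=0):
    """Average predictions of stumps fit on bootstrap resamples (bagging)."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(xs)) for _ in range(len(xs))]
        stump = fit_stump([xs[i] for i in idx], [ys[i] for i in idx])
        preds.append(stump(x_new))
    return sum(preds) / len(preds)

xs = [0, 1, 2, 3, 4, 5]
ys = [0, 0, 0, 1, 1, 1]                  # a step function at x = 2.5
print(bagged_predict(xs, ys, 4.0))       # close to 1
print(bagged_predict(xs, ys, 1.0))       # close to 0
```

RF additionally restricts each split to a random subset of predictors; BART instead places a prior over the sum of trees and samples it by Bayesian backfitting.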

Test Set Generation for Pairwise Testing Using Genetic Algorithms

  • Sabharwal, Sangeeta;Aggarwal, Manuj
    • Journal of Information Processing Systems, v.13 no.5, pp.1089-1102, 2017
  • In software systems, it has been observed that a fault is often caused by an interaction between a small number of input parameters. Even for moderately sized software systems, exhaustive testing is practically impossible to achieve, due to either time or cost constraints. Combinatorial (t-way) testing provides a technique to select a subset of the exhaustive test cases covering all of the t-way interactions, without much loss of fault detection capability. In this paper, an approach is proposed to generate 2-way (pairwise) test sets using genetic algorithms. The performance of the algorithm is improved by creating an initial solution using the overlap coefficient (a similarity matrix). Two mutation strategies have also been modified to improve their efficiency. Furthermore, the mutation operator is improved by using a combination of three mutation strategies. A comparative survey of techniques to generate t-way test sets using genetic algorithms was also conducted. It has been shown experimentally that the proposed approach generates results faster, achieving higher percentage coverage in fewer generations. Additionally, the size of the mixed covering array was reduced in one of the six benchmark problems examined.
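The coverage objective such a generator optimizes is easy to state: the fraction of all parameter-value pairs that appear together in at least one test. A small sketch of that measure (illustrative only, not the proposed GA):

```python
from itertools import combinations, product

def pair_coverage(domains, tests):
    """domains: list of value lists, one per parameter; tests: list of tuples.
    Returns the fraction of all 2-way parameter-value pairs covered by the tests."""
    required = set()
    for (i, di), (j, dj) in combinations(enumerate(domains), 2):
        required |= {((i, a), (j, b)) for a, b in product(di, dj)}
    covered = set()
    for t in tests:
        for (i, a), (j, b) in combinations(enumerate(t), 2):
            covered.add(((i, a), (j, b)))
    return len(covered & required) / len(required)

domains = [[0, 1], [0, 1], [0, 1]]              # three boolean parameters
tests = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(pair_coverage(domains, tests))            # 1.0: 4 rows cover all 12 pairs
```

Exhaustive testing here would need 2³ = 8 tests; four rows already cover every pair, which is exactly the saving pairwise testing exploits. A GA fitness function can use this fraction directly.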

A Study on the Improvement of Prediction Accuracy of Collaborative Recommender System under the Effect of Similarity Weight Threshold (협력적 추천시스템에서 유사도 가중치의 임계치 설정에 따른 선호도 예측 정확도 향상에 관한 연구)

  • Lee, Seok-Jun
    • Korean Business Review, v.20 no.1, pp.145-168, 2007
  • Recommender systems help customers find items easily and help e-business companies target customers through an automated recommendation process. Recommender systems are being adopted by several e-business companies, and both customers and companies benefit from them. This study sets several thresholds on the similarity weight, which indicates the degree of similarity between two customers' preferences, in order to improve prediction accuracy. As the threshold is raised, prediction accuracy improves, but some threshold settings reduce the prediction rate, i.e., the coverage. This reduction in coverage adversely affects prediction for some customers, so further study is needed on improving the prediction accuracy of recommender systems while maximizing coverage.
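The accuracy/coverage trade-off described above can be sketched in a few lines: a weighted-average prediction that only uses neighbors above the similarity threshold returns no prediction at all once the threshold excludes every neighbor. The data and names here are invented for illustration:

```python
def predict(ratings, sims, target_user, item, threshold):
    """Similarity-weighted average of neighbor ratings, keeping only neighbors
    whose similarity meets the threshold; None when no such neighbor rated the item."""
    num = den = 0.0
    for user, s in sims[target_user].items():
        r = ratings.get(user, {}).get(item)
        if r is not None and s >= threshold:
            num += s * r
            den += abs(s)
    return num / den if den else None    # None = item not covered at this threshold

ratings = {"u2": {"i1": 4}, "u3": {"i1": 2}}
sims = {"u1": {"u2": 0.9, "u3": 0.2}}
print(predict(ratings, sims, "u1", "i1", threshold=0.1))   # uses both neighbors
print(predict(ratings, sims, "u1", "i1", threshold=0.5))   # only u2: 4.0
print(predict(ratings, sims, "u1", "i1", threshold=0.95))  # None: coverage lost
```

Counting the fraction of user-item pairs for which `predict` returns a value, at each threshold, gives exactly the coverage curve the study examines against accuracy.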


Modified information criterion for testing changes in generalized lambda distribution model based on confidence distribution

  • Ratnasingam, Suthakaran;Buzaianu, Elena;Ning, Wei
    • Communications for Statistical Applications and Methods, v.29 no.3, pp.301-317, 2022
  • In this paper, we propose a change point detection procedure based on the modified information criterion in a generalized lambda distribution (GLD) model. Simulations are conducted to obtain empirical critical values of the proposed test statistic. We have also conducted simulations to evaluate the performance of the proposed method compared to the log-likelihood method in terms of power, coverage probability, and confidence sets. Our results indicate that, under various conditions, the proposed modified information criterion (MIC) approach shows good finite-sample properties. Furthermore, we propose a new goodness-of-fit testing procedure based on the energy distance to evaluate the asymptotic null distribution of our test statistic. Two real data applications are provided to illustrate the use of the proposed method.
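For reference, the two-sample energy distance underlying goodness-of-fit procedures of this kind is E(X, Y) = 2E|X − Y| − E|X − X′| − E|Y − Y′|, which is nonnegative and zero only when the two distributions coincide. A plain one-dimensional sketch (not the paper's code):

```python
def mean_abs_diff(a, b):
    """Average absolute difference over all pairs drawn from samples a and b."""
    return sum(abs(x - y) for x in a for y in b) / (len(a) * len(b))

def energy_distance(x, y):
    """Two-sample energy statistic: 2E|X-Y| - E|X-X'| - E|Y-Y'|."""
    return 2 * mean_abs_diff(x, y) - mean_abs_diff(x, x) - mean_abs_diff(y, y)

same = [1.0, 2.0, 3.0]
shifted = [11.0, 12.0, 13.0]
print(energy_distance(same, same))     # 0.0
print(energy_distance(same, shifted))  # large: the samples clearly differ
```

In a testing procedure, the statistic is typically compared against a null distribution obtained by permuting the pooled sample, since large values indicate the two samples come from different distributions.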