• Title/Summary/Keyword: test coverage

Changes and Cognition of Dental Hygienist and Dentistry after National Health Insurance of Dental Scaling (치석제거 급여화 후 치과위생사와 치과의료기관의 변화 및 인식조사)

  • Yoo, Eun-Ha;Lee, Hyo-jung;Oh, Hye-Young
    • Journal of Korean Dental Hygiene Science / v.2 no.1 / pp.31-39 / 2019
  • This study examined changes in the working environment and in the perceptions of dental hygienists regarding dental calculus removal (scaling) after its inclusion in national health insurance coverage. We conducted online and offline surveys of 290 dental hygienists working in dental clinics in the Seoul, Gyeonggi, and Incheon areas. Differences in perception were assessed by independent t-test and ANOVA. Of the respondents, 62.1% answered that health insurance coverage of dental calculus removal was appropriate, and 49.6% said that coverage should apply twice a year. 54.3% said that restricting coverage to those aged 20 and over was not appropriate, and 49.3% said that coverage should begin with high school students. 26.3% said that the number of covered treatments per year should be increased, and 20.5% said that oral care education should be added to the benefit. Most dental hygienists reported that the number of scaling patients increased but that the quality of scaling did not deteriorate. By general characteristics, dental hygienists with 7~8 years of experience perceived the least change. The respondents wanted the scope of national health insurance coverage of scaling expanded so that more people could receive calculus removal, and they wanted the system systematically supplemented so that it contributes to promoting the oral health of the public.
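A side note on the analysis method: the perception differences above were assessed with an independent t-test and one-way ANOVA. A minimal sketch of those two tests in Python, using hypothetical 5-point perception scores grouped by career length (the paper's raw survey data are not given):

```python
# Minimal sketch: independent t-test and one-way ANOVA on hypothetical
# 5-point perception scores grouped by career length (not the paper's data).
from scipy import stats

group_short = [4, 3, 4, 5, 3, 4]   # hypothetical scores, career < 3 years
group_mid   = [3, 3, 2, 4, 3, 3]   # hypothetical scores, career 3~6 years
group_long  = [2, 3, 3, 2, 4, 3]   # hypothetical scores, career 7~8 years

# Independent t-test between two groups
t_stat, t_p = stats.ttest_ind(group_short, group_long)

# One-way ANOVA across three or more groups
f_stat, f_p = stats.f_oneway(group_short, group_mid, group_long)

print(f"t = {t_stat:.3f}, p = {t_p:.3f}")
print(f"F = {f_stat:.3f}, p = {f_p:.3f}")
```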

An Efficient Built-in Self-Test Algorithm for Neighborhood Pattern- and Bit-Line-Sensitive Faults in High-Density Memories

  • Kang, Dong-Chual;Park, Sung-Min;Cho, Sang-Bock
    • ETRI Journal / v.26 no.6 / pp.520-534 / 2004
  • As memory density increases, unwanted interference between cells and coupling noise between bit-lines become significant, requiring parallel testing. Testing high-density memories for a high degree of fault coverage requires either a relatively large number of test vectors or a significant amount of additional test circuitry. This paper proposes a new tiling method and an efficient built-in self-test (BIST) algorithm for neighborhood pattern-sensitive faults (NPSFs) and new neighborhood bit-line-sensitive faults (NBLSFs). Instead of the conventional five-cell and nine-cell physical neighborhood layouts, a four-cell layout is used to test memory cells. This four-cell layout needs smaller test vectors, allows easier hardware implementation, and is better suited to detecting both NPSFs and NBLSFs. A CMOS column decoder and the parallel comparator proposed by P. Mazumder are modified to implement the test procedure, which reduces the number of transistors used in the BIST circuit. We also present properties of the algorithm, such as its capability to detect stuck-at faults, transition faults, conventional pattern-sensitive faults, and neighborhood bit-line-sensitive faults.
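For orientation, memory BIST algorithms are built from march-style read/write sweeps over the address space. The sketch below simulates a MATS+ march test that detects stuck-at faults in a one-bit-wide memory model; it illustrates the general BIST style only and is not the paper's four-cell NPSF/NBLSF algorithm or its hardware implementation:

```python
# Minimal sketch: MATS+ march test (w0; up: r0,w1; down: r1,w0) on a
# simulated one-bit-wide memory with an injected stuck-at-0 fault.
# Illustrates march-style BIST in general, not the paper's algorithm.

class FaultyMemory:
    def __init__(self, size, stuck_at_zero=()):
        self.size = size
        self.cells = [0] * size
        self.stuck = set(stuck_at_zero)  # cells forced to 0 on every write

    def write(self, addr, bit):
        self.cells[addr] = 0 if addr in self.stuck else bit

    def read(self, addr):
        return self.cells[addr]

def mats_plus(mem):
    """Return the list of addresses where a read mismatched."""
    faults = []
    for a in range(mem.size):            # march element: write 0 everywhere
        mem.write(a, 0)
    for a in range(mem.size):            # ascending: read 0, write 1
        if mem.read(a) != 0:
            faults.append(a)
        mem.write(a, 1)
    for a in reversed(range(mem.size)):  # descending: read 1, write 0
        if mem.read(a) != 1:
            faults.append(a)
        mem.write(a, 0)
    return faults

mem = FaultyMemory(16, stuck_at_zero=[5])
print(mats_plus(mem))  # [5]: stuck-at-0 detected on the read-1 sweep
```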

Mobile GUI Testing Tool Based-on Image Flow (이미지 플로우 기반의 모바일 GUI 테스트 도구에 관한 연구)

  • Hwang, Sun-Myung;Yoon, Seok-Jin
    • The KIPS Transactions:PartD / v.15D no.3 / pp.347-354 / 2008
  • Testing of mobile application software is essential for enhancing development productivity and producing reliable software. The GUI is the most important means by which an application communicates with its users, yet mobile GUI testing has relied on manual, checklist-driven testing with no automation support. In this paper, we present an image-flow-based test method and tool that reduces the time required for GUI testing and detects GUI errors, based on a study of automated GUI testing driven by image flow for mobile applications.
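The basic step behind image-based GUI testing is comparing a captured screen against an expected image at each point in the application's screen flow. A minimal sketch of that comparison, assuming Pillow is available and using hypothetical screenshot file names (the paper's tool internals are not described in the abstract):

```python
# Minimal sketch: pixel-difference check between an expected GUI screen
# and a captured one, the basic step behind image-flow GUI comparison.
# File names are hypothetical; the paper's tool internals are not shown.
from PIL import Image, ImageChops

def screens_match(expected_path, actual_path, tolerance=0):
    expected = Image.open(expected_path).convert("RGB")
    actual = Image.open(actual_path).convert("RGB")
    if expected.size != actual.size:
        return False
    diff = ImageChops.difference(expected, actual)
    # getbbox() is None when the images are identical; with a tolerance,
    # ignore small per-channel deviations (e.g., anti-aliasing noise).
    if tolerance == 0:
        return diff.getbbox() is None
    return all(extreme[1] <= tolerance for extreme in diff.getextrema())

# Walk a GUI flow as (screen id, expected image) steps and report mismatches.
flow = [("main_menu", "expected/main.png"), ("settings", "expected/settings.png")]
for screen_id, expected in flow:
    captured = f"captured/{screen_id}.png"   # hypothetical capture path
    print(screen_id, "OK" if screens_match(expected, captured) else "MISMATCH")
```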

An Effective Generation of Protocol Test Cases Using the Depth-Tree (깊이트리를 이용한 효율적인 프로토콜 시험항목 생성)

  • 허기택;이동호
    • The Journal of Korean Institute of Communications and Information Sciences / v.18 no.9 / pp.1395-1403 / 1993
  • Protocol conformance is crucial to interoperability and cost-effective computer communication. Given a protocol specification, the task of checking whether an implementation conforms to the specification is called conformance testing. The efficiency and fault coverage of conformance testing depend largely on how test cases are chosen. When the protocol is represented by an FSM (finite state machine), some states may have more than one UIO sequence, and the length of the test sequence can be minimized if the optimal test sequences are chosen. In this paper, we construct a depth-tree to find the maximum overlapping among the test sequences. Using the resulting depth-tree, we generate the minimum-length test sequence, and we show an example of the minimum-length test sequence obtained in this way.
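In UIO-based conformance testing, each FSM transition is checked by executing it and then applying the tail state's UIO sequence to confirm the machine reached the right state; the paper's depth-tree then overlaps such subsequences to shorten the overall test sequence. A minimal sketch of the per-transition check on a toy machine (the depth-tree optimization itself is not reproduced):

```python
# Minimal sketch: verifying one FSM transition with a UIO sequence.
# The FSM maps (state, input) -> (output, next_state). A UIO sequence of a
# state yields an output string no other state can produce, so replaying it
# after a transition confirms the tail state. Toy machine, not the paper's;
# here the spec doubles as the implementation, so every check passes.
FSM = {
    ("s0", "a"): ("0", "s1"),
    ("s0", "b"): ("1", "s0"),
    ("s1", "a"): ("1", "s2"),
    ("s1", "b"): ("0", "s0"),
    ("s2", "a"): ("0", "s0"),
    ("s2", "b"): ("1", "s1"),
}
# UIO output signatures on input "aa": s0 -> "01", s1 -> "10", s2 -> "00".
UIO = {"s0": ["a", "a"], "s1": ["a"], "s2": ["a", "a"]}

def run(state, inputs):
    outputs = []
    for x in inputs:
        out, state = FSM[(state, x)]
        outputs.append(out)
    return outputs, state

def test_transition(src, inp):
    expected_out, tail = FSM[(src, inp)]
    out, state = run(src, [inp])            # execute the transition
    uio_expected, _ = run(tail, UIO[tail])  # expected UIO response of tail
    uio_actual, _ = run(state, UIO[tail])   # observed UIO response
    return out == [expected_out] and uio_actual == uio_expected

print(all(test_transition(s, x) for (s, x) in FSM))  # True on a correct FSM
```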

AN IMPROVED CONFIDENCE INTERVAL FOR THE POPULATION PROPORTION IN A DOUBLE SAMPLING SCHEME SUBJECT TO FALSE-POSITIVE MISCLASSIFICATION

  • Lee, Seung-Chun
    • Journal of the Korean Statistical Society / v.36 no.2 / pp.275-284 / 2007
  • Confidence intervals for the population proportion in a double sampling scheme subject to false-positive misclassification are considered. The confidence intervals are obtained by applying Agresti and Coull's approach, the so-called "adding two failures and two successes". They are compared, in terms of coverage probabilities and expected widths, with the Wald interval and with the confidence interval given by Boese et al. (2006); the latter is a test-based confidence interval and is known to have good properties. It is shown that Agresti and Coull's approach provides a relatively simple but effective confidence interval.
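For reference, the "adding two failures and two successes" adjustment is easy to state in code. A minimal sketch of the standard Agresti-Coull interval for a single binomial proportion (the paper's double-sampling, misclassification-adjusted variant is more involved and is not reproduced):

```python
# Minimal sketch: Agresti-Coull confidence interval for a binomial
# proportion. For 95% confidence (z ~= 1.96, z^2 ~= 4) this is the classic
# "add two successes and two failures" adjustment. The paper's interval
# additionally corrects for false-positive misclassification under double
# sampling, which is not reproduced here.
import math

def agresti_coull(successes, n, z=1.959964):
    n_adj = n + z * z                        # adjusted sample size
    p_adj = (successes + z * z / 2) / n_adj  # adjusted proportion
    half = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - half), min(1.0, p_adj + half)

print(agresti_coull(15, 100))  # ~(0.092, 0.234), centered on p_adj ~= 0.163
```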

Dynamic Test Data Generation for Branch Coverage (분기 커버리지를 위한 동적 테스트 데이터 생성)

  • Chung, In-Sang;Seong, Yeong-Rak
    • Proceedings of the Korean Information Science Society Conference / 2012.06b / pp.150-152 / 2012
  • Automated test data generation generally requires tools such as a symbolic executor or a constraint solver, but developing such tools demands considerable effort. This paper proposes a test data generation method that effectively achieves branch coverage without the support of such tools. To this end, we extend Korel's method, originally developed for path-oriented test data generation, so that it effectively generates test data exercising as many branches of the program as possible.
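Korel's baseline method treats test data generation as a local search: execute the program, measure a branch distance indicating how far a predicate is from taking the desired outcome, and adjust one input variable at a time to reduce that distance. A minimal sketch of this alternating-variable search on a toy branch condition (an illustration of Korel's baseline, not the paper's extension):

```python
# Minimal sketch: Korel-style local search (alternating variable method).
# Branch distance measures how close inputs are to making the target
# branch (x == 2 * y) true; the search tweaks one variable at a time.
# Toy target, not the paper's extended algorithm.

def branch_distance(x, y):
    # Distance is 0 exactly when the target branch condition holds.
    return abs(x - 2 * y)

def search(x, y, max_steps=1000):
    best = branch_distance(x, y)
    for _ in range(max_steps):
        if best == 0:
            return x, y                     # branch is now executable
        improved = False
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # exploratory moves
            d = branch_distance(x + dx, y + dy)
            if d < best:
                x, y, best = x + dx, y + dy, d
                improved = True
                break
        if not improved:                    # stuck in a local minimum
            break
    return x, y

print(search(10, 1))  # walks to inputs with x == 2 * y, here (2, 1)
```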

Predictive Clustering-based Collaborative Filtering Technique for Performance-Stability of Recommendation System (추천 시스템의 성능 안정성을 위한 예측적 군집화 기반 협업 필터링 기법)

  • Lee, O-Joun;You, Eun-Soon
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.119-142 / 2015
  • With the explosive growth in the volume of information, Internet users experience considerable difficulty in obtaining the information they need online. Against this backdrop, ever-greater importance is being placed on recommender systems that provide information catered to user preferences and tastes in an attempt to address information overload. To this end, a number of techniques have been proposed, including content-based filtering (CBF), demographic filtering (DF), and collaborative filtering (CF). Among them, CBF and DF require external information and thus cannot be applied to a variety of domains. CF, on the other hand, is widely used since it is relatively free of domain constraints. CF techniques are broadly classified into memory-based CF, model-based CF, and hybrid CF. Model-based CF addresses the drawbacks of CF by considering a Bayesian model, a clustering model, or a dependency network model. It not only mitigates the sparsity and scalability issues but also boosts predictive performance. However, it involves expensive model building and results in a tradeoff between performance and scalability. This tradeoff is attributed to reduced coverage, a type of sparsity issue. In addition, expensive model building may lead to performance instability, since changes in the domain environment cannot be immediately incorporated into the model due to the high costs involved; cumulative changes that fail to be reflected eventually undermine system performance. This study incorporates a Markov model of transition probabilities and the concept of fuzzy clustering into clustering-based CF (CBCF) to propose predictive clustering-based CF (PCCF), which addresses both reduced coverage and unstable performance. The method improves performance stability by tracking changes in user preferences and bridging the gap between the static model and dynamic users, and it improves coverage by expanding it on the basis of transition probabilities and clustering probabilities. The proposed method consists of four processes. First, user preferences are normalized in preference clustering. Second, changes in user preferences are detected from review score entries during preference transition detection. Third, user propensities are normalized using the patterns of change (propensities) in user preferences in propensity clustering. Lastly, a preference prediction model is built to predict user preferences for items during preference prediction. The proposed method was validated by testing its robustness against performance instability and the scalability-performance tradeoff. The initial test compared and analyzed the performance of individual recommender systems enabled by IBCF, CBCF, ICFEC, and PCCF in an environment where data sparsity had been minimized. The following test adjusted the optimal number of clusters in CBCF, ICFEC, and PCCF for a comparative analysis of the resulting changes in system performance. The results revealed that the suggested method produced only an insignificant improvement in raw performance compared with the existing techniques, and it failed to achieve significant improvement in the standard deviation, which indicates the degree of data fluctuation. It did, however, achieve marked improvement over the existing techniques in terms of range, which indicates the level of performance fluctuation. The level of performance fluctuation before and after model generation improved by 51.31% in the initial test, and in the following test there was a 36.05% improvement in the level of performance fluctuation driven by the changes in the number of clusters. This signifies that the proposed method, despite a slight improvement in raw performance, clearly offers better performance stability than the existing techniques. Further research will be directed toward enhancing the recommendation performance that failed to show significant improvement, considering the introduction of a high-dimensional parameter-free clustering algorithm or a deep-learning-based model.
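The Markov component of the method can be made concrete: estimate transition probabilities between preference clusters from a user's time-ordered history, then use them to anticipate where the user's preferences are heading. A minimal sketch of the transition-probability estimation alone, with hypothetical cluster labels (fuzzy clustering and the full PCCF prediction model are not reproduced):

```python
# Minimal sketch: first-order Markov transition probabilities between
# preference clusters, estimated from a user's time-ordered cluster labels.
# Cluster labels are hypothetical; fuzzy clustering and the paper's
# prediction model are not reproduced.
from collections import Counter, defaultdict

def transition_probs(label_sequence):
    counts = defaultdict(Counter)
    for prev, nxt in zip(label_sequence, label_sequence[1:]):
        counts[prev][nxt] += 1
    return {
        prev: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
        for prev, nxts in counts.items()
    }

# A user drifting from "action" toward "drama" over their review history.
history = ["action", "action", "drama", "action", "drama", "drama", "drama"]
probs = transition_probs(history)
print(probs["action"])  # {'action': 0.33..., 'drama': 0.66...}
current = history[-1]
print(max(probs[current], key=probs[current].get))  # most likely next cluster
```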

Automated Test Data Generation for Testing Programs with Multi-level Stack-directed Pointers (다단계 스택 지향 포인터가 있는 프로그램 테스트를 위한 테스트 데이터 자동 생성)

  • Chung, In-Sang
    • The KIPS Transactions:PartD / v.17D no.4 / pp.297-310 / 2010
  • Recently, a new testing technique called concolic testing has received a great deal of attention. Concolic testing generates test data by combining concrete program execution with symbolic execution to achieve high test coverage. CREST is a representative open-source test tool implementing concolic testing; currently, however, CREST handles only integer-typed inputs. This paper presents new rules for automated test data generation in the presence of inputs of pointer type. The rules effectively handle the multi-level stack-directed pointers that are widely used in C programs. In addition, we describe a tool named vCREST that implements the proposed rules, together with the results of applying it to several C programs.
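The core step of concolic testing can be shown in miniature: run the program on concrete inputs, record the symbolic path condition, then negate a branch constraint and ask a solver for inputs that drive execution down the other side. A minimal sketch using the z3-solver Python package on a toy function (CREST and vCREST instrument real C programs, and the paper's pointer-handling rules are far more involved):

```python
# Minimal sketch of one concolic-testing step using Z3 (pip install z3-solver).
# The path condition of a concrete run is negated at its last branch to
# derive inputs for the unexplored path. Toy example; CREST/vCREST
# instrument real C programs and, per the paper, also handle pointer inputs.
from z3 import Int, Solver, Not, sat

def program(x, y):
    """Function under test: two nested branches."""
    if x > y:
        if x + y == 10:
            return "target"
        return "inner-else"
    return "outer-else"

# Concrete run with arbitrary seed inputs.
seed = {"x": 1, "y": 2}
print(program(**seed))               # outer-else: the run took not(x > y)

# Symbolic twin of the inputs and the recorded path condition [not(x > y)].
x, y = Int("x"), Int("y")
path_condition = [Not(x > y)]

# Negate the last branch to aim at the other side: now require x > y.
solver = Solver()
solver.add(Not(path_condition[-1]))
if solver.check() == sat:
    model = solver.model()
    new_inputs = {
        "x": model.eval(x, model_completion=True).as_long(),
        "y": model.eval(y, model_completion=True).as_long(),
    }
    print(new_inputs, "->", program(**new_inputs))  # exercises the x > y side
```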

Back-to-Back Testing based on MC/DC 100% Test case (MC/DC 100% Test case를 활용한 Back-to-Back Testing)

  • Ko, Dong-Ryul;You, Young-Min;Park, In-Kuen;Han, Il-Young
    • Proceedings of the Korea Information Processing Society Conference / 2017.11a / pp.500-503 / 2017
  • As the number of electronic components in vehicles grows and vehicle OEMs (original equipment manufacturers) produce and sell many different vehicle models, diverse software (SW) configurations are being developed. Accordingly, the need to verify functional equivalence between an existing SW configuration and a modified one is increasing. Back-to-Back Testing is a method for confirming functional equivalence between two SW configurations: the same input values are injected into each configuration and the outputs are checked for equality. Back-to-Back Testing requires test case design, but neither the required volume of test cases nor the criteria for ending testing have been established. In this paper, we use the concept of MC/DC (Modified Condition/Decision Coverage) to propose criteria for the volume of test cases and for test completion, and we describe a case in which they were applied. We expect that applying the test case design criteria presented here enables testing that fits constrained schedules and staffing while remaining sufficient to confirm functional equivalence.
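The back-to-back testing loop itself is simple to express: feed identical inputs to the baseline and modified configurations and flag any output divergence, as sketched below with two hypothetical implementations (in the paper's setup the input vectors would come from an MC/DC 100% test case set):

```python
# Minimal sketch: back-to-back testing driver. Two hypothetical
# implementations stand in for the baseline and modified SW configurations;
# in the paper's setup the input vectors would come from an MC/DC 100%
# test case set rather than the hand-picked list below.

def baseline_config(speed, brake_pressed):
    """Existing SW configuration (hypothetical logic)."""
    return "BRAKE" if brake_pressed and speed > 0 else "COAST"

def modified_config(speed, brake_pressed):
    """Changed SW configuration; should be functionally identical."""
    if brake_pressed and speed > 0:
        return "BRAKE"
    return "COAST"

test_vectors = [(0, False), (0, True), (30, False), (30, True)]

mismatches = [
    (inputs, a, b)
    for inputs in test_vectors
    for a, b in [(baseline_config(*inputs), modified_config(*inputs))]
    if a != b
]
print("functionally equivalent" if not mismatches else mismatches)
```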

Computing Method for The Number of The Interaction Strength Based on Software Whitebox Testing (소프트웨어 화이트박스 테스트의 교호 강도 수 기반 테스트 방법)

  • Choi, Hyeong-Seob;Park, Hong-Seong
    • Journal of the Institute of Electronics Engineers of Korea CI / v.46 no.5 / pp.29-36 / 2009
  • The cost and time of software testing increase steadily as software complexity grows. To cope with this problem, it is very important to reduce the number of test cases used in testing. The interaction strength is especially important in deciding the number of test cases for a unit test: it denotes the number of arguments that jointly affect the result of a function, determined by analyzing how they are combined in the function's source code. This paper proposes an algorithm that computes the interaction strength by analyzing the patterns used in the source code of a function and increasing the strength when a pattern matches one of the specified patterns. The proposed algorithm is validated by experiments measuring coverage and the number of detected faults.
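One concrete way to approximate such an interaction strength is to parse the function and count how many distinct parameters co-occur in each branch condition. A minimal sketch using Python's ast module on a toy function (the paper's pattern catalogue over C-style source is not reproduced):

```python
# Minimal sketch: estimate interaction strength as the largest number of
# distinct parameters appearing together in a single branch condition.
# Uses Python's ast module on a toy function; the paper analyzes source
# code patterns in more detail.
import ast

SOURCE = """
def f(a, b, c, d):
    if a > 0 and b < c:      # three parameters interact here
        return a + b
    if d == 1:               # one parameter
        return d
    return 0
"""

tree = ast.parse(SOURCE)
func = tree.body[0]
params = {arg.arg for arg in func.args.args}

strength = 0
for node in ast.walk(func):
    if isinstance(node, ast.If):
        used = {
            n.id
            for n in ast.walk(node.test)
            if isinstance(n, ast.Name) and n.id in params
        }
        strength = max(strength, len(used))

print(strength)  # 3: the condition `a > 0 and b < c` couples a, b and c
```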