• Title/Summary/Keyword: False positive

Anomaly detection and attack type classification mechanism using Extra Tree and ANN (Extra Tree와 ANN을 활용한 이상 탐지 및 공격 유형 분류 메커니즘)

  • Kim, Min-Gyu;Han, Myung-Mook
    • Journal of Internet Computing and Services
    • /
    • v.23 no.5
    • /
    • pp.79-85
    • /
    • 2022
  • Anomaly detection is a method of detecting and blocking abnormal data flows in ordinary users' data. The previously known approach is signature-based detection, which detects and defends against attacks using the signatures of already known attacks. It has the advantage of a low false positive rate, but it is highly vulnerable to zero-day vulnerability attacks and modified attacks. Anomaly detection, by contrast, suffers from a high false positive rate but can identify, detect, and block zero-day vulnerability attacks and modified attacks, so related studies are being actively conducted. This study deals with such anomaly detection mechanisms and proposes a new mechanism that performs both anomaly detection and attack classification while mitigating the high false positive rate mentioned above. The experiment was conducted with five configurations chosen to reflect the characteristics of various algorithms, and the configuration showing the best accuracy is proposed as the result of this study: an attack is first detected by applying an Extra Tree and a three-layer ANN at the same time, and the attack type of the detected attack data is then classified using an Extra Tree. Verification on the NSL-KDD data set yielded accuracies of 99.8%, 99.1%, 98.9%, 98.7%, and 97.9% for Normal, DoS, Probe, U2R, and R2L, respectively, a configuration that showed superior performance compared to the other models.
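As a rough illustration of the two-stage detect-then-classify design described above (not the authors' code; the synthetic data, layer sizes, and soft-voting combination are assumptions), a scikit-learn sketch:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for NSL-KDD: label 0 = normal, labels 1..4 = attack types.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: binary anomaly detection (normal vs. any attack), combining an
# Extra Trees ensemble and a three-layer ANN by averaging their probabilities.
is_attack_tr = (y_tr != 0).astype(int)
et = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X_tr, is_attack_tr)
ann = MLPClassifier(hidden_layer_sizes=(64, 32, 16), max_iter=500,
                    random_state=0).fit(X_tr, is_attack_tr)
p_attack = (et.predict_proba(X_te)[:, 1] + ann.predict_proba(X_te)[:, 1]) / 2
flagged = p_attack >= 0.5

# Stage 2: classify the attack type of flagged samples with Extra Trees only.
attack_mask = y_tr != 0
type_clf = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(
    X_tr[attack_mask], y_tr[attack_mask])
attack_types = type_clf.predict(X_te[flagged])
```

The stage-2 classifier is trained only on attack samples, so its output space is restricted to the attack classes.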

Conventional Versus Artificial Intelligence-Assisted Interpretation of Chest Radiographs in Patients With Acute Respiratory Symptoms in Emergency Department: A Pragmatic Randomized Clinical Trial

  • Eui Jin Hwang;Jin Mo Goo;Ju Gang Nam;Chang Min Park;Ki Jeong Hong;Ki Hong Kim
    • Korean Journal of Radiology
    • /
    • v.24 no.3
    • /
    • pp.259-270
    • /
    • 2023
  • Objective: It is unknown whether artificial intelligence-based computer-aided detection (AI-CAD) can enhance the accuracy of chest radiograph (CR) interpretation in real-world clinical practice. We aimed to compare the accuracy of CR interpretation assisted by AI-CAD with that of conventional interpretation in patients who presented to the emergency department (ED) with acute respiratory symptoms, using a pragmatic randomized controlled trial. Materials and Methods: Patients who underwent CRs for acute respiratory symptoms at the ED of a tertiary referral institution were randomly assigned to the intervention group (with assistance from AI-CAD for CR interpretation) or the control group (without AI assistance). A commercial AI-CAD system (Lunit INSIGHT CXR, version 2.0.2.0; Lunit Inc.) was used; other clinical practices followed standard procedures. The sensitivity and false-positive rate of CR interpretation by duty trainee radiologists for identifying acute thoracic diseases were the primary and secondary outcomes, respectively. The reference standard for acute thoracic disease was established based on a review of each patient's medical record at least 30 days after the ED visit. Results: We randomly assigned 3576 participants to the intervention group (1761 participants; mean age ± standard deviation, 65 ± 17 years; 978 males; acute thoracic disease in 472 participants) or the control group (1815 participants; 64 ± 17 years; 988 males; acute thoracic disease in 491 participants). Neither the sensitivity (67.2% [317/472] in the intervention group vs. 66.0% [324/491] in the control group; odds ratio, 1.02 [95% confidence interval, 0.70-1.49]; P = 0.917) nor the false-positive rate (19.3% [249/1289] vs. 18.5% [245/1324]; odds ratio, 1.00 [95% confidence interval, 0.79-1.26]; P = 0.985) of CR interpretation by duty radiologists was associated with the use of AI-CAD.
Conclusion: AI-CAD did not improve the sensitivity or the false-positive rate of CR interpretation for diagnosing acute thoracic disease in patients who presented to the ED with acute respiratory symptoms.
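The sensitivity and false-positive percentages above can be recomputed directly from the reported counts:

```python
# Recompute the trial's reported rates from the raw counts in the abstract.
def rate(numer, denom):
    """Percentage rounded to one decimal place."""
    return round(100 * numer / denom, 1)

# Sensitivity: correctly identified disease / all disease-positive patients.
sens_ai, sens_ctrl = rate(317, 472), rate(324, 491)
# False-positive rate: positive reads / all disease-negative patients.
fpr_ai, fpr_ctrl = rate(249, 1289), rate(245, 1324)
print(sens_ai, sens_ctrl, fpr_ai, fpr_ctrl)
```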

Fully Automatic Coronary Calcium Score Software Empowered by Artificial Intelligence Technology: Validation Study Using Three CT Cohorts

  • June-Goo Lee;HeeSoo Kim;Heejun Kang;Hyun Jung Koo;Joon-Won Kang;Young-Hak Kim;Dong Hyun Yang
    • Korean Journal of Radiology
    • /
    • v.22 no.11
    • /
    • pp.1764-1776
    • /
    • 2021
  • Objective: This study aimed to validate a deep learning-based fully automatic calcium scoring (coronary artery calcium [CAC]_auto) system using previously published cardiac computed tomography (CT) cohort data with the manually segmented coronary calcium scoring (CAC_hand) system as the reference standard. Materials and Methods: We developed the CAC_auto system using 100 co-registered, non-enhanced and contrast-enhanced CT scans. For the validation of the CAC_auto system, three previously published CT cohorts (n = 2985) were chosen to represent different clinical scenarios (i.e., 2647 asymptomatic, 220 symptomatic, 118 valve disease) and four CT models. The performance of the CAC_auto system in detecting coronary calcium was determined. The reliability of the system in measuring the Agatston score as compared with CAC_hand was also evaluated per vessel and per patient using intraclass correlation coefficients (ICCs) and Bland-Altman analysis. The agreement between CAC_auto and CAC_hand based on the cardiovascular risk stratification categories (Agatston score: 0, 1-10, 11-100, 101-400, > 400) was evaluated. Results: In 2985 patients, 6218 coronary calcium lesions were identified using CAC_hand. The per-lesion sensitivity and false-positive rate of the CAC_auto system in detecting coronary calcium were 93.3% (5800 of 6218) and 0.11 false-positive lesions per patient, respectively. The CAC_auto system, in measuring the Agatston score, yielded ICCs of 0.99 for all the vessels (left main 0.91, left anterior descending 0.99, left circumflex 0.96, right coronary 0.99). The limits of agreement between CAC_auto and CAC_hand were 1.6 ± 52.2. The linearly weighted kappa value for the Agatston score categorization was 0.94. The main causes of false-positive results were image noise (29.1%, 97/333 lesions), aortic wall calcification (25.5%, 85/333 lesions), and pericardial calcification (24.3%, 81/333 lesions). 
Conclusion: The atlas-based CAC_auto system empowered by deep learning provided calcium score measurements and risk category classifications that were accurate compared with the manual method, which could potentially streamline CAC imaging workflows.
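The five risk strata used for the weighted-kappa agreement analysis can be expressed as a small helper (bin edges taken from the abstract):

```python
# Map an Agatston score to the study's five cardiovascular risk strata.
def agatston_category(score):
    """Return the risk stratum label for a non-negative Agatston score."""
    if score == 0:
        return "0"
    if score <= 10:
        return "1-10"
    if score <= 100:
        return "11-100"
    if score <= 400:
        return "101-400"
    return ">400"

print([agatston_category(s) for s in (0, 5, 50, 250, 1000)])
```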

Pragmatic Strategies of Self (Other) Presentation in Literary Texts: A Computational Approach

  • Khafaga, Ayman Farid
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.2
    • /
    • pp.223-231
    • /
    • 2022
  • The application of computer software to the linguistic analysis of texts proves useful for arriving at concise and reliable results from large text corpora. Based on this assumption, this paper employs Computer-Aided Text Analysis (CATA) together with Critical Discourse Analysis (CDA) to explore the manipulative strategies of positive/negative presentation in Orwell's Animal Farm. More specifically, the paper explores the extent to which CATA software, represented by the three variables of Frequency Distribution Analysis (FDA), Content Analysis (CA), and Key Word in Context (KWIC), can be incorporated with CDA to decipher the manipulative purposes behind the positive presentation of selfness and the negative presentation of otherness in the selected corpus. The analysis covers several CDA strategies, including justification, false statistics, and competency for positive self-presentation, and accusation, criticism, and the use of ambiguous words for negative other-presentation. With the application of CATA, selected words are analyzed by showing their frequency distribution as well as their contextual environment in the text, to expose the extent to which they are employed as strategies of positive/negative presentation. Findings show that CATA software contributes significantly to the linguistic analysis of large text corpora. The paper recommends the use of different CATA software tools in stylistic and corpus linguistics studies.
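Of the three CATA variables, KWIC is the most mechanical; a minimal sketch of a KWIC extractor (the window size is an illustrative choice, not taken from the paper):

```python
# Minimal Key Word in Context (KWIC) extractor over a token list.
def kwic(tokens, keyword, window=3):
    """Yield (left context, keyword, right context) for each keyword hit."""
    for i, tok in enumerate(tokens):
        if tok.lower() == keyword.lower():
            yield (tokens[max(0, i - window):i], tok,
                   tokens[i + 1:i + 1 + window])

text = "All animals are equal but some animals are more equal".split()
hits = list(kwic(text, "equal"))
print(hits)
```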

Role of enzyme immunoassay for the Detection of Helicobacter pylori Stool Antigen in Confirming Eradication After Quadruple Therapy in Children (소아에서 4제요법 후 enzyme immunoassay에 의한 Helicobacter pylori 대변 항원 검출법의 유용성에 대한 연구)

  • Yang, Hye Ran;Seo, Jeong Kee
    • Pediatric Gastroenterology, Hepatology & Nutrition
    • /
    • v.7 no.2
    • /
    • pp.153-162
    • /
    • 2004
  • Purpose: The Helicobacter pylori stool antigen (HpSA) enzyme immunoassay is a non-invasive test for the diagnosis and monitoring of H. pylori infection, but there are few validation studies of the HpSA test after eradication in children. The aim of this study was to assess the diagnostic accuracy of the HpSA enzyme immunoassay for the detection of H. pylori to confirm eradication in children. Methods: From January 2001 to October 2003, 164 tests were performed in 146 children aged 1 to 17.5 years (mean 9.3 ± 4.3 years). H. pylori infection was confirmed by endoscopy-based tests (rapid urease test, histology, and culture). All H. pylori-infected children were treated with a quadruple regimen (omeprazole, amoxicillin, metronidazole, and bismuth subcitrate for 7 days). Stool specimens were collected from all patients for the HpSA enzyme immunoassay (Premier Platinum HpSA). The results of the HpSA test were interpreted at 450 nm on a spectrophotometer as positive for OD ≥ 0.160, unresolved for 0.140 ≤ OD < 0.160, and negative for OD < 0.140. Results: 1) One hundred thirty-one HpSA tests were performed before treatment. The HpSA enzyme immunoassay showed three false positive cases and one false negative case; its sensitivity, specificity, positive predictive value, and negative predictive value before treatment were 96.4%, 97.1%, 90%, and 99%, respectively. 2) Thirty-three HpSA enzyme immunoassays were performed at least 4 weeks after eradication therapy, showing two false positive cases and one false negative case; the sensitivity, specificity, positive predictive value, and negative predictive value after treatment were 88.9%, 91.7%, 80%, and 95.7%, respectively. Conclusion: The diagnostic accuracy of the HpSA enzyme immunoassay after eradication therapy was as high as that before eradication therapy. The HpSA enzyme immunoassay is a useful non-invasive method to confirm H. pylori eradication in children.
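The post-treatment figures quoted above (2 false positives, 1 false negative, sensitivity 88.9%, specificity 91.7%, PPV 80%, NPV 95.7% over 33 tests) are consistent with 8 true positives and 22 true negatives. A minimal sketch of the OD cut-offs and the accuracy measures:

```python
# HpSA optical-density cut-offs and diagnostic accuracy measures (sketch;
# cut-offs from the abstract, counts inferred from the reported percentages).
def interpret_od(od):
    """Classify a 450 nm optical density by the study's cut-offs."""
    if od >= 0.160:
        return "positive"
    if od >= 0.140:
        return "unresolved"
    return "negative"

def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, and NPV as rounded percentages."""
    return {
        "sensitivity": round(100 * tp / (tp + fn), 1),
        "specificity": round(100 * tn / (tn + fp), 1),
        "ppv": round(100 * tp / (tp + fp), 1),
        "npv": round(100 * tn / (tn + fn), 1),
    }

print(diagnostic_metrics(8, 2, 22, 1))
```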

Adult Image Filtering using Support Vector Machine (Support Vector Machine을 이용한 유해 이미지 분류)

  • Song, Chull-Hwan;Yoo, Seong-Joon
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2006.10c
    • /
    • pp.218-221
    • /
    • 2006
  • This paper describes research on classifying adult images, one of the representative problems of the Internet. In particular, we construct a data set of five types for classifying such adult images. For each image, color, gradient, and edge-direction features are extracted and organized into histograms. The histograms are then fed to a Support Vector Machine to classify adult images. On a test set of 8250 images, we obtained a recall of 96.53%, a precision of 97.33%, a false positive rate of 2.96%, and an F-measure of 96.93%.
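A rough sketch of the histogram-plus-SVM pipeline described above (synthetic one-channel data; the paper's actual color/gradient/edge-direction features and five-type data set are not reproduced):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def to_histogram(values, bins=16):
    """Normalized histogram of one feature channel over [0, 1]."""
    hist, _ = np.histogram(values, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

# Two synthetic classes with different channel statistics as a stand-in
# for "adult" vs. "benign" image feature distributions.
X = np.array([to_histogram(rng.beta(a, 2.0, size=500))
              for a in ([2.0] * 100 + [5.0] * 100)])
y = np.array([0] * 100 + [1] * 100)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))
```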

The Role of Artificial Observations in Testing for the Difference of Proportions in Misclassified Binary Data

  • Lee, Seung-Chun
    • The Korean Journal of Applied Statistics
    • /
    • v.25 no.3
    • /
    • pp.513-520
    • /
    • 2012
  • An Agresti-Coull type test is considered for the difference of binomial proportions in two doubly sampled data sets subject to false-positive error. The performance of the test is compared with that of likelihood-based tests. It is shown that the Agresti-Coull test has many desirable properties in that it approximates the nominal significance level while delivering comparable power.
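For context, the Agresti-Coull idea is to add a few artificial observations before estimating each proportion. A sketch of the closely related Agresti-Caffo adjustment for a simple two-sample difference (the paper's doubly sampled, misclassification-corrected version is more involved):

```python
import math

def agresti_caffo_diff(x1, n1, x2, n2, z=1.96):
    """Wald-type interval for p1 - p2 after adding z^2/4 (about one)
    artificial success and failure to each sample."""
    a = z * z / 4
    p1 = (x1 + a) / (n1 + 2 * a)
    p2 = (x2 + a) / (n2 + 2 * a)
    se = math.sqrt(p1 * (1 - p1) / (n1 + 2 * a)
                   + p2 * (1 - p2) / (n2 + 2 * a))
    d = p1 - p2
    return d - z * se, d + z * se

lo, hi = agresti_caffo_diff(40, 100, 30, 100)
print(lo, hi)
```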

Computerized Pulmonary Nodule Detection on Chest CT Scans (흉부 CT에서의 폐결절 자동 검출)

  • 이정원;김승환;구진모
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2002.10d
    • /
    • pp.607-609
    • /
    • 2002
  • This paper presents an algorithm that automatically segments the lung regions in chest computed tomography images and an algorithm that automatically detects pulmonary nodules. The lung segmentation algorithm uses gray-level thresholding and morphologic image processing, and the automatic nodule detection algorithm distinguishes vessels from nodules by analyzing the size, compactness, and mean gray level of the extracted nodule candidates. The developed system detected 64 (55%) of the 117 pulmonary nodules contained in the test images, at 3.4 false positives per section.
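The rule-based candidate filtering described above can be sketched as a simple predicate (all threshold values here are illustrative, not the paper's):

```python
# Illustrative rule-based nodule/vessel discrimination on candidate features.
def is_nodule(size_mm, compactness, mean_gray):
    """Flag a candidate as a nodule if it is sizable, compact, and dense.
    Thresholds are placeholders for the study's tuned values."""
    return size_mm >= 3.0 and compactness >= 0.6 and mean_gray >= 0.4

print(is_nodule(5.0, 0.8, 0.5))
```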

Confidence Intervals for the Difference of Binomial Proportions in Two Doubly Sampled Data

  • Lee, Seung-Chun
    • Communications for Statistical Applications and Methods
    • /
    • v.17 no.3
    • /
    • pp.309-318
    • /
    • 2010
  • The construction of asymptotic confidence intervals is considered for the difference of binomial proportions in two doubly sampled data sets subject to false-positive error. The coverage behaviors of several likelihood-based confidence intervals and a Bayesian confidence interval are examined. It is shown that a hierarchical Bayesian approach gives a confidence interval with good frequentist properties. The confidence interval based on the Rao score statistic also performs well in terms of coverage probability, whereas the Wald confidence interval covers the true value less often than the nominal level.
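The Wald interval's under-coverage noted above is easy to reproduce by simulation, at least in the plain binomial setting (the paper's doubly sampled design is not modeled here):

```python
import math
import random

def wald_covers(p, n, z=1.96):
    """Simulate one binomial sample and check whether the 95% Wald
    interval for the proportion contains the true value p."""
    x = sum(random.random() < p for _ in range(n))
    ph = x / n
    se = math.sqrt(ph * (1 - ph) / n)
    return ph - z * se <= p <= ph + z * se

random.seed(0)
# With a small true proportion, empirical coverage falls well below 0.95.
cov = sum(wald_covers(0.05, 40) for _ in range(2000)) / 2000
print(cov)
```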

Plagiarism Detection Using Dependency Graph Analysis Specialized for JavaScript (자바스크립트에 특화된 프로그램 종속성 그래프를 이용한 표절 탐지)

  • Kim, Shin-Hyong;Han, Tai-Sook
    • Journal of KIISE:Software and Applications
    • /
    • v.37 no.5
    • /
    • pp.394-402
    • /
    • 2010
  • JavaScript is one of the most popular languages for developing web sites and web applications. Since applications written in JavaScript are sent to clients as original source code, they are easily exposed to plagiarists, so a method to detect plagiarized JavaScript programs is necessary. Conventional program dependency graph (PDG) based approaches are not suitable for analyzing JavaScript programs because they do not reflect the dynamic features of JavaScript; they also generate false positives in some cases and are inefficient over a large search space. We devise a JavaScript-specific PDG (JS PDG) that captures the dynamic features of JavaScript and propose a JavaScript plagiarism detection method for precise and fast detection. Our experiments show that the proposed approach can avoid the false positives generated by the conventional PDG and can prune the plagiarism search space.
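As a toy illustration of graph-based similarity (far simpler than the JS PDG matching in the paper), dependency edges can be compared by set overlap:

```python
# Toy dependency-graph comparison by Jaccard similarity of edge sets.
# Real PDG-based plagiarism detection matches subgraphs, not flat edge sets.
def pdg_similarity(edges_a, edges_b):
    """Jaccard similarity of two dependency-edge sets."""
    a, b = set(edges_a), set(edges_b)
    return len(a & b) / len(a | b) if a | b else 1.0

orig = [("x", "y"), ("y", "z"), ("x", "z")]
copy = [("x", "y"), ("y", "z")]
print(pdg_similarity(orig, copy))
```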