• Title/Summary/Keyword: Detection rule


Field-Induced Translation of Single Ferromagnetic and Ferrimagnetic Grain as Observed in the Chamber-type μG System

  • Kuwada, Kento;Uyeda, Chiaki;Hisayoshi, Keiji;Nagai, Hideaki;Mamiya, Mikito
    • Journal of Magnetics
    • /
    • v.18 no.3
    • /
    • pp.308-310
    • /
    • 2013
  • Translation induced by the field-gradient force was observed for a single ferromagnetic iron grain and a single ferrimagnetic grain of a ferrite sample ($CuFe_2O_4$). From measurements of the translation, a precise saturation magnetization $M_S$ can be obtained for a single grain. The method is based on the energy conservation rule assumed for the grain during its translation; the grain is translated through a diffuse field area under microgravity conditions. The results for the two materials indicate that the field-induced translation of a grain bearing a spontaneous moment is generally determined by the field-induced potential $-mM_SH(x)$, where m denotes the mass of the sample. Because of this, the detection of $M_S$ is not interfered with by signals from the sample holder, and the $M_S$ measurement does not require the value of m. By observing translations resulting from field-induced volume forces, the magnetization of a single grain is measurable irrespective of its size; the principle is also applicable to measuring the susceptibility of diamagnetic and paramagnetic materials.
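
The mass independence claimed above follows directly from the stated potential; a minimal derivation sketch, assuming the grain is released at rest at position $x_0$:

$$\frac{1}{2}mv(x)^2 - mM_SH(x) = -mM_SH(x_0) \quad\Longrightarrow\quad M_S = \frac{v(x)^2}{2\left[H(x)-H(x_0)\right]}$$

The mass $m$ cancels, so measuring the velocity $v(x)$ and the field profile $H(x)$ alone suffices to recover $M_S$.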

AI-based system for automatically detecting food risk information from news data (뉴스 데이터로부터 식품위해정보 자동 추출을 위한 인공지능 기술)

  • Baek, Yujin;Lee, Jihyeon;Kim, Nam Hee;Lee, Hunjoo;Choo, Jaegul
    • Food Science and Industry
    • /
    • v.54 no.3
    • /
    • pp.160-170
    • /
    • 2021
  • Recent advances in communication technologies accelerate the spread of food safety issues once they are reported by the news media. To respond to those safety issues and take timely action, it matters to detect related information from news data automatically. This work presents an AI-based system that detects risk information within a food-related news article. Experts in food safety participated in labeling risk information in food-related news articles; we acquired 43,527 articles in which food names and risk information are marked as labels. Given a news document, our system automatically detects food names and risk information by analyzing similarities between words within a text, leveraging learned word embedding vectors. Our AI-based system achieves higher detection accuracy than a non-AI rule-based system: an absolute gain of +32.94% in F1 for the food name category and +41.53% for the risk information category.
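
As a rough illustration of the embedding-similarity idea described above, here is a minimal Python sketch with hypothetical three-dimensional embeddings and an illustrative threshold; the paper's actual model, vocabulary, and training data are not reproduced:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical learned embeddings (in practice, trained on the news corpus).
embeddings = {
    "salmonella": np.array([0.9, 0.1, 0.3]),
    "contamination": np.array([0.8, 0.2, 0.4]),
    "recipe": np.array([0.1, 0.9, 0.2]),
}
risk_seed = embeddings["contamination"]  # seed vector for the risk category

# Flag words whose embedding lies close to the risk seed.
for word, vec in embeddings.items():
    if cosine(vec, risk_seed) > 0.9:  # illustrative threshold
        print(word, "-> candidate risk term")
```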

An Extended Work Architecture for Online Threat Prediction in Tweeter Dataset

  • Sheoran, Savita Kumari;Yadav, Partibha
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.1
    • /
    • pp.97-106
    • /
    • 2021
  • Social networking platforms have become a smart way for people to interact and meet on the internet. They provide a way to keep in touch with friends, families, colleagues, business partners, and many more. Among the various social networking sites, Twitter is one of the fastest-growing sites, where users can read the news, share ideas, discuss issues, etc. Due to its vast popularity, the accounts of legitimate users are vulnerable to a large number of threats, spam and malware being among the most damaging found on Twitter. Therefore, in order to enjoy seamless services, Twitter must be secured against malicious users by identifying them in advance. Various studies have applied Machine Learning (ML) based approaches to detect spammers on Twitter. This research aims to devise a secure system based on hybrid Cosine and Soft Cosine similarity measures in combination with a Genetic Algorithm (GA) and an Artificial Neural Network (ANN) to secure the Twitter network against spammers. The similarity among tweets is determined using Cosine with Soft Cosine, applied to the Twitter dataset. The GA is used to enhance training with minimum training error by selecting the most suitable features according to the designed fitness function. Tweets are classified as spammer or non-spammer by the ANN together with a voting rule. The True Positive Rate (TPR), False Positive Rate (FPR), and classification accuracy are used as evaluation parameters to assess the performance of the proposed system. The simulation results reveal that the proposed model outperforms the existing state of the art.
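
The soft cosine measure named above generalizes cosine similarity with a word-to-word similarity matrix, so related but distinct terms still contribute. A minimal numpy sketch with a toy vocabulary (the GA feature selection and ANN classifier are not reproduced here):

```python
import numpy as np

def soft_cosine(a, b, S):
    """Soft cosine similarity: like cosine, but a word-to-word
    similarity matrix S lets related-but-different words contribute."""
    num = a @ S @ b
    den = np.sqrt(a @ S @ a) * np.sqrt(b @ S @ b)
    return float(num / den)

# Toy vocabulary ["win", "prize", "weather"]: term counts for two tweets.
t1 = np.array([1.0, 1.0, 0.0])   # "win prize"
t2 = np.array([1.0, 0.0, 0.0])   # "win"
S = np.array([[1.0, 0.7, 0.0],   # "win" and "prize" treated as related
              [0.7, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

print(soft_cosine(t1, t2, S))  # ~0.92: rewards the related word pair
print(float(t1 @ t2) / (np.linalg.norm(t1) * np.linalg.norm(t2)))  # ~0.71 plain cosine
```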

Three-dimensional geostatistical modeling of subsurface stratification and SPT-N Value at dam site in South Korea

  • Mingi Kim;Choong-Ki Chung;Joung-Woo Han;Han-Saem Kim
    • Geomechanics and Engineering
    • /
    • v.34 no.1
    • /
    • pp.29-41
    • /
    • 2023
  • The 3D geospatial modeling of geotechnical information can aid in understanding the geotechnical characteristic values of the continuous subsurface at construction sites. In this study, a geostatistical optimization model for the three-dimensional (3D) mapping of subsurface stratification and the SPT-N value based on a trial-and-error rule was developed and applied to a dam emergency spillway site in South Korea. Geospatial database development for a geotechnical investigation, reconstitution of the target grid volume, and detection of outliers in the borehole dataset were implemented prior to the 3D modeling. For the site-specific subsurface stratification of the engineering geo-layer, we developed an integration method for the borehole and geophysical survey datasets based on the geostatistical optimization procedure of ordinary kriging and sequential Gaussian simulation (SGS) by comparing their cross-validation-based prediction residuals. We also developed an optimization technique based on SGS for estimating the 3D geometry of the SPT-N value. This method involves quantitatively testing the reliability of SGS and selecting the realizations with a high estimation accuracy. Boring tests were performed for validation, and the proposed method yielded more accurate prediction results and reproduced the spatial distribution of geotechnical information more effectively than the conventional geostatistical approach.
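
The model-selection step described above, comparing cross-validation prediction residuals, can be sketched generically. The snippet below uses leave-one-out residuals with inverse-distance weighting standing in for the kriging/SGS predictors; all names and values are illustrative, not the authors' code:

```python
import numpy as np

def loo_residuals(xyz, values, predict):
    """Leave-one-out cross-validation: hold out each borehole record,
    predict it from the rest, and return the prediction residuals."""
    residuals = []
    for i in range(len(values)):
        mask = np.arange(len(values)) != i
        pred = predict(xyz[mask], values[mask], xyz[i])
        residuals.append(values[i] - pred)
    return np.array(residuals)

def idw(train_xyz, train_vals, query, power=2.0):
    """Inverse-distance weighting, standing in for the kriging/SGS
    predictors compared in the paper (illustrative only)."""
    d = np.linalg.norm(train_xyz - query, axis=1)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return float(np.sum(w * train_vals) / np.sum(w))

# Toy borehole dataset: (x, y, depth) coordinates and SPT-N values.
xyz = np.array([[0., 0., 1.], [10., 0., 1.], [0., 10., 2.], [10., 10., 2.]])
spt_n = np.array([8.0, 10.0, 14.0, 18.0])

rmse = np.sqrt(np.mean(loo_residuals(xyz, spt_n, idw) ** 2))
print(f"LOO RMSE: {rmse:.2f}")  # choose the interpolator with the lower RMSE
```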

Application of Westgard Multi-Rules for Improving Nuclear Medicine Blood Test Quality Control (핵의학 검체검사 정도관리의 개선을 위한 Westgard Multi-Rules의 적용)

  • Jung, Heung-Soo;Bae, Jin-Soo;Shin, Yong-Hwan;Kim, Ji-Young;Seok, Jae-Dong
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.16 no.1
    • /
    • pp.115-118
    • /
    • 2012
  • Purpose: The Levey-Jennings chart flags measurement values that deviate from a tolerance range (mean ${\pm}2SD$ or ${\pm}3SD$). The upgraded Westgard Multi-Rules, on the other hand, are actively recommended as a more efficient, specialized form of internal quality control in hospital certification. To apply the Westgard Multi-Rules in quality control, a credible quality control substance and target value are required. However, since physical examinations commonly use the quality control substance provided within the test kit, calculating the target value is difficult because of frequent changes in concentration and the insufficient credibility of the control substance. This study attempts to improve the professionalism and credibility of quality control by applying the Westgard Multi-Rules and calculating a credible target value using a commercialized quality control substance. Materials and Methods: This study used Immunoassay Plus Control Levels 1, 2, and 3 of Company B as the quality control substance for Total T3, the thyroid test performed at the relevant hospital. The target value was established as the mean of 295 measurements collected over one month, excluding values that deviated from ${\pm}2SD$. The hospital's quality control program was used to enter the target value. The 12s, 22s, 13s, 2 of 32s, R4s, 41s, $10\bar{x}$, and 7T Westgard Multi-Rules were applied to the Total T3 test, which was performed 194 times over 20 days in August. Based on the applied rules, the data were classified into random error and systematic error for analysis. Results: For quality control substances 1, 2, and 3, the Total T3 target values were established as 84.2 ng/$dl$, 156.7 ng/$dl$, and 242.4 ng/$dl$, with standard deviations of 11.22 ng/$dl$, 14.52 ng/$dl$, and 14.52 ng/$dl$, respectively. Error-type analysis after applying the Westgard Multi-Rules against these target values gave the following results: for random error, 12s was flagged 48 times, 13s 13 times, and R4s 6 times; for systematic error, 22s was flagged 10 times, 41s 11 times, 2 of 32s 17 times, and $10\bar{x}$ 10 times, while 7T was not triggered. For uncontrollable random errors, the entire testing process was rechecked and greater emphasis was placed on re-testing. For controllable systematic errors, the cause was investigated, recorded in the action form, and reported to the Internal Quality Control committee when necessary. Conclusions: This study applied the Westgard Multi-Rules using a commercialized control substance and established target values. As a result, precise analysis of random and systematic errors was achieved through the 12s, 22s, 13s, 2 of 32s, R4s, 41s, $10\bar{x}$, and 7T rules, and sound quality control was achieved by analyzing all data within the ${\pm}3SD$ range. Quality control based on the systematic application of the Westgard Multi-Rules is therefore more effective than the Levey-Jennings chart and can maximize error detection.
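
The multi-rule logic lends itself to a compact implementation. Below is an illustrative Python subset (12s, 13s, 22s, and R4s only) evaluated against the level-2 target value and standard deviation reported above; it is a sketch, not the hospital's actual QC program:

```python
import numpy as np

def westgard_flags(values, mean, sd):
    """Check a run of QC values against a subset of the Westgard rules,
    written in the paper's notation (12s warning, 13s, 22s, R4s)."""
    z = (np.asarray(values) - mean) / sd
    flags = []
    if np.any(np.abs(z) > 2):
        flags.append("12s (warning)")
    if np.any(np.abs(z) > 3):
        flags.append("13s (random error)")
    for i in range(len(z) - 1):  # two consecutive values beyond the same 2SD limit
        if (z[i] > 2 and z[i + 1] > 2) or (z[i] < -2 and z[i + 1] < -2):
            flags.append("22s (systematic error)")
            break
    if np.max(z) - np.min(z) > 4:  # range within the run exceeds 4 SD
        flags.append("R4s (random error)")
    return flags

# Toy run against the level-2 target 156.7 ng/dl, SD 14.52 (from the Results).
print(westgard_flags([150.0, 186.5, 187.0, 158.0], mean=156.7, sd=14.52))
# -> ['12s (warning)', '22s (systematic error)']
```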


Accuracy of Frozen Section Analysis of Sentinel Lymph Nodes for the Detection of Asian Breast Cancer Micrometastasis - Experience from Pakistan

  • Hashmi, Atif Ali;Faridi, Naveen;Khurshid, Amna;Naqvi, Hanna;Malik, Babar;Malik, Faisal Riaz;Fida, Zubaida;Mujtuba, Shafaq
    • Asian Pacific Journal of Cancer Prevention
    • /
    • v.14 no.4
    • /
    • pp.2657-2662
    • /
    • 2013
  • Background: Intraoperative sentinel lymph node biopsy has become the standard of care for patients with clinically node-negative breast cancer, both for diagnosis and to determine the need for immediate axillary clearance. Several large-scale studies have confirmed the diagnostic reliability of this method. However, micrometastases are frequently missed on frozen sections. Recent studies showed that both disease-free interval and overall survival are significantly affected by the presence of micrometastatic disease. The aim of this study was to determine the sensitivity and specificity of intraoperative frozen section analysis of sentinel lymph nodes (SLNs) for the detection of breast cancer micrometastasis and to evaluate the status of non-sentinel lymph nodes (non-SLNs) in patients subjected to further axillary sampling. Materials and Methods: We performed a retrospective study on 154 patients who underwent SLN biopsy from January 2008 to October 2011. The SLNs were sectioned at 2 mm intervals and submitted entirely for frozen sections. Three levels of each submitted section were examined, and the results were compared with further levels on paraffin sections. Results: Overall, 40% of patients (62/154) were SLN-positive on final (paraffin section) histology, of whom 44 demonstrated macrometastases (>2 mm) and 18 micrometastases (<2 mm). The overall sensitivity and specificity of frozen section analysis of SLNs for the detection of macrometastasis were both 100%, while those for micrometastasis were 33.3% and 100%, respectively. Moreover, 20% of patients who had micrometastases in SLNs had positive non-SLNs on final histology. Conclusions: Frozen section analysis of SLNs under current protocols lacks sufficient accuracy to rule out micrometastasis. The protocols therefore need to be revised in order to pick up micrometastasis, which appears to have clinical significance. We suggest that this can be achieved by examining more step sections of the blocks.
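
For concreteness, the reported micrometastasis figures correspond to the standard definitions: with 18 micrometastases on final histology, the 33.3% sensitivity implies 6 were detected intraoperatively, and 100% specificity means no false positives:

$$\text{Sensitivity} = \frac{TP}{TP+FN} = \frac{6}{18} = 33.3\%, \qquad \text{Specificity} = 100\% \;(\text{no false positives})$$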

Determination Method of Security Threshold using Fuzzy Logic for Statistical Filtering based Sensor Networks (통계적 여과 기법기반의 센서 네트워크를 위한 퍼지로직을 사용한 보안 경계 값 결정 기법)

  • Kim, Sang-Ryul;Cho, Tae-Ho
    • Journal of the Korea Society for Simulation
    • /
    • v.16 no.2
    • /
    • pp.27-35
    • /
    • 2007
  • When sensor networks are deployed in open environments, all sensor nodes are vulnerable to physical threats. An attacker can physically capture a sensor node, obtain its security information including the keys used for data authentication, and then easily inject false reports into the sensor network through the compromised node. False reports can lead not only to false alarms but also to the depletion of the limited energy resources of battery-powered sensor networks. To counter this threat, Fan Ye et al. proposed the statistical en-route filtering scheme (SEF), which verifies false reports during the forwarding process. In this scheme, the choice of the security threshold value is important since it trades off detection power against energy, where the security threshold is the number of message authentication codes required to verify a report. In this paper, we propose a fuzzy rule-based system for security threshold determination in SEF-based sensor networks. The fuzzy logic determines the security threshold by considering the probability of a node having non-compromised keys, the number of compromised partitions, and the remaining energy of the nodes. The fuzzy-based threshold value conserves energy while providing sufficient detection power.
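
A heavily simplified sketch of such a fuzzy rule base, with hypothetical membership functions and a Sugeno-style weighted average over candidate MAC counts; the paper's actual rules and parameters are not shown:

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def security_threshold(p_noncompromised, compromised_partitions, energy):
    """Toy fuzzy rule base over the three inputs the paper names.
    Memberships, rule weights, and candidate thresholds are hypothetical."""
    low_risk  = min(tri(p_noncompromised, 0.5, 1.0, 1.01),
                    tri(compromised_partitions, -1, 0, 3))
    high_risk = min(tri(p_noncompromised, -0.01, 0.0, 0.5),
                    tri(compromised_partitions, 1, 5, 10))
    low_energy = tri(energy, -0.01, 0.0, 0.5)
    # Weighted average of candidate MAC counts (2 vs. 5): low remaining
    # energy pulls the threshold down to save verification cost.
    num = 2 * (low_risk + low_energy) + 5 * high_risk
    den = (low_risk + low_energy + high_risk) or 1.0
    return round(num / den)

print(security_threshold(p_noncompromised=0.9, compromised_partitions=1, energy=0.8))
# -> 2 (low risk and plenty of energy: few MACs suffice)
```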


The Role Of Tumor Marker CA 15-3 in Detection of Breast Cancer Relapse After Curative Mastectomy (유방암 환자에서 근치적 유방 절제술 후 재발 발견에 대한 CA 15-3의 역할)

  • Hyun, In-Young;Kim, In-Ho;Lee, Moon-Hee;Kim, Chul-Soo
    • The Korean Journal of Nuclear Medicine
    • /
    • v.38 no.4
    • /
    • pp.311-317
    • /
    • 2004
  • Purpose: The purpose of this study was to determine the utility of the tumor marker CA 15-3 in diagnosing breast cancer relapse after curative mastectomy, and to assess how the value of the marker differs by site of metastasis. Materials and Methods: Two hundred two patients (median age 48 years) with breast cancer were included in follow-up after curative mastectomy. The tumor marker CA 15-3 was determined by IRMA (CIS BIO INTERNATIONAL, France). Test values > 30 U/ml were considered elevated (positive). Results: Among the 202 patients, recurrent disease was found in 16. CA 15-3 was elevated in 5 of the 16 patients with recurrence. No patient without recurrence had elevated CA 15-3, i.e., there were no false positives. The sensitivity and specificity of CA 15-3 for detecting breast cancer recurrence were 31% and 100%, respectively. CA 15-3 was elevated in all 4 patients with liver metastases, but in none of the patients who relapsed with metastasis to bone only or to the contralateral breast only. Conclusion: The tumor marker CA 15-3 is specific but not sensitive in detecting breast cancer relapse after curative mastectomy. However, it is useful for ruling out liver metastases of breast cancer, which indicate a poor prognosis.
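
The reported rates follow from the standard definitions (202 patients, 16 recurrences, 5 true positives, no false positives among the 186 patients without recurrence):

$$\text{Sensitivity} = \frac{TP}{TP+FN} = \frac{5}{16} \approx 31\%, \qquad \text{Specificity} = \frac{TN}{TN+FP} = \frac{186}{186+0} = 100\%$$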

A Non-annotated Recurrent Neural Network Ensemble-based Model for Near-real Time Detection of Erroneous Sea Level Anomaly in Coastal Tide Gauge Observation (비주석 재귀신경망 앙상블 모델을 기반으로 한 조위관측소 해수위의 준실시간 이상값 탐지)

  • LEE, EUN-JOO;KIM, YOUNG-TAEG;KIM, SONG-HAK;JU, HO-JEONG;PARK, JAE-HUN
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.26 no.4
    • /
    • pp.307-326
    • /
    • 2021
  • Real-time sea level observations from tide gauges include missing and erroneous values; the latter can be classified as abnormal values by a quality control procedure. Although the 3𝜎 (three standard deviations) rule has generally been applied to eliminate them, it is difficult to apply to sea level data, where extreme values can occur due to weather events and where erroneous values can exist even within the 3𝜎 range. The artificial intelligence model set designed in this study consists of non-annotated recurrent neural networks and ensemble techniques that do not require pre-labeling of abnormal values. The developed model can identify an erroneous value within 20 minutes of a tide gauge recording an abnormal sea level. The validated model separates normal and abnormal values well, both under normal conditions and during weather events. It was also confirmed that abnormal values can be detected even in years whose sea level data were not used for training. The artificial neural network algorithm used in this study is not limited to coastal sea level, and hence can be extended to detect erroneous values in various oceanic and atmospheric data.
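
The contrast drawn above between the 3𝜎 rule and a model-based check can be sketched as follows; the toy series shows how a weather-driven extreme inflates the standard deviation so the 3𝜎 rule misses a genuine spike (the RNN-ensemble predictions themselves are not reproduced, only stand-in values):

```python
import numpy as np

def three_sigma_flags(series):
    """The conventional 3-sigma rule the paper contrasts against:
    flag points more than three standard deviations from the mean."""
    z = (series - series.mean()) / series.std()
    return np.abs(z) > 3

def residual_flags(series, predictions, tol):
    """Model-based check in the spirit of the paper: flag points whose
    deviation from the (here, stand-in) ensemble prediction exceeds tol."""
    return np.abs(series - predictions) > tol

# Toy hourly sea levels (cm): a storm surge at index 6 and an erroneous
# spike at index 4. The extremes inflate the standard deviation, so the
# 3-sigma rule flags nothing; a model tracking the surge flags the spike.
sea_level = np.array([100., 102., 101., 99., 160., 103., 140., 101.])
ensemble  = np.array([100.5, 101.5, 101., 99.5, 105., 102.5, 138., 101.])
print(three_sigma_flags(sea_level))             # all False
print(residual_flags(sea_level, ensemble, 30))  # True only at the spike
```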

X-tree Diff: An Efficient Change Detection Algorithm for Tree-structured Data (X-tree Diff: 트리 기반 데이터를 위한 효율적인 변화 탐지 알고리즘)

  • Lee, Suk-Kyoon;Kim, Dong-Ah
    • The KIPS Transactions:PartC
    • /
    • v.10C no.6
    • /
    • pp.683-694
    • /
    • 2003
  • We present X-tree Diff, a change detection algorithm for tree-structured data. Our work is motivated by the need to monitor a massive volume of web documents and detect suspicious changes, called defacement attacks, on web sites. In this context, the algorithm must be very efficient in both speed and memory use. X-tree Diff uses a special ordered labeled tree, the X-tree, to represent XML/HTML documents. X-tree nodes have a special field, tMD, which stores a 128-bit hash value representing the structure and data of the subtree, so that identical subtrees from the old and new versions can be matched directly. During this process, X-tree Diff applies the Rule of Delaying Ambiguous Matchings: it performs exact matching only where a node in the old version has a one-to-one correspondence with a node in the new version, delaying all other matchings. This drastically reduces the possibility of wrong matchings. X-tree Diff propagates such exact matchings upwards in Step 2 and obtains further matchings downwards from the roots in Step 3. In Step 4, the nodes to be inserted or deleted are decided. We also show that X-tree Diff runs in O(n), where n is the number of nodes in the X-trees, in the worst case as well as the average case. This result is better than that of the BULD Diff algorithm, which is O(n log(n)) in the worst case. We evaluated X-tree Diff on real data, about 11,000 home pages from about 20 web sites, rather than on synthetic documents manipulated for experimentation. Currently, the X-tree Diff algorithm is used in a commercial hacking detection system called WIDS (Web-Document Intrusion Detection System), which detects changes in registered web sites and reports suspicious changes to users.
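
A minimal Python sketch of the tMD idea, computing a bottom-up 128-bit digest over a node's structure and data so that identical subtrees compare in a single step; MD5 is assumed here as the 128-bit hash, since the abstract specifies only the digest width:

```python
import hashlib

def tmd(node):
    """Bottom-up 128-bit digest of a node's structure and data, in the
    spirit of the X-tree tMD field (MD5 assumed; illustrative only)."""
    child_digests = b"".join(tmd(c) for c in node.get("children", []))
    payload = node["tag"].encode() + node.get("text", "").encode() + child_digests
    return hashlib.md5(payload).digest()

old = {"tag": "div", "text": "hello", "children": [{"tag": "p", "text": "a"}]}
new = {"tag": "div", "text": "hello", "children": [{"tag": "p", "text": "a"}]}

# Identical subtrees hash equal, so they can be matched in one comparison
# instead of a full tree walk; any edit below a node changes its digest.
print(tmd(old) == tmd(new))  # True
```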