Title/Summary/Keyword: Missing Probability

Task Scheduling to Minimize the Effect of Coincident Faults in a Duplex Controller Computer (고성능 컴퓨터의 고신뢰도 보장을 위한 이중(Duplex) 시스템의 작업 시퀀싱/스케쥴링 기법 연구)

  • Im, Han-Seung;Kim, Hak-Bae
    • The Transactions of the Korea Information Processing Society / v.6 no.11 / pp.3124-3130 / 1999
  • A duplex system enhances reliability by tolerating faults through spatial redundancy: faults can be detected by running identical tasks on a pair of modules. However, such a system cannot even detect a fault when it occurs coincidently in both modules, whether because of malfunctions of common components such as the power supply and clock or because of environmental disruptions such as EMI. In this paper, we propose a method to reduce the effects of coincident faults in a duplex controller computer. Specifically, the duplex system tolerates coincident faults by using a sophisticated task sequencing/scheduling technique with a certain amount of timing redundancy. In particular, when all tasks must be completed under real-time constraints, the proposed scheduling method minimizes the probability of faulty tasks caused by coincident faults without violating the timing constraints.

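The abstract does not spell out the paper's sequencing algorithm, so the sketch below only illustrates the underlying idea of timing redundancy in a duplex system: the two copies of each task are skewed in time so that a short common-mode disturbance (a power glitch or an EMI burst) can corrupt at most one copy. All names and numbers are hypothetical.

```python
# Illustrative sketch (not the paper's algorithm): stagger the two copies of
# each task on the two modules of a duplex system so that a common-mode fault
# of length `fault_window` can overlap at most one copy of any task.

def staggered_schedule(tasks, deadline, fault_window):
    """tasks: list of (name, duration). Returns (schedule_a, schedule_b) with
    per-module start times, or None if the skewed schedule misses the deadline."""
    schedule_a, schedule_b = [], []
    t_a, t_b = 0.0, fault_window          # module B is skewed by the fault window
    for name, duration in tasks:
        schedule_a.append((name, t_a))
        schedule_b.append((name, t_b))
        t_a += duration
        t_b += duration
    makespan = max(t_a, t_b)
    return (schedule_a, schedule_b) if makespan <= deadline else None


if __name__ == "__main__":
    tasks = [("ctrl_loop", 2.0), ("sensor_poll", 1.0), ("actuate", 1.5)]
    result = staggered_schedule(tasks, deadline=6.0, fault_window=0.5)
    if result is None:
        print("timing constraint violated: reduce the skew or reorder tasks")
    else:
        for (name, t_a), (_, t_b) in zip(*result):
            print(f"{name}: module A starts at {t_a}, module B starts at {t_b}")
```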

A Heuristic Buffer Management and Retransmission Control Scheme for Tree-Based Reliable Multicast

  • Baek, Jin-Suk;Paris, Jehan-Francois
    • ETRI Journal / v.27 no.1 / pp.1-12 / 2005
  • We propose a heuristic buffer management scheme that uses both positive and negative acknowledgments to provide scalability and reliability. Under our scheme, most receiver nodes send only negative acknowledgments to their repair nodes to request packet retransmissions, while some representative nodes also send positive acknowledgments to indicate which packets can be discarded from the repair node's buffer. Our scheme provides scalability because it significantly reduces the number of feedback messages sent by the receiver nodes. In addition, it provides fast recovery from transmission errors, since the packets requested by the receiver nodes are almost always available in their repair nodes' buffers. Our scheme also reduces the number of additional retransmissions from the original sender node or upstream repair nodes. These features satisfy the original goal of tree-based protocols, since most packet retransmissions are performed within a local group.

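As a rough, hypothetical sketch of the buffer-management idea described above (not the ETRI paper's actual protocol), a repair node can hold each packet until every representative has positively acknowledged it, while any receiver's negative acknowledgment triggers a local retransmission from that buffer:

```python
# Minimal sketch: a repair-node buffer that keeps a packet until every
# representative receiver has ACKed it; ordinary receivers only send NAKs,
# which trigger a local retransmission from this buffer.

class RepairNodeBuffer:
    def __init__(self, representatives):
        self.representatives = set(representatives)
        self.buffer = {}          # seq -> payload
        self.acked_by = {}        # seq -> set of representatives that ACKed

    def store(self, seq, payload):
        self.buffer[seq] = payload
        self.acked_by[seq] = set()

    def on_ack(self, seq, representative):
        """Positive ACK from a representative; discard once all have ACKed."""
        if seq in self.acked_by:
            self.acked_by[seq].add(representative)
            if self.acked_by[seq] >= self.representatives:
                del self.buffer[seq]
                del self.acked_by[seq]

    def on_nak(self, seq):
        """Negative ACK from any receiver; retransmit locally if buffered."""
        return self.buffer.get(seq)   # None => must ask the sender/upstream node


if __name__ == "__main__":
    node = RepairNodeBuffer(representatives=["r1", "r2"])
    node.store(7, b"payload-7")
    print(node.on_nak(7))             # local recovery: b'payload-7'
    node.on_ack(7, "r1")
    node.on_ack(7, "r2")              # both representatives ACKed -> discarded
    print(node.on_nak(7))             # None: would need an upstream request
```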

Log-Average-SNR Ratio and Cooperative Spectrum Sensing

  • Yue, Dian-Wu;Lau, Francis C.M.;Wang, Qian
    • Journal of Communications and Networks / v.18 no.3 / pp.311-319 / 2016
  • In this paper, we analyze the spectrum-sensing performance of a cooperative cognitive radio (CR) network consisting of a number of CR nodes and a fusion center (FC). We introduce the "log-average-SNR ratio," which relates the average SNR of the CR-node-to-FC link to that of the primary-user-to-CR-node link. Assuming that the FC uses the K-out-of-N rule as its decision rule, we derive exact expressions for the sensing gain and the coding gain, the parameters used to characterize CR network performance in the high-SNR region. Based on these results, we determine ways to optimize the performance of the CR network.
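
The K-out-of-N fusion rule itself is standard; assuming, purely for illustration, that all N CR nodes report independently with identical local probabilities (a simplification not taken from the paper), the global detection, false-alarm, and missing probabilities at the fusion center can be computed as follows:

```python
# K-out-of-N fusion at the fusion center: the FC declares "primary user present"
# when at least K of the N independent CR nodes report a local detection.
from math import comb

def k_out_of_n(p_local, n, k):
    """Probability that at least k of n i.i.d. local decisions are positive."""
    return sum(comb(n, i) * p_local**i * (1 - p_local)**(n - i)
               for i in range(k, n + 1))

if __name__ == "__main__":
    n, k = 10, 4
    pd_local, pf_local = 0.7, 0.05       # per-node detection / false-alarm prob.
    print(f"global detection prob.  : {k_out_of_n(pd_local, n, k):.4f}")
    print(f"global false-alarm prob.: {k_out_of_n(pf_local, n, k):.4f}")
    # The global missing probability is 1 - global detection probability.
    print(f"global missing prob.    : {1 - k_out_of_n(pd_local, n, k):.4f}")
```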

Anomaly detection in particulate matter sensor using hypothesis pruning generative adversarial network

  • Park, YeongHyeon;Park, Won Seok;Kim, Yeong Beom
    • ETRI Journal / v.43 no.3 / pp.511-523 / 2021
  • The World Health Organization provides guidelines for managing the particulate matter (PM) level because a higher PM level is a threat to human health. Managing the PM level first requires a procedure for measuring it. We use a PM sensor that measures the PM level by the laser-based light scattering (LLS) method because it is more cost-effective than a beta-attenuation-monitor-based or tapered-element-oscillating-microbalance-based sensor. However, an LLS-based sensor has a higher probability of malfunctioning than these higher-cost sensors. In this paper, we regard all malfunctions, including the collection of anomalous values and missing data, as anomalies, and we aim to detect them for the maintenance of PM measuring sensors. We propose a novel architecture for this task that we call the hypothesis pruning generative adversarial network (HP-GAN). Through comparative experiments, we achieve AUROC and AUPRC values of 0.948 and 0.967, respectively, in detecting anomalies in LLS-based PM measuring sensors. We conclude that our HP-GAN is a cutting-edge model for anomaly detection.
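
The HP-GAN architecture cannot be reproduced from the abstract; as a hedged illustration of how AUROC and AUPRC figures like those reported are typically computed, the snippet below scores synthetic sensor readings with a stand-in anomaly score and evaluates it with scikit-learn:

```python
# Illustration only: evaluating an anomaly detector with AUROC and AUPRC, the
# two metrics reported for HP-GAN. The "detector" here is a stand-in
# reconstruction-error-style score on synthetic data, not the HP-GAN model.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.2, scale=0.1, size=200)     # scores for normal samples
anomalous = rng.normal(loc=0.7, scale=0.2, size=20)   # scores for anomalies

scores = np.concatenate([normal, anomalous])
labels = np.concatenate([np.zeros(200), np.ones(20)])  # 1 = anomaly

print("AUROC:", roc_auc_score(labels, scores))
print("AUPRC:", average_precision_score(labels, scores))
```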

Study on the Method of Development of Road Flood Risk Index by Estimation of Real-time Rainfall Using the Coefficient of Correlation Weighting Method (상관계수가중치법을 적용한 실시간 강우량 추정에 따른 도로 침수위험지수 개발 방법에 대한 연구)

  • Kim, Eunmi;Rhee, Kyung Hyun;Kim, Chang Soo
    • Journal of Korea Multimedia Society / v.17 no.4 / pp.478-489 / 2014
  • Recently, flood damage caused by frequent localized downpours in cities has been increasing because of abnormal climate phenomena and the growth of impermeable areas due to urbanization. In this study, we focus on the flooding of roads, which are the basis of all means of transportation. To calculate the real-time accumulated rainfall on a road link, we use the Coefficient of Correlation Weighting (CCW) method, one of the methods for estimating missing rainfall, by treating a road link as an unobserved rainfall site. CCW and real-time accumulated rainfall collected over the Internet are used to estimate the real-time rainfall on a road link. Together with the real-time accumulated rainfall, the flooding history, the rainfall range that caused past flooding of a road link, and the frequency-based probability precipitation used for road design are used as factors to determine the flood risk index of roads. We simulated two past cases in Busan, July 7, 2009 and July 15, 2012. As a result, all road links that were among the actually flooded roads at those times received a high flood risk index.
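
The coefficient of correlation weighting method for filling in missing rainfall is commonly written as a correlation-weighted average over neighboring gauges; a minimal sketch with hypothetical gauge values follows (the paper's actual station set and correlations are not given in the abstract):

```python
# Sketch of the CCW idea: rainfall at an ungauged point (here, a road link) is
# estimated as a weighted average of the surrounding gauges, with each gauge
# weighted by its historical correlation with the target site.

def ccw_estimate(gauge_rainfall, correlations):
    """gauge_rainfall, correlations: dicts keyed by gauge id."""
    num = sum(correlations[g] * gauge_rainfall[g] for g in gauge_rainfall)
    den = sum(correlations[g] for g in gauge_rainfall)
    return num / den

if __name__ == "__main__":
    rainfall_mm = {"gauge_A": 32.0, "gauge_B": 41.5, "gauge_C": 28.0}   # hypothetical
    corr = {"gauge_A": 0.92, "gauge_B": 0.85, "gauge_C": 0.78}          # hypothetical
    print(f"estimated rainfall on road link: {ccw_estimate(rainfall_mm, corr):.1f} mm")
```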

Development and Applications of A Paternity and Kinship Analysis System Based on DNA Data (유전자 분석 자료에 의한 친자 및 혈연관계 분석시스템 개발 및 활용)

  • Koo, Kyo-Chan;Kim, Sun-Uk
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.10 / pp.6715-6721 / 2015
  • Recently, DNA data of missing persons, deceased persons, and missing children continue to increase, but most statistical calculations for paternity confirmation are still done manually or in Excel. Therefore, software is needed that facilitates both the systematic management and the effective analysis of short tandem repeat (STR) profiles derived from DNA data. Through a twenty-month study, we developed a web-based system that easily performs paternity analysis and kinship analysis based on various options. The former uses an existing paternity index algorithm, and the latter uses an identity-by-descent (IBD) formula. Because our system has been validated on real datasets in terms of likelihood ratio and probability of paternity, it ensures increased reliability as well as effective management and analysis of DNA data in mass disasters. In addition, it includes advanced features such as an integrated environment, a user-centered interface, and process automation.
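
As a small worked illustration of the paternity-index arithmetic such a system automates, per-locus paternity indices multiply into a combined paternity index, and Bayes' rule gives the probability of paternity; the PI values below are invented:

```python
# Standard paternity-index arithmetic: per-locus paternity indices (PI) are
# multiplied into a combined paternity index (CPI), and the probability of
# paternity follows from Bayes' rule with a chosen prior (0.5 by convention).
from math import prod

def probability_of_paternity(locus_pis, prior=0.5):
    cpi = prod(locus_pis)                         # combined paternity index
    return cpi * prior / (cpi * prior + (1 - prior)), cpi

if __name__ == "__main__":
    locus_pis = [1.8, 2.4, 0.9, 3.1, 1.2, 2.7]    # one made-up PI per STR locus
    w, cpi = probability_of_paternity(locus_pis)
    print(f"CPI = {cpi:.2f}, probability of paternity = {w:.4f}")
```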

Deep Learning Model for Incomplete Data (불완전한 데이터를 위한 딥러닝 모델)

  • Lee, Jong Chan
    • Journal of the Korea Convergence Society / v.10 no.2 / pp.1-6 / 2019
  • The proposed model is developed to minimize the loss of information in incomplete data, including data with missing values. The first step is to transform the training data to compensate for the lost information using a data extension technique. In this conversion process, the attribute values of the data are filled in with binary or probability values in a one-hot encoding. Next, the converted data are input to the deep learning model, where the number of entries varies with the cardinality of each attribute. Then, the entry values of each attribute are assigned to the respective input nodes, and learning proceeds. This differs from existing learning models and has an unusual structure in which individual attribute values are distributed over multiple nodes in the input layer. To evaluate the learning performance of the proposed model, various experiments are performed on data with missing values, and the results show that the model is superior in terms of performance. The proposed model will be useful as an algorithm that minimizes information loss in ubiquitous environments.
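
A minimal sketch of the data-extension step described above, assuming the missing entries are filled with the attribute's empirical value frequencies (the abstract does not state which probability values the model actually uses):

```python
# A categorical attribute is expanded into one entry per possible value: an
# observed value becomes a one-hot (binary) vector, while a missing value is
# filled with probability values (here, empirical frequencies -- an assumption).
from collections import Counter

def extend_attribute(values, missing_marker=None):
    observed = [v for v in values if v is not missing_marker]
    domain = sorted(set(observed))
    freq = Counter(observed)
    probs = [freq[v] / len(observed) for v in domain]
    rows = []
    for v in values:
        if v is missing_marker:
            rows.append(probs)                                     # probability fill
        else:
            rows.append([1.0 if v == d else 0.0 for d in domain])  # one-hot fill
    return domain, rows

if __name__ == "__main__":
    domain, rows = extend_attribute(["red", None, "blue", "red"])
    print(domain)        # ['blue', 'red']
    for row in rows:
        print(row)       # [0,1], [1/3, 2/3], [1,0], [0,1]
```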

Optimization of Link-level Performance and Complexity for the Floating-point and Fixed-point Designs of IEEE 802.16e OFDMA/TDD Mobile Modem (IEEE 802.16e OFDMA/TDD 이동국 모뎀의 링크 성능과 복잡도 최적화를 위한 부동 및 고정 소수점 설계)

  • Sun, Tae-Hyoung;Kang, Seung-Won;Kim, Kyu-Hyun;Chang, Kyung-Hi
    • Journal of the Institute of Electronics Engineers of Korea TC / v.43 no.11 s.353 / pp.95-117 / 2006
  • In this paper, we describe the optimization of the link-level performance and complexity of the floating-point and fixed-point designs of an IEEE 802.16e OFDMA/TDD mobile modem. In the floating-point design, we propose channel estimation methods for the downlink traffic channel and select the optimal method through computer simulation. We also propose efficient algorithms for time and frequency synchronization, the digital front end (DFE), and CINR estimation to optimize system performance. Furthermore, we describe the fixed-point method for the uplink traffic and control channels. The superiority of the proposed algorithms is validated using the detection, false-alarm, and missing probabilities, the mean acquisition time, PER curves, and so on. For the fixed-point design, we propose an efficient methodology for deriving an optimized fixed-point design from the floating-point one. Finally, we design the fixed-point versions of the traffic channels, the time and frequency synchronization, and the DFE blocks in the uplink and downlink. The trade-off between performance and complexity is optimized through computer simulations.
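
As a generic illustration of the floating-point versus fixed-point trade-off the paper optimizes (not its actual word lengths or blocks), the snippet below quantizes sample values to a 16-bit fixed-point format with different fractional word lengths and reports the resulting error:

```python
# Quantize a value to a signed fixed-point format and measure the error: more
# fractional bits reduce quantization error but shrink the representable range.

def to_fixed_point(x, frac_bits, total_bits=16):
    """Round x to a signed fixed-point value with `frac_bits` fractional bits,
    saturating to the range representable in `total_bits` bits."""
    scale = 1 << frac_bits
    max_int = (1 << (total_bits - 1)) - 1
    min_int = -(1 << (total_bits - 1))
    q = max(min_int, min(max_int, round(x * scale)))
    return q / scale


if __name__ == "__main__":
    samples = [0.1234567, -0.75, 0.9999, 1.5]    # 1.5 exceeds the Q1.15 range
    for frac_bits in (7, 11, 15):
        errors = [abs(x - to_fixed_point(x, frac_bits)) for x in samples]
        # At 15 fractional bits, 1.5 saturates: the range/precision trade-off
        # that drives the performance/complexity optimization.
        print(f"16-bit word, {frac_bits} fractional bits: max error {max(errors):.6f}")
```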

Implementation for Automatic Inspection System on Ventilating Electronic Device Based on Reliability Improvement (신뢰성 향상 기반의 송풍전자장치 자동검사 시스템 구현)

  • Do, Nam Soo;Ryu, Kwang Ryol
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.6 / pp.1155-1160 / 2017
  • This paper describes the implementation of an automatic inspection system for a ventilating electronic device, based on reliability improvement. The automatic inspection system minimizes the inspection errors made on the ventilating apparatuses compared with manual inspection. The system consists of a control system, a software structure, and a monitoring system that tracks the inspection process. The inspection system is evaluated for reliability improvement using Gage Repeatability and Reproducibility. Compared with manual inspection based on the wind pressure sensor, the experimental results show an inspection speed about 2 times faster, a measurement error of ±0.02 V, and improvements of 15% in discrimination effectiveness, 17% in missing probability, and 12% in false alarm probability. In the future, the system will be further improved by building a database and using product bar codes for a total quality control system, enabling effective reliability enhancement.
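
As a small illustration of how missing and false-alarm probabilities like those reported can be computed from inspection outcomes against a known ground truth (the counts below are invented, not the paper's data):

```python
# Missing probability = fraction of truly defective units the inspection did not
# flag; false-alarm probability = fraction of good units it wrongly flagged.

def inspection_metrics(results):
    """results: list of (ground_truth_defective, flagged_defective) booleans."""
    defective = [r for r in results if r[0]]
    good = [r for r in results if not r[0]]
    missing = sum(1 for truth, flagged in defective if not flagged)
    false_alarm = sum(1 for truth, flagged in good if flagged)
    return {
        "missing probability": missing / len(defective) if defective else 0.0,
        "false alarm probability": false_alarm / len(good) if good else 0.0,
    }

if __name__ == "__main__":
    # (actually defective?, inspector flagged defective?) -- invented counts
    outcomes = [(True, True)] * 45 + [(True, False)] * 5 \
             + [(False, False)] * 140 + [(False, True)] * 10
    print(inspection_metrics(outcomes))   # missing 0.10, false alarm ~0.067
```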

Ranking by Inductive Inference in Collaborative Filtering Systems (협력적 여과 시스템에서 귀납 추리를 이용한 순위 결정)

  • Ko, Su-Jeong
    • Journal of KIISE:Software and Applications / v.37 no.9 / pp.659-668 / 2010
  • Collaborative filtering systems must understand the behavior of a new user and acquire new information about that user in order to recommend items of interest. To acquire this information, collaborative filtering systems learn user behavior from previous data and obtain new information from the results. In this paper, we propose an inductive inference method that obtains new information about users and ranks items using that information. The proposed method clusters users into groups by learning from user data through non-negative matrix factorization (NMF), an inductive machine learning method, and selects group features using the chi-square statistic. Then, the method classifies a new user into a group using a Bayesian probability model, an inductive inference method, based on the new user's rating values and the group features. Finally, the method determines the ranks of items by applying the Rocchio algorithm to items with missing rating values.
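
A minimal sketch of the first step described above, clustering users with NMF on a small user-item rating matrix; the matrix, the number of groups, and the assign-to-dominant-factor rule are illustrative assumptions, not the paper's configuration:

```python
# Cluster users by factorizing a user-item rating matrix with NMF and assigning
# each user to the group (latent factor) with the largest weight.
import numpy as np
from sklearn.decomposition import NMF

ratings = np.array([          # rows = users, columns = items, 0 = not rated
    [5, 4, 0, 1, 0],
    [4, 5, 1, 0, 0],
    [0, 1, 4, 5, 3],
    [1, 0, 5, 4, 4],
])

model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(ratings)     # user-to-group weights
H = model.components_                # group-to-item weights

groups = W.argmax(axis=1)            # assign each user to its dominant group
print("user group assignments:", groups)
print("group feature (item) weights:\n", H.round(2))
```

The paper's subsequent steps (chi-square feature selection, Bayesian classification of a new user, and Rocchio-based ranking over items with missing ratings) would build on group assignments like these.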