• Title/Summary/Keyword: Probability functions

Optimal Seismic Rehabilitation of Structures Using Probabilistic Seismic Demand Model (확률적 지진요구모델을 이용한 구조물의 최적 내진보강)

  • Park, Joo-Nam;Choi, Eun-Soo
    • Journal of the Earthquake Engineering Society of Korea, v.12 no.3, pp.1-10, 2008
  • The seismic performance of a structure designed without consideration of seismic loading can be effectively enhanced through seismic rehabilitation. The appropriate level of rehabilitation should be determined by decision criteria that minimize the anticipated earthquake-related losses. To estimate these losses, a seismic risk analysis should be performed that considers the probabilistic characteristics of the hazard and of the structural damage. This study presents a decision procedure in which a probabilistic seismic demand model is used to estimate and minimize the total seismic losses through seismic rehabilitation. The probability density function and the cumulative distribution function of the structural damage over a specified time period are established in closed form and are combined with loss functions to derive the expected seismic loss. The procedure presented in this study could be used effectively in making decisions on the seismic rehabilitation of structural systems.
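
To make the expected-loss computation concrete, the following is a minimal numerical sketch. The lognormal damage distribution, the capped linear loss function, and all parameter values are illustrative assumptions; the paper's closed-form damage pdf/CDF and loss functions are not reproduced here.

```python
import numpy as np
from scipy import stats

def expected_seismic_loss(loss_fn, damage_median, damage_beta, n=100_000):
    """Monte Carlo estimate of E[loss] = integral of loss(d) * f_D(d) dd."""
    # Assumed lognormal damage distribution (a common choice in
    # probabilistic seismic demand models, not the paper's closed form).
    damage = stats.lognorm.rvs(s=damage_beta, scale=damage_median,
                               size=n, random_state=0)
    return loss_fn(damage).mean()

# Hypothetical loss function: repair cost grows with the damage index
# and is capped at the replacement cost.
replacement_cost = 1.0e6
loss = lambda d: np.minimum(d, 1.0) * replacement_cost

print(expected_seismic_loss(loss, damage_median=0.2, damage_beta=0.5))
```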

Improvement of Keyword Spotting Performance Using Normalized Confidence Measure (정규화 신뢰도를 이용한 핵심어 검출 성능향상)

  • Kim, Cheol;Lee, Kyoung-Rok;Kim, Jin-Young;Choi, Seung-Ho
    • The Journal of the Acoustical Society of Korea, v.21 no.4, pp.380-386, 2002
  • Conventional post-processing such as the confidence measure (CM) proposed by Rahim computes each phone's CM from the likelihood ratio between a phoneme model and its anti-model, and then obtains a word's CM by averaging the phone-level CMs [1]. With this conventional method, the CMs of some specific keywords are very low and those keywords are usually rejected. The reason is that the statistics of phone-level CMs are not consistent: phone-level CMs have a different probability density function (pdf) for each phone, especially for each tri-phone. To overcome this problem, we propose a normalized confidence measure (NCM). Our approach is to transform the CM pdf of each tri-phone into the same pdf, under the assumption that the CM pdfs are Gaussian. For evaluation we use a common keyword spotting system in which context-dependent HMMs model keyword utterances and context-independent HMMs model non-keyword utterances. The experimental results show that the proposed NCM reduced the false alarm rate (FAR) from 0.44 to 0.33 FA/KW/HR (false alarms/keyword/hour) at a missed detection rate (MDR) of about 8%, a 25% improvement in FAR.
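
The normalization idea, mapping each tri-phone's Gaussian CM pdf onto one common pdf, amounts to a per-tri-phone z-score. A minimal sketch, assuming the per-tri-phone means and standard deviations were already estimated from training data; the tri-phone labels and statistics below are placeholders.

```python
import numpy as np

# Hypothetical per-tri-phone (mean, std) of raw CM scores, as would be
# estimated from training data.
cm_stats = {
    "s-a+n": (-1.2, 0.4),
    "a-n+i": (-0.8, 0.6),
}

def normalized_cm(triphone, raw_cm):
    """z-score: maps every tri-phone's Gaussian CM pdf onto N(0, 1)."""
    mu, sigma = cm_stats[triphone]
    return (raw_cm - mu) / sigma

def word_ncm(phone_scores):
    """Word-level confidence: average of the normalized phone-level CMs."""
    return float(np.mean([normalized_cm(tp, cm) for tp, cm in phone_scores]))

print(word_ncm([("s-a+n", -1.0), ("a-n+i", -1.1)]))
```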

Application of an Automated Time Domain Reflectometry to Solute Transport Study at Field Scale: Transport Concept (시간영역 광전자파 분석기 (Automatic TDR System)를 이용한 오염물질의 거동에 관한 연구: 오염물질 운송개념)

  • Kim, Dong-Ju
    • Economic and Environmental Geology, v.29 no.6, pp.713-724, 1996
  • The time-series resident solute concentrations monitored at two field plots with the automated 144-channel TDR system of Kim (this issue) are used to investigate the dominant transport mechanism at the field scale. Two models based on contradictory assumptions about solute transport in the vadose zone are fitted to the measured mean breakthrough curves (BTCs): the deterministic one-dimensional convection-dispersion model (CDE) and the stochastic-convective lognormal transfer function model (CLT). In addition, moment analysis has been performed using the probability density functions (pdfs) of the travel time of the resident concentration. The moment analysis shows that the first and second time moments of the resident pdf are larger than those of the flux pdf. From the time moments, expressed as functions of the model parameters, the variance and dispersion of resident solute travel times are derived. The relationship between the variance or dispersion of solute travel time and depth is found to be identical for the time-series flux and resident concentrations. The two models were tested against these relationships, but significant variations of transport properties across depth made the test unreliable. Consequently, model performance was evaluated by how well each model, after calibration at the first depth, predicted the time-series resident BTCs at the other depths. This evaluation leads to a clear conclusion: at both experimental sites the CLT model gives more accurate predictions than the CDE model. This suggests that solute transport in natural field soils is more likely governed by a stream-tube concept with correlated flow than by a complete-mixing model. The poor prediction of the CDE model is attributed to its underestimation of solute spreading, which results in an overprediction of the peak concentration.
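
The moment analysis described above can be sketched briefly: normalize a measured BTC into a travel-time pdf and take its first and second time moments. The BTC below is synthetic, for illustration only.

```python
import numpy as np

t = np.linspace(0.0, 100.0, 501)                 # time (h)
c = np.exp(-(np.log(t + 1e-9) - 3.0)**2 / 0.5)   # synthetic BTC (placeholder)

pdf = c / np.trapz(c, t)          # travel-time pdf from the BTC
m1 = np.trapz(t * pdf, t)         # first time moment: mean travel time
m2 = np.trapz(t**2 * pdf, t)      # second time moment
variance = m2 - m1**2             # spread of travel times

print(m1, variance)
```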

A Evaluation Model of AHP Results Using Monte Carlo Simulation (Depending on the Case Studies of Road and Rail) (몬테카를로 시뮬레이션을 통한 AHP결과 해석모형개발 (도로 및 철도부문 사례를 중심으로))

  • Sul, You-Jin;Chung, Sung-Bong;Song, Ki-Han;Chon, Kyung-Soo;Rhee, Sung-Mo
    • Journal of Korean Society of Transportation, v.26 no.4, pp.195-204, 2008
  • Multi-criteria analysis is a method for optimizing decisions that involve numerous attributes and objective functions, and the Analytic Hierarchy Process (AHP) is widely used as a general multi-criteria analysis covering many critical issues. However, because existing methodologies omit procedures for validating the reliability of decisions made by AHP valuers, a new methodology that includes such validation is required for more reliable decisions. In this research, ideal decision results are derived using Monte Carlo simulation for cases in which AHP valuers lack expertise in the specific project, and these results are compared with results derived from experts in order to develop a new analysis model for more reliable decisions. Finally, the new model is validated by applying it to field case studies of road and rail projects carried out by the Korea Development Institute (KDI) between 2003 and 2006. The study found that approximately 20% of the decisions resulting from the existing methodology should be treated with prudence. For future studies, the authors suggest analyzing the correlation between the initial weights and the final results, since the final results are strongly influenced by the initial weights.
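
A minimal sketch of the Monte Carlo step, assuming non-expert valuers are simulated by drawing pairwise comparisons uniformly from the Saaty scale and that priority weights are computed by the geometric-mean method; the criteria count and the sampling scheme are illustrative, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
saaty = np.array([1/9, 1/7, 1/5, 1/3, 1, 3, 5, 7, 9])  # Saaty 1-9 scale

def random_ahp_weights(n_criteria=3):
    """Priority vector (geometric-mean method) from a random judgment matrix."""
    A = np.ones((n_criteria, n_criteria))
    for i in range(n_criteria):
        for j in range(i + 1, n_criteria):
            A[i, j] = rng.choice(saaty)   # a simulated non-expert judgment
            A[j, i] = 1.0 / A[i, j]       # reciprocal consistency
    gm = A.prod(axis=1) ** (1.0 / n_criteria)
    return gm / gm.sum()

# Distribution of weights under random judgments: a baseline against which
# expert-derived weights could be compared.
samples = np.array([random_ahp_weights() for _ in range(10_000)])
print(samples.mean(axis=0), samples.std(axis=0))
```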

Gene signature for prediction of radiosensitivity in human papillomavirus-negative head and neck squamous cell carcinoma

  • Kim, Su Il;Kang, Jeong Wook;Noh, Joo Kyung;Jung, Hae Rim;Lee, Young Chan;Lee, Jung Woo;Kong, Moonkyoo;Eun, Young-Gyu
    • Radiation Oncology Journal, v.38 no.2, pp.99-108, 2020
  • Purpose: The probability of recurrence after adjuvant or definitive radiotherapy in patients with human papillomavirus-negative (HPV(-)) head and neck squamous cell carcinoma (HNSCC) varies from patient to patient. This study aimed to identify and validate a radiation sensitivity signature (RSS) of patients with HPV(-) HNSCC to predict recurrence after radiotherapy. Materials and Methods: Clonogenic survival assays were performed to assess radiosensitivity in 14 HNSCC cell lines. We identified genes closely correlated with radiosensitivity and validated them in The Cancer Genome Atlas (TCGA) cohort. The validated RSS was analyzed by ingenuity pathway analysis (IPA) to identify canonical pathways, upstream regulators, diseases and functions, and gene networks related to radiosensitive genes in HPV(-) HNSCC. Results: The surviving fraction of the 14 HNSCC cell lines after exposure to 2 Gy of radiation ranged from 48% to 72%. Six genes were positively correlated and 35 genes were negatively correlated with radioresistance. The RSS was validated in the HPV(-) TCGA HNSCC cohort (n = 203), and the recurrence-free survival (RFS) rate was significantly lower in the radioresistant group than in the radiosensitive group (p = 0.035). Cell death and survival, cell-to-cell signaling, and cellular movement were significantly enriched in the RSS, and the signature genes were highly correlated with each other. Conclusion: We derived an HPV(-) HNSCC-specific RSS and validated it in an independent cohort. The outcome of adjuvant or definitive radiotherapy in patients with HPV(-) HNSCC can be predicted by analyzing their RSS, which might help in establishing a personalized therapeutic plan.
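
The signature-derivation step, correlating per-gene expression across cell lines with clonogenic survival at 2 Gy, can be sketched as follows. The expression matrix, the SF2 values, and the correlation threshold are synthetic placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sf2 = rng.uniform(0.48, 0.72, size=14)    # surviving fractions of 14 lines
expr = rng.normal(size=(1000, 14))        # genes x cell lines (synthetic)

# Pearson correlation of each gene's expression with SF2 (radioresistance).
r = np.array([stats.pearsonr(gene, sf2)[0] for gene in expr])

pos_genes = np.where(r > 0.6)[0]          # positively correlated candidates
neg_genes = np.where(r < -0.6)[0]         # negatively correlated candidates
print(len(pos_genes), len(neg_genes))
```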

Design and Evaluation of a Fuzzy Logic based Multi-hop Broadcast Algorithm for IoT Applications (IoT 응용을 위한 퍼지 논리 기반 멀티홉 방송 알고리즘의 설계 및 평가)

  • Bae, Ihn-han;Kim, Chil-hwa;Noh, Heung-tae
    • Journal of Internet Computing and Services, v.17 no.6, pp.17-23, 2016
  • In future networks such as the Internet of Things (IoT), the number of computing devices is expected to grow exponentially, and each thing communicates with the others and acquires information by itself. With the growing interest in IoT applications, broadcasting in opportunistic ad-hoc networks such as Machine-to-Machine (M2M) is an important transmission strategy that allows fast data dissemination. In distributed networks for the IoT, the energy efficiency of the nodes is a key factor in network performance. In this paper, we propose a fuzzy logic based probabilistic multi-hop broadcast (FPMCAST) algorithm that disseminates data stochastically according to the remaining energy rate, the replication density rate of the sending node, and the distance rate between sending and receiving nodes. In the proposed FPMCAST, the inference engine is based on a fuzzy rule base consisting of 27 if-then rules that map the input parameters to membership functions of the inputs and output. The output of the fuzzy system defines the fuzzy set for the rebroadcasting probability, and defuzzification extracts a numeric result from that fuzzy set; here the Center of Gravity (COG) method is used. The performance of FPMCAST is then evaluated through a simulation study, which demonstrates that the proposed algorithm significantly outperforms flooding and gossiping algorithms. In particular, the FPMCAST algorithm achieves a longer network lifetime because the residual energy of the nodes is consumed evenly.
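
A heavily condensed sketch of the fuzzy decision: fuzzify the three inputs, fire Mamdani-style rules, and defuzzify the rebroadcast probability with the Center of Gravity method. The membership function shapes and the two sample rules are illustrative assumptions; the paper's rule base contains 27 rules.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership degree of scalar x in the triangle (a, b, c)."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

u = np.linspace(0.0, 1.0, 201)   # output universe: rebroadcast probability

def fpmcast_probability(energy, density, distance):
    # Fuzzification of the three inputs (one label each, for brevity).
    high_energy = tri(energy, 0.4, 1.0, 1.6)
    low_density = tri(density, -0.6, 0.0, 0.6)
    far_receiver = tri(distance, 0.4, 1.0, 1.6)
    # Two illustrative Mamdani rules (the paper's rule base has 27):
    #   R1: IF energy is high AND receiver is far THEN probability is high
    #   R2: IF density is low THEN probability is medium
    w_high = min(high_energy, far_receiver)
    w_med = low_density
    high_set = np.vectorize(tri)(u, 0.5, 1.0, 1.5)
    med_set = np.vectorize(tri)(u, 0.2, 0.5, 0.8)
    agg = np.maximum(np.minimum(w_high, high_set),
                     np.minimum(w_med, med_set))        # clip and aggregate
    return float(np.sum(u * agg) / (np.sum(agg) + 1e-12))  # COG centroid

print(fpmcast_probability(energy=0.8, density=0.3, distance=0.9))
```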

Geographical Name Denoising by Machine Learning of Event Detection Based on Twitter (트위터 기반 이벤트 탐지에서의 기계학습을 통한 지명 노이즈제거)

  • Woo, Seungmin;Hwang, Byung-Yeon
    • KIPS Transactions on Software and Data Engineering, v.4 no.10, pp.447-454, 2015
  • This paper proposes machine-learning-based denoising of geographical names for Twitter-based event detection. The increasing number of smartphone users has recently driven growth in SNS use. In particular, Twitter's short messages (at most 140 characters) and follow function give it the power to convey and diffuse information quickly; these characteristics and its mobile-optimized features give Twitter a fast information-conveying speed, which allows it to serve as a channel for reporting disasters or events. Related research has used individual Twitter users as sensors to detect events that occur in the real world, employing geographical names as keywords on the grounds that an event occurs in a specific place. However, that work ignored the noise arising from homographs of geographical names, which became an important factor lowering the accuracy of event detection. In this paper, we apply a denoising technique using two methods, removal and forecasting. First, after a filtering step built on a database of noise-related terms, we determine whether a term is actually used as a geographical name using Naive Bayesian classification. Finally, using the experimental data, we obtain the probability values through machine learning. On the basis of the forecasting technique proposed in this paper, the reliability of the denoising turned out to be 89.6%.
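
Since the abstract names Naive Bayesian classification for the place-name decision, a minimal sketch is given below. The training tweets, tokens, and labels are toy placeholders; real features would come from the noise database and the filtering step.

```python
from collections import Counter
import math

def train_nb(docs):
    """docs: list of (tokens, label). Returns per-class token counts and priors."""
    counts = {"place": Counter(), "noise": Counter()}
    totals = Counter()
    for tokens, label in docs:
        counts[label].update(tokens)
        totals[label] += 1
    return counts, totals

def classify(tokens, counts, totals):
    """Log-space Naive Bayes with Laplace smoothing."""
    vocab = set(counts["place"]) | set(counts["noise"])
    scores = {}
    for label in counts:
        logp = math.log(totals[label] / sum(totals.values()))  # class prior
        n = sum(counts[label].values())
        for t in tokens:
            logp += math.log((counts[label][t] + 1) / (n + len(vocab)))
        scores[label] = logp
    return max(scores, key=scores.get)

# Toy training data: tweets where the token is a real place vs. a homograph.
train = [(["지진", "대피"], "place"), (["영화", "개봉"], "noise")]
counts, totals = train_nb(train)
print(classify(["지진", "발생"], counts, totals))
```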

Target Reliability Index of Single Gravel Compaction Piles for Limit State Design (한계상태설계를 위한 단일 쇄석다짐말뚝의 목표신뢰도지수)

  • You, Youngkwon;Lim, Heuidae;Park, Joonmo
    • Journal of the Korean GEO-environmental Society, v.15 no.2, pp.5-15, 2014
  • In limit state design, the target reliability index indicates the safety margin and is important for determining the partial factors. To determine the target reliability index needed for limit state design, six design and construction case histories of gravel compaction piles (GCP) were investigated. The limit state functions were defined for bulging failure, the major failure mode of GCP. Reliability analyses were performed using the first-order reliability method (FORM), and the reliability index was calculated for each ultimate bearing capacity formulation. The reliability index of GCP tended to be proportional to the safety factor of allowable stress design, with an average value of $\beta = 2.30$. The reliability level assessed by the analyses was compared with the target reliability indices for existing structure foundations. As a result, GCP required a relatively low level of safety compared with deep and shallow foundations, and the current reliability level was similar to the target reliability of reinforced-earth retaining walls and soil nailing. Therefore, the target reliability index of GCP is suggested as $\beta_T = 2.33$, based on the various literature together with the reliability level computed in this study.
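
For a linear two-variable limit state $g = R - S$ with independent normal variables, the FORM reliability index reduces to the familiar closed form sketched below. The resistance and load statistics are placeholders, not the paper's GCP case-history data.

```python
import math
from scipy import stats

def reliability_index(mu_r, cov_r, mu_s, cov_s):
    """Reliability index for g = R - S with independent normal R and S.
    For this linear case the first-order result coincides with FORM."""
    sig_r, sig_s = mu_r * cov_r, mu_s * cov_s
    return (mu_r - mu_s) / math.sqrt(sig_r**2 + sig_s**2)

# Placeholder bearing-capacity (resistance) and load statistics.
beta = reliability_index(mu_r=300.0, cov_r=0.25, mu_s=150.0, cov_s=0.15)
pf = stats.norm.cdf(-beta)    # corresponding probability of failure
print(beta, pf)
```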

A Study on the Mission Reliability of Combat System through the Design Structure Matrix and Interface Matrix (설계구조행렬(DSM) 및 인터페이스 매트릭스 설계를 통한 전투체계 임무신뢰도에 관한연구)

  • Lee, Jeong-Wan;Park, Chan-Hyeon;Kim, So-Jung;Kim, Eui-Whan;Jang, Joong Soon
    • Journal of the Korea Academia-Industrial cooperation Society, v.20 no.9, pp.451-458, 2019
  • In weapon system development and operation, reliability is a key measure of the ability of a system to perform its required functions under specified conditions over a specified period of time, and the mission reliability used to assess mission fulfillment is an important indicator of victory or defeat in battle. Mission reliability is the probability that a given task will succeed in a given event or environmental situation over a given period of time. Existing mission reliability was calculated from a reliability block diagram built from physical connections only, based on the mission. However, as modern weapon systems evolve and advance, the related equipment structure becomes increasingly complex, so mission relevance cannot be expressed when missions must be classified by functional as well as physical connections. In this study, mission reliability was calculated for a gun control system, part of a ship's combat system, by expressing the associations between the physical and functional structures using the design structure matrix (DSM) technique and the interface matrix technique. We expect the results to be used as verification data for mission reliability.
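
Once the DSM and interface matrix identify which components a mission path depends on, mission reliability follows from the resulting block structure. A minimal sketch with a hypothetical series/parallel path and placeholder component reliabilities:

```python
import numpy as np

def series(rs):
    """All blocks in the path must work."""
    return float(np.prod(rs))

def parallel(rs):
    """At least one redundant block must work."""
    return 1.0 - float(np.prod([1.0 - r for r in rs]))

# Hypothetical gun-control mission path derived from physical and
# functional connections: sensor -> (primary | backup processor) -> actuator.
r_mission = series([0.99, parallel([0.95, 0.95]), 0.98])
print(r_mission)
```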

Development of Hybrid Vision Correction Algorithm (Hybrid Vision Correction Algorithm의 개발)

  • Ryu, Yong Min;Lee, Eui Hoon
    • Journal of the Korea Academia-Industrial cooperation Society, v.22 no.1, pp.61-73, 2021
  • Metaheuristic search methods have been developed to solve problems with a range of objective functions in situations with limited information and under time constraints. In this study, the Hybrid Vision Correction Algorithm (HVCA), which enhances the performance of the Vision Correction Algorithm (VCA), was developed. The HVCA applies two methods to improve the performance of the VCA. The first replaces the parameters that the user must supply with self-adaptive parameters. The second adds the Centralized Global Search (CGS) structure of the Exponential Bandwidth Harmony Search with Centralized Global Search (EBHS-CGS) to the HVCA. The HVCA therefore consists of two structures, CGS and VCA, and as the search proceeds it increases the probability of selecting the structure that is producing the better values. Optimization problems were used to assess the performance of the HVCA, and the results were compared with Harmony Search (HS), Improved Harmony Search (IHS), and VCA. Over 100 repetitions, the HVCA found the optimal value more often than HS, IHS, and VCA, and it also reduced the Number of Function Evaluations (NFEs). The performance of the HVCA is therefore improved.
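
A minimal sketch of the structure-selection idea, assuming a simple rule that shifts the selection probability toward whichever structure last improved the best solution; the toy objective, step sizes, and update rule are illustrative, not the published HVCA.

```python
import random

def sphere(x):
    """Toy objective to minimize (stand-in for a benchmark problem)."""
    return sum(v * v for v in x)

p_cgs = 0.5                          # probability of choosing the CGS structure
best_x = [random.uniform(-5, 5) for _ in range(5)]
best_f = sphere(best_x)

for _ in range(10_000):              # crude NFE budget
    use_cgs = random.random() < p_cgs
    step = 0.1 if use_cgs else 1.0   # CGS: focused search near the best point
    cand = [v + random.gauss(0, step) for v in best_x]
    f = sphere(cand)
    if f < best_f:                   # reward the structure that improved
        best_x, best_f = cand, f
        p_cgs = min(0.9, p_cgs + 0.05) if use_cgs else max(0.1, p_cgs - 0.05)

print(best_f, p_cgs)
```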