• Title/Summary/Keyword: Probabilistic design

688 search results

Cache-Filter: A Cache Permission Policy for Information-Centric Networking

  • Feng, Bohao; Zhou, Huachun; Zhang, Mingchuan; Zhang, Hongke
    • KSII Transactions on Internet and Information Systems (TIIS), v.9 no.12, pp.4912-4933, 2015
  • Information-Centric Networking (ICN) has recently attracted great attention. It names content independently of its location and introduces in-network caching, allowing content to be cached anywhere within the network. The benefits of such a design are obvious; however, many challenges remain. Among them, the local caching policy is widely discussed, and it can be further divided into two parts: the cache permission policy and the cache replacement policy. The former decides whether an incoming content should be cached, while the latter evicts a cached content when required. The Internet is a user-oriented network, and popular contents receive many more requests than unpopular ones. Caching such popular contents closer to users improves network performance; consequently, the local caching policy must identify popular contents. However, given the line-speed requirement of ICN routers, a local caching policy whose complexity exceeds O(1) cannot be applied. As the replacement policy, Least Recently Used (LRU) is selected as the default for ICN because of its low complexity, although its ability to identify popular content is poor. Hence, the identification of popular contents should be handled by the cache permission policy. In this paper, a cache permission policy called Cache-Filter, whose complexity is O(1), is proposed, aiming to store popular contents closer to users. Cache-Filter takes content popularity into account and achieves this goal through the collaboration of on-path nodes. Extensive simulations are conducted to evaluate the performance of Cache-Filter; Leave Copy Down (LCD), Move Copy Down (MCD), Betw, ProbCache, ProbCache+, Prob(p), and Probabilistic Caching with Secondary List (PCSL) are also implemented for comparison. The results show that Cache-Filter performs well. For example, in terms of the distance to access contents, Cache-Filter saves over 17% of hops compared with Leave Copy Everywhere (LCE), the permission policy used by Named Data Networking (NDN).
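For orientation, the two-part local caching policy described above (permission check in front of replacement) can be sketched in a few lines. This is not the paper's Cache-Filter algorithm; it combines the Prob(p) permission baseline named in the abstract with the default LRU replacement policy, and all names are illustrative. Both operations stay O(1), which is the router line-speed constraint the abstract emphasizes.

```python
import random
from collections import OrderedDict

class ProbLruCache:
    """Sketch of an O(1) local caching policy: a Prob(p) permission
    check (one of the baselines compared against Cache-Filter) in
    front of LRU replacement (the ICN default named in the abstract)."""

    def __init__(self, capacity, p=0.5):
        self.capacity = capacity
        self.p = p                       # admission probability
        self.store = OrderedDict()       # name -> content, in LRU order

    def get(self, name):
        if name in self.store:
            self.store.move_to_end(name)  # mark as recently used
            return self.store[name]
        return None

    def admit(self, name, content):
        # Permission policy: cache an incoming content with probability p.
        if name in self.store or random.random() >= self.p:
            return False
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # replacement policy: evict LRU
        self.store[name] = content
        return True
```

Cache-Filter replaces the coin flip in `admit` with a popularity test coordinated among on-path nodes, while keeping the same O(1) structure.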

Risk-Targeted Seismic Performance of Steel Ordinary Concentrically Braced Frames Considering Seismic Hazard (지진재해도를 고려한 철골 보통중심가새골조의 위험도기반 내진성능)

  • Shin, Dong-Hyeon; Hong, Suk-Jae; Kim, Hyung-Joon
    • Journal of the Computational Structural Engineering Institute of Korea, v.30 no.5, pp.371-380, 2017
  • The risk-targeted seismic design concept was first included in ASCE/SEI 7-10 to address problems with the uniform-hazard based seismic concept, which had been constructed without explicitly considering probabilistic uncertainties in the collapse capacities of structures. However, this concept is not yet reflected in the current Korean Building Code (KBC) because of insufficient strong-earthquake data for the Korean peninsula and little information on the collapse capacities of structures. This study evaluates the risk-targeted seismic performance of steel ordinary concentrically braced frames (OCBFs). To do this, the collapse capacities of prototype steel OCBFs are assessed for various analysis parameters, including building location, building height, and soil condition. The seismic hazard curves are developed using an empirical spectral shape prediction model that can reflect the characteristics of earthquake records. The collapse probabilities of the prototype steel OCBFs located in major Korean cities are then evaluated using the risk integral concept. The results show that the analysis parameters considerably influence the collapse probabilities of steel OCBFs. The collapse probabilities of taller steel OCBFs exceed the target seismic risk of 1 percent in 50 years, which suggests that a height limitation for steel OCBFs should be considered in the future KBC.
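The risk integral concept mentioned above is conventionally written as follows. This is the standard formulation used with the ASCE/SEI 7-10 risk-targeted provisions, given here for orientation rather than quoted from the paper:

```latex
% Annual collapse frequency: convolve the collapse fragility with the hazard curve
\lambda_c \;=\; \int_0^{\infty} P\!\left(C \mid S_a = x\right)\,
    \left|\frac{\mathrm{d}\lambda_{S_a}(x)}{\mathrm{d}x}\right|\,\mathrm{d}x,
\qquad
P_{50} \;=\; 1 - e^{-50\,\lambda_c} \;\le\; 1\%
```

Here $P(C \mid S_a = x)$ is the collapse fragility, $\lambda_{S_a}(x)$ the seismic hazard curve, and $P_{50}$ the 50-year collapse probability checked against the 1%-in-50-years target the abstract cites.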

Annual Loss Probability Estimation of Steel Moment-Resisting Frames(SMRFs) using Seismic Fragility Analysis (지진취약도를 통한 철골모멘트골조의 연간 손실 평가)

  • Jun, Saemee; Shin, Dong-Hyeon; Kim, Hyung-Joon
    • Journal of the Computational Structural Engineering Institute of Korea, v.27 no.6, pp.517-524, 2014
  • The ultimate goal of seismic design is to reduce the probable losses or damages that occur during an expected earthquake event. To achieve this goal, this study presents a procedure that estimates the annual loss probability of a structure damaged by strong ground motion. First, a probabilistic seismic performance assessment is performed using seismic fragility analyses, which are expressed as cumulative distribution functions of the probability of exceeding each structural damage state. A seismic hazard curve is then derived from the annual frequency of exceedance for each ground motion intensity. The annual loss probability function combines the seismic fragility analysis results with the seismic hazard curve. In this paper, annual loss probabilities are estimated from the structural fragility curves of steel moment-resisting frames (SMRFs) in the San Francisco Bay Area, USA, and are compared with loss estimates obtained from the HAZUS methodology. The comparison shows that the seismic losses of the SMRFs calculated with the HAZUS method are conservatively estimated. The procedure presented in this study could be effectively used in future studies on structural seismic performance assessment and annual loss probability estimation.
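The combination of fragility and hazard curves described above can be sketched numerically. The lognormal fragility shape and the discretized convolution below are the standard textbook forms, not code from the paper; the median, dispersion, and hazard values in the usage are purely illustrative.

```python
import math

def lognormal_cdf(x, median, beta):
    """Fragility curve: P(damage state exceeded | IM = x), lognormal form
    with median capacity `median` and log-standard deviation `beta`."""
    return 0.5 * (1.0 + math.erf(math.log(x / median) / (beta * math.sqrt(2.0))))

def annual_damage_frequency(ims, hazard, median, beta):
    """Discretized risk integral: sum fragility * |d(hazard)| over IM bins.
    ims:    ground-motion intensities, ascending
    hazard: annual exceedance frequencies at those intensities, descending"""
    total = 0.0
    for i in range(len(ims) - 1):
        im_mid = 0.5 * (ims[i] + ims[i + 1])
        d_lambda = hazard[i] - hazard[i + 1]   # frequency falling in this bin
        total += lognormal_cdf(im_mid, median, beta) * d_lambda
    return total
```

With a finer IM grid this sum converges to the risk integral; an annual loss estimate then weights each damage-state frequency by its loss ratio.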

Design and Evaluation of a Fuzzy Logic based Multi-hop Broadcast Algorithm for IoT Applications (IoT 응용을 위한 퍼지 논리 기반 멀티홉 방송 알고리즘의 설계 및 평가)

  • Bae, Ihn-han; Kim, Chil-hwa; Noh, Heung-tae
    • Journal of Internet Computing and Services, v.17 no.6, pp.17-23, 2016
  • In a future network such as the Internet of Things (IoT), the number of computing devices is expected to grow exponentially, and each thing communicates with the others and acquires information by itself. Due to the growing interest in IoT applications, broadcasting in opportunistic ad-hoc networks such as Machine-to-Machine (M2M) is a very important transmission strategy that allows fast data dissemination. In distributed networks for IoT, the energy efficiency of the nodes is a key factor in network performance. In this paper, we propose a fuzzy logic based probabilistic multi-hop broadcast (FPMCAST) algorithm, which probabilistically disseminates data according to the remaining energy rate, the replication density rate of the sending node, and the distance rate between the sending and receiving nodes. In the proposed FPMCAST, the inference engine is based on a fuzzy rule base consisting of 27 if-then rules, which maps the input and output parameters to their membership functions. The output of the fuzzy system defines the fuzzy set for the rebroadcasting probability, and defuzzification is used to extract a numeric result from that fuzzy set; here, the Center of Gravity (COG) method is used. The performance of FPMCAST is then evaluated through a simulation study. The simulations demonstrate that the proposed FPMCAST algorithm significantly outperforms the flooding and gossiping algorithms. In particular, FPMCAST achieves a longer network lifetime because the residual energy of each node is consumed evenly.
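The final step of the pipeline above, turning the output fuzzy set into a crisp rebroadcast probability, can be sketched as follows. The triangular membership shape and the discretized centroid are the generic textbook forms of COG defuzzification, not the paper's actual membership functions or rule base.

```python
def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def cog_defuzzify(xs, mus):
    """Center of Gravity (COG) defuzzification: the crisp output (the
    rebroadcast probability in FPMCAST) is the centroid of the aggregated
    output fuzzy set, computed over a discretized universe xs."""
    den = sum(mus)
    return sum(x * m for x, m in zip(xs, mus)) / den if den else 0.0
```

In a full fuzzy system, `mus` would be the pointwise maximum of the clipped output sets fired by the 27 rules; the centroid then yields one number in [0, 1] used as the rebroadcast probability.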

Decision Making of Seismic Performance Management for the Aged Road Facilities Based on Road-Network and Fragility Curve (취약도곡선을 이용한 도로망기반 노후도로시설물 내진성능관리 의사결정)

  • Kim, Dong-Joo; Choi, Ji-Hae
    • Journal of the Korea Institute for Structural Maintenance and Inspection, v.25 no.5, pp.94-101, 2021
  • According to the Facility Management System (FMS) operated by the Korea Authority of Land & Infrastructure Safety, the share of aging facilities that have been in use for more than 30 years is expected to increase rapidly, from 13.9% in 2019 to 34.5% in 2029, and to become a social problem. In addition, with the 2017 revision of the "Common Application of Seismic Design Criteria" by the Ministry of Public Administration and Security, it became mandatory to re-evaluate all existing road facilities and, if necessary, to carry out seismic reinforcement so as to minimize earthquake damage and maintain normal road functions. The seismic performance management decision-support technology currently used in practice in Korea determines earthquake-resistance reinforcement priorities based only on qualitative index values for the seismic performance of individual facilities; with this practice, however, normal traffic functions cannot be guaranteed. A new seismic performance management decision-support technology that can provide the various data required for decision making is needed to overcome these shortcomings and to manage seismic performance from a road-network perspective.

Clarifying the Meaning of 'Scientific Explanation' for Science Teaching and Learning (과학 학습지도를 위한 '과학적 설명'의 의미 명료화)

  • Jongwon Park; Hye-Gyoung Yoon; Insun Lee
    • Journal of The Korean Association For Science Education, v.43 no.6, pp.509-520, 2023
  • Scientific explanation is the main goal of scientists' scientific practice, and the science curriculum also includes developing students' abilities to construct scientific explanations as a major goal. Thus, clarifying its meaning is an important issue in the science education community. In this paper, the researchers identified three perspectives on 'scientific explanation' based on the scoping review method (Deductive-Nomological, Probabilistic, and Pragmatic explanation models). We argued that it is important to clarify and distinguish the meanings of 'scientific explanation' from other concepts used in science education, such as 'description', 'prediction', 'hypothesis', and 'argument' based on a review of the literature. It is also pointed out that there is a difference between 'scientific explanation' as a product and 'explaining scientifically' as communication, and several ways to revise achievement standard statements in the science curriculum are suggested, to guide students to construct scientific explanations and to help students to explain scientifically. By adopting the three scientific explanation models, the important factors to be considered were classified and organized, and examples of science learning activities for scientific explanation considering such factors were suggested. It is hoped that the discussion in this study will help establish clearer learning goals in science learning related to scientific explanation and aid the design of more appropriate learning activities accordingly.

A Study on Radiation Exposure using Nominal Risk Coefficients (명목위험계수를 활용한 방사선 피폭에 관한 연구)

  • Joo-Ah Lee; Jong-Gil Kwak; Cheol-Min Jeon
    • Journal of the Korean Society of Radiology, v.18 no.4, pp.383-389, 2024
  • In this study, we analyzed the probability of secondary cancer occurring in the abdomen, a normal organ, due to photoneutron exposure during intensity-modulated radiotherapy for prostate cancer. The radiation treatment plan for prostate cancer prescribed a daily dose of 220 cGy over 35 treatments, for a total of 7700 cGy. The experimental equipment was a TrueBeam STx (Varian, USA) linear accelerator. The energy used in the experiment was 15 MV, and the treatment plan was designed so that the photoneutron dose would be generated within the planning target volume (PTV). The treatment planning system was Eclipse (Varian, Ver. 10.0, USA), and the number of irradiation portals was set to 5 to 9. The irradiation angles were set from 0 to 320° so that 95% of the prescription dose covered the target area, and the number of beamlets per irradiation portal was set to 100. The optically stimulated luminescence dosimeter used in this study to measure the photoneutron dose was made by coating 6LiCO3 onto a device containing aluminum oxide components. The analysis found a minimum of 7.07 to 11 secondary cancer cases per 1,000 people attributable to the photoneutron dose to the abdomen during intensity-modulated radiotherapy. This study examined the risk from secondary radiation doses that may occur during intensity-modulated radiotherapy, and we expect the results to serve as meaningful data on the probabilistic (stochastic) effects of radiation.
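The nominal-risk-coefficient arithmetic behind estimates like "7.07 to 11 cases per 1,000 people" is a simple linear scaling of dose. The sketch below uses the ICRP Publication 103 whole-population nominal risk coefficient for cancer (5.5 × 10⁻² per Sv) as a stand-in; the paper's own coefficients and doses may differ, and the example dose is illustrative only.

```python
def secondary_cancer_cases_per_1000(dose_sv, coeff_per_sv=5.5e-2):
    """Stochastic-effect estimate: expected cancer cases per 1,000
    exposed people = effective dose (Sv) * nominal risk coefficient
    (per Sv) * 1000. The default coefficient is the ICRP 103 cancer
    value for a whole population, used here as an assumption."""
    return dose_sv * coeff_per_sv * 1000
```

For example, an abdominal photoneutron dose of 0.1 Sv over a treatment course would map to about 5.5 expected cases per 1,000 people under this coefficient.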

A Proposal for Simplified Velocity Estimation for Practical Applicability (실무 적용성이 용이한 간편 유속 산정식 제안)

  • Tai-Ho Choo; Jong-Cheol Seo; Hyeon-Gu Choi; Kun-Hak Chun
    • Journal of Wetlands Research, v.25 no.2, pp.75-82, 2023
  • Stream flow-rate measurements serve as important basic data for the development and maintenance of water resources, and many experts are conducting research to make them more accurate. In particular, in Korea monsoon rains and heavy rains are concentrated in summer due to the nature of the climate, so floods occur frequently. It is therefore necessary to measure the flow rate as accurately as possible during a flood in order to predict and prevent flooding. To this end, the U.S. Geological Survey (USGS) introduced the 1-, 2-, and 3-point methods using a flow meter as one way to measure the average flow velocity. However, it is difficult to calculate the average velocity accurately with the existing 1-, 2-, 3-point methods alone. This paper proposes a new, more accurate 1-, 2-, 3-point formula that utilizes the probabilistic entropy concept. This is considered a highly practical study that can supplement the limitations of the existing measurement methods. Coleman data and flume data were used to demonstrate the utility of the proposed formula. For the flume data, the existing USGS 1-point method showed an average error of 7.6% against the measured values, the 2-point method 8.6%, and the 3-point method 8.1%. For the Coleman data, the 1-point method showed an average error rate of 5%, the 2-point method 5.6%, and the 3-point method 5.3%. In contrast, the proposed entropy-based formula reduced the error rate by about 60% relative to the existing method on the flume data, averaging 4.7% for the 1-point method, 5.7% for the 2-point method, and 5.2% for the 3-point method. On the Coleman data it showed average errors of 2.5% for the 1-point method, 3.1% for the 2-point method, and 2.8% for the 3-point method, reducing the error rate by about 50% relative to the existing method. This study calculates the average velocity more accurately than the existing 1-, 2-, 3-point methods, which can be useful in many ways, including future river disaster management, design, and administration.
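In studies of this kind, the "probabilistic entropy concept" usually refers to Chiu's entropy-based velocity profile; the standard relations are given below for orientation and are not quoted from the paper:

```latex
% Chiu's entropy velocity profile
% xi: normalized distance from the channel bed, M: entropy parameter
u(\xi) \;=\; \frac{u_{\max}}{M}\,\ln\!\left[\,1 + \left(e^{M}-1\right)\xi\,\right],
\qquad
\frac{\bar{u}}{u_{\max}} \;=\; \phi(M) \;=\; \frac{e^{M}}{e^{M}-1} - \frac{1}{M}
```

The second relation links the cross-sectional mean velocity $\bar{u}$ to the maximum velocity $u_{\max}$ through a single channel-specific parameter $M$, which is what allows a point measurement to be converted into a mean-velocity estimate more robustly than the fixed-depth weights of the USGS 1-, 2-, 3-point methods.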