• Title/Summary/Keyword: Probabilistic Concept


An Evaluation Method of Water Supply Reliability for a Dam by Firm Yield Analysis (보장 공급량 분석에 의한 댐의 물 공급 안전도 평가기법 연구)

  • Lee, Sang-Ho; Kang, Tae-Uk
    • Journal of Korea Water Resources Association, v.39 no.5 s.166, pp.467-478, 2006
  • Water supply reliability for a dam is defined with a concept of probabilistic reliability. An evaluation procedure for the water supply reliability is shown with an analysis of long-term firm yield reliability. The water supply reliabilities of Soyanggang Dam and Chungju Dam were evaluated. To evaluate the water supply reliability, forty-one sets of monthly runoff series were generated by SAMS-2000. The HEC-5 model was applied in reservoir simulations to compute the firm yield from each monthly time series. The water supply reliability of the firm yield from the design runoff data of Soyanggang Dam is evaluated to be 80.5% for a planning period of 50 years. The water supply reliability of the firm yield from the historic runoff after the dam construction is evaluated to be 53.7%. The firm yield from the design runoff is 1.491 billion $m^3$/yr and the firm yield from the historic runoff is 1.585 billion $m^3$/yr. If the target draft is 1.585 billion $m^3$/yr, an additional 0.094 billion $m^3$ of water could be supplied every year, with an accompanying risk. By a similar procedure, the firm yield from the design runoff of Chungju Dam is evaluated to be 3.377 billion $m^3$/yr and the firm yield from the historic runoff is 2.960 billion $m^3$/yr. If the target draft is 3.377 billion $m^3$/yr, water supply insufficiency occurs for all the sets of generated time series. This may result from overestimation of the spring runoff used for design. The procedure shown can be a more objective method to evaluate the water supply reliability of a dam.
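The reliability estimate described above can be illustrated with a small sketch. The following is a minimal mass-balance surrogate, not the HEC-5 reservoir simulation used in the paper; the storage capacity, initial storage, and synthetic runoff below are assumptions for illustration only. Reliability is computed as the fraction of generated runoff series in which a constant target draft is met without shortage.

```python
import numpy as np

def can_supply(monthly_inflow, target_draft_annual, capacity, initial_storage):
    """Check whether a reservoir can meet a constant monthly draft without shortage
    (simple mass-balance surrogate for a HEC-5 style simulation)."""
    draft = target_draft_annual / 12.0             # constant monthly target draft
    storage = initial_storage
    for inflow in monthly_inflow:
        storage = min(storage + inflow, capacity)  # spill anything above capacity
        if storage < draft:                        # shortage: target draft not met
            return False
        storage -= draft
    return True

def supply_reliability(runoff_series_sets, target_draft_annual, capacity, initial_storage):
    """Fraction of generated runoff series in which the target draft is always met."""
    ok = [can_supply(s, target_draft_annual, capacity, initial_storage)
          for s in runoff_series_sets]
    return float(np.mean(ok))

# Hypothetical example: 41 synthetic 50-year monthly series (billion m^3/month)
rng = np.random.default_rng(0)
series_sets = [rng.gamma(shape=2.0, scale=0.08, size=50 * 12) for _ in range(41)]
print(supply_reliability(series_sets, target_draft_annual=1.585,
                         capacity=2.9, initial_storage=1.5))
```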

An Application of Artificial Intelligence System for Accuracy Improvement in Classification of Remotely Sensed Images (원격탐사 영상의 분류정확도 향상을 위한 인공지능형 시스템의 적용)

  • 양인태; 한성만; 박재국
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.20 no.1, pp.21-31, 2002
  • This study applied Neural Network theory and Fuzzy Set theory to improve the accuracy of classification of remotely sensed images. Remotely sensed data have been used to map land cover. The accuracy depends on a range of factors related to the data set and the methods used. Thus, the accuracy of maps derived from conventional supervised image classification techniques is a function of factors related to the training, allocation, and testing stages of the classification. Conventional image classification techniques assume that all the pixels within the image are pure, that is, that they represent an area of homogeneous cover of a single land-cover class. However, this assumption is often untenable, with pixels of mixed land-cover composition abundant in an image. Mixed pixels are a major problem in land-cover mapping applications. For each pixel, the strengths of class membership derived in the classification may be related to its land-cover composition. In fuzzy classification techniques, the concept of a pixel having a degree of membership in all classes is fundamental. A major problem with fuzzy-set and probabilistic methods is that they are slow and computationally demanding. For analyzing large data sets and rapid processing, alternative techniques are required. One particularly attractive approach is the use of artificial neural networks. These are non-parametric techniques which have been shown to classify data generally as accurately as, or more accurately than, conventional classifiers. An artificial neural network, once trained, may classify data extremely rapidly, as the classification process may be reduced to the solution of a large number of extremely simple calculations which may be performed in parallel.
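As a rough illustration of the soft-membership idea discussed above, the following sketch trains a small multilayer perceptron that outputs per-pixel class-membership strengths; it is not the authors' system, and the band values, labels, and network size are placeholder assumptions (scikit-learn's MLPClassifier is used for brevity).

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_pixels, n_bands, n_classes = 1000, 6, 4       # e.g. 6 spectral bands, 4 land-cover classes
X = rng.random((n_pixels, n_bands))             # stand-in for calibrated pixel spectra
y = rng.integers(0, n_classes, size=n_pixels)   # stand-in for training labels

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X, y)

memberships = clf.predict_proba(X[:5])          # soft "degree of membership" per class
hard_labels = memberships.argmax(axis=1)        # conventional hard allocation
print(memberships.round(2), hard_labels)
```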

Probabilistic Analysis of Independent Storm Events: 1. Construction of Annual Maximum Storm Event Series (독립호우사상의 확률론적 해석: 1. 연최대 호우사상 계열의 작성)

  • Park, Min-Kyu; Yoo, Chul-Sang
    • Journal of the Korean Society of Hazard Mitigation, v.11 no.2, pp.127-136, 2011
  • In this study, annual maximum storm events are proposed to be determined by return periods that consider total rainfall and rainfall intensity together. The rainfall series at Seoul since 1961 is examined, and the results are as follows. First, the bivariate exponential distribution is used to determine annual maximum storm events. The parameter estimated annually provides more suitable results than the parameter estimated over the whole period. The chosen annual maximum storm events show these properties: the events with the biggest total rainfall tend to be selected in the wet years and the events with the biggest rainfall intensity in the wet years. These results satisfy the concept of critical storm events, which produce the most severe runoff according to soil wetness. The average characteristics of the annual maximum storm events are an average rainfall intensity of 32.7 mm/hr for a 1 hr storm duration (total rainfall 32.7 mm), 9.7 mm/hr for a 24 hr storm duration (total rainfall 231.6 mm), and 7.4 mm/hr for a 48 hr storm duration (total rainfall 355.0 mm).
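A minimal sketch of the event-selection idea, under assumptions: each independent storm event of a year is scored by a joint return period computed from Gumbel's bivariate exponential survival function (which may differ from the exact bivariate exponential form used in the paper), and the event with the largest return period is taken as the annual maximum storm event. Parameter values and event data are hypothetical.

```python
import numpy as np

def joint_return_period(depth, intensity, beta_d, beta_i, theta, events_per_year):
    """Return period (years) based on P(D > d, I > i) for a Gumbel-type
    bivariate exponential survival function (illustrative assumption)."""
    survival = np.exp(-(depth / beta_d + intensity / beta_i
                        + theta * depth * intensity / (beta_d * beta_i)))
    return 1.0 / (events_per_year * survival)

# Hypothetical independent storm events of one year: (total rainfall mm, intensity mm/hr)
events = [(32.7, 32.7), (231.6, 9.7), (355.0, 7.4), (80.0, 4.0)]
beta_d, beta_i, theta, events_per_year = 60.0, 8.0, 0.3, 25

T = [joint_return_period(d, i, beta_d, beta_i, theta, events_per_year) for d, i in events]
annual_max_event = events[int(np.argmax(T))]   # event with the largest joint return period
print(annual_max_event, max(T))
```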

Comparative Study of Reliability Design Methods by Application to Donghae Harbor Breakwaters. 2. Sliding of Caissons (동해항 방파제를 대상으로 한 신뢰성 설계법의 비교 연구. 2. 케이슨의 활동)

  • Kim, Seung-Woo; Suh, Kyung-Duck; Oh, Young-Min
    • Journal of Korean Society of Coastal and Ocean Engineers, v.18 no.2, pp.137-146, 2006
  • This is the second of a two-part paper which compares reliability design methods by application to the Donghae Harbor breakwaters. In this paper, Part 2, we deal with sliding of caissons. The failure modes of a vertical breakwater, which consists of a caisson mounted on a rubble mound, include the sliding and overturning of the caisson and the failure of the rubble mound or subsoil, among which the sliding of the caisson occurs most frequently. The traditional deterministic design method for sliding failure of a caisson uses the concept of a safety factor, requiring that the resistance be greater than the load by a certain factor (e.g., 1.2). However, the safety of a structure cannot be quantitatively evaluated with a safety factor. On the other hand, the reliability design method, which has recently been the subject of active research, enables one to quantitatively evaluate the safety of a structure by calculating its probability of failure. Reliability design methods are classified into three categories depending on the level of probabilistic concepts employed, i.e., Level 1, 2, and 3. In this study, we apply the reliability design methods to the sliding of the caissons of the Donghae Harbor breakwaters, which were designed by traditional deterministic methods and were damaged in 1987. Analyses are made for the breakwaters before the damage and after reinforcement. The probability of failure before the damage is much higher than the allowable value, indicating that the breakwater was under-designed. The probability of failure after reinforcement, however, is close to the allowable value, indicating that the breakwater is no longer in danger. The results of the different reliability design methods are in fairly good agreement, confirming that there is not much difference among the methods.
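A minimal Level 3 (Monte Carlo) sketch of the sliding-failure probability: failure occurs when the friction resistance mu*(W - U) falls below the horizontal wave force. The distributions and parameter values are illustrative assumptions, not the design values of the Donghae Harbor breakwaters.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

mu  = rng.normal(0.6, 0.06, n)                 # friction coefficient
W   = rng.normal(6000.0, 300.0, n)             # caisson weight (kN/m)
U   = rng.normal(800.0, 120.0, n)              # uplift force (kN/m)
P_H = rng.lognormal(np.log(2500.0), 0.25, n)   # horizontal wave force (kN/m)

g = mu * (W - U) - P_H                         # performance function: g < 0 means sliding
pf = float(np.mean(g < 0.0))
print(f"estimated probability of sliding failure: {pf:.4f}")
```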

A Proposal for Simplified Velocity Estimation for Practical Applicability (실무 적용성이 용이한 간편 유속 산정식 제안)

  • Tai-Ho Choo; Jong-Cheol Seo; Hyeon-Gu Choi; Kun-Hak Chun
    • Journal of Wetlands Research, v.25 no.2, pp.75-82, 2023
  • Data from measuring the flow of streams are used as important basic data for the development and maintenance of water resources, and many experts are conducting research to make more accurate measurements. In particular, in Korea, monsoon rains and heavy rains are concentrated in summer due to the nature of the climate, so floods occur frequently. Therefore, it is necessary to measure the flow as accurately as possible during a flood to predict and prevent flooding. The U.S. Geological Survey (USGS) introduces the 1-, 2-, and 3-point methods using a current meter as one way to measure the average flow velocity. However, it is difficult to calculate the average velocity accurately with the existing 1-, 2-, and 3-point methods alone. This paper proposes a new, more accurate 1-, 2-, and 3-point formula utilizing a probabilistic entropy concept. This is considered a highly practical study that can supplement the limitations of existing measurement methods. Coleman data and flume data were used to demonstrate the utility of the proposed formula. As a result of the analysis, for the flume data, the existing USGS 1-point method showed an average error of 7.6% compared to the measured values, the 2-point method 8.6%, and the 3-point method 8.1%. For the Coleman data, the 1-point method showed an average error rate of 5%, the 2-point method 5.6%, and the 3-point method 5.3%. On the other hand, the proposed formula using the concept of entropy reduced the error rate by about 60% compared to the existing method, with the flume data averaging 4.7% for the 1-point method, 5.7% for the 2-point method, and 5.2% for the 3-point method. In addition, the Coleman data showed an average error of 2.5% for the 1-point method, 3.1% for the 2-point method, and 2.8% for the 3-point method, reducing the error rate by about 50% compared to the existing method. This study can calculate the average velocity more accurately than the existing 1-, 2-, and 3-point methods, which can be useful in many ways, including future river disaster management, design, and administration.
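For reference, the conventional USGS point methods mentioned above estimate the mean velocity in a vertical from velocities observed at fractions of the depth (measured from the water surface); a minimal sketch follows. The entropy-based correction proposed in the paper is not reproduced here, and the velocity values are hypothetical.

```python
def mean_velocity_1pt(v_06d):
    return v_06d                                  # 1-point method: velocity at 0.6 depth

def mean_velocity_2pt(v_02d, v_08d):
    return (v_02d + v_08d) / 2.0                  # 2-point method: average of 0.2 and 0.8 depth

def mean_velocity_3pt(v_02d, v_06d, v_08d):
    return (v_02d + 2.0 * v_06d + v_08d) / 4.0    # 3-point method: 0.2/0.8 average combined with 0.6

print(mean_velocity_1pt(1.10),
      mean_velocity_2pt(1.25, 0.90),
      mean_velocity_3pt(1.25, 1.10, 0.90))        # m/s, illustrative values
```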

Performance Evaluation of WWTP Based on Reliability Concept (신뢰성에 기초한 하수처리장 운전효율 평가)

  • Lee, Doo-Jin; Sun, Sang-Woon
    • Journal of Korean Society of Environmental Engineers, v.29 no.3, pp.348-356, 2007
  • A statistical and probabilistic method, the most effective one in describing variable behavior, was used in the analysis of the data, and a methodology relating the results to design was developed. Influents and effluents of three treatment plants were analyzed, with the focus on BOD, COD, SS, TN, and TP. The fluctuations of influent BOD, COD, and SS were extremely large, with standard deviations (st.dev) of more than 10 mg/L, but those of TN and TP were small; the st.dev was 6.6 mg/L for TN and 0.6 mg/L for TP. However, the effluent concentrations showed a consistent pattern regardless of the influent fluctuations; the st.dev ranged between 0.28 and 4.48 mg/L. The effluent distributional characteristics were as follows: BOD and COD were distributed normally, but SS, TN, and TP log-normally, i.e., unsymmetric and skewed to the right. The coefficient of reliability (COR), based on the statistics of the data, was introduced to evaluate the process performance and to reflect the process performance in the process design. The coefficient of reliability relates the design value (the goal) to the standards, and it can be used in operating treatment facilities at a certain reliability level and/or in evaluating the reliability of treatment facilities in operation. Each effluent quality was about half of the water quality standard at the 50th percentile, and all treatment plants achieved the water quality standards with 100% probability. It was concluded that the variability of the process performance should be reflected in the design procedure and the standards through analysis based on statistics and probability.
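A minimal sketch of the coefficient-of-reliability idea described above, assuming the common lognormal (Niku-type) formulation rather than the paper's exact expressions: the mean design concentration that meets a standard X_s with a chosen reliability is COR × X_s. The coefficient of variation, standard, and reliability level below are illustrative assumptions.

```python
import math
from statistics import NormalDist

def coefficient_of_reliability(cv, reliability):
    """COR for a lognormally distributed effluent concentration with coefficient of variation cv
    (assumed Niku-type lognormal formulation)."""
    z = NormalDist().inv_cdf(reliability)       # z-score of the target reliability
    s2 = math.log(cv**2 + 1.0)                  # variance of ln(concentration)
    return math.sqrt(cv**2 + 1.0) * math.exp(-z * math.sqrt(s2))

standard = 10.0                                 # effluent standard, mg/L (illustrative)
cor = coefficient_of_reliability(cv=0.4, reliability=0.95)
print(cor, cor * standard)                      # COR and the mean design concentration, mg/L
```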

Features of sample concepts in the probability and statistics chapters of Korean mathematics textbooks of grades 1-12 (초.중.고등학교 확률과 통계 단원에 나타난 표본개념에 대한 분석)

  • Lee, Young-Ha; Shin, Sou-Yeong
    • Journal of Educational Research in Mathematics, v.21 no.4, pp.327-344, 2011
  • This study is the first step toward improving high school students' capability of statistical inference, such as obtaining and interpreting the confidence interval on the population mean that is currently taught in high school. We suggest 5 underlying concepts, 'discretion of contingency and inevitability', 'discretion of induction and deduction', 'likelihood principle', 'variability of a statistic', and 'statistical model', that are necessary to appreciate statistical inference as a reliable arguing tool in spite of its occasional erroneous conclusions. We assume these 5 concepts develop gradually over the school years, and Korean mathematics textbooks of grades 1-12 were analyzed accordingly. The following was found. Regarding the right choice of solving methodology for a given problem, no elementary textbook, and only a few high school textbooks, describe the difference between contingent and inevitable circumstances. Formal definitions of population and sample are not introduced until the high school grades, so that the development of critical thinking about the reliability of inductive reasoning could not be observed. On the contrary, strong emphasis lies on calculation with the sample data without any inference on the population based upon the sample. Instead of the representative properties of a random sample, more emphasis lies on how to obtain a random sample. As a result, the fact that the random variability of the value of a statistic calculated from the sample is inherited from the randomness of the sample could neither be noticed nor explained. No comparative descriptions of statistical inference against mathematical (deductive) reasoning were found. Few explanations of the likelihood principle and its probabilistic applications in accordance with students' cognitive development were found. It was also hard to find any explanation of the random variability of statistics or of the existence of a sampling distribution. It is worthwhile to explain this because, even though obtaining the sampling distribution of a particular statistic, like a sample mean, is a very difficult job, merely noticing its existence may cause a drastic change in the understanding of statistical inference.
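A minimal simulation of the "variability of a statistic" that the abstract emphasizes: repeated random samples from the same population produce different sample means, whose spread forms the sampling distribution the textbooks are said to leave unexplained. The population and sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
population = rng.exponential(scale=10.0, size=100_000)    # a skewed population

sample_means = [rng.choice(population, size=30, replace=False).mean()
                for _ in range(2000)]                     # the statistic varies from sample to sample
print(np.mean(sample_means), np.std(sample_means))        # center and spread of the sampling distribution
```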


A Proposal of a Keyword Extraction System for Detecting Social Issues (사회문제 해결형 기술수요 발굴을 위한 키워드 추출 시스템 제안)

  • Jeong, Dami; Kim, Jaeseok; Kim, Gi-Nam; Heo, Jong-Uk; On, Byung-Won; Kang, Mijung
    • Journal of Intelligence and Information Systems, v.19 no.3, pp.1-23, 2013
  • To discover significant social issues, such as unemployment, economic crisis, and social welfare, that are urgent problems to be solved in modern society, the existing approach is for researchers to collect opinions from professional experts and scholars through online or offline surveys. However, such a method is not always effective. Due to the problem of expense, a large number of survey replies are seldom gathered. In some cases, it is also hard to find professionals dealing with specific social issues. Thus, the sample set is often small and may have some bias. Furthermore, regarding a social issue, several experts may reach totally different conclusions because each expert has a subjective point of view and a different background. In this case, it is considerably hard to figure out what the current social issues are and which social issues are really important. To surmount the shortcomings of the current approach, in this paper we develop a prototype system that semi-automatically detects social issue keywords representing social issues and problems from about 1.3 million news articles issued by about 10 major domestic presses in Korea from June 2009 until July 2012. Our proposed system consists of (1) collecting and extracting texts from the collected news articles, (2) identifying only news articles related to social issues, (3) analyzing the lexical items of Korean sentences, (4) finding a set of topics regarding social keywords over time based on probabilistic topic modeling, (5) matching relevant paragraphs to a given topic, and (6) visualizing social keywords for easy understanding. In particular, we propose a novel matching algorithm relying on generative models. The goal of our proposed matching algorithm is to best match paragraphs to each topic. Technically, using a topic model such as Latent Dirichlet Allocation (LDA), we can obtain a set of topics, each of which has relevant terms and their probability values. In our problem, given a set of text documents (e.g., news articles), LDA produces a set of topic clusters, and then each topic cluster is labeled by human annotators, where each topic label stands for a social keyword. For example, suppose there is a topic (e.g., Topic1 = {(unemployment, 0.4), (layoff, 0.3), (business, 0.3)}) and a human annotator labels Topic1 "Unemployment Problem". In this example, it is non-trivial to understand what happened to the unemployment problem in our society. In other words, looking only at social keywords, we have no idea of the detailed events occurring in our society. To tackle this matter, we develop a matching algorithm that computes the probability value of a paragraph given a topic, relying on (i) the topic terms and (ii) their probability values. For instance, given a set of text documents, we segment each text document into paragraphs. In the meantime, using LDA, we extract a set of topics from the text documents. Based on our matching process, each paragraph is assigned to the topic it best matches. Finally, each topic has several best-matched paragraphs. Furthermore, suppose there are a topic (e.g., Unemployment Problem) and its best-matched paragraph (e.g., "Up to 300 workers lost their jobs in XXX company at Seoul"). In this case, we can grasp the detailed information of the social keyword, such as "300 workers", "unemployment", "XXX company", and "Seoul". In addition, our system visualizes social keywords over time. Therefore, through our matching process and keyword visualization, most researchers will be able to detect social issues easily and quickly. Through this prototype system, we have detected various social issues appearing in our society and also showed the effectiveness of our proposed methods in our experimental results. Note that you can also use our proof-of-concept system at http://dslab.snu.ac.kr/demo.html.
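A minimal sketch of the paragraph-to-topic matching step described above, under assumptions: given a topic's term probabilities (for example, from an LDA model), each paragraph is scored by the average log-probability of its words under the topic and assigned to the best-scoring topic. The topics, smoothing value, and paragraph are illustrative; this is not the paper's exact generative matching algorithm.

```python
import math

# Illustrative topic-term probabilities (in practice taken from an LDA model)
topics = {
    "Unemployment Problem": {"unemployment": 0.4, "layoff": 0.3, "business": 0.3},
    "Welfare": {"welfare": 0.5, "pension": 0.3, "budget": 0.2},
}

def log_prob(paragraph, term_probs, smoothing=1e-6):
    """Average log P(word | topic) over the paragraph's words, with simple smoothing
    for words the topic does not contain."""
    words = paragraph.lower().split()
    return sum(math.log(term_probs.get(w, smoothing)) for w in words) / max(len(words), 1)

def best_topic(paragraph):
    """Assign the paragraph to the topic under which its words are most probable."""
    return max(topics, key=lambda t: log_prob(paragraph, topics[t]))

print(best_topic("Up to 300 workers faced layoff as the business cut jobs"))
```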