• Title/Summary/Keyword: 등확률 (equiprobability)

Search Results: 2,373, Processing Time: 0.028 seconds

Continuous Discovery of Dense Regions in the Database of Moving Objects (이동객체 데이터베이스에서의 밀집 영역 연속 탐색)

  • Lee, Young-Koo;Kim, Won-Young
    • Journal of Internet Computing and Services
    • /
    • v.9 no.4
    • /
    • pp.115-131
    • /
    • 2008
  • Small mobile devices, from cellular phones to PDAs, have become commonplace in everyday life. Discovering dense regions of such mobile devices is a problem of great practical importance; it can be used to monitor the movement of vehicles, the concentration of troops, and so on. In this paper, we propose a novel algorithm for continuously clustering a large set of mobile objects. We assume that a mobile object reports its position only when it strays too far from its expected position, so the received location data may be imprecise. Computing the location of each individual object can be costly, especially when the number of objects is large. To reduce the computational complexity, we first cluster objects in proximity into a group and treat the members of a group as indistinguishable; an individual object is examined only when the inaccuracy causes ambiguity in the final results. We conduct extensive experiments on various data sets and analyze the sensitivity and scalability of our algorithms.

  • PDF
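The grouping idea in the abstract above — clustering nearby objects and only refining when needed — can be illustrated with a much simpler stand-in. The sketch below (not the paper's algorithm; the grid size and threshold are arbitrary assumptions) finds dense regions by counting objects per grid cell:

```python
# Minimal sketch of dense-region discovery by grid counting.
# Hypothetical parameters: cell_size and min_count are illustrative only.
from collections import Counter

def dense_cells(positions, cell_size=10.0, min_count=3):
    """Map each (x, y) position to a grid cell and return the cells
    whose object count reaches min_count."""
    counts = Counter(
        (int(x // cell_size), int(y // cell_size)) for x, y in positions
    )
    return {cell for cell, n in counts.items() if n >= min_count}

positions = [(1, 1), (2, 3), (4, 2), (55, 55), (91, 14)]
print(dense_cells(positions))  # cell (0, 0) holds three objects
```

A real moving-object system would additionally carry each group's position uncertainty and re-examine individual members only when a group straddles a density threshold, as the paper describes.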

Probabilistic Risk Assessment of Coastal Structures using LHS-based Reliability Analysis Method (LHS기반 신뢰성해석 기법을 이용한 해안구조물의 확률론적 위험도평가)

  • Huh, Jung-Won;Jung, Hong-Woo;Ahn, Jin-Hee;An, Sung-Wook
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.19 no.6
    • /
    • pp.72-79
    • /
    • 2015
  • An efficient and practical reliability evaluation method for coastal structures is proposed in this paper. It can evaluate the reliability of real, complicated coastal structures while considering uncertainties in various design parameters, such as wave and current loads; resistance-related design variables, including the Young's modulus and compressive strength of the reinforced concrete; soil parameters; and boundary conditions. The method is developed by integrating Latin Hypercube sampling (LHS), Monte Carlo simulation (MCS), and the finite element method (FEM). LHS-based MCS significantly reduces the computational effort by limiting the number of simulation cycles required for the reliability evaluation. The applicability and efficiency of the proposed method were verified using a caisson-type breakwater structure in a numerical example.
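The core of LHS, as used above to cut down the number of MCS cycles, is stratified sampling: each dimension is split into equal-probability strata and exactly one sample is drawn per stratum. A minimal sketch on the unit hypercube (not tied to the paper's FEM workflow):

```python
# Minimal Latin Hypercube sampling sketch on the unit hypercube.
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """Split each dimension into n_samples equal strata, draw one point
    per stratum, and shuffle the strata independently per dimension."""
    rng = random.Random(seed)
    design = []
    for _ in range(n_dims):
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(col)
        design.append(col)
    return list(zip(*design))  # n_samples points, each with n_dims coords

samples = latin_hypercube(5, 2)
```

In a reliability analysis, each unit-cube coordinate would then be pushed through the inverse CDF of the corresponding random design variable (load, strength, soil parameter) before running the FEM model.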

Performance Based Evaluation of Concrete Material Properties from Climate Change Effect on Temperature and Humidity Curing Conditions (기후변화의 온도와 습도 양생조건에 따른 콘크리트 재료특성의 성능중심평가)

  • Kim, Tae-Kyun;Shin, Jae-Ho;Shin, Dong-Woo;Shim, Hyun-Bo;Kim, Jang-Ho Jay
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.18 no.6
    • /
    • pp.114-122
    • /
    • 2014
  • Currently, global warming arising from the use of fossil fuels such as coal and petroleum has become a serious problem. Moreover, due to global warming, heat waves, heavy snow, heavy rain, and super typhoons are occurring frequently all over the world, and these natural disasters seriously damage or collapse concrete structures and infrastructure. To handle these problems, climate-change-oriented construction technology and codes are needed. Therefore, in this study, the validity of current concrete mixture proportions is evaluated considering temperature and humidity change. Specimens cured under various temperature and humidity conditions were tested to obtain their compressive and split tensile strengths at various curing ages. A performance based evaluation (PBE) method was then used to analyze the satisfaction percentage of concrete cured under each condition. From this probabilistic evaluation of concrete performance, the feasibility and usability of future concrete mix designs can be determined.
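A "satisfaction percentage" in a performance based evaluation can be read as the probability that a specimen meets a target strength. A minimal sketch of that computation (the data and the normality assumption are illustrative, not the paper's):

```python
# Hedged sketch: satisfaction percentage under a fitted normal model.
from statistics import NormalDist

def satisfaction_percentage(strengths, target):
    """Fit a normal distribution to measured strengths (MPa) and return
    the probability that a specimen meets the target strength."""
    dist = NormalDist.from_samples(strengths)
    return 1.0 - dist.cdf(target)

# Hypothetical compressive-strength measurements for one curing condition.
strengths = [32.1, 30.4, 33.8, 29.9, 31.5, 34.2, 30.8, 32.7]
p = satisfaction_percentage(strengths, 30.0)
```

Repeating this per curing condition gives the condition-by-condition satisfaction percentages the study compares.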

Drought Risk Analysis Considering Bivariate Drought Regional Frequency Analysis (이변량 가뭄지역빈도해석에 따른 가뭄위험분석)

  • Yoo, Ji-Young;Park, Jong-Yong;Kwon, Hyun-Han;Kim, Tae-Woong
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2011.05a
    • /
    • pp.52-52
    • /
    • 2011
  • As global warming has accelerated in recent years, meteorological disasters are increasing rapidly worldwide. In particular, precipitation projections that account for changing rainfall patterns suggest that rising greenhouse gas concentrations may produce mutually opposite regional changes in heavy rain, drought, and heavy snow, and in Korea the frequency of extreme precipitation has shown a clear increasing trend since the late 1990s. Various drought studies are currently under way in Korea to prepare for such climate change. Drought is generally defined using various indices depending on the purpose of the analysis; among them, precipitation and streamflow are widely used as indicators of meteorological and hydrological drought. In particular, a precipitation deficit is a primary cause of drought and can be used effectively for its quantitative assessment: a mean (or truncation) level is set, the drought duration, severity, and frequency are defined, and the resulting time series is analyzed to characterize droughts. Since a drought is a bivariate hydrologic event whose main characteristic variables are duration and severity, probabilistic and statistical analysis methods that reflect both are essential. In this study, therefore, at-site drought frequency analysis was performed considering drought duration and severity jointly, and, taking regional drought characteristics into account, the drought risk of the largest historical drought event was computed for each rainfall gauging station. The probability that the historical maximum drought recurs within the next consecutive 10, 50, 100, and 150 years was then mapped for each station to analyze regional drought risk and identify drought-prone areas. These results can serve as objective evidence for prioritizing drought-vulnerable regions within Korea and for preparing region-specific drought countermeasures as part of long-term national drought management.

  • PDF
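The bivariate view above — treating a drought as a (duration, severity) pair — can be illustrated with a simple empirical joint exceedance probability. This is only a sketch with made-up event data; the paper itself uses full bivariate frequency analysis, not empirical counting:

```python
# Hedged sketch: empirical joint exceedance for bivariate drought events.
def joint_exceedance(durations, severities, d0, s0):
    """Empirical probability that a drought event exceeds both a
    duration threshold d0 (months) and a severity threshold s0."""
    events = list(zip(durations, severities))
    hits = sum(1 for d, s in events if d >= d0 and s >= s0)
    return hits / len(events)

# Hypothetical drought events extracted from a precipitation series.
durations = [2, 5, 3, 8, 6]
severities = [1.1, 2.4, 0.9, 3.0, 2.1]
p_joint = joint_exceedance(durations, severities, 5, 2.0)
```

A frequency analysis would replace the empirical count with fitted marginal distributions and a dependence model so that return periods beyond the observed record can be estimated.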

Corporate Credit Rating based on Bankruptcy Probability Using AdaBoost Algorithm-based Support Vector Machine (AdaBoost 알고리즘기반 SVM을 이용한 부실 확률분포 기반의 기업신용평가)

  • Shin, Taek-Soo;Hong, Tae-Ho
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.3
    • /
    • pp.25-41
    • /
    • 2011
  • Recently, support vector machines (SVMs) have been recognized as competitive with other data mining techniques for solving pattern recognition and classification problems. In particular, many studies have shown them to be more powerful than traditional artificial neural networks (ANNs) (Amendolia et al., 2003; Huang et al., 2004; Huang et al., 2005; Tay and Cao, 2001; Min and Lee, 2005; Shin et al., 2005; Kim, 2003). Binary or multi-class classification decisions are especially cost-sensitive in financial problems such as credit rating: if credit ratings are misclassified, investors or financial decision makers may suffer severe economic losses. It is therefore necessary to convert the outputs of the classifier into well-calibrated posterior probabilities, so that multi-class credit ratings can be assigned according to bankruptcy probabilities. However, SVMs do not provide such probabilities by themselves, so a method is required to create them (Platt, 1999; Drish, 2001). This paper applies AdaBoost algorithm-based SVMs to bankruptcy prediction, formulated as a binary classification problem for IT companies in Korea, and then performs multi-class credit rating of the companies by shaping the posterior bankruptcy probabilities, extracted from the loss functions of the SVMs, into a normal distribution. The proposed approach also shows that misclassification can be minimized by adjusting the credit grade interval ranges, on the condition that each credit grade carries its own credit risk, i.e., bankruptcy probability.
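The calibration step cited above (Platt, 1999) fits a sigmoid mapping from raw SVM decision values to posterior probabilities. A plain re-implementation of that idea via gradient descent — a sketch, not the paper's exact procedure or Platt's regularized target values:

```python
# Hedged sketch of Platt-style sigmoid calibration of SVM scores.
import math

def platt_scaling(scores, labels, lr=0.1, steps=2000):
    """Fit P(y=1 | s) = 1 / (1 + exp(A*s + B)) by gradient descent on
    the negative log-likelihood and return the calibrated mapping."""
    a, b = 0.0, 0.0
    for _ in range(steps):
        ga = gb = 0.0
        for s, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(a * s + b))
            ga += (p - y) * (-s)    # d(-loglik)/dA
            gb += (p - y) * (-1.0)  # d(-loglik)/dB
        a -= lr * ga / len(scores)
        b -= lr * gb / len(scores)
    return lambda s: 1.0 / (1.0 + math.exp(a * s + b))

# Hypothetical SVM decision values with their true classes.
to_prob = platt_scaling([-2.0, -1.5, -1.0, 1.0, 1.5, 2.0],
                        [0, 0, 0, 1, 1, 1])
```

The calibrated probabilities could then be binned into rating grades, which is where the paper's grade-interval adjustment comes in.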

Mechanical Properties of Concrete with Statistical Variations (통계적 분산을 고려한 콘크리트의 역학적 특성)

  • Kim, Jee-Sang;Shin, Jeong-Ho
    • Journal of the Korea Concrete Institute
    • /
    • v.21 no.6
    • /
    • pp.789-796
    • /
    • 2009
  • The randomness in the strength of an RC member is caused mainly by the variability of the mechanical properties of concrete and steel, the dimensions of concrete cross sections, and the placement of reinforcing bars. Among these, the randomness and uncertainty of the mechanical properties of concrete, such as compressive strength, tensile strength, and elastic modulus, have the most significant influence and show relatively large statistical variations. In Korea, little effort has been made to construct domestic statistical models for the mechanical properties of concrete and steel, so foreign data have been used until now. In this paper, the variability of the compressive strength, tensile strength, and elastic modulus of normal-weight structural concrete with various specified design compressive strength levels is examined based on data obtained from a number of published and unpublished sources in Korea and on additional laboratory tests done by the authors. Gaussian distributions are proposed as the probabilistic models for the compressive and tensile strength of normal-weight concrete. The relationships between compressive and splitting tensile strength and between compressive strength and elastic modulus in the current KCI Code are also verified, and new ones are suggested based on the local data.
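One direct use of the Gaussian strength model proposed above is computing the probability that concrete falls below its specified design strength. A minimal sketch with hypothetical test data (the numbers are illustrative, not from the paper):

```python
# Hedged sketch: understrength probability under a Gaussian strength model.
from statistics import NormalDist

def understrength_probability(samples, f_ck):
    """Model compressive strength as Gaussian, as the study proposes,
    and return P(strength < f_ck) for a specified design strength f_ck."""
    return NormalDist.from_samples(samples).cdf(f_ck)

# Hypothetical cylinder test results (MPa) for a 24 MPa design mix.
strengths = [27.2, 29.5, 31.0, 28.4, 30.1, 29.0, 32.3, 28.8]
p_under = understrength_probability(strengths, 24.0)
```

The fitted mean and coefficient of variation are exactly the statistical parameters such a study would tabulate per design strength level.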

Robust determination of control parameters in K chart with respect to data structures (데이터 구조에 강건한 K 관리도의 관리 모수 결정)

  • Park, Ingkeun;Lee, Sungim
    • Journal of the Korean Data and Information Science Society
    • /
    • v.26 no.6
    • /
    • pp.1353-1366
    • /
    • 2015
  • The Shewhart control chart for evaluating process stability is widely used in various fields, but it relies on strict distributional assumptions. In real-life problems, these assumptions are often violated because many quality characteristics follow non-normal distributions, and the problem is even more serious for multivariate quality characteristics. To overcome this, many researchers have studied non-parametric control charts. Recently, the SVDD (Support Vector Data Description) control chart based on the RBF (Radial Basis Function) kernel, called the K-chart, which describes the data region of an in-control process, has come into use in various fields. To apply the K-chart, however, the kernel parameter and related settings must be determined in advance. Many researchers use a grid search to optimize these parameters, but it has problems such as choosing the search range and high computational cost and time. In this paper, we study, via simulation, how the efficient parameter regions change as the data structure varies, propose a new method for determining the parameters so that it can be used easily, and discuss a robust choice of parameters for various data structures. In addition, we apply the method to a real example and evaluate its performance.
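The monitoring statistic behind a K-chart is a kernel-space distance from new observations to the in-control data description. The sketch below uses the unweighted kernel centroid instead of the true SVDD center (which solves a quadratic program), so it is a simplified stand-in; the RBF parameter `gamma` is exactly the kind of parameter the paper tunes:

```python
# Simplified K-chart statistic: RBF kernel distance to the data centroid.
import math

def rbf(x, y, gamma):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def kernel_distance2(z, data, gamma):
    """Squared distance from z to the kernel-space mean of the
    in-control data; large values signal an out-of-control point."""
    n = len(data)
    term1 = 1.0  # rbf(z, z) is always 1 for the RBF kernel
    term2 = 2.0 / n * sum(rbf(z, x, gamma) for x in data)
    term3 = sum(rbf(a, b, gamma) for a in data for b in data) / n ** 2
    return term1 - term2 + term3

in_control = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (-0.1, 0.0)]
d_in = kernel_distance2((0.0, 0.0), in_control, gamma=1.0)
d_out = kernel_distance2((5.0, 5.0), in_control, gamma=1.0)
```

Choosing `gamma` badly flattens or overfits this distance surface, which is why the control parameters must be set robustly for the data structure at hand.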

Development of Statistical Downscaling Model Using Nonstationary Markov Chain (비정상성 Markov Chain Model을 이용한 통계학적 Downscaling 기법 개발)

  • Kwon, Hyun-Han;Kim, Byung-Sik
    • Journal of Korea Water Resources Association
    • /
    • v.42 no.3
    • /
    • pp.213-225
    • /
    • 2009
  • A stationary Markov chain model is a stochastic process with the Markov property: given the present state, future states are independent of the past states. The Markov chain model has been widely used as a main tool in water resources design. A key assumption of the stationary Markov model is that its statistical properties remain the same for all times; hence, the stationary Markov chain model cannot represent changes in the mean or variance. The primary objective of this study is therefore to develop a model that can make use of exogenous variables. Regression-based link functions are employed to dynamically update the model parameters given the exogenous variables, and the model parameters are estimated by canonical correlation analysis. The proposed model is applied to the daily rainfall series at the Seoul station, covering 46 years of data from 1961 to 2006. The model is capable of reproducing daily and seasonal characteristics simultaneously. Therefore, the proposed model can be used as a short- or mid-term prediction tool if elaborate GCM forecasts are used as predictors, and the nonstationary Markov chain model can be applied to climate change studies if GCM-based climate change scenarios are provided as inputs.
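The nonstationary idea above — transition probabilities driven by an exogenous variable through a link function — can be sketched for wet/dry occurrence with a logistic link. This is an illustrative toy, not the paper's model (which estimates parameters by canonical correlation analysis); the coefficients below are invented:

```python
# Hedged sketch: two-state nonstationary Markov chain for rain occurrence.
import math
import random

def transition_prob(beta0, beta1, covariate):
    """Logistic link mapping a covariate (e.g. a GCM predictor) to the
    probability of a wet day."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * covariate)))

def simulate_occurrence(days, covariates, params, seed=1):
    """Simulate wet (1) / dry (0) days; the transition probabilities
    depend on the previous state AND the covariate, so the chain is
    nonstationary whenever the covariate changes."""
    rng = random.Random(seed)
    state, series = 0, []
    for t in range(days):
        b0, b1 = params[state]  # coefficients keyed by previous state
        p_wet = transition_prob(b0, b1, covariates[t])
        state = 1 if rng.random() < p_wet else 0
        series.append(state)
    return series

# Invented coefficients: dry days persist; a large covariate favors rain.
params = {0: (-2.0, 1.0), 1: (0.0, 1.0)}
dry_epoch = simulate_occurrence(2000, [0.0] * 2000, params)
wet_epoch = simulate_occurrence(2000, [3.0] * 2000, params)
```

With a constant covariate the chain reduces to the stationary case, which is exactly the limitation the study set out to remove.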

Estimating the compound risk integrated hydrological / hydraulic / geotechnical uncertainty of levee systems (수문·수리학적 / 지반공학적 불확실성을 고려한 제방의 복합위험도 산정)

  • Nam, Myeong Jun;Lee, Jae Young;Lee, Cheol Woo;Kim, Ki Young
    • Journal of Korea Water Resources Association
    • /
    • v.50 no.4
    • /
    • pp.277-288
    • /
    • 2017
  • A probabilistic risk analysis of a levee system estimates the overall level of flood risk associated with the system under a series of possible flood scenarios. It requires uncertainty analysis of all the risk components, including the hydrological, hydraulic, and geotechnical parts, computed here by employing MCMC (Markov Chain Monte Carlo), MCS (Monte Carlo Simulation), and FOSM (First-Order Second Moment), and it presents a joint probability that combines the component probabilities. The methodology was applied to a 12.5 km reach of the Nakdong river, from upstream to downstream of the Gangjeong-Goryeong weir, including 6 levee reaches. Overtopping risks were estimated by computing the flood stage corresponding to the high quantile (97.5%) of the 100- and 200-year design floods causing levee overflow. Geotechnical risks were evaluated by considering seepage, slope stability, and rapid drawdown along the levee reaches without overflow. A probability-based compound risk will contribute to improving both the safety and the economy of levee design, and the index is expected to be used for riverside structure design in the future.
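The "joint probability" combining the component risks above can be sketched with the standard series-system formula. Independence between failure modes is a simplifying assumption here; the paper builds its joint probability from MCMC/MCS/FOSM results rather than from this closed form:

```python
# Hedged sketch: compound failure probability of a levee reach,
# assuming independent hydraulic and geotechnical failure modes.
def compound_risk(p_overtopping, p_geotechnical):
    """Probability that at least one failure mode occurs:
    1 - P(no overtopping) * P(no geotechnical failure)."""
    return 1.0 - (1.0 - p_overtopping) * (1.0 - p_geotechnical)

# Hypothetical per-scenario component probabilities for one reach.
p = compound_risk(0.01, 0.02)
```

For a multi-reach system, the same product form extends over the reaches, which is how a reach-level index aggregates into system-level flood risk.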

Investigations on data-driven stochastic optimal control and approximate-inference-based reinforcement learning methods (데이터 기반 확률론적 최적제어와 근사적 추론 기반 강화 학습 방법론에 관한 고찰)

  • Park, Jooyoung;Ji, Seunghyun;Sung, Keehoon;Heo, Seongman;Park, Kyungwook
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.25 no.4
    • /
    • pp.319-326
    • /
    • 2015
  • Recently, in the fields of stochastic optimal control (SOC) and reinforcement learning (RL), there has been a great deal of research effort on the problem of finding data-based sub-optimal control policies. The conventional theory of finding optimal controllers via value-function-based dynamic programming was established for solving stochastic optimal control problems with a solid theoretical background; however, it can be applied successfully only to extremely simple cases. Hence, the modern data-based approach, which seeks sub-optimal solutions using relevant data such as state-transition and reward signals instead of rigorous mathematical analysis, is particularly attractive for practical applications. In this paper, we consider a couple of methods that combine modern SOC strategies and approximate inference with machine-learning-based data treatment methods. We also apply the resulting methods to a variety of application domains, including financial engineering, and observe their performance.
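The data-based approach described above — learning a sub-optimal policy from state-transition and reward signals rather than a model — is exemplified in its simplest form by tabular Q-learning. This is a generic textbook sketch, not one of the paper's approximate-inference methods, and the toy MDP below is invented:

```python
# Generic sketch: tabular Q-learning from logged (s, a, r, s') data.
def q_learning(transitions, n_states, n_actions, gamma=0.9, lr=0.5, sweeps=200):
    """Repeatedly sweep the logged transitions, nudging each Q(s, a)
    toward the bootstrapped target r + gamma * max_a' Q(s', a')."""
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(sweeps):
        for s, a, r, s2 in transitions:
            target = r + gamma * max(q[s2])
            q[s][a] += lr * (target - q[s][a])
    return q

# Toy 2-state MDP: action 1 always pays reward 1 and leads to state 1.
logged = [(0, 0, 0.0, 0), (0, 1, 1.0, 1), (1, 0, 0.0, 0), (1, 1, 1.0, 1)]
q = q_learning(logged, n_states=2, n_actions=2)
```

The optimal value here is 1 / (1 - 0.9) = 10 for always taking action 1; the data-driven estimate converges to it without ever writing down the transition model, which is the appeal the abstract points at.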