• Title/Summary/Keyword: sampling points

Search Results: 643

Influence Analysis of Sampling Points on Accuracy of Storage Reliability Estimation for One-shot Systems (원샷 시스템의 저장 신뢰성 추정 정확성에 대한 샘플링 시점의 영향 분석)

  • Chung, Yong H.;Oh, Bong S.;Lee, Hong C.;Park, Hee N.;Jang, Joong S.;Park, Sang C.
    • Journal of Applied Reliability
    • /
    • v.16 no.1
    • /
    • pp.32-40
    • /
    • 2016
  • Purpose: The purpose of this study is to analyze the effect of sampling points on the accuracy of storage reliability estimation for one-shot systems, assuming a Weibull distribution for storage reliability, and to propose a method for choosing sampling points that increases the accuracy of the estimate. Methods: The Weibull distribution was divided into three sections to confirm whether its parameters can be estimated from samples taken in only one section. Quantal response data were generated as failure data, and parameter estimation was performed on these data. Results: Reducing the sampling-point interval in section 1 increases the accuracy of reliability estimation even when only section 1 is sampled. Even when the total number of sampling points is reduced, shortening the sampling interval in section 1 improves the accuracy of the estimate. Conclusion: Estimation accuracy is usually improved by increasing the number of samples and sampling points, but this is difficult to apply to one-shot systems because their tests are expensive. We therefore propose improving the accuracy of storage reliability estimation for one-shot systems by adjusting the sampling points; by dividing the distribution into sections, the total number of sampling points can be reduced.
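
The estimation procedure this abstract describes can be sketched numerically. The following is a minimal illustration, not the authors' code: the Weibull parameters, sampling times, and sample sizes are hypothetical, and the fit is a simple grid-search maximum-likelihood estimate over binomial (quantal response) counts.

```python
import numpy as np

# Hypothetical one-shot storage test: at each sampling time, n items are
# fired and only pass/fail is observed (quantal response data).
rng = np.random.default_rng(0)

beta_true, eta_true = 2.0, 10.0            # assumed Weibull shape / scale
times = np.array([2.0, 4.0, 6.0, 8.0])     # sampling points (e.g. years)
n_per_point = 50                           # items tested at each point

# True failure probability F(t) = 1 - exp(-(t/eta)^beta) -> binomial counts.
F = 1.0 - np.exp(-(times / eta_true) ** beta_true)
fails = rng.binomial(n_per_point, F)

# Maximum-likelihood fit by grid search over (beta, eta).
best, best_ll = None, -np.inf
for b in np.linspace(0.5, 4.0, 120):
    for e in np.linspace(5.0, 20.0, 120):
        p = np.clip(1.0 - np.exp(-(times / e) ** b), 1e-9, 1 - 1e-9)
        ll = np.sum(fails * np.log(p) + (n_per_point - fails) * np.log(1 - p))
        if ll > best_ll:
            best_ll, best = ll, (b, e)

beta_hat, eta_hat = best
```

With denser sampling points early in life (the paper's "section 1"), the binomial counts pin down the shape parameter more tightly; the grid search here is only a stand-in for a proper optimizer.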

A Study on approximating subdivision method considering extraordinary points (특이점의 분할을 고려한 근사 서브디비전 방법에 대한 연구)

  • 서흥석;조맹효
    • Proceedings of the Computational Structural Engineering Institute Conference
    • /
    • 2003.04a
    • /
    • pp.253-260
    • /
    • 2003
  • In computer-aided geometric design (CAGD), subdivision surfaces are frequently employed to construct free-form surfaces. In the present study, the Loop scheme and the Catmull-Clark scheme are applied to generate smooth surfaces. To be consistent with the limit points of the target surface, the initial sampling points are properly rearranged. The pointwise position and curvature errors over the sequence of subdivision steps are evaluated for both the Loop and Catmull-Clark schemes. In particular, a general subdivision method that accounts for extraordinary points is implemented to generate free-form surfaces from arbitrary sampling-point information.
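
Loop and Catmull-Clark operate on triangle and quad meshes, which is more machinery than a short sketch allows. As a hypothetical one-dimensional analogue of the same refine-toward-a-limit idea, Chaikin's corner-cutting scheme subdivides a control polygon toward a smooth limit curve:

```python
import numpy as np

def chaikin(points, iterations=3):
    """Corner-cutting subdivision of an open control polygon: each edge is
    replaced by points at its 1/4 and 3/4 marks, converging to a smooth
    limit curve as iterations increase."""
    pts = np.asarray(points, dtype=float)
    for _ in range(iterations):
        q = 0.75 * pts[:-1] + 0.25 * pts[1:]   # 1/4 point of each edge
        r = 0.25 * pts[:-1] + 0.75 * pts[1:]   # 3/4 point of each edge
        new_pts = np.empty((2 * len(q), pts.shape[1]))
        new_pts[0::2] = q
        new_pts[1::2] = r
        pts = new_pts
    return pts

# Unit-square control polygon; the refined polygon stays in its convex hull.
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
refined = chaikin(square, iterations=4)
```

Each pass doubles the resolution near the control polygon's interior; surface schemes such as Loop and Catmull-Clark generalize this averaging to mesh neighborhoods, with special rules at extraordinary points.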

Comparison of Simple Random Sampling and Two-stage P.P.S. Sampling Methods for Timber Volume Estimation (임목재적(林木材積) 산정(算定)을 위(爲)한 Simple Random Sampling과 Two-stage P.P.S. Sampling 방법(方法)의 비교(比較))

  • Kim, Je Su;Horning, Ned
    • Journal of Korean Society of Forest Science
    • /
    • v.65 no.1
    • /
    • pp.68-73
    • /
    • 1984
  • The purpose of this paper was to compare the efficiencies of two sampling techniques, simple random sampling and two-stage P.P.S. (probability proportional to size) sampling, in estimating the volume of the mature coniferous stands near Salzburg, Austria. Using black-and-white infrared photographs at a scale of 1:10,000, four classes were considered: non-forest, young stands less than 40 years old, mature beech, and mature coniferous stands. After the classification, a field survey was carried out using a relascope with a BAF (basal area factor) of 4. For the simple random sampling, 99 points were sampled, while for the P.P.S. sampling, 75 points were sampled in the mature coniferous stands. The following results were obtained. 1) The mean standing coniferous volume estimate was 422.0 m³/ha for the simple random sampling and 433.5 m³/ha for the P.P.S. sampling method; the difference was not statistically significant. 2) The number of sampling points required for a 5% sampling error was 170 for the two-stage P.P.S. sampling but 237 for the simple random sampling. 3) The two-stage P.P.S. method reduced field survey time by 17% compared to the simple random sampling.
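
The efficiency comparison in this abstract can be reproduced in spirit on synthetic data (not the paper's Salzburg measurements): when an auxiliary size measure correlates with plot volume, P.P.S. sampling yields a lower estimation error than simple random sampling at the same sample size. All population parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic population: plot volumes strongly correlated with a
# photo-interpreted "size" measure (e.g. basal area).
N = 500
size = rng.uniform(1.0, 5.0, N)                  # auxiliary size measure
volume = 100.0 * size + rng.normal(0, 30, N)     # true plot volumes
total = volume.sum()

n, reps = 75, 2000
p = size / size.sum()                            # P.P.S. draw probabilities
err_srs, err_pps = [], []
for _ in range(reps):
    # Simple random sampling: expand the sample mean to a total.
    idx = rng.choice(N, n, replace=False)
    err_srs.append(N * volume[idx].mean() - total)
    # P.P.S. with replacement: Hansen-Hurwitz estimator mean(y_i / p_i).
    idx = rng.choice(N, n, replace=True, p=p)
    err_pps.append(np.mean(volume[idx] / p[idx]) - total)

rmse_srs = float(np.sqrt(np.mean(np.square(err_srs))))
rmse_pps = float(np.sqrt(np.mean(np.square(err_pps))))
```

Because volume/size is nearly constant here, the Hansen-Hurwitz estimator has far less sampling variance, mirroring the paper's finding that P.P.S. needs fewer points for the same sampling error.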

A Hypothesis Test under the Generalized Sampling Plan (일반화된 샘플링 계획에서의 가설 검정)

  • 김명수;오근태
    • Journal of Korean Society for Quality Management
    • /
    • v.26 no.4
    • /
    • pp.79-87
    • /
    • 1998
  • This paper considers the problem of testing a one-sided hypothesis under a generalized sampling plan, which is defined by a sequence of independent Bernoulli trials. A lexicographic order is defined on the boundary points of the sampling plan. It is shown that the family of probability mass functions defined on the boundary points has a monotone likelihood ratio, and that the resulting test is uniformly most powerful.
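
The abstract's conclusion can be illustrated in its simplest special case, a fixed-size binomial sampling plan (the generalized plan and its lexicographic boundary order are beyond a short sketch). Because the binomial family has a monotone likelihood ratio in the success count, the UMP one-sided test simply rejects when the count exceeds a cutoff; the numbers below are an arbitrary example.

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def ump_cutoff(n, p0, alpha):
    """Smallest k with P(X >= k | p0) <= alpha: the (conservative,
    non-randomized) UMP rejection cutoff for H0: p <= p0 vs H1: p > p0."""
    for k in range(n + 2):
        if binom_sf(k, n, p0) <= alpha:
            return k
    return n + 1

n, p0, alpha = 20, 0.1, 0.05
k_star = ump_cutoff(n, p0, alpha)   # reject H0 when the count X >= k_star
```

Monotone likelihood ratio guarantees that this single-threshold rule is simultaneously most powerful against every alternative p > p0, which is what "uniformly most powerful" asserts in the boundary-point setting of the paper.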

Studies on the Heavy Metal Contamination in the Sediment of the Han River (한강으로 유입된 저질중의 중금속오염도 조사)

  • 신정식;박상현
    • Journal of environmental and Sanitary engineering
    • /
    • v.6 no.1
    • /
    • pp.83-93
    • /
    • 1991
  • For a survey of water pollution, several heavy metals were analyzed in the sediment of the Han River from March 20 to April 22, 1989. The results were as follows: 1. The concentration ranges of cadmium, lead, copper, zinc, and manganese found in the sediments of the Han River were 0.32~2.41 μg/g, 15.80~129.64 μg/g, 13.82~372.36 μg/g, 58.40~925.40 μg/g, and 271.50~668.30 μg/g, respectively. 2. Among the sampling points, the sediment at the Jung Rang Chon inflow had the highest lead, copper, and zinc contents; An Yang Chon had the highest cadmium content; and Wang Sook Chon had the highest manganese content. 3. Across all sampling points, the general order of heavy metal contamination was zinc highest, followed by manganese, copper, lead, and cadmium. 4. Higher amounts of heavy metals were found in the finer sediment particles. 5. The cadmium and lead contents of the Han River water were below the environmental standard.

A Study on the Comparison of Approximation Models for Multi-Objective Design Optimization of a Tire (타이어 다목적 최적설계를 위한 근사모델 생성에 관한 연구)

  • Song, Byoung-Cheol;Kim, Seong-Rae;Kang, Yong-Gu;Han, Min-Hyeon
    • Journal of the Korean Society of Manufacturing Process Engineers
    • /
    • v.10 no.5
    • /
    • pp.117-124
    • /
    • 2011
  • A tire's performance plays an important role in a vehicle's overall performance, so tire makers carry out extensive research to improve it and apply various optimization methods to meet multiple objectives. Recently, tire makers have performed shape optimization using approximation models, which are surrogate models obtained by statistical methods. In general, sampling points are added during the optimization process to obtain more reliable approximation models, but the more sampling points are adopted, the more testing time is needed; it is therefore important to select the approximation model and a proper number of sampling points to balance reliability against time. In this research, two comparisons were studied for approximation-model construction. First, we compare RSM and kriging, which are a curve-fitting method and an interpolation method, respectively. Second, we construct approximation models using three different numbers of sampling points. We then recommend a proper approximation model and orthogonal array for adoption in tire design optimization.
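
The RSM-versus-kriging contrast in this abstract can be seen on a one-dimensional toy function (everything below is an illustrative assumption; the paper fits its models to tire simulations): a quadratic response surface smooths through the sampling points, while a kriging-style interpolator passes through them exactly.

```python
import numpy as np

# Toy response: not a tire quantity, just a smooth non-quadratic function.
def f(x):
    return np.sin(3 * x) + 0.5 * x

x = np.linspace(0.0, 2.0, 8)          # sampling points
y = f(x)

# RSM: quadratic least-squares fit (curve fitting).
coef = np.polyfit(x, y, 2)
rsm = np.polyval(coef, x)

# Kriging-style model: Gaussian-kernel interpolation (bare-bones stand-in
# for kriging; no trend term or hyperparameter tuning).
theta = 25.0
K = np.exp(-theta * (x[:, None] - x[None, :]) ** 2)
w = np.linalg.solve(K + 1e-10 * np.eye(len(x)), y)
krig = K @ w

err_rsm = float(np.max(np.abs(rsm - y)))    # residual at the samples
err_krig = float(np.max(np.abs(krig - y)))  # ~0: interpolation
```

This is exactly the trade-off the paper weighs: an interpolating surrogate reproduces every (expensive) test point, while a low-order response surface is cheaper and smoother but carries fitting error at the sampling points themselves.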

Real-time Localization of An UGV based on Uniform Arc Length Sampling of A 360 Degree Range Sensor (전방향 거리 센서의 균일 원호길이 샘플링을 이용한 무인 이동차량의 실시간 위치 추정)

  • Park, Soon-Yong;Choi, Sung-In
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.48 no.6
    • /
    • pp.114-122
    • /
    • 2011
  • We propose an automatic localization technique based on Uniform Arc Length Sampling (UALS) of 360-degree range sensor data. The proposed method samples 3D points from a dense point cloud acquired by the sensor, registers the sampled points to a digital surface model (DSM) in real time, and determines the location of an Unmanned Ground Vehicle (UGV). To reduce the sampling and registration time for a sequence of dense range data, 3D range points are sampled uniformly in terms of ground sample distance. Using the proposed method, we can reduce the number of 3D points while maintaining their uniformity over the range data. We compare the registration speed and accuracy of the proposed method with those of a conventional sampling method, and analyze its speed and accuracy through several experiments in which the number of sampling points is varied.
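
The uniform arc-length idea can be sketched in 2D with a synthetic circular scan standing in for real sensor data (the function name and all parameters below are hypothetical): a point is kept only when the accumulated path length since the last kept point reaches the ground sample distance, so kept points are evenly spaced along the scanned contour rather than evenly spaced in angle.

```python
import numpy as np

def uniform_arc_length_sample(points, gsd):
    """Keep points so consecutive kept points are ~gsd apart along the path."""
    kept = [points[0]]
    acc = 0.0
    for prev, cur in zip(points[:-1], points[1:]):
        acc += float(np.linalg.norm(cur - prev))
        if acc >= gsd:
            kept.append(cur)
            acc = 0.0
    return np.array(kept)

# Dense synthetic "360-degree" scan: a circle of radius 5 m, 0.1-degree steps.
ang = np.linspace(0, 2 * np.pi, 3600, endpoint=False)
scan = np.column_stack([5 * np.cos(ang), 5 * np.sin(ang)])

sampled = uniform_arc_length_sample(scan, gsd=0.5)   # 0.5 m ground sample distance
```

The dense 3600-point scan collapses to a few dozen points at a uniform 0.5 m spacing, which is the property that keeps registration cost low without starving any part of the contour of samples.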

Reliability Estimation Using Kriging Metamodel (크리깅 메타모델을 이용한 신뢰도 계산)

  • Cho Tae-Min;Ju Byeong-Hyeon;Jung Do-Hyun;Lee Byung-Chai
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.30 no.8 s.251
    • /
    • pp.941-948
    • /
    • 2006
  • In this study, a new method for reliability estimation using a kriging metamodel is proposed. A kriging metamodel can be determined by an appropriate sampling range and number of sampling points because there are no random errors in the Design and Analysis of Computer Experiments (DACE) model. The first kriging metamodel is built from widely ranged sampling points, and the Advanced First Order Reliability Method (AFORM) is applied to it to estimate the reliability approximately. A second kriging metamodel is then constructed from additional sampling points over an updated sampling range, and Monte-Carlo Simulation (MCS) is applied to it to evaluate the reliability. The proposed method is applied to numerical examples, and the results are almost equal to the reference reliability.
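
The final MCS-on-metamodel step can be sketched on a toy one-dimensional limit state (the function g, the input distribution, and the bare-bones kernel interpolator below are illustrative assumptions, not the paper's examples): build a cheap surrogate of g from a handful of sampling points, then estimate the failure probability by Monte-Carlo simulation on the surrogate.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)

def g(x):                        # toy limit state: failure when g(x) < 0
    return 3.0 - x**2

# Kriging-style metamodel of g from 15 sampling points (Gaussian kernel,
# no trend term -- a minimal stand-in for a real kriging/DACE model).
xs = np.linspace(-4.0, 4.0, 15)
theta = 2.0
K = np.exp(-theta * (xs[:, None] - xs[None, :]) ** 2)
w = np.linalg.solve(K + 1e-8 * np.eye(len(xs)), g(xs))

def g_hat(x):
    k = np.exp(-theta * (x[:, None] - xs[None, :]) ** 2)
    return k @ w

# MCS on the cheap metamodel, X ~ N(0, 1).
x_mc = rng.normal(0.0, 1.0, 200_000)
pf_mc = float(np.mean(g_hat(x_mc) < 0.0))

# Exact answer for this toy case: P(X^2 > 3) = 1 - erf(sqrt(3/2)).
pf_exact = 1.0 - erf(sqrt(1.5))
```

Since each metamodel evaluation is a small dot product, the 200,000 MCS samples cost almost nothing, which is the point of substituting a surrogate for the true (expensive) limit-state function.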

A Hybrid Algorithm for Online Location Update using Feature Point Detection for Portable Devices

  • Kim, Jibum;Kim, Inbin;Kwon, Namgu;Park, Heemin;Chae, Jinseok
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.2
    • /
    • pp.600-619
    • /
    • 2015
  • We propose a cost-efficient hybrid algorithm for online location updates that combines feature point detection with an online trajectory-based sampling algorithm. The algorithm is designed to minimize the average trajectory error with a minimal number of sample points and is composed of three steps. First, corner points from the map are chosen as sample points because they are the most likely to prevent trajectory errors. Second, the online trajectory sampling algorithm detects missing but important sample points to prevent unwanted trajectory errors. The final step improves cost efficiency by eliminating redundant sample points on straight paths. We evaluate the proposed algorithm with real GPS trajectory data for various bus routes and compare it with an existing algorithm. Simulation results show that our algorithm decreases the average trajectory error by 28% compared to the existing algorithm, and that it is 29% more cost efficient on real GPS trajectory data.
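
Two of the three steps, corner-point selection and redundant-point elimination on straight segments, can be sketched together on a toy path; the angle threshold, the helper names, and the L-shaped route below are all hypothetical, not the paper's implementation.

```python
import numpy as np

def turn_angles(path):
    """Heading change (radians) at each interior vertex of a 2-D polyline."""
    v1 = path[1:-1] - path[:-2]
    v2 = path[2:] - path[1:-1]
    cos = np.sum(v1 * v2, axis=1) / (
        np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def corner_sample(path, angle_thresh_deg=30.0):
    """Keep endpoints plus vertices whose heading change exceeds the
    threshold; vertices on straight segments are dropped as redundant."""
    ang = np.degrees(turn_angles(path))
    keep = [0] + [i + 1 for i, a in enumerate(ang)
                  if a > angle_thresh_deg] + [len(path) - 1]
    return path[np.array(keep)]

# L-shaped route: straight east, one 90-degree corner, straight north.
path = np.array([[x, 0.0] for x in range(6)] + [[5.0, y] for y in range(1, 6)])
sampled = corner_sample(path)
```

The 11-point route collapses to its two endpoints plus the single corner, which is the intuition behind using corner points as the primary sample points: between them, linear interpolation reproduces the trajectory with no error.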

Study on the Effect of Training Data Sampling Strategy on the Accuracy of the Landslide Susceptibility Analysis Using Random Forest Method (Random Forest 기법을 이용한 산사태 취약성 평가 시 훈련 데이터 선택이 결과 정확도에 미치는 영향)

  • Kang, Kyoung-Hee;Park, Hyuck-Jin
    • Economic and Environmental Geology
    • /
    • v.52 no.2
    • /
    • pp.199-212
    • /
    • 2019
  • In machine learning techniques, the sampling strategy for the training data affects the performance of the prediction model, including its generalization ability as well as its prediction accuracy. In landslide susceptibility analysis in particular, the data sampling procedure is an essential step in setting up the training data because the number of non-landslide points is much larger than the number of landslide points. However, previous research did not consider various sampling methods for the training data; that is, previous studies selected the training data randomly. In this study, the authors therefore proposed several different sampling methods and assessed the effect of the training-data sampling strategy on landslide susceptibility analysis. Six scenarios were set up based on the sampling strategies for landslide and non-landslide points. A Random Forest model was then trained for each of the six scenarios, and the attribute importance of each input variable was evaluated. Subsequently, landslide susceptibility maps were produced using the input variables and their attribute importances. The AUC values of the landslide susceptibility maps obtained from the six sampling strategies showed high prediction rates, ranging from 70% to 80%. This means that the Random Forest technique shows appropriate predictive performance, and the attribute importances obtained from it can be used as the weights of landslide conditioning factors in the susceptibility analysis. In addition, the results obtained using specific sampling strategies for the training data showed higher prediction accuracy than those obtained using the previous random sampling method.
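
The class-imbalance issue this abstract highlights is commonly handled by undersampling the majority (non-landslide) class before training. The sketch below shows one such 1:1 strategy on synthetic labels; the counts are invented, and the paper's six scenarios and terrain data are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic inventory: far more non-landslide (0) than landslide (1) points.
n_landslide, n_non = 200, 5000
y = np.concatenate([np.ones(n_landslide), np.zeros(n_non)])

def balanced_undersample(y, rng):
    """Indices of all minority-class points plus an equal-size random draw
    from the majority class, giving a 1:1 training set."""
    pos = np.flatnonzero(y == 1)
    neg = np.flatnonzero(y == 0)
    minority, majority = (pos, neg) if len(pos) <= len(neg) else (neg, pos)
    picked = rng.choice(majority, size=len(minority), replace=False)
    return np.sort(np.concatenate([minority, picked]))

idx = balanced_undersample(y, rng)      # training-set row indices
ratio = float(np.mean(y[idx]))          # fraction of landslide points: 0.5
```

A classifier (Random Forest or otherwise) is then trained on `y[idx]` and the matching feature rows; varying how the majority draw is made (random, spatially stratified, distance-constrained, etc.) is exactly the kind of strategy difference the paper's six scenarios explore.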