• Title/Abstract/Keyword: process variance

Search results: 928 items (processing time: 0.029 s)

Comparison of Sensitivity Analysis Methods for Building Energy Simulations in Early Design Phases: Once-at-a-time (OAT) vs. Variance-based Methods

  • Kim, Sean Hay
    • KIEAE Journal
    • /
    • Vol. 16, No. 2
    • /
    • pp.17-22
    • /
    • 2016
  • Purpose: Sensitivity analysis offers useful guidance for designing energy-conscious buildings that is tailored to a specific building configuration. It is, however, still too expensive to be part of the regular design process. The one-at-a-time (OAT) approach is the simplest and most common sensitivity analysis method, whereas variance-based methods are known to be adequate for nonlinear responses and for interaction effects between input variables, which are typical of building energy simulations. This study aims to provide reasonable grounds for using the OAT as an alternative to variance-based methods in some early design scenarios. Method: A test model representing the early design phase is built in DOE2 energy simulations, and the sensitivity ranks produced by the OAT and the variance-based methods are compared at three U.S. sites. Result: The parameters ranked highest by the OAT do not differ much from those ranked highest by the main effect index. Considering that in practice designers would choose the most energy-saving design option first, this rank similarity between the two methods seems acceptable in the early design phase.
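
As a rough illustration of the comparison described above, the following Python sketch ranks the parameters of a toy building-energy surrogate by an OAT sweep and by a brute-force first-order (main-effect) index. The surrogate function, parameter names, and ranges are illustrative assumptions, not the DOE2 model or sites used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy_model(u_wall, wwr, shgc, ach):
    # toy surrogate: annual energy use as a function of four early-design parameters
    return 80 + 40 * u_wall + 25 * wwr * shgc + 15 * ach + 10 * u_wall * ach

bounds = {                 # hypothetical design ranges
    "u_wall": (0.2, 0.6),
    "wwr":    (0.2, 0.8),
    "shgc":   (0.2, 0.7),
    "ach":    (0.1, 1.0),
}
names = list(bounds)
baseline = {k: sum(v) / 2 for k, v in bounds.items()}

# OAT: sweep one parameter over its range with the others fixed at baseline
oat_effect = {}
for k, (lo, hi) in bounds.items():
    ys = [energy_model(**{**baseline, k: x}) for x in np.linspace(lo, hi, 11)]
    oat_effect[k] = max(ys) - min(ys)

# main-effect (first-order) index: Var(E[Y | Xi]) / Var(Y), brute-force Monte Carlo
def sample(n):
    return {k: rng.uniform(lo, hi, n) for k, (lo, hi) in bounds.items()}

var_y = energy_model(**sample(20000)).var()
main_effect = {}
for k, (lo, hi) in bounds.items():
    cond_means = [energy_model(**{**sample(2000), k: np.full(2000, x)}).mean()
                  for x in np.linspace(lo, hi, 25)]
    main_effect[k] = np.var(cond_means) / var_y

print("OAT ranking        :", sorted(names, key=oat_effect.get, reverse=True))
print("main-effect ranking:", sorted(names, key=main_effect.get, reverse=True))
```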

Motion-Compensated Noise Estimation for Effective Video Processing

  • 송병철
    • 대한전자공학회논문지SP
    • /
    • Vol. 46, No. 5
    • /
    • pp.120-125
    • /
    • 2009
  • To remove noise effectively in general video processing, the noise level or noise variance of the input video needs to be estimated accurately. In practice, however, it is difficult to obtain such noise information exactly. This paper proposes an accurate noise-variance estimation technique based on motion compensation between adjacent noisy frames. First, motion estimation is performed for each block of the input noisy frame, and the residual variance of the best motion-compensated block is computed. The noise-variance estimate for the frame is then obtained by adaptively averaging the variances close to the obtained minimum and applying an appropriate scaling. Experimental results show that the proposed method estimates the noise level very accurately and behaves stably.
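
A minimal Python sketch of the idea described in the abstract is given below: block-wise motion search between two noisy frames, residual variance of the best-matching block, and adaptive averaging of the variances near the minimum followed by scaling. The block size, search range, selection threshold, and scaling factor are assumptions rather than the paper's exact settings.

```python
import numpy as np

def estimate_noise_variance(prev, curr, block=16, search=4,
                            keep_ratio=1.5, scale=0.5):
    H, W = curr.shape
    best_vars = []
    for y in range(0, H - block + 1, block):
        for x in range(0, W - block + 1, block):
            tgt = curr[y:y + block, x:x + block].astype(np.float64)
            best = np.inf
            for dy in range(-search, search + 1):        # full search in previous frame
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= H - block and 0 <= xx <= W - block:
                        ref = prev[yy:yy + block, xx:xx + block].astype(np.float64)
                        best = min(best, np.var(tgt - ref))  # residual variance
            best_vars.append(best)
    best_vars = np.array(best_vars)
    # adaptively average block variances close to the minimum, then scale;
    # the residual of two independent noisy frames has variance ~2*sigma^2, hence scale ~0.5
    selected = best_vars[best_vars <= keep_ratio * best_vars.min()]
    return scale * selected.mean()

# usage with synthetic noisy frames (true sigma = 5)
rng = np.random.default_rng(1)
clean = np.tile(np.linspace(0, 255, 128), (128, 1))
f0 = clean + rng.normal(0, 5, clean.shape)
f1 = clean + rng.normal(0, 5, clean.shape)
print("estimated sigma:", np.sqrt(estimate_noise_variance(f0, f1)))
```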

Process Optimization for Thermal-sprayed Ni-based Hard Coating by Design of Experiments

  • 김균택;김영식
    • 동력기계공학회지
    • /
    • Vol. 13, No. 5
    • /
    • pp.89-94
    • /
    • 2009
  • In this work, the optimal process for a thermal-sprayed Ni-based hard coating was designed using an $L_9(3^4)$ orthogonal array and analysis of variance (ANOVA). Ni-based hard coatings were fabricated on a steel substrate by the flame spray process, and hardness tests and microstructural observation of the coatings were then performed. The hardness results were analyzed by ANOVA, which demonstrated that the acetylene gas flow had the greatest effect on the hardness of the coatings, while the oxygen gas flow had a negligible effect. From these results, the optimal combination of flame spray parameters could be predicted, and the hardness calculated from the ANOVA model was found to lie close to the result of the confirmation experiment. Thus, design of experiments using an orthogonal array together with ANOVA is useful for determining the optimal process for thermal-sprayed Ni-based hard coatings.
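
The following Python sketch illustrates the kind of orthogonal-array analysis described above: a standard L9(3^4) array with level means and factor sums of squares (percent contribution). The factor names and hardness values are hypothetical, so the resulting ranking is illustrative only.

```python
import numpy as np

# Standard L9(3^4) orthogonal array (levels coded 0, 1, 2).
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])
factors = ["oxygen_flow", "acetylene_flow", "spray_distance", "powder_feed"]
hardness = np.array([512, 530, 548, 505, 541, 560, 498, 552, 571.])  # hypothetical responses

grand = hardness.mean()
ss_total = ((hardness - grand) ** 2).sum()

for j, name in enumerate(factors):
    level_means = [hardness[L9[:, j] == lv].mean() for lv in (0, 1, 2)]
    ss_factor = 3 * sum((m - grand) ** 2 for m in level_means)  # each level occurs 3 times
    print(f"{name:15s} level means={np.round(level_means, 1)}"
          f"  SS={ss_factor:7.1f}  contribution={100 * ss_factor / ss_total:5.1f}%")
```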

The Evaluation of Evenness of Nonwovens Using Image Analysis Method

  • Jeong, Sung-Hoon;Kim, Si-Hwan;Hong, Cheol-Jae
    • Fibers and Polymers
    • /
    • Vol. 2, No. 3
    • /
    • pp.164-170
    • /
    • 2001
  • The authors studied the applicability of an image analysis technique using a scanner with a CCD (charge-coupled device) to the evaluation of the evenness of nonwovens, because it saves considerable time and labor compared with other classical methods. Two types of specimens, unpatterned and patterned, were prepared for the experiment: for the unpatterned specimens the webs were chemically bonded, while for the patterned specimens the webs were thermally calendered with an engraved roller. Several webs with various areal densities were prepared and bonded. The coefficient of variation (CV%) was used as the parameter for evaluating evenness. Suitable scanning conditions could be established by comparing the total variance with the between-group and within-group variances of images scanned under different conditions. A 2D convolution with a smoothing filter kernel was introduced to further filter noise in the scanned images. After this filtering, increasing the web areal density gave a uniform decrease in CV%. This shows that scanned image analysis with a proper filtering process can be successfully applied to the evaluation of evenness in nonwovens.
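
A small Python sketch of the evaluation procedure is shown below: a smoothing-kernel 2D convolution followed by the CV% of local patch means. The synthetic images, patch size, and kernel size are assumptions; the trend of decreasing CV% with increasing areal density mirrors the abstract only qualitatively.

```python
import numpy as np
from scipy.signal import convolve2d

def evenness_cv(image, patch=32, kernel=5):
    smoothing = np.full((kernel, kernel), 1.0 / kernel**2)   # mean-filter kernel
    filtered = convolve2d(image.astype(np.float64), smoothing, mode="valid")
    H, W = filtered.shape
    # mean grey level of each non-overlapping patch ~ local areal density
    means = np.array([filtered[i:i + patch, j:j + patch].mean()
                      for i in range(0, H - patch + 1, patch)
                      for j in range(0, W - patch + 1, patch)])
    return 100.0 * means.std(ddof=1) / means.mean()          # CV%

# usage on synthetic "webs": heavier webs with the same absolute variation show lower CV%
rng = np.random.default_rng(2)
for density in (60, 120, 180):
    img = rng.normal(density, 12, (512, 512))
    print(f"areal density ~{density}: CV% = {evenness_cv(img):.2f}")
```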

Evaluating the Effect of Specimen Thickness on Fatigue Crack Growth in AZ31 Alloy Using ANOVA

  • 최선순
    • 한국기계가공학회지
    • /
    • Vol. 19, No. 6
    • /
    • pp.9-16
    • /
    • 2020
  • This study aims to assess the effects of specimen thickness (ST) on fatigue crack growth in the early stages of crack propagation and near failure in magnesium alloys. The analysis of variance (ANOVA) method was adopted because fatigue crack propagation in magnesium alloys exhibits statistical behavior. An equality-of-variance test and residual diagnostics were performed on the crack-growth data to confirm the validity of ANOVA by verifying the normal distribution, mutual independence, and homoscedasticity of the residuals. ANOVA confirmed that ST heavily impacts crack growth; i.e., when ST is smaller, cracks grow faster in the early propagation stage and the specimen fails more quickly, before larger cracks can form. Thus ST significantly affects fatigue crack growth both in the early crack propagation stage and near the failure stage in magnesium alloys. A regression model was also used to predict crack growth near the failure stage.
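
The statistical workflow described above can be sketched in a few lines of Python with scipy.stats: a Levene test for variance equality, a one-way ANOVA across thickness groups, and a normality check on the residuals. The thickness labels and crack-growth values below are synthetic, not the paper's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# hypothetical crack lengths (mm) at a fixed cycle count for three specimen thicknesses
groups = {
    "ST=4.75mm": rng.normal(6.8, 0.35, 12),
    "ST=6.60mm": rng.normal(6.2, 0.35, 12),
    "ST=8.45mm": rng.normal(5.7, 0.35, 12),
}
samples = list(groups.values())

print("Levene  :", stats.levene(*samples))      # equality-of-variance check
print("ANOVA   :", stats.f_oneway(*samples))    # one-way ANOVA across thickness groups

# residual diagnostics: normality of (value - group mean)
residuals = np.concatenate([g - g.mean() for g in samples])
print("Shapiro :", stats.shapiro(residuals))
```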

Application of compressive sensing and variance considered machine to condition monitoring

  • Lee, Myung Jun;Jun, Jun Young;Park, Gyuhae;Kang, To;Han, Soon Woo
    • Smart Structures and Systems
    • /
    • Vol. 22, No. 2
    • /
    • pp.231-237
    • /
    • 2018
  • A significant data problem is encountered with condition monitoring because the sensors need to measure vibration data at a continuous and sometimes high sampling rate. In this study, compressive sensing approaches for condition monitoring are proposed to demonstrate their efficiency in handling a large amount of data and to improve the damage detection capability of the current condition monitoring process. Compressive sensing is a novel sensing/sampling paradigm that takes much fewer data than traditional data sampling methods. This sensing paradigm is applied to condition monitoring with an improved machine learning algorithm in this study. For the experiments, a built-in rotating system was used, and all data were compressively sampled to obtain compressed data. The optimal signal features were then selected without the signal reconstruction process. For damage classification, we used the Variance Considered Machine, utilizing only the compressed data. The experimental results show that the proposed compressive sensing method could effectively improve the data processing speed and the accuracy of condition monitoring of rotating systems.
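
The compressed-measurement idea can be sketched as follows: vibration signals are projected with a random Gaussian measurement matrix and classified directly in the compressed domain, with a plain nearest-centroid classifier standing in for the paper's Variance Considered Machine. The signals, sampling rate, labels, and compression ratio are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 1024, 128                                  # original vs. compressed length (8x fewer samples)
Phi = rng.normal(0, 1.0 / np.sqrt(m), (m, n))     # random Gaussian measurement matrix

def make_signal(fault):
    t = np.arange(n) / 2048.0                     # assumed 2.048 kHz sampling
    base = np.sin(2 * np.pi * 29.5 * t)           # shaft rotation component
    if fault:
        base += 0.4 * np.sin(2 * np.pi * 118.0 * t)   # assumed fault harmonic
    return base + rng.normal(0, 0.3, n)

# build compressed training/test sets without ever reconstructing the signals
X_train = np.array([Phi @ make_signal(f) for f in [0] * 40 + [1] * 40])
y_train = np.array([0] * 40 + [1] * 40)
X_test = np.array([Phi @ make_signal(f) for f in [0] * 10 + [1] * 10])
y_test = np.array([0] * 10 + [1] * 10)

centroids = [X_train[y_train == c].mean(axis=0) for c in (0, 1)]
pred = np.argmin([[np.linalg.norm(x - c) for c in centroids] for x in X_test], axis=1)
print("accuracy on compressed data:", (pred == y_test).mean())
```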

An Improved Mean-Variance Optimization for Nonconvex Economic Dispatch Problems

  • Kim, Min Jeong;Song, Hyoung-Yong;Park, Jong-Bae;Roh, Jae-Hyung;Lee, Sang Un;Son, Sung-Yong
    • Journal of Electrical Engineering and Technology
    • /
    • Vol. 8, No. 1
    • /
    • pp.80-89
    • /
    • 2013
  • This paper presents an efficient approach for solving economic dispatch (ED) problems with nonconvex cost functions using a Mean-Variance Optimization (MVO) algorithm combined with the Kuhn-Tucker conditions and a swap process. The aim of the ED problem, one of the most important activities in power system operation and planning, is to determine the optimal combination of power outputs of all generating units so as to meet the required load demand at minimum operating cost while satisfying system equality and inequality constraints. This paper applies the Kuhn-Tucker conditions and a swap process to the MVO algorithm to improve its global minimum searching capability. The proposed MVO is applied to three different nonconvex ED problems with valve-point effects, prohibited operating zones, transmission network losses, and multiple fuels with valve-point effects. Additionally, it is applied to the large-scale power system of Korea. The results are compared with those of state-of-the-art methods.
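
The nonconvex cost model with valve-point effects can be written down directly; the sketch below pairs it with a crude constrained random search that merely stands in for the MVO, Kuhn-Tucker, and swap machinery of the paper. The three-unit coefficients are a commonly used small test case and should be treated as illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# unit data: a, b, c, e, f, Pmin, Pmax; cost = a + b*P + c*P^2 + |e*sin(f*(Pmin - P))|
units = np.array([
    [561.0, 7.92, 0.001562, 300.0, 0.0315, 100.0, 600.0],
    [310.0, 7.85, 0.001940, 200.0, 0.0420, 100.0, 400.0],
    [ 78.0, 7.97, 0.004820, 150.0, 0.0630,  50.0, 200.0],
])
demand = 850.0          # MW; network losses neglected in this sketch

def cost(P):
    a, b, c, e, f, pmin, _ = units.T
    return np.sum(a + b * P + c * P**2 + np.abs(e * np.sin(f * (pmin - P))))

def random_solution():
    """Sample P within limits and use unit 0 as slack to satisfy the demand equality."""
    P = np.array([0.0] + [rng.uniform(u[5], u[6]) for u in units[1:]])
    P[0] = demand - P[1:].sum()
    return P if units[0, 5] <= P[0] <= units[0, 6] else None

best, best_cost = None, np.inf
for _ in range(50000):
    P = random_solution()
    if P is None:
        continue
    c = cost(P)
    if c < best_cost:
        best, best_cost = P, c
print("best dispatch (MW):", np.round(best, 2), " cost:", round(best_cost, 2))
```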

Determination of the Resetting Time to the Process Mean Shift by the Loss Function

  • 이도경
    • 산업경영시스템학회지
    • /
    • Vol. 40, No. 1
    • /
    • pp.165-172
    • /
    • 2017
  • Machines degrade physically or chemically with continuous use, and one result of this degradation is a shift of the process mean. Under a process mean shift, production cost, failure cost, and quality loss all increase continuously, so periodic preventive resetting of the process is necessary. We suppose that the wear level is observable; in this case the process mean shift problem has characteristics similar to a maintenance policy model. In previous studies, the process mean shift problem has been treated in several fields, such as tool wear limits, the canning process, and the quality loss function, separately or in partially integrated form. This paper proposes an integrated cost model that includes the production (material) cost, the failure cost due to nonconforming items, the quality loss cost due to the deviation of the quality characteristic from its target value, and the cost of resetting the process. We extend the process mean shift problem by treating the process variance as a function of wear rather than as a constant, and we suggest a multiplier-function model for the process variance based on the analysis of practical data. A two-sided specification is adopted, with the initial process mean generally set somewhat above the lower specification limit. The objective function is the total integrated cost per unit wear, and the decision variables are the wear limit and the initial process mean. Because the objective function cannot be integrated in closed form, the optimum is obtained by numerical analysis, and a numerical example is presented.
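
A minimal numerical sketch of such an integrated cost-per-unit-wear model is given below: material cost, a nonconforming-item cost, a two-sided quadratic loss term, a resetting cost, and a process variance that grows with wear through a multiplier function. All cost constants, the drift rate, and the variance multiplier are assumptions, not the paper's fitted values.

```python
import numpy as np
from scipy import stats, integrate, optimize

LSL, USL, TARGET = 9.0, 11.0, 10.0
MATERIAL, FAILURE, RESET = 1.0, 8.0, 50.0      # assumed costs per item / per reset
A = FAILURE / (USL - TARGET) ** 2              # quadratic loss equals failure cost at the spec limit
SIGMA0, K = 0.20, 0.15                         # sigma(w) = SIGMA0 * (1 + K*w), a multiplier model

def item_cost(w, mu0):
    mu, sigma = mu0 + w, SIGMA0 * (1 + K * w)  # mean drifts up by one unit per unit wear (assumed)
    p_out = stats.norm.cdf(LSL, mu, sigma) + stats.norm.sf(USL, mu, sigma)
    e_loss = A * ((mu - TARGET) ** 2 + sigma ** 2)   # E[A*(X - TARGET)^2]
    return MATERIAL + FAILURE * p_out + e_loss

def cost_per_unit_wear(x):
    mu0, w_limit = x
    total, _ = integrate.quad(item_cost, 0.0, w_limit, args=(mu0,))
    return (RESET + total) / w_limit

res = optimize.minimize(cost_per_unit_wear, x0=[9.5, 1.0],
                        bounds=[(LSL, TARGET), (0.1, USL - LSL)])
print("initial mean:", round(res.x[0], 3),
      " wear limit:", round(res.x[1], 3),
      " cost per unit wear:", round(res.fun, 3))
```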

Computing the Ruin Probability of Lévy Insurance Risk Processes in non-Cramér Models

  • Park, Hyun-Suk
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 17, No. 4
    • /
    • pp.483-491
    • /
    • 2010
  • This study provides the explicit computation of the ruin probability of a Lévy process on a finite time horizon in Theorem 1 with the help of a fluctuation identity. The paper also gives numerical results for the ruin probability in the Variance Gamma (VG) and Normal Inverse Gaussian (NIG) models as illustrations. In addition, the paths of the VG and NIG processes are simulated using the same parameter values as in Madan et al. (1998).
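
As a rough companion to the numerical illustrations mentioned above, the sketch below simulates Variance Gamma paths (Brownian motion with a gamma time change) and estimates a finite-horizon ruin probability by Monte Carlo on a discrete grid. The parameter values, premium rate, and grid are assumptions, and discrete monitoring slightly understates the continuous-time ruin probability.

```python
import numpy as np

rng = np.random.default_rng(7)

def vg_increments(n_paths, n_steps, dt, sigma, nu, theta):
    g = rng.gamma(shape=dt / nu, scale=nu, size=(n_paths, n_steps))  # gamma subordinator
    z = rng.standard_normal((n_paths, n_steps))
    return theta * g + sigma * np.sqrt(g) * z

def ruin_probability(u, c, T, n_steps=500, n_paths=10000,
                     sigma=0.4, nu=0.2, theta=-0.1):
    dt = T / n_steps
    t = np.arange(1, n_steps + 1) * dt
    x = np.cumsum(vg_increments(n_paths, n_steps, dt, sigma, nu, theta), axis=1)
    surplus = u + c * t + x                       # risk reserve along each path
    return np.mean(surplus.min(axis=1) < 0.0)

for u in (0.2, 0.5, 1.0):
    print(f"initial reserve u={u}: ruin prob on [0,1] ~ {ruin_probability(u, c=0.2, T=1.0):.4f}")
```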

The Change Point Analysis in Time Series Models

  • Lee, Sang-Yeol
    • 한국통계학회:학술대회논문집
    • /
    • 한국통계학회 2005년도 추계 학술발표회 논문집
    • /
    • pp.43-48
    • /
    • 2005
  • We consider the problem of testing for parameter changes in time series models based on a cusum test. Although the test procedure is well-established for the mean and variance in time series models, a general parameter case has not been discussed in the literature. Therefore, here we develop a cusum test for parameter change in a more general framework. As an example, we consider the change of the parameters in an RCA(1) model and that of the autocovariances of a linear process. We also consider the variance change test for unstable models with unit roots and GARCH models.
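
The simplest special case of the framework above, a CUSUM test for a change in the mean of a univariate series, can be sketched as follows; the general-parameter, RCA(1), and GARCH versions discussed in the paper require score sequences from estimated models and are not reproduced here.

```python
import numpy as np

def cusum_mean_change(x):
    n = x.size
    s = np.cumsum(x - x.mean())                # S_k - (k/n) * S_n
    sigma = x.std(ddof=1)                      # iid sketch: long-run variance ~ marginal variance
    stat = np.max(np.abs(s)) / (sigma * np.sqrt(n))
    k_hat = int(np.argmax(np.abs(s))) + 1      # estimated change point
    return stat, k_hat

# usage: mean shift of 0.8 at the midpoint of a synthetic series
rng = np.random.default_rng(8)
x = np.concatenate([rng.normal(0.0, 1.0, 150), rng.normal(0.8, 1.0, 150)])
stat, k_hat = cusum_mean_change(x)
print(f"CUSUM statistic = {stat:.3f} (5% critical value ~ 1.358), "
      f"estimated change point = {k_hat}")
```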
