• Title/Summary/Keyword: K평균 (K-means)

Search Result 23,803, Processing Time 0.049 seconds

Normalized Mean Field Annealing Algorithm for Module Orientation Problem (모듈 방향 결정 문제 해결을 위한 정규화된 평균장 어닐링 알고리즘)

  • Chong, Kyun-Rak
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.27 no.12
    • /
    • pp.988-995
    • /
    • 2000
  • Even after the position of each module has been fixed by a placement algorithm, the efficiency and connectivity of a circuit can be improved by flipping or rotating modules about their vertical or horizontal axes. The module orientation problem, one step in VLSI design, is to determine the orientation of each module so that the total length of the wires connecting the modules is minimized. Recently, mean field annealing has been applied to combinatorial optimization problems with good results. Mean field annealing combines the fast convergence characteristics of neural networks with the high solution quality of simulated annealing. In this paper, we solve the module orientation problem using normalized mean field annealing and experimentally compare the results with those of the conventional Hopfield network approach and simulated annealing. The total wire-length reduction rates of simulated annealing, normalized mean field annealing, and the Hopfield network were 19.86%, 19.85%, and 19.03%, respectively, and normalized mean field annealing ran about 1.1 times faster than the Hopfield network and about 11.4 times faster than simulated annealing.

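The mean field annealing idea summarized in the abstract above can be illustrated with a minimal sketch: continuous spins are repeatedly relaxed toward the low-energy orientation through a tanh update while the temperature is lowered. This is a toy Ising-style objective, not the paper's normalized formulation for module orientation; all names and parameters are illustrative.

```python
import math

def mean_field_annealing(J, v, t_start=2.0, t_end=0.05, cooling=0.9, sweeps=20):
    """Anneal mean-field spins v in [-1, 1] for the energy E = sum_{i<j} J[i][j]*v[i]*v[j]."""
    n = len(v)
    t = t_start
    while t > t_end:
        for _ in range(sweeps):
            for i in range(n):
                # Local field felt by spin i: dE/dv_i = sum_{j != i} J[i][j] * v[j]
                field = sum(J[i][j] * v[j] for j in range(n) if j != i)
                # Mean-field update drives v_i toward the low-energy orientation;
                # as t decreases, tanh saturates and v_i approaches -1 or +1.
                v[i] = math.tanh(-field / t)
        t *= cooling
    return v

# Toy ferromagnetic instance: every coupling favors equal orientations
n = 4
J = [[-1.0 if i != j else 0.0 for j in range(n)] for i in range(n)]
v = [0.1, 0.2, -0.05, 0.15]  # small initial biases
result = mean_field_annealing(J, v)
```

At the final low temperature the spins saturate to a consistent orientation, which is the discrete decision (flip / no flip) the algorithm reads off.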

Study on Concentration Distribution of HCB and DDTs in River Sediments of Korea (국내 주요 수계 표층 퇴적물 중 HCB와 DDTs의 농도분포 특성에 관한 연구)

  • Park, Jong-Eun;Lee, Sang-Chun;Hong, Jong-Ki;Kim, Jong-Guk
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.34 no.5
    • /
    • pp.335-344
    • /
    • 2012
  • Hexachlorobenzene (HCB) and dichloro-diphenyl-trichloroethane (DDT) were determined in surface sediments collected from the main rivers of Korea. HCB concentrations in sediments ranged from 0.41 to 3.82 (average 1.58) ng/g, 0.08 to 6.09 (average 0.90) ng/g, 0.02 to 0.97 (average 0.30) ng/g, 0.28 to 0.59 (average 0.42) ng/g, and 0.23 to 0.48 (average 0.32) ng/g in the Han, Nakdong, Geum, Yeongsan, and Seomjin rivers, respectively. DDTs concentrations ranged from 0.67 to 14.20 (average 4.76) ng/g, N.D. to 10.36 (average 1.81) ng/g, N.D. to 7.26 (average 1.87) ng/g, N.D. to 3.12 (average 1.08) ng/g, and 0.02 to 2.04 (average 0.56) ng/g in the same rivers, respectively. Compared with other studies, the HCB and DDTs concentrations in the sediments examined here were lower than those reported for other countries. Compared with the Sediment Quality Guideline (SQG) of the National Oceanic and Atmospheric Administration (NOAA), the HCB levels in this study were far below the Effect Range Low (ERL) value. In the case of DDTs, the concentrations at 46 points were higher than the ERL (1.58 ng/g). These levels are not expected to harm the sediment ecosystem; however, ongoing monitoring of sediments is deemed necessary.

Confidence Intervals for a Low Binomial Proportion (낮은 이항 비율에 대한 신뢰구간)

  • Ryu Jae-Bok;Lee Seung-Joo
    • The Korean Journal of Applied Statistics
    • /
    • v.19 no.2
    • /
    • pp.217-230
    • /
    • 2006
  • We discuss appropriate confidence intervals for interval estimation of a low binomial proportion. Large-sample surveys are conducted in practice to estimate the rates of rare diseases, specific industrial disasters, and parasitic infections. Under the conditions 0 < p ≤ 0.1 and large n, we compared six confidence intervals with respect to mean coverage probability, root mean square error, and mean expected width in order to find a good interval estimator of the population proportion p. As a result of the comparisons, the Mid-p confidence interval is best, and the AC, score, and Jeffreys confidence intervals are next.
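Two of the intervals compared above can be sketched in a few lines. This is a minimal illustration, assuming "AC" stands for the Agresti-Coull interval and fixing the normal quantile at the 95% level; it is not the authors' code.

```python
import math

Z = 1.959963985  # approx. 97.5th percentile of the standard normal (95% two-sided)

def wilson_interval(x, n, z=Z):
    """Score (Wilson) confidence interval for a binomial proportion."""
    p = x / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

def agresti_coull_interval(x, n, z=Z):
    """Agresti-Coull (AC) interval: add z^2/2 pseudo-successes and pseudo-failures."""
    n_t = n + z * z
    p_t = (x + z * z / 2) / n_t
    half = z * math.sqrt(p_t * (1 - p_t) / n_t)
    return p_t - half, p_t + half

# Low-proportion example: 5 successes in 100 trials, p-hat = 0.05
lo_w, hi_w = wilson_interval(5, 100)
lo_ac, hi_ac = agresti_coull_interval(5, 100)
```

Both intervals share the shrunken center (x + z²/2)/(n + z²), which is what keeps them well behaved near p = 0, where the naive Wald interval fails.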

Estimation of nonlinear GARCH-M model (비선형 평균 일반화 이분산 자기회귀모형의 추정)

  • Shim, Joo-Yong;Lee, Jang-Taek
    • Journal of the Korean Data and Information Science Society
    • /
    • v.21 no.5
    • /
    • pp.831-839
    • /
    • 2010
  • The least squares support vector machine (LS-SVM) is a kernel method that has gained considerable popularity in regression and classification problems. We use LS-SVM to propose an iterative algorithm for a nonlinear generalized autoregressive conditional heteroscedasticity in mean (GARCH-M) model to estimate the mean and the conditional volatility of stock market returns. The proposed method combines a weighted LS-SVM for the mean with an unweighted LS-SVM for the conditional volatility. In this paper, we show through estimation on real data that nonlinear GARCH-M models outperform the linear GARCH model and the linear GARCH-M model.
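The LS-SVM building block used above can be sketched for plain (unweighted) regression: fitting reduces to one linear system in the dual variables. This is a minimal illustration with an RBF kernel, not the paper's iterative weighted GARCH-M estimator; the kernel width and regularization parameter are illustrative.

```python
import math

def rbf(a, b, sigma=0.5):
    """Gaussian (RBF) kernel for scalar inputs."""
    return math.exp(-((a - b) ** 2) / (2 * sigma ** 2))

def solve(A, b):
    """Plain Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def lssvm_fit(xs, ys, gamma=100.0, sigma=0.5):
    """Solve the LS-SVM dual system [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(xs)
    A = [[0.0] + [1.0] * n]
    for i in range(n):
        row = [1.0] + [rbf(xs[i], xs[j], sigma) for j in range(n)]
        row[i + 1] += 1.0 / gamma  # ridge term I/gamma on the kernel diagonal
        A.append(row)
    sol = solve(A, [0.0] + list(ys))
    b, alpha = sol[0], sol[1:]
    # Predictor: f(x) = b + sum_i alpha_i * K(x, x_i)
    return lambda x: b + sum(a * rbf(x, xi, sigma) for a, xi in zip(alpha, xs))

xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [math.sin(x) for x in xs]
f = lssvm_fit(xs, ys)
```

The paper's weighted variant reuses this same system but scales each diagonal ridge term by a per-observation weight, which is how heteroscedasticity enters the mean fit.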

The Correlation between Groundwater Level and Moving Average of Precipitation in Nakdong River Watershed (낙동강유역의 지하수위와 강우이동평균의 상관관계)

  • Yang, Jeong-Seok;Ahn, Tae-Yeon
    • The Journal of Engineering Geology
    • /
    • v.17 no.4
    • /
    • pp.507-510
    • /
    • 2007
  • The correlation between groundwater level (GWL) and the moving average of precipitation was analyzed based on observation data in the Nakdong river watershed. The precipitation data were compared with the GWL data from the observation point adjacent to each precipitation gauge station. The correlations between GWL and the moving average of precipitation were computed for several averaging periods, which allowed us to choose the averaging period that produces the maximum correlation. A severe drawdown was observed from December to April. The maximum correlations between GWL and the moving average of precipitation occurred for averaging periods of 20 to 80 days.
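The procedure described above, correlating GWL with moving averages of precipitation over several averaging periods and keeping the best one, can be sketched as follows. The data and the trailing-overlap alignment are illustrative, not the study's.

```python
def moving_average(xs, w):
    """w-day moving average; the result has len(xs) - w + 1 points."""
    return [sum(xs[i:i + w]) / w for i in range(len(xs) - w + 1)]

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def best_window(precip, gwl, windows):
    """Pick the averaging period whose moving average correlates best with GWL.
    The series are aligned on their trailing overlap, a deliberate simplification."""
    scores = {}
    for w in windows:
        ma = moving_average(precip, w)
        m = min(len(ma), len(gwl))
        scores[w] = pearson(ma[-m:], gwl[-m:])
    return max(scores, key=scores.get), scores

# Synthetic check: GWL constructed as a 3-day moving average of precipitation,
# so the 3-day window should win with correlation 1.
precip = [5.0, 0.0, 12.0, 3.0, 8.0, 0.0, 20.0, 6.0, 1.0, 9.0]
gwl = moving_average(precip, 3)
w, scores = best_window(precip, gwl, [2, 3, 4])
```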

The Consideration of Consistent Use of Sample Standard Deviation in the Confidence Interval Estimation of Population Mean and Population Ratio (모평균과 모비율의 구간추정에서 표본표준편차의 일관된 사용에 대한 고찰)

  • Park, Sun Yong;Yoon, Hyoung Seok
    • Journal of Educational Research in Mathematics
    • /
    • v.24 no.3
    • /
    • pp.375-385
    • /
    • 2014
  • This study compares the confidence interval estimation of a population mean with that of a population ratio and considers whether these two estimations ensure consistency. As a result, this study suggests the following way to achieve consistency: dealing with the population mean and the population ratio in the same manner, substituting the observed or experimental value of the sample standard deviation for the population standard deviation when setting a confidence interval for both the population mean and the population ratio, and distinguishing the population-ratio estimator P̂ from its observed value p̂.


A Study on Thermal Properties of Rocks from Gyeonggi-do Gangwon-do, Chungchung-do, Korea (경기도, 강원도, 충청도 일대의 암석 열물성 특성 연구)

  • Park, Jeong-Min;Kim, Hyoung-Chan;Lee, Young-Min;Song, Moo-Young
    • Economic and Environmental Geology
    • /
    • v.40 no.6
    • /
    • pp.761-769
    • /
    • 2007
  • We made 712 thermal property measurements on igneous, metamorphic, and sedimentary rock samples from Gyeonggi-do, Gangwon-do, and Chungchung-do, Korea. The average thermal conductivities of igneous, metamorphic, and sedimentary rocks are 3.58 W/m-K, 4.16 W/m-K, and 4.53 W/m-K, respectively. The thermal conductivities of granite and gneiss are 2.13-5.87 W/m-K and 2.26-6.67 W/m-K, with average values of 3.57 W/m-K and 3.945 W/m-K, respectively. The average thermal diffusivities of granite and gneiss samples are 1.43 mm²/sec and 1.55 mm²/sec, respectively, and their average specific heat values are 0.914 J/gK and 0.912 J/gK, respectively. The thermal conductivity of a rock type generally has a wide range because it depends on various factors such as the dominant mineral phase, micro-structure, and anisotropy.

Mean-Variance-Validation Technique for Sequential Kriging Metamodels (순차적 크리깅모델의 평균-분산 정확도 검증기법)

  • Lee, Tae-Hee;Kim, Ho-Sung
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.34 no.5
    • /
    • pp.541-547
    • /
    • 2010
  • The rigorous validation of the accuracy of metamodels is an important topic in research on metamodel techniques. The leave-k-out cross-validation technique not only involves a considerably high computational cost but also cannot measure the fidelity of metamodels. Recently, the mean₀ validation technique has been proposed to quantitatively determine the accuracy of metamodels. However, use of the mean₀ validation criterion may lead to premature termination of the sampling process even when the kriging model is inaccurate. In this study, we propose a new validation technique based on the mean and variance of the response evaluated when a sequential sampling method, such as maximum entropy sampling, is used. The proposed validation technique is more efficient and accurate than leave-k-out cross-validation because, instead of performing numerical integration, the kriging model is explicitly integrated to accurately evaluate the mean and variance of the response. The error in the proposed validation technique resembles a root mean squared error, so it can be used to determine a stopping criterion for the sequential sampling of metamodels.

Macromineral intake in non-alcoholic beverages for children and adolescents: Using the Fourth Korea National Health and Nutrition Examination Survey (KNHANES IV, 2007-2009) (어린이와 청소년의 비알콜성음료 섭취에 따른 다량무기질 섭취량 평가: 제 4기 국민건강영양조사 자료를 활용하여)

  • Kim, Sung Dan;Moon, Hyun-Kyung;Park, Ju Sung;Lee, Yong Chul;Shin, Gi Young;Jo, Han Bin;Kim, Bog Soon;Kim, Jung Hun;Chae, Young Zoo
    • Journal of Nutrition and Health
    • /
    • v.46 no.1
    • /
    • pp.50-60
    • /
    • 2013
  • The aims of this study were to estimate the daily intake of macrominerals from beverages, liquid teas, and liquid coffees and to evaluate their potential health risks for Korean children and adolescents (1 to 19 years old). Dietary intake was assessed using the measured levels of sodium, calcium, phosphorus, potassium, and magnesium in non-alcoholic beverages (207 beverages, 19 liquid teas, and 24 liquid coffees) and the food consumption amounts drawn from "The Fourth Korea National Health and Nutrition Examination Survey (2007-2009)". To estimate the dietary intake of non-alcoholic beverages, 6,082 children and adolescents (Scenario I) were compared with the 1,704 non-alcoholic beverage consumers among them (Scenario II). The estimated daily intake of macrominerals was calculated using both point estimates and probabilistic estimates. The probabilistic macromineral intake values, obtained with a Monte Carlo approach that considers the probability density functions of the variables, were presented using the probabilistic model. The safety of macromineral intake was evaluated by comparison with the population nutrient intake goal (Goal, 2.0 g/day) for sodium and the tolerable upper intake levels (UL) for calcium (2,500 mg/day) and phosphorus (3,000-3,500 mg/day) set by the Korean Nutrition Society (Dietary Reference Intakes for Koreans, KDRI). For all children and adolescents (Scenario I), the mean daily intakes of sodium, calcium, phosphorus, potassium, and magnesium estimated by Monte Carlo simulation were 7.93, 10.92, 6.73, 23.41, and 1.11 mg/day, respectively, and the 95th percentile daily intakes were 28.02, 44.86, 27.43, 98.14, and 3.87 mg/day, respectively. For consumers only (Scenario II), the mean daily intakes of sodium, calcium, phosphorus, potassium, and magnesium were 19.10, 25.77, 15.83, 56.56, and 2.86 mg/day, respectively, and the 95th percentile daily intakes were 62.67, 101.95, 62.09, 227.92, and 8.67 mg/day, respectively. For Scenarios I and II, the mean and 95th percentile intakes of sodium, calcium, and phosphorus did not meet or exceed 5% of the Goal and UL.
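The probabilistic (Monte Carlo) estimate described above can be sketched as follows: sample a concentration and a consumption amount from assumed distributions, multiply, repeat, and report the mean and 95th percentile of the resulting intake distribution. The distributions and their parameters here are invented for illustration and are not the KNHANES concentration or consumption data.

```python
import random

def simulate_intake(n_iter=20000, seed=42):
    """Monte Carlo estimate of daily mineral intake from beverages (illustrative)."""
    rng = random.Random(seed)  # seeded for reproducibility
    intakes = []
    for _ in range(n_iter):
        # Mineral concentration in the beverage (mg/mL): lognormal assumption
        conc = rng.lognormvariate(mu=-3.0, sigma=0.5)
        # Daily beverage consumption (mL/day): triangular assumption, mode 150 mL
        consumed = rng.triangular(0.0, 500.0, 150.0)
        intakes.append(conc * consumed)
    intakes.sort()
    mean = sum(intakes) / n_iter
    p95 = intakes[int(0.95 * n_iter)]  # empirical 95th percentile
    return mean, p95

mean, p95 = simulate_intake()
```

The point-estimate approach in the abstract corresponds to multiplying the two means directly; the Monte Carlo version additionally yields the upper percentiles needed for the safety comparison against the Goal and UL.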

RHadoop platform for K-Means clustering of big data (빅데이터 K-평균 클러스터링을 위한 RHadoop 플랫폼)

  • Shin, Ji Eun;Oh, Yoon Sik;Lim, Dong Hoon
    • Journal of the Korean Data and Information Science Society
    • /
    • v.27 no.3
    • /
    • pp.609-619
    • /
    • 2016
  • RHadoop is a collection of R packages that allow users to manage and analyze data with Hadoop. In this paper, we implement the K-Means algorithm on the MapReduce framework with RHadoop to make the clustering method applicable to large-scale data. The main idea is to introduce a combiner as a function of the map output in order to decrease the amount of data that must be processed by the reducers. We show that our K-Means algorithm using RHadoop with a combiner is faster than the regular algorithm without a combiner as the size of the data set increases. We also implement the Elbow method with MapReduce for finding the optimum number of clusters for K-Means clustering on a large dataset. A comparison of our MapReduce implementation of the Elbow method with the classical kmeans() in R on small data showed similar results.
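The combiner idea described above, pre-aggregating per-cluster sums and counts inside each mapper so the reducer only merges a handful of partial sums instead of every point, can be sketched outside RHadoop as follows. Plain Python stands in for the MapReduce framework, and all names are illustrative.

```python
from collections import defaultdict

def nearest(point, centroids):
    """Index of the closest centroid (1-D points for simplicity)."""
    return min(range(len(centroids)), key=lambda k: (point - centroids[k]) ** 2)

def map_with_combiner(chunk, centroids):
    """Mapper + combiner: emit one (cluster, [sum, count]) pair per cluster per chunk,
    rather than one record per point."""
    partial = defaultdict(lambda: [0.0, 0])
    for p in chunk:
        k = nearest(p, centroids)
        partial[k][0] += p
        partial[k][1] += 1
    return dict(partial)

def reduce_centroids(partials, centroids):
    """Reducer: merge the combiner outputs and recompute each centroid."""
    totals = defaultdict(lambda: [0.0, 0])
    for part in partials:
        for k, (s, c) in part.items():
            totals[k][0] += s
            totals[k][1] += c
    return [totals[k][0] / totals[k][1] if totals[k][1] else centroids[k]
            for k in range(len(centroids))]

def kmeans(chunks, centroids, iters=10):
    for _ in range(iters):
        partials = [map_with_combiner(chunk, centroids) for chunk in chunks]
        centroids = reduce_centroids(partials, centroids)
    return centroids

# Two well-separated 1-D clusters, split across two "mapper" chunks
chunks = [[1.0, 1.2, 0.8, 9.8], [10.0, 10.2, 1.1, 9.9]]
centroids = kmeans(chunks, [0.0, 5.0])
```

With the combiner, each mapper sends at most K pairs to the shuffle regardless of chunk size, which is why the paper observes the speedup growing with the data set.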