• Title/Summary/Keyword: New Weighted Variance

An Investigation on Densification by Modified Weighted Station Approach (가중측점망 조정법의 적용에 관한 연구)

  • Baick, Eun Kee;Lee, Young Jin
    • KSCE Journal of Civil and Environmental Engineering Research / v.11 no.4 / pp.133-141 / 1991
  • An empirical method is used for the integrated adjustment of revised national control-point coordinates, in which the existing values are left unchanged, or changed only slightly, through a suitable datum selection (for example, fixed points). This paper treats a modified weighted-station parameter adjustment based on quasi-observations, in which only the variance elements of the existing coordinates are used in place of the full covariance elements. The method successfully performs movement detection of unstable points and junction adjustment of new networks when integrating new secondary networks into old secondary triangulation points for which the original observations are no longer available in Korea. The investigation reveals that the accuracy of the old secondary triangulation points is $\pm 16''$ ($\pm 0.08$ m), obtained from the densification of the test network and analyses of the old survey specifications, and that it is $\pm 2.3''$ when the old secondary triangulation points are held fixed.
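The quasi-observation idea in this abstract can be illustrated with a small weighted least-squares sketch. The function below is a hypothetical illustration, not the paper's implementation: the existing coordinates enter as quasi-observations whose weights are formed from their variance elements only, with all covariance elements dropped.

```python
import numpy as np

def weighted_station_adjustment(A_new, l_new, sigma_new, x_prior, var_prior):
    """Combine new observation equations (A_new x = l_new, per-observation std sigma_new)
    with quasi-observations of the existing coordinates x_prior weighted by 1/var_prior."""
    n = len(x_prior)
    A = np.vstack([A_new, np.eye(n)])                 # quasi-observation rows: x = x_prior
    l = np.concatenate([l_new, x_prior])
    w = np.concatenate([1.0 / np.asarray(sigma_new) ** 2, 1.0 / np.asarray(var_prior)])
    W = np.diag(w)                                    # diagonal weights only: covariances dropped
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ l)  # weighted least-squares estimate
```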

Multi Bands Focus Detection Algorithm (주파수 대역 특성을 이용한 초점 검출 알고리즘)

  • Choi, Jong-Seong;Han, Young-Seok;Kang, Moon-Gi
    • Proceedings of the IEEK Conference / 2008.06a / pp.825-826 / 2008
  • Focusing is the principal factor that decides image quality. Under low-illuminance conditions, images captured with a digital camera are usually blurred because the camera's autofocus system fails to detect the in-focus position. This failure of focusing is caused by thermal noise in the captured image. In this paper, we propose a new focus detection algorithm. The proposed algorithm uses a new focusing index, a weighted sum of the high-frequency and mid-frequency energy, in which the weight is determined by the local variance of the image. The proposed algorithm performs stable focus detection under low-illuminance conditions.
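As a hedged sketch of such an index (the band decomposition, window size, and weighting rule below are assumptions made for illustration, not the authors' exact definitions), a local-variance-weighted combination of mid- and high-frequency energy could look like this:

```python
import numpy as np
from scipy import ndimage

def focus_index(img, w_size=9):
    img = img.astype(float)
    low = ndimage.gaussian_filter(img, sigma=4)                  # coarse content
    mid = ndimage.gaussian_filter(img, sigma=1) - low            # mid-frequency band
    high = img - ndimage.gaussian_filter(img, sigma=1)           # high-frequency band
    local_mean = ndimage.uniform_filter(img, w_size)
    local_var = ndimage.uniform_filter(img ** 2, w_size) - local_mean ** 2
    w = local_var / (local_var.max() + 1e-9)                     # hypothetical variance-driven weight
    return float(np.sum(w * high ** 2 + (1.0 - w) * mid ** 2))   # larger near the in-focus position
```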

Motion Estimation Algorithm Using Variance and Adaptive Search Range for Frame Rate Up-Conversion (프레임 율 향상을 위한 분산 및 적응적 탐색영역을 이용한 움직임 추정 알고리듬)

  • Yu, Songhyun;Jeong, Jechang
    • Journal of Broadcast Engineering / v.23 no.1 / pp.138-145 / 2018
  • In this paper, we propose a new motion estimation algorithm for frame rate up-conversion. The proposed algorithm uses the variance of the matching errors in addition to the SAD during motion estimation to find more accurate motion vectors. It then decides which motion vectors are wrong using the variance of the neighboring motion vectors and the variance between the current motion vector and the neighbors' average motion vector. Incorrect motion vectors are corrected by a weighted sum of the eight neighboring motion vectors. Additionally, we propose an adaptive search range algorithm, so that more accurate motion vectors can be found while computational complexity is reduced at the same time. As a result, the proposed algorithm improves the average peak signal-to-noise ratio and structural similarity by up to 1.44 dB and 0.129, respectively, compared with previous algorithms.
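Two of the ingredients above can be sketched as follows: a block cost that adds the variance of the matching errors to the SAD, and the correction of an outlier motion vector by a weighted sum of its eight neighbours. The block handling and the default weights are illustrative assumptions, not the paper's values.

```python
import numpy as np

def matching_cost(block_a, block_b):
    """Block-matching cost: SAD plus the variance of the matching errors."""
    err = block_a.astype(float) - block_b.astype(float)
    return np.abs(err).sum() + err.var()

def correct_outlier_mv(neighbor_mvs, weights=None):
    """Replace an outlier motion vector by a weighted sum of its eight neighbours."""
    mvs = np.asarray(neighbor_mvs, dtype=float)        # shape (8, 2): neighbour motion vectors
    if weights is None:
        weights = np.full(len(mvs), 1.0 / len(mvs))    # equal weights as a simple default
    return (weights[:, None] * mvs).sum(axis=0)
```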

Shape Extraction of Near Target Using Opening Operator with Adaptive Structure Element in Infrared Images (적응적 구조요소를 이용한 열림 연산자에 의한 적외선 영상표적 추출)

  • Kwon, Hyuk-Ju;Bae, Tae-Wuk;Kim, Byoung-Ik;Lee, Sung-Hak;Kim, Young-Choon;Ahn, Sang-Ho;Sohng, Kyu-Ik
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.9C / pp.546-554 / 2011
  • Near targets in infrared (IR) images exhibit a steady feature in the inner region and a transient feature in the boundary region. Based on these features, this paper proposes a new method to extract the fine shape of near targets in IR images. First, the boundary region of the candidate targets is detected using the local-variance weighted information entropy (WIE) of the original image. A coarse target region is then estimated by labeling the boundary region. For the coarse target region, an opening filter with an adaptive structure element is used to extract the fine target shape. The size of the adaptive structure element is optimized for the width of the target boundary by calculating the average WIE in enlarged windows. The experimental results show that the proposed method has better extraction performance than previous threshold-based algorithms.
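The exact WIE definition is not given in the abstract; the form below, H = Σ_s (s − mean)² p(s) (−log p(s)) over the gray levels of a local window, is one common variant from the IR small-target literature and is used here only as an unoptimized illustration of how a boundary-highlighting map could be computed.

```python
import numpy as np

def local_wie(img, win=15, bins=32):
    """Local-variance weighted information entropy map (brute-force sliding window)."""
    img = img.astype(float)
    h, w = img.shape
    out = np.zeros_like(img)
    r = win // 2
    for i in range(r, h - r):
        for j in range(r, w - r):
            patch = img[i - r:i + r + 1, j - r:j + r + 1]
            hist, edges = np.histogram(patch, bins=bins)
            p = hist / hist.sum()
            levels = 0.5 * (edges[:-1] + edges[1:])
            mask = p > 0
            out[i, j] = np.sum((levels[mask] - patch.mean()) ** 2 * p[mask] * (-np.log(p[mask])))
    return out  # high values flag transient (boundary) regions of candidate targets
```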

Stochastic FE Analysis of Plate Structure (평판구조의 추계론적 유한요소해석)

  • 최창근;노혁천
    • Computational Structural Engineering / v.8 no.1 / pp.127-136 / 1995
  • In this paper, a stochastic FE analysis that considers the material and geometrical properties of a plate structure is performed by the weighted integral method. To account for the stochasticity of the material and geometrical properties, a stochastic field is assumed for each. The mean value of the stochastic field is 0 and its variance is assumed to be 0.1. The characteristics of the assumed stochastic field are represented by an auto-correlation function, which is used in evaluating the response variability of the plate structure. In this study, a new auto-correlation function is derived to account for the uncertainty of the plate thickness; it is a function of the auto-correlation function and the coefficient of variation of the assumed stochastic field. The results obtained by the proposed weighted integral method and by Monte Carlo simulation coincide with each other, and both are almost equal to the theoretical result derived in this study. When the variability of the plate thickness is considered, the obtained results agree well with those of Lawrence and with Monte Carlo simulation.
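The weighted integral formulation itself is not reproduced here, but the Monte Carlo cross-check that the abstract uses for comparison can be sketched for a one-dimensional analogue. The geometry, load, correlation length, and the exponential auto-correlation below are arbitrary assumptions made only for illustration; only the zero mean and the 0.1 variance of the stochastic field are taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n_el, L, t0, cov = 40, 1.0, 0.01, 0.1 ** 0.5              # elements, length, mean thickness, CoV
x = (np.arange(n_el) + 0.5) * L / n_el
corr = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)     # assumed exponential auto-correlation
Lc = np.linalg.cholesky(corr + 1e-10 * np.eye(n_el))

def tip_deflection(t):
    """Tip deflection of a cantilever strip under a unit tip load (E = 1), simple quadrature."""
    I = t ** 3 / 12.0
    M = L - x                                             # bending moment for a unit tip load
    kappa = M / I
    dx = L / n_el
    theta = np.cumsum(kappa) * dx
    return np.sum(theta) * dx

samples = []
for _ in range(2000):
    f = Lc @ rng.standard_normal(n_el)                    # zero-mean correlated field sample
    t = np.clip(t0 * (1.0 + cov * f), 1e-4, None)         # guard against non-physical thickness
    samples.append(tip_deflection(t))
samples = np.array(samples)
print("CoV of tip deflection:", samples.std() / samples.mean())
```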

Wavelet-based Fusion of Optical and Radar Image using Gradient and Variance (그레디언트 및 분산을 이용한 웨이블릿 기반의 광학 및 레이더 영상 융합)

  • Ye, Chul-Soo
    • Korean Journal of Remote Sensing / v.26 no.5 / pp.581-591 / 2010
  • In this paper, we propose a new wavelet-based image fusion algorithm, which has advantages in both the frequency and spatial domains for signal analysis. The developed algorithm compares the ratio of the SAR image signal to the optical image signal and assigns the SAR image signal to the fused image if the ratio is larger than a predefined threshold value. If the ratio is smaller than the threshold, the fused image signal is determined by a weighted sum of the optical and SAR image signals. The fusion rules consider the ratio of the SAR image signal to the optical image signal, the image gradient, and the local variance of each image signal. We evaluated the proposed algorithm using Ikonos and TerraSAR-X satellite images. The proposed method showed better performance, in terms of entropy, image clarity, spatial frequency, and speckle index, than conventional methods that take only relatively strong SAR image signals into the fused image.
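A minimal sketch of the ratio-based rule described above, assuming PyWavelets is available. The wavelet, decomposition level, threshold, and the local-variance weight are placeholders rather than the paper's settings, and the gradient term of the actual fusion rule is omitted here.

```python
import numpy as np
import pywt
from scipy import ndimage

def fuse(optical, sar, thr=2.0, win=5, eps=1e-9):
    co = pywt.wavedec2(optical.astype(float), 'db2', level=2)
    cs = pywt.wavedec2(sar.astype(float), 'db2', level=2)
    fused = [co[0]]                                            # keep the optical approximation band
    for bands_o, bands_s in zip(co[1:], cs[1:]):
        out = []
        for o, s in zip(bands_o, bands_s):
            ratio = np.abs(s) / (np.abs(o) + eps)              # SAR-to-optical signal ratio
            var_o = ndimage.uniform_filter(o ** 2, win) - ndimage.uniform_filter(o, win) ** 2
            var_s = ndimage.uniform_filter(s ** 2, win) - ndimage.uniform_filter(s, win) ** 2
            w = var_o / (var_o + var_s + eps)                  # hypothetical local-variance weight
            out.append(np.where(ratio > thr, s, w * o + (1 - w) * s))
        fused.append(tuple(out))
    return pywt.waverec2(fused, 'db2')                         # may need cropping to the input size
```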

Performance of a Bayesian Design Compared to Some Optimal Designs for Linear Calibration (선형 캘리브레이션에서 베이지안 실험계획과 기존의 최적실험계획과의 효과비교)

  • 김성철
    • The Korean Journal of Applied Statistics / v.10 no.1 / pp.69-84 / 1997
  • We consider a linear calibration problem, $y_i = \alpha + \beta (x_i - x_0) + \epsilon_i$, $i = 1, 2, \cdots, n$; $y_f = \alpha + \beta (x_f - x_0) + \epsilon$, where we observe the $(x_i, y_i)$'s in controlled calibration experiments and later make inference about $x_f$ from a new observation $y_f$. The objective of the calibration design problem is to find the optimal design $x = (x_1, \cdots, x_n)$ that gives the best estimate of $x_f$. We compare Kim (1989)'s Bayesian design, which minimizes the expected value of the posterior variance of $x_f$, with some optimal designs from the literature. Kim suggested the Bayesian optimal design based on an analysis of the characteristics of the expected loss function and numerical computation, requiring that the design mean be equal to the prior mean and that the sum of squares be as large as possible. The designs to be compared are (1) Buonaccorsi (1986)'s AV optimal design, which minimizes the average asymptotic variance of the classical estimators, (2) the D-optimal and A-optimal designs for the linear regression model, which optimize functions of $M(x) = \sum x_i x_i'$, and (3) Hunter & Lamboy (1981)'s reference design from their paper. In order to compare these designs, each optimal in some sense, we consider two criteria. First, we compare them by the expected posterior variance criterion, and secondly, we perform Monte Carlo simulation to obtain HPD intervals and compare their lengths. If the prior mean of $x_f$ is at the center of the finite design interval, then the Bayesian, AV optimal, D-optimal, and A-optimal designs are identical: they are the equally weighted end-point design. However, if the prior mean is not at the center, they are not expected to be identical. In this case, we demonstrate that the almost Bayesian-optimal design is slightly better than the approximate AV optimal design. We also investigate the effects of the prior variance of the parameters and the solution for the case when the number of experiments is odd.
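As a small illustration of the kind of design comparison described above (all numerical values below are arbitrary assumptions, and the classical inverse estimator is used rather than the Bayesian posterior), one can check by simulation that the equally weighted end-point design spreads the estimate of $x_f$ less than an equally spaced design:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, x0, sigma, xf = 1.0, 2.0, 0.0, 0.3, 0.4      # assumed true values and target

def simulate(design, n_rep=5000):
    """Monte Carlo spread of the classical estimate x_f_hat = x0 + (y_f - a_hat)/b_hat."""
    ests = []
    for _ in range(n_rep):
        y = alpha + beta * (design - x0) + rng.normal(0, sigma, design.size)
        X = np.column_stack([np.ones(design.size), design - x0])
        a_hat, b_hat = np.linalg.lstsq(X, y, rcond=None)[0]
        y_f = alpha + beta * (xf - x0) + rng.normal(0, sigma)
        ests.append(x0 + (y_f - a_hat) / b_hat)
    return np.std(ests)

end_point = np.array([-1.0] * 5 + [1.0] * 5)              # equally weighted end-point design
uniform = np.linspace(-1.0, 1.0, 10)                      # equally spaced design
print(simulate(end_point), simulate(uniform))             # end-point design spreads less
```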

Robo-Advisor Algorithm with Intelligent View Model (지능형 전망모형을 결합한 로보어드바이저 알고리즘)

  • Kim, Sunwoong
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.39-55 / 2019
  • Recently, banks and large financial institutions have introduced many Robo-Advisor products. A Robo-Advisor is a robot that produces the optimal asset allocation portfolio for investors by using financial engineering algorithms without any human intervention. Since its first introduction on Wall Street in 2008, the market size has grown to 60 billion dollars and is expected to expand to 2,000 billion dollars by 2020. Since Robo-Advisor algorithms suggest asset allocation output to investors, mathematical or statistical asset allocation strategies are applied. The mean-variance optimization model developed by Markowitz is the typical asset allocation model. The model is a simple but quite intuitive portfolio strategy: assets are allocated so as to minimize the risk of the portfolio while maximizing its expected return using optimization techniques. Despite its theoretical background, both academics and practitioners find that the standard mean-variance optimization portfolio is very sensitive to the expected returns calculated from past price data, and corner solutions that allocate to only a few assets are often found. The Black-Litterman optimization model overcomes these problems by choosing a neutral Capital Asset Pricing Model equilibrium point. The implied equilibrium returns of each asset are derived from the equilibrium market portfolio through reverse optimization. The Black-Litterman model uses a Bayesian approach to combine subjective views on the price forecasts of one or more assets with the implied equilibrium returns, resulting in new estimates of risk and expected returns. These new estimates can produce the optimal portfolio through the well-known Markowitz mean-variance optimization algorithm. If the investor does not have any views on his asset classes, the Black-Litterman optimization model produces the same portfolio as the market portfolio. What if the subjective views are incorrect? A survey of the performance of stocks recommended by securities analysts shows very poor results. Therefore, incorrect views combined with the implied equilibrium returns may produce very poor portfolio output for Black-Litterman model users. This paper suggests an objective investor-view model based on Support Vector Machines (SVM), which have shown good performance in stock price forecasting. An SVM is a discriminative classifier defined by a separating hyperplane; linear, radial basis, and polynomial kernel functions are used to learn the hyperplanes. Input variables for the SVM are the returns, standard deviations, Stochastics %K, and price parity degree for each asset class. The SVM outputs expected stock price movements and their probabilities, which are used as input variables in the intelligent view model. The stock price movements are categorized into three phases: down, neutral, and up. The expected stock returns form the P matrix and their probability results are used in the Q matrix. The implied equilibrium return vector is combined with the intelligent view matrices, resulting in the Black-Litterman optimal portfolio. For comparison, the Markowitz mean-variance optimization model and the risk parity model are used, and the value-weighted market portfolio and the equal-weighted market portfolio serve as benchmark indexes. We collect the 8 KOSPI 200 sector indexes from January 2008 to December 2018, comprising 132 monthly index values. The training period is from 2008 to 2015 and the testing period is from 2016 to 2018.
Our suggested intelligent view model, combined with the implied equilibrium returns, produced the optimal Black-Litterman portfolio. In the out-of-sample period, this portfolio showed better performance than the well-known Markowitz mean-variance optimization portfolio, the risk parity portfolio, and the market portfolio. The total return of the 3-year Black-Litterman portfolio is 6.4%, the highest value, and the maximum drawdown is -20.8%, the lowest value. The Sharpe ratio, which measures the return-to-risk ratio, also shows the highest value at 0.17. Overall, our suggested view model shows the possibility of replacing subjective analysts' views with an objective view model for practitioners applying Robo-Advisor asset allocation algorithms in real trading.
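For reference, the core Black-Litterman combination that the abstract relies on can be sketched as follows. The risk-aversion coefficient, tau, and the Omega construction are common textbook conventions rather than the paper's settings, and the SVM-based view model that fills P and Q is not reproduced here.

```python
import numpy as np

def black_litterman(Sigma, w_mkt, P, Q, delta=2.5, tau=0.05):
    """Sigma: asset covariance; w_mkt: market-cap weights; P, Q: view matrices."""
    pi = delta * Sigma @ w_mkt                         # implied equilibrium returns (reverse optimization)
    Omega = np.diag(np.diag(P @ (tau * Sigma) @ P.T))  # view uncertainty (common simplification)
    A = np.linalg.inv(np.linalg.inv(tau * Sigma) + P.T @ np.linalg.inv(Omega) @ P)
    mu_bl = A @ (np.linalg.inv(tau * Sigma) @ pi + P.T @ np.linalg.inv(Omega) @ Q)
    w_opt = np.linalg.inv(delta * Sigma) @ mu_bl       # unconstrained mean-variance weights
    return mu_bl, w_opt
```

With no views (an empty P and Q), the posterior returns reduce to the implied equilibrium returns and the weights revert to the market portfolio, which is the neutral behaviour the abstract describes.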

Detection of Text Candidate Regions using Region Information-based Genetic Algorithm (영역정보기반의 유전자알고리즘을 이용한 텍스트 후보영역 검출)

  • Oh, Jun-Taek;Kim, Wook-Hyun
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.6 / pp.70-77 / 2008
  • This paper proposes a new text candidate region detection method that uses a genetic algorithm based on information from the segmented regions. In image segmentation, a classification of the pixels in each color channel and a region-unit reclassification for reducing inhomogeneous clusters are performed. The EWFCM (Entropy-based Weighted C-Means) algorithm, used to classify the pixels in each color channel, is an improved FCM algorithm augmented with spatial information, and it therefore removes meaningless regions such as noise. A region-based reclassification, based on the similarity between each segmented region of the most inhomogeneous cluster and the other clusters, reduces the inhomogeneous clusters more efficiently than pixel- and cluster-based reclassifications. Text candidate regions are then detected by a genetic algorithm based on the energy and variance of the directional edge components and the number and size of the segmented regions. The region-information-based detection method singles out semantic text candidate regions more accurately than pixel-based detection methods, and the detection results will be more useful in recognizing the text regions afterwards. Experiments showed the segmentation and detection results and confirmed that the proposed method is superior to existing methods.
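Purely as an illustration of how a genetic algorithm could score a candidate set of segmented regions with the four cues named above (energy and variance of directional edge components, number of regions, region size), one possible fitness is sketched below; the feature extraction and the weights are hypothetical, not the authors' formulation.

```python
import numpy as np

def region_features(edge_map, region_mask):
    """Edge energy, edge variance, and size of one segmented region."""
    vals = edge_map[region_mask]
    return vals.sum(), vals.var(), region_mask.sum()

def fitness(chromosome, edge_map, region_masks, w=(1.0, 1.0, 0.1, 0.001)):
    """Score a binary chromosome that selects which segmented regions are text candidates."""
    selected = [m for bit, m in zip(chromosome, region_masks) if bit]
    if not selected:
        return 0.0
    feats = np.array([region_features(edge_map, m) for m in selected])
    energy, var, size = feats[:, 0].mean(), feats[:, 1].mean(), feats[:, 2].mean()
    return w[0] * energy + w[1] * var - w[2] * len(selected) - w[3] * size
```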

Research on the Development of Distance Metrics for the Clustering of Vessel Trajectories in Korean Coastal Waters (국내 연안 해역 선박 항적 군집화를 위한 항적 간 거리 척도 개발 연구)

  • Seungju Lee;Wonhee Lee;Ji Hong Min;Deuk Jae Cho;Hyunwoo Park
    • Journal of Navigation and Port Research / v.47 no.6 / pp.367-375 / 2023
  • This study developed a new distance metric for vessel trajectories, applicable to marine traffic control services in Korean coastal waters. The proposed metric is designed as a weighted summation of the traditional Hausdorff distance, which measures the similarity between spatiotemporal data, and the differences in the average Speed Over Ground (SOG) and in the variance of the Course Over Ground (COG) between two trajectories. To validate the effectiveness of the new metric, a comparative analysis was conducted using actual Automatic Identification System (AIS) trajectory data in conjunction with an agglomerative clustering algorithm. Data visualizations confirmed that the trajectory clustering results obtained with the new metric reflect geographical distances and the distribution of vessel behavioral characteristics more accurately than conventional metrics such as the Hausdorff distance and the Dynamic Time Warping distance. Quantitatively, based on the Davies-Bouldin index, the clustering results were superior or comparable, and the metric demonstrated excellent efficiency in distance computation.
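A hedged sketch of the proposed distance follows: a weighted sum of the symmetric Hausdorff distance between the two position sequences, the difference in mean SOG, and the difference in COG variance. The weights are placeholders, since the paper's calibrated values are not given in the abstract, and plain (non-circular) variance is used for COG for simplicity.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def trajectory_distance(traj_a, traj_b, w=(1.0, 1.0, 1.0)):
    """traj_*: dict with 'xy' (N x 2 positions), 'sog' (N,), 'cog' (N, degrees)."""
    d_h = max(directed_hausdorff(traj_a['xy'], traj_b['xy'])[0],
              directed_hausdorff(traj_b['xy'], traj_a['xy'])[0])   # symmetric Hausdorff distance
    d_sog = abs(traj_a['sog'].mean() - traj_b['sog'].mean())       # difference in average SOG
    d_cog = abs(np.var(traj_a['cog']) - np.var(traj_b['cog']))     # difference in COG variance
    return w[0] * d_h + w[1] * d_sog + w[2] * d_cog
```

Such a pairwise distance can be assembled into a precomputed distance matrix and fed to an agglomerative clustering routine, which matches the evaluation setup described in the abstract.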