• Title/Summary/Keyword: Weighted Reconstruction Error


Object Tracking Based on Weighted Local Sub-space Reconstruction Error

  • Zeng, Xianyou;Xu, Long;Hu, Shaohai;Zhao, Ruizhen;Feng, Wanli
    • KSII Transactions on Internet and Information Systems (TIIS), v.13 no.2, pp.871-891, 2019
  • Visual tracking is a challenging task that requires learning an effective model to handle changes in target appearance caused by factors such as pose variation, illumination change, occlusion, and motion blur. In this paper, a novel tracking algorithm based on weighted local sub-space reconstruction error is presented. First, to account for appearance changes during tracking, a generative weight calculation method based on structural reconstruction error is proposed. Furthermore, an occlusion-aware template update scheme is introduced, in which a new template is reconstructed instead of simply using the best observation for the template update. The effectiveness and feasibility of the proposed algorithm are verified by comparing it with several state-of-the-art algorithms quantitatively and qualitatively.
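
A minimal sketch of the generative weighting idea described above, assuming candidate patches are vectorized and a PCA template sub-space (mean plus basis) has already been learned; the function names and the Gaussian weighting with parameter sigma are illustrative, not the paper's exact formulation:

```python
import numpy as np

def subspace_reconstruction_error(patch, mean, basis):
    """Reconstruction error of a vectorized patch under a PCA template
    sub-space whose columns in `basis` span the learned appearance."""
    centered = patch - mean
    coeffs = basis.T @ centered          # project onto the sub-space
    reconstruction = basis @ coeffs      # reconstruct from the projection
    return float(np.linalg.norm(centered - reconstruction) ** 2)

def candidate_weights(candidates, mean, basis, sigma=0.1):
    """Generative weights: candidates with lower reconstruction error
    under the learned sub-space receive larger weights."""
    errors = np.array([subspace_reconstruction_error(c, mean, basis)
                       for c in candidates])
    errors -= errors.min()               # avoid underflow in the exponential
    weights = np.exp(-errors / sigma)
    return weights / weights.sum()
```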

Sampling Set Selection Algorithm for Weighted Graph Signals (가중치를 갖는 그래프신호를 위한 샘플링 집합 선택 알고리즘)

  • Kim, Yoon Hak
    • The Journal of the Korea institute of electronic communication sciences, v.17 no.1, pp.153-160, 2022
  • A greedy algorithm is proposed to select a subset of graph nodes for bandlimited graph signals in which each signal value is generated with its own weight. Since the graph signals are weighted, we seek to minimize the weighted reconstruction error, which is formulated using QR factorization, and derive an analytic result for iteratively finding the node that minimizes this error, leading to a simplified iterative selection process. Experiments show that the proposed method achieves a significant performance gain for weighted graph signals on various graphs compared with previously proposed selection techniques.
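
A minimal sketch of the greedy criterion described above, assuming the bandlimited signal lies in the span of the first K Laplacian eigenvectors and each node carries a positive weight; the error measure below is a plain weighted least-squares proxy rather than the paper's QR-based formulation, and the path-graph usage is only a toy example:

```python
import numpy as np

def weighted_mse(U_K, S, w):
    """Weighted reconstruction-error proxy for sampling set S, assuming the
    signal lies in span(U_K) and the samples carry unit-variance noise.
    Once U_K[S] has full column rank the bandlimited part is recovered
    exactly, so what remains is the per-node, weight-scaled noise term."""
    A = U_K[S, :]                              # observation matrix on sampled nodes
    recon = U_K @ np.linalg.pinv(A)            # maps samples to the full signal
    return float(np.sum(w[:, None] * recon**2))

def greedy_weighted_sampling(U_K, w, m):
    """Greedily add the node that minimizes the weighted error at each step."""
    N = U_K.shape[0]
    S = []
    for _ in range(m):
        remaining = [v for v in range(N) if v not in S]
        best = min(remaining, key=lambda v: weighted_mse(U_K, S + [v], w))
        S.append(best)
    return S

# toy usage: path-graph Laplacian eigenvectors as the bandlimited basis
N, K, m = 20, 4, 6
L = np.diag([1] + [2]*(N - 2) + [1]) - np.eye(N, k=1) - np.eye(N, k=-1)
_, V = np.linalg.eigh(L)
U_K = V[:, :K]
w = np.random.rand(N) + 0.5                    # per-node signal weights
print(greedy_weighted_sampling(U_K, w, m))
```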

Design of Lazy Classifier based on Fuzzy k-Nearest Neighbors and Reconstruction Error (퍼지 k-Nearest Neighbors 와 Reconstruction Error 기반 Lazy Classifier 설계)

  • Roh, Seok-Beom;Ahn, Tae-Chon
    • Journal of the Korean Institute of Intelligent Systems, v.20 no.1, pp.101-108, 2010
  • In this paper, we propose a new lazy classifier that combines a fuzzy k-nearest neighbors approach with feature selection based on reconstruction error, the performance index for locally linear reconstruction. When a new query point is given, the fuzzy k-nearest neighbors approach defines the local area in which the local classifier is applied and assigns weighting values to the data patterns within that area. After the local area is defined and the weighting values are assigned, feature selection is carried out to reduce the dimensionality of the feature space. Once features are selected in terms of the reconstruction error, the local classifier, a polynomial model, is built using weighted least-squares estimation. The experimental study includes a comparative analysis against several commonly used methods such as standard neural networks, support vector machines, linear discriminant analysis, and C4.5 trees.
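
A minimal sketch of the lazy, locally weighted classification step described above (the reconstruction-error-based feature selection is omitted), assuming two-class labels coded as ±1 and a first-order polynomial local model; names such as `lazy_local_classifier` and the fuzzifier m are illustrative:

```python
import numpy as np

def fuzzy_knn_weights(query, neighbors, m=2.0, eps=1e-12):
    """Fuzzy-membership-style weights: closer neighbors get larger weights
    (the usual fuzzy k-NN form 1 / d^(2/(m-1)), normalized to sum to one)."""
    d = np.linalg.norm(neighbors - query, axis=1)
    w = 1.0 / (d ** (2.0 / (m - 1.0)) + eps)
    return w / w.sum()

def lazy_local_classifier(query, X, y, k=15):
    """Select the k nearest patterns around the query, weight them with the
    fuzzy memberships, fit a local linear model by weighted least squares,
    and return the sign of the local model as the predicted class."""
    k = min(k, len(X))
    idx = np.argsort(np.linalg.norm(X - query, axis=1))[:k]
    Xl, yl = X[idx], y[idx]                      # local area around the query
    w = fuzzy_knn_weights(query, Xl)
    A = np.hstack([np.ones((k, 1)), Xl])         # bias + linear terms
    sw = np.sqrt(w)                              # weighted least squares via row scaling
    coef, *_ = np.linalg.lstsq(A * sw[:, None], yl * sw, rcond=None)
    return np.sign(np.r_[1.0, query] @ coef)
```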

Rank-weighted reconstruction feature for a robust deep neural network-based acoustic model

  • Chung, Hoon;Park, Jeon Gue;Jung, Ho-Young
    • ETRI Journal, v.41 no.2, pp.235-241, 2019
  • In this paper, we propose a rank-weighted reconstruction feature to improve the robustness of a feed-forward deep neural network (FFDNN)-based acoustic model. In the FFDNN-based acoustic model, an input feature is constructed by vectorizing a submatrix created by slicing the feature vectors of the frames within a context window. In this type of feature construction, the context window size is important because it determines the amount of trivial or discriminative information, such as redundancy or temporal context, in the input features. However, it is questionable whether a single parameter can sufficiently control this quantity of information. Therefore, we investigated input feature construction from the perspectives of rank and nullity, and propose a rank-weighted reconstruction feature that retains the speech information components while reducing the trivial components. The proposed method was evaluated on the TIMIT phone recognition and Wall Street Journal (WSJ) tasks. It reduced the phone error rate on TIMIT from 18.4% to 18.0%, and the word error rate on WSJ from 4.70% to 4.43%.
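
A minimal sketch of a rank-weighted reconstruction of the context-window submatrix, assuming frame features are stacked row-wise and an exponential decay over singular-value rank stands in for the paper's unspecified weighting scheme:

```python
import numpy as np

def rank_weighted_feature(frames, center, context=5, alpha=0.5):
    """Build the context-window submatrix around `center`, reconstruct it with
    rank-dependent weights on the singular components so that leading
    (speech-dominant) components are kept and trailing (trivial) components
    are attenuated, then vectorize the result as the DNN input feature."""
    lo, hi = center - context, center + context + 1
    M = frames[lo:hi, :]                        # (2*context+1) x dim submatrix
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    ranks = np.arange(len(s))
    weights = np.exp(-alpha * ranks)            # assumed rank-weighting scheme
    M_rw = (U * (weights * s)) @ Vt             # rank-weighted reconstruction
    return M_rw.reshape(-1)

# toy usage with random "MFCC-like" frames
frames = np.random.randn(100, 40)
feat = rank_weighted_feature(frames, center=50)
print(feat.shape)                               # (11 * 40,) = (440,)
```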

A Study of the 3D-Reconstruction of indoor using Stereo Camera System (스테레오 카메라를 이용한 실내환경의 3차원 복원에 관한 연구)

  • Lee Dong-Hun;Um Dae-Youn;Kang Hoon
    • Journal of the Korean Institute of Intelligent Systems, v.15 no.1, pp.42-47, 2005
  • In this paper, we address the 3D reconstruction of an indoor environment using data extracted from a pair of images captured by a stereo camera. In general, 3-dimensional data can be extracted with IR sensors, laser sensors, or stereo camera sensors; the stereo camera sensor offers high performance at a reasonable price. We used a window correlation matching method to extract 3-dimensional data from the stereo images and propose a new method, the histogram-weighted Hough transform, to reduce erroneous data. With this method, the error data in each stereo image are reduced, so the reconstruction improves. The 3-dimensional reconstruction is implemented using DirectX, which is well known as a 3D game development tool. We show that the stereo camera can not only be used to extract 3-dimensional data but can also be applied to reconstructing the 3-dimensional environment, and we attempt to reduce the error data using various methods.
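
A minimal sketch of window correlation matching on one rectified scanline, assuming grayscale images and a sum-of-squared-differences score in place of the paper's exact correlation measure:

```python
import numpy as np

def window_matching_disparity(left, right, y, x, win=5, max_disp=32):
    """Compare a window around (y, x) in the left image with candidate windows
    shifted by each disparity in the right image along the same scanline, and
    return the best-matching shift (depth is then proportional to
    focal_length * baseline / disparity)."""
    h = win // 2
    ref = left[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    best_d, best_score = 0, np.inf
    for d in range(min(max_disp, x - h) + 1):
        cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1].astype(float)
        score = np.sum((ref - cand) ** 2)        # sum of squared differences
        if score < best_score:
            best_d, best_score = d, score
    return best_d
```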

Modified Raised-Cosine Interpolation and Application to Image Processing (변형된 상승여현 보간법의 제안과 영상처리에의 응용)

  • 하영호;김원호;김수중
    • Journal of the Korean Institute of Telematics and Electronics, v.25 no.4, pp.453-459, 1988
  • A new interpolation function, named modified raised-cosine interpolation, is proposed. The function is derived from a linear combination of weighted triangular and raised-cosine functions to reduce the effect of the side lobes that cause interpolation error. Interpolation error decreases significantly for higher-order convolutional interpolation functions, but at the expense of resolution error due to attenuation of the main lobe. The proposed interpolation function, however, reduces the side lobes while preserving the main lobe. To demonstrate its practicality, the function is applied to image reconstruction and enlargement.
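
A minimal sketch of such a kernel, assuming an adjustable mixing weight a between the triangular and raised-cosine terms (the paper's exact weighting is not reproduced here); both components interpolate exactly at the sample points, so their combination does as well:

```python
import numpy as np

def triangular(t):
    """Linear (triangular) interpolation kernel, support |t| <= 1."""
    t = np.abs(t)
    return np.where(t <= 1.0, 1.0 - t, 0.0)

def raised_cosine(t):
    """Raised-cosine interpolation kernel, support |t| <= 1."""
    t = np.abs(t)
    return np.where(t <= 1.0, 0.5 * (1.0 + np.cos(np.pi * t)), 0.0)

def modified_raised_cosine(t, a=0.5):
    """Weighted combination of the two kernels; `a` is an assumed mixing weight."""
    return a * triangular(t) + (1.0 - a) * raised_cosine(t)

def interpolate_1d(samples, x, a=0.5):
    """Interpolate a uniformly sampled 1-D signal at position x (in sample units)."""
    n = np.arange(len(samples))
    return float(np.sum(samples * modified_raised_cosine(x - n, a)))
```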


Design of 3D Laser Radar Based on Laser Triangulation

  • Yang, Yang;Zhang, Yuchen;Wang, Yuehai;Liu, Danian
    • KSII Transactions on Internet and Information Systems (TIIS), v.13 no.5, pp.2414-2433, 2019
  • The aim of this paper is to design a 3D laser radar prototype based on laser triangulation. The mathematical model of distance sensitivity is deduced, and a pixel-to-distance conversion formula is discussed and used to complete 3D scanning. A center-position extraction algorithm for the laser spot is proposed, and errors from the line laser, camera distortion, and installation are corrected using the proposed weighted-average algorithm. Finally, a three-dimensional analytic computation is given to transform the measured distances into point-cloud data. The experimental results show that this 3D laser radar can accomplish 3D object scanning and 3D environment reconstruction. In addition, the experiments show that the product of the camera focal length and the baseline length is the key factor influencing measurement accuracy.
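
A minimal sketch of the ideal pixel-to-distance conversion for laser triangulation, assuming the laser beam is parallel to the camera's optical axis; the parameter names are illustrative, and the real system's laser-line, distortion, and installation corrections are omitted:

```python
def triangulation_distance(px_offset, focal_mm, baseline_mm, pixel_pitch_mm):
    """By similar triangles, distance = (focal length * baseline) divided by
    the lateral displacement of the laser spot on the image sensor."""
    x_mm = px_offset * pixel_pitch_mm            # spot displacement on the sensor
    return focal_mm * baseline_mm / x_mm

def distance_sensitivity(px_offset, focal_mm, baseline_mm, pixel_pitch_mm):
    """Distance change per pixel of spot movement: it grows with the square of
    the distance and shrinks as focal_length * baseline grows, which is why
    that product dominates measurement accuracy."""
    d = triangulation_distance(px_offset, focal_mm, baseline_mm, pixel_pitch_mm)
    return d ** 2 * pixel_pitch_mm / (focal_mm * baseline_mm)

# toy usage: 120-pixel offset, 8 mm lens, 100 mm baseline, 6 um pixels
print(triangulation_distance(120, focal_mm=8.0, baseline_mm=100.0, pixel_pitch_mm=0.006))
```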

The Correctness Comparison of MCIH Model and WMLF/GI Model for the Individual Haplotyping Reconstruction (일배체형 재조합을 위한 MCIH 모델과 WMLF/GI 모델의 정확도 비교)

  • Jeong, In-Seon;Kang, Seung-Ho;Lim, Hyeong-Seok
    • The KIPS Transactions: Part B, v.16B no.2, pp.157-161, 2009
  • The Minimum Letter Flips (MLF) and Weighted Minimum Letter Flips (WMLF) models can reconstruct haplotypes more accurately from SNP fragments with many errors and gaps when related genotype information is introduced, and WMLF-based reconstruction is known to be more accurate than MLF-based reconstruction. In this paper, we analyze the two models under different rates of homozygous sites in the genotype information and different confidence levels according to sequencing quality, and compare their performance using a neural network and a genetic algorithm. The experimental results indicate that when the rate of homozygous sites is high and the sequencing quality is good, WMLF/GI reconstructs haplotypes more accurately than MCIH, especially when the error and gap rates of the SNP fragments are high.

Optimized inverse distance weighted interpolation algorithm for γ radiation field reconstruction

  • Biao Zhang;Jinjia Cao;Shuang Lin;Xiaomeng Li;Yulong Zhang;Xiaochang Zheng;Wei Chen;Yingming Song
    • Nuclear Engineering and Technology, v.56 no.1, pp.160-166, 2024
  • The inversion of the radiation field distribution is of great significance at nuclear facility decommissioning sites. However, radiation fields often contain mixtures of multiple radionuclides, making the inversion extremely difficult and posing a great challenge. Many radiation field reconstruction methods, such as the Kriging algorithm and neural networks, cannot solve this problem satisfactorily. To address this issue, this paper proposes an optimized inverse distance weighted (IDW) interpolation algorithm for reconstructing the gamma radiation field. The algorithm corrects the difference between the experimental and simulated scenarios, and the data are normalized during preprocessing to improve accuracy. In the experiment, gamma radiation fields are set up with three Co-60 radioactive sources and verified using the optimized IDW algorithm. The results show that the mean absolute percentage error (MAPE) of the reconstruction obtained with the optimized IDW algorithm is 16.0%, significantly better than the result obtained with the Kriging method. Importantly, the optimized IDW algorithm is suitable for radiation scenarios with multiple radioactive sources, providing an effective means of obtaining the radiation field distribution in nuclear facility decommissioning engineering.
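
A minimal sketch of the core IDW step, assuming 2-D measurement coordinates and a power-2 distance weighting; the paper's scene correction and normalization preprocessing are not reproduced here:

```python
import numpy as np

def idw_interpolate(known_xy, known_dose, query_xy, power=2.0, eps=1e-12):
    """Basic inverse distance weighted interpolation: each measured point
    contributes to the query value in proportion to 1 / distance**power."""
    d = np.linalg.norm(known_xy - query_xy, axis=1)
    if np.any(d < eps):                    # query coincides with a measurement
        return float(known_dose[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * known_dose) / np.sum(w))

# toy usage: three measured dose-rate points and one query location
pts = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
dose = np.array([12.0, 3.5, 4.1])
print(idw_interpolate(pts, dose, np.array([1.0, 1.0])))
```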

Haplotype Assembly from Weighted SNP Fragments and Related Genotype Information (신뢰도를 가진 SNP 단편들과 유전자형으로부터 일배체형 조합)

  • Kang, Seung-Ho;Jeong, In-Seon;Choi, Mun-Ho;Lim, Hyeong-Seok
    • Journal of KIISE: Computer Systems and Theory, v.35 no.11, pp.509-516, 2008
  • The Minimum Letter Flips (MLF) model and the Weighted Minimum Letter Flips (WMLF) model are used to solve the haplotype assembly problem, but they are effective only when the error rate in the SNP fragments is low. In this paper, we first establish a new computational model that employs related genotype information as an improvement of the WMLF model and show its NP-hardness, and then propose an efficient genetic algorithm to solve the haplotype assembly problem. Experiments on a random data set and a real data set indicate that introducing genotype information into the WMLF model is quite effective in improving the reconstruction rate, especially when the error rate in the SNP fragments is high. The results also show that genotype information increases the convergence speed of the genetic algorithm.
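
A minimal sketch of a weighted-letter-flips cost with a genotype-consistency penalty, of the kind a genetic algorithm could use as a fitness function; the encoding (gaps as -1, heterozygous genotype sites as 2) and the penalty term are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

def wmlf_cost(fragments, weights, h1, h2, genotype, penalty=1e6):
    """Weighted minimum-letter-flips cost for a candidate haplotype pair:
    each SNP fragment is assigned to the closer haplotype and the
    confidence-weighted number of mismatching (flipped) letters is summed.
    Entries of -1 in `fragments` denote gaps; `genotype` uses 0/1 for
    homozygous sites and 2 for heterozygous sites."""
    cost = 0.0
    for frag, w in zip(fragments, weights):
        covered = frag >= 0
        c1 = np.sum(w[covered] * (frag[covered] != h1[covered]))
        c2 = np.sum(w[covered] * (frag[covered] != h2[covered]))
        cost += min(c1, c2)
    # genotype consistency: heterozygous sites must differ, homozygous must match
    het = genotype == 2
    cost += penalty * np.sum(h1[het] == h2[het])
    cost += penalty * np.sum((h1[~het] != genotype[~het]) | (h2[~het] != genotype[~het]))
    return float(cost)
```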