• Title/Summary/Keyword: weighted dictionary

Search results: 19

Double 𝑙1 regularization for moving force identification using response spectrum-based weighted dictionary

  • Yuandong Lei; Bohao Xu; Ling Yu
    • Structural Engineering and Mechanics / v.91 no.2 / pp.227-238 / 2024
  • Sparse regularization methods have proven effective in addressing the ill-posed equations encountered in moving force identification (MFI). However, existing studies aiming to enhance MFI accuracy often ignore the complexity of vehicle loads. To tackle this issue, this study proposes a double 𝑙1 regularization method for MFI based on a response spectrum-based weighted dictionary. Firstly, the relationship between vehicle-induced responses and moving vehicle loads (MVL) is established. The structural responses are then expanded in the frequency domain to obtain prior knowledge about the MVL and to construct a response spectrum-based weighted dictionary for higher-accuracy MFI. Secondly, using this weighted dictionary, a double 𝑙1 regularization framework is presented that identifies the static and dynamic components of the MVL successively via the alternating direction method of multipliers (ADMM). To assess the performance of the proposed method, numerical simulations are conducted with two types of MVL: one composed of trigonometric functions and one derived from a 1/4 bridge-vehicle model. Furthermore, a series of MFI experimental verifications is carried out in the laboratory. The results show that the proposed method achieves higher accuracy and stronger robustness to noise than other traditional regularization methods.
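
The double 𝑙1 idea rests on a standard building block: a weighted 𝑙1-regularized least-squares problem solved by ADMM. Below is a minimal sketch of one such solve, with a random placeholder dictionary standing in for the response spectrum-based weighted dictionary and uniform weights standing in for the spectrum-derived ones; the paper applies this kind of solve twice, to the static and then the dynamic load components.

```python
import numpy as np

def admm_weighted_l1(A, y, w, lam=0.1, rho=1.0, n_iter=200):
    """Solve min_x 0.5*||A x - y||^2 + lam*||w * x||_1 by ADMM.

    A : (m, n) dictionary (placeholder here, not the paper's).
    w : (n,) per-atom weights (assumed uniform in this demo).
    """
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    Aty = A.T @ y
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))  # factor once
    for _ in range(n_iter):
        # x-update: ridge-regularized least squares via the Cholesky factor
        x = np.linalg.solve(L.T, np.linalg.solve(L, Aty + rho * (z - u)))
        # z-update: coordinate-wise soft threshold with weighted level
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam * w / rho, 0.0)
        u += x - z
    return z

# toy demo: recover a sparse load vector from noisy measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 40))
x_true = np.zeros(40); x_true[[3, 17]] = [2.0, -1.5]
y = A @ x_true + 0.01 * rng.standard_normal(100)
x_hat = admm_weighted_l1(A, y, w=np.ones(40), lam=0.5)
print(x_hat[[3, 17]])   # near 2.0 and -1.5, shrunk slightly by the l1 penalty
```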

Weighted Bayesian Automatic Document Categorization Based on Association Word Knowledge Base by Apriori Algorithm (Apriori알고리즘에 의한 연관 단어 지식 베이스에 기반한 가중치가 부여된 베이지안 자동 문서 분류)

  • 고수정; 이정현
    • Journal of Korea Multimedia Society / v.4 no.2 / pp.171-181 / 2001
  • The previous Bayesian document categorization method has two problems: it requires much time and effort for word clustering, and it hardly reflects the semantic information between words. In this paper, we propose a weighted Bayesian document categorization method based on an association word knowledge base acquired by mining techniques. The proposed method constructs a weighted association word knowledge base from the documents in the training set. A classifier based on Bayesian probability then categorizes documents using the constructed knowledge base. To evaluate the proposed method, we compare our experimental results with those of a weighted Bayesian method using a vocabulary dictionary built by mutual information, a weighted Bayesian method, and a simple Bayesian method. The experimental results show that the weighted Bayesian method using the association word knowledge base improves performance by 0.87%, 2.77%, and 5.09% over these three baselines, respectively.
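
As a rough illustration of how association-word weights can enter a Bayesian classifier, the sketch below scales each word's log-likelihood contribution by a weight. The weights dict is a hypothetical stand-in for the Apriori-mined association word knowledge base, and the training documents are toy data.

```python
import math
from collections import Counter, defaultdict

def train(docs):
    """docs: list of (category, tokens). Returns priors, word counts, vocab."""
    prior, counts, vocab = Counter(), defaultdict(Counter), set()
    for cat, toks in docs:
        prior[cat] += 1
        counts[cat].update(toks)
        vocab.update(toks)
    return prior, counts, vocab

def classify(tokens, prior, counts, vocab, weights, total):
    best, best_score = None, -math.inf
    for cat in prior:
        n = sum(counts[cat].values())
        score = math.log(prior[cat] / total)
        for t in tokens:
            p = (counts[cat][t] + 1) / (n + len(vocab))  # Laplace smoothing
            score += weights.get(t, 1.0) * math.log(p)   # association weight
        if score > best_score:
            best, best_score = cat, score
    return best

docs = [("sports", "ball game team".split()),
        ("econ",   "market stock price".split())]
prior, counts, vocab = train(docs)
weights = {"market": 2.0, "team": 1.5}   # hypothetical Apriori-derived weights
print(classify("stock market up".split(), prior, counts, vocab, weights, len(docs)))
```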


Time delay estimation between two receivers using weighted dictionary method for active sonar (능동소나를 위한 가중 딕션너리를 사용한 두 수신기 간 신호 지연 추정 방법)

  • Lim, Jun-Seok; Kim, Seongil
    • The Journal of the Acoustical Society of Korea / v.40 no.5 / pp.460-465 / 2021
  • In active sonar, time delay estimation is used to find the distance between the target and the sonar. Among time delay estimation methods for active sonar, estimation in the frequency domain is widely used: the delay appears as a linear phase, so the problem can be treated as frequency estimation and solved relatively easily. However, this approach is prone to a rapid increase in error due to noise. In this paper, we propose a new method that applies a weighted dictionary and sparsity to reduce this error growth, and we extend it to two receivers to estimate the time delay between them. In a white noise environment, the proposed method is compared with conventional approaches, including the conventional frequency-domain algorithm and the Generalized Cross Correlation-Phase Transform (GCC-PHAT). We show that the proposed method achieves a performance gain of about 15 dB to 60 dB over the other algorithms.
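
The frequency-domain view is that a delay tau shows up as a linear phase exp(-j*2*pi*f*tau) in the cross-spectrum of the two receivers, so a dictionary of candidate-delay atoms can be matched against it. The sketch below reduces the paper's weighted-dictionary sparse solver to picking the single best-matching atom (an assumption made for brevity), with the cross-spectrum magnitude as a crude SNR weight.

```python
import numpy as np

def estimate_delay(x1, x2, fs, max_delay, n_grid=400):
    """Grid-search the delay between two receivers in the frequency domain.

    Atoms exp(-j*2*pi*f*tau) form the delay dictionary; the weights
    (cross-spectrum magnitude) emphasize high-SNR bins.  The paper's
    sparse weighted-dictionary solver is simplified here to selecting
    the single best-matching atom.
    """
    n = len(x1)
    X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
    cross = X1 * np.conj(X2)                 # phase carries the delay
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    w = np.abs(cross)                        # crude SNR weighting
    taus = np.linspace(-max_delay, max_delay, n_grid)
    atoms = np.exp(-2j * np.pi * np.outer(taus, f))   # (n_grid, n_bins)
    scores = np.abs(atoms @ (w * np.exp(1j * np.angle(cross))))
    return taus[np.argmax(scores)]

fs = 1000.0
t = np.arange(1024) / fs
s = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 120 * t)
x1 = s
x2 = np.roll(s, int(0.004 * fs))             # x2 lags x1 by 4 ms
print(estimate_delay(x1, x2, fs, max_delay=0.01))   # ~0.004 s
```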

Person Re-identification using Sparse Representation with a Saliency-weighted Dictionary

  • Kim, Miri; Jang, Jinbeum; Paik, Joonki
    • IEIE Transactions on Smart Processing and Computing / v.6 no.4 / pp.262-268 / 2017
  • Intelligent video surveillance systems have been developed to monitor wide areas and find specific target objects using a large-scale database. However, person re-identification presents challenges such as pose changes and occlusions. To solve these problems, this paper presents an improved person re-identification method using sparse representation and saliency-based dictionary construction. The proposed method consists of three parts: i) feature description based on salient colors and textures for dictionary elements, ii) orthogonal atom selection using cosine similarity to deal with pose and viewpoint changes, and iii) measurement of the reconstruction error to rank the gallery corresponding to a probe object. The proposed method performs well because robust descriptors, generated by weighting salient features, are used as dictionary atoms, and atoms are selected so as to reduce the excessive redundancy that lowers accuracy. The proposed method can therefore be applied in large-scale database surveillance systems to search for a specific object.
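
A compact sketch of steps ii) and iii) above: greedy removal of highly similar gallery atoms by cosine similarity, then ranking by reconstruction error. Ridge-regression coding stands in for the paper's sparse coding (an assumption), and the gallery descriptors are random placeholders.

```python
import numpy as np

def select_atoms(D, max_sim=0.95):
    """Greedily keep gallery atoms whose mutual cosine similarity stays
    below max_sim, reducing dictionary redundancy (step ii)."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    keep = []
    for j in range(D.shape[1]):
        if all(abs(Dn[:, j] @ Dn[:, k]) < max_sim for k in keep):
            keep.append(j)
    return keep

def rank_gallery(D, probe, lam=0.1):
    """Code the probe over each gallery atom with ridge regression (a
    convex stand-in for sparse coding) and rank identities by
    reconstruction error (step iii)."""
    errors = []
    for j in range(D.shape[1]):
        d = D[:, j:j+1]
        alpha = np.linalg.solve(d.T @ d + lam, d.T @ probe)
        errors.append(np.linalg.norm(probe - d @ alpha))
    return np.argsort(errors)

rng = np.random.default_rng(1)
D = rng.standard_normal((64, 10))    # 10 gallery descriptors (hypothetical)
probe = D[:, 3] + 0.05 * rng.standard_normal(64)
print(rank_gallery(D[:, select_atoms(D)], probe)[0])  # identity 3 in this toy setup
```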

No-reference Image Quality Assessment With A Gradient-induced Dictionary

  • Li, Leida; Wu, Dong; Wu, Jinjian; Qian, Jiansheng; Chen, Beijing
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.1 / pp.288-307 / 2016
  • Image distortions are typically characterized by degradations of structures. Dictionaries learned from natural images can capture the underlying structures in images, which are important for image quality assessment (IQA). This paper presents a general-purpose no-reference image quality metric using a GRadient-Induced Dictionary (GRID). A dictionary is first constructed based on gradients of natural images using K-means clustering. Then image features are extracted using the dictionary based on Euclidean-norm coding and max-pooling. A distortion classification model and several distortion-specific quality regression models are trained using the support vector machine (SVM) by combining image features with distortion types and subjective scores, respectively. To evaluate the quality of a test image, the distortion classification model is used to determine the probabilities that the image belongs to different kinds of distortions, while the regression models are used to predict the corresponding distortion-specific quality scores. Finally, an overall quality score is computed as the probability-weighted sum of the distortion-specific quality scores. The proposed metric can evaluate image quality accurately and efficiently using a small dictionary. Its performance is verified on public image quality databases. Experimental results demonstrate that the proposed metric generates quality scores highly consistent with human perception and outperforms state-of-the-art methods.
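
The dictionary-construction and coding pipeline can be sketched briefly: K-means over gradient patches gives the atoms, and per-atom responses are max-pooled into one feature vector per image. The inverse-distance response below is an assumption standing in for the paper's exact Euclidean-norm coding, random arrays stand in for natural images, and the SVM classification/regression stage is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

def gradient_patches(img, patch=8, step=8):
    """Collect flattened gradient-magnitude patches from an image."""
    gy, gx = np.gradient(img.astype(float))
    grad = np.hypot(gx, gy)
    out = []
    for i in range(0, grad.shape[0] - patch + 1, step):
        for j in range(0, grad.shape[1] - patch + 1, step):
            out.append(grad[i:i+patch, j:j+patch].ravel())
    return np.array(out)

def encode(patches, centers):
    """Distance of each patch to every atom, inverted so that a closer
    atom gives a larger response (assumption), then max-pooled over
    patches to yield one feature per atom."""
    d = np.linalg.norm(patches[:, None, :] - centers[None, :, :], axis=2)
    return (1.0 / (1.0 + d)).max(axis=0)

rng = np.random.default_rng(2)
train_imgs = [rng.random((64, 64)) for _ in range(5)]  # stand-ins for natural images
P = np.vstack([gradient_patches(im) for im in train_imgs])
km = KMeans(n_clusters=16, n_init=10, random_state=0).fit(P)  # the dictionary
feat = encode(gradient_patches(rng.random((64, 64))), km.cluster_centers_)
print(feat.shape)   # (16,) feature vector fed to the SVM stage
```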

Nearest-Neighbors Based Weighted Method for the BOVW Applied to Image Classification

  • Xu, Mengxi; Sun, Quansen; Lu, Yingshu; Shen, Chenming
    • Journal of Electrical Engineering and Technology / v.10 no.4 / pp.1877-1885 / 2015
  • This paper presents a new Nearest-Neighbors based weighted representation for images and a weighted K-Nearest-Neighbors (WKNN) classifier to improve the precision of image classification using Bag of Visual Words (BOVW) based models. Scale-invariant feature transform (SIFT) features are first extracted from images. Then, the K-means++ algorithm is adopted in place of the conventional K-means algorithm to generate a more effective visual dictionary. Furthermore, the histogram of visual words is made more expressive by the proposed weighted vector quantization (WVQ). Finally, the WKNN classifier is applied to improve classification between images that contain similar levels of background noise. Average precision and absolute change degree are calculated to assess the classification performance and the stability of the K-means++ algorithm, respectively. Experimental results on three diverse datasets (Caltech-101, Caltech-256, and PASCAL VOC 2011) show that the proposed WVQ and WKNN methods further improve classification performance.
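
A minimal sketch of the two weighted pieces follows: a WVQ histogram in which each descriptor votes for its few nearest visual words with inverse-distance weights (one plausible reading of the paper's WVQ), and a distance-weighted k-NN vote. Random arrays stand in for SIFT descriptors, and sklearn's k-means++ initialization builds the visual dictionary.

```python
import numpy as np
from sklearn.cluster import KMeans

def wvq_histogram(descriptors, centers, k=3):
    """Weighted vector quantization: each descriptor votes for its k
    nearest visual words, weights inversely proportional to distance."""
    hist = np.zeros(len(centers))
    for d in descriptors:
        dist = np.linalg.norm(centers - d, axis=1)
        idx = np.argsort(dist)[:k]
        w = 1.0 / (dist[idx] + 1e-8)
        hist[idx] += w / w.sum()
    return hist / hist.sum()

def wknn_predict(x, X_train, y_train, k=3):
    """Distance-weighted k-NN vote over training histograms."""
    dist = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(dist)[:k]
    w = 1.0 / (dist[idx] + 1e-8)
    labels = y_train[idx]
    return max(set(labels), key=lambda c: w[labels == c].sum())

rng = np.random.default_rng(3)
desc = rng.random((500, 128))                 # stand-in for SIFT descriptors
km = KMeans(n_clusters=32, init="k-means++", n_init=10, random_state=0).fit(desc)
h = wvq_histogram(rng.random((40, 128)), km.cluster_centers_)
X_train = np.vstack([wvq_histogram(rng.random((40, 128)), km.cluster_centers_)
                     for _ in range(6)])
y_train = np.array([0, 0, 0, 1, 1, 1])
print(wknn_predict(h, X_train, y_train))
```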

Weighted Disassemble-based Correction Method to Improve Recognition Rates of Korean Text in Signboard Images (간판영상에서 한글 인식 성능향상을 위한 가중치 기반 음소 단위 분할 교정)

  • Lee, Myung-Hun; Yang, Hyung-Jeong; Kim, Soo-Hyung; Lee, Guee-Sang; Kim, Sun-Hee
    • The Journal of the Korea Contents Association / v.12 no.2 / pp.105-115 / 2012
  • In this paper, we propose a correction method that uses phoneme-unit segmentation and a weighted Disassemble Levenshtein Distance to fix misrecognized Korean text in signboard images. The proposed method computes distances between recognized texts segmented into phoneme units and retrieves the best-matching texts from a signboard text database. To verify the efficiency of the proposed method, a dictionary database was built from 1.3 million words collected from signboards nationwide, with duplicated words removed. We compared the proposed method with the Levenshtein Distance and the Disassemble Levenshtein Distance, two representative string comparison algorithms. As a result, the proposed weighted Disassemble Levenshtein Distance improves recognition rates by 29.85% and 6% on average over these two conventional methods, respectively.
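
Hangul syllables decompose arithmetically into initial/medial/final jamo, which is what makes phoneme-unit ("disassemble") edit distance possible. The sketch below implements a weighted Levenshtein distance over jamo in which substitutions within the same phoneme class cost less; the specific weights are illustrative assumptions, not the paper's values.

```python
def disassemble(text):
    """Decompose precomposed Hangul syllables (U+AC00..U+D7A3) into
    (initial, medial, final) jamo indices; pass other chars through."""
    out = []
    for ch in text:
        code = ord(ch) - 0xAC00
        if 0 <= code <= 11171:
            cho, rest = divmod(code, 588)
            jung, jong = divmod(rest, 28)
            out += [("C", cho), ("V", jung)] + ([("F", jong)] if jong else [])
        else:
            out.append(("X", ch))
    return out

def weighted_dld(a, b, sub_w=0.6, ins_w=1.0):
    """Levenshtein over jamo: substituting within the same phoneme class
    (e.g. one consonant for another) costs sub_w < 1, reflecting common
    OCR confusions; these weights are illustrative assumptions."""
    A, B = disassemble(a), disassemble(b)
    m, n = len(A), len(B)
    D = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1): D[i][0] = i * ins_w
    for j in range(n + 1): D[0][j] = j * ins_w
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if A[i-1] == B[j-1]:
                cost = 0.0
            elif A[i-1][0] == B[j-1][0]:
                cost = sub_w          # same phoneme class, different jamo
            else:
                cost = 1.0
            D[i][j] = min(D[i-1][j] + ins_w, D[i][j-1] + ins_w,
                          D[i-1][j-1] + cost)
    return D[m][n]

# the misread "데스트" stays close to "테스트" at the phoneme level
print(weighted_dld("테스트", "데스트"), weighted_dld("테스트", "사과"))
```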

Weighted Collaborative Representation and Sparse Difference-Based Hyperspectral Anomaly Detection

  • Wang, Qianghui; Hua, Wenshen; Huang, Fuyu; Zhang, Yan; Yan, Yang
    • Current Optics and Photonics / v.4 no.3 / pp.210-220 / 2020
  • To address the low accuracy and low efficiency of the Local Sparse Difference Index algorithm when detecting anomalies in hyperspectral images, this paper proposes a Weighted Collaborative Representation and Sparse Difference-Based Hyperspectral Anomaly Detection algorithm. First, the band subspace is divided according to the band correlation coefficient, which avoids the multiple solutions of the sparse coefficient vector that arise when too many bands are used. Then, an appropriate double-window model is selected, and the background dictionary is constructed and weighted according to Euclidean distance, which reduces the influence of anomalous components mixed into the background on the solution of the sparse coefficient vector. Finally, the sparse coefficient vector is solved by the collaborative representation method, and the sparse difference index is calculated to complete the anomaly detection. To prove its effectiveness, the proposed algorithm is compared with the RX, LRX, and LSD algorithms in simulations on two AVIRIS hyperspectral images. The results show that the proposed algorithm achieves higher accuracy and a lower false-alarm rate.
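
The core closed-form step can be sketched as follows: the test spectrum is coded over a background dictionary with a diagonal regularizer weighted by Euclidean distance to each background sample, and the residual norm is the anomaly score. Window extraction and band-subspace division are omitted, and the data are synthetic placeholders.

```python
import numpy as np

def cr_anomaly_score(y, D, lam=0.1):
    """Weighted collaborative representation score for one pixel.

    y : (b,) test spectrum;  D : (b, k) background dictionary drawn
    from the outer window.  The regularizer penalizes atoms far (in
    Euclidean distance) from y, so dissimilar background pixels,
    possibly contaminated by anomalies, receive small codes.
    """
    dist = np.linalg.norm(D - y[:, None], axis=0)     # distance weights
    Gamma = np.diag(dist ** 2)
    alpha = np.linalg.solve(D.T @ D + lam * Gamma, D.T @ y)
    return np.linalg.norm(y - D @ alpha)              # large => anomalous

rng = np.random.default_rng(4)
b, k = 50, 30
background = rng.standard_normal((b, 1)) @ rng.standard_normal((1, k)) \
             + 0.05 * rng.standard_normal((b, k))     # low-rank background
normal_px = background[:, 0] + 0.05 * rng.standard_normal(b)
anomaly_px = rng.standard_normal(b)                   # spectrally distinct
print(cr_anomaly_score(normal_px, background),
      cr_anomaly_score(anomaly_px, background))       # small vs. large
```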

Homonym Disambiguation based on Mutual Information and Sense-Tagged Compound Noun Dictionary (상호정보량과 복합명사 의미사전에 기반한 동음이의어 중의성 해소)

  • Heo, Jeong; Seo, Hee-Cheol; Jang, Myung-Gil
    • Journal of KIISE: Software and Applications / v.33 no.12 / pp.1073-1089 / 2006
  • The goal of Natural Language Processing (NLP) is to make a computer understand natural language and to deliver the meaning of natural language to humans. Word sense disambiguation (WSD) is a key technology for achieving this goal. In this paper, we describe a technique for automatic homonym disambiguation using both Mutual Information (MI) and a sense-tagged compound noun dictionary. Previous work using word definitions from a dictionary suffered from data sparseness because it relied on exact word matching. Our work overcomes this problem by using MI, an association measure between words. To reflect language features, the rate of word pairs with MI values, the sense frequency, and the size of word definitions are used as weights in our system. We also constructed a sense-tagged compound noun dictionary for high-frequency compound nouns and used it to resolve homonym ambiguity. Experimental data for testing and evaluating our system were constructed from question answering (QA) test data consisting of about 200 query sentences and answer paragraphs. We performed four types of experiments: using MI alone, the system achieved a precision of 65.06%; adding the weights raised the precision to 85.35%; and adding the sense-tagged compound noun dictionary raised it to 88.82%.
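
A toy sketch of the MI scoring step: pointwise mutual information between each context word and each sense's definition words, summed per sense. The co-occurrence counts, the senses, and the English homonym "bank" are all hypothetical stand-ins for the paper's Korean resources.

```python
import math
from collections import Counter

# toy corpus co-occurrence counts (hypothetical) around the homonym "bank"
co = Counter({("bank", "river"): 30, ("bank", "money"): 50,
              ("river", "water"): 40, ("money", "loan"): 35,
              ("bank", "loan"): 25, ("bank", "water"): 10})
word = Counter({"bank": 120, "river": 70, "money": 90, "water": 60, "loan": 50})
N = 500  # total co-occurrence pairs (toy value)

def mi(a, b):
    """Pointwise mutual information from co-occurrence counts."""
    pab = (co[(a, b)] + co[(b, a)]) / N
    if pab == 0:
        return 0.0
    return math.log(pab / ((word[a] / N) * (word[b] / N)))

# each sense is described by definition words, standing in for the
# sense-tagged dictionary entries used in the paper
senses = {"bank/finance": ["money", "loan"], "bank/river": ["river", "water"]}

def disambiguate(context):
    scores = {s: sum(mi(c, d) for c in context for d in defs)
              for s, defs in senses.items()}
    return max(scores, key=scores.get)

print(disambiguate(["loan", "money"]))   # -> bank/finance
```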

Neural-network-based Impulse Noise Removal Using Group-based Weighted Couple Sparse Representation

  • Lee, Yongwoo; Bui, Toan Duc; Shin, Jitae; Oh, Byung Tae
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.8 / pp.3873-3887 / 2018
  • In this paper, we propose a novel method to recover images corrupted by impulse noise. The proposed method has two stages: noise detection and filtering. In the first stage, we use pixel values, rank-ordered logarithmic difference values, and median values to train a neural-network-based impulse noise detector, which is then applied to locate noisy pixels in images. In the second stage, we use group-based weighted couple sparse representation to filter the noisy pixels. In this stage, conventional methods generally use only clean pixels to recover corrupted pixels, which can make dictionary learning fail when the noise density is high and the number of useful clean pixels is inadequate; we therefore use reconstructed pixels to compensate for the deficiency. Experimental results show that the proposed noise detector outperforms conventional noise detectors and that, given the locations of the noisy pixels, the proposed impulse-noise removal method outperforms conventional methods, producing recovered images of higher quality.
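
The detector's three per-pixel features can be sketched directly: the pixel value, a rank-ordered logarithmic difference (ROLD) statistic, and the local median. In the sketch below a simple threshold on ROLD stands in for the trained network, and median replacement stands in for the group-based weighted couple sparse filtering; the exact ROLD normalization used in the paper is an assumption here.

```python
import numpy as np

def detector_features(img, i, j, m=4, eps=1e-3):
    """Per-pixel detector features: center value, a simplified ROLD
    over the 3x3 neighbourhood (sum of the m smallest log-differences),
    and the neighbourhood median."""
    win = img[i-1:i+2, j-1:j+2].astype(float)
    c = win[1, 1]
    d = np.abs(np.delete(win.ravel(), 4) - c)      # drop the center
    logd = np.log(d + eps) - np.log(eps)           # 0 when identical
    rold = np.sort(logd)[:m].sum()
    return c, rold, np.median(win)

def detect_and_filter(img, thresh=8.0):
    """Flag pixels whose ROLD exceeds thresh (a stand-in for the
    trained network) and replace them with the local median."""
    out = img.astype(float).copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            c, rold, med = detector_features(img, i, j)
            if rold > thresh:
                out[i, j] = med
    return out

rng = np.random.default_rng(5)
clean = np.tile(np.linspace(0, 1, 32), (32, 1))    # smooth test image
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.1               # 10% salt-and-pepper
noisy[mask] = rng.choice([0.0, 1.0], size=mask.sum())
restored = detect_and_filter(noisy)
print(np.abs(restored - clean).mean() < np.abs(noisy - clean).mean())  # True
```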