• Title/Summary/Keyword: weighted similarity

Search Results: 129 (processing time: 0.024 seconds)

Organ Recognition in Ultrasound Images Using Log Power Spectrum (로그 전력 스펙트럼을 이용한 초음파 영상에서의 장기인식)

  • 박수진;손재곤;김남철
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.28 no.9C
    • /
    • pp.876-883
    • /
    • 2003
  • In this paper, we propose an algorithm for organ recognition in ultrasound images using the log power spectrum. The main procedure of the algorithm consists of feature extraction and feature classification. In feature extraction, the log power spectrum is used as a translation-invariant feature to extract information on the echo of the organ tissue from a preprocessed input image. In feature classification, the Mahalanobis distance is used as a measure of the similarity between the feature of an input image and the representative feature of each class. Experimental results on real ultrasound images show that the proposed algorithm improves the recognition rate by up to 30% over a recognition algorithm using the power spectrum and Euclidean distance, and achieves a 10-40% better recognition rate than a recognition algorithm using the weighted quefrency complex cepstrum.
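The classification step described above, nearest representative feature by Mahalanobis distance, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class names, feature dimensions, and the use of a single shared inverse covariance matrix are assumptions.

```python
import numpy as np

def log_power_spectrum(image):
    """Translation-invariant feature: log of the 2-D power spectrum."""
    spectrum = np.abs(np.fft.fft2(image)) ** 2
    return np.log1p(spectrum).ravel()

def mahalanobis(x, mean, cov_inv):
    """Mahalanobis distance between a feature and a class representative."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

def classify(x, class_means, cov_inv):
    """Assign x to the class whose representative feature is nearest."""
    dists = {c: mahalanobis(x, m, cov_inv) for c, m in class_means.items()}
    return min(dists, key=dists.get)
```

With an identity covariance this reduces to Euclidean distance; the covariance term is what lets the measure account for correlated, unequally scaled feature components.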

Evaluation of Denoising Filters Based on Edge Locations

  • Seo, Suyoung
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.4
    • /
    • pp.503-513
    • /
    • 2020
  • This paper presents a method to evaluate denoising filters based on the edge locations in their denoised images. Image quality assessment has often been performed using the structural similarity (SSIM) index. However, SSIM does not clearly convey the geometric accuracy of features in denoised images. Thus, in this paper, a method that localizes edges with subpixel accuracy based on adaptive weighting of gradients is used to obtain the subpixel edge locations in the ground-truth image, noisy images, and denoised images. This paper then proposes a method to evaluate the geometric accuracy of edge locations based on the root mean square error (RMSE) and jaggedness with reference to the ground-truth locations. Jaggedness is a measure proposed in this study to quantify the stability of the distribution of edge locations. The tested denoising filters are anisotropic diffusion (AF), the bilateral filter, the guided filter, the weighted guided filter, the weighted-mean-of-patches filter, and a smoothing filter (SF). SF is a simple filter that smooths images by applying Gaussian blurring to a noisy image. Experiments were performed with a set of simulated and natural images. The experimental results show that AF and SF recovered edge locations more accurately than the other tested filters in terms of SSIM, RMSE, and jaggedness, and that SF produced better results than AF in terms of jaggedness.
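The RMSE part of the evaluation is standard and can be sketched directly. The paper's jaggedness measure is defined there and not reproduced here; the second function below is only an illustrative proxy (RMS of second differences along an edge trace) for the general idea of edge-location stability.

```python
import numpy as np

def edge_rmse(est, truth):
    """RMSE between estimated and ground-truth subpixel edge locations."""
    est, truth = np.asarray(est, float), np.asarray(truth, float)
    return float(np.sqrt(np.mean((est - truth) ** 2)))

def jaggedness_proxy(est):
    """Illustrative stand-in only, NOT the paper's definition:
    RMS of second differences along the edge; 0 for a straight edge."""
    d2 = np.diff(np.asarray(est, float), n=2)
    return float(np.sqrt(np.mean(d2 ** 2)))
```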

Image Retrieval Based on the Weighted and Regional Integration of CNN Features

  • Liao, Kaiyang;Fan, Bing;Zheng, Yuanlin;Lin, Guangfeng;Cao, Congjun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.3
    • /
    • pp.894-907
    • /
    • 2022
  • The features extracted by convolutional neural networks are more descriptive of images than traditional features, and their convolutional layers are more suitable for image retrieval than the fully connected layers. However, convolutional-layer features consume considerable time and memory if used directly to match an image. Therefore, this paper proposes a feature weighting and region integration method for convolutional-layer features that forms global feature vectors for image matching. First, the 3D feature map of the last convolutional layer is extracted, and the convolutional features are then weighted to highlight the edge and position information of the image. Next, we integrate several regional feature vectors, produced by sliding windows, into a global feature vector. Finally, the initial retrieval ranking is obtained by measuring the similarity between the query image and each test image using cosine distance, and the final mean Average Precision (mAP) is obtained by applying query expansion for re-ranking. We conduct experiments on the Oxford5k and Paris6k datasets and their extended versions, Paris106k and Oxford105k. The experimental results indicate that the global feature extracted by the new method describes images better.
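The final ranking step, cosine similarity between a global query vector and each gallery vector, can be sketched as below. The feature vectors are assumed to be the aggregated global descriptors the paper produces; how they are computed is not reproduced here.

```python
import numpy as np

def cosine_rank(query, gallery):
    """Rank gallery images by cosine similarity to the query feature.
    query: (d,) global feature; gallery: (n, d) matrix of features."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q                        # cosine similarity per gallery image
    return np.argsort(-sims), sims      # indices from most to least similar
```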

An Empirical Study on Recommendation Schemes Based on User-based and Item-based Collaborative Filtering (사용자 기반과 아이템 기반 협업여과 추천기법에 관한 실증적 연구)

  • Ye-Na Kim;In-Bok Choi;Taekeun Park;Jae-Dong Lee
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2008.11a
    • /
    • pp.714-717
    • /
    • 2008
  • Collaborative filtering recommendation comprises user-based and item-based approaches, and proceeds in three steps: similarity measurement, neighbor selection, and prediction generation. Similarity measures include Euclidean distance, cosine similarity, and the Pearson correlation coefficient; neighbor-selection methods include correlation-threshold and best-N-neighbors; and prediction-generation methods include the simple average, weighted sum, and adjusted weighted sum. Since such a variety of techniques is in use, this paper conducts performance experiments and a comparative analysis to find the optimal combination of similarity measure and prediction method for user-based and item-based collaborative filtering. The experiments used GroupLens' MovieLens dataset and compared the schemes by mean absolute error (MAE). Through the experiments, we found the optimal combination of similarity measure and prediction method, and a performance comparison of user-based and item-based collaborative filtering confirmed that item-based collaborative filtering performed better.
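One combination from the pipeline above, Pearson similarity with weighted-sum prediction in user-based collaborative filtering, can be sketched as follows. This is a minimal illustration; the ratings matrix and the positive-similarity neighbor filter are assumptions, not the paper's exact setup.

```python
import numpy as np

def pearson_sim(a, b):
    """Pearson correlation over the items both users have rated (NaN = unrated)."""
    mask = ~np.isnan(a) & ~np.isnan(b)
    if mask.sum() < 2:
        return 0.0
    x, y = a[mask], b[mask]
    xc, yc = x - x.mean(), y - y.mean()
    denom = np.sqrt((xc ** 2).sum() * (yc ** 2).sum())
    return float(xc @ yc / denom) if denom else 0.0

def predict_weighted_sum(ratings, user, item):
    """User-based CF: similarity-weighted sum of neighbors' ratings on the item."""
    sims, vals = [], []
    for v in range(ratings.shape[0]):
        if v != user and not np.isnan(ratings[v, item]):
            s = pearson_sim(ratings[user], ratings[v])
            if s > 0:                       # keep only positively similar neighbors
                sims.append(s)
                vals.append(ratings[v, item])
    if not sims:
        return np.nan
    sims = np.array(sims)
    return float(sims @ np.array(vals) / sims.sum())
```

Swapping `pearson_sim` for cosine similarity, or the weighted sum for an adjusted weighted sum, yields the other combinations the paper compares by MAE.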

An efficient dual layer data aggregation scheme in clustered wireless sensor networks

  • Fenting Yang;Zhen Xu;Lei Yang
    • ETRI Journal
    • /
    • v.46 no.4
    • /
    • pp.604-618
    • /
    • 2024
  • In wireless sensor network (WSN) monitoring systems, redundant data from sluggish environmental changes and overlapping sensing ranges can increase the volume of data sent by nodes, degrade the efficiency of information collection, and lead to the death of sensor nodes. To reduce the energy consumption of sensor nodes and prolong the life of WSNs, this study proposes a dual-layer intra-cluster data fusion scheme based on a ring buffer. To reduce redundant data and transient anomalous data while guaranteeing the temporal coherence of the data, the source nodes employ a binarized similarity function and sliding-quartile detection based on the ring buffer. Using an improved support degree function based on weighted Pearson distance, the cluster head node performs a weighted fusion of the data received from the source nodes. Experimental results reveal that the proposed scheme has clear advantages in three aspects: the number of surviving nodes, residual energy, and the number of packets transmitted. The fusion in the proposed scheme is confined to environmental parameters of the same attribute.
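The general shape of support-degree weighted fusion at the cluster head can be sketched as below. The paper's improved support function based on weighted Pearson distance is not reproduced here; the exponential of the absolute difference is an illustrative stand-in, and `alpha` is an assumed tuning parameter.

```python
import numpy as np

def fuse_readings(readings, alpha=1.0):
    """Support-degree weighted fusion sketch: readings that agree with the
    majority receive more mutual support, hence larger fusion weights."""
    x = np.asarray(readings, float)
    diff = np.abs(x[:, None] - x[None, :])
    support = np.exp(-alpha * diff)     # pairwise mutual support (stand-in)
    w = support.sum(axis=1)             # total support per reading
    w = w / w.sum()                     # normalized fusion weights
    return float(w @ x)
```

The effect is that an outlying reading (e.g. 25.0 among readings near 20.0) is down-weighted rather than averaged in at full strength.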

A study on the connected-digit recognition using MLP-VQ and Weighted DHMM (MLP-VQ와 가중 DHMM을 이용한 연결 숫자음 인식에 관한 연구)

  • Chung, Kwang-Woo;Hong, Kwang-Seok
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.35S no.8
    • /
    • pp.96-105
    • /
    • 1998
  • The aim of this paper is to propose WDHMM (weighted DHMM) with MLP-VQ to improve a speaker-independent connected-digit recognition system. The output distribution of an MLP neural network is a probability distribution that represents the degree of similarity between patterns through the non-linear mapping between input patterns and learned patterns. This paper proposes MLP-VQ, which generates codewords from the index of the output node with the highest value in the MLP output distribution. Unlike conventional VQ, MLP-VQ allows the degree of similarity between the current input pattern and each learned class pattern to be reflected in the recognition model. WDHMM is also proposed; it uses the MLP output distribution to weight the symbol generation probabilities of DHMMs. This method shortens HMM parameter estimation and recognition time, because the symbol generation probability need not be modeled as a multi-dimensional normal distribution, as in the conventional SCHMM. It also improves recognition accuracy by 14.7% over the DHMM with only a small increase in computation, because it can reflect phone-class relations in the recognition model. Our results show that speaker-independent connected-digit recognition using MLP-VQ and WDHMM achieves 84.22% accuracy.


Evaluation of Tendency for Characteristics of MRI Brain T2 Weighted Images according to Changing NEX: MRiLab Simulation Study (자기공명영상장치의 뇌 T2 강조 영상에서 여기횟수 변화에 따른 영상 특성의 경향성 평가: MRiLab Simulation 연구)

  • Kim, Nam Young;Kim, Ju Hui;Lim, Jun;Kang, Seong-Hyeon;Lee, Youngjin
    • Journal of the Korean Society of Radiology
    • /
    • v.15 no.1
    • /
    • pp.9-14
    • /
    • 2021
  • Recently, magnetic resonance imaging (MRI), which can acquire images with good contrast without exposure to radiation, has been widely used for diagnosis. However, noise that reduces diagnostic accuracy is inevitably generated when acquiring MR images, and by adjusting the acquisition parameters, the noise problem can be mitigated to obtain images with excellent characteristics. Among these parameters, the number of excitations (NEX) allows images with excellent characteristics to be acquired without otherwise degrading image quality; however, an appropriate NEX setting is required, since scan time increases and motion artifacts may occur. Therefore, in this study, after fixing all other MRI parameters in the MRiLab simulation program, we evaluated the tendency of image characteristics as NEX changes, through quantitative evaluation of brain T2 weighted images acquired by adjusting only NEX. To evaluate the noise level and similarity of the acquired images, the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), root mean square error (RMSE), and peak signal-to-noise ratio (PSNR) were calculated. As a result, both the noise-level and similarity evaluation factors improved as NEX increased, while the increment gradually decreased. In conclusion, we demonstrated that an appropriate NEX setting is important, because an excessively large NEX does not further improve image characteristics and can cause motion artifacts due to the long scan time.
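The four evaluation factors can be computed as below, using common definitions; the paper's exact ROI placement and peak value are not specified here, so those are assumptions.

```python
import numpy as np

def snr(signal_roi, noise_roi):
    """Signal-to-noise ratio: mean signal over noise standard deviation."""
    return float(np.mean(signal_roi) / np.std(noise_roi))

def cnr(roi_a, roi_b, noise_roi):
    """Contrast-to-noise ratio between two tissue ROIs."""
    return float(abs(np.mean(roi_a) - np.mean(roi_b)) / np.std(noise_roi))

def rmse(img, ref):
    """Root mean square error against a reference image."""
    return float(np.sqrt(np.mean((np.asarray(img, float) - np.asarray(ref, float)) ** 2)))

def psnr(img, ref, peak=255.0):
    """Peak signal-to-noise ratio in dB (peak value assumed, e.g. 8-bit)."""
    e = rmse(img, ref)
    return float('inf') if e == 0 else float(20 * np.log10(peak / e))
```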

Ground Target Classification Algorithm based on Multi-Sensor Images (다중센서 영상 기반의 지상 표적 분류 알고리즘)

  • Lee, Eun-Young;Gu, Eun-Hye;Lee, Hee-Yul;Cho, Woong-Ho;Park, Kil-Houm
    • Journal of Korea Multimedia Society
    • /
    • v.15 no.2
    • /
    • pp.195-203
    • /
    • 2012
  • This paper proposes a ground target classification algorithm based on decision fusion and a feature extraction method using multi-sensor images. The decisions obtained from the individual classifiers are fused by a weighted voting method to improve the target recognition rate. For classifying the targets in the individual sensor images, features robust to scale and rotation are extracted using the brightness difference of CM images obtained from the CCD image, together with the boundary similarity and the width ratio between the vehicle body and turret of the target in the FLIR image. Finally, we verify the performance of the proposed ground target classification algorithm and feature extraction method through experiments.
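The decision-fusion step, a weighted vote over per-sensor classifier outputs, can be sketched as follows. The sensor names and weights are illustrative assumptions; the paper does not specify how its voting weights are chosen.

```python
from collections import defaultdict

def weighted_vote(decisions, weights):
    """Fuse per-sensor classifier decisions with a weighted vote.
    decisions: {sensor: predicted class}; weights: {sensor: reliability}."""
    score = defaultdict(float)
    for sensor, label in decisions.items():
        score[label] += weights.get(sensor, 1.0)  # default weight 1.0
    return max(score, key=score.get)
```

For example, with weights 0.4 (CCD) and 0.6 (FLIR), a disagreement between the two classifiers resolves in favor of the FLIR decision.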

A Market-Based Replacement Cost Approach to Technology Valuation (기술가치평가를 위한 시장대체원가 접근법)

  • Kang, Pilsung;Geum, Youngjung;Park, Hyun-Woo;Kim, Sang-Gook;Sung, Tae-Eung;Lee, Hakyeon
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.41 no.2
    • /
    • pp.150-161
    • /
    • 2015
  • This paper proposes a new approach to technology valuation, the market-replacement cost approach, which integrates the cost-based and market-based approaches. The proposed approach estimates the market-replacement cost of a target technology using the R&D costs of similar, previously conducted R&D projects. Similar R&D projects are extracted from a project database based on the document similarity between project proposals and the technology description of the target technology. The R&D costs of the similar projects are adjusted to reflect the rate of technological obsolescence and inflation. The market-replacement cost of the technology is then derived by calculating the average of the adjusted costs weighted by the similarity values of the similar R&D projects. A case of "Prevention method and system for the diffusion of mobile malicious code" is presented to illustrate the proposed approach.
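The final aggregation step, a similarity-weighted average of adjusted costs, can be sketched as below. The input is assumed to already carry the obsolescence and inflation adjustments the abstract describes; the tuple format is an illustrative assumption.

```python
def market_replacement_cost(projects):
    """Similarity-weighted average of adjusted R&D costs.
    projects: list of (adjusted_cost, similarity) pairs for similar projects."""
    total_sim = sum(s for _, s in projects)
    return sum(c * s for c, s in projects) / total_sim
```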

Paper Recommendation Using SPECTER with Low-Rank and Sparse Matrix Factorization

  • Panpan Guo;Gang Zhou;Jicang Lu;Zhufeng Li;Taojie Zhu
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.5
    • /
    • pp.1163-1185
    • /
    • 2024
  • With the sharp increase in the volume of literature data, researchers must spend considerable time and energy locating desired papers. Paper recommendation is a means to solve this problem. Unfortunately, the large amount of data combined with its sparsity makes personalized paper recommendation challenging. Traditional matrix decomposition models have cold-start issues. Most overlook the importance of side information, or fail to consider the noise introduced when using it, resulting in unsatisfactory recommendations. This study proposes a paper recommendation method (PR-SLSMF) using document-level representation learning with citation-informed transformers (SPECTER) and low-rank and sparse matrix factorization (LSMF); it uses SPECTER to learn paper content representations. The model calculates the similarity between papers and constructs a weighted heterogeneous information network (HIN) that includes citation and content-similarity information. The method combines LSMF with the HIN, effectively alleviating data sparsity and cold-start issues and avoiding topic drift. We validated the effectiveness of this method on two real datasets, along with the necessity of adding side information.