• Title/Abstract/Keywords: measure of structural similarity


Searching Similar Example-Sentences Using the Needleman-Wunsch Algorithm (Needleman-Wunsch 알고리즘을 이용한 유사예문 검색)

  • Kim Dong-Joo;Kim Han-Woo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.11 no.4 s.42
    • /
    • pp.181-188
    • /
    • 2006
  • In this paper, we propose a search algorithm for similar example sentences in computer-aided translation. The search for similar examples, a core component of computer-aided translation, retrieves from a collection of examples those most similar to a given query in terms of structural and semantic analogy. The proposed algorithm is based on the Needleman-Wunsch algorithm, which is used in bioinformatics to measure similarity between protein or nucleotide sequences. If the original Needleman-Wunsch algorithm is applied directly to the search for similar sentences, it is likely to fail because the similarity is sensitive to a word's inflectional components. We therefore use the lemma in addition to the (typographical) surface form, and the part of speech to capture structural analogy. In other words, this paper proposes a similarity metric combining the surface, lemma, and part-of-speech information of a word (see the sketch below). Finally, we present a search algorithm with the proposed metric, along with the word pairs that contribute to the similarity between a query and a retrieved example. Our algorithm shows good performance in the electricity and communications domain.
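
A minimal sketch of this combined metric inside a Needleman-Wunsch alignment, assuming tokens are given as (surface, lemma, POS) tuples; the weights and gap penalty are illustrative assumptions, not the paper's tuned values.

```python
# Needleman-Wunsch over (surface, lemma, POS) token tuples.
# w_surf/w_lemma/w_pos and the gap penalty are illustrative assumptions.

def token_similarity(a, b, w_surf=0.4, w_lemma=0.4, w_pos=0.2):
    """Combine surface, lemma, and part-of-speech matches into one score."""
    score = 0.0
    if a[0] == b[0]:
        score += w_surf   # surface forms match
    if a[1] == b[1]:
        score += w_lemma  # lemmas match despite inflection
    if a[2] == b[2]:
        score += w_pos    # parts of speech match (structural analogy)
    return score

def needleman_wunsch(query, example, gap=-0.3):
    """Global alignment score between two token sequences."""
    n, m = len(query), len(example)
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + gap
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = max(
                dp[i - 1][j - 1] + token_similarity(query[i - 1], example[j - 1]),
                dp[i - 1][j] + gap,  # gap in example
                dp[i][j - 1] + gap,  # gap in query
            )
    return dp[n][m]

# Usage: the lemma and POS still match although the surface forms differ.
q = [("ran", "run", "VERB"), ("fast", "fast", "ADV")]
e = [("runs", "run", "VERB"), ("quickly", "quickly", "ADV")]
print(needleman_wunsch(q, e))  # 0.6 (lemma+POS) + 0.2 (POS only) = 0.8
```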


An Analysis on the Linkage Structure of Industrial Complexes(Clusters) in the Internal and External Capital Region (수도권 산업단지(클러스터)의 광역권 내부 및 외부 연계구조 분석)

  • Koo, Yang-Mi;Nahm, Kee-Bom;Park, Sam-Ock
    • Journal of the Economic Geographical Society of Korea
    • /
    • v.13 no.2
    • /
    • pp.181-195
    • /
    • 2010
  • In line with the national Mega Economic Region policy, industrial complex (innovative cluster) policy is shifting toward building linkage structures within Mega Economic Regions. The aim of this analysis is to derive a hypothetical linkage structure of industrial complexes inside and outside the Capital Region. First, using survey data from firms located in the industrial complexes, we identify the regional linkages of firms within the local area and inside and outside the Mega Economic Region. Next, a measure of structural similarity between industrial complexes is computed from employment counts by industrial sector, as sketched below. Finally, after jointly considering the geographical distance between industrial complexes, the shares of industrial sectors, and location quotients, a hub-and-spoke linkage structure between clusters is deduced.
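
The abstract does not give the exact similarity formula, so the sketch below assumes cosine similarity between sector-employment vectors; the complex names and employment counts are hypothetical.

```python
# Structural similarity between two industrial complexes based on
# employment counts by industrial sector. Cosine similarity is an
# assumption; the paper's exact formula is not stated in the abstract.

import math

def structural_similarity(emp_a: dict, emp_b: dict) -> float:
    """Cosine similarity between sector-employment vectors."""
    sectors = set(emp_a) | set(emp_b)
    dot = sum(emp_a.get(s, 0) * emp_b.get(s, 0) for s in sectors)
    norm_a = math.sqrt(sum(v * v for v in emp_a.values()))
    norm_b = math.sqrt(sum(v * v for v in emp_b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Hypothetical employment counts by sector for two complexes.
complex_a = {"electronics": 1200, "machinery": 800, "chemicals": 300}
complex_b = {"electronics": 900, "machinery": 1000, "textiles": 150}
print(structural_similarity(complex_a, complex_b))
```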


UHD TV Image Enhancement using Multi-frame Example-based Super-resolution (멀티프레임 예제기반 초해상도 영상복원을 이용한 UHD TV 영상 개선)

  • Jeong, Seokhwa;Yoon, Inhye;Paik, Joonki
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.3
    • /
    • pp.154-161
    • /
    • 2015
  • A novel multiframe super-resolution (SR) algorithm is presented to overcome the limitations of existing single-image SR algorithms by using motion information from adjacent frames in a video. The proposed SR algorithm consists of three steps: i) definition of a local region using interframe motion vectors, ii) multiscale patch generation and adaptive selection of multiple optimum patches, and iii) combination of the optimum patches for super-resolution. The proposed algorithm increases the accuracy of patch selection by using motion information and multiscale patches. Experimental results show that the proposed algorithm outperforms existing patch-based SR algorithms in terms of both subjective and objective measures, including the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM); a sketch of this evaluation follows.
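
A minimal sketch of the reported objective evaluation, using scikit-image's standard PSNR and SSIM implementations on placeholder data standing in for the restored and ground-truth frames.

```python
# PSNR and SSIM between a super-resolved frame and its ground truth.
# The synthetic images below are placeholders for real video frames.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_sr(reference: np.ndarray, restored: np.ndarray):
    """Return (PSNR in dB, SSIM) for 8-bit grayscale images."""
    psnr = peak_signal_noise_ratio(reference, restored, data_range=255)
    ssim = structural_similarity(reference, restored, data_range=255)
    return psnr, ssim

# Usage with synthetic data: a reference frame and a mildly degraded copy.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
degraded = np.clip(ref.astype(int) + rng.integers(-5, 6, ref.shape), 0, 255).astype(np.uint8)
print(evaluate_sr(ref, degraded))
```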

Speckle Noise Reduction and Image Quality Improvement in U-net-based Phase Holograms in BL-ASM (BL-ASM에서 U-net 기반 위상 홀로그램의 스펙클 노이즈 감소와 이미지 품질 향상)

  • Oh-Seung Nam;Ki-Chul Kwon;Jong-Rae Jeong;Kwon-Yeon Lee;Nam Kim
    • Korean Journal of Optics and Photonics
    • /
    • v.34 no.5
    • /
    • pp.192-201
    • /
    • 2023
  • The band-limited angular spectrum method (BL-ASM) suffers from aliasing errors due to spatial-frequency control problems. In this paper, a sampling-interval adjustment technique for phase holograms and a technique for reducing speckle noise and improving image quality using a deep-learning-based U-net model are proposed. With the proposed technique, speckle noise is reduced by first calculating the sampling factor and controlling the spatial frequency through sampling-interval adjustment, so that aliasing errors can be removed over a wide propagation range; a sketch of band-limited propagation follows this abstract. The next step improves the quality of the reconstructed image by training the deep learning model on the phase hologram. In software simulations on various sample images, the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) improved by 5% and 0.14% on average, respectively, compared with the existing BL-ASM.
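
A minimal sketch of band-limited angular spectrum propagation, following the standard band-limit formulation; the paper's sampling-factor computation and U-net stage are omitted, and all parameter values are illustrative.

```python
# Band-limited angular spectrum propagation (BL-ASM). The band limit
# suppresses spatial frequencies whose transfer-function phase would
# alias at this propagation distance. Parameter values are illustrative.

import numpy as np

def bl_asm_propagate(field, wavelength, pitch, z):
    """Propagate a complex field by distance z with a band-limited ASM."""
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=pitch)
    fy = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    k = 2 * np.pi / wavelength
    # Angular spectrum transfer function (evanescent waves zeroed out).
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * k * z * np.sqrt(np.maximum(arg, 0.0)))
    # Band limit per axis, from the standard BL-ASM formulation.
    f_limit_x = 1.0 / (wavelength * np.sqrt((2 * z / (m * pitch)) ** 2 + 1))
    f_limit_y = 1.0 / (wavelength * np.sqrt((2 * z / (n * pitch)) ** 2 + 1))
    H[(np.abs(FX) > f_limit_x) | (np.abs(FY) > f_limit_y)] = 0.0
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Usage: propagate a 512x512 plane-wave field by 0.1 m at 532 nm.
field = np.ones((512, 512), dtype=complex)
out = bl_asm_propagate(field, wavelength=532e-9, pitch=8e-6, z=0.1)
print(np.abs(out).mean())
```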

Deep Learning-Based Low-Light Imaging Considering Image Signal Processing

  • Minsu, Kwon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.2
    • /
    • pp.19-25
    • /
    • 2023
  • In this paper, we propose a deep-learning-based method for improving raw images captured in low-light conditions that takes the image signal processing into account. Compared with a DSLR camera, a smartphone camera has a limited lens and sensor size, so noise increases and image quality degrades in low-light conditions. Existing deep-learning-based low-light image processing methods sometimes create unnatural images because they do not consider the lens shading effect and white balance, which are major factors in image signal processing. In this paper, pixel distances from the image center and channel average values are used so that the deep learning model accounts for the lens shading effect and white balance (see the sketch after this abstract). Experiments with low-light images taken with a smartphone demonstrate that the proposed method achieves a higher peak signal-to-noise ratio and structural similarity index measure than the existing method, producing high-quality low-light images.
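
A minimal sketch of the two auxiliary inputs the abstract describes, assuming they are simply stacked onto the raw image as extra channels; how the paper actually feeds them to its network is not specified in the abstract.

```python
# Build per-pixel distance from the image center (for lens shading) and
# per-channel averages (for white balance) as auxiliary input channels.
# Stacking them onto the raw image is an assumption for illustration.

import numpy as np

def build_auxiliary_inputs(raw: np.ndarray) -> np.ndarray:
    """raw: (H, W, C) image. Returns (H, W, C + 1 + C) stacked input."""
    h, w, c = raw.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # Normalized radial distance from the image center.
    dist = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2)
    dist = (dist / dist.max())[..., None]
    # Channel means broadcast to full resolution.
    means = raw.mean(axis=(0, 1), keepdims=True) * np.ones((h, w, c))
    return np.concatenate([raw, dist, means], axis=-1)

# Usage with a dummy 4-channel packed raw image.
raw = np.random.rand(128, 128, 4).astype(np.float32)
print(build_auxiliary_inputs(raw).shape)  # (128, 128, 9)
```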

Reduced-Reference Quality Assessment for Compressed Videos Based on the Similarity Measure of Edge Projections (에지 투영의 유사도를 이용한 압축된 영상에 대한 Reduced-Reference 화질 평가)

  • Kim, Dong-O;Park, Rae-Hong;Sim, Dong-Gyu
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.45 no.3
    • /
    • pp.37-45
    • /
    • 2008
  • Quality assessment aims to evaluate whether a distorted image or video has good quality by measuring the difference between the original and distorted images or videos. In this paper, to assess the visual quality of a distorted image or video, visual features of the distorted image are compared with those of the original image instead of comparing the images directly. We use edge projections from the two images as features, where an edge projection is easily obtained by projecting the edge pixels of an edge map along the vertical or horizontal direction. In this paper, edge projections are computed using the vertical/horizontal directions of the gradients as well as the magnitude of each gradient; a sketch follows this abstract. Experimental results show the effectiveness of the proposed quality assessment through comparison with conventional quality assessment algorithms such as the structural similarity (SSIM), edge peak signal-to-noise ratio (EPSNR), and edge histogram descriptor (EHD) methods.
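
A minimal sketch of the edge-projection features; histogram intersection stands in for the paper's similarity measure, which the abstract does not define, and the gradient-direction-specific projections are omitted.

```python
# Project an edge map along the vertical and horizontal directions and
# compare the reference and distorted profiles. Histogram intersection
# is an illustrative stand-in for the paper's similarity measure.

import numpy as np

def edge_projections(edge_map: np.ndarray):
    """edge_map: binary (H, W). Returns (horizontal, vertical) profiles."""
    horizontal = edge_map.sum(axis=0).astype(float)  # column-wise counts
    vertical = edge_map.sum(axis=1).astype(float)    # row-wise counts
    return horizontal, vertical

def projection_similarity(p: np.ndarray, q: np.ndarray) -> float:
    """Normalized histogram intersection of two projection profiles."""
    p = p / (p.sum() + 1e-12)
    q = q / (q.sum() + 1e-12)
    return float(np.minimum(p, q).sum())

# Usage with random binary edge maps standing in for real ones.
ref_edges = np.random.rand(64, 64) > 0.9
dist_edges = np.random.rand(64, 64) > 0.9
h_ref, v_ref = edge_projections(ref_edges)
h_dst, v_dst = edge_projections(dist_edges)
score = 0.5 * (projection_similarity(h_ref, h_dst)
               + projection_similarity(v_ref, v_dst))
print(score)  # 1.0 means identical projection profiles
```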

A Ranking Technique of XML Documents using Path Similarity for Expanded Query Processing (확장된 질의 처리를 위해 경로간 의미적 유사도를 고려한 XML 문서 순위화 기법)

  • Kim, Hyun-Joo;Park, So-Mi;Park, Seog
    • Journal of KIISE:Databases
    • /
    • v.37 no.2
    • /
    • pp.113-120
    • /
    • 2010
  • XML is widely used for data storage and processing. XML documents are characterized by their structure, and users can query them with XPath when they need information from a document. An XPath query can be processed only when the terms and structure of the document and the query match. Nowadays, however, many documents are built with different terminology and structures, so users cannot know the exact form of the target data. In fact, the target document may well contain the information the user seeks, or similar information. Accordingly, a user query should still be processed when its term usage or structure differs slightly from the document. To this end, we propose an XML document ranking method based on path similarity. The method measures the semantic similarity between a user query and a document using three factors: position, node, and relaxation (see the sketch after this abstract).
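
A minimal sketch of a three-factor path similarity; the position/node/relaxation scoring below is an assumption made for illustration, not the paper's formula.

```python
# Score an XML document path against a query path using three factors:
# shared node names, nodes matching at the same depth, and a relaxation
# penalty for extra intermediate nodes. Equal weighting is an assumption.

def path_similarity(query_path: str, doc_path: str) -> float:
    """Score how well a document path matches a query path (0..1)."""
    q_nodes = query_path.strip("/").split("/")
    d_nodes = doc_path.strip("/").split("/")
    # Node factor: shared node names, order-insensitive.
    node_factor = len(set(q_nodes) & set(d_nodes)) / len(set(q_nodes))
    # Position factor: nodes matching at the same depth.
    position_factor = sum(a == b for a, b in zip(q_nodes, d_nodes)) / len(q_nodes)
    # Relaxation factor: penalize extra intermediate nodes in the document.
    relaxation_factor = min(len(q_nodes) / len(d_nodes), 1.0)
    return (node_factor + position_factor + relaxation_factor) / 3.0

# Usage: rank hypothetical candidate paths for the query /book/title.
query = "/book/title"
candidates = ["/book/title", "/library/book/title", "/book/author/name"]
for c in sorted(candidates, key=lambda p: -path_similarity(query, p)):
    print(round(path_similarity(query, c), 3), c)
```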

Comparison of CNN and GAN-based Deep Learning Models for Ground Roll Suppression (그라운드-롤 제거를 위한 CNN과 GAN 기반 딥러닝 모델 비교 분석)

  • Sangin Cho;Sukjoon Pyun
    • Geophysics and Geophysical Exploration
    • /
    • v.26 no.2
    • /
    • pp.37-51
    • /
    • 2023
  • Ground roll is the most common coherent noise in land seismic data and has an amplitude much larger than the reflection events we usually want to obtain. Therefore, ground roll suppression is a crucial step in seismic data processing. Several techniques, such as f-k filtering and the curvelet transform, have been developed to suppress ground roll, but the existing methods still require improvements in suppression performance and efficiency. Various recent studies have suppressed ground roll in seismic data using deep learning methods developed for image processing. In this paper, we introduce three models (DnCNN (De-noiseCNN), pix2pix, and CycleGAN), based on the convolutional neural network (CNN) or the conditional generative adversarial network (cGAN), for ground roll suppression and explain them in detail through numerical examples. Common shot gathers from the same field were divided into training and test datasets to compare the algorithms. We trained the models with the training data and evaluated their performance on the test data. Training these models with field data requires data with the ground roll removed; therefore, the ground roll was suppressed by f-k filtering (sketched below) and the result was used as the ground-truth data. To evaluate the deep learning models and compare their training results, we used quantitative indicators such as the correlation coefficient and the structural similarity index measure (SSIM) with respect to the ground-truth data. The DnCNN model exhibited the best performance, and we confirmed that the other models can also be applied to suppress ground roll.
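
A minimal sketch of the f-k filtering step used to build the ground-truth data, muting events below a cutoff apparent velocity; the cutoff is an illustrative assumption, and real processing uses tuned fan filters with tapering to avoid ringing.

```python
# f-k filtering: mute events slower than a cutoff apparent velocity
# (such as ground roll) in the frequency-wavenumber domain.

import numpy as np

def fk_filter(gather: np.ndarray, dt: float, dx: float, v_min: float) -> np.ndarray:
    """gather: (n_t, n_x) shot gather; dt sample interval [s]; dx trace
    spacing [m]; v_min cutoff apparent velocity [m/s]."""
    n_t, n_x = gather.shape
    spec = np.fft.fft2(gather)
    f = np.fft.fftfreq(n_t, d=dt)[:, None]   # temporal frequency [Hz]
    k = np.fft.fftfreq(n_x, d=dx)[None, :]   # spatial wavenumber [1/m]
    # Apparent velocity |f/k|; keep only events faster than v_min.
    v_app = np.abs(f) / np.maximum(np.abs(k), 1e-12)
    mask = v_app >= v_min
    mask[0, 0] = True  # keep the DC component
    return np.real(np.fft.ifft2(spec * mask))

# Usage with a synthetic gather (2 ms sampling, 10 m trace spacing).
gather = np.random.randn(1000, 96)
filtered = fk_filter(gather, dt=0.002, dx=10.0, v_min=1000.0)
print(filtered.shape)
```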

Boosting the Reasoning-Based Approach by Applying Structural Metrics for Ontology Alignment

  • Khiat, Abderrahmane;Benaissa, Moussa
    • Journal of Information Processing Systems
    • /
    • v.13 no.4
    • /
    • pp.834-851
    • /
    • 2017
  • The number of information sources available on the web that use ontologies continues to increase, and these sources are often heterogeneous and distributed. Ontology alignment is the solution for ensuring semantic interoperability. In this paper, we describe a new ontology alignment approach that combines structure-based and reasoning-based approaches in order to discover new semantic correspondences between entities of different ontologies (a sketch of the combination step follows). We used the biblio test of the benchmark series and the anatomy series of the Ontology Alignment Evaluation Initiative (OAEI) 2012 evaluation campaign to evaluate the performance of our approach, comparing it successively with the LogMap and YAM++ systems. We also analyzed the contribution of our method relative to purely structural and semantic methods. The results show that our approach performs well: it outperforms the LogMap system in terms of precision, recall, and F-measure; it has proven more relevant than YAM++ for certain types of ontologies; and it significantly improves on the structure-based and reasoning-based methods.
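
A minimal sketch of one way to combine a structure-based similarity matrix with reasoner-inferred correspondences; the boost-and-threshold scheme is an assumption for illustration, not the paper's method.

```python
# Combine structural similarity scores with correspondences inferred by
# a reasoner: inferred pairs get a confidence boost before thresholding.

import numpy as np

def combine_alignments(structural_sim: np.ndarray,
                       inferred_pairs: set,
                       boost: float = 0.3,
                       threshold: float = 0.7) -> set:
    """Return (source, target) index pairs accepted as correspondences.

    structural_sim: (n_source, n_target) similarity scores in [0, 1].
    inferred_pairs: (i, j) pairs a reasoner has inferred to be equivalent.
    """
    sim = structural_sim.copy()
    for i, j in inferred_pairs:
        sim[i, j] = min(1.0, sim[i, j] + boost)  # reasoning boosts confidence
    return {(int(i), int(j)) for i, j in zip(*np.where(sim >= threshold))}

# Usage: 3 source entities vs. 3 target entities (toy scores).
s = np.array([[0.8, 0.1, 0.2],
              [0.5, 0.6, 0.1],
              [0.2, 0.1, 0.5]])
print(combine_alignments(s, inferred_pairs={(1, 1), (2, 2)}))
# {(0, 0), (1, 1), (2, 2)}: reasoning rescues two borderline matches
```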

Automated Segmentation of the Lateral Ventricle Based on Graph Cuts Algorithm and Morphological Operations

  • Park, Seongbeom;Yoon, Uicheul
    • Journal of Biomedical Engineering Research
    • /
    • v.38 no.2
    • /
    • pp.82-88
    • /
    • 2017
  • Enlargement of the lateral ventricles has been identified as a surrogate marker of neurological disorders. A quantitative measure of the lateral ventricle from MRI would enable earlier and more accurate clinical diagnosis when monitoring disease progression. Although objective quantification requires an automated or semi-automated segmentation method, the lateral ventricles are difficult to delineate because of the insufficient contrast and brightness of structural imaging. In this study, we proposed a fully automated lateral ventricle segmentation method based on a graph cuts algorithm combined with atlas-based segmentation and connected component labeling. Initially, seeds for graph cuts were defined by atlas-based segmentation (ATS) and then adjusted using partial volume images to provide accurate a priori information to the graph cuts. The graph cuts algorithm finds a global minimum of an energy function on a graph using the min-cut/max-flow algorithm. In addition, connected component labeling was used to remove false ventricle regions. The proposed method was validated against well-known tools using the Dice similarity index (sketched below), recall, and precision. The proposed method achieved a significantly higher Dice similarity index (0.860 ± 0.036, p < 0.001) and recall (0.833 ± 0.037, p < 0.001) than the other tools. Therefore, the proposed method yields a robust and reliable segmentation result.
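
A minimal sketch of the Dice similarity index used for validation, on toy binary masks standing in for the segmentation and the reference labeling.

```python
# Dice similarity index between a segmentation and a reference mask:
# Dice = 2|A ∩ B| / (|A| + |B|) for binary masks.

import numpy as np

def dice_index(seg: np.ndarray, ref: np.ndarray) -> float:
    """Dice overlap between two binary masks."""
    seg = seg.astype(bool)
    ref = ref.astype(bool)
    denom = seg.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(seg, ref).sum() / denom

# Usage with toy masks: two overlapping 4x4 squares.
a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((10, 10), dtype=bool); b[3:7, 3:7] = True
print(dice_index(a, b))  # 0.5625: 9 overlapping pixels, 16 + 16 total
```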