• Title/Summary/Keywords: Similarity on Data Structures

63 results found, processing time: 0.019 seconds

A XML DTD Matching using Fuzzy Similarity Measure

  • Kim, Chang-Suk;Son, Dong-Cheul;Kim, Dae-Su
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • Vol. 3, No. 1
    • /
    • pp.32-36
    • /
    • 2003
  • Equivalent schema matching among several different source schemas is very important for information integration and mining on the XML-based World Wide Web. Finding the source schema most similar to a given mediated schema is a major bottleneck because of the arbitrary nesting and hierarchical structure of XML DTD schemas: it is a complex job, both labor-intensive and error-prone. In this paper, we present the first complex matching of XML schemas, i.e., XML DTDs. The proposed method captures not only schematic information but also the integrity-constraint information of a DTD in order to match differently structured DTDs. We show that integrity-constraint-based hierarchical schema matching is more semantically accurate than schema matching that uses only schematic information and stored data.

A Dynamic Locality Sensitive Hashing Algorithm for Efficient Security Applications

  • Mohammad Y. Khanafseh;Ola M. Surakhi
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 24, No. 5
    • /
    • pp.79-88
    • /
    • 2024
  • The information retrieval domain deals with the retrieval of unstructured data such as text documents. Searching documents is a main component of a modern information retrieval system. Locality Sensitive Hashing (LSH) is one of the most popular methods for searching documents in a high-dimensional space, and its main benefit is a theoretical guarantee of query accuracy in a multi-dimensional space. LSH can be enhanced further by refining its steps. In this paper, a new Dynamic Locality Sensitive Hashing (DLSH) algorithm is proposed as an improved version of LSH; it relies on hierarchical selection of the LSH parameters (number of bands, number of shingles, and number of permutation lists) based on the similarity achieved by the algorithm, in order to optimize search accuracy. The technique was applied to several tampered file structures and its performance was evaluated. In some circumstances, the matching accuracy of DLSH exceeds 95% when optimal values are selected for the number of bands, the number of shingles, and the number of permutation lists. This result makes the DLSH algorithm suitable for many critical applications that depend on accurate searching, such as forensics technology.
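The band/shingle/permutation parameters the abstract tunes can be illustrated with a minimal MinHash-LSH sketch (this is not the DLSH algorithm itself; the parameter values and helper names below are arbitrary choices for illustration):

```python
import random

def minhash_signature(shingles, num_perm=8, seed=42):
    """Summarize a set with num_perm salted-hash minima (a MinHash signature)."""
    rng = random.Random(seed)
    salts = [rng.getrandbits(32) for _ in range(num_perm)]
    # For each simulated permutation, keep the minimum hash over the set.
    return [min(hash((salt, s)) for s in shingles) for salt in salts]

def lsh_candidates(sig_a, sig_b, bands=4):
    """Two signatures are candidate matches if any band of rows agrees exactly."""
    rows = len(sig_a) // bands
    return any(sig_a[i * rows:(i + 1) * rows] == sig_b[i * rows:(i + 1) * rows]
               for i in range(bands))

doc1 = {"the", "quick", "brown", "fox"}
doc2 = {"the", "quick", "brown", "wolf"}
sig1, sig2 = minhash_signature(doc1), minhash_signature(doc2)
print(lsh_candidates(sig1, sig1))  # identical signatures always collide: True
print(lsh_candidates(sig1, sig2))  # similar sets collide with high probability
```

More bands make collisions more likely (higher recall) while more rows per band make them stricter (higher precision), which is the trade-off a dynamic parameter selection scheme like DLSH navigates.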

Wind-induced random vibration of saddle membrane structures: Theoretical and experimental study

  • Rongjie Pan;Changjiang Liu;Dong Li;Yuanjun Sun;Weibin Huang;Ziye Chen
    • Wind and Structures
    • /
    • Vol. 36, No. 2
    • /
    • pp.133-147
    • /
    • 2023
  • The random vibration of saddle membrane structures under wind load is studied theoretically and experimentally. First, the nonlinear random vibration differential equations of saddle membrane structures under wind loads are established based on von Karman's large deflection theory, thin shell theory and potential flow theory. The probability density function (PDF) and the corresponding statistical parameters of the displacement response of the membrane structure are obtained by using diffusion process theory and the Fokker-Planck-Kolmogorov (FPK) equation method to solve the equations. Furthermore, a wind tunnel test is carried out to obtain displacement time-history data of the test model under wind load, and the statistical characteristics of the displacement time history of the prototype model are obtained by similarity theory and probability-statistics methods. Finally, the rationality of the theoretical model is verified by comparing it with the experimental model. The results show that the theoretical model agrees with the experimental model, and that the random vibration response can be effectively reduced by increasing the initial pretension force and the rise-span ratio within a certain range. The research methods provide a theoretical reference for the random vibration of membrane structures and a foundation for the structural reliability of membrane structures based on wind-induced response.

XML Document Clustering Based on Sequential Pattern

  • Hwang, Jeong-Hee;Ryu, Keun-Ho
    • The KIPS Transactions: Part D
    • /
    • Vol. 10D, No. 7
    • /
    • pp.1093-1102
    • /
    • 2003
  • With the growing use of the Internet, the volume of information is increasing exponentially, and thanks to the flexibility of XML as the standard representation for web data, systems that handle web-based electronic documents, such as EDMS (Electronic Document Management System) and ebXML (e-business eXtensible Markup Language), have adopted XML as their document exchange mechanism and standard document format. Research on efficient management and retrieval of the increasingly widespread XML documents is therefore needed. In this paper, in order to classify the structural similarity among multiple documents, we target XML documents, whose elements carry sequential semantics, extract representative structures that reflect document characteristics using sequential patterns, and present a method for clustering documents with similar structures based on the extracted structures. The proposed algorithm improves clustering accuracy by using a cost function that considers both intra-cluster cohesion and inter-cluster similarity.
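The idea of comparing ordered element structures can be sketched as follows (this uses `difflib`'s subsequence matcher as a stand-in for the paper's sequential-pattern extraction; the element sequences are invented examples):

```python
from difflib import SequenceMatcher

def path_sequence_similarity(seq_a, seq_b):
    """Similarity of two ordered element-name sequences in [0, 1],
    based on the length of their longest matching subsequences."""
    return SequenceMatcher(None, seq_a, seq_b).ratio()

# Element sequences extracted from three hypothetical XML documents.
doc_a = ["book", "title", "author", "price"]
doc_b = ["book", "title", "author", "isbn"]
doc_c = ["movie", "director", "runtime"]

print(path_sequence_similarity(doc_a, doc_b))        # 3 of 4 elements align: 0.75
print(path_sequence_similarity(doc_a, doc_c) < 0.5)  # structurally dissimilar: True
```

Clustering would then group documents whose representative sequences score above a threshold, balancing cohesion within a cluster against similarity across clusters as the abstract describes.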

Comparative Anatomic Structures of Nonhuman Primate Lungs 1. Literature Review

  • Kim, Moo-Kang
    • Korean Journal of Veterinary Research
    • /
    • Vol. 19, No. 1
    • /
    • pp.1-8
    • /
    • 1979
  • Detailed human gross anatomic structures have been well characterized, but no similar data are available for nonhuman primate species in spite of the close phylogenic similarity between man and nonhuman primates. The ever-increasing incidence of lung cancer and air-pollution-related respiratory ailments in man emphasizes the need for an ideal animal model for studying the pathogenesis of these various human pulmonary diseases. Thus, a detailed investigation of the pulmonary structures of various species of nonhuman primates is warranted. To determine primate gross pulmonary anatomic structure, published works concerning the number of tracheal cartilages, angle of tracheal bifurcation, caliber of the trachea, lung lobes and bifurcation position of the trachea recorded for several species of nonhuman primates were reviewed. Limited information is available concerning the number and width of tracheal cartilages, angle of the bronchus, caliber of the trachea and bronchus, and the bifurcation position of the trachea, including the length of the bronchus, in nonhuman primates. Since the scanty data that have been gathered make no specific reference to age, sex or body weight, they have little comparative value.


Semantic Process Retrieval with Similarity Algorithms

  • Lee, Hong-Joo
    • Asia pacific journal of information systems
    • /
    • Vol. 18, No. 1
    • /
    • pp.79-96
    • /
    • 2008
  • One of the roles of Semantic Web services is to execute dynamic intra-organizational services, including the integration and interoperation of business processes. Since different organizations design their processes differently, retrieval of similar semantic business processes is necessary in order to support inter-organizational collaborations. Most approaches for finding services that have certain features and support certain business processes have relied on some type of logical reasoning and exact matching. This paper presents our approach of using imprecise matching to expand the results from an exact matching engine when querying the OWL (Web Ontology Language) MIT Process Handbook, an electronic repository of best-practice business processes. The Handbook is intended to help people (1) redesign organizational processes, (2) invent new processes, and (3) share ideas about organizational practices. In order to use the MIT Process Handbook for process retrieval experiments, we export it into an OWL-based format: we model the Process Handbook meta-model in OWL and export the processes in the Handbook as instances of the meta-model. Next, we need a sizable number of queries and their corresponding correct answers from the Process Handbook. Many previous studies devised artificial datasets composed of randomly generated numbers without real meaning and used subjective ratings for correct answers and similarity values between processes. To generate a semantics-preserving test data set, we create 20 variants of each target process that are syntactically different but semantically equivalent, using mutation operators; these variants represent the correct answers for the target process. We devise diverse similarity algorithms based on the values of process attributes and the structures of business processes.
We use simple text-retrieval similarity algorithms such as TF-IDF and Levenshtein edit distance to devise our approaches, and utilize a tree edit distance measure because semantic processes appear to have a graph structure. We also design similarity algorithms that consider the similarity of process structure, such as part processes, goals, and exceptions. Since we can identify relationships between a semantic process and its subcomponents, this information can be utilized for calculating similarities between processes. Dice's coefficient and the Jaccard similarity measure are utilized to calculate the portion of overlap between processes in diverse ways. We perform retrieval experiments to compare the performance of the devised similarity algorithms, measuring retrieval performance in terms of precision, recall and the F measure, the harmonic mean of precision and recall. The tree edit distance shows the poorest performance on all measures. TF-IDF and the method combining TF-IDF with Levenshtein edit distance show better performance than the other devised methods; these two measures focus on similarity between the names and descriptions of processes. In addition, we calculate a rank correlation coefficient, Kendall's tau-b, between the number of process mutations and the ranking of similarity values among the mutation sets. In this experiment, similarity measures based on process structure, such as Dice's coefficient, Jaccard, and their derivatives, show larger coefficients than measures based on the values of process attributes. However, the Lev-TFIDF-JaccardAll measure, which considers process structure and attribute values together, shows reasonably better performance in both experiments. For retrieving semantic processes, it is therefore better to consider diverse aspects of process similarity, such as process structure and the values of process attributes.
We generate semantic process data and a dataset for retrieval experiments from the MIT Process Handbook repository. We suggest imprecise query algorithms that expand the retrieval results from an exact matching engine such as SPARQL, and compare the retrieval performance of the similarity algorithms. As limitations and future work, we need to perform experiments with datasets from other domains, and since there are many similarity values from diverse measures, we may find better ways to identify relevant processes by applying these values simultaneously.
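The two overlap measures named in the abstract can be illustrated on two processes' attribute sets (the activity names are invented for the example; this is not the paper's Lev-TFIDF-JaccardAll combination):

```python
def jaccard(a, b):
    """|A ∩ B| / |A ∪ B|: overlap relative to the union of both sets."""
    return len(a & b) / len(a | b) if a | b else 1.0

def dice(a, b):
    """2|A ∩ B| / (|A| + |B|): weights the shared elements more heavily."""
    return 2 * len(a & b) / (len(a) + len(b)) if a or b else 1.0

p1 = {"receive order", "check stock", "ship goods"}
p2 = {"receive order", "check stock", "send invoice"}
print(jaccard(p1, p2))  # 2 shared / 4 in the union -> 0.5
print(dice(p1, p2))     # 2*2 / (3+3) -> 0.666...
```

Applied to sets of subcomponents (part processes, goals, exceptions), such coefficients give the structural similarity scores that the experiments compare against the attribute-value measures.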

CS-Tree: Cell-based Signature Index Structure for Similarity Search in High-Dimensional Data

  • Song, Kwang-Taek;Chang, Jae-Woo
    • The KIPS Transactions: Part D
    • /
    • Vol. 8D, No. 4
    • /
    • pp.305-312
    • /
    • 2001
  • High-dimensional index structures are increasingly required for similarity search in database applications such as multimedia databases and data warehousing. In this paper, we propose a cell-based signature tree (CS-tree) that supports efficient storage and retrieval of high-dimensional feature vectors. The proposed CS-tree partitions the high-dimensional feature vector space into cells and represents each feature vector as the signature of its corresponding cell. Using cell signatures instead of feature vectors reduces the depth of the tree and thus achieves efficient retrieval performance. We also present a similarity search algorithm that efficiently prunes the search space based on cells. Finally, we compare performance with the X-tree, a well-known high-dimensional indexing technique, in terms of insertion time, retrieval time for k-nearest neighbor queries, and additional storage overhead. The results show that the CS-tree is superior in retrieval performance.
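The cell-signature idea can be sketched as quantizing each coordinate of a normalized feature vector into a cell index (the grid resolution and vectors below are arbitrary illustrative choices, not the paper's configuration):

```python
def cell_signature(vector, cells_per_dim=4):
    """Map each coordinate of a [0, 1) feature vector to a cell index;
    the tuple of indices is the vector's compact cell signature."""
    return tuple(min(int(x * cells_per_dim), cells_per_dim - 1) for x in vector)

v_query = (0.12, 0.80, 0.55)
v_near  = (0.10, 0.79, 0.60)  # falls into the same cells as the query
v_far   = (0.90, 0.05, 0.30)
print(cell_signature(v_query))                            # (0, 3, 2)
print(cell_signature(v_query) == cell_signature(v_near))  # True
print(cell_signature(v_query) == cell_signature(v_far))   # False
```

Comparing short signatures instead of full vectors is what lets such an index prune candidate cells cheaply before touching the feature vectors themselves.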


Discovering Association Rules using Item Clustering on Frequent Pattern Network

  • Oh, Kyung-Jin;Jung, Jin-Guk;Ha, In-Ay;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • Vol. 14, No. 1
    • /
    • pp.1-17
    • /
    • 2008
  • Data mining is the task of extracting meaningful, useful patterns and correlations hidden in large volumes of data and applying them to decision making. In particular, finding association rules among items in customer transaction databases has become important. Since the Apriori algorithm, many data structures and algorithms have been proposed for storing compressed, meaningful information from large databases in order to find association rules. Existing approaches for discovering association rules find all the rules, but because far too many rules are generated for a person to analyze, analyzing them requires a lengthy process. In this paper, we propose and apply a data structure called a Frequent Pattern Network. The network consists of vertices and edges, where vertices represent items and edges represent sets of two items. We construct the frequent pattern network using item frequencies and measure the similarity between items. Clusters are then formed so that items within a cluster have high similarity and items in different clusters have low similarity. We generate association rules using the clusters and compare performance with the Apriori and FP-Growth algorithms through experiments. The results show that using confidence-based similarity on the frequent pattern network improves the accuracy of the clusters, and a comparison with traditional methods shows that using the frequent pattern network offers flexibility with respect to the minimum support threshold.
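The rule-quality measures that Apriori-style mining (and the confidence-based similarity above) build on can be shown on a toy transaction database (the items are invented for the example):

```python
# A toy transaction database of customer purchases.
transactions = [
    {"milk", "bread"},
    {"milk", "bread", "butter"},
    {"bread", "butter"},
    {"milk", "butter"},
]

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """Support of the combined itemset relative to the antecedent alone."""
    return support(antecedent | consequent) / support(antecedent)

print(support({"milk", "bread"}))       # 2 of 4 transactions -> 0.5
print(confidence({"milk"}, {"bread"}))  # 0.5 / 0.75 -> 0.666...
```

A rule such as "milk ⇒ bread" is reported only when both its support and its confidence clear user-set thresholds, which is why the choice of minimum support matters for how many rules are generated.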


Research on Community Knowledge Modeling of Readers Based on Interest Labels

  • Kai, Wang;Wei, Pan;Xingzhi, Chen
    • Journal of Information Processing Systems
    • /
    • Vol. 19, No. 1
    • /
    • pp.55-66
    • /
    • 2023
  • Community portraits can deeply explore the characteristics of community structures and describe the personalized knowledge needs of community users, which is of great practical significance for improving community recommendation services as well as the accuracy of resource push. Current community portraits generally suffer from weak perception of interest characteristics and a low degree of integration of topic information. To resolve these problems, a reader community portrait method based on the thematic and timeliness characteristics of interest labels (UIT) is proposed. First, community opinion leaders are identified based on multi-feature calculations, and the topic features of their texts are identified with the LDA topic model; on this basis, a semantic mapping of "reader community-opinion leader-text content" is established. Second, the readers' interest similarity to the labels is dynamically updated, and two kinds of tag parameters are integrated, namely the intensity of interest labels and the stability of interest labels. Finally, the similarity distance between an opinion leader and a topic of interest is calculated to obtain the dynamic interest set of the opinion leaders. An experimental analysis was conducted on real data from the Douban reading community. The results show that UIT achieves the highest average F value (0.551) compared to state-of-the-art approaches, indicating that UIT performs better along the smoothed time dimension.

Clustering Techniques for XML Data Using Data Mining

  • Kim, Chun-Sik
    • The Korea Society of Electronic Commerce: Conference Proceedings
    • /
    • e-Biz World Conference 2005, The Korea Society of Electronic Commerce
    • /
    • pp.189-194
    • /
    • 2005
  • Many studies have been conducted to classify documents and to extract useful information from them. However, most search engines have used keyword-based methods, which do not search and classify documents effectively. Exploiting the fact that an XML document is a structured document, this paper identifies the structure of XML documents using the set-theoretic resemblance measure suggested by Broder, and attempts to cluster XML documents by applying a k-nearest neighbor algorithm. In addition, this study investigates the effectiveness of the clustering technique for large-scale data, compared to the existing bitmap method, by applying a test that measures differences between clause-based documents instead of representing them as vectors when measuring similarity.
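Broder's set-theoretic measure mentioned above compares documents by the Jaccard resemblance of their k-shingle sets; a minimal sketch (the XML snippets are invented examples, and the shingle length is an arbitrary choice):

```python
def shingles(text, k=3):
    """All k-character substrings (k-shingles) of a string, as a set."""
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def resemblance(a, b, k=3):
    """Broder's resemblance: Jaccard similarity of the two shingle sets."""
    sa, sb = shingles(a, k), shingles(b, k)
    return len(sa & sb) / len(sa | sb)

print(resemblance("<a><b/></a>", "<a><b/></a>"))  # identical documents: 1.0
print(resemblance("<a><b/></a>", "<a><c/></a>"))  # one element renamed: 0.5
```

A clustering step such as k-nearest neighbor can then treat documents whose resemblance exceeds a threshold as structurally similar.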
