• Title/Summary/Keyword: tree search algorithm


Large Scale Protein Side-chain Packing Based on Maximum Edge-weight Clique Finding Algorithm

  • K.C., Dukka Bahadur;Brown, J.B.;Tomita, Etsuji;Suzuki, Jun'ichi;Akutsu, Tatsuya
    • Proceedings of the Korean Society for Bioinformatics Conference / 2005.09a / pp.228-233 / 2005
  • The protein side-chain packing problem (SCPP) is known to be NP-complete, and various graph-theoretic side-chain packing algorithms have been proposed. However, as the size of the protein grows, the sampling space increases exponentially, so one approach to coping with the time complexity is to decompose the graph of the protein into smaller subgraphs. Some existing approaches decompose the graph into biconnected components at articulation points (yielding subgraphs of at most 21 residues) or solve the SCPP by tree decomposition (4- to 5-residue subgraphs). In this regard, we previously presented a deterministic approach called SPWCQ based on the notion of the maximum edge-weight clique, in which the SCPP is reduced to a graph problem and the maximum edge-weight clique of the resulting graph is computed. That algorithm performs well for proteins of fewer than 500 residues but fails to produce a feasible solution for larger proteins because of the size of the search space. In this paper, we present a new heuristic approach to the side-chain packing problem based on a maximum edge-weight clique finding algorithm that enables us to compute the side-chain packing of much larger proteins. The new approach can compute the side-chain packing of a protein of 874 residues with an RMSD of 1.423 Å.
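
For readers unfamiliar with the reduction, the sketch below shows the general shape of a greedy maximum edge-weight clique heuristic in Python. It is an illustrative stand-in, not the algorithm from the paper: the `weight` function (a rotamer-pair compatibility score) and the greedy seeding strategy are assumptions.

```python
def greedy_max_edge_weight_clique(vertices, weight):
    """Greedy sketch: grow a clique, always adding the vertex that
    increases total edge weight the most. weight(u, v) returns the
    edge weight, or None if u and v are incompatible (no edge)."""
    best_clique, best_score = [], float("-inf")
    for seed in vertices:
        clique, score = [seed], 0.0
        candidates = [v for v in vertices if v != seed]
        while candidates:
            # Score each candidate by the edge weight it adds to the clique.
            gains = []
            for v in candidates:
                ws = [weight(u, v) for u in clique]
                if any(w is None for w in ws):  # not adjacent to all members
                    continue
                gains.append((sum(ws), v))
            if not gains:
                break
            gain, v = max(gains)
            clique.append(v)
            score += gain
            candidates.remove(v)
        if score > best_score:
            best_clique, best_score = clique, score
    return best_clique, best_score

# Toy demo: vertices are (residue, rotamer) pairs; same-residue pairs clash.
verts = [(r, c) for r in range(3) for c in range(2)]
def w(u, v):
    return None if u[0] == v[0] else 1.0 + 0.1 * (u[1] + v[1])
print(greedy_max_edge_weight_clique(verts, w))
```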

Human Body Motion Tracking Using ICP and Particle Filter

  • Kim, Dae-Hwan;Kim, Hyo-Jung;Kim, Dai-Jin
    • Journal of KIISE: Software and Applications / v.36 no.12 / pp.977-985 / 2009
  • This paper proposes a real-time algorithm for tracking a fast-moving human body. Although the iterative closest point (ICP) algorithm is suitable for real-time tracking because of its efficiency and low computational complexity, ICP often fails to converge when the human body moves fast, because the closest points may be selected incorrectly and the algorithm becomes trapped in a local minimum. To overcome this limitation, we combine a particle filter based on motion history information with ICP. The proposed human body motion tracking algorithm reduces the search space for each limb by employing a hierarchical tree structure and enables tracking of fast-moving human bodies through motion prediction based on the motion history. Experimental results show that the proposed method provides accurate tracking performance and a fast convergence rate.
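
For context, a minimal point-to-point ICP iteration in Python/NumPy is sketched below. This is the generic textbook form (brute-force correspondences plus the Kabsch/SVD rigid-transform step), not the paper's hierarchical body-model variant.

```python
import numpy as np

def icp(src, dst, iters=20):
    """Align src (N x 3) to dst (M x 3) with point-to-point ICP."""
    R, t = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # Brute-force closest point in dst for every point in cur.
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # Kabsch: best rigid transform taking cur onto matched.
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = mu_d - R_step @ mu_s
        cur = cur @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step  # accumulate the pose
    return R, t
```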

Sintering process optimization of ZnO varistor materials by machine learning based metamodel

  • Kim, Boyeol;Seo, Ga Won;Ha, Manjin;Hong, Youn-Woo;Chung, Chan-Yeup
    • Journal of the Korean Crystal Growth and Crystal Technology / v.31 no.6 / pp.258-263 / 2021
  • A ZnO varistor is a semiconductor device that can protect a circuit from surge voltages thanks to the non-linear I-V characteristics obtained by controlling the microstructure of grains and grain boundaries. To obtain the desired electrical properties, it is important to control microstructure evolution during the sintering process. In this research, we defined a dataset composed of sintering process conditions and the relative permittivity of the sintered body, and collected the experimental dataset with design of experiments (DOE). Meta-models that predict permittivity were developed by training various machine learning algorithms on the collected experimental dataset. Using a meta-model, we derived optimized sintering conditions showing the maximum permittivity with the numerically based HMA (Hybrid Metaheuristic Algorithm) optimization algorithm. Applying meta-model-based optimization to ceramic processing makes it possible to search for optimal process conditions with a minimum number of experiments.
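
The meta-model-plus-optimizer loop the abstract describes can be sketched generically. Below, a random-forest surrogate and a plain random search stand in for the paper's machine learning models and the HMA optimizer; the three process variables and the toy response are placeholders, not the paper's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# X: sintering conditions from DOE (placeholder: temperature, hold time,
# ramp rate); y: measured relative permittivity (synthetic toy response).
rng = np.random.default_rng(0)
X = rng.uniform([1100, 1, 1], [1350, 8, 10], size=(30, 3))
y = -((X[:, 0] - 1250) ** 2) / 1e4 + 0.3 * X[:, 1] + rng.normal(0, 0.1, 30)

# Meta-model: learn permittivity as a function of process conditions.
meta_model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Random search stands in here for the paper's HMA optimizer.
candidates = rng.uniform([1100, 1, 1], [1350, 8, 10], size=(100_000, 3))
best = candidates[meta_model.predict(candidates).argmax()]
print("predicted-optimal sintering condition:", best)
```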

Construction of Theme Melody Index by Transforming Melody to Time-series Data for Content-based Music Information Retrieval

  • Ha, Jin-Seok;Ku, Kyong-I;Park, Jae-Hyun;Kim, Yoo-Sung
    • The KIPS Transactions: Part D / v.10D no.3 / pp.547-558 / 2003
  • Since a music melody has features similar to time-series data, a melody is transformed into time-series data with normalization and corrections, and the similarity between melodies is defined as the Euclidean distance between the transformed time series. Then, based on the similarity between the melodies of a music object, the melodies are clustered and the representative of each cluster is extracted as one of the theme melodies of the music. To construct the theme melody index, each theme melody is represented as a point in the multidimensional metric space of an M-tree. At retrieval time, the user's query melody is transformed into time-series data in the same way as in the indexing phase, and melodies similar to the query are retrieved from the theme melody index with a range query search algorithm. Through a prototype implementation using the proposed theme melody index, we show the effectiveness of the proposed methods.
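
A minimal sketch of the melody-to-time-series transformation and Euclidean similarity described above; the sampling resolution and the mean/variance normalization are assumptions standing in for the paper's normalization and corrections.

```python
import numpy as np

def melody_to_series(pitches, durations, samples=64):
    """Sample a (pitch, duration) melody into a fixed-length series and
    normalize it, so melodies of different keys/lengths are comparable."""
    counts = np.maximum(1, np.round(
        np.asarray(durations) / sum(durations) * samples)).astype(int)
    t = np.repeat(pitches, counts)
    t = np.interp(np.linspace(0, len(t) - 1, samples), np.arange(len(t)), t)
    return (t - t.mean()) / (t.std() or 1.0)  # transposition/scale invariance

def melody_distance(a, b):
    return float(np.linalg.norm(a - b))  # Euclidean distance

s1 = melody_to_series([60, 62, 64, 65], [1, 1, 1, 2])
s2 = melody_to_series([62, 64, 66, 67], [1, 1, 1, 2])  # same shape, transposed
print(melody_distance(s1, s2))  # near 0 after normalization
```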

Optimization of Warp-wide CUDA Implementation for Parallel Shifted Sort Algorithm

  • Park, Taejung
    • Journal of Digital Contents Society / v.18 no.4 / pp.739-745 / 2017
  • This paper presents and discusses an implementation of the GPU shifted sorting method for finding approximate k nearest neighbors that executes within a "warp", the minimum execution unit in the GPU parallel architecture. It also presents comparison results against two other common nearest neighbor search methods, a GPU-based kd-tree and the ANN (Approximate Nearest Neighbor) library. The proposed implementation focuses on the cases where k is small, i.e. 2, 4, 8, and 16, which can be handled efficiently within a warp and are very common in applications. The paper also discusses ways to optimize the implementation by improving memory management in a loop that uses the open-source CUB library and by adopting CUDA commands supported by the GPU hardware. The proposed implementation shows more than a 16-fold speed-up over the other GPU-based methods in the tests, and the improvement is expected to grow for larger input data.
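
The shifted-sorting idea can be sketched on the CPU in Python: sort points by Morton (Z-order) code under several random shifts, then search only a small window of each sorted order. This is a conceptual sketch of the approach, not the warp-level CUDA implementation; the shift count and window size are arbitrary choices.

```python
import numpy as np

def interleave_bits(coords, bits=10):
    """Morton (Z-order) code for integer grid coordinates."""
    code = 0
    for b in range(bits):
        for d, c in enumerate(coords):
            code |= ((int(c) >> b) & 1) << (b * len(coords) + d)
    return code

def shifted_sort_knn(points, k, shifts=4, window=8, seed=0):
    """Approximate k-NN: collect candidates that fall near each point in
    several shifted Morton orders, then rank candidates exactly."""
    rng = np.random.default_rng(seed)
    n, d = points.shape
    lo, hi = points.min(0), points.max(0)
    cand = [set() for _ in range(n)]
    for _ in range(shifts):
        grid = (points - lo) / (hi - lo + 1e-9) + rng.uniform(0, 1, d)
        grid = (grid % 1.0 * (1 << 10)).astype(int)
        order = np.argsort([interleave_bits(g) for g in grid])
        for pos, i in enumerate(order):
            for j in order[max(0, pos - window):pos + window + 1]:
                if j != i:
                    cand[i].add(j)
    # Exact distances, but only among the few collected candidates.
    result = []
    for i in range(n):
        c = np.fromiter(cand[i], int)
        dists = np.linalg.norm(points[c] - points[i], axis=1)
        result.append(c[np.argsort(dists)[:k]])
    return result

pts = np.random.default_rng(1).random((200, 3))
print(shifted_sort_knn(pts, k=4)[:2])
```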

Efficient Indexing for Large DNA Sequence Databases

  • Won Jung-Im;Yoon Jee-Hee;Park Sang-Hyun;Kim Sang-Wook
    • Journal of KIISE: Databases / v.31 no.6 / pp.650-663 / 2004
  • In molecular biology, DNA sequence searching is one of the most crucial operations. Since DNA databases contain a huge volume of sequences, a fast indexing mechanism is essential for efficient processing of DNA sequence searches. In this paper, we first identify the problems of the suffix tree in terms of storage overhead, search performance, and integration with DBMSs. We then propose a new index structure that solves these problems. The proposed index consists of two parts: the primary part represents the trie as bit strings without any pointers, and the secondary part supports fast access to the leaf nodes of the trie that need to be accessed for post-processing. We also suggest an efficient algorithm based on this index for DNA sequence searching. To verify the superiority of the proposed approach, we conducted a performance evaluation via a series of experiments. The results revealed that the proposed approach requires smaller storage space and achieves a 13- to 29-fold performance improvement over the suffix tree.
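
The abstract does not give the exact bit-string layout, but LOUDS (Level-Order Unary Degree Sequence) is one standard way to represent a trie as a pointer-free bit string; a sketch for a toy DNA trie follows.

```python
from collections import deque

def build_trie(strings):
    """Trie as nested dicts mapping characters to child dicts."""
    root = {}
    for s in strings:
        node = root
        for ch in s:
            node = node.setdefault(ch, {})
    return root

def louds_bits(root):
    """Encode a trie as a LOUDS bit string: visit nodes in level order,
    writing one '1' per child followed by a terminating '0'."""
    bits, labels = [1, 0], []           # super-root convention
    q = deque([root])
    while q:
        node = q.popleft()
        for ch, child in sorted(node.items()):
            bits.append(1)
            labels.append(ch)
            q.append(child)
        bits.append(0)
    return bits, labels

trie = build_trie(["ACGT", "ACGA", "AGGT"])
bits, labels = louds_bits(trie)
print("".join(map(str, bits)), "".join(labels))
```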

A Searching Technique of the Weak Connectivity Boundary using Small Unmanned Aerial Vehicle in Wireless Tactical Data Networks

  • Li, Jin;Song, Ju-Bin
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.1C / pp.89-96 / 2012
  • Since the use of tactical robots is expected to grow and tactical data communications will become more network-centric, the reliability of wireless tactical data networks will be very important in the future. However, the connectivity of such networks can be highly uncertain in practical circumstances. In this paper, we propose a search technique that finds the weak boundary area of network connectivity using a small UAV (unmanned aerial vehicle) that has a simple polling-access function to wireless nodes on the ground. The UAV computes the network topology of the wireless tactical data network and converts it to the Laplacian matrix. The proposed algorithm iteratively searches the eigenvalues and finds a minimum cut in the network, thereby locating the weak connectivity boundary. When the UAV then works as a relay node for the weak area, the throughput of the proposed algorithm outperforms the star connection method and the MST (Minimum Spanning Tree) connection method. The proposed algorithm can be applied to recover the connectivity of wireless tactical data networks.
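
The Laplacian step can be illustrated with standard spectral partitioning: the sign pattern of the Fiedler vector (the eigenvector of the second-smallest Laplacian eigenvalue) yields an approximate minimum cut, and that eigenvalue measures how weakly the two sides are connected. A sketch, not the paper's iterative algorithm:

```python
import numpy as np

def weak_boundary(adj):
    """adj: symmetric 0/1 adjacency matrix (n x n). Returns the two node
    groups separated by the approximate min cut, plus the algebraic
    connectivity (small value = weakly connected network)."""
    adj = np.asarray(adj, float)
    L = np.diag(adj.sum(1)) - adj          # graph Laplacian
    vals, vecs = np.linalg.eigh(L)         # eigh: L is symmetric
    fiedler = vecs[:, 1]                   # second-smallest eigenvalue
    group_a = np.where(fiedler >= 0)[0]
    group_b = np.where(fiedler < 0)[0]
    return group_a, group_b, vals[1]

# Two 3-node cliques joined by a single weak link (nodes 2-3).
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
print(weak_boundary(A))
```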

Efficient Collaboration Method Between CPU and GPU for Generating All Possible Cases in Combination

  • Son, Ki-Bong;Son, Min-Young;Kim, Young-Hak
    • KIPS Transactions on Computer and Communication Systems / v.7 no.9 / pp.219-226 / 2018
  • One systematic way to generate all possible cases of a combination is to construct a combination tree, whose time complexity is O(2^n). A combination tree is used for various purposes, such as the graph homogeneity problem and as the initial model for computing frequent item sets. However, algorithms that must search all cases of a combination are hard to use in practice because of their high time complexity. Nevertheless, as the amount of data grows and more studies seek to exploit it, the need to search all cases keeps increasing. Recently, as GPU environments have become popular and easily accessible, various attempts have been made to reduce running time by parallelizing algorithms whose serial time complexity is high. Because generating all cases of a combination is sequential and the sub-task sizes are skewed, it is not directly suitable for parallel implementation; the efficiency of a parallel algorithm is maximized when all threads have tasks of similar size. In this paper, we propose a method for efficient collaboration between the CPU and GPU to parallelize the problem of generating all cases. To evaluate the performance of the proposed algorithm, we analyze its theoretical time complexity and compare its running time with other algorithms in CPU and GPU environments. Experimental results show that the proposed CPU-GPU collaboration algorithm keeps the execution times of the CPU and GPU balanced compared with previous algorithms, and that the execution time improves remarkably as the number of elements increases.
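
One standard way to hand every thread an equal share of the C(n, k) combinations, in line with the balance requirement above, is to unrank: map an integer index directly to its combination so any contiguous index range can be assigned to a thread. The sketch below uses the combinatorial number system; it illustrates the balancing idea, not the paper's CPU-GPU algorithm.

```python
from math import comb

def unrank_combination(idx, n, k):
    """Return the idx-th k-combination of range(n) in lexicographic order."""
    combo, x = [], 0
    for r in range(k, 0, -1):
        # Skip whole blocks of C(n-1-x, r-1) combinations starting at x.
        while comb(n - 1 - x, r - 1) <= idx:
            idx -= comb(n - 1 - x, r - 1)
            x += 1
        combo.append(x)
        x += 1
    return combo

# Split all C(6, 3) = 20 combinations evenly over 4 "threads".
n, k, threads = 6, 3, 4
total = comb(n, k)
for t in range(threads):
    lo, hi = t * total // threads, (t + 1) * total // threads
    print(f"thread {t}:", [unrank_combination(i, n, k) for i in range(lo, hi)])
```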

A DB Pruning Method in a Large Corpus-Based TTS with Multiple Candidate Speech Segments

  • Lee, Jung-Chul;Kang, Tae-Ho
    • The Journal of the Acoustical Society of Korea / v.28 no.6 / pp.572-577 / 2009
  • Large corpus-based concatenative text-to-speech (TTS) systems can generate natural synthetic speech without additional signal processing. To prune the redundant speech segments in a large speech-segment DB, we can utilize a decision-tree-based triphone clustering algorithm widely used in the speech recognition area. However, the conventional methods have problems in representing the acoustic transitional characteristics of phones and in applying context questions with hierarchical priority. In this paper, we propose a new clustering algorithm to downsize the speech DB. First, three 13th-order MFCC vectors from the first, medial, and final frames of a phone are combined into a 39-dimensional vector to represent the transitional characteristics of the phone. Then three hierarchically grouped question sets are used to construct the triphone trees. For the performance test, we used the DTW algorithm to calculate the acoustic similarity between the target triphone and the triphone returned by the tree search. Experimental results show that the proposed method reduces the size of the speech DB by 23% and selects better phones with higher acoustic similarity, so it can be applied to build a small-sized TTS system.
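
The DTW distance used in the evaluation is standard dynamic programming; a compact form is sketched below, with random 39-dimensional vectors standing in for the paper's MFCC-based phone features.

```python
import numpy as np

def dtw(a, b):
    """Dynamic time warping distance between two feature sequences
    a (n x d) and b (m x d), with Euclidean local distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

# e.g. two triphone realizations as sequences of 39-dim feature vectors
x, y = np.random.rand(14, 39), np.random.rand(17, 39)
print(dtw(x, y))
```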

Visual Semantic Based 3D Video Retrieval System Using HDFS

  • Ranjith Kumar, C.;Suguna, S.
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.8 / pp.3806-3825 / 2016
  • This paper presents a new framework for visual-semantic-based 3D video search and retrieval. Existing 3D retrieval applications focus on shape analysis such as object matching, classification, and retrieval, rather than on full video retrieval. In this context, we investigate the concept of 3D content-based video retrieval (3D-CBVR) for the first time, combining the bag of visual words (BOVW) model and MapReduce in a 3D framework. Shape, color, and texture are combined for feature extraction: a combination of geometric and topological features describes shape, and a 3D co-occurrence matrix describes color and texture. After the local descriptors are extracted, the TB-PCT (Threshold-Based Predictive Clustering Tree) algorithm is used to generate the visual codebook. Matching is performed using a soft-weighting scheme with the L2 distance function, and the retrieved results are ranked according to their index values. To handle a huge amount of data and retrieve it efficiently, HDFS is incorporated into the system. Using a 3D video dataset, we evaluate the performance of the proposed system and show that it gives accurate results while reducing time complexity.
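
To make the soft-weighting step concrete, here is a generic soft-assignment BOVW sketch: each local descriptor distributes weight over its nearest visual words, decaying with L2 distance. The random codebook (rather than one built with TB-PCT), the Gaussian decay kernel, and the neighbor count are illustrative assumptions.

```python
import numpy as np

def soft_bovw_histogram(descriptors, codebook, nn=4, sigma=0.2):
    """Soft-assignment BOVW: each descriptor votes for its nn nearest
    visual words with weights decaying in squared L2 distance."""
    hist = np.zeros(len(codebook))
    for d in descriptors:
        dists = np.linalg.norm(codebook - d, axis=1)
        nearest = np.argsort(dists)[:nn]
        w = np.exp(-dists[nearest] ** 2 / (2 * sigma ** 2))
        hist[nearest] += w / (w.sum() + 1e-12)
    return hist / (hist.sum() + 1e-12)

codebook = np.random.rand(64, 32)           # 64 visual words, 32-dim features
video_descriptors = np.random.rand(500, 32)
query_descriptors = np.random.rand(300, 32)
h1 = soft_bovw_histogram(video_descriptors, codebook)
h2 = soft_bovw_histogram(query_descriptors, codebook)
print(np.linalg.norm(h1 - h2))              # L2 match score (smaller = closer)
```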