• Title/Summary/Keyword: tree search algorithm


An Efficient Data Structure for Queuing Jobs in Dynamic Priority Scheduling under the Stack Resource Policy (Stack Resource Policy를 사용하는 동적 우선순위 스케줄링에서 작업 큐잉을 위한 효율적인 자료구조)

  • Han Sang-Chul;Park Moon-Ju;Cho Yoo-Kun
    • Journal of KIISE:Computer Systems and Theory / v.33 no.6 / pp.337-343 / 2006
  • The Stack Resource Policy (SRP) is a real-time synchronization protocol with some distinct properties. One such property is early blocking: the execution of a job is delayed, instead of being blocked, when it requests shared resources. If SRP is used with dynamic priority scheduling such as Earliest Deadline First (EDF), early blocking requires that the scheduler select the highest-priority job among the jobs that will not be blocked, incurring runtime overhead. In this paper, we analyze the runtime overhead of EDF scheduling when SRP is used. We find that the overhead of job search with conventional ready-queue implementations and job-search algorithms becomes serious as the number of jobs increases. To solve this problem, we propose an alternative data structure for the ready queue and an efficient job-search algorithm with $O(\lceil \log_2 n \rceil)$ time complexity.
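The selection rule described in this abstract can be made concrete with a small sketch. The Python snippet below (names such as Job and select_next_job are illustrative, not from the paper) shows an EDF ready queue under SRP in which the scheduler must pick the earliest-deadline job whose preemption level exceeds the current system ceiling; with a plain binary heap this can degenerate into popping and re-pushing blocked jobs, which is exactly the overhead the paper's $O(\lceil \log_2 n \rceil)$ structure is designed to avoid.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    deadline: float                       # absolute deadline (EDF priority)
    name: str = field(compare=False)
    preemption_level: int = field(compare=False)

def select_next_job(ready_heap, system_ceiling):
    """Return the earliest-deadline job whose preemption level exceeds
    the current system ceiling (SRP's early-blocking rule)."""
    skipped, chosen = [], None
    while ready_heap:
        job = heapq.heappop(ready_heap)
        if job.preemption_level > system_ceiling:
            chosen = job
            break
        skipped.append(job)               # blocked by SRP: keep for later
    for job in skipped:                   # restore the jobs we skipped
        heapq.heappush(ready_heap, job)
    return chosen

# Usage: J1 has the earliest deadline but is blocked by the system ceiling.
ready = []
heapq.heappush(ready, Job(deadline=10.0, name="J1", preemption_level=1))
heapq.heappush(ready, Job(deadline=20.0, name="J2", preemption_level=3))
print(select_next_job(ready, system_ceiling=2).name)   # -> "J2"
```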

Low Complexity Iterative Detection and Decoding using an Adaptive Early Termination Scheme in MIMO system (다중 안테나 시스템에서 적응적 조기 종료를 이용한 낮은 복잡도 반복 검출 및 복호기)

  • Joung, Hyun-Sung;Choi, Kyung-Jun;Kim, Kyung-Jun;Kim, Kwang-Soon
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.8C / pp.522-528 / 2011
  • Iterative detection and decoding (IDD) has been shown to dramatically improve the bit error rate (BER) performance of multiple-input multiple-output (MIMO) communication systems. However, these techniques require high computational complexity because soft decisions must be computed for each bit. In this paper, we present an IDD receiver consisting of a sphere decoder and low-density parity-check (LDPC) codes, and propose a tree search strategy, called layer symbol search (LSS), to obtain soft decisions with low computational complexity. In addition, an adaptive early termination scheme is proposed to reduce the computational complexity of the iteration between the inner sphere decoder and the outer LDPC decoder. It is shown that the proposed approach achieves performance similar to that of an existing algorithm with 70% lower computational complexity than the conventional algorithms.
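A rough picture of the iteration the abstract refers to, assuming a standard syndrome-check stopping rule rather than the paper's specific adaptive criterion; the callables detect and decode are placeholders for the inner sphere detector and outer LDPC decoder.

```python
import numpy as np

def iterative_detect_decode(y, detect, decode, H, max_iters=8):
    """Skeleton of an IDD loop with early termination (illustrative only).

    detect(y, prior_llr) -> extrinsic LLRs from the (sphere) detector
    decode(llr)          -> (posterior LLRs, hard-decision bits) from the LDPC decoder
    H                    -> LDPC parity-check matrix (numpy array of 0/1)
    """
    prior = None
    for it in range(max_iters):
        ext_det = detect(y, prior)            # inner detector (e.g. LSS sphere search)
        post, bits = decode(ext_det)          # outer LDPC decoder
        if not np.any(H.dot(bits) % 2):       # all parity checks satisfied
            return bits, it + 1               # stop early: no further iterations needed
        prior = post - ext_det                # feed back extrinsic information
    return bits, max_iters
```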

A Data Mining Approach for Selecting Bitmap Join Indices

  • Bellatreche, Ladjel;Missaoui, Rokia;Necir, Hamid;Drias, Habiba
    • Journal of Computing Science and Engineering / v.1 no.2 / pp.177-194 / 2007
  • Index selection is one of the most important decisions to make in the physical design of relational data warehouses. Indices significantly reduce the cost of processing complex OLAP queries, but they incur storage costs and maintenance overhead. Two main types of indices are available: mono-attribute indices (e.g., B-tree, bitmap, hash) and multi-attribute indices (join indices, bitmap join indices). To optimize star join queries, which are characterized by joins between a large fact table and multiple dimension tables and by selections on dimension tables, bitmap join indices are well adapted; they require less storage due to their binary representation. However, selecting these indices is a difficult task because of the exponential number of candidate attributes to be indexed. Most approaches for index selection follow two main steps: (1) pruning the search space (i.e., reducing the number of candidate attributes) and (2) selecting indices from the pruned search space. In this paper, we first propose a data-mining-driven approach to prune the search space of the bitmap join index selection problem. As opposed to an existing technique that uses only the frequency of attributes in queries as a pruning metric, our technique uses not only frequencies but also other parameters such as the size of the dimension tables involved in the indexing process, the size of each dimension tuple, and the page size on disk. We then define a greedy algorithm to select bitmap join indices that minimize processing cost while satisfying the storage constraint. Finally, in order to evaluate the efficiency of our approach, we compare it with some existing techniques.
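A hedged sketch of the second step, a greedy selection under a storage budget; the ranking metric (benefit per unit of storage) and the candidate fields are illustrative assumptions, not the paper's cost model.

```python
def greedy_select_indices(candidates, storage_limit):
    """Greedy selection of bitmap join index candidates under a storage budget.

    `candidates` is a list of dicts with hypothetical keys:
        'attr'    - candidate dimension attribute
        'benefit' - estimated query-cost reduction if indexed
        'size'    - estimated index storage cost
    """
    selected, used = [], 0
    for cand in sorted(candidates, key=lambda c: c['benefit'] / c['size'], reverse=True):
        if used + cand['size'] <= storage_limit:
            selected.append(cand['attr'])
            used += cand['size']
    return selected

# Usage with made-up candidates left after the data mining pruning step.
candidates = [
    {'attr': 'customer.city', 'benefit': 120.0, 'size': 40},
    {'attr': 'product.class', 'benefit':  90.0, 'size': 15},
    {'attr': 'time.month',    'benefit':  30.0, 'size': 25},
]
print(greedy_select_indices(candidates, storage_limit=60))  # ['product.class', 'customer.city']
```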

Development of Sentiment Analysis Model for the hot topic detection of online stock forums (온라인 주식 포럼의 핫토픽 탐지를 위한 감성분석 모형의 개발)

  • Hong, Taeho;Lee, Taewon;Li, Jingjing
    • Journal of Intelligence and Information Systems / v.22 no.1 / pp.187-204 / 2016
  • Document classification based on emotional polarity has become a welcome emerging task owing to the great explosion of data on the Web. In the big data age, there are too many information sources to refer to when making decisions. For example, when considering travel to a city, a person may search reviews from a search engine such as Google or from social networking services (SNSs) such as blogs, Twitter, and Facebook. The emotional polarity of positive and negative reviews helps a user decide whether or not to make the trip. Sentiment analysis of customer reviews has become an important research topic as data mining technology is widely accepted for text mining of the Web. Sentiment analysis has been used to classify documents through machine learning techniques, such as decision trees, neural networks, and support vector machines (SVMs), and it is used to determine the attitude, position, and sensibility of people who write articles about various topics published on the Web. Regardless of the polarity of customer reviews, emotional reviews are very helpful materials for analyzing customers' opinions. Sentiment analysis helps with understanding what customers really want, instantly, through automated text mining: it extracts subjective information from text on the Web and determines the attitude or position of the person who wrote an article and expressed an opinion about a particular topic. In this study, we developed a model that selects hot topics from user posts on China's online stock forum by using the k-means algorithm and the self-organizing map (SOM). In addition, we developed a detection model to predict hot topics by using machine learning techniques such as the logit model, the decision tree, and the SVM. We employed sentiment analysis to develop our model for the selection and detection of hot topics from China's online stock forum; the sentiment analysis calculates a sentiment value for a document based on contrast and classification according to a polarity sentiment dictionary (positive or negative). The online stock forum was an attractive site because of its information about stock investment: users post numerous texts about stock movement, analyzing the market according to government policy announcements, market reports, reports from economic research institutes, and even rumors. We divided the online forum's topics into 21 categories to utilize sentiment analysis, and 144 topics were selected among the 21 categories. The posts were crawled to build a positive and negative text database, and we ultimately obtained 21,141 posts on 88 topics after preprocessing the text from March 2013 to February 2015. An interest index was defined to select the hot topics, and the k-means algorithm and SOM presented equivalent results on this data. We developed decision tree models to detect hot topics with three algorithms, CHAID, CART, and C4.5; the results of CHAID were subpar compared to the others. We also employed the SVM to detect hot topics from the negative data; the SVM models were trained with the radial basis function (RBF) kernel, tuned by a grid search.
The detection of hot topics using sentiment analysis provides the latest trends and hot topics in the stock forum for investors, so that they no longer need to search the vast amount of information on the Web. Our proposed model is also helpful for rapidly determining customers' signals or attitudes towards government policy and firms' products and services.
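For readers unfamiliar with the last step, the following minimal scikit-learn sketch shows an RBF-kernel SVM tuned by grid search on synthetic stand-in features; the feature set, labels, and grid values are placeholders, not the study's data or settings.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-in data: rows are topics, columns are features such as an
# interest index and aggregate sentiment scores (placeholders only).
rng = np.random.default_rng(0)
X = rng.normal(size=(88, 4))                   # 88 topics, 4 features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # 1 = hot topic, 0 = not

param_grid = {'C': [0.1, 1, 10, 100], 'gamma': [0.001, 0.01, 0.1, 1]}
search = GridSearchCV(SVC(kernel='rbf'), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```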

Development and Application of the Butterfly Algorithm Based on Decision Making Tree for Contradiction Problem Solving (모순 문제 해결을 위한 의사결정트리 기반 나비 알고리즘의 개발과 적용)

  • Hyun, Jung Suk;Ko, Ye June;Kim, Yung Gyeol;Jean, Seungjae;Park, Chan Jung
    • The Journal of Korean Association of Computer Education / v.22 no.1 / pp.87-98 / 2019
  • It is easy to assume that contradictions are logically incorrect, or empty sets that admit no solution. Such dilemmas are difficult to solve because the contradiction hidden within them must itself be resolved. Paradoxically, therefore, contradiction resolution has been viewed as innovative and creative problem solving. TRIZ, which analyzes problem solving from the perspective of resolving contradictions, has been designed for use by people rather than computers. The Butterfly model, which like TRIZ analyzes a problem from the perspective of resolving its contradiction, analyzes the type of contradiction using symbolic logic. In order to apply an appropriate concrete solution strategy to a given contradiction problem, we designed the Butterfly algorithm based on a decision-making tree. We also developed a visualization tool based on Python Tkinter to find concrete solution strategies for given contradiction problems. To verify the developed tool, third-grade middle school students learned the Butterfly algorithm, analyzed the contradiction of a wooden support, and won the grand prize at an invention contest with the new solution they found. The Butterfly algorithm developed in this paper systematically reduces the solution space of contradiction problems at the beginning of problem solving and can help solve contradiction problems without trial and error.
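The abstract does not spell out the tree itself; the sketch below only illustrates the general idea of dispatching a contradiction problem to a solution strategy by walking a decision tree. The questions and strategy names are hypothetical, not the Butterfly algorithm's actual classification.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Node:
    """One question in a decision-making tree; leaves carry a strategy name."""
    question: Optional[Callable[[dict], bool]] = None
    yes: Optional["Node"] = None
    no: Optional["Node"] = None
    strategy: Optional[str] = None     # set only on leaves

def choose_strategy(node: Node, problem: dict) -> str:
    """Walk the tree until a leaf (a concrete solution strategy) is reached."""
    while node.strategy is None:
        node = node.yes if node.question(problem) else node.no
    return node.strategy

# Hypothetical two-question tree; the real algorithm classifies differently.
tree = Node(
    question=lambda p: p["same_object"],
    yes=Node(question=lambda p: p["same_time"],
             yes=Node(strategy="separate in space"),
             no=Node(strategy="separate in time")),
    no=Node(strategy="separate between parts and whole"),
)
print(choose_strategy(tree, {"same_object": True, "same_time": False}))
# -> "separate in time"
```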

Combined Image Retrieval System using Clustering and Condensation Method (클러스터링과 차원축약 기법을 통합한 영상 검색 시스템)

  • Lee Se-Han;Cho Jungwon;Choi Byung-Uk
    • Journal of the Institute of Electronics Engineers of Korea CI / v.43 no.1 s.307 / pp.53-66 / 2006
  • This paper proposes a combined image retrieval system that gives the same relevance as an exhaustive search while considerably improving performance. The system combines two different retrieval methods, each of which gives the same results as a full exhaustive search. Both are two-stage methods: one uses condensation of feature vectors, and the other uses binary-tree clustering. These methods extract candidate images that always include the correct answers at the first stage and then filter out the incorrect images at the second stage. Because both methods use an equivalent algorithm, they obtain the same result as a full exhaustive search. The first method condenses the dimension of the feature vectors and uses these condensed feature vectors to compute the similarity between the query and the images in the database. There is an optimal condensation ratio that minimizes the overall retrieval time, and this optimal ratio is applied at the first stage of the method. The binary-tree clustering method, which searches with recursive 2-means clustering, classifies each cluster dynamically with the same radius. To preserve relevance, the query range has to be compensated at the first stage. After candidate clusters are selected, the final results are retrieved by computing similarities again at the second stage. The proposed system combines the two methods above; because they are independent of each other, the combined retrieval system achieves a remarkable improvement in performance.
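A minimal sketch of the two-stage idea behind the first method, assuming plain truncation as the condensation step and Euclidean distance as the similarity measure; unlike the paper's method, this simple version does not guarantee that the candidate set always contains the correct answers.

```python
import numpy as np

def two_stage_search(query, db, k=10, condensed_dim=8, candidates=50):
    """Rank with condensed (truncated) feature vectors first, then re-rank
    the surviving candidates with the full vectors."""
    q_c, db_c = query[:condensed_dim], db[:, :condensed_dim]
    coarse = np.linalg.norm(db_c - q_c, axis=1)        # stage 1: cheap distances
    cand = np.argsort(coarse)[:candidates]             # keep a candidate set
    fine = np.linalg.norm(db[cand] - query, axis=1)    # stage 2: exact distances
    return cand[np.argsort(fine)[:k]]                  # indices of the top-k images

# Usage with random 128-dimensional feature vectors.
rng = np.random.default_rng(1)
db = rng.normal(size=(1000, 128))
query = rng.normal(size=128)
print(two_stage_search(query, db, k=5))
```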

Path Metric Comparison-based Adaptive QRD-M Algorithm for MIMO Systems (Path Metric 비교 기반 적응형 QRD-M MIMO 검출 기법)

  • Kim, Bong-Seok;Kim, Han-Nah;Choi, Kwon-Hue
    • The Journal of Korean Institute of Communications and Information Sciences / v.33 no.6C / pp.487-497 / 2008
  • This paper proposes a new adaptive QRD-M algorithm for MIMO systems. The proposed scheme controls the number of survivor paths, M, based on the channel condition at each layer. The original QRD-M algorithm uses a fixed M at each layer and needs a large M to achieve near-MLD (maximum-likelihood detection) performance; however, a large M increases the computational complexity. In this paper, we effectively control M by employing a channel indicator that reflects not only the channel gain but also instantaneous noise information, without requiring SNR measurement. We found that the ratio of the minimum path metric to the second minimum is a good reliability indicator of the channel condition. By adaptively changing M based on this ratio, the proposed scheme effectively achieves near-MLD performance, and its computational complexity is significantly smaller than that of conventional QRD-M algorithms.
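A small illustration of how such a ratio could drive the survivor count M; the thresholds and M values below are arbitrary assumptions, not the paper's mapping.

```python
def adapt_M(path_metrics, M_min=2, M_max=8, thresholds=(0.25, 0.5)):
    """Map the ratio of the minimum path metric to the second minimum to a
    survivor count M (thresholds and M values are illustrative only)."""
    pm = sorted(path_metrics)
    ratio = pm[0] / max(pm[1], 1e-12)   # small ratio: the best path clearly dominates
    if ratio <= thresholds[0]:
        return M_min                    # reliable channel: keep few survivors
    if ratio <= thresholds[1]:
        return (M_min + M_max) // 2
    return M_max                        # unreliable channel: keep more survivors

print(adapt_M([0.3, 2.1, 2.4, 3.0]))    # ratio ~0.14 -> keep only 2 survivors
```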

Adaptive K-best Sphere Decoding Algorithm Using the Characteristics of Path Metric (Path Metric의 특성을 이용한 적응형 K-best Sphere Decoding 기법)

  • Kim, Bong-Seok;Choi, Kwon-Hue
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.11A / pp.862-869 / 2009
  • We propose a new adaptive K-best sphere decoding (SD) algorithm for multiple-input multiple-output (MIMO) systems in which the number of survivor paths, K, is changed based on the characteristics of the path metrics, which reflect the instantaneous channel condition. To overcome the major drawback of maximum likelihood detection (MLD), whose computational complexity increases exponentially with the number of transmit antennas, conventional adaptive K-best SD algorithms that achieve near-MLD performance have been proposed. However, they still incur redundant computational complexity because they employ only the channel fading gain as a channel condition indicator, without instantaneous signal-to-noise ratio (SNR) information. To complement this drawback, the proposed algorithm uses the characteristics of the path metrics as a simple channel indicator. It is found that the ratio of the minimum path metric to the other path metrics reflects the SNR information as well as the channel fading gain. By adaptively changing K based on this ratio, the proposed algorithm reduces the computational complexity more effectively than conventional K-best algorithms achieving the same performance.
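For context, the sketch below shows the underlying breadth-first K-best tree search with a fixed K (the adaptive schemes above vary K per channel realization); it is a generic textbook formulation, not the paper's implementation.

```python
import numpy as np

def k_best_detect(y, H, constellation, K=4):
    """Fixed-K breadth-first (K-best) tree search for MIMO detection.

    y: received vector, H: channel matrix, constellation: candidate symbols.
    """
    Q, R = np.linalg.qr(H)
    z = Q.conj().T @ y
    n = H.shape[1]
    # Each survivor is (partial_metric, symbols for layers i..n-1).
    survivors = [(0.0, [])]
    for i in range(n - 1, -1, -1):                    # search from the last layer up
        expanded = []
        for metric, syms in survivors:
            for s in constellation:
                x_partial = [s] + syms
                interf = sum(R[i, i + j] * x_partial[j] for j in range(1, len(x_partial)))
                inc = abs(z[i] - R[i, i] * s - interf) ** 2
                expanded.append((metric + inc, x_partial))
        survivors = sorted(expanded, key=lambda t: t[0])[:K]   # keep the K best paths
    best_metric, best_syms = survivors[0]
    return np.array(best_syms)

# Usage: noiseless 2x2 real channel with BPSK symbols.
H = np.array([[1.0, 0.3], [0.2, 0.9]])
x = np.array([1.0, -1.0])
y = H @ x
print(k_best_detect(y, H, constellation=[-1.0, 1.0], K=2))    # -> [ 1. -1.]
```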

Fast Join Mechanism that considers the switching of the tree in Overlay Multicast (오버레이 멀티캐스팅에서 트리의 스위칭을 고려한 빠른 멤버 가입 방안에 관한 연구)

  • Cho, Sung-Yean;Rho, Kyung-Taeg;Park, Myong-Soon
    • The KIPS Transactions:PartC / v.10C no.5 / pp.625-634 / 2003
  • More than a decade after its initial proposal, deployment of IP multicast has been limited by problems such as traffic control in multicast routing, multicast address allocation on the global Internet, and reliable multicast transport techniques. Recently, with the increase in multicast application services such as Internet broadcasting and real-time security information services, overlay multicast has been developed as a new Internet multicast technology. In this paper, we describe an overlay multicast protocol and propose a fast join mechanism that considers switching of the tree. To find a potential parent, an existing search algorithm descends the tree from the root one level at a time, which causes long joining latency. It also tries to select the nearest node as a potential parent, but it may fail to do so because of the degree limit of the node; as a result, the generated tree has low efficiency. To reduce the long joining latency and improve the efficiency of the tree, we propose searching two levels of the tree at a time. This method forwards the join request message to a node's own children, so at ordinary times there is no overhead to maintain the tree, and when a join request arrives, the increased number of search messages reduces the long joining latency. Searching more nodes also helps construct more efficient trees. To evaluate the performance of our fast join mechanism, we measure metrics such as the search latency, the number of searched nodes, and the number of switchings while varying the number of members and the degree limit. The simulation results show that the performance of our mechanism is superior to that of the existing mechanism.
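A toy sketch of the two-levels-at-a-time parent search, assuming coordinates as a stand-in for measured network distance and ignoring degree-saturated subtrees; the names and structure are illustrative only.

```python
import math

class Node:
    def __init__(self, name, pos, degree_limit=3):
        self.name = name
        self.pos = pos                   # stand-in coordinates; a real system would probe RTT
        self.degree_limit = degree_limit
        self.children = []

def dist(a, b):
    return math.dist(a.pos, b.pos)

def find_parent(root, new_node):
    """Descend the tree two levels at a time: a node forwards the join request
    to its children, so each step considers the node, its children, and its
    grandchildren."""
    current = root
    while True:
        candidates = [current] + current.children + \
                     [g for c in current.children for g in c.children]
        open_nodes = [n for n in candidates if len(n.children) < n.degree_limit]
        best = min(open_nodes, key=lambda n: dist(n, new_node))
        if best is current or not best.children:
            best.children.append(new_node)     # nothing deeper to explore from here
            return best
        current = best                         # keep searching from the best candidate

# Usage: the new member joins the nearest node with a free slot.
root = Node("root", (0, 0)); a = Node("a", (1, 0)); b = Node("b", (5, 5))
root.children = [a, b]
a.children = [Node("a1", (2, 0))]
print(find_parent(root, Node("new", (2, 1))).name)    # -> "a1"
```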

Video Matching Algorithm of Content-Based Video Copy Detection for Copyright Protection (저작권보호를 위한 내용기반 비디오 복사검출의 비디오 정합 알고리즘)

  • Hyun, Ki-Ho
    • Journal of Korea Multimedia Society / v.11 no.3 / pp.315-322 / 2008
  • To search for the location of a copied video in a video database, signatures should be robust to video re-editing, channel noise, and variation of the frame rate. Several kinds of signatures have been proposed. The ordinal signature, one of them, has difficulty describing the spatial characteristics of a frame because of the fixed $N{\times}N$ window over which the average gray value is computed. In this paper, I study a sequence-matching algorithm for video copy detection for copyright protection, employing the R-tree index method for retrieval and proposing robust ordinal signatures for the original video clips and the same signatures for the pirated video. The robust ordinal signature has a two-dimensional vector structure that is robust to noise and frame-rate variation, and it is expressed in MBR form in the search space of the R-tree. Moreover, I focus on building a video copy detection method with which content publishers register their valuable digital content; the video copy detection algorithm compares Web content to the registered content and notifies the content owners of illegal copies. Experimental results show that the proposed method improves the video matching rate and that the signature is suitable for large video databases.
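As background, the following sketch computes the standard ordinal signature of a frame (block averages replaced by their ranks); the paper's robust two-dimensional variant and its R-tree/MBR indexing are not reproduced here.

```python
import numpy as np

def ordinal_signature(frame, n=3):
    """Standard ordinal signature: partition the frame into an n x n grid,
    average the gray levels per block, and keep only the rank of each block."""
    h, w = frame.shape
    means = np.array([[frame[i * h // n:(i + 1) * h // n,
                             j * w // n:(j + 1) * w // n].mean()
                       for j in range(n)] for i in range(n)])
    return means.ravel().argsort().argsort().reshape(n, n)   # rank of each block mean

# Usage with a synthetic grayscale frame.
rng = np.random.default_rng(2)
frame = rng.integers(0, 256, size=(120, 160)).astype(float)
print(ordinal_signature(frame))
```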
