• Title/Summary/Keyword: metric-first search

Search results: 13

Near ML Decoding Based on Metric-First Searching and Branch Length Threshold for Multiple Input Multiple Output Systems

  • An, Tae-Hun; Kang, Hyun-Gu; Oh, Jong-Ho; Song, Iick-Ho; Yoon, Seok-Ho
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.8C / pp.830-839 / 2009
  • In this paper, we address a near maximum likelihood (ML) scheme for the decoding of multiple-input multiple-output (MIMO) systems. Based on the metric-first search method, and by employing Schnorr-Euchner enumeration and branch length thresholds, the proposed scheme reduces computational complexity. Simulations show that the proposed scheme has lower computational complexity than other near-ML decoders while maintaining a bit error rate close to ML performance.
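To make the search strategy concrete, below is a minimal sketch of a metric-first (best-first) detector for a toy real-valued MIMO model, with Schnorr-Euchner-style child ordering and a branch length threshold. It is a simplified illustration, not the paper's exact algorithm: the QR-based search tree, the symbol alphabet, and the threshold value T are assumptions made for the example.

```python
# A minimal sketch of metric-first (best-first) near-ML detection for a
# real-valued MIMO model y = Hx + n. The alphabet and the branch length
# threshold T are illustrative assumptions, not the paper's parameters.
import heapq
import numpy as np

def metric_first_decode(y, H, alphabet, T=4.0):
    n = H.shape[1]
    Q, R = np.linalg.qr(H)              # y' = Rx + Q^T n, R upper triangular
    z = Q.T @ y
    heap = [(0.0, 0, ())]               # (accumulated metric, depth, partial path)
    best_metric, best_path = np.inf, None
    while heap:
        metric, depth, path = heapq.heappop(heap)
        if metric >= best_metric:       # nothing left can beat the best leaf
            break
        if depth == n:
            best_metric, best_path = metric, path
            continue
        row = n - 1 - depth             # detect from the last layer upward
        interf = sum(R[row, n - 1 - d] * s for d, s in enumerate(path))
        # Schnorr-Euchner-style ordering: children by increasing branch length.
        for s in sorted(alphabet,
                        key=lambda s: (z[row] - interf - R[row, row] * s) ** 2):
            branch = (z[row] - interf - R[row, row] * s) ** 2
            if branch > T:              # branch length threshold: prune the rest
                break
            heapq.heappush(heap, (metric + branch, depth + 1, path + (s,)))
    return best_metric, None if best_path is None else best_path[::-1]

H = np.array([[1.0, 0.3], [0.2, 0.9]])
y = H @ np.array([1.0, -1.0]) + 0.05 * np.random.randn(2)
print(metric_first_decode(y, H, alphabet=(-1.0, 1.0)))   # metric, (x1, x2)
```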

Review of Studies on V-METRIC Related Models

  • Kim, Yoon Hwa; Lee, Sung Yong
    • Journal of the Korean Society of Systems Engineering / v.12 no.2 / pp.47-57 / 2016
  • As the inventory costs of repairable items in military logistics continue to increase, many studies on the optimal inventory levels of these items are being carried out in advanced countries, including the US, to reduce these costs. Research on inventory level optimization for repairable items, aimed at achieving the availability goal of a system with a MIME (Multi-Indenture Multi-Echelon) repair policy structure, first began with Sherbrooke's METRIC and has since developed into various forms. This study analyzes and compares recent V-METRIC related studies in search of further variations in this field. It mainly examines how to determine the optimal inventory level for each repairable item so as to achieve a specific availability target within a limited budget, and, conversely, how to minimize inventory cost while achieving the availability target by determining the optimal inventory level of each repairable item.
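As background on how the METRIC family trades inventory cost against availability, the sketch below shows the classical METRIC-style marginal analysis (Sherbrooke) for a single site: repeatedly buy the spare with the largest reduction in expected backorders per unit cost until the budget is exhausted. The Poisson demand model and the item data are illustrative assumptions, not any specific V-METRIC variant.

```python
# A heavily simplified single-site marginal analysis in the METRIC spirit.
# Item names, demand rates, and costs below are illustrative assumptions.
import math

def ebo(lam, s, kmax=100):
    """Expected backorders with Poisson pipeline demand lam and stock s."""
    p = math.exp(-lam)                  # P(X = 0)
    total = 0.0
    for k in range(1, kmax):
        p *= lam / k                    # Poisson recursion gives P(X = k)
        if k > s:
            total += (k - s) * p
    return total

def marginal_analysis(items, budget):
    """items: name -> (pipeline demand rate, unit cost). Greedy allocation."""
    stock = {name: 0 for name in items}
    spent = 0.0
    while True:
        best = None
        for name, (lam, cost) in items.items():
            if spent + cost > budget:
                continue
            gain = (ebo(lam, stock[name]) - ebo(lam, stock[name] + 1)) / cost
            if best is None or gain > best[0]:
                best = (gain, name, cost)
        if best is None:                # nothing affordable is left
            return stock
        _, name, cost = best
        stock[name] += 1
        spent += cost

# Illustrative data: item -> (expected units in repair pipeline, unit cost)
items = {"pump": (2.0, 5.0), "valve": (0.5, 1.0), "board": (1.2, 3.0)}
print(marginal_analysis(items, budget=20.0))
```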

Location Estimation for Multiple Targets Using Expanded DFS Algorithm

  • Park, So Ryoung; Noh, Sanguk
    • The Journal of Korean Institute of Communications and Information Sciences / v.38C no.12 / pp.1207-1215 / 2013
  • This paper proposes techniques for estimating the locations of distributed targets from multi-sensor data perceived through the IR sensors of military robots, taking obstacles into consideration. In order to match targets with measured azimuths, we extend the depth-first search (DFS) algorithm used in obstacle-free environments and propose the expanded DFS (EDS) algorithm, which adds bypass path search, partial path search, middle-level termination, and a supplemented decision metric. After matching targets with azimuths, we estimate the coordinates of each target by obtaining the intersection point of the azimuths with the least square error (LSE) algorithm. The experimental results show the error rate of the estimated locations, the mean number of calculated nodes, and the mean distance between the real and estimated coordinates for the proposed algorithms.
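The LSE step can be illustrated directly: each sensor's azimuth defines a bearing line, and the target estimate is the point minimizing the squared distances to all such lines. The sketch below shows this under assumed sensor positions and noise-free bearings; it covers only the intersection step, not the EDS matching algorithm.

```python
# A minimal sketch of least-square-error intersection of bearing lines.
# Sensor positions and the target used to generate azimuths are assumptions.
import numpy as np

def lse_intersection(positions, azimuths):
    """Point minimizing squared distances to all bearing lines.

    A line through sensor p with azimuth a has direction (cos a, sin a)
    and normal n = (-sin a, cos a), i.e. n . x = n . p; stacking these
    constraints gives an overdetermined system solved by least squares.
    """
    A = np.array([[-np.sin(a), np.cos(a)] for a in azimuths])
    b = np.array([n @ p for n, p in zip(A, positions)])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
target = np.array([4.0, 3.0])
azimuths = [np.arctan2(*(target - s)[::-1]) for s in sensors]  # noise-free
print(lse_intersection(sensors, azimuths))                     # ~ [4. 3.]
```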

Parameter search methodology of support vector machines for improving performance

  • Lee, Sung-Bo; Kim, Jae-young; Kim, Cheol-Hong; Kim, Jong-Myon
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.7 no.3 / pp.329-337 / 2017
  • This paper proposes a search method that explores the parameters C and σ of support vector machines (SVMs) to improve search speed while maintaining accuracy. A traditional grid search requires tremendous computation time because it evaluates all available combinations of C and σ to find the combination that yields the best SVM performance. To address this issue, this paper proposes a deep search method that reduces computation time. In the first stage, it divides the C-σ accuracy map into four regions, evaluates the median point of each region, and selects the point with the highest accuracy as the starting point. In the second stage, the region around the selected point is subdivided into four regions, and the most accurate point is assigned as the new search point. In the third stage, the eight points neighboring the search point are explored, the most accurate of them is assigned as the new search point, and the surrounding region is again divided into four parts for evaluation. In the last stage, this process continues until the accuracy at the search point is the highest among its neighboring points; if not, the procedure is repeated from the second stage. Experimental results using normal and defective bearings show that the proposed deep search algorithm outperforms conventional algorithms in terms of performance and search time.
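A minimal sketch of the coarse-to-fine idea follows: instead of scanning a full grid, re-center on the best of a few probe points and halve the search span at each level. It searches (log C, log γ) with scikit-learn's SVC, where the RBF width σ corresponds to gamma = 1/(2σ²); the dataset, ranges, probe pattern, and number of levels are illustrative assumptions, not the paper's exact staging.

```python
# A simplified coarse-to-fine ("deep") hyperparameter search for an SVM.
# Dataset, search ranges, and refinement schedule are assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)

def accuracy(log_c, log_g):
    return cross_val_score(SVC(C=10 ** log_c, gamma=10 ** log_g), X, y, cv=3).mean()

def deep_search(lo=(-2.0, -4.0), hi=(4.0, 1.0), levels=4):
    center = ((lo[0] + hi[0]) / 2, (lo[1] + hi[1]) / 2)
    span = ((hi[0] - lo[0]) / 2, (hi[1] - lo[1]) / 2)
    best = (accuracy(*center), center)
    for _ in range(levels):
        cx, cy = best[1]
        # Probe the four quadrant midpoints around the current best point,
        # re-center on the best one, then halve the search span.
        for dx in (-0.5, 0.5):
            for dy in (-0.5, 0.5):
                pt = (cx + dx * span[0], cy + dy * span[1])
                score = accuracy(*pt)
                if score > best[0]:
                    best = (score, pt)
        span = (span[0] / 2, span[1] / 2)
    return best

score, (log_c, log_g) = deep_search()
print(f"accuracy={score:.3f}, C={10 ** log_c:.3g}, gamma={10 ** log_g:.3g}")
```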

DSL: Dynamic and Self-Learning Schedule Method of Multiple Controllers in SDN

  • Li, Junfei; Wu, Jiangxing; Hu, Yuxiang; Li, Kan
    • ETRI Journal / v.39 no.3 / pp.364-372 / 2017
  • To improve the reliability of controllers in a software-defined network (SDN), a dynamic and self-learning schedule method (DSL) is proposed. The method is original and easy to deploy, and it optimizes the combination of multiple controllers. First, we summarize the multi-controller combination and schedule problems in an SDN and analyze their reliability. Then, we introduce the architecture of the schedule method, evaluate multi-controller reliability, and present the DSL method and its optimized solution. By continually and statistically learning information about controller reliability, the method uses that reliability as the metric by which controllers are scheduled. Finally, we compare and test the method in a given scenario based on an SDN network simulator. The experimental results show that the DSL method can significantly improve the total reliability of an SDN compared with random scheduling, and that the proposed optimization algorithm is more efficient than an exhaustive search.
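The paper's exact learning and scheduling rules are not reproduced here, but the sketch below illustrates one plausible reading of the core loop: maintain a statistically learned reliability estimate per controller and use it as the scheduling metric. The update rule and the weighted random selection are assumptions for illustration, not the DSL design itself.

```python
# A heavily simplified reliability-driven controller scheduler: keep a
# running success-rate estimate per controller and schedule in proportion
# to it. Controllers and "true" reliabilities below are assumptions.
import random

class Scheduler:
    def __init__(self, controllers):
        # (successes, trials) per controller; start optimistic with 1/1.
        self.stats = {c: [1, 1] for c in controllers}

    def reliability(self, c):
        s, n = self.stats[c]
        return s / n

    def pick(self):
        cs = list(self.stats)
        return random.choices(cs, weights=[self.reliability(c) for c in cs])[0]

    def report(self, c, ok):
        self.stats[c][0] += 1 if ok else 0
        self.stats[c][1] += 1

sched = Scheduler(["ctrl-A", "ctrl-B", "ctrl-C"])
true_rel = {"ctrl-A": 0.99, "ctrl-B": 0.90, "ctrl-C": 0.60}
for _ in range(1000):
    c = sched.pick()
    sched.report(c, random.random() < true_rel[c])
print({c: round(sched.reliability(c), 2) for c in sched.stats})
```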

Spatial Locality Preservation Metric for Constructing Histogram Sequences

  • Lee, Jeonggon; Kim, Bum-Soo; Moon, Yang-Sae; Choi, Mi-Jung
    • Journal of Information Technology and Architecture / v.10 no.1 / pp.79-91 / 2013
  • This paper proposes a systematic methodology for deciding which space filling curve (SFC) performs best when lower-dimensional transformations are applied to histogram sequences. A histogram sequence represents a time-series converted from an image by a given SFC. Due to their high dimensionality, histogram sequences are very difficult to store and search in their original form. To solve this problem, we generally use lower-dimensional transformations, which produce lower bounds on the distances among high-dimensional sequences, but the tightness of those lower bounds is strongly affected by the type of SFC. In this paper, we attack the challenging problem of evaluating which SFC performs better when the lower-dimensional transformation is applied to histogram sequences. For this, we first present the concept of spatial locality, which comes from the intuition that "if entries are adjacent in a histogram sequence, their corresponding cells should also be adjacent in the original image." We also propose the spatial locality preservation metric (slpm in short), which quantitatively evaluates spatial locality, and present its formal computation method. We then evaluate five SFCs from the perspective of slpm and verify that this evaluation concurs with the performance of lower-dimensional transformations in real image matching. Finally, we perform k-NN (k-nearest neighbors) search based on lower-dimensional transformations and validate the accuracy of the proposed slpm by showing that the Hilbert order, which has the highest slpm, also shows the best performance in k-NN search.
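The intuition behind slpm can be illustrated with a simplified stand-in: score an ordering of grid cells by the mean distance between consecutive cells in the image (lower means better locality). The sketch below compares a row-major scan against a zigzag scan; the score is not the paper's slpm formula, just a metric in the same spirit.

```python
# A simplified locality score for cell orderings of an n x n image grid:
# the mean Euclidean distance between consecutive cells (lower is better).
# This stand-in illustrates the slpm intuition, not its exact definition.
import math

def locality_score(order):
    return sum(math.dist(a, b) for a, b in zip(order, order[1:])) / (len(order) - 1)

def row_major(n):
    return [(r, c) for r in range(n) for c in range(n)]

def zigzag(n):
    # Boustrophedon scan: reverse column order on odd rows.
    return [(r, c if r % 2 == 0 else n - 1 - c)
            for r in range(n) for c in range(n)]

n = 8
print("row-major:", locality_score(row_major(n)))  # long jumps at row ends
print("zigzag:   ", locality_score(zigzag(n)))     # stays adjacent everywhere
```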

A Data Mining Approach for Selecting Bitmap Join Indices

  • Bellatreche, Ladjel; Missaoui, Rokia; Necir, Hamid; Drias, Habiba
    • Journal of Computing Science and Engineering / v.1 no.2 / pp.177-194 / 2007
  • Index selection is one of the most important decisions in the physical design of relational data warehouses. Indices significantly reduce the cost of processing complex OLAP queries, but they incur storage cost and maintenance overhead. Two main types of indices are available: mono-attribute indices (e.g., B-tree, bitmap, hash) and multi-attribute indices (join indices, bitmap join indices). To optimize star join queries, which are characterized by joins between a large fact table and multiple dimension tables with selections on the dimension tables, bitmap join indices are well adapted; they require less storage owing to their binary representation. However, selecting these indices is a difficult task due to the exponential number of candidate attributes to be indexed. Most approaches to index selection follow two main steps: (1) pruning the search space (i.e., reducing the number of candidate attributes) and (2) selecting indices from the pruned search space. In this paper, we first propose a data-mining-driven approach to prune the search space of the bitmap join index selection problem. As opposed to an existing technique that uses only the frequency of attributes in queries as a pruning metric, our technique uses not only frequencies but also other parameters, such as the size of the dimension tables involved in the indexing process, the size of each dimension tuple, and the page size on disk. We then define a greedy algorithm that selects bitmap join indices so as to minimize processing cost while satisfying the storage constraint. Finally, to evaluate the efficiency of our approach, we compare it with some existing techniques.
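The selection step can be pictured as a knapsack-style greedy, sketched below: rank candidate attributes by estimated benefit per unit of storage and add indices until the storage constraint binds. The benefit and size figures are invented for illustration, and the ratio-based ranking is an assumption, not the paper's exact cost model.

```python
# A knapsack-style greedy sketch of bitmap join index selection under a
# storage constraint. Candidate attributes and their figures are assumptions.
def select_indices(candidates, storage_budget):
    """candidates: list of (name, benefit, size). Returns chosen names."""
    chosen, used = [], 0.0
    for name, benefit, size in sorted(candidates,
                                      key=lambda c: c[1] / c[2], reverse=True):
        if used + size <= storage_budget:
            chosen.append(name)
            used += size
    return chosen

candidates = [
    ("customer.city", 120.0, 40.0),   # (attribute, est. cost saving, MB)
    ("time.month", 90.0, 10.0),
    ("product.brand", 60.0, 30.0),
]
print(select_indices(candidates, storage_budget=50.0))  # ['time.month', 'customer.city']
```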

Searching for Variants Using Trie-Index

  • Park, In-Cheol
    • Journal of the Korea Academia-Industrial cooperation Society / v.10 no.8 / pp.1986-1992 / 2009
  • A user often searches for data by entering a variant such as an abbreviation or substring of a word, or a misspelled word. The simple approach to searching for variants is to build a variant dictionary; however, this entails enormous cost and time and cannot handle variants caused by misspelling. Approximate searching, i.e., searching by approximate string matching, is a good alternative, but it cannot handle variants caused by abbreviation. This paper proposes a method for finding various variants, including abbreviations and misspelled words, by using a trie index. First, it presents a variant matching method based on the calculation of a path-weighted metric. In addition, it provides a variant searching algorithm that reduces the search time.
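The sketch below illustrates the flavor of trie-based variant search: walk the trie while allowing stored characters to be skipped, so an abbreviation such as "intl" can still reach "international", and rank matches by the number of skips. The skip-count score is a simplified stand-in for the paper's path-weighted metric.

```python
# A minimal trie with subsequence-style variant lookup. Skips of stored
# characters are counted as a crude stand-in for a path-weighted metric.
class Trie:
    def __init__(self):
        self.children, self.word = {}, None

    def insert(self, word):
        node = self
        for ch in word:
            node = node.children.setdefault(ch, Trie())
        node.word = word

    def search(self, query, max_skips=10):
        """Find words for which query is a subsequence; rank by skips."""
        results = []

        def walk(node, i, skips):
            if skips > max_skips:
                return
            if i == len(query) and node.word is not None:
                results.append((node.word, skips))
            for ch, child in node.children.items():
                if i < len(query) and ch == query[i]:
                    walk(child, i + 1, skips)      # consume a query character
                else:
                    walk(child, i, skips + 1)      # skip a stored character

        walk(self, 0, 0)
        return sorted(results, key=lambda r: r[1])

t = Trie()
for w in ("international", "internet", "interval"):
    t.insert(w)
print(t.search("intl"))   # [('interval', 4), ('international', 9)]
```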

Algorithms for Indexing and Integrating MPEG-7 Visual Descriptors

  • Song, Chi-Ill; Nang, Jong-Ho
    • Journal of KIISE: Software and Applications / v.34 no.1 / pp.1-10 / 2007
  • This paper proposes a new indexing mechanism for MPEG-7 visual descriptors, especially the Dominant Color and Contour Shape descriptors, that guarantees an efficient similarity search over multimedia databases whose visual metadata are represented in MPEG-7. Since the similarity metric used in the Dominant Color descriptor is based on a Gaussian mixture model, the descriptor itself can be transformed into a color histogram in which the distribution of color values follows a Gaussian distribution; the transformed Dominant Color descriptor (i.e., the color histogram) is then indexed by the proposed mechanism. For the indexing of the Contour Shape descriptor, we use a two-pass algorithm: in the first pass, since the similarity of two shapes can be roughly measured with the descriptor's global parameters, such as eccentricity and circularity, dissimilar image objects are excluded using these global parameters; then, the similarities between the query and the remaining image objects are measured with the peak parameters of the Contour Shape descriptor. This two-pass approach helps reduce the computation needed to measure the similarity of image objects with the Contour Shape descriptor. This paper also proposes two schemes for integrating the visual descriptors for efficient retrieval: one uses the weight of a descriptor as a yardstick to determine the number of similar image objects selected with respect to that descriptor, and the other uses the weight as the degree of importance of the descriptor in the global similarity measurement. Experimental results show that the proposed indexing and integration schemes produce a remarkable speed-up compared with exact similarity search, although there is some loss of accuracy because of the approximate computation during indexing. The proposed schemes can be used to build an MPEG-7 multimedia database that guarantees efficient retrieval.
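The two-pass idea can be sketched generically: a cheap first pass on global shape parameters (eccentricity, circularity) prunes clearly dissimilar objects, and a more expensive second pass on peak parameters ranks the survivors. All descriptor fields, distance functions, and thresholds below are illustrative assumptions, not the MPEG-7 definitions.

```python
# A generic two-pass filter-then-rank sketch. Descriptor fields and the
# pruning threshold are assumptions made for illustration only.
import math

def global_distance(a, b):
    """Cheap pass 1: compare eccentricity and circularity."""
    return abs(a["ecc"] - b["ecc"]) + abs(a["circ"] - b["circ"])

def peak_distance(a, b):
    """Expensive pass 2: compare (simplified) contour peak parameters."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a["peaks"], b["peaks"])))

def two_pass_search(query, database, threshold=0.2, k=2):
    survivors = [d for d in database if global_distance(query, d) <= threshold]
    return sorted(survivors, key=lambda d: peak_distance(query, d))[:k]

db = [
    {"id": 1, "ecc": 0.9, "circ": 0.3, "peaks": [0.5, 0.1, 0.0]},
    {"id": 2, "ecc": 0.4, "circ": 0.8, "peaks": [0.2, 0.6, 0.3]},
    {"id": 3, "ecc": 0.85, "circ": 0.35, "peaks": [0.4, 0.2, 0.1]},
]
query = {"ecc": 0.88, "circ": 0.32, "peaks": [0.45, 0.15, 0.05]}
print([d["id"] for d in two_pass_search(query, db)])  # pass 1 drops id 2
```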

A Study on Adaptive Knowledge Automatic Acquisition Model from Case-Based Reasoning System

  • 이상범; 김영천; 이재훈; 이성주
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2002.05a / pp.81-86 / 2002
  • In current case-based reasoning (CBR) systems, case adaptation is usually performed by rule-based methods that use rules hand-coded by the system developer, so the CBR system designer faces a knowledge acquisition bottleneck similar to that found in traditional expert system design. In this thesis, I present a model for learning case adaptation knowledge from the case base. The feature differences of each pair of cases are noted and become the antecedent part of an adaptation rule, while the differences between the solutions of the compared cases become the consequent part of the rule. However, the number of rules that could be discovered by a learning algorithm is enormous. The first method for finding cases to compare uses a syntactic measure of the distance between cases; the threshold for identifying candidates for comparison is fixed at the maximum number of differences between the target and retrieved cases over all retrievals. The second method uses a similarity metric, since the threshold method may not be an accurate measure. I also suggest a method for eliminating duplicate rules; in the elimination process, a confidence value is assigned to each rule based on its frequency. The learned adaptation rules are applied to a given target problem. The basic process involves a search for all rules that handle at least one difference, followed by a combination process in which complete solutions are built.
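The rule-learning step described above can be sketched directly: feature differences between a pair of cases form a rule's antecedent, the differing solutions form its consequent, and duplicate rules are merged with a frequency-based confidence. The case data and difference encoding below are illustrative assumptions.

```python
# A minimal sketch of learning adaptation rules from case pairs. Cases
# and their features are assumptions invented for the example.
from collections import Counter

def learn_rules(cases):
    """cases: list of (features dict, solution). Returns rule -> confidence."""
    rules = Counter()
    for i, (fa, sa) in enumerate(cases):
        for fb, sb in cases[i + 1:]:
            # Differing features -> antecedent; differing solutions -> consequent.
            antecedent = tuple(sorted(
                (k, fa[k], fb[k]) for k in fa if fa[k] != fb[k]))
            if antecedent and sa != sb:
                rules[(antecedent, (sa, sb))] += 1
    total = sum(rules.values())
    return {rule: n / total for rule, n in rules.items()}

cases = [
    ({"material": "steel", "load": "high"}, "weld"),
    ({"material": "steel", "load": "low"}, "bolt"),
    ({"material": "alloy", "load": "low"}, "bolt"),
]
for rule, conf in learn_rules(cases).items():
    print(rule, round(conf, 2))
```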
