• Title/Summary/Keyword: Rank algorithm

A study on MPEG-7 descriptor combining method using borda count method (Borda count 방법을 이용한 다중 MPEG-7 서술자 조합에 관한 연구)

  • Eom, Min-Young;Choe, Yoon-Sik
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.43 no.1 s.307
    • /
    • pp.39-44
    • /
    • 2006
  • In this paper, a search result list fusion method based on the Borda count is proposed for still image retrieval with MPEG-7 descriptors. MPEG-7 standardizes descriptors that extract feature information from media data. Because a single descriptor is often not accurate enough, using multiple descriptors is suggested to improve retrieval performance. Here, the improvement is achieved by combining the search result lists produced by each descriptor. For this combination, a newly formulated Borda count is proposed: instead of the conventional frequency-compensated calculation, a rank-aware frequency compensation is used to score an image in the database. The combining method is applied to a content-based image retrieval system with a relevance feedback algorithm that uses high-level information from the system user; in each relevance feedback iteration, an adaptive Borda count is used to recompute the image scores.
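A minimal Python sketch of Borda-count rank fusion as the abstract describes it; the rank-aware frequency compensation and the relevance-feedback adaptation are specific to the paper and are not reproduced here, and the function name `borda_fuse` and the sample lists are illustrative only.

```python
from collections import defaultdict

def borda_fuse(ranked_lists, list_length=None):
    """Fuse several ranked result lists with a plain Borda count.

    ranked_lists : list of lists, each ordered best-first (e.g. one per MPEG-7 descriptor).
    Returns image ids sorted by their combined Borda score (higher is better).
    """
    scores = defaultdict(float)
    for ranking in ranked_lists:
        n = list_length or len(ranking)
        for rank, image_id in enumerate(ranking):
            # A rank-1 item receives n points, rank-2 receives n-1, and so on.
            scores[image_id] += n - rank
    return sorted(scores, key=scores.get, reverse=True)

# Example: result lists returned by two different descriptors.
color_list = ["img3", "img1", "img7", "img2"]
texture_list = ["img1", "img3", "img2", "img9"]
print(borda_fuse([color_list, texture_list]))
```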

Development of Approximate Cost Estimate Model for Aqueduct Bridges Restoration - Focusing on Comparison between Regression Analysis and Case-Based Reasoning - (수로교 개보수를 위한 개략공사비 산정 모델 개발 - 회귀분석과 사례기반추론의 비교를 중심으로 -)

  • Jeon, Geon Yeong;Cho, Jae Yong;Huh, Young
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.33 no.4
    • /
    • pp.1693-1705
    • /
    • 2013
  • To restore old aqueduct bridges in Korea, irrigation bridges that supply water to paddy field areas, approximate restoration costs must be estimated because the basic design stage used for estimating construction costs is often omitted under the current system. In this paper, construction cost estimation models were developed from performance data on RC aqueduct bridge restorations carried out since 2003. A regression analysis (RA) model and a case-based reasoning (CBR) model were developed for the estimation of construction costs. The error rate of the simple RA model was lower than that of the multiple RA model. A CBR model using a genetic algorithm (GA) was applied to the cost estimation; in this model three factors, the attribute weights, the attribute deviations, and the rank of case similarity, were optimized. In particular, the error rate of the estimated construction costs decreased when limit ranges were imposed on the attribute weights. The results showed that the differences in error rate between the RA and CBR models were statistically insignificant. The proposed method for estimating approximate aqueduct restoration costs is expected to support quick decision making in phased rehabilitation projects.
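As a rough illustration of the CBR side of the comparison, the sketch below estimates a cost from the most similar past cases using attribute weights; in the paper the weights, attribute deviations, and similarity rank are optimized by a GA, whereas here they are fixed toy values and all names (`cbr_estimate`, the attributes, the costs) are hypothetical.

```python
import numpy as np

def weighted_similarity(query, case, weights, ranges):
    """Attribute-wise similarity in [0, 1], weighted and averaged."""
    diffs = 1.0 - np.abs(query - case) / ranges          # 1 = identical attribute value
    return float(np.sum(weights * diffs) / np.sum(weights))

def cbr_estimate(query, case_attrs, case_costs, weights, k=3):
    """Estimate a cost as the similarity-weighted mean of the k most similar past cases."""
    ranges = case_attrs.max(axis=0) - case_attrs.min(axis=0) + 1e-9
    sims = np.array([weighted_similarity(query, c, weights, ranges) for c in case_attrs])
    top = np.argsort(sims)[::-1][:k]                      # rank cases by similarity
    return float(np.average(case_costs[top], weights=sims[top]))

# Toy usage: 3 attributes (e.g. span length, width, age), 4 past cases.
cases = np.array([[20.0, 3.0, 40], [35.0, 4.0, 30], [25.0, 3.5, 50], [40.0, 5.0, 20]])
costs = np.array([120.0, 210.0, 160.0, 260.0])            # past restoration costs
weights = np.array([0.5, 0.3, 0.2])                       # in the paper these are GA-optimized
print(cbr_estimate(np.array([30.0, 4.0, 35]), cases, costs, weights))
```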

A probabilistic information retrieval model by document ranking using term dependencies (용어간 종속성을 이용한 문서 순위 매기기에 의한 확률적 정보 검색)

  • You, Hyun-Jo;Lee, Jung-Jin
    • The Korean Journal of Applied Statistics
    • /
    • v.32 no.5
    • /
    • pp.763-782
    • /
    • 2019
  • This paper proposes a probabilistic document ranking model that incorporates term dependencies. Document ranking is a fundamental information retrieval task: documents in a collection are sorted according to their relevance to the user query (Qin et al., Information Retrieval Journal, 13, 346-374, 2010). A probabilistic model computes the conditional probability of the relevance of each document given the query. Most widely used models assume term independence because computing the joint probabilities of multiple terms is challenging, even though words in natural language text are clearly highly correlated. In this paper, we assume a multinomial distribution model that calculates the relevance probability of a document by considering the dependency structure of words, and propose an information retrieval model that ranks documents by estimating this probability with the maximum entropy method. Ranking simulation experiments in various multinomial situations show better retrieval results than a model that assumes word independence, and document ranking experiments on the real-world LETOR OHSUMED dataset also show better retrieval results.
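For orientation, here is a small sketch of the independence-based probabilistic baseline that the paper improves on: documents are ranked by a Dirichlet-smoothed multinomial query likelihood. The paper's own contribution, estimating term-dependency-aware probabilities with the maximum entropy method, is not reproduced, and all names and the toy corpus are illustrative.

```python
import math
from collections import Counter

def query_loglikelihood(query_terms, doc_terms, collection_counts, collection_size, mu=2000):
    """Rank score: log P(query | document) under a Dirichlet-smoothed multinomial."""
    doc_counts, doc_len = Counter(doc_terms), len(doc_terms)
    score = 0.0
    for t in query_terms:
        p_coll = collection_counts.get(t, 0) / collection_size   # background model
        p = (doc_counts.get(t, 0) + mu * p_coll) / (doc_len + mu)
        score += math.log(p) if p > 0 else -1e9                  # term unseen everywhere
    return score

docs = {"d1": "rank model for document retrieval".split(),
        "d2": "term dependency and maximum entropy".split()}
coll = Counter(w for d in docs.values() for w in d)
N = sum(coll.values())
query = "document rank".split()
ranking = sorted(docs, key=lambda d: query_loglikelihood(query, docs[d], coll, N), reverse=True)
print(ranking)
```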

Bayesian Survival Analysis of High-Dimensional Microarray Data for Mantle Cell Lymphoma Patients

  • Moslemi, Azam;Mahjub, Hossein;Saidijam, Massoud;Poorolajal, Jalal;Soltanian, Ali Reza
    • Asian Pacific Journal of Cancer Prevention
    • /
    • v.17 no.1
    • /
    • pp.95-100
    • /
    • 2016
  • Background: The survival time of lymphoma patients can be estimated with the help of microarray technology. In this study, using the iterative Bayesian Model Averaging (BMA) method, the survival time of Mantle Cell Lymphoma (MCL) patients was estimated and, based on the findings, patients were divided into high-risk and low-risk groups. Materials and Methods: Gene expression data of MCL patients were used to select a subset of genes for survival analysis with microarray data using the iterative BMA method. To evaluate the performance of the method, patients were divided into high-risk and low-risk groups based on their risk scores, and predictive performance was assessed using the log-rank test. The Bioconductor package "iterativeBMAsurv" was applied with the R statistical software for classification and survival analysis. Results: Twenty-five genes associated with survival of MCL patients were identified across 132 selected models. The maximum likelihood estimates of the coefficients of the selected genes and the posterior probabilities of the selected models were obtained from the training data. Using this method, patients could be separated into high-risk and low-risk groups with high significance (p<0.001). Conclusions: The iterative BMA algorithm has high precision and ability for survival analysis. The method can identify a few predictive variables associated with survival from among the many variables in a microarray dataset, and can therefore be used as a low-cost diagnostic tool in clinical research.
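A hedged sketch of the risk-score step described in the abstract: each patient's score is a model-averaged linear predictor built from the maximum likelihood coefficients and posterior model probabilities, and patients are split at the median score. This is a generic BMA-style computation in Python, not the `iterativeBMAsurv` implementation, and all data and names are toy values.

```python
import numpy as np

def bma_risk_scores(expression, model_coefs, model_posteriors):
    """Risk score per patient: model-averaged linear predictor.

    expression      : (n_patients, n_genes) matrix of selected-gene expression values
    model_coefs     : list of (n_genes,) coefficient vectors, one per selected model
                      (genes absent from a model simply have coefficient 0)
    model_posteriors: posterior probability of each selected model
    """
    scores = np.zeros(expression.shape[0])
    for beta, post in zip(model_coefs, model_posteriors):
        scores += post * expression @ beta          # weight each model's predictor by its posterior
    return scores

# Toy usage: 6 patients, 3 genes, 2 models; real inputs would come from the BMA gene selection.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))
coefs = [np.array([0.8, 0.0, -0.5]), np.array([0.6, 0.3, 0.0])]
posteriors = [0.7, 0.3]
risk = bma_risk_scores(X, coefs, posteriors)
groups = np.where(risk > np.median(risk), "high-risk", "low-risk")   # split at the median score
print(list(zip(np.round(risk, 2), groups)))
```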

Bacteria Cooperative Optimization Applying Individual's Speed for Performance Improvements (성능향상을 위하여 개체속력을 적용한 박테리아 협동 최적화)

  • Jung, Sung-Hoon
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.47 no.3
    • /
    • pp.67-75
    • /
    • 2010
  • This paper proposes a bacteria cooperative optimization (BCO) method that assigns individuals different speeds to improve performance. In existing BCO methods, all individuals move the same distance at each step because their speeds are constant; since good and bad individuals move at the same speed, the population cannot find the global optimum effectively. To overcome this problem, we introduce a speed concept into the BCO algorithm so that individuals move different distances according to speeds assigned by their fitness ranks. That is, bad individuals with low fitness are given high speeds so that they move quickly toward areas of high fitness, while good individuals with high fitness, which may already be near the global optimum, are given low speeds. Experimental results on four function optimization problems show that the proposed method outperforms existing methods, including the rank replacement method. This indicates that applying the speed concept to the individuals in BCO is effective and efficient.
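The speed idea in the abstract can be sketched as follows: each individual's speed is set from its fitness rank (worst individuals fastest, best slowest) and its step length is scaled by that speed. This illustrates only the rank-to-speed mapping, not the authors' full BCO algorithm; function names and parameter values are assumptions.

```python
import numpy as np

def rank_based_speeds(fitness, v_min=0.1, v_max=1.0):
    """Assign each individual a speed from its fitness rank: worst -> fastest, best -> slowest."""
    order = np.argsort(fitness)                       # ascending fitness (maximization assumed)
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(fitness))            # rank 0 = worst, n-1 = best
    return v_max - (v_max - v_min) * ranks / max(len(fitness) - 1, 1)

def move_step(population, fitness, rng):
    """One BCO-style move: each individual takes a random-direction step scaled by its speed."""
    speeds = rank_based_speeds(fitness)
    directions = rng.normal(size=population.shape)
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    return population + speeds[:, None] * directions

# Toy usage on a 2-D maximization problem f(x) = -(x1^2 + x2^2).
rng = np.random.default_rng(1)
pop = rng.uniform(-5, 5, size=(8, 2))
fit = -np.sum(pop ** 2, axis=1)
pop = move_step(pop, fit, rng)
print(np.round(pop, 2))
```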

Collection and Extraction Algorithm of Field-Associated Terms (분야연상어의 수집과 추출 알고리즘)

  • Lee, Sang-Kon;Lee, Wan-Kwon
    • The KIPS Transactions:PartB
    • /
    • v.10B no.3
    • /
    • pp.347-358
    • /
    • 2003
  • A field-associated term is a single or compound word that occurs in documents and makes it possible to recognize the field of a text using common human knowledge; for example, a person recognizes the field of a document on encountering a word such as 'pitcher' or 'election'. We propose an efficient method for constructing field-associated terms (FTs) that specialize fields, in order to decide the field of a text. A document classification scheme can be fixed from a well-classified document database or corpus. Considering the focus field, we discuss levels and stability ranks of field-associated terms. To construct a balanced FT collection, we first construct single FTs; from the collection, the FT levels and stability ranks can then be constructed automatically. We propose a new extraction algorithm of FTs for document classification that uses the concentration rate and occurrence frequencies of the FTs.
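A plausible sketch of the extraction criterion described at the end of the abstract: a term is kept as a field-associated term if it is frequent and its occurrences are concentrated in a single field of a pre-classified corpus. The exact concentration-rate formula, the thresholds, and the names used below are assumptions, not the paper's definitions.

```python
from collections import Counter

def field_concentration(term_field_counts):
    """For each term, the fraction of its occurrences concentrated in its dominant field."""
    result = {}
    for term, per_field in term_field_counts.items():
        total = sum(per_field.values())
        field, count = max(per_field.items(), key=lambda kv: kv[1])
        result[term] = (field, count / total, total)   # (dominant field, concentration, frequency)
    return result

def extract_fts(term_field_counts, min_concentration=0.8, min_frequency=5):
    """Keep terms that are both frequent and concentrated in a single field."""
    stats = field_concentration(term_field_counts)
    return {t: s for t, s in stats.items()
            if s[1] >= min_concentration and s[2] >= min_frequency}

# Toy usage with per-field term counts from a pre-classified corpus.
counts = {"pitcher": Counter({"baseball": 40, "politics": 1}),
          "election": Counter({"politics": 30, "baseball": 2}),
          "today": Counter({"baseball": 20, "politics": 22})}
print(extract_fts(counts))    # 'today' is filtered out: it is not field-associated
```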

Determination of Optimal Reservoir Locations Using Multi-Objective Genetic Algorithm (다목적 최적화 알고리즘의 적용을 통한 우수저류조 최적 설치지점 선정기법의 제안)

  • Park, Cheong-Hoon;Hoa, Ho Van;Lee, Seung-Yub;Kim, Joong-Hoon
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2012.05a
    • /
    • pp.637-637
    • /
    • 2012
  • This study proposes a way to maximize the inundation reduction achieved by effectively installing stormwater storage tanks to mitigate urban inundation. Here, effectiveness has two aspects: maximizing the inundation reduction and minimizing the cost. For the optimal installation of disaster-prevention facilities, it is not meaningful simply to present the alternative with the largest reduction per unit installation cost; the optimal alternative must provide at least a required level of prevention performance within a given budget, so the two objectives of minimizing cost and maximizing the inundation reduction, that is, the reduction of overflow at manholes, must be achieved simultaneously. Therefore, this study proposes a method for selecting optimal storage tank locations by applying a multi-objective optimization algorithm. A genetic algorithm, regarded as effective in searching for optimal solutions of the objective functions, was adopted as the multi-objective optimization method. In multi-objective optimization, the quality of a solution is determined not by the real fitness value of each objective function but by a rank assigned according to the relative dominance or non-dominance of the solutions; here the ranking method proposed by Fonseca and Fleming (1993) was applied to determine fitness. The EPA-SWMM 5.0 engine provided by the US Environmental Protection Agency (EPA) was used to analyze the urban storm sewer network and the overflow resulting from storage tank installation, and Visual C++ was linked with the SWMM DLL to build the optimization algorithm. The applicability of the method was examined for the Cheongna district (section 3) in Incheon, and the cost function for storage tank installation was determined by converting the construction cost per storage volume proposed by EPA (2002) into Korean won and taking the officially assessed land prices of the Cheongna district into account. As a result of applying the optimization method, the combinations of optimal storage tank locations (the Pareto front) that maximize the overflow reduction for each installation cost could be determined.
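A minimal sketch of the Fonseca and Fleming (1993) ranking mentioned in the abstract, which assigns each solution a rank of one plus the number of solutions that dominate it; here both objectives (installation cost and remaining overflow) are assumed to be minimized, and the candidate values are toy numbers.

```python
import numpy as np

def dominates(a, b):
    """True if solution a dominates b (both objectives to be minimized)."""
    return np.all(a <= b) and np.any(a < b)

def fonseca_fleming_rank(objectives):
    """Rank of each solution = 1 + number of solutions that dominate it (Fonseca & Fleming, 1993)."""
    n = len(objectives)
    ranks = np.ones(n, dtype=int)
    for i in range(n):
        for j in range(n):
            if i != j and dominates(objectives[j], objectives[i]):
                ranks[i] += 1
    return ranks

# Toy usage: objective 1 = installation cost, objective 2 = remaining overflow volume
# (both minimized), one row per candidate combination of storage tank locations.
objs = np.array([[3.0, 8.0], [5.0, 4.0], [4.0, 6.0], [6.0, 5.0], [2.0, 9.0]])
print(fonseca_fleming_rank(objs))   # rank-1 solutions form the current non-dominated front
```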

Missing Data Correction and Noise Level Estimation of Observation Matrix (관측행렬의 손실 데이터 보정과 잡음 레벨 추정 방법)

  • Koh, Sung-shik
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.3
    • /
    • pp.99-106
    • /
    • 2016
  • This paper discusses a method for correcting missing data in a noisy observation matrix and an uncertainty analysis for the potential noise. When an observation matrix has no missing data, the solution is known to be obtained accurately by singular value decomposition (SVD). In practice, however, several entries of the observation matrix are unobserved and the remaining entries are perturbed by noise; in this case it is difficult to find the solution, and 3D reconstruction errors arise. Therefore, to minimize the 3D reconstruction error, it is above all necessary to correct the missing data reliably under the noise distribution and to evaluate the corrected results quantitatively. This paper focuses on a method that corrects missing data using the geometric relation between the 2D projected object and the 3D reconstructed shape, and that estimates the noise level of the observation matrix from the SVD ranks in order to evaluate the performance of the correction algorithm quantitatively.
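For context, the sketch below fills missing entries of an observation matrix with a generic iterative rank-constrained SVD imputation; this is not the authors' method, which exploits the geometric relation between the 2D projections and the 3D shape, and the rank, iteration count, and toy data are assumptions.

```python
import numpy as np

def svd_impute(W, mask, rank=4, n_iter=50):
    """Fill missing entries of an observation matrix with a rank-constrained SVD.

    W    : observation matrix with arbitrary values at missing positions
    mask : boolean matrix, True where an entry was actually observed
    rank : target rank (e.g. 4 for an affine structure-from-motion observation matrix)
    """
    X = np.where(mask, W, np.mean(W[mask]))               # initialize missing entries
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]       # best rank-r approximation
        X = np.where(mask, W, X_low)                       # keep observed entries fixed
    return X

# Toy usage: a rank-2 matrix with noise and a few missing entries.
rng = np.random.default_rng(2)
A = rng.normal(size=(8, 2)) @ rng.normal(size=(2, 6)) + 0.01 * rng.normal(size=(8, 6))
mask = rng.uniform(size=A.shape) > 0.2                    # roughly 20% of entries missing
print(np.round(svd_impute(A, mask, rank=2) - A, 3))       # small residuals at recovered entries
```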

A Ranking Method for Improving Performance of Entropy Coding in Gray-Level Images (그레이레벨 이미지에서의 엔트로피 코딩 성능 향상을 위한 순위 기법)

  • You, Kang-Soo;Sim, Chun-Bo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.4
    • /
    • pp.707-715
    • /
    • 2008
  • This paper proposes an algorithm for efficient compression of gray-level images with an entropy encoder. The key idea is to replace the original gray-level data with rank data. Before encoding the stream of gray-level values in an image, the proposed method counts co-occurrence frequencies of neighboring pixel values; each gray value is then replaced by its rank according to these co-occurrence frequencies, and the rank values are passed to an entropy encoder. By transforming the original gray-level values into a rank-based image using the statistical co-occurrence frequencies, the method improves the performance of existing entropy coding. Simulation results on 8-bit gray-level images show that the proposed method can reduce the bit rate by up to 37.85% compared to conventional entropy coders.
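A simplified sketch of the rank transform described in the abstract: co-occurrence frequencies of (left neighbor, current value) pairs are counted, and each pixel is replaced by the rank of its value within its left-neighbor context. The exact neighborhood and the handling of border pixels in the paper may differ; this version is only illustrative.

```python
import numpy as np

def rank_transform(img):
    """Replace each pixel by the rank of its value among values following its left neighbor.

    Ranking by descending co-occurrence frequency skews the symbol distribution toward
    small values, which is what helps the downstream entropy coder.
    """
    H, W = img.shape
    cooc = np.zeros((256, 256), dtype=np.int64)
    for r in range(H):
        for c in range(1, W):
            cooc[img[r, c - 1], img[r, c]] += 1
    # For every context value, order candidate values by descending co-occurrence frequency.
    rank_of = np.empty((256, 256), dtype=np.int64)
    for ctx in range(256):
        order = np.argsort(-cooc[ctx], kind="stable")
        rank_of[ctx, order] = np.arange(256)
    out = img.astype(np.int64).copy()                    # first column is left unchanged here
    for r in range(H):
        for c in range(1, W):
            out[r, c] = rank_of[img[r, c - 1], img[r, c]]
    return out

img = np.random.default_rng(3).integers(0, 256, size=(16, 16))
ranked = rank_transform(img)
print(ranked.min(), ranked.max())
```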

Ambient modal identification of structures equipped with tuned mass dampers using parallel factor blind source separation

  • Sadhu, A.;Hazraa, B.;Narasimhan, S.
    • Smart Structures and Systems
    • /
    • v.13 no.2
    • /
    • pp.257-280
    • /
    • 2014
  • In this paper, a novel PARAllel FACtor (PARAFAC) decomposition based Blind Source Separation (BSS) algorithm is proposed for modal identification of structures equipped with tuned mass dampers. Tuned mass dampers (TMDs) are extremely effective vibration absorbers in tall flexible structures, but are prone to de-tuning due to accidental changes in structural properties, alterations in operating conditions, and incorrect design forecasts. The presence of closely spaced modes in structures coupled with TMDs renders output-only modal identification difficult. Over the last decade, second-order BSS algorithms have shown significant promise in ambient modal identification. These methods employ joint diagonalization of covariance matrices of measurements to estimate the mixing matrix (mode shape coefficients) and sources (modal responses). Recently, the PARAFAC BSS model has evolved as a powerful multi-linear algebra tool for decomposing an $n^{th}$ order tensor into a number of rank-1 tensors, and this method is utilized in the context of modal identification in the present study. Covariance matrices of the measurements at several lags are used to form a $3^{rd}$ order tensor, and PARAFAC decomposition is then employed to obtain the desired number of components, comprising the modal responses and the mixing matrix. The strong uniqueness properties of PARAFAC models enable direct source separation with fine spectral resolution even in cases where the number of sensor observations is smaller than the number of target modes, i.e., the underdetermined case. This capability is exploited to separate closely spaced modes of the TMDs using partial measurements, and subsequently to estimate the modal parameters. The proposed method is validated using extensive numerical studies comprising multi-degree-of-freedom simulation models equipped with TMDs, as well as an experimental set-up.
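A minimal sketch of the tensor construction and decomposition described above, assuming the TensorLy library is available: lagged covariance matrices of the measurements are stacked into a third-order tensor and factored by PARAFAC, with the first-mode factor playing the role of the mixing (mode shape) matrix. The paper's full identification pipeline, including the underdetermined case and modal parameter estimation, is not reproduced, and the toy signals are illustrative.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

def lagged_covariance_tensor(X, lags):
    """Stack covariance matrices of the measurements at several time lags into a 3rd-order tensor."""
    n, N = X.shape
    covs = [X[:, :N - tau] @ X[:, tau:].T / (N - tau) for tau in lags]
    return tl.tensor(np.stack(covs, axis=2))          # shape: (channels, channels, lags)

# Toy data: 2 damped sinusoidal "modal responses" mixed into 4 sensor channels.
rng = np.random.default_rng(4)
t = np.linspace(0, 20, 4000)
sources = np.vstack([np.exp(-0.05 * t) * np.sin(2 * np.pi * 1.0 * t),
                     np.exp(-0.03 * t) * np.sin(2 * np.pi * 1.2 * t)])
mixing = rng.normal(size=(4, 2))                      # plays the role of the mode-shape matrix
X = mixing @ sources + 0.01 * rng.normal(size=(4, len(t)))

T = lagged_covariance_tensor(X, lags=range(1, 30))
weights, factors = parafac(T, rank=2)                 # recent TensorLy returns (weights, factors)
print(factors[0].shape)                               # (4, 2): columns estimate the mixing matrix
```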