• Title/Abstract/Keyword: Database Algorithm

Search results: 1,648 items (processing time: 0.035 s)

ABRN:주문형 멀티미디어 데이터 베이스 서비스 시스템을 위한 버퍼 교체 알고리즘 (ABRN:An Adaptive Buffer Replacement for On-Demand Multimedia Database Service Systems)

  • 정광철;박웅규
    • 한국정보처리학회논문지 / Vol. 3 No. 7 / pp.1669-1679 / 1996
  • In this paper, we address the problem of how to replace buffers in multimedia database systems with time-varying skewed data access. The access pattern in a multimedia database system that supports audio-on-demand and video-on-demand services is generally skewed toward a few popular objects. In addition, the access pattern of the skewed objects has a time-varying property. In such situations, our analysis indicates that the conventional LRU (Least Recently Used) and LFU (Least Frequently Used) schemes for buffer replacement are not well suited. We propose a new buffer replacement algorithm (ABRN: Adaptive Buffer Replacement using Neural Networks) that uses a neural network for multimedia database systems with time-varying skewed data access. The major role of the neural network is to classify multimedia objects into two classes: a hot set frequently accessed with great popularity and a cold set randomly accessed with low popularity. For the classification, the inter-arrival time values of sample objects are employed to train the neural network. Our algorithm partitions the buffers into two regions to combine the best properties of LRU and LFU. One region, which contains the hot set objects, is managed by LFU replacement, and the other region, which contains the cold set objects, is managed by LRU replacement. We performed simulation experiments in an actual environment with time-varying skewed data access to compare our algorithm to LRU, LFU, and LRU-k, a variation of LRU. Simulation results indicate that the proposed algorithm provides better performance than the other algorithms. The good performance of the neural network-based replacement scheme means that this new approach can also be suited as an alternative to existing page replacement and prefetching algorithms in virtual memory systems.
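
For readers who want to see the mechanism concretely, the two-region scheme described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: the neural-network classifier is replaced by a simple smoothed inter-arrival-time threshold, and the names `TwoRegionBuffer`, `hot_threshold`, and `load_fn` are hypothetical.

```python
from collections import OrderedDict

class TwoRegionBuffer:
    """Sketch of a hot/cold split buffer: LFU for hot objects, LRU for cold ones."""

    def __init__(self, hot_slots, cold_slots, hot_threshold):
        self.hot_slots = hot_slots
        self.cold_slots = cold_slots
        self.hot_threshold = hot_threshold  # smoothed inter-arrival time below this => "hot"
        self.hot = {}                       # obj_id -> (access_count, payload), managed by LFU
        self.cold = OrderedDict()           # obj_id -> payload, ordered by recency (LRU)
        self.last_seen = {}                 # obj_id -> timestamp of previous access
        self.mean_gap = {}                  # obj_id -> smoothed inter-arrival time

    def _classify(self, obj_id, now):
        # Stand-in for the paper's neural-network classifier: an exponentially
        # smoothed inter-arrival time decides hot vs. cold membership.
        prev_seen = self.last_seen.get(obj_id)
        gap = (now - prev_seen) if prev_seen is not None else 10 * self.hot_threshold
        prev = self.mean_gap.get(obj_id, gap)
        self.mean_gap[obj_id] = 0.8 * prev + 0.2 * gap
        self.last_seen[obj_id] = now
        return self.mean_gap[obj_id] < self.hot_threshold

    def access(self, obj_id, load_fn, now):
        is_hot = self._classify(obj_id, now)
        if obj_id in self.hot:
            count, payload = self.hot[obj_id]
            self.hot[obj_id] = (count + 1, payload)
            return payload
        if obj_id in self.cold:
            self.cold.move_to_end(obj_id)          # LRU: mark as most recently used
            return self.cold[obj_id]
        payload = load_fn(obj_id)                  # buffer miss: fetch from storage
        if is_hot:
            if len(self.hot) >= self.hot_slots:    # LFU eviction in the hot region
                victim = min(self.hot, key=lambda k: self.hot[k][0])
                del self.hot[victim]
            self.hot[obj_id] = (1, payload)
        else:
            if len(self.cold) >= self.cold_slots:  # LRU eviction in the cold region
                self.cold.popitem(last=False)
            self.cold[obj_id] = payload
        return payload
```

Objects are placed into a region only when they are brought in on a buffer miss; a fuller implementation would also migrate an object between regions when its classification changes.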

유전자 알고리즘을 이용한 최적의 분산 데이터베이스 시스템 설계 (The Optimal Distributed Database System Design Using the Genetic Algorithm)

  • 고석범;윤성대
    • 한국정보처리학회논문지 / Vol. 7 No. 9 / pp.2797-2806 / 2000
  • With the recent surge in information network users, DDS (Distributed Database Systems) have been implemented over VANs (Value Added Networks). In a geographically distributed work environment, a DDS has several advantages over a centralized database, but an ill-conceived design leads to higher costs through inefficient use of computer and network resources and to greater complexity in maintaining the data. When designing a DDS, two problems are central: selecting an appropriate computer at each site and allocating fragmented data to appropriate sites. Because computer selection and data-file allocation over a VAN must be decided with the trade-off between waited response time and investment cost in mind, this paper considers the interplay of these two objective functions as affected by the allocation of each computer and file. In particular, the design is based on an M/M/1 queueing system to evaluate the response waiting time more realistically. The proposed design model searches for efficient solutions by applying the genetic algorithm, a heuristic search method, and simulation experiments and analyses of the results are performed to examine the performance of the proposed mathematical model and algorithm.
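
As a rough illustration of the search procedure described above, the sketch below encodes a fragment-to-site assignment as a chromosome and scores it with a weighted sum of an M/M/1 waiting-time term and an investment-cost term. All arrival rates, service rates, cost tables, weights, and GA settings are invented for the example; they are not the paper's model or data.

```python
import random

N_FRAGMENTS, N_SITES = 12, 4
random.seed(0)

# Illustrative problem data (assumptions, not the paper's figures).
arrival_rate = [random.uniform(1.0, 4.0) for _ in range(N_FRAGMENTS)]  # requests/s per fragment
service_rate = [10.0, 12.0, 8.0, 15.0]                                 # requests/s per site
site_cost = [5.0, 7.0, 4.0, 9.0]                                       # investment cost per site used
WEIGHT_TIME, WEIGHT_COST = 1.0, 0.5

def fitness(assign):
    """Lower is better: M/M/1 waiting time per site plus investment cost of used sites."""
    total_wait = 0.0
    for s in range(N_SITES):
        lam = sum(arrival_rate[f] for f in range(N_FRAGMENTS) if assign[f] == s)
        mu = service_rate[s]
        if lam >= mu:                               # unstable queue: heavily penalize
            return float("inf")
        if lam > 0:
            total_wait += lam / (mu * (mu - lam))   # M/M/1 mean waiting time in queue at site s
    invest = sum(site_cost[s] for s in set(assign))
    return WEIGHT_TIME * total_wait + WEIGHT_COST * invest

def crossover(a, b):
    cut = random.randrange(1, N_FRAGMENTS)
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.1):
    return [random.randrange(N_SITES) if random.random() < rate else g for g in chrom]

population = [[random.randrange(N_SITES) for _ in range(N_FRAGMENTS)] for _ in range(40)]
for _ in range(200):
    population.sort(key=fitness)
    parents = population[:10]                       # simple truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children

best = min(population, key=fitness)
print("best assignment:", best, "cost:", round(fitness(best), 4))
```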

Geolocation Spectrum Database Assisted Optimal Power Allocation: Device-to-Device Communications in TV White Space

  • Xue, Zhen;Shen, Liang;Ding, Guoru;Wu, Qihui
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 9 No. 12 / pp.4835-4855 / 2015
  • TV white space (TVWS) is showing promise to become the first widespread practical application of cognitive technology. In fact, regulators worldwide are beginning to allow access to the TV band for secondary users, provided that they access the geolocation database. Device-to-device (D2D) communication can improve spectrum efficiency, but large-scale D2D communication underlaying TVWS may generate undesirable interference to TV receivers and cause severe mutual interference. In this paper, we use an established geolocation database to investigate the power allocation problem, in order to maximize the total sum throughput of D2D links in TVWS while guaranteeing the quality-of-service (QoS) requirement for both D2D links and TV receivers. Firstly, we formulate an optimization problem based on the system model, which is nonconvex and intractable. Secondly, we use an effective approach to convert the original problem into a series of convex problems and we solve these problems using interior point methods that have polynomial computational complexity. Additionally, we propose an iterative algorithm based on the barrier method to locate the optimal solution. Simulation results show that the proposed algorithm has strong performance with high approximation accuracy for both small- and large-dimensional problems, and it is superior to both the active set algorithm and the genetic algorithm.
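
The following toy sketch illustrates the flavor of the optimization described above: maximize the D2D sum rate under per-link power caps and an aggregate interference cap at the TV receiver, using a log barrier with a crude finite-difference gradient ascent. The channel gains, noise power, and limits are made-up numbers, and this stand-in does not reproduce the paper's convex reformulation or interior-point solver.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 4                                         # number of D2D links
G = rng.uniform(0.05, 0.2, size=(K, K))       # cross-link channel gains (assumed values)
np.fill_diagonal(G, rng.uniform(0.8, 1.2, K)) # direct-link gains
g_tv = rng.uniform(0.01, 0.05, K)             # gains toward the TV receiver
noise, p_max, i_max = 0.1, 1.0, 0.05          # noise power, per-link cap, TV interference cap

def sum_rate(p):
    sinr = np.diag(G) * p / (noise + G @ p - np.diag(G) * p)
    return np.sum(np.log2(1.0 + sinr))

def barrier_objective(p, t):
    """Sum rate plus log barriers for 0 <= p <= p_max and the TV interference cap."""
    slack = np.concatenate([p, p_max - p, [i_max - g_tv @ p]])
    if np.any(slack <= 0):
        return -np.inf
    return sum_rate(p) + np.sum(np.log(slack)) / t

def solve(t_init=1.0, mu=5.0, outer=6, inner=300, step=1e-2, eps=1e-6):
    p = np.full(K, 0.1 * p_max)                # strictly feasible starting point
    t = t_init
    for _ in range(outer):                     # shrink the barrier weight each outer round
        for _ in range(inner):                 # crude finite-difference gradient ascent
            grad = np.array([
                (barrier_objective(p + eps * e, t) - barrier_objective(p - eps * e, t)) / (2 * eps)
                for e in np.eye(K)])
            p_new = p + step * grad
            if np.isfinite(barrier_objective(p_new, t)):
                p = p_new                      # accept only steps that stay feasible
        t *= mu
    return p

p_opt = solve()
print("powers:", np.round(p_opt, 3), "sum rate:", round(sum_rate(p_opt), 3))
```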

규칙기반 시스템에 사용되는 규칙 간소화 알고리즘 (The Rule Case Simplification Algorithm to be used in a Rule-Based System)

  • ;여정모
    • 정보처리학회논문지D / Vol. 17D No. 6 / pp.405-414 / 2010
  • A rule is a mapping in which a target value is determined by a combination of values of various business factors. An enterprise information system that models the business contains a great number of such rules, and a server system that implements them is called a rule-based system. Rule-based systems are implemented either with a rule-engine technique or directly on a database. Because the rule-engine technique has many drawbacks, most rule-based systems are built on a relational database. As the business grows larger and more complex, the number of distinct rule cases grows, which greatly increases time and cost, demands a large amount of storage, and often degrades execution speed. This study therefore proposes an algorithm that converts such a large set of rule cases into a simplified set of cases with the same effect. Tests applying the algorithm to various business rule data demonstrated that the number of data records can be reduced. Simplifying business rule data with this algorithm can improve the performance of rule-based systems implemented on a database.
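
A small sketch of the general idea of rule-case simplification (a generic merge, not necessarily the paper's algorithm): rules that agree on the outcome and on every business factor except one, and that jointly cover every value of that free factor, collapse into a single wildcard rule. The factor names, domains, and rule table below are illustrative assumptions.

```python
from itertools import groupby

# Illustrative rule table (assumed data): each rule maps a combination of
# business factors to a target value; '*' means "any value of this factor".
DOMAINS = {"grade": ["A", "B", "C"], "region": ["KR", "US"], "channel": ["web", "app"]}
ATTRS = list(DOMAINS)

rules = [
    ({"grade": g, "region": r, "channel": c}, "discount" if g == "A" else "normal")
    for g in DOMAINS["grade"] for r in DOMAINS["region"] for c in DOMAINS["channel"]
]

def simplify(rules):
    """Collapse groups that share an outcome, fix all factors but one, and
    cover that free factor's whole domain into a single '*' rule."""
    changed = True
    while changed:
        changed = False
        for free in ATTRS:
            fixed = lambda r: (tuple(r[0][a] for a in ATTRS if a != free), r[1])
            rules.sort(key=fixed)
            merged = []
            for key, grp in groupby(rules, key=fixed):
                grp = list(grp)
                values = {r[0][free] for r in grp}
                if values >= set(DOMAINS[free]) or values == {"*"}:
                    cond = dict(grp[0][0]); cond[free] = "*"
                    merged.append((cond, key[1]))
                    changed = changed or len(grp) > 1
                else:
                    merged.extend(grp)
            rules = merged
    return rules

compact = simplify(rules)
print(len(compact), "rules after simplification")   # the 12 original cases collapse to 3
for cond, outcome in compact:
    print(cond, "->", outcome)
```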

데이터베이스에서의 시간 시스템에 관한 연구 (A study of Time Management System in Data Base)

  • 최진탁
    • 산업경영시스템학회지 / Vol. 21 No. 48 / pp.185-192 / 1998
  • A new algorithm is proposed in this paper which efficiently performs joins in a temporal database. The main idea is to sort the smaller relation and to partition the larger relation, so the proposed algorithm reduces the cost of sorting the larger relation. To show the usefulness of the algorithm, its cost is analyzed with respect to the number of accesses to secondary storage and compared with that of the Sort-Merge algorithm. Through the comparisons, we present and verify the conditions under which the proposed algorithm always outperforms the Sort-Merge algorithm. The comparisons show that the proposed algorithm achieves a 10∼30% gain under those conditions.
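
A minimal sketch of the "sort the smaller relation, partition the larger one" idea for an interval-overlap (temporal) join, assuming a toy tuple layout of (id, payload, t_start, t_end); the partition size and schema are illustrative, and the paper's cost model and exact partitioning strategy are not reproduced here.

```python
from bisect import bisect_right

# Toy temporal tuples (assumed schema): (id, payload, t_start, t_end), closed intervals.
small = [(1, "r1", 0, 5), (2, "r2", 3, 9), (3, "r3", 10, 12)]
large = [(10, "s1", 4, 6), (11, "s2", 8, 15), (12, "s3", 1, 2), (13, "s4", 20, 25)]

def temporal_join(small, large, partition_size=2):
    """Sort only the smaller relation by start time; stream the larger relation
    in fixed-size partitions and probe the sorted side with binary search."""
    small_sorted = sorted(small, key=lambda r: r[2])
    starts = [r[2] for r in small_sorted]
    result = []
    for i in range(0, len(large), partition_size):        # partition of the larger relation
        for s in large[i:i + partition_size]:
            # Candidates: small tuples whose interval starts no later than s ends.
            hi = bisect_right(starts, s[3])
            for r in small_sorted[:hi]:
                if r[3] >= s[2]:                           # the intervals actually overlap
                    result.append((r[0], s[0], max(r[2], s[2]), min(r[3], s[3])))
    return result

for row in temporal_join(small, large):
    print(row)     # (small_id, large_id, overlap_start, overlap_end)
```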

뉴럴 네트워크 알고리즘을 이용한 비드 가시화 (Using Neural Network Algorithm for Bead Visualization)

  • 구창대;양형석;김중영;신상호
    • Journal of Welding and Joining / Vol. 31 No. 5 / pp.35-40 / 2013
  • In this paper, we propose a tangible virtual-reality representation method that uses a haptic device together with the morphological features of beads produced by flux cored arc welding (FCAW). Virtual reality has attracted growing interest as a way to reduce consumable materials and the risks of welding training, and virtual welding training is expected to maximize its benefits. The proposed method builds a database by varying input factors such as work angle, travelling angle, speed, and CTWD, and visualizes the bead from the optimal morphological feature information extracted with a neural network algorithm. The database was built without error by extracting data from an automatic welding robot, and the neural network was configured with the dataset that showed the highest accuracy over repeated verification. The bead is created in virtual reality from the extracted morphological feature information: using a bead generation algorithm and a calibration algorithm, the final bead shape is rendered and overlapped over time so that the generated bead matches the real bead shapes in the database. The greatest advantage of virtual welding training is that abundant data can be obtained for training evaluation. In this paper, we reproduce beads whose shapes closely resemble those produced by flux cored arc welding, thereby reducing the gap between virtual and real welding training. In addition, we confirm that a more effective evaluation system can maximize the educational benefit.
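
To make the pipeline concrete, the sketch below trains a tiny hand-rolled neural network that maps the four input factors named in the abstract (work angle, travel angle, travel speed, CTWD) to two bead-shape features. The training data are synthetic stand-ins for the robot-welder database, and the network size, trends, and feature choice (width, height) are assumptions, not the authors' trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the welding database (an assumed relationship, not real FCAW data):
# inputs are work angle, travel angle, travel speed, CTWD; outputs are bead width/height.
X = rng.uniform([70, 5, 20, 10], [110, 25, 60, 25], size=(400, 4))
Y = np.column_stack([
    0.15 * X[:, 3] + 120.0 / X[:, 2] + 0.02 * X[:, 0],   # made-up width trend
    0.05 * X[:, 3] + 60.0 / X[:, 2] + 0.01 * X[:, 1],    # made-up height trend
]) + rng.normal(0, 0.05, size=(400, 2))

# Standardize inputs/outputs so a plain MLP trains stably.
x_mu, x_sd = X.mean(0), X.std(0)
y_mu, y_sd = Y.mean(0), Y.std(0)
Xn, Yn = (X - x_mu) / x_sd, (Y - y_mu) / y_sd

# One hidden layer, tanh activation, trained by full-batch gradient descent.
H, lr = 16, 0.05
W1 = rng.normal(0, 0.5, (4, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 2)); b2 = np.zeros(2)

for epoch in range(2000):
    hidden = np.tanh(Xn @ W1 + b1)
    pred = hidden @ W2 + b2
    err = pred - Yn                                   # dLoss/dpred for 0.5 * MSE
    grad_W2 = hidden.T @ err / len(Xn)
    grad_b2 = err.mean(0)
    grad_hidden = (err @ W2.T) * (1 - hidden ** 2)    # backprop through tanh
    grad_W1 = Xn.T @ grad_hidden / len(Xn)
    grad_b1 = grad_hidden.mean(0)
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

def predict_bead(work_angle, travel_angle, speed, ctwd):
    x = (np.array([work_angle, travel_angle, speed, ctwd]) - x_mu) / x_sd
    y = np.tanh(x @ W1 + b1) @ W2 + b2
    return y * y_sd + y_mu                            # de-standardize to (width, height)

print("predicted (width, height):", np.round(predict_bead(90, 15, 40, 18), 3))
```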

Eigen Value 기반의 영상검색 기법 (Eigen Value Based Image Retrieval Technique)

  • 김진용;소운영;정동석
    • 정보기술과데이타베이스저널 / Vol. 6 No. 2 / pp.19-28 / 1999
  • Digital image and video libraries require new algorithms for the automated extraction and indexing of salient image features. Eigen values of an image provide one important cue for the discrimination of image content. In this paper we propose a new approach for automated content extraction that allows efficient database searching using eigen values. The algorithm automatically extracts eigen values from the covariance matrix of the image matrix. We demonstrate that the eigen values, which represent shape information, and the skewness of their distribution, which represents complexity, provide good image query response time while providing effective discriminability. We present the eigen value extraction and indexing techniques, and test the proposed search algorithm based on eigen values and their skewness on a database of 100 images.
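
A brief sketch of the retrieval idea described above, assuming grayscale images and placeholder data: compute the eigenvalues of each image's covariance matrix, add the skewness of their distribution as a complexity cue, and rank database images by distance to the query signature. The truncation to 16 eigenvalues and the distance weighting are illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(42)

def eigen_signature(image):
    """Eigenvalues of the image's covariance matrix plus skewness of their distribution."""
    cov = np.cov(image, rowvar=True)                 # treat each pixel row as one variable
    eig = np.sort(np.linalg.eigvalsh(cov))[::-1]     # symmetric matrix: real eigenvalues
    eig = eig / (eig.sum() + 1e-12)                  # normalize for scale invariance
    mu, sd = eig.mean(), eig.std() + 1e-12
    skewness = np.mean(((eig - mu) / sd) ** 3)
    return eig[:16], skewness                        # keep the top eigenvalues as the shape cue

def distance(sig_a, sig_b, w_skew=0.5):
    eig_a, sk_a = sig_a
    eig_b, sk_b = sig_b
    return np.linalg.norm(eig_a - eig_b) + w_skew * abs(sk_a - sk_b)

# Fake 64x64 grayscale "database" images (placeholders for a real image collection).
database = [rng.random((64, 64)) for _ in range(100)]
signatures = [eigen_signature(img) for img in database]

query = database[7] + rng.normal(0, 0.01, (64, 64))  # a slightly perturbed copy of image 7
q_sig = eigen_signature(query)
ranked = sorted(range(len(database)), key=lambda i: distance(q_sig, signatures[i]))
print("top-5 matches:", ranked[:5])                  # image 7 is expected to rank first
```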

Computing Post-translation Modification using FTMS

  • Shen, Wei;Sung, Wing-Kin;SZE, Siu Kwan
    • 한국생물정보학회:학술대회논문집 / 한국생물정보시스템생물학회 BIOINFO 2005 / pp.331-336 / 2005
  • Post-translational modification (PTM) discovery is an important problem in proteomics. In the past, PTMs were discovered with tandem mass spectrometry based on a 'bottom-up' strategy. However, such a strategy suffers from the problem of failing to discover all PTMs. Recently, owing to improvements in proteomic technology, Taylor et al. proposed database software to discover PTMs with a 'top-down' strategy using FTMS, which avoids the disadvantages of the 'bottom-up' approach. However, their proposed algorithm runs in exponential time, requires a database of proteins, and needs prior knowledge about PTM sites. In this paper, a new algorithm is proposed which can work without a protein database and can identify modifications in polynomial time. Moreover, no prior knowledge about PTM sites is needed.

데이터마이닝을 이용한 관측적 침하해석의 신뢰성 연구 (A Study on the Reliability of Observational Settlement Analysis Using Data Mining)

  • 우철웅;장병욱
    • 한국농공학회지 / Vol. 45 No. 6 / pp.183-193 / 2003
  • Most construction works on soft ground adopt instrumentation to manage the settlement and stability of the embankment. The rapid progress of information technologies and digital data acquisition in soft-ground instrumentation has led to a fast-growing amount of data. Although valuable information about the behaviour of the soft ground may be hidden in the data, most of the data are used only for managing settlement and stability. One of the critical issues in soft-ground instrumentation is long-term settlement prediction, for which several observational settlement analysis methods are used; however, the reliability of their results remains unclear. Knowledge could be discovered from a large volume of experience with observational settlement analysis. In this article, we present a database for storing settlement records and a data mining procedure. A large volume of knowledge about observational settlement prediction was collected from the database by applying a filtering algorithm and a knowledge discovery algorithm. Statistical analysis revealed that the reliability of observational settlement analysis depends on the stay duration and the estimated degree of consolidation.
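
For context on what an observational settlement analysis looks like in code, here is a sketch of one common observational technique, the hyperbolic fit, applied to made-up settlement records; the paper does not specify which observational methods it evaluates, so this is only a representative example under that assumption.

```python
import numpy as np

# Measured settlement records (assumed example data): elapsed days vs. settlement (cm).
t = np.array([30, 60, 90, 120, 150, 180, 240, 300], dtype=float)
s = np.array([12.0, 21.5, 28.0, 33.0, 36.5, 39.0, 42.5, 44.5])

# Hyperbolic observational method (a representative technique, not necessarily the
# paper's): model s(t) = t / (a + b*t), which is linear in the form t/s = a + b*t.
b, a = np.polyfit(t, t / s, 1)           # slope b and intercept a of the linearized fit
s_final = 1.0 / b                        # predicted ultimate settlement as t -> infinity
degree_of_consolidation = s[-1] / s_final

print(f"predicted final settlement: {s_final:.1f} cm")
print(f"estimated degree of consolidation at day {t[-1]:.0f}: {degree_of_consolidation:.0%}")
```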

Large scale word recognizer를 위한 음성 database - POW (The Speech Database for Large Scale Word Recognizer)

  • 임연자
    • 한국음향학회:학술대회논문집 / Proceedings of the 12th Speech Communication and Signal Processing Workshop (SCAS Vol. 12 No. 1) / pp.291-294 / 1995
  • This paper describes the POW algorithm and the POW set for a large-scale word recognizer obtained by running the algorithm. To build a speech database for a large-scale word recognizer, every possible phonological phenomenon must be included in the POW set. In addition, the distribution of phonological phenomena in the POW set should be similar to that of the population from which it is extracted. For this purpose, we propose a new algorithm that extracts a POW set with the following three properties: 1. it must include every phonological phenomenon occurring in the population; 2. it must consist of a minimal set of words; 3. the distribution of phonological phenomena in the POW set must be similar to that of the population. We extracted the 5,000 highest-frequency eojeol (word units) from a Korean text corpus of about three million eojeol and derived a Korean POW set from them.
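
A sketch of a greedy selection that reflects the three stated properties (cover every phenomenon, keep the word set small, stay close to the population's phenomenon distribution). The phenomenon labels, word inventory, and scoring rule are invented for illustration; the actual POW algorithm is not detailed in this abstract.

```python
from collections import Counter
import random

random.seed(3)

# Toy stand-ins: each candidate word carries a multiset of phonological phenomena;
# the real input would be the 5,000 high-frequency eojeol and their phenomena.
PHENOMENA = [f"ph{i}" for i in range(20)]
corpus_counts = Counter({p: random.randint(50, 500) for p in PHENOMENA})
total = sum(corpus_counts.values())
target_dist = {p: c / total for p, c in corpus_counts.items()}   # population distribution

words = {f"w{i}": Counter(random.sample(PHENOMENA, k=random.randint(1, 4)))
         for i in range(300)}

def distribution_gap(counts):
    """L1 distance between the selected set's phenomenon distribution and the target."""
    n = sum(counts.values()) or 1
    return sum(abs(counts[p] / n - target_dist[p]) for p in PHENOMENA)

selected, covered, acc = [], set(), Counter()
while covered < set(PHENOMENA):                    # property 1: cover every phenomenon
    def score(w):
        new = len(set(words[w]) - covered)         # property 2: prefer words adding coverage
        gap = distribution_gap(acc + words[w])     # property 3: stay close to the population
        return (new, -gap)
    best = max(words.keys() - set(selected), key=score)
    selected.append(best)
    covered |= set(words[best])
    acc += words[best]

print(len(selected), "words cover all", len(PHENOMENA), "phenomena;",
      f"distribution gap = {distribution_gap(acc):.3f}")
```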
