• Title/Summary/Keyword: 데이타베이스 공유 (database sharing)

Host information gathering using the traffic analysis (트래픽 분석을 이용한 호스트 정보 수집)

  • Lee, Hyun-Shin;Lee, Sang-Woo;Kim, Myung-Sup
    • Proceedings of the Korea Information Processing Society Conference / 2009.04a / pp.1202-1205 / 2009
  • This paper describes a methodology for collecting various kinds of information about an end host by analyzing the traffic that the host generates. We first propose a method for predicting the host's operating system from the SYN packet of the TCP 3-way handshake, and then a new method that classifies the host's network access as wired or wireless by analyzing the response-time distribution of the TCP connections it originates. Once a host has been analyzed, its information is recorded in a database so that it can easily be checked through the Web. In addition, when both wired and wireless traffic are observed from a single host, the system is designed to use this information to determine whether a wired/wireless router has been installed.
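  • Illustration (not from the paper): a minimal Python sketch of the two ideas in the abstract, passive OS guessing from SYN fields and wired/wireless classification from the spread of TCP response times. The signature table and the jitter threshold are assumptions for illustration only.

    # Hypothetical sketch, not the authors' code.
    from statistics import pstdev

    # Assumed passive-fingerprint table: (initial TTL, TCP window size) -> OS guess.
    SYN_SIGNATURES = {
        (64, 65535): "Linux/macOS-like",
        (128, 8192): "Windows-like",
        (255, 4128): "network device",
    }

    def guess_os(ttl: int, window: int) -> str:
        """Round the observed TTL up to a common initial TTL, then look it up."""
        initial_ttl = min(t for t in (64, 128, 255) if t >= ttl)
        return SYN_SIGNATURES.get((initial_ttl, window), "unknown")

    def guess_access_type(rtts_ms: list, jitter_threshold: float = 15.0) -> str:
        """Wireless links usually show a wider response-time spread than wired ones."""
        return "wireless" if pstdev(rtts_ms) > jitter_threshold else "wired"

    print(guess_os(ttl=57, window=65535))           # -> Linux/macOS-like
    print(guess_access_type([2.1, 2.3, 2.0, 2.4]))  # -> wired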

An Efficient Snapshot Technique for Shared Storage Systems supporting Large Capacity (대용량 공유 스토리지 시스템을 위한 효율적인 스냅샷 기법)

  • 김영호;강동재;박유현;김창수;김명준
    • Journal of KIISE:Databases / v.31 no.2 / pp.108-121 / 2004
  • In this paper, we propose an enhanced snapshot technique that addresses the performance degradation that occurs when a snapshot is taken in a storage cluster system. Traditional snapshot techniques have several limitations when applied to large-capacity storage shared by multiple hosts: as the volume size grows, (1) write performance deteriorates sharply because of the additional disk accesses needed to verify whether copy-on-write (COW) has already been performed, (2) the blocking time of write operations issued during snapshot creation increases excessively, and (3) write performance further degrades because of the additional disk I/O on mapping blocks caused by the COW verification. We therefore propose an efficient snapshot technique for large-capacity storage shared by multiple hosts in SAN environments. We eliminate the blocking time of write operations caused by freezing the volume while a snapshot is being created. To improve write performance while a snapshot is in effect, we introduce a First Allocation Bit (FAB) and a Snapshot Status Bit (SSB), which reduce the additional disk accesses to the volume required to fetch snapshot mapping blocks. At snapshot deletion time, performance is further improved by deallocating COW data blocks using the SSB of the original mapping entry, without reading snapshot mapping blocks from the shared disk. We design and implement this snapshot technique.
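  • Illustration (not from the paper): a minimal sketch of a copy-on-write write path guarded by per-entry FAB/SSB flags, following my reading of the abstract; the actual on-disk layout and flag semantics in the paper are more involved.

    # Hypothetical sketch of the FAB/SSB idea; read_raw/write_raw are stubs.
    class MappingEntry:
        def __init__(self, block_no: int):
            self.block_no = block_no  # physical block backing this logical block
            self.fab = False          # First Allocation Bit: allocated after the snapshot
            self.ssb = False          # Snapshot Status Bit: COW already done for this snapshot

    class Volume:
        def __init__(self, nblocks: int):
            self.map = [MappingEntry(i) for i in range(nblocks)]
            self.snapshot_blocks = {}  # logical block -> preserved old data

        def write(self, lba: int, data: bytes):
            entry = self.map[lba]
            # If FAB or SSB is set, no COW (and no extra mapping-block read) is
            # needed: the block is either new or its old image is already saved.
            if not (entry.fab or entry.ssb):
                self.snapshot_blocks[lba] = self.read_raw(lba)  # copy-on-write
                entry.ssb = True
            self.write_raw(lba, data)

        def read_raw(self, lba): ...
        def write_raw(self, lba, data): ...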

High-Dimensional Image Indexing based on Adaptive Partitioning and Vector Approximation (적응 분할과 벡터 근사에 기반한 고차원 이미지 색인 기법)

  • Cha, Gwang-Ho;Jeong, Jin-Wan
    • Journal of KIISE:Databases / v.29 no.2 / pp.128-137 / 2002
  • In this paper, we propose the LPC+-file for efficient indexing of high-dimensional image data. With the proliferation of multimedia data, there is an increasing need to support the indexing and retrieval of high-dimensional image data. Recently, the LPC-file (5), which is based on vector approximation, was developed for indexing high-dimensional data. The LPC-file performs well, especially when the dataset is uniformly distributed, but its performance degrades when the dataset is clustered. We improve the performance of the LPC-file for strongly clustered image datasets. The basic idea is to adaptively partition the data space to find subspaces with high-density clusters and to assign more bits to them than to other subspaces, thereby increasing the discriminatory power of the vector approximations. The total number of bits used to represent the approximations is nevertheless smaller than in the LPC-file, since the partitioned cells in the LPC+-file share bits. An empirical evaluation shows that the LPC+-file yields significant performance improvements for real image datasets that are strongly clustered.
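  • Illustration (not from the paper): a short sketch of the underlying idea, giving more quantization bits where they discriminate better when approximating vectors; the real LPC+-file partitions the space adaptively and shares bits between cells, which this toy version does not do.

    # Toy bit-allocation and vector approximation; not the LPC+-file itself.
    import numpy as np

    def allocate_bits(data: np.ndarray, total_bits: int) -> np.ndarray:
        """Give higher-variance dimensions more bits (crude stand-in for density-based partitioning)."""
        var = data.var(axis=0) + 1e-12
        return np.maximum(1, np.round(var / var.sum() * total_bits)).astype(int)

    def approximate(vec, lo, hi, bits) -> list:
        """Quantize each coordinate into 2**bits[d] cells of its [lo, hi] range."""
        cells = 2 ** bits
        idx = np.floor((vec - lo) / (hi - lo + 1e-12) * cells).astype(int)
        return np.clip(idx, 0, cells - 1).tolist()

    rng = np.random.default_rng(0)
    data = rng.normal(size=(1000, 8)) * np.array([5, 1, 1, 1, 1, 1, 1, 1])
    bits = allocate_bits(data, total_bits=32)
    print(bits, approximate(data[0], data.min(0), data.max(0), bits))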

Using CORBA, Development of Distributed Database System Prototype for Schools (CORBA를 이용한 학교간 분산데이타베이스 시스템 프로토타입 개발)

  • 최현종;김태영
    • Proceedings of the Korean Information Science Society Conference / 2000.10a / pp.39-41 / 2000
  • The government's "educational informatization" policy has brought many changes to the computing environment of elementary and secondary schools. In particular, the computerization of student records has had many positive effects, since student information that used to be recorded and managed on paper is now stored in databases. However, because this system is a stand-alone database management system, the digitized student data can be used only within the school and is not shared at all with neighboring schools or with city and provincial offices of education. The purpose of this study is therefore to develop a distributed database management system based on a distributed object system, as one way for the database systems built at each school to be shared while keeping the hardware and software environment currently in use. However, since previous research shows that distributed object systems and distributed database systems are more complex and problematic than single systems, this prototype covers student information only.
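  • Illustration (not from the paper): the prototype exposes each school's existing student database through a distributed-object interface. The paper uses CORBA; since no standard CORBA binding ships with Python today, the sketch below substitutes xmlrpc as the remote-object layer, and all interface and field names are assumptions.

    # Stand-in for a CORBA servant: one school exposing student-record lookups.
    from xmlrpc.server import SimpleXMLRPCServer

    # The school keeps its existing local database; only a thin remote lookup
    # interface is exposed to other schools or offices of education.
    LOCAL_STUDENT_DB = {
        "2000-0001": {"name": "Hong Gildong", "grade": 3},
    }

    def get_student_record(student_id: str) -> dict:
        """Remote operation corresponding to a student-record lookup."""
        return LOCAL_STUDENT_DB.get(student_id, {})

    if __name__ == "__main__":
        server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
        server.register_function(get_student_record, "get_student_record")
        server.serve_forever()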

A Study on Mediterranean Tourism Impact Analysis by Specific Events using Photo Sharing Website (특정 사건에 따른 지중해 관광 영향 분석에 관한 연구 - 사진 공유 웹사이트를 기반으로)

  • Lee, Dong-Yul;Kang, Ji-Hoon;Moon, Sang-Ho
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.5 no.5 / pp.167-176 / 2015
  • Thanks to its variety of tourist attractions, the Mediterranean area is visited every year by many people from around the world. In this paper, we analyze the impact of specific events, such as the Arab Spring, on Mediterranean tourism. In detail, we build several density maps based on the position and time information of geo-tagged photos extracted from Panoramio, a representative photo sharing website, and use these density maps to analyze how specific events affect Mediterranean tourism. To do this, we first construct a spatial database using the geo-tag and time information of the photo data extracted from Panoramio. Using a GIS tool, several density maps are then produced by running density analysis over the spatial database. Finally, based on these density maps, we visually analyze the impact on Mediterranean tourism before, during, and after the Arab Spring.
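  • Illustration (not from the paper): a small Python sketch that bins geo-tagged photo records into a density grid per period, standing in for the spatial-database and GIS-tool pipeline described in the abstract; field names and the 0.5-degree cell size are assumptions.

    from collections import Counter
    from datetime import date

    def density_grid(photos, start: date, end: date, cell_deg: float = 0.5) -> Counter:
        """Count photos taken in [start, end] per (lat, lon) grid cell."""
        grid = Counter()
        for p in photos:  # p = {"lat": ..., "lon": ..., "taken": date}
            if start <= p["taken"] <= end:
                grid[(int(p["lat"] // cell_deg), int(p["lon"] // cell_deg))] += 1
        return grid

    photos = [
        {"lat": 36.4, "lon": 25.4, "taken": date(2010, 7, 1)},   # before the Arab Spring
        {"lat": 30.0, "lon": 31.2, "taken": date(2011, 3, 10)},  # during the Arab Spring
    ]
    before = density_grid(photos, date(2009, 1, 1), date(2010, 12, 31))
    during = density_grid(photos, date(2011, 1, 1), date(2012, 12, 31))
    print(before, during)  # compare per-cell counts across the two periods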

Extracting Maximal Similar Paths between Two XML Documents using Sequential Pattern Mining (순차 패턴 마이닝을 사용한 두 XML 문서간 최대 유사 경로 추출)

  • 이정원;박승수
    • Journal of KIISE:Databases / v.31 no.5 / pp.553-566 / 2004
  • Current research on XML techniques focuses mainly on storing XML documents, query optimization, and indexing. Here we focus instead on sets of documents with varied structures that do not share a common structure such as the same DTD or XML Schema. In this case, it is essential to analyze the structural similarities and differences among the documents. For example, when documents from the Web or an EDMS (Electronic Document Management System) need to be merged or classified, finding their common structure is very important for handling the documents. In this paper, we adapt sequential pattern mining algorithms (1) to extract maximal similar paths between two XML documents. Experiments with XML documents show that the adapted algorithms can exactly find the common structures and maximal similar paths between documents, and that similarity metrics based on maximal similar paths can exactly classify the types of XML documents.
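  • Illustration (not from the paper): the paper adapts sequential pattern mining; the sketch below approximates the same outcome on a toy scale by collecting root-to-node tag paths from two documents and keeping the maximal paths they share. Element names are made up.

    import xml.etree.ElementTree as ET

    def tag_paths(xml_text: str) -> set:
        """All root-to-node tag paths of a document, as tuples of tag names."""
        paths = set()
        def walk(elem, prefix):
            path = prefix + (elem.tag,)
            paths.add(path)
            for child in elem:
                walk(child, path)
        walk(ET.fromstring(xml_text), ())
        return paths

    def maximal_similar_paths(doc_a: str, doc_b: str) -> set:
        shared = tag_paths(doc_a) & tag_paths(doc_b)
        # Keep only shared paths that are not proper prefixes of another shared path.
        return {p for p in shared
                if not any(q != p and q[:len(p)] == p for q in shared)}

    a = "<lib><book><title/><author/></book></lib>"
    b = "<lib><book><title/><year/></book></lib>"
    print(maximal_similar_paths(a, b))  # -> {('lib', 'book', 'title')}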

Efficient Buffer Coherency Management for a Shared-Disk based Multiple-Server DBMS (공유 디스크 기반의 다중 서버 DBMS를 위한 효율적인 버퍼 일관성 관리)

  • Ko, Hyun-Sun;Kim, Yi-Reun;Lee, Min-Jae;Whang, Kyu-Young
    • Journal of KIISE:Databases / v.36 no.5 / pp.399-404 / 2009
  • In a multiple-server DBMS using the shared-disk model, when a server process updates data, the updates are not immediately reflected in the buffers of the other server processes, so those processes may read stale data. In this paper, we propose a novel method to solve this problem. In this method, when a transaction commits, the server process stores the identifiers and timestamps of the pages updated during the transaction in a coherency volume. Each server process then invalidates its buffered copies of pages updated by other server processes by consulting the coherency volume when a lock is acquired, and subsequently reads the up-to-date versions of those pages from disk. This method requires only a very small coherency volume and shows good performance because the amount of data that must be accessed is very small.
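  • Illustration (not from the paper): a minimal sketch of the coherency-volume idea described in the abstract, with page-id/timestamp publishing at commit and buffer invalidation at lock acquisition; class and field names are assumptions.

    import time

    class CoherencyVolume:
        """Very small shared area recording page_id -> last-update timestamp."""
        def __init__(self):
            self.updated = {}

        def publish(self, page_ids, ts):
            for pid in page_ids:
                self.updated[pid] = ts

    class ServerProcess:
        def __init__(self, coherency: CoherencyVolume):
            self.coherency = coherency
            self.buffer = {}  # page_id -> (data, cached_at)

        def commit(self, dirty_pages: dict):
            ts = time.time()
            self.buffer.update({pid: (d, ts) for pid, d in dirty_pages.items()})
            self.coherency.publish(dirty_pages.keys(), ts)  # done at commit time

        def read(self, page_id):
            # Called after the lock on the page is acquired: drop the cached copy
            # if another server published a newer timestamp, then re-read from disk.
            cached = self.buffer.get(page_id)
            remote_ts = self.coherency.updated.get(page_id, 0)
            if cached is None or cached[1] < remote_ts:
                data = self.read_from_disk(page_id)
                self.buffer[page_id] = (data, time.time())
                return data
            return cached[0]

        def read_from_disk(self, page_id):
            return f"<page {page_id} from shared disk>"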

An Efficient Query-based XML Access Control Enforcement Mechanism (효율적인 질의 기반 XML 접근제어 수행 메커니즘)

  • Byun, Chang-Woo;Park, Seog
    • Journal of KIISE:Databases / v.34 no.1 / pp.1-17 / 2007
  • As XML is becoming a de facto standard for the distribution and sharing of information, the need for efficient yet secure access to XML data has become very important. To enforce fine-grained access control, authorization models for regulating access to XML documents use XPath, a standard for addressing parts of XML data and a language well suited to query processing. Access control environments for XML documents, together with techniques for handling authorization priorities and conflict resolution, have been proposed. Despite this, relatively little work has been done on enforcing access control for XML databases in the case of query access. Developing an efficient mechanism for controlling query-based access to XML databases is therefore the central theme of this paper. We propose an efficient yet secure XML access control system. The basic idea is that a user query, together with only the necessary access control rules, is rewritten into an alternative form that is guaranteed to cause no access violations, using tree-aware metadata of the XML schemas and the set operators supported by XPath 2.0. The scheme can be applied to any XML database management system and has several advantages over previously suggested schemes, including ease of implementation, small execution-time overhead, fine-grained control, and safe and correct query modification. The experimental results clearly demonstrate the efficiency of the approach.
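  • Illustration (not from the paper): a string-level sketch of the query-rewriting idea, combining the user query with applicable rules through the XPath 2.0 set operators "intersect" and "except" so the rewritten query cannot reach denied nodes. The rule format and the example paths are assumptions.

    def rewrite_query(user_xpath: str, grants: list, denials: list) -> str:
        """Return an XPath 2.0 expression confined to the granted region."""
        allowed = " | ".join(f"({g})" for g in grants)  # union of granted subtrees
        rewritten = f"({user_xpath}) intersect ({allowed})"
        for denied in denials:
            rewritten = f"({rewritten}) except ({denied}/descendant-or-self::node())"
        return rewritten

    print(rewrite_query(
        user_xpath="//record/diagnosis",
        grants=["/hospital/record[@dept='cardiology']//*"],
        denials=["//record/payment"],
    ))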

GAGPC : An Algorithm to Optimize Multiple Continuous Queries on Data Streams (GAGPC : 데이타 스트림에 대한 다중 연속 질의의 최적화 알고리즘)

  • Suh Young-Kyoon;Son Jin-Hyun;Kim Myoung-Ho
    • Journal of KIISE:Databases / v.33 no.4 / pp.409-422 / 2006
  • In general, there can be many reusable intermediate results among multiple continuous queries (MCQ) on data streams, because of overlapping windows and periodic execution intervals. We therefore propose an efficient greedy algorithm for global query plan construction, called GAGPC. GAGPC first decides an execution cycle and finds the maximal set(s) of related execution points (SRP). Next, GAGPC constructs a global execution plan that lets the MCQ share the common join fragments with the highest benefit in each SRP. The algorithm reflects that the best plan for the same set of continuous queries may differ depending not only on the existence of common expressions but also on the size of the overlapping windows related to them, and, unlike previous work, it reuses partial as well as whole intermediate results. Finally, we show experimental results that validate GAGPC.
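  • Illustration (not from the paper): a greedy selection loop in the spirit of GAGPC, repeatedly picking the shareable join fragment with the highest benefit, where benefit is crudely modeled as the cost saved by computing the fragment once instead of once per query. Costs, fragment names, and the benefit formula are made up for illustration.

    def greedy_shared_plan(fragments: dict, queries: dict) -> list:
        """fragments: name -> cost; queries: query name -> set of fragment names."""
        chosen, remaining = [], dict(fragments)
        while remaining:
            # benefit = (number of queries using the fragment - 1) * fragment cost
            name, benefit = max(
                ((f, (sum(f in fs for fs in queries.values()) - 1) * c)
                 for f, c in remaining.items()),
                key=lambda x: x[1],
            )
            if benefit <= 0:
                break
            chosen.append(name)
            del remaining[name]
        return chosen

    fragments = {"A JOIN B": 10, "B JOIN C": 4, "C JOIN D": 7}
    queries = {"Q1": {"A JOIN B", "B JOIN C"}, "Q2": {"A JOIN B", "C JOIN D"}, "Q3": {"A JOIN B"}}
    print(greedy_shared_plan(fragments, queries))  # -> ['A JOIN B'], shared by Q1-Q3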

Linear Resource Sharing Method for Query Optimization of Sliding Window Aggregates in Multiple Continuous Queries (다중 연속질의에서 슬라이딩 윈도우 집계질의 최적화를 위한 선형 자원공유 기법)

  • Baek, Seong-Ha;You, Byeong-Seob;Cho, Sook-Kyoung;Bae, Hae-Young
    • Journal of KIISE:Databases / v.33 no.6 / pp.563-577 / 2006
  • A stream processor uses resource sharing to make efficient use of limited resources across multiple continuous queries. Previous methods process aggregate queries over a level structure, so an insert operation pays the cost of reconstructing the level structure, and a search operation pays the cost of looking up aggregate information for each sliding window size. This paper therefore uses a linear structure to optimize sliding window aggregates. The method consists of deciding, generating, and deleting panes in sequence. The decision phase determines the optimal pane size for holding accurate aggregate information; the generation phase stores, pane by pane, the aggregate information of the data taken from the stream buffer; and the deletion phase removes panes that are no longer used. Because it uses a linear data layout, the proposed method uses fewer resources than methods based on level structures. The insertion cost of aggregate information is reduced by computing aggregates only per pane even when a large volume of stream data arrives, and the search cost is reduced by linear scanning even when the sliding window sizes differ from one another. Experiments show that the proposed method has low memory usage and increased query processing speed.
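  • Illustration (not from the paper): a small sketch of the pane idea, aggregating the stream once per fixed-size pane and answering sliding windows of different sizes by combining whole panes; the pane size and the SUM aggregate are illustrative choices.

    from collections import deque

    class PaneAggregator:
        def __init__(self, pane_size: int, max_window: int):
            self.pane_size = pane_size
            self.current, self.count = 0, 0
            self.panes = deque(maxlen=max_window // pane_size)  # per-pane partial sums

        def insert(self, value: float):
            self.current += value
            self.count += 1
            if self.count == self.pane_size:  # close the pane
                self.panes.append(self.current)
                self.current, self.count = 0, 0

        def window_sum(self, window_size: int) -> float:
            """SUM over the last window_size tuples, rounded down to whole panes."""
            n = window_size // self.pane_size
            return sum(list(self.panes)[-n:])

    agg = PaneAggregator(pane_size=4, max_window=16)
    for v in range(1, 17):  # stream values 1..16
        agg.insert(v)
    print(agg.window_sum(8), agg.window_sum(16))  # -> 100 136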